Deploy new infrastructure
How to deploy Terraform code from the Service Catalog
There are three ways to use Terraform code from the Service Catalog:
- Using vanilla Terraform with the Service Catalog
- Using Terragrunt with the Service Catalog
- Using Terraform Cloud or Terraform Enterprise with the Service Catalog
Using vanilla Terraform with the Service Catalog
Below are the instructions for using the vanilla terraform
binary—that is, with no wrappers, extensions, or UI—to
deploy Terraform code from the Service Catalog. See
examples/for-learning-and-testing for working sample code.
Find a service. Browse the modules folder to find a service you wish to deploy. For this tutorial, we'll use the vpc service in modules/networking/vpc as an example.

Create a Terraform configuration. Create a Terraform configuration file, such as main.tf.

Configure the provider. Inside of main.tf, configure the Terraform providers for your chosen service. For vpc, and for most of the services in this Service Catalog, you'll need to configure the AWS provider:

provider "aws" {
  # The AWS region in which all resources will be created
  region = "eu-west-1"

  # Only these AWS Account IDs may be operated on by this template
  allowed_account_ids = ["111122223333"]
}

Configure the backend. You'll also want to configure the backend to use for storing Terraform state:
terraform {
  # Configure S3 as a backend for storing Terraform state
  backend "s3" {
    bucket         = "<YOUR S3 BUCKET>"
    region         = "eu-west-1"
    key            = "<YOUR PATH>/terraform.tfstate"
    encrypt        = true
    dynamodb_table = "<YOUR DYNAMODB TABLE>"
  }
}

Use the service. Now you can add the service to your code:
module "vpc" {
# Make sure to replace <VERSION> in this URL with the latest terraform-aws-service-catalog release from
# https://github.com/gruntwork-io/terraform-aws-service-catalog/releases
source = "git@github.com:gruntwork-io/terraform-aws-service-catalog.git//modules/networking/vpc?ref=<VERSION>"
# Fill in the arguments for this service
aws_region = "eu-west-1"
vpc_name = "example-vpc"
cidr_block = "10.0.0.0/16"
num_nat_gateways = 1
create_flow_logs = false
}Let's walk through the code above:
- Module. We pull in the code for the service using Terraform's native module keyword. For background info, see How to create reusable infrastructure with Terraform modules.
- Git / SSH URL. We recommend setting the source URL to a Git URL with SSH authentication (see module sources for other types of source URLs you can use). This will allow you to access the code in the Gruntwork Service Catalog using an SSH key for authentication, without having to hard-code credentials anywhere.
- Versioned URL. Note the ?ref=<VERSION> at the end of the source URL. This parameter allows you to pull in a specific version of each service so that you don’t accidentally pull in potentially backwards incompatible code in the future. You should replace <VERSION> with the latest version from the releases page.
- Arguments. Below the source URL, you’ll need to pass in the arguments specific to that service. You can find all the required and optional variables defined in variables.tf of the service (e.g., check out the variables.tf for vpc).
Add outputs. You may wish to add some output variables, perhaps in an outputs.tf file, that forward along some of the output variables from the service. You can find all the outputs defined in outputs.tf for the service (e.g., check out outputs.tf for vpc).

output "vpc_name" {
  description = "The VPC name"
  value       = module.vpc.vpc_name
}

output "vpc_id" {
  description = "The VPC ID"
  value       = module.vpc.vpc_id
}

output "vpc_cidr_block" {
  description = "The VPC CIDR block"
  value       = module.vpc.vpc_cidr_block
}

# ... Etc (see the service's outputs.tf for all available outputs) ...

Authenticate. You will need to authenticate to both AWS and GitHub:
AWS Authentication: See A Comprehensive Guide to Authenticating to AWS on the Command Line for instructions.
GitHub Authentication: All of Gruntwork's code lives in GitHub, and as most of the repos are private, you must authenticate to GitHub to be able to access the code. For Terraform, we recommend using Git / SSH URLs and using SSH keys for authentication. See Link Your GitHub ID for instructions on linking your GitHub ID and gaining access.
Deploy. You can now deploy the service as follows:
terraform init
terraform apply
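Once the service is deployed, other Terraform code in the same configuration can reference its outputs directly via the module's output attributes. For example, here's a minimal sketch of wiring the VPC ID into another resource (the aws_security_group resource and its name below are purely illustrative and not part of the Service Catalog):

# Illustrative only: reference the VPC created by the service from other resources
resource "aws_security_group" "example" {
  name   = "example-sg"
  vpc_id = module.vpc.vpc_id
}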
Using Terragrunt with the Service Catalog
Terragrunt is a thin, open source wrapper for Terraform that helps you keep your
code DRY. Below are the instructions for using the terragrunt
binary to deploy Terraform code from the Service Catalog. See examples/for-production for working
sample code.
First, we need to do some one-time setup. One of the ways Terragrunt helps you keep your code DRY is by allowing you to
define common configurations once in a root terragrunt.hcl
file and to include
those configurations in all child
terragrunt.hcl
files. The folder structure might look something like this:
terragrunt.hcl                   # root terragrunt.hcl
dev/
stage/
prod/
 └ eu-west-1/
    └ prod/
       └ vpc/
          └ terragrunt.hcl       # child terragrunt.hcl
Here's how you configure the root terragrunt.hcl:
Configure the provider. Inside of terragrunt.hcl, configure the Terraform providers for your chosen service. For vpc, and for most of the services in this Service Catalog, you'll need to configure the AWS provider. We'll do this using a generate block:

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
provider "aws" {
  # The AWS region in which all resources will be created
  region = "eu-west-1"

  # Only these AWS Account IDs may be operated on by this template
  allowed_account_ids = ["111122223333"]
}
EOF
}

Configure the backend. You'll also want to configure the backend to use for storing Terraform state. We'll do this using a remote_state block:

remote_state {
  backend = "s3"
  config = {
    bucket         = "<YOUR S3 BUCKET>"
    region         = "eu-west-1"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    encrypt        = true
    dynamodb_table = "<YOUR DYNAMODB TABLE>"
  }
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
}
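With this root configuration in place, Terragrunt generates a provider.tf and a backend.tf into each child module's working directory at runtime. As a rough sketch (the exact contents depend on your Terragrunt version), the backend.tf generated for the prod/eu-west-1/prod/vpc folder above would look something like this:

# Generated by the remote_state block in the root terragrunt.hcl (illustrative)
terraform {
  backend "s3" {
    bucket         = "<YOUR S3 BUCKET>"
    region         = "eu-west-1"
    key            = "prod/eu-west-1/prod/vpc/terraform.tfstate"
    encrypt        = true
    dynamodb_table = "<YOUR DYNAMODB TABLE>"
  }
}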
Now you can create child terragrunt.hcl files to deploy services as follows:
Find a service. Browse the modules folder to find a service you wish to deploy. For this tutorial, we'll use the vpc service in modules/networking/vpc as an example.

Create a child Terragrunt configuration. Create a child Terragrunt configuration file called terragrunt.hcl.

Include the root Terragrunt configuration. Pull in all the settings from the root terragrunt.hcl by using an include block:

include {
  path = find_in_parent_folders()
}

Use the service. Now you can add the service to your child terragrunt.hcl:

terraform {
  # Make sure to replace <VERSION> in this URL with the latest terraform-aws-service-catalog release from
  # https://github.com/gruntwork-io/terraform-aws-service-catalog/releases
  source = "git@github.com:gruntwork-io/terraform-aws-service-catalog.git//modules/networking/vpc?ref=<VERSION>"
}

# Fill in the arguments for this service
inputs = {
  aws_region       = "eu-west-1"
  vpc_name         = "example-vpc"
  cidr_block       = "10.0.0.0/16"
  num_nat_gateways = 1
  create_flow_logs = false
}

Let's walk through the code above:
- Module. We pull in the code for the service using Terragrunt's support for remote Terraform configurations.
- Git / SSH URL. We recommend setting the source URL to a Git URL with SSH authentication (see module sources for other types of source URLs you can use). This will allow you to access the code in the Gruntwork Service Catalog using an SSH key for authentication, without having to hard-code credentials anywhere.
- Versioned URL. Note the ?ref=<VERSION> at the end of the source URL. This parameter allows you to pull in a specific version of each service so that you don’t accidentally pull in potentially backwards incompatible code in the future. You should replace <VERSION> with the latest version from the releases page.
- Arguments. In the inputs block, you’ll need to pass in the arguments specific to that service. You can find all the required and optional variables defined in variables.tf of the service (e.g., check out the variables.tf for vpc).
Authenticate. You will need to authenticate to both AWS and GitHub:
AWS Authentication: See A Comprehensive Guide to Authenticating to AWS on the Command Line for instructions.
GitHub Authentication: All of Gruntwork's code lives in GitHub, and as most of the repos are private, you must authenticate to GitHub to be able to access the code. For Terraform, we recommend using Git / SSH URLs and using SSH keys for authentication. See How to get access to the Gruntwork Infrastructure as Code Library for instructions on setting up your SSH key.
Deploy. You can now deploy the service as follows:
terragrunt apply
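As you add more services, you'll often need to pass one service's outputs into another's inputs. Terragrunt's dependency block is a common way to do this. Here's a minimal sketch for a hypothetical child terragrunt.hcl of a service deployed into the VPC above (the ../vpc path and the vpc_id input name are illustrative; check the consuming service's variables.tf for its actual inputs):

# Hypothetical child terragrunt.hcl for a service that needs the VPC's outputs
include {
  path = find_in_parent_folders()
}

dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  vpc_id = dependency.vpc.outputs.vpc_id
}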
Using Terraform Cloud or Terraform Enterprise with the Service Catalog
(Documentation coming soon. If you need help with this ASAP, please contact support@gruntwork.io.)
How to build machine images using Packer templates from the Service Catalog
Some of the services in the Gruntwork Service Catalog require you to build an Amazon Machine Image (AMI) to install and configure the software that will run on EC2 instances. These services define and manage the AMI as code using Packer templates.
For example, the eks-workers
service defines an
eks-node-al2.pkr.hcl
Packer template that can be used to create an AMI
for the Kubernetes worker nodes. This Packer template uses the EKS optimized
AMI as its base, which already has Docker,
kubelet, and the AWS IAM Authenticator installed, and on top of that, it installs the other common software you'll
want on an EC2 instance in production, such as tools for gathering metrics, log aggregation, intrusion prevention,
and so on.
The Packer templates are provided as HCL files in each service module folder and follow the naming convention:

<servertype>-<os>.pkr.hcl
Below are instructions on how to build an AMI using these Packer templates. We'll be using the
eks-node-al2.pkr.hcl
Packer template as an example.
Check out the code. Run git clone git@github.com:gruntwork-io/terraform-aws-service-catalog.git to check out the code onto your own computer.

(Optional) Make changes to the Packer template. If you need to install custom software into your AMI—e.g., extra tools for monitoring or other server hardening tools required by your company—copy the Packer template into one of your own Git repos, update it accordingly, and make sure to commit the changes. Note that the Packer templates in the Gruntwork Service Catalog are designed to capture all the install steps in a single shell provisioner that uses the Gruntwork Installer to install and configure the software in just a few lines of code. We intentionally designed the templates this way so you can easily copy the Packer template, add all the custom logic you need for your use cases, and only have a few lines of versioned Gruntwork code to maintain to pull in all the Service Catalog logic.

Authenticate. You will need to authenticate to both AWS and GitHub:
AWS Authentication: See A Comprehensive Guide to Authenticating to AWS on the Command Line for instructions.
GitHub Authentication: All of Gruntwork's code lives in GitHub, and as most of the repos are private, you must authenticate to GitHub to be able to access the code. For Packer, you must use a GitHub personal access token set as the environment variable GITHUB_OAUTH_TOKEN for authentication:

export GITHUB_OAUTH_TOKEN=xxx

See How to get access to the Gruntwork Infrastructure as Code Library for instructions on setting up a GitHub personal access token.
Set variables. Each Packer template defines variables you can set in a variables block at the top, such as what AWS region to use, what VPC to use for the build, what AWS accounts to share the AMI with, etc. We recommend setting these variables in a JSON vars file and checking that file into version control so that you have a versioned history of exactly what settings you used when building each AMI. For example, here's what eks-vars.json might look like:

{
  "service_catalog_ref": "<VERSION>",
  "version_tag": "<TAG>"
}

This file defines two variables that are required by almost every Packer template in the Gruntwork Service Catalog:
- Service Catalog Version. You must replace <VERSION> with the version of the Service Catalog (from the releases page) you wish to use for this build. Specifying a specific version allows you to know exactly what code you're using and ensures you don’t accidentally pull in potentially backwards incompatible code in future builds.
- AMI Version. You must replace <TAG> with the version number to use for this AMI. The Packer build will add a version tag to the AMI with this value, which provides a more human-friendly and readable version number than an AMI ID that you can use to find and sort your AMIs. You'll want to use a different <TAG> every time you run a Packer build.
Build. Now you can build an AMI as follows:
packer build -var-file=eks-vars.json eks-node-al2.pkr.hcl
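For reference, the values in the vars file map onto variables declared at the top of the Packer template. As an illustrative sketch only (the actual declarations, types, and descriptions live in the template you're building, so check it directly), the declarations corresponding to eks-vars.json might look like this:

# Illustrative variable declarations matching eks-vars.json above (not the actual template contents)
variable "service_catalog_ref" {
  type        = string
  description = "The version of terraform-aws-service-catalog to install from"
}

variable "version_tag" {
  type        = string
  description = "The value for the version tag added to the resulting AMI"
}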
How to deploy newly built AMIs
Once you build the AMI, the next step is to deploy it to your infrastructure. Each service that requires an AMI offers two configuration inputs for selecting the AMI, and you must pick one:
- *_ami (e.g., the cluster_instance_ami input variable in the eks-workers module)
- *_ami_filters (e.g., the cluster_instance_ami_filters input variable in the eks-workers module)
The two approaches are mutually exclusive. When specifying both, *_ami
is always used and the input to
*_ami_filters
is ignored.
The *_ami input variable can be used to directly specify the AMI to use. When using this input, the value should be set to the AMI ID that is returned by the packer call. It should be in the format ami-<some_unique_hash>.
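For example, here's a minimal sketch of pinning the eks-workers service to a freshly built AMI (the AMI ID below is a placeholder; use the ID that your packer build run printed):

# Set in the module arguments (vanilla Terraform) or the inputs block (Terragrunt)
cluster_instance_ami = "ami-0123456789abcdef0"  # placeholder AMI ID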
The *_ami_filters
input variable takes in an AMI filter expression that can be used for dynamically looking up a
prebuilt AMI. The supported filters for the lookup can be obtained from the describe-images command
description in the AWS CLI reference. The
most commonly used filters will be:
- name: Filter by the name of the AMI. Note that this supports unix glob expressions (e.g., *-eks-node will match any image with the suffix -eks-node in the name). An example of a name-based lookup appears at the end of this section.
- tag:<key>: Filter by the given tag key. Note that <key> can be any tag key that is applied to the AMI. For example, to search for AMIs with the tag service-catalog-module = eks-workers, you can use the following filter:

cluster_instance_ami_filters = {
  owners = ["self"]
  filters = [{
    name   = "tag:service-catalog-module"
    values = ["eks-workers"]
  }]
}

Note that tags are only visible in the account that owns the AMI. Even if you share the AMI in the Packer template, the AMI tags will not be visible when you query it from a shared account.
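And here's a minimal sketch of a name-based lookup using a glob pattern (the *-eks-node pattern and the owners value are illustrative; match them to the AMI names your Packer builds actually produce):

cluster_instance_ami_filters = {
  owners = ["self"]
  filters = [{
    name   = "name"
    values = ["*-eks-node"]
  }]
}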