This release bumps the version of the ALB module used by Jenkins to v0.20.1 to fix an issue related to outputs from the ALB module.
Migration guide
The jenkins-server module no longer takes the aws_account_id variable. To update to this release, remove that variable from your inputs.
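For example, a minimal sketch of the change (the module source path and ref are illustrative):

```hcl
module "jenkins" {
  # Illustrative source path; pin to the release you actually use.
  source = "git::git@github.com:gruntwork-io/module-ci.git//modules/jenkins-server?ref=<release>"

  # aws_account_id = "111122223333"  # Delete this input; the module no longer accepts it.

  # ... all other inputs stay the same ...
}
```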
The infrastructure-deployer now supports selecting the container to run in a multi-container deployment for the ecs-deploy-runner. Note that this version of the infrastructure-deployer is only compatible with an ecs-deploy-runner deployed from this release.
The infrastructure-deploy-script now supports running destroy. Note that the threat model of running destroy from the CI/CD pipeline has not been fully thought through, so doing so is not recommended. Instead, directly call the ECS task to run destroy using privileged credentials.
ecs-deploy-runner now supports specifying multiple container images and choosing a container image based on a user-defined name. This allows you to configure and use different Docker containers for different purposes in your infrastructure pipeline.
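As a hedged sketch only, this is roughly what the configuration could look like; the container_images input name and shape are assumptions here, so check the module's variables.tf for the real interface:

```hcl
module "ecs_deploy_runner" {
  # Illustrative source path.
  source = "git::git@github.com:gruntwork-io/module-ci.git//modules/ecs-deploy-runner?ref=<release>"

  # Assumed shape: a map from a user-defined name to the Docker image to use
  # for that part of the pipeline.
  container_images = {
    "docker-builder"    = "111122223333.dkr.ecr.us-east-1.amazonaws.com/docker-builder:v1"
    "terraform-planner" = "111122223333.dkr.ecr.us-east-1.amazonaws.com/terraform-planner:v1"
  }
}
```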
The CLI arg for setting the log level in infrastructure-deployer and infrastructure-deploy-script has been renamed from --loglevel to --log-level.
The infrastructure-deploy-script no longer supports passing in the private SSH key via CLI args. You must pass it in with the environment variable DEPLOY_SCRIPT_SSH_PRIVATE_KEY.
install-jenkins will automatically disable the Jenkins service so that it won't start on boot. This ensures that Jenkins will not start until it has been successfully configured with run-jenkins. To get the previous behavior, pass in --module-param "run-on-boot=true".
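For example, assuming you install the module with gruntwork-install (the repo and tag values are illustrative):

```bash
gruntwork-install \
  --module-name "install-jenkins" \
  --repo "https://github.com/gruntwork-io/module-ci" \
  --tag "<this release>" \
  --module-param "run-on-boot=true"  # restore the old start-on-boot behavior
```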
You can now enable cross-region replication for Aurora by setting source_region and replication_source_identifier to the region and ARN, respectively, of a primary Aurora DB.
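For example, here is a sketch of a replica cluster in a secondary region; the module path and other inputs are illustrative, while source_region and replication_source_identifier are the new inputs:

```hcl
module "aurora_replica" {
  # Illustrative source path.
  source = "git::git@github.com:gruntwork-io/module-data-storage.git//modules/aurora?ref=<release>"

  # Region and cluster ARN of the primary Aurora DB to replicate from:
  source_region                 = "us-east-1"
  replication_source_identifier = "arn:aws:rds:us-east-1:111122223333:cluster:aurora-primary"

  # ... remaining inputs as for any Aurora cluster ...
}
```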
The eks-cluster-control-plane module now outputs the cluster security group ID so that you can extend it with additional rules.
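For example, assuming the new output is named eks_cluster_security_group_id (check the module's outputs.tf for the exact name), an extra rule could be attached like this:

```hcl
resource "aws_security_group_rule" "allow_office_to_api" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["10.0.0.0/16"]  # e.g., your office or VPN CIDR
  security_group_id = module.eks_cluster.eks_cluster_security_group_id
}
```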
The eks-cluster-workers module now appends the cluster security group to the worker nodes instead of rolling its own group by default. Note that it still creates its own group to make it easier to append rules that are specific to the self-managed workers.
This release also fixes a bug in the eks-k8s-role-mapping module, which previously did not support including the Fargate execution role. If you don't include the Fargate execution role in the mapping, Terraform may delete the mappings that allow Fargate to communicate with the Kubernetes API as workers.
eks-k8s-role-mapping is now a pure Terraform module and no longer uses Python to assist in generating the role mapping. Note that this will cause a diff in the Terraform state due to some of the attributes being reorganized, but the configuration is semantically equivalent (thus the rollout is backwards compatible).
eks-cluster-control-plane module will now automatically download and install kubergrunt if it is not available in the target system. This behavior can be disabled by setting the input variable auto_install_kubergrunt to false.
This release also includes several documentation fixes to READMEs of various modules.
The lambda module is now more robust to partial failures. Previously you could end up in a state where you couldn't apply or destroy the module if it had only partially applied its resources due to output errors. This release addresses that by changing the output logic.
Note that this module previously output null for all outputs when create_resources was false. With this release, those outputs are converted to "". If you depended on the null outputs, you will need to adjust your code to check against "" instead of null.
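For example, a downstream null check would change like this (function_arn is a hypothetical output name used for illustration):

```hcl
locals {
  # Before this release, outputs were null when create_resources = false:
  #   lambda_exists = module.lambda.function_arn != null

  # After this release, compare against the empty string instead:
  lambda_exists = module.lambda.function_arn != ""
}
```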
ALB outputs have been adjusted to use for expressions instead of zipmap for the listener port => cert ARN mapping. This works around an obscure Terraform bug that is not yet fixed/released.
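Concretely, the change is roughly the following (variable and output names are illustrative):

```hcl
# Old form, built with zipmap:
#   value = zipmap(var.listener_ports, var.certificate_arns)

# New, equivalent form built with a for expression:
output "listener_ports_to_cert_arns" {
  value = {
    for i, port in var.listener_ports :
    port => var.certificate_arns[i]
  }
}
```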
The install.sh scripts for the cloudwatch-log-aggregation-scripts, syslog, and cloudwatch-memory-disk-metrics-scripts modules were unnecessarily using eval to execute scripts used in the install steps. This led to unexpected behavior, such as --module-param arguments being shell expanded. We've removed the calls to eval and replaced them with straight calls to the underlying scripts.
This release is marked as backwards incompatible, but this only applies if you were (intentionally or otherwise) relying on the eval behavior (which is not likely or recommended!).
kms-master-key now supports configuring service principal permissions with conditions. As part of this change, the way CloudTrail is set up in the Landing Zone modules has been updated to better support multi-account configurations. Refer to the updated docs on multi-account CloudTrail for more information.
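As a hedged sketch only (the input structure shown here is an assumption; check the module's variables.tf for the real shape in your version), a CloudTrail service principal with a condition might look like:

```hcl
module "kms_master_key" {
  # Illustrative source path.
  source = "git::git@github.com:gruntwork-io/module-security.git//modules/kms-master-key?ref=<release>"

  # Assumed input shape for a key with a conditioned service principal grant.
  customer_master_keys = {
    cloudtrail = {
      cmk_service_principals = [
        {
          name    = "cloudtrail.amazonaws.com"
          actions = ["kms:GenerateDataKey*"]
          conditions = [
            {
              test     = "StringLike"
              variable = "kms:EncryptionContext:aws:cloudtrail:arn"
              values   = ["arn:aws:cloudtrail:*:111122223333:trail/*"]
            }
          ]
        }
      ]
    }
  }
}
```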
The cloudtrail module now supports reusing an existing KMS key in your account, as opposed to creating a new one. To use an existing key, set the kms_key_already_exists variable to true and provide the ARN of the key to the variable kms_key_arn.
Note that as part of this change, the aws_account_id variable was removed from the module and it will now look up the account ID based on the configured authentication credentials of the provider. Remove the variable in your module block to have a backwards compatible deployment.
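For example (the source path is illustrative; the two variables come from this release):

```hcl
module "cloudtrail" {
  # Illustrative source path.
  source = "git::git@github.com:gruntwork-io/module-security.git//modules/cloudtrail?ref=<release>"

  # Reuse an existing CMK instead of creating a new one:
  kms_key_already_exists = true
  kms_key_arn            = "arn:aws:kms:us-east-1:111122223333:key/<key-id>"

  # aws_account_id = "111122223333"  # Remove this input; the account ID is now
  #                                  # looked up from the provider credentials.
}
```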
The iam-policies module now allows sts:TagSession for the automation users.
In v0.29.0, we updated account-baseline-app and account-baseline-security to allow for centralizing Config output in a single bucket. In this release, we take the same approach with account-baseline-root. It now supports using the Config bucket in the security account.
The aws-config module has been refactored to better support multi-region, multi-account configurations. Previously, running the aws-config-multi-region module would create an S3 bucket, an IAM role, and an SNS topic in each region. When run in multiple accounts, such as when using the Gruntwork reference architecture, each account would have the aforementioned resources within each region. This configuration was impractical to use since Config would be publishing data to dozens of buckets and topics, making it difficult to monitor and triage.
With this release, the aws-config-multi-region module has been modified as follows:
Only one IAM role is created. The AWS Config configuration recorder in each region assumes this role.
One S3 bucket is created in the same region as the global_recorder_region. The AWS Config configuration recorder in each region can publish to this bucket.
One SNS topic is created per region. According to the AWS documentation, the topic must exist in the same region as the configuration recorder.
An aggregator resource is created to capture Config data from all regions into the global_recorder_region. The aggregated view in the AWS console interface will show results from all regions.
In addition, the account-baseline-* modules can now be configured in the following way:
The account-baseline-security module can be configured as the “central” account in which to aggregate all other accounts.
The account-baseline-app module can be configured to use the central/security account.
In this configuration, the central account will be configured with an S3 bucket in the same region as the global_recorder_region and an SNS topic in each region. Any account configured with account-baseline-app can publish to the S3 bucket in the central account and send SNS notifications to the topic in the corresponding region of the central account. In addition, all configuration recorders across all accounts will be aggregated in the global_recorder_region of the central account.
Migration guide
First, remove the now-unused regional AWS Config buckets from the Terraform state so that the data remains intact. If you don't need the data, you can delete the buckets after removing them from the state. If you're using bash, a loop like the following should do the trick.
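A sketch of such a loop, assuming every regional bucket's state address ends in aws_s3_bucket.config_bucket (that address pattern is an assumption, so confirm it with terraform state list first):

```bash
# Review the matches before removing anything:
terraform state list | grep 'aws_s3_bucket.config_bucket'

# Remove each regional bucket from state. This makes Terraform forget the
# buckets without destroying them, so the data stays intact in AWS.
for addr in $(terraform state list | grep 'aws_s3_bucket.config_bucket'); do
  terraform state rm "$addr"
done
```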
Find additional migration instructions below for the modules affected by this change.
For aws-config (see the sketch after this list):
s3_bucket_name remains a required variable.
If should_create_s3_bucket=true (the default), an S3 bucket will be created. If it is false, AWS Config will be configured to use an existing bucket with the name provided by s3_bucket_name.
sns_topic_name is now optional. If sns_topic_name is provided, an SNS topic will be created. If sns_topic_arn is provided, AWS Config will be configured to use that topic.
If should_create_iam_role is true (the default), an IAM role will be created with the default name of AWSConfigRole.
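Putting the above together, a minimal sketch (the source path and values are illustrative):

```hcl
module "aws_config" {
  # Illustrative source path.
  source = "git::git@github.com:gruntwork-io/module-security.git//modules/aws-config?ref=<release>"

  # Required. With should_create_s3_bucket = true (the default) the bucket is
  # created; set it to false to use an existing bucket with this name.
  s3_bucket_name          = "acme-config-logs"
  should_create_s3_bucket = true

  # Optional: create a topic by name, or reference an existing one by ARN.
  sns_topic_name = "acme-config-topic"
  # sns_topic_arn = "arn:aws:sns:us-east-1:111122223333:acme-config-topic"

  # The default; creates an IAM role named AWSConfigRole.
  should_create_iam_role = true
}
```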
For aws-config-multi-region (see the sketch after this list):
global_recorder_region is no longer required. The default is now us-east-1.
The name_prefix variable has been removed.
s3_bucket_name is now required. In addition, if should_create_s3_bucket=true (the default), an S3 bucket will be created in the same region as global_recorder_region. If should_create_s3_bucket=false, the configuration recorder will be configured to use an existing bucket with the name provided by s3_bucket_name.
If a list of account IDs is provided in the linked_accounts variable, the S3 bucket and SNS topic policies will be configured to allow write access from those accounts.
If an account ID is provided in the central_account_id variable, AWS Config will be configured to publish to the S3 bucket and SNS topic in that account.
If kms_key_arn is provided, the S3 bucket and SNS topic will be encrypted with the provided key. If kms_key_arn is left as null, the S3 bucket will be encrypted with the default aws/s3 key, and the SNS topic will not be encrypted.
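A minimal sketch of the above (the source path and values are illustrative):

```hcl
module "aws_config_multi_region" {
  # Illustrative source path.
  source = "git::git@github.com:gruntwork-io/module-security.git//modules/aws-config-multi-region?ref=<release>"

  global_recorder_region = "us-east-1"  # now the default

  # Required. Created in the global_recorder_region when
  # should_create_s3_bucket = true (the default).
  s3_bucket_name          = "acme-config-logs"
  should_create_s3_bucket = true

  # In the central account: allow these accounts to write to the bucket and topics.
  linked_accounts = ["222222222222", "333333333333"]

  # In a linked account instead: publish to the central account's bucket and topics.
  # central_account_id = "111111111111"

  # Optional CMK. When null, the bucket uses the default aws/s3 key and the
  # SNS topics are not encrypted.
  kms_key_arn = null
}
```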
For account-baseline-security:
If a list of account IDs is provided in config_linked_accounts, those accounts will be granted access to the S3 bucket and SNS topic in the security account.
If the config_s3_bucket_name variable is provided, the S3 bucket will be created with that name. If no name is provided, the bucket will have the default name of ${var.name_prefix}-config.
For account-baseline-app:
The config_central_account_id variable should be configured with the ID of the account that contains the S3 bucket and SNS topic. This will typically be the account that is configured with account-baseline-security.
If the config_s3_bucket_name variable is provided, AWS Config will be configured to use that name (but the bucket will not be created within this account). If no name is provided, AWS Config will use the default name of ${var.name_prefix}-config. This bucket must already exist and must allow access from this account. To set up those permissions, provide this account's ID in the config_linked_accounts variable of the account-baseline-security module (see the combined sketch below).
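A combined sketch of the two baselines (the source paths and values are illustrative):

```hcl
# In the security (central) account:
module "account_baseline_security" {
  # Illustrative source path.
  source = "git::git@github.com:gruntwork-io/module-security.git//modules/account-baseline-security?ref=<release>"

  name_prefix            = "acme"
  config_linked_accounts = ["222222222222"]  # the app account IDs

  # config_s3_bucket_name defaults to "${var.name_prefix}-config" when unset.
}

# In each app account:
module "account_baseline_app" {
  # Illustrative source path.
  source = "git::git@github.com:gruntwork-io/module-security.git//modules/account-baseline-app?ref=<release>"

  name_prefix               = "acme"
  config_central_account_id = "111111111111"  # the security account ID

  # config_s3_bucket_name must match the pre-existing bucket in the security
  # account; the bucket is not created in this account.
}
```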
Added a new module called executable-dependency that can be used to install an executable if it's not installed already. This is useful if your Terraform code has external dependencies, such as terraform-aws-eks, which depends on kubergrunt.
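As a hedged sketch only (the input names here are assumptions; check the module's variables.tf for the real interface):

```hcl
module "kubergrunt" {
  # Illustrative source path.
  source = "git::git@github.com:gruntwork-io/package-terraform-utilities.git//modules/executable-dependency?ref=<release>"

  # Assumed inputs: which executable to look for, and where to download it
  # from if it's missing.
  executable   = "kubergrunt"
  download_url = "https://github.com/gruntwork-io/kubergrunt/releases/download/<version>/kubergrunt"
}
```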
The vpc-peering module can now optionally create resources using the create_resources variable. This weird parameter exists solely because Terraform does not support conditional modules; it's a hack to let you conditionally decide whether the VPC peering connection and other resources should be created.