Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
The modules under iam-policies now allow you to set the create_resources parameter to false so that the module does not create any resources. This is a workaround for Terraform not supporting the count parameter on module { ... } blocks.
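A minimal sketch of how this might look (the module source path and the condition driving the flag are placeholders, not the modules' confirmed usage):

```hcl
module "example_iam_policy" {
  # Placeholder source: substitute the actual iam-policies module path.
  source = "<path-to-iam-policies-module>"

  # When false, the module creates no resources (a no-op). This stands in
  # for count on module blocks, which Terraform does not support.
  create_resources = var.enable_iam_policy
}
```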
You can now define custom metric filters in addition to the default filters required by the Benchmark from the cloudtrail module. Previously this was only available through the cloudwatch-logs-metric-filters module.
Adds the ability to define custom metric filters in addition to the default filters required by the Benchmark. Thanks to @frankzieglermbc for his contribution.
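A hypothetical sketch of defining a custom metric filter alongside the defaults (the variable name and schema here are illustrative only; check the module's inputs for the confirmed API):

```hcl
module "cloudtrail" {
  # Placeholder source: substitute the actual cloudtrail module path.
  source = "<path-to-cloudtrail-module>"

  # Illustrative only: an extra CloudWatch Logs metric filter in addition
  # to the default filters required by the Benchmark.
  custom_metric_filters = {
    root-account-usage = {
      pattern     = "{ $.userIdentity.type = \"Root\" }"
      metric_name = "RootAccountUsage"
    }
  }
}
```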
This release exposes the ca_cert_identifier argument for aws_db_instance, which configures the CA certificate bundle used by RDS. The previous CA bundle expires on March 5, 2020, at which point TLS connections that haven't been updated will break. Refer to the AWS documentation for details.
The argument defaults to rds-ca-2019. When you run terraform apply with this update, Terraform will update the instance, but the change will not take effect until the next DB maintenance window. You can set apply_immediately = true to restart the instance right away. Until the instance is restarted, the Terraform plan will show a perpetual diff.
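A minimal sketch of setting these arguments (the module source path and other required inputs are placeholders):

```hcl
module "database" {
  # Placeholder source: substitute the actual RDS module path.
  source = "<path-to-rds-module>"

  # Use the 2019 CA bundle before the previous bundle expires on
  # March 5, 2020. This is also the default value.
  ca_cert_identifier = "rds-ca-2019"

  # Optional: apply the change now instead of waiting for the next
  # maintenance window. Note this restarts the instance.
  apply_immediately = true
}
```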
This update adds tags for ECS services and task definitions. To add tags to a service, provide a map with the service_tags variable. Similarly, to tag task definitions, provide a map with the task_definition_tags variable. For example:
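A hedged sketch using the two variables named above (the module source path and tag values are placeholders):

```hcl
module "ecs_service" {
  # Placeholder source: substitute the actual ecs-service module path.
  source = "<path-to-ecs-service-module>"

  # Tags applied to the ECS service.
  service_tags = {
    Team = "platform"
  }

  # Tags applied to the ECS task definition.
  task_definition_tags = {
    Team = "platform"
  }
}
```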
Use the propagate_tags variable to control how tags propagate to ECS tasks. If you set propagate_tags to "SERVICE" (the default), the tags from service_tags will be set on the tasks. To propagate tags from the task definition instead, set propagate_tags = "TASK_DEFINITION". If you set propagate_tags = null, tasks will be created with no tags.
Compatibility note
Tag propagation requires that you adopt the new ARN and resource ID format. If you don't do this, you may encounter the following error:
InvalidParameterException: The new ARN and resource ID format must be enabled to propagate tags. Opt in to the new format and try again.
To opt in to the new format as the account default using the AWS CLI, run the following commands:
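These commands set the account default for each ECS resource type; the region shown is an example:

```shell
# Opt the account default in to the new ARN and resource ID format.
# Repeat in every region that uses ECS.
aws ecs put-account-setting-default --name serviceLongArnFormat --value enabled --region us-east-1
aws ecs put-account-setting-default --name taskLongArnFormat --value enabled --region us-east-1
aws ecs put-account-setting-default --name containerInstanceLongArnFormat --value enabled --region us-east-1
```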
This will set the account default, but note that ECS account settings are stored per principal, per region, so the commands should be executed within each region that uses ECS.
Furthermore, you may also need to run the commands for IAM users that already exist in the account but haven't opted in to the new format. To do so, authenticate as the IAM user who will be running Terraform (such as a CI machine user), and use the put-account-setting variant of the command within the appropriate regions. For example:
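The per-user variant looks like the following (region shown is an example; repeat in every region that uses ECS):

```shell
# Run while authenticated as the IAM user that will be running Terraform.
aws ecs put-account-setting --name serviceLongArnFormat --value enabled --region us-east-1
aws ecs put-account-setting --name taskLongArnFormat --value enabled --region us-east-1
aws ecs put-account-setting --name containerInstanceLongArnFormat --value enabled --region us-east-1
```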
This release introduces support for ECS capacity providers in the ecs-service module. This allows you to provide a strategy for how to run the ECS tasks of the service, such as distributing the load between Fargate and Fargate Spot.
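A hypothetical sketch of such a strategy (the variable name and schema are illustrative; check the module's inputs for the confirmed API):

```hcl
module "ecs_service" {
  # Placeholder source: substitute the actual ecs-service module path.
  source = "<path-to-ecs-service-module>"

  # Illustrative only: run a baseline of 2 tasks on Fargate, and weight
  # additional tasks 3:1 toward Fargate Spot to reduce cost.
  capacity_provider_strategy = [
    {
      capacity_provider = "FARGATE"
      weight            = 1
      base              = 2
    },
    {
      capacity_provider = "FARGATE_SPOT"
      weight            = 3
      base              = 0
    },
  ]
}
```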
The eks-cluster-control-plane module now supports specifying CIDR blocks to restrict access to the public Kubernetes API endpoint. Note that this applies only to the public endpoint: you cannot yet restrict access to the private endpoint by CIDR.
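A hedged sketch (the input variable name here is illustrative, not the module's confirmed API):

```hcl
module "eks_cluster" {
  # Placeholder source: substitute the actual eks-cluster-control-plane path.
  source = "<path-to-eks-cluster-control-plane-module>"

  # Illustrative variable name: restrict the public Kubernetes API endpoint
  # to a trusted network. The private endpoint is unaffected.
  endpoint_public_access_cidrs = ["203.0.113.0/24"]
}
```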
This release includes the following feature enhancements:
You can now specify the encryption mode of the root volume for EC2 instances deployed using the eks-cluster-workers module using the cluster_instance_root_volume_encryption input variable.
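For example, using the input variable named above (the module source path is a placeholder, and the value is assumed to be a boolean here):

```hcl
module "eks_workers" {
  # Placeholder source: substitute the actual eks-cluster-workers path.
  source = "<path-to-eks-cluster-workers-module>"

  # Encrypt the root EBS volume of each worker instance.
  cluster_instance_root_volume_encryption = true
}
```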
You can now define the --txt-owner-id argument using the txt_owner_id input variable for external-dns. This argument is used to uniquely tag DNS records on the Hosted Zone so that multiple instances of external-dns can manage records against the same Hosted Zone.
The eks-k8s-role-mapping module now outputs the YAML file in a deterministic order. Previously the YAML was non-deterministic, causing potential perpetual diffs even when nothing had actually changed.
This release also includes a number of minor bug fixes:
All examples have been improved to use the correct IAM Role ARN for the EKS role mapping for authentication.
Broken links in the READMEs have been fixed.
The root README has an updated architecture diagram for Fargate and Managed Node Groups.
Starting with this release, the modules in this repo have official support for Fargate:
eks-cluster-control-plane now has a new input variable fargate_only, which will create Fargate Profiles for the default and kube-system namespaces so that all Pods in those namespaces are routed to Fargate. This also adjusts the core administrative Pods to run on Fargate, so that you can have a functioning EKS cluster without worker nodes.
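A minimal sketch using that variable (the module source path and other required inputs are placeholders):

```hcl
module "eks_cluster" {
  # Placeholder source: substitute the actual eks-cluster-control-plane path.
  source = "<path-to-eks-cluster-control-plane-module>"

  # Route all Pods in the default and kube-system namespaces to Fargate,
  # so the cluster functions without any worker nodes.
  fargate_only = true
}
```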
eks-k8s-external-dns, eks-k8s-cluster-autoscaler, eks-cloudwatch-container-logs, and eks-alb-ingress-controller now support deploying with IAM Roles for Service Accounts, creating the IAM roles inline and associating them with the Service Accounts within the modules.
The underlying helm charts used in the modules eks-k8s-external-dns, eks-k8s-cluster-autoscaler, eks-cloudwatch-container-logs, and eks-alb-ingress-controller have been bumped to the most recent version.
eks-k8s-external-dns, eks-k8s-cluster-autoscaler, eks-cloudwatch-container-logs, and eks-alb-ingress-controller now support scheduling on Fargate if you have mixed worker pools.
eks-k8s-external-dns-iam-policy, eks-k8s-cluster-autoscaler-iam-policy, and eks-alb-ingress-controller-iam-policy now support conditionally turning off creation of the IAM policy with the input variable create_resources.
The worker IAM role is no longer required for eks-k8s-role-mapping.
This release introduces a new module, eks-cluster-managed-workers, which provisions EKS Managed Node Groups. This is an alternative worker pool to the existing eks-cluster-workers module with some useful properties; you can read more about the trade-offs versus self-managed workers in the module README.
The Python scripts used in eks-k8s-role-mapping and eks-cluster-control-plane no longer support Mac OSX 12. If you are on OSX 12, please use a prior version of these modules or upgrade your OS.
The Python scripts used in eks-k8s-role-mapping and eks-cluster-control-plane now support Python 3.8.
The logs/cloudwatch-log-aggregation-iam-policy module can now be conditionally excluded based on the input variable create_resources. When create_resources is false, the module will not create any resources and becomes a no-op.
None of the Terraform modules has been updated in this release.
The codegen Go library has been updated to allow rendering explicit blocks at the end of main.tf and outputs.tf, separate from each region configuration.
This release introduces a new module aws-config-multi-region which can be used to configure AWS Config in multiple regions of an account.
The following additional fixes are also included in this release:
The guardduty-multi-region module now supports automatically detecting which regions are enabled on your account. This means that you no longer need to manually maintain the opt_out_regions list.
Fix a bug in the aws-config module where the aws_config_delivery_channel resource sometimes fails due to a race condition with the IAM policy to write to SNS.
New modules for configuring AWS GuardDuty, a service for detecting threats and continuously monitoring your AWS accounts and workloads for malicious activity and unauthorized behavior.
vpc-app and vpc-mgmt now create a single VPC endpoint per service, shared across all tiers. Previously we created separate endpoints per tier, which made it more likely to hit the AWS limit on VPC endpoints per region as you add more VPCs. By consolidating, we bring the VPC endpoint count per VPC down from 6 to 2.
NOTE: Since the VPC endpoints must be recreated by this change, existing VPCs will experience a brief outage when reaching these endpoints (S3 and DynamoDB) while the endpoints are recreated during the upgrade to this release. This cannot be avoided: you can only have one VPC endpoint per route table, so the new consolidated endpoints cannot be created before the old ones are removed. Expect up to 10 seconds of endpoint access downtime while Terraform recreates the endpoints.