Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Updates in this version:
Update EKS modules to the latest version.
Update k8s-service to use Helm v3.
Update k8s-service to use the latest chart versions.
If you would like to update an existing Reference Architecture to this version, see the guide below.
IMPORTANT: This release has been updated to allow upgrades after the deprecation of the Helm v2 chart repository.
If you are running an EKS flavored Reference Architecture deployed prior to this release (all Reference Architectures before 06/11/2020), you can follow the guides in the following order to update your EKS cluster to this version.
This upgrade moves you to Kubernetes 1.16, the Gruntwork terraform-aws-eks module to v0.20.1, and Helm 3. You will first update the cluster itself, then the core services, and finally, your own services that run in the cluster.
NOTE: You must fully roll out the changes at each step prior to moving on to the next one, unless stated otherwise.
1. Update your EKS cluster to run Kubernetes version 1.14 (instructions). Note that you must update the module versions to upgrade beyond 1.14, so if you want to upgrade to 1.15 and 1.16, wait until the end of the guide.
2. Upgrade the Gruntwork library modules eks-cluster-control-plane and eks-cluster-workers in the eks-cluster service module to version v0.9.8 (instructions); see the version-bump sketch after this list.
3. Update the eks-cluster service module (instructions).
4. At this point, you can repeat step (1) to upgrade the Kubernetes version to 1.15 and then 1.16.
5. Upgrade the k8s-service service module to use Helm v3 (instructions). This must be rolled out to ALL your services before you can move on to the next step.
6. Update k8s-service to use chart version 0.1.0 (instructions).
7. Update the eks-core-services service module (instructions).
8. Update the k8s-namespace-with-tiller module to remove references to Tiller (instructions).
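For reference, upgrading a Gruntwork module is a matter of bumping the ?ref= version pin in the module's source URL. A minimal sketch, assuming a typical layout (the source path shown is illustrative; edit the paths already in your code):

```hcl
module "eks_cluster_control_plane" {
  # Bump the ?ref= pin to the version the guide calls for (v0.9.8 here).
  source = "git::git@github.com:gruntwork-io/terraform-aws-eks.git//modules/eks-cluster-control-plane?ref=v0.9.8"

  # ... your existing inputs remain unchanged ...
}
```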
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Updates in this version:
Support for nvme-cli
Bumping to t3.micro
Bumping to the latest module-ci for jenkins-server
Bug fixes with Helm
Bug fixes in tls-scripts
Compatibility update for the latest Terragrunt version
The aws_region variable was removed from the module; its value is now retrieved from the region configured on the AWS provider. When updating to this new version, make sure to remove the aws_region parameter from your module call.
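A minimal before/after sketch (the module name and source path are illustrative):

```hcl
# The module now reads the region from the AWS provider configuration.
provider "aws" {
  region = "us-east-1"
}

module "example" {
  source = "../modules/example" # illustrative path

  # aws_region = "us-east-1"    # REMOVE: no longer a supported input
}
```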
You can now configure the asg-rolling-deploy module to NOT use ELB health checks during a deploy by setting the use_elb_health_checks variable to false. This is useful for testing connectivity before health check endpoints are available.
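For example (the module source path is illustrative; other required inputs are omitted):

```hcl
module "asg" {
  source = "git::git@github.com:gruntwork-io/module-asg.git//modules/asg-rolling-deploy?ref=vX.Y.Z" # illustrative

  # Skip ELB health checks during the rolling deploy, e.g. while testing
  # connectivity before health check endpoints exist.
  use_elb_health_checks = false

  # ... remaining required inputs ...
}
```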
terraform-update-variable now supports committing updates to a separate branch. Note that as part of this change, the --skip-git option has been updated to take a value as opposed to being a bare option. If you were using the --skip-git flag previously, you will now need to pass in --skip-git true.
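For example (the --name and --value arguments shown are placeholders; use your existing invocation):

```sh
# Before: bare flag (no longer supported)
# terraform-update-variable --name tag --value v1.2.3 --skip-git

# After: the option now takes an explicit value
terraform-update-variable --name tag --value v1.2.3 --skip-git true
```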
The rds and aurora modules have been updated to remove redundant/duplicate resources by taking advantage of Terraform 0.12 syntax (i.e., for_each, null defaults, and dynamic blocks). This greatly simplifies the code and makes it more maintainable, but because many resources were renamed, this is a backwards incompatible change, so make sure to follow the migration guide below when upgrading!
All input and output variables are the same, so you will not need to make any code changes. There are no changes in functionality either, so there shouldn't be anything new to apply (i.e., when you finish the migration, terraform plan should show no changes). The only thing that changed in this upgrade is that several resources were renamed in the Terraform code, so you'll need to update your Terraform state so it knows about the new names. You do this using the state mv command (Note: if you're using Terragrunt, replace terraform with terragrunt in all the commands in this migration guide):
```sh
terraform state mv OLD_ADDRESS NEW_ADDRESS
```
Where OLD_ADDRESS is the resource address with the old resource name and NEW_ADDRESS is the resource address with the new name. The easiest way to get the old and new addresses is to upgrade to the new version of this module and run terraform plan. When you do so, you'll see output like this:
The lines that show you resources being removed (with a - in front of them) show the old addresses in a comment above the resource:
```
  # module.aurora_serverless.aws_rds_cluster.cluster_with_encryption_serverless[0] will be destroyed
  - resource "aws_rds_cluster" "cluster_with_encryption_serverless" {
```
And the lines that show the very same resources being added (with a + in front of them) show the new addresses in a comment above the resource:
```
  # module.aurora_serverless.aws_rds_cluster.cluster will be created
  + resource "aws_rds_cluster" "cluster" {
```
You'll want to run terraform state mv (or terragrunt state mv) on each pair of these resources:
```sh
terraform state mv \
  module.aurora_serverless.aws_rds_cluster.cluster_with_encryption_serverless[0] \
  module.aurora_serverless.aws_rds_cluster.cluster
```
Improved the Aurora documentation and added a dedicated Aurora Serverless example. This release also adds support for specifying a scaling_configuration_timeout_action when using the aurora module in serverless mode.
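For example, when running the aurora module in serverless mode (the source path is illustrative; other required inputs are omitted):

```hcl
module "aurora_serverless" {
  source = "git::git@github.com:gruntwork-io/module-data-storage.git//modules/aurora?ref=vX.Y.Z" # illustrative

  engine_mode = "serverless"

  # What autoscaling should do if it can't find a scaling point before the
  # timeout: "ForceApplyCapacityChange" or "RollbackCapacityChange".
  scaling_configuration_timeout_action = "ForceApplyCapacityChange"

  # ... remaining required inputs ...
}
```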
The efs module can now create EFS access points and corresponding IAM policies for you. Use the efs_access_points input variable to specify what access points you want and configure the user settings, root directory, read-only access, and read-write access for each one.
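A sketch of what this can look like; the exact schema of efs_access_points is defined in the module's variables.tf, so treat the shape below as an assumption:

```hcl
module "efs" {
  source = "git::git@github.com:gruntwork-io/module-data-storage.git//modules/efs?ref=vX.Y.Z" # illustrative

  # Hypothetical shape: one access point keyed by name, with POSIX user,
  # root directory, and read-only/read-write access settings.
  efs_access_points = {
    app = {
      read_write_access_arns = ["arn:aws:iam::111122223333:role/app-role"] # example ARN
      read_only_access_arns  = []
      posix_user = {
        uid            = 1000
        gid            = 1000
        secondary_gids = []
      }
      root_directory = {
        path        = "/app"
        owner_uid   = 1000
        owner_gid   = 1000
        permissions = "0755"
      }
    }
  }
}
```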
The rds module now supports cross-region replication! You can enable it by setting the replicate_source_db input variable to the ARN of a primary DB that should be replicated. See rds-mysql-with-cross-region-replica for a working example.
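A minimal sketch of a cross-region replica (provider wiring and other required inputs are omitted; the source path is illustrative):

```hcl
module "mysql_replica" {
  source = "git::git@github.com:gruntwork-io/module-data-storage.git//modules/rds?ref=vX.Y.Z" # illustrative

  # ARN of the primary DB instance in another region to replicate from.
  replicate_source_db = "arn:aws:rds:us-east-1:111122223333:db:mysql-primary" # example ARN

  # ... remaining required inputs ...
}
```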
Added primary_address and read_replica_addresses outputs to the rds module.
The ecs-service module now allows you to mount EFS Volumes in your ECS Tasks (including Fargate tasks) using the new efs_volumes input variable. See also the efs module for creating EFS volumes.
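A sketch of mounting an EFS volume in an ECS task; the efs_volumes shape shown is an assumption, so check the module's variables.tf for the exact schema:

```hcl
module "ecs_service" {
  source = "git::git@github.com:gruntwork-io/module-ecs.git//modules/ecs-service?ref=vX.Y.Z" # illustrative

  # Hypothetical shape: named volumes backed by an EFS file system.
  efs_volumes = {
    data = {
      file_system_id     = module.efs.id # e.g., an output from the efs module
      root_directory     = "/"
      transit_encryption = "ENABLED"
    }
  }

  # ... remaining required inputs ...
}
```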
The ecs-cluster module now attaches the ecs:UpdateContainerInstancesState permission to the ECS Cluster's IAM role. This is required for automated ECS instance draining (e.g., when receiving a spot instance termination notice).
You can now bind different containers and ports to each target group created for the ECS service. This can be used to expose multiple containers or ports to existing ALBs or NLBs.
eks-k8s-external-dns is now using a more up-to-date Helm chart to deploy external-dns. Additionally, you can now configure the logging format as either text or json.
eks-alb-ingress-controller now supports selecting a different container version of the ingress controller. This can be used to deploy the v2 alpha image with shared ALB support.
The control plane Python PEX binaries now support long path names on Windows. Previously the scripts were causing errors when attempting to unpack the dependent libraries.
The cluster upgrade script now supports updating to Kubernetes version 1.16. The eks-cloudwatch-container-logs module is also now compatible with Kubernetes version 1.16.
The sqs module can now be turned off by setting create_resources = false. When this option is passed in, the module will disable all of its resources, effectively simulating a conditional.
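For example (the source path is illustrative):

```hcl
module "sqs" {
  source = "git::git@github.com:gruntwork-io/package-messaging.git//modules/sqs?ref=vX.Y.Z" # illustrative

  # When false, the module creates none of its resources, which lets you
  # toggle it on and off without removing the module block.
  create_resources = false

  # ... remaining inputs ...
}
```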
As outlined in the AWS docs, the key policy in the security account should allow trail/* so that all trails in external accounts can use the key for encryption (but not decryption). Without this, running the account baseline in a sub-account results in an InsufficientEncryptionPolicyException.
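For reference, a key policy statement along these lines follows the pattern in the AWS CloudTrail docs; 111122223333 stands in for the sub-account ID, and the statement name is our own:

```hcl
# Sketch of the relevant CMK policy statement for cross-account CloudTrail.
data "aws_iam_policy_document" "cloudtrail_key_policy" {
  statement {
    sid       = "AllowCloudTrailToEncryptLogs"
    effect    = "Allow"
    actions   = ["kms:GenerateDataKey*"]
    resources = ["*"]

    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }

    # Restrict usage to trails in the given account: note the trail/* suffix,
    # which covers all trails in that account.
    condition {
      test     = "StringLike"
      variable = "kms:EncryptionContext:aws:cloudtrail:arn"
      values   = ["arn:aws:cloudtrail:*:111122223333:trail/*"]
    }
  }
}
```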