The following changes were made to the server-group module:
IMPROVEMENT: Fixed an issue where an Auto Scaling Group's DesiredCapacity property was left at 0 after the rolling_deployment.py script failed to reach a passing health check before timing out. (#29)
IMPROVEMENT: Expose var.deployment_health_check_max_retries and var.deployment_health_check_retry_interval_in_seconds so that Terraform code that calls the server-group module can control how long the rolling_deployment.py script will run before timing out; see the example after this list. (#29)
IMPROVEMENT: Updated to the latest version of Boto to address transient AWS issues. (#29)
IMPROVEMENT: Expose var.additional_security_group_ids so you can attach arbitrary additional Security Groups to the Launch Configuration created by this module.
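For reference, here is a sketch of how a call to the server-group module could use the new input variables. Only the three new variables come from this release; the module source path, the values shown, and the aws_security_group reference are illustrative placeholders:

```hcl
module "example_server_group" {
  # Placeholder source: point this at the server-group module, pinned to the version you use
  source = "../../modules/server-group"

  # ... other required arguments for your server group (name, AMI, instance type, VPC settings, etc.) ...

  # New: control how many times, and how often, rolling_deployment.py retries the health check
  # before timing out (values shown are illustrative)
  deployment_health_check_max_retries                = 60
  deployment_health_check_retry_interval_in_seconds  = 10

  # New: attach additional Security Groups to the Launch Configuration created by the module
  additional_security_group_ids = ["${aws_security_group.extra.id}"]
}
```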
https://github.com/gruntwork-io/module-ci/pull/60: The git-add-commit-push script no longer defaults the branch name to $CIRCLE_BRANCH. Instead, it uses git to look up the name of the currently checked-out branch in pwd. In most cases this will produce the exact same effect as before and no code changes will be required. Note that you can always use the --branch-name argument to override the default branch name in git-add-commit-push.
git-add-commit-push has been moved from the gruntwork-module-circleci-helpers module to the git-helpers module.
terraform-update-variable now depends on git-helpers being installed, as it uses git-add-commit-push under the hood to commit and push changes more reliably.
All the pre-commit hooks that were in modules/pre-commit are now in their own open source repo: https://github.com/gruntwork-io/pre-commit. Please update your .pre-commit-config.yaml files to point to the new repo and its version numbers.
https://github.com/gruntwork-io/module-data-storage/pull/47: In the aurora module, you can now use the db_instance_parameter_group_name param to set the parameter group for instances separately from the parameter group for the entire cluster (which can be set via the db_cluster_parameter_group_name param).
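For example, a call to the aurora module can now set the two parameter groups independently. This is a sketch: the two parameter group params come from this change, while the surrounding arguments, resource names, and version ref are illustrative:

```hcl
module "aurora" {
  # Pin ref to a release of module-data-storage that includes this change
  source = "git::git@github.com:gruntwork-io/module-data-storage.git//modules/aurora?ref=v0.x.x"

  # ... other required arguments for your Aurora cluster ...

  # Parameter group for the cluster as a whole
  db_cluster_parameter_group_name = "${aws_rds_cluster_parameter_group.cluster.name}"

  # New: parameter group applied to each DB instance in the cluster
  db_instance_parameter_group_name = "${aws_db_parameter_group.instances.name}"
}
```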
This pre-release introduces the elasticsearch-cluster module, which makes it easy to deploy an Auto Scaling Group of Elasticsearch nodes in AWS.
Support was added with the merge of #15.
We're marking this as a pre-release because we're introducing elasticsearch-cluster mostly so that other modules (namely Logstash and Kibana) can begin integrating with and using our own Elasticsearch module. More features and enhancements will be added.
This release features a stable implementation of Kafka and all the Confluent Tools, and we consider this code production-ready. We are still marking this as a pre-release because we've discovered an unusual edge case with Zookeeper.
In particular, when Zookeeper is colocated with multiple other services and re-deployed one server at a time, one of the Zookeeper nodes will remain in the Ensemble, but fail to sync all the znodes (key/value pairs). As a result, when Kafka looks up information about a broker from the out-of-sync node, it receives the error "znode not found" and fails to start correctly. We have not seen any evidence that this issue affects a standalone Zookeeper cluster.
See "Backwards-Incompatible Changes" in release v0.3.0 for important background on the EXTERNAL, INTERNAL, and HEALTHCHECK Kafka listeners.
The changes in this release from v0.3.1 include the following:
REST Proxy now favors the bootstrap.servers property over the zookeeper.connect property. This is because REST Proxy was discovering all the Kafka listeners stored in Zookeeper, not just the EXTERNAL listeners we wanted it to receive.
Schema Registry now favors the kafkastore.bootstrap.servers property over the kafkastore.connection.url property (which points to the Zookeeper servers) for the same reason.
The source_ami_filter used in all example Packer templates now specifies an additional filter that ensures the CentOS 7 AMI will use a gp2 (SSD) EBS Volume by default.
All the run-xxx scripts now use a common pattern of arguments like --kafka-brokers-eni-tag and --schema-registry-eni-tag-dns. Previously, some arguments that referenced ENI values did not include eni in their names.
Previously, whenever a script searched for an Elastic Network Interface (ENI), it queried AWS for all of a given EC2 Instance's ENIs and arbitrarily returned information about the first ENI in the results. Now, we explicitly look for the ENI whose DeviceIndex is 1 to guarantee that we get the ENI that re-attaches after an EC2 Instance re-spawns.
All Bash scripts have been updated to use the bash-commons module in this repo. Note that Gruntwork has also released an official bash-commons repo that we hope to migrate to in the future.
One of our customers wished to assign their own Security Groups to the Kafka cluster, in addition to the Security Group that's automatically created by the kafka-cluster module, so we added var.additional_security_group_ids to both the kafka-cluster and confluent-tools-cluster modules.
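For example, a call to kafka-cluster might now attach extra Security Groups like this sketch. Only var.additional_security_group_ids comes from this release; the module source path and the Security Group references are illustrative:

```hcl
module "kafka_cluster" {
  # Placeholder source: point this at the kafka-cluster module, pinned to the version you use
  source = "../../modules/kafka-cluster"

  # ... other required arguments for your Kafka cluster ...

  # New: attach your own Security Groups in addition to the one the module creates automatically
  additional_security_group_ids = [
    "${aws_security_group.kafka_extra_access.id}",
    "${aws_security_group.monitoring_access.id}",
  ]
}
```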
Previously, Kafka Connect was not configured correctly to handle SSL. We made some updates to the default configuration so that Kafka Connect and connectors correctly connect to Kafka over SSL when applicable.
Previously, Schema Registry listed every possible Kafka listener, including the EXTERNAL, INTERNAL, and HEALTHCHECK listeners. This caused an issue where Schema Registry would choose a listener at random, sometimes fail to connect, and therefore fail to start.
Schema Registry's configuration now lists only the EXTERNAL listeners for Kafka.
Previously, we evaluated every possible configuration variable that could be used, which caused an issue when we attempted to resolve <__PUBLIC_IP__> even though there was no public IP address defined for an EC2 Instance. At the time, we solved this issue by "downgrading" to a private IP address, but this behavior was error-prone.
We now do "lazy evaluation" of configuration variables so that a configuration variable is only evaluated if it's actually used in a configuration file or script argument. Now, if you attempt to use <__PUBLIC_IP__> when there is no public IP address defined for the EC2 Instance, it will throw an error.
We now have end-to-end integration tests for Kafka Connect, which helped us discover and fix some configuration issues!
Over the next few weeks, we plan to dig deeper into the Zookeeper issue. Once it's fixed, we'll make the appropriate changes and issue a final release.
The cross-account-iam-roles module now sets a default max expiration of 12 hours for IAM Roles intended for human users (e.g., allow-read-only-access-from-other-accounts) and a default max expiration of 1 hour for IAM Roles intended for machine users (e.g., allow-auto-deploy-access-from-other-accounts). Both of these expiration values are configurable via the new input variables max_session_duration_human_users and max_session_duration_machine_users.
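Here is a sketch of how the new variables might be set when calling the module. The two variable names come from this release; the module source path, the other arguments, and the assumption that the values are expressed in seconds (as IAM's MaxSessionDuration is) are ours, so check the module's variables.tf for the exact units and defaults:

```hcl
module "cross_account_iam_roles" {
  # Placeholder source: point this at the cross-account-iam-roles module, pinned to the version you use
  source = "../../modules/cross-account-iam-roles"

  # ... other required arguments (AWS account ID, trusted account IDs, etc.) ...

  # Override the default max session durations (12 hours for human users, 1 hour for machine users).
  # Values below assume the variables are in seconds; confirm against the module's variables.tf.
  max_session_duration_human_users   = 43200  # 12 hours
  max_session_duration_machine_users = 3600   # 1 hour
}
```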
The aws-auth script now accepts optional --mfa-duration-seconds and --role-duration-seconds parameters that specify the session expiration for the creds you get back when authenticating with an MFA token or assuming an IAM role, respectively. The default for both of these has been set to 12 hours to be more human-friendly.
The auto-update, ntp, fail2ban, and ip-lockdown modules now all use bash-commons under the hood. That means you must install bash-commons before installing any of those other modules.
The auto-update and ntp modules now support Amazon Linux 2. We will add Amazon Linux 2 support for the fail2ban and ip-lockdown modules in the future.