HCP Terraform allows organizations to scale the management of their cloud infrastructure. However, onboarding multiple teams with unique requirements and workflows introduces its own set of challenges. This blog post will demonstrate how you can automate your HCP Terraform workspace setup by using the TFE provider and building an onboarding module.
A common scenario
To illustrate a common scenario, imagine a tech company. We’ll call them “HashiCups”. Their platform team has successfully built their initial cloud landing zones using HCP Terraform.
Cloud landing zone: A pre-configured, secure, and scalable environment that serves as a foundation for deploying and managing cloud resources.
Now they’re ready to make their first attempt at onboarding an application team to HCP Terraform, with many more teams to follow. They realize that manually creating and configuring workspaces for each team is time-consuming and prone to errors. They need an automated onboarding process that's not only efficient but also scalable and consistent.
They’ve decided they’re going to add another abstraction layer to codify and automate the onboarding setup for HCP Terraform workspaces, teams and processes. They’ll do this using Terraform as the engine once again, with the TFE provider.
With this provider they can build a reusable Terraform module (we’ll call it the “workspace onboarding module”) that encapsulates best practices for workspace creation, permission management, and team onboarding. This approach should allow HashiCups to scale effortlessly as they bring more teams into their infrastructure as code ecosystem.
Onboarding the first team
The HashiCups platform team will start their onboarding process by having a meeting with the application team. To prepare for this meeting, they’ll review their objectives.
The platform team has two main objectives here:
- Get the application team up and running as quickly as possible.
- Create and test their reusable onboarding pattern (which is codified in a Terraform module) so that they can iron out any issues before they offer it to other teams.
Based on these objectives, in their first meeting, they will ask:
- Whether the team is familiar with workspaces in HCP Terraform, providing an overview if necessary.
- What their environment landscape looks like (the promotion path, i.e. the path from dev > test > prod).
- Who should be permitted to change infrastructure configuration, and if those permissions depend on the environment.
What is an HCP Terraform workspace?
In HCP Terraform, a workspace is a fundamental concept used to organize infrastructure as code, so it makes sense to start the meeting by reviewing what workspaces are and how they affect the team’s IaC code.
An HCP Terraform workspace is an isolated environment where a specific team or working group can manage a specific set of infrastructure resources. Each workspace maintains its own state file, which is important for tracking the current state of your infrastructure and ensuring that Terraform can accurately plan and apply changes to it. It provides a collaborative space for teams to manage infrastructure as code, with capabilities such as version control integration, secure state management, and role-based access control.
Workspace scoping recommendations
Our recommended practice is that you structure your HCP Terraform setup so that each workspace corresponds to a specific:
- Business unit
- Application name
- Infrastructure layer
- Promotion path environment (i.e. dev > test > prod)
- and/or region
Some example workspace names for a simple application following this recommendation could include:
- bu1-billing-prod-us-east
- bu1-billing-staging-us-east
For more complex scenarios, teams will need to divide their workspaces into even smaller scopes: when a team has a large number of resources to deploy, a single workspace becomes harder to manage and decipher. For example:
- bu2-orders-networking-prod-us-east
- bu2-orders-compute-prod-us-east
- bu2-orders-db-prod-us-east
- bu2-orders-networking-staging-us-east
- bu2-orders-compute-staging-us-east
- bu2-orders-db-staging-us-east
The main takeaway here is that you can delineate your workspace scopes according to how you think you should isolate each environment, ensuring three things:
- Adequately limiting the potential impact or 'blast radius' of any change-related failures
- Preventing performance degradations from affecting other workspaces
- Accommodating different infrastructure sizing and configuration needs for development, testing, and production scenarios
The requirements
After asking the questions listed earlier and building a general understanding of workspaces and how they can be scoped, the HashiCups platform team has gathered a set of requirements from the application team.
The application team explained that they use a three-environment landscape (development, staging, and production), which will translate into three workspaces. Through meetings with other stakeholders, such as security, operations leadership, and platform team leadership (sometimes these best-practice-building groups are called a “cloud center of excellence”, or CCoE), the platform team has an additional set of requirements for HCP Terraform workspace default settings:
- Each application team should have a group that is responsible for workspace administration and another group that has the necessary permissions to use the workspaces.
- Powerful data removal commands like terraform destroy should not be allowed for production, only for development and staging environments.
- Technical leadership has decided on workspace naming conventions. Each name will have only two pieces of information: an application identifier followed by an environment identifier (<application>-<environment>), and the workspace name must be in lowercase.
- Generally, the environment used by the end users must use the prod environment identifier.
After completing the discovery process, the platform team can now create the first version of the workspace onboarding module.
Making the onboarding pattern reusable
The workspace onboarding module will generate the workspaces needed for the first application team. Rather than hardcoding their team-specific requirements into the workspace, the onboarding module will have empty variable fields so that any team in the organization can use the same module to customize workspaces for their own specific needs. For example, while the first team has three environments, some teams have two, and some have more than three. The number of environments generated will need to be a variable field in the module.
Create the variable definitions
The first file we’ll create is the variables.tf file, where we’ll define four variables:
- application_id, to hold the application’s unique identifier.
- admin_team_name, to hold the name of the (pre-existing) HCP Terraform team representing the application administrators.
- user_team_name, to hold the name of the (pre-existing) HCP Terraform team representing the application infrastructure engineers (or developers).
- environment_names, to hold the list of environment names (dev, prod, etc.) in this application’s environment landscape.
The environment_names variable also needs a validation block to ensure that there is an environment named prod, as per the organization’s requirements.
variable "environment_names" {
description = "A list of environment names"
type = list(string)
validation {
condition = contains([for env in var.environment_names : lower(env)], "prod")
error_message = "The list of environment names must contain 'prod'."
}
}
variable "admin_team_name" {
description = "The name of the team for the workspace administrators"
type = string
}
variable "user_team_name" {
description = "The name of the team for the workspace users"
type = string
}
variable "application_id" {
description = "The identifier of the application"
type = string
}
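For example, with the validation above in place, a configuration that omits prod is rejected at plan time (the variable value below is illustrative):

environment_names = ["dev", "staging"]
# Error: The list of environment names must contain 'prod'.

Because the condition lowercases each name before comparing, values such as Prod or PROD also satisfy the requirement.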
Create the workspaces
The next step is creating the main.tf file, where admins will define the workspaces and team permissions. When creating the workspace for the prod environment, the team configures it so that destroy plans aren’t allowed, as per the organization’s requirements. They’ll also use string interpolation to name the workspace according to the organization’s naming convention. See how this looks in the configuration below.
resource "tfe_workspace" "workspace" {
for_each = toset(var.environment_names)
name = "${lower(var.application_id)}-${lower(each.value)}"
description = "Workspace for the ${each.value} environment of application ${var.application_id}"
allow_destroy_plan = each.value == "prod" ? false : true
}
data "tfe_team" "admin_team" {
name = var.admin_team_name
}
data "tfe_team" "user_team" {
name = var.user_team_name
}
resource "tfe_team_access" "admin_team_access" {
for_each = toset(var.environment_names)
workspace_id = tfe_workspace.workspace[each.value].id
team_id = data.tfe_team.admin_team.id
access = "admin"
}
resource "tfe_team_access" "user_team_access" {
for_each = toset(var.environment_names)
workspace_id = tfe_workspace.workspace[each.value].id
team_id = data.tfe_team.user_team.id
access = "write"
}
Note that this example is using data sources to fetch information about the admin_team and the user_team. An alternative would be to accept the team ID instead of the team name as an input variable. Using the team ID as an input variable can simplify the code and make it more efficient in terms of data processing. However, it may also make it less intuitive for a human to understand the input at a glance.
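As a rough sketch of that alternative, the module could accept a hypothetical admin_team_id input and use it directly, skipping the data source lookup (the user team could be handled the same way):

variable "admin_team_id" {
  description = "The ID of the (pre-existing) team for the workspace administrators"
  type        = string
}

resource "tfe_team_access" "admin_team_access" {
  for_each = toset(var.environment_names)

  workspace_id = tfe_workspace.workspace[each.value].id
  team_id      = var.admin_team_id # used directly; no lookup required
  access       = "admin"
}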
Make outputs available
One of the key principles in infrastructure as code is composition. Composition in the context of IaC and Terraform refers to the practice of building complex configurations by combining smaller, reusable components. This approach enables modular, scalable, and maintainable infrastructure definitions.
To enable composition with modules, the team needs to share information using outputs. In this case, they made the IDs of the workspaces created for the application team available, as well as the IDs of the admin and user teams, in the outputs.tf file:
output "workspace_ids" {
description = "The IDs of the created workspaces"
value = { for k, v in tfe_workspace.workspace : k => v.id }
}
output "admin_team_ids" {
description = "The IDs of the admin teams"
value = data.tfe_team.admin_team.id
}
output "user_team_ids" {
description = "The IDs of the user teams"
value = data.tfe_team.user_team.id
}
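To see this composition in action, here is a minimal sketch of a consuming configuration that calls the module and re-exposes one of its outputs; the module path, application identifier, and team names are illustrative assumptions:

module "billing_onboarding" {
  source = "./modules/workspace-onboarding"

  application_id    = "billing"
  admin_team_name   = "billing-admins"
  user_team_name    = "billing-developers"
  environment_names = ["dev", "staging", "prod"]
}

# Downstream configuration (attaching policies, variable sets, and so on)
# can consume the workspace IDs the module exposes.
output "billing_workspace_ids" {
  description = "Workspace IDs created for the billing application"
  value       = module.billing_onboarding.workspace_ids
}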
For a more in-depth discussion about outputs in Terraform, have a look at this discussion from HashiConf 2024: Meet the experts: Terraform module design.
Module tests
At this point, the team has a working module, but it’s still missing an important component: Terraform tests. These tests are necessary to ensure that as engineers improve the module they do not introduce bugs or break existing functionality.
Terraform tests live under the tests directory in the module code repository.
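Based on the file paths used in this post, the module repository layout would look roughly like this:

.
├── main.tf
├── variables.tf
├── outputs.tf
└── tests
    ├── environment_landscape_validation.tftest.hcl
    └── testing
        └── setup
            └── main.tf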
Test setup
The first step when writing a test suite is to ensure that the prerequisites are available. In this case, the prerequisites are the HCP Terraform teams for the workspace administrators and the workspace users.
To define the prerequisites, the platform team will create the file tests/testing/setup/main.tf with the following content:
resource "tfe_team" "admin_team" {
name = "admins-test"
}
resource "tfe_team" "user_team" {
name = "users-test"
}
Test suite
The next step is to write the test suite. The platform team will create tests that ensure that the validation code on the environment_names variable works as expected.
To define the test suite, they’ll create the file tests/environment_landscape_validation.tftest.hcl with the following content:
provider "tfe" {
organization = "