My IaC AWS Multi-Account Provisioning Blueprint & Best Practices…

And How to assume the Terraform Execution Role with SSO Users.

Hector
17 min read · Oct 28, 2023

In this article I will show you how to structure IaC (Terraform) projects so that they require no value files (which usually end up scattered all over the place). I will also demonstrate how to use one single configuration file to rule all environments, and how I separate Platform and Application infrastructure code.

Topics in this Article:

  • Terragrunt as the winner among multi-account deployment tools
  • Terraform & Terragrunt working together
  • Application Infra Terragrunt Main Config file (Locals/Backend S3/AWS Provider)
  • AWS Identity Center (SSO) Permission Sets for Terraform execution (assuming the role with no long-lived AWS keys and secrets)
  • Platform vs Application Infrastructure
  • Platform Infra Terragrunt Main Config file (Locals/Backend S3/AWS Provider)
  • Creating the Terraform Execution Role (for Application Infra deployment) with a trust policy for a role provisioned by IAM Identity Center (SSO)
  • Securely configuring temporary AWS SSO credentials
  • AWS-VAULT

Terragrunt

In my opinion, Terragrunt is by far the best tool available (that I know of) for structuring multi-account infrastructure deployments.

Way better than Terraform Workspaces for sure.

Terragrunt is a thin wrapper that provides extra tools for keeping your configurations DRY, working with multiple Terraform modules, and managing remote state.

Terragrunt has the ability to generate code into the downloaded remote Terraform modules before calling out to Terraform, using the generate block. This can be used to inject common Terraform configurations into all the modules that you use.
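For example, here is a minimal sketch of a generate block in a root terragrunt.hcl (the file name and the pinned versions are assumptions, not taken from this project) that injects a shared versions.tf into every module:

# Hypothetical generate block: writes versions.tf into each module's working
# directory before Terraform runs, so version pins live in exactly one place.
generate "versions" {
  path      = "versions.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<-EOF
    terraform {
      required_version = ">= 1.5.0"
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 5.0"
        }
      }
    }
  EOF
}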

The Terragrunt documentation suggests a particular way of structuring the code: an environment-per-folder “live” tree kept separate from the reusable modules.

But you need to be cautious not to over-use Terragrunt to orchestrate individual resources.

Terraform should be used to bundle different modules into one, for the Application or Platform services.

Terragrunt should be used only for deploying an already consolidated Terraform module across the multi-account platform.

Use Terraform for Creating the Application Module & Terragrunt to Deploy it to Different Environments.

Do NOT use Terragrunt for creating the Application module itself; otherwise things get over-complicated and the Terraform state ends up generated on a per-component basis instead of per app. Use Terragrunt only to deploy it.

Avoid the mistake I made in the past of over-separating the Application infra into components inside Terragrunt… supposedly to have better control over the individual resources, but it ended up causing more issues.

Why is this a potential issue?

In one of my projects I was tasked with migrating the multi-account deployment method from Terragrunt to Terraform workspaces, in order to align with the organisation's standard procedure for managing different AWS environments.

This could have been very straightforward, since Terragrunt is merely a Terraform wrapper.

But with the app broken up into components, and those components living inside modules, it became a nightmare having to merge state files.

So if you want to keep the Terragrunt footprint minimal then….

Terraform should be used to bundle different modules into one, for the Application or Platform service.

Example: Minimal Application Infra Resources:

infrastructure/modules/apps/application_1/main.tf

data "aws_caller_identity" "current" {}

locals {
aws_account_id = data.aws_caller_identity.current.account_id
}

module "app_logs" {
source = "git::https://github.com/terraform-aws-modules/terraform-aws-dynamodb-table.git//"
name = "${var.app_name}-logs"
billing_mode = "PAY_PER_REQUEST"
hash_key = "organization"
range_key = "job_id"

server_side_encryption_enabled = true
deletion_protection_enabled = false

attributes = [
{
name = "organization"
type = "S"
},
{
name = "job_id"
type = "S"
}
]
}

module "app_users" {
source = "git::https://github.com/terraform-aws-modules/terraform-aws-dynamodb-table.git//"
# Another DynamoDB table here if required
}

module "lambda_execution_role" {
source = "../lambda-execution-role"
aws_account_id = local.aws_account_id
app_name = var.app_name
env = var.env
}

module "s3_bucket_files" {
source = "git::https://github.com/terraform-aws-modules/terraform-aws-s3-bucket.git//?ref=v3.6.1"
bucket = var.env == "production" ? "my-company-app-${var.app_name}-files" : "my-company-app-${var.app_name}-files-${var.env}"
block_public_acls = "true"
block_public_policy = "true"
ignore_public_acls = "true"
restrict_public_buckets = "true"
}

module "another_s3_bucket_files" {
source = "git::https://github.com/terraform-aws-modules/terraform-aws-s3-bucket.git//?ref=v3.6.1"
# Another S3 Bucket here if required
}

# SSM Params are a great method to communicate Values to Serverless or any other tool
module "ssm_params" {
source = "../ssm-parameters-store"
parameters = {
1 = {
name = "/app/${var.app_name}/s3_bucket_files"
value = module.s3_bucket_files.s3_bucket_id
}
2 = {
name = "/app/${var.app_name}/lambda_role"
value = module.lambda_execution_role.role_arn
}
3 = {
name = "/app/${var.app_name}/event_bridge_name"
value = "${var.app_name}-event-bus"
}
4 = {
name = "/app/${var.app_name}/logs"
value = module.app_logs.dynamodb_table_id
}
}
}

In this module we call other modules to create the following kinds of resources:

  • DynamoDB Tables
  • S3 Bucket
  • IAM Lambda Execution Role (from a local lambda-execution-role module, sketched below)
  • SSM Parameters holding the ARNs/IDs of the previously created resources
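The lambda-execution-role module referenced above is not shown in this article; a hedged sketch of what such a module might contain (resource names, the role path and the attached policy are assumptions, not the author's actual code) could look like this:

# Hypothetical sketch of the lambda-execution-role module (main.tf)
variable "app_name" {}
variable "env" {}
variable "aws_account_id" {}

# Trust policy letting the Lambda service assume the role
data "aws_iam_policy_document" "lambda_assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "lambda_execution_role" {
  name               = "${var.app_name}-lambda-execution-role" # assumed naming convention
  path               = "/app/"                                 # keeps it under the :role/app/* scope used later
  assume_role_policy = data.aws_iam_policy_document.lambda_assume_role_policy.json
}

resource "aws_iam_role_policy_attachment" "basic_logging" {
  role       = aws_iam_role.lambda_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

output "role_arn" {
  value = aws_iam_role.lambda_execution_role.arn
}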

The only things missing here, as demonstrated in this other article, are the S3 backend & AWS provider configuration… This is where Terragrunt enters the game…

Platform vs Application

This separation of concerns means either having different Git repositories, or simply treating them as different projects within the same repo… The important thing is that the Application infra code should not contain any direct dependency on the Platform infra code… so it can easily be extracted to live with the application source code (BE/FE)…

But how can we recognise what should be placed in Platform infra and what in App infra?

Platform Infrastructure:

Platform infrastructure is broader and refers to the foundational components and services that support multiple applications across an organisation. This often includes:

  • IAM roles for CI/CD
  • Route53 Hosted Zones
  • Networking: Like VPCs, firewalls, and NAT.
  • Monitoring and logging: Like Prometheus, Grafana, or CloudWatch.

Application Infrastructure:

Application infrastructure refers to the set of components and resources required to run a specific application or set of applications. This often includes:

  • EC2 Instances, Lambda Functions
  • Transit Gateways, Load Balancers
  • App Databases: Such as MySQL, PostgreSQL, or MongoDB.
  • Message Queues: Like SQS or Kafka.

The focus is on the needs of individual apps.

A great example would be Serverless Framework or AWS SAM

For Serverless Applications there is this nice guide about using Terraform and Serverless.

Do not hardcode dependent resource IDs (like route53_zoneid, vpc_id, transitgateway_id, etc.) in config or values files.

Use Terraform Data Sources to allow Terraform to use information defined outside of Terraform, defined by another separate Terraform configuration, or modified by functions.
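For instance, here is a sketch of looking those values up with data sources instead of hardcoding them (the zone name and the VPC tag are assumptions for illustration):

# Sketch: look up platform-owned resources at plan time instead of pasting IDs
data "aws_route53_zone" "env" {
  name         = "dev.example.com." # assumed hosted zone name
  private_zone = false
}

data "aws_vpc" "main" {
  tags = {
    Name = "platform-vpc" # assumed tag set by the Platform IaC
  }
}

# Then reference data.aws_route53_zone.env.zone_id and data.aws_vpc.main.id
# wherever the hardcoded values used to live.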

When not using Terraform but another tool like Serverless Framework or SAM:

Store any dependency resource IDs (like the S3 bucket or DynamoDB ARNs) in SSM parameters.

Then fetch the SSM parameters that the app requires to work. At deployment time, pass either the SSM parameter values ready to be used, or the actual SSM parameter names so the application can fetch the values at runtime, as environment variables.

You can also use Secrets Manager to store things like Postgres RDS credentials for other apps to read.
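As a hedged sketch (the secret name is an assumption), another Terraform configuration can read such a secret like this:

# Sketch: read RDS credentials that were stored in Secrets Manager by another project
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "platform/rds/postgres" # assumed secret name
}

locals {
  db_credentials = jsondecode(data.aws_secretsmanager_secret_version.db.secret_string)
}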

Terragrunt for Application IaC

Specify the environments to deploy within the “live” directory:

├── README.md
├── live
│   ├── _env
│   │   └── app.hcl
│   ├── dev
│   │   └── app
│   │       └── terragrunt.hcl  <--- not the config file, but an env resource file
│   ├── production
│   │   └── app
│   │       └── terragrunt.hcl
│   ├── terragrunt.hcl  <------ Terragrunt config file (unique)
│   └── test
│       └── app
│           └── terragrunt.hcl
└── modules
    ├── lambda-execution-role
    │   ├── data.tf
    │   ├── main.tf
    │   ├── output.tf
    │   └── variables.tf
    ├── application_1
    │   ├── main.tf
    │   └── variables.tf
    └── ssm-parameters-store
        ├── main.tf
        └── variables.tf

The One and only terragrunt.hcl configuration file

Here is how you can configure a multi-account deployment using a single terragrunt.hcl config file.

Let's compare the Terraform workspaces config vs the Terragrunt config.

Local variable definitions stay “almost” the same, since we are structuring our Terragrunt live directory with the names of the different environments.

Because the directory name matches the AWS environment name, we can infer the environment value at runtime with this simple regex:

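Here is that locals snippet on its own (it appears again inside the full terragrunt.hcl below):

locals {
  # Extract the environment name (dev / test / production) from the directory path
  env_regex = "infrastructure/live/([a-zA-Z0-9-]+)/"
  aws_env   = try(regex(local.env_regex, get_original_terragrunt_dir())[0])
}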
Terraform workspaces vs Terragrunt locals definition

Using terragrunt generate, the Backend Remote State config now allows interpolation…

…The AWS Provider is not created nor destroyed; rather, it is Generated:

infrastructure/live/terragrunt.hcl

Here is the complete terragrunt.hcl file:

Please notice the “profile” parameter in the backend.tf and provider.tf generators. We will get rid of it in a later section…

locals {
# ENV
env_regex = "infrastructure/live/([a-zA-Z0-9-]+)/"
aws_env = try(regex(local.env_regex, get_original_terragrunt_dir())[0])
# Application
app_name = "my-app"
component = "api"

# AWS Organizations Accounts
account_mapping = {
# account IDs as strings so leading zeros aren't lost
dev = "222222222222"
test = "111111111111"
production = "000000000000"
shared-services = "555555555555"
}
# IAM Roles to Assume
account_role_name = "apps-terraform-execution-role" # <--- Role to Assume
# Region and Zones
region = "us-east-1"
}
remote_state {
backend = "s3"
generate = {
path = "backend.tf"
if_exists = "overwrite_terragrunt"
}
config = {
bucket = "terraform-state-shared-services"
key = "${local.app_name}/${get_path_from_repo_root()}/terraform.tfstate"
region = local.region
profile = "shared-services" #<----- AWS profile
encrypt = true
dynamodb_table = "shared-services-lock-table"
}
}
generate "provider" {
path = "provider.tf"
if_exists = "overwrite_terragrunt"
contents = <<-EOF
provider "aws" {
region = "${local.region}"
profile = "shared-services" #<----- AWS profile
allowed_account_ids = [
"${local.account_mapping[local.aws_env]}"
]
assume_role {
role_arn = "arn:aws:iam::${local.account_mapping[local.aws_env]}:role/${local.account_role_name}"
}
default_tags {
tags = {
Environment = "${local.aws_env}"
ManagedBy = "terraform"
DeployedBy = "terragrunt"
Creator = "${get_env("USER", "NOT_SET")}"
Application = "${local.app_name}"
Component = "${local.component}"
}
}
}
EOF
}

Since Terragrunt allows interpolation in the generators, we can parametrize them for all target AWS accounts.

infrastructure/live/_env/app.hcl

include "root" {
path = find_in_parent_folders()
}

locals {
env_vars = read_terragrunt_config(find_in_parent_folders("terragrunt.hcl"))
env = local.env_vars.locals.aws_env
}

terraform {
source = "../../..//modules/application"
}

inputs = {
env = local.env
app_name = local.env_vars.locals.app_name
}

infrastructure/live/(dev|test|production)/app/terragrunt.hcl

include "root" {
path = find_in_parent_folders()
}

include "env" {
path = "../../_env/app.hcl"
}

Terragrunt Platform Infrastructure

Setting up the Roles that Terraform Requires for Provisioning Infra

account_role_name     = "apps-terraform-execution-role" # <--- Role to Assume

Also, in this previous article I explained how to set up the terraform-multiaccount-role in the AWS Shared Services account and the terraform-role in each of the workload accounts (Dev, Test, Production).

But that setup allows one entity (the terraform-multiaccount-role) to assume the terraform-role in all accounts. That's not convenient if you want to empower the development team to manage their own infra in the Dev environment. You need an entity that can only assume a role in the Dev account, another for Test, and another for Production.

AWS Identity Center delegated administrator Account & Permissions Sets

Follow this documentation to create the Permission Sets and assign them to AWS accounts.

Create the following Permissions Sets: terraform-dev, terraform-test, terraform-production.

Create the following inline policy in each Permission Set, one per target account. This allows each Permission Set (which is set up in the Identity account and deployed into the Shared Services account) to assume a role in its corresponding workload account:

Dev: 222222222222

Test: 111111111111

Production: 000000000000

So, three different Permission Sets, each looking like this:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::000000000000:role/apps-terraform-execution-role"
},
{
"Sid": "Statement1",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::terraform-state-shared-services*",
"arn:aws:s3:::terraform-state-shared-services/*"
]
},
{
"Sid": "Statement2",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:DeleteItem"
],
"Resource": [
"arn:aws:dynamodb:us-east-1:555555555555:table/shared-services-lock-table"
]
}
]
}

Creating the apps-terraform-execution-role in each workload account & allowing the SSO role to assume it

Note: You can also play a bit with the policy, for example by allowing “terraform-production” to also assume the role in Test.
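For clarity, the trust relationship that ends up on apps-terraform-execution-role amounts to the sketch below. In this article the role is actually built by the iam-roles module shown later (from trusted_role_arns), so this is only to illustrate the relationship; the SSO role ARN suffix is illustrative:

# Sketch only: the trust policy on the workload-account role, allowing the
# SSO-provisioned role in the Shared Services account to assume it.
resource "aws_iam_role" "apps_terraform_execution_role" {
  name = "apps-terraform-execution-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRole"
      Principal = {
        AWS = "arn:aws:iam::555555555555:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_terraform-dev_ab1"
      }
    }]
  })
}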

Platform Infrastructure (IaC)

For a minimal application we will need to provide Platform resources like DNS, Networking & Identity Management Roles and Policies.

➜  live git:(main) ✗ tree
.
├── _env
│   ├── route53-zones.hcl
│   └── iam-roles.hcl
├── dev
│   ├── iam-roles
│   │   └── terragrunt.hcl
│   ├── route53-zones
│   │   └── terragrunt.hcl
│   └── platform
│       └── terragrunt.hcl
├── network
│   ├── dmz
│   │   └── terragrunt.hcl
│   ├── network-egress
│   │   └── terragrunt.hcl
│   ├── transitgateway-dmz-vpc-attachment
│   │   └── terragrunt.hcl
│   └── transitgateway-egress-vpc-attachment
│       ├── mystate.tfstate
│       └── terragrunt.hcl
├── production
│   ├── iam-roles
│   │   └── terragrunt.hcl
│   └── platform
│       └── terragrunt.hcl
├── terragrunt.hcl
└── test
    ├── iam-roles
    │   └── terragrunt.hcl
    ├── route53-zones
    │   └── terragrunt.hcl
    └── platform
        └── terragrunt.hcl

Locals

Here is an example of how you can centralize multiple-account values in a single location:

locals {
# ENV
env_regex = "infrastructure/live/([a-zA-Z0-9-]+)/"
aws_env = try(regex(local.env_regex, get_original_terragrunt_dir())[0])
# Application
app_name = "my-application-platform"
component = "platform"
platform_apps = [
"application_1",
"application_2",
"application_3"
]
# AWS Organizations Accounts
application_account_mapping = {
dev = "222222222222"
test = "111111111111"
production = "000000000000"
}
platform_account_mapping = {
shared-services = "555555555555"
network = "666666666666"
dns = "888888888888"
}
account_mapping = merge(local.application_account_mapping, local.platform_account_mapping)

# IAM Roles to Assume
account_role_name = "terraform-role"
multiaccount_role_arn = "arn:aws:iam::555555555555:role/terraform-multiaccount-role" # shared-services

terraform_execution_role_mapping = {
dev = "arn:aws:iam::555555555555:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_terraform-dev_ab1"
test = "arn:aws:iam::555555555555:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_terraform-test_ab2"
production = "arn:aws:iam::555555555555:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_terraform-production_ab3"
}

# Other Platform Config Values Below
# ===================================
# DNS
main_public_dns_zone_name = "example.com"
env_public_dns_zone_subdomain = "${local.aws_env}"
environment_hosted_zone_name = "${local.env_public_dns_zone_subdomain}.${local.main_public_dns_zone_name}"
acm_subject_alternative_names = [
"${local.environment_hosted_zone_name}",
"*.${local.environment_hosted_zone_name}",
"*.api.${local.environment_hosted_zone_name}",
"*.myapps.${local.environment_hosted_zone_name}",
"*.apps.${local.environment_hosted_zone_name}"
]
# Region and Zones
region = "us-east-1"
azs = [
"us-east-1a",
"us-east-1b",
"us-east-1c"
]
platform_accounts = {
dev = {
cidr = "10.3.0.0/16"
private_subnets = [
"10.3.1.0/24",
"10.3.2.0/24",
"10.3.3.0/24"
]
}
test = {
cidr = "10.2.0.0/16"
private_subnets = [
"10.2.1.0/24",
"10.2.2.0/24",
"10.2.3.0/24"
]
}
production = {
cidr = "10.0.0.0/16"
private_subnets = [
"10.0.1.0/24",
"10.0.2.0/24",
"10.0.3.0/24"
]
}
}
}

infrastructure/live/dev/iam-roles/terragrunt.hcl

include "root" {
path = find_in_parent_folders()
}

include "env" {
path = "../../_env/iam-roles.hcl"
}

infrastructure/live/_env/iam-roles.hcl

locals {
env_vars = read_terragrunt_config(find_in_parent_folders("terragrunt.hcl"))
env = local.env_vars.locals.aws_env
}

terraform {
source = "../../../modules//platform/iam-roles"
}

inputs = {
trusted_role_arns = [
local.env_vars.locals.terraform_execution_role_mapping[local.env]
]
}

Notice that we are narrowing the scope of “iam:*”. This permission is required because Terraform will need to create, remove and update roles and policies for the application, but in this case we can narrow it down to roles and policies under the :role/app/* path.

infrastructure/modules/platform/iam-roles

data "aws_caller_identity" "current" {}
module "app_terraform_execution_role" {
source = "../../iam-role"
role_name = "apps-terraform-execution-role"
create_role = true
role_requires_mfa = var.role_requires_mfa
path = "/"
description = "Terraform Execution role"
trusted_role_arns = var.trusted_role_arns
number_of_custom_role_policy_arns = 1
policy = jsonencode(
{
"Version" : "2012-10-17",
"Statement" : [
{
"Action" : var.app_allowed_actions,
"Effect" : "Allow",
"Resource" : "*",
"Condition" : {
"StringEquals" : {
"aws:RequestedRegion" : "us-east-1"
}
}
},
{
"Action" : [
"iam:*"
],
"Effect" : "Allow",
"Resource" : "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/app/*"
}
]
})
}

variable "app_allowed_actions" {
default = [
"cloudwatch:*",
"iam:List*",
"iam:Get*",
"iam:Describe*",
"logs:*",
"logs:ListTagsLogGroup",
"s3:*",
"secretsmanager:*",
"ses:*",
"sns:*",
"ssm:*",
"dynamodb:*",
"sts:*",
"route53:*"
]
}

infrastructure/modules/iam-role

module "iam_policy" {
source = "git::https://github.com/terraform-aws-modules/terraform-aws-iam.git//modules/iam-policy"
path = var.path
name = "${var.role_name}-policy"
description = var.description
policy = var.policy
}


module "iam_assumable_role" {
source = "git::https://github.com/terraform-aws-modules/terraform-aws-iam.git//modules/iam-assumable-role"
create_role = var.create_role
role_name = var.role_name
role_requires_mfa = var.role_requires_mfa
trusted_role_arns = var.trusted_role_arns
custom_role_policy_arns = [
module.iam_policy.arn
]
number_of_custom_role_policy_arns = var.number_of_custom_role_policy_arns
}

AWS Authorization: since we are going to use SSO, we don’t need long-lived credentials 🥳

Local AWS profile configuration (~/.aws/config)

Platform

So here we have the chicken-and-egg issue… We have a clean AWS account, and we need to grant permissions for the Platform infra provisioning. We cannot do this using IaC because there is no role set up for it yet.

So you need to manually (via the AWS Console or CLI) set up either an IAM user or an IAM Identity Center user for Terraform Platform deployments. Once that is set up, we can provision the IaC.

All roles and policies specific to application provisioning (the CI/CD side) will be created from the Platform IaC. All roles and policies needed for the application to run (like Lambda execution roles) will be created from within the application code.

[profile terraform-platform]
sso_start_url=https://my-organization.awsapps.com/start
sso_region=us-east-1
sso_account_id=555555555555
sso_role_name=PlatformTerraform
region=us-east-1
output=json

Application Resources (Where developers get write access)

[profile terraform-dev]
sso_start_url=https://my-organization.awsapps.com/start
sso_region=us-east-1
sso_account_id=555555555555
sso_role_name=terraform-dev
region=us-east-1
output=json

[profile terraform-test]
sso_start_url=https://my-organization.awsapps.com/start
sso_region=us-east-1
sso_account_id=555555555555
sso_role_name=terraform-test
region=us-east-1
output=json

[profile terraform-production]
sso_start_url=https://my-organization.awsapps.com/start
sso_region=us-east-1
sso_account_id=555555555555
sso_role_name=terraform-production
region=us-east-1
output=json

AWS Backend & Provider Config

Uh-oh… Houston, we have a problem… there are different Permission Sets per AWS account… this means we can no longer use `profile = shared-services`.

Maybe a mapping? Or just appending the environment to the profile name?…


remote_state {
backend = "s3"
.....
config = {
........
# Here is the issue
profile = "terraform-dev" # or maybe "terraform-test" or "terraform-production"
}
}

generate "provider" {
path = "provider.tf"
if_exists = "overwrite_terragrunt"
contents = <<-EOF
provider "aws" {
...
....
profile = "terraform" # or maybe "terraform-test" or "terraform-production"
}
EOF
}
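For completeness, the “mapping” idea could be as simple as deriving the profile from the environment directory name; a hedged sketch (assuming the profiles are named terraform-dev / terraform-test / terraform-production) inside each generated block:

# Hypothetical alternative: keep the profile, but derive it per environment
profile = "terraform-${local.aws_env}"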

But let's just get rid of the profile entirely and set it as an environment variable (AWS_PROFILE) instead:

locals {
# ENV
env_regex = "infrastructure/live/([a-zA-Z0-9-]+)/"
aws_env = try(regex(local.env_regex, get_original_terragrunt_dir())[0])
# Application
app_name = "my-application-platform"
component = "platform"
platform_apps = [
"application_1",
"application_2",
"application_3"
]
# AWS Organizations Accounts
application_account_mapping = {
dev = "222222222222"
test = "111111111111"
production = "000000000000"
}
platform_account_mapping = {
shared-services = "555555555555"
network = "666666666666"
dns = "888888888888"
}
account_mapping = merge(local.application_account_mapping, local.platform_account_mapping)

# IAM Roles to Assume
account_role_name = "terraform-role"
multiaccount_role_arn = "arn:aws:iam::555555555555:role/terraform-multiaccount-role" # shared-services

terraform_execution_role_mapping = {
dev = "arn:aws:iam::555555555555:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_terraform-dev_ab1"
test = "arn:aws:iam::555555555555:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_terraform-test_ab2"
production = "arn:aws:iam::555555555555:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_terraform-production_ab3"
}

# Other Platform Config Values Below
# ===================================
# DNS
main_public_dns_zone_name = "example.com"
env_public_dns_zone_subdomain = "${local.aws_env}"
environment_hosted_zone_name = "${local.env_public_dns_zone_subdomain}.${local.main_public_dns_zone_name}"
acm_subject_alternative_names = [
"${local.environment_hosted_zone_name}",
"*.${local.environment_hosted_zone_name}",
"*.api.${local.environment_hosted_zone_name}",
"*.myapps.${local.environment_hosted_zone_name}",
"*.apps.${local.environment_hosted_zone_name}"
]
# Region and Zones
region = "us-east-1"
azs = [
"us-east-1a",
"us-east-1b",
"us-east-1c"
]
platform_accounts = {
dev = {
cidr = "10.3.0.0/16"
private_subnets = [
"10.3.1.0/24",
"10.3.2.0/24",
"10.3.3.0/24"
]
}
test = {
cidr = "10.2.0.0/16"
private_subnets = [
"10.2.1.0/24",
"10.2.2.0/24",
"10.2.3.0/24"
]
}
production = {
cidr = "10.0.0.0/16"
private_subnets = [
"10.0.1.0/24",
"10.0.2.0/24",
"10.0.3.0/24"
]
}
}
}
remote_state {
backend = "s3"
generate = {
path = "backend.tf"
if_exists = "overwrite_terragrunt"
}
config = {
bucket = "terraform-state-shared-services"
key = "${local.app_name}/${get_path_from_repo_root()}/terraform.tfstate"
region = local.region
encrypt = true
dynamodb_table = "shared-services-lock-table"
}
}
generate "provider" {
path = "provider.tf"
if_exists = "overwrite_terragrunt"
contents = <<-EOF
provider "aws" {
region = "${local.region}"
allowed_account_ids = [
"${local.account_mapping[local.aws_env]}"
]
assume_role {
role_arn = "arn:aws:iam::${local.account_mapping[local.aws_env]}:role/${local.account_role_name}"
}
default_tags {
tags = {
Environment = "${local.aws_env}"
ManagedBy = "terraform"
DeployedBy = "terragrunt"
Creator = "${get_env("USER", "NOT_SET")}"
Application = "${local.app_name}"
Component = "${local.component}"
}
}
}
EOF
}

Deploying

Platform

aws sso login --profile terraform-platform
export AWS_PROFILE=terraform-platform

# Dev
cd infrastructure/live/dev/iam-roles
terragrunt init
terragrunt plan
terragrunt apply

# Test
cd infrastructure/live/test/iam-roles
terragrunt init
terragrunt plan
terragrunt apply

# Production
cd infrastructure/live/production/iam-roles
terragrunt init
terragrunt plan
terragrunt apply

Application

Here we use the new Roles we just created

# Dev
cd infrastructure/live/dev/app
aws sso login --profile terraform-dev
export AWS_PROFILE=terraform-dev
terragrunt init
terragrunt plan
terragrunt apply

# Test
cd infrastructure/live/test/app
aws sso login --profile terraform-test
export AWS_PROFILE=terraform-test
terragrunt init
terragrunt plan
terragrunt apply

# Production
cd infrastructure/live/production/app
aws sso login --profile terraform-production
export AWS_PROFILE=terraform-production
terragrunt init
terragrunt plan
terragrunt apply

AWS Vault

AWS Vault is a tool to securely store and access AWS credentials in a development environment.

AWS Vault stores IAM credentials in your operating system’s secure keystore and then generates temporary credentials from those to expose to your shell and applications. It’s designed to be complementary to the AWS CLI tools, and is aware of your profiles and configuration in ~/.aws/config.

brew install --cask aws-vault

So this means we can remove the “profile” params from the backend config & the provider

Deploying with AWS-Vault

Platform

# Dev
cd infrastructure/live/dev/iam-roles
aws-vault exec terraform-platform terragrunt init
aws-vault exec terraform-platform terragrunt plan
aws-vault exec terraform-platform terragrunt apply

#test
cd infrastructure/live/test/iam-roles
aws-vault exec terraform-platform terragrunt init
aws-vault exec terraform-platform terragrunt plan
aws-vault exec terraform-platform terragrunt apply

# production
cd infrastructure/live/production/iam-roles
aws-vault exec terraform-platform terragrunt init
aws-vault exec terraform-platform terragrunt plan
aws-vault exec terraform-platform terragrunt apply

Application

cd infrastructure/live/dev/app
aws-vault exec terraform-dev terragrunt init
aws-vault exec terraform-dev terragrunt plan
aws-vault exec terraform-dev terragrunt apply

cd infrastructure/live/test/app
aws-vault exec terraform-test terragrunt init
aws-vault exec terraform-test terragrunt plan
aws-vault exec terraform-test terragrunt apply

cd infrastructure/live/production/app
aws-vault exec terraform-production terragrunt init
aws-vault exec terraform-production terragrunt plan
aws-vault exec terraform-production terragrunt apply

Output

 $ aws-vault exec terraform-production terragrunt apply                           



Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/terraform-aws-modules/terraform-aws-dynamodb-table.git for jobs_logs...
- jobs_logs in .terraform/modules/logs
- lambda_execution_role in ../lambda-execution-role
Downloading git::https://github.com/terraform-aws-modules/terraform-aws-s3-bucket.git?ref=v3.6.1 for s3_bucket_files...
- s3_bucket_files in .terraform/modules/s3_bucket_files
- ssm_params in ../ssm-parameters-store

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Installing hashicorp/aws v5.23.1...
- Installed hashicorp/aws v5.23.1 (signed by HashiCorp)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Acquiring state lock. This may take a few moments...
module.lambda_execution_role.data.aws_iam_policy_document.lambda_assume_role_policy: Reading...
module.s3_bucket_files.data.aws_caller_identity.current: Reading...
module.s3_bucket_files.data.aws_canonical_user_id.this: Reading...
data.aws_caller_identity.current: Reading...
module.s3_bucket_files.aws_s3_bucket.this[0]: Refreshing state... [id=my-company-myapp-files-dev]
module.jobs_logs.aws_dynamodb_table.this[0]: Refreshing state... [id=my-company-myapp-logs]
module.lambda_execution_role.data.aws_iam_policy_document.lambda_assume_role_policy: Read complete after 0s [id=000000000]
data.aws_caller_identity.current: Read complete after 0s [id=000000000]
module.lambda_execution_role.data.aws_iam_policy_document.inline_policy: Reading...
module.s3_bucket_files.data.aws_caller_identity.current: Read complete after 0s [id=000000000]
module.lambda_execution_role.data.aws_iam_policy_document.inline_policy: Read complete after 0s [id=000000000]
module.lambda_execution_role.aws_iam_role.lambda_execution_role: Refreshing state... [id=my-company-myapp-lambda-execution-role]
module.s3_bucket_files.data.aws_canonical_user_id.this: Read complete after 1s [id=51528157cefbacf17d3cc90bf31e07f8f463cadfbf31739f6b264afac67fb2ce]
module.s3_bucket_files.aws_s3_bucket_public_access_block.this[0]: Refreshing state... [id=my-company-myapp-files-dev]
module.ssm_params.aws_ssm_parameter.this["2"]: Refreshing state... [id=/app/myapp/lambda_role]
module.ssm_params.aws_ssm_parameter.this["3"]: Refreshing state... [id=/app/myapp/event_bridge_name]
module.ssm_params.aws_ssm_parameter.this["4"]: Refreshing state... [id=/app/myapp/logs]
module.ssm_params.aws_ssm_parameter.this["1"]: Refreshing state... [id=/app/myapp/s3_bucket_files]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
Releasing state lock. This may take a few moments...

Thanks for reading!
