Terraform AWS Provider — Everything you need to know about Multi-Account Authentication and Configuration
There are multiple options available to configure authentication between Terraform and AWS, but the two entry points are IAM users and IAM Identity Center (SSO) users. Authentication is configured in the Terraform AWS Provider, and the right option depends, among other things, on whether you are executing Terraform from your local machine, from a CI/CD pipeline, and so on.
I think the best way to understand the available options is to go through each configuration option step by step and then wrap up with a complete working demo.
This post will cover the following:
- Terraform AWS Provider — Multi Account Setup
- AWS Provider Configuration
- Authentication and Configuration for IAM Identity Center (SSO) Users
- Authentication and Configuration for IAM users (No SSO)
- Implementing AWS Identity Account
- Implementing a Shared Services Account
- Configuring Terraform State for Multiple AWS Accounts
- Partial Configuration Storing the Terraform State on the Workloads AWS Accounts (Dev, Test, Prod)
- Partial Configuration Centralising Storing All Terraform States on the Shared-Services Account
- Implementing Terraform Workspaces
- alias: Multiple Provider Configurations
- Put it all together : Demo time
- Patterns for Terraform & Terragrunt for Multi-Account Deployments
Terraform AWS Provider — Multi Account Setup
Scenario:
You already have multiple AWS Accounts. AWS Organizations is enabled and AWS IAM Identity Center is the configured IdP.
Your platform consists of the following AWS account environments:
Application Workloads Accounts environments for dev, test and prod.
Identity Account environment for setting up Users
Shared Services Account for the Shared Services
The requirement is to provision an S3 bucket in each of the Application Workloads accounts.
To make this document easier to follow, let’s set the Account IDs in the table below for all the examples.
Then the most basic Terraform code we can use to demonstrate is as follows:
+------------------+----------------+
| AWS Account Name | AWS Account ID |
+------------------+----------------+
| Identity | 000000000000 |
| Shared-Services | 111111111111 |
| Dev | 222222222222 |
| Test | 333333333333 |
| Prod | 444444444444 |
+------------------+----------------+
# This is the Terraform AWS Provider
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "my_bucket" {
  bucket = "hector-terraform-multiaccount-bucket-${var.env}"

  tags = {
    Name        = "hector-terraform-assume-role-${var.env}"
    Environment = var.env
  }
}
variable "env" {
  type = string
}
If we want to make this Terraform code capable of deploying in a multi-account setup, we need to add details to the AWS Provider configuration. So what is missing here is exactly that: the configuration of the AWS Provider.
AWS Provider Configuration
The AWS Provider supports assuming an IAM role, configured either with the assume_role block in the provider configuration or in a named profile.
The AWS Provider supports assuming an IAM role using web identity federation and OpenID Connect (OIDC). This can be configured either using environment variables or in a named profile.
Configuration for the AWS Provider can be derived from several sources, which are applied in the following order:
1. Parameters in the provider configuration
2. Environment variables
3. Shared credentials files
4. Shared configuration files
5. Container credentials
6. Instance profile credentials and region
This order matches the precedence used by the AWS CLI and the AWS SDKs.
When using a named profile, the AWS Provider also supports sourcing credentials from an external process.
The full documentation on how to configure the AWS Provider can be found here.
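As an illustration of the external-process option mentioned above, a named profile can delegate credential retrieval to a program via credential_process in ~/.aws/config. The program path and profile name below are hypothetical; the program must print a JSON credentials document to stdout.

```ini
# ~/.aws/config
[profile external]
# Hypothetical helper program; must output JSON containing
# Version, AccessKeyId, SecretAccessKey, SessionToken, Expiration
credential_process = /usr/local/bin/fetch-aws-creds --profile external
region = us-east-1
```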
Authentication and Configuration — IAM Identity Center (SSO) Users
Requirements:
- AWS Organization with multiple AWS Accounts
- IAM Identity Center Setup with Access portal url (https://my-organization.awsapps.com/start#)
- IAM Identity Center Group & User
- An IAM Identity Center permission set named “Terraform“ (for example, based on the PowerUserAccess managed policy)
- The “Terraform“ permission set assigned, together with the group your user belongs to, to each of the target AWS accounts (Dev, Test, Prod)
- No entries are needed in ~/.aws/credentials or ~/.aws/config
How to get the Short-term credentials?
Option 1: AWS IAM Identity Center credentials (Recommended)
To extend the duration of your credentials, it is recommended that you configure the AWS CLI to retrieve them automatically using the aws configure sso command. Learn more.
$ aws configure sso
SSO session name (Recommended): my-sso
SSO start URL [None]: https://my-organization.awsapps.com/start#
SSO region [None]: us-east-1
SSO registration scopes [sso:account:access]:
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this request, open the following URL:
https://device.sso.us-east-1.amazonaws.com/
Then enter the code:
TJDF-ZXWL
There are 9 AWS accounts available to you.
Using the account ID 222222222222
There are 3 roles available to you.
Using the role name "Terraform"
CLI default client Region [None]: us-east-1
CLI default output format [None]: json
CLI profile name [dev]:
To use this profile, specify the profile name using --profile, as shown:
aws s3 ls --profile dev
So this has updated our AWS credentials and created the profile “dev”. If we execute the command recommended in the output above, we get a response; we can also try the same command without the profile to see the difference and verify that we are indeed using different AWS profiles.
➜ aws s3 ls --profile dev
2043-04-04 18:02:47 dev-bucket-for-demo
➜ aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
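For reference, the aws configure sso wizard above persists roughly the following in ~/.aws/config (the session and profile names mirror the prompts; your values will differ):

```ini
[sso-session my-sso]
sso_start_url = https://my-organization.awsapps.com/start#
sso_region = us-east-1
sso_registration_scopes = sso:account:access

[profile dev]
sso_session = my-sso
sso_account_id = 222222222222
sso_role_name = Terraform
region = us-east-1
output = json
```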
Now let’s update the AWS Provider configuration and include the profile configuration on the AWS Provider:
provider "aws" {
  # Only required if using custom config or credentials file names.
  shared_config_files      = ["/Users/hreyes/.aws/custom_config"]
  shared_credentials_files = ["/Users/hreyes/.aws/custom_credentials"]

  # Interpolation is possible here
  profile = var.env
  region  = "us-east-1"
}
Notice that I have added a variable for the profile (profile = var.env); this way I can deploy to different environments (as long as the profile is available) just by updating this variable.
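When the short-term credentials expire, there is no need to re-run the full wizard; re-authenticating the SSO session is enough:

```shell
# Refresh the SSO session backing the "dev" profile
aws sso login --profile dev

# Verify which identity the profile now resolves to
aws sts get-caller-identity --profile dev
```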
Option 2: Set AWS environment variables (Short-term credentials)
Navigate to your SSO start URL (https://my-organization.awsapps.com/start#). After login, temporary credentials will be provided to you. Copy and paste them into your command line:
export AWS_ACCESS_KEY_ID="AOOOOOOOOOOOOOO7"
export AWS_SECRET_ACCESS_KEY="91M....Bpppgk"
export AWS_SESSION_TOKEN="IQoJb3JpZ2luX2VjENL//////////wE.....YiJ21q6R4"
Since the credentials are provided in environment variables, the provider configuration requires no changes at all:
provider "aws" {
  region = "us-east-1"
}
Option 3: Manually add a profile to your AWS credentials file (Short-term credentials)
Navigate to your SSO start URL (https://my-organization.awsapps.com/start#). After login, temporary credentials will be provided to you. Copy and paste them into your AWS credentials file:
[dev]
aws_access_key_id=AS.......457N
aws_secret_access_key=K2VTBb7.................ZQ/eX4El4
aws_session_token=IQoJb3J...Ibu6MTM=
provider "aws" {
  region  = "us-east-1"
  profile = var.env
}
Authentication and Configuration — IAM users (No SSO)
You should avoid at all costs setting up IAM users scattered across your Organization’s AWS accounts like this:
I recommend reading and following the best practices proposed by AWS about using long-term credentials (a.k.a. AWS access keys and secrets) for IAM users.
As a best practice for your AWS Organization, creating IAM users directly in the AWS accounts is NOT recommended.
Instead, manage all user creation and permissions from within a single, well-monitored account, and allow these users to assume roles into the other AWS accounts.
The AWS Organizations management account, or any member account with delegated administration for the AWS identity services, can take the role of the “Identity” account.
AWS Identity Account
This AWS Account will hold all the IAM and IAM Access Identity (SSO) users and groups. The policies associated to these Users and Groups will only allow assuming another IAM Role in the Target Workloads AWS Accounts.
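A minimal sketch of such a policy, assuming the role name used later in this post (terraform-multiaccount-role in the Shared-Services account); the Terraform resource name is illustrative:

```hcl
# Identity account: users may ONLY assume the Terraform role
# in the Shared-Services account (111111111111).
resource "aws_iam_policy" "assume_shared_services" {
  name = "assume-terraform-multiaccount-role"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "sts:AssumeRole"
      Resource = "arn:aws:iam::111111111111:role/terraform-multiaccount-role"
    }]
  })
}
```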
+------------------+----------------+
| AWS Account Name | AWS Account ID |
+------------------+----------------+
| Identity | 000000000000 |
| Shared-Services | 111111111111 |
| Dev | 222222222222 |
| Test | 333333333333 |
| Prod | 444444444444 |
+------------------+----------------+
Shared-Services Account
Allowing an IAM user to assume roles into the workload accounts is great for centralizing the user setup. But IaC, and life in general, is more complicated than that.
Terraform workspaces came to the arena with a single backend that can manage distinct instances of a configuration without configuring a new backend or changing authentication credentials. So... what would be the best location for the backend, a.k.a. the Terraform state? Surely NOT the Identity account.
Instead of an IAM user directly assuming a role in a workload AWS account, we can introduce one extra hop in the middle through the Shared-Services account.
The IAM user in the Identity account is only allowed to assume the role “terraform-multiaccount-role” in the Shared-Services account.
The assumed role “terraform-multiaccount-role” is in turn allowed to assume the role “terraform-role” in the workload accounts.
This is actually not difficult at all; we just need to combine the profile and assume_role parameters in the Terraform AWS Provider configuration.
provider "aws" {
  region  = "us-east-1"
  profile = "shared-services"

  assume_role {
    role_arn = "arn:aws:iam::${lookup(local.account_mapping, local.env)}:role/terraform-role"
  }
}

locals {
  env = var.env

  account_mapping = {
    dev  = "222222222222"
    test = "333333333333"
    prod = "444444444444"
  }
}
Creating the IAM resources to make this work is also easy.
Identity Account: Create an IAM user named “terraform-multiaccount” with no AWS Console access. Create an access key and secret for the user, and download the generated credentials to configure access to AWS from either a CI/CD tool or your local machine.
Shared-Services Account: Set up an IAM role named “terraform-multiaccount-role” with a trust policy allowing the IAM user terraform-multiaccount in the Identity account to assume it. Its permissions must also allow Terraform to manage the backend configuration and to assume the Terraform role in the workload accounts.
Workloads Accounts: Set up an IAM role named “terraform-role” with a trust policy allowing the role “terraform-multiaccount-role” in the Shared-Services account to assume it. Also add a policy allowing, for demo purposes, full access to Route53, S3, CloudFront, and DynamoDB.
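The trust side of the workload-account role can be sketched like this (resource names are illustrative; scope the attached permissions down for real use):

```hcl
# Workload account: only "terraform-multiaccount-role" in the
# Shared-Services account (111111111111) may assume this role.
resource "aws_iam_role" "terraform_role" {
  name = "terraform-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRole"
      Principal = {
        AWS = "arn:aws:iam::111111111111:role/terraform-multiaccount-role"
      }
    }]
  })
}
```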
AWS Credentials file
Generate AWS Access Keys for the new user and configure them on the ~/.aws/credentials file.
[terraform]
aws_access_key_id=AKI.........HOW
aws_secret_access_key=rZR3I.......a9lh48
Make sure to add the credentials under the [terraform] credentials profile; we will reference it in the AWS config file.
AWS Config file
Configure the first Assume role jump to shared-services on ~/.aws/config
[profile shared-services]
role_arn=arn:aws:iam::111111111111:role/terraform-multiaccount-role
source_profile=terraform
region=us-east-1
output=json
main.tf (local terraform state)
provider "aws" {
  region  = "us-east-1"
  profile = "shared-services"

  assume_role {
    role_arn = "arn:aws:iam::${lookup(local.account_mapping, local.env)}:role/terraform-role"
  }
}

locals {
  env = var.env

  account_mapping = {
    dev  = "222222222222"
    test = "333333333333"
    prod = "444444444444"
  }
}
resource "aws_s3_bucket" "my_bucket" {
  bucket = "hector-terraform-multiaccount-bucket-${var.env}"

  tags = {
    Name        = "hector-terraform-assume-role-${var.env}"
    Environment = var.env
  }
}

variable "env" {
  type = string
}
Configuring Terraform State for Multiple AWS Accounts
- Partial Configuration Storing the Terraform State on the Workloads AWS Accounts (Dev, Test, Prod)
- Partial Configuration Centralising Storing All Terraform States on the Shared-Services Account
- Terraform Workspaces
Option 1: Partial Configuration Storing the Terraform State on the Workloads AWS Accounts (Dev, Test, Prod)
We need one S3 bucket and a DynamoDB table in every target workload AWS account. Since no interpolation is allowed in the Terraform backend configuration, we need to omit the bucket parameter and pass it instead as a CLI argument to terraform init.
terraform {
  backend "s3" {
    key            = "my-bucket/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-remote-state-lock"
    encrypt        = true
  }
}
# Dev Env
terraform init \
-backend-config="bucket=my-bucket-for-terraform-state-dev"
# Test Env
terraform init \
-backend-config="bucket=my-bucket-for-terraform-state-test"
# Prod Env
terraform init \
-backend-config="bucket=my-bucket-for-terraform-state-prod"
Option 2: Partial Configuration Centralising Storing All Terraform States on the Shared-Services Account
We need one S3 bucket and a DynamoDB table in the Shared-Services account. But we still need to omit the bucket key and pass it as a CLI parameter, adding the environment to the key so that one environment’s state does not overwrite another’s.
We can also pass a different DynamoDB table to hold the lock ID.
Add a “profile” pointing to the Shared-Services account in the s3 backend:
terraform {
  backend "s3" {
    bucket = "terraform-state-shared-services"
    # Set the profile to store the state on "shared-services".
    # Otherwise the profile will point to the same account as the AWS Provider.
    profile = "shared-services"
    region  = "us-east-1"
    encrypt = true
  }
}
# The same applies for the other environments: replace dev when executing
# Dev
terraform init \
-backend-config="dynamodb_table=terraform-remote-state-lock-dev" \
-backend-config="key=dev/my-bucket/terraform.tfstate"
terraform plan -var 'env=dev'
terraform apply -var 'env=dev'
# Test
terraform init \
-backend-config="dynamodb_table=terraform-remote-state-lock-test" \
-backend-config="key=test/my-bucket/terraform.tfstate"
terraform plan -var 'env=test'
terraform apply -var 'env=test'
# Prod
terraform init \
-backend-config="dynamodb_table=terraform-remote-state-lock-prod" \
-backend-config="key=prod/my-bucket/terraform.tfstate"
terraform plan -var 'env=prod'
terraform apply -var 'env=prod'
Option 3: Terraform Workspaces
Terraform workspaces work pretty much like option 2 above. The only difference is that we can replace the env variable within the Terraform configuration with the name of the current workspace, using the ${terraform.workspace}
interpolation sequence. This can be used anywhere interpolation is allowed.
The benefit is that we don’t need to provide the S3 backend key at terraform init; Terraform workspaces take care of this for us.
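Under the hood, the s3 backend stores each non-default workspace’s state under a prefix: <workspace_key_prefix>/<workspace>/<key>, where the prefix defaults to env:. It can be overridden if needed:

```hcl
terraform {
  backend "s3" {
    bucket = "terraform-state-shared-services"
    key    = "terraform-demo-my-bucket/terraform.tfstate"
    region = "us-east-1"

    # Optional: non-default workspaces are stored under
    # "<workspace_key_prefix>/<workspace>/<key>" (default prefix: "env:"),
    # e.g. env:/dev/terraform-demo-my-bucket/terraform.tfstate
    workspace_key_prefix = "workspaces"
  }
}
```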
The persistent data stored in the backend belongs to a workspace. The backend initially has only one workspace containing one Terraform state associated with that configuration. Some backends support multiple named workspaces, allowing multiple states to be associated with a single configuration. The configuration still has only one backend, but you can deploy multiple distinct instances of that configuration without configuring a new backend or changing authentication credentials.
Important: Workspaces are not appropriate for system decomposition or deployments requiring separate credentials and access controls. Refer to Use Cases in the Terraform CLI documentation for details and recommended alternatives.
main.tf
# Terraform state will be stored on Shared-Services
terraform {
  backend "s3" {
    bucket  = "terraform-state-shared-services" # Shared-Services
    region  = "us-east-1"
    profile = "shared-services"

    # Set these only when using Terraform Workspaces
    key            = "terraform-demo-my-bucket/terraform.tfstate"
    dynamodb_table = "terraform-remote-state-lock"

    # When NOT implementing Workspaces, do the following instead:
    # terraform init \
    #   -backend-config="dynamodb_table=shared-services-us-east-1-lock-table" \
    #   -backend-config="key=terraform-demo-my-bucket/terraform.tfstate"
    # terraform plan -var 'env=dev'
    # terraform apply -var 'env=dev'
  }
}
provider "aws" {
  region  = "us-east-1"
  profile = "shared-services"

  assume_role {
    role_arn = "arn:aws:iam::${lookup(local.account_mapping, local.env)}:role/terraform-role"
  }
}

locals {
  # Un-comment depending on whether you implement Workspaces or not.
  env = terraform.workspace
  # env = var.env # We got rid of this

  region = "us-east-1"

  # Account IDs are quoted strings: an unquoted number would drop
  # the leading zeros of the Identity account ID.
  account_mapping = {
    identity          = "000000000000"
    dns               = "555555555555"
    dev               = "222222222222"
    default           = "222222222222"
    test              = "333333333333"
    prod              = "444444444444"
    "shared-services" = "111111111111"
  }
}
resource "aws_s3_bucket" "my_bucket" {
  bucket = "hector-terraform-multiaccount-bucket-${local.env}"

  tags = {
    Name        = "hector-terraform-assume-role-${local.env}"
    Environment = local.env
  }
}
terraform init
# Do the following for each of the workload environments
# Dev
terraform workspace select dev || terraform workspace new dev
terraform plan
terraform apply
# Test
terraform workspace select test || terraform workspace new test
terraform plan
terraform apply
# Prod
terraform workspace select prod || terraform workspace new prod
terraform plan
terraform apply
alias: Multiple Provider Configurations
You can optionally define multiple configurations for the same provider, and select which one to use on a per-resource or per-module basis. The primary reason for this is to support multiple regions for a cloud platform; other examples include targeting multiple Docker hosts, multiple Consul hosts, etc.
Classic Scenario
Objective: Make the following DNS domains publicly available:
- dev.example.com
- test.example.com
- prod.example.com
You have the Domain example.com registered somewhere (GoDaddy or anywhere) and created a Route53 Hosted Zone on an AWS Account specifically created to manage DNS. The NameServers of the Hosted Zone have been correctly configured on your Domain Registration Provider.
Each of the hosted zones (dev, test, prod) must be provisioned in its own workload AWS account, so we can create valid public Route53 records pointing to resources within the same account, like a CloudFront distribution.
You have the following AWS Organization:
+------------------+----------------+
| AWS Account Name | AWS Account ID |
+------------------+----------------+
| DNS | 555555555555 |
| Shared-Services | 111111111111 |
| Dev | 222222222222 |
| Test | 333333333333 |
| Prod | 444444444444 |
+------------------+----------------+
The DNS manager (the account that contains the main domain example.com) is a different AWS account (the DNS account) than the workload accounts where the per-environment hosted zones are created (dev.example.com, ...). To make each of those a valid public domain, you need to create an NS record in the main “example.com” hosted zone, i.e. in the DNS account.
Managing two different AWS provider configurations with an alias makes this task possible within a single execution.
main.tf
# AWS Provider to Workloads Accounts (Dev, Test, Prod)
provider "aws" {
  region  = "us-east-1"
  profile = "shared-services"

  assume_role {
    role_arn = "arn:aws:iam::${lookup(local.account_mapping, local.env)}:role/terraform-role"
  }
}

# AWS Provider to DNS Management Account
# Here we set up the alias
provider "aws" {
  alias   = "dns"
  region  = "us-east-1"
  profile = "shared-services"

  assume_role {
    role_arn = "arn:aws:iam::${lookup(local.account_mapping, "dns")}:role/terraform-role"
  }
}
# AWS Resource to provision on Workloads Accounts (Dev, Test, Prod)
resource "aws_route53_zone" "workload_zone" {
  name = "${local.env}.example.com"

  tags = {
    Environment = local.env
  }
}

# Fetch Main Hosted Zone on DNS Management Account
data "aws_route53_zone" "main" {
  provider     = aws.dns
  name         = "example.com."
  private_zone = false
}

# Add a Route53 NS record in the DNS Management Account with the
# name server values from the workload account's hosted zone
resource "aws_route53_record" "workload_zone_ns" {
  depends_on = [aws_route53_zone.workload_zone]
  provider   = aws.dns
  zone_id    = data.aws_route53_zone.main.zone_id
  name       = "${local.env}.example.com"
  type       = "NS"
  ttl        = 300
  records    = aws_route53_zone.workload_zone.name_servers
}
The result will look like this:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_route53_record.workload_zone_ns will be created
+ resource "aws_route53_record" "workload_zone_ns" {
+ allow_overwrite = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "dev.example.com"
+ records = (known after apply)
+ ttl = 300
+ type = "NS"
+ zone_id = "ZZZZZZZZZZZZZZZZZZZZ"
}
# aws_route53_zone.workload_zone will be created
+ resource "aws_route53_zone" "workload_zone" {
+ arn = (known after apply)
+ comment = "Managed by Terraform"
+ force_destroy = false
+ id = (known after apply)
+ name = "dev.example.com"
+ name_servers = (known after apply)
+ primary_name_server = (known after apply)
+ tags = {
+ "Environment" = "dev"
}
+ tags_all = {
+ "Environment" = "dev"
}
+ zone_id = (known after apply)
}
Plan: 2 to add, 0 to change, 0 to destroy.
Do you want to perform these actions in workspace "dev"?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_route53_zone.workload_zone: Creating...
aws_route53_zone.workload_zone: Still creating... [10s elapsed]
aws_route53_zone.workload_zone: Still creating... [20s elapsed]
aws_route53_zone.workload_zone: Still creating... [30s elapsed]
aws_route53_zone.workload_zone: Still creating... [40s elapsed]
aws_route53_zone.workload_zone: Still creating... [50s elapsed]
aws_route53_zone.workload_zone: Creation complete after 53s [id=.....]
aws_route53_record.workload_zone_ns: Creating...
aws_route53_record.workload_zone_ns: Still creating... [10s elapsed]
aws_route53_record.workload_zone_ns: Still creating... [20s elapsed]
aws_route53_record.workload_zone_ns: Still creating... [30s elapsed]
aws_route53_record.workload_zone_ns: Still creating... [40s elapsed]
aws_route53_record.workload_zone_ns: Creation complete after 45s [id=ZZZZ-example.com_NS]
Releasing state lock. This may take a few moments...
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Put it all together : Demo time — Setup a Web Page with CloudFront + S3 + Route53 + ACM
First, let’s separate the deployment into Platform Provisioning and Application Provisioning.
Platform Provisioning: Create the Route53 hosted zones in each of the workload accounts, and create a Route53 NS record in the DNS Management account for each hosted zone created.
Then validate each hosted zone with an ACM certificate; you create one per hosted zone in the workload accounts.
Application Provisioning: Create an S3 bucket and a CloudFront distribution with the S3 bucket as origin, and create a custom domain with a Route53 record.
Then upload a single S3 object named index.html.
To provision these resources I will use Terraform Registry modules instead of plain Terraform resources. You can check out this other article where I explain the differences between the two.
Platform Provisioning (platform/main.tf):
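A hedged sketch of the core of platform/main.tf, sitting alongside the backend, provider, and locals blocks shown earlier (resource names are illustrative, and the ACM module inputs shown here are an assumption about the community module’s interface):

```hcl
# Per-environment hosted zone in the workload account
resource "aws_route53_zone" "workload_zone" {
  name = "${local.env}.example.com"
}

# Delegate it from the main zone in the DNS account
resource "aws_route53_record" "workload_zone_ns" {
  provider = aws.dns
  zone_id  = data.aws_route53_zone.main.zone_id
  name     = "${local.env}.example.com"
  type     = "NS"
  ttl      = 300
  records  = aws_route53_zone.workload_zone.name_servers
}

# DNS-validated wildcard certificate for the environment domain
module "acm" {
  source = "terraform-aws-modules/acm/aws"

  domain_name               = "${local.env}.example.com"
  zone_id                   = aws_route53_zone.workload_zone.zone_id
  subject_alternative_names = ["*.${local.env}.example.com"]
  validation_method         = "DNS"
}
```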
Execute
terraform init
# Do the following for each of the workload environments
# Dev
terraform workspace select dev || terraform workspace new dev
terraform plan
terraform apply
# Test
terraform workspace select test || terraform workspace new test
terraform plan
terraform apply
# Prod
terraform workspace select prod || terraform workspace new prod
terraform plan
terraform apply
Application Provisioning (application/main.tf):
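Again as a hedged sketch of application/main.tf (the real file uses registry modules; resource names here are illustrative and the distribution wiring is heavily abridged):

```hcl
# Bucket that will serve as the CloudFront origin
resource "aws_s3_bucket" "web" {
  bucket = "hector-web-${local.env}"
}

# The single page served by the site
resource "aws_s3_object" "index" {
  bucket       = aws_s3_bucket.web.id
  key          = "index.html"
  source       = "index.html"
  content_type = "text/html"
}

# A CloudFront distribution with aws_s3_bucket.web as origin, an ACM
# certificate for the custom domain, and Route53 alias records pointing
# at the distribution would complete the file (omitted for brevity).
```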
The index.html file to upload is as simple as this:
<!DOCTYPE html>
<html>
  <head>
    <title>Home Page</title>
  </head>
  <body>
    <h1>Welcome to My Web Cloudfront Bucket</h1>
  </body>
</html>
Execute
# Do the following for each of the workload environments
# Dev
terraform workspace select dev || terraform workspace new dev
terraform plan
terraform apply
# Test
terraform workspace select test || terraform workspace new test
terraform plan
terraform apply
# Prod
terraform workspace select prod || terraform workspace new prod
terraform plan
terraform apply
...
...
...
module.s3_cf_web.aws_cloudfront_distribution.this: Creation complete after 5m45s [id=E3FGECN18059TZ]
module.s3_cf_web.aws_route53_record.aliases["portal.test-lab.example.com"]: Creating...
module.s3_cf_web.aws_route53_record.aliases["hector.test-lab.example.com"]: Creating...
module.s3_cf_web.aws_route53_record.aliases["hector.test-lab.example.com"]: Still creating... [10s elapsed]
module.s3_cf_web.aws_route53_record.aliases["portal.test-lab.example.com"]: Still creating... [10s elapsed]
module.s3_cf_web.aws_route53_record.aliases["portal.test-lab.example.com"]: Still creating... [20s elapsed]
module.s3_cf_web.aws_route53_record.aliases["hector.test-lab.example.com"]: Still creating... [20s elapsed]
module.s3_cf_web.aws_route53_record.aliases["portal.test-lab.example.com"]: Creation complete after 29s [id=Z06811122OEWBU7P3Y3Y7_portal.test-lab.example.com_CNAME]
module.s3_cf_web.aws_route53_record.aliases["hector.test-lab.example.com"]: Still creating... [30s elapsed]
module.s3_cf_web.aws_route53_record.aliases["hector.test-lab.example.com"]: Still creating... [40s elapsed]
module.s3_cf_web.aws_route53_record.aliases["hector.test-lab.example.com"]: Creation complete after 48s [id=Z06811122OEWBU7P3Y3Y7_hector.test-lab.example.com_CNAME]
Releasing state lock. This may take a few moments...
Apply complete! Resources: 9 added, 0 changed, 0 destroyed.
Test:
To test, open the browser and navigate to the URLs you have set up for the frontend. In this example we used portal.test-lab.example.com and hector.test-lab.example.com.
Patterns for Terraform & Terragrunt for Multi-Account Deployments
So here I have described how to configure the Terraform AWS Provider to allow multi-account deployments.
If you want to learn how to design your Infrastructure as Code (Terraform & Terragrunt) for multi-account deployments, take a look at a previous article I wrote here:
Adopt Open ID Connect (OIDC) in Terraform for secure multi-account CI/CD to AWS
There is also this nice article if you want to configure secure multi-account CI/CD to AWS.