Secrets
Continuous Integration systems often require access to sensitive resources, which necessitates the use of secrets such as API keys, passwords, or certificates. Pipelines is designed to minimize the use of long-lived secrets and instead leverages ephemeral credentials whenever possible. This approach reduces the risk of credential leaks and streamlines secret rotation.
Authenticating with your SCM Platform
- GitHub
- GitLab
To interact with the GitHub API, Pipelines uses either a GitHub App or Machine User Personal Access Tokens (PATs), depending on your installation method. For information on creating and managing these tokens, see the Machine Users documentation.
To interact with the GitLab API, Pipelines requires a Machine User with a Personal Access Token that has API scope. For information on creating and managing these tokens, see the Machine Users documentation.
Authenticating with Cloud Providers
Pipelines requires authentication with your cloud provider but avoids long-lived credentials by using OIDC (OpenID Connect). OIDC establishes an authenticated relationship between a specific Git reference in a repository and a corresponding cloud provider identity, enabling Pipelines to assume that identity based on the repository and ref for which the pipeline is running.
- AWS
- Azure
Authenticating with AWS
Pipelines uses OIDC to authenticate with AWS, allowing it to assume an AWS IAM role without long-lived credentials.
The role assumption process works as follows: the pipeline job requests a short-lived OIDC token from your SCM platform, presents it to AWS STS via the AssumeRoleWithWebIdentity API, and STS validates the token's issuer, audience, and subject claims against the IAM role's trust policy before issuing temporary credentials.
- GitHub
- GitLab
For more details, see GitHub's OIDC documentation for AWS.
For more details, see GitLab's OIDC documentation for AWS.
As a result, Pipelines avoids storing long-lived AWS credentials and instead relies on ephemeral credentials generated by AWS STS. These credentials grant least-privilege access to the resources needed for the specific operation being performed (e.g., read access during a pull/merge request open event or write access during a merge).
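To make the trust relationship concrete, the following is a minimal sketch of an IAM role that can only be assumed through GitHub's OIDC provider by workflows in a specific repository. It assumes the GitHub OIDC identity provider already exists in the account, and the repository and role names are illustrative rather than what Pipelines actually provisions:

# Assumes the GitHub OIDC identity provider has already been created in this account.
data "aws_iam_openid_connect_provider" "github" {
  url = "https://token.actions.githubusercontent.com"
}

data "aws_iam_policy_document" "assume_role" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [data.aws_iam_openid_connect_provider.github.arn]
    }

    # Only tokens minted for AWS STS may be exchanged for credentials.
    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = ["sts.amazonaws.com"]
    }

    # Only workflows in this (hypothetical) repository may assume the role.
    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:acme/infrastructure-live:*"]
    }
  }
}

resource "aws_iam_role" "pipelines_plan" {
  name               = "pipelines-plan" # illustrative name
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

GitLab follows the same pattern, using your GitLab instance URL as the OIDC provider and its own subject claim format.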
Authenticating with Azure
Pipelines uses OIDC to authenticate with Azure, allowing it to obtain access tokens from Entra ID without long-lived credentials.
The authentication process works as follows: the pipeline job requests a short-lived OIDC token from your SCM platform and exchanges it with Entra ID for an access token, relying on a federated identity credential that ties the token's issuer and subject claims to the Entra ID identity used by Pipelines.
- GitHub
- GitLab
For more details, see GitHub's OIDC documentation for Azure.
For more details, see GitLab's documentation on Azure integration.
As a result, Pipelines avoids storing long-lived Azure credentials and instead relies on ephemeral access tokens generated by Entra ID. These tokens grant least-privilege access to the resources needed for the specific operation being performed.
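To make the federation concrete on the Azure side, here is a minimal sketch, assuming the Entra ID application that Pipelines authenticates as is managed with the azuread Terraform provider and that GitHub Actions is the token issuer; the application reference, display name, and repository are illustrative:

resource "azuread_application_federated_identity_credential" "pipelines" {
  # Named "application_object_id" on azuread provider v2.x; "application_id" on v3.x.
  application_id = azuread_application.pipelines.id # assumed existing application
  display_name   = "pipelines-main"
  audiences      = ["api://AzureADTokenExchange"]
  issuer         = "https://token.actions.githubusercontent.com"
  # Only OIDC tokens for this (hypothetical) repository and branch can be exchanged for access tokens.
  subject        = "repo:acme/infrastructure-live:ref:refs/heads/main"
}

For GitLab, the issuer would instead be your GitLab instance URL, and the subject claim follows GitLab's project_path:<group>/<project>:ref_type:branch:ref:<branch> format.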
Other providers
If you are managing configurations for additional services using Infrastructure as Code (IaC) tools like Terragrunt, you may need to configure a provider for those services in Pipelines. In such cases, you must supply the necessary credentials for authenticating with the provider. Whenever possible, follow the same principles: use ephemeral credentials, grant only the minimum permissions required, and avoid storing long-lived credentials on disk.
Configuring providers in Terragrunt
For example, consider configuring the Cloudflare Terraform provider. This provider supports multiple authentication methods to enable secure API calls to Cloudflare services. To authenticate with Cloudflare and manage the associated credentials securely, you need to configure your terragrunt.hcl file appropriately.
First, examine the default cloud provider authentication setup in the root.hcl file provided by Gruntwork's Boilerplate templates:
- AWS
- Azure
generate "provider" {
path = "provider.tf"
if_exists = "overwrite_terragrunt"
contents = <<EOF
provider "aws" {
region = "${local.aws_region}"
# Only these AWS Account IDs may be operated on by this template
allowed_account_ids = ["${local.account_id}"]
# tags
default_tags {
tags = ${jsonencode(local.tags)}
}
}
EOF
}
This provider block (the value of contents) is generated as the file provider.tf whenever a Terragrunt command runs, and it supplies the OpenTofu/Terraform AWS provider with the configuration it needs to discover the credentials Pipelines makes available at runtime.
generate "provider" {
path = "provider.tf"
if_exists = "overwrite_terragrunt"
contents = <<EOF
provider "azurerm" {
features {}
}
EOF
}
This provider block (the value of contents) is generated as the file provider.tf whenever a Terragrunt command runs, and it supplies the OpenTofu/Terraform Azure provider with the configuration it needs to discover the credentials Pipelines makes available at runtime.
With this approach, no secrets are written to disk. Instead, the OpenTofu/Terraform provider discovers short-lived credentials at runtime from the environment Pipelines prepares.
According to the Cloudflare documentation, the Cloudflare provider supports several authentication methods. One option is to set the api_token field in the provider block, which can be generated with Terragrunt as follows:
generate "cloudflare_provider" {
path = "cloudflare-provider.tf"
if_exists = "overwrite_terragrunt"
contents = <<EOF
provider "cloudflare" {
api_token = var.cloudflare_api_token
}
EOF
}
To populate the var.cloudflare_api_token for the provider, you must include a variable "cloudflare_api_token" {} block within a .tf file that is committed to the repository. Additionally, the TF_VAR_cloudflare_api_token environment variable needs to be set to the corresponding Cloudflare API token value.
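For reference, that variable declaration can be as simple as the following sketch; marking it sensitive keeps the value out of plan output:

variable "cloudflare_api_token" {
  description = "API token used to authenticate the Cloudflare provider"
  type        = string
  sensitive   = true
}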
A straightforward way to do this is to use the inputs attribute in terragrunt.hcl files to assign the cloudflare_api_token variable a value fetched at runtime from a secret store:
inputs = {
  cloudflare_api_token = "${run_cmd("--terragrunt-quiet", "./fetch-cloudflare-api-token.sh")}"
}
In this context, fetch-cloudflare-api-token.sh is a script that retrieves the Cloudflare API token from a secret store and prints it to stdout; a complete example script follows the snippets below.
You are free to use any method to fetch the secret, provided it outputs the value to stdout.
Here are straightforward examples of how you might fetch the secret based on your cloud provider:
- AWS
- Azure
Using AWS Secrets Manager:
aws secretsmanager get-secret-value --secret-id cloudflare-api-token --query SecretString --output text
Using AWS SSM Parameter Store:
aws ssm get-parameter --name cloudflare-api-token --query Parameter.Value --output text --with-decryption
Given that Pipelines is already authenticated with AWS for interacting with state, this setup provides a convenient method for retrieving secrets.
Using Azure Key Vault:
az keyvault secret show --vault-name <your-vault-name> --name cloudflare-api-token --query value --output tsv
Given that Pipelines is already authenticated with Azure for interacting with state, this setup provides a convenient method for retrieving secrets.
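Putting one of the snippets above into the script referenced earlier, fetch-cloudflare-api-token.sh might look like this minimal sketch, assuming the AWS Secrets Manager example and a secret named cloudflare-api-token:

#!/usr/bin/env bash
# Fetch the Cloudflare API token from AWS Secrets Manager and print it to stdout.
# Fail fast so a missing secret or missing IAM permissions surface as a pipeline error.
set -euo pipefail

aws secretsmanager get-secret-value \
  --secret-id cloudflare-api-token \
  --query SecretString \
  --output text

Remember to commit the script alongside the terragrunt.hcl file that calls it and make it executable (chmod +x).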
Alternatively, note that the api_token field is optional. Similar to cloud provider authentication, you can use the CLOUDFLARE_API_TOKEN environment variable to supply the API token to the provider at runtime.
To achieve this, you can update the provider block as follows:
generate "cloudflare_provider" {
path = "cloudflare-provider.tf"
if_exists = "overwrite_terragrunt"
contents = <<EOF
provider "cloudflare" {}
EOF
}
To ensure the CLOUDFLARE_API_TOKEN environment variable is set in the environment before Terragrunt invokes OpenTofu/Terraform, configure the terraform block in your terragrunt.hcl file as follows:
terraform {
  extra_arguments "env_vars" {
    commands = ["apply", "plan"]

    env_vars = {
      CLOUDFLARE_API_TOKEN = "${run_cmd("--terragrunt-quiet", "./fetch-cloudflare-api-token.sh")}"
    }
  }
}
Managing secrets
When configuring providers and Pipelines, it's important to store secrets in a secure and accessible location. Several options are available for managing secrets, each with its advantages and trade-offs.
- GitHub
- GitLab
GitHub Secrets
GitHub Secrets is the simplest option for storing secrets and is natively supported in GitHub Actions. Refer to GitHub's documentation on using secrets in GitHub Actions for guidance on setting and using secrets.
Advantages:
- Easy to configure and use within GitHub Actions workflows
- No additional infrastructure or external services required
- Built-in masking for sensitive values
Trade-offs:
- Secrets are available to all workflows without granular authorization
- Accessing these secrets securely may require editing workflow files
GitLab CI/CD Variables
GitLab CI/CD Variables provide a native way to store secrets for your pipelines. They can be set at the project or group level and support masking and protection features. Refer to GitLab's documentation on CI/CD variables for guidance.
Advantages:
- Native integration with GitLab CI/CD
- Support for group-level and project-level variables
- Built-in masking for sensitive values
Trade-offs:
- Limited secret rotation capabilities
- Manual management required for multi-project deployments
Cloud Provider Secret Stores
Cloud providers offer dedicated secret management services with advanced features and security controls.
- AWS
- Azure
AWS Secrets Manager
AWS Secrets Manager offers a sophisticated solution for managing secrets. It allows for provisioning secrets in AWS and configuring fine-grained access controls through AWS IAM. It also supports advanced features like secret rotation and access auditing.
Advantages:
- Granular access permissions, ensuring secrets are only accessible when required
- Support for automated secret rotation and detailed access auditing
Trade-offs:
- Increased complexity in setup and management
- Potentially higher costs associated with its use
Refer to the AWS Secrets Manager documentation for further details.
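As an example of the granular access control described above, the following minimal sketch provisions the Cloudflare token as a secret and grants read access only to the role Pipelines assumes; the resource and role names are illustrative:

resource "aws_secretsmanager_secret" "cloudflare_api_token" {
  name = "cloudflare-api-token"
}

data "aws_iam_policy_document" "read_cloudflare_api_token" {
  statement {
    actions   = ["secretsmanager:GetSecretValue"]
    resources = [aws_secretsmanager_secret.cloudflare_api_token.arn]
  }
}

resource "aws_iam_role_policy" "read_cloudflare_api_token" {
  name   = "read-cloudflare-api-token"
  role   = "pipelines-plan" # illustrative: the role Pipelines assumes via OIDC
  policy = data.aws_iam_policy_document.read_cloudflare_api_token.json
}

The secret's value is typically set out of band (for example, with aws secretsmanager put-secret-value) so that it never ends up in OpenTofu/Terraform state.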
AWS SSM Parameter Store
AWS SSM Parameter Store is a simpler and more cost-effective alternative to Secrets Manager. It supports secret storage and access control through AWS IAM, providing a basic solution for managing sensitive data.
Advantages:
- Lower cost compared to Secrets Manager
- Granular access control similar to Secrets Manager
Trade-offs:
- More limited functionality than Secrets Manager; for example, there is no built-in secret rotation
Refer to the AWS SSM Parameter Store documentation for additional information.
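A comparable sketch for Parameter Store stores the token as a SecureString, which is what the --with-decryption flag in the earlier example decrypts at read time; the variable supplying the value is hypothetical:

resource "aws_ssm_parameter" "cloudflare_api_token" {
  name  = "cloudflare-api-token"
  type  = "SecureString"
  # Illustrative only: passing the value through IaC stores it in state, so many teams
  # create the parameter out of band and only read it from pipelines.
  value = var.cloudflare_api_token_value
}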
Azure Key Vault
Azure Key Vault provides a comprehensive solution for managing secrets, keys, and certificates. It offers fine-grained access controls through Azure RBAC and supports advanced features like secret versioning and access auditing.
Advantages:
- Granular access permissions with Azure RBAC and access policies
- Support for secret versioning, soft-delete, and purge protection
- Integration with Azure Monitor for detailed audit logs
- Hardware Security Module (HSM) backed options for enhanced security
Trade-offs:
- Additional setup complexity for RBAC and access policies
- Costs associated with transactions and HSM-backed vaults
Refer to the Azure Key Vault documentation for further details.
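As a sketch of the RBAC model described above, the following stores the token in an existing vault and grants the identity Pipelines authenticates as read-only access to its secrets; the vault reference and variables are illustrative:

resource "azurerm_key_vault_secret" "cloudflare_api_token" {
  name         = "cloudflare-api-token"
  key_vault_id = azurerm_key_vault.pipelines.id # assumed existing vault
  # Illustrative only: supplying the value through IaC stores it in state, so it is often
  # set out of band with "az keyvault secret set" instead.
  value        = var.cloudflare_api_token_value
}

resource "azurerm_role_assignment" "pipelines_reads_secrets" {
  scope                = azurerm_key_vault.pipelines.id
  role_definition_name = "Key Vault Secrets User" # built-in read-only role for secret values
  principal_id         = var.pipelines_principal_id # object ID of the identity Pipelines uses
}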
Deciding on a secret store
When selecting a secret store, consider the following key factors:
- Cost: Evaluate the financial implications of using a particular secret store.
- Complexity: Assess how straightforward it is to set up and manage secrets.
- Granularity: Determine the level of access control the store offers.
Choose a secret store that aligns with your organization's security, operational, and budgetary requirements. Collaborate with relevant stakeholders to ensure the selected option meets your organizational needs effectively.