I’ve been trying to add a CircleCI CI/CD pipeline to my AWS project, which is written in Terraform. The problem is that terraform init, plan, and apply all work on my local machine, but CircleCI throws this error:
Initializing the backend...
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
My CircleCI config is this:
version: 2.1
orbs:
  python: circleci/python@1.5.0
  # terraform: circleci/terraform@3.1.0
jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    steps:
      - checkout
      - run:
          name: Check python version
          command: python --version
      - run:
          name: Get current dir
          command: pwd
      - run:
          name: List contents of that dir
          command: ls -a
      - run:
          name: Install terraform
          command: bash scripts/install_tf.sh
      - run:
          name: Init infrastructure
          command: bash scripts/init.sh dev
# Invoke jobs via workflows
workflows:
  .......
And my init.sh is:
cd ./Terraform
echo "arg: $1"
if [[ "$1" == "dev" || "$1" == "stage" || "$1" == "prod" ]]; then
  echo "environment: $1"
  terraform init -migrate-state -backend-config=backend.$1.conf -var-file=terraform.$1.tfvars
else
  echo "Wrong Argument"
  echo "Pass 'dev', 'stage' or 'prod' only."
fi
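For a quick sanity check of the argument handling (the messages below come straight from the script's own echo statements):

bash scripts/init.sh dev   # valid: runs terraform init with the dev backend/var files
bash scripts/init.sh qa    # invalid: prints "Wrong Argument" and never runs terraform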
My main.tf is:
provider "aws" {
profile = "${var.profile}"
region = "${var.region}"
}
terraform {
backend "s3" {
}
}
And my backend.dev.conf is:
bucket = "bucket-name"
key = "mystate.tfstate"
region = "ap-south-1"
profile = "dev"
Also, my terraform.dev.tfvars is:
region = "ap-south-1"
profile = "dev"
These work perfectly on my local machine (macOS, M1), but CircleCI throws the backend error above. Yes, I’ve added environment variables with my aws_secret_access_key and aws_access_key_id, and it still doesn’t work.
I’ve been through so many tutorials and nothing seems to solve this, and I don’t want to write AWS credentials into my code. Any idea how I can solve this?
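For context, a minimal sketch of the environment-variable route the S3 backend supports (note Terraform only reads the uppercase names, and this assumes the variables are exposed to the job that runs terraform init):

export AWS_ACCESS_KEY_ID="..."        # value stored in CircleCI project settings, not in code
export AWS_SECRET_ACCESS_KEY="..."
terraform init -backend-config=backend.dev.conf
# caveat: the profile = "dev" line in backend.dev.conf may still force a
# ~/.aws/credentials lookup that doesn't exist inside the CI container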
Update:
I have updated my pipeline to this:
version: 2.1
orbs:
  python: circleci/python@1.5.0
  aws-cli: circleci/aws-cli@3.1.3
jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    # Checkout the code as the first step. This is a dedicated
    steps:
      - checkout
      - run:
          name: Check python version
          command: python --version
      - run:
          name: Get current dir
          command: pwd
      - run:
          name: List contents of that dir
          command: ls -a
  aws-cli-cred-setup:
    executor: aws-cli/default
    steps:
      - aws-cli/setup:
          aws-access-key-id: aws_access_key_id
          aws-secret-access-key: aws_secret_access_key
          aws-region: region
      - run:
          name: Get AWS account info
          command: aws sts get-caller-identity
  terraform-setup:
    executor: aws-cli/default
    working_directory: ~/project
    steps:
      - checkout
      - run:
          name: Install terraform
          command: bash scripts/install_tf.sh
      - run:
          name: Init infrastructure
          command: bash scripts/init.sh dev
    context: terraform
# Invoke jobs via workflows
workflows:
  dev_workflow:
    jobs:
      - build:
          filters:
            branches:
              only: main
      - aws-cli-cred-setup
      # context: aws
      - terraform-setup:
          requires:
            - aws-cli-cred-setup
But it still throws the same error.
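One thing worth ruling out: each CircleCI job runs in a fresh container, so credentials configured in the aws-cli-cred-setup job are not visible to the terraform-setup job. A hedged check is to run the identity call and the init in the same job, assuming the env vars are attached to that job:

aws sts get-caller-identity   # should print the account/ARN if credentials are visible here
bash scripts/init.sh dev      # run in the same shell, immediately after the check above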
The latest AWS CLI version that worked for me was 2.7.31.
AWS CLI 2.7.32 through 2.8.x breaks on the aws sso login command:
21:05:19 ❯ aws sso login --profile bb-alpha
Inline SSO configuration and sso_session cannot be configured on the same profile.
AWS CLI 2.9.0 through 2.9.20: aws sso login works, but terraform plan does not:
21:11:48 ❯ tf plan
╷
│ Error: configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ Error: failed to refresh cached credentials, the SSO session has expired or is invalid: open /Users/danget/.aws/sso/cache/c777c5264364f423c5a5bf7de842d3982cdbf67d.json: no such file or directory
│
Two notes:
- The c777c526...bf67d.json file that Terraform is looking for exists when logging in with 2.7.31, but the 2.9.x versions create differently named .json files. Maybe the hashing changed?
- The ~/.aws/sso/*.json files have different keys (output generated via cat file.json | jq 'keys'):
v2.9.x
[
"clientId",
"clientSecret",
"expiresAt",
"scopes"
]
v2.7.31
[
"accessToken",
"expiresAt",
"region",
"startUrl"
]
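A small sketch for comparing those cache files across CLI versions (assuming jq is installed):

for f in ~/.aws/sso/cache/*.json; do
  echo "$f"
  jq 'keys' "$f"
done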
Your first question:
"If my provider setting is explicitly declaring the credentials to use inside the Terraform source code, why does my OS-level AWS configuration matter at all?"
The error message "Failed to load backend: Error configuring the backend "s3"" is referring to your S3 Backend configuration.
Look in the file ./.terraform/terraform.tfstate and you will see the S3 Backend configuration.
The Terraform S3 Backend is different from the Terraform AWS Provider. The error message "No valid credential sources found for AWS Provider." is misleading: it implies that the AWS Provider configuration is used, which is false. S3 Backend credentials are configured separately and stored in the terraform.tfstate file.
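For example, a quick way to see exactly which credentials the backend recorded (assuming jq is installed and terraform init has run at least once):

jq '.backend.config' .terraform/terraform.tfstate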
Your OS-level AWS configuration matters because, if no S3 Backend credentials are specified (as documented at https://www.terraform.io/docs/backends/types/s3.html), Terraform defaults to using the following, in order:
- The environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
- The AWS shared credentials file, whose default location is "~/.aws/credentials"
You didn’t specify any credentials in your S3 Backend config, so Terraform is defaulting to the AWS shared credentials file. Your S3 Backend configuration contains no credentials:
terraform {
  backend "s3" {
    bucket  = "example_tf_states"
    key     = "global/vpc/us_east_1/example_state.tfstate"
    encrypt = true
    region  = "us-east-1"
  }
}
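With a credential-free backend block like that one, a minimal sketch of the first fallback in the chain above:

export AWS_ACCESS_KEY_ID="your_access_key"
export AWS_SECRET_ACCESS_KEY="your_secret_key"
terraform init   # the backend now authenticates via the environment variables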
Your second question:
"How can I make Terraform only use the creds defined in my Terraform configuration and ignore what’s in my OS user profile?"
First, Backends cannot contain interpolation (see https://www.terraform.io/docs/backends/config.html), so you cannot use any variables in the Backend config. For example, this config is invalid:
terraform {
  backend "s3" {
    bucket     = "example_tf_states"
    key        = "global/vpc/us_east_1/example_state.tfstate"
    encrypt    = true
    region     = "us-east-1"
    access_key = ${var.access_key}
    secret_key = ${var.secret_key}
  }
}
If you want to specify AWS credentials when running terraform init, you pass them as backend configuration options:
terraform init --backend-config="access_key=your_access_key" --backend-config="secret_key=your_secret_key"
This produces an S3 Backend config that looks like the following, stored in the ./.terraform/terraform.tfstate file:
{
  "version": 3,
  "serial": 1,
  "lineage": "bd737d2d-1181-ed64-db57-467d14d2155a",
  "backend": {
    "type": "s3",
    "config": {
      "access_key": "your_access_key",
      "secret_key": "your_secret_key"
    },
    "hash": 9345827190033900985
  },
Again, the S3 Backend credentials are configured separately from your AWS Provider credentials.
Re-run terraform init and specify the credentials on the command line as --backend-config options to fix your error.
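In CI, for instance, the values can come from environment variables so that nothing is committed, though, as the JSON above shows, they still end up in plaintext in ./.terraform/terraform.tfstate:

terraform init \
  --backend-config="access_key=${AWS_ACCESS_KEY_ID}" \
  --backend-config="secret_key=${AWS_SECRET_ACCESS_KEY}"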
Fixing the invalid-credentials error Error configuring the backend "s3": No valid credential sources found for AWS Provider when initializing an S3 backend that uses AWS profiles.
Setup
The provider is configured to use the environment_profile profile under the current system account:
provider "aws" {
  region  = "us-east-1"
  profile = "environment_profile"
}
The Terraform state is stored in an S3 bucket in a separate account and uses that account's profile:
terraform {
  backend "s3" {
    bucket  = "terraform"
    key     = "states/terraform.tfstate"
    region  = "us-east-1"
    profile = "remote_state_profile"
  }
}
The profiles' credentials live in the ~/.aws/credentials file:
[remote_state_profile]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE1
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY1

[environment_profile]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE2
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY2
The problem
When trying to initialize the backend, an error is thrown saying the credentials are missing or invalid:
terraform init
Error loading previously configured backend: Error configuring the backend "s3": No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on providing credentials for the AWS Provider
Please update the configuration in your Terraform files to fix this error.
If you'd like to update the configuration interactively without storing the values in your configuration, run "terraform init".
However, the config is correct and the keys definitely belong to the right accounts. Meanwhile, the environment profile's credentials work fine. This is a bug that wanders from version to version, where Terraform cannot use the right profile to initialize the backend. Many issues are open on GitHub, but there is no definitive fix; it works for some people and not for others.
The solution
To get AWS profiles working in Terraform, it is enough to set an environment variable for the backend initialization:
export AWS_PROFILE=remote_state_profile
terraform init
Then use Terraform as intended with the environment profile, which is picked up correctly from the provider config:
terraform plan
The latest Terraform version at the time of writing was used:
terraform version
Terraform v0.11.10
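Equivalently, the profile can be scoped to the single init command instead of being exported for the whole shell:

AWS_PROFILE=remote_state_profile terraform init
terraform plan   # the provider then picks up environment_profile from its own config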
I am trying to set up S3 as a version control system for my state using Terraform. However, I keep getting errors. I have tried defining the profile as well as the access key and secret key in my code, but I still face the issue. Could anyone please guide me on how to get past this error? I am using Terraform v0.14.4 and am pretty new to Terraform, so kindly point out anything wrong with the syntax for the version I'm using. Below is the code, and underneath is the error that I am getting. Thanks in advance.
provider "aws" {
  region                = "us-east-1"
  profile               = "Default"
  aws_access_key_id     = "#############"
  aws_secret_access_key = "#####################"
}

terraform {
  backend "s3" {
    bucket = "mybucket_test998"
    key    = "terraorm.tfstate"
    region = "us-east-1"
  }
}

$ sudo terraform init
Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
Please see https://www.terraform.io/docs/backends/types/s3.html for more information about providing credentials.
Error: NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors
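Worth noting: aws_access_key_id and aws_secret_access_key are not valid arguments for the AWS provider block (its argument names are access_key and secret_key). A sketch that avoids hard-coding keys entirely is to export them before init, and to skip sudo, which by default strips environment variables:

export AWS_ACCESS_KEY_ID="#############"
export AWS_SECRET_ACCESS_KEY="#####################"
terraform init   # without sudo, so the exported variables stay visible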
# Imports needed by this QGIS Processing script.
from qgis.core import (
    QgsExpression,
    QgsFeatureRequest,
    QgsProcessingAlgorithm,
    QgsProcessingMultiStepFeedback,
    QgsProcessingParameterFile,
    QgsProject,
)
from qgis.PyQt.QtCore import QCoreApplication
from processing.tools import dataobjects
import processing


class DailyToMaster(QgsProcessingAlgorithm):

    def initAlgorithm(self, config=None):
        self.addParameter(QgsProcessingParameterFile('dailyshpfolder', 'Please Input Your Daily SHP folder', behavior=QgsProcessingParameterFile.Folder, fileFilter='All files (*.*)', defaultValue=None))
    def __init__(self):
        super().__init__()

    def name(self):
        return 'Daily to Master'

    def tr(self, text):
        return QCoreApplication.translate("Daily to Master", text)

    def displayName(self):
        return self.tr("Move daily SHP into Master layers -HA- Tech Services")

    def helpUrl(self):
        return "https://qgis.org"

    def createInstance(self):
        return DailyToMaster()

    def processAlgorithm(self, parameters, context, model_feedback):
        # Use a multi-step feedback, so that individual child algorithm progress
        # reports are adjusted for the overall progress through the model
        feedback = QgsProcessingMultiStepFeedback(29, model_feedback)
        results = {}
        outputs = {}

        # Add SHP from daily SHP folder
        alg_params = {
            'INPUT': parameters['dailyshpfolder']
        }
        outputs['AddShpFromDailyShpFolder'] = processing.run('script:Import_folder', alg_params, context=context, feedback=feedback, is_child_algorithm=True)

        feedback.setCurrentStep(1)
        if feedback.isCanceled():
            return {}
        # Append Natural
        HAnatural = QgsProject.instance().mapLayersByName('HA_Natural.shp')
        natural = QgsProject.instance().mapLayersByName('Natural')
        if HAnatural and natural:
            alg_params = {
                'ACTION_ON_DUPLICATE': 0,  # Just APPEND all features, no matter of duplicates
                'SOURCE_FIELD': '',
                'SOURCE_LAYER': QgsExpression("layer_property('HA_Natural.shp','name')").evaluate(),
                'TARGET_FIELD': '',
                'TARGET_LAYER': QgsExpression("layer_property('Natural','name')").evaluate()
            }
            context = dataobjects.createContext()
            context.setInvalidGeometryCheck(QgsFeatureRequest.GeometryNoCheck)
            outputs['AppendNatural'] = processing.run('etl_load:appendfeaturestolayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
        if not HAnatural or not natural:
            print('No new Natural feature')

        feedback.setCurrentStep(2)
        if feedback.isCanceled():
            return {}

        # Append Field Drain
        HAfielddrain = QgsProject.instance().mapLayersByName('HA_Field_Drain.shp')
        fielddrain = QgsProject.instance().mapLayersByName('Field Drain')
        if HAfielddrain and fielddrain:
            alg_params = {
                'ACTION_ON_DUPLICATE': 0,  # Just APPEND all features, no matter of duplicates
                'SOURCE_FIELD': '',
                'SOURCE_LAYER': QgsExpression("layer_property('HA_Field_Drain.shp','name')").evaluate(),
                'TARGET_FIELD': '',
                'TARGET_LAYER': QgsExpression("layer_property('Field Drain','name')").evaluate()
            }
            context = dataobjects.createContext()
            context.setInvalidGeometryCheck(QgsFeatureRequest.GeometryNoCheck)
            outputs['AppendFieldDrain'] = processing.run('etl_load:appendfeaturestolayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
        elif not HAfielddrain or not fielddrain:
            print('No new Field Drain feature')

        feedback.setCurrentStep(3)
        if feedback.isCanceled():
            return {}
        # Append Points
        HApoints = QgsProject.instance().mapLayersByName('HA_Points.shp')
        points = QgsProject.instance().mapLayersByName('Points')
        if HApoints and points:
            alg_params = {
                'ACTION_ON_DUPLICATE': 0,  # Just APPEND all features, no matter of duplicates
                'SOURCE_FIELD': '',
                'SOURCE_LAYER': QgsExpression("layer_property('HA_Points.shp','name')").evaluate(),
                'TARGET_FIELD': '',
                'TARGET_LAYER': QgsExpression("layer_property('Points','name')").evaluate()
            }
            context = dataobjects.createContext()
            context.setInvalidGeometryCheck(QgsFeatureRequest.GeometryNoCheck)
            outputs['AppendPoints'] = processing.run('etl_load:appendfeaturestolayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
        elif not HApoints or not points:
            print('No new Points feature')

        feedback.setCurrentStep(4)
        if feedback.isCanceled():
            return {}
        # Append Modern
        HAmodern = QgsProject.instance().mapLayersByName('HA_Modern.shp')
        modern = QgsProject.instance().mapLayersByName('Modern')
        if HAmodern and modern:
            alg_params = {
                'ACTION_ON_DUPLICATE': 0,  # Just APPEND all features, no matter of duplicates
                'SOURCE_FIELD': '',
                'SOURCE_LAYER': QgsExpression("layer_property('HA_Modern.shp','name')").evaluate(),
                'TARGET_FIELD': '',
                'TARGET_LAYER': QgsExpression("layer_property('Modern','name')").evaluate()
            }
            context = dataobjects.createContext()
            context.setInvalidGeometryCheck(QgsFeatureRequest.GeometryNoCheck)
            outputs['AppendModern'] = processing.run('etl_load:appendfeaturestolayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
        elif not HAmodern or not modern:
            print('No new Modern feature')

        feedback.setCurrentStep(5)
        if feedback.isCanceled():
            return {}

        # Append GR Point
        HAGRpoint = QgsProject.instance().mapLayersByName('HA_GR_Point.shp')
        GRpoint = QgsProject.instance().mapLayersByName('GR Point')
        if HAGRpoint and GRpoint:
            alg_params = {
                'ACTION_ON_DUPLICATE': 0,  # Just APPEND all features, no matter of duplicates
                'SOURCE_FIELD': '',
                'SOURCE_LAYER': QgsExpression("layer_property('HA_GR_Point.shp','name')").evaluate(),
                'TARGET_FIELD': '',
                'TARGET_LAYER': QgsExpression("layer_property('GR Point','name')").evaluate()
            }
            context = dataobjects.createContext()
            context.setInvalidGeometryCheck(QgsFeatureRequest.GeometryNoCheck)
            outputs['AppendGrPoint'] = processing.run('etl_load:appendfeaturestolayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
        elif not HAGRpoint or not GRpoint:
            print('No new GR Point feature')

        feedback.setCurrentStep(6)
        if feedback.isCanceled():
            return {}

        # Append Test Pit
        HAtestpit = QgsProject.instance().mapLayersByName('HA_Test_Pit.shp')
        testpit = QgsProject.instance().mapLayersByName('Test Pit')
        if HAtestpit and testpit:
            alg_params = {
                'ACTION_ON_DUPLICATE': 0,  # Just APPEND all features, no matter of duplicates
                'SOURCE_FIELD': '',
                'SOURCE_LAYER': QgsExpression("layer_property('HA_Test_Pit.shp','name')").evaluate(),
                'TARGET_FIELD': '',
                'TARGET_LAYER': QgsExpression("layer_property('Test Pit','name')").evaluate()
            }
            context = dataobjects.createContext()
            context.setInvalidGeometryCheck(QgsFeatureRequest.GeometryNoCheck)
            outputs['AppendTestPit'] = processing.run('etl_load:appendfeaturestolayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
        elif not HAtestpit or not testpit:
            print('No new Test Pit feature')

        feedback.setCurrentStep(7)
        if feedback.isCanceled():
            return {}

        # Append Excavated
        HAexcavated = QgsProject.instance().mapLayersByName('HA_Excavated.shp')
        excavated = QgsProject.instance().mapLayersByName('Excavated')
        if HAexcavated and excavated:
            alg_params = {
                'ACTION_ON_DUPLICATE': 0,  # Just APPEND all features, no matter of duplicates
                'SOURCE_FIELD': '',
                'SOURCE_LAYER': QgsExpression("layer_property('HA_Excavated.shp','name')").evaluate(),
                'TARGET_FIELD': '',
                'TARGET_LAYER': QgsExpression("layer_property('Excavated','name')").evaluate()
            }
            context = dataobjects.createContext()
            context.setInvalidGeometryCheck(QgsFeatureRequest.GeometryNoCheck)
            outputs['AppendExcavated'] = processing.run('etl_load:appendfeaturestolayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
        elif not HAexcavated or not excavated:
            print('No new Excavated feature')

        feedback.setCurrentStep(8)
        if feedback.isCanceled():
            return {}

        # Append Structure
        HAstructure = QgsProject.instance().mapLayersByName('HA_Structure.shp')
        structure = QgsProject.instance().mapLayersByName('Structure')
        if HAstructure and structure:
            alg_params = {
                'ACTION_ON_DUPLICATE': 0,  # Just APPEND all features, no matter of duplicates
                'SOURCE_FIELD': '',
                'SOURCE_LAYER': QgsExpression("layer_property('HA_Structure.shp','name')").evaluate(),
                'TARGET_FIELD': '',
                'TARGET_LAYER': QgsExpression("layer_property('Structure','name')").evaluate()
            }
            context = dataobjects.createContext()
            context.setInvalidGeometryCheck(QgsFeatureRequest.GeometryNoCheck)
            outputs['AppendStructure'] = processing.run('etl_load:appendfeaturestolayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
        elif not HAstructure or not structure:
            print('No new Structure feature')

        feedback.setCurrentStep(9)
        if feedback.isCanceled():
            return {}

        # Append Sample
        HAsample = QgsProject.instance().mapLayersByName('HA_Sample.shp')
        sample = QgsProject.instance().mapLayersByName('Sample')
        if HAsample and sample:
            alg_params = {
                'ACTION_ON_DUPLICATE': 0,  # Just APPEND all features, no matter of duplicates
                'SOURCE_FIELD': '',
                'SOURCE_LAYER': QgsExpression("layer_property('HA_Sample.shp','name')").evaluate(),
                'TARGET_FIELD': '',
                'TARGET_LAYER': QgsExpression("layer_property('Sample','name')").evaluate()
            }
            context = dataobjects.createContext()
            context.setInvalidGeometryCheck(QgsFeatureRequest.GeometryNoCheck)
            outputs['AppendSample'] = processing.run('etl_load:appendfeaturestolayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
        elif not HAsample or not sample:
            print('No new Sample feature')

        feedback.setCurrentStep(10)
        if feedback.isCanceled():
            return {}

        # Append Inhumation
        HAinhum = QgsProject.instance().mapLayersByName('HA_Inhumation.shp')
        inhum = QgsProject.instance().mapLayersByName('Inhumation')
        if HAinhum and inhum:
            alg_params = {
                'ACTION_ON_DUPLICATE': 0,  # Just APPEND all features, no matter of duplicates
                'SOURCE_FIELD': '',
                'SOURCE_LAYER': QgsExpression("layer_property('HA_Inhumation.shp','name')").evaluate(),
                'TARGET_FIELD': '',
                'TARGET_LAYER': QgsExpression("layer_property('Inhumation','name')").evaluate()
            }
            context = dataobjects.createContext()
            context.setInvalidGeometryCheck(QgsFeatureRequest.GeometryNoCheck)
            outputs['AppendInhumation'] = processing.run('etl_load:appendfeaturestolayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
        elif not HAinhum or not inhum:
            print('No new Inhumation feature')

        return results
Scenario: I am trying to read remote Terraform state stored in an AWS S3 bucket. I have configured AWS credentials using the aws configure CLI, and with those credentials I am able to read the S3 bucket and the tfstate object.
terraform init works fine, but when I run terraform plan I get the following error:
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
provider "aws" { region = "ap-southeast-1" profile = "dev-token" shared_credentials_file = "~/.aws/credentials" } data "terraform_remote_state" "ekscluster_state" { backend = "s3" config = { bucket = "bucket" region = "ap-southeast-1" key = "remote.tfstate" } } data "aws_eks_cluster" "db_subnet_ids" { name = data.terraform_remote_state.ekscluster_state.outputs.db_subnet_ids } resource "aws_db_subnet_group" "aurora_subnet_group" { name = "name" subnet_ids = data.aws_eks_cluster.db_subnet_ids tags = { Name = format("%s", "name") } }
The remote state contains all the expected content. Note: storing state on S3 works fine with the same credentials.
Looking forward to hearing some hints.
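One hedged thing to try, given profile = "dev-token" is set on the provider but not inside the terraform_remote_state config block: export the profile so the backend-style data source sees it too, the same workaround as the AWS_PROFILE note above:

export AWS_PROFILE=dev-token
terraform plan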