An error occurred (403) when calling the HeadObject operation: Forbidden


I'm trying to set up an Amazon Linux AMI (ami-f0091d91) and have a script that runs a copy command to copy from an S3 bucket.

aws --debug s3 cp s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm .

This script works perfectly on my local machine but fails with the following error on the Amazon Image:

2016-03-22 01:07:47,110 - MainThread - botocore.auth - DEBUG - StringToSign: HEAD

Tue, 22 Mar 2016 01:07:47 GMT
x-amz-security-token:AQoDYXdzEPr//////////wEa4ANtcDKVDItVq8Z5OKms8wpQ3MS4dxLtxVq6Om1aWDhLmZhL2zdqiasNBV4nQtVqwyPsRVyxl1Urq1BBCnZzDdl4blSklm6dvu+3efjwjhudk7AKaCEHWlTd/VR3cksSNMFTcI9aIUUwzGW8lD9y8MVpKzDkpxzNB7ZJbr9HQNu8uF/st0f45+ABLm8X4FsBPCl2I3wKqvwV/s2VioP/tJf7RGQK3FC079oxw3mOid5sEi28o0Qp4h/Vy9xEHQ28YQNHXOBafHi0vt7vZpOtOfCJBzXvKbk4zRXbLMamnWVe3V0dArncbNEgL1aAi1ooSQ8+Xps8ufFnqDp7HsquAj50p459XnPedv90uFFd6YnwiVkng9nNTAF+2Jo73+eKTt955Us25Chxvk72nAQsAZlt6NpfR+fF/Qs7jjMGSF6ucjkKbm0x5aCqCw6YknsoE1Rtn8Qz9tFxTmUzyCTNd7uRaxbswm7oHOdsM/Q69otjzqSIztlwgUh2M53LzgChQYx5RjYlrjcyAolRguJjpSq3LwZ5NEacm/W17bDOdaZL3y1977rSJrCxb7lmnHCOER5W0tsF9+XUGW1LMX69EWgFYdn5QNqFk6mcJsZWrR9dkehaQwjLPcv/29QcM+b5u/0goazCtwU=
/aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm
2016-03-22 01:07:47,111 - MainThread - botocore.endpoint - DEBUG - Sending http request: <PreparedRequest [HEAD]>
2016-03-22 01:07:47,111 - MainThread - botocore.vendored.requests.packages.urllib3.connectionpool - INFO - Starting new HTTPS connection (1): aws-codedeploy-us-west-2.s3.amazonaws.com
2016-03-22 01:07:47,151 - MainThread - botocore.vendored.requests.packages.urllib3.connectionpool - DEBUG - "HEAD /latest/codedeploy-agent.noarch.rpm HTTP/1.1" 403 0
2016-03-22 01:07:47,151 - MainThread - botocore.parsers - DEBUG - Response headers: {'x-amz-id-2': '0mRvGge9ugu+KKyDmROm4jcTa1hAnA5Ax8vUlkKZXoJ//HVJAKxbpFHvOGaqiECa4sgon2F1kXw=', 'server': 'AmazonS3', 'transfer-encoding': 'chunked', 'x-amz-request-id': '6204CD88E880E5DD', 'date': 'Tue, 22 Mar 2016 01:07:46 GMT', 'content-type': 'application/xml'}
2016-03-22 01:07:47,152 - MainThread - botocore.parsers - DEBUG - Response body:

2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.HeadObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f421075bcd0>
2016-03-22 01:07:47,152 - MainThread - botocore.retryhandler - DEBUG - No retry needed.
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.HeadObject: calling handler <function enhance_error_msg at 0x7f4211085758>
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.HeadObject: calling handler <awscli.errorhandler.ErrorHandler object at 0x7f421100cc90>
2016-03-22 01:07:47,152 - MainThread - awscli.errorhandler - DEBUG - HTTP Response Code: 403
2016-03-22 01:07:47,152 - MainThread - awscli.customizations.s3.s3handler - DEBUG - Exception caught during task execution: A client error (403) occurred when calling the HeadObject operation: Forbidden
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/s3handler.py", line 100, in call
    total_files, total_parts = self._enqueue_tasks(files)
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/s3handler.py", line 178, in _enqueue_tasks
    for filename in files:
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/fileinfobuilder.py", line 31, in call
    for file_base in files:
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 142, in call
    for src_path, extra_information in file_iterator:
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 314, in list_objects
    yield self._list_single_object(s3_path)
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 343, in _list_single_object
    response = self._client.head_object(**params)
  File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 228, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 488, in _make_api_call
    model=operation_model, context=request_context
  File "/usr/local/lib/python2.7/site-packages/botocore/hooks.py", line 226, in emit
    return self._emit(event_name, kwargs)
  File "/usr/local/lib/python2.7/site-packages/botocore/hooks.py", line 209, in _emit
    response = handler(**kwargs)
  File "/usr/local/lib/python2.7/site-packages/awscli/errorhandler.py", line 70, in __call__
    http_status_code=http_response.status_code)
ClientError: A client error (403) occurred when calling the HeadObject operation: Forbidden
2016-03-22 01:07:47,153 - Thread-1 - awscli.customizations.s3.executor - DEBUG - Received print task: PrintTask(message='A client error (403) occurred when calling the HeadObject operation: Forbidden', error=True, total_parts=None, warning=None)
A client error (403) occurred when calling the HeadObject operation: Forbidden

However, when I run it with the --no-sign-request option, it works perfectly:

aws --debug --no-sign-request s3 cp s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm .

Can someone please explain what is going on?
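
A quick way to compare what differs between the instance and the local machine, before digging into policies, is to check which credentials the CLI is resolving and which identity they belong to; a small sketch using standard AWS CLI commands:

# Shows where the CLI is picking credentials from (env vars, shared config,
# instance profile, ...) and which account/user/role they resolve to.
aws configure list
aws sts get-caller-identity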

Apache Airflow version: 2.1.0, running from the apache/airflow container

Kubernetes version (if you are using kubernetes) (use kubectl version):

Environment:

  • Cloud provider or hardware configuration: Airflow Container
  • OS (e.g. from /etc/os-release): Windows
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened: I am using S3Hook to upload a file to AWS S3. I have created a user in AWS with full S3 permissions.
I have created an s3_connection entry in Connections like this:

{
"aws_access_key_id":"********", 
"aws_secret_access_key":"********",
"region_name":"us-west-2"
}

But when I run my DAG task I am getting the error that "role_arn" is NONE.

What you expected to happen: Expected to connect successfully to S3 and upload the local test.txt file to S3.

How to reproduce it:
Here is the code snippet.

from airflow import DAG
import airflow
from airflow.providers.amazon.aws.hooks.s3 import S3Hook
from airflow.operators.python import PythonOperator

from datetime import timedelta, datetime

default_args = {
    'owner':'Lax',
    'retries':0,
    'email_on_retry': False,
    'email':'email@gmail.com',
    'email_on_failure': False,
    'retry_delay': timedelta(minutes=1)
}    

def file_upload(filename,key,bucket_name):
    hook = S3Hook('s3_connection')
    hook.load_file(filename, key, bucket_name)
    

with DAG(dag_id='s3_hook_demo', start_date = datetime(2021,7,14), schedule_interval = "@daily", default_args = default_args,
         catchup=False) as dag:

    file_upload_task = PythonOperator(
        task_id='file_upload',        
        op_kwargs= {
            'filename': 'test.txt',
            'key':'text.txt',
            'bucket_name':'angfdighubfjhdknmmf'
        },
        python_callable=file_upload
    )

Anything else we need to know:
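
If the Extra JSON above is not being parsed as expected, one alternative worth trying is defining the same connection through an environment variable instead of the metadata DB. This is only a sketch, assuming the access key pair above is the one intended for the s3_connection conn_id; the secret must be URL-encoded if it contains special characters.

# Airflow reads connections from AIRFLOW_CONN_<CONN_ID> (upper-cased).
# Login maps to the access key id, password to the secret key, and extras
# such as region_name go into the query string.
export AIRFLOW_CONN_S3_CONNECTION='aws://AKIAXXXXXXXXXXXXXXXX:url-encoded-secret-key@/?region_name=us-west-2'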


There are a few issues here. First, your bucket policy document is not valid JSON, but I guess that happened while copying it here.

aws s3 cp s3://url doesn't work simply because the bucket policy blocks it, which is the intended behavior in this case. Note that an explicit deny always wins. Your bucket policy denies any upload if the server-side encryption header is missing from the HTTP request. No matter how you define the IAM policy attached to a user, that user will not be able to use the mentioned command as is, because of the explicit deny.

If you want to make it work, you just need to specify server-side encryption in your CLI command with the appropriate flag, --sse AES256 (this applies when uploading objects to the S3 bucket).

aws s3 cp local-file s3://url --sse AES256

Other things that I have noticed:

In this part

{
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::bucket1/*",
    "Condition": {
        "Bool": {
            "aws:SecureTransport": "false"
        }
    }
}

you are denying all S3 actions if the request is not using HTTPS, but you have specified only the objects in that bucket ("Resource": "arn:aws:s3:::bucket1/*"), not the bucket itself ("Resource": "arn:aws:s3:::bucket1"), so your statement applies only to object-level operations. Is this the intended behavior? If you want to deny all actions that do not use HTTPS, for both object-level and bucket-level operations, then you need to change your current Resource to

{
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::bucket1",
        "arn:aws:s3:::bucket1/*"
    ],
    "Condition": {
        "Bool": {
            "aws:SecureTransport": "false"
        }
    }
}

And in this section

{
    "Action": [
        "s3:GetObject"
    ],
    "Resource": [
        "arn:aws:s3:::bucket1",
        "arn:aws:s3:::bucket1/*"
    ],
    "Effect": "Allow"
}

the "arn:aws:s3:::bucket1" entry in your Resource is completely redundant, because the s3:GetObject action is an object-level operation and your statement doesn't contain any bucket-level operations. You can freely remove it, so it should look something like this:

{
    "Action": [
        "s3:GetObject"
    ],
    "Resource": "arn:aws:s3:::bucket1/*",
    "Effect": "Allow"
}

UPDATE

When getting an object, be sure to specify an object key, not just the URL of the bucket.

This will work

aws s3 cp s3://bucket/file.txt .

This will fail with a 403 error

aws s3 cp s3://bucket .

If you want to download multiple files at the same time using the above command, you will need to do two things. First, you will need to update your IAM permissions to include s3:ListBucket on the bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket/*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::bucket"
        }
    ]
}

Second, you will need to specify the --recursive flag in the cp command.

aws s3 cp s3://bucket . --recursive
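
With the same permissions in place (s3:GetObject on the objects plus s3:ListBucket on the bucket), aws s3 sync is a possible alternative for pulling many objects at once; a quick sketch:

# Copies every object under the bucket/prefix that is missing or newer
# than the local copy.
aws s3 sync s3://bucket .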



S3 forbidden access when reading a bucket by a lambda function #1732


I'm trying to use localstack to create a lambda function which downloads files from an S3 bucket, but it fails with a Forbidden status.

The lambda code that downloads the file uses the boto3 library.

The steps I've taken to create the bucket and upload the file are:

Create a bucket:
awslocal s3 mb s3://input

Grant it public access:
awslocal s3api put-bucket-acl --bucket input --acl public-read-write

Upload a file into the bucket:
awslocal s3 cp file s3://input/file

The steps to create the lambda and run it are:

Create a zip file containing all the necessary files for the lambda

Create the lambda function:

awslocal lambda create-function --function-name f1 --runtime python2.7 --handler main.handler --memory-size 128 --zip-file fileb://main.zip --role r1

Run the function:

awslocal lambda invoke --function-name f1 --payload '' my_test

When I run the lambda function I receive the following exception:

An error occurred (403) when calling the HeadObject operation: Forbidden: ClientError
Traceback (most recent call last):
  File "/var/runtime/boto3/s3/inject.py", line 246, in bucket_download_file
    ExtraArgs=ExtraArgs, Callback=Callback, Config=Config)
  File "/var/runtime/boto3/s3/inject.py", line 172, in download_file
    extra_args=ExtraArgs, callback=Callback)
  File "/var/runtime/boto3/s3/transfer.py", line 307, in download_file
    future.result()
  File "/var/runtime/s3transfer/futures.py", line 106, in result
    return self._coordinator.result()
  File "/var/runtime/s3transfer/futures.py", line 265, in result
    raise self._exception
ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
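
One way to narrow this down is to issue the same HeadObject call directly against localstack from outside the lambda, using the bucket and key created in the steps above; a sketch:

# If this also returns a 403, the problem is in the bucket/object setup
# inside localstack; if it succeeds, the problem is specific to how the
# lambda container is reaching S3.
awslocal s3api head-object --bucket input --key file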



I would appreciate your help on this. The scenario is that I am trying to publish AWS VPC Flow Logs from account A to an S3 bucket in another account, B. I am able to do so, but when I try to download the logs from account A, I get the error "fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden". Account A has an IAM user, userA, that I am using.

I am using the below command to download:

aws s3 cp s3://vpcflowlogscross/VPCLogs/{subscribe_account_id}/vpcflowlogs/{region_code}/2021/06/14/xyz.log.gz .

I have tried adding the region to the above command as well, but no luck. While investigating this I also came across documentation saying to add the ListBucket permission, which I already have.

My Bucket Policy in Account B:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSLogDeliveryWrite",
            "Effect": "Allow",
            "Principal": {
                "Service": "delivery.logs.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::vpcflowlogs-cross/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        },
        {
            "Sid": "AWSLogDeliveryCheck",
            "Effect": "Allow",
            "Principal": {
                "Service": "delivery.logs.amazonaws.com"
            },
            "Action": [
                "s3:GetBucketAcl",
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::vpcflowlogs-cross"
        },
        {
            "Sid": "DownloadObjects",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AccountA:user/userA"
            },
            "Action": [
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::vpcflowlogs-cross",
                "arn:aws:s3:::vpcflowlogs-cross/*"
            ]
        }
    ]
}

I am using the below IAM user policy in Account A to download the objects that are in the Account B S3 bucket.

My IAM user Policy in Account A:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "IAMAllowDownload",
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::vpcflowlogs-cross",
                "arn:aws:s3:::vpcflowlogs-cross/*"
            ]
        }
    ]
}

I have tried changing the bucket and IAM policies but still no luck. I understand that the log objects published to account B are owned by it, but I am granting the s3:GetObject permission in the bucket policy to the Account A principal userA; shouldn't this allow me to download the objects? Kindly assist in solving this.
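
Since the flow-log objects are delivered into the Account B bucket by the log delivery service, one thing worth verifying is who actually owns the delivered objects; a sketch (the key path is a placeholder), run with Account B credentials:

# Prints the object's owner. If the owner is not the bucket owner (Account B),
# the GetObject grant in the bucket policy may not apply to that object the
# way you expect.
aws s3api get-object-acl \
    --bucket vpcflowlogs-cross \
    --key VPCLogs/SUBSCRIBE_ACCOUNT_ID/vpcflowlogs/REGION_CODE/2021/06/14/xyz.log.gz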

I’m attempting to write an ec2 user-data script that will pull a file down from a private s3 bucket. The ec2 instances are located in multiple regions, which I believe eliminates the possibility of using a bucket policy to restrict access to a VPC (which was working well in my prototype, but broke in the second region).

Based on advice here and elsewhere, the approach that seems like it should work is giving the ec2 instance an IAM role with access to that s3 bucket. And in fact, this almost seems to work for me. However, just not at the time the user-data script is running.

My user-data script has a while loop that checks for the existence of the file I'm trying to download from S3, and keeps retrying for 5 minutes. If I log in and run the exact same aws command manually within that 5-minute window, it succeeds as expected, but the user-data script itself never succeeds on its own.

apt install -y python-minimal
apt install -y unzip
wget "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip"
unzip awscli-bundle.zip
./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

mkdir -p /opt/myapp
cd /opt/myapp

n=0
while [ ! -f app.tar.gz ]; do
    aws s3 cp --region us-west-1 "s3://bucket_name/app.tar.gz" app.tar.gz
    n=$[$n+1]
    [ $n -ge 60 ] && break
    sleep 5
done

tar -zxf app.tar.gz
./bin/startapp

That is a distilled version of my user-data script. Since I'm able to log in and run that same aws command manually, I believe the IAM role must be correct, but I don't understand what else might be going wrong. When the aws command is run from the user-data script, the error is: fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
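
One thing that can be logged from inside the user-data script, just before the aws call, is whether the instance-profile credentials are visible to the instance at that point; a sketch against the instance metadata endpoint:

# Prints the attached role name once the instance profile credentials have
# been delivered; returns an empty/404 response if they are not there yet.
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/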

I figured it out. I had an error in my CloudFormation template that was creating the EC2 instances. As a result, the EC2 instances that were trying to access the CodeDeploy buckets above were in different regions (not us-west-2). It seems the access policies on those buckets (owned by Amazon) only allow access from the region they belong to.
When I fixed the error in my template (it was a wrong parameter mapping), the error disappeared.
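
For anyone who wants the instances to pull the agent from whichever region they end up in, one option is to derive the region from instance metadata and target the matching regional bucket; a sketch, assuming the regional buckets follow the aws-codedeploy-<region> naming used above:

# The region is the availability zone with its trailing letter removed
# (e.g. us-west-2a -> us-west-2).
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
REGION=${AZ%?}
aws s3 cp --region "$REGION" "s3://aws-codedeploy-$REGION/latest/codedeploy-agent.noarch.rpm" .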

in my case the problem was the Resource statement in the user access policy.

First we had "Resource": "arn:aws:s3:::BUCKET_NAME",
but in order to have access to objects within a bucket you need a /* at the end:
"Resource": "arn:aws:s3:::BUCKET_NAME/*"

From the AWS documentation:

Bucket access permissions specify which users are allowed access to the objects in a bucket and which types of access they have. Object access permissions specify which users are allowed access to the object and which types of access they have. For example, one user might have only read permission, while another might have read and write permissions.

Trying to solve this problem myself, I discovered that there is no HeadBucket permission. It looks like there is, because that’s what the error message tells you, but actually the HEAD operation requires the ListBucket permission.
I also discovered that my IAM policy and my bucket policy were conflicting. Make sure you check both.
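
To see the failing call without the high-level cp wrapper, the HeadObject request can also be issued directly with placeholder names; this is handy for checking whether adding s3:ListBucket (or fixing a conflicting policy) changes the result:

# Without s3:ListBucket, a denied or missing object comes back as 403;
# once ListBucket is granted, a missing object returns 404 instead, which
# makes the two cases easier to tell apart.
aws s3api head-object --bucket BUCKET_NAME --key path/to/object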


