An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied

I am getting: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied when I try to get a folder from my S3 bucket, using this command: aws s3 cp s3://bucket-n...

You have given permission to perform commands on objects inside the S3 bucket, but you have not given permission to perform any actions on the bucket itself.

With slight modifications, your policy would look like this:

{
  "Version": "version_id",
  "Statement": [
    {
        "Sid": "some_id",
        "Effect": "Allow",
        "Action": [
            "s3:*"
        ],
        "Resource": [
            "arn:aws:s3:::bucketname",
            "arn:aws:s3:::bucketname/*"
        ]
    }
  ] 
}

However, that probably gives more permission than is needed. Following the AWS IAM best practice of Granting Least Privilege would look something like this:

{
  "Version": "2012-10-17",
  "Statement": [
      {
          "Effect": "Allow",
          "Action": [
              "s3:ListBucket"
          ],
          "Resource": [
              "arn:aws:s3:::bucketname"
          ]
      },
      {
          "Effect": "Allow",
          "Action": [
              "s3:GetObject"
          ],
          "Resource": [
              "arn:aws:s3:::bucketname/*"
          ]
      }
  ]
}
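If you script this setup, the two-statement shape above can be generated programmatically. A minimal sketch in Python (the make_policy helper is hypothetical, not an AWS API; attach the resulting JSON however you normally manage policies):

```python
import json

def make_policy(bucket: str) -> dict:
    """Build a least-privilege read policy: ListBucket on the bucket
    itself, GetObject on the objects inside it."""
    arn = f"arn:aws:s3:::{bucket}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": [arn]},
            {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": [arn + "/*"]},
        ],
    }

print(json.dumps(make_policy("bucketname"), indent=2))
```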

answered Aug 4, 2016 at 19:04

Mark B

If you wanted to copy all the S3 bucket objects using the command "aws s3 cp s3://bucket-name/data/all-data/ . --recursive" as you mentioned, here is a safe and minimal policy to do that:

{
  "Version": "2012-10-17",
  "Statement": [
      {
          "Effect": "Allow",
          "Action": [
              "s3:ListBucket"
          ],
          "Resource": [
              "arn:aws:s3:::bucket-name"
          ],
          "Condition": {
              "StringLike": {
                  "s3:prefix": "data/all-data/*"
              }
          }
      },
      {
          "Effect": "Allow",
          "Action": [
              "s3:GetObject"
          ],
          "Resource": [
              "arn:aws:s3:::bucket-name/data/all-data/*"
          ]
      }
  ]
}

The first statement in this policy allows listing objects inside a specific bucket subdirectory. The Resource needs to be the ARN of the S3 bucket, and to limit listing to only a subdirectory in that bucket you can edit the "s3:prefix" value.

The second statement in this policy allows getting objects inside of the bucket at a specific subdirectory. This means you will be able to copy anything inside the "s3://bucket-name/data/all-data/" path. Be aware that this doesn't allow you to copy from parent paths such as "s3://bucket-name/data/".
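The s3:prefix condition above is a StringLike match. As a rough illustration (not IAM's actual evaluation engine), the same wildcard semantics can be mimicked with fnmatchcase; prefix_allowed is a hypothetical helper:

```python
from fnmatch import fnmatchcase

def prefix_allowed(requested_prefix: str, allowed_pattern: str) -> bool:
    """Mimic the case-sensitive StringLike match IAM applies to the
    s3:prefix condition key ('*' matches any run of characters)."""
    return fnmatchcase(requested_prefix, allowed_pattern)

# Listing under data/all-data/ is allowed; the parent prefix is not.
print(prefix_allowed("data/all-data/file.csv", "data/all-data/*"))  # True
print(prefix_allowed("data/", "data/all-data/*"))                   # False
```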

This solution is specific to limiting access via AWS CLI commands; if you need to limit S3 access through the AWS console or API, then more policies will be needed. I suggest taking a look here: https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/.

A similar issue can be found here, which led me to the solution I am giving:
https://github.com/aws/aws-cli/issues/2408

Hope this helps!

answered Aug 2, 2017 at 18:49

Robert Smith

I got the same error when using the policy below, although I had "s3:ListBucket" for the s3:ListObjects operation.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:GetObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>/*",
        "arn:aws:s3:::*-bucket/*"
      ],
      "Effect": "Allow"
    }
  ]
}

Then I fixed it by adding one line:
"arn:aws:s3:::<bucketname>"

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:GetObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>",
        "arn:aws:s3:::<bucketname>/*",
        "arn:aws:s3:::*-bucket/*"
      ],
      "Effect": "Allow"
    }
  ]
}
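The reason that one line matters: an object pattern like "arn:aws:s3:::bucket/*" never matches the bucket ARN itself, which is what ListBucket targets. A toy sketch of that wildcard matching (an illustration, not IAM's real matcher):

```python
from fnmatch import fnmatchcase

def resource_matches(policy_resource: str, request_arn: str) -> bool:
    """Simplified IAM-style wildcard match between a policy Resource
    entry and the ARN a request targets."""
    return fnmatchcase(request_arn, policy_resource)

bucket_arn = "arn:aws:s3:::mybucket"
# ListBucket targets the bucket ARN; an object pattern does not cover it.
print(resource_matches("arn:aws:s3:::mybucket/*", bucket_arn))  # False
print(resource_matches("arn:aws:s3:::mybucket", bucket_arn))    # True
```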

answered Jul 3, 2017 at 4:33

Gabriel Wu

I tried the following:

aws s3 ls s3.console.aws.amazon.com/s3/buckets/{bucket name}

This gave me the error:

An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied

Using this form worked:

aws s3 ls {bucket name}

answered Aug 1, 2019 at 23:24

Henry

I was unable to access S3 because:

  • first I configured key access on the instance (it was impossible to attach a role after launch back then)
  • forgot about it for a few months
  • attached a role to the instance
  • tried to access it.
    The configured key had higher priority than the role, and access was denied because the user hadn't been granted the necessary S3 permissions.

Solution: rm ~/.aws/credentials; then aws uses the role.
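The underlying issue is provider precedence: the AWS CLI checks static credentials (environment variables, then the shared credentials file) before falling back to an instance-profile role. A toy illustration of that chain (the sources dict and helper are hypothetical, for illustration only):

```python
def resolve_credentials(sources: dict) -> str:
    """Return the first credential source found, in the (simplified)
    precedence order the AWS CLI uses: env vars, then the shared
    credentials file, then the instance-profile role."""
    for name in ("env", "credentials_file", "instance_role"):
        if sources.get(name):
            return name
    raise RuntimeError("no credentials found")

# A stale credentials file shadows the instance role:
print(resolve_credentials({"credentials_file": True, "instance_role": True}))
# After removing ~/.aws/credentials, the role is finally used:
print(resolve_credentials({"instance_role": True}))
```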

answered Mar 30, 2017 at 20:05

Putnik

I faced the same issue. I just added the credentials config:

aws_access_key_id = your_aws_access_key_id
aws_secret_access_key = your_aws_secret_access_key

into "~/.aws/credentials" and restarted the terminal for the default profile.

In the case of multiple profiles, the --profile arg needs to be added:

aws s3 sync ./localDir s3://bucketName --profile=${PROFILE_NAME}

where PROFILE_NAME:

.bash_profile (or .bashrc) -> export PROFILE_NAME="yourProfileName"

More info about how to config credentials and multi profiles can be found here

answered Jul 30, 2019 at 0:10

Ihor Pavlyk

For Amazon users who have enabled MFA, please use this:
aws s3 ls s3://bucket-name --profile mfa

And prepare the mfa profile first by running
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/user-name --token-code 928371 --duration-seconds 129600 (replace 123456789012, user-name and 928371).
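The JSON that get-session-token prints can then be turned into the mfa profile in ~/.aws/credentials. A small sketch of that transformation (the to_profile helper is ours; the input dict mirrors the documented Credentials shape of the response):

```python
def to_profile(name: str, creds: dict) -> str:
    """Render a ~/.aws/credentials section from the Credentials
    object returned by `aws sts get-session-token`."""
    return (
        f"[{name}]\n"
        f"aws_access_key_id = {creds['AccessKeyId']}\n"
        f"aws_secret_access_key = {creds['SecretAccessKey']}\n"
        f"aws_session_token = {creds['SessionToken']}\n"
    )

section = to_profile("mfa", {
    "AccessKeyId": "ASIA...",
    "SecretAccessKey": "secret",
    "SessionToken": "token",
})
print(section)
```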

answered Mar 3, 2022 at 7:57

Lane

You have to specify a Resource for the bucket via "arn:aws:s3:::bucketname" or "arn:aws:s3:::bucketname*". The latter is preferred since it allows manipulations on the bucket's objects too. Notice there is no slash!

Listing objects is an operation on the Bucket. Therefore, the action "s3:ListBucket" is required.
Adding an object to the Bucket is an operation on an Object. Therefore, the action "s3:PutObject" is needed.
Certainly, you may want to add other actions as you require.

{
  "Version": "version_id",
  "Statement": [
    {
      "Sid": "some_id",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname*"
      ]
    }
  ]
}


answered Apr 5, 2017 at 12:42

marzhaev

Okay, for those who have done all of the above and are still getting this issue, try this:

Your bucket policy should look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBucketSync",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:PutObjectAcl",
                "s3:DeleteObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME",
                "arn:aws:s3:::BUCKET_NAME/*"
            ]
        }
    ]
}

Then save and ensure your instance or Lightsail is connected to the right profile in AWS configure.

First:
try adding --recursive at the end. Any luck? No? Okay, try the one below.

Second:
Okay, now try this instead: --no-sign-request

so it should look like this:

sudo aws s3 sync s3://BUCKET_NAME /yourpath/path/folder --no-sign-request

You’re welcome 😂

answered Sep 24, 2021 at 9:09

Bonny James

I was thinking the error was due to the "s3:ListObjects" action, but I had to add the action "s3:ListBucket" to solve the "AccessDenied for ListObjects for S3 bucket" issue.

answered Dec 14, 2018 at 1:44

Sudhakar Naidu

I’m adding an answer in the same direction as the accepted answer, but with small (important) differences and more details.

Consider the configuration below:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<Bucket-Name>"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::<Bucket-Name>/*"]
    }
  ]
}

The policy grants programmatic write/delete access and is separated into two parts:

The ListBucket action provides permissions at the bucket level, while the PutObject/DeleteObject actions require permissions on the objects inside the bucket.

The first Resource element specifies arn:aws:s3:::<Bucket-Name> for the ListBucket action so that applications can list all objects in the bucket.

The second Resource element specifies arn:aws:s3:::<Bucket-Name>/* for the PutObject and DeleteObject actions so that applications can write or delete any objects in the bucket.

The separation into two different ARNs is important for security reasons, in order to specify fine-grained bucket-level and object-level permissions.

Notice that if I had specified just GetObject in the 2nd block, then under programmatic access I would receive an error like:

Upload failed: <file-name> to <bucket-name>:<path-in-bucket> An error occurred (AccessDenied) when calling the PutObject operation: Access Denied.
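The bucket-level/object-level split can be captured in a small lookup: given an action, which ARN form must appear in Resource. A sketch for illustration (the action sets below cover only the actions discussed in this answer):

```python
BUCKET_LEVEL = {"s3:ListBucket"}
OBJECT_LEVEL = {"s3:GetObject", "s3:PutObject", "s3:DeleteObject"}

def required_resource(action: str, bucket: str) -> str:
    """Return the ARN form a statement must list for this action."""
    if action in BUCKET_LEVEL:
        return f"arn:aws:s3:::{bucket}"
    if action in OBJECT_LEVEL:
        return f"arn:aws:s3:::{bucket}/*"
    raise ValueError(f"unknown action: {action}")

print(required_resource("s3:ListBucket", "my-bucket"))  # arn:aws:s3:::my-bucket
print(required_resource("s3:PutObject", "my-bucket"))   # arn:aws:s3:::my-bucket/*
```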

answered May 28, 2020 at 0:42

RtmY

To allow permissions in the S3 bucket, go to the Permissions tab of the bucket and, in the bucket policy, change the action to the following, which will allow all actions to be performed:

"Action":"*"

answered Dec 1, 2020 at 18:05

SYED FAISAL

Here’s the policy that worked for me.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-name"
      ]
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-name/*"
      ]
    }
  ]
}

answered Jul 13, 2021 at 0:35

Indika K

I had a similar problem while trying to sync an entire S3 bucket locally. For me, MFA (multi-factor authentication) was enforced on my account, which is required when running commands via the AWS CLI.

So the solution for me was to provide MFA credentials using a profile (see the MFA documentation) when using any AWS CLI commands.

answered Oct 4, 2021 at 12:45

Onkar Singh

Ran into a similar issue; for me, the problem was that I had different AWS keys set in my bash_profile.

I answered a similar question here: https://stackoverflow.com/a/57317494/11871462

If you have conflicting AWS keys in your bash_profile, AWS CLI defaults to these instead.

answered Aug 1, 2019 at 22:15

Varun Tandon

I had this issue. My requirement was that I wanted to allow a user to write to a specific path:

{
  "Sid": "raspiiotallowspecificBucket",
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::<bucketname>/scripts",
    "arn:aws:s3:::<bucketname>/scripts/*"
  ]
},

and the problem was solved with this change:

{
  "Sid": "raspiiotallowspecificBucket",
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::<bucketname>",
    "arn:aws:s3:::<bucketname>/*"
  ]
},

answered Feb 2, 2020 at 20:59

Ameen

I like this better than any of the previous answers. It shows how to use the YAML format and lets you use a variable to specify the bucket.

    - PolicyName: "AllowIncomingBucket"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Action: "s3:*"
            Resource:
              - !Ref S3BucketArn
              - !Join ["/", [!Ref S3BucketArn, '*']]

answered Aug 12, 2020 at 15:03

BlackMetalOwl

My issue was having set

env: 
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} 
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

again, under the aws-sync GitHub Action, as environment variables. They were coming from my GitHub settings. In my case, though, I had assumed a role in a previous step, which set new keys into those same environment variables. So I was overwriting the good assumed keys with the bad GitHub base keys.

Please take care of this if you’re assuming roles.

answered Mar 13, 2021 at 0:15

Hunor Kovács

I had the same issue. I had to provide the right resource and action; the resource is your bucket's ARN and the action is your desired permission. Also, please ensure you have the right user ARN. Below is my solution.

{
    "Version": "2012-10-17",
    "Id": "Policy1546414123454",
    "Statement": [
        {
            "Sid": "Stmt1546414471931",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789101:root"
            },
            "Action": ["s3:ListBucket", "s3:ListBucketVersions"],
            "Resource": "arn:aws:s3:::bucket-name"
        }
    ]
}

answered May 11, 2021 at 21:03

Akash Yellappa

If you are suddenly getting this error on a new version of MinIO on buckets that used to work, the reason is that bucket access policy defaults were changed from version 2021 to 2022. In version 2022, by default, all buckets (both newly created and existing ones) have their Access Policy set to Private: it is not sufficient to provide server credentials to access them, and you will still get errors such as these (here, returned to the Python MinIO client):

S3Error: S3 operation failed; code: AccessDenied, message: Access Denied., resource: /dicts, request_id: 16FCBE6EC0E70439, host_id: 61486e5a-20be-42fc-bd5b-7f2093494367, bucket_name: dicts

To roll back to the previous security settings in version 2022, the quickest method is to change the bucket's Access Policy back to Public in the MinIO Console (or via the mc client).

answered Jun 28, 2022 at 9:52

mirekphd

This is not best practice, but it will unblock you.
Make sure the user executing the command has the following policies attached under its permissions:
A. PowerUserAccess
B. AmazonS3FullAccess

answered Sep 24, 2022 at 5:42

grepit

I faced the same error: "An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied".

Note:
A bucket policy is not a good solution here. Creating a new custom policy in the IAM service and attaching it to the respective user would be safer.

Solved by the procedure below:

IAM Service > Policies > Create Policy > select JSON >

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:ListBucketVersions"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket name>"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucketMultipartUploads",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload",
        "s3:DeleteObjectVersion",
        "s3:GetObjectVersion",
        "s3:PutObjectACL",
        "s3:ListBucketVersions"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>/*"
      ]
    }
  ]
}

Select Next: Tags > Review policy, enter a name, and create the policy.

Select the newly created policy.
Select the 'Policy usage' tab in the edit window of the newly created policy.
Select "Attach", select the user from the list, and save.

Now try listing the objects in the console with the bucket name; without the bucket name, it throws the same error.

$ aws s3 ls

answered Jan 13 at 9:01

Manikandan Jayaraman

Amazon Web Services (AWS) Identity & Access Management (IAM) is a foundational service that provides security in the cloud. It allows you to manage access to your AWS services, resources, and applications. It’s a core service for AWS, but nothing’s perfect. And while using it, you may encounter errors. But don’t sweat it! Let’s dig into the cause and resolution for five common AWS IAM errors.



1. AccessDeniedException – I Can’t Assume a Role

IAM roles can be used to delegate access to your AWS resources across different AWS accounts that you own. For example, you can share resources in one account with users in a different account. This is made possible by establishing trust relationships between the trusting account and your other AWS trusted accounts.

Let’s take the case where you want to give users in your development account access to resources in your production account. This could be a case where there is a need to promote an update made in development to production. This type of access is called cross-account access. If permissions aren’t set up correctly, you may encounter the error below.

Error
An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam:::user is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::user:role/role

Cause
There are two possible causes for this AccessDenied error: the user in your development account doesn’t have permission to call sts:AssumeRole, or the trust relationship in the production account is not configured correctly.

Assuming you’ve already created a role in your production account that a user in your development account can assume (to retrieve temporary security credentials), consider the solutions below. 

Solution #1
Verify the IAM policy attached to the user in your development account grants that user permission to the sts:AssumeRole action for the role in your production account they are attempting to assume. You must explicitly grant this permission using a policy similar to what’s shown below.

{
 "Version": "2012-10-17",
 "Statement": [{
   "Effect": "Allow",
   "Action": ["sts:AssumeRole"],
   "Resource": "arn:aws:iam::user:role/role"
 }]
}

Solution #2
Maybe the user in your development account already has permission to the sts:AssumeRole action, but the error still occurs. The next step is to verify that your development account (the account from which you are calling AssumeRole) is set up in your production account as a trusted entity for the role the user is attempting to assume. A role similar to what’s shown below in your production account should do the trick.

{
 "Version": "2012-10-17",
 "Statement": [
   {
     "Effect": "Allow",
     "Principal": {
       "AWS": "arn:aws:iam::user:user-name"
     },
     "Action": "sts:AssumeRole",
     "Condition": {}
   }
 ]
}

Upon success of assuming the role, the AssumeRole API returns a set of temporary security credentials that can be used to access the production account with the permissions specified in the role.
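The Credentials object in that AssumeRole response carries AccessKeyId, SecretAccessKey, SessionToken, and Expiration. A sketch of mapping those onto the environment variables the CLI and SDKs read, assuming a response dict shaped like the documented API output (to_env is a hypothetical helper):

```python
def to_env(response: dict) -> dict:
    """Map an AssumeRole response's Credentials onto the environment
    variables the AWS CLI and SDKs consume."""
    c = response["Credentials"]
    return {
        "AWS_ACCESS_KEY_ID": c["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": c["SecretAccessKey"],
        "AWS_SESSION_TOKEN": c["SessionToken"],
    }

env = to_env({"Credentials": {
    "AccessKeyId": "ASIA...",
    "SecretAccessKey": "secret",
    "SessionToken": "token",
    "Expiration": "2020-01-01T00:00:00Z",
}})
print(env["AWS_SESSION_TOKEN"])  # token
```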

2. AccessDeniedException – I Can’t Call an AWS API Operation

When providing access to resources in your AWS account, consider the principle of least-privileged permissions. Least-privileged permissions grant only the minimum level of access necessary to perform a given task. This principle highlights the fact that users and services cannot access resources until access is explicitly granted. 

Let’s take the case of a user attempting to call the list bucket operation on an Amazon S3 bucket using the command line interface. The user is met with the error below.

Error
An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied

Cause
The AccessDenied error occurs because the user attempting to perform this action has not been explicitly granted access to list the bucket contents. The user will not have access to perform this action unless you explicitly grant it. 

Solution
The easy solution is to attach an Inline Policy, similar to the snippet below, to the user.

{
   "Version": "2012-10-17",
   "Statement": [
       {
           "Sid": "VisualEditor0",
           "Effect": "Allow",
           "Action": [
               "s3:ListAllMyBuckets",
               "s3:ListBucket",
               "s3:HeadBucket"
           ],
           "Resource": "*"
       }
   ]
}

To provide an additional level of security, you can name objects in the Resource element instead of using the wildcard *, which represents all resources. If you’re not familiar with the Resource element, it specifies the object or objects that the policy covers. 

The example below allows access to all items within a specific Amazon S3 bucket using the Resource, the Amazon Resource Name (ARN), and the wildcard *.

{
   "Version": "2012-10-17",
   "Statement": [
       {
           "Sid": "VisualEditor0",
           "Effect": "Allow",
           "Action": [
               "s3:ListAllMyBuckets",
               "s3:ListBucket",
               "s3:HeadBucket"
           ],
           "Resource": "arn:aws:s3:::bucket_name/*"
       }
   ]
}



3. UnauthorizedOperation – I am not Authorized to Perform an Operation

When attempting to perform an operation, you may see an error stating you’re not authorized to perform that operation. Let’s take the case of listing EC2 instances in an account using the describe-instances action.

Error
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.

Cause
The UnauthorizedOperation error occurs because either the user or role trying to perform the operation doesn’t have permission to describe (or list) EC2 instances. 

Solution
The easy solution is to attach an Inline Policy, similar to the snippet below, giving the user access.

{
   "Version": "2012-10-17",
   "Statement": [
       {
           "Sid": "VisualEditor0",
           "Effect": "Allow",
           "Action": [
               "ec2:DescribeInstances"
           ],
           "Resource": "*"
       }
   ]
}

It is important to highlight that the DescribeInstances action cannot be defined with an ARN in the Resource element. Some services do not allow you to specify actions for individual resources and require that you use the wildcard * in the Resource element instead. While you can define resource-level permissions for a subset of the EC2 APIs, the DescribeInstances action currently does not support resource-level permissions. In this case, if you add an ARN to the Resource element, you will continue to see the UnauthorizedOperation error.
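A linter-style sketch of that constraint: flag statements that scope an action without resource-level support to anything other than "*". The unsupported-action set here is limited to the example in this article, and the helper is an illustration, not an AWS tool:

```python
# Actions (from this article's example) that only accept Resource: "*".
NO_RESOURCE_LEVEL = {"ec2:DescribeInstances"}

def invalid_resources(statement: dict) -> list:
    """Return actions that will fail because the statement scopes
    them to an ARN even though they require Resource: "*"."""
    actions = statement.get("Action", [])
    if isinstance(actions, str):
        actions = [actions]
    if statement.get("Resource") == "*":
        return []
    return [a for a in actions if a in NO_RESOURCE_LEVEL]

bad = invalid_resources({
    "Action": ["ec2:DescribeInstances"],
    "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
})
print(bad)  # ['ec2:DescribeInstances']
```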


Want to prevent the deletion of an Amazon S3 bucket? Use the AWS Policy Generator tool to create policies that control access to AWS products and resources!


4. One Service is Not Authorized to Perform an Action on Another Service

When managing your AWS resources, you often need to grant one AWS service access to another service to accomplish tasks. Let’s take the case where you need to query a DynamoDB table from a Lambda function. The following Lambda code snippet, to query the USERS table, results in the error shown below. 

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('USERS')

response = table.query(KeyConditionExpression=Key('USER_ID').eq(userid))

Error
arn:aws:sts::user:assumed-role/role/function is not authorized to perform: dynamodb:Query on resource: arn:aws:dynamodb:region:account:table/USERS

Cause
This error is caused because the Lambda’s execution role does not have permission to query the USERS DynamoDB table.

Solution
The simple solution is to modify the Lambda’s execution role by attaching an Inline Policy similar to the following:

{
   "Version": "2012-10-17",
   "Statement": [
       {
           "Sid": "VisualEditor0",
           "Effect": "Allow",
           "Action": "dynamodb:Query",
           "Resource": "arn:aws:dynamodb:region:account:table/USERS"
       }
   ]
}

The same method can be followed to allow Lambda access to Amazon S3. The method described above will work if the Lambda function and S3 bucket are in the same AWS account. However, if they are in different accounts, you will need to grant Amazon S3 permissions on both the Lambda execution role and the bucket policy.

5. The policy must contain a valid version string

When creating or modifying a policy, you may encounter an error that states the policy must contain a valid Version string. This Version policy element is not the same as multiple version support for managed policies. The Version policy element specifies the language syntax rules that should be used to process the policy. This can be a point of confusion for those new to IAM as they often try to use the current date for the Version policy element; however, the Version is limited to a few select values. For example, using the current date for the Version string, similar to what’s shown below, will cause an error.

{
   "Version": "2020-07-30",
   "Statement": [
       {
           "Sid": "VisualEditor0",
           "Effect": "Allow",
           "Action": [
               "ec2:DescribeInstances"
           ],
           "Resource": "*"
       }
   ]
}

Error
This policy contains the following error: The policy must contain a valid version string

Cause
The error occurs because Version is limited to a few select values.

Solution
The solution is to use one of the valid Version element values. Currently, IAM supports the following Version element values:

  • 2012-10-17 – This is the current version of the policy language.
  • 2008-10-17 – This is an older version of the policy language and doesn’t support newer features. 

If you do not include a Version element, the value defaults to 2008-10-17.
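A quick sketch of that validation logic (effective_version is a hypothetical helper mirroring the rules just described, not an AWS API):

```python
VALID_VERSIONS = {"2012-10-17", "2008-10-17"}

def effective_version(policy: dict) -> str:
    """Validate the Version element; a missing Version defaults to
    the older 2008-10-17 policy language."""
    version = policy.get("Version", "2008-10-17")
    if version not in VALID_VERSIONS:
        raise ValueError("The policy must contain a valid version string")
    return version

print(effective_version({"Version": "2012-10-17", "Statement": []}))  # 2012-10-17
print(effective_version({"Statement": []}))                           # 2008-10-17
try:
    effective_version({"Version": "2020-07-30", "Statement": []})
except ValueError as e:
    print(e)
```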


Learn more about IAM

Well, there you have it! We’ve reviewed some of the common errors along with resolutions that you may encounter when using IAM.


AWS S3 ListObjects Access Denied error can be resolved easily with these troubleshooting tips by our experts.


Let’s take a look at how our AWS Support Team is ready to help customers troubleshoot AWS S3 ListObjects Access Denied.

All about AWS S3 ListObjects Access Denied Error

Have you been coming across the following error while trying to access your AWS S3 bucket?

An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied

Each time an aws s3 sync command is run, Amazon S3 lists the source and destination in order to verify that the objects exist.

In other words, it results in the following API calls: CopyObject, ListObjectsV2, PutObject, and GetObject.

AWS S3 ListObjects Access Denied Error

  • CopyObject API call for the bucket to bucket operation
  • PutObject API for local to bucket operation
  • GetObject API for the bucket to local operation

The Access Denied error occurs due to not having the required permissions to perform actions on the bucket. Fortunately, there is an easy resolution for the AWS S3 ListObjects operation Access Denied error.

How to resolve AWS S3 ListObjects Access Denied

According to our AWS experts, the fix for this specific issue involves configuring the IAM policy.

To begin with, if the IAM user or role belongs to another AWS account, we have to ensure that both the IAM and bucket policies grant permission to list objects in the bucket.

However, if the user or role belongs to the bucket owner’s account, we need permission only from IAM or the bucket policy.

Additionally, our AWS experts suggest checking other policy statements for explicit denial of action.

For instance, here is a sample IAM policy that offers permission to s3:ListBucket

s3:ListBucket – the permission that allows a user to list objects in the bucket.

ListObjectsV2 – the API call that lists objects in the bucket.

    "Action": "s3:ListBucket",
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::AWSDOC-SAMPLE-BUCKET"
  }]
}

Moreover, here is a sample bucket policy that offers user arn:aws:iam::202204295674:user/user1 access to s3:ListBucket:

{
  "Id": "Policy1546414473940",
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "Stmt1546414471931",
    "Action": "s3:ListBucket",
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::AWSDOC-SAMPLE-BUCKET",
    "Principal": {
      "AWS": [
        "arn:aws:iam::202204295674:user/user1"
      ]
    }
  }]
}
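Note the structural difference between the two samples: the bucket policy names a Principal, while the IAM policy does not. A small sketch of that distinction (is_resource_policy is a hypothetical helper for illustration):

```python
def is_resource_policy(policy: dict) -> bool:
    """Resource-based policies (like bucket policies) name a Principal;
    identity-based IAM policies never do."""
    return any("Principal" in stmt for stmt in policy["Statement"])

bucket_policy = {"Statement": [{
    "Action": "s3:ListBucket",
    "Principal": {"AWS": ["arn:aws:iam::202204295674:user/user1"]},
}]}
iam_policy = {"Statement": [{"Action": "s3:ListBucket", "Resource": "*"}]}

print(is_resource_policy(bucket_policy), is_resource_policy(iam_policy))  # True False
```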

Checking ListObjectsV2 permission

If Requester Pays is enabled and the bucket belongs to another user, we have to check whether the IAM and bucket policies both offer ListObjectsV2 permissions. If yes, verify the sync command syntax. In fact, one of our customers came across the AWS S3 Access Denied ListObjects error due to an incorrect sync command syntax.

Here is a quick look at the sync command syntax when Requester Pays is enabled:

aws s3 sync ./ s3://requester-pays-bucket/ --request-payer requester

However, if we are still facing the error, it is time to attach a policy to the IAM user or role with S3 bucket access that permits the ListBucket action on the bucket as well as the GetObject action on bucket objects.

 {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::OUR_BUCKET/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::OUR_BUCKET"
            ]
        }
    ]
}

Alternatively, our AWS experts suggest verifying that the policy does not restrict access to the GetObject or ListBucket action. Furthermore, check if there is a condition that permits only a particular IP range to access bucket objects.


Conclusion

In a nutshell, our skilled AWS Support Engineers at Bobcares demonstrated how to troubleshoot and resolve the AWS S3 ListObjects Access Denied error.


Why does my Amazon EMR application fail with an HTTP 403 «Access Denied» AmazonS3Exception?

Last updated: 2022-05-03

When I submit an application to an Amazon EMR cluster, the application fails with an HTTP 403 «Access Denied» AmazonS3Exception:

java.io.IOException: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 8B28722038047BAA; S3 Extended Request ID: puwS77OKgMrvjd30/EY4CWlC/AuhOOSNsxfI8xQJXMd20c7sCq4ljjVKsX4AwS7iuo92C9m+GWY=), S3 Extended Request ID: puwS77OKgMrvjd30/EY4CWlC/AuhOOSNsxfI8xQJXMd20c7sCq4ljjVKsX4AwS7iuo92C9m+GWY=

Resolution

If permissions are not configured correctly, you might get an «Access Denied» error on Amazon EMR or Amazon Simple Storage Service (Amazon S3).

First, check the credentials or role specified in your application code

Run the following command on the EMR cluster’s master node. Replace s3://doc-example-bucket/abc/ with your Amazon S3 path.

aws s3 ls s3://doc-example-bucket/abc/

Check the policy for the Amazon EC2 instance profile role

If the Amazon Elastic Compute Cloud (Amazon EC2) instance profile doesn’t have the required read and write permissions on the S3 buckets, you might get the “Access Denied” error.

Note: By default, applications inherit Amazon S3 access from the IAM role for the Amazon EC2 instance profile. Be sure that the IAM policies attached to this role allow the required S3 operations on the source and destination buckets.

To troubleshoot this issue, check if you have the required read permission by running the following command:

$ aws s3 ls s3://doc-example-bucket/myfolder/

Your output might look like the following:

An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied

-or-

Run the following command:

$ hdfs dfs -ls s3://doc-example-bucket/myfolder

Your output might look like the following:

ls: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: RBT41F8SVAZ9F90B; S3 Extended Request ID: ih/UlagUkUxe/ty7iq508hYVfRVqo+pB6/xEVr5WHuvcIlfQnFf33zGTAaoP2i7cAb1ZPIWQ6Cc=; Proxy: null), S3 Extended Request ID: ih/UlagUkUxe/ty7iq508hYVfRVqo+pB6/xEVr5WHuvcIlfQnFf33zGTAaoP2i7cAb1ZPIWQ6Cc=

Be sure that the instance profile role has the required read and write permissions for the S3 buckets. For example, the S3 actions in the following IAM policy provide the required read and write access to the S3 bucket doc-example-bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListObjectsInBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::doc-example-bucket"
      ]
    },
    {
      "Sid": "AllObjectActions",
      "Effect": "Allow",
      "Action": "s3:*Object*",
      "Resource": [
        "arn:aws:s3:::doc-example-bucket/*"
      ]
    }
  ]
}
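The "s3:*Object*" wildcard in the second statement matches any S3 action whose name contains "Object" (IAM action matching is case-insensitive). A rough simulation using shell-style globbing, mimicking only the action-name wildcard rather than full IAM evaluation:

```python
import fnmatch

def action_matches(pattern: str, action: str) -> bool:
    """Case-insensitive glob match, approximating IAM's Action wildcard semantics."""
    return fnmatch.fnmatchcase(action.lower(), pattern.lower())

for a in ["s3:GetObject", "s3:PutObject", "s3:DeleteObjectVersion", "s3:ListBucket"]:
    print(a, action_matches("s3:*Object*", a))
# The first three match; s3:ListBucket does not, so it needs its own statement.
```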

Check the IAM role for the EMRFS role mapping

If you use a security configuration to specify IAM roles for EMRFS, then you’re using role mapping. Your application inherits the S3 permissions from the IAM role based on the role-mapping configuration.

The IAM policy attached to these roles must have the required S3 permissions on the source and destination buckets. To specify IAM roles for EMRFS requests to Amazon S3, see Set up a security configuration with IAM roles for EMRFS.

Check the Amazon S3 VPC endpoint policy

If the EMR cluster’s subnet route table has a route to an Amazon S3 VPC endpoint, then confirm that the endpoint policy allows the required Amazon S3 operations.

To check and modify the endpoint policy using the AWS CLI:

Run the following command to review the endpoint policy. Replace vpce-xxxxxxxx with your VPC endpoint ID.

aws ec2 describe-vpc-endpoints --vpc-endpoint-ids "vpce-xxxxxxxx"

If necessary, run the following command to upload a modified endpoint policy. Replace the VPC endpoint ID and JSON file path.

aws ec2 modify-vpc-endpoint --vpc-endpoint-id "vpce-xxxxxxxx" --policy-document file://policy.json
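Once you have the endpoint policy JSON from describe-vpc-endpoints, you need to confirm it allows the S3 actions your cluster uses and contains no explicit Deny that overrides them. A simplified checker, purely our own sketch: it evaluates only Effect and Action with wildcards, ignoring Resource, Principal, and Condition, which real IAM evaluation also considers:

```python
import fnmatch

def policy_allows(policy: dict, action: str) -> bool:
    """Simplified IAM-style check: an explicit Deny wins; otherwise any Allow suffices."""
    allowed = False
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if any(fnmatch.fnmatchcase(action.lower(), p.lower()) for p in actions):
            if stmt.get("Effect") == "Deny":
                return False  # an explicit deny overrides any allow
            if stmt.get("Effect") == "Allow":
                allowed = True
    return allowed

# The default endpoint policy allows everything:
default_policy = {"Statement": [{"Effect": "Allow", "Principal": "*", "Action": "*", "Resource": "*"}]}
# A hypothetical locked-down policy with an explicit deny:
locked_down = {"Statement": [
    {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
    {"Effect": "Deny", "Action": "s3:DeleteObject", "Resource": "*"},
]}

print(policy_allows(default_policy, "s3:ListBucket"))  # True
print(policy_allows(locked_down, "s3:GetObject"))      # True
print(policy_allows(locked_down, "s3:DeleteObject"))   # False
```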

To check and modify the endpoint policy using the Amazon VPC console:

  1. Open the Amazon VPC console.
  2. In the navigation pane, choose Endpoints.
  3. Select the Amazon S3 endpoint (the one that’s on the EMR cluster’s subnet route table). Then, choose the Policy tab to review the endpoint policy.
  4. To add the required Amazon S3 actions, choose Edit Policy.

Check the S3 source and destination bucket policies

Bucket policies specify the actions that are allowed or denied for principals. The source and destination bucket policies must allow the EC2 instance profile role or the mapped IAM role to perform the required Amazon S3 operations.

To check and modify the bucket policies using the AWS CLI:

Run the following command to review a bucket policy. Replace doc-example-bucket with the name of the source or destination bucket.

aws s3api get-bucket-policy --bucket doc-example-bucket

If necessary, run the following command to upload a modified bucket policy. Replace the bucket name and JSON file path.

aws s3api put-bucket-policy --bucket doc-example-bucket --policy file://policy.json

To check and modify the bucket policies using the Amazon S3 console:

  1. Open the Amazon S3 console.
  2. Choose the bucket.
  3. Choose the Permissions tab.
  4. Choose Bucket Policy to review and modify the bucket policy.

Accessing S3 buckets in another account

Important: If your application accesses an S3 bucket that belongs to another AWS account, then the account owner must allow your IAM role on the bucket policy.

For example, the following bucket policy gives all IAM roles and users in emr-account full access to s3://doc-example-bucket/myfolder/.

{
  "Id": "MyCustomPolicy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRootAndHomeListingOfCompanyBucket",
      "Principal": {
        "AWS": [
          "arn:aws:iam::emr-account:root"
        ]
      },
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::doc-example-bucket"
      ],
      "Condition": {
        "StringEquals": {
          "s3:prefix": [
            "",
            "myfolder/"
          ],
          "s3:delimiter": [
            "/"
          ]
        }
      }
    },
    {
      "Sid": "AllowListingOfUserFolder",
      "Principal": {
        "AWS": [
          "arn:aws:iam::emr-account:root"
        ]
      },
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::doc-example-bucket"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "myfolder/*"
          ]
        }
      }
    },
    {
      "Sid": "AllowAllS3ActionsInUserFolder",
      "Principal": {
        "AWS": [
          "arn:aws:iam::emr-account:root"
        ]
      },
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::doc-example-bucket/myfolder/*",
        "arn:aws:s3:::doc-example-bucket/myfolder*"
      ]
    }
  ]
}
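The first two statements in the policy above gate s3:ListBucket on the s3:prefix condition key: StringEquals admits only the bucket root and "myfolder/" exactly, while StringLike admits anything under "myfolder/". A rough simulation of how those conditions combine (our own sketch; real evaluation also checks s3:delimiter and the principal):

```python
import fnmatch

# Condition values lifted from the bucket policy above.
EQUALS_PREFIXES = ["", "myfolder/"]   # StringEquals in AllowRootAndHomeListingOfCompanyBucket
LIKE_PATTERNS = ["myfolder/*"]        # StringLike in AllowListingOfUserFolder

def listing_allowed(prefix: str) -> bool:
    """A ListBucket call is allowed if either statement's prefix condition matches."""
    if prefix in EQUALS_PREFIXES:
        return True
    return any(fnmatch.fnmatchcase(prefix, p) for p in LIKE_PATTERNS)

print(listing_allowed(""))                   # True: root listing
print(listing_allowed("myfolder/"))          # True: exact folder
print(listing_allowed("myfolder/reports/"))  # True: StringLike wildcard
print(listing_allowed("otherfolder/"))       # False: no statement matches
```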




ClientError: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied #79

When trying to access an s3 resource using a wildcard:

df_spending = dd.read_csv(s3_location_wildcard, dtype=dtype, storage_options={"anon": True}, blocksize="16 MiB").persist()

I get this error inside s3fs/core.py:

ClientError Traceback (most recent call last)

/opt/anaconda3/envs/coiled_env/lib/python3.8/site-packages/s3fs/core.py in _lsdir(self, path, refresh, max_items, delimiter)
420 dircache = []
--> 421 async for i in it:
422 dircache.extend(i.get('CommonPrefixes', []))

/opt/anaconda3/envs/coiled_env/lib/python3.8/site-packages/aiobotocore/paginate.py in __anext__(self)
30 while True:
--> 31 response = await self._make_request(current_kwargs)
32 parsed = self._extract_parsed_response(response)

/opt/anaconda3/envs/coiled_env/lib/python3.8/site-packages/aiobotocore/client.py in _make_api_call(self, operation_name, api_params)
150 error_class = self.exceptions.from_code(error_code)
--> 151 raise error_class(parsed_response, operation_name)
152 else:

ClientError: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied

The above exception was the direct cause of the following exception:

PermissionError Traceback (most recent call last)
in <module>
4
5 # df_spending = dd.read_csv(s3_location, dtype=dtype, storage_options={"anon": True}, blocksize="16 MiB").persist()
----> 6 df_spending = dd.read_csv(s3_location_wildcard, dtype=dtype, storage_options={"anon": True}, blocksize="16 MiB").persist()
7
8 df_spending.head()

The individual CSV files can be read into dd.read_csv but the wildcard version throws the above error.

import s3fs
s3fs.__version__

import dask
dask.__version__


Cannot Delete S3 Bucket even though the IAM user as S3FullAccess policy

I cannot delete the bucket from an IAM user account which uses a virtual MFA device profile

I have generated session tokens and added them to the profile section of the

~/.aws/credentials file, and the profile config is added to the

When I run the command to delete this bucket (it is empty)

Also, the bucket does not show up in the Management Console, nor in the output of the ls command

gives no output, and

I have the following policies attached to this user via a group

How do I delete this bucket? Why doesn’t it show up at all? I know it exists because

1 Answer 1

Bucket names must be unique within a partition. A partition is a grouping of Regions. AWS currently has three partitions: aws (Standard Regions), aws-cn (China Regions), and aws-us-gov (AWS GovCloud [US] Regions).

From the Rules for bucket naming. Most likely a bucket with that name was created by a different account not under your control. That is why it doesn’t show up. AFAIK there is no other way to resolve that than choosing a different name for your bucket.


AccessDenied for ListObjects for S3 bucket with s3:* permissions

An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied

when I try to get a folder from my S3 bucket.

Using this command:

The IAM permissions for the bucket look like this:

What do I need to change to be able to use copy and ls successfully?

12 answers

You have given permission to perform commands on objects inside the S3 bucket, but you have not given permission to perform any actions on the bucket itself.

Slightly modifying your policy, it would look like this:

However, that probably gives more permission than is needed. Following the AWS IAM best practice of Granting Least Privilege, it would look something like this:

I tried the following:

That gave me an error:

Using this form worked:

To allow permissions on an s3 bucket, go to the Permissions tab of the s3 bucket and, in the bucket policy, change the Action to this, which will allow all actions to be performed:

I got the same error when using a policy like the one below, although I have "s3:ListBucket" for the s3:ListObjects operation.

Then I fixed it by adding one line: "arn:aws:s3:::bucketname"

I could not access S3 because I

  • first set up key access on the instance (back then, attaching a role after launch was not possible)
  • forgot about it for a few months
  • attached a role to the instance
  • tried to access it. The configured key took priority over the role, and access was denied because the user had not been granted the necessary S3 permissions.

Solution: rm -rf .aws/credentials; then aws falls back to the role.

Ran into similar issues; for me, the problem was that I had different AWS keys set in my bash_profile.

If you have conflicting AWS keys in your bash_profile, the AWS CLI defaults to them instead.

I thought the error was caused by the "s3:ListObjects" action, but I had to add the "s3:ListBucket" action to resolve the "AccessDenied for ListObjects for S3 bucket" problem.

You have to specify a Resource for the bucket via "arn:aws:s3:::bucketname" or "arn:aws:s3:::bucketname*". The latter is preferred, since it also allows manipulating the bucket's objects. Note that there is no slash!

Listing objects is an operation on the Bucket, so the "s3:ListBucket" action is required. Adding an object to the Bucket is an operation on an Object, so the "s3:PutObject" action is needed. You can, of course, add other actions as needed.

If you want to copy all the s3 bucket objects using the command "aws s3 cp s3://bucket-name/data/all-data/ . --recursive" as you mentioned, here is a safe and minimal policy to do that:

The first statement in this policy allows listing objects inside a specific bucket subdirectory. The resource has to be the arn of the S3 bucket, and to limit the listing to that subdirectory only, you can edit the "s3:prefix" value.

The second statement in this policy allows getting objects inside the bucket in a specific subdirectory. This means you will be able to copy anything inside the "s3://bucket-name/data/all-data/" path. Be aware that this does not allow you to copy from parent paths such as "s3://bucket-name/data/".

This solution is aimed at limiting use of AWS CLI commands; if you need to limit S3 access through the AWS console or the API, more policies will be needed. I suggest taking a look here: https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/

A similar issue that led me to this solution can be found here: https://github.com/aws/aws-cli/issues/2408

Hope this helps!

I had this problem and wanted to allow a user to write to a specific path

I’m having an annoying problem using the cli with s3. I’m using an EC2 role tied to a policy that allows full S3 access to a specific folder in a bucket. I have also modified it to add full access/control to the bucket itself too, and later added another section to give listbucket permission to all of s3. I can copy files to the folder no problem. If I try to sync or ls though I get

fatal error: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied

Short of giving full access to all of S3, which I don't want to / can't do, I'm not sure what else to try.

My policy is below. I’ve added some stuff to it but I’m not even sure if they’re valid actions.

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "Stmt-----------------",
			"Effect": "Allow",
			"Action": [
				"s3:*"
			],
			"Resource": [
				"arn:aws:s3:::bucketname/",
				"arn:aws:s3:::bucketname/*",
				"arn:aws:s3:::bucketname/foldername/",
				"arn:aws:s3:::bucketname/foldername/*"
			]
		},
		{
			"Sid": "Stmt---------------",
			"Effect": "Allow",
			"Action": [
				"s3:ListBucket",
				"s3:ListObjects",
				"s3:GetBucket"
			],
			"Resource": [
				"arn:aws:s3::::"
			]
		}
	]
}
