IAM policy blocks access to Cloud Object Storage bucket when automated backup runs

We’re running automated backups from our ERP system to Cloud Object Storage, but they started failing yesterday with AccessDenied errors. The backup script uses a service credential that was working fine for weeks.

The error occurs when the script tries to upload files to our production backup bucket. I’ve verified the service credential is still active and the IAM policy scoping looks correct to me, but clearly something is blocking access. The service credential has Writer role on the bucket.

The integration worked perfectly until we added a second backup bucket for disaster recovery. Now neither bucket accepts uploads from the automated job. Manual uploads through the console work fine with my user account.

Has anyone dealt with IAM policies blocking service credentials after configuration changes? Not sure if this is a policy scope issue or something with how the service credential permissions are set up.

I reviewed the authorization policies and found the issue: there was no service-to-service authorization for the new bucket. But I’m still getting AccessDenied on the original bucket too, which makes me think the service credential itself needs to be regenerated. Should I delete and recreate the credential, or is there a way to refresh the permissions?

Also worth checking if there are any authorization policies between services that might have been affected. Sometimes when you create new resources, the authorization policies don’t automatically extend to them. Your ERP backup service might need explicit authorization to access both COS buckets.

Good point. I checked the IAM policy and it does reference the specific bucket CRN. But I’m confused about the correct way to grant access to multiple buckets. Should I use a wildcard in the resource specification, or do I need to list each bucket separately in the policy? Also, the service credential was created with Writer role at the instance level, not bucket level. Could that be causing conflicts?

Don’t delete the credential yet. Let me walk through the complete fix for your IAM policy scoping and service credential permissions issues.

First, verify your service credential’s current access:


# show the service ID details and confirm it's still active
ibmcloud iam service-id SERVICE_ID_NAME
# list every IAM policy attached to that service ID
ibmcloud iam service-policies SERVICE_ID_NAME

For IAM policy scoping with multiple buckets, you have two approaches:

  1. Wildcard approach (simpler for automated backups): Create a policy with resource type ‘bucket’ and ‘*’ as the resource ID, or simply scope the policy to the whole instance. Either way, this grants access to every bucket in the instance.

  2. Explicit listing (more secure): Create a separate policy for each bucket CRN (a CLI sketch of both approaches follows this list):

  • crn:v1:bluemix:public:cloud-object-storage:global:a/ACCOUNT_ID:INSTANCE_ID:bucket:backup-bucket-prod
  • crn:v1:bluemix:public:cloud-object-storage:global:a/ACCOUNT_ID:INSTANCE_ID:bucket:backup-bucket-dr
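
Here’s a sketch of both approaches with the CLI (SERVICE_ID_NAME, INSTANCE_ID, and the bucket names are placeholders; substitute your own values):

# explicit listing: one bucket-scoped Writer policy per bucket
ibmcloud iam service-policy-create SERVICE_ID_NAME \
  --roles Writer \
  --service-name cloud-object-storage \
  --service-instance INSTANCE_ID \
  --resource-type bucket \
  --resource backup-bucket-prod
# repeat with --resource backup-bucket-dr for the DR bucket

# wildcard-style alternative: scope to the whole instance (all buckets)
ibmcloud iam service-policy-create SERVICE_ID_NAME \
  --roles Writer \
  --service-name cloud-object-storage \
  --service-instance INSTANCE_ID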

For service credential permissions, the issue is that instance-level Writer role doesn’t automatically grant bucket access when bucket-level policies exist. Here’s the fix:

  1. Remove the instance-level Writer role from the service credential (a command sketch follows this list)

  2. Create bucket-level policies instead:

    • Grant ‘Writer’ role on each bucket resource
    • Include ‘Object Writer’ role for object operations
  3. Add service-to-service authorization:


# grant the source service (your ERP backup service) Writer access
# to the COS instance; SERVICE_NAME is the source service's IAM name
ibmcloud iam authorization-policy-create \
  SERVICE_NAME cloud-object-storage \
  Writer --target-service-instance-id INSTANCE_ID
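
For step 1, a minimal sketch, assuming POLICY_ID comes from the service-policies listing shown earlier (-f skips the confirmation prompt):

# find the instance-level Writer policy's ID, then remove it
ibmcloud iam service-policies SERVICE_ID_NAME
ibmcloud iam service-policy-delete SERVICE_ID_NAME POLICY_ID -f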

For the automated backup integration, ensure your backup script uses the HMAC credentials correctly. AccessDenied errors often come down to a request signature mismatch caused by clock skew or an incorrect endpoint URL.

Verify your backup script is using:

  • Private endpoint if running within IBM Cloud (faster, no egress charges)
  • Correct region-specific endpoint
  • HMAC credentials from the service credential (not API key directly)
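
If the script goes through s3cmd, a minimal ~/.s3cfg sketch looks like this (us-south and the private endpoint are assumptions; substitute your region, and drop ‘private.’ if the job runs outside IBM Cloud):

# ~/.s3cfg - endpoint and region are assumptions, adjust to your deployment
host_base = s3.private.us-south.cloud-object-storage.appdomain.cloud
host_bucket = s3.private.us-south.cloud-object-storage.appdomain.cloud
access_key = HMAC_ACCESS_KEY
secret_key = HMAC_SECRET_KEY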

After updating policies, wait 5-10 minutes for propagation. The service credential itself doesn’t need regeneration; the policies control access. Test with a manual backup run before scheduling:


# uses the HMAC keys from the service credential; add --host and
# --host-bucket if your ~/.s3cfg doesn't already point at the COS endpoint
s3cmd put test-file.txt s3://backup-bucket-prod/ \
  --access_key=HMAC_ACCESS_KEY \
  --secret_key=HMAC_SECRET_KEY

If you still see AccessDenied after policy updates, check the Activity Tracker logs. They’ll show exactly which policy evaluation failed and why. Look for events with action ‘cloud-object-storage.object.create’ and outcome ‘failure’.
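
An abridged sketch of what a failed upload looks like in those logs (Activity Tracker events follow the CADF format; exact payloads vary, so treat the field values here as illustrative):

{
  "action": "cloud-object-storage.object.create",
  "outcome": "failure",
  "initiator": { "id": "iam-ServiceId-..." },
  "reason": { "reasonCode": 403, "reasonType": "Forbidden" }
}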

One gotcha: if you’re using resource groups, make sure the service credential has Viewer role on the resource group itself, in addition to the bucket policies. This is required for the credential to even see the buckets exist.
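
A sketch of that grant (RESOURCE_GROUP_ID is whichever group holds the buckets):

# Viewer on the resource group itself, so the credential can see the buckets
ibmcloud iam service-policy-create SERVICE_ID_NAME \
  --roles Viewer \
  --resource-type resource-group \
  --resource RESOURCE_GROUP_ID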

The key principle here is that IAM policy scoping for service credentials needs to be explicit about resources. Instance-level roles are convenient, but they don’t play well with bucket-level policies. For production backup automation, always use bucket-level policies together with service-to-service authorization.

The instance-level role versus bucket-level policy conflict is probably your issue. When the two overlap, the effective access isn’t always what you expect, and your bucket-level policy scope is likely what’s tripping up the instance-level Writer role. Make sure the service credential permissions align with your policy scope. When you have multiple buckets with different access requirements, I’d recommend bucket-level policies for each bucket rather than instance-level roles; that gives you much finer control over what the backup job can touch.

I’ve seen this before. When you added the second bucket, did you update the IAM policy to include both bucket CRNs? Service credentials need explicit resource access. Check whether your policy still references the original single bucket CRN instead of wildcarding or listing both buckets. The policy scope might be too narrow now.
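
A quick way to check, assuming your CLI version supports JSON output on this command:

# dump each policy's resource attributes; look for a single hard-coded bucket
ibmcloud iam service-policies SERVICE_ID_NAME --output json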