Cloud Object Storage access denied when using VPC endpoint for data transfers

I’m trying to transfer large datasets from our compute instances to Cloud Object Storage using a VPC endpoint for private connectivity, but I keep getting access denied errors. The same credentials work fine when accessing COS through the public endpoint, so I know the IAM permissions are correct.

The VPC endpoint is configured and bound to our VPC. I can resolve the private endpoint hostname from the compute instances, but PUT operations fail with 403 Forbidden:


aws s3 cp data.zip s3://my-bucket/ \
  --endpoint-url https://s3.private.us-south.cloud-object-storage.appdomain.cloud
upload failed: Access Denied (403)

The service credential has Writer access to the bucket, and public endpoint access works with the same credential. I’ve verified the VPC endpoint configuration shows as ‘stable’ in the console. What additional permissions or configuration is needed for COS access via VPC endpoints? We need private connectivity to avoid egress charges on multi-TB transfers.

Found it! The bucket had a configuration policy that only allowed access from the public endpoint. There was a setting in the bucket access policies section that I had to toggle to enable private endpoint access. Once I enabled that, uploads through the VPC endpoint started working immediately.

I checked IAM authorizations and there is a policy allowing ‘VPC Infrastructure Services’ to access my COS instance with ‘Reader’ role. Should that be ‘Writer’ instead? The policy was auto-created when I set up the VPC endpoint, but maybe the role is insufficient for uploads.

VPC endpoints for COS require additional IAM authorization beyond just the service credential permissions. You need to create a service-to-service authorization policy that allows your VPC to access the COS instance.

Check if you have an authorization policy between the VPC infrastructure service and your COS instance. Without this, the VPC endpoint can’t proxy requests to COS even if your service credential is valid.

I’ve seen this exact issue before. The problem is usually that the VPC endpoint needs to be explicitly allowed in the COS bucket’s access policy. Even though your service credential has Writer access, the bucket might have a policy that restricts access to specific endpoints or network paths.

Go to your bucket settings in the COS console and look for ‘Access Policies’ or ‘Bucket Configuration’. If there’s an allowed IP list or endpoint restriction, you need to add an exception for the VPC endpoint’s network range or enable private endpoint access.

The authorization policy role determines what the VPC endpoint itself can do, but your actual data operations are still governed by the service credential’s IAM permissions. So the Reader role on the authorization policy is typically sufficient - the VPC endpoint just needs to be able to route requests.

Have you checked the bucket’s IAM policy? Sometimes buckets have additional policies that restrict access to specific IP ranges or network paths. If the bucket policy only allows public endpoint access, that would explain why private endpoint requests are denied even with valid credentials.

Perfect! You’ve identified the root cause. Let me provide a comprehensive solution for configuring Cloud Object Storage access via VPC endpoints:

1. VPC Endpoint Configuration: First, ensure your VPC endpoint is properly configured and bound to the correct VPC:


ibmcloud is endpoint-gateways --output json | \
  jq '.[] | select(.target.resource_type=="provider_cloud_service") | {name, lifecycle_state, target}'

Verify the endpoint shows ‘stable’ lifecycle state and targets the COS service.
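If you script this check, the gateway JSON can be filtered in a few lines of Python instead of jq. The field names (`lifecycle_state`, `target.resource_type`) follow the VPC endpoint-gateway API; the sample data below is invented for illustration:

```python
import json

def unstable_gateways(gateways_json: str):
    """Return names of COS-targeting endpoint gateways not in a 'stable' state."""
    gateways = json.loads(gateways_json)
    return [
        gw["name"]
        for gw in gateways
        if gw.get("target", {}).get("resource_type") == "provider_cloud_service"
        and gw.get("lifecycle_state") != "stable"
    ]

# Hypothetical sample of what the CLI might return
sample = json.dumps([
    {"name": "cos-vpe", "lifecycle_state": "stable",
     "target": {"resource_type": "provider_cloud_service"}},
    {"name": "cos-vpe-old", "lifecycle_state": "pending",
     "target": {"resource_type": "provider_cloud_service"}},
])
print(unstable_gateways(sample))  # ['cos-vpe-old']
```

An empty list means every COS gateway is stable and the 403 is coming from somewhere else.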

2. IAM Role Permissions (Service Credential): Your service credential needs appropriate IAM roles for the intended operations:

a) List and read operations: Reader role

b) Upload and write operations: Writer role

c) Delete operations: Manager role

Verify your service credential’s roles:


ibmcloud resource service-key SERVICE_CREDENTIAL_NAME

Create or update the service credential if needed:


ibmcloud resource service-key-create cos-writer-key Writer \
  --instance-name my-cos-instance \
  --parameters '{"HMAC":true}'
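Once the key exists, the HMAC pair has to be wired into whatever S3 client you use. The credential JSON for a COS service key with HMAC enabled carries a `cos_hmac_keys` object with `access_key_id` and `secret_access_key`; a small Python sketch (dummy values only) renders that into an AWS-CLI-style credentials profile:

```python
import configparser
import io
import json

def hmac_to_aws_profile(service_key_json: str, profile: str = "cos") -> str:
    """Render the cos_hmac_keys from a COS service-key credential blob
    as an AWS-CLI credentials profile (INI format)."""
    creds = json.loads(service_key_json)["cos_hmac_keys"]
    cfg = configparser.ConfigParser()
    cfg[profile] = {
        "aws_access_key_id": creds["access_key_id"],
        "aws_secret_access_key": creds["secret_access_key"],
    }
    buf = io.StringIO()
    cfg.write(buf)
    return buf.getvalue()

# Dummy credentials for illustration only
key = json.dumps({"cos_hmac_keys": {
    "access_key_id": "EXAMPLEKEY", "secret_access_key": "EXAMPLESECRET"}})
print(hmac_to_aws_profile(key))
```

Paste the output into ~/.aws/credentials and select it with --profile cos on the aws commands shown later in this thread.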

3. Bucket Policy Updates (Your Issue): COS buckets can have access policies that restrict connectivity to specific endpoints. This is what was blocking your VPC endpoint access:

a) Check current bucket configuration:

In the IBM Cloud console:

  • Navigate to your COS instance > Buckets > Select bucket
  • Go to ‘Configuration’ tab > ‘Access policies’ section
  • Look for ‘Allowed IP addresses’ or ‘Endpoint restrictions’

b) Enable private endpoint access:

You need to explicitly allow private endpoint connectivity. This can be done via:

Console method:

  • Bucket Configuration > Access policies
  • Enable ‘Allow access from private endpoints’
  • Save changes

API method (via the bucket resource configuration API):

{
  "firewall": {
    "allowed_network_type": ["private", "public"]
  }
}

Apply with a PATCH against the resource configuration endpoint (authenticated with an IAM bearer token):


curl -X PATCH "https://config.cloud-object-storage.cloud.ibm.com/v1/b/BUCKET_NAME" \
  -H "Authorization: Bearer $IAM_TOKEN" \
  -H "Content-Type: application/json" \
  -d @bucket-config.json
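It's worth validating the patch body before sending it, since a typo in the network type silently locks you out of one path. A minimal Python helper, assuming the allowed values are ‘public’, ‘private’, and ‘direct’ per the COS resource configuration API:

```python
import json

# Network types accepted by the COS bucket firewall (per the
# resource configuration API; treat as an assumption to verify)
VALID_NETWORK_TYPES = {"public", "private", "direct"}

def firewall_patch(*network_types: str) -> str:
    """Build the JSON merge-patch body that sets a bucket's
    allowed_network_type firewall list, rejecting unknown values."""
    invalid = set(network_types) - VALID_NETWORK_TYPES
    if invalid:
        raise ValueError(f"unknown network types: {sorted(invalid)}")
    body = {"firewall": {"allowed_network_type": list(network_types)}}
    return json.dumps(body, indent=2)

print(firewall_patch("private", "public"))
```

Write the result to bucket-config.json and send it with the curl command above.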

c) Update IAM authorization policy:

Ensure service-to-service authorization exists:


ibmcloud iam authorization-policy-create is \
  cloud-object-storage Reader \
  --source-service-instance-id VPC_INSTANCE_ID \
  --target-service-instance-id COS_INSTANCE_ID

Note: Writer role on the authorization policy is not required - it’s the service credential that needs Writer role for uploads, not the VPC endpoint authorization.

4. Network Connectivity Verification: Test DNS resolution and network path to the private endpoint:

a) From a compute instance in the VPC:


nslookup s3.private.us-south.cloud-object-storage.appdomain.cloud
telnet s3.private.us-south.cloud-object-storage.appdomain.cloud 443

b) Verify routing to the VPC endpoint:


traceroute s3.private.us-south.cloud-object-storage.appdomain.cloud

The route should stay on private RFC 1918 addresses inside your VPC (for example 10.x.x.x), confirming traffic reaches COS through the VPC endpoint rather than the public internet.
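This check can also be scripted with the standard library: resolve the endpoint and flag any address that isn't private. The `is_private` classifier covers the 10/8, 172.16/12, and 192.168/16 ranges; the spot-checks below run offline, and the resolver call is only a sketch to run from a VPC instance:

```python
import ipaddress
import socket

def resolves_to_private_only(hostname: str) -> bool:
    """Resolve hostname on port 443 and return True iff every
    returned address is an RFC 1918 / private address."""
    infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    addresses = {info[4][0] for info in infos}
    return all(ipaddress.ip_address(a).is_private for a in addresses)

# Offline spot-checks of the classifier itself:
assert ipaddress.ip_address("10.12.0.5").is_private
assert ipaddress.ip_address("172.20.1.1").is_private
assert not ipaddress.ip_address("8.8.8.8").is_private
```

If resolves_to_private_only('s3.private.us-south.cloud-object-storage.appdomain.cloud') returns False from inside the VPC, requests are leaving via the public path and the endpoint gateway or DNS setup needs another look.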

5. Data Transfer Operations: Once configuration is complete, test uploads:

a) Using AWS CLI:

aws s3 cp data.zip s3://my-bucket/ \
  --endpoint-url https://s3.private.us-south.cloud-object-storage.appdomain.cloud \
  --region us-south

b) Using IBM Cloud CLI:

ibmcloud cos upload --bucket my-bucket \
  --key data.zip --file ./data.zip \
  --endpoint-url https://s3.private.us-south.cloud-object-storage.appdomain.cloud

c) For large transfers, use multipart uploads:

aws s3 cp large-file.tar.gz s3://my-bucket/ \
  --endpoint-url https://s3.private.us-south.cloud-object-storage.appdomain.cloud \
  --region us-south \
  --storage-class STANDARD \
  --metadata purpose=backup
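On the multipart math: S3-compatible APIs (COS included) cap an upload at 10,000 parts with a 5 MiB minimum part size, so for multi-TB objects the part size has to grow with the object. A quick sketch of picking a workable part size:

```python
MIN_PART = 5 * 1024**2   # 5 MiB minimum part size (S3-compatible limit)
MAX_PARTS = 10_000       # maximum parts per multipart upload

def choose_part_size(object_size: int, preferred: int = 64 * 1024**2) -> int:
    """Pick a part size >= the minimum that keeps the upload
    under the 10,000-part limit, doubling from a preferred size."""
    part = max(preferred, MIN_PART)
    while object_size > part * MAX_PARTS:
        part *= 2
    return part

one_tib = 1024**4
size = choose_part_size(one_tib)
print(size // 1024**2, "MiB parts,", -(-one_tib // size), "parts")
# → 128 MiB parts, 8192 parts
```

The AWS CLI does this sizing automatically, but if you drive uploads from your own code this is the constraint to respect.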

6. Troubleshooting Access Denied Errors: If you still get 403 errors after enabling private endpoint access:

a) Check bucket-scoped IAM policies:

IBM COS governs bucket access through IAM rather than S3-style bucket policies; a bucket-scoped policy carries a resource attribute naming the bucket. Review them in the console under Manage > Access (IAM) > Access policies, or list the policies attached to the service ID behind your credential:


ibmcloud iam service-policies SERVICE_ID_NAME

Look for policies scoped to a different bucket, or IP-based conditions that exclude the private network path.

b) Verify HMAC credentials are current:

HMAC credentials aren’t bound to a particular endpoint, but a revoked or stale key also produces 403s, so generating a fresh one is a cheap way to rule that out:


ibmcloud resource service-key-create new-hmac-key Writer \
  --instance-name my-cos-instance \
  --parameters '{"HMAC":true}'

c) Review Activity Tracker events:

COS management events (and data events, if enabled on the bucket) flow to Activity Tracker. Search for failed cloud-object-storage actions around the time of the 403s and check the reason field in the event data for the specific denial cause.

d) Check for context-based restrictions:

If your account uses CBR rules, ensure the VPC endpoint’s network zone is allowed:


ibmcloud cbr rules --service-name cloud-object-storage

7. Performance Optimization: For multi-TB transfers over VPC endpoints:

  • Use multipart uploads for files >100MB
  • Enable parallel transfers for aws s3 sync: run aws configure set default.s3.max_concurrent_requests 20 first
  • Monitor transfer speeds with --debug flag to identify bottlenecks
  • Consider using IBM Aspera for very large datasets (50GB+)
  • Verify compute instance network bandwidth isn’t limiting throughput
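The parallelism bullet can also be done in your own code. A hedged sketch of fanning parts out over a thread pool, with a stand-in upload_part where a real client call (boto3 or ibm_boto3 against the private endpoint) would go:

```python
from concurrent.futures import ThreadPoolExecutor

def upload_part(part_number: int, data: bytes) -> dict:
    """Stand-in for a real multipart upload_part call; returns a fake ETag.
    Replace the body with your S3 client's upload_part in practice."""
    return {"PartNumber": part_number, "ETag": f"etag-{part_number}"}

def upload_in_parallel(payload: bytes, part_size: int, workers: int = 20):
    """Split payload into fixed-size parts and upload them concurrently,
    preserving part order in the returned list."""
    parts = [payload[i:i + part_size] for i in range(0, len(payload), part_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(upload_part, range(1, len(parts) + 1), parts))

results = upload_in_parallel(b"x" * 1000, part_size=100)
print(len(results), results[0]["ETag"])  # 10 etag-1
```

Keeping the results ordered by part number matters because completing a multipart upload requires the part list in ascending order.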

Key Takeaways:

  • VPC endpoint access requires THREE configurations: VPC endpoint setup, IAM service credential permissions, AND bucket policy allowing private endpoints
  • The bucket policy is often overlooked - it must explicitly allow private endpoint access even when IAM permissions are correct
  • Service-to-service authorization (VPC → COS) typically only needs Reader role
  • Your service credential needs Writer role for uploads, Manager role for deletes
  • Always test with small files first before initiating multi-TB transfers
  • Monitor costs - while VPC endpoints eliminate egress charges, you still pay for storage and API requests

Your specific issue was the bucket policy restricting access to public endpoints only. Enabling private endpoint access in the bucket configuration resolved the 403 errors immediately.

Another thing to verify - are you using HMAC credentials or IAM API keys for authentication? VPC endpoints work with both, but the authentication flow is slightly different. If you’re using HMAC credentials generated before the VPC endpoint was created, try generating new HMAC credentials and see if that resolves the issue.

Also check if your bucket has any firewall rules configured. COS supports IP-based access policies that could be blocking requests coming through the VPC endpoint’s network path.