Synapse workspace firewall blocks Data Factory pipeline from accessing dedicated SQL pool

We’re running into a persistent connectivity issue with our Data Factory pipeline trying to access our Synapse dedicated SQL pool. The pipeline worked fine in dev, but production keeps failing with connection timeout errors.

Our Synapse workspace has firewall rules configured, and I’ve verified the Data Factory managed identity has proper permissions. The error suggests it’s being blocked at the network level:


Error: Connection timeout after 30s
Failed to connect to synapse-prod.sql.azuresynapse.net:1433
Verify firewall rules allow access from source IP

I’ve added the Data Factory integration runtime IPs to the Synapse firewall allowlist, but connections are still being blocked. Has anyone dealt with this? I’m not sure whether I’m missing something with the managed IP ranges, or whether there’s a connectivity testing approach I should use before the pipeline runs.

Before you implement a managed VNet solution, you can test connectivity using the Test Connection feature in the Data Factory linked service configuration. It will tell you exactly where the connection is failing: DNS resolution, firewall, authentication, and so on. Also check your Synapse firewall logs in the Networking blade to see whether connection attempts are even reaching the workspace. If you see no logs at all, the traffic is probably being blocked upstream at the NSG or route table level.
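If you want to script that check before pipeline runs, a minimal probe from the same network context narrows it down quickly. Here’s a sketch using only the Python standard library; the hostname is taken from the error message above:

import socket

HOST = "synapse-prod.sql.azuresynapse.net"  # workspace SQL endpoint from the error above
PORT = 1433

# Step 1: DNS resolution - a failure here is a DNS problem, not a firewall problem
try:
    ip = socket.gethostbyname(HOST)
    print(f"DNS OK: {HOST} -> {ip}")
except socket.gaierror as exc:
    raise SystemExit(f"DNS resolution failed: {exc}")
# Step 2: TCP connect on 1433 - a timeout here points at firewall/NSG/route blocking
try:
    with socket.create_connection((HOST, PORT), timeout=10):
        print(f"TCP OK: reached {HOST}:{PORT}")
except OSError as exc:
    raise SystemExit(f"TCP connect to port {PORT} failed: {exc}")

Note that this only proves network reachability on 1433 and says nothing about authentication, so a passing probe plus a failing pipeline points at permissions rather than the network.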

We’re using Azure IR. I see the ‘Allow Azure services’ option but our security team is hesitant because it opens access to all Azure services. Is there a way to restrict it to just our Data Factory instance? We need to maintain strict network isolation for compliance.

I’ll add some context on the compliance side since you mentioned security concerns. The Managed Private Endpoint approach actually gives you better audit trails than firewall rules. Every connection is logged with the specific Data Factory resource ID, so you can track exactly which pipeline accessed what. Here’s the implementation path that addresses all your concerns:

Synapse Firewall Configuration: Disable ‘Allow Azure services’; you don’t need it with private endpoints. Keep your firewall rules strict, allowing only your on-premises IPs or specific VNets where admin access is needed.
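If you manage the firewall through code, disabling ‘Allow Azure services’ amounts to deleting the built-in rule named AllowAllWindowsAzureIps. Here’s a minimal Python sketch, assuming the azure-identity and azure-mgmt-synapse packages; the operation names are worth verifying against your SDK version, and the names and admin IP range are placeholders:

from azure.identity import DefaultAzureCredential
from azure.mgmt.synapse import SynapseManagementClient
from azure.mgmt.synapse.models import IpFirewallRuleInfo

# Placeholders - substitute your own values
SUB_ID, RG, WORKSPACE = "<subscription-id>", "<resource-group>", "synapse-prod"

client = SynapseManagementClient(DefaultAzureCredential(), SUB_ID)

# 'Allow Azure services' is the built-in rule AllowAllWindowsAzureIps;
# deleting it is equivalent to unchecking the box in the portal
client.ip_firewall_rules.begin_delete(RG, WORKSPACE, "AllowAllWindowsAzureIps").result()
# Keep one narrow rule for on-premises admin access (example range is hypothetical)
client.ip_firewall_rules.begin_create_or_update(
    RG, WORKSPACE, "onprem-admin",
    IpFirewallRuleInfo(start_ip_address="203.0.113.10", end_ip_address="203.0.113.20"),
).result()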

Data Factory Managed VNet Setup (a scripted sketch follows the list):

  1. Create Managed Virtual Network IR in Data Factory
  2. Create Managed Private Endpoint pointing to your Synapse workspace SQL endpoint
  3. Approve the private endpoint connection in Synapse Networking blade
  4. Update your linked service to use the Managed VNet IR
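If you provision these through code, here’s a minimal Python sketch of the resource-level pieces of steps 1 and 2, assuming the azure-identity and azure-mgmt-datafactory packages; verify the operation names against your SDK version, and treat all resource names as placeholders. Data Factory names its managed virtual network ‘default’, and group ID ‘Sql’ targets the dedicated SQL endpoint:

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    ManagedVirtualNetwork, ManagedVirtualNetworkResource,
    ManagedPrivateEndpoint, ManagedPrivateEndpointResource,
)

SUB_ID, RG, ADF = "<subscription-id>", "<resource-group>", "<data-factory-name>"
SYNAPSE_ID = ("/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
              "/providers/Microsoft.Synapse/workspaces/synapse-prod")

client = DataFactoryManagementClient(DefaultAzureCredential(), SUB_ID)

# Step 1: ensure the managed VNet exists (Data Factory always names it 'default')
client.managed_virtual_networks.create_or_update(
    RG, ADF, "default",
    ManagedVirtualNetworkResource(properties=ManagedVirtualNetwork()),
)
# Step 2: managed private endpoint to the workspace's dedicated SQL endpoint
# (group_id 'Sql' is the dedicated pool; 'SqlOnDemand' would be serverless)
client.managed_private_endpoints.create_or_update(
    RG, ADF, "default", "synapse-sql-pe",
    ManagedPrivateEndpointResource(
        properties=ManagedPrivateEndpoint(
            private_link_resource_id=SYNAPSE_ID, group_id="Sql",
        )
    ),
)

Steps 3 and 4 stay in the portal: approve the pending connection in the Synapse Networking blade, then point your linked service at an Azure IR that has managed VNet enabled.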

Connectivity Testing: Before running production pipelines, use the linked service Test Connection feature. With managed private endpoints, you should see a successful connection within 2-3 seconds. If it fails, check the following (a DNS check sketch follows the list):

  • Private endpoint approval status in Synapse
  • DNS resolution (should resolve to a private IP in the 10.x.x.x range)
  • Data Factory managed identity still has SQL permissions
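For the DNS item, a quick standard-library check confirms whether the workspace endpoint resolves to a private address. Run it from a host wired into the same private DNS zone; from anywhere else the name will resolve publicly, which proves nothing about the managed VNet path:

import ipaddress
import socket

HOST = "synapse-prod.sql.azuresynapse.net"  # workspace endpoint from the error above

ip = ipaddress.ip_address(socket.gethostbyname(HOST))
print(f"{HOST} resolves to {ip}")
# Over the private endpoint this should be an RFC 1918 address;
# a public IP means this host's DNS is not using the private zone
print("private DNS OK" if ip.is_private else "WARNING: public IP - check private DNS zone")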

Data Factory Managed IP Ranges: With the managed VNet, you no longer need to track Azure IR IP ranges. The connection uses the private endpoint’s static private IP. Your network team can monitor traffic in Azure Monitor, and the source IP will fall within the managed VNet’s address space.

Compliance Benefits:

  • All traffic stays on Azure backbone (never traverses internet)
  • Private endpoint connections are logged with resource IDs
  • You can use Azure Policy to enforce managed VNet usage
  • NSG flow logs capture all connection attempts

One gotcha: Managed VNet IRs take 5-7 minutes to warm up on first use, so factor this into your pipeline SLAs. After warmup, performance is identical to a regular Azure IR. We’ve been running this setup for 8 months across 40+ pipelines with zero connectivity issues.

Good call on the logs. I found the connection attempts in the Synapse firewall logs: they’re being rejected with source IPs that aren’t in our allowlist. The IPs keep changing, which confirms the Azure IR dynamic IP issue. Going to proceed with the managed VNet solution.