ERP activity logs streamed to Azure Sentinel for real-time threat detection and compliance reporting

We successfully implemented real-time threat detection by streaming our ERP activity logs into Azure Sentinel. Our legacy ERP system generates extensive audit logs covering user access, financial transactions, and data modifications, but until this project we had no centralized security monitoring over them.

The implementation involved three main components:

  1. Log Analytics Integration: Created custom workspace with data retention policies and configured log ingestion pipelines
  2. Sentinel Data Connectors: Built custom connector using REST API to pull ERP logs every 5 minutes
  3. Custom Analytics Rules: Developed KQL queries to detect suspicious patterns like after-hours access, bulk data exports, and privilege escalations (a sample rule is sketched just below)
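
For context, here is roughly what one of those rules looks like. This is a minimal sketch rather than our production query: it assumes the ERP events end up in CommonSecurityLog (see below), and the "LegacyERP" product tag, the UTC offset, and the 06:00-20:00 business-hours window are placeholders you would adjust.

// Sketch of an after-hours access rule (placeholder values, not production)
CommonSecurityLog
| where TimeGenerated > ago(15m)
| where DeviceProduct == "LegacyERP"                 // placeholder product tag
| extend LocalHour = hourofday(TimeGenerated + 2h)   // shift UTC to local time
| where LocalHour < 6 or LocalHour >= 20             // outside business hours
| summarize EventCount = count(), Activities = make_set(Activity)
          by SourceUserName, bin(TimeGenerated, 15m)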

The connector pulls the raw audit logs from the ERP's REST API, maps them to the CommonSecurityLog schema, and posts the transformed records to Log Analytics as JSON. We're now detecting threats within minutes instead of the days it used to take with manual audits.
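
To make the transformation concrete, here is a rough sketch of the kind of field mapping involved (not our exact mapping). The ERP field names on the left are made up for illustration; the names on the right are standard CommonSecurityLog columns.

# Sketch of the ERP -> CommonSecurityLog field mapping (ERP names are illustrative)
ERP_TO_CSL = {
    "user_id":       "SourceUserName",
    "event_code":    "DeviceEventClassID",
    "event_desc":    "Activity",
    "client_ip":     "SourceIP",
    "severity":      "LogSeverity",
    "timestamp_utc": "TimeGenerated",
}

def to_common_security_log(erp_event: dict) -> dict:
    """Map a parsed ERP audit record onto CommonSecurityLog-style field names."""
    record = {csl: erp_event.get(erp) for erp, csl in ERP_TO_CSL.items()}
    record["DeviceVendor"] = "LegacyERP"   # constant tags, handy for KQL filters
    record["DeviceProduct"] = "AuditLog"
    return record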


// Sample log transformation pseudocode:
1. Fetch ERP audit logs via REST API endpoint
2. Parse XML response and extract security events
3. Map ERP fields to CommonSecurityLog schema
4. POST transformed logs to Log Analytics workspace
5. Trigger Sentinel analytics rules on ingestion
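
For anyone building something similar, here is a minimal Python sketch of steps 1, 2, and 4 (not our exact connector). The ERP endpoint, token, and XML layout are made up, and for the upload this sketch uses the legacy HTTP Data Collector API, which writes to a custom <Log-Type>_CL table; a newer deployment would use the DCR-based Logs Ingestion API instead.

import base64
import datetime
import hashlib
import hmac
import json
import xml.etree.ElementTree as ET

import requests

# Placeholders: supply your own values.
ERP_API_URL = "https://erp.example.internal/api/auditlog"   # hypothetical endpoint
ERP_TOKEN = "..."
WORKSPACE_ID = "..."   # Log Analytics workspace ID
SHARED_KEY = "..."     # workspace primary key

def fetch_erp_events(url: str, token: str) -> list[dict]:
    """Steps 1-2: pull the audit log and parse the XML response (layout is illustrative)."""
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.text)
    return [{child.tag: child.text for child in event} for event in root.findall(".//event")]

def build_signature(date: str, content_length: int) -> str:
    """SharedKey HMAC-SHA256 signature required by the HTTP Data Collector API."""
    string_to_sign = f"POST\n{content_length}\napplication/json\nx-ms-date:{date}\n/api/logs"
    digest = hmac.new(base64.b64decode(SHARED_KEY), string_to_sign.encode("utf-8"),
                      digestmod=hashlib.sha256).digest()
    return f"SharedKey {WORKSPACE_ID}:{base64.b64encode(digest).decode()}"

def post_to_log_analytics(records: list[dict], log_type: str = "ERPSecurityEvents") -> None:
    """Step 4: POST the records; they land in the <log_type>_CL custom table."""
    body = json.dumps(records).encode("utf-8")
    rfc1123 = datetime.datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")
    headers = {
        "Content-Type": "application/json",
        "Authorization": build_signature(rfc1123, len(body)),
        "Log-Type": log_type,
        "x-ms-date": rfc1123,
    }
    uri = f"https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01"
    requests.post(uri, data=body, headers=headers, timeout=30).raise_for_status()

if __name__ == "__main__":
    events = fetch_erp_events(ERP_API_URL, ERP_TOKEN)
    # Step 3 (mapping to CommonSecurityLog-style fields) is sketched earlier in the thread.
    post_to_log_analytics(events)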

Anyone else implementing similar ERP security monitoring?

We use the Standard tier with 90-day retention for hot data, then automatic transition to the Archive tier. For cost optimization, we implemented filtering at the connector level: we only send security-relevant events (authentication, authorization, sensitive data access, configuration changes). This reduced our daily ingestion from 45GB to about 8GB. We exclude routine read operations and non-security system logs. The filtering logic is configurable via a JSON config file, so we can adjust it without redeploying the connector. Definitely filter at the source rather than ingesting everything; the cost savings are substantial.
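
In case it helps, the filter logic is conceptually just this; the file name, keys, and category names below are made-up examples rather than our actual config.

import json

# Example filters.json (illustrative):
# {
#   "include_categories": ["authentication", "authorization",
#                          "sensitive_data_access", "configuration_change"],
#   "exclude_event_codes": ["READ_OK", "HEARTBEAT"]
# }

def load_filters(path: str = "filters.json") -> dict:
    with open(path) as fh:
        return json.load(fh)

def is_security_relevant(event: dict, filters: dict) -> bool:
    """Drop routine reads and non-security system events before ingestion."""
    if event.get("event_code") in filters.get("exclude_event_codes", []):
        return False
    return event.get("category") in filters.get("include_categories", [])

# Usage: events = [e for e in raw_events if is_security_relevant(e, filters)]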

We containerized the connector using Docker and deployed it to Azure Container Instances with automatic restarts. The connector is version-controlled in Azure DevOps with a CI/CD pipeline. When ERP updates occur, we test against a dev environment first, then promote through staging to production. The connector configuration is externalized, so schema changes usually only require config updates, not code changes. We also implemented health monitoring: the connector sends heartbeat metrics to Application Insights every minute, and we have alerts if it stops sending data.
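
The heartbeat itself is nothing fancy; something along these lines works, shown here with the older applicationinsights SDK for illustration (the metric name and environment variable are placeholders, and a newer setup would likely use azure-monitor-opentelemetry instead).

import os
import time

from applicationinsights import TelemetryClient  # legacy App Insights SDK

tc = TelemetryClient(os.environ["APPINSIGHTS_INSTRUMENTATIONKEY"])

def heartbeat_loop(interval_seconds: int = 60) -> None:
    """Emit a metric once a minute so a missing heartbeat can fire an alert."""
    while True:
        tc.track_metric("erp_connector_heartbeat", 1)
        tc.flush()   # send immediately rather than waiting for the batch
        time.sleep(interval_seconds)

# heartbeat_loop() could run in a background thread next to the connector's poll loop.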

This is exactly what we needed! We’re running SAP ECC and struggling with the same visibility gap. Quick question on your Log Analytics integration - what retention tier did you configure for the ERP logs? We’re debating between Standard and Archive tiers given the volume. Also, did you implement any filtering at the source to reduce ingestion costs, or are you sending everything to Sentinel?