I’d like to hear how others approach automated testing for production planning deployments in S/4HANA 1909. Our challenge is balancing comprehensive end-to-end process validation against practical deployment timelines.
We currently run basic transport-based smoke tests after each deployment, checking that key transactions load and basic configuration is intact. But production planning has frequent customizing changes (work centers, routing updates, BOM modifications), and our smoke tests often miss issues that only surface during actual planning runs.
I’m evaluating whether to invest in a full data-driven test framework with ABAP Unit that validates complete planning scenarios (MRP run, capacity leveling, order release) versus keeping lightweight deployment checks and relying on business user acceptance testing.
What’s your experience with automated testing depth in production planning modules? Do you find value in end-to-end automation, or is it overkill for configuration-heavy deployments?
Consider the test pyramid principle here. You want many unit tests (configuration validation), fewer integration tests (planning run execution), and minimal end-to-end tests (complete order-to-production scenarios). For production planning, I’d recommend: 1) smoke tests for deployment validation, 2) integration tests for MRP and capacity planning functions using test data, 3) one or two critical end-to-end scenarios that represent your most important business processes. This balanced approach gives you confidence without excessive maintenance burden. Also, make your tests self-healing where possible: use dynamic test data generation rather than hardcoded values.
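As a sketch of what the unit-test layer with dynamic test data can look like, here is a minimal ABAP Unit class that validates configuration by selecting a work center at runtime rather than asserting on a hardcoded key. The class name, method name, and plant 1000 are illustrative assumptions, not from the thread:

```abap
"Configuration-validation unit test; all names here are illustrative.
CLASS ltc_pp_config DEFINITION FOR TESTING
  DURATION SHORT RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    METHODS work_centers_exist FOR TESTING.
ENDCLASS.

CLASS ltc_pp_config IMPLEMENTATION.
  METHOD work_centers_exist.
    "Dynamic test data: pick any work center (CRHD, object type 'A')
    "in the plant at runtime instead of hardcoding one that may be
    "renamed or archived later.
    SELECT objid FROM crhd
      WHERE objty = 'A' AND werks = '1000'
      INTO TABLE @DATA(lt_wc)
      UP TO 1 ROWS.
    cl_abap_unit_assert=>assert_not_initial(
      act = lt_wc
      msg = 'No work center found in plant 1000 after transport' ).
  ENDMETHOD.
ENDCLASS.
```

The point is that the assertion survives master data churn: it checks the invariant ("at least one active work center exists") rather than a specific record.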
From a DevOps perspective, I’d advocate for layered testing. Smoke tests after transport import are non-negotiable: they catch deployment failures quickly. But production planning benefits from additional scenario-based tests that run nightly, not in the deployment pipeline. We use ABAP Unit with test data fixtures that represent our common planning situations. The key is separating deployment validation (fast, basic) from regression testing (comprehensive, slower). This way you don’t block releases while still maintaining quality coverage.
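The fast/slow split described above maps directly onto ABAP Unit’s DURATION and RISK LEVEL attributes, which test runners can filter on, so the pipeline executes only the short tests while the nightly job runs everything. A minimal sketch, with illustrative class names and an assumed plant 1000:

```abap
"Pipeline layer: fast existence checks only, marked DURATION SHORT.
CLASS ltc_deploy_smoke DEFINITION FOR TESTING
  DURATION SHORT RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    METHODS mrp_plant_params_exist FOR TESTING.
ENDCLASS.

CLASS ltc_deploy_smoke IMPLEMENTATION.
  METHOD mrp_plant_params_exist.
    "T399D holds the MRP plant parameters.
    SELECT SINGLE werks FROM t399d
      WHERE werks = '1000'
      INTO @DATA(lv_werks).
    cl_abap_unit_assert=>assert_subrc(
      exp = 0
      msg = 'MRP plant parameters missing after import' ).
  ENDMETHOD.
ENDCLASS.

"Nightly layer: scenario tests marked DURATION LONG so the
"deployment pipeline can exclude them by duration.
CLASS ltc_planning_regression DEFINITION FOR TESTING
  DURATION LONG RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    METHODS mrp_scenario FOR TESTING.
ENDCLASS.

CLASS ltc_planning_regression IMPLEMENTATION.
  METHOD mrp_scenario.
    "Body omitted in this sketch: it would execute a planning run
    "against the fixture data and assert on the generated orders.
  ENDMETHOD.
ENDCLASS.
```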
The frequency of config changes in production planning is exactly why end-to-end automation is valuable. We had a situation where a work center capacity change broke downstream capacity leveling, but smoke tests passed because transactions loaded fine. The issue only appeared when running CM01. Now we have automated tests that execute key planning functions: MD01 with test material, CM01 for capacity check, CO40 for order release. These aren’t full business scenarios, but they exercise the critical paths. The investment paid off within three months when we caught a routing issue before production deployment.
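A sketch of what one of those critical-path tests can look like. The post doesn’t say how the transactions are driven, so `zcl_pp_planning_api` below is a hypothetical wrapper (for example around the MRP function modules or batch input), not a standard SAP class, and the test material number is invented:

```abap
CLASS ltc_planning_paths DEFINITION FOR TESTING
  DURATION MEDIUM RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    "Hypothetical dedicated test material.
    CONSTANTS c_test_matnr TYPE matnr VALUE 'TEST-FERT-01'.
    METHODS mrp_creates_planned_order FOR TESTING.
ENDCLASS.

CLASS ltc_planning_paths IMPLEMENTATION.
  METHOD mrp_creates_planned_order.
    "zcl_pp_planning_api is a hypothetical wrapper that runs
    "single-item MRP for the test material and returns the
    "resulting planned orders.
    DATA(lo_api) = NEW zcl_pp_planning_api( ).
    DATA(lt_plaf) = lo_api->run_mrp( iv_matnr = c_test_matnr ).
    cl_abap_unit_assert=>assert_not_initial(
      act = lt_plaf
      msg = |MRP produced no planned orders for { c_test_matnr }| ).
  ENDMETHOD.
ENDCLASS.
```

Note the test asserts on planning output, not on whether the transaction loads, which is exactly the gap the work-center incident exposed.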
I think the question isn’t end-to-end vs smoke tests, but rather what level of confidence you need for production deployment. In our environment, production planning changes can impact manufacturing schedules worth millions, so we err on the side of comprehensive testing. We’ve built a data-driven framework using eCATT and ABAP Unit that maintains test material masters, BOMs, and routings in a dedicated test client. After deployment, automated tests run planning scenarios and validate outputs against expected results. Yes, it requires maintenance, but the cost of a planning failure in production far exceeds the testing investment.
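The validate-outputs-against-expected-results step could look like the following data-driven ABAP Unit sketch. `ztpp_expected` and its fields are hypothetical stand-ins for the poster’s fixture tables; PLAF/GSMNG are the standard planned-order table and its total-quantity field:

```abap
"Data-driven check: compare planned-order quantities against an
"expected-results table maintained alongside the test master data.
CLASS ltc_planning_outputs DEFINITION FOR TESTING
  DURATION LONG RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    METHODS outputs_match_expected FOR TESTING.
ENDCLASS.

CLASS ltc_planning_outputs IMPLEMENTATION.
  METHOD outputs_match_expected.
    "ztpp_expected is a hypothetical fixture table: one row per
    "test material with the planned quantity the scenario expects.
    SELECT matnr, exp_qty FROM ztpp_expected
      INTO TABLE @DATA(lt_exp).
    LOOP AT lt_exp INTO DATA(ls_exp).
      SELECT SUM( gsmng ) FROM plaf
        WHERE matnr = @ls_exp-matnr
        INTO @DATA(lv_act).
      cl_abap_unit_assert=>assert_equals(
        act = lv_act
        exp = ls_exp-exp_qty
        msg = |Planned quantity mismatch for { ls_exp-matnr }| ).
    ENDLOOP.
  ENDMETHOD.
ENDCLASS.
```

Keeping the expectations in a table rather than in code is what makes the framework data-driven: when the test BOMs or routings change, only fixture rows are updated, not assertions.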
We went the full end-to-end route with production planning and honestly, it’s been mixed results. The test framework takes significant maintenance: every time the planning strategy or BOM structure changes, we’re updating test data and assertions. The value comes when you catch integration issues between planning and procurement, but for pure configuration changes, it feels like overhead. Our sweet spot has been mid-level testing: validate planning run execution and key outputs without full scenario coverage.