Manufacturing plan batch processing times out during large BOM explosions

We’re experiencing timeout issues with our nightly batch jobs that handle bill of materials explosions for manufacturing planning. The process typically completes in 2-3 hours but now runs for 8+ hours before timing out.

Our environment runs Infor SCM IS-2022.2 with approximately 45,000 active BOMs, many with 8-10 levels of recursion depth. The daily planning cycle includes full BOM explosion for all active products.

Current batch configuration:


batch.timeout=28800
batch.parallel.threads=4
bom.explosion.cache.enabled=false

The timeout occurs during the recursion phase when processing complex multi-level BOMs. We’ve noticed the MBOM_STRUCTURE table has grown to 2.3M rows. Database query logs show individual BOM explosion queries taking 15-30 seconds each.

Is there a recommended approach for optimizing BOM recursion depth handling and improving batch processing performance? Should we be looking at incremental caching strategies or database indexing improvements?

Let me provide a comprehensive solution addressing all the key optimization areas:

1. BOM Recursion Depth Optimization: Implement recursive CTEs for deep hierarchy processing. Replace iterative explosion logic with:

WITH RECURSIVE bom_tree AS (
  -- Anchor: direct components of the root item
  SELECT item_id, component_id, 1 AS level
  FROM MBOM_STRUCTURE WHERE parent_item_id = :root_item
  UNION ALL
  -- Recursive step: walk down one level per iteration, capped at 10 levels
  SELECT b.item_id, b.component_id, bt.level + 1
  FROM MBOM_STRUCTURE b JOIN bom_tree bt ON b.parent_item_id = bt.component_id
  WHERE bt.level < 10
)
SELECT * FROM bom_tree;

This handles recursion in a single query execution rather than multiple round trips.
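For reference, the same depth-limited traversal can be sketched in application code, for example to validate the CTE's output against an in-memory copy. The table/column names follow the post; the dict-based structure and function name here are hypothetical simplifications, not Infor APIs.

```python
# Sketch of the depth-limited BOM explosion the CTE performs.
# `structure` maps parent_item_id -> list of component_ids, an in-memory
# stand-in for MBOM_STRUCTURE rows.
from collections import defaultdict

MAX_DEPTH = 10  # mirrors WHERE bt.level < 10 in the CTE

def explode_bom(structure, root_item, max_depth=MAX_DEPTH):
    """Return (component_id, level) pairs for every component under root_item."""
    children = defaultdict(list, structure)
    result = []
    frontier = [(c, 1) for c in children[root_item]]
    while frontier:
        component, level = frontier.pop()
        result.append((component, level))
        if level < max_depth:
            frontier.extend((c, level + 1) for c in children[component])
    return result
```

Like the CTE, this relies on the depth cap rather than cycle detection, which is fine as long as the BOM data is acyclic.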

2. Batch Parallel Processing Configuration: Update your batch settings:


batch.timeout=43200
batch.parallel.threads=10
batch.chunk.size=500
bom.explosion.parallel.enabled=true
bom.max.recursion.depth=10

Increase parallel threads to 10 for your volume. Enable parallel BOM explosion processing and set explicit recursion limits to prevent runaway queries.
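The thread and chunk settings above correspond to a standard worker-pool pattern; a minimal sketch of what the batch framework is doing internally (function and parameter names here are illustrative, not the actual Infor implementation):

```python
# Illustrative worker-pool sketch of chunked parallel BOM explosion.
# explode_one() is a hypothetical stand-in for the real per-BOM job.
from concurrent.futures import ThreadPoolExecutor

def process_in_chunks(bom_ids, explode_one, threads=10, chunk_size=500):
    """Split bom_ids into chunks of chunk_size and process them on a
    fixed-size thread pool, mirroring batch.parallel.threads and
    batch.chunk.size."""
    chunks = [bom_ids[i:i + chunk_size] for i in range(0, len(bom_ids), chunk_size)]
    results = []
    with ThreadPoolExecutor(max_workers=threads) as pool:
        for chunk_result in pool.map(lambda chunk: [explode_one(b) for b in chunk], chunks):
            results.extend(chunk_result)
    return results
```

Chunking keeps per-task overhead low while letting a stalled deep BOM hold up only one thread instead of the whole run.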

3. Database Index Strategy: Implement the composite indexes James suggested, plus add covering indexes:

CREATE INDEX idx_mbom_parent_level ON MBOM_STRUCTURE(parent_item_id, level, component_id);
CREATE INDEX idx_mbom_effective ON MBOM_STRUCTURE(effective_date, item_id) WHERE active_flag='Y';
CREATE INDEX idx_mbom_covering ON MBOM_STRUCTURE(parent_item_id, component_id, quantity, effective_date) INCLUDE (level, item_id);

The covering index eliminates table lookups for common BOM queries.

4. Incremental Caching vs Full Recalculation: Enable incremental caching with change detection:


bom.explosion.cache.enabled=true
bom.cache.strategy=incremental
bom.cache.ttl=86400
bom.change.tracking.enabled=true

This processes only modified BOMs. Maintain a change log that tracks BOM updates and triggers selective re-explosion. For unchanged BOMs, serve from cache. This typically reduces processing volume by 70-80% in stable production environments.
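Selective re-explosion has one subtlety worth sketching: a changed component invalidates every parent BOM that uses it, so the change set must be propagated up the where-used graph before deciding what to recalculate. A hedged sketch (data structure and names are illustrative):

```python
# Propagate a set of changed items up the where-used graph so every
# affected ancestor BOM is re-exploded; unchanged BOMs stay cached.
from collections import defaultdict

def boms_to_reexplode(structure, changed_items):
    """`structure` maps parent_item_id -> list of component_ids.
    Returns all items whose explosion result may have changed."""
    # Invert the BOM edges: component -> set of parents that use it
    used_by = defaultdict(set)
    for parent, components in structure.items():
        for comp in components:
            used_by[comp].add(parent)
    dirty = set(changed_items)
    frontier = list(changed_items)
    while frontier:
        item = frontier.pop()
        for parent in used_by[item]:
            if parent not in dirty:
                dirty.add(parent)
                frontier.append(parent)
    return dirty
```

A change log that only flags the directly edited BOM would miss the parents, which is a common source of stale planning data with incremental caches.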

Implement cache warming for high-usage BOMs during off-peak hours. Use a two-tier strategy: hot cache (memory) for top 20% frequently accessed BOMs, warm cache (database) for remaining 80%.
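The two-tier idea amounts to a read-through cache with a bounded hot tier; a purely illustrative sketch, where the warm tier would be the database-backed cache in the real setup:

```python
# Two-tier read-through cache sketch: a bounded in-memory "hot" tier
# in front of a slower "warm" tier lookup.
from collections import OrderedDict

class TwoTierCache:
    def __init__(self, warm_get, hot_capacity=1000):
        self.warm_get = warm_get            # fallback lookup, e.g. a DB cache table
        self.hot = OrderedDict()            # LRU order: most recently used last
        self.hot_capacity = hot_capacity

    def get(self, bom_id):
        if bom_id in self.hot:
            self.hot.move_to_end(bom_id)    # refresh LRU position
            return self.hot[bom_id]
        value = self.warm_get(bom_id)       # miss: fall through to warm tier
        self.hot[bom_id] = value
        if len(self.hot) > self.hot_capacity:
            self.hot.popitem(last=False)    # evict least recently used
        return value
```

With LRU eviction, the hot tier naturally converges on the top slice of BOMs by access frequency, which is the "top 20%" behavior described above.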

5. Query Execution Plan Analysis: Regularly analyze execution plans and monitor:

  • Table scan occurrences on MBOM tables
  • Join operation costs for multi-level explosions
  • Index usage statistics
  • Query compilation times

Set up automated monitoring:


bom.query.monitoring.enabled=true
bom.slow.query.threshold=5000
bom.execution.plan.logging=true
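If the platform's monitoring hooks aren't available, the slow-query threshold is simple to approximate at the application layer; a sketch (the wrapper and its names are hypothetical, only the 5000 ms threshold comes from the config above):

```python
# Sketch of the slow-query threshold idea: time each explosion query
# and log those exceeding bom.slow.query.threshold (5000 ms above).
import logging
import time

SLOW_QUERY_THRESHOLD_MS = 5000

def timed_query(run_query, sql, *args):
    """Run a query via run_query(sql, *args), logging it if it is slow."""
    start = time.monotonic()
    result = run_query(sql, *args)
    elapsed_ms = (time.monotonic() - start) * 1000
    if elapsed_ms > SLOW_QUERY_THRESHOLD_MS:
        logging.warning("slow BOM query (%.0f ms): %s", elapsed_ms, sql)
    return result
```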

Additional Recommendations:

  • Partition the MBOM_STRUCTURE table by effective_date for faster date-range queries
  • Archive inactive BOMs older than 2 years to a separate table
  • Increase the batch JVM heap to 8GB minimum
  • Consider splitting BOMs with 8+ levels into a separate high-priority batch job
  • Implement batch job monitoring with alerts for queries exceeding the 5-second threshold

Expected Results: With these changes, you should see:

  • Batch completion time: 8+ hours → 2-3 hours
  • Individual BOM explosion: 15-30s → 1-3s
  • Overall processing volume: Reduced by 70% through incremental approach
  • Database load: Significantly lower due to efficient indexing and caching

Implement these changes in stages: indexes first, then caching, then parallel processing tuning. Monitor performance after each stage.

Definitely create composite indexes. For BOM recursion queries, I recommend:

CREATE INDEX idx_mbom_parent_level ON MBOM_STRUCTURE(parent_item_id, level, component_id);
CREATE INDEX idx_mbom_effective ON MBOM_STRUCTURE(effective_date, item_id) WHERE active_flag='Y';

These indexes specifically target the recursion patterns. The first handles parent-child traversal efficiently, the second filters active BOMs by effective date. With 2.3M rows, proper indexing should cut your query times from 15-30s down to under 2 seconds per BOM.

Check your batch job’s memory allocation too. BOM explosion with deep recursion can consume significant heap space. We increased our batch JVM from 4GB to 8GB and saw immediate improvements in processing times for complex BOMs.

I’ve seen this pattern before. With 45K BOMs and that recursion depth, you’re hitting classic performance bottlenecks. First quick check: run an execution plan analysis on your BOM explosion queries. Look specifically at table scans on the MBOM_STRUCTURE and MBOM_COMPONENT tables.

Your current setup with the cache disabled and only 4 parallel threads is definitely contributing to the issue. For your volume, consider bumping parallel threads to 8-12, but more importantly, enable incremental caching. With bom.explosion.cache.enabled=false you’re recalculating everything from scratch every night.

We had similar issues last year. One thing that helped was implementing a two-tier caching strategy. Use in-memory cache for frequently accessed BOMs (top 20% by usage) and database-backed cache for the rest. This reduced our explosion times by 60%.

Also review your recursion depth limits. If you have BOMs going 10 levels deep, question whether that’s necessary. We found several legacy BOMs with unnecessary nesting that we flattened to 6-7 levels without impacting manufacturing logic.
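Finding those flattening candidates can be automated by auditing effective depth per root BOM; a hedged sketch, again with an in-memory stand-in for the structure table (assumes acyclic BOM data):

```python
# Audit sketch: compute the maximum explosion depth per root item to
# flag BOMs that nest deeper than a target level (flattening candidates).
from collections import defaultdict

def max_depth(structure, root):
    """Depth of the deepest component chain under root.
    `structure` maps parent_item_id -> list of component_ids."""
    children = defaultdict(list, structure)
    def depth(item):
        kids = children[item]
        return 0 if not kids else 1 + max(depth(k) for k in kids)
    return depth(root)

def deep_boms(structure, roots, threshold=7):
    """Return roots whose BOM nests deeper than `threshold` levels."""
    return [r for r in roots if max_depth(structure, r) > threshold]
```

Running something like this against an extract of the structure table gives a concrete worklist of BOMs to review with manufacturing engineering before flattening.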