Recipe management API: batch import versus single record creation performance

We’re migrating 3000+ manufacturing recipes into Aras from our legacy system. Each recipe has 15-30 process steps with parameters, materials, and equipment references. I’m evaluating whether to use batch import through the API or create recipes individually.

Batch import would be faster initially, but I’m concerned about error handling - if one recipe in a batch of 100 fails, do we lose all 100? Single record creation gives us better control and immediate validation feedback, but at 3000 recipes it could take days.

What’s the practical performance difference? Our recipes are complex - each one creates the parent recipe item plus related process step items, material relationships, and equipment assignments. Looking for real-world experience with recipe API imports at scale.

Having migrated recipe data for multiple process manufacturing clients, I can share detailed insights on batch versus single record performance:

Batch Import Limits:

Batch size significantly impacts performance and reliability. Our testing with Aras recipe management showed:

  • Batches of 10 recipes: ~12 seconds per batch (1.2s per recipe)
  • Batches of 50 recipes: ~45 seconds per batch (0.9s per recipe)
  • Batches of 100 recipes: ~110 seconds per batch (1.1s per recipe)
  • Single record creation: ~2.5s per recipe

The performance curve isn’t linear. Medium batches (25-50) offer the best throughput while maintaining manageable error scope. Beyond 50 recipes per batch, transaction overhead and memory consumption reduce efficiency gains.

For 3000 recipes with complex relationships, the optimal approach is:

  • Batch size: 25 recipes per API call
  • Expected duration: ~120 batches × 30 seconds = 60 minutes
  • Compare to single records: 3000 × 2.5s = 125 minutes

Batch import cuts migration time by 50% while keeping error scope manageable.
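
If you want to sanity-check those figures against your own measured per-recipe times, the arithmetic is easy to script; the constants below are simply the numbers quoted above, so substitute your own measurements.

# Quick duration estimate - swap in your own measured per-recipe times.
total_recipes = 3000
batch_size = 25
sec_per_recipe_batched = 1.2    # ~30 s per 25-recipe batch, from the timings above
sec_per_recipe_single = 2.5

batches = -(-total_recipes // batch_size)                              # ceiling division -> 120 batches
batched_minutes = batches * batch_size * sec_per_recipe_batched / 60
single_minutes = total_recipes * sec_per_recipe_single / 60
print(f"batched: {batched_minutes:.0f} min, single: {single_minutes:.0f} min")   # batched: 60 min, single: 125 min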

Error Handling Strategies:

This is where batch import requires sophisticated logic. Aras API transactions are atomic - batch failure rolls back all records in that batch. Implement these error handling patterns:

  1. Pre-validation: Validate all recipe data before API submission. Check required fields, reference integrity, and data types. This catches roughly 80% of errors before they ever hit the API (a minimal validation sketch follows this list).

  2. Graceful Degradation: When a batch fails, automatically retry with batch size = 1 for that subset. This isolates the problem recipe while successfully importing the others.

  3. Detailed Logging: Log every API call with full request payload and response. When investigating failures, you need to see exactly what data caused the error.

  4. Checkpoint Resume: Track successfully imported recipes in a state file. If the migration fails halfway through, resume from the last checkpoint rather than starting over (a state-file sketch follows the example flow below).

  5. Partial Success Handling: Some failures occur after parent recipe creation but during relationship processing. Implement cleanup logic to remove orphaned parent records or mark them for manual completion.
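
As an illustration of the pre-validation step, a minimal check might look like the sketch below; the field names and lookup sets are placeholders for whatever your recipe schema and reference data actually contain.

# Pre-validation sketch - field names and lookup sets are illustrative, not your actual schema.
REQUIRED_FIELDS = ("item_number", "name", "process_steps")
KNOWN_MATERIALS = set()     # load valid material numbers from Aras or a reference extract
KNOWN_EQUIPMENT = set()     # load valid equipment ids the same way

def validate_recipe(recipe):
    errors = []
    for field in REQUIRED_FIELDS:
        if not recipe.get(field):
            errors.append(f"missing required field: {field}")
    for step in recipe.get("process_steps", []):
        for material in step.get("materials", []):
            if material.get("material_number") not in KNOWN_MATERIALS:
                errors.append(f"unknown material reference: {material.get('material_number')}")
        if step.get("equipment") and step["equipment"] not in KNOWN_EQUIPMENT:
            errors.append(f"unknown equipment reference: {step['equipment']}")
    return errors               # an empty list means the recipe is safe to submit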

Example error handling flow, as a Python sketch (post_batch and post_single are placeholders for whatever calls actually create the recipes):


# Batch import with error handling - a sketch. post_batch() and post_single() stand in for
# whatever REST or AML calls you use to create recipes; validate_recipe() is sketched above.
import logging

def import_recipes(recipes, batch_size=25):
    for start in range(0, len(recipes), batch_size):
        batch = [r for r in recipes[start:start + batch_size]     # 1. load the next batch from source data
                 if not validate_recipe(r)]                        # 2. drop recipes failing pre-validation
        try:
            post_batch(batch)                                      # 3. submit batch to the Recipe endpoint
            logging.info("batch at offset %d imported", start)     # 4. log success, move to the next batch
        except Exception as err:                                   # 5. batch failed and was rolled back
            logging.error("batch at offset %d failed: %s payload=%r", start, err, batch)
            for recipe in batch:                                   #    retry individually to isolate the bad recipe
                try:
                    post_single(recipe)
                except Exception as single_err:
                    logging.error("recipe %s failed: %s", recipe.get("item_number"), single_err)
# See: Error Handling Best Practices Guide
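
For the checkpoint-resume pattern (point 4 above), a simple local state file keyed on something unique such as item_number is usually enough; this is just a sketch under that assumption.

# Checkpoint-resume sketch - assumes each recipe carries a unique item_number.
import json
from pathlib import Path

STATE_FILE = Path("imported_recipes.json")    # hypothetical local checkpoint file

def load_checkpoint():
    return set(json.loads(STATE_FILE.read_text())) if STATE_FILE.exists() else set()

def save_checkpoint(done_ids):
    STATE_FILE.write_text(json.dumps(sorted(done_ids)))

def pending_recipes(recipes):
    done = load_checkpoint()
    return [r for r in recipes if r["item_number"] not in done]   # resume: skip what's already imported

# After each successful batch:
#   done.update(r["item_number"] for r in batch); save_checkpoint(done)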

Performance Tuning:

Beyond batch size, these factors impact import performance:

  • Relationship Complexity: Each recipe’s process steps, materials, and equipment create multiple database transactions. Recipes with 30 steps take 3x longer than recipes with 10 steps. Profile your recipe complexity distribution to estimate realistic timing.

  • Server Resources: Database connection pool size, application server memory, and network latency all affect throughput. Run imports during off-peak hours to maximize available resources.

  • Parallel Processing: If you have multiple integration users, you can run parallel import threads. Two threads with batches of 25 can roughly halve migration time. Monitor server load - beyond 3-4 parallel threads, you’ll hit resource contention (a short sketch of this split follows the list).

  • Incremental Commits: Rather than one massive migration, import recipes in phases (by product line, by facility, by complexity). This reduces risk and allows validation between phases.
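
The parallel split mentioned above might look like this; it reuses the import_recipes sketch from the error handling example, and the worker count is the only real tuning knob. How each worker authenticates as its own integration user is deployment-specific.

# Parallel import sketch - reuses import_recipes() from the error handling example above.
from concurrent.futures import ThreadPoolExecutor

def parallel_import(recipes, workers=2, batch_size=25):
    chunk = -(-len(recipes) // workers)                        # ceiling division: ~1500 recipes per worker for 3000
    slices = [recipes[i:i + chunk] for i in range(0, len(recipes), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Ideally each worker runs under its own integration user to avoid session contention
        list(pool.map(lambda s: import_recipes(s, batch_size=batch_size), slices))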

For your 3000-recipe migration, I recommend:

  • Batch size of 25 recipes
  • Two parallel import threads (1500 recipes each)
  • Pre-validation to catch data quality issues
  • Automatic retry with batch size = 1 for failures
  • Estimated duration: 45-60 minutes for successful imports
  • Plan additional time for error investigation and remediation

The batch approach with proper error handling gives you a 2x performance improvement over single record creation while maintaining data integrity and recoverability. The key is not treating it as all-or-nothing - your error handling strategy determines success more than batch size.

We imported 5000 recipes last year. Started with batch imports of 50 recipes at a time. Performance was good initially, but error handling became a nightmare. One malformed material reference would fail the entire batch. We ended up switching to smaller batches of 10 with better validation logic.

How are you handling the related items - process steps, materials, equipment? Are those in the same API call as the parent recipe, or separate calls?

Currently planning to include related items in the same API payload as the parent recipe. Each recipe JSON would have nested arrays for process steps, materials, and equipment. Is that the right approach, or should we create parent recipes first and then add relationships in separate calls?

Nested relationships in a single call are cleaner but harder to debug when they fail. We use a two-phase approach: create parent recipes in batches first, then add process steps and relationships in a second pass. This lets you verify parent recipe creation succeeded before investing time in the complex relationship data.
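
A rough sketch of that two-phase flow, for what it's worth - the base URL, ItemType names, and property names below are illustrative placeholders rather than the exact endpoints of any particular Aras configuration, so map them onto your own recipe data model.

# Two-phase import sketch - endpoint paths, ItemType names, and properties are placeholders.
import requests

BASE = "https://plm.example.com/InnovatorServer/server/odata"    # hypothetical server URL
HEADERS = {"Authorization": "Bearer <token>"}                    # however you authenticate

def import_two_phase(recipes):
    # Phase 1: create the parent recipe items (batched in practice) and keep the returned ids
    recipe_ids = {}
    for recipe in recipes:
        resp = requests.post(f"{BASE}/Recipe",
                             json={"item_number": recipe["item_number"], "name": recipe["name"]},
                             headers=HEADERS)
        resp.raise_for_status()
        recipe_ids[recipe["item_number"]] = resp.json()["id"]

    # Phase 2: only after the parents are verified, add steps, materials, and equipment relationships
    for recipe in recipes:
        parent_id = recipe_ids[recipe["item_number"]]
        for step in recipe["process_steps"]:
            requests.post(f"{BASE}/RecipeProcessStep",
                          json={"source_id": parent_id, **step},
                          headers=HEADERS).raise_for_status()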
