Migrating Two Salesforce Orgs
By: Roopa Sunder Raj

Mergers and acquisitions require consolidating customer, sales, and service data quickly. If both organizations use Salesforce, the data must be merged into a single org to unify records. Consolidation supports a complete customer view, standardized processes, better service, and accurate reporting, while reducing costs by eliminating duplicate systems, integrations, and licenses.
To address these challenges, this article presents a practical Salesforce org-to-org data migration approach. The following sections outline strategy, architecture, execution steps, tools, challenges, lessons learned, and best practices to support a smooth and successful migration.
Because a merger creates an immediate need for a unified customer view, consolidation becomes imperative. Consolidating Salesforce orgs creates a single source of truth so teams work with consistent data.
However, Salesforce org-to-org migration (moving data between separate Salesforce environments) is complex due to differences in data models (how information is structured), custom fields (user-created data fields), processes (defined workflows), automations (automatic actions based on rules), and business rules (company-specific logic). A structured, well-planned approach is essential to maintain data integrity, prevent functional disruptions, and ensure business continuity throughout the merger.
Migration Objectives – Business
Phase 1: Migrate active Accounts and in-flight Orders for the VAR (Value-Added Reseller) sales channel to ensure business continuity.
Phase 2: Migrate remaining sales channels and historical orders to enable complete visibility, reporting, and compliance.
End-to-End Migration Methodology
Discovery & Assessment
- Define migration scope, user stories, objects, fields, and products.
- Identify special data (Files, Attachments, Notes), especially for in-flight orders.
- Clean data: remove duplicates, fix nulls, correct formats, enforce field length limits, and resolve data type mismatches.
- Define extraction filters and query parameters.
- Enrich and finalize filters: sales channels, eligible statuses, age of data, exclude test data, and include only pre-installation in-flight orders.
- Validate migration accuracy with source-to-target count and object-level reconciliation reports.
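The reconciliation step above can be sketched as a simple per-object count comparison. This is an illustrative sketch, not the article's actual tooling: in practice the counts would come from `SELECT COUNT()` SOQL queries against each org, and the sample dictionaries below are hypothetical.

```python
# Source-to-target reconciliation sketch. In practice, counts would be gathered
# with SOQL "SELECT COUNT() FROM <Object>" queries in the source and target orgs;
# the hard-coded dictionaries here are illustrative stand-ins.

def reconcile(source_counts: dict, target_counts: dict) -> list:
    """Return (object, source_count, target_count, delta) rows for every object."""
    rows = []
    for obj in sorted(set(source_counts) | set(target_counts)):
        src = source_counts.get(obj, 0)
        tgt = target_counts.get(obj, 0)
        rows.append((obj, src, tgt, src - tgt))
    return rows

source = {"Account": 12000, "Contact": 30000, "Order": 4500}
target = {"Account": 12000, "Contact": 30000, "Order": 4498}

for obj, src, tgt, delta in reconcile(source, target):
    status = "OK" if delta == 0 else f"MISMATCH ({delta} record(s) unaccounted for)"
    print(f"{obj}: source={src} target={tgt} -> {status}")
```

A report like this, run per object after each load, is what turns "validate migration accuracy" from a manual spot-check into a repeatable gate.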
Data Mapping Analysis

Data mapping aligns, transforms, and loads source Salesforce data into the target org to keep data integrity and continuity across objects, fields, and products.
- Object Mapping: Match standard and custom objects, analyze dependencies, order loads, and identify necessary consolidations or transformations.
- Field Mapping: Normalize picklists (dropdown options), align record types (categories of records), resolve data type and length mismatches, and apply required transformation and default logic.
- Product Mapping: Match legacy products to the new model, align attributes and rules, and address deprecated or merged products.
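The field-mapping rules above (picklist normalization, defaults, and length mismatches) can be expressed as a small transformation table. This is a minimal sketch with hypothetical field names and value maps, not the project's real mapping document.

```python
# Field-mapping sketch: normalize picklist values, apply defaults for fields the
# source org lacks, and truncate values that exceed target field lengths.
# All field names and mappings below are hypothetical examples.

PICKLIST_MAP = {"Status": {"In Progress": "In-Flight", "Closed Won": "Completed"}}
DEFAULTS = {"Channel__c": "VAR"}        # default applied when source has no value
MAX_LEN = {"Description": 255}          # target field length limits

def map_record(src: dict) -> dict:
    out = dict(DEFAULTS)                # start from defaults; source values override
    for field, value in src.items():
        if field in PICKLIST_MAP:       # normalize picklist values to the target set
            value = PICKLIST_MAP[field].get(value, value)
        if field in MAX_LEN and isinstance(value, str):
            value = value[:MAX_LEN[field]]   # resolve length mismatches
        out[field] = value
    return out

print(map_record({"Status": "In Progress", "Description": "x" * 300}))
```

Keeping the rules in data structures rather than code makes the frequent mapping updates noted under Lessons Learned a configuration change instead of a rewrite.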
Data Migration Approach
MuleSoft was chosen for its ability to handle large-scale Salesforce data migrations with complex dependencies through a reusable integration framework. It provides a scalable integration layer that supports varied data loads while ensuring integrity, performance, and fault tolerance.
Select a migration strategy based on data volume and dependency complexity.
Scenario 1: Small Data Volumes (≤ 25,000 Records)
- Extract with Salesforce APIs, transform to CSV, resolve lookups, and load with Bulk API v2.
- Suitable for low-volume migrations with minimal relationship complexity.
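The "transform to CSV, resolve lookups" step can be sketched as follows. This is an assumption-laden illustration: the object, the cross-reference map, and the record values are invented, and the resulting CSV is what would be handed to a Bulk API v2 job rather than loaded here.

```python
# Scenario 1 sketch: transform extracted Order records to CSV, resolving source
# lookup IDs to target-org IDs via a cross-reference map built when the parent
# Accounts were loaded. All IDs and field names are illustrative.

import csv
import io

# source Account Id -> target Account Id (populated during the Account load)
id_xref = {"001SRC000001": "001TGT000009"}

orders = [{"Name": "ORD-1001", "AccountId": "001SRC000001", "Status": "In-Flight"}]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Name", "AccountId", "Status"])
writer.writeheader()
for rec in orders:
    rec["AccountId"] = id_xref[rec["AccountId"]]  # resolve lookup to target Id
    writer.writerow(rec)

csv_payload = buf.getvalue()  # this CSV would be submitted to a Bulk API v2 job
print(csv_payload)
```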

Scenario 2: Large Data Volumes with Parent–Child Dependencies (> 25,000 Records)
- Incremental extraction with batch processing
- Transform data, resolve parent–child dependencies, and load via Bulk API v2
- This approach enables scalability, fault tolerance, and recovery for enterprise migrations.
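The batch-plus-dependency-order approach can be sketched in a few lines. The load order and batch size below are hypothetical, and `load_batch` is a stand-in for submitting a MuleSoft/Bulk API v2 job, not a real connector call.

```python
# Scenario 2 sketch: split large record sets into batches and load objects in
# parent-before-child order so child lookups can be resolved as they load.
# LOAD_ORDER, BATCH_SIZE, and load_batch are illustrative assumptions.

LOAD_ORDER = ["Account", "Contact", "Order", "OrderItem"]  # parents first
BATCH_SIZE = 5000

def batches(records, size=BATCH_SIZE):
    """Yield successive fixed-size chunks of a record list."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

def migrate(extracted: dict, load_batch) -> int:
    """Submit one load job per batch, in dependency order; return job count."""
    jobs = 0
    for obj in LOAD_ORDER:
        for chunk in batches(extracted.get(obj, [])):
            load_batch(obj, chunk)  # stand-in for a Bulk API v2 job submission
            jobs += 1
    return jobs

extracted = {"Account": [{"Name": f"A{i}"} for i in range(12000)],
             "Order": [{"Name": "O1"}]}
print(migrate(extracted, lambda obj, chunk: None))  # → 4 (3 Account jobs + 1 Order job)
```

Because each batch is an independent job, a failure can be retried in isolation, which is where the fault tolerance and recovery claims above come from.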
Cutover & Go-Live Plan
A structured cutover ensures a controlled transition with minimal risk and disruption.
Prerequisites: Complete UAT (User Acceptance Testing—a process where users test the system), data cleansing (removing or correcting invalid data), finalized filters and mappings (rules for which data to move and how to match it), validated products/reference data (checking related information is correct), trained support teams, operational monitoring setup, and approved business communication plan.
Pre-Migration: Disable source org triggers, flows, validation rules, sharing rules, and roll-up summaries; capture baseline counts; configure target users, profiles, and access; populate source-target ID references; and execute migration using MuleSoft.
Post-Migration: Resolve issues and exceptions, validate record counts and samples, re-enable automations, complete validation, announce go-live, and archive data.
Data Migration Specific Challenges and Mitigation
| Issue Type | Challenge | Mitigation |
|---|---|---|
| Data Quality | Special characters or new lines causing batch failures | Cleanse data using replace logic; retrieve failed records via Job ID and retry failed records only |
| Data Integration | Load failures due to incorrect field-level permissions | Update field-level security for integration user and retry |
| Data Performance | Timeouts and record lock errors | Reduce MuleSoft concurrency to 1, limit batch size to 5,000, and query source using Account IDs |
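Two of the mitigations above, cleansing characters that break batches and retrying only failed records, can be sketched in plain Python. The cleansing rules and the `load` callback are illustrative; in practice the failed records would be retrieved from Bulk API v2 by Job ID rather than tracked in memory.

```python
# Sketch of two mitigations from the table above. The cleansing rules are
# examples, and retry_failed simulates the "retrieve failed records via Job ID
# and retry failed records only" pattern with an in-memory callback.

def cleanse(value: str) -> str:
    """Replace newlines and strip non-printable characters that break CSV batches."""
    value = value.replace("\r\n", " ").replace("\n", " ")
    return "".join(ch for ch in value if ch.isprintable())

def retry_failed(records, load):
    """Load records; collect the failures and retry only those, once."""
    failed = [r for r in records if not load(r)]        # first pass
    return [r for r in failed if not load(r)]           # records still failing

print(cleanse("Line1\nLine2"))  # → Line1 Line2
```

Retrying only the failed subset keeps reruns short and avoids creating duplicates of the records that already loaded successfully.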
Lessons Learned
| Area | Observation | Impact | Recommendation |
|---|---|---|---|
| Data Mapping & Scope | Frequent mapping updates due to missing fields, late object discovery, and evolving scope | Rework, delays, and data inconsistencies | Define mapping and scope early with experienced stakeholders; validate filters upfront. |
| Picklist Value Alignment | Missing target picklist values caused load failures | Data load failures and interruptions during migration execution | Relax picklist restrictions during migration; clean up values post-migration |
| Team Readiness | Limited hands-on system knowledge of both source and target systems | Knowledge gaps slowed decisions and increased tribal dependency | Engage experienced resources early; train teams on source and target systems |
| Automation Constraints & Performance | Triggers and validation rules blocked loads and degraded dry-run performance | Slower execution and unexpected failures | Create a permission set that lets the migration user bypass automations |
| Environment Parity | Test environments not aligned with production | Issues such as insufficient test coverage and slow object loads surfaced only in production | Refresh test environments from production to surface issues early |
Best Practices

- Prepare and validate early: Clean source data, finalize mappings and filters, and validate dependencies before migration execution.
- Design for scale and automation control: Use Bulk API and batch processing, load parent objects first, and bypass automations during migration.
- Test, validate, and recover: Dry-runs with production-like data, reconcile counts and relationships, and retry only failed records.