✅ Problem Statement
In many Salesforce development or QA environments, repeated testing often leads to the creation of thousands of test records across multiple objects. Whether you’re testing a data import process, automation rules, integrations, or user journeys — each test run leaves behind a trail of test data.
Over time, this leftover data begins to:
- Slow down the org
- Complicate debugging
- Cause data conflicts in future test runs
- Exhaust storage limits
Manually deleting this test data becomes time-consuming, repetitive, and error-prone — especially when data spans multiple related objects or involves content files like ContentDocument or ContentDocumentLink.
Hence, having a reusable, generic, and automated cleanup process is essential to maintain a clean development or testing sandbox. That’s where a Batch Apex job comes in handy.
✅ Why Use a Batch Apex Job?
Salesforce imposes several governor limits on DML in a single transaction, such as (illustrated in the sketch after this list):
- Max 10,000 rows processed by DML statements per transaction
- Max 150 DML statements per transaction
- Heap size and CPU time limits
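To see why this matters, consider a naive one-shot delete from Anonymous Apex (the 50,000-record figure below is purely illustrative):

// Queries up to 50,000 Contacts (the per-transaction SOQL row cap),
// then attempts to delete them all in a single transaction.
// Past 10,000 rows this throws a LimitException ('Too many DML rows')
delete [SELECT Id FROM Contact LIMIT 50000];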
A Batch Apex job lets you:
- Handle large datasets (millions of records)
- Process them in manageable chunks
- Perform cascading deletions or cleanups safely
- Chain multiple cleanups automatically
You can even schedule it to run nightly, or execute it manually from Developer Console or a custom admin UI.
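For nightly execution, a thin Schedulable wrapper is all you need. The sketch below is a minimal example; the class name, job name, and cron expression are placeholders to adapt:

global class NightlyCleanupScheduler implements Schedulable {
    global void execute(SchedulableContext sc) {
        // Start the cleanup chain with Contact; finish() handles the rest
        Database.executeBatch(new DeleteAllRecordsBatch('Contact'), 2000);
    }
}

Run this once from Anonymous Apex to register the job (every day at 2:00 AM):

System.schedule('Nightly Test Data Cleanup', '0 0 2 * * ?', new NightlyCleanupScheduler());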
✅ Example Scenario
Imagine you have the following during each test cycle:
- Thousands of Contact records
- Related entries in a custom object like ERx_Import__BigTable__c
- Associated ContentDocument and ContentDocumentLink files (e.g. test attachments)
Instead of cleaning these up manually every time, you can run a batch class that:
- Deletes all Contact records
- Automatically continues with deletion of BigTable records
- Deletes related file records (ContentDocument, ContentDocumentLink)
- Cleans up custom object records like ERx_Import__ERx_Import_File_Details__c
This approach ensures repeatable, fast, and consistent cleanup, allowing testers and developers to start fresh every time.
✅ Key Benefits of This Approach
- 🚀 Speed up test cycles — no more manual deletion
- 💡 Minimize data conflicts — clean environment each run
- 🔁 Reusable logic — supports multiple objects
- 🧠 Scalable — handles 10k+ records per object
- 📅 Schedule-ready — can be run nightly or post-deployment
✅ Code Example
global class DeleteAllRecordsBatch implements Database.Batchable<SObject> {

    // API name of the object to purge; assumed to come from trusted code,
    // since the dynamic SOQL below is not sanitized against user input
    private String sObjectApiName;

    global DeleteAllRecordsBatch(String sObjectApiName) {
        this.sObjectApiName = sObjectApiName;
    }

    global Database.QueryLocator start(Database.BatchableContext bc) {
        String query = 'SELECT Id FROM ' + sObjectApiName;
        return Database.getQueryLocator(query);
    }

    global void execute(Database.BatchableContext bc, List<SObject> scope) {
        // All-or-nothing delete per chunk; use Database.delete(scope, false)
        // instead if individual failures should not abort the whole chunk
        delete scope;
    }

    global void finish(Database.BatchableContext bc) {
        if (sObjectApiName == 'Contact') {
            // Chain the next cleanup once all Contacts are gone
            Database.executeBatch(new DeleteAllRecordsBatch('ERx_Import__BigTable__c'), 2000);
        } else {
            // Final stage: clean up import file detail records and their files
            Map<Id, ERx_Import__ERx_Import_File_Details__c> importFiles = new Map<Id, ERx_Import__ERx_Import_File_Details__c>(
                [SELECT Id FROM ERx_Import__ERx_Import_File_Details__c]
            );
            if (importFiles.isEmpty()) return;

            List<ContentDocumentLink> links = [
                SELECT ContentDocumentId
                FROM ContentDocumentLink
                WHERE LinkedEntityId IN :importFiles.keySet()
            ];
            Set<Id> documentIds = new Set<Id>();
            for (ContentDocumentLink link : links) {
                documentIds.add(link.ContentDocumentId);
            }

            // Delete links with partial DML (allOrNone = false)
            if (!links.isEmpty()) {
                Database.delete(links, false);
            }

            // Delete ContentDocuments with partial DML; this also removes
            // any links still pointing at them
            if (!documentIds.isEmpty()) {
                List<ContentDocument> docsToDelete = [
                    SELECT Id FROM ContentDocument WHERE Id IN :documentIds
                ];
                Database.delete(docsToDelete, false);
            }

            // Delete import file detail records with partial DML
            // (the isEmpty() check above guarantees there are records here)
            Database.delete(importFiles.values(), false);
        }
    }
}
✅ How to Run
Execute the batch from Anonymous Apex in the Developer Console:

Database.executeBatch(new DeleteAllRecordsBatch('Contact'), 2000);
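Database.executeBatch returns the Id of the underlying AsyncApexJob, so you can also watch progress from Anonymous Apex. A quick sketch (note that each batch chained from finish() runs as its own job with its own Id):

// Capture the job Id when you enqueue the batch
Id jobId = Database.executeBatch(new DeleteAllRecordsBatch('Contact'), 2000);

// Poll the job status while it runs
AsyncApexJob job = [
    SELECT Status, JobItemsProcessed, TotalJobItems, NumberOfErrors
    FROM AsyncApexJob
    WHERE Id = :jobId
];
System.debug(job.Status + ': ' + job.JobItemsProcessed + '/' + job.TotalJobItems);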