Most Salesforce orgs aren't dying from one big architectural mistake. They're dying from 15 small Flow anti-patterns that compound silently until a 200-record import takes down the queue at 2 AM.
This checklist is the audit we run on day one of every Clientell engagement. 15 patterns, each with the signal you can grep for, why it tanks, and the rebuild that fixes it.
Governor limits to remember
Before the patterns: the numbers your Flows are racing against. If you don't have these memorized, the audit feels arbitrary. With them memorized, every anti-pattern becomes obvious.
| Limit | Cap |
|---|---|
| DML statements per transaction | 150 |
| SOQL queries per transaction (sync / async) | 100 / 200 |
| SOQL rows retrieved per transaction | 50,000 |
| Heap size (sync / async) | 6 MB / 12 MB |
| Trigger depth | 16 |
| SendEmail invocations per transaction | 10 |
| CPU time (sync / async) | 10s / 60s |
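You don't have to eyeball consumption, either. In an Apex context (or from a Flow's fault-path logging via an invocable action) the `Limits` class reports live usage against these caps. A minimal sketch:

```apex
// Snapshot of governor-limit consumption in the current transaction.
// All methods below are standard System.Limits class calls.
System.debug('DML:  ' + Limits.getDmlStatements() + ' / ' + Limits.getLimitDmlStatements());
System.debug('SOQL: ' + Limits.getQueries()       + ' / ' + Limits.getLimitQueries());
System.debug('Rows: ' + Limits.getQueryRows()     + ' / ' + Limits.getLimitQueryRows());
System.debug('CPU:  ' + Limits.getCpuTime()       + ' / ' + Limits.getLimitCpuTime() + ' ms');
System.debug('Heap: ' + Limits.getHeapSize()      + ' / ' + Limits.getLimitHeapSize() + ' bytes');
```

Logging these numbers in a fault path turns "the Flow failed at 2 AM" into "the Flow was at 148/150 DML when record 197 hit it."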
The 15 anti-patterns
01. DML inside loops
Signal: Update / Create / Delete element placed inside a Loop. Why it tanks: Each iteration consumes one DML statement; the transaction fails the moment it attempts its 151st. Rebuild: Append records to a collection variable inside the loop. Single DML after the loop.
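The same rebuild in Apex terms, for readers who think in code (a sketch; `Status__c` is a hypothetical custom field):

```apex
// Anti-pattern: one DML statement per iteration burns the 150-statement cap.
// for (Contact c : contacts) { c.Status__c = 'Active'; update c; }

// Rebuild: accumulate changes in a collection, then one DML after the loop.
List<Contact> toUpdate = new List<Contact>();
for (Contact c : contacts) {
    c.Status__c = 'Active';   // in-memory only, no limit consumed
    toUpdate.add(c);
}
update toUpdate;              // 1 DML statement regardless of list size
```

The Flow version is identical in shape: Assignment element adds to a record collection inside the Loop, one Update Records element after the Loop's "After Last" path.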
02. Get Records inside loops
Signal: Get Records element placed inside a Loop. Why it tanks: Each iteration burns one of your 100 SOQL/txn; 50,000-row retrieval cap. Rebuild: Get all records before the loop, or use a Map collection keyed by lookup field.
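The Apex analogue of the Map rebuild, for reference (a sketch assuming an Opportunity-to-Account lookup):

```apex
// Collect the lookup keys first, so one query covers every iteration.
Set<Id> accountIds = new Set<Id>();
for (Opportunity opp : opps) {
    accountIds.add(opp.AccountId);
}

// 1 SOQL query total, materialized as a Map for O(1) lookups.
Map<Id, Account> accountsById = new Map<Id, Account>(
    [SELECT Id, Industry FROM Account WHERE Id IN :accountIds]
);

for (Opportunity opp : opps) {
    Account acct = accountsById.get(opp.AccountId);  // in-memory, no query consumed
    // ... use acct ...
}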
03. Multiple Updates on the same record
Signal: Two or more Update elements writing to the same record. Why it tanks: Each Update is a separate DML against your 150/txn budget. Rebuild: One Assignment sets all fields, then one Update Records.
04. Nested loops
Signal: A Loop element inside another Loop. Why it tanks: Multiplies the cost of every other anti-pattern. 200 outer × 50 inner = 10,000 iterations, easily enough to hit the 10s CPU cap. Rebuild: Flatten to sequential stages. Use a Map keyed by parent ID instead of nesting.
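In Apex terms, the "Map keyed by parent ID" rebuild is a one-pass grouping (a sketch assuming Accounts with related Contacts):

```apex
// One pass over the children builds the parent-keyed Map.
Map<Id, List<Contact>> contactsByAccount = new Map<Id, List<Contact>>();
for (Contact c : contacts) {
    if (!contactsByAccount.containsKey(c.AccountId)) {
        contactsByAccount.put(c.AccountId, new List<Contact>());
    }
    contactsByAccount.get(c.AccountId).add(c);
}

// The parent loop now does an O(1) lookup instead of an inner scan.
for (Account a : accounts) {
    List<Contact> related = contactsByAccount.get(a.Id);
    // ... process related ...
}
```

Two sequential loops cost 200 + 50 iterations instead of 200 × 50.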
05. After-Save where Before-Save would do
Signal: Triggered After save, but the Flow only sets fields on the triggering record. Why it tanks: ~85x slower per Salesforce Architects benchmark; eats DML. Rebuild: Switch to Before-Save (in-memory, zero DML). Caveat: no emails, related records, or callouts.
06. No fault paths on DML
Signal: Get / Update / Create / Delete with no Fault connector. Why it tanks: Unhandled exceptions roll back the transaction; you find out via the Flow Error email days later. Rebuild: Connect every DML to a Fault path that logs to a Custom Object error table.
07. Self-update without ISCHANGED
Signal: Record-triggered Flow updates the same record; no entry-criteria check.
Why it tanks: Self-fires; chains across automation; hits the 16-trigger-depth cap and throws "Maximum trigger depth exceeded".
Rebuild: ISCHANGED({!$Record.Field}) in entry criteria, or use Before-Save.
08. Mixed automation surfaces
Signal: Process Builder + Flow + Apex Trigger all firing on the same event. Why it tanks: Run order undefined; race conditions; debugging is impossible. Rebuild: Setup → Flow Trigger Explorer for Flow + PB. Object Manager → Triggers for Apex. Consolidate to one surface.
09. Decision branches that both can be true
Signal: Two outcomes whose criteria overlap. Why it tanks: First match wins silently. The bug only shows in prod. Rebuild: Make outcomes mutually exclusive. Add an explicit Default outcome for unhandled cases.
10. Hardcoded record / user IDs
Signal: 18-char ID literals in Decision / Assignment elements. Why it tanks: IDs differ between orgs, so the Flow breaks after every sandbox refresh, and again the day the referenced user or queue is deactivated or recreated. Rebuild: Lookup via Custom Metadata Type, queried at runtime.
11. Hardcoded picklist values in Decisions
Signal: Decision branches written as string-equals on picklist values. Why it tanks: New picklist value, Decision silently misses it; no error, just wrong outcome. Rebuild: Use a Picklist Choice Set resource (dynamic), or add a Default outcome that errors on unhandled values.
12. Process Builder / Workflow Rules still running
Signal: Active Process Builders or Workflow Rules in Setup. Why it tanks: End-of-support hit Dec 31, 2025. Still runs, but no bug fixes. You're on your own. Rebuild: Setup → Migrate to Flow. Migrate before something breaks unattended.
13. Schedule-triggered Flow on full base
Signal: Scheduled Flow with no filter on the trigger. Why it tanks: Scans every record every run; hits SOQL / DML caps at scale. Rebuild: Filter to records modified in last X hours, or move to Apex Batch for size.
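For the "move to Apex Batch" rebuild, the skeleton looks like this (a sketch; query and object are placeholders):

```apex
// Batch Apex processes the base in chunks, each execute() getting fresh
// governor limits. A QueryLocator can stream up to 50M rows, far past
// the 50,000-row SOQL cap a Scheduled Flow runs into.
global class StaleRecordBatch implements Database.Batchable<SObject> {

    global Database.QueryLocator start(Database.BatchableContext bc) {
        // Still filter to the delta; a full scan is the same anti-pattern in Apex.
        return Database.getQueryLocator(
            'SELECT Id FROM Account WHERE LastModifiedDate = LAST_N_DAYS:1');
    }

    global void execute(Database.BatchableContext bc, List<Account> scope) {
        // scope is one chunk (default 200 records); own transaction, own limits
    }

    global void finish(Database.BatchableContext bc) {}
}
```

Kick it off with `Database.executeBatch(new StaleRecordBatch());` from a Schedulable class.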
14. Send Email / Email Alert inside loops
Signal: Send Email Action or Email Alert inside a Loop. Why it tanks: Send Email: 10 invocations / txn. Email Alerts: shared org-wide DailyWorkflowEmails, burns through it in bulk. Rebuild: Aggregate recipients into a Collection; one send outside the loop with multiple To-addresses.
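The aggregate-then-send shape, sketched in Apex (subject and body are placeholders):

```apex
// Collect recipients inside the loop; send once outside it.
List<String> recipients = new List<String>();
for (Case c : escalatedCases) {
    if (c.Contact != null && c.Contact.Email != null) {
        recipients.add(c.Contact.Email);
    }
}

Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
mail.setToAddresses(recipients);   // many recipients, one message
mail.setSubject('Escalation digest');
mail.setPlainTextBody(recipients.size() + ' cases escalated today.');

// 1 of the 10 sendEmail invocations allowed per transaction.
Messaging.sendEmail(new List<Messaging.SingleEmailMessage>{ mail });
```

Note the single-message recipient cap (100 To-addresses) still applies; past that, a digest record plus an async send is the safer shape.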
15. No bulk testing
Signal: Flow tested with 1 record, never with 200. Why it tanks: Data Loader runs 200-record chunks. 20,000 records = 100 chunks; one bad pattern kills the whole load. Rebuild: Test with a 200-row CSV via Data Loader before you flip the Flow on in prod.
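If you have any Apex test coverage at all, a bulk test that exercises the Flow's trigger path costs a few lines (a sketch; class and field names are hypothetical):

```apex
@isTest
private class FlowBulkTest {
    @isTest
    static void handles200RecordChunk() {
        // Build one full Data Loader-sized chunk.
        List<Account> accts = new List<Account>();
        for (Integer i = 0; i < 200; i++) {
            accts.add(new Account(Name = 'Bulk Test ' + i));
        }

        Test.startTest();
        insert accts;   // fires every record-triggered Flow on Account, bulkified
        Test.stopTest();

        // If a Flow has DML-in-loop or SOQL-in-loop, the insert above throws
        // before this assertion is ever reached.
        System.assertEquals(200,
            [SELECT COUNT() FROM Account WHERE Name LIKE 'Bulk Test %']);
    }
}
```

This complements, not replaces, the 200-row CSV test: the CSV run exercises the real Data Loader path; the Apex test makes the bulk check repeatable in CI.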
How to run this audit
The whole list takes 15 minutes if you know your way around Setup. Open the printable PDF, walk top to bottom, mark each pattern as Pass / Found / N/A. Anything in the Found column moves to a remediation sprint.
If you find more than 5 patterns, your Flow architecture has compound debt. Don't try to fix everything at once; fix the ones with the highest blast radius first (DML in loops, mixed automation surfaces, no fault paths).
Where this came from
These 15 are the patterns we see most often when Clientell engineers do a free org audit. The full list of patterns we look for runs 60+ items, but these 15 cover roughly 80% of the production failures we've debugged. The PDF version is the audit sheet our engineers carry into every customer call.
Print it, run it, fix the top 3 things you find. That's usually the highest-leverage afternoon a Salesforce admin spends in a quarter.