Your Salesforce Org Is Not Ready for AI.
Here Is the Technical Debt Audit That Proves It
Every org wants Agentforce. Almost none of them have the foundation to run it. After auditing Salesforce orgs every day, I can show you exactly where the landmines are hiding, and the order in which to defuse them before you deploy a single AI agent.
Technical debt is the silent killer of AI adoption in Salesforce. Duplicate records make Einstein hallucinate. Spaghetti automations trigger unpredictable side effects when agents invoke Flows. Ungoverned permissions give AI access to data it should never see. Vibe-coded Apex classes introduce vulnerabilities nobody understands. This guide covers the five layers of Salesforce technical debt (code, automation, data, configuration, and security), a 12-step audit checklist you can run this week, and a phased remediation plan that prioritizes the fixes that unblock AI adoption fastest.
Here is what happened. For the last two years, the entire Salesforce ecosystem has been sprinting toward AI. Agentforce. Einstein. Data Cloud. Copilots. Agents. The C-suite heard “AI” and approved budgets. The implementation teams heard “Agentforce” and started building agents.
Then they hit the wall.
The Agentforce agent could not find the right customer record because the same person existed as three different Contacts with slightly different email addresses. The Einstein prediction model produced nonsense because 30% of the Opportunity Amount fields were blank or contained placeholder data. The Flow that the agent triggered to process a return also triggered a deprecated Process Builder on the same object, which fired an Outbound Message to a URL that no longer existed, which threw 47,000 errors overnight.
These are not AI problems. They are technical debt problems. And they have been building quietly for years, waiting for something complex enough to expose them. AI is that something.
2025 introduced “vibe coding” tools like Agentforce Vibes that let anyone generate Apex, Flows, and LWC components from natural language prompts. The upside: faster development. The downside: AI-generated code now accounts for a growing share of new code in Salesforce orgs, and studies show it carries significantly higher rates of security vulnerabilities, code duplication, and poorly structured logic. As one Salesforce MVP put it, “If you can build things faster, that does not necessarily mean you are going to build better things faster. It just means you are going to make more, faster.” Building faster on a weak foundation does not fix the foundation. It just makes the collapse more spectacular.
Not all technical debt is created equal. Some slows you down. Some blocks AI adoption entirely. Some leaves you vulnerable to a data breach. I categorize Salesforce technical debt into five layers, ordered by how directly each one undermines AI readiness.
Layer 1: Code Debt
Hard-coded IDs, SOQL queries inside loops, test classes that hit 75% coverage but validate nothing, Apex triggers with no documentation. The code works until it does not.
Layer 2: Automation Debt
Overlapping Process Builders, Workflow Rules that Salesforce is retiring, Flows that nobody mapped, triggers firing in unpredictable orders. Automations that fight each other.
Layer 3: Data Debt
Duplicate records, incomplete fields, no standardized entry rules, orphaned data from abandoned integrations. This is the debt that directly causes AI to hallucinate and produce unreliable outputs.
Layer 4: Configuration Debt
Hundreds of unused custom fields, objects nobody touches, page layouts from three admin generations ago, permission sets that do not reflect current roles. Every unused component adds noise and confusion for AI systems trying to understand your data model.
Layer 5: Security Debt
Overly permissive profiles, connected apps with unrotated secrets, session IDs in outbound messages, no MFA enforcement. The 2025 Salesforce data breaches exposed that most orgs had security configurations that were dangerously outdated. AI agents inherit the permissions of the user they run as. If those permissions are wrong, the agent has wrong access.
Data debt is the most critical for AI readiness because it directly corrupts AI outputs. An Agentforce agent grounded in dirty Data Cloud profiles will produce confidently wrong answers. Security debt is the most dangerous because it determines the blast radius if something goes wrong. Automation debt is the most disruptive because it causes unpredictable side effects when agents trigger Flows. Code and configuration debt are important but slower-burning. Prioritize in this order: data, security, automation, code, configuration.
I have a simple rule I apply in every org audit: run the test suite with seeAllData=false and see what breaks. In about 80% of the orgs I review, at least one critical test class relies on production data to pass. That means it is not actually testing the code. It is testing whether a specific record still exists. The moment someone deletes that record, the deployment pipeline breaks.
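To make the phantom-coverage problem concrete, here is a hedged before-and-after sketch. The class names and the `OpportunityService.process` method are invented for illustration; the pattern is what matters.

```apex
// PHANTOM TEST (anti-pattern): hits the coverage number, asserts nothing,
// and depends on a specific production record existing. Delete that record
// and your deployment pipeline breaks.
@isTest(SeeAllData=true)
private class OpportunityServiceTest_Bad {
    @isTest
    static void testProcess() {
        // Hypothetical record name -- the test is really testing whether
        // this row still exists in production.
        Opportunity opp = [SELECT Id FROM Opportunity WHERE Name = 'Acme Renewal' LIMIT 1];
        OpportunityService.process(opp.Id); // executes lines, validates nothing
    }
}

// REAL TEST: builds its own data and asserts an actual outcome.
@isTest(SeeAllData=false)
private class OpportunityServiceTest_Good {
    @isTest
    static void testProcessSetsParentRating() {
        Account acc = new Account(Name = 'Test Account');
        insert acc;
        Opportunity opp = new Opportunity(
            Name = 'Test Opp', StageName = 'Prospecting',
            CloseDate = Date.today().addDays(30), AccountId = acc.Id);
        insert opp;

        Test.startTest();
        OpportunityService.process(opp.Id); // hypothetical service under test
        Test.stopTest();

        acc = [SELECT Rating FROM Account WHERE Id = :acc.Id];
        System.assertEquals('Hot', acc.Rating,
            'process() should set the parent Account rating to Hot');
    }
}
```

Both classes can show identical line coverage. Only the second one tells you anything when it passes.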
But code debt goes much deeper than bad tests.
The Four Code Debt Patterns That Block AI
| Pattern | Why It Exists | Why It Blocks AI | Fix Complexity |
|---|---|---|---|
| Hard-coded IDs | Developer copied a Record Type ID from sandbox during a rush deployment | Breaks between environments. Agent actions that reference these IDs fail silently in production. | Low |
| SOQL inside loops | Non-bulkified trigger written years ago when data volumes were small | Agent triggers that process records in bulk hit the 100 SOQL query limit instantly, causing silent failures. | Medium |
| Phantom test coverage | Tests written solely to hit 75% line coverage with zero assertions | Gives false confidence. Code that “passes” tests still fails under real conditions. Deployment pipeline is unreliable. | High |
| Vibe-coded Apex | AI-generated code deployed without architectural review | Often includes security vulnerabilities, redundant logic, and patterns that work for small data but fail at scale. Nobody on the team understands the generated code well enough to debug it. | Critical |
```apex
// REAL EXAMPLE: This trigger exists in more orgs than I care to admit.
// It works perfectly... for 1 record. It hits governor limits at 50.
trigger UpdateAccountRating on Opportunity (after update) {
    for (Opportunity opp : Trigger.new) {
        // SOQL INSIDE A LOOP. This is the #1 governor limit violation.
        Account acc = [SELECT Id, Rating FROM Account WHERE Id = :opp.AccountId];
        // Hard-coded Record Type ID. Breaks in every environment except prod.
        if (opp.RecordTypeId == '012000000000ABC') {
            acc.Rating = 'Hot';
            update acc; // DML INSIDE A LOOP. Second governor limit bomb.
        }
    }
}
```
When an Agentforce agent invokes an Apex action, it runs in the same execution context as any other Apex transaction. It is subject to the same governor limits: 100 SOQL queries, 150 DML operations, 10-second CPU timeout. If your existing triggers are not bulkified, an agent action that processes 200 records during a batch operation will fail. The agent will not know why. It will retry. It will fail again. And your service team will get a Slack message at 2am.
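For contrast, here is a sketch of what a bulkified rewrite of that trigger might look like. The `Enterprise` Record Type developer name is a placeholder; the structure (collect first, query once, update once) is the point.

```apex
trigger UpdateAccountRating on Opportunity (after update) {
    // Resolve the Record Type by developer name instead of a hard-coded ID,
    // so the same code works in every environment. 'Enterprise' is a
    // placeholder -- substitute your org's actual developer name.
    Id targetRtId = Schema.SObjectType.Opportunity
        .getRecordTypeInfosByDeveloperName().get('Enterprise').getRecordTypeId();

    // Pass 1: collect parent Account Ids. No SOQL or DML inside the loop.
    Set<Id> accountIds = new Set<Id>();
    for (Opportunity opp : Trigger.new) {
        if (opp.RecordTypeId == targetRtId && opp.AccountId != null) {
            accountIds.add(opp.AccountId);
        }
    }
    if (accountIds.isEmpty()) return;

    // One query and one DML statement, whether the batch is 1 record or 200.
    List<Account> toUpdate = new List<Account>();
    for (Account acc : [SELECT Id, Rating FROM Account WHERE Id IN :accountIds]) {
        if (acc.Rating != 'Hot') {
            acc.Rating = 'Hot';
            toUpdate.add(acc);
        }
    }
    update toUpdate;
}
```

Same behavior, but it consumes one SOQL query and one DML statement per transaction instead of one per record, which is what keeps a 200-record agent batch inside governor limits.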
I audited an org last quarter that had, on the Opportunity object alone: 4 Apex triggers, 3 Process Builders, 2 Workflow Rules, and 6 Record-Triggered Flows. Some of them did the same thing. Some conflicted. Nobody had a complete map of what happened when an Opportunity record was saved.
This is automation debt. It accumulates when teams build new automations without retiring old ones, when Salesforce releases new automation tools (Flows) without automatically migrating the deprecated ones (Process Builders, Workflow Rules), and when multiple admins build solutions in parallel without coordinating.
Salesforce has been signalling the retirement of Process Builder and Workflow Rules for years. They are functionally deprecated. Any org still running active Process Builders is carrying debt that compounds every time a new Flow is added on the same object, because Process Builders and Flows interact in unpredictable ways when both fire on the same record event. If you have not migrated yet, this is the year to do it. Before you add AI agents that depend on those automations behaving predictably.
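You can build a first-pass automation map without leaving Developer Console. The sketch below queries `FlowDefinitionView`, where Process Builder processes are stored with `ProcessType = 'Workflow'`; exact field availability varies by API version, so verify the fields in your org before relying on this.

```apex
// Anonymous Apex: inventory active Flows and Process Builders.
List<FlowDefinitionView> defs = [
    SELECT ApiName, Label, ProcessType, TriggerType, IsActive
    FROM FlowDefinitionView
    WHERE IsActive = true
    ORDER BY ProcessType, ApiName
];

Integer processBuilders = 0;
for (FlowDefinitionView d : defs) {
    if (d.ProcessType == 'Workflow') {
        processBuilders++; // Process Builders persist as Flows of this type
    }
    System.debug(d.ProcessType + ' | ' + d.TriggerType + ' | ' + d.Label);
}
System.debug('Active Process Builders still to migrate: ' + processBuilders);
```

It will not catch Workflow Rules or Apex triggers (those live in their own metadata), but it gives you a hard number for the Process Builder migration backlog in about a minute.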
I will say this as directly as I can: you cannot run clean AI on dirty data. It does not matter how good the Agentforce agent is, how well you configure your topics and instructions, or how much you spend on Data Cloud licenses. If the data underneath is fragmented, duplicated, incomplete, or stale, the AI will produce outputs that are confidently, persuasively wrong.
Data debt is the most dangerous form of technical debt for AI because it directly corrupts the grounding that makes AI agents useful. When Agentforce queries a unified customer profile in Data Cloud, it trusts that profile. If the profile merges three duplicate Contact records with conflicting email addresses and phone numbers, the agent does not know which one is correct. It picks one. Sometimes it picks the wrong one. And the customer gets a response addressed to the wrong name, referencing the wrong order, sent to the wrong email.
The Data Debt Inventory
| Data Debt Type | How It Manifests | AI Impact | Detection Method |
|---|---|---|---|
| Duplicate records | Same customer exists as 3 Contacts, 2 Leads, and 1 Person Account | Agent cannot determine which record is the “truth.” Grounding fails. | Run duplicate rules report. Check Data Cloud identity resolution match rates. |
| Incomplete fields | 30% of Opportunity Amount fields are null or $0 | Einstein predictions and agent recommendations based on revenue data become meaningless. | Field usage report in Setup. Data completeness dashboard. |
| Stale data | 50,000 Leads untouched for 2+ years still in “Open” status | AI treats these as active pipeline. Distorts forecasting and prioritization. | Query for records with LastModifiedDate older than 24 months. |
| Orphaned integration data | Records created by a decommissioned integration, linked to nothing | Increases noise in Data Cloud. Consumes ingestion credits for useless data. | Check CreatedBy for integration users. Cross-reference with active integrations. |
| No identity strategy | No match rules, no reconciliation rules in Data Cloud | Unified profiles are actually fragmented profiles. The “single view” is a lie. | Review Data Cloud identity resolution configuration. Check merge rates. |
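Two of the detection methods in the table above can be run as anonymous Apex in a few minutes. The status value and integration username below are placeholders; adjust them to your org.

```apex
// Stale pipeline: open Leads untouched for 24+ months.
Integer staleLeads = [
    SELECT COUNT()
    FROM Lead
    WHERE IsConverted = false
      AND Status = 'Open'   // placeholder -- use your org's actual open statuses
      AND LastModifiedDate < :Datetime.now().addMonths(-24)
];
System.debug('Stale open Leads: ' + staleLeads);

// Orphaned integration data: records created by a decommissioned
// integration user. The username below is a placeholder.
Integer orphaned = [
    SELECT COUNT()
    FROM Contact
    WHERE CreatedBy.Username = 'legacy.integration@example.com'
];
System.debug('Contacts created by the retired integration: ' + orphaned);
```

Run the same pair of counts on every object you plan to ingest into Data Cloud, and you have a rough estimate of how many credits you are about to spend on garbage.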
Data Cloud uses consumption-based pricing. Every record you ingest, every query you run, every segment you build consumes credits. If you ingest 5 million records and 40% of them are duplicates, stale leads, or orphaned integration data, you are burning credits on garbage data that actively makes your AI worse. The fix is not “ingest everything and let Data Cloud sort it out.” The fix is cleaning the data before it enters Data Cloud. Ingest less. Plan more. Track everything.
Open any Salesforce org that has been running for more than five years and go to the Object Manager. Count the custom fields on the Account object. I have seen orgs with 400+ custom fields on Account. At least 150 of them had 0% population. Nobody knew what they were for. Nobody was willing to delete them because “maybe someone uses that for a report.”
Configuration debt does not crash your org. It does not throw errors. It silently increases cognitive load for every admin, developer, and AI agent that has to navigate your data model. When Agentforce tries to determine which fields are relevant to answer a customer question, a clean object with 30 well-named fields is a simple problem. An object with 400 fields, half of them blank, many with ambiguous names like Custom_Flag__c and Legacy_Status_Old__c, is a much harder problem for any reasoning engine.
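A quick way to put numbers on suspected dead fields is to count their population directly. This is a hedged sketch: the field names come from the examples above and are placeholders, and checking hundreds of fields this way would exceed the 100-SOQL-query limit in a single transaction, so batch it or lean on Salesforce Optimizer for the full scan.

```apex
// Anonymous Apex: population percentage for a handful of suspect fields.
List<String> suspectFields = new List<String>{
    'Custom_Flag__c', 'Legacy_Status_Old__c'  // placeholder field names
};

Integer total = [SELECT COUNT() FROM Account];
for (String f : suspectFields) {
    // Dynamic SOQL per field; only use trusted, hard-coded field names here.
    Integer populated = Database.countQuery(
        'SELECT COUNT() FROM Account WHERE ' + f + ' != null'
    );
    Decimal pct = total == 0 ? 0 : (100.0 * populated / total);
    System.debug(f + ': ' + pct.setScale(1) + '% populated');
}
```

Any custom field sitting at 0% population for years is a deletion candidate; archive its definition in a spreadsheet if you need the "maybe someone uses it" safety net.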
Go to Setup, search for “Optimizer.” Run it. It is free. It scans your entire org and flags unused fields, overlapping automations, performance bottlenecks, and security concerns. It takes 10 minutes to run and produces a prioritized report. I start every org health assessment with this tool, and it has never failed to surface at least a dozen quick-win cleanup opportunities. If you have never run it, do it today. You will be surprised.
In 2025, a series of coordinated cyberattacks targeted Salesforce customers through malicious connected applications and social engineering of admins. Customer data was exfiltrated. Org secrets were compromised. Several ISV partner orgs were temporarily disabled from accessing Salesforce entirely. The breaches exposed a painful truth: most orgs had security configurations that were years out of date.
AI compounds the security problem. Agentforce agents run as a specific user in your org. They inherit that user’s profile, permission sets, and field-level security. If your permission model is overly broad (which most are, because restricting permissions is tedious and nobody prioritized it), then an AI agent has access to data it should never see. A customer-facing service agent that can read the Annual Revenue field on Account, or the Discount Percentage on Opportunity, is a data leak waiting to happen.
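Because the agent inherits a real user's field-level security, you can verify its access the same way you would any user's: with a `System.runAs` test. The profile name and the discount field below are placeholders; describe results reflect the running user, which is exactly what makes this check work.

```apex
// Sketch: assert that the prospective agent user CANNOT read sensitive
// fields, before any agent is deployed. Runs only in test context.
@isTest
private class AgentUserFlsTest {
    @isTest
    static void agentCannotReadSensitiveFields() {
        // Placeholder profile name -- use your dedicated agent user's profile.
        User agentUser = [SELECT Id FROM User
                          WHERE Profile.Name = 'Agentforce Service Agent'
                          LIMIT 1];
        System.runAs(agentUser) {
            System.assertEquals(false,
                Schema.sObjectType.Account.fields.AnnualRevenue.isAccessible(),
                'Agent user should not read Account.AnnualRevenue');
            System.assertEquals(false,
                Schema.sObjectType.Opportunity.fields.Discount__c.isAccessible(),
                'Agent user should not read the discount field (placeholder name)');
        }
    }
}
```

If either assertion fails, fix the permission set before the agent goes anywhere near a customer conversation, not after.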
Three Security Fixes Before You Deploy Any AI Agent
1. Run the Security Health Check and remediate every critical and high-risk finding before anything else.
2. Audit your Connected Apps: remove the ones nobody uses, rotate any secrets that have never been rotated, and verify each remaining app's OAuth scopes.
3. Create a dedicated, least-privilege profile for the agent user, with field-level security reviewed on every object the agent can touch. Never run an agent as an admin.
The Spring ’26 release introduces significant security changes. Certificate lifecycles are shortening to 200 days (dropping to 100 in 2027). Outbound Messages will no longer include session IDs. External Client Apps replace Connected Apps with modern OAuth flows and a closed security posture by default. If you have not reviewed your org’s security configurations in the last three months, they are almost certainly carrying debt that needs addressing before AI deployment.
AI Readiness Audit: 12 Steps
1. Run the full Apex test suite with SeeAllData=false. Do any tests fail?
2. Search your Apex for hard-coded IDs.
3. Check triggers on your top objects for SOQL or DML inside loops.
4. Review test classes for real assertions, not just line coverage.
5. Map every automation (triggers, Process Builders, Workflow Rules, Flows) per object.
6. Count your active Process Builders and Workflow Rules.
7. Run a duplicate-records report on Contacts and Leads.
8. Measure completeness of AI-critical fields such as Opportunity Amount.
9. Query for records untouched for 24+ months that are still in an active status.
10. Run Salesforce Optimizer and review the flagged items.
11. Run the Security Health Check and review every critical finding.
12. Audit Connected Apps and permission sets against least-privilege access.
Give your org 1 point for each check where you find no significant issues. 10-12 points: your org is AI-ready, proceed with Agentforce. 7-9 points: remediate the flagged items before deploying agents to production. 4-6 points: significant debt. Dedicate a sprint to cleanup before AI work. 0-3 points: stop all new feature development. Your org needs a dedicated remediation project before it can safely support AI or any further customization.
The Phased Remediation Plan
| Phase | Focus Area | Actions | Timeframe |
|---|---|---|---|
| Phase 1 | Security + Data | Run Security Health Check and fix critical items. Audit Connected Apps. Deduplicate Contact and Lead records. Set up duplicate rules to prevent new duplicates. Define data completeness thresholds for AI-critical fields. | Weeks 1-2 |
| Phase 2 | Automation | Map all automations per object. Migrate top 5 highest-risk Process Builders to Flows. Retire Workflow Rules firing on the same objects as Flows. Test the full save cycle on each object after migration. | Weeks 3-4 |
| Phase 3 | Code | Fix hard-coded IDs. Bulkify non-bulkified triggers on your top 5 objects. Rewrite phantom test classes with actual assertions. Review any vibe-coded Apex for security vulnerabilities. | Weeks 5-6 |
| Phase 4 | Configuration | Delete unused custom fields (0% population). Archive stale records. Clean up page layouts. Remove unused objects. Update permission sets to reflect current roles. | Weeks 7-8 |
| Phase 5 | AI Deployment | Create dedicated Agent User profile with least-privilege permissions. Deploy first Agentforce agent in sandbox with Agent Script guardrails. Run Testing Center simulations. Promote to production with monitoring. | Weeks 9-10 |
Run the Optimizer. Run the Security Health Check. Count your active Process Builders. These three actions take 30 minutes total and give you a quantified picture of your org’s debt. Print the results. Bring them to your next sprint planning meeting. The numbers are hard to argue with.
I have watched companies spend six months building an Agentforce agent, only to discover at launch that the agent could not reliably identify which customer it was talking to because the Contact database had 30% duplicate records. I have seen Einstein forecasting models produce absurd pipeline projections because half the Opportunity Amount fields contained placeholder values from a 2019 data migration that nobody cleaned up.
The pattern is always the same. The AI is fine. The data is not. The automations are not. The security model is not. The org was built for a world where a human could look at a messy record and use judgment. AI does not have that luxury. It trusts the data. It trusts the permissions. It trusts the automation output. If any of those are wrong, the AI is wrong too, with total confidence.
Run the audit. Fix the foundation. Then deploy the AI. That order matters more than any other architectural decision you will make this year.
