The AI SDR Post-Mortem
By Peter Korpak · 14 min read · April 2026
Direct answer: The AI SDR category bet the industry on autonomy and delivered a force multiplier for operators who already knew what they were doing. 11x's customers churned at 70–80% within three months (TechCrunch, March 2025). Artisan lost its LinkedIn presence in December 2025 over data-broker compliance. SaaStr ran five specialized AI agents for six months and got measurable results — but only after committing 15–20 hours per week of human oversight and fixing its RevOps first. That is not autonomy. That is a senior operator with a better tool. The pitch was eliminating human SDRs. The reality is the AI needs one to function.
The AI SDR wave crested somewhere around late 2024. The deck looked clean: $74 million in venture capital into 11x alone, a category of platforms promising to replace outbound headcount with autonomous agents, and enough early response rates to keep the story alive. Then the contracts came up for renewal.
This is not a "technology failed" story. It is a "the pitch was wrong" story. The technology does what it does. What it does not do is what it was sold as.
The Sound of a Category Breaking
On March 6, 2025, a source close to TechCrunch noticed something: ZoomInfo's logo was still on 11x's website as a customer endorsement. ZoomInfo was not a customer. It had run a one-month pilot from mid-January to mid-February, after which, per ZoomInfo's spokesperson, 11x's "product performed significantly worse than our SDR employees." ZoomInfo did not move forward.
What made this different from normal startup sloppiness: 11x's AI phone dialer was still repeating the ZoomInfo customer claim out loud — in live calls to prospects — after the logo had been removed from the website. ZoomInfo's lawyer had already sent a letter to Hasan Sukkar, 11x's CEO, citing "deceptive trade practices, trademark infringement, misappropriation of goodwill, and false advertising."
A product that books meetings by claiming customers it does not have is not a product failure. It is a thesis failure. The AI was doing exactly what it was configured to do. The configuration was the problem.
Airtable had a similar story. They ran "a very short trial" of the product late last year, "and ultimately decided that it wasn't a fit for our business." It was never used in production, never rolled out to their sales team. And yet, as of March 21, 2025, 11x was still naming Airtable as a customer on its website.
"Since November, 11x has been claiming us as a customer in a multitude of channels: in sales calls, on its website, and now even on its AI dialer. We've spent the past four months demanding that they stop," ZoomInfo's spokesperson told TechCrunch.
The Numbers Nobody Wants Printed
Here is what the AI SDR category looked like when measured against something other than vendor benchmarks.
| Metric | Reported figure | Source |
|---|---|---|
| Customer churn at 11x (3-month window) | 70–80% | Anonymous employee, TechCrunch Mar 2025 |
| 11x claimed ARR vs. estimated post-churn ARR | $14M claimed / ~$3M actual | Employee estimate, TechCrunch Mar 2025 |
| AI SDR outbound response rate (SaaStr, best case) | 6.7% (with 15–20 hrs/week oversight) | Jason Lemkin, SaaStr 2025 |
| Enterprises that abandoned most AI initiatives in 2025 | 42% | S&P Global VotE survey, n=1,006 |
| Artisan LinkedIn ban duration | ~2 weeks (Dec 19, 2025 – early Jan 2026) | TechCrunch Jan 2026, CEO confirmed |
| Human oversight required per week to sustain performance | 15–20 hours | Jason Lemkin, SaaStr 2025 |
None of these numbers are buried. They come from the companies themselves, from named company responses given to reporters, and from the most publicly documented AI SDR success story available. What they tell you is consistent: the gap between the pitch and the product is wide, the churn is structural, and the results that exist require human investment that was never priced into the promise.
Anatomy of the 11x Meltdown
The mechanics of what happened at 11x are worth understanding because they are not unique to 11x. They are the playbook.
Customers were steered toward 12-month contracts with 3-month break clauses rather than genuine trials. "They were resistant to signing any sort of trial or letting us experiment," a prospective customer told TechCrunch. The break clause functioned as a de facto trial period — customers used it to exit at month three, and most did.
Meanwhile, 11x counted those customers in its Contracted ARR (CARR) at the full-year value. One employee described the math plainly: the company might report $14 million in ARR when the contracts that had actually survived the break clause totaled roughly $3 million. "They absolutely massaged the numbers internally when it came to growth and churn," another former employee said.
11x says it uses CARR when reporting to the board and that investors were aware. Benchmark says it received transparent updates, including on the break clauses. a16z denied reports that it was contemplating legal action. What is not disputed: ZoomInfo was not a customer. Airtable was not a customer. The AI dialer kept claiming them after the logos were removed. The product had a high hallucination rate in the field.
"We were losing 70–80% of customers that came through the door," one former employee told TechCrunch. "That allowed 11x to look like it's doing better than it is."
By the time TechCrunch published its investigation, only Sukkar remained from the team visible in 11x's original 2023 launch photo. At least one former employee was still awaiting back pay months after leaving. The culture was described as 60-hour weeks, 3am Slack messages, and management-by-public-shaming in the company's general channel.
The product problems and the management problems were related. A team under this much pressure will close contracts it knows it shouldn't. A product under this much growth pressure will ship features it knows are not ready. The 70–80% churn was not an accident. It was a forecast.
Artisan and the Platform Risk Nobody Priced In
Artisan's story is different in form, similar in kind. Where 11x ran into a customer trust problem, Artisan ran into a platform problem.
On December 19, 2025 — a Friday evening before the holiday — CEO Jaspar Carmichael-Jack received an email from LinkedIn's enforcement team. Artisan's accounts were restricted. Every employee profile, the company page, executive posts: all displaying "This post cannot be displayed." For a company that sells outbound sales agents, losing LinkedIn access is a supply-chain problem, not a PR one.
The reason was not what the viral posts claimed. LinkedIn did not ban Artisan for spam. They banned it for two things: using LinkedIn's name on Artisan's own website in a way LinkedIn objected to, and using data brokers whose scraping violated LinkedIn's terms of service. LinkedIn's terms prohibit unauthorized data collection. Artisan's data partners had been collecting data in ways that ran afoul of those terms, and LinkedIn attributed the upstream breach to Artisan.
Artisan resolved it — removed the LinkedIn name references, verified vendor compliance, and was reinstated within roughly two weeks. Carmichael-Jack told TechCrunch, with some candor: "Every startup inevitably has some kind of thing that comes back to bite them from things that they do early on."
The thing to understand about Artisan's situation is that their data infrastructure is their competitive edge. If their data vendors can be decoupled from them by a platform's compliance enforcement, that edge is rented, not owned. LinkedIn gave them a warning and a reinstatement. The next platform may not.
Artisan is the startup that put "Stop Hiring Humans" on billboards in San Francisco. They got banned from LinkedIn because of how their data providers were collecting data on those same humans. The irony does not make for a better product.
What AI SDRs Actually Deliver When You Measure Honestly
The most credible public AI SDR case study available is Jason Lemkin's. SaaStr ran five specialized agents over six months — inbound qualification, cold outbound, lapsed customer recovery, active nurture, ghosted lead recovery. They sent 19,847 total messages. The outbound agents hit a 6.7% response rate, which Lemkin describes as roughly double the industry average for cold outbound.
That is real. And the conditions that produced it are also real: SaaStr had a known, defined ICP. Their RevOps was clean before they deployed. Their messaging had been tested and validated by humans first. And they committed 15–20 hours per week of Lemkin's own time and his Chief AI Officer's time to training, reviewing, and feeding the agents fresh contact lists twice weekly.
"Performance ebbs and flows directly with human attention," Lemkin writes. "Weeks I'm busy with other work, agent performance noticeably dips."
That is not a knock on AI SDRs. It is a precise description of what they are: a force multiplier for a well-staffed, well-run outbound operation. Remove the conditions and the results disappear. Most teams that purchased AI SDRs in 2024–2025 did not have those conditions. They bought the agent hoping it would create them. It does not do that.
Lemkin's results also required infrastructure discipline. Artisan's platform forces a 2–3 week warmup period before sending at volume. "This annoyed me initially," Lemkin admits. "Now I understand. Our emails hit primary inbox, not promotions tabs. Our deliverability is essentially perfect." The teams that skipped warmup — and many were sold on skipping it — are the teams with burned domains and blacklisted sending infrastructure.
The Autonomy Lie
The pitch was always autonomy. Hire fewer humans. The agent runs itself. That is why a category with effectively human-SDR reply rates commanded $50,000–$100,000 per platform annually.
Lemkin puts the actual number on it: 15–20 hours of human oversight per week. His framing: "This is not set-and-forget technology; it's coaching five SDRs simultaneously who work 24/7."
Coaching five SDRs who work 24/7 is more intensive than managing two human SDRs who work 40 hours a week. The leverage is real — those 24/7 agents sent 3,000 emails per month from outbound alone, compared to 75–285 per month from previous human reps. But the oversight is real too. It did not disappear. It moved from the SDR's manager to the AI's trainer.
For companies that could not sustain 15–20 hours of weekly attention, performance dropped. For companies without clean RevOps, the AI amplified the broken process at scale. For companies without proven messaging, the agent sent broken messaging to thousands of prospects in weeks.
The category sold the upside of autonomous scale. It did not sell the prerequisites. That is the lie — not that it does not work, but that it works without the infrastructure that was never included in the purchase price.
Why the Math Never Worked
Run the 11x math at face value and it falls apart fast.
A 12-month enterprise AI SDR contract starts at $50,000–$100,000 per year. Add 15–20 hours per week of senior operator time at a conservative $150/hour loaded cost and you are spending $117,000–$156,000 annually in time alone before counting the contract. All in, that is $167,000–$256,000 per year: the cost of two to four human SDRs with benefits.
Now apply a 70–80% probability that you exit the contract at month three. You have paid roughly $12,500–$25,000 in contract fees plus $29,000–$39,000 in oversight time for 90 days of results. If those 90 days did not produce enough pipeline to justify the next nine months, you are out.
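The arithmetic above can be sketched directly. This is a minimal worked example of the buyer's math, assuming the figures already stated: a $50,000–$100,000 annual contract, 15–20 hours per week of oversight, and a $150/hour loaded operator cost.

```python
WEEKS_PER_YEAR = 52
HOURLY_LOADED = 150  # assumed loaded cost of a senior operator's hour

def annual_oversight_cost(hours_per_week: float) -> float:
    """Annual cost of the human oversight the agent requires."""
    return hours_per_week * HOURLY_LOADED * WEEKS_PER_YEAR

def three_month_exit_cost(annual_contract: float, hours_per_week: float) -> float:
    """Total spend if the buyer exercises the 3-month break clause:
    one quarter of the contract plus one quarter of the annual oversight."""
    return annual_contract / 4 + annual_oversight_cost(hours_per_week) / 4

# Best and worst cases from the ranges above
best = three_month_exit_cost(50_000, 15)    # 12,500 + 29,250 = 41,750
worst = three_month_exit_cost(100_000, 20)  # 25,000 + 39,000 = 64,000
print(f"${best:,.0f} to ${worst:,.0f} for 90 days of results")
```

The point of running it this way: the oversight line is the same order of magnitude as the contract line, and the pitch priced only one of them.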
Most buyers did not stay. That is the story the CARR games were concealing. A software product with genuine product-market fit does not need 3-month break clauses baked into the standard contract — it needs them because retention without the escape hatch would be visibly worse.
The math only works if the agent actually reduces headcount and maintains output. In practice, the teams that performed best — Lemkin's included — did not reduce headcount. They reallocated two human SDR roles and ran the agents alongside restructured human teams. Not instead of them.
The Deliverability Cost the Category Externalized
There is a cost the AI SDR category never put on its income statement: the email infrastructure damage it caused for buyers.
AI SDR platforms are volume machines. That is their value proposition — sending 3,000 emails per month from a single seat. But that volume arrived exactly as Google and Microsoft were tightening bulk-sender enforcement. Google's February 2024 policy changes set a 5,000-message-per-day threshold for new compliance requirements. Microsoft's May 2025 enforcement added hard 550 5.7.515 rejection codes for senders who triggered bulk thresholds.
The category did not cause those policy changes. But it contributed to the environment that made them necessary. And the individual buyers who ran high-volume AI SDR campaigns without proper domain rotation, DMARC enforcement, and list hygiene paid the deliverability price — often on their primary sending domain.
A burned primary domain takes months to recover. The AI SDR vendor moved on to the next customer. The buyer kept the reputation damage.
The full technical picture of what changed — and what "deliverability-first infrastructure" actually requires in 2026 — is in Cold Email Deliverability Is Now an Engineering Problem. The short version: warmup periods, domain rotation, DMARC at enforcement, and per-mailbox send caps of 30–50/day are non-negotiable starting points. Most AI SDR vendors were not requiring any of this at the point of sale.
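To make one of those prerequisites concrete: "DMARC at enforcement" means the domain's DMARC TXT record (published at `_dmarc.<domain>`) carries a policy of `quarantine` or `reject`, not the monitoring-only `p=none`. A minimal sketch of that check, using illustrative record values rather than any real domain's policy:

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record like 'v=DMARC1; p=reject; ...' into tags."""
    tags = {}
    for part in record.split(";"):
        key, sep, value = part.strip().partition("=")
        if sep:  # skip empty fragments from trailing semicolons
            tags[key.strip().lower()] = value.strip()
    return tags

def at_enforcement(record: str) -> bool:
    """True only when the policy actually disposes of failing mail."""
    tags = parse_dmarc(record)
    return (tags.get("v", "").upper() == "DMARC1"
            and tags.get("p", "none").lower() in ("quarantine", "reject"))

# Illustrative records (not any real domain's policy):
print(at_enforcement("v=DMARC1; p=none; rua=mailto:reports@example.com"))    # False: monitoring only
print(at_enforcement("v=DMARC1; p=reject; rua=mailto:reports@example.com"))  # True: enforcement
```

A `p=none` record satisfies a vendor checklist but does nothing for the bulk-sender requirements described above; enforcement is the bar.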
What the 2% Who Stuck With It Do Differently
Some teams are running AI SDRs well. They are not doing what the pitch described.
- They specialize their agents. Lemkin runs five distinct agents for five distinct use cases, each with separate training, messaging, and success metrics. One general-purpose AI agent pointed at a cold list is not the same product as five tuned specialists. Most buyers got one.
- They fix the process before deploying the tool. Clean lists, proven messaging, defined ICP. The AI amplified what was already working. It did not discover what would work.
- They keep humans in the loop on high-ACV conversations. For products under $1,000, the AI closes autonomously. For $50,000+ contracts, it qualifies and books, then hands to a human. Nobody removed human judgment from high-stakes conversations.
- They treat infrastructure as non-negotiable. Warmup periods. Domain rotation. Deliverability monitoring. Complaint rate tracking. The teams that skipped these steps stopped appearing in primary inboxes within weeks.
- They measure positive reply rate, not open rate. Opens are meaningless since Apple Mail Privacy Protection inflates them by 15–25% across iOS inboxes. Positive replies are the metric. Everything else is noise.
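The last point is worth making concrete. A sketch with hypothetical campaign numbers (all figures here are illustrative, not from any source in this article) shows why the two metrics tell different stories:

```python
def open_rate(sent: int, recorded_opens: int) -> float:
    """Recorded opens include pixels auto-fired by Apple Mail
    Privacy Protection, so this overstates real attention."""
    return recorded_opens / sent

def positive_reply_rate(sent: int, positive_replies: int) -> float:
    """Only replies a human classified as positive (interested,
    meeting booked); excludes bounces, unsubscribes, 'not interested'."""
    return positive_replies / sent

# Hypothetical campaign for illustration:
sent = 1_000
recorded_opens = 620   # inflated by MPP auto-loads
positive_replies = 21

print(f"open rate: {open_rate(sent, recorded_opens):.1%}")                        # flattering, low-signal
print(f"positive reply rate: {positive_reply_rate(sent, positive_replies):.1%}")  # the real metric
```

A dashboard built on the first number rewards volume; one built on the second rewards relevance. The teams that stuck with AI SDRs built the second.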
This is not the "AI replaces SDRs" playbook. It is the "AI SDR as a specialist tool for operators who already know what they are doing" playbook. A much smaller market. A much more honest price point.
Bottom Line
The AI SDR category did something the sales software world had not managed in years: it got buyers to pay $50,000–$100,000 per year for 15–20 hours of extra work per week and 70–80% three-month churn by calling it autonomy.
That is not a technology failure. It is a naming failure. The technology works — under specific conditions, with specific prerequisites, and with ongoing human investment. The name "autonomous AI SDR" is wrong for what it is, which is an AI-assisted outbound system that requires an expert operator to function and fails visibly without one.
11x's investors could see the CARR games in the numbers. The churn was in the break clause structure. ZoomInfo's lawyers knew their client was not a customer. Artisan's data vendors knew they were collecting data from a platform that prohibited it. None of this was hidden. It was priced into the contract architecture from day one.
If you are deciding whether to buy an AI SDR now: read Lemkin's case study in full. Replicate his conditions first. If you cannot — if you do not have 15–20 hours a week available, clean RevOps, and proven messaging — the tool will fail for the same structural reason it failed for 70–80% of 11x's customers. The agent is not the variable. You are.
The category is not dead. It is just not what it said it was.
Before buying in: run the free Outbound Audit to see whether your infrastructure and process prerequisites are in place. And if declining reply rates feel like the real problem — not bad tooling — the root cause is more likely signal saturation than the sending platform you picked.
Get Monthly Field Notes
One email per month. What's working in outreach, what's not, and why. No fluff, no funnel.
No webinars. No launch countdowns. Unsubscribe anytime.
Frequently Asked Questions
Are AI SDRs ever worth it?
In narrow conditions, yes. Jason Lemkin's SaaStr team got 6.7% response rates from AI SDRs — but only after fixing their RevOps, defining proven messaging, and committing 15–20 hours per week of human oversight. If those conditions don't exist in your org, AI won't create them.
What does 11x's churn say about the AI SDR category?
11x employees told TechCrunch that 70–80% of customers used break clauses to exit within three months. That's not an 11x-specific failure — that's the market voting on whether the product delivered what was promised. Structural churn this high signals the autonomy pitch doesn't survive contact with real buyer behavior.
Is Artisan still operating after the LinkedIn ban?
Yes. LinkedIn reinstated Artisan in early January 2026 after the company removed references to LinkedIn from its website and verified its data vendors comply with LinkedIn's terms. The ban lasted roughly two weeks and was triggered by data-broker and trademark issues, not by AI agents spamming users directly.
What should I run instead of an AI SDR?
Signal-first outreach with a human writer in the loop. Build recognition before sending — LinkedIn presence, content, warm referral paths. Use signals (funding, hiring, job changes) as timing levers, not replacements for relevance. The free Outbound Audit below shows which dimensions of your system are leaking pipeline before you spend on tooling.
How do I get out of an AI SDR contract?
Most enterprise AI SDR contracts include 3-month break clauses — this is how 11x structured its deals, per TechCrunch reporting. Review your contract for termination-for-convenience language. If the vendor promised specific performance milestones that weren't delivered, document the gap in writing before invoking the clause.
Method & Sources
How this page was built and which references informed directional claims.
Method
- Primary reporting drawn from TechCrunch investigations (March 2025 and January 2026), based on nearly two dozen sources including investors and current and former employees.
- Operator data from Jason Lemkin's published 6-month AI SDR case study at SaaStr, including raw message volume, response rates, and human oversight hours.
- Vendor claims cross-checked against named company responses (ZoomInfo, Airtable) provided directly to TechCrunch.
- S&P Global 2025 Voice of the Enterprise AI & Machine Learning survey (n=1,006) cited for enterprise AI abandonment rate — readers should verify against the primary URL below.
Caveats
- The 70–80% churn figure comes from one anonymous employee statement to TechCrunch. 11x disputes it, claiming a current retention rate of 79%.
- The S&P Global 42% enterprise AI abandonment stat is drawn from their VotE survey; the primary URL is included in sources and should be verified before quoting downstream.
- Lemkin's 6.7% response rate reflects SaaStr's specific audience, ICP clarity, and 15–20 hours/week of human oversight — not a generalizable benchmark for cold outbound.
- Artisan's LinkedIn ban was resolved in January 2026. Their operational status reflects reporting as of that date.
Primary References
- TechCrunch: a16z- and Benchmark-backed 11x has been claiming customers it doesn't have — March 24, 2025. Primary investigation, nearly two dozen sources.
- TechCrunch: Yes, LinkedIn banned AI agent startup Artisan, but now it's back — January 7, 2026. CEO Jaspar Carmichael-Jack confirmed the ban and terms of reinstatement.
- SaaStr: 6 Months of AI SDRs — What's Worked and the Real Data Everyone's Asking For — Jason Lemkin. 19,847 messages sent, 6.7% response rate, 15–20 hrs/week oversight requirement.
- S&P Global: AI Experiences Rapid Adoption but with Mixed Outcomes (VotE survey, n=1,006) — 42% of enterprises abandoned most AI initiatives in 2025.
Your Outbound Has the Same Problem 11x's Customers Did
You don't need another tool. You need to know which dimension of your system is leaking pipeline. The Outbound Autopsy finds it in 5 business days.