
Klarna Had to Rehire Humans. Here's the Lesson for Your Business.

2026-03-31 · JR Intelligence

In early 2024, Klarna made headlines by replacing the equivalent of 700 full-time customer service employees with a single AI system. The numbers looked outstanding: $40 million saved annually, resolution times cut from 11 minutes to 2. The CEO went on CNBC. Tech Twitter celebrated.

By late 2025, Klarna was quietly rehiring human agents.

Customer satisfaction had dropped. Interactions were fast, but customers felt shortchanged — complex billing disputes, fraud cases, emotionally charged situations all got routed through the same flat AI that handled "where's my refund?" The AI couldn't read the room. Klarna's CEO admitted it publicly, calling the initial approach a mistake.

This is the most instructive case study in AI right now. Not because AI failed — it didn't. The AI was doing exactly what it was built to do. The mistake was in where they pointed it.

The Replace vs. Augment Trap

The math of full replacement is seductive. If a customer service rep costs $50,000 a year and an AI agent costs $600, the ROI writes itself. Block's Jack Dorsey cut 40% of his company in February 2026, calling it a move toward an "intelligence-native" company. Amazon eliminated 14,000 corporate roles. Salesforce cut its customer support headcount from 9,000 to 5,000 by deploying agentic AI.

Wall Street cheered all of it. Block's stock went up on the layoff news.

But Klarna's rehiring tells a more nuanced story, and it's the one that matters for smaller businesses that don't have the runway to run expensive experiments.

The companies getting this right aren't asking "how many people can AI replace?" They're asking a different question: what work should AI own, and what work should humans own?

That's not a philosophical question. It has a practical answer.

What AI Actually Does Well in Customer-Facing Roles

Vodafone deployed an AI assistant called TOBi that autonomously resolved 70% of customer inquiries and cut cost-per-chat by 70%. When they upgraded to a second version, first-time resolution rates went from 15% to 60% and their Net Promoter Score improved by 14 points.

Happy Wax, a fragrance brand with a lean team, deployed an AI agent that fully resolved over 50% of support conversations without any human involvement — within 90 days of going live.

Urban Rest, a global accommodation provider, deployed Agentforce to handle queries around the clock and projected 25–30% ROI within a year.

None of these companies fired their entire support team. They all drew a line: AI handles the first tier, the repetitive and the transactional. Humans handle everything above that line.

That's the model that works. And the line isn't arbitrary — it's defined by what type of conversation is happening.

AI wins at:

  • Order status, tracking, policy questions
  • FAQ deflection and first-contact resolution on known issues
  • Initial qualification and triage
  • After-hours availability with consistent quality

Humans win at:

  • Complaints with emotional stakes
  • Multi-issue or multi-touchpoint problems
  • Situations requiring judgment about exceptions
  • Anything where the customer needs to feel heard, not processed

Klarna tried to put AI on both sides of that line. That's where it broke.
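That line can be made concrete in code. Here's a minimal routing sketch in Python; the intent labels and the sentiment threshold are hypothetical, standing in for whatever your upstream classifier actually produces:

```python
# Hypothetical ticket-routing sketch: transactional intents go to AI,
# judgment-heavy or emotionally charged ones go to humans.

AI_TIER = {"order_status", "tracking", "policy_question", "known_issue_faq", "triage"}
HUMAN_TIER = {"complaint", "billing_dispute", "fraud", "multi_issue", "exception_request"}

def route(intent: str, sentiment_score: float) -> str:
    """Return 'ai' or 'human' for a classified inbound ticket.

    intent: a category label from an upstream classifier (illustrative names).
    sentiment_score: -1.0 (very negative) to 1.0 (very positive).
    """
    # Emotional stakes override the category: upset customers go to humans,
    # even on a "simple" question. This is the lesson Klarna learned.
    if sentiment_score < -0.5:
        return "human"
    if intent in HUMAN_TIER:
        return "human"
    if intent in AI_TIER:
        return "ai"
    # Unknown intent: default to human rather than risk a bad AI answer.
    return "human"
```

Note the default: anything the classifier can't confidently place falls to a human. The cost of a misrouted hard case is much higher than the cost of a human answering an easy one.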

The 50/50 Split That's Becoming Standard

Salesforce's current model is illustrative. AI handles 50% of interactions. Costs dropped 17%. The other 50% — the harder, higher-value conversations — still go to humans. Marc Benioff called it out explicitly: this isn't a story about replacing people, it's about changing the ratio.

For an SMB, that ratio might look different, but the logic is identical. If you're running a 10-person professional services firm and your team spends three hours a day answering the same 15 questions over email and chat, that's AI work. The business development calls, the client escalations, the creative problem-solving — that's human work. Neither category disappears; you're just routing them correctly.

The business.com 2026 SMB AI survey found that only 12% of SMBs are very likely to reduce staff due to AI. But 64% are launching training programs to teach employees to work with AI tools. The most realistic outcome isn't replacement — it's redeployment. The team you have spends less time on low-value repetitive work and more time on the things that actually require them.

What This Costs to Get Wrong

The Klarna story isn't just cautionary at the customer satisfaction level. There's a harder cost.

After Klarna's public admission, they had to rebuild hiring pipelines, retrain new agents on systems and product context, and manage the PR fallout of an AI-first strategy that publicly backfired. Rebuilding is more expensive than doing it right the first time.

For smaller businesses, the equivalent is scaling an AI deployment that feels successful by volume metrics — tickets handled, cost per interaction — while your repeat customer rate quietly erodes because the 5% of hard situations are getting handled badly. By the time the churn shows up in the numbers, the damage is done.

This is why the audit matters before the deployment. Not to slow things down, but to map where the actual failure modes are before you're live and accountable to customers.

A Practical Starting Point

If you're running a service business and thinking about customer-facing AI, here's a diagnostic worth doing before spending anything:

Categorize your inbound volume. Pull three months of support tickets, emails, or calls and tag each one. What percentage are repeatable, answerable by anyone who knows your policies? That's your automation candidate pool. What percentage required judgment, context, or a relationship? That's your human-required pool.

If your repeatable volume is under 40%, AI-first customer service probably isn't your highest ROI move right now. If it's over 60%, you have a real case.
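Once the tickets are tagged, the arithmetic is simple. A sketch, with illustrative tags and thresholds matching the 40%/60% rule of thumb above:

```python
# Diagnostic sketch: tag three months of tickets, then compute the share
# that's automation-eligible. Tag names and thresholds are illustrative.

def automation_readiness(tickets: list[dict]) -> tuple[float, str]:
    """tickets: [{'id': ..., 'tag': 'repeatable' | 'judgment'}, ...]"""
    if not tickets:
        return 0.0, "no data"
    repeatable = sum(1 for t in tickets if t["tag"] == "repeatable")
    share = repeatable / len(tickets)
    if share < 0.40:
        verdict = "AI-first support is probably not your highest-ROI move yet"
    elif share > 0.60:
        verdict = "you have a real case for tier-one automation"
    else:
        verdict = "borderline: pilot on the narrowest repeatable category first"
    return share, verdict
```

Running this on a sample where 7 of 10 tickets are repeatable yields a 0.7 share and the "real case" verdict.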

Then design the handoff, not just the bot. The failure point in most AI customer service deployments isn't the AI — it's what happens when the AI can't resolve something. How does it hand off? What context does the human agent inherit? Does the customer have to repeat themselves? A seamless escalation is worth more than a smarter AI.
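One way to think about the handoff is as a data contract: everything the human needs to avoid making the customer start over. A sketch of what that payload might look like; the field names are illustrative, not any vendor's schema:

```python
# Hypothetical handoff payload: when the AI escalates, the human agent
# inherits full context so the customer never repeats themselves.

from dataclasses import dataclass

@dataclass
class Handoff:
    customer_id: str
    issue_summary: str          # one-paragraph summary written by the AI
    transcript: list[str]       # the full conversation so far
    attempted_fixes: list[str]  # what the AI already tried, so the human doesn't repeat it
    sentiment: str              # e.g. "frustrated", "neutral"

def briefing(h: Handoff) -> str:
    """Render the context a human agent sees before saying a word."""
    tried = "; ".join(h.attempted_fixes) or "nothing yet"
    return (f"Customer {h.customer_id} ({h.sentiment}): {h.issue_summary} "
            f"AI already tried: {tried}.")
```

The specific fields matter less than the principle: if the human's first question to the customer is "can you explain the issue again?", the handoff has already failed.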

Measure satisfaction, not just deflection. Volume handled is easy to measure. Whether customers felt well-served is harder but more important. Set baseline NPS or CSAT scores before you deploy and track them monthly after.
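For reference, NPS itself is simple arithmetic: the percentage of promoters (scores 9–10) minus the percentage of detractors (0–6), on a 0–10 survey. A minimal sketch:

```python
# NPS arithmetic: % promoters (9-10) minus % detractors (0-6).
# Passives (7-8) count in the denominator but neither bucket.

def nps(scores: list[int]) -> float:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)
```

Compute it on a fixed cadence from the same survey question, before and after deployment; a moving monthly number is what tells you whether the AI tier is quietly eroding satisfaction.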


Klarna's story ends well. By Q3 2025 their AI was still doing the equivalent work of 853 full-time employees, total savings had reached $60 million, and they had a hybrid model that was actually working. The rehiring wasn't a failure — it was a correction.

The lesson isn't "don't use AI in customer service." It's that the decision about where to draw the human-AI line is the most important strategic choice in the deployment. Get that right and the economics are real. Get it wrong and you're paying twice: once for the AI and once for the cleanup.

If you want to figure out where that line sits in your business, an AI audit is where that conversation starts.


Ready to build?

One conversation. No pitch deck. We'll map your bottleneck and tell you honestly if AI infrastructure fits.