Part 1: Why We Need a Third Way

The AI Conversation We're Not Having


Every discussion about AI seems to devolve into the same tired debate: Will AI save us or destroy us? Are you a believer or a skeptic? Should we accelerate or pump the brakes?


This binary thinking is killing our ability to implement AI effectively. While we're busy arguing about robot overlords versus digital utopia, real organizations are struggling with real questions that don't fit neatly into either camp.


It's time for a third way—one that neither fears nor worships AI, but instead asks a simple question: How can we use this technology to make work more human, not less?


The Problem with Extremes


The Doom Scenario


Visit any tech forum and you'll find the Doomers. They're the ones sharing articles about "slaughterbots" (yes, that's a real headline), warning about mass unemployment, and treating every AI advancement like we're one step closer to Skynet.


Their organizations move slowly, if at all. They implement grudgingly, with so many restrictions that the tools become useless. Their employees, especially younger ones, grow frustrated with outdated processes while competitors zoom past.


I recently spoke with a company that spent 18 months in committee meetings about whether to allow ChatGPT for drafting internal emails. Eighteen months. Their competitors had already automated entire workflows.


The fear is understandable. When OpenAI launched ChatGPT in November 2022, I dismissed it myself as "spicy auto-correct." But fear-based paralysis isn't a strategy; it's surrender.


The Acceleration Trap


The opposite extreme is equally dangerous. The Accelerationists treat AI like a silver bullet. They rush to automate everything, viewing employees as expensive legacy systems to be deprecated.


Remember Klarna? The "buy now, pay later" company proudly announced they'd laid off 700 customer service workers after implementing AI. The tech press loved it. The stock market rewarded it.


Eleven months later, they were quietly hiring them back.


Why? Because they'd destroyed something more valuable than headcount savings: institutional knowledge, customer relationships, and most importantly, trust. Every remaining employee now knew they were just a slightly better algorithm away from unemployment. Innovation stopped. Morale tanked. The best people left.


The Hidden Cost of Binary Thinking


This false dichotomy creates three major problems:

  1. We miss the actual opportunity: While we debate extremes, we're not talking about how AI could eliminate the parts of work everyone hates while amplifying human capabilities.

  2. We paralyze decision-makers: Leaders feel forced to choose between being labeled as Luddites or as heartless technocrats. So they choose inaction.

  3. We ignore the people actually doing the work: The debate happens between executives and tech evangelists, while the people who could provide the most valuable input—frontline workers—are excluded entirely.


The Data We're Ignoring


A recent survey found that 39% of people who haven't adopted AI made that choice deliberately. Their top reasons:


  • They prefer human interaction

  • They worry about data privacy

  • They don't trust AI information

  • They value human accountability

  • They believe AI exhibits bias


These aren't the rantings of technophobes. These are legitimate concerns from thoughtful people. And instead of addressing them, we're telling them to pick a side in a war they never wanted to fight.


The Third Way: Human-Centered AI


What if we stopped asking "Should we use AI?" and started asking "How can AI make work better for humans?"


This isn't a compromise or a middle ground. It's a completely different framework—one that puts human needs, capabilities, and potential at the center of every AI decision.


IKEA's Master Class


When IKEA implemented AI customer service, they had 8,500 call center employees who were technically redundant. The Accelerationists would have fired them all. The Doomers would have avoided AI entirely.


IKEA chose differently. They retrained all 8,500 employees as interior design consultants.


Now customers can get instant answers to simple questions from AI ("What are the dimensions of this table?") while having rich, creative conversations with humans about room design, style choices, and making their house feel like home.

Result: Higher customer satisfaction. Higher employee satisfaction. Higher revenue. No dystopia. No unemployment. Just better work.


Our Experiment


We've spent the last year implementing AI with three core principles:

  1. Trust: Every decision must build, not break, trust with employees and clients

  2. Human-Centered: AI augments human capability rather than replacing it

  3. Purposeful: Every implementation must solve a real problem with measurable impact


The results:

  • 20% productivity gains

  • Zero layoffs

  • Increased employee satisfaction

  • More innovation from frontline workers


But more importantly, we've proven something: You don't have to choose between progress and people.


Why This Matters Now


We're at an inflection point. The decisions organizations make about AI in the next 2-3 years will set patterns that last for decades. If we let the extremists dominate the conversation, we'll either:

  • Fall so far behind that catching up becomes impossible, or

  • Destroy the human elements that make organizations innovative, resilient, and valuable


But there's another option.


Your Role in the Third Way


Whether you're a CEO, a middle manager, or an individual contributor, you have more power than you think:

  • If you're in leadership: You set the tone. Your first AI implementation will signal whether this is about empowering or replacing people.

  • If you're in IT: You're the bridge between capability and need. You can advocate for human-centered implementation.

  • If you're a frontline worker: You know where the real problems are. Your input is invaluable—if you're given a voice.


The Questions That Matter


Stop asking:

  • "Are you pro-AI or anti-AI?"

  • "How fast can we implement this?"

  • "How many people can we replace?"


Start asking:

  • "What parts of work do people hate that AI could handle?"

  • "How can we use AI to let humans do more meaningful work?"

  • "What would implementation look like if job security wasn't a concern?"

  • "How do we measure success beyond efficiency?"


The Path Forward


Over the next four parts of this series, I'll show you exactly how to implement this third way:

  • Part 2: Building a foundation of trust that makes real innovation possible

  • Part 3: Keeping humans at the center of your AI strategy

  • Part 4: Ensuring every implementation is purposeful, not just possible

  • Part 5: Measuring success and learning from failure


This isn't theoretical. We've done it. Others have done it. You can do it.


The Challenge


I challenge you to reject the false binary. Refuse to be either a Doomer or an Accelerationist. Instead, commit to something harder but more valuable: using AI to make work more human.


Because in the end, that's what technology should do: amplify human potential, not replace it.


The robots work for us. It's time we started acting like it.

 
 
 

©2025 Chris Swecker
