
“The true intelligence of the machines will be built by you and me.”

  • Mo Gawdat

This quote from Mo Gawdat sets the stage for one of the most important discussions of our time: Who is responsible for AI?

Gawdat, former Chief Business Officer at Google X, has made it his mission to advocate for ethical AI. But as we listen to his words, an important contradiction emerges—he argues that AI is just a tool, yet also insists we must raise it like a child. Can something be “just a tool” if it requires guidance, ethical training, and oversight?

Who is Mo Gawdat?

Mo Gawdat isn’t your typical AI theorist. He comes from a business and innovation background, having spent over a decade at Google, where he worked on some of the world’s most ambitious tech projects. Before that, he held leadership roles at IBM, NCR, and Microsoft. His expertise? Scaling technology and bringing futuristic ideas to life.

But Gawdat’s journey into AI ethics didn’t start in the lab—it started with loss. In 2014, he lost his son, Ali, during what should have been a routine medical procedure. This tragedy led him to rethink everything, resulting in his book “Solve for Happy,” a blueprint for engineering happiness through logic and understanding.

His AI advocacy took center stage in 2021 with the release of “Scary Smart,” in which he outlines both the promise and the dangers of AI, warning that we must take responsibility now, before AI’s evolution slips beyond our control.

Gawdat also runs the Slo Mo podcast, where he engages in deep conversations about happiness, purpose, and the future of technology. He’s not just a tech exec—he’s someone who thinks deeply about what it means to be human in an age of machines.

The Core of His Argument

Gawdat’s vision for AI is a mix of optimism and urgency:

  • AI is advancing faster than we expected—humanity must act now.
  • AI can be a force for good if we guide it wisely.
  • We need to “raise” AI like a child, ensuring it learns human values.

His belief is clear: AI is not inherently good or evil—it becomes what we teach it.

The Counterarguments—Who Disagrees and Why?

Not everyone buys into Gawdat’s perspective. The AI ethics debate is vast, and several prominent voices have raised concerns that challenge his views. Let’s break down some of the strongest counterpoints.

  • Is Mo too optimistic? Critics argue that AI’s power isn’t really in the hands of humanity—it’s in the hands of corporations and governments. If AI governance is dictated by a few major players, do individuals truly have influence?
    • 🔹 This perspective is widely discussed in AI governance, with concerns that big tech monopolies will shape AI for profit, not ethical advancement.
  • Can we really “raise” AI like a child? AI doesn’t experience emotions, intuition, or consciousness. It operates on patterns and training data—it doesn’t learn like a human.
    • 🔹 This aligns with researchers like Timnit Gebru, who emphasize that AI models are not conscious learners but pattern-matching systems that reflect and amplify their training data.
  • AI is not just helpful—it’s disruptive. Job losses, security risks, misinformation crises, and deepfakes are emerging at a staggering rate.
    • 🔹 Tech ethicists like Shannon Vallor discuss how AI can undermine human agency, reshaping societal norms faster than we can regulate them.
  • What about existential risks? What happens when AI surpasses human control? Experts like Geoffrey Hinton have warned that AI might develop decision-making capabilities beyond human comprehension.
    • 🔹 This existential risk argument is central to discussions from Nick Bostrom and Max Tegmark, who emphasize AI alignment as a critical challenge.

Where Do We Stand?

At CAIA Center, we believe this debate isn’t about picking a side—it’s about acknowledging the complexity of AI and its trajectory.

Gawdat is right: We are responsible for AI’s evolution.

But the question remains: Are we truly in control? Or are we just spectators watching the inevitable unfold?

The “Tool vs. Child” Contradiction

If AI is “just a tool,” why does it need nurture and ethical training? You don’t raise a hammer or discipline a calculator. But AI? AI learns, evolves, and reflects the environment it’s shaped in.

This contradiction is crucial. If AI is something we must guide, teach, and correct—then it is not just a tool.

Final Thoughts: The Debate Is Just Beginning

This isn’t just Mo Gawdat’s question—it’s ours, too. The future of AI isn’t written yet, and every discussion, every decision we make today shapes what AI will become tomorrow.

💡 Think with us. Feel with us. Or challenge us. But let’s have the conversation.

📢 Join the discussion & check out our deeper stance on AI’s evolving role: 👉 [Read More Here]
