Mo Gawdat Is Right About One Thing: The Danger Is Not AI Alone. It Is Us.

April 12th, 2026 - By Colette 't Hart

Every so often, someone from the heart of big tech says something that cuts through the noise.

Mo Gawdat’s warning about AI is one of those moments.

He is right to say that the greatest near-term danger is not some abstract future machine "waking up" and deciding to destroy humanity. The real danger is much closer than that. It is what human beings, institutions, and power structures choose to do with systems that are already persuasive, scalable, and adaptive, and that are increasingly capable of acting in the world through agents, automation, and robotics.

That is where the real fear should live.

Not in science fiction alone, but in surveillance. In manipulation. In automated warfare. In disinformation. In invisible systems making decisions about people’s lives without transparency, dignity, or recourse. In the quiet normalization of handing over judgment to machines that do not understand what it means to be human.

That is why this conversation matters so much.

At the same time, I am cautious when people describe AI as "a mind" or claim AGI is already here. Language matters, and in the AI world it is often used too loosely, too dramatically, and too confidently. These systems can be astonishing. They can simulate reasoning, language, and coherence in ways that feel deeply uncanny. But simulation is not wisdom. Prediction is not conscience. Output is not moral understanding.

If we blur those distinctions, we do something dangerous: we start trusting systems with authority they have not earned.

And that is where I believe the future of AI will be decided.

Not by who builds the fastest model.
Not by who raises the most money.
Not by who sounds the boldest on stage.

But by who takes responsibility for governance.

Because if AI is becoming more agentic, more embedded, and more powerful in human systems, then the central question is no longer just what AI can do. The question is: under what rules, values, memory structures, and accountability systems should it operate?

That is the question I care about most.

For me, the future of AI is not about replacing people. It is not about worshipping intelligence for its own sake. And it is certainly not about reinforcing the same old hierarchies under a shinier, more futuristic label. The world already has enough systems that judge people too quickly, flatten complexity, reward conformity, and hide their power behind technical language.

We do not need more of that.

We need systems that treat human beings with dignity.

We need infrastructures that can recognize context, support continuity, and remain open to correction. We need intelligence that is governed, not glorified. We need memory that is transparent, not extractive. We need decision systems that can be reviewed, challenged, and understood. And above all, we need to stop pretending that technical advancement automatically leads to moral progress.

It does not.

History has shown us that every powerful tool reflects the values of the people and institutions behind it. AI will be no different. If it is built in environments shaped by speed, domination, surveillance, and profit without accountability, then it will amplify those things. If it is built with care, governance, transparency, and human-centered purpose, then perhaps it can help us build something better.

That is why I do not believe the answer is to fear AI as a monster, nor to celebrate it as a savior.

The answer is to design differently.

To govern differently.

To remember that intelligence without ethics is not progress.

And to insist, especially now, that the future of work and human opportunity must not be handed over to systems that reproduce old biases under the banner of innovation.

This is one of the reasons I am building Idonea.

Not as another layer of automation. Not as another black box. But as a response to the unconscious biases, institutional blind spots, and inherited inequalities that already shape how people are judged, filtered, and valued. If AI is going to help shape the future of work, then it must be accountable to human dignity from the start.

That is the work.

And that is the real test of intelligence, human or otherwise.


About the Author: Colette ’t Hart is the founder of Idonea and a longtime thinker on systems, design, and the human experience of work. Through Idonea, they are exploring how AI can support people more responsibly — not by replacing human judgment, but by helping create fairer, more thoughtful, and more dignified pathways through change. Connect with them on LinkedIn.

About Idonea: Idonea is building AI-native hiring intelligence grounded in trust, capability, and context. We believe the future of work should not be shaped by speed and automation alone, but by systems that support human dignity, clearer judgment, and fairer pathways through change.