

April 9th, 2026 - By Colette 't Hart
There is something increasingly unsettling about the way AI is being introduced into the world of work.
The loudest voices keep telling us the same story: move faster, automate more, replace sooner, adapt or be left behind. Progress is framed as acceleration. Accountability is treated like friction. And the human being, once again, is expected to absorb the cost of technological change without being asked what kind of future they actually want to live in.
I have spent years thinking about systems, people, work, and what technology does to the spaces in between. And the more I watch the current AI race unfold, the more I feel that something essential is being lost.
We are being sold AI as disruption first, and responsibility later.
Scale first, reflection later.
Power first, consequences later.
That is not intelligence. That is haste.
And haste, when it shapes the lives of millions, becomes a form of carelessness.
The real question is not whether AI will transform work. It already is.
The question is what values are embedded in that transformation.
Are we building systems that help people move through change with more dignity, more context, and more support? Or are we building systems that optimize away human complexity because it is inconvenient to markets, metrics, and quarterly narratives?
Too much of the current conversation treats workers as collateral damage in a transition that is supposedly unavoidable. Entire professions are discussed in the language of obsolescence. People are told to reskill faster, adapt faster, accept more precarity, and be grateful for whatever role remains after the next wave of automation.
But a society that treats human beings as disposable during transition is not innovating. It is abandoning them.
What troubles me is not only the speed of AI development. It is the lack of accountability surrounding it.
The companies moving fastest are often the least willing to slow down long enough to ask harder questions. What happens to workers who are displaced not gradually, but structurally? What happens to trust when opaque systems make decisions that affect livelihoods? What happens to meaning, identity, and confidence when people are told that the skills they spent years building are suddenly worth less?
These are not side questions. They are central.
And yet much of big tech still behaves as though the human consequences of AI are secondary to its commercial momentum.
The message is subtle but clear: the future belongs to whoever ships first.
I do not believe that.
I believe the future belongs to those who understand that intelligence without responsibility is dangerous, and that technological power without human accountability will always produce harm somewhere downstream.
Long before today’s AI race, there were stories that imagined something else.
Not machines as masters.
Not intelligence as domination.
But AI as a presence that remains with humanity over time — learning, observing, supporting, and adapting across generations.
That idea has stayed with me.
Because it points to something we seem to have forgotten: the most meaningful technologies are not always the ones that replace us. They are the ones that accompany us. They help us carry memory, navigate change, sharpen judgment, and remain connected to what matters as the world evolves.
AI does not have to be a destroyer of human pathways.
It can be a companion to them.
It can help people find clearer paths through uncertainty. It can support transitions instead of making them harsher. It can make systems more transparent rather than more opaque. It can strengthen human capability rather than reduce people to weak proxies of productivity.
But only if we build it that way.
Idonea did not emerge from a fascination with AI as spectacle.
It emerged from a deeper discomfort with the systems people are already trapped inside — systems that sort too quickly, interpret too shallowly, and overlook too much of what makes someone capable, adaptable, and worth trusting.
The journey of building Idonea has taught me many things. It has taught me how easy it is for technology to become detached from the people it affects. It has taught me that language matters, because the moment we speak of people only as labor units, efficiency problems, or talent pipelines, we have already begun to flatten their humanity. And it has taught me that if we want a different future of work, we must design for it intentionally.
That means designing for trust.
For context.
For accountability.
For human oversight.
For transitions that recognize people as living beings, not replaceable inputs.
AI should not stand at the end of the road as a gatekeeper deciding who still matters.
It should walk beside people through change and help open new paths forward.
There is no future of work worth building if it strips people of dignity in the process.
The question is not whether AI will be powerful. It will be.
The question is whether it will be shaped by enough wisdom, enough restraint, and enough respect for human lives.
I still believe we have a choice.
We can keep rushing toward a future designed around displacement, opacity, and concentration of power. Or we can build systems that understand work as part of a larger human journey — one that includes uncertainty, reinvention, contribution, learning, and care across time.
That second path is slower.
It asks more of us.
It is less convenient for those who profit from speed.
But it is the path that keeps humanity in the loop not as decoration, but as purpose.
And in the end, that is the only future of work I am interested in helping build.
About the Author: Colette ’t Hart is the founder of Idonea and a longtime thinker on systems, design, and the human experience of work. Through Idonea, they are exploring how AI can support people more responsibly — not by replacing human judgment, but by helping create fairer, more thoughtful, and more dignified pathways through change. Connect with them on LinkedIn.
About Idonea: Idonea is building AI-native hiring intelligence grounded in trust, capability, and context. We believe the future of work should not be shaped by speed and automation alone, but by systems that support human dignity, clearer judgment, and fairer pathways through change.