The current rush towards AI solutions for absolutely everything – business, romance, self-care, etc. – has been remarkable, but it is also indicative of a bubble. Indeed, even Sam Altman, OpenAI’s CEO, has said that investors are “overexcited” by AI.
In my own single-stock investing, I have had to consider carefully whether my investments are “AI-proof” or set to benefit from coming automation and robotics. For instance, I’ve opened a position in Amazon, because I see AI and robotic automation as a tailwind for the company, even though it is not thought to be keeping pace in the AI race with Alphabet (which I also hold) and OpenAI.
Over time, Amazon’s margins on its delivery services will improve, and more and more people will utilise Amazon Web Services (AWS), despite mounting pressure from Alphabet and Microsoft.
To be honest, I see this as a prudent way to think about your investments. You have to ask yourself which companies will survive and thrive over the next 10-20 years, even as the world changes. Business models can break overnight at this pace of change.
There is always the possibility that AI will not be as transformative as people believe. Maybe large language models (LLMs) will fundamentally be unable to replace humans outright, and there will always be a place for human brains.
My own view, however, is that AIs will surpass humans in cognitive abilities, but that we may grow bored with the ease of it all. Our trust in these systems, no matter how sophisticated they become, will have to rise as quickly as the technology improves.
Technology may advance incredibly quickly, but there will likely be an equal push from humanity against this transformation.
Human Pride in Intellect
Humans are proud of their cognitive abilities. With our brains, we have unravelled many of the secrets of the universe and have built a stock of knowledge that is impressive on the surface (at least to us).
Sometimes, even considering the possibility of an alien intelligence that surpasses our own makes people defensive. “Surely nothing can surpass the human mind in ingenuity, logic, and empathy,” we seem to think.
Enter AI and Robotics.
We like to imagine ourselves as the blessed creators, so surely our creation must remain subordinate. Surely it cannot do everything? There seems to be something illogical in the very concept of producing something more intelligent and capable than ourselves.
The truth is that we’ve always done this.
Throughout history, we have created tools that surpass us in function. We built guns that surpass our fists and calculators that surpass our mental arithmetic, and we are now close to creating something that will surpass us in intellect, and potentially everything else.
Intellect, however, is our most precious advantage. The idea that something could surpass us in that domain seems almost unbearable.
The Counter-Movement
If innovation is the accelerator, society supplies the brakes. Every wave of general-purpose tech triggers a reflex: regulation, new norms, and a taste shift back toward the human. Expect the same here.
First, governance. We’ll see tighter duties of care for AI-mediated decisions, audit trails for high-stakes use, provenance standards for media, and liability that climbs the stack—from app to model to data supplier. Not because legislators love red tape, but because trust is the scarcest resource in a world of synthetic everything.
Second, craft and contact reprice. When competence is cheap, contact becomes premium: live teaching, small-batch goods, verified expertise, accountable service. “Human-made” will become both a label and a luxury tier, the way “organic” or “hand-crafted” did—part status, part assurance.
Third, friction by design. Organisations will introduce intentional checkpoints—human review, slower lanes, dual-control approvals—where failure is costly or irreversible (healthcare, finance, education, critical infrastructure). Seatbelts feel like drag until the crash.
Fourth, labour reorganises. The first hit is task-level automation; the rebound is role redesign. Fewer generalists doing routine work, more “editor-in-chief” roles curating, validating, and escalating. Wages polarise around judgement, taste, and responsibility.
Fifth, culture equilibrates. After the sugar rush of infinite content, curation outperforms creation. People pay to know what to trust and where to look; filters, brands, and communities matter more than raw output.
Investor’s lens. Counter-movements create moats:
- Trust infrastructure: identity, watermarking, chain-of-custody for data and media, model audit tooling.
- Regulated picks-and-shovels: compliance automation, risk monitoring, assured data pipelines.
- Human-in-the-loop platforms: workflows that bind cheap inference to accountable outcomes.
- Premium human services: verified experts, boutique education, healthcare with continuity of care.
- Curators and networks: businesses that convert abundance into signal.
Operator’s playbook.
- Build auditability in from day one. If you can’t show your work, you won’t be allowed near the work that matters.
- Design for failure modes (outliers, adversarial inputs, downtime) and rehearse them like fire drills.
- Tie AI to unit economics, not demos: fewer minutes, fewer mistakes, faster cash conversion.
- Put a human at the point of consequence—where a wrong answer bites a real person.
- Make taste a feature. Selection, editing, and pacing are the new differentiators.
None of this contradicts the thesis that AI will outstrip us at many cognitive tasks. It complements it. Technology expands the possible; society decides the acceptable. The winners won’t be the ones who automate the most, but the ones who align the most—linking capability to trust, incentives to safeguards, speed to stewardship.
That’s where the value will settle after the froth: at the intersection of what works, what’s wanted, and what we can live with.