How everyday users and AI experts are experiencing the state of AI right now

AI commentary often falls into two extremes: highly technical forecasts on one end, and lightweight takes on the latest viral demo on the other. But most people live in the space between – using AI daily while struggling to understand why it behaves the way it does, or what is actually changing.
Ari Ramkilowan, head of Machine Learning, and Stef Adonis, head of Marketing at Helm

This piece offers that middle view: a grounded look at the current state of AI, shaped by both technical understanding and everyday experience.

AI has quietly become part of the fabric of everyday life. Many people no longer 'Google', they ask AI. Voice queries feel natural, children use AI for study timetables, and adults use it for writing, admin, recipes, and planning.

Because AI has blended into the background, we rarely notice how often we use it. Yet frustrations are common. In reality, these moments are almost never technical failures; they are communication failures.

AI behaves less like an all-knowing machine and more like a highly capable intern. It is excellent with context and direction, but unpredictable when instructions are vague. The first prompt often determines the quality of everything that follows. What most people perceive as 'AI failures' are actually misalignments between what the user imagined and what the model interpreted from the instructions provided.

Our reliance on tools isn’t new; we depend on calculators and navigation apps every day. AI is simply the next extension of this behaviour, surfacing information instantly instead of making us search for it. Using AI for timetables, recipes, summaries, admin and quick answers is a healthy reliance. The risk isn’t in using AI, but in forgetting to question it.

Over-reliance begins when we outsource not just tasks, but thinking. When we stop asking 'Does this make sense?' or 'Why is this the answer?', we begin offloading judgement itself. Critical thinking and expertise don’t disappear overnight; they erode quietly when speed becomes more important than understanding.

The goal isn’t to resist AI, but to work with it consciously. Healthy reliance accelerates us; over-reliance dulls us.

One of the most interesting social shifts AI has sparked is a rise in default skepticism. When people see an unusual video or image, the first instinct is often: 'That’s probably AI.'

The rapid improvement of AI-generated visuals is being matched by a rise in public skepticism. This doesn’t eliminate risks like misinformation, but it does reshape the environment. Households, group chats and social feeds are full of 'spot the AI' moments. People examine lighting, shadows, expressions, and fingers. Teenagers debate whether clips are deepfakes or just odd angles. Adults discuss whether something looks 'too perfect'.

Ironically, AI may be accelerating critical awareness as well as diminishing it.

Everyday users value convenience, speed and seamless integration into tools they already use – WhatsApp, browsers, file systems, email and voice assistants. They judge AI by friction, by how many steps stand between them and the result.

Experts see what lies beneath – the architecture, data flows, failure modes and design patterns that govern how AI works. They analyse where systems succeed, where they break, and how user behaviour influences outcomes.

These perspectives collide in the middle, where adoption, misunderstanding and opportunity meet.

The clearest trendlines emerging from current lab research and industry behaviour include:

  1. The decade of agents: AI is shifting from answering questions to performing tasks. This is not simply about chat interfaces becoming more capable; it’s about systems that can plan, act, and iterate across multiple steps. While headlines may frame this as a short-term leap, the deeper shift toward agent-based systems will likely unfold gradually and define the next five years rather than the next year.

  2. A more nuanced spectrum of autonomy: The industry is developing clearer gradations between semi-autonomous tools and fully agentic systems. Much of the innovation is happening in the space between – where structured guardrails, human oversight, and multi-step reasoning intersect. Expect more clarity around 'design patterns' for how agents operate safely and effectively.

  3. Bespoke, vertical agents: Small, task-specific agents are increasing in popularity because they offer high upside with relatively contained risk – depending on the use case. When narrowly scoped, these agents can automate meaningful work without the broader failure exposure of general-purpose systems. In many cases, the risk lies less in the technology itself and more in how it is implemented and governed.

  4. Agent orchestration as a new skillset: Software engineering roles are evolving, not disappearing. Increasingly, the work involves coordinating multiple specialised agents rather than writing every function line by line. This resembles the role of a solutions architect: someone who understands the technical landscape deeply enough to design systems, anticipate failure modes, and step in when needed. Orchestration requires expertise, not abstraction away from it.

  5. New AI surfaces beyond chat: The chat window is unlikely to remain the dominant interface. AI is steadily embedding itself inside existing workflows – voice interactions, browser-level assistants, productivity tools, and systems that automatically resurface relevant notes before meetings. The focus is shifting from 'going to AI' to AI quietly operating where work already happens.

  6. Breakthroughs in model efficiency: Progress will not rely solely on building larger models. While scale still matters, architectural innovation is becoming equally important. Expect larger models to become more capable in the cloud, while smaller models become more practical on edge devices such as laptops and smartphones. Rather than reducing reliance on cloud compute, AI is likely to expand on two fronts – deeper investment in large-scale data centres alongside increasing capability at the edge. Efficiency is becoming a core differentiator across both.

  7. More realistic image and video generation, and faster countermeasures: Generative visuals are improving rapidly, lowering the barrier to both creative expression and potential misuse. However, detection systems and public skepticism are advancing too. The trajectory is not one-sided; realism and countermeasures are evolving in parallel.

  8. 'Computer use' experiments: Large language models are beginning to interact directly with interfaces – controlling cursors, navigating applications, and executing multi-step workflows. While still rudimentary and largely demonstrated in controlled environments, the economic implications are significant. If refined, this capability could allow AI to operate within existing digital systems without requiring custom integrations.

These are not merely predictions; they are present realities with clear momentum.

AI is moving from something we 'use' to something that forms part of the infrastructure of daily life. The challenge ahead is not whether AI will become part of our world; it already has. The challenge is how we use it: consciously, critically, and creatively. That’s where the everyday user and the expert finally meet.

9 Mar 2026 10:03


About the author

Ari Ramkilowan is head of Machine Learning at Helm, and Stef Adonis is head of Marketing at Helm.