
Paul Golding, Vice President, Edge AI and Robotics at Analog Devices, shares four predictions for 2026:
In 2026, artificial intelligence will step out of our screens and into the world
The next frontier of AI will be Physical Intelligence. The scaling laws that powered the success of large language and vision models will continue through 2026 but will extend into models that learn from vibration, sound, magnetics, and motion (stubborn attributes of the physical world). I predict these physical reasoning models will migrate from the datacenter to the edge, powering a new type of fluid autonomy that thinks and acts locally, sensitive to local physics and without recourse to centralized servers. Such models will learn dynamically from novel situations after exposure to only a few examples. Think of a mobile factory robot that can reason for itself and determine what to do when faced with an unexpected obstacle. We can expect to see an increase in hybrid “world models” that blend mathematical and physical reasoning with data-driven sensor-fused dynamics, and systems that not only describe the world but participate in it and learn, as Richard Sutton says, from their own “experience.”
Audio will become the dominant AI interface in consumer electronics
Audio is about to become a reasoning channel, and we’ll see this come to life in a big way in 2026. With spatial sound, sensor fusion, and on-device reasoning converging, consumer electronics will evolve into contextual companions. Augmented-reality glasses, hearables like earbuds, and in-vehicle sound systems will quietly interpret our environment, inferring intent, emotion, and presence. These technological leaps will lead to significantly better noise cancellation in our hearable devices, improved battery life, and new form factors that haven’t yet been imagined. The always-in-ear hearable experience, already on the rise among Gen Z, will become increasingly prevalent due to the “super-human” hearing of context-aware AI.
Agentic AI will give rise to physically intelligent models, trained via physically accurate simulation environments
The next evolution in Edge AI will be agentic. In the future, agentic systems will decide, not just predict, and act autonomously in the world via physically grounded interventions rehearsed in simulated environments. To support this, 2026 will see the mainstream arrival of digital twins to imbue large models with physical-system awareness. Imagine AI models learning to predict forces instead of text, but within the safety of a scalable simulated environment. Physically intelligent foundation models will merge reasoning with sensor intelligence to orchestrate machines, simulations, and data. Today, many factories have the technology to do predictive maintenance, but you can imagine a future where an agent on the factory floor acts on that prediction. It autonomously reroutes the production line to a healthier machine, adjusts the strained machine to 70% to extend its life, and coordinates with supply chain agents to adjust inventory—all without human intervention.
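The predictive-maintenance scenario above can be sketched as a small decision loop. This is a hypothetical toy model, not any vendor's implementation: the machine names, health scores, and thresholds are invented for illustration, and a real agent would act on streaming sensor data rather than static values.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    health: float        # 0.0 (failing) to 1.0 (healthy)
    duty_cycle: float = 1.0

def act_on_prediction(machines, queue, health_threshold=0.5, derate=0.7):
    """Toy agent step: reroute work away from a strained machine,
    derate it to extend its life, and flag a supply-chain adjustment."""
    strained = [m for m in machines if m.health < health_threshold]
    healthy = [m for m in machines if m.health >= health_threshold]
    actions = []
    for m in strained:
        m.duty_cycle = derate  # e.g. run the strained machine at 70%
        actions.append(f"derate {m.name} to {int(derate * 100)}%")
    if healthy and strained:
        target = max(healthy, key=lambda m: m.health)
        actions.append(f"reroute {len(queue)} jobs to {target.name}")
        actions.append("notify supply-chain agent: expect reduced throughput")
    return actions

machines = [Machine("press_A", health=0.35), Machine("press_B", health=0.92)]
print(act_on_prediction(machines, queue=["job1", "job2", "job3"]))
```

The point of the sketch is the closed loop: the health prediction triggers concrete interventions (derating, rerouting, coordination) with no human in the decision path.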
AI will have its agentic “inception” moment with the emergence of micro-intelligence
In 2026, a new class of tiny recursive models will rise—compact systems with remarkable depth of reasoning across a narrow domain but able to run at the edge. Think of them as micro-intelligences rather than just small models: fluid, adaptive, and task-specific, yet still capable of abstraction and reflection. They will occupy the middle ground between rigid programmed AI seen at the edge today and sprawling foundation models like GPT-5, powering specialized reasoning on chips, in sensors, and inside the smallest of systems, acting as orchestrators of the specialized agents emerging today. These new kinds of models will arise from the race to build fluidly intelligent systems, as encouraged by the ARC Prize and similar initiatives. I predict the rise of new types of AI benchmarks designed to measure and encourage a new kind of engineering intelligence—multiagent micro-intelligences that can collaborate to solve complex engineering problems, moving from the world of abstract mathematical challenges (like Math Olympiads) to practical problem-solving systems.
Massimiliano “Max” Versace, Vice President, Emergent AI at Analog Devices, shares the following predictions for 2026:
Decentralized AI will appear in new-generation humanoid robotics by the end of 2026
By late 2026, decentralized AI architectures merging sensing with neuromorphic and in-memory compute will transition from pilot programs to early commercial deployment. We’ll see humanoid robotics systems getting a bit closer to biological systems, where local circuits in sensory organs and spinal pathways handle reflexes and balance, allowing smoother, more adaptive movement, drastically reduced power consumption, and freeing the central brain to “think and plan.”
These technological leaps will start with intelligent sensors that embed novel AI compute, such as neuromorphic and in-memory-compute architectures, directly within the sensor. The combination of decentralized AI and novel AI compute architecture will dramatically reduce latency and power consumption, allowing always-on AI at the edge and freeing larger processors to focus on higher-level reasoning, planning, and learning, rather than micromanaging continuous sensorimotor control loops. By enabling real-time, low-latency AI processing at the edge, robots will become more efficient, responsive, and capable of near-biological sensory-motor skills. This shift will power a step change in their ability to engage complex, dynamic environments with fluid, reliable coordination and pave the way for practical and pervasive humanoid robotics.
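The division of labor described above, where local circuits handle reflexes while the central brain plans, can be illustrated with a minimal sketch. All names and thresholds here are invented for illustration; a real sensorimotor loop would run on dedicated low-power hardware, not Python.

```python
def local_reflex(sample, setpoint=0.0, gain=2.0, limit=0.3):
    """Fast local loop: correct small disturbances at the sensor,
    without involving the central planner (a clamped P-controller)."""
    error = setpoint - sample
    return max(-limit, min(limit, gain * error))

def run_step(sample, escalation_threshold=0.5):
    """Handle a reading locally; escalate only large anomalies upstream."""
    if abs(sample) > escalation_threshold:
        return ("escalate", sample)          # central brain must re-plan
    return ("reflex", local_reflex(sample))  # handled entirely at the edge

print(run_step(0.1))   # small wobble: corrected locally
print(run_step(0.9))   # large disturbance: sent to the planner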
In 2026, we’ll see the rise of analog AI compute
Historically sidelined due to scalability and precision limitations, analog compute is reemerging in 2026 as digital architectures face energy, latency, and memory bottlenecks with no solution in sight. This is especially critical in edge environments where real-time responsiveness and power efficiency are a must.
Analog AI compute uses the physics of the sensing and computing substrate to perform computation, transforming energy directly into AI inference. Unlike conventional digital processors, which separate sensing from computation, analog AI collapses these layers into a unified framework where intelligence begins at the sensor itself.
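A rough intuition for "physics performing the computation": in an analog crossbar array, weights are stored as conductances and inputs applied as voltages, so Ohm's law produces a current I = G·V in each cell and Kirchhoff's current law sums each column, yielding a matrix-vector multiply in the electrical behavior itself. The snippet below is only a numerical model of that idea, not real analog hardware, and the values are illustrative.

```python
def crossbar_mac(G, V):
    """Column currents of a conductance matrix G (siemens) driven by
    input voltages V (volts): each column wire sums I = g * v per cell,
    so the array computes G^T @ V in the physics of the circuit."""
    return [sum(g * v for g, v in zip(col, V)) for col in zip(*G)]

G = [[0.5, 1.0],   # one row per input voltage, one column per output
     [2.0, 0.0]]
V = [1.0, 0.5]
print(crossbar_mac(G, V))  # each entry is one column's summed current
```

A digital processor would fetch each weight from memory, multiply, and accumulate; here the multiply-accumulate happens simultaneously across the whole array as charge flows, which is where the latency and energy savings come from.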
By the end of 2026 we’ll see initial deployments and adoption of this technology, particularly in robotics, wearables, and autonomous applications, where analog AI enables real-time responsiveness, smoother interactions, longer battery life, and more natural behavior in the devices it powers.
Taken together, these predictions point to a clear direction for 2026: AI will become more physical, decentralized, and embedded, learning directly from the world, reasoning closer to where data is generated, and acting with greater autonomy in dynamic environments. Whether it’s “world models” that blend physics with data-driven dynamics, context-aware audio interfaces in everyday devices, or humanoid robotics enabled by low-latency edge intelligence, the common thread is a move toward systems that don’t just interpret reality, but participate in it.
From Paul Golding’s view of Physical Intelligence and micro-intelligences orchestrating specialized agents, to Max Versace’s outlook on decentralized compute and the rise of analog AI, the message is consistent: future AI breakthroughs will be both constrained and enabled by how efficiently we sense, compute, and respond at the edge. As these capabilities mature, 2026 is likely to mark an inflection point where intelligence becomes more natural, more power-efficient, and more seamlessly integrated into the environments where we live and work.
