
AI Goes Physical: Moving past chatbots to the convergence of AI and robotics


For the last few years, public attention around artificial intelligence has been fixed on text: chatbots that answer questions, generate essays, and write code. That phase served its purpose. It proved that large language models could handle complex reasoning tasks at a useful level. But the next stage of AI development is not about better text generation. It is about giving AI a body.

The convergence of AI and robotics is not new as a research agenda. What is new is the pace at which it has become practical. Until recently, the robotics side and the AI side evolved in separate tracks. Robotics engineers spent decades solving problems of actuation, sensing, and mechanical design. AI researchers, especially in the deep learning era, focused on perception and language. These two communities spoke different technical languages and published in different journals. That separation is closing fast.

The shift has a specific technical driver. Foundation models trained on massive datasets have developed a surprising capacity: spatial reasoning. Multimodal models can now interpret visual scenes, understand object relationships, and translate high-level instructions into physical action sequences. This was not reliably possible three years ago. A robot that can look at a cluttered table and figure out how to pick up a specific object without explicit programming for every scenario is a qualitatively different machine from the ones we had before.
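As a toy illustration of that instruction-to-action translation, here is a minimal Python sketch. Every name in it is hypothetical, and the string matching stands in for what would actually be a learned vision-language-action model; this shows only the shape of the pipeline, not a real implementation:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    x: float  # object position in the robot's frame, metres
    y: float

def plan_pick(instruction: str, scene: list[Detection]) -> list[str]:
    """Toy planner: match the requested object against detections,
    then emit a move/grasp/lift action sequence for the first match."""
    target = next((d for d in scene if d.label in instruction), None)
    if target is None:
        return ["report: object not found"]
    return [
        f"move_to({target.x:.2f}, {target.y:.2f})",
        f"grasp({target.label})",
        "lift(0.10)",
    ]

scene = [Detection("mug", 0.42, 0.15), Detection("stapler", 0.30, -0.05)]
print(plan_pick("pick up the mug", scene))
```

The interesting part in a real system is everything this sketch fakes: the detections come from a perception model, and the mapping from instruction to target is learned rather than matched on substrings.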


Consider what this means for manufacturing. Traditional industrial robots are precise but rigid. They execute pre-programmed motions in controlled environments. A welding arm on an automotive line performs the same arc thousands of times. Introduce an AI layer with real-time perception, and that same arm can now adapt to variation in parts, detect defects mid-process, and adjust its trajectory without human intervention. 
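The trajectory-adjustment idea can be sketched in a few lines. This is an assumed, simplified scheme (the function name, the 2D waypoints, and the clamp limit are all illustrative), but it captures the basic pattern: shift a pre-programmed path by a perceived part offset, bounded by a safety limit:

```python
def adjust_trajectory(nominal, measured_offset, max_correction=0.005):
    """Shift each nominal (x, y) waypoint by the perceived part
    offset, clamping the correction per axis (all units metres)."""
    def clamp(v):
        return max(-max_correction, min(max_correction, v))
    dx, dy = (clamp(v) for v in measured_offset)
    return [(x + dx, y + dy) for x, y in nominal]
```

The clamp is the point: perception is noisy, so a bounded correction lets the arm adapt to part variation without ever deviating far from the validated nominal path.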

The warehouse and logistics sector is even further along. Large fulfilment operators now run hundreds of thousands of robots across their networks. These are not simple conveyor systems. They navigate dynamic environments, coordinate with each other, and increasingly handle unstructured tasks like picking irregularly shaped items from bins. The AI driving these systems learns from operational data continuously, improving pick rates and reducing error.

Healthcare presents a different set of problems. Surgical robots have been in operating rooms for over two decades, but they function as sophisticated tools controlled by a surgeon's hands. The next generation, informed by AI trained on millions of surgical procedures, will offer real-time guidance, anomaly detection, and semi-autonomous execution of routine steps. Early clinical trials are already underway.


Agriculture is less discussed but equally affected. Autonomous tractors now use computer vision to distinguish crops from weeds and apply herbicide only where needed, reducing chemical use by up to 80% in field trials. These machines operate on uneven terrain, in variable light, under tree canopy. The AI has to be robust in ways that a chatbot never needs to be.
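A common first stage in such systems is a vegetation index computed per pixel; the excess-green index (ExG = 2G − R − B) is a standard example. The sketch below is only that first stage, with an assumed threshold; actual crop-versus-weed classification would sit on top of it as a learned model:

```python
def excess_green(pixel):
    """Excess-green index ExG = 2G - R - B for one (R, G, B) pixel."""
    r, g, b = pixel
    return 2 * g - r - b

def spray_mask(image, threshold=40):
    """Mark cells whose excess-green index exceeds the threshold as
    vegetation candidates for targeted treatment. The threshold is
    illustrative; real systems calibrate it per camera and lighting."""
    return [[excess_green(p) > threshold for p in row] for row in image]
```

The robustness problem mentioned above lives precisely here: a fixed threshold that works in morning light fails under canopy shade, which is why fielded systems lean on learned models and continual recalibration.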

The technical challenges ahead are real and worth naming plainly. Power consumption remains a constraint for mobile robots. Latency in decision-making is critical when a robot arm is moving near a human worker. Sim-to-real transfer, where a model trained in simulation must perform in the physical world, still produces failures that would be unacceptable in safety-critical applications. And the question of liability when an AI-controlled machine causes harm has no settled legal framework in most jurisdictions.
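The latency constraint can be made concrete with a small sketch of a guarded control cycle. The 100 Hz budget and every name here are assumptions for illustration, not values from any real controller; the pattern shown is simply "if the decision arrives too late, stop rather than act on stale data":

```python
import time

CONTROL_PERIOD_S = 0.010  # assumed 100 Hz safety budget; illustrative only

def run_cycle(perceive, decide, act, safe_stop):
    """One control cycle: if perception plus decision overrun the
    period, command a safe stop rather than act on a stale command."""
    start = time.monotonic()
    command = decide(perceive())
    if time.monotonic() - start > CONTROL_PERIOD_S:
        safe_stop()
        return "stopped"
    act(command)
    return "acted"
```

This is why model size matters differently in robotics than in chat: a response that takes an extra half-second is a nuisance for a chatbot and a safety event for an arm moving near a person.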

But the trajectory is clear. AI is leaving the screen. The companies, research labs, and governments that understand this shift will build the next generation of physical infrastructure. Those that continue to think of AI as primarily a software product, a chatbot, a content tool, will find themselves working with yesterday's assumptions.


The convergence of AI and robotics is not a future possibility. It is the present engineering problem. And it requires people who understand both sides of the equation.

"We spent the last decade teaching machines to read and write. The harder problem was always teaching them to move, touch, and interact with the physical world. That problem is now being solved, and it will change every industry that depends on physical labour, precision manufacturing, or human-machine collaboration. This is where serious engineering begins."

Santhosh Subramani



Prof. Santhosh Subramani is an IEEE Member.

