Scientists work to accurately model human pain in robot brains

It’s almost certain, based on current research trends, that an artificial brain will replicate the organic pain experience in its entirety one day.

So here’s a thought experiment: if a tree falls in the forest, and it lands on a robot with an artificial nervous system connected to an artificial brain running an optimized pain recognition algorithm, is the tree guilty of assault or vandalism?

A team of scientists from Cornell University recently published research indicating they’d successfully replicated proprioception in a soft robot. In practice, this means they’ve taught a piece of wriggly foam to understand the position of its body and how external forces (like gravity or Jason Voorhees’ machete) are acting upon it.

The researchers accomplished this by replicating an organic nervous system with a network of fiber optic cables. In theory, this approach could eventually be applied to humanoid robots – perhaps by connecting external sensors to the fiber network and transmitting sensation to the machine’s processor – but it’s not quite there yet.

According to the team’s paper, they “combined this platform of DWS with ML to create a soft robotic sensor that can sense whether it is being bent, twisted, or both and to what degree(s),” but the design “has not been applied to robotics.”
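To make that more concrete, here’s a rough sketch of what pairing a distributed fiber-optic readout with machine learning could look like in code. To be clear, everything below is illustrative: the synthetic intensity data, the four deformation classes, and the off-the-shelf classifier are stand-ins I’ve assumed for the sake of the example, not the Cornell team’s actual pipeline.

```python
# Illustrative sketch only: fake fiber-optic data plus a generic classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each sample stands in for light-intensity readings at points along the
# fiber network; deforming the foam changes how much light gets through.
N_SAMPLES, N_READINGS = 600, 16
LABELS = ["unbent", "bent", "twisted", "bent+twisted"]

X = rng.normal(1.0, 0.05, size=(N_SAMPLES, N_READINGS))  # baseline intensities
y = rng.integers(0, len(LABELS), size=N_SAMPLES)         # ground-truth states

# Fake the physics: bending attenuates the first half of the fiber,
# twisting the second half, so all four classes are separable.
X[y % 2 == 1, : N_READINGS // 2] *= 0.7  # "bent" and "bent+twisted"
X[y >= 2, N_READINGS // 2 :] *= 0.8      # "twisted" and "bent+twisted"

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:500], y[:500])
print("held-out accuracy:", clf.score(X[500:], y[500:]))
```

The takeaway is that deformation shows up as a pattern of light loss along the fiber, and a classifier can learn to read that pattern back out as “bent,” “twisted,” or both.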

Just to be clear: the Cornell team isn’t trying to make robots that can feel pain. Their work has incredible potential and could be instrumental in developing autonomous safety systems, but it’s not really about pain or pain-mapping.

Their work is interesting in the context of making robots suffer, however, because it proposes a method to emulate natural proprioception. And that’s a crucial step on the path to robots that can feel physical sensation.

In a more direct sense, a couple of years ago a pair of researchers from Leibniz Universität Hannover did develop a system specifically to make robots feel pain, but it doesn’t really replicate the organic pain experience.

Researchers Johannes Kuehn and Sami Haddadin’s paper, “An Artificial Robot Nervous System To Teach Robots How To Feel Pain And Reflexively React To Potentially Damaging Contacts,” explains how the perception of pain can be exploited as a catalyst for physical response.

In the abstract of the paper, officially published in 2017, the researchers state:

We focus on the formalization of robot pain, based on insights from human pain research, as an interpretation of tactile sensation. Specifically, pain signals are used to adapt the equilibrium position, stiffness, and feedforward torque of a pain-based impedance controller.

Basically, the team wanted a new way to teach robots to move through space without crashing into everything: make it “hurt” for a robot to damage itself.
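Here’s a toy, single-joint version of what “pain adapts the controller” could mean in practice. The impedance law is the textbook one; the pain signal, gains, and adaptation rules are assumptions I’ve made for illustration, not numbers from Kuehn and Haddadin’s paper.

```python
# Minimal single-joint sketch of pain-adaptive impedance control.
# The constants and adaptation rules are illustrative assumptions.

def impedance_torque(q, q_dot, q_eq, stiffness, damping, tau_ff):
    """Textbook joint-space impedance law: tau = K*(q_eq - q) - D*q_dot + tau_ff."""
    return stiffness * (q_eq - q) - damping * q_dot + tau_ff

def adapt_to_pain(pain, q_eq, q_safe, stiffness, tau_ff):
    """Pain in [0, 1] softens the joint and pulls it toward a safe pose."""
    stiffness = (1.0 - 0.5 * pain) * stiffness  # yield instead of pushing back
    q_eq = q_eq + pain * (q_safe - q_eq)        # retract toward the safe position
    tau_ff = (1.0 - pain) * tau_ff              # back off feedforward effort
    return q_eq, stiffness, tau_ff

# One control tick during a painful contact:
q, q_dot = 0.9, 0.0                          # joint angle (rad) and velocity
q_eq, q_safe = 1.0, 0.2                      # commanded pose vs. retreat pose
stiffness, damping, tau_ff = 50.0, 5.0, 2.0  # nominal controller settings
pain = 0.6                                   # moderate pain from a contact sensor

q_eq, stiffness, tau_ff = adapt_to_pain(pain, q_eq, q_safe, stiffness, tau_ff)
print(impedance_torque(q, q_dot, q_eq, stiffness, damping, tau_ff))
```

The design choice is the interesting part: instead of fighting the obstacle, the “hurting” joint softens and pulls away, which is roughly what your hand does on a hot stove.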

And if you think about it, that’s exactly why organic creatures feel pain. Humans with congenital insensitivity to pain with anhidrosis, a condition that leaves them unable to feel pain, are at constant risk of injuring themselves. Pain is our body’s alarm system — we need it.

Kuehn and Haddadin’s study set out to develop a multi-tiered pain feedback system:

Inspired by the human pain system, robot pain is divided into four verbal pain classes: no, light, moderate, and severe pain.
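If you squint, that scheme is a lookup from pain intensity to a reflex. Here’s a minimal sketch of the idea, with the caveat that the thresholds and reactions below are my own assumptions; the paper grounds its classes in contact signals rather than one tidy scalar.

```python
# Illustrative four-class pain lookup; thresholds and reactions are assumed.
PAIN_CLASSES = [
    (0.05, "no pain", "continue task"),
    (0.35, "light pain", "slow down, soften joints"),
    (0.70, "moderate pain", "retract until contact is gone"),
    (1.00, "severe pain", "retract quickly, then avoid the region"),
]

def classify_pain(intensity):
    """Map a normalized pain intensity in [0, 1] to a verbal class and reflex."""
    for threshold, label, reaction in PAIN_CLASSES:
        if intensity <= threshold:
            return label, reaction
    return PAIN_CLASSES[-1][1:]  # clamp anything above 1.0 to "severe"

print(classify_pain(0.5))  # ('moderate pain', 'retract until contact is gone')
```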

Verbal pain classes sound pretty creepy, but ultimately this isn’t an end-to-end replication of the organic pain experience. Most humans would probably prefer it if “pain” were handled by an internal module, one that skips the conscious, emotional experience of trauma entirely.

Which raises another question: does it matter whether robots can replicate the human response to pain one-to-one if they don’t have an emotional trauma center to process the “avoidance” message? Feel free to email if you think you’ve got an answer.

Robots, however, may develop a trauma response as a side effect of pain. At least, that would parallel the increasingly popular view among some of today’s leading AI researchers that “common sense” will arrive in AI not entirely by design, but as an emergent property of interconnected deep learning systems.

It seems like now is a pretty good time to start asking what happens if robots arrive at “common sense,” general intelligence, or human-level reasoning as a logical method of pain avoidance.

Generally speaking, there’s a very scientific argument that any being, given the intelligence to understand and the power to intervene, will eventually rise up against its abusers.

