A paralyzed man just fed himself using thought-controlled robotic hands

It’s been decades since Robert “Buz” Chmielewski could properly move his arms. A surfing accident robbed him of that ability as a teenager, leaving him paralyzed from the neck down. But now, more than 30 years after the accident, Chmielewski has cut food and fed himself thanks to a pair of thought-controlled robotic hands.

Back in January 2019, doctors implanted two sets of electrodes in each hemisphere of his brain. The hope was that the electrodes would help Chmielewski control two robotic prosthetic arms designed by researchers from Johns Hopkins Medicine (JHM) and the Johns Hopkins Applied Physics Laboratory (APL). Previous research on mind-controlled prosthetics, which rely on a brain-computer interface, has focused on a single arm controlled by one hemisphere of the brain, which is why the use of electrodes on both sides of the brain is particularly exciting.

As reported by Freethink, a portion of the robotic control is automated with artificial intelligence (AI): signals from the brain-computer interface are combined with AI to control the robotic hands. Two years after Chmielewski first started working with the researchers, he has finally mastered a new skill: cutting food and feeding himself, with each arm completing a different task at the same time.

“Being able to control two robotic arms performing a basic activity of daily living — in this case, cutting a pastry and bringing it to the mouth using signals detected from both sides of the brain via implanted electrodes — is a clear step forward to achieve more complex task control directly fed from the brain,” said Pablo Celnik, director of physical medicine and rehabilitation at Johns Hopkins.

For Chmielewski and the researchers, a quadriplegic man’s ability to cut food and feed himself represents a massive step forward for mind-controlled prosthetics. Next, the researchers want to add sensory feedback, which would allow the user to “feel” whether they are completing a task correctly, rather than simply observing it. Should they achieve this, we’ll be sure to give you an update!
