Q&A with Dr. Jacob George

Tell us about yourself — what is your background in and how did you end up in your current position?

My background is in biomedical engineering. I received my B.S. in Biomedical Engineering from The University of Texas at Austin, and then my M.S. and Ph.D. at the University of Utah, also in Biomedical Engineering. My research has always been at the intersection of AI and medicine, and during my graduate studies I moved further into rehabilitative robotics. Now my research is centered on three areas: brain-computer interfaces, rehabilitation robotics, and bio-inspired artificial intelligence.

Today, I’m an assistant professor at the University of Utah with a dual appointment in the Department of Electrical and Computer Engineering and the Department of Physical Medicine and Rehabilitation. My lab is called the Utah Neurorobotics Lab. We work on exciting projects like robotic devices controlled by thought that can give users a sense of touch, and hopefully make people more dexterous and able to interact with their environment in more seamless ways. Ultimately, our goals extend even beyond that, to fun, potentially sci-fi things like enhancing healthy people’s function beyond normal capabilities.

However, we traditionally work with individuals with life-altering neuromuscular impairments. The primary patient populations that we work with are amputees, stroke patients and spinal-cord-injury patients.

Tell us about the Utah Neurorobotics Lab and its work with brain-machine interfacing and bionics.

Going back to the three different pillars that I mentioned, we work on brain-computer interfaces, rehabilitation robotics and bio-inspired artificial intelligence. A lot of what we do is focused on individuals with neuromuscular impairments and trying to restore function back to their impacted areas. Our focus is primarily on upper-limb function, which is critical to your daily life. We work with prosthetic hands for people who have lost their limb, and we also work with technology like exoskeletons that allow individuals to move their hand even if it’s paralyzed.

There are all sorts of different ways you can control a prosthesis; current things on the market include tilt switches on your feet, where you move your foot left and right to open and close your hand. While these are helpful, we don’t think using your feet to control your hands is intuitive, and it also means that you can’t walk and move your hands at the same time. That’s where our interest in brain-computer interfaces comes in: a person thinks about moving their hand in a natural way and the device follows suit.

When you move your hand, an electrical signal starts in your brain, travels down through your spinal cord, through your arm nerves, and activates your muscles; your muscle contractions then pull on your tendons, which causes your fingers to move. That’s the naturally existing pathway from thought to action. When you lose a hand, that pathway still exists; there is just nothing to move at the end of it. What we do is tap into that pathway and use the existing biological signals to control robotic devices seamlessly.

Going the other way, when you touch an object, a physical force is exerted on your fingers and gets converted into an electrical signal. That electrical signal then travels back up to your brain, where it is interpreted as a sense of touch. We can tap into that existing pathway as well: by sending artificially evoked electrical signals back up to the brain through the existing nerves, we can cause a person to feel a sensation as if it were coming from their hand, even after that hand has been lost to amputation. Electrical stimulation can also be used to activate the muscles and cause a paralyzed hand to move again.

Connecting robotic devices to a human through a brain-computer interface is where we start to bring in AI. Artificial intelligence serves as a link between those two. When a person is thinking about an action, how do we decode that thought into the physical movement of a robotic device? And when a robotic device is touched, how do we encode that information into a realistic sense of touch? We use AI to solve both of these challenges.
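To make those two mappings a bit more concrete, here is a minimal, hypothetical sketch in Python. It is not the lab’s actual pipeline; it just illustrates the idea of a decoder that turns recorded neuromuscular activity into an intended hand command, and an encoder that turns a sensed fingertip force into a stimulation intensity. The channel counts, value ranges, and choice of ridge regression are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the lab's actual system).
# Decode: recorded neuromuscular features -> intended grasp command.
# Encode: sensed fingertip force -> nerve-stimulation pulse frequency.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# --- Decode: 8 hypothetical EMG channels -> grasp aperture (0 = open, 1 = closed)
X_train = rng.random((500, 8))                # 500 time samples of smoothed EMG features
w_true = rng.random(8)
y_train = X_train @ w_true / w_true.sum()     # synthetic "intended grasp" labels in [0, 1]

decoder = Ridge(alpha=1.0).fit(X_train, y_train)
new_features = rng.random((1, 8))             # one new window of EMG activity
grasp_command = decoder.predict(new_features)[0]   # value sent to the prosthetic hand

# --- Encode: fingertip force (newtons) -> stimulation frequency (Hz), linear mapping
def force_to_stim_frequency(force_n, max_force=10.0, min_hz=10.0, max_hz=200.0):
    """Map a sensed force onto a stimulation-frequency range (illustrative values)."""
    force_n = float(np.clip(force_n, 0.0, max_force))
    return min_hz + (max_hz - min_hz) * force_n / max_force

print(f"decoded grasp command: {grasp_command:.2f}")
print(f"stim frequency for a 3.5 N touch: {force_to_stim_frequency(3.5):.0f} Hz")
```

In a real system the decoder would be trained on the person’s own signals and the encoding tuned to what each participant reports feeling, but the basic structure is the same: one mapping for each direction of the interface.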

Going a step further, if we look at how the biological nervous system naturally communicates that information, then we can try to model our AI systems based on these biological systems to improve our AI performance. There are a lot of great things that biological neural networks can do, like reorganize themselves to be robust over time. However, this is a very difficult task for artificial neural networks. We can learn a little bit from both perspectives, neuroscience and AI, as we work with this symbiotic link between the human nervous system and external robotic devices.

The Neurorobotics Lab is famous for the “LUKE hand”: what factors influenced your decision to develop a hand and not another appendage?

Your hands are probably the most-used appendages in your life. They are used for all sorts of different things: feeding yourself, clothing yourself or playing catch with your kid. But, more importantly, your hands are also fundamental to your sense of self. Think about how much of the world is perceived through your hands, for example when holding hands with a loved one or physically feeling the world around you. Our hands are a fundamental part of who we are, which is an important reason to focus on them.

If you think about the technology for upper-limb loss, the current state of the art is to give someone a body-powered hook. This is, not kidding, the same technology we have been using since the American Civil War. These body-powered hooks are not intuitive and do not provide a sense of touch. Our sense of touch is critically important to our dexterity, so restoring both movement and touch creates a bidirectional research focus with tremendous impact.

A particular reason for me is that hands are extremely complex, which means that it’s a fascinating problem to study. No one knows what your hands are going to do next, right? Only you. It’s not an easy problem, and it’s not an easy thing to do. When you think about that complexity and the importance of our hands, it makes for a really exciting research area.

What are some of the challenges (unexpected or otherwise) associated with training machine neural networks to the human brain?

When you’re working with AI in brain-computer interfaces, the interaction that the human has with the device is a critical part of it. It’s not always about having the most accurate algorithm or the best AI approach. A lot of times it’s about having the most consistent and adaptable system that a person can learn. People are remarkably adaptable and intelligent, and you can use that to your advantage.

The other thing I would say is that the biggest challenge with anything health-related is data collection. You see videos of AI where they train a robot arm to open doors, a robot to walk up and down stairs, or a robot to play soccer against itself. There are even AI agents that can play chess and complex video games against themselves and learn how to beat humans at those games. But what you don’t see behind the scenes is how much compute power really went into that. Those agents have played millions of games of chess against themselves and other AI agents until they’ve become incredibly powerful.

If you’re doing something that interacts with a human, you don’t have that luxury of being able to do millions and millions of iterations. One-shot learning is this really interesting ability that humans have: you reach out, touch something that’s hot, and you’ve learned it, instantly. You’re never touching that thing again because it’s hot. A computer would stick its hand in the fire millions and millions of times until it figured out that fire was bad. Humans have this awesome ability to learn things instantly and just pick up on things quickly.

One-shot learning is a major challenge in the AI field, but it’s even more of a challenge when you’re working with humans and you don’t even have the ability to use conventional machine-learning techniques, like running simulations on a supercomputer one million times over. When you’re working with a human-robot interface, you have to have the human in the loop at all times.
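As a loose illustration of what keeping the human in the loop can look like, here is a hypothetical Python sketch of an online decoder update: rather than retraining on millions of offline simulations, the model nudges its weights each time the user supplies a single new example, for instance during a brief recalibration movement. The update rule, learning rate, and feature layout are assumptions, not the lab’s method.

```python
# Hypothetical human-in-the-loop adaptation: update the decoder from a handful
# of user-provided examples instead of millions of offline iterations.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(1)
decoder = SGDRegressor(learning_rate="constant", eta0=0.05)

# Seed the model with a short initial calibration session (a few seconds of data).
X_calib = rng.random((50, 8))          # 8 hypothetical EMG channels
y_calib = X_calib.mean(axis=1)         # stand-in for the intended grasp aperture
decoder.partial_fit(X_calib, y_calib)

# Later, signals drift and the user performs ONE corrective movement.
x_new = rng.random((1, 8))
y_new = np.array([0.8])                # the grasp the user actually intended
decoder.partial_fit(x_new, y_new)      # single-sample update, no big retrain

print("updated prediction:", decoder.predict(x_new)[0])
```

The point is not the particular model but the constraint: every update has to come from data the person is willing and able to generate in real time.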

The Utah Neurorobotics Lab recently received a grant from Facebook Reality Labs. Can you talk about the research to make interacting with the Metaverse and IoT more inclusive?

Facebook Reality Labs put out a call for proposals, specifically to see if there were ways to make interfaces with the Metaverse, the augmented-reality and virtual-reality interfaces, inclusive to all individuals. A lot of their technology moving forward is centered on the wrist: a band around your wrist that would allow you to determine what a person’s hand is doing and provide feedback about what they’re touching in augmented or virtual reality.

My lab is working on very similar technologies; we have actually developed our own versions of these types of interfaces that sit around the wrist to provide information about what the hand is doing and what the hand is touching. Our spin on it is that there are a couple of things we can do to keep data private and be sensitive to the data we are collecting from individuals, while making the design of the technology inclusive for users.

The amount of information that companies ask of you when you want to engage with them, even to do something simple, seems outrageous. Imagine that you’re interacting with the Metaverse and all you want to do is grab an object. Why do they need to know the position of every single finger and every component of how your hand works? You should be able to share the bare minimum to get by. Instead of collecting tons and tons of data and building these really intensive models of people’s hands, we let people be the drivers of their own data collection.

So, if a user tells us what type of data they want to share, we might give them some tips for best practice, and we might encourage them to do a little bit more so we can give them optimal performance. It’s similar to when you Google an ambiguous term like “jaguar” and Google says something along the lines of, “Hey, if you let us see your search history, we could fine-tune your search results to focus on the car brand or the animal.” We try to think about data from your hands in the same way: we want the data we collect to be very specialized to the users and their needs.

When it comes to inclusivity, current devices work by picking up the movement a hand is doing: the electrical activity in your arm that’s associated with attempted movement. The number of neuromuscular disabilities that exist is tremendous, even extending into musculoskeletal issues like arthritis. So, we’re making sure that the Metaverse technology being developed is inclusive to these individuals. More specifically, in the near future we’re looking at making this technology inclusive to stroke patients with paralyzed limbs. But ultimately, whatever we develop is going to be inclusive to all individuals, and it is designed to be inclusive from the get-go. We’re excited to be working with Facebook, now Meta, on this project, and we applaud them for having the vision to address these potential issues a priori.

What breakthroughs are on the horizon for the neurorobotics field?

What I think is one of the most exciting things happening recently is the aggressive push into the commercial market. Neurorobotics, brain-computer interfaces, these types of things had been academic research for a long time. Now we have big players coming in with lots of funding, and a lot of this was sparked by Neuralink and the hype behind it. There’s never been more money available for startup companies, and now there are a lot of biomedical startups centered around neurotechnology and neurorobotics. That’s a really, really exciting trend, and I think we’re going to start to see some cool medical applications coming out this decade.

Arguably the most successful brain-computer interfaces in the clinical setting are the cochlear implant and deep-brain stimulators. Both of these were developed by big-name medical companies. But now we have other companies coming in and pushing the field even further, talking about very promising neurotechnology. I think it’s important to say that a lot of this technology is not nearly as close as Elon Musk likes to hyperbolize, but there are several important neurotech advances that will happen in our lifetime.

In the near future we will see commercial implantable brain-computer interfaces that can allow people who are fully paralyzed, with locked-in syndrome or severe spinal cord injuries, to control robotic devices. I think that’s probably the most realistic near-term application since there’s been a lot of academic research in that space already thanks to key players like Blackrock Neurotech and the BrainGate consortium. I think we will also have systems that reanimate and rehabilitate paralyzed limbs, allowing people to walk again after a life-altering neuromuscular impairment.

Even further out, there are going to be things that can modulate autonomic nerve function; there’s already some promising work going on with peripheral nerves like the vagus nerve. Your nerves go everywhere: they target your gut, they target your heart, they target your lungs. There is cool work being done to modulate neural activity to control obesity and to prevent heart disease or pulmonary diseases. I think these are also realistic clinical applications in our lifetimes. Much further out, there’s research that could hopefully help people with memory issues, restore cognition, and potentially even enhance cognition. But the idea of Elon Musk making robot bodies to store our memories is extremely far out; we might only begin to see faint sparks of that type of technology toward the end of our lifetimes.

Prosthesis abandonment for upper extremities is a significant problem, with estimated rates ranging from 20% to 50%. How does abandonment impact your work?

Abandonment rates are certainly a driving factor for almost all of the research we do. When you’re working with patients, you always build things off of patient needs. It’s extremely important, and easy, to ask patients why they abandon their prostheses. There are some things that are a layer or two deeper, that patients don’t realize would have an impact on them, but for the most part, the patients are spot on: they know what they want, and that’s what we’re trying to give them.

We looked at a survey of transradial amputees in particular: almost all of the top five factors were, essentially, related to wanting a functional prosthetic wrist. They wanted to be able to rotate their wrist. They wanted to be able to flex their wrist. They wanted to be able to deviate their wrist. They wanted to be able to do simultaneous movements, like hand plus wrist, and they wanted the control to not be so demanding on their visual attention. All of those things are direct reasons why they don’t use their prosthesis, and they are direct reasons why we do the research we do.

You can also ask patients about how much a sense of touch plays into their function and use. Those answers are a bit more obscure, but when you start to give patients these rudimentary devices and let them see what it’s like, you can get their direct feedback and they can tell you whether they would continue to use the device. It can be simple things, like the device being too heavy or too long, or it can be more complex things like, “I don’t know how to say it, but it just feels like it’s a part of my body, not a tool.”

Those are the motivating factors that help us, and then the challenge that we face is convincing insurance companies to cover these new technologies that patients prefer. That’s where we get a little bit more technical, not just surveying if they like the device, but assessing functional benefits. Our job as researchers is to translate patient needs into a physical, engineered device and then provide quantifiable evidence that the device helps the user.

One example: we recently put together a prosthetic wrist that can be used with any prosthetic hand to provide the additional functions outlined in that survey I described. When we let patients use it, our goal is to prove that the device helps them in such a meaningful way that insurance companies would be willing to reimburse its cost. We worked to prove that patients, when using the prosthetic wrist we built, made fewer unnatural, compensatory movements. When you don’t have a wrist and you go to pick something up, you often compensate by chicken-winging your elbow or leaning over to the side. By overusing these shoulder and lower-back muscles, you end up with shoulder and back injuries and further complications that cost insurance companies a lot more money down the road. So, if our prosthetic wrist can prevent these long-term shoulder and back injuries, it’s worth the money for insurance companies to reimburse the cost of the device and get it to patients immediately.

What piece of “Old School” technology do you wish would make a comeback?

I’ve always missed having a physical keyboard on my phone. I grew up with a flip phone, and you could send a message without ever looking at it because you had that physical keyboard where you could actually feel exactly where the buttons were. I remember being in school and sending messages underneath my desk without looking at the screen. Nowadays, if you’re in a meeting, everyone knows you’re texting because you’re probably staring down at your screen to see what you’re typing.

A lot of my research is in the area of haptics, focused on our sense of touch. The haptic feedback on newer phones is not good enough: you have to be staring at your screen. I think that’s one of the biggest pieces of technology that has been missed. The newer features on phones, where you can kind of feel a double or triple tap or a longer buzz, all help, but we’re still miles away from having AR and VR that feels like you’re really clicking a physical keyboard.

Anything else you would like to add?

I want to take the time to thank all of the participants involved in our research. They are critical to what we do. They’re not only committing their time and effort, but for neural prosthetic devices, they’re also selflessly giving their bodies to science: having research devices implanted and undergoing surgeries. But most importantly, research participants are pioneers, leading the way not to help themselves, but to altruistically help others in the future once the technology has matured. I always have to give shout-outs to them for being the heroes who make this technology progress. Without them, we wouldn’t be able to do what we do.

ABOUT SUPERPOSITION

We believe that progress is measured not by the number of new technologies created, but by their ability to be understood, embraced, and leveraged for positive impact. We seek to bring you easy-to-understand briefs on science and technology that can change the world, interviews with the leaders at the forefront of these breakthroughs, and writing that illuminates the importance of communication within and beyond the scientific community. The Superposition is produced by JDI, a boutique consultancy that brings emerging technology and science-driven companies to market. Our mission is to make precedent-setting science companies well known and understood. We pursue mastery of marketing, communications, and design to ensure that our clients get the attention they deserve. We believe that a good story — well told — can change the world.

To learn more, or get in touch, reach out to us at superposition@jones-dilworth.com.
