Ann Johnson, a Regina woman who lost her ability to speak after a stroke 18 years ago, is regaining her voice thanks to a brain implant and cutting-edge artificial intelligence (A.I.). Johnson’s 2005 stroke left her with locked-in syndrome, a condition in which she is fully aware but unable to move or communicate effectively.
Researchers from the University of California San Francisco (UCSF) and the University of California Berkeley (UCB) have developed a brain-computer interface that could revolutionize communication for people with similar conditions. This technology deciphers Johnson’s brain signals and translates them into words.
Dr. Edward Chang, a neurosurgeon and chair of neurological surgery at UCSF, implanted 253 electrodes onto the surface of Johnson’s brain. These electrodes intercept brain signals that would normally control facial muscles and speech-related movements. A cable connected to a port on Johnson’s head links the electrodes to a computer system.
To train the A.I. algorithms to recognize Johnson’s unique brain signals for speech, she repeatedly recited phrases from a 1,024-word conversational vocabulary. The computer learned to associate these patterns with different speech sounds. An avatar and a voice were integrated into the system, allowing Johnson to communicate through her digital counterpart.
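The training idea described above can be illustrated with a toy sketch. This is not the researchers' actual method: the real UCSF/Berkeley system uses deep neural networks over 253-channel brain recordings, while the code below uses simulated signals and a simple nearest-centroid classifier. The phoneme labels, feature dimensions, and noise levels are all hypothetical stand-ins chosen to show the general principle of averaging repeated attempts into a learned pattern per speech sound, then decoding new signals by similarity.

```python
# Illustrative sketch only -- NOT the actual UCSF/Berkeley decoder, which
# uses deep neural networks over 253-channel electrocorticography data.
# Here, simulated "brain signal" vectors are decoded with nearest centroids.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each speech sound produces feature vectors clustered
# around a distinct underlying pattern (a stand-in for electrode activity).
PHONEMES = ["AH", "EE", "OO"]
true_patterns = {p: rng.normal(size=8) for p in PHONEMES}

def simulate_trial(phoneme):
    """One noisy recording of the signal pattern for a phoneme."""
    return true_patterns[phoneme] + 0.1 * rng.normal(size=8)

# "Training": average many repeated recitations into one centroid per
# phoneme, analogous to learning Johnson's signal patterns from the
# phrases she recited from the 1,024-word vocabulary.
centroids = {
    p: np.mean([simulate_trial(p) for _ in range(50)], axis=0)
    for p in PHONEMES
}

def decode(signal):
    """Classify a new signal as the phoneme whose centroid is closest."""
    return min(centroids, key=lambda p: np.linalg.norm(signal - centroids[p]))

# Decode a fresh, unseen trial for each phoneme.
for p in PHONEMES:
    print(p, "->", decode(simulate_trial(p)))
```

The design point is the same one the article describes: repetition during training lets the system average out noise and learn a stable signature for each sound, after which new brain activity can be matched against those signatures in real time.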
In one particularly poignant moment, Johnson heard her synthesized voice, which researchers had built from a recording of her speaking at her wedding before the stroke. On hearing it for the first time, Johnson said, “My brain feels funny when it hears my synthesized voice. It’s like hearing an old voice.”
Ann Johnson hopes her journey will inspire others facing similar challenges. She wants to demonstrate that disabilities should not define or limit individuals. The breakthrough technology offers hope to those living with locked-in syndrome and similar conditions, paving the way for a future where communication barriers can be overcome through A.I. and brain-computer interfaces.