These Hearing Aids Will Tune in to Your Brain
Imagine you’re at a bustling dinner party filled with laughter, music, and clinking silverware. You’re trying to follow a conversation across the table, but every word feels like it’s wrapped in noise. For most people, these types of party scenarios, where it’s difficult to filter out extraneous sounds and focus on a single source, are an occasional annoyance. For millions with hearing loss, they’re a daily challenge—and not just in busy settings.

Today’s hearing aids aren’t great at determining which sounds to amplify and which to ignore, and this often leaves users overwhelmed and fatigued. Even the routine act of conversing with a loved one during a car ride can be mentally draining, simply because the hum of the engine and road noises are magnified to create loud and constant background static that blurs speech.

In recent years, modern hearing aids have made impressive strides. They can, for example, use a technology called adaptive beamforming to focus their microphones in the direction of a talker. Noise-reduction settings also help decrease background cacophony, and some devices even use machine-learning-based analysis, trained on uploaded data, to detect certain environments—for example a car or a party—and deploy custom settings.

That’s why I was initially surprised to find out that today’s state-of-the-art hearing aids aren’t good enough. “It’s like my ears work but my brain is tired,” I remember one elderly man complaining, frustrated with the inadequacy of his cutting-edge noise-suppression hearing aids. At the time, I was a graduate student at the University of Texas at Dallas, surveying individuals with hearing loss. The man’s insight led me to a realization: Mental strain is an unaddressed frontier of hearing technology.

But what if hearing aids were more than just amplifiers? What if they were listeners too? I envision a new generation of intelligent hearing aids that not only boost sound but also read the wearer’s brain waves and other key physiological markers, enabling them to react accordingly to improve hearing and counter fatigue.

Until last spring, when I took time off to care for my child, I was a senior audio research scientist at Harman International, in Los Angeles. My work combined cognitive neuroscience, auditory prosthetics, and the processing of biosignals, which are measurable physiological cues that reflect our mental and physical state. I’m passionate about developing brain-computer interfaces (BCIs) and adaptive signal-processing systems that make life easier for people with hearing loss. And I’m not alone. A number of researchers and companies are working to create smart hearing aids, and it’s likely they’ll come on the market within a decade.

Two technologies in particular are poised to revolutionize hearing aids, offering personalized, fatigue-free listening experiences: electroencephalography (EEG), which tracks brain activity, and pupillometry, which uses eye measurements to gauge cognitive effort. These approaches might even be used to improve consumer audio devices, transforming the way we listen everywhere.

Aging Populations in a Noisy World

More than 430 million people suffer from disabling hearing loss worldwide, including 34 million children, according to the World Health Organization. And the problem will likely get worse due to rising life expectancies and the fact that the world itself seems to be getting louder. By 2050, an estimated 2.5 billion people will suffer some degree of hearing loss and 700 million will require intervention.
On top of that, as many as 1.4 billion of today’s young people—nearly half of those aged 12 to 34—could be at risk of permanent hearing loss from listening to audio devices too loud and for too long.

Every year, close to a trillion dollars is lost globally due to unaddressed hearing loss, a trend that is likely to become more pronounced. And that figure doesn’t account for the significant emotional and physical toll on the hearing impaired, including isolation, loneliness, depression, shame, anxiety, sleep disturbances, and loss of balance.

Flex-printed electrode arrays, such as these from the Fraunhofer Institute for Digital Media Technology, offer a comfortable option for collecting high-quality EEG signals. [Credit: Leona Hofmann/Fraunhofer IDMT]

And yet, despite widespread availability, hearing aid adoption remains low. According to a 2024 study published in The Lancet, only about 13 percent of American adults with hearing loss regularly wear hearing aids. Key reasons for this low uptake include discomfort, stigma, cost—and, crucially, frustration with the poor performance of hearing aids in noisy environments.

Hearing technology has come a long way. As early as the 13th century, people began using the horns of cows and rams as “ear trumpets.” Commercial versions made of various materials, including brass and wood, came on the market in the early 19th century. (Beethoven, who famously began losing his hearing in his twenties, used variously shaped ear trumpets, some of which are now on display in a museum in Bonn, Germany.) But these contraptions were so bulky that users had to hold them in their hands or mount them in headbands. To avoid stigma, some people even hid hearing aids inside furniture to mask their disability. In 1819, a special acoustic chair was designed for the king of Portugal, featuring arms ornately carved to look like open lion mouths, which helped transmit sound to the king’s ear via speaking tubes.

Modern hearing aids came into being after the advent of electronics in the early 20th century. Early devices used vacuum tubes and then transistors to amplify sound, shrinking over time from bulky body-worn boxes to discreet units that fit behind or inside the ear. At their core, today’s hearing aids still work on the same principle: A microphone picks up sound, a processor amplifies and shapes it to match the user’s hearing loss, and a tiny speaker delivers the adjusted sound into the ear canal.

Today’s best-in-class devices, like those from Oticon, Phonak, and Starkey, have pioneered increasingly advanced technologies, including the aforementioned beamforming microphones, frequency lowering to make high-pitched sounds and voices easier to hear, and machine learning to recognize and adapt to specific environments. For example, a device may reduce amplification in a quiet room to avoid escalating background hums, or increase amplification in a noisy café to make speech more intelligible.

Advances in the AI technique of deep learning, which relies on artificial neural networks to automatically recognize patterns, also hold enormous promise. Using context-aware algorithms, this technology can, for example, help distinguish between speech and noise, predict and suppress unwanted clamor in real time, and attempt to clean up speech that is muffled or distorted.

The problem? As of right now, consumer systems respond only to the external acoustic environment and not to the internal cognitive state of the listener—which means they act on imperfect and incomplete information.
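To make that limitation concrete, here is a minimal, hypothetical Python sketch of purely scene-based adaptation. The preset values, the toy classifier, and all of the names (Preset, classify_scene, select_preset) are illustrative assumptions rather than any manufacturer's actual implementation; the point is simply that every input to the decision is acoustic.

```python
# A simplified, hypothetical sketch of conventional scene-based adaptation:
# the hearing aid classifies the acoustic environment and picks a preset.
# Every input here describes the sound field; nothing describes the listener.
from dataclasses import dataclass


@dataclass
class Preset:
    gain_db: float          # overall amplification
    beamforming: bool       # narrow the microphones' directional focus
    noise_reduction: float  # 0.0 (off) to 1.0 (aggressive)


# Invented example values for illustration only
PRESETS = {
    "quiet_room": Preset(gain_db=10, beamforming=False, noise_reduction=0.1),
    "noisy_cafe": Preset(gain_db=18, beamforming=True, noise_reduction=0.7),
    "car":        Preset(gain_db=14, beamforming=True, noise_reduction=0.9),
}


def classify_scene(sound_level_db: float, speech_detected: bool,
                   low_freq_hum: bool) -> str:
    """Toy stand-in for the machine-learning scene classifier."""
    if low_freq_hum and sound_level_db > 60:
        return "car"
    if speech_detected and sound_level_db > 65:
        return "noisy_cafe"
    return "quiet_room"


def select_preset(sound_level_db: float, speech_detected: bool,
                  low_freq_hum: bool) -> Preset:
    # The adaptation key is the acoustic scene alone; the listener's
    # attention, fatigue, or effort never enters the decision.
    return PRESETS[classify_scene(sound_level_db, speech_detected, low_freq_hum)]
```

The neuroadaptive designs discussed below would add biosignals, such as brain waves or pupil size, as further inputs to exactly this kind of decision.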
So, what if hearing aids were more empathetic? What if they could sense when the listener’s brain feels tired or overwhelmed and automatically use that feedback to deploy advanced features?

Using EEG to Augment Hearing Aids

When it comes to creating intelligent hearing aids, there are two main challenges. The first is building convenient, power-efficient wearable devices that accurately detect brain states. The second, perhaps more difficult, is decoding feedback from the brain and using that information to help hearing aids adapt in real time to the listener’s cognitive state and auditory experience.

Let’s start with EEG. This century-old noninvasive technology uses electrodes placed on the scalp to measure the brain’s electrical activity through voltage fluctuations, which are recorded as “brain waves.”

Brain-computer interfaces allow researchers to accurately determine a listener’s focus in multitalker environments. Here, professor Christopher Smalt works on an attention-decoding system at the MIT Lincoln Laboratory. [Credit: MIT Lincoln Laboratory]

Clinically, EEG has long been used to diagnose epilepsy and sleep disorders, monitor brain injuries, assess hearing ability in infants and impaired individuals, and more. And while standard EEG requires conductive gel and bulky headsets, we now have versions that are far more portable and convenient. These breakthroughs have already allowed EEG to migrate from hospitals into consumer tech, driving everything from neurofeedback headbands to the BCIs in gaming and wellness apps that let people control devices with their minds.

The cEEGrid project at Oldenburg University, in Germany, positions lightweight adhesive electrodes around the ear to create a low-profile version. In Denmark, Aarhus University’s Center for Ear-EEG also has an ear-based EEG system designed for comfort and portability. While their signal-to-noise ratio is slightly lower than that of conventional scalp EEG, these ear-based systems have proven sufficiently accurate for gauging attention, listening effort, hearing thresholds, and speech tracking in real time.

For hearing aids, EEG can pick up brain-wave patterns that reveal how well a listener is following speech: When listeners are paying attention, their brain rhythms synchronize with the syllabic rhythms of discourse, essentially tracking the speaker’s cadence. By contrast, if that tracking becomes weaker or less precise, it suggests the listener is struggling to comprehend and losing focus.

During my own Ph.D. research, I observed firsthand how real-time brain-wave patterns, picked up by EEG, can reflect the quality of a listener’s speech cognition. For example, when participants successfully homed in on a single talker in a crowded room, their neural rhythms aligned nearly perfectly with that speaker’s voice. It was as if there were a brain-based spotlight on that speaker! But when background noise grew louder or the listener’s attention drifted, those patterns waned, revealing the strain of keeping up.

Today, researchers at Oldenburg University, Aarhus University, and MIT are developing attention-decoding algorithms specifically for auditory applications. For example, Oldenburg’s cEEGrid technology has been used to successfully identify which of two speakers a listener is trying to hear. In a related study, researchers demonstrated that Ear-EEG can track the attended speech stream in multitalker environments.

All of this could prove transformational in creating neuroadaptive hearing aids.
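To give a flavor of how such attention decoding might work, here is a heavily simplified Python sketch. It assumes we already have one cleaned, band-limited EEG channel plus the audio of two candidate talkers, and it simply asks whose slow amplitude envelope correlates better with the neural signal. Everything in it is an illustrative assumption, including the 64-hertz analysis rate, the fixed 125-millisecond neural lag, and the function names; real systems use multichannel recordings, regularized regression or neural networks, and per-user calibration.

```python
# Minimal sketch of envelope-based auditory attention decoding.
# Assumes a cleaned, band-limited single EEG channel (hypothetical data).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 64  # Hz; an assumed common rate for the EEG and the audio envelopes


def lowpass_envelope(audio: np.ndarray, fs_audio: int) -> np.ndarray:
    """Extract a slow (<8 Hz) amplitude envelope that follows syllable rhythms."""
    env = np.abs(hilbert(audio))                      # amplitude envelope
    b, a = butter(2, 8 / (fs_audio / 2), btype="low")  # keep only slow modulations
    env = filtfilt(b, a, env)
    return env[::fs_audio // FS]                      # crude downsample to FS


def attended_talker(eeg: np.ndarray, env_a: np.ndarray, env_b: np.ndarray,
                    lag_samples: int = 8) -> str:
    """Pick the talker whose envelope correlates best with the EEG,
    allowing a fixed neural lag (8 samples = 125 ms at 64 Hz)."""
    n = min(len(eeg) - lag_samples, len(env_a), len(env_b))
    eeg_lagged = eeg[lag_samples:lag_samples + n]     # EEG trails the stimulus
    r_a = np.corrcoef(eeg_lagged, env_a[:n])[0, 1]
    r_b = np.corrcoef(eeg_lagged, env_b[:n])[0, 1]
    return "talker A" if r_a > r_b else "talker B"
```

A running correlation of this kind, recomputed every second or two, is the sort of signal a neuroadaptive hearing aid could watch for drops in speech tracking.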
If a listener’s EEG reveals a drop in speech tracking, the hearing aid could infer increased listening difficulty, even if ambient noise levels have remained constant. For example, if a hearing-impaired car driver can’t focus on a conversation because of mental fatigue caused by background noise, the hearing aid could switch on beamforming to better spotlight the passenger’s voice and engage machine-learning-driven noise canceling to block out the din of the road.

Of course, there are several hurdles to cross before commercialization becomes possible. For one thing, EEG-paired hearing aids will need to handle the fact that neural responses differ from person to person, which means they will likely need to be calibrated individually to capture each wearer’s unique brain-speech patterns.

Additionally, EEG signals are themselves notoriously “noisy,” especially in real-world environments. Luckily, we already have algorithms and processing tools for cleaning and organizing these signals so that computer models can search for key patterns that predict mental states, including attention drift and fatigue.

Commercial versions of EEG-paired hearing aids will also need to be small and energy efficient in their signal processing and real-time computation. And getting them to work reliably despite head movement and daily activity will be no small feat. Importantly, companies will need to resolve ethical and regulatory considerations, such as data ownership. To me, these challenges seem surmountable, especially with technology progressing at a rapid clip.

A Window to the Brain: Using Our Eyes to Hear

Now let’s consider a second way of reading brain states: through the listener’s eyes.

When a person has trouble hearing and starts feeling overwhelmed, the body reacts. Heart-rate variability diminishes, indicating stress, and sweating increases. Researchers are investigating how these kinds of autonomic nervous-system responses can be measured and used to create smart hearing aids. For the purposes of this article, I will focus on a response that seems especially promising—namely, pupil size.

Pupillometry is the measurement of pupil size and how it changes in response to stimuli. We all know that pupils expand or contract depending on the brightness of light. As it turns out, pupil size is also an accurate means of gauging attention, arousal, mental strain—and, crucially, listening effort.

Pupil size is determined by both external stimuli, such as light, and internal stimuli, such as fatigue or excitement. [Credit: Chris Philpot]

In recent years, studies at University College London and Leiden University have demonstrated that pupil dilation is consistently greater in hearing-impaired individuals when they are processing speech in noisy conditions. Research has also shown pupillometry to be a sensitive, objective correlate of speech intelligibility and mental strain. It could therefore offer a feedback mechanism for user-aware hearing aids that dynamically adjust amplification strategies, directional focus, or noise reduction based not just on the acoustic environment but on how hard the user is working to comprehend speech.

While conceptually more straightforward than EEG, pupillometry presents its own engineering challenges. Unlike ear-worn EEG electrodes, which can sit discreetly behind or inside the ear, pupillometry requires a direct line of sight to the pupil, necessitating a stable, front-facing camera-to-eye configuration—which isn’t easy to achieve when a wearer is moving around in real-world settings.
On top of that, most pupil-tracking systems require infrared illumination and high-resolution optical cameras, which are too bulky and power hungry for the tiny housings of in-ear or behind-the-ear hearing aids. All this makes it unlikely that standalone hearing aids will include pupil-tracking hardware in the near future.

A more viable approach may be pairing hearing aids with smart glasses or other wearables that contain the necessary eye-tracking hardware. Products from companies like Tobii and Pupil Labs already offer real-time pupillometry via lightweight headgear for use in research, behavioral analysis, and assistive technology for people with medical conditions that limit movement but leave eye control intact. Apple’s Vision Pro and other augmented reality or virtual reality platforms also include built-in eye-tracking sensors that could support pupillometry-driven adaptations for audio content.

Smart glasses that measure pupil size, such as these made by Tobii, could help determine listening strain. [Credit: Tobii]

Once pupil data is acquired, the next step will be real-time interpretation. Here, again, machine learning can use large datasets to detect patterns signifying increased cognitive load or attentional shifts. For instance, if a listener’s pupils dilate abnormally during a conversation, signaling strain, the hearing aid could automatically engage a more aggressive noise-suppression mode or narrow its directional microphone beam. These systems could also learn from contextual features, such as time of day or prior environments, to continuously refine their response strategies.

While no commercial hearing aid currently integrates pupillometry, adjacent industries are moving quickly. Emteq Labs is developing “emotion-sensing” glasses that combine facial and eye tracking, along with pupil measurement, to do things like evaluate mental health and capture consumer insights. Ethical controversies aside—just imagine what dystopian governments might do with emotion-reading eyewear!—such devices show that it’s feasible to embed biosignal monitoring in consumer-grade smart glasses.

A Future with Empathetic Hearing Aids

Back at the dinner party, it remains nearly impossible to participate in the conversation. “Why even bother going out?” some ask. But that will soon change.

We’re on the cusp of a paradigm shift in auditory technology, from device-centered to user-centered innovation. In the next five years, we may see hybrid solutions in which EEG-enabled earbuds work in tandem with smart glasses. In 10 years, fully integrated biosignal-driven hearing aids could become the standard. And in 50? Perhaps audio systems will evolve into cognitive companions: devices that adjust, advise, and align with our mental state.

Personalizing hearing-assistance technology isn’t just about improving clarity; it’s also about easing mental fatigue, reducing social isolation, and empowering people to engage confidently with the world. Ultimately, it’s about restoring dignity, connection, and joy.