TAG: "Hearing"

UCSF neuroscientist wins Russ Prize, bioengineering’s highest honor

Michael Merzenich lauded for contributions to cochlear implants for the deaf.

Michael Merzenich, UC San Francisco

By Pete Farley, UC San Francisco

Ohio University and the National Academy of Engineering (NAE) announced today (Jan. 7) that UC San Francisco neuroscientist Michael M. Merzenich, Ph.D., is a winner of the 2015 Fritz J. and Dolores H. Russ Prize, the bioengineering profession’s highest honor. Merzenich shares the prize with four other scientists for their fundamental contributions to the development of cochlear implants, electrical devices that enable the deaf to hear.

The cochlear implant is the most-used neural prosthesis developed to date; more than 320,000 hearing-impaired people have received implants in one or both ears.

“This year’s Russ Prize recipients personify how engineering transforms the health and happiness of people across the globe,” said NAE President C.D. Mote Jr. “The creators of the cochlear implant have improved remarkably the lives of people everywhere who are hearing impaired.”

Cochlear implants are electronic devices that allow people with severe to profound sensorineural hearing loss to hear sounds. In such implants, an externally worn audio processor detects sounds and encodes them into electrical signals that are transmitted to small, surgically implanted components that directly stimulate the auditory nerve. The auditory nerve sends the signals to the brain, where they are interpreted as sounds.
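
The encoding step lends itself to a toy sketch (an illustration of the general multichannel idea only, not any manufacturer's actual algorithm): the processor measures how much sound energy falls near each electrode's characteristic frequency, and those levels would drive the stimulation currents along the cochlea.

```python
import math

def band_energy(audio, sample_rate, freq_hz):
    """Energy of `audio` near one probe frequency (a one-bin DFT)."""
    n = len(audio)
    re = sum(audio[i] * math.cos(2 * math.pi * freq_hz * i / sample_rate)
             for i in range(n))
    im = sum(audio[i] * math.sin(2 * math.pi * freq_hz * i / sample_rate)
             for i in range(n))
    return (re * re + im * im) / n

def encode_channels(audio, sample_rate, center_freqs):
    """Toy multichannel encoder: the energy near each channel's center
    frequency would set the stimulation level on that electrode."""
    return [band_energy(audio, sample_rate, f) for f in center_freqs]

# A 440 Hz tone mainly drives the electrode whose channel is centered at 440 Hz.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * i / sr) for i in range(sr)]
levels = encode_channels(tone, sr, [150, 440, 1000, 2200, 4000])
assert levels.index(max(levels)) == 1
```

Real processors use many more channels, logarithmically spaced bands and running envelopes rather than a one-shot analysis, but the band-to-electrode mapping is the core idea.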

Merzenich, professor emeritus of otolaryngology at UCSF, established some of the neurophysiological underpinnings of present cochlear implant designs beginning in the early 1970s. In collaboration with two UCSF colleagues, the late Robin P. Michelson, M.D., and Robert A. Schindler, M.D., professor emeritus of otolaryngology, Merzenich later conducted one of the first clinical trials of multichannel cochlear implants. These trials paved the way for the eventual commercialization of UCSF-designed devices in the late 1980s by Advanced Bionics, still one of the world’s leading manufacturers of cochlear implants.

Merzenich shares the Russ Prize with Blake S. Wilson, adjunct professor of biomedical engineering, electrical and computer engineering, and surgery at Duke University and co-director of the Duke Hearing Center; Graeme M. Clark, Ph.D., Foundation Professor of Otolaryngology at the University of Melbourne, Australia; Erwin Hochmair, DTech, professor emeritus in the Institute for Ion Physics and Applied Physics at the University of Innsbruck, Austria; and Ingeborg Hochmair-Desoyer, Ph.D., professor of biomedical engineering at the Technical University of Vienna, Austria.

“I am very, very pleased that the cochlear implant has been recognized as a significant advancement that contributes positively to the quality of life of those with hearing impairment,” said Dennis Irwin, Ph.D., dean of Ohio University’s Russ College of Engineering and Technology. “I have had the privilege of knowing and working with several individuals with profound hearing loss throughout my early life and later career, and I witnessed the difficulty several of them faced in athletic pursuits, education and their careers.”

Created by Ohio University alumnus Fritz Russ, a 1942 electrical engineering graduate, and his wife, Dolores, the Russ Prize carries a $500,000 award. Awarded biennially by the NAE, it recognizes bioengineering achievements worldwide that are in widespread use and have significantly improved the human condition. Previous recipients include the inventors of the implantable heart pacemaker, kidney dialysis, the automated DNA sequencer and the technology enabling LASIK and PRK eye surgeries.

View original article


Songbirds may help build a better hearing aid

Avian ability to pinpoint ‘signal’ sounds inspires algorithm at heart of new auditory device.

Zebra finches and other songbirds can distinguish a mate's song amid a cacophony of sounds. That ability is helping researchers develop a better hearing aid for humans. (Credit: iStock)

By Kate Rix, UC Newsroom

Untreated hearing loss can have devastating and alienating repercussions on a person’s life: isolation, depression, sapped cognition, even dementia.

Yet only 1 in 5 Americans who could benefit from a hearing aid actually wears one. Some don’t seek help because their loss has been so gradual that they do not feel impaired. Others cannot afford the device. Many own hearing aids but leave them in a drawer. Wearing them is just too unpleasant.

“In a crowded place, it can be very difficult to follow a conversation even if you don’t have hearing deficits,” says UC Berkeley neuroscientist Frederic Theunissen. “That situation can be terrible for a person wearing a hearing aid, which amplifies everything.”

Imagine the chaotic din in which everything is equally amplified: your friend’s voice, the loud people a few tables over and the baby crying across the room.

In that scenario, the friend’s voice is the signal or sound that the listener is trying to hear. Tuning in to signal sounds, even with background noise, is something that healthy human brains and ears do remarkably well. The question for Theunissen — a professor who focuses on auditory perception — was how to make a hearing aid that processes sound the way the brain does.

“We were inspired by the biology of hearing,” Theunissen said. “How does the brain do it?”

Songbirds excel at listening in crowded, noisy environments

Humans aren’t the only ones able to home in on specific sounds in noisy environments. For the past two years, Theunissen and the graduate students in his lab have studied songbirds, which are especially adept at listening in crowded, noisy environments.

By looking at songbird brain imagery, the researchers now understand how chatty, social animals distinguish the chirp of a mate from the din of dozens of other birds.

They were able to identify the exact neurons that tune into a signal and remain tuned to it no matter how noisy the environment becomes. These neurons shine what Theunissen calls an “auditory spotlight” by focusing on certain features or “edges” of a sound. Imagine you are looking for your cellphone on a table covered with objects. In the same way that your eye can scan for a specific rectangular shape and color, your ear searches for and finds certain pitches and frequencies: the sound of a friend’s voice in a restaurant.

“Our brain does all this work, suppressing echoes and background noise, conducting auditory scene analysis,” Theunissen says.

A Proof of Concept Commercialization Gap grant from UC Research Initiatives in the Office of the President provided the critical funding the lab needed to take the discovery one giant step further.

Algorithm replicates ‘auditory spotlight’

The neurological “auditory spotlight” process has been reproduced in the form of an algorithm. Theunissen’s team is working with Starkey Hearing Technologies, an international firm with a research office in Berkeley. Together, they are testing the potential benefits of the algorithm for hearing-impaired subjects when it is loaded into hearing aids.

This next generation of hearing aids will detect the features of the signal and separate it from any background noise. Unlike a traditional hearing aid, it will have a variable gain, so that signal sounds get a boost without distortion while background sounds are attenuated without being muffled entirely.
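
The variable-gain idea can be sketched in a few lines (a hypothetical illustration of the principle, not Theunissen's algorithm or Starkey's implementation): each frequency band is scaled by how signal-like it appears, so the signal is boosted while the background is turned down rather than everything being amplified equally.

```python
def apply_variable_gain(band_levels, signal_likelihood,
                        boost=2.0, cut=0.5):
    """Toy variable gain: each band's level is scaled between `cut`
    (pure background) and `boost` (pure signal) according to how
    signal-like the band is, so noise is attenuated but never
    silenced entirely."""
    out = []
    for level, p in zip(band_levels, signal_likelihood):
        gain = cut + (boost - cut) * p  # interpolate between cut and boost
        out.append(level * gain)
    return out

# A band judged pure signal is doubled; a pure-noise band is halved.
print(apply_variable_gain([1.0, 1.0], [1.0, 0.0]))  # [2.0, 0.5]
```

Estimating the `signal_likelihood` values is the hard part — in the songbird-inspired approach it would come from detecting the signal's characteristic features — but the gain rule itself is this simple.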

“This hearing aid should not eliminate all of the noise or distort the signal,” Theunissen says. “That wouldn’t sound real, and the real sound is the most pleasant and the one that we want to hear.”

The funding from UC Research Initiatives — $100,000 for one year — moved Theunissen’s research from his lab and closer to the marketplace. The hearing aid algorithm is the first potential commercial application of his lab’s work.

“We are a lab doing basic science,” he says. “There is a purist pleasure in solving problems, but also an excitement that there are real problems to be solved.”


Research aims to help veterans with hearing loss

UC Riverside team tries crowdfunding to support project.

Alison Smith, a disabled veteran and UC Riverside graduate student, is part of a research team that is developing a brain-training game to help veterans suffering combat-related hearing loss.

By Bettye Miller, UC Riverside

Many combat veterans suffer hearing loss from blast waves that makes it difficult to understand speech in noisy environments — a condition called auditory dysfunction — which may lead to isolation and depression. There is no known treatment.

Building on promising brain-training research at UC Riverside related to improving vision, researchers at UC Riverside and the National Center for Rehabilitative Auditory Research are developing a novel approach to treat auditory dysfunction by training the auditory cortex to better process complex sounds.

The team is seeking public support to raise the estimated $100,000 needed to fund research and develop a computer game they believe will improve the brain’s ability to process and distinguish sounds.

“This is exploratory research, which is extremely hard to fund,” said Aaron Seitz, UCR professor of neuropsychology. “Most grants fund basic science research. We are creating a brain-training game based on our best understanding of auditory dysfunction. There’s enough research out there to tell us that this is a solvable problem. These disabled veterans are a patient population that has no other resource.”

Seitz said the research team is committed to the project regardless of funding, but donations will accelerate development of the brain-training game by UCR graduate and undergraduate students in computer science and neuroscience; pilot studies on UCR students with normal hearing; testing the game with veterans; and refining the game to the point that it can be released for public use.

Auditory dysfunction is progressive, said Alison Smith, a disabled veteran and a graduate student in neuroscience who studies hearing loss in combat veterans. Nearly 8 percent of combat veterans who served in Afghanistan and Iraq suffer from traumatic brain injury, she said. Of those, a significant number complain about difficulty understanding speech in noisy environments, even though they show no external hearing loss.

“Approximately 10 percent of the civilian population is at risk for noise-induced hearing loss, and there have been more than 20,000 significant cases of hearing loss per year since 2004,” added Smith, who served in the Army National Guard as a combat medic for five years.

This research also may help many other hearing-impaired populations, including musicians, mechanics and machinists; reduce the effects of age-related hearing loss; and aid individuals with hearing aids and cochlear implants.

“This kind of training has never been done before,” Seitz said. “We’re taking what we know about the building blocks of speech and what we know about the auditory cortex and the building blocks of hearing, and developing a way to retrain the auditory cortex to process complex sounds.”

The goal is to revive the auditory processing system that was damaged by blast waves and improve hearing, he said. “They may not hear as well as they did before the damage occurred, but we’re hoping to get them to a more normal point.”

UC Riverside launched the research project after audiologists at the Veterans Administration hospital in Loma Linda approached UCR neuroscientist Khaleel Razak about the hearing difficulties faced by returning combat veterans after he presented a seminar on age-related hearing loss. Razak is a consultant on the project.

In addition to Seitz and Smith, team members include Frederick J. Gallun, a researcher at the National Center for Rehabilitative Auditory Research and associate professor in otolaryngology and the Neuroscience Graduate Program at Oregon Health and Science University; Victor Zordan, UCR associate professor of computer science who specializes in video game design and intelligent systems; and Dominique Simmons, a cognitive psychology graduate student studying audiovisual speech perception.

Seitz said he hopes to begin testing the game on veterans by summer 2015.

“Whether or not you agree with the war, these are people who have gone overseas to serve their country,” he said. “When they come back, it’s our responsibility to care for them. We have to find a way to help our disabled vets. Right now, there’s nothing out there for veterans who are suffering this kind of hearing loss. This is our best shot.”

Contributions made through experiment.com are not tax-deductible. Individuals who wish to make a tax-deductible donation may give to the UCR Brain Game Center through UCR Online Giving and use the “special instructions” field to designate the gift for the “Can brain training help soldiers with brain injury regain hearing?” project.

View original article


UCLA, House Clinic sign letter of intent for clinical partnership

Alliance would create leader in clinical care, research, education for hearing, ear disorders.

Gerald Berke, UCLA

By Elaine Schmidt, UCLA

The UCLA Department of Head and Neck Surgery and House Clinic announced today (Nov. 25) that they have signed a letter of intent to pursue and finalize a clinical partnership. The alliance would create the nation’s leader in patient care, research and education for hearing and ear disorders.

“We are thrilled to invite the House Clinic’s world-class group of physicians to the internationally recognized UCLA Health and David Geffen School of Medicine at UCLA,” said Dr. David Feinberg, president of the UCLA Health System, CEO of the UCLA Hospital System and associate vice chancellor of the Geffen School of Medicine at UCLA. “Our partnership will enable patients from Los Angeles and throughout the world to be treated by House doctors as part of the UCLA network.”

The move would preserve each organization’s identity and mission while blending clinical operations to expand patient access to House and UCLA specialists. The clinic’s nine physicians, including two neurosurgeons specializing in tumors and other diseases affecting the inner ear and skull base, would join UCLA’s network.

“The House Clinic is recognized internationally for its education of past, as well as future, leaders in the field of otology,” said Dr. John House, the son of founder Dr. Howard House. “We have identified a renowned institution in UCLA whose mission is complementary to our own and who shares our values and high standards for superb patient care built upon cutting-edge research.”

“House Clinic looks forward to leveraging UCLA’s research facilities and clinical network, and expanding our access to patient care,” said Dr. Jennifer Derebery, president of the House Clinic. “UCLA’s strong community outreach and reputation as the preeminent medical school in southern California will further our mission of advancing the medical and surgical treatment of hearing loss and ear disorders.”

Based in downtown Los Angeles, the House Clinic has satellite locations in Orange, Huntington Beach, Bakersfield, Santa Monica, Encino and Ventura. All of the sites, including Los Angeles, dispense hearing aids, and the Orange location also offers medical care.

Both UCLA and House have attracted global acclaim for improving the medical and surgical treatment of hearing loss and ear disorders.

UCLA was ranked No. 10 for ear, nose and throat care in U.S. News and World Report’s 2015 “Best Hospitals” edition. The UCLA neurotology program provides advanced medical and surgical therapies for the treatment of hearing loss and balance disorders, and it has been named a U.S. Center of Excellence by the National Institutes of Health. Both UCLA and House are among a handful of sites designated by the state of California for cochlear implantation surgery. UCLA’s pediatric program provides comprehensive medical and surgical management of ear, nose and throat disorders for children from birth to age 21.

Founded in 1942, House Clinic helped pioneer the development of skull base surgery and was instrumental in developing the cochlear implant, which revolutionized the treatment of deafness. In 1960, Dr. William House performed the first cochlear implant surgery in the United States, and in 1979 he performed the world’s first auditory brainstem implant. In May, House surgeons implanted a deaf pediatric patient with an auditory brainstem device — a first in the U.S. — as part of a National Institutes of Health clinical trial.

“The House Clinic continues to be the nation’s premier organization in the treatment of hearing loss and ear disorders,” said Dr. Gerald Berke, chair of head and neck surgery at UCLA. “Having them join the UCLA Health family demonstrates our commitment to future growth and excellence in the field and reflects House’s recognition of UCLA as the most important health care provider in Southern California.”

View original article


Toys that could damage children’s hearing

Putting tape over speakers can help keep down the volume.

UC Irvine Health otolaryngologist Dr. Hamid Djalilian and his team tested more than two dozen popular toys to determine which had the highest sound levels in three scenarios: at the ear with no tape over the speaker, at the ear with tape over the speaker and at a child’s arm’s length (approximately 30 centimeters) with tape over the speaker.

Djalilian suggests a couple of things parents can do to keep down the volume on toys:

  • Put occlusive tape or super glue over the speaker to mute the sound
  • Put tape over the volume control, preventing your child from increasing the volume to unsafe levels

View results of toy tests


Kids’ ear infections cost health care system nearly $3B a year

Ear infections are the most common reason for antibiotic use among all children.

Nina Shapiro, UCLA

Acute otitis media, or ear infection, is the most common ailment among kids of preschool age and younger in the U.S., primarily because these children have immature middle-ear drainage systems, higher exposure to respiratory illnesses and undeveloped immune systems.

And because it’s also the most common reason for antibiotic use among all children, the costs associated with acute otitis media (AOM) are under more scrutiny than ever by health care and government administrators, especially given today’s political and economic climate, strained health care resources and cost-containment efforts.

While estimates of the economic impact of AOM have been formulated in the past, a new study by UCLA and Harvard University researchers is the first to use a national population database that gives a direct, head-to-head comparison of expenditures for pediatric patients diagnosed with ear infections and similar patients without ear infections.

The findings show that AOM is associated with significant increases in direct costs incurred by consumers and the health care system. With its high prevalence across the U.S., pediatric AOM accounts for approximately $2.88 billion in added health care expenses annually and is a significant health care utilization concern.

The research is published in the current edition of the journal The Laryngoscope.

“Although the annual incidence of ear infection may be declining in the U.S., the number of kids affected remains high, and the public health implications of AOM are substantial,” said study co-author Dr. Nina Shapiro, director of pediatric otolaryngology at Mattel Children’s Hospital UCLA and a professor of head and neck surgery at the David Geffen School of Medicine at UCLA. “As our health care system continues to be vigorously discussed around the nation, efforts to control costs and allocate resources appropriately are of prime importance.”

Read more



Neuroscientist awarded NSF grant

Funding will support research on auditory processing, sound localization.

Khaleel Razak, UC Riverside

Twenty years ago Khaleel A. Razak was an electronics engineering student focused on creating a telephone for hearing-impaired children in Chennai, India. Today he is a neuroscientist at UC Riverside whose research on how the brain processes everyday sounds may lead to therapies for age-related hearing problems and fragile X syndrome.

An assistant professor of psychology and neuroscience at UC Riverside, Razak has been awarded a five-year, $866,902 Faculty Early Career Development Program (CAREER) grant from the National Science Foundation to further his research.

Razak’s lab at UCR focuses on how the auditory brain processes behaviorally relevant sounds and how those mechanisms are altered by developmental experience, disease and aging. The NSF grant will specifically support research on how the auditory cortex of the brain processes information about sound locations.

“Precise sound localization can be a matter of life and death,” he explained. “The auditory cortex is necessary for sound localization, but our understanding of the relevant neural processing is rudimentary. Sound localization is also interesting from a computational perspective because we explore how neurons integrate inputs from the two ears.”

The NSF funding will allow Razak’s lab to investigate the neural computations that generate cortical maps underlying sound localization behavior in the pallid bat.
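
One binaural cue such cortical neurons are thought to integrate is the interaural time difference: a sound off to one side reaches the nearer ear a fraction of a millisecond earlier. A textbook way to estimate that delay, shown here as a generic sketch rather than anything specific to Razak's models, is to find the lag at which the two ear signals align best:

```python
def estimate_itd(left, right, max_lag=5):
    """Estimate the interaural time difference in samples: the delay d
    at which right[i] best matches left[i - d]. A positive d means the
    sound reached the left ear first (source toward the left side)."""
    n = len(left)
    def alignment(d):
        return sum(right[i] * left[i - d]
                   for i in range(n) if 0 <= i - d < n)
    return max(range(-max_lag, max_lag + 1), key=alignment)

# A click that hits the left ear 3 samples before the right ear.
left  = [0, 0, 1, 2, 1, 0, 0, 0, 0, 0]
right = [0, 0, 0, 0, 0, 1, 2, 1, 0, 0]
assert estimate_itd(left, right) == 3
```

Real auditory systems resolve delays of tens of microseconds; this brute-force cross-correlation only illustrates the computation being studied.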

Read more


Staying safe on the 4th of July

UC Davis Health System experts provide tips on how to enjoy the holiday safely.

Fireworks and food are a central part of Americans’ celebration of the Fourth of July holiday, but they also can be health hazards without proper precautions.

Experts from UC Davis Health System provide some tips on how to enjoy these traditional holiday pastimes while safeguarding against illness and injuries.

The UC Davis Audiology Clinic encourages the use of ear protection to guard against hearing injuries.

To protect themselves, those celebrating with fireworks should use sound judgment and wear earplugs, said Robert Ivory, an audiologist at the Audiology Clinic.

“The explosion from a single firecracker at close range can cause permanent hearing damage in an instant,” Ivory said. “We encourage people to leave the fireworks to the professionals and to use earplugs when attending fireworks celebrations.”

Fireworks also present a risk of burns, the most common cause of injury during the summer months, and especially in July. Fire and burns are the third-leading cause of unintentional, injury-related deaths among children 14 and under.

In 2012, 60 percent of all fireworks injuries occurred during the month surrounding July 4. About 10,000 people suffer fireworks injuries every year, including 4,000 children ages 14 and under. Burns resulting from improper use of sparklers and illegal fireworks usually involve the hands, face, arms and chest areas.

The best way to protect one’s family is not to use fireworks at home. The Firefighters Burn Institute Regional Burn Center at UC Davis Medical Center recommends attending public fireworks displays and leaving the lighting to professionals.

Read more



Humans get the gist of complex sounds

UC Berkeley study has implications for hearing aids, speech recognition software.

New research by neuroscientists at UC Berkeley suggests that the human brain is not detail-oriented, but opts for the big picture when it comes to hearing.

Researchers found that when faced with many different sounds, such as notes in a violin melody, the brain doesn’t bother processing every individual pitch, but instead quickly summarizes them to get an overall gist of what is being heard.

The study, published today (June 12) in the journal Psychological Science, could potentially improve the ability of hearing aids to help people tune into one conversation when multiple people are talking in the background, something people with normal hearing do effortlessly. Also, if speech recognition software programs could emulate the information compression that takes place in the human brain, they could represent a speaker’s words with less processing power and memory.

In the study, participants could accurately judge the average pitch of a brief sequence of tones. Surprisingly, however, they had difficulty recalling information about individual tones within the sequence, such as when in the sequence they had occurred.

“This research suggests that the brain automatically transforms a set of sounds into a more concise summary statistic — in this case, the average pitch,” said study lead author Elise Piazza, a UC Berkeley Ph.D. student in the Vision Science program. “This transformation is a more efficient strategy for representing information about complex auditory sequences than remembering the pitch of each individual component of those sequences.”
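
Concretely, the summary statistic described here is just the mean pitch of the tone sequence. A toy illustration (hypothetical tone values, averaging raw frequencies for simplicity; the study's actual stimuli and pitch scale may differ):

```python
def summary_pitch(pitches_hz):
    """Return the average pitch of a tone sequence — the kind of
    concise summary the study suggests listeners retain, at the
    cost of detail about the individual tones."""
    return sum(pitches_hz) / len(pitches_hz)

# Two sequences with the same tones in different orders share a summary,
# consistent with listeners judging the average but not the order.
a = [392.0, 440.0, 494.0]  # G4, A4, B4
b = [494.0, 392.0, 440.0]  # same tones, shuffled
assert summary_pitch(a) == summary_pitch(b) == 442.0
```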

Read more


Study addresses issues faced by deaf, hard-of-hearing clinicians

Are they getting the support they need?

Darin Latimore, UC Davis

Deaf and hard-of-hearing (DHoH) people must overcome significant professional barriers, particularly in health care professions. A number of accommodations are available for hearing-impaired physicians, such as electronic stethoscopes and closed-captioning technologies, but are these approaches making a difference?

A team of researchers from the University of California, Davis, the University of Texas Health Science Center at San Antonio and the University of Michigan surveyed DHoH physicians and medical students to determine whether these and other accommodations enhance career satisfaction and their ability to provide care. This research has important implications for DHoH medical students, educators, employers and patients.

The article, titled “Deafness Among Physicians and Trainees: A National Survey,” appears in the February issue of the journal Academic Medicine.

“We found that many deaf and hard-of-hearing students and physicians are interested in primary care practice and have a special affinity with those who also have a hearing loss,” said Darin Latimore, assistant dean for student and resident diversity at UC Davis School of Medicine and one of the study’s co-authors. “By enhancing training for a diverse range of physicians, we can improve quality of care and access for underserved populations, especially individuals who are deaf or have a hearing loss.”

Read more


Quieting the ringing in the ears

UC Irvine bioengineer helps develop device to aid people with tinnitus.

Fan-Gang Zeng, UC Irvine

A UC Irvine bioengineer, with help from a patient suffering from the condition, developed an MP3 player-like device to ease, or even silence, the debilitating noise that afflicts people with tinnitus.

Tinnitus is a condition commonly known as “ringing in the ears.” But for people who suffer from severe tinnitus, that description doesn’t capture how disturbing the experience can be. Tinnitus can sound like ringing, but it can also be a roaring, clicking or hissing noise. It can be loud or soft, constant or intermittent, and it afflicts many people with hearing loss.

More than 50 million people in the U.S. suffer from the condition, according to the American Tinnitus Association. Of these, 12 million have it severe enough to seek medical help and 2 million are so debilitated by the condition that they can’t function normally on a day-to-day basis.

Now, people suffering from tinnitus have a new therapeutic tool based on research at UC Irvine. An MP3 player-like device, called the “Serenade Tinnitus Treatment System,” officially debuted in March. It plays specially developed tones that can be customized to suppress a patient’s tinnitus. Unlike current therapies, Serenade doesn’t just drown out the tinnitus with a louder sound, but actively reduces it.

The treatment system grew out of a research project headed by Fan-Gang Zeng, a bioengineer and director of the Hearing and Speech Lab at UC Irvine. In 2006, a patient, Michael (not his real name), came to Zeng for help with his cochlear implant.

Michael, a musician and audio engineer in the San Francisco Bay Area, had suddenly and mysteriously lost his hearing in one ear. “I showed up for work one day, and within minutes my hearing shut down on my right side,” he said.

What was worse, that ear immediately developed tinnitus. “It was nothing but loud squealing noises,” he said.

Michael saw a number of specialists and tried all kinds of traditional and alternative therapies, but nothing got rid of the squealing. As a last resort, he decided to get a cochlear implant, a device usually indicated only for people who are deaf in both ears. But evidence has shown that the implants sometimes help relieve tinnitus connected to hearing loss. Unfortunately, the implant didn’t quiet his tinnitus, so his surgeon, Nikolas Blevins of Stanford University, referred Michael to Zeng, an expert in cochlear implants.

Michael met with Zeng, who decided to try to deliver sounds through the implant that might relieve the loud squealing. “I didn’t have any research experience in tinnitus at that time,” Zeng said. “We said, ‘Let’s give it a try.’ That’s how we got started.”

Soon, Michael became not only a patient but a key collaborator in a research project. “Normally, everyone in this country who’s been implanted with a cochlear implant has been completely deaf,” Michael said. “Me, I can compare what goes through the cochlear implant to a perfectly normal hearing ear and tell you what it sounds like. Not to mention having an audio and music background, I can speak in their language and be able to communicate in that way.”

Michael would spend a week in Zeng’s lab every month, flying back and forth from the Bay Area to Irvine. The researchers didn’t know what they were looking for, so they tried all kinds of sounds in a systematic manner. “We tried things from low to high frequencies, white noise, all kinds of frequencies to mask tinnitus. We tried all kinds of things. It took us a long, long time to figure something out,” Zeng said.

One day, Zeng and his colleagues played a low-frequency tone to Michael and asked him how loud it sounded. It was in normal speaking range, Michael said, but the big surprise was that he couldn’t hear his tinnitus at all. “We all just looked at each other,” he said. “I listened and listened, and I tried to hear it, and I said ‘I can’t pick it out.’ We all looked at each other and went, ‘Wow!’”

That initial discovery eventually led to a research project, funded by the American Tinnitus Association. Out of nearly 100 volunteers, only 20 finished the protocol. “It’s a very grueling experience for the patient,” Zeng said. “We didn’t know which sound works and which one doesn’t, so we just tried them all.”

The researchers tested 17 types of sounds in a range of frequencies, including sounds currently used as tinnitus therapies. “We were surprised that the white noise that clinicians have used most widely and for the longest time proved to be the least effective,” Zeng said.

Instead, they found that sound waves with their amplitude modulated — similar to the frequencies on an AM radio — worked to suppress tinnitus in 60 percent of the volunteers. The team published its results on April 23 in the Journal of the Association for Research in Otolaryngology.

Zeng said these patterned sounds might be serving to stimulate the brain’s auditory cortex. A steady sound, like the white noise of an empty radio frequency, might stimulate the brain at first, but then the brain gets used to it.

“If you have these modulated sounds, then the brain will continue to respond to it,” Zeng hypothesized. “In tinnitus, you hear something when nothing is there, so there’s some activity in the brain. Maybe these modulated sounds will disrupt abnormal activities, which is the basis of your tinnitus.”
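The modulated sounds Zeng describes can be sketched in a few lines of code: a carrier tone whose loudness envelope rises and falls at a fixed rate. The parameters below (a 6 kHz carrier standing in for a pitch-matched tinnitus frequency, a 40 Hz modulation rate) are illustrative assumptions, not values from the study.

```python
import math

def am_tone(carrier_hz, mod_hz, duration_s, sample_rate=44100, depth=1.0):
    """Amplitude-modulated sine tone, returned as a list of floats in [-1, 1].

    carrier_hz stands in for the patient's perceived tinnitus pitch;
    mod_hz is the rate at which the loudness envelope rises and falls;
    depth controls how deeply the envelope dips (1.0 = full modulation).
    """
    samples = []
    for n in range(int(duration_s * sample_rate)):
        t = n / sample_rate
        # Envelope oscillates at mod_hz; normalized so samples stay in [-1, 1].
        envelope = (1.0 + depth * math.sin(2 * math.pi * mod_hz * t)) / (1.0 + depth)
        samples.append(envelope * math.sin(2 * math.pi * carrier_hz * t))
    return samples

# Hypothetical example: a 6 kHz carrier modulated at 40 Hz for half a second.
tone = am_tone(carrier_hz=6000, mod_hz=40, duration_s=0.5)
```

Unlike steady white noise, such a tone never settles into a constant level, which is consistent with Zeng’s hypothesis that the brain keeps responding to it rather than habituating.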

“It’s a step forward but far away from a true cure,” Zeng said.

To bring the research to a wider group of patients, a venture capital firm called Allied Minds licensed the technology and formed the San Jose company SoundCure to commercialize it. In August 2011, SoundCure received clearance from the U.S. Food and Drug Administration for the Serenade system, a handheld device that plays the modulated sounds — called “S-Tones” — customized for each tinnitus patient. SoundCure formally launched the product at the American Academy of Audiology Meeting held in Boston in March.

SoundCure provides audiologists with software to test patients and to create sounds that are programmed onto the device, said company CEO Bill Perry. First, the audiologist identifies and tries to match the perceived pitch of the patient’s tinnitus and then modulates the amplitude of the sound wave. “Studies suggest it’s that modulation plus the frequency pitch match that creates brain activity to help reduce a patient’s perception of their tinnitus,” Perry said. The patient can listen to the S-Tones through a pair of small earphones whenever he or she needs relief.

The Serenade device can play four different tracks of sound, two of which are the S-Tones developed at UC Irvine. The other two tracks contain more traditional sound therapies. “The goal was to give the patient and the audiologist a complete sound therapy tool,” Perry said. Many patients are already using the device and report relief from the S-Tones where other remedies have failed.

What’s more, this gives researchers a way to obtain insight into tinnitus itself. The tones provide a non-invasive way to test and evaluate a patient’s tinnitus. “It’s not just a guessing game anymore,” Zeng said. In the future, it might be possible to diagnose and characterize a person’s tinnitus and find better ways to treat it.

Despite the integral part he played in the research, Michael’s tinnitus still bothers him. “I never have quiet,” he said. “It’s not part of my world anymore. I turn to the device for relief from time to time, but I still live with very loud tinnitus.”

However, he credits Zeng for helping him develop a healthy attitude about it. “In fact, it was his discussions with me that seemed to turn things around for me,” Michael said. “He helped me see the glass half-full. He’s one of my favorite people on the planet.”


How selective hearing works in the brain

UCSF research explains “cocktail party effect” — ability to tune in a single voice in a crowded room.

Edward Chang, UC San Francisco

The longstanding mystery of how selective hearing works — how people can tune in to a single speaker while tuning out their crowded, noisy environs — is solved this week in the journal Nature by two scientists from the University of California, San Francisco.

Psychologists have known for decades about the so-called “cocktail party effect,” a name that evokes the “Mad Men” era in which it was coined. It is the remarkable human ability to focus on a single speaker in virtually any environment — a classroom, sporting event or coffee bar — even if that person’s voice is seemingly drowned out by a jabbering crowd.

To understand how selective hearing works in the brain, UCSF neurosurgeon Edward Chang, M.D., a faculty member in the UCSF Department of Neurological Surgery and the Keck Center for Integrative Neuroscience, and UCSF postdoctoral fellow Nima Mesgarani, Ph.D., worked with three patients who were undergoing brain surgery for severe epilepsy.

Part of this surgery involves pinpointing the parts of the brain responsible for a patient’s disabling seizures. The UCSF epilepsy team finds those locales by mapping the brain’s activity over a week, with a thin sheet of up to 256 electrodes placed under the skull on the brain’s outer surface, or cortex. These electrodes record activity in the temporal lobe, home to the auditory cortex.

UCSF is one of the few leading academic epilepsy centers where these advanced intracranial recordings are done, and, Chang said, the ability to safely record from the brain itself provides unique opportunities to advance our fundamental knowledge of how the brain works.

“The combination of high-resolution brain recordings and powerful decoding algorithms opens a window into the subjective experience of the mind that we’ve never seen before,” Chang said.

In the experiments, patients listened to two speech samples played to them simultaneously in which different phrases were spoken by different speakers. They were asked to identify the words they heard spoken by one of the two speakers.

The authors then applied new decoding methods to “reconstruct” what the subjects heard by analyzing their brain activity patterns. Strikingly, they found that neural responses in the auditory cortex reflected only the targeted speaker’s speech. Their decoding algorithm could predict which speaker, and even which specific words, the subject was listening to based on those neural patterns. In other words, they could tell when the listener’s attention strayed to another speaker.
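In outline, deciding which speaker a listener is attending to can be reduced to template matching: estimate the stimulus from the neural recordings, then ask which speaker’s actual signal that estimate resembles more. The sketch below illustrates only that final comparison step; it is not the authors’ reconstruction algorithm, and it assumes a neural reconstruction has already been computed by some earlier stage.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def attended_speaker(reconstruction, speaker_a, speaker_b):
    """Label the attended speaker as whichever stimulus the neural
    reconstruction correlates with more strongly."""
    if pearson(reconstruction, speaker_a) >= pearson(reconstruction, speaker_b):
        return "A"
    return "B"
```

If the reconstruction tracks speaker A’s signal more closely than speaker B’s, the decoder labels A as the attended speaker, and a drift in that correlation over time would correspond to the “strayed attention” the algorithm detected.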

“The algorithm worked so well that we could predict not only the correct responses, but also even when they paid attention to the wrong word,” Chang said.

The new findings show that the representation of speech in the cortex does not simply reflect the entire external acoustic environment; instead, it represents just what we really want or need to hear.

They represent a major advance in understanding how the human brain processes language, with immediate implications for the study of impairment during aging, attention deficit disorder, autism and language learning disorders.

In addition, Chang, who is also co-director of the Center for Neural Engineering and Prostheses at UC Berkeley and UCSF, said that this technology may someday be used in neuroprosthetic devices that decode the intentions and thoughts of paralyzed patients who cannot communicate.

Revealing how our brains are wired to favor some auditory cues over others may even inspire new approaches to automating and improving how voice-activated electronic interfaces filter sounds in order to properly detect verbal commands.

How the brain can so effectively focus on a single voice is a problem of keen interest to the companies that make consumer technologies because of the tremendous future market for all kinds of electronic devices with voice-activated interfaces. While the voice recognition technologies that enable such interfaces as Apple’s Siri have come a long way in the last few years, they are nowhere near as sophisticated as the human speech system.

An average person can walk into a noisy room and have a private conversation with relative ease — as if all the other voices in the room were muted. In fact, said Mesgarani, an engineer with a background in automatic speech recognition research, the engineering required to separate a single intelligible voice from a cacophony of speakers and background noise is a surprisingly difficult problem.

Speech recognition, he said, is “something that humans are remarkably good at, but it turns out that machine emulation of this human ability is extremely difficult.”

The article, “Selective cortical representation of attended speaker in multi-talker speech perception” by Mesgarani and Chang appears in the April 19 issue of the journal Nature.

This work was funded by the National Institutes of Health and the Esther A. and Joseph Klingenstein Foundation.

UCSF is a leading university dedicated to promoting health worldwide through advanced biomedical research, graduate-level education in the life sciences and health professions, and excellence in patient care.
