Tag Archives: Communicate

Recordings show some ‘mute’ animals communicate vocally: study

Paris (AFP) – More than 50 animal species previously thought to be mute actually communicate vocally, according to a study published on Tuesday which suggested the trait may have evolved in a common ancestor over 400 million years ago.

The lead author of the study, evolutionary biologist Gabriel Jorgewich-Cohen, told AFP he first had the idea of recording apparently mute species while researching turtles in Brazil’s Amazon rainforest.

“When I went back home, I decided to start recording my own pets,” Jorgewich-Cohen said. That included Homer, a turtle he has had since childhood.

To his great excitement, he discovered that Homer and his other pet turtles were making vocal sounds.

So he started recording other turtle species, sometimes using a hydrophone, a microphone for recording underwater.

“Every single species I recorded was producing sounds,” said Jorgewich-Cohen, a researcher at the University of Zurich in Switzerland.

“Then we started questioning how many more animals that are normally considered mute produce sounds.”

As well as 50 species of turtle, the study published in the journal Nature Communications also included recordings from three “very strange animals” considered mute, he said.

They include a type of lungfish, which has gills as well as lungs that allow it to survive on land, and a species of caecilian — a group of amphibians resembling a cross between a snake and a worm.

The research team also recorded a rare type of reptile only found in New Zealand called a tuatara, the only surviving member of an order called Rhynchocephalia which once spanned the globe.

All the animals made vocal sounds such as clicks and chirps or tonal noises, even if they were not very loud or only made them a few times a day.

Common vocal ancestor

The research team combined their findings with data on the evolutionary history of acoustic communication for 1,800 other species.

They then used an analysis called “ancestral state reconstruction”, which calculates the probability of a shared link back through time.
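
What that involves can be seen in miniature. The sketch below is a toy illustration, not the study’s actual pipeline (which fitted far richer models to data from more than 1,800 species); it runs the standard pruning algorithm for a two-state trait, silent versus vocal, over a small invented tree with invented branch lengths and transition rate:

```python
# Toy ancestral state reconstruction for a binary trait (0 = silent,
# 1 = vocal) using Felsenstein's pruning algorithm with a symmetric
# two-state (Mk) model. Tree shape, branch lengths, and the rate are
# invented for illustration; they are not the study's data.
import numpy as np

RATE = 0.1  # assumed transition rate between states

def transition_matrix(t, q=RATE):
    """P[i, j] = probability of being in state j after time t, given i."""
    same = 0.5 * (1 + np.exp(-2 * q * t))
    diff = 0.5 * (1 - np.exp(-2 * q * t))
    return np.array([[same, diff], [diff, same]])

def conditional_likelihood(node):
    """Return [L(state=0), L(state=1)] for the subtree below `node`."""
    if "state" in node:                      # leaf with an observed trait
        L = np.zeros(2)
        L[node["state"]] = 1.0
        return L
    L = np.ones(2)
    for child, branch_length in node["children"]:
        L *= transition_matrix(branch_length) @ conditional_likelihood(child)
    return L

# Invented tree: ((turtle:1, tuatara:1):2, (lungfish:2, frog:2):1),
# with every tip scored as vocal (state 1).
tree = {"children": [
    ({"children": [({"state": 1}, 1.0), ({"state": 1}, 1.0)]}, 2.0),
    ({"children": [({"state": 1}, 2.0), ({"state": 1}, 2.0)]}, 1.0),
]}

root = conditional_likelihood(tree)
posterior = root / root.sum()               # flat prior on the root state
print(f"P(common ancestor was vocal) = {posterior[1]:.3f}")
```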

It had previously been thought that tetrapods — four-limbed animals — and lungfishes had evolved vocal communication separately.

“But now we show the opposite,” Jorgewich-Cohen said. “They come from the same place.”

“What we found is that the common ancestor of this group was already producing sounds, and communicating using those sounds intentionally,” Jorgewich-Cohen said.

The common ancestor lived at least 407 million years ago during the Palaeozoic era, the study said.

John Wiens — an evolutionary biology professor at the University of Arizona in the United States who was not involved in the research — said the suggestion that “acoustic communication arose in the common ancestor of lungfish and tetrapods is interesting and surprising”.

Wiens, who published a 2020 paper called “The origins of acoustic communication in vertebrates”, welcomed the new data for the additional species.

But he suggested the study might not “necessarily distinguish between animals making sounds and actual acoustic communication”.

Jorgewich-Cohen said the researchers had indeed set out to identify sounds animals made specifically for communicating, by comparing video and audio recordings to find matches for particular behaviour.

They also recorded the animals in different groups “so we could tell if there are sounds that are only produced in specific situations”, he said.

He acknowledged that some species were hard to study as they do not vocalise frequently and “tend to be shy”, adding that further research was needed.


Here’s why Ian’s track was hard to predict, and harder to communicate

When Justin Smith checked the weather on Tuesday, he wasn’t worried. The report from WINK News on DirecTV said Hurricane Ian was a threat to Tampa, some 100 miles to the north of Fort Myers Beach, where he was honeymooning with his wife, Karinna Smith.

As someone who had ridden out Hurricanes Ike and Harvey, the 38-year-old southeast Texas resident said he was confident they could weather the storm at their hotel, the Lovers Key Resort.

“We were there on our honeymoon,” Smith said. “We were trying to have a good time, not be glued to the TV watching the weather.”

But that meant they missed messages that much of southwest Florida, not just Tampa, was under threat of hurricane conditions as Ian approached. Their first warning that the hurricane was making a turn directly toward Fort Myers Beach was a note posted in the empty hotel lobby Tuesday night, indicating that the hotel was being evacuated.

As conditions became catastrophic on Wednesday, the Smiths survived by taking refuge in a hotel stairwell.

The danger faced by the couple and many others who did not evacuate underscores the challenges of communicating forecasts for storms like Ian. Research shows people often cling to an initial version of a forecast, missing key updates and changing threats. And meteorologists can struggle to convey the uncertainty in their predictions of a storm’s path and potential, in part because hurricane forecast cones and other tools of communication aren’t as useful as they could be for storms like Ian, whose track toward Florida was difficult to pin down even a day ahead of landfall.

Video taken on Sept. 29, shows Hurricane Ian slamming Fort Myers, Fla., with destructive winds and devastating flooding. (Video: Max Olsen)

The forecast uncertainty even challenged officials charged with making evacuation decisions in Lee County, where Ian came ashore. They now face questions as to whether they delayed too long.

The National Oceanic and Atmospheric Administration is working to improve communication of uncertain, fast-changing threats, but the task is not straightforward. It involves refining messages, optimizing graphical information to make complex science clear to very diverse audiences, and keeping those audiences apprised of important changes. The latter was critical in the case of Ian, since small deviations in the predicted track would significantly alter which communities would be affected.

“There were a lot of alternate futures that were possible,” said Kim Klockow-McClain, a research scientist at NOAA’s National Severe Storms Laboratory. “Communication is not as simple when there are a lot of possible outcomes.”

Asked to review how well it communicated storm risks and uncertainties with Ian, National Hurricane Center officials deferred comment to NOAA risk communication experts.

Gina Eosco, a program manager and social scientist at the agency, said it can be a challenge for forecasters to overcome what she called “optimistic bias,” when people focus too much on early forecasts suggesting low risks of storm impacts, and miss updates signaling new and changing hazards.

“It can trick your brain into thinking you can relax and you may not pay as much attention to the forecast,” Eosco said. “And so it’s possible people didn’t hear that message.”

Along with that bias toward optimism, past experiences when dire forecasts did not come to pass can also prevent people from properly interpreting the realities of forecasts and storm conditions, she added.

Laura Myers, a senior research scientist at the University of Alabama who studies communication around disasters, applauded the Hurricane Center’s work tracking the storm and broadcasting its threats. And she echoed Eosco, saying people often “anchor” their expectations to early forecasts and then are caught unaware when storm predictions change.

“[Forecasters] know that people are going to cling to that and then walk away and not come back to the information,” Myers said. People form their own “scale of risk aversion” and go back to their daily lives despite broadcast meteorologists encouraging them to check back in for updates, according to Myers.

“If they are shocked about impact, it’s because they anchored,” she said.

That said, the Hurricane Center’s archive of Ian forecasts shows that, as its predictions of the storm’s path shifted, meteorologists did not begin to emphasize risks to the area around the eventual landfall point until about a day in advance.

It wasn’t until Tuesday morning, while Ian was passing over western Cuba, that the Hurricane Center extended a hurricane warning southward to cover the stretch of southwestern Florida coastline that would soon be devastated. Even then, the centerline for the predicted storm track passed through Tampa and wasn’t over Fort Myers until 11 p.m. that night.

In the days before that, what would become ground zero for Ian’s devastation was at the edge of areas the Hurricane Center warned were in the storm’s path. Areas to its south, including Naples — which endured a record ocean surge — were left out.

That meant some, like the Smiths, were caught unaware by Ian’s intensity. Smith said he received none of the National Weather Service text alerts that are supposed to broadcast imminent hazards to any cellphones in their path. And he said he got no alarm from hotel staff.

“They didn’t knock on our door,” Justin Smith told The Washington Post. “They didn’t call that room. They didn’t do anything. By the time that we found out we didn’t have a rental car or anything like that, so we were kind of stuck.”

While some meteorologists suggested it was a failure of the Hurricane Center, others stressed that such criticism reflects a misunderstanding of what the forecast cone actually means. There is a 60 to 70 percent chance a storm’s eye will remain within the cone’s boundaries — meaning that in roughly one out of three cases, the storm will move outside of the cone.

The problem is that the forecast cone is not well-designed for unpredictable storms like Ian, Klockow-McClain said. The width of the cone is based on the Hurricane Center’s past error in storm forecast track predictions, but with Ian, that meant an underestimation of potential error.

“The problem is, with that graphic, we’re communicating about how we’ve done in the past. We’re not saying a whole lot about the uncertainty of the current situation,” Klockow-McClain said.
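
That past-performance recipe is simple enough to sketch. Public descriptions of the cone say its circles are sized so that roughly two-thirds of recent historical track errors would fall inside them; the code below applies that rule to invented error samples, so every number is illustrative:

```python
# Sizing forecast-cone circles from past track errors, in the spirit of
# the Hurricane Center's published approach: at each lead time, pick a
# radius that would have contained about two-thirds of recent errors.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical track errors (nautical miles) by lead time.
historical_errors_nm = {
    12: rng.gamma(shape=2.0, scale=13.0, size=500),
    24: rng.gamma(shape=2.0, scale=20.0, size=500),
    48: rng.gamma(shape=2.0, scale=35.0, size=500),
    72: rng.gamma(shape=2.0, scale=50.0, size=500),
}

for lead_hr, errors in historical_errors_nm.items():
    radius = np.percentile(errors, 67)   # ~2/3 of past errors fit inside
    print(f"{lead_hr:>3} h cone radius ≈ {radius:5.1f} nm")

# The catch described above: the radius reflects average past skill, not
# the spread of possibilities for the current storm, so a hard-to-predict
# storm like Ian can easily stray beyond it.
```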

Researchers say the challenge is engaging with the public so that people understand the broader potential for hurricane impacts even outside the forecast cone. Both Eosco and Myers suggested that more localized warnings could better help people evaluate their personal risk.

“Our research has indicated that most people appreciate the worst-case scenario,” Myers said. “They appreciate knowing there is a chance that they would be included in the impacts and what the impacts might be.”

Myers said more should be done to educate the public about hurricane meteorology and risks outside of active weather events, so that when storms strike, they aren’t overwhelmed with too much information.

“If you don’t do that in advance and don’t do it in as many different ways as you possibly can, you’re going to have issues with understanding,” Myers said. Even then, it is not guaranteed that the warnings will be properly interpreted.

NOAA has invested heavily in efforts to narrow gaps in communication and improve public perception and understanding of forecasts, Eosco said. Before and after storms, NOAA conducts a multi-wave survey project to gauge how well people understood the risks and what actions they took during a hurricane, she said.

And that goes along with Hurricane Center work to improve graphics and messaging around hurricane risks, including adjusting forecast cone images to include wind field sizes so people understand how far dangerous conditions will extend. And the Center has moved to stress that forecast cones are fallible, and that risks extend throughout them and beyond them, though that message doesn’t always get across to the wider public.

“I’m thrilled that if there has to be a hurricane, that we have the ability to learn something from it so that we should improve our communication for future storms,” Eosco said. “If we can find an opportunity of hope here to learn something from it so we can improve such situations and reduce societal impact, that is the type of opportunity NOAA wants to take.”

Meena Venkataramanan contributed to this report.




How Neurons Build and Maintain Their Capacity to Communicate

Summary: Researchers reveal how neurons set up and sustain the vital infrastructure that allows for seamless neurotransmission.

Source: Picower Institute for Learning and Memory

The nervous system works because neurons communicate across connections called synapses. They “talk” when calcium ions flow through channels into “active zones” that are loaded with vesicles carrying molecular messages.

The electrically charged calcium causes vesicles to “fuse” to the outer membrane of presynaptic neurons, releasing their communicative chemical cargo to the postsynaptic cell.

In a new study, scientists at The Picower Institute for Learning and Memory at MIT provide several revelations about how neurons set up and sustain this vital infrastructure.

“Calcium channels are the major determinant of calcium influx, which then triggers vesicle fusion, so it is a critical component of the engine on the presynaptic side that converts electrical signals to chemical synaptic transmission,” said Troy Littleton, senior author of the new study in eLife and Menicon Professor of Neuroscience in MIT’s Departments of Biology and Brain and Cognitive Sciences.

“How they accumulate at active zones was really unclear. Our study reveals clues into how active zones accumulate and regulate the abundance of calcium channels.”

Neuroscientists have wanted these clues. One reason is that understanding this process can help reveal how neurons change how they communicate, an ability called “plasticity” that underlies learning and memory and other important brain functions.

Another is that drugs such as gabapentin, which treats conditions as diverse as epilepsy, anxiety and nerve pain, bind a protein called alpha2delta that is closely associated with calcium channels. By revealing more about alpha2delta’s exact function, the study better explains what those treatments affect.


“Modulation of the function of presynaptic calcium channels is known to have very important clinical effects,” Littleton said. “Understanding the baseline of how these channels are regulated is really important.”

MIT postdoc Karen Cunningham led the study, which was her doctoral thesis work in Littleton’s lab. Using the model system of fruit fly motor neurons, she employed a wide variety of techniques and experiments to show for the first time the step-by-step process that accounts for the distribution and upkeep of calcium channels at active zones.

A cap on Cac

Cunningham’s first question was whether calcium channels are necessary for active zones to develop in larvae. The fly calcium channel gene (called “cacophony,” or Cac) is so important that flies literally can’t live without it. So rather than knocking out Cac across the fly, Cunningham used a technique to knock it out in just one population of neurons. By doing so, she was able to show that even without Cac, active zones grow and mature normally.

Using another technique that artificially prolongs the larval stage of the fly, she was also able to see that, given extra time, the active zone continues to build up its structure with a protein called BRP, but that Cac accumulation ceases after the normal six days.

Cunningham also found that moderate increases or decreases in the supply of available Cac in the neuron did not affect how much Cac ended up at each active zone. Even more curious, she found that while the Cac amount did scale with each active zone’s size, it barely budged if she took away a lot of the BRP in the active zone. Indeed, for each active zone, the neuron seemed to enforce a consistent cap on the amount of Cac present.

“It was revealing that the neuron had very different rules for the structural proteins at the active zone like BRP that continued to accumulate over time, versus the calcium channel that was tightly regulated and had its abundance capped,” Cunningham said.

Regular refresh

The team’s model shows the factors that regulate Cac abundance at active zones. Active zone scaffold development and Cac delivery via alpha2delta increase it, while turnover keeps a lid on it; Cac biosynthesis barely increases abundance.

The findings showed there must be factors other than Cac supply or changes in BRP that regulate Cac levels so tightly. Cunningham turned to alpha2delta. When she genetically manipulated how much of that was expressed, she found that alpha2delta levels directly determined how much Cac accumulated at active zones.

In further experiments, Cunningham was also able to show that alpha2delta’s ability to maintain Cac levels depended on the neuron’s overall Cac supply. That finding suggested that rather than controlling the Cac amount at active zones by stabilizing it, alpha2delta likely functioned upstream, during Cac trafficking, to supply and resupply Cac to active zones.

Cunningham used two different techniques to watch that resupply happen, producing measurements of its extent and its timing. She chose a moment after a few days of development to image active zones and measure Cac abundance to ascertain the landscape. Then she bleached out that Cac fluorescence to erase it.

After 24 hours, she visualized Cac fluorescence anew to highlight only the new Cac that was delivered to active zones over that 24 hours. She saw that over that day there was Cac delivery across virtually all active zones, but that one day’s work was indeed only a fraction compared to what had built up over several days before.

Moreover, she could see that the larger active zones accrued more Cac than smaller ones. And in flies with mutated alpha2delta, there was very little new Cac delivery at all.

If Cac channels were indeed constantly being resupplied, Cunningham wanted to know at what pace they were removed from active zones.

The more scientists knocked out a protein called alpha2delta with different manipulations (right two columns), the less Cac calcium channel accrued in synaptic active zones of a fly neuron (brightness and number of green dots) compared to unaltered controls (left column). Credit: Littleton Lab/MIT Picower Institute

To determine that, she used a labeling technique based on a photoconvertible protein called Maple, tagged to the Cac protein, that allowed her to change its color with a flash of light at a time of her choosing. That way she could first see how much Cac accumulated by a certain time (shown in green) and then flash the light to turn that Cac red.

When she checked back five days later, about 30 percent of the red Cac had been replaced with new green Cac, suggesting 30 percent turnover. When she reduced Cac delivery by mutating alpha2delta or reducing Cac biosynthesis, Cac turnover stopped. That means a significant amount of Cac is turned over each day at active zones and that the turnover is prompted by new Cac delivery.
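
As a back-of-the-envelope illustration (our arithmetic, not a calculation from the paper), replacing 30 percent of the channels in five days implies a daily turnover rate of roughly 7 percent if turnover is assumed to be simple and first-order:

```python
# Convert the reported photoconversion result (about 30% of old, red Cac
# replaced by new, green Cac over five days) into an implied daily
# turnover rate, assuming first-order (exponential) turnover.
replaced_fraction = 0.30    # fraction of red Cac replaced over the interval
interval_days = 5

remaining = 1 - replaced_fraction                  # 70% of old channels left
daily_rate = 1 - remaining ** (1 / interval_days)  # per-day replacement rate

print(f"Implied turnover ≈ {daily_rate:.1%} of active-zone Cac per day")
# -> about 6.9% per day under this assumed model
```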

Littleton said his lab is eager to build on these results. Now that the rules of calcium channel abundance and replenishment are clear, he wants to know how they differ when neurons undergo plasticity—for instance when new incoming information requires neurons to adjust their communication to scale up or down synaptic communication.


He said he is also eager to track individual calcium channels as they are made in the cell body and then move down the neural axon to the active zones, and he wants to determine what other genes may affect Cac abundance.

In addition to Cunningham and Littleton, the paper’s other authors are Chad Sauvola and Sara Tavana.

Funding: The National Institutes of Health and the JPB Foundation provided support for the research.

About this neuroscience research news

Author: David Orenstein
Source: Picower Institute for Learning and Memory
Contact: David Orenstein – Picower Institute for Learning and Memory
Image: The image is credited to Littleton Lab/MIT Picower Institute

Original Research: Open access.
“Regulation of presynaptic Ca2+ channel abundance at active zones through a balance of delivery and turnover” by Troy Littleton et al. eLife


Abstract

Regulation of presynaptic Ca2+ channel abundance at active zones through a balance of delivery and turnover

Voltage-gated Ca2+ channels (VGCCs) mediate Ca2+ influx to trigger neurotransmitter release at specialized presynaptic sites termed active zones (AZs). The abundance of VGCCs at AZs regulates neurotransmitter release probability (Pr), a key presynaptic determinant of synaptic strength. Although biosynthesis, delivery and recycling cooperate to establish AZ VGCC abundance, experimentally isolating these distinct regulatory processes has been difficult.

Here we describe how the AZ levels of Cacophony (Cac), the sole VGCC mediating synaptic transmission in Drosophila, are determined.

We also analyzed the relationship between Cac, the conserved VGCC regulatory subunit α2δ, and the core AZ scaffold protein Bruchpilot (BRP) in establishing a functional AZ. We find Cac and BRP are independently regulated at growing AZs, as Cac is dispensable for AZ formation and structural maturation, and BRP abundance is not limiting for Cac accumulation. Additionally, AZs stop accumulating Cac after an initial growth phase, whereas BRP levels continue to increase given extended developmental time. AZ Cac is also buffered against moderate increases or decreases in biosynthesis, whereas BRP lacks this buffering.

To probe mechanisms that determine AZ Cac abundance, intravital FRAP and Cac photoconversion were used to separately measure delivery and turnover at individual AZs over a multi-day period. Cac delivery occurs broadly across the AZ population, correlates with AZ size, and is rate-limited by α2δ.

Although Cac does not undergo significant lateral transfer between neighboring AZs over the course of development, Cac removal from AZs does occur and is promoted by new Cac delivery, generating a cap on Cac accumulation at mature AZs.

Together these findings reveal how Cac biosynthesis, synaptic delivery, and recycling set the abundance of VGCCs at individual AZs throughout synapse development and maintenance.


Paralysed man uses brain implant to communicate first words in three months: ‘I want a beer’

A completely paralysed man, who was left unable to communicate for months after losing the ability to even move his eyes, has used a brain implant to ask his caregivers for a beer.

Composing sentences at a rate of just one character per minute, the man also asked to listen to the band Tool “loud”, requested a head massage from his mother, and ordered a curry – all through the power of thought.

The man, who is now 36, had two square electrode arrays surgically implanted into his brain to facilitate communication in March 2019 after being left in a locked-in state as a result of amyotrophic lateral sclerosis (ALS).

People suffering from the progressive neurodegenerative disease have an average life expectancy after diagnosis of two to five years, though they can live much longer. (The late physicist Stephen Hawking lived another 55 years after his diagnosis, relying towards the end of his life on a communication device controlled by a single cheek muscle.)

Until now, a brain implant had not been tested on a completely locked-in patient, and it was not known whether communication was even possible for people who had lost all voluntary muscular control.

“Ours is the first study to achieve communication by someone who has no remaining voluntary movement and hence for whom the BCI is now the sole means of communication,” said Dr Jonas Zimmermann, a senior neuroscientist at the Wyss Center.

“This study answers a long-standing question about whether people with complete locked-in syndrome – who have lost all voluntary muscle control, including movement of the eyes or mouth – also lose the ability of their brain to generate commands for communication.”

Working with researchers at the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland, the ALS patient consented to having the brain implant fitted when he still had the ability to use eye movement to communicate in 2018.

Two microelectrode arrays, each 3.2mm square, were inserted into the surface of the motor cortex in the frontal lobe of the brain.

(Wyss Center for Bio Neuroengineering)

It took three months of unsuccessful attempts before a configuration was achieved that allowed the patient to use brain signals to produce a binary response to a speller program, answering ‘yes’ or ‘no’ when presented with letters.

It took another three weeks to produce the first sentences, and over the next year the patient produced dozens of sentences.

One of his earliest communications concerned his care, asking for his head to be kept in an elevated and straight position when there were visitors in the room.

He also requested different kinds of food to be fed through his tubes, including goulash soup and sweet pea soup. “For food I want to have curry with potato then Bolognese and potato soup,” one request stated.

He was also able to interact with his 4-year-old son and wife, generating the message: “I love my cool son.”

The research was detailed in a study published this week in the journal Nature Communications.

The study, titled ‘Spelling interface using intracortical signals in a completely locked-in patient enabled via auditory neurofeedback training’, noted that the BCI communication system can be used in a patient’s home, with some sessions even being performed remotely via the patient’s laptop.

The patient was provided auditory feedback of neural activity levels through a nearby speaker, which allowed him to adjust frequencies to generate ‘yes’ and ‘no’ responses.

(Wyss Center/ Nature Communications)

The scientists behind the brain-computer interface technology are now seeking funding to provide similar implants for other people with ALS, which will cost close to $500,000 over the first two years of use.

“This is an important step for people living with ALS who are being cared for outside the hospital environment,” said George Kouvas, chief technology officer at the Wyss Center.

“This technology, benefiting a patient and his family in their own environment, is a great example of how technological advances in the BCI field can be translated to create direct impact.”


Brain Implant Allows Fully Paralyzed Patient to Communicate

An experimental brain-computer interface has allowed a man with amyotrophic lateral sclerosis (ALS) who was unable to speak or move to communicate.

Using a commercially available implant and newly designed software, the patient, who was in the advanced stages of Lou Gehrig’s disease and unable to move his eyes, was able to interact with researchers and caregivers, requesting goulash, beer, and music from the band Tool, thanking the researchers who developed the technology and inviting his 4-year-old son to watch a Disney film.

The investigators note the study shows for the first time that communication is possible in patients in a completely locked-in state (CLIS) and offers hope for a better quality of life in this population.

“It should encourage them to live after artificial respiration and to ask for brain-computer interfaces before they become CLIS,” study investigator Niels Birbaumer, PhD, a professor emeritus of the University of Tübingen, Tübingen, Germany, told Medscape Medical News.

The study was published online March 22 in Nature Communications.

Although the findings appear promising, they build on previous research that was the subject of a 2019 investigation by the largest grant-funding agency in Germany. This controversy prompted the institute that led the current research to appoint an independent expert to audit and monitor the new study.

Mechanism a “Mystery”

Use of brain-computer interface (BCI) technology to allow ALS patients to communicate has increased in recent years. BCIs capture brain signals, transmit them to a computer and convert them into a command that the computer carries out.

Previous research shows patients with ALS who retain eye movement and control have been able to use BCIs to communicate. However, until now, the technology has not worked as well in CLIS patients, who have full-body paralysis.

In 2019, German and Swiss researchers implanted two 64-microelectrode arrays in the brain of a 34-year-old patient who was diagnosed with ALS in 2015.

The electrodes measure neuronal activity while an amplifier located on the outside of the patient’s skull amplifies the signals to a computer. Software created by the research team decodes the signals and translates them into commands.

Using an auditory feedback system, the patient was able to use his mind to modulate the pitch of a tone to either high (meaning “yes”) or low (meaning “no”). Just how the brain does this is a mystery, Birbaumer said.
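
A minimal sketch of that yes/no mechanic, with simulated firing rates and invented thresholds standing in for the real decoder, might look like this:

```python
# Map a (simulated) neural firing rate onto the pitch of a feedback tone,
# then read a sustained high or low pitch out as "yes" or "no". All
# constants and the firing-rate simulation are illustrative assumptions.
import numpy as np

BASE_HZ, SPAN_HZ = 300.0, 200.0   # pitch range of the feedback tone
HIGH, LOW = 450.0, 350.0          # assumed decision thresholds

def firing_to_pitch(rate_hz, lo=5.0, hi=40.0):
    """Map a smoothed firing rate onto the feedback tone's frequency."""
    x = np.clip((rate_hz - lo) / (hi - lo), 0.0, 1.0)
    return BASE_HZ + SPAN_HZ * x

def decode(rates):
    """Average pitch over a trial window -> 'yes', 'no', or undecided."""
    pitch = np.mean([firing_to_pitch(r) for r in rates])
    if pitch >= HIGH:
        return "yes"
    if pitch <= LOW:
        return "no"
    return None

# Simulated trial: the patient drives firing rates up to answer "yes".
trial = np.random.default_rng(1).normal(loc=35.0, scale=3.0, size=50)
print(decode(trial))  # -> yes
```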

A speller program reads letters aloud, first in groups and then individually. When a group contained letters the patient needed to spell a word, he used auditory feedback to select the high-pitch tone.

Initially, the patient was able to correctly spell his name. Ultimately, he was able to form complete sentences. The patient correctly spelled words on 44 of the 107 days in that phase of the experiment, spelling an average of just one character per minute.

Still, the researchers note he was able to interact with his caretakers, family, and researchers, even offering input on changes to make the device more effective.

Controversial History

In 2017, Birbaumer and Ujwal Chaudhary, PhD, who is the lead author on this current study, published a study in PLOS Biology. As reported at the time by Medscape Medical News, that research analyzed a brain-monitoring technique that the scientists claimed enabled patients with ALS who were completely locked in to answer yes or no questions correctly.

Allegations from a whistleblower at the University of Tübingen, where Birbaumer was a senior professor and Chaudhary was a postdoctoral researcher, prompted an investigation by the Deutsche Forschungsgemeinschaft, or German Research Foundation (DFG).

The whistleblower claimed that the 2017 paper and a second study published in 2019 contained incomplete data and misrepresented the findings. The DFG investigation found evidence of scientific misconduct and required that Birbaumer return the grant he had received for the research. The agency also banned Birbaumer from applying for grants or serving as a grant reviewer for 5 years. Chaudhary was banned for 3 years. PLOS Biology later retracted the papers.

Both researchers have rejected the allegations and have reportedly sued the German Research Foundation.

“We have no information about the status of our lawsuit against the DFG; it’s still pending,” Birbaumer told Medscape Medical News. “I hope they investigate our present study because the study of 2017 they did not investigate carefully enough.”

Results “Not Stunningly Good”

The controversial history prompted the Wyss Center in Geneva, Switzerland, which led this new study, to seek out an independent BCI expert to audit and monitor the study.

Nick Ramsey, PhD, a professor of cognitive neuroscience at the Brain Center of the University Medical Center Utrecht, the Netherlands, agreed to take on the assignment in March 2020.

Ramsey has also conducted research on BCI in patients with ALS, but his work has not included patients in CLIS.

“I judged the study to be compliant with universal standards of scientific integrity,” Ramsey told Medscape Medical News. “I am confident that the data and results presented in the paper are valid and will withstand academic and medical scrutiny.”

Commenting on the new findings, Ramsey noted that the results of the study are “not stunningly good, as the user could only communicate during a limited number of days, and even then with considerable slowness.” However, he added that the study does provide proof of principle that communication is possible in CLIS patients.

“The question remains whether a BCI implant continues to work well in these patients, as there are some indications that people in such a state may lose their mental capabilities within months or a few years as a result of the disease and can thus no longer generate a wish to communicate,” Ramsey said.

Responding to a query from Medscape Medical News, a spokesperson for Nature Communications declined to comment on the new study, but said that journal editors “are alert to controversies within each field and take care when considering submissions during the peer-review process.”

“We have rigorous policies to safeguard the integrity of the research we publish,” the spokesperson continued, “including to ensure that research has been conducted to a high ethical standard and is reported transparently.”

The research was funded by Wyss Center for Bio and Neuroengineering, Geneva and Deutsche Forschungsgemeinschaft. The authors have disclosed no relevant financial relationships. Ramsey received payment from the Wyss Center for his advisory role in this project.

Nature Communications. Published online March 22, 2022.





Brain Implant Enables Completely ‘Locked-In’ Man to Communicate Again

A pair of brain microchips could one day allow those in ‘pseudocomas’ to communicate whatever they want, a new breakthrough suggests.

In a first, a 34-year-old patient who lacked even the most subtle of muscle twitches has used the technology to share a few precious words with his family, using little more than an intent to move his eyes.

 

Similar devices have previously given patients with the fast-progressing condition amyotrophic lateral sclerosis (ALS) the means to send simple messages with extremely limited movements, but researchers say the severity of the man’s condition here represents a significant advancement for the technology. 

“To our knowledge, ours is the first study to achieve communication by someone who has no remaining voluntary movement and hence for whom the BCI is now the sole means of communication,” says neuroscientist Jonas Zimmermann from the Wyss Center in Switzerland.

A pseudocoma is also known as ‘locked-in’ syndrome, because while these patients cannot walk or talk, they are still very much conscious, capable of seeing, hearing, tasting, smelling, thinking, and feeling.

Without the ability to move the mouth or the tongue, however, communication is severely limited. If the eyes can still move, patients can sometimes blink or ‘point’ with their pupils to make themselves understood, but in some advanced cases, even that basic form of communication is out of reach.

The man in this case was one such patient. Within months of diagnosis with the condition, he had already lost the ability to walk and talk. A year later, the patient was placed on a ventilator to help him breathe. A year after that, he lost the ability to fix his gaze.

 

The extreme isolation ultimately led the patient and his family to agree to a cutting-edge experiment.

Before the patient lost the ability to move his eyes, he consented to a surgical procedure that would implant two microchips into the part of his brain that controls muscle movement.

Each chip was equipped with 64 needle-like electrodes, which could pick up on his conscious attempts to move. That brain activity was then sent to a computer, which translated the impulses into a ‘yes’ or ‘no’ signal.

In the past, similar brain implants have allowed some patients with ALS to communicate via a computer typing program. But this is the first time an ALS patient without the ability to so much as use their eyes has been able to do something similar. 

“People have really doubted whether this was even feasible,” Mariska Vansteensel, a brain-computer interface researcher who was not involved in the study, told Science.

(Chaudhary et al., Nature Communications, 2022)

Above: The experimental setup of the brain implants, plus the biofeedback device and the spelling program.

The technique took months of training, but once the patient learned how to control the firing rates of his brain signals, he was able to respond to a spelling program and select specific letters, spoken out loud by the program, to form words and even sentences.

 

Each letter took about a minute for the patient to respond to, making for slow progress, but nonetheless, for the first time in a long time, the device allowed this man to express himself.

The accuracy of the technology is still not perfect. The patient could only signal ‘yes’ or ‘no’ about 80 percent of the time, with about 80 percent accuracy. Some days he could only generate words, not sentences.

“These apparent poor performances are primarily due to the completely auditory nature of these systems, which are intrinsically slower than a system based on visual feedback,” the authors write in their study.

The first phrase the ALS patient successfully spelled out was a ‘thank you’ to the lead neurobiologist on his case, Niels Birbaumer.

Then came a slew of requests for his care, like “Mom head massage” and “I would like to listen to the album by Tool [a band] loud”.

Then, 247 days after the surgical procedure, the patient gave his verdict on the device: “Boys, it works so effortlessly”. 

On day 251 he sent a message to his kid: “I love my cool son”. He then asked his child to watch a Disney film with him. 

 

On day 462, the patient expressed that his “biggest wish is a new bed”, and that the next day he could go with his loved ones to a barbecue.

“If someone is forming sentences like this, I would say it is positive. Even if it is not positive, it is not negative,” first author of the study Ujwal Chaudhary told The Guardian.

“One time when I was there, he said, ‘Thank you for everything, sister’ [to his sister, who helps care for him]. It was an emotional moment.”

The ability for someone in a pseudocoma to communicate obviously comes with a whole slew of ethical considerations.

After all, who consents to the initial insertion? And once a person has learned to communicate again, can they speak for themselves and the future of their care? How accurate do these systems need to be before we can adequately interpret what patients are telling us?

We don’t have rules or outlines for this type of technology quite yet, but if the device turns out to be useful for other patients, we will need to start confronting these quandaries.

Giving advanced ALS patients their voices back could be a huge medical breakthrough and a great relief for individuals and their families. How we respond to those voices is up to us.

The study was published in Nature Communications.

 


Brain Implant Allows Fully Paralyzed Patient to Communicate

In 2020 Ujwal Chaudhary, a biomedical engineer then at the University of Tübingen and the Wyss Center for Bio and Neuroengineering in Geneva, watched his computer with amazement as an experiment that he had spent years on revealed itself. A 34-year-old paralyzed man lay on his back in the laboratory, his head connected by a cable to a computer. A synthetic voice pronounced letters in German: “E, A, D…”

The patient had been diagnosed a few years earlier with amyotrophic lateral sclerosis, which leads to the progressive degeneration of brain cells involved in motion. The man had lost the ability to move even his eyeballs and was entirely unable to communicate; in medical terms, he was in a completely locked-in state.

Or so it seemed. Through Dr. Chaudhary’s experiment, the man had learned to select — not directly with his eyes but by imagining his eyes moving — individual letters from the steady stream that the computer spoke aloud. Letter by painstaking letter, one every minute or so, he formulated words and sentences.

“Wegen essen da wird ich erst mal des curry mit kartoffeln haben und dann bologna und dann gefuellte und dann kartoffeln suppe,” he wrote at one point: “For food I want to have curry with potato then Bolognese and potato soup.”

Dr. Chaudhary and his colleagues were dumbstruck. “I myself could not believe that this is possible,” recalled Dr. Chaudhary, who is now managing director at ALS Voice gGmbH, a neurobiotechnology company based in Germany, and who no longer works with the patient.

The study, published on Tuesday in Nature Communications, provides the first example of a patient in a fully locked-in state communicating at length with the outside world, said Niels Birbaumer, the leader of the study and a former neuroscientist at the University of Tübingen who is now retired.

Dr. Chaudhary and Dr. Birbaumer conducted two similar experiments in 2017 and 2019 on patients who were completely locked-in and reported that they were able to communicate. Both studies were retracted after an investigation by the German Research Foundation concluded that the researchers had only partially recorded the examinations of their patients on video, had not appropriately shown details of their analyses and had made false statements. The German Research Foundation, finding that Dr. Birbaumer committed scientific misconduct, imposed some of its most severe sanctions, including a five-year ban on submitting proposals and serving as a reviewer for the foundation.

The agency found that Dr. Chaudhary had also committed scientific misconduct and imposed the same sanctions for a three-year period. Both he and Dr. Birbaumer were asked to retract their two papers, and they declined.

The investigation came after a whistle-blower, Martin Spüler, a researcher, raised concerns about the two scientists in 2018.

Dr. Birbaumer stood by the conclusions and has taken legal action against the German Research Foundation. The results of the lawsuit are expected to be published in the next two weeks, said Marco Finetti, a spokesman for the German Research Foundation. Dr. Chaudhary says his lawyers expect to win the case.

The German Research Foundation has no knowledge of the publication of the current study and will investigate it in the coming months, Mr. Finetti said. In an email, a representative for Nature Communications who asked not to be named declined to comment on the details of how the study was vetted but expressed confidence with the process. “We have rigorous policies to safeguard the integrity of the research we publish, including to ensure that research has been conducted to a high ethical standard and is reported transparently,” the representative said.

“I would say it is a solid study,” said Natalie Mrachacz-Kersting, a brain-computer interface researcher at the University of Freiburg in Germany. She was not involved in the study and was aware of the previously retracted papers.

But Brendan Allison, a researcher at the University of California, San Diego, expressed reservations. “This work, like other work by Birbaumer, should be taken with a massive mountain of salt given his history,” Dr. Allison said. He noted that in a paper published in 2017, his own team had described being able to communicate with completely locked-in patients with basic “yes” or “no” answers.

The results hold potential promise for patients in similarly unresponsive situations, including minimally conscious and comatose states, as well as the rising number of people diagnosed with ALS worldwide every year. That number is projected to reach 300,000 by 2040.

“It’s a game-changer,” said Steven Laureys, a neurologist and researcher who leads the Coma Science Group at the University of Liège in Belgium and was not involved in the study. The technology could have ethical ramifications in discussions surrounding euthanasia for patients in locked-in or vegetative states, he added: “It’s really great to see this moving forward, giving patients a voice” in their own decisions.

Myriad methods have been used to communicate with unresponsive patients. Some involve basic pen-and-paper methods devised by family relatives. In others, a caregiver points to or speaks the names of items and looks for microresponses from the patient — blinks or finger twitches.

In recent years a new method has taken center stage: brain-computer interface technologies, which aim to translate a person’s brain signals into commands. Research institutes, private companies and entrepreneurial billionaires like Elon Musk have invested heavily in the technology.

The results have been mixed but compelling: patients moving prosthetic limbs using only their thoughts, and those with strokes, multiple sclerosis and other conditions communicating once again with loved ones.

What scientists have been unable to do until now, however, is communicate extensively with people like the man in the new study who displayed no movements whatsoever.

In 2017, before becoming totally locked-in, the patient had used eye movements to communicate with his family. Anticipating that he would soon lose even this ability, the family asked for an alternative communication system and approached Dr. Chaudhary and Dr. Birbaumer, a pioneer in the field of brain-computer interface technology, both of whom worked nearby.

With the man’s approval, Dr. Jens Lehmberg, a neurosurgeon and an author on the study, implanted two tiny electrodes in regions of the man’s brain that are involved in controlling movement. Then, for two months, the man was asked to imagine moving his hands, arms and tongue to see if these would generate a clear brain signal. But the effort yielded nothing reliable.

Dr. Birbaumer then suggested using auditory neurofeedback, an unusual technique by which patients are trained to actively manipulate their own brain activity. The man was first presented with a note — high or low, corresponding to yes or no. This was his “target tone” — the note he had to match.

He was then played a second note, which mapped onto brain activity that the implanted electrodes had detected. By concentrating — and imagining moving his eyes, to effectively dial his brain activity up or down — he was able to change the pitch of the second tone to match the first. As he did so, he gained real-time feedback of how the note changed, allowing him to heighten the pitch when he wanted to say yes or lower it for no.

This approach saw immediate results. On the man’s first day trying, he was able to alter the second tone. Twelve days later, he succeeded in matching the second to the first.

“That was when everything became consistent, and he could reproduce those patterns,” said Jonas Zimmermann, a neuroscientist at the Wyss Center and an author on the study. When the patient was asked what he was imagining to alter his own brain activity, he replied: “Eye movement.”

Over the next year, the man applied this skill to generate words and sentences. The scientists borrowed a communication strategy that the patient had used with his family when he could still move his eyes.

They grouped letters into sets of five colors. A computerized voice first listed the colors, and the man replied “yes” or “no,” depending on whether the letter he wanted to select was in that set. The voice then listed out each letter, which he selected in similar fashion. He repeated these steps set by set, letter by letter, to articulate full sentences.
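
A minimal sketch of that two-stage selection scheme, with an invented letter grouping and a scripted stand-in for the patient’s yes/no channel:

```python
# Two-stage speller: ask yes/no per color group, then yes/no per letter
# within the confirmed group. The grouping and the simulated patient are
# illustrative, not the actual clinical configuration.
GROUPS = {
    "red":    list("ABCDE"),
    "green":  list("FGHIJ"),
    "blue":   list("KLMNO"),
    "yellow": list("PQRST"),
    "purple": list("UVWXYZ"),
}

def spell_one(says_yes):
    """Select one letter; says_yes(kind, value) is the yes/no channel."""
    for color, letters in GROUPS.items():
        if says_yes("group", letters):       # "Is your letter in <color>?"
            for letter in letters:           # then letter by letter
                if says_yes("letter", letter):
                    return letter
    return None

def simulated_patient(target):
    """Stand-in for the patient's answers while spelling `target`."""
    def says_yes(kind, value):
        return target in value if kind == "group" else value == target
    return says_yes

word = "".join(spell_one(simulated_patient(ch)) for ch in "HI")
print(word)  # one yes/no answer per prompt -> HI
```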

On the second day of his spelling endeavor he wrote: “First I would like to thank Niels and his birbaumer.”

Some of his sentences involved instructions: “Mom head massage” and “everyone must use gel on my eyes more often.” Others described cravings: “Goulash soup and sweet pea soup.”

Of the 107 days that the man spent spelling, 44 resulted in intelligible sentences. And while there was great variability in speed, he wrote at about one character per minute.

“Wow, it blew my mind,” said Dr. Mrachacz-Kersting. She speculated that locked-in patients who can keep their minds stimulated could experience longer, healthier lives.

Dr. Mrachacz-Kersting emphasized, however, that the study was based on one patient and would need to be tested on many others.

Other researchers also expressed caution in embracing the findings.

Neil Thakur, chief mission officer of the ALS Association, said, “This approach is experimental, so there’s still a lot we need to learn.”

At this stage the technology is also far too complex for patients and families to operate. Making it more user-friendly and speeding up communication speed will be crucial, Dr. Chaudhary said. Until then, he said, a patient’s relatives will probably be satisfied.

“You have two options: no communication or communication at one character per minute,” he said. “What do you choose?”

Perhaps the biggest concern is time. Three years have passed since the implants were first inserted in the patient’s brain. Since then, his answers have become significantly slower, less reliable and often impossible to discern, said Dr. Zimmermann, who is now caring for the patient at the Wyss Center.

The cause of this decline is unclear, but Dr. Zimmermann thought it probably stemmed from technical issues. For instance, the electrodes are nearing the end of their life expectancy. Replacing them now, however, would be unwise. “It’s a risky procedure,” he said. “All of a sudden you’re exposed to new kinds of bacteria in the hospital.”

Dr. Zimmermann and others at the Wyss Center are developing wireless microelectrodes that are safer to use. The team is also exploring other noninvasive techniques that have proved fruitful in previous studies on patients who are not locked-in. “As much as we want to help people, I think it’s also very dangerous to create false hope,” Dr. Zimmermann said.

At the same time, Dr. Laureys of the Coma Science Group said there would be no value in fostering a sense of “false despair” when viable innovations were appearing on the horizon.

“I’m extremely excited as a caregiver, as a clinician,” he said. “I think it is wonderful that we offer these new scientific insights and technology to very vulnerable and dramatic conditions.”


Alexa for Animals: AI Is Teaching Us How Creatures Communicate

Artificial intelligence has already enabled humans to chat with robots like Alexa and Siri that were inspired by science fiction. Some of its newest creations take a page from a hero of children’s literature: Doctor Dolittle.

Researchers are using AI to parse the “speech” of animals, enabling scientists to create systems that, for example, detect and monitor whale songs to alert nearby ships so they can avoid collisions. It may not yet quite be able to talk to the animals the way the century-old children’s-book character could, but this application of what is known as “deep learning” is helping conservationists protect animals, as well as potentially bridging the gap between human and nonhuman intelligences.
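
As a rough sketch of what sits under the hood of such systems (every detail below is an assumption rather than a description of any deployed detector), a small convolutional network can be set up to label spectrogram patches of hydrophone audio as whale call or background noise:

```python
# Minimal PyTorch sketch of a spectrogram classifier for whale-call
# detection. Architecture, input sizes, and the dummy data are invented;
# a real system would be trained on labeled hydrophone recordings.
import torch
import torch.nn as nn

class WhaleCallDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (batch, 32, 1, 1)
            nn.Flatten(),
            nn.Linear(32, 1),          # one logit: call vs. noise
        )

    def forward(self, spectrogram):    # (batch, 1, freq_bins, time_frames)
        return self.net(spectrogram)

# Dummy batch of spectrogram patches standing in for hydrophone audio.
model = WhaleCallDetector()
patches = torch.randn(8, 1, 64, 128)
probs = torch.sigmoid(model(patches))  # P(whale call) for each patch
print(probs.squeeze())
```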


Isolation From Earth May Drive Changes in How Space Colonies Communicate With Home

There are a lot of unknowns when it comes to human beings living off-world. After all, our species has never been anywhere other than Earth, so how will we react physically and mentally if the time comes to start settling down on other planets?

 

A new study based on human simulations on Earth reveals some interesting insights. First, communication with the outside world – that is, colleagues back at base – tends to get less and less frequent over time; second, group cohesion for the space colony crew tends to improve the longer the mission continues.

That’s quite promising for future settlers on the Moon, on Mars, or anywhere else. They’re going to need to rely on their own initiative a lot of the time and stick together through every eventuality, which seems to be what happened here.

“The crews in such missions tend to reduce their communication with mission control during isolation, sharing their needs and problems less and less,” says one of the researchers, Dmitry Shved from the Russian Academy of Sciences (RAS).

The data for the study was taken from two SIRIUS (Scientific International Research in Unique Terrestrial Station) simulations run in Russia, one covering a period of 17 days and one lasting 120 days. An artificial communications delay was added to better match what astronauts would experience on other planets and moons.

 

In addition to logging the frequency of contact, researchers also analyzed video messages in terms of facial expressions along with the intensity, frequency, and variability of speech patterns. The types of emotions expressed were also logged as part of the experiments.

As time went on, communication frequency decreased, except for during important mission events, such as landing simulations. Crew cohesion increased, and the communication styles of the crew converged into similar patterns, even across differences of gender and cultural background.

In the four-month simulation, for example, the number of video messages sent to mission control dropped from 200 in the first week of isolation to 115-120 in later weeks. At the same time, the duration of the video messages decreased as well.
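
That kind of tally is straightforward to reproduce in miniature. The sketch below fits a linear trend to weekly message counts loosely modeled on the figures above; the intermediate weeks are invented:

```python
# Fit a simple linear trend to weekly message counts. Week 1 and the
# plateau reflect the quoted figures (about 200, then 115-120); the
# values in between are assumptions for illustration.
import numpy as np

weeks = np.arange(1, 9)
messages = np.array([200, 172, 150, 138, 127, 120, 118, 116])

slope, intercept = np.polyfit(weeks, messages, deg=1)
print(f"Trend: {slope:+.1f} messages per week (intercept ≈ {intercept:.0f})")
# A negative slope quantifies the 'autonomization' the researchers report;
# spikes around key events such as landing simulations would appear as
# residuals above the fitted line.
```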

“Our findings show that in autonomous conditions, the crews undergo psychological ‘autonomization’, becoming less dependent on mission control,” says Shved.

“Also, the crews in such conditions tend to increase their cohesion when crew members become closer and more similar to each other, despite their personal, cultural, and other differences. So, these phenomena look promising for future solar system exploration – or for any teams living and working in isolation on Earth.”

There have been several other similar experiments in recent years, including the year-long HI-SEAS (Hawaii Space Exploration Analog and Simulation) test and the MARS-500 simulation that lasted for 520 days and produced similar results to this new study.

Further simulations are already underway, including the SIRIUS-21 experiment, which is set to last for a whole eight months. By the time humans really are living on other worlds, we should have plenty of data about what to expect.

“Our findings pose serious questions that should be taken into consideration [before sending crews to the moon, Mars, and other planets],” says Shved.

“The promising part is that the crews seem to become more autonomous and independent from Earth. The increasing crew cohesion should also help them in dealing with various problems during their mission.”

The research has been published in Frontiers in Psychology.

 

