Tag Archives: processes

EA CEO: 60 percent of dev processes could be ‘impacted by generative AI’ – Game Developer

  1. EA CEO: 60 percent of dev processes could be ‘impacted by generative AI’ (Game Developer)
  2. EA CEO talks AI, says the usual stuff before the bong rip hits and he starts blabbing about a future where 3 billion people are creating EA’s games with it (PC Gamer)
  3. EA’s CEO Says 60% Of Development Processes Could Be ‘Positively Impacted’ By AI (Insider Gaming)
  4. EA Says AI Will Drive Microtransaction Sales, Help Games Get Made Faster; No Comment On Human Impact (GameSpot)
  5. EA says generative AI could make it 30% more efficient and boost monetisation by up to 20% over 5 years (Video Games Chronicle)

Read original article here

Hilary Swank, 48, marvels at the pregnancy processes as she prepares to give birth to TWINS

On Monday’s episode of The Late Late Show with James Corden on CBS, Hilary Swank said she has mainly craved large quantities of fruit during her pregnancy.

‘The first 16 weeks I had a lot of morning sickness, I didn’t do any throwing up, but all I wanted was fruit,’ said the 48-year-old Academy Award-winning actress who recently revealed she was pregnant with twins.

Hilary said when she finally told her Alaska Daily co-stars she was pregnant that they understood her over-the-top fruit consumption.

‘They were like, “Oh, that’s why you eat 10 pomegranates a day, 50 pears,”’ Hilary said.

Hilary, who previously revealed that the twins with her husband, businessman Philip Schneider, 50, were due in April, coincidentally on her late father’s birthday, said she had just entered her 27th week and was essentially in her third trimester.

‘I’m feeling pretty full,’ Hilary said. ‘I love it. I feel like women are superheroes for what our bodies do. I have such... a whole newfound respect.’

James asked Hilary if, since moving to Colorado, there was anything she missed about living in Los Angeles.

‘I miss playing tennis,’ Hilary said. ‘I live at 9,600 feet and the ball flies. It just flies.’

Hilary said the altitude makes the ball go ‘to Mars.’

‘I love tennis,’ Hilary said. ‘I’m obsessed.’

James asked her how often she made it back to LA and what that commute was like.

‘I have my rescue dogs, my parrots, I have two parrots, five rescue dogs,’ Hilary said. ‘I got a pilot’s license during Covid and was flying, but it’s like a little prop plane, and it won’t fit five dogs, two parrots and two babies. So now I’m actually looking into buying a retired band bus.’

‘Swanks on tour,’ James said.

‘You guys will have to figure out what the side mural would be,’ Hilary said.

‘Just paint it in cocaine,’ James said.

James then wanted to know from both Hilary and Gwyneth Paltrow, who was also on the show, what it was like to be famous in the ’90s. Hilary turned to Gwyneth and asked her if she won her Academy Award in 1999. Gwyneth said she thought so.

‘I think I was right after you in 2000,’ Hilary said, and then the two actresses high-fived.

‘You were really famous in the ’90s, I wasn’t,’ Hilary said.

‘You were famous in the ’90s,’ Gwyneth said.

‘No, because Boys Don’t Cry came out in 1999, so,’ Hilary said.

James played a clip from Hilary’s show Alaska Daily and congratulated her on her nomination for a Golden Globe.

Hilary shared that they hid her pregnancy during filming by using a double for her walking scenes. Before announcing she was pregnant, Hilary told the show she also needed a stunt double to do her running scenes.

‘I’m pregnant and I can’t tell anybody,’ Hilary said. ‘Guys, I think I’m a really bad runner. I need a stunt double to run.’

She said they didn’t believe her and asked her to show them how she ran.

‘They said to me, “Show us,”’ Hilary said. ‘I was like, okay, how do I look like a really bad runner? I wasn’t a great runner to start with anyway. I was like, I’ve really got to sell this so that I get my stunt double.’

Hilary said she ran and the four men looked at each other and said, ‘Yeah, you need a stunt double.’

Read original article here

How the Brain Processes Sensory Information From Internal Organs

Summary: A new mouse study provides clues as to how the brain processes sensory information from internal organs, revealing that feedback from different organs activates different clusters of neurons in the brain stem.

Source: Harvard

Most of us think little of why we feel pleasantly full after eating a big holiday meal, why we start to cough after accidentally inhaling campfire smoke, or why we are hit with sudden nausea after ingesting something toxic. However, such sensations are crucial for survival: they tell us what our bodies need at any given moment so that we can quickly adjust our behavior. 

Yet historically, very little research has been devoted to understanding these basic bodily sensations—also known as internal senses—that are generated when the brain receives and interprets input from internal organs.

Now, a team led by researchers at Harvard Medical School has made new strides in understanding the basic biology of internal organ sensing, which involves a complicated cascade of communication between cells inside the body.

In a study conducted in mice and published Aug. 31 in Nature, the team used high-resolution imaging to reveal spatial maps of how neurons in the brain stem respond to feedback from internal organs.

They found that feedback from different organs activates discrete clusters of neurons, regardless of whether this information is mechanical or chemical in nature — and these groups of neurons representing different organs are topographically organized in the brain stem. Moreover, they discovered that inhibition within the brain plays a key role in helping neurons selectively respond to organs. 

“Our study reveals the fundamental principles of how different internal organs are represented in the brain stem,” said lead author Chen Ran, research fellow in cell biology at HMS.

The research is only a first step in elucidating how internal organs communicate with the brain. However, if the findings are confirmed in other species, including humans, they could help scientists develop better therapeutic strategies for diseases such as eating disorders, overactive bladder, diabetes, pulmonary disorders, and hypertension that arise when internal sensing goes awry.

“I think understanding how sensory inputs are encoded by the brain is one of the great mysteries of how the brain works,” said senior author Stephen Liberles,  professor of cell biology in the Blavatnik Institute at HMS and an investigator at Howard Hughes Medical Institute. “It gives inroads into understanding how the brain functions to generate perceptions and evoke behaviors.”

Understudied and poorly understood

For almost a century, scientists have been studying how the brain processes external information to form the basic senses of sight, smell, hearing, taste, and touch that we use to navigate the world. Over time, they have compiled their findings to show how the various sensory areas in the brain are organized to represent different stimuli.  

In the mid-1900s, for example, research on touch led scientists to develop the cortical homunculus for the somatosensory system—an illustration that depicts cartoonish body parts draped over the surface of the brain, each part positioned to align with the location where it is processed, and drawn to scale based on sensitivity.

In 1981, Harvard professors David Hubel and Torsten Wiesel won a Nobel Prize for their research on vision, in which they methodically mapped the visual cortex of the brain by recording the electrical activity of individual neurons responding to visual stimuli.

In 2004, another pair of scientists won a Nobel Prize for their studies of the olfactory system, in which they identified hundreds of olfactory receptors and revealed precisely how odor inputs are arranged in the nose and brain.

However, until now, the process by which the brain senses and organizes feedback from internal organs to regulate basic physiological functions such as hunger, satiation, thirst, nausea, pain, breathing, heart rate, and blood pressure has remained mysterious.

“How the brain receives inputs from within the body and how it processes those inputs have been vastly understudied and poorly understood,” Liberles said.

This is perhaps because internal sensing is more complicated than external sensing, Ran added. External senses, he explained, tend to receive information in a single format. Vision, for example, is based entirely on the detection of light.

By contrast, internal organs convey information through mechanical forces, hormones, nutrients, toxins, temperature, and more—each of which can act on multiple organs and translate into multiple physiological responses. Mechanical stretch, for example, signals the need to urinate when it occurs in the bladder, but translates into satiation when it happens in the stomach and triggers a reflex to stop inhalation in the lungs.

A constellation of neurons

In their new study, Liberles, Ran, and colleagues focused on a brain stem region called the nucleus of the solitary tract, or NTS.

The NTS is known to receive sensory information from internal organs via the vagus nerve. It relays this information to higher-order brain regions that regulate physiological responses and generate behaviors. In this way, the NTS serves as an internal sensory gateway for the brain.

The researchers used a powerful technique called two-photon calcium imaging that measures calcium levels in individual neurons in the brain as a proxy for neuronal activity.
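The press release does not describe the imaging analysis itself. As a rough illustration of how a raw fluorescence trace is typically turned into an activity proxy, the sketch below applies the standard dF/F0 normalization to a toy trace; the percentile baseline and all numbers are assumptions for illustration, not the study’s pipeline.

```python
# Minimal sketch of the common dF/F0 normalization used to turn a calcium
# fluorescence trace into a proxy for neural activity. Toy values only.
import numpy as np

def delta_f_over_f(trace, baseline_percentile=20):
    """Normalize a fluorescence trace by a percentile-based baseline F0."""
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

# Fake fluorescence trace for one neuron: flat baseline plus a brief transient.
trace = np.concatenate([np.full(100, 100.0), np.full(20, 180.0), np.full(100, 100.0)])
print(delta_f_over_f(trace).max())   # ~0.8 for this toy trace
```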

The team applied this technique to mice exposed to different types of internal organ stimuli and used a microscope to simultaneously record the responses of thousands of neurons in the NTS over time. The resulting videos show neurons lighting up throughout the NTS, much like stars winking on and off in the night sky.

Traditional imaging techniques, which involve inserting an electrode to record a small group of neurons at a single time point, “are like seeing only a couple pixels of an image at a time,” Ran said. “Our technique is like seeing all the pixels at once to reveal the entire image in high resolution.”

The team discovered that stimuli in different internal organs—for example, the stomach versus the larynx—generally activated different clusters of neurons in the NTS. By contrast, the researchers identified several cases in which mechanical and chemical stimuli in the same organ, which often evoke the same physiological response (such as coughing or satiation), activated overlapping neurons in the brain stem. These findings suggest that specific groups of neurons may be dedicated to representing particular organs.

Moreover, the researchers found that responses in the NTS were organized as a spatial map, which they dubbed the “visceral homunculus” in a nod to the analogous cortical homunculus developed decades ago.

Finally, the scientists established that the selectivity of these brain stem responses depends on neural inhibition. When they used drugs to block inhibition, neurons in the brain stem began to respond to multiple organs, losing their prior selectivity.

The work lays the foundation for “systematically studying the coding of internal senses throughout the brain,” Ran said.

A foundation for the future

The findings raise many new questions, some of which the HMS team would like to address.

Ran is interested in investigating how the brain stem conveys internal sensory information to higher-order brain regions that produce the resulting sensations, such as hunger, pain, or thirst.

Liberles wants to explore how the internal sensing system works on a molecular level. In particular, he would like to identify the primary sensory receptors that detect mechanical and chemical stimuli within organs.

Another area for future research is how the system is set up during embryonic development. The new findings, Liberles said, suggest that looking at neuron type alone isn’t enough; researchers must also consider where neurons are located in the brain.

See also

“We need to study the interplay between neuron types and their positions to understand how the circuits are wired and what the different cell types do in the context of different circuits,” he said.

Liberles is also interested in how generalizable the findings are to other animals, including humans. While many sensory pathways are conserved across species, he noted, there are also important evolutionary differences. For example, some animals don’t exhibit basic behaviors such as coughing or vomiting.

If confirmed in humans, the research findings could eventually inform the development of better treatments for diseases that arise when the internal sensory system malfunctions.

“Oftentimes these diseases occur because the brain receives abnormal feedback from internal organs,” Ran said. “If we have a good idea of how these signals are differentially encoded in the brain, we may someday be able to figure out how to hijack this system and restore normal function.”

Additional authors include Jack Boettcher, Judith Kaye, and Catherine Gallori of HMS.

Funding: The work was supported by the National Institutes of Health (grants DP1AT009497; R01DK122976; R01DK103703), the Food Allergy Science Initiative, a Leonard and Isabelle Goldenson Postdoctoral Fellowship, the Harvard Brain Science Initiative, and the American Diabetes Association.

About this neuroscience research news

Author: Dennis Nealon
Source: Harvard
Contact: Dennis Nealon – Harvard

Original Research: Open access. “A brainstem map for visceral sensations” by Chen Ran et al., Nature.


Abstract

A brainstem map for visceral sensations

The nervous system uses various coding strategies to process sensory inputs. For example, the olfactory system uses large receptor repertoires and is wired to recognize diverse odours, whereas the visual system provides high acuity of object position, form and movement.

Compared to external sensory systems, principles that underlie sensory processing by the interoceptive nervous system remain poorly defined.

Here we developed a two-photon calcium imaging preparation to understand internal organ representations in the nucleus of the solitary tract (NTS), a sensory gateway in the brainstem that receives vagal and other inputs from the body.

Focusing on gut and upper airway stimuli, we observed that individual NTS neurons are tuned to detect signals from particular organs and are topographically organized on the basis of body position. Moreover, some mechanosensory and chemosensory inputs from the same organ converge centrally.

Sensory inputs engage specific NTS domains with defined locations, each containing heterogeneous cell types. Spatial representations of different organs are further sharpened in the NTS beyond what is achieved by vagal axon sorting alone, as blockade of brainstem inhibition broadens neural tuning and disorganizes visceral representations.

These findings reveal basic organizational features used by the brain to process interoceptive inputs.

Read original article here

Something’s Glowing at The Galactic Core, And We Could Be Closer to Solving The Mystery

Something deep in the heart of the Milky Way galaxy is glowing with gamma radiation, and nobody can figure out for sure what it might be.

Colliding dark matter has been proposed, ruled out, and then tentatively reconsidered.

 

Dense, rapidly rotating objects called pulsars were also considered as candidate sources of the high-energy rays, before being dismissed as too few in number to make the sums work.

A study by researchers from Australia, New Zealand and Japan could breathe new life into the pulsar explanation, revealing how it might be possible to squeeze some seriously intense sunshine from a population of spinning stars without breaking any rules.

Gamma radiation isn’t your typical hue of sunlight. It requires some of the Universe’s most energetic processes to produce. We’re talking black holes colliding, matter being whipped towards light speed, antimatter combining with matter kinds of processes.

Of course, the center of the Milky Way has all of these things in spades. So when we gaze into the heavens and consider all of the crashing bits of matter, spiraling black holes, whizzing pulsars, and other astrophysical processes, we’d expect to see a healthy gamma glow.

But when researchers used NASA’s Fermi telescope to measure the intense shine within the heart of our galaxy about ten years ago, they found there was more of this high-energy light than they could account for: what’s known as the Galactic Centre Excess.

 

One exciting possibility involves unseen bits of matter bumping together in the night. These weakly interacting massive particles – a hypothetical category of dark matter commonly described as WIMPs – would cancel each other out as they smoosh together, leaving nothing but radiation to mark their presence.

It’s a fun explanation to consider, but is also light on evidence.

“The nature of dark matter is entirely unknown, so any potential clues garner a lot of excitement,” says astrophysicist Roland Crocker from the Australian National University.

“But our results point to another important source of gamma ray production.”

That source is the millisecond pulsar.

To make one, take a star much bigger than our own and let its fires die down. It will eventually collapse into a dense ball not much wider than a city, where its atoms pack together so tightly, many of its protons are slowly baked into neutrons.

This process generates super-strong magnetic fields that channel incoming particles into fast-flowing streams glowing with radiation.

Since the object is rotating, these streams swivel around from the star’s poles like the Universe’s biggest lighthouse beacons – so it appears to pulse with energy. Pulsing stars that spin hundreds of times a second are known as millisecond pulsars, and we know a lot about the conditions under which they’re likely to form.

 

“Scientists have previously detected gamma-ray emissions from individual millisecond pulsars in the neighborhood of the Solar System, so we know these objects emit gamma rays,” says Crocker.

To emit them, however, they’d need a generous amount of mass to feed on, and most pulsar systems in the center of the Milky Way are thought to be too puny to emit anything more energetic than X-rays.

That might not always be the case, however, especially if the dead stars they emerged from are of a particular variety of ultra-massive white dwarf.

According to Crocker, if enough of these heavyweights were to turn into pulsars and hold onto their binary partners, they would provide just the right amount of gamma radiation to match observations.

“Our model demonstrates that the integrated emission from a whole population of such stars, around 100,000 in number, would produce a signal entirely compatible with the Galactic Centre Excess,” says Crocker.

Being a purely theoretical model, it’s an idea that now needs a generous dose of empirical evidence. Unlike suggestions based on dark matter, however, we already know exactly what to look for.

This research was published in Nature Astronomy.  

 

Read original article here

NASA’s Curiosity Rover Drilled Holes Into Mars, And Found Something Very Strange

As it’s the foundation for all life on Earth, discovering carbon on other planets always gets scientists excited – and the Curiosity Rover on Mars has found an unusual mix of the chemical element that could hypothetically point to the existence of alien life.

 

That’s by no means certain, but it’s a possibility. It’s one of three different scenarios that experts think might have produced the carbon found in sediment in the Gale crater, collected across nine years from August 2012 to July 2021.

A total of 24 powder samples were heated by Curiosity to separate individual chemicals, revealing a wide variation in the mix of carbon 12 and carbon 13: the two stable carbon isotopes, whose relative abundance can reveal how the carbon cycle may have changed over time.

Part of the Martian landscape where samples were taken. (NASA/Caltech-JPL/MSSS)

What makes these variations particularly fascinating – some samples enriched with carbon 13, and some extremely depleted – is that they point to unconventional processes different to those created by the carbon cycle in Earth’s modern era.

“The amounts of carbon 12 and carbon 13 in our Solar System are the amounts that existed at the formation of the Solar System,” says geoscientist Christopher House from Pennsylvania State University.

“Both exist in everything, but because carbon 12 reacts more quickly than carbon 13, looking at the relative amounts of each in samples can reveal the carbon cycle.”
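For readers who want the arithmetic behind “relative amounts”: geochemists usually express such measurements in delta notation, the per mil deviation of a sample’s carbon-13 to carbon-12 ratio from a reference standard. A minimal sketch follows; the ratios are illustrative, not Curiosity’s measurements, and the standard value is approximate.

```python
# Minimal sketch of delta-13C notation: the per mil (parts per thousand)
# deviation of a sample's 13C/12C ratio from a reference standard.
# The VPDB standard ratio below is approximate; sample values are made up.
def delta_c13(ratio_sample, ratio_standard=0.0112372):
    """Return delta-13C in per mil relative to the standard."""
    return (ratio_sample / ratio_standard - 1) * 1000

# A sample strongly depleted in carbon-13 relative to the standard:
print(delta_c13(0.0103))   # roughly -83 per mil, i.e. heavily depleted
```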

 

One explanation for the carbon signatures is a giant, molecular cloud of dust. The Solar System passes through one of these every couple of hundred million years or so, and the cooling effect it creates leaves carbon deposits in its wake. This is a plausible scenario, the team says, but one that needs more investigation.

Alternatively, the conversion of CO2 to organic compounds (like formaldehyde) through abiotic (non-biological) processes could explain what Curiosity has found – in this case, ultraviolet light might have been the trigger. It’s something scientists have hypothesized about before, but again further study is required to confirm whether or not this is actually what’s happening.

That leaves the third explanation, which is that either ultraviolet light or microbes once upon a time converted methane produced by biological processes – that we’re looking at carbon created as a result of life. As with the other two possibilities, we’re going to need more surrounding evidence to know for sure, but there are some parallels on Earth.

“The samples extremely depleted in carbon 13 are a little like samples from Australia taken from sediment that was 2.7 billion years old,” says House.

 

“Those samples were caused by biological activity when methane was consumed by ancient microbial mats, but we can’t necessarily say that on Mars because it’s a planet that may have formed out of different materials and processes than Earth.”

Curiosity’s mission carries on, of course. The future discovery of the remains of microbial mats, or substantial methane plumes, or traces of long-lost glaciers would help the scientists figure out which one of these three explanations is most likely.

For now though, we just don’t know enough about Mars and its history to be able to come to any conclusions about how these carbon signatures came about. Further drilling is planned in a month’s time at the spot where many of these samples were collected.

Curiosity has recently been joined by the Perseverance rover, which is planning to return Martian rocks to Earth rather than experimenting on them in situ. Expect much more to be revealed by these two robotic explorers over the coming years.

“All three possibilities point to an unusual carbon cycle unlike anything on Earth today,” says House. “But we need more data to figure out which of these is the correct explanation.

“We are being cautious with our interpretation, which is the best course when studying another world.”

The research has been published in PNAS.

 

Read original article here

Scientists Warn About ‘False Fossils’ Present on Mars

When looking for signs of life on Mars, we need to look out for ‘false fossils’ that may be abundant on the Red Planet, according to a new study.

Mars rover Perseverance lists, among its mission objectives, a first for Red Planet exploration. The robotic explorer has been tasked with searching for signs of ancient microbial life on the dusty, dry planet – tiny microfossils that would be evidence that Mars was once habitable.

 

That would indeed be an astounding, incredible discovery – but the new paper urges caution in interpreting what we find, in both this and future sources.

According to astrobiologist Sean McMahon of the University of Edinburgh and geobiologist Julie Cosmidis of the University of Oxford in the UK, scientists will have to keep an eye out for non-biological mineral deposits that look a heck of a lot like fossils.

In a new paper, the pair have outlined dozens of non-biological, or abiotic, processes that can produce pseudofossils – structures that look like fossils of microscopic organisms like those that may have once existed on Mars.

“At some stage a Mars rover will almost certainly find something that looks a lot like a fossil, so being able to confidently distinguish these from structures and substances made by chemical reactions is vital,” McMahon said.

“For every type of fossil out there, there is at least one non-biological process that creates very similar things, so there is a real need to improve our understanding of how these form.”

This notion is not exactly surprising. Mars is an absolute feast for pareidolia and conspiracies. All you need is one funny-looking rock and the rumors run riot.

 

It’s not just tabloids and conspiracy theorists, either – scientists have indulged in Mars fantasies, too. You might remember the time mycologists thought they might have found mushrooms on Mars, or the entomologist who thought he’d found bugs.

Microfossils might therefore be quite problematic. Even here on Earth, we struggle to tell the difference between really old rocks and fossils of ancient microbes.

But, if we enter analysis of any potential microfossils on Mars knowing the processes that can produce pseudofossils, we have a better chance of accurately interpreting what we’re seeing.

Many physical processes associated with weathering and the depositing of sedimentary layers can produce rocks that look eerily like fossils. 

Another mechanism is the chemical garden, in which chemicals mixing can produce structures that look biological. Many different types of minerals can also combine to produce pseudofossils known as biomorphs, which look strikingly biological.

You can see an example of chemical garden pseudofossils below. 

Chemical garden pseudofossils. (McMahon & Cosmidis, J. Geol. Soc., 2021)

Even textures in the rock can look biological, since organisms can etch patterns or holes into stone. And isotope ratios of various elements can appear similar to isotope ratios in biological contexts.

Since we don’t know what kind of life could have emerged on Mars – it might be quite different from life here on Earth – and since, as McMahon and Cosmidis noted, there are likely many unknown processes that can produce pseudofossils, biologists looking for life on Mars are going to have to be very careful indeed.

 

The researchers also note that more work, perhaps even experimentation, into the chemistry and physics of Mars could reveal some of these unknown processes, and shed more light on how such formations might be produced. This work could even help us better understand Earth’s rock and fossil record.

“We have been fooled by life-mimicking processes in the past,” Cosmidis said.

“On many occasions, objects that looked like fossil microbes were described in ancient rocks on Earth and even in meteorites from Mars, but after deeper examination they turned out to have non-biological origins. This article is a cautionary tale in which we call for further research on life-mimicking processes in the context of Mars, so that we avoid falling into the same traps over and over again.”

The research has been published in the Journal of the Geological Society.

 

Read original article here

Artificial intelligence sheds light on how the brain processes language


In the past few years, artificial intelligence models of language have become very good at certain tasks. Most notably, they excel at predicting the next word in a string of text; this technology helps search engines and texting apps predict the next word you are going to type.

The most recent generation of predictive language models also appears to learn something about the underlying meaning of language. These models can not only predict the word that comes next, but also perform tasks that seem to require some degree of genuine understanding, such as question answering, document summarization, and story completion.
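As a concrete illustration of next-word prediction, the sketch below queries the small, publicly available GPT-2 model through the Hugging Face transformers library for its top candidates for the next token. This is an assumption chosen for illustration, not necessarily one of the models evaluated in the study described below.

```python
# Minimal sketch: next-word prediction with a small public model (GPT-2),
# standing in for the larger predictive models discussed in the article.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The cat sat on the"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits      # shape: (1, sequence_length, vocab_size)

next_token_logits = logits[0, -1]         # scores for the word that comes next
top = torch.topk(next_token_logits, k=5)
for score, token_id in zip(top.values, top.indices):
    print(tokenizer.decode(int(token_id)), float(score))
```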

Such models were designed to optimize performance for the specific function of predicting text, without attempting to mimic anything about how the human brain performs this task or understands language. But a new study from MIT neuroscientists suggests the underlying function of these models resembles the function of language-processing centers in the human brain.

Computer models that perform well on other types of language tasks do not show this similarity to the human brain, offering evidence that the human brain may use next-word prediction to drive language processing.

“The better the model is at predicting the next word, the more closely it fits the human brain,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines (CBMM), and an author of the new study. “It’s amazing that the models fit so well, and it very indirectly suggests that maybe what the human language system is doing is predicting what’s going to happen next.”

Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of CBMM and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); and Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute, are the senior authors of the study, which appears this week in the Proceedings of the National Academy of Sciences. Martin Schrimpf, an MIT graduate student who works in CBMM, is the first author of the paper.

Making predictions

The new, high-performing next-word prediction models belong to a class of models called deep neural networks. These networks contain computational “nodes” that form connections of varying strength, and layers that pass information between each other in prescribed ways.

Over the past decade, scientists have used deep neural networks to create models of vision that can recognize objects as well as the primate brain does. Research at MIT has also shown that the underlying function of visual object recognition models matches the organization of the primate visual cortex, even though those computer models were not specifically designed to mimic the brain.

In the new study, the MIT team used a similar approach to compare language-processing centers in the human brain with language-processing models. The researchers analyzed 43 different language models, including several that are optimized for next-word prediction. These include a model called GPT-3 (Generative Pre-trained Transformer 3), which, given a prompt, can generate text similar to what a human would produce. Other models were designed to perform different language tasks, such as filling in a blank in a sentence.

As each model was presented with a string of words, the researchers measured the activity of the nodes that make up the network. They then compared these patterns to activity in the human brain, measured in subjects performing three language tasks: listening to stories, reading sentences one at a time, and reading sentences in which one word is revealed at a time. These human datasets included functional magnetic resonance imaging (fMRI) data and intracranial electrocorticographic measurements taken in people undergoing brain surgery for epilepsy.
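The article does not spell out how model activity was compared with brain activity. One common approach in this line of work, sketched below under clearly labeled assumptions, is to fit a regularized linear mapping from a model’s internal activations to the recorded brain responses and score how well that mapping predicts held-out data. The arrays here are random placeholders, not data or code from the study.

```python
# Hedged sketch of one common model-to-brain comparison: ridge regression
# from network activations to recorded brain responses, scored by the
# correlation between predicted and observed responses on held-out items.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sentences, n_model_units, n_voxels = 200, 768, 50

model_activations = rng.normal(size=(n_sentences, n_model_units))   # placeholder
brain_responses = rng.normal(size=(n_sentences, n_voxels))          # placeholder

X_train, X_test, y_train, y_test = train_test_split(
    model_activations, brain_responses, test_size=0.2, random_state=0)

mapping = Ridge(alpha=1.0).fit(X_train, y_train)
pred = mapping.predict(X_test)

# Correlate predicted and observed responses for each voxel or electrode.
scores = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print("mean held-out correlation:", np.mean(scores))
```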

They found that the best-performing next-word prediction models had activity patterns that very closely resembled those seen in the human brain. Activity in those same models was also highly correlated with human behavioral measures, such as how fast people were able to read the text.

“We found that the models that predict the neural responses well also tend to best predict human behavior responses, in the form of reading times. And then both of these are explained by the model performance on next-word prediction. This triangle really connects everything together,” Schrimpf says.

Game changer

One of the key computational features of predictive models such as GPT-3 is an element known as a forward one-way predictive transformer. This kind of transformer is able to make predictions of what is going to come next, based on previous sequences. A significant feature of this transformer is that it can make predictions based on a very long prior context (hundreds of words), not just the last few words.
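To make the idea of a “forward one-way” transformer concrete: in a causal model, each position in a sequence may attend only to itself and earlier positions, however long that prior context is. Below is a minimal sketch of such a mask; it is illustrative only, not code from the study or from GPT-3 itself.

```python
# Minimal sketch of a causal ("forward one-way") attention mask: entry (i, j)
# is True only when position i is allowed to attend to position j, i.e. j <= i.
import torch

seq_len = 6
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
print(causal_mask)
# Row i has True up to column i, so the prediction at position i can use the
# full prior context but never peek at future tokens.
```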

Scientists have not found any brain circuits or learning mechanisms that correspond to this type of processing, Tenenbaum says. However, the new findings are consistent with previously proposed hypotheses that prediction is one of the key functions in language processing, he says.

“One of the challenges of language processing is the real-time aspect of it,” he says. “Language comes in, and you have to keep up with it and be able to make sense of it in real time.”

The researchers now plan to build variants of these language processing models to see how small changes in their architecture affect their performance and their ability to fit human neural data.

“For me, this result has been a game changer,” Fedorenko says. “It’s totally transforming my research program, because I would not have predicted that in my lifetime we would get to these computationally explicit models that capture enough about the brain so that we can actually leverage them in understanding how the brain works.”

The researchers also plan to try to combine these high-performing language models with some computer models Tenenbaum’s lab has previously developed that can perform other kinds of tasks such as constructing perceptual representations of the physical world.

“If we’re able to understand what these language models do and how they can connect to models which do things that are more like perceiving and thinking, then that can give us more integrative models of how things work in the brain,” Tenenbaum says. “This could take us toward better artificial intelligence models, as well as giving us better models of how more of the brain works and how general intelligence emerges, than we’ve had in the past.”

Other authors of the paper are Idan Blank Ph.D. ’16 and graduate students Greta Tuckute, Carina Kauf, and Eghbal Hosseini.


More information:
The neural architecture of language: Integrative modeling converges on predictive processing, Proceedings of the National Academy of Sciences (2021). DOI: 10.1073/pnas.2105646118.
Provided by
Massachusetts Institute of Technology

Citation: Artificial intelligence sheds light on how the brain processes language (2021, October 25), retrieved 26 October 2021 from https://medicalxpress.com/news/2021-10-artificial-intelligence-brain-language.html




Read original article here

When Robot Eyes Gaze Back at Humans, Something Changes in Our Brain And Behavior

When you know you’re being watched by somebody, it’s hard to pretend they’re not there. It can be difficult to block them out and keep focus, feeling their gaze bearing down upon you.

 

Strangely enough, it doesn’t even seem to really matter whether they’re alive or not.

In new research, scientists set up an experiment where people played a game against a robot.

If the robot looked up at the human players during the session, it ended up affecting the participants’ behavior and strategy in the game – a change that could be discerned in measurements of their neural activity recorded by electroencephalography (EEG) during the experiment.

“If the robot looks at you during the moment you need to make a decision on the next move, you will have a more difficult time in making a decision,” says cognitive neuroscientist Agnieszka Wykowska from the Italian Institute of Technology.

“Your brain will also need to employ effortful and costly processes to try to ‘ignore’ that gaze of the robot.”

In the experiment, 40 participants sat across from an iCub humanoid robot, competing in a game of ‘Chicken’ on a horizontal computer screen, in which two simulated cars rushed head-on towards one another.

Just before the moment of impact, the game would pause, and the participants were asked to look up at the robot – which would either meet their gaze, or look away. During this instant, the participants had to decide whether to let their cars run ahead, or to deviate to the side.

 

The results of the experiment showed that the robot’s return gaze didn’t influence the choices individual human players made, but it did cause their response time to slightly increase, with participants generally responding faster in the game when the iCub averted its eyes.

“In line with our hypothesis, the delayed responses within-subjects after mutual gaze may suggest that mutual gaze entailed a higher cognitive effort, for example, by eliciting more reasoning about iCub’s choices, or higher degree of suppression of the (potentially distracting) gaze stimulus, which was irrelevant to the task,” the researchers explain in their paper.

Representation of iCub and a participant. (IIT)

According to the researchers, this change in player behavior corresponded with a change in neural activity called synchronized alpha activity – a brain wave pattern that’s previously been associated with suppressing attention.
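The article does not give the EEG analysis details. As a rough illustration of what “alpha activity” refers to, the sketch below estimates power in the alpha band (roughly 8 to 13 Hz) from a single synthetic EEG channel using Welch’s method; the sampling rate, band edges, and signal are assumptions for illustration, not the study’s data or pipeline.

```python
# Minimal sketch: estimating alpha-band (~8-13 Hz) power for one EEG channel
# with Welch's method. The signal below is synthetic (a 10 Hz tone plus noise).
import numpy as np
from scipy.signal import welch

fs = 250                                    # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
alpha = (freqs >= 8) & (freqs <= 13)
alpha_power = np.trapz(psd[alpha], freqs[alpha])   # integrate PSD over the band
print("alpha-band power:", alpha_power)
```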

What’s more, when viewed across the entire experiment, higher exposure to averted gaze (where the robot did not stare back) among participants seemed to help players disengage from the social interaction with the iCub, and focus on their gameplay with less distraction.

 

Given the iCub is a humanoid robot – designed loosely to mimic the shape and appearance of people – it’s perhaps not altogether surprising that a robot’s gaze can trigger our usual attentional processes.

But it could have implications for the design of more advanced and interactive robots in the future, the researchers say.

“Robots will be more and more present in our everyday life,” Wykowska says.

“That is why it is important to understand not only the technological aspects of robot design, but also the human side of the human-robot interaction… how the human brain processes behavioral signals conveyed by robots.”

The findings are reported in Science Robotics.

 

Read original article here