Tag Archives: minds

Cracking the code of cognitive health: Regular nut consumption tied to sharper minds – News-Medical.Net

  1. Cracking the code of cognitive health: Regular nut consumption tied to sharper minds News-Medical.Net
  2. Improving memory may be as easy as popping a multivitamin, study finds: ‘Prevents vascular dementia’ Fox News
  3. Eating flavanol-rich foods can boost brain health, new study suggests The Globe and Mail
  4. Nuts for the brain: Study shows nut consumption boosts memory and brain health in seniors News-Medical.Net
  5. Another Study Shows Daily Multivitamin for People Over 60 Slows Memory Decline That Comes With Aging Good News Network

Read original article here

Could Mel Brooks’s ‘Blazing Saddles’ be revived today? The minds behind ‘History of the World, Part II’ weigh in – Yahoo Entertainment

  1. Could Mel Brooks’s ‘Blazing Saddles’ be revived today? The minds behind ‘History of the World, Part II’ weigh in Yahoo Entertainment
  2. ‘History of the World, Part II’ Review: Mel Brooks Blazes Back to the Past The Wall Street Journal
  3. ‘History of the World, Part II’: Ike Barinholtz on Mel Brooks’ Advice and a Potential Part III Hollywood Reporter
  4. Roush Review: ‘History of the World, Part II’ Is a Rollicking By-the-(Mel)-Brooks Romp TV Insider
  5. Could Mel Brooks’s ‘Blazing Saddles’ be revived today? The minds behind ‘History of the World, Part II’ weigh in. AOL

Read original article here

Weird supernova remnant blows scientists’ minds

When dying stars explode as supernovae, they usually eject a chaotic web of dust and gas. But a new image of a supernova’s remains looks completely different — as though its central star sparked a cosmic fireworks display. It is the most unusual remnant that researchers have ever found, and could point to a rare type of supernova that astronomers have long struggled to explain.

“I have worked on supernova remnants for 30 years, and I’ve never seen anything like this,” says Robert Fesen, an astronomer at Dartmouth College in Hanover, New Hampshire, who imaged the remnant late last year. He reported his findings at a meeting of the American Astronomical Society on 12 January and posted them in a not-yet-peer-reviewed paper on the same day.

An 850-year-old firework

In 2013, amateur astronomer Dana Patchick discovered the object in archived images from NASA’s Wide-field Infrared Survey Explorer. Over the next decade, several teams studied the remnant, known as Pa 30, but the results became only more and more baffling.

Vasilii Gvaramadze, an astronomer at Lomonosov Moscow State University in Russia, and his colleagues found an extremely unusual star in 2019 at the dead centre of Pa 30. That star had a surface temperature of roughly 200,000 kelvin, with a stellar wind travelling outward at 16,000 kilometres per second — roughly 5% of the speed of light. “Stars simply don’t have 16,000-kilometre-per-second winds,” Fesen says. Speeds of 4,000 kilometres per second aren’t unheard of, he says — but 16,000 is wild.

Pa 30 was again the subject of intrigue in 2021, when Andreas Ritter, an astronomer at the University of Hong Kong, and his colleagues proposed that the remnant is the aftermath of a supernova that lit up the sky nearly 850 years ago, in 1181. Chinese and Japanese astronomers observed the object for roughly six months before it faded.

During their examination of Pa 30, Ritter and his colleagues noted that the remnant’s emission spectrum contained a particular line associated with the element sulfur. Intrigued, Fesen’s group later imaged the remnant with an optical filter that is sensitive to that line using the 2.4-metre Hiltner Telescope at the Michigan–Dartmouth–MIT Observatory at Kitt Peak, Arizona.

The data they collected not only helped to confirm that Pa 30 is indeed what’s left of the supernova observed in 1181, but also yielded an image of the remnant unlike any other. It contains hundreds of fine filaments radiating outwards. Normally, researchers expect supernova remnants to look like the Crab Nebula — which looks less like a crab and more like a sea anemone, with a smooth region at the centre of an oval-shaped mass of tentacle-like filaments. They also commonly look like the Tycho Supernova, which looks like a sphere of jumbled knots.

But Pa 30, by comparison, makes for “just an amazing image”, says Saurabh Jha, an astronomer at Rutgers University in Piscataway, New Jersey. “I’ve never seen anything like it before. It’s really mind-blowing.”

Cheating death

What could have caused such a remnant? In 2021, Ritter and his colleagues speculated that it was a rare supernova explosion classified as type Iax.

A normal type-Ia supernova occurs when a white dwarf siphons material from a companion star, eventually growing so massive that it can no longer support the extra weight and blows itself to smithereens — dispersing its innards across the galaxy. But in a type-Iax supernova, the star somehow survives. “We often call these zombie stars,” Jha says.

Although theorists have developed many possible mechanisms to explain type-Iax supernovae, Ritter and his colleagues think that two white dwarfs slammed together to produce Pa 30’s fireworks. That’s clear from the amount of sulfur in the remnant, which is a byproduct of a white-dwarf explosion, and the lack of lighter elements that you would see from more massive stars.

Anthony Piro, an astronomer at Carnegie Observatories in Pasadena, California, thinks these findings crystallize at least one path through which a type Iax can form. But it is different from the previously favoured scenario, in which a white dwarf siphons material from a companion. That idea was developed in 2014, when astronomers successfully identified the stars involved in a Iax explosion by looking through archived images from before the event took place.

So the Pa 30 finding “definitely broadens, in my mind, what could have led to a type-Iax supernova”, Jha says.

These rare explosions tend to occur in distant galaxies, making them difficult to study. But Pa 30 (if it is truly type Iax) is only 2.3 kiloparsecs away — meaning that future observations will shed more light on this unusual type of supernova.
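For scale, 1 parsec is about 3.26 light-years, so that distance works out to

$$ 2.3\ \text{kpc} \times 3.26 \times 10^{3}\ \tfrac{\text{ly}}{\text{kpc}} \approx 7{,}500\ \text{light-years}, $$

comfortably within our own galaxy.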

Already, Fesen has applied for observing time on both the Hubble Space Telescope and the newer James Webb Space Telescope (JWST). “The optical image that was taken, I think, gives only a hint of what it really looks like,” Fesen says. “But the JWST image will be simply amazing.”

Read original article here

George Lois Changed Magazines—And Pop Culture—Forever

As an art director for Esquire in the 1960s, George Lois assailed Muhammad Ali with arrows, drowned Andy Warhol in a can of soup, and prepped Richard Nixon’s profile for a close-up. He stunned minds to attention, making magazine covers that spoke so urgently, they muted an entire newsstand’s worth of bold headlines. Through Lois’s work, history was reified.

I wasn’t alive in the ’60s, but I can tell you that many of the era’s visual markers that arise in my mind were made by Lois. (And I’m surely not alone in this—the Museum of Modern Art secured several of his works for its permanent collection.) He was a fierce and uncompromising visual visionary, a provocateur whose wordless commentary refracted America through dozens of roughly 8-by-10-inch canvases. He possessed an uncanny ability to channel collective sentiment in a time of deep political divide, but more than that, he transmitted messages that America didn’t realize it was ready to embrace. Until he died this past weekend at age 91, George Lois was the greatest living magazine art director. He will be remembered as a pioneering graphic artist of the 20th century.

Before Esquire, Lois made his bones as an ad man developing campaigns for Xerox and John F. Kennedy’s presidential campaign. He was Bronx-born—brash, passionate, and willing to throw down the gauntlet to defend his ideas. Rumored to be the inspiration for Mad Men’s Don Draper, Lois rejected the comparison altogether (which is fair, because Draper didn’t have nearly as tight a grip on the counterculture as Lois did). Through advertising, Lois honed his stylish and daring sensibilities, which would carry him through a decade in magazines and then on to MTV, where he rescued a flailing brand and turned it into a zeitgeist-defining entity.

In 2019, when Peter Mendelsund and I began redesigning The Atlantic, no designer had more influence on us. Lois’s work left us no choice but to contend with it, occupying, as it does, a dominant space in the cultural imagination. We studied his covers, seeking to bring a similar sensibility to The Atlantic, which is to say we tried to copy him often. A common thread in Lois’s most searing designs is the relationship of typography to image. He frequently relied on a striking central visual component to anchor the cover while the rest of the elements remained deferential. This required bravery—as well as immense trust in the public—and removed the onus from the language. He reduced the cover’s typography to Lilliputian scale in order to harness the image’s massive power.

In 1968, Lois subjected Ali to the fate of Saint Sebastian, using arrows to martyr the iconoclastic athlete. In the cover’s bottom right-hand corner sits a small headline of five words. This magazine cover, among the most famous in American history, manages to confront race, religion, and the Vietnam War in a single conceptual image that is as brutal as it is brilliant.

Two covers designed by George Lois for Esquire. Left: Issue No. 413, April 1968. Right: Issue No. 367, June 1964

Over dozens of Esquire issues, he didn’t just create iconic images; he deployed existing icons in order to subvert, reframe, and recontextualize them. Take his 1964 cover of Kennedy, with a hand photographed in the foreground of the frame, wiping away an imagined tear. This meta visual move adds friction to a static image; it forces us to confront and process tragedy in a new way. It turns the magazine, newsstand price 60 cents, into something that transcends its form—into something more like art.

From Jiffy Lube ads to the campaign for “I Want My MTV” to the boxer Sonny Liston donning a Santa hat on the cover of Esquire, contemporary American culture looks and feels the way it does in part because of Lois’s genius. If you’ve ever been struck by a piece of design in our pages, you might now recognize the traces of his influence. Even if you don’t, I can tell you that it’s there (our December 2019 and November 2021 covers are both valiant attempts at homage). History has no choice but to remember George Lois; he was an integral part of the machine of remembering.

Read original article here

Study Suggests Spins of ‘Brain Water’ Could Mean Our Minds Use Quantum Computation : ScienceAlert

In the ongoing work to realize the full potential of quantum computing, scientists could perhaps try peering into our own brains to see what’s possible: A new study suggests that the brain actually has a lot in common with a quantum computer.

The findings could teach us a lot about the functions of neurons as well as the fundamentals of quantum mechanics. The research might explain, for example, why our brains are still able to outperform supercomputers on certain tasks, such as making decisions or learning new information.

As with much quantum computing research, the study looks at the idea of entanglement – two separate particles being in states that are linked together.

“We adapted an idea, developed for experiments to prove the existence of quantum gravity, whereby you take known quantum systems, which interact with an unknown system,” says physicist Christian Kerskens from the University of Dublin.

“If the known systems entangle, then the unknown must be a quantum system, too. It circumvents the difficulties to find measuring devices for something we know nothing about.”

In other words, the entanglement or relationship between the known systems can only happen if the mediating system in the middle – the unknown system – operates on a quantum level, too. While the unknown system can’t be studied directly, its effects can be observed, as with quantum gravity.

For the purposes of this research, the proton spins of ‘brain water’ (the fluid that builds up in the brain) act as the known system, with custom magnetic resonance imaging (MRI) scans used to non-invasively measure the proton activity. The spin of a particle, which determines its magnetic and electrical properties, is a quantum-mechanical property.

Through this technique, the researchers were able to see signals resembling heartbeat-evoked potentials, which are a type of electroencephalography (EEG) signal. These signals aren’t normally detectable via MRI, and the thinking is that they showed up because the nuclear proton spins in the brain were entangled.

The observations recorded by the team still need to be confirmed by future studies across multiple scientific fields, but the early results look promising for non-classical, quantum happenings in the human brain when it’s active.

“If entanglement is the only possible explanation here then that would mean that brain processes must have interacted with the nuclear spins, mediating the entanglement between the nuclear spins,” says Kerskens.

“As a result, we can deduce that those brain functions must be quantum.”

The brain functions that lit up the MRI readings were also associated with short-term memory and conscious awareness, which suggests the quantum processes – if that’s indeed what they are – play a crucial role in cognition and consciousness, Kerskens says.

What researchers need to do next is to learn more about this unknown quantum system in the brain – and then we might fully understand the workings of the quantum computer that we’re carrying around in our heads.

“Our experiments, performed only 50 meters away from the lecture theatre where Schrödinger presented his famous thoughts about life, may shed light on the mysteries of biology, and on consciousness which scientifically is even harder to grasp,” says Kerskens.

The research has been published in the Journal of Physics Communications.

Read original article here

Live Brain Cells Playing Pong in a Dish Could Illuminate Mind’s Mechanics

Scientists have created a gamer — out of cells, in a lab.

An Australian-led team of researchers placed 800,000 live human and mouse brain cells into a dish, connected them to electrodes and a simulation of the classic game Pong. The scientists then watched as the mini-mind quickly taught itself the game and improved the more it practiced. They were able to follow along by converting the cellular responses into a visual depiction of the game that looks much like the original. 

They call their system DishBrain, and say it proves neurons in a dish could learn and display basic signs of intelligence. The team details the new setup, dubbed synthetic biological intelligence, or SBI, in a study published Wednesday in the journal Neuron. 

Eventually, the authors say, SBI could help unlock longstanding mysteries of brain mechanics and lead to better treatments for certain neurological conditions. “DishBrain offers a simpler approach to test how the brain works and gain insights into debilitating conditions such as epilepsy and dementia,” says Hon Weng Chong, chief executive officer of biotech start-up Cortical Labs.

SBI could also offer an alternative to animal testing, which is often how scientists go about studying the viability of new drugs and therapies. 

“We now have, in principle, the ultimate biomimetic ‘sandbox’ in which to test the effects of drugs and genetic variants — a sandbox constituted by exactly the same computing (neuronal) elements found in your brain and mine,” adds co-author Professor Karl Friston, a theoretical neuroscientist at University College London.

Artificial vs. biological intelligence

The study team found that biological intelligence, aka living brain cells, behaves quite differently from the artificial intelligence that runs on a computer.

“In the past, models of the brain have been developed according to how computer scientists think the brain might work,” says Brett Kagan, chief scientific officer of Cortical Labs and a co-author of the study. “That is usually based on our current understanding of information technology, such as silicon computing… But in truth we don’t really understand how the brain works.”

Interestingly, DishBrain naturally learned to play Pong out of an apparent tendency toward acting on its environment in ways that make it more predictable and less random. In other words, this system behaves much more like a real live brain than AI does.

For example, when DishBrain successfully returned the “ball” in Pong, that resulted in the system being able to better predict where it would move next. If DishBrain failed, it would lose the point and a new point would begin with the computer releasing a ball from a random starting place, and so on. Because DishBrain uses a feedback loop, it seems to get progressively better the more it plays.
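To make that loop concrete, here is a minimal toy sketch in Python of the idea as described above, not Cortical Labs’ actual protocol or code: a hit is followed by a predictable serve and a miss by a random one, so an agent that simply acts on its best prediction ends up playing well, while one that ignores predictability does not. All names and numbers below are invented for illustration.

```python
import random

WIDTH = 8          # discrete court positions 0..7
PADDLE_REACH = 1   # the paddle covers its target position +/- 1

def serve(last_pos, last_hit):
    """Toy feedback rule (invented for illustration): after a hit the ball
    returns along a predictable path, one step over from where it was last;
    after a miss it is served from a completely random, unpredictable spot."""
    if last_hit:
        return (last_pos + 1) % WIDTH
    return random.randrange(WIDTH)

def play(n_rallies=2000, use_prediction=True):
    hits = 0
    pos, hit = random.randrange(WIDTH), False
    predicted = pos
    for _ in range(n_rallies):
        pos = serve(pos, hit)
        paddle = predicted if use_prediction else random.randrange(WIDTH)
        gap = min(abs(paddle - pos), WIDTH - abs(paddle - pos))   # wrap-around distance
        hit = gap <= PADDLE_REACH
        predicted = (pos + 1) % WIDTH   # expect the predictable continuation next time
        hits += hit
    return hits / n_rallies

random.seed(0)
print("prediction-driven paddle:", play(use_prediction=True))    # climbs to near 1.0
print("random paddle           :", play(use_prediction=False))   # stays near 3/8
```

The predicting paddle quickly locks into near-perfect rallies while the random baseline hovers around chance, which is the flavour of self-organisation Friston describes below.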

“This is remarkable because you cannot teach this kind of self-organization, simply because — unlike a pet — these mini brains have no sense of reward and punishment,” Friston adds.

Now Cortical Labs, an Australian biotech startup, is working on a new generation of biological computer chips to create a generalized form of SBI that, as the team writes in its study, “may arrive before artificial general intelligence due to the inherent efficiency and evolutionary advantage of biological systems.”

“We know our brains have the evolutionary advantage of being tuned over hundreds of millions of years for survival,” explains co-author Adeel Razi of Monash University. “Now, it seems we have in our grasp where we can harness this incredibly powerful and cheap biological intelligence.”  

The researchers also tried the system on other simple games. 

“You know when the Google Chrome browser crashes and you get that dinosaur that you can make jump over obstacles (Project Bolan),” Kagan says. “We’ve done that and we’ve seen some nice preliminary results, but we still have more work to do building new environments for custom purposes.”

Next up, the team has plans to show DishBrain a good time. 

“We’re trying to create a dose response curve with ethanol — basically get them ‘drunk’ and see if they play the game more poorly, just as when people drink,” Kagan says.

While we’ll look forward to the results of the drunk DishBrain study, let’s maybe keep those inebriated neurons far away from any self-driving car code. 

Read original article here

Is this proof brain scans can read our minds? Special type of MRI translated brainwaves into images

Your deepest personal life lies between your ears: your experiences, desires and memories, your politics, your emotional problems and mental ailments. 

It’s for you to keep secret or to share. But could scientists soon open up your mind for anyone to read?

Researchers armed with super-high-tech brain scanners and artificial intelligence programs claim they are forging new keys that may unlock our inner worlds.

They are developing technology to read our minds with such accuracy that it may reveal exactly what we’re looking at or imagining, discover our voting and buying intentions, and even reveal why we may be wired for illnesses such as depression and schizophrenia.

In fact, this technology is already being used to understand how our brains work in order to diagnose and treat a range of conditions, for instance enabling surgeons to plan how to operate on brain tumours while sparing as much healthy tissue as possible.

It’s also enabled neurologists and psychologists to map how cognitive functions such as vision, language and memory operate across different brain regions.

Scientists use it to track how our brains produce experiences such as pain, and are developing the technology to fathom problems such as addiction — and to test drugs for treating these and illnesses such as depression.

But now some experts are asking whether the results produced by this technology are robust enough to be relied on, with implications for how patients with mental health problems, for instance, are being treated.

The artificial intelligence system translated each volunteer’s neuron reaction from the fMRI back into computer code to recreate the photographic portrait

The technology at the heart of it all is MRI (magnetic resonance imaging), a scanning system first developed in the 1970s.

MRI uses a strong magnetic field to agitate tiny particles called protons inside our cells. 

These protons respond differently according to the cells’ chemical nature. The differences in how cells respond enable physicians to discriminate between various types of tissues.

One of the most common types of MRI application is called fMRI (functional magnetic resonance imaging), which is used to watch how our brains are operating.

It relies on the fact that when regions of the brain become active, they demand energy in the form of oxygen-rich blood.

Oxygen molecules can be detected by fMRI, so the scanners can see where in the brain our neurons — brain nerve cells — work hardest (and draw most oxygen) while we have thoughts or emotions.
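In analysis terms, “working hardest” means a voxel’s blood-oxygen-level-dependent (BOLD) signal rises and falls in step with the task. The sketch below is a deliberately toy Python version of that logic using simulated data rather than a real scanner pipeline such as SPM or FSL: it correlates every voxel’s time series with an on/off task regressor and reports where the correlation peaks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 200 time points for a 10 x 10 x 10 grid of voxels.
n_t, shape = 200, (10, 10, 10)
task = np.tile([0.0] * 10 + [1.0] * 10, n_t // 20)         # on/off block design

bold = rng.normal(size=(n_t, *shape))                       # baseline noise everywhere
bold[:, 4:6, 4:6, 4:6] += 0.8 * task[:, None, None, None]   # one "active" region draws more oxygen

# Pearson correlation of every voxel's time series with the task regressor.
flat = bold.reshape(n_t, -1)
z_vox = (flat - flat.mean(0)) / flat.std(0)
z_task = (task - task.mean()) / task.std()
corr = (z_vox * z_task[:, None]).mean(0).reshape(shape)

print("most task-linked voxel:", np.unravel_index(corr.argmax(), shape))
print("its correlation with the task:", round(float(corr.max()), 2))
```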

MRI technology is becoming ever more accurate. Late last year scientists working on the Iseult project, a Franco-German MRI machine-building initiative based in Paris, switched on the world’s strongest scanner.

At its heart is an extraordinarily powerful 132-ton magnet rated at 11.7 Tesla. Standard NHS hospital MRIs used for diagnostic scanning are typically 1.5 to 3 Tesla.

This titanic power enables the Iseult scanner to picture things in our brain as small as 100 microns, about the size of our larger individual brain cells.

The MRI can also picture the connections between these brain cells, which are typically some 700 microns long.

Such clarity can enable scientists to see which brain cells are firing, and how they interact within vast networks.

But what do such interactions mean? To find out, investigators are using another cutting-edge technology, artificial intelligence (or algorithms — sets of mathematical instructions in a computer program), to interpret this electrical brain-cell activity.

In January, researchers at Radboud University in the Netherlands published startling results in the Nature Portfolio journal Scientific Reports from an experiment where they showed pictures of faces to two volunteers inside a powerful brain-reading fMRI scanner.

As the volunteers looked at the images, the fMRI scanned the activity of neurons in the areas of their brain responsible for vision. 

The researchers then fed this information into a computer’s artificial intelligence (AI) algorithm.

As you can see from the extreme likeness between the original faces and the portraits, the results are so astonishingly similar as to appear uncanny.

So how did the scientists do this? In order to ‘train’ the AI system, the volunteers had previously been shown a series of other faces while their brains were being scanned — the key is that the pictures they saw were not of real people but were essentially paint-by-numbers images created by a computer: each tiny dot of light or darkness was given a unique computer-program code.

A person undergoing an MRI scan (file image) much like the fMRI which detected the volunteers’ neurons

What the fMRI scan did was detect how the volunteers’ neurons responded to these ‘training’ images. 

The artificial intelligence system then translated each volunteer’s neuron reaction back into computer code to recreate the photographic portrait.

In the test, neither the volunteers nor the AI system had ever seen the faces that were decoded and recreated so accurately.
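A heavily simplified Python sketch of that pipeline follows. Everything in it is simulated, and the linear decoder is an assumption made for illustration: the Radboud team used real fMRI data and a far richer generative model to turn the decoded codes back into photographs. The train/test split mirrors the point above, in that the test faces are never seen during training.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test, n_voxels, n_latent = 180, 20, 500, 32

# Hypothetical setup: each face is summarised by a latent code (the
# "paint-by-numbers" description), and the brain response to it is an
# unknown linear mixture of that code plus noise.
codes = rng.normal(size=(n_train + n_test, n_latent))
mixing = rng.normal(size=(n_latent, n_voxels))
responses = codes @ mixing + 0.5 * rng.normal(size=(n_train + n_test, n_voxels))

# Train a decoder from voxel responses back to latent codes, on training faces only.
decoder = Ridge(alpha=10.0).fit(responses[:n_train], codes[:n_train])

# Decode faces that neither the "volunteer" nor the model saw during training.
decoded = decoder.predict(responses[n_train:])
r = np.corrcoef(decoded.ravel(), codes[n_train:].ravel())[0, 1]
print(f"correlation between true and decoded codes: {r:.2f}")
```

In the real system, the decoded code would then be passed to an image generator to render the reconstructed portrait.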

Thirza Dado, an AI researcher and a cognitive neuroscientist, who led the study, told Good Health that these highly impressive results demonstrate the potential for fMRI/AI systems effectively to read minds in future.

‘I believe we can train the algorithm not only to picture accurately a face you’re looking at, but also any face you imagine vividly, such as your mother’s,’ she says.

‘By developing this technology, it would be fascinating to decode and recreate subjective experiences, perhaps even your dreams.

‘Such technological knowledge could also be incorporated into clinical applications such as communicating with patients who are locked within deep comas.’

Her work is focused on using the technology to help restore vision in people who, through disease or accident, have become blind.

‘We are already developing brain-implant cameras that will stimulate people’s brains so they can see again,’ she says.

In an as-yet unpublished study, macaque monkeys were fitted with camera-vision implants and then underwent fMRI scans while they looked at the facial photographs. 

Thirza Dado’s AI system was able to translate these images back just as accurately as with her human tests, suggesting the camera implants work effectively.

As AI brain-decoding systems become more sophisticated, Thirza Dado believes they could enable police forces to scan witnesses’ brains for memory-pictures of people involved in crimes.

‘In future, we may also be able to look at the ability to picture people’s ideas,’ she says.

Such mind-reading technologies present serious ethical questions about privacy.

Indeed, earlier this year another study showed how computers may, in future, even eavesdrop on what is perhaps the most personal and profound moment of our lives: the thoughts that may flash through the mind around the moment of death (see box below).

U.S. scientists at Ohio State University say they can tell people’s political ideology with an accuracy rate of some 80 per cent using fMRI

But already, U.S. scientists say they can tell people’s political ideology with an accuracy rate of some 80 per cent using fMRI.

In a study published in May involving 174 adults, researchers at Ohio State University were able to predict accurately if they were politically conservative or liberal.

‘Can we understand political behaviour by looking solely at the brain? The answer is a fairly resounding “yes”,’ said study co-author Skyler Cranmer, a professor of political science at Ohio State.

‘The results suggest that the biological and neurological roots of political behaviour run much deeper than we’d thought.’

The study, published in the journal PNAS Nexus, examined how different regions of individuals’ brains communicated with each other, either when looking at pictures or simply doing nothing.

REVEALED, WHAT WE THINK ABOUT IN OUR DYING MOMENTS

Advances in technology and the ability ‘to read’ minds are already testing ethical boundaries.

In February, doctors reported how they’d unintentionally recorded the brain activity of an 87-year-old patient at the point of his death: they were performing an electroencephalogram (EEG) on his brain to study his epileptic seizures when he had a sudden heart attack and died.

In an EEG, sensors are attached to the scalp to pick up electrical signals produced by brain nerve cells as they communicate. This can reveal what activity is occurring in the brain.

Writing in the journal Frontiers in Aging Neuroscience, the doctors explained that because the EEG machine was kept running, they’d recorded the man’s brain activity at the end of his life — and found that we may experience a flood of memories when we die.

For some 30 seconds before and after his heart stopped, the scans showed increased activity in the brain areas associated with memory recall, meditation and dreaming.

Dr Ajmal Zemmar, a neurosurgeon at the University of Louisville in Kentucky, who published the report, speculates: ‘Through generating brainwaves involved in memory retrieval, the brain may be playing a recall of important life events just before we die.’

He adds: ‘Something we may learn from this research is: although our loved ones have their eyes closed and are ready to leave us to rest, their brains may be replaying some of the nicest moments they experienced in their lives.’


A super-computer’s AI system monitored this brain activity and compared it with the volunteers’ self-reported political ideology on a six-point scale from ‘very liberal’ to ‘very conservative’. 

It then identified patterns of brain networking to predict political leanings.
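The general recipe behind a result like that can be sketched in a few lines of Python. This is an illustration on simulated data, not the PNAS Nexus team’s actual features or model: build a region-by-region functional-connectivity matrix for each person, flatten it into a feature vector, and cross-validate a classifier against self-reported ideology.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_regions, n_timepoints = 174, 20, 120

X, y = [], []
for s in range(n_subjects):
    label = s % 2                                     # stand-in for liberal vs conservative
    ts = rng.normal(size=(n_timepoints, n_regions))   # toy regional time series
    ts[:, 0] += 0.6 * label * ts[:, 1]                # fake group difference in coupling
    conn = np.corrcoef(ts.T)                          # region-by-region connectivity matrix
    X.append(conn[np.triu_indices(n_regions, k=1)])   # unique connections as features
    y.append(label)

clf = LogisticRegression(max_iter=2000)
acc = cross_val_score(clf, np.array(X), np.array(y), cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")          # the study reported roughly 80 per cent
```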

Three areas — the amygdala, inferior frontal gyrus and hippocampus — were most strongly associated with political affiliation.

The amygdala is believed to be key in detecting and responding to threats, while the inferior frontal gyrus is key to our understanding and processing language; the hippocampus is central to learning and memory.

While this study did find a link between the brain signatures and political ideologies, it can’t explain what causes what — i.e. is brain pattern the result of the ideology people choose, or did the pattern cause the ideology?

Whatever the case, it’s chilling to think such technology could be developed by authoritarian regimes to detect people’s inner beliefs and punish them for opinions they’ve never voiced.

Yet some commentators argue that the technology’s mind-reading abilities are being seriously overclaimed.

The starkest example is its use as a lie detector, pioneered in 2001 by Daniel Langleben, a professor of psychiatry at Stanford University, California. 

He theorised that the brain has to work harder to tell lies, as it has to construct a story and suppress the truth.

His fMRI studies showed increased activity during deception in areas such as the anterior cingulate cortex, thought to be in charge of monitoring errors, and the dorsal lateral prefrontal cortex, linked to behaviour control.

But its efficacy is yet to be convincingly proven. Moreover, studies, such as one by Plymouth University in 2019, show that MRI lie detectors can be beaten with simple mental evasion techniques: making up new memories about a lie, or focusing mentally on a particular superficial aspect of the story, can alter brain activity patterns on the MRI scans enough to render the detector tests inaccurate.

But the doubts go much further. Researchers have questioned whether MRI scanning can give reliable results about individuals’ mental states.

This throws into question the view that psychologists can accurately infer from fMRI scans patients’ mental conditions and whether, for example, their mood states are happy or depressed, as well as the effectiveness of medications to treat low mood — for instance, as seen by changed activity in a specific area of the brain.

Two years ago, psychologists at Duke University in North Carolina reviewed 56 studies that used repeated fMRI scans of 90 people’s brains and showed that the results were vastly different from test to test, even if the tests were repeated within a few days or weeks.

This means that the fMRI brain scan results of a person completing a memory task or watching a film, for example, could easily be entirely different when tested under the same circumstances a week later, even though they’re feeling and thinking the same.

This lack of consistency means that the scans can’t give reliable data on people’s mental functioning or health, reported the journal Psychological Science.

The study’s lead author, Ahmad Hariri, a professor of psychology and neurology, says: ‘If a measure gives a different value every time it is administered, it can hardly be used to make predictions about a person. Better measures are needed to achieve clinically useful results.’
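Professor Hariri’s point is easy to see in a toy simulation: treat a brain measure as a mix of a stable trait and session-specific noise, “scan” the same people twice, and compute the test-retest correlation. This Python sketch is illustrative only and is not the Duke analysis, which drew on real repeated scans.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people = 90

def test_retest_r(trait_share):
    """Correlation between two 'sessions' of a measure that is part stable
    trait, part session-specific noise (toy illustration only)."""
    trait = rng.normal(size=n_people)
    session1 = trait_share * trait + (1 - trait_share) * rng.normal(size=n_people)
    session2 = trait_share * trait + (1 - trait_share) * rng.normal(size=n_people)
    return np.corrcoef(session1, session2)[0, 1]

print("mostly trait:", round(test_retest_r(0.9), 2))   # a reliable measure
print("mostly noise:", round(test_retest_r(0.3), 2))   # unreliable, as the review reported for fMRI
```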

Researchers have questioned whether MRI scanning can give reliable results about individuals’ mental states

Such concerns were reinforced in June by a major report in the journal Nature, with the researchers arguing that fMRI brain scanning studies produce such highly complex and variable results that even large projects that involve hundreds or thousands of patients are still too small to reliably detect most links between the way people’s brains function and the way they behave.

Scott Marek, an assistant professor of psychiatry at Washington University, discovered the problem when he scanned the brains of 2,000 children to try to establish links between their brain activity and their IQ.

To double-check that his results were consistent, Scott Marek split them into two equal sets and analysed them in the same way. 

If the results were consistent, the two sets would produce broadly the same data. But they did not. They were very different.

‘I was shocked,’ he says.

Even with large studies such as his, the individual brain scans showed such variable results that broad-scale conclusions about relationships between brain activity and behaviour or intelligence could not be made reliably.
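Scott Marek’s split-half check can also be mimicked with simulated data. The sketch below assumes the kind of weak true brain-behaviour correlation such studies tend to find (the exact value here is invented) and asks how often two halves of a 2,000-person sample even agree on the direction of the effect.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_half_sign_agreement(n_children=2000, true_r=0.02, n_repeats=500):
    """How often do the two halves of the sample agree on the sign of a
    brain-behaviour correlation? (Toy illustration, not the actual analysis.)"""
    agree = 0
    for _ in range(n_repeats):
        brain = rng.normal(size=n_children)
        behaviour = true_r * brain + np.sqrt(1 - true_r**2) * rng.normal(size=n_children)
        half = n_children // 2
        r1 = np.corrcoef(brain[:half], behaviour[:half])[0, 1]
        r2 = np.corrcoef(brain[half:], behaviour[half:])[0, 1]
        agree += (r1 > 0) == (r2 > 0)
    return agree / n_repeats

print("weak effect (r = 0.02):  ", split_half_sign_agreement(true_r=0.02))   # not much better than a coin flip
print("strong effect (r = 0.30):", split_half_sign_agreement(true_r=0.30))   # replicates essentially every time
```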

Dr Joanna Moncrieff, a psychiatrist and professor of critical and social psychiatry at University College London, agrees. 

‘While fMRI scans can show something dynamic is going on in a brain, they don’t provide any evidence for establishing what causes this dynamic activity to happen. 

‘The scan shows a brain that is active at that moment, without explaining why.

‘Similarly, no one has ever clinically demonstrated with fMRI scans any identifiable biological mechanism in the brain that consistently underlies depression or other mental disorder,’ she adds.

‘So to claim, as drug researchers do, that they can show in fMRI scans that drugs such as antidepressants, or psychedelics, can rectify mental disorders such as depression makes no sense.’

Karl Friston, a professor of neuroscience at University College London and a global authority on brain imaging, says Scott Marek’s results reveal the complexity in getting reliable results from MRI studies.

Even those that involve thousands of patients can produce apparently convincing but bad results, he told Good Health.

This is due to something Professor Friston calls ‘the fallacy of classical inference’: if there’s lots of data swirling around, it is easier to ‘see’ patterns even though they are coincidental. 

It’s rather like seeing faces in randomly patterned carpets.

Where MRI-scanning science can prove useful, he says, is in deep-dive studies of individuals’ brains to establish how they pass messages around.

‘If we can understand the connectivity between different brain regions and how that might go awry in disorders such as schizophrenia, autism, depression or Parkinson’s, we may be able to understand the failures of this message-passing and find drug therapies to address the problems,’ says Professor Friston.

He adds that while Thirza Dado’s research is definitely reputable science, a fundamental problem is that our brains don’t see things such as faces in photographic style but rather like a Picasso painting: our brains note the side of a nose, an eye, a fringe, and put them together. 

So MRI scans will never be able to pull a real-life picture of a face out of someone’s brain.

But he agrees this approach may be a vast help in creating artificial vision: ‘It’s similar to hearing aids,’ he says. 

‘If you can identify the visual information that matters, you can emphasise it. This could help people with, for example, partial blindness, by finding out what frequencies produce the best representations and enhancing them.’

So reading minds may still be a long way off. But it appears that the super-tech world of MRI and artificial intelligence could one day restore sight to the blind.

Read original article here

Scientists See What People Picture in Their Mind’s Eye

Summary: Using electrocorticogram technology to capture brain waves, researchers found the meaning of what people imagine can be determined from brain wave patterns, even if the image differs from what a person is looking at.

Source: Osaka University

They say a picture is worth a thousand words. Now, researchers from Japan have found that even a mental picture can communicate volumes.

In a study published this month in Communications Biology, researchers from Osaka University have revealed that the meaning of what a person is imagining can be determined from their brain wave pattern, even if the image differs from what the person is looking at.

When we see images in real life, whether we are talking to a friend, watching a movie, or watching a beautiful sunset, our brains take in this visual information in a way that can be detected by a technique called electrocorticogram, which detects patterns of electrical activity in the brain. These patterns are not set in stone, however; they can be changed by what we are paying attention to or imagining at the time.

“Attention is known to modulate neural representations of perceived images,” says lead author of the study Ryohei Fukuma. “However, we didn’t know whether imagining a different image could also change these representations.”

To test this, the researchers developed a new technology, working with patients with epilepsy who already had electrodes implanted in their brains, to record and display electrocorticogram readouts of the images they were imagining. The patients were shown an image of the real-time readout and instructed to mentally picture a different image representing a “landscape,” “human face,” or “word” (for example, thinking of a human face while looking at various types of images) to control the readout.

Electrocorticogram (ECoG) recordings were taken from 17 patients with epilepsy who had implanted subdural cortical electrodes related to visual perception. A decoder was trained to estimate the semantic meaning of the images that the patients were viewing from the intracranial ECoG recording using visual-semantic space. Based on the real-time inferred semantic information with the decoder, an image was displayed on a monitor placed in front of the patient. The patient then tried to display an image with instructed meaning by imagining it. Credit: The researchers

“The results clarified the relationship between brain activities when people look at images versus when they imagine them,” explains Takufumi Yanagisawa, senior author. “The electrocorticogram readouts of the imagined images were distinct from those provoked by the actual images viewed by the patients. They could also be modified to be even more distinct when the patients received real-time feedback.”

The time needed to generate a very clear distinction between the imagined image and the viewed image was different for imagining a “word” and a “landscape,” which could have something to do with the different parts of the brain involved in imagining these two concepts.

“Our findings suggest that a readout image controlled by the subject’s imagery can be inferred by an observer using this technology,” says Fukuma.

Given the accuracy with which this new technology displays images that exist within the subject’s mind, a similar approach could be used to develop a communication device for severely paralyzed patients, such as those with amyotrophic lateral sclerosis. Similar devices already used by some patients with this condition rely on motor control, which degenerates more quickly than visual cortical activity, so an imagery-based device could be highly valuable.
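A stripped-down Python sketch of the decoding step can make the idea concrete. Everything here is simulated: the real system works from intracranial ECoG features and a learned visual-semantic space, whereas this toy version invents both, then regresses “ECoG” features onto semantic vectors and reports whichever category prototype the inferred vector is closest to.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical semantic space: one prototype vector per instructed category.
categories = ["landscape", "human face", "word"]
prototypes = {c: rng.normal(size=16) for c in categories}

# Simulated training data: "ECoG" features are a noisy linear mixture of the
# semantic vector of whatever the patient is viewing.
n_train, n_features = 300, 64
mixing = rng.normal(size=(16, n_features))
train_sem = np.stack([prototypes[categories[i % 3]] for i in range(n_train)])
train_ecog = train_sem @ mixing + 0.5 * rng.normal(size=(n_train, n_features))

decoder = Ridge(alpha=1.0).fit(train_ecog, train_sem)     # ECoG features -> semantic vector

def decode(ecog_features):
    """Return the category whose prototype is closest (by cosine) to the inferred vector."""
    v = decoder.predict(ecog_features[None, :])[0]
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(categories, key=lambda c: cosine(v, prototypes[c]))

# In the closed loop, this decoded meaning (or an image standing for it) is what
# would be shown back on the monitor for the patient to steer by imagery.
test_trial = prototypes["human face"] @ mixing + 0.5 * rng.normal(size=n_features)
print(decode(test_trial))   # expected: "human face"
```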

About this neuroscience research news

Author: Press Office
Source: Osaka University
Contact: Press Office – Osaka University
Image: The image is credited to the researchers

Original Research: Open access.
“Voluntary control of semantic neural representations by imagery with conflicting visual stimulation” by Ryohei Fukuma et al. Communications Biology


Abstract

Voluntary control of semantic neural representations by imagery with conflicting visual stimulation

Neural representations of visual perception are affected by mental imagery and attention. Although attention is known to modulate neural representations, it is unknown how imagery changes neural representations when imagined and perceived images semantically conflict.

We hypothesized that imagining an image would activate a neural representation during its perception even while watching a conflicting image. To test this hypothesis, we developed a closed-loop system to show images inferred from electrocorticograms using a visual semantic space.

The successful control of the feedback images demonstrated that the semantic vector inferred from electrocorticograms became closer to the vector of the imagined category, even while watching images from different categories. Moreover, modulation of the inferred vectors by mental imagery depended asymmetrically on the perceived and imagined categories.

Shared neural representation between mental imagery and perception was still activated by the imagery under semantically conflicting perceptions depending on the semantic category.

Read original article here

New COVID Variant Causing Officials to Lose Their Minds Once Again

It is never going to end. There is a reason why we get a new flu shot every year. There is a reason why some people with the flu shot still get the flu. Viruses mutate! They mutate all the freaking time.

I guess COVID is special because officials freak out when a new COVID mutation comes out. They vomit all the hyperbole they can to keep the sheep scared and in line.

Need proof? When the vaccines came out, normalcy started to come back and politicians like Fauci fell by the wayside.

All of a sudden, the dangerous Delta variant came around, forcing the politicians back to the forefront.

But now the Delta variant is no longer a concern. People are letting down their guard! They’re starting to go back to normal and not paying attention to politicians.

Fauci and politicians are on the verge of losing their celebrity status! So of course a new deadly variant has been discovered.

B.1.1.529

The World Health Organization (WHO) held a special meeting on Friday because of the B.1.1.529 COVID variant that showed up in southern Africa. They’ll need a new name if they want to keep up the panic because that’s a mouthful. It will likely receive a Greek letter.

One scientist described the numerous mutations of this variant as “horrific” and another told the BBC “it was the worst they’d seen”:

It is also incredibly heavily mutated. Prof Tulio de Oliveira, the director of the Centre for Epidemic Response and Innovation in South Africa, said there was an “unusual constellation of mutations” and that it was “very different” to other variants that have circulated.

“This variant did surprise us, it has a big jump on evolution [and] many more mutations than we expected,” he said.

In a media briefing Prof de Oliveira said there were 50 mutations overall and more than 30 on the spike protein, which is the target of most vaccines and the key the virus uses to unlock the doorway into our body’s cells.

Zooming in even further to the receptor binding domain (that’s the part of the virus that makes first contact with our body’s cells), it has 10 mutations compared to just two for the Delta variant that swept the world.

This level of mutation has most likely come from a single patient who was unable to beat the virus.

A lot of mutation doesn’t automatically mean: bad. It is important to know what those mutations are actually doing.

Of course, they worry the vaccine will not work against this variant. Um, duh. Again, why do you think we need a flu shot every single year?

EHRMEHGERD we’re all going to die.

The BBC also had a video explaining why we keep seeing new variants of COVID. We’re not stupid. Everyone knows viruses, like everything else in the world, evolve to survive.

Must Stop ALL TRAVEL

Anyway, the UK, EU, and Israel have already halted air travel from southern Africa.

Because, you know, the lockdowns and travel bans have worked so well in the past.

German Health Minister Jens Spahn said, “The last thing we need is to bring in a new variant that will cause even more problems.”

Israeli Prime Minister Naftali Bennett said, “We are currently on the verge of a state of emergency.” British Health Secretary Sajid Javid declared it’s a “huge international concern!”

The Japanese government will require Japanese nationals returning home from Botswana, Eswatini, Namibia, Lesotho, South Africa, and Zimbabwe to quarantine for 10 days and undergo repeated testing.



In other words, this is never going to end. We all knew once everyone caved, hung onto every word from people like Fauci, and didn’t push back, they would find every excuse to keep their power and relevance.

It’s a shame because when we face a virus like the ones in Outbreak and Contagion, you know, a virus that is a literal death sentence for anyone who catches it, no one will care or pay attention.


Read original article here