Tag Archives: Scans

Walmart customers rush to buy $300 travel essential that scans at the register for just $109… – The US Sun

  1. Walmart shoppers rush to buy $89 home essential scanning at register for just $25… The US Sun
  2. Walmart shoppers rush to buy $448 best-selling outdoor home essential that scans at the register for just… The US Sun
  3. Walmart shoppers rush to buy $355 bike which scans at register for just $124… The US Sun
  4. Walmart shoppers rush to buy must-have gadget appearing for $19 at checkout – claim huge discount on ‘… The US Sun

Read original article here

Walmart shoppers are rushing to buy a $17 kitchen essential that scans at the register for just $5… – The US Sun

  1. Walmart shoppers rush to buy a $180 essential which scans at register for $6 – and it’s cheaper than Ama… The US Sun
  2. Walmart shoppers rushing to buy $114 cleaning gadget that scans for just $25 – and it’s a ‘high chance’ dea… The US Sun
  3. Walmart shoppers rush to buy $25 must-have supplements which scan at register for just $5, 80% off full… The US Sun
  4. Walmart must-buys starting at $14 and what to avoid – including the ‘two-pack’ you can get in basically any… The US Sun


Robot scans that can spot bowel cancer humans miss

Robot scans that can spot bowel cancer humans miss: Scientists hope integrating AI technology into existing colonoscopy equipment could help save more lives

Artificial intelligence may be more effective than the human eye alone at spotting the early signs of bowel cancer.

A new UK trial is investigating whether adding AI technology — which uses computer algorithms to scan and read images — to standard colonoscopy examinations improves the accuracy of these scans.

More than 42,000 people are diagnosed with bowel cancer in the UK each year and 16,000 die from it, making it the second most common cause of cancer death.

Colonoscopies are the ‘gold standard’ way of diagnosing the disease. This is where the large bowel is examined using a camera attached to a thin, flexible tube.


The camera relays live images from inside the bowel on to a screen, allowing the clinician carrying out the procedure to check for pre-cancerous polyps called adenomas — small growths that can be found on the wall of the bowel. It is believed that bowel cancer develops from these polyps and, if detected, they can be removed during the procedure.

However, although colonoscopies are extremely effective, three in every 100 examinations miss a cancer or polyp which might be small, flat or hidden in the folds of the bowel but which goes on to become a cancer, according to the NHS.

Scientists hope that integrating AI technology (which not only reads scans but also learns as it goes along) into existing colonoscopy equipment could help save more lives by boosting the accuracy of the 45-minute procedure, so that more cancers are caught at an early stage when they’re easier to treat.

To try to locate these hard-to-find abnormalities, U.S. researchers have developed an AI box called the GI Genius which connects to colonoscopy equipment and analyses the video footage in real time.


If it spots something unusual, the device creates a green box on the screen pinpointing a precise section of the bowel lining which needs closer inspection and sounds an alert. The medic carrying out the scan will then decide whether to investigate further.
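The device's alerting logic can be pictured with a short sketch. Medtronic has not published the GI Genius's internals, so the class, function names and confidence threshold below are purely illustrative, not the real system:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: int              # top-left corner of the flagged region (pixels)
    y: int
    width: int
    height: int
    confidence: float   # model's confidence that the region is a polyp

def review_frame(detections, threshold=0.5):
    """Mimic the behaviour described above: flag any region of the video
    frame whose polyp confidence exceeds the alert threshold."""
    alerts = []
    for d in detections:
        if d.confidence >= threshold:
            # In the real device this would draw a green box on the
            # endoscopy monitor and sound an audible alert.
            alerts.append((d.x, d.y, d.width, d.height))
    return alerts

frame_detections = [Detection(120, 80, 40, 40, 0.91),
                    Detection(300, 200, 25, 25, 0.32)]
print(review_frame(frame_detections))  # -> [(120, 80, 40, 40)]
```

The key design point is that the device only highlights; as the article notes, the medic still decides whether to investigate further.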

The first UK trial to test the AI device is halfway through screening around 2,000 NHS patients.

Patients enrolled on the trial have either undergone a colonoscopy before or have experienced symptoms such as blood in their stools or significant changes to their bowel habits, which they have reported to their GP; or have taken part in the NHS Bowel Screening Programme (a home testing kit sent to adults aged 60 to 74 in England, and from the age of 50 in Scotland).

Half of those on the trial will undergo a standard colonoscopy; the other half will have their colonoscopy carried out with the AI device connected.

Nine hospitals across England are taking part in the Colo-Detect trial, mostly in the North East, with the study being led by Newcastle University and South Tyneside and Sunderland NHS Foundation Trust.

The trial, funded by U.S. medical device company Medtronic that designed the device, is due to end in April (researchers will evaluate both the clinical and cost-effectiveness of the technology).

Results of the first U.S. trial of the device, published in the American Journal of Gastroenterology last year, showed a 50 per cent reduction in missed polyps when the AI technology was used compared with standard colonoscopy.

Commenting on the new trial, Dr Duncan Gilbert, a consultant clinical oncologist in lower gastrointestinal cancers at University Hospitals Sussex NHS Foundation Trust, said: ‘Colorectal cancer remains a major public health challenge for the UK. Worryingly, it is also becoming more common in younger patients.

‘Screening using colonoscopy to find and remove polyps and early cancers has been shown to save lives and anything that improves the effectiveness of colonoscopy is to be welcomed.

‘Testing new technologies in properly conducted clinical trials such as this is exactly what we need to do and is an example of how NHS clinical research leads the world.’


Withings’ Toilet Sensor Scans Your Pee to Measure Your Health

Most smart devices that measure your health are wearables — smartwatches like the Apple Watch, or Oura’s Ring series. Instead, imagine getting health data by carrying out a bodily function you do multiple times a day: urinating. Soon you’ll be able to do just that with Withings’ U-Scan, a sensor that attaches to your toilet bowl and analyzes your urine each day you use it. Withings unveiled the sensor this week during CES 2023, the world’s largest consumer tech trade show. 



Anyone who’s ever offered up a urine sample at a doctor’s office knows that urine can tell us important things about our health: if we’re dehydrated, if we’re pregnant, if we have an infection and even the health of some of our organs. Withings is homing in on some of these biomarkers with two different versions of its consumer device, available in Europe in the first half of 2023, with plans for US availability following clearance by the US Food and Drug Administration. 


One cartridge made for the U-Scan is meant to monitor nutrition and metabolic information by measuring ketone and vitamin C levels, and testing your urine’s pH (low or high pH can be associated with kidney health and more). 

The second is made for people who want to better track their menstrual cycles, by measuring surges of LH, or luteinizing hormone. LH peaks when ovulation is right around the corner and fertility is likely highest. This cycle cartridge will also measure urine pH. 

At-home urine test strips have already been available to track things like LH surges and ketone levels. And urine tests such as Vivoo’s also pair with an app to give people more insight into their health and education on what measurements may mean. But these are more hands-on than the attach-and-go sensors Withings has developed. 

“You don’t think about it and you just do what you do every day,” Withings CEO Mathieu Letombe told CNET. 

The future of health tracking was right in front of you all along. 



To use it, Withings says the device works best if you attach it to the front of your toilet bowl (which means people who normally pee standing up might also have to sit, or at least get creative). Urine will flow to a small collection inlet, which the company says can differentiate between urine and external liquid, such as toilet water. A thermal sensor detects the presence of urine, and it’s moved to a test pod. When the analysis is finished, waste is released from the device and disappears with a flush.

Results will be routed to your phone via Wi-Fi, and you can read your health insights daily on the Withings’ Health Mate app. 

The device contains a cartridge filled with test strips that’ll last you roughly three months. Oh, and the sensor will be able to tell your “stream” apart from that of visitors, because the U-Scan is able to differentiate based on the “distance and speed of the flow,” Letombe said. 
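The measurement flow the company describes — reject non-urine liquid, attribute the stream to a user by its flow characteristics, then run the analysis — can be sketched as a simple decision sequence. Withings has not published its firmware, so the function, parameters and tolerance below are invented for illustration:

```python
def process_sample(is_urine: bool, flow_speed: float, known_user_speed: float,
                   tolerance: float = 0.2):
    """Walk through the steps the article describes for a single sample."""
    if not is_urine:
        # the thermal sensor reportedly rejects external liquid such as toilet water
        return "discarded"
    # the device is said to tell streams apart by distance and speed of flow
    if abs(flow_speed - known_user_speed) > tolerance:
        return "visitor: not analysed"
    # analysis runs in the test pod; results later sync to the app over Wi-Fi
    return "routed to test pod"

print(process_sample(True, 1.0, 1.05))   # prints: routed to test pod
print(process_sample(False, 1.0, 1.0))   # prints: discarded
```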

Because it is not cleared by the FDA in the US yet, there is no price point for the U-Scan right now. You’ll be able to get either the U-Scan Nutri Balance or Cycle Sync cartridges — or both if you want to get even more data — in Europe for 500 euros (approximately $527 at present) later this year. Withings is confident that the first two consumer sensors are just the beginning: The company has hopes for more medical devices in the future, adding to the long list of smartwatches, wearable sensors and other devices that funnel our health into data points. 

This product has been selected as one of the best products of CES 2023.

The information contained in this article is for educational and informational purposes only and is not intended as health or medical advice. Always consult a physician or other qualified health provider regarding any questions you may have about a medical condition or health objectives.


CT scans of toothed bird fossil lead to jaw-dropping discovery

Fossil experts have cooked the goose of a key tenet in avian evolution after finding a premodern bird from more than 65m years ago that could move its beak like modern fowl.

The toothy animal was discovered in the 1990s by an amateur fossil collector at a quarry in Belgium and dates to about 66.7m years ago – shortly before the asteroid strike that wiped out non-avian dinosaurs.

While the fossil was first described in a study about 20 years ago, researchers re-examining the specimen say they have made an unexpected discovery: the animal had a mobile palate.

“If you imagine how we open our mouths, the only thing we’re able to do is [move] our lower jaw. Our upper jaw is totally fused to our skull – it’s completely immobile,” said Dr Daniel Field, senior author of the research from the University of Cambridge.

Non-avian dinosaurs, including tyrannosaurs, also had a fused palate, as do a small number of modern birds such as ostriches and cassowaries. By contrast, the vast majority of modern birds including chickens, ducks and parrots are able to move both their lower and upper jaw independently from the rest of the skull and each other.

That, says Field, makes the beak more flexible and dextrous, helping with preening, nest building and finding food. “That is a really important innovation in the evolutionary history of birds. But it was always thought to be a relatively recent innovation,” he said.

“The assumption has always been … that the ancestral condition for all modern birds was this fused-up condition typified by ostriches and their relatives just because it seems simpler and more reminiscent of non-bird reptiles,” Field added.

Birds with a mobile palate are called neognaths, or “new jaws”, while those with a fused palate are palaeognaths, or “old jaws”.

The study, which was published in the journal Nature, is expected to ruffle feathers, not only for suggesting the mobile palate predates the origin of modern birds but that the immediate ancestors of ostriches and their relatives went on to evolve a fused palate.

“Why the ancestors of ostriches and their relatives would have lost that beneficial conformation of the palate is, at this point, still a mystery to me,” said Field.

The discovery was made when Field and colleagues examined the fossils using CT scanning techniques. The researchers discovered that a bone previously thought to be from the animal’s shoulder was actually from its palate.

Palate of Janavis finalidens compared with that of a pheasant and an ostrich. Photograph: Dr Juan Benito and Daniel Field, University of Cambridge

The team have labelled the newly discovered animal Janavis finalidens in reference to Janus, the Roman god who looked both backwards and forwards, and as a nod to the animal’s place on the bird family tree. The portmanteau of the Latin words for “final” and “teeth” reflects the existence of Janavis shortly before toothed birds were wiped out in the subsequent mass extinction.

The site of its discovery means it lived around the same time and place as the toothless “wonderchicken”, the oldest known modern bird, although at 1.5kg (3.3lb), Janavis would have weighed almost four times as much.

While the palate bones of wonderchicken have not been preserved, Field said he was confident they would have been similar to those of Janavis. However, he added that the size difference of the creatures could explain why relatives of wonderchicken survived the catastrophe 66m years ago, but those of Janavis did not.

“We think that this mass extinction event was highly size selective,” he said. “Large bodied animals in terrestrial environments did terribly across this mass extinction event.”

Prof Mike Benton, a palaeontologist at the University of Bristol who was not part of the research, said the study raised questions of the position on the bird family tree of three unusual, extinct groups that lived after the mass extinction including Dromornithidae, known as demon ducks, and Gastornithidae, thought to be a type of giant flightless fowl.

“If this palate feature is primitive, I see that [these groups] could have had earlier origins and perhaps survived from Cretaceous onwards,” he said.


Experts say breast cancer scans are normally free – but not if you have symptoms or are under 40

After one 36-year-old woman was charged nearly $18,000 for a breast cancer check-up, concerns have been raised over how much it should really cost.

Breast cancer is the most common cancer among women in the United States — with 285,000 cases and 42,000 deaths a year — but if it is caught in the early stages almost every patient survives.

Doctors urge all women aged 50 to 74 years old to get screened for the disease once every other year, with those at higher risk advised to start getting the tests in their 40s.

Experts say the screening test — a mammogram — is generally free for all women who are more than 40 years old and have health insurance. But those who have symptoms or no coverage may need to fork out $200 to $400.

If no cancer is detected then no further action is needed. But women who have a warning sign spotted normally need to get their results confirmed with a biopsy, which has a cash price of between $1,000 and $2,000.

All the costs above are given as an average cash price. But these may be significantly lower for some women depending on their health insurance plans and the amount of their deductible they have paid. Some insurance plans – such as short term types – may not offer free breast cancer screenings

The above graph shows new cases of breast cancer among women as a rate per 100,000 people (light green line) and the death rate (as a dark green line). It reveals that deaths have been falling very gradually

The above chart shows the age groups where women are most likely to have a breast cancer diagnosis. This is around the age of 63 years. Medicare – for those over 65 years old – offers free breast scans

Women who go for breast cancer screenings will initially be sent for a mammogram, where low-dose X-rays are fired into the breasts to check for unusual growths or troubling changes to the tissue.

This is normally given as a 2D scan, where the tops and bottoms of the breasts are checked. But some hospitals also offer a 3D scan, which will look at the sides of the breasts as well. Those under 40 years old may be offered an MRI, because their tissue may be too dense for the mammogram to penetrate.

Patients’ results are usually available about a week or two later, with most scans not detecting anything untoward that requires a further check.

Breast cancer is one of the most common cancers in the world 

Breast cancer is one of the most common cancers in the world, and the second most common among women in the U.S. behind skin cancer, with 285,000 cases and 42,000 deaths a year.

What is breast cancer?

Breast cancer develops from a cancerous cell which develops in the lining of a duct or lobule in one of the breasts.

Most cases develop in women over the age of 50, but younger women are sometimes affected. Breast cancer can develop in men though this is rare.

Staging means how big the cancer is and whether it has spread. Stage 1 is the earliest stage and stage 4 means the cancer has spread to another part of the body.

What causes breast cancer?

A cancerous tumour starts from one abnormal cell. The exact reason why a cell becomes cancerous is unclear. It is thought that something damages or alters certain genes in the cell. This makes the cell abnormal and multiply ‘out of control’.

Although breast cancer can develop for no apparent reason, there are some risk factors that can increase the chance of developing breast cancer, such as genetics.

What are the symptoms?

The usual first symptom is a painless lump in the breast, although most breast lumps are not cancerous and are fluid-filled cysts, which are benign.

The first place that breast cancer usually spreads to is the lymph nodes in the armpit. If this occurs you will develop a swelling or lump in an armpit.

How is breast cancer treated?

Treatment options which may be considered include surgery, chemotherapy, radiotherapy and hormone treatment. Often a combination of two or more of these treatments is used. 

  • Surgery: Breast-conserving surgery or the removal of the affected breast depending on the size of the tumour. 
  • Radiotherapy: A treatment which uses high energy beams of radiation focussed on cancerous tissue. This kills cancer cells, or stops cancer cells from multiplying. It is mainly used in addition to surgery. 
  • Chemotherapy: A treatment of cancer by using anti-cancer drugs which kill cancer cells, or stop them from multiplying.

How successful is treatment?

The outlook is best in those who are diagnosed when the cancer is still small, and has not spread. Surgical removal of a tumour in an early stage may then give a good chance of cure.

The routine mammography offered to women between the ages of 50 and 70 means more breast cancers are being diagnosed and treated at an early stage.


But about one in ten will be called back to have a biopsy, to double-check something on the mammogram that has raised concerns. This involves a small piece of breast tissue being removed — often via a thin hollow needle — which is then analyzed in the lab to check for cancer.

Doctors say not to be too concerned if this happens, with the National Breast Cancer Foundation — a non-profit based in Texas — saying only one in five of these leads to a breast cancer diagnosis.

Dr Ge Bai, a health accounting expert at Johns Hopkins University, told DailyMail.com that mammograms are generally free every year or every other year for insured women over 40 years old.

She said this was thanks to the Affordable Care Act (ACA) — also known as Obamacare — enacted in 2010 which required all health insurance companies to start offering them.

But the free offer comes with a number of caveats, Bai warned.

She said patients would be charged for the screening if they had any symptoms of breast cancer, were younger than 40 years, went more than once a year, or had the 3D-version instead of the 2D type.

Some hospitals are pushing 3D mammograms — which look at the sides of breasts as well as the tops and bottoms — but the American Cancer Society says there is no evidence that these are more likely to detect a cancer that would otherwise be missed.

She added that some insurance plans would not comply with the ACA, however, and would not offer the free screening. This can vary by state.

Medicaid — the U.S. health insurance program for poorer Americans with 88 million users — offers the free scans in most states, but will not cover them for better-paid women living in areas that have not adopted the ACA expansion — such as Texas, Florida and Alabama. Medicare — which insures Americans over 65 years old — also offers the free scans. 

Any further tests — such as biopsies — will not be offered for free, and will need to be paid for by the patient or their insurance.

Treatment costs also depend on the insurance status. 

Bai told DailyMail.com: ‘For a woman over 40, who is employed, if your employer is complying then you should pay nothing for a breast cancer scan.

‘But in many cases this is not the case, and you must then pay whatever amount it is before you reach your deductible.’

A spokeswoman for the American Cancer Society told DailyMail.com: ‘Generally, all ACA compliant plans provide no-cost coverage for breast cancer screenings (and other preventive services) based on the U.S. Preventive Services Task Force (USPSTF) guidelines.

‘[However,] there are a number of specific caveats and a host of non-compliant plans for which varying rules apply and where state variation may be more likely.’

They said that those under a short-term plan or one from Christian Sharing Ministries — which offer cost sharing for their members — may not be able to get screened for the cancer for free.

Bai urged all women worried about having to fork out for a breast cancer scan to ask hospitals for their cash prices — which are almost always lower than those offered to insurers, and which hospitals are required to publish, although some still do not comply.

Asked how much a mammogram and biopsy should be, Bai pointed to a research letter that she helped write which was published last year in the journal JAMA Network Open.

For the paper the researchers trawled through data from 900 hospitals on more than 70 procedures to establish the cash prices for each.

Results showed a mammogram for both breasts had a cash price of about $277, although this ranged from $190 to $400 depending on the hospital.

The team did not look at a biopsy for breast cancer, but Bai said the cost would likely be similar to biopsies done for other cancers such as bowel cancer. The cash price for this is about $2,000, ranging from $1,200 to $3,000.

Dani Yuengling, now 36 and a HR worker in South Carolina, was handed a bill totalling $17,979 (pictured) for an ultrasound-guided breast biopsy. After searching the hospital’s own website she had expected to pay around $1,400 for the procedure

Yuengling — who wanted to get a lump on her right breast checked — said she was left unable to sleep and suffering migraines by the bill. She has refused to return to the hospital for a follow-up

Insurance plans may offer a cheaper price for patients, as they may only be needed to pay a co-pay — a fixed amount to be paid for a covered health service — and part of their deductible — the amount patients pay before their health insurance kicks in.
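The interplay of deductible, co-pay and coinsurance described above can be sketched with a toy calculation. The dollar figures are invented; real plans add many complications (networks, out-of-pocket maximums, allowed amounts):

```python
def out_of_pocket(bill, deductible_remaining, copay, coinsurance=0.0):
    """Patient pays any remaining deductible first, plus a fixed co-pay,
    plus a coinsurance share of whatever the insurer then covers."""
    deductible_part = min(bill, deductible_remaining)
    covered = bill - deductible_part
    return deductible_part + copay + covered * coinsurance

# e.g. a $277 mammogram with $100 of deductible left and a $25 co-pay:
print(out_of_pocket(277, 100, 25))  # -> 125.0

# an ACA-compliant screening mammogram (no deductible, no co-pay) costs nothing:
print(out_of_pocket(277, 0, 0))     # -> 0.0
```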

Latest figures suggest that 76.4 percent of American women aged 50 to 74 years old are screened for breast cancer every year.

This is below the 80 percent target set by the federal Government.

President Joe Biden has made it one of the hallmarks of his presidency to fight a war on cancer, saying he aims to halve fatalities from the disease within the next 25 years.

This will be done in part through developing and rolling out screening tests that can use a single blood swab to detect cancers early, he said.

It comes after Dani Yuengling, from Conway in South Carolina, came forward last month to say she had been charged $18,000 for a breast biopsy despite already being insured.

The 36-year-old went to see her doctor early this year after noticing a lump on her right breast, fearing the worst because her mother had died from the disease five years earlier.

The HR worker was referred to the Grand Strand Medical Center in Myrtle Beach for a biopsy, with a quick check of the hospital’s expenses calculator suggesting she would be charged $1,400. Yuengling hoped her health insurance with Cigna, one of the nation’s largest, would help offset the costs.

But after having the test in February — and no cancer being found — she received a bill for $17,979. After ringing the hospital she was offered a 36 percent discount — taking it to $11,500 — but she is now refusing to return for a follow-up check.

Experts advised patients to always ask their healthcare providers for the cash price of a procedure, which is almost always lower than that offered to insurance. A spokeswoman for the Conway Medical Center, just 14 miles from the hospital where Yuengling was scanned, said they would charge $2,100 cash price for the same procedure.


Is this proof brain scans can read our minds? Special type of MRI translated brainwaves into images

Your deepest personal life lies between your ears: your experiences, desires and memories, your politics, your emotional problems and mental ailments. 

It’s for you to keep secret or to share. But could scientists soon open up your mind for anyone to read?

Researchers armed with super-high-tech brain scanners and artificial intelligence programs claim they are forging new keys that may unlock our inner worlds.

They are developing technology to read our minds with such accuracy that it may reveal exactly what we’re looking at or imagining, discover our voting and buying intentions, and even reveal why we may be wired for illnesses such as depression and schizophrenia.

In fact, this technology is already being used to understand how our brains work in order to diagnose and treat a range of conditions, for instance enabling surgeons to plan how to operate on brain tumours while sparing as much healthy tissue as possible.

It’s also enabled neurologists and psychologists to map how cognitive functions such as vision, language and memory operate across different brain regions.

Scientists use it to track how our brains produce experiences such as pain, and are developing the technology to fathom problems such as addiction — and to test drugs for treating these and illnesses such as depression.

But now some experts are asking whether the results produced by this technology are robust enough to be relied on, with implications for how patients with mental health problems, for instance, are being treated.


The technology at the heart of it all is MRI (magnetic resonance imaging), a scanning system first developed in the 1970s.

MRI uses a strong magnetic field to agitate tiny particles called protons inside our cells. 

These protons respond differently according to the cells’ chemical nature. The differences in how cells respond enable physicians to discriminate between various types of tissues.

One of the most common types of MRI application is called fMRI (functional magnetic resonance imaging), which is used to watch how our brains are operating.

It relies on the fact that when regions of the brain become active, they demand energy in the form of oxygen-rich blood.

Oxygen molecules can be detected by fMRI, so the scanners can see where in the brain our neurons — brain nerve cells — work hardest (and draw most oxygen) while we have thoughts or emotions.
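In essence, fMRI analysis looks for voxels — small volumes of brain tissue — whose signal rises and falls with a task. A toy simulation of that principle follows; it is not a real analysis pipeline, which would involve haemodynamic modelling, motion correction and careful statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
task = np.tile([0] * 10 + [1] * 10, 5).astype(float)  # alternating rest/task blocks
n_voxels = 5

# simulated voxel time series: pure noise, except voxel 2 tracks the task
noise = rng.normal(0, 0.5, (n_voxels, task.size))
signal = np.zeros((n_voxels, task.size))
signal[2] = 1.5 * task
data = signal + noise

# correlate each voxel's time series with the task regressor
corr = np.array([np.corrcoef(v, task)[0, 1] for v in data])
active = np.where(corr > 0.5)[0]
print(active)  # voxel 2 should stand out as "active"
```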

MRI technology is becoming ever more accurate. Late last year scientists working on the Iseult project, a Franco-German MRI machine-building initiative based in Paris, switched on the world’s strongest scanner.

At its heart is an extraordinarily powerful 132-ton magnet rated at 11.7 Tesla. Standard NHS hospital MRIs used for diagnostic scanning are typically 1.5 to 3 Tesla.

This titanic power enables the Iseult scanner to picture things in our brain as small as 100 microns, the size of our larger individual brain cells. 

The MRI can also picture the connections between these brain cells, which are typically some 700 microns long.

Such clarity can enable scientists to see which brain cells are firing, and how they interact within vast networks.

But what do such interactions mean? To find out, investigators are using another cutting-edge technology, artificial intelligence (or algorithms — sets of mathematical instructions in a computer program), to interpret this electrical brain-cell activity.

In January, researchers at Radboud University in the Netherlands published startling results in the journal Scientific Reports from an experiment where they showed pictures of faces to two volunteers inside a powerful brain-reading fMRI scanner.

As the volunteers looked at the images, the fMRI scanned the activity of neurons in the areas of their brain responsible for vision. 

The researchers then fed this information into a computer’s artificial intelligence (AI) algorithm.

As you can see from the extreme likeness between the original faces and the portraits, the results are so astonishingly similar as to appear uncanny.

So how did the scientists do this? In order to ‘train’ the AI system, the volunteers had previously been shown a series of other faces while their brains were being scanned — the key is that the photographic pictures they saw were not of real people, but essentially a paint-by-numbers picture created by a computer: each tiny dot of light or darkness was given a unique computer-program code.

A person undergoing an MRI scan (file image), much like the fMRI scanner used to record the volunteers’ neuron activity

What the fMRI scan did was detect how the volunteers’ neurons responded to these ‘training’ images. 

The artificial intelligence system then translated each volunteer’s neuron reaction back into computer code to recreate the photographic portrait.

In the test, neither the volunteers nor the AI system had ever seen the faces that were decoded and recreated so accurately.
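The training-then-decoding procedure can be sketched with simulated data. The Radboud team used a far richer generative model and real brain recordings; the simple linear mapping below only illustrates the core idea of learning to translate voxel responses back into an image's numeric code:

```python
import numpy as np

rng = np.random.default_rng(1)
latent_dim, n_voxels, n_train = 8, 50, 200

# stand-in for how the brain encodes images (unknown in reality)
W_true = rng.normal(size=(n_voxels, latent_dim))
codes = rng.normal(size=(n_train, latent_dim))          # "paint-by-numbers" image codes
responses = codes @ W_true.T + rng.normal(0, 0.01, (n_train, n_voxels))

# "training": ridge regression from voxel responses back to image codes
lam = 0.1
A = responses.T @ responses + lam * np.eye(n_voxels)
B = np.linalg.solve(A, responses.T @ codes)

# "testing": decode a code the mapping has never seen
new_code = rng.normal(size=latent_dim)
decoded = (new_code @ W_true.T) @ B
print(np.round(np.abs(decoded - new_code).max(), 3))    # reconstruction error is tiny
```

In the real study the decoded code was then rendered back into a photographic portrait by the generative model, which is what produced the uncanny likenesses.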

Thirza Dado, an AI researcher and a cognitive neuroscientist, who led the study, told Good Health that these highly impressive results demonstrate the potential for fMRI/AI systems effectively to read minds in future.

‘I believe we can train the algorithm not only to picture accurately a face you’re looking at, but also any face you imagine vividly, such as your mother’s,’ she says.

‘By developing this technology, it would be fascinating to decode and recreate subjective experiences, perhaps even your dreams.

‘Such technological knowledge could also be incorporated into clinical applications such as communicating with patients who are locked within deep comas.’

Her work is focused on using the technology to help restore vision in people who, through disease or accident, have become blind.

‘We are already developing brain-implant cameras that will stimulate people’s brains so they can see again,’ she says.

In an as-yet unpublished study, macaque monkeys were fitted with camera-vision implants and then underwent fMRI scans while they looked at the facial photographs. 

Thirza Dado’s AI system was able to translate these images back just as accurately as with her human tests, suggesting the camera implants work effectively.

As AI brain-decoding systems become more sophisticated, Thirza Dado believes they could enable police forces to scan witnesses’ brains for memory-pictures of people involved in crimes.

‘In future, we may also be able to look at the ability to picture people’s ideas,’ she says.

Such mind-reading technologies present serious ethical questions about privacy.

Indeed, earlier this year another study showed how computers may, in future, even eavesdrop on what is perhaps the most personal and profound moment of our lives: the thoughts that may flash through the mind around the moment of death (see box below).

But already, U.S. scientists say they can tell people’s political ideology with an accuracy rate of some 80 per cent using fMRI.

In a study published in May involving 174 adults, researchers at Ohio State University were able to predict accurately if they were politically conservative or liberal.

‘Can we understand political behaviour by looking solely at the brain? The answer is a fairly resounding “yes”,’ said study co-author Skyler Cranmer, a professor of political science at Ohio State.

‘The results suggest that the biological and neurological roots of political behaviour run much deeper than we’d thought.’

The study, published in the journal PNAS Nexus, examined how different regions of individuals’ brains communicated with each other, either when looking at pictures or simply doing nothing.

REVEALED, WHAT WE THINK ABOUT IN OUR DYING MOMENTS

Advances in technology and the ability ‘to read’ minds are already testing ethical boundaries.

In February, doctors reported how they’d unintentionally recorded the brain activity of an 87‑year-old patient at the point of his death: they had been performing an electroencephalogram (EEG) to study his epileptic seizures when he had a sudden heart attack and died.

In an EEG, sensors are attached to the scalp to pick up electrical signals produced by brain nerve cells as they communicate. This can reveal what activity is occurring in the brain.

Writing in the journal Frontiers in Aging Neuroscience, the doctors explained that because the EEG machine was kept running, they’d recorded the man’s brain activity at the end of his life — and found that we may experience a flood of memories when we die.

For some 30 seconds before and after his heart stopped, the scans showed increased activity in the brain areas associated with memory recall, meditation and dreaming.

Dr Ajmal Zemmar, a neurosurgeon at the University of Louisville in Kentucky, who published the report, speculates: ‘Through generating brainwaves involved in memory retrieval, the brain may be playing a recall of important life events just before we die.’

He adds: ‘Something we may learn from this research is: although our loved ones have their eyes closed and are ready to leave us to rest, their brains may be replaying some of the nicest moments they experienced in their lives.’

A super-computer’s AI system monitored this brain activity and compared it with the volunteers’ self-reported political ideology on a six-point scale from ‘very liberal’ to ‘very conservative’. 

It then identified patterns of brain networking to predict political leanings.

Three areas — the amygdala, inferior frontal gyrus and hippocampus — were most strongly associated with political affiliation.

The amygdala is believed to be key in detecting and responding to threats, while the inferior frontal gyrus is key to understanding and processing language; the hippocampus is central to learning and memory.

While this study did find a link between the brain signatures and political ideologies, it can’t establish what causes what: is the brain pattern a result of the ideology people hold, or does the pattern cause the ideology?

Whatever the case, it’s chilling to think such technology could be developed by authoritarian regimes to detect people’s inner beliefs and punish them for opinions they’ve never voiced.

Yet some commentators argue that the technology’s mind-reading abilities are being seriously overclaimed.

The starkest example is its use as a lie detector, pioneered in 2001 by Daniel Langleben, a professor of psychiatry at Stanford University, California. 

He theorised that the brain has to work harder to tell lies, as it has to construct a story and suppress the truth.

His fMRI studies showed increased activity during deception in areas such as the anterior cingulate cortex, thought to be in charge of monitoring errors, and the dorsal lateral prefrontal cortex, linked to behaviour control.

But its efficacy is yet to be convincingly proven. Moreover, studies such as one by Plymouth University in 2019 show that MRI lie detectors can be beaten with simple mental evasion techniques: making up new memories about a lie, or focusing mentally on a superficial aspect of the story, can alter brain activity patterns enough to render the detector tests inaccurate.

But the doubts go much further. Researchers have questioned whether MRI scanning can give reliable results about individuals’ mental states.

This calls into question the view that psychologists can accurately infer patients’ mental conditions from fMRI scans, for example whether their mood is happy or depressed, or judge the effectiveness of medications to treat low mood from changed activity in a specific area of the brain.

Two years ago, psychologists at Duke University in North Carolina reviewed 56 studies that used repeated fMRI scans of 90 people’s brains and showed that the results were vastly different from test to test, even if the tests were repeated within a few days or weeks.

This means that the fMRI brain scan results of a person completing a memory task or watching a film, for example, could easily be entirely different when tested under the same circumstances a week later, even though they’re feeling and thinking the same.

This lack of consistency means that the scans can’t give reliable data on people’s mental functioning or health, reported the journal Psychological Science.

The study’s lead author, Ahmad Hariri, a professor of psychology and neurology, says: ‘If a measure gives a different value every time it is administered, it can hardly be used to make predictions about a person. Better measures are needed to achieve clinically useful results.’

Such concerns were reinforced in June by a major report in the journal Nature, whose researchers argued that fMRI brain-scanning studies produce such complex and variable results that even large projects involving hundreds or thousands of patients are still too small to reliably detect most links between how people’s brains function and how they behave.

Scott Marek, an assistant professor of psychiatry at Washington University, discovered the problem when he scanned the brains of 2,000 children to try to establish links between their brain activity and their IQ.

To double-check that his results were consistent, Marek split the scans into two equal sets and analysed each in the same way. 

If the results were consistent, the two sets would produce broadly the same data. But they did not. They were very different.

‘I was shocked,’ he says.

Even with large studies such as his, the individual brain scans showed such variable results that broad-scale conclusions about relationships between brain activity and behaviour or intelligence could not be made reliably.

Dr Joanna Moncrieff, a psychiatrist and professor of critical and social psychiatry at University College London, agrees. 

‘While fMRI scans can show something dynamic is going on in a brain, they don’t provide any evidence for establishing what causes this dynamic activity to happen. 

‘The scan shows a brain that is active at that moment, without explaining why.

‘Similarly, no one has ever clinically demonstrated with fMRI scans any identifiable biological mechanism in the brain that consistently underlies depression or other mental disorder,’ she adds.

‘So to claim, as drug researchers do, that they can show in fMRI scans that drugs such as antidepressants, or psychedelics, can rectify mental disorders such as depression makes no sense.’

Karl Friston, a professor of neuroscience at University College London and a global authority on brain imaging, says Scott Marek’s results reveal the complexity in getting reliable results from MRI studies.

Even those that involve thousands of patients can produce apparently convincing but bad results, he told Good Health.

This is due to something Professor Friston calls ‘the fallacy of classical inference’: if there’s lots of data swirling around, it is easier to ‘see’ patterns even though they are coincidental. 

It’s rather like seeing faces in randomly patterned carpets.
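A toy simulation makes the fallacy concrete (all numbers here are invented for illustration, not taken from any real study): test enough pure-noise ‘brain measures’ against a behaviour score and some will look impressively strong by chance alone.

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects = 25      # a typical small fMRI study
n_regions = 1000     # brain measures tested against one behaviour

# Pure noise: brain measures and behaviour are unrelated by construction.
brain = rng.standard_normal((n_subjects, n_regions))
behaviour = rng.standard_normal(n_subjects)

# Pearson r of every region with behaviour, via z-scored columns.
bz = (behaviour - behaviour.mean()) / behaviour.std()
rz = (brain - brain.mean(axis=0)) / brain.std(axis=0)
r = (rz * bz[:, None]).mean(axis=0)

# With 25 subjects, |r| > 0.4 is roughly the p < .05 cutoff; by chance
# alone, dozens of null regions cross it.
n_hits = int((np.abs(r) > 0.4).sum())
print(n_hits, "of", n_regions, "pure-noise correlations look 'significant'")
```

Around five per cent of the null correlations clear the threshold, which is exactly the pattern-in-the-carpet effect: plenty of apparently strong findings, none of them real.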

Where MRI-scanning science can prove useful, he says, is in deep-dive studies of individuals’ brains to establish how they pass messages around.

‘If we can understand the connectivity between different brain regions and how that might go awry in disorders such as schizophrenia, autism, depression or Parkinson’s, we may be able to understand the failures of this message-passing and find drug therapies to address the problems,’ says Professor Friston.

He adds that while Thirza Dado’s research is definitely reputable science, a fundamental problem is that our brains don’t see things such as faces in photographic style but rather like a Picasso painting: our brains note the side of a nose, an eye, a fringe, and put them together. 

So MRI scans will never be able to pull a real-life picture of a face out of someone’s brain.

But he agrees this approach may be a vast help in creating artificial vision: ‘It’s similar to hearing aids,’ he says. 

‘If you can identify the visual information that matters, you can emphasise it. This could help people with, for example, partial blindness, by finding out what frequencies produce the best representations and enhancing them.’

So reading minds may still be a long way off. But it appears that the super-tech world of MRI and artificial intelligence could one day restore sight to the blind.

Read original article here

How Not to Use Brain Scans in Neuroscience

Summary: While neuroimaging may be a standard in neuroscience and psychology research, a new study says researchers are massively underestimating how large the study sample must be for a neuroimaging study to produce reliable findings.

Source: University of Pittsburgh

What does it take to know a person?

If you’ve seen how a friend acts across different domains of their life, you might reasonably say you know who they are. Compare that to watching an interview with a celebrity — maybe you can claim some knowledge about them, but a single observation of a stranger can only tell you so much.

Yet a similar idea — that a lone snapshot of a brain can tell you about an individual’s personality or mental health — has been the basis of decades of neuroscience studies.

That approach was punctured by a paper in Nature earlier this year showing that scientists have massively underestimated how large such studies must be to produce reliable findings.

“The more we learn about who we are as people, the more we learn that, on average, we’re much more similar than we are different — and so understanding those differences is really challenging,” said Brenden Tervo-Clemmens (A&S ’21G), now a postdoctoral fellow at Massachusetts General Hospital and Harvard Medical School, who co-led the multi-institutional research as a clinical psychology PhD student at Pitt.

At the center of the research is MRI (magnetic resonance imaging) brain scans. While invaluable for diagnosing brain conditions, they’ve also been used by researchers to draw links between a person’s brain structure and aspects of their personality and mental health.

Tervo-Clemmens and his colleagues call this technique brain-wide association studies, or BWAS, in a nod to “GWAS” studies that attempt to decipher the often-tiny effects of genes from massive datasets (as seen in dubious science headlines announcing “a gene for depression” or “a gene for intelligence”).

“The approach is similar: Here’s one profile of you biologically, how well can we determine the complexity of your human experience?” Tervo-Clemmens said. “And the answer is, usually not very well.”

A typical study of the kind would include somewhere around 25 participants, due in part to the high cost of running scans. But, Tervo-Clemmens and his colleagues showed, scientists would need to scan the brains of more than 1,000 people to be confident that the connections they find aren’t just a statistical mirage.

Reaching that conclusion required getting a far broader view of the field than was possible until recently. Along with colleagues at a number of institutions as well as his advisor, Pitt Professor of Psychiatry Beatriz Luna, Tervo-Clemmens combined three recent publicly available studies that together included MRI data from around 50,000 participants.

Using this massive body of information, the team simulated the process of science, selecting groups of the scans at random as if they were patients recruited to a study. By repeating that process over and over, the researchers could figure out how likely it is that any given number of scans would produce a misleading result simply due to chance — and how many participants it takes for a study to be reliable.
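That resampling logic can be sketched in a few lines of Python. This toy version uses synthetic data with an assumed population of 50,000 and an assumed weak true correlation of r = 0.06, not the actual MRI datasets:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the pooled datasets: 50,000 'people' with a
# weak true brain-behaviour correlation. Population size and true r
# are assumptions for illustration only.
N, true_r = 50_000, 0.06
brain = rng.standard_normal(N)
behaviour = true_r * brain + np.sqrt(1 - true_r**2) * rng.standard_normal(N)

def simulate(n, trials=1_000):
    """Draw `trials` random 'studies' of size n; report how often the
    observed correlation is significant, and how often a significant
    result is at least double the true effect."""
    z_crit = 1.96 / np.sqrt(n - 3)        # p < .05 cutoff on the Fisher z scale
    hits = inflated = 0
    for _ in range(trials):
        idx = rng.choice(N, size=n, replace=False)
        r = np.corrcoef(brain[idx], behaviour[idx])[0, 1]
        if abs(np.arctanh(r)) > z_crit:
            hits += 1
            inflated += abs(r) > 2 * true_r
    return hits / trials, inflated / max(hits, 1)

stats = {n: simulate(n) for n in (25, 500, 4000)}
for n, (sig, infl) in stats.items():
    print(f"n={n:5d}: significant in {sig:.0%} of studies; "
          f"{infl:.0%} of those at least double the true r")
```

Small ‘studies’ rarely detect the weak effect, and when they do, the reported correlation is grossly inflated; only samples in the thousands give results that are both detectable and honest.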

Not every investigation requires 1,000 brain scans, they showed. “If the goal is just to understand something like the general organization of the brain, we sometimes only need 10 to 20 participants to do that,” Tervo-Clemmens said. It’s only because a single brain scan reveals so little about a person’s personality and mental health that researchers need a massive amount of data before these complex traits begin to reliably stand out from the statistical noise.

Amplifying that problem is a well-known bug in 21st century science: Researchers are often rewarded for publishing results that show exciting new connections, rather than less glamorous findings suggesting the absence of a connection.

The latter results are less likely to be published and more likely to languish on a hard drive. So not only are small imaging studies more likely to “discover” a link that isn’t actually there, but those same misleading studies also receive a disproportionate amount of attention.

Tervo-Clemmens is quick to note that the Nature paper wasn’t intended to call out the whole field. Neuroscientists and psychologists have successfully tackled questions about personality and mental health using a variety of other techniques. And brain scans on their own are very effective for diagnosing conditions and mapping out the broader picture of how brains work. It’s when scientists combine the two, reducing the complexities of a person into a single image, that they fall short.

“We can count on less than a hand the number of these studies that have held up under scrutiny and are really driving treatment,” he said. “In my own area, one study might show that increased function of a particular brain region is related to more symptoms, but you can find, almost without question, another study showing the opposite effect.”

Although he now focuses mostly on psychiatric and substance-use disorders in adolescents, Tervo-Clemmens doesn’t quite fit into any one box as a researcher. “I’m kind of a psychologist, and I’m kind of a statistician, and I’m kind of a neuroscientist,” he said. That perspective, he explains, along with his boundary-crossing education at Pitt, is what helps him do the kind of broad, critical research seen in this study.

He saw patients as a PhD student in clinical psychology while also training in cross-disciplinary programs like the Center for the Neural Basis of Cognition, experiences he credits with encouraging breadth in research. “I think that level of integration is what makes Pitt so awesome as a graduate student,” he said.

The result was a study that’s already produced a stir among other scientists. An instant classic, the paper and its pre-publication version have already been cited by more than 250 other scholarly works.

So where does that leave the field?

First, Tervo-Clemmens said, it’s necessary to re-examine the smaller studies of the past to see if their results hold up to further examination. As for future research, one solution would be to simply supersize brain-scan studies of complex behavior so they stand up to statistical scrutiny. But there’s another possible way forward, where researchers find ways to study patients over time and across contexts to get a fuller sense of their identities.

“We need to be aligning our research to how we generally think and understand human beings,” said Tervo-Clemmens. “That’s a challenge of cost and economy. But I also think it’s one that will ultimately be worth it.”

It’s like growing pains for a line of research that’s only a few decades old: Stressful and full of uncertainty, but also a sign that the field is heading in new and exciting directions.

About this neuroimaging and neuroscience research news

Author: Nicholas France
Source: University of Pittsburgh
Contact: Nicholas France – University of Pittsburgh
Image: The image is in the public domain

Original Research: Open access.
“Reproducible brain-wide association studies require thousands of individuals” by Brenden Tervo-Clemmens et al. Nature


Abstract

Reproducible brain-wide association studies require thousands of individuals

Magnetic resonance imaging (MRI) has transformed our understanding of the human brain through well-replicated mapping of abilities to specific structures (for example, lesion studies) and functions (for example, task functional MRI (fMRI)). Mental health research and care have yet to realize similar advances from MRI.

A primary challenge has been replicating associations between inter-individual differences in brain structure or function and complex cognitive or mental health phenotypes (brain-wide association studies (BWAS)). Such BWAS have typically relied on sample sizes appropriate for classical brain mapping (the median neuroimaging study sample size is about 25), but potentially too small for capturing reproducible brain–behavioural phenotype associations.

Here we used three of the largest neuroimaging datasets currently available—with a total sample size of around 50,000 individuals—to quantify BWAS effect sizes and reproducibility as a function of sample size. BWAS associations were smaller than previously thought, resulting in statistically underpowered studies, inflated effect sizes and replication failures at typical sample sizes.

As sample sizes grew into the thousands, replication rates began to improve and effect size inflation decreased. More robust BWAS effects were detected for functional MRI (versus structural), cognitive tests (versus mental health questionnaires) and multivariate methods (versus univariate). Smaller than expected brain–phenotype associations and variability across population subsamples can explain widespread BWAS replication failures.

In contrast to non-BWAS approaches with larger effects (for example, lesions, interventions and within-person), BWAS reproducibility requires samples with thousands of individuals.

Read original article here

Can brain scans reveal behaviour? Bombshell study says not yet



A scan using functional magnetic resonance imaging, or fMRI, shows areas of the brain active during speech. Credit: Zephyr/Science Photo Library

In 2019, neuroscientist Scott Marek was asked to contribute a paper to a journal that focuses on child development. Previous studies had shown that differences in brain function between children were linked with performance in intelligence tests. So Marek decided to examine this trend in 2,000 kids.

Brain-imaging data sets had been swelling in size. To show that this growth was making studies more reliable, Marek, based at Washington University in St. Louis, Missouri (WashU), and his colleagues split the data in two and ran the same analysis on each subset, expecting the results to match. Instead, they found the opposite. “I was shocked. I thought it was going to look exactly the same in both sets,” says Marek. “I stared out of my apartment window in depression, taking in what it meant for the field.”

Now, in a bombshell 16 March Nature study, Marek and his colleagues show that even large brain-imaging studies, such as his, are still too small to reliably detect most links between brain function and behaviour.

As a result, the conclusions of most published ‘brain-wide association studies’ — typically involving dozens to hundreds of participants — might be wrong. Such studies link variations in brain structure and activity to differences in cognitive ability, mental health and other behavioural traits. For instance, numerous studies have identified brain anatomy or activity patterns that, the studies say, can distinguish people who have been diagnosed with depression from those who have not. Studies also often seek biomarkers for behavioural traits.

“There’s a lot of investigators who have committed their careers to doing the kind of science that this paper says is basically junk,” says Russell Poldrack, a cognitive neuroscientist at Stanford University in California, who was one of the paper’s peer reviewers. “It really forces a rethink.”

The authors emphasize that their critique applies only to the subset of research that seeks to explain differences in people’s behaviour through brain imaging. But some scientists think that the critique tars this field with too broad a brush. Smaller, more detailed studies of brain–behaviour links can produce robust findings, they say.

Weak correlations

After his botched replication, Marek set out to understand the reasons for the failure together with Nico Dosenbach, a neuroscientist at WashU, and their colleagues. That work resulted in the latest study, in which they analysed magnetic resonance imaging (MRI) brain scans and behavioural data from 50,000 participants in several large brain-imaging efforts, such as the UK Biobank’s collection of brain scans.

Some of these scans gauged aspects of brain structure, for instance the size of a particular region. Others used a method called functional MRI (fMRI) — the measurement of brain activity while people do a task, such as memory recall, or while at rest — to reveal how brain regions communicate.

The researchers then used subsets drawn from these large databases to simulate billions of smaller studies. These analyses looked for associations between MRI scans and various cognitive, behavioural and demographic traits, in samples ranging from 25 people to more than 32,000.

In simulated studies involving thousands of people, the researchers identified reliable correlations between brain structure and activity in particular regions and different behavioural traits — associations that they could replicate in different subsets of the data. However, these links tended to be much weaker than those typically reported by most other studies.

Researchers measure correlation strength using a metric called r, for which a value of 1 means a perfect correlation and 0 none at all. The strongest reliable correlations Marek and Dosenbach’s team found had an r of 0.16, and the median was 0.01. In published studies, r values above 0.2 are not uncommon.

To understand this disconnect, the researchers simulated smaller studies and found that these identified much stronger associations, with high r values, but also that these findings did not replicate in other samples, large or small. Even associations identified in a study of 2,000 participants — large by current standards — had only a 25% chance of being replicated. More typical studies, with 500 or fewer participants, produced reliable associations around just 5% of the time.
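The arithmetic behind those replication rates can be checked with the textbook Fisher z approximation for the power of a correlation test. This is a back-of-envelope sketch, not the paper’s full simulation:

```python
import math
from statistics import NormalDist

def power(r: float, n: int, alpha: float = 0.05) -> float:
    """Chance that a two-tailed test at level `alpha` detects a true
    correlation r in a sample of n people (Fisher z approximation)."""
    nd = NormalDist()
    z = math.atanh(r) * math.sqrt(n - 3)   # expected test statistic
    crit = nd.inv_cdf(1 - alpha / 2)       # 1.96 for alpha = .05
    return (1 - nd.cdf(crit - z)) + nd.cdf(-crit - z)

# r = 0.16 was the strongest reliable effect the team found;
# r = 0.01 was the median.
for r in (0.16, 0.01):
    for n in (25, 500, 2000):
        print(f"r={r:.2f}, n={n:5d}: power = {power(r, n):.0%}")
```

Even the strongest reliable effect the team found (r = 0.16) gives a small study of 25 people only around a one-in-ten chance of detection, while the median effect (r = 0.01) remains nearly undetectable even with 2,000 participants.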

Even larger studies

The study did not attempt to replicate other published brain-wide association studies. But it suggests that the high r values common in the literature are almost certainly a fluke, and not likely to be replicated. Factors that hinder reproducibility in other fields, such as the tendency to publish only statistically significant results with large effect sizes, mean that these spurious brain–behaviour associations fill the literature, says Dosenbach. “People are only publishing things that have a strong enough effect size. You can find those, but those are the ones that are most wrong.”

To make such studies more reliable, brain-imaging studies need to get much bigger, Marek, Dosenbach and their colleagues argue. They point out that genetics research was plagued by false positives until researchers, and their funders, started looking for associations in very large numbers of people. The largest genome-wide association studies (GWAS) now involve millions of participants. The team coined the term brain-wide association study, or BWAS, to draw parallels with genetics.

For brain imaging, Marek says, “I don’t know if we need hundreds of thousands or millions. But thousands is a safe bet.”

“What the Marek paper suggests is that a lot of the time, if you don’t have these really large samples, you are most likely wrong or lucky in finding a good brain–behaviour correlation,” says Caterina Gratton, a cognitive neuroscientist at Northwestern University in Evanston, Illinois. The paper appeared as a preprint in 2020, and Gratton says she has sat on grant-review panels that have cited it when raising scepticism over relatively small BWAS studies. “This is an important paper for the field,” she adds.

But some researchers argue that smaller BWAS studies still have value. Peter Bandettini, a neuroscientist at the National Institute of Mental Health in Bethesda, Maryland, says that studies such as the ones Marek’s team simulated looked for correlations between crude measurements of behaviour or mental health (self-reported surveys, for example) and brain scans whose conditions might vary from participant to participant, diluting bona fide associations.

By selecting participants carefully and analysing brain-imaging data using sophisticated approaches, it might be possible to find associations between brain scans and behaviour that are stronger than those identified in the study, says Stephen Smith, a neuroscientist at the University of Oxford, UK, who leads the UK Biobank’s brain-imaging efforts. “I fear this paper may be overestimating unreliability.”

Read original article here

Cosmic Ray Scans Could Reveal Hidden ‘Voids’ in The Great Pyramid of Giza

A new ultra-powerful scan of the Great Pyramid of Giza using cosmic rays could reveal the nature of two mysterious voids inside. 

The larger of the two voids is located just above the grand gallery – a passageway that leads to what may be the chamber of the pharaoh Khufu – and is about 98 feet (30 meters) long and 20 feet (6 meters) high, according to previous pyramid scans.

Archaeologists are uncertain as to what they will find in the void, which could be one large area or several small rooms, they said. They also hope to find out the function of that void; the most fantastic possibility is that the opening is the hidden burial chamber of Khufu. A more mundane possibility is that the cavity played some role in the building of the pyramid. 

Constructed for the pharaoh Khufu (reign circa 2551 BCE to 2528 BCE), the Great Pyramid of Giza is the largest pyramid ever constructed in ancient Egypt and is the only surviving wonder of the ancient world. 

Between 2015 and 2017, the “Scan Pyramids” project ran a series of scans that analyzed muons – cosmic particles that regularly fall on Earth – to detect any voids. Those scans revealed both of the voids in 2017. 

Now, a new team is planning to scan the Great Pyramid again, but this time with a more powerful system that will analyze muons in greater detail. Muons are negatively-charged elementary particles that form when cosmic rays collide with atoms in Earth’s atmosphere.

These high-energy particles constantly rain down on Earth (and yes, they’re harmless). Because they behave differently when passing through, say, stone versus air, researchers can use super-sensitive detectors to count the particles and map areas they can’t physically explore, such as the interior of the Great Pyramid.
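The principle can be sketched with a highly simplified attenuation model. The exponential form is a rough approximation, and the attenuation length, path length and void size below are illustrative assumptions, not values from the ScanPyramids project:

```python
import math

# Toy muography model: the surviving muon flux falls off roughly
# exponentially with the amount of rock traversed.
ATTENUATION_M = 40.0   # assumed e-folding length in limestone, metres

def surviving_flux(rock_metres: float) -> float:
    """Relative muon rate after crossing `rock_metres` of stone."""
    return math.exp(-rock_metres / ATTENUATION_M)

# Two lines of sight: one through 80 m of solid stone, one crossing a
# 6 m void (so 6 m less rock along an otherwise identical path).
solid = surviving_flux(80.0)
through_void = surviving_flux(80.0 - 6.0)

excess = through_void / solid - 1
print(f"excess muon rate through the void: {excess:.0%}")
```

Under these assumed numbers, the line of sight crossing the void passes roughly 16 per cent more muons than the solid one; resolving such small excesses reliably is why the team needs large detectors and years of counting.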

An illustration of the inside of the Great Pyramid of Giza. (ScanPyramids mission)

“We plan to field a telescope system that has upwards of 100 times the sensitivity of the equipment that has recently been used at the Great Pyramid,” a team of scientists wrote in a paper published on the preprint server arXiv. Papers on preprint servers have yet to be reviewed by other scientists in the field. 

“Since the detectors that are proposed are very large, they cannot be placed inside the pyramid, therefore our approach is to put them outside and move them along the base. In this way, we can collect muons from all angles in order to build up the required data set,” the team wrote in the paper. 

“The use of very large muon telescopes placed outside [the Great Pyramid] can produce much higher resolution images due to the large number of detected muons,” they added.  

The detectors are so sensitive, the researchers pointed out, they might even reveal the presence of artifacts inside of the voids. If “a few m3 is filled with material [such as pottery, metals, stone or wood], we should be able to distinguish that from air,” Alan Bross, a scientist at the Fermi National Accelerator Laboratory who is co-author of the paper, told Live Science in an email. 

Need for funds

The team has received approval from the Egyptian Ministry of Tourism and Antiquities to conduct the scans, but they still need funds to build the equipment and place it beside the Great Pyramid. 

“We are looking for sponsors for the full project,” said Bross. “Once we have full funding, we believe it will take [about] two years to build the detectors.” Currently, the group only has enough funding to conduct simulations and design some prototypes, he said. 

Once the telescopes are deployed, they will need some time to gather data.

“Once we deploy the telescopes, after about one year of viewing time, we expect to have preliminary results. We will need between two and three years of viewing to collect enough muon data to reach full sensitivity for the study of [the Great Pyramid],” said Bross. 

This article was originally published by Live Science. Read the original article here.

Read original article here