Common Antidepressants Cause Emotional “Blunting” – Scientists Finally Figured Out Why

A new study explains the reason behind the emotional “blunting” that affects around half of people who take SSRIs, a family of common antidepressant medications. The research shows that the drugs impact reinforcement learning, a crucial behavioral process that enables us to learn from our surroundings.

According to the NHS, more than 8.3 million patients in England received an antidepressant drug in 2021/22. A widely-used class of antidepressants, particularly for persistent or severe cases, is selective serotonin reuptake inhibitors (SSRIs). These drugs target serotonin, a chemical that carries messages between nerve cells in the brain and has been dubbed the ‘pleasure chemical’. Common SSRIs include Citalopram (Celexa), Escitalopram (Lexapro), Paroxetine (Paxil, Pexeva), Fluoxetine (Prozac) and Sertraline (Zoloft).

One of the widely reported side effects of SSRIs is ‘blunting’, where patients report feeling emotionally dull and no longer finding things as pleasurable as they used to. Between 40% and 60% of patients taking SSRIs are believed to experience this side effect.

To date, most studies of SSRIs have examined only their short-term use, but for clinical depression these drugs are taken chronically, over a longer period. A team led by researchers at the University of Cambridge, in collaboration with the University of Copenhagen, sought to address this by recruiting healthy volunteers, administering escitalopram (an SSRI known to be one of the best tolerated) over several weeks, and assessing the impact the drug had on their performance on a suite of cognitive tests.

In total, 66 volunteers took part in the experiment: 32 were given escitalopram and the other 34 a placebo. Volunteers took the drug or placebo for at least 21 days, completed a comprehensive set of self-report questionnaires, and were given a series of tests to assess cognitive functions including learning, inhibition, executive function, reinforcement behavior, and decision-making.

The results of the study are published today (January 23, 2023) in the journal Neuropsychopharmacology.

The team found no significant group differences when it came to ‘cold’ cognition – such as attention and memory. There were no differences in most tests of ‘hot’ cognition – cognitive functions that involve our emotions.

However, the key novel finding was that there was reduced reinforcement sensitivity on two tasks for the escitalopram group compared to those on placebo. Reinforcement learning is how we learn from feedback from our actions and environment.

In order to assess reinforcement sensitivity, the researchers used a ‘probabilistic reversal test’. In this task, a participant would typically be shown two stimuli, A and B. If they chose A, then four out of five times, they would receive a reward; if they chose B, they would only receive a reward one time out of five. Volunteers would not be told this rule, but would have to learn it themselves, and at some point in the experiment, the probabilities would switch and participants would need to learn the new rule.
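The task and the idea of reduced reinforcement sensitivity can be illustrated with a small simulation. The sketch below is a generic Q-learning agent playing an 80/20 probabilistic reversal task, with a reward-sensitivity factor that can be turned down to mimic blunted feedback; it is an illustrative toy under those assumptions, not the analysis model used in the study, and all parameter values are placeholders.

```python
import math
import random

def run_reversal_task(reward_sensitivity=1.0, n_trials=200, seed=1,
                      alpha=0.3, beta=5.0, reversal_at=100):
    """Toy Q-learning agent on an 80/20 probabilistic reversal task.

    reward_sensitivity (rho) scales the subjective value of feedback;
    lower rho mimics 'blunted' sensitivity to rewards. Illustrative only.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]                    # learned values for stimuli A and B
    p_reward = [0.8, 0.2]             # A pays off 4 times out of 5, B 1 time out of 5
    correct = 0
    for t in range(n_trials):
        if t == reversal_at:          # the rule silently reverses mid-task
            p_reward.reverse()
        # softmax choice: small value differences lead to near-random responding
        p_a = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))
        choice = 0 if rng.random() < p_a else 1
        reward = 1.0 if rng.random() < p_reward[choice] else 0.0
        # delta-rule update on the subjectively scaled reward
        q[choice] += alpha * (reward_sensitivity * reward - q[choice])
        if p_reward[choice] == 0.8:   # counted as choosing the currently better option
            correct += 1
    return correct / n_trials

print("placebo-like agent (rho=1.0):", run_reversal_task(1.0))
print("blunted agent      (rho=0.3):", run_reversal_task(0.3))
```

Running the sketch shows the blunted agent tracking the contingencies less sharply, which is the qualitative pattern the study reports for the escitalopram group.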

The team found that participants taking escitalopram were less likely to use the positive and negative feedback to guide their learning of the task compared with participants on placebo. This suggests that the drug affected their sensitivity to the rewards and their ability to respond accordingly.

The finding may also explain the one difference the team found in the self-reported questionnaires, that volunteers taking escitalopram had more trouble reaching orgasm when having sex, a side effect often reported by patients.

Professor Barbara Sahakian, senior author, from the Department of Psychiatry at the University of Cambridge and a Fellow at Clare Hall, said: “Emotional blunting is a common side effect of SSRI antidepressants. In a way, this may be in part how they work – they take away some of the emotional pain that people who experience depression feel, but, unfortunately, it seems that they also take away some of the enjoyment. From our study, we can now see that this is because they become less sensitive to rewards, which provide important feedback.”

Dr. Christelle Langley, joint first author also from the Department of Psychiatry, added: “Our findings provide important evidence for the role of serotonin in reinforcement learning. We are following this work up with a study examining neuroimaging data to understand how escitalopram affects the brain during reward learning.”

Reference: “Chronic escitalopram in healthy volunteers has specific effects on reinforcement sensitivity: A double-blind, placebo-controlled semi-randomised study” by Langley, C, Armand, S, et al., 23 January 2023, Neuropsychopharmacology.
DOI: 10.1038/s41386-022-01523-x

The research was funded by the Lundbeck Foundation.



Read original article here

Scientists Figured Out When And How Our Sun Will Die, And It Will Be Epic

How will our Sun look after it dies? Scientists have made predictions about what the final days of our Solar System will look like, and when it will happen. And we humans won’t be around to see the Sun’s curtain call.

Previously, astronomers thought the Sun would turn into a planetary nebula – a luminous bubble of gas and cosmic dust – until evidence suggested a star would have to be a smidge more massive to produce one.

An international team of astronomers flipped it again in 2018 and found that a planetary nebula is indeed the most likely solar corpse.

The Sun is about 4.6 billion years old – an age gauged from other objects in the Solar System that formed around the same time. Based on observations of other stars, astronomers predict it will reach the end of its life in about another 10 billion years.

There are other things that will happen along the way, of course. In about 5 billion years, the Sun is due to turn into a red giant. The core of the star will shrink, but its outer layers will expand out to the orbit of Mars, engulfing our planet in the process. If it’s even still there.

One thing is certain: By that time, we won’t be around. In fact, humanity only has about 1 billion years left unless we find a way off this rock. That’s because the Sun is increasing in brightness by about 10 percent every billion years.

That doesn’t sound like much, but that increase in brightness will end life on Earth. Our oceans will evaporate, and the surface will become too hot for liquid water to exist. We’ll be about as kaput as you can get.

It’s what comes after the red giant that has proven difficult to pin down. Several previous studies have found that, in order for a bright planetary nebula to form, the initial star needs to have been up to twice as massive as the Sun.

However, the 2018 study used computer modeling to determine that, like 90 percent of other stars, our Sun is most likely to shrink down from a red giant to become a white dwarf and then end as a planetary nebula.

“When a star dies it ejects a mass of gas and dust – known as its envelope – into space. The envelope can be as much as half the star’s mass. This reveals the star’s core, which by this point in the star’s life is running out of fuel, eventually turning off before finally dying,” explained astrophysicist Albert Zijlstra from the University of Manchester in the UK, one of the authors of the paper.

“It is only then the hot core makes the ejected envelope shine brightly for around 10,000 years – a brief period in astronomy. This is what makes the planetary nebula visible. Some are so bright that they can be seen from extremely large distances measuring tens of millions of light years, where the star itself would have been much too faint to see.”

The model the team created predicts the life cycle of different kinds of stars, in order to estimate the brightness of the planetary nebula associated with different stellar masses.

Planetary nebulae are relatively common throughout the observable Universe, with famous ones including the Helix Nebula, the Cat’s Eye Nebula, the Ring Nebula, and the Bubble Nebula.

Cat’s Eye Nebula (NASA/ESA)

They’re named planetary nebulae not because they actually have anything to do with planets, but because, when the first ones were discovered by William Herschel in the late 18th century, they were similar in appearance to planets through the telescopes of the time.

Almost 30 years ago, astronomers noticed something peculiar: The brightest planetary nebulae in other galaxies all have about the same level of brightness. This means that, theoretically at least, by looking at the planetary nebulae in other galaxies, astronomers can calculate how far away they are.

The data showed that this was correct, but the models contradicted it, which has been vexing scientists ever since the discovery was made.

“Old, low mass stars should make much fainter planetary nebulae than young, more massive stars. This has become a source of conflict for the past 25 years,” said Zijlstra.

“The data said you could get bright planetary nebulae from low mass stars like the Sun, the models said that was not possible, anything less than about twice the mass of the Sun would give a planetary nebula too faint to see.”

The 2018 models have solved this problem by showing that the Sun is about the lower limit of mass for a star that can produce a visible nebula.

Even a star with a mass less than 1.1 times that of the Sun won’t produce a visible nebula. Bigger stars up to 3 times more massive than the Sun, on the other hand, will produce the brighter nebulae.

For all the other stars in between, the predicted brightness is very close to what has been observed.

“This is a nice result,” Zijlstra said. “Not only do we now have a way to measure the presence of stars of ages a few billion years in distant galaxies, which is a range that is remarkably difficult to measure, we even have found out what the Sun will do when it dies!”

The research was published in the journal Nature Astronomy.

An earlier version of this article was first published in May 2018.

Read original article here

Uh Oh, Scientists Figured Out How to Grow Terrifying Parasitic Mushrooms in the Lab

Just in time for Halloween, scientists in Korea say they’ve found a better way to grow an insect-destroying mushroom in the lab. Their work could make studying these fungi easier, which is important, since they and the chemicals they produce may actually have medicinal uses for humans, creepy as they are.

The fungus is known as Cordyceps. Members of this genus, along with a related but distinct genus called Ophiocordyceps, are parasitic, usually feeding on insects and other arthropods. These fungi will invade and often kill their hosts, though not before using them as fuel to grow their fruiting bodies (technically, this is the part of the fungus that we call the mushroom) and release new infectious spores into the world to start the process all over again. Some members of Ophiocordyceps are also known for “zombifying” their ant hosts, manipulating their behavior before death to give the fungus’s spores the best chance of spreading.

As horrifying as their way of life is, some members of Cordyceps are considered food in parts of Asia. They’ve also been used in traditional Chinese medicine and more recently are being sold as supplements (supplements of any kind, it should be noted, have little quality control and aren’t necessarily harmless). And early research has suggested that Cordyceps produce chemicals that could have beneficial health effects, particularly a compound called cordycepin. Some studies have indicated, for instance, that cordycepin might have anti-viral or cancer-fighting properties.

This research has largely come from animal or lab studies, though, meaning it will take a lot more evidence in humans to confirm any potential benefits. These experiments and any eventual widespread use of Cordyceps will also require having ample supplies of the fungi or their compounds, and that’s a challenge. Though these fungi are found throughout the world, they’re hard to find and harvest from the wild. There are now ways to cultivate them in the lab, but the current methods only yield low amounts of healthy Cordyceps or cordycepin, making them hard to scale up.

Cordyceps militaris
Photo: charnsitr (Shutterstock)

Researchers at Chungbuk National University tried to improve on these methods, which usually use brown rice as the growth medium. They theorized that these mushrooms would grow better on richer sources of protein—namely, insects. They also guessed that their diet would affect how large the fungi grew and how much cordycepin they produced, so they tested out different types of insects. These insect nurseries were kept growing for two months before the researchers harvested the Cordyceps. The team’s findings, published Wednesday in Frontiers in Microbiology, suggest that their insect theory was right on the money.

“Cordyceps grown on edible insects contained approximately 100 times more cordycepin compared to Cordyceps on brown rice,” said study author Mi Kyeong Lee, a professor at Chungbuk, in a statement from Frontiers.

As expected, though, there were differences in how the insect food affected their growth. The fungi were most plentiful when they fed on mealworms and silkworm pupae, for instance. But they actually produced the most cordycepin when they fed on Japanese rhinoceros beetles. The team’s work also indicates that it was the fat content of the insects, not their protein, that predicted how much cordycepin the mushrooms produced. The rhinoceros beetles were especially full of a type of fat called oleic acid, and once the team introduced oleic acid to a low-fat insect feed, the Cordyceps’ production of cordycepin rose as well.

“The cultivation method of Cordyceps suggested in this study will enable the production of cordycepin more effectively and economically,” Lee said.

While these scientists may have found an improved method of growing Cordyceps in the lab, you probably shouldn’t expect mass production just yet. The authors note that churning out insects on an industrial scale isn’t easy, either. So if these freaky fungi do turn out to be medically valuable, there’ll be more challenges ahead in developing them for mass use. That said, there is at least one research team at Oxford University actively studying a modified version of cordycepin as a cancer drug in early human trials.

Read original article here

Finally, Scientists Have Figured Out A Key Molecular Mechanism Behind Human Hearing

Scientists have finally unraveled the structure of a mysterious protein complex inside the inner ear that enables hearing in humans.

To solve this decades-old puzzle, researchers needed to grow 60 million roundworms (Caenorhabditis elegans), which use a protein complex very similar to the human one to sense touch.

As humans only have a tiny amount of this protein inside their inner ears, turning to another source was the only way the team could accumulate enough of the protein to study.

“We spent several years optimizing worm-growth and protein-isolation methods, and had many ‘rock-bottom’ moments when we considered giving up,” says co-first author Sarah Clark, a biochemist from Oregon Health and Science University (OHSU) in Portland.

Researchers have known for some time that the transmembrane channel-like protein 1 (TMC1) complex performs an important role in hearing, but the exact makeup has remained elusive.

“This is the last sensory system in which that fundamental molecular machinery has remained unknown,” says senior author Eric Gouaux, a senior biochemist at OHSU.

Thanks to this new research, published in Nature, we now know that this protein complex operates as a tension-sensitive ion channel that opens and closes depending upon the movement of hairs inside the inner ear.

Using electron microscopy, the researchers discovered that the protein complex “resembles an accordion”, with subunits “poised like handles” on either side.

Sound waves traveling through the ear strike the eardrum (tympanic membrane), which jiggles the ossicles, three of the body’s tiniest bones, in the middle ear. The ossicles pass the vibration on to the snail-shaped cochlea of the inner ear, which in turn brushes microscopic finger-like hairs called stereocilia against overlying membranes.

These stereocilia are embedded in cells that have the ion channels formed by the TMC1 complex that open and close as the hairs move, sending electrical signals along the auditory nerve to the brain to be interpreted as sound.
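The open-and-close behavior described here is often summarized with a textbook two-state gating model, in which a channel’s open probability follows a Boltzmann function of hair-bundle displacement. The sketch below is that generic model, not a result from the structure paper; the displacement scale and gating parameters are illustrative placeholders.

```python
import math

def open_probability(x_nm, x0_nm=0.0, gating_scale_nm=4.0):
    """Two-state Boltzmann model of a tension-gated channel.

    x_nm            : hair-bundle displacement in nanometres
    x0_nm           : displacement at which the channel is open half the time
    gating_scale_nm : how sharply tension changes open probability (placeholder value)
    """
    z = (x_nm - x0_nm) / gating_scale_nm
    return 1.0 / (1.0 + math.exp(-z))

# Larger deflections open more channels, which is what converts mechanical
# motion of the stereocilia into a graded electrical signal.
for x in (-10, -5, 0, 5, 10):
    print(f"displacement {x:+3d} nm -> open probability {open_probability(x):.2f}")
```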

“The auditory neuroscience field has been waiting for these results for decades, and now that they are right here – we are ecstatic,” says OHSU otolaryngologist Peter Barr-Gillespie, a national leader in hearing research who was not involved in the study.

The discovery could one day help researchers develop treatments for hearing impairments.

Hearing loss and deafness affect more than 460 million people worldwide. By understanding the nature of hearing, researchers can continue to find diverse ways to support, treat, or prevent hearing loss in our community.

This paper was published in Nature.

Read original article here

Hackers might have figured out your secret Twitter accounts

A security vulnerability on Twitter allowed a bad actor to find out the account names associated with certain email addresses and phone numbers (and yes, that could include your secret celebrity stan accounts), Twitter confirmed on Friday. Twitter initially patched the issue in January after receiving a report through its bug bounty program, but a hacker managed to exploit the flaw before Twitter even knew about it.

The vulnerability, which stemmed from an update the platform made to its code in June 2021, went unnoticed until earlier this year. This gave hackers several months to exploit the flaw, although Twitter said it “had no evidence to suggest someone had taken advantage of the vulnerability” at the time of its discovery.

Last month’s report from Bleeping Computer suggested otherwise, and revealed that a hacker managed to exploit the vulnerability while it flew under Twitter’s radar. The hacker reportedly amassed a database of over 5.4 million accounts by taking advantage of the flaw, and then tried to sell the information on a hacker forum for $30,000. After analyzing the data posted to the forum, Twitter confirmed that its user data had been compromised.

It’s still unclear how many users have actually been affected, and Twitter doesn’t seem to know, either. While Twitter says it plans on notifying affected users, it isn’t “able to confirm every account that was potentially impacted.” Twitter advises anyone concerned about their secret accounts to enable two-factor authentication and to avoid attaching a publicly known email address or phone number to any account they don’t want linked to their identity.
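The underlying bug class is account enumeration: an endpoint that answers differently depending on whether an email or phone number maps to an account. The sketch below is a generic illustration of that class and one common mitigation (identical responses either way); the function names and data are hypothetical and have nothing to do with Twitter’s actual code.

```python
# Hypothetical account store; in a real service this would be a database lookup.
ACCOUNTS = {"alice@example.com": "@alice_main", "bob@example.com": "@bob"}

def leaky_lookup(identifier: str) -> str:
    """Anti-pattern: the response reveals whether the identifier is registered,
    letting an attacker link emails or phone numbers to accounts at scale."""
    if identifier in ACCOUNTS:
        return f"This contact belongs to {ACCOUNTS[identifier]}"
    return "No account found for this contact"

def enumeration_safe_lookup(identifier: str) -> str:
    """Mitigation: return the same generic message whether or not there is a match,
    and handle any follow-up out of band via the contact itself."""
    _ = ACCOUNTS.get(identifier)   # do the work, but never reflect the result
    return "If this contact is registered, we'll send instructions to it."

print(leaky_lookup("alice@example.com"))             # leaks the linked handle
print(enumeration_safe_lookup("alice@example.com"))  # indistinguishable from a miss
```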



Read original article here

Why does Saturn have rings and Jupiter doesn’t? A computer model may have figured it out

Jupiter, the fifth planet in our solar system and by far the most massive, is a treasure trove of scientific discovery. Last year a pair of studies found that the planet’s iconic Great Red Spot is 40 times deeper than the Mariana Trench, the deepest location on Planet Earth. In April authors of a paper in the journal Nature Communications studied a double ridge in Northwest Greenland with the same gravity-scaled geometry as those found on Europa, one of Jupiter’s moons, and concluded that the probability of life on Europa is greater than expected.

Now scholars believe they have cracked another great Jupiter mystery — namely, why it lacks the spectacular rings flaunted by its celestial neighbor, Saturn. Because both are very massive gas giants of similar composition, the two planets are believed to have evolved similarly, so the reason that one has a prominent ring system and the other doesn’t has always been something of a puzzle.

With results that are currently online and will soon be published in The Planetary Science Journal, researchers from the University of California–Riverside used modeling to determine that Jupiter’s enormous moons nip the creation of possible rings right in the bud.

Using a computer simulation that accounted for the orbits of each of Jupiter’s four moons, astrophysicist Stephen Kane and graduate student Zhexing Li realized that those moons’ gravity would alter the orbit of any ice that might come from a comet and ultimately prevent their accumulation in such a way as to form rings, as happened with Saturn. Instead the moons would either fling the ice away from the planet’s orbit or pull the ice toward a collision course with themselves.

This not only explains why Jupiter only has the paltriest of rings at present; it suggests that it likely never had large rings.
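A rough sense of why the Galilean moons matter can be had from a back-of-envelope calculation of each moon’s Hill radius, the zone in which its gravity dominates over Jupiter’s and icy grains get scattered or swept up. This is only a crude proxy for the authors’ full orbital simulation; the masses and orbital radii below are standard approximate values.

```python
# Back-of-envelope: Hill radii of the Galilean moons (approximate published values).
M_JUPITER = 1.898e27  # kg

moons = {                 # name: (orbital radius in km, mass in kg)
    "Io":       (421_700,   8.93e22),
    "Europa":   (670_900,   4.80e22),
    "Ganymede": (1_070_400, 1.48e23),
    "Callisto": (1_882_700, 1.08e23),
}

for name, (a_km, m_kg) in moons.items():
    # Hill radius: r_H = a * (m / (3 * M))**(1/3)
    r_hill = a_km * (m_kg / (3 * M_JUPITER)) ** (1 / 3)
    print(f"{name:9s} a = {a_km:>9,} km   Hill radius ≈ {r_hill:,.0f} km "
          f"(gravitational stirring reaches several times that)")
```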


There is more at stake here than merely understanding why the aesthetics of Jupiter differ from the aesthetics of Saturn. As Kane explained in a statement, a planet’s rings contain many clues about that planet’s history. They can help scientists understand what objects might have collided with a planet in the past, or perhaps the type of event that formed them.

“For us astronomers, they are the blood spatter on the walls of a crime scene. When we look at the rings of giant planets, it’s evidence something catastrophic happened to put that material there,” Kane explained.

The scientists say they do not plan on ending their astronomical investigation at Jupiter; their next stop is Uranus, which also has paltry rings. The researchers speculate that Uranus, which appears to be tipped on its side, may lack rings because of a collision with another celestial body.

Technically, Jupiter does have a ring system; it is just incredibly small and faint. Indeed, Jupiter’s rings are so faint that scientists did not even discover them until 1979, when the space probe Voyager 1 passed by the gas giant. There are three faint rings, all of them made of dust particles emitted by the nearby moons — a main flattened ring that is 20 miles thick and 4,000 miles wide, an inner ring shaped like a donut that is more than 12,000 miles thick, and a so-called “gossamer” ring that is actually made up of three much smaller rings of microscopic debris from the nearby moons.

NASA itself has expressed wonderment at the wispy rings that accompany our solar system’s most conspicuous behemoth — in particular, at the size of the particles that comprise them.

“These grains are so tiny, a thousand of them put together are only a millimeter long,” NASA writes. “That makes them as small as the particles in cigarette smoke.”

By contrast, the rings of Saturn are famously beautiful, and some of the particles in those rings are “as large as mountains.” When the space probe Cassini finally got an up-close look at Saturn’s rings, it found “spokes” longer than the diameter of Earth and potentially made of ice — as well as water jets from the Saturnian moon Enceladus, which would provide much of the material in the planet’s E ring.

Read original article here

Scientists Have Figured Out Why Childbirth Became So Complex and Dangerous

The World Health Organization estimates that nearly 300,000 people die every year due to pregnancy-related causes.

A study finds that complex human childbirth and cognitive abilities are a result of walking upright.

Complications are common for women during and following pregnancy and childbirth. The majority of these issues arise during pregnancy and are either preventable or treatable. However, childbirth is still dangerous. The World Health Organization estimates that 830 people die every day from causes related to childbirth and pregnancy. Furthermore, for every woman who dies in childbirth, another 20 to 30 suffer injury, infection, or disability.

Four major complications are responsible for 75% of maternal deaths: severe bleeding (typically after birth), infections, high blood pressure during pregnancy, and complications from delivery. Other common issues include unsafe abortions and chronic conditions such as cardiac diseases and diabetes. 

All of this shows how human birthing is much more difficult and painful than that of large apes. This was long believed to be due to humans’ bigger brains and the limited dimensions of the mother’s pelvis. Researchers at the University of Zurich have now shown, using 3D simulations, that birthing was likewise a highly complicated procedure in early hominin species that gave birth to relatively small-brained newborns – with significant consequences for their cognitive development.

During human delivery, the fetus normally navigates a narrow, convoluted birth canal by bending and turning its head at different stages. This complicated process carries a significant risk of birth complications, ranging from prolonged labor to stillbirth or maternal death. These issues were long thought to be the outcome of a conflict between humans adjusting to upright walking and our larger brains.

The dilemma between walking upright and larger brains

Bipedalism developed around seven million years ago and dramatically reshaped the hominin pelvis into a real birth canal. Larger brains, however, didn’t start to develop until two million years ago, when the earliest species of the genus Homo emerged. The evolutionary solution to the dilemma brought about by these two conflicting evolutionary forces was to give birth to neurologically immature and helpless newborns with relatively small brains – a condition known as secondary altriciality.

A research group led by Martin Häusler from the Institute of Evolutionary Medicine at the University of Zurich (UZH) and a team headed up by Pierre Frémondière from Aix-Marseille University have now found that australopithecines, who lived about four to two million years ago, had a complex birth pattern compared to great apes. “Because australopithecines such as Lucy had relatively small brain sizes but already displayed morphological adaptations to bipedalism, they are ideal to investigate the effects of these two conflicting evolutionary forces,” Häusler says.

Birth simulation of Lucy (Australopithecus afarensis) with three different fetal head sizes. Only a brain size of maximum 30 percent of the adult size (right) fits through the birth canal. Credit: Martin Häusler, UZH

The typical ratio of fetal and adult head size

The researchers used three-dimensional computer simulations to develop their findings. Since no fossils of newborn australopithecines are known to exist, they simulated the birth process using different fetal head sizes to take into account the possible range of estimates. Every species has a typical ratio between the brain sizes of its newborns and adults. Based on the ratio of non-human primates and the average brain size of an adult Australopithecus, the researchers calculated a mean neonatal brain size of 180 g. This would correspond to a size of 110 g in humans.

For their 3D simulations, the researchers also took into account the increased pelvic joint mobility during pregnancy and determined a realistic soft tissue thickness. They found that only the 110 g fetal head sizes passed through the pelvic inlet and midplane without difficulty, unlike the 180 g and 145 g sizes. “This means that Australopithecus newborns were neurologically immature and dependent on help, similar to human babies today,” Häusler explains.

Prolonged learning is key to cognitive and cultural abilities

The findings indicate that australopithecines are likely to have practiced a form of cooperative breeding, even before the genus Homo appeared. Compared to great apes, the brains developed for longer outside the uterus, enabling infants to learn from other members of the group. “This prolonged period of learning is generally considered crucial for the cognitive and cultural development of humans,” Häusler says. This conclusion is also supported by the earliest documented stone tools, which date back to 3.3 million years ago – long before the genus Homo appeared.

Reference: “Dynamic finite-element simulations reveal early origin of complex human birth pattern” by Pierre Frémondière, Lionel Thollon, François Marchal, Cinzia Fornai, Nicole M. Webb, and Martin Haeusler, 19 April 2022, Communications Biology.
DOI: 10.1038/s42003-022-03321-z



Read original article here

Physicists figured out how launching a Falcon 9 changes the atmosphere

With the cost of launching a rocket into space falling, the number of rocket launches is, well, taking off. Last year, governments and companies across the world successfully launched 133 rockets into orbit, breaking a record that stood for 45 years.

But there’s a catch. Breaking free from Earth’s gravity requires a rocket to release a tremendous amount of energy in a short period of time. As a rocket leaves Earth, it produces hot exhaust that changes the physics and chemistry of the atmosphere as it passes through. In a paper published Tuesday in the peer-reviewed journal Physics of Fluids, a pair of physicists simulated the launch of a SpaceX Falcon 9 rocket blasting into space.

They found several reasons to be concerned.

The carbon footprint isn’t the problem 

Rockets aren’t responsible for putting that much carbon dioxide into the atmosphere. A typical launch burns roughly the same amount of fuel as a day-long commercial flight but produces seven times as much CO2 — between 200 and 300 tons — as the airliner. That’s far more carbon than the average person will generate in their lifetime, but it’s a rounding error compared to the 900 million tons of CO2 the aviation industry was spewing annually before the pandemic.

But that’s not the whole story. “We don’t care about a rocket’s carbon footprint. That’s irrelevant,” says researcher Martin Ross. For him, it’s the particles contained in rocket exhaust — chiefly alumina and black carbon — that really matter. “These particles scatter and absorb sunlight. They change the temperature and circulation of the stratosphere,” Ross says.

Unfortunately, scientists only have a faint understanding of the total environmental impact of a rocket launch. “The current level of data about rocket emissions does not provide researchers with enough information to fully assess the impact of launches on the global environment,” Ross says. 

The effect of carbon emissions high in the atmosphere is uncertain

The researchers behind the new study are bringing the problem into sharper focus by modeling the exhaust from the nine nozzles of a Falcon 9 rocket as it launches into space. These simulations incorporate data about the rocket and its propellant (RP-1) with equations that describe how gases behave under various conditions. Thanks to some serious computing power, the researchers were able to predict how exhaust behaves after exiting the nozzles, at increments of roughly 0.6 miles (1 km) in altitude.

The researchers analyzed the launch by comparing the volume of exhaust released during one kilometer of upward travel through a certain band of the atmosphere (e.g. between 2 km and 2.99 km) with the properties of the atmosphere at that specific altitude. They had to adopt this somewhat confusing method because the physical and chemical makeup of the atmosphere is different at different altitudes.

They found that the amount of total exhaust is “negligible” compared to the air around it, even at high altitudes. That’s a surprise because the atmosphere is much less dense at higher altitudes. According to their calculations, the amount of exhaust released by a Falcon 9 as it travels between 70 km and 70.99 km (roughly 43 miles) is just one-fourteenth the amount of mass found in one cubic kilometer (roughly .25 mi3) of air at that altitude. (This is conveyed by the blue line in the chart below.)

The amount of mass contained in a cubic kilometer of exhaust compared to the ambient air.
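The bookkeeping behind this comparison can be sketched as follows: take an assumed propellant mass-flow rate and vertical speed, compute how much exhaust is deposited per kilometer of climb, and divide by the mass of a cubic kilometer of ambient air from a simple exponential-atmosphere model. The flow rate, speed, and scale height below are rough assumptions for illustration only; this crude atmosphere model will not reproduce the paper’s exact one-fourteenth figure.

```python
import math

# Rough, assumed figures for illustration (not taken from the paper):
MDOT_KG_S  = 2_500.0   # assumed total propellant mass flow of the nine engines, kg/s
V_VERT_M_S = 1_000.0   # assumed vertical speed while crossing this altitude band, m/s
RHO0_KG_M3 = 1.225     # sea-level air density, kg/m^3
H_SCALE_M  = 7_500.0   # scale height of a simple exponential atmosphere, m

def exhaust_vs_ambient(altitude_km: float) -> float:
    """Ratio of exhaust mass deposited over 1 km of climb to the mass of
    1 km^3 of ambient air at that altitude (toy exponential atmosphere)."""
    time_per_km = 1_000.0 / V_VERT_M_S        # seconds spent crossing the 1 km band
    exhaust_kg = MDOT_KG_S * time_per_km      # exhaust left behind in that band
    rho = RHO0_KG_M3 * math.exp(-altitude_km * 1_000.0 / H_SCALE_M)
    air_kg_per_km3 = rho * 1e9                # 1 km^3 = 1e9 m^3
    return exhaust_kg / air_kg_per_km3

for z in (2, 20, 40, 70):
    print(f"{z:>3d} km: exhaust / (1 km^3 of air) ≈ {exhaust_vs_ambient(z):.2e}")
```

The same bookkeeping applied to individual exhaust species (CO2, CO, H2O) rather than total mass is what drives the striking ratios reported for the upper atmosphere.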

What isn’t negligible is the amount of CO2 that a Falcon 9 introduces into higher levels of the atmosphere as it passes through (represented by the dotted red line in the figure above). Once it passes an altitude of 27 miles (43.5 km), a rocket starts emitting more than one cubic kilometer’s worth of CO2 for each kilometer it climbs. By the time it reaches 43.5 miles (70 km), a Falcon 9 releases more than 25 times the amount of CO2 found in a cubic kilometer of air at that altitude. 

And rocket exhaust contains more than carbon

It’s more than CO2. “Perhaps even more crucially, the [amount of] carbon monoxide (CO) and water (H2O) [in rocket exhaust] are of a similar order as carbon dioxide,” the authors write. That’s a concern because there’s hardly any carbon monoxide or water high in the atmosphere. “Therefore, these compounds’ emissions at high altitudes introduce an even more significant contribution/rise to the existing, if any, trace amounts already present.”

Water vapor immediately freezes at that altitude, but researchers have no idea where those ice crystals end up. Carbon monoxide reacts with hydroxyl radicals (OH) to form even more CO2. The researchers also discovered that dangerous exhaust emissions called thermal nitrogen oxides (NOx) can stick around for a long time in hot rivers of exhaust before dispersing throughout the atmosphere, especially at lower altitudes.

The future is uncertain, but researchers and regulators are paying attention

With just more than 100 launches per year, some say that pollution from rockets isn’t an issue. “One of the arguments that people have used in the past was to say that we don’t really need to pay attention to rockets or to the space industry, or the space industry is small, and it’s always going to be small,” Ross says.

He doesn’t agree. “I think the developments that we’re seeing the past few years show that … space is entering this very rapid growth phase like aviation saw in the ’20s and ’30s.”

The authors behind the new study feel the same way. “We believe that the problem of atmospheric pollution caused by rocket launches is vital and needs to be addressed appropriately as commercial space flights, in particular, are expected to increase in the future,” they write. 

The problem of pollution from rockets is slowly coming into clearer focus, and it’s being taken seriously in high places. Later this year, the World Meteorological Organization and the UN Environment Programme will release a new report that summarizes how rocket emissions deplete ozone. With any luck, this attention will cause atmospheric pollution to become a key factor in the design of future rockets.



Read original article here

Scientists figured out a way to clean dust off of solar panels without using water

Dust on solar panels reduces their output significantly, so they need to be kept clean. But what’s the best way to do that? Scientists at the Massachusetts Institute of Technology (MIT) say they have a solution.


One of the most common ways to clean dust off solar panels is to spray them with water. But that’s a huge waste of water, especially in desert settings, where there are a lot of solar farms. The MIT scientists note in their new study, which is published in Science Advances:

At a global PV capacity above 500 GW, we estimate on the basis of reports that up to 10 billion gallons of water are being consumed every year worldwide for solar panel cleaning purposes, which can otherwise satisfy the annual water needs of up to two million people in developing and underdeveloped countries. 

Further, dry scrubbing damages solar panels.

According to the researchers, static electricity can keep dust off solar panels, and is a much more sustainable solution. And that’s important, because as the researchers note, for example, “Dust accumulation of 5 mg/cm2 corresponds to almost 50% loss in power output.”

Effect of dust accumulation on solar panel power output. Source: Science Advances

The researchers achieved this by using “adsorbed moisture-assisted charge induction.” Adsorption is when moisture in the air attaches to a dust particle’s surface. Cosmos concisely explains how it works:

The new technique works by passing a simple electrode – a conductor of electricity, which could be a simple metal bar – just above the surface of the solar panel. The electric field produced by the electrode causes the dust particles to become electrically charged as well.

The same charge the dust holds is then applied to the solar panel’s surface through a conductive layer a few nanometers thick. The researchers have calculated the voltage range to apply to overcome the pull of gravity and adhesion forces, so the dust particles are pushed from the surface until they fall off.

In real-world scale and practice, the authors suggest that every solar panel could be fitted with railings on each side, with an electrode spanning across the panel. A small electric motor, perhaps even using electricity output from the panel itself, could then drive a belt system to move the electrode back and forth.
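The force balance being described can be sketched with a simple order-of-magnitude check: compare the Coulomb force on a charged dust grain in the electrode’s field with the grain’s weight. The particle size, density, acquired charge, and field strength below are illustrative assumptions, not the values calculated in the paper, which also has to account for adhesion forces, the dominant term for very small particles.

```python
import math

E_CHARGE = 1.602e-19      # elementary charge, C
G        = 9.81           # gravitational acceleration, m/s^2

def forces_on_grain(diameter_um=25.0, density_kg_m3=2000.0,
                    n_charges=1.0e5, field_v_per_m=5.0e5):
    """Coulomb force vs weight for a charged dust grain (assumed, illustrative values)."""
    r = diameter_um * 1e-6 / 2.0
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * r**3
    f_gravity = mass * G
    f_electric = n_charges * E_CHARGE * field_v_per_m
    return f_electric, f_gravity

f_e, f_g = forces_on_grain()
print(f"electrostatic force ≈ {f_e:.2e} N, weight ≈ {f_g:.2e} N, ratio ≈ {f_e / f_g:.1f}")
# Repulsion wins only above some threshold charge and field strength, which is why
# the researchers work out the voltage range needed to beat gravity plus adhesion.
```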

This method works in environments where the ambient humidity is at 30% or greater, and most deserts can achieve humidity of around 30%.


Read original article here

YouTuber figured out Asus Z690 Hero motherboards melted down due to backward capacitor

A YouTuber who goes by the name of Buildzoid on the Actually Hardcore Overclocking channel has figured out that a backward capacitor on the Asus ROG Maximus Z690 Hero motherboard is causing it to melt down, according to a report by Tom’s Hardware. Asus has since acknowledged the issue in a post on its site and plans on issuing replacements to customers with affected motherboards.

Problems with the Z690 Hero motherboard started turning up on the Asus support forum, as well as on Reddit, and the issues experienced by users are pretty much identical. As noted by Tom’s Hardware, users reported that their motherboards started smoking in the same spot: the two MOSFETs (metal-oxide-semiconductor field-effect transistor) next to the DIMM slots and the Q-code reader.

In a video on his channel, Buildzoid diagnoses the issue using only the pictures posted to support forums and on Reddit, attributing the Z690 Hero’s failure to the backward capacitor installed next to the MOSFETs, not the MOSFETs themselves. Buildzoid looks closely at the images of the motherboard, pointing out that the text on the capacitor is actually upside down, a potential sign that it’s installed incorrectly. As Tom’s Hardware mentions, a reversed capacitor results in reversed polarity, causing the MOSFETs to malfunction and burn up.

After news about the issue started gaining traction, Asus confirmed that Buildzoid’s diagnosis is, in fact, correct. “In our ongoing investigation, we have preliminarily identified a potential reversed memory capacitor issue in the production process from one of the production lines that may cause debug error code 53, no post, or motherboard components damage,” Asus announced. “The issue potentially affects units manufactured in 2021 with the part number 90MB18E0-MVAAY0 and serial number starting with MA, MB, or MC.”

The company says you can find your Z690 Hero’s serial number and part number on the side of your motherboard’s packaging, as well as on the sticker that’s placed on the top or bottom of the motherboard itself. In a separate post on Asus’ Facebook page, the company added a link to a tool that checks whether your Z690 Hero is affected by the issue based on its serial number.
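The published criteria (part number 90MB18E0-MVAAY0, serial numbers beginning with MA, MB, or MC, 2021 manufacture) can be expressed as a tiny screening check like the sketch below. This is only a restatement of the criteria quoted above, not Asus’s actual tool; owners should rely on the official serial-number checker for a definitive answer.

```python
def might_be_affected(part_number: str, serial_number: str) -> bool:
    """Rough screen based on the criteria Asus published for affected Z690 Hero boards."""
    affected_part = "90MB18E0-MVAAY0"
    affected_serial_prefixes = ("MA", "MB", "MC")
    return (part_number.strip().upper() == affected_part
            and serial_number.strip().upper().startswith(affected_serial_prefixes))

# Examples with made-up serial numbers:
print(might_be_affected("90MB18E0-MVAAY0", "MB123456789"))  # True: verify with Asus
print(might_be_affected("90MB18E0-MVAAY0", "MD123456789"))  # False by these criteria
```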

“Going forward, we are continuing our thorough inspection with our suppliers and customers to identify all possible affected ROG Maximus Z690 Hero motherboards in the market and will be working with relevant government agencies on a replacement program,” Asus states.

Read original article here