Tag Archives: Unstable

Google’s head of AR software quits, citing “unstable commitment and vision” – Ars Technica

  1. Google’s head of AR software quits, citing “unstable commitment and vision” Ars Technica
  2. Google’s AR software leader is out over the company’s “unstable commitment and vision” The Verge
  3. Google loses its top engineering exec for augmented reality as he blasts the company’s ‘unstable commitment’ Fortune
  4. Google’s AR efforts face more turmoil as lead exec quits over ‘unstable commitment’ Android Police
  5. Mark Lucovsky Leaves Google, Calls Company “Unstable” Thurrott.com

Read original article here

An unstable slope above Prince William Sound is dropping faster. A complete failure could cause a tsunami.

Last month, scientists noticed new movement on an unstable slope in Prince William Sound, but they’re uncertain where that new movement may lead.

It’s possible the speed-up means rapid failure of the slope may be ahead, which could send the mass of land crashing into the water below. That could, in turn, create a tsunami in nearby fjords and bays as well as inundation, dangerous waves and currents in the community of Whittier — a risk scientists first warned about in 2020.

But it’s also possible the movement could stall, and nothing dramatic or devastating would occur.

The steep slope is located in the Barry Arm fjord, in a narrow stretch of water in Prince William Sound. It’s located 30 miles northeast of Whittier.

Dennis Staley, a research physical scientist at the U.S. Geological Survey who leads the Prince William Sound Landslide Hazards Project, said that in late August, scientists noticed a portion of the unstable area had begun to move.

The movement caused concern because of how swiftly the slope went from not moving to sliding some 50 millimeters per day over a few days, Staley said.

“We don’t like to see landslides accelerate; that makes us a little bit nervous,” Staley said. “And then we also don’t like to see the area that’s expanding.”

It’s challenging, if not impossible, to say how likely the slope is to fail on a given day or during a specific period, Staley said. They can’t say whether the slope will keep on moving and stop at some point, or if a fast-moving landslide — what’s known as a catastrophic failure — is possible.


“We don’t want to be overly alarmist and say this is something that is inevitably going to end in a catastrophic failure because there’s a strong chance that it won’t, but we do want to keep an eye on the landslide,” he said.

Staley said that scientists have suspended non-essential boat-based activities in Barry Arm out of caution.

“We don’t want our crews in harm’s way should there be any kind of failure,” Staley said.

There are a few potential scenarios for what might happen if the slope fails, said Seldovia-based geologist Bretwood Higman, who has researched Barry Arm. Higman’s sister, an artist and naturalist, was the person who initially pointed out the slope as potentially unstable while she was in the area.

The impacts of the slope’s possible failure depend on the size of the area that crashes into the water and how much water it would displace, Higman said. He noted the level of uncertainty is high — they don’t know how harmful the impacts would be in the town of Whittier. However, he said they’d likely be at least problematic in terms of strong currents potentially damaging the harbor. Different models have shown different-sized waves washing into the community, Higman said.

Higman said there’s no correct answer for people trying to decide whether or not they should spend time in the vicinity of Barry Arm.

“We don’t know enough about it to even pretend like we can tell anyone what they should do,” Higman said.

For his part, Higman said he wouldn’t camp on the beach in the Barry and Harriman fjord areas, given the dangers that even a small tsunami would pose to the site. He’d also be cautious taking a vessel right up below the area of instability.

“That’s not dice I’m comfortable rolling, but that’s just me,” Higman said. “I really, absolutely would not judge someone else making a different decision.”

[Correction: An earlier version of this story included an incorrect spelling of Harriman.]



Read original article here

Electric cars being charged at night making America’s power grid unstable

STANFORD, Calif. — Leaving your electric car charging overnight so it’s ready in the morning seems like a good idea in theory. But research suggests doing so does more harm than good in the long run. Stanford scientists say it costs more to charge your electric car at night, and it could stress your local electric grid.

Instead, researchers suggest drivers should switch to charging their vehicle at work or in public charging stations. Another added benefit to charging in the daytime at a public station is that it reduces greenhouse gas emissions.

With the effects of climate change more apparent than ever—frequent forest fires, widespread flooding, and stronger hurricanes—car companies expect more people to invest in electric cars. For example, California residents are expected to buy more electric cars as the state plans to ban sales of new gasoline-powered cars and light trucks by 2035.

“We encourage policymakers to consider utility rates that encourage day charging and incentivize investment in charging infrastructure to shift drivers from home to work for charging,” says the study’s co-senior author Ram Rajagopal, an associate professor of civil and environmental engineering at Stanford University, in a statement.

So far, electric cars account for about one million vehicles, or 6% of automobile sales, in California. The state’s goal is to increase that number to five million electric vehicles by 2030. However, the study authors say the change from gas to electric will strain the electric grid once 30% to 40% of the cars on the road are electric.

“We were able to show that with less home charging and more daytime charging, the Western U.S. would need less generating capacity and storage, and it would not waste as much solar and wind power,” explains lead study author Siobhan Powell, who holds a doctorate in mechanical engineering. “And it’s not just California and Western states. All states may need to rethink electricity pricing structures as their EV charging needs increase and their grid changes.”

If half the vehicles in the western United States were electric, the team estimates the grid would need more than 5.4 gigawatts of energy storage—equivalent to the output of five large nuclear power reactors—to support charging. If people charged their electric cars at work instead of at home, that requirement would drop to 4.2 gigawatts.

California currently uses time-of-use rates to encourage people to use electricity at night, for tasks such as running the dishwasher and charging cars. However, the authors argue that with the growing number of electric cars, this strategy is outdated and will soon create demand spikes the grid cannot supply. More specifically, the team says that if a third of homes charged their electric cars at 11 p.m., or whenever electricity rates go down, the local grid would become unstable.
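The synchronization problem the authors describe can be seen in a toy calculation. All numbers below (fleet size, charger power, charging hours) are illustrative assumptions, not figures from the study; the point is only that identical rate-triggered start times stack load into a single spike, while staggered daytime starts spread it out:

```python
# Toy model (illustrative numbers only): aggregate load when every EV
# starts charging at the 11 PM rate change vs. staggered daytime charging.
N_CARS = 1000          # hypothetical number of EVs on one local grid
CHARGER_KW = 7.2       # assumed Level 2 home charger draw, in kW
HOURS_NEEDED = 4       # assumed hours to recover a daily commute

# Scenario A: every car starts at hour 23 (11 PM), when rates drop.
night_load = [0.0] * 24
for h in range(23, 23 + HOURS_NEEDED):
    night_load[h % 24] += N_CARS * CHARGER_KW

# Scenario B: start times staggered uniformly across work hours 9-16.
day_load = [0.0] * 24
for i in range(N_CARS):
    start = 9 + i % 8
    for h in range(start, start + HOURS_NEEDED):
        day_load[h % 24] += CHARGER_KW

print(f"peak load, synchronized night charging: {max(night_load):.0f} kW")
print(f"peak load, staggered day charging:      {max(day_load):.0f} kW")
```

In this sketch the same fleet drawing the same total energy produces half the peak demand when starts are staggered, which is the qualitative effect behind the study’s argument against a single rate-change trigger.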

“The findings from this paper have two profound implications: the first is that the price signals are not aligned with what would be best for the grid – and for ratepayers. The second is that it calls for considering investments in a charging infrastructure for where people work,” says Ines Azevedo, associate professor of energy science and engineering and co-senior author.

“We need to move quickly toward decarbonizing the transportation sector, which accounts for the bulk of emissions in California,” Azevedo adds. “This work provides insight on how to get there. Let’s ensure that we pursue policies and investment strategies that allow us to do so in a way that is sustainable.”

The study is published in Nature Energy.



Read original article here

It Would Take About 100 Billion Years for Another Star to Pass Close Enough to Make the Solar System Unstable

In 1687, Sir Isaac Newton published his magnum opus, Philosophiæ Naturalis Principia Mathematica, which effectively synthesized his theories on motion, velocity, and universal gravitation. In terms of the latter, Newton offered a means for calculating the force of gravity and predicting the orbits of the planets. Since then, astronomers have discovered that the Solar System is merely one small point of light that orbits the center of the Milky Way Galaxy. On occasion, other stars will pass close to the Solar System, which can cause a dramatic shakeup that can kick objects out of their orbits.

These “stellar flybys” are common and play an important role in the long-term evolution of planetary systems. As a result, the long-term stability of the Solar System has been the subject of scientific investigation for centuries. According to a new study by a team of Canadian astrophysicists, residents of the Solar System may rest easy. After conducting a series of simulations, they determined that a star will not pass by and perturb our Solar System for another 100 billion years. Beyond that, the possibilities are somewhat frightening!

The research was led by Garett Brown, a graduate student of computational physics from the Department of Physical and Environmental Sciences (PES) at the University of Toronto at Scarborough. He was joined by Hanno Rein, an associate professor of astrophysics (and Brown’s mentor), also from the PES at UT Scarborough. The paper describing their findings was recently published in the Monthly Notices of the Royal Astronomical Society. As they indicated in their paper, the study of stellar flybys could reveal much about the history and evolution of planetary systems.


As Brown explained to Universe Today via email, this is particularly true of systems like the Solar System during their early history:

“The full extent that stellar flybys play in the evolution of planetary systems is still an active area of research. For planetary systems that form in a star cluster, the consensus is that stellar flybys play an important role while the planetary system remains within the star cluster. This is typically the first 100 million years of planetary evolution. After the star cluster dissipates the occurrence rate of stellar flybys dramatically decreases, reducing their role in the evolution of planetary systems.”

The most widely-accepted theory for the Solar System’s formation is known as the Nebula Hypothesis, which states that the Sun formed from a massive cloud of dust and gas (a.k.a. a nebula) that underwent gravitational collapse at its center. The remaining dust and gas then formed a disk around the Sun, which slowly accreted to form a system of planets. In one version of the hypothesis, the Sun formed due to perturbations in the nebula, possibly from a close flyby by another star (or a supernova). But as Brown explained, stellar flybys are also likely to have played a role in planet formation.

“During planet development, when there is a disk of dust and gas around a star, stellar flybys are expected to be responsible for disk truncation, which would prevent the formation of planets on wider, more distant orbits,” he said. “For planets which have already formed on wide orbits, stellar flybys are thought to be responsible for removing or destabilizing the outermost planets.”

Another widely-accepted theory is that our Sun formed roughly 4.5 billion years ago as part of a star cluster that it long since left. With these theories in mind, Brown and Rein investigated how being part of a cluster (and therefore subject to stellar flybys) could have altered the Solar System once its planets formed and were part of an established system. They found that the role played by stellar flybys depends on how strongly the passing star can disturb the system. They further determined that a stellar flyby can dynamically destabilize a system, causing planets to crash into each other or to become ejected.

Artist’s impression of a solar system in the process of formation. Credit: NASA/JPL-Caltech

This presented a significant challenge because of an issue plaguing astronomers ever since Newton proposed his Theory of Universal Gravitation. To put it briefly, it all comes down to the N-body problem, which describes the difficulty of predicting the individual motions of a group of celestial objects interacting with each other gravitationally. Solving this exactly remains a mathematical impossibility, so astronomers are forced to make numerical approximations. But as Brown said, there are still two major issues with these calculations:

“One, the motion of the planets are chaotic, meaning small differences in the initial conditions of the system will result in dramatically different outcomes (even differences as small as one part in a trillion). And two, the timescales involved are dramatically different. We can get a sense for the statistical outcome of a chaotic system using an ensemble of numerical solutions. For the long-term stability of the Solar System this can give us a ratio of simulations that end up destabilizing compared to the number of simulations that remain stable to the end of the integration time.”

“However, solving the timescales issue is much more difficult. Sophisticated numerical methods have been developed over the past 50 years which make this more tractable, but we essentially need to simulate the motion of the planets one day at a time for billions of years. This requires an incredible amount of computational resources. We typically want to know if the Solar System will remain stable for the remaining lifetime of the Sun (about 5 billion years). Even with modern computers (as fast as they are) it can easily take 3-4 weeks to run just one 5 billion year simulation of the Solar System.”
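The N-body difficulty Brown describes can be illustrated with a minimal direct integrator. Below is a toy kick-drift-kick (leapfrog) sketch in Python with made-up units, not the sophisticated symplectic machinery or code the researchers actually used:

```python
# Minimal leapfrog N-body integrator (toy sketch in normalized units;
# production long-term integrations use specialized symplectic codes).
def accelerations(pos, masses, G=1.0):
    """Pairwise Newtonian gravitational accelerations in 2D."""
    n = len(masses)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def leapfrog(pos, vel, masses, dt, steps):
    """Kick-drift-kick scheme: time-reversible and energy-stable."""
    acc = accelerations(pos, masses)
    for _ in range(steps):
        for i in range(len(masses)):       # half kick, then full drift
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
            pos[i][0] += dt * vel[i][0]
            pos[i][1] += dt * vel[i][1]
        acc = accelerations(pos, masses)
        for i in range(len(masses)):       # second half kick
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
    return pos, vel

# Sun + planet on a near-circular orbit in units where G = M_sun = a = 1.
masses = [1.0, 1e-3]
pos = [[0.0, 0.0], [1.0, 0.0]]
vel = [[0.0, 0.0], [0.0, 1.0]]
pos, vel = leapfrog(pos, vel, masses, dt=0.01, steps=1000)
print("planet position after ~1.6 orbits:", pos[1])
```

Every step requires recomputing all pairwise forces, so stepping a full planetary system one day at a time for five billion years shows why even one such simulation can occupy a modern computer for weeks.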

To even begin to get reasonable statistics, Brown added, researchers need to conduct thousands of different simulations. There are two ways to do this: running the simulations on a single computer for 70 years or more, or using thousands of different computers simultaneously for a month. This makes statistical analysis not only very complicated but also very expensive. For their analysis, Brown and Rein used the Niagara supercomputer at the University of Toronto’s SciNet center – which is part of the Digital Research Alliance of Canada network.

The Niagara supercomputer at the SciNet center. Credit: University of Toronto

As Brown explained, he and Rein employed two main methods to calculate the potential perturbations caused by stellar flybys.

“The first was an analytical approximation developed in 1975 by Douglas Heggie and refined over the years with his collaborators. It’s an approximation that assumes the relative velocity between the two stars is small compared to the orbital velocity of the planets. This analytical estimate allows us to very quickly compute order of magnitude estimates for how a stellar flyby will change the semi-major axis of a planet.”

For the second method, they ran numerical integrations using REBOUND, an open-source multi-purpose N-body code for collisional dynamics developed by Hanno Rein and collaborators. Between these two methods, Brown and Rein were able to simulate a stellar flyby numerically and then measure the system’s state before and after. In the end, their results indicated that perturbations of the Solar System would require a very close flyby and that a stellar encounter of this kind was not likely to happen for a very long time. Said Brown:

“We found that critical changes to the orbit of Neptune needed to be on the order of 0.03 AU or 4.5 billion meters in order to have any impact on the long-term stability of the Solar System. These critical changes could increase the likelihood of instability over the lifetime of the Solar System by 10 times. Additionally, we estimated that a critical stellar flyby like this could occur once every 100 billion years in the region the Solar System is currently in.

“[W]e estimated that we would need to wait about 100 billion years before a stellar flyby past the Solar System would simply increase the odds of dismantling its current architecture by 10 times (and that’s still not a guarantee of destruction).”

Given the turbulent history of the Solar System, it is understandable that the idea of stellar flybys (and the resulting perturbations) would cause anxiety for some. After all, astronomers theorize that “planetary shakeups” may be a common feature of a system’s evolution and that large objects are regularly ejected from the outer reaches of a system due to flybys. A good example is Neptune’s largest moon Triton, which is thought to have formed in the Kuiper Belt and was hurled towards the inner Solar System, where Neptune captured it (which led to the destruction of Neptune’s original satellites).

This artwork shows a rocky planet being bombarded by comets. Credit: NASA/JPL-Caltech

In addition, gravitational interactions with other star systems are why we have long-period comets, where objects kicked out of the Oort Cloud periodically pass through the inner Solar System. The idea that a close flyby could send numerous comets our way (or larger objects like a planetoid) sounds like a doomsday scenario! But as Douglas Adams famously said, “Don’t Panic!” Not only do stellar flybys happen regularly, they typically pass light-years away and don’t influence the Solar System.

In many ways, this is similar to Near Earth Asteroids (NEAs) and the possibility that one will collide with Earth someday. While we know that impacts have happened in the past that were devastating (like the Chicxulub Impact Event that killed off the dinosaurs some 66 million years ago), NEAs make close passes with Earth regularly that pose no threat. In addition, recent analyses of two NEAs considered “potentially hazardous” (2022 AE1 and Apophis) found that neither would threaten Earth for a long time.

What’s more, recent observations by missions like the ESA’s Gaia Observatory have provided the most accurate data on the proper motions and velocities of stars in the Milky Way Galaxy. As Brown noted, this included data on impending flybys and how close they will pass to our system:

“Two notable stars are HD 7977, which may have passed within 3,000 AU (0.0457 light-years) of the Sun some 2.5 million years ago, and Gliese 710 (or HIP 89825), which is expected to pass within about 10,000 AU (0.1696 light-years) of the Sun in about 1.3 million years from now. Doing some rough calculations, both of these stars will have no appreciable effect to the evolution of the Solar System.”

What’s more, a lot will happen between now and then and it’s highly unlikely humanity will be around to witness such an event. Assuming we have not driven ourselves to extinction or left Earth to explore other reaches of the galaxy, planet Earth will cease being habitable long before that. “Considering the Sun will expand and engulf the Earth in about 5 billion years, physically distancing from other stars is not an issue we need to worry about,” said Brown.

Further Reading: arXiv

Read original article here

Mysterious and unstable ‘blobs’ the size of continents beneath Earth’s surface baffle scientists

For years scientists have been scratching their heads over two unexplained massive blobs of rock under Earth’s surface.

Many theories have been thrown around since their discovery in the 1980s, including claims that they could be huge fragments of an alien world.

The blobs of rock under Earth’s crust are each the size of a continent and 100 times taller than Mount Everest.

One sits under Africa, while the other can be found under the Pacific Ocean.

In pursuit of answers, a pair of experts have made some interesting new discoveries about the two gigantic masses.

As suspected, it turns out the blob under Africa reaches much higher.

In fact, it’s roughly twice the height of the one on the opposite side of the world, topping it by about 620 miles.

And that’s not all.

Crucially, scientists have found that the African blob of rock is also less dense and less stable.

It’s not clear why things are this way, but it could be a reason Africa has experienced significantly more supervolcano eruptions over hundreds of millions of years than the region above the blob’s Pacific counterpart.

A 3D model of the blob in Earth’s mantle beneath Africa.
Mingming Li/ASU

“This instability can have a lot of implications for the surface tectonics, and also earthquakes and supervolcanic eruptions,” said Qian Yuan, from Arizona State University.

These thermo-chemical materials – officially known as large low-shear-velocity provinces (LLSVPs) – were studied by looking at data from seismic waves and running hundreds of simulations.

While we now know they both have different compositions, we’re yet to work out how this affects the surrounding mantle, which is found between the planet’s core and the crust.

And most importantly, we’re no closer to figuring out where these mysterious blobs came from.

“Our combination of the analysis of seismic results and the geodynamic modeling provides new insights on the nature of the Earth’s largest structures in the deep interior and their interaction with the surrounding mantle,” Yuan added.

“This work has far-reaching implications for scientists trying to understand the present-day status and the evolution of the deep mantle structure, and the nature of mantle convection.”

And so, the investigation continues.

The research was published in the Nature Geoscience journal.

This article originally appeared on The Sun and was reproduced here with permission.

Read original article here

Florida officials accelerate plans to demolish unstable remains of condo tower ahead of potential tropical storm – The Washington Post

  1. Florida officials accelerate plans to demolish unstable remains of condo tower ahead of potential tropical storm The Washington Post
  2. Tropical Storm Elsa Nearing Hispaniola, Jamaica, Cuba; Florida Threat Begins Monday The Weather Channel
  3. Hurricane Elsa: What impacts could Tampa Bay see? WFLA
  4. First Alert Forecast: Sunny for your holiday weekend; moisture from Elsa could impact South Carolina next week WIS10
  5. 5 PM UPDATE: Elsa picks up speed, now 85 mph Florida Keys Weekly

Read original article here

American astronauts to again use Russian Soyuz rocket to reach ISS as NASA can’t rely on ‘unstable’ US tech – Moscow space chief

After the US bought a seat on a Russian spacecraft to send a NASA astronaut to the International Space Station, the chief of Russia’s Space Agency Roscosmos Dmitry Rogozin claimed that American spaceflight is still “unstable.”

From 2011 until last year, the US relied on Russia’s Soyuz launch system to send its astronauts to live on the ISS, as America lacked its own crew-launch capability. In May 2020, Crew Dragon, a reusable spacecraft made by Elon Musk’s private company SpaceX, took two Americans to space for the first time on a US-made vehicle in nine years.

However, despite this progress, it appears that NASA will still need help from Moscow.

As Roscosmos revealed on Wednesday, in late 2020 the Americans asked for a spot for astronaut Mark Vande Hei on Soyuz MS-18, due to go to the ISS on April 9. According to a press release, the request was made very late and interrupted the Russian flight program, but the agency accepted it as a way to “confirm its commitment to joint agreements and the spirit of joint use of the International Space Station.”

“The tradition of international crews, which has existed for more than twenty years, will be continued again,” the statement said.



Before 2011, there was an agreement between Russia and the US to share seats on each other’s spacecraft. Last year, NASA reported that it would be negotiating with Roscosmos to bring back this crossover system. However, according to Rogozin, this latest agreement isn’t a swap-over.

“Flights to the ISS are unstable,” Rogozin wrote on Facebook. “There was an urgent need for them to find insurance and send their man on our ship, to make sure their segment on the space station isn’t left unattended. The talk of swapping seats has nothing to do with this at all,” he clarified, noting that the agreement was made with Axiom Space, a NASA contractor and private space company which has plans to eventually sell human spaceflight experiences to tourists who want to visit the ISS.

“It’s convenient for them, and we don’t care. We’ll invest the proceeds in new developments.”

In 2014, when he served as deputy prime minister, Rogozin famously joked that the US should use a trampoline to send its astronauts to space after sanctions that targeted the Russian space industry. In 2020, after the success of SpaceX, Musk quipped, “the trampoline is working!”



However, Musk was not to have the last laugh. Following the latest US request for a seat, Rogozin wrote on Facebook: “Apparently, the trampoline only works so-so.”


Read original article here

Unstable helium adds a limit on the ongoing saga of the proton’s size

The small particle accelerator in Switzerland where, surrounded by farms, the work took place.

Physicists, who dedicate their lives to studying the topic, don’t actually seem to like physics very much since they’re always hoping it’s broken. But we’ll have to forgive them; finding out that a bit of theory can’t possibly explain experimental results is a sign that we probably need a new theory, which is something that would excite any physicist.

In recent years, one of the things that has looked the most broken is a seemingly simple measurement: the charge radius of the proton, which is a measure of its physical size. Measurements made with hydrogen atoms, which have a single electron orbiting a proton, gave us one answer. Measurements in which the electron was replaced by a heavier particle, called a muon, gave us a different answer—and the two results were incompatible. A lot of effort has gone into eliminating this discrepancy, and it has gotten smaller, but it hasn’t gone away.

That has theorists salivating. The Standard Model has no space for these kinds of differences between electrons and muons, so could this be a sign that the Standard Model is wrong? The team behind some of the earlier measurements is now back with a new one, this one tracking the behavior of a muon orbiting a helium nucleus. The results are consistent with other measurements of helium’s charge radius, suggesting there’s nothing funny about the muon. So the Standard Model can breathe a sigh of relief.

Measuring muons?

The measurement involved is, to put it simply, pretty insane. Muons are essentially heavy versions of electrons, so substituting one for another in an atom is relatively simple. And a muon’s mass provides some advantages for these sorts of measurements. The mass ensures that the muon’s orbitals end up so compact that its wave function overlaps with the wave function of the nucleus. As a result, the muon’s behavior when it is orbiting a nucleus is very sensitive to the nucleus’ charge radius.
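The scaling behind this sensitivity is textbook hydrogen-like physics: the Bohr radius shrinks in proportion to both the orbiting particle’s mass and the nuclear charge. A back-of-the-envelope sketch using standard constants (an order-of-magnitude illustration, not the experiment’s actual calculation):

```python
# Why a muon probes the nucleus: the hydrogen-like orbital radius scales
# as 1/(mass * Z). Standard textbook constants below.
a0 = 5.29e-11        # Bohr radius of hydrogen, meters
mass_ratio = 206.77  # muon mass / electron mass
Z = 2                # helium nuclear charge

a_electron = a0 / Z               # electron ground state around a He nucleus
a_muon = a0 / (mass_ratio * Z)    # the same orbit with a muon instead

print(f"electron orbit: {a_electron:.2e} m")
print(f"muon orbit:     {a_muon:.2e} m (~{a_muon * 1e15:.0f} fm)")

# The wave-function overlap with the nucleus scales roughly as 1/a^3, so
# the muon's sensitivity to the nuclear charge radius is enhanced by a
# factor of about mass_ratio^3, i.e. nearly ten million.
print(f"overlap enhancement: ~{mass_ratio ** 3:.1e}")
```

With the muonic orbit at roughly a hundred femtometers and the helium nucleus itself at nearly two, the orbit is only about seventy-five nuclear radii across, which is why the transition energies shift measurably with the charge radius.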

All of this would be great if it weren’t for the fact that muons are unstable and typically decay in under two microseconds. Putting one in orbit around a helium nucleus adds to the complications, since helium typically has two electrons in orbit, and they can interact with each other. The expected three-way interactions of a nucleus-muon-electron are currently beyond our ability to calculate, meaning we would have no idea if the actual behavior differed from theory.

So the researchers solved this problem by creating a positively charged ion composed of a helium nucleus and a single muon orbiting it. Making one of these—or, more correctly, making hundreds of them—is where the insanity starts.

The researchers had access to a beam of muons created by a particle accelerator, and they decided to direct the beam into some helium gas. In this process, as the muons enter, they have too much energy to stay in orbit around a helium nucleus, so they bounce around, losing energy with each collision. Once the muons slow down enough, they can enter into a high-energy orbit in a helium atom, bumping out one of its electrons in the process. But the second electron is still around, messing up any potential measurements.

But the muon has a lot of momentum because of its mass, and energy transfers within an atom are faster than losing the energy to the environment. So as the muon transfers some of its energy to the electron, the electron’s smaller mass ensures that this is enough to boot the electron out of the atom, and we’re left with a muonic helium ion. Fortunately, all of this happens quickly enough that the muon hasn’t had a chance to decay.

Let the insanity begin

By this point, the muon is typically in an orbital that is lower energy but has more energy than the ground state. The researchers set up a trigger sensitive to the appearance of muons in the experiment. After a delay to allow the muons to boot out the two electrons, the trigger causes a laser to hit the sample with the right amount of energy to boost the muon from the 2S orbital to the 2P orbital. From there, it will decay into the ground state, releasing an X-ray in the process.

Many of the muons won’t be in a 2S orbital, and the laser will have no effect on them. The researchers were willing to sacrifice much of the muonic helium they made in order to get precision measurements of the ones that were in the right state. Their presence was signaled by the detection of an X-ray with the right energy. To further ensure they were looking at the right thing, the researchers only took data that was associated with a high-energy electron produced by the decay of the muon.

And remember, all of this had to take place fast enough to happen within the microsecond-scale time window before the muon decayed.

The first step involved tuning the laser used to the right frequency to boost the muon into the 2P orbit, since this is the value that we need to measure. This was done by adjusting a tunable laser across a frequency range until the helium started producing X-rays. Once the frequency was identified, the researchers took data for 10 days, which was enough for precision measurements of the frequency. During this time, the researchers observed 582 muonic helium ions.

Based on calculations using the laser frequency, the researchers found that the helium nucleus’ charge radius is 1.6782 femtometers. Measurements made by bouncing electrons off the nucleus indicate a radius of 1.681 femtometers. The two values agree within experimental errors.
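The experimental uncertainties aren’t quoted in this summary, but the raw gap between the two charge-radius values above can be checked directly:

```python
# Comparing the two helium charge-radius values quoted above (femtometers).
r_muonic = 1.6782   # from muonic-helium laser spectroscopy
r_scatter = 1.681   # from electron-nucleus scattering experiments

diff = abs(r_muonic - r_scatter)
rel = diff / r_scatter
print(f"difference: {diff:.4f} fm ({rel * 100:.2f}% of the radius)")
```

The two determinations differ by less than a fifth of a percent, which is what "strong agreement" means in this context.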

We’re sorry, it’s not broken

On the simplest level, the fact that the muon measurements agree with measurements made independently indicates that there’s nothing special about muons. Consequently, the Standard Model, which says the same thing, is intact down to fairly small limits allowed by the experimental errors here. (That is not to say it’s not broken in some other way, of course.) So theorists everywhere will be disappointed.

As an amusing aside, the researchers compared their value to one generated decades ago in the particle accelerators at CERN. It turns out this value is similar, but only by accident, since the earlier work had two offsetting errors. “Their quoted charge radius is not very far from our value,” the researchers note, “but this can be traced back to an awkward coincidence of a wrong experiment combined with an incomplete 2P–2S theory prediction, by chance yielding a not-so-wrong value.” So in this case, two wrongs did make an almost-right.

In any case, this work will focus researchers’ attention back onto trying to figure out why different experiments with protons keep producing results that don’t quite agree, since we can’t blame things on the muon being weird. In the meantime, we can all appreciate how amazing it is that we can manage to do so much with muons within the tiny fraction of a second in which they exist.

Nature, 2021. DOI: 10.1038/s41586-021-03183-1  (About DOIs).

Read original article here