
Fallout’s Creator Solves One Of The Franchise’s Biggest Mysteries – Den of Geek

  1. Fallout co-creator settles over two decades of fan debate about who nuked who with a single off-hand comment – PC Gamer
  2. Fallout co-creator Tim Cain has designed a secret sequel that no one will ever see: “I’m never ever going to talk about it” – Gamesradar
  3. It’s Over: We Know Who Launched The First Nukes in Fallout – Insider Gaming
  4. Fallout co-creator wants to see the post-apocalyptic RPG series leave the US: “We wanted to explore China and Russia” – Yahoo Entertainment

Read original article here

Critical Role’s Matt Mercer Solves Baldur’s Gate 3 Problems In The Most D&D Way – Kotaku

  1. D&D legend Matthew Mercer overcomes Baldur’s Gate 3 challenge by stacking 40 boxes – Gamesradar
  2. Of course Baldur’s Gate 3 lets you stack dozens of crates and stand on top of them so you can teleport over the wall of a keep – PC Gamer
  3. Matt Mercer’s ‘Very Smart’ Crate-Stacking Trick Gets Him to the Top of a Baldur’s Gate 3 Castle – IGN
  4. Baldur’s Gate 3 Fans Are Calling For More In-Game Detail On Class Progression – TheGamer


DeepMind’s latest AI project solves programming challenges like a newb

If an AI were asked to come up with an image for this article, would it think of The Matrix?

Google’s DeepMind AI division has tackled everything from StarCraft to protein folding. So it’s probably no surprise that its creators have eventually turned to what is undoubtedly a personal interest: computer programming. In Thursday’s edition of Science, the company describes a system it developed that produces code in response to programming challenges typical of those used in human programming contests.

On an average challenge, the AI system could score in the top half of participants. But it had a bit of trouble scaling, being less likely to produce a successful program on problems where more code is typically required. Still, the fact that it works at all without having been given any structural information about algorithms or programming languages is a bit of a surprise.

Rising to the challenge

Computer programming challenges are fairly simple in structure: People are given a task to complete and must produce code that performs the requested task. In an example given in the new paper, programmers are given two strings and asked to determine whether the shorter of the two could be produced by substituting backspaces for some of the keypresses needed to type the longer one. Submitted programs are then checked to see whether they provide a general solution to the problem or fail when additional examples are tested.
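The backspace problem described above has a compact greedy solution. Here is a sketch of one common formulation (the function name and exact rules are our reading of the problem, not code from the paper):

```python
def can_obtain(typed: str, target: str) -> bool:
    """While typing `typed`, any keypress may be replaced by a backspace,
    which deletes the last character in the buffer (or nothing if empty).
    Return True if the buffer can end up equal to `target`.
    """
    i, j = len(typed) - 1, len(target) - 1
    while i >= 0 and j >= 0:
        if typed[i] == target[j]:
            i -= 1          # characters match; consume both
            j -= 1
        else:
            i -= 2          # a backspace here removes typed[i] and one earlier keypress
    return j < 0            # any leftover prefix of `typed` can always be erased

# "ba" is reachable from "ababa"; "a" is not reachable from "ab"
```

Matching from the end is what makes the greedy choice safe: a mismatched trailing character of the longer string can only be explained by a backspace, which always consumes exactly two keypresses.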

Given enough examples of programs that can solve a single problem, it would probably be possible for an AI system to infer the algorithmic structure needed to succeed. But that wouldn’t be a general solution for tackling arbitrary problems; an AI trained on one class of challenge would fail when asked to take on an unrelated one.

To make something more generalizable, the DeepMind team treated it a bit like a language problem. To an extent, the description of the challenge is an expression of what the algorithm should do, while the code is an expression of the same thing, just in a different language. So the AI in question was designed to have two parts: one that ingested the description and converted it to an internal representation, and a second that used the internal representation to generate functional code.

Training the system was also a two-stage process. In the first stage, the system was simply asked to process a snapshot of material on GitHub, a total of over 700GB of code. (In these days when you can fit that on a thumb drive, it may not sound like much, but remember that code is just raw text, so you get a lot of lines per gigabyte.) Note that this data also includes comments, which should use natural language to explain what nearby code is doing and so should help with both the input and output tasks.

Once the system was trained, it went through a period of tuning. DeepMind set up its own programming contests and then fed the results into the system: problem description, working code, failing code, and the test cases used to check it.

Similar approaches had been tried previously, but DeepMind indicates that it was just able to throw more resources at the training. “A key driver of AlphaCode’s performance,” the paper indicates, “came from scaling the number of model samples to orders of magnitude more than previous work.”
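The sample-and-filter strategy behind that scaling can be sketched in a few lines: generate many candidate programs, execute each against the problem’s example tests, and keep only the survivors. This is a toy illustration, not DeepMind’s code; candidates are assumed to be Python source strings that define a `solve()` function:

```python
def filter_candidates(candidates, examples):
    """Keep only candidate programs whose solve() passes every example.

    candidates: Python source strings, each expected to define solve(x)
    examples:   (input, expected_output) pairs
    """
    passing = []
    for source in candidates:
        namespace = {}
        try:
            exec(source, namespace)        # a crashing candidate is simply discarded
            solve = namespace["solve"]
            if all(solve(x) == y for x, y in examples):
                passing.append(source)
        except Exception:
            continue
    return passing

# Toy run: two sampled "programs" for doubling a number; only one is correct.
samples = [
    "def solve(x):\n    return 2 * x",
    "def solve(x):\n    return x + 1",
]
good = filter_candidates(samples, [(1, 2), (3, 6)])   # keeps only the first sample
```

The more samples drawn, the better the odds that at least one candidate survives filtering, which is why scaling sample counts paid off.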


NASA solves Voyager 1 data glitch mystery, but finds another

NASA’s Voyager 1 probe is finally making sense again in interstellar space.

After months of sending junk data about its health to flight controllers on Earth, the 45-year-old Voyager 1 is once again beaming back clear telemetry data on its status beyond our solar system. NASA knew the problem was somewhere in the spacecraft’s attitude articulation and control system, or AACS, which keeps Voyager 1’s antenna pointed at Earth. But the solution was surprising. 

“The AACS had started sending the telemetry data through an onboard computer known to have stopped working years ago, and the computer corrupted the information,” NASA officials wrote in an update Tuesday (Aug. 30). The rest of the spacecraft was apparently fine, collecting data as normal.

Related: Celebrate 45 years of Voyager with these amazing images (gallery)

Once engineers began to suspect Voyager 1 was using a dead computer, they simply sent a command to the probe so the AACS would use the right computer to phone home. It was a low-risk fix, but a time-consuming one: it takes a radio signal nearly 22 hours to reach Voyager 1, which was 14.6 billion miles (23.5 billion kilometers) from Earth and growing farther by the second as of Aug. 30.
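The quoted signal delay is easy to verify from the distance given in the article; the speed of light is the only added constant:

```python
SPEED_OF_LIGHT_KM_S = 299_792.458            # km/s
voyager_distance_km = 23.5e9                 # from the article, as of Aug. 30

one_way_seconds = voyager_distance_km / SPEED_OF_LIGHT_KM_S
one_way_hours = one_way_seconds / 3600       # ≈ 21.8 hours, i.e. "nearly 22"
```

A round trip (command out, confirmation back) therefore takes close to two full days.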

With the Voyager 1 data glitch solved, NASA is now pondering a new mystery: what caused it in the first place. 

“We’re happy to have the telemetry back,” Voyager project manager Suzanne Dodd said in a statement. “We’ll do a full memory readout of the AACS and look at everything it’s been doing. That will help us try to diagnose the problem that caused the telemetry issue in the first place.”

Related: Voyager 1 marks 10 years in interstellar space

Engineers suspect Voyager 1 began routing its health and status telemetry through the dead computer after receiving a bad command from yet another onboard computer. That would suggest some other problem lurking inside Voyager 1’s computer brains, but mission managers don’t think it’s a threat to the iconic spacecraft’s long-term health.

Still, they’d like to know exactly what’s going on inside Voyager 1. 

“So we’re cautiously optimistic, but we still have more investigating to do,” Dodd said in the statement. 

NASA launched the Voyager 1 spacecraft, and its twin Voyager 2, in 1977 on a mission to explore the outer planets of the solar system. Voyager 1 flew by Jupiter and Saturn during its primary mission and kept going, ultimately entering interstellar space in 2012, with Voyager 2 reaching that milestone in 2018. 

You can track the status of Voyager 1 and Voyager 2 on this NASA website.





DNA analysis solves mystery of bodies found at bottom of medieval well

The identity of the remains of the six adults and 11 children and why they ended up in the medieval well had long vexed archaeologists. Unlike other mass burials where skeletons are uniformly arranged, the bodies were oddly positioned and mixed — likely caused by being thrown head first shortly after their deaths.

To understand more about how these people died, scientists were recently able to extract detailed genetic material preserved in the bones thanks to recent advances in ancient DNA sequencing. The genomes of six of the individuals showed that four of them were related — including three sisters, the youngest of whom was five to 10 years old. Further analysis of the genetic material suggested that all six were “almost certainly” Ashkenazi Jews.

The researchers believe they all died during antisemitic violence that wracked the city — most likely a February 1190 riot related to the Third Crusade, one of a series of religious wars supported by the church — as described by a medieval chronicler. The number of people killed in the massacre is unclear.

“I’m delighted and relieved that twelve years after we first started analysing the remains of these individuals, technology has caught up and helped us to understand this historical cold case of who these people were and why we think they were murdered,” Selina Brace, a principal researcher at the Natural History Museum in London and lead author on the paper, said in a news release.

Judaism is primarily a shared religious and cultural identity, the study noted, but as a result of a long-standing practice of marrying within the community, Ashkenazi Jewish groups often carry a distinctive genetic ancestry that includes markers for some rare genetic disorders. These include Tay-Sachs disease, which is usually fatal in childhood.

The researchers found that the individuals in the well shared a similar genetic ancestry to present-day Ashkenazi Jews, who, according to the study, are descendants of medieval Jewish populations with histories mainly in northern and Eastern Europe.

“Nobody had analyzed Jewish ancient DNA before because of prohibitions on the disturbance of Jewish graves. However, we did not know they were likely Jewish until after doing the genetic analyses,” evolutionary geneticist and study coauthor Mark Thomas, a professor at University College London, said in the release.

“It was quite surprising that the initially unidentified remains filled the historical gap about when certain Jewish communities first formed, and the origins of some genetic disorders,” he said.

The DNA analysis also allowed the researchers to infer the physical traits of a toddler boy found in the well. He likely had blue eyes and red hair, the latter a feature associated with historical stereotypes of European Jews, the study, published Tuesday by the journal Current Biology, said.

In the medieval manuscript “Imagines Historiarum II,” chronicler Ralph de Diceto paints a vivid picture of the massacre:

“Many of those who were hastening to Jerusalem determined first to rise against the Jews before they invaded the Saracens. Accordingly on 6th February [in 1190 AD] all the Jews who were found in their own houses at Norwich were butchered; some had taken refuge in the castle,” he wrote, according to the news release.

The well was located in what used to be the medieval Jewish quarter of Norwich, with the study noting that the city’s Jewish community were descendants of Ashkenazi Jews from Rouen, Normandy, who were invited to England by William the Conqueror, who invaded England in 1066.

The link with the 1190 riot isn’t definitive, however.

Radiocarbon dating of the remains suggested the bodies ended up in the well at some point between 1161 and 1216 — a period which includes some well-documented outbreaks of antisemitic violence in England but also covers the Great Revolt of 1174, during which many people in the city were killed.

“Our study shows how effective archaeology, and particularly new scientific techniques such as ancient DNA, can be in providing new perspectives on historical events,” Tom Booth, a senior research scientist at the Francis Crick Institute, said in the news release.

“Ralph de Diceto’s account of the 1190 AD attacks is evocative, but a deep well containing the bodies of Jewish men, women, and especially children forces us to confront the real horror of what happened.”


New study solves mystery of how soft liquid droplets erode hard surfaces

A new study led by University of Minnesota Twin Cities researchers shows why liquid droplets have the ability to erode hard surfaces, a discovery that could help engineers design more erosion-resistant materials. The above image shows the impact droplets can make on a granular, sandy surface (left) versus a hard plaster surface (right). Credit: Cheng Research Group, University of Minnesota

A first-of-its-kind study led by University of Minnesota Twin Cities researchers reveals why liquid droplets have the ability to erode hard surfaces. The discovery could help engineers design better, more erosion-resistant materials.

Using a newly developed technique, the researchers were able to measure hidden quantities such as the shear stress and pressure created by the impact of liquid droplets on surfaces, a phenomenon that has only ever been studied visually. 

The paper is published in Nature Communications.

Researchers have been studying the impact of droplets for years, from the way raindrops hit the ground to the transmission of pathogens such as COVID-19 in aerosols. It’s common knowledge that slow-dripping water droplets can erode surfaces over time. But why can something seemingly soft and fluid make such a huge impact on hard surfaces?

“There are similar sayings in both eastern and western cultures that ‘Dripping water hollows out stone,'” explained Xiang Cheng, senior author on the paper and an associate professor in the University of Minnesota Department of Chemical Engineering and Materials Science. “Such sayings intend to teach a moral lesson: ‘Be persistent. Even if you’re weak, when you keep doing something continuously, you will make an impact.’ But, when you have something so soft like droplets hitting something so hard like rocks, you can’t help wondering, ‘Why does the drop impact cause any damage at all?’ That question is what motivated our research.”






Watch a video demonstrating in slow motion how a water droplet impacts a sandy surface. Credit: University of Minnesota

In the past, droplet impact has only been analyzed visually using high-speed cameras. The University of Minnesota researchers’ new technique, called high-speed stress microscopy, provides a more quantitative way to study this phenomenon by directly measuring the force, stress, and pressure underneath liquid drops as they hit surfaces.

The researchers found that the force exerted by a droplet actually spreads out with the impacting drop—instead of being concentrated in the center of the droplet—and the speed at which the droplet spreads out exceeds the speed of sound at short times, creating a shock wave across the surface. Each droplet behaves like a small bomb, releasing its impact energy explosively and giving it the force necessary to erode surfaces over time.
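The study’s measured stresses aren’t reproduced here, but classical impact physics offers a rough sense of why a drop hits so hard: the transient water-hammer pressure ρ·c·v at first contact far exceeds the drop’s steady stagnation pressure. The numbers below are assumed round values for a falling raindrop, not figures from the paper:

```python
rho = 1000.0       # density of water, kg/m^3
c_water = 1480.0   # speed of sound in water, m/s
v_impact = 5.0     # assumed impact speed of a falling drop, m/s

# Transient water-hammer pressure at the instant of contact
water_hammer_pressure = rho * c_water * v_impact        # 7.4e6 Pa (7.4 MPa)

# Steady stagnation pressure once the drop is spreading, for comparison
stagnation_pressure = 0.5 * rho * v_impact ** 2         # 1.25e4 Pa (12.5 kPa)
```

The roughly 600-fold gap between the two estimates is one classical reason repeated drop impacts can fatigue even hard surfaces.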

Besides paving a new way to study droplet impact, this research could help engineers design more erosion-resistant surfaces for applications that must weather the outdoor elements. Cheng and his lab at the University of Minnesota Twin Cities already plan to expand this research to study how different textures and materials change the amount of force created by liquid droplets.

“For example, we paint the surface of a building or coat wind turbine blades to protect the surfaces,” Cheng said. “But over time, rain droplets could still cause damage via impact. So, our research after this paper is to see if we can reduce the amount of shear stress of droplets, which would allow us to design special surfaces that can mitigate the stress.”

In addition to Cheng, the research team included Ting-Pi Sun, a chemical engineering Ph.D. student at the University of Minnesota; Leonardo Gordillo, an assistant professor at the University of Santiago, Chile, along with undergraduate students Franco Álvarez-Novoa and Klebbert Andrade; and Pablo Gutiérrez, an assistant professor at O’Higgins University, Chile.


More information:
Stress distribution and surface shock wave of drop impact, Nature Communications (2022). DOI: 10.1038/s41467-022-29345-x
Provided by
University of Minnesota

Citation:
New study solves mystery of how soft liquid droplets erode hard surfaces (2022, March 31)
retrieved 31 March 2022
from https://phys.org/news/2022-03-mystery-soft-liquid-droplets-erode.html





New study solves a major problem with living off-Earth

Humanity could establish bases on other planets and guarantee their supply of oxygen using new techniques discovered here on Earth.

The discovery — Research published earlier this month reveals that water electrolysis, which uses electricity to split water molecules into hydrogen and oxygen, does indeed work at lower gravity levels. The findings, published in the journal Nature, show that oxygen production via electrolysis was reduced by only 11 percent under Moon-like gravity conditions.

“If there is water-ice on Mars, then this could also be electrolyzed to make oxygen and hydrogen,” Mark Symes, a senior lecturer in chemistry at the University of Glasgow and an author on the paper, tells Inverse.

“These could then be used to propel rockets from Mars back to Earth.”

Why it matters — The findings could help enable some of the most ambitious plans in spaceflight. In 2017, SpaceX CEO Elon Musk outlined plans to send humans to Mars in the 2020s, establish a base, and ultimately establish a city by 2050.

The Starship, an under-development reusable rocket, uses liquid oxygen and methane as its fuel. At the 2017 event, Musk explained how astronauts could collect water and carbon dioxide and use it to make fuel:

SpaceX’s explanation for how it will make more fuel. Credit: SpaceX

As the diagram above shows, electrolysis is vital for making the liquid oxygen that ships could use to return.

How they did it — The latest research looked at how reduced gravity could change the electrolysis process. The team conducted electrolysis at several strengths of gravity, ranging from 0.166 g (similar to the Moon) to 8 g (eight times stronger than Earth). By comparison, Mars’ gravity is around 0.38 g.

“No one had previously looked at the effects of lunar gravity on the water electrolysis process,” Symes says.

The team found that oxygen production was only reduced by 11 percent under the weakest, Moon-like gravity. Most importantly, the process works and could provide resources for a lunar base.
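For scale, the stoichiometry of electrolysis (2H₂O → 2H₂ + O₂) fixes the oxygen yield per kilogram of water, and the paper’s roughly 11 percent lunar-gravity penalty can be applied to the production rate. A rough sketch with textbook molar masses (the round numbers are ours, not the paper’s):

```python
M_H2O = 18.015   # molar mass of water, g/mol
M_O2 = 31.998    # molar mass of O2, g/mol

def oxygen_from_water(water_kg: float) -> float:
    """Mass of O2 (kg) from fully electrolyzing `water_kg` of water.
    2 H2O -> 2 H2 + O2: one O2 molecule per two water molecules."""
    moles_water = water_kg * 1000 / M_H2O
    return (moles_water / 2) * M_O2 / 1000

earth_yield = oxygen_from_water(1.0)       # ≈ 0.888 kg of O2 per kg of water
lunar_rate = earth_yield * (1 - 0.11)      # ~11% slower O2 production under lunar gravity
```

Nearly 90 percent of water’s mass comes back as oxygen, which is why water-ice is such an attractive in-situ resource.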

What this means for humans on the Moon

NASA has already started experimenting with using Mars’ resources. In April 2021, the agency’s Mars Oxygen In-Situ Resource Utilization Experiment, or MOXIE, extracted its first oxygen from the air. It did this by collecting carbon dioxide from the atmosphere and using a different electrolysis process.

MOXIE was a success, but a future version would need to be much bigger. The current version, the size of a toaster, produces 10 grams of oxygen per hour. A four-person astronaut mission would need around 55,000 pounds of oxygen to leave Mars, which would require a MOXIE around 100 times larger.
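The “100 times larger” claim can be sanity-checked from the figures in the paragraph above; the pound-to-gram conversion is the only added number:

```python
moxie_rate_g_per_hr = 10.0                        # current toaster-sized MOXIE
oxygen_needed_lb = 55_000.0                       # to launch a four-person crew off Mars
oxygen_needed_g = oxygen_needed_lb * 453.59237    # ≈ 24.9 million grams

hours_for_one_moxie = oxygen_needed_g / moxie_rate_g_per_hr
years_for_one_moxie = hours_for_one_moxie / (24 * 365)   # ≈ 285 years

# A unit ~100x larger brings that down to roughly 3 years,
# comparable to a realistic Mars mission timeline.
years_for_100x = years_for_one_moxie / 100
```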

With these latest findings, astronauts have an alternative source of oxygen. Explorers could seek out water-ice and use that to harvest more oxygen.

Looking to the far future, Musk and others want to build large cities and even terraform the planet to make it more liveable. What do the latest findings mean for that?

“That’s a bit beyond my expertise, but in general I would say that anything that allows or facilitates humans spending time on these other planets without constant resupply from Earth is a step towards a self-sufficient infrastructure,” Symes says.

Abstract: Establishing a permanent human presence on the Moon or Mars requires a secure supply of oxygen for life support and refueling. The electrolysis of water has attracted significant attention in this regard as water-ice may exist on both the Moon and Mars. However, to date there has been no study examining how the lower gravitational fields on the Moon and Mars might affect gas-evolving electrolysis when compared to terrestrial conditions. Herein we provide experimental data on the effects of gravitational fields on water electrolysis from 0.166 g (lunar gravity) to 8 g (eight times the Earth’s gravity) and show that electrolytic oxygen production is reduced by around 11% under lunar gravity with our system compared to operation at 1 g. Moreover, our results indicate that electrolytic data collected using less resource-intensive ground-based experiments at elevated gravity (>1 g) may be extrapolated to gravitational levels below 1 g.


NASA’s NuSTAR Spots Highest-Energy Light Ever Detected From Jupiter – And Solves a Decades-Old Mystery

Jupiter’s southern hemisphere is shown in this image from NASA’s Juno mission. New observations by NASA’s NuSTAR reveal that auroras near both the planet’s poles emit high-energy X-rays, which are produced when accelerated particles collide with Jupiter’s atmosphere. Credit: Enhanced image by Kevin M. Gill (CC-BY) based on images provided courtesy of NASA/JPL-Caltech/SwRI/MSSS

The planet’s auroras are known to produce low-energy X-ray light. A new study finally reveals higher-frequency X-rays and explains why they eluded another mission 30 years ago.

Scientists have been studying Jupiter up close since the 1970s, but the gas giant is still full of mysteries. New observations by NASA’s NuSTAR mission have now solved one of them, detecting the highest-energy light ever seen from the planet.

Jupiter is shown in visible light for context, overlaid with an artist’s impression of the Jovian upper atmosphere’s infrared glow and magnetic field lines. Jupiter’s powerful magnetic field accelerates ions and funnels them toward the planet’s poles, where they collide with its atmosphere and release energy in the form of light. Credit: J. O’Donoghue (JAXA)/Hubble/NASA/ESA/A. Simon/J. Schmidt

Electrons from Io are also accelerated by the planet’s magnetic field, according to observations by NASA’s Juno spacecraft, which arrived at Jupiter in 2016. Researchers suspected that those particles should produce even higher-energy X-rays than what Chandra and XMM-Newton observed, and NuSTAR (short for Nuclear Spectroscopic Telescope Array) is the first observatory to confirm that hypothesis.

“It’s quite challenging for planets to generate X-rays in the range that NuSTAR detects,” said Kaya Mori, an astrophysicist at Columbia University and lead author of the study.

NuSTAR detected high-energy X-rays from the auroras near Jupiter’s north and south poles. NuSTAR cannot locate the source of the light with high precision, but can only find that the light is coming from somewhere in the purple-colored regions. Credit: NASA/JPL-Caltech

The solution to that puzzle, according to the new study, lies in the mechanism that produces the high-energy X-rays. The light comes from the energetic electrons that Juno can detect with its Jovian Auroral Distributions Experiment (JADE) and Jupiter Energetic-particle Detector Instrument (JEDI), but there are multiple mechanisms that can cause particles to produce light. Without a direct observation of the light that the particles emit, it’s almost impossible to know which mechanism is responsible.

In this case, the culprit is something called bremsstrahlung emission. When the fast-moving electrons encounter charged atoms in Jupiter’s atmosphere, they are attracted to the atoms like magnets. This causes the electrons to rapidly decelerate and lose energy in the form of high-energy X-rays. It’s like how a fast-moving car would transfer energy to its braking system to slow down; in fact, bremsstrahlung means “braking radiation” in German. (The ions that produce the lower-energy X-rays emit light through a process called atomic line emission.)

Each light-emission mechanism produces a slightly different light profile. Using established studies of bremsstrahlung light profiles, the researchers showed that the X-rays should get significantly fainter at higher energies, including in Ulysses’ detection range.

“If you did a simple extrapolation of the NuSTAR data, it would show you that Ulysses should have been able to detect X-rays at Jupiter,” said Shifra Mandel, a Ph.D. student in astrophysics at Columbia University and a co-author of the new study. “But we built a model that includes bremsstrahlung emission, and that model not only matches the NuSTAR observations, it shows us that at even higher energies, the X-rays would have been too faint for Ulysses to detect.”
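The logic of that argument can be illustrated with toy numbers: a naive power-law extrapolation stays relatively bright at high energies, while the same spectrum with a bremsstrahlung-like exponential roll-off fades below detectability. All parameters below are invented for illustration, not fitted values from the paper:

```python
import math

def power_law(energy_kev, norm=1.0, index=2.0):
    """Naive extrapolation: flux keeps falling as a pure power law."""
    return norm * energy_kev ** -index

def bremsstrahlung_like(energy_kev, norm=1.0, index=2.0, cutoff_kev=50.0):
    """Same low-energy slope, but with an exponential roll-off."""
    return power_law(energy_kev, norm, index) * math.exp(-energy_kev / cutoff_kev)

# At 10 keV the two models nearly agree; at 300 keV the cutoff model
# is already a factor of e^6 ≈ 400 fainter than the naive extrapolation.
ratio_low = power_law(10) / bremsstrahlung_like(10)
ratio_high = power_law(300) / bremsstrahlung_like(300)
```

This is the shape of the discrepancy the team’s model resolves: a spectrum that looks extrapolatable at NuSTAR’s energies can still be far too faint at higher energies for an instrument like Ulysses’.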

The conclusions of the paper relied on simultaneous observations of Jupiter by NuSTAR, Juno, and XMM-Newton.

New Chapters

On Earth, scientists have detected X-rays in Earth’s auroras with even higher energies than what NuSTAR saw at Jupiter. But those emissions are extremely faint – much fainter than Jupiter’s – and can only be spotted by small satellites or high-altitude balloons that get extremely close to the locations in the atmosphere that generate those X-rays. Similarly, observing these emissions in Jupiter’s atmosphere would require an X-ray instrument close to the planet with greater sensitivity than those carried by Ulysses in the 1990s.

“The discovery of these emissions does not close the case; it’s opening a new chapter,” said William Dunn, a researcher at the University College London and a co-author of the paper. “We still have so many questions about these emissions and their sources. We know that rotating magnetic fields can accelerate particles, but we don’t fully understand how they reach such high speeds at Jupiter. What fundamental processes naturally produce such energetic particles?”

Scientists also hope that studying Jupiter’s X-ray emissions can help them understand even more extreme objects in our universe. NuSTAR typically studies objects outside our solar system, such as exploding stars and disks of hot gas accelerated by the gravity of massive black holes.

The new study is the first example of scientists being able to compare NuSTAR observations with data taken at the source of the X-rays (by Juno). This enabled researchers to directly test their ideas about what creates these high-energy X-rays. Jupiter also shares a number of physical similarities with other magnetic objects in the universe – magnetars, neutron stars, and white dwarfs – but researchers don’t fully understand how particles are accelerated in these objects’ magnetospheres and emit high-energy radiation. By studying Jupiter, researchers may unveil details of distant sources we cannot yet visit.

Reference: “Observation and origin of non-thermal hard X-rays from Jupiter” by Kaya Mori, Charles Hailey, Gabriel Bridges, Shifra Mandel, Amani Garvin, Brian Grefenstette, William Dunn, Benjamin J. Hord, Graziella Branduardi-Raymont, John Clarke, Caitriona Jackman, Melania Nynka and Licia Ray, 10 February 2022, Nature Astronomy.
DOI: 10.1038/s41550-021-01594-8

More About the Missions

NuSTAR launched on June 13, 2012. It is a Small Explorer mission led by Caltech and managed by JPL. JPL also manages the Juno mission for the principal investigator, Scott J. Bolton of the Southwest Research Institute in San Antonio. Juno is part of NASA’s New Frontiers Program, which is managed at NASA’s Marshall Space Flight Center in Huntsville, Alabama, for the agency’s Science Mission Directorate. Lockheed Martin Space in Denver built and operates the spacecraft.