Tag Archives: Tool

AMD may have a new platform for upcoming Ryzen CPUs — AM5+ socket and Granite Ridge CPUs listed in a microcode extraction tool – Tom’s Hardware

  1. AMD may have a new platform for upcoming Ryzen CPUs — AM5+ socket and Granite Ridge CPUs listed in a microcode extraction tool Tom’s Hardware
  2. The launch of AMD’s Zen 5 processors is close, as motherboard manufacturers begin rolling out BIOSes supporting the next-gen chips PC Gamer
  3. ASUS X670 AGESA 1.1.7.0 update enables initial support for Zen5 “Granite Ridge” CPU series VideoCardz.com
  4. AMD Zen 5 Architecture to Introduce Enhanced 512-bit Floating Point Unit guru3d.com
  5. AMD AM5+ Platform Mentioned With Two Granite Ridge “Ryzen Zen 5” CPUs In Microcode Extraction Tool Wccftech

Read original article here

AMD AM5+ Platform Mentioned With Two Granite Ridge “Ryzen Zen 5” CPUs In Microcode Extraction Tool – Wccftech

  1. AMD AM5+ Platform Mentioned With Two Granite Ridge “Ryzen Zen 5” CPUs In Microcode Extraction Tool Wccftech
  2. AMD may have a new platform for upcoming Ryzen CPUs — AM5+ socket and Granite Ridge CPUs listed in a microcode extraction tool Tom’s Hardware
  3. The launch of AMD’s Zen 5 processors is close, as motherboard manufacturers begin rolling out BIOSes supporting the next-gen chips PC Gamer
  4. ASUS X670 AGESA 1.1.7.0 update enables initial support for Zen5 “Granite Ridge” CPU series VideoCardz.com
  5. AMD Zen 5 Architecture to Introduce Enhanced 512-bit Floating Point Unit guru3d.com

Read original article here

Google pauses ‘absurdly woke’ Gemini AI chatbot’s image tool after backlash over historically inaccurate pictures – New York Post

  1. Google pauses ‘absurdly woke’ Gemini AI chatbot’s image tool after backlash over historically inaccurate pictures New York Post
  2. Google halts AI tool’s ability to produce images of people after backlash over historically inaccurate depictions of race CNN
  3. Google pauses Gemini AI for racially inaccurate images of historical figures UPI News
  4. Google to pause Gemini image generation after AI refuses to show images of White people Fox Business
  5. Google apologizes for “missing the mark” after Gemini generated racially diverse Nazis The Verge

Read original article here

Wharton professor says ‘things that took me weeks to master in my PhD’ take ‘seconds’ with new ChatGPT tool – Fortune

  1. Wharton professor says ‘things that took me weeks to master in my PhD’ take ‘seconds’ with new ChatGPT tool Fortune
  2. Code Interpreter comes to all ChatGPT Plus users — ‘anyone can be a data analyst now’ VentureBeat
  3. ChatGPT Code Interpreter: 15 Impressive Real-world Scenarios | by SM Raiyyan | Jul, 2023 Medium
  4. New ChatGPT Code Interpreter Can Prove to Flat Earthers That the Earth Is Round Decrypt
  5. Top Tech News Today: ChatGPT Plus Users Can Soon Enjoy Code Interpreter Feature. Blockchain opportunities emerge for Wall Street despite crypto woes Analytics Insight

Read original article here

New, non-invasive imaging tool maps uterine contractions during labor – National Institutes of Health (.gov)

  1. New, non-invasive imaging tool maps uterine contractions during labor National Institutes of Health (.gov)
  2. Imaging tech produces real-time 3D maps of uterine contractions during labor Washington University School of Medicine in St. Louis
  3. Noninvasive electromyometrial imaging of human uterine maturation during term labor Nature.com
  4. New imaging tool creates real-time, 3D images and maps of contractions during labor News-Medical.Net
  5. Imaging tech produces real-time 3D maps of uterine contractions during labor Medical Xpress

Read original article here

Google AI Tool Creates Music from Written Descriptions

This week, Google researchers published a paper describing results from an artificial intelligence (AI) tool built to create music.

The tool, called MusicLM, is not the first AI music tool to launch. But the examples Google provides demonstrate musical creative ability based on a limited set of descriptive words.

AI shows how complex computer systems have been trained to behave in human-like ways.

Tools like ChatGPT can quickly produce, or generate, written documents that compare well with the work by humans. ChatGPT and similar systems require powerful computers to operate complex machine-learning models. The San Francisco-based company OpenAI launched ChatGPT late last year.

Developers train such systems on huge amounts of data to learn methods for recreating different forms of content. For example, computer-generated content could include written material, design elements, art or music.

ChatGPT has recently received a lot of attention for its ability to generate complex writings and other content from just a simple description in natural language.

Google’s MusicLM

Google engineers explain the MusicLM system this way:

First, a user comes up with a word or words that describe the kind of music they want the tool to create.

For example, a user could enter this short phrase into the system: “a continuous calming violin backed by a soft guitar sound.” The descriptions entered can include different music styles, instruments or other existing sounds.

Several different music examples produced by MusicLM were published online. Some of the generated music came from just one- or two-word descriptions, such as “jazz,” “rock” or “techno.” The system created other examples from more detailed descriptions containing whole sentences.
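The prompting pattern described above can be sketched in code. The helper below simply assembles descriptor words into a free-form sentence of the kind MusicLM accepts; the function and its parameters are invented for illustration, since the MusicLM models themselves are not public:

```python
def build_prompt(mood=None, styles=None, instruments=None, extra=""):
    """Assemble a free-form music description from a few descriptor words.

    Illustrative only: MusicLM takes plain natural-language text, so this
    helper just concatenates descriptors into one readable sentence.
    """
    parts = []
    if mood:
        parts.append(f"a {mood} piece")
    if styles:
        parts.append("in the style of " + " and ".join(styles))
    if instruments:
        parts.append("featuring " + ", ".join(instruments))
    prompt = " ".join(parts)
    if extra:
        prompt = f"{prompt}. {extra}" if prompt else extra
    return prompt

# A one-word description works, as do richer ones:
print(build_prompt(styles=["jazz"]))
print(build_prompt(mood="calming", instruments=["continuous violin", "soft guitar"]))
```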

In one example, Google researchers include these instructions to MusicLM: “The main soundtrack of an arcade game. It is fast-paced and upbeat, with a catchy electric guitar riff. The music is repetitive and easy to remember, but with unexpected sounds…”

In the resulting recording, the music seems to keep very close to the description. The team said that the more detailed the description is, the better the system can attempt to produce it.

The MusicLM model operates similarly to the machine-learning systems used by ChatGPT. Such tools can produce human-like results because they are trained on huge amounts of data. Many different materials are fed into the systems to permit them to learn complex skills to create realistic works.

In addition to generating new music from written descriptions, the team said the system can also create examples based on a person’s own singing, humming, whistling or playing an instrument.

The researchers said the tool “produces high-quality music…over several minutes, while being faithful to the text conditioning signal.”

At this time, the Google team has not released the MusicLM models for public use. This differs from ChatGPT, which was made available online for users to experiment with in November.

However, Google announced it was releasing a “high-quality dataset,” called MusicCaps, of more than 5,500 music-text pairs prepared by professional musicians. The researchers took that step to assist in the development of other AI music generators.

The MusicLM researchers said they believe they have designed a new tool to help anyone quickly and easily create high-quality music selections. However, the team said it also recognizes some risks linked to the machine learning process.

One of the biggest issues the researchers identified was “biases present in the training data.” A bias might be including too much of one side and not enough of the other. The researchers said this raises a question “about appropriateness for music generation for cultures underrepresented in the training data.”

The team said it plans to continue to study any system results that could be considered cultural appropriation. The goal would be to limit biases through more development and testing.

In addition, the researchers said they plan to keep improving the system to include lyrics generation, text conditioning and better voice and music quality.

I’m Bryan Lynn.

Bryan Lynn wrote this story for VOA Learning English, based on reports from Google.


____________________________________________________________

Words in This Story

artificial intelligence – n. the development of computer systems that have the ability to perform work that normally requires human intelligence

style – n. a particular form or design, usually used in comparing forms of art or handiwork

instruction – n. a description of how to do something

arcade – n. an area containing many electronic and other coin-operated games

upbeat – adj. full of hope and happiness

repetitive – adj. saying or doing something over and over again

hum – v. to make a musical sound without opening your mouth

whistle – v. to make a high sound by forcing air through a small hole in the mouth

faithful – adj. staying firm about an idea or belief

appropriateness – n. the level to which something is right for a situation

cultural appropriation – n. when members of a culture in a society, often the main culture, use a practice of another, often minority, culture, without fully understanding the meaning or importance of the practice

______________________________________________________________


Read original article here


ChatGPT maker OpenAI launches tool to detect text written by AI

Sam Altman, CEO of OpenAI, walks from lunch during the Allen & Company Sun Valley Conference on July 6, 2022, in Sun Valley, Idaho.

Artificial intelligence research startup OpenAI on Tuesday introduced a tool that’s designed to figure out if text is human-generated or written by a computer.

The release comes two months after OpenAI captured the public’s attention when it introduced ChatGPT, a chatbot that generates text that might seem to have been written by a person in response to a person’s prompt. Following the wave of attention, last week Microsoft announced a multibillion-dollar investment in OpenAI and said it would incorporate the startup’s AI models into its products for consumers and businesses.

Schools were quick to limit ChatGPT’s use over concerns the software could hurt learning. Sam Altman, OpenAI’s CEO, has said education adapted in the past when new technology, such as calculators, emerged, but he also said there could be ways for the company to help teachers spot text written by AI.

OpenAI’s new tool can make mistakes and is a work in progress, company employees Jan Hendrik Kirchner, Lama Ahmad, Scott Aaronson and Jan Leike wrote in a blog post, noting that OpenAI would like feedback on the classifier from parents and teachers.

“In our evaluations on a ‘challenge set’ of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time (false positives),” the OpenAI employees wrote.
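Those two numbers alone don’t say how trustworthy a single “likely AI-written” flag is; that depends on what fraction of the texts being checked are actually AI-written. A short back-of-the-envelope calculation (the base rates below are assumptions, not OpenAI figures):

```python
# OpenAI's reported rates: the classifier flags 26% of AI-written text
# (true positives) and 9% of human-written text (false positives).
TPR = 0.26
FPR = 0.09

def precision(base_rate):
    """Of the texts the classifier flags, what fraction is really AI-written?"""
    flagged_ai = TPR * base_rate
    flagged_human = FPR * (1 - base_rate)
    return flagged_ai / (flagged_ai + flagged_human)

print(round(precision(0.5), 3))  # if half the texts are AI-written: 0.743
print(round(precision(0.1), 3))  # if only one in ten is AI-written: 0.243
```

In other words, when AI-written text is rare among the inputs, most flags would be false alarms, which is consistent with OpenAI’s caution that the tool can make mistakes.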

This isn’t the first effort to figure out if text came from a machine. Princeton University student Edward Tian earlier this month announced a tool called GPTZero, noting on the tool’s website that it was made for educators. OpenAI itself issued a detector in 2019 alongside a large language model, or LLM, that’s less sophisticated than what’s at the core of ChatGPT. The new version is better prepared to handle text from recent AI systems, the employees wrote.

The new tool is unreliable on inputs of fewer than 1,000 characters, and OpenAI doesn’t recommend using it on languages other than English. In addition, AI-generated text can be edited slightly to keep the classifier from recognizing that it’s not mainly the work of a human, the employees wrote.
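Those limits suggest a simple pre-check before trusting any classification. The snippet below encodes the two stated constraints; the helper itself is an illustrative sketch, not part of OpenAI’s tooling:

```python
MIN_CHARS = 1000  # OpenAI says the tool is weak below this input length

def classifier_applicable(text: str, language: str = "en") -> bool:
    """Return True only when the detector's stated preconditions hold."""
    return language == "en" and len(text) >= MIN_CHARS

print(classifier_applicable("a short snippet"))  # False: under 1,000 characters
print(classifier_applicable("x" * 1500))         # True
print(classifier_applicable("x" * 1500, "fr"))   # False: not English
```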

Even back in 2019, OpenAI made clear that identifying synthetic text is no easy task. It intends to keep pursuing the challenge.

“Our work on the detection of AI-generated text will continue, and we hope to share improved methods in the future,” Hendrik Kirchner, Ahmad, Aaronson and Leike wrote.


Read original article here

A Novel, Powerful Tool to Unveil the Communication Between Gut Microbes and the Brain

Summary: Researchers develop a novel tool that allows for the study of the communication of microbes in the gastrointestinal tract and the brain.

Source: Baylor College of Medicine

In the past decade, researchers have begun to appreciate the importance of a two-way communication that occurs between microbes in the gastrointestinal tract and the brain, known as the gut–brain axis.

These “conversations” can modify how these organs work and involve a complex network of microbe- and brain-derived chemical signals that are challenging for scientists to decouple in order to gain an understanding.

“Currently, it is difficult to determine which microbial species drive specific brain alterations in a living organism,” said first author, Dr. Thomas D. Horvath, instructor of pathology and immunology at Baylor College of Medicine and Texas Children’s Hospital.

“Here we present a valuable tool that enables investigations into connections between gut microbes and the brain. Our laboratory protocol allows for the identification and comprehensive evaluation of metabolites – compounds microbes produce – at the cellular and whole-animal levels.”

The gastrointestinal tract harbors a rich, diverse community of beneficial microorganisms collectively known as the gut microbiota. In addition to their roles in maintaining the intestinal environment, gut microbes are increasingly being recognized for their influence on other distant organs, including the brain.

“Gut microbes can communicate with the brain through several routes, for example by producing metabolites, such as short-chain fatty acids and peptidoglycans, neurotransmitters, such as gamma-aminobutyric acid and histamine, and compounds that modulate the immune system as well as others,” said co-first author Dr. Melinda A. Engevik, assistant professor of regenerative and cellular medicine at the Medical University of South Carolina.

The role microbes play in the health of the central nervous system is highlighted by the links between the gut microbiome and anxiety, obesity, autism, schizophrenia, Parkinson’s disease and Alzheimer’s disease.

“Animal models have been paramount in linking microbes to these fundamental neural processes,” said co-author Dr. Jennifer K. Spinler, assistant professor of pathology and immunology at Baylor and the Texas Children’s Hospital Microbiome Center.

 “The protocol in the current study enables researchers to take steps toward unraveling the specific involvement of the gut-brain axis in these conditions, as well as its role in health.”

A road map to understand the complex traffic system in the gut-brain axis

One strategy the researchers used to gain insight into how a single type of microbe can influence the gut and the brain consisted of growing the microbes in the lab first, collecting the metabolites they produced and analyzing them using mass spectrometry and metabolomics.

Mass spectrometry is a laboratory technique that can be used to identify unknown compounds by determining their molecular weight and to quantify known compounds. Metabolomics is a technique for the large-scale study of metabolites.


“The effect of metabolites was then studied in mini-guts, a laboratory model of human intestinal cells that retains properties of the small intestine and is physiologically active,” Engevik said. “In addition, the microbe’s metabolites can be studied in live animals.”

“We can expand our study to a community of microbes,” Spinler said.

“In this way we investigate how microbial communities work together, synergize and influence the host. This protocol gives researchers a road map to understand the complex traffic system between the gut and the brain and its effects.”

“We were able to create this protocol thanks to large interdisciplinary collaborations involving clinicians, behavioral scientists, microbiologists, molecular biology scientists and metabolomics experts,” Horvath said.

“We hope that our approach will help to create designer communities of beneficial microbes that may contribute to the maintenance of a healthy body. Our protocol also offers a way to identify potential solutions when miscommunication between the gut and the brain leads to disease.”

Read all the details of this work in Nature Protocols.

Other contributors to this work included Sigmund J. Haidacher, Berkley Luck, Wenly Ruan, Faith Ihekweazu, Meghna Bajaj, Kathleen M. Hoch, Numan Oezguen, James Versalovic and Anthony M. Haag. The authors are affiliated with one or more of the following institutions: Baylor College of Medicine, Texas Children’s Hospital and Alcorn State University.

Funding: This study was supported by an NIH K01 K12319501 grant and Global Probiotic Council 2019-19319, grants from the National Institute of Diabetes and Digestive and Kidney Diseases (Grant P30-DK-56338 to Texas Medical Center Digestive Disease Center, Gastrointestinal Experimental Model Systems), NIH U01CA170930 grant and unrestricted research support from BioGaia AB (Stockholm, Sweden).


About this gut-brain axis research news

Author: Homa Shalchi
Source: Baylor College of Medicine
Contact: Homa Shalchi – Baylor College of Medicine

Original Research: Closed access.
“Interrogation of the mammalian gut–brain axis using LC–MS/MS-based targeted metabolomics with in vitro bacterial and organoid cultures and in vivo gnotobiotic mouse models” by Thomas D. Horvath et al. Nature Protocols


Abstract

Interrogation of the mammalian gut–brain axis using LC–MS/MS-based targeted metabolomics with in vitro bacterial and organoid cultures and in vivo gnotobiotic mouse models

Interest in the communication between the gastrointestinal tract and central nervous system, known as the gut–brain axis, has prompted the development of quantitative analytical platforms to analyze microbe- and host-derived signals.

This protocol enables investigations into connections between microbial colonization and intestinal and brain neurotransmitters and contains strategies for the comprehensive evaluation of metabolites in in vitro (organoids) and in vivo mouse model systems.

Here we present an optimized workflow that includes procedures for preparing these gut–brain axis model systems: (stage 1) growth of microbes in defined media; (stage 2) microinjection of intestinal organoids; and (stage 3) generation of animal models including germ-free (no microbes), specific-pathogen-free (complete gut microbiota) and specific-pathogen-free re-conventionalized (germ-free mice associated with a complete gut microbiota from a specific-pathogen-free mouse), and Bifidobacterium dentium and Bacteroides ovatus mono-associated mice (germ-free mice colonized with a single gut microbe).

We describe targeted liquid chromatography–tandem mass spectrometry-based metabolomics methods for analyzing microbially derived short-chain fatty acids and neurotransmitters from these samples.

Unlike other protocols that commonly examine only stool samples, this protocol includes bacterial cultures, organoid cultures and in vivo samples, in addition to monitoring the metabolite content of stool samples. The incorporation of three experimental models (microbes, organoids and animals) enhances the impact of this protocol.

The protocol requires 3 weeks of murine colonization with microbes and ~1–2 weeks for liquid chromatography–tandem mass spectrometry-based instrumental and quantitative analysis, and sample post-processing and normalization.

Read original article here

Google Lens replaces the camera tool in Google Translate

Back in September, Google previewed a new AR Translate feature for Lens that takes advantage of the technology behind the Pixel’s Magic Eraser. Ahead of that launch, Google Translate has replaced its built-in translation camera with Google Lens.

Besides visual search that has various shopping, object, and landmark identification use cases, Google Lens is good at lifting text for real-world copy and paste. That “Text” capability goes hand-in-hand with the “Translate” filter that can overlay your translation over the foreign text in the scene to better preserve context. This can also work offline if you download the language pack ahead of time.

The Google Translate mobile apps have long offered a camera tool that was last revamped in 2019 with auto-detect and support for more languages. The Android app’s broader Material You redesign modernized the UI last year. 

Given the overlap between the camera tools, Google is now replacing the native Translate capability with Google Lens’ filter. Tapping the camera in both Translate mobile apps now opens a Lens UI.

On Android, this launches the system-level capability, while the iOS app now has an instance of Lens built in. When launching from Google Translate, you only have access to the “Translate” filter and cannot switch to any other Lens capabilities. At the top, you can manually change languages, turn on flash, and “Show original text,” while you can import existing images/screenshots on your device from the bottom-left corner.


This change is already widely rolled out in Google Translate for Android and iOS.

This consolidation makes sense, and comes ahead of AR Translate, which features “major advancements in AI.” The current approach overlays converted text on top of the image using “color blocks” to mask what’s being replaced. 

Going forward, Google Lens will swap out the original text outright by leveraging the Pixel’s Magic Eraser technology, which can easily remove distractions in images. Additionally, translated text will match the original style. Coming later this year, AR Translate works in 100 milliseconds on both screenshots and live in the Google Lens camera.


Read original article here