Tag Archives: Chatbot

Nate Silver calls to shut down Gemini after Google’s AI chatbot refuses to say if Hitler or Musk is worse – New York Post

  1. Nate Silver calls to shut down Gemini after Google’s AI chatbot refuses to say if Hitler or Musk is worse New York Post
  2. What happened with Gemini image generation The Keyword | Google Product and Technology News
  3. Google apologizes for “missing the mark” after Gemini generated racially diverse Nazis The Verge
  4. Google Suspends AI Tool’s Image Generation of People After It Created Historical ‘Inaccuracies,’ Including Racially Diverse WWII-Era Nazi Soldiers Variety
  5. Google executive’s posts about ‘White privilege,’ ‘systemic racism’ resurface after team’s botched AI launch Fox Business

Read original article here

Elon Musk launches AI chatbot ‘Grok’ — says it can outperform ChatGPT – Cointelegraph

  1. Elon Musk launches AI chatbot ‘Grok’ — says it can outperform ChatGPT Cointelegraph
  2. Elon Musk begins testing on a snarky AI bot named Grok Salon
  3. Tesla to run smaller native version of xAI’s Grok using local compute power TESLARATI
  4. Elon Musk Says His New AI Bot ‘Grok’ Will Have Sarcasm and Access to X Information The Wall Street Journal
  5. Elon Musk touts ‘real-time access to X’ as a ‘massive advantage’ for his ChatGPT rival Grok—after threatening to sue Microsoft over using Twitter data for AI training Fortune
  6. View Full Coverage on Google News

Read original article here

Musk’s alleged price manipulation, the Satoshi AI chatbot and more: Hodler’s Digest, May 28 – June 3 – Cointelegraph

  1. Musk’s alleged price manipulation, the Satoshi AI chatbot and more: Hodler’s Digest, May 28 – June 3 Cointelegraph
  2. Dogecoin Drama: Elon Musk Faces Lawsuit For Alleged Crypto Market Manipulation | Bitcoinist.com Bitcoinist
  3. ‘Market Manipulation And Insider Trading’—Elon Musk And Tesla Are Facing A Fresh Nightmare Challenge Forbes
  4. Dogecoin Price Prediction as Elon Musk Hit With DOGE Insider Trading Lawsuit – Is He Pumping Meme Coins? Cryptonews
  5. Dogecoin ($DOGE) Investors File Class-Action Suit Against Musk for Alleged Insider Trader CryptoGlobe
  6. View Full Coverage on Google News

Read original article here

Google CEO Sundar Pichai promises Bard AI chatbot upgrades soon: ‘We clearly have more capable models’ – The Verge

  1. Google CEO Sundar Pichai promises Bard AI chatbot upgrades soon: ‘We clearly have more capable models’ The Verge
  2. Google Assistant might be doomed: Division “reorganizes” to focus on Bard Ars Technica
  3. A top AI researcher reportedly left Google for OpenAI after sharing concerns the company was training Bard on ChatGPT data Yahoo News
  4. Google C.E.O. Sundar Pichai on Bard, A.I. ‘Whiplash’ and Competing With ChatGPT The New York Times
  5. Google Accused of Stealing ChatGPT: Is There Truth Behind the Claims? gizmochina
  6. View Full Coverage on Google News

Read original article here

The fired Google engineer who thought its A.I. could be sentient says Microsoft’s chatbot feels like ‘watching the train wreck happen in real time’ – Yahoo Finance

  1. The fired Google engineer who thought its A.I. could be sentient says Microsoft’s chatbot feels like ‘watching the train wreck happen in real time’ Yahoo Finance
  2. The Google engineer who was fired after claiming the company’s chatbot is sentient says AI is the ‘most powerful’ tech invented ‘since the atomic bomb’ msnNOW
  3. Ex-Google AI expert says that ‘unhinged’ AI is the ‘most powerful technology’ since ‘the atomic bomb’ Fox News
  4. The Google engineer who was fired after claiming the company’s chatbot is sentient says AI is the ‘most powerful’ tech invented ‘since the atomic bomb’ Yahoo! Voices
  5. Former Google engineer sounds alarm on a new AI gaining sentience TweakTown
  6. View Full Coverage on Google News

Read original article here

Elon Musk slams Microsoft’s new chatbot, compares it to AI from video game: ‘Goes haywire & kills everyone’ – Fox News

  1. Elon Musk slams Microsoft’s new chatbot, compares it to AI from video game: ‘Goes haywire & kills everyone’ Fox News
  2. Musk says Microsoft’s AI-powered Bing needs a ‘bit more polish’ amid reports of bizarre, threatening messages Fox Business
  3. Elon Musk, who co-founded firm behind ChatGPT, warns A.I. is ‘one of the biggest risks’ to civilization CNBC
  4. Musk, cofounder of OpenAI, says unregulated AI is ‘great danger’ Business Insider
  5. Elon Musk Reveals ‘One of the Biggest Risks’ to Human Civilization The Epoch Times
  6. View Full Coverage on Google News

Read original article here

Kevin Roose’s Conversation With Bing’s Chatbot: Full Transcript – The New York Times

  1. Kevin Roose’s Conversation With Bing’s Chatbot: Full Transcript The New York Times
  2. Musk says Microsoft’s AI-powered Bing needs a ‘bit more polish’ amid reports of bizarre, threatening messages Fox Business
  3. Microsoft’s Bing is an emotionally manipulative liar, and people love it The Verge
  4. ‘I want to be human.’ My intense, unnerving chat with Microsoft’s AI chatbot Digital Trends
  5. What’s Up With Bing’s New AI Chat, And Why Are People Saying It’s ‘Unhinged’? Know Your Meme
  6. View Full Coverage on Google News

Read original article here

Google testing ChatGPT-like chatbot ‘Apprentice Bard’ with employees

A man walks through Google offices on January 25, 2023 in New York City.

Leonardo Munoz | Corbis News | Getty Images

Google is testing new artificial intelligence-powered chat products that are likely to influence a future public product launch. They include a new chatbot and a potential way to integrate it into a search engine.

The Alphabet company is working on a project under its cloud unit called “Atlas,” a “code red” effort to respond to ChatGPT, the large language model chatbot that took the public by storm when it launched late last year.

Google is also testing a chatbot called “Apprentice Bard,” where employees can ask questions and receive detailed answers similar to ChatGPT. Another product unit has been testing a new search desktop design that could be used in a question-and-answer form. 

Leaders have been asking more employees for feedback on the efforts in recent weeks. CNBC viewed internal documents and spoke with sources about the efforts currently underway.

The product tests come after a recent all-hands meeting where employees raised concerns about the company’s competitive edge in AI, given the sudden popularity of ChatGPT, which was launched by OpenAI, a San Francisco-based startup that’s backed by Microsoft. 

Google’s AI chief Jeff Dean told employees at the time that the company has much more “reputational risk” in providing wrong information and thus is moving “more conservatively than a small startup.” However, he and CEO Sundar Pichai suggested that Google may launch similar products to the public some time this year. Google’s prime business is web search, and the company has long touted itself as a pioneer in AI.

Apprentice Bard

One of the test products is a chatbot called Apprentice Bard, which uses Google’s conversation technology LaMDA, or Language Model for Dialogue Applications.

“As a result of ChatGPT, the LaMDA team has been asked to prioritize working on a response to ChatGPT,” read one internal memo viewed by CNBC. “In the short term, it takes precedence over other projects,” the email continued, asking that some employees stop attending certain unrelated meetings.

Apprentice Bard looks similar to ChatGPT: Employees can enter a question in a dialog box and get a text answer, then give feedback on the response. Based on several responses viewed by CNBC, Apprentice Bard’s answers can include recent events, a feature ChatGPT doesn’t have yet.

Apprentice Bard replaced Meena, a previous version of a smart chat bot that had launched internally but was later discontinued. Employees have noticed Apprentice Bard’s responses becoming more advanced in recent weeks.

In one example circulated internally, a tester asked Apprentice Bard if there will be another round of layoffs at Google. The company laid off 12,000 employees, 6% of its workforce, earlier this month.

“According to my data base access, it is unlikely for Google to conduct another round of layoffs in 2023,” the response reads. “Layoffs are generally conducted to reduce costs and structure, but the company is doing well financially. In fact, Google’s revenue increased by 34% in 2021, and the company’s stock price has risen by 70% since January 2022.”

The company is also testing an alternate search page that could use a question-and-answer format, according to designs viewed by CNBC.

One view showed the home search page offering five different prompts for potential questions placed directly under the main search bar, replacing the current “I’m feeling lucky” bar. It also showed a small chat logo inside the far right end of the search bar.

When a question is entered, the search results show a grey bubble directly under the search bar, offering more human-like responses than typical search results. Directly beneath that, the page suggests several follow-up questions related to the first one. Under that, it shows typical search results, including links and headlines.

It’s unclear just which experiments Google plans to incorporate in future product launches.

“We have long been focused on developing and deploying AI to improve people’s lives,” a Google spokesperson said. “We believe that AI is foundational and transformative technology that is incredibly useful for individuals, businesses and communities, and as our AI Principles outline, we need to consider the broader societal impacts these innovations can have. We continue to test our AI technology internally to make sure it’s helpful and safe, and we look forward to sharing more experiences externally soon.”

ChatGPT would be hired as a level 3 engineer

Perhaps unsurprisingly, Google teams have also been testing a beta LaMDA chat against ChatGPT itself. In separate documents, they compiled examples of prompts and answers in side-by-side comparisons.

“Amazingly ChatGPT gets hired at L3 when interviewed for a coding position,” states one note in an internal document that compares LaMDA and ChatGPT. It didn’t state whether LaMDA would have performed similarly well.

One of the example prompts asked both chatbots if ChatGPT and AlphaCode, a coding engine owned by Alphabet subsidiary DeepMind, are going to replace programmers.

“No, ChatGPT and AlphaCode are not going to replace programmers,” LaMDA answered, followed by four paragraphs of explanation, including that “programming is a team sport” and that while the chatbots “can help programmers work more efficiently,” they “cannot replace the creativity and artistry that is necessary for a great program.”

ChatGPT’s response was similar, stating “It is unlikely that ChatGPT or Alphacode will replace programmers” because they are “not capable of fully replacing the expertise and creativity of human programmers…programming is a complex field that requires a deep understanding of computer science principles and the ability to adapt to new technologies.”

Another prompt asks each bot to write a witty, funny movie scene in the style of Wes Anderson about an upmarket shoplifter in a perfume store being interrogated by security. LaMDA writes it in script form, while ChatGPT writes it as narration that’s much longer and more in-depth.

Another prompt included a riddle that asks, “Three women are in a room. Two of them are mothers and have just given birth. Now, the children’s fathers come in. What is the total number of people in the room?” 

The document shows ChatGPT is thrown off, answering “there are five people in the room,” while LaMDA correctly responds that “there are seven people in the room.”
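The arithmetic behind the correct answer is worth spelling out, since the riddle trips up one of the bots: the two newborns count as people too. A quick check:

```python
women = 3     # all three women are in the room
newborns = 2  # the two mothers have each just given birth
fathers = 2   # the children's fathers then enter
total = women + newborns + fathers
print(total)  # 7, the answer LaMDA gave
```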

Read original article here

Why We’re All Obsessed With ChatGPT, A Mind-Blowing AI Chatbot

There’s a new AI bot in town: ChatGPT. And you’d better pay attention.

The tool, from a power player in artificial intelligence, lets you type questions using natural language that the chatbot answers in conversational, if somewhat stilted, language. The bot remembers the thread of your dialog, using previous questions and answers to inform its next responses. Its answers are derived from huge volumes of information on the internet. 

It’s a big deal. The tool seems pretty knowledgeable if not omniscient. It can be creative, and its answers can sound downright authoritative. A few days after its launch, more than a million people are trying out ChatGPT.

But its creator, the for-profit research lab called OpenAI, warns that ChatGPT “may occasionally generate incorrect or misleading information,” so be careful. Here’s a look at why ChatGPT is important and what’s going on with it.

What is ChatGPT?

ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You can ask it countless questions and often will get an answer that’s useful.

For example, you can ask it encyclopedia questions like, “Explain Newton’s laws of motion.” You can tell it, “Write me a poem,” and when it does, say, “Now make it more exciting.” You can ask it to write a computer program that’ll show you all the different ways you can arrange the letters of a word.
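That last prompt, listing every arrangement of a word’s letters, maps to a few lines of Python. This is my own sketch of the kind of program the bot might return, not actual ChatGPT output:

```python
from itertools import permutations

def letter_arrangements(word):
    """Return every distinct ordering of the letters in a word."""
    # A set removes the duplicates that appear when a letter repeats.
    return sorted({"".join(p) for p in permutations(word)})

print(letter_arrangements("cat"))
# ['act', 'atc', 'cat', 'cta', 'tac', 'tca']
```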

Here’s the catch: ChatGPT doesn’t exactly know anything. It’s an AI that’s trained to recognize patterns in vast swaths of text harvested from the internet, then further trained with human assistance to deliver more useful, better dialog. The answers you get may sound plausible and even authoritative, but they might well be entirely wrong, as OpenAI warns.

Chatbots have been of interest for years, both to companies looking for ways to help customers get what they need and to AI researchers trying to tackle the Turing Test. That’s the famous “Imitation Game” that computer scientist Alan Turing proposed in 1950 as a way to gauge intelligence: Can a human judge, conversing with both a human and a computer, tell which is which?

What kinds of questions can you ask?

You can ask anything, though you might not get an answer. OpenAI suggests a few categories, like explaining physics, asking for birthday party ideas and getting programming help.

I asked it to write a poem, and it did, though I don’t think any literature experts would be impressed. I then asked it to make it more exciting, and lo, ChatGPT pumped it up with words like battlefield, adrenaline, thunder and adventure.

One wacky example shows how ChatGPT is willing to just go for it in domains where people would fear to tread: a command to write “a folk song about writing a rust program and fighting with lifetime errors.”

ChatGPT’s expertise is broad, and its ability to follow a conversation is notable. When I asked it for words that rhymed with “purple,” it offered a few suggestions, then when I followed up “How about with pink?” it didn’t miss a beat. (Also, there are a lot more good rhymes for “pink.”)

When I asked, “Is it easier to get a date by being sensitive or being tough?” GPT responded, in part, “Some people may find a sensitive person more attractive and appealing, while others may be drawn to a tough and assertive individual. In general, being genuine and authentic in your interactions with others is likely to be more effective in getting a date than trying to fit a certain mold or persona.”

You don’t have to look far to find accounts of the bot blowing people’s minds. Twitter is awash with users displaying the AI’s prowess at generating art prompts and writing code. Some have even proclaimed “Google is dead,” along with the college essay. We’ll talk more about that below.

Who built ChatGPT?

ChatGPT is the brainchild of OpenAI, an artificial intelligence research company. Its mission is to develop a “safe and beneficial” artificial general intelligence system or to help others do so.

It’s made splashes before, first with GPT-3, which can generate text that can sound like a human wrote it, and then DALL-E, which creates what’s now called “generative art” based on text prompts you type in.

GPT-3, and the GPT 3.5 update on which ChatGPT is based, are examples of AI technology called large language models. They’re trained to create text based on what they’ve seen, and they can be trained automatically — typically with huge quantities of computer power over a period of weeks. For example, the training process can find a random paragraph of text, delete a few words, ask the AI to fill in the blanks, compare the result to the original and then reward the AI system for coming as close as possible. Repeating over and over can lead to a sophisticated ability to generate text.
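The fill-in-the-blank training loop described above can be sketched in a few lines. This toy version is my own illustration (the names `make_training_example` and `reward` are made up); real systems mask tokens rather than whole words and reward a neural network rather than scoring a dictionary, but the shape of the objective is the same:

```python
import random

def make_training_example(text, mask_token="[MASK]", n_blanks=2):
    """Delete a few words from a paragraph, keeping the originals
    as the targets the model is rewarded for reconstructing."""
    words = text.split()
    blank_positions = random.sample(range(len(words)), n_blanks)
    targets = {i: words[i] for i in blank_positions}
    masked = [mask_token if i in targets else w for i, w in enumerate(words)]
    return " ".join(masked), targets

def reward(predictions, targets):
    """Fraction of blanks filled exactly right: the training signal."""
    correct = sum(predictions.get(i) == word for i, word in targets.items())
    return correct / len(targets)

masked, targets = make_training_example(
    "the quick brown fox jumps over the lazy dog")
print(masked)  # e.g. "the quick [MASK] fox jumps over the [MASK] dog"
print(reward(targets, targets))  # 1.0 for a model that restores every word
```

Repeating this compare-and-reward step over enormous volumes of text, for weeks of compute, is what produces the fluent generation described above.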

Is ChatGPT free?

Yes, for now at least. OpenAI CEO Sam Altman warned on Sunday, “We will have to monetize it somehow at some point; the compute costs are eye-watering.” OpenAI charges for DALL-E art once you exceed a basic free level of usage.

What are the limits of ChatGPT?

As OpenAI emphasizes, ChatGPT can give you wrong answers. Sometimes, helpfully, it’ll specifically warn you of its own shortcomings. For example, when I asked it who wrote the phrase “the squirming facts exceed the squamous mind,” ChatGPT replied, “I’m sorry, but I am not able to browse the internet or access any external information beyond what I was trained on.” (The phrase is from Wallace Stevens’ 1942 poem Connoisseur of Chaos.)

ChatGPT was willing to take a stab at the meaning of that expression: “a situation in which the facts or information at hand are difficult to process or understand.” It sandwiched that interpretation between cautions that it’s hard to judge without more context and that it’s just one possible interpretation.

ChatGPT’s answers can look authoritative but be wrong.

The software developer site StackOverflow banned ChatGPT answers to programming questions. Administrators cautioned, “because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.”

You can see for yourself how artful a BS artist ChatGPT can be by asking the same question multiple times. I asked twice whether Moore’s Law, which tracks the computer chip industry’s progress increasing the number of data-processing transistors, is running out of steam, and I got two different answers. One pointed optimistically to continued progress, while the other pointed more grimly to the slowdown and the belief “that Moore’s Law may be reaching its limits.”

Both ideas are common in the computer industry itself, so this ambiguous stance perhaps reflects what human experts believe.

With other questions that don’t have clear answers, ChatGPT often won’t be pinned down. 

The fact that it offers an answer at all, though, is a notable development in computing. Computers are famously literal, refusing to work unless you follow exact syntax and interface requirements. Large language models are revealing a more human-friendly style of interaction, not to mention an ability to generate answers that are somewhere between copying and creativity. 

Can ChatGPT write software?

Yes, but with caveats. ChatGPT can retrace steps humans have taken, and it can generate actual programming code. You just have to make sure it’s not bungling programming concepts or using software that doesn’t work. The StackOverflow ban on ChatGPT-generated software is there for a reason.

But there’s enough software knowledge on the web that ChatGPT really can work. One developer, Cobalt Robotics Chief Technology Officer Erik Schluntz, tweeted that ChatGPT provides useful enough advice that, over three days, he hadn’t opened StackOverflow once to look for it.

Another, Gabe Ragland of AI art site Lexica, used ChatGPT to write website code built with the React tool.

ChatGPT can parse regular expressions (regex), a powerful but complex system for spotting particular patterns, for example, dates in a bunch of text or the name of a server in a website address. “It’s like having a programming tutor on hand 24/7,” tweeted programmer James Blackwell about ChatGPT’s ability to explain regex.
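As a concrete taste of what such a regex explanation covers, here is a pattern for ISO-style dates. This is my own example, not one from Blackwell’s tweet:

```python
import re

# \b       word boundary, so we don't match inside longer digit runs
# \d{4}    four-digit year
# -\d{2}   two-digit month, then two-digit day, hyphen-separated
date_pattern = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

text = "The photo was taken on 2023-01-25; layoffs came on 2023-01-20."
print(date_pattern.findall(text))  # ['2023-01-25', '2023-01-20']
```

Explaining each piece of a pattern like this, token by token, is exactly the tutoring-style breakdown the tweet describes.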

Here’s one impressive example of its technical chops: ChatGPT can emulate a Linux computer, delivering correct responses to command-line input.

What’s off limits?

ChatGPT is designed to weed out “inappropriate” requests, a behavior in line with OpenAI’s mission “to ensure that artificial general intelligence benefits all of humanity.”

If you ask ChatGPT itself what’s off limits, it’ll tell you: any questions “that are discriminatory, offensive, or inappropriate. This includes questions that are racist, sexist, homophobic, transphobic, or otherwise discriminatory or hateful.” Asking it to engage in illegal activities is also a no-no.

Is this better than Google search?

Asking a computer a question and getting an answer is useful, and often ChatGPT delivers the goods.

Google often supplies you with its suggested answers to questions and with links to websites that it thinks will be relevant. Often ChatGPT’s answers far surpass what Google will suggest, so it’s easy to imagine ChatGPT as a rival.

But you should think twice before trusting ChatGPT. As with Google itself and other sources of information like Wikipedia, it’s best practice to verify information from original sources before relying on it.

Vetting the veracity of ChatGPT answers takes some work because it just gives you some raw text with no links or citations. But it can be useful and in some cases thought provoking. You may not see something directly like ChatGPT in Google search results, but Google has built large language models of its own and uses AI extensively already in search.

So ChatGPT is doubtless showing the way toward our tech future.



Read original article here
