Google wants you to chat with its AI chatbot at your own risk

Google has already warned that early previews of its LaMDA (Language Model for Dialogue Applications) model "may display inaccurate or inappropriate content"


IANS

Google has opened its experimental artificial intelligence (AI) chatbot to the public, and you can now register to chat with the AI-driven bot trained on the company's controversial language model.

Google has already warned that early previews of its LaMDA (Language Model for Dialogue Applications) model "may display inaccurate or inappropriate content".

Google's 'AI Test Kitchen' is an app where people can learn about, experience and give feedback on the company's emerging AI technology.

"Our goal is to learn, improve and innovate responsibly on AI together. We'll be opening up to small groups of people gradually," said the company.

According to Alphabet and Google CEO Sundar Pichai, 'AI Test Kitchen' is "meant to give you a sense of what it might be like to have LaMDA in your hands".

The ability of these language models to generate infinite possibilities shows potential, "but it also means they don't always get things quite right".

"And while we've made substantial improvements in safety and accuracy in the latest version of LaMDA, we're still at the beginning of a journey," said Google.

"We've added multiple layers of protection to the AI Test Kitchen. This work has minimised the risk, but not eliminated it," it added.

Both Google and Meta (formerly Facebook) have unveiled their conversational AI chatbots, asking the public to give feedback.

The initial reports are alarming: Meta's chatbot, BlenderBot 3, said that Mark Zuckerberg is "creepy and manipulative" and that Donald Trump will always be the US president.

Meta said last week that all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks.

"BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better," the company mentioned in a blogpost.

Last month, Google fired engineer Blake Lemoine for breaching its confidentiality agreement after he claimed that the tech giant's conversational AI is "sentient" because it has feelings, emotions and subjective experiences.

Lemoine also interviewed LaMDA, which gave surprising and shocking answers.
