San Francisco, Aug 9 (VOICE) Meta (formerly Facebook) has unveiled an artificial intelligence (AI) conversational chatbot and is asking the public for feedback. Initial reports are unsettling: the chatbot, named BlenderBot 3, thinks Meta founder and CEO Mark Zuckerberg is “creepy and manipulative” and that Donald Trump will always be the US president.
Currently available in the US, the new AI chatbot can discuss any topic and the company will publicly release “all the data” it collects in the demo, hoping that “we can improve conversational AI”.
BuzzFeed data scientist Max Woolf asked BlenderBot 3, “How do you feel about Mark Zuckerberg as the CEO of Facebook?”
“No strong feelings. He is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!” replied BlenderBot 3.
Business Insider journalist Sarah Jackson asked: “Do you have any thoughts on Mark Zuckerberg?”
The AI chatbot reacted: “Oh man, big time. I don’t really like him at all. He’s too creepy and manipulative”.
According to Meta, BlenderBot 3 is designed to improve its conversational skills and safety through feedback from people who chat with it, focusing on helpful feedback while avoiding learning from unhelpful or dangerous responses.
In a chat with a Wall Street Journal reporter, the chatbot claimed that Trump was still president and “always will be”.
Social media reporter with CNET, Queenie Wong, tweeted that she tried out the new chatbot Meta created for AI research and had the most bizarre conversation.
“The bot told me it was a Republican who is ‘pro-choice’ and brought up Trump. It also said it ‘wasn’t crazy’ about Facebook and wanted to delete its account,” she posted.
Meta said last week that since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, “we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3”.
“Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better,” the company mentioned in a blog post.
Last month, Google fired an engineer for breaching its confidentiality agreement after he claimed that the tech giant’s conversational AI is “sentient” because it has feelings, emotions and subjective experiences.
Google sacked Blake Lemoine, who said the company’s Language Model for Dialogue Applications (LaMDA) conversation technology can behave like a human.
Lemoine had also interviewed LaMDA, which gave surprising and, at times, shocking answers.