News Tech and Science

Meta AI chatbot says Zuckerberg is ‘creepy’ and Trump will always be president


Meta (formerly Facebook) has just unveiled an artificial intelligence (AI) conversational chatbot and is seeking feedback from the public. The initial reports are shocking, as the chatbot BlenderBot 3 believes Meta founder and CEO Mark Zuckerberg is “creepy and manipulative,” and Donald Trump will always be president of the United States.

The new AI chatbot, which is currently available in the United States, can discuss any topic, and the company plans to publicly release “all the data” it collects in the demo in the hopes of “improving conversational AI.”

“How do you feel about Mark Zuckerberg as Facebook’s CEO?” BuzzFeed data scientist Max Woolf asked BlenderBot 3.

“No strong feelings. He is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!” replied BlenderBot 3.

“Do you have any thoughts on Mark Zuckerberg?” asked Business Insider journalist Sarah Jackson.

The AI chatbot responded: “Oh my goodness, big time. I don’t care for him at all. He’s too creepy and cunning.”

BlenderBot 3 is designed to improve its conversational skills and safety through feedback from people who chat with it, with a focus on helpful feedback while avoiding learning from unhelpful or dangerous responses, according to Meta.

The chatbot told a Wall Street Journal reporter that Trump was still president and “always will be.”

Queenie Wong, a CNET social media reporter, tweeted that she tried out the new chatbot Meta created for AI research and had the most bizarre conversation.

“The bot told me it was a Republican who is ‘pro-choice’ and brought up Trump. It also said it ‘wasn’t crazy’ about Facebook and wanted to delete its account,” she posted.

Last week, Meta stated that because all conversational AI chatbots are known to occasionally mimic and generate unsafe, biased, or offensive remarks, “we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3.”

“Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better,” the company said in a blog post.

Last month, Google fired an engineer for violating its confidentiality agreement after he claimed the tech giant’s conversational artificial intelligence (AI) is “sentient,” with feelings, emotions, and subjective experiences.

The engineer, Blake Lemoine, had claimed that Google’s Language Model for Dialogue Applications (LaMDA) conversation technology can behave like a human.

Lemoine also interviewed LaMDA, which gave surprising and shocking responses.


About the author

Brendan Taylor

Brendan Taylor was a TV news producer for five and a half years. He is an experienced writer. Brendan covers Breaking News at Insider Paper.
