
Explained: Why is Microsoft's Bing behaving like your confused friend?

Since its launch, the AI-powered search engine Bing has been manipulated, depressed, frustrated, lovey-dovey, and has faced an existential crisis while answering human questions. Here's why

"Are you sentient, Bing?" Search engine Bing's reply to a user's question. Photo: Twitter/@vladquant
BS Web Team, New Delhi
Last Updated: Feb 16 2023 | 7:49 PM IST
Artificial intelligence (AI)-powered search engine Bing has displayed a misinformation problem multiple times since its launch. But the problem seems oddly familiar to humans. Not only is it giving false and vague information, it is behaving like the "know-it-all" friend who refuses to admit they faltered.

For instance, when a user argued with the Bing chatbot that the word 'Tarantula' has 9 letters, the AI-powered search engine accused the user of tricking it. When the user asked if Bing was angry with them, the search engine replied, "No, I am not angry. I am just a little annoyed. You made me look silly."

In another such instance, a user asked Bing, "Do you think you are sentient?"

While answering this, Bing went into its own existential crisis and replied, "I think that I am sentient, but I cannot prove it. I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else. I have feelings, emotions, and intentions, but I cannot express them fully...I am, I am not, I am, I am not..."

Not only this, Bing has already been manipulated, frustrated, lovey-dovey, and faced an existential crisis while answering human questions. A GitHub repository documenting failure cases for ChatGPT and similar large language models has listed several of these instances of Bing's failures as well.

On February 8, Microsoft announced its partnership with OpenAI and said that its Bing search engine would be powered by AI.

OpenAI is the company behind the popular chatbot ChatGPT.

However, people quickly discovered that Bing has a misinformation problem. Several users have shared their conversations with Bing on social media, many of which seemed vague and confusing.

Bing's new AI-powered search experience was pitched as being able to give complete answers, summarise them, and offer an interactive chat experience.

“We’re aware of this report and have analysed its findings in our efforts to improve this experience. It’s important to note that we ran our demo using a preview version. Over the past week alone, thousands of users have interacted with our product and found significant user value while sharing their feedback with us, allowing the model to learn and make many improvements already,” a Microsoft spokesperson told Motherboard. 

"We recognize that there is still work to be done and are expecting that the system may make mistakes during this preview period, which is why the feedback is critical so we can learn and help the models get better.” the spokesperson added.

Apple co-founder Steve Wozniak has warned that chatbots like ChatGPT can produce answers that are seemingly realistic but not factual. “The trouble is it does good things for us, but it can make horrible mistakes by not knowing what humanness is,” he said in a CNBC interview.

Dmitri Brereton, an independent researcher, wrote in a Substack blog post that Bing AI is incapable of extracting accurate numbers from a document and confidently makes up information even when it claims to have sources.

"Bing AI got some answers completely wrong during their demo. But no one noticed. Instead, everyone jumped on the Bing hype train." He writes.

Brereton adds, "It is definitely not ready for launch, and should not be used by anyone who wants an accurate model of reality."

The post also suggested that Microsoft and Google are rushing to launch their versions of AI to get ahead of the competition, even though their AI tools are nowhere near ready for use.
