
Discuss: When AI is wrong, AI just makes something up to defend itself.

Interesting read!

The tech giant’s chatbot has been found to frequently “hallucinate” when asked to return the meaning behind made-up sayings.

Instead of pointing out that the question itself is wrong, the algorithm accepts the saying as authentic and goes on a wild goose chase in an attempt to explain its meaning and provenance.
Source and full article: https://www.thetimes.com/uk/media/article/ai-chatbots-hallucinate-idioms-google-gemini-rf2vfzdls

After AI hallucinates, it scrambles to invent an excuse for its false information. Is it safe to say that AI can be a disinformation machine? What do you all think?
 
Have you been keeping an eye on those large language models? They make obvious mistakes, and when you correct them, they give you excuses like "that's what I found on the web." But it's normal; we can't expect 100% accuracy from a bot. Be sure to check well before you take AI suggestions hook, line, and sinker.
Exactly. The models are fed information by the user. Then you'll correct it and it'll be right.

Most of the time, AI gets its information and sources from sites like Wikipedia and forums. AI systems are starting to act like search engines: they have crawlers and bots that scan the net to build their indexes.
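For anyone curious, here's roughly what that crawl-and-index loop looks like. This is a toy Python sketch, not any vendor's actual pipeline; the seed URL, the page limit, and all the names are made up for illustration:

```python
# Toy crawl-and-index loop (hypothetical; real search/AI pipelines add
# robots.txt handling, politeness delays, deduplication, ranking, etc.).
from collections import defaultdict
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkAndTextParser(HTMLParser):
    """Collects hyperlinks and visible text from one HTML page."""

    def __init__(self):
        super().__init__()
        self.links, self.text = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.text.append(data)


def crawl(seed_url, max_pages=5):
    """Breadth-first crawl that builds a tiny inverted index: word -> set of URLs."""
    index = defaultdict(set)
    queue, seen = [seed_url], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
        except Exception:
            continue  # dead link or network error: skip it, don't crash
        parser = LinkAndTextParser()
        parser.feed(html)
        for word in " ".join(parser.text).lower().split():
            index[word].add(url)
        queue.extend(urljoin(url, link) for link in parser.links)
    return index


# Example: crawl("https://example.com")["domain"] would list every crawled
# page containing the word "domain".
```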

Large language models are also shaped by their users and can be customized by them. It's like a website: you can debug it.
 
Then you'll correct it and it'll be right.
On the flip side, what if everybody corrected √4 (square root of 4) to be equal to 1?

Would it eventually relearn the rule as "divide by 4" and spit out √12 = 3? 🤔

I'm sure you couldn't fool it on mathematics, but AI seems flawed if enough people can tell it that it's wrong: little by little, the history of it being right could be erased.
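Here's a toy sketch of that exact worry (made-up numbers, and obviously not how a real LLM is trained). Fit a model on nothing but the bad "corrections" and it learns divide-by-four instead of the square root:

```python
# If every "correction" in the feedback says sqrt(x) = x / 4, a model fit to
# that feedback learns the bogus rule and happily reports sqrt(12) = 3.
corrupted_feedback = [(4, 1.0), (8, 2.0), (16, 4.0), (20, 5.0)]  # (x, claimed sqrt)

# Least-squares fit of the one-parameter model sqrt(x) ~ a * x,
# which works out to a = sum(x*y) / sum(x*x).
a = sum(x * y for x, y in corrupted_feedback) / sum(x * x for x, _ in corrupted_feedback)

print(f"learned rule:  sqrt(x) = {a:.2f} * x")      # sqrt(x) = 0.25 * x
print(f"model answer:  sqrt(12) = {a * 12:.1f}")    # sqrt(12) = 3.0
print(f"actual value:  sqrt(12) = {12 ** 0.5:.2f}") # sqrt(12) = 3.46
```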
 
It has been proven times without number that AI is susceptible to mistakes. No serious person should assume it is 100% perfect.
That's probably why it started citing its sources, some of which are Wikipedia, which isn't to be fully trusted either. I would expect AI to follow the Wikipedia footnote, determine from the cited source whether the edit was credible, possibly fix the Wiki article if it was wrong, and update the LLM with the correct information.
 
That's probably why it started citing its sources, some of which are Wikipedia, which isn't to be fully trusted either. I would expect AI to follow the Wikipedia footnote, determine from the cited source whether the edit was credible, possibly fix the Wiki article if it was wrong, and update the LLM with the correct information.
AI probably can't be bothered to verify information from third-party sources before dishing it out.
 
AI probably can't be bothered to verify information from third-party sources before dishing it out.
There's also the problem of Wikipedia sources being dead links, now that the site has been around for so long. So I think it's safe to say they take the Wiki edit as gold, even though the cited source has no material to reference anymore.
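Checking whether a citation still resolves is the easy part, for what it's worth. Here's a hypothetical Python sketch (the URLs below are placeholders) of the verification step the thread suggests AI systems skip:

```python
# Hypothetical dead-link check for cited sources; a HEAD request is enough
# to see whether a citation still resolves. The URLs here are placeholders.
from urllib.error import URLError
from urllib.request import Request, urlopen


def citation_is_live(url, timeout=5):
    """Return True if the cited URL still answers with a non-error status."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as response:
            return response.status < 400
    except (URLError, OSError, ValueError):
        return False  # dead link, DNS failure, malformed URL, or timeout


for cited in ["https://example.com/still-up", "https://example.com/gone-404"]:
    print(cited, "->", "live" if citation_is_live(cited) else "dead")
```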
 
AI probably can't be bothered to verify information from third-party sources before dishing it out.
It doesn't, since it pulls its sources from everywhere. The only thing that can verify the information is someone correcting it. It's just like a doctor: they can be wrong sometimes too.
 