Interesting read!
After the AI hallucinates, it scrambles to invent an excuse for its false information. Is it safe to say that AI can be a disinformation machine? What do you all think?
Source and full article: https://www.thetimes.com/uk/media/article/ai-chatbots-hallucinate-idioms-google-gemini-rf2vfzdls

The tech giant's chatbot has been found to frequently "hallucinate" when asked to return the meaning behind made-up sayings.
Instead of pointing out that the question itself is wrong, the algorithm accepts the saying as authentic and goes on a wild goose chase in an attempt to explain its meaning and provenance.
