This week brought a number of major news outlets reporting that ChatGPT isn’t as reliable as we would like. They portrayed some of its failures, in legal briefs, college papers, and more, as tragically flawed, and professed surprise at the findings. Here at CTR, I noted my own concern early on: it’s fine for simple corrections to your written work, but not for something you’ll submit without checking. This, of course, reinforces something I’ve been saying… that AI is more artificial than intelligent, and we need to be careful in a number of areas. One fun takeaway, however, was the comparison to a hallucination, where the AI genuinely believes it’s saying the right thing. That, I think, is a rather fair analogy.
Spend enough time with ChatGPT and other artificial intelligence chatbots and it doesn’t take long for them to spout falsehoods.