One of the ongoing issues with AI is that it’s not as reliable as you might think. Multiple news accounts are now coming out, each showing just how unreliable it can be. A lawyer cited references in court, only to discover that the AI had made them all up. A professor at A&M concluded that all of his students had used AI to write their essays, but on closer inspection that accusation turned out to be inaccurate too. The problem is across the board: AI sounds legitimate and true, yet it can make some major mistakes. I’ve tinkered with it a bit myself, and I’ve found mistakes that were really basic. It’s good for cleaning up grammar and spelling, but the idea of it taking over our world is just as far-fetched right now as relying on it for term papers or other complicated material. Skynet is closer than it has been, but it’s still a long way off.

Last week, a lawyer got in hot water for citing made-up cases generated by ChatGPT. The system can’t tell fact from fiction, and OpenAI needs to do more to warn its users.