
Once again, the Houston Chronicle’s Chris Tomlinson hits the nail on the head, asking, “Can someone sue AI for defamation?” (See “AI bot scooped story on a politician’s affair, but does it have the right to free speech?”) It is a question worth asking. Does simply “scouring” the internet for content and then sharing it meet the legal definition of defamation when the content turns out to be false?
AI models can generate text that falsely accuses individuals of wrongdoing. In a recent court ruling, OpenAI prevailed in a lawsuit filed in Georgia that accused the ChatGPT maker of defaming a radio host by fabricating allegations and a fictional lawsuit against him.
In the ruling, Gwinnett County Superior Court Judge Tracie Cason said OpenAI’s ChatGPT puts users on notice that it can make errors. Would the same standard apply to other media (say, the Houston Chronicle) if they included a disclaimer warning that their reporting could contain errors, and thereby escape responsibility for publishing false and damaging information?
I guess we need to remember caveat emptor (buyer beware) when using artificial intelligence.