From the Lawfare blog (link to my paper added):
If somebody lies about you, you can usually sue them for defamation. But what if that somebody is ChatGPT? Already in Australia, the mayor of a town outside Melbourne has threatened to sue OpenAI because ChatGPT falsely named him a guilty party in a bribery scandal. Could that happen in America? Does our libel law allow it? What does it even mean for a large language model to act with “malice”? Does the First Amendment put any limits on the ability to hold these models, and the companies that make them, liable for false statements they make? And what’s the best way to deal with this problem: private lawsuits or government regulation?
On this episode of Arbiters of Truth, our series on the information ecosystem, Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, discussed these questions with First Amendment expert Eugene Volokh, Professor of Law at UCLA and the author of a draft paper entitled “Large Libel Models.”