Just over two years ago, following the launch of ChatGPT, SCOTUSblog decided to test how accurate the much-hyped AI really was — at least when it came to Supreme Court-related questions. The conclusion? Its performance was “uninspiring”: precise, accurate, and at times surprisingly human-like text appeared alongside errors and outright fabricated facts. Of the 50 questions posed, the AI answered only 21 correctly.
Now, more than two years later, as ever more advanced models continue to emerge, I’ve revisited …