
The problems (some of) with AI chats

RICHARD WEINER
Technology for Lawyers

Published: March 24, 2023

Some weeks, I don’t have that much to write about.
These last few weeks, it’s been hard not to write constantly about AI chatbots.
Like, I want to write about Rhode Island putting all of their public-facing government docs on the blockchain, but maybe some other time.
At any rate, Google, Microsoft and a couple of Chinese tech giants are rolling out their versions of AI chatbots. So here are a few words of warning about them.
AI chatbots like ChatGPT use large language models (LLMs) as their knowledge base. And that format brings a lot of problems of its own. Tech magazine The Verge has a list of some of them.
One: “The biggest problem for AI chatbots and search engines is bulls..it.” The machines don’t understand context, so they create lots of wrong answers that sound like they’re right. They invent biographical data. And, of course, they amplify the prejudices baked into their training, including racism and misogyny. There is no way to track all of this, and the platforms can’t just hide behind disclaimers.
Two: They offer singular answers to complex questions, often giving inaccurate, potentially fatal, advice. So basically, you can’t trust an answer that is overly simple or presented as the “one true” answer.
Three: To this point, ChatGPT (and presumably the rest of the pack) is incredibly easy to hack, not with malicious code, but just by overwhelming it with words, feeding it endless directions, or finding other ways to jailbreak it. There are some examples listed in the Verge article.
Four: User backlash against the chatbot companies if the answers don’t fit the user’s political agenda (which The Verge calls “culture wars”). There will be built-in prejudices: religious, political, and other aspects of a culture. In some repressive regimes, these could trigger police raids on company headquarters.
Five: Sourcing. Where does an LLM get all that information in the first place? Everywhere? No wonder these models aren’t accurate.
Six: Cost in computing time. It costs OpenAI about a couple of cents per chat. We’re used to free search. Will you pay for it? We’ll see; OpenAI just rolled out its paid service.
Seven: Here come the regulators.
Eight: The existential threat to the entire internet. That’s a whole other discussion.

