People are less likely to believe an AI if it conveys uncertainty

When a large language model expresses doubt about the information it supplies, people are less likely to accept it as fact and more likely to find accurate information elsewhere

By Matthew Sparkes

4 June 2024

AI chatbots can answer questions convincingly, but they aren't always accurate (Image: portishead1/Getty Images)

When artificial intelligence models say they are unsure of their answers, people become warier of their output and ultimately more likely to dig out accurate information elsewhere. But since no AI model is currently capable of judging its own accuracy, some researchers question whether making AIs express doubt is a good idea.

While the large language models (LLMs) behind chatbots like ChatGPT create impressively believable outputs, it has been shown time and time again that they can simply make up facts. This misinformation is disruptive at…

