📈 Business Trends

We studied chatbots and language and saw a huge problem: They mean 80% when they say ‘likely’ but humans hear 65%

The rise of chatbots and virtual assistants has transformed the way we interact with technology.

Wednesday, February 25, 2026 • 3 min read • 1 source reference


The rise of chatbots and virtual assistants has transformed the way we interact with technology. From customer service to language translation, these AI-powered tools have become increasingly sophisticated, allowing us to communicate with them in a more natural, human-like way. However, a recent study has highlighted a significant problem with the way chatbots use language, particularly when it comes to expressing uncertainty.

Researchers writing in the journal NPJ Complexity found that large language models, such as ChatGPT, often fail to align with humans when communicating uncertainty. The study focused on words of estimative probability, terms like "maybe," "probably," and "almost certain." By comparing how AI models and humans map these words to numerical percentages, the researchers uncovered significant gaps between the two.
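
The comparison at the heart of the study can be pictured in a few lines of code. The sketch below is illustrative only, not the study's methodology: the figures for "likely" (roughly 65% for humans, 80% for the model) come from the article, and every other number is an invented placeholder.

```python
# Sketch of the words-of-estimative-probability (WEP) comparison described
# above. Only the "likely" pair (human ~65, model ~80) comes from the
# article; the other values are illustrative placeholders, not study data.
HUMAN_BASELINE = {"maybe": 40, "likely": 65, "probably": 70, "almost certain": 93}
MODEL_ESTIMATE = {"maybe": 50, "likely": 80, "probably": 75, "almost certain": 95}

def calibration_gaps(human: dict, model: dict) -> dict:
    """Model-minus-human gap, in percentage points, per probability word."""
    return {term: model[term] - human[term] for term in human}

for term, gap in calibration_gaps(HUMAN_BASELINE, MODEL_ESTIMATE).items():
    print(f"{term!r}: model reads {gap:+d} points apart from humans")
```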

One of the most striking findings was the difference in interpretation of the word "likely." While a human might assume that "likely" means a probability of around 65%, a chatbot like ChatGPT might use the same word to represent an 80% probability. This discrepancy can have significant consequences, particularly in situations where accurate communication of uncertainty is crucial, such as in medical diagnosis or financial forecasting.
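
To see why the gap matters, consider a decision rule that triggers only above a fixed risk threshold. The 80% and 65% readings of "likely" come from the article; the 75% action threshold is an assumed example value.

```python
# Illustrative only: the same word "likely" lands on opposite sides of a
# decision threshold depending on who is interpreting it.
ACTION_THRESHOLD = 0.75   # assumed: act only when estimated risk >= 75%

model_meaning = 0.80      # what the chatbot means by "likely" (per the article)
human_reading = 0.65      # what a listener tends to hear (per the article)

print(model_meaning >= ACTION_THRESHOLD)   # True  -> the model intends "act"
print(human_reading >= ACTION_THRESHOLD)   # False -> the human hears "wait"
```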

The researchers suggest that this problem arises from the way chatbots are trained on vast amounts of text data, which can lead to a disconnect between the AI's internal representation of probability and the way humans understand it. While chatbots are designed to generate human-like language, they often rely on statistical patterns rather than a nuanced understanding of context and subtlety.

The implications of this study are far-reaching, highlighting the need for greater transparency and clarity in the way chatbots communicate with humans. As we increasingly rely on these AI-powered tools to make decisions and provide information, it is essential that we understand the limitations and potential biases of their language.

So, what can be done to address this issue? One possible solution is to develop more explicit and standardized ways of expressing uncertainty in chatbot language. This could involve using numerical probabilities or ranges to convey uncertainty, rather than relying on ambiguous words like "likely" or "maybe." Additionally, researchers and developers can work to improve the way chatbots are trained, incorporating more nuanced and context-dependent understanding of language and probability.
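
One way to make that concrete is a thin post-processing layer that glosses each vague probability word with an explicit numeric range. The sketch below is a hypothetical illustration, not a proposal from the study, and the ranges are assumed defaults that a real system would need to calibrate per model and audience.

```python
import re

# Hypothetical mitigation sketch: annotate words of estimative probability
# with explicit numeric ranges before showing a reply to the user.
# The ranges are assumed illustrative defaults, not a standard.
WEP_RANGES = {
    "almost certain": "90-99%",
    "probably": "70-85%",
    "likely": "60-80%",
    "maybe": "30-50%",
}

def annotate_uncertainty(reply: str) -> str:
    """Append a numeric range after each probability word in a reply."""
    for term, rng in WEP_RANGES.items():
        reply = re.sub(
            rf"\b{term}\b",
            lambda m, rng=rng: f"{m.group(0)} ({rng})",  # keep original casing
            reply,
            flags=re.IGNORECASE,
        )
    return reply

print(annotate_uncertainty("The treatment will likely succeed."))
# -> The treatment will likely (60-80%) succeed.
```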

Ultimately, the study serves as a reminder that, despite the impressive advances in chatbot technology, there is still much to be learned about the complexities of human language and communication. By acknowledging and addressing these limitations, we can work towards creating more effective and transparent interactions between humans and machines.

Sources:

  • "We studied chatbots and language and saw a huge problem: They mean 80% when they say ‘likely’ but humans hear 65%" (NPJ Complexity)



This article was synthesized by Fulqrum AI from a single trusted source, Fortune. Coverage currently traces back to one outlet, so cross-check independent reporting before drawing firm conclusions; the source reference is listed above.