Why Your AI Assistant Gets Confused: The "Lost in Conversation" Problem

Dated: June 6, 2025

Have you ever noticed that a conversation with an AI starts out great, but as it goes on, the AI seems to lose its way? It might start giving you long, rambling answers, making weird assumptions, or completely forgetting what you asked for three messages ago.

A May 2025 study from researchers at Microsoft and Salesforce finally put a name to this problem: getting "Lost in Conversation." Their research shows that even today's smartest AI models struggle with the basic back-and-forth of a normal human conversation.

The Core Problem: Multi-Turn Meltdown

The study found a massive gap between how an AI performs when you give it one complete instruction up front versus when you reveal the same information piece by piece across multiple turns.

  • Accuracy Drops: On average, an AI’s ability to get the job done drops by 39% the moment a conversation moves into a back-and-forth exchange.
  • Unreliability Skyrockets: While the AI is still "smart," it becomes incredibly inconsistent. The researchers found that "unreliability"—the gap between the best and worst outcomes—increases by 112% in these conversations.
  • The "Wrong Turn" Rule: When an AI takes a wrong turn in a conversation, it gets lost and does not recover.

Why the AI Gets Lost

The researchers identified four specific habits that cause AI assistants to fail when talking to humans:

  1. Jumping to Conclusions: AI models often try to give you a final answer too early, making assumptions before you’ve finished explaining what you need.
  2. "Answer Bloat": As the chat continues, the AI's answers tend to get much longer, sometimes 300% longer than necessary. These bloated answers are usually lower quality and padded with unnecessary fluff.
  3. Forgetting the Middle: In a long list of instructions, the AI tends to remember the very first and last things said, but it often ignores the "middle" turns of the conversation.
  4. Stubbornness: If the AI makes an incorrect guess early on, it often clings to that mistake even after you give it new, correct information.

The "Thinking" AI Trap

You might think that newer "reasoning" models—AI designed to "think" before they speak—would be better at this. Surprisingly, the study found they aren't. Because these models are programmed to be more thorough, they actually produce the longest and most "bloated" answers, which makes them even more likely to get confused.


Solving the Problem: Join the MyCoachingTree Beta

At MyCoachingTree, we aren't just watching this research; we are building the solution. We are currently beta testing a new AI tool engineered specifically for the sports industry to overcome these exact "Lost in Conversation" hurdles.

By training our tool on specialized, non-public industry data and focusing on high-reliability multi-turn interactions, we aim to provide the consistent support that professional coaches deserve.

Ready to experience a more reliable AI that's just for coaches?

Join the MyCoachingTree Beta Test Today!

DOWNLOAD THE MICROSOFT AND SALESFORCE STUDY