Following widespread user reports of a discernible drop in performance, Anthropic has officially addressed recent complaints about Claude Code. Complaints ranged from slower responses to poorer code quality and memory-related problems, prompting conjecture that the tool had been deliberately degraded.
In a thorough post-mortem, Anthropic confirmed that the problems were genuine but unrelated to model deterioration. Rather, the disruption was caused by three separate updates between March and April. Crucially, the issues affected only the Claude Code interface and its associated systems; the underlying AI models and API were unaffected.
The first problem was a change in “reasoning effort.” Anthropic lowered the reasoning level from high to medium to reduce response times. Although this improved speed, it reduced output depth, making replies seem less sophisticated. In response to user feedback, the company reversed the change and restored the higher reasoning level.
The second and most serious issue was a caching bug. A system intended to clear dormant session memory inadvertently erased the context of every interaction. The AI consequently forgot earlier steps, producing inconsistent reasoning, repetitive responses, and poor coding decisions. Although it significantly degraded the user experience, the bug was resolved in early April.
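The failure mode described above can be illustrated with a minimal, entirely hypothetical sketch: a cleanup routine meant to evict only dormant sessions whose staleness check is inverted, so it instead wipes the sessions that are actively in use. All names and thresholds here are assumptions for illustration, not Anthropic's actual code.

```python
import time

DORMANT_AFTER_SECONDS = 30 * 60  # hypothetical threshold: idle this long = stale


class SessionStore:
    """Toy in-memory store mapping session ids to conversation context."""

    def __init__(self):
        self._sessions = {}  # session_id -> {"context": [...], "last_used": ts}

    def append(self, session_id, message, now=None):
        now = time.time() if now is None else now
        entry = self._sessions.setdefault(
            session_id, {"context": [], "last_used": now}
        )
        entry["context"].append(message)
        entry["last_used"] = now

    def context(self, session_id):
        entry = self._sessions.get(session_id)
        return list(entry["context"]) if entry else []

    def evict_dormant_buggy(self, now):
        # Bug: the staleness comparison is inverted, so recently *active*
        # sessions are the ones whose context gets cleared.
        for entry in self._sessions.values():
            if now - entry["last_used"] < DORMANT_AFTER_SECONDS:
                entry["context"].clear()

    def evict_dormant_fixed(self, now):
        # Fix: clear only sessions that have been idle past the threshold.
        for entry in self._sessions.values():
            if now - entry["last_used"] >= DORMANT_AFTER_SECONDS:
                entry["context"].clear()


store = SessionStore()
store.append("s1", "user: refactor this function", now=1000.0)
store.evict_dormant_buggy(now=1001.0)
print(store.context("s1"))  # [] -- the active session just lost its history
```

A one-character mistake in a comparison like this is enough to produce exactly the symptoms users reported: the model appears to "forget" the conversation mid-session while the models themselves are working normally.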
The third problem stemmed from an update to the system prompts designed to reduce verbosity. This pushed the AI toward extremely terse answers, which truncated explanations and lowered code quality. Internal testing revealed a discernible decline in performance as a result of this constraint.
With the most recent version (v2.1.116+), Anthropic has fixed all three issues, restored normal functionality, and reset user usage limits. The company made clear that these were execution problems, not deliberate downgrades or cost-cutting measures.
All told, the incident shows how small system-level changes can significantly affect AI performance even when the underlying models remain robust.

