Week 5: What AI Is Actually Good At (And Why It Matters to Know the Difference)
By Pini Reznik
Jun 3, 2025

As organizations adopt AI at scale, one of the most common—and costly—mistakes is misunderstanding what large language models (LLMs) are actually good at.
Just because an AI tool generates fluent text or fast answers doesn’t mean it’s adding real value. And when teams misread its strengths, they often end up automating the wrong tasks, building tools no one trusts, or chasing speed instead of impact.


Understanding LLM Capabilities

LLMs are extremely good at:

  • Filling in structured gaps: Where there is a known format or clear context, LLMs produce usable output fast.
  • Pattern recognition: Reusing linguistic and logical patterns learned from vast training data.
  • Momentum tasks: Drafting, summarizing, translating, and rewording, especially where speed matters more than nuance.

But they still struggle with:

  • Real-time trade-offs: When a judgment call depends on values, priorities, or navigating uncertainty.
  • Ambiguous input: When key signals are missing or conflicting.
  • Accountability: When a mistake needs to be traced, explained, or defended.

This is why many AI Native teams treat LLMs not as decision-makers but as accelerators: they move ideas forward faster, but they don’t replace human reasoning.

Stanford HAI 2024 AI Index
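
To make the “accelerator, not decision-maker” split concrete, here is a minimal sketch of that pattern. It assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model name, prompt, and the send_to_stakeholders step are illustrative placeholders, not a recommended setup.

```python
# Minimal sketch: the LLM drafts, a human decides. Assumes the OpenAI Python SDK
# (pip install openai) and OPENAI_API_KEY in the environment; the model name and
# the downstream "send" step are placeholders.
from openai import OpenAI

client = OpenAI()

def draft_summary(notes: str) -> str:
    """Ask the model for a first-pass summary. This is a draft, not a decision."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Summarize these meeting notes in five bullet points."},
            {"role": "user", "content": notes},
        ],
    )
    return response.choices[0].message.content

def send_to_stakeholders(text: str) -> None:
    """Placeholder for whatever 'ship it' means in your workflow."""
    print("Sent:\n" + text)

def publish(notes: str) -> None:
    draft = draft_summary(notes)
    print(draft)
    # Accountability stays with a person: nothing ships without explicit sign-off.
    if input("Approve this draft? [y/N] ").strip().lower() == "y":
        send_to_stakeholders(draft)
```

The point is structural: the model does the momentum work, and a person keeps the sign-off.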


AI Infrastructure Is About Augmentation, Not Replacement

The most effective teams are using LLMs to reduce friction—not replace people.

  • Developers using GitHub Copilot complete coding tasks up to 55% faster in early trials.
  • Designers are using AI to test 10 versions of a concept in the time it used to take to create one.
  • Support teams are automating low-risk first responses, while keeping humans focused on cases that require empathy or judgment (a routing sketch follows below).

GitHub Productivity Study
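
As one way to picture that support pattern, here is a hedged sketch of low-risk triage: the assistant only answers tickets a simple filter marks as routine, and everything else is routed to a person. The keyword list and the Ticket shape are placeholder assumptions, not a tested policy.

```python
# Sketch of the "low-risk first response" pattern: auto-reply only when a ticket
# looks routine, escalate everything else. The risk rules here are placeholders.
from dataclasses import dataclass

HIGH_RISK_KEYWORDS = {"refund", "legal", "outage", "security", "cancel"}

@dataclass
class Ticket:
    id: str
    subject: str
    body: str

def is_low_risk(ticket: Ticket) -> bool:
    """Crude placeholder classifier; a real team would tune, learn, and audit this."""
    text = f"{ticket.subject} {ticket.body}".lower()
    return not any(keyword in text for keyword in HIGH_RISK_KEYWORDS)

def handle(ticket: Ticket) -> str:
    if is_low_risk(ticket):
        # Safe enough for the assistant to send a first response immediately.
        return "auto_reply"
    # Judgment, empathy, or accountability required: route to a human.
    return "escalate_to_human"

print(handle(Ticket("42", "Password reset", "I forgot my password")))   # auto_reply
print(handle(Ticket("43", "Refund request", "Please cancel my plan")))  # escalate_to_human
```

In practice the classifier would be far more careful than a keyword list, but the routing decision stays explicit and inspectable.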

This isn’t about automation for its own sake—it’s about strategic leverage. Where does a 30-second AI draft unlock a 3-week acceleration?


What You Should Measure Instead

Forget “efficiency” as the primary outcome. It’s the wrong metric for this kind of capability.

Measure instead:

  • Experimentation velocity: Are you trying more ideas because the cost of failure is lower?
  • Decision latency: Are people able to move forward faster—with confidence?
  • Human-AI feedback loops: Are outputs improving over time, because humans are still in the loop?
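
If you want to make these measurable, here is a minimal sketch, assuming you already log experiments and decisions with timestamps. The field names, sample data, and two-week window are illustrative.

```python
# Illustrative tracking of two of the metrics above, from simple timestamped logs.
from datetime import datetime
from statistics import mean

experiments = [
    {"started": datetime(2025, 5, 1), "shipped": True},
    {"started": datetime(2025, 5, 8), "shipped": False},
    {"started": datetime(2025, 5, 12), "shipped": True},
]

decisions = [
    {"raised": datetime(2025, 5, 2, 9, 0), "decided": datetime(2025, 5, 2, 15, 30)},
    {"raised": datetime(2025, 5, 9, 10, 0), "decided": datetime(2025, 5, 12, 11, 0)},
]

# Experimentation velocity: how many ideas you actually try per week.
weeks_observed = 2
velocity = len(experiments) / weeks_observed

# Decision latency: time from question raised to a committed answer.
latency_hours = mean(
    (d["decided"] - d["raised"]).total_seconds() / 3600 for d in decisions
)

print(f"experiments per week: {velocity:.1f}")
print(f"mean decision latency: {latency_hours:.1f} hours")
```

The absolute numbers matter less than the trend: as friction drops, both should move in the right direction.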

Teams that build for augmentation consistently see higher innovation throughput—and less resistance.

BCG AI Maturity Report


What Changes When You Get This Right

Understanding what AI is actually good at changes how you structure teams, build platforms, and measure outcomes.

You stop chasing “full automation.” You start investing in momentum systems—tools that remove friction from creation, learning, and iteration. You stop asking “How do we replace people?”
You start asking “How do we help people build better, faster, and smarter?”

This is the real ROI of AI: not fewer people, but more possibilities per person.
More ideas tried. More learning loops closed.
More impact—without burning more hours.

Teams that embrace this model are already outpacing their competitors in delivery, morale, and velocity of innovation.


About Waves of Innovation

Waves of Innovation is a weekly signal for engineering leaders navigating the shift from Cloud Native to AI Native infrastructure.

Each edition explores the technical and strategic shifts shaping tomorrow’s platforms—from AI-augmented developer tools to decision-making patterns, organizational design, and ethics. We draw from real-world experience, early field patterns, and our upcoming book.

If you care about the future of systems, and the humans building them, you’re in the right place.


Key Takeaways

  • LLMs are best at unlocking momentum, not making autonomous decisions.
  • Use AI to reduce friction and increase throughput—not to mimic expertise.
  • True transformation comes from faster cycles of learning, feedback, and delivery.
  • If you get this right, you don’t just move faster. You build a system that learns with you.