Why is Claude Code Different from Cursor if they Both Use Claude?

I get this question often. Here’s how the story goes:

  • Someone complains that AI is bad because it just does a bunch of stuff for you really quickly and does it wrong.
  • I ask what tools they’re using, and they say they’re using Cursor because it’s the most familiar IDE for them.
  • I ask them if they start a new chat for each feature, and the answer is usually “no” because this isn’t intuitive.
  • I suggest they try a different agent — Claude Code is my preferred tool — and they say, “It’s all Claude, though, how would that be different?”

Few developers have had time to learn these tools. They’ve been forced to adopt them as productivity enhancers without being given the time to understand how they actually work.

I have seen performance gains in my work. I shipped the first version of Uplift in a few hours! But I’ve also spent an enormous amount of time learning, training others, and trying terrible AI tools.

I work with many developers who use Cursor daily. This is good! Cursor seems to fit their workflow better. While I have a distaste for Cursor myself, others find it useful. Agents, like IDEs, are a matter of preference, after all.

Going faster means slowing down to learn your tools. Changing IDEs or using a CLI is a BIG workflow change for most people, and if Cursor is your first brush with AI, I do not blame you for thinking it’s a waste of time.

In this post, I want to demystify “It’s all Claude in the end” to help people understand why some agents are good and others are frustrating.

Defining Terms

Let’s start by defining three standard terms. I’ll use the common nomenclature in this post, but it’s important to note that these meanings are often blended in different contexts.

  1. AI: These two small letters carry the weight of “agents”, “LLMs”, “automation”, “autocomplete”, and everything else. When someone says “AI,” they mean any one of these. In this post, “AI” refers to coding agents that have some autonomy in their work.
  2. Agent: This term comes up in phrases like “agentic workflow” or “coding agent”. The agent is the engine of AI coding. The agent connects the user’s input to the local environment’s context (files, available tools) and provides it to the LLM. In this post, “agent” is a loop that takes user input, provides context to the LLM, and calls tools.
  3. LLM: We all know LLMs like Claude, ChatGPT, or Codex. “LLM” stands for “large language model”, and it’s something of a misnomer: many of these models accept multiple types of input and produce various outputs, so a more accurate name would be “large multimodal model”. The models we use for coding are autoregressive, meaning they predict the next token based on the preceding context. For consistency with vernacular usage, I’ll refer to them as LLMs.

AI: Distinguishing the Agent from the LLM

In the diagram below, the “environment” is your computer and the tools you let your agent use. If you’ve given your agent access to the find command, for example, it will show up in that list. Depending on the agent, it might also collect some project stats to send to the LLM.

When you have a CLAUDE.md or AGENTS.md file, this will get sent with the context, too.

[Sequence diagram: User → Agent → Environment → LLM (API). The agent orchestration layer manages the loop, provides tools, and maintains context. For a request like “Fix bug in auth.js”, the agent sends the context, available tools, and user request to the LLM, executes the tool the LLM chooses (view/edit/bash), appends the result to the context, and loops until the task is complete (“Fixed: added null check. Tests passing ✓”). Different agents = different tools, autonomy levels, and orchestration strategies, even with the same LLM.]

Notice that most of the work actually happens on your own machine. The agent interacts with your environment to collect information to send to the LLM, makes edits, and reports back.
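
To make that concrete, here is a rough sketch of the kind of request an agent might assemble before calling the LLM API. The shape is illustrative only; it is not any provider’s actual schema, and the tool names are hypothetical.

```python
# Illustrative sketch only: not a real provider schema, tool names are made up.
from pathlib import Path

def build_request(user_message: str, history: list[dict]) -> dict:
    # Project instruction files ride along with every request.
    project_instructions = ""
    for name in ("CLAUDE.md", "AGENTS.md"):
        if Path(name).exists():
            project_instructions += "\n" + Path(name).read_text()

    return {
        "system": "You are a coding agent..." + project_instructions,
        "tools": [  # whatever the agent has decided to expose
            {"name": "view_file", "description": "Read a file from the project"},
            {"name": "edit_file", "description": "Apply an edit to a file"},
            {"name": "bash", "description": "Run a shell command, e.g. find"},
        ],
        "messages": history + [{"role": "user", "content": user_message}],
    }
```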

An agent is not intelligent. This is really important to understand. An agent without an LLM is just a loop with conditionals, much like the code we write every day.

Adding an LLM into the loop gives the agent the ability to reason and make decisions beyond pattern matching.
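
Stripped of product polish, that loop fits in a few lines. This is a minimal sketch, assuming a hypothetical call_llm helper and a dict of tool functions; real agents add permissions, retries, and context management on top.

```python
# Minimal agent loop sketch. `call_llm` and `tools` are assumed stand-ins,
# not a real API. The agent is the loop and the conditionals; the LLM
# supplies the decisions.
def run_agent(user_message: str, call_llm, tools: dict) -> str:
    history = [{"role": "user", "content": user_message}]
    while True:
        reply = call_llm(history)           # the LLM reasons over the whole history
        history.append(reply)
        if reply.get("tool_call") is None:  # no tool requested: the task is done
            return reply["content"]
        name, args = reply["tool_call"]     # e.g. ("view_file", {"path": "auth.js"})
        result = tools[name](**args)        # the agent, not the LLM, runs the tool
        history.append({"role": "tool", "content": result})
```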

Every agentic coding company will write its agent differently. The tools available, the actions the agent takes, and the system prompts it sends to the LLM are the product the company builds.

Cursor is different from Claude Code because Cursor’s agent is different, even if they both use a Claude model to reason and make decisions.

The LLM Reads the Entire Conversation Every Time

The LLM is an outside actor. If you are using Cursor or Claude Code, every message you send and every turn of the agent loop involves an API call to the LLM provider you are using. If you are running a model locally, the model is still separate from your agent system.

Every time the agent calls the LLM API, it is the first time the LLM has ever seen your message. The LLM reads THE WHOLE MESSAGE THREAD EVERY TIME.

That’s right. Do you have a long-running conversation where you’ve worked on three or four different tasks? When you ask the agent a question, the LLM reads the whole message thread and can easily confuse instructions you previously gave it with your current task.
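
To put it in code terms, the request body is rebuilt from the full history on every turn, so the instructions from an old task are still sitting in front of the model. A toy illustration with made-up message counts:

```python
# Toy illustration: every API call re-sends the full message list.
history = [{"role": "system", "content": "You are a coding agent..."}]

def send(history: list[dict], user_message: str) -> None:
    history.append({"role": "user", "content": user_message})
    # The provider receives the WHOLE list, earlier tasks included.
    print(f"API call with {len(history)} messages")

send(history, "Add OAuth login")                            # API call with 2 messages
history += [{"role": "assistant", "content": "..."}] * 20   # tool calls, edits, retries
send(history, "Now fix the date formatting bug")            # API call with 23 messages
```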

In my experience, this is the most common reason agentic coding starts as highly accurate and quickly degrades into chaos.

How do you fix this? Start a new chat. It’s that simple. Start a new chat for every task you work on.

Warp has a great feature in its agent that detects a change in subject and suggests starting a new chat. I wish other agents would do that too.

This distinction between agents matters here, too. The agent’s capabilities and system prompt affect how the context fills up. Cursor tends to be very ambitious, which means the context window can quickly become bloated with failed attempts and misdirection.

More cautious agents will confirm actions with the user and detect when the conversation is drifting, keeping the context focused on the task at hand.
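
A cautious agent is the same loop with a guard in it. As a sketch (the tool names and helper are hypothetical), a confirmation gate might look like this:

```python
# Hypothetical confirmation gate: tool names and categories are made up,
# but this is the shape of "confirm actions with the user".
DESTRUCTIVE = {"edit_file", "bash", "delete_file"}

def execute_with_confirmation(name: str, args: dict, tools: dict) -> str:
    if name in DESTRUCTIVE:
        answer = input(f"Agent wants to run {name} with {args}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "User declined the tool call."
    return tools[name](**args)
```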

No agent is perfect, which is why I’ve explored using the Jujutsu VCS to give the agent memory between sessions and narrow the context the LLM receives on each turn.

[Sequence diagram: Agent ↔ LLM (API). Each API call sends the ENTIRE context, and the context grows with every message and tool result: [System prompt, User: “Fix bug”] → [System, User, Assistant, Tool: file contents] → [System, User, Asst, Tool, Asst, Tool: edit result] → “Fixed! Tests passing”.]

Key Principles

The breakdown below illustrates the capabilities of a coding agent. The Agent Loop is the central capability; it is the part that each company builds (Claude Code, Cursor, Zed, Warp, etc.).

The LLM is separate from the agent loop. Many different agents can use the same LLM, and most agents let you pick which LLM to interact with.
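
That separation is why very different agents can sit on top of the same model. As a toy sketch (none of this reflects how Cursor or Claude Code is actually built), two agents sharing one LLM client differ only in their prompts and tools:

```python
# Toy sketch: two different "agents" share one LLM client. Nothing here
# reflects how any real product is built.
class Agent:
    def __init__(self, system_prompt: str, tools: dict, llm_client):
        self.system_prompt = system_prompt  # the product's voice and rules
        self.tools = tools                  # what it is allowed to do
        self.llm = llm_client               # the shared, swappable part

shared_llm = object()  # imagine a client for the same Claude model
careful = Agent("Confirm before editing files.", {"view_file": ...}, shared_llm)
ambitious = Agent("Complete the task end to end.",
                  {"view_file": ..., "edit_file": ..., "bash": ...}, shared_llm)
```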

Agent Loop (Stateful Orchestration)

  • Maintains state and history
  • Executes tools
  • Manages the context window
  • Calls the LLM

LLM (Stateless Reasoning)

  • Processes the full context each turn
  • Selects tools and plans actions
  • No memory between calls

Conclusion

Try lots of agents! I know this can feel slow and frustrating, but when you do, you are learning how different agents work and which new tools you can use in your job.

Take 15 minutes each day to build a small project with an agent. Pick something simple, like a to-do list app. Build the same app with different agents and see how they perform.

In the future, I think we will standardize on a handful of coding agents that developers can pick to work with their preferred workflow, much as we have standardized on a handful of IDEs for different use cases.

I prefer Claude Code for its built-in protections and customization options.

If you want an agent integrated into your IDE, you can run /ide to hook Claude Code up to VSCode. It’s not as deeply integrated as Cursor is, but it works okay. Another good option is to try out Zed; its agent has similar capabilities to Claude Code and is built into the IDE.

I’ve tried Warp a few times, and its deep integration of chat, CLI, and editing is promising, but sometimes confusing.

In the end, Vim is my home row, so the setup that works for me is Claude Code in one pane and Vim in the other.