How Do LLM-Suggested Edits Work in Text Editors


LLMs Need Context, Just Like Us

Context. I need context to understand what you're talking about.

Give me a one-line Jira ticket and I'll spend the next half hour finding context and probably pestering you for more information. Once I have that, I'll probably spend another half hour figuring out how this work fits into the project as a whole.

Without context, I write bad code.

LLMs are very much like us in this way. Ask an ambiguous question and you'll get an ambiguous answer. Or, if you're using OpenAI's deep research, you'll get a patiently worded question back asking for more information.

In a previous post, I quoted Samuel Johnson

The greatest part of a writer's time is spent in reading, in order to write: a man will turn over half a library to make one book.

This is true of LLMs, too. They just do the work much faster than we do. Now, if you can give the LLM the right context, it will give you a much better answer.

How Does Zed Give the LLM Context?

I dug into this today because I wanted to know if I could use the same ideas to write a script to make bulk edits.

Behind the scenes, Zed primes the LLM with information on how to respond. By default an LLM will respond with free-form text. But for an editor to make suggested edits, it needs a predictable, parseable response.

Zed uses the following prompt template to instruct the LLM to respond with XML in a specific format. Zed then parses that response and turns it into suggested edits.

zed/assets/prompts/suggest_edits.hbs at main · zed-industries/zed
Code at the speed of thought – Zed is a high-performance, multiplayer code editor from the creators of Atom and Tree-sitter. - zed-industries/zed

As you might know, you can also send a whole file in the chat. Which means that a single line like: /file some_file.rb Please suggest performance improvements will result in suggest_edits.hbs, some_file.rb and the text you typed getting sent to the LLM.

To paraphrase Samuel Johnson, the greatest part of an LLMs time is spent in reading, in order to write: an LLM to turn over half a code base to write one line of code.

I sure hope that you, too, spend the time to turn over half, or at lease a quarter, of the code base to understand the code the LLM is suggesting.

Without context, we write bad code.