### The Story Behind `sllm.nvim`

The [`llm`](https://llm.datasette.io/en/stable/) command-line tool by Simon
Willison (co-creator of Django and creator of Datasette) is a wonderfully
extensible way to interact with Large Language Models. Its power lies in its
simplicity and vast plugin ecosystem, allowing users to tap into numerous models
directly from their terminal.

Like many developers, I found myself frequently switching to web UIs like
ChatGPT, painstakingly copying and pasting code snippets, file contents, and
error messages to provide the necessary context for the AI. This interruption
broke my workflow and felt inefficient. I was particularly inspired by Simon's
explorations in using the `llm` tool for complex tasks, and it struck me how
beneficial it would be to manage and enrich this context seamlessly within
Neovim.

`sllm.nvim` was born out of the desire to streamline this process. It is a
simple plugin (~500 lines of Lua) that delegates the heavy lifting of LLM
interaction to the robust `llm` CLI. For its user interface, it leverages the
excellent utilities from `mini.nvim`. The core focus of `sllm.nvim` is to make
context gathering and LLM interaction a native part of the Neovim experience,
eliminating the need to ever leave the editor.

### The `sllm.nvim` Philosophy: A Focused Co-pilot

The Neovim ecosystem already has excellent, feature-rich plugins like
`CodeCompanion.nvim`, `avante.nvim`, and `parrot.nvim`. So, why build another?
`sllm.nvim` isn't designed to be a replacement, but a focused alternative built
on a distinct philosophy and architecture.

Here are the key differentiators:

1. **On-the-fly Function Tools: A Game-Changer**
   This is the most significant differentiator. With `<leader>sF`, you can
   visually select a Python function in your buffer and **register it instantly
   as a tool for the LLM to use** in the current conversation. You don't need
   to pre-configure anything. This unlocks interactive development workflows:
   - **Ad-hoc Data Processing:** Have the LLM use your own function to parse a
     log file or reformat a data structure.
   - **Live Codebase Interaction:** Let the LLM use a function from your
     project to query a database or check an application's state.
   - **Ultimate Flexibility:** This workflow is impossible in a web UI and
     provides a level of dynamic integration that is unique to `sllm.nvim`.

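To make this concrete, here is the kind of small helper you might visually select and register with `<leader>sF`. The function itself is a hypothetical example, not part of the plugin; `llm` surfaces a tool function's name and docstring to the model, so a clear docstring doubles as the tool's description:

```python
def parse_log_level(line: str) -> str:
    """Return the first log level found in a log line, or 'UNKNOWN'.

    Once registered as a tool, the LLM can call this itself while
    answering questions about a pasted log file.
    """
    # Check severities from most to least urgent and report the first match.
    for level in ("ERROR", "WARN", "INFO", "DEBUG"):
        if level in line:
            return level
    return "UNKNOWN"
```

With this registered, you can ask something like "summarize the error lines in this log" and let the model invoke the function on each line rather than pasting parsing logic into the prompt.
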
2. **Radical Simplicity: It's a Wrapper, Not a Monolith**
   The fundamental difference is that `sllm.nvim` is a thin wrapper around the
   `llm` CLI. It doesn't reinvent the wheel by implementing its own API clients
   or conversation management. All heavy lifting is delegated to the `llm`
   tool, which is robust, battle-tested, and community-maintained. This keeps
   `sllm.nvim` itself incredibly lightweight, transparent, and easy to
   maintain.

3. **Instant Access to an Entire CLI Ecosystem**
   By building on `llm`, this plugin instantly inherits its vast and growing
   plugin ecosystem. This is a powerful advantage.
   - Want to access hundreds of models via OpenRouter? Just
     `llm install llm-openrouter`.
   - Need to feed a PDF manual or a GitHub repo into your context? There are
     `llm` plugins for that.

   This extensibility is managed at the `llm` level, allowing `sllm.nvim` to
   remain simple while giving you access to powerful workflows that other
   plugins would need to implement from scratch.

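Because everything runs through the `llm` CLI, you can try any of this outside the editor first. A sketch of that workflow (the model name is illustrative; check what `llm models list` reports on your machine):

```sh
# Install a provider plugin once, at the llm level -- no plugin changes needed
llm install llm-openrouter

# See which models the new plugin exposes
llm models list

# sllm.nvim's requests ultimately boil down to invocations like this
llm -m openrouter/anthropic/claude-3.5-sonnet "Explain this diff" < changes.diff
```

Anything that works at this shell level is immediately available inside Neovim, because `sllm.nvim` adds no layer of its own between you and the CLI.
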
4. **Explicit Control: You Are the Co-pilot, Not the Passenger**
   Some tools aim to create an autonomous "agent" that tries to figure things
   out for you. `sllm.nvim` firmly believes in a **"co-pilot" model where you
   are always in control.** You explicitly provide context. You decide what the
   LLM sees: the current file (`<leader>sa`), diagnostics (`<leader>sd`), the
   output of a `git diff` (`<leader>sx`), or a new function tool
   (`<leader>sF`). The plugin won't guess your intentions, ensuring a
   predictable, reliable, and secure interaction every time.