LLM Context is a tool that helps developers quickly inject relevant content from code/text projects into Large Language Model chat interfaces. It leverages .gitignore
patterns for smart file selection and provides both a streamlined clipboard workflow using the command line and direct LLM integration through the Model Context Protocol (MCP).
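The gitignore-driven selection can be pictured with a short sketch. This is a simplified illustration using Python's stdlib `fnmatch`, not the tool's actual implementation (real `.gitignore` semantics include negation, anchoring, and directory-only patterns, which LLM Context handles properly):

```python
from fnmatch import fnmatch

# Illustrative patterns only; in LLM Context these come from your
# project's .gitignore and .llm-context configuration.
IGNORE_PATTERNS = ["*.pyc", ".git/*", "node_modules/*", "*.log"]

def is_ignored(path: str) -> bool:
    """Return True if the path matches any ignore pattern."""
    return any(fnmatch(path, pat) for pat in IGNORE_PATTERNS)

def select_files(paths: list[str]) -> list[str]:
    """Keep only files that are not ignored."""
    return [p for p in paths if not is_ignored(p)]

files = ["src/app.py", "src/app.pyc", "node_modules/x/index.js", "README.md"]
print(select_files(files))  # ['src/app.py', 'README.md']
```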
Note: This project was developed in collaboration with Claude-3.5-Sonnet (and more recently Grok-3), using LLM Context itself to share code during development. All code in the repository is human-curated (by me 😇, @restlessronin).
Configuration files were converted from TOML to YAML in v0.2.9. Existing users must manually migrate any customizations from their old `.llm-context/config.toml` files to the new `.llm-context/config.yaml`.
For an in-depth exploration of the reasoning behind LLM Context and its approach to AI-assisted development, check out our article: LLM Context: Harnessing Vanilla AI Chats for Development
- Direct LLM Integration: Native integration with Claude Desktop via MCP protocol
- Chat Interface Support: Works with any LLM chat interface via CLI/clipboard
- Optimized for interfaces with persistent context like Claude Projects and Custom GPTs
- Works equally well with standard chat interfaces
- Project Types: Suitable for code repositories and collections of text/markdown/html documents
- Project Size: Optimized for projects that fit within an LLM's context window. Large project support is in development
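As a rough way to gauge whether a project fits a model's context window, you can estimate tokens from character counts. The ~4 characters per token figure below is a common heuristic for English text and code, not something LLM Context computes for you:

```python
from pathlib import Path

CHARS_PER_TOKEN = 4  # rough heuristic for English text and code

def estimate_tokens(root: str, suffixes: tuple[str, ...] = (".py", ".md")) -> int:
    """Estimate the token count of all matching files under root."""
    total_chars = sum(
        len(p.read_text(errors="ignore"))
        for p in Path(root).rglob("*")
        if p.is_file() and p.suffix in suffixes
    )
    return total_chars // CHARS_PER_TOKEN

def fits_context(root: str, window: int = 200_000) -> bool:
    """Check the estimate against a model's context window size."""
    return estimate_tokens(root) <= window
```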
Install LLM Context using uv:

```shell
# Basic installation
uv tool install llm-context

# Or with code outlining support (recommended for developers)
# uv tool install "llm-context[outline]"
```
To upgrade to the latest version:

```shell
# Basic upgrade
uv tool upgrade llm-context

# Or with code outlining support
# uv tool upgrade "llm-context[outline]"
```
Warning: LLM Context is under active development. Updates may overwrite configuration files prefixed with `lc-`. We recommend keeping all configuration files under version control for this reason.
Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "CyberChitta": {
      "command": "uvx",
      "args": ["--from", "llm-context", "lc-mcp"]
    }
  }
}
```

For code outlining support, use `"args": ["--from", "llm-context[outline]", "lc-mcp"]` instead. Note that `claude_desktop_config.json` is strict JSON, so comments are not allowed in the file itself.
Once configured, you can start working with your project in two simple ways:

1. Say: "I would like to work with my project" and Claude will ask you for the project root path.
2. Or directly specify: "I would like to work with my project /path/to/your/project" and Claude will automatically load the project context.
- Navigate to your project's root directory
- Initialize the repository: `lc-init` (only needed once)
- (Optional) Edit `.llm-context/config.yaml` to customize ignore patterns
- Select files: `lc-sel-files`
- (Optional) Review selected files in `.llm-context/curr_ctx.yaml`
- Generate context: `lc-context`
- Use with your preferred interface:
  - Project Knowledge (Claude Pro): paste into the knowledge section
  - GPT Knowledge (Custom GPTs): paste into the knowledge section
  - Regular chats: run `lc-set-profile code-prompt` first to include instructions
- When the LLM requests additional files:
  - Copy the file list from the LLM
  - Run `lc-clip-files`
  - Paste the contents back to the LLM
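The round trip in the last step can be pictured with the sketch below. This is a hypothetical illustration of what a clipboard-based file-request handler does; the actual `lc-clip-files` behavior and the list format it expects may differ:

```python
from pathlib import Path

def render_requested_files(request: str, root: str = ".") -> str:
    """Given a newline-separated list of paths (as an LLM might request),
    return their contents in fenced blocks, ready to paste back.

    Hypothetical sketch only -- not LLM Context's actual implementation."""
    chunks = []
    for line in request.splitlines():
        rel = line.strip().lstrip("-* ")  # tolerate simple bullet lists
        if not rel:
            continue
        path = Path(root) / rel
        body = path.read_text() if path.is_file() else "(file not found)"
        chunks.append(f"/{rel}\n```\n{body}\n```")
    return "\n\n".join(chunks)
```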
- `lc-init`: Initialize project configuration
- `lc-set-profile <n>`: Switch profiles
- `lc-sel-files`: Select files for inclusion
- `lc-context`: Generate and copy context
- `lc-prompt`: Generate project instructions for LLMs
- `lc-clip-files`: Process LLM file requests
- `lc-changed`: List files modified since the last context generation
- `lc-outlines`: Generate outlines for code files (requires installing with the `[outline]` extra)
- `lc-clip-implementations`: Extract code implementations requested by LLMs (requires installing with the `[outline]` extra; doesn't support C/C++)
LLM Context provides advanced features for customizing how project content is captured and presented:
- Smart file selection using `.gitignore` patterns
- Multiple profiles for different use cases
- Code Navigation Features:
  - Smart Code Outlines: Allows LLMs to view the high-level structure of your codebase with automatically generated outlines highlighting important definitions (requires the `[outline]` extra)
  - Definition Implementation Extraction: Paste full implementations of specific definitions that LLMs request after reviewing the code outlines, using the `lc-clip-implementations` command
- Customizable templates and prompts
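Profiles and ignore patterns live in `.llm-context/config.yaml`. The fragment below is an illustrative placeholder only; the key names are assumptions, so consult the User Guide for the actual schema:

```yaml
# .llm-context/config.yaml -- illustrative sketch only; key names here
# are assumptions, not the tool's documented configuration format.
profiles:
  code:
    gitignores:
      full_files:
        - "*.lock"
        - "dist/"
```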
See our User Guide for detailed documentation of these features.
Check out our comprehensive list of alternatives - the sheer number of tools tackling this problem demonstrates its importance to the developer community.
LLM Context evolves from a lineage of AI-assisted development tools:
- This project succeeds LLM Code Highlighter, a TypeScript library I developed for IDE integration.
- The concept originated from my work on RubberDuck and continued with later contributions to Continue.
- LLM Code Highlighter was heavily inspired by Aider Chat. I worked with GPT-4 to translate several Aider Chat Python modules into TypeScript, maintaining functionality while restructuring the code.
- This project uses tree-sitter tag query files from Aider Chat.
- LLM Context exemplifies the power of AI-assisted development, transitioning from Python to TypeScript and back to Python with the help of GPT-4 and Claude-3.5-Sonnet.
I am grateful for the open-source community's innovations and the AI assistance, particularly that of Claude-3.5-Sonnet, that have shaped this project's evolution.
This project is licensed under the Apache License, Version 2.0. See the LICENSE file for details.