llama
Here are 38 public repositories matching this topic...
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: text, audio, video, and image generation, voice cloning, and distributed P2P inference.
Updated Mar 10, 2025 - Go
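Because this project advertises an OpenAI-compatible API, an existing OpenAI client can usually be pointed at the local server unchanged. Below is a minimal, hypothetical Go sketch using only the standard library; the port 8080, the /v1/chat/completions path, and the model name are assumptions and may differ in your deployment.

// Hypothetical client: POST a chat request to a locally running,
// OpenAI-compatible server (port, path, and model name are assumptions).
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Request body follows the OpenAI chat completions schema.
	body, _ := json.Marshal(map[string]any{
		"model": "llama-3.2-1b", // placeholder model name
		"messages": []map[string]string{
			{"role": "user", "content": "Say hello in one sentence."},
		},
	})

	// Assumes the server listens on localhost:8080; adjust to your setup.
	resp, err := http.Post("http://localhost:8080/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print the raw JSON response for inspection.
	var out map[string]any
	json.NewDecoder(resp.Body).Decode(&out)
	fmt.Println(out)
}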
AI Cloud OS: ⚡️An open-source RAG knowledge database with admin UI, user management, and single sign-on⚡️. Supports ChatGPT, Claude, DeepSeek R1, Llama, Gemini, HuggingFace, and more. Chatbot demo: https://ai.casibase.com, admin UI demo: https://ai-admin.casibase.com
Updated Mar 10, 2025 - Go
ChatGPT CLI is a versatile tool for interacting with LLMs through OpenAI and Azure, as well as models from Perplexity AI and Llama. It supports prompts and history tracking for seamless, context-aware interactions. With extensive configuration options, it's designed for both users and developers to create a customized GPT experience.
Updated Feb 17, 2025 - Go
🧬 Helix is a private GenAI stack for building AI applications with declarative pipelines, knowledge (RAG), API bindings, and first-class testing.
Updated Mar 10, 2025 - Go
🏗️ Fine-tune, build, and deploy open-source LLMs easily!
Updated Mar 10, 2025 - Go
A holistic way of understanding how Llama and its components run in practice, with code and detailed documentation.
Updated Aug 20, 2024 - Go
LLaMA-2 in native Go
Updated Nov 30, 2024 - Go
🚢 Yet another operator for running large language models on Kubernetes with ease. Powered by Ollama! 🐫
Updated Mar 10, 2025 - Go
Inference Hub for AI at Scale
Updated Mar 9, 2025 - Go
MaK(Mac+Kubernetes)llama - Running and orchestrating large language models (LLMs) on Kubernetes with macOS nodes.
Updated May 22, 2024 - Go