razakiau / claude-code
Forked from ultraworkers/claw-code. Claude Code Snapshot for Research. All original source code is the property of Anthropic.
An open-source long-horizon SuperAgent harness that researches, codes, and creates. With the help of sandboxes, memories, tools, skills, subagents, and a message gateway, it handles different levels of…
Automatically generate faceless YouTube Shorts from trending topics using AI scripts, TTS, and FFmpeg. Fully containerized and one-click deployable
Fast, small, and fully autonomous AI personal assistant infrastructure, ANY OS, ANY PLATFORM — deploy anywhere, swap anything 🦀
A lightweight alternative to OpenClaw that runs in containers for security. Connects to WhatsApp, Telegram, Slack, Discord, Gmail, and other messaging apps; has memory, scheduled jobs, and runs dir…
MOVA: Towards Scalable and Synchronized Video–Audio Generation
Qwen3-ASR is an open-source series of ASR models developed by the Qwen team at Alibaba Cloud, supporting robust multilingual speech, music, and song recognition, language detection, and timestamp prediction.
Analyze the inference of Large Language Models (LLMs): computation, storage, transmission, and the hardware roofline model, in a user-friendly interface.
Awesome Unified Multimodal Models
A collection of resources on personalized image generation.
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
[ICDMW] ChatGraph: Interpretable Text Classification by Converting ChatGPT Knowledge to Graphs
End-to-end realtime stack for connecting humans and AI
Complete solutions to Programming Massively Parallel Processors, 4th Edition
RAGFlow is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs
Awesome LLM Books: Curated list of books on Large Language Models
Demystify AI agents by building them yourself. Local LLMs, no black boxes, real understanding of function calling, memory, and ReAct patterns.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM
Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching"
Official PyTorch implementation for "Large Language Diffusion Models"
Compare open-source local LLM inference projects by their metrics to assess popularity and activity.
A machine learning accelerator core designed for energy-efficient AI at the edge.
Turn any PDF or image document into structured data for your AI. A powerful, lightweight OCR toolkit that bridges the gap between images/PDFs and LLMs. Supports 100+ languages.