Natively is a free, privacy-first AI Copilot for Google Meet, Zoom, and Teams. It serves as an open-source alternative to Cluely, providing real-time transcription, interview assistance, and automated meeting notes, all processed locally.
Unlike cloud-only tools, Natively uses Local RAG (Retrieval Augmented Generation) to remember past conversations, giving you instant answers during technical interviews, sales calls, and daily standups.
While other tools focus on being "lightweight" wrappers, Natively is a complete intelligence system.
- Local Vector Database (RAG): We embed your meetings locally so you can ask, "What did John say about the API last week?"
- Rich Dashboard: A full UI to manage, search, and export your history—not just a floating window.
- Rolling Context: We don't just transcribe; we maintain a "memory window" of the conversation for smarter answers.
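The Local RAG idea above can be sketched in a few lines of TypeScript. This is an illustrative example with hypothetical names, not Natively's actual code (the real pipeline stores embeddings in SQLite); it shows only the retrieval step, ranking locally stored meeting chunks by cosine similarity to a query embedding.

```typescript
// Hypothetical sketch of local retrieval: rank stored meeting chunks
// by cosine similarity to a query embedding. Natively's real pipeline
// persists embeddings in SQLite; this illustrates ranking only.

interface Chunk {
  meetingId: string;
  text: string;
  embedding: number[]; // produced by a local embedding model
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k chunks most similar to the query embedding.
function retrieve(query: number[], chunks: Chunk[], k = 3): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

A question like "What did John say about the API?" is embedded the same way, and the top-ranked chunks are handed to the LLM as context.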
This demo shows a complete live meeting scenario:
- Real-time transcription as the meeting happens
- Rolling context awareness across multiple speakers
- Screenshot analysis of shared slides
- Instant generation of what to say next
- Follow-up questions and concise responses
- All happening live, without recording or post-processing
Note
macOS Users:
- "Unidentified Developer": If you see this, Right-click the app > Select Open > Click Open.
- "App is Damaged": If you see this (common with DMGs), run this in Terminal:
(Or point to wherever you installed the app)
xattr -cr /Applications/Natively.app
- Expanded Speech Providers: First-class support for Google, Groq, OpenAI, Deepgram, ElevenLabs, Azure, and IBM Watson.
- Custom Key Bindings: Fully customizable global shortcuts for window actions.
- Stealth Mode 2.0: Enhanced masquerading (Terminal, Activity Monitor) and "undetectable" dock mode.
- Markdown Rendering: Improved formatting and code highlighting in the Usage View.
- Performance: Optimized image analysis with `sharp` and lower-latency interactions.
- Models: Support for Gemini 3, GPT-5.2, Groq Llama 3.3, Claude 4.5, or any other LLM provider.
- Why Natively?
- Privacy & Security
- Quick Start (End Users)
- Installation (Developers)
- AI Providers
- Key Features
- Meeting Intelligence Dashboard
- Use Cases
- Comparison
- FAQ
- Architecture Overview
- Technical Details
- Known Limitations
- Responsible Use
- Contributing
- License
Natively is a desktop AI assistant for live situations:
- Meetings
- Interviews
- Presentations
- Classes
- Professional conversations
It provides:
- Live answers
- Rolling conversational context
- Screenshot and document understanding
- Real-time speech-to-text
- Instant suggestions for what to say next
All while remaining invisible, fast, and privacy-first.
- 100% open source (AGPL-3.0)
- Bring Your Own Keys (BYOK)
- Local AI option (Ollama)
- All data stored locally
- No telemetry
- No tracking
- No hidden uploads
You explicitly control:
- What runs locally
- What uses cloud AI
- Which providers are enabled
- Node.js (v20+ recommended)
- Git
- Rust (required for native audio capture)
Natively is 100% free to use with your own keys.
Connect any speech provider and any LLM. No subscriptions, no markups, no hidden fees. All keys are stored locally.
- Google Cloud Speech-to-Text (Service Account)
- Groq (API Key)
- OpenAI Whisper (API Key)
- Deepgram (API Key)
- ElevenLabs (API Key)
- Azure Speech Services (API Key + Region)
- IBM Watson (API Key + Region)
Connect Natively to any leading model or local inference engine.
| Provider | Best For |
|---|---|
| Gemini 3 Pro/Flash | Recommended: Massive context window (2M tokens) & low cost. |
| OpenAI (GPT-5.2) | High reasoning capabilities. |
| Anthropic (Claude 4.5) | Coding & complex nuanced tasks. |
| Groq / Llama 3 | Insane speed (near-instant answers). |
| Ollama / LocalAI | 100% Offline & Private (No API keys needed). |
| OpenAI-Compatible | Connect to any custom endpoint (vLLM, LM Studio, etc.) |
Note: You only need ONE speech provider to get started. We recommend Google STT, Groq, or Deepgram for the fastest real-time performance.
Your credentials:
- Never leave your machine
- Are not logged, proxied, or stored remotely
- Are used only locally by the app
What You Need:
- Google Cloud account
- Billing enabled
- Speech-to-Text API enabled
- Service Account JSON key
Setup Summary:
- Create or select a Google Cloud project
- Enable Speech-to-Text API
- Create a Service Account
- Assign role: `roles/speech.client`
- Generate and download a JSON key
- Point Natively to the JSON file in settings
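Before pointing Natively at the file, it can help to sanity-check that the downloaded key is a complete service-account key. The sketch below is a hypothetical helper (not part of Natively); the field names follow Google's standard service-account JSON format.

```typescript
// Hypothetical helper: sanity-check a downloaded service-account key
// file before configuring it. Field names follow Google's standard
// service-account JSON layout.

interface ServiceAccountKey {
  type?: string;
  project_id?: string;
  private_key?: string;
  client_email?: string;
}

// Returns a list of problems; an empty list means the key looks usable.
function validateKey(key: ServiceAccountKey): string[] {
  const problems: string[] = [];
  if (key.type !== "service_account") problems.push("type must be 'service_account'");
  if (!key.project_id) problems.push("missing project_id");
  if (!key.private_key) problems.push("missing private_key");
  if (!key.client_email) problems.push("missing client_email");
  return problems;
}
```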
```shell
git clone https://github.com/evinjohnn/natively-cluely-ai-assistant.git
cd natively-cluely-ai-assistant
npm install
```

Create a `.env` file:
```
# Cloud AI
GEMINI_API_KEY=your_key
GROQ_API_KEY=your_key
OPENAI_API_KEY=your_key
CLAUDE_API_KEY=your_key
GOOGLE_APPLICATION_CREDENTIALS=/absolute/path/to/service-account.json

# Speech Providers (Optional - only one needed)
DEEPGRAM_API_KEY=your_key
ELEVENLABS_API_KEY=your_key
AZURE_SPEECH_KEY=your_key
AZURE_SPEECH_REGION=eastus
IBM_WATSON_API_KEY=your_key
IBM_WATSON_REGION=us-south

# Local AI (Ollama)
USE_OLLAMA=true
OLLAMA_MODEL=llama3.2
OLLAMA_URL=http://localhost:11434

# Default Model Configuration
DEFAULT_MODEL=gemini-3-flash-preview
```

Run the app in development:

```shell
npm start
```

Build distributable packages:

```shell
npm run dist
```

- Custom (BYO Endpoint): Paste any cURL command to use OpenRouter, DeepSeek, or private endpoints.
- Ollama (Local): Zero-setup detection of local models (Llama 3, Mistral, Gemma).
- Google Gemini: First-class support for Gemini 3.0 Pro/Flash.
- OpenAI: GPT-5.2 support with optimized system prompts.
- Anthropic: Claude 4.5 Sonnet support for complex reasoning.
- Groq: Ultra-fast inference with Llama 3 models.
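The "zero-setup detection" of local models works because Ollama exposes its installed models over a local HTTP API (`GET /api/tags`). The sketch below is an assumed approach for illustration, not Natively's exact code:

```typescript
// Sketch of zero-setup local model detection: Ollama lists installed
// models at GET /api/tags, so probing localhost is enough to discover
// them. Illustrative only, not Natively's actual implementation.

interface OllamaTags {
  models: { name: string }[];
}

function listModelNames(tags: OllamaTags): string[] {
  return tags.models.map((m) => m.name);
}

async function detectOllama(url = "http://localhost:11434"): Promise<string[]> {
  try {
    const res = await fetch(`${url}/api/tags`);
    if (!res.ok) return [];
    return listModelNames((await res.json()) as OllamaTags);
  } catch {
    return []; // Ollama is not running locally
  }
}
```

If the probe returns an empty list, the app simply falls back to the configured cloud providers.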
- Always-on-top translucent overlay
- Instantly hide/show with shortcuts
- Works across all applications
- Real-time speech-to-text
- Context-aware Memory (RAG) for Past Meetings
- Instant answers as questions are asked
- Smart recap and summaries
- Capture any screen content
- Analyze slides, documents, code, or problems
- Immediate explanations and solutions
- What should I answer?
- Shorten response
- Recap conversation
- Suggest follow-up questions
- Manual or voice-triggered prompts
Natively understands that listening to a meeting and talking to an AI are different tasks. We treat them separately:
- System Audio (The Meeting): Captures high-fidelity audio directly from your OS (Zoom, Teams, Meet). It "hears" what your colleagues are saying without interference from your room noise.
- Microphone Input (Your Voice): A dedicated channel for your voice commands and dictation. Toggle it instantly to ask Natively a private question without muting your meeting software.
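Because the two channels are independent, toggling one never affects the other. A minimal sketch of that routing idea (hypothetical names, greatly simplified):

```typescript
// Illustrative sketch of dual-channel routing: system audio and the
// microphone are independent streams, so muting your mic toward
// Natively never interrupts capture of the meeting itself.

type Channel = "system" | "mic";

class AudioRouter {
  // System audio is on by default; the mic is opt-in.
  private enabled: Record<Channel, boolean> = { system: true, mic: false };

  toggle(ch: Channel): boolean {
    this.enabled[ch] = !this.enabled[ch];
    return this.enabled[ch];
  }

  // Which channels the transcription loop should pull frames from.
  activeChannels(): Channel[] {
    return (Object.keys(this.enabled) as Channel[]).filter((c) => this.enabled[c]);
  }
}
```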
- Global activation shortcut
- Instant answer overlay
- Upcoming meeting readiness
- Full Offline RAG: All vector embeddings and retrieval happen locally (SQLite).
- Semantic Search: An innovative "Smart Scope" detects whether you are asking about the current meeting or a past one.
- Global Knowledge: Ask questions across all your past meetings ("What did we decide about the API last month?").
- Automatic Indexing: Meetings are automatically chunked, embedded, and indexed in the background.
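The chunking step of automatic indexing can be sketched as follows. The sizes here are illustrative placeholders, not Natively's actual parameters; the idea is that overlapping chunks let each embedding keep some surrounding context.

```typescript
// Hypothetical sketch of the background indexing step: split a
// finished transcript into overlapping chunks so each embedding
// retains surrounding context. Sizes are illustrative only.

function chunkTranscript(text: string, size = 400, overlap = 50): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break;
    start += size - overlap; // step forward, keeping `overlap` chars shared
  }
  return chunks;
}
```

Each chunk is then embedded and written to the local SQLite index in the background.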
- Undetectable Mode: Instantly hide from dock/taskbar.
- Masquerading: Disguise process names and window titles as harmless system utilities.
- Local-Only Processing: All data stays on your machine.
Natively includes a powerful, local-first meeting management system to review, search, and manage your entire conversation history.
- Meeting Archives: Access full transcripts of every past meeting, searchable by keywords or dates.
- Smart Export: One-click export of transcripts and AI summaries to Markdown, JSON, or Text—perfect for pasting into Notion, Obsidian, or Slack.
- Usage Statistics: Track your token usage and API costs in real-time. Know exactly how much you are spending on Gemini, OpenAI, or Claude.
- Audio Separation: Distinct controls for System Audio (what they say) vs. Microphone (what you dictate).
- Session Management: Rename, organize, or delete past sessions to keep your workspace clean.
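Real-time cost tracking like the above reduces to accumulating token counts per model and multiplying by a per-million-token price. The sketch below uses placeholder prices and a hypothetical model name, not current provider rates:

```typescript
// Minimal sketch of usage-cost tracking. Prices below are placeholders
// for illustration, not real provider rates.

interface Usage {
  inputTokens: number;
  outputTokens: number;
}

// USD per 1M tokens - illustrative numbers only.
const PRICES: Record<string, { input: number; output: number }> = {
  "example-flash": { input: 0.10, output: 0.40 },
};

function costUSD(model: string, u: Usage): number {
  const p = PRICES[model];
  if (!p) return 0; // unknown model: report zero rather than guess
  return (u.inputTokens * p.input + u.outputTokens * p.output) / 1_000_000;
}
```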
- Live Assistance: Get explanations for complex lecture topics in real-time.
- Translation: Instant language translation during international classes.
- Problem Solving: Immediate help with coding or mathematical problems.
- Interview Support: Context-aware prompts to help you navigate technical questions.
- Sales & Client Calls: Real-time clarification of technical specs or previous discussion points.
- Meeting Summaries: Automatically extract action items and core decisions.
- Code Insight: Explain unfamiliar blocks of code or logic on your screen.
- Debugging: Context-aware assistance for resolving logs or terminal errors.
- Architecture: Guidance on system design and integration patterns.
Natively is built on a simple promise: Any speech provider, any API key, 100% free to use, and universally compatible.
| Feature | Natively | Commercial Tools (Cluely, etc.) | Other OSS |
|---|---|---|---|
| Price | Free (BYOK) | $20 - $50 / month | Free |
| Speech Providers | Any (Google, Groq, Deepgram, etc.) | Locked to Vendor | Limited |
| LLM Choice | Any (Local or Cloud) | Locked to Vendor | Limited |
| Privacy | Local-First & Private | Data stored on servers | Depends |
| Latency | Real-Time (<500ms) | Variable | Often Slow |
| Universal Mode | Works over ANY app | Often limited to browser | No |
| Meeting History | Full Dashboard & Search | Limited | None |
| Data Export | JSON / Markdown / Text | Proprietary Format | None |
| Audio Channels | Dual (System + Mic) | Single Stream | Single Stream |
| Screenshot Analysis | Yes (Native) | Limited | Rare |
| Stealth Mode | Yes (Undetectable) | No | No |
Natively processes audio, screen context, and user input locally, maintains a rolling context window, and sends only the required prompt data to the selected AI provider (local or cloud).
No raw audio, screenshots, or transcripts are stored or transmitted unless explicitly enabled by the user.
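The rolling context window mentioned above can be sketched simply: keep only the most recent transcript turns under a size budget, so prompts stay small while remaining anchored to the live conversation. This is an assumed, simplified mechanism for illustration, not the exact implementation:

```typescript
// Sketch of a rolling context window (assumed mechanism, simplified):
// walk backwards from the newest turn and keep turns until a character
// budget is exhausted.

interface Turn {
  speaker: string;
  text: string;
}

function rollingWindow(turns: Turn[], maxChars = 2000): Turn[] {
  const kept: Turn[] = [];
  let used = 0;
  for (let i = turns.length - 1; i >= 0; i--) {
    const cost = turns[i].text.length;
    if (used + cost > maxChars) break; // budget spent: drop older turns
    kept.unshift(turns[i]);            // preserve chronological order
    used += cost;
  }
  return kept;
}
```

Only this trimmed window, plus any retrieved RAG chunks, is sent to the selected provider.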
- React, Vite, TypeScript, TailwindCSS
- Electron
- Rust (native audio)
- SQLite (local storage)
- Gemini 3 (Flash / Pro)
- OpenAI (GPT-5.2)
- Claude (Sonnet 4.5)
- Ollama (Llama, Mistral, CodeLlama)
- Groq (Llama, Mixtral)
- Minimum: 4GB RAM
- Recommended: 8GB+ RAM
- Optimal: 16GB+ RAM for local AI
Natively is intended for:
- Learning
- Productivity
- Accessibility
- Professional assistance
Users are responsible for complying with:
- Workplace policies
- Academic rules
- Local laws and regulations
This project does not encourage misuse or deception.
- Linux support is limited; maintainers are welcome
Contributions are welcome:
- Bug fixes
- Feature improvements
- Documentation
- UI/UX enhancements
- New AI integrations
Quality pull requests will be reviewed and merged.
Licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).
If you run or modify this software over a network, you must provide the full source code under the same license.
Note: This project is available for sponsorships, ads, or partnerships – perfect for companies in the AI, productivity, or developer tools space.
Star this repo if Natively helps you succeed in meetings, interviews, or presentations!
Yes. Natively is an open-source project. You only pay for what you use by bringing your own API keys (Gemini, OpenAI, Anthropic, etc.), or use it 100% free by connecting to a local Ollama instance.
Yes. Natively uses a Rust-based system audio capture that works universally across any desktop application, including Zoom, Microsoft Teams, Google Meet, Slack, and Discord.
Natively is built on Privacy-by-Design. All transcripts, vector embeddings (Local RAG), and keys are stored locally on your machine. We have no backend and collect zero telemetry.
Natively is a powerful assistant for any professional situation. However, users are responsible for complying with their company policies and interview guidelines.
Simply install Ollama, run a model (e.g., ollama run llama3), and Natively will automatically detect it. Enable "Ollama" in the AI Providers settings to switch to offline mode.

