"Among Us" meets Large Language Models. An autonomous social deduction simulation where AI agents with distinct psychological profiles lie, bluff, and deduce to eliminate the imposter among them.
Imposter Protocol is an experiment in AI group dynamics and deceptive reasoning. It simulates a game where 5 autonomous agents communicate in natural language to solve a word-association puzzle.
One agent is the Imposter, who does not know the secret word. The others are Innocents. Through a series of word association rounds, free-form discussions, and voting phases, the agents must deduce identities based on subtle semantic clues and behavioral analysis.
This project demonstrates the power of LangGraph for orchestrating complex state machines and LLMs (via Groq) for generating consistent, personality-driven roleplay.
- 🧠 Distinct Personalities: Agents are not generic. They are initialized with randomized trait vectors (Aggression, Paranoia, Chaos, Wit) that dictate their speech patterns and appetite for strategic risk.
- 🗣️ Natural Language Gameplay: Agents debate, accuse, defend, and lie using human-like conversation, powered by gpt-oss-120B (via Groq).
- 📉 Dynamic Stress System: Agents track their own stress levels. If an agent is accused too often, their stress spikes, causing them to stutter, panic, or make irrational decisions.
- 🕸️ Social Graph Tracking: Every agent maintains a memory of relationships, tracking trust scores and suspicion reasons for every other player.
- ⚡ Dual Interfaces:
  - Web Console: A cyberpunk-themed Flask dashboard for visual observation.
  - CLI Mode: A Matrix-style terminal interface for lightweight execution.
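The randomized trait vectors described above could be initialized along these lines. This is a hypothetical sketch: the `Personality` class, its field names, and the prompt format are illustrative, not taken from the project's source.

```python
import random
from dataclasses import dataclass


@dataclass
class Personality:
    # Hypothetical representation: each trait is a float in [0.0, 1.0].
    aggression: float
    paranoia: float
    chaos: float
    wit: float

    @classmethod
    def randomized(cls, rng: random.Random) -> "Personality":
        # Draw all four traits independently from a uniform distribution.
        return cls(*(round(rng.random(), 2) for _ in range(4)))

    def prompt_fragment(self) -> str:
        # Traits are folded into the agent's system prompt so the LLM
        # roleplays consistently with its assigned disposition.
        return (f"Aggression={self.aggression}, Paranoia={self.paranoia}, "
                f"Chaos={self.chaos}, Wit={self.wit}")


persona = Personality.randomized(random.Random(7))
```

Passing a seeded `random.Random` makes a simulation run reproducible, which is useful when comparing agent behavior across games.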
- Word Round: Agents attempt to prove their innocence by providing a hint related to the secret word without giving it away to the Imposter.
- Discussion: Agents analyze the previous round, cross-reference their trust scores, and debate aggressively.
- Voting: The democratic process of elimination. Agents cast votes based on their internal reasoning chains.
The simulation is built on a State Graph architecture using LangGraph.
- State Management: The game state (rounds, history, agent memories) is passed immutably between nodes.
- The Loop:
  1. Setup: Generates secret words, roles, and personalities.
  2. Word Round: Agents generate hints while checking against forbidden words.
  3. Discussion: Agents analyze history and update their Social Graph (Trust/Suspicion).
  4. Voting: Agents make a final decision based on accumulated suspicion.
  5. Last Chance: If caught, the Imposter attempts a zero-shot guess of the secret word.
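Stripped of the LangGraph machinery, the phase loop above behaves roughly like the following dependency-free sketch. Each phase takes the game state and returns the updated state plus the name of the next phase; the "caught" rule here is a toy placeholder for the real voting logic.

```python
from typing import Callable, Optional


def word_round(state: dict) -> tuple[dict, str]:
    # Each pass through the loop is one round of hints.
    state["round"] += 1
    return state, "discussion"


def discussion(state: dict) -> tuple[dict, str]:
    # In the real system, agents update trust/suspicion scores here.
    return state, "voting"


def voting(state: dict) -> tuple[dict, Optional[str]]:
    # Toy rule: the Imposter is "caught" once max_rounds is reached.
    caught = state["round"] >= state["max_rounds"]
    return state, ("last_chance" if caught else "word_round")


def last_chance(state: dict) -> tuple[dict, None]:
    # The caught Imposter gets one zero-shot guess at the secret word.
    state["done"] = True
    return state, None


PHASES: dict[str, Callable] = {
    "word_round": word_round,
    "discussion": discussion,
    "voting": voting,
    "last_chance": last_chance,
}


def run(max_rounds: int = 3) -> dict:
    state = {"round": 0, "max_rounds": max_rounds, "done": False}
    phase: Optional[str] = "word_round"
    while phase:
        state, phase = PHASES[phase](state)
    return state
```

In the actual project these phases are LangGraph nodes and the `voting` branch is a conditional edge, but the control flow is the same.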
To ensure game stability, all LLM interactions use Pydantic models to enforce structured JSON outputs (e.g., DiscussionOutput, VoteOutput), preventing the simulation from breaking due to hallucinated formats.
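A structured output like `VoteOutput` might look as follows. The field names here are illustrative assumptions, not the project's actual schema; the point is that Pydantic rejects malformed or out-of-range LLM output instead of letting it corrupt the game state.

```python
from pydantic import BaseModel, Field


class VoteOutput(BaseModel):
    # Hypothetical schema; field names are illustrative.
    target: str = Field(description="Name of the agent to eliminate")
    reasoning: str = Field(description="Why this agent is suspicious")
    confidence: float = Field(ge=0.0, le=1.0)


# Parse a raw JSON string as returned by the LLM; validation errors
# (missing fields, confidence outside [0, 1]) raise immediately.
raw = '{"target": "Agent_3", "reasoning": "Hint was off-topic", "confidence": 0.8}'
vote = VoteOutput.model_validate_json(raw)
```

On a `ValidationError`, the simulation can re-prompt the model rather than crash mid-game.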
- Python 3.11+
- A Groq API Key
1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/imposter-protocol.git
   cd imposter-protocol
   ```

2. Initialize and sync dependencies. Using uv, this command creates the virtual environment and installs all dependencies instantly:

   ```bash
   uv sync
   ```

3. Configure the environment. Create a `.env` file in the root directory:

   ```
   GROQ_API_KEY=gsk_your_key_here
   ```
Run the web console:

```bash
python app.py
```

Then access the console at http://127.0.0.1:5000.
Or run the terminal interface:

```bash
python cli.py
```

You can tweak the "Soul" of the simulation in the Web UI or `config.py`:
- Chaos: Increases randomness in decision making.
- Aggression: Makes agents more likely to attack early.
- Paranoia: Agents trust no one, even with good evidence.
- Model Switching: The system automatically falls back to different models if API limits are hit.
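The model-switching fallback could be implemented along these lines. This is a minimal sketch: the model names, `RateLimitError` class, and `clients` mapping are stand-ins for whatever the project actually uses with the Groq SDK.

```python
# Hypothetical fallback chain; model identifiers are illustrative.
FALLBACK_MODELS = ["openai/gpt-oss-120b", "llama-3.3-70b-versatile"]


class RateLimitError(Exception):
    """Stand-in for the provider's 429 error."""


def call_with_fallback(prompt: str, clients: dict) -> str:
    # Try each model in order, moving on whenever a call is rate-limited.
    last_err = None
    for model in FALLBACK_MODELS:
        try:
            return clients[model](prompt)
        except RateLimitError as err:
            last_err = err
    raise RuntimeError("All fallback models exhausted") from last_err


def rate_limited(prompt: str) -> str:
    # Simulates the primary model hitting its API limit.
    raise RateLimitError("429 Too Many Requests")


clients = {
    "openai/gpt-oss-120b": rate_limited,
    "llama-3.3-70b-versatile": lambda p: f"reply to {p!r}",
}
```

Because agents call the LLM on every phase of every round, a silent fallback keeps a game running instead of aborting on the first rate limit.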
- Backend: Python, Flask
- AI Logic: LangChain, LangGraph
- LLM Provider: Groq (gpt-oss-120B)
- Frontend: HTML5, CSS3 (Custom Cyberpunk Design), Jinja2
- Validation: Pydantic
- Human Participation: Allow a human player to join the lobby via the UI.
- Voice Synthesis: Use TTS to let agents speak their lines.
- Reinforcement Learning: Allow agents to learn from each game.
Distributed under the MIT License. See LICENSE for more information.
Built by Redtius as an exploration into Multi-Agent Systems.





