This project demonstrates the advantages of Parlant's structured approach over traditional monolithic LLM prompts for building conversational agents.
Terminal 1 - Start the server:

```shell
uv run parlant_agent_server.py
```

Terminal 2 - Run the comparison:

```shell
uv run demo_comparison.py
```

The demo tests 5 realistic scenarios:
- Policy replacement with critical warnings
- Coverage calculation with specific parameters
- Health condition impact assessment
- Mixed topics with boundary maintenance
- Decision making with conflicting rules
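To illustrate the contrast the demo draws, here is a minimal, hypothetical sketch (not this project's actual code, and not the Parlant API) of the two styles being compared: one monolithic system prompt carrying every rule at once, versus discrete condition/action guidelines that are matched per message. The keyword matcher below is a deliberate stand-in for Parlant's much richer guideline matching.

```python
# Hypothetical illustration only — names and rules are invented for this sketch.

# Traditional approach: one monolithic system prompt carrying every rule,
# sent in full on every turn regardless of relevance.
MONOLITHIC_PROMPT = """You are an insurance assistant.
- Warn the customer before replacing a policy.
- Only discuss insurance-related topics.
- When rules conflict, ask a clarifying question instead of guessing."""

# Structured approach: each rule is a separate condition/action guideline,
# so only the relevant ones are activated for a given message.
GUIDELINES = [
    {"condition": "customer asks to replace a policy",
     "action": "warn about losing accrued benefits before proceeding"},
    {"condition": "customer raises an off-topic subject",
     "action": "politely steer the conversation back to insurance"},
    {"condition": "two applicable rules conflict",
     "action": "ask a clarifying question instead of guessing"},
]

def matched_guidelines(message: str) -> list[dict]:
    """Naive keyword matcher standing in for real guideline matching."""
    keywords = {"replace": 0, "weather": 1}
    return [GUIDELINES[i] for word, i in keywords.items()
            if word in message.lower()]
```

The point of the structured style is that only matched guidelines reach the model's context, which keeps prompts small and makes each behavior individually testable and auditable.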
```
parlant-conversational-agent/
├── parlant_agent_server.py   # Parlant agent with tools & guidelines
├── demo_comparison.py        # Main comparison demo runner
├── traditional_llm_prompt.py # Monolithic prompt approach
├── parlant_client_utils.py   # Parlant API client utilities
├── rich_table_formatter.py   # Console table rendering
└── pyproject.toml            # Project dependencies (uv)
```
Prerequisites:
- Python 3.10+ (required for Parlant)
- `uv` package manager
- An OpenAI API key in a `.env` file

Install dependencies:

```shell
uv sync
```
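The `.env` file should hold the OpenAI key; the variable name below is assumed from the standard OpenAI SDK convention, and the key value is a placeholder:

```shell
# .env — variable name assumed per the OpenAI SDK convention
OPENAI_API_KEY=sk-...
```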
Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
