- Amazon
- Bellevue, WA
- msamogh.github.io
Starred repositories
- A Claude Code skill for deliberate skill development during AI-assisted coding
- 🕵️ OSINT tools for gathering information and performing forensics 🕵️
- Official implementation of the paper "Chain-of-Experts: When LLMs Meet Complex Operations Research Problems"
- Generative Agents: Interactive Simulacra of Human Behavior
- A generative agent implementation for LLaMA-based models, derived from langchain's implementation.
- Minimal reproduction of DeepSeek R1-Zero
- A list of free LLM inference resources accessible via API.
- DialOp: Decision-oriented dialogue environments for collaborative language agents
- Enhance LLM problem-solving with REAP: Reflection, Explicit Problem Deconstruction, and Advanced Prompting. This repo includes the REAP prompt framework, code for reproducing experiments, datasets,…
- A clean implementation based on AlphaZero for any game in any framework + tutorial + Othello/Gobang/TicTacToe/Connect4 and more
- Code for the paper "G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment"
- DSPy: The framework for programming—not prompting—language models
- Best practices and tips & tricks for writing scientific papers in LaTeX, with figures generated in Python or Matlab.
- Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
- Contrastive Learning Reduces Hallucination in Conversations
- Implementation for the paper "A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation"
- Cleaned-up version of the PlotMachines code
- A practical and feature-rich paraphrasing framework to augment human intents in text form to build robust NLU models for conversational engines. Created by Prithiviraj Damodaran. Open to pull reque…
- High-speed download of LLaMA, Facebook's 65B parameter GPT model
- An extensible benchmark for evaluating large language models on planning
- The official tool for creating proceedings for conferences of the Association for Computational Linguistics (ACL).
- Running large language models on a single GPU for throughput-oriented scenarios.