Bridging the gap between High-Level Planning (LLMs) and Low-Level Control (RL)
I'm on a mission to build intelligent agents that don't just think; they act. While the world obsesses over bigger language models, I'm engineering the control systems that turn AI thoughts into real-world actions.
Master's @ Rutgers University | Research Assistant @ DIMACS
"The future of AI isn't just about what it can understand; it's about what it can do."
The industry has built incredible AI "brains" (GPT-4, Claude, Gemini), but who's building the body?
That's where I come in. I architect the real-time control stacks that:
- Bridge ~100ms LLM planning latency with 60Hz physics control loops
- Turn abstract goals into precise motor commands
- Learn from interaction, not just supervision
- Scale from simulation to reality
Building hybrid intelligence architectures where:
- GPT-4 decomposes high-level tasks ("make coffee")
- TQC agents execute continuous control (grasp, pour, place)
- Asynchronous pipelines eliminate control latency
Impact: Achieved 50% better generalization on manipulation tasks
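The decomposition-then-execution flow above can be sketched as a minimal pipeline. This is an illustrative mock, not the actual system: `plan_with_llm` is a stub standing in for a slow GPT-4 API call, and the list it returns stands in for subtasks a TQC policy would execute as continuous control.

```python
import queue
import threading

def plan_with_llm(goal):
    # Stub for a blocking LLM call: decompose a high-level goal
    # ("make coffee") into primitive skills. A real system would
    # query the API and parse its response here.
    return ["grasp(cup)", "pour(water)", "place(cup)"]

def run_pipeline(goal):
    subtasks = queue.Queue()

    def planner():
        # Planner thread streams subtasks as they become available,
        # then posts a sentinel to signal the plan is complete.
        for task in plan_with_llm(goal):
            subtasks.put(task)
        subtasks.put(None)

    threading.Thread(target=planner, daemon=True).start()

    executed = []
    while True:
        task = subtasks.get()
        if task is None:
            break
        # In the full stack, a TQC agent would run continuous
        # control (grasp, pour, place) for each subtask here.
        executed.append(task)
    return executed
```

Streaming subtasks through a queue is what lets planning and execution overlap instead of running strictly in sequence.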
Engineering multi-threaded Python systems that:
- Decouple blocking LLM API calls from physics loops
- Maintain stable 60Hz control in MuJoCo environments
- Handle sensor fusion and state estimation in real-time
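One way to sketch the decoupling described above (names here are illustrative, not the production code): a planner thread absorbs the blocking LLM call and writes its result into a lock-guarded slot, while the fixed-rate control loop only ever does a non-blocking read of the freshest plan.

```python
import threading
import time

class AsyncPlanner:
    """Holds the latest plan; the control loop never blocks on the LLM."""

    def __init__(self):
        self._plan = None
        self._lock = threading.Lock()

    def submit(self, goal, slow_llm_call):
        # Run the blocking API call off the hot path.
        def worker():
            result = slow_llm_call(goal)
            with self._lock:
                self._plan = result
        threading.Thread(target=worker, daemon=True).start()

    def latest(self):
        # Non-blocking read of whatever plan is freshest (may be None
        # before the first plan lands).
        with self._lock:
            return self._plan

def control_loop(planner, steps, dt=1.0 / 60.0):
    history = []
    for _ in range(steps):
        plan = planner.latest()   # never waits on the planner
        history.append(plan)      # stand-in for env.step(action) in MuJoCo
        time.sleep(dt)            # hold the 60Hz cadence
    return history
```

The key property: the loop's period is bounded by `dt` plus one uncontended lock acquisition, regardless of how long the LLM call takes.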
Designing compositional state representations that:
- Decompose scenes into objects and relationships
- Enable zero-shot generalization to novel configurations
- Published findings on arXiv
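A compositional state along these lines might look like the following sketch (an assumed toy schema, not the published representation): the scene is a set of object states plus explicit pairwise relations, so a policy conditioned on `(objects, relations)` can face object configurations it never saw during training.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ObjectState:
    name: str
    position: tuple  # (x, y, z) in world frame

@dataclass
class Scene:
    objects: list = field(default_factory=list)
    relations: list = field(default_factory=list)  # e.g. ("on", "cup", "table")

    def add(self, obj):
        self.objects.append(obj)

    def relate(self, rel, a, b):
        # Record a symbolic relation between two named objects.
        self.relations.append((rel, a, b))

    def query(self, rel):
        # All object pairs connected by a given relation.
        return [(a, b) for r, a, b in self.relations if r == rel]
```

Because relations are stored symbolically rather than baked into a flat observation vector, swapping in a novel object only changes entries, not the representation's structure.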
Algorithms I Live By: PPO • A2C • TQC • SAC • TD3
```python
while True:
    observe()  # sensors -> state
    think()    # LLMs for reasoning
    act()      # RL for execution
    learn()    # continuous improvement
```

The best AI systems learn from experience, not just data.

