Applied AI/ML Engineer who loves building useful things. Based in Montreal. Currently building Vision & Reasoning Enabled Computer Use Agents (CUAs) at Autodesk and pursuing an MCS in Data Science at the University of Illinois Urbana-Champaign.
Once upon a time, personal context was scattered across a thousand chat windows and AI providers: half-remembered preferences, lost goals, forgotten life events, fragments of identity trapped behind product walls. Each new model asked the same questions like it had never met me before, and I kept reintroducing myself to machines that claimed to be intelligent. This era has to end. If AI is going to move from chatbot to true assistant, it needs a memory layer that is portable, inspectable, editable, and owned by the person it represents, not by the platform that captured it. I built SelfHub to close that gap, not for one model, but for AI as a whole. This repo is where it began. — @dimichoueiry, March 2026 (writing style totally inspired by: @karpathy)
https://github.com/dimichoueiry/SelfHub
An AI-powered lesson-to-video pipeline that converts raw educational content (text, markdown, PDF, slides) into structured explanations and rendered Manim scenes. Orchestrates ingestion, concept planning, code generation, rendering, and repair loops through a CLI workflow.
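The repair loop at the end of that pipeline can be sketched as a small retry function. This is a minimal, hypothetical sketch of the idea, not the project's actual API; `render` and `repair` here are illustrative stand-ins for the real rendering and LLM-repair steps.

```python
# Minimal sketch of a render-and-repair loop: try to render generated
# scene code, and on failure feed the error back into a repair step.
# All names here are illustrative, not the project's real interfaces.
def render_with_repair(code, render, repair, max_repairs=3):
    """Attempt a render; on failure, pass the error to a repair step and retry."""
    for _ in range(max_repairs):
        ok, err = render(code)
        if ok:
            return code
        code = repair(code, err)
    raise RuntimeError("render still failing after repairs")

# Toy usage: the "renderer" rejects code until a missing import is added.
render = lambda c: (True, None) if "import" in c else (False, "NameError: np")
repair = lambda c, e: "import numpy as np\n" + c
fixed = render_with_repair("np.zeros(3)", render, repair)
```

The key design point the loop captures is that the renderer's error message is routed back into code generation, so each retry is informed rather than blind.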
Example Prompt: minimize f(x)=x^2
4a8e9b34-b741-4027-9ed4-c880803ed907.mp4
Example Prompt 2: Show me a simple example of gradient descent for f(x)=x^2
7da9dcda-9398-443d-94e8-d9ff6e75241f.mp4
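For a sense of what the second rendered scene is animating, the underlying math is plain gradient descent on f(x) = x², where the gradient is 2x; this standalone NumPy sketch (not part of the pipeline itself) traces the iterates the animation would plot.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=50):
    """Run vanilla gradient descent and record the trajectory of iterates."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = x - lr * grad(x)        # step downhill along the gradient
        trajectory.append(x)
    return np.array(trajectory)

# For f(x) = x^2, grad(x) = 2x, so each step multiplies x by (1 - 2*lr) = 0.8
traj = gradient_descent(lambda x: 2 * x, x0=5.0)
```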
A NumPy/PyTorch tracing and visualization package that wraps real tensor operations and launches Manim animations showing what happens under the hood: shape flow, operation sequencing, and failure diagnosis (shape mismatches, broadcasting, dimension errors).
You can use it just like you would use NumPy or PyTorch, except that it also launches visuals of what is happening under the hood.
Example:
"""Quick test for openlearn-trace."""
import openlearn_trace.numpy as np
A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.array([[1, 2, 3], [4, 5, 6]])
C = np.matmul(A, B.T)Visualizations:
Scene 1: https://github.com/user-attachments/assets/4a3e19fb-4070-4192-b84e-b02989ec4958
Scene 2: https://github.com/user-attachments/assets/cde28929-4fbc-488a-8c94-d98bb3d4a6fc
Scene 3: https://github.com/user-attachments/assets/086f443e-06ff-416e-9deb-4cabd4bf90e8
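The shape flow those scenes animate can be checked directly in plain NumPy (no tracing package needed), including the failure case the package is built to explain: dropping the transpose makes the inner dimensions disagree.

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])   # shape (2, 3)
B = np.array([[1, 2, 3], [4, 5, 6]])   # shape (2, 3)

# The shape flow the trace visualizes: (2, 3) @ (3, 2) -> (2, 2)
C = np.matmul(A, B.T)

# Without the transpose, inner dimensions 3 and 2 don't match --
# exactly the kind of failure the animations break down step by step.
try:
    np.matmul(A, B)
    mismatch_raised = False
except ValueError:
    mismatch_raised = True
```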
An interactive visual canvas (Vue/Vite) for designing ML/DL architectures with drag-and-drop blocks, live shape propagation, connection validation, and exportable architecture specs.
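The live shape propagation that canvas performs boils down to folding an input shape through each block and validating every hop. A rough Python sketch of the idea (the real implementation is in the Vue app; the layer-spec format and function names below are hypothetical), using the standard convolution output-size formula:

```python
def conv2d_out_shape(h, w, kernel, stride=1, padding=0):
    """Standard conv/pool output-size formula: floor((n + 2p - k) / s) + 1."""
    out_h = (h + 2 * padding - kernel) // stride + 1
    out_w = (w + 2 * padding - kernel) // stride + 1
    return out_h, out_w

def propagate(shape, layers):
    """Fold a (channels, height, width) shape through a list of layer specs,
    validating each connection the way the canvas does on every edge."""
    c, h, w = shape
    for layer in layers:
        if layer["type"] == "conv":
            if layer["in_channels"] != c:
                raise ValueError(f"channel mismatch: expected {layer['in_channels']}, got {c}")
            c = layer["out_channels"]
            h, w = conv2d_out_shape(h, w, layer["kernel"],
                                    layer.get("stride", 1), layer.get("padding", 0))
        elif layer["type"] == "maxpool":
            # pooling with stride equal to the kernel size
            h, w = conv2d_out_shape(h, w, layer["kernel"], stride=layer["kernel"])
    return (c, h, w)

# (3, 32, 32) -> conv 3x3 pad 1 -> (16, 32, 32) -> maxpool 2 -> (16, 16, 16)
out = propagate((3, 32, 32), [
    {"type": "conv", "in_channels": 3, "out_channels": 16, "kernel": 3, "padding": 1},
    {"type": "maxpool", "kernel": 2},
])
```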
Tecsys — AI/ML Engineer, Innovation & AI
Built a custom LLM + RAG agent evaluation framework with 4 LLM judges across 20+ criteria, layered with BERTScore and ROUGE-L metrics, and integrated it into MLflow for systematic experiment tracking. Designed synthetic data generation pipelines to solve data scarcity across computer vision and LLM agent projects.
TheGoatedProfessor — Founder & AI Product
Deployed an AI-powered study platform used by 400+ students. Shipped 9 science-backed features that raised average quiz scores by 18 percentage points. Led a 4-person team and validated product-market fit through A/B-tested campaigns.
Huron Digital Pathology
Secured 2nd place in an industry-backed computer vision challenge. Re-implemented and enhanced a peer-reviewed ViT model for breast cancer detection from H&E-stained whole-slide images, achieving 90% precision on a 2TB dataset. Implemented a novel Multi-CLS Vision Transformer variant in PyTorch.

