This repository serves as the primary hub for Microchip’s Edge AI application portfolio. It highlights how artificial intelligence and machine learning can be efficiently deployed on resource-constrained MCUs, MPUs, and FPGAs, spanning real-world use cases across vision, audio, predictive maintenance, human–machine interfaces (HMI), and electrical systems.
The objective of this repository is to:
- Provide reusable reference applications that developers can adapt for their own designs
- Demonstrate best practices for deploying AI/ML models on Microchip platforms
- Showcase cross-platform support across dsPIC®, PIC32, SAM MPUs, and PolarFire® SoCs
- Act as a central knowledge base for development teams, customers, and ecosystem partners building Edge AI solutions
For more information on Microchip’s Edge AI technology and ecosystem, visit: Microchip Edge AI
Vision is one of the fastest-growing areas in Edge AI. These applications use cameras or image sensors to enable devices to see and interpret their environment.
Typical challenges include low-power image capture, real-time inference, and memory-efficient neural networks.
Example use cases:
- Smart home monitoring
- Parking lot monitoring
- Face recognition
- Person detection
- Object recognition
- License plate detection
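Before a full neural network runs, vision pipelines on constrained MCUs often use a cheap motion gate so the model only wakes when the scene changes. The sketch below is an illustrative frame-differencing gate in plain Python (frames as nested lists of grayscale values; the `threshold` and `min_ratio` values are arbitrary placeholders, not from any Microchip application):

```python
def frame_diff_ratio(prev, curr, threshold=25):
    """Fraction of pixels whose grayscale intensity changed by more
    than `threshold` between two equally sized frames."""
    changed = total = 0
    for row_prev, row_curr in zip(prev, curr):
        for a, b in zip(row_prev, row_curr):
            total += 1
            if abs(a - b) > threshold:
                changed += 1
    return changed / total if total else 0.0


def motion_detected(prev, curr, min_ratio=0.05):
    """Flag likely motion (e.g. a person entering the scene), so the
    heavier detection model only runs on frames worth analyzing."""
    return frame_diff_ratio(prev, curr) >= min_ratio
```

Gating like this keeps the duty cycle of the expensive inference path low, which is the usual answer to the low-power image-capture challenge mentioned above.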
Audio-based AI enables devices to listen, understand, and react to sound inputs. These solutions are highly relevant for hands-free control, monitoring, and assistive technologies.
The main challenges are achieving always-on low-power listening and robustness across noisy environments.
Example use cases:
- Keyword spotting (KWS) – detecting short commands like “Hey Microchip” or “Start”.
- Sound recognition – recognizing events such as glass breaking, alarms, or machinery noise.
- Guitar note identifier – identifying the notes played on guitar strings.
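The always-on low-power listening challenge is commonly addressed by a cheap energy gate in front of the keyword-spotting model: quiet frames are discarded, and only frames with enough signal energy are passed to the classifier. A minimal sketch (the 0.1 RMS threshold is an illustrative assumption, not a tuned value):

```python
import math


def frame_rms(samples):
    """Root-mean-square energy of one audio frame
    (samples normalized to the range [-1.0, 1.0])."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))


def active_frames(frames, rms_threshold=0.1):
    """Indices of frames loud enough to hand to a keyword-spotting
    model; skipping quiet frames keeps the inference duty cycle low."""
    return [i for i, f in enumerate(frames) if frame_rms(f) >= rms_threshold]
```

In a real deployment this gate would run continuously on the MCU while the neural network stays asleep most of the time.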
Predictive Maintenance (PdM) brings intelligence to industrial and IoT systems by monitoring equipment health in real time. Instead of waiting for machines to fail, AI detects early signs of degradation, reducing downtime and cost.
These applications often use vibration, current, and acoustic signals as inputs.
Example use cases:
- Fan Condition Monitoring – identifying early signs of mechanical stress.
- Motor Control AI/ML Predictive Maintenance Demonstration Application – detecting imbalance, misalignment, or efficiency loss.
HMI solutions enhance how users interact with devices by making interfaces more natural, intuitive, and responsive. AI-powered HMIs go beyond simple buttons, enabling gesture, touchless, and multimodal interactions.
Example use cases:
- Gesture recognition – enabling natural gesture input in consumer electronics or automotive.
- Gesture recognition using dsPIC – gesture recognition running on a dsPIC® device.
- Robotic Arm – classifying the orientation of the robotic arm and transmitting the results to an MBD app.
- Magic Wand – recognizing gestures made with a magic wand.
- Shining Light
- Golf Ace – learning to putt like your favorite pro golfer using machine learning.
Gas and air-quality sensing is an emerging area in Edge AI where sensor fusion and machine learning enable devices to detect, classify, and respond to chemical signatures in real time. These applications can run on low-power MCUs and are relevant across industrial, consumer, and environmental domains.
Example use cases:
- Coffee bean classification – distinguishing brands, roast level, freshness, or origin
Electrical signal monitoring is critical in industrial and residential environments. AI enhances traditional electrical sensing by detecting subtle patterns and anomalies that rule-based systems may miss.
This category includes safety, efficiency, and grid-monitoring applications.
Example use cases:
- Arc fault detection – identifying hazardous arcing in power lines or appliances.
- Smart metering and anomaly detection – predicting faults or abnormal electrical behavior.
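A simple baseline for spotting abnormal electrical behavior is a rolling z-score over recent readings: a reading far outside the trailing window's distribution is flagged as an anomaly. This is the kind of rule-based baseline an ML model improves on; the window size and z-limit below are illustrative assumptions:

```python
import math


def zscore_anomalies(readings, window=8, z_limit=3.0):
    """Indices of readings that deviate more than `z_limit` standard
    deviations from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mean = sum(hist) / window
        std = math.sqrt(sum((x - mean) ** 2 for x in hist) / window)
        if std and abs(readings[i] - mean) / std > z_limit:
            flagged.append(i)
    return flagged
```

An ML model can learn patterns (load cycles, seasonal drift) that a fixed z-score misses, which is the gap these applications address.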
This category includes motion, posture, gesture, and human-activity–based Edge AI applications. These solutions typically use IMUs, accelerometers, or multimodal sensing to enable intelligent fitness, safety, and health monitoring on-device.
Example use cases:
- Smart Dumbbell – classifying reps, posture, and movement quality
- Fall Detection – detecting and alerting on human falls using embedded ML
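Fall detection from an accelerometer is often framed as an impact spike followed by near-stillness (magnitude settling back toward 1 g of gravity). The heuristic below sketches that signature in plain Python as a stand-in for the embedded ML classifier; the g thresholds are illustrative assumptions:

```python
import math


def accel_magnitude(sample):
    """Magnitude of one (x, y, z) accelerometer sample, in g."""
    return math.sqrt(sum(a * a for a in sample))


def fall_detected(samples, impact_g=2.5, still_g=1.2, still_len=3):
    """Heuristic fall check: a hard impact followed by near-stillness
    for `still_len` consecutive samples (thresholds are placeholders)."""
    for i, s in enumerate(samples):
        if accel_magnitude(s) >= impact_g:
            tail = samples[i + 1:i + 1 + still_len]
            if len(tail) == still_len and all(
                    accel_magnitude(t) <= still_g for t in tail):
                return True
    return False
```

An ML model trained on labeled IMU traces replaces the hand-tuned thresholds while keeping this same feature (magnitude over time) as its input.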
The applications in this repository are designed to run across Microchip’s broad compute portfolio, covering:
- MCUs – Ultra-low-power and real-time devices suited for always-on sensing and lightweight ML inference
- MPUs – Application-class processors with Linux support for higher-compute AI workloads
- FPGAs and SoC FPGAs – Configurable logic and RISC-V–based platforms for accelerated AI processing
- dsPIC® DSCs – Devices optimized for signal processing and deterministic control workloads
This repository supports multiple build flows, including:
- MPLAB® X IDE with XC compilers for MCUs
- MPLAB Machine Learning Development Suite for machine learning model development
- MPLAB Harmony Configurator (MHC) for driver and middleware setup
- VectorBlox SDK for FPGA-based AI acceleration
- TensorFlow to MPLAB Harmony v3 Model Converter – converts TensorFlow models to C code, ready for seamless integration into MPLAB Harmony v3 embedded projects
Edge AI deployments that require FPGA acceleration can leverage the Microchip-VectorBlox GitHub repository. This repository provides toolflows, examples, and runtime support for ML inference on PolarFire® SoC and other FPGA-based platforms.
General documentation and guides will be found under:
Planned docs:
- Overview of the repo
- Getting Started guides per platform
- Roadmap of upcoming apps
- FAQs & troubleshooting
For bug reports, feature requests, or questions, please use the GitHub Issues section.
All content in this repository is provided under Microchip’s licensing terms.
Refer to the accompanying LICENSE file for full legal details and usage permissions.
