
Edge AI Applications Repository

This repository serves as the primary hub for Microchip’s Edge AI application portfolio. It highlights how artificial intelligence and machine learning can be efficiently deployed on resource-constrained MCUs, MPUs, and FPGAs, spanning real-world use cases across vision, audio, predictive maintenance, human–machine interfaces (HMI), and electrical systems.

The objectives of this repository are to:

  • Provide reusable reference applications that developers can adapt for their own designs
  • Demonstrate best practices for deploying AI/ML models on Microchip platforms
  • Showcase cross-platform support across dsPIC®, PIC32, SAM MPUs, and PolarFire® SoCs
  • Act as a central knowledge base for development teams, customers, and ecosystem partners building Edge AI solutions

For more information on Microchip’s Edge AI technology and ecosystem, visit: Microchip Edge AI


🔹 Application Categories

1. Vision Applications

Vision is one of the fastest-growing areas in Edge AI. These applications use cameras or image sensors to enable devices to see and interpret their environment.
Typical challenges include low-power image capture, real-time inference, and memory-efficient neural networks.

Example use cases:

  • Smart Home
  • Parking Lot
  • Face Recognition
  • Person Detection
  • Object Recognition
  • License Plate Detection
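
Memory-efficient inference on a constrained device usually comes down to integer arithmetic end to end. The sketch below is an illustration only, not code from this repository: it shows the int8 dense-layer arithmetic such a model reduces to, with int8 weights and activations, int32 accumulation, and a requantization step back to int8. All weights, biases, inputs, and the shift value are toy placeholders.

```c
/* Illustrative sketch (not from this repository): the int8 arithmetic of a
 * single quantized dense layer. Weights, biases, input, and shift are toy
 * placeholder values; a real model supplies them after training/quantization. */
#include <stdint.h>
#include <stdio.h>

#define IN_DIM  4
#define OUT_DIM 3

static int8_t clamp_i8(int32_t v)
{
    if (v > 127)  return 127;
    if (v < -128) return -128;
    return (int8_t)v;
}

/* y = saturate((W*x + b) / 2^shift): int8 in, int32 accumulate, int8 out. */
static void dense_int8(const int8_t w[OUT_DIM][IN_DIM], const int32_t bias[OUT_DIM],
                       const int8_t x[IN_DIM], int shift, int8_t y[OUT_DIM])
{
    for (int o = 0; o < OUT_DIM; o++) {
        int32_t acc = bias[o];
        for (int i = 0; i < IN_DIM; i++)
            acc += (int32_t)w[o][i] * (int32_t)x[i];
        y[o] = clamp_i8(acc / (1 << shift));   /* crude requantization */
    }
}

int main(void)
{
    const int8_t  w[OUT_DIM][IN_DIM] = { {3, -2, 1, 0}, {-1, 4, 2, -3}, {2, 2, -2, 1} };
    const int32_t b[OUT_DIM] = { 10, -5, 0 };
    const int8_t  x[IN_DIM]  = { 12, -7, 30, 5 };   /* e.g. a few image features */
    int8_t y[OUT_DIM];

    dense_int8(w, b, x, 4, y);
    for (int o = 0; o < OUT_DIM; o++)
        printf("class %d score: %d\n", o, y[o]);
    return 0;
}
```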

2. Audio Applications

Audio-based AI enables devices to listen, understand, and react to sound inputs. These solutions are highly relevant for hands-free control, monitoring, and assistive technologies.
The main challenges are achieving always-on low-power listening and robustness across noisy environments.
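
One common way to keep listening power low is to gate a heavier keyword-spotting model behind a cheap per-frame energy check. The sketch below illustrates that pattern on synthetic samples; run_keyword_model() is a hypothetical placeholder for the real classifier, not an API from this repository, and the threshold is an arbitrary value.

```c
/* Illustrative sketch (not from this repository): an energy gate in front of a
 * heavier keyword-spotting model. run_keyword_model() is a hypothetical
 * placeholder; the audio frames here are synthetic. */
#include <stdint.h>
#include <stdio.h>

#define FRAME_LEN 160   /* 10 ms at a 16 kHz sample rate */

/* Mean squared amplitude of one audio frame. */
static uint32_t frame_energy(const int16_t *frame, int n)
{
    uint64_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += (int64_t)frame[i] * frame[i];
    return (uint32_t)(acc / (uint64_t)n);
}

/* Placeholder for the real classifier (e.g. an MFCC + neural-network pipeline). */
static void run_keyword_model(const int16_t *frame, int n)
{
    (void)frame; (void)n;
    printf("energy above threshold -> waking keyword model\n");
}

int main(void)
{
    int16_t quiet[FRAME_LEN] = {0};
    int16_t loud[FRAME_LEN];
    for (int i = 0; i < FRAME_LEN; i++)
        loud[i] = (int16_t)((i % 2) ? 4000 : -4000);   /* synthetic loud frame */

    const uint32_t threshold = 1000000u;   /* tuned per microphone and gain in practice */
    const int16_t *frames[2] = { quiet, loud };

    for (int f = 0; f < 2; f++) {
        if (frame_energy(frames[f], FRAME_LEN) > threshold)
            run_keyword_model(frames[f], FRAME_LEN);
        else
            printf("frame %d: below threshold, staying in low-power listening\n", f);
    }
    return 0;
}
```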

Example use cases:


3. Predictive Maintenance Applications

Predictive Maintenance (PdM) brings intelligence to industrial and IoT systems by monitoring equipment health in real time. Instead of waiting for machines to fail, AI detects early signs of degradation, reducing downtime and cost.
These applications often use vibration, current, and acoustic signals as inputs.
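
A typical PdM front end reduces each sensor window to a handful of features before any classification step. The sketch below, using synthetic accelerometer data, computes RMS and peak amplitude and derives the crest factor, a common early-wear indicator; it is an illustration under those assumptions, not code from any application in this repository.

```c
/* Illustrative sketch (not from this repository): RMS, peak, and crest factor
 * computed over one vibration window. The accelerometer data is synthetic. */
#include <math.h>
#include <stdio.h>

#define N 256

static void vibration_features(const float *x, int n, float *rms, float *peak)
{
    float sum_sq = 0.0f, pk = 0.0f;
    for (int i = 0; i < n; i++) {
        sum_sq += x[i] * x[i];
        float a = fabsf(x[i]);
        if (a > pk)
            pk = a;
    }
    *rms  = sqrtf(sum_sq / (float)n);
    *peak = pk;
}

int main(void)
{
    /* Synthetic window: a 50 Hz vibration tone plus periodic impact spikes. */
    float accel[N];
    for (int i = 0; i < N; i++)
        accel[i] = 0.5f * sinf(2.0f * 3.14159265f * 50.0f * (float)i / 1000.0f)
                 + ((i % 64 == 0) ? 2.0f : 0.0f);

    float rms, peak;
    vibration_features(accel, N, &rms, &peak);

    /* A crest factor (peak / RMS) that rises over successive windows is a
     * common early indicator of bearing or gear wear. */
    printf("rms=%.3f peak=%.3f crest=%.2f\n", rms, peak, peak / rms);
    return 0;
}
```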

Example use cases:


4. Human–Machine Interface (HMI) Applications

HMI solutions enhance how users interact with devices by making interfaces more natural, intuitive, and responsive. AI-powered HMIs go beyond simple buttons, enabling gesture, touchless, and multimodal interactions.

Example use cases:

  • Gesture Recognition – enabling gesture-based interaction in consumer electronics and automotive applications
  • Gesture Recognition using dsPIC – gesture recognition running on a dsPIC® device
  • Robotic Arm – classifies the orientation of a robotic arm and transmits the results to an MBD App
  • Magic Wand – recognizes gestures made with a magic wand
  • Shining Light
  • Golf Ace – learn to putt like your favorite pro golfer using machine learning
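
Most IMU-based gesture pipelines share the same shape: buffer a window of samples, reduce it to a small feature vector, and pass that to a trained classifier. The sketch below illustrates only the windowing and feature step on synthetic data; classify_gesture() is a hypothetical stand-in for a real model and uses a toy rule.

```c
/* Illustrative sketch (not from this repository): reducing an IMU window to
 * per-axis mean and variance features. classify_gesture() is a hypothetical
 * placeholder with a toy rule; a real design runs a trained model here. */
#include <stdio.h>

#define WIN  64
#define AXES 3

static void window_features(const float imu[WIN][AXES], float mean[AXES], float var[AXES])
{
    for (int a = 0; a < AXES; a++) {
        float m = 0.0f;
        for (int i = 0; i < WIN; i++)
            m += imu[i][a];
        m /= (float)WIN;

        float v = 0.0f;
        for (int i = 0; i < WIN; i++) {
            float d = imu[i][a] - m;
            v += d * d;
        }
        mean[a] = m;
        var[a]  = v / (float)WIN;
    }
}

/* Toy stand-in for a trained classifier: strong X-axis motion => "swipe" (1). */
static int classify_gesture(const float mean[AXES], const float var[AXES])
{
    (void)mean;
    return (var[0] > 0.5f) ? 1 : 0;
}

int main(void)
{
    float imu[WIN][AXES];
    for (int i = 0; i < WIN; i++) {      /* synthetic swipe: oscillation on X */
        imu[i][0] = (i % 2) ? 1.0f : -1.0f;
        imu[i][1] = 0.1f;
        imu[i][2] = 1.0f;                /* gravity on Z */
    }

    float mean[AXES], var[AXES];
    window_features(imu, mean, var);
    printf("gesture class: %d\n", classify_gesture(mean, var));
    return 0;
}
```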

5. Gas Sensing Applications

Gas and air-quality sensing is an emerging area in Edge AI where sensor fusion and machine learning enable devices to detect, classify, and respond to chemical signatures in real time. These applications can run on low-power MCUs and are relevant across industrial, consumer, and environmental domains.
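
As an illustration of the sensor-fusion-plus-classification idea, the sketch below combines a few normalized sensor channels into one feature vector and assigns it to the nearest class centroid. The channel meanings, centroid values, and sample reading are invented placeholders, not data from any Microchip application.

```c
/* Illustrative sketch (not from this repository): fusing a few normalized
 * gas-sensor channels into a feature vector and classifying it by nearest
 * centroid. Channel meanings, centroids, and the reading are invented. */
#include <math.h>
#include <stdio.h>

#define CH      4   /* e.g. MOX resistance channels plus temperature/humidity */
#define CLASSES 3

static int nearest_centroid(const float x[CH], const float centroids[CLASSES][CH])
{
    int best = 0;
    float best_d = INFINITY;
    for (int c = 0; c < CLASSES; c++) {
        float d = 0.0f;
        for (int i = 0; i < CH; i++) {
            float diff = x[i] - centroids[c][i];
            d += diff * diff;
        }
        if (d < best_d) {
            best_d = d;
            best = c;
        }
    }
    return best;
}

int main(void)
{
    /* Per-class centroids; in practice these are learned offline from labeled data. */
    const float centroids[CLASSES][CH] = {
        { 0.2f, 0.1f, 0.3f, 0.5f },   /* class 0: clean air  */
        { 0.8f, 0.7f, 0.2f, 0.5f },   /* class 1: VOC-like   */
        { 0.3f, 0.9f, 0.8f, 0.4f },   /* class 2: CO-like    */
    };
    const float reading[CH] = { 0.75f, 0.68f, 0.25f, 0.52f };   /* normalized sample */

    printf("predicted class: %d\n", nearest_centroid(reading, centroids));
    return 0;
}
```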

Example use cases:


6. Electrical Applications

Electrical signal monitoring is critical in industrial and residential environments. AI enhances traditional electrical sensing by detecting subtle patterns and anomalies that rule-based systems may miss.
This category includes safety, efficiency, and grid-monitoring applications.
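
A minimal version of electrical anomaly detection compares the RMS of each current cycle against a slowly adapting baseline. The sketch below shows that pattern on a synthetic waveform; the thresholds and smoothing factor are arbitrary values chosen for illustration, not parameters from any application here.

```c
/* Illustrative sketch (not from this repository): flagging a load anomaly when
 * the per-cycle RMS current deviates from a slowly adapting baseline. The
 * waveform, thresholds, and smoothing factor are synthetic/illustrative. */
#include <math.h>
#include <stdio.h>

#define SAMPLES_PER_CYCLE 100

static float cycle_rms(const float *i, int n)
{
    float s = 0.0f;
    for (int k = 0; k < n; k++)
        s += i[k] * i[k];
    return sqrtf(s / (float)n);
}

int main(void)
{
    float baseline = 0.0f;
    const float alpha = 0.1f;    /* exponential smoothing factor for the baseline */
    const float tol   = 0.25f;   /* 25% deviation from baseline flags an anomaly  */

    for (int c = 0; c < 20; c++) {
        /* Synthetic mains-current cycle; the load steps up at cycle 15. */
        float cycle[SAMPLES_PER_CYCLE];
        float amp = (c < 15) ? 1.0f : 1.6f;
        for (int k = 0; k < SAMPLES_PER_CYCLE; k++)
            cycle[k] = amp * sinf(2.0f * 3.14159265f * (float)k / SAMPLES_PER_CYCLE);

        float rms = cycle_rms(cycle, SAMPLES_PER_CYCLE);
        if (baseline > 0.0f && fabsf(rms - baseline) / baseline > tol)
            printf("cycle %d: anomaly (rms=%.3f, baseline=%.3f)\n", c, rms, baseline);
        else    /* only healthy cycles update the baseline */
            baseline = (baseline == 0.0f) ? rms : (1.0f - alpha) * baseline + alpha * rms;
    }
    return 0;
}
```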

Example use cases:


7. Wearables & Activity Monitoring

This category includes motion, posture, gesture, and human-activity–based Edge AI applications. These solutions typically use IMUs, accelerometers, or multimodal sensing to enable intelligent fitness, safety, and health monitoring on-device.

Example use cases:

  • Smart Dumbbell – classifying reps, posture, and movement quality
  • Fall Detection – detecting and alerting on human falls using embedded ML
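
Fall detection is often bootstrapped from a simple two-stage rule: a near-free-fall dip in total acceleration followed shortly by an impact spike, which a trained model can then refine. The sketch below demonstrates that rule on a synthetic trace; all thresholds and data are illustrative assumptions, not values from the application listed above.

```c
/* Illustrative sketch (not from this repository): a two-stage fall check -- a
 * near-free-fall dip in acceleration magnitude followed within a short window
 * by an impact spike. Data and thresholds are synthetic/illustrative. */
#include <math.h>
#include <stdio.h>

#define N 50

static float accel_mag(float x, float y, float z)
{
    return sqrtf(x * x + y * y + z * z);
}

int main(void)
{
    /* Synthetic 3-axis trace in g: mostly gravity on Z, a dip, then an impact. */
    float ax[N] = {0}, ay[N] = {0}, az[N];
    for (int i = 0; i < N; i++)
        az[i] = 1.0f;
    az[20] = 0.30f;
    az[21] = 0.25f;   /* free-fall phase */
    az[24] = 2.80f;   /* impact */

    const float freefall_th = 0.5f;   /* g */
    const float impact_th   = 2.5f;   /* g */
    int freefall_at = -1;

    for (int i = 0; i < N; i++) {
        float m = accel_mag(ax[i], ay[i], az[i]);
        if (m < freefall_th)
            freefall_at = i;                              /* remember the dip */
        else if (m > impact_th && freefall_at >= 0 && (i - freefall_at) <= 10)
            printf("fall detected at sample %d\n", i);    /* dip then impact */
    }
    return 0;
}
```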

🖥 Supported Platforms

The applications in this repository are designed to run across Microchip’s broad compute portfolio, covering:

  • MCUs – Ultra-low-power and real-time devices suited for always-on sensing and lightweight ML inference
  • MPUs – Application-class processors with Linux support for higher-compute AI workloads
  • FPGAs and SoC FPGAs – Configurable logic and RISC-V–based platforms for accelerated AI processing
  • dsPIC® DSCs – Devices optimized for signal processing and deterministic control workloads

🛠 Toolchains & Build Support

This repository supports multiple build flows, including:


🔗 VectorBlox Reference

Edge AI deployments that require FPGA acceleration can leverage the Microchip-VectorBlox GitHub repository, which provides toolflows, examples, and runtime support for ML inference on PolarFire® SoC and other FPGA-based platforms.


📚 Documentation

General documentation and guides will be located under:

Planned docs:

  • Overview of the repo
  • Getting Started guides per platform
  • Roadmap of upcoming apps
  • FAQs & troubleshooting

For bug reports, feature requests, or questions, please use the GitHub Issues section.


📜 License

All content in this repository is provided under Microchip’s licensing terms.
Refer to the accompanying LICENSE file for full legal details and usage permissions.

