An audio event detection training and deployment tool
- We simplify the process of training and deployment on MCUs.
- If you haven't installed NuEdgeWise, please follow these steps to set up the Python virtual environment and choose `NuEdgeWise_env`. Skip this step if you have already done so.
- The `train.ipynb` notebook helps you train the model and convert it to a TFLite model, a C++ file, and a Vela TFLite model.
(a.) Whether you downloaded ESC-50 or use your own dataset, it should be placed in the `dataset` folder and follow the ESC format.
(b.) This tool uses the ESC-50 dataset, which is provided in ESC format, for training and testing.
(c.) Your dataset must be comprised of:
  - A folder containing the audio files, all in the same format, such as `.wav`.
  - A `.csv` file with at least a "filename" column and a "category" column.
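As a minimal sketch of what the metadata check implies, the snippet below validates that an ESC-style CSV has the required "filename" and "category" columns. The sample rows are made up for illustration; a real dataset would point at actual audio files.

```python
import csv
import io

# Illustrative ESC-style metadata: one row per labeled audio clip.
sample_csv = io.StringIO(
    "filename,category\n"
    "dog_bark_01.wav,dog\n"
    "rain_02.wav,rain\n"
)

reader = csv.DictReader(sample_csv)
required = {"filename", "category"}
missing = required - set(reader.fieldnames)
assert not missing, f"CSV is missing columns: {missing}"

rows = list(reader)
print(f"{len(rows)} labeled clips, categories: "
      f"{sorted({r['category'] for r in rows})}")
```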
- `train.ipynb` provides various attributes for training configuration. Users can control them through a simple UI in the Jupyter notebook. Default model and training settings are provided for `miniresnetv2` and `yamnet`, so users can start from these with a pre-trained model.
- More detailed configuration options are in `cfg/my_config.yaml`.
- After training, the test results of the normal (float) and quantized models are shown as images.
- The training results and model are saved in `workspace/YOUR_PROJECT_NAME`.
- Use the `Test` tab in `train.ipynb` to test the TFLite model with a single test audio file that does not go through preprocessing (rearranging of the audio file). This test is closer to the MCU inference scenario.
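To illustrate the single-file, no-preprocessing scenario, the sketch below reads raw int16 samples from a `.wav` file the way an MCU pipeline would consume them. The file name, sample rate, and test tone are assumptions made so the example is self-contained; feeding the samples to the TFLite interpreter is omitted because it depends on the trained model's input shape.

```python
import math
import struct
import wave

SAMPLE_RATE = 16000  # assumed rate; the real model may expect a different one

# Generate a 1-second 440 Hz tone so the example has a file to read.
with wave.open("demo.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 16-bit PCM
    w.setframerate(SAMPLE_RATE)
    tone = [int(10000 * math.sin(2 * math.pi * 440 * t / SAMPLE_RATE))
            for t in range(SAMPLE_RATE)]
    w.writeframes(struct.pack(f"<{len(tone)}h", *tone))

# Read the raw samples back with no dataset-level preprocessing.
with wave.open("demo.wav", "rb") as w:
    raw = w.readframes(w.getnframes())
samples = struct.unpack(f"<{len(raw) // 2}h", raw)

# `samples` is what would be handed to the TFLite interpreter.
print(len(samples))  # 16000 samples = 1 s at 16 kHz
```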
- In the `board_test` folder, we offer a pyOCD script that communicates with the board to run the whole test dataset. This makes it possible to test a large dataset on the MCU and collect the results.
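The host-side scoring loop of such a board test can be sketched as below. The actual script talks to the MCU over pyOCD, which requires attached hardware, so here `run_on_board` is a hypothetical stand-in that only shows where the board call would go; the CSV rows are likewise invented.

```python
import csv
import io

def run_on_board(wav_path: str) -> str:
    """Hypothetical stand-in for sending one clip to the MCU via pyOCD
    and reading back its predicted category."""
    return "dog"  # a real implementation would return the board's answer

# Invented metadata mirroring the ESC-style CSV layout.
meta = io.StringIO("filename,category\nclip1.wav,dog\nclip2.wav,rain\n")

correct = total = 0
for row in csv.DictReader(meta):
    total += 1
    if run_on_board(row["filename"]) == row["category"]:
        correct += 1

accuracy = correct / total
print(f"board accuracy: {accuracy:.0%}")  # 50% with this stub
```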
- Use the `Deployment` tab in `train.ipynb` to convert the TFLite model to C source/header files and a Vela TFLite model.
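Converting a `.tflite` flatbuffer into C source is conceptually similar to `xxd -i`, as sketched below. The array name, line width, and stand-in model bytes are illustrative assumptions; the Deployment tab's actual output naming may differ.

```python
def tflite_to_c(model_bytes: bytes, name: str = "g_model") -> str:
    """Render a byte blob as a C unsigned-char array plus a length constant."""
    body = ",\n  ".join(
        ", ".join(f"0x{b:02x}" for b in model_bytes[i:i + 12])
        for i in range(0, len(model_bytes), 12)
    )
    return (
        f"const unsigned char {name}[] = {{\n  {body}\n}};\n"
        f"const unsigned int {name}_len = {len(model_bytes)};\n"
    )

fake_model = bytes(range(20))      # stand-in for real .tflite contents
c_source = tflite_to_c(fake_model)
print(c_source.splitlines()[0])    # const unsigned char g_model[] = {
```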
- ML_M460_SampleCode
- [M55M1]