Deep Learning Inference using Constrained Devices

  • Date: Friday 30 October 2020
  • Duration: 1 hour (with live Q&A)
  • Time: 11:00 – 12:00
  • Speaker: Rahul Dubey
  • Price: FREE!

 

Overview:

As applications of Deep Learning grow rapidly across many industries, this webinar will help you understand some of the practicalities of deploying Deep Learning on constrained platforms such as single-board computers, Microcontrollers and Neural Network accelerators.

In this webinar:

We will guide you through the steps needed to deploy Deep Learning Models at the cloud Edge, using an industrial application as an example use case.

We will cover:

  • the differences between Model Training and Model Inference
  • leveraging Transfer Learning to customize the Model (a minimal sketch of this step follows the list)
  • setting up sensors for training data acquisition and inference
  • how a Model’s architecture and weights are stored
  • how a Model connects to data in the outside world using a scan loop (see the second sketch below)
  • the use of signal processing and neural network libraries to execute models on Microcontrollers
  • Model execution using a neural network accelerator
  • creation of a Docker container to package the Model and its runtime dependencies

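By way of illustration only (this is not the webinar material), a minimal Python sketch of the Transfer Learning step might look as follows, assuming a pretrained Keras MobileNetV2 backbone and a hypothetical two-class task:

```python
# Minimal transfer-learning sketch (illustrative only): reuse a pretrained
# feature extractor and train a small classification head on new data.
import tensorflow as tf

# Pretrained backbone with its ImageNet weights, classification head removed.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained weights

# Small head customized to the target task (the class count is an assumption).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds would be a tf.data.Dataset built from the acquired sensor data.
# model.fit(train_ds, epochs=5)

# Convert to TensorFlow Lite for deployment on a constrained device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
open("model.tflite", "wb").write(converter.convert())
```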
The ideas will be illustrated in the context of NXP silicon devices.
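As a rough sketch of the scan-loop idea referenced in the list above, the loop below uses the generic TensorFlow Lite Python interpreter, as might run on a single-board computer, rather than any NXP-specific runtime; read_sensor() and handle() are hypothetical placeholders for the application's sensor input and output handling:

```python
# Sketch of a scan loop: repeatedly read a sensor, run inference, act on the
# result. Assumes the model.tflite file produced by the previous sketch.
import time
import numpy as np
import tflite_runtime.interpreter as tflite  # or tf.lite on a full install

interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def read_sensor():
    """Placeholder: acquire one frame/window of sensor data."""
    return np.zeros(inp["shape"], dtype=inp["dtype"])

def handle(scores):
    """Placeholder: act on the model output (log, alert, actuate...)."""
    print("class scores:", scores)

while True:
    interpreter.set_tensor(inp["index"], read_sensor())
    interpreter.invoke()
    handle(interpreter.get_tensor(out["index"]))
    time.sleep(0.1)  # scan period
```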

Rahul Dubey, Doulos Member of Technical Staff, will present this training webinar, which will consist of a one-hour session with interactive Q&A participation from attendees. Attendance is free of charge.

 

 
