Deep Learning Inference using Constrained Devices

CONFIRMED TO RUN

Register Now

  • Date: Friday, May 15, 2020
  • Duration: 1 hour (with live Q&A)
  • Time: 11am – 12pm (CEST)
  • Presenter: Rahul Dubey
  • Cost: FREE!


Overview:

As applications of Deep Learning grow rapidly across many industries, this webinar will help you understand the practicalities of deploying Deep Learning on constrained platforms such as single-board computers, Microcontrollers and Neural Network accelerators.

In this webinar:

We will guide you through the steps needed to deploy Deep Learning Models at the Cloud Edge, using an industrial application as an example use case.

We will cover:

  • the differences between Model Training and Model Inference
  • leveraging Transfer Learning to customize the Model (see the first sketch after this list)
  • setting up sensors for training data acquisition and inference
  • how a Model’s architecture and weights are stored
  • how a Model connects to data in the outside world using a scan loop (see the second sketch after this list)
  • the use of signal processing and neural network libraries to execute models on Microcontrollers
  • Model execution using a neural network accelerator
  • creation of a Docker container to package the Model and its runtime dependencies

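To give a flavour of the Transfer Learning step, here is a minimal sketch, assuming a Keras/TensorFlow workflow on the training host (not on the constrained target); the MobileNetV2 backbone, the 96×96 input size and the two-class head are illustrative assumptions, not the webinar's actual material.

```python
# Illustrative sketch only (assumed Keras/TensorFlow workflow): reuse a
# pretrained MobileNetV2 feature extractor and train just a small new head.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained weights

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. "normal" / "fault"
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_dataset, epochs=5)  # train only the new head on your own data
```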
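And as a sketch of the scan loop idea, the fragment below assumes a Python-capable target (for example a single-board computer) with the tflite_runtime package installed; the model file name, the sensor-reading helper and the decision threshold are hypothetical placeholders.

```python
# Illustrative sketch only: a scan loop that periodically acquires a sensor
# sample, runs inference with a TensorFlow Lite interpreter and acts on it.
import time
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="anomaly_model.tflite")  # placeholder name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def read_sensor_sample(length):
    # Placeholder for real data acquisition (e.g. an accelerometer driver).
    return np.random.rand(1, length).astype(np.float32)

while True:
    sample = read_sensor_sample(inp["shape"][1])
    interpreter.set_tensor(inp["index"], sample)
    interpreter.invoke()                                  # run on-device inference
    score = float(interpreter.get_tensor(out["index"])[0][0])
    if score > 0.5:                                       # threshold is illustrative
        print("Possible anomaly detected, score =", score)
    time.sleep(1.0)                                       # scan interval
```

On a Microcontroller the same pattern would typically be written in C/C++ using signal processing and neural network libraries such as TensorFlow Lite for Microcontrollers, but the structure of the loop is the same.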
The ideas will be illustrated in the context of NXP silicon devices.

Doulos Member of Technical Staff Rahul Dubey will present this training webinar, which consists of a single one-hour session (see the details above) and is interactive, with live Q&A participation from attendees.
