Hardware acceleration for on-device Machine Learning

Video Link: https://www.youtube.com/watch?v=iSt3fT1YsKE

Hardware acceleration can dramatically reduce inference latency for machine learning-powered features and enable live on-device experiences that would otherwise not be possible.
Today, in addition to the CPU, Android devices embed various specialized chips such as the GPU, DSP, or NPU that you can use to accelerate your ML inference.
In this talk, we go over tools and solutions offered by the TensorFlow and Android ML teams that help you take advantage of this hardware to accelerate ML inference in your Android app.
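
To make the approach concrete, here is a minimal Kotlin sketch of the pattern the talk covers: enabling the TensorFlow Lite GPU delegate when the device supports it, and falling back to multi-threaded CPU inference otherwise. The buildInterpreter function and modelBuffer parameter are hypothetical placeholders for your own setup; see the GPU delegate documentation below for the full API.

import java.nio.MappedByteBuffer
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.gpu.GpuDelegate

// Build a TFLite Interpreter that runs on the GPU when supported,
// falling back to multi-threaded CPU inference otherwise.
// `modelBuffer` is a placeholder for your memory-mapped .tflite model.
fun buildInterpreter(modelBuffer: MappedByteBuffer): Interpreter {
    val compatList = CompatibilityList()
    val options = Interpreter.Options().apply {
        if (compatList.isDelegateSupportedOnThisDevice) {
            // Use GPU delegate options tuned for this specific device.
            val delegateOptions = compatList.bestOptionsForThisDevice
            addDelegate(GpuDelegate(delegateOptions))
        } else {
            // No compatible GPU: run on the CPU with 4 threads.
            setNumThreads(4)
        }
    }
    return Interpreter(modelBuffer, options)
}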

Resources:
TensorFlow documentation→ https://goo.gle/3UCuw2L
GPU delegate documentation → https://goo.gle/3DQMWGe
Model analyzer → https://goo.gle/3NRuKAN
NNAPI delegate documentation → https://goo.gle/3tc4ibB (see the companion sketch after this list)
Performance delegates documentation → https://goo.gle/3TiZeNd
Acceleration Service → https://goo.gle/3hkxMRT
Android ML documentation → https://goo.gle/3tbzcko
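
As a companion sketch, this is roughly what using the NNAPI delegate looks like in Kotlin; NNAPI routes execution to available accelerators such as the DSP or NPU through the device's drivers. The runWithNnApi function and its parameters are hypothetical placeholders; see the NNAPI delegate documentation above for details.

import java.nio.MappedByteBuffer
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate

// Run a single inference through NNAPI, which dispatches execution to
// available accelerators (DSP, NPU, GPU) via the device's drivers.
fun runWithNnApi(modelBuffer: MappedByteBuffer, input: Any, output: Any) {
    val nnApiDelegate = NnApiDelegate()
    val interpreter = Interpreter(
        modelBuffer,
        Interpreter.Options().addDelegate(nnApiDelegate)
    )
    interpreter.run(input, output)
    // Delegates hold native resources and must be closed explicitly.
    interpreter.close()
    nnApiDelegate.close()
}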

Speaker: Thomas Ezan

Watch more:
Watch all the Android Dev Summit sessions → https://goo.gle/ADS-All
Watch all the Platform track sessions → https://goo.gle/ADS-Platform

Subscribe to Android Developers → https://goo.gle/AndroidDevs

#Featured #AndroidDevSummit #Android

Tags:
Hardware acceleration
performance improvement
machine learning
ML
on-device machine learning
CPU
DSP
NPU
Android ML
Android Dev Summit
Android Developers Summit
Android Dev Summit 2022
ADS
ADS 22
ADS ’22
ADS 2022
Developers Summit
Dev Summit
Android developer
android developers
android dev
android devs
android announcements
android announcement
app developer
developer
application developer