Schedule
TVMCon covers the state of the art in deep learning compilation and optimization, with a range of tutorials, research talks, case studies, and industry presentations.
Day 1: Tutorials
March 16th, 2023
Introduction to Unity: TVMScript | Yaxing Cai | Tutorial
Introduction to Unity: Relax and PyTorch | 3 speakers | Tutorial
TVM BYOC in Practice | Roy Segal | Tutorial
Get Started with TVM on Adreno GPU | Siva Rama Krishna Reddy | Tutorial
Introduction to Unity: Metaschedule | Xiyou Zhou | Tutorial
How to Bring microTVM to a custom IDE | Mohamad Katanbaf | Tutorial
Day 2: Keynote & Main Session
March 17th, 2023
Community Keynote | 4 speakers | Keynote
PyTorch 2.0: the journey to bringing compiler technologies to the core of PyTorch | Peng Wu | Keynote
Support QNN Dialect for TVM with MediaTek Neuron and Devise the Scheduler for Acceleration | 2 speakers | Main Stage
On-Device Training Under 256KB Memory | 2 speakers | Main Stage
AMD Tutorial | 2 speakers | Main Stage
TVM at TI: Accelerating inference using the C7x/MMA | 2 speakers | Main Stage
Adreno GPU: 4x speed-up and upstreaming to TVM mainline | 2 speakers | Main Stage
Transfer-Tuning: Reusing Auto-Schedules for Efficient Tensor Program Code Generation | Perry Gibson | Main Stage
Improvement in the TVM OpenCL codegen to autogenerate optimal convolution kernels for Adreno GPUs | 2 speakers | Main Stage
PyTorch Profiling | An Wang | Main Stage
TVM Unity: Pass Infrastructure and BYOC | 2 speakers | Main Stage
Renesas Hardware accelerators with Apache TVM | 2 speakers | Main Stage
Introduction on 4th Gen Intel Xeon processor and BF16 support with TVM | 2 speakers | Main Stage
Hidet: Task Mapping Programming Paradigm for Deep Learning Tensor Programs | Yaoyao Ding | Main Stage
Towards Building a Responsible Data Economy | Dawn Song | Keynote
Optimizing SYCL Device Kernels with AKG | 2 speakers | Lightning Talk
Adreno GPU Performance Enhancements using TVM | 2 speakers | Lightning Talk
Improvements to CMSIS-NN integration in TVM | 2 speakers | Lightning Talk
UMA: Universal Modular Accelerator Interface | 3 speakers | Lightning Talk
Day 3: Research & Community Discussions
March 18th, 2023
TVM Unity for Dynamic Models | 2 speakers | Lightning Talk
Empower Tensorflow serving with backend TVM | Tiejun Chen | Lightning Talk
Enabling Conditional Computing on Hexagon target | 2 speakers | Lightning Talk
Decoupled Model Schedule for Large Deep Learning Model Training | Hongzheng Chen | Lightning Talk
Using TVM to bring Bayesian neural networks to embedded hardware | Falk Selker | Lightning Talk
Efficient Support of TVM Scan OP on RISC-V Vector Extension | 2 speakers | Lightning Talk
Improvements to Ethos-U55 support in TVM including CI on Alif Semiconductor boards | Ilya Gozman | Lightning Talk
Compiling Dynamic Shapes | Vinod Grover | Main Stage
TVM Packaging in 2023: delivering TVM to end users | Leandro Nunes | Main Stage
Cross-Platform Training Using Automatic Differentiation on Relax IR | 2 speakers | Main Stage
AutoTVM: Reducing tuning space by cross axis filtering | Alexey Voronov | Main Stage
SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning | 2 speakers | Main Stage
Analytical Tensorization and Fusion for Compute-intensive Operators | Si-Ze Zheng | Main Stage
CUTLASS 3.0: Next Generation Composable and Reusable GPU Linear Algebra Library | Vijay Thakkar | Main Stage
Enabling Data Movement and Computation Pipelining in Deep Learning Compiler | Guyue Huang | Main Stage
Automating DL Compiler Bug Finding with NNSmith | Jiawei Liu | Main Stage
TVM at NIO | Zhao Wu | Main Stage
TVM at Tencent | 3 speakers | Main Stage
Integrating the Andes RISC-V Processors into TVM | 2 speakers | Main Stage
Alpa: A Compiler for Distributed Deep Learning | Lianmin Zheng | Main Stage
ACRoBat: Compiler and Runtime Techniques for Efficient Auto-Batching of Dynamic Deep Learning Computations | Pratik Fegade | Main Stage
Channel Folding: a Transform Pass for Optimizing Mobilenets | Gavin Uberti | Main Stage