Fuxin Li is currently a professor in the School of Electrical Engineering and Computer Science at Oregon State University. Before that, he held research positions at the University of Bonn and the Georgia Institute of Technology. He obtained his Ph.D. from the Institute of Automation, Chinese Academy of Sciences, in 2009. He has won an NSF CAREER award, (co-)won the PASCAL VOC semantic segmentation challenges from 2009 to 2012, and led a team to a 4th-place finish in the DAVIS Video Segmentation challenge in 2017. He has published more than 60 papers in computer vision, machine learning, and natural language processing. His main research interests are deep learning, video object segmentation, multi-target tracking, point cloud deep networks, uncertainty estimation in deep learning, and human understanding of deep learning.

21 January 2021

This talk will present some of our recent work on new designs for the well-known convolutional and recurrent networks. We will start with PointConv, which efficiently implements CNNs on irregularly spaced 3D point cloud data. This allows us to build CNNs in areas where it was not possible in the past, improve the handling of scale and rotation invariances in CNNs, and improve non-rigid 3D point matching (also known as scene flow from point clouds). Then, we will talk about our work on particle-based network generators, where we train a generator to produce all the weights of a deep network in order to estimate uncertainty in deep learning predictions, detect outliers and adversarial examples, and drive exploration in reinforcement learning algorithms.
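The core idea behind convolving irregularly spaced points is to make the convolution weights a learned function of each neighbor's relative position, rather than indexing into a fixed grid kernel. The sketch below is a minimal, hypothetical illustration of that idea (all names, shapes, and the tiny MLP are illustrative; the actual PointConv also includes an inverse-density reweighting and an efficient reformulation not shown here):

```python
import numpy as np

def weight_mlp(offsets, W1, b1, W2, b2):
    # Tiny 2-layer MLP: maps relative 3D offsets to per-point conv weights.
    h = np.maximum(offsets @ W1 + b1, 0.0)   # ReLU hidden layer
    return h @ W2 + b2                        # (K, C_in) weights

def pointconv(center, neighbors, features, params):
    """One PointConv-style output at `center`: the kernel is evaluated at
    each neighbor's relative coordinate, so no regular grid is needed."""
    offsets = neighbors - center              # (K, 3) relative coordinates
    W = weight_mlp(offsets, *params)          # weights depend on geometry
    return np.sum(W * features, axis=0)       # weighted sum over neighbors

rng = np.random.default_rng(0)
K, C_in, H = 8, 4, 16                         # neighbors, channels, hidden size
params = (rng.normal(size=(3, H)), np.zeros(H),
          rng.normal(size=(H, C_in)), np.zeros(C_in))
center = np.zeros(3)
neighbors = rng.normal(size=(K, 3))           # irregularly spaced 3D points
features = rng.normal(size=(K, C_in))         # per-point input features
out = pointconv(center, neighbors, features, params)
print(out.shape)                              # one feature vector per center
```

Because the weight function is continuous in the offset, the same learned kernel applies to any sampling of the underlying surface.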

On recurrent networks, I will talk about our experience using LSTMs for multi-target tracking and offer some intuitions about why current LSTMs may be insufficient for long-term multi-object tracking. We present a novel bilinear LSTM model suited to multi-target tracking problems. Results on the MOT 2016 and MOT 2017 challenges show that it significantly outperforms traditional LSTMs in terms of identity switches and helps us achieve real-time online tracking with state-of-the-art performance.
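The intuition behind the bilinear variant is that a vector-valued LSTM memory mixes appearance information additively, whereas matching a track to a new detection is naturally multiplicative: a matrix-valued memory acts like a learned appearance template applied to the incoming feature. The following is an illustrative sketch of that memory update only, under assumed shapes; the gates are shown as fixed vectors for brevity, whereas in a full model they would be computed from the input as in a standard LSTM:

```python
import numpy as np

def bilinear_memory_step(H_mem, x, forget, inp):
    """H_mem is a (m, d) matrix memory whose rows accumulate past appearance
    features; the output is the multiplicative interaction H_mem @ x,
    i.e. the new feature matched against the stored template."""
    H_mem = forget[:, None] * H_mem + inp[:, None] * x[None, :]  # rank-1 update
    y = np.maximum(H_mem @ x, 0.0)   # bilinear output: memory times input
    return H_mem, y

rng = np.random.default_rng(1)
d, m = 8, 3                           # feature dim, memory rows (illustrative)
H_mem = np.zeros((m, d))
for t in range(5):                    # feed a short track of features
    x = rng.normal(size=d)
    H_mem, y = bilinear_memory_step(H_mem, x,
                                    forget=np.full(m, 0.9),
                                    inp=np.full(m, 0.1))
print(H_mem.shape, y.shape)
```

A high response in `y` indicates the new detection resembles the stored appearance history, which is the signal a tracker can gate on when deciding identity assignments.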

