Column: VALSE
The main purpose of the annual VALSE (Vision and Learning Seminar) workshop is to provide a platform for in-depth academic exchange among young Chinese scholars working in computer vision, image processing, pattern recognition, and machine learning.

VALSE Webinar 19-09: 3D Vision and Deep Learning

VALSE · Official Account · 2019-04-12 21:02



Speaker: Qixing Huang (The University of Texas at Austin)

Time: April 17, 2019 (Wednesday), 20:00 (Beijing Time)

Title: Extreme Relative Pose Estimation for RGB-D Scans via Scene Completion


Speaker Bio:

Qixing Huang is an assistant professor at UT Austin. He obtained his PhD in Computer Science from Stanford University in 2012. From 2012 to 2014 he was a postdoctoral research scholar at Stanford University. Huang was a research assistant professor at Toyota Technological Institute at Chicago from 2014 to 2016. He received his MS and BS in Computer Science from Tsinghua University. Huang has also interned at Google Street View, Google Research, and Adobe Research.


His research spans computer vision, computer graphics, computational biology, and machine learning. In particular, his recent focus is on developing machine learning algorithms (particularly deep learning) that leverage Big Data to solve core problems in computer vision, computer graphics, and computational biology. He is also interested in statistical data analysis, compressive sensing, low-rank matrix recovery, and large-scale optimization, which provide a theoretical foundation for much of his research. He is an area chair of CVPR 2019 and ICCV 2019.


Homepage:

https://www.cs.utexas.edu/~huangqx/


Abstract:

Estimating the relative rigid pose between two RGB-D scans of the same underlying environment is a fundamental problem in computer vision, robotics, and computer graphics. Most existing approaches allow only limited relative pose changes, since they require considerable overlap between the input scans. We introduce a novel approach that extends the scope to extreme relative poses, with little or even no overlap between the input scans. The key idea is to infer more complete scene information about the underlying environment and to match the completed scans. In particular, instead of performing scene completion from each individual scan alone, our approach alternates between relative pose estimation and scene completion. This allows scene completion in later iterations to utilize information from both input scans, improving both scene completion and relative pose estimation. Experimental results on benchmark datasets show that our approach leads to considerable improvements over state-of-the-art approaches for relative pose estimation. In particular, our approach provides encouraging relative pose estimates even between non-overlapping scans.
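As a rough illustration of the pose-estimation half of the pipeline sketched in the abstract, the snippet below implements the classical Kabsch/Procrustes least-squares solve for a rigid transform from 3D point correspondences in NumPy. This is only a minimal sketch, not the authors' method: the talk's contribution lies in alternating such pose estimation with a learned scene-completion network, which is not reproduced here, and the function name rigid_pose_from_correspondences and the synthetic self-check are illustrative assumptions.

```python
import numpy as np

def rigid_pose_from_correspondences(P, Q):
    """Least-squares rigid transform (R, t) with Q ≈ R @ P + t (Kabsch/Procrustes).

    P, Q: (N, 3) arrays of corresponding 3D points.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                    # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # closest proper rotation
    t = cQ - R @ cP
    return R, t

# Tiny self-check on synthetic data: rotate and translate a random point set,
# then recover the relative pose from the correspondences.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
angle = 0.7
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_pose_from_correspondences(P, Q)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```

In the approach described in the abstract, a pose solve of this kind would be interleaved with a completion step that fills in unobserved geometry using both scans, so that matches can be found even when the raw inputs barely overlap.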

Speaker: Hao Su (University of California, San Diego)

Time: April 17, 2019 (Wednesday), 20:30 (Beijing Time)

Title: Understanding the 3D Environments for Interactions






