


Hou-Ning Hu

Ph.D. student, Vision Science Lab
National Tsing Hua University, Taiwan

Hou-Ning Hu is a doctoral student in the Department of Electrical Engineering, National Tsing Hua University. He works with Prof. Min Sun as a member of the Vision Science Lab on deep learning and its applications in computer vision.
His research interests span a wide range of computer vision applications, including user experience, visual saliency in 360° videos, video dynamics, and super-resolution. He is also very interested in learning 3D geometry from visual perception of our surroundings.



News

Our paper on 360° visual grounding has been accepted to AAAI 2018!
Our paper on 360° video piloting has been accepted to CVPR 2017 as an oral presentation!
Our paper on 360° viewing experience has been accepted to CHI 2017!

Publications

2018

Shih-Han Chou, Yi-Chun Chen, Kuo-Hao Zeng, Hou-Ning Hu, Jianlong Fu, and Min Sun
Self-view Grounding Given a Narrated 360° Video
AAAI 2018
ICCV 2017 Workshop
[BibTex]    [PDF (ArXiv)]    [Project Page]    [MSRA Page]    [ICCV CLVL Page]   

@inproceedings{ChouAAAI18,
author = {Shih-Han Chou and Yi-Chun Chen and Kuo-Hao Zeng and Hou-Ning Hu and Jianlong Fu and Min Sun},
title = {Self-view Grounding Given a Narrated 360° Video},
booktitle = {AAAI Conference on Artificial Intelligence (AAAI)},
year = {2018}
}
2017

Hou-Ning Hu*, Yen-Chen Lin*, Ming-Yu Liu, Hsien-Tzu Cheng, Yung-Ju Chang, and Min Sun
Deep 360 Pilot: Learning a Deep Agent for Piloting through 360° Sports Videos
IEEE CVPR 2017 Oral
(* indicates equal contribution)
[BibTex]    [PDF (High Resolution)]    [PDF (ArXiv)]    [Project Page]   

@inproceedings{HuCVPR17,
author = {Hou-Ning Hu and Yen-Chen Lin and Ming-Yu Liu and Hsien-Tzu Cheng and Yung-Ju Chang and Min Sun},
title = {Deep 360 Pilot: Learning a Deep Agent for Piloting through 360° Sports Videos},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2017}
}

Yen-Chen Lin, Yung-Ju Chang, Hou-Ning Hu, Hsien-Tzu Cheng, Chi-Wen Huang, and Min Sun
Tell Me Where to Look: Investigating Ways for Assisting Focus in 360° Video
ACM CHI 2017
[BibTex]    [PDF (High Resolution)]    [PDF (DOI)]    [Project Page]   

@inproceedings{LinCHI17,
author = {Lin, Yen-Chen and Chang, Yung-Ju and Hu, Hou-Ning and Cheng, Hsien-Tzu and Huang, Chi-Wen and Sun, Min},
title = {Tell Me Where to Look: Investigating Ways for Assisting Focus in 360° Video},
booktitle = {Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems},
series = {CHI '17},
year = {2017},
isbn = {978-1-4503-4655-9},
location = {Denver, Colorado, USA},
pages = {2535--2545},
numpages = {11},
url = {http://doi.acm.org/10.1145/3025453.3025757},
doi = {10.1145/3025453.3025757},
acmid = {3025757},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {360-degree videos, auto pilot, focus assistance, video experience, visual guidance},
}

Side Projects

TensorFlow implementation of SoundNet

A TensorFlow implementation of SoundNet, from the paper "SoundNet: Learning Sound Representations from Unlabeled Video" by Yusuf Aytar, Carl Vondrick, and Antonio Torralba (NIPS 2016).

Experiences

Novatek Microelectronics Corp.

Summer Intern
Jul. 2017 - Aug. 2017
Computer Vision Algorithm Development

Vision Science Lab

Advisor: Prof. Min Sun
Graduate student
Jul. 2015 - Present
Computer Vision and Deep Learning.


EE6485 Computer Vision

Head teaching assistant
Sep. 2016 - Jan. 2017

Resume

Last update: March 2018

Contact

Address:
Vision Science Laboratory
No. 101, Section 2, Kuang-Fu Road,
Hsinchu, Taiwan 30013

Office: EECS 711
E-mail: eborboihuc[at]gmail[dot]tw