Lin Zhong
Rutgers University, Computer Science Department
617 Bowser Road, Piscataway, NJ, 08854
phone: (267)815-5411, email: linzhong at cs dot rutgers dot edu
[Home] [Publications]

Currently, I am a software engineer at a fun company, Snapchat.

Before that, I was a Ph.D. student in Computer Science at Rutgers, the State University of New Jersey, advised by Prof. Dimitris N. Metaxas. I was also a research assistant in the Computational Biomedicine Imaging and Modeling Center (CBIM). My research interests focus on computer vision, medical image analysis, machine learning, and computer graphics.
(Curriculum Vitae)(Linkedin).

Education
  • 2009 ~ 2015, Ph.D. in Computer Science, Rutgers University.
  • 2006 ~ 2009, M.S. in Computer Science, Beihang University.
  • 2002 ~ 2006, B.E. in Computer Science, Harbin Engineering University.
Working Experience
  • 2015 ~ present, Software Engineer at Snapchat.
  • 2010 ~ 2015, Research Assistant at Rutgers University.
  • Summer, 2014, Software Engineer intern at Facebook, Menlo Park, CA.
  • Summer, 2012, Research intern at Creative Technologies Lab, Adobe Systems, Seattle.
  • Summer, 2011, Research intern at Eastman Kodak Research Labs., Rochester, NY.
  • 2009 ~ 2010, Teaching Assistant at Rutgers University.
  • 2006 ~ 2009, Research Assistant at Beihang University.
Research Projects

Feature Engineering for Facebook Ads Ranking Backend

Mentor: Dan Zhang. Team: Ads Ranking / Core Optimization

Mainly focused on feature engineering for the ads ranking system, improving the prediction performance for clicks and conversions. It was also really exciting to run experiments on data from billions of users.

  • Retrieved Asx features from Adlogger for conversion prediction
  • Added offline features and breakdowns (e.g., age, gender) for newly introduced video ads
  • Trained boosted trees whose leaf outputs were used as input features for logistic regression; the boosted-tree stage thus performs feature selection for the downstream learning model, i.e., logistic regression.
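The boosted-trees-into-logistic-regression scheme above can be sketched as follows. This is a minimal illustration on synthetic data with scikit-learn, not the production system; all model parameters here are illustrative.

```python
# Sketch: boosted trees as feature transform, logistic regression on top.
# Synthetic data and hyperparameters are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Stage 1: boosted trees. Each sample is encoded by the leaf it lands in
# within each tree, a learned, non-linear feature selection.
gbdt = GradientBoostingClassifier(n_estimators=50, max_depth=3, random_state=0)
gbdt.fit(X, y)
leaves = gbdt.apply(X)[:, :, 0]  # (n_samples, n_trees) leaf indices

# Stage 2: one-hot encode the leaf indices and train logistic regression.
enc = OneHotEncoder()
X_leaves = enc.fit_transform(leaves)
lr = LogisticRegression(max_iter=1000)
lr.fit(X_leaves, y)
print(lr.score(X_leaves, y))
```

In practice the two stages are trained on separate data to avoid overfitting the leaf encoding; the sketch omits that split for brevity.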

Noisy Image Deblurring [Project]

State-of-the-art single image deblurring techniques are sensitive to image noise. Even a small amount of noise, which is inevitable in low-light conditions, can dramatically degrade the quality of blur kernel estimation. We propose a new method for handling noise in blind image deconvolution based on new theoretical and practical insights. Our key observation is that applying a directional low-pass filter to the input image greatly reduces the noise level, while preserving the blur information in the direction orthogonal to the filter.
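The key observation can be illustrated with a simple directional low-pass filter: a 1-D Gaussian applied along a direction theta suppresses noise while leaving structure orthogonal to theta largely intact. This is only a sketch of the idea; the paper's actual filter-bank design and kernel estimation are not reproduced, and all parameters below are illustrative.

```python
# Sketch: 1-D Gaussian low-pass filtering along an arbitrary direction theta.
# Implemented by rotating the image, filtering along rows, and rotating back.
import numpy as np
from scipy.ndimage import convolve1d, rotate

def directional_lowpass(img, theta_deg, sigma=2.0):
    """Apply a 1-D Gaussian low-pass filter along direction theta (degrees)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    rot = rotate(img, theta_deg, reshape=False, mode='nearest')
    rot = convolve1d(rot, kernel, axis=1, mode='nearest')  # filter along rows
    return rotate(rot, -theta_deg, reshape=False, mode='nearest')

noisy = np.random.default_rng(0).normal(size=(64, 64))
smoothed = directional_lowpass(noisy, theta_deg=30.0)
print(noisy.std(), smoothed.std())  # noise variance drops after filtering
```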

  • [CVPR'13 oral] Handling Noise in Single Image Deblurring using Directional Filters. [PDF]
    Lin Zhong, Sunghyun Cho, Dimitris Metaxas, Sylvain Paris and Jue Wang.

Facial Expression Analysis

Inspired by the observation that only a few facial parts are active in expressing emotion (e.g., around the mouth and the eyes), we aim to discover the common patches, which are discriminative across all expressions, and the specific patches, which are discriminative for a particular expression only. A two-stage multi-task sparse learning (MTSL) framework is proposed to efficiently locate these discriminative patches.
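The shared-sparsity idea behind the common patches can be sketched with a joint L2,1 penalty, which zeroes entire feature rows across all tasks at once. The example below uses scikit-learn's MultiTaskLasso on synthetic data as a stand-in; it is not the paper's two-stage MTSL formulation, and all names and parameters are illustrative.

```python
# Sketch: multi-task sparse learning selects a shared set of features
# (standing in for facial patches) across several tasks (expressions).
# The L2,1 penalty of MultiTaskLasso zeroes whole feature rows jointly.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))        # 200 samples, 30 candidate "patches"
W_true = np.zeros((30, 4))            # 4 tasks share the same 5 active patches
W_true[:5] = rng.normal(size=(5, 4))
Y = X @ W_true + 0.01 * rng.normal(size=(200, 4))

mtl = MultiTaskLasso(alpha=0.1).fit(X, Y)
active = np.flatnonzero(np.abs(mtl.coef_).sum(axis=0))
print(active)  # indices of patches selected jointly across all tasks
```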

  • [CVPR'12] Learning Active Facial Patches for Expression Analysis. [PDF]
    Lin Zhong, Qingshan Liu, Peng Yang, Bo Liu, Junzhou Huang and Dimitris Metaxas.

Geometry Analysis for Papillary Muscles (High-Resolution CT)

We propose methods to extract the motion of papillary muscles from high-resolution CT images and to quantitatively characterize them by extracting spatio-temporal skeletons. The method first reconstructs and visualizes detailed models of the papillary muscles using a two-stage coarse-to-fine registration. To describe the models' shape and motion effectively and efficiently, high-level abstractions of the models, i.e., skeletons, are extracted with spatial and temporal constraints. Several skeleton-based indices are proposed to analyze the changes in model shape and motion during a heart cycle.

  • [ISBI'13] Papillary Muscles Analysis from High Resolution CT using Spatial-Temporal Skeleton Extraction. [PDF][Video]
    Lin Zhong, Shaoting Zhang, Mingchen Gao, Junzhou Huang, Zhen Qian, Dimitris N. Metaxas, Leon Axel.

Stereoscopic Video Synthesis

We present an automatic and robust framework to synthesize stereoscopic videos from casual 2D monocular videos. First, 3D geometry information (e.g., camera parameters, depth maps) is extracted from the 2D input video. Then a Bayesian-based View Synthesis (BVS) approach is proposed to render high-quality new virtual views for stereoscopic video despite noisy 3D geometry information. Extensive experiments on various videos demonstrate that BVS synthesizes more accurate views than other methods, and that our framework can generate high-quality 3D videos.
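The basic geometry of rendering a second eye's view from a depth map can be sketched with simple depth-image-based rendering: each pixel shifts horizontally by a disparity proportional to inverse depth. This illustrates only the warping step; the BVS approach in the paper additionally models the noise in the estimated geometry and fills disocclusions. Function names and parameters here are illustrative.

```python
# Sketch: forward-warp a left view into a right view using per-pixel
# disparity derived from depth (closer pixels shift more).
import numpy as np

def synthesize_right_view(left, depth, baseline=8.0):
    h, w = left.shape
    right = np.zeros_like(left)
    disparity = (baseline / depth).astype(int)  # inverse-depth disparity
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]
            if 0 <= xr < w:
                right[y, xr] = left[y, x]      # holes remain where no source maps
    return right

left = np.tile(np.arange(16, dtype=float), (16, 1))
depth = np.full((16, 16), 4.0)
right = synthesize_right_view(left, depth)  # uniform depth -> uniform 2-px shift
```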

  • [ISM'12] Towards Automatic Stereoscopic Video Synthesis from a Casual Monocular Video. [PDF]
    Lin Zhong, Sen Wang, Minwoo Park, Rodney Miller and Dimitris Metaxas.
Teaching
332:252, Programming Methodology I (C++)
332:351, Programming Methodology II (Data Structures)