
Zhihui Zhu
Assistant Professor
Computer Science and Engineering
The Ohio State University


583 Dreese Lab
2015 Neil Avenue
Columbus, OH 43210


Email: zhu.3440@osu.edu


I am looking for self-motivated students who are interested in data science, machine learning, and signal processing. If you are interested in joining my lab, please send me your CV, transcripts, and any other materials that demonstrate your qualifications.

Our group works on theoretical and computational aspects of data science and machine learning, with a focus on developing efficient methods for extracting useful information from large-scale and high-dimensional data, proving their correctness, and applying them to solve real-world problems. A few current projects include:

  • Machine learning: representation learning; generalization properties and implicit regularization; model (neural network) compression; deep neural networks for unsupervised learning and inverse problems;

  • Quantum information: statistical and algorithmic aspects of quantum information;

  • Optimization: nonconvex geometric analysis; distributed optimization; the design, analysis, and implementation of large-scale optimization algorithms for engineering problems.

News:

  • [August 2022] Our group joined the Department of Computer Science and Engineering at The Ohio State University.

  • [May 2022] On May 26-27, together with Jere's group, we had our annual Deep & Sparse Team meeting at the University of Denver to discuss project accomplishments and plans.

  • [May 2021] Invited to serve as a TPC member (area chair) at NeurIPS 2022.

  • [May 2021] One ICML’21 paper on robust subspace learning accepted. New paper released on understanding the behavior of classifiers in modern deep neural networks.

  • [May 2021] Our collaborative proposal (with Mike and Gongguo at CSM) on Structured Inference and Adaptive Measurement Design has been awarded by NSF!

  • [March 2021] Invited to serve as a TPC member (area chair) at NeurIPS 2021.

  • [Sep 2020] Our paper has been accepted to NeurIPS 2020 as a spotlight (top 4%). It characterizes implicit bias with discrepant learning rates and builds connections between over-parameterization, RPCA, and deep neural networks.

  • [Jun 2020] Our proposal (with Jere at JHU) ‘‘Collaborative Research: CIF: Small: Deep Sparse Models: Analysis and Algorithms’’ has been awarded by NSF!

  • [Jun 2020] Two papers about over-parameterization are on arXiv: one studies the benefits of over-realized models in dictionary learning; the other characterizes implicit bias with discrepant learning rates and builds connections between over-parameterization, RPCA, and deep neural networks.

  • [Feb 2020] Our paper on robust homography estimation has been accepted to CVPR 2020.

  • [Jan 2020] Co-organized with Qing and Shuyang, our two-session mini-symposium ‘‘Recent Advances in Optimization Methods for Signal Processing and Machine Learning’’ has been accepted by the inaugural SIAM Conference on Mathematics of Data Science. See you in Cincinnati, Ohio, in May!

  • [Jan 2020] Invited talk at Colorado School of Mines.

  • [Nov 2019] Our paper (with Xiao, Shixiang, Zengde, Qing, and Anthony) ‘‘Nonsmooth Optimization over Stiefel Manifold: Riemannian Subgradient Methods’’ is on arXiv. This work provides the first explicit convergence rate guarantees for a family of Riemannian subgradient methods when used to optimize nonsmooth functions (that are weakly convex in the Euclidean space) over the Stiefel manifold.

  • [Oct 2019] Attended the Northrop Grumman University Research Symposium, and presented our work on ‘‘Object Identification with Less Supervision’’.

  • [Oct 2019] Attended the Computational Imaging workshop at IMA, University of Minnesota, and presented our work on ‘‘A Linearly Convergent Method for Non-smooth Non-convex Optimization on Grassmannian with Applications to Robust Subspace and Dictionary Learning’’.

  • [Aug 2019] Our paper (with Xiao, Anthony, and Jason) ‘‘Incremental Methods for Weakly Convex Optimization’’ is on arXiv. This work provides the first convergence guarantees for incremental algorithms and their random-shuffling variants (including the incremental subgradient method, a workhorse of deep learning) in solving weakly convex optimization problems, which can be nonconvex and nonsmooth.