Shengjie Wang

Assistant Professor @ Computer Science, NYU Shanghai
shengjie.wang [AT] nyu.edu

Biography

I'm an assistant professor at NYU Shanghai (since Fall 2023). I received my Ph.D. in Computer Science and Engineering from the University of Washington, advised by Jeffrey Bilmes, and my bachelor's degree from the University of Illinois at Urbana-Champaign. I'm broadly interested in machine learning, including deep learning, AI for science, submodular optimization, computer vision, and NLP.

Openings

Multiple Ph.D. and RA positions are available. Feel free to drop me an email if you are interested. Ph.D. applicants can find more information on the NYU Courant and Tandon Ph.D. application websites.

Publications

  • [RecSys'23] Full Index Deep Retrieval: End-to-End User and Item Structures for Cold-start and Long-tail Item Recommendation
    Z Gong, X Wu, L Chen, Z Zheng, S Wang, A Xu, C Wang, F Wu
  • [ICML'23] Machine Learning Force Fields with Data Cost Aware Training
    A Bukharin, T Liu, S Wang, S Zuo, W Gao, W Yan, T Zhao [pdf]
  • [WSDM'23] DGRec: Graph Neural Network for Recommendation with Diversified Embedding Generation
    L Yang, S Wang, Y Tao, J Sun, X Liu, PS Yu, T Wang [link]
  • [NeurIPS'22] Retrospective Adversarial Replay for Continual Learning
    L Kumari, S Wang, T Zhou, JA Bilmes [pdf]
  • [NeurIPS'21] Constrained Robust Submodular Partitioning
    S Wang*, T Zhou*, C Lavania, J Bilmes [link]
  • [AISTATS'21] Curriculum Learning by Optimizing Learning Dynamics
    S Wang*, T Zhou*, J Bilmes [pdf]
  • [ICLR'21] Robust Curriculum Learning: From Clean Label Detection to Noisy Label Self-Correction
    S Wang*, T Zhou*, J Bilmes [pdf]
  • [NeurIPS'20] Guided Learning by Dynamic Instance Hardness
    S Wang*, T Zhou*, J Bilmes [pdf]
  • [ICML'20] Time-Consistent Self-Supervision for Semi-Supervised Learning
    S Wang*, T Zhou*, J Bilmes [pdf]
  • [ICML'19] Bias Also Matters: Bias Attribution for Deep Neural Network Explanation
    S Wang*, T Zhou*, J Bilmes [pdf]
  • [ICML'19] Jumpout: Improved Dropout for Deep Neural Networks with ReLUs
    S Wang*, T Zhou*, J Bilmes [pdf]
  • [AISTATS'19] Fixing Mini-batch Sequences with Hierarchical Robust Partitioning
    S Wang, W Bai, C Lavania, J Bilmes [pdf]
  • [NeurIPS'18] Diverse Ensemble Evolution: Curriculum Data-Model Marriage
    T Zhou, S Wang, JA Bilmes [pdf]
  • [ICLR'17] Training Compressed Fully-Connected Networks with a Density-Diversity Penalty
    S Wang, H Cai, J Bilmes, W Noble [pdf]
  • [ICLR'17] Do Deep Convolutional Nets Really Need to Be Deep and Convolutional?
    G Urban, KJ Geras, SE Kahou, O Aslan, S Wang, A Mohamed, M Philipose, M Richardson, R Caruana [pdf]
  • [ICML'16] Analysis of Deep Neural Networks with Extended Data Jacobian Matrix
    S Wang, A Mohamed, R Caruana, J Bilmes, M Philipose, M Richardson, KJ Geras, G Urban, O Aslan [pdf]
  • [ISMB'16] Faster and More Accurate Graphical Model Identification of Tandem Mass Spectra Using Trellises
    S Wang, JT Halloran, JA Bilmes, WS Noble [pdf]
  • [NeurIPS'15] Mixed Robust/Average Submodular Partitioning: Fast Algorithms, Guarantees, and Applications
    K Wei, RK Iyer, S Wang, W Bai, JA Bilmes [pdf]