Welcome To My Home Page

About Me

I’m a senior student in the Turing Class at the School of Computer Science, Peking University. I’m interested in computer vision for robotics.

Currently, I am a research intern at CCVL, supervised by Bloomberg Distinguished Professor Alan Yuille. Before that, I was a research intern at Hyperplane Lab, Center on Frontiers of Computing Studies, advised by Prof. Hao Dong. I’m also collaborating with Dr. Kaichun Mo (NVIDIA) on articulated object manipulation.

You can find my CV here: Chuanruo Ning’s CV

Research

  • Zero-shot Category-level 2D Part Segmentation from a Single 3D Annotation
    • Chuanruo Ning, Jiawei Peng, Yaoyao Liu, Jiahao Wang, Yining Sun, Alan Yuille, Adam Kortylewski, Angtian Wang
    • We achieve zero-shot object part segmentation that requires only one 3D annotation to define the parts. Once trained, our framework directly generalizes to any part definition without any adaptation. We establish 3D-to-3D correspondences for part transfer across meshes and 3D-to-2D correspondences for render-and-compare-based part detection.
    • Under review
  • Where2Explore: Few-shot Affordance Learning for Unseen Novel Categories of Articulated Objects
    • Chuanruo Ning, Ruihai Wu, Haoran Lu, Kaichun Mo, Hao Dong
    • We explore the cross-category few-shot learning task, in which the model must effectively explore novel categories with minimal interactions on a limited number of instances. We propose ‘Similarity’ to measure the semantic similarity between local geometries across different categories, enabling the model to perform few-shot learning on novel categories by discovering uncertain yet important areas.
    • Paper / Project Page
    • NeurIPS 2023
  • Learning Environment-Aware Affordance for 3D Articulated Object Manipulation under Occlusion
    • Ruihai Wu*, Kai Cheng*, Yan Shen, Chuanruo Ning, Guanqi Zhan, Hao Dong
    • We propose an environment-aware affordance framework that incorporates both object-level actionable priors and environment constraints. A novel contrastive affordance learning framework is introduced, which is capable of training on scenes containing a single occluder and generalizing to scenes with complex occluder combinations.
    • Paper / Project Page
    • NeurIPS 2023
  • Learning Foresightful Dense Visual Affordance for Deformable Object Manipulation
    • Ruihai Wu*, Chuanruo Ning*, Hao Dong (* denotes equal contribution)
    • We propose to learn a dense visual representation for deformable object manipulation that reveals the dynamic and kinematic properties of deformable objects. By training in a reversed, step-by-step manner, we make the representation aware of the ‘value’ of states, thus finding the globally optimal action for deformable object manipulation tasks.
    • Paper / Project Page / Video / Video(real-world)
    • ICCV 2023

Talks

Title: Part Detection via Render-and-compare Method
Date: 2023-08-18
Location: Malone Hall, Johns Hopkins University, Baltimore, United States
Slides

Title: Occlusion Reasoning for Manipulation
Date: 2022-08-04
Location: Center on Frontiers of Computing Studies, Beijing, China
Slides

Title: In-hand Reorientation
Date: 2022-02-20
Location: Center on Frontiers of Computing Studies, Beijing, China
Slides

Services

  • Program Committee: Annual AAAI Conference on Artificial Intelligence (AAAI 2024)
  • Reviewer: Conference on Computer Vision and Pattern Recognition (CVPR 2024)

Awards and Honors

  • 2023: Huatai Securities Scholarship
  • 2023: Peking University Merit Student
  • 2022: John Hopcroft Scholarship
  • 2022: Peking University Dean’s Scholarship
  • 2020: Peking University Freshman Scholarship