Yang You
  • Doctor of Philosophy
  • Postdoctoral Researcher at Stanford University

About

  • Publications: 35
  • Reads: 5,593
  • Citations: 483
Introduction
Yang You currently works at the School of Software, Shanghai Jiao Tong University. Yang does research in Artificial Neural Networks, Software Engineering, and Computer Graphics. Their most recent publication is 'Relative CNN-RNN: Learning Relative Atmospheric Visibility from Images'.
Current institution
Stanford University
Current position
  • Postdoctoral Researcher

Publications (35)
Article
Deformable object manipulation stands as one of the most captivating yet formidable challenges in robotics. While previous techniques have predominantly relied on learning latent dynamics through demonstrations, typically represented as either particles or images, there exists a pertinent limitation: acquiring suitable demonstrations, especially fo...
Preprint
Vision foundation models, particularly the ViT family, have revolutionized image understanding by providing rich semantic features. However, despite their success in 2D comprehension, their ability to grasp 3D spatial relationships is still unclear. In this work, we evaluate and enhance the 3D awareness of ViT-based models. We begin by system...
Article
Object pose estimation constitutes a critical area within the domain of 3D vision. While contemporary state-of-the-art methods that leverage real-world pose annotations have demonstrated commendable performance, the procurement of such real training data incurs substantial costs. This paper focuses on a specific setting wherein only 3D CAD models a...
Article
Embedding Human and Articulated Object Interaction (HAOI) in 3D is an important direction for deeper human activity understanding. Different from previous works that use parametric and CAD models to represent humans and objects, in this work, we propose a novel 3D geometric primitive-based language to encode both humans and objects. Given our new...
Article
Various recent methods attempt to implement rotation-invariant 3D deep learning by replacing the input coordinates of points with relative distances and angles. Due to the incompleteness of these low-level features, they come at the expense of losing global information. In this paper, we propose the CRIN, namely Centrifugal Rotation-Invar...
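The CRIN abstract above contrasts the method with pipelines that replace raw point coordinates with relative distances and angles. As background only, the sketch below illustrates what such handcrafted rotation-invariant features look like and why they are invariant; the specific feature choice (distance to the centroid plus the angle between a point's offset and its nearest neighbour's offset) is a hypothetical example, not part of CRIN or of the works it cites.

```python
# Illustrative only: handcrafted rotation-invariant point features of the kind
# the abstract refers to (relative distances and angles), not the CRIN method.
import numpy as np

def pairwise_invariant_features(points: np.ndarray) -> np.ndarray:
    """Return per-point features built only from relative distances and angles.

    points: (N, 3) array of xyz coordinates. The features are unchanged under
    any global rotation of the input, but they discard absolute orientation,
    i.e. global pose information.
    """
    centroid = points.mean(axis=0)
    offsets = points - centroid                      # vectors from centroid to each point
    dists = np.linalg.norm(offsets, axis=1)          # rotation-invariant: lengths
    # Angle between each point's offset and its nearest neighbour's offset.
    d2 = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d2, np.inf)
    nn = d2.argmin(axis=1)
    cosang = np.einsum('ij,ij->i', offsets, offsets[nn]) / (
        dists * np.linalg.norm(offsets[nn], axis=1) + 1e-9)
    return np.stack([dists, cosang], axis=1)         # (N, 2) rotation-invariant features

# Rotating the cloud leaves the features (numerically) unchanged:
pts = np.random.randn(128, 3)
R, _ = np.linalg.qr(np.random.randn(3, 3))          # random orthogonal matrix
assert np.allclose(pairwise_invariant_features(pts),
                   pairwise_invariant_features(pts @ R.T), atol=1e-6)
```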
Preprint
Various recent methods attempt to implement rotation-invariant 3D deep learning by replacing the input coordinates of points with relative distances and angles. Due to the incompleteness of these low-level features, they come at the expense of losing global information. In this paper, we propose the CRIN, namely Centrifugal Rotation-Invar...
Preprint
Object pose estimation is an important topic in 3D vision. Though most current state-of-the-art methods that train on real-world pose annotations achieve good results, the cost of such real-world training data is too high. In this paper, we propose a novel method for sim-to-real pose estimation, which is effective on both instance-level and categor...
Preprint
This paper presents a general one-shot object localization algorithm called OneLoc. Current one-shot object localization or detection methods either rely on a slow exhaustive feature matching process or lack the ability to generalize to novel objects. In contrast, our proposed OneLoc algorithm efficiently finds the object center and bounding box si...
Preprint
Full-text available
In this paper, we tackle the problem of category-level 9D pose estimation in the wild, given a single RGB-D frame. Using supervised data of real-world 9D poses is tedious and error-prone, and such data also fails to generalize to unseen scenarios. Besides, category-level pose estimation requires a method to be able to generalize to unseen objects at test time,...
Article
Point cloud analysis without pose priors is very challenging in real applications, as the orientations of point clouds are often unknown. In this paper, we propose a brand new point-set learning framework PRIN, namely, Point-wise Rotation Invariant Network, focusing on rotation invariant feature extraction in point clouds analysis. We c...
Preprint
Pixel-level 2D object semantic understanding is an important topic in computer vision and could help machines deeply understand objects (e.g. functionality and affordance) in our daily life. However, most previous methods directly train on correspondences in 2D images, which is end-to-end but loses plenty of information in 3D space. In this paper,...
Article
Robotic picking of a diverse range of novel objects is a great challenge in dense clutter, in which objects are stacked together tightly. However, collecting a large-scale dataset with dense grasp labels is extremely time-consuming, and there is a huge gap between synthetic color and depth images and real images. In this paper, we explore suction based...
Article
Pixel-level 2D object semantic understanding is an important topic in computer vision and could help machines deeply understand objects (e.g. functionality and affordance) in our daily life. However, most previous methods directly train on correspondences in 2D images, which is end-to-end but loses plenty of information in 3D space. In this paper,...
Preprint
Full-text available
Detecting aligned 3D keypoints is essential in many scenarios such as object tracking, shape retrieval and robotics. However, it is generally hard to prepare a high-quality dataset for all types of objects due to the ambiguity of keypoints themselves. Meanwhile, current unsupervised detectors are unable to generate aligned keypoints with good coverag...
Preprint
Full-text available
Point cloud analysis without pose priors is very challenging in real applications, as the orientations of point clouds are often unknown. In this paper, we propose a brand new point-set learning framework PRIN, namely, Point-wise Rotation Invariant Network, focusing on rotation invariant feature extraction in point clouds analysis. We construct sph...
Article
Recently, plenty of deep learning methods have been proposed to handle point clouds. Almost all of them input the entire point cloud and ignore the information redundancy lying in point clouds. This paper addresses this problem by extracting the Reeb graph from point clouds, which is a much more informative and compact representation of point cloud...
Preprint
Full-text available
Keypoint detection is an essential component for the object registration and alignment. However, previous works mainly focused on how to register keypoints under arbitrary rigid transformations. Differently, in this work, we reckon keypoints under an information compression scheme to represent the whole object. Based on this, we propose UKPGAN, an...
Preprint
Full-text available
3D object detection has attracted much attention thanks to the advances in sensors and deep learning methods for point clouds. Current state-of-the-art methods like VoteNet regress direct offset towards object centers and box orientations with an additional Multi-Layer-Perceptron network. Both their offset and orientation predictions are not accura...
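The abstract above critiques the baseline practice of regressing center offsets and box orientations with a plain MLP head. As a rough illustration of that baseline style only, and not of the paper's proposed method or of VoteNet's actual code, here is a forward-only toy head that maps per-point features to a 3-D offset and a heading angle; all dimensions and names are made up for the example.

```python
# Illustrative only: a tiny forward-only MLP regression head of the baseline
# style the abstract describes (per-point features -> 3-D center offset plus a
# heading angle). It sketches the design being critiqued, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def mlp_offset_head(feat, hidden=(128, 64)):
    """Map (N, C) per-point features to (N, 3) center offsets and (N,) headings."""
    x, in_dim = feat, feat.shape[1]
    for out_dim in hidden:
        W = rng.standard_normal((in_dim, out_dim)) * 0.01   # untrained toy weights
        x = np.maximum(x @ W, 0.0)                          # linear layer + ReLU
        in_dim = out_dim
    W_out = rng.standard_normal((in_dim, 4)) * 0.01
    out = x @ W_out                                          # 3 offset dims + 1 heading
    return out[:, :3], out[:, 3]

feats = rng.standard_normal((1024, 256))                     # hypothetical per-point features
offsets, heading = mlp_offset_head(feats)
print(offsets.shape, heading.shape)                          # (1024, 3) (1024,)
```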
Chapter
Semantic understanding of 3D objects is crucial in many applications such as object manipulation. However, it is hard to give a universal definition of point-level semantics that everyone would agree on. We observe that people have a consensus on semantic correspondences between two areas from different objects, but are less certain about the exact...
Article
Deep reinforcement learning (DRL) has gained a lot of attention in recent years, and has been proven to be able to play Atari games and Go at or above human levels. However, those games are assumed to have a small fixed number of actions and could be trained with a simple CNN network. In this paper, we study a special class of popular Asian card ga...
Preprint
Full-text available
Visual semantic correspondence is an important topic in computer vision and could help machines understand objects in our daily life. However, most previous methods directly train on correspondences in 2D images, which is end-to-end but loses plenty of information in 3D space. In this paper, we propose a new method for predicting semantic correspond...
Article
Full-text available
Point cloud analysis without pose priors is very challenging in real applications, as the orientations of point clouds are often unknown. In this paper, we propose a brand new point-set learning framework PRIN, namely, Pointwise Rotation-Invariant Network, focusing on rotation-invariant feature extraction in point clouds analysis. We construct sphe...
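The abstract above is cut off at "We construct sphe...". Assuming this refers to some spherical representation of the input point cloud, the sketch below shows only the generic step of converting points to spherical coordinates and binning them into an occupancy grid; it is an assumption for illustration, not the construction actually proposed in the PRIN paper.

```python
# Illustrative only: mapping a point cloud into spherical coordinates and
# binning it on a (radius, polar, azimuth) grid. This is a generic spherical
# representation, not the specific construction proposed in PRIN.
import numpy as np

def spherical_bins(points: np.ndarray, n_r=4, n_theta=16, n_phi=32) -> np.ndarray:
    """Histogram an (N, 3) point cloud into an (n_r, n_theta, n_phi) occupancy grid."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(np.clip(z / (r + 1e-9), -1.0, 1.0))    # polar angle in [0, pi]
    phi = np.arctan2(y, x) + np.pi                           # azimuth in [0, 2*pi]
    r_idx = np.minimum((r / (r.max() + 1e-9) * n_r).astype(int), n_r - 1)
    t_idx = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    p_idx = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
    grid = np.zeros((n_r, n_theta, n_phi))
    np.add.at(grid, (r_idx, t_idx, p_idx), 1.0)              # occupancy counts
    return grid

grid = spherical_bins(np.random.randn(2048, 3))
print(grid.shape, grid.sum())                                # (4, 16, 32) 2048.0
```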
Preprint
Detecting 3D objects keypoints is of great interest to the areas of both graphics and computer vision. There have been several 2D and 3D keypoint datasets aiming to address this problem in a data-driven way. These datasets, however, either lack scalability or bring ambiguity to the definition of keypoints. Therefore, we present KeypointNet: the fir...
Preprint
Full-text available
Fine-grained semantic understanding of 3D objects is crucial in many applications such as object manipulation. However, it is hard to give a universal definition of point-level semantics that everyone would agree on. We observe that people are pretty sure about semantic correspondences between two areas from different objects, but less certain abou...
Preprint
Full-text available
Deep reinforcement learning (DRL) has gained a lot of attention in recent years, and has been proven to be able to play Atari games and Go at or above human levels. However, those games are assumed to have a small fixed number of actions and could be trained with a simple CNN network. In this paper, we study a special class of popular Asian card ga...
Preprint
Full-text available
In recent years, point clouds have attracted considerable research interest with the development of depth sensors. Due to different layouts of objects, the orientation of point clouds is often unknown in real applications. In this paper, we propose a new point-set learning framework named Pointwise Rotation-Invariant Network (PRIN), focusing on the rotation...
Article
Full-text available
We propose a deep learning approach for directly estimating relative atmospheric visibility from outdoor photos, which does not rely on weather images or data that require expensive sensing or custom capture. Our data-driven approach capitalizes on a large collection of internet images to learn rich scene and visibility varieties. The relative CNN-...
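The abstract above stresses that visibility is learned as a relative quantity over image pairs. As a generic illustration of how relative attributes are often trained, the sketch below implements a pairwise margin-ranking loss over hypothetical per-image visibility scores; the exact loss and architecture of the Relative CNN-RNN are not given in the abstract, so this is an assumption, not the paper's formulation.

```python
# Illustrative only: a generic pairwise margin-ranking loss for learning a
# *relative* attribute (here, visibility scores for two photos). This is a
# common formulation for relative estimation, not the paper's exact loss.
import numpy as np

def margin_ranking_loss(score_a: np.ndarray, score_b: np.ndarray,
                        label: np.ndarray, margin: float = 1.0) -> float:
    """label = +1 if image A is more visible than image B, -1 otherwise.

    Penalizes pairs whose predicted ordering violates the label by less than
    `margin`: loss = mean(max(0, -label * (score_a - score_b) + margin)).
    """
    return float(np.mean(np.maximum(0.0, -label * (score_a - score_b) + margin)))

# Toy usage with hypothetical per-image visibility scores from some backbone:
score_a = np.array([2.3, 0.1, 1.5])   # predicted visibility of image A in each pair
score_b = np.array([1.0, 0.8, 1.4])   # predicted visibility of image B in each pair
labels  = np.array([1.0, -1.0, 1.0])  # human judgement of which photo is clearer
print(margin_ranking_loss(score_a, score_b, labels))         # ~0.4
```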
