3D Computer Vision (3D Vision) techniques provide key solutions to various scene perception problems such as depth estimation from one or more images, camera/object pose estimation, localization, and 3D reconstruction of a scene. These solutions form a major part of many AI applications, including AR/VR, autonomous driving, and robotics. In this talk, I will first review several categories of 3D Vision problems and their challenges. For static scene perception, I will introduce several learning-based depth estimation methods such as PlaneRCNN and Neural RGBD, camera pose estimation methods including MapNet, as well as a few registration algorithms deployed in NVIDIA's products. I will then turn to more challenging real-world scenarios in which scenes contain non-stationary rigid changes, non-rigid motions, or varying appearance due to reflectance and lighting changes, whose view-dependent properties can cause scene reconstruction to fail. I will discuss several solutions to these problems and conclude by summarizing the future directions for 3D Vision research being pursued by NVIDIA's Learning and Perception Research (LPR) team.