Displaying 1-50 out of 61 total
Physically guided liquid surface modeling from videos
Found in: ACM Transactions on Graphics (TOG)
By Greg Turk, Huamin Wang, Miao Liao, Qing Zhang, Ruigang Yang
Issue Date:July 2009
pp. 1-2
We present an image-based reconstruction framework to model real water scenes captured by stereoscopic video. In contrast to many image-based modeling techniques that rely on user interaction to obtain high-quality 3D models, we instead apply automatically...
     
Stereo Matching with Color-Weighted Correlation, Hierarchical Belief Propagation, and Occlusion Handling
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Qingxiong Yang, Liang Wang, Ruigang Yang, Henrik Stewénius, David Nistér
Issue Date:March 2009
pp. 492-504
In this paper, we formulate a stereo matching algorithm with careful handling of disparity, discontinuity and occlusion. The algorithm works with a global matching stereo model based on an energy-minimization framework. The global energy contains two terms...
 
Automatic Natural Video Matting with Depth
Found in: Computer Graphics and Applications, Pacific Conference on
By Oliver Wang, Jonathan Finger, Qingxiong Yang, James Davis, Ruigang Yang
Issue Date:November 2007
pp. 469-472
Video matting is the process of taking a sequence of frames, isolating the foreground, and replacing the background in each frame. We look at existing single-frame matting techniques and present a method that improves upon them by adding depth information ...
 
Spatial-Depth Super Resolution for Range Images
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Qingxiong Yang, Ruigang Yang, James Davis, David Nister
Issue Date:June 2007
pp. 1-8
We present a new post-processing step to enhance the resolution of range images. Using one or two registered and potentially high-resolution color images as reference, we iteratively refine the input low-resolution range image, in terms of both its spatial...
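
Illustrative sketch (not the paper's exact iterative refinement): one common way a registered high-resolution color image can guide depth upsampling is a joint-bilateral-style weighting, outlined below. The function name, parameters, and single-pass formulation are assumptions for illustration only.

import numpy as np

def guided_depth_upsample(depth_lo, color_hi, sigma_s=1.0, sigma_r=0.1, radius=2):
    # Weighted average of nearby low-resolution depth samples for every
    # high-resolution pixel; weights combine spatial closeness (measured in
    # low-resolution coordinates) with color similarity in the guide image.
    # sigma_r assumes guide colors scaled to [0, 1].
    h, w = depth_lo.shape
    H, W = color_hi.shape[:2]
    guide = color_hi.astype(np.float64)
    sy, sx = h / H, w / W                      # high-res -> low-res scale
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y * sy, x * sx            # position on the low-res grid
            y0 = min(int(round(cy)), h - 1)
            x0 = min(int(round(cx)), w - 1)
            num = den = 0.0
            for qy in range(max(y0 - radius, 0), min(y0 + radius + 1, h)):
                for qx in range(max(x0 - radius, 0), min(x0 + radius + 1, w)):
                    ws = np.exp(-((qy - cy) ** 2 + (qx - cx) ** 2) / (2 * sigma_s ** 2))
                    gy = min(int(round(qy / sy)), H - 1)   # guide pixel nearest
                    gx = min(int(round(qx / sx)), W - 1)   # to this depth sample
                    diff = guide[y, x] - guide[gy, gx]
                    wr = np.exp(-float(np.dot(diff, diff)) / (2 * sigma_r ** 2))
                    num += ws * wr * depth_lo[qy, qx]
                    den += ws * wr
            out[y, x] = num / den
    return out

A practical implementation would vectorize the loops and, as the abstract describes, iterate the refinement rather than stop after a single pass.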
 
Stereo Matching with Color-Weighted Correlation, Hierarchical Belief Propagation and Occlusion Handling
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Qingxiong Yang, Liang Wang, Ruigang Yang, Henrik Stewenius, David Nister
Issue Date:June 2006
pp. 2347-2354
In this paper, we formulate an algorithm for the stereo matching problem with careful handling of disparity, discontinuity and occlusion. The algorithm works with a global matching stereo model based on an energy-minimization framework. The global energy ...
 
3D Imaging Techniques and Multimedia Applications [Guest editor's introduction]
Found in: IEEE MultiMedia
By Ruzena Bajcsy, Ruigang Yang, Pietro Zanuttigh, Cha Zhang
Issue Date:January 2013
pp. 14-16
With the advances in sensing, transmission, and visualization technology, 3D information has become increasingly incorporated into real-world applications, from architecture, entertainment, and manufacturing to security. One of the fundamental requirements...
 
An experimental study of pupil constriction for liveness detection
Found in: 2013 IEEE Workshop on Applications of Computer Vision (WACV)
By Xinyu Huang, Changpeng Ti, Qi-zhen Hou, Alade Tokuta, Ruigang Yang
Issue Date:January 2013
pp. 252-258
As iris recognition systems have been deployed in many security areas, liveness detection that can distinguish between real iris patterns and fake ones becomes an important module. Most existing algorithms focus on the appearance difference between real an...
 
Video Stereolization: Combining Motion Analysis with User Interaction
Found in: IEEE Transactions on Visualization and Computer Graphics
By Miao Liao, Jizhou Gao, Ruigang Yang, Minglun Gong
Issue Date:July 2012
pp. 1079-1088
We present a semiautomatic system that converts conventional videos into stereoscopic videos by combining motion analysis with user interaction, aiming to transfer as much labeling work as possible from the user to the computer. In addition to the widely u...
 
See-through Image Enhancement through Sensor Fusion
Found in: 2012 IEEE International Conference on Multimedia and Expo (ICME)
By Bo Fu, Mao Ye, Ruigang Yang, Cha Zhang
Issue Date:July 2012
pp. 687-692
Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen. The purpose of such setups is to enable two-way video teleconferencing that maintains eye-contact. However, the image from the see-through camera...
 
Edge-preserving photometric stereo via depth fusion
Found in: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
By Qing Zhang, Mao Ye, Ruigang Yang, Y. Matsushita, B. Wilburn, Huimin Yu
Issue Date:June 2012
pp. 2472-2479
We present a sensor fusion scheme that combines active stereo with photometric stereo. Aiming at capturing full-frame depth for dynamic scenes at a minimum of three lighting conditions, we formulate an iterative optimization scheme that (1) adaptively adju...
 
Accurate 3D pose estimation from a single depth image
Found in: Computer Vision, IEEE International Conference on
By Mao Ye, Xianwang Wang, Ruigang Yang, Liu Ren, Marc Pollefeys
Issue Date:November 2011
pp. 731-738
This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration...
 
A novel see-through screen based on weave fabrics
Found in: Multimedia and Expo, IEEE International Conference on
By Cha Zhang, Ruigang Yang, Tim Large, Zhengyou Zhang
Issue Date:July 2011
pp. 1-6
See-through screens (STS) have found important applications in remote collaboration systems to enhance non-verbal communication and gaze awareness. Existing STS designs often sacrifice the display quality significantly, rendering low-contrast images that d...
 
Global stereo matching leveraged by sparse ground control points
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Liang Wang, Ruigang Yang
Issue Date:June 2011
pp. 3033-3040
We present a novel global stereo model that makes use of constraints from points with known depths, i.e., the Ground Control Points (GCPs) as referred to in stereo literature. Our formulation explicitly models the influences of GCPs in a Markov Random Fiel...
 
Interreflection removal for photometric stereo by using spectrum-dependent albedo
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Miao Liao, Xinyu Huang, Ruigang Yang
Issue Date:June 2011
pp. 689-696
We present a novel method that can separate m-bounced light and remove the interreflections in a photometric stereo setup. Under the assumption of a uniformly colored Lambertian surface, the intensity of a point in the scene is the sum of 1-bounced light t...
 
Reliability Fusion of Time-of-Flight Depth and Stereo Geometry for High Quality Depth Maps
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Jiejie Zhu, Liang Wang, Ruigang Yang, James E. Davis, Zhigeng Pan
Issue Date:July 2011
pp. 1400-1414
Time-of-flight range sensors have error characteristics, which are complementary to passive stereo. They provide real-time depth estimates in conditions where passive stereo does not work well, such as on white walls. In contrast, these sensors are noisy a...
 
Learning 3D shape from a single facial image via non-linear manifold embedding and alignment
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Xianwang Wang, Ruigang Yang
Issue Date:June 2010
pp. 414-421
The 3D reconstruction of a face from a single frontal image is an ill-posed problem. This is further accentuated when the face image is captured under different poses and/or complex illumination conditions. In this paper, we aim to solve the shape recovery...
 
Color calibration of multi-projector displays through automatic optimization of hardware settings
Found in: Computer Vision and Pattern Recognition Workshop
By R.M. Steele, Mao Ye, Ruigang Yang
Issue Date:June 2009
pp. 55-60
We describe a system that performs automatic, camera-based photometric projector calibration by adjusting hardware settings (e.g. brightness, contrast, etc.). The approach has two basic advantages over software-correction methods. First, there is no softwa...
 
Joint depth and alpha matte optimization via fusion of stereo and time-of-flight sensor
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Jiejie Zhu, Miao Liao, Ruigang Yang, Zhigeng Pan
Issue Date:June 2009
pp. 453-460
We present a new approach to iteratively estimate both high-quality depth map and alpha matte from a single image or a video sequence. Scene depth, which is invariant to illumination changes, color similarity and motion ambiguity, provides a natural and ro...
 
Image deblurring for less intrusive iris capture
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Xinyu Huang, Liu Ren, Ruigang Yang
Issue Date:June 2009
pp. 1558-1565
For most iris capturing scenarios, captured iris images could easily blur when the user is out of the depth of field (DOF) of the camera, or when he or she is moving. The common solution is to let the user try the capturing process again as the quality of ...
 
Spatial-Temporal Fusion for High Accuracy Depth Maps Using Dynamic MRFs
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Jiejie Zhu, Liang Wang, Jizhou Gao, Ruigang Yang
Issue Date:May 2010
pp. 899-909
Time-of-flight range sensors and passive stereo have complementary characteristics in nature. To fuse them to get high accuracy depth maps varying over time, we extend traditional spatial MRFs to dynamic MRFs with temporal coherence. This new model allows ...
 
Estimating pose and illumination direction for frontal face synthesis
Found in: Computer Vision and Pattern Recognition Workshop
By Xinyu Huang, Xianwang Wang, Jizhou Gao, Ruigang Yang
Issue Date:June 2008
pp. 1-6
Face pose and illumination estimation is an important pre-processing step in many face analysis problems. In this paper, we present a new method to estimate the face pose and illumination direction from one single image. The basic idea is to compare the re...
 
Fusion of time-of-flight depth and stereo for high accuracy depth maps
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Jiejie Zhu, Liang Wang, Ruigang Yang, James Davis
Issue Date:June 2008
pp. 1-8
Time-of-flight range sensors have error characteristics which are complementary to passive stereo. They provide real time depth estimates in conditions where passive stereo does not work well, such as on white walls. In contrast, these sensors are noisy an...
 
Stereoscopic inpainting: Joint color and depth completion from stereo images
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Liang Wang, Hailin Jin, Ruigang Yang, Minglun Gong
Issue Date:June 2008
pp. 1-8
We present a novel algorithm for simultaneous color and depth inpainting. The algorithm takes stereo images and estimated disparity maps as input and fills in missing color and depth information introduced by occlusions or object removal. We first complete...
 
Robust and Accurate Visual Echo Cancelation in a Full-duplex Projector-Camera System
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Miao Liao, Ruigang Yang, Zhengyou Zhang
Issue Date:October 2008
pp. 1831-1840
In this paper, we study the problem of visual echo cancelation in a full-duplex projector-camera system...
 
Real-Time Visibility-Based Fusion of Depth Maps
Found in: Computer Vision, IEEE International Conference on
By Paul Merrell, Amir Akbarzadeh, Liang Wang, Philippos Mordohai, Jan-Michael Frahm, Ruigang Yang, David Nister, Marc Pollefeys
Issue Date:October 2007
pp. 1-8
We present a viewpoint-based approach for the quick fusion of multiple stereo depth maps. Our method selects depth estimates for each pixel that minimize violations of visibility constraints and thus remove errors and inconsistencies from the depth maps to...
 
Examplar-based Shape from Shading
Found in: 3D Digital Imaging and Modeling, International Conference on
By Xinyu Huang, Jizhou Gao, Liang Wang, Ruigang Yang
Issue Date:August 2007
pp. 349-356
Traditional Shape-from-Shading (SFS) techniques aim to solve an under-constrained problem: estimating a depth map from a single image. The results are usually brittle for real images containing detailed shapes. Inspired by recent advances in texture synth...
 
Space-Time Light Field Rendering
Found in: IEEE Transactions on Visualization and Computer Graphics
By Huamin Wang, Mingxuan Sun, Ruigang Yang
Issue Date:July 2007
pp. 697-710
In this paper, we propose a novel framework called space-time light field rendering, which allows continuous exploration of a dynamic scene in both space and time. Compared to e...
 
BRDF Invariant Stereo Using Light Transport Constancy
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Liang Wang, Ruigang Yang, James E. Davis
Issue Date:September 2007
pp. 1616-1626
Nearly all existing methods for stereo reconstruction assume that scene reflectance is Lambertian and make use of brightness constancy as a matching invariant. We introduce a new invariant for stereo reconstruction called light transport constancy (LTC), w...
 
Toward the Light Field Display: Autostereoscopic Rendering via a Cluster of Projectors
Found in: IEEE Transactions on Visualization and Computer Graphics
By Ruigang Yang, Xinyu Huang, Sifang Li, Christopher Jaynes
Issue Date:January 2008
pp. 84-96
Ultimately, a display device should be capable of reproducing the visual effects observed in reality. In this paper we introduce an autostereoscopic display that uses a scalable array of digital light projectors and a p...
 
Restoring 2D Content from Distorted Documents
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Michael S. Brown, Mingxuan Sun, Ruigang Yang, Lin Yun, W. Brent Seales
Issue Date:November 2007
pp. 1904-1916
This paper presents a framework to restore the 2D content printed on documents in the presence of geometric distortion and non-uniform illumination. Compared with text-based document imaging approaches that correct distortion to a level necessary to obtain ...
 
Flexible Pixel Compositor for Plug-and-Play Multi-Projector Displays
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Ruigang Yang, Daniel R. Rudolf, Vijai Raghunathan
Issue Date:June 2007
pp. 1-2
No summary available.
 
Light Fall-off Stereo
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Miao Liao, Liang Wang, Ruigang Yang, Minglun Gong
Issue Date:June 2007
pp. 1-8
We present light fall-off stereo (LFS), a new method for computing depth from scenes beyond Lambertian reflectance and texture. LFS takes a number of images from a stationary camera as the illumination source moves away from the scene. Based on the inverse sq...
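
Illustrative sketch (not the paper's full algorithm): under a point-light, inverse-square assumption, two intensity measurements of the same point taken before and after the light moves away by a known amount already determine its distance. The names below are hypothetical.

import math

def depth_from_falloff(i1, i2, delta):
    # Inverse-square model: observed intensity I = rho / d**2 for a point
    # light at distance d. With intensity i1 at distance d and i2 after the
    # light has moved delta further away,
    #   i1 / i2 = ((d + delta) / d) ** 2  =>  d = delta / (sqrt(i1 / i2) - 1).
    return delta / (math.sqrt(i1 / i2) - 1.0)

# Example: the light moves 0.5 m further and the intensity drops from 1.0 to
# 0.64; sqrt(1.0 / 0.64) = 1.25, so the point is 0.5 / 0.25 = 2.0 m away.
print(depth_from_falloff(1.0, 0.64, 0.5))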
 
Robust and Accurate Visual Echo Cancelation in a Full-Duplex Projector-Camera System
Found in: Computer Vision and Pattern Recognition Workshop
By Miao Liao, Mingxuan Sun, Ruigang Yang, Zhengyou Zhang
Issue Date:June 2006
pp. 10
We developed a comprehensive set of techniques to address the "visual echo" problem in a full-duplex projector-camera system. A calibration procedure records the geometric and photometric transfer between the projector and camera in a look-up table. With th...
 
How Far Can We Go with Local Optimization in Real-Time Stereo Matching
Found in: 3D Data Processing Visualization and Transmission, International Symposium on
By Liang Wang, Mingwei Gong, Minglun Gong, Ruigang Yang
Issue Date:June 2006
pp. 129-136
Applications such as robot navigation and augmented reality require high-accuracy dense disparity maps in real time and online. Due to time constraints, most real-time stereo applications rely on local winner-take-all optimization in the disparity computati...
 
High-Quality Real-Time Stereo Using Adaptive Cost Aggregation and Dynamic Programming
Found in: 3D Data Processing Visualization and Transmission, International Symposium on
By Liang Wang, Miao Liao, Minglun Gong, Ruigang Yang, David Nister
Issue Date:June 2006
pp. 798-805
We present a stereo algorithm that achieves high quality results while maintaining real-time performance. The key idea is simple: we introduce an adaptive aggregation step in a dynamic-programming (DP) stereo framework. The per-pixel matching cost is aggre...
 
BRDF Invariant Stereo Using Light Transport Constancy
Found in: Computer Vision, IEEE International Conference on
By James E. Davis, Ruigang Yang, Liang Wang
Issue Date:October 2005
pp. 436-443
Nearly all existing methods for stereo reconstruction assume that scene reflectance is Lambertian, and make use of color constancy as a matching invariant. We introduce a new invariant for stereo reconstruction called Light Transport Constancy, which allow...
 
Geometric and Photometric Restoration of Distorted Documents
Found in: Computer Vision, IEEE International Conference on
By Mingxuan Sun, Ruigang Yang, Yun Lin, George Landon, Brent Seales, Michael S. Brown
Issue Date:October 2005
pp. 1117-1123
We present a system to restore the 2D content printed on distorted documents. Our system works by acquiring a 3D scan of the document's surface together with a high-resolution image. Using the 3D surface information and the 2D image, we can ameliorate unwa...
 
Immersive Electronic Books for Surgical Training
Found in: IEEE Multimedia
By Greg Welch, Andrei State, Adrian Ilie, Kok-Lim Low, Anselmo Lastra, Bruce Cairns, Herman Towles, Henry Fuchs, Ruigang Yang, Sascha Becker, Dan Russo, Jesse Funaro, Andries van Dam
Issue Date:July 2005
pp. 22-35
Immersive electronic books (IEBooks) for surgical training will let surgeons explore previous surgical procedures in 3D. The authors describe the techniques and tools for creating an IEBook.
 
Image-Gradient-Guided Real-Time Stereo on Graphics Hardware
Found in: 3D Digital Imaging and Modeling, International Conference on
By Minglun Gong, Ruigang Yang
Issue Date:June 2005
pp. 548-555
We present a real-time correlation-based stereo algorithm with improved accuracy. Encouraged by the success of recent stereo algorithms that aggregate the matching cost based on color segmentation, a novel image-gradient-guided cost aggregation scheme is p...
 
Camera-Based Calibration Techniques for Seamless Multiprojector Displays
Found in: IEEE Transactions on Visualization and Computer Graphics
By Michael Brown, Aditi Majumder, Ruigang Yang
Issue Date:March 2005
pp. 193-206
Multiprojector, large-scale displays are used in scientific visualization, virtual reality, and other visually intensive applications. In recent years, a number of camera-based computer vision techniques have been proposed to register the geometry and colo...
 
Eye Gaze Correction with Stereovision for Video-Teleconferencing
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Ruigang Yang, Zhengyou Zhang
Issue Date:July 2004
pp. 956-960
The lack of eye contact in desktop video teleconferencing substantially reduces the effectiveness of video contents. While expensive and bulky hardware is available on the market to correct eye gaze, researchers have be...
 
Dealing with Textureless Regions and Specular Highlights-A Progressive Space Carving Scheme Using a Novel Photo-consistency Measure
Found in: Computer Vision, IEEE International Conference on
By Ruigang Yang, Marc Pollefeys, Greg Welch
Issue Date:October 2003
pp. 576
We present two extensions to the Space Carving framework. The first is a progressive scheme to better reconstruct surfaces lacking sufficient textures. The second is a novel photo-consistency measure that is valid for both specular and diffuse surfaces, un...
 
Multi-Resolution Real-Time Stereo on Commodity Graphics Hardware
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Ruigang Yang, Marc Pollefeys
Issue Date:June 2003
pp. 211
In this paper, a stereo algorithm suitable for implementation on commodity graphics hardware is presented. This is important since it frees up the main processor for other tasks, including high-level interpretation of the stereo results. Our algorit...
 
Combining Approximate Geometry with View-Dependent Texture Mapping — A Hybrid Approach to 3D Video Teleconferencing
Found in: Graphics, Patterns and Images, SIBGRAPI Conference on
By Celso S. Kurashima, Ruigang Yang, Anselmo Lastra
Issue Date:October 2002
pp. 112
We present a hybrid system using computer vision and graphics methods that effectively combines fast automatic 3D model extraction with view-dependent texture mapping for the purpose of real-time person-to-person 3D video teleconferencing. In our approach ...
 
Real-Time Consensus-Based Scene Reconstruction Using Commodity Graphics Hardware
Found in: Computer Graphics and Applications, Pacific Conference on
By Ruigang Yang, Greg Welch, Gary Bishop
Issue Date:October 2002
pp. 225
We present a novel use of commodity graphics hardware that effectively combines a plane-sweeping algorithm with view synthesis for real-time, on-line 3D scene acquisition and view synthesis. Using real-time imagery from a few calibrated cameras, o...
 
Model-Based Head Pose Tracking With Stereovision
Found in: Automatic Face and Gesture Recognition, IEEE International Conference on
By Ruigang Yang, Zhengyou Zhang
Issue Date:May 2002
pp. 0255
We present a robust model-based stereo head tracking algorithm that operates in real time on a commodity PC. The use of an individualized three-dimensional head model, coupled with the epipolar constraint from the stereo image pair, greatly improves the ro...
 
PixelFlex: A Reconfigurable Multi-Projector Display System
Found in: Visualization Conference, IEEE
By Ruigang Yang, David Gotz, Justin Hensley, Herman Towles, Michael S. Brown
Issue Date:October 2001
pp. N/A
This paper presents PixelFlex - a spatially reconfigurable multi-projector display system. The PixelFlex system is composed of ceiling-mounted projectors, each with computer-controlled pan, tilt, zoom and focus; and a camera for closed-loop calibration. Wo...
 
Multi-Projector Displays Using Camera-Based Registration
Found in: Visualization Conference, IEEE
By Ramesh Raskar, Michael S. Brown, Ruigang Yang, Wei-Chao Chen, Greg Welch, Herman Towles, Brent Seales, Henry Fuchs
Issue Date:October 1999
pp. 26
Conventional projector-based display systems are typically designed around precise and regular configurations of projectors and display surfaces. While this results in rendering simplicity and speed, it also means painstaking construction and on...
 
Personal Photograph Enhancement Using Internet Photo Collections
Found in: IEEE Transactions on Visualization and Computer Graphics
By Chenxi Zhang, Jizhou Gao, Oliver Wang, Pierre Georgel, Ruigang Yang, James Davis, Jan-Michael Frahm, Marc Pollefeys
Issue Date:February 2014
pp. 262-275
Given the growth of Internet photo collections, we now have a visual index of all major cities and tourist sites in the world. However, it is still a difficult task to capture that perfect shot with your own camera when visiting these places, especially wh...
 
Video Enhancement of People Wearing Polarized Glasses: Darkening Reversal and Reflection Reduction
Found in: 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
By Mao Ye, Cha Zhang, Ruigang Yang
Issue Date:June 2013
pp. 1179-1186
With the wide-spread of consumer 3D-TV technology, stereoscopic videoconferencing systems are emerging. However, the special glasses participants wear to see 3D can create distracting images. This paper presents a computational framework to reduce undesira...
 