Displaying 1-31 out of 31 total
Video tapestries with continuous temporal zoom
Found in: ACM Transactions on Graphics (TOG)
By Adam Finkelstein, Connelly Barnes, Dan B. Goldman, Eli Shechtman
Issue Date:July 2010
pp. 1-10
We present a novel approach for summarizing video in the form of a multiscale image that is continuous in both the spatial domain and across the scale dimension: There are no hard borders between discrete moments in time, and a user can zoom smoothly into ...
     
PatchMatch: a randomized correspondence algorithm for structural image editing
Found in: ACM Transactions on Graphics (TOG)
By Adam Finkelstein, Connelly Barnes, Dan B. Goldman, Eli Shechtman
Issue Date:July 2009
pp. 1-2
This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to pr...
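
The abstract is truncated above, but the published algorithm rests on three steps: randomly initialize a nearest-neighbor field, then alternate propagation of good offsets from already-visited neighbors with a random search around the current match at shrinking radii. Below is a minimal single-scale sketch in NumPy, assuming grayscale float images, square patches, and an L2 patch distance; the function names and default parameters are illustrative, not taken from the paper's implementation.

import numpy as np

def patch_dist(a, b, ay, ax, by, bx, p):
    # L2 distance between the p x p patches with top-left corners (ay, ax) in a and (by, bx) in b
    return float(np.sum((a[ay:ay + p, ax:ax + p] - b[by:by + p, bx:bx + p]) ** 2))

def patchmatch(a, b, p=7, iters=5, seed=0):
    # Approximate nearest-neighbor field from every patch of a to some patch of b.
    rng = np.random.default_rng(seed)
    ah, aw = a.shape[0] - p + 1, a.shape[1] - p + 1  # valid patch origins in a
    bh, bw = b.shape[0] - p + 1, b.shape[1] - p + 1  # valid patch origins in b
    # 1. Random initialization of the nearest-neighbor field and its costs.
    nnf = np.stack([rng.integers(0, bh, (ah, aw)), rng.integers(0, bw, (ah, aw))], axis=-1)
    cost = np.array([[patch_dist(a, b, y, x, nnf[y, x, 0], nnf[y, x, 1], p)
                      for x in range(aw)] for y in range(ah)])
    for it in range(iters):
        # Alternate scan direction each iteration so information flows both ways.
        order_y = range(ah) if it % 2 == 0 else range(ah - 1, -1, -1)
        order_x = range(aw) if it % 2 == 0 else range(aw - 1, -1, -1)
        d = 1 if it % 2 == 0 else -1
        for y in order_y:
            for x in order_x:
                # 2. Propagation: reuse the (shifted) offsets of the already-visited neighbors.
                for ny, nx in ((y - d, x), (y, x - d)):
                    if 0 <= ny < ah and 0 <= nx < aw:
                        cy = int(np.clip(nnf[ny, nx, 0] + (y - ny), 0, bh - 1))
                        cx = int(np.clip(nnf[ny, nx, 1] + (x - nx), 0, bw - 1))
                        c = patch_dist(a, b, y, x, cy, cx, p)
                        if c < cost[y, x]:
                            nnf[y, x] = (cy, cx)
                            cost[y, x] = c
                # 3. Random search: sample around the current best match at shrinking radii.
                r = max(bh, bw)
                while r >= 1:
                    cy = int(np.clip(nnf[y, x, 0] + rng.integers(-r, r + 1), 0, bh - 1))
                    cx = int(np.clip(nnf[y, x, 1] + rng.integers(-r, r + 1), 0, bw - 1))
                    c = patch_dist(a, b, y, x, cy, cx, p)
                    if c < cost[y, x]:
                        nnf[y, x] = (cy, cx)
                        cost[y, x] = c
                    r //= 2
    return nnf  # nnf[y, x] = (row, col) of the best-matching patch origin in b

The propagation/random-search split is what gives the method its speed: coherent image regions converge after a handful of sweeps, while the random search keeps individual patches from getting stuck in poor local matches.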
     
Large-Scale Visual Font Recognition
Found in: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
By Guang Chen, Jianchao Yang, Hailin Jin, Jonathan Brandt, Eli Shechtman, Aseem Agarwala, Tony X. Han
Issue Date:June 2014
pp. 3598-3605
This paper addresses the large-scale visual font recognition (VFR) problem, which aims at automatic identification of the typeface, weight, and slope of the text in an image or photo without any knowledge of content. Although visual font recognition ha...
 
Automatic Upright Adjustment of Photographs With Robust Camera Calibration
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Hyunjoon Lee, Eli Shechtman, Jue Wang, Seungyong Lee
Issue Date:May 2014
pp. 1-1
Man-made structures often appear to be distorted in photos captured by casual photographers, as the scene layout often conflicts with how it is expected by human perception. In this paper, we propose an automatic approach for straightening up slanted man-m...
 
Deblurring by Example Using Dense Correspondence
Found in: 2013 IEEE International Conference on Computer Vision (ICCV)
By Yoav HaCohen, Eli Shechtman, Dani Lischinski
Issue Date:December 2013
pp. 2384-2391
This paper presents a new method for deblurring photos using a sharp reference example that contains some shared content with the blurry photo. Most previous deblurring methods that exploit information from other photos require an accurately registered pho...
 
Regenerative morphing
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Eli Shechtman, Alex Rav-Acha, Michal Irani, Steve Seitz
Issue Date:June 2010
pp. 615-622
We present a new image morphing approach in which the output sequence is regenerated from small pieces of the two source (input) images. The approach does not require manual correspondence, and generates compelling results even when the images are of very ...
 
Summarizing visual data using bidirectional similarity
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Denis Simakov, Yaron Caspi, Eli Shechtman, Michal Irani
Issue Date:June 2008
pp. 1-8
We propose a principled approach to summarization of visual data (images or video) based on optimization of a well-defined similarity measure. The problem we consider is re-targeting (or summarization) of image/video data into smaller sizes. A good “visual...
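
For readers skimming this listing, the objective behind this measure is worth sketching: a summary T of a source S is considered good if it is complete (every patch of S has a similar patch in T) and coherent (every patch of T comes from somewhere in S). In notation that roughly follows the paper, the measure takes the form

\[
d(S,T) \;=\; \underbrace{\frac{1}{N_S}\sum_{P \subset S}\,\min_{Q \subset T} D(P,Q)}_{\text{completeness}}
\;+\;
\underbrace{\frac{1}{N_T}\sum_{Q \subset T}\,\min_{P \subset S} D(P,Q)}_{\text{coherence}},
\]

where P and Q range over all patches of S and T, N_S and N_T count those patches, and D is a patch distance such as SSD. Retargeting then amounts to gradually shrinking T while keeping d(S,T) small.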
 
In defense of Nearest-Neighbor based image classification
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Oren Boiman, Eli Shechtman, Michal Irani
Issue Date:June 2008
pp. 1-8
State-of-the-art image classification methods require an intensive learning/training stage (using SVM, Boosting, etc.). In contrast, non-parametric Nearest-Neighbor (NN) based image classifiers require no training time and have other favorable properties. H...
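
As a sketch of what such a training-free classifier can look like in practice (a Naive-Bayes Nearest-Neighbor-style scheme; the helper names and the use of scikit-learn's NearestNeighbors are my assumptions, not the paper's code): each class keeps a pool of local descriptors from its training images, and a query image is assigned to the class that minimizes the summed descriptor-to-class nearest-neighbor distance.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_class_indices(descriptors_per_class):
    # descriptors_per_class: dict mapping class name -> (n_i, d) array of local
    # descriptors pooled from that class's training images (no learning beyond indexing).
    return {c: NearestNeighbors(n_neighbors=1).fit(d)
            for c, d in descriptors_per_class.items()}

def nn_classify(query_descriptors, class_indices):
    # query_descriptors: (m, d) array of local descriptors from the query image.
    # Image-to-class distance: sum of squared distances from each query descriptor
    # to its nearest neighbor inside that class; predict the class with the smallest sum.
    totals = {}
    for c, index in class_indices.items():
        dists, _ = index.kneighbors(query_descriptors, n_neighbors=1)
        totals[c] = float(np.sum(dists ** 2))
    return min(totals, key=totals.get)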
 
Actions as Space-Time Shapes
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Lena Gorelick, Moshe Blank, Eli Shechtman, Michal Irani, Ronen Basri
Issue Date:December 2007
pp. 2247-2253
Human action in video sequences can be seen as silhouettes of a moving torso and protruding limbs undergoing articulated motion. We regard human actions as three-dimensional shapes induced by the silhouettes in the space-time volume. We adopt a recent appro...
 
Space-Time Behavior-Based Correlation—OR—How to Tell If Two Underlying Motion Fields Are Similar Without Computing Them?
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Eli Shechtman, Michal Irani
Issue Date:November 2007
pp. 2045-2056
We introduce a behavior-based similarity measure which tells us whether two different space-time intensity patterns of two different video segments could have resulted from a similar underlying motion field. This is done directly from the intensity informa...
 
Matching Local Self-Similarities across Images and Videos
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Eli Shechtman, Michal Irani
Issue Date:June 2007
pp. 1-8
We present an approach for measuring similarity between visual entities (images or videos) based on matching internal self-similarities. What is correlated across images (or across video sequences) is the internal layout of local self-similarities (up to s...
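
A hedged sketch of the kind of descriptor this abstract describes: at a given pixel, correlate a small central patch with every patch in its surrounding region, turn the resulting SSD surface into a "correlation surface", and max-pool it into log-polar bins. The bin counts, normalization constant, and function name below are illustrative assumptions, and the point must lie far enough from the image border for the slices to be valid.

import numpy as np

def local_self_similarity(img, y, x, patch=5, region=41, n_angles=20, n_radii=4):
    # Self-similarity descriptor at (y, x) of a grayscale float image; assumes
    # (y, x) is at least region//2 + patch//2 pixels away from every border.
    hp, hr = patch // 2, region // 2
    center = img[y - hp:y + hp + 1, x - hp:x + hp + 1]
    # Correlation surface: compare the central patch against every patch in the region.
    surface = np.zeros((region, region))
    for dy in range(-hr, hr + 1):
        for dx in range(-hr, hr + 1):
            cand = img[y + dy - hp:y + dy + hp + 1, x + dx - hp:x + dx + hp + 1]
            ssd = np.sum((cand - center) ** 2)
            surface[dy + hr, dx + hr] = np.exp(-ssd / (patch * patch * 85.0))  # 85: assumed noise scale
    # Max-pool the surface into log-polar (angle, radius) bins to tolerate small deformations.
    desc = np.zeros((n_angles, n_radii))
    for dy in range(-hr, hr + 1):
        for dx in range(-hr, hr + 1):
            r = np.hypot(dy, dx)
            if r < 1 or r > hr:
                continue
            a = int((np.arctan2(dy, dx) + np.pi) / (2 * np.pi) * n_angles) % n_angles
            rb = min(int(np.log(r) / np.log(hr) * n_radii), n_radii - 1)
            desc[a, rb] = max(desc[a, rb], surface[dy + hr, dx + hr])
    return desc.ravel()

Matching two images then amounts to comparing grids of such descriptors rather than raw pixels, which is what makes the measure tolerant to differences in color, texture, and appearance.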
 
Space-Time Completion of Video
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Yonatan Wexler, Eli Shechtman, Michal Irani
Issue Date:March 2007
pp. 463-476
This paper presents a new framework for the completion of missing information based on local structures. It poses the task of completion as a global optimization problem with a well-defined objective function and derives a new algorithm to optimize it. Mis...
 
Actions as Space-Time Shapes
Found in: Computer Vision, IEEE International Conference on
By Moshe Blank, Lena Gorelick, Eli Shechtman, Michal Irani, Ronen Basri
Issue Date:October 2005
pp. 1395-1402
Human action in video sequences can be seen as silhouettes of a moving torso and protruding limbs undergoing articulated motion. We regard human actions as three-dimensional shapes induced by the silhouettes in the space-time volume. We adopt a recent appr...
 
Space-Time Behavior Based Correlation
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Eli Shechtman, Michal Irani
Issue Date:June 2005
pp. 405-412
We introduce a behavior-based similarity measure which tells us whether two different space-time intensity patterns of two different video segments could have resulted from a similar underlying motion field. This is done directly from the intensit...
 
Space-Time Super-Resolution
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Eli Shechtman, Yaron Caspi, Michal Irani
Issue Date:April 2005
pp. 531-545
We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By
 
Improving patch-based synthesis by learning patch masks
Found in: 2014 IEEE International Conference on Computational Photography (ICCP)
By Nima Khademi Kalantari, Eli Shechtman, Soheil Darabi, Dan B. Goldman, Pradeep Sen
Issue Date:May 2014
pp. 1-8
Patch-based synthesis is a powerful framework for numerous image and video editing applications such as hole-filling, retargeting, and reshuffling. In all these applications, a patch-based objective function is optimized through a patch search-and-vote pro...
   
Learning Video Saliency from Human Gaze Using Candidate Selection
Found in: 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
By Dmitry Rudoy, Dan B. Goldman, Eli Shechtman, Lihi Zelnik-Manor
Issue Date:June 2013
pp. 1147-1154
During recent years remarkable progress has been made in visual saliency modeling. Our interest is in video saliency. Since videos are fundamentally different from still images, they are viewed differently by human observers. For example, the time each vid...
 
Patch-based high dynamic range video
Found in: ACM Transactions on Graphics (TOG)
By Connelly Barnes, Dan B. Goldman, Pradeep Sen, Eli Shechtman, Nima Khademi Kalantari, Soheil Darabi
Issue Date:November 2013
pp. 1-8
Despite significant progress in high dynamic range (HDR) imaging over the years, it is still difficult to capture high-quality HDR video with a conventional, off-the-shelf camera. The most practical way to do this is to capture alternating exposures for ev...
     
Optimizing color consistency in photo collections
Found in: ACM Transactions on Graphics (TOG)
By Dan B. Goldman, Dani Lischinski, Eli Shechtman, Yoav HaCohen
Issue Date:July 2013
pp. 1-10
With dozens or even hundreds of photos in today's digital photo albums, editing an entire album can be a daunting task. Existing automatic tools operate on individual photos without ensuring consistency of appearance between photographs that share content....
     
Robust patch-based hdr reconstruction of dynamic scenes
Found in: ACM Transactions on Graphics (TOG)
By Dan B. Goldman, Eli Shechtman, Maziar Yaesoubi, Nima Khademi Kalantari, Pradeep Sen, Soheil Darabi
Issue Date:November 2012
pp. 1-11
High dynamic range (HDR) imaging from a set of sequential exposures is an easy way to capture high-quality images of static scenes, but suffers from artifacts for scenes with significant motion. In this paper, we propose a new approach to HDR reconstructio...
     
Image melding: combining inconsistent images using patch-based synthesis
Found in: ACM Transactions on Graphics (TOG)
By Connelly Barnes, Dan B. Goldman, Eli Shechtman, Pradeep Sen, Soheil Darabi
Issue Date:July 2012
pp. 1-10
Current methods for combining two different images produce visible artifacts when the sources have very different textures and structures. We present a new method for synthesizing a transition region between two source images, such that inconsistent color,...
     
The PatchMatch randomized matching algorithm for image manipulation
Found in: Communications of the ACM
By Adam Finkelstein, Connelly Barnes, Dan B. Goldman, Eli Shechtman
Issue Date:November 2011
pp. 103-110
This paper presents a new randomized algorithm for quickly finding approximate nearest neighbor matches between image patches. Our algorithm offers substantial performance improvements over the previous state of the art (20-100×), enabling its use in...
     
Non-rigid dense correspondence with applications for image enhancement
Found in: ACM SIGGRAPH 2011 papers (SIGGRAPH '11)
By Dan B. Goldman, Dani Lischinski, Eli Shechtman, Yoav HaCohen
Issue Date:August 2011
pp. 287-294
This paper presents a new efficient method for recovering reliable local sets of dense correspondences between two images with some shared content. Our method is designed for pairs of images depicting similar regions acquired by different cameras and lense...
     
Exploring photobios
Found in: ACM SIGGRAPH 2011 papers (SIGGRAPH '11)
By Eli Shechtman, Ira Kemelmacher-Shlizerman, Rahul Garg, Steven M. Seitz
Issue Date:August 2011
pp. 287-294
We present an approach for generating face animations from large image collections of the same person. Such collections, which we call photobios, sample the appearance of a person over changes in pose, facial expression, hairstyle, age, and other variation...
     
Expression flow for 3D-aware face component transfer
Found in: ACM SIGGRAPH 2011 papers (SIGGRAPH '11)
By Dimitri Metaxas, Eli Shechtman, Fei Yang, Jue Wang, Lubomir Bourdev
Issue Date:August 2011
pp. 287-294
We address the problem of correcting an undesirable expression on a face photo by transferring local facial components, such as a smiling mouth, from another face photo of the same person which has the desired expression. Direct copying and blending using ...
     
Non-rigid dense correspondence with applications for image enhancement
Found in: ACM Transactions on Graphics (TOG)
By Dan B. Goldman, Dani Lischinski, Eli Shechtman, Yoav HaCohen
Issue Date:July 2011
pp. 1-33
This paper presents a new efficient method for recovering reliable local sets of dense correspondences between two images with some shared content. Our method is designed for pairs of images depicting similar regions acquired by different cameras and lense...
     
Exploring photobios
Found in: ACM Transactions on Graphics (TOG)
By Eli Shechtman, Ira Kemelmacher-Shlizerman, Rahul Garg, Steven M. Seitz
Issue Date:July 2011
pp. 1-33
We present an approach for generating face animations from large image collections of the same person. Such collections, which we call photobios, sample the appearance of a person over changes in pose, facial expression, hairstyle, age, and other variation...
     
Expression flow for 3D-aware face component transfer
Found in: ACM Transactions on Graphics (TOG)
By Dimitri Metaxas, Eli Shechtman, Fei Yang, Jue Wang, Lubomir Bourdev
Issue Date:July 2011
pp. 1-33
We address the problem of correcting an undesirable expression on a face photo by transferring local facial components, such as a smiling mouth, from another face photo of the same person which has the desired expression. Direct copying and blending using ...
     
Cosaliency: where people look when comparing images
Found in: Proceedings of the 23rd annual ACM symposium on User interface software and technology (UIST '10)
By Dan B. Goldman, David E. Jacobs, Eli Shechtman
Issue Date:October 2010
pp. 219-228
Image triage is a common task in digital photography. Determining which photos are worth processing for sharing with friends and family and which should be deleted to make room for new ones can be a challenge, especially on a device with a small screen lik...
     
Video tapestries with continuous temporal zoom
Found in: ACM SIGGRAPH 2010 papers (SIGGRAPH '10)
By Adam Finkelstein, Connelly Barnes, Dan B. Goldman, Eli Shechtman
Issue Date:July 2010
pp. 10-18
We present a novel approach for summarizing video in the form of a multiscale image that is continuous in both the spatial domain and across the scale dimension: There are no hard borders between discrete moments in time, and a user can zoom smoothly into ...
     
PatchMatch: a randomized correspondence algorithm for structural image editing
Found in: ACM SIGGRAPH 2009 papers (SIGGRAPH '09)
By Adam Finkelstein, Connelly Barnes, Dan B. Goldman, Eli Shechtman
Issue Date:August 2009
pp. 3-3
This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to pr...
     