Displaying 1-50 out of 51 total
TPAMI CVPR Special Section
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Pedro F. Felzenszwalb, David A. Forsyth, Pascal Fua, Terrance E. Boult
Issue Date:December 2013
pp. 2819-2820
The articles in this special section include papers from the CVPR'11 conference, which was held in Colorado Springs, CO, in June 2011.
 
Rendering synthetic objects into legacy photographs
Found in: ACM Transactions on Graphics (TOG)
By David Forsyth, Derek Hoiem, Kevin Karsch, Varsha Hedau
Issue Date:December 2011
pp. 1-15
We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements. With a single image and a small amount of annotation, our method creates a physical model of...
     
Object Detection with Discriminatively Trained Part-Based Models
Found in: Computer
By David Forsyth
Issue Date:February 2014
pp. 6-7
This installment of Computer's series highlighting the work published in IEEE Computer Society journals comes from IEEE Transactions on Pattern Analysis and Machine Intelligence.
 
Editorial: State of the Journal
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By David A. Forsyth
Issue Date:January 2014
pp. 1
The two factors that make TPAMI a wonderful journal are largely immune to disruption by a change of editors. Our community is a fertile source of exciting intellectual creations and scientific discoveries, and this factor ensures there are fine papers for ...
 
Editor's Note
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By David A. Forsyth
Issue Date:June 2013
pp. 1281-1283
No summary available.
 
Recovering free space of indoor scenes from a single image
Found in: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
By Varsha Hedau, Derek Hoiem, David Forsyth
Issue Date:June 2012
pp. 2807-2814
In this paper we consider the problem of recovering the free space of an indoor scene from a single image. We show that exploiting the box-like geometric structure of furniture and the constraints provided by the scene allows us to recover the extent of maj...
 
Comparative object similarity for improved recognition with few or no examples
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Gang Wang, David Forsyth, Derek Hoiem
Issue Date:June 2010
pp. 3525-3532
Learning models for recognizing objects with few or no training examples is important, due to the intrinsic long-tailed distribution of objects in the real world. In this paper, we propose an approach to use comparative object similarity. The key insight i...
 
Utility data annotation with Amazon Mechanical Turk
Found in: Computer Vision and Pattern Recognition Workshop
By Alexander Sorokin, David Forsyth
Issue Date:June 2008
pp. 1-8
We show how to outsource data annotation to Amazon Mechanical Turk. Doing so has produced annotations in quite large numbers relatively cheaply. The quality is good, and can be checked and controlled. Annotations are produced quickly. We describe results f...
 
Object image retrieval by exploiting online knowledge resources
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Gang Wang, David Forsyth
Issue Date:June 2008
pp. 1-8
We describe a method to retrieve images found on web pages with specified object class labels, using an analysis of text around the image and of image appearance. Our method determines whether an object is both described in text and appears in an image usin...
 
Unsupervised Segmentation of Objects using Efficient Learning
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Himanshu Arora, Nicolas Loeff, David A. Forsyth, Narendra Ahuja
Issue Date:June 2007
pp. 1-7
We describe an unsupervised method to segment objects detected in images using a novel variant of an interest point template, which is very efficient to train and evaluate. Once an object has been detected, our method segments an image using a Conditional ...
 
Transfer Learning in Sign language
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Ali Farhadi, David Forsyth, Ryan White
Issue Date:June 2007
pp. 1-8
We build word models for American Sign Language (ASL) that transfer between different signers and different aspects. This is advantageous because one could use large amounts of labelled avatar data in combination with a smaller amount of labelled human dat...
 
Searching Video for Complex Activities with Finite State Models
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Nazli Ikizler, David Forsyth
Issue Date:June 2007
pp. 1-8
We describe a method of representing human activities that allows a collection of motions to be queried without examples, using a simple and effective query language. Our approach is based on units of activity at segments of the body, that can be composed ...
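As a rough illustration of the finite-state idea (a sketch only, not the paper's query language; the activity labels and query below are invented for the example), a query can be compiled into a small state machine that scans per-frame activity labels:

```python
# Minimal finite-state matcher over per-frame activity labels.
# Hypothetical labels and query; the paper's actual query language differs.

def matches(query, frames):
    """Return True if `frames` contains the queried states in order,
    where each state may persist over any number of consecutive frames
    and may be separated by other labels."""
    state = 0  # index of the query state we are waiting to see next
    for label in frames:
        if state < len(query) and label == query[state]:
            state += 1  # advance when the next queried state appears
    return state == len(query)

frames = ["stand"] * 10 + ["crouch"] * 3 + ["jump"] * 4 + ["stand"] * 5
print(matches(["stand", "crouch", "jump"], frames))  # True
print(matches(["jump", "crouch"], frames))           # False
```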
 
Tracking People by Learning Their Appearance
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Deva Ramanan, David A. Forsyth, Andrew Zisserman
Issue Date:January 2007
pp. 65-81
An open vision problem is to automatically track the articulations of people from a video sequence. This problem is difficult because one needs to determine both the number of people in each frame and estimate their configurations. But, finding people and ...
 
Animals on the Web
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Tamara L. Berg, David A. Forsyth
Issue Date:June 2006
pp. 1463-1470
We demonstrate a method for identifying images containing categories of animals. The images we classify depict animals in a wide range of aspects, configurations and appearances. In addition, the images typically portray multiple species that differ in app...
 
Combining Cues: Shape from Shading and Texture
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Ryan White, David A. Forsyth
Issue Date:June 2006
pp. 1809-1816
We demonstrate a method for reconstructing the shape of a deformed surface from a single view. After decomposing an image into irradiance and albedo components, we combine normal cues from shading and texture to produce a field of unambiguous normals. Usin...
 
Searching Off-line Arabic Documents
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Jim Chan, Celal Ziftci, David Forsyth
Issue Date:June 2006
pp. 1455-1462
Currently an abundance of historical manuscripts, journals, and scientific notes remains largely inaccessible in library archives. Manual transcription and publication of such documents is unlikely, and automatic transcription with high enough accuracy to s...
 
Aligning ASL for Statistical Translation Using a Discriminative Word Model
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Ali Farhadi, David Forsyth
Issue Date:June 2006
pp. 1471-1476
We describe a method to align ASL video subtitles with a closed-caption transcript. Our alignments are partial, based on spotting words within the video sequence, which consists of joined (rather than isolated) signs with unknown word boundaries. We start ...
 
Tracking People and Recognizing Their Activities
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Deva Ramanan, David Forsyth, Andrew Zisserman
Issue Date:June 2005
pp. 1194
No summary available.
   
Skeletal Parameter Estimation from Optical Motion Capture Data
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Adam G. Kirk, James F. O'Brien, David A. Forsyth
Issue Date:June 2005
pp. 782-788
In this paper we present an algorithm for automatically estimating a subject's skeletal structure from optical motion capture data. Our algorithm consists of a series of steps that cluster markers into segment groups, determine the topological connectivity...
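A toy version of the marker-grouping step (an illustrative sketch, not the authors' algorithm; it only assumes that markers riding on one rigid segment keep near-constant pairwise distances over time):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_markers(positions, n_segments):
    """positions: (frames, markers, 3) optical marker trajectories.
    Group markers whose pairwise distances stay nearly constant,
    i.e. that plausibly ride on the same rigid segment."""
    dists = np.linalg.norm(positions[:, :, None, :] - positions[:, None, :, :], axis=-1)
    variability = dists.std(axis=0)            # (M, M) std of each pairwise distance
    np.fill_diagonal(variability, 0.0)
    Z = linkage(squareform(variability, checks=False), method="average")
    return fcluster(Z, t=n_segments, criterion="maxclust")

# Synthetic example: two rigid triads of markers translating independently.
rng = np.random.default_rng(0)
base = rng.normal(size=(2, 3, 3))
frames = np.concatenate(
    [base[i][None] + rng.normal(scale=0.01, size=(50, 1, 3)) + offset
     for i, offset in enumerate([0.0, 5.0])], axis=1)
print(cluster_markers(frames, n_segments=2))   # e.g. [1 1 1 2 2 2]
```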
 
Finding Glass
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Kenton McHenry, Jean Ponce, David Forsyth
Issue Date:June 2005
pp. 973-979
This paper addresses the problem of finding glass objects in images. Visual cues obtained by combining the systematic distortions in background texture occurring at the boundaries of transparent objects with the strong highlights typical of glass surfaces ...
 
Looking at People
Found in: Computer and Robot Vision, Canadian Conference
By David Forsyth
Issue Date:May 2005
pp. xvi-xvi
No summary available.
   
The Effects of Segmentation and Feature Choice in a Translation Model of Object Recognition
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Kobus Barnard, Pinar Duygulu, Raghavendra Guru, Prasad Gabbur, David Forsyth
Issue Date:June 2003
pp. 675
We work with a model of object recognition where words must be placed on image regions. This approach means that large scale experiments are relatively easy, so we can evaluate the effects of various early and mid-level vision algorithms on recognition per...
 
Mixtures of Trees for Object Recognition
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Sergey Ioffe, David Forsyth
Issue Date:December 2001
pp. 180
Efficient detection of objects in images is complicated by variations of object appearance due to intra-class object differences, articulation, lighting, occlusions, and aspect variations. To reduce the search required for detection, we employ the bottom-u...
 
Clustering Art
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Kobus Barnard, Pinar Duygulu, David Forsyth
Issue Date:December 2001
pp. 434
We extend a recently developed method [1] for learning the semantics of image databases using text and pictures. We incorporate statistical natural language processing in order to deal with free text. We demonstrate the current system on a difficult datase...
 
Learning the Semantics of Words and Pictures
Found in: Computer Vision, IEEE International Conference on
By Kobus Barnard, David Forsyth
Issue Date:July 2001
pp. 408
We present a statistical model for organizing image collections which integrates semantic information provided by associated text and visual information provided by image features. The model is very promising for information retrieval tasks such as databas...
 
Exploiting Image Semantics for Picture Libraries
Found in: Digital Libraries, Joint Conference on
By Kobus Barnard, David Forsyth
Issue Date:June 2001
pp. 469
We consider the application of a system for learning the semantics of image collections to digital libraries. We discuss our approach to browsing and search, and investigate the integration of both in more detail.
   
Finding People by Sampling
Found in: Computer Vision, IEEE International Conference on
By Sergey Ioffe, David Forsyth
Issue Date:September 1999
pp. 1092
We show how to use a sampling method to find sparsely clad people in static images. People are modeled as an assembly of nine cylindrical segments. Segments are found using an EM algorithm, and then assembled into hypotheses incrementally, using a learned ...
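A loose illustration of the EM-based segment-finding flavour (not the paper's detector; the synthetic points and the elongation test below are invented for the example): fit a mixture model with EM to candidate points and keep strongly elongated components as putative limb segments.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical candidate points: two elongated "limbs" plus clutter.
limb1 = np.column_stack([rng.uniform(0, 10, 200), rng.normal(0, 0.3, 200)])
limb2 = np.column_stack([rng.normal(5, 0.3, 200), rng.uniform(0, 8, 200)])
clutter = rng.uniform(-5, 15, size=(100, 2))
points = np.vstack([limb1, limb2, clutter])

# EM fitting of a Gaussian mixture; elongated components suggest segments.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(points)
for mean, cov in zip(gmm.means_, gmm.covariances_):
    evals = np.linalg.eigvalsh(cov)
    elongation = np.sqrt(evals[-1] / evals[0])   # axis ratio of the component
    if elongation > 3:                           # elongated => candidate segment
        print("candidate segment at", np.round(mean, 2), "elongation", round(elongation, 1))
```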
 
Dynamics Modeling and Culling
Found in: IEEE Computer Graphics and Applications
By Stephen Chenney, Jeffrey Ichnowski, David Forsyth
Issue Date:March 1999
pp. 79-87
Procedural animation, via dynamical systems, has many advantages over keyframe animation, yet suffers from high computational cost and difficulties in modeling. We describe tools that analyze and process systems to enable culling if the system is not in vi...
 
Shading Primitives: Finding Folds and Shallow Grooves
Found in: Computer Vision, IEEE International Conference on
By John Haddon, David Forsyth
Issue Date:January 1998
pp. 236
Diffuse interreflections cause effects that make current theories of shape from shading unsatisfactory. We show that distant radiating surfaces produce radiosity effects at low spatial frequencies. This means that, if a shading pattern has a small...
 
Video Event Detection: From Subvolume Localization to Spatiotemporal Path Search
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Du Tran, Junsong Yuan, David Forsyth
Issue Date:February 2014
pp. 404-416
Although sliding window-based approaches have been quite successful in detecting objects in images, it is not a trivial problem to extend them to detecting events in videos. We propose to search for spatiotemporal paths for video event detection. This new ...
 
Non-parametric Filtering for Geometric Detail Extraction and Material Representation
Found in: 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
By Zicheng Liao, Jason Rock, Yang Wang, David Forsyth
Issue Date:June 2013
pp. 963-970
Geometric detail is a universal phenomenon in real world objects. It is an important component in object modeling, but not accounted for in current intrinsic image works. In this work, we explore using a non-parametric method to separate geometric detail f...
 
Skeletal Parameter Estimation from Optical Motion Capture Data
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Adam G. Kirk, James F. O'Brien, David A. Forsyth
Issue Date:June 2005
pp. 1185
No summary available.
   
Human Tracking with Mixtures of Trees
Found in: Computer Vision, IEEE International Conference on
By Sergey Ioffe, David Forsyth
Issue Date:July 2001
pp. 690
Tree-structured probabilistic models admit simple, fast inference. However, they are not well suited to phenomena such as occlusion, where multiple components of an object may disappear simultaneously. We address this problem with mixtures of trees, and de...
 
Understanding pictures of rooms: technical perspective
Found in: Communications of the ACM
By David Forsyth
Issue Date:April 2013
pp. 91-91
The main applications and challenges of one of the hottest research areas in computer science.
     
BeThere: 3D mobile collaboration with spatial input
Found in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13)
By Brett R. Jones, Brian P. Bailey, David Forsyth, Giuliano Maciocci, Rajinder S. Sodhi
Issue Date:April 2013
pp. 179-188
We present BeThere, a proof-of-concept system designed to explore 3D input for mobile collaborative interactions. With BeThere, we explore 3D gestures and spatial input which allow remote users to perform a variety of virtual interactions in a local user's...
     
Rendering synthetic objects into legacy photographs
Found in: Proceedings of the 2011 SIGGRAPH Asia Conference (SA '11)
By David Forsyth, Derek Hoiem, Kevin Karsch, Varsha Hedau
Issue Date:December 2011
pp. 1-12
We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements. With a single image and a small amount of annotation, our method creates a physical model of...
     
Still looking at people
Found in: Proceedings of the 13th international conference on multimodal interfaces (ICMI '11)
By David A. Forsyth
Issue Date:November 2011
pp. 1-2
There is a great need for programs that can describe what people are doing from video. Among other applications, such programs could be used to search for scenes in consumer video; in surveillance applications; to support the design of buildings and of pub...
     
Generalizing motion edits with Gaussian processes
Found in: ACM Transactions on Graphics (TOG)
By David Forsyth, Leslie Ikemoto, Okan Arikan
Issue Date:January 2009
pp. 1-12
One way that artists create compelling character animations is by manipulating details of a character's motion. This process is expensive and repetitive. We show that we can make such motion editing more efficient by generalizing the edits an animator make...
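A minimal Gaussian process regression sketch of the underlying idea (illustrative only; the pose feature and edit representation here are invented): given example (feature, edit) pairs supplied by an animator, predict the edit for a new pose.

```python
import numpy as np

def rbf(A, B, length=1.0):
    """Squared-exponential kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-3):
    """Standard GP regression mean: K_* (K + sigma^2 I)^{-1} y."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_test, X_train)
    return Ks @ np.linalg.solve(K, y_train)

# Hypothetical data: 1D "pose feature" -> scalar edit offset from an animator.
X = np.linspace(0, 6, 12)[:, None]
y = np.sin(X[:, 0])                      # stand-in for observed edits
Xq = np.array([[1.5], [4.2]])
print(gp_predict(X, y, Xq))              # edits generalized to unseen poses
```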
     
ManifoldBoost: stagewise function approximation for fully-, semi- and un-supervised learning
Found in: Proceedings of the 25th international conference on Machine learning (ICML '08)
By David Forsyth, Deepak Ramachandran, Nicolas Loeff
Issue Date:July 2008
pp. 600-607
We describe a manifold learning framework that naturally accommodates supervised learning, partially supervised learning and unsupervised clustering as particular cases. Our method chooses a function by minimizing loss subject to a manifold regularization p...
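One generic way to picture a manifold regularization penalty (a sketch only, not the paper's stagewise boosting procedure): add a graph-Laplacian smoothness term, built over labeled and unlabeled points, to a squared loss on the labeled ones.

```python
import numpy as np

def laplacian_regularized_fit(X, y_labeled, labeled_idx, bandwidth=1.0, gamma=0.1):
    """Find per-point predictions f minimizing
        sum_{labeled i} (f_i - y_i)^2 + gamma * f^T L f,
    where L is the graph Laplacian of a similarity graph over ALL points,
    so predictions vary smoothly along the data manifold."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * bandwidth ** 2))      # dense similarity graph
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W                   # unnormalized graph Laplacian
    S = np.zeros((n, n))
    S[labeled_idx, labeled_idx] = 1.0           # selects the labeled points
    y_full = np.zeros(n)
    y_full[labeled_idx] = y_labeled
    return np.linalg.solve(S + gamma * L, S @ y_full)  # closed-form minimizer

# Two clusters, one label each: labels propagate within clusters.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal([0, 0], 0.3, (20, 2)), rng.normal([3, 0], 0.3, (20, 2))])
f = laplacian_regularized_fit(X, np.array([-1.0, 1.0]), np.array([0, 20]))
print(np.sign(f[:5]), np.sign(f[-5:]))          # all -1, then all +1
```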
     
Quick transitions with cached multi-way blends
Found in: Proceedings of the 2007 symposium on Interactive 3D graphics and games (SI3D '07)
By David Forsyth, Leslie Ikemoto, Okan Arikan
Issue Date:April 2007
pp. 145-151
We describe a discriminative method for distinguishing natural-looking from unnatural-looking motion. Our method is based on physical and data-driven features of motion to which humans seem sensitive. We demonstrate that our technique is significantly more...
     
Quick transitions using multi-way blends
Found in: ACM SIGGRAPH 2006 Sketches (SIGGRAPH '06)
By David Forsyth, Leslie Ikemoto, Okan Arikan
Issue Date:July 2006
pp. 31-es
No summary available.
     
Knowing when to put your foot down
Found in: Proceedings of the 2006 symposium on Interactive 3D graphics and games (SI3D '06)
By David Forsyth, Leslie Ikemoto, Okan Arikan
Issue Date:March 2006
pp. 49-53
Footskate, where a character's foot slides on the ground when it should be planted firmly, is a common artifact resulting from almost any attempt to modify motion capture data. We describe an online method for fixing footskate that requires no manual clean...
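A crude sketch of the underlying idea (detection and naive pinning only, not the paper's online method; the thresholds are made up): treat a foot as planted when its height and speed are both small, then hold each planted run at a fixed position. In a real pipeline the rest of the leg would still need adjusting to respect the pinned foot.

```python
import numpy as np

def find_plants(foot_positions, height_thresh=0.02, speed_thresh=0.01):
    """foot_positions: (frames, 3) foot trajectory with y up.
    Returns a boolean mask of frames where the foot should be planted."""
    speed = np.zeros(len(foot_positions))
    speed[1:] = np.linalg.norm(np.diff(foot_positions, axis=0), axis=1)
    return (foot_positions[:, 1] < height_thresh) & (speed < speed_thresh)

def pin_plants(foot_positions, planted):
    """Remove footskate naively: hold the foot at the mean position of
    each contiguous planted run."""
    fixed = foot_positions.copy()
    i = 0
    while i < len(planted):
        if planted[i]:
            j = i
            while j < len(planted) and planted[j]:
                j += 1
            fixed[i:j] = foot_positions[i:j].mean(axis=0)  # pin the whole run
            i = j
        else:
            i += 1
    return fixed
```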
     
Towards auto-documentary: tracking the evolution of news stories
Found in: Proceedings of the 12th annual ACM international conference on Multimedia (MULTIMEDIA '04)
By David A. Forsyth, Jia-Yu Pan, Pinar Duygulu
Issue Date:October 2004
pp. 820-827
News videos constitute an important source of information for tracking and documenting important events. In these videos, news stories are often accompanied by short video shots that tend to be repeated during the course of the event. Automatic detection o...
     
Enriching a motion collection by transplanting limbs
Found in: Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation (SCA '04)
By David A. Forsyth, Leslie Ikemoto
Issue Date:August 2004
pp. 99-108
This paper describes a method that can significantly increase the size of a collection of motion observations by cutting limbs from one motion sequence and attaching them to another. Not all such transplants are successful, because correlations across the ...
     
Motion synthesis from annotations
Found in: ACM SIGGRAPH 2003 Papers (SIGGRAPH '03)
By David A. Forsyth, James F. O'Brien, Okan Arikan
Issue Date:July 2003
pp. 56-ff
This paper describes a framework that allows a user to synthesize human motion while retaining control of its qualitative properties. The user paints a timeline with annotations --- like walk, run or jump --- from a vocabulary which is freely chosen by the...
     
Exploiting image semantics for picture libraries
Found in: Proceedings of the first ACM/IEEE-CS joint conference on Digital libraries (JCDL '01)
By David Forsyth, Kobus Barnard
Issue Date:January 2001
pp. 469
We consider the application of a system for learning the semantics of image collections to digital libraries. We discuss our approach to browsing and search, and investigate the integration of both in more detail.
     
Efficient dynamics modeling for VRML and Java
Found in: Proceedings of the third symposium on Virtual reality modeling language (VRML '98)
By David Forsyth, Jeffrey Ichnowski, Stephen Chenney
Issue Date:February 1998
pp. 15-24
No summary available.
     
View-dependent culling of dynamic systems in virtual environments
Found in: Proceedings of the 1997 symposium on Interactive 3D graphics (SI3D '97)
By David Forsyth, Stephen Chenney
Issue Date:April 1997
pp. 55-58
No summary available.
     