Looking at People: Sensing for Ubiquitous and Wearable Computing
January 2000 (vol. 22, no. 1)
pp. 107-119

Abstract—The research topic of looking at people, that is, giving machines the ability to detect, track, and identify people and more generally, to interpret human behavior, has become a central topic in machine vision research. Initially thought to be the research problem that would be hardest to solve, it has proven remarkably tractable and has even spawned several thriving commercial enterprises. The principal driving application for this technology is “fourth generation” embedded computing: “smart” environments and portable or wearable devices. The key technical goals are to determine the computer's context with respect to nearby humans (e.g., who, what, when, where, and why) so that the computer can act or respond appropriately without detailed instructions. This paper will examine the mathematical tools that have proven successful, provide a taxonomy of the problem domain, and then examine the state-of-the-art. Four areas will receive particular attention: person identification, surveillance/monitoring, 3D methods, and smart rooms/perceptual user interfaces. Finally, the paper will discuss some of the research challenges and opportunities.

[1] M. Weiser, “The Computer for the 21st Century,” Scientific Am., vol. 265, no. 3, pp. 66-76, Sept. 1991.
[2] A. Pentland, “Smart Rooms, Smart Clothes,” Scientific Am., vol. 274, no. 4, pp. 68-76, 1996.
[3] A. Pentland, “Wearable Intelligence,” Scientific Am. Presents, vol. 9, no. 4, pp. 90-95, 1998.
[4] R. Stein, S. Ferrero, M. Hetfield, A. Quinn, and M. Krichever, “Development of a Commercially Successful Wearable Data Collection System,” Proc. IEEE Int'l Symp. Wearable Computers, pp. 18-24, Pittsburgh, Oct. 1998.
[5] Proc. IEEE Int'l Conf. Face and Gesture Recognition, I. Essa, ed., Killington, Vt., IEEE CS Press, Oct. 1996.
[6] M. Lucente, G.-J. Zwart, and A. George, “Visualization Space: A Testbed for Deviceless Multimodal User Interface, Intelligent Environments,” Proc. AAAI Spring Symp. Series, pp. 87-92, Stanford Univ., Mar. 1998.
[7] M. Turk, “Visual Interaction with Lifelike Characters,” Proc. IEEE Int'l Conf. Face and Gesture Recognition, pp. 368-373, Killington, Vt., Oct. 1996.
[8] J. Rekimoto, Y. Ayatsuka, and K. Hayashi, “Augment-Able Reality: Situated Communication through Physical and Digital Spaces,” Proc. IEEE Int'l Symp. Wearable Computers, pp. 18-24, Pittsburgh, Oct. 1998.
[9] P. Maes, B. Blumberg, T. Darrell, and A. Pentland, “ALIVE: An Artificial Life Interactive Environment,” Proc. SIGGRAPH '93—Visual, pp. 115, 1993.
[10] C. Maggioni and B. Kammerer, “GestureComputer: History, Design, and Applications,” Computer Vision for Human-Machine Interaction, R. Cipolla and A. Pentland, eds., Cambridge Univ. Press, 1998.
[11] W. Freeman and C. Weissman, “Television Control by Hand Gestures,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 179-183, Zurich, Switzerland, June 1995.
[12] B. Moghaddam, W. Wahid, and A. Pentland, “Beyond Eigenfaces: Probabilistic Matching for Face Recognition,” Proc. IEEE Int'l Conf. Automatic Face and Gesture Recognition, pp. 30-35, Nara, Japan, Apr. 1998.
[13] D.L. Swets and J. Weng, “Using Discriminant Eigenfeatures for Image Retrieval,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 16, no. 8, pp. 831-836, Aug. 1996.
[14] P.N. Belhumeur, J. Hespanha, and D. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
[15] V.N. Vapnik, Statistical Learning Theory, John Wiley & Sons, 1998.
[16] T. Jaakkola, M. Meila, and T. Jebara, “Maximum Entropy Discrimination,” Proc. Conf. Neural Information Processing, Denver, Dec. 1999.
[17] S. Ullman and R. Basri, "Recognition by Linear Combinations of Models," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 13, pp. 992-1006, 1991.
[18] T. Poggio and S. Edelman, “A Network that Learns to Recognize Three-Dimensional Objects,” Nature, vol. 343, pp. 263-266, 1990.
[19] M. Kirby and L. Sirovich,“Application of Karhunen-Loève procedure for the characterization of human faces,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103-108, Jan. 1990.
[20] M. Turk and A. Pentland, “Eigenfaces for Recognition,” J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[21] L. Wiskott, J.M. Fellous, N. Kruger, and C. von der Malsburg, “Face Recognition by Elastic Bunch Graph Matching,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775-779, July 1997.
[22] R. Rao and D. Ballard, “An Active Vision Architecture Based on Iconic Representations,” Artificial Intelligence, vol. 78, pp. 461-505, 1995.
[23] M. Oren, C. Papageorgiou, P. Sinha, E. Osuna, and T. Poggio, “Pedestrian Detection Using Wavelet Templates,” Proc. Computer Vision and Pattern Recognition, pp. 193-199, June 1997.
[24] B. Schiele and J. Crowley, “Probabilistic Object Recognition Using Multidimensional Receptive Field Histograms,” Proc. 13th Int'l Conf. Pattern Recognition, vol. B, pp. 50-54, 1996.
[25] A.J. Bell and T.J. Sejnowski, “An Information-Maximization Approach to Blind Separation and Blind Deconvolution,” Neural Computation, vol. 7, no. 6, June 1995.
[26] R. Kauth, A. Pentland, and G. Thomas, “BLOB: An Unsupervised Clustering Approach to Spatial Preprocessing of MSS Imagery,” Proc. 11th Int'l Symp. Remote Sensing of the Environment, Center for Remote Sensing Information, Ann Arbor, Mich., Apr. 1977.
[27] Y. Yacoob and L.S. Davis, “Recognizing Human Facial Expression from Long Image Sequences Using Optical Flow,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 6, pp. 636-642, June 1996.
[28] K. Mase and A. Pentland, “Lip Reading: Automatic Visual Recognition of Spoken Words,” Proc. Opt. Soc. Am. Topical Meeting on Machine Vision, pp. 1,565-1,570, Cape Cod, Mass., June 1989.
[29] T. Darrell, I. Essa, and A. Pentland, “Task-Specific Gesture Analysis in Real-Time Using Interpolated Views,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 12, pp. 1,236-1,242, Dec. 1996.
[30] J. Yamato, H. Ohya, and K. Ishii, “Recognizing Human Action in Time-Sequential Images Using Hidden Markov Model,” Proc. 1992 IEEE Conf. Computer Vision and Pattern Recognition, pp. 379-385, 1992.
[31] T. Starner, J. Weaver, and A. Pentland, “Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 12, pp. 1,371-1,375, Dec. 1998.
[32] V. Pavlovic, B. Frey, and T. Huang, “Time-Series Classification Using Mixed-State Dynamic Bayesian Networks,” Proc. Conf. Computer Vision and Pattern Recognition, pp. 609-615, 1999.
[33] A. Willsky, “Detection of Abrupt Changes in Dynamic Systems,” Detection of Abrupt Changes in Signals and Dynamical Systems, Lecture Notes in Control and Information Sciences, no. 77, Basseville and Benveniste, eds., Springer-Verlag, 1986.
[34] A. Blake, M. Isard, and D. Reynard, “Learning to Track the Visual Motion of Contours,” Artificial Intelligence, no. 78, pp. 101-133, 1995.
[35] M. Friedmann, T. Starner, and A. Pentland, “Synchronization in Virtual Realities,” Presence, vol. 1, no. 1, pp. 139-144, 1992.
[36] A. Pentland and A. Liu, “Modeling and Prediction of Human Behavior,” Neural Computation, vol. 11, pp. 229-242, 1999.
[37] H.H. Nagel, H. Kollnig, M. Haag, and H. Damm, “Association of Situation Graphs with Temporal Variations in Image Sequences,” Proc. European Conf. Computer Vision, vol. 2, pp. 338-347, 1994.
[38] Y. Kuniyoshi and H. Inoue, “Qualitative Recognition of Ongoing Human Action Sequences,” Proc. Int'l Joint Conf. Artificial Intelligence, pp. 1,600-1,609, 1993.
[39] J. Siskind, “Grounding Language in Perception,” Artificial Intelligence Rev., vol. 8, pp. 371-391, 1994.
[40] A. Bobick, “Movement, Activity, and Action: The Role of Knowledge in the Perception of Motion,” Proc. Royal Soc. B, special issue on knowledge-based vision in man and machine, vol. 352, pp. 1,270-1,281, 1997.
[41] F. Quek, “Eyes in the Interface,” Image and Vision Computing, vol. 13, 1995.
[42] P. Ekman and W. Friesen, Facial Action Coding System. Palo Alto, Calif.: Consulting Psychologist Press, 1978.
[43] G. Sperling, M. Landy, Y. Cohen, and M. Pavel, “Intelligible Encoding of ASL Image Sequences at Extremely Low Information Rates,” Computer Vision, Graphics, and Image Processing, vol. 31, pp. 335-391, 1985.
[44] D.M. Gavrila, “The Visual Analysis of Human Movement: A Survey,” Computer Vision and Image Understanding, vol. 73, no. 1, Jan. 1999.
[45] V.I. Pavlovic, R. Sharma, and T.S. Huang, “Visual Interpretation for Human-Computer Interaction: A Review,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 677-695, July 1997.
[46] D. McNeil, Hand and Mind: What Gestures Reveal About Thought. Univ. of Chicago Press, 1992.
[47] R. Bolt, “Put-That-There: Voice and Gesture at the Graphics Interface,” Computer Graphics, vol. 14, no. 3, pp. 262-270, 1980.
[48] J. Cassell, “A Framework for Gesture Generation and Interpretation,” Computer Vision for Human-Machine Interaction. R. Cipolla and A. Pentland, eds., Cambridge Univ. Press, 1998.
[49] R. Chellappa, C. Wilson, and S. Sirohey, "Human and Machine Recognition of Faces: A Survey," Proc. IEEE, vol. 83, no. 5, pp. 705-740, 1995.
[50] Computer Vision for Human-Machine Interaction. R. Cipolla and A. Pentland, eds., Cambridge Univ. Press, 1998.
[51] E. Cerezo, A. Pina, and F. Seron, “Motion and Behavior Modeling: State of Art and New Trends,” The Visual Computer, vol. 15, pp. 124-146, 1999.
[52] Int'l Conf. Automatic Face and Gesture Recognition, Zurich, Switzerland, June 1995.
[53] Proc. IEEE Conf. Automatic Face and Gesture Recognition, Nara, Japan, Apr. 1998.
[54] A. Waibel, M. Vo, P. Duchnowski, and S. Manke, “Multimodal Interfaces,” Artificial Intelligence Rev., vol. 10, pp. 299-319, 1995.
[55] R. Sharma, V. Pavlovic, and T. Huang, “Toward Multimodal Human-Computer Interface,” Proc. IEEE, vol. 86, no. 5, pp. 853-869, 1998.
[56] D. Valentin, H. Abdi, A. O'Toole, and G. Cottrell, “Connectionist Models of Face Processing: A Survey,” Pattern Recognition, vol. 27, pp. 1,208-1,230, 1994.
[57] T. Kohonen, Self-Organization and Associative Memory, Berlin: Springer-Verlag, 1988.
[58] T. Kanade, “Computer Recognition of Human Faces,” Interdisciplinary Systems Res., vol. 47, 1977.
[59] I. Craw, H. Ellis, and J.R. Lishman, “Automatic Extraction of Face Features,” Pattern Recognition Letters, vol. 5, pp. 183-187, Feb. 1987.
[60] H. Abdi, “Generalized Approaches for Connectionist Auto-Associative Memories: Interpretation, Implication, and Illustration for Face Processing,” Artificial Intelligence and Cognitive Science, pp. 151-164, 1988.
[61] G.W. Cottrell and M.K. Fleming, “Face Recognition Using Unsupervised Feature Extraction,” Proc. Int'l Neural Network Conf., pp. 322-325, 1990.
[62] P. Phillips, H. Wechsler, J. Huang, and P. Rauss, “The FERET Database and Evaluation Procedure for Face Recognition Algorithms,” Image and Vision Computing, vol. 16, no. 5, pp. 295-306, 1998.
[63] K. Etemad and R. Chellappa, “Discriminant Analysis for Recognition of Human Face Images,” J. Optical Soc. Am. A., vol. 14, pp. 1,724-1,733, 1997.
[64] B. Moghaddam and A. Pentland, “Probabilistic Visual Learning for Object Representation,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 696-710, July 1997.
[65] P. Penev and J. Atick, “Local Feature Analysis: A General Statistical Theory for Object Representation,” Network: Computation in Neural Systems, vol. 7, pp. 477-500, 1996.
[66] K. Akita, “Image Sequence Analysis of Real World Human Motion,” Pattern Recognition, vol. 17, no. 4, pp. 73-83, 1984.
[67] Proc. DARPA Image Understanding Workshop, Monterey, Calif., San Francisco: Morgan Kaufmann, Nov. 1998.
[68] R. Polana and R. Nelson, “Recognizing Activities,” Proc. IEEE Int'l Conf. Computer Vision, 1994.
[69] A. Lipton, H. Fujiyoshi, and R. Patil, “Moving Target Detection and Classification from Real-Time Video,” Proc. IEEE Workshop Applications of Computer Vision, 1998.
[70] E. Grimson, C. Stauffer, R. Romano, and L. Lee, “Using Adaptive Tracking to Classify and Monitor Activities in a Site,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 22-29, 1998.
[71] T. Boult, “Frame-Rate Multi-Body Tracking for Surveillance,” DARPA Image Understanding Workshop, Monterey, Calif. San Francisco: Morgan Kaufmann, Nov. 1998.
[72] A. Selinger and L. Wixson, “Classifying Moving Objects as Rigid or Non-Rigid Without Correspondences,” DARPA Image Understanding Workshop, Monterey, Calif., San Francisco: Morgan Kaufmann, Nov. 1998.
[73] F. Bremond and G. Medioni, “Scenario Recognition in Airborne Video Imagery,” Proc. DARPA Image Understanding Workshop, Monterey, Calif. San Francisco: Morgan Kaufmann, Nov. 1998.
[74] K. Konolige, “Small Vision Systems: Hardware and Implementation,” Proc. DARPA Image Understanding Workshop, Monterey, Calif., San Francisco: Morgan Kaufmann, Nov. 1998.
[75] K.K. Sung and T. Poggio, “Example-Based Learning for View-Based Face Detection,” Proc. DARPA Image Understanding Workshop, vol. II, pp. 843-850, Monterey, Calif., San Francisco: Morgan Kaufmann, Nov. 1994.
[76] H. Rowley, S. Baluja, and T. Kanade, “Neural Network-Based Face Detection,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 203-208, 1996.
[77] H. Schneiderman and T. Kanade, “Probabilistic Modeling of Local Appearance and Spatial Relationships for Object Recognition,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 45-51, 1998.
[78] T. Olson and F. Brill, “Moving Object Detection and Event Recognition Algorithms for Smart Cameras,” Proc. DARPA Image Understanding Workshop, pp. 159-175, Monterey, Calif. San Francisco: Morgan Kaufmann, 1997.
[79] N. Oliver, B. Rosario, and A. Pentland, “Statistical Modeling of Human Interactions,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1998.
[80] Y. Ivanov, C. Stauffer, B. Bobick, and W.E.L. Grimson, “Video Surveillance of Interactions,” Proc. IEEE Workshop Video Surveillance, Fort Collins, Colo., June 1999.
[81] I. Haritaoglu, D. Harwood, and L.S. Davis, “W4: Who, What, When, Where: A Real-Time System for Detecting and Tracking People,” 1998.
[82] D. Hogg, “Model-Based Vision: A program to See a Walking Person,” Image Vision Computing, vol. 1, no. 1, pp. 5-20, 1983.
[83] N. Badler and S. Smoliar, “Digital Representations of Human Movement,” ACM Computing Surveys, vol. 11, no. 1, pp. 19-38, 1979.
[84] K. Mase, Y. Suenaga, and T. Akimoto, “Head Reader: A Head Motion Understanding System for Better Man-Machine Interaction,” Proc. IEEE Systems, Man, and Cybernetics, pp. 970-974, Nov. 1987.
[85] P. Narayanan, P. Rander, and T. Kanade, “Constructing Virtual Worlds Using Dense Stereo Processing,” Proc. Int'l Conf. Computer Vision, Greece, 1998.
[86] K.N. Kutulakos and J. Vallino, “Calibration-Free Augmented Reality,” IEEE Trans. Visualization and Computer Graphics, vol. 4, no. 1, pp. 1-20, Jan.-Mar. 1998.
[87] S. Moeszzi, A. Katkere, D. Kuramura, and R. Jain, “Reality Modeling and Visualization from Multiple Video Sequences,” IEEE Computer Graphics and Applications, vol. 16, no. 6, pp. 58-63, 1996.
[88] M. Yachida and Y. Iwai, “Looking at Human Gestures,” Computer Vision for Human-Machine Interaction. R. Cipolla and A. Pentland, eds., Cambridge Univ. Press, 1998.
[89] N. Oliver, F. Berard, J. Coutaz, and A. Pentland, “LAFTER: Lips and Face Tracker,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 100-110, San Juan, Puerto Rico, 1997.
[90] B. Schiele and A. Waibel, “Gaze Tracking Based on Face Color,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, Zurich, Switzerland, June 1995.
[91] T.F. Cootes, C.J. Taylor, D.H. Cooper, and J. Graham, "Active Shape Models—Their Training and Application," Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38-59, Jan. 1995.
[92] D. DeCarlo and D. Metaxas, “The Integration of Optical Flow and Deformable Models: Applications to Human Face Shape and Motion Estimation,” Proc. IEEE Computer Vision and Pattern Recognition (CVPR '96), pp. 231-238, 1996.
[93] T. Jebara, K. Russell, and A. Pentland, “Mixtures of Eigenfeatures for Real-Time Structure from Texture,” Proc. IEEE Int'l Conf. Computer Vision, Bombay, India, Jan. 1998.
[94] A. Azarbayejani, T. Starner, B. Horowitz, and A. Pentland, “Visually Guided Graphics,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 6, pp. 602-604, 1993.
[95] Y. Lee, D. Terzopoulos, and K. Waters, “Realistic Modeling for Facial Animation,” Proc. Ann. Conf. Series, SIGGRAPH 1995, pp. 55-62, 1995.
[96] T. Ishikawa, H. Sera, S. Morishima, and D. Terzopoulos, “Facial Image Reconstruction by Estimated Muscle Parameter,” IEEE Conf. Automatic Face and Gesture Recognition, pp. 342-347, Nara, Japan, Apr. 1998.
[97] I.A. Essa and A.P. Pentland, “Coding, Analysis, Interpretation, and Recognition of Facial Expressions,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 757-763, July 1997.
[98] H. Li, P. Roivainen, and R. Forchheimer, "3D Motion Estimation in Model-Based Facial Image Coding," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 6, pp. 545-555, June 1993.
[99] M. Black and Y. Yacoob, “Tracking and Recognizing Rigid and Nonrigid Facial Motion Using Local Parametric Models of Image Motion,” Proc. IEEE Int'l Conf. Computer Vision, Cambridge, Mass., 1995.
[100] I. Kakadiaris and D. Metaxas, “Model-Based Estimation of 3-D Human Motion Based on Active Multi-Viewpoint Selection,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1996.
[101] J. Rehg and T. Kanade, “Visual Tracking of High DOF Articulated Structures: An Application to Human Hand Tracking,” Proc. European Conf. Computer Vision, vol. II, pp. 35-46, 1996.
[102] E. Di Bernardo, L. Goncalves, and P. Perona, “Monocular Tracking of the Human Arm in 3-D,” Computer Vision for Human-Machine Interaction, R. Cipolla and A. Pentland, eds., Cambridge Univ. Press, 1998.
[103] C. Wren and A. Pentland, “Dynamic Modeling of Human Motion,” Proc. IEEE Conf. Automatic Face and Gesture Recognition, pp. 22-27, Nara, Japan, Apr. 1998.
[104] C. Bregler, “Tracking People with Twists and Exponential Maps,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1998.
[105] D.M. Gavrila and L.S. Davis, “3-D Model-Based Tracking of Humans in Action: A Multi-View Approach,” Proc. Conf. Computer Vision and Pattern Recognition, pp. 73–80, June 1996.
[106] M. Krueger, Artificial Reality, Addison-Wesley, 1983.
[107] K. Mase, “Human Reader: A Vision-Based Man-Machine Interface,” Computer Vision Human-Machine Interaction, R. Cipolla and A. Pentland, eds., Cambridge Univ. Press, 1998.
[108] http://www.microsoft.com/billgatespdc.htm
[109] V. Pavlovic, R. Sharma, and T. Huang, “Gestural Interface to a Visual Computing Environment for Molecular Biologists,” Proc. IEEE Int'l Conf. Face and Gesture Recognition, pp. 30-35, Killington, Vt., Oct. 1996.
[110] M. Black, F. Berard, A. Jepson, W. Newman, E. Saund, G. Socher, and M. Taylor, “The Digital Office: Overview,” Proc. AAAI Spring Symp. Series, pp. 1-6, Stanford Univ., Mar. 1998.
[111] K. Jo, Y. Kuno, and Y. Shirai, “Manipulative Hand Gesture Recognition Using Task Knowledge for HCI,” Proc. IEEE Conf. Automatic Face and Gesture Recognition, pp. 468-473, Nara, Japan, Apr. 1998.
[112] S. Stillman, R. Tanawongsuwan, and I. Essa, “System for Tracking and Recognizing Multiple People,” Proc. IEEE Second Int'l Audio- and Video-Based Biometric Person Authentication, pp. 96-101, Washington, D.C., Mar. 1999.
[113] M. Cohen, “Design Principles for Intelligent Environments,” Proc. AAAI Spring Symp. Series, pp. 26-43, Stanford Univ., Mar. 1998.
[114] J.L. Crowley and F. Berard, “Multimodal Tracking of Faces for Video Communications,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 640-645, 1997.
[115] T. Darrell, G. Gordon, J. Woodfill, and M. Harville, “Tracking People with Integrated Stereo, Color, and Face Detection,” AAAI Spring Symp. Series, pp. 44-50, Stanford Univ., Mar. 1998.
[116] C. Wren, “Understanding Expressive Action,” Technical Report 498, Massachusetts Institute of Technology, Media Lab, 1999.
[117] A. Bobick, S. Intille, J. Davis, F. Baird, C. Pinhanez, L. Campbell, Y. Ivanov, A. Shutte, and A. Wilson, “The KidsRoom: A Perceptually-Based Interactive and Immersive Story Environment,” Presence, vol. 8, no. 4, pp. 367-391, 1999.
[118] K. Waters, J. Rehg, M. Loughlin, S. Kang, and D. Terzopoulos, “Visual Sensing for Active Public Spaces,” Computer Vision for Human-Machine Interaction, R. Cipolla and A. Pentland, eds., Cambridge Univ. Press, 1998.
[119] S. Shafer, J. Krumm, B. Brumitt, B. Meyers, M. Czerwinski, and D. Robbins, “The New EasyLiving Project at Microsoft,” Proc. DARPA/NIST Smart Spaces Workshop, 1998.
[120] Y. Yacoob, L. Davis, M. Black, D. Gavrila, T. Horsrasert, and C. Morimoto, “Looking at People in Action—An Overview,” Computer Vision for Human-Machine Interaction, R. Cipolla and A. Pentland, eds., Cambridge Univ. Press, 1998.
[121] G. Furnas et al., "The Vocabulary Problem in Human-System Communication," Comm. ACM, Nov. 1987, pp. 964-971.
[122] S. Mann, “Smart Clothing: The Wearable Computer and WearCam,” Personal Technologies, vol. 1, no. 1, 1997.
[123] T. Starner, S. Mann, B. Rhodes, J. Levine, J. Healey, D. Kirsch, R. Picard, and A. Pentland, “Visual Augmented Reality Through Wearable Computing,” Presence, vol. 6, no. 4, pp. 386-398, 1997.
[124] Y. Rosenberg and M. Werman, “Real-Time Object Tracking from a Moving Video Camera: A Software Approach on a PC,” Proc. IEEE Workshop Applications of Computer Vision, pp. 238-239, Oct. 1998.

Index Terms:
Looking at people, face recognition, gesture recognition, visual interface, appearance-based vision, wearable computing, ubiquitous computing.
Citation:
Alex Pentland, "Looking at People: Sensing for Ubiquitous and Wearable Computing," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 107-119, Jan. 2000, doi:10.1109/34.824823