Vol. 27, No. 1, January/February 2007, pp. 26-27
Published by the IEEE Computer Society
Takeo Igarashi, The University of Tokyo
Bob Zeleznik, Brown University
ABSTRACT
Two key research areas must advance before sketch-based interfaces can transition from a collection of individually provocative research prototypes into a computing platform of choice. First, fundamental techniques for segmenting, recognizing, and parsing collections of ink strokes must be generalized and made more robust with better user models. Second, developers must go beyond redesign and actually reinvent their applications to leverage pen input's intrinsic capacity for rapid, direct, modeless 2D expression. Solving either of these problems independently is no guarantee of success. Guest editors Takeo Igarashi of the University of Tokyo and Bob Zeleznik of Brown University introduce this special issue, which reflects on these two research themes.
Welcome to the special issue on sketch-based interaction, which comes on the heels of renewed popular excitement about pen-based computing. The recent widespread availability of Tablet PCs has reinvigorated attempts to unlock the host of advantages pen-based interaction promises over the computing world's staple: mouse-based desktop GUIs. In contrast with searching through a clutter of menus to change modes or perform actions explicitly and sequentially, pen-based sketching interfaces in particular facilitate a transparency between entering drawings, text, and commands that rivals the fluidity of pencil and paper. Yet, despite pencil and paper's success for such diverse tasks as calculating, drawing, note-taking, proofreading, and conceptualizing logical and geometric structures, the fruits of pen computing have generally met a muted reception.
Pen-based computing's rich pedigree traces back to light guns in the 1950s, predating the first mouse systems by a decade. In the decades since, hardware technologies such as the light pen have come and gone, as have a host of pioneering ventures, including Wang Freestyle, Go Corp's PenPoint OS, and Apple's Newton. These failures, however, arguably have not signaled a loss of interest in pen computing, but rather an unceasing demand for even better technical solutions. Pen input by its nature is a double-edged sword that conveniently leverages physical skills developed with pencil and paper while frustratingly raising expectations for an unattainably intuitive experience. With pencil and paper, people are free to mix geometry, text, and implied commands, as well as switch domains without a second thought. Moreover, a sketch on paper can be informal, incomplete, and even ambiguous, because it's typically processed by humans who share a broad understanding. However, for tractability, pen-based sketching interfaces leverage only narrow-domain knowledge, which cannot support the goal of free-form expressiveness. Still, the research community continues to forge important inroads that make this an exciting period in the evolution of pen computing.
Necessary advances
Two key research areas must advance before sketch-based interfaces can transition from a collection of individually provocative research prototypes into a computing platform of choice. First, fundamental techniques for segmenting, recognizing, and parsing collections of ink strokes must be generalized and made more robust with better user models. Second, developers must go beyond redesign and actually reinvent their applications to leverage pen input's intrinsic capacity for rapid, direct, modeless 2D expression. Solving either of these problems independently is no guarantee of success.
For example, consider a primitive-based diagramming application that might appear ideally suited for sketch-based interaction because of its 2D graphical nature. Just accurately recognizing shapes and their dimensions (for example, square versus rectangle versus trapezoid) is challenging, particularly when considering the imprecision and variation of rapid free-hand input. Furthermore, choosing when and how to display feedback for recognition results can significantly impact usability. Recognizing and adjusting shapes as they are drawn can be disruptive, whereas interpreting only completed diagrams can lead to complex, compound errors that can be difficult to detect and correct. Beyond balancing these basic recognition tasks, sketch-based interfaces must also be crafted to accommodate necessary abstract application input (for example, choosing styles and colors and aligning, copying, and deleting shapes), including simplified alternatives to drawing repetitive or complex primitives.
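To make the recognition challenge concrete, here is a minimal toy sketch of our own (not drawn from any of the articles) that classifies a closed pen stroke as a square, rectangle, or other quadrilateral by detecting high-curvature corners. All thresholds are arbitrary, and the stroke is assumed to start mid-edge so the stroke's seam never lands on a corner; production recognizers handle far messier input.

```python
import math

def corners(points, angle_thresh=45.0, spacing=5):
    """Indices of corner points in an ordered stroke: samples where the
    turning angle, measured across `spacing` neighbors to smooth out
    hand jitter, exceeds `angle_thresh` degrees."""
    flagged = []
    for i in range(spacing, len(points) - spacing):
        ax, ay = points[i - spacing]
        bx, by = points[i]
        cx, cy = points[i + spacing]
        v1, v2 = (bx - ax, by - ay), (cx - bx, cy - by)
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0:
            continue
        cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
        if math.degrees(math.acos(max(-1.0, min(1.0, cos)))) > angle_thresh:
            flagged.append(i)
    # Collapse each run of consecutive flagged samples to its midpoint.
    clusters, run = [], []
    for i in flagged:
        if run and i - run[-1] > 1:
            clusters.append(run)
            run = []
        run.append(i)
    if run:
        clusters.append(run)
    return [r[len(r) // 2] for r in clusters]

def classify(points):
    """Crude square/rectangle/other label for a closed stroke."""
    cs = corners(points)
    if len(cs) != 4:
        return "not a quadrilateral (%d corners found)" % len(cs)
    quad = [points[i] for i in cs]
    sides = [math.dist(quad[i], quad[(i + 1) % 4]) for i in range(4)]
    if min(sides) / max(sides) > 0.9:
        return "square"
    if (abs(sides[0] - sides[2]) < 0.15 * max(sides)
            and abs(sides[1] - sides[3]) < 0.15 * max(sides)):
        return "rectangle"
    return "trapezoid or irregular quadrilateral"

# A synthetic "hand-drawn" square, sampled every 5 units, starting
# mid-edge at (50, 0) so no corner sits at the stroke's seam.
waypoints = [(50, 0), (100, 0), (100, 100), (0, 100), (0, 0), (50, 0)]
stroke = []
for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
    n = int(math.dist((x0, y0), (x1, y1)) / 5)
    stroke += [(x0 + (x1 - x0) * t / n, y0 + (y1 - y0) * t / n)
               for t in range(n)]
print(classify(stroke))  # -> square
```

Even this caricature shows why the problem is hard: nudge the thresholds, start the stroke on a corner, or draw more sloppily, and the label changes.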
The selections
The articles we chose for this special issue reflect reasonably well on these two research themes, but they by no means cover the full scope of ongoing activity. In particular, the articles we received were heavily biased toward 3D modeling and drawing applications. This bias might be attributable to our use of the ill-defined term sketching in the call for papers. For example, to some, sketching implies any rough, informal work product, whereas to others it's tightly associated with a quickly drawn picture. Our very loose working definition of sketching is any pen-centric interaction that, relative to other forms of input, allows people to rapidly and richly express and explore their ideas.
2D applications
The first two articles discuss 2D sketching applications. The first article, "Sketch Interpretation Using Multiscale Models of Temporal Patterns" by Sezgin and Davis, examines the possibility of using temporal patterns for improving accuracy when recognizing hand-drawn 2D sketch content. Conventional recognition methods rely solely on the spatial information that sketched strokes convey, and they ignore temporal information about when the strokes were drawn. Because people characteristically follow specific patterns when drawing certain diagrams, a recognizer that exploits this knowledge should be able to avoid some errors. Building on previous work using stroke-level ordering, this article presents a hierarchical recognition model that uses both stroke- and object-level temporal ordering information gathered automatically from training data. The demonstrated accuracy improvements show that static 2D sketch recognition remains an active research area with many interesting possibilities still open.
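As a toy illustration of the underlying idea (a drastic reduction of Sezgin and Davis's hierarchical model to a first-order Markov chain, not their actual algorithm), the sketch below learns transition probabilities between object labels from observed drawing orders and uses them to break ties left by a shape-only recognizer. The circuit-sketch labels and confidence scores are invented for the example.

```python
from collections import defaultdict

def train_transitions(sequences):
    """Estimate P(next object | previous object) from labeled drawing
    orders. `sequences` is a list of label lists, each giving the order
    in which one user drew the parts of a diagram."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return {prev: {n: c / sum(nxts.values()) for n, c in nxts.items()}
            for prev, nxts in counts.items()}

def rescore(candidates, prev_label, transitions, weight=0.5):
    """Blend a recognizer's shape-only confidence (label -> score in
    [0, 1]) with the temporal prior, and return the winning label."""
    scored = {}
    for label, shape_score in candidates.items():
        temporal = transitions.get(prev_label, {}).get(label, 1e-3)
        scored[label] = (1 - weight) * shape_score + weight * temporal
    return max(scored, key=scored.get)

# Toy data: in circuit sketches, users tend to draw a resistor right
# after a wire. A zigzag stroke is ambiguous between "resistor" and
# "spring"; temporal context breaks the tie.
T = train_transitions([
    ["wire", "resistor", "wire", "capacitor"],
    ["wire", "resistor", "wire", "ground"],
    ["wire", "capacitor", "wire", "resistor"],
])
print(rescore({"resistor": 0.48, "spring": 0.52}, "wire", T))
# -> resistor: the weak shape preference for "spring" is overridden
```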
The second article, "Advances in Mathematical Sketching: Moving Toward the Paradigm's Full Potential" by LaViola, addresses the end-to-end challenges of designing an effective sketch-based system for visualizing mathematics. In mathematical sketching, users can enter 2D handwritten mathematics and drawings directly with a pen, as if using pencil and paper. However, through an elegant combination of implicit automatic interpretation and explicit gesturing, the system can computationally associate the mathematics with the diagram to create dynamic conceptual animations. This article describes the most recent advances in LaViola's prototype system, MathPad², including supporting animation based on open-form solutions, evaluating calculus expressions, solving ordinary differential and simultaneous equations, and improving symbol recognition and drawing rectification. This article also demonstrates the potential synergy of integrating fundamental recognition research with applied application design to create an intrinsically pen-based experience.
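As a rough illustration of only the final evaluate-and-animate step (not MathPad²'s implementation, which builds proper expression trees from recognized ink), the sketch below takes an already-recognized formula string, a stand-in for handwriting recognition output, and evaluates it over time to drive one coordinate of a drawn element. The helper name `animate` and the expression are ours.

```python
import math

def animate(expression, t_end=2 * math.pi, steps=8):
    """Sample a recognized formula like "y = 50 + 40*sin(t)" over time,
    producing (t, y) frames that could drive a sketched element. A real
    system parses the ink into an expression tree; eval() on a vetted
    namespace is a stand-in for brevity."""
    rhs = expression.split("=", 1)[1]
    frames = []
    for k in range(steps + 1):
        t = t_end * k / steps
        y = eval(rhs, {"__builtins__": {}},
                 {"t": t, "sin": math.sin, "cos": math.cos, "pi": math.pi})
        frames.append((round(t, 2), round(y, 1)))
    return frames

# The user wrote "y(t) = 50 + 40 sin(t)" next to a drawn ball; the
# recognized string below animates the ball's vertical position.
print(animate("y = 50 + 40*sin(t)"))
```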
3D models
The next three articles discuss using sketching to construct and edit 3D models. In "Free-Form Sketching of Self-Occluding Objects," Cordier and Seo address the problem of 3D-geometry construction from a 2D silhouette that contains self-occlusions. First, they extract a 2D skeleton on the sketching plane. Then, they derive a 3D skeleton by solving a linear optimization problem that avoids intersections while achieving C1 continuity (no discontinuities in the first derivative). Finally, they construct a 3D surface around the skeleton as an implicit surface. This article exemplifies recent advances in the active research area of constructing free-form 3D models from 2D sketches.
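To give a flavor of the last step, the following sketch uses a generic metaball-style formulation (an assumption on our part, not the authors' exact implicit surface) in which each skeleton segment contributes a Gaussian falloff to a scalar field; the surface is the field's level set at 1, and any query point can be tested for inside/outside. The helper names `field` and `dist_point_segment` are ours.

```python
import math

def dist_point_segment(p, a, b):
    """Euclidean distance from 3D point p to segment ab."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab) or 1e-12
    t = max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.sqrt(sum((p[i] - closest[i]) ** 2 for i in range(3)))

def field(p, skeleton, radius=1.0):
    """Scalar field around a polyline skeleton: each segment contributes
    a Gaussian falloff of its distance to p. The implicit surface is the
    level set field == 1 (a stand-in for the article's formulation)."""
    return sum(math.exp(-(dist_point_segment(p, a, b) / radius) ** 2)
               for a, b in zip(skeleton, skeleton[1:]))

# A bent 3D skeleton, as might come out of the 2D-to-3D lifting step.
skel = [(0, 0, 0), (2, 0, 0), (3, 1, 1)]
print(field((1, 0.3, 0), skel) >= 1.0)  # near the skeleton -> True
print(field((1, 3.0, 0), skel) >= 1.0)  # far from it -> False
```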
"Sketch-Based 3D-Shape Creation for Industrial Styling Design" by Kara and Shimada focuses on CAD models' styling design. The user first constructs a base wireframe model through interactive sketching and then builds interpolating surfaces that cover the wireframe. The final surface is generated by applying pressure-force inflation or minimization of curvature variation to the interpolating surfaces. The article's main contributions are the use of a rough, simplified 3D model as a guide for initial wireframe construction, and the introduction of new 3D curve creation and modification techniques that allow oversketching. This article is an example of many attempts to apply sketching to industrial design tasks.
The last article, "A Sketch-Based Interface for Clothing Virtual Characters" by Turquin et al., introduces a sketching interface for directly applying clothes to virtual character models. The user first draws a 2D silhouette of the intended garment on a 2D projection of a 3D body. The system then generates a corresponding 3D garment around the 3D body, so that the garment is offset from the body based on a distance inferred from the 2D sketch. This article describes extensions to a previous implementation: support for designing a garment's front and back sides, a way to indicate clothing folds, general interface improvements, and refinements based on feedback from a clothing designer. This article demonstrates the effectiveness of using a sketching interface to inspire a powerful new tool that addresses a domain-specific 3D design problem.
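The core inference is easy to caricature: at a given height, the gap the user leaves between the sketched garment silhouette and the body silhouette becomes the garment's offset from the body. The sketch below is a deliberately crude stand-in (the article infers smoother, position-dependent distances); it applies one sketched offset uniformly around a single horizontal slice, and the helper name `garment_ring` is ours.

```python
import math

def garment_ring(body_radius, sketch_x, segments=8):
    """One horizontal slice of the inferred garment. The 2D sketch tells
    us how far the garment silhouette sits from the body silhouette at
    this height; we reuse that single offset all the way around."""
    offset = sketch_x - body_radius        # gap visible in the 2D sketch
    r = body_radius + offset
    return [(r * math.cos(2 * math.pi * k / segments),
             r * math.sin(2 * math.pi * k / segments))
            for k in range(segments)]

# Body is 10 units wide at this height; the user sketched the garment
# edge at x = 14, so the garment floats 4 units off the body everywhere.
ring = garment_ring(body_radius=10.0, sketch_x=14.0)
print([(round(x, 1), round(y, 1)) for x, y in ring])
```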
Additional resources
We hope that this pen-based sketching overview serves as a useful vignette of the state of the art in this expanding research area. Several symposia and workshops dedicated to sketching can provide a more comprehensive perspective. For example, the Eurographics Workshop on Sketch-Based Interfaces and Modeling showcases recent progress in sketching each year, and it's still growing (29 papers in 2006). With luck, some of you will be inspired enough to join the crusade to make the pen mightier than the mouse, or at least a compelling alternative.

Takeo Igarashi is an associate professor in the Computer Science Department at the University of Tokyo. His research interests include user interfaces, in particular interaction techniques for 3D graphics. Igarashi has a PhD in information engineering from the University of Tokyo, where he developed a sketch-based 3D free-form modeling system (Teddy). He received the Significant New Researcher Award at Siggraph 2006. Contact him at takeo@acm.org.

Bob Zeleznik is director of research for Brown University's Computer Graphics Group and the Microsoft Center for Research on Pen-Centric Computing at Brown. His research interests include designing 2D and 3D post-WIMP user interfaces, in particular symbolic and diagrammatic pen-centric computing. Zeleznik has an MSc in computer science from Brown University, where he developed the gestural Sketch 3D modeling system. Contact him at bcz@cs.brown.edu.