
Digital-Content Authoring

Takeo Igarashi, University of Tokyo
Radomír Měch, Adobe

Pages: 16-17

Computer graphics techniques take data, or content, as input and generate static and dynamic images on the screen. Traditionally, experts trained to use professional tools such as Maya and Photoshop created the input content, and consumers passively enjoyed the resulting images. However, increasingly powerful PCs and fast Internet connections have turned consumers into active creators who generate their own digital content and share it with millions of other users. For example, millions of consumers are sharing illustrations on deviantART, videos on YouTube, photos on Flickr, creatures in Spore, and virtual environments in Second Life. In addition, touch input devices in the form of smartphones and tablets, now in the hands of everyday users, make creating content in a casual setting easier and more compelling.

New Challenges

This trend presents many novel technical challenges to digital-content creation. Users often lack intensive training, so content-creation tools must be very easy to use. Professional modeling packages present far too many operations for casual users, so tool designers must carefully select a subset of these operations that keeps the user interface intuitive while still allowing creation of a variety of models.

Also, tools should leverage the abundant resources on the Internet by allowing easy combination and modification of existing content. Tools must provide a powerful search method to find appropriate resources and an editing method to combine them. They must also be able to efficiently handle the large amount of data downloaded from the Internet (or obtained by scanning), to support creative exploration.

In This Issue

This special issue presents recent advances in such digital-content-creation techniques, ranging from 3D modeling to behavior authoring and image editing.

In the tutorial, Pushkar Joshi presents curve-based modeling as an emerging paradigm that exploits pen- or touch-based input. He subdivides it into three approaches, by increasing order of difficulty: extruding 2D shapes, inflating 2D shapes, and drawing and surfacing 3D curves. For each approach, he points out areas that might be particularly easy or difficult to implement and provides references for further study.

In "NaturaSketch: Modeling from Images and Natural Sketches," Luke Olsen and his colleagues improve on techniques for sketch-based 3D modeling. Users sketch an intended 3D model; the system automatically inflates the 2D sketch to generate a 3D model. Unlike previous systems, this one supports more natural sketches that contain overlapping and multiple strokes. It also supports annotations to control the inflation.
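The inflation idea behind such sketch-based modelers can be sketched in a few lines. The toy function below (names and the dome-shaped height profile are illustrative assumptions, not NaturaSketch's actual algorithm) lifts each interior point of a closed 2D outline to a height proportional to its distance from the outline, yielding a rounded 3D surface:

```python
import math

def point_in_polygon(x, y, poly):
    """Standard even-odd ray-casting test for a closed polygon."""
    inside = False
    n = len(poly)
    for k in range(n):
        (x1, y1), (x2, y2) = poly[k], poly[(k + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def inflate_silhouette(boundary, grid_res=32):
    """Toy sketch inflation: map each grid point inside the outline to a
    height that grows with its distance to the boundary, so the silhouette's
    center bulges outward like an inflated balloon.
    `boundary` is a list of (x, y) vertices; returns {(i, j): height}."""
    xs = [p[0] for p in boundary]
    ys = [p[1] for p in boundary]
    minx, maxx, miny, maxy = min(xs), max(xs), min(ys), max(ys)
    heights = {}
    for i in range(grid_res):
        for j in range(grid_res):
            x = minx + (maxx - minx) * i / (grid_res - 1)
            y = miny + (maxy - miny) * j / (grid_res - 1)
            if point_in_polygon(x, y, boundary):
                d = min(math.hypot(x - bx, y - by) for bx, by in boundary)
                heights[(i, j)] = math.sqrt(d)  # sqrt gives a rounded profile
    return heights
```

Running this on a roughly circular outline produces the tallest heights at the center and heights near zero at the rim; a real system would additionally mesh the result and, as NaturaSketch does, let annotations modulate the inflation.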

In "Large-Scale Physics-Based Terrain Editing," Juraj Vanek and his colleagues deal with editing large terrain models. Modeling systems should respond interactively to user edits. However, in many cases, the sheer volume of data presents a challenge. For example, for an erosion simulation on a large terrain model, the number of computations can be so large that the computer can't respond in real time. Vanek and his colleagues applied multiresolution tiling of a large terrain model to accelerate the simulation of erosion using a GPU. With their technique, users can edit large terrain models at interactive frame rates.
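The core trade-off in such multiresolution schemes is simple: spend simulation resolution where the user is working. The following toy rule (parameter names and values are illustrative assumptions, not taken from Vanek and his colleagues' system) assigns each terrain tile a grid resolution that halves with distance from the current edit:

```python
import math

def tile_lod(tile_center, edit_pos, base_res=256, levels=4, falloff=100.0):
    """Toy multiresolution tiling rule: tiles near the user's edit are
    simulated at full resolution; tiles farther away drop to progressively
    coarser grids so the overall erosion step stays interactive."""
    d = math.dist(tile_center, edit_pos)        # distance from edit to tile
    level = min(levels - 1, int(d // falloff))  # clamp to the coarsest level
    return base_res >> level                    # halve resolution per level
```

A GPU implementation would then run the erosion kernel on each tile at its assigned resolution and blend results across tile boundaries to avoid seams.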

In "A Behavior-Authoring Framework for Multi-actor Simulations," Mubbasir Kapadia and his colleagues present a scripting system in which domain specialists define basic behaviors and users easily customize these behaviors for their own simulations with simple scripts. The system then uses a heuristic-search planner to generate complicated interactions among agents. In this way, the framework offloads some of the more complex modeling tasks to the specialists who define the scripts.
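The two-tier division of labor can be illustrated with a minimal registry pattern (all names here are hypothetical, not from Kapadia and his colleagues' framework): specialists register parameterized behaviors, and end users compose them with a simple script of name/parameter pairs:

```python
# Registry populated by domain specialists.
BEHAVIORS = {}

def behavior(name):
    """Decorator that registers a specialist-defined behavior under a name."""
    def register(fn):
        BEHAVIORS[name] = fn
        return fn
    return register

@behavior("move_to")
def move_to(actor, target, speed=1.0):
    actor["goal"] = target
    actor["speed"] = speed

@behavior("wait")
def wait(actor, duration=1.0):
    actor["idle_for"] = duration

def run_script(actor, script):
    """Apply a user-authored script: a sequence of (behavior name, kwargs)
    pairs that customizes the predefined behaviors for one actor."""
    for name, kwargs in script:
        BEHAVIORS[name](actor, **kwargs)
    return actor
```

In the real framework, a heuristic-search planner would then sequence and interleave such behaviors across many actors; here the script is applied verbatim to show the authoring interface only.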

In "Photosketcher: Interactive Sketch-Based Image Synthesis," Mathias Eitz and his colleagues address the problem of photo composition. The user sketches the rough shape of a desired image, and the system automatically searches for images necessary for the composition. The system then cuts out and stitches these images together to generate the final image. By using a database of existing images, the system can present a much better solution than would be possible with just an algorithm.
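The first stage of such a data-driven pipeline is retrieval: compare a descriptor of the user's sketch against precomputed descriptors of database images and keep the closest match. The snippet below is a deliberately simplified stand-in (an L2 nearest-neighbor search over made-up descriptor vectors, not Photosketcher's actual descriptors or index):

```python
import math

def best_match(query_desc, database):
    """Toy retrieval step: return the database entry whose precomputed
    descriptor vector is closest (Euclidean distance) to the sketch's
    descriptor. Each entry is a dict with a "desc" vector."""
    return min(database, key=lambda item: math.dist(item["desc"], query_desc))
```

A production system would replace the linear scan with an approximate nearest-neighbor index and follow retrieval with the cut-and-stitch compositing stage.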

Although these five articles address different problems, they provide a good overview of techniques common to authoring problems in general. NaturaSketch, Photosketcher, and the methods described in the tutorial use sketching as user input. Sketching is becoming popular as an alternative input method for content authoring because it lets users quickly depict the shape they have in mind as a collection of free-form strokes.

Kapadia and his colleagues use scripting to let users participate in behavior authoring. Scripting itself isn't new. However, recent systems use scripts everywhere to allow end-user customization; Kapadia and his colleagues' system is a good example of this trend.

Vanek and his colleagues use physical simulation as part of content creation. Physical simulation isn't new either, but more and more authoring systems are integrating interactive physical simulation, as this example shows.

Finally, Photosketcher is a good example of a data-driven approach. With the abundant materials available on the Internet, data-driven methods are increasingly popular in authoring.

We hope this special issue gives you hints and inspiration for addressing diverse problems in digital-content authoring.

About the Authors

Takeo Igarashi is a professor in the University of Tokyo's Department of Computer Science. His research interest is user interfaces, focusing on interaction techniques for 3D graphics. Igarashi has a PhD in information engineering from the University of Tokyo. Contact him at
Radomír Měch is a senior research manager at Adobe's Advanced Technology Labs. His research interests are procedural modeling, rendering, and 3D printing. Měch has a PhD in computer science from the University of Calgary. Contact him at