
Evolutions in Gaming

Danna

Pages: 7-10


Game consoles are encroaching on PC territory, says Mark Claypool, associate professor in the computer science department at Worcester Polytechnic Institute. "Now they have hard drives, main memory, a programmable processor, a big graphics card, and networking." While game consoles aren't upgraded as often as PCs are—the last upgrades came four and five years ago—the most recent upgrades share the common denominator of "more": more power, better graphics, and more connections with other players. Last year's releases from the big three in gaming consoles—Microsoft's Xbox 360, Nintendo's Wii, and Sony's PlayStation 3—represent an evolution in both technology and purpose, expanding game play and its applications.

Great New Graphics

The new releases from Microsoft and Sony, says Claypool, have concentrated largely on high-end graphics performance. These consoles have major graphics cards in them, and the graphics rendering has reached the point where it's almost photorealistic. "The hardware technology that creates the rendering of the virtual world at the end point, where the client is sitting—whether that's a console or a high-end PC—that's gotten really good," says Glenville Armitage, associate professor of telecommunications engineering and director of the Centre for Advanced Internet Architectures at Swinburne University of Technology. The Nintendo Wii also stepped up its graphics, but not nearly as much as the other two consoles.

"The Wii has concentrated more on a slightly different—and hopefully larger—target audience," Claypool says. Nintendo has innovated where the other two have not, with a novel interface for input to the games. The Wii gyroscope controller can detect the player's motions. The controller mimics many typical gaming actions, such as shooting a weapon, but can also mimic sports actions, such as swinging a golf club or rolling a bowling ball. This different kind of input can appeal to a broader set of game players, Claypool says, especially those that wouldn't otherwise pick up a traditional game console controller.

Powerful Processors

The powerful processors that companies are loading into consoles anticipate new challenges in computer technology. Pradeep Dubey, senior principal engineer and manager of innovative platform architecture at Intel, says that game consoles have moved toward a dual-processor architecture. Most have a CPU for general processing tasks and a graphics processing unit (GPU) for graphics rendering. Both the Xbox 360 and the PlayStation 3 pair a CPU with a GPU. The Xbox 360 has three 3.2-GHz PowerPC processor cores, and the PlayStation 3 has the new Cell processor, designed from scratch in a joint effort among IBM, Sony, and Toshiba. Both consoles use traditional GPUs.

Next, Dubey says, "you could argue that you need other specialized processors like a physics processor to do the physical realism. Then you can argue for the next thing, which is artificial intelligence, to do the behavioral realism, and you'll need an APU or AI processor." It's possible to create a console architecture with such a system of chips, he says, because these processors are already in the marketplace. But the challenge then becomes how the developer can wear four or five different programming hats to program the CPU, GPU, PPU, and APU. Dubey posits that the alternative is to make the software developer's life easier by offering one chip and one programming model. Coming up with an architecture that handles all these tasks while simplifying the developer's job, Dubey says, "is definitely our goal for doing the research that we do here at Intel."
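To make that trade-off concrete, the following minimal Python sketch is purely illustrative: the device names, the task functions, and the submit() helper are hypothetical stand-ins, not Intel's or any console vendor's actual toolchain. It contrasts programming several specialized processors separately with hiding them behind a single programming model, which is the alternative Dubey describes.

```python
# Illustrative only: device names and the submit() helper are hypothetical
# stand-ins, not any vendor's real console programming toolchain.
from concurrent.futures import ThreadPoolExecutor

# One worker per "processor"; on real hardware each would be a separate
# chip with its own toolchain (the "four or five programming hats").
DEVICES = {name: ThreadPoolExecutor(max_workers=1)
           for name in ("cpu", "gpu", "ppu", "apu")}

def submit(device, task, *args):
    """Single entry point: the one-programming-model alternative."""
    return DEVICES[device].submit(task, *args)

def integrate_physics(dt):
    return f"physics step of {dt:.4f}s"

def render_frame(scene):
    return f"rendered {scene}"

def plan_npc_behavior(npc):
    return f"{npc} decided to flee"

# One game tick expressed against one model instead of four device APIs.
futures = [
    submit("ppu", integrate_physics, 1 / 60),
    submit("gpu", render_frame, "arena"),
    submit("apu", plan_npc_behavior, "guard"),
]
for f in futures:
    print(f.result())
```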

Putting in hard drives and more powerful processors has brought consoles nearer to PC standards, and closer to PC problems as well. "You have the same issue," Claypool says. "As computers have gotten more powerful, putting more silicon into a smaller area, you have to keep them cool. That holds for the consoles as well."

Networking

Perhaps most remarkable, however, is that the three vendors have provided networking capabilities for their consoles, effectively erasing the line between game playing on computers versus game playing on consoles. With this online component, players can download software, demos, and patches as well as connect to other players for many games. Armitage sees the new connectivity as the point where the next set of challenges for consoles may occur. The underlying Internet connectivity between the different players and the actual servers that are hosting the game, along with fabulous graphics, helps create a sense of immersion in a virtual reality. "That interconnectivity is still kind of shaky," Armitage says. "It can break that illusion of immersion."

The feeling that players are colocated in the same virtual environment can be disrupted if the flow of packets going back and forth between game servers and clients slows down. When slowdowns occur, the split-second timing that players in first-person shooter games depend on to be competitive suffers. That usually happens when other users on the network surf the Web, send email, or download files. "Game traffic doesn't take up much bandwidth, but it is very sensitive to timing," says Armitage. "When the game client sends a packet out to the Internet towards the server, it's a small packet that doesn't take up much bandwidth, but it does need to get to the server in a very timely fashion. The packets going in and out of the house associated with Web surfing or email or BitTorrenting can often be very large packets that may consume a lot of bandwidth. But users actually don't really care whether their email or downloads are delayed by a couple of milliseconds from time to time. The game player does care."
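One common response to that mismatch is to serve small, latency-sensitive packets ahead of large bulk transfers at the bottleneck, typically the home router's uplink. The toy Python sketch below illustrates only that general idea; the 200-byte threshold and the packet format are assumptions, not how any particular router firmware implements quality of service.

```python
# Toy sketch of priority queuing at a home router's uplink; the 200-byte
# threshold is an assumed heuristic for illustration, not real firmware.
import heapq

GAME_PRIORITY, BULK_PRIORITY = 0, 1   # lower value is served first

queue, order = [], 0

def enqueue(packet):
    """Small packets are treated as latency-sensitive and jump the queue."""
    global order
    priority = GAME_PRIORITY if packet["bytes"] < 200 else BULK_PRIORITY
    heapq.heappush(queue, (priority, order, packet))  # order breaks ties
    order += 1

for p in [{"app": "bittorrent", "bytes": 1500},
          {"app": "game", "bytes": 80},
          {"app": "email", "bytes": 1400},
          {"app": "game", "bytes": 90}]:
    enqueue(p)

while queue:
    _, _, packet = heapq.heappop(queue)
    print("sending", packet["app"], packet["bytes"], "bytes")
```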

Armitage and his team recently completed a prototype for a system that lets an ISP remotely manage the home router. ANGEL (Automated Network Games Enhancement Layer, http://caia.swin.edu.au/sitcrc/staticpages/index.php?page=angel) uses statistical modeling to identify when the flow of traffic in and out of a particular home belongs to a game. "We can predict that within about a second of the game traffic starting up," Armitage says. "With that information, the computer system can send a specially coded message back down the line to the home router that says 'here's a particular flow of traffic going through you right now that's a game.'" The home router then gives that traffic preferential treatment.
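ANGEL's actual statistical models aren't reproduced here, but the general shape of the idea might look like the sketch below: flag a flow as game-like when its packets are small and arrive at a steady, frequent rate, then tell the home router to prioritize it. The thresholds and the notify_router() hook are invented for illustration; they are assumptions, not the project's real classifier or signaling protocol.

```python
# Rough sketch of the idea only, not ANGEL's real model or protocol.
from statistics import mean

def looks_like_game(packet_sizes, interarrival_s):
    small_packets = mean(packet_sizes) < 200   # bytes (assumed threshold)
    frequent = max(interarrival_s) < 0.1       # seconds (assumed threshold)
    return small_packets and frequent

def notify_router(flow_id):
    # Stand-in for the "specially coded message" sent to the home router.
    print(f"router: give flow {flow_id} preferential treatment")

# Roughly one second of observed traffic for a hypothetical flow.
sizes = [60, 72, 88, 64, 70]
gaps = [0.02, 0.03, 0.02, 0.025]

if looks_like_game(sizes, gaps):
    notify_router("192.168.1.10:27015 -> game server")
```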

Game Play

"While the hardware improvements have been amazing," says Claypool, "innovation in terms of the kind of game play is a lot less clear. You have the same kinds of input, the same kinds of shooters, the same kinds of action games." He says that there has been a bit of innovation with artificial intelligence in game design, but not much. "That's why people play other people," he says. "Other people are much more sophisticated than any kind of AI that's been programmed so far."

However, Claypool notes that intriguing changes have emerged in the online communities playing games. Interesting socialization occurs in massively multiplayer groups. "People get together and they do other things you didn't expect," says Claypool, "like form clans, tell stories, chat, and buy items such as weapons in the game and sell them outside in the real world for money." The game Second Life is a virtual world where people transact in the world's designated currency, for example buying real estate with Linden dollars. Linden dollars are purchased with real money, at an exchange rate. Claypool considers this emergent behavior—people using games in different ways—as a different kind of game play, in which people make their own rules.

Learning from Games

There's a growing interest in using game play mechanics to educate players and help them learn something, such as math, history, economics, disaster or development planning, and skills or job training. The Serious Games Initiative (www.seriousgames.org), a group in Washington, D.C., addresses the educational and enterprise uses of games. The Initiative holds a summit each year that showcases research and development in serious games.

Mark Oehlert, learning strategy architect at Booz Allen Hamilton, says "one of the reasons that game-based learning is powerful as a learning methodology is that the goal of a game is to fail incrementally and learn incrementally." Game players return to a game repeatedly, trying to improve their score. "It really builds this powerful learning pattern over the content that you are working through," Oehlert says. "When you finally achieve success, you've got this really strong mental pattern that overlays the content." He notes that Kurt Squire, a professor at the University of Wisconsin, did his dissertation research on teaching history to students using the game Civilization III. "The game builds in powerful ideas about diplomacy, the development of technology, and the impact of tax policy on your civilian population," Oehlert says. These sophisticated concepts are built into the game play. When players lose because they failed to appreciate the game's diplomatic aspects, they pay attention to them the next time. "It's kind of this covert learning that's really powerful for these kids," Oehlert says. "It gets past some of the barricades that kids have built up that learning isn't going to be fun."

Business Games

Oehlert says the corporate environment can benefit from using games as well. A business situation can be cast as a game because, he says, a game is really just a decision-making scenario. By changing elements in the game, such as adjusting time limits, the resources available to players, or the end state they have to reach, a game can address corporate problems such as training, marketing, and planning. Games provide infinite what-ifs, the ability to compress time, and the opportunity to engage in high-consequence activities without fear of failure. These aspects of games can help corporations address important business problems.

Game culture, such as competitiveness, can be harnessed by encouraging employees to post high scores on training tests. Oehlert says companies can use games to create a virtual headquarters for orienting new employees. "These games have editors that allow you to change scenery, characters' appearance," he says. "You can recreate your corporate headquarters, create an avatar that looks like a generic corporate employee, and then present a tour of the headquarters and show different plants." Using a game to make movies, called machinima, is much less expensive than making a traditional corporate film, and it's much easier to edit and update.

Corporate advertising and marketing departments are using virtual worlds to create virtual brands, placing corporate offices and services there. IBM is using Second Life to help with orientation training for its global work force. When new employees in North America, Latin America, and Asia join the company, IBM can create a virtual place for them to connect inside of Second Life. "Instead of flying them all to New York, everybody can come into that world and interact with each other," Oehlert says. "That might not be as good as face to face, but it is more efficient than flying everybody to one place, and certainly more effective than just swapping emails."

Game simulation capacities can be very useful, as well. "Games allow you to engage in high-consequence, low-volume activities," Oehlert says. Games can help corporations create scenarios beyond the traditional business simulation of a spreadsheet to test large-scale budgeting decisions. Oehlert worked with the US Air Force using a virtual world to lay out future air bases, modeling traffic flows and logistic requirements before actually building the bases. "PricewaterhouseCoopers created a first-person perspective game to teach its employees about financial derivatives," he says. "UPS has considered using Xboxes to create safe-driving games for their drivers."

In Brief: Project CHIL Is Advancing Some Cool Ideas

Greg Goth

Computers in the Human Interaction Loop is a €24 million, European Union-funded research project with academic and industrial partners in Europe and the United States. CHIL (http://chil.server.de/servlet/is/101) promotes human-computer interactions in which humans aren't forced to adapt their behaviors to accommodate technology. Instead, the project aims for machines to better understand human activities and intentions and to "intuitively" supply the appropriate assistance.

"What CHIL is all about is potentially turning computer services a little bit upside down," says Alex Waibel, CHIL's scientific coordinator. "People can observe other people and then act upon what they observe. A secretary or butler, for example, would be in a position to observe people and then offer assistance when necessary. One of the problems why we can't have computers do this is they can't really observe the human context. They have a rather primitive way of observing the world."

Support or Spy?

Producing more perceptive computing tools requires approaching design parameters from new angles. For example, studies at the Royal College of Art hypothesize that when designing social-interaction systems, developers might need to rethink traditional requirements-based design, which focuses on task-related end points. CHIL design takes inspiration from this by taking a more subjective approach in which users' values play a more critical role.

One example of the difficulties surrounding these new parameters was in the initial evaluation of a tool called the Relational Cockpit. Developed by the project partners, the RC aimed to give people more insight into their behaviors in meetings and facilitate more productive meetings. However, test subjects felt the tool's original design had properties that could be construed as being unduly judgmental: "It was clear from the discussion that participants did not believe that computers could reliably infer the attentional and emotional states or the mood of people," evaluators concluded. "In the end, the attitude towards RC was ambivalent: as someone said, 'Much depends on how the service is implemented: there is a difference between such service as support or as judgment.'"

Waibel says that users must also feel that they can trust CHIL-capable tools to ultimately work on their command. No matter how capable a machine is, particularly one that can track location or activities, users will want to be able to turn it off if they want privacy or want simply to be left alone. Additionally, Waibel says, contextual requirements might also determine how much of the perceptually enhanced technology will be deployed.

"For an elderly person in frail health, for example, if you had a room with cameras and microphones installed with the intent of providing emergency support, if the alternative is going into a nursing home, I think that's a situation where most of us would rather be observed."

Futuristic Collaboration

Waibel says some projects have advanced serendipitously, with some results differing slightly from the original concept. One such tool is a targeted audio technology from DaimlerChrysler that can "steer" audio beams to a specific user via phased-array acoustic principles (see http://chil.server.de/servlet/is/9141).

"It sounds like science fiction, but it's actually starting to work," Waibel says. "We have a demo system in our laboratory."

Over the long term, however, perhaps CHIL's greater accomplishment will not lie in prototype end-user tools, but rather in the supporting eight-layer CHIL architecture (http://chil.server.de/servlet/is/5270/software_architecture.pdf?command=downloadContent&filename=software_architecture.pdf), to which developers worldwide can direct multimodal design efforts.

Call it Intuition?

Michael Tarr, a professor in the Department of Cognitive and Linguistic Sciences at Brown University, says that technological advances such as faster processing and better algorithms aside, the success or failure of perceptually advanced tools will often depend on design elements far removed from computer science laboratories.

"A large part of what's going to make projects like CHIL work is not just identifying the potential of cognition and perception, but developing a better process to make those things be workable and be part of products in a way that isn't done for the most part right now," Tarr says.

He notes that while we can benefit from specific principles from cognitive science or computer science, we'll also benefit from our intuition. The best perceptual products, he says, will emerge from an as-yet-unenvisioned melding of scientific advancements and aesthetic concerns. In this seat-of-the-pants "guerrilla design," it might come down to putting in more processing or learning, but it really depends: "It's clear sometimes the best designers aren't people who know about the scientific issues."

CHIL began in January 2004 and will run through August 2007. An annual workshop called CLEAR (classification of location, environment, activities, and relationship) will also start this year (8-9 May in Baltimore), co-sponsored by the National Institute of Standards and Technology, to continue researching and demonstrating projects. CHIL partners also hold a technology demonstration day for developers interested in licensing CHIL-derived technology. This year's demonstration will be held 12 July in Karlsruhe.
