As the show floor opened yesterday to a dizzying flurry of game previews, the E3 conference program took a step back into the more conceptual.
An afternoon session titled "Gaming's New Look: The Importance of Interface" asked how future game interfaces should be approached. Moderated by Computer Gaming World editor-in-chief Jeff Green, the panel featured interface experts James Ragonese of ScanSoft, Richard Marks of Sony Computer Entertainment America, and Will Wright of Maxis, the creative mind behind blockbusters SimCity 2000 and The Sims.
Underscoring a central theme of the session, Marks opened by calling attention to the problem that new input devices are often treated as gadgets or gimmicks. Marks, inventor of the PlayStation's EyeToy camera peripheral, stressed that innovative new control devices are more than just eccentric luxuries.
Marks argued that gamers instinctively want more inputs than the computer currently allows. Using first-person shooters as an example, Marks discussed how emerging devices like the EyeToy can simplify controls. Observing that players tend to unconsciously lean in the direction they want to go, Marks explained how motion-sensing technologies can link this body movement directly to the strafing controls. Wright seconded Marks, citing studies confirming that games indeed map directly into our instincts.
Ragonese, speaking from his expertise in integrating speech recognition in games, also agreed with Marks, claiming that "[New input devices] give you the ability to innovate and change things...they can bring new value to the game." Green echoed this idea of adding new value, pointing to dance pads as a recent innovation that allowed new forms of gameplay.
Despite this excitement, all four speakers stressed that emerging interfaces and devices must be used intelligently. Wright in particular expressed concern, commenting that he wants to feel immersed in a virtual world and not as if he's using a device. Green later related a humorous anecdote of the problem, describing how playing Counter-Strike with voice chat jarringly reminded him that his teammates were actually teenagers instead of soldiers.
Ragonese argued that new interfaces must be used specifically and with purpose. As an example, Ragonese showed video footage of an experimental version of Prince of Persia that his group had augmented to allow spoken commands. Ragonese soon realized that some commands, such as "drink water," were well suited to speech, whereas more visceral actions, such as stabbing enemies, seemed awkward when verbalized.
In contrast, Ragonese showed a recorded demo of speaking orders to an offensive line in a football game. Because these commands are actually spoken in real football, this use of speech recognition struck Ragonese as much more natural. From these experiments, Ragonese concluded that developers can't just throw devices at an existing game; rather, they must change the original game or "insert [new interfaces] in a natural place where it enhances value."
Wright, however, argued that the basic design of such interfaces is a much harder problem than their appropriate use. Wright asked, "How do we convey to the player what the in-game language is?" Responding to Ragonese's Prince of Persia demo, Wright questioned how players would know what commands are supported, even if the commands did blend into the game well. Wright described how Maxis' game interfaces are tested by their immediate accessibility to brand-new players: "We put people in front of the game and don't tell them a thing."
Wright also proposed that emerging interfaces should allow some type of interaction that wasn't possible before. In particular, Wright expressed interest in emotive computing, the ability to convey information about the player's emotional state. Marks offered a slightly different take, stating of new interfaces, "You're not looking for a problem to solve; you're looking to add value."