Views from the Road – SXSW 2011

Jay Iorio, Innovation Director, IEEE Standards Association (IEEE SA)

My inaugural visit to South by Southwest (SXSW) was two-plus weeks ago, and I may still be recovering.  SXSW is basically a small city of creative people moving into a small city of welcoming and open-minded residents for a couple of weeks.  The trip turned out to be a whirlwind of conversations with fascinating people working on some incredible projects.

Definitely worth the trip.

As background, I work at the IEEE Standards Association in emerging technologies.  My trip to SXSW centered on examining ways we might advance the role of standards using less formalized consensus mechanisms for the converging interactive world(s).  I was also hoping to hear about new developments in the arena of virtual worlds, which is one of my specialties.

This year, there wasn’t much talk about virtual worlds at all, with social networking in all its permutations occupying center stage.  Regardless, I found no shortage of imagination and ideas in many related areas among the people I encountered.  There was also, in general, a sophisticated understanding of how consensus can be artfully applied to dynamic technologies so as to hasten their evolution and adoption.

While I didn’t take copious notes, here are some of the panels, exhibits, talks and keynotes that I found most interesting at the conference:

Mike Kruzeniski (Microsoft) gave an insightful keynote called “How Print Design is the Future of Interaction.”  He talked about the evolution of the user interface and how principles of print design (minus the paper) — e.g., typography, grid design, the intertwined goals of aesthetic beauty and clarity of communication — are the (near) future of web design.  He said print may be dead, but print design is not, making a strong case that there is a lot to be learned from hundreds of years of experience in print-based typography, layout and design.  Through examples from advertising and elsewhere (and referring to the documentary “Helvetica”), he persuasively argued that too much information conveys less than what might appear to be too little information.

The Screenburn Exhibit.  One developer created a first-person shooter controlled by Rock Band hardware — in this case, the drum pads.  He explained that while these games are being discontinued, people still have the hardware, which can be repurposed as input for new games.  So, I shot a monster with a snare drum.  The highlight for me was Microsoft Kinect, whose motion-sensing technology is made possible by an IEEE Standard.  Microsoft set up a tent with a built-in Kinect camera so you could walk in, move your body any which way, and watch a startlingly accurate avatar trace every move, with no perceptible lag.  It even nailed my body shape — obviously and a little disturbingly so.  Unfortunately, there doesn’t seem to be any move to use Kinect for Second Life or OpenSimulator avatar control.

A panel on Game Publishing Evolution explored the evolution being driven largely by the mobile-game explosion.  An interesting insight was that the games world seems to be evolving much as the film and music industries have: many of the most innovative games are created by very small companies (which one panelist referred to as “indies,” explicitly evoking the music world) and are then picked up by the behemoths (EA, Sony), which have the marketing and other muscle needed to get the product to potential customers.

A keynote speech by Seth Priebatsch focused on applying game mechanics to real-world situations, from global warming to education.  He asserted that the last decade was the “social” decade, and that the upcoming era will use game logic — incentives, levels, goals, and so on — as a way to analyze and attack complex problems.  You can find his presentation here.

An augmented-reality panel at the Hilton, while marketing-focused, offered some interesting bits and a heavy focus on location-based computing.  Panelists discussed integrating data from various sources and taking into account not only the user’s physical location but also the user’s behavior.  One speaker imagined an RFID-enabled Internet of objects with which an AR-equipped user would interact in a dynamic, customized fashion.  Companies are developing AR contact lenses, glasses, eye-tracking technology, and more.  I have a feeling that this area is going to move from sci-fi to mass market surprisingly quickly — some of the examples were very compelling, useful, and would involve virtually no learning curve.

Amish Patel and Kay Hofmeister from Microsoft gave a great talk on the Future of Touch Interface Design.  The key insight for me centered on the three stages of a technology’s development:

  1. New technology (experimentation)
  2. Copy old language (product incorporation, using the “old” language)
  3. New language (the technology develops its “own” language)

For example, in Stage 1, film began as a technical experiment.  In Stage 2, it evolved into a tool for capturing live performances with a stationary camera, which missed the medium’s potential; the language of film had not yet been developed.  The speakers used “Citizen Kane” as an example of film’s evolution to Stage 3: the camera became an active participant in storytelling, moving, tracking, and panning to convey point of view.  Film at that point developed its own language, which has changed remarkably little since then.

Another example was the mouse.  In Stage 1, it started as a Doug Engelbart experiment in the late 1960s; in Stage 2, it was introduced commercially but paired with a command-line interface, which again missed the device’s potential.  The mouse reached Stage 3 when the Xerox Star and the Macintosh brought it to life within a graphical user interface in the 1980s.  The mouse began to develop its own language at that point, but it has not changed much since then.

One main assertion was that touch actually predates the mouse as an experiment.  It was first employed in kiosks and point-of-sale devices, but, the panelists argued, it has yet to reach Stage 3.  The speakers made a strong case that touch still uses the language and behavior of the mouse, the hardware keyboard, and mouse-style scrolling, which means it has yet to break free conceptually from the language of the GUI and develop one of its own.  The Microsoft approach to Stage 3 involves several factors: body awareness, multitouch, and multimodal input (touch and voice, touch and pen, touch and air — e.g., Kinect).