Tuesday, August 24, 2010

Coding Our Faces for the Crowd

In a recent post "How the Metaverse Was Won", John Lester (@pathfinder) asks a simple question about Neal Stephenson's Snow Crash.

What was the one thing that made the Metaverse in Snow Crash broadly successful?

And he reminds us (or maybe just me; I had forgotten) that it was Juanita's faces. From Snow Crash:
And once they got done counting their money, marketing the spinoffs, soaking up the adulation of others in the hacker community, they all came to the realization that what made this place a success was not the collision-avoidance algorithms or the bouncer daemons, or any of that other stuff.  It was Juanita’s faces.
Just ask the businessmen in the Nipponese Quadrant.  They come here to talk turkey with suits from around the world, and they consider it just as good as face-to-face.  They more or less ignore what is being said as a lot gets lost in translation, after all.  They pay attention to the facial expressions and body language of the people they are talking to.  And that’s how they know what’s going on inside a person’s head – by condensing fact from the vapor of nuance.
John suggests that coding our facial expressions and body language into virtual worlds is the key to improving the efficacy of those worlds. The rest of John's post, along with the comments, is a great read.

But that's not what I meant to say

I've been thinking a lot about the subtleties of communication and dialogue: how much of it can get lost in pure text, and how sometimes the most important bits trickle through the cracks of interpretation or translation.

Using voice (VoIP) can be easier than text chat, but in the physical world we often rely on our ability to read faces and body language to draw out deeper meaning. Watch "Ghost" by Marco Brambilla for an example. (Try it with and without sound.)

[Embedded video: "Ghost" by Marco Brambilla]

It's striking to me how much meaning can be extracted from that piece without words - but is it the intended meaning?

I remain skeptical that translating facial expressions and body language would make Second Life broadly successful, as it did in Stephenson's world. It's not that I think our visual language is unimportant; clearly it matters. But I wonder how much of a tax it might place on interactions - speaking strictly for myself, though I'll venture that it applies more broadly.

For example, as a musician I have the good fortune to meet a lot of people from all over the world. It's really the best part of what I do. Every encounter begins with the virtual equivalent of a first impression, and that's where we make or break a potential connection.

We know the world is rich with diversity along many dimensions - language, culture, thought, experience, and more. That creates a unique communication challenge. When I'm trying to understand what someone needs or is asking online, I might do any number of things: lean forward on my elbow and rub my forehead, lean back and fold my arms, talk to myself out loud, even get up entirely to pace or grab a cup of tea.

These physical reactions are just me processing, and the freedom to go through these "wtf" exercises lets me ease into a conversation with less stress than in face-to-face encounters, where I have to worry about what misinformation my body language might be communicating.

Now imagine how those actions and facial expressions might come across to the other person if they were coded into the conversation. Worse still, imagine if any of those expressions were considered culturally offensive. The result could be unnecessarily disastrous.

I know it sounds like I'm suggesting that technical advances intended to make us "more human" might actually impede or even disrupt the humanness and connectedness that virtual worlds can provide.

Maybe I am, or maybe I'm just thinking out loud.
