A couple of days ago, The Atlantic published a provocative piece by William Davies, Facebook, Smart Tech, and The End of Language. I call it provocative because even though the writing was top-notch, my inner monologue while reading went something like: “What? No… But how…?!” (repeat, repeat, repeat).
The article is about a comment Mark Zuckerberg recently made about augmented reality: devices that would allow us to transmit thoughts directly from one mind to another. He refers to this as the “ultimate communication technology.”
Davies explores what Zuckerberg meant by this comment. There are already technologies that allow people to communicate while playing a game, albeit only about very simple things like “shoot now.” The article goes on to discuss people’s “growing suspicion in language” as a motive for developing the technologies Zuckerberg referred to. This suspicion is epitomized in a quote Davies includes from a neuromarketing guru: “People lie; brains don’t.” (As a picky side note, isn’t a brain a necessary component of a person generating a lie?)
The logic is this: we have a pure thought, and we have no way of sharing it other than by using words. Once we use words, our pure thought becomes tainted, skewed by those words we’ve chosen to express it. Wouldn’t it be great if we could cut out the pesky linguistic middle man?
Actually, this sounds like a terrible idea to me. For one, there’s really no such thing as a pure thought – the language that we use to express thoughts makes those thoughts exactly what they are. Using words to express ideas can be fun. It can help us clarify them. It generates conversation – which in turn encourages us to reevaluate our ideas, to expand on them, to see connections to other people’s thoughts. We would miss out on so much richness if we were to ever eliminate our precious linguistic middle man. Davies points out another crucial flaw in the end-of-language logic: any AR needs to be created by a human, so it can never be created without the bias that language has instilled in its human creator. Instead, that one creator’s bias would be disseminated to all users of the technology.
Granted, there are some uses for augmented reality (AR) that would be cool and useful. For example, it could help people who have suffered brain damage and are unable to speak to communicate practical needs. But could it ever allow for the end of language? Let me just sit here and think about that.