Context Creation: Supremacy of Text also makes it predictable
As we push through our conversations on representation, transliteracy, the dominance of text (as moral authority) and how all of that is wrapped up in political, social, and economic power structures, I am again drawn to the notion of pattern recognition and how it can be used to great effect. Essentially, language is a pattern: a string of letters into words into phrases into sentences and so on. These patterns, whether pithy, witty, curt, or contrite, are essentially predictable. Verbs fall after subjects, that sort of thing. Although machine recognition of these patterns has been an elusive goal for the last few decades, we have reached a point where some of the “work” we associate with textual recognition and analysis can be offloaded to a technical assistant.
We look for patterns everywhere: in the clouds as children, in the haunting painting hanging in the museum as adults. A pattern is something relating the thing to us, a lever for internalizing the subject. We look for these patterns of recognition in the most complicated of ways, in social interactions, in the million different permutations available to the human face to express emotion, in our culture. These are places where machines fear to tread. Text, however, is not a scary place. It is generally quite predictable. Predictable enough to spot a pattern or two.
So, in a sea of text, we rely on the lighthouse of context. It binds items together, strips away the noise, lightens the cognitive load. Text might still remain supreme (indeed its supremacy is also the engine that drives the creation of so much of it), but we use our multimodal intelligence to look for meaning in it. There is simply too much of it to read.
This automation needn’t be complicated. Wordle is brilliant in that it is simple: a picture of words sized according to frequency. So, I decided to build upon this a bit, to see a tag cloud unfolding and, ideally, to see it bend as we discuss new material. I want to see it evolve in both time and space. And so we have two stabs at context, both using our Twitter hashtag of #ededc, both providing context.
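The mechanism behind a Wordle-style cloud is just a word-frequency tally. A minimal sketch in Python, assuming a list of tweet texts has already been collected (the sample tweets, stopword list, and `word_frequencies` helper here are hypothetical, for illustration only):

```python
from collections import Counter
import re

def word_frequencies(tweets, stopwords=None, min_count=1):
    """Tally word frequencies across a list of tweet texts.

    The counts are the raw material for a tag cloud: each word's
    frequency determines its display size.
    """
    # A tiny illustrative stopword list; a real cloud would use a fuller one.
    stopwords = stopwords or {"the", "a", "an", "and", "of", "to", "in", "is"}
    counts = Counter()
    for tweet in tweets:
        # Lowercase, then strip URLs and hashtags before counting words.
        text = tweet.lower()
        text = re.sub(r"https?://\S+", "", text)
        text = re.sub(r"#\w+", "", text)
        for word in re.findall(r"[a-z']+", text):
            if word not in stopwords:
                counts[word] += 1
    return {w: c for w, c in counts.items() if c >= min_count}

# Hypothetical sample tweets tagged #ededc
tweets = [
    "Reading about transliteracy and the supremacy of text #ededc",
    "Text is predictable enough for machines to read #ededc",
]
print(word_frequencies(tweets))
```

To see the cloud "bend" over time, the same tally can simply be re-run over a sliding window of recent tweets, so new topics of discussion swell and old ones fade.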
Now the question becomes: why wouldn’t we use digital tools to sift through the endless digital waters of text? What extension of human cognition does this suggest? It is evidence of intelligence, is it not, this appropriation of tools? What do we lose? Is authority muted if only machines are reading through it, separating the curds from the whey? What greater effect does this have on the packaging of ideas and content? How will they be grouped, contextualized?
I suppose the answer, the one answer that trumps all others, is utility. We will do whatever we need to do to understand and make use of things in our world. We are the marketplace of activity, of innovation and ideas, and if text is predictable enough to be outsourced to non-humans (technology), then what threat does that pose to its supremacy?