As we push through our conversations on representation, transliteracy, the dominance of text (as moral authority) and how all of that is wrapped up in political, social, and economic power structures, I am again drawn to the notion of pattern recognition and how that can be used to great effect. Essentially, language is a pattern, a string of letters into words into phrases into sentences and so on. These patterns, whether pithy, witty, curt, or contrite, are essentially predictable. Verbs fall after subjects, that sort of thing. Although machine recognition of these patterns has been an elusive goal for the last few decades, we have reached a point where some of the “work” we associate with textual recognition and analysis can be offloaded to a technical assistant.

We look for patterns everywhere: in the clouds as children, in the haunting painting hanging in the museum as adults. A pattern of something relating this to us, a lever to internalize the subject. We look for these patterns of recognition in the most complicated of ways, in social interactions, in the million different permutations available to the human face to express emotion, in our culture. These are places where machines fear to tread. Text, however, is not a scary place. It is generally quite predictable. Predictable enough to spot a pattern or two.

So, in a sea of text, we rely on the lighthouse of context. It binds items together, strips away the noise, lightens the cognitive load. Text might still remain supreme (indeed its supremacy is also the engine that drives the creation of so much of it), but we use our multimodal intelligence to look for meaning in it. There is simply too much of it to read.

This automation needn’t be complicated. Wordle is brilliant in that it is simple: a picture of words sized according to frequency. So, I decided to build upon this a bit, to see a tag cloud unfolding and ideally to see it bend as we discuss new material. I want to see it evolve in both time and space. And so we have two stabs at context, both using our Twitter hashtag, #ededc; both providing context.
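The frequency picture behind a tag cloud like Wordle can be sketched in a few lines. This is a minimal illustration, not the tool actually used here: the stopword list and the sample #ededc posts are invented for the example, and a real version would pull posts from the hashtag feed and re-run the tally over time to watch the cloud evolve.

```python
from collections import Counter
import re

# A tiny, illustrative stopword list; a real cloud would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "rt"}

def word_frequencies(texts):
    """Tally word frequencies across a list of posts -- the raw
    material a tag cloud sizes its words by."""
    counts = Counter()
    for text in texts:
        # Lowercase and strip URLs, then keep only alphabetic runs.
        text = re.sub(r"https?://\S+", " ", text.lower())
        words = re.findall(r"[a-z']+", text)
        counts.update(w for w in words if w not in STOPWORDS and len(w) > 2)
    return counts

# Hypothetical sample posts; re-running over a growing list lets the
# cloud "bend" as the discussion moves on.
posts = [
    "Thinking about pattern recognition in text #ededc",
    "Text is predictable enough to spot a pattern or two #ededc",
]
print(word_frequencies(posts).most_common(5))
```

Feeding each snapshot of frequencies to a cloud renderer, one per day or week, gives the evolution in time that the paragraph above describes.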

Now the question becomes, why wouldn’t we use digital tools to sift through the endless digital waters of text? What extension of human cognition does this suggest? It is evidence of intelligence, is it not, this appropriation of tools? What do we lose? Is authority muted if only machines are reading through it, separating the curds from the whey? What greater effect does this have on the packaging of ideas and content? How will they be grouped, contextualized?

I suppose the answer, the one answer that dictates all above anything else, is utility. We will do whatever we need to do to understand and make use of things in our world. We are the marketplace of activity, of innovation and ideas, and if text is predictable enough to be outsourced to non-humans (technology), then what threat does that pose to its supremacy?

By Michael Gallagher

My name is Michael Sean Gallagher. I am a Lecturer in Digital Education at the Centre for Research in Digital Education at the University of Edinburgh. I am Co-Founder and Director of Panoply Digital, a consultancy dedicated to ICT and mobile for development (M4D); we have worked with USAID, GSMA, UN Habitat, Cambridge University and more on education and development projects. I was a researcher on the Near Futures Teaching project, a project that explores how teaching at The University of Edinburgh might unfold over the coming decades, as technology, social trends, patterns of mobility, new methods and new media continue to shift what it means to be at university. Previously, I was the Research Associate on the NERC, ESRC, and AHRC Global Challenges Research Fund sponsored GCRF Research for Emergency Aftershock Forecasting (REAR) project. I was an Assistant Professor at Hankuk University of Foreign Studies (한국외국어대학교) in Seoul, Korea. I also completed a doctorate at University College London (formerly the independent Institute of Education, University of London) on mobile learning in the humanities in Korea.
