Mapo Bridge, Seoul


(Even the examples highlighted for Siri are either process-oriented or low-order (low-attention) cognitive tasks.)

I think of Siri and the automation (programming) of routine, how its greatest contribution to the learning discussion might be the reclaiming of cognitive capacity (and time) from process. Siri is significant, if not for what it is, then for what it might propel us to imagine. I see the immediate application of voice responses/commands to various situations. Look up the weather, write this as an email, call wife. In its present (and highly marketable) iteration, it is essentially a productivity tool, one that encourages multitasking across sensory environments (while I am walking, I can write emails; while I am driving, I can fact-check a presentation). It encourages a division of attention (perhaps efficiently so) across a few different fields. It holds a lot of potential for those with vision impairments as well and can be considered a minor success from that perspective alone. It offloads the individual responsibility of memory onto the augmented self (individual + technology + network), freeing us as people and societies from the “tyranny of information recall”. It is distributed intelligence in one of its many manifestations and has a long technological track record. Think MS Word and the perceived decline of spelling acumen, or Facebook telling us when it is someone’s birthday. We offload repetitive tasks onto technology and networks to free up cognitive space. End result: I can neither remember anyone’s birthday nor spell their name correctly.

So how long, I ask, before we, as consumers, are able to program Siri, or use Siri as a vehicle for programming, using our voice? And I literally do mean program additional applications or uses.


I suspect that the full range of possible applications of Siri is best illustrated in the contexts in which they are experienced. For example, there is a nasty little crosswalk that I have to traverse close to my apartment building in Seoul. Once I cross it, it is relatively smooth sailing; my mind wanders and roams, and I generally come back from these long walks wanting to blog. My thoughts are free. So, perhaps the next time I am walking, I want to manufacture an optimal walk based on these moments of caution (that crosswalk) and other moments of reflection. Perhaps when next standing on this crosswalk, I want to say to Siri “remember this crosswalk and warn me approximately 10 seconds ahead of time that it is dangerous”. Based on the GPS fix and the Siri voice note, I want to program a bit of my next walk to make sure I am not daydreaming when I should be exhibiting caution. These reminders maximize reflection without sacrificing safety.
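A minimal sketch of the kind of location trigger imagined here, assuming hypothetical coordinates near Mapo and a simple haversine distance check (the coordinates, the walking speed, and the `check_position` helper are all my own illustrative inventions, not anything Siri actually exposes):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical saved voice note: "remember this crosswalk"
CROSSWALK = (37.5453, 126.9369)   # illustrative coordinates only
WALKING_SPEED_M_S = 1.4           # average walking pace
WARN_SECONDS = 10
WARN_RADIUS_M = WALKING_SPEED_M_S * WARN_SECONDS  # ~14 m, i.e. ~10 s ahead

def check_position(lat, lon):
    """Return the spoken warning when within the trigger radius, else None."""
    if haversine_m(lat, lon, *CROSSWALK) <= WARN_RADIUS_M:
        return "Caution: dangerous crosswalk ahead."
    return None
```

The "10 seconds ahead" becomes a radius by multiplying warning time by walking pace; a real implementation would poll the phone's GPS and hand the warning to the voice layer.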

Fast forward that walk a bit. I want to post notes throughout my walk based on my location and what I was thinking at that moment. Not only that, I want metadata to be recorded: the day, the time, the temperature, the season, and the ontological question (or even daydream) I was bandying about in my head. I want to stick voice post-it notes all over the place and I want to do it with any combination of senses. Siri makes the audio a possibility and so I want to program my world using something like Siri as a vehicle. Voice-activated programming. Like routing rules for daily life. An ontological trigger at Point A to revisit Thought B. A reminder that you left a thought unfinished in this spot. Program a trigger to remind yourself to revisit the thought of multimodal representation in mobile urban playscapes (at this spot in the urban playscape). An ability to craft a list of questions using voice for review in a list later. An audiobook and voice annotation layered over it; voice annotations=notes=audio reflection. A teacher layering voice reminders for students for assignments, or even voice triggers for reflection/homework in opportune spots across cultural sites. Even a banal ‘avoid the dead duck on the river trail’ reminder would suffice. All of it breaks up the singular authority/tyranny of text and rebalances the monotony of singular sensory input channels (visual/text).
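The voice post-it note described above is, at bottom, a small data structure: a transcript plus contextual metadata. A sketch, under the assumption of a hypothetical `VoiceNote` record (the field names and season rule are mine, purely illustrative):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class VoiceNote:
    """A geotagged 'voice post-it': a spoken thought plus its context."""
    transcript: str                      # what was said
    lat: float                           # where it was said
    lon: float
    recorded_at: datetime                # day and time
    temperature_c: Optional[float] = None  # optional ambient context
    tags: List[str] = field(default_factory=list)

    @property
    def season(self) -> str:
        # Northern-hemisphere meteorological season, derived from the date
        return {12: "winter", 1: "winter", 2: "winter",
                3: "spring", 4: "spring", 5: "spring",
                6: "summer", 7: "summer", 8: "summer",
                9: "autumn", 10: "autumn", 11: "autumn"}[self.recorded_at.month]

note = VoiceNote(
    transcript="Revisit the thought on multimodal representation here.",
    lat=37.5453, lon=126.9369,
    recorded_at=datetime(2011, 11, 20, 16, 30),
    tags=["reflection", "walk"],
)
```

Season and day fall out of the timestamp for free; only the thought itself needs the voice channel.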

Essentially, these types of applications are not exactly revolutionary in that they are doing the same thing as textual input interfaces (blogs, Twitter, etc.) but via voice. What is revolutionary, at least in my estimation, is their ability to extract additional reflective time and space from hectic schedules. I would program my repetitive processes into reflective meditations. The commute to work, the walk to the store. For me, that long walk I try to take daily. I would offload (program) my repetitive tasks into semi-conscious behaviors (reminders for when the train is coming might offload the need to stare at the big board in the train station for five to ten minutes at a time). I would then take that offloaded time and claim it for reflective space. Meditative space. Deep thinks. That is what voice activation does for me. It allows me to maintain my visual field and offload these tasks to voice. If I want, I can use the voice as a vehicle or a tool for reflection.
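Those "routing rules for daily life" could be modelled as trigger-action pairs evaluated against whatever context the phone knows. A minimal sketch, assuming an invented `Rule`/`evaluate` pair and made-up context keys (nothing here corresponds to a real Siri API):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    """One routing rule for daily life: a trigger and a spoken action."""
    name: str
    trigger: Callable[[Dict], bool]   # context -> should this rule fire?
    action: str                       # what the voice assistant says

def evaluate(rules: List[Rule], context: Dict) -> List[str]:
    """Return the spoken messages for every rule whose trigger matches."""
    return [r.action for r in rules if r.trigger(context)]

rules = [
    Rule("train", lambda c: c.get("minutes_to_train", 99) <= 5,
         "Your train arrives in five minutes."),
    Rule("river trail", lambda c: c.get("location") == "river_trail",
         "Avoid the dead duck on the river trail."),
]

# Walking the river trail with a train due soon fires both rules
messages = evaluate(rules, {"minutes_to_train": 4, "location": "river_trail"})
```

The point of the design is that each rule is programmed once, by voice, and then runs semi-consciously in the background, returning the visual field to the walker.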

Siri opens up a few channels. It reclaims quite a bit of time. It is still in beta, but beta is just a codeword for imagining a million applications outside the intended one.

Routing rule for voice applications



By Michael Gallagher

