I was thinking a bit about multimodality as I think the Lifestream itself encourages us to do, to experiment with multiple forms of representation, of vantage points, of askew glances, all towards presenting the essence of the object in question. I also think these multiple forms of representation loosen up the academic discussion a bit away from purely text driven analysis and towards some sense of “completeness”, or the understanding that life is not spent in text alone.

So, what does it mean to reflect in a multimodal capacity? What do the analysis and synthesis aspects of learning look like when approached from a variety of sensory inputs? How can I visualize both the contextual elements of the object and myself simultaneously? To say that I did so in the video below would be an outright fabrication. What it does demonstrate, however, is how I view my participation in digital culture: as a primarily visual and auditory journey. When I am involved in sensemaking activities, and not purely problem-solution oriented ones, text is a vehicle of navigation, a means of input. A sign on the bathroom door telling me which door to enter. An off-ramp on the highway. The exit from an airplane.

What this video clumsily attempts to do is represent layers of cognitive connection with sensemaking; that is, the human brain attempting to approach a reflection on a variety of fronts simultaneously. What is the object in question in this element? It is me merely reflecting on the mild ennui one feels on returning from a long journey: when the next journey might be, how profound the lessons learned were. It is an emotional, rather than purely intellectual, sensemaking exercise, and those can often be the most complicated.

So why minimize the sensory approaches one takes to unlock meaning?
As Research Methods is training me to do, one must be open about one's subjectivities and biases, and towards this end I attest to my love of sound, of music. In this application (ToneMatrix from Andre Michelle), I am afforded the opportunity to score my own rudimentary soundtrack to this scrapbook of sorts. Music speaks to the emotive elements of intellect quicker than anything else I know of, so I tend to have music playing at all times (much to my wife’s chagrin). I was also inspired by a quick exchange on Twitter about audio applications (#ededc).

The Flickr slideshow consists merely of images tagged with “travel” from my collections; they offer a simple running backdrop to the activity, a kind of flowing wallpaper that contextualizes the learning.

The Twitter application allows me to approach my connections in space: where I sit in this complicated web of interactions and where they sit in mine. This is sensemaking in that only approximately five of my connections are actually based in Princeton, New Jersey. I am constantly reminded of the relative nature of geography and how fluid a variable it can be.

The second Flickr application, Tag Galaxy, allows me to view layers of perception using tags associated with Flickr photos. I can not only see my own tags mapped over a globe, I can see others’ takes on the same objects. It immediately reorganizes my perspective as personalities are revealed through vantage points. A dash of sunlight, a different backdrop, a portrait, an edifice. All reveal the idiosyncrasies of perception. I hope to produce more of these videos over the semester, as I am rather addicted to visualizations and interactive applications.

With the advent of nanotechnology and haptic technologies, I suppose it is on to olfactory simulations to complete the sensory input channels in digital culture. So, what is “real” again? If it looks like an apple, tastes like an apple, feels like an apple, smells like an apple, then it’s an apple. At some point, do physical and digital cultures converge into just culture?

By Michael Gallagher

My name is Michael Sean Gallagher. I am a Lecturer in Digital Education at the Centre for Research in Digital Education at the University of Edinburgh. I am Co-Founder and Director of Panoply Digital, a consultancy dedicated to ICT and mobile for development (M4D); we have worked with USAID, GSMA, UN Habitat, Cambridge University and more on education and development projects. I was a researcher on the Near Futures Teaching project, a project that explores how teaching at The University of Edinburgh might unfold over the coming decades, as technology, social trends, patterns of mobility, new methods and new media continue to shift what it means to be at university. Previously, I was the Research Associate on the NERC, ESRC, and AHRC Global Challenges Research Fund sponsored GCRF Research for Emergency Aftershock Forecasting (REAR) project. I was an Assistant Professor at Hankuk University of Foreign Studies (한국외국어대학교) in Seoul, Korea. I also completed a doctorate at University College London (formerly the independent Institute of Education, University of London) on mobile learning in the humanities in Korea.
