[Image: Personas visualization]

The two images in this post are taken from Personas: How the Internet Sees You. What if we were able, kinesthetically and intellectually, to craft that perception?

A friend of mine (who shall remain nameless) brought the Kinect SDK to my attention, and how Microsoft has opened the floodgates a bit by allowing outside developers to build on the system. I am not technical, nor do I pretend to be able to take advantage of this development personally, but I immediately see the advantages of such an approach. For Microsoft: access to a development pool far larger than its own, as well as deeper saturation in the market (if I can build on it, I become invested in it). For developers and the general public: an opportunity to build applications without significant commercial upside (i.e., educational ones).

Overall, it is a good shift towards shortening the distance between cognitive and kinesthetic activity. I think something and then do something, without the intermediate step of inputting something. What I do achieves immediate representation. That it will only work on Windows (at least initially) seems a temporary inconvenience; eventually, I can imagine saturation across all platforms.
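As a non-programmer's illustration of that loop (motion in, immediate response out, no mouse or keyboard in between), here is a minimal Python sketch. The real Kinect SDK is a C#/C++ affair; everything below, including the stubbed frame source and the Frame class, is an invented stand-in for the idea, not the actual API.

```python
# A minimal sketch of the idea: skeleton frames come in, a gesture is
# recognized, and an action fires immediately -- no input device step.
# The frame source is a stub standing in for a Kinect-style skeleton
# stream (names here are illustrative only, not the real SDK).

from dataclasses import dataclass

@dataclass
class Frame:
    right_hand_x: float  # metres, relative to the sensor

def synthetic_frames():
    """Stub: a hand sweeping left to right, as a sensor might report it."""
    for i in range(30):
        yield Frame(right_hand_x=-0.4 + i * 0.03)

def detect_swipe(frames, threshold=0.5):
    """Fire once the hand has travelled `threshold` metres to the right."""
    start = None
    for frame in frames:
        if start is None:
            start = frame.right_hand_x
        if frame.right_hand_x - start >= threshold:
            return True
    return False

if detect_swipe(synthetic_frames()):
    print("Swipe detected: advance the slide.")  # thought -> motion -> result
```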

So what does this mean for learning? 

Everything, really. It allows motion-activated applications to be built throughout schools (short term), then throughout urban centers (middle term), and then anywhere under the sun (long term). Granted, the security complications here are endless (this shift towards geographically specific computing is crazy, uncharted territory in that respect), but the potential is equally enormous (no risk, no reward).

[Image: Personas visualization]

Imagine the germane cognitive load at work when learners merely mimic or mime their way through a presentation, kinesthetically fusing their cognitive processes with their physical ones. Everything becomes applied practice with this type of approach, and we literally learn by doing. It seems a wonderful illustration of activity theory as well, leading to the creation of tools and artifacts littered throughout the digital landscape, layered over real life, activated by motion. To literally sift through concepts with our hands? Incredible.

I like the potential for sensory reinforcement as well. Imagine something kind of like an urban sonic forest installation, but less frightening: I walk down the street and activate applications (Kinect boxes) in the environment, stringing together sound through the manipulation of some layer of context. I want to trigger sound based on the audible levels of the particular street, or the weather, or even the color of the houses. All that audio is fed back into my headphones, and I can record it and mix it. Instant ambient Eno (granted, not as good): a kind of Music for Airports, substituting this waking life for airports.
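To make that concrete, here is a minimal sketch of how one of those Kinect boxes might map street-level context to sound parameters. All the inputs and mappings below are invented stand-ins (a real installation would read actual sensors); the function name and the particular formulas are assumptions for illustration only.

```python
# A sketch of the "Kinect boxes in the street" idea: each box reads some
# local context (ambient loudness, temperature, even house colour) and
# maps it to simple synth parameters streamed to the walker's headphones.
# Every value here is a made-up stand-in for a real sensor reading.

def context_to_sound(ambient_db, temperature_c, house_hue):
    """Map environmental context to illustrative synth parameters."""
    volume = min(1.0, ambient_db / 90.0)  # louder streets, louder layer
    tempo = 60 + int(temperature_c * 2)   # warmer days, faster pulse
    pitch_hz = 220 + (house_hue % 360)    # house colour shifts the base pitch
    return {"volume": round(volume, 2), "tempo_bpm": tempo, "pitch_hz": pitch_hz}

# Walking past three imagined boxes on one street:
for box in [(72, 18, 30), (55, 18, 210), (80, 18, 120)]:
    print(context_to_sound(*box))
```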

Speaking of Eno, why not a Bloom application for the world itself? I could layer audio and visuals over top of my movements: a blur of thought, action, and sensory input. A literal dance with the environment. A permanent audio installation layered over the physical world.
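For what a movement-driven Bloom might mean in code, here is a toy sketch of the core mechanic as I understand it: a gesture spawns a note that repeats and slowly fades, so a walk through the city accumulates into an ambient loop. This is an assumption about the general idea, not a claim about how Bloom itself is implemented.

```python
# Toy version of a Bloom-like mechanic, driven by movement instead of
# touch: each gesture spawns a note that repeats and decays, layering
# into an ambient loop. Purely a sketch of the idea.

class Note:
    def __init__(self, pitch_hz):
        self.pitch_hz = pitch_hz
        self.level = 1.0

    def tick(self):
        self.level *= 0.9  # fade a little on every repeat

notes = []
for step, pitch in enumerate([220, 330, 262]):  # three gestures on a walk
    notes.append(Note(pitch))
    for note in notes:
        note.tick()
    layer = [(n.pitch_hz, round(n.level, 2)) for n in notes if n.level > 0.05]
    print(f"step {step}: {layer}")
```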

More practically, though, I can see a lot of learning that emphasizes activity itself, the motion of the human body. It goes some way towards bridging the chasm of dualism, that mind-body divide, and it will only be developed further going forward. Ultimately, and I believe the general trends lean toward this, we will wear our technology, or it will be embedded in our environment, in the landscape itself. It will act on us and we will act on it. It will augment our (cognitive) abilities and stimulate kinesthetic interaction. It will have positives and negatives. But it will be there for us to use to the best of our abilities, for its potential for human development.

 

By Michael Gallagher

My name is Michael Sean Gallagher. I am a Lecturer in Digital Education at the Centre for Research in Digital Education at the University of Edinburgh. I am Co-Founder and Director of Panoply Digital, a consultancy dedicated to ICT and mobile for development (M4D); we have worked with USAID, GSMA, UN Habitat, Cambridge University, and more on education and development projects. I was a researcher on the Near Future Teaching project, which explored how teaching at the University of Edinburgh might unfold over the coming decades, as technology, social trends, patterns of mobility, new methods, and new media continue to shift what it means to be at university. Previously, I was the Research Associate on the NERC, ESRC, and AHRC Global Challenges Research Fund sponsored GCRF Research for Emergency Aftershock Forecasting (REAR) project. I was an Assistant Professor at Hankuk University of Foreign Studies (한국외국어대학교) in Seoul, Korea. I also completed a doctorate at University College London (formerly the independent Institute of Education, University of London) on mobile learning in the humanities in Korea.

One thought on “Kinect, Brian Eno, and kinesthetic learning”
  1. Tried the Personas... limited in that it searches the name generally, not specifically, but interesting nonetheless.
