This post falls way outside of anything substantial, useful, or pertinent.

That is unless you love robots! And who doesn’t love robots? Those magicians of the imagination since science fiction was born, since the Japanese entered the global economic scene (this time not as a colonial power), since the mantra of efficiency, of doing more with less, entered our consciousness and gripped us like the delicate yet powerful hands of one Johnny Five in Short Circuit.

The face of disdain: my future surrogate for awkward conversations

Let’s start with the technical specifications:

The mobile robot’s height is variable up to around 1.75 meters and it weighs 16 kilograms. It includes a main computer with an Intel Core 2 Duo CPU and Internet connections, several mini-computers, and some self-awareness and autonomy built in. The robot is self-balancing and moves around on two aluminum and rubber wheels, reaching human walking speed. The main computer runs a free BSD operating system to drive QB’s motors. The system is controlled remotely by a Firefox browser and simple keyboard commands.
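Anybots hasn’t published the actual control interface, so this is pure speculation on my part, but a browser-plus-keyboard setup like the one described above might look roughly like the sketch below. The WebSocket endpoint and command names are entirely made up for illustration.

```typescript
// Hypothetical sketch: browser-side keyboard teleoperation for a QB-style robot.
// The endpoint URL and command vocabulary are invented; they are not Anybots' API.
const socket = new WebSocket("wss://qb.example.com/drive");

// Map arrow keys to simple drive commands.
const keyToCommand: Record<string, string> = {
  ArrowUp: "forward",
  ArrowDown: "reverse",
  ArrowLeft: "turn_left",
  ArrowRight: "turn_right",
};

document.addEventListener("keydown", (event) => {
  const command = keyToCommand[event.key];
  if (command && socket.readyState === WebSocket.OPEN) {
    // Send the command; the robot's main computer would translate it into motor output.
    socket.send(JSON.stringify({ command, timestamp: Date.now() }));
  }
});

document.addEventListener("keyup", () => {
  if (socket.readyState === WebSocket.OPEN) {
    // Stop when the key is released so the robot doesn't wander off from the dinner party.
    socket.send(JSON.stringify({ command: "stop", timestamp: Date.now() }));
  }
});
```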

QB “sees” via a five-megapixel video camera in one eye and transmits the video feed to its remote controller via the Internet; a second, lower-resolution camera on the head points downwards to monitor what is at QB’s feet. The robot “hears” via three microphones that feed audio to the telecommuter, and has high-quality speakers for audio in the other direction. The robot displays an image of the telecommuter to the people in the remote location via a 320 x 240 LCD screen mounted on its head, and the screen doubles as a control panel to enable the Wi-Fi connection. The second eye functions as a laser pointer.

“It includes a main computer with an Intel Core 2 Duo CPU and Internet connections, several mini-computers, and some self-awareness and autonomy built in.” Not unlike me. I have limited self-awareness and only a passable amount of autonomy, and none of it was built in, only achieved. I also wish my second eye functioned as a laser pointer. That might not lead to anything good, though.

So, the plan is to spend the next year convincing my wife that $15,000 is a reasonable sum for such a worthwhile investment. I will then conduct meticulous experiments in educational ‘presence’, to really probe deep into what it means to be somewhere, to be present in something. It is like a surrogate Second Life/Real Life confluence, and I will probe social situations (dinner parties that I don’t attend), educational engagements (teaching a class face to screen, using laser pointers as disciplinary devices), and standing in line at the DMV. What does it mean to stand in line if a robot is doing it for you? It means awesome, that’s what.

Of course, as popular science fiction and/or Hollywood would have me believe, the robot surrogate (Rurrogate? Surrobot? Rogate? Subot? Balthazar?) will eventually be struck by some combination of lightning/gamma rays/demon vampires and achieve consciousness and spend 90 minutes of screen time learning to love. Or go mad and have to be destroyed. Either way, that must be worth $15,000.

On an only slightly serious note, how long after these are released will it take the Korean government to try using them to teach English instead of the expat crowd? Oops, too late.

By Michael Gallagher

My name is Michael Sean Gallagher. I am a Lecturer in Digital Education at the Centre for Research in Digital Education at the University of Edinburgh. I am Co-Founder and Director of Panoply Digital, a consultancy dedicated to ICT and mobile for development (M4D); we have worked with USAID, GSMA, UN Habitat, Cambridge University and more on education and development projects. I was a researcher on the Near Futures Teaching project, which explores how teaching at The University of Edinburgh might unfold over the coming decades, as technology, social trends, patterns of mobility, new methods and new media continue to shift what it means to be at university. Previously, I was the Research Associate on the NERC, ESRC, and AHRC Global Challenges Research Fund sponsored GCRF Research for Emergency Aftershock Forecasting (REAR) project. I was an Assistant Professor at Hankuk University of Foreign Studies (한국외국어대학교) in Seoul, Korea. I also completed a doctorate at University College London (formerly the independent Institute of Education, University of London) on mobile learning in the humanities in Korea.
