
Reposting here from Panoply Digital.

Optimistic Take

With artificial intelligence (and perhaps with all things technological), we as humans seem to run the gamut from dystopian visions of complete AI takeover and dysfunction (think HAL) to utopian daydreams of gracious ubiquity where personal assistants attend to the less pleasant aspects of daily life (think every edtech advertisement ever, and an increasing portion of Amazon’s advertising budget, it would seem). AI is pervasive, regardless of how you critique it. The average flight of a Boeing plane involves only seven minutes of human-steered flight, typically the takeoff and landing. Chatbots structure much of our online shopping and an increasing amount of our mobile interactions. If I can build a chatbot (via RunDexter or some such service), then it is time to talk about their more sophisticated AI offspring. We are already living in an increasingly AI-structured digital space and have been for some time.
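To ground that claim, here is a minimal, hypothetical sketch of the keyword-matching core that hosted chatbot builders like RunDexter wrap in a friendlier interface. This is not RunDexter’s actual API; the rules and replies are invented for illustration.

```python
# A toy keyword-matching chatbot: the rules and replies below are
# illustrative placeholders, not a real service or dataset.

RULES = {
    "price": "Our plans start at $10/month.",
    "hours": "We're open 9am-5pm, Monday to Friday.",
    "human": "Connecting you to a human agent now.",
}

def reply(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't catch that. Could you rephrase?"

print(reply("What are your hours?"))  # -> We're open 9am-5pm, Monday to Friday.
```

If that is all a chatbot needs to be, the gap between it and its more sophisticated AI offspring is a matter of degree, not kind.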

Realistically, we are left with a technology that has potential, and it is that potential that must be, and increasingly has been, supported in ICT4D. From UAVs for surveillance and disaster response, to natural language processing (think Alexa’s Skills) for responsive mobile applications in humanitarian contexts, to leveraging AI to provide digital credit to the previously unbanked, there is certainly merit in exploring AI in ICT4D contexts.

Critical Take

Cue the dystopian theme music, most likely AI-generated. In my other guise, I am part of the Centre for Research in Digital Education at the University of Edinburgh, and we are about critical takes on emerging technologies, particularly in how they impact education. AI certainly fits that bill. That criticality informs the work of Panoply Digital (all of us, not just me) and how we approach ICT4D projects. Our tagline is “Sustainable development through appropriate technology”, with the implicit bit in there being that appropriate technology might be no technology at all. With that critical hat on, I can rehash others’ critiques of AI as used in ICT4D and maybe even add my own.

  • As with many emerging technologies, AI largely entrenches accumulated advantage. It has the capacity to improve governance, finance, education, and more if and only if one has the capacity to take advantage of it: cadres of AI specialists, technicians, the educational structures to develop those two groups, and so forth. All of this follows only after significant investment in AI and the political will and stability to see such investments through.
  • As with all data-driven technologies, the underlying data that the AI learns from is sensitive to discrimination and bias. “When we feed machines data that reflects our prejudices, they mimic them – from antisemitic chatbots to racially biased software.” Unleashing an AI on open, largely unmoderated digital spaces (think YouTube comments) will make for a less-than-optimal machine learning environment, and in some cases an outright damaging one.
  • As with all data-driven technologies, the AI will learn from the data that what counts is what is counted, potentially neglecting underrepresented groups and reinforcing the very barriers that made them underrepresented in the first instance. Not to pick on YouTube again, but if your training data is drawn solely from YouTube comments, large segments of the overall population are rendered invisible by that focus (see the sketch after this list for a toy illustration of how that skew plays out). Women in particular contexts use technology quite differently (and a shameless plug to reinforce that those differences are there).
  • Building on that, AI is predicated largely on a system of binaries. One can get fairly far if those binaries are sufficiently nuanced to render a larger context predictable, but nuanced cultural takes are rendered largely invisible, particularly in high-context social settings. Social and emotional intelligence, fields that are not always logical from a mathematical standpoint, are critical to understanding a local context. Large parts of the world still depend on meaning conveyed through mutual and intricate networked obligations, loyalty, filial piety, respect for age and seniority, and more. AI can conceivably be designed to learn these contexts, but what size of dataset would be needed to do that?
  • And who would expose their development audiences to the large amounts of data collection needed to train machine learning? That was largely rhetorical, as apparently the answer is everyone, but there are, or should be, barriers here in how it is applied to ICT4D. We can be aware that “the consequences of decisions and actions are highly sensitive to different contexts, and…may have significant impacts including being seen to reinforce or challenge local social norms or power structures”, yet opting in to these data collection regimes remains the almost-default position in parts of the ICT4D field. Exposing audiences to these data collection systems should at least warrant a pause.
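To make the “what is counted” point above concrete, here is a toy sketch (all counts invented) of how a model can look accurate in aggregate while failing the group it barely saw in training:

```python
# Toy illustration: a "model" that simply predicts the majority label it has
# seen looks ~96% accurate overall while being wrong 80% of the time for the
# underrepresented group. All numbers below are invented for illustration.
from collections import Counter

training_data = (
    [("group_a", "approve")] * 950 +   # well-represented group
    [("group_b", "approve")] * 10 +    # underrepresented group
    [("group_b", "deny")] * 40
)

# The single label the model would learn to predict for everyone.
majority = Counter(label for _, label in training_data).most_common(1)[0][0]

for group in ("group_a", "group_b"):
    labels = [label for g, label in training_data if g == group]
    accuracy = labels.count(majority) / len(labels)
    print(f"{group}: predicting '{majority}' is correct {accuracy:.0%} of the time")
```

The aggregate number hides the failure entirely, which is exactly how underrepresentation reinforces itself.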

A helpful starting point, I find, is the three principles of AI development suggested by Virginia Dignum (drawn from here):

  • “Accountability: an AI system needs to be able to justify its own decisions based on the algorithms and the data used by it. We have to equip AI systems with the moral values and societal norms that are used in the context in which these systems operate;
  • Responsibility: although AI systems are autonomous, their decisions should be linked to all the stakeholders who contributed in developing them: manufacturers, developers, users and owners. All of them will be responsible for the system’s behaviour;
  • Transparency: users need to be able to inspect and verify the algorithms and data used by the system to make and implement decisions.”

Sounds like a good place to start.
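As a thought experiment on what the transparency and accountability principles might look like in practice, here is a minimal, hypothetical sketch of an auditable decision record. The field names and the digital-credit example are assumptions of mine, not any ICT4D standard.

```python
# A hypothetical auditable decision log: every automated decision is stored
# with the inputs, model version, and rationale that produced it, so that
# users and auditors can later inspect and contest it.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_id: str     # who the decision concerns
    inputs: dict        # the data the system actually used
    model_version: str  # which model/ruleset produced the decision
    decision: str
    rationale: str      # human-readable justification

def log_decision(record: DecisionRecord, path: str = "decisions.log") -> None:
    """Append the decision as one JSON line for later audit."""
    entry = asdict(record)
    entry["timestamp"] = datetime.now(timezone.utc).isoformat()
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_decision(DecisionRecord(
    subject_id="applicant-042",
    inputs={"repayment_history_months": 6, "mobile_topups_per_week": 3},
    model_version="credit-scoring-v0.1",
    decision="deny",
    rationale="repayment history below 12-month threshold",
))
```

None of this makes a system accountable by itself, but without something like it, the inspection and verification Dignum calls for are impossible.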

By Michael Gallagher

My name is Michael Sean Gallagher. I am a Lecturer in Digital Education at the Centre for Research in Digital Education at the University of Edinburgh. I am Co-Founder and Director of Panoply Digital, a consultancy dedicated to ICT and mobile for development (M4D); we have worked with USAID, GSMA, UN Habitat, Cambridge University and more on education and development projects. I was a researcher on the Near Futures Teaching project, a project that explores how teaching at The University of Edinburgh might unfold over the coming decades, as technology, social trends, patterns of mobility, new methods and new media continue to shift what it means to be at university. Previously, I was the Research Associate on the NERC, ESRC, and AHRC Global Challenges Research Fund sponsored GCRF Research for Emergency Aftershock Forecasting (REAR) project. I was an Assistant Professor at Hankuk University of Foreign Studies (한국외국어대학교) in Seoul, Korea. I have also completed a doctorate at University College London (formerly the independent Institute of Education, University of London) on mobile learning in the humanities in Korea.
