I always have at least five papers in outline form on my computer. All sitting there highly visible, a constant reminder to keep writing (and blogging, apparently). I have noticed that there are several that never get finished (or even started), as the idea might have lost some currency or my interest is piqued by something else. I have at least two in some state of completion dealing exclusively with ethics in a research context, specifically as it relates to my research focus of mobile learning. In light of the Facebook research experiment and other potentially discouraging takes on what ethical research looks like, particularly with technology, I felt it was time to dust these papers off again and get back to work. I am inspired in this by John Traxler’s work on ethics in the mobile context, but this is bigger than strictly mobile. I refer often to BERA’s Ethical Guidelines for Research and the AERA’s Research Ethics, and I follow Korea’s research ethics developments closely.
So this post is essentially about the preliminary steps, as I see them, towards establishing a baseline for ethical research, a baseline that is apparently a lot cloudier than I had realized. I honestly thought we were moving past some of these issues, as the Internet, you know, has been around for over 20 years now, but it seems I was wrong. But I am a pragmatic guy and I think there are pragmatic steps we can take to reaffirm ethical standards when doing research. These are listed below. There are more, but these seem like a good place to start.
Systems & Organizations
- Stop avoiding the ethical discussion. It is upon us and we can’t, and shouldn’t, try to avoid it. Address it head on as a community. Negotiate a more robust ethical framework as a community. Demand that it be used as a community.
- By all means work with commercial MOOC providers if you feel that provides an opportunity for you as an organization to extend the impact of your education. I am not an individual who thinks business is the bogeyman, but nor am I naive enough to think that we are all singing from the same hymnal. They are not bound by ethics as we are; they are bound by legality and their responsibilities to their shareholders. There is nothing inherently wrong with either of those things, but it is ethically suspect for us, as educational professionals, to send thousands upon thousands of students into that environment without fully being aware of the ethics involved. I would love to see more professors, university administrators, and the like openly reflecting on the pros and cons of subscribing to these MOOC platforms, with a level-headed discussion of what is to be gained, what control is lost, and what effect that has on ethical research. Minus the marketing speak.
- We can shape that legal landscape by collectively determining the ethical standards for our research community (education) and then subscribing only to systems or platforms that conform to those standards. Present the ethical guidelines to MOOC providers and ask them to meet them, or to provide the capacity to do so for individual courses. At the end of the day, these platforms cannot exist without your explicit approval, whether in terms of students, content, or reputation. In short, don’t invest in platforms that aren’t bound by your ethics.
- Towards this end, if I were a university I might be more inclined to participate in FutureLearn, as it is wholly owned by the Open University. Granted, it is a private company, which suggests the expectation of profit or some sort of revenue at some point, but as a university-owned company it is bound, at least to some degree, by the ethical guidelines of the university itself. And if it isn’t, shame on them.
- Working with commercial platforms/vendors/options/technology does involve entering a domain where you, an ethical researcher, won’t have full control over the data being collected or made visible to parties other than the individual who created it. That is a fact. However, it doesn’t mean you waive your right to bring ethics into the discussion. It doesn’t mean you waive your responsibility to investigate the tools being used, to understand what risks they pose to the individuals involved, and to understand what data is being collected and by whom it is being seen. We no longer live in a world where we can play dumb on these technology issues. I might not know how to code, but I sure as hell am going to know what I am exposing my students to if I ask them to use a particular application or technology. Ethics begins right here (if you are doing something #edtech related). Evaluate the tools you are going to use.
- Evaluate the potential for full and absolute disclosure when using mobile or educational technology (or both). Is it possible to receive informed consent from the participant? If not, don’t do it. Others might find ambiguity in this issue and come to a different conclusion (and that might make more sense in other fields), but I don’t. Informed consent is a mandate, not a suggestion. Sure, there is response bias, and sure, it colors the results, but so what? Your desire as a researcher to get an answer does not trump, not even for a moment, an individual’s right not to be manipulated without their consent. If someone is in your study, they need to know and agree to being in your study.
- Data collection in the open is fraught with the potential for abuse. Again, I suggest going back to informed consent (where I thought we were the whole time). I suggest explicit permission requested and received for using data collected through mobile technology (this is my research focus, but it applies across the #edtech spectrum). I suggest extending your research projects to allow for this permission to be received, to allow for data to be collected in a relatively innocuous way over a relatively lengthy timeline (which allows for the self-conscious participant to settle in to the collection), and to allow for consistent updates as to how the data is being used. If it means lengthening research projects a bit to allow for this open process to occur, so be it. All of this has to be expressed, consented to, and executed ethically. There are no shortcuts.
- Beyond do no harm, we need to reorient our research focus, particularly in education, towards protection. We establish guidelines, we generate research projects that enact those guidelines, we get informed consent. This is good, but there is a step beyond it. We need to protect our participants as journalists protect sources. By taking these research projects on, we implicitly assume that role. We champion their right to privacy; we expose, very publicly, attempts to thwart their right to confidentiality. We protect them at every turn. If my PhD involved turning over my sources or exposing their identity in any way (and it never would, as the Institute of Education has one of the strictest ethical reviews I have ever seen), I would choose not to get the PhD. If my job depended on doing something I felt was exposing my participants, I would lose my job. It really isn’t integrity if it isn’t tested, is it? This will never be the case for the vast majority of us, but we should consider it the potential outcome of any research we undertake.
- Collect what data you need to answer your research questions. Rarely should this be more than a few things. Don’t go fishing. The types of data you collect are linked, or should be, exclusively to your research questions. Ethical review boards demand, or should be demanding, linkage there, but that isn’t enough. We have to hold ourselves to higher standards as individuals. Sure it is tempting to collect this or that, data that will be visible to us as researchers in the course of collecting our primary data. Sure we might want to use it and, if given permission, we should. But that is just it. That ancillary data stream might not be one that the participant knows they are revealing. This includes metadata: timestamps, locations, etc., especially prevalent with mobile data. This is highly revealing information and if it doesn’t answer a research question and if it isn’t listed explicitly in the consent form, we need to throw it out (or better yet, create a mechanism to strip it out before it even gets to us).
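That last mechanism can be as simple as a whitelist filter sitting between collection and storage. The sketch below is a minimal, hypothetical illustration (the field names and sample record are invented, not from any real study): only fields named explicitly in the consent form survive; ancillary metadata such as timestamps, GPS coordinates, and device identifiers is discarded before a researcher ever sees it.

```python
# Hypothetical sketch of a consent-form whitelist filter.
# Field names below are illustrative, not from any real study.

# Only the fields the participant explicitly consented to reveal.
CONSENTED_FIELDS = {"participant_id", "response_text", "task_id"}

def strip_metadata(record: dict) -> dict:
    """Keep only consented fields; drop everything else (timestamps,
    GPS coordinates, device identifiers, and other ancillary metadata)."""
    return {k: v for k, v in record.items() if k in CONSENTED_FIELDS}

# A raw record as a mobile client might submit it, metadata included.
raw = {
    "participant_id": "p-017",
    "response_text": "Finished the reading on the bus.",
    "task_id": "week3-reflection",
    "timestamp": "2014-07-02T08:14:33Z",  # not in the consent form
    "gps": (51.5226, -0.1280),            # highly revealing location data
    "device_id": "a1b2c3",                # could re-identify the participant
}

clean = strip_metadata(raw)
```

The point is where the filter runs: placed on the collection side, before the data reaches the researcher, it enacts "throw it out before it even gets to us" rather than relying on discipline after the fact.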
All of these are places to start. None of these can be avoided any longer, not if we want to retain a reputation as an ethical community. I will post more on the specific ethical complications in a mobile learning environment, but hopefully we hear more from others about how they are pursuing this in their own research communities.