
I suspect I have been burned, or at least dismayed, by one too many academic productivity tools that show real quality but never gain a foothold and then just fall off into the ether. Living in this post-social media world (social media as ubiquitous entity), it is hard to envision a social media service that generates enough buy-in to be useful. As far as academic research is concerned, the community-as-recommendation-engine approach has been best employed by Zotero: community as affinity groups as academic tribes, that sort of thing.

Readcube takes a swing at the automated form of an academic recommendation engine and the results are, at the very least, promising. I actually believe this one might have some legs, but as a counterpoint to Zotero rather than a direct replacement for it.

However, as an automated search engine it needs to be careful to offer some degree of transparency in the recommendation process. Academic research is a fickle beast, and the perceived authority of the recommendation service (even if it is a person or community) matters a great deal. That transparency is harder to put forth with automated tools. But it works well, however they managed to do it.

I think it will need to broaden its search scope beyond Google Scholar and PubMed, add a few more export options (Endnote, as far as I can see, is on the decline), and perhaps generate an automated bibliography entry when the researcher copies and pastes an excerpt from a source into their paper. From a pragmatic perspective, syncing up with institutional proxies is a great idea (one Google Scholar is toying with as well).

Readcube seems much more of an attempt to capture mindshare, a la Google, by occupying a few of the facets of the process of online research. By embedding itself in search, data capture, and writing/bibliography creation, it is well on its way towards embedding itself in the process itself. By building in proxies and aggregated searches across academic resources, it places a subtle wedge of abstraction between the researcher and the institutions that host the content (databases) and the intellectual activity (universities). As it stands, the end researcher, after registration, would have little to no idea what mechanism was giving them access or where that content was being pulled from. A clever approach, one that reminds me a bit of the Tweetdeck vs. Twitter situation. That is, until Twitter pulled the rug out from under the exercise by changing the underlying structure of search itself (and the APIs needed to do anything with the data).

Worth a look and at least 5-10 minutes of your time, which is more than I can say for most academic tools.

By Michael Gallagher

My name is Michael Sean Gallagher. I am a Lecturer in Digital Education at the Centre for Research in Digital Education at the University of Edinburgh. I am Co-Founder and Director of Panoply Digital, a consultancy dedicated to ICT and mobile for development (M4D); we have worked with USAID, GSMA, UN Habitat, Cambridge University and more on education and development projects. I was a researcher on the Near Futures Teaching project, which explores how teaching at The University of Edinburgh might unfold over the coming decades, as technology, social trends, patterns of mobility, new methods and new media continue to shift what it means to be at university. Previously, I was the Research Associate on the NERC, ESRC, and AHRC Global Challenges Research Fund sponsored GCRF Research for Emergency Aftershock Forecasting (REAR) project. I was an Assistant Professor at Hankuk University of Foreign Studies (한국외국어대학교) in Seoul, Korea. I also completed a doctorate at University College London (formerly the independent Institute of Education, University of London) on mobile learning in the humanities in Korea.
