Surrendering to filter failure is my flee mechanism; how much data is too much?
I have been posting a lot about learning and how higher-order thinking is generally supported by sanctuary (that is the word I keep coming back to) and reflection, by generally giving yourself time to think. But there are times when that reflection is not compatible with your roles in other walks of life.
Professionally, as is the case with most of us, it is important that I stay abreast of changes, developments, and trends in the core fields I work in (information services/archives/higher ed/elearning/academic publishing). This is a professional expectation and part of the social contract I have with my user base. I use all the tools at my disposal. I am fairly agnostic about tools: whatever works and is readily accessible.
That is too abstract, however. The following will serve as evidence that I am the furthest thing from a mathematician, but it is only intended to make the case that we process a heck of a lot of information on a daily basis. And from what I have been reading about Cognitive Load Theory, most of that isn't making it anywhere near long-term memory. It is novel noise that is quickly discarded unless it helps add some depth to an existing schema or take on the world.
So, here are my rudimentary statistics on my data intake:
- Emails: 70 a day (last week's average, combined professional and personal), at roughly 75 words each (reading a long email thread is the most cognitively taxing thing I do all day). 70 × 75 = 5,250 words.
- Google Reader: 50 posts a day (I skim or read), at roughly 100 words each. 50 × 100 = 5,000 words.
- Twitter: this is a hard one to estimate. But just for fun, let's say I process 100 tweets a day at 140 characters each. 100 × 140 = 14,000, which I will loosely count as words even though characters and words are not the same thing.
- Facebook: fun, but still data intake. Probably 50 posts a day, at about 20 words apiece. 50 × 20 = 1,000 words.
- Book reading (I have a thing for this, where I force myself to read a minimum of 20 pages a night; usually it is much more than that). 20 × 250 words per page = 5,000 words.
- Research (I do this a lot for work). I figure I read a couple of 10-page articles a day, at minimum. 2 articles × 10 pages × 250 words per page = 5,000 words.
- Random internet stories: 10 stories of at least 100 words a pop. 1,000 words.
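For the skeptical, the back-of-the-envelope tally above can be reproduced in a few lines of Python; the figures are the same rough estimates from the list, nothing more (and Twitter is counted in the same loose characters-as-words fashion):

```python
# Rough daily word-intake tally, using the estimates from the list above.
streams = {
    "email": 70 * 75,           # 70 emails x ~75 words
    "google_reader": 50 * 100,  # 50 posts x ~100 words
    "twitter": 100 * 140,       # 100 tweets x 140 characters, counted as "words"
    "facebook": 50 * 20,        # 50 posts x ~20 words
    "books": 20 * 250,          # 20 pages x ~250 words per page
    "research": 2 * 10 * 250,   # 2 articles x 10 pages x 250 words per page
    "random_stories": 10 * 100, # 10 stories x ~100 words
}

total = sum(streams.values())
print(total)  # 36250
```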
So, there you have it: 36,250 words a day, and those are just the words I can remember ingesting. Add to that casual instructions, sign text, auditory input, visual stimuli, and even the reconstituting of all of this into creative output (my tweets, blog posts, emails, documentation, copy, etc.). Actually, I don't mind the output part, as I know I am personalizing all of this data and allowing at least parts of it to transfer to long-term memory.
However, I have resisted bringing these all together in one interface, a la iGoogle or some such alternative. I like the mental break of switching from one interface to the other. I stream a lot of this through this blog to the right, but that doesn't account for more than 10% of what I am processing daily. I skim, search, read only what passes the five-second rule, discard, reconfigure. A nimble (as possible) approach to filtering endless data streams. But recently, I feel as if I am failing a bit. Every day I go to my Google Reader and either click Mark All as Read or just avoid it altogether. I delete alerts as they come in from Google. I have reached a bit of an impasse in processing data (mentally).
Does this signal a plateau of processing? Or is it just data fatigue? I don't buy into the adoption-fatigue notion that there are too many streams or tools themselves. For me, it is simply that there is too much data in those streams to effectively process. So, in a knee-jerk flee mechanism, I simply avoid them altogether. Delete them, reroute them, remove them, mark them all as read. Avoiding the truth that my filters are indeed broken. Too porous, too much information.
Then it got me thinking a bit about what does work and whether that can be replicated across different channels. I find that Twitter is turning out to be my most effective, reliable data stream and it is also the one that I am often the most skeptical about. I get consistently usable data from this stream and I am thinking it is because I have taken great care in curating the list of people I follow. I constantly revise the list, my lists. So, the trust element is high precisely because I have invested in it. I trust these people professionally and subsequently I trust the data they send along.
I suppose my rather simplistic conclusion is that filtering is a human enterprise, despite the gloss of evidence to the contrary. We still rely on trusted human connections to curate the data we receive. The social media revolution has really cemented the social aspect of these relationships. They are between people and are surprisingly sturdy. So, I am going to go back to Google Reader with a cleaver to apply the same schema as my Tweetdeck. Only trusted sources; only valuable, actionable data. And then the stuff I want to read. Because you need to break the chokehold of important and useful on your content selection choices; sometimes you need to read for the sheer enjoyment of it. It is like a giant reset button.
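That cleaver-to-Google-Reader plan amounts to a whitelist filter: keep only items from sources you have invested trust in, plus an explicit slot for reading done purely for enjoyment. A minimal sketch of the idea, where the source names, item fields, and "enjoyment" tag are all hypothetical illustrations rather than any real Reader or Tweetdeck feature:

```python
# Toy whitelist filter: an item survives if it comes from a trusted,
# hand-curated source, or if it is explicitly tagged as enjoyment reading.
# All source names and items below are made up for illustration.
TRUSTED = {"colleague_a", "archives_listserv", "elearning_blog"}

def keep(item):
    """Return True if an item passes the trust-or-enjoyment filter."""
    return item["source"] in TRUSTED or item.get("tag") == "enjoyment"

inbox = [
    {"source": "colleague_a", "title": "New finding aid standard"},
    {"source": "press_release_bot", "title": "Buy our LMS"},
    {"source": "novelist_feed", "title": "Short story", "tag": "enjoyment"},
]

filtered = [i for i in inbox if keep(i)]
print([i["title"] for i in filtered])
# → ['New finding aid standard', 'Short story']
```

The point of the explicit enjoyment tag is the reset button mentioned above: the filter is deliberately not purely about "important and useful."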
So, I suppose we should all either surrender to filter failure and ignore the white noise, or try to carve it down a little, to make the keyhole view of the world a little smaller and more focused. Like pupils contracting a bit in the glare of the sunlight.