Crowdsourcing data analysis
Posted: June 10th, 2011 | Author: Tabitha Hart | Filed under: research tools

At the recent ICA conference in Boston I attended a very interesting talk about crowdsourcing.
For those of you new to this term, crowdsourcing is a portmanteau of “crowd” and “outsourcing.” It is the process of outsourcing small, repetitive tasks to a large, distributed group of workers. According to Wikipedia, the term was coined by Jeff Howe of Wired magazine in his article “The Rise of Crowdsourcing.”
The talk I attended was called “Crowdsourced Content Analysis” and was given by Aaron Shaw of UC Berkeley. At the micro level, Aaron’s talk covered the pros and cons of using crowdsourced labor for data analysis. He described how he had used Amazon’s Mechanical Turk, one of the earliest and best-known crowdsourcing sites, to outsource the content analysis for one of his research projects, and he touched on some of the logistical and tactical issues involved in taking this approach. At the macro level, Aaron posed larger questions: How do we ensure and measure the digital literacy skills of a crowdsourced labor force? How do we test for and ensure reliability and accuracy? What are the ethics of crowdsourcing?
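The reliability question Aaron raised is typically operationalized with a chance-corrected agreement statistic computed over the codes that two (or more) workers assign to the same items. As an illustration only (the talk did not specify a method, and the coder labels below are hypothetical), here is a minimal sketch of Cohen's kappa for two crowdsourced coders:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement between two coders, corrected
    for the agreement expected by chance given each coder's label frequencies."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Proportion of items on which the two coders agreed outright
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of each coder's marginal label proportions
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical sentiment codes assigned by two workers to ten items
coder_1 = ["pos", "pos", "neg", "neg", "pos", "neu", "neg", "pos", "neu", "neg"]
coder_2 = ["pos", "neg", "neg", "neg", "pos", "neu", "neg", "pos", "pos", "neg"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # prints 0.68
```

A kappa near 1.0 indicates strong agreement beyond chance, while values near 0 suggest the workers are effectively guessing; for more than two coders, researchers often use Fleiss' kappa or Krippendorff's alpha instead.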
While online crowdsourcing of data analysis (content analysis or otherwise) doesn’t seem to have caught on yet amongst academics, it may well become a go-to tool in the near future. Aaron’s talk highlighted its attractive features – low cost, easy online distribution of materials, inexpensive labor, and the capacity to handle large data sets – and left us with many interesting and important questions to reflect on.
Do you have any experience crowdsourcing your data analysis? What tools have you used? What was your experience like?