On crowdsourcing.

Crowdsourcing, it seems, can be quite the dirty word for some within the digital humanities. It is an unfortunate truth that crowdsourcing has become synonymous with outsourcing, a product of a neoliberal agenda in the digital world. Crowdsourcing in academic contexts is quite different, as this post will go on to discuss.

Jeff Howe, who coined the term in an article for Wired magazine, describes it as follows on his blog: “Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively), but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers.” When we consider the ramifications of Howe’s definition, it is easy to imagine how crowdsourcing can become a problematic option, used by corporations and institutions to increase capital and minimise spending on labour. This reality diminishes the respect afforded to crowdsourcing projects that aim to preserve cultural artefacts, or that involve national and international communities in work of genuine educational and cultural benefit.

Amazon Mechanical Turk (https://www.mturk.com) is one of the more extreme examples of problematic practice undertaken in the name of ‘crowdsourcing’. Mechanical Turk is a virtual platform that allows companies to outsource labour online, which is in itself unobjectionable. However, these ‘human intelligence tasks’ are advertised to thousands of workers for incredibly small sums of money. Tasks include tagging, transcription and many of the other activities that usually feature within crowdsourcing projects, but a rudimentary browse of currently available tasks showed me that pay can be as low as $0.01 for ten minutes of work (the equivalent of roughly five euro cent an hour, for context). Workers can be sorted by their level of qualification, and need only be paid if the work is deemed satisfactory by the employer. Whilst there does not seem to have been any great global uproar about the establishment of the service per se, it is important to acknowledge that the pay for labour on this platform is alarmingly low, and largely unpoliced. Work of this kind is also traditionally undertaken by people in vulnerable financial situations, and the model can be seen as a problematic way of limiting a company’s workforce expenditure (perhaps hiring a small number of highly qualified staff to oversee and review work, rather than maintaining a traditionally tiered system of employment). The misuse of crowdsourcing in this way is the antithesis of an appropriate use of this extremely exciting and beneficial project model, as reflected in popular academic and public projects such as FamilySearch Indexing, Galaxy Zoo and Project Bentham.
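To put that rate in perspective, here is a rough back-of-the-envelope calculation. The task length, reward and dollar-to-euro exchange rate below are illustrative assumptions based on the figures quoted above, not numbers published by Amazon.

```python
# Back-of-the-envelope check on the Mechanical Turk example above.
# The reward, task length and exchange rate are illustrative assumptions.

reward_usd = 0.01        # pay for one task, in US dollars
minutes_per_task = 10    # observed time estimate for the task
usd_to_eur = 0.80        # rough exchange rate (assumption)

tasks_per_hour = 60 / minutes_per_task
hourly_usd = reward_usd * tasks_per_hour
hourly_eur = hourly_usd * usd_to_eur

print(f"${hourly_usd:.2f} per hour, or about {hourly_eur * 100:.0f} euro cent")
# -> $0.06 per hour, or about 5 euro cent
```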

A useful example of a crowdsourcing project run as digital humanities research for the ‘public good’, rather than as a source of cheap labour for strictly for-profit businesses, is the Letters of 1916 project (http://dh.tcd.ie/letters1916/), which recently celebrated its first anniversary. Emma Clarke, a former student of Digital Humanities at TCD, and Karolina Badzmierowska from the Letters of 1916 project visited our class at the start of October to talk us through the various aspects of their project and its greater context within the field.

At the beginning of the session, Emma and Karolina prompted us with the question, ‘What is the largest, and best known, example of crowdsourcing?’. As you might imagine, it is the ever popular font of collective knowledge (and often ignorance) that is Wikipedia. Lori Byrd Phillips writes on mnc.org that “Wikipedia is often described as the classic example of massively decentralized, peer-produced content on the Web. The site’s success and unwavering tenacity have led it to become arguably the most influential application of the open source software movement within the cultural and academic spheres.” Wikipedia is an example of a very particular type of crowdsourcing, one that relies on collaborative research rather than on tagging or transcription activities. This simply means that it expects (or hopes) that contributing users are informed and well researched within whatever field they are contributing to, and that their sources are accurate and reliable to the greatest possible extent. We could, of course, get into issues surrounding academic work and peer review here ad nauseam, but Wikipedia does not purport to be an academic source, merely a communal resource.

In contrast to this community-sourcing, Letters of 1916 takes a rather different approach. The researchers explained that the project relies on, and thrives on, what is known as the ‘superuser’: a user who contributes vast amounts of data to a project, far beyond that of an irregular or one-time user. Superusers are particularly prevalent within transcription-based projects like Letters of 1916, although they are not exclusive to them. These superusers are usually retired, or work from home, and typically have large amounts of free time to dedicate to a project. Such users can then drive a project forward with quite a limited number of regular users, provided that rewards and acknowledgements for those users are in place. These superusers are therefore crucial to projects like Letters of 1916 (see also Project Bentham and the Guardian’s MPs’ Expenses project).
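To give a loose sense of what ‘superuser’ means in practice, the sketch below flags a project’s heaviest contributors from a contribution log. The log structure, field names and threshold are hypothetical illustrations, not details of the Letters of 1916 platform.

```python
from collections import Counter

# Hypothetical contribution log: one entry per completed transcription.
# In a real project this would come from the platform's database.
contributions = [
    {"user": "marie", "page_id": 101},
    {"user": "marie", "page_id": 102},
    {"user": "sean", "page_id": 103},
    {"user": "marie", "page_id": 104},
]

def find_superusers(log, threshold=100):
    """Return users whose transcription count meets an (assumed) threshold."""
    counts = Counter(entry["user"] for entry in log)
    return {user: n for user, n in counts.items() if n >= threshold}

# A list like this is what would feed the reward and acknowledgement
# mechanisms discussed further below.
print(find_superusers(contributions, threshold=3))
```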

Stuart Dunn writes that, “Early successes in academic crowdsourcing can be partially explained in one of two ways: either there is an alignment between the aims of the research group and large sections of the wider public (e.g. in the origins of the universe); or by the application of intelligent means of engaging the relatively small number of people who do the large majority of the work itself (e.g. by allowing them more of an editorial rather than transcribing role; giving them administrative status in project forums etc.) In the former case, the business model form of crowdsourcing happens to work; in the latter it starts to break down, and must be replaced with a more nuanced approach.” Whilst Dunn would describe the Letters of 1916 project as falling under the former description, that is not to say that the project adopts an outdated approach to crowdsourcing, merely one that has proven to be effective. This is particularly the case when, as with this particular project, the subject matter is of direct cultural relevance to the audience.

So how might academically beneficial crowdsourcing projects avoid the trappings of private industry and yet still be both viable and successful? Rose Holley offers a multitude of tips for creating and maintaining a successful crowdsourcing project (which can be found in full here: http://www.dlib.org/dlib/march10/holley/03holley.html), including the following:

Tip 2: Have a transparent and visible chart of progress towards your goal.
Tip 3: Make the overall environment easy to use, intuitive, quick and reliable.
Tip 8: Give volunteers options and choices.
Tip 9: Make the results/outcome of your work transparent and visible.
Tip 10: Let volunteers identify and make themselves visible if they want acknowledgement.
Tip 11: Reward high achievers by having ranking tables and encourage competition.

What Holley makes clear in this article is that truly effective crowdsourcing projects rely on users’ experience of the project, both at the level of community and at the level of individual acknowledgement. In this way, users are encouraged to involve themselves in the project repeatedly, rather than using the service intermittently, or only once.
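To make two of these tips concrete, Tip 2 (a visible chart of progress) and Tip 11 (ranking tables for high achievers), here is a minimal sketch of how a project site might compute both from its transcription records. The data, target figure and function names are hypothetical, not drawn from Holley’s article or from any particular platform.

```python
from collections import Counter

# Hypothetical per-volunteer transcription counts; in practice these
# would be queried from the project's database.
transcriptions = Counter({"marie": 412, "sean": 57, "aoife": 305, "liam": 12})

PROJECT_TARGET = 5000  # assumed total number of pages to transcribe


def progress_report(counts, target):
    """Tip 2: a transparent, visible measure of progress towards the goal."""
    done = sum(counts.values())
    return f"{done} of {target} pages transcribed ({done / target:.1%})"


def leaderboard(counts, top_n=3):
    """Tip 11: a ranking table to acknowledge and encourage high achievers."""
    return counts.most_common(top_n)


print(progress_report(transcriptions, PROJECT_TARGET))
print(leaderboard(transcriptions))
```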

In short, beyond the academic world, crowdsourcing has taken on problematic connotations that are wholly incongruent with the aims of academic crowdsourcing projects. In an increasingly digitised and globalised world, the use of crowdsourcing to limit corporate expenditure and devalue labour is only likely to grow. As it stands, such platforms are limited enough not to permeate every area of research and industry, yet services like Amazon Mechanical Turk still receive tremendous amounts of traffic and remain legitimate options for companies seeking work traditionally undertaken by regularly paid employees. They should not be confused with the same kinds of task undertaken by members of the public and the academic community for entirely different reasons. For certain kinds of crowdsourcing project to be successful, labour may need to be paid, but it should be paid fairly and in line with standards established within the community at large.

Bibliography

Dunn, Stuart (21 March 2013). “More than a business model”.

Holley, Rose (2010). “Crowdsourcing: How and Why Should Libraries Do It?”. D-Lib Magazine 16.3/4.

Howe, Jeff (2 June 2006). “Crowdsourcing: A Definition”.

Byrd Phillips, Lori (25 June 2014). “Why You’ll Never Hear Me Call Wikipedia Crowdsourcing”.


4 thoughts on “On crowdsourcing.”

  1. lilybeauvilliers says:

    Is it terrible that I totally just signed up for Mechanical Turk because money is money and Amazon gift cards are better than money?

    (It is, I know.)

    Very unique take on crowd sourcing, Kate! It’s certainly beneficial to think about the economics … In a strange way, it seems that the least enjoyable work is the least valued, even though most people would say it’s the most actual WORK. So transcribing you get people to do for free, but making pretty visualizations you do with your lovely PhD funding or tenure-track salary. Curious quirk of human logic.


    • bedford806 says:

Oh no! Did I miss something about being paid more in gift cards, or are you working for five cents an hour? If you are, your self-worth might be slightly askew! Currently writing a post on your research collections post, but I think it might be, in summation: didn’t Lily say it all? I agree with her.


      • lilybeauvilliers says:

        No you don’t get paid more in gift cards!! But apparently if you keep an eye out sometimes jobs come up that pay whole dollar bills an hour. (Kinda signed up for the … what do you guys call it? Craic?)

        Woo-hoo, glad my post was inspiring! There’s a huge hole in it … I totally forgot that JSTOR is hidden behind a paywall, so you can’t have both the JSTOR and the Internet Archive model. This is my gift to you. Exploit the hole!


  2. bedford806 says:

    As much as I frequently try to explore the holes, I think I’m pretty much at ease with my lack-of-point blog now. Don’t become a digital mule Lily! Or when you exploit the system of exploitation and climb to the top of the mule pile, come hook me up with some gift card $$$$$

