

150
Thomas Gegenhuber

Reihe B: Wirtschafts- und Sozialwissenschaften

Crowdsourcing
Aggregation and selection mechanisms and the impact of peer contributions on contests

Impressum
Reihe B Wirtschafts- und Sozialwissenschaften
Thomas Gegenhuber Crowdsourcing Aggregation and selection mechanisms and the impact of peer contributions on contests

2013 Johannes Kepler Universität Linz. Approved (approbiert) on 9 March 2012. Reviewer: a. Univ.-Prof. Dr. Robert Bauer. Production: Johannes Kepler Universität Linz, Altenberger Straße 69, 4040 Linz, Österreich/Austria (body) and TRAUNER DRUCK GmbH & Co KG, 4020 Linz, Köglstraße 14, Österreich/Austria (cover)

ISBN 978-3-99033-139-2 www.trauner.at

Foreword
Imagine the following situation: you have to write a paper. At some point, you no longer see any of your own mistakes. So what can you do to mitigate this effect? You put away your manuscript for a day or two. After two days you have a fresh perspective on your work, you spot mistakes easily and you improve the clarity of your arguments. I want to draw an analogy between this example and the brainstorming literature. Findings from the brainstorming literature indicate that separating idea selection from idea generation leads to the selection of better ideas. The separation reduces your involvement in your work, which would otherwise cloud your judgment. One topic that I explore in my thesis is hybrids in crowdsourcing, which attempt to achieve such separation in several ways. In short, hybrids are the application of different actors, group structures as well as aggregation and selection mechanisms in a single-round, multi-round or iterative crowdsourcing process. To give an example of such a hybrid: an organisation makes an open call to harness ideas from the crowd via a web-based platform. In the first step, the crowd individually creates the ideas. Next, members of the crowd vote and comment on each other's ideas. After this step, the R&D department of the organisation looks at the 20 highest-ranked ideas (i.e. the ideas which have received the most votes). Out of this pool of ideas, the R&D department selects three ideas that the organisation implements. In this example, idea generation is separated from idea selection. First, the members of the crowd judge ideas which they have not created themselves. Second, the employees of the R&D department, who are not involved in the idea generation process at all, make the final selection. While the literature indicates that the crowd does quite well in creating original ideas, the employees of the R&D department have a better perspective on the feasibility of the ideas for the organisation. Furthermore, the ideas of the crowd must be integrated into the organisation. In this stage the R&D department plays a crucial role. Giving them the last call might mitigate the not-invented-here syndrome, because management signals to its R&D employees that it still relies on their professional expertise.

This book is written for two audiences. First, for scholars who are interested in the topic of crowdsourcing and innovation: you might be interested in the in-depth discussion of crowdsourcing definitions, the aggregation and selection mechanisms, the effects of different types of peer contributions on contests, as well as hybrids. Second, the thesis is useful for practitioners who plan to create their own crowdsourcing platform or are looking for an appropriate intermediary. My thesis will give you a detailed understanding of the processes involved in crowdsourcing, especially if you rely on the contest model. But let's go back to the example at the beginning of this foreword and how it relates to this book. In December 2012 I started the preparations to publish my thesis as a book. I had submitted the thesis in March 2012 and had not had a detailed look at it since then. To my discontent, I spotted some mistakes, which needed to be fixed. I also realized that some changes would be useful to ease the flow of reading. The result is the edited version of my thesis contained in this book. Since submitting in March I have read much of the literature that has become available since then and therefore considered adding a substantial body of new material. But doing so would constitute a major adaptation of the original work. I thought this would not serve the goal of this publication, namely to document my in-depth involvement with the topic of crowdsourcing at the time of my graduation. This thesis is the starting point of my academic journey and sets the path for future exploration.

Thomas Gegenhuber
Linz, January 2013


Acknowledgements
I deeply thank my advisor, Prof. Robert Bauer, for his time, supervision and advice. The numerous discussions with him provided me with essential insights for my thesis. I want to thank the team of the Institute for Organisation and Global Management Education (Prof. Giuseppe Delmestri, a. Prof. Johannes Lehner, Ass. Prof. Cäcilia Innreiter-Moser and Dr. Claudia Schnugg) for taking the time to discuss ideas and provide feedback on my presentations. I want to thank the following colleagues and friends for literature suggestions, discussions and feedback on my work: Alexander Bliem, Stacy Kirpichova, Susi Aichinger, Karina Lichtenberger, Marko Hrelja, Erica Chen, Bernhard Prokop, Sean Wise, Jeff DeChambeau, Naumi Haque and Dave Valliere. I also want to thank Skye M. Hughes for proofreading the edited version of my thesis. Special thanks to Melanie Wurzer for always being there for me. Finally, I want to express my gratitude to my parents, Ilse and Wolfgang Gegenhuber, for their patience and support throughout the duration of my studies.

Abstract
Crowdsourcing can be understood as an agent (an organisation or an individual) using web-based platforms, which are often managed by intermediaries, to call for ex ante unidentified, ideally large and diverse sets of individuals (the crowd) to solve problems identified and defined by the agent. A crowdsourcing platform provides the means to aggregate and select the contributions of the crowd. This analysis suggests that the three aggregation mechanisms (collection, contest and collaboration) not only exist in their pure form, but may also overlap. Drawing upon transaction cost theory, selection instruments can be categorized into four governance mechanisms: hierarchy, standardization, meritocracy and market. The second part of the thesis places an emphasis on the contest model in crowdsourcing and discusses how access to peer contributions in contests influences the quality of ideas, the cooperative orientation and the motivation of the crowd, and the strategic and marketing considerations of the agent. Moreover, this thesis lays the groundwork for future research on hybrid models in crowdsourcing. A hybrid model uses a combination of different actors, aggregation and selection mechanisms, and group structures in a single-round, multi-round or iterative process.


Table of Contents
1 Introduction  1
1.1 Structure of the Thesis  2
1.2 Method  3
2 Crowdsourcing  4
2.1 Defining Crowdsourcing  4
2.1.1 The Agent  9
2.1.2 The Crowd  11
2.2 Related Concepts  19
2.2.1 Open Source Development  20
2.2.2 Open Innovation  22
2.2.3 Co-creation  25
2.3 Crowdsourcing Classifications  27
2.3.1 Functional Perspective  27
2.3.1.1 Howe (2008)  27
2.3.1.2 Brabham (2012)  29
2.3.1.3 Papsdorf (2009)  31
2.3.1.4 Gassmann et al. (2010)  34
2.3.1.5 Doan et al. (2011)  37
2.3.2 Task Perspective  40
2.3.2.1 Schenk and Guittard (2010)  40
2.3.2.2 Rouse (2010)  41
2.3.2.3 Corney et al. (2009)  43
2.3.3 Process Perspective  45
2.3.3.1 Geiger et al. (2011)  45
2.3.4 Conclusion  47
2.4 Aggregation and Selection  48


2.4.1 Aggregation  51
2.4.1.1 Integrative and Selective Crowdsourcing  51
2.4.1.2 Collective Intelligence Genome Framework  54
2.4.1.3 Overlapping Dimensions in Aggregation  56
2.4.2 Selection  59
2.4.2.1 Decide Gene  60
2.4.2.2 Evaluate Gene  63
2.4.2.3 Selection Instruments and Governance Mechanisms  65
2.4.3 Aggregation and Selection Framework  72
3 Contests in Crowdsourcing  74
3.1 Contests in General  74
3.1.1 Winner-take-all Model and Distributive Justice  74
3.1.2 Generic Contest Process  77
3.1.3 Contest Design Elements  79
3.1.4 Form Follows Function  81
3.2 Impact of Accessibility of Peer Contributions on Contests  83
3.2.1 Classification of Accessibility of Peer Contributions  84
3.2.2 The Relation between Access to Peer Contributions and Marketing and Strategic Considerations of the Agent  88
3.2.3 Accessibility of Peer Contributions and Motivation of the Crowd  91
3.2.4 Accessibility of Peer Contributions and Quality of the Best Idea  95
3.2.5 Access to Peer Contributions and the Cooperative Orientation of the Crowd  101
3.3 Discussion  108
3.3.1 Reward Structure  111
3.3.2 Hybrids  113
4 Final Conclusion  120
4.1 Critical Reflection  120
4.2 Practical Implications  121
4.3 Questions for Further Research  122

4.4 Contributions  124
References  126
Appendix 1: List of Platforms  139


List of Abbreviations
CI - Collective Intelligence
CIO - Chief Information Officer
CS - Crowdsourcing
EBS - Electronic Brainstorming
FLOSS - Free/Libre and Open Software
GNU - GNU's Not Unix
HIT - Human Intelligence Task
ICC - Intra-Corporate Crowdsourcing
IP - Intellectual Property
MNC - Multinational Corporation
NGT - Nominal Group Technique
OI - Open Innovation
OS - Open Source
R&D - Research & Development
TCT - Transaction Cost Theory

List of Figures
Figure 1: Onion-structure in FLOSS (Crowston and Howison, 2005)  17
Figure 2: Open Innovation Paradigm (Chesbrough, 2006)  23
Figure 3: Connection between Crowdsourcing and Open Innovation  24
Figure 4: Crowdsourcing Applications and Corresponding Industries (Papsdorf, 2009)  33
Figure 5: Categorization of Crowdsourcing Initiatives (Gassmann et al., 2010)  36
Figure 6: Crowdsourcing Taxonomy (Rouse, 2010)  43
Figure 7: Selective Crowdsourcing  52
Figure 8: Integrative Crowdsourcing  53
Figure 9: Collection Gene  55
Figure 10: Collaboration Gene  56
Figure 11: Overlapping Dimensions in Aggregation  59
Figure 12: Voting Process  60
Figure 13: Prediction Market  62
Figure 14: Aggregation and Selection Framework  73
Figure 15: Generic Contest Process  78
Figure 16: Threshold Accessibility of Peer Contributions  86
Figure 17: Connection between Accessibility of Peer Contributions and Aggregation/Selection Mechanisms  87
Figure 18: Screenshot Threadless.com  89
Figure 19: Correlation between Innovativeness and Cooperative Orientation (Bullinger et al., 2011)  105
Figure 20: Relationship between User Types and Successful Innovation Outcomes (based on Hutter et al., 2011)  105
Figure 21: Recursive Incentive Structure (Tang et al., 2011)  112


List of Tables
Table 1: User Types, Community Impact, Traits and Approximate Percentage of Community (Moffitt and Dover, 2011)  18
Table 2: Crowdsourcing Typology (Brabham, 2012)  30
Table 3: Sample of CS Systems on the WWW (Doan et al., 2011)  39
Table 4: Characteristics of Crowdsourcing Applications (Schenk and Guittard, 2010)  41
Table 5: Classification of Crowdsourcing Activities (Corney et al., 2009)  44
Table 6: Taxonomy of Crowdsourcing Processes (Geiger et al., 2011)  46
Table 7: InnoCentive Genome (Malone et al., 2009)  54
Table 8: Overview of Individual and Group Decisions  60
Table 9: Evaluation Gene (Wise et al., 2010)  64
Table 10: Tagging and Flagging Gene  65
Table 11: Morphological Analysis of Selection Instruments and Governance Mechanisms  72
Table 12: Contest Design Elements (Bullinger and Moeslein, 2010)  79
Table 13: Overview of Contest Types (Hallerstede and Bullinger, 2010)  82
Table 14: Comparison of Design Elements of Studies on Cooperative Orientation of the Crowd  108
Table 15: Overview of Propositions re: Impact of Accessibility of Peer Contributions  109
Table 16: Genome of Deloitte's Innovation Quest (Gegenhuber and Hrelja, 2012)  118


1 Introduction
The future of the web: intellectual mercenaries will meet on electronic marketplaces. An organisation is able to easily gather thousands of these intellectual mercenaries to solve one of its problems. Within a few hours or days, the task is completed and the problem solved. The mercenaries receive their reward and seek the next assignment. This web is made for individuals who prefer flexible work arrangements to the mundane routine of a nine-to-five job. Malone and Rockart envisioned this scenario in an article published in 1991. Today, it has become reality. MechanicalTurk is a marketplace for individuals who are willing to complete well-defined micro-tasks for a micro-payment. Threadless, a successful venture and well-cited example in business research, solved the problem of how to continuously offer a wide range of cool T-shirts. To do this, Threadless makes an open call and asks the crowd to submit T-shirt designs. The submitted designs are aggregated in a collection. The crowd then assesses and selects the best designs by rating and commenting. The designs with the highest scores are considered for production by Threadless. Still, the Threadless management makes the final decision on whether or not to produce a design. If a design goes into production, the individual who submitted it is awarded a prize. Another well-known example is InnoCentive, which was founded in 2001 (one year after Threadless). InnoCentive makes it possible for an organisation to broadcast a scientific problem in the form of a contest to a network of potential solvers who individually try to find a solution. The contestants do not have access to the solutions of their peers. The organisation awards the winning solution with a prize and pays a fee to the intermediary InnoCentive. Threadless and InnoCentive were founded in the early 2000s.

Six years later, in an article for Wired magazine, Jeff Howe and Mark Robinson (2006a) introduced the term crowdsourcing to describe platforms such as MechanicalTurk, Threadless and InnoCentive. The term crowdsourcing is derived from the words crowd and outsourcing. Crowdsourcing is discussed intensively in the trade press and on industry blogs such as crowdsourcing.org. Crowdsourcing has also sparked the interest of the scientific community. A research stream has emerged that analyses how innovation contests work (Leimeister et al., 2009; Bullinger et al., 2010; Hutter et al., 2011), while others investigate innovation contests under the umbrella of distributed/open innovation (Jeppesen and Lakhani, 2010). Innovation contests basically use the broadcast search model of InnoCentive, where the sponsor of a challenge can choose from a variety of solutions provided by the crowd. Threadless is also based on the contest logic, but in contrast to InnoCentive, the crowd can assess the contributions of their peers. Even though the final decision of which T-shirt to produce is in the hands of Threadless, the crowd plays a major role in peer-reviewing the designs. At InnoCentive the crowd produces ideas, while the organisation selects the best one. In this model, members of the crowd do not interact with each other and offer no input on the organisation's final decision. This thesis attempts to answer the following questions: How should crowdsourcing be defined and how does it overlap with related concepts? Which platforms should be considered crowdsourcing and what classifications exist? How does the aggregation and selection process in crowdsourcing work? And what role does access to peer contributions play in contests such as InnoCentive or Threadless?
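The Threadless process just described (the crowd submits designs, the crowd scores them, management makes the final call) can be read as a shortlist-then-decide pipeline. The sketch below is purely my own illustration of that logic, not code from Threadless; the class and function names and the shortlist sizes are assumptions chosen for the example.

```python
# Minimal sketch (my own illustration, not Threadless code) of the
# "crowd scores, management decides" selection logic described above.
from dataclasses import dataclass, field

@dataclass
class Design:
    author: str
    ratings: list[int] = field(default_factory=list)  # crowd ratings, e.g. 0-5

    @property
    def score(self) -> float:
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

def shortlist(designs: list[Design], top_n: int = 20) -> list[Design]:
    """Aggregate the crowd's ratings and keep the highest-scoring designs."""
    return sorted(designs, key=lambda d: d.score, reverse=True)[:top_n]

def management_pick(candidates: list[Design], winners: int = 3) -> list[Design]:
    """Placeholder for the final, human decision by the organisation."""
    return candidates[:winners]  # in reality a judgement call, not a formula

if __name__ == "__main__":
    pool = [Design("alice", [5, 4, 5]), Design("bob", [3, 4]), Design("carol", [5, 5])]
    print([d.author for d in management_pick(shortlist(pool, top_n=2), winners=1)])
```

The same skeleton also describes the hybrid discussed in the foreword, where the crowd's votes produce a shortlist of 20 ideas and the R&D department selects three of them.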

1.1 Structure of the Thesis


The structure of the thesis is as follows: First, I shall define crowdsourcing and review how crowdsourcing differs from related concepts. Next, I shall examine literature that attempts to classify different types of crowdsourcing and discuss different modes of gathering and selecting the inputs of the crowd.

In the second part of the thesis I shall place an emphasis on contests in crowdsourcing. After briefly explaining the foundation of contests and their externalities, I will elaborate on how the accessibility of peer contributions affects the outcome of a contest. In the conclusion I will integrate the findings of the first and second parts of the thesis and investigate hybrids, a combination of different actors, aggregation and selection mechanisms and group structures. Additionally, I am going to outline a new reward system. Finally, I will summarise the practical implications and contributions of this thesis and outline avenues for further research.

1.2 Method
This thesis attempts to make a conceptual contribution to the crowdsourcing literature. The methodological approach has two strands: an extensive literature review and analysis as well as an examination of crowdsourcing platforms on the web. The literature review encompassed a search on Google Scholar, which is linked to the databases of the Ryerson University Library (Toronto). The Ryerson library has access to a wide range of databases, including Academic Search Premier, Proquest Research Library, Ebsco, ACM Digital Library and Science Direct. The initial keyword search used the term crowdsourcing. It led to the identification of approximately 11,000 articles. I carried out a preliminary scan of papers (abstract and keywords) that were often cited and which discussed crowdsourcing in general. I read in detail all papers that were, according to Google Scholar, cited more than 50 times. In a second step, I developed a categorization for each research question. The categories are Crowdsourcing in General, Taxonomies, Aggregation and Selection, and Contests. I added literature to the categories using a two-step approach. First, I used the reference lists of the most-cited papers to identify further literature. Second, I made another keyword search using numerous combinations of a + b.

a referred to crowdsourcing, while b was changed in each search. For b I used the following terms: classification, taxonomy, typology, open source, innovation contests, contests, nominal group, innovation tournaments, co-creation, open innovation. I only added literature to the categories that provided informational value for the research. Articles that had no connection to the research focus were discarded (e.g. papers focusing on the economics of MechanicalTurk). To examine crowdsourcing platforms, I signed up to the following websites: Threadless, 99designs, DesignCrowd, CrowdSpring, InnovationExchange, Spreadshirt, MechanicalTurk, Vencorps and Dell's IdeaStorm. I also visited the website of each platform that is mentioned in this thesis.1 This step helped me to gain a deeper understanding of the aggregation and selection mechanisms of crowdsourcing. It was also beneficial to compare the literature with the actual crowdsourcing applications on the web.
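A minimal sketch of the "a + b" keyword procedure described above; the function name is my own, and the list simply restates the search terms from the text rather than forming part of the original method.

```python
# Sketch (my own illustration) of the "a + b" keyword combinations used
# for the second round of the literature search described above.
def build_queries() -> list[str]:
    a = "crowdsourcing"
    b_terms = [
        "classification", "taxonomy", "typology", "open source",
        "innovation contests", "contests", "nominal group",
        "innovation tournaments", "co-creation", "open innovation",
    ]
    return [f"{a} {b}" for b in b_terms]

if __name__ == "__main__":
    for query in build_queries():
        print(query)  # each string corresponds to one Google Scholar search
```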

2 Crowdsourcing
2.1 Defining Crowdsourcing
Howe (2006b) describes crowdsourcing as "(...) the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively) but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers." The term outsourcing is an important attribute in Howe's definition.

1 All platforms and their corresponding URLs are listed in Appendix 1.

He acknowledges that crowdsourcing is a disruptive business model that reduces the price of intellectual labour (Howe, 2008). Crowdsourcing makes good use of the fact that the digital world is even flatter than the real world: as long as the crowd has access to a computer and the Internet, activities can be performed from anywhere in the world. Howe (2008) admits that crowdsourcing, like outsourcing, takes advantage of disparities between developed and developing countries. Due to income disparities, winning a $300 (USD) bounty for a logo design challenge is worth much more to a designer from Pakistan than to a designer who lives in the US. At MechanicalTurk it is hardly possible to earn the equivalent of the minimum wage within one hour.2 Howe (2008) is aware of the disruptive effects of crowdsourcing and calls for a firewall against exploitation, otherwise crowdsourcing might become the digital home sweatshop of the future. Von Ahn (2010) also argues that if crowdsourcing is the future of work, regulation should be considered. Crowdsourcing generally shifts the risk of failure from the organisation to the crowd (Schenk and Guittard, 2010) and is perceived as a strategy to reduce costs (Howe, 2008). Whereas outsourcing typically involves a one-to-one relationship between an organisation and a third party that needs to deliver specific results (Rouse, 2010; Schenk and Guittard, 2010), crowdsourcing involves a one-to-many relationship between the organisation and the crowd. In addition, Howe's definition (2006b) highlights the open call nature of crowdsourcing, which gives an organisation access to a large network of individuals outside of the organisation. The open call results in a self-selection of individuals who participate due to numerous motivations (c.f. Brabham, 2010; 2012). Although some crowdsourcing platforms use fixed or success-based payment or restrict who can participate (Geiger et al., 2011), no platform can identify ex ante exactly who will participate in a crowdsourcing effort. In organisations, a
2 To get a sense of why MechanicalTurk is also called the digital assembly line (Stross, 2010), I advise the reader to self-experiment and try to earn as much money as possible on MechanicalTurk within two hours.

supervisor may instruct an employee to perform a certain task. If the employee refuses to perform the task, he/she faces the risk of being dismissed. In crowdsourcing, individuals independently decide for themselves whether or not to accept a task and can refuse to accomplish it with hardly any consequences. Building on Howe (2006b), Brabham (2008) defines crowdsourcing as "an online, distributed problem-solving and production model" (2008: 75). Brabham (2008) explains that crowdsourcing is a business model that benefits from the input of the crowd. Gassmann et al. (2010) provide the following definition of crowdsourcing: "Crowdsourcing is a strategy to outsource idea generation and problem solving to external actors in the form of an open call. In addition to problem solving and idea generation, crowdsourcing may also be used for micro-tasks. In general, the open call is realized through an online platform."3 (2010: 14) Gassmann et al. (2010) examined a large variety of crowdsourcing platforms, including the platform InnoCentive@Work. InnoCentive@Work makes an open call to a large group within the organisation. This collaborative software is based on the InnoCentive model and claims to build an innovation community within the organisation. If one applies the definition above to InnoCentive@Work, it cannot, strictly speaking, be classified as crowdsourcing: the application of crowdsourcing within an organisation cannot be labelled as outsourcing. Villarroel and Reis (2010) call the application of crowdsourcing within the firm Intra-Corporate Crowdsourcing (ICC) and provide the following definition: "ICC refers to the distributed organizational model used by the firm to extend problem-solving to a large and diverse pool of self-selected contributors beyond the formal internal boundaries of a multi-business firm: across business divisions, bridging geographic locations, leveling hierarchical structures." (2010: 2)

3 Translated from German: "Crowdsourcing ist eine Strategie des Auslagerns von Wissensgenerierung und Problemlösung an externe Akteure durch einen öffentlichen Aufruf an eine große Gruppe. Typischerweise stehen Problemlösung und Ideengenerierung im Zentrum, aber es sind auch repetitive Aufgaben möglich. In der Regel wird dieser Aufruf durch eine Website realisiert" (Gassmann et al., 2010: 14).

Other scholars do not differentiate whether crowdsourcing takes place inside or outside an organisation. Doan et al. (2011) offer a very broad definition of crowdsourcing and view it as a "general-purpose problem-solving method" (2011: 87). Consequently, they consider a platform (e.g. Threadless) a crowdsourcing system "if it enlists a crowd of humans to help solve a problem defined by the system owners" (2011: 87). It appears that there is no clear consensus about what constitutes crowdsourcing. Howe (2006b) stresses the notion of outsourcing and the open call format. Brabham (2008) and Doan et al. (2011) place an emphasis on distributed problem-solving. Gassmann et al. (2010) provide a definition that does not consider crowdsourcing within a firm, although they describe examples of the latter in their book. The definition of Villarroel and Reis (2010) highlights the problem-solving capabilities of crowdsourcing within organisations. I conclude that, due to the emergence of crowdsourcing within organisations, outsourcing is not the key element of a crowdsourcing definition. Based on this literature review, I propose the following definition of crowdsourcing: Crowdsourcing is the act of an agent who uses a platform (of an intermediary) to call for an ex ante unidentified, self-selected, ideally large and diverse set of individuals (crowd) who explicitly or implicitly solve a problem (defined by the agent). The key element of this definition is the distributed problem-solving capacity of crowdsourcing. The agent (either an organisation or an individual) calls for a self-selected crowd by using a platform. A platform

should be a web-based system. The web deserves credit for the rise of crowdsourcing with features like "speed, reach, anonymity [and the] opportunity for asynchronous engagement" (Brabham, 2012: 3). Doan et al. (2011) add that the web "(...) can help recruit a larger number of users, enable a high degree of automation, and provide a large set of social software (for example, email, wiki, discussion group blogging, and tagging) that CS systems can use to manage their users" (2011: 88). However, the term web-based does not necessarily entail that the platform must be located on the public Web. An agent may prefer an internal web-based solution for crowdsourcing within the organisation due to security concerns. The agent can also use the platform of a third party (intermediary), which is an optional element of my crowdsourcing definition. Doan et al. (2011) suggest that the system owner defines the problem. But if an agent uses the system of an intermediary to solve a problem (e.g. InnoCentive), the problem is defined by the agent and not by the system owner. However, by using an intermediary, the agent is often limited to the pre-defined problem-solving processes of the intermediary's system. My definition also captures the open call prerequisite defined by Howe (2006b). Note that self-selection means the same as open call: the crowd responds to an open call by an agent, therefore the individuals of the crowd are self-selected. Crowdsourcing within an organisation must also follow the open call principle; the organisation cannot "build its own crowd" (Schenk and Guittard, 2010: 3). Although a firm can control the context (e.g. only employees participate),4 it cannot determine exactly who will respond to the open call within this context (i.e. it is not possible to identify ex ante who is part of the crowd). Thus, pre-selection constrains but does not violate the open call principle (c.f. Geiger et al., 2011). Finally, this definition tries to capture whether the crowd participates in
4 A firm could ask employees as well as selected partners to participate. The selection of partners would violate the open call principle.

a crowdsourcing effort explicitly or implicitly (Doan et al., 2011). The crowd can participate explicitly on platforms such as InnoCentive or Threadless, while an example of implicit crowdsourcing is ReCAPTCHA. ReCAPTCHA piggybacks on other websites and is a tool that prevents bots from signing in to a website. The individual user solves a ReCAPTCHA by typing the letters that he/she sees in a picture. By correctly recognizing the letters in the picture, the user is allowed to sign in. The act of typing the letters into the form creates a useful side effect: the crowd helps to digitalise books by recognising words that could not be read by software (c.f. Google, 2012). I shall discuss the explicit/implicit dichotomy in more detail later.
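The implicit mechanism behind ReCAPTCHA can be pictured with a small sketch. This is my own simplified illustration of the general idea (an unknown word is accepted once enough independent users type the same transcription), not Google's actual implementation; the function name and the agreement threshold are assumptions.

```python
# Simplified sketch (my own illustration of the general idea, not Google's
# implementation) of implicit crowdsourcing in ReCAPTCHA: a word the OCR
# software could not read is accepted once enough independent users type the
# same transcription.
from collections import Counter
from typing import Optional

def accepted_transcription(answers: list[str], min_agreement: int = 3) -> Optional[str]:
    """Return the majority transcription once it has enough independent votes."""
    if not answers:
        return None
    text, votes = Counter(a.strip().lower() for a in answers).most_common(1)[0]
    return text if votes >= min_agreement else None

if __name__ == "__main__":
    # Three users typed "harbour", one mistyped it; "harbour" is accepted.
    print(accepted_transcription(["harbour", "harbour", "hambour", "harbour"]))
```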

2.1.1 The Agent


The starting point in crowdsourcing is the agent.5 The agent defines the nature of the problem that the crowd should solve. The agent can be either an organisation or an individual. I consider the agent (organisation or individual) a client when the agent pays to use the services of an intermediary.

Organisation: An organisation may use crowdsourcing either within or outside itself to solve a problem. Crowdsourcing may be the key element of an organisation's business model. For instance, Threadless harnesses the creative input of the crowd to minimize the cost of designing T-shirts (Howe, 2008; Brabham, 2008). Others use crowdsourcing as an instrument to get ideas from outside of the organisation for future innovation (e.g. the OSRAM LED Design Contest). Deloitte stages an innovation contest within the organisation to identify future opportunities (Terwisch and Ullrich, 2009).

5 From the perspective of agency theory (Jensen & Meckling, 1976) it would have been natural to use the term principal. As I do not apply agency theory in this thesis, I use the term agent to describe an actor who can be either an organisation or an individual.

Individual: An individual may start a crowdsourcing effort him/herself and in most cases uses an intermediary platform. By using an intermediary, the individual does not necessarily become a client. For instance, ChallengePost allows individuals to challenge the crowd; for basic contests the individual does not pay any fees. If the individual has to pay, he or she becomes a client. Another way for an individual to be the agent in crowdsourcing is crowdsourcing within crowdsourcing. Sakamoto et al. (2011) introduced the concept of recursive crowdsourcing. In recursive crowdsourcing, the individuals of the crowd assign a sub-task to a sub-crowd. Normally crowdsourcing has two levels of hierarchy: the agent and the crowd. Recursive crowdsourcing adds another level. Sakamoto et al. (2011) refer to Scratch, where young participants are "using open-ended discussion forums to initiate their own contests, ask their own peers to enter, and provide their own prizes" (2011: 351). Sakamoto et al. (2011) believe that recursive crowdsourcing will improve the capability of the crowd to solve open-ended problems. To make this model work, they suggest that a change of incentives is needed to balance the risk between the individual of the crowd and his/her sub-crowd.

Client: If an organisation or an individual makes a monetary transaction to use the services of an intermediary, the individual/organisation becomes a client. As an intermediary, InnoCentive connects agents (seekers) who look for the solution to a problem with the crowd (solvers) who attempt to solve it. By protecting the interests of both actors, InnoCentive solves the information paradox problem. Morgan and Wang (2010) briefly explain the information paradox put forward by Kenneth Arrow: "(...) [T]he questioner does not know the true value of the idea ex ante unless the answerer reveals the idea. However, once the idea is revealed, the questioner could behave opportunistically and pay little, if any at all, to the answerer" (2010: 82).

To post a challenge at InnoCentive, the client has to pay a fee. The client receives access to a vast network of potential solvers, who only see a summary of the problem when browsing challenges. InnoCentive grants a solver full access to a challenge if the solver signs an agreement that includes confidentiality clauses; the intellectual property (IP) for accepted solutions must be transferred to the client (depending on the challenge type). After the contest ends, all entries are blind-reviewed by the client. The client chooses the winning solution (or not), and InnoCentive transfers the money to the solver once the intellectual property rights have been successfully transferred to the client. The client is not allowed to use information from submissions that are not accepted. InnoCentive enforces this by having the right to audit the laboratories of the client (c.f. Lakhani et al., 2007; Jeppesen and Lakhani, 2010). Design contest platforms, such as 99designs, are another example of intermediaries: a client can harness the creativity of the crowd by staging a design contest. In design contests, the client awards the winner a bounty and additionally pays a fee to the intermediary.
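The way the intermediary mitigates the information paradox described above can be pictured as a gated sequence: the full problem and the solution are only exchanged under an agreement, and the prize is released only once the IP transfer is confirmed. The sketch below is my own schematic illustration of that sequence; the class, field and function names are assumptions, not InnoCentive's actual system.

```python
# Schematic sketch (my own, not InnoCentive's system) of how an intermediary
# can mitigate the information paradox: details are revealed only under an
# agreement, and payment is released only after the IP has been transferred.
from dataclasses import dataclass

@dataclass
class Challenge:
    summary: str      # publicly visible to browsing solvers
    details: str      # revealed only after the agreement is signed
    prize_usd: int

def reveal(challenge: Challenge, agreement_signed: bool) -> str:
    """A solver sees the full brief only after signing the confidentiality agreement."""
    return challenge.details if agreement_signed else challenge.summary

def release_prize(challenge: Challenge, solution_accepted: bool, ip_transferred: bool) -> int:
    """The intermediary pays the solver only if the client accepted the solution
    and the intellectual property has been transferred to the client."""
    return challenge.prize_usd if (solution_accepted and ip_transferred) else 0

if __name__ == "__main__":
    c = Challenge("Reduce the viscosity of a compound", "Full technical brief ...", 20000)
    print(reveal(c, agreement_signed=False))   # browsing solvers see only the summary
    print(release_prize(c, solution_accepted=True, ip_transferred=True))
```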

2.1.2 The Crowd


Simply put, without a crowd there is no crowdsourcing. My definition of crowdsourcing states that an ex ante unidentified, self-selected, ideally large and diverse set of individuals forms a crowd. Although the crowd is self-selected, there are several ways to create entry barriers. First, an agent may pre-select the crowd. Another entry barrier is whether members of the crowd have to participate individually, as a team, or can choose between both forms to constitute one entity of participant (Bullinger and Moeslein, 2010). However, an agent cannot control whether a member of the crowd is an individual or a team that chooses to use one account.


Due to the broad reach of the web it is fairly easy to gather a large crowd. Crowdsourcing thereby exploits the spare cycles (Howe, 2008) or cognitive surplus (Shirky, 2010) of the crowd. Howe (2008) explains that spare cycles are the "downtime not claimed by work or family obligations - that quantity is now in surplus" (2008: XIV). The idea of spare cycles is related to Clay Shirky's idea of cognitive surplus: the ability of users around the world to contribute and collaborate voluntarily on large projects. This is possible due to the untapped free time and talents of users as well as technological advancements that not only allow users to consume, but also to follow their desire to create and share (Shirky, 2010). Still, the open question is why crowds should ideally be large and diverse. I will therefore briefly review two theoretical concepts: dispersed knowledge and collective intelligence.

1) Dispersed Knowledge: Hayek's (1945) concept of dispersed knowledge, which emphasises that knowledge is asymmetrically distributed in a society, is one theoretical foundation for crowdsourcing. Crowdsourcing is a matchmaking process between abundant time and talent and those who need it to solve a problem or perform a task (Howe, 2008). In other words, crowdsourcing provides a means to harness the dispersed knowledge of the crowd and to aggregate it by using web technology. The importance of dispersed knowledge can be explained by looking at InnoCentive. InnoCentive enables an agent to broadcast a problem to a large network of potential solvers. The logic is to tear down the walls of the organisation and to realize that there is someone outside of the organisation who has the knowledge to find a solution that the research & development (R&D) department within the organisation cannot find (c.f. Tapscott and Williams, 2006). Knowledge is so widely dispersed that even for large multinational firms it is impossible to have an R&D department


with enough resources to find a solution for any problem.6 Lakhani et al. (2007) found in their study of 166 problems broadcast on InnoCentive that the crowd solved 30% of the problems that could not be solved within the traditional boundaries of organisations. A broadcast search occurs if a firm initiates a "problem-solving process by disclosing the details of the problem at hand and inviting the participation of anyone who deems themselves qualified to solve the problems" (Jeppesen and Lakhani, 2010: 1016). Broadcast search is an alternative to local search, which is constrained by the resources, the heuristics and the perspectives of the organisation's employees. A broadcast search helps to find unexpected and useful ignorant individuals who are able to solve the problem. Each problem needs a diverse solver base. Specialists who are distant from the field of the problem are more likely to be successful (Lakhani et al., 2007). Lakhani et al. (2007) put forward the concept of marginality and argue that marginality leads to a successful transfer of knowledge between fields. Jeppesen and Lakhani (2010) view marginality as an asset, because such solvers are not burdened with prior assumptions of the field. People use prior knowledge and experience to solve problems. "Once a perspective on a problem is set, the problem solver then employs heuristics to actually find the solution" (Jeppesen and Lakhani, 2010: 1090). But problems are not fixed and cannot be exclusively defined by the field. Marginal solvers bring in new perspectives and heuristics in their attempt to solve a problem. Jeppesen and Lakhani (2010) identified two types of marginality: technical and social. Technical marginality refers to solvers coming from a different field than the problem, while social marginality refers to individuals who are distant from their own professional community (e.g. women who are not included in the scientific community).

6 Additionally, a large R&D department may not result in numerous innovations. Terwisch and Ullrich (2009) demonstrate that there is no correlation between R&D spending and the growth of the firm.

Villarroel and Reis (2010) add rank and site marginality in their article about intra-corporate crowdsourcing (ICC). Their study of an intra-corporate prediction market (a stock market of innovation) finds that the lower the position of an employee (rank marginality) and the greater the distance of an employee from the headquarters (site marginality), the more beneficial it is for the innovation performance of ICC (Villarroel and Reis, 2010). The work of Villarroel and Reis (2010) shows that the concept of dispersed knowledge also applies within the organisation. Employees typically have more skills than they are hired for (c.f. Baker and Nelson, 2005) or they might have knowledge that is not used by the organisation. For example, salespeople have tacit knowledge about their customers that a top-tier manager does not have (Howe, 2008). Crowdsourcing instruments such as a prediction market ease the transfer of sticky knowledge (Von Hippel, 1994) into aggregated data within the organisation (Howe, 2008; Villarroel and Reis, 2010).

2) Collective Intelligence: Howe (2008) draws upon the Diversity Trumps Ability Theorem, which postulates that a randomly selected collection of problem solvers outperforms a collection of the best individual problem solvers (Howe, 2008: 131). Malone et al. (2010) define collective intelligence (CI) broadly as "groups of individuals doing things collectively that seem intelligent" (2010: 2). Collective intelligence can mitigate biases in the decision-making process, but the effect of diversity is limited (Bonabeau, 2009). For example, a group of nuclear engineers would beat a random crowd in designing a new element for a nuclear plant. In this case, the concentrated knowledge of the nuclear engineers is better suited to solve the problem. Collective intelligence is a key element of crowdsourcing. While CI can happen in the real world, the web makes it easier to facilitate. Primary forms of CI in crowdsourcing are prediction markets, broadcasting a problem

and idea jams (online brainstorming) (Howe, 2008). However a higher level of interaction between individual members of the crowd can cause group thinking (Bonabeau, 2009; Howe, 2008). Quinn and Bederson (2011) state that CI only applies "when the process depends on a group of participants" (2011:4). If a single user does a translation on demand the work "would not be considered collective intelligence because there is no group, and thus no group behavior at work" (Quinn and Bederson, 2011: 4). Contrasting Howe (2008) with Quinn and Bederson (2011) leads to the following question which I will briefly discuss using the example of InnoCentive. At InnoCentive each individual submits his or her solutions to a contest independently. The question then is if the crowd posts solutions independently and the agent chooses one solution, should this be considered collective intelligence? In the spirit of Quinn and Bederson (2011) the argument is that the numerous individual inputs are independent from each other. The agent selects one input and the rest is discarded. Therefore there is no group behaviour and inputs are not used in a collective manner. Steiner (1972) offers a different perspective on this question. Consider the example of pulling a rope. Members of the group are asked to pull as hard as possible and the rule is that only one person at a time is allowed to pull the rope. In this case, the group performance depends on the groups ability to select the strongest member. Steiner (1972) calls this a disjunctive task because it requires an "either-or" decision; the group can accept only one of the available individual contributions as its own, and must reject all others. One member receives total "weight" in determining the group product and others are accorded no weight at all. (1972: 17) While the tasks at InnoCentive could be considered disjunctive

The concepts of dispersed knowledge and collective intelligence provide an answer as to why the crowd in crowdsourcing should be large and diverse. But the term diversity has another meaning from the perspective of social structure and activity levels within a crowd. Social structure and activity levels explain that within the crowd, each individual has different roles and a varying degree of participation. Dispersed knowledge leads to dispersed interests, which results in a division of labour. For instance, some members of the crowd at Threadless focus on submitting designs, while others just rate those designs. Crowston and Howison (2005) analyse the social structure of free/libre and open software (FLOSS) projects and propose that teams in FLOSS have an onion-like structure. Figure 1 below, an extract taken from Crowston and Howison (2005), shows the onion-structure of development teams; it implies a hierarchical relationship between the actors:


Figure 1: Onion-structure in FLOSS (Crowston and Howison, 2005)

Crowston and Howison (2005) refer to Mockus et al. (2002), who report that 15 developers created 80% of the code in an Apache project. In successful open source projects it is the vital few who create the majority of the code, while the useful many help fix the bugs (Mockus et al., 2002). The crowdsourcing literature also suggests that individual users can be divided into different user types based on their propensity to act and interact. Crowdsourcing platforms, such as Threadless, often use communities. Von Hippel (2005) refers to Wellman et al. (2002), who define communities as "networks of interpersonal ties that provide sociability, support, information, a sense of belonging, and social identity" (2002: 4). Moffitt and Dover (2011) distinguish between four different types of users in communities: lurkers, contributors, creators and evangelists. Table 1, an extract taken from Moffitt and Dover (2011), shows the different types of users along with their accompanying activity level and an estimate of the percentage of each user type within the population of a platform:


Table 1: User Types, Community Impact, Traits and Approximate Percentage of Community (Moffitt and Dover, 2011)

Moffitt and Dover (2011) point in the same direction as the FLOSS literature. The trivial many, or to put it differently, the useful many, contribute by voting or commenting or, in the case of lurkers, simply consume the content. This suggests that there is a link to the 80:20 rule. At its heart, the basic idea behind the 80:20 rule is the concept of the vital few and the trivial many. Juran (1954, 1975) popularized the 80:20 rule in quality management and believes that it is a universal power law of distribution. Shirky (2003) stated that 20% of blogs attract 80% of the readers, and eBay found that the shopping behaviour of the vital few of its users accounts for more than half of all transactions (Magretta, 2002). The data from Moffitt and Dover (2011) also suggest that 18% of the crowd (i.e. the vital few) create most of the content. However, it would be premature to conclude that all crowdsourcing platforms follow the 80:20 rule. Howe (2008) refers to the 1:10:89 rule, which states that out of 100 users, one will create something, 10

will vote and 89 will consume. Arthur (2006) writes that 70% of Wikipedia articles are written by 1.8% of the users. Nevertheless, the literature seems to agree on the following point: there is a definite division of labour within the crowd. The vital few are responsible for the majority of core activities (e.g. idea creation) of a crowdsourcing platform. The useful many help with the selection of ideas (e.g. Threadless), help to spot bugs (e.g. FLOSS) or are simply an instrument to attract active users. The basic social structure is a key element for designing a crowdsourcing platform. Doan et al. (2011) distinguish between guests, regulars, editors, administrators and dictators and suggest that one needs to think about low-profile tasks for low-ranking users such as guests (e.g. flagging inappropriate content), and high-profile tasks for high-ranking users (e.g. becoming an admin at Wikipedia). Howe (2008) also argues that a crowdsourcing platform needs to consider the different time resources of each user. Some users want to do tasks that do not last longer than ten minutes; others, who are very enthusiastic, are willing to spend more time completing a task.7
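To make the 80:20 and 1:10:89 figures above tangible, the following sketch is entirely my own illustration under an assumed Zipf-like activity distribution (not data from Juran, Shirky, Howe or the other cited sources); the function name and parameters are invented for the example.

```python
# Illustrative sketch (my own assumption of a Zipf-like activity distribution,
# not data from the cited sources): how a small "vital few" can account for
# most contributions on a platform.
def top_share(n_users: int = 1000, top_fraction: float = 0.2, exponent: float = 1.0) -> float:
    """Share of all contributions produced by the most active users, assuming the
    k-th most active user contributes proportionally to 1 / k**exponent."""
    weights = [1.0 / (k ** exponent) for k in range(1, n_users + 1)]
    top_n = int(n_users * top_fraction)
    return sum(weights[:top_n]) / sum(weights)

if __name__ == "__main__":
    print(f"Top 20% of users produce {top_share():.0%} of contributions")
```

With these assumed parameters the most active 20% of users account for roughly 78% of all contributions, which is in the region of the 80:20 pattern described above; real platforms will of course deviate from this toy distribution.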

2.2 Related Concepts


Schenk and Guittard (2010) acknowledge that crowdsourcing is an emerging phenomenon without clear borders and is often confused with related concepts. Indeed, crowdsourcing is connected with concepts such as open source development (Raymond, 1999), open innovation (Chesbrough, 2003) and (virtual) co-creation (Prahalad & Ramaswamy, 2004).

7 Note that in some forms of implicit crowdsourcing, such as ReCAPTCHA, it is not possible to identify different user types.


2.2.1 Open Source Development

Raymond (1999) distinguishes between cathedral and bazaar approaches to developing software. The cathedral approach holds that programs of a certain complexity must be developed within a firm in a centralized manner. The design of the architecture, the features of the program and the elimination of bugs should occur prior to the release. In contrast, the bazaar approach follows Linus Torvalds' style of development: "release early and often, delegate everything you can, be open to the point of promiscuity" (Raymond, 1999: 26), a community that takes submissions from anyone. This approach looks chaotic on the surface, but through self-regulation a stable system emerges. A brief summary of Raymond's aphorisms for efficacious open source development:

- Rewrite and reuse
- Plan to throw away
- See users as co-developers who enable rapid improvement
- The more co-developers you have, the better the capability to spot and fix bugs. Raymond calls this Linus's Law: "Given enough eyeballs, all bugs are shallow" (Raymond, 1999: 31)
- Recognize good ideas from users
- Realize your concept was wrong
- Perfection means that there is nothing more to take away
- Open source development needs social skills and leadership that acts on the basis of common understanding

Open source is the blueprint for crowdsourcing. Unix was developed in a decentralized fashion because the developers were able to break down the labour into small pieces (Howe, 2008). The division of labour is an essential element to tame the complexity of developing software

(Raymond, 1999). Crowdsourcing also uses the idea of dividing a task into numerous subtasks. An extreme form is MechanicalTurk. On this platform, tasks are divided into small and independent pieces, so-called HITs (Human Intelligence Tasks), e.g. categorizing websites. Each HIT provides an instruction to the user. For successful completion, the user is rewarded with a micro-payment (Rogstadius et al., 2011). Open source development reduces the costs of creating software. Raymond (1999) argues it is often "cheaper and more effective to recruit self-selected volunteers from the Internet than it is to manage buildings full of people who would rather be doing something else" (1999: 25). The self-organized volunteers of Linux outperform the industry (Raymond, 1999; Howe, 2008). Extrinsic rewards play a minor role in open source development.8 Developers are rewarded with solving interesting technical problems (Raymond, 1999), gain experience and are credited by others within the community (Brabham, 2008). Similarly, Threadless relies on community-driven, motivated users (Brabham, 2010) and uses the eyeballs of many as a filter for good designs. Community is a key element of open source development. In crowdsourcing, a community is not a necessary prerequisite (e.g. MechanicalTurk), but it may be used (e.g. Threadless). Despite the fact that crowdsourcing is seen by many as an instrument to reduce costs, many scholars argue one should exercise caution when using crowdsourcing as a simple cost-reducing strategy. A crowd that feels exploited might not participate in the crowdsourcing effort, or even worse, may turn against the crowdsourcer (Howe, 2008; Rouse, 2010). Franke and Klausberger (2010) explored the motivations of the crowd to participate in crowdsourcing and showed that the size of the organisation (start-up vs. MNC) has an impact on the expected remuneration. Rouse (2010) draws upon trade press blogs about crowdsourcing and argues that open source is about developing for a common good and
8 Except for developers who are paid by a firm to participate in open source development, as IBM does in the case of Linux.


has many contributors (the crowd) and many possible beneficiaries. In contrast to open source development, crowdsourcing has many contributors (the crowd) but mostly only a few beneficiaries (the agent, the intermediary and a few users) (Rouse, 2010). Most crowdsourcing applications demand that the IP is transferred from the crowd to the organisation (Schenk and Guittard, 2010).

Conclusion

Open source and crowdsourcing have different approaches towards intellectual property (IP). But if IP were the key distinguishing aspect, one would need to distinguish between private-benefit crowdsourcing (e.g. Threadless), commons crowdsourcing (e.g. Wikipedia) and citizensourcing9 (c.f. Taewoo, 2011). Although IP has an impact on the motivation of the crowd (c.f. Franke and Klausberger, 2010), it does not fundamentally change the mechanisms of crowdsourcing. Open source development is the blueprint for crowdsourcing; at the same time, open source development can be considered as crowdsourcing. Open source uses the distributed problem-solving capabilities of developers who respond to the open call of an agent. For instance, Linus Torvalds made an open call to the crowd for Linux. As Malone et al. (2009, 2010) demonstrate, the crowd collaboratively creates new software modules, but Torvalds and his lieutenants make the final decision about which modules are used for the next release.

2.2.2

Open Innovation

Chesbrough (2003) coined the term Open Innovation (OI), which is a paradigm that assumes "that firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as they look to advance their technology" (Chesbrough, 2006: 1).

9 Citizensourcing is the application of crowdsourcing in the public sector.


Open Innovation acknowledges the importance of transferring external knowledge into the organisation. Furthermore, Open Innovation introduces a new perspective on managing IP. Many firms use only a small amount of their available IP. This IP might not be useful for the current business model of firm A, but it might be for firm B. Conversely, firm C might have IP that could be useful for firm A. Chesbrough (2006) proposes that IP can flow inbound from outside of the organisation (license in, spin in, acquire) or outbound in the form of licensing, spin-outs or divesting. Figure 2, an extract taken from Chesbrough (2006), visualizes the Open Innovation paradigm:

Figure 2: Open Innovation Paradigm (Chesbrough, 2006)

Summing up, Chesbrough (2003, 2006) created a new paradigm that is derived from the following elements:

- The equal importance of external and internal knowledge
- Harvesting the abundant knowledge landscape inside and outside of the firm
- New business models for exploiting R&D
- Evaluation of projects to identify possible opportunities for false negative projects (e.g. licensing)
- Proactive IP management


The OI paradigm is based on qualitative evidence from high-technology industries; consequently, one needs to exercise caution when applying it to other industries.

Conclusion

In contrast to crowdsourcing, open innovation provides a strategic innovation framework for firms (Wise et al., 2010). Schenk and Guittard (2011) underline the fact that OI shares with crowdsourcing the assumption that an organisation should tap into the distributed knowledge outside of the firm. Whereas OI focuses on knowledge flows between firms, crowdsourcing links a firm with a crowd. Furthermore, OI focuses on innovation, which is not a necessary requirement of crowdsourcing. But from the perspective of OI, crowdsourcing would be another instrument to create an inbound knowledge flow to the organisation (Schenk and Guittard, 2011). Figure 3 visualises the relationship between OI and crowdsourcing:

Figure 3: Connection between Crowdsourcing and Open Innovation

Phillips (2010) suggests that IBM uses the IBM Innovation Jam as an open innovation instrument to brainstorm ideas within the organisation.

From this perspective, OI can also be conceived as a means to overcome traditional approaches to in-house innovation by enlarging the knowledge base through harvesting the abundant knowledge landscape within the organisation. In other words, the organisation in this perspective acknowledges that employees have more knowledge than they were hired for, or have knowledge due to their position which is not yet used. This opens up the innovation process to a broader set of people.

2.2.3

Co-creation

Prahalad & Ramaswamy (2004) argue that traditional perspectives on value creation have a firm- and product-centric perspective. A market in this perspective is an aggregation of customers. This perspective suggests that customers solely interact with the firm by buying its products or services or not. The customer can be targeted (e.g. in specific segments) but has no role in value creation. The firm decides what value or experiences a customer can consume. The shift to a co-creation of value is required due to the empowered, networked and informed customer (Prahalad & Ramaswamy, 2004). "(...)[C]ompanies must escape the firm-centric view of the past and seek to co-create value with customers through obsessive focus on personalized interactions between the consumer and the company" (Prahalad and Ramaswamy, 2004: 7). Co-creation places an emphasis on the joint creation of value by firms and their customers. The two principles of co-creation are dialogue (the market is a set of conversations) and transparency (fewer information asymmetries between firm and customers; customers extract value from the firm). Examples of successful co-creation are some video games (co-creation of content by players), eBay, Expedia or Amazon (Prahalad & Ramaswamy, 2004). Zwass (2011) defines co-creation as the participation of consumers along with the producers in the creation of value in the market place

(Zwass, 2011: 13); it encompasses all activities of independent consumers as well as those initiated by the firm in the production domain. Füller (2010) uses the term virtual co-creation and underlines the idea that the unique feature of virtual co-creation is not only that consumers are asked about their preferences, but that they also contribute their creativity and problem-solving skills. Co-creation can satisfy the needs of individual customers in a cost-effective manner. Most co-created goods have the characteristics of being digital and non-rival and may be perceived as more valuable through network effects (more users attract more users) (Zwass, 2011).

Zwass (2011) distinguishes between sponsored co-creation and autonomous co-creation. The agent initiates sponsored co-creation and makes an open call to individuals to participate in the value creation process (e.g. the contest). Autonomous co-creation signifies that individuals or consumer communities "produce marketable value in voluntary activities conducted independently of any established organisation, although they may be using platforms provided by such organisations, which benefit economically" (Zwass, 2011: 11). Examples of autonomous co-creation are open source software and Wikipedia.

Conclusion

Co-creation is a transdisciplinary field. Co-creation happens within the intellectual space of virtual communities, the commons-like open source, collective intelligence and open innovation (Zwass, 2010). It is difficult to disentangle co-creation from crowdsourcing. Both draw from almost the same intellectual space and at first sight the two terms are often used interchangeably. Sponsored co-creation is analogous to crowdsourcing, but some cases of autonomous co-creation may also be considered as crowdsourcing. The key difference is that virtual co-creation emphasises a value perspective and is a marketing paradigm, whereas crowdsourcing has a task/distributed problem-solving perspective. Similar to OI, co-creation sees crowdsourcing as an instrument to engage customers in the value creation process.


2.3 Crowdsourcing Classifications


Several scholars have proposed classifications/typologies/taxonomies of crowdsourcing to categorize the numerous crowdsourcing applications. What all three terms have in common is that the authors group the alike and separate it from the unalike. Obviously, there is a huge difference between the micro-jobs of MechanicalTurk and a community-driven contest. Creating categories opens avenues to examine the effects of similar applications. Additionally, the classifications show what kind of platforms should be considered as crowdsourcing applications. The focus lies on the review of the existing literature on different taxonomies, and I will highlight the most comprehensive one for the purpose of this thesis. To ease the flow of reading, I shall use the term classification synonymously with the terms typology and taxonomy. In my review, I identified three broad categories of classifications:

- Functional perspective
- Task structure perspective
- Process perspective

2.3.1 Functional Perspective

2.3.1.1 Howe (2008)

Howe (2008) classifies crowdsourcing initiatives into four categories: Collective Intelligence/Crowd Wisdom, Crowd Creation, Crowd Judging and Crowdfunding. Collective Intelligence/Crowd Wisdom includes prediction markets, idea jams and crowd casting. Crowd casting is another word for broadcast search and encompasses platforms, such as InnoCentive, which broadcast a problem to the crowd.

Crowd Creation platforms rely on user-generated content to produce TV (e.g. Current TV) or advertising. An example of the latter is Doritos' Crash the Super Bowl contest, in which the best advertisement was aired during a commercial break of the Super Bowl. Crowd Judging encompasses platforms that use the judgement of the crowd or ask the crowd to organize information. For instance, Digg ranks articles based on popularity.

Crowdfunding platforms connect a borrower with a crowd of potential backers. Instead of asking a bank or another wealthy entity to finance a project, crowdfunding uses a web platform to gather numerous contributions from individuals. One contribution alone may be small, but due to the large number of people in the crowd, the borrower may be able to raise a considerable sum. The crowd receives, in most cases, voting rights for the future development of the project or a final product. The higher the donation amount, the more valuable the reward. Examples of crowdfunding are platforms such as Kickstarter or Kiva, or Barack Obama's first presidential campaign. Among all scholars, Howe (2008) is the only one who classifies crowdfunding as a subcategory of crowdsourcing. Lambert and Schwienbacher (2010) define crowdfunding as follows: "Crowdfunding involves an open call, essentially through the Internet, for the provision of financial resources either in the form of donation or in exchange for some form of reward and/or voting rights in order to support initiatives for specific purposes." (2010: 6)

Only in crowdfunding is there a monetary transaction from the crowd to the agent. In other applications, such as MechanicalTurk, InnoCentive or Threadless, the crowd (or one member of the crowd) receives money from the agent for successfully solving a problem. Instead of creating a digital artifact, the primary task of the crowd is to donate money. The question is whether donating money, as a response to an open call, constitutes the act of solving a problem defined by the agent. Because the definition of crowdsourcing is rather broad, one

could argue that crowdfunding is a subcategory of crowdsourcing. But this discussion requires future research and is not the focus of this thesis.10

2.3.1.2 Brabham (2012)

Brabham (2012) distinguishes between four types of crowdsourcing: the Knowledge Discovery and Management Approach, the Broadcast Search Approach, the Peer-vetted Creative Production Approach and Distributed Human Intelligence Tasking.

The Knowledge Discovery and Management Approach helps to find existing knowledge or discover new knowledge. The agent defines which information the crowd should search for or organize. For instance, SeeClickFix asks citizens to report infrastructure problems (e.g. potholes) in a city. The Broadcast Search Approach is oriented towards "finding the single specialist with time on his or her hands, probably outside the direct field of expertise of the problem, who is capable of adapting previous work to produce a solution" (Brabham, 2012: 8). Broadcast search problems are difficult but well-defined challenges. The Peer-vetted Creative Production Approach uses the creative abilities of the crowd to produce digital artifacts. The crowd produces a flood of ideas, but only a few ideas are useful for the agent. The crowd assumes the role of gatekeeper and selects the best ideas. Distributed Human Intelligence Tasking is an appropriate approach for crowdsourcing "when a corpus of data is known and the problem is not to produce designs, find information, or develop solutions. Rather, it is appropriate when the problem itself involves processing data" (Brabham, 2012: 11). The large task is decomposed into small tasks. Users get specific instructions for the tasks and receive a micro-payment per piece.

10 It seems that several scholars attempt to establish crowdfunding as a unique research stream. Kappel (2009) discusses the legal issues of crowdfunding; other scholars, such as Belleflamme et al. (2010), discuss under which conditions crowdfunding is preferred and give examples of how it is used to finance ventures.


Table 2, an extract taken from Brabham (2012), provides an overview of all four types:

Table 2: Crowdsourcing Typology (Brabham, 2012)

The Broadcast Search and the Peer-vetted Creative Production approaches are both based on the contest logic. The description indicates that the role of the crowd in Peer-vetted Creative Production, unlike in Broadcast Search, is to assess and select the contributions of their peers. The categorisation of Brabham (2012) reflects the premise that contests apply different mechanisms and thus serve different purposes. Wikipedia would belong in the category of the Knowledge Discovery and Management Approach. But Brabham (2012) considers Wikipedia a commons-based peer production (Benkler, 2002; Benkler, 2006); in contrast to Wikipedia, a platform such as Peer-to-Patent determines in a top-down and managed process which type of information is sought.

2.3.1.3 Papsdorf (2009)

Papsdorf (2009) distinguishes between Open Idea Competitions11 (offener Ideenwettbewerb), Virtual Microjobs (ergebnisorientierte virtuelle Microjobs), User-designed Mass Production (userdesignbasierte Massenfertigung), Collaborative Platforms for Ideas (Userkollaboration basierende Ideenplattform) and Implicit Exploitation of User-content (indirekte Vernutzung von Usercontent).

Open Idea Competitions cover initiatives such as Dell's Ideastorm or MyStarbucksIdea. Some allow assessment and selection through the crowd, some do not. Agents use Open Idea Competitions to get new product ideas or to seek feedback on current products. Additionally, the Open Idea Competition is used for marketing purposes. Virtual Microjobs describe the practice where agents call for users to perform a well-defined task. The user gets paid for successful completion of the task. Papsdorf (2009) includes in this category initiatives such as InnoCentive, Wilogo (the crowd creates logos) or MechanicalTurk. User-designed Mass Production platforms enable the crowd to use online editors or their own graphics programs to produce designs. The crowd receives a success-based commission for their services. For example, a user receives a commission per sold item that uses his/her design. Examples are the LEGO Factory and Spreadshirt, which combine mass production with mass customization. Collaborative Platforms for Ideas are mostly intermediary platforms that sell crowdsourcing services to agents. The purposes of these platforms are manifold; tasks on Crowdspirit range from new ideas to business models.12 Some platforms award the best ideas with money; some, such as Incuby, offer access to useful networks (e.g. investors).13 Other
11 Translated from German.
12 The business model of Crowdspirit was apparently not sustainable. The website is offline (Last Visit: February 17, 2012).
13 Incuby is also offline and the website is for sale (Last Visit: February 17, 2012).


platforms share the success of an idea with users by paying a premium depending on the value of a comment for the realisation of the idea, or they award points for useful contributions because payment would conflict with the community spirit.

Implicit Exploitation of User-content means that the contributions of the crowd are a means to an end. Agents use the contributions of the crowd to increase the popularity and traffic of a platform with the intention of attracting more advertising. For instance, Suite101 asks users to publish articles that are linked with advertising. The BILD newspaper asks users to send in worthwhile pictures, while CNN developed the user-generated citizen journalism platform iReport. Figure 4, an extract taken from Papsdorf (2009), shows all five categories ordered according to their function and application in their corresponding industries:


Figure 4: Crowdsourcing Applications and Corresponding Industries (Papsdorf, 2009)


The category Implicit Exploitation of User-content would also apply to Facebook and YouTube. On the latter platform, other individuals consume the content created by the crowd. The more viewers YouTube has, the easier it is to attract advertisements. The crowd explicitly creates items on the website, shares videos with friends and actively rates and comments on the videos of others. The crowd implicitly engages in determining the popularity of videos by viewing them. Facebook implicitly exploits the data of its users to attract advertisers. Haque (2010) posits that the users are the product and not the customers of Facebook.

Papsdorf (2009) mentions that his classification is not representative and that his key interest is the structure of crowdsourcing. Yet, his classification does not consider the underlying structure and mechanisms of crowdsourcing. Papsdorf (2009) is the only author who puts InnoCentive and MechanicalTurk into the same category. The tasks of InnoCentive are well defined, but they are at the same time rather sophisticated ones that require a high degree of formal education; users are compensated with a large reward. In contrast, on MechanicalTurk, tasks are well defined and each user receives a payment based on the successful completion of each micro-task.

2.3.1.4 Gassmann et al. (2010)

Gassmann et al. (2010) combine the functional view with an actor's perspective and identify five major crowdsourcing categories: Intermediaries (Intermediäre), Common-based Production (Gemeinsam eine freie Lösung), Company Platforms (Unternehmenseigene Plattformen), Idea Marketplaces (Marktplatz für eigene Ideen) and Citizen Sourcing (Öffentliche Initiativen). Intermediaries connect agents with the crowd. Gassmann et al. (2010) created four subcategories of intermediaries: R&D Platforms (e.g. InnoCentive), Marketing & Design (e.g. 99designs), Freelancer (e.g. HumanGrid) and Idea Platforms (e.g. Atizo).

Common-based Production platforms do not ask the crowd to explicitly solve a problem, and there is no monetary remuneration. Some of the solutions within this category are accessible to the public and use a copyleft14 license. Two subcategories of Common-based Production are a) websites such as OpenStreetMap or Wikipedia that make knowledge accessible to the broader public and b) open source software such as Linux or Firefox.

The category Company Platforms encompasses all platforms that are built by the agents themselves. Apart from the possibility to harness ideas from the crowd, Company Platforms have the goal of positively influencing the brand. Agents attempt to be perceived by their customers as innovative and open to suggestions. The two subcategories are platforms that focus on Branding & Design (e.g. LED Design Contest) or New Product Ideas (Produktideen und Problemlösungen, e.g. MyStarbucksIdea).

Idea Marketplaces enable creative users to sell their designed products. For instance, users on Spreadshirt can open their own T-shirt store. Citizen Sourcing serves the public good, but in contrast to Common-based Production, there is an agent who makes an open call to the crowd. Examples are Ideascampaign, an initiative of the Irish government that asked its constituents for ideas on how to boost the economy, or the X-Prize, a challenge for radical innovations (e.g. space travel for everyone). Figure 5 shows the five categories of crowdsourcing initiatives:

14 Heffan (1997): "The GNU Project is a worldwide collaborative effort to develop high quality software and make it available to the general public. To ensure unrestricted public access, the GNU Project licenses its software under the GNU General Public License ("GPL"), which prevents users from establishing proprietary rights in either the works themselves or subsequent versions thereof. Richard Stallman, the founder of the GNU Project, refers to this type of agreement as copyleft." (1997: 1487)


Figure 5: Categorization of Crowdsourcing Initiatives (Gassmann et al., 2010)

The first layer in the inner circle takes an actor's perspective. If one asks who owns the platform, one gets the answers intermediaries, the companies themselves or the public sector. The two other items do not fit into this layer. One needs to ask the question "what?" in order to identify the two other items of the inner circle: common-based peer production and marketplaces. In the outer circle, the platforms are clustered according to the purpose they serve. The subcategories for intermediaries are R&D Platforms, Marketing and Design, Freelancer and Idea Platforms. But Idea Platforms could also exist within the other categories. It could also be the case that a government uses an intermediary instead of building its own platform. Summing up, it is useful to employ an actor's perspective to distinguish between intermediaries and company-built platforms.

Yet, the drawback is that this classification uses different types of categories (actor and functional perspective) on the same level.

2.3.1.5 Doan et al. (2011)

The classification of Doan et al. (2011) is based on the Nature of Collaboration, System Architecture, Recruitment, Responsibilities (what do users do?) and Target Problems. The Nature of Collaboration distinguishes between explicit and implicit crowdsourcing systems. System Architecture captures whether the system is a standalone system or piggybacks on another system (e.g. reCAPTCHA). See the chapter Aggregation and Selection for a detailed discussion of why a piggyback system should be considered as crowdsourcing. Recruitment signifies whether a platform needs to recruit a crowd or not. Recruitment refers to a key problem almost every crowdsourcing (CS) platform needs to solve: "How to recruit and retain users?" (Doan et al., 2011: 88). Only if a CS system piggybacks on another system does the agent have no recruitment problem!15 The category Responsibilities explains what activities a crowd performs on the platform. Doan et al. (2011) clustered the activities for their categorization Sample of CS Systems on the Web. Note that although the type of collaboration serves as a broad categorization, all the examples are clustered along the Responsibilities.

I want to focus on the explicit systems. "A distinguishing aspect of explicit systems that evaluate, share, or network is that they do not merge user inputs, or do so automatically in relatively simple fashions. For example, evaluation systems typically do not merge textual user reviews. They often merge user inputs such as movie ratings, but do

15 Assuming that the CS system piggybacks on a platform that already has numerous users.


so automatically using some formulas. Similarly, networking systems automatically merge user inputs by adding them as nodes and edges to a social network graph. As a result, users of such systems do not need (and, in fact, often are not allowed) to edit other users' input" (Doan et al., 2011: 90). In contrast, systems that build artifacts often merge each other's inputs. Examples are software (e.g. Linux) or textual knowledge bases (e.g. Wikipedia). Task execution systems require that the crowd performs a specific task. Examples are the Goldcorp Challenge or the Obama Campaign 2008, which relied on the crowd to mobilize voters. MechanicalTurk is also considered a task execution system. The key requirement for these systems is that tasks can be broken down into small elements so that every member of the crowd can make a contribution.
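To illustrate the decomposition that task execution systems depend on, the following minimal sketch (in Python, with purely hypothetical data, batch size and payment values) splits a larger job into small, independent micro-tasks, each carrying its own instruction and micro-payment. It illustrates the principle only and is not an actual platform API.

```python
# Minimal sketch of decomposing a large task into independent micro-tasks
# (HIT-style units). All names and values are illustrative assumptions,
# not an actual MechanicalTurk interface.

def decompose(items, batch_size, instruction, payment_per_task):
    """Split a list of work items into small, independent micro-tasks."""
    tasks = []
    for i in range(0, len(items), batch_size):
        tasks.append({
            "instruction": instruction,          # what the worker should do
            "items": items[i:i + batch_size],    # a small, self-contained slice
            "payment": payment_per_task,         # micro-payment on completion
        })
    return tasks

# Example: categorizing websites, three URLs per micro-task
urls = [f"http://example.org/page{n}" for n in range(10)]
hits = decompose(urls, batch_size=3,
                 instruction="Assign each website to a category.",
                 payment_per_task=0.05)
print(len(hits), "micro-tasks created")
```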


Table 3: Sample of CS Systems on the WWW (Doan et al., 2011)

Like Papsdorf, Doan et al. (2011) consider platforms such as YouTube and Facebook as crowdsourcing: "(...) it can be argued that the target problem of many systems (that provide user services) is simply to grow a large community of users, for various reasons (such as personal satisfaction, charging subscriptions fees, selling ads, selling the systems to other companies)" (2011: 93). Surprisingly, Doan et al. (2011) do not mention any crowdsourcing platforms that use the contest logic, such as Threadless, InnoCentive or Atizo.


2.3.2 Task Perspective

2.3.2.1 Schenk and Guittard (2010)

In their classification, Schenk and Guittard (2010) link the Structure of the Task (simple, complex or creative tasks) with the Type of Aggregation (selective or integrative). Crowdsourcing enables the cost-effective completion of simple tasks on a large scale. Simple tasks imply an integrative form of crowdsourcing. Integrative crowdsourcing means that all contributions of the crowd are gathered on a platform, unless a contribution fails to meet certain quality criteria. The goal of integrative crowdsourcing is to build large databases (e.g. Wikipedia).

Solving complex tasks requires knowledge-intense efforts. A firm might make an open call to the crowd because a complex task cannot be solved within the firm. Crowdsourcing of complex tasks is a viable option if the agent hopes to benefit from the distributed knowledge of the crowd. Solving complex tasks requires a selective crowdsourcing approach. Selective crowdsourcing means that the agent selects the best solution out of the pool of inputs created by the crowd. The opportunity costs are high for the crowd because the individual cannot know in advance if his/her solution will be accepted (and if he/she will find one). The crowd is compensated for the high risk with a high reward (e.g. InnoCentive). Creative tasks, such as creating designs or programming apps, need the creative problem-solving skills of the crowd. Crowdsourcing of creative tasks can be either selective (creating a logo on 99designs) or integrative (e.g. Apple's App Store). Table 4 provides an overview of the classification:


Table 4: Characteristics of Crowdsourcing Applications (Schenk and Guittard, 2010)

The logic of this classification is that the structure of the task determines the mode of aggregation and other attributes of a crowdsourcing platform. Simple tasks require integrative crowdsourcing and complex tasks selective crowdsourcing. But Schenk and Guittard (2010) are not very precise about the characterization of the task structure. A creative task can be simple and complex at the same time: designing a logo may be a rather simple task, while the task structure for programming an application for an App Store can be rather complex.

2.3.2.2 Rouse (2010)

Rouse (2010) combines in her taxonomy the structure of the task with a motivational perspective. Her classification is based on three dimensions: Distribution of Benefits, Supplier Capabilities (reflecting the Nature of the Task), and Motivation to Participate. Distribution of Benefits is a dichotomy between individualistic crowdsourcing and community crowdsourcing. In individualistic crowdsourcing, only the agent who stages the contest and the few winning participants benefit, as opposed to community crowdsourcing, which allocates the benefits to many people. Supplier Capabilities describe the capabilities of the crowd in engaging

in different types of tasks. Simple tasks such as evaluating have a low complexity and require only a little education and training. Moderate tasks, such as the design of a T-shirt, have moderate complexity and difficulty, and the agent needs moderate effort to evaluate the solutions of the crowd. Sophisticated tasks, such as developing a business plan, are complex and require a highly skilled crowd to find a solution. The evaluation of these solutions is likewise a sophisticated task that consumes a great deal of the agent's attention. Regarding the Motivation to Participate, Rouse (2010) draws upon the work of several scholars and identifies several types of motivations: self-marketing and social status; instrumental motivations, which signify that users benefit personally from participating in a crowdsourcing effort; altruism, helping the interest of others without personal benefit; token compensation, where the user is only rewarded with a small prize; market compensation, where compensation is a source of income for users; and personal achievement, where the goal of the user is to learn something. Figure 6, an extract taken from Rouse (2010), shows her taxonomy:


Figure 6: Crowdsourcing Taxonomy (Rouse, 2010)

The two main branches of the tree diagram are the two elements of the Distribution of Benefits. Rouse (2010) argues that within those two branches, each task (simple/moderate/sophisticated) must be accompanied by the right set of incentives. Open source development distributes the benefits to the community and is a sophisticated task; thus, non-monetary incentives are sufficient. Individualistic crowdsourcing platforms such as InnoCentive require that users solve rather sophisticated tasks, and therefore the agent needs to award winning solutions with market compensation.

2.3.2.3 Corney et al. (2009)

The Nature of the Task and the Nature of the Crowd are the key elements of the classification of Corney et al. (2009).

The Nature of the Task can either be creation, evaluation or organisation. Creation means that the crowd creates something to solve the problem of the agent. Evaluation tasks require the crowd to assess the quality of inputs (e.g. a market survey). Examples of organisation tasks are image tagging (e.g. Galaxy Zoo), website rating (e.g. StumbleUpon) or text recognition (e.g. reCAPTCHA). The organisation task is always a micro-task. The Nature of the Crowd specifies the requirements for the crowd to engage in certain tasks. Any individual means that anyone can do the task. Examples of most people tasks are evaluating inputs (e.g. rating). Expert tasks require that the crowd has specific expertise to solve a difficult problem "which does not lend itself to aggregating a number of responses" (2009: 298). Table 5, an extract from Corney et al. (2009), shows their classification:

Table 5: Classification of Crowdsourcing Activities (Corney et al., 2009)

Corney et al. (2009) recognize that the crowd not only creates and evaluates inputs, but is also responsible for organizing inputs. Yet, Corney et al. (2009) do not provide a clear definition of what constitutes an organisation task. For instance, the example of voting on websites (e.g. StumbleUpon) can be seen as an evaluation as well as an organisation task. Corney et al. (2009) also connect the nature of the
44

crowd with different approaches to aggregation. Most people tasks use all contributions, whereas expert tasks follow a rather selective approach. But Corney et al. (2009) provide examples that do not fit this classification. CrowdSpring (a most people task) is a website for logo design contests, but it uses a selective approach. Ushahidi, a geotagging tool, is considered by Corney et al. (2009) as an expert task. Yet, Ushahidi follows an integrative approach and it is rather simple for the crowd to make contributions.

2.3.3 Process Perspective

2.3.3.1 Geiger et al. (2011)

Geiger et al. (2011) propose a classification system for crowdsourcing processes from the perspective of the agent. Geiger et al. (2011) claim that their taxonomy is applicable to all crowdsourcing processes. The taxonomy solely includes process elements that can be influenced by the agent. The four main dimensions of the taxonomy are Pre-selection of Contributors, Accessibility of Peer Contributions, Aggregation of Contributions and Remuneration for Contributions.

Regarding the Aggregation of Contributions, Geiger et al. (2011) draw upon Schenk and Guittard (2010) and distinguish between integrative and selective aggregation of contributions. Accessibility of Peer Contributions describes the extent to which the crowd has access to the contributions of their peers. Geiger et al. (2011) identify four characteristics of this dimension: None, View, Assess and Modify. None indicates that the crowd cannot access the contributions of their peers (e.g. InnoCentive). View permits the crowd to see each other's contributions, but no interaction is allowed (e.g. public contests on 99designs). Assess allows the crowd to comment, rate or vote on the inputs of their peers. Modify means that the crowd can alter or even delete each other's contributions "in order to correct, update, or otherwise improve them" (Geiger, 2011: 7). Modify occurs in

collaborative crowdsourcing applications such as Wikipedia. Remuneration for Contributions can either be a fixed payment in integrative crowdsourcing (e.g. e-rewards), success-based (winning the award of a contest or revenue per sold picture on iStockphoto) or no monetary compensation at all. Pre-selection of Contributors describes how the agent can influence who will participate in a crowdsourcing effort. Qualification-based pre-selection requires that the crowd has certain qualifications to participate. For instance, members of the crowd need to win a contest to participate in the 99logostore. Context-specific means that the agent defines which groups can participate (e.g. only employees). Table 6, an extract taken from Geiger et al. (2011), shows their taxonomy:

Table 6: Taxonomy of Crowdsourcing Processes (Geiger et al., 2011)

Geiger et al. (2011) provide the following rationale for their taxonomy: "Any organization that aims to adopt crowdsourcing in an effective way is required to carefully consider the characteristics of the crowdsourcing process that will be used for their particular goal. (...) The purpose (...) is to propose a systematic scheme for classifying crowdsourcing processes and, thus, identify the relevant mechanisms that impact these processes. Since
46

crowdsourcing is used for a variety of different applications (product design, idea generation, problem solving, etc.), this paper focuses on those mechanisms that are applicable to all forms of crowdsourcing processes" (Geiger et al., 2011: 2). This taxonomy cannot capture whether or not a platform employs different process types within a platform. For instance, a crowdsourcing contest platform may not allow accessibility of peer contributions in the first round, but allow it in the second. Although the taxonomy can classify crowdsourcing processes, it is not able to capture the exact workflow of a crowdsourcing platform.
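To make the four dimensions more tangible, the following minimal sketch (Python) encodes two platforms along the taxonomy. The characteristic values follow the descriptions above (InnoCentive: no accessibility of peer contributions, selective aggregation, success-based remuneration; Wikipedia: modifiable contributions, integrative aggregation, no monetary remuneration); the pre-selection entries are illustrative assumptions rather than claims from Geiger et al. (2011).

```python
# Sketch: classifying platforms along the four dimensions of the
# Geiger et al. (2011) taxonomy. Values for the two examples follow the
# text above; the pre-selection entries are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class CrowdsourcingProcess:
    preselection: str    # e.g. "none", "qualification-based", "context-specific"
    accessibility: str   # "none", "view", "assess" or "modify"
    aggregation: str     # "integrative" or "selective"
    remuneration: str    # "fixed", "success-based" or "none"

processes = {
    "InnoCentive": CrowdsourcingProcess("none", "none", "selective", "success-based"),
    "Wikipedia":   CrowdsourcingProcess("none", "modify", "integrative", "none"),
}

for platform, process in processes.items():
    print(platform, vars(process))
```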

2.3.4 Conclusion

The classifications provide an insight as to which platforms should be considered as crowdsourcing. The review also shows that research on crowdsourcing originates from different disciplines; compare the viewpoints of Doan et al. (2011), who have a computer science background, with Papsdorf (2009), who is a sociologist. This may explain why most classifications hardly refer to each other (except for Geiger et al., 2011).

The functional perspective suffers from the fact that there is hardly any consensus about common categories. To which category an application such as InnoCentive belongs depends on the view of the scholar. InnoCentive has been put into several different categories: Crowd Casting (Howe, 2008), Broadcast Search (Brabham, 2012), Virtual Microjobs (Papsdorf, 2009) or Intermediaries (Gassmann et al., 2010). The task perspective provides insights about the impact of the task structure on other design elements. But the task structure of Schenk and Guittard (2010) is blurry and the classification of Corney et al. (2009) suffers from inconsistencies. Rouse (2010) explains how the benefits of crowdsourcing, the task structure and motivational factors are connected.

Although the classification is well thought through, the motivational perspective has one drawback: "[M]otivation emerges as an ex post result of a particular crowdsourcing realization seen from a contributor's perspective. Most motivational factors, especially intrinsic ones such as passion, fun, community identification, or personal achievement, cannot be directly controlled by the crowdsourcing organization (Leimeister, Huber, Bretschneider, and Krcmar, 2009). In addition, motivational factors often overlap and are, thus, sometimes impossible to distinguish (Ryan and Deci, 2000)." (Geiger et al., 2011: 8)

Geiger et al. (2011) integrated a large body of existing literature into their taxonomy and provide, so far, the clearest and most concise analysis. All process types can be observed and are comparable. Their work suggests that, for creating a taxonomy, it might be useful to first determine a cluster of platforms with common mechanisms and in a second step identify whether the platforms within a cluster serve a common purpose. Consider the Goldcorp Challenge and InnoCentive, which Brabham (2012) categorises as Broadcast Search. Both platforms have the same mechanisms and would be categorised as selective crowdsourcing without crowd assessment. I will discuss later why describing the aggregation process solely as integrative and selective has its drawbacks. In the second section of the paper, I will examine how the accessibility of peer contributions influences the outcome of a contest.

2.4 Aggregation and Selection


How does a crowdsourcing platform process the crowd's input? The first phase is called aggregation; it specifies how the agent gathers the input of the crowd and which mechanisms are used to do so.

The second phase, selection, describes how the platform processes the input once it is gathered on the platform. Selection includes three activities: organisation, assessment and selection of the input. Depending on who selects and to what extent the inputs are selected, one activity will be more important (e.g. selection through voting by the crowd) while another will not be part of the process (e.g. organising in the form of tagging). In other words, not all three activities must be performed to constitute the selection phase. In each of the two steps a different actor may perform the task. In most cases, the crowd will perform aggregation16; in the selection phase, the crowd might evaluate the inputs through voting, but the agent makes the final selection.

In contests which are designed according to the principles of rational decision-making, the process starts with the agent who makes the open call. The crowd then contributes their individual ideas (aggregation) and the agent selects the winners (selection). But not all crowdsourcing platforms follow this strictly linear process where each step follows another. Aggregation and selection may be strongly interconnected with each other, and thus seamless transitions occur between the two steps. Moreover, the two steps can have a dynamic or circular relationship. Consider the example of writing articles for Wikipedia. A user writes an entry (aggregation), other users suggest changes (selection), the user rewrites the entry (aggregation), other users do not agree with these changes (selection) and one of them proposes a new version (aggregation). A Wikipedia administrator steps in to resolve the conflict (selection), but later, due to recent events related to the entry, the article needs to be adapted and the process starts again.

The aggregation and selection process is connected to the question of whether the crowd is aware of participating in this process. Recall the difference between explicit and implicit crowdsourcing.

16 Except for an agent who posts content and seeks the wisdom of the crowd to evaluate his/her ideas.


This dichotomy captures whether the primary activity of the crowd on the platform triggers the desired effect (explicit), or whether the primary activity triggers a useful side effect that is used by the agent's platform to solve a problem (implicit). Examples of explicit platforms are voting and reviewing in the Amazon bookstore, tagging pages at del.icio.us, sharing items, such as videos, on YouTube, networking on Facebook or creating artifacts on Wikipedia. An example of an implicit system is the ESP Game. In this application the crowd helps to label images by playing a game. The user is aware that playing the game creates this useful side effect. Doan et al. (2011) distinguish between implicit systems that are based on a standalone platform (e.g. the ESP Game) and systems that piggyback on another system (e.g. the recommendation feature of Amazon). Piggyback systems use machine contributions (e.g. statistical algorithms) to process the traces the crowd leaves in the main system in order to solve a problem. It is worth noting that Doan et al. (2011) consider reCAPTCHA a standalone system. Nevertheless, reCAPTCHA gains most of its inputs through integration into other websites. Hence, reCAPTCHA should be considered a piggyback system.

It is not clear if piggyback systems meet the criteria for crowdsourcing. An argument against comes from Quinn and Bederson (2011), who conclude that data mining can be defined as the application of algorithms to extract patterns from data. From the viewpoint of Quinn and Bederson (2011), Google's page-ranking system should be considered data mining. The web pages are linked together by the users, but this linking is not directed or caused by the Google system. The recommendation feature of Amazon also extracts patterns from user behaviour. The system mines individual buying behaviour and suggests products that fit the preferences of the user. A different perspective comes from Lykourentzou et al. (2011), who distinguish between active and passive systems of collective intelligence.

In active systems, "crowd behaviour does not pre-exist but it is created and coordinated through specific system requests" (Lykourentzou, 2011: 219-220). Active systems are analogous to explicit crowdsourcing. In passive systems, the crowd behaves in a manner that is not influenced by the system. To put it differently, the crowd is not aware of the system and unintentionally contributes to it. An example of a passive collective intelligence system is a traffic control system which uses the inputs of the cars to adjust the speed limit and thus prevent traffic congestion.

On Amazon, many users presumably are not aware that their buying behaviour is an input for the system. The recommendation system of Amazon should therefore be considered a passive or an implicit crowdsourcing system. The buying decisions of the crowd are inputs for Amazon's system. Instead of manually looking at the buying decisions of the crowd, the agent uses the machine to compute the selection of the data (the main activity in this case is organising, as Amazon would probably not delete any of its valuable data). From this perspective, data mining is a selection instrument that relies on computational power.

2.4.1 Aggregation

2.4.1.1 Integrative and Selective Crowdsourcing

Integrative and selective crowdsourcing are two distinct approaches that specify how the inputs of the crowd are aggregated.

Selective crowdsourcing: Selective crowdsourcing occurs if the agent can "choose from an input among a set of options the crowd has provided" (Schenk and Guittard, 2010: 8). Selective crowdsourcing is useful if the agent has a specific problem, for instance a well-defined R&D problem that cannot be solved within the organisation (Schenk and Guittard, 2010). Geiger et al. (2011) add that if selective crowdsourcing allows assessment by the crowd, the selection is based on collective opinion (e.g. Threadless).

"[S]elective CS generally implies a winner-takes-all mechanism where only the finder of the 'winning' solution is rewarded" (Schenk and Guittard, 2010: 8). Geiger et al. (2011) agree with Schenk and Guittard (2010) and argue that selective crowdsourcing predominantly comes into existence in the form of contests. Figure 7 visualises how selective crowdsourcing uses the inputs of the crowd:

Figure 7: Selective Crowdsourcing

Figure 7 shows that the crowd has created a set of options. The agent chooses one input which is deemed useful for solving the problem. This figure does not visualise the possibility that the agent might ask the crowd to assess the options. Based on this figure, the agent could, for instance, enable the crowd to vote and comment on the inputs in order to reduce the set of options from 21 to seven.

Integrative crowdsourcing: Schenk and Guittard (2010) argue that crowdsourcing "offers access to multiple and complementary information and data (e.g. geographical data). We name this Integrative Crowdsourcing (integrative CS) since the issue is to pool complementary input from the crowd. Individual elements have very little value per se but the amount of complementary input brings value to the firm" (2010: 8). Integrative crowdsourcing is useful if the agent has the goal of building a large data or information base. The key challenge is to manage

incompatible or redundant information (Schenk and Guittard, 2010). Geiger et al. (2011) add that integrative crowdsourcing harnesses creative power (e.g. iStockphoto) or collective opinion (e.g. Digg). Figure 8 provides a visualisation of integrative crowdsourcing:

Figure 8: Integrative Crowdsourcing

Figure 8 highlights that one input created by the crowd has little value for the agent, but the entire collection of inputs has a high value. For example, a single Wikipedia article is of little use on its own, but one thousand Wikipedia articles are not. Bear in mind that only the inputs within the dashed rectangle are useful for the agent. Inputs outside of the rectangle may fail to meet certain quality criteria and are therefore discarded. For instance, a Wikipedia article might be deleted because it violates the guidelines.

At this point, the limitations of the selective/integrative view of crowdsourcing become apparent. There is a difference between how Wikipedia integrates the contributions of the crowd and how iStockphoto does. However, these differences cannot be explained with the concept of selective and integrative crowdsourcing, because Schenk and Guittard (2010) place emphasis on the final outcome of the aggregation and do not distinguish between aggregation and selection.
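The contrast between the two aggregation modes can be sketched as follows. This is a simplified illustration under my own assumptions (a placeholder scoring function and quality check stand in for the judgment of the agent or the crowd), not a method prescribed by Schenk and Guittard (2010).

```python
# Simplified sketch contrasting selective and integrative aggregation.
# The scoring function and quality check are placeholders for the agent's
# (or the crowd's) judgment, not a prescribed method.

def selective(contributions, score, winners=1):
    """Keep only the top-ranked contribution(s); the rest are discarded."""
    return sorted(contributions, key=score, reverse=True)[:winners]

def integrative(contributions, meets_quality):
    """Pool every contribution that passes the quality criteria."""
    return [c for c in contributions if meets_quality(c)]

ideas = ["idea A", "idea Bb", "idea Cccc", ""]
best = selective(ideas, score=len, winners=1)          # e.g. a contest
pool = integrative(ideas, meets_quality=lambda c: c)   # e.g. a knowledge base
print(best, pool)
```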


2.4.1.2 Collective Intelligence Genome Framework

Unlike Schenk and Guittard (2010), Malone et al. (2009, 2010) differentiate in their Collective Intelligence Genome Framework17 between create (aggregation) and decide (selection). The Collective Intelligence Framework captures the who (hierarchy or crowd), what (goal and task), how (structure and process) and why (incentives) of a crowdsourcing platform. For each interrogative pronoun, Malone et al. (2009, 2010) identified several genes. These genes can be used to analyse the collective intelligence of a platform. All genes of a platform together form a genome. Table 7 shows the InnoCentive genome:
Example: InnoCentive    What                   Who                      Why      How
Create                  Scientific solutions   Crowd                    Money    Contest
Decide                  Who gets reward        Management (hierarchy)   Money    Hierarchy

Table 7: InnoCentive Genome (Malone et al., 2009)
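Read as a data structure, the genome in Table 7 could be represented along the following lines. This is a sketch only; the field values simply restate the table.

```python
# Sketch of the InnoCentive genome from Table 7 as a nested mapping:
# the "create" and "decide" stages each get their own what/who/why/how genes.

innocentive_genome = {
    "create": {"what": "Scientific solutions", "who": "Crowd",
               "why": "Money", "how": "Contest"},
    "decide": {"what": "Who gets reward", "who": "Management (hierarchy)",
               "why": "Money", "how": "Hierarchy"},
}

for stage, genes in innocentive_genome.items():
    print(stage, genes)
```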

Sakamoto et al. (2011) state that by analyzing the individual steps (genes) of a crowdsourcing platform (genome), the Malone framework provides an instrument to understand the workflow of a platform. Malone et al. (2009, 2010) identify three different create genes: collection, contest and collaboration.

1) Collection: This gene occurs "when the items contributed by the members of the crowd are created independently of each other" (Malone et al., 2009: 6). Collections are useful for any activity that can be broken down into small and (mostly) independent pieces (Malone et al., 2009). Examples of a collection gene are picture galleries on iStockphoto, videos on YouTube or articles posted on Digg. Yet, although each item is created independently, it does not mean that the inputs are not connected (c.f. Doan et al.,
17 I will use the designation Collective Intelligence Framework instead of Collective Intelligence Genome Framework.


2011: 88). In the case of iStockphoto, the crowd may assign certain attributes to a picture; thus, it has a loose connection to other pictures that share the same attributes. Figure 9 makes the collection gene visible. Each circle represents an element that was created by the crowd:

Figure 9: Collection Gene

Another example of a collection is Dell's Ideastorm, which Geiger et al. (2011) consider selective crowdsourcing in the sense of Schenk and Guittard (2010). At Dell's Ideastorm, the crowd creates product ideas or suggestions on how to improve Dell's products. The crowd can comment on each idea and vote for or against it (promote and demote). Dell incorporates the ideas that it finds viable. This example shows that a collection can be either integrative or selective crowdsourcing.
2) Contest: The contest gene is a sub-gene of the collection gene. The contest gene has a winner-take-all logic. Examples of contests are InnoCentive, the Netflix Prize or the OSRAM LED Design Contest. Figure 7 (selective crowdsourcing) could be applied to the contest gene.

3) Collaboration: The collaboration gene occurs "when members of a Crowd work together to create something and important dependencies exist between their contributions" (Malone et al., 2009: 7). A prominent example of the collaboration gene is Wikipedia. The contributions to an article are strongly dependent on the work of other individuals. Linux also belongs to the category of collaboration, because each module is interdependent.

Figure 10 shows the collaboration gene:

Figure 10: Collaboration Gene

The figure above shows a collection of elements that are connected with each other through linking. The circle on the right shows that one element consists of four parts. In this case, four members of the crowd created this element by contributing one part each. The pieces within each element are strongly interdependent. Due to the high level of interdependency of the pieces, the users need to coordinate with each other. The part on the upper right side represents a member of the crowd attempting to add a new part to the element. Once the new part is added to the system, another user might check whether it fits the element and, if not, alter it.

2.4.1.3 Overlapping Dimensions in Aggregation

But what if the three create genes (contests, collections and collaboration) were to overlap? For instance, Malone et al. (2009) mention that all of Wikipedia is a collection of independent articles18. But a single article is a collaboration, because each part within an article is interdependent.

Linux also has a collaboration gene (Malone et al., 2009). In Linux, developers improve existing code or add new modules to the existing

18 Even though there is intensive hyperlinking between the articles in the collection.


system, and the code that is added to the system needs to be compatible with the existing material. Linux is, in its essence, a collaboration gene; the parts are more or less dependent on each other to make the system work. If an article is missing in Wikipedia, it does not cause major problems. Faulty code in Linux may cause an alert or, even worse, a system crash. Collaboration in this sense is integrative, because users depend on each other to combine the elements into a whole system. Note that the use of integrative in this context is slightly different from what Schenk and Guittard (2010) describe as integrative crowdsourcing. The elements of this type of aggregation have a quasi-permanent character. Elements are only discarded if they are not useful to support the system.

According to Malone et al. (2009, 2010), InnoCentive has a contest gene. Only one, or a few, solutions are selected from the contributions created by the crowd. The collection of contributions in a contest has a temporary nature. Once the winning solution is selected, the remaining contributions are discarded. The created artifacts have few or no connections with each other. Contributions can be made in parallel, which means that a member of the crowd can contribute independently of the others. This contrasts with integrative aggregation, where members of the crowd must consider the work of their peers to make a useful contribution.

The most distinctive type of collection would be a crowdsourcing platform that uses the inputs of the crowd to create a database. For instance, implicit crowdsourcing, such as the recommendation system of Amazon, uses the buying decisions of the crowd as inputs. Presumably, Amazon stores all user data permanently and no data is discarded. The created elements within a collection have a low to medium connection with each other.

iStockphoto is at first glance a collection of pictures. But at the same time, there is a competition between the photographers for a piece of the pie in terms of sales within a certain category.

For example, within the category "sunset lake" there is a competition about who has the most sales. If certain pictures do not sell well, iStockphoto moves them to the dollar bin (pictures are sold at a cheaper price in this category) and notifies the user that he/she might consider removing them from the portfolio (Mail, 2009). Although most users do not contribute to iStockphoto due to monetary considerations (Howe, 2008; Brabham, 2008), iStockphoto still has a competitive element.

InnovationExchange is an open innovation platform that explicitly supports solvers in forming a team.19 From the perspective of Malone et al. (2009, 2010), InnovationExchange would be a contest. Indeed, the entire platform is based on the contest model, but by enabling users to form a team, it also has a collaborative element. Collaboration occurs within each team, which means that the contribution of the team consists of interdependent parts. Thus, one could argue that InnovationExchange is a combination of competition and collaboration. The figure below visualizes the overlapping dimensions in the aggregation process:

19 InnoCentive also allows the participation of teams, but the support for teams is apparently not as explicit as in the case of InnovationExchange. Lakhani et al. (2007) mention that 10.6% of the solvers formed a team (n=993). The average team size was 2.8 members. The formation of teams did not increase the chances of winning. Overall, 79.6% of the participants report that they did not consult others when working on their ideas.


Figure 11: Overlapping Dimensions in Aggregation

2.4.2 Selection

Aggregation describes how a platform gathers the input of the crowd. Selection captures how the platform organises the inputs, assesses them and makes a selection. Examples of instruments for those activities are tagging for organising, commenting for assessing, voting for selecting, and so on. To analyse the instruments of this phase I will again draw upon the genes proposed by Malone et al. (2009, 2010). In the aggregation section I used the create genes (collection, contest, collaboration); in the selection phase, I shall use the decide genes, which encompass the evaluation and selection of alternatives. Wise et al. (2010) adapted the framework of Malone et al. (2009, 2010) and proposed an evaluation gene to distinguish between evaluation and selection. The rationale for the evaluation gene is that many crowdsourcing platforms use the crowd to assess and select the inputs of their peers, but still, as the case of Threadless shows, the final decision on whether to produce the winning designs lies in the hands of the Threadless management.
59

2.4.2.1 Decide Gene

Malone et al. (2009, 2010) differentiate between group decisions and individual decisions. A group decision (...) occurs when inputs from the members of the crowd are assembled to generate a decision that holds for the group as a whole (Malone et al., 2009: 7). In contrast, individual decisions occur when members of a Crowd make a decision that, though informed by crowd input, does not need to be identical for all (Malone et al., 2009: 9). Table 8 provides an overview of all decide genes, grouped according to whether they are individual or group decisions.
Group decisions: Voting, Averaging, Consensus, Prediction market
Individual decisions: Markets, Social networks, Hierarchy

Table 8: Overview of Individual and Group Decisions

Voting: The opinion of the group is determined by counting the individual votes for each input. Bao et al. (2011) suggest that voting is useful to identify the very best of the inputs. In the case of a contest, the majority of the votes determines the winner; in the case of the platform Digg, voting reflects the popularity of the stories, and the stories with the most votes are prominently displayed on the website. Figure 12 gives an example of a voting process:

Figure 12: Voting Process
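To make the counting logic behind Figure 12 concrete, the following minimal Python sketch tallies votes and picks the input with the most votes. The vote data and the function name are hypothetical illustrations, not taken from any of the platforms discussed here.

from collections import Counter

def tally_votes(votes):
    """Count votes per input and return (ranking, winner).

    votes: iterable of input identifiers, one entry per vote cast,
    e.g. the submissions that members of the crowd voted for.
    """
    counts = Counter(votes)
    ranking = counts.most_common()        # inputs ordered by number of votes
    winner = ranking[0][0] if ranking else None
    return ranking, winner

# Hypothetical votes cast by the crowd on three submissions.
votes = ["idea_A", "idea_B", "idea_A", "idea_C", "idea_A", "idea_B"]
ranking, winner = tally_votes(votes)
print(ranking)   # [('idea_A', 3), ('idea_B', 2), ('idea_C', 1)]
print(winner)    # 'idea_A' -- the majority of the votes determines the winner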


Malone et al. (2009) identified two variations of voting. Implicit voting interprets certain actions of the crowd as casting votes. For instance, iStockPhoto "displays photos in order of the number of times each photo has been downloaded" (Malone et al., 2009: 8). Another variation is weighted voting. For example, a web community gives the votes of users with more experience more weight than those of users who have just signed up to the community (e.g. Vencorps).

Averaging: "In cases where decisions involve picking a number, another common practice is to average the numbers contributed by the members of the Crowd" (Malone et al., 2009: 8). The five-star rating system of Amazon uses averaging. Bao et al. (2011) suggest that this rating system often mixes poor inputs with mediocre ones. The five-star rating system has difficulties filtering out extreme solutions (2011: 7) but is useful for capturing the differences among the inputs (Bao et al., 2011). Another example for averaging is Surowiecki's example of weighing an ox (Surowiecki, 2004).

Consensus: Consensus means that all, or essentially all, group members agree on the final decision (Malone et al., 2009: 8). Examples for consensus are Wikipedia, where articles remain unchanged as long as no one is dissatisfied with the current article, and reCAPTCHA, where a passive consensus is reached through several users transcribing a word similarly (Malone et al., 2009).

Prediction market: Prediction markets, which work similarly to stock markets, are used to estimate the probability of future events. "In prediction markets, people buy and sell shares of predictions about future events. If their predictions are correct, they are rewarded, either with real money or with points that can be redeemed for cash or prizes" (Malone et al., 2009: 9). Intrade is an example of a crowdsourcing platform that uses a prediction market to determine whether an event happens or not (yes/no proposition). Members of the crowd buy and sell shares. For instance, individuals who think Obama will be re-elected in 2012 buy shares; individuals who do not, sell shares. The market always settles at either $0.00 (the event has not happened) or $10.00 (the event has happened). The profit of one person is the loss of another. Malone et al. (2009) report that organisations such as Google, Microsoft, and Best Buy use prediction markets to harness the collective intelligence of people within their organisation (Malone et al., 2009: 9). At ManorLabs (a platform of the City of Manor), which is based on Spigit, ideas must pass an expert filter before they can be traded on a prediction market. Ideas with a high value are considered for implementation (Gegenhuber and Haque, 2010). Figure 13 outlines a prediction market with three traded ideas:

Figure 13: Prediction Market

Howe (2008) argues that in prediction markets not all votes are equal. Insiders are likely to invest more and there are "incentives to the ignorant to keep their money in their wallets" (Howe, 2008: 162). According to Howe (2008) there are two problems for prediction markets: First, prediction markets within organisations often have weaker incentives for employees than a stock market because money is replaced with a virtual currency. Second, a prediction market may suffer from herd behaviour, especially if the crowd can interact with each other.
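The settlement logic of such a binary market, as described above for Intrade, can be illustrated with a minimal Python sketch. The positions and prices below are hypothetical; the sketch only restates the $0/$10 settlement rule and is not an implementation of any particular platform.

def settle_binary_market(positions, event_happened, settle_high=10.0, settle_low=0.0):
    """Settle a yes/no prediction market of the Intrade type.

    positions: dict mapping trader -> (shares, trade_price); positive shares
    mean the trader bought 'yes' shares, negative shares mean he/she sold them.
    Each share settles at settle_high if the event happened, else at settle_low.
    """
    payoff = settle_high if event_happened else settle_low
    return {trader: shares * (payoff - price)
            for trader, (shares, price) in positions.items()}

# Hypothetical trades on the proposition "the event happens":
# one trader buys 2 shares at $6.50, the counterparty sells 2 shares at $6.50.
positions = {"buyer": (2, 6.50), "seller": (-2, 6.50)}
print(settle_binary_market(positions, event_happened=True))
# {'buyer': 7.0, 'seller': -7.0} -- the profit of one person is the loss of another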

Markets: In markets an individual decides what to buy or to sell. In some markets the prices are driven by demand and supply (e.g. eBay); other markets, such as iStockPhoto, have pre-defined price ranges (Malone et al., 2009). Social networks: Social networks connect people with each other and the relationships reflect trust and affinity. Crowd members assign different weights to individual inputs depending on the people who provided them and then make individual decisions (Malone et al., 2009: 10). Hierarchy: Another type of individual decision is a decision made by the hierarchy. For instance, in the case of InnoCentive, the R&D team and/or management (hierarchy) of the agent decides which entry of the crowd should be used. The application of data mining also belongs to this gene. In this case the hierarchy creates a set of standardised search rules based on which the machine (computer) selects the inputs.

2.4.2.2 Evaluate Gene

As mentioned previously, Wise et al. (2010) introduced an evaluation gene to distinguish between selection and evaluation. Table 9, an extract taken from Wise et al. (2010), provides a definition and an example for the evaluation gene, including the corresponding feedback-public and feedback-not public genes. Note that the feedback-public gene is a group decision while the feedback-not public gene is an individual decision:


Gene: Evaluate (WHAT)
Definition: The Decide gene introduced by Malone et al. is defined as a system where users evaluate and decide. While there have been several open government initiatives which allow individuals to evaluate, the management (hierarchy) decides. This is defined as the Evaluate gene.
Example: OpenEducation allows individuals to evaluate and comment on the plan. However, the users are not allowed to decide nor are their opinions aggregated but rather are simply allowed to evaluate.

Gene: Feedback-Public
Definition: Similar to the collection gene, the feedback-public gene occurs when individual inputs are collected; however, the inputs considered in this case are feedback (actionable opinions and review) where the feedback of individuals is visible to the broader community.
Example: The Listening and Learning Tour by Secretary of Education Arne Duncan includes a blog which allows stakeholders to publish publically visible feedback on issues relevant to education.

Gene: Feedback-Not Public
Definition: The feedback-not public gene is nearly the same as the feedback-public gene; however, the feedback of individuals is not publically displayed.
Example: The IT Dashboard provides a transparent overview of over 7,000 Federal Information Technology Investments. Citizens can give the CIOs of the departments feedback via a form, which is not visible to others.

Table 9: Evaluation Gene (Wise et al., 2010)

Feedback-public captures the fact that the crowd cannot comment on the contributions of their peers. Feedback-not public indicates that direct messaging also plays a role in crowdsourcing. I will introduce two other genes that have not been considered yet. Many platforms allow users to tag items. Tagging helps to organise the input of the crowd and creates connections between contributions. Tagging is not only an instrument to organise the inputs of the crowd, but also an instrument for evaluating the importance of an input. Contributions that are tagged are easier to find and are thus likely to get more attention from other users. Another gene is flagging. The crowd can flag content to point out inappropriate contributions. Table 10 provides a definition for the tagging and flagging genes and gives an example for each:
Gene: Tagging (Evaluate)
Definition: Zwass (2010) argues that a "Bottom-up taxonomy (folksonomy) can be used to classify and provide access (as in Flickr)" (2010: 31). Tagging not only organises knowledge, it also gives weight to inputs. Elements that are tagged by many people lead to a better position in the taxonomy. Therefore tagging is at the same time a weak instrument for implicit voting.
Example: OpenGov allows that the crowd create tags themselves or tag the ideas of each other.

Gene: Flagging (Evaluate)
Definition: Flagging is the act of the crowd to mark inappropriate content, such as contributions that violate the community guidelines. Normally the agent screens the flagged content and decides whether it should be deleted or not.
Example: The crowd can report contests that violate the contest rules on 99designs. At MechanicalTurk the crowd can report HITs that are broken or violate the policies of the platform.

Table 10: Tagging and Flagging Gene

2.4.2.3 Selection Instruments and Governance Mechanisms

Transaction cost theory (TCT) argues that an organisation exists because transactions between the members of an organisation are more efficient (i.e. entail lower costs) than transactions via the market mechanism. The transaction costs determine efficient boundaries (Coase, 1937; Williamson, 1975).

"A transaction occurs when a good or service is transferred across a technologically separable interface. One stage of activity terminates and another begins" (Williamson, 1981: 552). Williamson (1981) draws the analogy between friction within mechanical systems and transaction costs between parties. The main goal is to keep transaction costs as low as possible to ensure a harmonious exchange. The two behavioural assumptions of TCT are that agents are bounded rational and that some agents act opportunistically. Additionally, environmental factors such as a high level of uncertainty and complexity, as well as small numbers in the bidding processes need to be considered (Ouchi, 1980). Burger-Helmchen and Pnin (2010) applied TCT to crowdsourcing to examine under which conditions inventive activities (e.g. posting challenges on InnoCentive) can be transferred to the crowd. One conclusion is that transaction costs are low if the problem and solution is strongly codified and if it is possible to secure the solution with a patent. These conditions are met in chemistry and pharmaceuticals20 or software (codified and modular). The level of analysis of BurgerHelmchen and Pnin (2010) are efficient boundaries (i.e. under what conditions is crowdsourcing a more efficient form of organising than performing the transaction internally). Another level of analysis is the governance mechanism of the platform itself. From the perspective of TCT, selection instruments can be considered as governance mechanisms. These mechanisms are essential to organise the contributions of the crowd. From this governance perspective the following questions emerge: What mechanisms do exist? Under which conditions does a particular kind of governance structure have better transaction cost economies in relation to another? There are two main determinants to assess whether a governance structure is efficient. Ouchi (1980) distinguishes between goal congruence and performance ambiguity. If the two parties share the same goal, the goal congruence is high. Performance ambiguity is high, if it

20 Note that most challenges from InnoCentive are in these domains.

For instance, it is difficult to measure the individual performance of a team of manual freight loaders. An example from the digital realm is Wikipedia: it is difficult to determine the value of an individual who collaborates with other users in the creation of a highly interdependent artifact. Each selection instrument (e.g. voting) can be categorised into one of the four governance mechanisms. The four governance mechanisms are as follows:

1) Hierarchy: Hierarchy means that the agent (or the intermediary) has the authority to make a decision. The hierarchy is analogous to what Ouchi (1980) describes as bureaucracy. In TCT, bureaucracy refers to an employment relation and the hierarchy can direct and monitor the activities of the crowd. An employment relation in the traditional sense is rare in crowdsourcing. In crowdsourcing, hierarchy means that the agent has the legitimate authority to decide which inputs of the crowd are used and which are discarded. Many platforms still rely on hierarchies (Malone et al., 2009). Hierarchy is useful if the number of decisions an agent has to make is low to medium. Optimal conditions for hierarchy are medium to low goal congruence and high performance ambiguity. High performance ambiguity in this context means that the agent is very uncertain about the performance potential of an idea created by the crowd. Consider an innovation contest. First, the agent cannot fully assess ex ante the innovation potential of an idea. Second, the idea may be useful but not fit into the existing strategy of the agent. In this case, hierarchy reduces uncertainty for the agent. The trade-off of hierarchy is that non-transparent decisions may alienate the crowd.

2) Standardisation: Standardisation is a sub-mechanism of hierarchy and is based on rules. Standardisation can be done by the machine (computer), by humans, or by a combination of both. Crowdsourcing applications can standardise the input and/or standardise the quality of the output. Standardisation of input takes place before the input of the crowd is posted on a platform. To achieve standardisation of outputs, all contributions are screened and, if they do not meet certain quality criteria, they are deleted. For instance, platforms use the crowd to flag inappropriate content. If a contribution is flagged, a member of staff of the platform decides based on a provided set of rules whether it should be deleted or not.21 Consider the example of MechanicalTurk, a platform that has a high degree of what Mintzberg (1979) calls standardisation of work processes. Users get a detailed description of how to solve the tasks. To control the quality of the work, the system gives the user a task for which the system already knows the answer. If the user fails to give a correct answer, the system does not consider his/her contributions. Still, for instance in the case of transcribing, a manual control is also needed. In addition, YouTube uses a combination of machine and human control to prevent the posting of work that violates copyright. Standardisation is efficient if the agent faces a high number of contributions, the goal congruence is low, and the performance ambiguity is medium to low. If the performance ambiguity were high, standardisation procedures would not work.

3) Meritocracy: Meritocracy refers to a system which judges the contributions and abilities of individuals. To put it simply, it does not matter who the individual is; what counts are his/her contributions on the platform. Trompette et al. (2008) argue that an informative meritocracy infrastructure to display the achievements of contributors is part of the crowdsourcing incentive model. Selection instruments such as weighted voting or weighted averaging are

21 If the employee cannot decide based on the provided set of rules, hierarchy will make the decision.

used to give members who earn merits through achievements more weight in the selection process. Ouchi (1980) introduced clans as a form of organising to TCT. Clans have an organic solidarity and a strong sense of community. Clans build on norms and traditions and consequently there is less need for the agent to monitor their activities. Common values and beliefs provide the harmony of interests that erases the possibility of opportunistic behaviour. "If all members of the organization have been exposed to an apprenticeship or other socialization period, then they will share personal goals that are compatible with the goals of the organization" (Ouchi, 1980: 138). Common values and beliefs as well as the emergence of norms can be observed in communities (c.f. Brabham, 2010). Due to the sense of community, the goal congruence of clans with the agent is medium to high. But a community should not be conceived of as a single unit of homogenous individuals. Recall the analysis of the social structure. The personal goals of some users, such as brand evangelists (c.f. Sean and Moffitt, 2011), are probably more aligned with the goals of the agent than those of lurkers. On the Vencorps platform the crowd decides which start-ups should be considered for investing venture capital. Members who are deemed qualified or earn reputation points through actively participating on the platform have more weight in the voting process than other users. Meritocracy is useful if the number of decisions an agent has to make is high and the goal congruence and performance ambiguity are moderately high. A possible trade-off lies between establishing common values and attracting newcomers. Norms of a clan/community may increase entry barriers for newcomers and ignore new points of view (Preece and Maloney-Krichmar, 2003).

4) Markets: Market relationships are based on contractual exchanges and rely on prices as a source of information. The most efficient form of exchange is a spot contract. According to Ouchi (1980) the simplest form of spot contract is buying candy at a store. After the customer pays for the candy there are no further obligations that need to be met. A crowdsourcing example for a spot contract is buying a T-shirt on Spreadshirt.22 Selection instruments such as voting, averaging and prediction markets that aggregate the decisions of individuals also fall into the category of markets. Drawing upon Ouchi (1980) and Williamson (1981), markets are efficient if the goal congruence and the performance ambiguity are low. Additionally, markets are useful if the number of decisions is high.

Hybrid: Ouchi (1980) acknowledges that a pure governance mechanism (market, bureaucracy or clan) hardly exists and even a combination of governance mechanisms may not be sufficient in some cases. A crowdsourcing platform, too, may use a combination of two or more governance mechanisms. If at least two different selection mechanisms are used, it is a hybrid. An example of a hybrid is Threadless, because it uses a combination of market and hierarchy. Howe (2008) suggests that a combination of experts (hierarchy) and other selection mechanisms might yield better results. Howe (2008) refers to the voting process on Digg as an example of how crowdsourcing can be gamed and argues that mob rule may neglect interesting information. Note that the application of different selection mechanisms creates a shift in the social structure of a platform. If this combination is not well thought through, it might harm the outcome of a crowdsourcing effort.

22 Another example of a platform that uses a market as a governance mechanism is Twago. This platform allows agents to tap into a pool of freelancers worldwide. An agent posts his/her project requirements on the platform and members of the crowd make an offer for the required services. The agent can choose the best offer. Twago as an intermediary provides an escrow service for both parties, which is an important element to reduce monitoring costs for the agent.

Consider Wikipedia as an example. Wikipedia combines two governance mechanisms. On the one hand, there is meritocracy: users create articles, collaborate, and decisions are based on consensus. On the other hand, Wikipedia created a bureaucracy and transferred authority to Wikipedia editors to resolve conflicts. The problem is that many of these editors make controversial decisions and alienate new users who want to contribute (c.f. Grams, 2010). To provide a brief summary of this section, I conducted a morphological analysis that combines the selection instruments with the governance mechanisms (Table 11). I did not include the hybrid in this analysis because, by definition, a hybrid may use multiple selection instruments, and therefore including it would not create any informational value for the reader.


[Table 11 is a matrix that maps the selection instruments of the collective intelligence framework (Decide (Group): voting, averaging, consensus, prediction market; Decide (Individual): markets, social networks, hierarchy; Evaluate (Individual): feedback-not public, feedback-public, flagging; Evaluate (Group): tagging) onto the four governance mechanisms hierarchy, standardisation, meritocracy and markets; the weighted variants (weighted voting, weighted averaging, weighted flagging) are marked under meritocracy.]

Table 11: Morphological Analysis of Selection Instruments and Governance Mechanisms

2.4.3 Aggregation and Selection Framework

The next figure (Figure 14) is a visualised summary of the aggregation and selection mechanisms. The inner circle shows the overlapping dimensions of aggregation. The outer circle lists the governance mechanisms, which can also be called selection mechanisms. The dashed line visualises that the transition between aggregation and selection is often seamless. The circles of the governance mechanisms cross the boundaries of the dashed line. Consider the selection mechanism standardisation of input: in this case, selection is integrated in the aggregation process.

Figure 14: Aggregation and Selection Framework


3 Contests in Crowdsourcing
3.1 Contests in General
Using contests to provide incentives for human ingenuity has a long history in modern society. A prominent example cited in numerous papers is the Longitude Prize. To accelerate progress in navigation technology, the British Empire staged a contest in the early 18th century that offered a £20,000 reward (today: roughly $6M) for the best solution to determine the longitude of a ship. It led to the development of the marine chronometer and gave the British Empire a technological head start in the competition with other nations for the dominance of the seas (c.f. Jeppesen and Lakhani, 2010; Morgan and Wang, 2010; Hutter et al., 2011). The growing body of literature about contests discusses contests under different labels such as tournaments of ideas (Morgan and Wang, 2010), innovation contests (Boudreau et al., 2011; Terwiesch and Xu, 2008; Bullinger et al., 2010; Hallerstede and Bullinger, 2010), innovation tournaments (Terwiesch and Ulrich, 2009) or ideas competitions (Ebner et al., 2009; Leimeister et al., 2009). This conversation about contests takes place in different fields. Brabham (2012) discusses contests from a crowdsourcing perspective, Füller (2010) and Hutter et al. (2011) from the co-creation lens, and Lakhani et al. (2007) and Jeppesen and Lakhani (2011) locate their work in the field of open/distributed innovation.23

3.1.1 Winner-take-all Model and Distributive Justice

Crowdsourcing reduces the prices of intellectual labour (Howe, 2008) and blurs the border between work and play (Brabham, 2008).


23 This body of literature on contests indicates that the popularity of contests is constantly increasing. Empirical data on the growing popularity of contests is practically non-existent. One exception is a McKinsey study that compares 1997 and 2007 and shows that the total amount of prize money rose from 74 to 315 million dollars. Note that the data only included prizes over $100,000 and the dataset of 210 prizes mainly included philanthropic prizes, of which many were staged offline (McKinsey, 2009).

The web provides a perfect information market with few regulations and unlimited mobility (Pabsdorf, 2009). The contest model is one major driver of these observations. Contests have a winner-take-all logic, which implies that a few beneficiaries stand opposite a majority that comes away empty-handed. I will discuss the distributive justice of contests based on the example of design contests. Mo et al. (2011) report that individuals have a less than 5% chance of winning a contest on the innovation platform zhubajie.com. Due to the large number of competitors in design contests, the chance of an individual winning is presumably also very low. Rouse (2012) states that design contests are not sustainable because they mostly offer token compensation. This leads to lower quality in design. Exploitative crowdsourcing drives talent out of the market and will not be sustainable in the long term. Morgan and Wang (2010) argue that contests are a useful instrument if individual efforts are difficult to monitor and if it is easier to measure and observe relative performance than absolute performance. The business model of design contests is based on the relative assessment of picking the most useful design in relation to other designs. In other words, the business model relies on the not-useful designs. Individuals of the crowd who add value to the platform by contributing the not-useful designs receive no rewards. Therefore, in the case of design contests, there is no distributive justice. Initiatives such as SpecWatch track how design contests exploit designers.24 Clients of design contest intermediaries have a money-back guarantee. If the client does not like the designs, he/she can refuse to pay the compensation. The crowd bears all the risk in design contests. SpecWatch tracks each incident of clients who refuse to award a designer and informs designers about the downsides of participating in such contests.

24 On a personal note: it is absolutely worth having a look at the Twitter account of SpecWatch: https://twitter.com/#!/specwatch. Unfortunately the last tweet is from January 2010.

But why should design contest platforms change their business model if they currently make profits? There are legal discussions about whether minimum wage laws also apply in the virtual realm (c.f. Cherry, 2009). But the web is flat and therefore it is difficult to impose regulations. In the long term, legislation may prevent exploitation, but in the short term the resistance of the crowd can put pressure on intermediaries and their clients. If the crowd turns against the crowdsourcer, the crowd engages in crowdslapping. For instance, Chevrolet Tahoe's attempt to get user-created ads from the crowd backfired (Howe, 2008; Brabham, 2008).25 There are several alternative reward systems. Awarding multiple prizes is one way to soften the winner-take-all logic. A possible alternative would be the development of co-operatives. For instance, a design co-operative could still use the contest model to create numerous options for the customer. In contrast to other design platforms, designers who have never won a contest but show a reasonable level of activity on the platform would receive a share of the earnings of the platform. The challenge of this model is twofold. First, how to determine who should become a member of the co-operative and who shouldn't? Secondly, how to allocate the surplus? Another possibility is that contests are solely a means to attract a crowd who want to showcase their talents. The main business model would be based on a marketplace of talents, where the agent can ask a member of the crowd to perform a task. The intermediary receives a commission for each booked assignment (e.g. DesignCrowd has contests
25 Creating awareness as in the case of SpecWatch is one counter-measure, but subversive efforts to undermine the business model would be more powerful. Digital activists could flood contests on 99designs with numerous entries (e.g. adapted vector files) to increase the screening costs for the client. An even more radical (and very controversial) step would be to post multiple contests without the goal of paying anyone. This approach of gaming the system could force design intermediaries to reconsider their policies regarding the money-back guarantee. With the rise of digital work it is also likely that labour will dedicate some attention to the crowdsourcing business model. Unions, which act as a firewall against exploitation in the offline world, could make similar efforts in the virtual realm.

and in addition one can hire a designer for a freelance job). Finally, a different reward model for innovation contests in general would be a performance-contingent award, such as a contract that rewards the winner of an ideation contest through payment of royalties. Terwiesch and Xu (2008) propose that a performance-contingent award results in a higher level of effort by a participant than does a fixed reward. Moreover, the performance-contingent award reduces the competition effect.26 Franke and Klausberger (2010) carried out a study that asked the crowd about the preferred reward structure. The results demonstrate that the crowd prefers a performance-contingent reward due to a sense of justice. Although a performance-contingent award may be perceived as fairer, it does not necessarily yield higher returns for the winner. With a fixed prize, the winner receives the reward even if the potential idea is not commercially successful. Therefore, a performance-contingent award increases the overall risk for a solver (Franke and Klausberger, 2010). Most of these models do not depart from the winner-take-all model. The winner-take-all model yields high performance, but the issue of distributive justice questions the sustainability of this model. If the rewards are not appropriate, talented individuals will be driven out of the market. I conclude that the development of new reward systems is necessary.

3.1.2 Generic Contest Process

The prototypical design of the process of a contest is influenced by the ideals of linear rational decision-making. Building on Malone et al. (2009), Terwiesch and Ulrich (2009) and Geiger et al. (2011), Figure 15 visualises a generic contest process:
26 The competition effect can be understood as follows: the more competitors there are in a contest, the less effort an individual makes because he/she is less likely to win (Boudreau et al., 2008). However, the performance award effect diminishes if the solver base is too large (countervailing competition effect).

Figure 15: Generic Contest Process

The bottom row specifies each stage of the contest process. The top row of the figure indicates which actor is involved in each stage of the crowdsourcing process. The agent defines the problem at the beginning of the contest and then makes an open call. The crowd generates ideas, which are aggregated in a collection on the platform, and the agent and/or the crowd assesses and selects the ideas. Finally the agent has to make a final decision whether the selected idea is useful to solve the problem. Let's apply Figure 15 to InnoCentive: the crowd generates the ideas, and the R&D employees of the agent who seeks a solution do the assessment and selection. Another possibility, although it is very unlikely, is that the agent may also be responsible for the aggregation of the ideas. For instance, a firm can post five new product ideas and give activity rewards to the crowd who votes and comments on the ideas. In this case, the open call would move to the right and occur after the aggregation process. If the crowd is neither involved in the generation and aggregation nor in the assessment and selection process, it is not crowdsourcing.

3.1.3 Contest Design Elements

In addition to understanding the processes involved in a contest, another issue is how to design a contest. Bullinger and Moeslein (2010) reviewed 33 journal and conference publications as well as 57 real-world contests to distil the ten most important design elements of contests. Table 12, an extract taken from Bullinger and Moeslein (2010), shows the ten design elements:

Table 12: Contest Design Elements (Bullinger and Moeslein, 2010)

For a detailed explanation of each element see Bullinger and Moeslein (2010). I argue that the design elements could be improved in three ways. First, the design element Media captures whether a contest is staged online, offline or a combination of both. Media refers to the environment of an innovation contest (IC). What should be added to this element is whether a contest seeks contributions from outside, from inside or from a combination of both. Second, the number of rounds is not considered as a design element. A contest can either be enduring (e.g. Threadless), one-round, two-round or multi-round. Third, the design element Community Functionality with the attribute given or not given is not very specific. "Community functionalities are instruments that facilitate information exchange, topic related discussion, and if allowed collaborative design of products. Applications belonging to the field of social software are well suited to foster community building, e.g. a fanpage of the contest on facebook.com, messaging services and personal profiles." (Bullinger and Moeslein, 2010: 3) The design element Community Functionality combines two important elements that are worth being considered as elements in their own right. One element is what Geiger et al. (2011) call Accessibility of Peer Contributions. The other element consists of the actual community functionalities such as personal profiles, (direct) messaging between the members or Facebook and Twitter pages. The community functionalities are linked to the accessibility of peer contributions, notwithstanding the fact that these community functionalities constitute a unique design element. For instance, the category None (Accessibility of Peer Contributions) would imply that there are no community functionalities. Yet, in blind contests on the design contest platform 99designs, users cannot access the contributions of each other, but they can discuss in a forum of the contest. Furthermore, each user can access the personal profiles of the others. Summing up, there should be two design elements: Accessibility of Peer Contributions and Community Functionalities.


3.1.4 Form Follows Function

From a practical perspective, philanthropic contests are a powerful tool for change and can have the following purposes: identifying excellence, focusing communities on specific problems, mobilising new talent, influencing public perception, strengthening problem-solving communities, educating individuals and mobilising capital. The purpose of the contest influences the prize strategy and the design elements of the contest (McKinsey, 2009). Hallerstede and Bullinger (2010) conducted a cluster analysis of 65 ICs to identify three types of contests: community-based ICs, expert-based ICs, and mob-based ICs. To analyse the core design elements, Hallerstede and Bullinger (2010) applied the design elements of Bullinger and Moeslein (2010).

Community-based ICs serve marketing and/or ideation purposes. Contests in this cluster run mid- to long term. Although there is often little time to build a community, contests in this cluster use community applications (commenting, direct messaging), Facebook, and Twitter. The degree of elaboration is low and either peers (30%), experts (33%), or both groups (37%) evaluate the contributions. Dimensions that determine placement in this cluster are runtime, community application and evaluators. An example of a community-based contest is the Smellfighters contest, which asked the crowd to share new ideas on how to reduce domestic smell.

Expert-based innovation contests have sustainability and development goals and require solutions from the crowd. Expert-based ICs have a medium to very long runtime; most allow messaging (89%) but only a few allow commenting (33%). Rewards are mostly monetary rather than non-monetary (including mixed reward systems). To win these contests, the crowd needs to create prototypes or complete solutions (i.e. contribute at a high degree of elaboration). The target group of this contest type are experts. Discriminating factors for this contest type are reward, runtime, degree of elaboration and community application. A typical example of an expert-based innovation contest is the Netflix prize.

Mob-based innovation contests have the same purpose as expert-based ICs and also demand solutions from the crowd. Such contests have a long to very long runtime, do not provide community applications or user profiles, but sometimes involve blogs, Twitter and Facebook fanpages. The reward system is either monetary (45%) or non-monetary (55%). Mob-based innovation contests require a medium degree of elaboration (idea to concept level). Agents who use mob-based ICs seek innovation flash mobs: "innovators spring up, contribute their submission and disappear and there is no community building" (Hallerstede and Bullinger, 2010: 6). An example of a mob-based innovation contest is the Virgin Earth Challenge, which asks innovators to develop a device that reduces greenhouse gases. Table 13, taken from Hallerstede and Bullinger (2010), provides an overview of each type of contest:

Table 13: Overview of Contest Types (Hallerstede and Bullinger, 2010)



The combination of numerous design elements creates the form of a platform. The overview of Hallerstede and Bullinger (2010) indicates that form follows function: the purpose influences the choice of design elements. If one purpose of the contest is marketing, the agent should provide community functionalities. The assumption for the analysis below is that form and function are closely linked and that the form of a platform shapes the behaviour of the crowd. To design a successful platform one needs knowledge of how the form influences the crowd's behaviour.

3.2 Impact of Accessibility of Peer Contributions on Contests


Accessibility of Peer Contributions describes to what extent the crowd can access the contributions of others and regulates by which means the crowd can interact or not (Geiger et al., 2011). The access to peer contributions determines the group structure. Some contests allow the crowd to build on each other's contributions; some do not. The brainstorming literature has a large number of studies on the group structure of nominal groups. In a nominal group, group members have no access to the contributions of their peers in the idea generation process. In other words, InnoCentive uses a large and ex ante unidentified nominal group to gather ideas.27, 28 The group structure determines how a crowd can use its resources to solve the problem and how individuals of the crowd can transform their capabilities into a contribution (c.f. Steiner, 1972).

27 Several scholars used the brainstorming literature for their work on online platforms. Ardaiz et al. (2009) use the brainstorming literature as a starting point for developing a web-based idea-generation tool. Dalal et al. (2011) draw upon the Delphi method, the nominal group technique (NGT) and crowdsourcing to propose a web-based elicitation system called ExpertLens. Muhdi et al. (2011) review the brainstorming literature, including the benefits and pitfalls of electronic brainstorming, open innovation and crowdsourcing, for their qualitative work identifying the phases an organisation goes through when using an intermediary such as Atizo. Kavadias and Sommer (2009) consider brainstorming and nominal groups as "multiagent searches for a solution to a problem" (Kavadias and Sommer, 2009: 1899). Kavadias and Sommer (2009) refer in their work to the experimental study of Terwiesch and Xu (2008) about innovation contests. Multiagent search means that multiple individuals of the crowd search for a solution to a problem.
28 Special thanks to Alexander Bliem, who drew my attention to the analogy between nominal groups and crowdsourcing. I am also grateful for his suggestions regarding literature.


In the introduction I raised the question of what role access to peer contributions plays in contests. I broke down this general question into four more specific research questions that are the basis for the analysis in this section: What is the relationship between the access to peer contributions and the marketing and strategic considerations of the agent? How does the access to peer contributions affect the motivation of the crowd? How does the access to peer contributions affect the quality of the best ideas? How does the access to peer contributions influence the cooperative orientation of the crowd, and how does this cooperative orientation influence the outcome of the contest? Note that the access to peer contributions is, in most cases, one factor among many that influence the outcome of contests. I did not consider factors such as the motivation of the crowd, the nature of the task and the crowd composition. The development of a full model is not the aim of this analysis.

3.2.1 Classification of Accessibility of Peer Contributions

Recall that Geiger et al. (2011) identified four different types of accessibility (None, View, Assess, Modify), which indicate to what extent the crowd can access the contributions of its peers. The four types are arranged in order of their degree of accessibility and paired with contest examples: 1) None: Members of the crowd cannot access the contributions of each other. For instance, InnoCentive does not allow users to see the contributions of their peers.

2) View: Users have solely visual access to the contributions of others. For example, public contests on 99designs allow the crowd to view the designs created by their peers. 3) Assess: The crowd evaluates contributions of others by commenting and/or voting. Threadless is a good example for Assess as it uses the crowd to filter out the best ideas. 4) Modify: Modify occurs if the crowd "(...) can alter or even delete each other's contributions in order to correct, update, or otherwise improve them. In general, this is the case when contributors come together to build something in a highly collaborative way. Examples include Wikis, e.g., Wikipedia, and similar endeavors such as OpenStreetMap or the Emporis Community." (Geiger et al., 2011: 7)

Typically, Modify does not occur within a contest, because the nature of a contest is the opposite of creating something collaboratively. There is one exception. If the contest allows teamwork, the crowd can modify each other's contributions within the team (e.g. InnovationExchange). Another contest that allows users to modify the inputs of each other is the MATLAB Programming Contest (i.e. MATLAB contest), which allows contestants to use and modify the code of other users over the course of the contest. But in the strict sense the MATLAB contest does not fall into the Modify category. The MATLAB contest is a multi-round contest. In the last round of the contest, each user can access the code of other users. He/she can take the code, modify it, and submit the altered code. The user takes the code from another user to build on it and improve his/her own submission, but does not modify the code of the other user to improve that user's submission. So each submission is independent of the others; thus the MATLAB contest belongs to the View category.

There is a threshold between None and View. None is clearly distinguishable from the other elements because the individuals of the crowd do not have any information about the contributions of their peers. In contrast, all propositions that concern View should also hold for Assess and Modify. This is because Assess and Modify automatically imply that the crowd can view the contributions of each other. But this only works one way between View, Assess and Modify. Assess and Modify have a higher degree of accessibility and thus more unique accessibility dimensions than View. Therefore all propositions made specifically for Assess are not applicable to View; the same holds for Modify in relation to types with a lower degree of accessibility. Figure 16 summarises how each degree of accessibility is related to the others:

Figure 16: Threshold Accessibility of Peer Contributions

How are the Accessibility of Peer Contributions and the aggregation and selection mechanisms connected? Accessibility of Peer Contributions specifies the group structure, in other words to what extent the crowd can access the contributions of others. Aggregation and selection mechanisms do not explicitly make this distinction. A contest may allow users to access the contributions of others, but the opposite can also be the case. Hierarchy as a selection mechanism is used both for contests that do not allow access to peer contributions and for contests that do. Nevertheless, accessibility of peer contributions is connected with the aggregation and selection mechanisms. Figure 17 provides an overview of how each degree of accessibility is connected to the aggregation and selection (governance) mechanisms:

Figure 17: Connection between Accessibility of Peer Contributions and Aggregation/Selection Mechanisms

I assigned each access type a symbol (left column). The columns on the right side list the aggregation and selection mechanisms. If an aggregation or selection mechanism is compatible with an access type, it is labelled with the appropriate symbol. It is noteworthy that the aggregation mechanisms are analysed based on Malone et al. (2009); the overlapping dimensions of aggregation are not considered. A contest and a collection are compatible with None, View, or Assess; collaboration only with Modify. InnovationExchange combines a contest (None on the contest level) with collaboration (Modify on the team level). Therefore the category Modify is included in the analysis below. The governance mechanisms hierarchy and standardisation are compatible with all types of peer contributions. In contrast, meritocracy relies on Assess or Modify.
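For readers who prefer the mapping of Figure 17 in a compact form, the following Python sketch restates the compatibilities named in this paragraph as a simple data structure. The dictionary keys and the helper function are illustrative; the markets governance mechanism is left out because its compatibility is not spelled out in the text.

# Compatibilities of aggregation and governance (selection) mechanisms
# with the four access types, as described in the paragraph above.
ACCESS_TYPES = {"None", "View", "Assess", "Modify"}

COMPATIBILITY = {
    # aggregation mechanisms (Malone et al., 2009)
    "contest":        {"None", "View", "Assess"},
    "collection":     {"None", "View", "Assess"},
    "collaboration":  {"Modify"},
    # governance (selection) mechanisms
    "hierarchy":       set(ACCESS_TYPES),
    "standardisation": set(ACCESS_TYPES),
    "meritocracy":     {"Assess", "Modify"},
}

def compatible(mechanism, access_type):
    """Return True if the mechanism can be combined with the given access type."""
    return access_type in COMPATIBILITY[mechanism]

# Example: a meritocracy needs the crowd to at least assess peer contributions.
print(compatible("meritocracy", "View"))    # False
print(compatible("meritocracy", "Assess"))  # True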


3.2.2 The Relation between Access to Peer Contributions and Marketing and Strategic Considerations of the Agent
None: Morgan and Wang (2010) argue that an intermediary such as InnoCentive serves a multi-sided market and therefore relies on the network effect. One solver has no value, but the more solvers there are, the more interesting it is for seekers to post a problem; and the more solvers offer their problem-solving skills on the platform, the more seekers are attracted. To attract both parties, the intermediary must consider their interests. That the crowd cannot access the contributions of each other is a crucial prerequisite for an intermediary such as InnoCentive that attempts to create an open innovation information market. First, no access prevents one user from stealing an idea from another. This protects the interest of the individual who wants to win the prize. Second, it is in the interest of the agent that a potential solution cannot be accessed by anyone except him/her.

Proposition 1: No access to peer contributions is one prerequisite to establish an open innovation market.

Assess: Raymond (1999) postulates that "given enough eyeballs all bugs are shallow" (1999: 8). Crowdsourcing builds on this principle, but it is less about detecting bugs and more a matter of filtering the contributions of the crowd. Howe (2008) argues that crowdsourcing is a volume business and that Sturgeon's Law applies: the majority of the contributions of the crowd are of low quality. It would take too much time for traditional gatekeepers to handle the filter process due to the large number of inputs. Consequently the crowd peer-reviews the contributions. Moffitt and Dover (2010) argue that if the agent wants to harness the collective wisdom and insights of the crowd, he/she should enable features such as rating, ranking, voting, polls, reviews and favourites. The literature supports the transaction cost analysis that suggests that a market (or meritocracy) is useful if the number of decisions is high.

Proposition 2: Assessment of peer contributions enables efficient filtering of numerous contributions.

Swirl's Smellfighters contest is a typical example of a community-based contest. 88% of the community-based ICs enable users to comment on the ideas of their peers (Hallerstede and Bullinger, 2010). To leverage the marketing outcome of a contest, agents often attempt to build a community around the contest. To support the formation of a community, an agent must enable community features such as allowing the crowd to assess the contributions of their peers.

Proposition 3: Assessing peer contributions has a positive impact on the formation of a community.

Ogawa and Piller (2006) highlight the fact that the crowd at Threadless selects the designs and at the same time individuals can express their purchase intent. Figure 18, a picture taken from Threadless.com, shows the rating interface. The box in the top right corner is used to collect information about the purchase intent:

Figure 18: Screenshot Threadless.com



The combined data (results of the rating process and the numbers of purchase intent) can be used by Threadless to determine how many T-shirts to produce. Threadless may discard winning designs because they do not fit into the current catalogue or because a design raises copyright issues (Ogawa and Piller, 2006). Another example is the platform of the Japanese firm Muji, which focuses on the production of household goods and food. To identify new ideas for products, Muji can rely on its online community with approximately 410,000 members who submit and rate ideas. Muji creates a professional design spec of the best ideas and calculates a sales price. If the customer pre-orders reach a certain threshold, Muji proceeds with manufacturing. Like Threadless, Muji does not consider all ideas that have the best rating but reserves the right to discard ideas due to technical constraints (Ogawa and Piller, 2006). Ogawa and Piller (2006) conclude that when implementing systems based on collective customer commitment, which gather ideas from users, the pre-selection should occur through evaluation by the crowd.

Proposition 4: Assessment of peer contributions is a prerequisite for establishing collective customer commitment systems.

Modify: If a contest is used for marketing purposes, the agent wants as many participants as possible. Limiting participation only to teams creates a higher entry barrier, because it takes more effort for each potential participant to form a team than to participate individually. This might reduce the number of potential participants. Notwithstanding, the team structure has one advantage from a marketing perspective. An individual searches the strong and weak ties (Granovetter, 1983) of his/her network to form a team and this increases the amount of conversation about a contest. Therefore one should allow the participation of individuals as well as teams.

Proposition 5: The participation of teams and individuals increases the amount of conversation about a contest.

3.2.3 Accessibility of Peer Contributions and Motivation of the Crowd


None: At InnoCentive, the crowd cannot access the ideas of others, but the crowd can see how many active solvers attempt to solve the same problem. InnoCentive does not provide information about how many have actually submitted a solution.29 Other open innovation intermediaries have a different policy regarding the information about the number of competitors. NineSigma and InnovationExchange do not issue any information about the number of competitors who attempt to solve the same problem. All three platforms are typical examples of innovation contests that do not allow access to the contributions of peers, but they have different policies regarding information about competitors. But does information about the number of competitors in a contest fit into the category Accessibility of Peer Contributions? No, because it does not provide any information about the contribution itself. But within the categories View, Assess and Modify, the crowd can easily figure out how many competitors are participating in the challenge and how many contributions they have made. Consider the case of InnovationExchange, where the crowd has no information about the contributions of other users and consequently no knowledge about the number of contributors. Examining the impact of having no knowledge about the number of contributors on the motivation of the crowd therefore also applies to a crowd having no access to the contributions of its peers. Boudreau et al. (2008, 2011) conducted a study about the TopCoder algorithm contest.

29 Although the number of submissions is not visible to the crowd, one can assume that not everyone who looks at the problem will actually submit a solution. Lakhani et al. (2007) find that on average, out of 240 solvers who inspected a problem statement, only ten submitted a solution.


In the TopCoder contest, the crowd can view the contributions of their peers. Boudreau et al. (2008, 2011) draw upon the economic literature and suggest that the more competitors there are in a contest, the less effort an individual contestant makes because he/she is less likely to win (competition effect). Yet, this effect is mitigated by the problem structure of complex problems. For complex problems, some users can take advantage of their capabilities and see themselves in a better position to find the best approach, which is called the problem structure effect (Boudreau et al., 2008). Boudreau et al. (2011) suggest that these effects can be applied to all innovation tournaments. But what if the crowd has neither access to the contributions of peers nor information about the number of competitors? In this case, the individual may face either a lot of competition or very little. In other words, uncertainty about the nature of the competition is high. Three possible effects could occur. First, due to the lack of visible cues about competitors, competition has no influence on the efforts of the crowd and the crowd is affected more by the problem structure effect. Second, the crowd could assume a high level of competition or, third, a low level of competition.

Proposition 6: If there are no cues about the number of competitors, the problem structure effect has a higher impact on the crowd.
Proposition 6a: If there are no cues about the number of competitors, members of the crowd assume a high level of competition, and thus participate with less effort.
Proposition 6b: If there are no cues about the number of competitors, members of the crowd assume a low level of competition, and thus participate with more effort.

View: Several scholars have investigated what drives the crowd to participate in crowdsourcing. Malone et al. (2010) proposed three different why-genes: the crowd participates due to love, money (a prize or the future prospect of getting a job) and/or glory. Wise et al. (2010) added two why-genes to the collective intelligence framework of Malone et al. (2010). Building on Rowley and Moldoveanu (2003), Wise

et al. (2010) introduced an interest-gene (e.g. soldiers participate in the Wikified army field guide because it is in their own vested interest that the field guides are up to date and also out of civic duty) as well as an identity/ideology gene (e.g. the crowd participates due to a sense of civic duty to improve government). Brabham (2012: 5) scanned the relevant literature for different motivations and provides the following list of motivators: the desire to earn money; to develop one's creative skills; to network with other creative professionals; to build a portfolio for future employment; to challenge oneself to solve a tough problem; to socialize and make friends; to pass the time when bored; to contribute to a large project of common interest; to share with others; and to have fun. Brabham (2012) concludes that the motivation to participate in crowdsourcing does not differ from other forms of participatory culture such as contributing to Wikipedia and Flickr, posting videos on YouTube, creating code for open source software or blogging. Leimeister et al. (2009) name four motives for participating in a competition: learning (access to knowledge), direct compensation (prize or career options), self-marketing (signalling) and social motivation (appreciation by peers and/or the organizer). Accessibility of peer contributions is one factor among many that determine whether such motives can be activated or not. Malone et al. (2010) include in the money-gene the desire to win the prize and signalling. This perspective is not useful for my purposes because the accessibility types

View, Assess, and Modify enable users to signal their skills to others, but this is not necessarily connected to winning the prize. Brabham (2012) makes this distinction, but some items of his motivation list could be merged. For instance, the motives to socialise and make friends and to share with others are what Leimeister et al. (2009) call social motives. Leimeister et al. (2009) also distinguish between direct compensation, i.e. prizes or job offers related to the contest, and self-marketing (signalling). Consequently I will use the framework of Leimeister et al. (2009) to suggest how the accessibility of peer contributions affects the motivation of the crowd to participate.

View enables the crowd to learn from the input of their peers.

Proposition 7: Enabling viewing of peer contributions increases the chance to activate members of the crowd who are motivated to learn.

View enables each member of the crowd to present themselves if they choose to do so. This enables individuals to signal their skills and knowledge. For instance, many design contest platforms have created a feature that allows the crowd to build a digital portfolio of all contest entries.

Proposition 8: Enabling viewing of peer contributions increases the chance to activate members of the crowd who have the motive to signal their abilities to others.

View makes it possible for the crowd to gain appreciation from their peers and the agent. For instance, one key driver for participating in Threadless is recognition from the design community (Howe, 2008; Brabham, 2010). Füller (2010) identifies recognition/visibility as one motive and demonstrates that "(...) motives such as showing ideas, positively affect consumers' further interest in co-creation projects, the extent of participation, as well as time spent" (Füller, 2010: 113-114).

Proposition 9: Enabling viewing of peer contributions increases the chance to activate members of the crowd who are driven by social motives.

View may also have a positive effect on the productivity of the crowd. Social loafing exists in asynchronous communities (Preece and Maloney-Krichmar, 2003; Fang et al., 2011). Social comparison, enabled through signalling (e.g. reputation systems and visibility of contributions), can reduce social loafing (Fang et al., 2011). Huberman (2009) demonstrates in an empirical study that attention (measured by the number of downloads on YouTube) has a positive impact on the productivity of users who upload videos. In other words, users who are successful and get a lot of attention for their work (i.e. number of downloads, comments or votes) have a higher level of productivity. In contrast, Archak (2010) did not find that rating is addictive, which means that members at TopCoder who achieved a high ranking did not increase their efforts to keep their high status.

Proposition 10a: Viewing peer contributions has a positive impact on the productivity of moderate to very successful users.
Proposition 10b: Viewing peer contributions has no impact on the productivity of moderate to very successful users.

3.2.4 Accessibility of Peer Contributions and Quality of the Best Idea


What constitutes the quality of the best idea needs to be defined. Girotra et al. (2010: 596) state that four factors govern the quality of the best idea: the number of ideas generated by the group, the average quality of the underlying quality distribution, the variance of the underlying quality distribution, and the ability of the group to select the best idea. The interplay of these four factors is illustrated by the short simulation sketch below.
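To make these factors concrete, the following minimal Python sketch simulates many idea-generation rounds under illustrative assumptions of my own (idea quality drawn from a normal distribution; a group that is able to select the best idea picks the maximum, otherwise a random idea). The function name and all numbers are illustrative and are not taken from Girotra et al. (2010).

# Minimal simulation sketch of the four factors named above (illustrative
# assumptions: idea quality ~ Normal(mean, sd); perfect vs. random selection).
import random, statistics

def best_idea_quality(n_ideas, mean, sd, can_select_best, trials=5000):
    """Average quality of the chosen idea over many simulated contests."""
    chosen = []
    for _ in range(trials):
        ideas = [random.gauss(mean, sd) for _ in range(n_ideas)]
        # a group able to select the best idea picks max(); otherwise random
        chosen.append(max(ideas) if can_select_best else random.choice(ideas))
    return statistics.mean(chosen)

print(best_idea_quality(10, 5.0, 1.0, True))    # baseline
print(best_idea_quality(50, 5.0, 1.0, True))    # more ideas -> better best idea
print(best_idea_quality(10, 5.0, 2.0, True))    # higher variance -> better best idea
print(best_idea_quality(50, 5.0, 2.0, False))   # poor selection wastes both effects

Increasing either the number of ideas or the variance of the quality distribution raises the expected quality of the best available idea, but only if the group is actually able to select it.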
Moreover, which criteria constitute the best idea depends on the purpose the idea should serve. Several scholars have developed criteria to judge the best idea. Girotra et al. (2010) measured business value and purchase intent, which includes a multidimensional analysis of five different metrics (technical feasibility, novelty, specificity, demand and overall value). Poetz and Schreier (2009) used the dimensions novelty, customer benefit and feasibility. Rietzschel et al. (2010) used criteria such as originality, feasibility and effectiveness. The main criterion for Bullinger et al. (2011) was innovativeness, which is determined by novelty and usefulness (Amabile, 1996). A set of three basic criteria can be distilled from this review: 1) Novelty and originality of an idea. This criterion is important if the agent is looking to explore new ideas. 2) Usefulness, effectiveness and customer benefit measure to what extent a potential idea solves a problem. 3) Feasibility describes whether the idea is compatible with the existing operations of the agent. High feasibility eases the integration of the idea into the existing operations of the agent. There may be a trade-off between the different dimensions. For instance, Rietzschel et al. (2010) suggest that there is an originality-effectiveness trade-off in the selection of ideas. None: Kristensson et al. (2004) conducted a lab study with three different groups (ordinary users, advanced users and professionals) who had to create ideas about service improvements for mobile services. Participants of each group submitted their ideas individually and did not have access to the contributions of their peers. Different groups of judges rated the ideas individually and the judges were blind to the source. The results of the study demonstrate that the ideas of ordinary users were more original and valuable than the ideas of professional developers and advanced users. Unfortunately, the research design has two weak points. Kristensson et al. (2004) report that the assessment was carried out ten months after the idea generation and that the laboratory setting may have had a negative influence on the intrinsic motivation of the participants. Building on Kristensson et al. (2004), Poetz and Schreier (2009) conducted an empirical study whose purpose was to make a real-world comparison of ideas generated by professionals with ideas generated by the crowd. Poetz and Schreier (2009) found a consumer goods firm that was willing to gather ideas from professionals within the firm and from the crowd simultaneously. Both groups had to solve the problem of how to make the process of feeding babies with mash and solid food more convenient for the baby and the parents. Professionals created 51 ideas internally. The crowd could submit their ideas at the company website using an online form and therefore had no access to peer contributions. The crowd submitted 70 ideas, but 18 did not meet basic evaluation criteria and were discarded. A team (the CEO and the head of the R&D department) rated all 103 ideas on three dimensions: novelty, customer benefit and feasibility. The raters received the ideas in random order and were blind to their originators. First they rated individually, and in a second round they could discuss the ratings. Poetz and Schreier (2009) found that user ideas outperformed those of the professionals in terms of novelty and customer benefit but were ranked lower in terms of feasibility. Yet the overall scores for feasibility were high, which means that user ideas with low feasibility could have been improved in a product development process. Poetz and Schreier (2009) conclude that future research should investigate the comparison between users and professionals considering contingency factors such as user abilities, motivation and the design of the crowdsourcing platform. Summing up, two studies suggest that a crowd in a nominal group structure is able to create ideas that rank high in terms of originality. Building on the conclusion of Poetz and Schreier (2009), no access to peer contributions as a contingency factor (design of the crowdsourcing platform) may have a positive effect on the originality of the ideas created by the crowd. Proposition 11: No access to the contributions of peers has a positive effect on the originality of ideas created by the crowd.

View: Viewing the ideas of others may lead to effects similar to topic fixation. Topic fixation, a drawback of interactive groups, means that the "first person speaking up therefore influences the path everybody takes" and groups may focus longer on previously mentioned ideas (Kavadias and Sommer, 2009: 1904). In the case of public design contests, the great designs of some users may influence other designers to create a variation of that design. In other words, excellent ideas (or the first idea) of some users may influence the path of other users. DesignCrowd provides anecdotal evidence for topic fixation in crowdsourcing. In contrast to other design platforms, DesignCrowd does not allow the contestants of a design contest to have access to each other's contributions, in order to prevent overly similar designs due to "group think and copying" (Pelzer, 2011). Ardaiz et al. (2009) argue that users participating in electronic brainstorming systems (EBS) suffer from cognitive interference in larger groups due to the participants' need to pay attention to the ideas of others. The conclusion is that having the option to View causes the crowd to pay more attention to the ideas of others. Consequently, users might follow the path of other users instead of developing their own ideas, which in turn reduces the overall variance of ideas. Proposition 12: Viewing of peer contributions reduces the variance of ideas created by the crowd. The MATLAB contest features a gradual transition from None to View.30 In the darkness phase one cannot access the contributions of one's peers, in the twilight phase one sees the scores of one's peers, and in the daylight phase all code can be copied and tweaked. The time limit in combination with instantaneous user ranking leads to an exciting contest experience as well as to greater efforts by the crowd to win the contest.
30 The MATLAB contest also had a forum to discuss strategies and ask questions. But participants were not able to comment directly on the submissions of other users. The forum is community functionality and not a means for explicitly rating or commenting on other contributions (Geiger et al., 2011: 7).

Boudreau et al. (2008) note about the weekly TopCoder algorithm contest (duration: 75 minutes) that the "near-instantaneous scoring of performance and public ranking causes participants to try their best at the various problems" (2008: 15). Howe (2008) argues that the MATLAB contest is a very successful crowdsourcing application because it combines a diverse crowd with transparency of submissions and thus the possibility to exchange (or steal) code from other users. Over numerous iterations the code is reused, and the result of this evolutionary process is improved code. Users can take advantage of the diverse approaches to the problem and can build on each other's ideas without much in the way of coordination costs. Proposition 13: Multi-phase contests that enable use of each other's source code and instantaneously rank the performance of the submitted source code lead to a higher average quality of submissions. Assess: Huang et al. (2011) found that ideas of the crowd at Dell's Ideastorm31 are often not feasible, because participants underestimate the cost of implementing their ideas and at the same time overestimate their potential. Initially, ideas also enter an overcrowded idea marketplace; as a result, the likelihood that an idea is implemented is low. Individuals learn from feedback in the form of peer voting and the responses of Dell, which results in a higher chance that those individuals contribute a high-potential idea. Nevertheless, the learning effect on feasibility (cost structure) remains low. Users who are discontented because their ideas are not well received drop out of the platform. High-potential users stay on the platform, which increases the average potential of ideas (although the overall number of ideas decreases) over time (Huang et al., 2011).

31 The aggregation mechanisms of Dell's Ideastorm are a combination of collection and contest. All ideas are collected on the platform. But Dell uses solely the best ideas to improve their products and services.

Huang et al. (2011) suggest that, to improve the filtering process, a firm should provide users with information about the cost structure. Proposition 14: Feedback in the form of voting and commenting by the agent increases the average quality of ideas contributed by the crowd. The concept of collective intelligence suggests that additive aggregation in the form of averaging or prediction markets mitigates biases such as "pattern obsession (to see patterns when none are present)" or "framing (influence by presentation of solution)" (Bonabeau, 2009: 46-47). Rietzschel et al. (2010) carried out a study that examined whether a nominal group structure is more productive and able to select the most creative idea. The results showed that participants had a preference for selecting feasible and desirable ideas. While instructing the participants to select a creative idea improved their capability to select more original ideas, the participants were less satisfied and the selected ideas were less effective. The findings of this article are based on two studies. A critical look at the methodological design of the two studies calls for a reinterpretation of these findings. In both studies, only one trained rater judged the ideas. Although the trained rater had domain knowledge and the agreement between the judgment of the rater and the group was high, the approach of one person judging all ideas seems inferior in comparison to other studies (c.f. Girotra et al., 2010) that used multiple raters or large panels. In Study I, ideas were created in an interactive group, and although not clearly stated in the text, one can assume that each participant individually selected the ideas. In Study II, participants were presented with a pre-generated set of ideas. In other words, the participants who rated the ideas were separated from the idea generation process. To rate the ideas, the participants used a software program. This is analogous to crowdsourcing: many platforms use the crowd to select ideas, and the crowd is separated from the idea generation process. Admittedly, an individual can vote for his/her own ideas and also recruit friends for support, but the effect is weak if the crowd is large and diverse. Proposition 15: Under certain conditions, assessment through the crowd leads to selection of original ideas. Modify: Some contests allow the formation of teams. Organisational behaviour literature such as Manz and Neck (1995) analyses the differences between team think (1+1+1=5) and group think (1+1+1=2). The literature on offline groups cannot be applied to online groups without considering the different conditions of communication (c.f. Choi et al., 2010). One needs to pay attention to whether a group in a crowdsourcing effort collaborates offline and submits its ideas online on the platform, or whether the interaction of the team takes place in a virtual realm. In the latter case, text-based asynchronous communication allows multiple themes and persons to speak at once. Emotions can only be articulated via emoticons. In this environment, social presence is low compared to face-to-face communication (Preece and Maloney-Krichmar, 2003; Hebrank, 2009). Powell et al. (2004) summarise research on virtual teams and focus, amongst other things, on socio-emotional processes (relationship building, cohesiveness and trust). It takes longer for virtual groups to attain a high level of cohesiveness. Thus, teams that collaborate only online in a short-term contest are unlikely to reach a higher performance. Proposition 16: In short-term contests, "offline" teams yield a higher performance than virtual teams.

3.2.5 Access to Peer Contributions and the Cooperative Orientation of the Crowd
The cooperative orientation describes to what extent a user is cooperative or competitive toward his/her peers. If a user has a low degree of cooperative orientation, he/she is competitive. If a user has a high degree of cooperative orientation, he/she is cooperative (Bullinger et al., 2011).

None: No access to peer contributions implies competitive behaviour, as no sharing of ideas is possible.32 For contests within the organisation, competitive behaviour has drawbacks. Morgan and Wang (2010) argue that a contest within the firm reduces the incentives for employees to cooperate and share ideas with others. Because the performance of employees is interdependent (i.e. if the firm is not innovative, it might not survive), such a contest is not useful. Cooperation transfers knowledge between employees. Schumpeter (1983) identified five types of innovation33 and argues that new combinations require that entrepreneurs use the existing productive means within an economic system in a different way. To put it differently: the entrepreneur remixes the combination like a DJ who mixes existing songs to create an entirely new song (which is called a remix). Ward (2004) calls the remix of existing things conceptual combination, which is one explanation of how new ideas can emerge. Transferring the remix metaphor to the organisation would imply that if employees do not share their ideas and do not cooperate with each other, each employee has less material/inspiration to remix. As a result, the innovativeness of the firm would decrease. Benbya and Van Alstyne (2010) also observe that innovation contests within the firm need to balance collaboration and competition. Consequently, InnoCentive@work offers collaboration tools (Benbya and Van Alstyne, 2010). Proposition 17: Contests within an organisation with no access to peer contributions have a negative impact on the innovativeness of a firm. View: Hutter et al. (2011) carried out a study whose main goal was to examine different user types in a contest community.
32 Some platforms may enable features such as user profiles and direct messaging although the crowd does not have access to each other's contributions. These elements could enable cooperative behaviour.
33 The five types of innovation are the introduction of a new good, the introduction of a new method of production, the opening of a new market, the conquest of a new source of supply of raw materials or half-manufactured goods and the implementation of the new organisation of any industry (Schumpeter, 1983: 66).
The data was taken from the multi-round OSRAM LED Design contest, which enabled users to assess the contributions of their peers. Hutter et al. (2011) identified four different user types: 1) Co-operators post few or no ideas, but make many cooperative comments. Co-operators facilitate the information transfer and knowledge sharing processes in the online community, which are "the key prerequisite for further improvements and collaborative innovation" (2011: 13). 2) Competitors submit many ideas or very attractive ideas and make few, mostly competitive comments (e.g. criticising others to discourage them). 3) Communititors combine cooperative and competitive behaviour. This user type posts many ideas or a few very attractive ones and makes many cooperative comments. 4) Observers post few or no ideas and make few or no comments. Although observers do not help the community directly, their presence contributes to the critical mass of the community needed to attract and sustain the interest of other users. The MATLAB contest also provides anecdotal evidence that these types of behaviour occur. As a participant argues in an interview with a trade press blog, the daylight phase creates an environment where both cooperation and competition can emerge. The participant mentions another participant who did well in the first two phases by submitting code accompanied by a text that explains the details of his submission. Other participants built on his code. But in the end, the willingness to cooperate seems to diminish: "so, close to the end game, you tend to keep your final ideas with you, and just test it in the final minutes. The problem is that most of the competitors are doing exactly the same thing, which makes it exciting and unpredictable" (Wong and Fioravanti, 2011).

Proposition 18: Viewing peer contributions creates an environment for cooperative, competitive, co-opetitive34 and passive behaviour. The study of Hutter et al. (2011) indicates that communititive (co-opetitive) behaviour increases the quality of ideas (through sharing ideas and getting feedback); it increases the chances of winning because one benefits from the inputs of other users and gains attention for oneself (and one's ideas) through actively interacting with peers. Bullinger et al. (2011) investigated the relationship between the degree of cooperative orientation and innovativeness. The data was obtained from a contest at a university that enabled assessment through the crowd (voting and commenting). The results show that either teams with low cooperative behaviour or teams with high cooperative behaviour develop innovative (new and useful) ideas. The rationale is that teams with low cooperative behaviour focus their energy on their own work and consider interaction with peers a waste of precious resources within the contest period. In contrast, teams with high cooperative behaviour use boundary spanning activities to transfer knowledge from outside into the group and use these new perspectives to improve their existing ideas. Boundary spanning can be divided into proactive boundary spanning activities (actively commenting on and looking at each other's ideas) and reactive boundary spanning (responding to feedback) (Bullinger et al., 2011). Figure 19, an extract taken from Bullinger et al. (2011), shows the quadratic relationship between innovativeness and cooperative orientation:

34 Co-opetitive is derived from Co-opetition, a term that describes cooperative competition.

Figure 19: Correlation between Innovativeness and Cooperative Orientation (Bullinger et al., 2011)

The results of both studies are conflicting. Bullinger et al. (2011) suggest that either competitive or cooperative behaviour leads to innovativeness of ideas. Constructing a diagram from the results of Hutter et al. (2011), we would see the following relationship between user types and the potential for successful innovation outcomes (Figure 20):

Figure 20: Relationship between User Types and Successful Innovation Outcomes (based on Hutter et al., 2011)

Up to now I have argued that access to peer contributions affects the outcomes of a contest. I have also indicated that in most cases access to peer contributions is only one factor among many others. Previously, I suggested a direct relationship between the group structure (accessibility of peer contributions) and the quality of ideas. The two studies suggest that different user behaviours influence the quality of ideas. Access to peer contributions (View, Assess, Modify) creates the environment for such behaviours, and these different behaviours have an impact on the quality of ideas. Assuming this indirect relationship, the question that remains is what kind of behaviour improves innovativeness (Bullinger et al., 2011) or the potential for successful innovation outcomes (Hutter et al., 2011). A brief review of both studies shows that it is too early to provide a satisfying answer to this question. Hutter et al. (2011: 6) put forward three propositions: 1) In contests, competitive as well as co-operative elements can be observed simultaneously. 2) Based on the adoption of either competitive, co-operative or co-opetitive behaviour, different user contributions can be found in a contest community. 3) A combination of active competition to win with simultaneous collaboration yields the highest potential for successful innovation outcomes in online idea and design contests. The results of Hutter et al. (2011) support propositions 1 and 2. The support for proposition 3 is rather weak. Hutter et al. (2011) mention that the finding that a communititive strategy leads to a higher innovation potential is a preliminary result and is not generalisable. Future studies should include control variables such as motivation, time, or previous skills. Bullinger et al. (2011) statistically established the correlation between the degree of cooperative orientation and innovativeness.
They also conclude that future research should include additional factors, and that a comparison of the findings from the contest in their study with other innovation contests would improve the generalisability of their findings. Furthermore, the two contests differ in several design elements. Table 14 shows the design elements of each contest:
Design element (Bullinger & Moeslein, 2010) | Study Bullinger et al. 2011 (University contest) | Study Hutter et al. 2011 (LED Design contest)
Media | Mixed (teams offline; submitting, voting and commenting online) | Online
Organizer | University | Firm (OSRAM)
Task/topic specificity | Defined | Defined (but more open than Bullinger et al.)
Degree of elaboration | Concept (visualisation of information systems termini) | Ranged from sketch to concept
Target group | Specified (students) | Unspecified (anyone who has ideas)
Participation as | Team | Individual
Contest period | Long term (8 weeks) | Long term (12 weeks)
Reward/motivation | Mixed (non-monetary and a reward; there is no information whether the reward is monetary or quasi-monetary) | Mixed (monetary and non-monetary)
Community functionality | Given (Assess) | Given (Assess)
Evaluation | Mixed (jury evaluation by experts and peer review through casting votes) | Mixed (peer review through casting votes, which was the basis for jury evaluation by experts)
Further information about the contest | - | Multi-round contest

Table 14: Comparison of Design Elements of Studies on Cooperative Orientation of the Crowd

In contrast to Hutter et al. (2011), the study of Bullinger et al. (2011) was staged at a university. The LED Design contest focused on the participation of individuals, whereas the contest at the university only allowed teams. The question is whether the form of participation (individual vs. team) has an influence on the outcome. The LED Design contest was a multi-round contest as opposed to the single-round university contest. Multi-round contests ensure that weaker contestants are eliminated early to save the prizes for the stronger participants (Morgan and Wang, 2010). Based on this review I put forward two propositions for further research. Proposition 19a: Given the same contingency factors, communititive behaviour has a positive impact on the quality of ideas. Proposition 19b: Given the same contingency factors, communititive behaviour has a negative impact on the quality of ideas.

3.3 Discussion
Table 15 provides an overview of the propositions:
None
- Marketing & strategic considerations: Proposition 1: No access to peer contributions is one prerequisite to establishing an open innovation market.
- Motivation: Proposition 6: If there are no cues about the number of competitors, the problem structure effect has a higher impact on the crowd. Proposition 6a: If there are no cues about the number of competitors, members of the crowd assume a high level of competition, and thus participate with less effort. Proposition 6b: If there are no cues about the number of competitors, members of the crowd assume a low level of competition, and thus participate with more effort.
- Quality of the best idea: Proposition 11: No access to the contributions of peers has a positive effect on the originality of ideas created by the crowd.
- Cooperative orientation: Proposition 17: Contests within an organisation with no access to peer contributions have a negative impact on the innovativeness of a firm.

View
- Motivation: Proposition 7: To enable viewing of peer contributions increases the chance to activate members of the crowd who have the motive to learn. Proposition 8: To enable viewing of peer contributions increases the chance to activate members of the crowd who have the motive to signal their abilities to others. Proposition 9: To enable viewing of peer contributions increases the chance to activate members of the crowd who are driven by social motives. Proposition 10a: Viewing peer contributions has a positive impact on the productivity of moderate to very successful users. Proposition 10b: Viewing peer contributions has no impact on the productivity of moderate to very successful users.
- Quality of the best idea: Proposition 12: Viewing peer contributions reduces the variance of ideas created by the crowd. Proposition 13: Multi-phase contests that enable using each other's source code and instantaneously rank the performance of the submitted source code lead to a higher average quality of submissions.
- Cooperative orientation: Proposition 18: Viewing peer contributions creates an environment for cooperative, competitive, co-opetitive and passive behaviour. Proposition 19a: Given the same contingency factors, communititive behaviour has a positive impact on the quality of ideas. Proposition 19b: Given the same contingency factors, communititive behaviour has a negative impact on the quality of ideas.

Assess
- Marketing & strategic considerations: Proposition 2: Assessment of peer contributions enables efficient filtering of numerous contributions. Proposition 3: Assessing peer contributions has a positive impact on the formation of a community. Proposition 4: Assessment of peer contributions is a prerequisite for establishing collective customer commitment systems.
- Quality of the best idea: Proposition 14: Feedback in the form of voting and commenting by the agent increases the average quality of ideas contributed by the crowd. Proposition 15: Under certain conditions, assessment through the crowd leads to selection of original ideas.

Modify
- Marketing & strategic considerations: Proposition 5: The participation of teams and individuals increases the amount of conversation about a contest.
- Quality of the best idea: Proposition 16: In short-term contests, "offline" teams yield a higher performance than virtual teams.

Table 15: Overview of Propositions on the Impact of Accessibility of Peer Contributions

It was necessary to develop variations of Proposition 6 (the effect of no cues about the number of competitors) and Proposition 10 (the impact of visibility on productivity), because it is not clear under what conditions which effect would occur. How the absence of cues about the number of competitors affects a contestant could depend on the type of personality. Regarding the effect of visibility on the productivity of moderate to very successful users, the design of the platform could be an essential factor (i.e. TopCoder is based on competition; in contrast, the competitive element on YouTube is rather low). Overall, the literature review suggests that the crowd is able to create original ideas. Proposition 11 states that no access to peer contributions has a positive effect on originality. At the same time, the review suggests that the crowd does not do too well in terms of feasibility (c.f. Huang et al., 2011; Poetz and Schreier, 2009). Consequently, an agent might use the crowd to generate original ideas while a group of experts with domain knowledge ensures that the selected ideas are feasible. Proposition 11 builds on the fact that the crowd who evaluates the input is at the same time separated from the decision-making. Girotra et al. (2010) and Rietzschel et al. (2006) indicate that separation between the creation and selection process has a positive impact on the ability to select the best ideas. This raises the question of how different group structures can be combined to yield the best outcome. I shall discuss this topic by examining the notion of hybrids. Proposition 13 highlights the success of the MATLAB contest model, which enables building on each other's ideas without incurring large coordination costs. This is in sharp contrast to collaborative platforms such as Wikipedia, which relies on the crowd to modify the articles of others. Not only does Wikipedia suffer from group think (Bonabeau, 2009), but Wikipedians also spend a lot of time on coordination and conflict resolution; these communication costs are needed to sustain and grow the system (Suh et al., 2007). In the last phase of the MATLAB contest, the code becomes public. A user who tweaks the code of another user can win.
The other user, who was the pioneer of the tweaked idea, wins nothing. Certainly, users know the rules of the contest in advance. Users may participate in the MATLAB contest out of a desire to win the prize, but it is more likely that social motives (appreciation by peers) and signalling are key drivers. The advantage of a contest that asks the crowd to solve a programming problem is that instantaneous ranking is possible and that it is very likely to find programmers who want to match their skills against others. Gulley (2006), who works for the firm that stages the MATLAB contest, mentions that programming contests are part of the geek culture. My assumption is that individuals who are socialised in the geek culture are more used to such contests than individuals from other domains. This raises the question of how the MATLAB model could be adapted for different problem domains.

3.3.1 Reward Structure

Drawing upon the discussion about the winner-take-all model in contests (see p. 74), I suggest that transferring the MATLAB contest model to other domains requires a different reward structure. One wonders if there is another way to distribute the surplus of the MATLAB contest. After all, the creation of the best code depends on stealing code from others. The approach of the winners of the DARPA Balloon Challenge could serve as an inspiration to adapt the current reward system of the MATLAB contest. In the DARPA Balloon Challenge, teams had to identify the location of ten red weather balloons scattered across the USA. The goal of this challenge was to explore the capabilities of the Internet and social networking to solve a "distributed, time-critical, geo-location problem" (Tang et al., 2011: 78). Tang et al. (2011) explain that the approach of the winning MIT team was built on Dodds et al., who explain that "success in using social networks to tackle widely distributed search problems depends on individual incentives" (Dodds et al. cited in Tang et al., 2011: 80). Thus, the MIT team built a recursive incentive structure. Each person who finds a balloon receives $ 2000, but the person who recruited the finder and the person who recruited the recruiter of the finder also receive money. The advantage over a direct reward: people have an incentive to recruit members, and people all over the world (who might know someone who knows someone who knows the location) can be recruited easily (Tang et al., 2011). Figure 21, an extract taken from Tang et al. (2011), visualises the recursive incentive structure of the MIT team:

Figure 21: Recursive Incentive Structure (Tang et al., 2011)

The MATLAB contest model could be improved by integrating a recursive incentive structure in the form of a reward tree system. This system ensures that the winning solution receives the highest cash prize, but at the same time the users who contributed to the winning solution are also rewarded.
An example: Person A wins the contest and receives $ 2000. She tweaked the idea of Person B, who receives $ 1000. Person C inspired B, thus C receives $ 500. The single cash prize for the winner in a recursive tree reward system would presumably be lower compared to a direct reward. To determine which ideas the winning solution builds on, the agent could use a combination of multiple methods: social network analysis (which ideas a user looked at, and for how long), a user questionnaire and a jury panel of experts with domain knowledge.35 Assuming that an instantaneous ranking of ideas is not possible, the contest should have multiple rounds like the Netflix Prize. The Netflix Prize had the goal of improving the code of Netflix's existing movie recommendation system. The contest lasted several years; the final prize was $ 1M, while each year a progress prize of $ 50,000 was awarded. Akin to the MATLAB contest, participants shared their ideas and approaches with others (c.f. Howe, 2008). My suggestion attempts to improve the reward system of the MATLAB contest with the purpose of transferring this model to other problem domains. Although more users would be rewarded for their efforts, the tree reward system is still based on the winner-take-all logic.
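To make the arithmetic of the proposed reward tree explicit, the following minimal Python sketch computes the payouts along a chain of contributions. It mirrors the halving logic of the MIT team's recursive incentives described above; the function, the halving rule and all names and amounts are my own illustrative assumptions and are not part of the MATLAB contest or of Tang et al. (2011).

# Minimal sketch of the proposed reward tree (illustrative assumptions:
# each contribution records which earlier contribution it builds on, and
# every "ancestor" in that chain receives half of its successor's payout).

def reward_tree(winner, builds_on, top_prize=2000, decay=0.5, floor=100):
    """Return a dict mapping contributor -> payout.

    winner    -- author of the winning contribution
    builds_on -- dict mapping an author to the author she built on (or None)
    top_prize -- cash prize for the winner
    decay     -- share passed one step up the chain
    floor     -- stop paying once a share drops below this amount
    """
    payouts = {}
    person, prize = winner, float(top_prize)
    while person is not None and prize >= floor:
        payouts[person] = payouts.get(person, 0) + prize
        person = builds_on.get(person)  # who inspired this contribution?
        prize *= decay                  # each ancestor receives half as much
    return payouts

# Worked example from the text: A tweaked B's idea, B was inspired by C.
chain = {"A": "B", "B": "C", "C": None}
print(reward_tree("A", chain))  # {'A': 2000.0, 'B': 1000.0, 'C': 500.0}

In a real contest the chain would be reconstructed with the methods named above (social network analysis, a questionnaire and an expert jury) rather than assumed.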

3.3.2 Hybrids36

The brainstorming literature distinguishes between an interactive group and a nominal group. Recall that an interactive group can build on each other's ideas. In a nominal group, group members create ideas individually. Nominal groups are more productive than interactive groups, because they create more ideas (Diehl and Stroebe, 1987). The underlying assumption is that the more ideas the group creates, the greater the chances of finding an excellent one.

35 Another way to determine which idea someone builds on would be to ask the crowd to do so during the idea generation process. This is the case at the OpenIdeo platform: users can submit ideas via a form. In the last section of this form the user can state whether someone else's idea inspired his/her idea. The user gets rewarded in the form of points, which increase the user's status in the community.
36 The discussion in this chapter is an extended version of Gegenhuber & Hrelja (2012). Although I use similar arguments, I add new and different thoughts.

Girotra et al. (2010) argue that this perspective is too narrow. Rietzschel et al. (2006) also suggest that high productivity alone does not explain the quality of the best ideas. A group in an innovation context must be able to select the best idea. In their experimental study, Rietzschel et al. (2006) compared the productivity (number of ideas, originality, feasibility) of nominal groups with that of interactive groups. Nominal groups in this study created and selected the ideas individually. The results show that nominal groups created more ideas than interactive groups and that the ideas of nominal groups were more original. In contrast, the ideas of interactive groups were more feasible. The capability of individuals to judge their own ideas is limited (Rietzschel et al., 2006). Therefore Rietzschel et al. (2006) suggest that a hybrid group structure might be more successful in the tasks of idea generation and selection: "It is possible that a combination of nominal and interactive idea generation and selection would yield optimal results on both tasks" (2006: 250-251). To shed more light on hybrids I will review the nominal group technique (Delbecq et al., 1975) and an empirical study on hybrids (Girotra et al., 2010). Nominal group technique (NGT): The NGT uses a combination of nominal and interactive group structures (Delbecq et al., 1975). The process is as follows (Delbecq et al., 1975: 8): 1) Silent generation of ideas in writing. 2) Round-robin feedback from group members to record each idea in a terse phrase on a flip chart. 3) Discussion of each recorded idea for clarification and evaluation. 4) Individual voting on priority ideas, with the group decision being mathematically derived through rank-ordering or rating (a minimal sketch of this aggregation step is shown below). The silent generation of ideas occurs in a nominal group structure. Members sit in the room but do not interact with each other.
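The following short Python sketch illustrates one way such a rank-order aggregation can be derived mathematically; the summing-of-ranks rule and the example ideas are my own illustration and are not prescribed by Delbecq et al. (1975).

# Minimal sketch of NGT step 4 (illustrative): each member privately
# rank-orders the ideas and the group decision is derived by summing the
# rank positions (a lower total means a higher group priority).
from collections import defaultdict

def aggregate_rankings(rankings):
    """rankings: one list per member, ordered from most to least preferred."""
    totals = defaultdict(int)
    for ranking in rankings:
        for position, idea in enumerate(ranking, start=1):
            totals[idea] += position
    return sorted(totals, key=totals.get)  # group priority order

member_rankings = [
    ["idea B", "idea A", "idea C"],
    ["idea A", "idea B", "idea C"],
    ["idea B", "idea C", "idea A"],
]
print(aggregate_rankings(member_rankings))  # ['idea B', 'idea A', 'idea C']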

NGT uses the nominal group structure for idea generation, an interactive group structure to screen the ideas, and again a nominal group structure to select a solution. In the selection process, NGT aggregates the opinions of the group members using methods such as rank-ordering or rating. Delbecq et al. (1975) posit that heterogeneous group members must "pool their judgments to invent or discover a satisfactory course of action" (1975: 5). Mair and Hoffmann (1964) suggest that one type of group process should be used to generate information and another type to reach a solution, in order to reduce the ambiguity of the group about differences in decision-making phases (Mair and Hoffmann, 1964 cited in Delbecq et al., 1975: 9). Nominal groups produce more ideas than interactive groups (Van de Ven, 1974). But overall, the empirical support for the NGT is not uniformly favourable (Bartunek and Murninghan, 1984); one drawback is that no additional ideas can be added after the nominal group session. Hybrid: In a laboratory study, Girotra et al. (2010) compared the performance of interactive groups with a hybrid group. The hybrid group is "a way of effectively combining the merits of individual and team approaches" (2010: 591). In the hybrid group, participants had to individually create their own ideas for ten minutes. After the ten minutes had passed, individuals had to rank their own ideas. After this phase, participants had another 20 minutes to share and discuss their ideas in an interactive group setting. After that, the group of four people was given five minutes to select their ideas based on consensus and rank the five best ideas of their group.37
37 The fact that groups had to agree to select the five best ideas seems to be a minor drawback in this study. Delbecq et al. (1975) cast doubt on this method. Ideas may be perceived as more important than they actually are. A ranking (e.g. one to ten-point scale) provides a higher informational value.

In the team treatment, the group had 30 minutes to create the ideas and five minutes to rank them based on a consensus-based system. A team of 41 MBA students measured the performance of the ideas by rating the business value of each idea on a ten-point scale. Girotra et al. (2010) assembled a group of 88 potential users to record their purchase intent for each idea. Furthermore, two graduate students were recruited to rate, on a ten-point scale, five different metrics (technical feasibility, novelty, specificity, demand and overall value). None of the three groups was involved in the generation process, and the two large panels used a web-based tool for rating. Compared to Rietzschel et al. (2006, 2010), who used a single rater, using three different sources to measure different types of criteria seems to be a more accurate method. The result: hybrid groups created more and better ideas. Building on each other's ideas in the interactive groups did not lead to better ideas, due to the countervailing effect of production blocking. The study also demonstrates that even hybrids lack the ability to select the best ideas. Girotra et al. (2010) conclude that "irrespective of group structure, the ability of idea generators to evaluate their own ideas is extremely limited, and is perhaps compromised by their involvement in the idea generation step" (2010: 600). In NGT, each participant is deeply involved in the generation of his/her own ideas, but not in the ideas of the other users. The voting mechanisms filter out individual biases and determine a group decision. Consider the example of Threadless, which works similarly to NGT. Participants create ideas individually and post them on the platform. The crowd can look at each idea and ask questions. Finally, the crowd rates each idea (on a scale from one to five) to determine the best idea. NGT has the drawback that no ideas can be added after the nominal group session. At least on the individual level, the same holds at Threadless: once an idea is posted, it cannot be altered.

In NGT and in the hybrid group structure of Girotra et al. (2010), participants initially generated the ideas individually and could not access the contributions of their peers. But while NGT only allows assessment in the evaluation phase, the hybrid group could modify each other's ideas in this phase. Girotra et al. (2010) draw the conclusion that even the hybrid group was not able to select the best idea and that a separation between idea generation and selection would be beneficial. The study itself indicates how to achieve this separation. The two large panels who rated the ideas in the study of Girotra et al. (2010) could be interpreted as a crowd. Although they lack the characteristic of self-selection, both panels used a web-based interface to rate the ideas. Moreover, the raters of both panels were not involved in the idea generation. Girotra et al. (2010) state that "(...) if the interactive buildup is not leading to better ideas, an organization might be better off relying on asynchronous generation by individuals using, for example, Web-based idea management systems, as this would ease other organizational constraints such as conflicting schedules of team members and travel requirements" (2010: 603). Unfortunately, Girotra et al. (2010) do not specify the design of an ideal idea management system. Some are a combination of contest and community (InnoCentive@work), others use a combination of individual idea generation, expert opinion and prediction markets to determine the best idea (e.g. ManorLabs). Another example is Deloitte's Innovation Quest (c.f. Terwiesch and Ulrich, 2009), which uses a combination of nominal and interactive processes in a multi-round contest. To capture the workflow of the Innovation Quest I will apply the Collective Intelligence Framework. Table 16 below, taken from Gegenhuber & Hrelja (2012), shows the genome.

What | Who | Why | How
Create (submit ideas electronically) | Crowd | Money, Love, Glory, Interest | Contest-Private
Decide (select ideas for the next round) | Management (domain experts) | Money, Love, Interest | Hierarchy
Create (build groups and improve ideas) | Crowd | Money, Love, Glory, Interest | Contest-Public
Evaluate (vote on the best ideas) | Crowd | Love, Interest, Money | Voting
Decide (determine winners) | Management | Money, Love, Interest | Hierarchy

Table 16: Genome of Deloitte's Innovation Quest (Gegenhuber and Hrelja, 2012)
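For readers who want to adapt such a genome to their own platform, the table can also be written down as a simple data structure. The following Python sketch is merely an illustration of that idea; it uses the What/Who/Why/How fields of the Collective Intelligence Framework as shown in Table 16 and is not part of Gegenhuber & Hrelja (2012).

# Illustrative representation of the genome in Table 16 as plain data.
# Each step records the What/Who/Why/How "genes" of the workflow.
innovation_quest_genome = [
    {"what": "Create (submit ideas electronically)", "who": "Crowd",
     "why": ["Money", "Love", "Glory", "Interest"], "how": "Contest-Private"},
    {"what": "Decide (select ideas for the next round)", "who": "Management (domain experts)",
     "why": ["Money", "Love", "Interest"], "how": "Hierarchy"},
    {"what": "Create (build groups and improve ideas)", "who": "Crowd",
     "why": ["Money", "Love", "Glory", "Interest"], "how": "Contest-Public"},
    {"what": "Evaluate (vote on the best ideas)", "who": "Crowd",
     "why": ["Love", "Interest", "Money"], "how": "Voting"},
    {"what": "Decide (determine winners)", "who": "Management",
     "why": ["Money", "Love", "Interest"], "how": "Hierarchy"},
]

# Example: list every step in which the crowd is the acting party.
crowd_steps = [step["what"] for step in innovation_quest_genome if step["who"] == "Crowd"]
print(crowd_steps)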

In the first round, the crowd submits the ideas without having access to the ideas of their peers (contest-private). The hierarchy, in this case a group of domain experts appointed by the management, selects the ideas for the next round. From a transaction cost perspective it becomes obvious that the expert filter (hierarchy) demands a lot of resources. In the next step, participants can recruit colleagues for their ideas and build teams with the purpose of improving the idea. The crowd can assess the ideas through voting. At the end of the contest, the management determines the winner based on its judgment and the votes of the crowd. Selection through hierarchy at the end of the contest should presumably ensure that the winning idea is feasible and can be integrated into the current operations. To recapitulate, Rietzschel et al. (2006) define a hybrid as a combination of nominal and interactive idea generation. Girotra et al. (2010) specify a hybrid as an attempt to combine individual and team approaches.
Most crowdsourcing platforms, such as Threadless, can be considered hybrids. Yet this definition does not fully capture examples like Deloitte's Innovation Quest. With the rise of crowdsourcing it is necessary to reconceptualise the notion of a hybrid (Gegenhuber & Hrelja, 2012). I define a hybrid as a combination of different actors (crowd, employees, both), aggregation mechanisms (collection, contest, collaboration), group structures (none, view, assess, modify) and selection mechanisms (hierarchy, meritocracy, standardization, markets). The process itself can be single-round, multi-round or iterative. Still, the question remains: what are ideal combinations for a hybrid? Providing a detailed answer lies beyond the scope of this thesis, but I will outline two preliminary rules. One rule is that the actors who create the ideas should be separated from the selection of ideas. The crowd can be used to assess the ideas of their peers. Given that the crowd is large and diverse, the individual decision bias of the idea generator is filtered out. Nevertheless, the crowd might lack the capabilities to select ideas that are feasible for the agent. Therefore the hierarchy should make the final selection. Another rule is that the initial idea generation should always occur individually, with no access to peer contributions, to ensure that the ideas have a high level of variance.
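To give a rough sense of how large the resulting design space is, the following Python sketch simply enumerates all combinations of the hybrid elements defined above for a single-round process. The element lists are taken from the definition, while the enumeration itself is only an illustration and says nothing about which combinations are efficient.

# Illustrative enumeration of the hybrid design space defined above
# (one round only; multi-round and iterative processes multiply it further).
from itertools import product

actors = ["crowd", "employees", "both"]
aggregation = ["collection", "contest", "collaboration"]
group_structures = ["none", "view", "assess", "modify"]
selection = ["hierarchy", "meritocracy", "standardization", "markets"]

design_space = list(product(actors, aggregation, group_structures, selection))
print(len(design_space))   # 3 * 3 * 4 * 4 = 144 possible single-round hybrids
print(design_space[0])     # ('crowd', 'collection', 'none', 'hierarchy')

Which of these combinations are actually effective is an open question; the two preliminary rules above already rule out a large share of them.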

4 Final conclusion
4.1 Critical Reflection
The foundation for this thesis was an extensive literature review. At some point, saturation regarding the informational value of the literature for each category was reached.38 This thesis examines in detail how crowdsourcing works but provides little insight into the overall effects of crowdsourcing on society. Technological determinism maintains that technology transforms society. In this worldview, society reacts to technology (Burnett and Marshall, 2003). Although this view provides insights into the transformational power of technology, it neglects the fact that technological advancement occurs within an economic and societal context. Technological indeterminism argues that humans build the technology and that interests and institutions shape the path of technological development (Williamson cited in Freedman, 2002). The Web is the foundation of crowdsourcing. From the viewpoint of technological determinism, the one concern is how to make MechanicalTurk or design contest platforms more efficient. Externalities would be perceived as given. Technological indeterminism would reflect on the externalities of crowdsourcing and discuss strategies to mitigate them. The model of overlapping dimensions of aggregation is oversimplified. Consider the example of InnovationExchange. On the team level it is collaboration and on the contest level it is competition. The argument could be that the two circles of competition and collaboration are not overlapping; they occur on different levels (i.e. collaboration within a contribution, a contest between the contributions). Nevertheless, the model highlights that not only pure forms of contest, collaboration or collection exist.

38 I may not have explored relevant literature that would have provided me with additional insight. For instance I integrated little literature on open source and communities.

The second part of the thesis focuses on the implications of the accessibility of peer contributions on the outcomes of contests. I neglected community functionalities in the analysis. One exception is Proposition 13, which states that multi-phase contests that enable using each other's source code and instantaneously ranking the performance of the submitted source code lead to a higher average quality of submissions. Ranking of performance is visualised through a leaderboard, which is a type of community functionality because it contributes to fostering a meritocratic community. I provided only a preliminary explanation of how accessibility of peer contributions and community functionalities differ. To gain a deeper understanding of how the form of a platform shapes the behaviour of the crowd, the community functionalities need to be considered.

4.2 Practical Implications


This thesis makes a call to practitioners to experiment with new reward systems. One suggestion is to apply the reward tree system to the MATLAB contest model, which should make it possible to transfer this contest model to other domains. The combination of sharing information and competing at the same time should lead to an evolutionary improvement of ideas. But alternative reward models, such as co-operatives, should also be considered. If organisations neglect the issue of distributive justice, crowdslapping could be one response, because crowdsourcing can be gamed easily. For practitioners who have to design a contest platform, the analysis of peer contributions helps to gain an understanding of how the form of a contest influences the outcome. Regarding contests within an organisation, the analysis suggests that no accessibility of peer contributions has a negative impact on the willingness to share ideas. This proposition is counterintuitive: one would assume that employees are more willing to contribute to a contest if they are not concerned about the scrutiny of their colleagues. I suggest that a balanced approach - a platform where ideas are visible to everyone but which at the same time offers the possibility to create private virtual working rooms with selected colleagues - could reduce the concerns of employees who initially want to avoid public exposure of their ideas. The preliminary analysis of transaction cost economics shows under which conditions (number of decisions, goal congruence, performance ambiguity) what type of governance mechanism should be applied. Overall, the thesis summarised the key mechanisms of a crowdsourcing platform. These are core building blocks if one attempts to design a platform. Practitioners should pay particular attention to the future development of hybrids because, if correctly applied, hybrids could maximise the outcome of crowdsourcing applications.

4.3 Questions for Further Research


In the first section I raised the question whether the contests of InnoCentive should be considered collective intelligence. I outlined two different perspectives but did not adopt a position. Delving into the vast literature on collective intelligence, it should be possible to find a satisfactory answer. The TCT analysis of governance mechanisms is rudimentary. For the sake of simplicity, environmental factors of TCT such as uncertainty and complexity have not been considered. Nevertheless, the application of TCT at the level of crowdsourcing platforms seems to be a beneficial approach for further research. The notion of social marginality (Jeppesen and Lakhani, 2011) draws attention to the issue of gender and crowdsourcing. On the one hand, broadcast search exploits the fact that women are not well represented in science; if the opposite were the case, there would be less untapped talent to engage on platforms such as InnoCentive. On the other hand, winning a contest at InnoCentive is proof of skills and may be useful for further career steps. Many platforms, such as Wilogo, have more male than female participants (c.f. Trompette et al., 2008). Crowdsourcing depends on a large and diverse set of people, which is not given if only men participate. Jeppesen and Lakhani (2010) refer to economic literature that demonstrates that women are more reluctant to enter competitions. Future research could examine whether the accessibility of peer contributions has an impact on the decision of women to participate in contests. Generally, the anonymity of the Internet increases the likelihood of individuals taking more risks than they would do in a face-to-face setting (McKenna and Green, 2002). One could expect that no access to peer contributions increases the chance that women consider participating in a contest. Conversely, women could be more reluctant to participate if peers could assess their contributions. Yet, since there is little literature on crowdsourcing and gender,39 future research is desirable. Some of the propositions on the accessibility of peer contributions could be tested in empirical studies. Consider Proposition 12, which states that viewing peer contributions reduces the variance of ideas created by the crowd. Girotra et al. (2010) mention that similar ideas (low variance) have similar quality; therefore, the lower the variance of ideas, the less likely it is that a good idea emerges. This proposition could be tested in the following setting: at two different universities, a crowd (students) enters a contest with the same presentation of a problem. At one university, the crowd submits the ideas via an online form (no access to the contributions of their peers). At the other university, the crowd submits the ideas online and can assess the contributions of their peers. The independent variable is the degree of accessibility; the dependent variable is the variance of ideas.

39 An exception is Jones (2011) who discusses crowdsourcing in the context of gender and inequality.

The methodological design of Girotra et al. (2010) should be deployed to measure the variance of ideas. In addition, the study should use control variables for contingency factors such as the type of problem (simple/complex), motivation, effort (e.g. number of ideas, number of posts, time on the platform) and the composition of the crowd (e.g. skills, background, gender, age). Finally, this thesis lays the groundwork for future work on hybrids. I suggest that the elements of hybrids are different types of actors, aggregation and selection mechanisms, accessibility of peer contributions, and processes (one-round, multi-round or iterative). The organisational scholars Grandori and Furnari (2008) specified a set of organisational elements and proposed combinatory laws. These combinatory laws determine which combinations are efficient for organisations (and which are not). Future research on hybrids should define the elements of hybrids more precisely and, inspired by Grandori and Furnari (2008), propose combinatory laws. This research should draw from multiple fields, such as crowdsourcing, the brainstorming literature and organisational literature (e.g. Foss (2003) analyses the radical internal hybrid of Oticon and identifies what incentive problems it caused).

4.4 Contributions
In closing, I will briefly summarise my contributions to the existing body of crowdsourcing literature. First, I argued that outsourcing is not the key element of crowdsourcing and proposed a new definition. Second, I showed that aggregation mechanisms such as contest, collection and collaboration do not only exist in their pure forms but overlap in many crowdsourcing applications. Applying transaction cost theory at the level of crowdsourcing platforms led to the identification of four governance mechanisms. The analysis of peer contributions revealed how the form of a platform influences the outcome of a contest. I proposed an alternative reward system that softens the winner-takes-all logic of contests. Finally, I pointed out the importance of hybrids and suggested that future research should develop combinatory rules to maximise the performance of hybrid models.

References
Amabile, T. M. (1996). Creativity in Context. Boulder, Colo.: Westview Press.
Archak, N. (2010). Money, Glory and Cheap Talk: Analyzing Strategic Behavior of Contestants in Simultaneous Crowdsourcing Contests on TopCoder.com. WWW 2010. Raleigh, North Carolina, USA.
Ardaiz Villanueva, O., Nicuesa Chacón, X., Brene Artazcoz, O., Sanz de Acedo Lizarraga, M. L., & Sanz de Acedo Baquedano, M. T. (2009). Ideation2.0 project: web2.0 tools to support brainstorming networks and innovation teams. Proceeding of the seventh ACM conference on Creativity and cognition (pp. 349-350). ACM.
Arthur, C. (2006). What is the 1 % rule? The Guardian. Retrieved from http://www.guardian.co.uk/technology/2006/jul/20/guardianweeklytechnologysection2
Baker, T., & Nelson, R. E. (2005). Creating Something from Nothing: Resource Construction through Entrepreneurial Bricolage. Administrative Science Quarterly, 50(3), 329-366.
Bao, J., Sakamoto, Y., & Nickerson, J. V. (2011). Evaluating Design Solutions Using Crowds. Proceedings of the Seventeenth Americas Conference on Information Systems (pp. 1-9). Detroit, MI.
Bartunek, J., & Murninghan, J. K. (1984). The nominal group technique: expanding the basic procedure and underlying assumptions. Group & Organization Management, 9(3), 417-432.
Belleflamme, P., Lambert, T., & Schwienbacher, A. (2010). Crowdfunding: An Industrial Organization Perspective.
Benbya, H., & Alstyne, M. V. (2010). Internal Knowledge Markets: Design From The Outside In. Knowledge Creation Diffusion Utilization. Boston.
Benkler, Y. (2002). Coase's Penguin, or, Linux and The Nature of the Firm. Yale Law Journal, 112(3), 369-446.
126

Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven and London: Yale University Press. Bonabeau, E. (2009). Decisions 2.0: The power of collective intelligence. MIT Sloan Management Review, 50(2), 4552. Boudreau, K. J., Lacetera, N., & Lakhani, K. R. (2011). Incentives and Problem Uncertainty in Innovation Contests: An Empirical Analysis. Management Science, 57(5), 843-863. Boudreau, K. J., Lacetera, N., & Lakhani, K. R. (2008). Parallel Search, Incentives and Problem Type!: Revisiting the Competition and Innovation Link. Boston. Brabham, D. C. (2008). Crowdsourcing as a Model for Problem Solving: An Introduction and Cases. Convergence: The International Journal of Research into New Media Technologies, 14(1), 75-90. Brabham, Daren C. (2010). Moving the Crowd At Threadless. Information, Communication & Society, 13(8), 1122-1145. Brabham, Daren C. (2012). Crowdsourcing: A model for leveraging online communities. In A. Henderson & J. Delwiche (Eds.), The Routledge Handbook of Participatory Culture. Bullinger, A. C., & Moeslein, K. M. (2010). Innovation Contests Where are we? AMCIS 2010 Proceedings. Bullinger, A. C., Neyer, A.-K., Rass, M., & Moeslein, K. M. (2010). Community-Based Innovation Contests: Where Competition Meets Cooperation. Creativity and Innovation Management, 19(3), 290-303. Burger-Helmchen, T., & Pnin, J. (2010). The limits of crowdsourcing inventive activities: What do transaction cost theory and the evolutionary theories of the firm teach us? Burnett, R., & Marshall, D. P. (2003). Web theory: an introduction. New York: Routledge.

127

Cherry, M. A. (2009). Working for (Virtually) Minimum Wage: Applying the Fair Labour Standards Act in Cyberspace. Alabama Law Review, 60(2006), 1077-1110. Chesbrough, H. W. (2003). Open innovation, the new imperative for creating and profiting from technology. Harvard Business School Press. Chesbrough, H. W. (2006). Open Innovation: A New Paradigm for Industrial Organization. In Chesbrough; Vanhaverbeke; West (Ed.), Open Innovation (pp. 1 - 12). Oxford University Press. Choi, B., Alexander, K., Kraut, R. E., & Levine, J. M. (2010). Socialization Tactics in Wikipedia and Their Effects. CSCW (pp. 107-116). Savannah, Georgia, USA: ACM. Coase, R. H. (1937). The Nature of the Firm. Economica, 4(16), 386405. Corney, J., Torres-Sanchez, C., Jagadeesan, A. P., & Regli, W. C. (2009). Outsourcing labour to the cloud. International Journal of Innovation and Sustainable Development, 4(4), 294313. Crowston, K., & Howison, J. (2005). The social structure of free and open software development. First Monday, 10(2-7). Dalal, S., Khodyakov, D., Srinivasan, R., Straus, S., & Adams, J. (2011). ExpertLens: A system for eliciting opinions from a large pool of noncollocated experts with diverse knowledge. Technological Forecasting and Social Change, 78(8), 1426-1444. Delbecq, Andre L., Van de Ven, A. H., & Gustafson, D. H. (1975). Group Techniques for Program Planning - a guide to nominal and delphi processes. Diehl, M., & Stroebe, W. (1987). Productivity loss in brainstorming groups: Toward the solution of a riddle. Journal of personality and social psychology, 53(3), 497-509 Doan, A., Ramakrishnan, R., & Halevy, A. Y. (2011). Crowdsourcing systems on the World-Wide Web. Communications of the ACM, 54(4),
128

86. ACM. Ebner, W., Leimeister, J., & Krcmar, H. (2009). Community engineering for innovations: the ideas competition as a method to nurture a virtual community for innovations. R& D Management, 39(4). Fang, C., Limin, Z., & Joseph, L. (2011). Social Loafing or Not: An Exploration of Virtual Collaboration. In Q. Ye, M. Zhang, & Z. Zhang (Eds.), The Fifth China Summer Workshop on Information Management (CSWIM). Harbin. Foss, N. (2003). Selective intervention and internal hybrids: Interpreting and learning from the rise and decline of the Oticon spaghetti organization. Organization Science, 14(3), 331-349. Franke, N., & Klausberger, K. (2010). Die Architektur von Crowdsourcing: Wie begeistert man die Crowd. In O. Gassmann (Ed.), Crowdsourcing. Innovationsmanagement mit Schwarmintelligenz. Mnchen: Carl Hanser Verlag. Freedman, D. (2002). A Technological Idiot? Information, Communication & Society, 5(3), 425-442. Fller, J. (2010). Refining Virtual Co-Creation from a Consumer Perspective. California Management Review, 52(2), 98-123. Gassmann, O., Friesike, S., & Huselmann, C. (2010). Crowdsourcing oder berall gordische Knoten. In O. Gassmann (Ed.), Crowdsourcing. Innovationsmanagement mit Schwarmintelligenz. Mnchen: Carl Hanser Verlag. Gegenhuber, T. & Haque, N. (2010). Connecting with Constituents: Open Government at the Local Level. nGenera Insight, S. 1-24. Gegenhuber, T., & Hrelja, M. (2012). Broadcast Search in Innovation Contests: Case for Hybrid Models. CI 2012. Geiger, D., Seedorf, S., Schulze, T., Nickerson, R. C., & Schader, M. (2011). Managing the Crowd: Towards a Taxonomy of Crowdsourcing Processes. Proceedings of the Seventeenth Americas Conference on
129

Information Systems (pp. 1-11). Detroit. Girotra, K., Terwiesch, C., & Ulrich, K. T. (2010). Idea Generation and the Quality of the Best Idea. Management Science, 56(4), 591-605. Google. (2012). reCaptcha: About Us. Retrieved February 17, 2012, from http://www.google.com/recaptcha/aboutus Grams, C. (2010). Love, hate, and the Wikipedia contributor culture problem. Retrieved February 23, 2012, from http://opensource.com/business/10/3/love-hate-and-wikipediacontributor-culture-problem Grandori, A., & Furnari, S. (2008). A Chemistry of Organization: Combinatory Analysis and Design. Organization Studies, 29(3), 459485. Granovetter, M. (1983). The Strength of Weak Ties: A Network Theory Revisited. Sociological Theory, 1, 201-233. Gulley, N. (2006). In Praise of Tweaking: A Wiki-like Programming Contest. Retrieved February 26, 2012, from http://www.starchamber.com/gulley/pubs/tweaking/tweaking.html Hallerstede, S., & Bullinger, A. (2010). Do you know where you go!? A taxonomy of online innovation contests Angelika Cosima Bullinger. XXI ISPIM Conference Bilbao. Haque, N. (2010). Will Facebook be your CRM provider? Retrieved February 20, 2012, from http://www.wikinomics.com/blog/index.php/2010/09/24/willfacebook-be-your-crm-provider/ Hayek, F. A. (1945). The Use of Knowledge in Society. The American Economic Review, 35(4), 519-530. Hebrank, C. (2009). Netzbasierte asychrone Kommunikations- und Kooperationskompetenzen. Theoretische Ableitung - Praktische Diagnose. Linz: Trauner Verlag. Heffan, I. V. (1997). Copyleft!: Licensing Collaborative Works in the
130

Digital Age. Stanford Law Review, 49(6), 1487-1521. Howe, J. (2006a). The Rise of Crowdsourcing. Wired Magazine. Retrieved from http://www.wired.com/wired/archive/14.06/crowds.html Howe, J. (2006b) Crowdsourcing: A Definition, Crowdsourcing: Tracking the Rise of the Amateur. Retrieved from http://crowdsourcing.typepad.com/cs/2006/06/ crowdsourcing_a.html Howe, J. (2008). Crowdsourcing - Why the Power of the Crowd is Driving the Future of Business (2nd ed.). New York: Three Rivers Press. Huang, Y., Singh, P. V., & Srinivasan, K. (2011). Crowdsourcing New Product Ideas under Consumer Learning Crowdsourcing New Product Ideas under Consumer Learning. Huberman, B., Romero, D. M., & Wu, F. (2009). Crowdsourcing, attention and productivity. Journal of Information Science, 35(6), 758765. Hutter, K., Hautz, J., Fller, J., Mueller, J., & Matzler, K. (2011). Communitition: The Tension between Competition and Collaboration in Community-Based Design Contests. Creativity and Innovation Management, 20(1), 3-21. Jensen, C., & Meckling, H. (1976). Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure. Journal of Financial Economics, 3, 305-360. Jeppesen, L. B., & Lakhani, K. R. (2010). Marginality and ProblemSolving Effectiveness in Broadcast Search. Organization Science, 21(5), 1016-1033. Jones, M. S. (2011). Choosing to be Invisible?: Gender, Inequality, and Discourse on Virtual Crowdsourcing Work. IAMCR 2011 - Istanbul.
131

Juran, J. M. (1954). Universals in Management Planning and Control. Management Review, 43(11). Juran, J M. (1975). The Non-Pareto Principle!; Mea Culpa. Quality Progress. Kappel, T. (2009). Digital Commons at Loyola Marymount Ex Ante Crowdfunding and the Recording Industry!: A Model for the U. S. Loyola of Los Angeles Entertainment Law Review, 29(375). Kavadias, S., & Sommer, S. C. (2009). The Effects of Problem Structure and Team Diversity on Brainstorming Effectiveness. Management Science, 55(12), 1899-1913. Kristensson, P., Gustafsson, A., & Archer, T. (2004). Harnessing the Creative Potential among Users. Product Innovation Management, 21, 4 - 14. Lakhani, K., Jeppesen, L., Lohse, P., & Panetta, J. (2007). The Value of Openess in Scientific Problem Solving. Boston. Lambert, T., & Schwienbacher, A. (2010). An Empirical Analysis of Crowdfunding. Leimeister, J. M., Huber, M., Bretschneider, U., & Krcmar, H. (2009). Leveraging Crowdsourcing: Activation-Supporting Components for IT-Based Ideas Competition. Journal of Management Information Systems, 26(1), 197-224. Lykourentzou, I., Vergados, D. J., Kapetanios, E., & Loumos, V. (2011). Collective Intelligence Systems: Classification and Modeling. Journal of Emerging Technologies in Web Intelligence, 3(3), 217226. Magretta, J. (2002). What Management Is: How It Works and Why Its Everyone's Business. New York: The Free Press. Mail, D. (2009). iStockphoto requires you to delete images in your portfolio. Retrieved February 17, 2012, from http://prostockmaster.com/microstock/istockphoto-delete-images-inportfolio/
132

Malone, T.W., Laubacher, R., & Dellarocas, C. (2009). Harnessing crowds: Mapping the genome of collective intelligence. MIT Center for Collective Intelligence. Malone, T.W., Laubacher, R., & Dellarocas, C. (2010). The Collective Intelligence Genome. MIT Sloan Management Review, 51(3). Malone, Thomas W, & Rockart, J. F. (1991). Computers, networks, and the corporation. Scientific American, (265), 128-136. Manz, C. C., & Neck, C. P. (1995). Teamthink: beyond the groupthink syndrome in self-managing work teams. Journal of Managerial Psychology, 10(1), 7-15. McKenna, K. Y. a., & Green, A. S. (2002). Virtual group dynamics. Group Dynamics: Theory, Research, and Practice, 6(1), 116-127. McKinsey. (2009). And the winner is - Capturing the Promise of Philanthrophic Prizes. Mintzberg, H. (1979). The structuring of organizations: A synthesis of the research (p. xvi, 512). Englewood Cliffs, NJ: Prentice-Hall. Mo, J., Zheng, Z., & Geng, X. (2011). Winning Crowdsourcing Contests: A Micro-Structural Analysis of Multi-Relational Networks. The Fifth China Summer Workshop on Information Management (CSWIM) (pp. 47-51). Harbin. Mockus, A., Fielding, R., & Herbsleb, J. (2002). Two case studies of open source software development: Apache and Mozilla. ACM Transactions on Software, 11(3), 309-346. Moffitt, S., & Dover, M. (2011). Wikibrands - Reinventing your Company in a Customer-Driven Marketplace. Toronto: McGraw-Hill. Morgan, J., & Wang, R. (2010). Tournaments for Ideas. California Management Review, 52(2), 77-98. Muhdi, L., Daiber, M., & Friesike, S. (2011). The crowdsourcing process!: an intermediary mediated idea generation approach in the early

133

phase of innovation. Int. J. Entrepreneurship and Innovation Management, 14(4), 315-332. Ogawa, S., & Piller, F. T. (2006). Reducing the Risks of New Product Development. MIT Sloan Management Review, 47(2), 65-71. Ouchi, W. (1980). Markets, bureaucracies, and clans. Administrative Science Quarterly, 25(1), 129-141. Pabsdorf, C. (2009). Wie Surfen zu Arbeit wird: Crowdsourcing im Web 2.0 (p. 201). Frankfurt am Main, New York: Campus Verlag. Pelzer, C. (2011). Interview with Alec Lynch von DesignCrowd. Retrieved February 20, 2012, from http://www.crowdsourcingblog.de/blog/2011/07/27/interview-mitalec-lynch-founder-und-ceo-von-designcrowd/ Phillips, J. (2010). Open Innovation Typology. International Journal of Innovation Science, 2(4), 175183. Multi-Science. Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: a critical review of the literature and recommended remedies. The Journal of applied psychology, 88(5), 879-903. Poetz, M. K., & Schreier, M. (2009). The value of crowdsourcing: Can users really compete with professionals in generating new product ideas? Powell, A., Piccoli, G., & Ives, B. (2004). Virtual Teams!: A Review of Current Literature and Directions for Future. The Data Base For Advances In Information Systems, 35(1). Prahalad, C. K., & Ramaswamy, V. (2004). Co-creation experiences: The next practice in value creation. Journal of Interactive Marketing, 18(3), 5-14. Preece, J., & Maloney-Krichmar, D. (2003). Online Communities!: Focusing on sociability and usability. In J. Jacko & A. Sears (Eds.), Handbook of Human-Computer Interaction (pp. 596-620). Mahwah: NJ: Lawrence Erlbaum Associates Inc. Publishers.
134

Quinn, A., & Bederson, B. B. (2011). Human computation: a survey and taxonomy of a growing field. CHI 2011. Vancouver. Raymond, E. (1999). The cathedral and the bazaar. Knowledge, Technology & Policy, 12(3), 23-49. Springer. Rietzschel, E. F., Nijstad, B., & Stroebe, W. (2010). The selection of creative ideas after individual idea generation: choosing between creativity and impact. British journal of psychology (London, England!: 1953), 47-68. Rietzschel, E., Nijstad, B., & Stroebe, W. (2006). Productivity is not enough: A comparison of interactive and nominal brainstorming groups on idea generation and selection. Journal of Experimental Social Psychology, 42(2), 244-251. Rogstadius, J., Kostakos, V., Kittur, A., Smus, B., Laredo, J., & Vukovic, M. (2011). An Assessment of Intrinsic and Extrinsic Motivation on Task Performance in Crowdsourcing Markets. Rouse, A. C. (2010). A Preliminary Taxonomy of Crowdsourcing. ACIS 2010 Proceedings. Rowley, T. J., & Moldoveanu, M. (2003). When will stakeholders act? An interest- and identity-based model of stakeholder group mobilization. The Academy of Management Review, 28(2), 204. Sakamoto, Yasuaki, Tanaka, Y., Yu, L., & Nickerson, J. (2011). The Crowdsourcing Design Space. Foundations of Augmented Cognition. Directing the Future of Adaptive Systems. Springer. Schenk, E., & Guittard, C. (2010). Towards a characterization of crowdsourcing practices. Schumpeter, J. A. (1983). The theory of economic development!: an inquiry into profits, capital, credit, interest, and the business cycle (Reprint). New Brunswick, NJ: Transaction Books. Shirky, C. (2003). Power Laws, Weblogs, and Inequality. Retrieved February 19, 2012, from
135

http://www.shirky.com/writings/powerlaw_weblog.html Shirky, C. (2010). How cognitive surplus will change the world. Retrieved November 24, 2011, from http://www.ted.com/talks/clay_shirky_how_cognitive_surplus_will_c hange_the_world.html Steiner, I. D. (1972). Group process and productivity. New York: Academic Press. Stross, R. (2010). At Microtask and CloudCrowd, Assembly Lines Go Online - NYTimes.com. Retrieved February 25, 2012, from http://www.nytimes.com/2010/10/31/business/31digi.html Suh, B., Chi, E. H., Pendleton, B. A., & Kittur, A. (2007). Us vs . Them!: Understanding Social Dynamics in Wikipedia with Revert Graph Visualizations. IEEE Symposium on Visual Analytics Science and Technology (pp. 163-170). Sacramento, CA, USA. Surowiecki, J. (2004). The Wisdom of Crowds: why the many are smarter than the few and how collective wisdom shapes business, economies, societies and nations. New York: Anchor Books. Taewoo, N. (2011). Suggesting frameworks of citizen-sourcing via Government 2.0. Government Information Quarterly, 29(1), 12-20. Elsevier Inc. Tang, J. C., Cebrian, M., Giacobe, N. A., Kim, H. W., Kim, T., & Wickert, D. B. (2011). Reflecting on the DARPA Red Balloon Challenge. Communications of the ACM, 54(4), 7885. ACM. Tapscott, D., & Williams, A. D. (2006). Wikinomics: How Mass Collaboration Changes Everything. Toronto: Penguin Group (Canada). Terwiesch, C., & Xu, Y. (2008). Innovation Contests, Open Innovation, and Multiagent Problem Solving. Management Science. Terwiesch, Christian, & Ulrich, K. T. (2009). Innovation tournaments: Creating and selecting exceptional opportunities. Innovation. Boston: Harvard Business Press.
136

Trompette, P., Chanal, V., & Pelissier, C. (2008). Crowdsourcing as a way to access external knowledge for innovation!: Control, incentive and coordination in hybrid forms of innovation. 24 the EGOS Colloquium (pp. 1-29). Van de Ven, A.H., & Delbecq, A. L. (1974). The effectiveness of nominal, Delphi, and interacting group decision making processes. Academy of Management Journal. Villarroel, J. A., & Reis, F. (2010). Intra-Corporate Crowdsourcing (ICC): Leveraging upon Rank and Site Marginality for Innovation. CrowdConference. Von Ahn, L. (2010). Work and the Internet. Retrieved February 25, 2012, from http://vonahn.blogspot.com/2010/07/work-andinternet.html Von Hippel, E. (1994). Sticky Information and the Locus of Problem Solving: Implications for Innovation. Management Science, 40(4), 429-439. Von Hippel, E. (2005). Democratizing Innovation. Cambridge, Massachusetts: The MIT Press. Ward, T. (2004). Cognition, creativity, and entrepreneurship. Journal of Business Venturing, 19(2), 173-188. Wellman, B., Boase, J., & Chen, W. (2002). The Networked Nature of Community: Online and Offline. IT&Society, 1(1), 151-165. Williamson, O. E. (2012). The Economics of Organization!: The Transaction Cost Approach. American Journal of Sociology, 87(3), 548-577. Wise, S., Miric, M., & Gegenhuber, T. (2010). COINS for Government: Collaborative Innovation Networks Used in Nascent US Government Initiatives. COINs 2010. Wong, W., & Fioravanti, A. (2011). Matlab Contest Builds On The Best. Retrieved February 20, 2012, from
137

http://electronicdesign.com/article/embedded/Matlab-ContestBuilds-On-The-Best.aspx Zwass, V. (2010). Co-Creation: Toward a Taxonomy and an Integrated Research Perspective. International Journal of Electronic Commerce, 15(1), 11-48.


Appendix 1: List of Platforms


99designs: http://99designs.com/
99logostore: http://99designs.ca/logo-design/store
Amazon: http://www.amazon.com/
App Store (Apple): https://developer.apple.com/programs/ios/
Bild Newspaper: http://www.bild.de/news/leserreporter/leserreporter/home-15682146.bild.html
ChallengePost: http://challengepost.com/
CNN iReport: http://ireport.cnn.com/
Crowdspirit: http://www.crunchbase.com/company/crowdspirit
CrowdSpring: http://www.crowdspring.com/
Current TV: http://current.com/
DARPA Balloon Challenge: http://archive.darpa.mil/networkchallenge/
Dell's Ideastorm: http://www.ideastorm.com/
Deloitte Innovation Quest: http://careers.deloitte.com/unitedstates/students/csc_general.aspx?CountryContentID=14034
DesignCrowd: http://www.designcrowd.com/
Digg: http://digg.com/
Doritos Crash the Superbowl contest: http://www.crashthesuperbowl.com
eBay: http://www.ebay.com/
ESP Game: http://www.gwap.com/gwap/
Expedia: http://www.expedia.at
Facebook: https://www.facebook.com/
Flickr: http://www.flickr.com/
Galaxy Zoo: http://www.galaxyzoo.org/
HumanGrid: http://www.clickworker.com/en/
IBM Idea Jam: https://www.collaborationjam.com/
Incuby: http://www.incuby.com/
InnoCentive@work: http://www.innocentive.com/innovationsolutions/innocentivework
InnoCentive: http://www.innocentive.com/
Intrade: http://www.intrade.com/v4/home/
iStockphoto: http://www.istockphoto.com/
IT Dashboard: http://www.itdashboard.gov/
Kickstarter: http://www.kickstarter.com/
Kiva: http://www.kiva.org/
LEGO-Factory: http://designbyme.lego.com/
Linux: https://www.linux.com/
InnovationExchange: http://www.netflixprize.com/
Listening and Learning Tour: http://www.whitehouse.gov/open/innovations/nclb-tour
Manorlabs: http://www.spigit.com/press-releases/manorlabs-recognized-as-bright-idea-innovator-bythe-ash-center-for-democratic-governance-atharvard-university-2
MATLAB Contest: http://www.mathworks.de/matlabcentral/contest/
MechanicalTurk: http://www.mturk.com/
Muji: http://my.muji.net/
MyStarbucksIdea: http://mystarbucksidea.force.com/
Netflix Prize: http://www.netflixprize.com/
NineSigma: http://www.ninesigma.com/
OpenEducation: https://openeducation.ideascale.com/
OpenGov: http://opengov.ideascale.com/
OpenStreetMap: http://www.openstreetmap.org/
OpenIdeo: http://www.openideo.com/
OSRAM LED Design Contest: http://www.led-emotionalize.com/
Peer-to-Patent: http://peertopatent.org/
reCAPTCHA: http://recaptcha.net/
Scratch: http://scratch.mit.edu/
SeeClickFix: http://seeclickfix.com/
Smellfighters Contest: http://www.youtube.com/watch?v=vjrde3ik68w
Spreadshirt: http://www.spreadshirt.com/
StumbleUpon: http://www.stumbleupon.com/
Suite101: http://www.suite101.com/
Threadless: http://www.threadless.com/
TopCoder: http://www.topcoder.com/
Twago: http://www.twago.com/
Ushahidi: http://www.ushahidi.com/
Vencorps: http://www.vencorps.com/
Virgin Earth Challenge: http://www.virgin.com/subsites/virginearth/
Wikified army field guide: http://www.whitehouse.gov/open/innovations/wikifiedArmy
Wikipedia: http://www.wikipedia.org/
Wilogo: http://en.wilogo.com/
X Prize: http://www.xprize.org/
Yahoo Groups: http://groups.yahoo.com/
YouTube: http://www.youtube.com/
