
Crowdsourcing


Crowdsourcing is an open call to anyone to participate in the problem-solving process of a focal actor. It can take many different forms, ranging from ‘small-scale problem solving’ in ideation contests to so-called grand challenges, in which highly complex problems are solved over very long time spans. Research at the institute has examined how crowdsourcing platforms can be used to forecast the attractiveness of user-generated designs, as well as value creation and value capture in crowdsourcing. Recent projects investigate the role of size for crowdsourcing success, the motivations of contributors in grand challenges, the role of crowding in idea selection processes, and the role of knowledge similarity in self-selection and evaluation outcomes on crowdsourcing platforms.

Publications

Chesbrough, Henry, Lettl, Christopher, Ritter, Thomas. 2018. Value creation and value capture in open innovation. Journal of Product Innovation Management (JPIM). 35 (6), 930-938.

Garaus, C.; Lettl, C.; Schirg, F. 2016: Exploring motivations of participants in grand challenges: A comparative case study in the space sector, Academy of Management Best Paper Proceedings, Academy of Management Conference, August 5-9, 2016, Anaheim, California.

Berg-Jensen, Morten, Hienerth, Christoph, Lettl, Christopher. 2014. Forecasting the commercial attractiveness of user-generated designs using online data: An empirical study within the LEGO user community. Journal of Product Innovation Management (JPIM). 31, 75-93.

Franke, N.; Lettl, C.; Roiser, S.; Türtscher, P. 2014: “Does God play Dice?” Randomness vs. Deterministic Explanations of Crowdsourcing Success, Academy of Management Best Paper Proceedings, Academy of Management Conference, August 1-4, 2014, Philadelphia, Pennsylvania.

Ongoing projects

The role of knowledge similarity in self-selection and evaluation quality

The Recognition of Novelty: Investigating the Role of Prior Experience during Idea Screening

Motivation to participate in Grand Challenges

Why Size Matters: Investigating the Roots of Crowdsourcing Success

Sabotage and Self-Promotion in Idea Generation & Selection

The role of knowledge similarity in self-selection and evaluation quality

Idea evaluation is a fundamental task for organizations. Traditionally, organizations try to allocate evaluation tasks by matching task requirements with relevant knowledge. Due to the internal challenges associated with evaluating crowdsourced ideas, organizations increasingly turn to crowds for the evaluation of ideas. Contrary to traditional evaluation processes, crowd evaluation relies on the self-selection of evaluators. In this study, we investigate how matching tasks and evaluators based on knowledge similarity, together with the self-selection of evaluators, influences the outcomes of crowd evaluations. We analyze 5,206 potential evaluations and 701 realized evaluations of ideas in a real crowd-evaluation context. Our results reveal that while self-selection based on knowledge similarity, a fundamental mechanism for new forms of organizing, may improve crowd-evaluation outcomes overall, the effects are more nuanced when different sub-dimensions of idea quality are considered.
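
As a purely illustrative sketch of the kind of matching examined in this study, the short Python example below computes a cosine similarity between a hypothetical evaluator knowledge profile and hypothetical idea topic profiles and then simulates self-selection as a similarity-dependent choice. The profiles, the similarity measure, and the selection rule are assumptions made for illustration, not the study's actual operationalization.

import numpy as np

# Illustrative sketch only: the study's actual measures of knowledge
# similarity and self-selection may differ.
def knowledge_similarity(evaluator_profile, idea_profile):
    # Cosine similarity between an evaluator's knowledge vector and an idea's
    # topic vector (both assumed here to be simple topic-count vectors).
    num = float(np.dot(evaluator_profile, idea_profile))
    denom = np.linalg.norm(evaluator_profile) * np.linalg.norm(idea_profile)
    return num / denom if denom > 0 else 0.0

def self_select(evaluator_profile, idea_profiles, rng, baseline=0.1):
    # Simulate self-selection: the probability that an evaluator picks up an
    # evaluation task rises with knowledge similarity (an assumed functional form).
    sims = np.array([knowledge_similarity(evaluator_profile, p) for p in idea_profiles])
    probs = np.clip(baseline + (1 - baseline) * sims, 0.0, 1.0)
    return rng.random(len(idea_profiles)) < probs  # realized vs. merely potential evaluations

rng = np.random.default_rng(42)
evaluator = rng.poisson(2, size=20)    # hypothetical 20-topic knowledge profile
ideas = rng.poisson(2, size=(10, 20))  # 10 hypothetical idea topic profiles
realized = self_select(evaluator, ideas, rng)
print(realized.sum(), "of", len(ideas), "potential evaluations realized")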

The Recognition of Novelty: Investigating the Role of Prior Experience during Idea Screening

For innovation, it is not sufficient to generate new ideas; the best ideas must also be selected. However, extant research on novelty evaluation and creativity assessment points to a negative bias toward novel ideas in the context of idea selection, especially under high information load. This is problematic, since novel ideas are the lifeblood of successful innovations. To date, little research has examined the individual-level differences that explain why individuals shy away from novel ideas. While individual differences in decision-making have often been attributed to prior experience, we do not yet know whether prior task experience is helpful or harmful when it comes to screening for novel ideas at the fuzzy front end of innovation. To fill this research gap, we aim to conduct a series of laboratory experiments examining the effects of task experience and choice load on the recognition of novelty. With this research, we aim to provide an important theoretical lens for better understanding the mechanisms underlying individual idea-selection behaviour.

Motivations to participate in grand challenges: a comparative case study in the space sector

Understanding the motivations of participants in crowdsourcing contests for grand challenges is important, because it allows organizers to design such contests so that they attract a critical mass of motivated, capable contestants to work on these large and difficult problems. In our embedded case study of the Ansari X Prize and the Google Lunar X Prize, we explore two questions: (1) What are the participants’ motivations to enter the tournament, and (2) how do their motivations change over time in response to critical incidents in those multi-year contests? We find that idealism plays an important role in the decision to participate and also leads to different reactions to the same critical incidents. Our data also reveal that incidents perceived as positive lead to increased extrinsic motivation when they are related to the prize, while those unrelated to the challenge may prompt participants to drop out of the contest. Critical incidents perceived as negative lead to cognitive dissonance, which is resolved either by withdrawal from the contest or by finding an enriched set of justifications and thus developing “winning despite losing” strategies.

Sabotage and Self-Promotion in Idea Generation & Selection

In this paper, we study how idea generators employ two forms of strategic behavior, sabotage and self-promotion, to gain advantages during idea evaluation. As idea generators of varying ability may use each form of strategic behavior differently, we investigate how the behavior of idea generators of heterogeneous ability may distort an organization’s search process by threatening its ability to select the best ideas. We build a formal model to derive predictions about how individuals of heterogeneous ability use sabotage and self-promotion in delegated search processes. We then test these predictions using data from an online crowdsourcing platform on which individuals can participate in both the idea generation and the idea selection of design contests. Digital trace data on 75,000 individuals, who contribute 150,000 submissions and peer-evaluate those submissions over 38 million times, allow us to unobtrusively observe how individuals of heterogeneous ability engage in both self-promotion and sabotage in a natural setting. Employing a difference-in-differences framework, we analyze how individuals of varying ability change their behavior during idea selection when they have also participated in idea generation. We use three natural experiments to further quantify the extent to which incentives from idea generation spill over to idea selection, and to provide additional insights into the sensitivity of strategic behavior to changes in the incentives and search costs associated with identifying suitable targets to sabotage.
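
The minimal Python sketch below illustrates the general shape of a difference-in-differences specification of the kind referred to above, using a small hypothetical panel. The variable names, data, and model are illustrative assumptions, not the paper's actual specification or the platform's trace data.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per (rater, contest) with the share of low
# ratings that rater handed out during idea selection (illustrative data only).
df = pd.DataFrame({
    "low_rating_share": [0.20, 0.45, 0.22, 0.25, 0.21, 0.40, 0.19, 0.23],
    "is_generator":     [0, 1, 0, 1, 0, 1, 0, 0],  # rater also submitted to the contest
    "post":             [0, 1, 1, 0, 1, 1, 0, 0],  # after an (assumed) change in incentives
    "ability":          [0.3, 0.8, 0.5, 0.4, 0.6, 0.9, 0.2, 0.5],
})

# Difference-in-differences: the is_generator:post interaction captures how the
# rating behavior of generator-raters shifts relative to non-generator raters.
model = smf.ols("low_rating_share ~ is_generator * post + ability", data=df).fit()
print(model.summary())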

Why Size Matters: Investigating the Roots of Crowdsourcing Success

In this project, we examine how size affects crowdsourcing success. We distinguish two types of effects that size may have on crowdsourcing success: (1) a resource effect and (2) a mere size effect. We investigate these potential effects using an online experiment with 1,089 participants and simulations of crowds of different sizes.
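
To illustrate only the statistical intuition, the small simulation below compares the expected quality of the best idea produced by crowds of different sizes under an assumed i.i.d. idea-quality distribution. It is a hypothetical sketch, not the project's experimental design or its actual simulation model.

import numpy as np

# Hypothetical illustration: idea quality modeled as i.i.d. standard normal draws.
rng = np.random.default_rng(0)

def expected_best_quality(crowd_size, n_runs=10_000):
    # Draw idea qualities for many simulated crowds of the given size and
    # return the average quality of each crowd's best idea.
    qualities = rng.normal(loc=0.0, scale=1.0, size=(n_runs, crowd_size))
    return qualities.max(axis=1).mean()

for size in (10, 100, 1000):
    print(f"crowd of {size:>4}: expected best idea quality = {expected_best_quality(size):.2f}")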

Researchers