Community evaluation of crowd-sourced ideas
Details
In 2008, Google initiated an ideation challenge called 'Project 10 to the 100'. They asked quite openly 'What would help most?' and received more than 150,000 ideas from people all over the globe. As Google's 'Project 10 to the 100' and many other real-life examples show (e.g. Cisco's 'I-Prize' or Nokia's 'Tune Remake'), the number of submissions to crowdsourcing contests can be stunning. However, the majority of submitted ideas are usually of very low quality; according to Sturgeon's law (1958), 90% of everything is crap. Firms are often overloaded by the 'noise' generated on crowdsourcing platforms and face the problem of not being able to filter and select the best ideas, or only being able to do so with substantial effort. Recently, scholars have proposed that integrating the crowdsourcing community itself into the evaluation process is a promising way to filter out high-quality ideas. In this explorative study, Georg Terlecki-Zaniewicz analyzes eleven crowdsourcing platforms and derives a framework that makes the evaluation of crowd-sourced ideas through community evaluation both more efficient and more accurate.
About the Author
Georg Terlecki-Zaniewicz studied at the Vienna University of Economics and Business. His main research focused on crowdsourcing and other open-innovation topics. Since 2012, he has worked as a consultant and innovation manager for several medium-sized and large corporations.
Further Information
- General information
- Language: English
- Number of pages: 92
- Publisher: AV Akademikerverlag
- Weight: 155 g
- Subtitle: An explorative study on how to improve the prediction accuracy of crowdsourcing communities
- Author: Georg Terlecki-Zaniewicz
- Title: Community evaluation of crowd-sourced ideas
- Publication date: 15.05.2017
- ISBN: 3330509732
- Format: Paperback
- EAN: 9783330509733
- Year: 2017
- Dimensions: H 220 mm x W 150 mm x D 6 mm
- GTIN: 09783330509733