Science topic

Crowdsourcing - Science topic

This group exists to explore ways of making crowdsourcing more efficient.
Questions related to Crowdsourcing
  • asked a question related to Crowdsourcing
Question
5 answers
If you have a recommendation of someone and a summary of the importance of their contribution to science, I would like to learn of these great researchers.
I ask because I found out only recently that my very first research advisor, Dr. Bob Behringer, had passed. He was an amazing, personable figure in granular materials physics and his story is here: https://today.duke.edu/2018/07/physics-professor-robert-p-behringer-dies-69. I was recommending his research to a colleague who had a question about force chains and that's when we learned the news. He was only 69 but he had been a part of the Duke University community for nearly 50 years.
It was too late for my colleague to ask Dr. Behringer his question, but maybe this discussion will help raise awareness of some influential research from some great people. Maybe we can reach out to them with our questions before it is too late. And if it is too late because they have recently departed, their work will live on through those who know about their research.
Relevant answer
Answer
Your idea is excellent. The world is full of wonderful people who are experts in their fields, and I am only sorry that I cannot become one of their students.
For everyone who works on seed topics, I would like to introduce Prof. Gerhard Leubner at Royal Holloway, University of London. I have followed his work for a long time and admire it wholeheartedly.
  • asked a question related to Crowdsourcing
Question
3 answers
I am facing two issues:
1. The gmission-client deployment process fails, even though I have cloned the project from its official GitHub repository.
2. The state-of-the-art assignment algorithms in https://github.com/gmission/SpatialCrowdsourcingAssignmentAlgorithms are not producing properly interpretable outcomes.
If anyone has previously used or deployed gmission, please help.
Thank you in advance
Relevant answer
Answer
Interesting. I agree with @C K Gomathy
  • asked a question related to Crowdsourcing
Question
2 answers
Hello. I chose crowdsourcing for innovation as the topic of my MBA dissertation, but I am now having trouble formulating a research question and choosing a theory. Because of COVID restrictions I have to carry out the study in my local area, and I also need to use an interview or a poll in my work.
Thank you.
Relevant answer
Answer
Question formulation is a creative process. There is no easy answer, and it is your responsibility, no one else's. Having said that, there are a number of principles that you need to apply:
1. Read - what has already been published about crowdfunding of SMEs? What are the bigger questions in this research area? You cannot try to answer the big questions in the field but it is important to know what they are.
2. Data - what data can you realistically access? Can you obtain a sufficient number of responses for a quantitative questionnaire? If not, then you should consider a questionnaire with open-ended questions, interviews, or mixed methods. If you cannot obtain enough data this way either, then consider doing a systematic review of the literature.
3. Interest - why are you interested in this subject? Do other people have similar interests and have already published something in the area?
4. Career - what do you want to do in the future? How can you use the opportunity of doing a dissertation to make contacts and gain insight into the role you wish to have in the future?
5. Niche - putting all these together, try to identify something unique which is viable.
Please see my project on proposal writing for more information.
  • asked a question related to Crowdsourcing
Question
8 answers
Principles of Social Networking: The New Horizon and Emerging Challenges
It is our pleasure to invite you to contribute a chapter in this book within its scope mentioned below.
Website http://cs.nits.ac.in/social_networking/ Submission link https://easychair.org/conferences/?conf=psn20200 Abstract Submission May 25, 2020 Full Chapter Submission August 25, 2020
This is an edited (multi-authored) book titled "Principles of Social Networking: The New Horizon and Emerging Challenges" in collaboration with Springer and the International Federation of Information Processing (IFIP) TC 12 to be published (Final approval is pending) under IFIP-AICT series, Springer.
Tentative List of Topics
  • Introduction to Social Networking
  • Network Modelling, Visualization and Analyzing Tools
  • Role of Centrality in Social Networks
  • Community detection in Social Networks
  • Link prediction in Social Networks
  • Information diffusion and Epidemics
  • Influence propagation and Influence Maximization
  • Crowdsourcing
  • Privacy and cybersecurity
  • Social Recommender Systems
  • Misinformation, Fake News and Rumor Detection
  • Social Reputation, Influence, and Trust
  • Targeted Advertisement and Viral Marketing
  • Review Mining and Rating
  • Sentiment Analysis and Public Opinion Mining
  • Deep learning techniques for social networking
  • Data collection and quality from Social Media
  • Big data and scalability in Social Networks
  • Social geography and spatial networks
  • Multirelational, multidimensional, multi-aspect, multilayer networks
Editors of the Book Dr. Anupam Biswas, National Institute of Technology Silchar, Dr. Ripon Patgiri, National Institute of Technology Silchar, Dr. Bhaskar Biswas, Indian Institute of Technology (BHU) Varanasi. Publication All accepted book chapters will be published in the IFIP-AICT book series, which is published by Springer and indexed in SCOPUS, Web of Science, DBLP and other databases. Contact All questions about submissions should be emailed to abanumail@gmail.com or patgiri@ieee.org.
Relevant answer
Answer
Thank you for the information; it is a nice chance to publish a book chapter.
  • asked a question related to Crowdsourcing
Question
7 answers
In order to evaluate the performance of crowdsourcing platforms from an open innovation point of view, we have prepared a questionnaire that takes only 4-7 minutes to fill in.
I would really appreciate it if anyone with expertise in Open Innovation and Crowdsourcing would submit a response.
You can find the questionnaire through the link below:
Relevant answer
Answer
The questionnaire cannot be accessed.
  • asked a question related to Crowdsourcing
Question
8 answers
In order to evaluate the performance of a crowdsourcing platform from an open innovation point of view, what indicators should be considered?
Relevant answer
Answer
Dear colleague,
In order to evaluate the performance of a crowdsourcing platform, you may take into consideration some of the indicators mentioned in the following article:
  • asked a question related to Crowdsourcing
Question
7 answers
We are crowdsourcing ideas for useful community engagement and partnership tools for both researchers and community stakeholders in the health field.
Although there are various tools, toolkits and resources online, we wonder if there are any tools that you have found particularly useful in practice. Any recommendations based on your experiences? Tools for all phases of engagement, from planning and execution to analysis and dissemination, are welcome!
Thank you for your generous sharing of ideas in advance!
Relevant answer
Answer
The Good Participatory Practice (GPP) guidelines were designed by AVAC and UNAIDS for health research with an HIV/AIDS focus, but I have found them useful for other fields as well. More information: https://www.avac.org/good-participatory-practice
  • asked a question related to Crowdsourcing
Question
5 answers
such as instant messaging, social networking sites, crowd sourcing, and mobile applications (Evans-Cowley and Kitchen, 2011)
Relevant answer
  • asked a question related to Crowdsourcing
Question
1 answer
Can you recommend a dataset on cell-phone usage in an urban area?
Cell-phone model, geographical information, and call duration or data-transfer volume are the most important fields I am looking for, over a period of at least 24 hours in a city.
I have worked with some datasets that are too outdated; I need more recent data (not older than 2015).
Thanks.
Relevant answer
  • asked a question related to Crowdsourcing
Question
4 answers
Hello, I am currently working on a project analyzing crowdsourced data from online trip reports and social media applications. However, I am struggling to define the methodology for this project, specifically how best to analyze the texts from these trip reports and social media applications from a qualitative angle. The hope is to interpret large bodies of text and categorize them into themes. I am reaching out to you all to gain varying perspectives on how best to do this. Thank you!
Relevant answer
Answer
Two resources. First, Robert Kozinets recently published a new version of his classic book on Netnography. Recommend giving that a read. (There's a free version available here in RG: )
There's also a 'quick startup guide' I wrote for NVivo that sheds some light on doing computer-assisted qualitative analysis: http://jonisalminen.com/qualitative-analysis-with-nvivo-essential-features/
Hope it helps!
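As a first pass before (or alongside) a tool like NVivo, deductive coding against a keyword codebook can be scripted. A minimal Python sketch, where the themes, keywords, and reports are all illustrative (a real codebook would come from an initial open-coding pass over a sample of reports):

```python
import re
from collections import Counter

# Hypothetical codebook: theme -> indicator keywords.
CODEBOOK = {
    "scenery":    {"view", "views", "scenic", "landscape", "waterfall"},
    "difficulty": {"steep", "hard", "exhausting", "difficult", "easy"},
    "crowding":   {"crowded", "busy", "quiet", "parking"},
}

def code_report(text):
    """Return the set of themes whose keywords appear in the report."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return {theme for theme, kws in CODEBOOK.items() if tokens & kws}

reports = [
    "The views from the summit were scenic but the trail was steep.",
    "Parking lot was crowded by 8am; otherwise a quiet, easy walk.",
]

# Theme frequencies across the whole corpus of trip reports.
theme_counts = Counter(t for r in reports for t in code_report(r))
print(theme_counts)
```

Counts like these only give a rough map of the material; the codebook would normally be refined iteratively against a manually coded subsample before being applied at scale.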
  • asked a question related to Crowdsourcing
Question
1 answer
Hello, I would be happy to participate in this project in a contributory role, as my major field as a PhD graduate is Open Innovation in SMEs. I am currently focusing on research into Industry 4.0, digital technologies, and smart innovation such as OI and Crowdsourcing.
Relevant answer
Answer
Thank you for your interest, Amir. We'll keep this in mind for our next projects. Unfortunately, INNOVENTER ends now and the only remaining activity is to report the results.
  • asked a question related to Crowdsourcing
Question
3 answers
I have recently started a research project about crowdsourcing, and I have two questions about its nature.
1. Is it possible to represent crowdsourcing as a graph? That is, is it valid to consider crowdsourcing members as interrelated network nodes?
2. Is it possible to monitor the activities of these members (nodes)? That is, can a crowdsourcing process be treated as a project-control problem, including time studies of these nodes?
Relevant answer
Answer
Hi,
Crowdsourcing is the collective gathering of information from the public, which is then used to complete a business-related task. In other words, businesses take a task that would usually be performed by an employee, staff member, or contractor and hand it over to a crowd of people.
Uber, which pairs available drivers with people who need rides, is an example of crowdsourced transportation. While crowdsourcing often involves breaking up a big job, businesses sometimes use crowdsourcing to assess how multiple people perform at the same job. With crowdsourcing, large work projects are broken into smaller chunks, or microtasks, and then divvied out to tens, hundreds, or even thousands of people for fast, efficient completion. In this sense, crowdsourcing is based on the principle that many hands make light work.
For More info:
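On the original question's first point: a crowdsourcing process can indeed be modelled as a graph, for example a bipartite graph with workers and tasks as nodes and timestamped assignments as edges, which also supports the per-node monitoring and time studies asked about in the second point. A minimal pure-Python sketch (all names and timestamps are illustrative):

```python
from collections import defaultdict
from datetime import datetime

class CrowdGraph:
    """Bipartite model: workers and tasks are nodes, assignments are
    timestamped edges, so activity can be monitored per node."""

    def __init__(self):
        self.edges = defaultdict(list)   # worker -> [(task, start, end)]

    def assign(self, worker, task, start, end):
        self.edges[worker].append((task, start, end))

    def degree(self, worker):
        """Number of task nodes this worker is connected to."""
        return len(self.edges[worker])

    def total_time(self, worker):
        """Simple 'time study': total seconds spent across tasks."""
        return sum((e - s).total_seconds() for _, s, e in self.edges[worker])

g = CrowdGraph()
g.assign("alice", "label_image_1",
         datetime(2021, 5, 1, 9, 0), datetime(2021, 5, 1, 9, 4))
g.assign("alice", "label_image_2",
         datetime(2021, 5, 1, 9, 5), datetime(2021, 5, 1, 9, 7))
print(g.degree("alice"), g.total_time("alice"))
```

For anything beyond a sketch, a dedicated graph library would make centrality and community measures over the same structure straightforward.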
  • asked a question related to Crowdsourcing
Question
1 answer
I am trying to crowdsource mean opinion scores (MOS) for a speech synthesis model. As MOS is a subjective metric, I am trying my best to avoid subjective bias.
So far my plan is to have a GitHub page containing the samples and to ask different users (recruited via social media) to give an opinion score (1-5) for each sample through a web interface. From that, I will calculate the mean opinion score after statistical post-processing (following this paper: https://github.com/Netflix/sureal/blob/master/resource/doc/dcc17v3.pdf).
Recently, many papers mention Amazon Mechanical Turk for this type of task, but for my task it is overkill. Do you have any suggestions?
Relevant answer
Answer
Hi,
A Mean Opinion Score is a measure of the quality of an event or experience; it can be employed anywhere human subjective experience and opinion are useful.
Kindly refer to this link:
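Once the ratings are collected, the basic aggregation is straightforward; a small sketch with illustrative ratings. (The sureal paper the question cites describes more sophisticated post-processing, such as modelling rater bias and inconsistency, which this does not attempt.)

```python
import statistics

# Illustrative 1-5 ratings collected for one synthesized sample from
# several raters; real data would come from the web interface.
ratings = [4, 5, 3, 4, 4, 5, 3, 4]

mos = statistics.mean(ratings)
sd = statistics.stdev(ratings)
n = len(ratings)

# Approximate 95% confidence interval (normal approximation; for very
# small n a t-quantile would be more appropriate).
half_width = 1.96 * sd / n ** 0.5
print(f"MOS = {mos:.2f} +/- {half_width:.2f}")
```

Reporting the interval alongside the MOS makes it clear how many raters per sample are needed before differences between models are meaningful.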
  • asked a question related to Crowdsourcing
Question
2 answers
Hello,
I am writing my thesis on crowdsourcing platforms: how to manage and design them.
We have run a conjoint analysis to test the crowd's preferences across 4 design attributes. We have also run a survey asking the crowd about their willingness to participate, using a 1-7 Likert scale.
The conjoint data was very insightful about which attribute is most important, but I would like to know whether those preferences also translate into higher participation in crowdsourcing contests.
I want to run a regression analysis in Stata or SPSS. Is there any way to translate the conjoint data into a Likert-type scale that I can include in my models?
The conjoint study was done in QuestionPro.
Relevant answer
Answer
Hello Milad,
The Conjoint Analysis template in QuestionPro uses a rating scale (7="Definitely will buy", 6="Probably will buy", etc.) in response to a set of options that are made up of combinations of your four design attributes. With four attributes, each at two levels (yes/no, high/low, etc.), you have 16 profiles in a full factorial, which would be reduced to eight profiles in a main-effects-only fractional-factorial design. So each of your respondents gave you 8 ratings for the different profiles, something like this from each person:
 a   b   c   d   Rating
-1  -1  -1  -1      4
 1  -1  -1   1      5
-1   1  -1   1      3
 1   1  -1  -1      7
-1  -1   1   1      4
 1  -1   1  -1      6
-1   1   1  -1      2
 1   1   1   1      1
Is that correct?
You already have a regression model here. Conjoint Analysis, of the type you have used, is simply multiple regression applied to experimental-design data. (The main difference is the jargon - it kept a lot of overpaid consultants happily occupied throughout the 1980s and 1990s.)
Rating is your Dependent Variable, and your four attributes are the Independent Variables.
  • If you run the regression at the level of the individual person, then you have 8 observations in total, with five degrees of freedom used by the regression (the intercept plus four attributes), leaving only 3 df for error. In this case, you would ignore the significance figures for the regressors (attributes) and use Adjusted R2 for goodness of fit. From my experience, I throw out any respondents whose individual-level Adjusted R2 is less than 0.70.
  • Alternatively, you can run your regression with all respondents together - this assumes that each respondent is a random variation around a group-level utility function. Unless your respondents are very similar to each other with respect to crowdsourcing preferences, expect rather low regression coefficients.
  • Alternatively, you can run a cluster analysis on the rating scales for all of your respondents and find groups of respondents (market segments) who have similar patterns of preferences for crowdsourcing, and then run group-level regression models for each cluster.
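For what it is worth, the individual-level model described in the first bullet can be fitted outside Stata/SPSS as well; a sketch using the example profile matrix and (illustrative) ratings above:

```python
import numpy as np

# The eight profiles from the answer: columns a, b, c, d in -1/+1
# coding, plus one respondent's ratings.
X = np.array([
    [-1, -1, -1, -1],
    [ 1, -1, -1,  1],
    [-1,  1, -1,  1],
    [ 1,  1, -1, -1],
    [-1, -1,  1,  1],
    [ 1, -1,  1, -1],
    [-1,  1,  1, -1],
    [ 1,  1,  1,  1],
])
y = np.array([4, 5, 3, 7, 4, 6, 2, 1])

# Individual-level conjoint model = ordinary least squares with an
# intercept; the coefficients are the part-worths of the attributes.
design = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

# Adjusted R2 for the respondent-screening step described above.
resid = y - design @ coef
r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
adj_r2 = 1 - (1 - r2) * (len(y) - 1) / (len(y) - design.shape[1])
print(coef, adj_r2)
```

Because the fractional-factorial columns are mutually orthogonal, each part-worth is simply the rating-weighted average of its column; with these illustrative ratings the adjusted R2 comes out low, so this respondent would be dropped under the 0.70 screening rule.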
  • asked a question related to Crowdsourcing
Question
4 answers
How can I build a labeled training dataset using crowdsourcing?
For example, given a dataset (A, @, 2, ...), 'A' should get the label 'letter', '@' the label 'special character', and '2' the label 'number'.
Or, in a text-mining example, the text "I am happy and feel always fine" should get the label "optimistic".
Relevant answer
Answer
Diffgram https://diffgram.com (where I work).
  • Workflow. Manage tasks at scale with Jobs.
  • Validation. Pre-label images with your own model or use built in automatic training.
  • Video support. Video frames with interpolation can save a ton of time!
Diffgram is web based:
  • Fast to get started - create an account and start in moments.
  • Share work with teammates for review
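Whichever platform collects the annotations, each item is typically shown to several workers and the answers are then merged into a single training label; a minimal majority-vote aggregation sketch (the items mirror the question's example, the individual votes are invented):

```python
from collections import Counter

# Illustrative raw annotations: item -> labels from three workers each.
annotations = {
    "A": ["letter", "letter", "number"],
    "@": ["special character", "special character", "special character"],
    "2": ["number", "number", "letter"],
}

def majority_label(labels):
    """Keep the most frequent label and its agreement ratio."""
    label, votes = Counter(labels).most_common(1)[0]
    return label, votes / len(labels)

gold = {item: majority_label(lbls) for item, lbls in annotations.items()}
print(gold)
```

The agreement ratio is useful for quality control: items with low agreement can be routed to more workers or to an expert adjudicator instead of entering the training set directly.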
  • asked a question related to Crowdsourcing
Question
4 answers
Our team is working with EC-JRC on crowdsourcing food price data across time, space, and market segments from a volunteer crowd. We are interested in testing the performance of the crowdsourcing system that we have developed against standard metrics or measures of performance. Is there any literature or research that has proposed standard thresholds to guide the assessment of whether or not the crowdsourcing is successful (e.g. what level of crowd engagement would be considered a success)?
Relevant answer
Answer
My pleasure Julius
  • asked a question related to Crowdsourcing
Question
8 answers
I am trying to understand online vigilantism practices by questioning the pros and cons of public contribution to the production of security.
Relevant answer
Answer
Mirna Leko-Šimić Thank you for your interaction. You mention two key points related to online vigilantism, namely responsibility and raising public awareness of things being done wrong. This partly refers to the idea of whistleblowing, but not all citizens have access to serious data to share, nor does whistleblowing always abide by the ethics of providing online data. However, with the ubiquity of cameras and social media platforms, more and more possibilities exist for ordinary citizens to react to wrongdoing by immediately sharing content such as photos, videos, and statuses. Is there, then, any way to agree upon rules that would make those interactions more beneficial for the public good?
  • asked a question related to Crowdsourcing
Question
8 answers
Is it possible to predict future security events, for example to create something like a weather forecast with high accuracy, or to predict events and trends in other areas? What can crowd-sourced security intelligence contribute to this process?
I created an experimental Security Predictions web site at http://securitypredictions.xyz. It has been built to harness the 'wisdom of crowds', and I am experimenting with how we can use crowd-sourced security intelligence to predict future events. You are welcome to contribute there.
Relevant answer
Answer
Dear Dragan Pleskonjic,
Currently, research is being conducted in several research centers to answer the above question. This research concerns the application of new information technologies, such as Big Data database technologies, cloud computing, machine learning, the Internet of Things, and artificial intelligence, to prediction and the forecasting of future events. Positive effects of this type of research have already been observed; however, the research needs to continue in order to fully confirm the results obtained.
Best wishes
  • asked a question related to Crowdsourcing
Question
7 answers
OSINT generally refers to the acquisition of publicly available big data and its analysis for intelligence-gathering purposes. However, it is also possible to use OSINT as an additional tool in criminal investigations. Since the resources of law-enforcement agencies have always been scarce, to the extent that they barely manage to deal with cases "physically", some parts of an investigation might be provided by the online community in the form of crowdsourcing. Such a crowdsourcing system requires additional and stricter safeguards than its commercial equivalents, but that is the subject of another question. I wonder whether a similar idea has been put into practice before. Please don't mention one-time efforts like the investigation of the Boston bombings.
Relevant answer
  • asked a question related to Crowdsourcing
Question
4 answers
Dear Authors,
The anti-pollutant field needs a lot of research and input from the global research community. This will help us understand safer and more efficacious actives and products, and help to develop standard testing procedures. It will also help global regulatory agencies to set benchmarks and SOPs.
Currently, I am editing a special issue on "Anti-pollutant cosmetics" with a Switzerland-based "Cosmetic" journal. I would like to invite you and your colleagues to submit articles (reviews, mini-reviews, articles about new actives, patents, trends, etc.) to this special issue.
Your inputs will provide future directions for the research or guidelines for standard tests and benchmarks that can help regulatory agencies.
Please let me know if you need any help. 
Thank you for your kind consideration.
Prashant D. Sawant, Ph.D., MBA
Relevant answer
Answer
Hi Tariq, The special issue is already published in January 2018. Thank you. Prashant
  • asked a question related to Crowdsourcing
Question
3 answers
I made the mistake of not factoring in that crowdsourced samples from Prolific (similar to MTurk) are by their very nature convenience samples, and so they aren't generalisable. The survey was intended to collect a representative sample.
To give a little info on the survey: sample size is 400. It's a precursor to a series of experiments. It includes repeated measures with the intention of identifying relationship strength for respondents to two types of groups. The strongest respondent-group relationship would then feature in the experiments. What tests can I run on crowdsourced data so that I can generalise the results to inform my future experimental methods? Am I able to use parametric tests on the data or will I be limited to non-parametric?
Relevant answer
Answer
Forget it. There are essentially no such procedures. If you want to make a less costly guess, use a convenience sample. However, I would not trust an experimental design based on that: if the original sample gives "bad" results, the experiment will likely be poorly designed. See any good book on sampling theory and experimental design. Best, David Booth
  • asked a question related to Crowdsourcing
Question
5 answers
Question Closed. Thank you.
Relevant answer
Answer
Gamification, as you mentioned, is not one technique; it is more about "learning from games" and what makes them so engaging and motivating, using not only simple elements but "game-design elements in a non-game context" [1], which also means mechanics, aesthetics and processes [2].
Motivation (and engagement) has its own theories, for example self-determination theory. There are theories focusing on the individual's needs, physical or psychological (content theories, e.g. [3]), and there are theories focusing on cognitive processes.
All of these approaches can be used to determine how to improve the usability of a system (and the user's motivation) and to fit the user's needs and requirements. To improve user motivation, it is key to understand the context of use and to analyse precisely what motivates the user as an individual, so that the correct motivational elements can be enhanced. Theories (gamification, psychology, motivation, ...) can be used to back up ideas and hypotheses for design decisions and can help us understand the underlying concepts of motivation and engagement. In the end, what motivates a user is determined by the user's situation and context.
[1] Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011, September). From game design elements to gamefulness: defining gamification. In Proceedings of the 15th international academic MindTrek conference: Envisioning future media environments (pp. 9-15). ACM.
[2] Hunicke, R., LeBlanc, M., & Zubek, R. (2004, July). MDA: A formal approach to game design and game research. In Proceedings of the AAAI Workshop on Challenges in Game AI(Vol. 4, No. 1, pp. 1-5). AAAI Press San Jose, CA.
[3] Maslow, A., & Lewis, K. J. (1987). Maslow's hierarchy of needs. Salenger Incorporated, 14, 987.
  • asked a question related to Crowdsourcing
Question
4 answers
My research team is getting ready to use Amazon Mechanical Turk (or a similar service) as a platform for psychological experiments for the first time. We have programmed some of the relevant experiments in Eprime, but don't yet know whether they will work on MTurk, or how to make Eprime and MTurk work together. Can this be done? And what online resources are to there to help us achieve this? Thanks!
Relevant answer
Answer
Hi there! I'm a developer working on Gorilla (gorilla.sc). We provide a direct recruitment link to Amazon Mechanical Turk, making the integration as simple as possible. We've had many researchers come to us from EPrime and most have been able to easily replicate their study in Gorilla using our GUI Task Builder. More complicated tasks have been made possible using small amounts of additional code (which we can write on request).
Some of those researchers have come to us because they wanted to recruit via services like MTurk or Prolific but were not able to using EPrime. I don't have a direct answer to your question, but at least an implied one: integrating EPrime and MTurk is challenging enough that it can precipitate moving to a new experimental tool entirely! We have extensive online documentation, provide email support, and can also provide initial support via Skype (or similar online services) to help you get started.
  • asked a question related to Crowdsourcing
Question
4 answers
I am writing my thesis related to a smart city project in the Netherlands. We are exploring different methods of ideation, selection, collaboration and incubation for sustainable business models with a smart city orientation. Crowdsourcing is one of the ideation methods used in other smart city projects in Europe. I am interested in research about how these ideas can be turned into real business models that create value for the transition towards a smart city.
Relevant answer
Answer
Dear Estela,
We work on Mobile Crowdsensing for smart cities applications and you can find our works here:
If you need any information more in detail you can contact me directly. It would be a pleasure to share info and know also about your project.
  • asked a question related to Crowdsourcing
Question
3 answers
Can participants in social media crowdsourced apps be consumers (users) and producers (creators) at the same time?
Relevant answer
Answer
Yes!
  • asked a question related to Crowdsourcing
Question
4 answers
Over the last two decades, several popular concepts have emerged in management research, including open innovation, crowdsourcing, crowdfunding, and frugal innovation. I have had the honor of witnessing the rise of these concepts and of contributing to all of these research fields. See the attached file, which shows the growth of publications on these topics; the figures are based on search results using each concept as a search term limited to article titles.
Open innovation was, I assume, the hottest topic of the last decade, and many top-notch scholars have published papers on it. Crowdsourcing is a popular concept, yet there is a limited number of articles on crowdsourcing in top journals. Crowdfunding is a very recently emerged concept whose popularity is growing at an exponential rate. Frugal innovation has yet to receive adequate attention from scholars. The fate of some of these concepts depends on the nature of the data: for example, it is easy to collect data for quantitative research on crowdfunding, while it is almost impossible to collect data for quantitative research on frugal innovation.
These four concepts have advanced our knowledge, but they have also confused us. It seems that crowdfunding will continue its popularity; I assume both open innovation and crowdsourcing will lose their luster in the long run, while frugal innovation will grow at a steady pace.
I would be happy to know your take on these and other concepts of the management field from a research lens.
Relevant answer
Answer
The key would be to clearly define these concepts/terms first, which is a difficult task. Once we understand what the similarities and differences are, we can start to develop a more in-depth understanding of these terms. I was wondering whether there are any papers that theoretically compare and contrast them.
  • asked a question related to Crowdsourcing
Question
5 answers
I was wondering if there is existing research on measuring the impact / contributions that an individual has in an online collaboration setting. Literature has shown that there are different user roles and that their contribution to the overall "success" also differs (e.g. providing solutions, connecting people, providing guidance / comments, ...). What I did not find so far is any paper trying to measure the "performance" of individuals to such a platform / community. I would be interested in doing research on how the log data from online collaboration platforms can be used to measure the "performance" of individuals. Does anybody know of research in this direction? Can anybody recommend papers / streams of research that might be helpful to look at getting started?
Thanks a lot!
Relevant answer
Answer
Hi Dominik
First thing to start your literature research from is
Kraut, R.E., Resnick, P., Kiesler, S., Burke, M., Chen, Y., Kittur, N., Konstan, J., Ren, Y. and Riedl, J., 2012. Building successful online communities: Evidence-based social design. MIT Press.
This book is a must read and will lead you to many more papers on the specific aspect you want to focus on.
We did some work on user contributions in online citizen science, where we used digital traces (logs of activities).
Luczak-Roesch, M., Tinati, R., Simperl, E., Van Kleek, M., Shadbolt, N. and Simpson, R.J., 2014, June. Why Won't Aliens Talk to Us? Content and Community Dynamics in Online Citizen Science. In ICWSM.
Tinati, R., Simperl, E., Luczak-Roesch, M., Van Kleek, M. and Shadbolt, N., 2014. Collective Intelligence in Citizen Science--A Study of Performers and Talkers. arXiv preprint arXiv:1406.7551.
Tinati, R., Luczak-Roesch, M., Simperl, E. and Shadbolt, N., 2014, June. Motivations of citizen scientists: A quantitative investigation of forum participation. In Proceedings of the 2014 ACM conference on Web science (pp. 295-296). ACM.
Tinati, R., Luczak-Roesch, M., Simperl, E. and Hall, W., 2016, May. Because science is awesome: studying participation in a citizen science game. In Proceedings of the 8th ACM Conference on Web Science (pp. 45-54). ACM.
And there is a lot work on how people edit Wikipedia articles. You can start from the references in these papers:
Wierzbicki, A., Turek, P. and Nielek, R., 2010, July. Learning about team collaboration from Wikipedia edit history. In Proceedings of the 6th International Symposium on Wikis and Open Collaboration(p. 27). ACM.
Kittur, A., Suh, B., Pendleton, B.A. and Chi, E.H., 2007, April. He says, she says: conflict and coordination in Wikipedia. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 453-462). ACM.
Tinati, R., Luczak-Roesch, M. and Hall, W., 2016, April. Finding Structure in Wikipedia Edit Activity: An Information Cascade Approach. In Proceedings of the 25th International Conference Companion on World Wide Web (pp. 1007-1012). International World Wide Web Conferences Steering Committee.
Tinati, R., Luczak-Roesch, M., Shadbolt, N. and Hall, W., 2015, May. Using WikiProjects to Measure the Health of Wikipedia. In Proceedings of the 24th International Conference on World Wide Web (pp. 369-370). ACM.
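To make the "digital traces" idea concrete: most of these studies start by reducing raw platform logs to per-user activity metrics, which can then feed role detection or performance models. A minimal sketch with invented log entries and invented action weights (real weights would be derived from, e.g., which actions correlate with community success):

```python
from collections import defaultdict
from datetime import date

# Illustrative activity log: (user, action, day), as might be parsed
# from a collaboration platform's event logs.
log = [
    ("dana", "post_solution", date(2018, 3, 1)),
    ("dana", "comment",       date(2018, 3, 2)),
    ("eli",  "comment",       date(2018, 3, 2)),
    ("dana", "post_solution", date(2018, 3, 5)),
]

# Hypothetical weights reflecting that some roles (e.g. providing
# solutions) contribute more to "success" than others.
WEIGHTS = {"post_solution": 3, "comment": 1}

scores = defaultdict(int)        # weighted contribution per user
active_days = defaultdict(set)   # distinct active days per user
for user, action, day in log:
    scores[user] += WEIGHTS.get(action, 0)
    active_days[user].add(day)

for user in scores:
    print(user, scores[user], len(active_days[user]))
```

Simple counts like these are only a starting point; the papers above combine such metrics with network position and temporal dynamics to distinguish user roles.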
  • asked a question related to Crowdsourcing
Question
3 answers
In 2016, Orna Raviv published a paper in Angelika 21(2) on Andy Warhol's 'Screen Tests'. The journal (predictably) has an unjustifiable paywall for 13 pages, and she herself has not answered queries. Can another scholar with university access send me a PDF?
Relevant answer
Answer
Shalom & Boker tov, Ali...THANK YOU. It opened the paper without difficulty.
  • asked a question related to Crowdsourcing
Question
2 answers
I am a PhD student working on crowdsourcing quality control, and I need a dataset on worker behavior during task completion.
Relevant answer
Answer
Thanks, Dr. Priyavrat; I will certainly contact you.
  • asked a question related to Crowdsourcing
Question
3 answers
I am looking for publications about crowdfunding in real estate.
Any recommendations would be highly appreciated.
Relevant answer
Answer
I hope these articles are useful...
  • asked a question related to Crowdsourcing
Question
4 answers
I am interested in any personal experience you may have using this combination (MTurk and Qualtrics) for survey research, and any advice you may have. 
Specifically: ease of use, the ability to get a representative sample, the number of responses, and the cost. Beyond that, for what type of research would you feel comfortable using survey data obtained through this method? Would it be best only for testing survey ideas, or is the data quality sufficient for more than that?
Finally has anyone had experience with data obtained this way and IRBs?
Thank you!
Relevant answer
Answer
Thank you Mitch--that was great information.  We are developing a policy research project that includes some behavioral elements and we were trying to figure out if using this combo would be good to first test our ideas and maybe use those results to get funding to do a random sample survey so this was very helpful.  Thank you very much!
  • asked a question related to Crowdsourcing
Question
4 answers
Address each step of the innovation-decision process, i.e. the knowledge, persuasion, decision, implementation and confirmation stages, with a relevant structured interview questionnaire.
Relevant answer
Answer
The questions are to be asked about the adoption of cage culture in an area, for research purposes. I'm interested in quantifying all the steps involved in the innovation-decision process for this cage-culture system.
  • asked a question related to Crowdsourcing
Question
5 answers
I am beginning a project in which participants will watch pornography. Any tips for me (such as validated/approved clips to use)? We are in the beginning stages of the project and could use any advice you have to give. 
Relevant answer
Answer
A key issue is ethics and human rights. Be familiar with the Belmont Report, be sure to go through an IRB, and have a well-constructed consent form. People have different definitions of pornography, so be sure they know what they will be seeing. I heard someone say that they couldn't define pornography, but they'd know it if they saw it. Just in case the movies trigger adverse reactions, have a trained therapist or other means of dealing with them. This might seem like a lot of precautions, but in your study I feel they are probably needed to protect the participants and you.
  • asked a question related to Crowdsourcing
Question
2 answers
I am interested in possible methods of evaluation of the collected data.
Relevant answer
Answer
Hi! Thank you for your answer; this was a question from 3 years ago :) I have used Krippendorff's alpha so far, but am always looking for advice. I will look at your papers! Thanks and greetings from Passau!
  • asked a question related to Crowdsourcing
Question
3 answers
We would be most appreciative if you could share with us, from your own experience and knowledge, how this fascinating emerging tool is or will be used.
*Crowdsourcing is defined as taking a job that is traditionally performed in an organization by employees and outsourcing it to a crowd of undefined network of people (non-employees) in the form of an open call. For example, food companies or regulators may ask customers to tweet or to share their posts regarding potential food safety issues.
If no:
- Can you briefly explain why not?
- Can you envision where and how it could be applied for food quality and/or safety
- Do you have an estimation on the time and/or tools required for implementation?
If yes:
- What type of crowdsourcing practices do you use?  
- Why do you use crowdsourcing? 
- Are there any specific benefits?
- Are there any specific drawbacks?
- Can you describe specific example (s)?
- Can you estimate the typical time or the duration?
 Any additional points to share?
Thank you!
Relevant answer
Answer
The credibility of scientists - like that of politicians - has dropped in the perception of the general public dramatically in the last years.
So yes, it would be nice to have the judgements of real experts, but I do not believe that their opinions will be accepted by those who we would like to convince. They will rather read (and believe) blogs on the internet than well documented articles in respected journals or magazines. They can always find some "expert" that writes or says something that appeals to them. To complicate things even further, scientists often disagree on the risks of certain food items or environmental issues.
What solutions do we see for that enormous and apparently growing problem?
  • asked a question related to Crowdsourcing
Question
5 answers
I am exploring platform design methodologies.
Since the term "platform" has diverse meanings and is used in diverse areas, the focus of platform design methodologies also seems to vary.
From the business perspective,
- Platform design toolkit (http://platformdesigntoolkit.com/toolkit/)
- Platform revolution (although this book seems to introduce the key components and key functions rather than suggesting a step-by-step platform design method)
Meanwhile, from the software development perspective,
- waterfall model
- V-model
- Agile software development
have been found. 
I wonder if there are other platform design methods which can be used in developing a platform to facilitate the participation and collaboration of people in open innovation or crowdsourcing projects.
Relevant answer
Answer
There is an innovation systems platform that allows interactive collaboration among participants to initiate innovation, usually in agricultural extension, where linkage is fostered among the organizations, farmers, extension officers and marketers.
This allows interaction between all actors, therefore allowing innovation to sprout anywhere within the system, not necessarily from the research hubs - that is, the conventional ones.
  • asked a question related to Crowdsourcing
Question
5 answers
I am doing my research in the factors that motivate the crowdfunders to invest in reward based crowdfunding projects. I am looking for a scale to measure the motivation for investors. Please provide the solution for the same.  
Relevant answer
Answer
Thank you!
  • asked a question related to Crowdsourcing
Question
4 answers
Hi everyone, 
I'm a student at Business Academy Aarhus working on a project linked to Instant Entrepreneurship and Crowdthinking, while I have analysed certain trends that indicate early signs for a potential expanding industry it seems that the majority of the papers or sources cover Crowdsourcing vs Crowdthinking. 
I'd appreciate any guidelines on this matter.
Thanks,
Juan 
Relevant answer
Answer
You might be interested in the attached paper on open innovation alliances and communities.
  • asked a question related to Crowdsourcing
Question
4 answers
The user contributions in the form of data and information are processed by using crowd-sourced human computations to generate knowledge in a knowledge management system.
Need pointers to similar research and any formal approaches to describe the process of knowledge creation in community crowds.
Relevant answer
Answer
@Franz Plochberger. The user contributions are artifacts containing explicit knowledge produced by humans using their cognition. Machine computations are used to facilitate the distribution of tasks, or to capture, share, etc. the human responses. Kindly see the attached link.
  • asked a question related to Crowdsourcing
Question
2 answers
I am trying to develop air quality simulations using the WRF-Chem model. We do not have emission inventories in Costa Rica; that's why I am using the Prep-Chem-Sources tool to generate the emissions. Is there anybody interested in helping me with the project?
I am a pioneer and there is nobody with expertise on this topic here in my country. I would like to contact experts who could help.
P.S. I am sorry for my nonspecific question. I am just trying to persuade interested people because Costa Rica has not enough budget to invest on this. I already have a good progress on my project but I need more training. Please contact me for details.
Thanks!
Jeff
Relevant answer
Answer
Hi Jeff,
I know of two people that may be able to help. One who is a data scientist at Johns Hopkins who is involved in studies of air quality and its effects on population health and the other is a researcher at Duke. If you send me your contact info I can do an email introduction. I'm at paul_courtney@dfci.harvard.edu
Cheers,
Paul
  • asked a question related to Crowdsourcing
Question
8 answers
I am interested to know what are the advantages and disadvantages of collaborative crowdsourcing as compared to competitive crowdsourcing. When would you choose one over the other?
Relevant answer
Answer
I think the way to understand the advantages and disadvantages of collaborative crowdsourcing vs competitive crowdsourcing is to define the two terms, with some use cases:
Collaborative Crowdsourcing - completing a task by decomposing it into sub-tasks and assigning the sub-tasks to different individuals or groups with different skillsets. This approach puts all individuals or groups in collaborative mode instead of competitive mode. Examples of collaborative crowdsourcing include several open-source IT projects, like the development of the Linux OS and its variants, or OpenStack, which develops a cloud-computing IaaS. You can also refer to this link for an article:
Competitive Crowdsourcing - selecting the best contribution(s) by getting individuals to share the tasks they have completed so that they compete with each other for their contribution to be selected as the best. Examples of competitive crowdsourcing include:
  1. Development, shortlisting & selection of mascots for the World Cup, Olympics, etc.
  2. Kodak's "Go for the Gold" contest, i.e. Kodak asked anyone to submit a picture of a personal victory.
  3. Toyota's first "Dream car art" contest, i.e. children were asked globally to draw their 'dream car of the future'.
  • asked a question related to Crowdsourcing
Question
3 answers
How is socially aware crowdsourcing different from traditional multicasting?
Relevant answer
Answer
Dear Dipti,
Photos obtained via crowdsourcing can be used in many critical applications. Due to the limitations of communication bandwidth, storage and processing capability, it is a challenge to transfer the huge amount of crowdsourced photos.
  • asked a question related to Crowdsourcing
Question
6 answers
is there any alternative to 'crowd signals' to collect real data from mobile users across different countries?
Relevant answer
Answer
Of course there are alternatives -- as long as the users are willing to keep an app open.
What sort of data are you looking for?  Anything that can be collected with or by the phone (which is a larger space than you might believe) is easy.  If you include user input, virtually anything is possible.
This question would have been easier to answer if you had provided more details.  Up-votes appreciated if you found this useful.
  • asked a question related to Crowdsourcing
Question
8 answers
As many organizations are adopting open innovation or crowdsourcing as their new strategy, their organization design also needs to be adjusted to support these strategic changes. As I understand it, such self-organizing organizations should be designed by a different approach from traditional organization design. That is, the activities of members should be encouraged by designing the protocols and environment of the organization rather than directly controlled in a hierarchical structure. When the members are perceived as the users of the protocols and working environment of an organization, it can be inferred that participatory design can contribute to improving organizational design.
In this context, I wonder if there are any previous studies on organizational design methods or tools to support the participation of organization members in designing their own organizations. If not, would the attempts to develop such organization design methodologies be necessary or helpful for improving the performance of organizations undertaking open innovation or crowdsourcing?
Relevant answer
Answer
Sojung, I've been engaged with researching selfmanagement, selforganization and circularity (incl. its impact on hierarchy, control, innovation and governance) for a number of decades. In my recently published "The Quest for Professionalism", I took stock of the body of knowledge in these areas.  For other overviews, see my TU/e website as well as an entry on the MIX platform.
  • asked a question related to Crowdsourcing
Question
3 answers
I'm seeking to develop empirical models that describe the options and decision points an enterprise may consider when engaging an online community in crowdsourcing. Design Science Methodology (Peffers et al, 2007) seems to largely fit my requirements but I'm struck by the usual use of this methodology in an Information Systems context rather the more broad social sciences approach I'm considering. Can I appropriate Design Science Methodology in this context or am I missing a more suitable alternative?
Relevant answer
Answer
I'd recommend looking into the works of Joan van Aken, as Alexandre also highlighted, who wrote a lot about the application of DSR to social science questions. He is on RG so you could also contact him directly.
  • asked a question related to Crowdsourcing
Question
2 answers
I'm currently performing an experiment with a component of human-based computation. People try to cluster images generated from gene expression (self-organizing maps) using their own perceptual capabilities. I'm gathering data in a fashion intended to resemble hierarchical clustering. At the end, every person should classify thirty images into six groups of five members according to perceived similarity (perhaps performing better than the usual algorithms due to Gestalt perception). The thing is that at the end I'll have many sets of images arranged in a particular way for each person (but always 6 groups and 5 members), and I'd like to have a consensus cluster to contrast with computational methods and with the real known labels of the data. I'd appreciate any suggestions on how I should generate this consensus cluster, namely algorithms, programs, papers.
Thank you beforehand and do not hesitate to ask for additional details if needed.
Best regards.
Daniel.
Relevant answer
Answer
Hi, David. Thanks for your answer. I've actually been reconsidering the real necessity of the structure of 6 groups of 5 members. Initially I constrained it to 30 randomized SOMs, thinking of a reasonable size for human analysis, and I made sure to choose 5 members from each of the 6 molecular subtypes that I know beforehand, so I thought that steering it towards 6 groups of 5 members each could help. But now I think it has no real benefit, especially considering that people struggle to achieve that particular structure.
I've been considering using  Cluster-based Similarity Partitioning Algorithm (CSPA) from Strehl and Ghosh http://strehl.com/download/strehl-jmlr02.pdf 
This algorithm has a very basic heuristic: it basically reverse-engineers each individual clustering by binarizing the data, giving 1 to pairs of elements that belong to the same group in that clustering and 0 otherwise. This creates a similarity matrix that can then be averaged over all the clusterings; applying any other clustering method to the averaged matrix yields the consensus cluster.
What do you think about this, David?
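For what it's worth, the binarize-and-average step can be sketched in a few lines of Python (a toy illustration, not the CSPA reference implementation; the toy partitions are made up):

```python
def coassociation_matrix(partitions):
    """Average co-association matrix over many partitions.

    partitions: one label list per annotator, all of length n
    (here n would be the 30 images, labels 0..5 for the six groups).
    Entry [i][j] is the fraction of annotators who placed items
    i and j in the same group.
    """
    n, m = len(partitions[0]), len(partitions)
    return [[sum(p[i] == p[j] for p in partitions) / m for j in range(n)]
            for i in range(n)]

# Two toy annotators over 4 items
S = coassociation_matrix([[0, 0, 1, 1], [0, 1, 1, 1]])
print(S[2][3], S[0][1])  # 1.0 (always together) and 0.5 (together once)
```

The averaged matrix can then be fed to any similarity-based clustering method (e.g. hierarchical clustering) to extract the consensus groups.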
  • asked a question related to Crowdsourcing
Question
4 answers
I would like to model the behaviour of social network users, who may commit various crimes while pretending to be your friend. I have looked at Bayesian networks, Fuzzy Logic, Neural networks, etc. However, there are some missing data/information that may not be gotten easily about the smart (read criminal) social network users. What would be the best AI algorithm to use in such scenario?
Thanks in advance.
Regards,
Paul. 
Relevant answer
Answer
Of course your modeling method also depends on what is feasible to compute.  You can look at related applications and how they apply the different modeling techniques.  For example:
Kolter, Jeremy Z., and Marcus A. Maloof. "Learning to detect malicious executables in the wild." Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2004.
uses a naive Bayes approach for detecting malicious executables. Applied to your problem, you would have to define the features yourself (e.g. related to the interconnection patterns, vocabulary, ...), after which you can learn the mapping from these features to what you want to detect.
I imagine that in your case it is difficult to have a good data set to learn from (i.e. a list of known criminals).
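To make the feature-definition step concrete, here is a minimal, self-contained sketch of a Bernoulli naive Bayes classifier over hand-defined binary features; the feature names, labels and training rows are purely hypothetical:

```python
import math

def train_bernoulli_nb(X, y, alpha=1.0):
    """Bernoulli naive Bayes with Laplace smoothing over binary features."""
    classes = sorted(set(y))
    n_features = len(X[0])
    prior, cond = {}, {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        prior[c] = len(rows) / len(X)
        # smoothed P(feature_j = 1 | class c)
        cond[c] = [(sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
                   for j in range(n_features)]
    return prior, cond

def predict(model, x):
    prior, cond = model
    def log_posterior(c):
        lp = math.log(prior[c])
        for j, v in enumerate(x):
            p = cond[c][j]
            lp += math.log(p if v else 1 - p)
        return lp
    return max(prior, key=log_posterior)

# Hypothetical hand-defined features per account:
# [mass friend requests, link-heavy messages, reused profile photo]
X = [[1, 1, 1], [1, 0, 1], [0, 0, 0], [0, 1, 0]]
y = ["suspicious", "suspicious", "benign", "benign"]
model = train_bernoulli_nb(X, y)
print(predict(model, [1, 0, 1]))  # "suspicious"
```

The hard part, as the answer notes, is not the classifier but obtaining labelled examples and features that actually separate the two classes.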
  • asked a question related to Crowdsourcing
Question
3 answers
I am new to research on crowdsourcing in data mining, and I want to know what resources exist for classification of data instances where the labels are collected via crowdsourcing. Please help me.
Relevant answer
Answer
You could also try Amazon Mechanical Turk.  This gives you access to workers who are used to doing this sort of task.
  • asked a question related to Crowdsourcing
Question
1 answer
Are there any self-affirmation manipulations that are especially suitable for MTurk? The Allport-Vernon scale seems a little long for MTurk...
Thanks :)
Relevant answer
Answer
It depends on what you want to achieve,  but Value Affirmation (Cohen et al., 2006; 2009) can be used online. A pen-and-paper method was found to protect the self against perceived threats and was very successful in narrowing racial gaps in academic success. 
  • asked a question related to Crowdsourcing
Question
1 answer
The Abundance-based Coverage Estimator (ACE) is a modification of the Chao & Lee (1992) estimator discussed by Colwell & Coddington (1994). Does anyone know how to evaluate the estimator without knowing the real species numbers?
For example, if I have a sample containing 10 individuals, ACE will give out a result; while if I have a sample containing 20 individuals, ACE will give a better result. How to evaluate the difference between the two results?
Relevant answer
Answer
There are many measures you can use for evaluating the difference between sets, such as:
1. bias-variance estimation;
2. mean square error based on the hypothesis and target sets for each data set;
3. t-test, chi-square, Friedman test, etc.;
4. basic statistics, such as interquartile range, mean, standard deviation, mode, and variance for each set;
5. the correlation coefficient for each set.
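As an illustration of the resampling idea above, here is a sketch that bootstraps the standard error of a richness estimate so the 10- and 20-individual samples can be compared. The bias-corrected Chao1 estimator is used as a simpler stand-in for ACE (ACE itself involves sample-coverage terms omitted here), and the sample data are made up:

```python
import random
from collections import Counter

def chao1(sample):
    """Bias-corrected Chao1 richness estimate (a simpler relative of ACE,
    used here only as a stand-in for illustration)."""
    counts = Counter(sample)
    s_obs = len(counts)
    f1 = sum(1 for c in counts.values() if c == 1)  # singletons
    f2 = sum(1 for c in counts.values() if c == 2)  # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def bootstrap_se(sample, n_boot=1000, seed=0):
    """Standard error of the estimator via resampling individuals."""
    rng = random.Random(seed)
    ests = [chao1(rng.choices(sample, k=len(sample))) for _ in range(n_boot)]
    mean = sum(ests) / n_boot
    return (sum((e - mean) ** 2 for e in ests) / (n_boot - 1)) ** 0.5

small = ["a", "a", "b", "c", "c", "d", "e", "f", "g", "g"]            # 10 individuals
large = small + ["a", "b", "c", "h", "h", "d", "e", "i", "f", "g"]    # 20 individuals
for s in (small, large):
    print(len(s), round(chao1(s), 2), "+/-", round(bootstrap_se(s), 2))
```

Overlapping (or clearly separating) bootstrap intervals give one way to judge whether the two estimates differ meaningfully without knowing the true species count.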
  • asked a question related to Crowdsourcing
Question
6 answers
Ipeirotis published "Demographics of Mechanical Turk" in 2010.
I think some of the data from there was quoted in Paolacci et al's "Running experiments on amazon mechanical turk. Judgment and Decision making"(2010)
The demographics of MTurk are important for any researcher. Does anybody know of any more recent research on this topic?
Relevant answer
Answer
We have a review coming out in a month or so in the Annual Review of Clinical Psychology. It includes a fairly detailed description of worker demographics. Keep an eye out for it in early 2016
Chandler, J., & Shapiro, D. (2015). Conducting Clinical Science Research on Amazon Mechanical Turk. Annual Review of Clinical Psychology, 12(1).
  • asked a question related to Crowdsourcing
Question
5 answers
I need to use it in my data collection. My proposed sample is working employees, since I would like to assess their psychological contract breach as an initial study.
Relevant answer
Answer
I have a new (longer) paper coming out in the Annual Review of Clinical Psychology on using MTurk samples to conduct research. It (unsurprisingly) has a  clinical psych focus, but we also discuss some of the issues surrounding demographics and data quality in a lot more detail than in earlier publications. Message me in early 2016 if you would like a copy of the paper. 
  • asked a question related to Crowdsourcing
Question
4 answers
I get asked a lot of questions from my students and mentees about using MTurk. How do editorial boards view it in terms of credibility? What are the good and bad sides of it in terms of data collection, cost, and publishability?
Relevant answer
Answer
Christine, you might check Mason & Suri, 2012.  We cited them in two new internet-based papers in Developmental Psychology, both Luthar & Ciciolla, (2015)
Who mothers mommy?  Factors that contribute to mothers' well being, and  What it feels like to be a mother: Variations by children's developmental stages.  
Hope this helps!
  • asked a question related to Crowdsourcing
Question
8 answers
I would like to find a way to use Amazon Mechanical Turk for my own survey application, but it seems it is not possible for those who are not residents of the USA or do not have a US credit card.
I found some services such as Crowdflower, crowdguru.de and Smartsheet, which use MTurk as a ground layer and build upon it, and also some other similar platforms such as CloudCrowd or Samesource.
However, I am not quite sure which one best suits my goal, and what the limitations are for the tasks that we can put on MTurk (if there are any).
I really appreciate all your ideas,
Relevant answer
Answer
Hi Mona,
Co-founder of Prolific Academic here. We've developed an alternative to MTurk tailored for academic research, and we are accessible worldwide. Over 200 researchers have completed more than 750 studies on our platform so far.
New users can run a trial study for free: https://www.prolific.ac/rr?ref=5ZFZ276D. You can find more information here: www.prolific.ac
Should you have any questions about the platform please don’t hesitate to get in touch.
Cheers,
Katia
  • asked a question related to Crowdsourcing
Question
14 answers
Hi everyone,
I'm currently looking into some crowdsourcing platforms which can be funded by non Americans.
Has anyone had any experience with Prolific Academic (link attached)?
The pricing is higher than MTURK, but seems to be lower than the marketing-research platforms.
Any and every reply would be much appreciated!
Thank you,
Atar 
Relevant answer
Answer
Hello all,
I have recently completed running my study on Prolific and thought to circle back to you with my personal experience.
I found the platform to be very easy to work with, simple and straightforward. This goes for:
1. Inputting the required information from the researcher's end (e.g., URL, description, restrictions on subject pool, participant fees and flexible commission) 
2. Reviewing the study status while still live (i.e., number of participants completing the study, average time spent on study)
3. Reviewing submissions and paying participants 
Importantly, I have received superb support from the Prolific team. Response time was minimal (usually within an hour) and my questions/needs were always fully answered.
Since I haven't used MTURK prior to this, I won't be able to compare the two, but I do highly recommend trying out the Prolific platform if you're looking for an MTURK alternative or are interested in trying out a new platform.
Attached is a link to a short youtube demo I checked out before running my study. The video also includes a short comparison to MTURK.
I hope this was helpful!
  • asked a question related to Crowdsourcing
Question
7 answers
More precisely, I am trying to figure out how crowdsourcing based on creative activities can affect the organization of the innovation process.
Relevant answer
Answer
I am enclosing a recent case study
  • asked a question related to Crowdsourcing
Question
5 answers
I am doing a research on the use of crowdsourcing and social media for service and product development in the hospitality industry (hotel). 
Please help me in finding good articles and journals about this topic. 
Thank you so much for your help!!
Relevant answer
  • asked a question related to Crowdsourcing
Question
7 answers
I would like to perform a regression analysis with several cultural factors (Uncertainty Avoidance, Individualism vs Collectivism, Social Media Penetration, Ease of Getting Credit, Demography of a Society) and crowdfunding volume as the dependent variable.
The research question is: "How can you measure the cultural adoption of a new and disruptive financial technology like crowdfunding?"
Could you recommend a suitable theoretic framework for this topic?
Thank you very much in advance.
  • asked a question related to Crowdsourcing
Question
4 answers
I wrote a PHP Movie Survey App for which I would like to use Amazon Mechanical Turk in order to collect my dataset (users preferences on movies). 
I found that Amazon MT is only available to US residents, so I searched for similar services in Europe, like Crowdflower, Scalableworkforce, Crowdguru.de and many more.
However, I am not sure if any of them will let me post the survey link with minimal limitations on the tasks.
I do appreciate if someone has the background or experience on any similar services.
Thanks in advance,
Mona
Relevant answer
Answer
There are clickworkers based in Germany, but they left me hanging once so I would not recommend them.
  • asked a question related to Crowdsourcing
Question
10 answers
Crowds evaluate contributions.
It seems that "likes" are mainly social-grooming stuff, not a really good predictor of content quality. At least they show "popularity", but the link between popularity and quality is not evident.
Stars and more complex scales seem better (Riedl et al. 2010), but I didn't find literature about newer options (e.g. emotional icons).
Any ideas? I'll give you at least 4 stars! ;-)
Thanks
Relevant answer
Answer
Spoken in another way, you are dealing with the issue of social effects on online systems and want to somehow moderate these effects.
You could go by traditional crowd-wisdom studies à la Surowiecki (crowd wisdom requires diverse groups who are independent and have local knowledge) and Jaron Lanier (the crowd-wisdom idea will only work when the crowd does not define the question).
Golub, B., & Jackson, M. O. (2010). Naive learning in social networks and the wisdom of crowds. American Economic Journal: Microeconomics, 112-149, has a nice model using DeGroot learning that theorizes how wisdom can occur when independence cannot be controlled (i.e. in almost every crowdsourcing case). It would be interesting to conduct an experiment to test interfaces using this model as an evaluative tool.
  • asked a question related to Crowdsourcing
Question
4 answers
Hi all, I am interested in finding a better approach to protecting sensitive data in crowdsourcing. Many techniques have been used, like k-anonymity, and researchers have made a lot of enhancements to it. If anyone has feedback that could help me find another, more efficient technique, I would appreciate it.
Relevant answer
Answer
What you are looking for is differential privacy and its derivatives. k-anonymity and its family, like t-closeness and l-diversity, are syntactic techniques that are considered weak. Also, don't confuse security of data, achieved through cryptographic techniques, with privacy of data; privacy is much harder to achieve.
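To give a concrete sense of differential privacy, here is a minimal sketch of its basic building block, the Laplace mechanism (the count and parameters are purely illustrative):

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a numeric query result with epsilon-differential privacy
    by adding Laplace(sensitivity / epsilon) noise (inverse-CDF sampling)."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_value + noise

# A counting query has sensitivity 1: adding or removing one person's
# record changes the count by at most 1.
rng = random.Random(42)
true_count = 128
noisy = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5, rng=rng)
print(round(noisy, 2))
```

Smaller epsilon means stronger privacy but more noise; unlike k-anonymity, the guarantee holds regardless of what background knowledge an attacker has.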
  • asked a question related to Crowdsourcing
Question
5 answers
I am interested in finding some techniques to protect the privacy of sensitive data which is to be published on social media, for example.
Relevant answer
Answer
Privacy is often a misunderstood paradigm. If data was collected, it will be used ... and abused. Therefore, while collecting data, it is good practice to make sure it is collected in a way that it cannot be correlated without adequate effort, and only by those who can potentially understand what it is. Also, any extra directly correlating parameters should strictly be left uncollected. Otherwise the data will carry an innate privacy risk.
  • asked a question related to Crowdsourcing
Question
3 answers
I'm interested in collecting proximity data from children during school playtime for social network analysis. I would like to get as objective data as possible. I know you can get proximity trackers for animals, and I've heard there are ones available for human adults (i.e. that you wear as a tag around the neck to observe proximity data in offices), but I haven't seen them. I was wondering if anyone knew a) anything about the human ones and if they are child-friendly/robust, or b) whether devices used on animals can be adapted for use on children?
Any advice would be greatly received!
Relevant answer
Answer
Thank you both very much. I've contacted David and Sandy so hopefully they will be able to help. Thanks again.
With regards to ethics - we'd only be collecting data during school playtime, and one important facet is that we want technology that provides no information about where the individuals are (geographically), purely whether and when they are within X metres of another proximity logger (all coded, so no IDs are used). So we would have no idea where participants are, just which tags are close to each other and for how long, if that makes sense.
  • asked a question related to Crowdsourcing
Question
1 answer
I'm looking at using crowdsourcing to develop ODL course materials. Want to know whether there are existing case studies.
Relevant answer
Answer
You can often find a similar or appropriate answer by searching for your keywords in Google Scholar; usually the first results will be the papers closest to your topic.
From my experience, this will help you a lot. If you still have a problem, do not hesitate to let me know.
Kind regards, Dr ZOL BAHRI - Universiti Malaysia Perlis, MALAYSIA
  • asked a question related to Crowdsourcing
Question
8 answers
Hi, are you aware of any open data set on crowdsourcing which includes information such as profiles of workers/tasks, records of behaviors, or completion status of tasks? Or is there any recommended method for obtaining such a data set quickly? Thanks!
Relevant answer
Answer
If you're looking for datasets which contains labels and anonymized worker ID, you may access the NIST TREC Crowdsourcing Track website (https://sites.google.com/site/treccrowd/). Since 2011, TREC Crowdsourcing Track annually distributes crowdsourcing datasets publicly to encourage academic research. In addition, you can find some other datasets from the following links: Dr. Matt Lease' lab in UT Austin (http://ir.ischool.utexas.edu/square/data.html) and Dr. Panos Ipeirotis' lab in NYU (https://github.com/ipeirotis/Get-Another-Label). 
  • asked a question related to Crowdsourcing
Question
7 answers
I have played a bit with NetLogo over the holidays and produced a very simple model which demonstrates a difference between crowdsourcing and crowd tasking. All it does is implement two types of agents which can report on "issues":
  • Type 1 ("reporters") walks randomly and spontaneously reports "issues" in their neighbourhood.
  • Type 2 ("observers") can be tasked to confirm the findings.
The model nicely demonstrates the obvious: taskable volunteers are much better at confirming the findings than the random walkers.
I see plenty of possibilities for improving this model, but I am not sure what to go for first. Motivation? Different task types? More intelligent tasking mechanism? GIS? Service API?
In your opinion, what is/are the most important features which this crowdsourcing/ crowd tasking model should implement?
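For concreteness, here is a minimal Python sketch of the same two agent types (an illustration under the model's stated assumptions, not the NetLogo code; grid size, step count and radius are arbitrary):

```python
import random

GRID, STEPS, RADIUS = 20, 200, 2

def dist(a, b):
    """Manhattan distance on the grid."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def simulate(n_reporters=10, n_observers=3, n_issues=15, seed=1):
    rng = random.Random(seed)
    issues = [(rng.randrange(GRID), rng.randrange(GRID)) for _ in range(n_issues)]
    reporters = [(rng.randrange(GRID), rng.randrange(GRID)) for _ in range(n_reporters)]
    reported, confirmed = set(), set()
    for _ in range(STEPS):
        # Type 1: reporters walk randomly and spontaneously report
        # any unreported issue within RADIUS of their position.
        reporters = [((x + rng.choice([-1, 0, 1])) % GRID,
                      (y + rng.choice([-1, 0, 1])) % GRID) for x, y in reporters]
        for r in reporters:
            for i, issue in enumerate(issues):
                if i not in reported and dist(r, issue) <= RADIUS:
                    reported.add(i)
        # Type 2: observers are tasked - each step, every observer is
        # sent directly to one unconfirmed report and confirms it.
        for i in sorted(reported - confirmed)[:n_observers]:
            confirmed.add(i)
    return len(reported), len(confirmed)

rep, conf = simulate()
print(rep, conf)
```

Even this toy version shows the asymmetry: tasked observers confirm reports deterministically, while random-walking reporters only discover issues by chance, so confirmation keeps pace with (but never exceeds) discovery.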
Relevant answer
Answer
Dear Denis,
I've never used NetLogo, but I find the idea to apply it to this domain very interesting. I have approached the differences between crowdsourcing and crowdtasking from the perspective of volunteers' involvement in disaster management efforts, trying to understand different forms of contribution and outcomes. Perhaps this paper can offer a few suggestions on different roles and levels of involvement: http://link.springer.com/chapter/10.1007/978-3-662-45960-7_19
Cheers,
Marta
  • asked a question related to Crowdsourcing
Question
4 answers
For those conducting research interviews in languages other than English, translation (and transcription) is often one of the major costs. Crowd-sourced translation services offer an affordable alternative, but they may raise issues such as the privacy of participant information. Have they been approved for use in your projects, at your institution? How is the issue of confidentiality dealt with?
Relevant answer
Answer
Translation raises similar problems to transcription: full anonymisation would be difficult to achieve given the translator/transcriber could infer identities, locations etc. from substantive content. Standard procedures usually require a confidentiality agreement to be signed by the translator/transcriber and this is something most research ethics committees I have been involved with would require. (And that should be supported by professional indemnity insurance.) Hence crowdsourced translation could not meet such a requirement. The only way round it would be not to guarantee anonymity to your respondents and to make it very clear to them how their 'data' would be handled; i.e. that it would be translated by crowdsourcing.
  • asked a question related to Crowdsourcing
Question
9 answers
It is known that students can crowdsource solutions to publishers' question banks.
But does anyone have experience of crowdsourcing from instructors (creating open educational materials) in a way that limits students' ability to see the questions in advance? Some vendors of classroom voting systems appear to be touting this.
Relevant answer
Answer
Given that students in large courses can game the system, as Gary pointed out, it might be worth turning this to your advantage. The 'PeerWise' tool from the University of Auckland provides an environment for students to author the questions and answers, comment on each other's questions and rank each other's questions. In this way the best questions rise to the top. It is a great way to get students thinking deeply about the course. It plays on the idea that the best way to learn something is to teach it to someone else.
  • asked a question related to Crowdsourcing
Question
9 answers
I would like to know of concrete, successful smart-city projects that have used sensors and crowdsourcing together to collect and present data using mobile applications. What is your experience?
Relevant answer
Answer
Hi Jaguaraci,
Check out IBM's work in Ireland; I believe they are doing exactly what you're looking for.
You should find references there on their work on smart cities, public transportation and crowdsourcing.
Also on water sensor networks:
Djellel Eddine Difallah, Philippe Cudré-Mauroux, Sean A. McKenna: Scalable Anomaly Detection for Smart City Infrastructure Networks. IEEE Internet Computing 17(6): 39-47 (2013)
  • asked a question related to Crowdsourcing
Question
3 answers
The growth of the crowd-sourced solutioneering phenomenon shows how people will engage with and support knowledge development. However, how much confidence can we have in a crowd-sourced answer when we don't know the participants, and some people may even be trying to obstruct the answers?
Relevant answer
Answer
This is a good question, especially since interest in CS as a research method has grown notably in recent years.
I've experimented with 2 pilots on Mturk and am just wrapping up my data collection on Crowd Flower for my dissertation. My second pilot may be of interest to you, since I ran it in both online and in-person settings with comparable results:
Look at articles by Lefever, Griffiths, Granello and Beaudoin regarding online research (not specific to CS): there is every indication that anonymity can actually increase trust in survey answers (in particular those about SES), with the added benefit of a decrease in survey bias.
Of course there are always those who will wish to provide obstructionist replies and subvert your research BUT the geographical and cultural diversity of your participant pool counters many of the detractors.
Although I've not had a chance to look at my latest data exclusively for this issue, there are a few things I've noticed. Since Crowd Flower has no quality control (unlike Mturk), I was only able to achieve about 50% 'good' data, meaning that the participants followed my instructions, filled out the survey and provided me with drawings. The main issue I had was that many provided photos that were not theirs; however, even those who did not follow instructions and provided photos still appear to have answered the survey questions truthfully (some of this I gauged by matching IPs to the location they provided in the survey). Within this half of participants, roughly 10% or more seemed to complete the survey without any concern for the questions. It's also important to note that this fluctuates from region to region: for example, larger, more tech-savvy populations tended to have more of these obstructionist users, who know the crowdsourcing system well and how to use/abuse it.
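The IP-to-location consistency check mentioned above can be sketched as a simple filter. The field names are hypothetical, and the lookup dict is a stand-in for a real IP-geolocation service, which a production pipeline would call instead.

```python
def consistent_rows(rows, ip_to_country):
    """Keep survey rows whose self-reported country matches the country
    geolocated from the respondent's IP. `ip_to_country` stands in for a
    real IP-geolocation lookup."""
    kept = []
    for row in rows:
        geo = ip_to_country.get(row["ip"])
        if geo is not None and geo.lower() == row["reported_country"].lower():
            kept.append(row)
    return kept

# Two respondents; only the first's IP geolocates to the country they reported.
rows = [
    {"ip": "203.0.113.5", "reported_country": "India"},
    {"ip": "198.51.100.7", "reported_country": "Brazil"},
]
lookup = {"203.0.113.5": "India", "198.51.100.7": "Canada"}
print(consistent_rows(rows, lookup))  # only the India row survives
```

A mismatch is of course only a signal, not proof of bad-faith answers (VPNs, travel, and coarse geolocation all produce false positives), so flagged rows are better reviewed than silently dropped.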
There is still very little literature on this emerging research niche and I'm looking forward to pursuing it further in the future.
Have you used any of the CS platforms for research?
  • asked a question related to Crowdsourcing
Question
5 answers
We are running a project which aims to utilize the wisdom of the crowd in the screening of search results. We've asked the crowd to identify randomised trials from 1000s of search results. So far 400 people have signed up to trial spot, and we've collectively identified around 1500 reports of randomised trials from the 40000 records screened. A great result. Find out more and start screening with us here: http://www.cochrane.org/news/tags/authors/become-embase-screener-cochranes-innovative-embase-project-now-open
Relevant answer
Answer
An interesting idea for some of the early-stage screening, if it really is as simple as identifying RCTs regardless of topic relevance. Have you thought of comparing this with any other method to see if you get better results? I have found simple searches through a Word version of references and abstracts using the 'Find' facility effective for this sort of task.
Beyond this simple stage, however, my experience of doing many reviews, and of reviewing other people's reviews, is that high-quality screening for relevance needs people who are well trained in the topic area. This avoids redundancy and the revisiting of results later on, when the main investigators detect wrong decisions made at these early stages. There is also the danger of the research team losing the in-depth understanding of the shape of the literature that they gain from trawling through the searches. Yes, it's boring - but so is much that makes up good scientific endeavour!
More generally, I guess we need to continually improve our search techniques so that we get even greater specificity.
  • asked a question related to Crowdsourcing
Question
4 answers
Reward-based crowdfunding represents a massive shift in conventional stakeholder-developer relations, putting the reins firmly in the hands of developers rather than user-stakeholders. I've just finished a thesis project on the types of crowdsourcing that are used in crowdfunded design projects to leverage the crowd's creative capacities, and am interested in further examining the impact that this shift has on design and development processes, as compared to conventional top-down investment.
Relevant answer
Answer
C. Lewis Kausel,
In the design side of reward-based crowdfunding, the project founder generally has an idea for a product or service and is soliciting funding. The people who support such projects via financial contribution are people who expect to be future users of that product or service, and can be expected to be enthusiastic about project outcomes.
Crowdsourcing is structurally quite distinct from crowdfunding, in that it's about leveraging the crowd's creative labor rather than their financial resources, but the two aren't mutually exclusive. In fact, a barrier to crowdsourcing is often that you need to motivate the crowd to contribute. As described above, the backers of products or services in reward-based crowdfunding are enthusiastic and motivated, meaning that they are good sources of creative feedback for such projects.
Common types of participatory practices include providing backers with access to beta testing of software, voting on elements of project outcomes, providing backers with developer kits to generate supplemental materials, or entirely open-sourced projects created via crowdfunding. Currently the research on this topic is just starting, so we haven't identified whether these different types of participation affect project outcomes appreciably or differently, but we have found that each type is prevalent in different crowdfunding contexts.
This is just a quick introduction to the topic, but I hope it addresses your question. I think that there are a variety of ways in which the dynamics of crowdfunding impact the design and development process of new products and services in modern society, and would like to encourage researchers to start trying to measure these changes.
  • asked a question related to Crowdsourcing
Question
2 answers
Not officially possible on Amazon for example.
Relevant answer
Answer
You will need an SSN or TIN to register, due to new federal regulations.
CF is pretty good in terms of the diversity of its contributors; however, after 2 months I'm still unable to reach people in Sub-Saharan Africa, East Asia and the Middle East.
One thing to keep in mind is CF has less quality control than Mturk and it takes significantly more tweaking of surveys to get good data, at least using an external self-hosted survey as I have.
  • asked a question related to Crowdsourcing
Question
4 answers
I have come up with a framework as a result of my research and would like to validate it. Should I use a questionnaire or open-ended interviews to validate my work?
Relevant answer
Answer
Consider using a nominal group of experts or stakeholders or a Delphi technique. Jane
  • asked a question related to Crowdsourcing
Question
5 answers
Crowd-sourcing is very important nowadays, but what are the open research problems in it?
Relevant answer
Answer
There are several key issues being researched in this area. One is how to break a task down into ideally sized work packages suitable for the crowd. A second is how to conduct QA on the work performed by the crowd. Another is how to automate the process of task assignment, QA and remuneration in a large crowdsourced project. One more area of research is the project-management challenges faced in such projects. I have done some work in pre-crowdsourcing, which was virtual collaboration. My research student is currently working on a framework for crowdsourced project automation. The paper is currently being drafted; I will share it with you once done.
Hope this helps
Ishan
  • asked a question related to Crowdsourcing
Question
8 answers
I would like to know the latest IT techniques and research questions (business models, technologies to be resolved, ...) in the domain of crowdsourcing and citizen science.
Relevant answer
Answer
I agree that data quality is a major problem in citizen science and crowdsourcing. Our group at Memorial University of Newfoundland has been working on this problem for the past few years. We argue that, in addition to expertise, data quality is rooted in the way we model citizen science applications (e.g., in the way we do database design).
Please feel free to refer to some of the arguments about participation-data quality trade-off, and data quality as a function of conceptual modeling: