User-Generated Content

Microsoft Research
IEEE Pervasive Computing (Impact Factor: 1.55). 01/2009; 7(4):10-11. DOI: 10.1109/MPRV.2008.85
Source: IEEE Xplore


Pervasive user-generated content takes the traditional idea of user-generated content and expands it off the desktop into our everyday world. The six articles in this special issue give innovative examples of gathering and using such content.

  • Source
    • "Recently, the proliferation of social media (Susarla et al. 2012) and crowdsourcing (engaging online users to work on specific tasks, see Doan et al. 2011) has further changed the IS landscape. There is growing interest in user-generated content (UGC) (Cha et al. 2007, Daugherty et al. 2008, Krumm et al. 2008), defined here as various forms of digital information (e.g., comments, forum posts, tags, product reviews, videos, maps) produced by members of the general public. These are often casual content contributors (i.e., the crowd) rather than employees or others closely associated with an organization. "
    ABSTRACT: User-generated content (UGC) is becoming a valuable organizational resource, as it is seen in many cases as a way to make more information available for analysis. To make effective use of UGC, it is necessary to understand information quality (IQ) in this setting. Traditional IQ research focuses on corporate data and views users as data consumers. However, as users with varying levels of expertise contribute information in an open setting, current conceptualizations of IQ break down. In particular, the practice of modeling information requirements in terms of fixed classes, such as an Entity-Relationship diagram or relational database tables, unnecessarily restricts the IQ of user-generated data sets. This paper defines crowd information quality (crowd IQ), empirically examines implications of class-based modeling approaches for crowd IQ, and offers a path for improving crowd IQ using instance-and-attribute based modeling. To evaluate the impact of modeling decisions on IQ, we conducted three experiments. Results demonstrate that information accuracy depends on the classes used to model domains, with participants providing more accurate information when classifying phenomena at a more general level. In addition, we found greater overall accuracy when participants could provide freeform data compared to a condition in which they selected from constrained choices. We further demonstrate that, relative to attribute-based data collection, information loss occurs when class-based models are used. Our findings have significant implications for information quality, information modeling, and UGC research and practice.
    Information Systems Research 12/2014; 25(4):669-689. DOI: 10.1287/isre.2014.0537 · 2.15 Impact Factor
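The distinction the abstract draws between class-based and instance-and-attribute-based modeling can be illustrated with a minimal Python sketch. All names here (BirdSighting, make_observation, the sample attributes) are hypothetical, chosen only to show how a fixed schema constrains casual contributors while attribute-value records do not; this is not code from the cited study.

```python
from dataclasses import dataclass

# Class-based modeling: the schema fixes up front which attributes exist.
# A contributor must fill every field, even ones they cannot observe,
# which is one source of the information loss the study describes.
@dataclass
class BirdSighting:
    species: str        # forces a species choice, even if the observer is unsure
    location: str
    wingspan_cm: float  # unobservable for most casual contributors

# Instance-and-attribute modeling: each observation is a bag of
# (instance, attribute, value) triples; contributors record only
# what they actually saw, at whatever level of generality they can.
def make_observation(instance_id, **attributes):
    return [(instance_id, attr, value) for attr, value in attributes.items()]

# A casual contributor who cannot identify the species can still contribute:
obs = make_observation("obs-17", color="blue", location="backyard",
                       behavior="nesting")
```

Under the class-based schema this observation would be rejected or distorted (no species, no wingspan); under the attribute-based representation it survives intact, which mirrors the paper's finding that freeform, attribute-level collection avoids the accuracy penalty of constrained class choices.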
  • Source
    • "A growing trend in organizations is to use information produced outside organizational boundaries by various entities, such as suppliers, business partners, potential customers, and the general public [Zwass 2010, Hand 2010]. A particularly promising source of externally-produced data is user-generated content [Daugherty et al. 2008, Krumm et al. 2008], defined here as various forms of digital information (e.g., comments, forum posts, tags, product reviews, videos, maps) produced by members of the general public – who often are casual content contributors (the crowd) – rather than by employees or others closely associated with an organization. There is a growing realization that ordinary users can contribute a large volume of meaningful and timely information and support discoveries and insights [Hand 2010]."
    ABSTRACT: As more organizations rely on externally-produced information, an important issue is how to create appropriate conceptual models of these domains to effectively guide information systems development. Considering the limitations and consequences of traditional approaches, we propose a "lightweight" modeling alternative to traditional "class-based" conceptual modeling as typified by the Entity-Relationship model. We provide proof of concept and empirically evaluate the approach using a real-world crowdsourcing project, NLNature.
    Workshop on Information Technology and Systems (WITS), Milan 2013; 12/2013
  • Source
    • "In recent years, there has also been much discussion of user-generated content within the context of Web 2.0 [37]. Most relevant to this study, software users often share information and offer help to other users through blogs, online communities and discussion forums, instructional videos posted on YouTube, and more."
    ABSTRACT: Tutorials and user manuals are important forms of impersonal support for using software applications, including electronic medical records (EMRs). Differences between user and vendor documentation may indicate support needs that are not sufficiently addressed by the official documentation, and reveal new elements that may inform the design of tutorials and user manuals. What are the differences between user-generated tutorials and manuals for an EMR and the official user manual from the software vendor? Effective design of tutorials and user manuals requires careful packaging of information, balance between declarative and procedural texts, an action- and task-oriented approach, support for error recognition and recovery, and effective use of visual elements. No previous research compared these elements between formal and informal documents. We conducted a mixed-methods study. Seven tutorials and two manuals for an EMR were collected from three family health teams and compared with the official user manual from the software vendor. Documents were qualitatively analyzed using a framework analysis approach in relation to the principles of technical documentation described above. Subsets of the data were quantitatively analyzed using cross-tabulation to compare the types of error information and visual cues in screen captures between user- and vendor-generated manuals. The user-developed tutorials and manuals differed from the vendor-developed manual in that they contained mostly procedural and not declarative information; were customized to the specific workflow, user roles, and patient characteristics; contained more error information related to work processes than to software usage; and used explicit visual cues on screen captures to help users identify window elements. These findings imply that to support EMR implementation, tutorials and manuals need to be customized and adapted to specific organizational contexts and workflows. The main limitation of the study is its generalizability. Future research should address this limitation and may explore alternative approaches to software documentation, such as modular manuals or participatory design.
    IEEE Transactions on Professional Communication 09/2013; 56(3):194-209. DOI: 10.1109/TPC.2013.2263649 · 0.66 Impact Factor

