Lab: ONEKIN
Department: Computer Languages and Systems
About the lab
Topics of Interest:
- Software Product Lines
- Browser extensions
- Chatbots
- Design Science Research
- Open Source Software Engineering
Featured research
Implicit feedback is the collection of information about software usage to understand how and when the software is used. This research tackles implicit feedback in Software Product Lines (SPLs). The need for platform-centric feedback makes SPL feedback depart from one-off-application feedback in both the artefact to be tracked (the platform vs. the variant) and the tracking approach (indirect coding vs. direct coding). Traditionally, product feedback is achieved by embedding 'usage trackers' into the software's code. Yet, products are now members of the SPL portfolio, and hence this approach conflicts with one of the main SPL tenets: reducing, if not eliminating, coding directly into the variant's code. Thus, we advocate for Product Derivation to be subject to a second transformation that precedes the construction of the variant based on the configuration model. This approach is tested through FEACKER, an extension to pure::variants. We resorted to a TAM evaluation with pure-systems GmbH employees (n = 8). Observed divergences were next tackled through a focus group (n = 3). The results reveal agreement on the interest in conducting feedback analysis at the platform level (perceived usefulness) while regarding FEACKER as a seamless extension to pure::variants' gestures (perceived ease of use).
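FEACKER's internals are not given in the abstract; the following is a minimal sketch of the underlying idea of a "second transformation", assuming a configuration model that lists selected features. Instead of hand-coding trackers into each variant, instrumentation is derived from the configuration before the variant is built. All names (Feature, weave_trackers, emit_tracker, log_usage) are illustrative, not FEACKER's actual API.

```python
# Hypothetical sketch: derive usage trackers from the configuration model
# rather than hand-coding them into the variant (an assumption about how
# such a pre-derivation transformation could work, not FEACKER itself).
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    selected: bool  # taken from the variant's configuration model

def emit_tracker(feature: Feature) -> str:
    # One tracking snippet per selected feature; the platform, not the
    # variant developer, decides what gets instrumented.
    return f'log_usage("{feature.name}")  # auto-injected, not hand-coded'

def weave_trackers(configuration: list[Feature]) -> list[str]:
    # The "second transformation": runs before variant construction and
    # derives instrumentation solely from the configuration model.
    return [emit_tracker(f) for f in configuration if f.selected]

config = [Feature("reporting", True), Feature("export_pdf", False)]
for line in weave_trackers(config):
    print(line)
```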
The Web Accessibility Guidelines are designed to help developers ensure that web content is accessible to all users. These guidelines provide the foundation for evaluation tools that automate inspection processes. However, due to the heterogeneity of these guidelines and the subjectivity involved in their evaluation, humans are still necessary for the process. As a result, evaluating accessibility becomes a collaborative endeavor wherein different human experts and tools interact. Despite quickly being noticed by the W3C, it has largely been overlooked in the existing literature. Tool vendors often focus on providing a thorough evaluation rather than importing, integrating, and combining results from diverse sources. This paper examines an EARL-based document-centric workflow. It introduces a dedicated editor for EARL documents that accounts for the life-cycle of EARL documents where evaluation episodes feedback on each other. Expert evaluations were conducted (n = 5 experts), not so much about the tool itself but its ability to facilitate a collaborative approach.KeywordsWeb engineeringWeb accessibilityWeb accessibility evaluationBrowser extensionAggregation
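EARL is the W3C's Evaluation and Report Language: an RDF vocabulary in which each assertion records who evaluated what, against which test, and with what outcome, which is what makes results from different tools and experts mergeable. Below is a minimal sketch of one EARL assertion built as JSON-LD in Python; the subject URL, tool name, and success criterion are illustrative, and this is not the paper's editor.

```python
# A minimal sketch of a single EARL assertion as JSON-LD. The structure
# (Assertion / assertedBy / subject / test / result) follows the EARL
# vocabulary; the concrete values below are made up for illustration.
import json

assertion = {
    "@context": {
        "earl": "http://www.w3.org/ns/earl#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@type": "earl:Assertion",
    "earl:assertedBy": {"@type": "earl:Software", "dct:title": "SomeChecker"},
    "earl:subject": {"@type": "earl:TestSubject",
                     "@id": "https://example.org/page.html"},
    "earl:test": {"@id": "https://www.w3.org/TR/WCAG21/#non-text-content"},
    "earl:result": {
        "@type": "earl:TestResult",
        "earl:outcome": {"@id": "earl:failed"},
        # A human evaluator can later refine or override this outcome,
        # which is the "evaluation episodes feed back" part of the workflow.
        "earl:info": "Image missing a text alternative.",
    },
}

print(json.dumps(assertion, indent=2))
```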
This study investigates the evolution and trends of deepfake technology through a bibliometric analysis of the articles published on this topic, guided by six research questions: What are the main research areas of deepfakes articles? What are the main current topics in deepfakes research, and how are they related? What are the trends in deepfakes research? How do topics in deepfakes research change over time? Who is researching deepfakes? Who is funding deepfakes research? An analysis of the Web of Science and Scopus databases yielded a total of 331 research articles about deepfakes. These data provide a comprehensive overview of deepfakes research. Main insights include: the different areas in which deepfakes research is being performed; which areas are emerging, which are considered basic, and which currently have the most potential for development; the most studied topics in deepfakes research, including the different artificial intelligence methods applied; emerging and niche topics; relationships among the most prominent researchers; the countries where deepfakes research is performed; and the main funding institutions. This paper identifies the current trends and opportunities in deepfakes research for practitioners and researchers who want to get into this topic.
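The paper's pipeline is not published in the abstract; as a rough illustration of the kind of keyword co-occurrence counting that underlies bibliometric topic maps, here is a minimal sketch assuming records exported from Scopus or Web of Science as lists of author keywords. The toy records are invented.

```python
# A minimal sketch of keyword co-occurrence counting, the basic technique
# behind bibliometric topic maps (toy records; not the paper's pipeline).
from collections import Counter
from itertools import combinations

records = [  # author keywords per article, as exported from Scopus/WoS
    ["deepfakes", "face swapping", "GAN"],
    ["deepfakes", "detection", "GAN"],
    ["deepfakes", "detection", "forensics"],
]

cooccurrence = Counter()
for keywords in records:
    # Count each unordered keyword pair once per article.
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

# Strongly co-mentioned pairs hint at related topics.
for pair, count in cooccurrence.most_common(3):
    print(pair, count)
```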
Contribution: Instructors are leveraging open-source software (OSS) to expose their students to authentic examples of software problems. Recommender engines might assist students in selecting the right project based on metrics mined from project repositories (e.g., GitHub). This vision is realized through GitMate, a GitHub-based recommender that supports students in their OSS selection. Background: Contributing to OSS is a valuable way to immerse students in the realities of software development. When it comes to OSS selection, self-selection seems to be the most engaging alternative. Yet, students lack the time (and skills) to analyze project facets and draw comparisons among OSS projects. Research Questions: How can students be assisted in selecting a good OSS project to contribute to? Specifically, how might a recommender system help? The envisioned intervention should be useful not only in finding the right project but also in challenging students' initial selections with other alternatives, spurring reflection. Methodology: The aim is to act upon a dependent variable (mind changing in project selection) through an independent variable (project comparison). This is achieved through GitMate, a recommender system on top of GitHub. Its search facilities let students locate three projects of their choosing. Next, GitMate recommends similar projects based on project facets (e.g., number of committers, commits, and stars) mined from GitHub. By weighing the importance of distinct facets, students can then trade off different projects. The experiment checks whether students change their first choice. Findings: The results indicate that GitMate helps students compare GitHub projects to the extent of making them change their first choice. Nearly 80% of the students changed at least one project as a result of using GitMate. This suggests that GitMate is effective at its goal: fostering facet-based comparative thinking during OSS selection.
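GitMate's similarity metric is not given in the abstract; the sketch below illustrates one plausible form of facet-based project comparison: log-scale a few mined facets (committers, commits, stars), weight them, and rank candidates by distance to the student's pick. The project data, weights, and distance function are illustrative assumptions, not GitMate's implementation.

```python
# A minimal sketch of facet-based project comparison (assumed metric;
# GitMate's real similarity function is not published in the abstract).
import math

# Facets mined from GitHub: (committers, commits, stars)
projects = {
    "student-pick": (12, 840, 150),
    "candidate-a": (15, 900, 200),
    "candidate-b": (200, 50_000, 9_000),
}

WEIGHTS = (0.4, 0.3, 0.3)  # how much each facet matters (student-tunable)

def distance(p: tuple, q: tuple) -> float:
    # Weighted Euclidean distance on log-scaled facets, since repository
    # metrics such as stars and commits are heavily skewed.
    return math.sqrt(sum(
        w * (math.log1p(a) - math.log1p(b)) ** 2
        for w, a, b in zip(WEIGHTS, p, q)
    ))

pick = projects["student-pick"]
ranked = sorted(
    (name for name in projects if name != "student-pick"),
    key=lambda name: distance(pick, projects[name]),
)
print(ranked)  # candidates most similar to the student's pick come first
```

Letting students adjust the facet weights is what turns a plain ranking into the reflection device the paper aims for: changing a weight reorders the candidates and makes the tradeoffs between projects visible.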