Athena Research and Innovation Center in Information, Communication and Knowledge Technologies
Recent publications
The annotation of animated motion-captured segments is a challenging, interdisciplinary task, especially when it comes to characterizing movement qualitatively. The lack of intuitive, easy-to-learn-and-use frameworks is considered one of the biggest challenges in this process; another is the lack of approaches able to motivate a wide audience of users, from the broader public to dance experts, researchers and performers, to contribute annotations. In this paper we present Motion Hollow, a story-driven playful experience that uses metaphors based on Laban Movement Analysis, an established framework for movement analysis and annotation, to familiarize novice users with the process of qualitative characterization of dance moves. This work proposes a first step toward introducing movement annotation to non-expert users, and as such, its main goal is to explore the implications and potential of such an approach. The evaluation of the experience confirms its potential to transform the annotation of dance movement segments into an engaging and enjoyable experience, as well as to foster a deeper understanding of movement annotation both as a concept and as a process.
This paper presents a system that automatically triggers, in real time, events that improve accessibility and enhance the experience of theatrical performances, and it proposes and evaluates the core method employed therein. This method aligns, in real time, a set of subtitles created and synchronized by experts for a given “rehearsal” audio stream to a new “performance” audio stream. The performance stream may differ from the rehearsal not only in timing but also in spoken content. The method is built around an Automatic Speech Recognition (ASR) system that captures the performance stream in real time and generates suggestions for subtitle timing corrections. The system is evaluated in an experimental performance with 35 participants, while the core method is analysed independently using sixteen artificial rehearsal-performance pair scenarios. Objective and subjective evaluation reveals specific directions for improving the core method. The investigation of the system shows that this innovative approach is promising for improving accessibility and enhancing the theatrical experience of audiences.
Keywords: Artificial intelligence, Theatre, Automatic Speech Recognition, Subtitles alignment, LSTM
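To make the alignment idea concrete, the following is a minimal sketch of one plausible re-alignment step: locating a subtitle's text inside the live ASR word hypotheses and proposing a timing offset. The data structures, matching strategy and confidence threshold are assumptions for illustration, not the paper's actual ASR/LSTM-based method.

```python
# Illustrative sketch only: a minimal subtitle re-alignment step, not the paper's
# ASR/LSTM-based method. Names (Subtitle, align_offset) and the matching strategy
# are assumptions for illustration.
from dataclasses import dataclass
from difflib import SequenceMatcher
from typing import List, Optional

@dataclass
class Subtitle:
    text: str
    start: float  # rehearsal timing, in seconds

def align_offset(sub: Subtitle, asr_words: List[str], asr_times: List[float]) -> Optional[float]:
    """Suggest a timing correction for one subtitle by locating its text
    inside the recognized words of the live performance stream."""
    sub_words = sub.text.lower().split()
    best_ratio, best_time = 0.0, None
    # Slide a window of the subtitle's length over the ASR hypothesis.
    for i in range(len(asr_words) - len(sub_words) + 1):
        window = " ".join(w.lower() for w in asr_words[i:i + len(sub_words)])
        ratio = SequenceMatcher(None, " ".join(sub_words), window).ratio()
        if ratio > best_ratio:
            best_ratio, best_time = ratio, asr_times[i]
    # Only propose a correction when the match is confident enough.
    if best_time is not None and best_ratio > 0.6:
        return best_time - sub.start
    return None
```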
We present an open-source system that can optimize compressed trajectory representations for large fleets of vessels. We take into account the type of each vessel in order to choose a suitable configuration that can yield improved trajectory synopses, both in terms of approximation error and compression ratio. We employ a genetic algorithm that converges to a fine-tuned configuration per vessel type without any hyper-parameter tuning. These configurations can provide synopses that retain less than 10% of the original points with less than 20 m approximation error in a real-world dataset; in another dataset with 90% fewer samples than the previous one, the synopses retain 20% of the points and achieve less than 80 m error. Additionally, the level of compression can be chosen by the user by setting the desired approximation error. Our system also supports incremental optimization by training in data batches, and therefore continuously improves performance. Furthermore, we employ a composite event recognition engine to efficiently detect complex maritime activities, such as ship-to-ship transfer and loitering; thanks to the synopses generated by the genetic algorithm instead of the raw trajectories, we make the recognition process faster while maintaining the same level of recognition accuracy. Our extensive empirical study demonstrates the effectiveness of our system over large, real-world datasets.
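As a rough illustration of the optimization loop described above, here is a minimal genetic-algorithm sketch that tunes a single compression threshold per vessel type; the fitness function, parameter ranges and the placeholder evaluate_synopsis() are hypothetical stand-ins rather than the system's actual configuration space.

```python
# A minimal genetic-algorithm sketch for tuning a per-vessel-type compression
# threshold; the fitness function, parameter ranges, and evaluate_synopsis()
# are hypothetical stand-ins, not the system's actual implementation.
import random

def evaluate_synopsis(threshold: float) -> tuple[float, float]:
    # Placeholder: would run the trajectory compressor and return
    # (fraction_of_points_retained, mean_approximation_error_in_metres).
    retained = min(1.0, 0.05 + threshold * 0.001)
    error = max(1.0, 200.0 / (1.0 + threshold))
    return retained, error

def fitness(threshold: float, max_error: float = 20.0) -> float:
    retained, error = evaluate_synopsis(threshold)
    penalty = max(0.0, error - max_error)        # respect the user-chosen error budget
    return -(retained + 0.1 * penalty)           # fewer points kept = better

def optimize(generations: int = 50, pop_size: int = 20) -> float:
    population = [random.uniform(1, 500) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)   # best candidates first
        parents = population[: pop_size // 2]
        children = [
            max(1.0, (random.choice(parents) + random.choice(parents)) / 2 + random.gauss(0, 5))
            for _ in range(pop_size - len(parents))
        ]                                            # midpoint crossover + Gaussian mutation
        population = parents + children
    return max(population, key=fitness)
```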
Social interaction has been recognized as positively affecting learning, with dialogue, as a common form of social interaction, comprising an integral part of collaborative learning. Interactive storytelling is defined as a branching narrative in which users can experience different story lines with alternative endings, depending on the choices they make at various decision points of the story plot. In this research, we aim to harness the power of dialogic practices by incorporating dialogic activities at the decision points of interactive digital storytelling experiences set in a history education context. Our objective is to explore interactive storytelling as a collaborative learning experience for remote learners, as well as its effect on promoting historical empathy. As a preliminary validation of this concept, we recorded the perspective of 14 educators, who supported the value of the specific conceptual design. Then, we recruited 15 adolescents who participated in our main study in 6 groups. They were asked to collaboratively go through an interactive storytelling experience set in the Ancient Agora (marketplace) of Athens, wherein the story decision/branching points were used as incentives for dialogue. Our results suggest that this experience design can indeed support small groups of remote users, in line with special circumstances like those of the COVID-19 pandemic, and confirm the efficacy of the approach in establishing engagement and promoting affect and reflection on historical content. Our contribution thus lies in proposing and validating the application of interactive digital storytelling as a dialogue-based collaborative learning experience for history education.
Molecular dynamics simulation is a powerful technique for studying the structure and dynamics of biomolecules in atomic-level detail by sampling their various conformations in real time. Because of the long timescales that need to be sampled to study biomolecular processes and the large and complex nature of the corresponding data, relevant analyses of important biophysical phenomena are challenging. Clustering and Markov state models (MSMs) are efficient computational techniques that can be used to extract dominant conformational states and to connect them with kinetic information. In this work, we perform molecular dynamics simulations to investigate the free energy landscape of Angiotensin II (AngII) in order to unravel its bioactive conformations, using different clustering techniques and Markov state modeling. AngII is an octapeptide hormone which binds to the AT1 transmembrane receptor and plays a vital role in the regulation of blood pressure, conservation of total blood volume, and salt homeostasis. To mimic the water-membrane interface as AngII approaches the AT1 receptor and to compare our findings with available experimental results, the simulations were performed in water as well as in water-ethanol mixtures. Our results show that in the water-ethanol environment, AngII adopts more compact U-shaped (folded) conformations than in water, resembling its structure when bound to the AT1 receptor. For clustering the conformations, we validate the efficiency of an inverted-quantized k-means algorithm, a fast approximate clustering technique for web-scale data (millions of points into thousands or millions of clusters), against standard k-means on data from molecular dynamics trajectories, showing reasonable trade-offs between time and accuracy. Finally, we extract MSMs using various clustering techniques for the generation of microstates and macrostates and for the selection of the macrostate representatives.
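For readers unfamiliar with the generic clustering-plus-MSM recipe mentioned above, the sketch below clusters per-frame features into microstates with k-means and builds a row-stochastic transition matrix at a fixed lag time. It illustrates only the standard construction, under assumed feature inputs, and is not the paper's exact pipeline.

```python
# A minimal sketch of building a Markov state model from clustered MD frames:
# k-means microstates, then a row-normalized transition matrix at lag `lag`.
# This shows the generic MSM recipe, not the paper's exact analysis.
import numpy as np
from sklearn.cluster import KMeans

def build_msm(features: np.ndarray, n_states: int = 100, lag: int = 10):
    """features: (n_frames, n_features) array of per-frame descriptors."""
    labels = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit_predict(features)
    counts = np.zeros((n_states, n_states))
    # Count transitions between microstates separated by `lag` frames.
    for i in range(len(labels) - lag):
        counts[labels[i], labels[i + lag]] += 1
    T = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)  # row-stochastic
    # Stationary distribution from the leading left eigenvector of T.
    eigvals, eigvecs = np.linalg.eig(T.T)
    pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    return T, pi / pi.sum()
```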
This paper explores the effects of the straight-ticket voting option (STVO) on the positions of politicians. STVO, present in some US states, allows voters to select one party for all partisan elections listed on the ballot, as opposed to filling out each office individually. We analyse the effects of STVO on policy-making by building a model of pre-election competition. STVO results in greater party loyalty of candidates, while increasing the weight of non-partisan voters’ positions in candidate selection. This induces an asymmetric effect on vote shares and implemented policies in the two-party system.
The largest wine producers globally are located in Southern Europe, and climate is a major factor in wine production. The European Union aims to complement the consumer's choice for wine with information about environmental sustainability. The carbon footprint is a worldwide-standardized indicator that both wine producers and consumers perceive as the most important environmental indicator. So far, environmental life cycle assessment studies show variability in system boundary design and functional unit selection, and review papers do not include life cycle inventory data and consider vineyards in various locations worldwide. This study aimed to investigate the key factors affecting the carbon footprint of red and white wine production in South European countries with the same climatic conditions, and to benchmark both wine types. The results showed that the carbon footprints of white and red wines are comparable. The average carbon footprints were 1.02, 1.25, and 1.62 CO2 eq. per bottle of wine for organic red wine, conventional red wine, and conventional white wine, respectively. The viticulture, winemaking, and packaging stages greatly affect the carbon footprint. Diesel consumption at the viticulture stage, electricity consumption at the viticulture and winemaking stages, and glass production at the packaging stage are the largest contributors to the carbon footprint. The wine consumption stage was omitted from most studies, even though it can increase the carbon footprint by 5%. Our results suggest that consumers should choose (conventional or organic) red wine that is produced locally.
The ways our cultural heritage reserve is preserved and disseminated to the public have changed significantly with the use of immersive technologies, such as virtual reality environments and serious games. Nowadays, these technologies are also exploited for developing interactive informative applications that support historical education and enhance museum visits, physical or virtual, especially for younger generations. The field of edutainment (educational entertainment) has been developing rapidly over the last 10 to 15 years. The main goal of this research is to develop an educational 3D puzzle-like serious game which can operate within a virtual reality environment, aiming at the dissemination of cultural heritage content to the younger public, i.e., students, children, etc., through a pleasant gamification process. The cultural heritage objects used are an ancient Greek temple and a statue of the Roman era, whose high-resolution, fully textured 3D models were available from previous projects. The game application was developed in the Unity game engine, with suitable coding to enable the smooth execution of the 3D puzzle solution. The application confirmed that learning about cultural assets through a game is more engaging than conventional approaches, even more so when the game is implemented within a virtual reality environment, where contact with the assets appears more direct and realistic. The same application can also be utilized in different educational areas and can be expanded by the inclusion of other digital assets.
We increasingly depend on a variety of data-driven algorithmic systems to assist us in many aspects of life. Search engines and recommender systems, among others, are used as sources of information and to help us make all sorts of decisions, from selecting restaurants and books to choosing friends and careers. This has given rise to important concerns regarding the fairness of such systems. In this work, we aim to present a toolkit of definitions, models and methods used for ensuring fairness in rankings and recommendations. Our objectives are threefold: (a) to provide a solid framework on a novel, quickly evolving and impactful domain, (b) to present related methods and put them into perspective, and (c) to highlight open challenges and research paths for future work.
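To ground the kind of definitions such a toolkit covers, one commonly studied notion is exposure-based group fairness in rankings; the sketch below computes each group's share of position-discounted exposure. The function name and the logarithmic position discount are assumptions for illustration and do not reproduce any specific method from this survey.

```python
# Illustrative sketch of exposure-based group fairness in a ranking; the
# 1/log2(position+1) discount and the names used here are assumptions.
import math
from collections import defaultdict
from typing import Dict, List, Tuple

def group_exposure(ranking: List[Tuple[str, str]]) -> Dict[str, float]:
    """ranking: list of (item_id, group) ordered from top to bottom.
    Returns each group's share of position-discounted exposure."""
    exposure = defaultdict(float)
    for position, (_, group) in enumerate(ranking, start=1):
        exposure[group] += 1.0 / math.log2(position + 1)  # top positions weigh more
    total = sum(exposure.values())
    return {g: e / total for g, e in exposure.items()}

# Example: check whether group "B" receives a disproportionately small share.
shares = group_exposure([("i1", "A"), ("i2", "A"), ("i3", "B"), ("i4", "B")])
print(shares)  # approximately {'A': 0.64, 'B': 0.36}
```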
The technological advance of drone technology has augmented the existing capabilities of flying vehicles, rendering them a valuable asset of modern society. As more drones are expected to occupy the airspace in the near future, security-related incidents, whether malicious acts or accidents, will increase as well. The forensic analysis of a security incident is essential, as drones fly above populated areas and have also been weaponised by radical forces and perpetrators. Thus, there is an imperative need to establish a Drone Digital Forensics Investigation Framework and standardise the processes of collecting and processing such evidence. Although there are numerous drone platforms on the market, the same principles apply to all of them, just as they do for mobile phones. Nevertheless, due to the nature of drones, standardised forensics procedures to date do not manage to address the required processes and challenges that such investigations pose. Acknowledging this need, we detail the unique characteristics of drones and the gaps in existing methodologies and standards, showcasing fundamental issues in their forensic analysis from various perspectives, ranging from operational and procedural ones to manufacturer-related issues and legal restrictions. The above creates a very complex environment where coordinated actions must be taken among the key stakeholders. Therefore, this work paves the way to address these challenges by identifying the main issues, their origins, and the needs in the field through a thorough review of the literature and a gap analysis.
Understanding the human brain is a "Grand Challenge" for 21st century research. Computational approaches enable large and complex datasets to be addressed efficiently, supported by artificial neural networks, modeling and simulation. Dynamic generative multiscale models, which enable the investigation of causation across scales and are guided by principles and theories of brain function, are instrumental for linking brain structure and function. An example of a resource enabling such an integrated approach to neuroscientific discovery is the BigBrain, which spatially anchors tissue models and data across different scales and ensures that multiscale models are supported by the data, making the bridge to both basic neuroscience and medicine. Research at the intersection of neuroscience, computing and robotics has the potential to advance neuro-inspired technologies by taking advantage of a growing body of insights into perception, plasticity and learning. To render data, tools and methods, theories, basic principles and concepts interoperable, the Human Brain Project (HBP) has launched EBRAINS, a digital neuroscience research infrastructure, which brings together a transdisciplinary community of researchers united by the quest to understand the brain, with fascinating insights and perspectives for societal benefits.
One of the most important tasks in scientific publishing is the evaluation of articles by the editorial board and the reviewer community. Additionally, in scientific publishing there is great concern regarding the peer-review process and how it can be further optimised to decrease the time from submission to first decision, as well as to increase the objectivity of the reviewers' remarks, ensuring that no bias or human error enters the reviewing process. In order to address this issue, our article suggests a novel cloud framework for manuscript submission based on blockchain technology that further enhances the anonymity between authors and reviewers alike. Our method covers the whole spectrum of current submission systems' capabilities, but it also provides a decentralised solution, using open-source tools such as Java Spring, that enhances the anonymity of the reviewing process.
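The core idea (an append-only, hash-chained record of submission and review events with pseudonymous participants) can be sketched in a few lines. The sketch below is a language-agnostic illustration in Python with assumed class names, not the Java Spring system described in the article.

```python
# A minimal, illustrative hash-chained ledger of submission/review events with
# pseudonymous participants; class names and fields are assumptions, not the
# article's actual implementation.
import hashlib, json, time
from typing import List

def pseudonym(identity: str, salt: str) -> str:
    """Hide author/reviewer identities behind salted hashes (double anonymity)."""
    return hashlib.sha256((salt + identity).encode()).hexdigest()[:16]

class Block:
    def __init__(self, payload: dict, prev_hash: str):
        self.payload = payload            # e.g. {"event": "review", "actor": pseudonym(...)}
        self.prev_hash = prev_hash
        self.timestamp = time.time()
        self.hash = self._digest()

    def _digest(self) -> str:
        body = json.dumps({"payload": self.payload,
                           "prev": self.prev_hash,
                           "ts": self.timestamp}, sort_keys=True)
        return hashlib.sha256(body.encode()).hexdigest()

class Ledger:
    def __init__(self):
        self.chain: List[Block] = [Block({"event": "genesis"}, "0" * 64)]

    def append(self, payload: dict) -> Block:
        block = Block(payload, self.chain[-1].hash)
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        # Any tampering with an earlier event breaks the hash links.
        return all(b.prev_hash == p.hash for p, b in zip(self.chain, self.chain[1:]))
```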
This paper presents the design, implementation, and operation of a novel distributed fault-tolerant middleware. It uses interconnected WSNs that implement the Map-Reduce paradigm, consisting of several low-cost and low-power mini-computers (Raspberry Pi). Specifically, we explain the steps in the development of a novel, fault-tolerant Map-Reduce algorithm which achieves high system availability, focusing on network connectivity. Finally, we showcase the use of the proposed system based on simulated data for crowd monitoring in a real case scenario, i.e., a historical building in Greece (M. Hatzidakis' residence). The technical novelty of this article lies in presenting a viable low-cost and low-power solution for crowd sensing without using complex and resource-intensive AI structures or image/video recognition techniques.
Nanoforms can be manufactured in plenty of variants differing in their physicochemical properties, which can affect their hazard potential. To avoid testing each single nanomaterial and nanoform variation, grouping and read-across strategies are used to estimate groups of substances/forms with specific sets of properties that could potentially have similar human health and environmental hazard impact. A novel computational similarity method is presented, aiming to compare dose-response data curves and subsequently identify sets of similar nanoforms. The suggested method estimates the model that best fits the data by leveraging pairwise Bayes Factor analysis to compare pairs of curves and evaluate whether each nanoform is sufficiently similar to all other nanoforms. Pairwise comparisons to benchmark materials are used to define threshold values and set the criteria for groups of similar toxicity. Applications to use-case data are shown to demonstrate that the method can estimate similar sets of nanoforms and support grouping hypotheses linked to a certain hazard endpoint and route of exposure.
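As a simplified illustration of what a pairwise curve comparison can look like, the sketch below fits a "shared curve" model and a "separate curves" model to two nanoforms' dose-response data and approximates a Bayes factor via BIC. The curve family (a two-parameter log-linear fit) and the BIC approximation are assumptions for illustration, not the method's actual statistical machinery.

```python
# Simplified pairwise comparison sketch: does a single shared dose-response
# curve explain two nanoforms' data as well as two separate curves?
# The log-linear curve family and BIC-based Bayes-factor approximation are
# illustrative assumptions, not the published method.
import numpy as np

def fit_rss(doses: np.ndarray, responses: np.ndarray) -> tuple[float, int]:
    """Least-squares fit of response ~ a + b*log(dose); returns (RSS, n_params)."""
    X = np.column_stack([np.ones_like(doses, dtype=float), np.log(doses)])
    coef, resid, *_ = np.linalg.lstsq(X, responses, rcond=None)
    rss = float(resid[0]) if len(resid) else float(np.sum((responses - X @ coef) ** 2))
    return rss, X.shape[1]

def log_bayes_factor(d1, r1, d2, r2) -> float:
    """log BF > 0 favours 'same curve' (similar nanoforms) under the BIC approximation."""
    n = len(d1) + len(d2)
    rss_joint, k_joint = fit_rss(np.concatenate([d1, d2]), np.concatenate([r1, r2]))
    rss1, k1 = fit_rss(d1, r1)
    rss2, k2 = fit_rss(d2, r2)
    bic = lambda rss, k: n * np.log(rss / n) + k * np.log(n)  # Gaussian-error BIC
    # BF(shared vs separate) ~ exp((BIC_separate - BIC_shared) / 2)
    return (bic(rss1 + rss2, k1 + k2) - bic(rss_joint, k_joint)) / 2.0
```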
The Water-Food-Energy Nexus can support a general model of sustainable development, balancing resources with increasing economic/productive expectations, as e.g. in agriculture. We synthesise lessons from Greece's practical and research experience, identify knowledge and application gaps, and propose a novel conceptual framework to tackle these challenges. Thessaly (Central Greece), the country's driest region and largest agricultural supplier, is used as an example. The area faces a number of water quantity and quality issues, ambitious production and economic objectives, historically recurrent drought and flood events, conflicts, and administrative and economic problems, all under serious climate change impacts. A detailed assessment of the current situation is carried out, covering all these aspects, for the first time in an integrated way. Collaboration gaps among different stakeholders are identified as the biggest impediment to socially acceptable actions. For the first time, to our knowledge, the Nexus is set as a keystone for developing a novel framework to reverse the situation and achieve sustainable management under socially acceptable long-term visions. The proposed framework is based on systems theory and innovation, uses a multi-disciplinary platform to bring together all relevant stakeholders, provides scientific support and commitment, and makes use of technological advances to improve the system.
Context: The domain of rural areas, including rural communities, agriculture, and forestry, is going through a process of deep digital transformation. Digitalisation can have positive impacts on sustainability in terms of greater environmental control and community prosperity. At the same time, it can also have disruptive effects, with the marginalisation of actors that cannot cope with the change. When developing a novel system for rural areas, requirements engineers should carefully consider the specific socio-economic characteristics of the domain, so that potential positive effects can be maximised while negative impacts are mitigated. Objective: The goal of this paper is to support requirements engineers with a reference catalogue of drivers, barriers and potential impacts associated with the introduction of novel ICT solutions in rural areas. Method: To this end, we interview 30 cross-disciplinary experts in the digitalisation of rural areas, and we analyse the transcripts to identify common themes. Results: According to the experts, the main drivers are economic, with the possibility of reducing costs, and regulatory, as institutions push for more precise tracing and monitoring of production; barriers are the limited connectivity, but also distrust towards technology and other socio-cultural aspects; positive impacts are socio-economic (e.g., reduction of manual labour, greater productivity), while negative ones include potential dependency on technology, with loss of hands-on expertise, and marginalisation of certain actors (e.g., small farms, subjects with limited education). Conclusion: This paper contributes to the literature with a domain-specific catalogue that characterises digitalisation in rural areas. The catalogue can be used as a reference baseline for requirements elicitation endeavours in rural areas, to support domain analysis prior to the development of novel solutions, as well as fit-gap analysis for the adaptation of existing technologies.
We exploit a recent computational framework to model and detect financial crises in stock markets, as well as shock events in cryptocurrency markets, which are characterized by a sudden or severe drop in prices. Our method manages to detect all past crises in the French industrial stock market starting with the crash of 1929, including financial crises after 1990 (e.g. the dot-com bubble burst of 2000 and the stock market downturn of 2002), and all past crashes in the cryptocurrency market, namely in 2018, and also in 2020 due to COVID-19. We leverage copulae clustering, based on the distance between probability distributions, in order to validate the reliability of the framework; we show that clusters contain copulae from similar market states, such as normal states or crises. Moreover, we propose a novel regression model that can successfully detect all past events using less than 10% of the information that the previous framework requires. We train our model on historical data on the industry assets, and we are able to detect all past shock events in the cryptocurrency market. Our tools provide the essential components of our software framework, which offers fast and reliable detection, or even prediction, of shock events in stock and cryptocurrency markets of hundreds of assets.
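For intuition about what counts as a "shock event" (a sudden, severe drop in prices), the sketch below implements a simple rolling-drawdown detector. It is only an illustrative baseline with assumed window and threshold values, not the copula-based framework or the regression model described above.

```python
# Illustrative only: a simple rolling-drawdown shock detector, not the
# copula-based framework or regression model of the paper; window and
# threshold values are assumptions.
import numpy as np

def shock_events(prices: np.ndarray, window: int = 30, drop: float = 0.25) -> np.ndarray:
    """Return indices where the price has fallen by more than `drop` (e.g. 25%)
    from its maximum over the previous `window` observations."""
    idx = []
    for t in range(window, len(prices)):
        peak = prices[t - window:t].max()
        if prices[t] <= (1.0 - drop) * peak:
            idx.append(t)
    return np.array(idx, dtype=int)
```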
Map-Reduce is a programming model and an associated implementation for processing and generating large data sets. This model has a single point of failure: the master node, which coordinates the work in a cluster. In contrast, wireless sensor networks (WSNs) are distributed systems that scale and feature large numbers of small, computationally limited, low-power, unreliable nodes. In this article, we provide a top-down approach explaining the architecture, implementation and rationale of a distributed fault-tolerant IoT middleware. Specifically, this middleware consists of multiple mini-computing devices (Raspberry Pi) connected in a WSN which implements the Map-Reduce algorithm. First, we explain the tools used to develop this system. Second, we focus on the Map-Reduce algorithm implemented to overcome common network connectivity issues, as well as to enhance operational availability and reliability. Lastly, we provide benchmarks for our middleware as a crowd tracking application for a preserved building in Greece (i.e., M. Hatzidakis' residence). The results of this study show that IoT middleware built from low-power and low-cost components is a viable solution for medium-sized distributed and parallel computing centres. Potential uses of this middleware include monitoring buildings and indoor structures, as well as crowd tracking to prevent the spread of COVID-19.
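To illustrate the Map-Reduce flow the middleware implements, here is a minimal sketch of a map/shuffle/reduce job over "room,count" sensor records with a naive retry when a worker partition "fails". Node discovery, networking, and the actual fault-tolerance logic running on the Raspberry Pi WSN are not reproduced, and all names are illustrative.

```python
# Minimal Map-Reduce sketch with a naive retry on simulated worker failure;
# illustrative only, not the middleware's actual distributed implementation.
from collections import defaultdict
import random

def map_phase(record: str):
    # Example mapper: count sensor readings per room, from "room,count" records.
    room, count = record.split(",")
    yield room, int(count)

def reduce_phase(key: str, values: list) -> tuple:
    return key, sum(values)

def run_job(records: list, n_workers: int = 4) -> dict:
    grouped = defaultdict(list)
    partitions = [records[i::n_workers] for i in range(n_workers)]
    for part in partitions:
        while True:
            try:
                if random.random() < 0.1:            # simulated node failure
                    raise ConnectionError("worker unreachable")
                for rec in part:
                    for k, v in map_phase(rec):
                        grouped[k].append(v)          # shuffle: group by key
                break
            except ConnectionError:
                continue                              # reassign / retry the partition
    return dict(reduce_phase(k, vs) for k, vs in grouped.items())

print(run_job(["hall,3", "hall,2", "atrium,5", "atrium,1"]))
# {'hall': 5, 'atrium': 6}
```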
66 members
Katerina Pastra
  • Institute for Language and Speech Processing
Anestis Koutsoudis
  • Multimedia Research Group - Xanthi's Division
Despoina Tsiafaki
  • Culture & Creative Industries Department
Aris Lalos
  • Industrial Systems Institute, Platani, Patra
Information
Address
Marousi, Greece