Athena Research and Innovation Center in Information, Communication & Knowledge Technologies
Recent publications
Relying on our experience in the development and configuration of tools for the collection, validation and integration of real-world data for clinical studies, we designed a knowledge-based approach to facilitate quality assessment during the registration of clinical and biological data for multi-center, retrospective studies.
The ability of cancer to develop drug resistance, in parallel with the undesired effects of chemotherapy, has led to the development of safe nanoparticles characterized by multi-sensitivity. Herein we focus on the synthesis of hybrid silver–iron oxide NFs via a tunable synthetic route and on their successful coating with citrate. The parameters of the synthetic route that affect the uniform formation of the NFs are investigated in order to optimize both the experimental procedure and the attained NFs. Most importantly, the study focuses on the evaluation of the NFs as theranostic agents in the case of glioblastoma. The results suggest that the NFs are good candidates as CT contrast agents, as the contrast is enhanced after treatment. The in vitro evaluation shows that the NFs exhibit cytotoxicity towards glioblastoma cells, whereas no significant toxicity towards red blood cells is observed. Finally, internalization studies provide insights that help unveil the exact mechanism of action of the NFs.
In this work, we generalize the ideas of Kaiming initialization to Graph Neural Networks (GNNs) and propose a new scheme (G-Init) that reduces oversmoothing, leading to strong results in node and graph classification tasks. GNNs are commonly initialized using methods designed for other types of neural networks, overlooking the underlying graph topology. We theoretically analyze the variance of signals flowing forward and of gradients flowing backward in the class of convolutional GNNs, then specialize the analysis to the GCN and propose a new initialization method. Results indicate that the new method (G-Init) reduces oversmoothing in deep GNNs, facilitating their effective use. Our approach achieves an accuracy of 61.60% on the CS dataset (32-layer GCN) and 69.24% on Cora (64-layer GCN), surpassing state-of-the-art initialization methods by 25.6 and 8.6 percentage points, respectively. Extensive experiments confirm the robustness of our method across multiple benchmark datasets, highlighting its effectiveness in diverse settings. Furthermore, our experimental results support the theoretical findings, demonstrating the advantages of deep networks in scenarios with no feature information for unlabeled nodes (the “cold start” scenario).
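To make the initialization idea concrete, below is a minimal sketch of a degree-aware, Kaiming-style initializer for a GCN weight matrix. The specific correction factor (the square root of the mean node degree) is an illustrative assumption, not necessarily the exact scaling derived in the paper.

```python
import math

import torch

def g_init_sketch(weight: torch.Tensor, mean_degree: float) -> None:
    """Hypothetical degree-aware variant of Kaiming initialization.

    Standard Kaiming initialization draws weights with std = sqrt(2 / fan_in).
    Normalized neighborhood aggregation shrinks signal variance layer by
    layer, so this sketch compensates by rescaling the std with the mean
    node degree (assumed form; the paper's exact factor may differ).
    Assumes the weight has shape (in_features, out_features), as in X @ W.
    """
    fan_in = weight.size(0)
    std = math.sqrt(2.0 / fan_in) * math.sqrt(mean_degree)
    with torch.no_grad():
        weight.normal_(mean=0.0, std=std)

# Usage: initialize every GCN layer with the graph's mean degree,
# which is roughly 4 for Cora.
W = torch.empty(1433, 64)
g_init_sketch(W, mean_degree=4.0)
```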
Learning in games has emerged as a powerful tool for machine learning with numerous applications. Quantum games model interactions between strategic players who have access to quantum resources, and several recent works have studied learning in the competitive regime of quantum zero-sum games. Going beyond this setting, we introduce quantum common-interest games (CIGs) where players have density matrices as strategies and their interests are perfectly aligned. We bridge the gap between optimization and game theory by establishing the equivalence between KKT (first-order stationary) points of an instance of the Best Separable State (BSS) problem and the Nash equilibria of its corresponding quantum CIG. This allows learning dynamics for the quantum CIG to be seen as decentralized algorithms for the BSS problem. Taking the perspective of learning in games, we then introduce non-commutative extensions of the continuous-time replicator dynamics and the discrete-time best response dynamics/linear multiplicative weights update for learning in quantum CIGs. We prove analogues of classical convergence results of the dynamics and explore differences which arise in the quantum setting. Finally, we corroborate our theoretical findings through extensive experiments.
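As a rough illustration of the discrete-time dynamics, the sketch below implements the standard matrix multiplicative weights update over density matrices for a toy common-interest payoff; the payoff, step size, and horizon are placeholders rather than the paper's experimental setup.

```python
import numpy as np
from scipy.linalg import expm

def matrix_mwu(payoff_grad, dim, eta=0.05, steps=200):
    """Matrix multiplicative weights update over density matrices.

    Maintains X_t proportional to expm(eta * sum of past payoff gradients),
    renormalized to unit trace, so every iterate is a valid quantum strategy
    (positive semidefinite, trace one).
    """
    G = np.zeros((dim, dim))
    X = np.eye(dim) / dim  # start from the maximally mixed state
    for _ in range(steps):
        G += payoff_grad(X)   # accumulate (Hermitian) payoff gradients
        E = expm(eta * G)
        X = E / np.trace(E)   # renormalize to unit trace
    return X

# Toy payoff u(X) = trace(A X), whose gradient is the fixed matrix A;
# the iterates concentrate on A's top eigenvector.
A = np.diag([1.0, 0.5, 0.0])
print(np.round(matrix_mwu(lambda X: A, dim=3), 3))
```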
Recent advancements in artificial intelligence (AI) have significantly expanded the scope of what computers can accomplish. With its myriad of successful applications, AI holds the potential to revolutionize our daily lives. However, this power also brings significant responsibility. As AI systems, models, and algorithms grow increasingly complex and opaque, there is a mounting interest in ensuring they benefit humans and society. As AI becomes more pervasive in our lives, so do the concerns surrounding its ethical use and impact. This chapter delves into the fundamental concepts of responsible artificial intelligence. The goal is to enhance the trustworthiness of AI algorithms, systems, and tools, ensuring they align with the best interests of both humans and society. We start by outlining the guiding principles that AI systems should adhere to, drawing primarily from EU reports and forthcoming regulations. Following that, we explore practical algorithmic approaches designed to foster responsible AI. Lastly, we offer an outlook on responsible AI, featuring insights and recommendations from leading experts in the field, along with our perspectives.
Life cycle assessment (LCA) is a reference methodology for evaluating environmental impacts along product supply chains. Planetary boundaries (PBs) were developed to define the safe operating space (SOS) for humanity. So far, no study has investigated whether wine production and consumption result in crossing the planetary boundary for climate change, and no SOS has been calculated for wine production in Greece. Our study applies an LCA according to the European Product Environmental Footprint Category Rules to calculate the climate change score of a 0.75 L bottle of Greek red organic wine in 2021 and 2026, and also applies the planetary boundaries framework to investigate whether the climate change boundary is exceeded. The latter involved calculating an SOS under four partitioning methods: the grandfathering principle, economic value, agricultural land area use, and calorific content. The LCA results showed that wine is a carbon emitter: the 2021, 2026-Low yield, and 2026-High yield systems resulted in positive climate change scores between 0.69 and 1.14 kg CO2 eq. per bottle of wine. The PB analysis revealed that carbon emissions of wine production in 2021 exceeded all four SOSs, while carbon emissions of the expected wine production in 2026 remained within the SOSs of the grandfathering, economic value, and agricultural land area use partitionings, but exceeded the SOS of the calorific content partitioning. The PB method can complement LCA results by providing decision-makers in business and public policy with context on whether red organic wine production and consumption remain within ecological constraints on human development.
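Schematically, each partitioning method downscales the global climate change budget to a sector-level SOS through an allocation share; a generic form of this downscaling (our formalization, assuming the usual top-down approach) is:

```latex
\mathrm{SOS}_{\text{wine}}
  = \mathrm{PB}_{\text{climate change}} \times s,
\qquad
s \in \{\, s_{\text{grandfathering}},\;
           s_{\text{economic value}},\;
           s_{\text{land area}},\;
           s_{\text{calorific content}} \,\}
```

where, for instance, under grandfathering s is the sector's current share of global emissions, while under the economic value method it is the sector's share of global economic output.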
In recent years, interest in synthetic data has grown, particularly in the context of pre-training the image modality to support a range of computer vision tasks, including object classification and medical imaging. Previous work has demonstrated that synthetic samples, automatically produced by various generative processes, can replace real counterparts and yield strong visual representations. This approach resolves issues associated with real data, such as collection and labeling costs, copyright, and privacy. We extend this trend to the video domain, applying it to the task of action recognition. Employing fractal geometry, we present methods to automatically produce large-scale datasets of short synthetic video clips, which can be utilized for pre-training neural models. The generated video clips are characterized by notable variety, stemming from the innate ability of fractals to generate complex multi-scale structures. To narrow the domain gap, we further identify key properties of real videos and carefully emulate them during pre-training. Through thorough ablations, we determine the attributes that strengthen downstream results and offer general guidelines for pre-training with synthetic videos. The proposed approach is evaluated by fine-tuning pre-trained models on the established action recognition datasets HMDB51 and UCF101, as well as four other video benchmarks related to group action recognition, fine-grained action recognition, and dynamic scenes. Compared to standard Kinetics pre-training, our reported results come close and are even superior on a portion of the downstream datasets. Code and samples of synthetic videos are available at https://github.com/davidsvy/fractal_video.
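A minimal sketch of such a generator, assuming a chaos-game rendering of a random iterated function system whose parameters drift between frames to create motion (the paper's actual pipeline, resolution, and augmentations may differ), could look like:

```python
import numpy as np

def render_fractal_clip(n_frames=16, size=64, n_points=20000, drift=0.01, seed=0):
    """Render a short synthetic clip from a random iterated function system.

    Each frame plots chaos-game samples of the IFS; slowly perturbing the
    affine offsets between frames makes the fractal appear to move.
    """
    rng = np.random.default_rng(seed)
    A = rng.uniform(-0.6, 0.6, size=(4, 2, 2))  # random affine maps x -> A @ x + b
    b = rng.uniform(-1.0, 1.0, size=(4, 2))
    clip = np.zeros((n_frames, size, size), dtype=np.float32)
    for f in range(n_frames):
        x = np.zeros(2)
        for i in range(n_points):
            k = rng.integers(4)
            x = A[k] @ x + b[k]            # one chaos-game step
            if np.abs(x).max() > 1e3:      # guard against non-contractive systems
                x = np.zeros(2)
                continue
            if i > 20:                     # skip burn-in iterations
                u, v = ((x + 2.0) / 4.0 * size).astype(int)
                if 0 <= u < size and 0 <= v < size:
                    clip[f, v, u] = 1.0
        b += drift * rng.standard_normal(b.shape)  # parameter drift -> motion
    return clip

clip = render_fractal_clip()  # shape (16, 64, 64), ready for pre-training pipelines
```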
The four human Argonaute (AGO) proteins, critical in RNA interference and gene regulation, exhibit high sequence and structural similarity but differ functionally. We investigated the underexplored structural relationships of these paralogs through microsecond-scale molecular dynamics simulations. Our findings reveal that AGO proteins adopt similar, yet unsynchronized, open-close states. We observed similar and unique local conformations, interdomain distances and intramolecular interactions. Conformational differences at GW182/ZSWIM8 interaction sites and in catalytic/pseudo-catalytic tetrads were minimal. Tetrads display conserved movements, interacting with distant miRNA binding residues. We pinpointed long common protein subsequences with consistent molecular movement but varying solvent accessibility per AGO. We observed diverse conformational patterns at the post-transcriptional sites of the AGOs, except for AGO4. By combining simulation data with large datasets of experimental structures and AlphaFold’s predictions, we identified proteins with genomic and proteomic similarities. Some of the identified proteins operate in the mitosis pathway, sharing mitosis-related interactors and miRNA targets. Additionally, we suggest that AGOs interact with the zinc ion, a mitosis initiator, by predicting potential binding sites and detecting structurally similar proteins with the same function. These findings further advance our understanding of the human AGO protein family and its role in central cellular processes.
Budget-feasible procurement has been a major paradigm in mechanism design since its introduction by Singer [28]. An auctioneer (buyer) with a strict budget constraint is interested in buying goods or services from a group of strategic agents (sellers). In many scenarios it makes sense to allow the auctioneer to only partially buy what an agent offers, e.g., an agent might have multiple copies of an item to sell, they might offer multiple levels of a service, or they may be available to perform a task for any fraction of a specified time interval. Nevertheless, the focus of the related literature has been on settings where each agent’s services are either fully acquired or not at all. A reason for this is that in settings with partial allocations like the ones mentioned, there are strong inapproximability results (see, e.g., Anari et al. [5], Chan and Chen [10]). Under the mild assumption of being able to afford each agent entirely, we are able to circumvent such results. We design a polynomial-time, deterministic, truthful, budget-feasible, (2+√3)-approximation mechanism for the setting where each agent offers multiple levels of service and the auctioneer has a valuation function which is separable concave, i.e., it is the sum of concave functions. We then use this result to design a deterministic, truthful and budget-feasible O(1)-approximation mechanism for the setting where any fraction of a service can be acquired, again for separable concave objectives. For the special case where the objective is the sum of linear valuation functions, we improve the best known approximation ratio for the problem from (3+√5)/2 (by Klumper and Schäfer [19]) to 2. This establishes a separation between this setting and its indivisible counterpart.
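For concreteness, the buyer's problem with divisible services and a separable concave objective can be written as follows (our schematic formalization, with x_i the acquired fraction of agent i's service, p_i the payment to agent i, and B the budget):

```latex
\max_{x \in [0,1]^n} \; \sum_{i=1}^{n} v_i(x_i)
\quad \text{subject to} \quad \sum_{i=1}^{n} p_i \le B
```

where each v_i is concave and nondecreasing; on top of this optimization problem, the mechanism must be truthful, i.e., reporting true costs must be in every agent's best interest.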
We study truthful mechanisms for allocation problems in graphs, both for the minimization (i.e., scheduling) and maximization (i.e., auctions) setting. The minimization problem is a special case of the well-studied unrelated machines scheduling problem, in which every given task can be executed only by two pre-specified machines in the case of graphs, or by a given subset of machines in the case of hypergraphs. This corresponds to a (multi)graph whose nodes are the machines and whose (hyper)edges are the tasks. This class of problems belongs to multidimensional mechanism design, for which there are no known general mechanisms other than the VCG and its generalization to affine minimizers. We propose a new class of truthful mechanisms that have significantly better performance than affine minimizers in many settings. Specifically, we provide upper and lower bounds for truthful mechanisms for general multigraphs, as well as for special classes of graphs such as stars, trees, planar graphs, k-degenerate graphs, and graphs of a given treewidth. We also consider the objective of minimizing or maximizing the L_p-norm of the values of the players, a generalization of makespan minimization that corresponds to p = ∞, and extend the results to any p > 0.
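In the scheduling interpretation, the L_p objective aggregates the machine loads as below (schematic notation, with t_ij the processing time of task i on machine j); letting p → ∞ recovers the makespan, max_j load_j:

```latex
\left( \sum_{j=1}^{m} \Bigl( \sum_{i \,:\, i \text{ assigned to } j} t_{ij} \Bigr)^{p} \right)^{1/p}
```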
Open Science seeks to make research processes and outputs more accessible, transparent and inclusive, ensuring that scientific findings can be freely shared, scrutinized and built upon by researchers and others. To date, there has been no systematic synthesis of the extent to which Open Science (OS) reaches these aims. We use the PRISMA scoping review methodology to partially address this gap, scoping evidence on the academic (but not societal or economic) impacts of OS. We identify 485 studies related to all aspects of OS, including Open Access (OA), Open/FAIR Data (OFD), Open Code/Software, Open Evaluation and Citizen Science (CS). Analysing and synthesizing findings, we show that the majority of studies investigated effects of OA, CS and OFD. Key areas of impact studied are citations, quality, efficiency, equity, reuse, ethics and reproducibility, with most studies reporting positive or at least mixed impacts. However, we also identified significant unintended negative impacts, especially those regarding equity, diversity and inclusion. Overall, the main barrier to academic impact of OS is lack of skills, resources and infrastructure to effectively re-use and build on existing research. Building on this synthesis, we identify gaps within this literature and draw implications for future research and policy.
In recent years, artificial intelligence (AI) has deeply impacted various fields, including the Earth system sciences, by improving weather forecasting, model emulation, parameter estimation, and the prediction of extreme events. The latter comes with specific challenges, such as developing accurate predictors from noisy, heterogeneous data with small sample sizes and limited annotations. This paper reviews how AI is being used to analyze extreme climate events (like floods, droughts, wildfires, and heatwaves), highlighting the importance of creating accurate, transparent, and reliable AI models. We discuss the hurdles of dealing with limited data, integrating real-time information, and deploying understandable models, all crucial steps for gaining stakeholder trust and meeting regulatory needs. We provide an overview of how AI can help identify and explain extreme events more effectively, improving disaster response and communication. We emphasize the need for collaboration across different fields to create AI solutions that are practical, understandable, and trustworthy, to enhance disaster readiness and risk reduction.
Iterative regularization is a classic idea in regularization theory that has recently become popular in machine learning. On the one hand, it allows one to design efficient algorithms that control numerical and statistical accuracy at the same time. On the other hand, it sheds light on the learning curves observed while training neural networks. In this paper, we focus on iterative regularization in the context of classification. After contrasting this setting with that of linear inverse problems, we develop an iterative regularization approach based on the hinge loss function. More precisely, we consider a diagonal approach for a family of algorithms for which we prove convergence, rates of convergence, and stability results under a suitable classification noise model. Our approach compares favorably with other alternatives, as confirmed by numerical simulations.
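As a toy illustration of the general principle (running a hinge-loss subgradient descent and using early stopping as the regularizer; this is a generic sketch, not the diagonal algorithm analyzed in the paper):

```python
import numpy as np

def hinge_subgradient(w, X, y):
    """Subgradient of the average hinge loss max(0, 1 - y * <w, x>)."""
    active = y * (X @ w) < 1.0  # examples violating the margin
    if not active.any():
        return np.zeros_like(w)
    return -(y[active, None] * X[active]).sum(axis=0) / len(y)

def early_stopped_classifier(X, y, X_val, y_val, eta=0.5, max_iter=500):
    """Subgradient descent where the iteration count acts as the regularizer.

    Returns the iterate with the lowest validation error, i.e., the descent
    path is truncated before it starts to overfit.
    """
    w = np.zeros(X.shape[1])
    best_w, best_err = w.copy(), np.inf
    for t in range(max_iter):
        w -= eta / np.sqrt(t + 1.0) * hinge_subgradient(w, X, y)
        err = np.mean(np.sign(X_val @ w) != y_val)
        if err < best_err:
            best_w, best_err = w.copy(), err
    return best_w
```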
93 members
Katerina Pastra
  • Institute for Language and Speech Processing
Anestis Koutsoudis
  • Multimedia Research Group - Xanthi's Division
Despoina Tsiafaki
  • Culture & Creative Industries Department
Aris Lalos
  • Industrial Systems Institute, Platani, Patra