Ilias K. Savvas
  • PhD
  • Head of Faculty at University of Thessaly

About

98
Publications
12,120
Reads
580
Citations
Introduction
Ilias K. Savvas received his PhD in Computer Science from University College Dublin, Ireland. He is a Professor of Computer Science in the Department of Digital Systems, University of Thessaly, Greece, and currently Dean of the School of Technology, University of Thessaly. His research interests include Parallel and Distributed Computing, Data Mining, Quantum Computing, Machine Learning, and High-Performance Computing.
Current institution
University of Thessaly
Current position
  • Head of Faculty
Additional affiliations
September 2019 - present
University of Thessaly
Position
  • Professor
September 2012 - August 2016
Technological Educational Institute of Thessaly
Position
  • Head of Department
Description
  • Programming, Data and File Structures, Algorithm Analysis, Distributed/Parallel Programming, GPU Programming
January 2011 - June 2011
University College Dublin
Position
  • Visiting Senior Lecturer

Publications

Publications (98)
Article
Full-text available
Drug repositioning is a less expensive and time-consuming method than the traditional method of drug discovery. It is a strategy for identifying new uses for approved or investigational drugs that are outside the scope of the original medical indication. A key strategy in repositioning approved or investigational drugs is determining the binding af...
Chapter
The objective of this research is to provide a comprehensive analysis of the communication process among executives of Greek HEIs in light of the changing operational landscape. This study aims to emphasize the necessity of digital strategy and digital transformation within organizations to meet the emerging requirements. The methodology employed i...
Article
Full-text available
The swift advancement of quantum computing devices holds the potential to create robust machines that can tackle an extensive array of issues beyond the scope of conventional computers. Consequently, quantum computing machines create new risks at a velocity and scale never seen before, especially with regard to encryption. Lattice-based cryptograph...
Chapter
By considering a general alternating series algorithm introduced by A. and J. Knopfmacher, according to which every real number may be expressed by alternating series representations in terms of rationals, we present some important results arising from the application of ergodic theory to an alternating series expansion for real numbers in terms of r...
Chapter
The proposed study is devoted to the problem of automating and accelerating the diagnostic process for new functional nanomaterials, using advanced instrumental methods. X-ray absorption spectroscopy data allow evaluating qualitative and quantitative material characteristics with high accuracy. Synchrotron centers are one of the most important tools f...
Chapter
Quantum computing is not only a revolution in information science, but it is going to cause colossal changes in almost all sciences and especially in telecommunications and cryptography. In this work, we present a quantum key distribution protocol, BB84, and also two possible attacks against this cryptographic scheme. Even though quantum cryptosyst...
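The sifting step of the BB84 protocol mentioned above can be illustrated with a short classical simulation (an illustrative sketch, not code from the paper; function and parameter names are hypothetical):

```python
import random

def bb84_sift(n, seed=42):
    # Classical simulation of BB84 sifting with no eavesdropper:
    # Alice encodes random bits in randomly chosen bases ('+' or 'x'),
    # Bob measures in his own random bases, and both keep only the
    # positions where their bases happened to match.
    rnd = random.Random(seed)
    alice_bits  = [rnd.randint(0, 1) for _ in range(n)]
    alice_bases = [rnd.choice('+x') for _ in range(n)]
    bob_bases   = [rnd.choice('+x') for _ in range(n)]
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
            if a == b]

key = bb84_sift(1000)
print(len(key))  # on average, about half of the positions survive sifting
```

On matching bases Bob's measurement reproduces Alice's bit exactly; an eavesdropper measuring in a random basis would disturb a fraction of those positions, which is precisely what the protocol is designed to detect.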
Article
Full-text available
The rapid development of quantum computing devices promises powerful machines with the potential to confront a variety of problems that conventional computers cannot. Therefore, quantum computers generate new threats at unprecedented speed and scale and specifically pose an enormous threat to encryption. Lattice-based cryptography is regarded as th...
Article
Full-text available
The drug discovery process is a rigorous and time-consuming endeavor, typically requiring several years of extensive research and development. Although classical machine learning (ML) has proven successful in this field, its computational demands in terms of speed and resources are significant. In recent years, researchers have sought to explore th...
Preprint
Full-text available
The drug discovery process is a rigorous and time-consuming endeavor, typically requiring several years of extensive research and development. Although classical machine learning (ML) has proven successful in this field, its computational demands in terms of speed and resources are significant. In recent years, researchers have sought to explore th...
Preprint
Full-text available
The rapid development of quantum computing devices promises powerful machines with capabilities that solve a wide range of problems that traditional computers cannot. Therefore, quantum computers generate new threats at unprecedented speed and scale and specifically pose an enormous threat to encryption. Lattice-based cryptography is considered to...
Article
In modern workforce management, the demand for new ways to maximize worker satisfaction, productivity, and security levels is endless. Workforce movement data, such as those sourced from an access control system, can support this ongoing process with subsequent analysis. In this study, a solution for attaining this goal is proposed, based on the...
Conference Paper
We live in a society where a massive quantity of data is generated daily on online social network platforms. This enormous data contains vital opinion-related information that many companies and other scientific and commercial industries are trying to exploit for their benefit. For that purpose, sentiment analysis is required. Sentiment analysis, or op...
Conference Paper
There is no doubt that quantum computing has opened up new horizons and perspectives in many fields, including scientific research. With the advancement of technology and quantum computers, we can now conduct new kinds of scientific experiments: observing quantum properties of individual systems, atoms, electrons, and photons as well as influencing...
Conference Paper
Over the last decade, the evolution in quantum computing has been enormous, and real, reliable quantum computers are being developed quickly. One of the consequences of the upcoming quantum era is that it renders key distribution protocols insecure, as most of them are based on discrete logarithm problems. On the other hand, quantum computing provides a power...
Article
Full-text available
In the field of intelligent surface inspection systems, particular attention is paid to decision making problems, based on data from different sensors. The combination of such data helps to make an intelligent decision. In this research, an approach to intelligent decision making based on a data integration strategy to raise awareness of a controll...
Chapter
The fundamental scientific problem in the paper is pattern discovery using artificial intelligence to search for and determine the properties of bioactive inorganic scaffold nanomaterials from datasets of scanning electron microscopy (SEM) images. Bioactive inorganic scaffold nanomaterials are bioactive structures that are designed to tempor...
Chapter
Quantum Computing is one of the most promising technology advancements of our time, promising to clarify problems considered unsolved for a classical computer. Real Quantum Computer Devices—once a science fiction concept—are now a reality. Many challenges still remain, on the way to achieve the so-called quantum supremacy. A phenomenon called Quant...
Article
Full-text available
Cellular vehicle-to-everything (C-V2X) communication has recently gained attention in industry and academia. Different implementation scenarios have been derived by the 3rd Generation Partnership Project (3GPP) 5th Generation (5G) Vehicle-to-Everything (V2X) standard, Release 16. Quality of service (QoS) is important to achieve reliable communicati...
Conference Paper
In recent years, machine learning has penetrated a large part of our daily lives, which creates special challenges and impressive progress in this area. Nevertheless, as the amount of daily data grows, learning time increases. Quantum machine learning (QML) may speed up the processing of information and provide great promise in machine learni...
Conference Paper
The process of finding the prime factors of a large number, the "factoring problem", is believed to be very hard. For this reason, it is the cornerstone of modern cryptographic schemes, such as the RSA cryptosystem. In 1994, Professor Peter Shor proposed a new polynomial-time quantum algorithm that finds the prime factors of a number with many digit...
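The classical post-processing at the heart of Shor's approach can be sketched as follows: once the period r of a^x mod N is known (the part a quantum computer finds efficiently), non-trivial factors drop out via greatest common divisors. A minimal illustration (the period is found naively and classically here, purely for demonstration; names are mine, not from the paper):

```python
from math import gcd

def period(a, n):
    # Multiplicative order of a modulo n -- the step Shor's algorithm
    # performs efficiently on quantum hardware; done by brute force here.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_from_period(n, a):
    # Given a base a coprime to n, derive non-trivial factors of n.
    r = period(a, n)
    if r % 2 == 1:
        return None            # odd period: retry with another base
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None            # trivial square root: retry
    return gcd(y - 1, n), gcd(y + 1, n)

print(factor_from_period(15, 7))  # (3, 5)
```

The quantum speed-up lies entirely in computing `period`; the surrounding number theory is classical and cheap.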
Conference Paper
Big Data explosion is a phenomenon of the 21st century. Nowadays, more and more people are using the internet and creating new data regarding ideas, opinions, feelings or their views on a variety of topics and products. The micro-blogging platform Twitter is very popular and produces massive amounts of data every fraction of a second, and useful info...
Conference Paper
Today we see tremendous potential in applying artificial intelligence (AI), deep reinforcement learning, and agent-based simulation to complex real-world problems. AI helps people support and automate decision-making penetrating almost all daily life aspects and research areas. One of the reasons for this potential is that AI helps us solve problem...
Conference Paper
Machine learning approaches and algorithms are spreading across wide areas in research and technology. Cybersecurity breaches are common anomalies for networked and distributed infrastructures, which are monitored, registered, and described carefully. However, the description of each security breach episode and its classification is still a diffic...
Conference Paper
3rd Generation Partnership Project (3GPP) 5th Generation (5G) Vehicle-to-Everything (V2X) implementation scenarios represent typical applications of the new forthcoming technology. Quality of service (QoS) in 5G V2X services represents a critical issue with parameters like reliability, end-to-end latency, data rate and communication range in realis...
Article
For 5th Generation (5G) Vehicle-to-Everything (V2X) communication it would be desirable to build a dynamically changing reconfigurable system, considering different parameters. Turbo codes had a great impact on the realisation and success of 3G and 4G. Despite their complexity, their use for 5G V2X and short frames represents a challenging issue. T...
Conference Paper
Quantum Computing is one of the most promising technology advancements of our time, promising to clarify problems considered unsolved for a classical computer. Real Quantum Computer Devices - once a science fiction concept - are now a reality. Many challenges still remain, on the way to achieve the so-called quantum supremacy. A phenomenon called Q...
Article
Pharmacology, cryptography, geology, and astronomy are some of the scientific fields in which Quantum Computing will potentially take off and fly high. Big Quantum Computing vendors invest large amounts of money in improving the hardware, and they claim that soon enough a quantum program will be hundreds of thousands of times faster than a...
Article
Full-text available
Learning analytics have proved promising capabilities and opportunities to many aspects of academic research and higher education studies. Data-driven insights can significantly contribute to provide solutions for curbing costs and improving education quality. This paper adopts a two-phase machine learning approach, which utilizes both unsupervised...
Conference Paper
Quantum physics is old and mature. Quantum computing and algorithms arose three decades ago. Quantum programming on real Quantum Computational Devices (QCD) is something new. Only within the last five years have researchers and scientists been able to apply quantum programming to QCD, since several vendors like IBM, Rigetti, Microsoft or Google could provide access...
Article
The arrival of the big data era introduces new necessities for accommodating data access and analysis by organizations. The evolution of data is three-fold: increases in volume, variety, and complexity. The majority of data nowadays is generated in the cloud. Cloud data warehouses profit from the benefits of the cloud by facilitating the integration of cl...
Article
Full-text available
The present research work proposes the development of an integrated framework for the personalization and parameterization of learning pathways, aiming at optimizing the quality of the offered services by the Higher Educational Institutions (HEI). In order to achieve this goal, in addition to the educational part, the EDUC8 framework encloses the s...
Chapter
A growing interest has been shown recently, concerning buildings as well as different constructions that use transformative and mobile attributes for adapting their shape, size and position in response to different environmental factors, such as humidity, temperature, wind and sunlight. Responsive architecture as it is called, can exploit climatic...
Chapter
We are living in a world of heavy data bombing and the term Big Data is a key issue these days. The variety of applications, where huge amounts of data are produced (can be expressed in PBs and more), is great in many areas such as: Biology, Medicine, Astronomy, Geology, Geography, to name just a few. This trend is steadily increasing. Data Mining...
Chapter
The primary goal of this paper is to develop a distributed ontology-based knowledge representation approach useful for data warehouses design in the security applications area. The paper proposes a novel database design for registering security incidents in critical infrastructure on railways. We propose an approach based on the data warehouse arch...
Chapter
The project portfolio scheduling problem has become very popular in recent years since many modern organizations operate in a multi-project and multi-objective environment. Current project oriented organizations have to design a plan in order to execute a set of projects sharing common resources such as personnel teams. This problem can be seen as an...
Article
Data warehouse (DW) systems provide the best solution for intelligent data analysis and decision-making. Changes applied to data gradually in real life have to be projected to the DW. Slowly changing dimension (SCD) refers to the potential volatility of DW dimension members. The treatment of SCDs has a significant impact over the quality of data an...
Conference Paper
A growing interest has been shown recently, concerning buildings as well as different constructions that use transformative and mobile attributes for adapting their shape, size and position in response to different environmental factors, such as humidity, temperature, wind and sunlight. Responsive architecture as it is called, can exploit climatic...
Conference Paper
We are living in a world of heavy data bombing and the term Big Data is a key issue these days. The variety of applications, where huge amounts of data are produced (can be expressed in PBs and more), is great in many areas such as: Biology, Medicine, Astronomy, Geology, Geography, to name just a few. This trend is steadily increasing. Data Mining...
Conference Paper
In a cloud based data warehouse (DW), business users can access and query data from multiple sources and geographically distributed places. Business analysts and decision makers are counting on DWs especially for data analysis and reporting. Temporal and spatial data are two factors that affect seriously decision-making and marketing strategies and...
Chapter
Full-text available
This work presents a new area of application for clustering techniques in industrial and transport applications. The main aim of the research is to propose the technique for detection of point anomalies in telecommunication traffic produced by network subsystems of railway intelligent control system. The central idea behind is to apply enhanced DBS...
Chapter
Nowadays, when the data size grows exponentially, it becomes more and more difficult to extract useful information in reasonable time. One very important technique to exploit data is clustering and many algorithms have been proposed like k-means and its variations (k-medians, kernel k-means etc.), DBSCAN, OPTICS and others. The time complexity of a...
Conference Paper
Detection of point anomalies is a very important issue in a large scale of fields from Astronomy and Biology to network intrusions. Clustering has been employed by many researchers to solve such problems and DBSCAN seems like the most efficient technique. Due to its high computational complexity, this work focused on decreasing it by decreasing the...
Conference Paper
Full-text available
This work presents a new area of application for clustering techniques in industrial and transport applications. The main aim of the research is to propose the technique for detection of point anomalies in telecommunication traffic produced by network subsystems of railway intelligent control system. The central idea behind is to apply enhanced DBS...
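DBSCAN, the density-based clustering algorithm these works enhance, can be condensed to a small sketch (an illustrative 2-D implementation of the textbook algorithm; names and parameters are mine, not the papers'):

```python
def region(points, i, eps):
    # Indices of all points within eps of points[i] (itself included).
    px, py = points[i]
    return [j for j, (x, y) in enumerate(points)
            if (x - px) ** 2 + (y - py) ** 2 <= eps ** 2]

def dbscan(points, eps, min_pts):
    labels = [None] * len(points)      # None = unvisited, -1 = noise
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = region(points, i, eps)
        if len(nbrs) < min_pts:
            labels[i] = -1             # noise (may become a border point later)
            continue
        labels[i] = cid                # i is a core point: start a new cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid        # former noise joins as a border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = region(points, j, eps)
            if len(jn) >= min_pts:
                queue.extend(jn)       # j is core too: keep expanding
        cid += 1
    return labels

pts = [(0, 0), (0, 1), (1, 0), (1, 1),
       (10, 10), (10, 11), (11, 10), (11, 11),
       (50, 50)]
print(dbscan(pts, eps=2, min_pts=3))  # [0, 0, 0, 0, 1, 1, 1, 1, -1]
```

The quadratic cost of the naive `region` scan is exactly what the papers above attack, by reducing the number of distance computations.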
Article
The track on the Convergence of Distributed Clouds, Grids and their Management (CDCGM2017) started in 2009 to discuss the evolution of cloud computing w.r.t. the infrastructure providers who started creating next generation hardware that is service friendly, and service developers who started embedding business service intelligence in their computi...
Conference Paper
Nowadays, when the data size grows exponentially, it becomes more and more difficult to extract useful information in reasonable time. One very important technique to exploit data is clustering and many algorithms have been proposed like k-means and its variations (k-medians, kernel k-means etc.), DBSCAN, OPTICS and others. The time complexity of a...
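For reference, the baseline Lloyd's k-means iteration that these works parallelize and accelerate can be sketched in a few lines (a single-machine illustrative version, not the papers' distributed implementation):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    # Lloyd's algorithm: alternate nearest-centre assignment
    # with centroid (mean) updates until the iteration budget runs out.
    rnd = random.Random(seed)
    centers = rnd.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign p to its nearest centre by squared Euclidean distance.
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # recompute the centre as the mean of its cluster
                centers[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(sorted(kmeans(pts, 2)))  # one centre per dense group
```

Each iteration costs O(n·k·d) distance computations, which is the term the distributed variants above split across nodes.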
Article
Nowadays, huge quantities of data are generated by billions of machines and devices. Numerous methods have been employed in order to make use of this valuable resource: some of them are altered versions of established known algorithms. One of the most seminal methods for mining data sources is clustering, and k-means is a key algorit...
Conference Paper
The customers' data of telecommunication companies represent a powerful tool to explore their behaviour and thus increase their satisfaction. The data produced is too large to extract useful information from in time, information which could be beneficial for both sides, companies and customers. One of the solutions to explore this data is representing it by the...
Conference Paper
VANETs represent a key technology for future 5G networks. Three popular mobile routing algorithms used in VANETs are AODV, DSR and DYMO. These algorithms are decomposed into their basic operations and it is shown that they share common operations which can form a reconfigurable architecture. Additionally, the algorithms performance is evaluated con...
Conference Paper
In recent years, huge masses of data have been produced or extracted by computational systems and independent electronic devices. To exploit this resource, novel methods must be employed, or established ones altered, in order to confront the issues that arise. One of the most fruitful techniques, in order to locate and use information from data s...
Article
Full-text available
Within the theoretical framework of Systemic Geopolitics, an attempt is made to apply data-mining techniques, initially cluster analysis, in the field of international relations. Even the early development phase of a stable analytical model leads not only to procedural conclusions (e.g., the comparison of algorithms), but...
Conference Paper
This paper describes the development of a simple, real-time operating system for educational purposes. The system provides soft real-time capabilities to serve real time processes and data and meet deadlines. The main contribution of this paper is a very simple open source operating system that implements different real-time algorithms. The code is...
Conference Paper
In recent years, huge bundles of information have been extracted by computational systems and electronic devices. To exploit the derived amount of data, new innovative algorithms must be employed or the established ones changed. One of the most fascinating and productive techniques, in order to locate and extract information from data repositories...
Article
The project portfolio scheduling problem has become very popular in recent years since many modern organizations operate in a multi-project and multi-objective environment. Current project oriented organizations have to design a plan in order to execute a set of projects sharing common resources such as personnel teams. This problem can be seen as an...
Chapter
Big data refers to data sets whose size is beyond the capabilities of most current hardware and software technologies. The Apache Hadoop software library is a framework for distributed processing of large data sets, while HDFS is a distributed file system that provides high-throughput access to data-driven applications, and MapReduce is a software fr...
Conference Paper
Full-text available
A key factor in project portfolio scheduling is aiming at a pool of projects, due to the multi-objective environment in which modern organizations operate. Recent works proposed multi-objective models that are close to real-world projects and are based on intelligent methods. This work is an evaluation of this approach. More specificall...
Conference Paper
Full-text available
The project portfolio scheduling problem has become very popular in recent years. Current project oriented organisations have to design a plan in order to execute a set of projects sharing common resources such as personnel teams. These projects must, therefore, be handled concurrently. This problem can be seen as an extension of the job shop sched...
Conference Paper
Full-text available
As a geographical method of analyzing power redistribution, Systemic Geopolitical Analysis (according to Ioannis Th. Mazis's theoretical basis) proposes a multi-dimensional, interdisciplinary research pattern, which embraces economic, cultural, political and defensive facts. The amount of data produced combining these attributes is extremely large an...
Article
Nowadays, the growth of data is exponential, leading to colossal amounts of information produced by computational systems and electronic instruments such as telescopes, medical devices and so on. To explore this huge amount of data, new fast algorithms must be discovered or old ones may be redesigned. One of the most popular and useful techn...
Conference Paper
Nowadays, a colossal amount of information is produced by computational systems and electronic instruments such as telescopes, medical devices and so on. To explore these petabytes of data, new fast algorithms must be discovered or old ones may be redesigned. One of the most popular and useful techniques in order to discover and extract information f...
Chapter
Big data refers to data sets whose size is beyond the capabilities of most current hardware and software technologies. The Apache Hadoop software library is a framework for distributed processing of large data sets, while HDFS is a distributed file system that provides high-throughput access to data-driven applications, and MapReduce is a software fr...
Conference Paper
Full-text available
This paper presents a Recurrent Neural Network approach for the multipurpose machines Job Shop Scheduling Problem. This case of JSSP can be utilized for the modelling of project portfolio management besides the well known adoption in factory environment. Therefore, each project oriented organization develops a set of projects and it has to schedule...
Conference Paper
This report gives a brief overview of the main concerns addressed by the authors at the third international track on Cooperative Knowledge Discovery & Data Mining (CKDD), held at WETICE 2013 conference. A presentation of the main topics is given and then a summary of each paper accepted by this conference track is reported.
Article
Full-text available
This work is devoted to the portfolio project management problem and more precisely it is focused on IT portfolios' management. The problem is modelled as a multi-purpose job shop problem. Contemporary organisations such as IT companies define a careful planning to perform a set of projects that share common resources. These projects must, therefor...
Article
Full-text available
This paper presents a Recurrent Neural Network approach for the multipurpose machines Job Shop Scheduling Problem. This case of JSSP can be utilized for the modelling of project portfolio management besides the well known adoption in factory environment. Therefore, each project oriented organization develops a set of projects and it has to schedule...
Conference Paper
This report gives a brief overview of the main concerns addressed by the authors at the third international track on Cooperative Knowledge Discovery & Data Mining (CKDD), held at WETICE 2013 conference. A presentation of the main topics is given and then a summary of each paper accepted by this conference track is reported.
Conference Paper
Full-text available
This paper proposes a Neural Network approach for the project portfolio management problem. The modern organizations such as the IT firms schedule and perform a set of projects that share common rare resources. Therefore, each IT organization develops a set of IT projects and it has to execute them simultaneously. In this work we reviewed the liter...
Data
The Apache Hadoop software library is a framework for distributed processing of large data sets, while HDFS is a distributed file system that provides high-throughput access to data-driven applications, and MapReduce is a software framework for distributed computing of large data sets. The huge collections of raw data require fast and accurate mining...
Conference Paper
Full-text available
This work is devoted to the portfolio project management problem and more specifically is focused on IT portfolios’ management. This problem is formulated as Resource Constrained Scheduling Problem (RCPSP). The contemporary organisations such as the IT companies plan and execute a set of projects that share common resources. Therefore, each IT orga...
Conference Paper
This report gives a brief overview of the main concerns addressed by the authors at the third international track on Cooperative Knowledge Discovery & Data Mining (CKDD), held at WETICE 2012 conference. A presentation of the main topics is given and then a summary of each paper accepted by this conference track is reported.
Conference Paper
Full-text available
The Apache Hadoop software library is a framework for distributed processing of large data sets, while HDFS is a distributed file system that provides high-throughput access to data-driven applications, and MapReduce is a software framework for distributed computing of large data sets. The huge collections of raw data require fast and accurate mining...
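Conceptually, the MapReduce model described above reduces to a map phase, a shuffle that groups by key, and a reduce phase. A toy single-process word count (illustrative only, using no Hadoop APIs) shows the shape:

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Mapper: emit a (word, 1) pair for every word in the document.
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    # Shuffle: group intermediate values by key, then
    # Reduce: sum the counts for each word.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big", "data mining"]
counts = reduce_phase(chain.from_iterable(map_phase(d) for d in docs))
print(counts)  # {'big': 2, 'data': 2, 'mining': 1}
```

In a real Hadoop deployment the mappers and reducers run on different nodes and the shuffle moves data between them; the per-record logic is the same.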
Article
Grids provide access to vast amounts of computational resources for the execution of demanding computations. These resources are geographically distributed, owned by different organizations and are vastly heterogeneous. The aforementioned factors introduce uncertainty in all phases of a Grid Scheduling Process (GSP). This work describes a synergist...
Conference Paper
This report gives a brief overview of the main concerns addressed by the authors at the second international track on Cooperative Knowledge Discovery & Data Mining (CKDD), held at WETICE 2011. A presentation of the main topics is given and then a summary of each paper accepted by the track is reported.
Conference Paper
Full-text available
Requirements management and prioritization is a complex process that should take into account requirements value for customers, cost of implementation, available resources, requirements interdependencies, system architecture and dependencies to the code base. In this paper we present how Social Network Analysis can be used in order to improve softw...
Article
This report gives a brief overview of the main concerns addressed by the authors at the first international workshop on Cooperative Knowledge Discovery & Data Mining (CKDD), held at WETICE 2010. A presentation of the main topics is given and then a summary of each paper accepted by the workshop is reported.
Conference Paper
The problem of scheduling and allocation of tasks to processing nodes in large computational grids (CG) is studied in this paper. Each node of the system is considered as an autonomous stand-alone processing unit, ranging from workstations or small computing devices to computational clusters. For large-scale scheduling on very large CGs, two heuris...
Conference Paper
Full-text available
On large Computational Grids, performance is one of the main problems that has to be addressed but another important issue is the underlying interconnection network that has to be reliable in order to ensure the nodes' intercommunication and the migration of the appropriate load from one node to others. Reliability and performance are both influenc...
Conference Paper
In this study, a heterogeneous distributed computing environment (Grid) is employed as a computing platform to perform some computationally intensive tasks. In order to increase the efficiency of the system (utilization and average response time), a dynamic task scheduling algorithm is proposed to balance the load among the nodes of the system. The...
Conference Paper
Full-text available
Grids offer the potential of harnessing vast amounts of computational resources during the execution of demanding computations. These resources are geographically distributed, owned by different organizations and are highly heterogeneous. All these create an uncertain environment in all phases of a Grid Scheduling Process (GSP). In this work, we fo...
Conference Paper
Full-text available
The massive amount of resources on computational grids raises the question of efficient resource discovery and selection. In this paper we present an agent-based approach to these two phases of a grid scheduling process. The approach is based on client agents which act on behalf of grid users, and search for resources in a network of resource repre...
Article
In this study, a heterogeneous computing environment is employed as a computational platform. In order to increase the efficiency of the system, a dynamic task-scheduling algorithm is proposed, which balances the load among the nodes of the system. The technique is dynamic, nonpreemptive, adaptive, and it uses a mixed centralised and decentralised...
Article
Full-text available
On large Heterogeneous Distributed Computing Systems, load balancing is one of the main problems that has to be addressed. Another important issue is the underlying interconnection network, which has to be reliable in order to ensure the nodes' intercommunication and the migration of tasks. In this paper, we combined these two factors to inv...
Article
In this paper, we study the problem of scheduling a large number of time-consuming tasks (of different sizes) on a heterogeneous distributed system. The heterogeneity is expressed in terms of the inter-resources communication and of the resource latency. In such systems, balancing the load of the tasks among the resources is very critical, since th...
Conference Paper
Full-text available
In this study, a cluster-computing environment is employed as a computational platform. In order to increase the efficiency of the system, a dynamic task scheduling algorithm is proposed, which balances the load among the nodes of the cluster. The technique is dynamic, nonpreemptive, adaptive, and it uses a mixed centralised and decentralised polic...
Conference Paper
Peer-to-peer (P2P) computing has emerged as an alternative model of communication and computation to the client-server model. While P2P computing may significantly increase the performance and scalability of the whole system, such systems still face many challenges in achieving these goals. In this paper we study the problem of scheduling a large nu...
Article
Full-text available
We present a process-algebraic approach for the specification of agent systems where agents participate in joint activities, extending previous work by the first author in (8, 9). While related to existing work on teamwork, such as (15, 16, 17, 19, 20, 21), our focus here is not on discussing notions bearing on joint intentions or abilities. Rather...
Conference Paper
In this paper, we study a high-performance Heterogeneous Distributed System (HDS) that is employed as a computing platform or grid. Precisely, we study the problem of scheduling a large number of CPU-intensive tasks on such systems. In this study, the time spent by a task in the system is considered as the main issue that needs to be minimized. The...
