Today, ICT has broadened its horizons and is practiced in multidisciplinary contexts that introduce new challenges for theoretical and technical approaches. The most important benefit of introducing new ICT technologies into everyday life is the new ways of working that the online world makes possible. The complexity, uncertainty and scale of real-world problems, as well as natural phenomena in ecology, medicine and biology that demand ICT assistance, create challenging application domains for artificial intelligence, decision support and intelligent systems, wireless sensor networks, pervasive and ubiquitous computing, multimedia information systems, data management systems, Internet and web applications and services, computer networks, security and cryptography, distributed systems, and grid and cloud computing. This book offers a collection of papers presented at the First International Conference on ICT Innovations, held in September 2009 in Ohrid, Macedonia. The conference gathered academics, professionals and practitioners reporting recent successes and valuable experience in developing solutions and systems in the industrial and business arena, especially innovative commercial implementations, novel applications of technology, and experience in applying recent ICT research advances to practical situations. Special attention at this conference was given to the application of new technologies in eco-informatics and bio-informatics. Prof. Danco Davcev is a Professor in the Computer Science Department of the Faculty of Electrical Engineering and Information Technologies at Ss. Cyril and Methodius University, Skopje, Macedonia. Prof. Jorge Marx Gomez is a Professor in the Department of Computer Science at Carl von Ossietzky University of Oldenburg, Germany.

Chapters (45)

Dealing with environmental issues at the management level of companies is a relatively new idea that emerged in the late eighties. With the rise of the sustainability concept, which views ecological, social and economic issues on the same level, international politics increasingly challenged companies to internalize their impacts on the environment. The development of voluntary eco-management schemes such as EMAS and ISO 14001 further shifted the focus towards companies. Overall, the growing attention companies had to pay to environmental issues was initiated externally. To comply with environmental goals, Corporate Environmental Management Information Systems (CEMIS) have been developed as a special class of information systems (IS). Because they were introduced mainly to fulfill various external claims, today a large number of very specific, heterogeneous solutions exist in parallel, each targeting different kinds of environmental issues. As such, no truly integrative approach exists, and even the support of larger-scale problems by CEMIS remains almost exclusively at an operational level.
The continued scaling of semiconductor devices, and the time and cost associated with manufacturing these novel device designs, have been the primary driving forces behind the significantly increased interest in Computational Electronics, which, in addition to theory and experiment, is now considered a third important mode in the design and development of novel nanoscale devices. Besides its significant role in industrial research, modeling and simulation also enables alternative modes of education in which students, by running a subset of the tools that, for example, the nanoHUB offers, can get hands-on experience of the operation of nanoscale devices and can examine the variation of internal variables that cannot be measured experimentally, such as the spatial variation of the electron density in the channel in the pre- and post-pinch-off regimes of operation, or the electric field profiles that can be used to tailor the electron density to avoid junction breakdown. In summary, Computational Electronics is emerging as a very important field for future device design in both industry and academia. Keywords: nano-electronics, semiclassical and quantum transport, education
This paper investigates an adaptation of Wireless Sensor Networks (WSNs) to cattle health monitoring. The proposed solution addresses the requirement to continuously assess the condition of individual animals, aggregating this data and reporting it to the farm manager. There are several existing approaches to animal monitoring, ranging from store-and-forward mechanisms to GSM-based techniques; these approaches provide only sporadic information and introduce considerable staffing and hardware costs. The core of this solution overcomes the aforementioned drawbacks by using cheap, low-power sensor nodes capable of providing real-time communication at a reasonable hardware cost. In this paper, both the hardware and software have been designed to provide real-time data from dairy cattle whilst conforming to the limitations associated with WSN implementations.
Nowadays, having relevant information is an important factor that contributes favorably to the decision-making process. The use of ontologies to improve the effectiveness of obtaining information has received special attention from researchers in recent years. However, the conceptual formalism supported by ontologies is not enough to represent the ambiguous information that is commonly found in many domains of knowledge. An alternative is to incorporate the concepts of compensatory fuzzy logic in order to handle uncertainty in the data, taking advantage of the benefits it provides for the formal representation of uncertainty. We present in this paper the formal definition of “Compensatory Fuzzy Ontologies” and attempt to bring to light the need for enhanced knowledge representation systems which, using the advantages of this approach, would increase the effectiveness of using knowledge in the field of decision making. Keywords: Ontologies, Compensatory Fuzzy Logic, Decision Making Process, Compensatory Fuzzy Ontologies
The aim of this work is to propose a methodology for classifying, analyzing and visualizing data on patients with different symptoms from a gynecological database. The application implements a variant of the WITT algorithm for conceptual clustering. A pre-clustering algorithm is proposed that trades off the overlap of the initial clusters against displacing cluster centers far away from regions of high density. To overcome the problem of weak correlation, different coding schemes for cases were tested; a successful approach was to take the square root of attribute value intervals, yielding intervals of different sizes. Two datasets from the gynecological database are used: data related to polycystic ovary syndrome and data relevant to diagnosing pre-eclampsia.
This work presents a system that facilitates prediction of the winner of a sports game. The system consists of methods for collecting data from the Internet on games in various sports, preprocessing the acquired data, feature selection and model building. Many of the prediction and classification algorithms implemented in Weka (Waikato Environment for Knowledge Analysis) have been tested for applicability to this kind of problem, and a comparison of the results has been made. Keywords: Data acquisition, data processing, decision-making, prediction methods
The Internet has broken down the barriers that exist between people and information, effectively democratizing access to human knowledge. Nowhere is this more apparent than in the world of news. According to a recent survey, news browsing and searching is one of the most important Internet activities. The huge amount of news available online reflects users' need for a plurality of information and opinions. We believe that more information means more choice, more freedom and ultimately more power for people. The system presented here is our attempt to connect users with the most important current news stories: it pulls together the most reported stories on its front page so that users do not have to scour the web for up-to-date stories. In this paper we present its architecture and describe the details of its functioning. Keywords: news engine, information extraction, text similarity, clustering, classification
This paper presents an implementation of bagging techniques over the heuristic algorithm for induction of classification rules called SA Tabu Miner (Simulated Annealing and Tabu Search data miner). The goal was to achieve better predictive accuracy of the derived classification rules. Bagging (bootstrap aggregating) is an ensemble method that has attracted a lot of attention, both experimentally, since it behaves well on noisy datasets, and theoretically, because of its simplicity. In this paper we present experimental results for various bagging versions of the SA Tabu Miner algorithm, which is inspired both by research on heuristic optimization algorithms and by rule induction data mining concepts and principles. Several bootstrap methodologies were applied to SA Tabu Miner, including reducing the repetition of instances, forcing the repetition of instances not to exceed two, and using different percentages of the original basic training set. Various experimental approaches and parameters yielded different results on the compared datasets. Keywords: Bagging, Bootstrap, Simulated Annealing, SA Tabu Miner, Tabu Search, Data Mining, Rule Induction
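The bootstrap variants mentioned in the abstract (plain resampling versus capping instance repetition at two) can be sketched generically. This is a minimal illustration with made-up names, not the SA Tabu Miner implementation itself; any rule learner could stand in for the voting members:

```python
import random
from collections import Counter

def bootstrap_sample(data, max_repeats=None):
    """Draw a bootstrap sample of len(data) instances; optionally cap
    how many times any single instance may repeat (e.g. max_repeats=2)."""
    n = len(data)
    sample, counts = [], Counter()
    while len(sample) < n:
        i = random.randrange(n)
        if max_repeats is not None and counts[i] >= max_repeats:
            continue  # this instance already appears max_repeats times
        counts[i] += 1
        sample.append(data[i])
    return sample

def bagged_predict(learners, x):
    """Majority vote over an ensemble of trained classifiers."""
    votes = Counter(h(x) for h in learners)
    return votes.most_common(1)[0][0]
```

Each learner in the ensemble would be trained on its own bootstrap sample; `bagged_predict` then aggregates their votes.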
This paper presents the information/material processing synergy in both biological and human-made systems. It further elaborates metaphors previously proposed for genetic information processing, such as the robotics/flexible manufacturing metaphor and the cell systems software metaphor. Related issues are also discussed, including the file system, program preparation, and its parallel and distributed features such as interthread communication. The paper proposes that, from a manufacturing science viewpoint, the protein biosynthesis process can be viewed as a CAD/CAM system for molecular biology. Keywords: manufacturing science, information/material processing synergy, cell operating system, cell CAD/CAM system, distributed systems, protein biosynthesis, multithreading, nanotechnology
Estimating the position of a mobile robot in an environment is a crucial issue. It allows the robot to know its current state more precisely and makes generating command sequences for achieving a certain goal an easier task. The robot learns the environment using an unsupervised learning method and generates a percept–action–percept graph based on the readings of an ultrasound sensor. The graph is then used in the process of position estimation by matching the current sensory reading category with an existing node category. Our approach allows the robot to generate a set of controls to reach a desired destination. For learning the environment, two unsupervised algorithms were used: the FuzzyART neural network and the GNG network. The approach was tested for its ability to recognize previously learnt positions, and the two algorithms were compared for their precision. Keywords: Mobile robots, Position estimation, Unsupervised learning
In the field of artificial intelligence and mobile robotics, calculating suitable paths for point-to-point navigation is computationally difficult. Maneuvering the vehicle safely around obstacles is essential, and the ability to generate safe paths in a real-time environment is crucial for vehicle viability. This paper presents a method for developing feasible paths through complicated environments using a baseline smooth path based on Hermite cubic splines, together with a method for iteratively optimizing the path. The algorithm has been experimentally evaluated with satisfactory results. Keywords: path planning, path optimization, Hermite cubic spline, obstacle avoidance, environment sensing
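As background on the spline primitive involved, a cubic Hermite segment interpolates two waypoints with prescribed tangents using the four standard basis polynomials. This is a minimal sketch with illustrative names, not the paper's planner:

```python
def hermite_point(p0, p1, m0, m1, t):
    """Evaluate one cubic Hermite segment at parameter t in [0, 1].
    p0, p1: endpoint positions; m0, m1: endpoint tangents (same dimension)."""
    h00 = 2*t**3 - 3*t**2 + 1      # weight of start point
    h10 = t**3 - 2*t**2 + t        # weight of start tangent
    h01 = -2*t**3 + 3*t**2         # weight of end point
    h11 = t**3 - t**2              # weight of end tangent
    return tuple(h00*a + h10*b + h01*c + h11*d
                 for a, b, c, d in zip(p0, m0, p1, m1))
```

Chaining such segments through a sequence of waypoints yields a smooth baseline path whose shape can then be tuned by adjusting the tangents, which is the degree of freedom an iterative optimizer would exploit.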
To understand the structure-to-function relationship, life sciences researchers and biologists need to retrieve similar structures and classify them into the same protein fold. In this paper, we propose a 3D structure-based approach for efficient classification of protein molecules. Classification is performed in three phases. In the first phase, we apply fractal descriptor matching as a filter. Protein structures that satisfy the fractal and radius tolerance are then classified in the second phase, where the 3D Fourier Transform is applied in order to produce rotation-invariant descriptors and some properties of the primary and secondary structure are taken into account. In the third phase we use a k-nearest-neighbor classifier. Our approach achieves 86% classification accuracy with the fractal filter and 92% without it; the fractal filter is shown to significantly shorten the classification time. Our system is faster (seconds) than the DALI system (minutes, hours, days), and we still get satisfactory results. Keywords: Protein classification, fractal descriptor, 3D Discrete Fourier Transform, DALI
The recent advent of high-throughput methods has generated large amounts of protein interaction network (PIN) data. A significant number of proteins in such networks remain uncharacterized, and predicting their function remains a major challenge. A number of existing techniques assume that proteins with similar functions are topologically close in the network. Our hypothesis is that the simultaneous activity of sometimes functionally diverse agents comprises higher-level processes in different regions of the PIN. We propose a two-phase approach: first we extract the neighborhood profile of a protein using Random Walks with Restarts; we then employ a “chi-square method”, which assigns to an uncharacterized protein the k functions with the k largest chi-square scores. We applied our method to protein physical interaction data and protein complex data, and the latter performed better. We performed leave-one-out validation to measure the accuracy of the predictions, revealing significant improvements over previous techniques.
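The first phase, Random Walks with Restarts, has a standard iterative form: repeatedly take one random-walk step and mix in a restart at the seed protein until the visit probabilities converge. This is a minimal sketch on a toy adjacency matrix; the restart probability and function names are illustrative, not taken from the paper, and the graph is assumed to have no isolated nodes:

```python
import numpy as np

def rwr_profile(adj, seed, restart=0.7, tol=1e-10):
    """Steady-state visit probabilities of a random walk on `adj`
    that restarts at node `seed` with probability `restart` each step."""
    A = np.asarray(adj, dtype=float)
    W = A / A.sum(axis=0)              # column-stochastic transition matrix
    p = np.zeros(len(A)); p[seed] = 1.0
    r = p.copy()                       # restart distribution
    while True:
        p_next = (1 - restart) * W @ p + restart * r
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
```

The resulting vector is the neighborhood profile of the seed protein; function assignment would then compare this profile against the annotations of the visited proteins.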
Protein function is tightly related to the classification of proteins into hierarchical levels in which proteins share the same or similar functions. One of the most relevant protein classification schemes is the structural classification of proteins (SCOP). The SCOP scheme has one drawback: due to its manual classification methods, new proteins are classified much more slowly than novel protein structures are deposited in the protein data bank (PDB). In this work, we propose two approaches for automated protein classification. We extract protein descriptors from the structural coordinates stored in the PDB files, and then apply the C4.5 algorithm to select the descriptor features most appropriate for protein classification based on the SCOP hierarchy. We propose a novel classification approach by introducing a bottom-up classification flow and a multi-level classification approach. The results show that these approaches are much faster than other similar algorithms, with comparable accuracy.
With the recent development of technology, wireless sensor networks are becoming an important part of many applications, such as health and medical applications, military applications, agriculture monitoring, home and office applications, and environmental monitoring. Knowing the location of a sensor is important, but GPS receivers and sophisticated sensors are too expensive and require processing power. The wireless sensor network localization problem is therefore a growing field of interest. The aim of this paper is to compare wireless sensor network localization methods; multidimensional scaling and semidefinite programming are chosen for this study. Multidimensional scaling is a simple, widely discussed mathematical technique for solving the localization problem, whereas semidefinite programming is a relatively new and more complex field of optimization with growing use. Using extensive simulations, a detailed overview of the two approaches is given with regard to different network topologies, various network parameters and performance issues. The performance of both techniques is highly satisfactory and the estimation errors are minimal. Keywords: Wireless Sensor Networks, Semidefinite programming, multidimensional scaling, localization techniques
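As background, the multidimensional scaling side of this comparison rests on classical MDS: given a matrix of pairwise distances, double-center the squared distances and take the top eigenvectors to recover node coordinates up to rotation and translation. A minimal sketch under the assumption of exact Euclidean distances (names are illustrative; real WSN ranging data is noisy and incomplete):

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover `dim`-dimensional coordinates (up to rigid motion)
    from an n-by-n matrix D of pairwise Euclidean distances."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (np.asarray(D, float) ** 2) @ J  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]            # largest eigenvalues first
    scale = np.sqrt(np.maximum(vals[idx], 0.0))
    return vecs[:, idx] * scale
```

With perfect distances the recovered coordinates reproduce the input distance matrix exactly; with measured ranges, MDS gives a least-squares embedding that anchors can then map into absolute positions.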
Greedy algorithms are one of the oldest known methods for code construction. They are simple to define and easy to implement, but require exponential running time. Codes obtained with the greedy construction have very good encoding parameters; hence, the idea of finding faster algorithms for code generation seems natural. We start with an overview of the greedy algorithms and propose some improvements. Then we study the code parameters of long greedy codes in an attempt to produce stronger estimates. It is well known that greedy-code parameters give rise to the Gilbert-Varshamov bound; improving this bound is a fundamental problem in coding theory. Keywords: Linear codes, Greedy Codes, Lexicodes, Gilbert-Varshamov Bound, Greedy Algorithms
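The basic greedy (lexicode) construction scans all length-n binary vectors in lexicographic order and keeps each one whose Hamming distance to every codeword kept so far is at least the target minimum distance. A minimal sketch that also makes the exponential running time plain (the function name is ours):

```python
def greedy_lexicode(length, min_dist):
    """Greedily build a binary lexicode of the given block length and
    minimum Hamming distance; codewords are returned as integers."""
    code = []
    for v in range(2 ** length):                  # lexicographic scan: 2^n vectors
        if all(bin(v ^ c).count("1") >= min_dist for c in code):
            code.append(v)                        # keep v: far from all kept words
    return code
```

For example, length 5 and distance 3 yields a 4-word code, matching the known optimum A(5,3) = 4; the exhaustive scan over 2^n vectors is exactly the exponential cost the abstract refers to.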
In this paper we assess the vulnerability of different synthetic complex networks by measuring traffic performance in the presence of intentional node and edge attacks. We choose which nodes or edges to attack using several centrality measures: degree, eigenvector and betweenness centrality. To characterize the vulnerability of four different complex networks (random, small-world, scale-free and random geometric), we analyze their throughput when nodes or edges are attacked using the above-mentioned strategies. When an attack happens, the bandwidth is reallocated among the flows, which affects the traffic utility. One result shows that the scale-free network gives the best flow performance, followed by random networks and small-world networks, with the poorest performance given by random geometric networks. This changes dramatically after removing some of the nodes (or edges): the biggest performance drop occurs in random and scale-free networks, and the smallest in random geometric and small-world networks. Keywords: Vulnerability, NUM, complex networks, attack strategies, measurements, bandwidth allocation
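A degree-centrality attack of the kind described can be sketched on a plain adjacency-list graph, using the size of the surviving largest connected component as a crude proxy for performance (the paper measures throughput under bandwidth reallocation; this simplification and all names are ours):

```python
from collections import deque

def largest_component(adj):
    """Size of the largest connected component of an undirected graph
    given as {node: [neighbors]}."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:                      # breadth-first traversal
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def degree_attack(adj, k):
    """Remove the k highest-degree nodes and return the surviving graph."""
    targets = set(sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:k])
    return {u: [v for v in nbrs if v not in targets]
            for u, nbrs in adj.items() if u not in targets}
```

Swapping the degree ranking for eigenvector or betweenness centrality changes only the `targets` line, which is what makes comparing attack strategies straightforward.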
In this paper we study the end-to-end outage performance of multi-hop cooperative communication systems employing amplify-and-forward (AaF) relaying under Rayleigh, Nakagami, Rician and Weibull fading channels. The outage probability performances of multi-hop systems with fixed-gain and variable-gain relays are compared. The outage probability for multi-hop systems under Rayleigh, Nakagami and Weibull fading models can be determined only by combining analytical results with numerical integration techniques. We show that the fixed-gain system has better outage performance than the variable-gain system for all fading scenarios, and that this performance gap increases with the number of hops. Keywords: Wireless cooperative communications, outage probability, multipath fading, multi-hop relay systems
Email viruses are one of the main security problems on the Internet. In order to stop a computer virus outbreak, we need to understand email interactions between individuals. Most spreading models assume that users interact uniformly in time following a Poisson process, but recent measurements have shown that the intercontact time follows a heavy-tailed distribution. The non-Poisson nature of contact dynamics results in prevalence decay times significantly larger than predicted by standard Poisson-based models. Email viruses spread over a logical network defined by email address books, and the topology of this network plays an important role in the spreading dynamics. Recent observations suggest that node degrees in email networks are heavy-tailed and can be modeled as a power-law network. We propose an email virus propagation model that considers both the heavy-tailed intercontact time distribution and the heavy-tailed topology of email networks. Keywords: Computer viruses, Dynamical systems, Complex networks
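The contrast between Poisson and heavy-tailed contact dynamics can be illustrated by sampling inter-contact gaps from an exponential distribution versus a Pareto distribution with matched mean; the parameter choices below are illustrative, not the paper's fitted values:

```python
import random

def intercontact_times(n, model="pareto", mean=1.0, alpha=1.5):
    """Draw n inter-contact gaps. 'poisson' gives exponential gaps
    (memoryless contacts); 'pareto' gives heavy-tailed gaps with the
    same mean, for shape alpha > 1."""
    if model == "poisson":
        return [random.expovariate(1.0 / mean) for _ in range(n)]
    xm = mean * (alpha - 1) / alpha        # Pareto scale matching the mean
    # Inverse-CDF sampling: X = xm * U^(-1/alpha), U uniform on (0, 1]
    return [xm * (1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]
```

Feeding the same epidemic simulation with both gap sequences exposes the effect the abstract describes: the occasional very long Pareto gaps stretch out the prevalence decay relative to the exponential case.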
In this paper the impact of node communities on ad hoc network performance is investigated. The community structures are viewed on the logical (application) level and on the physical level. They are modeled using complex network theory, with which socially based mobility and communication patterns are developed. The different modeling approaches show how the distribution of interconnections between the nodes influences network performance. The results show that the logical interconnections have a great impact on network performance, especially when the node degree distribution follows a scale-free law. Keywords: communities, ad hoc network, performance, complex networks, logical and physical level
In this paper we use the property of diatoms as bioindicators to identify which physical-chemical parameters are present in a water sample, using the machine learning algorithm CN2. Physical-chemical parameters such as conductivity, saturated oxygen, pH, organic chemical parameters and metals are important in environmental monitoring. These parameters influence the entire lake food web, disturbing organisms' patterns and the interactions between them, including the diatom community. Diatom communities are highly indicative of processes such as eutrophication and of the presence or absence of certain physical-chemical parameters, which means they can be used as bio-indicators of water quality. The CN2 algorithm produces rules in IF-THEN form, which is suitable for organizing knowledge from diatom abundance data; in the literature, diatom ecological preferences are organized in the same manner. The experimental setup is built to satisfy not only the properties of the algorithm but also the ecological knowledge of the diatom community. We used several modifications of the algorithm and compared the compactness and coverage of the induced rules. For regression problems we compare the correlation coefficient, root mean square error (RMSE) and relative root mean square error (RRMSE), or rule quality, to determine which experiment proved to be most accurate and most general. Several of the rules are presented in this paper together with the evaluation performance. Based on modifications of the CN2 algorithm parameters, we were able to extract knowledge from the data which later proved to be valid or, in some cases, novel for many newly discovered diatoms. In future work we plan to investigate more modifications of the CN2 algorithm, to implement multi-target rule induction, and to compare those results with the single-target case.
Semantic interaction consists of interacting with data by means of the image that represents it. In this paper, we analyze the possibility of adding semantic interaction to data-flow-oriented visualization applications used in enterprise environments. For this purpose, we discuss a case study from the perspective of the type of interaction supported. We also review recent innovative approaches that attempt to use ontologies to link representation with meaning, enhancing user interaction and comprehension. Keywords: Visualization, Semantic Interaction, Visualization Taxonomy, Ontology
The market of Semantic Web Services is a heterogeneous, volatile environment, and the ability of an enterprise to adapt to it by finding relevant, high-quality resources is crucial. This paper presents an approach to personalization in a peer-to-peer network that continuously searches for relevant resources using a beehive-like mechanism and aggregates results in a fuzzy manner. We consider diversity, scrutability and efficient traversal to be key features in facing the difficulties of constructing a successful market. We link the results and findings of our study to the quality of electronic services, and show how the proposed solution enhances overall user satisfaction based on the dimensions used to measure its level. Keywords: beehive, multi-agent system, mobile agent, personalization, peer-to-peer network, Semantic Web Services, search engine, Quality of Electronic Services, fuzzy logic
This paper describes the development of a methodology and software for phonetics designed with the support of the Internet and web technologies. A web application was created as a data gathering instrument for a phonetic study which aimed to detect the most frequent segmental markers of Macedonian-English accented speech as perceived by native speakers of English and to find out whether English native speakers of different backgrounds perceive the same segments as non-native. The results demonstrate the manifold advantages of the approach as well as the flexibility of its adaptation in applied linguistic research and second language learning/teaching. Keywords: web application, online experiment, phonetics, computer-assisted research
The current technology used for displaying Windows forms is about 15 years old and is based on two parts of the Windows operating system: the User32 library and the GDI/GDI+ API. Microsoft's new technologies in this area, WPF and XAML, improve the current situation by offering better optimization of code execution, extended code reusability and a fresh new visual appearance. These new technologies improve the quality of the created GUI and widen its usage area. This paper describes the use of these technologies in implementing the GUI for a dental charting application developed as part of a broader medical information system. The achieved results confirm the improved quality brought by the new technologies. Keywords: User Interface Design, WPF, XAML, Dental Charting
Enterprise Cloud Computing is becoming increasingly prevalent in the IT and business application industry. The scientific challenge now is to overcome most of the disadvantages of legacy on-premise solutions. To this end, the different existing research streams, requirements and semantic perspectives need to converge into one central, ubiquitous, standardized architectural approach. The goal is to perform on-demand and cross-enterprise business processes in the context of Very Large Business Applications (VLBAs). In this context, cloud standardization is also one of the biggest challenges of the Open Cloud Manifesto. This paper discusses and outlines how a semantic composition and federation based reference model (a federated ERP system) can be established for Enterprise Cloud Computing and set up for business operation. Furthermore, it debates how enterprises can develop and maintain enterprise software solutions in the cloud community in an evolutionary, self-organized way that complies with cloud standards. Here, a metric-driven Semantic Service Discovery and the Enterprise Tomograph can be seen as an entry point to an organic, gradable marketplace of processes exposed by cloud-based service grids and data grids at graded levels of granularity and semantic abstraction. Keywords: Federated ERP, Enterprise Tomography, Cloud Computing, Green Cloud, Enterprise 2.0, Semantic Service-Oriented Architecture
This survey focuses on measuring the level of e-readiness of enterprises in the Republic of Macedonia, with an emphasis on the concept of “e-business strategy readiness”. The survey gathered 348 responses from Macedonian enterprises, structured according to their economic activity (8 groups) and divided into 8 regions in accordance with the Nomenclature of Territorial Units for Statistics (NUTS) proposed by the State Statistical Office of the Republic of Macedonia. On this basis, we examine the indicators of the e-business strategy readiness index, which is composed of three core sub-indices: the level of adoption of ICT, the level of ICT usage, and the level of ICT strategy readiness. The e-business strategy readiness index and its composite sub-indices have been calculated for the enterprises in the Republic of Macedonia.
Nowadays, e-testing is an often-used method of evaluation in the learning process. In this paper, we discuss the e-testing problem of creating a large question set that reflects the knowledge of some domain. A new model of e-testing is introduced, with a proposed solution to the problem of creating a large question set for a given domain. We then present a methodology for comparing the results, discuss the contribution of the new model and its realization for the automated creation of a large number of questions, and evaluate the quality and the vulnerability of the question set as well. It is shown that the new model increases the speed of question production by more than 10 times. Keywords: Semantic Web, semantic web technologies, ontology, OWL, e-testing, question set
Full (complete) integration has not yet been achieved in supply networks; given the importance of integration for management in this environment, this is a complex challenge. Reducing the gap between semantics, Business Process Modeling and interoperability solutions will significantly improve the information flow in the chain and its understanding. For this purpose we present a proposal for a framework design that combines semantically supported modeling with the orchestration of integration processes in this context, which will translate into better decisions based on the latest and best information. We also present a possible decision support model that could be used for the validation of this framework. Keywords: Semantic Business Process Modeling, Interoperability, Heterogeneity, SOA, Integration Technology Evaluation, Compensatory Fuzzy Logic
Nowadays it is increasingly difficult to search the entire Web in order to find the correct information a user needs. In most cases the retrieved results do not meet the expected needs, because of weakly defined user requests and the unstructured nature of the existing information resources. The main problems SOA solutions face are the lack of automation in both the discovery and invocation phases between services and the queries provided by consumers. Our proposed solution overcomes the weaknesses of traditional SOA solutions and the complexity of semantic ones by developing a consistent framework that makes data understandable to both humans and machines. In addition, it provides Web Service validation and a methodology for dynamic composition of Web Services. Keywords: SOA, Semantic SOA, Web Service Validation, Ontology, Ontology Model
In this paper we present a new mobile service that enables auto-production of 3D graphics content for mobile platforms. The general benefit of this service is that it enriches the content of the communication with a video stream of an animated avatar produced by dedicated servers controlled by the mobile end-user. The approach consists in selecting an avatar, downloading it to the mobile phone as a 3D object, and composing the message by playing with the avatar instead of typing text. The message can be further enriched by adding a background to the scene and subtitles for animation sub-sequences. The tests performed show the feasibility of the proposed solution in terms of transmission cost. Keywords: mobile communication, 3D graphics, auto-production, avatars, computer animation
This paper presents a set of technologies developed with the goal of improving the learning and practice of Cued Speech (CS). They are based on 3D graphics and cover the entire end-to-end content chain: production, transmission and visualization. Starting from the requirements of an online system for CS, the research and development path took into account real-time constraints, personalization, user acceptability and, equally important, the ease and feasibility of deployment. The original components of the system include 3D graphics and animation encoders, streaming servers and visualization engines, validated in two applications: a web service for text-to-animation conversion and a chat service supporting two or more users. Keywords: computer graphics, cued speech, avatar animation, real-time transmission, 3D graphics player, MPEG-4 standard
Modern teaching methods in the field of applied computer science cannot ignore the teaching of well-known application and information systems, such as enterprise systems, e.g. ERP systems. Case studies are therefore the most commonly chosen way to introduce the handling of these systems step by step. Effective teaching concepts have to improve this situation by considering pedagogical and didactical aspects that support the individual learning process of each student. Our current research addresses the needs of higher education, e.g. on-site learning in a lab as well as e-learning courses, supported by new methods of technology-enhanced learning that record students' behaviour in order to guide them through the system. To this end we introduce a concept using AUM and Enterprise Tomography to improve the teaching of Enterprise Systems. Keywords: Technology Enhanced Learning, Enterprise Systems, Higher Education, Application Usage Mining, Web Usage Mining, E-Learning, Enterprise Tomography
This study presents an approach for the presentation of MPEG-4 3D graphics objects in real time using Microsoft XNA technology. The proposed approach considers the aspects of real-time 3D rendering in a multiplatform environment. We introduce management of MPEG-4 3D resources to address the rendering requirements of a modern real-time 3D rendering engine; in our approach this management extends the engine by enabling an appropriate representation of data resources. The study presents an example of using MPEG-4 encoded 3D content in advanced 3D visualization applications such as games and virtual reality on Windows and Xbox systems. Keywords: MPEG-4, 3D Scenes, Rendering, XNA
High angular resolution diffusion imaging (HARDI) is able to capture the water diffusion pattern in areas of complex intravoxel fiber configurations. However, compared to diffusion tensor imaging (DTI), HARDI adds extra complexity (e.g., high post-processing time and memory costs, non-intuitive visualization). Separating the data into Gaussian and non-Gaussian areas makes it possible to use complex HARDI models only where they are necessary. We study HARDI anisotropy measures as classification criteria applied to different HARDI models. The chosen measures are fast to calculate and allow interactive data classification. We show that increasing the b-value and the number of diffusion measurements above clinically accepted settings does not significantly improve the classification power of the measures. Moreover, denoising enables better-quality classifications even with low b-values and sparse sampling schemes. We study the measures quantitatively on an ex-vivo crossing phantom, and qualitatively on real data under different acquisition schemes.
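As an illustration of the general idea of gating model choice on a scalar anisotropy measure (the paper's own HARDI measures and thresholds differ), fractional anisotropy from the DTI model can be used; the threshold below is purely illustrative:

```python
import numpy as np

def fractional_anisotropy(eigvals):
    """FA from the three eigenvalues of a diffusion tensor.
    FA lies in [0, 1]: ~0 for isotropic (Gaussian-like) diffusion,
    close to 1 for strongly directional diffusion."""
    l = np.asarray(eigvals, dtype=np.float64)
    md = l.mean()                              # mean diffusivity
    num = np.sqrt(((l - md) ** 2).sum())
    den = np.sqrt((l ** 2).sum())
    if den == 0.0:
        return 0.0
    return float(np.sqrt(1.5) * num / den)

def classify_voxel(eigvals, fa_threshold=0.2):
    """Toy classifier: low-FA voxels are treated as Gaussian (DTI suffices),
    the rest are flagged for a complex HARDI model."""
    return "gaussian" if fractional_anisotropy(eigvals) < fa_threshold else "hardi"
```

Such a per-voxel scalar is cheap to evaluate, which is what makes the interactive classification described above feasible.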
With the development of digital video technology, the approach to video quality estimation has changed. There are basically two types of metrics used to measure the objective quality of processed digital video: purely mathematically defined video quality metrics (DELTA, MSAD, MSE, SNR and PSNR), where the error is calculated mathematically as the difference between original and processed pixels, and video quality metrics that model characteristics of the Human Visual System (HVS), such as SSIM, NQI and VQM, where perceptual quality is considered in the overall estimation. In this paper, an overview and experimental comparison of the PSNR and SSIM metrics for video quality estimation is presented.
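As a sketch of the purely mathematical class of metrics, PSNR follows directly from the per-pixel MSE (with MAX the peak pixel value, 255 for 8-bit video):

```python
import numpy as np

def mse(ref, proc):
    """Mean squared error between a reference and a processed frame."""
    ref = ref.astype(np.float64)
    proc = proc.astype(np.float64)
    return float(np.mean((ref - proc) ** 2))

def psnr(ref, proc, max_val=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical frames."""
    err = mse(ref, proc)
    if err == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / err)
```

SSIM, by contrast, compares local luminance, contrast and structure statistics rather than raw pixel differences, which is why the two metrics can rank the same distortions differently.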
In this paper we propose two new types of compression functions based on quasigroup string transformations. The first type uses known quasigroup string transformations, defined elsewhere, alternating the direction of the transformation by going forward and backward through the string. The security of this design depends on the chosen quasigroup string transformation, the order of the quasigroup and the properties satisfied by the quasigroup operations. We illustrate how this type of compression function is applied in the design of the cryptographic hash function NaSHA. The second type of compression function uses a new generic quasigroup string transformation, which combines two orthogonal quasigroup operations into a single one. This is, in fact, a deployment of the concept of multipermutation for the perfect generation of confusion and diffusion. One implementation of this transformation is the extended Feistel network \(F_{A,B,C}\), which has at least two orthogonal mates as orthomorphisms: its inverse \(F^{-1}_{A,B,C}\) and its square \(F^{2}_{A,B,C}\).
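For readers unfamiliar with quasigroup string transformations, a minimal sketch of the basic e-transformation over a toy quasigroup of order 4 follows; the Latin square is an arbitrary example, not one used in NaSHA, and the direction flag mirrors the forward/backward alternation of the first design:

```python
# A quasigroup of order 4 given by its Latin square: Q[a][b] is the product a*b.
Q = [
    [1, 0, 3, 2],
    [2, 3, 0, 1],
    [3, 2, 1, 0],
    [0, 1, 2, 3],
]

def e_transform(leader, s, forward=True):
    """One e-transformation pass over a string s of symbols 0..3.
    Forward:  b[0] = l*s[0], b[i] = b[i-1]*s[i].
    Backward: the same chaining rule applied right to left."""
    idx = range(len(s)) if forward else range(len(s) - 1, -1, -1)
    out = list(s)
    prev = leader
    for i in idx:
        prev = Q[prev][out[i]]
        out[i] = prev
    return out
```

Because each output symbol is chained through all previous ones, alternating forward and backward passes spreads every input symbol's influence over the whole string.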
In this paper we examine the performance of error-correcting codes based on quasigroup transformations proposed elsewhere. In these error-correcting codes there exists a correlation between any two bits of a codeword; the codes are also nonlinear and almost random. We give simulation results for the packet-error and bit-error probability over a binary symmetric channel and for several parameters of these codes. From these simulation results we conclude that the performance of these codes depends on the quasigroup used, the length of the initial key and the way the redundant information is introduced. Keywords: error-correcting code, random code, packet-error probability, bit-error probability, quasigroup, quasigroup transformation
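The simulations reported here rely on the standard binary symmetric channel model, which can be reproduced in a few lines (the sketch below simulates only the channel, not the quasigroup codes themselves):

```python
import random

def bsc(bits, p, rng):
    """Pass a bit sequence through a binary symmetric channel:
    each bit flips independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def bit_error_rate(sent, received):
    """Fraction of bits that differ between the sent and received sequences."""
    flips = sum(s != r for s, r in zip(sent, received))
    return flips / len(sent)
```

Running a candidate code through such a channel at several crossover probabilities p is how packet-error and bit-error curves like those in the paper are typically produced.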
The Blue Midnight Wish hash function is one of the 14 candidate functions continuing in the second round of the SHA-3 competition. Its design contains several S-boxes (bijective components) that transform 32-bit or 64-bit values. Although they look similar to the S-boxes in SHA-2, they are also different. It is a well-known fact that the design principles of the SHA-2 family of hash functions are still kept as classified NSA information; however, in the open literature there have been several attempts to analyze those principles. In this paper we first give an observation on the properties of the SHA-2 S-boxes and then investigate the same properties in Blue Midnight Wish.
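As background, a sketch of one of SHA-2's published components (not of Blue Midnight Wish's actual s-functions): SHA-256's Σ0 is an XOR of three word rotations. Since the number of rotations is odd, the map is a bijection on 32-bit words, which is exactly the kind of property at stake when comparing the two designs:

```python
def rotr(x, n, w=32):
    """Right-rotate a w-bit word by n positions."""
    mask = (1 << w) - 1
    return ((x >> n) | (x << (w - n))) & mask

def big_sigma0(x):
    """SHA-256 Sigma_0: ROTR^2 xor ROTR^13 xor ROTR^22 (FIPS 180-4).
    An XOR of an odd number of rotations is invertible over GF(2)^32."""
    return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)
```

Blue Midnight Wish's corresponding components mix rotations with shifts, which is one of the differences the paper investigates.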
In this paper we present an analysis methodology for possible performance improvements of an adaptive Petri-net Grid Genetic Algorithm workflow. Genetic Algorithms are a very powerful optimization technique that is easily parallelized using different approaches, which makes them ideal for the Grid. The high-level Petri-net workflow model greatly outperforms the DAG workflow model currently available in the gLite Grid middleware. Using the flexibility of high-level Petri-net workflows, we have designed an adaptive workflow that overcomes the heterogeneity and unpredictability of the Grid infrastructure, giving users better and more stable execution times than the formerly used DAG workflows. The performance of the Petri-net Grid Genetic Algorithm is analyzed using several parameters that change the behavior of the optimization. Performance is measured as the shortening of the overall execution time of the workflow while searching for a solution of suitable quality; in the course of the analysis we defined a stable measure of the quality of the solution used in the experiments. The experimental results, obtained by Genetic Algorithm optimization of a Data Warehouse design, revealed an unexpected and interesting influence of some parameters on the optimization time needed to reach the same quality level. Keywords: Genetic Algorithms, High-level Petri-nets, Grid Workflows
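The serial skeleton of the kind of genetic algorithm being parallelized on the Grid looks as follows; the OneMax fitness function stands in for the actual Data Warehouse design cost model, and all parameter values are illustrative:

```python
import random

def one_max(genome):
    """Toy fitness: count of ones; a stand-in for the real cost model."""
    return sum(genome)

def ga(pop_size=30, genome_len=20, generations=60, pm=0.02, rng=None):
    """Minimal generational GA: tournament selection, one-point crossover,
    bit-flip mutation. Grid versions farm out the fitness evaluations,
    which is the step that dominates execution time."""
    rng = rng or random.Random(0)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)
            return a if one_max(a) >= one_max(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genome_len)
            child = p1[:cut] + p2[cut:]
            child = [g ^ (rng.random() < pm) for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=one_max)
```

Parameters such as population size, mutation rate and generation count are exactly the knobs whose influence on time-to-quality the analysis above examines.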
Leveraging huge seismic data collections can be quite a challenging process, especially when the available data comes from a large number of sources. Computing Grids enable such processing, giving users the tools necessary to share data from various countries and sources. Processing this data not only yields results related to the earthquakes themselves, but also reveals the geological features of the observed regions. Using a gLite-based Grid, we propose a framework for massively parallel wavelet processing of seismic waveforms using advanced Grid workflows. Such workflows enable users to harness the power of the Grid more easily and to achieve better performance. During the processing we seamlessly use several different Grid services (AMGA, LFC ...) to locate the necessary data and extract the needed information. The Grid application uses waveform data from several earthquakes recorded at the same station. For the processing we use the continuous wavelet transform in order to capture the characteristics of the earth's crust along the path from the earthquake origin towards the station. These features are recorded and later classified using pattern matching to identify important characteristics of a specific seismic region as seen from that station.
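The continuous wavelet transform at the heart of the pipeline can be sketched with numpy alone; a Ricker (Mexican-hat) wavelet is used here for simplicity, and the actual wavelet and scales applied to the seismic waveforms may differ:

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet sampled on `points` samples, width a."""
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return amp * (1 - (t / a) ** 2) * np.exp(-(t ** 2) / (2 * a ** 2))

def cwt(signal, widths):
    """Continuous wavelet transform: correlate the signal with wavelets at
    several scales. Rows of the result index scale, columns index time."""
    signal = np.asarray(signal, dtype=np.float64)
    out = np.empty((len(widths), len(signal)))
    for i, w in enumerate(widths):
        n = min(10 * int(w), len(signal))
        out[i] = np.convolve(signal, ricker(n, w), mode="same")
    return out
```

Since each scale (and each waveform) is computed independently, the transform maps naturally onto the massively parallel Grid workflow described above.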
Crossing institutional and national boundaries in collaborative research has become reality. One of the most important tools enabling scientists to work on large projects, with massive parallelism and huge datasets, is of course the computing Grid. Although the technology has reached a level where large groups of scientists use it on a daily basis, there are still many open issues and problems. On the technical side, there is still a need for interoperability between the many different middlewares, for more intuitive and user-friendly interfaces, tools to support richer workflows, Quality of Service tools and enablers, monitoring, etc. But there are also many other open issues, mainly related to the self-sustainability of the infrastructure. Most of the current Grid infrastructure is supported by short-term research and development projects; how the infrastructure will support itself is something to be seen in the near future. The EU is strongly committed to keeping its leading position in the field of computing Grids, mainly by focusing on integrating all the national Grid initiatives into a larger community, the European Grid Initiative (EGI).
Process planning is one of the key activities in product design and manufacturing. The impact of process plans on all phases of product design and manufacture requires a high level of interaction between different activities and their tight integration into a coherent system. In this paper, an object-oriented knowledge representation approach is presented, with a module for part modeling and a module for process plan generation. Descriptions of machining process entities and their relationships with features, machines and tools are provided. The benefits of the proposed representation, which include the connection with the geometric model, a reduced search space and alternative plan generation, are discussed. These contributions enable a new generation of computer-aided process planning (CAPP) systems that can be adapted to various manufacturing systems and integrated with other computer-integrated manufacturing (CIM) modules. Keywords: Process Planning, Knowledge, Object-Oriented Programming
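An object-oriented encoding of the entity relationships described (features linked to candidate operations, each binding a machine and a tool) might be sketched as follows; all class and attribute names are illustrative, not those of the paper's system:

```python
from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str

@dataclass
class Machine:
    name: str
    tools: list

@dataclass
class Machine_Operation:
    """One machining operation, bound to the machine and tool that perform it."""
    name: str
    machine: Machine
    tool: Tool

@dataclass
class Feature:
    """Geometric feature on a part (e.g. a hole or slot), linked to the
    candidate operations that can produce it."""
    name: str
    operations: list = field(default_factory=list)

def process_plan(part_features):
    """Naive plan generator: pick the first candidate operation per feature.
    A real CAPP system would search the alternatives and prune by constraints."""
    return [f.operations[0] for f in part_features if f.operations]
```

Keeping the candidate operations attached to each feature is what yields the reduced search space and the alternative plans mentioned above: the planner only ever considers operations already known to be applicable.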
In the decade up to 2020, European higher education will make a vital contribution to realizing a Europe of knowledge that plays a relevant role, at the national and international level, in the cultural and economic development of countries. Higher education will also face the major challenges and opportunities of globalization and accelerated technological development, with new providers, new learners and new types of learning. New educational requirements stimulated by innovative telecommunication technologies lead, almost as a direct consequence, to new educational materials and methodologies, and to videoconferencing and distance-learning issues. In this framework, the three-year ViCES (Video Conferencing Educational Services) project was launched, financed by the European Commission within the TEMPUS (Trans-European Mobility Scheme for University Studies) programme. The ViCES project will provide an environment that increases student and academic mobility, as well as an infrastructure that eases the harmonization of different curricula outcomes. Keywords: higher education, TEMPUS European Programme, innovative educational methods, distance learning, curricula harmonization, national and international cooperation
In this paper we describe the architecture and implementation of a new solution for building model-based web portals, applicable not only to university management but to a much broader range of settings. It is not just a replica of modern ERP systems, or of portals built on content management solutions, but a novel solution proposed as an efficient application for workflow and document management in SMEs.
Promising frameworks have been developed to ensure that different players can benefit from economic and social activities with high levels of interdependency. Among these frameworks, the triple helix model is an interesting one that accounts for interactions and ensures the coordination of tasks. This chapter focuses on both the elements of the framework and its applications, with a special focus on knowledge diffusion in MENA and Arab countries. The usefulness of the triple helix coordinating process is clearly shown to be a way of accounting for interferences and interdependencies. The implicit idea is that further coordination requirements under this model call for further use of ICT tools.
In very crowded areas, a large number of LTE users in a single cell will try to access services at the same time, causing a high load on the Base Station (BS). Some users may be blocked from getting their requested services due to this high load. Using a two-hop relay architecture can help increase system capacity, extend the coverage area, decrease energy consumption and reduce the BS load. Clustering techniques can be used to configure the nodes in such a two-layer topology. This paper proposes a new algorithm for relay selection based on the Basic Sequential Algorithmic Scheme (BSAS), combined with a power-control protocol. The simulation results show that the proposed algorithm improves system capacity and energy consumption compared to other existing clustering/relaying schemes.
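For reference, the plain BSAS scheme underlying the proposed relay-selection algorithm can be sketched as below, clustering user positions by distance; the dissimilarity threshold theta and the cluster cap are deployment parameters, and the paper's power-control step is omitted:

```python
import math

def bsas(points, theta, max_clusters):
    """Basic Sequential Algorithmic Scheme: scan the points once; assign each
    to the nearest existing cluster if its distance to that cluster's centroid
    is below theta, otherwise open a new cluster (up to a cap). In the
    relay-selection setting, cluster centroids would seed relay candidates."""
    centroids, members = [], []
    for p in points:
        if centroids:
            d, j = min((math.dist(p, c), j) for j, c in enumerate(centroids))
        else:
            d, j = float("inf"), -1
        if d > theta and len(centroids) < max_clusters:
            centroids.append(list(p))
            members.append([p])
        else:
            members[j].append(p)
            m = len(members[j])
            # incremental centroid update after absorbing point p
            centroids[j] = [(c * (m - 1) + x) / m for c, x in zip(centroids[j], p)]
    return centroids, members
```

Being single-pass, BSAS is cheap enough to rerun as users move, which suits the dynamic cell scenario described above.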