Journal of Computing and Information Technology (J Comput Inform Tech)

Publisher: Sveučilište u Zagrebu (University of Zagreb), University Computing Centre

Journal description

The aim of the international Journal of Computing and Information Technology (CIT) is to present original scientific and professional papers, as well as review articles and surveys, covering the theory, practice and methodology of computer science and engineering, modelling and simulation, and information systems.

Current impact factor: 0.00

Impact factor rankings (additional details)

5-year impact: 0.00
Cited half-life: 0.00
Immediacy index: 0.00
Eigenfactor: 0.00
Article influence: 0.00
Website: Journal of Computing and Information Technology website
Other titles: Journal of computing and information technology (online), CIT
ISSN: 1330-1136
OCLC: 64201834
Material type: Periodical, Internet resource
Document type: Internet Resource, Journal / Magazine / Newspaper

Publications in this journal

  • Source
    ABSTRACT: Congestion control and energy consumption in Wireless Multimedia Sensor Networks is a new research subject, ushered in by the introduction of multimedia sensor nodes capable of transmitting large volumes of high-bit-rate heterogeneous multimedia data. Most of the existing congestion control algorithms for Wireless Sensor Networks do not discuss the impact of security attacks by malicious nodes on network congestion. Sensor nodes are prone to failure, and malicious nodes aggravate congestion by sending fake messages. Hence, isolating malicious nodes from the data routing path reduces congestion significantly. With this in mind, we propose a new Trust Integrated Congestion Aware Energy Efficient Routing algorithm, in which malicious nodes are identified using the concept of trust. The parameter Node Potential is computed by a Fuzzy Logic Controller on the basis of the trust value, congestion status, residual energy and the distance of the node from the base station. The source node selects the node with the highest potential within its one-hop radio range for data transmission, which is lightweight as well as energy efficient. Finally, the merits of the proposed scheme are discussed by comparing it with existing protocols, and the study exhibits a 25% improvement in network performance.
    Journal of Computing and Information Technology 06/2015; 23(2). DOI:10.2498/cit.1002480
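A minimal sketch of the next-hop selection the abstract describes. The weighted score below is an illustrative stand-in for the paper's Fuzzy Logic Controller; the weights, field names and sample values are assumptions, not taken from the paper.

```python
def node_potential(trust, congestion, residual_energy, distance, max_distance):
    """Toy score over the four inputs the paper feeds to its Fuzzy Logic
    Controller. All inputs are normalized to [0, 1]; weights are illustrative."""
    proximity = 1.0 - distance / max_distance   # closer to the base station is better
    return (0.35 * trust + 0.25 * (1.0 - congestion)
            + 0.25 * residual_energy + 0.15 * proximity)

def select_next_hop(neighbors):
    """Pick the one-hop neighbor with the highest potential."""
    return max(neighbors, key=lambda n: node_potential(*n[1:]))

# neighbors: (node_id, trust, congestion, residual_energy, distance, max_distance)
neighbors = [
    ("a", 0.9, 0.2, 0.8, 40.0, 100.0),   # trusted, lightly congested
    ("b", 0.3, 0.1, 0.9, 10.0, 100.0),   # closer, but low trust (possibly malicious)
]
best = select_next_hop(neighbors)
```

Note how the low-trust node loses the selection despite being closer, which is the point of folding trust into the potential.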
  • Source
    ABSTRACT: Many statistical and MOLAP applications use multidimensional arrays as the basic data structure to allow efficient and convenient storage and retrieval of large volumes of business data for decision making. Data allocation and data compression are key performance factors for this purpose, because performance strongly depends on the amount of storage required and the availability of memory. This holds especially for data warehousing environments, in which huge amounts of data have to be dealt with. The most evident consequence of data compression is that it reduces storage cost by packing more logical data per unit of physical capacity. Improved performance is a net outcome, because less physical data need to be retrieved during scan-oriented queries. In this paper, an efficient data compression technique is proposed based on the notion of the extendible array. The main idea of the scheme is to compress each of the segments of the extendible array using position information only. We compare the proposed scheme with prominent compression schemes on different performance issues.
    Journal of Computing and Information Technology 06/2015; 23(3):111-121. DOI:10.2498/cit.1002441
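A rough sketch of position-based segment compression in the spirit the abstract describes: each segment keeps only (offset, value) pairs for occupied cells. The representation is an assumption for illustration; the paper's actual scheme is richer.

```python
def compress_segment(segment):
    """Keep only (offset, value) pairs for non-empty cells; the offset is
    the cell's linear position within the segment."""
    return [(i, v) for i, v in enumerate(segment) if v is not None]

def decompress_segment(pairs, length):
    """Rebuild the full segment from the stored positions."""
    seg = [None] * length
    for i, v in pairs:
        seg[i] = v
    return seg

segment = [None, 7, None, None, 3, None]   # a sparse segment of an extendible array
packed = compress_segment(segment)
restored = decompress_segment(packed, len(segment))
```

For sparse segments the packed form stores two small integers per occupied cell instead of one slot per cell, which is where the storage saving comes from.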
  • ABSTRACT: Eye tracking provides information about a user's eye-gaze movements. For many years, eye tracking has been used in Human Computer Interaction (HCI) research. Similarly, research on computerised educational systems also relies heavily on students' interactions with systems, and therefore eye tracking has been used to study and improve learning. We have recently conducted several studies on using worked examples in addition to scaffolded problem solving. The goal of the project reported in this paper was to investigate how novices and advanced students learn from examples. The study was performed in the context of SQL-Tutor, a mature Intelligent Tutoring System (ITS) that teaches SQL. We propose a new technique to analyse eye-gaze patterns, named EGPA. In order to comprehend an SQL example, students require information about table names and their attributes, which is available in the database schema. Thus, if students paid attention to the database schema, they would understand SQL examples more easily. We analysed students' eye-movement data from different perspectives and found that advanced students paid more attention to the database schema than novices. In future work, we will use the findings from this study to provide proactive feedback or individualised amounts of information.
    Journal of Computing and Information Technology 06/2015; 23(2):171. DOI:10.2498/cit.1002627
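A small sketch of the kind of measure such an analysis rests on: the share of fixation time spent inside an area of interest (here, a hypothetical database-schema pane). The function, field layout and coordinates are illustrative assumptions, not the paper's EGPA technique.

```python
def attention_ratio(fixations, aoi):
    """Fraction of total fixation time spent inside an area of interest.
    fixations: (x, y, duration_ms) tuples; aoi: (x_min, y_min, x_max, y_max)."""
    total = sum(d for _, _, d in fixations)
    inside = sum(d for x, y, d in fixations
                 if aoi[0] <= x <= aoi[2] and aoi[1] <= y <= aoi[3])
    return inside / total if total else 0.0

# two fixations fall on the (hypothetical) schema pane, one elsewhere
fixations = [(50, 40, 300), (400, 300, 200), (60, 45, 500)]
schema_pane = (0, 0, 100, 100)
ratio = attention_ratio(fixations, schema_pane)
```

Comparing this ratio between novice and advanced groups is the sort of per-AOI contrast the abstract reports.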
  • ABSTRACT: The Capacitated Vehicle Routing Problem (CVRP) is among the transportation problems of foremost concern in logistics. Ensuring effective product distribution over a large distribution network while reducing the required costs represents the scope of the present work. A synergic and interactive environment of parallel meta-heuristics is developed using a generalized island model to deal with large instances of CVRP. In the proposed model, cooperative meta-heuristics, namely genetic algorithms (GA) and ant colony optimization (ACO) algorithms, are organized into archipelagoes. They communicate synchronously, globally and locally, by exchanging solutions. In order to properly handle the migration of solutions, either between archipelagoes or between islands within the same archipelago, appropriate selection and replacement policies are adopted. Furthermore, the proposed approach uses other new features, including a new binary solution representation and a different optimization process (i.e., GA or ACO) on each island. To prove the efficiency of the present work, tests over a well-known set of benchmarks, comparative studies and experimental analyses have been conducted.
    Journal of Computing and Information Technology 06/2015; 23(2):141. DOI:10.2498/cit.1002465
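The migration step of an island model can be sketched as follows. This is a generic best-emigrant / worst-replacement ring migration under assumed policies; the paper's actual selection and replacement policies, and its GA/ACO internals, are not specified here.

```python
def migrate(islands, k=1):
    """Synchronous ring migration: each island sends its k best solutions
    to the next island, which replaces its k worst (elitist selection,
    worst-replacement). Solutions are (cost, tour) pairs; lower cost is better."""
    bests = [sorted(pop)[:k] for pop in islands]        # snapshot before mutation
    for i, pop in enumerate(islands):
        incoming = bests[(i - 1) % len(islands)]        # from the previous island
        pop.sort()
        pop[-k:] = list(incoming)                       # replace the k worst
    return islands

# two toy islands; in the paper one could run a GA step, the other an ACO step
island_a = [(10, "t1"), (5, "t2"), (30, "t3")]
island_b = [(8, "u1"), (25, "u2"), (40, "u3")]
islands = migrate([island_a, island_b])
```

Each island keeps optimizing its own population between migrations; the exchange lets good tours found by one meta-heuristic seed the other.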
  • ABSTRACT: In recent years, crowdsourcing has become a powerful tool for bringing human intelligence into information processing. This is especially important for Web data, which, in contrast to well-maintained databases, is almost always incomplete and may be distributed over a variety of sources. Crowdsourcing makes it possible to tackle many problems that are not yet attainable using machine-based algorithms alone: in particular, it allows database operators to be executed on incomplete data, as human workers can provide values at runtime. As this can quickly become costly, elaborate optimization is required. In this paper, we showcase how such optimizations can be performed for the popular skyline operator for preference queries. We present some heuristics-based approaches and compare them to crowdsourcing-based approaches using sophisticated optimization techniques, focusing especially on result correctness.
    Journal of Computing and Information Technology 01/2015; 23(1):43. DOI:10.2498/cit.1002509
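A naive sketch of the setting: a skyline over incomplete records where missing (None) values are resolved by a crowd oracle. The oracle and data are illustrative assumptions; the paper is precisely about avoiding this many crowd calls through optimization.

```python
def dominates(a, b, ask_crowd):
    """True if a dominates b: no worse on every dimension, strictly better on
    one (lower is better). None values are filled in by the crowd oracle."""
    resolve = lambda rec, i: rec[i] if rec[i] is not None else ask_crowd(rec, i)
    av = [resolve(a, i) for i in range(len(a))]
    bv = [resolve(b, i) for i in range(len(b))]
    return all(x <= y for x, y in zip(av, bv)) and any(x < y for x, y in zip(av, bv))

def skyline(records, ask_crowd):
    """Keep every record not dominated by another one."""
    return [r for r in records
            if not any(dominates(o, r, ask_crowd) for o in records if o is not r)]

ask = lambda rec, i: 3            # stand-in crowd worker: always answers 3
records = [(1, 2), (None, 4), (5, 1)]   # None marks a value only a human can supply
result = skyline(records, ask)
```

In the naive version every dominance test may trigger crowd questions, which is exactly the cost the paper's optimizations attack.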

  • Journal of Computing and Information Technology 01/2015; 23(1):29. DOI:10.2498/cit.1002508

  • Journal of Computing and Information Technology 01/2015; 23(1):1-18. DOI:10.2498/cit.1002507
  • ABSTRACT: This paper attempts to mine hidden individual behavior patterns from raw user trajectory data. Based on DBSCAN, a novel spatio-temporal data clustering algorithm named the Speed-based Clustering Algorithm is put forward to find the slow-speed sub-trajectories (i.e., stops) of a single trajectory where the user stopped for a longer time. The algorithm uses maximal speed and minimal stopping time to compute the stops and introduces the quantile function to estimate the parameter values; in the experiments it proved more effective and accurate than DBSCAN and certain improved DBSCAN algorithms. In addition, after the stops are connected with POIs, the paper designs a POI-Behavior Mapping Table to analyze the user's activities according to stopping time and visiting frequency, on the basis of which the user's daily regular behavior pattern can be mined from the historical trajectories. In the end, LBS operators are able to provide intelligent and personalized services so as to achieve precise marketing in terms of the characteristics of individual behavior.
    Journal of Computing and Information Technology 01/2015; 23(3):245-254. DOI:10.2498/cit.1002578
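The maximal-speed / minimal-stopping-time idea can be sketched like this: a stop is a maximal run of consecutive slow segments that lasts long enough. The thresholds, sample data and planar-distance model are illustrative assumptions (the paper estimates the parameters via a quantile function).

```python
def find_stops(points, max_speed, min_stop_time):
    """points: (t_seconds, x, y) samples in time order. Returns (t_start, t_end)
    intervals where every segment's speed stays below max_speed for at least
    min_stop_time seconds."""
    stops, run = [], []
    for (t0, x0, y0), (t1, x1, y1) in zip(points, points[1:]):
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / (t1 - t0)
        if speed < max_speed:
            run = run or [(t0, x0, y0)]
            run.append((t1, x1, y1))
        else:
            if run and run[-1][0] - run[0][0] >= min_stop_time:
                stops.append((run[0][0], run[-1][0]))
            run = []
    if run and run[-1][0] - run[0][0] >= min_stop_time:
        stops.append((run[0][0], run[-1][0]))
    return stops

# user lingers near the origin for two minutes, then moves away quickly
points = [(0, 0, 0), (60, 1, 0), (120, 2, 0), (180, 100, 0)]
stops = find_stops(points, max_speed=0.1, min_stop_time=90)
```

Each detected interval would then be matched against nearby POIs to label the activity.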
  • ABSTRACT: Relay selection has been regarded as an effective method to improve the performance of cooperative communication systems. However, frequent relay selection can introduce enormous control-message overhead and thereby decrease the performance of cooperative communication. To reduce the relay-selection frequency, in this paper we propose a relay selection scheme that chooses the best relay while taking successive packet transmission into account. In this scheme, according to the data packet length, the data transmission rate and the estimated channel state information (CSI), the best relay is selected to maximize the number of successively transmitted packets under the condition that the given symbol error rate (SER) is maintained. Finally, numerical results show that the proposed relay selection scheme can support successive packet transmission in cooperative wireless networks, and that the maximum number of successively transmitted packets is affected by different network parameters, i.e., the data transmission rate, the packet length and the Doppler frequency at the relay node.
    Journal of Computing and Information Technology 01/2015; 22(4):217. DOI:10.2498/cit.1002423
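A deliberately simplified sketch of the selection criterion: among relays meeting the SER target, pick the one whose channel estimate stays valid for the most packet durations. The coherence-time model, field names and numbers are assumptions standing in for the paper's CSI-based derivation.

```python
def select_relay(relays, packet_bits, rate_bps, ser_target):
    """Pick the relay supporting the most successive packets under the SER
    constraint. Each relay is (id, est_ser, coherence_time_s): the CSI estimate
    is assumed valid for coherence_time_s, so the relay can carry
    floor(coherence_time / packet_duration) packets before reselection."""
    packet_time = packet_bits / rate_bps
    feasible = [(int(ct / packet_time), rid)
                for rid, ser, ct in relays if ser <= ser_target]
    return max(feasible)[1] if feasible else None

relays = [("r1", 1e-3, 0.02),   # SER too high
          ("r2", 1e-5, 0.05),   # feasible, 50 packet durations
          ("r3", 1e-2, 0.50)]   # long coherence but SER too high
best = select_relay(relays, packet_bits=1000, rate_bps=1_000_000, ser_target=1e-4)
```

A higher Doppler frequency shortens the coherence time, which is how that parameter caps the successive-packet count in the abstract.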
  • ABSTRACT: The increased usage of information technologies in educational tasks has resulted in a high volume of data, which can be exploited to build analytical systems that provide practical insight into the learning process. In this paper, we propose a method for running social network analysis on multiple data sources (academic years, communication tools). To achieve this, the collected data describing social interactions were converted into a common format by employing a previously developed semantic-web educational ontology. Using a mapping language, the relational data set was linked to the appropriate concepts defined in the ontology and then exported in RDF format. SPARQL access was also provided. Subsequently, query patterns were defined for the different social interactions in the educational platform. To prove the feasibility of this approach, the Gephi tool set was used to run Social Network Analysis (SNA) on data obtained with the SPARQL queries. The added value of this research lies in the potential of this method to simplify running social network analysis on multiple data sets, on a specific course or on an entire academic year, by simply modifying the query pattern.
    Journal of Computing and Information Technology 01/2015; 23(3):269. DOI:10.2498/cit.1002645
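The downstream analysis step can be sketched as: take (sender, receiver) rows such as a SPARQL query over the exported RDF might return, build an undirected interaction graph, and compute a simple centrality. The row shape and names are illustrative assumptions; the paper uses Gephi for the actual SNA.

```python
def degree_centrality(interactions):
    """Count each actor's distinct interaction partners from (sender, receiver)
    rows; duplicate rows (repeated messages) do not inflate the count."""
    partners = {}
    for a, b in interactions:
        partners.setdefault(a, set()).add(b)
        partners.setdefault(b, set()).add(a)
    return {node: len(ps) for node, ps in partners.items()}

# rows a pattern like "?msg :sender ?a ; :receiver ?b" might return
rows = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"), ("alice", "bob")]
centrality = degree_centrality(rows)
```

Swapping the query pattern (forum posts vs. chat messages, one course vs. a whole year) changes the rows but not this analysis code, which is the flexibility the abstract claims.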
  • Source
    ABSTRACT: Traditional information retrieval technologies are based on keywords and therefore provide limited capabilities to capture the conceptualizations associated with user needs and contents. As a new information retrieval technology, semantic retrieval can retrieve information resources fully and precisely based on knowledge understanding and knowledge reasoning. Ontologies, which can represent and reason about domain knowledge well, have proved very useful in semantic retrieval. On this basis, in this paper we propose a complete ontology-based semantic retrieval approach and framework for an education management system. First, we present some rules for constructing a domain ontology from the education management system. Then, a semantic annotation method for the constructed ontology is given. Further, the ontology-based semantic retrieval algorithm is proposed. Finally, a complete framework is developed and some experiments are carried out. The conducted experiments show that our semantic retrieval model obtains comparable and better performance results than traditional information retrieval technology for the education management system.
    Journal of Computing and Information Technology 01/2015; 23(3):255. DOI:10.2498/cit.1002493
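The core advantage over keyword search can be sketched as ontology-driven query expansion: follow subclass/synonym links before matching. The toy ontology, documents and expansion rule are illustrative assumptions, not the paper's algorithm.

```python
def expand_query(terms, ontology):
    """Transitively add every ontology descendant of each query term."""
    expanded, frontier = set(terms), list(terms)
    while frontier:
        term = frontier.pop()
        for sub in ontology.get(term, []):
            if sub not in expanded:
                expanded.add(sub)
                frontier.append(sub)
    return expanded

def retrieve(docs, terms, ontology):
    """Return documents sharing at least one word with the expanded query."""
    query = expand_query(terms, ontology)
    return [d for d, words in docs.items() if query & set(words)]

ontology = {"course": ["lecture", "seminar"]}          # toy subclass map
docs = {"d1": ["seminar", "schedule"], "d2": ["canteen", "menu"]}
hits = retrieve(docs, ["course"], ontology)
```

A plain keyword match for "course" would miss d1; the expanded query finds it, which is the gap semantic retrieval closes.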
  • Source
    ABSTRACT: As explained in the paper "A Survey of Communications and Collaborative Web Technologies" in this special issue, the web started as a more or less one-way information system, allowing engineers at CERN to share new results in physics world-wide easily. As it developed, it became more and more a two-way (or, more precisely, a multi-way) communication tool, allowing people to work collaboratively in many ways. As described in the aforementioned paper, this has not just changed how information is created and provided easy ways of communication, but has also allowed the growth of social networks as the first tools to tie together the power of brains, changing how we work, think, do research and form new circles of people we are interested in. This on its own would have been a major change in how society works, yet it soon turned out that the available technologies would allow completely new business models to be set up, and threaten some existing ones. Among the hundreds of millions of applications found on the web today, many are set up to make money, or at least to pay for themselves by providing excellent ways of advertising. There is no person who could know all the applications available on the web, and there is no way to even list a tiny percentage of them. What we attempt in this paper, however, is to classify applications and discuss their commercial or societal impact.
    Journal of Computing and Information Technology 01/2015; 23(1):19. DOI:10.2498/cit.1002513
  • ABSTRACT: The construction of reliable and stable routes in a mobile ad hoc network is a primary research issue, as each device must continuously maintain the information required to properly route traffic. Mobility of nodes often leads to link failures and hence requires route reconstruction to resume communication between the nodes. The stability factor of a route can reduce the number of times the route is changed or reconstructed. This paper presents a novel idea for discovering a stable set of routes using metrics from multiple layers, rather than depending on the network layer alone, along with a finite set of parameters to qualify a link or a connecting node. The link stability factor and the link received signal strength are considered the main metrics for qualifying the stability of a route; they are derived from the physical and data-link layers based on the bit or packet error rate retrieved from the soft-output decoder. The simulation results and analysis show the proposed algorithm to be more efficient in discovering stable routes, reducing frequent route reconstruction and hence improving the overall performance of the network.
    Journal of Computing and Information Technology 01/2015; 22(4):227. DOI:10.2498/cit.1002407
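A minimal sketch of cross-layer route qualification under assumed weights: a route is only as stable as its weakest link, and each link is scored from its stability factor and normalized received signal strength. The weights and sample values are illustrative, not the paper's parameters.

```python
def route_score(links):
    """links: (link_stability_factor, rss) pairs, both normalized to [0, 1]
    (derived in the paper from bit/packet error rates at the soft-output
    decoder). The route score is its weakest link's combined score."""
    return min(0.6 * lsf + 0.4 * rss for lsf, rss in links)

def most_stable_route(routes):
    """routes: (label, links) pairs; pick the route with the best worst link."""
    return max(routes, key=lambda r: route_score(r[1]))

routes = [
    ("via-n1", [(0.9, 0.8), (0.7, 0.9)]),    # consistently strong links
    ("via-n2", [(0.95, 0.9), (0.2, 0.3)]),   # one fragile link
]
best = most_stable_route(routes)
```

The min-over-links choice captures why a route with one weak hop is rejected even if its other hops are excellent.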
  • ABSTRACT: Active distributed storages need to assure both consistency and dynamic data support, in addition to availability, confidentiality and resiliency. Further, since storage durability suffers in untrusted and unreliable environments, it becomes crucial to (a) select the most reliable set of servers to assure data retrievability and (b) dynamically identify errant servers and restore the data to ensure data recoverability. We address the issues of concurrency, consistency, dynamic data support, data-share repair and trust management in providing persistent storage and access. The paper focuses primarily on erasure-coded distributed storages (storages employing erasure coding for data dispersal). The proposed comprehensive design integrates a quorum-based approach using notification propagation with a reliability model based on server trust reputation. Treating all servers and their data shares equally during data reconstruction at retrieval time is inadequate in untrusted environments; the design provides a suitable platform for the use of Soft Decision Decoding to overcome this inadequacy. The design has been validated by the simulation, study and analysis carried out for a Reed-Solomon-coded storage with varying levels of resiliency and concurrency. The proposed design can be suitably adapted in typical distributed information storages catering to a global networked audience in public, untrusted and unreliable operating environments.
    Journal of Computing and Information Technology 01/2015; 23(3):191. DOI:10.2498/cit.1002490
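The trust-weighted server selection step can be sketched as below: prefer the most reputable servers when choosing which k of n shares to read, since any k shares of a Reed-Solomon code reconstruct the data. Server names, scores and the plain ranking rule are illustrative assumptions, not the paper's reliability model.

```python
def select_servers(trust, k):
    """Pick the k most reputable servers to read erasure-coded shares from.
    trust: server_id -> reputation in [0, 1], maintained by the reliability
    model and lowered when a server returns errant shares."""
    ranked = sorted(trust.items(), key=lambda kv: kv[1], reverse=True)
    return [sid for sid, _ in ranked[:k]]

# n = 4 shares dispersed; any k = 3 reconstruct the data
trust = {"s1": 0.95, "s2": 0.40, "s3": 0.80, "s4": 0.99}
readers = select_servers(trust, k=3)
```

Rather than treating all shares equally, the low-reputation server s2 is simply not consulted, which is the inadequacy the abstract points at.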
  • ABSTRACT: Protecting data and its communication is a critical part of the modern network. The science of protecting data, known as cryptography, uses secret keys to encrypt data into a format that is not easily decipherable. However, secure logons for a workstation connected to a network most commonly use passwords to perform user authentication. These passwords are a weak link in the security chain and a common point of attack on cryptography schemes. One alternative to password usage is biometrics for network security: using a person's physical characteristics to verify who the person is and unlock the data correspondingly. This study focuses on the Cambridge biometric cryptosystem, a system for performing user authentication based on a user's iris data. The implementation of this system progressed from a single-core, software-only system to a collaborative system consisting of a single core and a hardware accelerator. The experiments take place on a Xilinx Zynq-7000 All Programmable SoC. The software implementation runs on one of the embedded ARM A9 cores, while the hardware implementation makes use of the programmable logic. Our hardware acceleration produced a speedup of 2.2x while reducing energy usage to 52.5% of its original value. These results are also compared to a many-core acceleration of the same system, providing an analysis of the different acceleration methods.
    Journal of Computing and Information Technology 01/2015;
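Iris systems of this family compare binary iris codes by fractional Hamming distance over mutually valid bits; that bit-level inner loop is the kind of kernel a hardware accelerator targets. The tiny codes and masks below are illustrative; whether this exact comparison is the accelerated kernel in the cited system is an assumption.

```python
def fractional_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes, counting
    only bit positions that both masks mark as valid (e.g. not occluded by
    eyelids). Returns 1.0 when no bit is comparable."""
    valid = [i for i in range(len(code_a)) if mask_a[i] and mask_b[i]]
    if not valid:
        return 1.0
    diff = sum(code_a[i] != code_b[i] for i in valid)
    return diff / len(valid)

a    = [1, 0, 1, 1, 0, 0, 1, 0]
b    = [1, 0, 0, 1, 0, 1, 1, 0]
mask = [1, 1, 1, 1, 1, 1, 0, 0]    # last two bits are invalid in both codes
hd = fractional_hamming(a, b, mask, mask)
```

A distance below some threshold accepts the user; in hardware the XOR-AND-popcount structure maps naturally onto programmable logic.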
  • Source
    ABSTRACT: RDF (Resource Description Framework) and RDF Schema (collectively called RDF(S)) are the normative languages for describing Web resource information. How to construct RDF(S) from existing data sources is becoming an important research issue. In particular, UML (Unified Modeling Language) is widely applied to data modeling in many application domains, and how to construct RDF(S) from existing UML models is an important issue to be solved in the context of the Semantic Web. By comparing and analyzing the characteristics of UML and RDF(S), this paper proposes an approach for constructing RDF(S) from UML and implements a prototype construction tool. First, we give formal definitions of UML and RDF(S). After that, a construction approach from UML to RDF(S) is proposed, a construction example is provided, and the approach is analyzed and discussed. Further, based on the proposed approach, a prototype construction tool is implemented, and experiments show that the approach and the tool are feasible.
    Journal of Computing and Information Technology 01/2015; 22(4):237. DOI:10.2498/cit.1002459
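The flavor of such a construction can be sketched with two basic rules: a UML class becomes an rdfs:Class and each attribute becomes an rdf:Property with an rdfs:domain pointing at its class. The namespace, model and rule set are illustrative assumptions; the paper's construction rules are considerably richer.

```python
def uml_to_rdfs(classes, ns="http://example.org/"):
    """Emit RDF(S) triples from a toy UML model given as
    {class_name: [attribute, ...]}."""
    triples = []
    for cname, attrs in classes.items():
        triples.append((ns + cname, "rdf:type", "rdfs:Class"))
        for attr in attrs:
            triples.append((ns + attr, "rdf:type", "rdf:Property"))
            triples.append((ns + attr, "rdfs:domain", ns + cname))
    return triples

uml = {"Student": ["name", "studentId"]}
triples = uml_to_rdfs(uml)
```

Associations, inheritance and multiplicities would need further rules (e.g. rdfs:subClassOf for UML generalization), which is where the comparison of the two languages' characteristics matters.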