Computer and Information Science

Published by Canadian Center of Science and Education
Online ISSN: 1913-8997
Print ISSN: 1913-8989
The A5/1 Register Parameters
In Europe and North America, the most widely used stream cipher for ensuring the privacy and confidentiality of conversations on GSM mobile phones is A5/1. In this paper, we present a new attack on the A5/1 stream cipher with an average time complexity of 2^(48.5), much less than the 2^(64) complexity of a brute-force attack. The attack has a 100% success rate and requires about 5.65 GB of storage. We provide a detailed description of the new attack along with its implementation and results.
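For context, A5/1 keeps its state in three short LFSRs stepped by a majority rule, and the register parameters referred to above are their lengths, feedback taps, and clocking bits. The following Python sketch of keystream generation uses the published register parameters; key/IV loading is omitted, and the paper's attack itself is not reproduced.

```python
# Minimal sketch of A5/1 keystream generation (illustration only).
# Register lengths, feedback taps and clocking-bit indices follow the
# published A5/1 description; bit 0 is the LSB of each register.

SPEC = [(19, (13, 16, 17, 18), 8),   # R1: length, feedback taps, clock bit
        (22, (20, 21), 10),          # R2
        (23, (7, 20, 21, 22), 10)]   # R3

def majority(a, b, c):
    return (a & b) | (a & c) | (b & c)

def step(regs):
    """Clock the registers by the majority rule, return one output bit."""
    clk = [(r >> c) & 1 for r, (_, _, c) in zip(regs, SPEC)]
    maj = majority(*clk)
    out = 0
    for i, (n, taps, c) in enumerate(SPEC):
        r = regs[i]
        if ((r >> c) & 1) == maj:          # only majority-agreeing LFSRs step
            fb = 0
            for t in taps:
                fb ^= (r >> t) & 1
            r = ((r << 1) | fb) & ((1 << n) - 1)
            regs[i] = r
        out ^= (r >> (n - 1)) & 1          # output = XOR of the three MSBs
    return out

def keystream(regs, nbits):
    regs = list(regs)
    return [step(regs) for _ in range(nbits)]
```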
This paper introduces the Kernel-based Information Criterion (KIC) for model selection in regression analysis. The novel kernel-based complexity measure in KIC efficiently computes the interdependency between the parameters of the model using a variable-wise variance, and yields the selection of better, more robust regressors. Experimental results show superior performance on both simulated and real data sets compared to Leave-One-Out Cross-Validation (LOOCV), kernel-based Information Complexity (ICOMP), and maximum log marginal likelihood in Gaussian Process Regression (GPR).
In information systems, a system is analyzed using a modeling tool. Analysis is an important phase prior to implementation in order to obtain the correct requirements of the system. During the requirements phase, the software requirements specification (SRS) is used to specify the system requirements; this specification is then used to implement the system. The requirements specification can be represented using either a structured approach or an object-oriented approach. A UML (Unified Modeling Language) specification is well known for representing requirements specifications in an object-oriented approach. In this paper, we present a case study and discuss how the mapping from a UML specification into an implementation is done. The case study does not require advanced programming skills; however, it does require familiarity with creating and instantiating classes, object-oriented programming with inheritance, data structures, file processing, and control loops. For the case study, a UML specification is used in the requirements phase and Borland C++ is used in the implementation phase. The case study shows that the proposed approach improved the understanding of mapping from UML specification into implementation.
System tests suite derivation flowchart
Late requirements for the cleaning robot
In recent years, Agent-Oriented Software Engineering (AOSE) methodologies have been proposed to develop complex distributed systems based upon the agent paradigm. The implementation of such systems usually takes the form of Multi-Agent Systems (MAS). Testing MAS is a challenging task because these systems are often programmed to be autonomous and deliberative, and they operate in an open world, which requires context awareness. In this paper, we introduce a novel approach for goal-oriented software system testing. It specifies a testing process that complements the goal-oriented methodology Tropos and reinforces the mutual relationship between goal analysis and testing. Furthermore, it defines a structured and comprehensive system test suite derivation process for engineering software agents by providing a systematic way of deriving test cases from goal analysis.
Information accessibility on the World Wide Web (WWW) remains a complex issue for blind users, as the majority of websites are crowded with both non-visual (audio) and visual (video and image) content. Accessibility measures, in the form of usable and blind-friendly websites, should be made available to blind users. A particular area that can be improved is the e-business segment for the blind. Easily accessible e-business platforms could further promote entrepreneurship through the development of blind-friendly features such as voice. In the current study, we investigated the existing difficulties in accessing web content (such as inconsistent webpage structure, inadequate voice recognition engines, and the use of long sentences as commands) and suggested a different (numeric hotlist) approach to make the web more accessible, specifically for the blind.
The causal effect of seven salient beliefs on an individual's attitude and norms, all of which form a person's Behavioural Intention (BI), is not well documented in the context of Internet Banking (IB). The attitudinal belief, represented by five innovation attributes, together with the normative belief, represented by two types of interaction channels, were extracted in accordance with Rogers' (1995) and Ajzen's (1991) theories and the literature. The study proposes a conceptual framework of the determinants of an individual's behavioural intention to adopt IB and tests it using Ordinary Least Squares (OLS) path analysis. The results support the argument that attitude, relative advantage/compatibility, observability, ease of use, and mass media interaction are the key determinants of BI to use IB.
Based on the property that higher-order derivatives of some base functions can be expressed in terms of the primitive functions and lower-order derivatives, a cascade-correlation algorithm with tunable activation functions is proposed in this paper. The base functions and their higher-order derivatives are used to construct the tunable activation functions in the cascade-correlation algorithm. Parallel and series schemes for constructing the activation functions are introduced. The model can simplify the neural network architecture, speed up the convergence rate, and improve generalization. Its efficiency is demonstrated on the two-spiral classification and Mackey-Glass time series prediction problems.
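The premise that higher-order derivatives can be expressed through the primitive function is easy to illustrate with the logistic sigmoid, whose derivatives are polynomials in the function itself. This is a generic illustration of that property, not the paper's particular base functions or construction schemes:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_d1(x):
    """First derivative written purely in terms of the primitive:
    sigma'(x) = sigma(x) * (1 - sigma(x))."""
    s = sigmoid(x)
    return s * (1.0 - s)

def sigmoid_d2(x):
    """Second derivative, again via the primitive and lower order:
    sigma''(x) = sigma'(x) * (1 - 2*sigma(x))."""
    s = sigmoid(x)
    return s * (1.0 - s) * (1.0 - 2.0 * s)

def numeric_d(f, x, h=1e-5):
    """Central finite difference, used only as a sanity check."""
    return (f(x + h) - f(x - h)) / (2.0 * h)
```

No derivative formulas beyond the primitive function are needed at run time, which is what makes such bases convenient for tunable activation functions.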
Increase rate time 
Adaptation time 
Component-based development has become a commonly used technique for building complex software systems by composing a set of existing components. In general, adapting an application means stopping it and restarting it after the adaptation. This approach is not suitable for a large class of software systems in which continuous availability is a critical requirement; hence the need to adapt the application dynamically at runtime. This paper presents an architecture-based approach for dynamic adaptation in critical component-based software using a multi-agent system. To achieve this, we use an agent-based system to perform the adaptation, guided by an architectural description. The adaptation mechanism is implemented within the connectors using the flexibility offered by scripting-language techniques on the Java platform; the Groovy scripting language is used. The evaluation is made by comparing the execution time before and after the adaptation mechanism. The paper is structured as follows: Section 2 presents work related to dynamic adaptation. Section 3 describes the proposed solution for achieving a dynamic update of component-based software applications. The implementation details and some measurements relative to our solution are given in Section 4. Section 5 concludes and presents some perspectives.
In this paper, we propose a model for vehicle traffic based on multi-agent systems and discuss its assumptions and issues. Traffic is an ever-growing problem as the population and the number of drivers around the world increase exponentially. Previously, fluid-flow models have been used in attempts to model traffic. Based on recent studies, only agent-based models can accurately model a traffic scenario, because small perturbations can have a butterfly-like effect that causes a rapid change in the entire system.
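As a minimal illustration of the agent-based view (the classic Nagel-Schreckenberg cellular automaton, not the model proposed in the paper), each car is an agent that updates its speed from purely local rules; with the random-slowdown term enabled, a single braking event can propagate backwards into a jam, the butterfly-like effect mentioned above.

```python
import random

def nasch_step(positions, speeds, L, vmax=5, p_slow=0.3, rng=random):
    """One parallel update of the Nagel-Schreckenberg traffic automaton
    on a circular road of L cells (one list entry per car)."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    out_pos, out_v = positions[:], speeds[:]
    for k, i in enumerate(order):
        j = order[(k + 1) % len(order)]              # next car ahead
        gap = (positions[j] - positions[i]) % L - 1  # free cells ahead
        if j == i:                                   # only one car on the road
            gap = L - 1
        v = min(speeds[i] + 1, vmax)                 # accelerate
        v = min(v, gap)                              # brake behind leader
        if rng.random() < p_slow:                    # random slowdown
            v = max(v - 1, 0)
        out_v[i] = v
        out_pos[i] = (positions[i] + v) % L
    return out_pos, out_v
```

With `p_slow=0.0` the dynamics are deterministic; raising it produces spontaneous jams even on an otherwise free road.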
Structure of literature review 
Security Framework 
Proposed MAS Architecture 
The purpose of this literature review is to illustrate how Multi-Agent System (MAS) techniques can be beneficial in a cloud computing platform to facilitate the security of cloud data storage (CDS). MAS are often distributed, and agents have proactive and reactive features which are very useful for cloud data storage security (CDSS). The architecture of the system is formed from a set of agent communities. This review describes the theoretical concept and approach of a security framework, as well as a MAS architecture that could be implemented in a cloud platform to facilitate the security of CDS, and shows how MAS technology could be utilized in a cloud platform to provide security, developed using the collaborative environment of the Java Agent DEvelopment Framework (JADE). To provide comprehensive security, our MAS architecture offers eleven security attributes generated from four main security policies: correctness, integrity, confidentiality, and availability of users' data in the cloud. This review also describes an approach that allows us to build a secure cloud platform using the MAS architecture; the architecture uses specialized autonomous agents for specific security services and allows agents to interact to facilitate the security of CDS.
Utilized Usability Model
Knowledge management (KM) has become an important topic as organizations wish to take advantage of the information that they produce and that can be brought to bear on present decisions. This paper describes a system to manage the information and knowledge generated during the software maintenance process (SMP). A Knowledge Management System (KMS) is used to help employees build a shared vision, since the same codification is used and misunderstandings in staff communications may be avoided. The architecture of the system is formed from a set of agent communities, where each community of practice (CoP) is in charge of managing a specific type of knowledge. The agents can learn from previous experience and share their knowledge with other agents or communities in a multi-agent system (MAS). This paper also describes the theoretical concept and approach of a multi-agent technology framework that could be implemented in the SMP to facilitate knowledge sharing among the maintainers of a learning organization, and demonstrates in the system how multi-agent technology could be utilized in an SMP system model serving maintainers, developed using groupware such as Lotus Notes. The architecture is named MASK-SM (MAS Architecture to Facilitate Knowledge Sharing of Software Maintenance). The authors followed the Prometheus methodology to design the MAS architecture. The paper applies the definition of ISO 9241-11 (1998), which examines effectiveness, efficiency, and satisfaction.
Emphasis is given to the software maintenance process (SMP) activities in which multi-agent technology may help maintainers, especially in a learning organization, to work collaboratively, including the critical success factors needed to ensure that SMP initiatives deliver competitive advantage to the community of practice (CoP) as well as to the users of the organization.
To enhance the identification capacity of skin-hearing aids for voice signals, a four-channel skin-hearing aid based on Morse encoding is proposed in this article, which overcomes disadvantages of analog variable-pressure skin-hearing aids such as low identification rates and poor anti-jamming capability. By processing the voice, finding the corresponding Morse code of the voice signal, establishing a skin-response voltage sequence from the Morse encoding, and stimulating the skin with this voltage sequence so that the skin can more clearly feel different voice signals, the identification rate of skin hearing can be enhanced considerably.
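The encoding pipeline described above (voice, then Morse code, then a skin-response voltage sequence) can be sketched as follows. The 5 V level and the dot/dash/gap timing units are illustrative assumptions, not the article's measured parameters, and the voice-processing front end is omitted:

```python
# Standard international Morse code for letters.
MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}

DOT, DASH, GAP, LETTER_GAP = 1, 3, 1, 3  # durations in time units

def pulse_sequence(text, volts=5.0):
    """Map text to (voltage, duration) stimulation pulses: dot = 1 unit
    on, dash = 3 units on, 1 unit off between elements, 3 units off
    between letters (standard Morse timing; 5 V is an assumed level)."""
    pulses = []
    for letter in text.upper():
        for sym in MORSE[letter]:
            pulses.append((volts, DOT if sym == '.' else DASH))
            pulses.append((0.0, GAP))
        pulses[-1] = (0.0, LETTER_GAP)  # widen the final element gap
    return pulses
```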
Mapping power transmission line routes is an important technique for locating power transmission line routes and towers on mountains and hilltops, to assist in viewing their impact on the environment, operations, and the allocation of public utilities. A study was therefore conducted to map the power transmission lines within the Bukit Lanjan PLUS highway. The main objective of this study was to assess the capability of airborne hyperspectral sensing for mapping power transmission lines. Using ENVI software, the airborne hyperspectral imaging data was enhanced with a convolution filtering technique on band 3, which produced a grayscale image that appeared clearer and sharper. Spectral reflectance curves were acquired for each power line and showed the same spectral characteristics in curve shape and reflected energy, because all the power lines are made of the same material. Ground verification was done by comparing the UPM-APSB's AISA Global Positioning System (GPS) coordinate readings with ground GPS coordinate readings of the power transmission line footings. The ground verification result from the two matching power transmission line footings showed that the accuracy of power line identification was acceptable. This study implies that airborne hyperspectral imagers are powerful tools for mapping and siting large transmission towers and lines.
The strategic goal of Shanghai Airport is to build a comprehensive domestic and international route network and construct an air gateway connecting China with the world. Shanghai Airport is speeding up the construction of its information systems, and business intelligence is the core of the airport's decision support system. First, this paper discusses the application blueprint of business intelligence for Shanghai Airport. Second, it analyses and designs the technical architecture of the Shanghai Airport business intelligence system. Finally, it explores the application architecture of the system and the key functions of its modules. This paper aims to provide reference and help for the research and planning of related airport projects.
Simulation parameters
Mobile sensor nodes at speed 5m/s: (a) 80% of nodes are dead; (b) 50% of nodes are dead; (c) Number of received data message at base station; (d) Energy dissipation  
Rayleigh fading case 1: (a) 80% of nodes are dead; (b) 50% of nodes are dead; (c) Number of received data message at base station; (d) Energy dissipation  
Rayleigh fading case 2: (a) 80% of nodes are dead; (b) 50% of nodes are dead; (c) Number of received data message at base station; (d) Energy dissipation  
Clustering is an effective topology-control approach in wireless sensor networks that can increase network lifetime and scalability. Both node mobility and channel fading have a negative impact on various clustering protocols. With node mobility, when all sensor nodes are mobile, the currently nearest cluster head may be the farthest one from a sensor node by the time the message transmission phase starts. In the present research, the received signal strength is used to estimate the sensor location; consequently, channel fading affects the path loss between nodes and thus the estimated distance between them. This paper introduces a new clustering protocol built on the Adaptive Decentralized Re-clustering Protocol, called E-ADRP (Enhanced Adaptive Decentralized Re-clustering Protocol). Simulations are performed to test the effect of node mobility, using the Random Walk Mobility (RWM) model, on Low Energy Adaptive Clustering Hierarchy (LEACH) and E-ADRP. The simulation results show that mobility affected network lifetime and energy dissipation negatively for LEACH, whereas the E-ADRP results were much better. A Rayleigh channel model was also applied to both LEACH and E-ADRP. The results show that, for E-ADRP, network lifetime and energy dissipation with mobile nodes were nearly stable compared to static nodes, while for LEACH mobile nodes were negatively affected by up to 24% relative to static nodes. Under fading, both protocols were negatively affected: E-ADRP by up to 40% and LEACH by up to 50% relative to static nodes.
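For reference, the LEACH baseline used in the comparison elects cluster heads with the well-known rotating threshold T(n) = p / (1 - p * (r mod 1/p)) for nodes that have not yet served as head in the current epoch. A small Python sketch of that election step (E-ADRP's own re-clustering logic is not reproduced here):

```python
import random

def leach_threshold(p, r):
    """LEACH election threshold T(n) for round r and desired cluster-head
    fraction p, for a node that has not yet been head in this epoch."""
    return p / (1.0 - p * (r % round(1 / p)))

def elect_heads(node_ids, p, r, rng=random):
    """Each eligible node elects itself head with probability T(n)."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]
```

Note how T(n) grows over the epoch: by round r = 1/p - 1 every remaining eligible node is elected, which is what rotates the energy-hungry head role.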
Development Process for WBIs for Algorithm 
Flow of WBIs for Algorithm System 
Many researchers and academicians have suggested that there is a need to change the way knowledge is imparted: alongside traditional teaching and learning methods, there should be tools and frameworks that assist both learners and instructors. This paper presents a framework for WBIs for algorithm learning that works as a teaching aid. The framework is aimed at offering systematic learning of algorithms to learners of diverse knowledge levels and at providing personal assistance. The paper also discusses the environment and attributes of WBIs for algorithms that encourage interactive learning. Furthermore, the paper describes the development phases, architecture, and flow of the overall system, and presents the WBIs development model together with a model depicting the system's flow.
In this paper, we propose a novel unsupervised approach to query segmentation using the word alignment model usually adopted in statistical machine translation systems. Query segmentation obtains complete phrases or concepts in a query by segmenting the sequence of query terms; it is an important query processing procedure for improving information retrieval performance in search engines. In this work, we use a novel monolingual word alignment method to segment queries and automatically obtain the query structure in the form of a multilevel segmentation. Our approach is language-independent and unsupervised, so it is easy to apply to various language scenarios. Experimental results on a real-world query dataset show that our approach outperforms the state-of-the-art language-model-based method, which demonstrates the effectiveness of the proposed approach.
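As a point of comparison only (this is a simple co-occurrence baseline, not the paper's monolingual word alignment model), a greedy segmenter can keep adjacent query terms in one segment when their pointwise mutual information exceeds a threshold. The counts in the test below are made-up illustrative statistics:

```python
import math

def pmi_segment(words, unigram, bigram, total, threshold=0.0):
    """Greedily merge adjacent terms into one segment when their pointwise
    mutual information log(p(a,b) / (p(a) * p(b))) exceeds the threshold.
    unigram/bigram are corpus counts; total is the corpus size."""
    segs, cur = [], [words[0]]
    for a, b in zip(words, words[1:]):
        p_ab = bigram.get((a, b), 0) / total
        if p_ab > 0:
            pmi = math.log(p_ab / ((unigram[a] / total) * (unigram[b] / total)))
        else:
            pmi = float('-inf')   # never co-occur: always split here
        if pmi > threshold:
            cur.append(b)
        else:
            segs.append(cur)
            cur = [b]
    segs.append(cur)
    return segs
```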
This paper presents a segmented optimal multi-degree reduction approximation method for Bézier curves, based on the combination of optimal function approximation and a segmentation algorithm. In the proposed method, each Bernstein basis function is optimally approximated by a linear combination of lower-power bases, and the piecewise curve of the Bernstein basis function is replaced by the obtained optimal approximation functions. The proposed method is simple and intuitive, and experiments show that it improves approximation performance.
In the context of Service-Oriented Architecture (SOA), complex systems can be realized through the visualization of business-driven processes. The automation of Service Supported Systems (SSS) is a future integral part of core SOA, providing preprocessed information and solution suggestions for Cloud Computing Users (CCU). CCU require compact and fast decision-supporting displays and user interfaces in order to handle an increasing workload. This requires an intelligent, intuitive, and robust preprocessing system as a backbone for automation lifecycle management. Complex business management processes often entail complex environmental decision-making procedures, which can be greatly enhanced by an exploratory-envisioning system such as an Information Exploration and Visualization Environment. Current scientific research has taken advantage of e-science to enhance distributed simulation, analysis, and visualization. Many of these infrastructures use one or more collaborative software paradigms such as Grid Computing, High Level Architecture (HLA), and Service-Oriented Architecture (SOA), which together provide an optimal environment for heterogeneous, distant, real-time collaboration. While significant progress has been made using these collaborative platforms, often no single software suite fulfils all requirements for an entire organization or case study, and end-users must cope manually with a collection of tools and their exporting/importing capabilities to obtain the output needed for a particular purpose. We present how service-oriented architecture can be utilized in an automation services support system using the RCD framework as the underlying composition platform. The introduced framework combines rapid analysis development with intelligent process-state visualization for CCU, and we discuss the challenges met in building reliable cloud computing services for web services.
Unified Modeling Language (UML) is used as a specification technique in the system analysis and design process; it allows one to visualize a design and check it against requirements before developers start to code.
The purpose of this study was to develop educational software for the online assessment of multiple-choice questions (MCQs). The automated assessment software developed in this study can display assessment items, record candidates' answers, and mark and provide instant reports of candidates' performance scores. Field tests of the software were conducted in four primary schools located in Bindura town, using a previous year's summative Grade 7 assessment set by the Zimbabwe School Examinations Council (ZIMSEC). The results show that computerized assessment in mathematics has the potential to enhance the quality of assessment standards and can drastically reduce material costs to the examination board. The paper exposes test-mode benefits inherent in computer-based assessments, such as one-item display and the ease of selecting or changing optional answers. It also informs the ongoing debate on the possible enhancement of candidates' performance in computer-based assessment relative to the traditional pen-and-paper format. The need to develop diagnostic instructional software to complement computerized assessments is one of the recommendations of the study.
ICT has become an increasingly important factor in the development process of nations. Major barriers can arise in the adoption and diffusion of e-government services, depending on the readiness of a country in terms of ICT infrastructure and deployment. This study aims to define the organizational requirements necessary for the adoption of e-government, to resolve the delay of ICT readiness in public sector organizations in developing countries. Thus, this study contributes an integrated e-government framework for assessing the ICT readiness of government agencies. Unlike the existing e-government literature, which focuses predominantly on technical issues and relies on generic e-readiness tools, this study contributes a comprehensive understanding of the main factors in the assessment of e-government organizational ICT readiness. The proposed framework comprises seven dimensions of ICT readiness assessment for government organizations: e-government organizational ICT strategy, user access, e-government program, ICT architecture, business processes and information systems, ICT infrastructure, and human resources. This study is critical to management in assessing organizational ICT readiness to improve the effectiveness of e-government initiatives.
Pairwise comparison matrix
Context model of wireless communication in heterogeneous environments 
Calculation of criteria weights for FTP by AHP
Recently, there has been great interest in the community in heterogeneous wireless networks that support services with high quality. The integration of WiMAX and Wi-Fi technologies in communication services such as voice, FTP, and video yields a heterogeneous system, and the results reveal that the proposed network selection technique can effectively choose the optimum network by making trade-offs between network conditions and user preferences while avoiding frequent handoffs. We identify a vertical handover decision algorithm suitable for multimode mobile devices based on the Analytic Hierarchy Process (AHP) and evaluate its performance through simulation in OPNET. This paper gives a comparative analysis of two queuing systems, FIFO and WFQ, showing the advantage of using suitable weights calculated by AHP. The simulation results show that the WFQ technique delivers superior quality to FIFO. This could help network designers and suppliers provide server resources more efficiently.
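The AHP step that produces the criteria weights (as in the "Calculation of criteria weights for FTP by AHP" table above) can be approximated with the row geometric mean of the pairwise comparison matrix. A sketch, using an illustrative 3x3 matrix over assumed criteria (delay, jitter, loss) rather than the paper's actual judgments:

```python
import math

def ahp_weights(M):
    """Approximate AHP priority weights from a pairwise comparison
    matrix M (M[i][j] = relative importance of criterion i over j)
    using the row geometric mean, normalized to sum to 1."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    s = sum(gm)
    return [g / s for g in gm]

# Illustrative judgments over three assumed criteria: delay, jitter, loss.
example = [[1.0, 3.0, 5.0],
           [1/3, 1.0, 3.0],
           [1/5, 1/3, 1.0]]
```

The geometric-mean method closely tracks the principal-eigenvector weights for reasonably consistent matrices, which is why it is a common hand-computation shortcut.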
Biometric authentication systems are increasingly deployed, especially in secure and financial systems, so cracking a biometric authentication system is a growing concern; yet their security has not received enough attention. Imitating the biometric trait of a genuine user to deceive a system (spoofing) is the most important attack method. Multibiometric systems have been developed to overcome some weaknesses of single-biometric systems, because a forger needs to imitate more than one trait. However, no research has further investigated the vulnerability of multimodal systems against spoof attacks. We empirically examine the robustness of five fixed rules for combining the similarity scores of face and fingerprint traits in a bimodal system. By producing different spoof scores, the robustness of the fixed combination rules is examined against various degrees of spoofing. The robustness of a multibiometric system depends on the combination rule, the spoofed trait, and the intensity of spoofing. The min rule shows the most robustness when the face is spoofed, especially in very secure systems, but when the fingerprint is faked, the max rule shows the least vulnerability.
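The fixed score-combination rules examined here are simple functions of the per-trait similarity scores. The sketch below shows four common fixed rules and why the min rule resists a spoofed face: an attacker who inflates one score still cannot raise the fused score above the other, genuine-dependent trait. The score values and the exact rule set are illustrative assumptions, not the paper's data:

```python
RULES = {
    'min':     min,
    'max':     max,
    'sum':     lambda a, b: (a + b) / 2.0,   # mean of the two scores
    'product': lambda a, b: a * b,
}

def fuse(face_score, finger_score, rule):
    """Fuse two similarity scores in [0, 1] with a fixed rule."""
    return RULES[rule](face_score, finger_score)

# Spoofing illustration: the attacker inflates the face score to 0.95
# while the fingerprint score stays at the impostor level 0.2.
spoofed = {r: fuse(0.95, 0.2, r) for r in RULES}
# Under 'min' the fused score is still 0.2: the spoofed trait alone
# cannot push the fusion past the genuine-dependent trait.
```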
The Web includes digital libraries and billions of text documents. A fast and simple search through this sizeable collection is important for users and researchers. Since manual or rule-based document classification is a difficult, time-consuming process, automatic classification systems are clearly needed. Automatic text classification systems demand extensive and proper training data sets. To provide these data sets, numerous unlabeled documents are usually labeled manually by experts. Manual labeling of documents is difficult and time-consuming; moreover, due to human exhaustion and carelessness, there is the possibility of mistakes. In this study, semi-automatic creation of a training data set is proposed, in which only a small percentage of the documents is labeled manually and the remainder is labeled automatically. Results show that by labeling only ten percent of the training set, the remaining documents can be automatically labeled with 98 percent accuracy. It is worth mentioning that the resulting reduction in accuracy only occurs on standard data sets, while for large practical data sets this reduction is trivial compared with the accuracy reduction caused by human exhaustion and carelessness.
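One simple way to realize the semi-automatic labeling idea (a nearest-centroid sketch under a bag-of-words model, not necessarily the method used in the study) is to build one centroid per class from the hand-labeled ten percent and assign each remaining document to the most similar centroid:

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    dot = sum(c * b.get(w, 0) for w, c in a.items())
    na = math.sqrt(sum(c * c for c in a.values()))
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def auto_label(seed, unlabeled):
    """seed: list of (text, label) hand-labeled documents (the ~10%);
    returns a predicted label for each unlabeled document by the
    nearest class centroid under cosine similarity."""
    cents = {}
    for text, lab in seed:
        cents.setdefault(lab, Counter()).update(bow(text))
    return [max(cents, key=lambda lab: cosine(bow(d), cents[lab]))
            for d in unlabeled]
```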
A Context Overlapping Model (COM) is presented in this article for the task of Automatic Sentence Segmentation (ASS). Compared with an HMM, COM expands the observation from a single word to an n-gram unit, with an overlapping part between neighboring units. Due to the co-occurrence constraint and the transition constraint, the COM model reduces the search space and improves tagging accuracy. We treat ASS as a sequence labeling task and apply a 2-gram COM to it. The experimental results show that the overall accuracy on the open test is as high as 90.11%, significantly higher than the 85.16% of the baseline model (a second-order HMM).
Many Networked Control Systems (NCS) comprise several control loops closed over a network of computation nodes. In those systems, periodic, sporadic, and non-periodic packets share the network bandwidth; the complex timing behavior and communication lead to delay and jitter, both of which degrade control performance. That is to say, the performance of an NCS is determined not only by the control algorithms but also by the quality of service (QoS) of the network. Therefore, network QoS must be considered during controller synthesis, and control performance should be taken into account during system scheduling. In this paper, a control-scheduling co-design method that integrates controller and scheduler design is proposed. The transmission periods of control packets are calculated to optimize the overall performance of the control loops. The transmission phases and resource utilization of sporadic and non-periodic packets are controlled by a resource reservation server running on each node, to guarantee the temporal requirements of those data and to limit their effect on the transmission of control packets. Furthermore, a harmonization algorithm for packet transmission periods is presented to improve bandwidth utilization. An integrated simulation platform based on TrueTime is presented, and the experimental results show that the scheme is more effective than former ones.
The RPPI strategy is improved by incorporating testing costs as an evaluation factor. In the software testing stage, component software with both a high RPPI value and low testing costs is given priority for testing. The newly proposed method avoids the defects of the common RPPI strategy, which is often impracticable because of excessive extra testing costs. Since the objectives of a higher RPPI value and lower costs are required simultaneously, the problem is modeled as a multi-objective optimization problem and resolved by optimization algorithms such as the goal programming method and the Pareto algorithm. The proposed method is practicable in software reliability analysis.
Process flow Diagram for Kiosk System
This paper describes a cinema seat booking system and its design. The idea behind the system is to allow public kiosks to be installed in local shopping areas, where members of the general public can pre-book cinema seats for the film of their choice. A discussion follows on the design of a prototype for the system: the requirements specification is discussed, and a design and task analysis is carried out. Evaluation is very important in computer interface development and improvement; here, analytical, observational, and usability evaluations are carried out. The system created is found to be visually pleasing, user-focused, and fully functional.
The objective of this paper is to present an automated segmentation method that allows the rapid identification of tumor tissue and pathological structures with an accuracy and reproducibility comparable to those of manual segmentation. The authors use a Wiener filter to remove noise and then apply a new marker-based watershed segmentation method, using image processing and digital processing algorithms, to detect brain tumor tissue. The method is simple and intuitive in approach and provides higher computational efficiency along with exact segmentation of an image. The proposed technique has been implemented in MATLAB 7.3 and the results are compared with existing techniques.
As the construction and application of information networking continue to expand and mature, the cost, technology, security, and management of information networking face significant risks and challenges. Broadband thin-terminal technology is an information service system based on a client/server (C/S) computing platform. It can fully utilize broadband network technology at low cost, with high efficiency, high security, and easy management; sufficiently exploit the computing capacity of the server; and effectively address the various risks and challenges in the construction of information networking.
The development trend of aero-engine control systems is toward distributed control systems (DCS), and communication among devices is the key and most difficult point of a DCS. In this article, the communication interface for a CAN (Controller Area Network) bus is designed, the CAN communication technology among nodes is studied, a semi-physical DCS platform is established, and a semi-physical simulation is implemented in combination with the engine model. The experimental results show that the control system simulation platform can fully satisfy the requirements of an aero-engine DCS.
When capturing the software requirements of business processes, multinational firms are confronted with the fact that their business operations are scattered over a number of national markets. This paper suggests a central-local-central loop to tackle this contextual complexity. First, business-process information is elicited, analysed and elaborated at the central headquarters. Second, requirements engineers validate the centrally documented requirements in the local national markets where the application will ultimately be deployed. Empirical experience shows that observation, apprenticing and online interviews are effective field-elicitation methods when applied in a combined and balanced manner. Third, the adapted, changed and added information is verified centrally and agreed upon by local and central product champions and decision makers.
Fiber optics is the fastest and most reliable means of transferring large amounts of data; optical fiber links are used in everything from local area networks to worldwide data communication. Fiber-optic communication transmits information from one place to another by sending pulses of light through an optical fiber, and digital optical communication applies digital modulation techniques to such systems. Intensity modulation is a technique in which the optical output power of a source is varied in accordance with some characteristic of the modulating signal. The present work uses the fiber-optics trainer ST 2502, a single-board fiber-optic transmitter-receiver module providing two independent fiber-optic communication links, to study and analyse analog and digital signal modulation in relation to losses in the optical fiber. The aim is to intensity-modulate a digital signal transmitted over a fiber-optic cable and demodulate it at the receiver end to recover the original signal. Different fiber-optic cables, namely SIPMMA fiber, thermocouple type K with glass fiber/stainless cables, and OFNR cable, are used to obtain intensity modulation with a digital input signal. It is found that the detector output is affected by the choice of cable, and that cable size also plays an important role in data transmission. The study of switched faults in intensity modulation is also taken up.
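Intensity modulation of a digital signal amounts to on-off keying of the optical power. The sketch below is a generic illustration, not a model of the ST 2502 hardware: a bit stream drives the source power, the fiber attenuates it, and the receiver thresholds the averaged detector output to recover the bits. The power levels and attenuation figure are illustrative assumptions.

```python
import numpy as np

SAMPLES_PER_BIT = 8

def im_transmit(bits, p_on=1.0, p_off=0.1):
    # On-off keying: optical output power follows the digital signal
    return np.where(np.repeat(bits, SAMPLES_PER_BIT) == 1, p_on, p_off)

def im_receive(power, attenuation=0.5):
    # Fiber and connector losses scale the received optical power;
    # average each bit period, then threshold halfway between levels
    rx = power * attenuation
    per_bit = rx.reshape(-1, SAMPLES_PER_BIT).mean(axis=1)
    threshold = (per_bit.max() + per_bit.min()) / 2
    return (per_bit > threshold).astype(int)

bits = np.array([1, 0, 1, 1, 0, 0, 1])
recovered = im_receive(im_transmit(bits))
```

Because the threshold is set relative to the received levels, the recovered bits are unchanged by a uniform cable loss, which is why the detector-output comparison across cables in the study focuses on signal amplitude rather than bit errors.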
Figure: example images from the Graz-01 dataset.
Kernel classifiers based on the hand-crafted image descriptors proposed in the literature have achieved state-of-the-art results on several datasets and are widely used in image classification systems. Due to the high intra-class and inter-class variability of image categories, no single descriptor can be optimal in all situations, and combining multiple descriptors for a given task is a way to improve the accuracy of image classification systems. In this paper, we propose a filter framework, "Learning to Align the Kernel to its Ideal Form" (LAKIF), to automatically learn the optimal linear combination of multiple kernels. Given the image dataset and the kernels computed on the image descriptors, the optimal kernel weights are learned before classification. Our method learns the kernel weights by aligning the kernels to their ideal forms, leading to a quadratic programming solution. The method takes into account the variation of the kernel matrix and imbalanced datasets, which are common in real-world image categorization tasks. Experimental results on the Graz-01 and Caltech-101 image databases show the effectiveness and robustness of our method.
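The notion of aligning a kernel to its ideal form can be made concrete with the classical kernel-target alignment score. The sketch below is a simplified heuristic, not the paper's quadratic program: each kernel is scored by its Frobenius alignment with the ideal kernel yy^T built from the labels, and the scores are normalized into combination weights.

```python
import numpy as np

def alignment(K1, K2):
    # Frobenius-inner-product alignment between two kernel matrices
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

def alignment_weights(kernels, y):
    # Ideal ("target") kernel for labels y in {-1, +1}: K* = y y^T
    K_star = np.outer(y, y)
    scores = np.array([max(alignment(K, K_star), 0.0) for K in kernels])
    return scores / scores.sum()

y = np.array([1, 1, -1, -1])
K_good = np.outer(y, y).astype(float)   # kernel perfectly matching the labels
K_bad = np.ones((4, 4))                 # uninformative, label-blind kernel
w = alignment_weights([K_good, K_bad], y)
```

On this toy pair, all the weight goes to the kernel that matches the labels; the full LAKIF formulation additionally handles class imbalance and kernel-matrix variation inside the QP.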
An optimized initial-center K-means algorithm (PKM) is proposed, which selects the k mutually furthest data points in the high-density area as the initial cluster centers. Experiments show that the algorithm not only depends weakly on the initial data, but also converges quickly with high clustering quality. To obtain effective and accurate clusters, we combine the optimized K-means algorithm (PKM) with a genetic algorithm into a hybrid algorithm (PGKM). It not only improves the compactness and separation of the clustering but also automatically searches for the best cluster number k, then clusters after optimizing the k centers. Iteration continues until the termination conditions are met and the optimal clustering is obtained. Experiments show that the algorithm achieves good cluster quality and overall performance.
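The seeding idea, picking the k mutually furthest points from the high-density area, can be sketched directly. The density radius and the median cut-off below are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def pkm_init(X, k, radius):
    """PKM-style seeding sketch: restrict to high-density points, then
    greedily pick k mutually far-apart points as initial centers."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    density = (d <= radius).sum(axis=1)              # neighbors within radius
    candidates = np.where(density >= np.median(density))[0]
    centers = [candidates[np.argmax(density[candidates])]]
    while len(centers) < k:
        # next seed: candidate furthest from all seeds chosen so far
        dist_to_centers = d[np.ix_(candidates, centers)].min(axis=1)
        centers.append(candidates[np.argmax(dist_to_centers)])
    return X[centers]

# Two well-separated blobs: the two seeds should land one in each
X = np.array([[0, 0], [0.1, 0], [0, 0.1], [0.1, 0.1],
              [10, 10], [10.1, 10], [10, 10.1], [10.1, 10.1]])
seeds = pkm_init(X, k=2, radius=1.0)
```

Seeding from dense, far-apart points avoids the empty-cluster and local-minimum problems of random initialization, which is what gives PKM its weak dependence on the initial data.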
The challenges and profound opportunities of Information and Communication Technology (ICT) are unique and unparalleled in world history. The ICT revolution is happening at an extremely fast pace and impacting all corners of the globe; it is ubiquitous in effect, unlimited by natural laws or rules, multidirectional, and insatiable in demand; its report is ear-splitting, gut-wrenching, jaw-dropping and heart-stopping. In this paper, we present the opportunities ICT offers and the consequential challenges it poses to the citadel of knowledge in third-world countries. We assess both the development of ICT and ICT for the development of third-world countries, with universities as unbiased agents of development, in order to bridge the digital divide.
Cloud computing has attracted wide attention since its inception and will become the next mainstream in the IT industry. However, the technology is still burgeoning, and its future shape remains unsettled. By analogy with the development of database technology and data-mining technology, this paper proposes the concepts of the super cloud and a super-cloud language, discusses the necessity of their existence, and gives a scheme that can serve as a goal for the future development of the cloud.
Figures: ZIVM architecture; feature-wise comparison of inter-VM mechanisms; overview of the shared-memory virtual device.
With the advent of virtualization technology and its propagation into the infrastructure of cloud distributed systems, there is an emerging demand for more effective means of communication between virtual machines (VMs) on distributed memory than traditional message-based mechanisms. This paper presents a distributed virtual shared-memory mechanism called ZIVM (Zero-copy Inter-VM) that eases the programming of inter-VM communication in a transparent way while achieving comparable execution performance. ZIVM has been implemented as a virtual cluster on a hosted virtual machine using the KVM hypervisor. Experimental results show near-native performance in terms of latency and bandwidth.
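ZIVM itself operates at the hypervisor level, but the zero-copy idea, in which two parties map the same memory pages instead of copying messages between address spaces, can be illustrated with an OS-level analogue using shared memory from Python's standard library. This is an illustration of the principle only, not ZIVM's API.

```python
import numpy as np
from multiprocessing import shared_memory

# "Sender": create a named shared segment and build an array view on it
shm = shared_memory.SharedMemory(create=True, size=1024 * 4)
send_view = np.ndarray((1024,), dtype=np.float32, buffer=shm.buf)
send_view[:] = np.arange(1024, dtype=np.float32)

# "Receiver" (normally another process): attach to the segment by name.
# No payload is copied -- both views alias the very same pages.
shm_rx = shared_memory.SharedMemory(name=shm.name)
recv_view = np.ndarray((1024,), dtype=np.float32, buffer=shm_rx.buf)
value = float(recv_view[100])

# Drop the array views before releasing the underlying segment
del send_view, recv_view
shm_rx.close()
shm.close()
shm.unlink()
```

The point of the sketch is that the "receive" costs one page mapping rather than a per-message copy; ZIVM provides the equivalent mapping across VM boundaries through the hypervisor.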
Taking a feed-driven linear-motor mechanism as the controlled object and the SIMODRIVE 611D as the control system, a mathematical model of the servo system is established. The relation between feed-axis current and cutting force is derived, with the current selected as feedback and the feed rate as output. To keep the feed-axis current constant, fuzzy control is chosen as the control method for the linear feed-driven machining process. Simulation results indicate that the intelligent control system has a fast response time and good anti-jamming performance, and experimental results show that milling efficiency can be improved and the tool well protected.
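The control idea, mapping the error in feed-axis current to a feed-rate adjustment through fuzzy rules, can be sketched with a minimal Mamdani-style controller. The membership ranges and output levels below are illustrative assumptions, not the tuned values used with the SIMODRIVE 611D.

```python
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function with support [a, c] and peak at b
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_feed_adjust(current_error):
    """One step of a fuzzy rule base for constant-current control:
    positive error (cutting too hard) -> slow the feed, and vice versa."""
    mu = np.array([tri(current_error, -2.0, -1.0, 0.0),   # error negative
                   tri(current_error, -1.0, 0.0, 1.0),    # error about zero
                   tri(current_error, 0.0, 1.0, 2.0)])    # error positive
    feed_change = np.array([+0.5, 0.0, -0.5])             # rule consequents
    if mu.sum() == 0.0:
        return 0.0
    return float((mu * feed_change).sum() / mu.sum())     # centroid defuzzify
```

Between the membership peaks the output interpolates smoothly, which is what gives the fuzzy controller its robustness to cutting-force disturbances compared with a fixed-gain loop.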
Practical teaching is an important part of teaching activities in colleges and universities. Based on the characteristics of vocational colleges, this article expounds the importance of practical teaching in the vocational-college teaching system. It also analyses the current situation and development demands of vocational colleges, discusses how to enable more students to make full use of existing practical-teaching resources, and points out the necessity and urgency of expanding the openness of laboratories.
In optical measurement, the longer the focal length of a collimator, the smaller the measurement error; however, a longer focal length also makes the optical design more difficult, so a long-focal-length, wide-field collimator has high application value. This paper designs a collimator with a 2000 mm focal length, a 4-degree field of view, and a wavelength range from 480 nm to 750 nm, to be employed in a camera resolution detection system. It adopts an apochromatic three-piece structure in which the secondary spectrum is corrected. The paper analyses the imaging quality of the optical system and presents the MTF curve.
This paper analyzes the application of pixel-segmentation techniques, the recognition and selection of image regions, and the operations performed on the regions found within digital images in order to detect nudity. The research aims to develop a software tool capable of detecting nudity in digital images. Segmentation in the HSV (Hue, Saturation, Value) color model is used to locate and extract the pixels corresponding to human skin. An algorithm, Recognition, Selection and Operations on Regions (RSOR), is proposed to recognize and separate the region with the highest number of skin pixels within the segmented image (the largest region). Once the largest region is selected, the RSOR algorithm calculates the percentage of skin in the segmented image relative to the original, and then the percentage occupied by the largest region, in order to decide whether the image contains a nude. The criterion is as follows: if the percentage of skin pixels in the segmented image, compared to the original image, is less than 25%, the image is not considered a nude; if it exceeds this percentage the image is a candidate nude, but if the largest region amounts to less than 35%, the image is definitely not a nude. The final result is a message informing the user whether or not the image contains a nude. The RSOR algorithm obtains a 4.7% false-positive rate, comparing favorably with other systems, and has shown optimal performance for nudity detection.
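The two-stage percentage rule from the abstract can be written down directly. In the sketch below the HSV skin thresholds are common illustrative values, not the paper's calibrated ones, and the largest skin region is passed in as a fraction rather than computed by connected-component labeling.

```python
import numpy as np

def skin_mask(hsv):
    # Rough HSV skin rule (H in degrees, S and V in 0-255);
    # the threshold values are illustrative, not the paper's
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return (h < 50) & (s > 58) & (s < 174) & (v > 50)

def rsor_decision(hsv, largest_region_frac):
    """Two-stage rule: <25% skin overall -> not nude; otherwise the
    largest skin region must cover at least 35% to call it a nude."""
    if skin_mask(hsv).mean() < 0.25:
        return "not nude"
    if largest_region_frac < 0.35:
        return "not nude"
    return "nude"

# Toy 4x4 "images": one uniformly skin-toned, one with no skin at all
skin_img = np.full((4, 4, 3), (20, 100, 120))
blue_img = np.full((4, 4, 3), (240, 200, 120))
```

The second stage is what suppresses false positives from scattered skin pixels (faces, hands) that never form one large region.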
Figures: digraph of direct relations among indices; final table of relations among indices.
E-commerce is among the functions promoted by modern communication technologies, and its progressive growth has created new opportunities in trade and commerce. Taking advantage of this method of trading at national and international levels requires infrastructure without which e-commerce will be hindered and commercial transactions will not be fully possible. The present article performs a pathology of e-commerce, with an information-technology approach, at Knauf Iran Company. First, research carried out in Iran and other countries was studied and various criteria and indices of e-commerce were drawn up. A conceptual pattern was then developed after categorizing the criteria. Thereafter, the effects of the indices on one another were assessed to prioritize the elements affecting e-commerce, and the results were analyzed using the DEMATEL technique. The institutional-organizational management index received the highest priority and the expert and educated manpower index the lowest.
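The DEMATEL computation used in the analysis follows a standard recipe: normalize the direct-relation matrix, compute the total-relation matrix T = D(I - D)^{-1}, and rank indices by the row and column sums of T. The sketch below shows the generic method on a made-up 3x3 direct-relation matrix; it does not use the paper's survey data.

```python
import numpy as np

def dematel(A):
    """DEMATEL: direct-relation matrix A -> total-relation matrix T,
    plus prominence (r + c) and net cause/effect (r - c) per index."""
    s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
    D = A / s                                  # normalized direct relations
    n = A.shape[0]
    T = D @ np.linalg.inv(np.eye(n) - D)       # direct + all indirect effects
    r, c = T.sum(axis=1), T.sum(axis=0)        # influence given / received
    return T, r + c, r - c

# Illustrative direct-relation matrix over three indices
A = np.array([[0.0, 3.0, 2.0],
              [1.0, 0.0, 2.0],
              [1.0, 2.0, 0.0]])
T, prominence, relation = dematel(A)
```

Indices with high prominence and positive relation (net causes), like the institutional-organizational management index in the study, are the ones to prioritize.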
Telecentres are physical spaces that provide public access to information and communication technology, particularly the Internet, for educational, personal, social, and economic development. This paper looks closely at the characteristics of the community that influence the success of telecentres. Although a number of community characteristics are influential, the emphasis here is on the groups-and-networks factor. A survey was conducted to collect data from users regarding their use of telecentres; apart from the users' profiles, the questionnaire included items related to groups and networks. Sampling was based on a population comprising telecentres implemented by state governments, non-governmental organizations, and the private sector. The findings suggest that use of telecentres fosters a sense of belonging to a group and the establishment of networks, which can contribute to the success of telecentres.
The increased adoption of databases for multidimensional data, together with techniques to support efficient query processing, creates opportunities for more extensive research. Pre-processing is required because of missing attribute values, noisy data, errors, inconsistencies, outliers, and differences in coding. Several types of pre-processing based on component analysis are carried out for cleaning, data integration and transformation, and dimensionality reduction. Component analysis can be performed by statistical methods, with the aim of separating the various data sources into statistically independent patterns. This paper aims to improve the quality of pre-processed data based on component analysis. RapidMiner is used for data pre-processing with the FastICA algorithm; kernel K-means is used to cluster the pre-processed data and Expectation Maximization (EM) is used for modeling. The model was tested on the Wisconsin breast cancer, lung cancer, and prostate cancer datasets. The results show that the cluster vector values are higher and the processing time is shorter.
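The heart of the FastICA pre-processing step is a fixed-point iteration on whitened data. The sketch below is a minimal one-unit FastICA with the tanh nonlinearity (a generic implementation, not RapidMiner's); it recovers one independent component from a synthetic two-source mixture.

```python
import numpy as np

def fastica_one_unit(X, iters=200, seed=0):
    """One-unit FastICA (tanh nonlinearity). X is (signals x samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten: decorrelate the signals and scale them to unit variance
    d, E = np.linalg.eigh(X @ X.T / X.shape[1])
    Xw = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ X
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        wx = w @ Xw
        # Fixed-point update: w <- E[x g(w.x)] - E[g'(w.x)] w
        w = (Xw * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx) ** 2).mean() * w
        w /= np.linalg.norm(w)
    return w @ Xw      # one recovered independent component

# Synthetic mixture of two independent sources
t = np.linspace(0, 8 * np.pi, 2000)
s1 = np.sin(t)
s2 = np.sign(np.sin(3.1 * t))            # square-ish wave, independent of s1
X = np.array([[1.0, 0.5], [0.5, 1.0]]) @ np.vstack([s1, s2])
y = fastica_one_unit(X)
```

The recovered component correlates strongly with one of the original sources; in the paper's pipeline the components produced this way feed the kernel K-means clustering stage.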