Journal of Computer Science

Published by Science Publications
Online ISSN: 1549-3636
(A) Original signature, (B) Random forgery, (C) Simple forgery, (D) Skilled forgery 
Captured signature (A) before adjustment and (B) after adjustment 
Vertical splitting of the signature image 
Horizontal splitting of the signature image 
Derivation of the average distance (d_avg) and standard deviation (s) from the distances
Conference Paper
In this paper, a novel offline signature verification scheme is proposed. The scheme selects 60 feature points from the geometric centre of the signature and compares them with the already trained feature points. The classification of the feature points uses statistical parameters such as the mean and variance. The suggested scheme discriminates between original and forged signatures and handles skilled, simple and random forgeries. The objective of the work is to reduce the two vital parameters, the False Acceptance Rate (FAR) and False Rejection Rate (FRR), used in any signature verification scheme. Finally, a comparative analysis is made with standard existing schemes.
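A minimal sketch, not the authors' implementation, of the statistical check the abstract describes: distances of feature points from the geometric centre are summarized by their mean (d_avg) and standard deviation (s), and a test signature is accepted only if its statistic falls within a tolerance band learned from training samples. Function names and the acceptance factor k are assumptions.

import numpy as np

def centre_distances(points):
    """Distances of feature points from the geometric centre of the signature."""
    pts = np.asarray(points, dtype=float)
    centre = pts.mean(axis=0)
    return np.linalg.norm(pts - centre, axis=1)

def train_statistics(signatures):
    """d_avg and s pooled over the training signatures of one writer."""
    d = np.concatenate([centre_distances(s) for s in signatures])
    return d.mean(), d.std()

def verify(test_points, d_avg, s, k=2.0):
    """Accept if the test signature's mean distance lies within k standard deviations."""
    d_test = centre_distances(test_points).mean()
    return abs(d_test - d_avg) <= k * s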
 
Article
Human-computer interaction is a primary factor in the success or failure of any device, but an objective view of the current mobile phone market suggests that usability has become secondary to aesthetics. Many phone manufacturers modify the design of their phones to stand out from the competition and to target fashion trends, usually at the expense of usability and performance. Many buyers are unaware of the usability of the device they are purchasing, and the disposability of modern technology is an effect rather than a cause of this. Designing new text entry methods for mobile devices can be expensive and labour-intensive. Assessing a new text entry method and comparing it with current methods is a necessary part of the design process, and the best way to do this is through an empirical evaluation. The aim of this paper is to establish which mobile phone text input method best suits the requirements of a select group of target users. The study used a diverse range of users to compare devices that are in everyday use by most of the adult population. The proliferation of these devices is as yet unmatched by the study of their application and the consideration of their user-friendliness.
 
Article
Problem statement: This study presented optimized test scheduling and test access for the ITC-02 SOC benchmark circuits using a genetic algorithm. In the SOC scheduling procedure, the scheduling problem was formulated as a sequence of two problems and solved. Approach: The test access mechanism width was partitioned into two and three partitions, and the application of test vectors and the test vector assignments for the different partitions were scheduled using different genetic algorithm operators. Results: The test application time was calculated in terms of CPU time cycles for two and three partitions of twelve ITC-02 SOC benchmark circuits and the results were compared with the integer linear programming approach. Conclusion: The results showed that the genetic algorithm based approach gives better results.
 
Article
Digital communication systems use Multitone Channel (MC) transmission techniques with differential encoding and differentially coherent demodulation. Today there are two principal MC applications: one is the high-speed digital subscriber loop and the other is the broadcasting of digital audio and video signals. In this study, multicarrier transmission with OQPSK and Offset 16-QAM is compared for high-bit-rate wireless applications. The Bit Error Rate (BER) performance of the Multitone Channel (MC) with Offset 16-QAM and with Offset Quadrature Phase Shift Keying (OQPSK) modulation, using a guard interval in a fading environment, is compared via Monte Carlo simulation. BER results are presented for Offset 16-QAM using a guard interval to protect against multipath delay for Rayleigh fading channels and for two-path fading channels in the presence of Additive White Gaussian Noise (AWGN). BER results are also presented for the MC with differentially encoded Offset 16-QAM and the MC with differentially encoded OQPSK using a guard interval for a frequency-flat Rician channel in the presence of AWGN. The performance of the multitone systems is also compared with equivalent differentially encoded Offset 16-QAM and differentially encoded OQPSK with and without a guard interval in the same fading environment.
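The BER comparison above rests on Monte Carlo simulation. As a hedged illustration of that technique only, the sketch below estimates BER for a plain Gray-mapped QPSK constellation over AWGN; the study's actual multitone OQPSK/Offset 16-QAM system with guard intervals and fading is not reproduced here, and the bit count and Eb/N0 values are arbitrary.

import numpy as np

def qpsk_ber_awgn(ebn0_db, n_bits=200_000, seed=0):
    """Monte Carlo BER estimate for Gray-mapped QPSK over AWGN."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, size=n_bits)
    i, q = 1 - 2 * bits[0::2], 1 - 2 * bits[1::2]          # map bit pairs to {+1, -1}
    symbols = (i + 1j * q) / np.sqrt(2)                     # unit-energy symbols
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = np.sqrt(1 / (4 * ebn0))                         # noise std per dimension (2 bits/symbol)
    noise = sigma * (rng.standard_normal(symbols.size) + 1j * rng.standard_normal(symbols.size))
    r = symbols + noise
    bits_hat = np.empty_like(bits)
    bits_hat[0::2] = (r.real < 0).astype(int)
    bits_hat[1::2] = (r.imag < 0).astype(int)
    return np.mean(bits != bits_hat)

for snr_db in (0, 4, 8):
    print(snr_db, qpsk_ber_awgn(snr_db))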
 
Article
The recovery mechanism from transient faults in distributed systems has been intensively studied in the past but, to the best of our knowledge, none of these studies has addressed transient and permanent hard faults together. Our study is devoted to recovery processes in a distributed environment in the presence of hard faults, whether transient or permanent. The recovery mechanism we present can be based on one of six proposed strategies involving checkpointing and message logging between distributed application processes; this exhaustive number of strategies is system-dependent. The strategies have been examined with respect to recovery propagation through processes in order to prevent the well-known domino effect. The considered framework was a distributed system composed of a set of autonomous nodes, each running a local system, with some nodes predisposed to replace failing ones in case of a permanent fault. Our main contribution is to enable a distributed application to meet its requirement of completing its mission in spite of node crashes. Preliminary experimental results of a fault-tolerant mechanism based on one of the proposed strategies suggest that our proposals are conclusive.
 
Article
The software industry has been experiencing a software crisis: a difficulty in delivering software within budget, on time and of good quality. This may happen because of the number of defects present in the different modules of a project that may require maintenance, which necessitates predicting the maintenance urgency of particular modules in the software. In this paper, we have applied different predictor models to five NASA public-domain defect datasets coded in the C, C++, Java and Perl programming languages. Twenty-one software metrics from the different datasets and the Java classes of thirty-five algorithms belonging to different learner categories of the WEKA project have been evaluated for the prediction of maintenance severity. The results of ten-fold cross-validation are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) for the different project datasets. The results show that the Logistic Model Trees (LMT) and Complementary Naïve Bayes (CNB) based models provide relatively better prediction consistency compared to other models and hence can be used for maintenance severity prediction. The developed system can also be used for analysis and to evaluate the influence of different factors on the maintenance severity of different software project modules.
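The evaluation protocol described (ten-fold cross-validation reporting Accuracy, MAE and RMSE) can also be reproduced outside WEKA. The sketch below uses scikit-learn with a Gaussian Naïve Bayes learner standing in for the WEKA classifiers and assumes a numeric feature matrix X with ordinal severity labels y; it is an illustration of the metrics, not the study's pipeline.

import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB

def ten_fold_report(X, y):
    """Accuracy, MAE and RMSE from 10-fold cross-validated predictions."""
    pred = cross_val_predict(GaussianNB(), X, y, cv=10)
    acc = np.mean(pred == y)
    mae = np.mean(np.abs(pred - y))              # labels assumed to be ordinal severity codes
    rmse = np.sqrt(np.mean((pred - y) ** 2))
    return acc, mae, rmse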
 
Article
Wireless Local Area Networks (WLANs) based on the IEEE 802.11 standard have been growing rapidly. With the limitations of the WLAN standard and the rapid increase in wireless application demand, the air interface acts as a bottleneck even in high-speed WLAN standards such as IEEE 802.11a and 802.11g, with expected data transmission rates of up to 54 Mbps. To improve the overall performance of IEEE 802.11 WLANs under large deployments and heavy application demand, Radio Resource Management (RRM) algorithms based on power control have been investigated and tested through simulation. The results show that controlling the Wireless Terminal's (WT) transmitter power to an optimum level helps to increase data throughput in WLANs, and that when the transmitter power of a WT is increased beyond the optimum level, the overall data throughput drops drastically no matter how much further the WT's transmitter power is increased.
 
The cellular multilayer architecture
Number of relay handsets per route versus the overall traffic load, MRC algorithm and ARC algorithm
Article
In heterogeneous microcellular networks such as 802.11 (WLAN)-GPRS, users can roam through vertical handoffs. The major problem in heterogeneous networks is that the constituent networks support different QoS requirements and users suffer sudden changes in data rate. This study proposes routing algorithms to extend the service time of mobile handsets in the wireless network with the higher available data rate, thus improving the average available bandwidth.
 
Test weight input screen  
Marking result screen  
Article
This study describes the design of an automatic assessment system for assessing automata-based assignments. The automata concept is taught in several undergraduate computing courses such as Theory of Computation, Automata and Formal Languages and Compilers. We take two elements into consideration when assessing a student's answer: a static element and a dynamic element. The static element involves the number of states (including initial and final states) and the number of transitions, whilst the dynamic element involves executing the automaton against several test data. In this work, we rely heavily on JFLAP for drawing and executing the automata.
 
Catching the topic “butterfly” 
An example graph of D-Numbers and their 
Novel web search system architecture design 
Article
Problem statement: The main goal of a Web crawler is to collect documents that are relevant to a given topic in which the search engine specializes. These topic-specific search systems typically take the whole document's content into account when predicting the importance of an unvisited link, but current research has shown that the content of the document pointed to by an unvisited link depends mainly on the anchor text, which is a more accurate predictor than the contents of the whole page. Approach: Between these two extremes, it was proposed that a Treasure Graph, called a T-Graph, is a more effective way to guide the Web crawler to fetch topic-specific documents. Target documents are predicted by identifying the topic boundary around the unvisited link, comparing that text with all the nodes of the T-Graph to obtain the matching node(s) and calculating the distance, in terms of documents to be downloaded, to reach the target documents. Results: Web search systems based on this strategy allow crawlers and robots to update their experiences more rapidly and intelligently and can also offer speed of access and presentation advantages. Conclusion/Recommendations: The consequences of visiting a link to update a robot's experiences based on the principles and usage of the T-Graph can be deployed as intelligent-knowledge Web crawlers, as shown by the proposed novel Web search system architecture.
 
Hadith component (Yusoff et al., 2008)
Article
Problem statement: The needs of computer forensics investigators have been directly influenced by the increasing number of crimes performed using computers. It is the responsibility of the investigator to ascertain the authenticity of the collected digital evidence. Without proper classification of digital evidence, the computer forensics investigator may end up investigating with untrusted digital evidence that ultimately cannot be used to implicate the suspected criminal. Approach: The historical methods of verifying the authenticity of a hadith were studied. The similarities between hadith authentication and digital evidence authentication were identified. Based on the similarities of the identified processes, a new method of authenticating digital evidence was proposed, together with a trust calculation algorithm and an evidence classification. Results: A new investigation process and an algorithm to calculate the trust value of a given piece of digital evidence were proposed. Furthermore, a simple classification of evidence based on the calculated trust values was also proposed. Conclusion/Recommendations: We successfully extracted the methods used to authenticate hadith and mapped them onto the digital evidence authentication process. The trust values of digital evidence could be calculated and the evidence could then be classified based on the different levels of trust. The ability to classify evidence based on trust levels can greatly assist computer forensics investigators in planning their work and focusing on the evidence that gives them a better chance of catching the criminals.
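The abstract does not give the trust formula, so the sketch below only illustrates the general idea: each link in the chain of custody of a piece of evidence carries a reliability score, an overall trust value is aggregated from them, and the evidence is then binned into trust classes. All names, the product aggregation and the thresholds are hypothetical, not the authors' algorithm.

def trust_value(handler_scores):
    """Aggregate per-handler reliability scores (each in [0, 1]) into one trust value.
    A chain is only as trustworthy as its links, so a product is used here (assumption)."""
    value = 1.0
    for s in handler_scores:
        value *= max(0.0, min(1.0, s))
    return value

def classify(value):
    """Bin a trust value into an illustrative three-level classification."""
    if value >= 0.8:
        return "trusted"
    if value >= 0.5:
        return "partially trusted"
    return "untrusted"

print(classify(trust_value([0.95, 0.9, 0.85])))   # e.g. imaging, transport, storage steps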
 
Article
An attempt was made in this study to develop a "Knowledge Based System" for (i) computing the depth (or the third dimension) of 3D objects, (ii) computing the inclinations of 2D planes with predefined reference planes and (iii) computing the inclination of a straight line with predefined reference planes. The development of the proposed "Knowledge Based System" was based on the "Principle of Color Gradation".
 
Classification of boundary-based shape representation
a) Dividing the original boundary into fixed size blocks, b) Replacing the boundary pixels by straight-lines
The change in angle (Δθ) of P2 with respect
a) The original boundary, b) The boundary representation
Article
2D object shape description has gained considerable attention in the field of content-based image retrieval (CBIR) and in various applications concerned with the shapes of objects. Object shape descriptors in the literature are commonly classified into two forms: contour-based and region-based. In this paper we present our contour-based shape representation and description, where the object shape is represented and described as a set of discrete segments. The descriptor is invariant to translation and scale, and circular shifting makes it robust to rotation.
 
Article
This study presents a performance analysis of two multipliers for unsigned data, one using a carry-look-ahead adder and the other using a ripple adder. The study's main focus is on the speed of the multiplication operation in these 32-bit multipliers, which are modeled using VHDL, a hardware description language. The multiplier with the carry-look-ahead adder shows better performance than the multiplier with the ripple adder in terms of gate delays. In the worst case, the multiplier with the fast adder shows approximately twice the speed of the multiplier with the ripple adder: the multiplier with the ripple adder takes 979.056 ns, while the multiplier with the carry-look-ahead adder takes 659.292 ns.
 
Article
Problem statement: Though stereoscopic 3D visualization technology has made considerable progress, it has always been associated with eye attachments, such as wearing colored glasses, wearing Polaroid glasses or using timed shutters. This has considerably retarded the popularity of 3D visualization technology. Approach: Though there are isolated references in the literature to developing stereoscopic 3D visualization without glasses, these methods involve transferring the attachments to the monitor, direct viewing by straining the eyes or the application of color variation, and they have not made considerable progress in this direction. Results: An attempt is made in this study to use computer capabilities, through generic algorithms, to develop stereoscopic 3D visualization without glasses, overcoming the above-mentioned limitations. Conclusion/Recommendations: An "Expert System" is developed using computer capabilities to give a satisfactory 3D/depth view of 2D images of 3D environments. Algorithms are developed to process both computer-acquired and computer-generated stereo pairs for 3D visualization.
 
Digital recording mechanism of the in-line setup 
An in-line hologram for 15 μm, 30 μm and 45 μm particles in a 9.2 x 9.2 x 4 mm^3 volume (z_r in [48, 52] mm) with 1694 particles (5 mm^-3)
...in the image processing system. Therefore more flexibility is offered to overcome the inherent problems which have been the restrictions to the applications of conventional in-line holography. Theoretical analysis based on Fourier optics and experimental results show that a high lateral resolution and less influence of speckle noise can be achieved in the in-line configuration. However, its application is often hampered by the poor depth resolution due to the insufficient resolving power of the digital image sensors. In this study, we demonstrate that the complex amplitude of the reconstruction field provides a promising solution to this problem. A novel method of particle extraction using complex amplitude has been developed for digital holography of a particle field. This method uses the dipping characteristic of the total reconstruction wave to extract the depth position of opaque particles. Results show that the system outperforms the traditional intensity methods and can predict particle 3D position accurately.
From Fig. 7, at plane Z_r1 the proposed system successively extracted the particle field. It is found that all the particles at that plane have the same size of 30 μm. At Z_r2 the extracted particles have both sizes of 15 μm and 30 μm, while at Z_r3 the three different particle sizes are extracted. From this experiment, it is clear that the proposed system can clearly extract particles of different sizes even if they are at the same plane. Also, this system can be used for implementing holographic particle tracking with different particle sizes at different z planes. We have applied the same system in another experiment with the same parameters as the first one, except that the particle size is equal to or slightly larger than the pixel size of the hologram. It is found that the...
Article
Digital holography for 3D particle field extraction and tracking is an active research topic. It has great application in characterizing micro-scale structures in microelectromechanical systems (MEMS) with high resolution and accuracy. The in-line configuration is studied here as the fundamental structure of a digital holography system. The digital holographic approach not only eliminates wet chemical processing and mechanical scanning, but also enables the use of complex amplitude information inaccessible by optical reconstruction, thereby allowing flexible reconstruction algorithms to achieve optimization of specific information. However, owing to the inherently low pixel resolution of solid-state imaging sensors, digital holography gives poor depth resolution. This problem severely impairs the usefulness of digital holography, especially in densely populated particle fields. This study describes a system that significantly improves particle axial-location accuracy by exploiting the reconstructed complex amplitude information, compared with other numerical reconstruction schemes that merely mimic traditional optical reconstruction. Theoretical analysis and experimental results demonstrate that the in-line configuration is advantageous in enhancing system performance: greater flexibility, higher lateral resolution and lower speckle noise can be achieved.
 
Article
Problem statement: The literature mainly aims at the recognition of objects by the computer and at making explicit the information that is implicit in the attributes of 3D objects and their relative positioning in the 3D Environment (3DE) as seen in 2D images. However, quantitative estimation of the position of objects in the 3DE in terms of their x, y and z co-ordinates has not been touched upon. This issue assumes an important dimension in areas like the Kinematic Design of Robos (KDR), where the Robo is negotiating the z field or Depth Field (DF). Approach: The existing methods, such as pattern matching used by Robos for Depth Visualization (DV) through a set of external commands, were reviewed in detail. A methodology was developed in this study to enable the Robo to quantify the depth by itself, instead of relying on external commands. Results: The results are presented and discussed, and the major conclusions drawn from them are listed. Conclusion: The major contribution of the present study is the computation of the depth (D1) corresponding to the depth (d) measured from the photographic image of a 3DE. It is concluded that there is excellent agreement between the computed depth D1 and the corresponding actual depth (D): the percent deviation of D1 from D (DP) lies between ±2 over the entire region of the DF. Through suitable interfacing of the developed equation with the kinematic design of Robos, the Robo can generate its own commands for DF negotiation.
 
Article
A literature review reveals that there exist many methods for the 3D visualization of a given object or a set of objects [1,2]. To name a few commonly used methods, there are stereoscopic 3D visualization [3], cross-eye visualization, parallel-eye visualization and perspective 3D visualization. An optimum location of the viewing point for an ideal 3D perspective view is attempted in this paper. The conclusions drawn are presented [4,5].
 
The infinite data state diagram of the Quantum Authentication Process (QAP) 
The virtual private networks VPN (classical channels) and the quantum channel 
The infinite data state diagram of the Key 
Article
Problem statement: In previous research, we investigated the security of communication channels utilizing authentication, key distribution between two parties, error correction and cost establishment. In the present work, we study new concepts of Quantum Authentication (QA) and key sharing based on those points. Approach: This study presents a new protocol concept that allows session and key generation on-site by independently applying a cascade of two hash functions on a random string of bits at the sender and receiver sides. This protocol, however, requires a reliable method of authentication. It employs an out-of-band authentication methodology based on quantum theory, which uses entangled pairs of photons. Results: The proposed quantum-authenticated channel is secure in the presence of an eavesdropper who has access to both the classical and the quantum channels. Conclusion/Recommendations: The key distribution process using cascaded hash functions provides better security. The concepts presented by this protocol represent a valid approach to the communication security problem.
 
Article
A practical mobility management technique aimed at reducing handoff latency to less than 10 ms has been proposed and implemented in this paper. Handoff latency has been reduced by decoupling scanning from the handoff execution process. The scheme has been implemented as a client-side application with modifications to the open-source MadWifi new-generation driver. The performance of this mobility management scheme has been evaluated on an experimental testbed to determine its suitability for real-time applications. The effect of the proposed scheme on a commercial VoIP client, Skype, has been evaluated, and the effect of the proposed handoff and optimized background scan on audio and video streaming has also been investigated. The results indicate that this mobility management technique can be used for real-time traffic over 802.11 wireless networks.
 
Actual back-off  
Consecutive back-off measurement
Figure 3 illustrates this test (Test 6), which works in the case of sources with inter-frame delays. In practice, this is mainly the case for TCP sources (where the delay is due to TCP's congestion control), which represent over 91% of traffic in real networks. The actual back-off test for these sources does not yield the correct values (as explained in the previous paragraph) and consequently cannot detect potential cheating. Let us consider a station S sending TCP traffic and being monitored by the system algorithm. We assume that there is enough traffic from other sources on the common channel that, between two frames sent by S and separated by a transport-layer delay, there is at least one interleaving frame from another station. Hence, if the AP observes two consecutive non-interleaved frames from S, it can consider the idle time between them to be only a back-off in addition to the mandatory DIFS. These consecutive frames are the result of channel contention that may force S to queue packets at the MAC layer even if they were separated by a delay at upper layers. In this situation, S would benefit from cheating with its back-off in order to free its MAC-layer queue. Thus, the system can collect significant samples of the back-off values chosen by S; we call these samples consecutive back-offs.
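A sketch of the consecutive back-off test described above: from a trace of (station, start, end) frame records, the monitor keeps the idle gaps between back-to-back frames of the monitored station, subtracts the mandatory DIFS, and flags the station if the mean sample is well below the mean of a fair uniform back-off. The trace format, slot timing constants and the decision margin are assumptions for illustration.

DIFS = 50e-6          # assumed DIFS duration, seconds
SLOT = 20e-6          # assumed slot duration, seconds
CW_MIN = 31           # fair back-off drawn uniformly from [0, CW_MIN]

def consecutive_backoffs(frames, station):
    """frames: list of (station_id, t_start, t_end) sorted by time."""
    samples = []
    for prev, cur in zip(frames, frames[1:]):
        if prev[0] == station and cur[0] == station:       # no interleaving frame
            idle = cur[1] - prev[2] - DIFS                  # idle time minus mandatory DIFS
            if idle >= 0:
                samples.append(idle / SLOT)                 # back-off expressed in slots
    return samples

def is_cheating(samples, margin=0.5):
    """Flag the station if its mean back-off is below margin * fair mean."""
    if not samples:
        return False
    fair_mean = CW_MIN / 2
    return sum(samples) / len(samples) < margin * fair_mean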
Article
Wireless Medium Access Control (MAC) protocols such as IEEE 802.11 use distributed contention resolution mechanisms for sharing the wireless channel. In this environment, selfish hosts that fail to adhere to the MAC protocol may obtain an unfair throughput share. For example, IEEE 802.11 requires hosts competing for access to the channel to wait for a "back-off" interval, randomly selected from a specified range, before initiating a transmission. Selfish hosts may wait for smaller back-off intervals than well-behaved hosts, thereby obtaining an unfair advantage. We show in this study that a greedy user can substantially increase his share of bandwidth, at the expense of the other users, by slightly modifying the driver of his network adapter. This study complements the DOMINO system model to enhance the detection system in the MAC layer of IEEE 802.11; our enhanced system is a piece of software to be installed in or near the Access Point. The system can detect and identify greedy stations without requiring any modification of the standard protocol. We illustrate these concepts with simulation results.
 
Article
Problem statement: Deployment of real-time services over 802.11 wireless networks requires Quality of Service (QoS) differentiation mechanisms for different traffic types. This required investigating the performance of Medium Access Control (MAC) schemes like the Distributed Coordination Function (DCF) and Enhanced DCF with respect to the stringent QoS requirements imposed by real-time services. The motivation for this research was to determine the suitability of 802.11 MAC schemes for real-time traffic. Approach: In this study, the available MAC schemes were experimentally evaluated for QoS provisioning in 802.11 wireless networks. The performance evaluation was based on important QoS metrics like access delay, jitter, packet loss, Round Trip Time (RTT) and traffic throughput. An experimental testbed was established using off-the-shelf hardware and open-source software. The traffic was captured in real time and analyzed thereafter for the various QoS metrics. Results: The results indicated that there is considerable QoS improvement using 802.11e EDCF with reconfigured queues over the ordinary DCF mechanism. Results were obtained on the experimental testbed using various types of UDP and TCP traffic. Conclusion: It can be concluded that proper differentiation and application-specific scheduling of traffic helps in providing better QoS over 802.11 wireless networks and improves their suitability for the deployment of real-time services.
 
Article
Problem statement: The IEEE 802.11 Medium Access Control (MAC) protocol is one of the most widely implemented protocols in wireless networks. IEEE 802.11 controls access to the shared wireless channel among competing stations. The IEEE 802.11 DCF doubles the Contention Window (CW) size to decrease collisions among contending stations and improve network performance, but this is not well suited to error-prone channels because the sudden CW reset to CWmin may cause several collisions. Approach: Research to date has tended to focus on the current number of active stations, which needs complex computations. A novel backoff algorithm is presented that optimizes the CW size by taking into account the history of packet loss. Results: Finally, we compare HBCWC with IEEE 802.11 DCF. The simulation results show 24.14, 56.71 and 25.33% improvement in Packet Delivery Ratio (PDR), average end-to-end delay and throughput compared to IEEE 802.11 DCF. Conclusion: This study showed that monitoring the last three channel statuses achieves better delay and throughput, which can be used for multimedia communications.
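The abstract does not spell out the HBCWC update rule, so the sketch below only illustrates the stated idea: the contention window is adapted from the outcome history of the last three transmissions instead of being reset straight to CWmin after a success. The specific update factors are invented for illustration and are not the paper's algorithm.

from collections import deque

CW_MIN, CW_MAX = 31, 1023

class HistoryBackoff:
    """Adapts CW from the outcome history of the last three transmissions."""
    def __init__(self):
        self.cw = CW_MIN
        self.history = deque(maxlen=3)     # True = success, False = collision/loss

    def update(self, success):
        self.history.append(success)
        losses = self.history.count(False)
        if losses >= 2:                    # channel looks congested: grow aggressively
            self.cw = min(CW_MAX, 2 * self.cw + 1)
        elif losses == 1:                  # keep the window, avoid the abrupt reset
            pass
        else:                              # three clean transmissions: shrink gently
            self.cw = max(CW_MIN, self.cw // 2)
        return self.cw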
 
Article
Problem statement: The FECG (Fetal Electrocardiogram) signal contains potentially precise information that could assist clinicians in making more appropriate and timely decisions during pregnancy and labor. Approach: Conventional techniques are often unable to extract the FECG from the Abdominal ECG (AECG) at a satisfactory level. A new methodology combining an Artificial Neural Network and Correlation (ANNC) approach is proposed in this study. Results: The accuracy of the proposed method for FECG extraction from the AECG signal was about 100% and the performance of the method for FHR extraction was 93.75%. Conclusions/Recommendations: The proposed approach achieves FECG extraction even when the MECG and FECG overlap in the AECG signal, so that the physician and clinician can make the correct decisions for the well-being of the fetus and mother during the pregnancy period.
 
Article
Problem statement: In order to bring speech into the mainstream of business processes, an efficient digital signal processor is necessary. The Fast Fourier Transform (FFT) and the symmetry of its butterfly structure make the hardware implementation easier. The proposed DSP and software together establish a system, named here the "Speech Abiding System (SAS)", a software agent which involves the digital representation of speech signals and the use of digital processors to analyze, synthesize or modify such signals. The proposed SAS addresses the issues in two parts. Part I: capturing speaker- and language-independent, error-free speech content for speech application processing, and Part II: delivering the speech content as input to Speech User Applications/Interfaces (SUI). Approach: The Discrete Fourier Transform (DFT) of the speech signal is the essential ingredient of the SAS, and the Discrete-Time Fourier Transform (DTFT) links the discrete-time domain to the continuous-frequency domain. The direct computation of the DFT is prohibitively expensive in terms of the required computer operations. Fortunately, a number of "fast" transforms have been developed that are mathematically equivalent to the DFT but require significantly fewer computer operations. Results: From Part I, the SAS is able to capture error-free speech content to make speech a good input to mainstream business processing. Part II provides an environment to implement speech user applications at a primitive level. Conclusion/Recommendations: With the SAS agent and the required hardware architecture, a Finite State Automata (FSA) machine can be created to develop globally oriented, domain-specific speech user applications easily. It will have a major impact on interoperability and disintermediation in the Information Technology Cycle (ITC) for computer program generation.
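As the abstract notes, the directly computed DFT costs on the order of N^2 operations while the FFT's butterfly structure brings this down to N log N. The snippet below shows the kind of front-end computation involved: the magnitude spectrum of one windowed frame using NumPy's FFT. The frame content, length and sampling rate are arbitrary stand-ins, not the SAS implementation.

import numpy as np

fs = 8000                                                 # assumed sampling rate, Hz
frame = np.sin(2 * np.pi * 440 * np.arange(512) / fs)    # stand-in for a 512-sample speech frame
window = np.hamming(len(frame))
spectrum = np.fft.rfft(frame * window)                    # FFT via the butterfly algorithm
freqs = np.fft.rfftfreq(len(frame), d=1 / fs)
magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
print(freqs[np.argmax(magnitude_db)])                     # dominant frequency of the frame (~440 Hz)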
 
Article
The calculation of the magnetic and thermal properties of Current Transformers (CTs) is very complex due to the complexity of their construction, the different properties of their materials and the nonlinearity of the core B-H curve. Finite Element Methods (FEMs), as implemented for example in the Ansys software, are very capable and reliable methods for solving these problems. In this study, Ansys is applied to the analysis of an 800-400/5-5 CT. The analyses consist of 2D static normal, open-circuit and short-circuit conditions of the CT. Magnetic and thermal analyses are performed and the results are discussed.
 
Volume-time curve and flow-volume curve of three patterns [3] 
Multilayer Perceptron neural network topology 
Results of ANN1 and ANN2
Article
Problem Statement: Lung disease is a major threat to human health owing to industrial life, air pollution, smoking and infections. Lung function tests are often performed using spirometry. Approach: The present study aims at detecting obstructive and restrictive pulmonary abnormalities. Data were obtained from 250 volunteers with a standard recording protocol in order to detect and classify pulmonary disease as normal, obstructive or restrictive. Firstly, the spirometric data were statistically analyzed for their significance for neural networks. Then, these parameters were presented as input to MLP and recurrent networks. Results: The two networks detected normal versus abnormal disorders and obstructive versus restrictive patterns, respectively. Moreover, the output was validated by measuring accuracy and sensitivity. Conclusion: The results show that the proposed method could be useful for assessing the function of the respiratory system.
 
Different Levels of Abstraction of Class Person 
Article
Although the evolving field of software engineering introduces many methods and modelling techniques, we conjecture that the concepts of abstraction and generality are among the fundamentals of each such methodology. This study proposed a formal representation of these two concepts, along with a two-dimensional space for the representation of their application. Based on the examples, we further elaborate and discuss the notion of abstraction and generalisation transformations in various domains of software development.
 
Article
Problem statement: Most institutions recognize the critical role that information security risk management plays in supporting their missions and objectives. Often, institutions do not pay enough attention to assessing the effectiveness of existing security measures. They are also unable to respond to new security threats in reasonable time. Furthermore, new laws are forcing institutions to manage security risk more closely and effectively than in the past. Approach: In this study, a metric-based assessment and exception handling plan is proposed, specific to the needs of an academic environment. An organization structure and reporting strategy, which are crucial for effective implementation and monitoring, are also proposed. Discussion and Conclusion: The proposed assessment metric enables small institutions to make a moderate but quick start, as essential measures are identified and prioritized. As institutes gain more experience and resources, the remaining levels of the metric can also be implemented. Secondly, to reduce response time, a novel role-based communication of exceptions is proposed. Responsibilities are distributed across the institution and security exceptions are reported directly to the predefined roles responsible for the particular security control. The proposed plan will improve overall risk management with quick response times.
 
Article
Problem statement: Modern cryptographic algorithms are based on the complexity of two problems: integer factorization of real integers and the Discrete Logarithm Problem (DLP). Approach: The latter problem is even more complicated in the domain of complex integers, where Public Key Cryptosystems (PKC) have an advantage over analogous encryption-decryption protocols in the arithmetic of real integers modulo p: the former PKC have quadratic cycles of order O(p^2) while the latter PKC have linear cycles of order O(p). Results: An accelerated non-deterministic search algorithm for a primitive root (generator) in a domain of complex integers modulo a triple prime p is provided in this study. It shows the properties of triple primes and the frequency of their occurrence on a specified interval, and analyzes the efficiency of the proposed algorithm. Conclusion: Numerous computer experiments and their analysis indicate that three trials are sufficient on average to find a Gaussian generator.
 
Article
An algorithm for the accelerated acquisition of a minimal representation of super-large numbers is presented. The algorithm considers a new form of arithmetic, called the arithmetic of q-representation of large-range integers, which is based on the numbers of the generalized Fibonacci sequence. Estimates of the complexity of the offered algorithms are presented.
 
Principle of the method of the Bootstrap
Article
Problem Statement: Current algorithms for the recognition and synthesis of Arabic prosody concentrate on identifying the primary stressed syllable of accented words on the basis of fundamental frequency. Generally, the three acoustic parameters used in prosody are fundamental frequency, duration and energy. Approach: In this study, we exploited the acoustic parameter of energy, using classification by discriminant analysis, to detect the primary accented syllables of Standard Arabic words with the structure [CVCVCV] read by four native speakers (two male and two female). Results: We obtained a detection rate of 78% of the accented syllables. Conclusion: These preliminary results need to be tested on larger corpora, but they suggest this could be a useful addition to existing algorithms, with the goal of improving systems for automatic synthesis and recognition of Standard Arabic.
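A sketch of the classification step only: a linear discriminant analysis is trained on energy measurements per syllable and then used to label new syllables as accented or not. The feature layout (one energy vector per syllable), the toy values and the use of scikit-learn are assumptions; the study's own discriminant analysis setup may differ.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X: one row per syllable, e.g. [mean energy, peak energy, energy slope]; y: 1 = accented
X = np.array([[0.62, 0.81, 0.10], [0.35, 0.44, -0.05],
              [0.58, 0.77, 0.12], [0.30, 0.41, -0.02]])
y = np.array([1, 0, 1, 0])

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[0.60, 0.79, 0.08]]))    # predicted accent label for a new syllable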
 
Article
Problem statement: The problem is to find frequently occurring sequential patterns in a Web log file on the basis of a given minimum support. Web usage mining is the application of sequential pattern mining techniques to discover usage patterns from Web data, in order to understand and better serve the needs of Web-based applications; we introduce an efficient strategy for this task. Approach: The approach adopts a divide-and-conquer pattern-growth principle. Our proposed method combines the tree projection and prefix growth features of the pattern-growth category with the position-coded feature of the early-pruning category; all of these features are key characteristics of their respective categories, so we consider the proposed method a pattern-growth, early-pruning hybrid algorithm. Results: The proposed hybrid algorithm eliminates the need to store numerous intermediate WAP-trees during mining. Since only the original tree is stored, it drastically cuts huge memory access costs, which may include disk I/O cost in a virtual memory environment, especially when mining very long sequences with millions of records. Conclusion: Our approach improves efficiency: the proposed method totally eliminates the reconstruction of intermediate WAP-trees during mining and considerably reduces execution time.
 
Article
Secure buildings are currently protected from unauthorized access by a variety of devices. Even though there are many kinds of devices to guarantee system safety, such as PIN pads, conventional and electronic keys, identity cards, and cryptographic and dual-control procedures, a person's voice can also be used. The ability to verify the identity of a speaker by analyzing speech, or speaker verification, is an attractive and relatively unobtrusive means of providing security for admission to an important or secured place. An individual's voice cannot be stolen, lost, forgotten, guessed or impersonated with accuracy. Because of these advantages, this paper describes the design and prototyping of a voice-based door access control system for building security. In the proposed system, access may be authorized simply by an enrolled user speaking into a microphone attached to the system. The proposed system then decides whether to accept or reject the user's identity claim, or possibly reports insufficient confidence and requests additional input before making the decision. Furthermore, an intelligent-system approach is used to develop authorized-person models based on their voices; in particular, an Adaptive-Network-based Fuzzy Inference System is used to distinguish authorized from unauthorized people. Experimental results confirm the effectiveness of the proposed intelligent voice-based door access control system in terms of false acceptance rate and false rejection rate.
 
Article
Problem statement: In a Mobile Ad hoc Network (MANET), both the routing layer and the Medium Access Control (MAC) layer are vulnerable to several attacks. There are very few techniques to detect and isolate attacks on both of these layers simultaneously. In this study, we developed a combined solution for routing and MAC layer attacks. Approach: Our approach makes use of three techniques simultaneously: a cumulative-frequency-based detection technique for MAC layer attacks, a data-forwarding-behavior-based detection technique for packet drops and a message authentication code based technique for packet modification. Results: Our combined solution maintains a reputation value for detecting malicious nodes and isolates them from further network participation until revocation. The approach periodically checks all nodes, including the isolated nodes, at a regular time period λ; a node which recovers from its misbehaving condition is revoked to its normal condition after the time period λ. Conclusion/Recommendations: Simulation results show that our combined solution provides more security through an increased packet delivery ratio and reduced packet drops, with less overhead compared to the existing technique.
 
Article
Problem statement: A major drawback of existing protocols in dealing with energy management issues is that the time-varying nature of the wireless channels among the ad hoc nodes is ignored. Approach: This study proposed a channel-adaptive, energy-efficient Medium Access Control (MAC) protocol for efficient packet scheduling and queuing in an ad hoc network, with the time-varying characteristics of the wireless channel taken into consideration. Every node in the proposed scheme estimates the channel and link quality for each contending flow, from which a weight value is calculated and propagated using the routing protocol. Since a wireless link with worse channel quality can result in more energy expenditure, transmission is allowed only for those flows whose weight is greater than the Channel Quality Threshold (CQT). For flows with weight less than the CQT, the packets are buffered until the channel and link quality recovers or the weight becomes greater than the CQT. To avoid buffer overflow and achieve fairness for the poor-quality nodes, a fair scheduling and queuing algorithm is designed in which the CQT is adaptively adjusted on the basis of the current incoming traffic load. Results: Simulation results showed that the proposed MAC protocol achieves substantial energy savings with better fairness and increased throughput. Conclusion: The designed protocol provides efficient packet scheduling and queuing in an ad hoc network, with the time-varying characteristics of the wireless channel taken into consideration.
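A sketch of the queuing rule just described: each flow carries a weight derived from its channel and link quality, flows whose weight meets the Channel Quality Threshold (CQT) transmit, the rest are buffered, and the CQT is relaxed when the buffered load grows so that poor-quality flows are not starved. The data structures, buffer limit and adaptation factor are assumptions, not the protocol's exact rule.

def schedule(flows, cqt, buffer_limit=50):
    """flows: list of (flow_id, weight, packet). Returns packets to send, packets to buffer, new CQT."""
    send, buffered = [], []
    for flow_id, weight, packet in flows:
        (send if weight >= cqt else buffered).append((flow_id, packet))
    # adapt the threshold: if the buffer is filling up, lower CQT to restore fairness
    if len(buffered) > buffer_limit:
        cqt *= 0.9
    return send, buffered, cqt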
 
Replica consistency architecture  
Average delay time with 1000 sites
In both Fig. 8 and 9, the horizontal axis indicates the access weight range and the vertical axis indicates the average delay time. In both cases, with 500 and with 1000 replicas, it was observed that the updates arrive quickly at the grid sites with most demand, causing these replicas to reach the state of consistency faster. The time it takes for the update message to reach a grid site is the propagation delay associated with that site; the comparison in Fig. 8 and 9 is expressed as the average number of hops needed for the updates to reach a grid site with a given range of access weight. Comparing the group average delay time of the access-weight-based technique and the original technique, the access-weight-based technique sends the updates to the high-access-demand sites faster than the original technique because it gives a high priority to the high-access-weight sites; however, the updates reach the low-access-demand sites more slowly. It can be concluded that the enhanced protocol (AUPG) caused the high-access-weight sites to reach the state of consistency with an average delay of 2, both for 500 and for 1000 replica sites, and still gives a better average delay time for the sites with high access weight up to the 41-50 range. The results of the proposed algorithm are shown with different numbers of replica sites in order to analyze its scalability; as can be seen in Fig. 8 and 9, the algorithm scaled well even when the number of replica sites was increased to 1000. The experiments have provided a performance analysis of the replication protocol. To summarize the results, it can be clearly seen that the protocol reduces the average load compared with the radial propagation method and reduces the update propagation response time compared with the line propagation method. Besides
Article
Replication is a well-known technique to improve reliability and performance in a Data Grid, and keeping the content of all distributed replicas consistent is an important subject. A replica consistency protocol using the classical propagation method called the radial method suffers from high overhead at the master replica site, while the line method suffers from high delay time. In a Data Grid, not all replicas can be treated in the same way, since some will be in greater demand than others; by updating first the replicas in most demand, a greater number of clients can access the updated content in a shorter period of time. In this study, based on an asynchronous aggressive update propagation technique, a scalable replica consistency protocol is developed to maintain replica consistency in a Data Grid, aimed at delay reduction and load balancing, such that the high-access-weight replicas are updated faster than the others. The simulation results show that the proposed protocol is capable of sending the updates to high-access replicas in a short period while reducing the total update propagation response time and achieving load balancing.
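A sketch of the propagation order implied by the abstract: the master pushes the update to replicas in decreasing order of access weight, so the replicas in most demand reach consistency first. A max-heap here stands in for whatever overlay and messaging the protocol actually uses; site names and weights are hypothetical.

import heapq

def propagate(update, replicas):
    """replicas: dict mapping site id -> access weight. Returns sites in update order."""
    heap = [(-weight, site) for site, weight in replicas.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _, site = heapq.heappop(heap)
        order.append(site)            # the update would be sent to `site` at this point
    return order

print(propagate("v2", {"siteA": 12, "siteB": 47, "siteC": 30}))   # ['siteB', 'siteC', 'siteA']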
 
Article
Applications like e-newspapers or interactive online gaming have more than one resource and a large number of users. There is a many-to-many relationship between users and resources; each user can access multiple resources and multiple users can access each resource. The resources are independent and each resource needs to be encrypted by a different Resource Encryption Key (REK). Each REK needs to be distributed to all subscribers of the resource and each subscriber must get all the REKs he/she subscribes to. This environment is also very dynamic in terms of subscription changes by users and resource changes by service providers. We term this the problem of key management for Differential Access Control (DIF-AC) in dynamic environments. Conventional means of access control are not sufficient for this DIF-AC key management problem. In this study we propose a novel approach to key management to enforce DIF-AC in highly dynamic environments, based on the Secure Group Communication framework.
 
Example of deployment of ad hoc networks in underground mines  
Article
Problem statement: Medium Access Control (MAC) protocols are one of the most important research issues in communication and networking. Several MAC protocols have been proposed for wireless as well as wired networks. Approach: The ALOHA-based protocol is the simplest one, able to provide prompt access, reliable channels and support for quality of service. However, its limited capacity, low throughput and excessive delays make it unsuitable for several applications. Results: Many efforts have therefore been devoted to increasing its performance, one of which is the capture effect. With the capture effect, the packet arriving with the highest power has a good chance of being detected accurately, even when other packets are present. Conclusion: In this study, a throughput-delay trade-off was investigated to improve network performance. The capture threshold was assigned to the highest throughput/time-delay ratio; therefore, the capture threshold which provides a high throughput and low time delay is selected by a central node.
 
Article
Problem statement: In this study, a distributed power control algorithm is proposed for a Dynamic Frequency Hopping Optical-CDMA (DFH-OCDMA) system. Approach: In general, DFH-OCDMA can support a higher number of simultaneous users compared to other OCDMA techniques. However, the performance of such a system degrades significantly when the received power falls below its minimum threshold. Results: This may occur in a DFH-OCDMA network with a near-far problem, in which the users have different fiber lengths, resulting in unequal power attenuation. The power misdistribution among simultaneously active users at the star coupler degrades the Bit Error Rate (BER) performance for users transmitting over longer fiber lengths. In order to solve these problems, we propose an adaptive distributed power control technique for DFH-OCDMA to satisfy the target Signal to Noise Ratio (SNR) for all users. Conclusion: Taking into account the noise effects of Multiple Access Interference (MAI), Phase Induced Intensity Noise (PIIN) and shot noise, the system can support 100% of users with power control, compared to 33% without power control, when the initial transmitted power is -1 dBm with 30 simultaneous users.
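Distributed power control toward a target SNR is commonly written in the classic proportional form sketched below: each user independently scales its transmit power by the ratio of the target SNR to its measured SNR. This is a generic illustration, not the paper's DFH-OCDMA formulation; the power limits and numerical values are invented.

def update_power(p_tx, snr_measured, snr_target, p_min=1e-6, p_max=1e-3):
    """One distributed power-control step: p_new = p_old * (SNR_target / SNR_measured)."""
    p_new = p_tx * (snr_target / snr_measured)
    return min(p_max, max(p_min, p_new))

# each user iterates independently until its measured SNR converges to the target
p = 1e-4
for snr in (8.0, 9.5, 9.9):            # measured SNRs over successive iterations (target 10)
    p = update_power(p, snr, snr_target=10.0)
    print(p)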
 
Normalized control overhead
Connectivity maintenance  
Article
Internet-based Mobile Ad hoc Networking (MANET) is an emerging technology that supports self-organizing mobile networking infrastructures and is expected to be of great use in commercial applications for the next generation of Internet users. A number of technical challenges are faced today due to the heterogeneous, dynamic nature of this hybrid MANET. A new hybrid routing scheme, AODV_ALMA, is proposed, which simultaneously combines mobile agents, used to find a path to the gateway and establish a connection with an Internet host, with an on-demand distance vector approach used to find paths within the local MANET. An adaptive gateway discovery mechanism based on mobile agents, making use of pheromone value, pheromone decay time and balance index, is used to estimate the path and next hop to the gateway. The mobile nodes automatically configure their addresses using mobile agents, first selecting the gateway and then using the gateway prefix address. The mobile agents are also used to track changes in topology, enabling high network connectivity with reduced delay in packet transmission to the Internet. The performance trade-offs and limitations with respect to existing solutions are evaluated for various mobility conditions using simulation.
 
Article
This study proposes a technique for accessing applications from multiple file servers in a computer network that caters for a large user community. The technique is a step towards creating a computer network system that is simple to implement, scalable, easy to manage, transparent to the user and suitable for a university-type environment. The proposed technique has some features of a distributed system, but is not as complex. It is built around a very popular commercial network operating system called NetWare.
 
Article
Problem Statement: Kirkpatrick's model for the evaluation of training programs has been a staple of institutional learning since 1959 and is easily applicable to any training program. Approach: This model, however, was developed for traditional learning environments and has come to be regarded as antiquated, especially when one takes into consideration the fact that institutional learning has increasingly taken the form of e-learning. This study proposed an adaptation of Kirkpatrick's model which accommodates the nuances of the e-learning environment, using a tri-stage mode of evaluation. Results: The three stages were interaction, learning and results. The interaction stage takes into consideration the special challenges posed by the environment, while the learning and results stages examine the alignment between the curriculum and the needs of an organization. Conclusions/Recommendations: The research conducted supported the thesis that existing training models fail to accommodate e-learning environments and, in establishing important guidelines and criteria for remediation, addressed the initial concern. The proposed evaluation method is rudimentary in nature and holds great promise for practical application.
 
Model structure 
Supporting layer connection characteristics 
Training set organization 
The training patterns for the second stage 
Article
Problem statement: Enhancement of the generalization feature of neural networks, especially of the feed-forward structural model, has made limited progress. The major reason behind this limitation is attributed to the principal definition of generalization and the inability to interpret it into a convenient structure. Traditional schemes have unfortunately regarded generalization as an innate outcome of the simple association referred to by Pavlov and modeled by Piaget as the basis of assimilating conduct. Approach: A new generalization approach based on the addition of a supportive layer to the traditional neural network scheme (the atomic scheme) is presented. This approach extends the signal propagation of the whole net in order to generate the output in two modes: one deals with the required output of trained patterns with predefined settings, while the other tolerates output generation dynamically, with tuning capability, for any newly applied input. Results: Experiments and analysis showed that the new approach is not only simpler and easier, but also very effective, as the proportion promoting the generalization ability of the neural networks reached over 90% in some cases. Conclusion: Expanding the neuron as the essential construction for generalization denotes accommodating capabilities involving all the innate structures, in conjunction with intelligence abilities and with the needs of further advanced learning phases. Cogent results were attained in comparison with those of the traditional schemes.
 
Article
Problem statement: This study proposed a robust algorithm named the Backward Recovery Preemptive Utility Accrual Scheduling (BRPUAS) algorithm, which implements the Backward Recovery (BR) mechanism as a fault recovery solution in the existing utility accrual scheduling environment. The problem identified in the TUF/UA scheduling domain is that the existing algorithms only consider Abortion Recovery (AR) as their fault recovery solution, in which all faulty tasks are simply aborted to nullify the erroneous effect. The decision to immediately abort the affected tasks is inefficient because aborted tasks produce zero utility, causing the system to accrue lower utility. Approach: The proposed BRPUAS algorithm enables re-execution of the affected tasks rather than abortion, reducing the number of aborted tasks compared with the existing algorithm, known as the Abortion Recovery Preemptive Utility Accrual Scheduling (ARPUAS) algorithm, that employs the AR mechanism. BRPUAS ensures the correctness of the executed tasks on a best-effort basis, in such a way that infeasible tasks are aborted and produce zero utility, while feasible tasks are re-executed to produce positive utility, consequently maximizing the total utility accrued by the system. The performance of these algorithms was measured using discrete event simulation. Results: The proposed BRPUAS algorithm achieved higher accrued utility compared to ARPUAS over the entire load range. Conclusion: Simulation results revealed that the BR mechanism is more efficient than the existing AR mechanism, producing a higher accrued utility ratio and a lower abortion ratio, making it more reliable and efficient for the adaptive real-time application domain.
 
Simulation model  
AUR Vs average loads  
SR Vs average load  
AR Vs average load  
Article
Problem statement: This study proposed two utility accrual real-time scheduling algorithms named the Preemptive Utility Accrual Scheduling (PUAS) and Non-preemptive Utility Accrual Scheduling (NUAS) algorithms. These algorithms address the unnecessary abortion problem identified in the existing algorithm known as General Utility Scheduling (GUS). GUS is inefficient for the independent task model because it simply aborts any task currently executing with a resource at lower utility when a new task with higher utility requests the resource. The scheduling optimality criterion is the maximization of the utility accrued from the execution of all tasks in the system, referred to as Utility Accrual (UA) scheduling. UA scheduling algorithms are designed for adaptive real-time system environments in which deadline misses are tolerable and do not have severe consequences for the system. Approach: We eliminated the scheduling decision to abort a task in GUS and proposed to preempt a task instead of aborting it if the task is preemptable. We compared the performance of these algorithms using discrete event simulation. Results: The proposed PUAS algorithm achieved the highest accrued utility over the entire load range, followed by the NUAS and GUS algorithms. Conclusion: Simulation results revealed that the proposed algorithms are more efficient than the existing algorithm, producing a higher accrued utility ratio and a lower abortion ratio, making them more suitable and efficient for the real-time application domain.
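A minimal sketch of the scheduling decision described: among ready tasks the scheduler runs the one with the highest utility density, and under the PUAS rule a lower-utility task holding the processor is preempted rather than aborted when a denser task arrives. The task representation and the utility-density tie-break are assumptions, not the published algorithm.

def pick_next(ready_tasks):
    """ready_tasks: list of dicts with 'utility' and 'remaining' execution time.
    Returns the task with the highest utility density (utility per remaining time)."""
    return max(ready_tasks, key=lambda t: t["utility"] / t["remaining"])

def on_arrival(running, new_task, ready_queue):
    """PUAS-style rule: preempt (do not abort) the running task if the newcomer is denser."""
    def density(t):
        return t["utility"] / t["remaining"]
    if running is not None and density(new_task) > density(running):
        ready_queue.append(running)       # preempted task is kept for later, not aborted
        return new_task
    ready_queue.append(new_task)
    return running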
 
Article
Problem statement: The precision and reliability of effort estimation is very important for the competitiveness of software companies. Uncertainty at the input level of the Constructive Cost Model (COCOMO) yields uncertainty at the output, which leads to gross error in the effort estimate. Fuzzy logic-based cost estimation models are more appropriate when vague and imprecise information is to be accounted for, and fuzzy logic was used in this research to improve effort estimation accuracy. This study proposed to extend COCOMO by incorporating the concept of fuzziness into the measurement of size. The main objective of this research was to investigate the role of size in improving effort estimation accuracy by characterizing the size of the project using a trapezoidal function, which gives a superior transition from one interval to another. Approach: The methodology adopted in this study was the use of fuzzy sets rather than classical intervals in COCOMO. Using fuzzy sets, the size of a software project can be specified by a distribution of its possible values, and these fuzzy sets are represented by membership functions. Triangular membership functions (TAMF) have been used in the literature to represent size, but they are not appropriate for clearing up the vagueness in project size. Therefore, to get a smoother transition in the membership function, the size of the project and its associated linguistic values were represented by
 
The traditional model 
The proposed model 
Accuracy comparison of TMAR and PMAR 
Coverage comparison of TMAR and PMAR 
Article
Problem statement: Noise within datasets has to be dealt with under most circumstances. This noise includes misclassified data as well as missing data; simple human error is a typical source of misclassification. These errors decrease the accuracy of a data mining system, making it unlikely to be used. The objective was to propose an effective algorithm to deal with noise, represented here by missing data in datasets. Approach: A model for improving the accuracy and coverage of data mining systems was proposed and the algorithm of this model was constructed. The algorithm deals with missing values in datasets by splitting the original dataset into two new datasets: one containing tuples that have no missing values and the other containing tuples that have missing values. The proposed algorithm is applied to each of the two new datasets; it finds the reduct of each of them and then merges the new reducts into one new dataset which is ready for training. Results: The results were interesting, as the model increased the accuracy and coverage of the tested dataset compared to the traditional models. Conclusion: The proposed algorithm performs effectively and generates better results than the previous ones.
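A sketch of the data-handling step the algorithm starts from: the dataset is split into complete tuples and tuples with missing values, each part is reduced independently, and the two reduced sets are merged back into one training set. The reduct computation itself is not given in the abstract, so a pass-through placeholder stands in for it here; the record representation is an assumption.

def split_by_missing(rows):
    """Split a list of records (dicts) into complete rows and rows with missing values."""
    complete = [r for r in rows if all(v is not None for v in r.values())]
    incomplete = [r for r in rows if any(v is None for v in r.values())]
    return complete, incomplete

def reduct(rows):
    """Placeholder for the reduct of a sub-dataset (returned unchanged in this sketch)."""
    return rows

def prepare_training_set(rows):
    complete, incomplete = split_by_missing(rows)
    return reduct(complete) + reduct(incomplete)   # merged set, ready for training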
 
Article
Problem statement: Iris segmentation is one of the most important steps in an iris recognition system and determines the accuracy of matching. Most segmentation methods in the literature assume that the inner and outer boundaries of the iris are circular; hence, they focus on determining model parameters that best fit this hypothesis. This is a source of error, since the iris boundaries are not exactly circles. Approach: In this study we propose an accurate iris segmentation method that employs the Chan-Vese active contour method to extract the iris from its surrounding structures. Results: The proposed method was implemented and tested on the challenging UBIRIS database; the results indicate the efficacy of the proposed method. Conclusion: The experimental results show that the proposed method localizes the iris area properly even when the eyelids occlude part of the iris.
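The segmentation step can be prototyped with an existing Chan-Vese active contour implementation, for instance the one in scikit-image; the sketch below runs it on a grayscale eye image and keeps the resulting region mask as the candidate iris area. The image file name is hypothetical, and the pre- and post-processing the paper performs (pupil removal, eyelid handling) is omitted.

from skimage import io, img_as_float
from skimage.color import rgb2gray
from skimage.segmentation import chan_vese

eye = img_as_float(rgb2gray(io.imread("eye.png")))   # hypothetical UBIRIS-style RGB eye image
mask = chan_vese(eye, mu=0.25)                        # Chan-Vese active contour, default settings
print(mask.shape, mask.sum())                         # binary mask of the segmented region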
 
Verification of performance using ROC curves  
Article
Problem statement: A biometric system provides automatic identification of an individual based on a unique feature or characteristic possessed by the individual. Iris recognition is regarded as the most reliable and accurate biometric identification system available. Approach: Most commercial iris recognition systems use patented algorithms developed by Daugman, and these algorithms are able to produce perfect recognition rates. However, published results have usually been produced under favorable conditions and there have been no independent trials of the technology. Results: In this study, after a brief picture of the development of various techniques for iris recognition, Hamming distance coupled with neural network based iris recognition techniques are discussed. Perfect recognition on a set of 150 eye images has been achieved through this approach; further tests on another set of 801 images resulted in false accept and false reject rates of 0.0005 and 0.187% respectively, demonstrating the reliability and accuracy of the biometric technology. Conclusion/Recommendations: This study provided results of iris recognition performed using Hamming distance, feed-forward back propagation, cascade-forward back propagation, Elman back propagation and perceptron networks. It has been established that the method using the perceptron provides the best iris recognition accuracy with no major additional computational complexity.
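The Hamming-distance matcher referred to above reduces to counting disagreeing bits between two binary iris codes, optionally restricted by noise masks, with a normalized distance below a chosen threshold declaring a match. The code length, mask handling and the 0.32 threshold in the example are assumptions, not the study's exact settings.

import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Normalized Hamming distance between two binary iris codes, ignoring masked-out bits."""
    a, b = np.asarray(code_a, bool), np.asarray(code_b, bool)
    if mask_a is None or mask_b is None:
        valid = np.ones(a.shape, dtype=bool)
    else:
        valid = np.asarray(mask_a, bool) & np.asarray(mask_b, bool)
    disagreements = np.logical_xor(a, b) & valid
    return disagreements.sum() / valid.sum()

rng = np.random.default_rng(1)
code = rng.integers(0, 2, 2048)
noisy = code.copy()
noisy[:200] ^= 1                                          # same eye, 200 flipped bits
print(hamming_distance(code, noisy) < 0.32)               # typical accept threshold -> True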
 