Research Items (49)
- Apr 2018
- 2018 1st International Conference on Computer Applications & Information Security (ICCAIS)
- Apr 2018
- 2018 1st International Conference on Computer Applications & Information Security (ICCAIS)
Abstract—A blind signature is similar to a digital signature except that the message is signed by a signer without knowing its content. It is one of the most widely used cryptographic techniques in e-voting systems (EVS) that guarantee the anonymity of voters. In this paper, we first analyze a recently introduced blind signature scheme and show that an attacker can forge a legitimate signature for any desired message without obtaining the signer's private key; in other words, Mohsen et al.'s blind signature scheme is universally forgeable. We then present a new blind signature scheme based on the discrete logarithm problem (DLP) and a modified ElGamal digital signature. The proposed scheme meets all the properties of a blind signature: correctness, blindness, unforgeability, and untraceability. It is therefore well suited to EVS for ensuring voter anonymity, that is, for removing the voter's identity from the cast ballot.
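The ElGamal signature at the heart of such schemes rests on the DLP. As a minimal sketch (plain ElGamal signing over toy parameters, not the authors' blinded protocol; the blinding factors that hide the message from the signer are omitted):

```python
import hashlib

# Toy parameters for illustration only -- a real scheme uses large safe primes.
P = 467           # prime modulus
G = 2             # generator
X = 127           # signer's private key
Y = pow(G, X, P)  # signer's public key y = g^x mod p

def h(message: bytes) -> int:
    """Hash the message to an integer modulo p-1."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % (P - 1)

def sign(message: bytes, k: int):
    """Plain ElGamal signature: r = g^k, s = (H(m) - x*r) * k^{-1} mod (p-1)."""
    r = pow(G, k, P)
    s = ((h(message) - X * r) * pow(k, -1, P - 1)) % (P - 1)
    return r, s

def verify(message: bytes, r: int, s: int) -> bool:
    """Accept iff g^H(m) == y^r * r^s (mod p)."""
    return pow(G, h(message), P) == (pow(Y, r, P) * pow(r, s, P)) % P

r, s = sign(b"ballot", 213)  # k must be coprime with p-1
```

Forging a valid (r, s) pair without X requires solving the DLP; the blinded variant additionally multiplies r by blinding factors so the signer never sees the ballot.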
This paper presents an approach for facial age estimation that classifies face images into predefined age groups. The task is difficult because aging differs from person to person: factors such as exposure, climate, gender, and lifestyle all come into play. While some aging trends are shared within an age group, distinguishing the aging characteristics of each group remains problematic. This paper concentrates on estimation over four chosen age groups. We employ deep learning, a fast and effective machine learning method, to solve the age categorization problem. Principal component analysis (PCA) is used to extract features and reduce the dimensionality of the face images. Age estimation is applied to three different aging datasets derived from MORPH, and experimental results are reported to validate the approach's efficiency and robustness. The results show that the proposed approach achieves higher classification accuracy than support vector machines (SVM) and k-nearest neighbors (k-NN).
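PCA-based feature reduction, as used above, projects feature vectors onto the directions of greatest variance. A minimal sketch on 2-D toy points (the closed-form 2x2 eigen-decomposition is an illustrative simplification; real face images require the full covariance machinery):

```python
import math

def pca_first_component(points):
    """First principal component of 2-D points via the closed-form
    eigen-decomposition of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    a = sum((p[0] - mx) ** 2 for p in points) / n           # var(x)
    c = sum((p[1] - my) ** 2 for p in points) / n           # var(y)
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / n  # cov(x, y)
    # Largest eigenvalue of [[a, b], [b, c]]
    lam = (a + c + math.sqrt((a - c) ** 2 + 4 * b * b)) / 2
    # Corresponding eigenvector: (a - lam) v1 + b v2 = 0
    v = (b, lam - a) if b != 0 else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    norm = math.hypot(*v)
    return (v[0] / norm, v[1] / norm), lam

# Points lying roughly along y = x: the first component is near (1,1)/sqrt(2)
direction, variance = pca_first_component([(0, 0), (1, 1), (2, 2), (3, 3.2)])
```

Keeping only the top few components before classification is what shrinks a face image to a compact feature vector.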
A Wireless Sensor Network (WSN) is composed of a number of nodes, each containing sensing devices that collect data from the environment. These nodes are equipped with a processing unit to perform operations on the data. Because nodes are deployed randomly in remote environments, battery charging or replacement is not practical. Clustering, in which sensor nodes are grouped into separate clusters, has proven to be one of the most effective techniques for reducing the energy consumption of a WSN. WSNs have attracted considerable attention for their potential use in environmental monitoring, healthcare, military surveillance, home applications, and many other areas. Sensor network design is influenced by factors such as scalability, energy consumption, and the environment, and depends on the application. Of the three activities (sensing, processing, and communication), most energy is spent on communication, so energy conservation is a dominant factor in wireless sensor networks. Routing strategy selection is very important for the proper delivery of packets, and ongoing research aims at extending network lifetime by designing protocols that require less energy during communication. This paper provides a survey of energy-efficient routing in wireless sensor networks.
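Clustering protocols of the kind such surveys cover typically rotate the energy-hungry cluster-head role among nodes. Taking LEACH as a representative example (named here for illustration; the abstract does not single out a protocol), each node elects itself head with a probability given by a round-dependent threshold:

```python
def leach_threshold(p, r, was_head_recently):
    """LEACH cluster-head election threshold T(n) for round r:
    T(n) = p / (1 - p * (r mod 1/p)) for eligible nodes, else 0.
    p is the desired fraction of heads; nodes that served as head
    in the last 1/p rounds are ineligible."""
    if was_head_recently:
        return 0.0
    return p / (1 - p * (r % int(1 / p)))

# Deterministic example: pseudo-random draws for four nodes in round 0.
# A node becomes cluster head when its draw falls below the threshold.
draws = [0.05, 0.42, 0.88, 0.09]
heads = [i for i, d in enumerate(draws) if d < leach_threshold(0.1, 0, False)]
```

The threshold grows within each cycle (reaching 1.0 in the last eligible round), so every node eventually serves as head and the radio-energy burden is spread evenly.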
Breast cancer is a harmful disease that has caused the deaths of millions of women. The huge number of publications on breast cancer research offers a good source of information, but identifying breast cancer biomarkers in it is not a trivial task. Approaches such as text mining, machine learning, and data mining are used to identify and extract the needed information efficiently from structured and unstructured text and to uncover relationships and hidden rules in this large body of information. This paper reviews some of the research literature on breast cancer that uses these approaches.
- Dec 2017
Recently, automatic age estimation has gained increasing interest from the research community because of its relevance to applications in fields such as law enforcement, security control, and human-computer interaction. Despite recent advances, it remains a challenging problem due to incomplete aging patterns, its multi-class nature, personalized aging patterns, and disturbances. We consider the problem of automatic age estimation from face images, usually formulated as a multi-class classification problem relating facial features to a specific age group. In this work, an automatic facial age estimation system based on a deep belief network is proposed. The approach includes four steps: image preprocessing, feature extraction, feature reduction, and deep belief network classification into one of four age groups. The performance of the proposed approach is evaluated on a standard dataset, and the experimental results are promising.
- Oct 2017
A sign language recognition system (SLRS) is a human-computer interaction (HCI) application that converts the signs of hearing-impaired people into the text or voice of an oral language. This paper presents an automatic visual SLRS that translates isolated Arabic word signs into text. The proposed system has four main stages: hand segmentation, tracking, feature extraction, and classification. A dynamic skin detector based on the face's color tone is used for hand segmentation. A proposed skin-blob tracking technique then identifies and tracks the hands. A dataset of 30 isolated words used in the daily school life of hearing-impaired children was developed for evaluating the system, taking into consideration that 83% of the words have different occlusion states. Experimental results indicate a recognition rate of 97% in signer-independent mode. In addition, the proposed occlusion-resolving technique outperforms other methods by accurately specifying the positions of the hands and the head, with an improvement of 2.57% at τ = 5, which aids in differentiating between similar gestures.
Cloud computing is one of the most important recent innovations in information technology: it underpins large-scale data storage, processing, and distribution. At the same time, the main concerns of any data owner are the security and privacy of data, especially when private data is outsourced to a public cloud server that is not a fully trusted domain. To avoid any leakage or disclosure of information, important or confidential data is encrypted before being uploaded to the server, but this creates an obstacle to supporting efficient keyword queries with ranked matching results over the encrypted data. Recent research in this area has focused on single-keyword queries without a proper ranking scheme. In this paper, we propose a new model, the Secure Model for Preserving Privacy over Encrypted Cloud Computing (SPEC), that improves cloud computing performance and safeguards data privacy. We compare it with previous work in terms of accuracy, privacy, security, key generation, storage capacity, trapdoor and index generation, index encryption, index update, and file retrieval based on access frequency.
- May 2016
- the 10th International Conference on Informatics and Systems INFOS'16
Content-based Image Retrieval (CBIR) refers to searching for digital images by analyzing their content rather than their metadata. A CBIR system retrieves images via low-level features such as color, texture, and shape. In this work, we propose an improved CBIR system that retrieves images from a database based on their semantic features. The methodology divides each image into regions, extracts low-level features from each region, and labels each region with a suitable concept (sky, sand, water, trunks, foliage, rocks, ..., grass). The reported results reflect the efficiency of the system, which retrieves images with a recognition ratio of up to 98%.
The analysis of human activities is one of the most interesting and important open issues for the automated video surveillance community. In order to understand the behaviors of humans, a higher level of understanding is required, which is generally referred to as activity recognition. While traditional approaches rely on 2D data like images or videos, the development of low-cost depth sensors created new opportunities to advance the field. In this paper, a system to recognize human activities using 3D skeleton joints recovered from 3D depth data of RGB-D cameras is proposed. A low dimensional descriptor is constructed for activity recognition based on skeleton joints. The proposed system focuses on recognizing human activities not human actions. Human activities take place over different time scales and consist of a sequence of sub-activities (referred to as actions). The proposed system recognizes learned activities via trained Hidden Markov Models (HMMs). Experimental results on two human activity recognition benchmarks show that the proposed recognition system outperforms various state-of-the-art skeleton-based human activity recognition techniques.
As road networks become more congested, traffic surveillance using computer vision techniques is increasingly important. Traffic surveillance can help improve road network efficiency, reroute traffic when accidents occur, and minimize delays. Although many algorithms have been developed to detect and track moving vehicles in daytime, only a handful of techniques have been proposed for nighttime traffic scenes. At night, moving vehicles are commonly identified by detecting and locating their headlights and taillights. This paper proposes an effective method for detecting and tracking moving vehicles at night. The proposed method identifies vehicles by detecting and locating vehicle lights using automatic thresholding and connected-components extraction. Detected lamps are then paired using a rule-based component analysis approach and tracked using a Kalman Filter (KF). The automatic thresholding approach provides a robust and adaptable detection process that operates well under various nighttime illumination conditions. Furthermore, while most nighttime tracking algorithms detect vehicles by locating either headlights or rear lights, the proposed method can track vehicles through detecting headlights and/or rear lights. Several experiments demonstrate the feasibility and effectiveness of the proposed method in various nighttime environments.
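The Kalman filter used for lamp tracking maintains a position/velocity state per vehicle light. A minimal 1-D constant-velocity sketch, with the matrices unrolled by hand (the paper's actual state vector and noise settings are not specified here, so q and r below are illustrative):

```python
def kalman_cv(zs, dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter over scalar position measurements zs.
    State = [position, velocity]; returns the filtered position estimates."""
    x = [zs[0], 0.0]              # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
    out = []
    for z in zs:
        # Predict: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with measurement z of position only (H = [1, 0])
        s = P[0][0] + r
        k = [P[0][0] / s, P[1][0] / s]  # Kalman gain
        y = z - x[0]                    # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        out.append(x[0])
    return out

# A lamp moving at constant velocity: the filter locks on within a few frames.
estimates = kalman_cv([float(i) for i in range(10)])
```

In a tracker, one such filter per paired lamp predicts where the lamp should appear in the next frame, which is what keeps identities stable across frames.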
We present a new approach to face recognition. The method is based on 2D face image features, using a subset of non-correlated and orthogonal Gabor filters instead of the whole Gabor filter bank, and then compressing the output feature vector using Linear Discriminant Analysis (LDA). The face image is enhanced using a multi-stage image processing technique to normalize it and compensate for illumination variation. Experimental results show that the proposed system achieves both effective dimension reduction and good recognition performance compared to the complete Gabor filter bank. The system has been tested on the CASIA, ORL, and cropped YaleB 2D face image databases and achieved an average recognition rate of 98.9%.
Recognizing human activity is one of the important areas of computer vision research today. It plays a vital role in constructing intelligent surveillance systems. Despite the efforts of the past decades, recognizing human activities from videos is still a challenging task. Human activity may take different forms, ranging from simple actions to complex activities. Recently released depth cameras provide effective estimation of the 3D positions of skeletal joints in temporal sequences of depth maps. In this paper, a system for human activity recognition is proposed. We consider the task of obtaining a descriptive labeling of the activities being performed through labeling human sub-activities. The activities we consider happen over a long period and comprise several sub-activities performed in sequence. The proposed activity descriptor allows the activity recognition problem to be viewed as a sequence classification problem. The proposed system employs Hidden Markov Models (HMMs) to recognize human activities and is evaluated on two benchmark datasets for daily living activity recognition. Experimental results demonstrate that the proposed system outperforms state-of-the-art methods.
The particle filter has become a standard tool for solving visual tracking problems in real-world applications. One of the critical tasks in object tracking is tracking fast-moving objects in complex environments with cluttered backgrounds and scale change. In this paper, a new tracking algorithm, PFJCTH, is presented that represents a target by a joint color-texture histogram and applies it within a particle filter. The texture features of the object are extracted using the local binary pattern (LBP) technique. The proposed algorithm effectively extracts the edge and corner features in the target region, which characterize the target better and represent it more robustly. Experiments show that the proposed algorithm produces excellent tracking results and outperforms other tracking algorithms.
Traffic surveillance using computer vision techniques is an emerging research area. Many algorithms have been developed to detect and track moving vehicles effectively in daytime; however, little work has been done for nighttime traffic scenes, where vehicles are identified by detecting and locating their headlights and rear lights. In this paper, an effective method for detecting and tracking moving vehicles at night is proposed. The proposed method identifies vehicles by detecting and locating vehicle lights using automatic thresholding and connected-components extraction. Detected lamps are then paired using a rule-based component analysis approach and tracked using a Kalman Filter (KF). The automatic thresholding approach provides a robust and adaptable detection process that operates well under various nighttime illumination conditions. Moreover, while most nighttime tracking algorithms detect vehicles by locating either headlights or rear lights, the proposed method can track vehicles through detecting headlights and/or rear lights. Experimental results demonstrate that the proposed method is feasible and effective for vehicle detection and identification in various nighttime environments.
Traffic surveillance plays a vital role in computer vision and Intelligent Transportation Systems (ITS). Image analysis provides several effective techniques to detect moving objects in images and has therefore been used extensively for traffic monitoring. The problem of detecting and tracking vehicles has recently become an important emerging research area for intelligent transportation systems. Many algorithms have been developed to detect and track moving vehicles either in daytime or at night; in fact, daytime and nighttime vehicle tracking cannot be approached with the same techniques, due to the extremely different illumination conditions. Building an integrated system that handles both daytime and nighttime is still a challenging problem, especially when considering shadows in daytime, dim lighting at night, and real-time processing constraints. In this paper, a vehicle tracking system is developed to deal with both daytime and nighttime vehicle tracking. First, a daytime/nighttime detector is applied to the scene to determine the suitable technique. For daytime videos, shadows are removed from vehicles by applying gamma decoding followed by a thresholding operation and employing an estimated background model of the video sequence. For nighttime videos, headlights and taillights are located and paired to initialize vehicles for the tracking process. The experimental results show that the proposed method can effectively track vehicles in both daytime and nighttime.
- Aug 2014
The eye is a sense organ that can give users better, more natural interaction by observing whether the eyes are open or closed, and it is a rich source of information about our daily life. It is therefore widely used in computer science, especially in human-computer interaction. This paper proposes a new system for detecting eye blinks accurately without any restriction on the background, and the user does not have to wear any sensors or markers. No manual initialization is required. The proposed system works in both online and offline environments and automatically classifies the eye as either open or closed at each video frame. The system was tested with users who wear glasses, and the experiments proved its applicability. It is very easy to configure and use, is totally non-intrusive, and requires only one low-cost web camera and a computer.
Shadow detection and removal has attracted great interest in computer vision, especially in outdoor environments. It is an important task for visual tracking, object recognition, and many other applications. One of the fundamental challenges for accurate tracking is achieving invariance to shadows: two or more separate objects can appear to be connected through shadows. Many algorithms that deal with shadows have been proposed in the literature; however, the problem remains largely unsolved and needs further research effort. This paper proposes a method for removing cast shadows from vehicles in outdoor environments. The proposed method employs the estimated background model of the video sequence and applies gamma decoding followed by a thresholding operation. Experimental results show that the proposed method detects and removes shadows robustly and leads to considerable improvements in multiple-object tracking.
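The gamma-decode-then-threshold idea can be sketched as follows. The intuition is that a cast shadow darkens the background by a roughly constant factor in linear intensity, whereas a vehicle pixel differs arbitrarily. The gamma value, ratio test, and thresholds below are illustrative assumptions, not the paper's tuned values:

```python
def shadow_mask(frame, background, gamma=2.2, lo=0.25, hi=0.9):
    """Mark a pixel as cast shadow when its gamma-decoded intensity is a
    moderately darkened version of the background at the same position.
    frame, background: flat lists of 8-bit intensities in [0, 255]."""
    mask = []
    for f, b in zip(frame, background):
        fd = (f / 255.0) ** gamma  # gamma decoding to linear intensity
        bd = (b / 255.0) ** gamma
        ratio = fd / bd if bd > 0 else 1.0
        # Shadow: darker than background, but not nearly black (an object).
        mask.append(lo < ratio < hi)
    return mask

# Background intensity 200: a shadow pixel (150) matches the ratio band,
# a dark vehicle pixel (30) falls below it, an unchanged pixel (200) above it.
mask = shadow_mask([150, 30, 200], [200, 200, 200])
```

Pixels flagged by the mask are merged back into the background class, so the foreground blob keeps only the vehicle itself.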
Human behavior analysis using visual information in an image or sequence of images has been an active area of research in the computer vision community. Images captured by conventional cameras do not provide suitable information for comprehensive analysis; however, depth sensors have recently made a new type of data available. Most existing work focuses on body part detection and pose estimation, while a growing research area addresses the recognition of human actions based on depth images. In this paper, an efficient method for human action recognition is proposed. Our research makes the following contributions: the proposed method builds an efficient representation of human actions by constructing a feature vector from the human's skeletal information extracted from depth images, and then feeds these feature vectors to a multi-class Support Vector Machine (MSVM) to perform the action classification. The proposed representation is invariant to the scale of the subjects/objects and their orientation to the camera, while maintaining the correlation among different body parts. A number of experiments were performed to evaluate the proposed algorithm. The results reveal that the algorithm is efficient, improves the action recognition process, and is suitable for implementation in real-time behavior analysis.
Robust tracking of non-rigid objects is a challenging task. The particle filter, a powerful tool for visual tracking based on the sequential Monte Carlo framework, has proven very successful for non-linear and non-Gaussian estimation problems. This paper proposes a tracking algorithm based on a particle filter with an optimized likelihood. Color distributions are used because they are robust to partial occlusion and rotation, scale-invariant, and computationally efficient. As the color of an object can vary over time with illumination, the target model is adapted during temporally stable image observations. The particle filter approximates the posterior probability density of the state using samples called particles; here the state is the position of the object, and the weight of each particle is its likelihood. For this likelihood, we calculate the similarity between the color histogram of the tracked object and the region around the position of each particle using the Bhattacharyya distance. To enhance the results, a new parameter multiplies the previous likelihood to increase the particle weights. The tracker proves robust against partial occlusion, full occlusion, and illumination changes. Finally, the mean state of the particles is taken as the estimated position of the object. The correctness and validity of the algorithm are demonstrated through experimental results.
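The Bhattacharyya-based particle likelihood described above can be sketched as follows. Histograms are assumed normalized, and sigma is an illustrative tuning parameter rather than the paper's value:

```python
import math

def bhattacharyya_coefficient(p, q):
    """Similarity between two normalized histograms (1.0 = identical)."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def particle_likelihood(p, q, sigma=0.2):
    """Particle weight from the Bhattacharyya distance d = sqrt(1 - BC):
    a Gaussian kernel on d^2, so closer histograms get larger weights."""
    d2 = 1.0 - bhattacharyya_coefficient(p, q)
    return math.exp(-d2 / (2 * sigma ** 2))

target = [0.25, 0.25, 0.25, 0.25]   # color model of the tracked object
candidate_near = [0.25, 0.25, 0.25, 0.25]  # histogram around a good particle
candidate_far = [1.0, 0.0, 0.0, 0.0]       # histogram around a bad particle
```

Each particle's weight is this likelihood (optionally scaled by the extra parameter the abstract mentions), and the weighted mean of the particle positions gives the state estimate.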
Identifying moving objects in a video scene is a fundamental and critical task in object tracking. However, shadows extracted along with the objects can cause large errors in object localization and recognition. Despite many attempts, the problem remains largely unsolved. Since cast shadows can be as big as the actual objects, their incorrect classification as foreground results in inaccurate detection and decreases tracking performance; an effective method for shadow detection and removal is therefore needed to reduce the effects of incorrect object tracking. In this paper, an efficient method for removing cast shadows from vehicles is proposed. The method applies gamma decoding followed by a thresholding operation and employs the estimated background model of the video sequence. A number of experiments have been performed, and the results reveal that the proposed algorithm is efficient and leads to an improved tracking process.
- Dec 2013
Human-Computer Interaction (HCI) systems are designed for disabled people who are unable to move or control any parts of their bodies except their eyes. The idea behind our proposed system is to detect eye blinks in a video with a high degree of accuracy, providing an alternative input modality that enables people with severe disabilities to communicate with the computer. The system uses the Viola-Jones detector to locate the face and eye regions, and determines the state of the eye (open or closed) in two steps. The first step splits the eye region horizontally into two equal parts and computes the difference between the number of black pixels in the upper part and the number of black pixels in the lower part. The second step, applied only to the upper part of the eye region, computes the ratio of black pixels to white pixels in that part; this step is added to ensure accurate results. The experimental results show that the system detects eye blinks on recorded webcam videos very accurately, without any restriction on the background. The proposed system is very easy to configure and use; it is totally non-intrusive and requires only one low-cost web camera and a computer.
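The two-step open/closed decision can be sketched on a binarized eye patch. The 0.2 ratio threshold below is an illustrative assumption, not the paper's calibrated value:

```python
def eye_state(binary_eye):
    """Classify a binarized eye patch (list of rows, 1 = black pixel).
    Step 1: compare black-pixel counts of the upper and lower halves.
    Step 2: the black-to-white ratio in the upper half confirms the decision.
    An open eye shows the dark iris/pupil concentrated in the upper half."""
    h = len(binary_eye)
    top, bottom = binary_eye[: h // 2], binary_eye[h // 2 :]
    top_black = sum(sum(row) for row in top)
    bottom_black = sum(sum(row) for row in bottom)
    top_pixels = sum(len(row) for row in top)
    ratio = top_black / max(1, top_pixels - top_black)  # black : white
    # 0.2 is an illustrative confirmation threshold.
    if top_black - bottom_black > 0 and ratio > 0.2:
        return "open"
    return "closed"

# Tiny 4x4 patches: dark iris in the upper half vs. a fully closed eye.
open_eye = [[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
closed_eye = [[0] * 4 for _ in range(4)]
```

A blink is then reported whenever the per-frame state switches from open to closed and back within a short window.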
The purpose of this study is to develop an intelligent remote detection and diagnosis system for breast cancer based on cytological images. First, this paper presents a fully automated method for cell nuclei detection and segmentation in breast cytological images. The locations of the cell nuclei in the image were detected with the circular Hough transform. The elimination of false-positive (FP) findings (noisy circles and blood cells) was achieved using Otsu's thresholding method and the fuzzy c-means clustering technique. The segmentation of the nuclei boundaries was accomplished by applying the marker-controlled watershed transform. Next, an intelligent breast cancer classification system was developed: twelve features were presented to several neural network architectures to investigate the most suitable network model for classifying the tumor effectively. Four classification models were used, namely a multilayer perceptron using the back-propagation algorithm, a probabilistic neural network (PNN), learning vector quantization, and a support vector machine (SVM). The classification results were obtained using tenfold cross-validation, and the performance of the networks was compared based on error rate, correct rate, sensitivity, and specificity. Finally, we merged the proposed computer-aided detection and diagnosis system with a telemedicine platform to provide an intelligent remote detection and diagnosis system for breast cancer patients based on a Web service. The proposed system was evaluated using 92 breast cytological images containing 11,502 cell nuclei. Experimental evidence shows that the proposed method gives very effective results even for images with a high degree of blood cells and noisy circles. In addition, two benchmark data sets were evaluated for comparison. The results showed that the predictive ability of the PNN and SVM is stronger than that of the others in all evaluated data sets.
Breast cancer detection and segmentation of cytological images is standard clinical practice for the diagnosis and prognosis of breast cancer. This paper presents a fully automated method for cell nuclei detection and segmentation in breast cytological images. The images are enhanced with histogram stretching and contrast-limited adaptive histogram equalization (CLAHE). The locations of the cell nuclei in the image are detected with the circular Hough transform (CHT) and local maximum filtering. The elimination of false-positive findings (noisy circles and blood cells) is achieved using Otsu's thresholding method and the fuzzy C-means clustering technique. The segmentation of the nuclei boundaries is accomplished by applying the marker-controlled watershed transform to the gradient image, using the nuclei markers extracted in the detection step. The proposed method is evaluated using 92 breast cytological images containing 11,502 cell nuclei. Experimental evidence shows that the proposed method gives very effective results even for images with a high degree of blood cells and noisy circles.
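Otsu's thresholding, used here to reject false positives, picks the gray level that maximizes the between-class variance of the resulting background/foreground split. A minimal histogram-based sketch:

```python
def otsu_threshold(hist):
    """Otsu's method: given a histogram of pixel counts per gray level,
    return the level t maximizing the between-class variance
    w0 * w1 * (m0 - m1)^2 of the split {<= t} vs {> t}."""
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # background pixel count so far
    sum0 = 0.0  # background intensity mass so far
    for t, h in enumerate(hist):
        w0 += h
        sum0 += t * h
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        m0 = sum0 / w0                # background mean
        m1 = (total_sum - sum0) / w1  # foreground mean
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal 8-level histogram: dark nuclei (levels 0-1) vs. bright stroma (6-7).
t = otsu_threshold([10, 8, 0, 0, 0, 0, 9, 12])
```

Applied to the Hough-detected candidate regions, the threshold separates genuinely dark nuclei from bright noisy circles without any manually tuned cutoff.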
With the large number of surveillance cameras now in operation, both in public places and in commercial centers, significant research effort has been invested in attempts to automate surveillance video analysis. The goal of visual surveillance is not only to put cameras in the place of human eyes, but also to accomplish the entire surveillance task as automatically as possible. Recently, the problem of analyzing behavior in videos has been the focus of several researchers' efforts. They concentrate on developing intelligent visual surveillance systems to replace traditional passive video surveillance systems, which can only store surveillance videos but are not able to identify or describe interesting activities. In this paper, we give a survey of behavior analysis work in video surveillance and compare the performance of state-of-the-art algorithms on different datasets. Furthermore, useful datasets are analyzed in order to provide help for initiating research projects.
Particle filters (PFs) are widely used where the system is nonlinear and non-Gaussian. Choosing the importance proposal distribution is a key issue in solving nonlinear filtering problems, and practical object tracking problems encourage researchers to design better proposal distributions to gain better performance. In this correspondence, a new algorithm, the hybrid iterated Kalman particle filter (HIKPF), is proposed. It combines the unscented Kalman filter (UKF) and the iterated extended Kalman filter (IEKF) to generate the proposal distribution, which makes efficient use of the latest observations and yields a closer approximation of the posterior probability density. Compared with previously suggested methods (e.g. PF, PF-EKF, PF-UKF, PF-IEKF), the proposed method shows better performance and tracking accuracy. The correctness and validity of the algorithm are demonstrated through numerical simulations and experimental results.
The purpose of this paper is to develop an intelligent diagnosis system for breast cancer classification. Artificial neural networks and support vector machines were developed to classify benign and malignant breast tumors in fine needle aspiration cytology (FNAC). First, features were extracted from 92 FNAC images. These features were then presented to several neural network architectures to investigate the most suitable model for classifying the tumors effectively. Four classification models were used, namely a multilayer perceptron (MLP) using the back-propagation algorithm, probabilistic neural networks (PNN), learning vector quantization (LVQ), and a support vector machine (SVM). Classification results were obtained using 10-fold cross-validation, and the performance of the networks was compared based on error rate, correct rate, sensitivity, and specificity. The method was evaluated using six datasets: four related to our work and two other benchmark datasets for comparison. The probabilistic neural network proved to be the optimum model for classifying breast cancer cells, followed in order by the support vector machine, learning vector quantization, and the multilayer perceptron. The results showed that the predictive ability of the probabilistic neural network and the support vector machine is stronger than that of the others in all evaluated datasets.
This paper proposes a new region-based image retrieval technique called Principal Regions Image Retrieval (PRIR). The technique starts by segmenting an image into its most general principal regions and applies a fuzzy feature histogram to describe the color and texture properties of each segmented region. It then generates a nearest-neighbor graph for the segmented regions and applies a greedy graph matching algorithm with a modified scoring function to determine the image rank. The proposed segmentation approach provides a significant speedup over state-of-the-art techniques while keeping accurate precision. Moreover, the approach combines local and global description to improve the retrieval results. Standard image databases are used to evaluate the performance of the proposed system. Results show that the proposed approach enhances retrieval accuracy compared to other approaches reported in the literature.
- May 2012
- Informatics and Systems (INFOS), 2012 8th International Conference on
Over the last ten years, skin detection has been a milestone in many computer vision applications, yet there is still no robust skin detector: the varying degrees of skin tone color are the obstacle facing the skin detection process. This paper proposes an adaptive skin modeling and detection technique based on the face's skin tone color. The face is a good indicator of the characteristics of skin tone color, since it carries significant information about skin color. Skin modeling aims to derive adaptive margins for the skin detector; these margins are obtained by applying an online dynamic threshold to the pixels gathered around the major and minor axes of the bounding rectangle of the detected face. Experimental results show that the proposed method has promising results compared to state-of-the-art skin detection methods.
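One common way to build face-adapted skin margins is to sample pixels from the detected face and derive chrominance bounds from them. The sketch below uses simple min/max margins in BT.601 Cb/Cr space as a stand-in for the paper's online dynamic threshold over the face rectangle's axes; the margin value is an illustrative assumption:

```python
def rgb_to_cbcr(r, g, b):
    """Chrominance components of the ITU-R BT.601 YCbCr transform."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def skin_bounds_from_face(face_pixels, margin=10.0):
    """Derive adaptive Cb/Cr ranges from RGB pixels sampled on the face."""
    cbcr = [rgb_to_cbcr(*p) for p in face_pixels]
    cbs = [c[0] for c in cbcr]
    crs = [c[1] for c in cbcr]
    return (min(cbs) - margin, max(cbs) + margin,
            min(crs) - margin, max(crs) + margin)

def is_skin(pixel, bounds):
    """Classify an RGB pixel against the face-adapted chrominance bounds."""
    cb, cr = rgb_to_cbcr(*pixel)
    cb_lo, cb_hi, cr_lo, cr_hi = bounds
    return cb_lo <= cb <= cb_hi and cr_lo <= cr <= cr_hi

# Sample two face pixels, then test a hand pixel against the learned bounds.
bounds = skin_bounds_from_face([(224, 172, 150), (200, 150, 130)])
```

Because the bounds come from the person's own face, the detector adapts per subject instead of relying on one fixed skin-color range.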
Due to the expansion of high-speed Internet access, the need for secure and reliable networks has become more critical. The sophistication of network attacks, as well as their severity, has also increased recently, and more and more organizations are becoming vulnerable to attack. The aim of this research is to classify network attacks using neural networks (NN), leading to a higher detection rate and a lower false alarm rate in a shorter time. This paper focuses on two classification types: single class (normal or attack) and multi class (normal, DoS, PRB, R2L, U2R), where the category of attack is also detected by the NN. Extensive analysis is conducted to assess the translation of symbolic data, the partitioning of the training data and the complexity of the architecture. This paper investigates two engines: a back-propagation neural network intrusion detection system (BPNNIDS) and a radial basis function neural network intrusion detection system (RBFNNIDS). The two engines are tested against traditional and other machine learning algorithms using a common dataset: the DARPA 98 KDD99 benchmark dataset from the International Knowledge Discovery and Data Mining Tools competition. BPNNIDS shows a superior response compared to the other techniques reported in the literature, especially in terms of response time, detection rate and false positive rate.
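A small sketch of the "translation of symbolic data" step the analysis covers: KDD99 records mix numeric fields with symbolic ones (protocol, service, flag), and a neural network needs them as numbers. One-hot encoding is a common choice; the protocol values below are real KDD99 categories, but the encoder itself is a generic illustration, not the paper's exact preprocessing.

```python
# Hedged sketch: one-hot translation of a symbolic KDD99 field.

def one_hot(value, vocabulary):
    """Map a symbolic value to a 0/1 vector over a fixed vocabulary."""
    return [1.0 if value == v else 0.0 for v in vocabulary]

PROTOCOLS = ["tcp", "udp", "icmp"]  # KDD99 protocol_type categories
record_protocol = "udp"
encoded = one_hot(record_protocol, PROTOCOLS)
```

The encoded vectors for all symbolic fields are concatenated with the numeric fields to form the NN's input layer.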
Though Arabic is a widely spoken language, research in the area of Arabic speech recognition is limited compared to other similar languages. Moreover, while the accuracy of speaker-dependent speech recognizers has nearly reached 100%, the accuracy of speaker-independent speech recognition systems is still relatively poor. This paper concerns the recognition of speaker-independent Arabic speech using a Support Vector Machine. The proposed model is applied to connected Arabic digits (numbers), using Neural Networks as an example; the system can also be applied to any other domain. Spoken digit recognition is needed in many applications that take numbers as input, such as telephone dialing by speech, airline reservation, and automatic directories that retrieve or send information. This has been realized by first building a corpus consisting of 1000 numbers comprising 10000 digits, recorded in a noisy environment by 20 speakers differing in gender, age, physical conditions…. Secondly, each recorded number was segmented into 10 separate digits. Finally, features were extracted from these digits using the Mel Frequency Cepstral Coefficients (MFCC) technique and taken as input data to the Neural Networks for the recognition phase. The performance of the system is nearly 94% when the Support Vector Machine (SVM) is used.
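A compact sketch of the MFCC extraction stage described above: take a frame's magnitude spectrum, pool it through a triangular mel-spaced filterbank, then decorrelate the log energies with a DCT. The frame size, filter count and coefficient count are illustrative defaults, not the paper's settings, and pre-emphasis/windowing are omitted for brevity.

```python
import numpy as np

# Hedged sketch of MFCC computation for a single frame.

def mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def inv_mel(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, sample_rate, n_filters=10, n_coeffs=5):
    n_fft = len(frame)
    spectrum = np.abs(np.fft.rfft(frame))
    # mel-spaced filter edge frequencies mapped to FFT bin indices
    edges = inv_mel(np.linspace(mel(0.0), mel(sample_rate / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * edges / sample_rate).astype(int)
    energies = np.zeros(n_filters)
    for i in range(n_filters):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        for k in range(lo, hi):
            # triangular weight rising to 1 at `mid`, falling back to 0
            w = (k - lo) / max(mid - lo, 1) if k < mid else (hi - k) / max(hi - mid, 1)
            energies[i] += w * spectrum[k]
    log_e = np.log(energies + 1e-10)
    # DCT-II over the log filterbank energies gives the cepstral coefficients
    n = np.arange(n_filters)
    return np.array([np.sum(log_e * np.cos(np.pi * c * (2 * n + 1) / (2 * n_filters)))
                     for c in range(n_coeffs)])

rate = 8000
t = np.arange(256) / rate
features = mfcc_frame(np.sin(2 * np.pi * 440.0 * t), rate)
```

One such coefficient vector per frame, concatenated over the digit's frames, forms the input pattern passed to the classifier.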
- May 2011
Augmented Reality (AR) is the technology of adding virtual objects to real scenes, enabling the addition of missing information to real life. As the lack of resources is a problem that AR can solve, this paper presents the Augmented Reality Student Card (ARSC) as an application of AR in the field of education. ARSC combines single static markers in one card for assigning different objects, leaving the choice to the computer application in order to minimize the tracking process. ARSC is designed to be a useful low-cost solution for serving the education field: it can represent any lesson in a 3D format that helps students visualize different learning objects, interact with theories and deal with information in a totally new, effective and interactive way. ARSC can be used in offline, online and game applications with seven markers, four of which serve as a joystick game controller. One of the novelties of this paper is that experimental tests were made on the ARTag marker set to sort the markers according to their efficiency; the results were used in this research to choose the most efficient markers for ARSC and can be used for further research. The experimental work in this paper also shows the constraints on marker creation for an AR application. As the system needs to work in both online and offline applications, a merging of toolkits and libraries was made, as presented in this paper. ARSC was examined by a number of students of both genders with average ages between 10 and 17 years and found great acceptance among them.
- Dec 2010
- Computer Engineering Conference (ICENCO), 2010 International
Augmented Reality (AR) is the technology of adding virtual objects to real scenes, enabling the addition of missing information to real life. As the lack of resources is a problem that AR can solve, this paper presents the usage of AR technology in what can be named the Augmented Reality Student Card (ARSC) for serving the education field. ARSC combines single static markers in one card for assigning different objects, leaving the choice to the computer application in order to minimize the tracking process. ARSC is designed to be a useful low-cost solution for the education field: it represents any lesson in a 3D format that helps students visualize the facts, interact with theories and deal with information in a totally new, effective and interactive way. ARSC can be used in offline, online and game applications with seven markers, four of which serve as a joystick game controller. One of the novelties of this paper is that full experimental tests were made on the ARTag marker set to sort the markers according to their efficiency; the results are used in this research to choose the most efficient markers for ARSC and can be used for further research. The experimental work in this paper also shows the constraints on marker creation for an AR application. Due to the need to work in both online and offline applications, a merging of toolkits and libraries was made, as presented in this paper. ARSC was examined by a number of students of both genders with average ages between 10 and 17 years and was found to have great acceptance among them.
- Apr 2010
- The 35th International Conference for Statistics, Computer Science and Its Applications, The Egyptian Statistical Society (ESS)
- Apr 2009
- Networking and Media Convergence, 2009. ICNM 2009. International Conference on
Steganography has gained importance in the past few years due to the increasing need for providing secrecy in an open environment like the Internet. With almost anyone able to observe the communicated data all around, steganography attempts to hide the very existence of the message and make communication undetectable. Many techniques are used to secure information: cryptography aims to scramble the information sent and make it unreadable, while steganography is used to conceal the information so that no one can sense its existence. Most algorithms used to secure information employ steganography and cryptography together. Steganography faces technical challenges such as achieving high hiding capacity and imperceptibility. In this paper, we attempt to optimize these two main requirements by proposing a novel technique for hiding data in digital images that combines an adaptive hiding-capacity function, which hides secret data in the integer wavelet coefficients of the cover image, with the optimum pixel adjustment (OPA) algorithm. The coefficients used are selected according to a pseudorandom function generator to increase the security of the hidden data, and the OPA algorithm is applied after embedding the secret message to minimize the embedding error. The proposed system showed high hiding rates with reasonable imperceptibility compared to other steganographic systems.
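A minimal sketch of the optimum pixel adjustment idea used after embedding: once the k least-significant bits of a value are replaced by secret bits, adding or subtracting 2^k can bring the new value closer to the original without disturbing the embedded bits. The paper applies this to integer wavelet coefficients; the function below works on a plain integer value, and all names are illustrative.

```python
# Hedged sketch: k-LSB embedding followed by optimum pixel adjustment.

def embed_with_opa(value, secret_bits, k):
    """Replace the k LSBs of `value` with `secret_bits`, then adjust."""
    embedded = (value & ~((1 << k) - 1)) | secret_bits
    # try shifting the non-embedded part one step down or up
    best = embedded
    for candidate in (embedded - (1 << k), embedded + (1 << k)):
        if candidate >= 0 and abs(candidate - value) < abs(best - value):
            best = candidate
    assert best & ((1 << k) - 1) == secret_bits  # payload remains intact
    return best

# embedding secret digit 0b00 into value 167 with k=2: plain substitution
# gives 164 (error 3); the adjustment step yields 168 (error 1)
adjusted = embed_with_opa(167, 0b00, 2)
```

Because the extractor reads only the k LSBs, the adjustment is transparent to decoding while reducing the embedding error, which is how the scheme keeps imperceptibility at high hiding rates.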
- Jan 2009
Content-Based Image Retrieval (CBIR) considers the characteristics of the image itself, for example its shapes, colors and textures. Current approaches to CBIR differ in which image features they extract, and recent work deals with combining distances or scores from different, independent representations. This work attempts to induce high-level semantics from the low-level descriptors of the images. In this paper, we propose a new approach that integrates salient, color and texture features: it extracts interesting salient regions that act as local descriptors, and a greedy graph matching algorithm with a proposed modified scoring function is applied to determine the final image rank. The proposed approach is appropriate for accurately retrieving images even under distortions such as geometric deformations and noise. The approach was tested on proprietary image databases. An offline case study is also developed in which our approach is tested on images retrieved from the Google keyword-based image search engine. The results show that a combination of our approach as a local image descriptor with another global descriptor outperforms other approaches.
- Jan 2005
- Proceedings of the 13th International Conference on Artificial Intelligence Applications, ICAIA, Cairo, Egypt, Feb. 2005.
The Intelligent Agent (IA) is currently an important field and a hot research area within Artificial Intelligence. An IA for education is classroom- and platform-independent because such a system is generally online, so it can be used by a huge number of learners. In this research, we propose a multi-agent system for an educational application. The system includes an educational subsystem composed of an explainer module, a problem generator module and an evaluation module, with a group of agents that help the system do its job. The system has a graphical user interface and can be connected to the Internet. The student can go through the explainer and then the problem generator, which presents a set of problems the student can solve step by step. The answers go through the evaluation module, which analyzes them, identifies the areas in which the student has difficulties and shows how these problems can be treated; it also stores the student's scores in a student record and presents them, on request, in a self-explanatory form. Throughout these procedures, a system of five intelligent agents helps achieve the goals of the system.
- Jan 2004
- Proceedings of the 12th (ICAIA), Cairo, Egypt, 2004
This work applies computer-based digital voice recognition techniques to build a voice archiving system. Two program routines were used. The first relies on available commercial computer packages, adopting both probability analysis of voice elements and an artificial neural network based on the fast Fourier transform. A sample set of various types of prerecorded and properly archived audio material was installed on the computer storage media, and the developed technique was then loaded and tested. The results confirmed an effective and reliable outcome in accessing the appropriate audio file from the installed audio library, thus confirming the validity of the method for a developed audio archiving system. The second routine comprises a combination of Visual Basic and artificial neural network software; this procedure was used successfully to access selected files from an audio/video library.
- Nov 2003
- 11th International Conference on Artificial Intelligence Applications ICAIA , Cairo International conference Center, Cairo, EGYPT
In this paper, a technique for hiding secret data in images is implemented. The technique aims to increase the hiding capacity without affecting the quality of the carrier image in which the data are hidden (the resulting image is called the stego image). It combines two methods for improving image quality: an optimum substitution matrix and a pixel adjustment process. The optimum substitution matrix replaces the k LSBs (least significant bits) of the data with other, optimum values in order to minimize the difference between the carrier image and the secret data. Finding the optimum substitution matrix has proved to be a computationally expensive process; in this paper, a dynamic programming strategy is implemented to find it. After the optimum substitution matrix is applied and the secret data are hidden in the image, a pixel adjustment process is used to roughly halve the MSE (mean squared error) between the stego image and the carrier (host) image. The proposed technique is applied to standard images used in the literature. The experimental results show that the stego image is not degraded in appearance even when embedding up to 4 bits of data in each byte. The PSNR is computed for different values of k. The experimental results are also compared to previous works and show a significant improvement.
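A sketch of the substitution-matrix objective: for k-bit embedding, a bijection maps each secret digit to the digit actually written into the k LSBs, chosen to minimize total distortion against the host pixels' original LSB values. The paper finds this with dynamic programming; for small k, an exhaustive search over bijections illustrates the same objective. The data and names below are illustrative.

```python
from itertools import permutations

# Hedged sketch: choosing a digit substitution that minimizes distortion.

def substitution_cost(host_lsbs, secret_digits, mapping):
    """Total |original k-LSB value - written digit| under a digit mapping."""
    return sum(abs(h - mapping[s]) for h, s in zip(host_lsbs, secret_digits))

def best_substitution(host_lsbs, secret_digits, k):
    """Brute-force search over bijections (feasible only for small k)."""
    digits = range(1 << k)
    best = min(permutations(digits),
               key=lambda p: substitution_cost(host_lsbs, secret_digits, p))
    return list(best)

host = [3, 3, 2, 0, 3, 2]      # original 2-LSB values of the cover pixels
secret = [0, 0, 1, 2, 0, 1]    # 2-bit secret digits to embed
mapping = best_substitution(host, secret, 2)
```

Since the mapping is a bijection, the receiver inverts it to recover the secret digits; the dynamic programming strategy in the paper reaches the same minimum-cost mapping without enumerating all (2^k)! bijections.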
In three-dimensional display based on integral imaging (II), compression of the elemental images is a major requirement for real-time applications. In this paper, we propose an integral imaging lossless compression coder based on three-dimensional set partitioning in hierarchical trees (3D SPIHT). The elemental images are stacked to form a three-dimensional image, a 3D wavelet transform is performed, and then 3D SPIHT coding is applied. Simulations are performed to test the performance of the 3D compression system. The results show that the proposed system has superior compression performance compared to 2D SPIHT.
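A minimal sketch of the decomposition step: the elemental images are stacked into a volume and a separable wavelet transform is applied along each axis before SPIHT coding. A one-level Haar transform stands in for the paper's 3D wavelet here, and the SPIHT bit-plane coder itself is omitted; all names and the toy data are illustrative.

```python
import numpy as np

# Hedged sketch: stacking elemental images and applying a separable
# one-level 3D Haar transform (a stand-in for the paper's 3D wavelet).

def haar_1d(x):
    """One-level Haar split along the last axis (even-length input)."""
    avg = (x[..., ::2] + x[..., 1::2]) / 2.0
    diff = (x[..., ::2] - x[..., 1::2]) / 2.0
    return np.concatenate([avg, diff], axis=-1)

def haar_3d(volume):
    """Apply the 1-D transform along each of the three axes in turn."""
    out = volume.astype(float)
    for axis in range(3):
        out = np.moveaxis(haar_1d(np.moveaxis(out, axis, -1)), -1, axis)
    return out

# stack of four toy 4x4 "elemental images" forming a 4x4x4 volume
stack = np.arange(64, dtype=float).reshape(4, 4, 4)
coeffs = haar_3d(stack)
```

The resulting coefficient volume concentrates energy in the low-pass corner, which is what the hierarchical-tree structure of 3D SPIHT exploits for compact lossless coding.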