Content uploaded by Magdi Mohamed

Author content

All content in this area was uploaded by Magdi Mohamed on Dec 30, 2014

Content may be subject to copyright.


Handwriting recognition requires tools and techniques that
recognize complex character patterns and represent imprecise,
common-sense knowledge about the general appearance of characters, words,
and phrases. Neural networks and fuzzy logic are complementary tools for
solving such problems. Neural networks are highly nonlinear, highly
interconnected processors of imprecise information that can closely
approximate complicated decision boundaries. Fuzzy set methods can
represent degrees of truth or belonging: fuzzy logic encodes imprecise
knowledge and naturally maintains the multiple hypotheses that arise from
the uncertainty and vagueness inherent in real problems. By combining
the complementary strengths of neural and fuzzy approaches into a hybrid
system, we can attain increased recognition capability for
handwriting recognition problems. This article describes the application
of neural and fuzzy methods to three problems: recognition of
handwritten words, recognition of numeric fields, and location of
handwritten street numbers in address images.


... In other words, the null space of a matrix A defines the region of the input space that maps to zero. The motivation to leverage the null space is related to the study of adversarial samples such as those shown in (Nguyen et al., 2014) and to experiences in handwritten word recognition in the 1990s (Chiang and Gader, 1997; Gader et al., 1997). The NuSA approach is a partial, but important, solution to the problem of competency awareness of ANNs; it is unlikely that any single method alone can alleviate this problem. ...

... The NuSA approach is focused on the opposite problem, i.e., large changes in an input sample can produce small (or no) changes in the output. A human would easily disregard such a heavily corrupted sample as an outlier but, as pointed out in (Chiang and Gader, 1997; Gader et al., 1997; Nguyen et al., 2014), the network would not be able to distinguish it from a valid sample. An example of this is shown in Figure 1. ...

Many machine learning classification systems lack competency awareness. Specifically, many systems lack the ability to identify when outliers (e.g., samples that are distinct from and not represented in the training data distribution) are being presented to the system. The ability to detect outliers is of practical significance since it can help the system behave in a reasonable way when encountering unexpected data. In prior work, outlier detection is commonly carried out in a processing pipeline that is distinct from the classification model. Thus, for a complete system that incorporates outlier detection and classification, two models must be trained, increasing the overall complexity of the approach. In this paper we use the concept of the null space to integrate an outlier detection method directly into a neural network used for classification. Our method, called Null Space Analysis (NuSA) of neural networks, works by computing and controlling the magnitude of the null space projection as data is passed through a network. Using these projections, we can then calculate a score that differentiates between normal and abnormal data. Results indicate that networks trained with NuSA retain their classification performance while also being able to detect outliers at rates similar to commonly used outlier detection algorithms.
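The null-space projection at the heart of NuSA can be sketched in a few lines. The weights, dimensions, and random data below are illustrative stand-ins, not the paper's actual network:

```python
import numpy as np

# Illustrative linear layer: 3 outputs from 5 inputs. For a full-rank
# W the null space has dimension 5 - 3 = 2.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))

# Orthonormal basis N for the null space of W via the SVD: W @ N == 0
# (up to numerical precision).
_, _, Vt = np.linalg.svd(W)
N = Vt[np.linalg.matrix_rank(W):].T        # shape (5, 2)

def nusa_score(x):
    """Magnitude of the projection of x onto the null space of W.
    A large score means much of x lies in directions the layer
    discards entirely -- a candidate outlier signal."""
    return float(np.linalg.norm(N.T @ x))

x = rng.standard_normal(5)
# Adding a null-space direction leaves the layer output unchanged but
# drives the NuSA score up -- exactly the failure mode described
# above, now made detectable.
x_corrupt = x + 100.0 * N[:, 0]
```

The full method additionally controls these projection magnitudes during training; this sketch only shows why the score separates the corrupted sample from the clean one.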

... Fig. 2.1 Example of a classical HWR approach relying on explicit segmentation and subsequent classification (inspired by [78]) ...

The integrated use of hidden Markov models (HMMs) and Markov chain models can be considered the state-of-the-art for the analysis of sequential data. The former represents a generative model that covers the “appearance” of the underlying data whereas the latter describes restrictions on possible hypothesis sequences. Hidden Markov models describe a two-stage stochastic process with hidden states and observable outputs. The first stage can be interpreted as a probabilistic finite state automaton, which is the basis for the generative modeling described by the second stage. Markov chain models are usually realized as stochastic n-gram models, which describe the probability of the occurrence of entire symbol sequences. For both HMMs and Markov chain models, efficient algorithms exist for parameter estimation and for model evaluation. They can be used in an integrated manner for effective segmentation and classification of sequential data. This chapter gives a detailed overview of the theoretical foundations of Markovian models as they are used for handwriting recognition.
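A minimal Viterbi decoding sketch makes the first-stage automaton concrete. All numbers below (two states, two output symbols, and the transition, emission, and initial probabilities) are invented toy values, not parameters of any handwriting model:

```python
import numpy as np

A = np.array([[0.7, 0.3],       # state transition probabilities A[i, j]
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],       # emission probabilities B[state, symbol]
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])       # initial state distribution

def viterbi(obs):
    """Most likely hidden-state sequence for an observation sequence."""
    delta = pi * B[:, obs[0]]           # best path probability so far
    psi = []                            # back-pointers, one row per step
    for o in obs[1:]:
        trans = delta[:, None] * A      # trans[i, j]: arrive in j via i
        psi.append(trans.argmax(axis=0))
        delta = trans.max(axis=0) * B[:, o]
    # Backtrack from the best final state.
    state = int(delta.argmax())
    path = [state]
    for back in reversed(psi):
        state = int(back[state])
        path.append(state)
    return path[::-1]

path = viterbi([0, 0, 1, 1, 1])
```

With these toy parameters the decoder simply tracks the symbol stream, switching its state hypothesis when the emissions make the other state more plausible; real handwriting decoders apply the same recursion over character and word models.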

... HCR can be divided into two categories, namely online and off-line. On-line character recognition involves identifying characters while they are written [6]; it deals with time-ordered sequences of data, pen-up and pen-down movement, and pressure-sensitive pads that record the pen's pressure and velocity [7]. On the other hand, off-line character recognition involves the recognition of already-written character patterns in a scanned digital image. ...

The development of handwriting character recognition (HCR) is an interesting area in pattern recognition. An HCR system consists of a number of stages: preprocessing, feature extraction, classification, and the actual recognition. It is generally agreed that one of the main factors influencing performance in HCR is the selection of an appropriate set of features for representing input samples. This paper provides a review of these advances. In HCR, feature selection is a central issue: the procedure for choosing relevant features determines the achievable classification error. To address this issue and maximize classification performance, many techniques have been proposed for reducing the dimensionality of the feature space in which data have to be processed. These techniques, generally denoted as feature reduction, may be divided into two main categories, called feature extraction and feature selection. A large number of research papers and reports have already been published on this topic. In this paper we provide an overview of some of the methods and approaches of feature extraction and selection, and we investigate and analyze them in order to identify current trends. We also review the metaheuristic harmony search algorithm (HSA).

... Our system employs a simple discriminator based on the distribution of the heights and widths of connected components [14]. ...

Many parents feel uncomfortable helping their children with homework, with only 66% of parents consistently checking their child’s homework [22]. Because of this, many turn to math games and problem solvers, which have become widely available in recent years [12, 21]. Many of these applications rely on multiple choice or keyboard entry submission of answers, limiting their adoption. Auto graders and applications such as PhotoMath deprive students of the opportunity to correct their own mistakes, automatically generating a solution with no explanation [19]. This work introduces a novel homework assistant – Homework Helper (HWHelper) – that is capable of determining mathematical errors in order to provide meaningful feedback to students without revealing solutions. In this paper, we focus on simple arithmetic calculations, specifically multi-digit addition, introducing 2D-Add, a new dataset of worked addition problems. We design a system that acts as a guided learning tool for students, allowing them to learn from and correct their mistakes. HWHelper segments a sheet of math problems, identifies the student’s answer, performs the arithmetic, and pinpoints mistakes made, providing feedback to the student. HWHelper fills a significant gap in the current state-of-the-art for student math homework feedback.

As a first step of document understanding, a digital image of the document to be analyzed or the trajectory of the pen used for writing needs to be captured. From this raw data the relevant document elements (e.g., text lines) need to be segmented. These are then subject to a number of pre-processing steps that aim at reducing the variability in the appearance of the writing by applying a sequence of normalization operations. In order to be processed by a handwriting recognition system based on Markov models, text-line images and raw pen trajectories are then converted into a sequential representation—which is quite straightforward for online data but requires some “trick” in the offline case. Based on the serialized data representation, features are computed that characterize the local appearance of the script. These are fed into a Markov-model based decoder that produces a hypothesis for the segmentation and classification of the analyzed portion of handwritten text—usually as a sequence of word or character hypotheses.
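The offline “trick” is typically a narrow analysis window slid left to right across the text-line image, turning the 2-D image into a 1-D feature sequence. The two window features below (ink density and vertical centre of gravity) are a minimal illustrative choice, not the chapter's actual feature set:

```python
import numpy as np

def sliding_window_features(img, width=4, step=2):
    """Serialize a binary text-line image (rows x cols) into a
    left-to-right sequence of per-window feature vectors.
    Each window yields (mean ink density, normalized row of the
    ink centre of gravity)."""
    h, w = img.shape
    feats = []
    for x0 in range(0, w - width + 1, step):
        win = img[:, x0:x0 + width]
        density = win.mean()
        rows = np.nonzero(win)[0]
        centre = rows.mean() / (h - 1) if rows.size else 0.5
        feats.append((density, centre))
    return np.array(feats)

# A toy "text line": a single horizontal stroke in row 1.
line = np.zeros((4, 10))
line[1, :] = 1.0
feats = sliding_window_features(line)
```

The resulting sequence of feature vectors is what a Markov-model decoder consumes, exactly as it would consume an online pen trajectory.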

We describe a model-based motion filtering process that, when applied to human arm motion data, leads to improved arm gesture recognition. Arm movements can be viewed as responses to muscle actuations that are guided by responses of the nervous system. Our motion filtering method makes strides towards capturing this structure by integrating a dynamic model with a control system for the arm. We hypothesize that embedding human performance knowledge into the processing of arm movements will lead to better recognition performance. We present details for the design of our filter, our evaluation of the filter from both expert-user and multiple-user pilot studies. Our results show that the filter has a positive impact on recognition performance for arm gestures.

In Section 1.1 we defined a classifier as any function D: ℝ^p → N_pc. The value y = D(z) is the label vector for z in ℝ^p. D is a crisp classifier if D[ℝ^p] = N_hc; otherwise, the classifier is fuzzy, possibilistic or probabilistic, which for convenience we lump together as soft classifiers. This chapter describes some of the most basic (and often most useful) classifier designs, along with some fuzzy generalizations and relatives.
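The crisp/soft distinction can be made operational with a simple membership test; the helper below is a hypothetical illustration of the chapter's label-vector notation, where a hard label in N_hc is a one-hot vector:

```python
import numpy as np

def is_crisp(y, tol=1e-9):
    """True if the label vector y is a hard label in N_hc: exactly one
    component equal to 1 and the rest 0. Anything else -- graded
    memberships, whether or not they sum to 1 -- is a 'soft' label."""
    y = np.asarray(y, dtype=float)
    one_hot = np.isclose(y, 1.0, atol=tol)
    zero = np.isclose(y, 0.0, atol=tol)
    return bool(one_hot.sum() == 1 and np.all(one_hot | zero))

crisp = is_crisp([0, 1, 0])          # hard label: a vertex of N_hc
soft = is_crisp([0.2, 0.7, 0.1])     # fuzzy/possibilistic label
```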

An off-line method for handwritten text recognition is proposed. It is a hybrid approach using both holistic and analytical strategies. Words are divided into vertical segments and features are extracted from these segments. Features include the upper stroke, lower stroke, middle loop, and first character of a word. The features' fuzzy values and relative positional information form the word's global representation. The matching word is found by comparing an unknown word's representation with the word representations in a word dictionary. Contextual information is then used to find the matching phrase from a text dictionary. The approach allows direct conversion of the ASCII form of a word to its holistic representation without involving training.

An approach to handprinted word recognition is described. The approach is based on generating multiple possible segmentations of a word image into characters and matching these segmentations to a lexicon of candidate strings. The segmentation process uses a combination of connected component analysis and distance-transform-based connected character splitting. Neural networks are used to assign confidence values to potential characters within word images. Experimental results are provided for both character and word recognition modules on data extracted from the NIST handprinted character database.

A lexicon-based, handwritten word recognition system combining
segmentation-free and segmentation-based techniques is described. The
segmentation-free technique constructs a continuous density hidden
Markov model for each lexicon string. The segmentation-based technique
uses dynamic programming to match word images and strings. The
combination module uses differences in classifier capabilities to
achieve significantly better performance.

The Choquet fuzzy integral is applied to handwritten word recognition. A handwritten word recognition system is described. The word recognition system assigns a recognition confidence value to each string in a lexicon of candidate strings. The system uses a lexicon-driven approach that integrates segmentation and recognition via dynamic programming matching. The dynamic programming matcher finds a segmentation of the word image for each string in the lexicon. The traditional match score between a segmentation and a string is an average. In this paper, fuzzy integrals are used instead of an average. Experimental results demonstrate the utility of this approach. A surprising result is obtained that indicates a simple choice of fuzzy integral works better than a more complex choice.
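The replacement of the average by a Choquet integral can be sketched directly from its discrete definition: sort the confidences in decreasing order and weight each by the measure increment of the growing coalition of sources. The two-source measure and confidence values below are invented for illustration; in the paper both come from the word recognizer:

```python
def choquet(values, measure):
    """Discrete Choquet integral of `values` (one confidence per
    source) with respect to a fuzzy measure: a dict mapping frozensets
    of source indices to [0, 1], with measure[frozenset()] == 0,
    measure[all sources] == 1, and monotonicity under inclusion."""
    order = sorted(range(len(values)), key=lambda i: values[i],
                   reverse=True)
    total, seen, prev_g = 0.0, frozenset(), 0.0
    for i in order:
        seen = seen | {i}
        g = measure[seen]
        total += values[i] * (g - prev_g)
        prev_g = g
    return total

# Illustrative measure over two confidence sources {0, 1}.
g = {frozenset(): 0.0, frozenset({0}): 0.6,
     frozenset({1}): 0.5, frozenset({0, 1}): 1.0}
score = choquet([0.9, 0.4], g)
```

When the measure is additive (e.g. each singleton worth 0.5), the integral reduces to the ordinary weighted average the paper replaces; non-additive measures are what let it reward or penalize particular combinations of sources.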

An off-line handwritten word recognition system is described.
Images of handwritten words are matched to lexicons of candidate
strings. A word image is segmented into primitives. The best match
between sequences of unions of primitives and a lexicon string is found
using dynamic programming. Neural networks assign match scores between
characters and segments. Two particularly unique features are that
neural networks assign confidence that pairs of segments are compatible
with character confidence assignments and that this confidence is
integrated into the dynamic programming. Experimental results are
provided on data from the U.S. Postal Service.
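A stripped-down version of the dynamic-programming match over unions of primitives can be sketched as follows. The confidence function here is a hypothetical lookup table standing in for the neural networks, and the segment names and scores are invented:

```python
def match_score(segments, string, char_conf, max_union=3):
    """Best average character confidence when the primitive `segments`
    are partitioned, in order, into len(string) groups of at most
    `max_union` consecutive primitives. `char_conf(group, ch)` stands
    in for the neural-network character confidence."""
    n, m = len(segments), len(string)
    NEG = float("-inf")
    # best[i][j]: best total confidence using the first i primitives
    # for the first j characters of the lexicon string.
    best = [[NEG] * (m + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            for k in range(1, min(max_union, i) + 1):
                prev = best[i - k][j - 1]
                if prev > NEG:
                    group = tuple(segments[i - k:i])
                    best[i][j] = max(best[i][j],
                                     prev + char_conf(group,
                                                      string[j - 1]))
    return best[n][m] / m if best[n][m] > NEG else NEG

# Toy confidences: the union (s0, s1) looks like 'a', s2 like 'b'.
conf_table = {(("s0", "s1"), "a"): 1.0, (("s2",), "b"): 1.0}
toy_conf = lambda group, ch: conf_table.get((group, ch), 0.2)
score = match_score(["s0", "s1", "s2"], "ab", toy_conf)
```

The paper's distinctive addition, not shown here, is a second network scoring the compatibility of segment pairs, folded into the same recursion.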

Experiments comparing neural networks trained with crisp and fuzzy
desired outputs are described. A handwritten word recognition algorithm
using the neural networks for character level confidence assignment was
tested on images of words taken from the United States Postal Service
mailstream. The fuzzy outputs were defined using a fuzzy k-nearest
neighbor algorithm. The crisp networks slightly outperformed the fuzzy
networks at the character level but the fuzzy networks outperformed the
crisp networks at the word level. This empirical result is interpreted
as an example of the principle of least commitment.
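The fuzzy desired outputs can be generated along the lines of the Keller-style fuzzy k-nearest-neighbor rule: each training sample's target becomes an inverse-distance-weighted class membership vector instead of a 0/1 label. The tiny one-dimensional training set below is illustrative only:

```python
import numpy as np

def fuzzy_knn_memberships(x, train_x, train_labels, n_classes,
                          k=3, m=2.0):
    """Graded class memberships for x: an inverse-distance-weighted
    vote of its k nearest training neighbours (weight 1/d^(2/(m-1))),
    normalized to sum to 1. Used here the way the paper uses it --
    to build 'fuzzy' desired outputs for network training."""
    d = np.linalg.norm(train_x - x, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[nearest], 1e-12) ** (2.0 / (m - 1.0))
    u = np.zeros(n_classes)
    for j, wj in zip(nearest, w):
        u[train_labels[j]] += wj
    return u / u.sum()

# Two classes on a line; a query near class 0 gets a high but not
# crisp class-0 membership.
train_x = np.array([[0.0], [0.1], [1.0], [1.1]])
train_labels = np.array([0, 0, 1, 1])
u = fuzzy_knn_memberships(np.array([0.2]), train_x, train_labels, 2)
```

Training the character networks against such graded targets is what lets them defer ambiguous decisions to the word level, in the spirit of least commitment.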

From the Publisher:
This decade has witnessed increasing interest in fuzzy technology both from academia and industry. It is often said that fuzzy theory is easy and simple so that engineers can progress quickly to real applications. However, the lack of knowledge of design methodologies and the theoretical results of fuzzy theory have often caused problems for design engineers. The aim of this book is to provide a rigorous background for uncertainty calculi, with an emphasis on fuzziness. Fundamentals of Uncertainty Calculi with Applications to Fuzzy Inference is primarily about the type of knowledge expressed in a natural language, that is, in linguistic terms. The approach to modeling such knowledge is based upon the mathematical theory of uncertainty related to fuzzy measures and integrals and their applications. The book consists of two parts: Chapters 2 - 6 comprise the theory, and applications are offered in Chapters 7 - 10. In the theory section the exposition is mathematical in nature and gives a complete background on uncertainty measures and integrals, especially in a fuzzy setting. The applications concern recent uses of fuzzy measures and integrals in problems such as pattern recognition, decision making, and subjective multicriteria evaluation.

Some theorems of T. Murofushi and M. Sugeno (Fuzzy Sets and Systems 29 (1989), 201–227) concerning the representation of fuzzy measures and the Choquet integral are generalized. It is shown that, if a certain relation holds between two measurable functions, then the Choquet integral is additive for these two functions. In addition, this article discusses null sets with respect to fuzzy measures, as well as fuzzy measures defined on a class which is not a σ-algebra.

Two hybrid fuzzy neural systems are developed and applied to
handwritten word recognition. The word recognition system requires a
module that assigns character class membership values to segments of
images of handwritten words. The module must accurately represent
ambiguities between character classes and assign low membership values
to a wide variety of noncharacter segments resulting from erroneous
segmentations. Each hybrid is a cascaded system. The first stage of both
is a self-organizing feature map (SOFM). The second stages map distances
into membership values. The third stage of one system is a multilayer
perceptron (MLP). The third stage of the other is a bank of Choquet
fuzzy integrals (FI). The two systems are compared individually and as a
combination to the baseline system. The new systems each perform better
than the baseline system. The MLP system slightly outperforms the FI
system, but the combination of the two outperforms the individual
systems with a small increase in computational cost over the MLP system.
Recognition rates of over 92% are achieved with a lexicon set having
average size of 100. Experiments were performed on a standard test set
from the SUNY/USPS CD-ROM database.