Jordan J. Bird
Nottingham Trent University | NTU · Department of Computer Science

Doctor of Philosophy
Looking for potential research collaborators

About

54 Publications · 66,071 Reads · 733 Citations
Introduction
Dr Jordan J. Bird is a Research Fellow with the Computational Intelligence and Applications Research Group (CIA) within the Department of Computer Science at Nottingham Trent University. Before that, he studied for a PhD in Human-Robot Interaction at Aston University. His research interests include Artificial Intelligence (AI), Human-Robot Interaction (HRI), Machine Learning (ML), Deep Learning, Transfer Learning, and Data Augmentation. http://jordanjamesbird.com/
Education
July 2018 - July 2021 · Aston University · Field of study: Artificial Intelligence
October 2014 - July 2018 · Aston University · Field of study: Computer Science

Publications (54)
Conference Paper
Full-text available
The ability to autonomously detect a physical fall is one of the many enabling technologies towards better independent living. This work explores how genetic programming can be leveraged to develop machine learning pipelines for the classification of falls via EEG brainwave activity. Eleven physical activities (5 types of falls and 6 non-fall activ...
Preprint
Full-text available
This study explores how robots and generative approaches can be used to mount successful false-acceptance adversarial attacks on signature verification systems. Initially, a convolutional neural network topology and data augmentation strategy are explored and tuned, producing an 87.12% accurate model for the verification of 2,640 human signatures....
Preprint
Full-text available
In modern society, people should not be identified based on their disability, rather, it is environments that can disable people with impairments. Improvements to automatic Sign Language Recognition (SLR) will lead to more enabling environments via digital technology. Many state-of-the-art approaches to SLR focus on the classification of static han...
Preprint
Full-text available
In state-of-the-art deep learning for object recognition, SoftMax and Sigmoid functions are most commonly employed as the predictor outputs. Such layers often produce overconfident predictions rather than proper probabilistic scores, which can thus harm the decision-making of `critical' perception systems applied in autonomous driving and robotics....
Article
Contemporary Artificial Intelligence technologies allow for the employment of Computer Vision to discern good crops from bad, providing a step in the pipeline of selecting healthy fruit from undesirable fruit, such as those which are mouldy or damaged. State-of-the-art works in the field report high accuracy results on small datasets (<1000 images)...
Article
In state-of-the-art deep learning for object recognition, Softmax and Sigmoid layers are most commonly employed as the predictor outputs. Such layers often produce overconfident predictions rather than proper probabilistic scores, which can thus harm the decision-making of ‘critical’ perception systems applied in autonomous driving and robotics. G...
Preprint
Full-text available
With growing societal acceptance and increasing cost efficiency due to mass production, service robots are beginning to cross from the industrial to the social domain. Currently, customer service robots tend to be digital and emulate social interactions through on-screen text, but state-of-the-art research points towards physical robots soon provid...
Preprint
Full-text available
Much of the state-of-the-art in image synthesis inspired by real artwork are either entirely generative by filtered random noise or inspired by the transfer of style. This work explores the application of image inpainting to continue famous artworks and produce generative art with a Conditional GAN. During the training stage of the process, the bor...
Article
Full-text available
In this work we present the Chatbot Interaction with Artificial Intelligence (CI-AI) framework as an approach to the training of a transformer based chatbot-like architecture for task classification with a focus on natural human interaction with a machine as opposed to interfaces, code, or formal commands. The intelligent system augments human-sour...
Chapter
Full-text available
In this work, we achieve up to 92% classification accuracy of electromyographic data between five gestures in pseudo-real-time. Most current state-of-the-art methods in electromyographical signal processing are unable to classify real-time data in a post-learning environment, that is, after the model is trained and results are analysed. In this wor...
Preprint
Full-text available
Contemporary Artificial Intelligence technologies allow for the employment of Computer Vision to discern good crops from bad, providing a step in the pipeline of selecting healthy fruit from undesirable fruit, such as those which are mouldy or gangrenous. State-of-the-art works in the field report high accuracy results on small datasets (<1000 imag...
Conference Paper
Full-text available
In this work we achieve up to 92% classification accuracy of electromyographic data between five gestures in pseudo-real-time. Most current state-of-the-art methods in electromyography signal processing are unable to classify real-time data in a post-learning environment, that is, after the model is trained and results are analysed. In this work we...
Article
Full-text available
Synthetic data augmentation is of paramount importance for machine learning classification, particularly for biological data, which tend to be high dimensional and with a scarcity of training samples. The applications of robotic control and augmentation in disabled and able-bodied subjects still rely mainly on subject-specific analyses. Those can r...
Thesis
Full-text available
In modern Human-Robot Interaction, much thought has been given to accessibility regarding robotic locomotion, specifically the enhancement of awareness and lowering of cognitive load. On the other hand, with social Human-Robot Interaction considered, published research is far sparser given that the problem is less explored than pathfinding and loco...
Article
Full-text available
Objective The novelty of this study consists of the exploration of multiple new approaches of data pre-processing of brainwave signals, wherein statistical features are extracted and then formatted as visual images based on the order in which dimensionality reduction algorithms select them. This data is then treated as visual input for 2D and 3D CN...
Article
Full-text available
In this study, we present a transfer learning method for gesture classification via an inductive and supervised transductive approach with an electromyographic dataset gathered via the Myo armband. A ternary gesture classification problem is presented by states of ’thumbs up’, ’thumbs down’, and ’relax’ in order to communicate in the affirmative or...
Article
Full-text available
In this work we present a three-stage Machine Learning strategy to country-level risk classification based on countries that are reporting COVID-19 information. A K% binning discretisation (K = 25) is used to create four risk groups of countries based on the risk of transmission (coronavirus cases per million population), risk of mortality (coronav...
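The K% binning step described above (K = 25, four risk groups per risk factor) amounts to quartile discretisation of a per-country rate. A minimal sketch, assuming toy country codes and made-up per-million case rates, not the paper's data or code:

```python
# Illustrative sketch of K% binning with K = 25: split a per-million
# case rate at its quartiles, yielding four equally populated risk
# groups. The country codes and rates below are hypothetical examples.
import pandas as pd

rates = pd.Series(
    [12.0, 95.5, 310.2, 44.1, 870.9, 150.3, 22.7, 560.4],
    index=["A", "B", "C", "D", "E", "F", "G", "H"],
    name="cases_per_million",
)

# pd.qcut cuts at the 25th/50th/75th percentiles, so each of the
# four bins (risk groups 1..4) receives a quarter of the countries.
risk_group = pd.qcut(rates, q=4, labels=[1, 2, 3, 4])
print(risk_group.sort_values())
```

With K = 25 each bin holds 25% of the countries; the same call with `q=10` would give K = 10 decile groups instead.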
Conference Paper
Full-text available
The novelty of this study consists in a multi-modality approach to scene classification, where image and audio complement each other in a process of deep late fusion. The approach is demonstrated on a difficult classification problem, consisting of two synchronised and balanced datasets of 16,000 data objects, encompassing 4.4 hours of video of 8 e...
Preprint
Full-text available
In this work, we present the Chatbot Interaction with Artificial Intelligence (CI-AI) framework as an approach to the training of deep learning chatbots for task classification. The intelligent system augments human-sourced data via artificial paraphrasing in order to generate a large set of training data for further classical, attention, and langu...
Article
Full-text available
In this work, we show that a late fusion approach to multimodality in sign language recognition improves the overall ability of the model in comparison to the singular approaches of image classification (88.14%) and Leap Motion data classification (72.73%). With a large synchronous dataset of 18 BSL gestures collected from multiple subjects, two de...
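Late fusion, as described above, combines the outputs of the two modality-specific models rather than their inputs. A schematic sketch using simple probability averaging (one common late-fusion rule; the paper's fusion network may differ), with made-up per-class probabilities:

```python
# Schematic late-fusion example: average the per-class probability
# vectors from two modality-specific classifiers, then take the
# argmax. The probability values below are hypothetical.
import numpy as np

p_image = np.array([0.60, 0.25, 0.15])   # hypothetical image-model output
p_leap = np.array([0.20, 0.70, 0.10])    # hypothetical Leap Motion model output

p_fused = (p_image + p_leap) / 2         # simple averaging late fusion
print(p_fused, p_fused.argmax())
```

Here the fused prediction (class 1) differs from the image model's own argmax, illustrating how one modality can correct the other.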
Article
Full-text available
Preliminary results to a new approach for neurocognitive training on academic engagement and monitoring of attention levels in children with learning difficulties is presented. Machine Learning (ML) techniques and a Brain-Computer Interface (BCI) are used to develop an interactive AI-based game for educational therapy to monitor the progress of chi...
Preprint
Full-text available
In this work, we show that a late fusion approach to multi-modality in sign language recognition improves the overall ability of the model in comparison to the singular approaches of Computer Vision (88.14%) and Leap Motion data classification (72.73%). With a large synchronous dataset of 18 BSL gestures collected from multiple subjects, two deep n...
Preprint
Full-text available
The novelty of this study consists in a multi-modality approach to scene classification, where image and audio complement each other in a process of deep late fusion. The approach is demonstrated on a difficult classification problem, consisting of two synchronised and balanced datasets of 16,000 data objects, encompassing 4.4 hours of video of 8 e...
Preprint
Full-text available
In speech recognition problems, data scarcity often poses an issue due to the unwillingness of humans to provide large amounts of data for learning and classification. In this work, we take a set of 5 spoken Harvard sentences from 7 subjects and consider their MFCC attributes. Using character level LSTMs (supervised learning) and OpenAI's attention-b...
Conference Paper
Full-text available
In this work, we show that both fine-tune learning and cross-domain sim-to-real transfer learning from virtual to real-world environments improve the starting and final scene classification abilities of a computer vision model. A 6-class computer vision problem of scene classification is presented from both videogame environments and photographs of...
Article
Full-text available
In this work, we argue that the implications of pseudorandom and quantum-random number generators (PRNG and QRNG) inexplicably affect the performances and behaviours of various machine learning models that require a random input. These implications are yet to be explored in soft computing until this work. We use a CPU and a QPU to generate random n...
Preprint
Full-text available
Deep networks are currently the state-of-the-art for sensory perception in autonomous driving and robotics. However, deep models often generate overconfident predictions precluding proper probabilistic interpretation which we argue is due to the nature of the SoftMax layer. To reduce the overconfidence without compromising the classification perfor...
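The overconfidence of SoftMax outputs mentioned above is easy to reproduce. A generic illustration using temperature scaling to soften the distribution (a standard calibration trick, not necessarily the remedy this paper proposes), with arbitrary example logits:

```python
# Demonstrates SoftMax overconfidence: modest logit gaps already give
# near-certain top-class probabilities. Dividing the logits by a
# temperature T > 1 flattens the distribution.
import numpy as np

def softmax(logits, T=1.0):
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                    # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [8.0, 2.0, 1.0]            # arbitrary example logits
p_sharp = softmax(logits)           # T = 1: top class near certainty
p_soft = softmax(logits, T=4.0)     # T = 4: softened probabilities
print(p_sharp.max(), p_soft.max())
```

Both outputs still sum to one and preserve the class ranking; only the confidence of the top class changes.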
Conference Paper
Full-text available
Autonomous speaker identification suffers issues of data scarcity due to it being unrealistic to gather hours of speaker audio to form a dataset, which inevitably leads to class imbalance in comparison to the large data availability from non-speakers since large-scale speech datasets are available online. In this study, we explore the possibility o...
Article
Recent advances in the availability of computational resources allow for more sophisticated approaches to speech recognition than ever before. This study considers Artificial Neural Network and Hidden Markov Model methods of classification for Human Speech Recognition through Diphthong Vowel sounds in the English Phonetic Alphabet rather than the c...
Article
Full-text available
Recent advances in the availability of computational resources allow for more sophisticated approaches to speech recognition than ever before. This study considers Artificial Neural Network and Hidden Markov Model methods of classification for Human Speech Recognition through Diphthong Vowel sounds in the English Phonetic Alphabet rather than the c...
Article
Full-text available
In this work, we show the success of unsupervised transfer learning between Electroencephalographic (brainwave) classification and Electromyographic (muscular wave) domains with both MLP and CNN methods. To achieve this, signals are measured from both the brain and forearm muscles and EMG data is gathered from a 4-class gesture classification exper...
Chapter
Full-text available
The implications of realistic human speech imitation are both promising and potentially dangerous. In this work, a pre-trained Tacotron Spectrogram Feature Prediction Network is fine-tuned with two 1.6 h speech datasets for 100,000 learning iterations, producing two individual models. The two Speech datasets are completely identical in content othe...
Chapter
Full-text available
This work presents an image classification approach to EEG brainwave classification. The proposed method is based on the representation of temporal and statistical features as a 2D image, which is then classified using a deep Convolutional Neural Network. A three-class mental state problem is investigated, in which subjects experience either relaxa...
Article
Full-text available
In this work, we argue that the implications of Pseudo and Quantum Random Number Generators (PRNG and QRNG) inexplicably affect the performances and behaviours of various machine learning models that require a random input. These implications are yet to be explored in Soft Computing until this work. We use a CPU and a QPU to generate random numbers...
Preprint
Full-text available
In this work, we argue that the implications of Pseudo and Quantum Random Number Generators (PRNG and QRNG) inexplicably affect the performances and behaviours of various machine learning models that require a random input. These implications are yet to be explored in Soft Computing until this work. We use a CPU and a QPU to generate random numbers...
Conference Paper
Full-text available
This work presents an image classification approach to EEG brainwave classification. The proposed method is based on the representation of temporal and statistical features as a 2D image, which is then classified using a deep Convolutional Neural Network. A three-class mental state problem is investigated, in which subjects experience either relaxa...
Conference Paper
Full-text available
The implications of realistic human speech imitation are both promising and potentially dangerous. In this work, a pre-trained Tacotron Spectrogram Feature Prediction Network is fine-tuned with two 1.6 hour speech datasets for 100,000 learning iterations, producing two individual models. The two Speech datasets are completely identical in content o...
Preprint
Full-text available
This study suggests a new approach to EEG data classification by exploring the idea of using evolutionary computation to both select useful discriminative EEG features and optimise the topology of Artificial Neural Networks. An evolutionary algorithm is applied to select the most informative features from an initial set of 2550 EEG statistical feat...
Conference Paper
Full-text available
This paper proposes an approach to selecting the amount of layers and neurons contained within Multilayer Perceptron hidden layers through a single-objective evolutionary approach with the goal of model accuracy. At each generation, a population of Neural Network architectures are created and ranked by their accuracy. The generated solutions are co...
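The single-objective loop described above (generate a population of architectures, rank by accuracy, keep and mutate the best) can be sketched in a few lines. This toy version uses a stand-in fitness function, since training an MLP per candidate is the expensive part; everything here (the fitness surrogate, mutation rates, population size) is a hypothetical illustration, not the authors' implementation:

```python
# Toy single-objective evolutionary search over hidden-layer tuples.
# fitness() is a surrogate; in the paper it would be the trained
# network's accuracy on a validation set.
import random

random.seed(0)

def fitness(layers):
    # Hypothetical surrogate that favours ~96 total neurons in 2 layers.
    return -abs(sum(layers) - 96) - 5 * abs(len(layers) - 2)

def mutate(layers):
    layers = list(layers)
    r = random.random()
    if r < 0.3 and len(layers) > 1:
        layers.pop(random.randrange(len(layers)))   # remove a layer
    elif r < 0.6:
        layers.append(random.randint(8, 128))       # add a layer
    else:
        i = random.randrange(len(layers))           # resize a layer
        layers[i] = max(1, layers[i] + random.randint(-16, 16))
    return tuple(layers)

# Initial population: random architectures of 1-4 hidden layers.
population = [tuple(random.randint(8, 128) for _ in range(random.randint(1, 4)))
              for _ in range(10)]

for generation in range(20):
    population.sort(key=fitness, reverse=True)      # rank by fitness
    elite = population[:4]                          # keep the best 4
    population = elite + [mutate(random.choice(elite)) for _ in range(6)]

best = max(population, key=fitness)
print(best, fitness(best))
```

Swapping the surrogate for real training-and-evaluation of each candidate recovers the accuracy-driven search the paper describes, at correspondingly higher cost per generation.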
Conference Paper
Full-text available
This study proposes an approach to ensemble sentiment classification of a text to a score in the range of 1-5 of negative-positive scoring. A high-performing model is produced from TripAdvisor restaurant reviews via a generated dataset of 684 word-stems selected by their information gain ranking. Analysis documents the few mis-classified instances...
Poster
Full-text available
Phoneme awareness provides the path to high resolution speech recognition to overcome the difficulties of classical word recognition. Here we present the results of a preliminary study on Artificial Neural Network (ANN) and Hidden Markov Model (HMM) methods of classification for Human Speech Recognition through Diphthong Vowel sounds in the English...
Conference Paper
Full-text available
Phoneme awareness provides the path to high resolution speech recognition to overcome the difficulties of classical word recognition. Here we present the results of a preliminary study on Artificial Neural Network (ANN) and Hidden Markov Model (HMM) methods of classification for Human Speech Recognition through Diphthong Vowel sounds in the English...
Chapter
Full-text available
This paper proposes an approach to selecting the amount of layers and neurons contained within Multilayer Perceptron hidden layers through a single-objective evolutionary approach with the goal of model accuracy. At each generation, a population of Neural Network architectures are created and ranked by their accuracy. The generated solutions are co...
Chapter
Full-text available
This study proposes an approach to ensemble sentiment classification of a text to a score in the range of 1–5 of negative-positive scoring. A high-performing model is produced from TripAdvisor restaurant reviews via a generated dataset of 684 word-stems, gathered by information gain attribute selection from the entire corpus. The best performing cl...
Conference Paper
Full-text available
Accent classification provides a biometric path to high resolution speech recognition. This preliminary study explores various methods of human accent recognition through classification of locale. Classical, ensemble, timeseries and deep learning techniques are all explored and compared. A set of diphthong vowel sounds are recorded from participant...
Conference Paper
Full-text available
This paper explores single and ensemble methods to classify emotional experiences based on EEG brainwave data. A commercial MUSE EEG headband is used with a resolution of four (TP9, AF7, AF8, TP10) electrodes. Positive and negative emotional states are invoked using film clips with an obvious valence, and neutral resting data is also recorded with...
Article
Full-text available
This study suggests a new approach to EEG data classification by exploring the idea of using evolutionary computation to both select useful discriminative EEG features and optimise the topology of Artificial Neural Networks. An evolutionary algorithm is applied to select the most informative features from an initial set of 2550 EEG statistical feat...
Chapter
Full-text available
In this paper we propose an approach to a chatbot software that is able to learn from interaction via text messaging between human-bot and bot-bot. The bot listens to a user and decides whether or not it knows how to reply to the message accurately based on current knowledge, otherwise it will set about to learn a meaningful response to the message...
Chapter
Full-text available
Many image classification models have been introduced to help tackle the foremost issue of recognition accuracy. Image classification is one of the core problems in the Computer Vision field with a large variety of practical applications. Examples include: object recognition for robotic manipulation, pedestrian or obstacle detection for autonomous vehi...
Conference Paper
Full-text available
This work aims to find discriminative EEG-based features and appropriate classification methods that can categorise brainwave patterns based on their level of activity or frequency for mental state recognition useful for human-machine interaction. By using the Muse headband with four EEG sensors (TP9, AF7, AF8, TP10), we categorised three possible...
Conference Paper
Full-text available
Many image classification models have been introduced to help tackle the foremost issue of recognition accuracy. Image classification is one of the core problems in the Computer Vision field with a large variety of practical applications. Examples include: object recognition for robotic manipulation, pedestrian or obstacle detection for autonomous vehi...
Conference Paper
Full-text available
In this paper we propose an approach to a chatbot software that is able to learn from interaction via text messaging between human-bot and bot-bot. The bot listens to a user and decides whether or not it knows how to reply to the message accurately based on current knowledge, otherwise it will set about to learn a meaningful response to the message...

Projects

Projects (2)
Project
Sim2Real aims to develop a prototype Reinforcement Learning (RL) agent capable of knowledge transfer on a variety of tasks, as well as an architecture for a coach agent capable of devising synthetic tasks for training purposes. This will be achieved through the following Research Objectives (RO):
  • RO1 - Developing a reliable framework for knowledge transfer between widely varying tasks;
  • RO2 - Proposing an efficient, analytically powerful, predictive model of Knowledge Transfer;
  • RO3 - Proposing a coach agent architecture that automatically decomposes a complex RL task into a sequence of simpler ones;
  • RO4 - Validating the knowledge transfer from simulation to a real robotic application.
Project
PhD in Human-Robot Interaction, supervised by Dr. Diego R. Faria and Dr. Anikó Ekárt. In 2021, I was awarded the PhD.