
Publication History

  • ABSTRACT: Modern societies have reached a point where everyday life relies heavily on the correct operation of critical infrastructures, in spite of accidental failures and/or deliberate attacks. Maintaining the desired performance of critical infrastructure systems (CIS) at a high security level receives considerable attention worldwide. This paper presents pioneering generic methodologies and methods for designing systems capable of achieving these objectives cost-effectively, both in existing CIS and in future ones. A control systems engineering approach to integrated monitoring, control and security of CIS is applied. First, a multilayer structure for an intelligent, autonomous, reconfigurable agent operating within a single region of a CIS is derived. Methods and algorithms for synthesising the layers are proposed so that the agent can autonomously perform the required control activities under a wide range of operating conditions. The system's ability to meet the desired operational objectives across this range is achieved by supervised reconfiguration of the agents. The recently proposed robustly feasible model predictive control technology, with soft switching between different control strategies, is applied to implement soft and robustly feasible agent reconfiguration adequate to the current operating conditions. Next, the development of multiagent structures suitable for monitoring, control and security of an overall CIS is discussed, based on distributed structuring of the agent layers. The proposals are illustrated by applications to an integrated wastewater treatment case-study system and a drinking water distribution system.
    Annual Reviews in Control 01/2014;
  • Source
    ABSTRACT: The concept of common operational pictures (COPs) is explored through the application of social network analysis (SNA) and agent-based modelling to a generic search and rescue (SAR) scenario. Comparing the command structure that might arise from standard operating procedures with the sort of structure that might arise from examining information-in-common, using SNA, shows how one structure could be more amenable to 'command' with the other being more amenable to 'control' - which is potentially more suited to complex multi-agency operations. An agent-based model is developed to examine the impact of information sharing with different forms of COPs. It is shown that networks using common relevant operational pictures (which provide subsets of relevant information to groups of agents based on shared function) could result in better sharing of information and a more resilient structure than networks that use a COP. Practitioner Summary: SNA and agent-based modelling are used to compare different forms of COPs for maritime SAR operations. Different forms of COP change the communications structures in the socio-technical systems in which they operate, which has implications for future design and development of a COP.
    Ergonomics 04/2013;
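Comparisons of communications structures like the one above typically rest on basic SNA measures. As a minimal illustrative sketch only (pure Python; the edge list, undirected-network assumption and normalisation are this sketch's assumptions, not the paper's actual analysis), normalised degree centrality can be computed as:

```python
def degree_centrality(edges):
    """Normalised degree centrality for an undirected network given as a
    list of (node, node) edges: degree divided by (n - 1) possible ties."""
    nodes = set()
    for a, b in edges:
        nodes.update((a, b))
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    n = len(nodes)
    return {v: deg.get(v, 0) / (n - 1) for v in nodes}

# A star-shaped 'command' network: one hub talking to every other agent.
centrality = degree_centrality([("hub", "a"), ("hub", "b"), ("hub", "c")])
```

A highly centralised structure (hub centrality 1.0) is efficient for command but fragile; flatter structures with more even centrality scores tend to be more resilient, which is the kind of contrast the SNA comparison above draws out.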
  • ABSTRACT: The paralinguistic information in a speech signal includes clues to the geographical and social background of the speaker. This paper is concerned with automatic extraction of this information from a short segment of speech. A state-of-the-art language identification (LID) system is applied to the problems of regional accent recognition for British English, and ethnic group recognition within a particular accent. We compare the results with human performance and, for accent recognition, the ‘text dependent’ ACCDIST accent recognition measure. For the 14 regional accents of British English in the ABI-1 corpus (good quality read speech), our LID system achieves a recognition accuracy of 89.6%, compared with 95.18% for our best ACCDIST-based system and 58.24% for human listeners. The “Voices across Birmingham” corpus contains significant amounts of telephone conversational speech for the two largest ethnic groups in the city of Birmingham (UK), namely the ‘Asian’ and ‘White’ communities. Our LID system distinguishes between these two groups with an accuracy of 96.51% compared with 90.24% for human listeners. Although direct comparison is difficult, it seems that our LID system performs much better on the standard 12-class NIST 2003 Language Recognition Evaluation task and the two-class ethnic group recognition task than on the 14-class regional accent recognition task. We conclude that automatic accent recognition is a challenging task for speech technology, and speculate that the use of natural conversational speech may be advantageous for these types of paralinguistic task.
    Computer Speech & Language 01/2013; 27(1):59–74.
  • ABSTRACT: Graphics processing units (GPUs) provide substantial processing power for little cost. We explore the application of GPUs to speech pattern processing, using language identification (LID) to demonstrate their benefits. Realization of the full potential of GPUs requires both effective coding of predetermined algorithms, and, if there is a choice, selection of the algorithm or technique for a specific function that is most able to exploit the GPU. We demonstrate these principles using the NIST LRE 2003 standard LID task, a batch processing task which involves the analysis of over 600 h of speech. We focus on two parts of the system, namely the acoustic classifier, which is based on a 2048 component Gaussian Mixture Model (GMM), and acoustic feature extraction. In the case of the latter we compare a conventional FFT-based analysis with IIR and FIR filter banks, both in terms of their ability to exploit the GPU architecture and LID performance. With no increase in error rate our GPU based system, with an FIR-based front-end, completes the NIST LRE 2003 task in 16 h, compared with 180 h for the conventional FFT-based system on a standard CPU (a speed up factor of more than 11). This includes a 61% decrease in front-end processing time. In the GPU implementation, front-end processing accounts for 8% and 10% of the total computing times during training and recognition, respectively. Hence the reduction in front-end processing achieved in the GPU implementation is significant.
    Computer Speech & Language 10/2012; 26(5):371–383.
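The acoustic classifier above scores feature vectors against a large GMM. As a hedged illustration of that scoring step only (pure Python, one feature vector, diagonal covariances, a log-sum-exp for stability; the paper's 2048-component GPU implementation is far more elaborate and is not reproduced here):

```python
import math

def gmm_log_likelihood(x, weights, means, variances):
    """Log-likelihood of one feature vector x under a diagonal-covariance
    GMM with K components described by per-component weights, mean vectors
    and variance vectors."""
    log_terms = []
    for w, mu, var in zip(weights, means, variances):
        # log of the weighted diagonal Gaussian for this component
        log_g = math.log(w)
        for xi, mi, vi in zip(x, mu, var):
            log_g += -0.5 * (math.log(2 * math.pi * vi) + (xi - mi) ** 2 / vi)
        log_terms.append(log_g)
    # log-sum-exp over components for numerical stability
    m = max(log_terms)
    return m + math.log(sum(math.exp(t - m) for t in log_terms))

# Toy example: two 1-D unit-variance components at 0.0 and 1.0.
ll = gmm_log_likelihood([0.0], [0.5, 0.5], [[0.0], [1.0]], [[1.0], [1.0]])
```

Each component's contribution is independent per dimension under the diagonal-covariance assumption, which is exactly the kind of data-parallel arithmetic that maps well onto a GPU.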
  • Source
    ABSTRACT: We have used a 3-GHz microwave host cavity to study the remarkable electronic properties of metallic, single-walled carbon nanotubes. Powder samples are placed in its magnetic field antinode, which induces microwave currents without the need for electrical contacts. Samples are shown to screen effectively the microwave magnetic field, implying an extremely low value of sheet resistance (< 10 μΩ) within the graphene sheets making up the curved nanotube walls. Associated microwave losses are large due to the large surface area, and also point to a similar, very small value of sheet resistance due to the inherent ballistic electron transport.
    Nanoscale Research Letters 08/2012; 7:429.
  • ABSTRACT: A procedure for designing digital Butterworth filters is proposed. The procedure determines the denominator and the numerator of the filter transfer function based on the positions of the poles in the s-plane and zeros in the z-plane, respectively, and calculates the gain factor using a maximum point normalization method. In contrast to some conventional algorithms, the presented procedure is much simpler, as it directly obtains the filter with the specified 3-dB frequencies. This makes the presented algorithm a useful tool for determining the boundaries in electronic or communication systems' frequency responses. Moreover, the proposed algorithm is compatible with high-order transformations, which are a limitation of general pole-zero placement techniques. The proposed method is illustrated by examples of designing low-pass, high-pass, band-pass, and band-stop filters.
    Computers & Electrical Engineering 07/2012; 38(4):811-818.
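For context, the classical analog Butterworth prototype that underlies such designs places its poles equally spaced on a semicircle of radius ωc in the left half of the s-plane. The sketch below shows only that standard pole placement; it is not the authors' proposed procedure, which additionally places zeros in the z-plane and computes a gain factor:

```python
import cmath
import math

def butterworth_poles(n, wc=1.0):
    """Poles of an n-th order analog Butterworth low-pass prototype with
    3-dB cut-off wc: n points equally spaced on the left half of a circle
    of radius wc, at angles pi*(2k + n + 1)/(2n) for k = 0..n-1."""
    return [wc * cmath.exp(1j * math.pi * (2 * k + n + 1) / (2 * n))
            for k in range(n)]

# Second-order prototype: conjugate pole pair at -sqrt(2)/2 +/- j*sqrt(2)/2.
poles = butterworth_poles(2)
```

All poles land strictly in the left half-plane, which guarantees a stable filter; a discrete-time design would then map these poles into the z-plane (e.g. via the bilinear transform).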
  • Source
ABSTRACT: The strategies of novice and expert crime scene examiners were compared in searching crime scenes. Previous studies have demonstrated that experts frame a scene by reconstructing the likely actions of a criminal and use contextual cues to develop hypotheses that guide the subsequent search for evidence. Novices (first-year undergraduate students of forensic sciences) and experts (experienced crime scene examiners) examined two "simulated" crime scenes. Performance was captured through a combination of concurrent verbal protocol and own-point-of-view recording, using head-mounted cameras. Although both groups paid attention to the likely modus operandi of the perpetrator (in terms of possible actions taken), the novices paid more attention to individual objects, whereas the experts paid more attention to objects with "evidential value." The novices explore the scene in terms of the objects it contains, putting their effort into detailing its features, whereas the experts put their effort into the evidence analysis that can be performed as a consequence of the examination. The findings have helped in developing the expertise of novice crime scene examiners and approaches to training expertise within this population.
    Human Factors The Journal of the Human Factors and Ergonomics Society 06/2012; 54(3):413-24.
  • Source
    Thin Solid Films 04/2012; 520(13):4506.
  • University of Birmingham Teaching and Learning Conference 2012; 01/2012
  • ABSTRACT: The advent of modern railway signalling and train control technology allows the implementation of advanced real-time railway management. Optimisation algorithms can be used to: minimise the cost of delays; find solutions to recover disturbed scenarios back to the operating timetable; improve railway traffic fluidity on high capacity lines; and improve headway regulation. A number of researchers have previously considered the problem of minimising the costs of train delays and have used various optimisation algorithms for differing scenarios. However, little work has been carried out to evaluate and compare the different approaches. This paper compares and contrasts a number of optimisation approaches that have been previously used and applies them to a series of common scenarios. The approaches considered are: brute force, first-come-first-served, Tabu search, simulated annealing, genetic algorithms, ant colony optimisation, dynamic programming and decision tree based elimination. It is found that simple disturbances (i.e. one train delayed) can be managed efficiently using straightforward approaches, such as first-come-first-served. For more complex scenarios, advanced methods are found to be more appropriate. For the scenarios considered in this paper, ant colony optimisation and genetic algorithms performed well, reducing the delay cost by 30% and 28%, respectively, compared with first-come-first-served.
Journal of Rail Transport Planning & Management 01/2012; 2(1–2):23–33.
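The first-come-first-served baseline used in the comparison above can be sketched for a single junction as follows. This is an illustrative toy model under stated assumptions only (one shared junction, a fixed minimum headway, delay measured as departure minus schedule); the paper's scenarios and cost function are more detailed:

```python
def fcfs_dispatch(trains, headway):
    """First-come-first-served dispatch through a single junction.
    trains: list of (name, scheduled_time) pairs; headway: minimum time
    separation between consecutive trains through the junction.
    Returns the dispatch order with departure times, and per-train delays."""
    order = sorted(trains, key=lambda t: t[1])  # serve in scheduled order
    schedule, delays = [], {}
    clock = None  # departure time of the previously dispatched train
    for name, sched in order:
        depart = sched if clock is None else max(sched, clock + headway)
        delays[name] = depart - sched
        schedule.append((name, depart))
        clock = depart
    return schedule, delays

# Three trains contending for the junction with a 1.0-unit headway:
# C is scheduled only 0.5 after B and so is held back 0.5.
schedule, delays = fcfs_dispatch([("B", 2.0), ("A", 0.0), ("C", 2.5)], 1.0)
```

Because FCFS never reorders trains, a single delayed train can propagate its delay down the whole queue; the metaheuristics compared in the paper (e.g. genetic algorithms, ant colony optimisation) search over alternative orderings to reduce that cost.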