Recent publications
The authors demonstrate the use of highly temperature-insensitive dense wavelength division multiplexing filters combined with optical reflectors and optical time domain reflectometry to uniquely identify specific optical fibres beyond the optical splitter in an optical access passive optical network (PON). The very low wavelength shift (∼0.1 pm/°C) of the filters over the full industrial temperature range facilitates the future field deployment of these components in the PON without the requirement for complex and expensive wavelength tracking at the optical line terminal.
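As a rough illustration of why such a low thermal coefficient matters, the sketch below estimates the worst-case filter drift; the 0.1 pm/°C figure comes from the abstract, while the industrial temperature range (-40 °C to +85 °C) and the 100 GHz grid spacing (~0.8 nm) are assumptions not stated there.

```python
# Back-of-envelope estimate of DWDM filter drift over the industrial
# temperature range. The 0.1 pm/°C coefficient is quoted in the abstract;
# the temperature range and 100 GHz grid spacing are assumptions.

coeff_pm_per_c = 0.1          # filter wavelength shift, pm/°C (from abstract)
t_min, t_max = -40.0, 85.0    # assumed industrial temperature range, °C
grid_spacing_pm = 800.0       # ~0.8 nm channel spacing on a 100 GHz grid (assumed)

worst_case_drift_pm = coeff_pm_per_c * (t_max - t_min)
fraction_of_channel = worst_case_drift_pm / grid_spacing_pm

print(f"Worst-case drift: {worst_case_drift_pm:.1f} pm "
      f"({fraction_of_channel:.2%} of an assumed 100 GHz channel spacing)")
```

Under these assumptions the worst-case drift is about 12.5 pm, a small fraction of a channel spacing, which is consistent with the claim that no wavelength tracking is needed at the optical line terminal.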
The mainstreaming of conspiracy narratives has been associated with a rise in violent offline harms, from harassment and vandalism of communications infrastructure to assault and, in its most extreme form, terrorist attacks. Group-level emotions of anger, contempt, and disgust have been proposed as a pathway to legitimizing violence. Here, we examine expressions of anger, contempt, and disgust as well as violence, threat, hate, planning, grievance, and paranoia within various conspiracy narratives on Parler. We found significant differences between conspiracy narratives on all measures, with narratives associated with higher levels of offline violence showing greater levels of expression.
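The abstract does not describe the measurement pipeline, but the general idea of scoring posts per narrative against construct-specific word lists can be sketched as below; the lexicons and posts are hypothetical and stand in for whatever lexicons or models the study actually used.

```python
# Toy illustration of scoring posts for expressions of group-level emotions
# and violence-related constructs. Word lists and posts are hypothetical;
# the paper's actual lexicons, models, and data are not specified here.
from collections import Counter
import re

LEXICONS = {
    "anger":    {"furious", "rage", "angry"},
    "contempt": {"pathetic", "worthless", "scum"},
    "disgust":  {"disgusting", "vile", "sickening"},
    "violence": {"fight", "attack", "destroy"},
}

def score_post(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    # Normalised rate of lexicon hits per construct
    return {name: sum(counts[w] for w in words) / total
            for name, words in LEXICONS.items()}

posts_by_narrative = {
    "narrative_A": ["They are vile and we should fight back", "so angry today"],
    "narrative_B": ["interesting thread, nothing to see here"],
}

for narrative, posts in posts_by_narrative.items():
    scores = [score_post(p) for p in posts]
    mean = {k: sum(s[k] for s in scores) / len(scores) for k in LEXICONS}
    print(narrative, {k: round(v, 3) for k, v in mean.items()})
```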
Question-driven automatic text summarization is a popular technique to produce concise and informative answers to specific questions using a document collection. Both query-based and question-driven summarization may fail to produce reliable summaries or to contain relevant information if they do not take advantage of both extractive and abstractive summarization mechanisms. In this article, we propose a novel extractive and abstractive hybrid framework designed for question-driven automatic text summarization. The framework consists of complementary modules that work together to generate an effective summary: (1) discovering appropriate non-redundant sentences as plausible answers using an open-domain multi-hop question answering system based on a convolutional neural network, a multi-head attention mechanism, and a reasoning process; and (2) a novel paraphrasing generative adversarial network model based on transformers that rewrites the extracted sentences in an abstractive setup. Experiments show this framework produces more reliable abstractive summaries than competing methods. We have performed extensive experiments on public datasets, and the results show our model can outperform many question-driven and query-based baseline methods (an R1, R2, and RL increase of 6%–7% over the next highest baseline).
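The extract-then-rewrite structure of such a framework can be illustrated with a toy sketch: a simple overlap-based sentence selector stands in for the multi-hop QA module, and a placeholder rewriting step stands in for the transformer-based paraphrasing GAN. Neither stage is the paper's actual model.

```python
# Toy extract-then-abstract pipeline: (1) pick non-redundant candidate
# sentences by question-term overlap, (2) hand them to a rewriting step.
# Both stages are simplified stand-ins for the paper's QA and GAN modules.
import re

def split_sentences(doc):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", doc) if s.strip()]

def extract(question, documents, k=2):
    q_terms = set(re.findall(r"\w+", question.lower()))
    scored = []
    for doc in documents:
        for sent in split_sentences(doc):
            overlap = len(q_terms & set(re.findall(r"\w+", sent.lower())))
            scored.append((overlap, sent))
    scored.sort(key=lambda x: x[0], reverse=True)
    selected = []
    for _, sent in scored:
        if sent not in selected:          # crude redundancy filter
            selected.append(sent)
        if len(selected) == k:
            break
    return selected

def rewrite(sentences):
    # Placeholder for the abstractive paraphrasing stage.
    return " ".join(sentences)

question = "What reduces fuel consumption?"
docs = ["Smooth acceleration reduces fuel consumption. Weather varies.",
        "Harsh cornering increases fuel use and wear."]
print(rewrite(extract(question, docs)))
```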
Digitalisation and the Internet of Things (IoT) help city councils improve services, increase productivity and reduce costs. City-scale monitoring of traffic and pollution enables the development of insights into low-air-quality areas and the introduction of improvements. IoT provides a platform for the intelligent interconnection of everyday objects and has become an integral part of citizens' lives. Anyone can monitor everything from their fitness to the air quality of their immediate environment using everyday technologies. With caveats around privacy and accuracy, such data could even complement those collected by authorities at city scale, for validating or improving policies. The authors explore the hierarchies of urban sensing from citizen to city scale, how sensing at different levels may be interlinked, and the challenges of managing the urban IoT. The authors provide examples from the UK, map the data generation processes across levels of urban hierarchies and discuss the role of emerging sociotechnical urban sensing infrastructures, that is, independent, open, and transparent capabilities that facilitate stakeholder engagement and the collection and curation of grassroots data. The authors discuss how such capabilities can become a conduit for the alignment of community- and city-level action via an example of tracking the use of shared electric bicycles in Bristol, UK.
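One way to picture the citizen-to-city hierarchy is as readings rolled up from individual devices to neighbourhood and city level; the sketch below uses invented device IDs, areas, and PM2.5 values purely for illustration and is not drawn from the paper's data.

```python
# Toy roll-up of citizen-level sensor readings (e.g., PM2.5) into
# neighbourhood and city aggregates. Device IDs, areas, and values are
# invented for illustration only.
from collections import defaultdict
from statistics import mean

# (device_id, neighbourhood, pm25_reading) -- citizen-scale data
readings = [
    ("dev-01", "Easton", 14.2),
    ("dev-02", "Easton", 17.8),
    ("dev-03", "Clifton", 9.5),
    ("dev-04", "Clifton", 11.1),
]

by_neighbourhood = defaultdict(list)
for _, area, value in readings:
    by_neighbourhood[area].append(value)

neighbourhood_avg = {area: mean(vals) for area, vals in by_neighbourhood.items()}
city_avg = mean(v for vals in by_neighbourhood.values() for v in vals)

print("Neighbourhood averages:", neighbourhood_avg)
print("City-scale average:", round(city_avg, 2))
```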
This survey uncovers the tension between AI techniques designed for energy saving in mobile networks and the energy demands those same techniques create. We compare modeling approaches that estimate the power usage of current commercial terrestrial next-generation radio access network deployments. We then categorize emerging methods for reducing power usage by domain: time, frequency, power, and spatial. Next, we conduct a timely review of studies that attempt to estimate the power usage of the AI techniques themselves. We identify several gaps in the literature. Notably, real-world power-consumption data are difficult to source due to commercial sensitivity, and comparing methods to reduce energy consumption is challenging because of the diversity of system models and metrics. Crucially, the energy cost of AI techniques is often overlooked, though some studies provide estimates of algorithmic complexity or run-time. We find that extracting even rough estimates of the operational energy cost of AI models and data processing pipelines is complex. Overall, the current literature hinders a meaningful comparison between the energy savings from AI techniques and their associated energy costs. Finally, we discuss future research opportunities to uncover the utility of AI for energy saving.
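To illustrate why even rough operational-energy estimates matter, the sketch below compares an assumed saving from an AI-driven energy-saving feature with an assumed cost of training and running the model. Every figure is a placeholder, not a value from the surveyed studies.

```python
# Back-of-envelope comparison of energy saved by an AI technique versus the
# energy it consumes. Every number here is an illustrative assumption.

# Assumed savings: 5% reduction at a site drawing an average of 5 kW.
site_power_kw = 5.0
saving_fraction = 0.05
hours_per_year = 24 * 365
saved_kwh = site_power_kw * saving_fraction * hours_per_year

# Assumed AI cost: one 300 W GPU for 100 h of training,
# plus 20 W of continuous inference overhead.
training_kwh = 0.3 * 100
inference_kwh = 0.02 * hours_per_year
ai_cost_kwh = training_kwh + inference_kwh

print(f"Estimated annual saving: {saved_kwh:.0f} kWh")
print(f"Estimated annual AI cost: {ai_cost_kwh:.0f} kWh")
print(f"Net benefit: {saved_kwh - ai_cost_kwh:.0f} kWh")
```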
The recently introduced QUIC protocol has greatly increased the flexibility of end-to-end transmissions on the Internet, surpassing the design limits of the most popular transport protocols: TCP and UDP. However, some of TCP's main design principles were carried over to QUIC and may not be suitable for real-time use cases; chief among them is full reliability, which requires every packet to be retransmitted until acknowledged by the receiver. In this work, we present dr, a partial reliability framework that allows granular alteration of reliability per packet at the transport layer. The framework is built on QUIC and its multipath extension, yet offers no-ack and no-retransmit modes for truly unreliable packet transmission (congestion control neither impacts nor is influenced by unreliable packets). The "dynamic" in dr refers to interchanging reliable and unreliable transmission, within one session and across multiple paths, depending on the volatility of the communication system and guided by reliability policies. Fluidly altering packet reliability may offer a means of meeting stringent 5G and Beyond transmission requirements, especially for xURLLC use cases. We examine the performance of dr in single- and multiple-path architectures through system-level simulation using Mininet. The results show performance comparable to the original (MP)QUIC on vital QoE metrics under the dynamic reliability policies. Moreover, the enhancements at the transport layer stem from a reduction in communication congestion of up to 80% for single- and multiple-path connections compared to the original (MP)QUIC. As a result, the amount of backlogged and out-of-order packets is reduced, downsizing intermediate and end-to-end buffer occupancies.
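The abstract does not give dr's interface, but the core idea of a per-packet reliability policy can be sketched as a simple classifier that tags each packet as reliable or no-ack/no-retransmit based on its type and deadline. The packet fields, thresholds, and mode names below are hypothetical.

```python
# Toy per-packet reliability policy in the spirit of dr: latency-critical
# media frames are sent unreliably (no-ack, no-retransmit), while control
# and key frames stay fully reliable. Fields and thresholds are hypothetical,
# not dr's actual interface.
from dataclasses import dataclass

@dataclass
class Packet:
    kind: str          # "control", "key_frame", "delta_frame"
    deadline_ms: float # remaining time before the data is useless

def reliability_policy(pkt: Packet) -> str:
    if pkt.kind in ("control", "key_frame"):
        return "reliable"                 # retransmit until acknowledged
    if pkt.deadline_ms < 20.0:
        return "no_ack_no_retransmit"     # stale data is not worth resending
    return "no_retransmit"                # acknowledged for feedback, never resent

for pkt in [Packet("control", 100), Packet("delta_frame", 10),
            Packet("delta_frame", 50)]:
    print(pkt.kind, pkt.deadline_ms, "->", reliability_policy(pkt))
```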
Each year JOCN has the privilege of inviting the best optical networking papers presented at the Optical Fiber Communication Conference (OFC) for an extended write-up. The January and February issues of the journal include a Special Edition covering OFC 2023, and there are 15 excellent papers to explore between the two issues.
The metaverse is seen as an evolution paradigm of the next-generation Internet, able to support a diverse range of persistent and always-on interconnected synchronous multiuser virtual environments where people can engage with others in real time, merging the physical and virtual worlds [1], [2], [3]. The concept was first mentioned in Neal Stephenson's 1992 novel "Snow Crash" [4], and it follows the web and mobile Internet revolutions, allowing users to experience virtual environments in an immersive and hyperspatiotemporal manner [1]. Thus, it represents a paradigm shift in digital interaction, enabling real-time, multidimensional experiences that transcend the boundaries of physical space with the promise of bringing new levels of social connection and collaboration. The metaverse exists within the Internet, but not in the traditional way of seeing the world through a screen [1]. Instead, the metaverse aims to provide immersive experiences based on the convergence of spatial computing technologies that enable multisensory user interactions [e.g., virtual reality (VR), augmented reality (AR), and mixed reality (MR)] [2], [3] combined with 3-D data and artificial intelligence. The metaverse is also related to the concept of digital twins (DTs), which are digital replicas of elements in the real world (e.g., assets and processes) that mirror and synchronize in real time with their source, creating a bidirectional connection between them. While DTs focus on the bidirectional connection between real and virtual and the accuracy of the representation toward better decision-making, the metaverse addresses the sociotechnical challenges of seamless embodied communication between users and of dynamic interactions with virtual spaces.
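As a minimal, generic illustration of the bidirectional connection a digital twin maintains with its physical counterpart (not drawn from the cited works), consider the following sketch, where state flows from asset to twin and commands flow back.

```python
# Minimal digital-twin sketch: the twin mirrors state updates from the
# physical asset, and commands issued on the twin flow back to the asset.
# This is a generic illustration, not a reference implementation.
class PhysicalAsset:
    def __init__(self):
        self.temperature = 21.0
        self.setpoint = 21.0

    def sense(self):
        return {"temperature": self.temperature, "setpoint": self.setpoint}

    def actuate(self, setpoint):
        self.setpoint = setpoint

class DigitalTwin:
    def __init__(self, asset):
        self.asset = asset
        self.state = {}

    def sync_from_asset(self):          # physical -> virtual
        self.state = self.asset.sense()

    def command(self, setpoint):        # virtual -> physical
        self.asset.actuate(setpoint)
        self.sync_from_asset()

asset = PhysicalAsset()
twin = DigitalTwin(asset)
twin.sync_from_asset()
twin.command(23.5)
print(twin.state)   # twin now reflects the new setpoint on the asset
```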
The objective of this work is to train a chatbot capable of solving evolving problems by conversing with a user about a problem the chatbot cannot directly observe. The system consists of a virtual problem (in this case a simple game), a simulated user that can observe and act on the problem and answer natural language questions about it, and a Deep Q-Network (DQN)-based chatbot architecture. The chatbot is trained with the goal of solving the problem through dialogue with the simulated user using reinforcement learning. The contributions of this paper are as follows: a proposed architecture for applying a conversational DQN-based agent to evolving problems, an exploration of the effect of training methods such as curriculum learning on model performance, and an analysis of modified reward functions in the case of increasing environment complexity.
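The DQN-plus-simulated-user setup can be illustrated with a toy version: a hidden value the agent can only learn about by questioning a simulated user, trained with a minimal replay-buffer DQN. PyTorch, the game, the rewards, and the network sizes are all assumptions for illustration, not the paper's actual design.

```python
# Toy DQN agent that must query a simulated user about a hidden problem
# state before acting. Environment, rewards, and network are illustrative.
import random
import torch
import torch.nn as nn

N = 4  # the hidden value is one of N options

class ToyDialogueEnv:
    """Simulated user holding a hidden value the agent cannot observe."""
    def reset(self):
        self.hidden = random.randrange(N)
        self.state = torch.zeros(2 * N)   # per option: [asked, answer_was_yes]
        return self.state.clone()

    def step(self, action):
        if action < N:                                   # "is it option a?"
            self.state[2 * action] = 1.0
            self.state[2 * action + 1] = float(action == self.hidden)
            return self.state.clone(), -0.1, False       # small cost per question
        guess = action - N                               # "I think it is a"
        reward = 1.0 if guess == self.hidden else -1.0
        return self.state.clone(), reward, True

q_net = nn.Sequential(nn.Linear(2 * N, 64), nn.ReLU(), nn.Linear(64, 2 * N))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer, gamma, eps = [], 0.95, 0.2
env = ToyDialogueEnv()

for episode in range(2000):
    s, done = env.reset(), False
    while not done:
        a = random.randrange(2 * N) if random.random() < eps else int(q_net(s).argmax())
        s2, r, done = env.step(a)
        buffer.append((s, a, r, s2, done))
        if len(buffer) > 5000:
            buffer.pop(0)
        s = s2
        if len(buffer) >= 64:
            batch = random.sample(buffer, 64)
            states = torch.stack([b[0] for b in batch])
            actions = torch.tensor([b[1] for b in batch])
            rewards = torch.tensor([b[2] for b in batch])
            next_states = torch.stack([b[3] for b in batch])
            dones = torch.tensor([float(b[4]) for b in batch])
            with torch.no_grad():
                target = rewards + gamma * (1 - dones) * q_net(next_states).max(1).values
            q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(q, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Curriculum learning, in this toy framing, would correspond to starting training with small N and growing it as the agent improves.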
Cyber threats and vulnerabilities present an increasing risk to the safe and frictionless execution of business operations. Bad actors ("hackers"), including state actors, are increasingly targeting the operational technologies (OTs) and industrial control systems (ICSs) used to protect critical national infrastructure (CNI). Minimising cyber risk and attack surfaces, ensuring data immutability, and achieving IoT interoperability are some of the main challenges of today's CNI. Cyber security risk assessment is one of the basic and most important activities for identifying and quantifying cyber security threats and vulnerabilities. This research presents a novel i-TRACE security-by-design CNI methodology that encompasses CNI key performance indicators (KPIs) and metrics to combat the growing vicarious nature of remote, well-planned, and well-executed cyber-attacks against CNI, as recently exemplified on both sides of the current Ukraine conflict (2014–present). The proposed methodology offers a hybrid method that specifically identifies the steps required, typically undertaken by those responsible for detecting, deterring, and disrupting cyber attacks on CNI. Furthermore, we present a novel, advanced, and resilient approach that leverages digital twins and distributed ledger technologies for our chosen i-TRACE use cases of energy management and connected sites. The key steps required to achieve the desired level of interoperability and immutability of data are identified, thereby reducing the risk of CNI-specific cyber attacks and minimising the attack vectors and surfaces. Hence, this research aims to provide an extra level of safety for CNI and OT human operatives, i.e., those tasked with detecting, deterring, disrupting, and mitigating these cyber-attacks. Our evaluations and comparisons demonstrate that i-TRACE has significant intrinsic advantages compared to existing state-of-the-art mechanisms.
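As a simplified illustration of the data-immutability idea (not the i-TRACE implementation or its distributed ledger), each OT telemetry record can be chained to the previous one by hash so that later tampering is detectable. The record fields below are invented.

```python
# Toy hash-chained log for OT/ICS telemetry: each entry commits to the
# previous entry's hash, so altering history breaks verification.
# This is a single-node illustration, not i-TRACE's distributed ledger.
import hashlib, json

def entry_hash(entry):
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(chain, reading):
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"reading": reading, "prev": prev}
    entry["hash"] = entry_hash({"reading": reading, "prev": prev})
    chain.append(entry)

def verify(chain):
    prev = "0" * 64
    for entry in chain:
        expected = entry_hash({"reading": entry["reading"], "prev": entry["prev"]})
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = []
append(chain, {"sensor": "pump-7", "pressure_bar": 4.2})
append(chain, {"sensor": "pump-7", "pressure_bar": 4.3})
print(verify(chain))                       # True
chain[0]["reading"]["pressure_bar"] = 9.9  # tamper with history
print(verify(chain))                       # False
```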
Despite the significant impact of driver behavior on fuel consumption and carbon dioxide equivalent (CO2e) emissions, this phenomenon is often overlooked in road freight transportation research. We review the relevant literature and seek to provide a deeper understanding of the relationship between freight drivers’ behavior and fuel consumption. This study utilizes a real-life dataset of over 4000 driving records from the freight logistics sector to examine the effects of specific behaviors on fuel consumption. Analyzed behaviors include harsh acceleration/deceleration/cornering, over-revving, excessive revolutions per minute (RPM), and non-adherence to legal speed limits ranging from 20 to 70 miles per hour (mph). Our findings confirm existing literature by demonstrating the significant impact of certain driving characteristics, particularly harsh acceleration/cornering, on fuel consumption. Moreover, our research contributes new insights into the field, notably highlighting the substantial influence of non-adherence to the legal speed limits of 20 and 30 mph on fuel consumption, an aspect not extensively studied in previous research. We subsequently introduce an advanced fuel consumption model that takes into account these identified driver behaviors. This model not only advances academic understanding of fuel consumption determinants in road freight transportation, but also equips practitioners with practical insights to optimize fuel efficiency and reduce environmental impacts.
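A simple version of such a fuel-consumption model is a linear regression of fuel use on behaviour features; the sketch below fits one with ordinary least squares on synthetic records. The feature names, coefficients, and values are invented and are not drawn from the paper's dataset.

```python
# Toy linear fuel-consumption model: litres/100 km regressed on counts of
# harsh-driving events and speed-limit non-adherence. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 500
harsh_accel = rng.poisson(3, n)             # events per trip
harsh_corner = rng.poisson(2, n)
over_30mph_share = rng.uniform(0, 0.4, n)   # share of trip above a 30 mph limit

# Synthetic "ground truth" relationship plus noise
fuel = 28 + 0.6 * harsh_accel + 0.4 * harsh_corner \
         + 9.0 * over_30mph_share + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), harsh_accel, harsh_corner, over_30mph_share])
coefs, *_ = np.linalg.lstsq(X, fuel, rcond=None)
print(dict(zip(["intercept", "harsh_accel", "harsh_corner", "over_30mph_share"],
               np.round(coefs, 2))))
```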
Information
Address
Ipswich, United Kingdom