Technical Report

Hypertext Transfer Protocol -- HTTP/1.1

Authors:
R. Fielding, J. Gettys, J. Mogul, H. Frystyk, L. Masinter, P. Leach, T. Berners-Lee

... Tunneling, therefore, provides the communication mechanism between two or more clouds. There are different types of tunneling, such as HTTP tunneling [17], GRE tunneling [18], VXLAN tunneling [19], and GENEVE tunneling [20]. Examining the details of VXLAN shows that it was introduced as MAC-in-IP encapsulation within an IP/UDP transport. ...
... These essential requirements need communication between multiple clouds [15], [16], [28]. Communication between multiple clouds can be provided by L2 tunnels [6], [7], [17]-[21]. The IBN can then deploy tunnels between clouds. Collectively, these tunnels form an overlay network. ...
Article
Full-text available
This paper presents an intent-based networking (IBN) system for the orchestration of OpenStack-based clouds and overlay networks between multiple clouds. Clouds need to communicate with other clouds for various reasons, such as reducing latency and overcoming single points of failure. An overlay network provides connectivity between multiple clouds for communication. Moreover, there can be several paths of communication between a source and a destination cloud in the overlay network. A machine learning model can be used to proactively select the best path for efficient network performance. Communication between the source and destination can then be established over the selected path. Communication in this type of scenario requires complex networking configurations. IBN provides a closed-loop and intelligent system for cloud-to-cloud communication. To this end, the IBN abstracts complex networking and cloud configurations by receiving an intent from a user, translating the intent, generating complex configurations for the intent, and deploying the configurations, thereby assuring the intent. Therefore, the IBN presented here has three major features: (1) it can deploy an OpenStack cloud at a target machine, (2) it can deploy GENEVE tunnels between different clouds that form an overlay network, and (3) it can leverage machine learning to find the best path for communication between any two clouds. As machine learning is an essential component of the intelligent IBN system, two linear and three non-linear models were tested. RNN, LSTM, and GRU models were employed for non-linear modeling; linear regression and SVR models were employed for linear modeling. Overall, the non-linear models outperformed the linear models, exhibiting similar performance to one another with an 81% R² score; the linear models also performed similarly to one another but with lower accuracy. The testbed contains an overlay network of 11 GENEVE tunnels between 7 OpenStack-based clouds deployed in Malaysia, Korea, Pakistan, and Cambodia at TEIN.
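To make the tunnel-deployment step concrete, here is a minimal sketch of how an orchestrator could bring up one GENEVE endpoint on a Linux host via iproute2; the interface name, VNI, and addresses are illustrative placeholders, not values from the paper's testbed.

```python
import subprocess

def run(cmd):
    """Run an iproute2 command and fail loudly if it errors."""
    subprocess.run(cmd.split(), check=True)

# Hypothetical values: the VNI, peer-cloud address, and overlay subnet
# are placeholders, not taken from the paper's deployment.
VNI = 100
REMOTE = "203.0.113.7"      # public IP of the peer cloud's tunnel endpoint
OVERLAY_IP = "10.0.0.1/24"  # this host's address on the overlay network

# Create a GENEVE interface, assign an overlay address, and bring it up.
run(f"ip link add gnv0 type geneve id {VNI} remote {REMOTE}")
run(f"ip addr add {OVERLAY_IP} dev gnv0")
run("ip link set gnv0 up")
```

Repeating this on each pair of clouds (with matching VNIs) yields the kind of L2 overlay the paper builds its path selection on.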
... Hypothesis: Message Queuing Telemetry Transport (MQTT) over Transmission Control Protocol (TCP) and over WebSocket has lower latency and more stable connections than the Hypertext Transfer Protocol (HTTP) in Internet of Things (IoT) systems for data transfer, so it is more suitable for a Digital Twin (DT) [3][4][5][6]. However, WebSocket has a bidirectional structure that fits web browsers naturally and is suitable for data visualization and web interaction tasks, which gives it a trade-off between low latency and web compatibility. ...
Article
Full-text available
This study aims to identify the most suitable IoT communication protocol for a digital twin architecture, focusing on meeting the demands of real-time data transmission with stability and security. The research methodology involves a comparative performance analysis of MQTT over TCP, MQTT over WebSocket, and HTTP protocols, evaluating their latency, connectivity stability, and IoT application feasibility using an ESP32 microcontroller and DHT22 sensor setup. The experimental results demonstrate that MQTT over TCP achieves the lowest average latency at 290.5 ms, while MQTT over WebSocket exhibits more stable latency profiles with a 193.2 ms standard deviation; HTTP, despite its broader compatibility, showed higher average latency of 342.6 ms with a 307.9 ms standard deviation. These findings provide valuable insights for digital twin implementations, enabling developers to make informed protocol selections based on specific requirements, whether prioritizing real-time performance through MQTT protocols or emphasizing system compatibility and simplicity through HTTP.
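As a rough illustration of the kind of latency comparison the study performs, the following sketch measures average round-trip times for MQTT publishes (paho-mqtt) and HTTP POSTs (requests). The broker and endpoint names are placeholders, the payload mimics a DHT22 reading, and taking the QoS 1 acknowledgement as the MQTT timing point is an assumption, not the paper's exact method.

```python
import time
import requests                    # pip install requests
import paho.mqtt.client as mqtt    # pip install paho-mqtt (1.x-style API assumed)

BROKER, HTTP_URL = "broker.local", "http://server.local/ingest"  # placeholders
PAYLOAD = '{"temp": 23.5, "hum": 61.0}'  # a DHT22-style reading

def mqtt_latency(n=50):
    client = mqtt.Client()
    client.connect(BROKER, 1883)
    client.loop_start()
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        info = client.publish("dt/sensor", PAYLOAD, qos=1)
        info.wait_for_publish()          # returns once the broker acks (QoS 1)
        samples.append((time.perf_counter() - t0) * 1000)
    client.loop_stop()
    return sum(samples) / n

def http_latency(n=50):
    samples = []
    with requests.Session() as s:        # keep-alive, as a browser would use
        for _ in range(n):
            t0 = time.perf_counter()
            s.post(HTTP_URL, data=PAYLOAD, timeout=5)
            samples.append((time.perf_counter() - t0) * 1000)
    return sum(samples) / n

print(f"MQTT/TCP avg: {mqtt_latency():.1f} ms, HTTP avg: {http_latency():.1f} ms")
```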
... [19]. Then, the backbone of the web page was invented (i.e., the hypertext markup language (HTML) [20], the hypertext transfer protocol (HTTP) [21], and the browser). Web 1.0 enables internet application users to access web server content and search for information. ...
Article
Full-text available
Web 3.0 marks the beginning of a new era for the internet, characterized by distributed technology that prioritizes data ownership and value expression. Web 3.0 aims to empower users by providing them with ownership and control of their data and digital assets rather than leaving them in the hands of large corporations. Web 3.0 relies on decentralization, which uses blockchain technology to ensure secure user communication. However, Web 3.0 still faces many security challenges that might affect its deployment and expose users’ data and digital assets to cybercriminals. This survey investigates the current evolution of Web 3.0, outlining its background, foundation, and application. This review presents an overview of cybersecurity risks that face a mature Web 3.0 application domain (i.e., decentralized finance (DeFi)) and classifies them into seven categories. Moreover, state-of-the-art methods for addressing these threats are investigated and categorized based on the associated security risks. Insights into the potential future directions of Web 3.0 security are also provided.
... Then, it sends these test cases to the target service to explore potential flaws. If a test case triggers an HTTP response code in the 5XX range from the target service [49], the fuzzer assumes that an error has occurred and stores the test case for further analysis. The main process of RESTful API fuzzing is as follows. ...
Article
Full-text available
Modern web services widely provide RESTful APIs for clients to access their functionality programmatically. Fuzzing is an emerging technique for ensuring the reliability of RESTful APIs. However, the existing RESTful API fuzzers repeatedly generate invalid requests due to unawareness of errors in the invalid tested requests and lack of effective strategy to generate legal value for the incorrect parameters. Such limitations severely hinder the fuzzing performance. In this paper, we propose DynER, a new test case generation method guided by dynamic error responses during fuzzing. DynER designs two strategies of parameter value generation for purposefully revising the incorrect parameters of invalid tested requests to generate new test requests. The strategies are, respectively, based on prompting Large Language Model (LLM) to understand the semantics information in error responses and actively accessing API-related resources. We apply DynER to the state-of-the-art fuzzer RESTler and implement DynER-RESTler. DynER-RESTler outperforms foREST on two real-world RESTful services, WordPress and GitLab with a 41.21% and 26.33% higher average pass rate for test requests and a 12.50% and 22.80% higher average number of unique request types successfully tested, respectively. The experimental results demonstrate that DynER significantly improves the effectiveness of test cases and fuzzing performance. Additionally, DynER-RESTler finds three new bugs.
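The 5xx-based error detection described in the citation context above can be sketched as a minimal fuzzing loop; the target URL, parameter names, and random value generator below are hypothetical and do not reflect DynER's actual generation strategies.

```python
import random, string
import requests  # pip install requests

BASE = "http://api.local"          # placeholder target service
findings = []                      # test cases kept for later analysis

def random_value(k=8):
    return "".join(random.choices(string.ascii_letters + string.digits, k=k))

# Minimal loop in the spirit described above: send generated requests and
# keep any test case that triggers a 5xx response from the service.
for _ in range(1000):
    case = {"path": "/items", "params": {"name": random_value(), "count": random_value()}}
    try:
        r = requests.get(BASE + case["path"], params=case["params"], timeout=3)
    except requests.RequestException:
        continue                   # network errors are not 5xx findings
    if 500 <= r.status_code < 600: # the server-error range checked by fuzzers
        findings.append((case, r.status_code))

print(f"stored {len(findings)} suspicious test cases")
```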
... To orchestrate and organise network messages, an application layer technology must be used. Researchers in [23] explore the Hypertext Transfer Protocol (HTTP), commonly used on the internet to serve content. Alternatively, the authors in [24] discuss the Advanced Message Queuing Protocol (AMQP), an application-specific protocol for the RabbitMQ message broker ecosystem. ...
Article
Full-text available
In recent years, the cost-effectiveness and versatility of Unmanned Aerial Vehicles (UAVs) have led to their widespread adoption in both military and civilian applications, particularly for operations in remote or hazardous environments where human intervention is impractical. The use of multi-agent UAV systems has notably increased for complex tasks such as surveying and monitoring, driving extensive research and development in control, communication, and coordination technologies. Evaluating and analysing these systems under dynamic flight conditions present significant challenges. This paper introduces a mathematical model for leader–follower structured Quadrotor UAVs that encapsulates their dynamic behaviour, incorporating a novel multi-agent ad hoc coordination network simulated via COOJA. Simulation results with a pipeline surveillance case study demonstrate the efficacy of the coordination network and show that the system offers various improvements over contemporary pipeline surveillance approaches.
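For comparison with the HTTP option mentioned in the citation context, a minimal AMQP publish to a RabbitMQ broker with the pika client might look as follows; the broker host, queue name, and telemetry fields are invented for illustration and are not from the paper.

```python
import json
import pika  # pip install pika -- RabbitMQ client for AMQP 0-9-1

# Hypothetical example: a follower UAV publishing its state to a
# coordination queue; host and queue names are placeholders.
conn = pika.BlockingConnection(pika.ConnectionParameters("broker.local"))
ch = conn.channel()
ch.queue_declare(queue="uav.telemetry", durable=True)

state = {"uav_id": "follower-1", "lat": 51.5, "lon": -0.12, "alt_m": 120.0}
ch.basic_publish(
    exchange="",                     # default exchange routes by queue name
    routing_key="uav.telemetry",
    body=json.dumps(state),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
conn.close()
```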
... The integration of IoT device techniques lacks interoperability, which describes the ability for heterogeneous IoT applications to integrate the same IoT device and for one IoT application to integrate heterogeneous IoT devices. Therefore, we envision an application layer protocol that can cooperate with (1) existing Internet and IoT protocols, such as MQTT [68], CoAP [69], and HTTP [70]; (2) the machine learning-based description technique discussed earlier in Section 4.1; and (3) the machine learning-based integration technique discussed earlier in this section, in order to (a) integrate heterogeneous IoT devices and collect their data, and (b) interpret collected data into formats and structures understandable by heterogeneous IoT applications. ...
Article
Full-text available
The Internet of Things (IoT) includes billions of sensors and actuators (which we refer to as IoT devices) that harvest data from the physical world and send it via the Internet to IoT applications to provide smart IoT services and products. Deploying, managing, and maintaining IoT devices for the exclusive use of an individual IoT application is inefficient and involves significant costs and effort that often outweigh the benefits. On the other hand, enabling large numbers of IoT applications to share available third-party IoT devices, which are deployed and maintained independently by a variety of IoT device providers, reduces IoT application development costs, time, and effort. To achieve a positive cost/benefit ratio, there is a need to support the sharing of third-party IoT devices globally by providing effective IoT device discovery, use, and pay between IoT applications and third-party IoT devices. A solution for global IoT device sharing must be the following: (1) scalable to support a vast number of third-party IoT devices, (2) interoperable to deal with the heterogeneity of IoT devices and their data, and (3) IoT-owned, i.e., not owned by a specific individual or organization. This paper surveys existing techniques that support discovering, using, and paying for third-party IoT devices. To ensure that this survey is comprehensive, this paper presents our methodology, which is inspired by Systematic Literature Network Analysis (SLNA), combining the Systematic Literature Review (SLR) methodology with Citation Network Analysis (CNA). Finally, this paper outlines the research gaps and directions for novel research to realize global IoT device sharing.
... The Hypertext Transfer Protocol (HTTP) is an application protocol commonly used on the World Wide Web (WWW) and is the foundation of WWW data communication [13]. The HTTP protocol defines the communication rules that must be followed when using HTTP. ...
Article
Full-text available
Information systems are technologies that help work proceed systematically. However, existing systems or applications are not yet integrated with one another, so many processes perform the same function on different systems; for example, the authentication process can be built using the web service concept. The integration or interoperability of information system software involves various components, which may create gaps that can compromise system security. In this study, security has been implemented in web services using JSON Web Tokens (JWT) with the HMAC-SHA512 algorithm, stored in browser cookies. The results show that this concept is very suitable for applications or information systems on different platforms that use the same service, and the JWT tokens were successfully stored in browser cookies. In addition, a comparison was made between the HMAC-SHA512 and HMAC-SHA256 algorithms; the final result showed a total time difference of 185 ms and an average time difference of 6.17 ms. It can be concluded that the HMAC-SHA512 algorithm is 0.9861% faster than the HMAC-SHA256 algorithm.
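A minimal sketch of the token scheme described above, using the PyJWT library's HS512 (HMAC-SHA512) support; the secret, claims, and lifetime are placeholders, and storing the resulting token in a browser cookie is left out.

```python
import datetime
import jwt  # pip install PyJWT

SECRET = "change-me"  # placeholder; in practice load from secure configuration

# Issue a token signed with HMAC-SHA512, as in the scheme described above.
now = datetime.datetime.now(datetime.timezone.utc)
claims = {
    "sub": "user42",                           # hypothetical subject
    "iat": now,
    "exp": now + datetime.timedelta(minutes=30),
}
token = jwt.encode(claims, SECRET, algorithm="HS512")

# Verification: pin the algorithm list so a client cannot downgrade it.
decoded = jwt.decode(token, SECRET, algorithms=["HS512"])
print(decoded["sub"])
```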
... However, there are also examples where analysis can be difficult, for instance HTTP/1.1. When analyzing only responses, one has to determine whether data should follow the header or not (as in the case of a response to a HEAD request) [47]. To fully solve this problem, systems can be used that perform distributed traffic capture at different points and subsequently synchronize the captures. ...
Article
Full-text available
This paper presents a summary of experience in developing a deep packet inspection system using full protocol decoding. The paper reviews the challenges encountered during implementation and provides a high-level overview of the solutions to these issues. The challenges can be grouped into two groups. The first group is related to the fundamental tasks that must be addressed when implementing full protocol decoding systems. This includes ensuring correct protocol parsing, which involves identifying and interpreting protocol headers and fields correctly. Moreover, it is necessary to ensure the processing of fragmented packets and the assembly of fragments into the original message. Additionally, the processing and analysis of encrypted traffic is a crucial task that may require the use of specialized algorithms and tools. The second group of problems is related to optimizing the process of full protocol decoding to ensure high-speed traffic processing, as well as supporting new protocols and the ability to add user-defined extensions. While there are open-source systems that address some of the primary issues associated with full protocol decoding, additional effort and specialized solutions may be needed to efficiently operate and expand the functionality of such systems. Although implementing deep network traffic analysis tools using full protocol decoding requires the use of advanced hardware and software technologies, the benefits of such analysis are significant. This approach provides a more complete understanding of network traffic patterns and enables more effective detection and prevention of cyber-attacks. It also allows for more accurate monitoring of network performance and the identification of potential bottlenecks or other issues that may impact network efficiency. In this article, we also emphasize the importance of system architecture development and implementation to ensure the successful deployment of deep network traffic analysis tools using full protocol decoding. Finally, we conducted an experiment in which several advanced optimizations were implemented in a system that had already solved the primary issues. These optimizations related to memory handling and were based on the features of the traffic processing scheme. The results showed a significant performance improvement in solving the secondary tasks described in this work.
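The HEAD ambiguity mentioned in the citation context above can be shown in a few lines: the same response bytes must be framed differently depending on the request method, which a capture point that sees only responses does not know (RFC 7230 §3.3: responses to HEAD carry headers but never a body). This toy parser, written purely for illustration, returns the expected body length or None when it cannot decide.

```python
from typing import Optional

RESPONSE = (b"HTTP/1.1 200 OK\r\n"
            b"Content-Length: 5\r\n"
            b"\r\n")

def body_length(headers: bytes, request_method: Optional[str]) -> Optional[int]:
    """Return the expected body length, or None if it cannot be decided."""
    cl = 0
    for line in headers.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            cl = int(line.split(b":", 1)[1])
    if request_method is None:
        return None          # responses alone are ambiguous
    if request_method == "HEAD":
        return 0             # HEAD responses carry headers but no body
    return cl

print(body_length(RESPONSE, "GET"))   # 5
print(body_length(RESPONSE, "HEAD"))  # 0
print(body_length(RESPONSE, None))    # None: the analyzer must guess or synchronize
```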
... One of the achievements of Web2 systems was their use of micro-services to delineate functionspecific backend systems. Two of the most important architecture design aspects of highly scalable web services today are: (i) the use of standard service interfaces in the form of REST-ful APIs (Representational State Transfer or REST) [5,6], and (ii) the disallowing of "backdoors" (direct links bypassing the APIs) to access the backend systems. ...
Preprint
Full-text available
Many businesses seeking new capabilities that blockchains may offer are deterred from fully embracing the technology due to fears of the classic ”vendor lock-in” and platform-capture into one specific blockchain. From an asset-centric perspective, most business applications seek certain desirable functional guarantees with regard to the state of the tokenized asset on the blockchain. These new capabilities must be accessible through standardized service interfaces. The emerging tokenized asset networks based on decentralized ledger technology must integrate seamlessly into existing financial IT systems through similar standard interfaces. As such, if blockchains are to be a foundational technology in the future Web3 Internet of Value, then several classes and types of standardized APIs must be specified, published, and widely deployed by the nascent tokenized asset industry. These standard APIs must provide business applications with a single uniform interface to the many and varied blockchains today, thereby reducing business IT costs and preventing platform-capture.
... Email, VoIP applications, file sharing, etc., have each taken a share of the total Internet traffic, but the largest share currently belongs to Web browsing applications, with almost 70% of the total traffic across the Internet [1]. Web traffic traditionally uses the TCP and HTTP [2][3] protocols for the request and delivery of web page content. There have already been numerous developments across these protocols with a view to improving performance without proposing any replacement of these standards. ...
Article
Full-text available
Popular Internet applications such as web browsing and web video download use the HTTP protocol as an application over the standard Transmission Control Protocol (TCP). Traditional TCP behavior is unsuitable for this style of application because their transmission rate and traffic pattern differ from conventional bulk transfer applications. Previous works have analyzed the interaction of these applications with the congestion control algorithms in TCP and proposed Congestion Window Validation (CWV) as a solution. However, this method was incomplete and has been shown to present drawbacks. This paper focuses on 'newCWV', which was designed to address these drawbacks. NewCWV provides a practical mechanism to estimate the available path capacity and suggests a more appropriate congestion control behavior. This paper describes how the algorithm was implemented in the Linux TCP/IP stack and tested by experiments, where results indicate that, with newCWV, browsing can get 50% faster in an uncongested network.
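As an aside to the kernel work described above, Linux exposes a per-socket hook for choosing the congestion control algorithm, shown in the sketch below. Note that newCWV modifies congestion window validation inside the stack and is not a pluggable module selectable by name, so "cubic" is used purely as an example of the switching mechanism.

```python
import socket

# Linux-only sketch: select the congestion control algorithm for one
# socket via the TCP_CONGESTION option (available in Python on Linux).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))  # b'cubic...'
s.close()
```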
... displays different videos to a web scraper depending on the meta-information available in the HTTP header that is sent with every request of the scraper. For this purpose, different profiles were set up, whereby each profile altered one field of the HTTP request header (Fielding et al. 1999). The benchmark profile is a web scraper based on Chrome (user-agent), English (accept-language), and a Los Angeles-based IP address (X-forwarded-for). ...
Preprint
The increasing adoption of econometric and machine-learning approaches by empirical researchers has led to a widespread use of one data collection method: web scraping. Web scraping refers to the use of automated computer programs to access websites and download their content. The key argument of this paper is that naïve web scraping procedures can lead to sampling bias in the collected data. This article describes three sources of sampling bias in web-scraped data. More specifically, sampling bias emerges from web content being volatile (i.e., being subject to change), personalized (i.e., presented in response to request characteristics), and unindexed (i.e., absence of a population register). In a series of examples, I illustrate the prevalence and magnitude of sampling bias. To support researchers and reviewers, this paper provides recommendations on anticipating, detecting, and overcoming sampling bias in web-scraped data.
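The profile idea described above can be sketched with the requests library: each profile overrides exactly one header field relative to a benchmark configuration, and differing responses hint at personalization. All header values, profile names, and the target URL below are illustrative, not the study's actual setup.

```python
import requests  # pip install requests

URL = "https://example.com/video"  # placeholder target page

# Benchmark profile in the spirit of the study: Chrome user-agent,
# English locale, and a Los Angeles-style forwarded IP (all invented).
BENCHMARK = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) Chrome/120.0 Safari/537.36",
    "Accept-Language": "en-US,en;q=0.9",
    "X-Forwarded-For": "198.51.100.10",
}
PROFILES = {
    "benchmark": {},
    "german-locale": {"Accept-Language": "de-DE,de;q=0.9"},
    "firefox-agent": {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:121.0) "
                                    "Gecko/20100101 Firefox/121.0"},
}

for name, override in PROFILES.items():
    headers = {**BENCHMARK, **override}       # change exactly one field
    html = requests.get(URL, headers=headers, timeout=10).text
    print(name, len(html))                    # differing sizes hint at personalization
```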
... HTTP has been developing for over 30 years. There have been three main versions of HTTP: HTTP/1.1 [61], HTTP/2 [17], and HTTP/3 [26]. Each later version provides more critical features that are able to support HAS. ...
... Representational State Transfer Hypertext Transfer Protocol (REST HTTP): HTTP [145] is the primary client/server protocol that adopts the request/response model. HTTP has been combined with the REST architecture [146] to ease the interaction between dissimilar entities over web-based services. ...
Article
Full-text available
Numerous municipalities employ the smart city model in large cities to improve the quality of life of their residents, utilize local resources efficiently, and save operating expenses. This model incorporates many heterogeneous technologies such as Cyber-Physical Systems (CPS), Wireless Sensor Networks (WSNs), and Cloud Computing (ClCom). However, effective networking and communication protocols are required to provide the essential harmonization and control of the many system mechanisms to achieve these crucial goals. The networking requirements and characteristics of smart city applications (SCAs) are identified in this study, as well as the networking protocols that can be utilized to serve the diverse data traffic flows that are required between the dissimilar mechanisms. Additionally, we show examples of the networking designs of a few smart city systems, such as smart transport, smart building, smart home, smart grid, smart water, pipeline monitoring, and control systems.
... Email, VoIP applications, file sharing, etc., have each taken a share of the total Internet traffic, but the largest share currently belongs to Web browsing applications, with almost 70% of the total traffic across the Internet [1]. Web traffic uses the TCP and HTTP [2][3] protocols for the request and delivery of web page content. There have already been numerous developments across these protocols with a view to improving performance without proposing any replacement of these standards. ...
Conference Paper
Full-text available
Popular Internet applications such as web browsing, web video download, or variable-rate voice suffer from standard Transmission Control Protocol (TCP) behaviour because their transmission rate and pattern differ from conventional bulk transfer applications. Previous works have analysed the interaction of these applications with the congestion control algorithms in TCP and proposed Congestion Window Validation (CWV) as a solution. However, this method was incomplete and has been shown to present drawbacks. This paper focuses on 'newCWV', which was proposed to address these drawbacks. newCWV provides a practical mechanism to estimate the available path capacity and suggests a more appropriate congestion control behaviour. These new modifications benefit variable-rate applications that are bursty in nature, with shorter transfer durations. In this paper, the algorithm was implemented in the Linux TCP/IP stack and tested by experiments, where results indicate that, with newCWV, browsing can get 50% faster in an uncongested network.
... During the communication process, the operations that the server should perform are determined by methods [40]. Regarding the DBL, the most used one is 'GET' which serves to 'retrieve whatever information (in the form of an entity)' from the other system [41]. Additionally, 'PUT' or 'POST', which are used to send data to the other server to create or update a resource [42], may be used depending on the final functionalities and permissions of the final users of the DBL. ...
Article
Full-text available
The Digital Building Logbook (DBL) was first introduced together with the Renovation Wave initiative, promoted by the European Commission and then defined in the proposal for a recast of the energy performance of buildings Directive, in December 2021, as a repository of relevant data on a building that aims to alleviate the current lack of information of the European building stock. Several data sources on buildings already exist at different levels in Europe, and their interlinkage is crucial for a proper data population of the future Building Logbook. However, these data sources are scattered and heterogeneous, thus, they need to be evaluated to determine their suitability for the DBL. This paper analyses the sources that currently exist in Spain and Italy, focusing respectively on Aragon and Lombardy region, and addressing their interoperability possibilities and the indicators collected. The results show that the available data are not fully aligned with the relevant indicators from the existing proposals for a European DBL, and that few data sources are currently suitable for the DBL, since most of them are not interoperable. Considering the features and limitations of the data sources, a dataflow general scheme based on the definition of the DBL is defined for each case study, and guidelines are presented on data collection and interoperability in order to make its implementation feasible at the European scale.
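A minimal sketch of the GET/PUT interaction pattern mentioned in the citation context above, against a hypothetical DBL endpoint; the base URL, resource identifier, and field names are invented for illustration and are not from the paper.

```python
import requests  # pip install requests

BASE = "https://dbl.example.org/api"  # hypothetical DBL endpoint

# GET: 'retrieve whatever information' about a building from another system.
building = requests.get(f"{BASE}/buildings/ES-AR-0001", timeout=10).json()

# PUT: update the resource, e.g. after an energy-performance assessment.
building["energy_rating"] = "B"       # illustrative field name
resp = requests.put(f"{BASE}/buildings/ES-AR-0001", json=building, timeout=10)
resp.raise_for_status()               # a 4xx/5xx here means the update was rejected
```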
... A 400 error should be returned if any non-standard value exists in the TE header. RFC 2616 [22] specifies that a 400 error should be returned whenever two TE or CL headers are present. RFC 7230, however, explicitly states that only a single valid CL field is allowed to be retained. ...
Article
Full-text available
Until the development of HTTP request smuggling in 2005, individual HTTP requests were considered independent entities and could not be split or merged. This is a security problem caused by inconsistent content-length interpretation between web servers, or by web servers that are not fully implemented in accordance with the RFC standards. It is especially dangerous for web services with complex web architectures. It can route victims to receive malicious responses, amplify the impact of certain low-threat vulnerabilities, steal user credentials, or bypass the defenses of network devices. However, since its concept and implementation are quite difficult to grasp, it is often ignored by many network administrators, leaving users who browse such websites vulnerable to HTTP request smuggling attacks. This paper proposes a general solution to deal with various HTTP request smuggling attacks. A reverse proxy implemented with Flask validates and cleans dubious HTTP requests from the client side and ensures that the original requests comply with the RFC standards. Therefore, website administrators no longer need to configure complicated network settings or customize open-source project code to resist or minimize the risk of HTTP request smuggling attacks. A series of experiments demonstrates that this method is effective and practical.
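The framing checks such a validating reverse proxy must perform can be sketched as a standalone function; this is an illustrative subset of the rules (reject TE together with CL, conflicting or malformed CL values, and non-standard TE values), not the paper's Flask implementation.

```python
from typing import List, Tuple

def validate_framing(headers: List[Tuple[str, str]]) -> Tuple[int, str]:
    """Return (status, reason): 400 if the framing headers are ambiguous."""
    te = [v for k, v in headers if k.lower() == "transfer-encoding"]
    cl = [v for k, v in headers if k.lower() == "content-length"]
    if te and cl:
        return 400, "both Transfer-Encoding and Content-Length present"
    if len(set(v.strip() for v in cl)) > 1:
        return 400, "conflicting Content-Length values"
    if any(v.strip().lower() != "chunked" for v in te):
        return 400, "non-standard Transfer-Encoding value"
    if cl and not cl[0].strip().isdigit():
        return 400, "malformed Content-Length"
    return 200, "ok"

print(validate_framing([("Content-Length", "4"), ("Transfer-Encoding", "chunked")]))
# -> (400, 'both Transfer-Encoding and Content-Length present')
```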
Conference Paper
Full-text available
The advance of Internet of Things (IoT) and Artificial Intelligence (AI) technologies has opened new application possibilities in several areas, including real-time monitoring. This work presents the development of a simulator of Artificial Intelligence of Things (AIoT) applications for monitoring rural areas using Unmanned Aerial Vehicles (UAVs). The proposal integrates an edge/fog/cloud architecture, in which UAVs equipped with cameras and AI algorithms detect animals in real time. The system distributes the processing load between the edge devices and the fog server, optimizing the efficiency and accuracy of the detections. The graphical interface developed allows the visualization and management of simulations, facilitating analysis and decision-making. The results demonstrate the feasibility and effectiveness of the system for monitoring hard-to-reach environments, contributing to efficient resource management and rapid response to application events.
Article
Full-text available
The Internet of Things (IoT) is a recent technology intended to facilitate the daily life of humans by providing the power to connect, control, and automate objects in the physical world. In this way, the IoT helps to improve our way of producing and working in various areas (e.g., agriculture, industry, healthcare, transportation). Basically, an IoT network comprises physical devices, equipped with sensors and transmitters, that are interconnected with each other and/or connected to the Internet. Its main objective is to gather and transmit data to a storage system such as a server or cloud to enable processing and analysis, ultimately facilitating rapid decision-making or enhancements to the user experience. In the realm of connected objects, an effective IoT data collection system plays a vital role by providing several benefits, such as real-time data monitoring, enhanced decision-making, and increased operational efficiency. However, because of the resource limitations of connected objects, such as low memory and battery capacity, or even single-use devices, IoT data collection presents several challenges, including scalability, security, interoperability, and flexibility, for both researchers and companies. The authors categorise current IoT data collection techniques and perform a comparative evaluation of these methods based on the topics analysed and elaborated by the authors. In addition, a comprehensive analysis of recent advances in IoT data collection is provided, highlighting different data types and sources, transmission protocols from connected sensors to a storage platform (server or cloud), the IoT data collection framework, and principles for streamlining the collection process. Finally, the most important research questions and future prospects for the effective collection of IoT data are summarised.
Thesis
Full-text available
IoT technologies generate large amounts of data that can be used to improve products and services. Cloud computing and providers like AWS are crucial enablers to solve emerging big data challenges. BHS Corrugated, a leading provider of solutions for the corrugated board industry, is migrating its on-premise data management infrastructure to the cloud to benefit from data collected in customer production lines. However, to gain knowledge from stored data, it must be accessible. Currently, BHS is missing a solution to provide data based on long-running queries in a consistent way. This thesis presents the LonqAPI, a Data-as-a-Service interface prototype for long-running queries on the Data Layer of BHS in the AWS Cloud. It focuses on scalability, efficiency, and extensibility to additional or changing long-running queries and data sources. Through literature research, technologies, architectures, and patterns to design and implement the LonqAPI are identified and compared. The outcome is a REST API implementing the polling pattern for asynchronous client-server communication, using AWS services like API Gateway and Lambda for the interface and Step Functions state machines for decoupled query execution. To illustrate the generic solution, an Athena query is integrated as a long-running query example.
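A client-side sketch of the polling pattern the thesis adopts: start the long-running query with one request, then poll a status resource until it leaves the running state. The endpoint paths, job fields, and state names below are hypothetical, not the LonqAPI's actual interface.

```python
import time
import requests  # pip install requests

BASE = "https://api.example.com"  # hypothetical endpoint in the LonqAPI style

# Start the long-running query; the service replies immediately with a job id.
job = requests.post(f"{BASE}/queries", json={"name": "daily-report"}, timeout=10).json()

# Poll until the job leaves the RUNNING state, pausing between attempts.
while True:
    status = requests.get(f"{BASE}/queries/{job['id']}", timeout=10).json()
    if status["state"] != "RUNNING":
        break
    time.sleep(2)  # a production client would use exponential backoff

if status["state"] == "SUCCEEDED":
    result = requests.get(f"{BASE}/queries/{job['id']}/result", timeout=30).json()
```

The appeal of the pattern is that the interface (API Gateway/Lambda) never has to hold a connection open for the duration of the query; the client trades a little latency for that decoupling.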
Article
Problems of web application security and protection against hackers are highly topical. Queries that users send to a web application via the Internet are registered in the log files of the web server. Analyzing log files allows detecting anomalous changes that take place on the web server and identifying attacks. In this work, different methods are used to analyze log files and detect anomalies. The proposed methods allow detecting anomalous queries received from malicious users in the log files of the web server.
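One simple ingredient of such log analysis is a signature pass over the access log; the following sketch flags request URLs containing patterns typical of injection attempts. The log-format regex and signatures are illustrative and are not the paper's methods, which real systems would combine with statistical anomaly detection.

```python
import re
from collections import Counter

# Matches the request line and status of a common access-log format.
LOG_LINE = re.compile(r'"(?:GET|POST) (?P<url>\S+) HTTP/[\d.]+" (?P<status>\d{3})')
# Toy signatures for SQL injection, XSS, path traversal, and null bytes.
SUSPICIOUS = re.compile(r"(union\s+select|<script|\.\./|%00)", re.IGNORECASE)

def scan(path):
    hits = Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = LOG_LINE.search(line)
            if m and SUSPICIOUS.search(m.group("url")):
                hits[m.group("url")] += 1
    return hits

for url, n in scan("access.log").most_common(10):  # placeholder file name
    print(n, url)
```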
Article
Full-text available
The rapid growth of network services and applications has led to an exponential increase in data flows on the internet. Given the dynamic nature of data traffic in the realm of internet content distribution, traditional TCP/IP network systems often struggle to guarantee reliable network resource utilization and management. The recent advancement of the Quick UDP Internet Connect (QUIC) protocol equips media transfer applications with essential features, including structured flow controlled streams, quick connection establishment, and seamless network path migration. These features are vital for ensuring the efficiency and reliability of network performance and resource utilization, especially when network hosts transmit data flows over end-to-end paths between two endpoints. QUIC greatly improves media transfer performance by reducing both connection setup time and transmission latency. However, it is still constrained by the limitations of single-path bandwidth capacity and its variability. To address this inherent limitation, recent research has delved into the concept of multipath QUIC, which utilizes multiple network paths to transmit data flows concurrently. The benefits of multipath QUIC are twofold: it boosts the overall bandwidth capacity and mitigates flow congestion issues that might plague individual paths. However, many previous studies have depended on basic scheduling policies, like round-robin or shortest-time-first, to distribute data transmission across multiple paths. These policies often overlook the subtle characteristics of network paths, leading to increased link congestion and transmission costs. In this paper, we introduce a novel multipath QUIC strategy aimed at minimizing flow completion time while taking into account both path delay and packet loss rate. Experimental results demonstrate the superiority of our proposed method compared to standard QUIC, Lowest-RTT-First (LRF) QUIC, and Pluginized QUIC schemes. The relative performance underscores the efficacy of our design in achieving efficient and reliable data transfer in real-world scenarios using the Mininet simulator.
Article
Full-text available
The Hypertext Transfer Protocol (HTTP) is a common target of distributed denial-of-service (DDoS) attacks in today's cloud computing environment (CCE). However, most existing datasets for Intrusion Detection System (IDS) evaluations are not suitable for CCEs. They are either self-generated or are not representative of CCEs, leading to high false alarm rates when used in real CCEs. Moreover, many datasets are inaccessible due to privacy and copyright issues. Therefore, we propose a publicly available benchmark dataset of HTTP-GET flood DDoS attacks on CCEs based on an actual private CCE. The proposed dataset has two advantages: (1) it uses CCE-based features, and (2) it meets the criteria for trustworthy and valid datasets. These advantages enable reliable IDS evaluations, tuning, and comparisons. Furthermore, the dataset includes both internal and external HTTP-GET flood DDoS attacks on CCEs. This dataset can facilitate research in the field and enhance CCE security against DDoS attacks.
Article
Over the past few decades, significant progress has been made in quantum information technology, from theoretical studies to experimental demonstrations. Revolutionary quantum applications are now in the limelight, showcasing the advantages of quantum information technology and becoming a research hotspot in academia and industry. To enable quantum applications to have a more profound impact and wider application, the interconnection of multiple quantum nodes through quantum channels becomes essential. Building an entanglement-assisted quantum network, capable of realizing quantum information transmission between these quantum nodes, is the primary goal. However, entanglement-assisted quantum networks are governed by the unique laws of quantum mechanics, such as the superposition principle, the no-cloning theorem, and quantum entanglement, setting them apart from classical networks. Consequently, fundamental efforts are required to establish entanglement-assisted quantum networks. While some insightful surveys have paved the way for entanglement-assisted quantum networks, most of these studies focus on enabling technologies and quantum applications, neglecting critical network issues. In response, this paper presents a comprehensive survey of entanglement-assisted quantum networks. Alongside reviewing fundamental mechanics and enabling technologies, the paper provides a detailed overview of the network structure, working principles, and development stages, highlighting the differences from classical networks. Additionally, the challenges of building wide-area entanglement-assisted quantum networks are addressed. Furthermore, the paper emphasizes open research directions, including architecture design, entanglement-based network issues, and standardization, to facilitate the implementation of future entanglement-assisted quantum networks.
Article
This study aimed to discuss the compliance of government open datasets on the Evaluation of Stricto Sensu Graduate Programs, available on the Dados Abertos Capes portal, with best practices for data on the web. The analysis assessed the conformity of the 29 datasets with the 35 best practices for publishing data on the web recommended by the World Wide Web Consortium, as well as the benefits achieved by compliance, in order to provide theoretical contributions from Information Science to support data providers in improving the data made available on the web. The analysis showed that, of the 35 best practices, 7 do not apply to the analyzed datasets; 20 were considered not met or only partially met, and 8 were considered met. The data available on the Dados Abertos Capes portal need adjustments in order to reach the third star of open data.
Chapter
With the rapid development of computer networks, today's network structure has become very complex. The promotion of Wi-Fi and the widespread use of mobile terminals such as cell phones, laptops, and smart watches have made wireless access the main way of surfing the Internet. Conventional congestion control algorithms, such as NewReno and CUBIC, perceive network congestion based on packet loss. In Wi-Fi scenarios, last-mile delivery is wireless, which may cause random loss. In this case, congestion control algorithms based on packet-loss perception cannot correctly distinguish network congestion and may reduce the congestion window when congestion has not occurred. To solve this problem, we propose an optimization of these congestion control algorithms that can intelligently perceive packet loss and improve TCP transmission performance. We evaluated TCP transmission performance on wired and wireless networks. The results show that the performance of our method is similar to that of the conventional congestion control algorithms in a wired environment, while bandwidth utilization is significantly improved on wireless networks. In addition, our work achieves great competition fairness, as shown by logical analysis. Keywords: TCP, congestion control algorithm, NewReno, CUBIC, wireless network
Chapter
Many attacks on TLS have been published exploiting vulnerabilities in implementations or in the specification. These attacks target different data sets that should be protected by cryptography: the plaintext of a TLS record, the PremasterSecret, the MasterSecret, or even the private key of the server. In this chapter, attacks are classified according to these targets and the basic attack technique used. None of the described attacks broke TLS completely, so they should not be considered a weakness of TLS but rather an indication of the growing understanding of TLS in the research community.
Chapter
The Hypertext Transfer Protocol (HTTP) is the essential application layer protocol on the Internet and the basis for communication on the World Wide Web. While HTTP was initially intended only for transmitting HTML and the data embedded in it, today almost any kind of data can be sent via this protocol. HTTP uses a very simple communication pattern where an HTTP request is always answered with an HTTP response. This simple communication pattern can be extended arbitrarily with additional HTTP headers. HTTP client authentication can be performed via the HTTP Basic or HTTP Digest mechanisms or via passwords in HTML forms. HTTP uses the services of the Transmission Control Protocol (TCP), which guarantees reliable data transmission.
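A small sketch of the HTTP Basic mechanism described above, using Python's standard library; since the credentials are merely base64-encoded, Basic should only be used over TLS. The host, path, and credentials are placeholders.

```python
import base64
import http.client

# HTTP Basic authentication: the Authorization header carries
# base64("user:password"), which is encoding, not encryption.
user, password = "alice", "secret"            # placeholder credentials
token = base64.b64encode(f"{user}:{password}".encode()).decode()

conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/protected", headers={"Authorization": f"Basic {token}"})
resp = conn.getresponse()
print(resp.status, resp.reason)               # 401 means the server rejected it
conn.close()
```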
Article
Full-text available
Push Extensions to the IMAP protocol (P-IMAP) defines extensions to the IMAPv4 Rev1 protocol [RFC3501] for optimization in a mobile setting, aimed at delivering extended functionality for mobile devices with limited resources. The first enhancement of P-IMAP is extended support to push changes actively to a client, rather than requiring the client to initiate contact to ask for state changes. In addition, P-IMAP contains extensions for email filter management, message delivery, and maintaining up-to-date personal information. Bindings to specific transports are explicitly defined. Ultimately, P-IMAP aims to be neutral with respect to the network transport. P-IMAP is a recommendation for interoperable intermediate implementations awaiting [LEMONADEPROFILEBIS] or the realization of the OMA MEM enabler using it.