River valley projects hold considerable promise in the seismically active Himalayan orogenic region. Some hydroelectric projects are already operational, others are in the planning stages, and a few more will be built shortly. Knowing the nature of ground motion at these locations is therefore critical. The present study uses a probabilistic seismic hazard analysis (PSHA) technique to estimate Peak Ground Acceleration (PGA) for three hydropower projects in Uttarakhand, Himachal Pradesh, and Jammu and Kashmir (India). Considering all potential earthquakes, the aim of PSHA is to quantify the rate of exceeding given ground motion levels at the project site. Hazard curves can be used to determine the seismic design input for a location and to analyze the seismic response of tunnels. The fundamental methods of PSHA are presented in this article to offer a clear and concise introduction to the theoretical basis and implementation of PSHA in today's engineering practice.
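The exceedance-rate computation at the heart of PSHA can be sketched in a few lines. The following is a minimal single-source illustration, not the study's actual hazard model: the source geometry, activity rate, Gutenberg–Richter parameters, and the toy attenuation relation are all assumed for demonstration.

```python
import numpy as np
from scipy import stats

# Assumed single-source parameters (illustrative only)
nu = 0.05                      # annual rate of M >= 5 events on the source
b = 1.0                        # Gutenberg-Richter b-value
m_min, m_max = 5.0, 8.0
R = 30.0                       # source-to-site distance, km
sigma_ln = 0.6                 # log-normal GMPE scatter

# Truncated-exponential (Gutenberg-Richter) magnitude distribution
mags = np.linspace(m_min, m_max, 100)
beta = b * np.log(10)
pdf = beta * np.exp(-beta * (mags - m_min)) / (1 - np.exp(-beta * (m_max - m_min)))
pmf = pdf / pdf.sum()

def median_pga(m, r):
    """Toy attenuation relation: ln PGA(g) = c0 + c1*m - c2*ln(r), assumed coefficients."""
    return np.exp(-3.5 + 0.9 * m - 1.0 * np.log(r))

def exceedance_rate(a):
    """Annual rate of PGA > a: nu * sum_m P(M=m) * P(PGA > a | m)."""
    p = stats.norm.sf(np.log(a), loc=np.log(median_pga(mags, R)), scale=sigma_ln)
    return nu * float((pmf * p).sum())

# The hazard curve: exceedance rate versus PGA level
pga_levels = np.logspace(-2, 0, 50)
hazard = np.array([exceedance_rate(a) for a in pga_levels])
```

Summing such curves over all sources affecting a site gives the total hazard curve from which design PGA values at a chosen return period are read off.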
With rapid advancements in technology, almost all the devices around us are becoming smart and joining the Internet of Things (IoT) network. When a new IoT device is added to the network, it is important to verify its authenticity before allowing it to communicate with the network. Hence, access control is a crucial security mechanism that allows only authenticated nodes to become part of the network. An access control mechanism also supports confidentiality by establishing a session key that enables secure communication over open public channels. Recently, blockchain has been incorporated into access control protocols to provide a stronger security mechanism. The foundation of this survey article is laid on IoT: we give a detailed description of IoT, its architecture, and its applications. Further, we cover various security challenges and issues, possible security attacks in IoT, and their countermeasures. We emphasize blockchain technology and its evolution in IoT, highlighting existing consensus mechanisms and how blockchain can be used to overcome IoT vulnerabilities. Moreover, we provide a comprehensive description of access control protocols, classified into certificate-based, certificate-less, and blockchain-based mechanisms for better understanding. We then elaborate on each use case, such as smart homes, smart grids, health care, and smart agriculture, while describing the corresponding access control mechanisms. The detailed description not only explains the implementation of the access mechanisms but also gives a wider vision of IoT applications. Next, a rigorous comparative analysis showcases the efficiency of all protocols in terms of computation and communication costs. Finally, we discuss open research issues and challenges in a blockchain-envisioned IoT network.
Binarization is an essential pre-processing step for many document image analysis tasks. Binarizing handwritten documents is more challenging than printed documents because of the non-uniform density of ink and the variable thickness of strokes. Instead of traditional scanners, people nowadays use mobile cameras to capture documents, including text written on whiteboards and glass boards. The quality of camera-captured document images is often poor compared with scanned document images, which impacts binarization accuracy. This paper presents a deep learning-based binarization framework called Deep Semantic Binarization (dsb) to binarize various document images. We pose document image binarization as a pixel-wise two-class classification task. Deep networks (including dsb) require many images during training; however, the publicly available benchmark datasets offer only a limited number of training images. We explore various training strategies, including transfer learning, to handle this data scarcity. Due to the unavailability of mobile-captured whiteboard and glass board images, we created two datasets, namely wbids-iiit and gbids-iiit, with associated ground truths. We validate dsb on the public benchmark dibco dataset as well as the wbids-iiit and gbids-iiit datasets, and empirically demonstrate that dsb outperforms the state-of-the-art techniques on all of them.
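To make the pixel-wise two-class framing concrete, here is the classical Otsu baseline that deep methods such as dsb are typically compared against. This is a generic global-thresholding sketch, not the paper's network: it assigns each pixel to foreground or background by maximizing between-class variance over a 256-bin histogram.

```python
import numpy as np

def otsu_threshold(gray):
    """Classical Otsu threshold: maximize between-class variance over 256 gray levels."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))        # cumulative mean up to level t
    mu_t = mu[-1]                             # global mean
    denom = omega * (1 - omega)
    denom[denom == 0] = np.nan                # avoid dividing by zero at the extremes
    sigma_b = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b))

def binarize(gray):
    """Pixel-wise two-class labeling: 1 for pixels above threshold, 0 otherwise."""
    t = otsu_threshold(gray)
    return (gray > t).astype(np.uint8)
```

Such global thresholds fail under the non-uniform ink density and glare of camera-captured board images, which motivates learned per-pixel classifiers.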
Climate change exposes the built and natural environments to more frequent natural hazards and physical vulnerabilities. Extreme precipitation and temperature events will have a significant impact on both the natural environment and human society. However, it is unclear whether precipitation and temperature extremes increase physical vulnerabilities across scales, or how they are linked with large-scale climate indices. This study investigates the relationship between precipitation and temperature extremes, as recommended by the Expert Team on Climate Change Detection and Indices (ETCCDI), and large-scale climatological phenomenon indices (the Indian Summer Monsoon Index (ISMI), Arctic Oscillation (AO), and North Atlantic Oscillation (NAO)), using India as a case study. According to Pearson's correlation coefficients and Wavelet Transform Coherence (WTC), extreme warm indices were primarily negatively related to ISMI, while extreme cold indices were positively related to it. The extreme precipitation indices had a significant positive relationship, primarily with AO. Furthermore, from 1951 to 2018, India experienced an increase in warm extremes over western, central, and peninsular India, while cold indices increased over northwest India. One-day and five-day maximum precipitation, very wet days, and extremely wet days have increased across India except in the Indo-Gangetic plains, while heavy and very heavy precipitation days, consecutive wet days, and consecutive dry days have decreased.
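The first of the two linkage measures used above, Pearson's correlation, can be sketched directly. The series below are synthetic stand-ins (the coupling strength and noise level are assumed purely for illustration), not the study's ETCCDI or ISMI data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
years = 68                                   # 1951-2018, one value per year
ismi = rng.standard_normal(years)            # synthetic stand-in for the ISMI series

# Synthetic warm-extreme index, negatively coupled to ISMI (assumed relation)
warm_days = -0.6 * ismi + 0.4 * rng.standard_normal(years)

# Pearson correlation coefficient and its two-sided p-value
r, p = stats.pearsonr(ismi, warm_days)
```

A significantly negative `r` is the kind of evidence behind the statement that warm extremes are negatively related to ISMI; WTC additionally localizes such co-variability in time and frequency.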
Afghanistan is a semi-arid country highly vulnerable to climate-extreme-related hazards, including droughts and floods, which have had a huge impact on its socio-economic development. The present study analysed observed precipitation and temperature trends for seven agro-climatic zones of Afghanistan over the period 1951 to 2007 using the Asian Precipitation-Highly-Resolved Observational Data Integration towards Evaluation of Water Resources (APHRODITE) dataset. Change in the magnitude of precipitation and temperature in recent years relative to the distant past was assessed by dividing the historical record into two parts, 1951–1990 and 1991–2006. Further, trend analysis was performed on daily data using the Mann–Kendall trend test to detect increasing or decreasing rainfall and temperature trends for each zone of Afghanistan. The months of maximum precipitation occurrence were January through May for all zones, whereas June through December are generally considered dry months. The maximum temperature was observed in May, June, July, and August, with July as the hottest month for all seven zones. Annual total precipitation showed an increasing trend for the South, South-West, East, and Central zones, whereas a decreasing trend was observed for the North, North-East, and West zones. Trend analysis of the gridded precipitation data reveals decreasing rainfall over most of Afghanistan, whereas all seven agro-climatic zones showed an increasing temperature trend in the recent years of 2004 to 2016. Overall, the North, North-East, and West zones of Afghanistan are more vulnerable, with decreasing precipitation and increasing temperatures indicating drier and warmer periods and hence increasing drought conditions. The South, South-West, East, and Central zones, in contrast, show increasing trends of both precipitation and temperature, indicating a shift toward wetter and warmer climates.
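The Mann–Kendall test used in both climate studies above is a simple rank-based statistic. The following is a minimal textbook implementation (without the tie correction used in full implementations), not the authors' code:

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction, for simplicity).
    Returns (S, Z, p): the S statistic, its normal approximation Z,
    and the two-sided p-value. Z > 0 indicates an increasing trend."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)       # continuity correction
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return s, z, p
```

Applied zone by zone to annual precipitation or temperature series, a significant positive `Z` flags an increasing trend and a negative `Z` a decreasing one.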
Objectives: The purpose of this study is to analyze the challenges that overseas cancer patients face while receiving treatment in a Tier 1 city in India. Methods: A total of 2835 overseas patients from 55 countries received cancer treatment in two hospitals in Delhi, India between November 2013 and April 2019. Of these, 937 patients participated in a 30-point questionnaire on patient-reported outcomes. The questionnaire covered clinical information, difficulties, expenditure, financial toxicity (FT), education, profession/earning, family/emotional issues, and the reason for choosing Delhi. Results: The patient population consisted of 1149 males and 1686 females with a mean age of 46.8±18.7 years. Patients came from countries spread across four geographical regions, namely Asia (19 countries), Europe (4), Oceania (8), and Africa (23). The two most prominent reasons for overseas treatment were (1) unreliable medical services at home or in nearby countries and (2) non-availability of the required medical services in the home country. 14.9% of the patients had metastatic disease. Of the 55 countries in total, 15 did not have any inland radiotherapy facility. The average number of caregivers was 1.3. The average treatment cost, travel cost, and one-month accommodation cost for 2.3 people were 10.1±9 thousand, 4.3±2.9 thousand, and 1.6 thousand United States dollars, respectively. While 116 (13.4%) patients reported an ignorable financial burden, the rest (86.4%) reported moderate to extreme financial distress. Among the financially distressed patients, 35% had exhausted their lifetime savings. All patients in this category reported a compromised lifestyle in terms of food and the treatment of other chronic diseases for themselves and their families. Conclusion: Patients from other countries coming to Delhi for cancer management travel long distances in search of reliable medical services.
In the process, many patients and their families exhaust their finances, leading to long-lasting financial distress. Lay Summary: Travel by cancer patients to another country for treatment (migration) is a worldwide problem. It is driven by the lack of reliable medical facilities in the home and nearby countries, or by the desire for better treatment. Cancer patient migration often produces several problems, including family and emotional issues, financial difficulties, loss of profession/earning, and problems associated with travel to other countries. A total of 937 cancer patients from 55 countries across four continents (Africa, Asia, Europe, and Oceania) were interviewed using a 30-point questionnaire about these difficulties and the reason for their choice of an overseas destination for cancer treatment. It was found that the most significant problems contributing to cancer patient migration were the lack of comprehensive cancer care facilities and the unreliability of existing facilities. Around 15% of migrated patients had terminal disease with limited life expectancy. This study, the first of its kind, identifies the problems associated with overseas cancer patients and quantifies them.
Parallel computation of hash functions, together with meeting their security requirements, offers great advantages in reducing time consumption and CPU overhead. In this article, a keyed hash function based on the Farfalle construction and chaotic neural networks (CNNs) is proposed, which generates a hash value of arbitrary (user-defined) length (e.g., 256 or 512 bits). The proposed hash function is inherently parallel because it is built over the Farfalle construction, which avoids dependencies between the blocks of a given message. Moreover, the proposed hash function is chaos-based (i.e., it relies on chaotic maps and CNNs, which have non-periodic behavior). The security analysis shows that the proposed hash function is robust and satisfies the required properties of hash algorithms, such as random-like (non-periodic) behavior, ideal sensitivity to the original message and secret key, the one-way property, and an optimal diffusion effect. The speed of the hash function is also analyzed and compared with a hash function built on the sponge construction and a CNN, as well as with secure hash algorithm (SHA) variants such as SHA-2 and SHA-3. The results show that the proposed hash function has lower time complexity and higher throughput, especially for large messages. Additionally, the proposed hash function offers sufficient resistance to multiple attacks, such as the collision attack, birthday attack, exhaustive key search attack, preimage and second preimage attacks, and meet-in-the-middle attack. These advantages make it well suited for use as a collision-resistant hash function.
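The "diffusion effect" claimed above is commonly checked with an avalanche test: flip one input bit and count how many output bits change, expecting about half for an ideal hash. Here is a generic sketch of that test using SHA-256 as a stand-in (the proposed CNN-based hash is not publicly available); the message and trial count are arbitrary choices.

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def avalanche_test(msg: bytes, trials: int = 64) -> float:
    """Average fraction of output bits flipped when a single input bit flips.
    Uses SHA-256 (256-bit digest) as a stand-in; an ideal hash gives ~0.5."""
    h0 = hashlib.sha256(msg).digest()
    total = 0
    for i in range(trials):
        flipped = bytearray(msg)
        pos = i % (len(msg) * 8)          # cycle through input bit positions
        flipped[pos // 8] ^= 1 << (pos % 8)
        h1 = hashlib.sha256(bytes(flipped)).digest()
        total += bit_diff(h0, h1)
    return total / (trials * 256)
```

A mean flip fraction near 0.5, with no bias across bit positions, is the behavior a hash with "optimal diffusion" should exhibit.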
Normative aging trends of the brain can serve as an important reference in the assessment of neurological structural disorders. Such models are typically developed from longitudinal brain image data, i.e., follow-up data of the same subject over different time points. In practice, obtaining such longitudinal data is difficult. We propose a method to develop an aging model for a given population in the absence of longitudinal data, using images from different subjects at different time points, the so-called cross-sectional data. We define an aging model as a diffeomorphic deformation on a structural template derived from the data and propose a method that develops a topology-preserving aging model close to natural aging. The proposed model is successfully validated on two public cross-sectional datasets, which provide templates constructed from different sets of subjects at different age points.
Understanding the nexus between land use land cover (LULC) and land surface temperature (LST) of a rapidly growing city may help planners mitigate the effects of uncontrolled urbanization on the micro- and macro-environment. The primary focus of the study is to monitor the changing LULC of Surat, one of the most rapidly growing cities in India. To comprehend the urban dynamics, the study analyses the tri-decadal LULC of Surat using temporal Landsat imagery corresponding to 1990, 2001, 2009, and 2020. Besides classifying the satellite data to derive LULC using the maximum likelihood algorithm, emphasis has been placed on evaluating the normalized difference vegetation index and the normalized difference built-up index, which help in differentiating vegetation and built-up areas from other land-use types. In addition, the LST of Surat is computed, and zonal analysis is performed to examine its association with LULC. Results show that the built-up area of Surat increased by 3.22 times over the considered period, while the areal extent of vegetation decreased by 1.58 times. Future land-use dynamics are predicted using the Markov model: the built-up area is expected to increase by 20% between 2020 and 2030, while the vegetation area is likely to decrease by 13%. The developed model attained an accuracy of 52.08%, which is in agreement with past studies. The findings of this study help urban planners and stakeholders devise effective policies that can mitigate the detrimental effects of rapid urbanization on the environment.
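The two spectral indices mentioned above are simple band ratios, shown here as a generic sketch on per-pixel reflectance arrays (band names follow the usual Landsat convention; the small epsilon guarding against zero denominators is an implementation choice, not part of the index definitions):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    High for vegetation (strong NIR reflectance), low for built-up and water."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-10)

def ndbi(swir, nir):
    """Normalized Difference Built-up Index: (SWIR - NIR) / (SWIR + NIR).
    High for built-up surfaces, which reflect more in SWIR than in NIR."""
    swir, nir = np.asarray(swir, float), np.asarray(nir, float)
    return (swir - nir) / (swir + nir + 1e-10)
```

Thresholding or combining these index maps (e.g., NDBI minus NDVI) is a standard way to separate built-up pixels from vegetation before zonal LST analysis.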
Ketones are a key functional group that recurs throughout chemistry and biology, and accessing them through simple and economical routes is highly desirable. Herein, we report the synthesis of unsymmetrical ketones from abundant toluene and alkyl esters, where volatile alcohols are the sole byproduct. The protocol applies to a repertoire of substrates bearing electron-donating, electron-withdrawing, and neutral substituents. Most importantly, the organometallic ferrocenyl ester underwent aroylation with ease. This method is the first example of furnishing diketones from methyl arenes and diesters. Furthermore, a cyclic imide was synthesized by this protocol using KN(SiMe3)2 as the 'nitrogen' source. Density functional theory studies provide insight into the deprotonation of toluene, whose acidity is increased by a K+–π interaction; this deprotonation is the rate-determining step.
Identity-based encryption (IBE) is an important cryptographic primitive employed to ensure the confidentiality of messages in communication. This article presents a provably secure identity-based encryption scheme built on a post-quantum security assumption. The security of the proposed encryption rests on a hard problem, namely Learning with Errors (LWE) on integer lattices. The construction is anonymous and produces pseudo-random ciphertexts. Both the public-key size and the ciphertext size are reduced in the proposed encryption compared with other relevant schemes, without compromising security. Next, we apply the constructed IBE to Internet of Things (IoT) applications, where IoT smart devices securely send their sensing data to nearby gateway node(s) using IBE encryption, and the gateway node(s) securely aggregate the data by decrypting the messages using the proposed IBE decryption. Later, the gateway nodes securely send the aggregated data to the cloud server(s), where big data analytics is performed on the authenticated data using Artificial Intelligence (AI)/Machine Learning (ML) algorithms for more accurate predictions.
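To give a feel for the LWE assumption underlying the scheme, here is a toy Regev-style bit encryption over integer lattices. This is not the paper's IBE construction, and the parameters are far too small for real security; it only illustrates how decryption recovers a bit despite the small error terms that make LWE hard.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 16, 64, 3329            # toy LWE dimensions and modulus (insecure)

# Key generation: public (A, b = A s + e mod q), secret s
s = rng.integers(0, q, n)
A = rng.integers(0, q, (m, n))
e = rng.integers(-2, 3, m)        # small error, |e_i| <= 2
b = (A @ s + e) % q

def encrypt(bit):
    """Encrypt one bit by summing a random subset of the public samples."""
    r = rng.integers(0, 2, m)     # random 0/1 subset selector
    c1 = (r @ A) % q
    c2 = (r @ b + bit * (q // 2)) % q
    return c1, c2

def decrypt(c1, c2):
    """c2 - c1.s is near 0 for bit 0 and near q/2 for bit 1 (up to small error)."""
    v = (c2 - c1 @ s) % q
    return int(min(v, q - v) > q // 4)
```

Correctness holds because the accumulated error (at most m·2 = 128 here) stays below q/4; security rests on (A, b) being indistinguishable from uniform, which is exactly the LWE problem.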
We present a joint multi-robot trajectory optimizer that can compute trajectories for tens of robots in aerial swarms within a small fraction of a second. The computational efficiency of our approach rests on breaking the per-iteration computation of the joint optimization into smaller, decoupled sub-problems and solving them in parallel through a custom batch optimizer. We show that each of the sub-problems can be reformulated to have a special Quadratic Programming structure, wherein the matrices are shared across all the problems and only the associated vector varies. As a result, the batch solution update rule reduces to computing just large matrix–vector products, which can be trivially accelerated using GPUs. We validate our optimizer's performance in difficult benchmark scenarios and compare it against existing state-of-the-art approaches. We demonstrate remarkable improvements in computation time and in its scaling with respect to the number of robots. Moreover, our method also performs better in trajectory quality as measured by smoothness and arc-length metrics.
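The shared-matrix structure described above can be sketched generically: when every sub-problem solves the same positive-definite system with a different right-hand side, one factorization serves all robots and the per-iteration update becomes a single batched solve. The sizes and random data below are placeholders, not the paper's formulation.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(1)
n_vars, n_robots = 50, 32

# Shared positive-definite QP matrix Q (identical across all sub-problems)
M = rng.standard_normal((n_vars, n_vars))
Q = M @ M.T + n_vars * np.eye(n_vars)

# One right-hand-side column per robot; only this part varies per sub-problem
G = rng.standard_normal((n_vars, n_robots))

factor = cho_factor(Q)        # Cholesky factorization computed once
X = cho_solve(factor, G)      # all robots' updates in one batched solve
```

Because `cho_solve` with a matrix right-hand side is just dense triangular solves, the whole batch maps naturally onto GPU matrix kernels, which is the source of the speedup the abstract describes.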
Short- and long-range reservoir inflow forecasts are essential for efficient real-time operational planning and scheduling of hydroelectric power systems and for the management of water resources. Large-scale climate phenomenon indices have a strong influence on hydrological processes under complex weather conditions and should be considered when forecasting reservoir inflow for efficient dam operation strategies. This study explores the relevance of large-scale climate phenomenon indices in improving reservoir inflow prediction at short-term time scales, and presents a simple and effective framework to combine various data-driven machine learning (ML) algorithms for short-range reservoir inflow forecasting. Random Forest (RF), Gradient Boosting Regressor (GBR), K-Nearest Neighbors Regressor (KNN), and Long Short-Term Memory (LSTM) models were employed to predict daily reservoir inflows considering various climate phenomenon indices (e.g., Arctic Oscillation, North Atlantic Oscillation, and Southern Oscillation Index) and hydroclimatic variables (precipitation), accounting for time-lag effects. After training the individual ML algorithms, an ensemble model was created using a robust weighted voting method: the inflow predictions of the single ML models are combined according to the weights assigned to each model, which quantifies forecasting uncertainty and improves performance. The developed framework was examined on two distinct reservoirs located in India and California, USA. The ensemble model consistently outperformed the standalone RF, GBR, KNN, and LSTM models in predicting both high flows (flood control and monsoon seasons) and low flows (runoff and non-monsoon seasons) at both study reservoirs. The demonstrated short-term forecasting model allows reservoir operators to incorporate regional hydrological and large-scale climate indices into real-time decision-making, and the presented framework can be applied to any reservoir inflow forecasting problem.
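The weighted combination step can be sketched in a few lines. This is a simplified stand-in for the paper's weighted-voting scheme: here the weights are taken as inverse validation RMSEs (an assumed choice) and the final forecast is their normalized weighted average of the individual model outputs.

```python
import numpy as np

def weights_from_rmse(rmses):
    """Weight each model by its inverse validation RMSE (assumed weighting rule)."""
    inv = 1.0 / np.asarray(rmses, float)
    return inv / inv.sum()

def weighted_ensemble(predictions, weights):
    """Combine per-model inflow predictions with a normalized weighted average.
    predictions: shape (n_models, n_timesteps); returns shape (n_timesteps,)."""
    P = np.asarray(predictions, float)
    w = np.asarray(weights, float)
    w = w / w.sum()
    return w @ P
```

With predictions from, say, RF, GBR, KNN, and LSTM stacked row-wise, the ensemble forecast is `weighted_ensemble(P, weights_from_rmse(val_rmses))`; the spread of the individual rows around it gives a crude uncertainty band.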
Anatomical variability seen in longitudinal or inter-subject data is usually described by the underlying deformation, captured by non-rigid registration of the images. Stationary Velocity Field (SVF) based non-rigid registration algorithms are widely used for this purpose. However, these methods cover only a limited range of deformations. We address this limitation and define an approximate metric space for the manifold of diffeomorphisms G. We propose a method to break down a large deformation into a finite set of small sequential deformations. This results in a broken geodesic path on G, whose length forms an approximate registration metric. We illustrate the method using a simple, intensity-based, log-demons implementation. Validation results show that the proposed method can capture large and complex deformations while producing qualitatively better results than state-of-the-art methods. The results also demonstrate that the proposed registration metric is a good indicator of the degree of deformation.
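The SVF machinery underlying log-demons style methods exponentiates a velocity field into a deformation by scaling and squaring: start from a very small deformation and repeatedly compose it with itself. Here is a 1-D illustrative sketch (real implementations work on 2-D/3-D fields with proper boundary handling), not the paper's registration code:

```python
import numpy as np

def exp_svf_1d(v, n_steps=6):
    """Exponentiate a 1-D stationary velocity field by scaling and squaring.
    Returns phi = exp(v) sampled on the grid 0..len(v)-1."""
    x = np.arange(len(v), dtype=float)
    phi = x + v / 2 ** n_steps            # small initial deformation id + v/2^N
    for _ in range(n_steps):
        # phi <- phi o phi, composed by linear interpolation on the grid
        phi = np.interp(phi, x, phi)
    return phi
```

Breaking a large deformation into small sequential ones, as the abstract describes, amounts to concatenating several such exponentials; summing the lengths of the pieces gives the broken-geodesic registration metric.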
An autonomous broker that liaises between retail customers and power-generating companies (GenCos) is essential for the smart grid ecosystem. The efficiency such brokers bring to the smart grid setup can be studied through a well-developed simulation environment. In this paper, we describe the design of one such energy broker, called VidyutVanika21 (VV21), and analyze its performance using a simulation platform called PowerTAC (PowerTrading Agent Competition). Specifically, we discuss the retail (VV21–RM) and wholesale market (VV21–WM) modules of VV21 that help the broker achieve high net profits in a competitive setup. Supported by game-theoretic analysis, VV21–RM designs tariff contracts that (a) maintain a balanced portfolio of different types of customers; (b) sustain an appropriate level of market share; and (c) introduce surcharges on customers to reduce energy usage during peak demand times. VV21–WM aims to reduce procurement cost by following the supply curve of each GenCo to identify its lowest ask for a particular auction, which is then used to generate suitable bids. We further demonstrate the efficacy of the retail and wholesale strategies of VV21 in the PowerTAC 2021 finals and through several controlled experiments.
The fully entangled fraction (FEF) is a significant figure of merit for density matrices. In bipartite d⊗d quantum systems, the threshold value FEF > 1/d carries significant implications for quantum information processing tasks. Like separability, the value of the FEF is also related to the choice of global basis of the underlying Hilbert space: a state having FEF ≤ 1/d might give a value > 1/d in another global basis. A change in the global basis corresponds to a global unitary action on the quantum state. In the present work, we find that there are quantum states whose FEF remains less than 1/d under the action of any global unitary, i.e., for any choice of global basis. We invoke the hyperplane separation theorem to demarcate this set from the states whose FEF can be increased beyond 1/d through global unitary action. Following this, we probe the marginals of a pure three-party qubit system. We observe that, under some restrictions on the parameters, even if two parties collaborate (through unitary action on their combined system) they cannot breach the FEF threshold. The study is further extended to some classes of mixed three-qubit and three-qutrit systems. Furthermore, the implications of our work for k-copy nonlocality and teleportation are also investigated.
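For a concrete handle on the quantity involved: the FEF of a state ρ is the maximum overlap ⟨Φ|ρ|Φ⟩ over all maximally entangled |Φ⟩. The sketch below (a generic numerical illustration, not the paper's construction) computes the overlap with the canonical |Φ+⟩ only, which is a lower bound on the FEF:

```python
import numpy as np

def phi_plus(d):
    """Canonical maximally entangled state |Phi+> = (1/sqrt d) sum_i |ii>."""
    v = np.zeros(d * d)
    for i in range(d):
        v[i * d + i] = 1.0
    return v / np.sqrt(d)

def singlet_fraction(rho, d):
    """Overlap <Phi+|rho|Phi+>, a lower bound on the FEF (the FEF maximizes
    this overlap over all maximally entangled states)."""
    v = phi_plus(d)
    return float(np.real(v @ rho @ v))

d = 2
rho_mixed = np.eye(d * d) / (d * d)   # maximally mixed two-qubit state
```

For the maximally mixed state this overlap equals 1/d² for every maximally entangled state, so its FEF is 1/d² < 1/d, a simple example of a state below the teleportation threshold.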
At the LHC, gluon-initiated processes are considered to be the primary source of di-Higgs production. However, in the presence of a new resonance, light-quark-initiated processes can also contribute significantly. In this paper, we study di-Higgs production mediated by a new singlet scalar. The singlet is produced in both quark-antiquark and gluon fusion processes through loops involving a scalar leptoquark and right-handed neutrinos. With benchmark parameters inspired by the recent resonant di-Higgs searches by the ATLAS collaboration, we examine the prospects of such a resonance in the TeV range at the High-Luminosity LHC (HL-LHC) in the bb̄τ+τ− mode with a multivariate analysis. We obtain the 5σ and 2σ contours and find that a significant part of the parameter space is within the reach of the HL-LHC.