# Khaje Nasir Toosi University of Technology

• Tehran, Iran
Recent publications
Customer retention is among the most important concerns of any organization. The main purpose of the present study is to correctly identify customer needs using a machine learning method based on opinion mining, sentiment analysis, and quantification of customers' sentiment orientation. In other words, the main goal is to design a recommender system that provides appropriate services according to customer satisfaction, sentiment, and experience. The proposed method draws on customers' opinions and experiences, obtained by evaluating tweets whose hashtags contain the titles and headings of banking services, as the statistical population. Correlation scores derived from people's sentiment scores in the tweets, cosine similarity, reliability, the relevant feature groups, and the opinions recorded during training and testing are then combined to deliver a personalized offer of banking services. To build the recommender, suitable classification methods are used alongside opinion mining techniques and a proper validation approach; the resulting system operates with low error in providing personalized services and supporting the banking system. Since no existing system offers banking services fully tailored to the customer's situation, such a system would be highly beneficial.
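The similarity step of such a recommender can be sketched as follows. This is a minimal illustration, not the authors' implementation; the feature axes, service names, and sentiment weights are hypothetical:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(customer_profile, service_profiles):
    """Rank services by similarity to a sentiment-weighted customer profile."""
    scores = {name: cosine_similarity(customer_profile, vec)
              for name, vec in service_profiles.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical feature axes: [mobile banking, loans, savings], weighted by
# the customer's per-topic sentiment scores extracted from tweets.
customer = np.array([0.9, 0.1, 0.4])
services = {
    "mobile_app": np.array([1.0, 0.0, 0.2]),
    "loan_offer": np.array([0.1, 1.0, 0.1]),
}
ranking = recommend(customer, services)   # most similar service first
```

In practice the profile vectors would come from the classification and opinion-mining stages described above, with tweet sentiment scores supplying the weights.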
Constructing hydrogen generation units is very expensive, and it is sometimes impossible to build them at the desired location, so transferring hydrogen from the generation site to the units that need it is justified. Because gaseous hydrogen requires a large storage volume and its transportation is complex, storing hydrogen in liquid form can resolve this problem. When liquid hydrogen (LH2) is transported over long distances, heat transfer from the environment vaporizes part of the LH2, forming boil-off gas (BOG). Instead of merely reliquefying the BOG, this study proposes and assesses a system that uses part of the BOG as feed for a novel liquefaction process while the remainder is utilized in a proton exchange membrane fuel cell (PEMFC) to generate power. The cold energy of the LH2 cargo vessel's onsite liquid oxygen utility further cools the mixed-refrigerant liquefaction cycle. With 130 kg/h of BOG as input, 60.37 kg/h of liquid hydrogen is produced, and the rest enters the PEMFC together with 552.7 kg/h of oxygen to generate 1592 kW of power. The total thermal efficiency of the integrated system and the electrical efficiency of the PEMFC are 83.18% and 68.76%, respectively. For the liquefaction cycle, the specific power consumption (SPC) and coefficient of performance (COP) are 3.203 kWh/kg LH2 and 0.1876, respectively. Exergy analysis shows that the exergy destruction of the whole system is 937.4 kW and its exergy efficiency is 58.38%. Exergoeconomic and Markov analyses have also been applied to the integrated system, and the optimal performance of the PEMFC has been extracted by varying its key parameters.
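The reported liquefaction figures can be cross-checked with a back-of-envelope calculation, assuming SPC is defined as net compression work per kilogram of liquid hydrogen produced (a common convention, not stated explicitly in the abstract):

```python
# Back-of-envelope check of the reported liquefaction figures.
# Assumption (not stated in the abstract): SPC = net compression work
# per kilogram of liquid hydrogen produced.
spc_kwh_per_kg = 3.203       # specific power consumption, kWh/kg LH2
lh2_rate_kg_per_h = 60.37    # liquid hydrogen production rate, kg/h

# kWh/kg * kg/h = kWh/h = kW of net compression power
net_power_kw = spc_kwh_per_kg * lh2_rate_kg_per_h   # ≈ 193.4 kW
```

Under this assumption, the liquefaction train would draw roughly 193 kW, an order of magnitude consistent with the 1592 kW PEMFC output reported above.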
This paper describes the results of three-dimensional (3D) finite element analyses investigating the installation effects of groups of stone columns in purely cohesive soils. Installation of a stone column is simplified to the insertion of a rigid cylindrical element with a conical tip into a single homogeneous soil layer (Tresca plasticity and a quasi-incompressible elastic law). Installation of a single column is simulated as the reference case, and the installation of two columns and of a group of nine columns is considered to study the interaction between the installation of several columns and the influence of the installation sequence. The process is simulated using a Coupled Eulerian-Lagrangian formulation. Stone column installation alters the surrounding soil, and the numerical results show the increase in horizontal stresses and pore pressures. The installation effects of several columns at common spacings overlap and accumulate, producing higher horizontal stresses and pore pressures over a larger area. The installation sequence is mainly visible around the last column installed, where the radial stresses are lower.
A comprehensive assessment of the energetic parameters of a novel multi-generation system, including a heliostat field, a CO2 cycle, a desalination unit, an electrodialysis unit, and a combined cooling, heating, and power (CCHP) unit, is conducted. The outputs of the system are electric power, cooling, heating, desalinated water, NaClO, and hydrogen. The study quantifies energy and exergy efficiencies, exergoenvironmental indicators, and key environmental factors. The assessment additionally incorporates economic and environmental features of the proposed system to highlight the social costs of air pollution. System performance is compared against conventional non-renewable-energy-powered systems through four scenarios. The system produces 11.89 GWh/year of electrical energy, 5.3 GWh/year of cooling, 6.5 GWh/year of heating, 9.8 ton/year of NaClO, 2.2 × 10⁸ m³/year of hydrogen, and 2.8 × 10⁵ m³/year of desalinated water; the energy and exergy efficiencies are estimated at 22.39% and 40.24%, respectively, and the payback period of the investment is determined to be 5 years.
Activated carbon (AC) powder was fixed with calcium alginate into a melamine-formaldehyde sponge to separate organics easily from water and wastewater. The specific surface area and the mean pore size of the adsorbent were 354.3 m²/g and 2.63 nm, respectively. The contact angle of the AC-sponge was ~0° owing to complete and rapid absorption of water droplets, indicating its superhydrophilic character. The adsorption isotherms were well represented by both the Langmuir and Freundlich equations. The adsorption kinetics of methylene blue (MB), used as an indicator, followed the pseudo-second-order model. Continuous treatment was investigated in a novel capillary adsorption system, a simple, efficient, and energy-saving method in which water passes through a strip of the sponge from one side to the other by capillary action and the difference in water level between the two sides. The system was operated at initial MB concentrations of 10 and 50 mg/L and flow rates of 0.75 and 1.5 mL/min. The results showed easy water flow inside the superhydrophilic sponge and long breakpoint times. The breakpoint time was 2476 min at 10 mg/L initial MB concentration and 0.75 mL/min flow rate. Further, about a 30% increase in the breakpoint time was observed due to pause times during the operation. The regeneration efficiency of the adsorbent was still up to 92.5% after the second regeneration cycle.
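The pseudo-second-order fit mentioned above is usually done through its linearized form t/qₜ = 1/(k₂qₑ²) + t/qₑ. The sketch below, with purely synthetic data (the values are not from the study), recovers the rate parameters from a straight-line fit:

```python
import numpy as np

# Synthetic kinetic data generated from the pseudo-second-order model
#   q_t = qe^2 * k2 * t / (1 + qe * k2 * t)
# The parameter values are illustrative, NOT measurements from the paper.
qe_true, k2_true = 25.0, 0.004                 # mg/g, g/(mg*min)
t = np.array([5, 10, 20, 40, 60, 120, 240], dtype=float)   # min
qt = qe_true**2 * k2_true * t / (1 + qe_true * k2_true * t)

# Linearized form: t/qt = 1/(k2*qe^2) + t/qe  ->  slope = 1/qe
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope                  # equilibrium capacity, mg/g
k2_fit = 1.0 / (intercept * qe_fit**2)   # rate constant, g/(mg*min)
```

With real breakthrough data, the quality of this straight-line fit (R²) is what distinguishes pseudo-second-order from pseudo-first-order behavior.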
In this study, fiber metal laminates (FMLs) were first fabricated using aluminum sheets as skin layers and jute/basalt fibers in different stacking sequences (sandwiched and intercalated structures) as the core. Carbon nanotubes (CNTs) were then dispersed into the epoxy resin to fabricate FMLs with improved flexural and shear properties. Macro- and micro-structural observations were also performed to characterize the failure mechanisms of these structures. The results showed that the FML with jute sandwiched between basalt fibers had the best mechanical properties. Incorporating 0.3 wt% CNTs yielded the highest flexural strength, flexural modulus, and shear strength in this FML: 635 MPa, 54 GPa, and 27.1 MPa, respectively. The characterized failure mechanisms were tearing and plastic deformation of the aluminum layers, fracture and pull-out of the basalt fibers, fracture and crack crossing between the jute layers, and delamination between the aluminum and the composite core. In addition, the CNT/epoxy nanocomposite adhesive at the skin/core interface, fibrillation of the jute fibers, CNT pull-out, and adhesion between the distinct fibers and epoxy were characterized as factors influencing the mechanical properties.
Geothermal energy is a high-potential energy source that can provide heat and power with minimal emissions. Recent advances in nanotechnology show that nanoparticles such as CuO, TiO2, Al2O3, and Ag suspended in water can enhance the heat transfer coefficient and rate. Here, we present a comprehensive review of nanotechnology use in closed geothermal heat pumps and heat exchangers. The study comprises: (i) detailed quantification of the effects of different parameters on a liquid suspension, including nanoparticle thermal conductivity, viscosity, volume fraction, density, specific heat capacity, mass flow rate, nanoparticle shape and size, Brownian motion, pressure drop, and friction factor; and (ii) an economic analysis of the benefits of using nanofluids in geothermal applications. We show that the overall performance of ground-source geothermal systems improves through the use of low volume fractions of nanoparticles in suspension.
Most solar polygeneration systems use a single solar source to power the system. Despite their importance, dual-source systems have received less attention. To address this gap, this study analyzes a new polygeneration arrangement that combines photovoltaic thermal (PVT) and flat plate collector (FPC) technology. A detailed thermodynamic and economic analysis is conducted for a newly designed solar-powered combined cooling, heating, and power (CCHP) system for domestic applications. PVT and FPC collectors are used together with an ejector-vapor-compression refrigeration cycle equipped with a booster compressor. The key goal of using the PVT and FPC panels in a series configuration is to achieve maximum electrical and thermal efficiencies. The effects of various factors on system performance, including booster pressure ratio, refrigerant choice (R600a and R290), PVT-FPC cycle water flow rate, and solar irradiation, are investigated. The multi-objective optimization results show that the energy efficiency, exergy efficiency, payback period, and internal rate of return are 90.15%, 12.46%, 4.025 years, and 26.67%, respectively. The results also show that using a booster compressor significantly increases system performance: by 96.34% in coefficient of performance (COP), 55.1% in exergy efficiency, and 121.9% in net output power. The optimized system also prevents the release of 2504 kg of carbon dioxide into the environment.
DC microgrids are finding widespread application in various areas, including off-grid microgrids, transportation electrification, data centers, and residential and industrial installations. As such, modeling and investigating the stability and robustness of microgrids under different load types, including constant-power, constant-impedance, and constant-current loads, will play a key role in integrating these emerging grid components. To that end, this paper first develops a model for a buck-converter-based DC microgrid and then analyzes the system's stability and robustness using the root locus and the Kharitonov theorem, respectively. It is shown that system stability degrades as the local constant-power load increases. To mitigate this issue, three methods are proposed to improve stability, and the robustness and performance of the resulting system are investigated. Results verify the effectiveness of the suggested approaches.
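Kharitonov's theorem reduces the robust stability of an interval polynomial family to checking four extreme "Kharitonov polynomials". The sketch below illustrates the construction for a hypothetical interval family; the coefficient bounds are made up for illustration and are not the microgrid's actual characteristic polynomial:

```python
import numpy as np

def kharitonov_polynomials(lo, hi):
    """Build the four Kharitonov polynomials of an interval polynomial.

    lo, hi: coefficient bounds in ascending order [a0, a1, ..., an].
    Each polynomial follows one of the classic repeating sign patterns
    (min,min,max,max), (max,max,min,min), (max,min,min,max), (min,max,max,min).
    """
    patterns = [(0, 0, 1, 1), (1, 1, 0, 0), (1, 0, 0, 1), (0, 1, 1, 0)]
    return [[(lo, hi)[pat[i % 4]][i] for i in range(len(lo))]
            for pat in patterns]

def hurwitz_stable(coeffs_ascending):
    """True if every root lies strictly in the open left half-plane."""
    roots = np.roots(coeffs_ascending[::-1])   # np.roots wants highest order first
    return bool(np.all(roots.real < 0))

# Hypothetical interval family around s^3 + 6s^2 + 11s + 6 (roots -1, -2, -3):
lo = [5.5, 10.0, 5.5, 1.0]   # lower bounds on [a0, a1, a2, a3]
hi = [6.5, 12.0, 6.5, 1.0]   # upper bounds
robustly_stable = all(hurwitz_stable(k) for k in kharitonov_polynomials(lo, hi))
```

If all four extreme polynomials are Hurwitz-stable, every polynomial with coefficients inside the box is stable, which is how uncertain load parameters can be handled without gridding the whole parameter space.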
Volatility estimation is an important issue in several areas of the financial community, such as risk management and asset pricing. It is known that stock returns often exhibit volatility clustering and that the tails of their distributions are fatter than the normal distribution. The high unconditional volatility of assets further motivates practitioners to predict prices in an ever-changing market environment. Our main focus in this paper is to study the behavior of returns and volatility dynamics in some general stochastic economic models. First, we apply the local polynomial kernel smoothing method, based on nonparametric regression, to estimate the mean and the variance of the returns. We then implement and develop an empirical likelihood procedure in terms of the conditional variance of daily log returns for inference on the nonparametric stochastic volatility, as well as to construct a confidence interval for the volatility function. The proposed algorithm is applicable to some popular financial models and provides a good fit for the behavior observed in the stock and cryptocurrency markets. Numerical results for real data on the S&P 500 index and the highly volatile Bitcoin dataset are also illustrated.
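The first step described above, kernel smoothing of the conditional mean and variance of returns, can be sketched in its degree-zero (Nadaraya-Watson) special case. The data below are synthetic and the bandwidth is an arbitrary choice, not the authors' tuning:

```python
import numpy as np

def nw_smooth(x, y, grid, h):
    """Nadaraya-Watson (degree-0 local polynomial) kernel regression."""
    out = np.empty(len(grid))
    for i, g in enumerate(grid):
        w = np.exp(-0.5 * ((x - g) / h) ** 2)   # Gaussian kernel weights
        out[i] = np.sum(w * y) / np.sum(w)
    return out

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-2, 2, 500))
sigma = 0.2 + 0.1 * x**2                          # true conditional volatility
r = 0.05 * x + sigma * rng.standard_normal(500)   # synthetic "returns"

grid = np.linspace(-1.5, 1.5, 31)
m_hat = nw_smooth(x, r, grid, h=0.3)              # conditional mean estimate
# Conditional variance: smooth the squared residuals around the fitted mean.
resid2 = (r - nw_smooth(x, r, x, h=0.3)) ** 2
v_hat = nw_smooth(x, resid2, grid, h=0.3)
```

A higher-degree local polynomial would replace the weighted average with a weighted least-squares fit at each grid point; the variance estimate v_hat is the quantity the empirical likelihood procedure would then build confidence intervals around.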
The present study, using theoretical analysis and computer simulations, investigates the behavior of synapses for fast analog computation in spiking neural networks built with floating-gate transistors. This synapse offers an advantage over previous works: thanks to the capabilities of floating-gate transistors, it avoids body effects and operates from low-voltage sources. Furthermore, with a delay of less than 50 ms, it can be used for temporal coding, and it consumes 0.36 nW. The circuit was added to a network of two neurons modeled with our previously reported Hodgkin-Huxley neuron model, and the results show good agreement with previous works. The chip area of the gating-variable circuit is 708.817 μm × 1514.361 μm.
Road roughness-induced vibrations are the main source of fatigue damage accumulation in vehicles. Hence, road surface profile modeling and simulation are of high importance when it comes to vehicle fatigue damage assessment. This research focuses on uncovering the statistical distributions that describe the characteristics of road loads that affect fatigue damage accumulation. Particle filtering is deployed to estimate the log-volatility of the road surface profile by assuming a random walk behavior for the hidden log-volatility. The skew-normal distribution with three parameters is fitted to the estimated log-volatility. The inferred parameters are used for synthesizing artificial road profiles used in a vehicle simulation to calculate fatigue damage indices (FDI). The new road profile model outperforms all the previous models in predicting the real road FDI.
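A bootstrap particle filter for the random-walk log-volatility model described above can be sketched as follows. The state-space form, parameter values, and synthetic series are illustrative, not the road-load data of the study:

```python
import numpy as np

def particle_filter_logvol(y, n_particles=2000, sigma_eta=0.1, seed=0):
    """Bootstrap particle filter for the state-space model
        h_t = h_{t-1} + sigma_eta * eps_t   (random-walk log-volatility)
        y_t = exp(h_t / 2) * nu_t,          eps_t, nu_t ~ N(0, 1).
    Returns the filtered posterior mean of h_t at each step."""
    rng = np.random.default_rng(seed)
    h = rng.normal(0.0, 1.0, n_particles)        # initial particle cloud
    est = np.empty(len(y))
    for t, yt in enumerate(y):
        h = h + sigma_eta * rng.standard_normal(n_particles)   # propagate
        # Gaussian observation likelihood p(y_t | h_t) as particle weights
        w = np.exp(-0.5 * (yt**2 * np.exp(-h) + h))
        w /= w.sum()
        est[t] = np.dot(w, h)                    # weighted posterior mean
        h = rng.choice(h, size=n_particles, p=w) # multinomial resampling
    return est

# Synthetic series generated from the same model (illustrative only)
rng = np.random.default_rng(1)
T, sigma_eta = 300, 0.1
h_true = np.cumsum(sigma_eta * rng.standard_normal(T))
y = np.exp(h_true / 2) * rng.standard_normal(T)
h_hat = particle_filter_logvol(y, sigma_eta=sigma_eta)
```

In the paper's setting, y would be the measured road profile increments; the filtered log-volatility track h_hat is what the skew-normal distribution would then be fitted to.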
Key-frame selection methods have been developed over the past years to reduce the complexity of frame processing in visual odometry (VO) and visual simultaneous localization and mapping (VSLAM) algorithms. Key-frames help increase an algorithm's performance by sparsifying frames while maintaining its accuracy and robustness. Unlike current selection methods, which rely on many heuristic thresholds to decide which key-frame should be selected, this paper proposes a photogrammetry-based key-frame selection method built upon ORB-SLAM3. The proposed algorithm, named Photogrammetric Key-frame Selection (PKS), replaces static heuristic thresholds with photogrammetric principles, ensuring the algorithm's robustness and better point cloud quality. A key-frame is chosen based on adaptive thresholds and the Equilibrium Of Center Of Gravity (ECOG) criterion, as well as Inertial Measurement Unit (IMU) observations. To evaluate the proposed PKS method, the European Robotics Challenge (EuRoC) dataset and an in-house dataset are used. Quantitative and qualitative evaluations compare trajectories, point cloud quality and completeness, and Absolute Trajectory Error (ATE) in mono-inertial and stereo-inertial modes. Moreover, for the generated dense point clouds, extensive evaluations, including plane-fitting error, model deformation, model alignment error, and model density and quality, are performed. The results show that the proposed algorithm improves ORB-SLAM3 positioning accuracy by 18% in stereo-inertial mode and 20% in mono-inertial mode without the use of heuristic thresholds, and produces a point cloud that is up to 50% more complete and accurate. The open-source code of the presented method is available at https://github.com/arashazimi0032/PKS.
Mobile robots lack a driver or a pilot and, thus, should be able to detect obstacles autonomously. This paper reviews various image-based obstacle detection techniques employed by unmanned vehicles such as Unmanned Surface Vehicles (USVs), Unmanned Aerial Vehicles (UAVs), and Micro Aerial Vehicles (MAVs). More than 110 papers from 23 high-impact computer science journals, which were published over the past 20 years, were reviewed. The techniques were divided into monocular and stereo. The former uses a single camera, while the latter makes use of images taken by two synchronised cameras. Monocular obstacle detection methods are discussed in appearance-based, motion-based, depth-based, and expansion-based categories. Monocular obstacle detection approaches have simple, fast, and straightforward computations. Thus, they are more suited for robots like MAVs and compact UAVs, which usually are small and have limited processing power. On the other hand, stereo-based methods use pair(s) of synchronised cameras to generate a real-time 3D map from the surrounding objects to locate the obstacles. Stereo-based approaches have been classified into Inverse Perspective Mapping (IPM)-based and disparity histogram-based methods. Whether aerial or terrestrial, disparity histogram-based methods suffer from common problems: computational complexity, sensitivity to illumination changes, and the need for accurate camera calibration, especially when implemented on small robots. In addition, until recently, both monocular and stereo methods relied on conventional image processing techniques and, thus, did not meet the requirements of real-time applications. Therefore, deep learning networks have been the centre of focus in recent years to develop fast and reliable obstacle detection solutions. However, we observed that despite significant progress, deep learning techniques also face difficulties in complex and unknown environments where objects of varying types and shapes are present. 
The review suggests that detecting narrow, small, and moving obstacles, as well as achieving fast obstacle detection, are the most challenging problems for future studies to address.
Wetlands provide many benefits, such as water storage, flood control, transformation and retention of chemicals, and habitat for many species of plants and animals. The ongoing degradation of wetlands in the Great Lakes basin has been caused by a number of factors, including climate change, urbanization, and agriculture. Mapping and monitoring wetlands across such large spatial and temporal scales have proved challenging; however, recent advancements in the accessibility and processing efficiency of remotely sensed imagery have facilitated these applications. In this study, the historical Landsat archive was first employed in Google Earth Engine (GEE) to classify wetlands (i.e., Bog, Fen, Swamp, Marsh) and non-wetlands (i.e., Open Water, Barren, Forest, Grassland/Shrubland, Cropland) throughout the entire Great Lakes basin over the past four decades. To this end, an object-based supervised Random Forest (RF) model was developed. All of the produced wetland maps had overall accuracies exceeding 84%, indicating the high capability of the developed classification model for wetland mapping. Changes in wetlands were subsequently assessed for 17 time intervals. It was observed that approximately 16% of the study area has changed since 1984, with the highest increase occurring in the Cropland class and the highest decrease occurring in the Forest and Marsh classes. Forest mostly transitioned to Fen, but was also observed to transition to Cropland, Marsh, and Swamp. A considerable amount of the Marsh class was also converted into Cropland.
Citation: Amani, M.; Kakooei, M.; Ghorbanian, A.; Warren, R.; Mahdavi, S.; Brisco, B.; Moghimi, A.; Bourgeau-Chavez, L.; Toure, S.; Paudel, A.; et al. Forty Years of Wetland Status and Trends Analyses in the Great Lakes Using Landsat Archive Imagery and Google Earth Engine.
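A minimal scikit-learn analogue of the object-based Random Forest classification step can be sketched as follows; the study itself ran in Google Earth Engine, and the per-object features below are synthetic stand-ins for spectral statistics:

```python
# Sketch of a supervised Random Forest land-cover classifier.
# The feature vectors are synthetic stand-ins for per-object spectral
# statistics (e.g., mean band reflectances); values are NOT from the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
classes = ["Bog", "Fen", "Swamp", "Marsh", "Open Water"]
# One well-separated 6-dimensional cluster of "objects" per class
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(100, 6))
               for i in range(len(classes))])
y = np.repeat(classes, 100)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, rf.predict(X_te))
```

In the actual workflow, each row would be an image object from segmentation rather than a random point, and the accuracy would be reported per map, as with the >84% overall accuracies above.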
Precipitation, as an important component of the Earth’s water cycle, plays a determinant role in various socio-economic practices. Consequently, access to high-quality and reliable precipitation datasets is in high demand. Although Gridded Precipitation Products (GPPs) have been widely employed in different applications, the lack of quantitative assessment of GPPs is a critical concern that should be addressed. This is because the inherent errors in GPPs would propagate into any models in which precipitation values are incorporated, introducing uncertainties into the final results. This paper aims to quantify the capability of six well-known GPPs (TMPA, CHIRPS, PERSIANN, GSMaP, IMERG, and ERA5) at multiple time scales (daily, monthly, and yearly) using in situ observations (over 1.7 million) throughout Iran over the past two decades (2000–2020). Both continuous and categorical metrics were implemented for precipitation intensity and occurrence assessment based on the point-to-pixel comparison approach. Although not all metrics supported the superior performance of any specific GPP, taking all investigations into account, the findings suggested the better performance of the Global Satellite Mapping of Precipitation (GSMaP) in estimating daily precipitation (CC = 0.599, RMSE = 3.48 mm/day, and CSI = 0.454). Based on the obtained continuous metrics, all the GPPs performed better in dry months, while this did not hold for the categorical metrics. Validation at the station level was also carried out to present the spatial characteristics of errors throughout Iran, indicating higher overestimation/underestimation in regions with higher precipitation rates. The validation analysis over the last two decades illustrated that the GPPs had stable performances, and no improvement was seen, except for the GSMaP, whose bias error was significantly reduced.
The comparisons at monthly and yearly time scales suggested higher accuracy for monthly and yearly averaged precipitation values than for accumulated values. Our study provides valuable guidance for the selection and application of GPPs in Iran and also offers useful feedback for further improving these products.
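The categorical occurrence metrics used in such validations come from a 2×2 contingency table of observed versus predicted rain events; the sketch below uses illustrative counts, not the study's numbers:

```python
def categorical_scores(hits, misses, false_alarms, correct_negatives):
    """Standard 2x2 contingency-table verification metrics for
    precipitation occurrence (event = rain above a chosen threshold)."""
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return pod, far, csi

# Illustrative counts (not from the paper):
pod, far, csi = categorical_scores(hits=40, misses=25,
                                   false_alarms=15, correct_negatives=900)
```

CSI, the metric reported above for GSMaP, penalizes both missed events and false alarms while ignoring the typically large number of correct dry days, which is why it is preferred over plain accuracy for rare-event verification.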
Aldous and Fill conjectured that the maximum relaxation time for the random walk on a connected regular graph with $n$ vertices is $(1+o(1))\frac{3n^2}{2\pi^2}$. This conjecture can be rephrased in terms of the spectral gap as follows: the spectral gap (algebraic connectivity) of a connected $k$-regular graph on $n$ vertices is at least $(1+o(1))\frac{2k\pi^2}{3n^2}$, and the bound is attained for at least one value of $k$. We determine the structure of connected quartic graphs on $n$ vertices with minimum spectral gap, which enables us to show that the minimum spectral gap of connected quartic graphs on $n$ vertices is $(1+o(1))\frac{4\pi^2}{n^2}$. From this result, the Aldous–Fill conjecture follows for $k=4$.
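As an illustrative numerical check (not the extremal construction of the paper), the Laplacian spectral gap of any connected quartic graph should exceed roughly $4\pi^2/n^2$ for large $n$; the circulant graph $C_n(1,2)$, chosen here only as a convenient quartic example, provides a quick sanity check:

```python
import numpy as np

def laplacian_spectral_gap(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A
    (the algebraic connectivity)."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def circulant_quartic(n):
    """The quartic circulant graph C_n(1, 2): vertex i ~ i±1, i±2 (mod n)."""
    adj = np.zeros((n, n))
    for i in range(n):
        for s in (1, 2, n - 1, n - 2):
            adj[i, (i + s) % n] = 1
    return adj

n = 30
gap = laplacian_spectral_gap(circulant_quartic(n))
lower_bound = 4 * np.pi**2 / n**2   # asymptotic minimum from the paper
```

For this circulant family the gap is well above the asymptotic minimum; the paper's contribution is identifying the quartic graphs that actually attain $(1+o(1))\,4\pi^2/n^2$.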
5,702 members
• Faculty of Electrical and Computer Engineering
• Faculty of Electrical Engineering
• Faculty of Mechanical Engineering
• Faculty of Civil Engineering
• Faculty of Geodesy and Geomatics Engineering