Mohamed El Haziti’s research while affiliated with Mohammed V University and other places


Publications (37)


Figure 2. Flow chart describing the different stages of our literature research process.
Figure 4. Word cloud of keywords.
Figure 5. Network visualisation of the co-occurrence of the keywords plotted by VOSviewer.
Figure 6. Density visualisation of the co-occurrence of the keywords plotted by VOSviewer.
Figure 8. The generic urban sprawl prediction framework (adapted from (Tekouabou et al., 2022b)).

Exploring the potentialities and challenges of deep learning for simulation and prediction of urban sprawl features
  • Article
  • Full-text available

January 2025 · 50 Reads · Data & Policy · Mohamed El Haziti

Rapid urbanization poses several challenges, especially when faced with an uncontrolled urban development plan. Therefore, it often leads to anarchic occupation and expansion of cities, resulting in the phenomenon of urban sprawl (US). To support sustainable decision-making in urban planning and policy development, a more effective approach to addressing this issue through US simulation and prediction is essential. Despite the work published in the literature on the use of deep learning (DL) methods to simulate US indicators, almost no work has been published to assess what has already been done, the potential, the issues, and the challenges ahead. By synthesising existing research, we aim to assess the current landscape of the use of DL in modelling US. This article elucidates the complexities of US, focusing on its multifaceted challenges and implications. Through an examination of DL methodologies, we aim to highlight their effectiveness in capturing the complex spatial patterns and relationships associated with US. This work begins by demystifying US, highlighting its multifaceted challenges. In addition, the article examines the synergy between DL and conventional methods, highlighting the advantages and disadvantages. It emerges that the use of DL in the simulation and forecasting of US indicators is increasing, and its potential is very promising for guiding strategic decisions to control and mitigate this phenomenon. Of course, this is not without major challenges, both in terms of data and models and in terms of strategic city planning policies.


Knowledge Distillation in Image Classification: The Impact of Datasets

July 2024 · 118 Reads

As the demand for efficient and lightweight models in image classification grows, knowledge distillation has emerged as a promising technique to transfer expertise from complex teacher models to simpler student models. However, the efficacy of knowledge distillation is intricately linked to the choice of datasets used during training. Datasets are pivotal in shaping a model’s learning process, influencing its ability to generalize and discriminate between diverse patterns. While considerable research has independently explored knowledge distillation and image classification, a comprehensive understanding of how different datasets impact knowledge distillation remains a critical gap. This study systematically investigates the impact of diverse datasets on knowledge distillation in image classification. By varying dataset characteristics such as size, domain specificity, and inherent biases, we aim to unravel the nuanced relationship between datasets and the efficacy of knowledge transfer. Our experiments employ a range of datasets to comprehensively explore their impact on the performance gains achieved through knowledge distillation. This study contributes valuable guidance for researchers and practitioners seeking to optimize image classification models through knowledge distillation. By elucidating the intricate interplay between dataset characteristics and knowledge distillation outcomes, our findings empower the community to make informed decisions when selecting datasets, ultimately advancing the field toward more robust and efficient model development.
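As context for the experiments, response-based distillation trains the student to match the teacher's temperature-softened output distribution alongside the hard labels. The sketch below shows the standard loss in PyTorch; the temperature, weighting, and reduction choices are illustrative assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Standard response-based KD loss: a weighted sum of the KL divergence
    between temperature-softened teacher/student distributions and the usual
    cross-entropy on hard labels. T and alpha are illustrative choices."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    soft_student = F.log_softmax(student_logits / T, dim=1)
    # T^2 rescales gradients so the soft term stays comparable across temperatures.
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

The dataset enters only through the batches fed to teacher and student, so the same loss can be rerun unchanged while dataset size, domain specificity, and bias vary.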


Fig. 1. Teacher-student framework for knowledge distillation.
Fig. 2. Response-based knowledge distillation.
Fig. 3. Methodology used to train students on each dataset.
Fig. 4. Teacher Accuracy on each dataset.
Fig. 5. Students' Accuracy on each dataset.
Comparative Analysis of Datasets for Response-based Knowledge Distillation in Image Classification

December 2023 · 77 Reads · 2 Citations

This paper presents a comparative analysis of different datasets’ influence on the effectiveness of response-based knowledge distillation in image classification. Through a series of experiments and evaluations, we explore the impact of dataset choice on the performance and transferability of distilled models. Our findings provide insights into optimizing dataset selection for response-based knowledge distillation, a critical aspect of model compression and transfer learning.


Figure 1. Key characteristics of urban sprawl.
Figure 2. Challenges of urban sprawl.
Overviewing the emerging methods for predicting urban Sprawl features

December 2023 · 105 Reads · 2 Citations · E3S Web of Conferences

Urban sprawl, a common phenomenon characterized by uncontrolled urban growth, has far-reaching socio-economic and environmental implications. It’s a complex phenomenon, and finding a better way to tackle it is essential. Accurate simulation and prediction of urban sprawl features would facilitate decision-making in urban planning and the formulation of city growth policies. This article provides an overview of the techniques used to this end. Initially, it highlights the use of a certain category of so-called traditional methods, such as statistical models or classical machine learning methods. It then focuses particularly on the intersection of deep learning and urban sprawl modelling, examining how deep learning methods are being exploited to simulate and predict urban sprawl. It finally studies hybrid approaches that combine deep learning with agent-based models, cellular automata, or other techniques, which offer a synergistic way to leverage the strengths of different methodologies for urban sprawl modelling.


M-Centrality: identifying key nodes based on global position and local degree variation

January 2023 · 58 Reads

Identifying influential nodes in a network is a major issue due to the great number of applications concerned, such as disease spreading and rumor dynamics. That is why a plethora of centrality measures has emerged over the years in order to rank nodes according to their topological importance in the network. Local metrics such as degree centrality make use of very limited information and are easy to compute. Global metrics such as betweenness centrality exploit the information of the whole network structure at the cost of a very high computational complexity. Recent works have shown that combining multiple metrics is a promising strategy to quantify a node's influential ability. Our work is in this line. In this paper, we introduce a multi-attribute centrality measure called M-Centrality that combines information on the position of the node in the network with local information on its nearest neighborhood. The position is measured by the K-shell decomposition, and the degree variation in the neighborhood of the node quantifies the influence of the local context. In order to examine the performance of the proposed measure, we conduct experiments on small and large scale real-world networks from the perspectives of transmission dynamics and network connectivity. According to the empirical results, M-Centrality outperforms its alternatives in identifying both influential spreaders and nodes essential to maintaining network connectivity. In addition, its low computational complexity makes it easily applicable to large scale networks.
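As a rough sketch of the construction (the paper's exact normalization and weighting are not reproduced here), the snippet below combines a node's K-shell index with the mean absolute degree difference to its neighbours via an assumed convex combination, using NetworkX:

```python
import networkx as nx
import numpy as np

def m_centrality(G, alpha=0.5):
    """Illustrative M-Centrality-style score: a convex combination of the
    normalized K-shell index (global position) and the normalized mean
    absolute degree difference to neighbours (local degree variation)."""
    ks = nx.core_number(G)                     # K-shell index per node
    deg = dict(G.degree())
    dv = {u: np.mean([abs(deg[u] - deg[v]) for v in G[u]]) if deg[u] else 0.0
          for u in G}
    ks_max = max(ks.values()) or 1             # guard against division by zero
    dv_max = max(dv.values()) or 1.0
    return {u: alpha * ks[u] / ks_max + (1 - alpha) * dv[u] / dv_max for u in G}

G = nx.karate_club_graph()
scores = m_centrality(G)
print(sorted(scores, key=scores.get, reverse=True)[:5])  # top-5 spreader candidates
```

Because both ingredients (core numbers and degrees) are cheap to compute, the combined score keeps the low computational cost the abstract highlights.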


A Hybrid Robust Image Watermarking Method Based on DWT-DCT and SIFT for Copyright Protection

October 2021 · 358 Reads · 12 Citations

In this paper, a robust hybrid watermarking method based on discrete wavelet transform (DWT), discrete cosine transform (DCT), and scale-invariant feature transformation (SIFT) is proposed. Indeed, it is of prime interest to develop robust feature-based image watermarking schemes to withstand both image processing attacks and geometric distortions while preserving good imperceptibility. To this end, a robust watermark is embedded in the DWT-DCT domain to withstand image processing manipulations, while SIFT is used to protect the watermark from geometric attacks. First, the watermark is embedded in the middle band of the discrete cosine transform (DCT) coefficients of the HL1 band of the discrete wavelet transform (DWT). Then, the SIFT feature points are registered to be used in the extraction process to correct the geometric transformations. Extensive experiments have been conducted to assess the effectiveness of the proposed scheme. The results demonstrate its high robustness against standard image processing attacks and geometric manipulations while preserving a high imperceptibility. Furthermore, it compares favorably with alternative methods.
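As a minimal sketch of the embedding half of such a scheme, the snippet below performs one DWT level and writes bits into a mid-band DCT coefficient of 8x8 blocks of the HL subband. The quantization-style rule, the coefficient position (3, 2), and the step size are assumptions; the paper's exact embedding rule and its SIFT registration step are not reproduced here.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed_bit(block, bit, step=8.0):
    """Write one bit into a mid-band coefficient of an 8x8 DCT block using a
    simple quantization (dither) rule; position (3, 2) is an assumed choice."""
    c = dctn(block, norm="ortho")
    c[3, 2] = step * (np.round(c[3, 2] / step) + (0.25 if bit else -0.25))
    return idctn(c, norm="ortho")

def embed_watermark(image, bits, step=8.0):
    """One DWT level, then block-wise embedding in the HL subband's DCT."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    hl = cV.copy()  # 'HL' orientation; subband naming varies between libraries
    k = 0
    for i in range(0, hl.shape[0] - 7, 8):
        for j in range(0, hl.shape[1] - 7, 8):
            if k == len(bits):
                break
            hl[i:i + 8, j:j + 8] = embed_bit(hl[i:i + 8, j:j + 8], bits[k], step)
            k += 1
    return pywt.idwt2((cA, (cH, hl, cD)), "haar")

# Example: embed 64 random bits into a random 256x256 "image".
marked = embed_watermark(np.random.rand(256, 256) * 255,
                         np.random.randint(0, 2, 64).tolist())
```

Extraction would invert the transforms and read the fractional offset of the quantized coefficient; per the abstract, SIFT keypoints registered at embedding time are matched first to undo geometric distortions.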


Sensor Placement Optimization: A Case Study of PM10 Network in Dunkirk

Khaoula Karroum · Hervé Delbarre · [...] · Mohamed El Haziti
Minimizing the financial cost of sensor network deployment while ensuring good coverage of the measured variable has become the main priority in network topology optimization. In this work, we consider the problem of optimizing an air quality network for pollution monitoring in Dunkirk, France. PM10 data modelled by the Atmospheric Dispersion Modelling System (ADMS) are taken as ground truth. In this study, we used Inverse Distance Weighting (IDW) interpolation to compute a Root Mean Square Error (RMSE) cost function. Pollution levels in the region are estimated by interpolation from the positions of the measurement stations, and the RMSE between the interpolated and ground-truth pollution values reflects the quality of the network. The positions of the pollution sensors are optimized by means of a genetic algorithm to minimize the RMSE. A random population is chosen as the initial input to this algorithm; afterwards, by means of crossover and mutation, it generates populations whose individuals achieve a smaller RMSE. In addition to optimal station positioning, this approach also allows estimating how the RMSE of the pollution estimate depends on the number of measuring stations, and we observe how the optimization precision depends on the number of sensors. The obtained configuration was analyzed and compared with the actual national measurement network of ATMO Hauts-de-France. Whereas this deployed national network has most of its sensors positioned next to the (mostly industrial) emission sources, the optimal topology produced by the genetic algorithm has only one coastal sensor (next to the industrial zone), with the rest distributed over the studied zone, besides some sensors located in the sea part of the city. This topology can be explained by the contribution of background pollution to the atmospheric pollution of the city.
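To make the pipeline concrete, here is a minimal self-contained sketch: a synthetic single-source field stands in for the ADMS PM10 map, IDW interpolation from candidate sensor positions produces an estimated field, the RMSE against the truth serves as fitness, and a small genetic algorithm searches the positions. Grid size, population, generation count, and mutation scale are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ground truth" PM10 field on a unit grid; in the paper this role
# is played by the ADMS dispersion model output (single source assumed here).
grid = np.stack(np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40)), -1).reshape(-1, 2)
field = lambda p: np.exp(-8 * np.linalg.norm(p - [0.2, 0.7], axis=-1) ** 2)
truth = field(grid)

def idw_rmse(sensors, p=2.0):
    """IDW-interpolate the field from the sensors' readings; RMSE vs. truth."""
    d = np.linalg.norm(grid[:, None, :] - sensors[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** p
    est = (w * field(sensors)).sum(axis=1) / w.sum(axis=1)
    return np.sqrt(np.mean((est - truth) ** 2))

def optimize(n_sensors=5, pop=30, gens=100, sigma=0.05):
    """Minimal GA: truncation selection, blend crossover, Gaussian mutation."""
    P = rng.random((pop, n_sensors, 2))
    for _ in range(gens):
        fitness = np.array([idw_rmse(ind) for ind in P])
        elite = P[np.argsort(fitness)[: pop // 2]]          # selection
        a, b = (rng.integers(len(elite), size=pop - len(elite)) for _ in range(2))
        kids = (elite[a] + elite[b]) / 2                    # crossover
        kids += rng.normal(0, sigma, kids.shape)            # mutation
        P = np.clip(np.concatenate([elite, kids]), 0, 1)
    return min(P, key=idw_rmse)

print(optimize())  # optimized sensor coordinates in the unit square
```

Rerunning the search for different values of n_sensors gives the RMSE-versus-station-count curve the abstract mentions.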


Fig. 3. Axiomatization for Agent role pattern.
Fig. 4. Axiomatization for Event pattern.
Fig. 6. The representation of applicant in Turtle format (partial view).
Fig. 7. The representation of credit scorecard in Turtle format (partial view).
Fig. 8. SPARQL query executed in the Jena framework with ARQ (partial view).
The Implementation of Credit Risk Scorecard Using Ontology Design Patterns and BCBS 239

June 2020 · 242 Reads · 3 Citations · Cybernetics and Information Technologies

Nowadays, information and communication technologies play a decisive role in helping financial institutions deal with the management of credit risk. There have been significant advances in scorecard models for credit risk management. Practitioners and policy makers have invested in implementing and exploring a variety of new models individually; coordinating and sharing information across groups, however, has achieved less progress. One of several causes of the 2008 financial crisis lay in data architecture and information technology infrastructure. To remedy this problem, the Basel Committee on Banking Supervision (BCBS) outlined a set of principles called BCBS 239. Using Ontology Design Patterns (ODPs) and BCBS 239, credit risk scorecard and applicant ontologies are proposed to improve the decision-making process in credit loans. Both ontologies were validated, distributed as Web Ontology Language (OWL) files, and checked in test cases using SPARQL, thus making their (re)usability and expandability in financial institutions easier. These ontologies will also make sharing data more effective and less costly.
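To illustrate what such a SPARQL test case might look like, the snippet below queries a Turtle export with rdflib; the file name, namespace, and the ex:Applicant / ex:hasCreditScore vocabulary are hypothetical placeholders, not the ontology's actual terms.

```python
from rdflib import Graph

g = Graph()
# Hypothetical Turtle export of the scorecard/applicant ontologies.
g.parse("credit_scorecard.ttl", format="turtle")

# Hypothetical namespace and vocabulary, purely for illustration.
query = """
PREFIX ex: <http://example.org/scorecard#>
SELECT ?applicant ?score
WHERE {
    ?applicant a ex:Applicant ;
               ex:hasCreditScore ?score .
    FILTER (?score < 600)
}
"""
for row in g.query(query):
    print(row.applicant, row.score)   # applicants flagged by the test case
```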


A Review of Air Quality Modeling

March 2020 · 494 Reads · 39 Citations · Mapan - Journal of Metrology Society of India

Air quality models (AQMs) are useful for studying various types of air pollution and make it possible to reveal the contributors to air pollutants. Existing AQMs have been used in many scenarios with a variety of goals, e.g., focusing on particular study areas and specific spatial units. Previous AQM reviews typically cover only one of the constituent elements of AQMs. In this review, we identify the role and relevance of every component in building AQMs, including (1) the existing techniques for building AQMs, (2) how the availability of the various types of datasets affects performance, and (3) common validation methods. We present recommendations for building an AQM depending on the goal and the available datasets, pointing out their limitations and potential. Based on more than 40 works on air quality, we conclude that the main methods used in air pollution estimation are land-use regression (LUR), machine learning, and hybrid methods. In addition, incorporating traffic variables into LUR methods gives promising results, whereas with kriging or inverse distance weighting techniques, monitoring-station measurements of air pollution are enough to obtain good results. We aim to provide a short manual for people who want to build an AQM given the constraints at hand, such as the availability of datasets and technical/computing resources.
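As a pocket-sized illustration of the LUR approach the review discusses, the sketch below fits a linear regression of PM10 concentrations on buffer-based land-use predictors with scikit-learn; the data and the three predictors (traffic intensity, road length, industrial share) are synthetic stand-ins, not results from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_sites = 120  # hypothetical number of monitoring sites

# Hypothetical buffer-based predictors per site: traffic intensity,
# road length, and industrial land share within a buffer radius.
X = rng.random((n_sites, 3))
pm10 = 20 + 15 * X[:, 0] + 8 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 2, n_sites)

lur = LinearRegression().fit(X, pm10)
print(lur.coef_, lur.intercept_)        # effect of each land-use variable
print("R^2:", lur.score(X, pm10))       # in-sample fit; use CV in practice
```

In a real LUR study the predictors come from GIS layers aggregated over buffers around each station, and buffer radii are tuned to the study scale.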


Modeling with ontologies design patterns: credit scorecard as a case study

January 2020 · 553 Reads · 7 Citations · Indonesian Journal of Electrical Engineering and Computer Science

This paper proposes an ontological scorecard model for credit risk management. The purpose of a credit scoring model is to reduce the possibility of potential losses with regard to issued loans. Loans are provided according to strict criteria covering information about the client, the loan structure, the purpose, the repayment source, and the collateral. Several techniques have been used for credit risk assessment before granting a loan. Ontology design patterns are used here to enable the implementation of domain knowledge using OWL rules and to improve the decision-making process in credit monitoring. The modelling of our ontology makes data publication simpler and graph structures intuitive, thus making its reusability and expandability easier.


Citations (23)


... However, this performance improvement is often accompanied by huge computational costs and storage requirements, which limits the application of the model on resource-constrained devices. Knowledge distillation has become an important research direction for model lightweight and efficient deployment by compressing knowledge in large models into smaller models [4]. ...

Reference:

Feature Alignment-Based Knowledge Distillation for Efficient Compression of Large Language Models
Comparative Analysis of Datasets for Response-based Knowledge Distillation in Image Classification

... Global urbanization is a continuous phenomenon that offers opportunities and challenges for sustainable urban planning and development (Belinga and El Haziti, 2023). The rapidly and sometimes uncontrollably expanding metropolitan regions known as "urban sprawl" have gained attention as a significant issue influencing infrastructure development, land use/cover patterns, and environmental/climate sustainability as cities grow (Hua and Gani, 2023). ...

Overviewing the emerging methods for predicting urban Sprawl features

E3S Web of Conferences

... As a result, transform-domain algorithms have been investigated the most: discrete wavelet transform (DWT) [3][4][5][6], redundant discrete wavelet transform (RDWT) [7][8][9], discrete Fourier transform (DFT) [10,11], discrete cosine transform (DCT) [12,13], and lifting wavelet transform. The use of watermarking techniques is becoming more popular among researchers as a potential option for protecting digital image copyright. In this paper, a secure and robust colour image watermarking scheme using integer wavelet transform (IWT) and dual matrix decomposition is proposed. ...

A Hybrid Robust Image Watermarking Method Based on DWT-DCT and SIFT for Copyright Protection

... By implementing the principles outlined in BCBS 239, banks are expected to enhance their risk management capabilities, increase transparency, and bolster their resilience to financial shocks [114][115][116]. This framework is particularly applicable to Global Systematically Important Banks (G-SIBs), which are subject to additional regulatory requirements, and encourages national authorities to extend their principles to Domestic Systematically Important Banks (D-SIBs) as well. ...

The Implementation of Credit Risk Scorecard Using Ontology Design Patterns and BCBS 239

Cybernetics and Information Technologies

... Wireless sensor networks (WSNs) can efficiently monitor various environmental conditions, including the following: pressure, sound, motion, temperature, vibration, acceleration, humidity, and also pollutant/chemical concentrations [1,2]. Considering deployments under such conditions, the goal of a WSN is to support the fast propagation of the sensed data (probably through their self-organization in clusters and executing a relevant cluster-based routing protocol) to a fixed or mobile base station, where it can be processed and analyzed [3]. ...

Routing protocols for wireless sensor networks: A survey
  • Citing Chapter
  • January 2020

... Among these, LUR models are particularly effective as they consider both the source and influencing factors of particulate matter and simulate air particulate levels based on geographic variables [21]. This approach effectively elucidates the spatial and temporal distribution characteristics of particulate matter [22], and buffer size in an LUR model can be adjusted to the study scale [23]. Most existing LUR research has focused on large and medium scales. ...

A Review of Air Quality Modeling
  • Citing Article
  • March 2020

Mapan - Journal of Metrology Society of India

... Ontology modeling allows one to create a conceptual description of a specific subject area, which in turn allows information to be searched and classified effectively. Elhassouni et al. [6] propose a sequence of actions for extracting knowledge from data and integrating ontology elements. Works [7], [8] explore improving the decision-making process using ontologies. ...

Modeling with ontologies design patterns: credit scorecard as a case study

Indonesian Journal of Electrical Engineering and Computer Science

... The methods applying container transform domains can be based on discrete cosine transform (DCT) [5,6], discrete wavelet transform (DWT) [7], integer wavelet transform (IWT) [8], discrete Fourier transform (DFT) [9]. ...

Hybrid blind robust image watermarking technique based on DFT-DCT and Arnold transform

... Therefore, the necessity to design secure techniques has increased in the last few decades. Digital image watermarking has been found to be an effective solution for copyright protection of images [1]. Its basic procedure is to embed imperceptible information, termed watermark, in the original image. ...

Blind Robust 3-D Mesh Watermarking Based on Mesh Saliency and QIM Quantization for Copyright Protection
  • Citing Chapter
  • September 2019

Lecture Notes in Computer Science

... Ontologies representing the domain knowledge have been used to guide the design of the application and to supply the system with the possibilities of semantic technologies [13][14][15][16]. A general ontology that models the credit risk management process and two specific ontologies have been proposed [17]. One of these specific ontologies models the process of credit allocation to clients, while the second provides the concepts necessary for monitoring a credit system. ...

Applying ontologies to data integration systems for bank credit risk management

Journal of Data Mining & Digital Humanities