# Petersburg State Electrotechnical University

• Saint Petersburg, Russia
## Recent publications
We propose an effective approach for the semi-automated segmentation of biomedical images according to their patchiness based on local edge density estimation. Our approach does not require any preliminary learning or tuning, although two free parameters directly controllable by the end user adjust the analysis resolution and sensitivity, respectively. We show explicitly that the local edge density correlates strongly with the cell monolayer density obtained by manual domain-expert assessment, with correlation coefficients ρ > 0.97. Our results indicate that the proposed algorithm is capable of efficient segmentation and quantification of patchy areas in various biomedical microscopic images. In particular, it achieves 95–99 % median accuracy in the segmentation of image areas covered by the cell monolayer in an in vitro scratch assay. Moreover, it effectively distinguishes between native and regenerated tissue fragments in microscopic images of histological sections, indicated by a nearly three-fold discrepancy between the local edge densities in the corresponding image areas. We believe that the local edge density estimate could further serve as a surrogate image channel characterizing patchiness, either as a substitute for or as a complement to conventional cell- or tissue-specific fluorescent staining, in some cases avoiding or limiting the use of complex experimental protocols. We implemented a simple open-source software tool with on-the-fly visualization, allowing for straightforward feedback from a domain expert without any specific expertise in image analysis techniques. Our tool is freely available online at https://gitlab.com/digiratory/biomedimaging/bcanalyzer.
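As a rough sketch of the underlying idea (not the authors' implementation; the gradient operator, window size `win`, and both thresholds are assumptions for illustration), a local edge density map can be computed by binarizing gradient magnitudes and averaging them over a sliding window:

```python
import numpy as np

def local_edge_density(img, win=15, edge_thr=0.1):
    """Fraction of edge pixels in a sliding (win x win) neighborhood.

    img: 2-D grayscale array with values in [0, 1].
    """
    # Gradient magnitude via central differences, binarized into an edge map
    gy, gx = np.gradient(img.astype(float))
    edges = (np.hypot(gx, gy) > edge_thr).astype(float)
    # Separable moving average turns the binary map into a local density
    k = np.ones(win) / win
    dens = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, edges)
    dens = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, dens)
    return dens

def segment_patchy(img, dens_thr=0.2, **kw):
    """Segment 'patchy' areas by thresholding the density map."""
    return local_edge_density(img, **kw) > dens_thr
```

The two user-facing knobs of the published tool map naturally onto `win` (resolution) and `edge_thr` (sensitivity) in this sketch.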
Introduction. Mathematical modeling is the most important stage in the development of devices based on surface acoustic waves (SAW). Computer simulations, which have proven their efficiency in recent years, can significantly reduce the time input and improve the accuracy of calculating the designed characteristics. A rapid analysis of the operating characteristics of designed acoustoelectronic devices requires knowledge of the basic parameters of the acoustic waves propagating along the device substrates. Aim. To propose and validate a methodology for calculating the key parameters necessary for modeling SAW devices based on the P-matrix and coupling-of-modes models, using the finite-element analysis of Rayleigh waves as an example. Materials and methods. The theoretical part of the work was carried out using the mathematical theory of differential equations presented in matrix form and the finite element method. Mathematical processing was conducted in the MATLAB and COMSOL environments. Results. An original technique for deriving SAW parameters for a coupling-of-modes model, based on a rapid algorithm implemented in COMSOL, was developed. A comparison of the calculated electromechanical coupling coefficient and acoustic wave velocity over the substrate surface with those presented in the literature showed good agreement. Based on the derived parameters, a number of transversal filters were designed, and the calculated and experimentally measured values of the transmission coefficient were compared. Conclusion. The proposed technique for analyzing infinite periodic electrodes by the finite element method, based on an eigenfrequency analysis and a static analysis, made it possible to calculate the main parameters of Rayleigh waves in conventional substrates: lithium niobate, lithium tantalate, and quartz.
The practical significance lies in the use of the obtained parameters in the development of various classes of acoustoelectronic devices.
Introduction. When conducting diagnostic examinations of patients, various technological means are used to identify pathological conditions in a timely and accurate manner. The rapid development of sensors and imaging devices, as well as the advancement of modern diagnostic methods, facilitates the transition from the visual examination of images by a medical specialist towards the widespread use of automated diagnostic systems referred to as clinical decision support systems. Aim. To develop a method for enhancing the contrast of endoscopic images that takes their features into account, with the purpose of increasing the efficiency of medical diagnostic systems. Materials and methods. Contrast enhancement inevitably leads to an increase in the noise level. Despite the large number of different methods for noise reduction, their use at the preliminary correction stage leads to the loss of small but important details. The developed method for enhancing the contrast of endoscopic images is based on a nonlinear transformation of pixel intensities that takes the local neighborhood of each pixel into account. Regression analysis was used to obtain a functional dependence between the depth of contrast correction and the degree of detail of the processed pixel's neighborhood. Results. The results of experimental evaluation and comparison with conventional methods show that, under a comparable level of contrast enhancement, the proposed method provides a greater value of the structural similarity index with respect to the original image (0.71 versus 0.63), with the noise level reduced by 17 %. Conclusion. In comparison with conventional methods, the developed method provides simultaneous contrast correction of both light and dark image fragments and limits the growth of the noise level (typical of similar methods) by adapting the correction depth to the neighborhood features of the processed image element.
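A minimal sketch of a neighborhood-adaptive intensity transform in this spirit (the paper's regression-fitted dependence between correction depth and local detail is not reproduced; the local standard deviation as the detail measure and the gamma range are assumptions):

```python
import numpy as np

def adaptive_contrast(img, win=7, gamma_min=0.6, gamma_max=1.0):
    """Gamma correction whose depth adapts to local detail.

    Flat (low-detail) neighborhoods get the strongest correction
    (gamma_min); highly detailed neighborhoods are left nearly intact
    (gamma close to 1), limiting noise amplification.
    img: 2-D grayscale array with values in [0, 1].
    """
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    # Local standard deviation as a simple detail measure
    std = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            std[i, j] = padded[i:i + win, j:j + win].std()
    detail = std / (std.max() + 1e-12)      # 0 = flat, 1 = most detailed
    gamma = gamma_min + (gamma_max - gamma_min) * detail
    return np.clip(img, 0, 1) ** gamma
```

Since gamma never exceeds 1 here, every pixel is brightened or left unchanged; a full method would treat dark and light fragments symmetrically.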
Introduction. Due to the increasing number of users, growing data transmission rates, and the rapid advancement of the Internet of Things, channel capacity is acquiring greater importance in modern communication systems. In wireless communication systems, capacity limitation occurs due to a low signal-to-noise ratio, one reason for which is the high losses associated with the propagation of electromagnetic waves. These losses can be compensated using high-gain antenna systems, such as metasurfaces, transmitarrays, or reflectarrays. Aim. Development and study of a one-bit transmit phased antenna array with spatial excitation for use in wireless communication networks at sub-6 GHz frequencies. The issues of reducing the insertion losses associated with the cell geometry and control components are discussed. The parasitic parameters of the p–i–n diodes used as control elements for phase adjustment in a unit cell are taken into account. Methods for suppressing cross-polarization in a unit cell with the purpose of reducing insertion losses are studied. Materials and methods. The characteristics of unit cells in a transmit antenna array were studied by numerical electrodynamic modeling in the CST Microwave Studio computer-aided design system. The obtained results were confirmed by an experimental study of samples. Results. A unique design of the unit cell comprising the main element of a transmitarray was proposed. On its basis, a transmitarray was designed and manufactured, whose measurements proved the level of insertion losses to be lower than 1.5 dB in the operating frequency band of 210 MHz (3.6 %). The level of cross-polarization was found to be lower than 24 dB, and the gain attenuation did not exceed 2.5 dB in the range of beam deflection from 45° to -45°. Conclusion.
The design simplicity, low losses, and acceptable cross-polarization levels of the developed one-bit transmit phased antenna array with spatial excitation confirm its feasibility for modern communication systems.
Introduction. The article considers the peculiarities of the scientific and communication practices of methodologists interacting with university teachers, based on an analysis of the activity of the ITMO.Expert methodological community. Methodology and sources. The theoretical framework of the research comprises institutional and structural-functional approaches to the study of university activities. The method used for collecting the empirical data presented in the article is a content analysis of the messages sent by participants of the online intensive “Personalized Learning Technologies” in the Zoom conference held on August 23–27, 2021. The research hypotheses are the following: discussions of new pedagogical practices and new design formats initiated by university methodologists provoke teachers' skepticism, distrust, and resistance; teachers may also demonstrate vulnerability in discussing new approaches in education. Results and discussion. The hypothesis that discussing new pedagogical practices and approaches would cause skeptical attitudes and distrust on the part of teachers was not confirmed. The patterns of distrust accounted for less than 5 % of the total array of messages. The hypothesis that teachers (especially beginners) would be vulnerable when discussing new approaches, manifested in their refusal to discuss their own pedagogical experience and the problems encountered, was partially confirmed. At the same time, thanks to the organizational and communicative actions of the methodologists organizing the intensive, an atmosphere of trust and safety was formed, so the participants had an opportunity to share their experience or ask questions without being judged by their colleagues. Conclusion.
The activity of the pedagogical designer is both an educational activity, aimed at teaching and promoting new educational technologies, and an organizational and managerial activity, aimed at stimulating teachers' activity in developing and modeling a variety of knowledge formats. Many researchers suggest characterizing pedagogical design as a sociocultural activity. When designing educational experiences and new educational products, methodologists help develop pedagogical traditions and reconsider norms.
Introduction. The article analyzes the scientific, political, and social directions of the development of technologies using artificial intelligence in the context of the global digital race. The strategies for achieving technological sovereignty adopted by the largest countries of the world, and the place and role of artificial intelligence in them, are analyzed. Special attention is paid to the analysis of statistical indicators of the achievements of the world's leading states in the field of digital technologies. The scientific, political, economic, regulatory, and social resources of the Russian Federation are also explored, as they allow the country to become one of the global leaders in digital and technological development. Methodology and sources. The theoretical and methodological base of the study comprises the classical socio-economic concepts of technological and innovative development (K. Marx, T. Veblen, J. Schumpeter, etc.). In the practical part of the study, we used methods such as document analysis (reports of the Analytical Center for the Government of the Russian Federation and the NTI Competence Center “Artificial Intelligence” of the Moscow Institute of Physics and Technology, and analytics from the Russian International Affairs Council, Oxford Insights, Tortoise, etc.) and comparative analysis. The empirical base was data from an analysis of the experience of China, the United States, India, and the Russian Federation in developing their own strategies for the development of artificial intelligence technologies. Results and discussion. As a result of the study, we were able to trace how the active steps of the world's largest countries in conceptualizing the development of artificial intelligence are reflected in the construction of the state's technological sovereignty.
The analysis made it possible to describe the Russian model of supporting the development of technologies using artificial intelligence as a “Moscow consensus”, characterized by a social orientation of the results. Conclusion. In the structure of technological sovereignty, artificial intelligence plays an important role as a strategic component that contributes to the achievement of digital sovereignty. The critical impact of the dependence of Russia's scientific and technological development on imported solutions and other external factors is obvious for the foreseeable future, which requires a thorough examination of the situation and a public discussion of any actions for the transition of Russian industry to Industry 4.0. At the same time, it is important to realize that the prospects for technologies using AI are uncertain without political decisions and financial support.
Introduction. The language situation in Belgium is considered unique because, despite several state languages, a dozen minority languages, and hundreds of dialects being in constant contact with each other, the Belgian language community is also actively influenced by English. The relevance of this research is dictated by the growing importance of English in Europe, where it is frequently used as a lingua franca, by the developing mutual influence of the French and Dutch languages in Belgium, and by linguists' interest in the matter of code-switching. The purpose of this research is to analyze examples of code-switching in Belgian media and to explore their functions and formation strategies in the context of the complex language situation in Belgium. Methodology and sources. The material of this research consists of publications in Belgian newspapers and journals and materials of Belgian informational resources in the social network “Instagram”. The following methods are used: the continuous sampling method, synthesis, description, classification, and comparative language analysis. Results and discussion. The research briefly describes the features of the sociolinguistic situation in Belgium, particularly in its three regions: Flanders, Wallonia, and the Brussels-Capital Region. Analysis of the code-switching examples found in Walloon and Flemish media showed that in Belgian newspapers and journals code-switching in the French–Dutch and Dutch–French pairs is very rare, and the absolute majority of examples are in English. Switching to English adds an emotional aspect to a text and is used when referring to precedent statements, situations, and names that entered international discourse in English. In terms of P. Muysken's typology, the most frequently used code-switching strategies are insertion and alternation. Conclusion.
Sociocultural, economic, and linguistic differences lead to the autonomy and certain independence of the Belgian regions. They have also prompted the turn to English as a mediator language in interpersonal communication. Analysis of the written sources made it possible to document cases of mostly motivated switching to English and to study their stylistic, functional, and linguistic features.
Introduction. Advancing digitalization has a peculiar effect on the purposefulness of students' behavior, ensuring the transition to a new stage in the development of postindustrial society. Social norms, values, and value orientations are subject to change. All this forms a new group of student digital values, which are shaped differently among students of various professional training profiles. These new phenomena call for new methodological and methodical approaches to conducting empirical sociological research, for relevant approaches to measuring the digital values of students of various training profiles, and for identifying the special characteristics and meaningful features of these values. Methodology and sources. The research methodology is mixed and is based on provisions about values and their specific forms of manifestation, particularly digital values (M. Weber, F. Znaniecki, T. Parsons, E. Durkheim, A. Toffler). The digital society is a new perspective, the nature of which is only partially consistent with some principles of the traditional approach to the study of values developed earlier in theoretical concepts. The concepts of modern authors were used, such as M. Tomasello, F. Warneken, R. Hogan, B. W. Roberts, E. F. Zeer, R. Kadakal, and Nguyen Hoang Huu. Results and discussion. The methodological approaches to studying the values of the digital society are generalized, in particular the relevance of the systems approach. The differences between the study of traditional values and the values of the digital society are shown; the analysis suggests that the accumulation by students of human capital with special (digital) characteristics can be considered the main value of the digital society.
Under these conditions, humanities students (journalists, sociologists) more often define value-goals as their most important values, while programming students define value-means as such. Conclusion. The study used a systematic approach to building a methodology for diagnosing digital values, which makes it possible to identify the main value preferences of students of various training profiles. It is concluded that students of different training profiles understand the digital society differently, which implies different methodological approaches to the study and diagnosis of these values.
Introduction. The paper considers the Ripuarian dialect group spread across the territory of three modern states: Germany, Belgium, and the Netherlands. The research concentrates on the dialects' reception by their speakers, while special attention is paid to the language situation in Belgium. Defining the correspondence of state and linguistic borders in this region is of considerable current scientific interest. Methodology and sources. The research methodology is based on Russian and foreign studies in dialectology (V. M. Zhirmunskii, F. Münch, W. Haubrichs) and dialectography (K. Haag, A. Bach, J. Kajot and H. Beckers). Descriptive and comparative methods were used to characterize the dialects. The analysis of the sociolinguistic situation is based on the works of P. Auer, Th. Frings, J. Kajot and H. Beckers, among others. To capture the current dialect speakers' point of view, data from Belgian Internet sites and forums were used. This comprehensive method makes it possible to evaluate not only linguogeographic but also the newest extralinguistic facts. Results and discussion. The paper examines the spread and characteristics of the Ripuarian dialects and the history of their use in Germany, underlining the special role of Cologne's dialect. The situation of the Ripuarian dialects in modern Eastern Belgium is analyzed as well, along with problems of the self-identity of dialect speakers and of the dialects' connection to High German. Conclusion. The linguistic situation in Belgium depends on political and sociocultural factors, while state boundaries play a significant role in the self-identity of dialect speakers.
Introduction. Strategies for understanding another person, which play an important role in social interactions, are focused on recognizing the mental states of the person under consideration. These various strategies require a general theoretical conceptualization. One such attempt is A. Newen's person model theory. This theory is the subject of our investigation, and the aim of this study is to critically analyze A. Newen's person model theory and demonstrate its advantages and disadvantages in comparison with other approaches. Methodology and sources. A. Newen's approach is compared with three competing approaches: folk psychology theory, A. Goldman's simulation theory, and S. Gallagher's interaction theory. Conceptual analysis shows that these theories face a number of serious difficulties, which are discussed in the article. Results and discussion. Based on our analysis, we conclude that none of these three theories can be accepted as universal. At the same time, A. Newen's person model theory suggests a multiple strategy for understanding another person and seeks to incorporate the merits of the other theories. Thus, the main advantage of this approach is that it allows us to consider the process of understanding another person not as a predetermined one, but as a variable, dynamic process. Conclusion. This approach allows considering as a person not only an adult, but also a collective of people, as well as an artificial intelligence, which is of great importance for the further improvement of moral practices. At the same time, the person model theory is not devoid of weaknesses; however, once they are overcome, it is able to present the most complete mechanism for understanding the personality.
In the context of digital informatization, the Internet is changing the way humans live. The rapid development of the Internet has promoted the use of smartphones in people's daily lives, and at the same time, a large number of applications running in different operating system environments have appeared on the market. Predicting the duration of application usage is crucial both for the management planning of related companies and for users' quality of life. In this work, a dataset containing time series of user application usage information is considered, and the problem of application usage forecasting is addressed. The dataset used in this work is based on reliable and realistic user records of application usage. First, this paper investigates suitable forecasting models on the user application usage time dataset, including neural network algorithms and ensemble algorithms, among others. Then, an explainable artificial intelligence approach (SHAP) is introduced to explain the selected optimal forecasting models, thus enhancing user trust in the forecasting models. The forecast results show that the ensemble models perform better on the time series dataset of user application usage information, with LightGBM showing the clearest advantage. The explanation results show that usage frequency, application category, and lagged values of the target variable are important features in forecasting on the application dataset.
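The lagged-feature layout such forecasting models consume can be sketched as follows; a least-squares autoregression stands in for the ensemble models (e.g. LightGBM) actually used in the paper, and the synthetic series is purely illustrative:

```python
import numpy as np

def make_lag_features(series, n_lags=3):
    """Turn a 1-D usage time series into a supervised (X, y) pair,
    where each row of X holds the n_lags previous values."""
    X = np.column_stack([series[i:len(series) - n_lags + i]
                         for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

# Synthetic daily usage (minutes): slow upward trend plus noise
rng = np.random.default_rng(1)
t = np.arange(200.0)
usage = 10 + 0.05 * t + rng.normal(0, 0.1, 200)

# Fit a linear AR(3) baseline by least squares on the lag features
X, y = make_lag_features(usage, 3)
design = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
pred = design @ coef
```

A gradient-boosting model would consume the same `(X, y)` layout, optionally extended with calendar and category features.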
In this correspondence, a mathematical model is developed for the efficient realisation of a generalised M × M polyphase parallel finite impulse response (FIR) filter structure composed of M parallel conventional decimator polyphase filters. Primarily, the proposed structure is designed in such a way that the coefficient symmetry property of linear-phase FIR filters can be exploited without using pre/post circuit blocks. A numerical example is also studied to validate the proposed structure. Furthermore, a delay-element reduction approach is given to avoid the excessive usage of memory elements, and the performance of the proposed structure is evaluated in terms of the number of delay elements $(\mathcal{D})$, adders $(\mathcal{A})$, and multipliers $(\mathcal{M})$. Compared to the traditional structures, our proposed structure is found to be more efficient in terms of $\mathcal{M}$. Moreover, in contrast to the fast FIR algorithms, the proposed structure resolves the issues of the additional pre/post blocks and the absence of a parallel structure with coefficient symmetry for higher prime values of M (i.e. M > 3). The synthesis result reveals that the proposed 37-tap filter (with M = 3 and 12-bit inputs) involves 30 % less area-delay-product (ADP) per output and 33.05 % less power per output compared to the most recent structure.
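The M-branch polyphase decimation identity that such structures build on can be sketched as follows (a textbook decomposition, not the paper's symmetry-exploiting architecture):

```python
import numpy as np

def polyphase_decimate(x, h, M):
    """Decimate-by-M FIR via an M-branch polyphase decomposition.

    Equivalent to filtering x with h and keeping every M-th output:
    y[n] = sum_j h[j] * x[n*M - j], split over branches j = M*q + k.
    """
    x = np.asarray(x, float)
    h = np.asarray(h, float)
    n_out = (len(x) + len(h) - 1 + M - 1) // M   # = len of full conv [::M]
    y = np.zeros(n_out)
    for k in range(M):
        hk = h[k::M]                              # k-th polyphase component
        # Branch input x_k[m] = x[m*M - k], zero outside the signal
        idx = np.arange(n_out) * M - k
        valid = (idx >= 0) & (idx < len(x))
        xk = np.where(valid, x[np.clip(idx, 0, len(x) - 1)], 0.0)
        y += np.convolve(xk, hk)[:n_out]          # accumulate branch output
    return y
```

Each branch runs at the low (decimated) rate, which is precisely what makes the parallel hardware mapping attractive; the paper's contribution is sharing multipliers across branches via coefficient symmetry.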
Recurrent neural network (RNN) models extend the theory of the autoregressive integrated moving average (ARIMA) model class. In this paper, we consider an RNN architecture with embedded memory: the nonlinear autoregressive network with exogenous inputs (NARX). Though it is known that an NN is a universal approximator, certain difficulties and restrictions in different NN applications are still topical and call for new approaches and methods. In particular, it is difficult for an NN to model noisy and significantly nonstationary time series. The paper suggests optimizing the modeling process for a time series of complicated structure by NARX networks combined with wavelet filtering. The developed wavelet filtering procedure employs the construction of wavelet packets and stochastic thresholds. A method to estimate the thresholds so as to obtain a solution with a defined confidence level is also developed, and the wavelet filtering algorithm is introduced. It is shown that the proposed wavelet filtering makes it possible to obtain a more accurate NARX model and improves the efficiency of forecasting a natural time series of complicated structure. Compared to ARIMA, the suggested method yields a more adequate model of a nonstationary time series of complex nonlinear structure. The advantage of the method over a plain RNN is the higher quality of data approximation for a smaller computational effort at the network training and functioning stages, which addresses the problem of long-term dependencies. Moreover, we develop a scheme for applying the approach to data modeling and anomaly detection based on NARX. The necessity of anomaly detection arises in different application areas; it is of particular relevance in geophysical monitoring problems and requires method accuracy and efficiency.
The effectiveness of the suggested method is illustrated by the example of processing ionospheric parameter time series. We also present results for the problem of ionospheric anomaly detection. The approach can be applied in space weather forecasting to predict ionospheric parameters and to detect ionospheric anomalies.
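A much-simplified stand-in for the wavelet filtering step (a single-tree Haar transform with a universal soft threshold, rather than the paper's wavelet packets with stochastic, confidence-level-based thresholds) can be sketched as:

```python
import numpy as np

def haar_step(a):
    """One level of the orthonormal Haar transform: averages and details."""
    a = a.reshape(-1, 2)
    return (a[:, 0] + a[:, 1]) / np.sqrt(2), (a[:, 0] - a[:, 1]) / np.sqrt(2)

def haar_denoise(x, levels=3, thr=None):
    """Soft-threshold Haar denoising; len(x) must be divisible by 2**levels.

    thr defaults to the universal threshold sigma * sqrt(2 ln N), with
    sigma robustly estimated from the finest-scale detail coefficients.
    """
    n = len(x)
    assert n % (2 ** levels) == 0
    approx, details = np.asarray(x, float), []
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    if thr is None:
        sigma = np.median(np.abs(details[0])) / 0.6745   # robust noise estimate
        thr = sigma * np.sqrt(2 * np.log(n))
    # Soft-threshold detail coefficients, keep the coarse approximation
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0) for d in details]
    # Inverse transform
    for d in reversed(details):
        out = np.empty(2 * len(approx))
        out[0::2] = (approx + d) / np.sqrt(2)
        out[1::2] = (approx - d) / np.sqrt(2)
        approx = out
    return approx
```

In the paper's pipeline, the filtered series (not the raw one) is what feeds the NARX network.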
Comparative results of the calculation and measurement of the frequency responses of a surface acoustic wave filter on a piezoelectric substrate of 64°YX-cut lithium niobate and of a delay line on a piezoelectric substrate of 128°YX-cut lithium niobate are presented. The calculation was performed using two approaches: the finite element method in the COMSOL Multiphysics software and the coupling-of-modes model based on P-matrices. A brief overview and the features of each approach are presented. The calculation results of the two approaches are in good agreement with each other and with the experimental measurements of the characteristics of the bandpass filter. The delay line operating at the third harmonic frequency was calculated by FEM, and the results showed a good match between numerical simulation and experiment. The considered approaches for designing SAW devices allow us to predict the frequency responses relatively quickly and accurately at the simulation stage, thereby reducing the number of experimental iterations and increasing the efficiency of development.
Cellulose nanofibers (CNF) produced by bacteria were functionalized along the surface with 3-(trimethoxysilyl)propyl methacrylate (TMSPM). The chemical and crystalline structure of the material was confirmed by NMR, FTIR, EDX, and XRD methods. The modified CNF were used as a crosslinker and reinforcer for a polymerizable deep eutectic solvent (DES) based on acrylic acid and choline chloride. Dispersions of the modified nanofibers in DES were applied as a UV-curable ink for 3D printing. It was shown that shielding the -OH groups of the cellulose surface with TMSPM increased the quality of 3D-printed filaments due to reduced CNF agglomeration. At the same time, the surface methacrylic groups copolymerize with acrylic acid, forming a crosslinked ion gel. The elastic moduli of the prepared ion gels were identical to those of gels based on unmodified CNF and crosslinked with N,N'-methylenebisacrylamide. However, the strength and ultimate elongation of the material prepared in this work were 1.05 ± 0.08 MPa at 2700 %, which is significantly higher than those of the material prepared with unmodified CNF.
CQD/PEDOT:PSS composites were prepared via the hydrothermal method from glucose-derived carbon quantum dots (CQDs) and an aqueous solution of the conducting polymer PEDOT:PSS, and their electrical and optical properties were investigated. The morphology and structure of these samples were investigated by AFM, SEM, EDX, and EBSD. It was found that the CQDs and CQD/PEDOT:PSS composites had a globular structure with globule sizes of ~50–300 nm depending on the concentration of PEDOT:PSS in the composites. The temperature dependence of the resistivity was obtained for the CQD/PEDOT:PSS (3%, 5%, 50%) composites and showed a weak activation character; the charge transport mechanism is discussed. The dependence of the resistivity on the storage time of the CQD/PEDOT:PSS (3%, 5%, 50%) composites and of pure PEDOT:PSS was obtained. It was noted that mixing CQDs with PEDOT:PSS allowed us to obtain better electrical and optical properties than those of pure CQDs: the CQD/PEDOT:PSS (3%, 5%, 50%) composites are more conductive than pure CQDs, and the absorbance spectra of the CQD/PEDOT:PSS composites reflect a synergistic effect of the interaction between CQDs and PEDOT:PSS. We also note the better stability of the CQD/PEDOT:PSS (50%) composite compared with the pure PEDOT:PSS film. The CQD/PEDOT:PSS (50%) composite is promising for use as a stable hole transport layer in flexible organic electronic devices.
The paper presents a method for automating the shooting and analysis of RGB images captured by a human operator, aimed at improving the quality of subsequent 3D reconstruction of low-rise outdoor objects, together with a use case for the method's application. The method provides automation as a three-step process: local analysis of images performed during the shooting, and global analysis and recommendations performed after the shooting. The method's steps filter out defective images, approximate the future 3D model using SfM, and map it to an estimate of the human operator's trajectory to identify object areas that will have low resolution in the final 3D model and therefore require additional shooting. The method does not require any sensors except an RGB camera and inertial sensors and does not rely on any external backend, which lowers the hardware requirements. The authors implemented the method and the use case as an Android application. The method was evaluated in experiments on outdoor datasets created for this study. The evaluation shows that the local analysis stage is fast enough to run during the shooting process and that it improves the quality of the final 3D model.
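The defective-image filtering in the local analysis step could, for instance, use a variance-of-Laplacian sharpness check; this is an assumed metric for illustration, as the abstract does not specify the exact filter:

```python
import numpy as np

def sharpness_score(gray):
    """Variance of the discrete Laplacian: a common blur measure.

    gray: 2-D grayscale array. Low scores indicate defocused or
    motion-blurred frames.
    """
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

def filter_defective(images, thr=1e-4):
    """Keep only images whose sharpness exceeds the threshold."""
    return [im for im in images if sharpness_score(im) > thr]
```

The threshold would need tuning per camera; a per-frame check like this is cheap enough to run during shooting, matching the paper's real-time constraint for the local stage.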
Self-generation of nonlinear microwave waveforms in a magnonic-optoelectronic oscillator (MOEO) was investigated. The nonlinear dynamics of the MOEO arose from both the optical and the magnonic paths of the oscillator circuit. Four-magnon parametric interactions in the magnonic path and the cosine transfer function of the electro-optical modulator caused the double nonlinearity of the MOEO. The gain coefficient was used as a control parameter. We found that, along the route from regular to chaotic dynamics, the oscillator generates two unusual waveforms: symmetry-breaking soliton-like modes of the Möbius type and periodic pulses with chaotic amplitude modulation. The nonlinear waveforms were characterized using time series analysis. Peculiarities of the signals and their spectra in the regular and chaotic regimes of self-generation are discussed. We expect that the multiple nonlinearity of the MOEO may be useful for investigating various fundamental effects in complex time-delayed systems and for developing novel circuits for neuromorphic computing.
Determination of respiratory rate is a necessary task in assessing the state of health in humans. This review provides a description of modern devices used for recording and monitoring respiratory rate. The advantages and disadvantages of the principles of operation of these devices are discussed.
Traditional AI techniques for offline misuse network intrusion detection have performed well, assuming that the traffic from the datasets is sufficiently large for generalization, balanced, independently and identically distributed—exhibiting homogeneous behavior with little to no context change. However, the rapidly expanding IoT network is an ensemble of proliferating internet-connected devices catering to the growing need for handling highly distributed, heterogeneous, and time-critical workloads that conform to none of the above assumptions. Moreover, the evolving Botnet-based attack vectors exploit the non-standardized and poorly scrutinized architectural vulnerabilities of such devices—leading to compounding threat intensity, rapidly rendering the network defenseless. Furthermore, the memory, processor, and energy resource constraints of the IoT devices necessitate lightweight device-specific intrusion detection policies for effective and updated rule learning in real-time through the edge infrastructures. However, the existing methods proposed to solve such issues are either centralized, data and resource-intensive, context-unaware, or inefficient for online rule learning with smaller and imbalanced data samples. Thus, this paper addresses such issues through a context-aware expert system-based feature subset framework with minimal processing overhead and a decentralized on-device misuse detection scheme for IoT—called HetIoT-NIDS, capable of efficiently inferring over smaller data samples, tolerant to class imbalance, and deployable on the low-memory and low-power edge of IoT devices. Furthermore, HetIoT-NIDS facilitates threat localization within the deployed device, preventing threat progression and intensity compounding. 
The experiments and analyses of the proposed algorithms, together with the resulting training times and model sizes, show that the approach is efficient and adaptable to both online and offline misuse intrusion detection, especially on smaller data sample sizes.
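A lightweight filter-style feature-subset selection in the spirit of minimal processing overhead might look as follows (an illustrative criterion; the paper's expert-system rules and HetIoT-NIDS internals are not reproduced):

```python
import numpy as np

def rank_features(X, y):
    """Rank features by absolute standardized class-mean difference.

    A cheap filter criterion: no model fitting, a single pass over the
    data, and tolerant of small, imbalanced samples.
    X: (n_samples, n_features) array; y: binary labels (0 = benign, 1 = attack).
    """
    m0 = X[y == 0].mean(axis=0)
    m1 = X[y == 1].mean(axis=0)
    s = X.std(axis=0) + 1e-12           # guard against zero-variance features
    return np.argsort(-np.abs(m0 - m1) / s)

def select_subset(X, y, k):
    """Indices of the k most discriminative features."""
    return rank_features(X, y)[:k]
```

Reducing the feature set this way before rule learning keeps both memory footprint and inference cost within the constraints of low-power edge devices.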
1,365 members
• Theoretical Fundamentals of Radio Engineering
• Faculty of Computer Science and Technology / Department of Software Engineering and Computer Applications