ITMO University
  • Saint Petersburg, Russia
Recent publications
Laser/light flash analysis (LFA) is becoming an increasingly popular tool for assessing the thermal properties of porous samples. Elevated-temperature measurements are typically conducted under a protective atmosphere, specifically under inert gas flow. Immersing a sample with interconnected porosity in a gas environment theoretically allows heat transfer via gas conduction or convection. To demonstrate that LFA measurements are affected by the choice of gas environment, measurements were conducted under inert gas flow (helium and nitrogen), static inert gas (nitrogen), and vacuum, with two different instruments and two sets of samples: highly conductive alloy foams and insulating lightweight concrete. A heat conduction model accounting for light penetration was used to extract the apparent thermal diffusivity of the solid–gas mixture. A correlation between the Biot number and the thermal conductivity of the gas environment was registered. This is unusual for an LFA experiment, where radiative heat losses, which are independent of the gas environment, are typically considered dominant. At the same time, variations in either the microgeometry of the porous sample or the type of carrier gas produce a systematic change in the shape of the time–temperature profiles and in the duration of the temperature transient. In lightweight concrete samples, switching the instrument to helium flow yields a multifold increase in apparent thermal diffusivity relative to measurements done under vacuum. An attempt to explain this variation was made with a two-temperature model of coupled solid–gas conduction. It is argued that an additional factor is missing from the analysis.
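The abstract above extracts the apparent thermal diffusivity from the rear-face time–temperature transient. The classical starting point for such an analysis is Parker's half-rise-time relation; a minimal sketch of that textbook baseline (not the light-penetration model the authors actually use) is:

```python
# Classical Parker relation for laser flash analysis (LFA): the
# apparent thermal diffusivity follows from the sample thickness L
# and the half-rise time t_half of the rear-face temperature
# transient. This is the textbook baseline only, not the
# light-penetration model described in the abstract.

def parker_diffusivity(thickness_m: float, t_half_s: float) -> float:
    """Apparent thermal diffusivity (m^2/s) from the half-rise time."""
    return 0.1388 * thickness_m ** 2 / t_half_s

# Example: a 2 mm sample whose rear face reaches half of its maximum
# temperature rise 0.1 s after the flash (illustrative numbers).
alpha = parker_diffusivity(2e-3, 0.1)
print(f"{alpha:.3e} m^2/s")  # → 5.552e-06 m^2/s
```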
We simulated the response of a semiconductor wafer surface with a circular relief, constructed using self-affine transformations, to an incident electromagnetic wave. The study assumed that the main mechanism behind the wafer's reaction to incident radiation is electric polarization, which, for example, underlies the operation of a number of electronic components such as the MOSFET and the CCD. Since silicon is a polarizable material, a changing electric field causes a spatial separation of charges in the volume of a silicon crystal that follows the variation of the field. If a relief is created on one of the surfaces of a silicon wafer, the charge distribution under the relief will be spatially non-uniform, following the pattern of this relief.
With the phase-out of refrigerants with a high global warming potential, increasing the efficiency of refrigerating machines that use natural working fluids has become an urgent task. The CO2 booster refrigeration cycle can be optimal for commercial refrigeration, climate engineering, and heat pumps, but requires adaptation to regions with hot climates. Such solutions have already been tested by increasing transcritical cycle efficiency through the use of an ejector and parallel compression. However, the use of a scroll compressor in the transcritical cycle may be a good alternative to these methods: it slightly improves the technical and economic indicators of the refrigerating machine, mainly by reducing capital and operating costs. When considering improving the efficiency of a booster system with a scroll compressor, the issue of compressor performance optimisation must be addressed. This amounts to improving the efficiency of the booster system through the internal characteristics of the refrigerating machine. Thus, in this paper, a conceptual approach to this problem is considered, based on a system analysis of the interconnections of the compressor as the main energy-consuming element of a refrigerating machine. The development of a conceptual model makes it possible to identify the influence of various factors on the operation of the scroll compressor, to build an adequate mathematical model, and to choose or develop the necessary calculation methods.
The broad implementation of therapeutic-agent-loaded nano- and microcarriers has led to significant progress in the treatment of inflammatory joint diseases, especially in the case of radiological synovectomy using carriers labeled with β-emitting radioisotopes. However, the lack of automated procedures for radiolabeling these carriers seriously limits their further translation into clinical practice. Here, we develop an automated process of microcarrier radiolabeling for radiosynovectomy of inflammatory joint disease (synovitis in an arthritis rat model). The fabricated polymeric microparticles (MPs) possess high biocompatibility in vitro and in vivo, can be effectively loaded with the ¹⁸⁸Re therapeutic radionuclide (∼97.98%), and exhibit good radiochemical stability (up to 95%). In vivo biodistribution demonstrates the accumulation of intra-articularly injected ¹⁸⁸Re-MPs in the knee joint (> 96% ID/g for 7 d) without radiation exposure to other healthy organs. It has been shown that radiosynovectomy with ¹⁸⁸Re-MPs can significantly reduce knee joint swelling (knee joint with synovitis ∼1.8 ± 0.1 cm versus treated knee joint ∼1.1 ± 0.1 cm) and leads to reduced inflammation in the synovium and to cartilage recovery, decreasing the levels of inflammatory factors (TNF-α, IL-6, and IL-1β). Although clinical trials on radiosynovectomy of inflammatory joint disease are very limited, this study is a step towards implementation of the developed automated labeling procedure for the real application of radiosynovectomy with ¹⁸⁸Re in clinics.
In this paper, we study how the observed quality of a chosen feature-based link prediction model, applied to one part of a large node-attributed network, can be used for the analysis of another part of the network. Namely, we first show that one can determine, on the former part, which features (topological and attributive) of node pairs lead to a certain level of link prediction quality. Based on this, we then construct a link predictability (prediction quality) classifier for the network's node pairs that is able to distinguish poorly and highly predictable links by a few selected features of the corresponding nodes. The features are selected to provide a reasonable trade-off between the classifier's time consumption and its quality. The classifier is then used on the other part of the network to control the link prediction quality typical for the model and the network, without performing the actual link prediction. Our experiments show the good performance of the classifier over all tested real-world networks of various types (at least 0.9208 in terms of ROC-AUC and 0.9224 in terms of Average Precision).
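The classifier quality above is reported as ROC-AUC, which has a simple pairwise-ranking interpretation. A minimal, standard-library sketch of that metric (the labels and scores below are invented placeholders, not the authors' data):

```python
# Minimal ROC-AUC: the probability that a randomly chosen positive
# example (e.g. a highly predictable link) is scored above a randomly
# chosen negative one, counting ties as 1/2. O(n_pos * n_neg),
# illustrative only; real evaluations use optimized library routines.

def roc_auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]          # hypothetical link predictability labels
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2]  # hypothetical classifier scores
print(roc_auc(labels, scores))       # 8 of 9 positive/negative pairs ranked correctly
```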
Industrial control systems have been frequent targets of cyber attacks during the last decade. Adversaries can hinder the safe operation of these systems by tampering with their sensors and actuators, while ensuring that the monitoring systems are not able to detect such attacks in time. This paper presents methods to design and overcome stealthy attacks on linear time-invariant control systems that estimate their state using an interval observer, in the presence of unknown but bounded noise and perturbations. We analyze scenarios in which a malicious agent compromises the sensors and/or the actuators of the system with additive attack signals to steer the state estimate outside of the bounds provided by the interval observer. We first show that maximally disruptive attack sequences that remain undetected by a linear monitor can be computed recursively via linear programming. We then design an attack-resilient interval observer for the system’s state, identifying sufficient conditions on the sensor data for such an observer to be realizable. We propose a computational method to determine optimal observer gains using semi-definite programming and compute bounds for the unknown attack signal as well. In numerical simulations, we illustrate and compare the ability of such interval observers to still provide accurate estimates when under attack.
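The core idea of the interval observer above, stripped to a scalar toy system, is that lower and upper state bounds are propagated so that the true state always stays between them despite bounded noise; a monitor can then flag measurements that leave the interval. The system and numbers below are illustrative, not the paper's design:

```python
# Interval observer sketch for a scalar system
#   x+ = a*x + u + w,   |w| <= w_bar,   0 < a < 1.
# With a > 0, propagating the bounds monotonically keeps the true
# state inside [x_lo, x_hi] for every admissible noise sequence.
# Illustrative only; the paper treats general LTI systems, output
# injection gains, and additive sensor/actuator attacks.

a, w_bar = 0.9, 0.05

def propagate(x_lo, x_hi, u):
    return a * x_lo + u - w_bar, a * x_hi + u + w_bar

x_true, x_lo, x_hi = 0.0, -0.2, 0.2
for k in range(20):
    u = 0.1
    x_true = a * x_true + u + 0.03   # some noise within the bound
    x_lo, x_hi = propagate(x_lo, x_hi, u)
    assert x_lo <= x_true <= x_hi    # the interval never loses the state

print(round(x_hi - x_lo, 4))  # interval width approaches 2*w_bar/(1-a) = 1.0
```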
Metal foams are a class of materials with immense potential. The complex topology of these lightweight systems is responsible for their outstanding mechanical and thermal properties, and it is also the reason why thermal assessment of metal foams is challenging. While conventional analytical techniques provide useful data, modern technology allows one to explore thermal transfer in metal foams in great detail by building a digital twin of the thermal characterisation experiment. In this work, high-accuracy measurements with a laser flash analysis instrument, processed with two effective-medium heat conduction models, are compared to a digital twin built with computed tomography and finite element analysis. This combination helps resolve the complexity of analysing AlSi7Mg open-cell foams. Evidence of non-Fourier macroscopic heat conduction is found, which makes standard laser flash analysis non-scalable.
In this work, the influence of sequential isomorphic substitution of Y³⁺ by Gd³⁺ on the crystal structure, morphology, and luminescence spectra of (Y1−x−yGdxYby)3Al5O12 nanocrystals (30–40 nm in size), synthesized via a low-temperature polymer-salt method, was investigated. The prepared nanopowders were studied by XRD, SEM, FT-IR, and luminescence spectroscopy. The cubic garnet crystal structure of the prepared nanoparticles is retained upon replacement of up to 50% of Y³⁺ by Gd³⁺ ions. The substitution of Y³⁺ ions by Gd³⁺ increases the unit-cell volume of the garnet crystals and significantly disorders their crystal structure. It was shown that controlled partial substitution (up to 50%) of Y³⁺ ions by Gd³⁺ in Yb:YAG materials leads to significant broadening of the main emission band of Yb³⁺ ions (λmax = 1030 nm), which is promising for the development of ultrashort-pulse laser media.
In this note a new high performance least squares parameter estimator is proposed. The main features of the estimator are: (i) global exponential convergence is guaranteed for all identifiable linear regression equations; (ii) it incorporates a forgetting factor allowing it to preserve alertness to time-varying parameters; (iii) thanks to the addition of a mixing step it relies on a set of scalar regression equations ensuring a superior transient performance; (iv) it is applicable to nonlinearly parameterized regressions verifying a monotonicity condition and to a class of systems with switched time-varying parameters; (v) it is shown that it is bounded-input-bounded-state stable with respect to additive disturbances; (vi) continuous and discrete-time versions of the estimator are given. The superior performance of the proposed estimator is illustrated with a series of examples reported in the literature.
This work focuses on a dynamic model of an innovative multigenerational solar-wind-based system, assessed from energetic, exergetic, economic, and environmental perspectives. It is integrated into a near-zero-energy building in Saint Petersburg, Russia, with the purpose of covering the hourly cooling, heating, and electricity loads of the building. The system consists of a wind turbine, a parabolic trough solar loop, an absorption chiller, and a compressed air energy storage system. A gas heater is also used to meet the total heating load at off-peak hours of solar energy and becomes the main source of energy in some months. A comprehensive model is developed and the feasibility of solar and wind energy for the proposed system is investigated dynamically. Results show that the proposed solar system can cover up to 61 % of the yearly heating loads of the building and of the system itself; the heat load of the system itself includes the heat demand of the absorption chiller and the compressed air storage system. It is also noticeable that the presented energy storage system, together with the wind turbine, provides almost 99 % of the required electricity load, while the rest is drawn from the grid. In January, 13 % of the electricity demand is supplied from the grid, and in the last three months of the year, nearly 69 % of the power produced by the wind turbine is sold to the grid. The system yields a reduction of 13,859 kg/year in CO2 emissions due to heating, cooling, and electricity. Moreover, the maximum monthly energy and exergy efficiencies are achieved in December, at 41 % and 11 %, respectively. Finally, the economic analysis shows that a positive net present value is reached after 12, 14, and 17 years, assuming interest rates of 1 %, 3 %, and 5 %, respectively.
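The payback figures above come from a net present value calculation, NPV(r, T) = −C0 + Σ CF_t/(1+r)^t. The sketch below illustrates the mechanics with an invented investment and a flat annual cash flow; these placeholder numbers happen to reproduce the 12/14/17-year pattern, but they are not the study's actual cash flows:

```python
# Year in which the net present value of a project turns positive,
# for an initial cost c0 and a constant annual cash flow annual_cf
# discounted at rate `rate`. The cash flows are hypothetical
# placeholders chosen to show how a higher interest rate delays
# payback, as in the abstract's 12/14/17-year result.

def payback_year(c0, annual_cf, rate, horizon=40):
    npv = -c0
    for t in range(1, horizon + 1):
        npv += annual_cf / (1 + rate) ** t
        if npv > 0:
            return t
    return None  # never pays back within the horizon

c0, cf = 100_000.0, 9_000.0
for r in (0.01, 0.03, 0.05):
    print(r, payback_year(c0, cf, r))  # → 12, 14 and 17 years
```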
The behavior of a twisted electron colliding with a linearly polarized laser pulse is investigated within relativistic quantum mechanics. To better match real experimental conditions, we introduce a Gaussian spatial profile for the initial electron state as well as an envelope function for the laser pulse, so that both interacting objects have a finite size along the laser propagation direction. For this setup, we analyze the dynamics of various observable quantities regarding the electron state: the probability density, angular momentum, and mean values of the spatial coordinates. It is shown that the motion of a twisted wavepacket can be accurately described by averaging over classical trajectories with various directions of the transverse momentum component. On the other hand, full quantum simulations demonstrate that the ring structure of the wavepacket in the transverse plane can be significantly distorted, leading to large uncertainties in the total angular momentum of the electron. This effect persists after the interaction, provided the laser pulse has a nonzero electric-field area.
Understanding the demographic history of populations is a key goal in population genetics, and with improving methods and data, ever more complex models are being proposed and tested. Demographic models of current interest typically consist of a set of discrete populations, their sizes and growth rates, and continuous and pulse migrations between those populations over a number of epochs, which can require dozens of parameters to fully describe. There is currently no standard format to define such models, significantly hampering progress in the field. In particular, the important task of translating the model descriptions in published work into input suitable for population genetic simulators is labor intensive and error prone. We propose the Demes data model and file format, built on widely used technologies, to alleviate these issues. Demes provides a well-defined and unambiguous model of populations and their properties that is straightforward to implement in software, and a text file format that is designed for simplicity and clarity. We provide thoroughly tested implementations of Demes parsers in multiple languages including Python and C, and showcase initial support in several simulators and inference methods. An introduction to the file format and a detailed specification are available at https://popsim-consortium.github.io/demes-spec-docs/.
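To make the file format concrete, a minimal Demes model might look like the YAML below: an ancestral population splitting into two demes with a later admixture pulse. The field names follow the published specification as I understand it, but the populations, sizes, and times are purely illustrative; the linked specification is authoritative.

```yaml
# Minimal illustrative Demes model (sizes and times are invented).
time_units: generations
demes:
  - name: ancestral
    epochs:
      - {start_size: 10000, end_time: 2000}
  - name: A
    ancestors: [ancestral]
    epochs:
      - start_size: 5000
  - name: B
    ancestors: [ancestral]
    epochs:
      - start_size: 3000
pulses:
  - {sources: [A], dest: B, time: 500, proportions: [0.1]}
```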
Reliable broadcast is a fundamental primitive, widely used as a building block for data replication in distributed systems. Informally, it ensures that system members deliver the same values, even in the presence of equivocating Byzantine participants. Classical broadcast protocols are based on centralized (globally known) trust assumptions defined via sets of participants (quorums) that are likely not to fail in system executions. In this paper, we consider the reliable broadcast abstraction in decentralized trust settings, where every system participant chooses its quorums locally. We introduce a class of relaxed reliable broadcast abstractions that perfectly match these settings. We then describe a broadcast protocol that achieves optimal consistency, measured as the maximal number of different values from the same source that the system members may deliver. In particular, we establish how this optimal consistency is related to parameters of a graph representation of decentralized trust assumptions.
Keywords: Reliable broadcast, Quorum systems, Decentralized trust, Consistency measure
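The decentralized trust setting above lets each participant pick its own quorums. A classical consistency condition in quorum-based broadcast is that quorums pairwise intersect; when they do not, an equivocating sender can split the system into groups delivering different values. A minimal sketch of that check (the trust assignment is invented, and this is the standard intersection property, not the paper's graph-based characterization):

```python
# Each participant locally declares its quorums: sets of participants
# it assumes will not all fail. The function checks the classical
# quorum intersection property across all declared quorums.
# Illustrative only; the paper's consistency measure is finer-grained.

quorums = {
    "p1": [{"p1", "p2", "p3"}, {"p1", "p3", "p4"}],
    "p2": [{"p2", "p3", "p4"}],
    "p3": [{"p1", "p2", "p3"}],
    "p4": [{"p2", "p3", "p4"}],
}

def quorums_intersect(quorums):
    all_q = [q for qs in quorums.values() for q in qs]
    return all(q1 & q2 for q1 in all_q for q2 in all_q)

print(quorums_intersect(quorums))  # every pair of quorums shares a member
```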
Psychology suffers from the absence of mathematically formalized primitives. As a result, conceptual and quantitative studies lack an ontological basis that would place them alongside the natural sciences. The article addresses this problem by describing a minimal psychic structure, expressed in the algebra of quantum theory. The structure is demarcated into categories of emotion and color, recognized as elementary psychological phenomena. This is achieved by means of the quantum-theoretic qubit state space, isomorphic to emotion and color experiences both in meaning and in mathematics. In particular, colors are mapped to qubit states through the geometric affinity between the HSL-RGB color solids and the Bloch sphere, widely used in physics. The resulting correspondence aligns with the recent model of subjective experience, producing a unified spherical map of emotions and colors. This structure is identified as a semantic atom of natural thinking: a unit of affectively colored personal meaning, involved in elementary acts of binary decision. The model contributes to finding a unified ontology of both inert and living Nature, bridging previously disconnected fields of research. In particular, it enables theory-based coordination of emotion, decision, and cybernetic sciences, needed to achieve new levels of practical impact.
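One way a color-to-qubit correspondence like the one above could be parameterized is to send hue to the azimuthal angle and lightness to the polar angle of the Bloch sphere, giving the pure state |ψ⟩ = cos(θ/2)|0⟩ + e^{iφ} sin(θ/2)|1⟩. The mapping below is a hypothetical illustration of that geometric idea, not the article's exact construction:

```python
# Hypothetical HSL -> Bloch sphere parameterization: hue traces the
# equatorial (azimuthal) circle, lightness moves between the poles.
# The specific conventions (which pole is white, where hue zero sits)
# are assumptions made for this sketch.
import cmath, math

def color_to_qubit(hue, lightness):
    """hue, lightness in [0, 1] -> (amplitude of |0>, amplitude of |1>)."""
    theta = math.pi * (1.0 - lightness)   # white at the north pole, black at the south
    phi = 2.0 * math.pi * hue             # hue circle -> equatorial angle
    return math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2)

a0, a1 = color_to_qubit(hue=1 / 3, lightness=0.5)  # a mid-lightness green
print(abs(a0) ** 2 + abs(a1) ** 2)  # normalization check, ≈ 1.0
```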
A brief overview of existing approaches to determining the dispersed composition of dust during construction works is presented, with the most promising method identified. The existing methods are divided into those with and without pre-deposition. Methods with pre-deposition require considerable time and are used mainly for laboratory studies. Methods without pre-deposition make it possible to conduct measurements in real time, which distinguishes them favorably for use during construction works. One such method is the acoustic method, which also allows non-destructive testing and is based on the registration of acoustic emission (AE) signals that occur during natural or forced vibration excitation of the systems being diagnosed. Analysis of the physical parameters of the dust flow showed that the key signals are the AE signals, which can be presented in the form of diagnostic amplitude-frequency Fourier spectra containing information about the fractional composition and concentration of the dust-and-gas flow. From this spectrum, subspectra corresponding to the selected intervals of dispersion and dust concentration are then extracted. The resulting algorithm is first applied to reference dust samples in order to form a data bank, on the basis of which the dust composition is subsequently determined. This method requires special software, which currently exists only as a prototype and, accordingly, needs further refinement.
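The diagnostic amplitude-frequency spectrum described above is, at its core, a Fourier transform of the registered AE signal. A naive discrete Fourier transform on a synthetic tone shows the idea with only the standard library; real AE processing would use an FFT on calibrated sensor data:

```python
# Naive O(N^2) discrete Fourier transform producing a one-sided
# amplitude spectrum. A pure tone with 5 cycles over the window
# stands in for an acoustic emission signal; its energy lands in
# frequency bin 5. Illustrative only.
import cmath, math

def amplitude_spectrum(signal):
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * t / n)
                    for t, x in enumerate(signal))) / n
            for k in range(n // 2)]

n = 64
tone = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]  # 5 cycles
spec = amplitude_spectrum(tone)
print(spec.index(max(spec)))  # → 5: the peak sits in bin 5
```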
When data-driven algorithms, especially those based on deep neural networks (DNNs), replace classical ones, their superior performance often comes with difficulty in their analysis. To compensate for this drawback, formal verification techniques, which can provide reliable guarantees on program behavior, have been developed for DNNs. These techniques, however, usually consider DNNs alone, excluding the real-world environments in which they operate, and the applicability of techniques that do account for such environments is often limited. In this work, we consider the problem of formally verifying a neural controller for the routing problem in a conveyor network. Unlike in known problem statements, our DNNs are executed in a distributed context, and the performance of the routing algorithm, which we measure as the mean delivery time, depends on multiple executions of these DNNs. Under several assumptions, we reduce the problem to a number of DNN output reachability problems, which can be solved with existing tools. Our experiments indicate that sound-and-complete formal verification in such cases is feasible, although it is notably slower than the gradient-based search of adversarial examples. The paper is structured as follows. Section 1 introduces basic concepts. Section 2 then introduces the routing problem and DQN-Routing, the DNN-based algorithm that solves it. Section 3 presents the contribution of this paper: a novel sound and complete approach to formally checking an upper bound on the mean delivery time of DNN-based routing. This approach is experimentally evaluated in Section 4. The paper concludes with a discussion of the results and an outline of possible future work.
The paper analyzes the possibilities of transforming C programming language constructs into objects of the EO programming language. The key challenge of the method is transpilation from a system programming language into a language at a higher level of abstraction, which does not allow direct manipulation of computer memory. Almost all application and domain-oriented programming languages disable such direct access to memory. Operations that need to be supported in this case include the use of dereferenced pointers, the overlaying of data of different types in the same memory area, and different interpretations of the same data located at the same memory address. A decision was made to create additional EO objects that directly simulate interaction with computer memory as in the C language. These objects encapsulate unreliable data operations which use pointers. An abstract memory object was proposed for simulating the capabilities of the C language to provide interaction with computer memory. The memory object is essentially an array of bytes. It is possible to write into memory and read from memory at a given index; the number of bytes read or written depends on which object is being used. The transformation of various C language constructs into EO code is considered at the level of the compilation unit. To study the variants and analyze the results, a transpiler was developed that provides the necessary transformations. It is implemented on the basis of Clang, which forms an abstract syntax tree. This tree is processed using the LibTooling and LibASTMatchers libraries. As a result of compiling a C program, code in the EO language is generated. The considered approach turns out to be appropriate for solving different problems. One such problem is static code analysis. Such solutions make it possible to isolate low-level code fragments into separate program objects, focusing on their study and possible transformations into more reliable code.
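The abstract memory object described above, an array of bytes written and read at a given index with a type-dependent width, can be sketched in a few lines. The sketch below is in Python for illustration only; the actual transpiler emits EO objects, not Python:

```python
# Sketch of the abstract memory object: a flat byte array with typed
# reads and writes at a given index. Reading the same bytes under a
# different format mimics C's union/pointer-reinterpretation behavior.
import struct

class Memory:
    def __init__(self, size):
        self.data = bytearray(size)

    def write(self, index, fmt, value):
        # fmt is a struct format, e.g. "<i" (int32), "<f" (float32);
        # it determines how many bytes are written.
        struct.pack_into(fmt, self.data, index, value)

    def read(self, index, fmt):
        return struct.unpack_from(fmt, self.data, index)[0]

mem = Memory(16)
mem.write(0, "<i", -1)        # store a 32-bit signed int at address 0
print(mem.read(0, "<i"))      # → -1
print(mem.read(0, "<I"))      # same bytes reinterpreted as unsigned → 4294967295
mem.write(4, "<f", 1.5)       # a 32-bit float right after it
print(mem.read(4, "<f"))      # → 1.5
```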
3,584 members
Eduard Ageev
  • Department of Physics
Alexander Dmitriev
  • Department of Solid State Optoelectronics
I. Iorsh
  • Nanophotonics and Metamaterials
Sergey Bezzateev
  • Department of Security of Cyberphysical Systems
A. V. Baranov
  • Department of Photonics
Information
Address
49 Kronverkskiy pr., 197101, Saint Petersburg, Russia
Head of institution
Vladimir N. Vasilev
Website
http://en.ifmo.ru/
Phone
+7 812 232-97-04