Fig. 2. The process of manually tuning a cavity filter. The tuning technician (a) observes the S-parameter curve (S11) (b) displayed on a Vector Network Analyzer (c) and adjusts the screws inserted into the cavity filter with a screwdriver (d), according to their own experience and tuning strategies (e). The whole process closely resembles a reinforcement learning problem.

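To make the analogy concrete, the loop in the figure can be written out as an agent-environment interaction. The sketch below is purely illustrative, with toy stand-ins for the VNA read-out and the return-loss spec (none of it is the paper's implementation); a learned policy would replace the random screw adjustments.

```python
# Illustrative only: the technician's tuning loop cast as an RL interaction loop.
import numpy as np

rng = np.random.default_rng(0)
freqs = np.linspace(-1.0, 1.0, 64)                     # normalized frequency sweep

def read_s11(screws):
    """Toy stand-in for the VNA read-out (panels (b)/(c)): |S11| in dB over the sweep."""
    depth = 30.0 * np.exp(-np.sum(screws ** 2))        # better screw settings -> deeper match
    return -depth * np.exp(-freqs ** 2)

def reward(s11_db, spec_db=-20.0):
    """Negative in-band violation of a return-loss spec; 0 means the filter is tuned."""
    band = slice(16, 48)                               # assumed passband samples
    return -float(np.maximum(s11_db[band] - spec_db, 0.0).mean())

screws = rng.uniform(-1.0, 1.0, size=6)                # detuned starting point
for step in range(50):                                 # trial-and-error loop (panel (e))
    s11 = read_s11(screws)                             # observe the S11 curve      -> state
    if reward(s11) == 0.0:                             # spec met                   -> stop
        break
    action = rng.uniform(-0.1, 0.1, size=6)            # a policy would choose this -> action
    screws = screws + action                           # screwdriver adjustment (panel (d))
```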

Source publication
Conference Paper
Full-text available
Reinforcement learning has achieved great success in recent decades when applied to fields such as finance, robotics, and multi-agent games. A variety of traditional manual tasks are facing an upgrade, and reinforcement learning opens the door to a whole new world for improving these tasks. In this paper, we focus on the task called Cavity Fi...

Context in source publication

Context 1
... internal state of the cavity filter, see Fig. 1 (b). The S-parameter curves displayed on the screen of the VNA indicate the tuning state of the current product and guide the tuning technician in performing the next tuning action, so that the curves are gradually improved until they reach the desired targets. The overall tuning process is shown in Fig. ...

Citations

... Refs. [34,35] proposed an automatic tuning framework for cavity filters based on reinforcement learning algorithms. The framework was tested in a simulation environment, and its applicability was verified under various custom tuning tasks. ...
Article
Full-text available
Microstrip filters are widely used in high-frequency circuit design for signal frequency selection. However, designing these filters often requires extensive trial and error to achieve the desired performance metrics, leading to significant time costs. In this work, we propose an automated design flow for hairpin filters, a specific type of microstrip filter. We employ artificial neural network (ANN) modeling techniques to predict the circuit performance of hairpin filters, and we leverage the efficiency of these low-cost models to deploy reinforcement learning agents. Specifically, we use the proximal policy optimization (PPO) reinforcement learning algorithm to learn abstract design actions for the filters, enabling automated design optimization. Simulation results demonstrate the effectiveness of the proposed approach: by optimizing the geometric dimensions, we significantly improve the performance metrics of hairpin filters, and the trained agent meets our specified design goals within 5 to 15 design steps. This work serves as a proof of concept for applying reinforcement learning techniques and pre-trained ANN models to automate MMIC filter design, and it shows clear advantages in time savings and performance efficiency compared with other optimization algorithms.
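Under the assumptions stated in the abstract (a pre-trained ANN surrogate standing in for the EM simulator, and PPO adjusting the geometry), such a loop might look roughly like the sketch below using stable-baselines3 and gymnasium; the environment, the toy surrogate, the dimensions, and the thresholds are all illustrative guesses, not the paper's code.

```python
# Illustrative sketch: a PPO agent adjusts filter geometry; a cheap surrogate scores it.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

def ann_surrogate(geometry):
    """Toy stand-in for a trained ANN: maps geometry to two spec-error metrics."""
    target = np.array([0.5, -0.2, 0.1, 0.3])           # hypothetical optimal geometry
    err = geometry - target[: geometry.size]
    return np.array([np.abs(err).mean(), np.square(err).mean()], dtype=np.float32)

class HairpinDesignEnv(gym.Env):
    def __init__(self, n_dims=4, max_steps=15):
        super().__init__()
        self.n_dims, self.max_steps = n_dims, max_steps
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(n_dims + 2,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_dims,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.geom = self.np_random.uniform(-1.0, 1.0, self.n_dims).astype(np.float32)
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        self.geom = np.clip(self.geom + 0.1 * np.asarray(action, dtype=np.float32), -2.0, 2.0)
        self.steps += 1
        perf = ann_surrogate(self.geom)
        reward = -float(perf.sum())                    # smaller spec error -> higher reward
        terminated = bool(perf[0] < 0.05)              # design goal reached
        truncated = self.steps >= self.max_steps       # cap at a short design-step budget
        return self._obs(), reward, terminated, truncated, {}

    def _obs(self):
        return np.concatenate([self.geom, ann_surrogate(self.geom)]).astype(np.float32)

model = PPO("MlpPolicy", HairpinDesignEnv(), verbose=0)
model.learn(total_timesteps=20_000)                    # affordable because the surrogate is fast
```

Because the surrogate is orders of magnitude cheaper than full EM simulation, the agent can afford the tens of thousands of environment steps that PPO typically needs.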
... • A conventional mapping is performed by conventional feed-forward networks [4][5][6]. ...
... More importantly, the learned strategies are expected to generalize to other cases, i.e., to different individuals of the same type or even to totally different types. This work is an extension of [9,33]. In [9], DQN is used for the first time to solve the tuning problem, but the state and action spaces are both very limited. In [33], DDPG first shows its effectiveness for tuning. The methods proposed in [21,22,34] also utilize DQN, double DQN, or DDPG; building on previous studies, these works attempt higher filter orders (screw numbers) or more elaborate reward functions, but give little consideration to generalization or transfer problems. ...
Article
Full-text available
Learning to master human intentions and to behave more like humans is an ultimate goal for autonomous agents, and achieving it imposes higher requirements on an agent's intelligence. In this work, we study autonomous learning mechanisms for solving complicated human tasks. We focus on the tuning of cavity filters, a common task in the communications industry that is not only time-consuming but also dependent on the knowledge of tuning technicians. We propose an automatic tuning framework for cavity filters based on Deep Deterministic Policy Gradient and design appropriate reward functions to accelerate training. Simulation experiments are carried out to verify the applicability of the algorithm. This method can not only automatically tune a detuned filter from a random starting position to meet the design requirements under certain circumstances, but also, to a certain extent, transfer the learned skills to new situations.
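The abstract emphasizes that the reward functions were designed to accelerate training. One plausible shaping scheme (our assumption, not the authors' exact function) rewards step-to-step improvement of the in-band return loss and adds a terminal bonus when the spec is met:

```python
# Hedged sketch of a shaped reward for filter tuning: dense progress signal plus a bonus.
import numpy as np

def shaped_reward(s11_db, prev_s11_db, spec_db=-18.0, band=slice(20, 44), success_bonus=10.0):
    """s11_db / prev_s11_db: current and previous |S11| curves in dB (assumed 64-point sweep)."""
    violation = np.maximum(s11_db[band] - spec_db, 0.0).mean()
    prev_violation = np.maximum(prev_s11_db[band] - spec_db, 0.0).mean()
    reward = prev_violation - violation                # dense: reward any improvement
    if violation == 0.0:                               # every in-band point meets the spec
        reward += success_bonus                        # terminal bonus speeds up learning
    return float(reward)
```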
... In addition to value-based DRL, another class of DRL methods is the policy gradient method. Wang et al. [160] present a framework based on deep deterministic policy gradient for tuning cavity filters, in which a continuous action space is supported. The Experience Replay and Target Network mechanisms of DQN are preserved to ensure the stability of the algorithm, based on their previous work [161]. ...
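The two DQN ingredients said to be carried over, experience replay and target networks, boil down to a small amount of code. The PyTorch sketch below is illustrative, not the cited implementation.

```python
# Illustrative PyTorch sketch of the two stabilizers kept from DQN in a DDPG-style tuner.
import random
from collections import deque
import torch

class ReplayBuffer:
    """Stores (s, a, r, s', done) transitions and samples decorrelated minibatches."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=64):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = map(list, zip(*batch))
        return [torch.as_tensor(x, dtype=torch.float32)
                for x in (states, actions, rewards, next_states, dones)]

def soft_update(target_net, online_net, tau=0.005):
    """Polyak averaging: target <- tau * online + (1 - tau) * target, applied each update."""
    with torch.no_grad():
        for t_param, o_param in zip(target_net.parameters(), online_net.parameters()):
            t_param.mul_(1.0 - tau).add_(tau * o_param)
```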
Article
Full-text available
Artificial intelligence (AI) techniques have been spreading across most scientific areas and have become a major focus of photonics research in recent years. Forward modeling and inverse design using AI can achieve high efficiency and accuracy for photonics components. With AI-assisted electronic circuit design for photonics components, more advanced photonics applications have emerged. Photonics benefits a great deal from AI, and AI, in turn, benefits from photonics by carrying out AI algorithms, such as complicated deep neural networks, on photonics components that use photons rather than electrons. Beyond the photonics domain, other related research areas or topics governed by Maxwell's equations share remarkable similarities in how they use AI. Studies in computational electromagnetics and the design of microwave devices, as well as their various applications, greatly benefit from AI. This article reviews leveraging AI in photonics modeling, simulation, and inverse design; leveraging photonics computing to implement AI algorithms; and leveraging AI beyond photonics topics, such as microwaves and quantum-related topics.
... In recent years, researchers have started to apply data-driven machine learning techniques to automate the tuning process [21]-[26], including heuristic methods [22], artificial neural networks [23], multi-kernel methods [24], reinforcement learning [25], and Q-learning [26]. Knowledge-based tuning algorithms are also popular; for example, in [27]-[31] fuzzy logic techniques are exploited to build tuning systems based on tuning experience summarized from tuning experts, in order to make tuning decisions and automate the tuning process. ...
Article
Full-text available
A procedure for the diagnosis and tuning of fabricated bandpass filters is presented. A surrogate model is established, consisting of a mapped coarse model at a basis point and a mapping of the measured response with respect to the change of the tuning elements from the basis point. To reduce the possibility of non-uniqueness, an implicit multipoint parameter-extraction technique is exploited to match both the response and the first-order derivative of the response with respect to the tuning parameters in the two models. With this approach, the robustness of the surrogate model and the convergence of the iteration are significantly improved. To verify the proposed method, two screw-tuned waveguide filters are presented, including an X-band filter fabricated by computerized numerical control (CNC) technology and a 3-D-printed metallic air-filled filter with large alignment tolerance and surface roughness. To test the robustness of the approach, an FR-4 microstrip combline filter with uncertain dielectric properties is tuned by active varactors for different specifications using the same surrogate model. All test results satisfactorily meet expectations.
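The core idea, extracting coarse-model parameters so that both the response and its first-order derivative with respect to the tuning elements match the measurement at the basis point, can be illustrated with a deliberately tiny example. The single tuning element and Lorentzian toy models below are our assumptions, not the authors' formulation.

```python
# Heavily simplified sketch of parameter extraction matching response and first derivative.
import numpy as np
from scipy.optimize import least_squares

freqs = np.linspace(0.9, 1.1, 41)            # normalized frequency sweep
x_basis, dx = 0.0, 1e-2                      # basis point and finite-difference step

def lorentzian(f0, f):
    return 1.0 / np.sqrt(1.0 + (25.0 * (f / f0 - f0 / f)) ** 2)

def coarse(p, x, f):
    """Coarse model: resonance moves with the tuning element via extracted (offset, slope)."""
    return lorentzian(p[0] + p[1] * x, f)

def measured(x, f):
    """Toy stand-in for the measured response of the fabricated filter."""
    return lorentzian(1.002 + 0.008 * x, f)

def residuals(p):
    r = coarse(p, x_basis, freqs) - measured(x_basis, freqs)                     # response match
    dc = (coarse(p, x_basis + dx, freqs) - coarse(p, x_basis - dx, freqs)) / (2 * dx)
    dm = (measured(x_basis + dx, freqs) - measured(x_basis - dx, freqs)) / (2 * dx)
    return np.concatenate([r, dc - dm])                                          # derivative match

fit = least_squares(residuals, x0=np.array([1.0, 0.01]))
print("extracted (offset, slope):", fit.x)   # approaches (1.002, 0.008) in this toy case
```

Matching the derivative as well as the response is what reduces the non-uniqueness of the extraction: many parameter sets can reproduce a single curve, far fewer can also reproduce its sensitivity to the tuning element.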
Article
Full-text available
Present-day demand and supply of connectivity necessitate the rapid production of microwave (MW) filter units. Production of these filters is followed by the most important step on the assembly line, namely the tuning of the filter, since tuning is crucial to meeting the selectivity requirements of the band. Since the advent of filters, tuning has always been done manually, and it is therefore considered a bottleneck by experts in the field; the need to automate it is clear. The goal of the current work is to outline the various MW filter tuning techniques that have been advocated by the research community. The limitations of these works and a comparative analysis are also summarized in tabular form in the present paper. The paper ends with the implementation of an expert-based hybrid deep learning algorithm to fully automate the filter tuning process.