2nd input velocity (change in error) variable.

Source publication
Article
Full-text available
In automation and mechatronics applications, the mass-spring-damper system (MSDS) plays a significant role in ensuring model serviceability and safety. The dynamics of this mechanical system are quite challenging to control. In this paper, the system is a single-degree-of-freedom (SDOF) spring-mass system. The issue of performance evaluation of...
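The SDOF mass-spring-damper model the abstract refers to can be sketched in a few lines. This is a minimal illustration, not the paper's model: the plant is m·x'' + c·x' + k·x = F, closed with a linear PID controller (one of the three controller families the paper compares), and all numeric values (mass, damping, stiffness, gains, step size) are illustrative assumptions.

```python
def simulate_msds_pid(setpoint=1.0, m=1.0, c=2.0, k=5.0,
                      kp=50.0, ki=20.0, kd=10.0,
                      dt=1e-3, steps=20000):
    """Drive an SDOF mass-spring-damper to `setpoint` with a linear PID.

    Plant: m*x'' = F - c*x' - k*x, integrated with explicit Euler.
    All parameter values are illustrative assumptions, not the paper's.
    """
    x, v = 0.0, 0.0            # position and velocity, starting at rest
    integral, prev_err = 0.0, setpoint
    for _ in range(steps):
        err = setpoint - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        force = kp * err + ki * integral + kd * deriv  # PID control force F
        a = (force - c * v - k * x) / m                # Newton's second law
        v += a * dt                                    # explicit Euler step
        x += v * dt
        prev_err = err
    return x

print(simulate_msds_pid())  # integral action removes steady-state error
```

With these (assumed) gains the closed-loop characteristic polynomial s³ + 12s² + 55s + 20 is stable by the Routh criterion, so the mass settles at the setpoint over the 20 s simulated horizon.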

Context in source publication

Context 1
... Figure 16 shows the control forces with using all these three different controllers (FLC, LPID, and NPID). ...

Similar publications

Article
Full-text available
The micro flapping wing based on the traditional flapping-wing mechanism has the disadvantages of low aerodynamic efficiency and high energy consumption. Bionic flapping wings driven by piezoelectric materials can effectively combine the two main subsystems, the flapping mechanism and the wings. This not only reduces weight but also has the char...
Article
Full-text available
In accordance with the relevant regulations of the Building Method, this paper uses a finite element model to establish a two-column, seven-grade dovetail column-frame model. Four kinds of thin cushions with different elastic and shear moduli are designed at the bottom of the columns. The elastic modulus of the cushion is 1/5, 1/10, 1/15, or 1/30 of the...
Article
Full-text available
To account for the clearance and assembly recoil of the fixed cylindrical cam mechanism, a dynamic analysis of the main roller was carried out. The system dynamics show that the force on the main roller changes rapidly and reaches a large value. With assembly clearance between the main roller and the cam curve slot, the main roller can suddenly rotate and it...

Citations

... Nonlinear control theories allow such systems to be controlled more effectively. However, nonlinear control theories generally require greater mathematical analysis and computational capacity [17]. Additionally, system modeling and design of control strategies can be more complex [18]. ...
Article
Full-text available
This work examines the use of deep Reinforcement Learning (RL) in mass-spring system position control, providing a fresh viewpoint that goes beyond conventional control techniques. Mass-spring systems are widely used in many sectors and are basic models in control theory. The novel aspect of this approach is the thorough examination of the impact of several optimizer algorithms on the RL methodology, which reveals the optimal control tactics. The research applies a Deep Deterministic Policy Gradient (DDPG) algorithm for continuous action spaces, where the actor and critic networks are important components in assessing the agent's performance. The RL agent is trained to follow a reference trajectory using the Simulink environment for system modeling. The study provides insights into the agent's learning approach and performance optimization by evaluating the training process using force-time graphs, reward graphs, and Episode Manager charts. Furthermore, the effect of different combinations of optimizers on the control performance of the agent is examined. The outcomes highlight the importance of optimizer selection in the learning process by revealing significant variations in training times. As a result, a better understanding of the relationship between various optimizers and control performance is provided by this study's novel application of reinforcement learning in mass-spring system control. The results raise the possibility of more potent methods for controlling complex systems and add to the expanding field of study at the interface of control theory and deep learning.