Browse by author "Mahata, Shibendu"
Now showing 1 - 4 of 4
Item
Optimal F-domain stabilization technique for reduction of commensurate fractional-order SISO systems (Springer Nature, 2022)
Mahata, Shibendu; Herencsar, Norbert; Alagoz, Baris Baykant; Yeroglu, Celaleddin
This paper presents a new approach for the reduction of commensurate fractional-order single-input single-output systems. The frequency-response error of the reduced-order model (ROM) relative to the original system is minimized in the F-plane. A constrained optimization technique is introduced to satisfy the angle criteria for F-domain stability of the proposed ROM. Significant improvements in both the time and frequency responses over the recently published literature are illustrated using several numerical examples.

Item
Reduced order infinite impulse response system identification using manta ray foraging optimization (Elsevier, 2024)
Mahata, Shibendu; Herencsar, Norbert; Alagoz, Baris Baykant; Yeroglu, Celaleddin
This article presents a useful application of the Manta Ray Foraging Optimization (MRFO) algorithm to the adaptive infinite impulse response (IIR) system identification problem. The effectiveness of the proposed technique is validated on four benchmark IIR models for reduced-order system identification. The stability of the estimated IIR system is assured by incorporating a pole-finding and initialization routine into the search procedure of the MRFO algorithm; this modification helps MRFO seek stable IIR filter solutions. The absence of such a scheme, which is the case in the majority of the recently published literature, may lead to an unstable IIR filter for unknown real-world instances, particularly as the estimation order increases. Experiments conducted in this study highlight that the proposed technique achieves a stable filter even when large bounds for the design variables are considered. The convergence rate, robustness, and computational speed of MRFO are investigated for all the considered problems. The influence of the control parameters of MRFO on the design performance is evaluated to gain insight into the interaction between the three foraging strategies of the algorithm. Extensive statistical performance analyses employing various non-parametric hypothesis tests on design consistency and convergence are conducted to compare the efficiency of the proposed MRFO-based approach with six other metaheuristic search procedures. The results on the mean-square-error metric also highlight the improved solution quality of the proposed approach compared to various techniques published in the literature.

Item
A Robust Frequency-Domain-Based Order Reduction Scheme for Linear Time-Invariant Systems (IEEE, 2021)
Mahata, Shibendu; Herencsar, Norbert; Alagoz, Baris Baykant; Yeroglu, Celaleddin
This paper presents a robust model order reduction technique with guaranteed stability, minimum phase, and matched steady-state response for linear time-invariant single-input single-output systems. The proposed approach is generalized, allowing the designer to select any desired order of the reduced-order model (ROM). In contrast to the published literature, which primarily uses the time-domain behavior, the proposed technique utilizes the frequency-domain information of the full-order system. The suggested strategy determines the optimal ROM in a single step, which is simpler than the various recently reported mixed methods. The robustness is demonstrated using convergence studies and statistical measures of the final solution quality and model coefficients. The superiority over the recent literature is illustrated through four numerical examples using various time-domain and frequency-response performance metrics.

Item
A theoretical demonstration for reinforcement learning of PI control dynamics for optimal speed control of DC motors by using Twin Delay Deep Deterministic Policy Gradient Algorithm (Pergamon-Elsevier Science Ltd, 2023)
Tufenkci, Sevilay; Alagoz, Baris Baykant; Kavuran, Gurkan; Yeroglu, Celaleddin; Herencsar, Norbert; Mahata, Shibendu
To benefit from the advantages of Reinforcement Learning (RL) in industrial control applications, RL methods can be used for optimal tuning of classical controllers based on simulation scenarios of operating conditions. In this study, the Twin Delay Deep Deterministic (TD3) policy gradient method, an effective actor-critic RL strategy, is implemented to learn optimal Proportional Integral (PI) controller dynamics from a Direct Current (DC) motor speed control simulation environment. For this purpose, the PI controller dynamics are introduced to the actor network by using PI-based observer states from the control simulation environment. A suitable Simulink simulation environment is adapted to perform the training of the TD3 algorithm. The actor network learns the optimal PI controller dynamics through a reward mechanism that implements the minimization of the optimal control objective function. A setpoint filter is used to describe the desired setpoint response, and step disturbance signals with random amplitude are incorporated into the simulation environment to improve disturbance rejection skills through experience-based learning. When the training task is completed, the optimal PI controller coefficients are obtained from the weight coefficients of the actor network. The performances of the optimal PI dynamics learned by the TD3 algorithm and by the Deep Deterministic Policy Gradient algorithm are compared. Moreover, the control performance improvement of this RL-based PI controller tuning method (RL-PI) is demonstrated relative to both integer-order and fractional-order PI controllers tuned using several popular metaheuristic optimization algorithms, such as the Genetic Algorithm, Particle Swarm Optimization, Grey Wolf Optimization, and Differential Evolution.
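The stability safeguard described in the second item, where candidate IIR filters are checked for poles inside the unit circle and unstable candidates are kept out of the search, can be illustrated with a minimal sketch. The simple random-search optimizer below is only a stand-in for MRFO (the actual algorithm is far more sophisticated), and the filter orders, bounds, and step size are illustrative assumptions, not the paper's values.

```python
import numpy as np

def is_stable(a):
    """A filter with denominator 1 + a[0] z^-1 + ... + a[-1] z^-na is
    stable when all poles lie strictly inside the unit circle."""
    poles = np.roots(np.concatenate(([1.0], a)))
    return bool(np.all(np.abs(poles) < 1.0))

def iir_output(b, a, x):
    """Direct-form IIR filtering: y[n] = sum_k b[k] x[n-k] - sum_k a[k] y[n-1-k]."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - 1 - k] for k in range(len(a)) if n - 1 - k >= 0)
        y[n] = acc
    return y

def identify(x, d, nb, na, iters=3000, seed=0):
    """Random-search identification that rejects unstable candidates,
    mimicking the pole-check-and-reinitialize safeguard: an unstable
    filter is never allowed to enter the population."""
    rng = np.random.default_rng(seed)
    best, best_err = None, np.inf
    while best is None:  # initialize with a stable candidate (hypothetical bounds)
        cand = rng.uniform(-1.0, 1.0, nb + na)
        if is_stable(cand[nb:]):
            best = cand
            best_err = np.mean((d - iir_output(cand[:nb], cand[nb:], x)) ** 2)
    for _ in range(iters):
        cand = best + rng.normal(0.0, 0.1, nb + na)
        if not is_stable(cand[nb:]):
            continue  # discard unstable filters instead of scoring them
        err = np.mean((d - iir_output(cand[:nb], cand[nb:], x)) ** 2)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err
```

For a first-order "unknown" plant y[n] = 0.5 x[n] + 0.4 y[n-1], `identify(x, d, 1, 1)` recovers a stable filter whose output error is small, even though the search bounds admit unstable denominators.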
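The third item's central idea, fitting a reduced-order model to the full model's frequency response while guaranteeing stability and a matched steady-state response, can be sketched as follows. The full-order transfer function, the first-order ROM structure, the frequency grid, and the grid search are all hypothetical stand-ins chosen for illustration; the paper's actual optimization is not reproduced here.

```python
import numpy as np

def freq_resp(num, den, w):
    """Evaluate a rational transfer function on the imaginary axis s = jw."""
    s = 1j * w
    return np.polyval(num, s) / np.polyval(den, s)

# Hypothetical full-order system G(s) = (s + 4) / (s^3 + 6 s^2 + 11 s + 6)
num_full = [1.0, 4.0]
den_full = [1.0, 6.0, 11.0, 6.0]

w = np.logspace(-2, 2, 400)          # frequency grid (illustrative)
H_full = freq_resp(num_full, den_full, w)

def rom_error(a):
    """Frequency-response error of a first-order ROM k/(s + a).  The gain k
    is pinned so the ROM's DC gain equals the full model's (steady-state
    matching); a > 0 keeps the single pole in the left half-plane."""
    if a <= 0:
        return np.inf
    dc = np.polyval(num_full, 0) / np.polyval(den_full, 0)  # full-model DC gain
    k = dc * a                                              # enforce k / a == dc
    H_rom = k / (1j * w + a)
    return float(np.mean(np.abs(H_full - H_rom) ** 2))

# Crude grid search over the pole location (a real optimizer would do better)
grid = np.linspace(0.05, 10.0, 2000)
errs = [rom_error(a) for a in grid]
a_best = grid[int(np.argmin(errs))]
```

By construction every candidate ROM is stable and matches the full model at s = 0, so the search only trades off fit across the rest of the frequency band, which mirrors the constrained single-step flavor of the described scheme.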
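For the fourth item, the quantity a reward mechanism drives the RL tuner to minimize can be sketched as a simple closed-loop cost for candidate PI gains. The first-order motor model, its parameters, and the integral-of-absolute-error (IAE) cost below are assumptions made for illustration; the paper's TD3 training loop, setpoint filter, and disturbance injection are not reproduced.

```python
def pi_cost(kp, ki, tau=0.5, K=2.0, dt=0.01, T=5.0, setpoint=1.0):
    """IAE of a PI speed loop on a hypothetical first-order DC-motor model
    tau * dw/dt = -w + K * u, simulated with forward-Euler steps.  An RL
    tuner's reward would be shaped so that maximizing reward minimizes a
    cost of this kind."""
    w_speed = 0.0   # motor speed state
    integ = 0.0     # integral of the tracking error
    cost = 0.0
    for _ in range(int(T / dt)):
        e = setpoint - w_speed
        integ += e * dt
        u = kp * e + ki * integ                  # PI control law
        w_speed += dt * (-w_speed + K * u) / tau  # forward-Euler plant update
        cost += abs(e) * dt                       # accumulate IAE
    return cost
```

Well-chosen gains produce a fast, well-damped step response and hence a low cost, while sluggish gains leave a large tracking error over the horizon; comparing `pi_cost(2.0, 3.0)` with `pi_cost(0.1, 0.1)` shows the ordering an optimizer or RL agent would exploit.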