Articles
European Journal of Control (0947-3580), vol. 85
This research addresses the challenge of effective human-robot interaction in master-slave robotic systems, particularly for applications such as manufacturing and healthcare. A method is proposed for transferring the desired impedance of a human operator to a slave robot. A three-term model estimates the interactive force/torque between the human hand and the master robot, with adaptive rules that update the stiffness and damping coefficients in real time to provide accurate and responsive haptic feedback. The updated coefficients dynamically adjust the reference impedance model used to control the slave robot. This architecture, incorporating robust control techniques and estimators, ensures stability and transparency, enabling the master-side user to perceive the conditions faced by the slave robot (e.g., obstacles) while the slave robot responds according to the user's desired impedance, providing a seamless and intuitive interaction. Input-to-state stability of the entire closed-loop system is analyzed in the presence of the proposed three-term model, demonstrating robustness to disturbances and uncertainties. A comparison of root mean square (RMS) errors for position tracking and force/torque tracking when the slave robot encounters an obstacle shows the favorable performance of the proposed approach relative to reference impedance model approaches with fixed stiffness and damping coefficients and to traditional position control approaches. Numerical simulations and experimental implementation validate the efficiency and accuracy of the proposed approach. © 2025
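The adaptive-impedance idea in the abstract can be illustrated with a minimal sketch. This is not the paper's actual three-term model: the scalar one-degree-of-freedom dynamics, the gradient-style adaptation rule, and all numerical gains below are illustrative assumptions.

```python
def impedance_step(x, x_dot, f_ext, m, k, d, x_ref, dt):
    """One semi-implicit Euler step of a scalar reference impedance model:
    m*x_ddot + d*x_dot + k*(x - x_ref) = f_ext."""
    x_ddot = (f_ext - d * x_dot - k * (x - x_ref)) / m
    x_dot = x_dot + x_ddot * dt      # update velocity first (semi-implicit)
    x = x + x_dot * dt               # then position, using the new velocity
    return x, x_dot

def adapt_gains(k, d, f_err, x_err, x_dot_err, gamma_k=0.5, gamma_d=0.1):
    """Gradient-style real-time update of stiffness k and damping d,
    driven by the force-estimation error; gains gamma_k/gamma_d are
    placeholder adaptation rates, and both coefficients are clamped
    to remain non-negative for passivity."""
    k = k + gamma_k * f_err * x_err
    d = d + gamma_d * f_err * x_dot_err
    return max(k, 0.0), max(d, 0.0)
```

Feeding the adapted k and d back into the reference model at each step is what lets the slave-side controller track an impedance mirroring the operator's; the specific update law here stands in for the paper's adaptive rules.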
International Journal of Machine Learning and Cybernetics (1868-808X)
Recent research has focused on the development of advanced control systems for quadrotors, particularly with the advent of intelligent control methodologies. This paper evaluates and compares two innovative approaches: (1) a neural controller optimized using a Growing Particle Swarm Optimization (GPSO) algorithm, and (2) a layer-by-layer Deep Reinforcement Learning (DRL) controller. Method 1 leverages the GPSO algorithm to fine-tune the weights of the neural controller without requiring prior training data, enabling efficient online optimization. It integrates a PD controller, designed using the Ziegler-Nichols method, which is further refined by an online PD-neural controller. This hybrid approach demonstrates high control precision with moderate computational demands. Method 2, on the other hand, employs a DRL-based controller structured in three layers: mapping and goal determination, path generation, and control of the quadrotor dynamics. This approach adapts to dynamic environments through episodic task-based training, achieving high adaptability and control precision at the cost of increased computational complexity. Finally, simulation results and practical experiments demonstrate the performance of the two methods across various scenarios. © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2025.