You are required to read and agree to the following before accessing a full-text version of an article in the IDE article repository.
The full-text document you are about to access is subject to national and international copyright laws. In most cases (but not necessarily all), personal use is allowed, provided that the copyright owner is duly acknowledged and respected. All other use typically requires explicit permission (often in writing) from the copyright owner.
For the reports in this repository we specifically note that:
- the use of articles under IEEE copyright is governed by the IEEE copyright policy (available at http://www.ieee.org/web/publications/rights/copyrightpolicy.html)
- the use of articles under ACM copyright is governed by the ACM copyright policy (available at http://www.acm.org/pubs/copyright_policy/)
- technical reports and other articles issued by Mälardalen University are free for personal use. For other use, the explicit consent of the authors is required
- in other cases, please contact the copyright owner for detailed information
By accepting I agree to acknowledge and respect the rights of the copyright owner of the document I am about to access.
If you are in doubt, feel free to contact webmaster@ide.mdh.se
A Modified Adaptive Data-Enabled Policy Optimization Control to Resolve State Perturbations
Publication Type:
Conference/Workshop Paper
Venue:
64th IEEE Conference on Decision and Control (CDC)
Abstract
This paper proposes modifications to the data-enabled policy optimization (DeePO) algorithm to mitigate state perturbations. DeePO is an adaptive, data-driven approach designed to iteratively compute a feedback gain equivalent to the certainty-equivalence LQR gain. Like other data-driven approaches based on Willems' fundamental lemma, DeePO requires persistently exciting input signals, which linear state-feedback gains from LQR designs cannot inherently produce. To address this, probing noise is conventionally added to the control signal to ensure persistent excitation; however, the added noise may induce undesirable state perturbations. We first identify two key issues that jeopardize the desired performance of DeePO when probing noise is not added: the convergence of the states to the equilibrium point and the convergence of the controller to its optimal value. To address these challenges without relying on probing noise, we propose Perturbation-Free DeePO (PFDeePO), built on two fundamental principles. First, the algorithm pauses the control gain update of the DeePO process when the system states are near the equilibrium point. Second, once the controller has converged, it applies multiplicative noise with a mean value of 1 as a gain on the control signal. This approach minimizes the impact of noise as the system approaches equilibrium while preserving stability. We demonstrate the effectiveness of PFDeePO through simulations, showcasing its ability to eliminate state perturbations while maintaining system performance and stability.
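To illustrate the two principles stated in the abstract, the following is a minimal Python sketch of a single PFDeePO-style control step. The function name, the state-norm threshold, the convergence flag, the noise level, and the update_gain routine are assumptions made for illustration only and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def pfdeepo_step(x, K, update_gain, eps_state=1e-3, gain_converged=False, noise_std=0.05):
    # Principle 1: pause the gain update when the states are near the
    # equilibrium point, since data collected there is no longer
    # persistently exciting (hypothetical threshold eps_state).
    if np.linalg.norm(x) > eps_state:
        K = update_gain(K, x)

    u = K @ x  # nominal LQR-like state feedback

    # Principle 2: once the controller has converged, apply multiplicative
    # noise with mean value 1 as a gain on the control signal; the induced
    # perturbation scales with u and therefore vanishes as the system
    # approaches equilibrium, unlike additive probing noise.
    if gain_converged:
        u = (1.0 + noise_std * rng.standard_normal()) * u

    return u, K

# Example usage with a 2-state system and a placeholder gain update
x = np.array([1.0, -0.5])
K = np.array([[0.2, 0.1]])
u, K = pfdeepo_step(x, K, update_gain=lambda K, x: K, gain_converged=True)

The point the sketch is meant to convey is that the excitation is proportional to the control signal itself, so it does not perturb the states once they settle at the equilibrium, which is what additive probing noise would do.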
Bibtex
@inproceedings{Kaheni7263,
  author    = {Mojtaba Kaheni and Niklas Persson and Vittorio De Iuliis and Costanzo Manes and Alessandro Papadopoulos},
  title     = {A Modified Adaptive Data-Enabled Policy Optimization Control to Resolve State Perturbations},
  month     = {December},
  year      = {2025},
  booktitle = {64th IEEE Conference on Decision and Control (CDC)},
  url       = {http://www.es.mdu.se/publications/7263-}
}