
Modeling and optimizing the behavior of distributed agents in decentralized power systems using reinforcement learning techniques

Reinforcement Learning (RL) is an area of machine learning that studies how software agents should take actions in an environment so as to maximize some notion of cumulative reward. In economics and game theory, reinforcement learning has been used to explain how equilibrium can arise under bounded rationality. The current trend of digitalization in many sectors, such as the electric power industry, requires novel algorithms and approaches that take advantage of the latest advances in computing capacity and data management. The main objective of this PhD thesis is to understand how distributed technologies (distributed generation, flexible demand, energy storage, and advanced power electronics and control devices) will affect our understanding of power systems in a framework where all the involved agents (potentially millions) make sequential decisions in an uncertain environment characterized by the interdependence of their actions. Characterizing the behavior of all these distributed agents will be crucial to understanding the evolution of future power systems, not only for technical reasons but also from an economic and regulatory point of view.
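To make the RL setting above concrete, the following is a minimal sketch of tabular Q-learning, in which an agent learns by trial and error which actions maximize its cumulative reward. The toy "chain" environment, the hyperparameters, and all names here are illustrative assumptions for exposition only, not part of the thesis project or any specific power-system model.

```python
import random

# Illustrative toy environment (an assumption, not from the thesis):
# a chain of states 0..4; reaching the last state yields reward 1.0.
N_STATES = 5
ACTIONS = [1, -1]                 # move right or left along the chain
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(state, action):
    """Apply an action; reward is earned only on reaching the goal state."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):              # training episodes
    state = 0
    while True:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: bootstrap from the best action in the next state.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
        if done:
            break

# Greedy policy recovered from the learned Q-values.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

After training, the greedy policy moves right (toward the reward) in every non-terminal state. The thesis setting differs in scale and difficulty: many interacting agents, so each agent's environment is non-stationary, which is precisely what multi-agent RL and the game-theoretic perspective address.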

Requirements: Creativity and autonomy; programming skills (e.g., in Python, R, or Matlab); academic excellence.

Full-time contract with exclusive dedication to the PhD thesis.

Documents: Curriculum vitae, academic record, cover letter, and two recommendation letters.