A reinforcement learning-based decision system for electricity pricing plan selection by smart grid end users

Citation:

Tianguang Lu, Xinyu Chen, Michael B. McElroy, Chris Nielsen, Qiuwei Wu, Hongying He, and Qian Ai. 2021. "A reinforcement learning-based decision system for electricity pricing plan selection by smart grid end users." IEEE Transactions on Smart Grid, 12, 3, pp. 2176-2187.

Abstract:

With the development of deregulated retail power markets, it is possible for end users equipped with smart meters and controllers to optimize their consumption cost portfolios by choosing among pricing plans offered by different retail electricity companies. This paper proposes a reinforcement learning-based decision system for assisting the selection of electricity pricing plans, which can minimize both the electricity payment and the consumption dissatisfaction of individual smart grid end users. The decision problem is modeled as a transition-probability-free Markov decision process (MDP) with an improved state framework. The problem is solved using a kernel-approximator-integrated batch Q-learning algorithm, in which modifications to sampling and data representation are made to improve computational and predictive performance. The proposed algorithm can extract the hidden features behind time-varying pricing plans from a continuous high-dimensional state space. Case studies are based on data from real-world historical pricing plans, and the optimal decision policy is learned without a priori information about the market environment. Results of several experiments demonstrate that the proposed decision model can construct a precise predictive policy for individual users, effectively reducing their cost and energy consumption dissatisfaction.
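The batch Q-learning approach the abstract describes could be sketched as fitted Q-iteration over a logged batch of (state, action, reward, next-state) transitions, with a kernel regressor as the Q-function approximator. The sketch below is illustrative only: the state dimension, number of pricing plans, synthetic transition data, toy reward, and the choice of scikit-learn's `KernelRidge` are all assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

N_PLANS = 3     # hypothetical number of candidate pricing plans (actions)
STATE_DIM = 4   # hypothetical state features (e.g., recent prices, demand)
GAMMA = 0.95    # discount factor

# Synthetic batch of transitions standing in for historical market data
S = rng.normal(size=(200, STATE_DIM))
A = rng.integers(0, N_PLANS, size=200)
R = -np.abs(S[:, 0]) + 0.1 * A          # toy reward: negative-cost proxy
S_next = S + rng.normal(scale=0.1, size=S.shape)

def encode(states, actions):
    """One-hot encode the chosen plan and append it to the state vector."""
    onehot = np.eye(N_PLANS)[actions]
    return np.hstack([states, onehot])

# Batch (fitted) Q-iteration with an RBF-kernel regressor as approximator
model = None
for _ in range(10):
    if model is None:
        targets = R                     # first sweep: Q ~ immediate reward
    else:
        # Bootstrap targets: r + gamma * max_a' Q(s', a')
        q_next = np.column_stack([
            model.predict(encode(S_next, np.full(len(S_next), a)))
            for a in range(N_PLANS)
        ])
        targets = R + GAMMA * q_next.max(axis=1)
    model = KernelRidge(kernel="rbf", alpha=1.0).fit(encode(S, A), targets)

def greedy_plan(state):
    """Select the pricing plan with the highest learned Q-value."""
    q = [model.predict(encode(state[None, :], np.array([a])))[0]
         for a in range(N_PLANS)]
    return int(np.argmax(q))
```

In this offline setting no transition probabilities are needed, matching the paper's transition-probability-free MDP framing: the regressor is refit on bootstrapped targets at each sweep, and the greedy policy over the learned Q-values yields the plan recommendation.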
Last updated on 01/11/2024