Fig 2 - uploaded by Andrey Poddubnyy
Average aggregated load profiles for different scenarios, with 95% confidence intervals


Source publication
Article
The extensive penetration of distributed energy resources (DERs), particularly electric vehicles (EVs), poses a major challenge for distribution grids due to their limited capacity. Smart charging might alleviate this issue, but most of the optimization algorithms developed so far rest on an assumption of knowing the futur...

Context in source publication

Context 1
... datasets with consumption of residential buildings. These buildings were assigned to the nodes of the network. The series contains 3 winter peaks, since it starts at the beginning of 2016 and ends at the end of 2017. The data was scaled to fit the considered power grid. The aggregated load reaches up to 0.1 MW in the summer and 0.2 MW in the winter (Fig. 2). Only weekdays were considered for the training process, as it is assumed that people commute by EV between home and work in the middle of the day, which is supported by the ...
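The preprocessing described above can be sketched in a few lines: keep only weekdays from a two-year hourly load series and compute the average daily profile with a normal-approximation 95% confidence interval, as plotted in Fig. 2. The load series here is synthetic; the magnitudes (around 0.1 MW) are placeholders standing in for the scaled residential datasets.

```python
import numpy as np
import pandas as pd

# Synthetic hourly aggregated load for 2016-2017 (placeholder for the
# scaled residential consumption datasets described in the paper).
rng = np.random.default_rng(0)
idx = pd.date_range("2016-01-01", "2017-12-31 23:00", freq="h")
load = pd.Series(
    0.1 + 0.05 * np.sin(2 * np.pi * idx.hour / 24)
    + 0.01 * rng.standard_normal(len(idx)),
    index=idx,
)

# Keep weekdays only (Mon=0 .. Fri=4), as in the training setup.
weekdays = load[load.index.dayofweek < 5]

# Average daily profile with a normal-approximation 95% CI per hour.
by_hour = weekdays.groupby(weekdays.index.hour)
mean_profile = by_hour.mean()
ci95 = 1.96 * by_hour.std() / np.sqrt(by_hour.count())
```

`mean_profile` and `ci95` then give the center line and band of a plot like Fig. 2.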

Similar publications

Article
This paper investigates the bidirectional charging management of distributed parking lots accommodating multiple types of electric vehicles (EVs). The distribution of each EV type across parking lots is determined based on various factors such as the proportion of EV types at specific load points, the EV penetration rate, peak demand at related lo...

Citations

... However, in other studies, reinforcement-learning-based algorithms are used to obtain a better scheduling plan for larger or more complex scenarios. For example, studies in [16][17][18][19] employ reinforcement-learning-based algorithms to generate feasible EV charging scheduling plans that reduce charging costs for users. Similarly, some studies focus on reinforcement-learning-based algorithms to balance electric load [16,[19][20][21] and increase CSs' revenue [22]. ...
Article
This paper addresses the challenge of large-scale electric vehicle (EV) charging scheduling during peak demand periods, such as holidays or rush hours. The growing EV industry has highlighted the shortcomings of current scheduling plans, which struggle to manage surges in large-scale charging demand effectively, thus posing challenges to the EV charging management system. Deep reinforcement learning, known for its effectiveness in solving complex decision-making problems, holds promise for addressing this issue. To this end, we formulate the problem as a Markov decision process (MDP). We propose a deep Q-network (DQN)-based algorithm to improve EV charging service quality while minimizing average queueing times for EVs and average idling times for charging devices (CDs). In our proposed methodology, we design two types of states to encompass global scheduling information, and two types of rewards to reflect scheduling performance. Based on this design, we develop three modules: a fine-grained feature extraction module for effectively extracting state features, an improved noise-based exploration module for thorough exploration of the solution space, and a dueling block for enhancing Q-value evaluation. To assess the effectiveness of our proposal, we conduct three case studies within a complex urban scenario featuring 34 charging stations and 899 scheduled EVs. The results of these experiments demonstrate the advantages of our proposal, showing that it locates better solutions than current methods in the literature and efficiently generates feasible charging scheduling plans for large-scale EVs. The code and data are available by accessing the hyperlink: https://github.com/paperscodeyouneed/A-Noisy-Dueling-Architecture-for-Large-Scale-EV-ChargingScheduling/tree/main/EV%20Charging%20Scheduling.
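The MDP framing above can be illustrated with a deliberately tiny toy, not the paper's DQN: a tabular Q-learner that assigns arriving EVs to one of two charging stations so as to minimize waiting time. The state is the pair of queue lengths, the reward is the negative wait at the chosen station, and the dynamics (one arrival per step, stochastic service, queues capped at 5) are invented for the sketch.

```python
import random

random.seed(0)
N_STATIONS, EPISODES, ALPHA, GAMMA, EPS = 2, 2000, 0.1, 0.9, 0.1
Q = {}  # state (tuple of queue lengths) -> list of action values

def q(state):
    return Q.setdefault(state, [0.0] * N_STATIONS)

for _ in range(EPISODES):
    queues = [0, 0]
    for _ in range(10):  # ten EV arrivals per episode
        state = tuple(queues)
        # Epsilon-greedy assignment of the arriving EV to a station.
        if random.random() < EPS:
            a = random.randrange(N_STATIONS)
        else:
            a = max(range(N_STATIONS), key=lambda i: q(state)[i])
        reward = -queues[a]                    # wait grows with the queue
        queues[a] = min(5, queues[a] + 1)      # EV joins the chosen queue
        if random.random() < 0.5:              # stochastic service
            s = random.randrange(N_STATIONS)
            queues[s] = max(0, queues[s] - 1)
        ns = tuple(queues)
        q(state)[a] += ALPHA * (reward + GAMMA * max(q(ns)) - q(state)[a])

# The learned greedy policy should send an EV to the shorter queue.
best = max(range(N_STATIONS), key=lambda i: q((0, 3))[i])
```

The paper replaces the Q table with a noisy dueling network and richer states/rewards; the update rule is the same Bellman backup.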
... Nevertheless, with the recent advancement in computer-based computations, data-driven artificial intelligence (AI) methods can provide a promising solution. AI-based approaches such as deep reinforcement learning (DRL) models have recently attracted considerable attention [26][27][28][29][30][31][32][33]. The ability of DRL models to handle complicated and nonlinear systems has led to their increasing use for controlling IIDGs in inverter-based MGs [34][35][36][37]. ...
Article
Keywords: Microgrid (MG); Autonomous control; Conservation voltage reduction (CVR); Inverter-interfaced distributed generation units (IIDGs); Artificial intelligence (AI); Multi-agent deep reinforcement learning (DRL). Conservation voltage reduction (CVR) is implemented in power systems to mitigate power consumption in steady-state time scales. We propose dynamic-scale CVR (DCVR) as a potent solution to provide cost-effective frequency support in inverter-interfaced (micro) grids. The proposed DCVR reduces the voltage profile in dynamics, and the consequent power reduction helps to maintain the instant production-consumption balance for dynamic frequency support. However, DCVR implementation in autonomous MGs (AMGs) faces critical control and stability problems. DCVR is implemented by controlling the voltage, which is a local variable, making it difficult to realize through a decentralized structure. Besides, preserving accurate reactive power sharing (Q-sharing) through conventional droop controllers while employing the DCVR is another critical concern. In this light, this paper proposes a novel artificial intelligence (AI)-based decentralized control structure to implement the DCVR in AMGs and tackle the existing issues. A multi-agent deep reinforcement learning (DRL) model, with a deep Q-network (DQN) algorithm, is adopted to address the grid instabilities and inaccurate Q-sharing issues that arise from incorporating the DCVR. The proposed method is able to handle the nonlinearity and complexity of the system while maintaining proper dynamic performance and AMG stability. Simulation results in MATLAB/Simulink prove the effectiveness of the proposed control method.
Article
This article proposes a data-driven decentralized control scheme for a battery energy storage system, "shared" among residential PV households characterized by their respective uncontrollable demand and PV generation. The households are connected to the grid via the point of common coupling and are accordingly billed by the utility company. We first translate the decentralized control objective into a multi-agent reinforcement learning (MARL) problem by modelling the interaction between the agents and their environment as a Markov Game. Thereafter, we present the novel Distributed Subgradient Q-learners (DSQL) algorithm based on the "localization" of the Hyper-Q function and the coordination among the learning agents connected via a communication network. The proposed algorithm holds merit in addressing the typical key aspects of MARL algorithms, i.e., scalability, privacy and fairness. Finally, we perform numerical simulations using "real" historical demand, PV generation and electricity tariff data and highlight the key advantages of the proposed algorithm with respect to the state of the art, in terms of economic savings and key performance indicators, such as peak-to-average ratio, valley-to-average ratio and root-mean-squared deviation.
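The coordination idea behind such multi-agent schemes can be illustrated with a toy that is emphatically not DSQL itself: two independent Q-learners on a single-state problem that periodically average their value estimates with a neighbor, a simple consensus/gossip step standing in for coordination over the communication network. The reward function and all constants are invented for the sketch.

```python
import random

random.seed(1)
ACTIONS, ALPHA, MIX = 2, 0.2, 0.5
Q = [[0.0, 0.0], [0.0, 0.0]]  # one action-value row per agent, single state

def stage_reward(agent, action):
    # Hypothetical stage reward: both agents do best with action 1.
    return 1.0 if action == 1 else 0.0

for step in range(500):
    # Local learning: each agent updates its own table independently.
    for i in range(2):
        a = random.randrange(ACTIONS)  # pure exploration for simplicity
        Q[i][a] += ALPHA * (stage_reward(i, a) - Q[i][a])
    # Consensus round: agents mix their estimates with the network average.
    if step % 10 == 0:
        avg = [(Q[0][k] + Q[1][k]) / 2 for k in range(ACTIONS)]
        for i in range(2):
            Q[i] = [MIX * Q[i][k] + (1 - MIX) * avg[k] for k in range(ACTIONS)]
```

After training, both agents agree that action 1 is better, without ever sharing raw rewards: only value estimates cross the network, which loosely mirrors the privacy argument made for decentralized MARL schemes.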