Hybrid electric vehicles can achieve better fuel economy than conventional vehicles by utilizing multiple power sources. While these power sources have traditionally been controlled by rule-based or optimization-based algorithms, recent studies have shown that machine-learning-based control algorithms such as online Deep Reinforcement Learning (DRL) can control them effectively as well. However, the optimization and training processes for an online DRL-based powertrain control strategy can be very time- and resource-intensive. In this paper, a new offline–online hybrid DRL strategy is presented in which offline vehicle data are exploited to build an initial model and an online learning algorithm explores a new control policy to further improve the fuel economy. In this manner, the agent is expected to learn the environment, consisting of the vehicle dynamics under a given driving condition, more quickly than online-only algorithms, which learn the optimal control policy by interacting with the vehicle model from zero initial knowledge. The simulation results show that, by incorporating a priori offline knowledge, the proposed approach not only accelerates and stabilizes the learning process but also achieves better fuel economy than online-only learning algorithms.
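The offline–online scheme described above can be illustrated with a minimal sketch. For brevity, tabular Q-learning stands in for deep RL, and the environment is a hypothetical toy (states as battery state-of-charge buckets, actions as power-split levels, a reward favoring a balanced split); none of these specifics come from the paper. The structure, however, mirrors the proposed strategy: a pretraining pass over a batch of logged transitions builds the initial value estimates, and online epsilon-greedy interaction then refines the policy.

```python
import random

# Hypothetical toy stand-in for the powertrain environment:
# states ~ battery SOC buckets, actions ~ power-split levels.
N_STATES, N_ACTIONS = 5, 3
ALPHA, GAMMA = 0.1, 0.9

def step(state, action):
    # Assumed dynamics: reward favors the balanced split (action 1)
    # and penalizes drifting away from the mid SOC bucket (state 2).
    reward = -abs(action - 1) - 0.1 * abs(state - 2)
    next_state = max(0, min(N_STATES - 1, state + action - 1))
    return next_state, reward

def td_update(Q, s, a, r, s2):
    # Standard temporal-difference (Q-learning) update.
    Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])

def pretrain(Q, batch, epochs=50):
    # Offline phase: replay a fixed batch of logged transitions.
    for _ in range(epochs):
        for s, a, r, s2 in batch:
            td_update(Q, s, a, r, s2)

def finetune(Q, episodes=200, horizon=10, eps=0.1, seed=0):
    # Online phase: epsilon-greedy exploration starting from the
    # offline-initialized Q-table instead of zero knowledge.
    rng = random.Random(seed)
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        for _ in range(horizon):
            if rng.random() < eps:
                a = rng.randrange(N_ACTIONS)
            else:
                a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
            s2, r = step(s, a)
            td_update(Q, s, a, r, s2)
            s = s2

# Build a logged dataset (offline vehicle data) from random driving.
rng = random.Random(1)
logged = []
for _ in range(500):
    s = rng.randrange(N_STATES)
    a = rng.randrange(N_ACTIONS)
    s2, r = step(s, a)
    logged.append((s, a, r, s2))

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
pretrain(Q, logged)   # initial model from offline data
finetune(Q)           # online policy improvement
```

In a DRL setting the Q-table would be replaced by a neural network and the logged batch by recorded drive-cycle data, but the two-phase shape of the training loop stays the same.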
With advances in data science, machine learning has become a vital tool for improving decision making by using raw data and information as input. Significant results have been achieved with different machine learning techniques across various real-world domains, such as cybersecurity, engineering, healthcare, e-commerce, and agriculture [1].
[1] Yao, Z.; Yoon, H.-S.; Hong, Y.-K. Control of Hybrid Electric Vehicle Powertrain Using Offline-Online Hybrid Reinforcement Learning. Energies 2023, 16, 652. https://doi.org/10.3390/en16020652