A Load Balancing Scheme for Gaming Servers Applying Reinforcement Learning in IoT


Hye-Young Kim, Jin-Sul Kim




The large volume of data generated on game servers causes overload in IoT environments. Recently, both researchers and developers have taken a strong interest in load balancing schemes for gaming servers. The existing literature has proposed algorithms that distribute server loads, mostly concentrating on load balancing and cooperative offloading in Internet of Things (IoT) environments. Dynamic load balancing algorithms calculate the workload of the network and allocate it dynamically according to the network situation, taking the capacity of each server into account. However, these previous approaches are difficult to apply to the real world because they impose many restrictions and assumptions on the IoT environment, and they do not meet the wide range of service requirements of IoT environments. Therefore, we propose an agent that applies deep reinforcement learning to distribute load across gaming servers. The agent accomplishes this by measuring network loads and analyzing a large amount of user data. We specifically chose deep reinforcement learning because it requires no labels to be obtained in advance and enables our agent to immediately make the right load balancing decisions in an IoT environment. We demonstrate several significant properties of our proposed scheme and derive them through mathematical analysis. We also compare the performance of our proposed scheme against ProGreGA, a widely used scheme from previous research, through simulation.
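To make the idea concrete, the following is a minimal sketch (our own illustration, not the authors' actual algorithm or architecture) of a reinforcement learning agent that routes incoming game-session requests across servers. It uses simple tabular Q-learning rather than a deep network, and the state discretization, reward (penalizing load imbalance), and load dynamics are all assumptions chosen for readability:

```python
import random

# Assumed toy setup: 3 servers, each request adds load to the chosen
# server, and all servers drain a little load per step.
N_SERVERS = 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def discretize(loads):
    # State: each server's load bucketed as low/medium/high (0/1/2).
    return tuple(min(l // 10, 2) for l in loads)

def step(loads, action):
    # Environment dynamics (assumed): servers drain, chosen server gains load.
    loads = [max(l - 2, 0) for l in loads]
    loads[action] += 5
    # Reward penalizes imbalance: spread between busiest and idlest server.
    reward = -(max(loads) - min(loads))
    return loads, reward

def train(episodes=500, requests_per_episode=50, seed=0):
    rng = random.Random(seed)
    Q = {}  # state -> list of Q-values, one per server
    for _ in range(episodes):
        loads = [0] * N_SERVERS
        state = discretize(loads)
        for _ in range(requests_per_episode):
            q = Q.setdefault(state, [0.0] * N_SERVERS)
            # Epsilon-greedy routing decision.
            if rng.random() < EPSILON:
                action = rng.randrange(N_SERVERS)
            else:
                action = max(range(N_SERVERS), key=lambda a: q[a])
            loads, reward = step(loads, action)
            next_state = discretize(loads)
            next_q = Q.setdefault(next_state, [0.0] * N_SERVERS)
            # Standard Q-learning update.
            q[action] += ALPHA * (reward + GAMMA * max(next_q) - q[action])
            state = next_state
    return Q

if __name__ == "__main__":
    Q = train()
    print(f"learned Q-values for {len(Q)} states")
```

A deep RL variant, as proposed in the paper, would replace the `Q` table with a neural network so the agent can generalize over the much larger state space produced by real network-load measurements and user data.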