Clustering of opponent's strategies in RTS games with imperfect information leveraging deep neural networks

18 Nov 2022 · OpenReview Archive Direct Upload
Abstract: Artificial Intelligence (AI) no longer appears only in science fiction movies. Vigorous research on AI is underway, and many AI-powered products have been commercialized, such as self-driving cars, AI speakers, and home appliances with AI. However, verifying an AI model has not been easy amid the extremely complex conditions of reality, and gathering the variables needed to explain that complexity is not simple either. It has also been challenging to define problems at a suitable level of difficulty for improving an AI model's performance. Games have therefore been a traditional subject of AI research, offering well-defined problems at an appropriate level of difficulty and objective performance evaluation. Because a game has a standard outcome, win or lose, it is an excellent tool for AI research. In the past, traditional board games such as chess and Go were used to verify and improve AI models, and in recent years AI players have come to overpower humans in them. Real-Time Strategy (RTS) games, on the other hand, are far harder for AI: players must compete with incomplete information about the opponent in a much larger action space than board games allow, and must control multiple units simultaneously in real time rather than taking turns. StarCraft, released by Blizzard in 1998, has been one of the most popular RTS games worldwide and is studied by numerous AI researchers for its high complexity compared to the game of Go. An imperfect-information game is one played without full view of the opponent's state. In board games such as chess or Go, players can see every single move of the opponent.
In poker, an imperfect-information game, players can see only part of the opponent's information, so sharp prediction of the opponent's hidden information and careful decision-making are crucial to increasing the chance of victory. Moreover, the variables arising from incomplete information create more complex situations, which makes building an AI player even more challenging. This study proposes a method that uses a Deep Encoder-Decoder Network and Long Short-Term Memory (LSTM) to predict the opponent's hidden information, or revise it toward the real values, in a 'Fog-of-War' situation when playing an RTS game with imperfect information, and compares the performance of the two models. To verify the method, this study selected StarCraft from among the RTS games and conducted three experiments on its replay data. First, it predicted and corrected the incomplete information using the Deep Encoder-Decoder Network. Second, it predicted and corrected the incomplete information using LSTM. Lastly, it assessed the performance of a clustering model that categorizes strategies in situations with and without the predicted information. Through these experiments, this study confirmed that both the Deep Encoder-Decoder Network and the LSTM model achieve high performance in predicting the opponent's information. In the future, these models could become effective methods for predicting hidden information in RTS games. Furthermore, this study confirmed that the models could also be applied in other fields to correct or predict partially omitted data in sequential information.
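The experimental pipeline described above can be illustrated with a minimal sketch. The abstract does not give implementation details, so everything here is an assumption: game states are represented as hypothetical opponent unit-count vectors, a fog-of-war mask hides random entries, a trivial mean-imputation stands in for the paper's learned encoder-decoder/LSTM predictor, and a small k-means routine plays the role of the strategy-clustering model evaluated in the third experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def fog_of_war(states, visible_fraction=0.5, rng=rng):
    """Zero out a random subset of features, mimicking hidden information."""
    mask = rng.random(states.shape) < visible_fraction
    return states * mask, mask

def impute(observed, mask):
    """Stand-in for the learned predictor: fill hidden entries with the
    column means computed over the visible entries."""
    col_means = observed.sum(axis=0) / np.maximum(mask.sum(axis=0), 1)
    filled = observed.copy()
    filled[~mask] = np.broadcast_to(col_means, observed.shape)[~mask]
    return filled

def kmeans(X, k, iters=20, rng=rng):
    """Minimal k-means for grouping strategy vectors."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two toy "strategies": rush (many cheap units) vs. tech (few advanced units).
rush = rng.normal([20.0, 2.0, 0.0], 1.0, size=(10, 3))
tech = rng.normal([4.0, 1.0, 8.0], 1.0, size=(10, 3))
states = np.vstack([rush, tech])

observed, mask = fog_of_war(states)   # imperfect information
filled = impute(observed, mask)       # predicted/corrected information
labels = kmeans(filled, k=2)          # cluster the (completed) strategies
```

The point of the sketch is the shape of the evaluation, not the models themselves: clustering can be run on `observed` (imperfect information) and on `filled` (predicted information) and the resulting groupings compared, which mirrors the third experiment.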