Korean Journal of Computational Design and Engineering 2025;30(3):382-392. Published online: Sep 1, 2025
DOI : https://doi.org/10.7315/cde.2025.382
Manned and unmanned combat systems demand rapid decision-making and effective strategies, highlighting the growing importance of Multi-Agent Reinforcement Learning (MARL) in addressing such complexities. This study investigates the performance of multi-agent RL algorithms across various combat scenarios, focusing on independent decision-making algorithms (DQN, PPO) and cooperative algorithms (QMIX, COMA). A simulation environment based on actual weapon specifications was designed to evaluate the algorithms’ learning outcomes using key performance metrics such as survival rates and win rates. The analysis identifies the suitability and behavioral differences of each algorithm across diverse scenarios. Cooperative algorithms are assessed for their effectiveness in optimizing collective strategies, while independent algorithms are evaluated for their ability to optimize individual agent actions. This research contributes to the understanding of RL algorithms’ tactical applicability and provides insights into the comparative advantages of cooperative versus independent approaches in complex combat environments.
Keywords: Ground Combat Simulation, Multi-Agent, Reinforcement Learning, Cooperative vs Independent Learning, Tactical Decision-Making