P-26 Application of Approximate Q-Learning to Simplified Macromanagement in StarCraft II

Presenter Status

J. N. Andrews Honors Scholar

Second Presenter Status

Assistant Professor, Department of Computing

Preferred Session

Poster Session

Location

Buller Hall Hallways

Start Date

22-10-2021 2:00 PM

End Date

22-10-2021 3:00 PM

Presentation Abstract

Contemporary machine learning research on StarCraft II has increasingly combined neural networks with reinforcement learning in the form of "Deep Reinforcement Learning," and this approach has risen greatly in popularity. Unfortunately, neural networks carry a high resource cost and require expensive hardware to train in a manageable amount of time. Instead, we propose a modified form of Approximate Q-learning that forgoes neural networks, and we explore how well non-neural-network strategies perform in the StarCraft II environment at outpacing an enemy in simplified macromanagement gameplay.
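The core idea of approximate Q-learning is to replace the neural network with a linear combination of hand-crafted features, Q(s, a) = Σᵢ wᵢ·fᵢ(s, a), updated by a standard temporal-difference rule. The sketch below illustrates this technique in general terms; the macro actions, feature names, toy state, and hyperparameters are illustrative assumptions, not the presenters' actual StarCraft II setup.

```python
# Minimal sketch of approximate Q-learning with a linear function
# approximator. Everything here (actions, features, state encoding) is a
# hypothetical stand-in for a real StarCraft II macromanagement agent.

ACTIONS = ["build_worker", "build_army", "attack"]  # assumed macro actions

def features(state, action):
    """Hand-crafted features f_i(s, a); this is what replaces the network."""
    return {
        "bias": 1.0,
        f"{action}/minerals": state["minerals"] / 1000.0,
        f"{action}/army": state["army"] / 100.0,
    }

def q_value(weights, state, action):
    """Q(s, a) = sum_i w_i * f_i(s, a)."""
    return sum(weights.get(k, 0.0) * v for k, v in features(state, action).items())

def update(weights, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One TD update: w_i += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)) * f_i."""
    best_next = max(q_value(weights, next_state, a) for a in ACTIONS)
    td_error = reward + gamma * best_next - q_value(weights, state, action)
    for k, v in features(state, action).items():
        weights[k] = weights.get(k, 0.0) + alpha * td_error * v
    return weights
```

Because the learned parameters are just a small weight vector rather than millions of network weights, each update is a handful of multiply-adds, which is what makes the approach feasible without specialized hardware.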
