Real-time learning is the process by which an artificial intelligence agent learns its behavior at the same pace as it operates in the real world. Video games are an excellent venue for testing real-time learning agents: the action happens at real speed with a good visual feedback mechanism, and human performance can be compared directly to the agent's. In addition, players want to compete against a consistently challenging opponent. This paper discusses a controller for an agent in the space combat game Xpilot and the evolution of that controller using two different methods. The controller is a multilayer neural network that governs all facets of the agent's behavior not fixed in the initial set-up. The network is evolved using a (1+1) evolution strategy in one method and a genetic algorithm in the other. Over three independent trials per method, the evolution strategy learned faster, while the genetic algorithm learned more consistently. This suggests that genetic algorithms may be superior when there is ample time before deployment, but evolution strategies are better when learning time is scarce, as in real-time learning.
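The (1+1) evolution strategy mentioned in the abstract keeps a single parent weight vector, mutates it once per generation, and keeps the child only if it is at least as fit. The sketch below illustrates this loop; the fitness function is a toy placeholder standing in for the agent's in-game performance, not the paper's actual Xpilot evaluation.

```python
import random

def evaluate(weights):
    # Placeholder fitness: in Xpilot-AI this would be the agent's
    # in-game performance. Here we use a toy objective (negated squared
    # distance from a target vector) so the sketch runs on its own.
    target = [0.5] * len(weights)
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def one_plus_one_es(n_weights=8, generations=200, sigma=0.1, seed=0):
    """(1+1) evolution strategy: one parent, one Gaussian-mutated child
    per generation; the child replaces the parent only if it is at
    least as fit. Well suited to real-time learning because each
    generation needs just a single fitness evaluation."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(n_weights)]
    parent_fit = evaluate(parent)
    for _ in range(generations):
        child = [w + rng.gauss(0, sigma) for w in parent]
        child_fit = evaluate(child)
        if child_fit >= parent_fit:
            parent, parent_fit = child, child_fit
    return parent, parent_fit

best, fit = one_plus_one_es()
```

In the real-time setting described by the paper, each call to `evaluate` would correspond to running the neural-network controller in the game for an evaluation period, which is why the single-evaluation-per-generation property of the (1+1) ES matters.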
Parker, G.B.; Probst, M.H., "Using evolution strategies for the real-time learning of controllers for autonomous agents in Xpilot-AI," Proceedings of the 2010 IEEE Congress on Evolutionary Computation (CEC), pp. 1-7, 18-23 July 2010. doi: 10.1109/CEC.2010.5586222
The views expressed in this paper are solely those of the author.