
Journal Log - Jack Wagner

Project: REINFORCEMENT LEARNING IN SCII

Total Hours: 45

Oct 6 - 1 hour
Project idea solidified into the development of a strategy game AI using neural networks.
Found research material and sources on neural networks, including new development in progress
on a StarCraft video game AI driven by a neural net; I would like to find more information
about this. Senior project proposal completed and approved by my mentor.

Nov 6 - 2 hours
Created and added profile, essential skills, and abstract to my senior portfolio website.
Found a spaceship strategy game run by different AIs built from neural networks. The AIs can
duel and fight against a player, learning as they do. I want to use this sample to aim my
work towards. Later I plan to create my own game, or implement the same type of game in an
environment where I can manage and edit the parts as I wish. For documentation reasons I
want to control and manage the 'generations' in which the neural networks advance and learn from
each trial they experience.
One idea I am formulating for my presentation and visual is an interactive piece where
people can play against the neural network AI at different stages in its development,
to show the learning curve and the legitimate difference in the AI's intelligence and ability to play the
game.

Nov 14 - 1 hour
Researched additional open-source neural network libraries to work into my
visual game. Found Torch, an open-source library that uses LuaJIT.

Nov 15 - 4 hours
Scrapped the Torch project. I figured that developing my bot in a language I am not
familiar with would be too time consuming. Instead I moved on to research neural networks used in
StarCraft and found promising material: an open-source neural network test bot in Java that runs in
StarCraft 1.6.10. I've taken the source code and tested the bot in StarCraft. It works! Although all
it did was build economy units until it capped its population limit, then mine resources the
whole game until the other AI came to destroy its helpless base. This was Run #1 of Test bot.

Created FliK bot and WIfF bot, both starting from square one with brand new knowledge
bases. Run #1 of FliK bot: mined resources, capped its starting population of 10 with economy units,
and waited until it lost to the opposing game AI. A repeat of Test bot's Run #1, as expected.
● I plan to train FliK bot to become more adept and advanced at the game while keeping
WIfF at a lower training level for comparison. I need to set up a script to
automatically begin new games and continue training while I am AFK. I expect it will take
hundreds of iterations for the bot to become adept at the game.

Nov 21 - 1 hour
Scrapped FliK bot & WIfF bot. After more research and running into problems with the
bots' base code, it was evident that continuing with the process would prove ineffective; the
neural network library they used was insufficient. Found documentation for a neural network
StarCraft integration built on a program called Torch. This brings me back to my earlier idea and
lets me reuse some of the research I had already done. Its client runs on Linux, so my current
task is setting up the environment in a virtual machine.

Nov 22 - 2 hours
Documented materials used in the research paper.

Nov 29 - 2 hours
Collected and organized sources for the project research paper. Created an annotated
bibliography. Because of today's research, I've found more ideas for how to
organize my AI training, which needs to be started soon. Many, many, many iterations of gameplay
will need to be completed and documented, to the point where the bot has shown significant
improvement in its gameplay.

Dec 3 - 6 hours
The Torch, TorchCraft, and StarCraft BWAPI bridge setup was completed. Now working
on learning the LuaJIT code required to use the Torch interface. I currently have concerns about
the time needed before I can start building with the nn (neural network) package. I hope to see the net
reading replays and playing soon.

Jan 8 - 8 hours
Scrapped the Torch/TorchCraft AI design plan. This was due to the development time
required to build the neural networks and algorithms from scratch. Looked into PySC2 and
popular pre-made baseline algorithms. Created the PySC2 environment on my desktop.
Implemented a simple reinforcement learning algorithm in a game agent. Began testing the agent
"Flaire" by running game playthroughs. 100 game iterations run.
● I predict the agent will require 10,000+ iterations before it can contest a
competent human player.
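
The log doesn't record Flaire's implementation details, so purely as illustration, here is a minimal sketch of what a simple reinforcement learning agent on top of PySC2 could look like: tabular Q-learning with a crude bucketed state and a three-action set. The class name, state features, and action set are assumptions (written against the pysc2 2.x API), not the actual Flaire code.

```python
# Hypothetical sketch, not the actual Flaire agent. Assumes pysc2 2.x.
import random
from collections import defaultdict

from pysc2.agents import base_agent
from pysc2.lib import actions


class SketchQAgent(base_agent.BaseAgent):
    """Epsilon-greedy tabular Q-learning over a tiny, hand-picked action set."""

    ACTIONS = ["no_op", "select_army", "attack_center"]

    def __init__(self):
        super().__init__()
        # Q-table maps a bucketed state to a value for each abstract action.
        self.q_table = defaultdict(lambda: {a: 0.0 for a in self.ACTIONS})
        self.alpha, self.gamma, self.epsilon = 0.1, 0.9, 0.1
        self.prev_state = None
        self.prev_action = None

    def get_state(self, obs):
        # Crude state abstraction: supply used and army count, bucketed.
        player = obs.observation.player
        return (int(player.food_used) // 10, int(player.army_count) // 5)

    def step(self, obs):
        super().step(obs)
        state = self.get_state(obs)

        # Q-learning update; the built-in reward is the sparse game outcome,
        # so most intermediate updates see a reward of 0.
        if self.prev_state is not None:
            old = self.q_table[self.prev_state][self.prev_action]
            best_next = max(self.q_table[state].values())
            self.q_table[self.prev_state][self.prev_action] = old + self.alpha * (
                obs.reward + self.gamma * best_next - old)

        # Epsilon-greedy action selection over the abstract actions.
        if random.random() < self.epsilon:
            action = random.choice(self.ACTIONS)
        else:
            action = max(self.q_table[state], key=self.q_table[state].get)
        self.prev_state, self.prev_action = state, action

        # Map the abstract action to a concrete PySC2 function call.
        available = obs.observation.available_actions
        if action == "select_army" and actions.FUNCTIONS.select_army.id in available:
            return actions.FUNCTIONS.select_army("select")
        if action == "attack_center" and actions.FUNCTIONS.Attack_minimap.id in available:
            return actions.FUNCTIONS.Attack_minimap("now", (32, 32))
        return actions.FUNCTIONS.no_op()
```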

Jan 22 - 2 hours
Continued feeding gameplay saves from http://www.gosugamers.net/starcraft2/replays.
Noticeable results are easily distinguished, but I need a quantifiable representation of the
AI's learning ability and state. My first idea is to spar it against Blizzard's scripted AI at different
difficulty levels, overcoming each difficulty over time.

Jan 23 - 8 hours
Built the master data chart, recording Flaire's win/loss record, unit loss ratios, and in-
game score for each game. This will be the data from which I prove the learning. Fed 100 pre-recorded
matches from the gosugamers.com StarCraft 2 professional database, then began sparring Flaire
against the StarCraft 2 AI at Very Easy difficulty. The testing consisted of 50 matches. Statistics
recorded.
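
As an illustration of this sparring setup, here is a minimal sketch of an evaluation loop that pits an agent against the built-in Very Easy bot and tallies win/loss from the final reward (+1 win, -1 loss, 0 tie). The map name and the stand-in random agent are placeholders, assuming the pysc2 2.x API; this is not the actual Flaire evaluation script.

```python
# Hypothetical evaluation loop sketch, not the actual test harness. Assumes pysc2 2.x.
from absl import app

from pysc2.agents import random_agent
from pysc2.env import sc2_env
from pysc2.lib import features


def main(unused_argv):
    wins = losses = ties = 0
    agent = random_agent.RandomAgent()  # stand-in for the trained agent

    with sc2_env.SC2Env(
            map_name="Simple64",  # placeholder map
            players=[sc2_env.Agent(sc2_env.Race.terran),
                     sc2_env.Bot(sc2_env.Race.random, sc2_env.Difficulty.very_easy)],
            agent_interface_format=features.AgentInterfaceFormat(
                feature_dimensions=features.Dimensions(screen=84, minimap=64)),
            step_mul=8) as env:
        agent.setup(env.observation_spec()[0], env.action_spec()[0])

        for _ in range(50):  # 50 sparring matches, as in the log
            timesteps = env.reset()
            agent.reset()
            while True:
                step_actions = [agent.step(timesteps[0])]
                if timesteps[0].last():
                    break
                timesteps = env.step(step_actions)
            # Final reward encodes the game outcome: +1 win, -1 loss, 0 tie.
            reward = timesteps[0].reward
            wins += reward == 1
            losses += reward == -1
            ties += reward == 0

    print(f"wins={wins} losses={losses} ties={ties}")


if __name__ == "__main__":
    app.run(main)
```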

Jan 24 - 3 hours
Results from testing were analyzed and condensed into paper format, drawing ratios
and improvement curves. Continued work on the research paper, filling in the data to complete the
document.
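
As a sketch of how such an improvement curve could be drawn from the master data chart, here is a hypothetical snippet that plots a rolling win rate per match; the file name and column names are assumptions, not the actual chart format.

```python
# Hypothetical improvement-curve sketch; assumes a CSV "flaire_matches.csv"
# with columns: match (1..N) and result (1 = win, 0 = loss).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("flaire_matches.csv")
# Rolling 10-game win rate smooths out single-match noise.
df["rolling_win_rate"] = df["result"].rolling(window=10, min_periods=1).mean()

plt.plot(df["match"], df["rolling_win_rate"])
plt.xlabel("Match number")
plt.ylabel("Win rate (rolling 10-game window)")
plt.title("Improvement curve vs. Very Easy AI")
plt.savefig("improvement_curve.png")
```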

May 17 - 5 hours
Final presentation preparation and assessment. Appearance and layout changes made
to the PowerPoint. I should have refreshed my knowledge of reinforcement learning algorithms
before presenting; I stuttered once or twice. Otherwise the presentation was a success, allowing
me to move on with certainty to any other final expos or presentations.
