Saturday, August 18, 2012

Summer End Update

As the summer draws to a close, our group has completed the first stage needed to begin applying machine learning techniques that extrapolate human tendencies from recorded human gameplay data.

Our bot can now replay human data and record the raytrace data (see the post below) that results from the human's choices. We need only a little more information: Andrew is gathering the original human data, including bot acceleration and rotation, so we can recover keystroke and mouse input from the human player. This is the most direct record we have of human reaction to sensory input. I am working on turning this data around to correlate it with bot movement, while Jacob works on background infrastructure.

The plan is to implement a neural network before the summer ends!
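As a rough sketch of where this is headed: a minimal feedforward network could map ray-sensor readings to a movement decision. Everything below is illustrative, not our actual design; the layer sizes, hand-picked weights, and three-action set are assumptions, and a real network would have its weights trained on the recorded human data.

```java
// Minimal feedforward network sketch: ray distances in, movement choice out.
// Sizes, weights, and actions are illustrative assumptions only.
public class TinyNet {
    static final String[] ACTIONS = {"forward", "left", "right"};

    double[][] w1; // input -> hidden weights
    double[][] w2; // hidden -> output weights

    TinyNet(double[][] w1, double[][] w2) { this.w1 = w1; this.w2 = w2; }

    // Forward pass: hidden = tanh(W1 * rays), scores = W2 * hidden, pick argmax.
    int decide(double[] rays) {
        double[] hidden = new double[w1.length];
        for (int h = 0; h < w1.length; h++) {
            double sum = 0;
            for (int i = 0; i < rays.length; i++) sum += w1[h][i] * rays[i];
            hidden[h] = Math.tanh(sum);
        }
        int best = 0;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int o = 0; o < w2.length; o++) {
            double score = 0;
            for (int h = 0; h < hidden.length; h++) score += w2[o][h] * hidden[h];
            if (score > bestScore) { bestScore = score; best = o; }
        }
        return best;
    }

    public static void main(String[] args) {
        // Hand-picked weights standing in for a trained network.
        double[][] w1 = {{1, 0, 0}, {0, 0, 1}};          // 2 hidden units, 3 ray inputs
        double[][] w2 = {{1, 1}, {-1, 0.5}, {0.5, -1}};  // 3 actions, 2 hidden units
        TinyNet net = new TinyNet(w1, w2);
        // Rays: [left, center, right] normalized distances (1.0 = path is clear).
        System.out.println(ACTIONS[net.decide(new double[]{1.0, 1.0, 1.0})]); // prints "forward"
    }
}
```

Training would then amount to adjusting `w1` and `w2` so the network's choices match the recorded human choices for the same ray readings.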


Monday, June 18, 2012

Summer weekly update 1 & 2


Last week, we decided to go back to square one. Kathy looked at the code we received from Igor Karpov and Jacob Schrum and read the articles the two of them have written on bots learning by example. Jacob read through the Pogamut "cookbook" to become reacquainted with the code and read the same articles. Andrew emailed Uli with a few questions and also read through the articles.


Currently we are planning a conference call with Igor to learn how to adapt the Pogamut code to record gameplay so that we can then replay it for the bot.

Monday, April 2, 2012

Weekly Update: Pogamut & Poster Sessions

In the past couple of weeks we've been busy gearing up for the Undergraduate Research Forum, which takes place Friday, April 13 from 11 a.m. to 3 p.m. in Welch's main hall.

We've submitted an abstract and are finalizing our poster design, and in the meantime have gotten more familiar with Pogamut through its tutorials. Up next is choosing a method for recording human gameplay.

For now, check out our abstract below, and we hope to see you next Friday at the Forum!

The BotPrize is an international competition in the creation of humanlike video game AI (bots) in the first-person shooter game Unreal Tournament 2004. This experiment intends to create such a bot. Data will be gathered from human players, and the bot will mimic the human activity through machine learning algorithms. It will access human data from a scenario similar to its own and use the humans' decisions as a directive for its next action. We hope the methods developed here will generalize to training software to follow human example in applications such as social communication, teaching, and cooking.

Saturday, March 3, 2012

Weekly Update: 3/3/12

This week we looked at the information the Unreal Tournament UDK collects about players using its demo recording commands. The recordings play back fine in the UDK, but the data itself is unreadable given the level of access we have to the UDK documentation.

Ideally we will provide this information to our bot. The bot is currently able to walk, but has no further logic capabilities.

Monday, February 27, 2012

Weekly Update: 2/27


After speaking with Igor Karpov we've decided to start working on adding sensors to our bot. We'll start with rays: vectors emanating from the character in specific directions that collide with any nearby objects (environment or other players) and return data on where they collided. Picture's worth a thousand words (click for fullscreen):



Those red lines are the rays. If you peek at the bottom left, into the NetBeans console, you can see the hitLocation data being retrieved from the rays. Our hope is to use data like that to determine our bot's next move, based on how a human would respond to the same data. To figure out how a human would respond, we are going to need to apply similar rays to a human player's character in combat, record the rays' data, and compare it to the human's decisions.
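To make the hitLocation idea concrete, here is a small, self-contained sketch of what a ray query computes: a ray cast from the bot's position, intersected with a flat wall, returning the point of collision. This is plain geometry for illustration, not the Pogamut API, and the names and wall layout are assumptions.

```java
// Illustrative ray cast: follow origin + t * direction until it meets the
// vertical plane x = wallX, and report the collision point. This mirrors the
// kind of hitLocation data the bot's rays report, in two dimensions.
public class RayDemo {
    // Returns {x, y} of the hit, or null if the ray never reaches the wall.
    static double[] hitLocation(double ox, double oy, double dx, double dy, double wallX) {
        if (dx == 0) return null;                 // ray runs parallel to the wall
        double t = (wallX - ox) / dx;             // distance along the ray to the wall
        if (t < 0) return null;                   // wall is behind the bot
        return new double[]{wallX, oy + t * dy};  // point of collision
    }

    public static void main(String[] args) {
        // Bot at the origin, ray pointing 45 degrees toward a wall at x = 10.
        double[] hit = hitLocation(0, 0, 1, 1, 10);
        System.out.println("hit at (" + hit[0] + ", " + hit[1] + ")"); // hit at (10.0, 10.0)
    }
}
```

In the game the same computation happens in 3D against arbitrary level geometry, and each ray's hit point (or lack of one) becomes one input to the bot's decision.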

Applying rays to a human-controlled player character is our next task. We will probably take a roundabout approach: our current plan is to record a human's actions in a game and then have a bot recreate those actions later with raycasting applied to the bot (there doesn't seem to be an easy way to apply raycasting directly to a human-controlled character).
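The record-then-replay plan can be sketched as a timestamped action log: write down each human input with its game time, then have the bot step through the log in order while its rays record what the world looked like. The event names and log structure below are assumptions for illustration, not our actual recording format.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the record/replay idea: log timestamped human actions, then feed
// them back to a bot in order. Command names here are made up for the example.
public class ReplayDemo {
    static class Action {
        final double time;
        final String command;
        Action(double time, String command) { this.time = time; this.command = command; }
    }

    static final List<Action> log = new ArrayList<>();

    // Called while the human plays: store each input with its timestamp.
    static void record(double time, String command) {
        log.add(new Action(time, command));
    }

    // Called during replay: issue every action whose timestamp falls in [from, to).
    static List<String> replay(double from, double to) {
        List<String> issued = new ArrayList<>();
        for (Action a : log)
            if (a.time >= from && a.time < to) issued.add(a.command);
        return issued;
    }

    public static void main(String[] args) {
        record(0.0, "moveForward");
        record(0.5, "turnLeft");
        record(1.2, "fire");
        System.out.println(replay(0.0, 1.0)); // [moveForward, turnLeft]
    }
}
```

While the bot replays the log, the raycasting from the previous post runs alongside it, pairing each recorded human action with the sensor data the human was (implicitly) reacting to.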

Monday, February 20, 2012

Weekly Update: 2/20

Andrew has Pogamut up and running on his PC, and using the UDK we can see emptyBot "in action." emptyBot is essentially the "Hello, world!" project for Pogamut: just a bot that can successfully join a server and stand around. Here's a picture (click for zoom):