Over the past couple of months, I've been working with a team of seven people to create thousands of variations of restaurant behavior and dialogue, drawing from our database of recorded games. (See my previous post for some background on the project.) The interesting thing is that my team members don't know anything about A.I., they're not programmers, and they don't have any previous game development experience. They are random people I hired on the internet, with minimal vetting, and they're doing great work!
More after the jump...
Follow research updates on Twitter: @jorkin
Let me introduce my team:
The team is responsible for annotating game logs with four types of meta-data (events, event hierarchies, causal chains, and references), which they accomplish via custom browser-based Flash applications. This meta-data becomes the fuel that powers my new planning system to control interactive character behavior and dialogue. A programmer is still required to implement critics -- small pieces of code which constrain when fragments of behavior can execute -- but annotating meta-data makes up the lion's share of the authoring effort.
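To make the idea of a critic concrete, here is a minimal sketch of one as a predicate that vetoes a candidate behavior fragment given the current game state. All names here (`GameState`, the fragment labels, the restaurant-specific conditions) are illustrative assumptions, not the project's actual code or API:

```python
from dataclasses import dataclass

@dataclass
class GameState:
    customer_seated: bool
    order_taken: bool

def serve_food_critic(state: GameState) -> bool:
    """Allow a 'serve_food' fragment only once the order has been taken."""
    return state.customer_seated and state.order_taken

def runnable(fragments, critics, state):
    """Keep only fragments whose critics all approve in this state."""
    return [f for f in fragments
            if all(c(state) for c in critics.get(f, []))]

# Hypothetical usage: one critic guards 'serve_food'; 'greet' is unconstrained.
critics = {"serve_food": [serve_food_critic]}
state = GameState(customer_seated=True, order_taken=False)
print(runnable(["serve_food", "greet"], critics, state))  # → ['greet']
```

The point of the division of labor is visible even in this toy: the critic is a few lines a programmer writes once, while the fragments it gates come from the annotated data.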
I hired my team by posting a Data Entry job opening on oDesk. I asked applicants to annotate one sample file, and hired the first group of people who did a good job. My team was staffed within hours of posting the job, and has now completed annotating 1,000 game logs. It took them a total of 415 hours, which cost just under $3,000. They were working part-time, spread over a couple of months, but if someone were doing this full-time (8 hours / day), 415 hours is about 52 days. So, divided among a team of seven, this work could have been completed in about a week (or a week and a half, assuming 40 hour work weeks). I still have a lot of work to do over the next few months to demonstrate that this approach results in more engaging, robust behavior, but the prospect of a practical, fast, affordable way to create characters capable of rich social interaction is exciting.
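For anyone budgeting a similar effort, the back-of-the-envelope arithmetic behind those figures works out like this (using $3,000 as a round number for "just under $3,000"):

```python
total_hours = 415
total_cost = 3000                 # just under $3,000
team_size = 7

hourly_rate = total_cost / total_hours   # effective rate across the team
full_days = total_hours / 8              # 8-hour full-time days
per_person_days = full_days / team_size  # calendar days if split evenly

print(round(hourly_rate, 2))     # → 7.23 (dollars/hour)
print(round(full_days, 1))       # → 51.9 (full-time days)
print(round(per_person_days, 1)) # → 7.4 (days per person)
```

7.4 eight-hour days per person is about a week and a half of five-day work weeks, matching the estimate above.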
Watch on Vimeo: Example of Event Annotation.
Small Teams, Big A.I.
[Gratuitous Angry Birds image.]
Regarding A.I. in indie games, there are some notable exceptions on the horizon -- games from small teams with deep A.I. Industry veteran Paul Tozour’s City Conquest is a tower defense RTS developed using genetic algorithms to balance the playing experience. Prom Week, developed by a team of PhDs at UCSC (studying with Michael Mateas and Noah Wardrip-Fruin), might be considered the spiritual successor to Facade. Prom Week promises a highly replayable gameplay experience, based on dynamic social interaction, but the project (perhaps wisely) abandons Facade’s natural language interface. I think that natural language input still offers an opportunity to give players an increased sense of autonomy, and am hoping to show that leveraging data recorded from thousands of players can support robust language understanding while preserving the player’s sense of agency.
What about Turk?
When I describe my approach as crowdsourcing, people often ask why I’m not using Amazon’s Mechanical Turk. Crowdsourcing purists might say that what I’m doing on oDesk is really outsourcing more than crowdsourcing, because I’m working with a persistent team (although the earlier phase of my project, where we recorded players online, was certainly crowdsourcing). I did experiment briefly with Turk, and my impression was that there are lots of scammers on Turk trying to make money by clicking things as fast as possible, so a large part of the effort would need to go into validating work. My research focus is really on building the system that generates behavior and dialogue from the annotated data; crowdsourcing is a means to an end. There is more personal interaction on oDesk, and the reputation system provides an incentive to maintain high-quality work, making it easier to find good people and continue working with them. My experience on oDesk could be considered a proof of concept for a process that could be repeated on Turk in the future.