Sorry for the lack of updates recently... It's serious crunch mode here at present. Will be back to normal in a couple of days.


There are a fair number of you folks visiting here (mostly Mac users, if you were wondering), and I have a number of posts coming down the pipeline, but I want to make sure you guys get to read what you're interested in. So, please feel free to send me an email if you would like to see any AI-in-games topic covered. Party on, readers!


So, evidently there is this group of people who believe that, at some point in distant history, a period of 297 years was inserted into our calendar without the years actually occurring. Thus, this is actually the year 1709. Check it out at Damn Interesting.


If you're here, chances are you've read about A* search. Just a quick review:

It is a state-space search algorithm: it walks through a space full of nodes, examining them one at a time. At any moment of your search, you've got a "frontier" of nodes that are candidates for immediate examination:

[Figure: a partially explored search space; the current frontier contains the nodes s1, G', n, s2, and s3]

Associated with each node s is the cost of reaching it, g(s). You proceed with your search by examining your frontier of nodes and picking the one that minimizes a criterion f(s), which I will define in a moment. You then "expand" s, meaning you find s's immediate neighbors in the space, remove s from the frontier, and stick those neighbors on the frontier list. Thus, the frontier list contains every node you have discovered but not yet expanded. A* is just a way of telling you which one you're going to expand next (there's a code sketch of the full loop below, once f(s) is defined).

So, right at this moment in the search depicted in the figure, the frontier is:

{ s1, G', n, s2, s3 }

Of course, the whole point of doing the walk through this space is to find a node that has the particular qualities you're looking for, i.e. a goal node. Multiple nodes in the space may actually satisfy the goal criteria; in doing this search you're looking for the path of nodes that gets you to a goal-containing node optimally.

Back to selecting nodes on the frontier. The thing special to A* is the function f(s) that you use to rank the nodes within the frontier. You select the next node on the frontier to examine by doing a best-first search on the function f, like so:

For each node s on the frontier, f(s) = g(s) + h(s), where g(s) is the measured (actual) cost to get from the start node to s, and h(s) is the estimated cost of getting from s to the goal.
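
Here's how that loop might look in code. This is a minimal sketch in Python; `is_goal`, `neighbors`, and `h` are placeholders for whatever your problem supplies, and it's my illustration of the standard algorithm rather than a tuned implementation:

```python
import heapq
import itertools

def a_star(start, is_goal, neighbors, h):
    """Minimal A* sketch.

    neighbors(s) yields (neighbor, step_cost) pairs;
    h(s) estimates the cost from s to a goal;
    is_goal(s) says whether s satisfies the goal criteria.
    """
    counter = itertools.count()  # tie-breaker so the heap never compares nodes
    # Each frontier entry is (f, tie, g, node, path); the heap pops lowest f first.
    frontier = [(h(start), next(counter), 0, start, [start])]
    best_g = {start: 0}          # cheapest known cost-to-reach per node
    while frontier:
        f, _, g, s, path = heapq.heappop(frontier)
        if is_goal(s):
            return path, g       # first goal popped off the frontier
        if g > best_g.get(s, float("inf")):
            continue             # stale entry: a cheaper route to s was found later
        for n, cost in neighbors(s):   # "expand" s: generate its neighbors...
            g_n = g + cost
            if g_n < best_g.get(n, float("inf")):
                best_g[n] = g_n  # ...and put the improved ones on the frontier
                heapq.heappush(
                    frontier, (g_n + h(n), next(counter), g_n, n, path + [n])
                )
    return None, float("inf")    # frontier exhausted: no goal reachable
```

The `best_g` check is lazy deletion: rather than updating frontier entries in place, we push duplicates and skip the stale ones when they surface.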

Now, if h(s) (the estimated cost of getting from s to the goal) is guaranteed to be less than or equal to the actual cost (i.e., h(s) is admissible), then you can make all sorts of guarantees about the outcome of the search process. Specifically, A* is complete (if there is a solution, A* will find it) and optimal (the solution it finds costs the least to get to starting from the initial node).
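
As a concrete example (the grid setting is mine, not from the post): on a 4-connected grid where each step costs 1, Manhattan distance can never overestimate the true remaining cost, so it's admissible:

```python
def manhattan(s, goal):
    # Any route on a 4-connected grid must cover at least the horizontal
    # plus the vertical displacement, so this never overestimates.
    (x1, y1), (x2, y2) = s, goal
    return abs(x1 - x2) + abs(y1 - y2)
```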

What happens when h(s) is no longer admissible?

Well, let's see. Suppose you've got a frontier of nodes at some point in the search. One of them is G', which has the goal in it, although suppose we know that G' is not the optimal goal. This means:

f(G') = g(G') + h(G')

Suppose that the optimal way of getting to a goal costs you C, i.e. the optimal goal G is such that:

f(G) = g(G) + h(G) = g(G) + 0 = C    (because h(s) = 0 at a goal node)

So, regarding that suboptimal goal G':

f(G') = g(G') + h(G') = g(G') > C

If there is actually a path from the initial node to the optimal goal, then there must be some node, n, in your current frontier that leads you to the optimal goal, i.e., if you expanded n, you would find the optimal route to the goal in that subtree. For that node n:

f(n) = g(n) + h(n) <= C

if h is admissible (along the optimal path, g(n) plus the true remaining cost equals C, and an admissible h(n) never exceeds that remaining cost). Thus, n has a lower f value than G' does, so we can see that A* always chooses to expand the path through n first (leading to the optimal solution).

If h is not admissible, then G' is still not the optimal solution:

f(G') = g(G') + h(G') = g(G') > C

just as before. However, regarding that other node n that actually leads to the optimal solution:

f(n) = g(n) + h(n)

f(n) is no longer necessarily less than C; in fact, if h(n) overestimates the remaining cost, then f(n) exceeds C. So now you have both G' and n sitting in the frontier, and both of their f values can be bigger than C. Thus, in general, an inadmissible heuristic means that G' can be selected before n.

This is not necessarily bad, actually. If your heuristic is such that G' pops out of your search first, then although G' is not optimal, it takes fewer steps to reach than the optimal solution G does. In cases where a shorter plan that works is preferred to a longer (but cheaper) one, this is just what we're looking for. It also means that if your heuristic is inadmissible and leads to G' before G, it takes less time to compute the solution than it would with an admissible heuristic, because A* would have to churn a little longer (by going through n) in order to discover G.

The moral is, if you're just looking for a path to the goal quickly, rather than spending a longer amount of time finding the optimal path, then perhaps it's worth crafting an inadmissible heuristic.
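
One standard way to get such a heuristic (my example; the post doesn't name a specific technique) is weighted A*: take an admissible heuristic and inflate it by a factor w > 1. The search becomes greedier toward states that look close to the goal, which typically means far fewer expansions, and the solution returned is guaranteed to cost at most w times the optimum, a standard weighted-A* bound:

```python
def weighted(h, w=1.5):
    # Inflating an admissible h by w > 1 makes it (potentially) inadmissible.
    # Weighted A* trades optimality for speed: the solution found costs
    # at most w times the optimal cost.
    def h_w(s):
        return w * h(s)
    return h_w

# Reusing the earlier sketch (hypothetical problem plumbing):
# path, cost = a_star(start, is_goal, neighbors,
#                     weighted(lambda s: manhattan(s, goal), w=2.0))
```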

In fact, inadmissible heuristics have already been used to exactly this effect in state-space planners. The HSP planner uses an automatically generated inadmissible heuristic to gain such speed. HSP was later advanced upon to become the FF (Fast-Forward) planner, where this approach achieved its apex of coolness.


In case you have missed it, the studio that brought you Okami and Viewtiful Joe is gone. The reason seems to be sales, basically. Although the article implies that a creative game is a game that does not sell well, I'm not convinced that is true in general. After all, there are independent filmmakers, right? Aren't they able to make a living? There are TV shows such as Battlestar Galactica or Deadwood which are good and yet remain on the air.

So I don't buy the explanation that something creative and good necessarily does not sell.


Have you guys heard of Pandora? Here's how it works: you tell it some band you like, it analyzes the actual sound patterns within the music, and it produces a playlist of recommended similar music. Basically, an instant custom-tailored internet radio station. I'm simultaneously impressed by their success, in that I actually like what they recommend (yay!), and dismayed that Don is not an unknowable snowflake.


There are unique AI problems to solve regarding autonomous agents in virtual environments. I write about them in a paper here, but I would like to elaborate upon some of them in this post. In general, when you're making an AI for an autonomous agent, it can be to automate basically any task: a land rover, an intelligent display, or even a simulation of an entire city population. Within virtual environments, however, and video games in particular, AI is very often (currently most often) used to control a virtual human being.

This has several consequences for designing your AI, but only one of them concerns planning. As noted in the paper, the virtual human-ness of your agent means that just any old plan that achieves your goal is no longer acceptable. Flying over a building will get you from here to there, but humans can't fly over buildings, so that plan is not acceptable. Simulating a human thus restricts the space of valid plans (a sketch of what that restriction might look like in code follows below).
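
To make that concrete, here is one way such a restriction could be wired into a planner's successor function. The `Action` model and the `humanly_possible` predicate are hypothetical names of mine, purely for illustration:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Tuple

@dataclass(frozen=True)
class Action:
    name: str
    cost: float

def human_successors(
    successors: Callable[[object], Iterable[Tuple[Action, object]]],
    humanly_possible: Callable[[Action, object], bool],
):
    """Wrap a planner's successor generator so that actions a human
    could not perform (fly over a building, teleport, ...) are pruned
    before the search ever sees them."""
    def wrapped(state):
        for action, next_state in successors(state):
            if humanly_possible(action, state):
                yield action, next_state
    return wrapped
```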

Something that occurred to me recently, though, is that AI for virtual humans does not simply entail making a system that makes P true given that we are in a world in which P isn't true (i.e., a planner). To see what I mean, take a look at this BioShock demo. (Check out the later parts of the demo, too.)

There's this cool part where the player approaches one of the resident creepies (a Big Daddy) who protects one of the Little Sisters. Normally, the Big Daddies go about their own business and actually leave you alone if you do nothing threatening. If you approach one of the Big Daddies while he's with a Little Sister, he sort of grumbles and frightens you, and tries to continue on about his business. He doesn't immediately launch into an attack, he doesn't run away; instead he does something that exhibits personality.

In general, it's clear that there's a problem space between making a planning agent, i.e. one able to make a plan so as to optimize some criterion, and making a virtual creature, i.e. one that actually appears to be a resident of a virtual world.
