If you check out this presentation by Max Levchin, delivered a few weeks ago as part of UIUC's Reflections|Projections distinguished lecture series, you'll notice that his theme was "start a startup now now NOW!". One of the main reasons he gave was the phenomenon of the "overhang" among venture capital firms: essentially, VCs have more money than they know what to do with, and are actively looking for companies to invest in. He argues that we are in such a time right now, so get out there and make a company. This New York Times article (oh, get over yourself and click the link) seems to confirm that we are indeed in such a state, although for a different reason than you might expect. Some companies (e.g. Meebo, Reddit) are even turning VCs down and deciding to finance themselves on a shoestring (Meebo did it on credit cards, for god's sake!).

Posted by ddini
Categories: games, Startups

There's a great article and discussion thread over at Good Math, Bad Math on what languages are good for mathematical calculation.

Many people say that C/C++ is closer to a direct translation into assembly code, and is thus faster than a language such as OCaml or Java. GMBM shows this is not always the case, though. Basically, he shows that in C/C++ you can have the following situation: you have two pointers to two arrays, and the compiler can't tell whether they are really two distinct arrays or two views of the same one, so it must conservatively assume they might overlap. In languages such as Fortran and OCaml, the compiler does have this information, and so can apply serious optimizations.
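To make the aliasing issue concrete, here's a rough C++ sketch of the kind of thing he's describing (the function names and the use of __restrict are just my own illustration, not anything from the article):

#include <cstddef>

// Because dst and src might point into the same array, the compiler has to
// assume a store through dst can change what src points at, so it reloads
// src[i] on every iteration and is limited in how it can vectorize the loop.
void scale_may_alias(double* dst, const double* src, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        dst[i] = 2.0 * src[i];
    }
}

// Promising the compiler the two arrays don't overlap (here via the common
// __restrict extension; C99 has a standard 'restrict' keyword) hands it the
// same no-aliasing information a Fortran or OCaml compiler gets for free.
void scale_no_alias(double* __restrict dst, const double* __restrict src,
                    std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        dst[i] = 2.0 * src[i];
    }
}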

Check it out yo.

Posted by ddini

Sorry for the lack of updates recently...It's serious crunch mode here at present. Will be back to normal in a couple of days.

Posted by ddini
Categories: Uncategorized

There are a fair number of you folks visiting here (mostly Mac users, if you were wondering), and I have a number of posts coming down the pipeline, but I want to make sure you get to read what you're interested in. So please feel free to send me an email if there's any AI-in-games topic you'd like to see covered. Party on, readers!

Posted by ddini
Categories: Uncategorized

So, evidently there is a group of people who believe that, at some point in distant history, a period of 297 years was inserted into our calendar without those years actually occurring. Thus, this is actually the year 1709. Check it out at Damn Interesting.

Posted by ddini
Categories: Uncategorized

If you're here, chances are you've read about A* search. Just a quick review:

It is a state-space search algorithm: it examines nodes, one at a time, in a space full of them. At any moment of your search, you've got a "frontier" of nodes that you could examine next:

[Search figure 1: a search space with the current frontier of nodes]

Associated with each node s is a cost of reaching it, g(s). You proceed with your search by examining your frontier of nodes and picking the one that minimizes a criterion I will mention in a moment, f(s). You then "expand" s, meaning you find s's immediate neighbors in the space, get rid of s (remove it from the frontier), and stick those neighbors on the frontier list. Thus, the frontier list contains every node you have discovered but not yet expanded. A* is just a way of telling you which one you're going to expand next.
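If it helps to see that bookkeeping in code, here's a minimal C++ sketch of the expand step. The Node type, the unit edge costs, and the lookup table are all made up for illustration; this isn't any particular library's API:

#include <cstddef>
#include <vector>

// A hypothetical node in the search space.
struct Node {
    int id;
    double g;                       // cost of reaching this node from the start
    std::vector<int> neighbor_ids;  // ids of this node's immediate neighbors
};

// "Expanding" the node at frontier[index]: take it off the frontier and
// put its immediate neighbors on. 'all_nodes' is assumed to map id -> Node.
void expand(std::vector<Node>& frontier, std::size_t index,
            const std::vector<Node>& all_nodes) {
    Node s = frontier[index];
    frontier.erase(frontier.begin() + static_cast<std::ptrdiff_t>(index));
    for (int nid : s.neighbor_ids) {
        Node neighbor = all_nodes[nid];
        neighbor.g = s.g + 1.0;        // pretend every edge costs 1 for the sketch
        frontier.push_back(neighbor);  // discovered but not yet expanded
    }
}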

So, right at this moment in the search depicted in the figure, the frontier is:

s1, G', n, s2, s3

Of course, the whole point of doing the walk through this space is to find a node that has the particular qualities you're looking for, i.e. a goal node. Multiple nodes in the space may actually satisfy the goal criteria; it's just that in doing this search you're looking for the path of nodes that gets you to a goal-containing node optimally.

Back to selecting nodes on the frontier. The thing special to A* is the function f(s) that you use to rank the nodes within the frontier. You select the next node on the frontier to look at using a best-first search on the function f, like so:

For each node s on the frontier, f(s) = g(s) + h(s), where g(s) is the measured (actual) cost to get from the start node to here (node s), and h(s) is the estimated cost of getting from here (node s) to the goal.

Now, if h(s) (the estimated cost of getting from here to the goal) is guaranteed to be less than or equal to the actual cost (i.e., h(s) is admissible), then you can make all sorts of guarantees about the outcome of the search process. Specifically, that A* is complete (if there is a solution, A* will find it) and optimal (the solution it finds costs the least to get to starting from the initial node).
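If you'd rather see that loop spelled out, here's a rough C++ sketch of the whole thing: a frontier ordered by f = g + h, popped best-first. The graph representation and the per-node heuristic table are placeholders I made up; the optimality guarantee above only holds if those h values never overestimate:

#include <cstddef>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Edge { int to; double cost; };

// A toy graph: adjacency lists plus a heuristic value per node, both placeholders.
struct Graph {
    std::vector<std::vector<Edge>> adj;
    std::vector<double> h;  // h[s] = estimated cost from node s to the goal
};

// Returns the cost of the best path found from start to goal, or infinity if
// the goal is unreachable. With an admissible h, the returned cost is optimal.
double a_star(const Graph& graph, int start, int goal) {
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> g(graph.adj.size(), inf);  // best known cost to each node
    g[start] = 0.0;

    // The frontier: entries are (f value, node id), smallest f popped first.
    using Entry = std::pair<double, int>;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> frontier;
    frontier.push({graph.h[start], start});

    while (!frontier.empty()) {
        auto [f, s] = frontier.top();
        frontier.pop();
        if (s == goal) return g[s];            // first goal popped off the frontier
        if (f > g[s] + graph.h[s]) continue;   // stale entry; a cheaper route to s was found later
        for (const Edge& e : graph.adj[s]) {   // expand s: look at its neighbors
            double g_new = g[s] + e.cost;
            if (g_new < g[e.to]) {
                g[e.to] = g_new;
                frontier.push({g_new + graph.h[e.to], e.to});
            }
        }
    }
    return inf;  // frontier emptied without reaching the goal
}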

What happens when h(s) is no longer admissible?

Well, let's see. Suppose you've got a frontier of nodes at some point in the search:

One of them is G', which has the goal in it, but suppose we know that G' is not the optimal goal. This means:

f(G') = g(G') + h(G')

Suppose that the optimal way of getting to a goal costs you C, i.e. the optimal goal G is such that:

f(G) = g(G) + h(G) = g(G) + 0 (because h(s) = 0 at a goal node) = C

So, regarding that suboptimal goal G':

f(G') = g(G') + h(G') = g(G') > C (again, h(G') = 0 because G' contains the goal, and g(G') > C because G' is suboptimal)

If there is actually a path from the initial node to the optimal goal, then there must be some node n in your current frontier that leads you to the optimal goal, i.e., if you expanded n, you would find the optimal route to the goal in that subtree. For that node n:

f(n) = g(n) + h(n) <= C

if h is admissible. Thus, n has a lower f value than G' does, so we can see that A* always chooses to expand the path through n first (leading to the optimal solution).
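To put some made-up numbers on that: say the optimal cost is C = 10, the suboptimal goal G' sitting in the frontier has g(G') = 12 (so f(G') = 12), and the node n on the optimal path has g(n) = 4 with a true remaining cost of 6. Any admissible h must give h(n) <= 6, so f(n) <= 10 < 12 = f(G'), and A* expands n before it ever touches G'.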

If h is not admissible, then G' is still not the optimal solution:

f(G') = h(G') + g(G') > C

just as before. However, regarding that other node n that actually leads to the optimal solution:

f(n) = h(n) + g(n)

f(n) is not necessarily less than C. In fact, if h(n) overestimates the true remaining cost, then f(n) ends up greater than C. So now you have both G' and n sitting in the frontier, and both f values are bigger than C. Thus, in general, an inadmissible heuristic means that G' can be selected before n.
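Same made-up numbers as before: C = 10, f(G') = g(G') = 12, and n has g(n) = 4 with a true remaining cost of 6. If the heuristic overestimates and reports h(n) = 9, then f(n) = 13 > 12 = f(G'), so A* pops G' off the frontier first and hands you the 12-cost solution instead of the 10-cost one.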

This is not necessarily bad, actually. If your heuristic is such that G' pops out of your search first, then although G' is not optimal, it takes fewer steps to reach than the optimal solution G does. In cases where a shorter plan that works is preferred to a longer (but cheaper) one, this is just what we're looking for. It also means that if your heuristic is inadmissible and leads to G' before G, the search returns a solution sooner than it would with an admissible heuristic, because with an admissible heuristic A* has to churn a little bit longer (by going through n) in order to discover G.

The moral is: if you're just looking for a path to the goal quickly, rather than spending a longer time finding the optimal one, then perhaps it's worth crafting an inadmissible heuristic.
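One standard way to get such a heuristic (this is the usual weighted-A* trick, not something specific to any one planner) is to take an admissible h and inflate it by a constant factor w > 1, i.e. rank the frontier by

f(s) = g(s) + w * h(s)

The inflated heuristic is generally inadmissible, which pulls the search toward the goal and tends to return a solution sooner; and if the underlying h was admissible, the solution you get costs at most w times the optimum.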

In fact, inadmissible heuristics have already been used to exactly this effect in state-space planners. The HSP planner uses an automatically generated inadmissible heuristic to gain such speed, and that line of work was later advanced in the FF (Fast-Forward) planner, where it achieved its apex of coolness.

Posted by ddini

In case you missed it, the studio that brought you Okami and Viewtiful Joe is gone. The reason seems to be sales, basically. Although the article implies that a creative game is a game that does not sell well, I'm not convinced that is true in general. After all, there are independent filmmakers, right? Aren't they able to make a living? There are TV shows such as Battlestar Galactica or Deadwood that are good and yet remain on the air.

So I don't buy the explanation that something creative and good necessarily does not sell.

Posted by ddini