There are unique AI problems to solve regarding autonomous agents in virtual environments. I write about them in a paper here, but I would like to elaborate on some of them in this post. In general, when you're making an AI for an autonomous agent, it can be to automate basically any task: a land rover, an intelligent display, or even a simulation of an entire city population. Within virtual environments, however, and video games in particular, AI is very often (currently most often) used to control a virtual human being.
This has several consequences for designing your AI, but I want to focus on just one: planning. As noted in the paper, the virtual human-ness of your agent means that not just any old plan that achieves your goal is acceptable. Flying over a building will get you from here to there, but humans can't fly over buildings, so that plan is not acceptable. So simulating a human restricts the space of valid plans.
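To make the "restricted plan space" idea concrete, here is a minimal sketch (my own toy example, not from the paper) of a breadth-first planner over a grid. The action names, the `fly_over` action, and the obstacle layout are all hypothetical; the point is only that the human agent plans with a smaller action set, so some plans available to a generic agent simply don't exist for it.

```python
from collections import deque

# Hypothetical action sets: a generic agent may "fly_over" obstacles,
# but a virtual human is restricted to walking moves only.
ALL_ACTIONS = {
    "walk_n": (0, 1), "walk_s": (0, -1),
    "walk_e": (1, 0), "walk_w": (-1, 0),
    "fly_over": (0, 3),  # clears obstacles in one step; not human-plausible
}
HUMAN_ACTIONS = {k: v for k, v in ALL_ACTIONS.items() if not k.startswith("fly")}

def plan(start, goal, actions, blocked=frozenset()):
    """Breadth-first search over grid states; returns a list of action names."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, (dx, dy) in actions.items():
            nxt = (state[0] + dx, state[1] + dy)
            # Flying ignores obstacles; walking cannot enter a blocked cell.
            if nxt in blocked and not name.startswith("fly"):
                continue
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None  # goal unreachable with this action set

# A wall at (0,1) and (0,2) sits between start (0,0) and goal (0,3):
wall = frozenset({(0, 1), (0, 2)})
plan((0, 0), (0, 3), ALL_ACTIONS, wall)    # one-step "fly_over" plan
plan((0, 0), (0, 3), HUMAN_ACTIONS, wall)  # longer walking detour around the wall
```

The same search procedure produces both plans; only the action set changes. That is the cheap version of the constraint, of course: real virtual-human planning also has to worry about whether each step *looks* human, not just whether the actions are nominally walkable.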
Something that occurred to me recently, though, is that AI for virtual humans does not simply entail making a system to make P true given that we are in a world in which P isn't true (i.e. a planner). To see what I mean, take a look at this BioShock demo. (Check out the later parts of the demo too.)
There's this cool part where the player approaches one of the resident creepies (a Big Daddy) who protects one of the Little Sisters. Normally, the Big Daddies go about their own business, and actually leave you alone if you do nothing threatening. If you approach a Big Daddy while he's with a Little Sister, he sort of grumbles and frightens you, and tries to continue on about his business. He doesn't immediately launch into an attack, and he doesn't run away; he instead does something that exhibits personality.
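One way to think about that kind of behavior is as an escalation ladder rather than a binary fight-or-flee trigger. The sketch below is entirely my own guess at the shape of such logic (the function, thresholds, and reaction names are hypothetical, not how BioShock actually works); it just shows how a couple of intermediate rungs between "ignore" and "attack" are what read as personality.

```python
def guardian_response(threat_level, warnings_given):
    """Pick a reaction from an escalation ladder.

    threat_level: 0 = no provocation, 1 = mild (approaching), 2 = open attack.
    warnings_given: how many times this agent has already warned the player.
    """
    if threat_level == 0:
        return "continue_business"       # player is no threat: go about your day
    if threat_level == 1 and warnings_given < 2:
        return "grumble_and_intimidate"  # frighten the player, then try to move on
    return "attack"                      # only sustained or open provocation escalates

guardian_response(0, 0)  # wandering player is ignored
guardian_response(1, 0)  # a too-close player gets a warning, not a fight
guardian_response(1, 2)  # a player who keeps pushing finally gets attacked
```

The planner from before would happily pick "attack" the instant attacking becomes the optimal route to a goal; the intermediate rungs exist purely so the creature seems like it lives there.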
In general, it's clear that there's a problem space between making a planning agent, i.e. one able to form a plan so as to optimize some criterion, and making a virtual creature, i.e. one that actually appears to be a resident of a virtual world.