Mechanics: Artificial Agents

A while ago, I wrote about agents and how thinking about characters in a story as agents can help guide the writing process. That’s all well and good when you can write what you want, like in your average linear narrative. But video games don’t need to have a linear narrative. There is a point where just considering what characters would do stops making sense, and instead, we might find ourselves needing to have the characters decide when they will act.

This article, about a simple AI for representing agents, is primarily based on something I wrote a while back, but there is one specific part of it that I have in mind while writing this:

Alright then, generally, an agent in a story has 3 properties.

A) They have means by which they act.

(If they have no means of acting, they lack agency and are ahhh.. a different sort of thing.)

B) They have goals they want to achieve.

(I.e. something they want to do, which they use their means to achieve.)

C) They have motives for their actions.

(I.e. they have a reason for what they are doing. This is not necessarily needed, but in general, if your agents don’t have motives, they tend to not be relatable or interesting.)

But if you’d like to read this old article, you can find it here:

Resolving Conflicts – Part I and a Half – Agent Who?

So yeah, I’m going to talk about means, goals and motives and how those can be rounded up to form a simple AI.

Let’s define the core of our algorithm as being, in essence, a set of triples:

A = {a: (m, d, g)}

Where:

m – a boolean function acting upon the game world; it decides if our character has the means to carry out g;
d – a set of pairs {(dₘ, dₚ)} representing the motives behind g;
dₘ – a specific motive; a character’s actions are controlled by several motives, each a continuous number, and all dₘ belong to the set D (though not every motive appears in each triple);
dₚ – a priority or weighting for the given motive, once more a real number;
g – a goal or an action the character might pursue.
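To make this concrete, here is a minimal sketch of the triple as a Python data structure. The field names and the dictionary standing in for the game world are my own illustrative choices, not anything the model prescribes:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical stand-in for the game world state.
World = dict

@dataclass
class Action:
    """One triple (m, d, g) from the set A."""
    means: Callable[[World], bool]  # m: can the character do this right now?
    weights: dict[str, float]       # d: motive name d_m -> priority d_p
    goal: str                       # g: an identifier for the goal pursued

# D: the current value of each motive, shared across all actions.
motive_values: dict[str, float] = {"hunger": 0.25, "ambition": 0.75}
```

Note that d is split in two here: each action carries only its dₚ weights, while the live dₘ values sit in one shared dictionary, since the same motive can drive many different actions.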

This functionally represents what I’ve written in that older article quite directly.

The means a character possesses might limit what they can do at a given time. We generally think of means as overarching values (think: the means Sauron from The Lord of the Rings possesses by virtue of being the current Dark Lord of Arda), and often that is indeed correct. But as we move down to a finer scale of detail, things like stamina or mood matter a lot more (incidentally, values important to the player character in my game, too… hmmm). Now, I’m not saying that Sauron sometimes gets bored and dedicates an hour or two to playing yeet the goblin, but what I am saying is that he might. So ultimately, many variables that matter to specific actions could be chalked up under means. Some of these are internal (like stamina: “I’m too tired to do this now”) and some external (like time of day: “I can’t go to the library, it’s closed”).
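A sketch of what such a means check might look like in code, assuming the world state is a simple dictionary; the keys and thresholds here are hypothetical:

```python
# Hypothetical means predicate m: combines an external condition
# (the library's opening hours) with an internal one (stamina).
def can_visit_library(world: dict) -> bool:
    library_open = 8 <= world["hour"] < 20   # "I can't go, it's closed"
    rested = world["stamina"] > 0.3          # "I'm too tired to do this now"
    return library_open and rested
```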

A character’s motives, on the other hand, represent their priorities in life. There are things we need to get done and things we want to get done, and sometimes conflicts develop between the two. The idea is that each character is guided by some set of needs, desires or wants: things that shape the prioritisation of one goal over another. This set can be general, or it can be intrinsically linked to the agent in question. Of course, the character must first have the means to carry out a goal, but once we’ve determined what they can do, out of all the things possible, they pick one goal to pursue actively. The same motives can affect the desire to achieve different goals, including negatively, so the values of both dₘ and dₚ can be negative as well as positive, and dₚ can even be variable. For this simple AI, we assume that when a character decides to pursue a goal, they select the one for which they have the means and the highest cumulative motive.

Motive(a):

return ∑((dₘ, dₚ) ∈ dₐ) dₘdₚ
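In Python, assuming the live dₘ values sit in a dictionary keyed by motive name and each action stores its dₚ weights the same way, that sum might look like:

```python
def motive_score(weights: dict[str, float],
                 motive_values: dict[str, float]) -> float:
    """Sum of d_m * d_p over the (d_m, d_p) pairs tied to this action."""
    return sum(motive_values[name] * weight
               for name, weight in weights.items())
```

For example, an action weighted {"hunger": 2.0} under motive values {"hunger": 0.5, "ambition": 0.9} scores 0.5 · 2.0 = 1.0; the ambition motive simply doesn’t appear among this action’s pairs.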

Finally, the goal itself is pretty flexible. Obviously, its definition is highly dependent on the game in question, but even within the scope of a specific game’s domain, not all goals need to be equal. For example, sometimes it’s about progressing the nefarious plans of world domination, but the same character might also have the need to grab a quick brunch. Ultimately, this is a separate implementation detail.

Let’s re-examine the relationship between goals and motives one last time. In this model, a goal is stored as the third value, g, of each triple (m, d, g), and its associated motives are tied up in what d represents. So each motive is associated with multiple goals, and each goal is associated with multiple motives. (How a motive affects each goal it’s associated with might also change depending on the weighting value.)

So what does the general algorithm look like?

Step 0: Initialise – decide upon the initial values of A and D.

Step 1: Action preselection – obtain all actions for which the means are met (here Ω denotes the current state of the game world):

A’ := {(m, d, g) ∈ A: m(Ω)}

Step 2: Action priority – select the action with the highest Motive(a) score:

a’ := argmax(a ∈ A’) Motive(a)

Step 3: Carry out the action – whatever that might be and however that might work in the domain of the game in question. We can assume that the character pursues this action until it is completed or interrupted in some decisive way. We can playfully refer to this as executing the action upon the universe:

Ω := Inact(a’, Ω)

Step 4: Update motive values – at this point, we can update the values of all motives (the dₘ values). It’s also fair to assume that the update process has something to do with two factors, the passage of time and the result of the previously taken action.

D := {d + Δd(Ω): d ∈ D}
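A sketch of this update in Python, assuming the per-motive deltas Δd have already been computed from the world state by some game-specific function:

```python
def update_motives(motive_values: dict[str, float],
                   deltas: dict[str, float]) -> dict[str, float]:
    """Shift each motive value d by its delta; motives with no
    delta this tick are left unchanged."""
    return {name: value + deltas.get(name, 0.0)
            for name, value in motive_values.items()}
```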

Step 5 (Optional, but quite useful): Update the action list – this might not be self-evident, but it should make sense if you think about it; A is not a constant. The triples contained within can change over time. Sometimes you might want to remove an action, for example, if a character manages to accomplish some objective and no longer needs to think about pursuing it. Sometimes you might want to temporarily or permanently add something new for the character to do. Either way, A can change, and that’s important to remember.

Step 6: Return to Step 1!
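Putting it all together, here is a minimal Python sketch of one pass through Steps 1 to 4. Actions are plain (m, d, g) tuples, and the enact and update callbacks stand in for the game-specific parts; all names here are illustrative:

```python
def agent_step(actions, motive_values, world, enact, update_motives):
    # Step 1: preselect the actions whose means m currently hold.
    feasible = [(m, d, g) for (m, d, g) in actions if m(world)]
    if not feasible:
        return world, motive_values  # nothing doable this tick

    # Step 2: argmax over the cumulative motive score, sum of d_m * d_p.
    def score(action):
        _, d, _ = action
        return sum(motive_values[name] * weight for name, weight in d.items())
    chosen = max(feasible, key=score)

    # Step 3: enact the chosen action upon the universe.
    world = enact(chosen, world)

    # Step 4: update the motive values from the new state of the world.
    motive_values = update_motives(motive_values, world)
    return world, motive_values
```

Step 5, editing the action list itself, would happen between passes, and Step 6 is just calling agent_step again.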

This whole article might seem like it came out of nowhere, but it’s something that’s been on the back burner for a while. I’ve been thinking about how best to handle this since almost the beginning of development and went through several designs. Finally, this solution seems adequate for my needs.
