My Game AI assignment 1: Foundations

The Engine

The game engine I used was Unity3D 5.3.2. The engine provides an IDE (Integrated Development Environment) whose GUI (Graphical User Interface) can create game objects, complete with labels, positions, rotations, and sprite and/or script components, without writing any code at all. It also let me perform raycasting and set up colliders using Unity's built-in functions, which report details such as the distance of a collision, the name of the object collided with, and so forth. Unity also provides some great tools that make vector manipulation easier, such as vector addition, subtraction, normalization, and even dot product calculations, rather than requiring the math to be performed step by step by hand. Finally, Unity does a great job of listening for key press events.

Even though Unity provided so many useful functions and a wonderful interface, a lot was still left for me to code and construct myself. For example, while Unity provides raycasting and 2D box colliders for walls, I still had to cast the rays out for the range finders and write the code that determines whether a wall had been hit, which wall it was, and which range finder (feeler) had the shortest distance (and was therefore closest to the wall). I also implemented pie slicing as well as adjacency detection. Sprites for the subject and the other agents were created with the help of PAINT.net, along with code to render debug text to the screen at runtime.

The Code

As for the overall class structure, a manager class was initially created to act as a god class that held all of the game objects and controlled everything for everyone. As development progressed, however, it seemed a better fit to give the agents and the subject a shared agent class (script), so that if every agent were to need the same functionality, such as adjacent agent detection, it would be able to have it.

A separate class was also created for the subject alone, which provides the forward, backward, and rotation movement based on the key pressed. During development, however, it became clear that some of the code assumes only a single subject controllable by the user. Therefore some of the code is not usable by all agents unless the manager is rewritten to account for them. Likewise, the number of walls is currently hardcoded, as are the agents themselves; if an additional wall or agent were to be added, it would need to be accounted for within the manager itself.

How the code works:

On start:

  1. The manager initializes some values, most importantly game object and script object pointers that can be accessed publicly and statically by any other script (see the sketch after this list). This makes it easier to run code from other objects when necessary, and to instantly obtain information such as position and rotation without having to repeatedly re-find the pointers to those objects.
  2. The subject script initializes some default values, as well as a reference to its own agent script for faster access within itself.
  3. The agent script initializes some default values.
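
Below is a minimal sketch of that start-up step. The class, object, and field names (Manager, AgentScript, a game object literally named "Subject") are illustrative assumptions, not the actual names used in the project.

```csharp
using UnityEngine;

// Stand-in for the real agent/subject script that the manager points at.
public class AgentScript : MonoBehaviour { }

// Minimal sketch of the manager's Start(): cache public static pointers so
// any other script can reach these objects without re-finding them each time.
public class Manager : MonoBehaviour
{
    public static GameObject subjectObject;
    public static AgentScript subjectScript;

    void Start()
    {
        // Object names are hardcoded here, just as walls and agents are in the report.
        subjectObject = GameObject.Find("Subject");
        subjectScript = subjectObject.GetComponent<AgentScript>();
    }
}
```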

On frame update:

  1. The manager updates some variables involving the subject's and agents' positions and the agents' distances from the subject, and (re-)calculates the subject's relative heading vector.
  2. The subject checks whether any of the Up, Down, Left, or Right arrow keys are pressed and performs the corresponding movement: move forward on Up, move backward on Down, rotate left on Left, and rotate right on Right. It also runs the raycasting code for the range finders and the pie slice checking code. The subject is additionally responsible for refreshing the debug text on screen. Since most of the work in the code is driven by the subject (range finding, pie slicing, and the adjacent agent checks, the last only because the debug text refresh ends up calling the adjacency code), the text refresh is left in the subject's update so that those tasks are completed before the debug text is drawn.

Now for some additional detail on how the code works. As mentioned in the frame update description, the subject moves based on key presses during an update, and movement is instantaneous: the subject jumps by a set amount to a location relative to the direction it is facing. Speed is therefore effectively instantaneous.
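
As a rough illustration, the movement handling amounts to something like the sketch below. The step sizes and the script name are assumed values, not the ones in the actual project.

```csharp
using UnityEngine;

// Sketch of the subject's per-frame, key-driven movement. Each press moves or
// rotates the subject instantaneously by a fixed step relative to its heading.
public class SubjectMovement : MonoBehaviour
{
    public float moveStep = 0.1f; // distance jumped per frame while a key is held (assumed)
    public float turnStep = 5f;   // degrees rotated per frame while a key is held (assumed)

    void Update()
    {
        if (Input.GetKey(KeyCode.UpArrow))
            transform.Translate(Vector3.up * moveStep);   // forward, in local space
        if (Input.GetKey(KeyCode.DownArrow))
            transform.Translate(Vector3.up * -moveStep);  // backward
        if (Input.GetKey(KeyCode.LeftArrow))
            transform.Rotate(0f, 0f, turnStep);           // rotate counter-clockwise (left)
        if (Input.GetKey(KeyCode.RightArrow))
            transform.Rotate(0f, 0f, -turnStep);          // rotate clockwise (right)
    }
}
```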

Sensors

Sensor Usefulness:

The range finder is useful so that the subject knows when it is approaching a wall. The code does not currently exist, but could easily be implemented so that, if a wall is within a certain distance, the subject stops itself from walking into the wall.
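
A hypothetical version of that check might look like the following sketch; the feeler length, stop distance, and class name are all assumptions, and this code is not part of the current project.

```csharp
using UnityEngine;

// Hypothetical wall-stop check built on the range finder idea: refuse a
// forward step when the forward feeler reports a wall closer than a threshold.
public class WallStop : MonoBehaviour
{
    public float feelerLength = 1.0f; // how far the forward feeler reaches (assumed)
    public float stopDistance = 0.3f; // do not step forward if a wall is this close (assumed)

    // Returns true when it is safe for the subject to take a forward step.
    public bool CanStepForward()
    {
        // In practice a layer mask may be needed so the ray ignores the subject's own collider.
        RaycastHit2D hit = Physics2D.Raycast(transform.position, transform.up, feelerLength);
        return hit.collider == null || hit.distance > stopDistance;
    }
}
```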

The adjacent agent check informs the subject not only that an agent is within a very short range, but can also provide the heading to that agent if the subject needs to turn and face it in pursuit, or to turn away and try to flee from it.
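
For example, a pursue-or-flee turn could be derived from the adjacency information along these lines. This is a hypothetical helper, not part of the current code, and it assumes a top-down 2D scene where a sprite's local +y axis is its forward direction.

```csharp
using UnityEngine;

// Hypothetical helper: compute the rotation that faces the subject toward an
// adjacent agent (pursue) or directly away from it (flee).
public static class TurnHelper
{
    public static Quaternion FacingRotation(Vector2 subjectPos, Vector2 agentPos, bool flee)
    {
        Vector2 toAgent = (agentPos - subjectPos).normalized;
        if (flee)
            toAgent = -toAgent; // face the opposite way to run from the agent

        // Angle of the direction vector, shifted so that +y counts as zero rotation.
        float zAngle = Mathf.Atan2(toAgent.y, toAgent.x) * Mathf.Rad2Deg - 90f;
        return Quaternion.Euler(0f, 0f, zAngle);
    }
}
```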

The pie slicing method gives a general idea of whether there are no, few, or many agents within a certain area around the subject. This can be useful, for instance, when the subject is about to be surrounded (an agent on each side) or when agents are attacking from a single side (in which case it is best to flee in the opposite direction).

Sensor Implementation:

The rangefinder was implemented by casting out three 2D rays. The code then checks whether any of the rays hit a collider and, if so, tracks which wall(s) were hit, which feeler had the least distance to a wall, and the overall shortest distance between any ray and any wall.
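
A sketch of that rangefinder, assuming a feeler length and spread angle (both made up for the example), might look like this:

```csharp
using UnityEngine;

// Sketch of the three-feeler range finder: left, centre, and right rays cast
// out relative to the subject's heading, tracking the closest wall hit.
public class RangeFinder : MonoBehaviour
{
    public float feelerLength = 2f;   // how far each feeler reaches (assumed)
    public float spreadDegrees = 30f; // angle of the side feelers off centre (assumed)

    void Update()
    {
        Vector2[] feelers =
        {
            Quaternion.Euler(0f, 0f,  spreadDegrees) * transform.up, // left feeler
            transform.up,                                            // centre feeler
            Quaternion.Euler(0f, 0f, -spreadDegrees) * transform.up  // right feeler
        };

        float minDistance = Mathf.Infinity; // overall shortest distance to any wall
        int closestFeeler = -1;             // index of the feeler closest to a wall
        string closestWall = null;          // name of the wall that feeler hit

        for (int i = 0; i < feelers.Length; i++)
        {
            RaycastHit2D hit = Physics2D.Raycast(transform.position, feelers[i], feelerLength);
            if (hit.collider != null && hit.distance < minDistance)
            {
                minDistance = hit.distance;
                closestFeeler = i;
                closestWall = hit.collider.name;
            }
        }

        // The real code feeds these values into the on-screen debug text instead.
        if (closestFeeler >= 0)
            Debug.Log("feeler " + closestFeeler + " is " + minDistance + " from " + closestWall);
    }
}
```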

The adjacent agent check sensor is very simple. It calculates the vector distance (magnitude) between the subject and each agent. If the distance is less than or equal to a certain value, that agent is considered adjacent to the subject, at which point a debug line is drawn between the subject and the agent, and text is compiled and returned to display the agent's position in the on-screen debug text area.
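
A sketch of the adjacency check, with the agent list and the radius as assumed inputs:

```csharp
using UnityEngine;

// Sketch of the adjacent agent check: any agent within a fixed radius of the
// subject is flagged as adjacent and marked with a debug line.
public class AdjacencyCheck : MonoBehaviour
{
    public Transform[] agents;          // the other agents in the scene (assigned elsewhere)
    public float adjacentRadius = 1.5f; // adjacency threshold (assumed value)

    void Update()
    {
        foreach (Transform agent in agents)
        {
            // Magnitude of the vector from the subject to this agent.
            float distance = (agent.position - transform.position).magnitude;
            if (distance <= adjacentRadius)
            {
                // The real code also appends this agent's position to the
                // "Adjacent Agent Sensors" debug text shown on screen.
                Debug.DrawLine(transform.position, agent.position, Color.magenta);
            }
        }
    }
}
```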

The pie slicing code was implemented using child sprites of the subject which are "set active" (made visible or invisible) depending on whether an agent is detected within that area. The code that determines whether an agent is within a pie slice runs on a per-agent basis. It first determines whether the agent is within range of the pie slices around the subject by calculating the distance vector through vector subtraction. If the agent is within range, a relative heading is calculated: the distance vector is normalized, its dot product is taken with the subject's heading (forward) vector, the arccosine of that value is taken, and the result is converted to degrees using Unity's Mathf.Rad2Deg constant (rather than using Mathf.PI and 180).
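
The relative-heading step alone looks roughly like the sketch below, assuming a top-down 2D scene where transform.up is the subject's facing direction; the class and method names are illustrative.

```csharp
using UnityEngine;

// Sketch of the relative-heading calculation: the angle, in degrees, between
// the subject's heading and the direction from the subject to the agent.
public static class HeadingUtil
{
    public static float RelativeHeading(Transform subject, Transform agent)
    {
        Vector2 toAgent = (agent.position - subject.position).normalized;
        float dot = Vector2.Dot(toAgent, subject.up);
        // Clamp guards against floating-point values slightly outside [-1, 1]
        // before taking the arccosine; the result is always in 0..180 degrees.
        return Mathf.Acos(Mathf.Clamp(dot, -1f, 1f)) * Mathf.Rad2Deg;
    }
}
```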

The relative heading will always be a positive angle between 0 and 180 degrees. This is because the dot product was used to find the angle: for vectors x and y, x dot y = |x||y|cosθ, so solving for θ with the arccosine only ever yields values between 0 and 180 degrees, and the sign (whether the agent is to the subject's left or right) is lost. As a result, some special work had to be done to determine which pie slice the agent was actually residing in.

The first step was to subtract 90 from the relative heading angle, giving a range of -90 to 90. This proved incredibly useful: if the subject were facing north, positive values would fall on the northern part of a unit circle imagined around the subject, and negative values would mostly fall on the lower half of the circle. Examining that circle, I realized that the X dividing up the pie slices around the subject could be treated as ten 36-degree wedges, with each pie slice owning 2.5 of them. If, for example, the agent was in the upper portion of the circle, it could be to the north, east, or west of the north-facing subject; but with the 36-degree offset, all I had to check was whether the relative heading angle was greater than 36 degrees or less than -36 degrees to determine whether it was north or south in this configuration.

That still left the issue of determining when the agent was east or west of the subject. So the relative heading was recalculated, this time with a 90-degree offset. This effectively rotated the "unit circle" by 90 degrees and recreated the same scenario as before: I could again subtract 90 degrees from the relative heading angle and use the same greater-than-36 / less-than-negative-36 test to determine whether the agent was east or west of a north-facing subject. If the subject faces any other direction, everything still holds; north is simply used to describe positions relative to the subject.
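
Putting those two passes together, the slice test can be sketched as follows. The 36-degree boundary is the value described above; the names, the forward/right axis choice, and the sign conventions are my assumptions, and the real code drives the child slice sprites (via SetActive) from flags like these.

```csharp
using UnityEngine;

// Sketch of the pie slice test: one relative heading taken against the
// subject's forward axis separates front from back, and a second heading
// taken against the axis rotated 90 degrees separates left from right.
public static class PieSliceTest
{
    // Relative heading against an arbitrary axis, shifted from 0..180 to -90..90.
    static float ShiftedHeading(Vector2 axis, Vector2 subjectPos, Vector2 agentPos)
    {
        Vector2 toAgent = (agentPos - subjectPos).normalized;
        float dot = Mathf.Clamp(Vector2.Dot(toAgent, axis.normalized), -1f, 1f);
        return Mathf.Acos(dot) * Mathf.Rad2Deg - 90f;
    }

    // Each slice is tested independently, so an agent near a boundary can be
    // flagged in two slices at once (as in screenshot ii-7).
    public static void Classify(Transform subject, Transform agent,
                                out bool front, out bool back,
                                out bool left, out bool right)
    {
        float alongForward = ShiftedHeading(subject.up, subject.position, agent.position);
        front = alongForward < -36f; // strongly toward the heading direction
        back  = alongForward >  36f; // strongly away from the heading direction

        float alongSide = ShiftedHeading(subject.right, subject.position, agent.position);
        right = alongSide < -36f;
        left  = alongSide >  36f;
    }
}
```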

Lessons Learned

Looking back, I would have loved to search for game objects by tag rather than by predefined names. This would make it easier to add and remove things such as walls and other agents. I also regret implementing the adjacent agent check in such a way that it is only called as a side effect of refreshing the debug text displayed on screen. Most of the code was far easier than expected; the pie slices, without a doubt, gave the most difficulty. Half of the difficulty was calculating the relative heading properly in the first place: the dot product was initially taken using the subject's position, the subject's forward-facing position, and so on, but never the distance vector from the subject to the agent. It was the Game AI course's textbook that brought that error to light. There was also a great deal of difficulty in figuring out where in the pie slices the agent was. At the time, the relative heading was not being calculated correctly in the first place (but I moved forward assuming it must be correct, more or less, just to keep going). One early method for the pie slices, which at the time were really a pie slice box, was to subtract the x and y position values of the subject and the agents, determine whether the agent was within range of the box, and then determine which of the four geometric quadrants it fell in. About 500 lines of code later, this worked pretty well, except whenever the subject rotated at all, since every value was based on an axis-aligned square box directly around the subject. It was a truly serendipitous moment when, shortly after backtracking to see whether the book had any insight on pie slices, I discovered that the relative heading angle was being calculated incorrectly, and the ideas poured in from there.

Recap: "Great! We have 0 to 180 degrees! Now what?" "Let's find a way to get some sort of range out of these degrees that we can differentiate and make useful... well, I remember rotating the subject back in the adjacency check, since Unity considers north to be the 0 degree of the unit circle, so let's start there!" Step by step, that line of thinking led to the pie slicing code now in place.

If I could do something differently, it would be to design the code even more with every agent (including the subject) in mind, each having adjacent agent sensors, pie slicing, and wall feelers (rangefinders) that actually worked and were implemented (but could be shut off with a simple switch). That was the initial plan, but it was more or less scrapped in places in favor of faster implementation, testing, and debugging, with the idea that optimization comes last. I look forward to implementing this code again in the future with a fresh slate and a clearer understanding of these sensors; it should be easier, simpler, and able to get more out of them than my current code does.

Screenshots

Due to the large size of clear screenshots, each has been given its own page in this report.
Each has been captioned and sorted into the following categories, i through iii:

  1. The first set of screenshots shows the subject in at least three different spots, each with a different position and heading.
  2. The second set shows the subject facing north, south, east, and west, as well as interactions between it, the sensors, and the other agents.
  3. The third set includes screenshots of the subject interacting with the wall from the north, south, east, and west of the wall, as well as a screenshot of it interacting with two walls at once.

Category i:
i_1
i – 1: This screenshot shows the subject facing north, in an arbitrary position.


i_2

i – 2: This screenshot shows the subject facing northwest-ish, in a different position than before.

i_3

i – 3: This screenshot shows the subject facing southwest-ish, in a different position than before.

 

Category ii:

ii_1

ii – 1: This screenshot shows an agent (npc2) within the third pie slice (slice number starts from west, north, east, south).

ii_2

ii-2: This screenshot shows an agent (npc2) within the fourth pie slice (slice number starts from west, north, east, south).

ii_3

ii-3: This shows the agent npc2 (green) in the third pie slice, and the agent npc1 (purple) within the fourth pie slice (slice number starts from west, north, east, south). It also shows the adjacent agent check detecting npc1 (purple), with a line drawn to that agent. The position of that agent can be seen in the "Adjacent Agent Sensors" list.

ii_4

ii-4: This shows the agent npc2 (green) in the first pie slice, and the agent npc1 (purple) within the second pie slice (slice number starts from west, north, east, south). It also shows the adjacent agent check detecting npc2 (green), with a line drawn to that agent. The position of that agent can be seen in the "Adjacent Agent Sensors" list.

ii_5

ii-5: This shows the agent npc2 (green) in the first pie slice, and the agent npc1 (purple) within the third pie slice (slice number starts from west, north, east, south). It also shows the adjacent agent check detecting both npc1 (purple) and npc2 (green), with appropriately colored lines drawn from the subject to each agent, and both agents listed in the "Adjacent Agent Sensors" list.

ii_6

ii-6: This shows essentially the same thing as screenshot ii-5, except the subject is facing 180 degrees from where it was before, so the headings have different values.

ii_7

ii-7: The pie slice detection will mark an agent in multiple slices if it resides within more than one, as illustrated in this screenshot. Agent npc1 (green) can be seen residing within the third and fourth pie slices (slice number starts from west, north, east, south).

Category iii:

iii_1

iii-1: This figure shows the subject interacting with a wall while facing east. The Minimum Distance text shows that, with the rangefinders numbered from left to right, rangefinder #1 has a distance of 1, #2 a distance of 0.7, and #3 a distance of 1. The minimum distance over all the rangefinders is displayed as 0.7.

iii_2

iii-2: This figure shows the subject interacting with a wall while facing west. The Minimum Distance text shows that, with the rangefinders numbered from left to right, rangefinder #1 has a distance of 0.8, #2 a distance of 0.5, and #3 a distance of 0.8. The minimum distance over all the rangefinders is displayed as 0.5.

iii_3

iii-3: This figure shows the subject interacting with a wall while facing south. The Minimum Distance text shows that, with the rangefinders numbered from left to right, rangefinders #1 and #3 both show a distance of "Inf", meaning no interaction with the wall was detected, while rangefinder #2 detected a distance of 0.8. Thus the minimum distance is 0.8.

iii_4

iii-4: This figure shows the subject interacting with a wall while facing north. The Minimum Distance text shows that, with the rangefinders numbered from left to right, rangefinders #1 and #3 both show a distance of "Inf", meaning no interaction with the wall was detected, while rangefinder #2 detected a distance of 0.8. Thus the minimum distance is 0.8.

iii_5

iii-5: This figure shows the subject interacting with a wall while facing roughly northeast. The Minimum Distance text shows that, with the rangefinders numbered from left to right, rangefinders #1 and #2 both interact with the northern wall, with distances of 0.4 and 0.8 respectively, which makes sense since rangefinder #1 has far less distance between it and the wall. Rangefinder #3 is interacting with the vertical wall south of the other wall and has a distance value of 0.7. The minimum distance reported for the feelers is 0.4. It should be noted that the code actually tracks which feelers have the shortest distance to which wall, and what those distances are per wall, in case those values are ever needed.