Writing games relies heavily on libraries for multimedia processing. Reading and writing image formats, creating and controlling windows, and manipulating graphics are all complicated tasks that you certainly don't want to reinvent with every new project.
Unity is a full 2d/3d game framework that includes rendering, physics, audio, input handling, asset importing, and a visual editor.
In these initial labs, we will not be using the high-level features available in the engine, as we are going to go over the fundamentals of physical simulation code. This is similar to how you write your own stack or queue once to understand it, before using the standard ones from there on out.
Unity project setup changed a bit with the 2021 release. Details in the video below.
(You must log into Blackboard for this video to show.)
Once you create a project, in the project directory is a folder called Assets. This is where you will store your images, models, code, and all other parts of your project. Download some (clean) image to the Assets folder, and note how it shows up in Unity in the Assets panel. Click on it there, and in the Inspector window you can see the settings that Unity used to import the image.
(You must log into Blackboard for these videos to show.)
The Unity IDE follows many conventions from 3d modeling programs, allowing the user to manually build a scene by dragging in assets. This is convenient for quickly setting up and adjusting a game environment, although the model runs into limitations as games get bigger and more dynamic.
In the main Scene window, you can see a white rectangular outline. This is the part of the scene that will be visible when the game runs, called the viewport. Drag the image you got from Assets into the Scene window. If it's too big relative to the viewport, click on it over in Assets and raise the Pixels Per Unit value in the import settings.
Once it's a decent size, click the play triangle at the top middle of the IDE to run your "game". Click it again to stop.
In Unity terms, you created an entity in the scene that displays a 2d image. Notice that the image also appeared in the upper-left Hierarchy pane, which is a tree view of all the entities in the scene. If you rename it there, you're not renaming the image that you imported (in Assets); you're renaming one specific entity in the scene that displays that image. Drag the image from Assets to the Scene window a second time and you'll see that it creates a second entity.
In order to add functionality to your game/lab/demo/thing, you attach script components to entities. There are a number of ways to do this in the IDE; here's one:
- In the Assets pane, select Create->C# Script
- Name it First
In that script file is a single class that inherits from MonoBehaviour, which is the base class for all Unity script component functionality. If necessary, change the name of the class to match the name of the script file, First.
Unity automatically creates two methods for you to implement, Start and Update. Before we get into the details of how this all works, try this:
- Add Debug.Log("Starty!"); to the Start method
- Add Debug.Log("Updating!"); to the Update method
- Drag the script from Assets onto the first entity in the scene
What you should see is a bunch of debug messages spit out to the Console panel - one Starty! and lots of Updating!
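Here's a minimal sketch of what First might look like after those steps (assuming the usual using UnityEngine line at the top, which Unity's script template generates for you):

using UnityEngine;

public class First : MonoBehaviour
{
    // Runs once, roughly at the beginning of the game
    void Start()
    {
        Debug.Log("Starty!");
    }

    // Runs every frame
    void Update()
    {
        Debug.Log("Updating!");
    }
}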
Games (all interactive applications, actually) run a main loop, a while(true) that just keeps going until the game is done. In that loop, the program does three things:
- check what the user is doing (input)
- update the state of the world based on the simulation rules and that input
- draw the current state to the screen
Unity, like most frameworks and engines, hides that main loop from you. Instead of managing it, you write those script components and attach them to entities. Each time through the main loop, the engine calls Update on each active script in the scene. The engine calls Start on each active script roughly at the beginning of the game, although it's not quite that simple.
To implement your game rules, you put code in Start that you want to happen at the beginning to set things up, and put code in Update that you want to happen every frame (that's the part that updates the world based on the simulation rules and user input).
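Conceptually, the hidden loop looks something like this (pseudocode only; the function names here are placeholders, not Unity API):

// Conceptual sketch - Unity runs a loop like this for you
while (gameIsRunning)
{
    ReadInput();             // placeholder: check what the user is doing
    UpdateWorld(deltaTime);  // placeholder: apply the game rules; your Update methods get called here
    DrawFrame();             // placeholder: render the current state to the screen
}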
Our entity isn't very interesting at this point, because it just draws itself in the same place every frame. Games, like traditional animation, work by showing the user a rapid sequence of images that change slightly. Let's make that entity do something.
The class MonoBehaviour that we inherited from contains, among other things, a data member called transform. The transform expresses the position, rotation, and scale of the entity within the scene. In the Unity editor, if you click on the entity you can see the Transform component in the Inspector pane on the right. If you edit those numbers, you can move the entity around, rotate it, and scale it.
In our First class methods, we can access transform, because we inherit it from MonoBehaviour. Try putting this code in Start:
transform.position = new Vector2(0, 0);
When you run the game, Start will be called and the entity will be positioned in the center of the viewport. The position we assigned is a Vector2, which has an x component and a y component.
Note that Unity's Vector2 and Vector3 are C# structs, not classes, meaning that they are passed by value (copied). Because transform.position hands you a copy, you can't edit its x and y in place; instead you assign a whole new vector (which is why we set it to a new Vector2 above).
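A quick illustration of that copy behavior (a sketch; the error comment paraphrases the compiler, it isn't the exact message):

// This will NOT compile: transform.position gives you a copy of the struct,
// so editing the copy's x would be pointless and the compiler rejects it.
// transform.position.x = 1f;

// Instead, build a whole new vector and assign it back:
transform.position = new Vector2(1f, 0);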
How do you make things move in a consistent and physically realistic way? It turns out to be easy! Read these two references and come back:
Reference: Movement and Numerical Integration
Based on that, we need to add a velocity vector to our class. You can do it as a data member inside the class:
Vector3 velocity = new Vector2(0.1f, 0);
We use a Vector3 because that's what transform.position is, and while Unity is happy to copy a Vector2 into a Vector3 (it just zeroes out z), other operations will complain at us.
The f indicates that the number is a float rather than a double. Graphics are one of the few places where that kind of optimization matters (it allows more data to be moved between the CPU and GPU).
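Two quick illustrations of those points (a sketch):

// Unity converts a Vector2 to a Vector3 implicitly, filling in z = 0
Vector2 flat = new Vector2(0.1f, 0);
Vector3 full = flat;      // full is (0.1, 0, 0)

// The f suffix marks a float literal; without it, 0.1 is a double
float speed = 0.1f;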
Also, notice that C# is cool with you initializing data members outside of any method - the initializer runs whenever an object is created, at the same time that the constructor is called.
In Update, add the velocity to the position every frame:
transform.position = transform.position + velocity;
Run the game and the entity moves! However, since we're adding a constant (0.1) every frame, the speed of movement depends on the frame rate (i.e. how fast the computer is). That's bad! We want to move at a consistent speed in real time, so we use Euler integration as discussed in the references. The amount of time since the last frame is given to us by Unity as Time.deltaTime:
transform.position = transform.position + (velocity * Time.deltaTime);
(You'll want to increase your speed from 0.1 to around 5 to be reasonable).
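Putting the pieces so far together, the movement part of First looks something like this (a sketch using the suggested speed of 5):

using UnityEngine;

public class First : MonoBehaviour
{
    // Units per second; declared as Vector3 to match transform.position
    Vector3 velocity = new Vector2(5f, 0);

    void Update()
    {
        // Euler integration: position += velocity * elapsed time
        transform.position = transform.position + (velocity * Time.deltaTime);
    }
}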
Every frame, the game has to check what the user is doing and apply those actions to the simulation. To make our entity move only when we hold down the d key, we use Unity's built-in input handling to conditionally set the velocity at the beginning of Update:
velocity = new Vector2(0, 0);
if (Input.GetKey(KeyCode.D))
{
    velocity = new Vector2(5f, 0);
}
Input.GetKey returns true if the specified key is being held down. Run the game and use the d key to move your entity.
To finish up, add more code to Update so that you can use the WASD keys to move vertically, horizontally, and diagonally (eight possible directions).
In this part, we'll use the same numerical integration to implement click-to-move. All that will change is how we generate the correct velocity vector. Instead of moving in a direction specified by WASD, it will point towards where you click the mouse.
In your new script's Update, use Input.GetMouseButtonDown(0) and Input.mousePosition to print (Debug.Log) the position of the mouse every time you click. Note that Input.GetMouseButtonDown only returns true on the frame that the button is clicked, not the whole time it's held down (event vs. polling). The argument 0 indicates the left mouse button.
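A minimal sketch of that check inside Update:

void Update()
{
    // True only on the single frame the left mouse button goes down
    if (Input.GetMouseButtonDown(0))
    {
        Debug.Log(Input.mousePosition);
    }
}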
In the past lab, we established that 0,0 is the center of the viewport. Move your entity around in the scene to see the range of x and y in the visible area. Now compare that with the numbers that you're getting when you click. Why don't they match?
Graphics systems, especially 3d ones, have the concept of a camera, which is the viewpoint of the player into the world. Note the Main Camera that was automatically added to the Hierarchy pane when you created the project. Click on it and note the many camera options that come up in the Inspector. The camera, importantly, can move around in the world, and serves to project the entities in the world onto the screen where the user can see them. Input.mousePosition returns the position of the mouse in the coordinates of the screen (e.g. 1680x1050 pixels). This makes sense because the mouse pointer isn't "in" the world; it's on the screen, on top of what you're seeing.
In order to translate from the point on the screen where you clicked to the point in the world that is under that spot, you have to convert from screen coordinates to world coordinates based on the position and size of the camera viewport. Since this is such a common thing, Unity has a method to do it. Camera.main is a static (global) reference to the main camera in the scene, for convenience, and Camera.main.ScreenToWorldPoint will take a screen point and return a world point. Wrap that around your Input.mousePosition call and verify that the printed positions now match the world coordinates.
Note that it returns a Vector3, with the Z value set to the camera's position in Z (-10 by default).
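In code, that wrapping looks something like this (a sketch):

if (Input.GetMouseButtonDown(0))
{
    // Convert the screen-space click into world coordinates
    Vector3 worldPoint = Camera.main.ScreenToWorldPoint(Input.mousePosition);
    Debug.Log(worldPoint);
}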
In this video, I quickly went over the math to figure out the unit vector pointing from the location of our entity to the point that you click on. The link covers the same thing as the video, but in more detail and with better images.
(You must log into Blackboard for this video to show.)
Reference: More 2d Vector Operations
Note that a good vector class, such as Unity's Vector2/Vector3, returns new vector objects from most operations, rather than mutating the existing vector. This is a good default approach: if A = B + C, you don't want B and C changed along the way, which can lead to hard-to-find bugs.
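For example (a sketch showing that the operands are left untouched):

Vector2 b = new Vector2(1f, 2f);
Vector2 c = new Vector2(3f, 4f);
Vector2 a = b + c;   // a is (4, 6); b and c are unchanged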
Now, modify the code in Update that prints the mouse location to instead set a class-level destination vector for the entity to move to. For simplicity, also set a boolean indicating that you have a destination set (since you can't set the vector to null). You'll need to zero out the Z component of the vector so that your entity stays in the 2d plane.
In Update, on every frame where you have a destination set, move towards it at a constant SPEED. Follow the algorithm in the reference above (get the path, normalize, scale to SPEED, and integrate the position).
Since this is a standard thing to do, Unity has a helper method called MoveTowards that does those four operations, and another called Lerp that does the normalization and scaling. Do NOT use those methods! Do the calculation yourself as described. The Unity Vector classes have all the arithmetic operations that you need.
One last note: I haven't said anything about how you stop when you get to the destination point. What you'll find is that the entity gets there and jitters like crazy, because the odds of the discrete movement landing exactly on the target point are infinitesimal. How do you fix that? Consider it an edge case.
Bring your project to class and we'll go over how to commit it using Git.
Assignment repo invite:
https://classroom.github.com/a/LTfgvqaR
(You must log into Blackboard for this video to show.)