This is a handbook meant to explain the concepts and execution behind the "Big Planet Procedural" project, my directed study final project.

As some background, the project is meant as a case study of how code can create diverse and infinite art through basic techniques of procedural generation. It is inspired by the likes of No Man's Sky.

The program generates a little planet that you can walk around on; certain features are interactive.

Unlike the midterm project, which was made in two dimensions, this project was made in three dimensions using the Unity 3D game engine.

Note that this documentation assumes that the midterm documentation has been read, and will not cover topics covered there, including noise functions and color theory.

The most basic and fundamental tool in 3D development is the mesh. A mesh is the data structure that represents a 3D model. Every single object, fundamentally, is a mesh. It might be postprocessed by various filters, textures, and shaders, but these all take mesh data as input.

At the most elementary level, a mesh is just a collection of points. However, the same collection of points can represent multiple objects. For instance, consider a baseball.

The set of vertices traced by the stitches on the baseball has two possible face configurations, both of which appear on the baseball.

In order to resolve this ambiguity, we must also define every face on the mesh. We do so in terms of triangles, since the triangle is the simplest polygon. The vertices array is a one-dimensional array of three-dimensional vectors. The triangles array is a one-dimensional array of integers, each integer corresponding to the index of a vertex in the vertices array, so every sequence of three integers defines a single triangle. If the length of the triangles array is not a multiple of three, the mesh is malformed.

As an example, the following segment of code draws a square between the points (0, 0, 0), (1, 0, 0), (0, 1, 0), and (1, 1, 0):

```
Vector3[] vertices = new Vector3[4];
vertices[0] = new Vector3(0, 0, 0);
vertices[1] = new Vector3(1, 0, 0);
vertices[2] = new Vector3(0, 1, 0);
vertices[3] = new Vector3(1, 1, 0);
mesh.vertices = vertices;

// Two triangles: (0, 2, 1) and (2, 3, 1).
int[] tri = new int[6];
tri[0] = 0;
tri[1] = 2;
tri[2] = 1;
tri[3] = 2;
tri[4] = 3;
tri[5] = 1;
mesh.triangles = tri;
```

Defining the triangles procedurally on larger objects can be difficult, and requires some tinkering around with diagrams. Even so, the triangles and vertices are only the most elementary part of a mesh. They say nothing about how the mesh should be interpreted, and contain no information about lighting or textures.

Mesh lighting information is stored in terms of normal vectors, or normals. At every vertex of the mesh, we store the direction perpendicular to the surface. This helps with lighting because light arriving along the normal (perpendicular to the surface) is rendered at full brightness, whereas light arriving perpendicular to the normal (tangent to the surface) is rendered at zero brightness. This is illustrated in the following diagram from the Unity docs.

Fortunately, we can allow Unity to calculate these normals for us through the handy `Mesh.RecalculateNormals()` function.
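The effect of the normal on brightness boils down to a dot product. Here is a sketch outside Unity, in plain Python, of a simplified Lambert-style calculation (an illustration of the idea, not the engine's actual lighting code):

```python
# Sketch of Lambert ("N dot L") lighting: brightness is the dot product of
# the surface normal and the direction toward the light, clamped at zero.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def brightness(normal, light_dir):
    # Both vectors are assumed to be unit length.
    return max(0.0, dot(normal, light_dir))

# Light arriving along the normal: full brightness.
print(brightness((0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))  # 1.0
# Light perpendicular to the normal (tangent to the surface): zero.
print(brightness((0.0, 1.0, 0.0), (1.0, 0.0, 0.0)))  # 0.0
```

The clamp matters: a light behind the surface would produce a negative dot product, which is treated as unlit.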

We still have not provided any texture information, though. This is given in terms of UV and tangent data. A UV map describes a correspondence between triangles on a 2D texture and triangles on the mesh. A tangent, like a normal, describes a direction at a vertex. Unlike normals, however, there are many possible tangents at each point, and they must be chosen to follow the contour of the mesh. This data is used in bump mapping, a method of adding detail to a texture.

It turns out that both of these are incredibly difficult to generate procedurally. To solve this problem, we simply do not generate them, and instead rely on shaders. However, we will return briefly to UV mapping in a later section.

Rigidbodies are what allow us to do physics: the physics engine acts only on objects that have rigidbodies, and leaves everything else untouched.

Let's look at some basic rigidbody properties:

All of these properties pretty much work as they do in real life. Mass refers to how much weight (ha) the rigidbody has in determining movement after a collision. Drag refers to how damped an object is in moving through the air. Angular drag refers to how damped an object is when it rotates.

You'll notice that every object has "use gravity" disabled. This is because we want our gravity to work on our little spherical planet, so we've disabled our usual universal gravity and written our own gravity function.

A kinematic rigidbody is one that is unaffected by physics. However, it can affect other things. For instance, a door in a game might be kinematic because we do not want it to be impacted by flying objects and be unhinged, but it can still stop players from walking through it.

Rigidbodies interact with each other through colliders. There are a few prebuilt colliders, such as box colliders and sphere colliders. However, in order to maintain the fidelity of our meshes, we primarily use mesh colliders.

The catch is that certain physics interactions stop working if a mesh collider is not marked convex. To sidestep these inconveniences, we simply mark ours as convex, even when the mesh is not. The difference from the actual mesh is usually negligible.

Colliders can hold physics materials, which we use exactly once: for our rocks. Physics materials allow us to add kinetic friction and bounciness. We add physics materials to the rocks simply to make them "feel" better to mess around with.

All things in computer graphics are represented in terms of vectors and matrices. A point is a vector. A color is a vector. A transformation is a matrix - and so on and so forth.

The issue with vectors is that they don't make any sense without a reference. Vectors describe linear combinations of components and matrices describe linear transformations on these components. Both the origin and the components must exist for vector algebra to make any sense.

For instance, say we have a player positioned at the location `(1, 0, 0)` who has a gun one unit to his right at `(2, 0, 0)`. How do we define the position of the gun? Do we just fix it one unit to the right? Do we keep updating the global position?

It turns out that we can give the gun a local position in terms of the player. We can give the gun coordinates of `(1, 0, 0)` and have it be parented to the player. Then some matrix magic happens, and the final coordinate gets translated to global space.

Every Transform in Unity, which every GameObject has, stores matrices that translate between global and local space. It turns out that these matrices are four-dimensional, because a three-dimensional matrix cannot represent translation, but that's beyond our concern, since Unity has built-in accessors: `Transform.localToWorldMatrix` and `Transform.worldToLocalMatrix`.
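The "matrix magic" can be sketched outside Unity. A 3×3 matrix cannot encode translation, so graphics uses 4×4 matrices acting on points written as (x, y, z, 1). Here is a minimal Python sketch using the player-and-gun numbers from above, for a parent that is only translated (real transforms also fold in rotation and scale):

```python
# A 4x4 homogeneous matrix can encode translation, which a 3x3 cannot.
def mat_vec(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

def translation(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# Player at (1, 0, 0); the gun's local position is (1, 0, 0).
local_to_world = translation(1, 0, 0)
gun_local = (1, 0, 0, 1)           # points carry w = 1
gun_world = mat_vec(local_to_world, gun_local)
print(gun_world[:3])               # (2, 0, 0): one unit right of the player
```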

In two dimensions, it is easy to represent a rotation. A single angle describes the rotation of an object.

However, in three dimensions, the problem becomes much more difficult. Now, there are three axes to rotate around. The rotation of an object is much more difficult to represent. Do we represent it as three consecutive rotations? If we do, it becomes very difficult to manipulate one rotation without impacting the others in strange ways.

Unity represents rotations as four-dimensional vectors called quaternions. We're not going to worry too much about how they work, because Unity provides the functionality built in, but they will show up in code every now and then.
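For the curious, the mechanics can be sketched directly: a vector is rotated by sandwiching it between a unit quaternion and its inverse. This is a from-scratch illustration in Python, not Unity's implementation:

```python
import math

# Rotate a vector by a unit quaternion q = (w, x, y, z) via q * v * q^-1.
def q_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(q, v):
    qi = (q[0], -q[1], -q[2], -q[3])       # inverse of a unit quaternion
    return q_mul(q_mul(q, (0.0,) + v), qi)[1:]

# 90-degree rotation about the y axis sends (1, 0, 0) to (0, 0, -1).
half = math.radians(90) / 2
q = (math.cos(half), 0.0, math.sin(half), 0.0)
v = rotate(q, (1.0, 0.0, 0.0))
print(all(abs(a - b) < 1e-9 for a, b in zip(v, (0.0, 0.0, -1.0))))  # True
```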

Raycasting is the method of probing the scene by casting a ray from a point in a direction until the ray intersects something. Raycasting is incredibly useful because it is fast and gives us real-time scene data.

Raycasting has two primary functions in our project. First, a ray cast downward tells us whether or not an object is grounded. For stationary objects, like trees, this lets us spawn them in the air and then snap them down onto the surface afterwards.
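The spawn-then-snap trick can be sketched outside the engine. In this simplified Python illustration the "terrain" is a flat plane at y = 0 standing in for `Physics.Raycast` (the project's planet is of course a curved mesh):

```python
# Simplified version of the tree-placement trick: spawn in the air, cast a
# ray straight down, and snap the object onto whatever the ray hits.
def raycast_down(position):
    x, y, z = position
    if y < 0:
        return None              # already below the surface: no hit
    return (x, 0.0, z)           # hit point on the plane

def snap_to_ground(position):
    hit = raycast_down(position)
    return hit if hit is not None else position

print(snap_to_ground((3.0, 10.0, 4.0)))  # (3.0, 0.0, 4.0)
```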

Raycasting also lets the user click on things.

```
RaycastHit hit2;
Ray ray2 = Camera.main.ScreenPointToRay(Input.mousePosition);
if (Physics.Raycast(ray2, out hit2, 100.0f, objectMask))
    Debug.Log(hit2.transform.tag);
```

There are two things to note. First, the `RaycastHit` object gives us access to the data of the object we've just clicked. This means we can create different interactions for different objects.

Second, `objectMask` is a LayerMask. Objects are assigned to layers manually; by passing a layer mask to a raycast, we restrict the ray to hitting only objects on those layers. In this way, this raycast will not hit the planet, which sits on its own layer: it will only interact with objects.

The planet is represented as a blob of 3D noise. We build it by making a sphere and extruding each vertex by its noise value. Taking a sphere-shaped slice out of 3D noise and extruding it allows us to maintain continuity. We will use this technique often.

```
for (int i = 0; i < vertices.Length; i++) {
    float extrusion = 0.2f * simplex.octaveNoise(0.02f * mesh.vertices[i], 1);
    vertices[i] += (1 + extrusion) * mesh.vertices[i];
}
```

Our first challenge with this project is getting the mini-gravity to work. We want to replicate the "Super Mario Galaxy" effect.

In order to do this, we need two scripts: a gravity attractor and a gravity body. The gravity attractor script is assigned to the planet, our single source of gravity. The gravity body script is assigned to every object we want the attractor to attract.

Here are the two scripts:

**Gravity Body**

```
using UnityEngine;
using System.Collections;

[RequireComponent(typeof(Rigidbody))]
public class GravityBody : MonoBehaviour {
    GravityAttractor planet;
    Rigidbody rb;
    public bool lookUp;
    public float scale = 1.0f;

    void Awake() {
        rb = GetComponent<Rigidbody>();
        planet = GameObject.FindGameObjectWithTag("Planet")
            .GetComponent<GravityAttractor>();
        rb.useGravity = false;
        rb.constraints = RigidbodyConstraints.FreezeRotation;
    }

    void FixedUpdate() {
        planet.Attract(transform, lookUp);
    }
}
```

**Gravity Attractor**

```
using UnityEngine;
using System.Collections;

public class GravityAttractor : MonoBehaviour {
    public float gravity = -10f;

    public void Attract(Transform body, bool lookUp) {
        Vector3 targetDirection = (body.position - transform.position).normalized;
        if (lookUp) {
            body.rotation = Quaternion.FromToRotation(body.up, targetDirection)
                * body.rotation;
        }
        body.GetComponent<Rigidbody>()
            .AddForce(targetDirection * gravity
                * body.GetComponent<GravityBody>().scale);
    }
}
```
```

On the gravity body, we freeze the rotation and pass the transform into the attractor. We freeze the rotation so that we can manipulate it manually. We also pass a bool called `lookUp`. This determines whether or not we want the body's rotation fixed to face out from the planet. In the case of a player, we might want this; in the case of a rock, we might not.

In the attractor, we apply a force toward the planet. If `lookUp` is true, we use a built-in Quaternion function to rotate the body to face out from the planet.
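Stripped of the Unity types, the attractor's math is just a normalized direction and a scaled force. Here is a Python sketch of the same calculation, with `gravity = -10` as in the script (an illustration, not project code):

```python
import math

# Direction from planet center to body, normalized; multiplying by the
# negative gravity constant yields a force pointing toward the planet.
def attract(planet_pos, body_pos, gravity=-10.0, scale=1.0):
    d = [b - p for b, p in zip(body_pos, planet_pos)]
    length = math.sqrt(sum(c * c for c in d))
    direction = [c / length for c in d]
    return [c * gravity * scale for c in direction]

# A body directly "above" the planet on the +y axis is pulled along -y.
print(attract((0, 0, 0), (0, 5, 0))[1])  # -10.0
```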

The camera is parented to a capsule, which represents the player.

The camera rotates horizontally and vertically using the mouse position as an input. We constrain the rotation vertically in order to avoid strange things such as spinning all the way around.

```
transform.Rotate(Vector3.up * Input.GetAxis("Mouse X")
    * mouseSensitivityX * Time.deltaTime);
verticalRotation += Input.GetAxis("Mouse Y")
    * Time.deltaTime * mouseSensitivityY;
verticalRotation = Mathf.Clamp(verticalRotation, -50f, 50f);
cameraT.localEulerAngles = Vector3.left * verticalRotation;
```

`Time.deltaTime` is a built-in Unity property that allows for frame-rate-independent movement. Performance issues may cause the time between frames to vary, and since updates run once per frame, this can cause erratic movement. `Time.deltaTime` gives the time between frames, meaning that transformations can be scaled to absolute time.
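A quick numeric sketch of why this works: scaling a speed by the frame's delta time makes the total motion over a second independent of the frame rate.

```python
# Moving at 5 units/second, scaled by the per-frame delta time:
# the total distance after 1 second is the same at any frame rate.
def simulate(frames_per_second, speed=5.0, seconds=1.0):
    dt = 1.0 / frames_per_second
    position = 0.0
    for _ in range(int(frames_per_second * seconds)):
        position += speed * dt
    return position

print(simulate(30))   # ~5.0 at 30 fps
print(simulate(144))  # ~5.0 at 144 fps
```

Without the `dt` factor, the 144 fps run would travel almost five times farther in the same second.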

Walking is similar, taking in button inputs.

```
Vector3 moveDirection = new Vector3(Input.GetAxis("Horizontal"), 0,
    Input.GetAxis("Vertical")).normalized;
Vector3 targetMoveAmount = moveDirection * walkSpeed;
moveAmount = Vector3.SmoothDamp(moveAmount, targetMoveAmount,
    ref smoothMoveVelocity, .15f);
```

SmoothDamp just smooths out our movement. Notice that we don't need any fancy calculations to account for our rotating player: we can just use the x and z directions, because they are in local space, and the engine takes care of translating these local transformations into global ones.

Similar to movement, all we need to do to create a jump is add a force in the local up direction. We also have a raycast to ensure the player cannot double jump.

```
if (Input.GetButtonDown("Jump")) {
    if (grounded) {
        rb.AddForce(transform.up * jumpForce);
    }
}
...
grounded = false;
Ray ray = new Ray(transform.position, -transform.up);
RaycastHit hit;
if (Physics.Raycast(ray, out hit, 2.6f)) {
    grounded = true;
}
```

I initially tried to make the sky in terms of a skybox. A skybox is essentially a background, and is the thing that is rendered when there is nothing else.

The issue is, as the name suggests, it's shaped like a box. This makes it difficult to achieve continuous-looking clouds on the corners and edges. In addition, even doing so is a pain because six different textures need to be drawn and assigned...

```
int width = 600;
float boxsize = 10f;
Texture2D fronttexture = new Texture2D(width, width, TextureFormat.ARGB32, false);
for (int i = 0; i < width; i++) {
    for (int j = 0; j < width; j++) {
        float noiseval = simplex.noise(
            boxsize * ((float)i) / ((float)width),
            boxsize * ((float)j) / ((float)width),
            -boxsize / 2);
        fronttexture.SetPixel(i, j,
            new Color(noiseval, noiseval, noiseval));
    }
}
fronttexture.Apply();
skybox.SetTexture("_FrontTex", fronttexture);
/* ... 5 more times ... */
```

So, I employed a different method.

### Backface Culling

The new sky is made in terms of an inverted sphere. However, we can't simply use a normal sphere mesh.

For optimization, every mesh employs what's called backface culling: only the front side of each face is rendered, while the back is skipped. Which side is the front is determined by the winding order of the triangle's vertices; the side from which they appear in clockwise order is the side that gets rendered.
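The connection between winding order and facing can be sketched with a cross product: listing a triangle's vertices in the opposite order flips its computed face normal, which is exactly what mesh inversion exploits. In plain Python:

```python
# The face normal of triangle (a, b, c) is cross(b - a, c - a).
# Reversing the vertex order flips the normal, and with it the face.
def sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def face_normal(a, b, c):
    return cross(sub(b, a), sub(c, a))

a, b, c = (0, 0, 0), (1, 0, 0), (0, 1, 0)
print(face_normal(a, b, c))  # (0, 0, 1)
print(face_normal(a, c, b))  # (0, 0, -1): reversed winding, flipped face
```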

So, I needed a way to invert the sphere mesh. Here's a script that takes any mesh and inverts the faces:

```
int subMeshes = m.subMeshCount;
for (int i = 0; i < subMeshes; i++) {
    if (InvertFaces) {
        var type = m.GetTopology(i);
        var indices = m.GetIndices(i);
        if (type == MeshTopology.Quads) {
            for (int n = 0; n < indices.Length; n += 4) {
                int tmp = indices[n];
                indices[n] = indices[n + 3];
                indices[n + 3] = tmp;
                tmp = indices[n + 1];
                indices[n + 1] = indices[n + 2];
                indices[n + 2] = tmp;
            }
        }
        else if (type == MeshTopology.Triangles) {
            for (int n = 0; n < indices.Length; n += 3) {
                int tmp = indices[n];
                indices[n] = indices[n + 1];
                indices[n + 1] = tmp;
            }
        }
        m.SetIndices(indices, type, i);
    }
}
if (InvertNormals) {
    var normals = m.normals;
    for (int n = 0; n < normals.Length; n++)
        normals[n] = -normals[n];
    m.normals = normals;
}
```

Each triangle has its order flipped, and each normal is reversed. This accomplishes our desired effect.

The background sky is visible because the outward-facing triangles are culled, so the sphere is transparent from the outside.

The sky texture is generated from three two-dimensional slices taken at slight increments through 3D noise. This gives us three very similar noise layers, which we compound to achieve our desired effect.

In script, this looks like this:

```
for (int i = 0; i < resolution; i++) {
    for (int j = 0; j < resolution; j++) {
        float sum = 0.0f;
        for (var k = 0; k < 3; k++) {
            float noiseval = simplex.noise(
                0.01f * (float)i, 0.01f * (float)j,
                k * 0.2f) + 0.5f;
            if (noiseval + Random.Range(-0.01f, 0.01f) > 0.9f) {
                sum += 0.3f;
            }
        }
        Color finalcolor = new Color(
            GameController.skyColor.r + sum,
            GameController.skyColor.g + sum,
            GameController.skyColor.b + sum);
        skytexture.SetPixel(i, j, finalcolor);
    }
}
```

However, if you look closely at the top and bottom poles on the sample image, you'll notice some artifacts. This is due to the issue with UV mapping I mentioned earlier.

In order to achieve a fully smooth texture, we should take each point in the texture, pull it back to its equivalent point in world space, and feed that coordinate into the noise function. Unfortunately, this is difficult, and Unity provides no built-in function for doing so. Part of the reason is that a single point in a UV map can map to multiple points on a mesh.

Even for spheres, which have known geometry, this can be difficult, in part because Unity's sphere does not use a strictly spherical mapping, so we cannot use simple geometry. In lieu of a fix, we simply ignore the artifacts.
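To see why the poles are troublesome in the first place, consider the standard spherical mapping from a point on the unit sphere to (u, v). This is a simplified illustration; as noted above, Unity's sphere does not use exactly this mapping:

```python
import math

# Standard spherical mapping: u from the angle around the y axis,
# v from the height. At the poles every u collapses to the same point,
# so the texture is pinched and stretched there.
def sphere_to_uv(x, y, z):
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)
    v = 0.5 + math.asin(y) / math.pi
    return (u, v)

# Two points a hair's width apart near the north pole land at
# wildly different u values: the whole top row of the texture
# squeezes into one spot on the mesh.
print(sphere_to_uv(0.001, 0.999, 0.0))   # u near 0.5
print(sphere_to_uv(-0.001, 0.999, 0.0))  # u near 1.0
```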

I wanted to model my colors off of "natural" terrain. If I were to choose natural colors, I'd have three colors: blue for the sky, brown for tree bark, and green for leaves and terrain.

This closely follows a split-complementary color scheme. When I generate the color schemes programmatically, I follow these hue relationships.
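As a sketch, generating a split-complementary scheme programmatically amounts to offsetting a base hue toward either side of its complement. The ±150° offsets below are one common convention, not necessarily the project's exact values:

```python
# A split-complementary scheme: a base hue plus the two hues flanking
# its complement (base + 180). Offsets of +/-30 around the complement,
# i.e. +150 and +210 from the base, are a common choice.
def split_complementary(base_hue):
    return (base_hue,
            (base_hue + 150) % 360,
            (base_hue + 210) % 360)

# From a sky blue (~210 degrees), the two offsets bracket the
# complement at 30 degrees (orange/brown territory).
print(split_complementary(210))  # (210, 0, 60)
```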

We have two types of trees:

This tree is modelled as a sphere on a cylinder. The sphere is extruded in the same way as the planet, except with two octaves. The cylinder is also extruded through 3D noise, but only along two axes instead of three.

**Trunk Controller**

```
int i = 0;
while (i < vertices.Length) {
    float extrusion = 1.0f * simplex.noise(vertices[i].x, vertices[i].y, vertices[i].z);
    vertices[i] += (1 + extrusion) * (new Vector3(vertices[i].x, 0, vertices[i].z));
    i++;
}
```

**Leaves Controller**

```
int i = 0;
while (i < vertices.Length) {
    float extrusion = 0.7f * simplex.octaveNoise(
        new Vector3(vertices[i].x, vertices[i].y, vertices[i].z), 2);
    vertices[i] += (1 + extrusion) * vertices[i];
    i++;
}
```

The other type of tree is modelled as a cylinder with three cones on top.

The trunk is identical. The leaves are extruded the same way as the trunk.

**Leaves Controller 2**

```
int i = 0;
while (i < vertices.Length) {
    float extrusion = 2f * simplex.noise(vertices[i].x, vertices[i].y, vertices[i].z);
    vertices[i] += (1 + extrusion) * (new Vector3(vertices[i].x, 0, vertices[i].z));
    i++;
}
```

With more and more procedural meshes, we started to get issues where scripts were executing in the wrong order: for instance, points were being extruded before they were generated. This can be fixed in Unity's Script Execution Order settings.

This is another one of the issues I was referring to in the concepts section. At one point, I tried to add a texture to the procedural meshes. Unfortunately, it caused some very strange artifacts, where the same texture is rendered differently from different angles.

This is a product of faulty UV mapping. The procedural mesh mapping causes the topology of the UV to get messed up. I'm not particularly sure how to fix this, as it would require generating a dynamic topology, which is a bit beyond my mathematical ability. So, I've simply left the textures off.

In order to resolve the UV mapping issue, we use a number of shaders. A shader is essentially a function that takes in mesh and scene data and outputs pixel data.

For instance, the shader we use on the sky is an unlit shader. An unlit shader just takes the color data of the mesh and outputs it directly without any manipulation.

Every shader is made up of two parts. The vertex shader takes vertex data and outputs per-vertex information, which is interpolated across each triangle. The fragment shader takes this interpolated information and outputs the final color of each pixel. In the case of our unlit shader, the vertex shader might pass through a single color and the fragment shader would leave it untouched.

The full graphics rendering pipeline is actually a lot more complicated than this, so we're not gonna worry too much about it.

On the other hand, the trees use a toon shader. A toon shader calculates light based on an inverse-square model. (Like universal gravitation, as a point gets farther from a light source, the luminance decreases proportionally to the inverse of the distance squared.) Then it sets a threshold value and outputs one of two colors: a darker color for any lighting below the threshold and a lighter one above.

Our toon shader technically does not do this. Instead, it uses a specially designed texture to give a matte, flat feeling. Some toon shaders also draw an outline, using an edge-detection algorithm that works off the normals. Here you can see the toon outline on a patch of grass.

Since shaders are pretty complicated, I won't leave any code here. The Unity Asset store and the internet have a few pre-written shaders for reference.

The rocks are modelled as a sphere with some noise extrusion, as before. You're starting to get the point, so I'm not going to bother copying code.

Notice the flat, triangle-shaded aesthetic. As it turns out, this isn't actually something we can do with shaders. We have to process the mesh.

```
Vector3[] oldVerts = mesh.vertices;
int[] triangles = mesh.triangles;
Vector3[] vertices = new Vector3[triangles.Length];
for (int i = 0; i < triangles.Length; i++) {
    vertices[i] = oldVerts[triangles[i]];
    triangles[i] = i;
}
mesh.vertices = vertices;
mesh.triangles = triangles;
mesh.RecalculateNormals();
```

Essentially, in many meshes, vertices are shared between triangles, which causes their normals to be averaged and the lighting to look smooth. This script removes that sharing and gives us our faceted, geometric look.
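In miniature, the "unsharing" looks like this (a Python sketch on a two-triangle quad, not the project's C# code):

```python
# Two triangles sharing vertices 1 and 2 (a quad). After unsharing,
# every triangle owns three private vertices, so normals are never
# averaged across a shared corner.
old_verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
triangles = [0, 2, 1, 2, 3, 1]

vertices = [old_verts[t] for t in triangles]   # duplicate per use
triangles = list(range(len(vertices)))         # each index used exactly once

# 4 shared vertices become 6 private ones.
print(len(old_verts), len(vertices))
print(vertices[1] == vertices[3])  # True: same position, separate vertex
```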

You'll notice we can kick the rock around like a soccer ball. We can also click on it. The clicking is done as a raycast, as shown in the concepts section.

To add the force, we once more utilize local coordinates.

```
public void hop(Vector3 direction) {
    rb.AddForce(direction.normalized * 1500f);
    rb.angularVelocity = Random.rotation.eulerAngles;
}
```

Since this function is public, we can access it from the player script when the player clicks the rock.

As mentioned, we also have a physics material that adds an element of bounciness.

The grass is modelled as a square root function, using the top of the blade as the zero point.

By modelling it as a function of the form `y = a*sqrt(bx)`, we can introduce some further variation into the curve.
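Numerically, the displacement is zero at the top of the blade and grows as the square root of the distance below it. A quick Python sketch, with made-up constants `a` and `b`:

```python
import math

# Displacement of a grass-blade vertex: a * sqrt(b * (top - y)).
# a and b are made-up constants here; varying them varies the curve.
def bend(y, top=1.0, a=0.5, b=2.0):
    return a * math.sqrt(b * (top - y))

for y in (1.0, 0.75, 0.5, 0.0):
    print(round(bend(y), 3))  # displacement grows toward the base
```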

The mesh is a cylinder with many vertical segments, which sample the function's input along the blade's height. We feed each height into our displacement function to get the grass blade's curve. Programmatically, we must first store the top position of the blade before we can do this.

```
while (i < vertices.Length) {
    vertices[i] += direction * factor1
        * Mathf.Pow(factor2 * (top - vertices[i].y), 0.5f);
    i++;
}
```

We generate the grass in terms of patches, adding some random displacement around each point.

When we click on or walk through the grass, it shakes a little bit. We use a simple animation. What we want to do is pull the grass to one side, then pull it back.

One thing we could do is every frame for 30 frames, at each point, add a directional displacement proportional to the distance from the bottom to the point. So, if a point at the top is displaced by 1 unit per frame, and a point at the bottom is displaced by 0 units per frame, one in the middle is displaced by 0.5 units per frame.

```
for (int i = 0; i < 30; i++) {
    for (var j = 0; j < vertices.Length; j++) {
        vertices[j] += 0.001f * i * direction * (vertices[j].y - bottom);
    }
    mesh.vertices = vertices;
    yield return null;
}
```

We use a concept here called a coroutine. A coroutine essentially suspends its execution until the next frame each time `yield return` is called.

We also run the same function backwards to move the grass back to its original position.

The issue is that this type of animation does not look natural. When grass sways in the wind, it doesn't just instantly stop and reverse. It eases both into and out of its animation. To explain this, we need to introduce the concept of an animation curve. The animation curve we just made looks like this:

An animation curve shows the progress of the animation versus time. In our case, we progress through the animation at the same rate we progress through time.

Say we'd like to ease into our animation, or start off slower and then accelerate. Our curve would look like this:

Similarly, if we want to ease out:

And if we want to ease both:

Often, these curves are made manually. In our case, to save resources, we program them directly. We'd like to ease both ways; however, such a function is difficult to program by hand, often taking the form of a sigmoid or arctangent. So we cheat a little bit and essentially take two parabolas and stick them together.

In code, this looks like:

```
for (int i = 0; i < 30; i++) {
    for (var j = 0; j < vertices.Length; j++) {
        vertices[j] += 0.001f * i * i * direction * (vertices[j].y - bottom);
    }
    mesh.vertices = vertices;
    yield return null;
}
for (int i = 29; i >= 0; i--) {
    for (var j = 0; j < vertices.Length; j++) {
        vertices[j] -= 0.001f * i * i * direction * (vertices[j].y - bottom);
    }
    mesh.vertices = vertices;
    yield return null;
}
```

The `i*i` term is the parabola.
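Comparing the per-frame increments of the linear and quadratic versions shows the easing directly: the `i*i` steps start near zero and accelerate.

```python
# Per-frame displacement steps for the linear (i) and eased (i*i) versions.
# The quadratic steps start tiny and speed up: the motion eases in, and the
# mirrored second loop plays the same curve in reverse to ease back out.
linear = [0.001 * i for i in range(30)]
eased = [0.001 * i * i for i in range(30)]

print(linear[:3])  # steps grow steadily
print(eased[:3])   # steps start near zero
print(eased[-1] > linear[-1])  # True: the eased version ends faster
```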

The fruit is made exactly the same way as the leaves and planet, with a blob of noise. The only interesting thing about the fruit is the ability to pick it up.

Essentially, all we do is make the fruit kinematic when we click on it and parent it to the camera. This way, it has a fixed position relative to the camera, and we can carry it around. When we click again, we undo this.

```
if (!carrying) {
    attraction.enabled = false;
    hitrb.isKinematic = true;
    hit2.transform.parent = transform;
    carrying = true;
}
...
if (carrying) {
    attraction.enabled = true;
    hitrb.isKinematic = false;
    hit2.transform.parent = null;
    carrying = false;
    hitrb.AddForce(transform.forward * 10f);
}
```

We make the fruit spawn with a 50% chance when a tree is clicked. The tree also plays a little jiggle animation, which is nearly identical to the grass, except we modify scale.

```
for (int i = 0; i < 8; i++) {
    foreach (Transform currentleaves in leafchildren) {
        currentleaves.transform.localScale +=
            0.1f * i * i * new Vector3(0.02f, 0.02f, 0.02f);
    }
    yield return null;
}
for (int i = 7; i >= 0; i--) {
    foreach (Transform currentleaves in leafchildren) {
        currentleaves.transform.localScale -=
            0.1f * i * i * new Vector3(0.02f, 0.02f, 0.02f);
    }
    yield return null;
}
```

There are two lights in the scene: a sun, and "moonlight." It's a bit of a misnomer to say that we have a moon light, since we really want the absence of light, but this light allows us to give the night side of the planet some shading. The sun has a lens flare to appear like a sun, while the moon light has a sphere to represent a moon.

There is also an ambient light setting. In order to accomplish our desired effect of a convincing night side and day side, we set the ambient light to slightly darker than the terrain color, representing our base lighting. We also set the moon light to this color.

In code:

```
Color ambientColor = Color.HSVToRGB (baseColor.x, baseColor.y, 0.3f);
RenderSettings.ambientLight = ambientColor;
GameObject moonlight = GameObject.FindGameObjectWithTag ("Moonlight");
moonlight.GetComponent<Light> ().color = ambientColor;
```

Overall, this has been a fun project. It showcases the ability of a relatively small set of techniques, combined with elements of interactivity, to generate some really cool things. There are still many, many techniques that I learned about that did not make it into the final project, including fractals and cellular automata.

If I do additional work on this, I'd like to:

- Implement scene types: e.g. presets for deserts, grasslands...
- Allow moving between planets
- Use a wider range of techniques
- Gain total control over the graphics by understanding shaders and UVs properly
- Add an invisible spring joint to fruit when you pick it up so that it fixes to the center of the camera
- Fix some bugs:
  - Rocks sometimes self-destruct
  - The player sometimes falls through the world