Use of Analytics in Unity

For the fourth project of Studio 2, we were introduced to Unity Analytics. Unity Analytics is a data platform that tracks certain aspects of a player and their gameplay. It doesn't track anything automatically, though; you have to tell it what to record.

Unity Analytics does have some limitations, though. It can only accept 100 custom events per hour, per user. A custom event can track up to ten different parameters, so long as the total character count is below 500, and each of those parameters must be one of three data types: Booleans, Strings or Numbers (ints, floats, etc.).

Acclim definitely could have benefited from using analytics. At first, I thought that I wouldn't need analytics and could just rely on recorded video for playtesting, because everything I needed to know could be seen there. However, I didn't account for how tedious it is to sift through video files.

If I had implemented analytics, I would have set up a timer that runs in the background and given each interactable object type a label. Then, when a player interacts with an object, the time (float) at which they interacted and the object's label (string) would be sent to Unity Analytics.

This means my custom event would look like this:

AnalyticsCode.png
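Since the screenshot is the only record of that code, here is a minimal sketch of what such a custom event could look like using the legacy UnityEngine.Analytics API; the class, event and parameter names are placeholders, not the project's actual code:

```csharp
// Minimal sketch of the custom event described above (legacy Unity Analytics).
// "objectInteracted", "time" and "object" are placeholder names.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Analytics;

public class InteractionTracker : MonoBehaviour
{
    private float timer;  // background timer, running since the scene started

    private void Update()
    {
        timer += Time.deltaTime;
    }

    // Called by an interactable object, passing its label.
    public void ReportInteraction(string objectLabel)
    {
        Analytics.CustomEvent("objectInteracted", new Dictionary<string, object>
        {
            { "time", timer },         // float: when the player interacted
            { "object", objectLabel }  // string: which object it was
        });
    }
}
```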

With this data, I would be able to see whether players are interacting with all of the objects that I wanted them to, and how long they are taking to interact with them. That would let me tell if an object is too small or too well hidden, or it might even show that players aren't interacting with anything but the letters, which would mean there needs to be more incentive to interact with the objects around the apartment.

If I had used analytics, I would have realised sooner that people don't check the wardrobe or the fridge after the first day.

Improvements Through Playtesting

Playtesting is always a valuable tool for making a game better. People who have never seen your game before will give you insight that you wouldn’t find anywhere else.

Case in point: my playtesting for the fourth project of Studio 2. One problem a player pointed out was how small the window for interacting with an object is. It hadn't occurred to me that, with a small window and a gamepad, it is really difficult to line the cursor up correctly. For example, the only way to interact with the light switch in the apartment was to move the dot on the screen over the red circle in the image below.

InteractDetect.png

After the player suggested that I make the area bigger, it occurred to me that there was no reason for the area to be that small. It was just something I hadn't thought about and had become accustomed to while developing the game. From then on, I just couldn't stop noticing how some people were struggling to interact with objects.

Because of that feedback, I have adjusted the size of the interactable area to be as large as the object itself. This fix should make it easier to interact with the smaller objects in the apartment.
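As a rough sketch of how that fix could look, assuming each interactable carries a SpriteRenderer and a BoxCollider2D (the post doesn't show the actual setup), the collider can be fitted to the sprite like this:

```csharp
// Sketch: size a BoxCollider2D to match its sprite so the interactable
// area is as large as the object itself.
using UnityEngine;

[RequireComponent(typeof(SpriteRenderer), typeof(BoxCollider2D))]
public class FitColliderToSprite : MonoBehaviour
{
    private void Awake()
    {
        Sprite sprite = GetComponent<SpriteRenderer>().sprite;
        BoxCollider2D box = GetComponent<BoxCollider2D>();
        box.size = sprite.bounds.size;      // sprite's local-space size
        box.offset = sprite.bounds.center;  // centre the collider on the sprite
    }
}
```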

Storytelling in Acclim

The story in Acclim, the fourth project for Studio 2, is going to be told through a series of letters that the player receives. Each day, the player will receive new letters from the player character's parents, their boss's secretary, their neighbours and an automated bot. This is the more traditional side of storytelling: the written word.

This, however, presents a problem, because the letters are in gibberish and the player will be unable to read them. They are in gibberish because we want to simulate the feeling of adapting to a new culture, and that includes learning a new language, so the characters who are native to this culture will write to you in their own tongue. While this will interrupt the flow of the written storytelling, I believe it will also add to the environmental storytelling by making the foreign language more visible and the cultural divide more apparent.

To make sure the player stays engaged while the letters are unreadable, we included letters from the player character's parents that are written in English. These let the player understand their character and their character's motivations even when they can't read the other letters. Once the player has progressed further into the game and the letters have started to become translated, they can read them with that understanding of the character already in place.

The letter on the left is in gibberish; the letter on the right is from the parents.

Letter1.png

Letter2.png

As the player progresses, the apartment that they are in becomes more populated with items such as food, clothes and furniture. This is to reinforce the fact that the player is not seeing everything that is happening; what they are looking at are snapshots of this character's life. This is environmental storytelling: it tells a story without words and without sound.

An example of such storytelling can be found in Portal, as well as in many other games.

PortalLvl.jpg

Throughout the game, the player travels through white rooms and white corridors, with little to no dirt. But later in the game, the player can find a little nook with words and pictures scrawled on the walls.

Ratman_Den.jpg

These words and pictures tell a story all on their own. The nook is dirty, and GLaDOS mentions not being able to find you. The scene is set up to show you a side of the game that you wouldn't have seen otherwise and that would seem out of place if mentioned anywhere else.

I try to use environmental storytelling in Acclim as well. It isn't as significant as Portal's, but Acclim's environmental storytelling is used to show that the player character is starting to settle into their new home. The fridge/freezer becomes stocked and the wardrobe fills up. More objects get placed around the house to show that the player character is making the apartment their own.

The top image is the wardrobe and the fridge/freezer on the first day and the bottom image is on the last day.

Room1.png

Room2.png

Creating a Random Level Generator

For the first project of Studio 1, I created a random level generation tool. This tool relies on a collection of 'tiles' and spawns three tiles perpendicular to the direction that the player is moving. It spawns tiles so that there are no gaps, regardless of which direction you move.

lvlgen.gif

This tool works by using a separate collider for each cardinal direction and a collider above, as can be seen in the image below.

DirectionColliders.png

Each of the four direction colliders is there to run the following function when the player enters it:

TiggerCode.png

This function moves the Spawner, an empty GameObject in the middle of the colliders, to the next row of tiles. It then moves the Generator, which is the parent of all of the colliders and the Spawner, to the tile that the player has just moved to. This leaves the Spawner in the perfect place to put a tile.

The script then instantiates three tiles that are randomly taken from the TileDatabase, which I will cover later. It places one tile in the same position as the Spawner and then one on either side of the first tile. This makes a row of tiles that perfectly lines up with the previous tiles.

After spawning the tiles, the function then moves the Spawner back to the middle of the Generator and moves the Remover (the collider that sits above the others; also a child of the Generator) to the row of tiles behind the player.

Remover1.png
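Putting those steps together, a rough sketch of the trigger function might look like this; the tile size, the Player tag and the exact vector maths are assumptions on my part, not the code from the screenshot:

```csharp
// Sketch of one direction collider's trigger. Spawner, Generator, Remover
// and TileDatabase follow the post; everything else is assumed.
using UnityEngine;

public class DirectionCollider : MonoBehaviour
{
    public Transform spawner;        // empty in the middle of the colliders
    public Transform generator;      // parent of the colliders and the Spawner
    public Transform remover;        // collider that clears old tiles
    public TileDatabase tileDatabase;
    public Vector3 direction;        // this collider's cardinal direction
    public float tileSize = 10f;     // assumed width of one tile

    private void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;

        // Step the Generator onto the tile the player just moved to,
        // then push the Spawner out to the next empty row.
        generator.position += direction * tileSize;
        spawner.position = generator.position + direction * tileSize;

        // Spawn three random tiles: one at the Spawner, one on either side.
        Vector3 side = Vector3.Cross(Vector3.up, direction).normalized;
        for (int i = -1; i <= 1; i++)
        {
            GameObject prefab = tileDatabase.tiles[Random.Range(0, tileDatabase.tiles.Count)];
            Instantiate(prefab, spawner.position + side * tileSize * i, Quaternion.identity);
        }

        // Re-centre the Spawner and park the Remover over the row behind.
        spawner.localPosition = Vector3.zero;
        remover.position = generator.position - direction * tileSize;
    }
}
```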

The Remover then runs the following function:

RemoverCode.png

Since all the tiles are tagged as Land, the Remover destroys the tiles it touches.
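The gist of that function, as a sketch (assuming the Remover uses a trigger collider):

```csharp
// Destroy any tile the Remover touches, identified by the "Land" tag.
private void OnTriggerEnter(Collider other)
{
    if (other.CompareTag("Land"))
        Destroy(other.gameObject);
}
```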

Remover2.png

Then, the Remover is moved above the player so that it doesn't accidentally touch any other tiles.

Remover3.png

Now, the TileDatabase. This is a script that holds references to all the tiles and puts them into a list (not the best way to do this; it would be easier to just have the list and assign the tiles in the Inspector).

TileDb2.png

TileDb1.png

This script links to all of the tiles in the game folder.

Tiles.png

This is where the Spawner pulls its three tiles from.
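A sketch of that database, with placeholder tile names standing in for the real ones:

```csharp
// Sketch of the TileDatabase. As noted above, a simpler version would
// expose only the list and fill it in the Inspector.
using System.Collections.Generic;
using UnityEngine;

public class TileDatabase : MonoBehaviour
{
    public GameObject grassTile;  // placeholder names; the real project
    public GameObject sandTile;   // links to its own tiles here
    public GameObject waterTile;

    public List<GameObject> tiles = new List<GameObject>();

    private void Awake()
    {
        tiles.Add(grassTile);
        tiles.Add(sandTile);
        tiles.Add(waterTile);
    }
}
```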


Coding the Player

Project 1 was a solo project, so it was up to me to make or source everything. The most involved script I had to write was my Player script. It controls player movement, animation, attacking and more. I am going to go through exactly what my Player script does, as well as the scripts it depends on.

First off, let’s go through the references and variables.

PlayerCode1.png

Of my references, three are assigned in the Start() function, one is assigned in the Inspector and another is assigned later, in the Attack() function. Of the three assigned in Start(): the Rigidbody2D is used to move the player, the Animator changes which animations are applied to the player, and the AudioManager is used to play audio. The bullet GameObject is used as a base in the Attack() function.

Next are the variables playerSpeed and playerHealth. The variable playerSpeed is a factor in how fast the player moves and playerHealth determines how much damage the player can take.

 

Now on to the functions that are called in FixedUpdate(). The first function called is Movement().

PlayerCode2.png

This function is a list of if, else if and else statements. If one of the first four statements is true, the player moves in a direction dependent upon which key is pressed: W, A, S or D moves the character up, left, down or right respectively. Each statement also sets whether the bool 'Moving' is true.

PlayerAnimation.png

If the 'Moving' bool is true, the character transitions from whatever state they are currently in to the 'PlayerWalk' state, in which the character sprite plays a looping two-frame walking animation.
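A sketch of Movement() as described, assuming rb and animator are the Rigidbody2D and Animator references assigned in Start() (the exact input calls are my assumption):

```csharp
// Sketch of the if / else if / else movement chain described above.
private void Movement()
{
    if (Input.GetKey(KeyCode.W))
    {
        rb.velocity = Vector2.up * playerSpeed;
        animator.SetBool("Moving", true);
    }
    else if (Input.GetKey(KeyCode.A))
    {
        rb.velocity = Vector2.left * playerSpeed;
        animator.SetBool("Moving", true);
    }
    else if (Input.GetKey(KeyCode.S))
    {
        rb.velocity = Vector2.down * playerSpeed;
        animator.SetBool("Moving", true);
    }
    else if (Input.GetKey(KeyCode.D))
    {
        rb.velocity = Vector2.right * playerSpeed;
        animator.SetBool("Moving", true);
    }
    else
    {
        rb.velocity = Vector2.zero;
        animator.SetBool("Moving", false);  // idle: stop the walk animation
    }
}
```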

 

The next function is the Attack() function.

PlayerCode3.png

This function only runs when the player clicks the left mouse button. When that happens, a Vector3 is made containing the X, Y and Z coordinates of the mouse cursor's current position. The Z coordinate is then reduced to 0 (since this game is 2D). After that, the Vector3 is converted from screen coordinates to world coordinates.

Now, a copy of the bullet that was previously referenced is spawned and assigned to the bulletPrefab reference. The bulletPrefab is then rotated to face the mouse cursor, and force is applied to the bulletPrefab's Rigidbody, making it travel toward the location that the player clicked.

As well as spawning and firing the bulletPrefab, the function plays a sound effect. You can read more about that in my pitch shifting audio blog.

Finally, after a two second timer, the bulletPrefab destroys itself.
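Putting those steps together, a sketch of Attack() could look like this; bullet, bulletPrefab and audioManager are the references described earlier, while bulletForce and the exact rotation maths are my assumptions:

```csharp
// Sketch of Attack(): spawn a bullet, aim it at the cursor, fire, play audio.
private void Attack()
{
    if (Input.GetMouseButtonDown(0))
    {
        // Mouse position with Z reduced to 0, converted to world coordinates.
        Vector3 target = Input.mousePosition;
        target.z = 0f;
        target = Camera.main.ScreenToWorldPoint(target);
        target.z = 0f;

        // Spawn a copy of the bullet and rotate it toward the cursor.
        bulletPrefab = Instantiate(bullet, transform.position, Quaternion.identity);
        Vector3 dir = (target - transform.position).normalized;
        bulletPrefab.transform.up = dir;

        // Apply force so it travels toward the click, and play the shot sound.
        bulletPrefab.GetComponent<Rigidbody2D>().AddForce(dir * bulletForce);
        audioManager.audio.PlayOneShot(audioManager.bullet);

        // The bullet destroys itself after two seconds.
        Destroy(bulletPrefab, 2f);
    }
}
```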

 

The bullet has its own script as well. It detects when it collides with something and, if appropriate, tells that object to take damage and then plays audio.

BulletScript.png

The Bullet script references and assigns the AudioManager, since it plays audio when it collides with certain objects. There is also a variable that determines how much damage the bullet does.

The OnCollisionEnter2D() function retrieves the tag of the object that the bullet collides with. If the bullet hits an object tagged either Enemy or Enviro, it tells that object to take damage, plays a sound effect and then destroys itself.
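A sketch of that Bullet script; the SendMessage call and the FindObjectOfType lookup are my assumptions about how the damage message and the AudioManager reference could work:

```csharp
// Sketch of the Bullet script described above.
using UnityEngine;

public class Bullet : MonoBehaviour
{
    public int damage = 1;  // how much damage the bullet does
    private AudioManager audioManager;

    private void Start()
    {
        audioManager = FindObjectOfType<AudioManager>();
    }

    private void OnCollisionEnter2D(Collision2D collision)
    {
        string tag = collision.gameObject.tag;
        if (tag == "Enemy" || tag == "Enviro")
        {
            // Tell the hit object to take damage, play a sound, then vanish.
            collision.gameObject.SendMessage("ApplyDamage", damage);
            audioManager.audio.PlayOneShot(audioManager.bulletHit);
            Destroy(gameObject);
        }
    }
}
```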

 

Now, back to the Player script.

PlayerCode4.png

Both the ApplyDamage() function and the Death() function appear in every script that can take damage and be destroyed. When a variable named damage is sent to ApplyDamage(), it reduces playerHealth by an amount equal to damage. Then, if playerHealth is less than or equal to 0, the GameObject that the script is attached to is destroyed.
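As a sketch, those two functions could look like this:

```csharp
// Sketch of ApplyDamage() and Death() as described above.
public void ApplyDamage(int damage)
{
    playerHealth -= damage;  // reduce health by the damage received
    if (playerHealth <= 0)
        Death();
}

private void Death()
{
    Destroy(gameObject);     // remove this GameObject from the scene
}
```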

Pitch Shifting Audio

I have added further to Project 1. I decided to make three more sound effects. There is now a sound effect for an enemy getting hit with a bullet, firing a bullet and destroying the environment.

As usual, I opened Audacity and recorded three different tracks. If you would like more info on how I did this, you can check out my previous audio blog.

AudacityBulletRip.png

I then exported all of the tracks and named them Bullet, BulletHit and Rip. Then I moved the exported files into my Unity project.

The first thing to do was to add more public references to the AudioManager script. So now my script looks like this:

AudioManagerCode.png

Then I can just drag and drop the audio files that I have into the appropriate spot in the Inspector.

AudioInspector.png

Now for the interesting part. To make the audio change pitch when it plays, all we need is one line of code.

PlayerCodePitchAudio.png

I put that one line of code in the Attack() function in the Player script, which might seem familiar if you have seen some of my other Project 1 blogs.

It is line 62 that does all the work. When the player shoots, the bullet travels toward the mouse cursor and the 'bullet' sound effect plays at a pitch that ranges between 0.6 and 1.4 (with 1 being the original pitch). The pitch changes every time the player fires a bullet.
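The gist of those two lines, as a sketch:

```csharp
// Line 62: randomise the pitch (1 is the original pitch).
audioManager.audio.pitch = Random.Range(0.6f, 1.4f);
// Line 63: play the 'bullet' sound effect at that pitch.
audioManager.audio.PlayOneShot(audioManager.bullet);
```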

However, this setup has a flaw. Since all audio plays through audioManager.audio, every sound effect is going to play at the same pitch as the last bullet fired. There are two ways to solve this, and one takes much longer and requires more work than the other.

The more difficult option is to add an extra line of code to every script that plays audio. In this case, I have five other audio clips whose pitch I don't want to change. This means that every time I use audioManager.audio.PlayOneShot() (the same function as on line 63 in the code above), I need to add the following code on the line above it:

AudioHardFix.png

This line changes the pitch back to 1, meaning that the next line, the one that plays the audio, will play at the normal pitch.

The easy fix is to add an extra AudioSource to the AudioManager GameObject. After doing so, the AudioManager code will also need to change to the following:

AudioManagerEasyFix.png

The Start() function that found the AudioSource will no longer work, since there are now two AudioSources. So instead, we remove the Start() function and add another public reference for the second AudioSource. Now we can just drag and drop both AudioSources in the Inspector.

Finally, lines 62 and 63 in the Player script change to:

PlayerAudioEasyFix.png

Instead of playing the audio through audioManager.audio, it now plays through the second AudioSource, audioManager.bulletHit. Also, only the pitch of the second AudioSource is changed, which means that all audio played through the first one will not have a change in pitch.
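A sketch of the easy fix, with the field names following the post and the clip references as placeholders:

```csharp
// The AudioManager now exposes two AudioSources, assigned in the Inspector.
using UnityEngine;

public class AudioManager : MonoBehaviour
{
    public new AudioSource audio;  // normal-pitch source ('new' hides the obsolete Component.audio)
    public AudioSource bulletHit;  // second source, reserved for pitch-shifted bullet audio

    public AudioClip bullet;       // clip references, dragged in via the Inspector
    public AudioClip rip;
}

// In the Player's Attack(), lines 62 and 63 then become:
//     audioManager.bulletHit.pitch = Random.Range(0.6f, 1.4f);
//     audioManager.bulletHit.PlayOneShot(audioManager.bullet);
```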

Win/Lose Scripting – Scene Change With a Key Press

For Project 3, I implemented a 'Lose Screen' and a 'Win Screen'. At first, each was just a Canvas with a white Panel spread across it, with some text and a button that said 'Restart' in the centre. This really hindered gameplay, however, since gameplay was limited to a joystick and a single button. The on-screen button meant that players had to move a hand from either the joystick or the single button to the mouse, click the Restart button, and then quickly move their hand back to where it was previously.

I decided to fix this problem by writing a script that reloads the current level (if you lose) or loads the next level (if you win) when you press the Space Bar, the single button already used for gameplay. This involved writing three scripts: one that detects whether the player has won or lost, one that reloads the current scene and one that loads the next scene. The first script written was HitDetection.cs, which contains the following:

HitDetectionScript.png

In this script, lines 7 and 8 are public references to the Win and Lose Canvases. This allows you to drag and drop the appropriate Canvas in the Inspector. The next line is a public reference to the PlayerController script (which contains all the code for how the player moves their character); however, this one is set in the Start() function instead of the Inspector.

The Start() function disables both the Win and Lose Canvases, then finds the PlayerController script attached to the GameObject tagged as PlayerBody. Disabling both Canvases is a crucial step because it stops them from appearing as soon as the game starts.

Finally, the OnCollisionEnter2D() function. This function detects whether the object that this script is attached to has collided with a GameObject tagged either Ground or EndZone. If it collides with the Ground, the LoseCanvas is enabled; if it collides with the EndZone, the WinCanvas is enabled instead. Either way, the PlayerController script is disabled, because when the player wins or loses, we want to stop them from controlling the character afterwards.
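A sketch of HitDetection.cs as described; the exact member names are my guesses from the post:

```csharp
// Sketch of HitDetection.cs: show the right screen and freeze the player.
using UnityEngine;

public class HitDetection : MonoBehaviour
{
    public GameObject winCanvas;               // lines 7-8: dragged in via the Inspector
    public GameObject loseCanvas;
    public PlayerController playerController;  // set in Start(), not the Inspector

    private void Start()
    {
        winCanvas.SetActive(false);   // crucial: stop the screens appearing at launch
        loseCanvas.SetActive(false);
        playerController = GameObject.FindWithTag("PlayerBody")
                                     .GetComponent<PlayerController>();
    }

    private void OnCollisionEnter2D(Collision2D collision)
    {
        if (collision.gameObject.CompareTag("Ground"))
            loseCanvas.SetActive(true);
        else if (collision.gameObject.CompareTag("EndZone"))
            winCanvas.SetActive(true);
        else
            return;

        // Either way, stop the player controlling the character afterwards.
        playerController.enabled = false;
    }
}
```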

Now to the LoseScreen script. It is a really small script that looks like this:

LoseScript.png

The Update() function in this script checks if the player presses the Space Bar and, if so, loads the active scene. Remember to include line 4 in any script that uses the SceneManager. Lines 1-3 are generated whenever you make a C# script in Unity, so you don’t need to worry about them, but line 4 is not.
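A sketch of the LoseScreen script, with the SceneManagement using on line 4 as the post describes:

```csharp
using System.Collections;            // lines 1-3: generated by Unity
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.SceneManagement;   // line 4: required for SceneManager

public class LoseScreen : MonoBehaviour
{
    private void Update()
    {
        // Reload the current level when the player presses the Space Bar.
        if (Input.GetKeyDown(KeyCode.Space))
            SceneManager.LoadScene(SceneManager.GetActiveScene().name);
    }
}
```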

The only difference between the LoseScreen script and the WinScreen script is line 11. Line 11 in the WinScreen script is this:

WinLoseChange.png

Instead of loading the active scene, this loads the next scene. The next scene is determined by the order in which the scenes are placed under File > Build Settings in Unity.
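As a sketch, the WinScreen version of that line could be:

```csharp
// Load the next scene in the Build Settings order.
SceneManager.LoadScene(SceneManager.GetActiveScene().buildIndex + 1);
```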

Improving Enemies with Particle Effects

Yet again, I am back on Project 1 making improvements. This time I decided to add some more visual flair to the enemies. I wanted to make the enemies stand out because they can be hard to see, since both the enemies and the background have similar cream colours. Also, since the enemy sprites just disappear when they have lost all their health, I wanted a fade or a transition so it looks better.

What I added is a particle effect that sits behind an enemy and fades after the enemy dies. The effect is intended to make the enemy appear as if they have an aura.

To do this I created a Particle System in Unity, set its Shape to Mesh and its Mesh to Cylinder.

ShapeMesh.png

The reason I selected Mesh > Cylinder instead of Circle is that particles spawn at the centre and around the edge of the Cylinder, whereas the Circle has particles spawning anywhere inside it. You can see the difference below.

ShapeCompare.png

ShapeCompareSettings.png

The Particle System on the left is the Cylinder and the Particle System on the right is the Circle. Both have the same settings, which you can see in the image above.

The Cylinder Shape suits my needs better because, when Start Speed is set to a negative value, all the particles converge on a single spot. This gives me the kind of fading effect that I am looking for. Below is an image of the Cylinder Particle System with Start Speed set to -0.5 and Emission > Rate Over Time set to 500.

Cylinder.png

For the actual Particle System I used the following settings and got the following result:

CylinderSettings.png

CylinderDone.png

Now to put the Particle System underneath an enemy sprite.

CylinderSprite.png

In order to make the Particle System linger and fade after the Enemy sprite has been destroyed, we need to write some code. We can't make the Particle System a child of the Enemy sprite, because when the Enemy sprite is destroyed, the Particle System would be destroyed as well.

The first thing we have to do is reference the Particle System.

StopLoopCodeStart.png

Also, set up a bool that we will use later.

StopLoopCodeBool.png

Then, in the function that controls when the enemy is destroyed, line 41 of the code below retrieves the Main module of our Particle System, and line 42 retrieves its Loop setting and sets it to the bool made earlier.

StopLoopCode.png

Be sure to put this code above the Destroy call; otherwise the Enemy will be destroyed and will not run any more code.
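The gist of lines 41-42 plus the destroy call, as a sketch (enemyParticles and stopLoop are the reference and bool set up earlier; the names are mine):

```csharp
ParticleSystem.MainModule main = enemyParticles.main;  // line 41: the Main module
main.loop = stopLoop;   // line 42: stopLoop is the bool made earlier (false)
Destroy(gameObject);    // destroy the Enemy only after the loop is stopped
```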

Now we have a Particle System that fades when the Enemy sprite is destroyed, but what happens when the Enemy sprite moves? At the moment, the Particle System doesn't follow the Enemy sprite. Let's change that.

FollowCode.png

The above code has a public reference to a GameObject, meaning that we can drag and drop the specific Enemy sprite we want associated with this code. The Follow() function then finds the Enemy's position, if the Enemy still exists, and sets the Particle System's position to match. Finally, this function is called in Update().
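A sketch of that follow code, living on the Particle System's own script:

```csharp
public GameObject enemy;  // drag the specific Enemy sprite in via the Inspector

// Match the Particle System's position to the Enemy, if it still exists.
private void Follow()
{
    if (enemy != null)
        transform.position = enemy.transform.position;
}

private void Update()
{
    Follow();
}
```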

Now the Particle System follows the Enemy sprite and fades away shortly after the Enemy sprite is destroyed.

Importing Assets From Collaborators

When I imported the art assets that I received from Hayden into the Unity project, I encountered a problem: the assets couldn't be dragged into the scene, and their previews had some form of artefacting.

It turns out that the cause of this problem was that the assets' Texture Type was set to Default.

Standard.png

In order for the assets to be usable in the way that I want, I just needed to change the Texture Type from Default to Sprite (2D and UI). This immediately fixed all the problems that I was having.

However, all the art assets that Hayden sent me were huge when compared to the size of our levels and, well… pretty much everything else.

Huge.png

I tried adjusting the size of the head by changing its scale, but that had me working in decimals and was just a headache to deal with.

Fortunately, there is another way to deal with this problem. It is called Pixels Per Unit.

PPU.png

Adjusting Pixels Per Unit to 2500, instead of the default 100, shrunk the asset down to the small head that you can see below the big head in the second image above.

From there on out, it was a simple Texture Type change and a Pixels Per Unit adjustment for each asset that Hayden sent, and they all became usable.

Creating Audio Assets

In addition to art assets and particle effects for Project 1, I decided to create my own audio assets as well. Using Audacity, I created the sound effects for when an enemy gets destroyed and when an enemy attacks.

To do this, I opened Audacity and clicked the 'Click to Start Monitoring' button at the top. This is just to set up my mic and make sure that everything is recording properly. From there, I hit the record button and made a few noises that I thought would be appropriate.

Audacity.png

After I found a few that I thought were good enough to use, I deleted all the excess audio and dead air by clicking and dragging the mouse cursor over those areas and then pressing the Delete key.

Next, I had to cut the audio into three different sections, because there were three different sounds in that clip that I wanted separated; if I exported the current recording, it would export as one large chunk, which I would rather not work with.

To cut the audio, I highlighted a section that I wanted to keep and pressed the Space Bar. The Space Bar plays the currently selected audio and nothing else, meaning that you can check whether you have selected all the audio you wanted.

AudacitySelectedAudio.png

Then, once you are happy with your selection, you go to Edit > Remove Special > Split Cut.

AudacitySplitCut.png

This cuts the audio you have selected out of the current track. Now you need to place that audio in a different track. To do this, go to Tracks > Add New > Stereo Track.

AudacityAddTrack.png

This will create a track beneath the first one on the screen. Now paste the audio that you cut by clicking on the second track and then going to Edit > Paste.

AudacityPaste.png

You must follow these steps for each separate audio file that you wish to create; remember that each track becomes a different audio file. When pasting audio, it may sit beyond the beginning of the track, but that is nothing to worry about: since there is nothing recorded there, not even dead air, Audacity disregards that section completely. If you want to fix it, however, you can go to Tracks > Align Tracks > Start to Zero and the audio will move to the beginning of the track.

Now that all the audio is ready, we can export it. We can export multiple tracks at once by going to File > Export Multiple. When you click that, you will be greeted by the screen below.

AudacityExportMultiple.png

Note that I am exporting the audio as MP3s. When I first tried this, I was given a pop-up telling me that I needed the LAME encoder in order to export my audio as MP3s. This isn't a big problem, because the pop-up has a button on it that directs you to the download page for the LAME encoder.

After doing that, it will ask you to enter any info that you want on your audio files, such as Artist Name, Track Title, etc. Once that is done and your files are successfully exported, it is time to import them into Unity. I made a folder called Audio and placed all of my tracks in there.

Now there is audio in the project but not in the game. Time to add some functionality through code. I started by creating an empty GameObject and naming it 'AudioManager'. I gave it an AudioSource and attached a script to it that looked like this:

AudioScript.png

This script has public AudioClip references that allow me to drag and drop my audio in the Inspector, and an AudioSource reference that is assigned when the script starts. So now we have a place in the game where audio is stored and can be played from, but nothing that tells the AudioManager to actually play anything.
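A sketch of that AudioManager script, using the clip names mentioned below:

```csharp
// Sketch of the AudioManager: public clips plus an AudioSource found in Start().
using UnityEngine;

public class AudioManager : MonoBehaviour
{
    public AudioClip defGhst1;     // enemy death sound
    public AudioClip defGhst2;     // enemy attack sound

    public new AudioSource audio;  // 'new' hides the obsolete Component.audio

    private void Start()
    {
        audio = GetComponent<AudioSource>();  // the AudioSource on this GameObject
    }
}
```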

The next step is to add lines of code that tell the AudioManager to play certain tracks when certain things happen. The two images below show the lines I have added to the enemies' Death and Attack functions.

EnemyAudioScript1.png

EnemyAudioScript2.png

The first script tells the AudioManager to play 'defGhst1' whenever an enemy dies, and the second tells it to play 'defGhst2' whenever the player enters the enemy's range of attack.
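As sketches, the two added lines could look like this (their placement inside the Death() and Attack() functions is assumed):

```csharp
audioManager.audio.PlayOneShot(audioManager.defGhst1);  // in Death(): enemy destroyed
audioManager.audio.PlayOneShot(audioManager.defGhst2);  // in Attack(): player enters range
```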