Softskill Plans and Improvement

As Studio 1 began, I became focused on the work that was presented to me. Throughout Project 1 I focused too heavily on creating art assets, and throughout Project 2 I prioritised the completion of group work over personal work.

My problem was that I didn’t have any system of personal management. Whenever I had free time to work on something, I found that I spent a lot of it just figuring out what I needed to work on. I always knew what group work I had to do, thanks to Hack n Plan, but when it came to writing Dev Diaries or reporting my work, I struggled.

To help remedy this, I created a personal Hack n Plan where I listed all of the work that I had left to do. Anything from Dev Diaries to Audio to Scripting was placed in the Hack n Plan. It was a valuable tool for organising my free time and making me far more effective at completing my work.

It allows me to track my remaining work and monitor my week-to-week work ethic. The time spent updating Hack n Plan is more than made up for by the amount of time I save when finding something I want to work on.

I also attempted to use a work calendar alongside Hack n Plan. I found it useful at the start but, once I had memorised the times that I had set aside to work, I found myself not using it very often. I found that I worked better when I kept Hack n Plan open, using it as a visual reminder of all of the work I had to do.

I plan on using Hack n Plan as often as possible. As it is now, Hack n Plan has been a valuable tool in helping me finish all the work that I needed to do. Additionally, I believe that, once I am more familiar with Hack n Plan, I can gain the benefits of a work calendar from Hack n Plan as well.

Coding the Player

Project 1 was a solo project, so it was up to me to make or source everything. The most involved script that I had to write was my Player script. It controls player movement, player animation, player attack and more. I am going to go through exactly what my Player script does, as well as the scripts that it is dependent on.

First off, let’s go through the references and variables.

PlayerCode1.png

In my references, three are assigned in the Start() function, one is assigned in the Inspector and another is assigned later, in the Attack() function. Of the three that are assigned in Start(): the Rigidbody2D is used to move the player, the Animator is used to change which animations are applied to the player and the AudioManager is used to play audio. The bullet GameObject is used as a base in the Attack() function.

Next are the variables playerSpeed and playerHealth. The variable playerSpeed is a factor in how fast the player moves and playerHealth determines how much damage the player can take.


Now on to the functions that are called in FixedUpdate(). The first function called is Movement().


This function is a list of if, else if and else statements. If one of the first four statements is true, then the player moves in a direction dependent on which key is pressed. Pressing W, A, S or D moves the character up, left, down or right respectively. Each statement also determines whether or not the bool ‘Moving’ is true.

PlayerAnimation.png

If the ‘Moving’ bool is true, then the character transitions from whatever state they are currently in to the ‘PlayerWalk’ state. When the character is in the ‘PlayerWalk’ state, the character sprite plays a looping 2 frame moving/walking animation.
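Since the code itself is in a screenshot, here is a rough sketch of what Movement() looks like. The field names (rb for the Rigidbody2D, anim for the Animator) are assumptions; only playerSpeed and the ‘Moving’ bool are taken from the post:

```csharp
// Sketch only – the real script is in the screenshot above.
void Movement()
{
    if (Input.GetKey(KeyCode.W))
    {
        rb.velocity = Vector2.up * playerSpeed;
        anim.SetBool("Moving", true);
    }
    else if (Input.GetKey(KeyCode.A))
    {
        rb.velocity = Vector2.left * playerSpeed;
        anim.SetBool("Moving", true);
    }
    else if (Input.GetKey(KeyCode.S))
    {
        rb.velocity = Vector2.down * playerSpeed;
        anim.SetBool("Moving", true);
    }
    else if (Input.GetKey(KeyCode.D))
    {
        rb.velocity = Vector2.right * playerSpeed;
        anim.SetBool("Moving", true);
    }
    else
    {
        rb.velocity = Vector2.zero;
        anim.SetBool("Moving", false);  // idle – leave the PlayerWalk state
    }
}
```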


The next function is the Attack() function.


This function only runs when the player clicks the left mouse button. When that happens, a Vector3 is made that contains the X, Y and Z coordinates of the mouse cursor’s current position. Then the Z coordinate of the Vector3 is reduced to 0 (since this game is 2D). After that, the Vector3 is converted from screen coordinates to game coordinates.

Now, a copy of the bullet that was previously referenced is spawned and assigned to the bulletPrefab reference. The bulletPrefab is then rotated to face the mouse cursor. Then force is applied to the bulletPrefab’s Rigidbody, making it travel toward the location that the player clicked.

As well as spawning and firing the bulletPrefab, a sound effect is played. You can read more about that in my pitch shifting audio blog.

Finally, after a two second timer, the bulletPrefab destroys itself.
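Putting those steps together, the Attack() function might look something like this. This is a sketch, not the exact script: bulletForce is a name I’ve made up here, and the AudioManager member names are assumptions:

```csharp
// Sketch of Attack() – names like bulletForce are illustrative only.
void Attack()
{
    if (Input.GetMouseButtonDown(0))  // left mouse button
    {
        // Mouse position in screen coordinates, with Z flattened for 2D
        Vector3 target = Input.mousePosition;
        target.z = 0f;

        // Convert from screen coordinates to game (world) coordinates
        target = Camera.main.ScreenToWorldPoint(target);

        // Spawn a copy of the referenced bullet and aim it at the cursor
        bulletPrefab = Instantiate(bullet, transform.position, Quaternion.identity);
        Vector2 direction = (target - transform.position).normalized;
        bulletPrefab.transform.up = direction;

        // Apply force so the bullet travels toward the clicked location
        bulletPrefab.GetComponent<Rigidbody2D>().AddForce(direction * bulletForce);

        // Play the firing sound (see the pitch shifting audio post)
        audioManager.audioSrc.PlayOneShot(audioManager.bullet);

        // The bullet removes itself after a two second timer
        Destroy(bulletPrefab, 2f);
    }
}
```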


The bullet has its own script as well. It detects if it collides with anything and, if so, tells them to take damage and then plays audio.


The Bullet script references and assigns the AudioManager, since it plays audio when it collides with certain objects. There is also a variable that determines how much damage the bullet does.

The OnCollisionEnter2D() function retrieves the tag of the object that the bullet collides with. If it collides with an object tagged either Enemy or Enviro, it will tell that object to take damage, play a sound effect and then destroy itself.
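A sketch of the whole Bullet script, based on that description (the exact member names are assumptions):

```csharp
using UnityEngine;

// Sketch of the Bullet script described above.
public class Bullet : MonoBehaviour
{
    public int damage = 1;            // how much damage the bullet does
    private AudioManager audioManager;

    void Start()
    {
        // Find the AudioManager so collision sounds can be played
        audioManager = FindObjectOfType<AudioManager>();
    }

    void OnCollisionEnter2D(Collision2D collision)
    {
        string tag = collision.gameObject.tag;
        if (tag == "Enemy" || tag == "Enviro")
        {
            // Tell whatever we hit to take damage
            collision.gameObject.SendMessage("ApplyDamage", damage);
            audioManager.audioSrc.PlayOneShot(audioManager.bulletHit);
            Destroy(gameObject);
        }
    }
}
```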


Now, back to the Player script.


Both the ApplyDamage() function and the Death() function appear in every script that has the ability to take damage and be destroyed. When a value named damage is sent to ApplyDamage(), it reduces playerHealth by an amount equal to damage. Then, if playerHealth is less than or equal to 0, the GameObject that this script is attached to is destroyed.
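As a sketch, that damage/death pair is only a few lines:

```csharp
// Sketch of the damage/death pair described above.
public void ApplyDamage(int damage)
{
    playerHealth -= damage;
    if (playerHealth <= 0)
        Death();
}

void Death()
{
    Destroy(gameObject);  // remove the GameObject this script is attached to
}
```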

Pitch Shifting Audio

I have added further to Project 1. I decided to make three more sound effects: one for an enemy getting hit with a bullet, one for firing a bullet and one for destroying the environment.

As usual, I opened Audacity and recorded three different tracks. If you would like more info on how I did this, you can check out my previous audio blog.


I then exported all of the tracks and named them Bullet, BulletHit and Rip. Then I moved the exported files into my Unity project.

The first thing to do was to add more public references to the AudioManager script. So now my script looks like this:


Then I can just drag and drop the audio files that I have into the appropriate spot in the Inspector.


Now for the interesting part. To make the audio change pitch when it plays, all we need is one line of code.

PlayerCodePitchAudio.png

I put that one line of code in the Attack() function in the Player script, which might seem familiar if you have seen some of my other Project 1 blogs.

It is line 62 that does all the work. When the bullet shoots, it travels toward the mouse cursor and plays the ‘bullet’ sound effect at a pitch that ranges between 0.6 and 1.4 (with 1 being the original pitch). The pitch changes every time the player shoots a bullet.
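Based on that description, lines 62 and 63 probably look something like this. It’s a sketch; the member names audioSrc and bullet are assumptions:

```csharp
// Line 62 (sketch): randomise the pitch before playing; 1f is the original pitch
audioManager.audioSrc.pitch = Random.Range(0.6f, 1.4f);
// Line 63 (sketch): play the 'bullet' sound effect at the new pitch
audioManager.audioSrc.PlayOneShot(audioManager.bullet);
```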

However, this setup has a flaw. Since all audio plays through the same AudioSource, every sound effect is going to play at the same pitch as the last bullet fired. There are two ways to solve this, and one way takes much longer and requires more work than the other.

The more difficult option is to add an extra line of code to all scripts that play audio. In this case I have five other audio clips whose pitch I don’t want to change. This means that every time I play a clip (the same function call is on line 63 in the code above), I need to add the following code on the line above it:


This line changes the pitch back to 1, meaning that the next line, the line that plays the audio, will play at the normal pitch.
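As a sketch, the reset-then-play pair would look like this (the clip name defGhst1 is just an example from my other scripts):

```csharp
// Sketch: reset to the normal pitch before playing an unshifted clip
audioManager.audioSrc.pitch = 1f;
audioManager.audioSrc.PlayOneShot(audioManager.defGhst1);
```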

The easy fix is to add an extra AudioSource to the AudioManager GameObject. After doing so, the AudioManager code will also need to change to the following:

AudioManagerEasyFix.png

The Start() function that found the AudioSource will not work anymore since there are two AudioSources. So instead, we remove the Start() function and add another public reference for the second AudioSource. Now we can just drag and drop the AudioSources in the Inspector.
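A sketch of what the reworked AudioManager might look like; the field names here are guesses, not the exact ones from the screenshot:

```csharp
using UnityEngine;

// Sketch of the two-AudioSource AudioManager described above.
public class AudioManager : MonoBehaviour
{
    // Clips, dragged in via the Inspector
    public AudioClip bullet, rip, defGhst1, defGhst2;

    // No Start() any more – with two AudioSources on the GameObject,
    // GetComponent can't pick one unambiguously, so both are public
    // and assigned in the Inspector.
    public AudioSource audioSrc;    // everything that keeps its normal pitch
    public AudioSource bulletHit;   // the pitch-shifted bullet sounds
}
```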

Finally, lines 62 and 63 in the Player script change to:
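A sketch of the change (member names assumed, as before):

```csharp
// Line 62 (sketch): only the second AudioSource's pitch is changed
audioManager.bulletHit.pitch = Random.Range(0.6f, 1.4f);
// Line 63 (sketch): play the bullet clip through the second AudioSource
audioManager.bulletHit.PlayOneShot(audioManager.bullet);
```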


Instead of playing the audio through the first AudioSource, it now plays through the second AudioSource, audioManager.bulletHit. Also, only the pitch of the second AudioSource is changed. This means that all audio played through the first one will not have a change in pitch.

Win/Lose Scripting – Scene Change With a Key Press

For Project 3, I implemented a ‘Lose Screen’ and a ‘Win Screen’. At first, it was just a Canvas with a white Panel spread across it, with some text and a button that said ‘Restart’ in the centre. This really hindered gameplay, however, since gameplay was limited to a joystick and a single button. The button meant that players would have to move a hand from either the joystick or the single button to the mouse and click the on-screen button. Then, from there, they would have to quickly move their hand back to where it was previously.

I decided to fix this problem by writing a script that reloads the current level, if you lose, or the next level, if you win, with the Space Bar, the single button that was used for gameplay. This process involved writing three scripts: one that detects when the player has lost, one that reloads the current scene and one that loads the next scene. The first script that was written was HitDetection.cs and it contains the following:
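Since the script is in a screenshot, here is a rough reconstruction of HitDetection.cs. The field names and the exact line numbering are assumptions, but it follows the behaviour described below:

```csharp
using UnityEngine;

// Sketch of HitDetection.cs as described in this post.
public class HitDetection : MonoBehaviour
{
    public GameObject winCanvas;    // dragged in via the Inspector
    public GameObject loseCanvas;   // dragged in via the Inspector
    public PlayerController playerController;  // set in Start() instead

    void Start()
    {
        // Stop both screens from appearing as soon as the game starts
        winCanvas.SetActive(false);
        loseCanvas.SetActive(false);

        playerController = GameObject.FindGameObjectWithTag("PlayerBody")
                                     .GetComponent<PlayerController>();
    }

    void OnCollisionEnter2D(Collision2D collision)
    {
        if (collision.gameObject.CompareTag("Ground"))
            loseCanvas.SetActive(true);
        else if (collision.gameObject.CompareTag("EndZone"))
            winCanvas.SetActive(true);
        else
            return;

        // Either way, stop the player controlling the character afterwards
        playerController.enabled = false;
    }
}
```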


In this script, lines 7 and 8 are public references to the Win and the Lose Canvases. This allows you to drag and drop the appropriate Canvas in the Inspector. The next line is a public reference to the PlayerController script (which contains all the code for how the player moves their character); however, this is set in the Start() function instead of the Inspector.

The Start() function disables both the Win and Lose Canvases, then finds the PlayerController script that is attached to the GameObject tagged PlayerBody. Disabling both Canvases is a crucial step because it stops them from appearing as soon as the game starts.

Finally, the OnCollisionEnter2D() function. This function detects if the object that this script is attached to has collided with a GameObject tagged either Ground or EndZone. If it collides with the Ground, the LoseCanvas is enabled; however, if it collides with the EndZone, the WinCanvas is enabled instead. Either way, the PlayerController script is disabled. This is because, when the player wins or loses, we want to stop them from being able to control the character afterwards.

Now to the LoseScreen script. It is a really small script that looks like this:
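A sketch of the LoseScreen script (using the scene’s build index to reload the active scene; the original may use the scene name instead):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;  // needed for SceneManager

// Sketch of the LoseScreen script described below.
public class LoseScreen : MonoBehaviour
{
    void Update()
    {
        // Reload the current level when the Space Bar is pressed
        if (Input.GetKeyDown(KeyCode.Space))
            SceneManager.LoadScene(SceneManager.GetActiveScene().buildIndex);
    }
}
```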


The Update() function in this script checks if the player presses the Space Bar and, if so, loads the active scene. Remember to include line 4 in any script that uses the SceneManager. Lines 1-3 are generated whenever you make a C# script in Unity, so you don’t need to worry about them, but line 4 is not.

The only difference between the LoseScreen script and the WinScreen script is line 11. Line 11 in the WinScreen script is this:
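A sketch of that one line (assuming the build-index approach from the LoseScreen description):

```csharp
// Sketch: load the NEXT scene in the Build Settings order
SceneManager.LoadScene(SceneManager.GetActiveScene().buildIndex + 1);
```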


Instead of loading the active scene, this loads the next scene. The next scene is determined by the order in which the scenes are placed in File > Build Settings in Unity.

Writing A Film Script (Screenplay): What I Learned

For Project 2, each team was required to come up with their own scenario to a board game. This involved: making a unique set of rules that could be attached to a common set of rules that every scenario had, making crises that players had to solve which were specific to the scenario, and making film scripts to deliver to Film students. The Film students would later set up and record a professional actor to perform the scripts. These recordings would accompany the board game, to be played along side the crises.

In order to write a screenplay that would be presentable, I researched how to properly format and write a professional film script.

Screenplays are notorious for being finicky when it comes to their layout. Descriptive text must have a certain indent, while speech text must have an increased indent beyond that. In terms of layout, each page should follow these specifics:

  • Top Margin: 1 Inch
  • Bottom and Right (outside) Margin: 0.75-1.25 Inches
  • Left (inside) Margin: 1.5 Inches
  • Font: 12 point, Courier
  • Dialog Indent: 1.5 Inches, on both sides

Every page should follow these rules because each page takes roughly one minute to play out on-screen. This allows readers of the screenplay to accurately estimate how long it would run for.

Beyond that, there are some specific terms, references and capitalisation that must be used in writing. For example, the first line of a screenplay should read ‘FADE IN:’, without the quotation marks, as this signifies that this page is the beginning of the screenplay. From there, scenes are split up by their location.

After ‘FADE IN:’ but before any other text, you must write a one-line description of the location where the next scene is going to take place. It must begin with INT. (interior) or EXT. (exterior) and, at the end of the description, you can specify whether the scene is shot during the day, at night or at the same time as the last scene, if need be. Finally, the entire description must be capitalised.

Whenever a new character is introduced, their entire name must be capitalised and accompanied by a description of them. From there you can write normally about what characters are doing, until they speak. When a character speaks, their entire name must be capitalised and indented on the left by an additional inch. If dialog runs over the page, then (MORE) must be added to the bottom of the page that the dialog started on. Then, on the page that the dialog runs onto, the character’s name is placed at the top of the dialog, capitalised again, but with (CONT’D) placed next to it.

There are also certain terms that must appear next to a character’s name at the top of their dialog, depending on the character’s location on-screen. If the character has dialog but is not meant to be on-screen when saying it, then you must add (O.S.) next to the character’s name at the top of the dialog. However, if the character is purely a voice, e.g. a narrator, you must add (V.O.) instead.

Lastly, if a specific transition is required between scenes, then it should be noted at the end of the scene. To do this, write the transition in capitals with a colon at the end (e.g. DISSOLVE TO:) and align the text to the right. Note, however, that this should not be done for every scene change.


Below is an image of the screenplay that I wrote for the first crisis of my team’s scenario. It doesn’t contain everything that has been discussed above.


Future Uses:

This research obviously helps with writing dialog and planning cutscenes in games. Beyond that, I believe that this provides me with a valuable first step into an aspect of game development, even an entire genre of games, that I am yet to explore. It allows me to begin experimenting with narrative-driven games in a form that is easily understood and is still useful later in the development cycle.

This also gives me a medium in which to express ideas and stories not only in a structured manner, but in a professional manner as well. It allows me to write in a format that is common in the industry (in the film industry at least) and can be easily read by veterans and amateurs alike.

Finally, this research is the first step into opening avenues for me in the film industry should I ever wish to transition to another creative industry.

Improving Enemies with Particle Effects

Yet again I am back on Project 1, making improvements. This time I decided to add some more visual features to the enemies. I wanted to make the enemies stand out because they can be hard to see, since both the enemies and the background are similar cream colours. Also, since the enemy sprites just disappear when they lose all their health, I wanted a fade or a transition just so it looks better.

What I added is a particle effect that sits behind an enemy and fades after the enemy dies. The effect is intended to make the enemy appear as if they have an aura.

To do this I created a Particle System in Unity, set its Shape to Mesh and its Mesh to Cylinder.


The reason I selected Mesh > Cylinder instead of Circle is because particles spawn in the centre and around the edge of the Cylinder, whereas Circle has particles spawning inside the Circle. You can see the difference below.



The Particle System on the left is the Cylinder and the Particle System on the right is the Circle. Both have the same settings, which you can see in the image above.

The Cylinder Shape suits my needs more because, when speed is set to a negative, all the particles centre in a single spot. This gives me the kind of fading effect that I am looking for. Below is an image of the Cylinder Particle System with Start Speed set to -0.5 and Emission > Rate Over Time set to 500.


For the actual Particle System I used the following settings and got the following result:



Now to put the Particle System underneath an enemy sprite.


In order to make the Particle System linger and fade after the Enemy sprite has been destroyed, we need to write some code. We can’t make the Particle System a child of the Enemy sprite, because when the Enemy sprite is destroyed the Particle System will be destroyed as well.

The first thing that we have to do is reference the Particle System.

StopLoopCodeStart.png

Also, set up a bool that we will use later.


Then, in the function that controls when the enemy is destroyed, line 41 of the code below retrieves the Main section of our Particle System and line 42 retrieves the Loop setting of our Particle System and changes its value to the bool made earlier.


Be sure to put this code above the Destroy call, otherwise the Enemy will be destroyed and will not run any more code.
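Pulling the pieces together, the relevant part of the Enemy script might look like this (a sketch; the field names aura and stopLoop are my guesses at what the screenshots show):

```csharp
// Sketch of the death function – names are approximate.
public ParticleSystem aura;     // the aura Particle System (referenced, not a child)
public bool stopLoop = false;   // the value Looping gets set to

void Death()
{
    ParticleSystem.MainModule main = aura.main;  // retrieve the Main module
    main.loop = stopLoop;                        // stop looping so the aura fades out
    Destroy(gameObject);                         // keep this BELOW the particle code
}
```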

Now we have a Particle System that fades when the Enemy sprite is destroyed, but what happens when the Enemy sprite moves? At the moment, the Particle System doesn’t follow the Enemy sprite. Let’s change that.

FollowCode.png

The above code has a public reference to a GameObject, meaning that we can drag and drop the specific Enemy sprite that we want to be associated with this code. Then the Follow() function finds the Enemy’s position, if available, and makes the Particle System’s position equal to it. Finally, this function is called in Update().
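A sketch of that follow script, attached to the Particle System (the class name here is made up):

```csharp
using UnityEngine;

// Sketch of the follow script described above.
public class AuraFollow : MonoBehaviour
{
    public GameObject enemy;  // drag in the Enemy sprite to follow

    void Update()
    {
        Follow();
    }

    void Follow()
    {
        // Match the enemy's position while the enemy still exists
        if (enemy != null)
            transform.position = enemy.transform.position;
    }
}
```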

Now the Particle System follows the Enemy sprite and fades away shortly after the Enemy sprite is destroyed.

Importing Assets From Collaborators

When I imported the art assets that I received from Hayden into the Unity project, I encountered a problem. The assets couldn’t be dragged into the scene, and the previews of the assets had some form of artefacting.

It turns out that the cause of this problem is that the assets’ Texture Type was set to Default.


In order for the assets to be useable in the way that I want, I just needed to change the Texture Type from Default to Sprite (2D and UI). This immediately fixed all the problems that I was having.

However, all the art assets that Hayden sent me were huge when compared to the size of our levels and, well… pretty much everything else.


I tried adjusting the size of the head by changing the scale of it, but that had me working in decimals and it was just a headache to deal with.

Fortunately, there is another way to deal with this problem. It is called Pixels Per Unit.


Adjusting Pixels Per Unit to 2500, instead of the default 100, shrunk the asset down to the small head that you can see below the big head in the second image above.

From there on out, it was a simple Texture Type change and a Pixels Per Unit adjustment for each asset that Hayden sent, and they all became useable.

Creating a Playtest Questionnaire

Throughout Project 3, there were two playtesting sessions where people from all over campus could come and play the games we had made. In order to get the most information that we could from playtesters, every team had to come up with three questions to ask playtesters about their game.

The purpose of these questions is not to see whether playtesters liked the game or where it could be improved, but to make sure that the game we are creating is giving players the experience that we intend them to have.

As such, our questions need to be carefully tailored so that there aren’t subtle indicators that might influence the answers that we are given.

During our team’s brainstorming we came up with questions such as:

  • How difficult did you find balancing the head? (1-5)
  • Were any parts of the game too frustrating?
  • Does the game have a message?

The above three questions all contain subtle hints toward what we want the answer to be. The first two lead the player into thinking that we want them to find the game hard and will likely skew their answers, perhaps unconsciously, toward suggesting that the game is hard. The third question leads people to believe that the game has a message just by virtue of bringing it to their attention.

So we can’t use language like that, but we also can’t use a rating scale like the one in the first question. The scale tells us very little. While it certainly tells us what they thought, it doesn’t tell us why they thought that way and gives us no indication as to whether the rating was a result of design or something else.

After culling a few more questions and having more discussion, we decided that the best way to go was to ask open-ended questions that required an answer of at least a few words. Questions that can be answered with a yes or a no aren’t very helpful, and questions that require a paragraph are too daunting to answer, so those types of questions aren’t viable either.

The initial three that the team chose were:

  1. Describe a moment when you nearly lost your balance but recovered.
  2. Describe a moment when you chose not to boast to someone.
  3. Describe a moment in the game where you failed on a ramp. How did that impact your strategy?

When we asked ourselves what answers we wanted to receive from these questions, they didn’t hold up very well. The answers we gave were fairly mundane and didn’t really tell us much about the feel of the game. Our answers were:

  1. Second time I went up a ramp
  2. Before I went up a ramp
  3. I failed the ramp the first time I took it, so I moved slower next time.

All these answers really tell us is that ramps exist and are difficult. Plus, they are far too ramp-focused for a game about boasting. Also, the third question was a bit too leading, so the team reworked the questions into:

  1. Please describe a moment where you feel like you boasted too much.
  2. Please describe a moment when you nearly lost your balance but recovered.
  3. Please describe a moment in the game where you changed your strategy – why did you choose to change it?

These questions are far more open in terms of the responses that the team believes it will receive.


When it came time to playtest, the team never changed the questions between the playtesting sessions. The last two questions stayed because they made sure that the changes made to how the player moves, and to the levels, didn’t interfere with the experience that we were trying to create. However, in hindsight, the first question should have been changed, because the team received everything from joke answers to confusion as to what boasting was.

The game, in the state it was in at the time, didn’t properly notify the player of who they could boast to or even the fact that they could boast at all. This means that some players missed that mechanic entirely, leaving the question obsolete.

Creating Audio Assets

In addition to art assets and particle effects for Project 1, I decided to create my own audio assets as well. Using Audacity, I created the sound effects for when an enemy gets destroyed and when an enemy attacks.

To do this, I opened Audacity and clicked the ‘Click to Start Monitoring’ button up the top. This just sets up my mic and makes sure that everything is recording properly. From there I hit the record button and made a few noises that I thought would be appropriate.


After I found a few that I thought were good enough to use, I deleted all the excess audio and dead-air by clicking and dragging the mouse cursor over the areas and then pressing the Delete key.

Next I had to cut the audio into three different sections, because there were three different sounds in that audio clip that I wanted separated. If I exported the current recording, it would export as one large chunk, which I would rather not work with.

To cut the audio, I highlighted a section that I wanted to keep and pressed the Space Bar. The Space Bar plays the currently selected audio and nothing else, meaning that you can check whether you have selected all the audio you wanted.


Then, once you are happy with your selection, you go to Edit > Remove Special > Split Cut.


This cuts the selected audio out of the current track. Now you need to place that audio in a different track. To do this, go to Tracks > Add New > Stereo Track.

AudacityAddTrack.png

This will create a track beneath the first one on the screen. Now you need to paste the audio that you cut by clicking on the second track and then going to Edit > Paste.

AudacityPaste.png

You must follow these steps for each separate audio file that you wish to create. Remember that each track is a different audio file. When pasting audio, it may sit beyond the beginning of the track, but that is nothing to worry about. Since there is nothing recorded there, not even dead-air, Audacity disregards that section completely. However, if you want to fix it, you can go to Tracks > Align Tracks > Start to Zero and it will move the audio to the beginning of the track.

Now that all the audio is ready, we can export it. We can export multiple tracks at once by going to File > Export Multiple. When you click that, you will be greeted by the screen below.

AudacityExportMultiple.png

Note that I am exporting the audio as MP3s. When I first tried this, I was given a pop-up telling me that I needed the LAME encoder in order to export my audio as MP3s. This isn’t a big problem, because the pop-up has a button on it that directs you to the download page for the LAME encoder.

After doing that, it will ask you to enter any info that you want on your audio files such as Artist Name, Track Title, etc. Once that is done and your files are successfully exported, it is time to import them into Unity. I made a folder called Audio and placed all of my tracks in there.

Now there is audio in the project, but not in the game. Time to add some functionality through code. I started by creating an empty GameObject and naming it ‘AudioManager’. I gave it an AudioSource and attached a script to it that looked like this:

AudioScript.png

This script has public AudioClip references that allow me to drag and drop my audio from the Inspector, and an AudioSource reference that is assigned when the script starts. So now we have a place in the game where audio is stored and can be played from, but nothing that tells the AudioManager to actually play anything.
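Since the script is only shown as a screenshot, here is a sketch of what it looks like (the field names are my best guesses from the rest of the post):

```csharp
using UnityEngine;

// Sketch of the AudioManager script described above.
public class AudioManager : MonoBehaviour
{
    // Clips, dragged in via the Inspector
    public AudioClip defGhst1;   // played when an enemy is destroyed
    public AudioClip defGhst2;   // played when an enemy attacks

    public AudioSource audioSrc; // assigned when the script starts

    void Start()
    {
        audioSrc = GetComponent<AudioSource>();
    }
}
```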

The next step is to add lines of code that tell the AudioManager to play certain tracks when certain things happen. The two images below are of the lines I added to the enemies’ Death and Attack functions.



The first script tells the AudioManager to play ‘defGhst1’ whenever an enemy dies and the second script tells the AudioManager to play ‘defGhst2’ whenever the player enters the enemy’s range of attack.
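The added lines are probably each a single PlayOneShot call, something like this (a sketch; the AudioManager member names are assumptions):

```csharp
// In the enemy's Death() function (sketch):
audioManager.audioSrc.PlayOneShot(audioManager.defGhst1);

// In the enemy's Attack() function, when the player enters range (sketch):
audioManager.audioSrc.PlayOneShot(audioManager.defGhst2);
```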

Project 3 – Post-Mortem

For this third and final project of Studio 1, some of the students in the class were split into teams of three while some students opted to do this project by themselves. I was in a team that consisted of Ben, Ruby and myself. The project required the class to brainstorm as many verbs as possible. Then all the verbs were gathered together in a single document and discussed amongst the class. It is after this that the teams got together to discuss what game they wanted to make. However, there was a caveat. This caveat was that the game must be controlled with a single joystick and a single button. The single button must perform the verb that the team has chosen.

Our team selected the verb ‘Boast’ and decided to make a game in which the player must navigate around a city and boast to people who are smaller than them. Every time you boast to someone smaller than you, your head gets bigger. This, in turn, makes it harder for the player to navigate the level.


What Went Right:


The team used Discord to communicate with each other throughout the project. Since it is available on both computer and mobile, team members were able to access each other whenever it was needed. It was an excellent way to communicate who was working on what and when they were going to be working on it. Source control was also very easy, since we were able to determine when people were working and when people were committing their work. As a result, we never encountered a problem that required us to regress to an earlier version of the game.

The reason why this happened was because all members of the team were very willing to communicate with one another and were transparent when it came to their availability. Each member of the team, at one point or another, had approximately one whole week where they wouldn’t be able to contribute to the project and that was told to every other member of the team as early as possible. This allowed the other team members to prepare for the decreased output of the team and adjust the plan accordingly.

What I can learn from this is that communication is the key to a good project. Even though the project was incomplete when submitted, the team was happy with what they achieved. Being able to adjust plans as early as possible and assign work to different people as needed was a great advantage. With good communication we were able to manage members’ expectations while maintaining a positive attitude.


What Went Wrong:

Communication w/ Collaborators:

For this project the team collaborated with an Animation student named Hayden and an Audio student named Bronte. The team met with them both within the first few weeks of the project and discussed the game, as well as what deliverables the team would need from each of them. After that meeting, all discussion was delegated to emails, and it was at this point that things started going downhill. While the team did receive the assets they needed, some required adjustments, and it took days for that information to be relayed.

The reason this happened is that emails were sent every few days, meaning that a reliable and quick response from any of the involved parties wasn’t possible. This resulted in delayed feedback and adjustments, and the delay could last anywhere from a few days to a week.

What I can learn from this is that all members of the team, whether internal or external, need to use the same method of communication. The internal team (Ben, Ruby and I) had excellent communication, but the external team (Hayden and Bronte) didn’t. Including the external team in Discord and Hack n Plan would allow them to work autonomously, removing the need for email and the delay in communication. This would also allow the external team to see who is currently working, and enable direct messaging and notifications for faster responses to feedback and more.


Work Backlog:

The team entered the project with work remaining on previous projects. This left each team member essentially working on at least two projects at once. Having the team’s attention divided didn’t help Project 3, and led to the team running out of time, forcing them to stop working on the project before it could be completed.

The reasons why this happened vary. Each team member might have different reasons for having work left over from previous projects. At the very least, Project 2 still had work that was left for Strike Teams to complete, and each member of this team was in at least one Strike Team.

What I can learn from this is that people will have their own work to do outside of the projects that they work on and, if need be, that work should be factored into the planning of the current project.