Collaboration and Teamwork Overview

Throughout this trimester I have worked on several different projects and also participated in in-class discussions and feedback sessions.

In Class Discussions and Collaboration:

Unfortunately, I cannot post any evidence of this, but I participated in the discussions and feedback sessions that we held weekly. During these sessions, I always tried to be positive, to give both warm and constructive feedback, and to offer advice on how my peers could go about fixing an issue.

One example of this was a project presentation in which Lachlan showed his robot model. He said that he had been using TurboSmooth, but it had not worked properly on one particular area and he did not know how to fix it. I suggested using OpenSubdiv instead and gave him a quick demonstration of how to use it. Lachlan took this advice and incorporated OpenSubdiv into his model.

I found these feedback sessions incredibly helpful, especially when multiple facilitators were there, as it helped to get many different eyes on my project. While I was still working on the animation project, it was really good to get feedback and advice on my animations from Chris.

On Slack:

I like to be both positive and constructive with my feedback over Slack. Some examples are below.

In the Studio 3 Slack channel we would post progress and give feedback on our projects:


Although I never ended up creating assets for the Studio 2 game “The Apprentice”, I helped in the early stages of its construction by working with the designers to find a game design that would fit the “Trust” brief:


Technical Framework Overview

Throughout this trimester, I have used a technical framework that has allowed me to create work more efficiently and effectively. This blog post is a summary of the framework that I have used and is partly made up of blog posts that I wrote throughout the trimester.

For the Worldbuilder project, creating a set technical framework and file structure was my main strategy for avoiding issues that I have faced in the past with file corruption and data loss. Using a proper file structure saved time by keeping my files organised.

Below is a screencap that shows the file structure I used for the Worldbuilder project:

file management

This system was described to my class by Brett, one of our facilitators.

This is how it works:

You have one master folder, which makes it easy to copy the entire library for backup purposes. Inside the master folder you have folders named after the assets. Each asset folder contains the asset master file, which is used as the source for all referenced files, as well as the textures, and a final folder containing the iterative versions of the asset.
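To make the layout concrete, here is a small Python sketch that builds this folder structure. The folder names “Textures” and “Versions” are my own illustrative choices; the description above only specifies their roles.

```python
from pathlib import Path

def create_asset_folders(master: Path, asset_names: list[str]) -> None:
    """Build the master/asset layout described above.

    Master/
        GarbageCan/
            Textures/   <- texture files for the asset
            Versions/   <- iterative (incremental) saves
    The asset master file (e.g. Master_GarbageCan.max) sits directly
    inside the asset folder and is the source for all referenced files.
    """
    for name in asset_names:
        (master / name / "Textures").mkdir(parents=True, exist_ok=True)
        (master / name / "Versions").mkdir(parents=True, exist_ok=True)

create_asset_folders(Path("Master"), ["GarbageCan", "StreetLight"])
```

Because the whole library lives under one master folder, backing up is a single copy operation.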

Using this system in combination with the Unreal “reimport” feature allows for quick and easy updating of files.

How I used this system:

I used this system for all assets but will use the garbage can as an example:

  1. I created the asset
  2. I incrementally saved it as “GarbageCan” with version numbers
  3. And gave the most up-to-date version of it the file name “Master_GarbageCan”
  4. I then brought it into the Unreal engine
  5. And placed it where I wanted it
  6. I then looked to see what needed changing – it was too small and the lid shape was wrong
  7. I then fixed the asset in 3ds Max
  8. Saved it incrementally
  9. And saved over the “Master_GarbageCan” file
  10. Back in the Unreal engine I simply clicked on the garbage can and selected “reimport”
  11. This reimported the “Master_GarbageCan” file (which had just been updated)
  12. This quickly swapped the old version for the new one without changing the placement of the object in the Unreal scene
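Steps 2, 3, 8 and 9 above (incremental saves plus overwriting the master file) can be sketched as a small helper. This is only a sketch of my naming convention; in practice 3ds Max’s own incremental save does the real work.

```python
import re
import shutil
from pathlib import Path

def save_increment(asset_dir: Path, asset: str, source: Path) -> Path:
    """Copy `source` into Versions/ as the next numbered version,
    then overwrite the Master_<asset> file that Unreal reimports."""
    versions = asset_dir / "Versions"
    versions.mkdir(parents=True, exist_ok=True)
    # Find the highest existing version number, e.g. GarbageCan_v002.
    pattern = re.compile(rf"{re.escape(asset)}_v(\d+)$")
    existing = [int(m.group(1)) for p in versions.iterdir()
                if (m := pattern.match(p.stem))]
    next_version = max(existing, default=0) + 1
    versioned = versions / f"{asset}_v{next_version:03d}{source.suffix}"
    shutil.copy2(source, versioned)                                   # steps 2/8
    shutil.copy2(source, asset_dir / f"Master_{asset}{source.suffix}")  # steps 3/9
    return versioned
```

After the master file is overwritten, clicking “reimport” in Unreal (step 10) picks up the change without touching the object’s placement.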

My thoughts on this workflow:

Using this system allows me to quickly and easily test my assets in-engine, then fix and reimport them without having to exit Unreal or reposition them. With both 3ds Max and Unreal open at the same time, I can jump from one program to the other to create, test, fix and reimport assets. This system is much more efficient than the one I had been using previously, and I will definitely be using it for all future projects.

Iconic Work: Assassin’s Creed III

I have picked Ubisoft’s Assassin’s Creed 3 (AC3) as my iconic work for a number of reasons: it was a complete redesign of the Assassin’s Creed animation system, it added procedural and physics systems to the animation, and it made extensive use of motion capture.

What makes this work significant?

Ubisoft’s Assassin’s Creed 3 had an animation system that was strides ahead of the next-gen curve. It used massive amounts of motion capture coupled with keyed animation, layered over a brand-new procedural animation system that used predictive methods to let the character anticipate environment interactions rather than simply respond to them. To elaborate: the animation system would predict, to a set degree, all of the possible movement actions that you as the controlling player could make, and in doing so construct an animation sequence that allowed the character not only to respond to environmental factors but to anticipate them. Many games have faked this anticipation, but through this predictive system the character could accurately anticipate obstacle avoidance, assassinations, jumps, and other environmental interactions.

The complete redesign

Jonathan Cooper was hired to lead the animation team for the development of AC3 and tasked with overseeing the redesign of the AC animation system. His first stop was the Ubisoft tech development team, to find out what could and could not be done. By using new forms of physics implementation and changing the way the character responds to turning at speed, they worked out ways to make the character responsive but still grounded: there is no delay in the change of the character’s direction, but the rotation of the torso is delayed and the head leads the turn. This resulted in a very responsive and realistic-looking character without sacrificing game responsiveness.

How this affects me

Knowing how the AC3 team tackled this issue of responsive movement with realistic animation has changed the way I will develop my own animation, taking inspiration from this great method of design.

Procedural and physics systems

AC3 was one of the games at the forefront of in-engine IK systems, and the techniques they used with the physics system to complement animation have now been adapted into some of the available game engine software.

How I can use this

Because these systems and techniques are now available, I can use them in my own work to create more realistic and responsive animation, while cutting down the additional work previously required to reach the fidelity now achievable through these systems and techniques.

Motion capture implementation

AC3 used motion capture extensively as a base for the animation work, to add subtlety and weight to the character animation. Standard keyed animation was still used for the anticipation and exaggeration in the animations. In his GDC (2016) talk, animation director Jonathan Cooper details how important it is to never be in the suit yourself: there are talented actors whose whole career is built around taking direction and turning it into action, and you should be directing them, using the skills that they possess, to get the best result.

How I benefit from knowing this

Motion capture technology is becoming more widely used and available, so knowing how an animation team with as much experience as the one that created AC3 utilized these new tools is going to accelerate my learning process and hopefully increase the quality of my own motion capture sessions.


Action: The Animator’s Process, Saturday, May 30th at Gnomon. (2016). Gnomon — School of Visual Effects, Games & Animation. Retrieved 13 May 2016, from

Character Animation for Games | The Gnomon Workshop. (2016). Retrieved 13 May 2016, from

GDC Vault – Animation Bootcamp: Animating The 3rd Assassin. (2016). Retrieved 13 May 2016, from

Pipelines for Video Game Animation | The Gnomon Workshop. (2016). Retrieved 13 May 2016, from

Staff, G. (2016). Video: Improving AI in Assassin’s Creed III, XCOM, Warframe. Retrieved 13 May 2016, from


Cross-Discipline Work: Anti-Gravity

For the cross-discipline work, I joined in on a Studio 2 games project called Anti-Gravity. My role within the project was that of a freelance 3D artist. The team was looking for someone to create the AI character for their Portal inspired game. Before beginning this project, I believed that I would be required to come up with and model a small robot character. This assumption was correct.

The team gave me a brief which outlined the specifics of the single asset that I would be creating. The asset was to be spherical in shape, with an approximate radius of 20cm, and based on the Wheatley character from Portal 2 and the Eyebot character from the Fallout universe.

Immediately after receiving and reading the brief, I organised a meeting with the game’s designer who issued the brief. After a short meeting we had decided on some references for the initial 3D sketches. These are the references we agreed on:

Throughout the project I conducted several face-to-face meetings with the game students: one to clarify the brief, one during the concept phase and one follow-up meeting. I found these face-to-face meetings extremely helpful, as they got everyone on the same page and were much easier than trying to communicate over Slack, where the team was generally unresponsive. In future projects, I will definitely be trying to conduct more face-to-face meetings, as they help eliminate confusion.

Initially, I had planned on creating three quick concepts / mockups for the game students to choose between but, due to time constraints, I was only able to create two. While this was not as planned, creating quick concepts was a good way to show a design and get it approved by the game students.

Below is the final robot character, called Franklin, that I created for the team:

For my final project I had created an empty 3ds Max file with all the correct scene, unit and export settings so that the model would import correctly into Unity. I used this when creating the Franklin model, as it helped ensure that the technical side of things would go smoothly.

Additionally, I used the ‘Unity 5’ metallic settings when setting up the Quixel file and when exporting the finished textures. This allowed me to create textures that would work perfectly in the game engine that the team was using.


By laying down the technical framework for creating a 3D asset for Unity (setting up the 3ds Max and Quixel files in the correct way), I was able to quickly create the asset to the correct specifications without any issues. This was extremely helpful to the project and I will definitely be doing this in the future.

I organised a follow up meeting after delivering the asset to make sure the team was happy with the final result and that they were able to implement it in Unity correctly. They were very happy with my work and I feel as though our multiple meetings helped to contribute to this.

The cross-discipline work that I undertook was as I expected it to be: I acted as a freelance 3D artist by concepting a design, getting it approved and finalising the model. The team that I worked with were half-way through production when I joined and had a very clear idea of the style they were going for. I found this extremely helpful as it gave me a direction to head towards.

R: Gameplay Animation

  1. Background
    1. Game animation is unique: it stands apart from all other kinds of animation because it encompasses all forms of animation. Every style of animation has been used in a game at some point. On top of this is the interactivity of the game platform: the animation, in many cases, is seen in ways the original creator never expected the art to be seen, because the one playing the game is in charge of moving the game forward, and by extension moving the animation forward.
    2. Another major factor setting gameplay animation apart from other kinds of animation is the development requirements. The desired end result, and the degree to which the player has control over the camera, can drastically change how the animation must be designed and implemented. Nothing can ever look perfect from every angle, and due to the development cycles and time pressures common in the game industry, there is little time to make every element of the animation perfect. Yet every year the quality of animation seen across the whole range of games, from superstar AAA development teams to single-person indie projects, becomes more and more impressive.
  2. Applications
    1. The main application of gameplay animation techniques is exactly where you would expect to find it: games. However, because of recent technological breakthroughs in real-time rendering, results that were previously unachievable due to time constraints can now be created using techniques usually reserved for games, accelerating content creation for digital media. A great example of this is the digital media company “Machinima”, which produces video content created inside real-time game environments.
    2. Looking back at the games industry itself, animation is fundamental to a huge portion of the games being created, as it is literally the design, creation and management of all of the moving characters and a large portion of the movement of the props used in games.
  3. In practice
    1. Gameplay animation usually consists of a set creation pipeline, every studio or team will have its own version of the pipeline. Here is an example of the pipeline that I use:
      1. Asset list: I’ll create a comprehensive list of all of the animations needed, listing details about what they are to achieve, the number of frames, whether or not the animation is looped, and any dependencies.
      2. Pose Design: I’ll take an animation from the list and design a single pose that captures the desired look of that animation asset.
      3. Pose Test: I’ll export the single pose from my animation software and apply it to the character in the engine, to check consistency and get feedback from the rest of the team.
      4. Stepped Keys: From here I will create the rest of the key poses around the original pose, using stepped keys. Stepped keys are important because the transitions do not matter at this step and would only detract from the timing of each pose.
      5. Stepped Test: I will export the stepped keys to be tested in engine. This checks that the timing and length function correctly in engine, and is another good opportunity to get feedback on your work.
      6. Breakdowns: While still in stepped keys, I will work out the correct breakdowns to accentuate the keys.
      7. Breakdowns Test: Keeping to form with the rest of the pipeline, testing the result of your work in engine at every stage and getting feedback may seem over the top, but it is one of the most important parts of any pipeline. The game engine is where the final work will reside, so it must be checked and rechecked.
      8. Curve setting: Finally we can transition from stepped keys to auto keys to get some movement between frames. Personally, I find it best to work on a single transition at a time, keeping the rest of the animation in stepped keys until it has had its turn; starting from the original pose and working outward has, in my experience, produced the best result. It should go without saying, but I’ll say it anyway: take special care with the start and end of a looped animation, and make sure you understand the export process, so you do not end up with a hold at the start and end of the loop.
      9. Curve test: once again, export, test, feedback. You should know the drill by now.
      10. Iterate: The most important part of working with game animation is the ability to work in an iterative fashion. You may not work the entire pipeline on a single animation before you start the next, and things change over development; sometimes hours of work will be scrapped. That is why the ability to work iteratively is important: speed will come with practice and time.
  4. My Project
    1. Because my project is focused on gameplay animation, I will be utilizing all of the things that I have talked about so far.
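The asset list from step 1 of the pipeline can be sketched as a simple data structure. The field names here are my own, chosen to match the details listed above (goal, frame count, looping, dependencies).

```python
from dataclasses import dataclass, field

@dataclass
class AnimAsset:
    """One entry in the animation asset list (step 1 of the pipeline)."""
    name: str
    goal: str            # what the animation is to achieve
    frames: int          # number of frames
    looped: bool = False # whether the animation is looped
    dependencies: list[str] = field(default_factory=list)

asset_list = [
    AnimAsset("Idle", "neutral standing pose", 60, looped=True),
    AnimAsset("Run", "forward locomotion cycle", 24, looped=True),
    AnimAsset("Attack", "sword swing layered over locomotion", 30,
              dependencies=["Idle", "Run"]),
]

# Looped cycles need special care at export (see Curve setting, step 8).
looped_assets = [a.name for a in asset_list if a.looped]
```

Keeping the list in one structured place makes it easy to query, for example, which assets loop or which depend on others before scheduling the work.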


Action: The Animator’s Process, Saturday, May 30th at Gnomon. (2016). Gnomon — School of Visual Effects, Games & Animation. Retrieved 9 May 2016, from

Game Specific Animation Techniques. (2015). polycount. Retrieved 9 May 2016, from

Procedural Characters and the Coming Animation Technology Revolution | (2016) Retrieved 9 May 2016, from

timings?, G. (2016). Good techniques for syncing gameplay actions to specific animation timings? Retrieved 9 May 2016, from

What I Do At Work – Gameplay Animation. (2016). YouTube. Retrieved 9 May 2016, from

R: Additive Animation

  1. Background
    1. Additive animation is the process of layering animation. Much like the system in my earlier blog, it is a form of procedural animation, but it is much more focused and refined, and therefore much more widely used across all forms of the animation industry.
    2. How it actually works: interpolation techniques originally refined for tweening began to be used to interpolate between layers of animation, effectively blending two or more sets of animation together to get a desired result. This is often used to create transitional animation automatically, increasing the flexibility of a set number of animations.
  2. Applications
    1. Commonly it is being used in both 2D and 3D animation software to increase productivity.
    2. There are also systems in place for this functionality in real time rendering systems, such as game engines.
  3. In practice
    1. The most basic use of additive animation is to create additional animation cycles by combining two animations, using a series of tools that depend on the software.
    2. More complex forms of this can be seen in game engines in the form of blend trees. These are collections of animation cycles that flow from one to another using a predetermined set of rules; the rules determine the situations in which the animations blend between each other, hopefully resulting in seamless transitions between a character’s animations. On top of this, additional animations can be blended in to increase the variation of the animations.
  4. My Project
    1. In my project I am using the additive animation system to apply attack animations over the top of my locomotion animations. This allows me to use the same animations for attacking while moving and attacking while standing, removing a large amount of unnecessary work.
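The idea of layering an attack over locomotion can be shown with a minimal numeric sketch. Joint rotations are single floats here purely for clarity; a real engine blends quaternions per joint, but the base-plus-weighted-delta structure is the same.

```python
def apply_additive(base_pose: dict[str, float],
                   additive_delta: dict[str, float],
                   weight: float = 1.0) -> dict[str, float]:
    """Layer an additive animation over a base pose.

    The additive layer stores deltas from a reference pose, so each
    joint ends up at base + weight * delta. Joints absent from the
    additive layer (here, the legs) keep their base value.
    """
    return {joint: value + weight * additive_delta.get(joint, 0.0)
            for joint, value in base_pose.items()}

# One frame of a run cycle, with an attack blended onto the upper body:
run_pose = {"spine": 5.0, "arm": 10.0, "leg": 40.0}
attack_delta = {"spine": 15.0, "arm": 60.0}
blended = apply_additive(run_pose, attack_delta, weight=0.5)
# The legs keep running; the spine and arm take half the attack delta.
```

This is why one attack animation can serve both moving and standing characters: the base pose changes, the delta does not.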


Action: The Animator’s Process, Saturday, May 30th at Gnomon. (2016). Gnomon — School of Visual Effects, Games & Animation. Retrieved 9 May 2016, from

Procedural Characters and the Coming Animation Technology Revolution | (2016) Retrieved 9 May 2016, from

Using Additive Animations. (2016). Retrieved 9 May 2016, from

R: Procedural Animation

  1. Background
    1. Procedural animation started out in early animation software in the form of automatic tweening systems to speed up the development of digital animation. In the beginning it was mostly used for things like digital graphics in title sequences and digital advertisements, but as digital animation became a sought-after technique, interpolating between frames in software greatly increased the speed at which digital animators could produce artwork.
  2. Applications
    1. At the current time, procedural animation is most commonly used in real-time game engines. It can be used for simple tasks like adjusting animations to stop issues such as foot slide, or to tilt the feet of a character to match the slope of a surface. A less common use is in place of a full animation cycle: the same procedural interpolation used in 3D animation software like 3ds Max or Maya is run in engine at run time, drastically reducing the volume of animation data that must be kept in memory, reducing overhead and increasing efficiency.
  3. In practice
    1. The most common use is inverse kinematics (IK), the direct opposite of forward kinematics (FK). To explain the two systems I will use an example. First we start with a hierarchy of objects or bones that have a parent/child relationship: the hand is the child of the forearm, which in turn is the child of the upper arm. Forward kinematics is the procedure of moving the arm to a new position by starting at the top, the upper arm, and working your way down the hierarchy until the arm is in the desired pose. Inverse kinematics is the procedure of moving the lowest object in the hierarchy, in this case the hand, while a computer algorithm updates the entire hierarchy using forward kinematics, usually so fast that it appears the hand is moving and the arm is following. This is one form of procedural animation, and the inverse kinematics system can be used in game engines to produce a variety of results, like the situations described in the previous paragraph.
    2. By combining a variety of procedural animation techniques, you can get amazing animation results that are responsive and look great, from a minimal amount of animation data. A great example of this is the Wolfire game Overgrowth, which produces its entire animation set from 13 frames or poses, as the developer explains in his GDC 2014 talk.
  4. My Project
    1. For my project I plan to use one of the most basic of the concepts I described: a two-bone IK system to improve the quality of foot placement, and to align the angle of the foot to the angle of the surface it is being placed on.
    2. This kind of run-time procedural animation can produce amazing results, while allowing the animation to be adjusted at run time, increasing the usability and functionality of the entire system.
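A two-bone IK solve like the one I plan to use can be sketched in 2D with the law of cosines. This is a minimal sketch; a real engine solves in 3D and also handles joint limits and pole vectors.

```python
import math

def two_bone_ik(l1: float, l2: float, tx: float, ty: float):
    """Solve a 2D two-bone chain rooted at the origin.

    l1, l2: bone lengths (e.g. thigh and shin); (tx, ty): target.
    Returns (root_angle, joint_angle) in radians, where joint_angle
    is the interior angle at the knee/elbow. The target is clamped
    to the chain's reachable range.
    """
    dist = math.hypot(tx, ty)
    dist = max(abs(l1 - l2), min(dist, l1 + l2))  # clamp to reach
    dist = max(dist, 1e-9)                        # avoid divide-by-zero
    # Interior joint angle from the law of cosines.
    cos_joint = (l1 ** 2 + l2 ** 2 - dist ** 2) / (2 * l1 * l2)
    joint = math.acos(max(-1.0, min(1.0, cos_joint)))
    # Root angle = direction to target plus the offset the bend adds.
    cos_off = (l1 ** 2 + dist ** 2 - l2 ** 2) / (2 * l1 * dist)
    root = math.atan2(ty, tx) + math.acos(max(-1.0, min(1.0, cos_off)))
    return root, joint
```

For example, with both bones of length 1 and the target at (1, 1), the solve gives a 90° root angle and a 90° bend at the joint, placing the end of the chain exactly on the target.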


GDC 2014 Procedural Animation Video – Wolfire Games Blog. (2016). Retrieved 9 May 2016, from

GDC Vault – Animation Bootcamp: An Indie Approach to Procedural Animation. (2016). Retrieved 9 May 2016, from

procedural animation. (2016). Retrieved 9 May 2016, from

Procedural Characters and the Coming Animation Technology Revolution | (2016) Retrieved 9 May 2016, from

R: New Technology

In this blog I will be talking about some of the new technology that I have been researching and experimenting with.


What is it?: The goal of VR is to remove the barrier between the virtual world and the user. To do this, the system tries to stimulate as many of the user’s senses as possible. For a long time we have used audio to decrease the gap between the user and the virtual world, with things like 3D audio. Now, using specialised lenses built into a head-worn device, combined with the amazing computing power now available in home computer systems and advances in real-time rendering, it is possible to immerse yourself in the virtual world through both the visual and auditory senses.

Since the success of multiple VR headsets over the past few years, additional sensory devices have been under development to make the gap between the real and the virtual even smaller. These include haptic systems in the form of intuitive hand controls, and even entire electrostatic suits designed to recreate the feeling of temperature and pressure on the user’s skin over their whole body.


Here is a picture of my World Builder project in VR on one of the Oculus devices (Oculus is one of the leading brands of VR headset):


Integration with Unreal: Epic, the developers of Unreal Engine 4 and its predecessors, are working toward fully implemented VR features. Later versions of Unreal Engine 4 come with native VR support, and even more recently Epic has been showing experiments that bring VR headsets into the development pipeline itself, with a VR editor that allows developers to use almost all of the editor’s features in a hands-on VR workspace.

Motion Capture

What is it?: Motion capture, as used in the creative industries, is the process of using one of many motion capture systems (involving either specialised cameras or suit-based detection hardware, coupled with specialised software) to produce an accurate recording, or capture, of a person or object from the real world in a digital format. Capture that includes facial and voice capture is more commonly known as performance capture, as it takes every part of an actor’s performance and captures it for use in a digital media platform. Teams like Ninja Theory and its collaborators have recently showcased advancements using performance capture to animate a digital character in engine and in real time, which is an amazing feat of technological progress.

Integration with animation: I have been integrating motion capture into my own personal animation, mainly using it to add subtlety that would have taken many, many hours to produce by hand. What would take me hours to complete is achieved in a fraction of the time by recording with a motion capture system. Industry professionals, like Ubisoft Montreal’s animation director Jonathan Cooper and DICE LA’s animation director Tobias Dahl, have both referred to motion capture, or performance capture, as a reliable tool to accelerate the animation process, but not as a replacement. I myself have found that you still need to clean up the motion capture data, as it is only as good as the quality of the recording, and in many cases, if not all, pure motion capture data will not give you the desired result. It is, however, without a doubt a useful and time-saving tool that I see becoming commonplace in many areas of animation.




know, W. (2016). What is virtual reality? Everything you need to know. TrustedReviews. Retrieved 2 May 2016, from

Motion Capture. (2016). Retrieved 2 May 2016, from

Motion capture – Xsens 3D motion tracking. (2016). Xsens 3D motion tracking. Retrieved 2 May 2016, from

Motion Capture Software and Mocap Tracking Info. (2016). Retrieved 2 May 2016, from

O'Boyle, B. (2016). What is VR? Virtual reality explained – Pocket-lint. Retrieved 2 May 2016, from

What is Virtual Reality? – Virtual Reality. (2015). Virtual Reality. Retrieved 2 May 2016, from

What you need to know about 3D motion capture. (2016). Engadget. Retrieved 2 May 2016, from


World Builder Post Mortem


The World Builder project was a seven-week project where we were tasked with creating an environment piece as a personal interpretation of a place in a piece of well-known literature. The catch: the piece must not have been turned into a film or series, so that we could not use that as a starting point. For the project I chose William Gibson’s “Neuromancer”. For my environment I decided to create a street in Night City.

Here is my 2D concept:


After testing in engine, I decided to remove the road and street lights, and to narrow the scene significantly.

Street 1 Street 2

I then tested the post-processing and atmospheric effects.

Fog and Lighting

Once I had settled on all of the effects, I set out to do the leg work of creating the assets and setting up the scene; you can find more information on this here.


Here is the final video footage of the scene:

I feel the project was successful due to the positive reaction I received from my facilitators and my peers. Alongside this, the project accomplished most of the success goals that I had planned for it to achieve.


A large portion of the success of this project was due to the volume and effectiveness of the documentation that I created to ground it, mainly the art bible and the work breakdown structure, which were used to focus the visual narrative and to track production progress, respectively.

Here are some screenshots.

Scene view 3 Scene view 4 Scene view 2 Scene view 1 Scene view 5





Using Unreal Engine 4 was great, as I already had quite a bit of experience with the engine from personal work in the past. The greatest learning curve over the course of this project came from using the material editor, but with my newfound knowledge of it I can quickly and effectively make complex materials that can be used in a variety of future projects.

Technical framework

Creating a set technical framework and file structure was my main strategy for avoiding issues that I have faced in the past with file corruption and data loss. Using a proper file structure saved time by keeping my files organised.

Here is an example of this system:

file management

This system was described to my class by Brett, one of our facilitators.

How it works:

You have one master folder, which makes it easy to copy the entire library for backup purposes. Inside the master folder you have folders named after the assets. Each asset folder contains the asset master file, which is used as the source for all referenced files, as well as the textures, and a final folder containing the iterative versions of the asset.


Out of all of the projects that I have taken part in, this one had the fewest issues. The largest problem that I encountered was purely time, as I was the sole contributor to the project. By the time I had settled on a concept and completed all of the documentation, it was already week four of seven. Production was also hampered by my getting sick, an ever-present risk in any solo project.

Future considerations

The main considerations I will make in the future are to do with the way I approach the documentation and how I setup my asset references. The documentation that I developed over the course of the project was a huge learning curve that will positively impact the way I plan projects in the future.