Am I really not going to have a blog post in 2022? Well, this is it, just under the wire, and not full of any substance. If there’s anyone out there even taking a remote interest in what I’m doing, I’ll have them know that the game project that I’ve been working on is still in progress, and this blog, while not active, is still an outlet for anything I’d like to report on about what I’m doing. I’m not ready yet, but it’s getting there. But enough of what I’m doing. Whatever you’re doing, make sure to…
Video games these days usually contain some form of inventory, or list of objects. Just like on Amazon, the end-user has a better experience perusing pictures of items than a list of text entries. Here’s an example from The Sims 4.
Chairs in Buy Mode Inventory in The Sims 4
Generating this group of small images, or thumbnails, can be time-consuming. You’ll most likely get better-looking results if you stage and render each thumbnail yourself, but with hundreds or thousands of items, doing so becomes prohibitive if not impossible. This is especially true when you consider that in game development, every part of the game is repeatedly re-worked. This constant iteration to get things just right means that any change you make to an object must also be propagated to everything derived from it, such as thumbnail renders. That can add hours or even days of extra processing time, and enough of that can torpedo your launch date.
The obvious way to save yourself the massive amount of time spent manually creating object thumbnail renders is to have them generated automatically. Before we begin, if you just want to see the source itself, go straight to github here. Otherwise….
Let’s dive in.
The Concept
Suppose you have a group of 3D objects in your Unity project, say in the FBX format.
FBX objects in Unity Project: cube, sphere, pyramid, cylinder
To create a thumbnail image of the object, a camera in the scene renders its view to a texture, called a RenderTexture. The graphics technique itself is called “Render-to-Texture”, or RTT.
A camera, an object, and a render texture.
Once you have a Texture object that contains the view of the camera, that Texture can be written out to an image file; in this case, we’ll write out a PNG file.
WriteAllBytes() to thumbnail.PNG
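In code, the core of this concept boils down to just a few Unity API calls. Here’s a minimal sketch (variable names are illustrative; the full, working script appears below):
// Minimal sketch: copy the camera's RenderTexture into a Texture2D,
// then encode it as a PNG and write it to disk.
RenderTexture.active = targetRenderTexture; // the camera's target texture
Texture2D tex = new Texture2D(width, height);
tex.ReadPixels(new Rect(0, 0, width, height), 0, 0); // reads from the active RenderTexture
tex.Apply();
System.IO.File.WriteAllBytes("thumbnail.png", tex.EncodeToPNG());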
Then, later on, we’ll cover automating the process of repeating this concept for a list of objects.
Run-Time Generated Thumbnails for Custom Objects
I should mention a quick caveat about the solution below. The design of this solution involves running a Unity scene to generate thumbnails. This technique allows thumbnails to be generated based on any customizations the player may have introduced, such as custom colors or textures, prior to jumping into the game world. If you need to generate thumbnails during design-time (i.e. outside of Play mode of a Unity scene), you may consider automatically generating thumbnails from your 3D Modeling Tool instead.
Manual Thumbnail Generation
Setting this up manually in Unity is pretty straightforward. You can skip this section if you already know how to render a camera view to a texture and write it out to a PNG file. For everyone else, we’ll go step-by-step below.
Unity Setup
1. Create a new Scene.
2. Create an Empty Object (this will hold the script) and rename it "ThumbGen".
3. Create a Camera and parent it to "ThumbGen".
4. Create a RenderTexture and assign it to the camera as its Target Texture.
5. Create an object (say, a Cube) in the scene, in view of the camera. Click on the camera to check that it's actually looking at the object, and keep adjusting the camera and the object until it is.
That’s it as far as the scene setup goes. Now, let’s write the script.
ThumbnailGenerator Script
Create a new script called ThumbnailGenerator1.cs (the class name below must match the file name) and attach it to the "ThumbGen" object. Implement it as follows:
using System.Collections;
using UnityEngine;
namespace ThumbGenExamples
{
public class ThumbnailGenerator1 : MonoBehaviour
{
public RenderTexture TargetRenderTexture;
public Camera ThumbnailCamera;
public Transform ObjectPosition;
public int ThumbnailWidth;
public int ThumbnailHeight;
public Texture2D TextureResult { get; private set; }
/// <summary>
/// If this is not null or empty, the texture is exported as a png to the file system.
/// </summary>
public string ExportFilePath;
void Start()
{
ThumbnailWidth = 256;
ThumbnailHeight = 256;
Render("render_manual");
}
private void AssignRenderTextureToCamera()
{
if (ThumbnailCamera != null && TargetRenderTexture != null)
{
ThumbnailCamera.targetTexture = TargetRenderTexture;
}
else if (ThumbnailCamera.targetTexture != null)
{
TargetRenderTexture = ThumbnailCamera.targetTexture;
}
}
private void Render(string filename)
{
StartCoroutine(DoRender(filename));
}
IEnumerator DoRender(string filename)
{
yield return new WaitForEndOfFrame();
ExecuteRender(filename);
}
private void ExecuteRender(string filename)
{
if (ThumbnailCamera == null)
{
throw new System.InvalidOperationException("ThumbnailCamera not found. Please assign one to the ThumbnailGenerator.");
}
if (TargetRenderTexture == null && ThumbnailCamera.targetTexture == null)
{
throw new System.InvalidOperationException("RenderTexture not found. Please assign one to the ThumbnailGenerator.");
}
AssignRenderTextureToCamera();
Texture2D tex = null;
{ // Create the texture from the RenderTexture
RenderTexture.active = TargetRenderTexture;
tex = new Texture2D(ThumbnailWidth, ThumbnailHeight);
tex.ReadPixels(new Rect(0, 0, ThumbnailWidth, ThumbnailHeight), 0, 0);
tex.Apply();
TextureResult = tex;
}
// Export to the file system, if ExportFilePath is specified.
if (tex != null && !string.IsNullOrWhiteSpace(ExportFilePath) && !string.IsNullOrWhiteSpace(filename))
{
// Ensure the export directory itself exists. (GetDirectoryName(ExportFilePath)
// would only give us its parent directory.)
if (!System.IO.Directory.Exists(ExportFilePath))
{
System.IO.Directory.CreateDirectory(ExportFilePath);
}
foreach (char c in System.IO.Path.GetInvalidFileNameChars())
{
filename = filename.Replace(c, '_');
}
string finalPath = string.Format("{0}/{1}.png", ExportFilePath, filename);
byte[] bytes = tex.EncodeToPNG();
System.IO.File.WriteAllBytes(finalPath, bytes);
}
}
}
}
Finally, assign the Camera you created to the ThumbnailCamera property in the Inspector for the ThumbnailGenerator1 script, and specify the ExportFilePath as “./test”, without the quotes. This will output the rendered image to the test directory.
Run the scene. A thumbnail image is written out to the file system.
Thumbnail image written to the file system.
Automatic Thumbnail Generation
Now that we have a script to render out a thumbnail, we still need to automate object creation and placement into the scene. Basically, Step (5) above can be automated so that an object can be created, snapshotted, and then destroyed.
Object load, render, and destroy process.
Modify the ThumbnailGenerator script
Update the ThumbnailGenerator script.
Make Render() public
Remove the call to Render() in the Start() function.
With this, we still have a ThumbnailGenerator, but Render() must be called from somewhere else to actually render the view. The script is ThumbnailGenerator2.cs in the git example.
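The relevant changes look something like this (an excerpt; see ThumbnailGenerator2.cs in the repo for the full file):
public class ThumbnailGenerator2 : MonoBehaviour
{
// ... same fields as ThumbnailGenerator1 ...
void Start()
{
ThumbnailWidth = 256;
ThumbnailHeight = 256;
// Note: no Render() call here anymore.
}
// Now public, so another component can trigger the render.
public void Render(string filename)
{
StartCoroutine(DoRender(filename));
}
// ... DoRender(), ExecuteRender(), etc. are unchanged ...
}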
Add an Empty Object called "Stage" and parent it to "ThumbGen". When an object is instantiated, it will be placed at the position of the Stage object, which serves as the camera's "look at" point. Since we'll be creating objects automatically, we don't need the cube from before.
Delete the Cube.
(Also note that from here on out, you’ll need the FBX objects from the Resources folder contained in github.)
Hierarchy and components assigned to ThumbGen.
Create an Object Loader
The first thing we’ll need for auto-generating an object’s thumbnail is an object loader. In this example, I’m simply calling Resources.Load() on one of the objects in our list, using the path to the model resource in the /Resources folder as an identifier. You may consider any other sort of identifier (i.e. unique name or inventoryId, etc.) to load your models, but that’s beyond the scope of this article.
Also keep in mind that Resources.Load() loads the prefab of a GameObject from the asset database. It doesn’t create an instance to be used in the game. Technically, we *can* use the loaded GameObject prefab, but it’s typically dangerous to do this because you can potentially modify all subsequent instances of the GameObject. It’s safer to call GameObject.Instantiate() on the loaded GameObject instead.
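To illustrate the difference, here's a quick sketch (the resource path matches the example project):
// Loads the shared prefab asset. Modifying this object directly would
// affect every instance created from it afterwards.
GameObject prefab = Resources.Load<GameObject>("objects/Cube");
// Creates an independent copy that is safe to move, recolor, or destroy.
GameObject instance = GameObject.Instantiate(prefab);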
We’ll use the following class below as our ObjectLoader.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
namespace ThumbGenExamples
{
public class ObjectThumbnailActivity
{
private GameObject _object;
private ThumbnailGenerator2 _thumbGen;
private string _resourceName;
public ObjectThumbnailActivity(ThumbnailGenerator2 thumbGen, string resourceName)
{
_thumbGen = thumbGen;
_resourceName = resourceName;
}
public bool Setup()
{
if (string.IsNullOrWhiteSpace(_resourceName))
return false;
GameObject prefab = Resources.Load<GameObject>(_resourceName);
GameObject stage = GameObject.Find("Stage");
_object = GameObject.Instantiate(prefab, stage?.transform);
Camera cam = _thumbGen.ThumbnailCamera; // already a Camera; no GetComponent needed
cam.transform.LookAt(stage.transform);
return true;
}
public bool CanProcess()
{
return true;
}
public void Process()
{
if (_thumbGen == null)
return;
string filename = string.Format("render_objectloader_{0}", _resourceName);
_thumbGen.Render(filename);
}
public void Cleanup()
{
if (_object != null)
{
GameObject.Destroy(_object);
_object = null;
}
}
}
}
Most of the implementation for object loading will occur in Setup() and Cleanup(), but let’s start with a constructor.
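public ObjectThumbnailActivity(ThumbnailGenerator2 thumbGen, string resourceName)
{
_thumbGen = thumbGen;
_resourceName = resourceName;
}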
We’re simply saving the resourceName and thumbGen to private members of the ObjectThumbnailActivity, so that the Setup() method can use them. You may be asking yourself, why don’t we just pass the resourceName to Setup()? We’ll cover that in the Optional Improvements section below.
Now we can implement Setup() and Cleanup().
Setup() instantiates a GameObject based on the specified resourceName, and then positions the object at the "Stage" position.
public bool Setup()
{
if (string.IsNullOrWhiteSpace(_resourceName))
return false;
GameObject prefab = Resources.Load<GameObject>(_resourceName);
GameObject stage = GameObject.Find("Stage");
_object = GameObject.Instantiate(prefab, stage?.transform);
Camera cam = _thumbGen.ThumbnailCamera; // already a Camera; no GetComponent needed
cam.transform.LookAt(stage.transform);
return true;
}
Cleanup() simply destroys the instantiated GameObject.
public void Cleanup()
{
if (_object != null)
{
GameObject.Destroy(_object);
_object = null;
}
}
Now we can implement CanProcess() and Process() as follows:
public bool CanProcess()
{
return true;
}
For now, CanProcess() will simply return true.
public void Process()
{
if (_thumbGen == null)
return;
string filename = string.Format("render_objectloader_{0}", _resourceName);
_thumbGen.Render(filename);
}
Now that we have our ObjectThumbnailActivity, the MonoBehaviour class below uses the activity to execute all its methods.
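Here's a sketch of that ObjectProcessor (reconstructed from the description that follows; the exact class in the git example may differ slightly):
using System.Collections;
using UnityEngine;
namespace ThumbGenExamples
{
public class ObjectProcessor : MonoBehaviour
{
private ObjectThumbnailActivity _activity;
void Start()
{
ThumbnailGenerator2 thumbGen = GetComponent<ThumbnailGenerator2>();
_activity = new ObjectThumbnailActivity(thumbGen, "objects/Cube");
if (_activity.Setup())
{
StartCoroutine(DoProcess());
}
}
IEnumerator DoProcess()
{
// Give Setup() a frame to finish initializing the instantiated object.
yield return new WaitForEndOfFrame();
if (_activity != null && _activity.CanProcess())
{
_activity.Process();
}
StartCoroutine(DoCleanup());
}
IEnumerator DoCleanup()
{
// Let Process() (and its render coroutine) complete before destroying the object.
yield return new WaitForEndOfFrame();
if (_activity != null)
{
_activity.Cleanup();
_activity = null;
}
}
}
}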
Things to notice here: we use coroutines for calling DoProcess() and DoCleanup(). For DoProcess(), we let the Setup() method do a lot of the work, so yielding on WaitForEndOfFrame() gives Setup() a frame to finish any initialization needed for the GameObject instantiation. The coroutine that calls DoCleanup() serves the same purpose: it allows the Process() method to fully run before the GameObject is cleaned up.
Attach this MonoBehaviour to the root “ThumbGen” object, run it, and the object’s thumbnail will be generated.
You may need to place the “Stage” object in a position that would result in a more pleasing thumbnail, but that’s on you.
When you ran the scene, you may have caught a glimpse of the instantiated object before it gets destroyed by the activity.
After running the scene and getting a thumbnail generated, you’re probably saying, “But that’s exactly what the manual scene setup does, and even with far less code”. While this is true, the ObjectThumbnailActivity sets us up to repeatedly load objects without us having to delete the current object and place a new one. We’re getting to that next.
Multiple Object Loading
Now that we have a way to load/unload our objects, we still need a way to process all of them automatically. There are a few ways we can do this, and we’ll go over them below.
First is simply loading the objects in a loop. This is simple to implement, but it can pause your game for a significant amount of time while it loads all the objects in the list, depending on how many items we plan to take thumbnail images of.
Second is multi-threading. This is the ideal solution for this situation, as thumbnails can load in the background while your game is loading, and the game would show no signs of slowing down. It's possible to do this in Unity, but it requires the newer Jobs system, and the solution is far more complicated. This article does not cover that.
Third is using Unity's coroutine system. This solution is an "in-between" of the two solutions above. Thumbnail generation occurs on the same thread as the game, but the workload is split up at a specified interval so that the game remains playable while work is done intermittently. This is the solution we'll go with in this article, as it strikes a good balance between implementation complexity and end-user experience.
Per-Frame Processor
Lucky for us, we already have an ObjectProcessor using coroutines, from the section above.
This means we can use that class as the basis for our new class. We just need to add a way to repeatedly process a list of objects.
Here’s the implementation of the per-frame processor, called the MultiObjectProcessor:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
namespace ThumbGenExamples
{
public class MultiObjectProcessor : MonoBehaviour
{
private List<ObjectThumbnailActivity> _activities;
private ObjectThumbnailActivity _curActivity;
void Awake()
{
_curActivity = null;
_activities = new List<ObjectThumbnailActivity>();
ThumbnailGenerator2 thumbGen = GetComponent<ThumbnailGenerator2>();
string[] objectResourceNames =
{
"objects/Cube",
"objects/Cylinder",
"objects/Capsule",
"objects/Sphere"
};
foreach (var name in objectResourceNames)
{
var thumbActivity = new ObjectThumbnailActivity(thumbGen, name);
_activities.Add(thumbActivity);
}
}
void Update()
{
if (_curActivity == null)
{
if (_activities.Count > 0)
{
_curActivity = _activities[0];
_curActivity.Setup();
StartCoroutine(DoProcess());
_activities.RemoveAt(0);
}
}
}
IEnumerator DoProcess()
{
yield return new WaitForEndOfFrame();
if (_curActivity == null)
yield break;
if (!_curActivity.CanProcess())
yield break;
_curActivity.Process();
StartCoroutine(DoCleanup());
}
IEnumerator DoCleanup()
{
yield return new WaitForEndOfFrame();
if (_curActivity != null)
{
_curActivity.Cleanup();
_curActivity = null;
}
}
}
}
First thing to note is a quick reminder that this is a MonoBehaviour. This is needed so that we can leverage the coroutine system.
Awake() sets up the list of thumbnail activities to process, and initializes the current activity to null.
The Update() function then checks whether the current activity is null. If it is, the processor isn't doing anything, so the function initializes the next activity: it checks that the activities list is not empty, sets the first activity as the current activity, removes it from the list, calls its Setup() directly, and starts a coroutine to execute the activity's Process() function. Once Process() has completed, the cleanup coroutine sets the current activity back to null, so that Update() can pick up the next activity in the list.
On the “ThumbGen” object in the Unity Inspector, remove the ObjectProcessor component, and add the MultiObjectProcessor component.
Run the scene. This time, you may notice that the multiple objects in our list appear and disappear in a split second, similar to our original ObjectProcessor.
Check the output directory, and notice the four thumbnails that have been created.
Four thumbnail output images.
That’s it! That’s pretty much the basics of the thumbnail object processing functionality. Below is a discussion of the design, as well as a few more improvements that you may want to make to the code.
Design
All the work above may seem like over-engineering, and to an extent, it is. It really depends on what level of flexibility you need for your own project. If you simply need to set up an object, snapshot it, and clean it up, why not just run through a for loop or something? Well, the design above follows a set of principles known in software engineering as SOLID.
Okay, here is what the code would look like if it were all thrown into a single class:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
namespace ThumbGenExamples
{
public class ThumbGenProcessor : MonoBehaviour
{
private List<string> _objectResourceNames;
private GameObject _curObject;
public RenderTexture TargetRenderTexture;
public Camera ThumbnailCamera;
public int ThumbnailWidth;
public int ThumbnailHeight;
public Texture2D TextureResult { get; private set; }
/// <summary>
/// If this is not null or empty, the texture is exported as a png to the file system.
/// </summary>
public string ExportFilePath;
void Awake()
{
// Initialize defaults here; Unity discourages defining
// constructors on MonoBehaviours.
ThumbnailWidth = 256;
ThumbnailHeight = 256;
_curObject = null;
_objectResourceNames = new List<string>()
{
"objects/Cube",
"objects/Cylinder",
"objects/Capsule",
"objects/Sphere"
};
}
void Update()
{
if (_curObject == null)
{
if (_objectResourceNames.Count > 0)
{
string resourceName = _objectResourceNames[0];
if (!string.IsNullOrWhiteSpace(resourceName))
{
GameObject prefab = Resources.Load<GameObject>(resourceName);
GameObject stage = GameObject.Find("Stage");
_curObject = GameObject.Instantiate(prefab, stage?.transform);
Camera cam = ThumbnailCamera; // already a Camera; no GetComponent needed
cam.transform.LookAt(stage.transform);
StartCoroutine(DoProcess(resourceName));
}
_objectResourceNames.RemoveAt(0);
}
}
}
IEnumerator DoProcess(string resourceName)
{
yield return new WaitForEndOfFrame();
if (_curObject == null)
yield break;
if (ThumbnailCamera == null)
yield break;
string filename = string.Format("render_objectloader_{0}", resourceName);
Render(filename);
StartCoroutine(DoCleanup());
}
IEnumerator DoCleanup()
{
yield return new WaitForEndOfFrame();
if (_curObject != null)
{
GameObject.Destroy(_curObject);
_curObject = null;
}
}
private void AssignRenderTextureToCamera()
{
if (ThumbnailCamera != null)
{
ThumbnailCamera.targetTexture = TargetRenderTexture;
}
}
public void Render(string filename)
{
StartCoroutine(DoRender(filename));
}
IEnumerator DoRender(string filename)
{
yield return new WaitForEndOfFrame();
ExecuteRender(filename);
}
private void ExecuteRender(string filename)
{
if (ThumbnailCamera == null)
{
throw new System.InvalidOperationException("ThumbnailCamera not found. Please assign one to the ThumbnailGenerator.");
}
if (TargetRenderTexture == null)
{
throw new System.InvalidOperationException("RenderTexture not found. Please assign one to the ThumbnailGenerator.");
}
AssignRenderTextureToCamera();
Texture2D tex = null;
{ // Create the texture from the RenderTexture
RenderTexture.active = TargetRenderTexture;
tex = new Texture2D(ThumbnailWidth, ThumbnailHeight);
tex.ReadPixels(new Rect(0, 0, ThumbnailWidth, ThumbnailHeight), 0, 0);
tex.Apply();
TextureResult = tex;
}
// Export to the file system, if ExportFilePath is specified.
if (tex != null && !string.IsNullOrWhiteSpace(ExportFilePath) && !string.IsNullOrWhiteSpace(filename))
{
// Ensure the export directory itself exists. (GetDirectoryName(ExportFilePath)
// would only give us its parent directory.)
if (!System.IO.Directory.Exists(ExportFilePath))
{
System.IO.Directory.CreateDirectory(ExportFilePath);
}
foreach (char c in System.IO.Path.GetInvalidFileNameChars())
{
filename = filename.Replace(c, '_');
}
string finalPath = string.Format("{0}/{1}.png", ExportFilePath, filename);
byte[] bytes = tex.EncodeToPNG();
System.IO.File.WriteAllBytes(finalPath, bytes);
}
}
}
}
There’s a lot going on here, and sure, it works, but it takes longer to pick the code apart and figure out what’s doing what. It is also more difficult to test and debug just the thumbnail code, just the object creation code, or just the multiple-object handling code.
The classes we implemented above separate the concerns of our final goal. One class is solely responsible for taking a snapshot of an object, one class is solely responsible for creating and destroying an object, and one class is solely responsible for looping through a list of objects.
In the section below, we’ll go over a way to extend the MultiObjectProcessor (our per-frame processor) to do more than just take thumbnail images of objects. The generic IGameObjectActivity interface makes this architecture a lot more reusable, extensible, flexible, and testable.
Optional Improvements
Taking it a step further…
Focus Point
I have to mention the "focus point". This is optional, but it can improve the resulting thumbnail image. It's very possible, and maybe very likely, that the ideal focus point of an object for a thumbnail image is not the same location as the object's position.
Here’s an example from one of my game’s models:
Thumbnail without focus point.
Thumbnail with focus point.
The artillery unit’s origin is at the base of the model so that it can interact with the terrain in the game without the origin having to be artificially moved at runtime, which would add unnecessary transform calculations. This is a convention I decided on, and it may not be what your game uses. But it has the adverse effect that, for thumbnail generation, the visual center point at the object’s origin sits too low. Ideally, the visual center point would be at the base of the turret instead.
If this is the case, there are a few techniques we can use. I'll mention three of them, and we'll use the last one.
One technique is to use the average point of all vertices in the 3D model. This can be time-consuming to calculate, and may not necessarily produce the “aesthetic center point” of the model.
Another technique is to take the center of the object's bounding box, but this presents the same problem of possibly producing a center point that's not aesthetically pleasing.
Lastly, we can use a focus point object. This is an Empty Object, which just contains a Transform for the sole purpose of informing our ThumbnailGenerator where to look. This is the technique we’ll use.
Parent an Empty GameObject to each of the target objects to serve as the focus point for the camera, and name it “ThumbFocus”.
Add the following block of code to the Setup() function in ObjectThumbnailActivity.cs. Basically, replace the existing camera LookAt() call with this:
Transform camTarget = stage.transform;
List<Transform> transforms = new List<Transform>();
GameUtils.GetTransformsWithNameInHierarchy(_object.transform, "thumbfocus", ref transforms);
// Use the first "ThumbFocus" transform found, if any.
if (transforms.Count > 0 && transforms[0] != null)
{
camTarget = transforms[0];
}
cam.transform.LookAt(camTarget);
You’ll also need the following utility function:
public static void GetTransformsWithNameInHierarchy(Transform root, string name, ref List<Transform> transforms)
{
if (root == null)
return;
foreach (Transform t in root)
{
if (string.Compare(t.name, name, true) == 0)
transforms.Add(t);
GetTransformsWithNameInHierarchy(t, name, ref transforms);
}
}
This searches the object's hierarchy for an Empty Object named "ThumbFocus", and if one can't be found, the camera just falls back to looking at the stage position.
Refactor ObjectThumbnailActivity to IGameObjectActivity
In the Design section above, I mentioned that the implementation we went over was over-engineered. It certainly is for the immediate use-case, but let’s look at how that architecture works for us and pays itself off with more complex use-cases.
In fact, I had such a use-case, and it’s the very reason why I wanted to write this article. For my turn-based strategy game, I needed thumbnails for not only the units (tanks, planes, ships), but for the structures (factories, seaports, airports), and also for the terrain (field, forest, road, mountain, beach, water, and bridge).
Each of these different types (units, structures, and terrain) all have specific requirements for setting up the object before taking the thumbnail snapshot of it. There are different reasons for this.
For one, the terrain types generate their object geometry in different ways. For example, the forest tile type is basically the field type with some tree geometry, and the beach and bridge types require the water model to indicate its relation to the water.
if (terrainType == TerrainType.Field)
{
RenderField();
}
else if (terrainType == TerrainType.Forest)
{
RenderField();
RenderTrees();
}
...
if (terrainType == TerrainType.Water)
{
RenderWater();
}
else if (terrainType == TerrainType.Bridge)
{
RenderWater();
RenderBridge();
}
else if (terrainType == TerrainType.Shore)
{
RenderShore();
RenderWater();
}
For another, my seaport requires additional logic because I have two variants of it: a straight one and a diagonal one. I only want to show the straight model, so I need to hide the diagonal.
if (model.Name.Contains("diagonal"))
{
HideModel(model);
}
RenderSeaportModel();
The above is just pseudocode for how all these issues were resolved, but I wanted to point out that there are indeed uses for supporting this architecture.
The best part about this is that you can take advantage of this pattern for other things that need per-frame processing without creating a separate thread. You would write a new concrete class that implements the IGameObjectActivity interface, instantiate it, and add it to the queue. Here is IGameObjectActivity.cs:
/// <summary>
/// Interface for any activity to perform
/// </summary>
public interface IGameObjectActivity
{
/// <summary>
/// Run any setup, or bail if setup fails
/// </summary>
/// <returns></returns>
bool Setup();
/// <summary>
/// Allow the runtime activity to check
/// if the process can be performed
/// </summary>
/// <returns></returns>
bool CanProcess();
/// <summary>
/// Actually perform the activity.
/// </summary>
void Process();
/// <summary>
/// Run any cleanup necessary to advance to a next activity
/// </summary>
void Cleanup();
}
So, for example, for the pseudocode above, I would have three separate activities:
public class UnitThumbActivity : IGameObjectActivity
{
... // Implements unit-specific thumbnail behavior.
}
public class StructureThumbActivity : IGameObjectActivity
{
... // Implements structure-specific thumbnail behavior.
}
public class TerrainThumbActivity : IGameObjectActivity
{
... // Implements terrain-specific thumbnail behavior.
}
Similarly, we can create a new class ThumbnailActivity, which is EXACTLY the same as ObjectThumbnailActivity, but now implements IGameObjectActivity. It’s exactly the same because I had already named the functions to implement the interface.
/// <summary>
/// This class is exactly like ObjectThumbnailActivity.cs, but
/// this now implements the IGameObjectActivity interface.
/// </summary>
public class ThumbnailActivity : IGameObjectActivity
{
... // Implementation is exactly the same as the existing ObjectThumbnailActivity.
}
And now in the MultiObjectProcessor, the _activities list contains objects that implement the IGameObjectActivity interface:
private List<IGameObjectActivity> _activities;
...
_activities = new List<IGameObjectActivity>();
...
Our per-frame processor now operates on any object that implements the interface IGameObjectActivity. It is no longer limited to just rendering thumbnails, or one type of thumbnail, for example.
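For example, the processor's Awake() could queue a mix of activity types in one pass (a hypothetical sketch; the constructor arguments are illustrative, not from the repo):
// All of these implement IGameObjectActivity, so the same
// MultiObjectProcessor queue can drive them.
_activities.Add(new ThumbnailActivity(thumbGen, "objects/Cube"));
_activities.Add(new UnitThumbActivity(thumbGen, "units/Tank"));
_activities.Add(new StructureThumbActivity(thumbGen, "structures/Seaport"));
_activities.Add(new TerrainThumbActivity(thumbGen, "terrain/Forest"));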
Conclusion
And there you have it. The entirety of the project is available on github. While you can probably just take that code wholesale, I took the opportunity in this article to start from manual thumbnail creation in Unity to automating the entire process, with a little taste of Dependency Injection (DI) and good ol’ software engineering practices along the way.
After finishing up the post about my impressions of various paint programs a couple of years ago, I realized that I had not mentioned Affinity Photo. At the time, Affinity Photo was gaining a lot of ground in the competition against Photoshop, and I admit that it was an oversight to not include it. Now, Affinity Photo is considered to be a comparable alternative to Adobe Photoshop.
As of this writing, Affinity is offering 90-day free trials for their 3 products: Photo, Designer, and Publisher. And each is also selling at 50% of the full price. Photo is clearly the direct competitor to Adobe Photoshop, as Designer is the direct competitor to Illustrator, and Publisher is the direct competitor to InDesign. I’m only looking at Photo for this post.
Immediately after launching Affinity Photo, I went straight for the brush tool, laid down some strokes with the stylus, and was underwhelmed. No line variance. My next thought was that paint programs never have their default size jitter set to stylus pressure, and it’s just always a minor inconvenience to set this up. I did the usual driver version checks and reboots, and finally discovered I had to enable Windows Ink for this. This was a little bit unintuitive compared to most of my other paint programs in which I usually have to have Windows Ink disabled instead.
Despite the size jitter defaults, the brush types feel good “out of the box”. They feel very fluid and lightweight.
I like the "preview mode" of the tools that I used, which indicates how the tool will be applied before actually applying it. I can see how a particular tool will affect the image before committing the action.
Generally speaking, Affinity Photo has a lot more contextual modes that just make sense for every action you do in the program; Photoshop, by contrast, has slowly grown more disjointed over the years, and its user base has basically been forced to adopt its quirks and just "live with" them.
I just realized I can change the color chooser to a triangle, which I prefer over the sliders. The cursor on the color chooser feels a little sluggish though, so switching between slight variations of saturation and luminance can sometimes be a challenge. The rectangular color chooser seemed to exhibit the least amount of lag.
Another thing I found out is that you can drag a document outside of the main program window for doing reference or comparison work. Looks like Krita cannot do that, but I haven’t verified this with the other paint programs.
Having quick mask and pen tools available in Affinity Photo already makes it a better image editing program than Photoshop Elements, putting it more on par with full-blown Photoshop. These are major pluses, considering the price point.
And one other notable and refreshing feature is that you can toggle the toolbar directly below the menu bar to save valuable screen space when editing; something that Photoshop Elements seems to be adamantly against!
Here’s another screenshot with the collapsed toolbar and triangle color chooser.
Affinity Photo is not without its problems. I ran into various instances where I had a marquee or crop tool selected, yet I couldn’t move it until I re-opened the document. Maybe I was in some sort of locked state or something, but if I was, I couldn’t find an indicator of such a state.
Selecting child layers was a little finicky. Sometimes I would select a child layer, like a mask of an adjustment layer, wanting to delete it, only to find out the parent was selected, and it would be deleted too, which caught me off guard. Layer selection could use a little improvement.
And sometimes zooming in/out with the keyboard shortcuts stops working, although the scroll zoom continues to work. There have also been a few times when selection widgets, like the crop tool, wouldn't drag, but they started working again after I restarted the program.
Also, it’s worth mentioning that Photoshop users who are already heavily invested in plugins may not want to switch over to Affinity Photo. While some plugins can be loaded into Affinity Photo, others will not work.
Despite the list of bugs, some of them fairly egregious, I'm still on board with Affinity Photo's workflow over Photoshop's workflow.
Affinity Photo. From upper left to lower right: sketch, ink over sketch, ink, paint, ink over paint, ink over paint on background
If I were to compare Affinity Photo to all the other paint programs that I use, I'd say Affinity Photo has better stock paint tools than Photoshop Elements, but not quite the variety of brush types that Krita provides. It's got the snappiness and lightweight feel of SAI Paint Tool, and it definitely has better image editing tools than any of the dedicated painting tools like Corel Painter, the aforementioned SAI, and arguably Krita. Overall, I am impressed with Affinity Photo. It feels like I can get more done, more intuitively, in this program than in any other paint program. I'm looking to make this my primary paint program moving forward.
For the game that I’m currently working on, I decided to implement support for Unity’s AssetBundles. Right out of the gate, I jumped into the API and started implementing it in my game, since it seemed so straightforward. Lo and behold, I hit the wall of failure. The concept is clear, but getting it all implemented such that it didn’t slow down my day-to-day development flow was a challenge. Let me explain how I got through things. But first…
What are AssetBundles?
Go ahead and skip this section if you already know what they are, but for the rest of us, here is a brief explanation.
An AssetBundle is a library of assets that can be stored as a file hosted separately from the game build. Before AssetBundles became accessible for developers to work with, any assets used in your game (e.g. images, meshes, materials, animations, audio) needed to be included in the project as part of the game build. That's fine and all, but as the number of assets in a game grows, more and more possibly unused assets get loaded into memory unnecessarily, wasting system resources and bogging down your game build, not to mention your entire system.
To alleviate this problem, developers would use the Resources folder to manage their own asset loading, as Unity treats this folder differently from the rest of the assets. This follows a more traditional resource-management convention: developers load their assets "on-demand", and can therefore be pickier about which assets are in RAM at any moment during the game's run. It also means that developers are responsible for unloading those assets from RAM.
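A minimal sketch of that on-demand pattern (the resource path here is hypothetical):
// Load a prefab on-demand from any folder named "Resources".
GameObject prefab = Resources.Load<GameObject>("Enemies/Goblin");
GameObject instance = Object.Instantiate(prefab);
// ... later, when the asset is no longer needed:
Object.Destroy(instance);
Resources.UnloadUnusedAssets(); // frees assets that no longer have references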
Gamedevs still use the Resources folder (I've seen the files in a few popular shipped Unity games), but Unity Technologies has since identified several problems with using the Resources folder in a production release, which can be found here.
So Unity Technologies recommends that AssetBundles are the way to go for packaging content, especially if you plan on delivering that content alongside your game build, or some time in the future with patches and content updates. There’s plenty more information in Unity’s documentation on AssetBundles, so I’m just going to move on now to the more “in-practice” side of things; areas that don’t really seem to be covered very well in Unity documentation.
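For reference, though, the raw runtime API at the center of all of this is small. A minimal sketch of loading an asset from a locally stored bundle (the bundle and asset names match the demo we'll walk through below):
// Load the bundle file itself, then pull an asset out of it.
AssetBundle bundle = AssetBundle.LoadFromFile(
System.IO.Path.Combine(Application.streamingAssetsPath, "cube-bundle"));
if (bundle != null)
{
GameObject prefab = bundle.LoadAsset<GameObject>("MyCube");
Object.Instantiate(prefab);
bundle.Unload(false); // false keeps the loaded assets alive
}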
Multiple Sources
The challenge with AssetBundles isn't so much the actual authoring and loading of them. It's that every time you make a change to an asset in an AssetBundle, you have to rebuild the bundle. This completely kills your iteration time on those assets. Is there no way around this!? Well… there are ways to expedite testing the assets you've included in an AssetBundle, but which method you should use depends on what stage of implementation your game is in.
You could be testing the asset:
In the Editor, as a loose asset. This tests the asset itself, but not the AssetBundle that contains it.
In the Editor, as an asset in an AssetBundle. This tests the asset in the AssetBundle, but in the context of the Editor. Typically, the players of your game will only have a standalone build, not something tied to the Editor.
In a published Development Build, from an AssetBundle. This tests the asset in the AssetBundle in a standalone build, with some extra debug tools, to track down any errors that may not show up when testing in the Editor.
In a published Release Build, from an AssetBundle. This tests the asset in the AssetBundle in a standalone build which would be released to players, with absolutely no extra debug help, other than maybe some output logs. If a bug happens here that cannot be replicated in one of the previous 3 cases, it can be difficult to fix.
These four ways to test an asset and its AssetBundle can get confusing really quickly. To alleviate this confusion, Unity provided the AssetBundleManager, a set of scripts that wraps the AssetBundle API in a nicer high-level API, abstracting away the various ways an asset can be loaded from an AssetBundle. Well, that's nice!….. If it worked.
AssetBundleManager
“If it worked? What do you mean?”
Heh. I eventually got it working, and you can scroll down to see how if you want to skip this rant.
I’m running Unity 5.6.3f, which was released in August 2017. That was a couple of years ago, as of this writing. Since then, AssetBundleManager has been marked as deprecated by Unity, yet there is no alternative* in its place (other than dealing with the AssetBundle API directly), which is quite confusing to those of us just starting out with it. Straight off of bitbucket, the AssetBundleManager will not work without a few changes and configurations. This puts developers who would like to develop the "right" way by using AssetBundles in a sort of limbo, and I wouldn't blame any developer who feels shortchanged by this sort of messaging from Unity's documentation.
So, the only course of action is to fix the AssetBundleManager to fit my needs, as suggested by Unity, but that is by no means anything I had planned on spending my time on.
* There is a new Addressable Asset System which provides far more than what AssetBundleManager gives. It has recently gotten out of “preview” status, so adoption of this new system is still happening.
Okay, end rant.
Back to Basics
I certainly want to utilize the benefits of Unity’s AssetBundleManager. It has everything I need to alleviate the inefficiencies of testing AssetBundle builds. I made the mistake of immediately integrating the AssetBundleManager into my game, and that seriously set me back, because I started to get so confused with the various asynchronous load handlers. After much failure, I had to get back to basics, so I downloaded Unity’s AssetBundleDemo and started from there.
What follows is pretty much a walkthrough of the demo.
Get the AssetBundleDemo from bitbucket here. It contains the AssetBundleManager.
Create a new project, and put the AssetBundleDemo contents into the project’s Assets folder.
Also, get the AssetBundle Browser from github as recommended on this Unity page. Drop this in your project’s Assets folder too.
The scene of interest is AssetLoader.unity. Load that up, and we’ll step through what to do.
First, this is what the AssetBundleManager menu looks like.
The three we’ll be using are:
Build Assets
Simulation Mode
Local AssetBundle Server
I don’t think I had a need for "Build AssetBundles from Selection", so I have not tried it. And I have no idea what "Build Player (for use with engine code stripping)" is; I tried it, did not get any good results, and found it unnecessary.
This is what the AssetBundle Browser UI looks like:
The Build page in the Asset Bundle Browser Tool
Follow Along
This assumes you are developing on Windows, so, sorry in advance to all of you on other development platforms.
In the following tests, we should see a Unity cube appear on the screen. If it doesn’t, then the AssetBundles failed to load.
The asset of interest is called MyCube, and it is assigned the “cube-bundle” AssetBundle group.
In the scene, the Loader object has a LoadAssets.cs script, which sets up the AssetBundleManager and attempts to load the AssetBundle and the cube object from that AssetBundle.
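Roughly, that loading code looks like this (per the demo's LoadAssets.cs, trimmed down here):
IEnumerator InstantiateGameObjectAsync(string assetBundleName, string assetName)
{
// Ask the AssetBundleManager for the asset; this kicks off the
// bundle load and the asset load.
AssetBundleLoadAssetOperation request =
AssetBundleManager.LoadAssetAsync(assetBundleName, assetName, typeof(GameObject));
if (request == null)
yield break;
yield return StartCoroutine(request);
// Instantiate the loaded prefab, if the load succeeded.
GameObject prefab = request.GetAsset<GameObject>();
if (prefab != null)
GameObject.Instantiate(prefab);
}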
The Unity Editor
Editor with No Changes
The very basic test you can do is in the Editor. Press Play. Notice that no cube shows up.
This is expected. Since we have not built an AssetBundle yet, there is nothing to load.
Editor with Simulation Mode
The next test you can do is activate Simulation Mode, and press Play. The cube shows up.
Simulation Mode is great for very quick tests of the assets that are labeled for an AssetBundle; use this mode when iterating on edits to an asset. In reality, though, this mode doesn't even check the AssetBundle itself. It bypasses that entire system and simply loads the asset directly from your Assets tree. This isn't a very comprehensive test for a final build, so be warned.
Editor with the Local Server
Next test, turn off Simulation Mode, and turn on the Local AssetBundle Server.
Open the Windows Task Manager and look under the Unity processes; notice mono.exe. That's the Local AssetBundle Server.
Play the game, and notice that nothing loads. See the error message. The game is connected to the local server, but it fails to download the AssetBundles.
This is because we still have not built any AssetBundles, so we’ll do that next. Build AssetBundles, and keep the Local Server enabled.
Running the game in the Editor now shows the Cube.
Great! So we can now load the cube using the AssetBundleManager in the Editor, using either Simulation Mode or the Local Server when AssetBundles are built.
Standalone Builds
Now let’s publish some builds. For now, disable the Local Server from the Editor menu.
For simplicity’s sake, we are going to use Unity’s StreamingAssets folder to automatically have the AssetBundles copied to the build when the Player is built.
For convenience, the AssetBundle Browser has a check box for pushing to StreamingAssets, so we’ll use that to build our AssetBundles instead of the AssetBundleManager menu.
Change the output path from AssetBundles/WindowsStandalone to AssetBundles/Windows. The LoadAssets script is expecting this path, simply due to the way AssetBundleManager has been written.
All of this special-case treatment of the AssetBundleManager versus the AssetBundle Browser, in regards to StreamingAssets and output paths, was definitely part of my confusion. It's as if Unity Technologies just stopped what they were doing and checked it in without a second thought about how the output would be affected. Although there's adequate warning about the AssetBundleManager being deprecated, I'm sure this has led to a lot of frustration for anyone running this demo.
Development Build with StreamingAssets
Click on Build in the AssetBundle Browser. This copies the AssetBundles to the StreamingAssets folder. (As a side note, notice that the StreamingAssets output is not contained in a /Windows subfolder. This is something that you would also have to change in the AssetBundle Browser scripts if you want to keep the platform-specific folder organization.)
Okay, make a standalone build, with the Development Build option on (I like to publish my builds to an "/out" directory).
Verify that the build has written out the AssetBundles in the StreamingAssets folder.
Run the build. Notice no cube appears.
Also notice the debug error log at the lower left. This is very helpful in debugging why AssetBundles don't load, among other errors.
This is the same error as the one we got when running the game in the Editor.
Development Build with the Local Server
Activate the Local Server from the Unity Editor, and then re-run the standalone Development Build.
Now, the cube loads. This development build is communicating with the Local AssetBundle Server. The content that is being served is contained in the AssetBundles directory in the root of your project, and NOT the StreamingAssets folder. This is the snippet of LoadAssets.cs that uses the Local Server:
// Initialize the downloading URL.
// eg. Development server / iOS ODR / web URL
void InitializeSourceURL()
{
// ...
#if DEVELOPMENT_BUILD || UNITY_EDITOR
// With this code, when in-editor or using a development build: always use the AssetBundle Server
// (This is very dependent on the production workflow of the project.
// Another approach would be to make this configurable in the standalone player.)
AssetBundleManager.SetDevelopmentAssetBundleServer();
return;
#else
// ...
#endif
}
Convenient for testing, but not at all reflective of a completely independent standalone build.
Release Build
And finally, create a full Release build (Disable the Development Build option, and rebuild to a Release executable).
The contents of this build should be pretty much the same as the Development Build. The main difference is that we are now looking at the StreamingAssets folder, regardless of whether the Local AssetBundle Server is running.
Run the build. Notice no cube shows.
Open the output_log.txt, notice the error.
It couldn’t load the manifest. Whoops, I just threw in new vocabulary. The manifest AssetBundle is the entry point into loading the rest of the assets in that AssetBundle build. In this case, it holds the name of the platform: "Windows".
The Fix
This brings us back to modifying the LoadAssets script. Open it up in an IDE (You’re using Visual Studio, right? Because you should be…).
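The change I made is in InitializeSourceURL(), so that a non-development build reads its bundles from StreamingAssets instead of the development server. A sketch, assuming the demo's SetSourceAssetBundleURL() helper (you may need to adjust the path to match your output folder layout):
void InitializeSourceURL()
{
#if DEVELOPMENT_BUILD || UNITY_EDITOR
// Development builds and the Editor keep using the Local AssetBundle Server.
AssetBundleManager.SetDevelopmentAssetBundleServer();
return;
#else
// Release builds load the AssetBundles that were copied into
// StreamingAssets at build time.
AssetBundleManager.SetSourceAssetBundleURL(Application.streamingAssetsPath + "/");
#endif
}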
What this does is set the AssetBundle path to the root of the StreamingAssets folder.
Before rebuilding the release build with the above script change, be sure to close output_log.txt if you have it open. Unity can’t build to the same output directory if any file in that path is open.
Re-build the release build, and run the executable. The cube shows up!
The Unity documentation for the AssetBundle Browser states that the StreamingAssets path is useful for testing, but should not be used in production. I’m not sure why this would be the case though; I plan on using StreamingAssets for my project.
What I described above is only one solution to getting this to work, so you may have to modify the solution for your own build pipeline if you choose to use these scripts as a base.
Review
Here are some guidelines for when to use what mode of testing:
Test in Simulation Mode when iterating on loose assets that are assigned to AssetBundles in the Editor.
Test with the Local Server to test published AssetBundle files (they must be published to the AssetBundles folder at the root of your project).
Test with the Local Server (or published AssetBundles in StreamingAssets) also with published Development Builds.
Test the Final Release Build and use output_log.txt to debug any problems at this point. Hopefully, the previous stages ensured that this stage is not a problem. But since loading goes through completely different code paths, there is still a chance that the AssetBundles fail to load at any stage of development.
Going through all these modes should give you a clue on how and when to use these different techniques. Throughout the development of your game and as you create more content, you’ll want to test on the simplest level, via Simulation Mode, all the way to the published non-development Release Build, with the output_log.txt file.
As you can see from the entire process above, AssetBundle support adds a significant amount of overhead to your development workflow. You can always build tools to automate some things, like auto-assigning AssetBundle ids to certain assets, but ultimately, you (or someone else) will still have to test the assets at every stage of development, from inside the Editor to a full-blown Release build. This essentially introduces a lot more chances for bugs to show up, and I hate bugs that show up in a Release build but cannot be reproduced in a Debug build. Ugh!
NOTE: This article only covers the very basic loading method for AssetBundles that get locally published alongside the game build, using StreamingAssets. AssetBundles can also be loaded from other sources, like a web server, for example.
The Future of AssetBundles
In recent months, Unity Technologies has spent a large amount of effort getting the new Addressable Asset System to a solid, production-ready state. The Addressables system is meant to alleviate all the issues that AssetBundles have: it replaces the AssetBundleManager and Browser and adds a lot more robustness to the inherently difficult problem of resource management. Some developers have had success with the current state of Addressables, and others have not.
All of this is in the name of preparing your game for a "commercial" release. Of course, if you're not looking to commercially release your game, I probably wouldn't recommend using AssetBundles. It's just a large amount of effort that takes away from where your greatest effort should be: making the actual game. In that case, the Resources folder and direct references are still suitable options.
So, whether or not you choose to integrate AssetBundle support in your game, remember to…
Since my last entry, in which I found myself at a development crossroads, I’ve taken some time to explore different paint tools. For Horde Rush and parts of Number Crunchers, Adobe Photoshop Elements 9 (PSE9) and Paint.NET were my tools of choice, because I was most familiar with them, and I already had a license for PSE9. When you’re a starting indie developer, you typically need to go with what you know rather than spend hours or days trying to learn something. That said, I’m not a fan of the results I generated, overall. It simply got the job done. And when I look at some AAA art from games like Warcraft (or anything from Blizzard for that matter), indie art like Bastion, Pinstripe, and Cuphead, and basically, ANYTHING on ArtStation, I can’t help but feel like I’m missing something; well, besides my lack of practice and experience in the last decade or so.
With some of the tools I mention here, I went through the entire process, from sketches, inks, to paint and background, using the hero character in Horde Rush as the subject. Unlike other review articles, I spent the most time with the tools that continued to vibe with me, and only spent as much time as I wanted with the other tools until I was unsatisfied with the experience. So yeah, not objective at all. These are pretty much my early impressions, and nothing more.
Photoshop Elements 2018
I’ll be honest. The only reason I jumped onto PSE2018 (and this entire research project, for that matter!) is because Adobe had it on sale. Since I'm using an old version of PSE, I decided I'd update to the latest version for a fraction of the original price.
In all fairness, of the tools I’m comparing, PSE is definitely a photo editing tool, and not really meant to be a paint tool. Regardless, I’ve used it as my primary paint tool throughout Horde Rush development. As such, it will continue to be my all-in-one graphics tool, but not necessarily my favorite. Also, it is in the same price range as the other tools, so that’s the reason I have it in consideration here.
I’ve never really been completely happy with the results I got out of PSE. But with PS being the industry leader, blah blah blah, I have stuck with the brand. Well, its lighter, less powerful version, at least. I used to play around with an old copy of Photoshop 6.0, before Elements was a thing, so I'm fairly comfortable with the user experience. I get by doing game development artwork with PSE, yet I still miss the pen tools in PS. On top of that, I've never been too knowledgeable about Photoshop's plethora of brushes, so I've only really stuck with the basic tools.
Once I fired up PSE2018, I was immediately reminded that this program is for novice-to-intermediate photographers, scrapbookers, and other such hobbyists. The interface is significantly more basic than previous iterations, but at least it's clean. PSE2018 still has that annoying eLive/Quick/Guided/Expert menu bar that occupies valuable vertical real estate, and I doubt Adobe is planning to make it collapsible or movable any time soon. The first thing I did was click on the "Expert" link at the top, and my mind was eased a little as I got back a number of controls and menus. As I started painting with my Wacom tablet, I noticed immediate unresponsiveness. Fantastic! Just what I'd expect from Adobe! Sarcasm aside, I knew I had to go through all the Windows 10 Pen, Tablet, and Touch settings to try to disable that insidious circle icon that appears whenever you start drawing. Just remember to disable Windows Ink anywhere you can; there's a setting in the Wacom Tablet Control Panel for this as well. It was a bit frustrating because I had been using PSE9 on the same machine without any issues, yet I had to go back into my tablet settings to fix things for PSE2018. After that, I felt a lot more at home with PSE2018.
Photoshop Elements 9. From upper left to lower right: sketch, ink over sketch, ink, paint, ink over paint, ink over paint on background
So, I didn’t draw/paint the above image with PSE2018. That was in PSE9. It turns out that PSE2018's brush tool wasn't using the Wacom tablet's sensitivity settings. After some Google searching, it turns out this is a big issue among Photoshop/Elements users. Toggling the Wacom Windows Ink setting helped some people; for others, like me, it did not. I spent hours fiddling around trying to get it to work. There should be no reason why PSE2018 is the ONLY paint program installed on my machine that does NOT use pressure sensitivity. I had just bought it… PSE2018 was a big fail for me, so I'm sticking with PSE9, since I didn't observe any advancement in PSE2018, and I've requested a refund from Adobe.
SAI Paint Tool
This tool blew my mind. I had never heard of it until I did some searches and watched some of the incredible artwork being created with it on YouTube.
I’m still on the 30-day trial, but the asking price of about 50 bucks (converted from Japanese yen) is quite reasonable. At first glance, the interface is antiquated, in a "Windows 98" sort of way, but very efficient and functional. Everything you need is where it should be, or where you can easily find it. The installation process is also pretty outdated, as the program doesn't get installed in the typical Program Files directory, and probably doesn't even use the /Users directory. But sometimes the old file structure is a good thing, because it's certainly more up front with the user: you don't have to deal with registry settings, etc.
I noticed absolutely no lag when I was using the tools. Of all the tools I’ve been playing around with, applying strokes in Paint Tool is as smooth as cutting warm butter. And I especially like the results I get with Paint Tool’s watercolor brushes. Also, the magic wand selection reminds me a lot of the quick mask tool in Photoshop; something I’ve always liked.
SAI Paint Tool. From upper left to lower right: sketch, ink over sketch, ink, paint, ink over paint, ink over paint on background
I’ve read somewhere on the interwebs that Paint Tool has some memory limitations as far as canvas size and brush count, but I can’t confirm this. One other possible red flag is that the tool hasn’t been updated since April of 2016. As much as I like the tool, lack of updates may turn me away, especially in a production environment.
Corel Painter Essentials 6
This program is a little more challenging to get comfortable with than the others. I really wanted to like this, and I was ready to drop 240 bucks on Amazon for the full version instead of Essentials, and that's even the sale price ($400+ original price)! Unfortunately, the more I used Painter Essentials, the more I disliked everything about it. It took me a while to realize this, and I'm glad I didn't immediately drop my 50 bucks on it.
I think my problem with Painter is that it’s WAY too close to traditional media. Having to choose the paper surface is a little bit too much for me as a game developer. It might be good if I was creating a Paper Mario knockoff or Kirby’s Art Paper Adventure, but otherwise, I just felt really hindered by its UI. It’s like Corel took every aspect of traditional media and included all the inconveniences of it too. The UI is just short of having a turpentine bottle and a moldy washcloth object to use in between brush selections.
I managed to settle in with a select few tools, but the brush selection can sometimes be daunting to anybody who hasn’t used the equivalent traditional media in the past. However, I’ve tried Painter a few times before and have always liked the mobility of the color picker, so there’s that.
Painter Essentials 6. From upper left to lower right: sketch, ink over sketch, ink, paint, ink over paint, ink over paint on background
Every time I laid down a stroke, I felt there was a bit of lag, though of course it depends on the brush too. I just felt like I was fighting the drawing tools, and they weren’t giving me the results I wanted. Perhaps it was the default settings that turned me off. But then, how can every other paint program manage decent defaults?
Painter is probably great for game concept artists, or traditional artists that are planning to transition to the digital medium. It just wouldn’t immediately fit into my gamedev workflow.
Krita
And now, the tool I’m gonna gush over. I tried Krita on an older laptop a few years ago. The results looked fantastic, but the performance was horrible, and I wasn’t sure whether to blame the laptop or the program, so I didn’t want to spend time learning Krita back then. Since then, the software has had time to mature, and it has established itself as a paint tool for artists, as is evident from the number of community-created resource bundles. Its UI is very navigable and efficient. I especially like the rich context menu, which gives you quick access to common brushes and a color picker, and I like that the keyboard shortcuts can be set up for Photoshop or SAI Paint Tool users. It also has animation! I haven’t used that yet, and I doubt it’s as good as Toon Boom for the purpose, but it’s cool that it’s included. And on top of all that, it’s open source! What’s not to like about this program!?!

Well, there are still performance issues, especially compared to the speedy UI in SAI Paint Tool; fortunately, my rig is powerful enough that they don’t show much. I also had to download a third-party watercolor brush set, and it’s still not as impressive as SAI Paint Tool’s, but I think it’s pretty good. I would like to start incorporating Krita into my normal gamedev content creation workflow.
Krita. From upper left to lower right: sketch, ink over sketch, ink, paint, ink over paint, ink over paint on background
Autodesk Sketchbook
And finally, Autodesk Sketchbook. Nope. Not gonna do the subscription model. As much as I respect Autodesk and all they’ve done for the digital creative arts, once they switched to the subscription model, my consideration for any of their products dropped off considerably. Sure, software nowadays is more of a service than a one-and-done, release/repeat product, but as I see it, as a part-time indie developer, I will never be in front of any of these tools on a consistent basis, so why should I pay consistently? But… I’ll give it to Autodesk for releasing Sketchbook on mobile. I love it on my old Samsung tablet, even though I never made full use of it, and even then, I sometimes switch over to Art Flow. Bottom line for Sketchbook Pro: I never even downloaded the trial, due to the subscription model.
Other Tools That I Haven’t Mentioned Above
GIMP – I haven’t used GIMP lately, but if you need a paint program, I guess it’s functional. The last time I used it (in fact, every time I try it), I found it clunky, slow, disjointed, and graphically buggy, but again, it gets the work done if you have no other options. Its feature set may be as rich, or almost as rich, as Photoshop’s, but good luck using it in a flow that keeps you efficient and doesn’t break your creative rhythm. But hey! It’s open source, so there’s that!
Paint.NET – This is my tool of choice for lightweight, quick edits that don’t require a lot of complexity. It will not necessarily allow you to produce fantastic art pieces, but it’s functional for doing graphical fixes and touch-ups without having as annoying an interface as GIMP. Its feature set is limited and is closer to MS Paint than Photoshop, but sometimes you need that simplicity. I sometimes refer to it as “MS Paint with Layers and Plugins”.
So there you have it: a comparison of four paint programs. I’m definitely leaning towards Krita because it’s powerful and free. How can you beat that? Photoshop Elements still has its place in the development workflow, though, as a general graphics editor; something that the more artsy-focused paint programs aren’t necessarily good at. But whatever paint tools you use for your game, remember…
Earlier this week, indie game developer Jake Birkett of Grey Alien Games published the YouTube video “You are spending too long making your game” (Alternate reddit source):
Since watching it, I’m taking a moment to step back and look at what I’ve done and where I’m heading. I estimate that it will take me another 2 years to finish the project that I’m currently working on.
My first reaction to this video was, “He’s right.” I’m a believer in “starting small”, and I would even preach it, so I can’t help feeling like a hypocrite if I can’t follow it myself. When I went full-time indie for a year, I did my best to make small projects with short dev cycles. I ended up releasing three mobile games: Power Tic Tac Toe, Number Crunchers, and Horde Rush. Each took progressively longer to implement, but each also had increasingly better visuals and production quality. While Number Crunchers gets some downloads to this day, most likely due to the familiarity of its source IP, Horde Rush was pretty much dead on arrival. It had a few downloads at launch, but it never gained a following, and it eventually got buried by the hundreds of other games released daily. Of course, I didn’t do any marketing for Horde Rush, and that’s definitely one of the many reasons it hasn’t taken off.
After starting to work full-time again, I changed gears and decided to work on a game that is a lot more ambitious than the three titles I’ve released; something with the level of quality of, say, a Game Boy/DS game. So far, it’s been moving along, and I’ve been having a lot of fun building it. I figured that if I’m going to work full-time again, I might as well work on something I’m absolutely passionate about, however long it takes to implement.
And then Jake’s video comes out. He mentions that “spending too long developing a game” has exceptions, the first being that if you have a day job and are working on your game on the side, the overly long dev cycle doesn’t apply, since income is still flowing. Regardless, the thought of spending two more years on one game sounds daunting, and having more than one release in two years sounds more beneficial. You’d have the satisfaction and experience of finishing games, you’d have a lot more visibility with whatever audience you have (and potentially gain a bigger one), and I would think it would generally mean a better quality of life, because you’re not slogging away on one game, never knowing exactly when you’ll be done. Let’s face it: anything that’s two years out can’t accurately be pinned down to a certain level of quality, features, or dev time. It’s simply an unknown.
I was eventually going to start a dev blog about my current game. That way, I can stay visible in the gamedev community, possibly gain a small audience, and hold myself accountable. But I think I’m another 6-12 months out from feeling comfortable exhibiting my work in progress. So, I thought of starting another, smaller project that would take much less time to build and would let me reuse some of my existing assets. I had planned on getting to this project eventually, but after watching Jake’s video and reading all the comments about it, I’m seriously considering moving it up the schedule. As protective as I am of my current project, I don’t think I’d feel that secretive about the development of this new game based on Horde Rush. The IP (intellectual property) is already out there, and my initial gameplay ideas aren’t all that revolutionary, so I would be more comfortable releasing dev updates, which could also help out the gamedev community.
And while I’m trying to convince myself that this is the right way to go, I still have a lot of drive, passion, and momentum going for my current game. Do I just kill that momentum? Or will stepping away be a refreshing change that helps me better see the vision of the game when I return to it in a few months? Maybe I can try implementing both games in parallel. Not really. That never turns out well. I have limited time as it is with the full-time job; I couldn’t possibly ping-pong back and forth between both projects. Whatever path I choose, I don’t want to dwell on the decision for too long, because doing anything is better than doing nothing at all.
Every game developer is at a different point on the journey toward completing their game. Maybe you’re just starting, maybe you’re finishing up, or maybe you’re stuck in the middle for the long haul. That “middle” is where I am in development, and sometimes there’s no end in sight. It’s especially apparent when life and the day job (if you’re not a full-time game dev) take priority. I’d like to share some of the “life hacks” I use to get that precious time in to work on my game.
First, I have to set up the scene. I’ve been busy with work and life over the past 6 or so months, and I don’t see that stopping any time soon. That means I’ve had very minimal time to work on my game project. Let’s put it this way: I’ve used “5 hours per week” as my benchmark for getting work done on my game. If I can hit 5 hours, I’ll be pretty happy. That can come in different variations, like an hour a day after work at my Monday-Friday job, especially if I have plans during the weekend, or maybe a few minutes a day during the week with 2-4 hours on the weekend. It all depends on work and life. On rare occasions, I can manage 10 or more hours in a week if the opportunity is there. But I try my best to fit it in however I can.
But for the past 4-6 months or so, I’ve had far less time than 5 hours/week to do stuff. For several weeks, I’ve only had maybe an hour or two per week tops, which meant like 10-20 minute sessions.
Some weeks, I’ve just been way too tired to even work on the game. Sometimes I made it work anyway. Forced it. Stayed up an extra 20 minutes, or even 5 or 10, after everything around me had shut down, just so I could put that little bit in toward finishing my next small goal. Other times, I had to recognize that instead of forcing myself to work on my game for 20 minutes, my body needed to just rest, because if I pushed it any further I would most likely get sick.
This has been my situation. And while I can anticipate this being the regular routine for the next several months, I remind myself that as long as I keep on moving forward, no matter how little, my creative spirit will stay alive, and give me hope that I will eventually finish this game. (Besides “hope”, having a development plan and source control are also necessary. You can find plenty of reading on those all over the web.)
Moving forward can also mean moving backward. Sometimes I have to fail to move forward. So if I only had 20 minutes, and the first 19 revealed that the code I’d written was buggy, faulty, or just plain wrong, I accept that as forward movement by spending the final minute noting what did NOT work, so I can remind myself what NOT to do again. And the more often I found myself repeating mistakes, the more time I spent fixing those problems once, up front.
And when I’m away from my game, I do my best to “live in the moment”. I try to stay focused on whatever activity I’m doing rather than wishing I was working on my game. It’s a distraction to think about the game all the time when I can’t do anything about it; that’s something I used to do a lot, and still have a tendency to do from time to time. And I don’t mean “mobile planning”, like jotting ideas, inspirations, and epiphanies into your smartphone. I mean those thoughts that have me constantly wishing I was in front of the computer instead. There’s no point in complaining, worrying, or getting frustrated about something I can’t help. The added frustration just makes me ineffective at whatever I’m doing. It’s just not healthy.
I haven’t been the greatest at prioritizing things. Sometimes the laundry pile gets too high, the dishes stack up, or the dog goes without a bath for too long. These are the things that end up falling by the wayside while there’s a game to be made! But one thing I can always de-prioritize is entertainment. I’ve given up plenty of games, movies, and TV because I just didn’t think they were as important as my game. If I can’t even get to my game in a day, let alone the chores around the house, why should I set aside time for movies or TV? Of course I make exceptions, and some entertainment can be good for the soul. Here’s my example: I’m a big fan of the DOOM franchise by id Software. A new DOOM game was released in May 2016, and I didn’t get a chance to play it until May of this past year, 2017. I just put other things first, including my game. But once I hit a certain milestone and was satisfied with where my game was, I set aside time to play DOOM. It still took me over a month to complete, because I played it piecemeal due to life stuff, but eventually I finished it, I enjoyed every second of it, and it felt good for my soul.
I’m so excited about my game, and about eventually putting my work-in-progress out there for all to see, but I don’t feel it’s at that stage yet. I want the work-in-progress I post to be frequent and meaningful, and I can’t exactly manage that with such short sessions. So right now, I’m just building up.
And to all those game developers who are chugging away at their dreams and digging deeper, keep looking up, and as always…
Aside from working on a side project to my side project, which I recently finished, I’ve gotten back to my original side project (my next game) and found myself researching some tech for a large feature. Talk about “writing in code”! Spit it out! The tech I’ve been researching is multiplayer networking solutions for Unity. There. Like the title didn’t give it away.
For the uninitiated, multiplayer games are games that can be played with other human beings across the Internet. Yes, this includes social games like those popular Facebook games of old*, games like Pokemon GO, all the way up to games like Call of Duty and World of Warcraft. Some of these games have competitive play, others have cooperative play, and sometimes there’s a combination of both. This, of course, is different from single-player games, like Crossy Road or Flappy Bird, which don’t involve multiple people playing together. Some games actually have both singleplayer and multiplayer modes, like the aforementioned Call of Duty. Nonetheless, I’ve already established that I’m writing about multiplayer here.
Game developers know full well that one of the most difficult systems to build is a game’s multiplayer networking. I’m not even going to go into how difficult it is to build a Massively Multiplayer Online (MMO) game, as that’s way out of scope for this article. If you’re still thinking about it, I’d highly recommend reconsidering your life goals.
But enough of the intro.
Unity Networking, Photon SDK, and PUN
For the past several days, I’ve been looking at potential networking solutions for Unity, namely Unity Networking (Unity’s first-party solution) and Photon by Exit Games. From what I gathered, Unity Networking still needs some time to mature, whereas Photon is a more established networking package with a long-running track record. I decided to go with Photon for this reason, as well as for its healthy amount of documentation, so I’ll go into more detail about it here.
As of this writing, Photon has a sort of split personality. For Unity, it comes in two flavors: the Photon SDK for Unity, and Photon Unity Networking. Whazzawha? Right. The Photon SDK for Unity is available on the Exit Games website, whereas Photon Unity Networking (PUN) is available as a downloadable asset package from the Unity Asset Store. The difference is that PUN is a wrapper around the underlying Photon SDK that saves you time by implementing the lower-level groundwork code for you. PUN is also supposed to emulate the API of Unity Networking, so that it feels more familiar to the Unity network programmer.
So, I chose to use the Photon SDK for Unity. Why would that be, if PUN already provides a lot of the underlying work for me? Well, there’s one feature I need for my next game that PUN currently doesn’t support: persistence of asynchronous game sessions. I’ll go into more detail about what that means in a moment.
To make things more complicated, any Google search for Photon networking will inevitably yield results on “Realtime” vs. “Turnbased” Photon. Exit Games’ hope was that Photon could be marketed both for games that are fast-paced and played simultaneously, or synchronously, between gamers (Realtime), and for games that are slower-paced and played by one player at a time, or asynchronously, like chess (Turnbased). As of May 2016, Exit Games discontinued Photon Turnbased, which by itself sounds as if Photon would no longer support turn-based games. In reality, the Turnbased features were merged into Photon Realtime. So that means I’m using Photon Realtime for a turn-based game. How confusing! In attempting to simplify the delivery of its products, Exit Games managed to confuse even newer customers like me.
Asynchronous Gameplay and Persistence
To support my next game, which is a turn-based game, I need to implement asynchronous gameplay and persistence via something called Webhooks/WebRPCs.
“Asynchronous gameplay” means that the players do not play simultaneously on the same turn. From a network programming perspective, this is far less difficult to implement than synchronous gameplay, because with synchronous, realtime gameplay there is no concept of “turns”. With synchronous games, developers have to implement the system such that every player connected to the game can give input and receive a result immediately, producing the illusion that all players “exist” in the same virtual world. In reality, it takes fractions of a second, or even whole seconds, to transmit that data to all the other connected players. Network programmers have to employ tricks and techniques like dead reckoning and client prediction to create the illusion that a player has moved without any lag. If the game goes out of sync, the illusion is completely broken.
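To make the contrast concrete, here’s a bare-bones, hypothetical sketch of client-side prediction in Unity C#. Real implementations also deal with timestamps, interpolation, and server authority; this stripped-down version just shows the apply-locally-then-reconcile idea that turn-based games get to skip entirely.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of client-side prediction with server reconciliation.
public class PredictedMover : MonoBehaviour
{
    // Inputs applied locally but not yet confirmed by the server.
    private readonly List<Vector3> pendingInputs = new List<Vector3>();

    // Apply input immediately so the player sees zero lag...
    public void ApplyLocalInput(Vector3 move)
    {
        transform.position += move;
        pendingInputs.Add(move); // remember it until the server confirms
    }

    // ...then reconcile when the authoritative server state arrives.
    public void OnServerState(Vector3 serverPosition, int inputsAcknowledged)
    {
        pendingInputs.RemoveRange(0, Mathf.Min(inputsAcknowledged, pendingInputs.Count));
        transform.position = serverPosition;       // snap to the server's truth
        foreach (var move in pendingInputs)        // re-apply unconfirmed inputs
            transform.position += move;
    }
}
```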
For each turn that a player takes in my turn-based game, the current state of the game has to be saved so that the next player can continue from that state. This is where persistence comes in.
So for persistence, or storage, of game sessions in progress, the Photon SDK provides Webhooks and WebRPCs to load and save game data; something that PUN does not provide.
Webhooks and WebRPCs go hand-in-hand, letting game clients communicate with web servers. Clients use WebRPCs (Remote Procedure Calls) to send messages across the network, and webhooks on a web server receive those messages and process them in a secure, authenticated environment. Note that web servers are your typical servers for hosting web pages. So while in this case a web server is being used as a game server, it could also host web pages that convey that same data, say, for players to check their progress or the states of their game sessions. Also note that web servers are not used as game servers for real-time games: real-time game servers require support for much higher loads and bandwidth, because, as mentioned above, they process far more network traffic to maintain synchronicity across all connected clients. Web servers are suitable for turn-based games precisely because the traffic demands are so much lower.
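To give a flavor of the client side, here’s a minimal sketch of what saving a turn over a WebRPC might look like with Photon’s LoadBalancing client. Treat every name here as approximate: the class, namespace, and operation-code identifiers vary between Photon SDK versions, and the “SaveGameState” path and its parameters are hypothetical placeholders for whatever your webhook actually expects.

```csharp
using System.Collections.Generic;
using ExitGames.Client.Photon;
using ExitGames.Client.Photon.LoadBalancing; // namespace differs across SDK versions
using UnityEngine;

public class TurnSaver : LoadBalancingClient
{
    // Ship the serialized turn state to a "SaveGameState" webhook (hypothetical path).
    public void SaveTurn(string matchId, string stateJson)
    {
        var parameters = new Dictionary<string, object>
        {
            { "MatchId", matchId },  // hypothetical parameter names
            { "State", stateJson }
        };
        this.OpWebRpc("SaveGameState", parameters);
    }

    public override void OnOperationResponse(OperationResponse response)
    {
        base.OnOperationResponse(response);
        // The return code tells us whether the web server accepted the call.
        if (response.OperationCode == OperationCode.WebRpc)
        {
            Debug.Log("WebRPC response: " + response.ReturnCode);
        }
    }
}
```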
And with all of that knowledge, I discovered that Photon does not provide the web servers, only the hooks! Bah! Before learning this, I thought Photon was going to be my one-stop shop for all my networking and multiplayer needs. At least, that’s what their marketing made it feel like. In reality, a more seasoned network gameplay engineer probably would have seen past the bullet points and told me that I still needed another component to get Photon working the way I wanted for my turn-based game.
Luckily, through the Photon documentation (or somewhere on the Photon or Unity forums), I found out that PlayFab is the recommended solution providing the web server I was looking for. Not only that, it provides other good stuff I’d need that Photon doesn’t, like user accounts, leaderboards, etc. GameSparks is a competitor that provides pretty much the same thing, but I admit I didn’t do much research into which would be better for me. Since PlayFab has a partnership with Photon, I decided to go with that.
PlayFab
Once I found out that I had to use yet another system besides Photon, my heart sank. I was already feeling pretty overwhelmed with having to learn the new Photon API, and I was reluctant to learn even more than I had planned to. But we’re game developers, right? We keep learning and we work through it! That’s what we do!
It turns out that PlayFab, once I learned what it offers and how to navigate its browser-based Game Manager, provides quite a bit for the entry price of free. I like how it provides user accounts, tracks when and where players have logged on, and offers other analytics, such as segmentation, which is a business term beyond the scope of this post. It has a system called PlayStream that is intended for logging, but I haven’t figured that one out yet. What I was most interested in was, of course, how to implement webhooks for the Photon service.
PlayFab has a proprietary scripting system called CloudScript, found in the Automation section of Game Manager, which is essentially a set of JavaScript functions on the web server that handle the events sent via Photon from your game client. I find it a bit awkward and time-consuming to upload and deploy CloudScript to PlayFab, but since it’s free, and they’re providing the web servers for me, I’m not complaining all that much. At least not yet… Well (that didn’t take long): trudging through more PlayFab documentation, I discovered that the free tier of the service is severely limited in how much data you can save to their web servers. That’s understandable, as it’s free, but fair warning to anyone who wants to try it out: once you roll a game into production and release, you’ll most likely need a non-free tier to get all your data persisted on their servers.
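Separately from the Photon webhook path, CloudScript functions can also be invoked directly from the client. Here’s a rough sketch of what that looks like from Unity with the PlayFab client SDK; “SaveTurn” is a hypothetical function name standing in for whatever handler you’ve actually deployed in Game Manager.

```csharp
using PlayFab;              // PlayFab Unity SDK
using PlayFab.ClientModels;
using UnityEngine;

public static class CloudScriptCaller
{
    // Invoke a CloudScript function; the player must already be logged in to PlayFab.
    public static void SaveTurn(string matchId, string stateJson)
    {
        var request = new ExecuteCloudScriptRequest
        {
            FunctionName = "SaveTurn",                     // hypothetical handler name
            FunctionParameter = new { matchId, stateJson } // arrives as args on the server
        };

        PlayFabClientAPI.ExecuteCloudScript(request,
            result => Debug.Log("CloudScript returned: " + result.FunctionResult),
            error => Debug.LogError(error.GenerateErrorReport()));
    }
}
```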
Keep On Keepin’ On
At the time of writing, I’m still in the midst of working through Photon and PlayFab so that I can get a working game flow for the player. While there’s a wealth of information on both services, it’s very fragmented and difficult to piece together. In future posts, I’d like to go more in-depth into my Photon and PlayFab setup, because I’m finding it a pain to get through right now; perhaps others planning to take this route will benefit from my findings. In the meantime…
Make it fun!
* Of course Facebook is not that old. But in “technology-and-video-game” years, 4-5 years is like 4-5 decades…. Is Farmville even still a thing?
Here are a few tips for those of you indie game developers out there who are looking to show off some of your work via time-lapse video.
CamStudio
Download CamStudio, which is a free and open source video capture program.
Use the following settings: Options > Video Options... > Compressor: Cinepak (default)
Uncheck Auto Adjust at the bottom. This will allow you to adjust the Key Frame frequency and the framerates. I set Set Key Frames Every to 1 frame(s).
And for framerates: Capture Frames Every 1000 milliseconds, Playback Rate at 2 frames/second.
This means that while recording, CamStudio will capture the screen once every second, and playback will run at 2 frames per second, i.e., twice as fast as real time; an hour of recording becomes a 30-minute video.
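If you want to sanity-check your own numbers, the speed-up factor is just the capture interval multiplied by the playback rate. A trivial helper (not tied to any CamStudio API) for the math:

```csharp
// Quick sanity-check math for time-lapse settings.
public static class TimeLapseMath
{
    // Capturing one frame every captureIntervalMs, played back at playbackFps,
    // speeds footage up by (captureIntervalMs / 1000) * playbackFps.
    public static double Speedup(int captureIntervalMs, double playbackFps)
    {
        return (captureIntervalMs / 1000.0) * playbackFps;
    }
}
// TimeLapseMath.Speedup(1000, 2) == 2.0 -> one hour plays back in 30 minutes.
```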
After watching a recording at this playback speed, it was still a little too slow for my taste, but most video players nowadays have a speed control, so I wasn’t too worried about it. Besides, if I decide to export these time-lapse videos to animated GIFs (more on this later), I’ll have the opportunity to set the speed prior to export.
Another setting I like to enable is even pixels (which, for whatever reason, I don’t see in v2.7.3). This allows more video players to play the resulting video, as odd pixel dimensions tend to produce encodings that some players can’t handle.
Be sure to set your recording region and output directory accordingly, and you’re ready to rock.
When you hit Record (the obvious big red button), corner brackets will pulse at the edges of the recording region at the specified Capture Frames Every interval; so, for example, at 1000 milliseconds, the brackets pulse every second. What’s great about this is that for time-lapse recording, CamStudio doesn’t eat up a lot of resources capturing video while you do whatever it is you’re doing.
Animated GIFs
To create an animated GIF, you need a sequence of still images, or frames, which you then assemble into a single animated GIF file.
Converting Video to Frames
If you want to convert the time-lapse video you just created above into an animated GIF, you’ll have to extract still frames from the video as a sequence of images before assembling the GIF.
I like to use avidemux (another free and open source program). You can load a video, or a portion of a video, into avidemux and export it as individual frames. Be sure to install avidemux 2.5.x, as 2.6.x does not have this feature.
Capturing Frames Directly
Alternatively, you can use a program to capture time-lapse still frames, bypassing the video recording stage altogether. Apparently there are a number of applications that create time-lapse video and/or screenshots, but the one I tried was chronolapse after stumbling across this article.
chronolapse has the option of capturing screenshots or images from a webcam, or both. The program outputs a sequence of jpg images that you can later assemble in another program for creating animated GIFs. One program you can use to do this is GIMP. There are far easier-to-use commercial programs to do this entire process, but I wanted to keep the software in this article free and/or open source.
Exporting Frames to Animated GIF with GIMP
Once you have the sequence of still images, you can use GIMP to export the final animated GIF.
To do this, open up GIMP and create a new image with the same dimensions as the exported images.
Then, select all the images in the sequence in an explorer window, and drag them onto the Layers window to load each image as a layer; be sure to drag the first image in the sequence first, or else the sequence may get out of order. This process can take a long time if you have a large number of images and/or the dimensions of your images are large.
Once all the images are loaded into the Layers window, don’t forget to delete the first layer, which contains only the blank image from when you originally created the file. Before exporting, you may have to reverse the order of the layers using Layer > Stack > Reverse Layer Order, because GIMP is weird sometimes. You can then click on File > Export..., which will prompt you to choose your directory and the file type (GIF). Click on Export, and it will bring up this dialog:
Select As Animation, along with your looping and delay options. Once you click Export here, it may again take a long time to process, depending on the number of frames and the dimensions of your image.
One last thing I should mention: animated GIFs can be fairly large for the web, depending on the length and dimensions of the animation. WebM is another animation-friendly format that’s pretty popular online now and is considerably smaller than an animated GIF, and you can find converters with a quick search.
And done!
Here’s an example of what I used these time-lapse techniques for (it’s a fairly large GIF; apologies in advance):
I was 3D modeling the characters Dipper and Mabel Pines from the show Gravity Falls for an eventual 3D printed gift for my significant other. It turns out that a lot of the principles for 3D modeling for game development also apply to 3D printing. Maybe I’ll write up some future articles on the subject…
My colleague hired me to do a contract art piece for him. It’s a logo to celebrate the release of the 100th episode of his YouTube video channel.
This is what it looks like:
Yes, his name is Storm. Yep, Storm. I’m sure he gets that a lot. You know, your reaction; that face that you’re making. His channel is all about games. He does reviews, and they’re pretty damn funny and entertaining and informative.
Anyway, I encourage you to take a break from whatever it is you’re doing, be it developing a game or reading about Under the Weather.