Tribalism at work

Tribalism

Like many people, I’ve been struggling a great deal to understand what is happening in our world, and how to make sense of it. Many years ago, when I was younger, I had a somewhat narcissistic view of the world. The frame in which I absorbed interactions in my life was one that excluded large groups of people around me. When I say this, you’re probably thinking I’m talking about the “makers and takers” we hear thrown about in the news media, but I’m talking about something much closer to home: programmers vs. artists, or programmers vs. clients, or maybe programmers vs. management.

The programmers-vs.-whatever thought process is one that many of us are familiar with, and have no fear if you’re not a programmer: in all likelihood, it applies to you too. Maybe it’s artists vs. programmers, or you’re a manager who hasn’t understood the behavior of those alien creatures, the engineers. Our brains are particularly well suited to categorizing groups of things, so it’s no surprise that we do it with people as well. Fighting this tendency is a fool’s errand; it’s far too convenient to group people together for me to argue that this is something we should fret about. What I do believe people need to do is not use the simplicity of categorization and grouping to discard the incentives, needs, and desires of whole groups of people.

My own journey

Early in my career in video games, I had a far more confrontational relationship with the other groups and specializations. If management asked us to make changes to the design of a product late in the development cycle, my thought process immediately went to something along the lines of: “Typical PHB management. If they had just thought about the design ahead of time, they wouldn’t be making last-minute changes, and I wouldn’t be here working late. Their lack of consideration forced me to work long hours, and this probably happened because there are no consequences for them in making these decisions. They get to go home at 5, and I’m here working late, so fuck ’em.”

This sort of thought process is not unusual, and it’s perfectly understandable. What it is *not* is effective, even if it may contain some truth. At some point in my career, I decided to take the energy that I had put into feeling resentment or annoyance at these sorts of events and channel it into something more productive. Part of what triggered this change was my own growing maturity and awareness: I started to see the same behavior in myself when the artists would have to re-do a large portion of their art because we refactored a tool, or would have to stay late as they struggled with the terrible UX and non-existent documentation for whatever tool we had cobbled together. This led me to change the way I think about the work that I do, and to re-frame all of these various stakeholders as my customers. Once I started thinking about the art team less as a group making demands on my time and more as a group of people who, just like the programming team, are trying to deliver the product they do best (art) to clients (management and our end users), I was able to empathize with their plight.

Enlightenment

This change in thinking was revolutionary for me, both in my effectiveness as a programmer and manager, and in my own happiness and satisfaction with life. I was no longer waking up every day spoiling for a fight; instead, I was waking up every day thinking about how I might improve the life of someone else. I imagined being an artist, working on some piece of in-game art for a few days, and then being discouraged when I had to fight the tools for a week to get my art into the game, just to have it look shitty when I did. My job as a programmer became one of empathy, where I was being given an opportunity to tangibly change how someone worked each day, and in some ways, that was one of the more meaningful things I could spend my day doing.

It freed me from just waiting for a request and then trying to minimize the time I spent satisfying it; instead, I tried to understand what the artists struggled with each day in performing their work, and how I could contribute to that. This led to solutions that were not responses to specific complaints (which is a rather narrow way of looking at features of software), but attempts at improving how people worked. If we made a skeletal animation tool that required the artists to label a bunch of bones on their skeleton, and it crashed often, then I would get requests to fix the crash. The correct question for me is not “What is causing the crash?”, but “Why do we have this tool in the first place? Is it necessary? Do we need the artists to be labeling bones? This should be something a computer can do.” That kills two birds with one stone: I no longer need to fix shitty bugs, and the artist no longer needs to spend time labeling bones. It is not a solution that would have become apparent with the old mindset, but it is one that is painfully obvious once I start thinking about the artists as customers.

Conclusion

The way I consider artists or producers as customers of my software may not resonate with you, and that’s OK. Find some other framing that does resonate with you. This doesn’t mean that you need to stop making snarky comments about management or the art team, but it does mean that you shouldn’t let those snarky comments frame your entire way of thinking, as it’s far too easy to let that turn confrontational. The reality is that you’re all on the same team, working on the same product, and even if you don’t personally like every member of the team, that’s OK too. You don’t need to be friends, but you’re still going to be working together, so why make it unpleasant?

This kind of thinking has started penetrating the other areas of my life as well. If someone cuts me off on the road, my thought has shifted from “Fuck that guy, he’s driving like an asshole, and now I’m inconvenienced because I had to slow down” to something more like “Wow, I wonder why he’s in such a hurry? It must be important! Maybe I’ll let him merge and give him a wave; it may calm him down and hopefully make whatever important thing he’s rushing to a bit easier.” We’re all human, and I certainly don’t mean to imply that I’ve managed to become some sort of super-pleasant, thoughtful angel of a human being. I still struggle with many of the frames everyone else does, and sometimes I am thinking “Fuck that guy, now I have to work late.” But I try to have empathy, to approach the relationships in my life by imagining how the person on the other side of the table sees the world. My default assumption, when someone is doing something that seems to have no purpose other than to annoy me, is *not* that the world revolves around me and they’re just trying to bring me down and prevent me from working, but that I must not understand how they see the world, that I don’t know what their incentives are, and that I should work harder to understand them.
In the end, I may still come to the conclusion that they are trying to bring me down and prevent me from working, but that should not be the default assumption. In the vast majority of cases, there isn’t ill will or malicious intent, but instead a lack of clarity on both sides, and more understanding and clarification will only improve things.

This thought process is important to me, and I hope that more people than I suspect share this sort of viewpoint. It takes a lot of work and vigilance to maintain, but when I do, I find I’m a happier person. In these times of uncertainty, hate, and vitriol towards the “other”, I think it is important that those of us who are willing to spend the time to understand and improve our relations with other people do so. The more divisions we sow, the more difficult it will be for us all to get back to a place where we can all just do what we want to be doing. So with that, back to work….

Let’s Encrypt support

It’s been a while since I worked on this page, but all the news about Let’s Encrypt has kicked me into gear. SSL support is something I’ve been intending to set up for quite a while, and their software has finally made it easy enough to actually do. Expect more updates soon, now that I have come back to editing this blog on a regular basis. I have a lot to talk about with my new position working on haptic-enabled touchscreens!

BitCoin

So I’ve been giving a lot of thought to BitCoin lately, because of a project I’ve been working on. The concept of a blockchain is what initially sparked my interest in the topic, as it has a number of uses outside of currency. For a long time, I thought BitCoin was just for gold bugs. I’ve also read a lot of economics texts, so I understand that money is just an agreed-upon delusion that a group of people hold, and that’s why it works. The arguments about BitCoin have been primarily about whether we need another currency, but think about BitCoin instead as a protocol. It’s like a wire transfer, or a bank transfer, or a cash exchange, or any other payment system: all of those have their own fees, hassles, risks, and advantages. BitCoin is no different. As long as you and I have liquidity in and out of the system, it is a way for me to send you money and for you to receive it, without having to use a bank, or know each other, or travel to meet each other. If that is worth something, then it is worth paying the fees to get in and out of the BitCoin system. Once you think about it like that, you realize that you don’t need to want to use BitCoin for it to be of value to you.

Think of working around the BitCoin ecosystem instead as working to improve the lives of the people in BitCoinIstan. You don’t have to want to move there, but there are people there, with needs, and they want stuff, and if you can provide services for them, they will pay for it.

That’s why I’m working to make BitCoin work better. Not because I need BitCoin, or because I hold a particular economic belief (other than that their currency is convertible into mine), but because there’s money there, it works terribly in many cases, and it is easy to get paid in BitCoin. All of those things make me think: opportunity.

An easy way to prevent large GitHub checkins

If you’re using GitHub as a remote repo, you may have run into the problem of pushes failing on files larger than 100MB. Every time I hit this limitation, I have to dig through a bunch of webpages I’ve searched before to try and remember how to rewrite history to get rid of my commits of large files. If it’s the last commit, it’s easy: a git reset HEAD~1 will get rid of it. But if you have to rewrite your git history, life can get really painful, especially if it happens just frequently enough to annoy you, but not often enough that you memorize what you did. Anyway, I finally got fed up with it and started looking for a solution. All the ones I found were exceedingly complicated, and required either Ruby, Python, or some other scripting language. This is fine when you’re on a Mac or a Linux box, but for those of us either running Windows or wanting to keep our machines clean of unneeded software, it’s a pain.
So I wrote a script to fix this issue, and it has absolutely no dependencies. It will also gather a list of ALL the files that are too large and print them out in a way that can be cut and pasted directly into your .gitignore file. I spent the time writing it, so hopefully it helps someone else out too. Here’s my script, which you can copy directly into .git/hooks/pre-commit on your machine (and mark executable). It’s not elegant, but it works on any platform, with no dependencies, using just git’s shell and commands.

#!/bin/sh

if git rev-parse --verify HEAD >/dev/null 2>&1
then
    against=HEAD
else
    # Initial commit: diff against an empty tree object
    against=4b825dc642cb6eb9a060e54bf8d69288fbee4904
fi

# Redirect output to stderr.
exec 1>&2

maximumsize=100000000 # error out if any staged file is over 100MB
rm -f ./error_files_transport_temp
git diff --cached --name-status --diff-filter=ACM "$against" | while read st file; do
    # skip deleted files (defensive; the ACM filter should already exclude them)
    if [ "$st" = 'D' ]; then continue; fi
    filesize=$(wc -c <"$file")
    if [ "$filesize" -ge "$maximumsize" ]; then echo "$file" >> ./error_files_transport_temp; fi
done
if [ -e ./error_files_transport_temp ]; then
    echo "One or more files are greater than 100MB!"
    echo -------------------------------------
    cat ./error_files_transport_temp
    rm -f ./error_files_transport_temp
    exit 1
fi

Bluetooth LE Woes

A large part of the difference between a senior engineer and a more junior one has less to do with the kind of work that they *can* do, and more with how they do it. When I was starting out, once I understood how to program, there wasn’t really a whole lot of code that I would have had trouble writing. You could have dropped me in at just about any company and I would have been able to produce the code that was required. The code may not have been pretty, but in the end it would have gotten the job done. This point is one that people outside of programming can have difficulty grasping. All they see in the end is the output, so from their perspective, junior programmers are the same as senior programmers, just cheaper.
Once you’ve worked with a number of different engineers and codebases, you’ll start to notice that some code bases are easier to work with, where things behave as expected, while others seem to fight you at every turn. This is one of those feelings that can be difficult to explain or quantify (or predict), but as you get even more experience, you begin to be able to predict these things.
One of the reasons that some code bases are easier to work with than others is predictability. Remember one of the first applications you wrote? Have you ever worked with a code base where every change seems to result in 100 bugs that you have to work through? Where you finish a feature, everything seems to be running just right, and then someone asks for “one more change” and it all falls apart again? That’s a lack of predictability. Some of you may have had the opposite experience, where you work with a code base and every feature that is requested seems to just fall into place. It doesn’t mean that the work you did was any easier, but you spent less time trying to figure out how to make the code do what you expect, and more time writing the code people expected of you. That’s the difference between good code and bad code.
Recently I’ve been working with Android and the Bluetooth Low Energy (BLE) libraries, and they are a perfect example of this. At first glance, it’s not that difficult (the BLE spec is a little obtuse, but that’s a different problem). You take the example code, play around with it, and it seems to work. When you use the BLE library on Android, all of the calls are asynchronous. This means that I make a request of the device (like turning on an LED) and give it a callback function, and some time later the light turns on (or doesn’t) and it calls my function to tell me the action is complete. I wrote a number of different tests, like connecting to the device, or turning on a light, and everything seemed to work.

Then I started working on my actual implementation, where I connected to the device, activated some features, and turned on a light. Sometimes it would connect, sometimes it wouldn’t. Sometimes the light would turn on, sometimes not. I couldn’t figure it out; this was the same code! Why was it not working? So I looked at the documentation, and it said every function will tell you when it fails. So I checked the return value of every function. They all succeeded. And still, my light wouldn’t turn on consistently. I was now several days into this mess, and I wanted to throw my computer at the wall. I finally found a comment someone had made deep in some forum thread, and voila, it had the answer. It turns out that on Android, when you make a request of BLE, it calls you back after some period of time. However, if you make a *second* request before the first one has completed, it will tell you that everything worked, but you will only get one of the callbacks. On some level, this makes sense. But it is true for *every* request. If I connect to the device, I have to wait for the callback before making *any* other request. If I write a value, I have to wait until that write is complete before making *any* other request.
This requirement isn’t documented, the function won’t tell you it failed (even though it claims to tell you whether a request succeeded or not), and it isn’t enforced by the API (you aren’t even required to give a callback function if you don’t want to be notified; just don’t make any other requests until you *would* have been notified).
This problem is particularly insidious because it gives you the impression that it works. If you make only single requests, or your requests are naturally delayed (say, by waiting for user input), then it works. Make two requests in a row, and boom! Your application stops working. The end result? I spent a week working on my app, getting frustrated, trying to figure out why it wasn’t doing what it was supposed to. This was with new hardware, so it wasn’t clear if I was calling out to the hardware wrong, if the hardware was buggy, or if maybe my code was broken. The one thing I didn’t expect was that the platform I was running on would lie to me. This is on a store-bought production device, nothing weird or out of the ordinary running.
That is the difference between code written by a good engineer and a bad one. Both will let you make calls to the API, and both will work. On both of them, all the examples work, and the documentation is identical. In one world, I would call the API and it would blow up if I did something wrong. Or tell me it failed. It would do *something*! In the other, it fails silently and wastes roughly a week of my time. Multiply this by everyone I’m blocking, and you can easily waste a man-month in a matter of days.
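The workaround I ended up with is to serialize every BLE operation through a queue, dispatching the next request only after the previous one’s callback has fired. Here’s a minimal sketch of the idea in plain Java; `SerialOperationQueue` and `Operation` are my own illustrative names, not part of the Android API, and real code would issue one `BluetoothGatt` call per operation and invoke `done.run()` from the matching `BluetoothGattCallback` method:

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Serializes async operations so that at most one is ever in flight. */
class SerialOperationQueue {
    /** One async request; its completion callback must invoke done.run(). */
    interface Operation { void run(Runnable done); }

    private final Queue<Operation> pending = new ArrayDeque<>();
    private boolean inFlight = false;

    /** Enqueue an operation; it starts only after all earlier ones complete. */
    synchronized void enqueue(Operation op) {
        pending.add(op);
        maybeDispatch();
    }

    /** Invoked (via done.run()) when the current operation's callback fires. */
    private synchronized void operationComplete() {
        inFlight = false;
        maybeDispatch();
    }

    private void maybeDispatch() {
        if (inFlight || pending.isEmpty()) return;
        inFlight = true;
        pending.poll().run(this::operationComplete);
    }
}
```

The important property is that a second request can never be issued before the first one’s callback arrives, which is exactly the undocumented rule the Android BLE stack silently imposes.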

First Unity Plugin released!

So I’ve been working on some Unity plugins, and the first one is now released! One of the things that has always driven me crazy about Unity is that anything using a List can’t be edited like a real list in the inspector. It’s more like a stack, where I can push and pop from the end of it, but I can’t re-arrange elements or delete anything from the middle (using the inspector interface). So I worked on our first plugin, OtterList. This is a DLL you can drop into your project, and all your lists can then be inserted into, deleted from, and re-arranged, without any extra code on your end! I’m super excited about it; hopefully it makes things easier for everyone. If you want to check it out, grab it from http://u3d.as/content/clockwork-otter/otter-list.

Just to give a bit of a preview of what I’m working on next: enums also drive me crazy in Unity. They come up in one giant list that isn’t easily sorted, filtered, or anything else. This is true for any large list (typically non-editable lists, like enums), but it shows up primarily with enums. The next plugin to come out will be one you can drop in to automatically get popups that support nesting and filtering, so your enums can show up filtered and/or nested automatically. You’ll also be able to use the plugin for any lists you need to draw in your own controls. I’m hoping to get that one submitted in the next week or so.

We’re working on a bunch more plugins with a focus on making game development easier. These tools are, at least at first, centered primarily around usability issues in Unity. Small fixes often get you huge gains in productivity, but are difficult to quantify and so rarely get the focus that they deserve. Stay tuned for more stuff about what we’re working on, and if you have particular things driving you crazy, don’t hesitate to let me know (or go to the Clockwork Otter pages and let us know in the forums)!

Loose Coupling & Late Binding – a long, engineering-heavy post

So I wanted to discuss some challenges I ran into recently. I’m working on some Unity plugins, and these plugins are being distributed as DLLs. There is a common core component, which is distributed with each plugin, and then there are the various plugins themselves. The common core runs with each action; it checks to see which of the plugins you have, then calls out to those plugins to execute particular functions. So there are a number of requirements here:

The common core has to be the same for each of the plugins
The common core will call into the plugins, if they exist
The plugins themselves may call back into the common core
I don’t control any part of the installation process, so I can’t easily version the common core

So my first thought was to put the common core into a single DLL, and each of the optional plugins into its own DLL. If I put a reference in the common core to each of the plugins, and from each of the plugins to the common core, I can compile it. The issue is that the common core will try to load every one of the plugins as soon as it is loaded, and fail if any of them cannot be loaded. This happens before any of our code is executed, and is what’s known as “early binding”. The way to resolve this is “late binding”, which means our code gets executed first, and then, as needed, it loads the DLL (if it exists) and lets me check for the type.

This process is relatively straightforward in C/C++, which has headers: I can declare the class I am referencing without pulling in its implementation. Here’s where I ran into the first snag, which is that C# doesn’t have header files. If I reference a plugin class in the core, that class itself will be included by the core, which would defeat the whole purpose. The two solutions I came up with are: I can either look up each call by a string using reflection (to get a handle that I can use to call the function), or I can create an abstract base class for each object and have anyone “using” the class/object reference the base type, while the actual implementation sits in a derived class. Only the plugin has access to the actual implementation.

The next snag is that both the plugin and the core need to reference that interface. If I include it directly, then the class is defined twice (once in the core, once in the plugin), and this defeats the whole purpose. So the solution is to put the interface class into its own DLL, which I can directly reference from both the core and the plugin. I’m calling this a “bridge” class (so that it doesn’t get confused with the language keyword interface).

So the only trouble with creating this sort of bridge class is that I still have tight coupling between the interface version and the core/plugin. This just means that I need to be very sure about that interface. I can easily update the implementation on either end (you can update your core DLL and/or the plugin DLL, as long as the bridge doesn’t change). If you update the bridge, then the core, and any plugins that use that particular bridge, also need to be updated. If the reference is in one direction only (the core calls into the plugin), then you can update your core and *some* of your plugins. If the relationship is two-way, where the core calls into the plugin and the plugin calls into the core, then updating the core interface requires updating *all* the plugins that reference it.
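To make the shape of this concrete, here is a sketch of the same pattern in Java, where the mechanics are nearly identical (the C# equivalents are `Type.GetType()` and `Activator.CreateInstance()`). The names `ExporterPlugin`, `PluginLoader`, and `MeshExporter` are my own illustrations; in the real project, each piece would live in its own DLL (the bridge, the core, and the plugin, respectively):

```java
// Bridge: this interface ships in its own assembly/jar, referenced by both sides.
interface ExporterPlugin {
    String export(String assetName);
}

// Core: binds late, by name, so a missing plugin is not a load-time failure.
class PluginLoader {
    /** Returns the plugin if its class can be found at runtime, else null. */
    static ExporterPlugin tryLoad(String className) {
        try {
            Class<?> cls = Class.forName(className);
            return (ExporterPlugin) cls.getDeclaredConstructor().newInstance();
        } catch (ClassNotFoundException missing) {
            return null; // plugin not installed; the core keeps running
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("plugin present but unusable", e);
        }
    }
}

// Plugin: only this assembly/jar contains the concrete implementation.
class MeshExporter implements ExporterPlugin {
    public String export(String assetName) { return "exported:" + assetName; }
}
```

The core never names `MeshExporter` at compile time; it asks for it by string, so the type is resolved only when (and if) `tryLoad` actually runs, which is precisely the late binding described above.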

Overall, the situation isn’t ideal, and it’s one I find much easier to handle in C/C++ directly (when that’s an option). I’m really surprised that there isn’t a great system for this in C#; it should be automatic. It looks like Microsoft has a system for this in .NET 4.5, but Unity/Mono doesn’t support that yet, so it’s not an option. How are other people handling this?

Oculus Rift & Facebook

I’ve been following Oculus Rift with quite a bit of excitement since they first appeared on the scene. I’ve always been excited by VR, and purchased my first pair of VR goggles (with head tracking) back in 1996. I have since owned several different VR headsets, and in all honesty, they hadn’t improved much since my first pair in 1996. That is, until the Oculus Rift. When I first tried it, I was blown away by how much better it looked than previous glasses. Instead of a viewpoint that looked like I was peering at the world through binoculars (which doesn’t provide a very immersive experience), the Oculus Rift offers not quite full peripheral vision, but a much wider view than anything I had seen before. This is what makes all the difference in immersion. There is one problem that I’ve always experienced with VR headsets, though, and the Oculus Rift is no exception: your normal video game experience doesn’t translate well to VR, with the exception of games where you drive vehicles. The reason is that when you’re wearing the headset, you’re expected to look around, but if you’re standing up or trying to rotate very quickly, it is difficult to maintain your balance, you have a tendency to bump into your surroundings, and you get tied up in the cables leaving the headset. This isn’t a huge problem if you’re playing a racing game, or a flying game, or walking around in a giant mech, because you can be seated, but in those cases the virtual reality experience you are getting is very close to the experience you are having in real life. So although it can be a very good experience, there isn’t a wide variety of experiences you can offer.

I read an article a few months ago by a writer who tried the Oculus Rift, and he had similar feelings about virtual reality to mine. He went on to describe how the most interesting experience he had with the Oculus headset was not an active simulation, but a concert video filmed at one of Beck’s shows. The show had been filmed on stage, with a number of different cameras, which they had then hooked up to an interactive experience. You were able to stand on stage with Beck and watch him and his band play their show, and he said that it was that experience that really sold him on VR. For the first time, he felt truly immersed in the world he was in, but instead of it being an active experience, it was one much more passive than he was used to.

So there’s been a ton of consternation on the internet about how Mark Zuckerberg and Facebook are going to completely destroy Oculus Rift and the experience they were aiming for. Now, I’m not a huge fan of Facebook, and I like the hardcore gaming experience as much as the next person, but I always wondered how Oculus was going to overcome the same problems I had experienced with VR in the past. Although it provided a superior experience, it wasn’t addressing what I felt was the biggest impediment to widespread adoption of VR. The more I think about it, though, this may be the best thing to happen to VR yet. I think the most compelling experience it is going to offer is a way to interact with other people: sitting around a table having a discussion, or watching a concert together, but doing it virtually. This is what Facebook is bringing to the table in the deal (besides an enormous sum of money). If Facebook hadn’t bought them, my prediction is that they would have underwhelmed considerably when they launched and then disappeared, because I have yet to see the killer app that is going to sell these things like crazy. But interaction with other people? With your friends? Experiencing things together, virtually, not like a video call on Skype, but as an immersive experience where you’re removed from your surroundings and brought together in an environment that you can completely control? That’s compelling. I think it’s great (despite my feelings about Facebook), and I think because of the purchase, Oculus is going to be a market success. And because of that success, we (the hardcore gamers) will get better and better VR experiences (as well as the world’s coolest chatrooms). We’ll get cheap VR headsets that will no longer be high-priced specialty products for the geeks out there.

Unity & Assets

Unity is a great engine for multi-platform development, but as with all engines, it certainly has its weak spots. In an attempt at simplicity, the folks at Unity seem to have erred on the side of less control when it comes to how objects are initialized. This lack of control may be acceptable for smaller demo projects or hobbyists, but for those of us working on larger-scale development, the initialization itself quickly becomes a roadblock and/or a source of a large number of bugs and problems. I’ll attempt to go through some of the challenges we ran into very early on.

The Unity object life cycle is rather simple: an object only gets two calls at creation time. The first is the Awake call. This call happens when an object is instantiated, and happens only once during its lifetime. The second call is Start, which also only happens once after instantiation. Unity guarantees that Awake will be called on *all* objects in a particular frame before Start gets called on any of them. The general idea is that you set up all your cross-object references in Awake, and then do all the initialization of actual data in Start (at which point, in theory, Awake will have been called on every object already). At first glance this, although limiting, isn’t a complete disaster. But as your project increases in complexity, a number of issues become impossible to resolve rather quickly.

First, the order in which objects get their Awake and Start calls is random. It’s not completely random; it is semi-predictable when you start working, but as you update platforms or Unity versions, you’ll see the ordering change. It’s just enough to fool you into bad habits. This means that for any cross reference between two objects, both need special-case code in their Awake to handle the possibility that the other object has not yet been created. For example, let’s say object A has a reference to object B, and object B has a reference to object A. When A is initialized, it has to look to see if object B exists yet, and if it does, it sets the reference to object B. If B isn’t created yet, A does nothing. Object B, when it’s initialized, does the same check: it looks to see if it can find object A, and if it does, it sets the reference to object A. The problem is that whichever object is initialized first will not get a reference to the other object; only the second one will.
This can be solved by having Object A check if B exists, and if so, set the reference to B, and then reach into Object B and set the reference back to itself (so that B will have it as well).  Immediately, this breaks the loose coupling that you’re supposed to have in OO programming, with each object needing to know about the internals of the other.  This is a rather small example, but as a project grows, you can imagine how the scale of the code grows.

This problem, although it leads to ugly code, is solvable. But assuming you’re willing to put up with it, there’s a second issue that you’ll quickly run into: objects that are loaded via a scene vs. instantiation vs. prefab all have different latencies before their Awake/Start is called. So if your objects are created at runtime, or loaded through different methods, your cross references break again. This means you can’t even guarantee that Awake will be called before Start across two objects when one is loaded via a scene and the other via an asset bundle.

This issue stretches beyond initialization. On each frame, every object gets an Update call, and when objects are destroyed they get an OnDestroy. The order of the OnDestroy calls can be somewhat haphazard, and they aren't guaranteed to happen on the same frame. So take our previous example and expand it to include any reference from one object to the other inside its Update, and go one step further to include any function that can be called from any other object's Update.
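The defensive checks end up leaking into Update and OnDestroy as well. A hedged sketch, continuing the hypothetical ObjectA/ObjectB example from before (`DoSomethingWithPartner` is likewise made up):

```csharp
public class ObjectB : MonoBehaviour
{
    public ObjectA partner;

    void Update()
    {
        // partner may not exist yet, or may already have been destroyed;
        // Unity's overloaded == treats destroyed objects as null.
        if (partner != null)
        {
            DoSomethingWithPartner();
        }
    }

    void OnDestroy()
    {
        // Clear the back-reference so A's Update doesn't touch a dead object.
        if (partner != null)
        {
            partner.partner = null;
        }
    }

    void DoSomethingWithPartner() { /* ... */ }
}
```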

So the solution?  The only viable one we were able to come up with that still satisfied the other requirements of our project was to not use the Awake or Start callbacks at all.  In our case, we set up our own analogous initialization system, where the order of initialization was strictly controlled.  We were vigilant about not falling into old habits and using the Awake/Start calls, and once we stopped using them our lives became a lot easier.  I'll write more about our solution in a later post.
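I won't spoil the later post, but as a rough illustration of the shape of such a system (all the names here are hypothetical, not our actual code), a strictly ordered two-phase pass driven by a single bootstrap object might look like:

```csharp
// Phase 1 wires up cross-object references; phase 2 initializes data,
// mirroring the roles Awake and Start were supposed to play, but with
// the order under our control instead of Unity's.
public interface IOrderedInit
{
    void ResolveReferences();
    void InitData();
}

public class GameBootstrap : MonoBehaviour
{
    // Filled in via the inspector, in the exact order initialization must run.
    public MonoBehaviour[] initOrder;

    void Awake()
    {
        // The single Awake we allow ourselves; every other object waits
        // to be told when to initialize.
        foreach (var m in initOrder)
            ((IOrderedInit)m).ResolveReferences();
        foreach (var m in initOrder)
            ((IOrderedInit)m).InitData();
    }
}
```

Because both phases run over the same explicit list, the "has the other object been created yet?" checks disappear entirely.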

The future of game development

Most of my recent work has been in mobile development, with a specific focus on the free-to-play arena.  Going back further, my background is primarily in premium console products, with a couple of brief stints/false starts in mobile.  I moved from console to mobile for a number of reasons: I liked the shorter development cycles, the ability to make changes rapidly and get feedback on the product from users (instead of reviews), and the hands-on aspect of the development work.  On the console side, I also had some concerns, one of which is how the console/premium space is going to evolve to support the ever-increasing costs as the scope of the games we work on grows.

One avenue in which this has started to manifest itself is the adoption of third-party engines.  In past console titles, the performance hit of adopting a general-purpose engine was relatively high (proportionally speaking), so there was a lot of pushback from developers against using an outside engine.  The last couple of generations of consoles have changed things in a fundamental way: the performance hit of using a general-purpose engine has been eclipsed by the cost of developing a custom one.  The transition from in-house to out-of-house engine development has been much faster than many anticipated.  It started with Havok and other physics engines, but more recently has included rendering engines as well as full game engines.  There has always been a percentage of studios that didn't develop their engines internally, but that percentage has increased significantly in recent years.  If you look at the console games out there now, a large proportion are using engines that weren't internally developed, and you can't tell from looking at them which ones are.

Even these savings aren’t enough to support the large development costs that modern games entail, so we’ve also seen many studios and publishers moving to incremental updates and add-ons to amortize their costs.  On the mobile side, things have moved away from premium products to a free-to-play model, and developers in that model have learned many lessons that will be useful in the upcoming era of live services on consoles.  These lessons need to be examined carefully, and I’m certainly not proposing that there aren’t large differences between F2P mobile and console; for starters, things like session length and player demographics are VASTLY different.  But be careful not to throw out the baby with the bathwater: there are nonetheless important lessons to be taken from F2P.  Chief among them, F2P developers put a lot more thought into live service, and into how to keep users engaged with their game for longer periods of time.

As we move forward, there’s clearly going to be a move towards a live-service model on premium titles as well.  It’s a great way for publishers to keep their users engaged and to provide additional value to players after they have played through the content.  This manifests itself not just in primarily online games (such as first-person shooters) but in the live-service events provided by games such as Forza 3 and Assassin’s Creed: Black Flag.  It is inevitable that the console side will start adopting many of the live-service techniques that have been commonplace in the free-to-play realm, so that titles can earn additional revenue beyond what is included in the original shipped product.  This won’t be the sort of cheap monetization tricks that give F2P a bad name (such as payment gates), but more along the lines of localized leaderboards, synchronous and asynchronous multiplayer, and ways to link up with your friends that increase social activity and competitiveness.  It’s going to make console games much bigger as a gameplay experience without all the additional costs of entirely new content or a new engine, and it will allow for much faster iteration and the ability to stay engaged with our players.  We’ll be able to make better games with this sort of engagement and iteration process, but it will require developers to incorporate these techniques into their design during initial development, not just tack them on afterwards.