Data Coherence

When I worked at EA (1987-2008), one thing that was always intriguing, and exciting, was what we learned as development grew: in complexity, in technology, or simply in team size.

We started with teams of 1 building NES and Sega games, where the developer was the programmer, the graphic artist, the designer, the audio artist, and the tester. We learned the pain of growing to teams of 10, where communication and alignment started to matter, and then to teams of 100s, where design, production, and general development practices had to be re-invented.

In 2006-2007, while we were developing Spore, a novel and interesting issue came up. This blog post will attempt to describe what led us to a new understanding, and how it impacted my long-term thinking.

Unit tests are for code

As we developed more and more complex code, we learned to write unit tests: simple or complex pieces of code that help us confirm the code behaves properly in well-understood circumstances.

If the code is a math library, for example, you can run a list of operations and make sure that each result is within the range of the expected value, plus or minus any uncertainty or rounding error.
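Here is a minimal sketch of what such a test can look like (in Python, with its standard unittest module; this illustrates the pattern, it is not code from our actual libraries):

    import math
    import unittest

    class TestMathLibrary(unittest.TestCase):
        def test_sqrt_round_trip(self):
            # Squaring the square root should give back the input,
            # within floating-point rounding error.
            self.assertAlmostEqual(math.sqrt(2.0) ** 2, 2.0, places=12)

        def test_sin_of_pi(self):
            # sin(pi) is mathematically 0; allow a tiny rounding tolerance.
            self.assertLess(abs(math.sin(math.pi)), 1e-9)

    if __name__ == "__main__":
        unittest.main()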

If your code does not pass the unit tests, you don’t merge it with the code everyone is using. You have to either fix the code, or fix the tests (because, once in a while, it is the test that is wrong).

Until then (2006), data was complex, but manageable. Take what we did for SimCity 3000, for example: a large data set for the buildings, with many states/variations to display information back to the users. The data creators (often the graphics team) had things under control.

Somehow, with Spore, we crossed a threshold. The team had tremendous talent and experience: on the development side with Andrew Willmott, Chaim Gingold, and Lydia Choy, and on the art side with Mike Khoury and Ocean Quigley.
One day Mike Khoury called me and explained the challenge he was facing: the data was getting too complex, and the team was spinning in circles trying to make the developers happy, but things were broken all the time. It was frustrating, it led to poor productivity, and everyone hated it…

To explain what happened, I will dive into more detail, and bring in Sarah, who, as a data scientist, implemented the first solution around data coherence. Funnily enough, we quickly learned that the problem was in fact happening in other teams across EA, and I am now convinced, across the whole industry…

Getting back to Spore, which I studied carefully, here is what the core issues came down to. First, let me explain what the technology in the game was about…

In Spore, the user builds creatures, which can be heavily customized. It is all about enabling the user to become a 3D artist, without the complexity of the 3D editors. It is revolutionary.

The trick is to use a mathematical (implicit) surface controlled by spheres (all code), and then complement it with really cute “parts” that the user can customize: eyes, paws, tails, ears, etc…
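To give a flavor of the idea, here is a generic metaball-style sketch (not Spore’s actual formulation): each sphere contributes a field value that falls off with distance, and the creature’s skin is the isosurface where the total field crosses a threshold.

    def field_value(point, spheres, threshold=1.0):
        """Implicit field: > 0 inside the surface, < 0 outside.
        spheres is a list of (center_x, center_y, center_z, radius) tuples."""
        total = 0.0
        for cx, cy, cz, r in spheres:
            d2 = (point[0] - cx) ** 2 + (point[1] - cy) ** 2 + (point[2] - cz) ** 2
            total += (r * r) / (d2 + 1e-9)  # contribution falls off with distance
        return total - threshold

    # Two overlapping spheres blend into one smooth "body":
    body = [(0.0, 0.0, 0.0, 1.0), (1.5, 0.0, 0.0, 1.0)]
    print(field_value((0.75, 0.0, 0.0), body) > 0)  # True: the midpoint is inside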

And as the team solved the problem for creatures, they realized they could use a similar solution for buildings, vehicles, and spaceships, with roofs, windows, doors, wheels, etc…

We called the “cute” parts “rigblocks”. Rigblocks have many parameters, because they are made of what are called “blend shapes”. Most of them had 2 variables, but some had as many as 5. Let me explain how blend shapes work:

You take a 3D object with a specific geometry. The geometry is made of points, and triangles connecting the points, creating a mesh of polygons. For one object, all the blend shapes share the same graph of points and polygons, but some or all of the points are in different locations in space.

For example, if you simply have a small and a big version of the same object as blend shapes, you can easily scale the object by interpolating between the two sets of values.

But if you have 3 blend shapes (a small one, a tall one, and a wide one), then with 2 parameters you can control how the object scales, with all kinds of aspect ratios.
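Here is a minimal sketch of that interpolation, assuming each blend shape stores its vertices in the same order (which is exactly the shared-topology property described above):

    def blend(shapes, weights):
        """Weighted average of vertex positions across blend shapes.
        shapes:  list of shapes, each a list of (x, y, z) vertex tuples
        weights: one weight per shape, summing to 1.0"""
        count = len(shapes[0])
        assert all(len(s) == count for s in shapes), "same topology required"
        return [tuple(sum(w * s[i][axis] for s, w in zip(shapes, weights))
                      for axis in range(3))
                for i in range(count)]

    # One parameter, two shapes: slide between a small and a big paw.
    small = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
    big   = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
    t = 0.25  # the user's slider value
    print(blend([small, big], [1.0 - t, t]))  # [(0.0, 0.0, 0.0), (1.25, 0.0, 0.0)]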

If you are making paws, with the same object you can make them small or big, with small claws or big claws…

If you are making windows, you can control the size, but also the length of the details, etc…

If you are making a roof, you can control the slope, the symmetry…

If you are making a wheel, you can control the width of the tire, the size of the wheel…

If you are making a mouth, you can control the width, the height, etc…

All of this to show it is a lot of fun!!

Here are some more rigblocks:

What could go wrong?

So much fun, what could be so frustrating?

After interviewing the graphics team and the dev team, simple things came up:
– The constraints were poorly defined. There was no clear checklist of what had to be true. It was a problem on the graphics creation side, because the artists did not know if they were doing it right, but also on the development side, because when the developers changed a requirement, it was not clear what they had to communicate.
– Even when the requirements were understood, it was difficult to know if a model satisfied them. It took too much time to check manually. Let me explain this in detail:

1- All blend shapes need to have the same structure of points and polygons.
2- The shapes need to be watertight (which means they could be 3D printed…).
3- When animating between the different shapes, the interior faces of the polygons need to always point inside, and the exterior faces always point outside…
This 3rd constraint, interestingly enough, grows geometrically with the number of parameters on the object. If you decide to just check the values 0, 0.5, and 1 for each parameter, that is 3 checks for one parameter, 9 for 2, 27 for 3, 81 for 4, and 243 for 5. No wonder it is difficult to check by hand.
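In code, that sampling is just a cross product: 3 sample values per parameter gives 3^n combinations to render and inspect. A minimal sketch:

    from itertools import product

    def parameter_grid(num_params, samples=(0.0, 0.5, 1.0)):
        """Yield every combination of sample values: 3**num_params in total."""
        yield from product(samples, repeat=num_params)

    for n in range(1, 6):
        print(n, sum(1 for _ in parameter_grid(n)))  # 3, 9, 27, 81, 243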

The solution: a data test harness

Sarah jumped in, and we built a test harness that ran automatically on every check-in on the art side. It looked at what had been modified, created a set of web pages showing pictures of the rigblocks at the different parameter values, and ran all the required tests on each one. For the interior/exterior test, we rendered the shape with a red material on the inside, and detected red pixels in the image…

If any of the results triggered an error, the system emailed the graphic artist.
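The interior/exterior check had roughly the shape of the sketch below. The names (render_rigblock, email_artist, num_params) are illustrative stand-ins, not our actual tooling; the real rendering and notification plumbing is beyond the scope of this post.

    from itertools import product

    def check_interior(rigblock, render_rigblock, email_artist):
        """Render every parameter combination with a red inside material
        and flag any image where interior (red) pixels show through."""
        failures = []
        for params in product((0.0, 0.5, 1.0), repeat=rigblock.num_params):
            pixels = render_rigblock(rigblock, params, inside_color=(255, 0, 0))
            if any(px == (255, 0, 0) for px in pixels):
                failures.append(params)  # the mesh's inside is visible here
        if failures:
            email_artist(rigblock.artist, rigblock.name, failures)
        return not failures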

This system put the artists back in control of their work.
If a rigblock passed the harness, they had done their job. If it passed the harness but still did not work in the game, then it was the whole team’s problem to figure out what had to be improved: either the runtime code or the harness…

The long term impact

After 20 years of code getting more and more complex, we have figured out productivity patterns to validate the quality of what we program.
Data has reached the point where a similar pattern is needed. As data gets more complex, it needs a harness: the data is only as good as what the harness validates.

It does not matter whether it is graphics data, instrumentation data for online games, or internal financial data…

Luc