Saturday, June 29, 2013

Finding a Common Language of Design pt. 1

It dawned on me quite early that for several interconnected pieces of software to work as a whole, they would have to share a common language in how they looked and behaved. Luckily, an application built with Qt looks and feels, by default, like any other application on the same operating system. It mimics the window frame, the drag and resize facilities, the closing and minimising behaviours, and even colours and sound. In fact, developing with Qt promises "write once, run everywhere", or WORE: whatever you write will not only work on any platform, but also mimic the look and feel of that platform, while still performing as you designed.

"But then…" I hear you say, "why not use that to your advantage when developing Pipi?" It's a fair question.

Pipi does not adhere to this convention of mimicking its host, because Pipi is a cross-platform framework. It will run on a variety of platforms (Windows, Linux and Mac), so the behaviour of its host platform will vary; its users, however, won't. Users will come and go, their environments may change, and the very platform they work on may change with each project they enroll in. That is why it is so important to maintain a "beacon of light": a consistent and streamlined UI that any artist can access and use without the slightest doubt of anything being different, regardless of which platform the artist is working under.

This does not mean, however, that the practices put in place by these hosts are ignored. Throughout the evolution of software, users have come to expect many things from their interaction with computers, and it is your responsibility to honour these expectations if you decide to stray from the path. There is a saying that the greatest design is that which behaves just as users expect. This applies to any interaction design, not least web pages. If you have ever entered an unfamiliar website with a goal in mind, only to leave five minutes later with the goal achieved, not remembering exactly how you did it but knowing that you did, you have experienced a rare, well-designed site and should take note of its whereabouts and learn from its design. This has happened to me only a few times and, sadly, their successful design makes them the most easily forgettable websites out there. They provide no friction, no moment of wonder or scratching of the head, and thus no time to commit this flawless design to memory.

As you might suspect, if you are designing a fashion site or for any other market promoting content rather than functionality, this may not be the best route. For design that revolves around ease of use, however, focus and simplicity (to borrow from the mantra of Steve Jobs) are key.

Product Development as a Film Process

As I was working the other day, it struck me that the approach I had adopted in developing the graphical content of Pipi is very similar to how I would develop the graphical content for film. In fact, the more similar I made it, the more comfortable and predictable it became. So, if you're familiar with film production, I'd like to share this approach with you, as it has helped me materialise my ideas into concrete matter in a rapid fashion.

Firstly, let's have a brief look at what the process of making film looks like today. A director has a vision. The director refines this vision, discusses it amongst his peers and starts materialising his thoughts onto paper in the form of text (the script) and art (storyboards). Then, one or more studios are hired to perform the bulk of the work required to turn the director's vision into reality. Starting with more concept art, he then progresses into pre-visualisation, in which the film is blocked out in time and space to represent his vision much as a kid would convey an idea using crayons on the surface of your kitchen table. It's crude, it's fast and it's inaccurate. This is when the vision is tested against its target audience; this is where the vision is iterated upon until the foundation is strong. Once a foundation has been built, and once the vision is shared amongst everyone involved in its inception, production begins.

Let's stop here for a bit and look at how this correlates to product development. An entrepreneur has a vision. The entrepreneur refines this vision, discusses it amongst his peers and starts materialising his thoughts onto paper in the form of text (requirements) and art (wireframes). Then, one or more companies are founded to perform the bulk of the work required to turn the entrepreneur's vision into reality. Starting with more wireframes, he then progresses into pre-visualisation.

This is where I was first stopped in my tracks, gazing out into the horizon, looking for ways to achieve the same fluidity that has become natural to me when working in film: to "sketch" your vision and then iterate on that sketch. But how can one sketch in product development? How can a product be tested against an audience without actually being built? In User Interface and User Experience (UI/UX) design, there is the concept of wireframing. Wireframing shares many similarities with storyboarding. It is a crude and simple, yet clear and fast, version that facilitates iteration. UI/UX designers have been using it for years, and it makes as much sense for interface design as storyboarding does for film.

The interesting part, however, is what comes next.

In film, once the script has been written and the storyboards laid out, it is time to put the vision against the clock and make a so-called animatic. The animatic serves as a useful guide to timing and continuity. Do shots fit together in time? Does this sequence make sense when played out like this? How do we go from this shot to that one? The animatic answers all of these questions and more.

In developing Pipi, the most natural thing for me to do was to apply the same concept of an animatic to user interface design. So I took my storyboards, drawn hastily on paper, refined them in Photoshop to look like the product I had in mind, and simply animated the UI with a mocked-up cursor so as to give the appearance of a user interacting with my future software.

[Video: click to play]

This is great! I can now easily send this off to anyone and, without further explanation, the recipient will understand not only what it looks like and what it can do, but also how those things are done.

Taking this one step further, we can insert the sequence of images into a demo context, such as an operating system, Windows in this case, and host application, Maya.

[Video: click to play]

As you can see, there are actually two separate user interfaces being demonstrated simultaneously, one spawning the other, while the fictional user clicks his way through the menu items to achieve his goal. Additionally, the video contains some explanatory notes to the side of the window, accentuating things not clear from simply looking at the screen.

Using these techniques, I can take an idea I just finished plotting on paper to the level of what you just saw in these videos in less than a day's worth of work each. And at such a high level of clarity, I can collect feedback on features, user experience and visual design more accurately and quickly from my target audience, and have the next version out the following morning, if not the very same day.

To finish off, I'd like to provide you with one final demonstration. A user experience and feature demonstration of another application that was hastily developed in order to gain the same level of understanding that one of the previous "wirematics" could have provided, except the execution time was closer to two weeks of Python coding.

[Video: click to play]

What is Pipi

It has taken me a long time to define Pipi. As I made my way from studio to studio over the course of the past five years, each one had this "something" about it that made work a much more pleasant experience, turning usually repetitive and cumbersome tasks into welcome distractions.

Of the 10+ studios I've worked at, each and every one displayed patterns of the "earlyvangelist". The characteristics are as follows. The earlyvangelist:

  1. Has a problem
  2. Is aware of having a problem
  3. Has been actively looking for a solution
  4. Has put together a solution out of piece parts
  5. Has or can acquire a budget

I remember one artist at a mid-sized house here in the UK who told me he had suggested to management that he build better tools to reduce the rate of mistakes and boost the productivity of their crew, but the benefits were lost on management, who never allowed him to proceed.

Management is not to blame, however. It is difficult to visualise the abstract benefits of what you do not know. It reminds me of a famous quote attributed to Henry Ford: "If I had asked people what they wanted, they would have said faster horses." Although apocryphal, its message holds true. It is a trait of human nature to improve on what we have by mere optimisation rather than stepping outside of that box and into unfamiliar territory: the territory of innovation.

Developing tools is expensive. Both in terms of money and time. There is an inevitable period of trial-and-error when breaking new ground. But what you get out of this process is invaluable. Not only will it accelerate your rate of delivery, it will enhance it, make it stronger and more accurate. The rate of mistakes will drop significantly and the happiness-factor of your workforce will increase.


As I have spent the past year refining this idea and the solutions that came with it, the smoke has cleared and my vision is slightly less blurry. I would now like to give you an overview of some of my conclusions thus far.

What is Pipi?

Pipi is a Digital Content Creation (DCC) framework for the film, commercial and games industries.

Why Pipi?

Digital content creation is rapidly being divided into ever more areas of specialist expertise, and along the way we have forgotten to keep the pieces in sync. Pipi aims to solve exactly that.

How does it work?

Pipi is a framework. It provides you with an essential set of tools along with a foundation upon which to build your own tools that fit into your studio's unique way of working. It integrates with the major existing applications, such as Maya and Nuke, and frameworks delivered by third-party developers, such as Shotgun and FTrack, to help take the hassle out of fast-paced production development.


Pipi ships with six essential tools that serve as pivot points for your work and your own custom tool development. These tools, in order of relevance, are Launcher, Inspector, Creator, Library, Publisher and Handler.

I will now go over the supplied set of tools in more detail.



Launcher

The main entry point for artists. This is what is booted up at the dawn of each day as artists begin their work. It supplies artists with an overview of each project and its content, along with the ability to manage projects from a higher-level point of view: creating sequences, editing assets and removing shots are all performed via the Launcher.



Inspector

Each job, each asset and each shot comes bundled with data. This "data about data" is referred to as metadata, and shots et al. are in turn known as entities.

As the number of entities grows, metadata helps keep things organised. For example, it can be used to keep track of frame ranges in a shot, comments on playblasts or artist-supplied descriptions of assets. The Inspector then provides the means to visualise this data.
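To make the idea concrete, here is a minimal sketch of an entity carrying metadata and an Inspector-style read-out. The class and function names are hypothetical illustrations, not Pipi's actual API:

```python
# Illustrative sketch of entities bundled with metadata.
# All names here are hypothetical, not Pipi's actual API.

class Entity:
    """A job, sequence, shot or asset, bundled with metadata."""

    def __init__(self, name, kind, **metadata):
        self.name = name
        self.kind = kind              # e.g. "shot", "asset"
        self.metadata = dict(metadata)

# A shot keeps track of its frame range via metadata.
shot = Entity("sh010", "shot", frame_range=(1001, 1096), artist="marcus")

def inspect(entity):
    """An 'Inspector' only needs to read metadata to visualise it."""
    lines = ["%s (%s)" % (entity.name, entity.kind)]
    for key, value in sorted(entity.metadata.items()):
        lines.append("  %s: %s" % (key, value))
    return "\n".join(lines)

print(inspect(shot))
```

The point of the sketch is that the Inspector never needs to know what kind of entity it is looking at; any bundle of key/value metadata can be visualised the same way.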

In addition to visualising data, the Inspector also provides the means to monitor what is referred to as "events". Events are recorded throughout the use of Pipi. Any time an artist comments on a shot, publishes his/her work or playblasts a shot in preparation for dailies, the recorded event can be monitored by other artists. This helps artists working on the same asset or shot stay in sync with each other and makes it easy to stay up to date on the overall progress for each chunk of work.
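The event-recording idea can be sketched as a central log that other artists filter per entity. Again, the names are hypothetical; the post does not show Pipi's real event system:

```python
# Minimal sketch of event recording (hypothetical names, not Pipi's API).
import datetime

events = []  # central record, monitored by other artists

def record(event_type, entity, artist):
    """Record that something happened to an entity."""
    events.append({
        "type": event_type,    # e.g. "comment", "publish", "playblast"
        "entity": entity,
        "artist": artist,
        "time": datetime.datetime.now(),
    })

def events_for(entity):
    """Everything that has happened to a given shot or asset."""
    return [e for e in events if e["entity"] == entity]

record("publish", "sh010", "anna")
record("comment", "sh010", "ben")
record("playblast", "sh020", "anna")

assert len(events_for("sh010")) == 2
```

A query like `events_for("sh010")` is all another artist needs to stay up to date on that chunk of work.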



Creator

As you start a new job, the Creator provides the means of adding the necessary sequences, shots, assets and variants to that job. It is in the Creator where you append or remove metadata such as camera information, storyboards, references, audio or video. The metadata can then be accessed by artists, by other Pipi tools or by your custom tools.

Creator is most commonly accessed via the Launcher. Whenever an entity is edited, Creator will appear.



Publisher

I mentioned that in a studio, work is divided into chunks and that each chunk is delegated to specialists. These specialists communicate with each other by "publishing" their finished material to a central database. Other specialists can then access this material knowing that it conforms to a fixed set of studio rules. Background props, for instance, may have to conform to a certain polygon count, character setups may have to provide a certain set of controls familiar to animators, and so on.

The Publisher then acts as the funnel through which each shared chunk of work is passed before it reaches the public space. The Publisher performs sanity checks (e.g. "is the geometry visible?") on your work to ensure it conforms to the already published work, in addition to helping artists meet the required guidelines.
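The funnel can be pictured as a list of check functions that every piece of work must pass before it is let through. The checks below are made up for illustration; the post only gives "is the geometry visible?" as an example:

```python
# Sketch of a publish-time sanity-check pipeline (hypothetical checks).

def geometry_is_visible(work):
    return work.get("visible", False)

def has_description(work):
    return bool(work.get("description"))

CHECKS = [geometry_is_visible, has_description]

def publish(work):
    """Run every check; only conforming work reaches the public space."""
    failures = [check.__name__ for check in CHECKS if not check(work)]
    if failures:
        raise ValueError("Publish blocked: %s" % ", ".join(failures))
    return "published"

good = {"visible": True, "description": "Hero prop, v3"}
assert publish(good) == "published"
```

Because the checks live in one place, a studio can encode its own rules (polygon counts, control sets and so on) as plain functions and the funnel enforces them uniformly.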



Library

Once an artist has published his/her work, it can be found in the Library. The Library provides a bird's-eye view of every published entity in every job within a studio. It allows artists not only to load animations, point-caches and assets, but also to perform an at-a-glance inspection of metadata, to see the relations between one entity and another along with their history, and to preview the data directly in the Library. This makes the process of finding what you are looking for effortless for any artist.



Handler

The Handler is the only application-specific tool so far. It deals with the specifics of how an application handles assets, such as rigs or point-caches. To aid in explaining, I'll use Maya as a case in point. In Maya, once an asset such as a rig has been loaded, the Handler provides the user with a few options: whether the asset should be imported or referenced, which version to load, and the ability to up- or downgrade assets. These are things which make a difference only within the specific domain in which the artist is working at the time.

I mentioned that it is application-specific. This means that each application has to be provided with an implementation before it can make use of the Handler, due to the differing ways applications deal with assets. In Nuke, an asset is an image sequence loaded via its internal node known as "Read". Maya can also load image sequences, but does so via different means. The Handler provides a high-level abstraction from these technicalities that can be manipulated via its user interface, so that the artist only has to focus on the "what" and not the "how".
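One natural shape for this is a shared interface with one implementation per application; the artist's request is the same, only the "how" differs per subclass. The class and method names below are my own illustration, not Pipi's actual API:

```python
# Sketch of the application-specific Handler idea: one shared,
# high-level interface, one implementation per application.
# All names are hypothetical, not Pipi's actual API.
import abc

class Handler(abc.ABC):
    """The artist asks for the 'what'; the subclass knows the 'how'."""

    @abc.abstractmethod
    def load(self, asset):
        raise NotImplementedError

class MayaHandler(Handler):
    def load(self, asset):
        # In Maya, an asset may be imported or referenced.
        return "maya: referenced %s" % asset

class NukeHandler(Handler):
    def load(self, asset):
        # In Nuke, an image sequence is loaded via a Read node.
        return "nuke: created Read node for %s" % asset

handlers = {"maya": MayaHandler(), "nuke": NukeHandler()}
print(handlers["nuke"].load("plate_v002"))
```

Adding support for a new application then means writing one more subclass, with no change to the tools or artists sitting on top of the abstraction.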

Initially, the Handler will be provided for all major DCC applications, such as Maya, Softimage and Houdini, along with Nuke and Mari.

Alongside these tools, an additional set is being developed: a relational viewer to display the relationships between the entities within a studio, a rigging framework including "weightshift", and an animation library with utilities for storing poses and reusable segments of animation.

I hope you enjoyed this, and thanks for your attention.