Animation

Whilst our bread and butter is visualization work and explorables, every once in a while we do fun animation projects as well.

The above project, codenamed icon field, started life as a 3D animation commissioned as an event piece for a client, playing out on a large screen wall the width of a ballroom.

I repurposed it to use V/R’s logo, and added some controls to show that the piece is not video and can be tweaked on the fly with input parameters.

Because it is not video, the code art piece is light on resources, and it runs seamlessly at any canvas size (everything is procedurally generated), be it the width of a ballroom wall or the confines of your mobile phone.

You don’t need a natively compiled piece built in Unity or C++; WebGL has improved by leaps and bounds in recent years, and you can run this in any web browser (which you can set to any size). If you have any interest in a code art piece to make your event more interesting, drop us a ping!

Icon field: implementation details

The 3D in this piece was implemented using Three.js.

The idea behind the piece is relatively simple: create multiple fields of icons in 3D space, then move the fields towards the camera to give the illusion that you’re flying through them.

When a field has moved behind the camera (and thus is no longer visible), it is reused: the field is sent to the back of the queue to be encountered again.

This gives the illusion of an “infinite” runner, and is resource light because at any point in time, only 8 generated fields exist for the example above.
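
To make the recycling concrete, here is a minimal sketch of the loop, assuming a standard Three.js scene/camera/renderer setup and a hypothetical makeIconField() helper that builds one field of icon meshes:

```javascript
// Minimal sketch of the field-recycling loop (names are illustrative).
const FIELD_COUNT = 8;      // only 8 fields exist at any one time
const FIELD_SPACING = 200;  // Z distance between consecutive fields
const SPEED = 60;           // world units per second, towards the camera

const fields = [];
for (let i = 0; i < FIELD_COUNT; i++) {
  const field = makeIconField();          // hypothetical: builds a THREE.Group of icons
  field.position.z = -i * FIELD_SPACING;  // stack the fields away from the camera
  fields.push(field);
  scene.add(field);
}

const clock = new THREE.Clock();
function animate() {
  requestAnimationFrame(animate);
  const dz = SPEED * clock.getDelta();
  for (const field of fields) {
    field.position.z += dz;               // fly the field towards the camera
    if (field.position.z > camera.position.z) {
      // Passed behind the camera: send it to the back of the queue.
      field.position.z -= FIELD_COUNT * FIELD_SPACING;
    }
  }
  renderer.render(scene, camera);
}
animate();
```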

To keep the center space of the field empty, use a simple formula to exclude the randomly generated icons from appearing there.
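
The exact formula isn’t published, so as an assumption, a simple rejection-sampling rule achieves the same effect (CLEAR_RADIUS is a hypothetical value):

```javascript
// Re-roll icon positions until they land outside a clear radius
// around the field's centre.
const CLEAR_RADIUS = 30; // hypothetical size of the empty centre

function randomIconPosition(halfWidth, halfHeight) {
  let x, y;
  do {
    x = (Math.random() * 2 - 1) * halfWidth;
    y = (Math.random() * 2 - 1) * halfHeight;
  } while (Math.hypot(x, y) < CLEAR_RADIUS); // reject points too close to the centre
  return { x, y };
}
```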

Once again, great fun to do code art, especially in the 3D space.

A WebGL piece done for a client, this 3D photo directory was used as a showcase piece at an event in China, where it was shown on a gorgeous event screen the width of a banquet hall.

It was also designed with multi-browser, multi-form-factor (desktop, mobile) use in mind, with end users able to play with the piece on their own devices or on touchscreens.

The idea behind it was to allow participants to visually inspect all attendees of the event, and through filters and search be able to connect to and find information on people within their organization.

The 3D photo visualization piece that is linked is a sanitized demo version with randomly generated names and photos, as the real piece has data that should not be shared publicly.

But the principles and ideas behind the piece still stand, and it might be interesting to go through some of the hurdles involved in building this piece of beautiful code art.

App features
This code art piece changed a lot along the way: flat 2D photo planes became cubes (which fit in with the client’s event design theme).

Originally designed as a sort of photo selfie mosaic, it morphed into a more useful visualization that could be easily updated and reused for the next year.

Many modes were added along the way, like:

  • The base cube mode
  • The globe mode (which was designed to mimic the client’s circular logo – it has been changed in this version – and add a bit of a “wow” factor during the presentation)
  • The showcase mode, which highlights a few important organization members
  • Gender sort, a way for organizing the data so that you can see the breakdown in the organization
  • Inspect mode, where you can type in a name and it will jump to that person
  • Filters, such as by profession and province, where you can see a selection of the data whilst the rest fades into the background

UX design
For each mode, care was taken so that when you transition between modes, there is an animation that adds a bit of a “wow” factor and cues the user that the mode has changed.

There are also various user interactions built into the visualization. These include:

  • Zoom in/out using the mouse scroll (or, on touch, pinch to zoom)
  • Rotate left/right by holding down the mouse button and dragging across the screen (or, on touch, swiping left/right)
  • When you start entering a name under inspect mode, it autocompletes all possible names that are available in the visualization, making it far easier to pick a name
  • When you click on any photo cube in any mode, it immediately jumps to that photo in inspect mode, letting you know who that person is.

Together with all the modes, sorts and filter functions, and by allowing the end user to search and photo inspect, the 3D viz allows users to explore the dataset in a fun and intuitive way.

Three.js: WebGL
The 3D portion of the piece is written in Three.js, one of the best-known JavaScript libraries for 3D.

All the cubes are mesh objects located in 3D space, and their textures are the photos, mapped onto 2 sides of each cube. We tried more faces, but the rendering involved was taxing on the piece, and 2 faces made for a more pleasing design.
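
In Three.js, a per-face texture like this is done by giving a BoxGeometry an array of six materials, one per face. A sketch of a photo cube along these lines (the file name is illustrative):

```javascript
// Material order for a BoxGeometry is [+x, -x, +y, -y, +z, -z];
// the photo goes on the two Z faces, the rest get a plain colour.
const loader = new THREE.TextureLoader();
const photo = new THREE.MeshBasicMaterial({ map: loader.load('photo.jpg') });
const plain = new THREE.MeshBasicMaterial({ color: 0x222222 });

const cube = new THREE.Mesh(
  new THREE.BoxGeometry(10, 10, 10),
  [plain, plain, plain, plain, photo, photo] // photo on front and back
);
scene.add(cube);
```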

The X, Y, and Z positions of the cubes are computed algorithmically. For example, in cube mode, a 10 x 10 x 10 addressing space was created, then randomly shuffled and packed with smaller cubes to give the idea of a packed cubic structure. Resting animations that make the cubes move and rotate were added to make the piece more interesting.
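
A sketch of that packing, assuming a `cubes` array of meshes and a hypothetical SPACING constant; the shuffle is a standard Fisher-Yates:

```javascript
// Enumerate a 10 x 10 x 10 grid of slots...
const slots = [];
for (let x = 0; x < 10; x++)
  for (let y = 0; y < 10; y++)
    for (let z = 0; z < 10; z++) slots.push({ x, y, z });

// ...shuffle them (Fisher-Yates)...
for (let i = slots.length - 1; i > 0; i--) {
  const j = Math.floor(Math.random() * (i + 1));
  [slots[i], slots[j]] = [slots[j], slots[i]];
}

// ...then hand out slots to cubes in order.
const SPACING = 12; // hypothetical distance between grid slots
cubes.forEach((cube, i) => {
  const s = slots[i];
  cube.position.set(s.x * SPACING, s.y * SPACING, s.z * SPACING);
});
```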

In globe mode, the X, Y, and Z positions of the cubes are aligned according to spherical equations. For the filters, the photos are all arranged in squares of the filtered content (with a dynamic camera zoom based on the number of people filtered).
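
The piece’s actual equations aren’t shown; as an assumption, one common way to spread N cubes evenly over a globe is a Fibonacci sphere:

```javascript
// Place point i of n on a sphere of the given radius using the golden angle.
function spherePosition(i, n, radius) {
  const golden = Math.PI * (3 - Math.sqrt(5)); // golden angle
  const y = 1 - (2 * i) / (n - 1);             // y runs from 1 down to -1
  const r = Math.sqrt(1 - y * y);              // circle radius at this height
  const theta = golden * i;
  return new THREE.Vector3(
    radius * r * Math.cos(theta),
    radius * y,
    radius * r * Math.sin(theta)
  );
}

cubes.forEach((cube, i) => {
  cube.position.copy(spherePosition(i, cubes.length, 120));
});
```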

Thus, when you switch between modes, you simply calculate the new cube positions from the relevant equations, and the transitions then play out.
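
The post doesn’t spell out the tweening, but GSAP (already in the stack, see below) can tween the x/y/z of a THREE.Vector3 directly, so a mode switch might look like this, with targetPositionFor standing in for the relevant per-mode equation:

```javascript
// Tween every cube from its current position to its new-mode position.
function switchMode(cubes, targetPositionFor) {
  cubes.forEach((cube, i) => {
    const target = targetPositionFor(i); // e.g. spherePosition(i, n, 120)
    gsap.to(cube.position, {
      x: target.x,
      y: target.y,
      z: target.z,
      duration: 1.5,
      ease: 'power2.inOut',
    });
  });
}
```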

Detecting whether you clicked a cube is based on raycasting collision detection from the camera. If a cube is hidden behind another cube, you cannot click on it.
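
A sketch of that picking logic with THREE.Raycaster; jumpToInspectMode is a hypothetical handler:

```javascript
const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

renderer.domElement.addEventListener('click', (event) => {
  // Convert the click to normalized device coordinates (-1 to +1).
  const rect = renderer.domElement.getBoundingClientRect();
  pointer.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
  pointer.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;

  raycaster.setFromCamera(pointer, camera);
  const hits = raycaster.intersectObjects(cubes); // sorted nearest-first
  if (hits.length > 0) {
    jumpToInspectMode(hits[0].object); // only the nearest, unobstructed cube wins
  }
});
```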

Miscellaneous: libraries used
Hammer.js was my touch library of choice. Originally we used some of JavaScript’s native click and scroll events, but they did not mimic the tap, swipe, and pinch in/out gestures of a touch screen as naturally. Hammer.js’s touch functions felt more natural and were used in the final piece.
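
The basic wiring looks something like this (the handlers are hypothetical; note that Hammer.js disables pinch recognition by default):

```javascript
const hammer = new Hammer(renderer.domElement);
hammer.get('pinch').set({ enable: true }); // pinch is off by default

hammer.on('tap', (e) => pickCubeAt(e.center.x, e.center.y)); // hypothetical picker
hammer.on('swipeleft', () => rotateScene(-1));
hammer.on('swiperight', () => rotateScene(1));
hammer.on('pinch', (e) => setZoom(e.scale)); // e.scale is the pinch factor
```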

SVG and HTML animations for the piece were done using Greensock, my go-to library for animations. From the pop-up information displays to the sidebar animations, these were all done with Greensock.

For the real piece, because we read from a CSV data backend, I used papaparse.js to parse the data into JSON chunks to populate the viz. For this sanitized version, chance.js was the library of choice for generating the random names and data.
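
A sketch of both paths (the field names and the buildViz entry point are illustrative):

```javascript
// Real piece: parse the CSV backend into JSON rows.
Papa.parse('attendees.csv', {
  download: true, // fetch the file by URL
  header: true,   // first row becomes the JSON keys
  complete: (results) => buildViz(results.data),
});

// Sanitized demo: generate plausible random records instead.
const chance = new Chance();
const fakeAttendee = () => ({
  name: chance.name(),
  profession: chance.profession(),
});
```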

And lastly, to round everything off, jQuery and Underscore.js were used. Both of these popular libraries have very useful functions, from DOM manipulation to sorting/filtering.

We investigated other libraries, such as a dedicated loading library, but Three.js’s very basic loading manager proved sufficient for the piece, so no loading library was needed. Phew!

Miscellaneous: challenges
Because of how browsers draw canvas animation frames, whenever the page loses focus (i.e. you switch to another app or tab), the animation frames stop updating and are effectively “paused”. This became an issue because the X, Y, and Z coordinates of the cubes were still being updated even though no frames were drawn. Thus, when you switched back to the piece, it was very common for cubes to be in the wrong positions.

The fix was a solution based on a clock timer. On detecting defocus, start a timer; when you get back to the piece, measure how much time has elapsed, then repaint the scene based on where the cubes are supposed to be.
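
A minimal version of that fix, assuming a hypothetical updateCubePositions(elapsedSeconds) that advances the simulation to where it should be:

```javascript
let hiddenAt = null;

document.addEventListener('visibilitychange', () => {
  if (document.hidden) {
    hiddenAt = performance.now(); // start the timer on defocus
  } else if (hiddenAt !== null) {
    const elapsed = (performance.now() - hiddenAt) / 1000;
    updateCubePositions(elapsed);   // move cubes to where they should be
    renderer.render(scene, camera); // then repaint
    hiddenAt = null;
  }
});
```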

Other interesting challenges were due to browser-specific issues, especially on mobile. The touch functions required quite a bit of debugging to make sure they worked decently with the piece.

All in all, I loved doing a code art piece as pleasing and beautiful as this!

The last in the series of interactive pieces written for BASF earlier this year, this BASF Haptex piece follows the same considerations as the rest of the series.

Like the BASF Elastopave piece, this piece is heavily composed of animated GIFs for the artwork.

However, there is one partially animated SVG layered on top of the background in slide 1 of the piece.

Again, this was mainly due to time considerations after the artwork was frozen, and this method of composition and editing was the easiest way to achieve the effect I wanted.

One of the bigger challenges for this piece was keeping the animations invariant regardless of form factor.

When the pieces were first designed, they were designed specifically for a 1920 x 1080 HD screen size, and the animations fit perfectly then. For example, in the first introduction pane, the objects rotate correctly (they follow a simple Bézier curve) and look good on the screen.

However, this would go entirely out of whack on smaller screens because the coordinates were all hard-coded. To solve this, the coordinates had to be made relative to the screen size, and a bit of math went into refactoring the piece so that it works at any screen size.
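
The gist of that refactor, sketched with hypothetical helpers: author coordinates as fractions of the 1920 x 1080 design viewport, then project them onto whatever screen the piece is running on:

```javascript
const DESIGN = { w: 1920, h: 1080 };

// A Bézier control point authored at (960, 540) becomes (0.5, 0.5)...
const toRelative = (p) => ({ x: p.x / DESIGN.w, y: p.y / DESIGN.h });

// ...and is projected back at runtime onto the actual screen.
const toScreen = (rel) => ({
  x: rel.x * window.innerWidth,
  y: rel.y * window.innerHeight,
});
```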

Another interactive piece written for large touch screens, this BASF Elastopave interactive explorable follows the same style and design considerations as the original BASF Elastolit utility piece.

Besides reusing the Canvas rain code written for the BASF Elastolit utility piece (otherwise the videos would be huge!), this piece has a lot more animated GIFs added in by my designers.

This is a huge time saver once the art pieces are confirmed; you just need to make sure that the artwork is animated in a loop.

Whilst it is possible to animate every piece of artwork individually, the amount of time taken would be non-trivial. Getting into the nitty gritty of the low level allows absolute flexibility, but time is not unlimited.

There is one art piece that is a mix of SVG animation overlaid on top of static art: the 3rd pane (yes, in this piece you can jump straight to certain slides via a specific URL link).

I decomposed the SVG files given to me (English and Chinese versions) to animate the piece. I cut out the parts I wanted to animate as inline SVG, and the rest is loaded as an image background (toggled for language). A bit of positioning, and you get the effect you see.
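
Sketched with hypothetical element IDs and file names, the composition boils down to a background swap plus GSAP on the inline SVG parts:

```javascript
// Swap the static background when the language toggles.
function setLanguage(lang) {
  document.getElementById('pane3-bg').src =
    lang === 'zh' ? 'pane3-zh.png' : 'pane3-en.png';
}

// The cut-out inline SVG parts are animated as usual, here with GSAP.
gsap.to('#pane3-highlight', { y: -10, repeat: -1, yoyo: true, duration: 1.2 });
```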

Whilst there are many ways to create animations and transition effects, each has a different cost. One should always bear in mind things like performance, browser compatibility (if needed) and flexibility of composition.

An interactive piece of work done for our client BASF, this BASF Elastolit utility piece is an interactive explorable designed for large touchscreens.

Used at an exhibition event in China earlier in the year, the interactive explorable is meant to draw the attention of the crowd and get them to play with it.

There are several design objectives baked into this piece.

Because it is a standalone interactive, the piece jumps back to the introduction if no one has interacted with it for more than 3 minutes.
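
A minimal sketch of that reset behaviour, assuming the Reveal framework the piece is built on (Reveal.slide(0) jumps to the first slide):

```javascript
const IDLE_MS = 3 * 60 * 1000; // 3 minutes
let idleTimer;

function resetIdleTimer() {
  clearTimeout(idleTimer);
  idleTimer = setTimeout(() => Reveal.slide(0), IDLE_MS); // back to the intro
}

// Any interaction restarts the countdown.
['touchstart', 'mousedown', 'keydown'].forEach((evt) =>
  document.addEventListener(evt, resetIdleTimer)
);
resetIdleTimer();
```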

There are also swipe-based touch capabilities added to the piece. These, together with other UI elements like touch cues, left/right arrows, and the bottom navigation bar, allow one to navigate the piece easily.

The artwork is custom, and the animation effects are layered on with a mix of static images, animated GIFs, video, and code-based art animation.

A translation button (top-right corner) allows users to quickly switch between English and Chinese on the fly.

Beyond its use at one event, the piece was also meant to have a longer shelf life and be accessible on the web. A key to this was keeping the load relatively lightweight.

The rain was originally encoded in the video, but then the video files would be huge (on the order of several hundred MB). This is because in video encoding the screen is captured frame by frame, and file size depends on how much is moving on screen at any one time. More movement = a bigger file for the required quality.

Thus the decision was made to generate the rain entirely in code, as an HTML5 canvas rain layer overlaid on top of the video. You can see an example of the test UI code rain here.
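
For flavour, a minimal canvas rain layer in the same spirit, assuming a transparent canvas (the 'rain' id is hypothetical) overlaid on the video and sized elsewhere:

```javascript
const canvas = document.getElementById('rain');
const ctx = canvas.getContext('2d');

// A fixed pool of drops, each drawn as a short streak.
const drops = Array.from({ length: 300 }, () => ({
  x: Math.random() * canvas.width,
  y: Math.random() * canvas.height,
  len: 10 + Math.random() * 15,
  speed: 6 + Math.random() * 6,
}));

function frame() {
  ctx.clearRect(0, 0, canvas.width, canvas.height); // keep the layer transparent
  ctx.strokeStyle = 'rgba(174, 194, 224, 0.6)';
  for (const d of drops) {
    ctx.beginPath();
    ctx.moveTo(d.x, d.y);
    ctx.lineTo(d.x, d.y + d.len);
    ctx.stroke();
    d.y += d.speed;
    if (d.y > canvas.height) { // recycle the drop at the top
      d.y = -d.len;
      d.x = Math.random() * canvas.width;
    }
  }
  requestAnimationFrame(frame);
}
frame();
```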

Lightning effects are also canvas-code based, and are overlaid on the same layer as the rain. The comparison between HTML5 canvas code performance and video file size is stark.

The text is also all stored in the HTML5 code (not in the video), which also reduces the load for the on-the-fly translation. We managed to get the videos down to below 1 MB, and yet the entire piece, when run on an HD 1920 x 1080 screen, still retains its sharpness.


Interactive explorable running on a gorgeous 55-inch Samsung Flip touchscreen

Sound files are not in any of the videos; they are fired separately during the transitions between panes. Whilst they could have been added within the videos, keeping all the elements separate and gluing them together only in the code layer makes for a much more flexible result.

I would also argue it is more powerful. For example, the music, wind, and rain can be layered in ways that depend on how the viewer transits the panes.
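
A sketch of that layering with Howler.js (file names and pane indices are illustrative; the slide-change hook assumes a recent reveal.js, which exposes Reveal.on):

```javascript
const rain = new Howl({ src: ['rain.mp3'], loop: true, volume: 0.5 });
const whoosh = new Howl({ src: ['whoosh.mp3'] });

Reveal.on('slidechanged', (event) => {
  whoosh.play(); // one-shot on every pane transition
  // Ambient layers depend on which pane the viewer is on;
  // wind and music would be handled the same way.
  if (event.indexh === 2) {
    if (!rain.playing()) rain.play();
  } else {
    rain.stop();
  }
});
```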

Lastly, some work was done to ensure that it looks decent on a mobile form factor (where it will nag you to rotate the phone to landscape). It is not perfect for every form factor, but by and large it should work decently.

This piece is a mix of many different JavaScript libraries and technologies, including D3 (for the gauge generation and bottom UI element), GSAP (animation library), jQuery (useful DOM manipulator), HowlerJS (sound library), low-level Canvas code, Reveal (framework + touch), and Icomoon (custom icon pack for smaller sizes).