Rendering Explained: The Key to Stunning 3D Animation

ℹ️
Animation rendering is the process of generating the final 2D visual output from a 3D scene

The rendering engine in your digital content creation tool takes all the elements you've created—3D models, textures, lighting, and special effects—and translates them into a series of images or videos you can watch or share.

Source: The Science Behind Pixar

This guide explores why rendering is a cornerstone of modern animation, how it's done through various techniques, and the challenges that make it a critical stage in production.


Why Rendering?

All 3D productions include rendering in their animation pipeline, not just before delivery but throughout the entire process, in an iterative cycle of feedback and edits. A well-honed rendering workflow is key to avoiding delays in big productions.

However, rendering is not limited to converting 3D models into images. It also covers 2D computer-generated imagery and visual effects (VFX): animators rely on rendering for water simulations or 2D interpolation, for example.

With new technologies, rendering allows animators to create hyper-realistic graphics, a feat that once required expensive hardware and long render times. It has now become possible for independent filmmakers and small animation studios to achieve cinematic-quality results with free, open-source tools like Blender and a $2,000 laptop.

Source: Blender

For all these reasons, animators need a basic understanding of what rendering entails to make the most of the creative features enabled by modern digital content creation tools.


Render Passes

Digital content creation tools break down models and scenes into separate layers for greater control and readability.

During rendering, each of these layers corresponds to a single pass, and manipulating these passes allows animators to create various effects by combining different rendering techniques.

Source: InFocus Film School

For example, the background could be composed of one layer with a gradient color and another with a generated cloud texture, while the character in the foreground would have separate layers for each body part, texture, shader, and so on.

By isolating these layers, animators can focus on separate details without disturbing the work of others, and adjustments can be added at later stages.

You'll then have shadow passes, which add depth by defining contours and suggesting space across the scene, and highlight passes, which illuminate surfaces and draw attention to key areas.
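The way passes combine can be sketched in a few lines. This is a minimal illustration in plain Python, using a list of single floats as a stand-in "image" (real compositors operate on full RGBA buffers, and pass names and blend rules vary by tool): a shadow pass multiplies the base color (it darkens), while a highlight pass adds to it (it brightens).

```python
# Minimal sketch of pass compositing. Each "image" is a flat list of
# single-channel pixel values in [0, 1]; real tools do this per RGBA channel.

def composite(diffuse, shadow, highlight):
    """Combine three render passes into final pixel values:
    shadows multiply (they darken), highlights add (they brighten)."""
    return [
        min(1.0, d * s + h)  # clamp to the displayable range [0, 1]
        for d, s, h in zip(diffuse, shadow, highlight)
    ]

# A 4-pixel "image": full shadow on pixel 3, a strong highlight on pixel 4.
diffuse = [0.8, 0.8, 0.8, 0.8]
shadow = [1.0, 1.0, 0.25, 1.0]   # 1.0 = unshadowed, lower = darker
highlight = [0.0, 0.0, 0.0, 0.4]

print(composite(diffuse, shadow, highlight))
```

Because each pass stays separate until this final combine step, a lighting artist can rebalance the shadow pass without touching the character's textures, which is exactly the isolation described above.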


Rendering Techniques

As the previous section suggests, rendering is not a single technique. Different techniques bring different pros, cons, and artistic qualities: rendering a stylized shader is vastly different from rendering character hair in real time.

You'll, however, need to understand the following basic techniques:

  • Raytracing - Raytracing simulates the path of light as it interacts with objects to create accurate reflections and shadows. That physical accuracy comes at a high computational cost and requires serious processing power.
  • Scanline - Scanline rendering is a fast algorithm that determines which surfaces are visible from a given viewpoint by processing the image row by row instead of polygon by polygon. It's a preferred method for real-time applications where speed matters more than photorealistic accuracy.
  • Radiosity - Radiosity focuses on accurately simulating diffuse lighting between surfaces by capturing subtle inter-surface light interactions. It's ideal for scenes that call for soft, ambient lighting.
  • Rasterization - Rasterization transforms 3D models represented as polygons into pixels for display. While much faster, it lacks the sophisticated lighting effects achievable with raytracing.
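To make raytracing less abstract, here is the geometric test at its heart: casting one ray and asking whether it hits a sphere. This is a toy sketch, not a renderer; a production raytracer repeats a test like this per pixel and per light bounce, for every object in the scene.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None.
    `direction` must be a unit vector. Solves the quadratic
    |origin + t*direction - center|^2 = radius^2 for t."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # the quadratic's a == 1 for a unit direction
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None  # hit must be in front of the camera

# A camera at the origin looking down +z at a unit sphere 5 units away:
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
print(ray_hits_sphere((0, 0, 0), (0, 1, 0), (0, 0, 5), 1.0))  # None
```

Everything expensive about raytracing grows out of this loop: reflections and shadows mean firing new rays from each hit point and running the same test again.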

This list is non-exhaustive, but it gives you an idea of how rendering works. Experienced animators know that each layer may call for its own rendering technique to achieve a specific result.


Rendering Resolution

Resolution is the number of pixels in an image, usually expressed as width × height.

Higher-resolution images offer a clearer and more detailed quality but require more processing power and storage space.

The choice of resolution dramatically affects the viewer's experience, so productions aim for the highest resolution their target medium justifies. For example, animators might choose 8K for a theatrical release so every minute detail is visible on a big screen, while 1080p is more efficient for an animation viewed mostly on smartphones.

This decision impacts the rendering process tremendously: at 4K (3840 × 2160) you render 8,294,400 pixels per frame, versus 2,073,600 at 1080p (1920 × 1080), four times as many pixels to render, and 8K quadruples that again.
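The arithmetic is worth checking yourself, since pixel counts grow quadratically with resolution. A quick sketch (resolution names mapped to the common width × height values):

```python
# Total pixels per frame for common resolutions. Note the quadratic growth:
# doubling both dimensions quadruples the pixel count.
RESOLUTIONS = {
    "1080p": (1920, 1080),
    "4K": (3840, 2160),
    "8K": (7680, 4320),
}

def pixel_count(name):
    w, h = RESOLUTIONS[name]
    return w * h

print(pixel_count("1080p"))                       # 2073600
print(pixel_count("4K"))                          # 8294400
print(pixel_count("4K") / pixel_count("1080p"))   # 4.0
print(pixel_count("8K") / pixel_count("1080p"))   # 16.0
```

Multiply that per-frame cost by 24 frames per second and the length of your film, and the appeal of previewing at lower resolutions becomes obvious.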

For this reason, it's common to work at lower resolutions during production to get quick feedback on your animation: over a long production, every minute not spent waiting on renders adds up.

Source: NoTriangle Studio

Rendering Engines

Choosing a rendering engine depends on project-specific requirements. Animation studios and freelancers rely on various rendering engines: some are optimized for hyper-realistic environments, while others handle non-photorealistic styles or real-time rendering.

The choice of rendering engines is sometimes limited by your DCC tool. Take Blender, for instance. The open-source 3D editor has several rendering options: the Cycles engine for raytracing, FreeStyle for non-photorealistic rendering, and EEVEE for real-time rendering. Unreal Engine, on the other hand, offers its own, partly overlapping set of rendering options. In practice, a studio will often pick the engine its team of animators is most familiar with.

A crucial consideration in the rendering process is understanding the roles of CPUs (Central Processing Units) and GPUs (Graphics Processing Units). Blender's Cycles engine allows animators to use one, the other, or both.

The CPU is a general-purpose processor that can handle a wide range of computational tasks. The GPU is designed to perform rapid matrix computation, which is ideal for the kind of parallel processing tasks required in rendering. GPUs can significantly speed up rendering times, so GPU-based engines are often used in workflows that demand quick turnarounds. They are also key for rendering photorealistic graphics.

Choosing between a CPU or GPU rendering engine often depends on the nature of your project, the hardware at your disposal, and the quality versus speed compromise you're willing to make.


Real-Time vs Non-Real-Time Rendering

Another thing to understand is the difference between real-time and non-real-time rendering.

Real-time rendering is the near-instantaneous creation of images as events happen within a virtual environment. Think of it as the technology that powers video games or virtual reality experiences, where objects and characters appear and respond seamlessly as the user interacts. This form of rendering is especially useful during development when you need immediate feedback on a change in your 3D model.

Source: Mages Institute

Non-real-time rendering pre-calculates the images or frames before they are viewed. This method is used during the delivery phase for high-end animations where the quality of visuals is a priority. The process can take anywhere from minutes to days, depending on the level of detail and photorealism aimed for in the final output.


Render Farms

Now that you know how rendering works, you might have an idea of the amount of computer power you need to create an animated movie―far beyond the capability of a single computer or even a few high-end workstations.

This is where render farms come into play.

A render farm is a high-performance computer cluster designed specifically to tackle the demanding computational workload of rendering 3D scenes. These farms consist of numerous interconnected computers, often called nodes, working collaboratively to convert 3D models, textures, lighting, and animations into final high-resolution images or frames that make up the movie.

Source: Ranch Computing

Animation teams use render farms because they significantly reduce the time to render these frames. Why invest tens of thousands of dollars in high-end machines for a one-off project when you can rent that computing power for a fraction of the price?

Once an animation is ready to be rendered, the work flows through the farm in a few steps:

  • The scenes are divided into smaller tasks, which specialized scheduling software distributes among the available nodes, balancing the load across the farm.
  • Each node independently processes its assigned frames, so many frames render simultaneously.
  • As nodes finish, the rendered frames are collected and assembled back into the sequence or shot that forms part of the final animation.
  • The assembled frames then go through rigorous quality checks to ensure they meet the desired visual standards.
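The first of those steps, splitting a frame range into per-node tasks, can be sketched in a few lines. This is a deliberately simple chunking scheme; real farm managers also handle retries, priorities, and dynamic load balancing rather than a fixed split.

```python
# Sketch of a render farm's first scheduling step: divide a frame range
# into near-equal contiguous chunks, one per node.

def split_frames(first, last, nodes):
    """Divide frames [first, last] (inclusive) into `nodes` chunks."""
    total = last - first + 1
    base, extra = divmod(total, nodes)
    chunks, start = [], first
    for i in range(nodes):
        size = base + (1 if i < extra else 0)  # spread the remainder evenly
        chunks.append((start, start + size - 1))
        start += size
    return chunks

# 240 frames (a 10-second shot at 24 fps) across 4 nodes:
print(split_frames(1, 240, 4))  # [(1, 60), (61, 120), (121, 180), (181, 240)]
```

Each tuple becomes a job handed to one node; because the chunks are independent, all four render at once, which is where the farm's speedup comes from.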

Render farms are the backbone of large-scale animation productions and an essential tool that democratizes the ability to create high-end visual effects, making ambitious projects feasible even for smaller studios.


Conclusion

Rendering is a necessary step to turn ideas into shareable animations. It's a highly technical job that greatly impacts the output quality.

As an animator, mastering rendering is key to pushing the boundaries of what you can create. In larger studios, pipeline management and rendering optimization are especially important for scaling a team's efforts. Even at an individual scale, making the most of rendering previews improves your productivity.

Make sure to experiment with your DCC tool's rendering settings, or try out a render farm for a few dollars.

📽️
To learn more about the animation process, consider joining our Discord community! It connects over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you!