Build Reliable Export Tools with Blender’s Depsgraph

🧠
If your Blender script reads obj.data, it is probably giving you the wrong answer.

If you write a Python script to export or analyze geometry in Blender, it is probably lying to you: you grab obj.data, loop through the vertices, and get numbers that match neither the viewport nor the render. The culprit is almost always the same: you are reading the original mesh data, before modifiers, drivers, and constraints have been applied.

But Blender has a system that produces the final version of every object in your scene. It's called the dependency graph, or depsgraph. This article shows you exactly how to access it, what to do with it, and how to build a practical export tool your pipeline can use.

By the end, you will have a working Python script that iterates over every mesh object in your scene, reads its post-modifier geometry, and writes a CSV report with vertex count, polygon count, and world-space dimensions. That report can plug directly into a polygon budget check, a pre-render validation step, or a handoff audit before your assets move to the next team.


What Is the Depsgraph?

Blender's depsgraph (dependency graph) is an internal system that tracks the relationships and dependencies between all data in a scene (objects, modifiers, constraints, drivers, shape keys, and animation) and organizes them into a directed acyclic graph, where each node represents a piece of data and each edge a dependency.

When something changes, like moving an object or changing a value, the depsgraph figures out the minimal set of things that need to be recalculated and updates them in the correct order to avoid redundant work.

The depsgraph makes Blender's evaluation both correct (things always update in the right sequence, so a constraint dependent on another object always sees the up-to-date transform) and efficient (only changed parts of the graph are re-evaluated, which is critical for performance in complex scenes with hundreds of objects, simulations, and drivers).

It also enables features like copy-on-write (CoW), which lets the viewport and renderer work with evaluated copies of data without corrupting the originals, and it underpins multi-threaded evaluation, where independent branches of the graph are processed in parallel.
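The scheduling idea is easy to sketch outside Blender. The toy graph below is purely illustrative (the node names are hypothetical, not Blender's internal representation): dependencies are stored as edges, and when one node changes, only its downstream chain is re-evaluated, in order, while independent branches are left alone.

```python
# Toy dependency graph: edge A -> B means "B depends on A".
edges = {
    "cube_transform": ["copy_location_constraint"],
    "copy_location_constraint": ["subsurf_modifier"],
    "subsurf_modifier": ["final_mesh"],
    "light_color": [],  # independent branch: never re-evaluated below
}

def downstream(changed, edges):
    """Collect everything that must be re-evaluated, in dependency order."""
    order, seen = [], set()

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        order.append(node)
        for dependent in edges.get(node, []):
            visit(dependent)

    visit(changed)
    return order

# Moving the cube dirties the constraint, the modifier, and the final mesh,
# but never touches the light.
print(downstream("cube_transform", edges))
```

The real depsgraph does far more (tagging, threading, CoW copies), but this is the core bookkeeping that makes partial re-evaluation possible.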


Use Cases

For technical directors and pipeline engineers at animation studios, the depsgraph can be accessed directly through Blender's Python API to unlock powerful scripting use cases.

TDs can retrieve fully evaluated versions of objects to export final mesh data to external renderers or game engines without baking manually.

Studios can also iterate over evaluated object instances to collect every particle-instanced prop or geometry-node-scattered asset in a scene for custom export pipelines or asset reporting tools.
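A minimal sketch of that iteration, using the real depsgraph.object_instances API: each entry's is_instance flag distinguishes generated copies (particles, geometry nodes) from real scene objects. The function name is hypothetical, and the block deliberately returns an empty list outside Blender, where bpy cannot be imported.

```python
def collect_instanced_assets():
    """Return the sorted names of assets instanced by particles or
    geometry nodes in the current scene.

    Outside Blender (no bpy module available) this returns an empty list.
    """
    try:
        import bpy
    except ImportError:
        return []

    depsgraph = bpy.context.evaluated_depsgraph_get()
    names = set()
    for inst in depsgraph.object_instances:
        if inst.is_instance:  # a generated copy, not a real scene object
            names.add(inst.instance_object.name)
    return sorted(names)

print(collect_instanced_assets())
```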

The depsgraph can also detect what changed between frames by exposing update tags, which supports smart cache systems that only re-export or re-process assets when their dependencies have actually been modified, a huge time saver in long-running render-farm pipelines.
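The cache-invalidation logic itself is plain Python. In the sketch below (the function name and cache layout are hypothetical), the set of updated names would come from depsgraph.updates inside a depsgraph_update_post handler, where each update exposes id.name and flags such as is_updated_geometry; that Blender wiring is left out so the logic stays testable anywhere.

```python
def assets_to_reexport(updated_names, export_cache):
    """Given the names Blender tagged as updated and a cache mapping
    already-exported assets to their output paths, return only the
    assets that actually need re-export, invalidating them in the cache.
    """
    stale = [name for name in updated_names if name in export_cache]
    for name in stale:
        export_cache.pop(name)  # invalidate: the next export pass rewrites it
    return stale

cache = {"hero_rig": "/exports/hero_rig.abc", "tree_scatter": "/exports/tree.abc"}
# Only hero_rig is both updated and previously exported.
print(assets_to_reexport({"hero_rig", "background_matte"}, cache))
```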


1. Get a Reference to the Depsgraph

The depsgraph is available through the current context. In the Scripting workspace, or inside an operator, you get it via the evaluated_depsgraph_get method:

import bpy

depsgraph = bpy.context.evaluated_depsgraph_get()

The call returns a Depsgraph object that represents the fully evaluated state of your scene at the current frame.

If your script modifies the scene before reading evaluated data, you need to tell Blender to recompute:

depsgraph.update()

Without this call, the depsgraph still reflects the state before your changes. A purely read-only script does not need it.


2. Get the Evaluated Version of Each Object

Having the depsgraph is not enough on its own. You need to use it to fetch an evaluated copy of each object. That is done with evaluated_get():

obj = bpy.context.active_object
obj_eval = obj.evaluated_get(depsgraph)

The evaluated_get() call returns a temporary, evaluated copy of the object, not the original. Any changes you make to it do not persist after the script finishes; you are only reading from it.

On this evaluated object, obj_eval.data is the post-modifier mesh. On the original object, obj.data is still the pre-modifier mesh. Mixing the two in the same script produces incorrect results without any error message, which makes these bugs hard to catch.

The evaluated object also gives you accurate transform data. obj_eval.matrix_world reflects constraints and drivers, not just the base transform values.

Read the transform from the evaluated object as well if you are computing world-space bounding boxes or building an exporter that needs correct positions.
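The math behind a world-space bounding box is just a 4×4 matrix applied to the eight local-space corners (Blender exposes those as obj.bound_box, and in a script you would use obj_eval.matrix_world with mathutils). The Blender-free sketch below shows the underlying arithmetic with a hypothetical row-major matrix; the helper names are not Blender API.

```python
def transform_point(matrix, point):
    """Apply a row-major 4x4 transform to a 3D point (w assumed to be 1)."""
    x, y, z = point
    return tuple(
        matrix[i][0] * x + matrix[i][1] * y + matrix[i][2] * z + matrix[i][3]
        for i in range(3)
    )

def world_bounds(matrix, corners):
    """Axis-aligned world-space min/max of a set of local-space corners."""
    pts = [transform_point(matrix, c) for c in corners]
    mins = tuple(min(p[i] for p in pts) for i in range(3))
    maxs = tuple(max(p[i] for p in pts) for i in range(3))
    return mins, maxs

# Unit cube corners, translated +5 along X (identity rotation and scale).
matrix = [[1, 0, 0, 5], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
corners = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
print(world_bounds(matrix, corners))
```

Feed this the evaluated matrix and you get bounds that honor constraints and drivers; feed it the original object's matrix and you silently get stale positions.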


3. Convert to a Mesh and Read the Data

The evaluated object gives you access to the mesh through to_mesh(). It creates a temporary mesh datablock you can iterate over:

mesh = obj_eval.to_mesh()

This is not free. It allocates memory, and you are responsible for releasing it. When you are done reading, call:

obj_eval.to_mesh_clear()

Failing to call to_mesh_clear() leaks memory inside your Blender session. On a large scene with hundreds of objects, that adds up quickly. Always wrap your mesh reading in a try/finally block to guarantee cleanup even if an error occurs:

mesh = obj_eval.to_mesh()
try:
    for poly in mesh.polygons:
        print(poly.area)
finally:
    obj_eval.to_mesh_clear()

By default, to_mesh() may strip UV maps and vertex color layers for performance. If your export script needs those, pass preserve_all_data_layers=True together with the depsgraph (the API requires both):

mesh = obj_eval.to_mesh(preserve_all_data_layers=True, depsgraph=depsgraph)

The resulting mesh object gives you access to mesh.vertices, mesh.polygons, mesh.loops, mesh.uv_layers, and mesh.vertex_colors, all reflecting the final evaluated geometry.


4. Build the Production Script

Let's have a look at a complete example script. It iterates over every mesh object in the active scene, reads its evaluated geometry, and writes a CSV report to your home directory:

import bpy
import csv
import os

def get_evaluated_mesh_stats(context, output_path):
    depsgraph = context.evaluated_depsgraph_get()
    rows = []

    for obj in context.scene.objects:
        if obj.type != 'MESH':
            continue

        obj_eval = obj.evaluated_get(depsgraph)
        mesh = obj_eval.to_mesh()

        try:
            vert_count = len(mesh.vertices)
            poly_count = len(mesh.polygons)
            dims = obj_eval.dimensions

            rows.append({
                "name": obj.name,
                "verts": vert_count,
                "polys": poly_count,
                "dim_x": round(dims.x, 4),
                "dim_y": round(dims.y, 4),
                "dim_z": round(dims.z, 4),
            })
        finally:
            obj_eval.to_mesh_clear()

    with open(output_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f,
            fieldnames=["name", "verts", "polys", "dim_x", "dim_y", "dim_z"]
        )
        writer.writeheader()
        writer.writerows(rows)

    print(f"Report written to {output_path}")


get_evaluated_mesh_stats(
    bpy.context,
    os.path.expanduser("~/mesh_report.csv")
)

You can run this from the Scripting workspace by pasting it into the text editor and pressing Run Script. The output file lands at ~/mesh_report.csv. Open it in any spreadsheet application.

Each row contains the object name, the post-modifier vertex and polygon counts, and the world-space bounding box dimensions in X, Y, and Z. The dimensions come from obj_eval.dimensions, which reads the evaluated object and therefore reflects the full modifier stack.

To adapt this for your own pipeline, you can filter by collection, add a polygon budget threshold and flag objects that exceed it, or swap the CSV output for a JSON payload you can import into a project management tool or asset database.
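For example, the polygon-budget check is a short pass over the collected rows. The helper, the threshold, and the sample rows below are all hypothetical, not part of the script above; the JSON dump stands in for the CSV writer.

```python
import json

POLY_BUDGET = 10_000  # hypothetical per-asset budget

def flag_over_budget(rows, budget=POLY_BUDGET):
    """Return only the rows whose polygon count exceeds the budget."""
    return [row for row in rows if row["polys"] > budget]

rows = [
    {"name": "hero", "verts": 80_000, "polys": 40_000},
    {"name": "prop_mug", "verts": 900, "polys": 450},
]

offenders = flag_over_budget(rows)
print(json.dumps(offenders, indent=2))  # JSON payload instead of CSV
```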


Conclusion

This example script is a basic starter, but once you understand how to reliably read evaluated mesh data, the same pattern applies to exporters, automated LOD validators, collision mesh generators, and pre-render geometry audits. The key insights transfer to every one of those tools: always read from the evaluated object, always clean up with to_mesh_clear(), and never mix original and evaluated data in the same operation.

A logical next step is to register this as a file-save handler using bpy.app.handlers.save_pre, so the report is generated automatically every time an artist saves the scene, catching polygon budget violations before they reach the rendering stage.
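A hedged sketch of that wiring: the handler accepts *args because the arguments passed to app handlers vary across Blender versions, get_evaluated_mesh_stats is assumed to be the function from the script above, and the whole block is deliberately a no-op outside Blender, where bpy cannot be imported.

```python
import os

def save_pre_report(*args):
    """Write the mesh report just before the .blend file is saved."""
    try:
        import bpy
    except ImportError:
        return None  # running outside Blender: nothing to report on
    out = os.path.expanduser("~/mesh_report.csv")
    get_evaluated_mesh_stats(bpy.context, out)  # the function defined above
    return out

try:
    import bpy
    if save_pre_report not in bpy.app.handlers.save_pre:
        bpy.app.handlers.save_pre.append(save_pre_report)
except ImportError:
    pass  # outside Blender there is nothing to register
```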

📽️
To learn more about the animation process, consider joining our Discord community! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 😊