[{"data":1,"prerenderedAt":764},["ShallowReactive",2],{"author-count-basile-en":3,"author-basile-en":4,"posts-author-basile-en-1":17,"tags-header-en":668,"tags-footer-en":735},25,{"id":5,"title":6,"body":7,"description":7,"extension":8,"meta":9,"name":11,"navigation":12,"path":13,"seo":14,"slug":15,"stem":15,"__hash__":16},"author/basile.json","Basile",null,"json",{"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"https://blog.cg-wire.com/author/basile/","Basile Samel",true,"/basile",{},"basile","rICq7hIv3FC9UOWdwDy0cz83GqLZLprfohBMLxmtKAc",[18,52,81,106,134,159,183,207,232,258,283,308,338,363,389,415,442,467,494,520,546,570,593,617,642],{"id":19,"title":20,"authors":21,"body":7,"description":7,"extension":8,"html":24,"meta":25,"navigation":12,"path":44,"published_at":45,"seo":46,"slug":47,"stem":48,"tags":49,"__hash__":51,"uuid":26,"comment_id":27,"feature_image":28,"featured":29,"visibility":30,"created_at":31,"updated_at":32,"custom_excerpt":33,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":36,"primary_tag":37,"url":42,"excerpt":33,"reading_time":43,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":7},"ghost/posts:remembers-kitsu-arco-production.json","How \"Remembers Studio\" Used Kitsu to Scale Up Production on Arco",[22],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"630632b2ca5910003d4a70af","\u003Cdiv class=\"kg-card kg-callout-card 
kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">🎬\u003C/div>\u003Cdiv class=\"kg-callout-text\">Scaling a studio is not about adding tools; it is about introducing structure without slowing artists down.\u003C/div>\u003C/div>\u003Cfigure class=\"kg-card kg-embed-card kg-card-hascaption\">\u003Ciframe width=\"200\" height=\"113\" src=\"https://www.youtube.com/embed/HlvIZsmB8-8?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen=\"\" title=\"Piloter la production d’un long métrage : étude de cas “Arco” | Audrey Tondre (Remembers Studio)\">\u003C/iframe>\u003Cfigcaption>\u003Cp>\u003Cspan style=\"white-space: pre-wrap;\">To watch the full talk upon which this article is based, check out the video above\u003C/span>\u003C/p>\u003C/figcaption>\u003C/figure>\u003Cp>When the animated feature film \u003Cem>Arco\u003C/em> premiered in autumn 2025, the release was met with considerable recognition: a selection at the Cannes Film Festival and a Crystal Award at the Annecy International Animation Film Festival. 
Behind that success was a small Parisian studio, Remembers, navigating its first feature film with a lean production team and a project management tool most of its artists had never heard of before: Kitsu.\u003C/p>\u003Cp>At the Kitsu Submit conference, Audrey Tondre, the production director on \u003Cem>Arco\u003C/em>, shared an honest and detailed account of how she introduced Kitsu to a studio that was not asking for it, and why it turned out to be exactly the right move.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/04/image.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1920\" height=\"1080\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/04/image.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/04/image.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1600/2026/04/image.png 1600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/04/image.png 1920w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Audrey joined Remembers specifically to produce \u003Cem>Arco\u003C/em>, a traditional hand-drawn 2D feature directed by Ugo Bienvenu and co-produced by Remembers (which Bienvenu runs with partner Félix de Givry) and the company Mountain (founded by Sophie Mas and Natalie Portman). Her background before this project was almost exclusively in 3D feature films, a world where production tracking tools are deeply embedded in every workflow.\u003C/p>\u003Cp>Coming into a 2D studio producing its first feature was a significant context shift. 
And that gap between worlds is precisely what makes her story useful for any animation studio looking to grow.\u003C/p>\u003Chr>\u003Ch2 id=\"a-studio-built-for-small-scale-work\">A Studio Built for Small-Scale Work\u003C/h2>\u003Cp>Remembers had built a strong reputation on short-format projects: music videos, commercials, and short films. The quality of the work was not in question. But the infrastructure for managing a long-form project simply did not exist yet.\u003C/p>\u003Cblockquote>\"There was no pipeline, no development team. All the space was dedicated to the artists.\"\u003C/blockquote>\u003Cp>The entire film was produced in-house at Remembers, spread across three separate premises in the 20th arrondissement of Paris. At peak production, around 70 people were working in the animation studios, with a total headcount of roughly 150 people over the course of the project. The production team consisted of Audrey as production director and executive producer, two production coordinators, and one intern.\u003C/p>\u003Cp>With that ratio of production staff to creative staff, having the right tools was not optional.\u003C/p>\u003Chr>\u003Ch2 id=\"the-challenge-introducing-tools-no-one-asked-for\">The Challenge: Introducing Tools No One Asked For\u003C/h2>\u003Cp>When Audrey arrived at Remembers, the studio was tracking projects with Google Sheets. That approach works at the scale of a short film where six people share a room and can turn around to check on each other's screens. Not at a feature film scale.\u003C/p>\u003Cp>But the core team was not asking for anything different.\u003C/p>\u003Cblockquote>\"Very clearly, when I talked about production management and tracking tools, there was no demand for it.\"\u003C/blockquote>\u003Cp>This is a common scenario in small and mid-sized studios making the jump to larger productions. The habits formed on smaller work do not automatically flag themselves as insufficient. 
Audrey knew she had to solve a problem that had not yet been named, and she had to do it without creating friction.\u003C/p>\u003Cblockquote>\"I knew that if I brought in a new tool, I needed to address a need that hadn't been identified by the core team already in place. The main challenge right away was not to constrain them.\"\u003C/blockquote>\u003Chr>\u003Ch2 id=\"why-kitsu-won\">Why Kitsu Won\u003C/h2>\u003Cp>Audrey's reference point for project tracking came from the 3D feature world, where the dominant tool is Ftrack, a powerful but developer-dependent platform. She immediately recognized it would be the wrong choice for Remembers.\u003C/p>\u003Cblockquote>\"I immediately sensed that it wasn't going to be suitable at all in the context of \u003Cem>Arco\u003C/em>.\"\u003C/blockquote>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/04/image-1.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1920\" height=\"1080\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/04/image-1.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/04/image-1.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1600/2026/04/image-1.png 1600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/04/image-1.png 1920w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Ftrack and similar enterprise-grade tools require in-house developers to deploy, configure, and maintain. Remembers had none of that: no IT staff, no technical director, no pipeline developer. Bringing in a tool that required that kind of support would have created more problems than it solved.\u003C/p>\u003Cp>Kitsu addressed her core constraints from the start. 
After a demo with the Kitsu team, the studio collectively decided to move forward. The reasons were practical:\u003C/p>\u003Cul>\u003Cli>No development required to get started\u003C/li>\u003Cli>No ongoing maintenance burden\u003C/li>\u003Cli>No in-house technical resources needed\u003C/li>\u003Cli>An interface intuitive enough for people picking up a production tool for the first time\u003C/li>\u003C/ul>\u003Cblockquote>\"That reassured me greatly. And obviously we were looking for something very intuitive, because I was addressing people who weren't asking for tools, and they needed to be able to pick it up and get on board very naturally.\"\u003C/blockquote>\u003Chr>\u003Ch2 id=\"how-kitsu-worked-in-practice\">How Kitsu Worked in Practice\u003C/h2>\u003Ch3 id=\"the-artist-experience\">The Artist Experience\u003C/h3>\u003Cp>Every artist on the production, regardless of which of the three sites they worked from, had a personal page in Kitsu showing all their assigned tasks (rough animation, clean animation, or other), the status of each task, the estimated time allocated, and a running log of time already spent.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/04/image-2.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1920\" height=\"1080\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/04/image-2.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/04/image-2.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1600/2026/04/image-2.png 1600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/04/image-2.png 1920w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cblockquote>\"You can understand that between this and having no tool at 
all, we've already made an enormous step forward. And it's not just about productivity, it's more enjoyable too.\"\u003C/blockquote>\u003Cp>Viewing a specific version of a shot no longer meant digging through a shared network drive and risking pulling the wrong file. In Kitsu, every version is one click away and tied directly to its comments. That alone removed a significant source of confusion and wasted time.\u003C/p>\u003Ch3 id=\"the-supervisor-experience\">The Supervisor Experience\u003C/h3>\u003Cp>Supervisors built their review pages using simple filters. An animation supervisor could filter for all shots currently \"waiting for approval,\" see exactly what needed attention, and post feedback directly on the relevant version. Comments were timestamped, attributed, and version-specific.\u003C/p>\u003Cblockquote>\"It's very targeted and it works well.\"\u003C/blockquote>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/04/image-3.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1920\" height=\"1080\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/04/image-3.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/04/image-3.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1600/2026/04/image-3.png 1600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/04/image-3.png 1920w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Beyond the functional benefit, Kitsu gave supervisors something less obvious but equally valuable: structured time. 
Rather than being interrupted throughout the day by artists seeking feedback, supervisors could set aside dedicated review blocks in the morning and afternoon, and spend the rest of their time on their own work.\u003C/p>\u003Ch3 id=\"cross-department-communication\">Cross-Department Communication\u003C/h3>\u003Cp>One of the most practical features Audrey highlighted was the ability to tag anyone in the project from within any task comment thread. On a long production where compositing might uncover an issue with a background that had already been approved weeks earlier, this closed the loop quickly.\u003C/p>\u003Cblockquote>\"Inter-department exchanges are really quite easy and can be quick. Often it's small edits, things that slipped through because the shots had already been approved.\"\u003C/blockquote>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/04/image-4.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1920\" height=\"1080\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/04/image-4.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/04/image-4.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1600/2026/04/image-4.png 1600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/04/image-4.png 1920w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"using-kitsu-data-for-production-tracking\">Using Kitsu Data for Production Tracking\u003C/h2>\u003Cp>The second half of Audrey's talk addressed a concern that production managers with experience in more advanced platforms sometimes raise about Kitsu: the lack of custom-built analytics pages. 
In tools like Ftrack, you can construct dashboards that process and display data in multiple ways without leaving the platform.\u003C/p>\u003Cp>Kitsu does not offer that out of the box. Audrey's response was pragmatic and worth paying attention to.\u003C/p>\u003Cblockquote>\"In reality, all the data that can be valuable in production tracking does exist in Kitsu. It's just not always visible on pages you'll find ready-made.\"\u003C/blockquote>\u003Cp>Her approach combined two simple steps: export a CSV from Kitsu, then import it into a Google Sheet she had built herself.\u003C/p>\u003Ch3 id=\"tracking-production-curves\">Tracking Production Curves\u003C/h3>\u003Cp>For each major department, she maintained a projected completion curve plotted against time. The vertical axis tracked the number of shots completed, and the dashed line represented the original model. Each week, she exported real data from Kitsu's Sequence Stats page, which shows the exact number of shots in each status across every department. 
She imported that CSV and the Google Sheet updated automatically.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/04/image-5.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1920\" height=\"1080\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/04/image-5.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/04/image-5.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1600/2026/04/image-5.png 1600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/04/image-5.png 1920w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>The result was an immediate visual indicator of whether production was tracking to plan or drifting.\u003C/p>\u003Cblockquote>\"A feature film is a big undertaking with a lot of inertia. If you start drifting for one week, that's okay. Two weeks, you need to look at what's happening.\"\u003C/blockquote>\u003Cp>She also applied a simple weighting system to shots currently in progress. A completed shot counted as one. A shot in editing counted as 0.75. A shot waiting for approval counted as a lower weight. This gave her a more accurate picture of work done rather than just work fully signed off.\u003C/p>\u003Ch3 id=\"tracking-inventory-between-departments\">Tracking Inventory Between Departments\u003C/h3>\u003Cp>On a linear production pipeline, each department feeds the next. If animation moves faster than layout, animators sit idle. If compositing falls behind, it creates a bottleneck no matter how far ahead animation is. 
Audrey tracked inventory levels at each stage: what was fully available for each department, what was still in progress, and what had already passed through.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/04/image-6.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1920\" height=\"1080\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/04/image-6.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/04/image-6.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1600/2026/04/image-6.png 1600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/04/image-6.png 1920w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>She built a table in Google Sheets with all sequences on one axis and all departments on the other. Cells turned dark green at 100 percent, light green for work in progress, and white when nothing remained. Every cell was formula-driven. No numbers were entered manually. One CSV export from Kitsu's shots page, one import, and the entire table refreshed.\u003C/p>\u003Cblockquote>\"It lets us ask the right questions: 'Oh, this department is moving a bit faster. Do we need to accelerate the previous one, or can we shift some artists from one department to another?'\"\u003C/blockquote>\u003Chr>\u003Ch2 id=\"adoption-was-easier-than-expected\">Adoption Was Easier Than Expected\u003C/h2>\u003Cp>A common worry when introducing new tools to a creative team is resistance. Audrey's experience ran counter to that fear.\u003C/p>\u003Cp>She set up Kitsu before the bulk of the team arrived. By the time animators and background artists joined in large numbers, the tool was already in place and populated. 
They arrived to a working system rather than a work-in-progress.\u003C/p>\u003Cblockquote>\"Kitsu is something that's very, very easy to pick up. You can click anywhere, you see the film's images, you see all the departments and sequences that might relate to your own.\"\u003C/blockquote>\u003Cp>That last point matters more than it might seem. Artists did not experience Kitsu as a reporting obligation. They experienced it as a window into the broader project. Browsing shots from other departments, seeing the whole film take shape across sequences, made the tool genuinely interesting to use.\u003C/p>\u003Cblockquote>\"It's also enjoyable and motivating to browse around in the tool. It's not just 'oh, I have to post my latest version.'\"\u003C/blockquote>\u003Chr>\u003Ch2 id=\"key-takeaways\">Key Takeaways\u003C/h2>\u003Cp>Audrey's experience on \u003Cem>Arco\u003C/em> offers a few clear lessons for animation studios at a similar inflection point.\u003C/p>\u003Cp>The absence of a technical team is not a blocker. Kitsu does not require developers, a technical director, or an IT department to deploy and maintain. For small and mid-sized studios, this removes the single largest obstacle to adopting a real production tracking platform.\u003C/p>\u003Cp>Simplicity builds adoption. The more complex the tool, the more training it demands and the more resistance it generates. Kitsu's interface allowed a team with no prior experience with production tracking software to get on board quickly and without building up resentment.\u003C/p>\u003Cp>The data is already there. If Kitsu does not offer a specific analytics view out of the box, that is not the end of the conversation. CSV exports from the Sequence Stats and shots pages provide all the raw material needed to build whatever tracking logic a production manager requires, in whatever format suits them.\u003C/p>\u003Cp>Structure and creativity are not opposites. 
Ugo Bienvenu's ambition on \u003Cem>Arco\u003C/em> was to make a film that could almost have been made in the 1950s: beautiful images, precise animation, minimal compositing, and great music. Kitsu did not interfere with that vision. It protected it by keeping the production on track so that the artists could focus entirely on the work.\u003C/p>\u003Cblockquote>\"The goal was to structure things in the most imperceptible way possible.\"\u003C/blockquote>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":26,"comment_id":27,"feature_image":28,"featured":29,"visibility":30,"created_at":31,"updated_at":32,"custom_excerpt":33,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":36,"primary_tag":37,"url":42,"excerpt":33,"reading_time":43,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":7},"c66d47f2-d263-4a9e-aad5-f27aebe2c46a","69d4c1a6c037da0001fce813","https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/04/arco-ugo-bienvenu-critique-film.jpg",false,"public","2026-04-07T10:34:46.000+02:00","2026-04-16T11:48:34.000+02:00","Discover how 
Remembers used Kitsu to scale production on Arco. Learn how a small studio transitioned from Google Sheets to a structured pipeline without developers.","\u003C!-- Prism.js theme (syntax colors) -->\n\u003Clink rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/prismjs@1.29.0/themes/prism.min.css\">\n\n\u003C!-- Toolbar plugin styles (for the Copy button) -->\n\u003Clink rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/prismjs@1.29.0/plugins/toolbar/prism-toolbar.min.css\">\n\n\u003C!-- (Optional) Line-numbers styles -->\n\u003C!-- \u003Clink rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/prismjs@1.29.0/plugins/line-numbers/prism-line-numbers.min.css\"> -->\n\n\u003Cstyle>\n/* Tweak code block appearance a bit (keeps theme styles intact) */\npre[class*=\"language-\"] {\n  border-radius: 8px;\n  overflow: auto;\n}\n\n/* ✅ Always wrap long lines (no horizontal scroll needed) */\npre[class*=\"language-\"],\npre[class*=\"language-\"] code {\n  white-space: pre-wrap;    /* preserve indentation but allow wrapping */\n  word-break: break-word;   /* break long tokens if needed */\n  overflow-wrap: anywhere;  /* last-resort wrapping */\n}\n\n/* Improve toolbar (Copy button) spacing/looks */\ndiv.code-toolbar > .toolbar {\n  opacity: 1;\n  right: 6px;\n  top: 6px;\n}\ndiv.code-toolbar > .toolbar .toolbar-item > button {\n  background: #1f2937;\n  color: #fff;\n  border-radius: 6px;\n  padding: 6px 10px;\n  font-size: 12px;\n}\ndiv.code-toolbar > .toolbar .toolbar-item > button:hover {\n  filter: brightness(1.1);\n}\n\n/* (Optional) Auto line numbers on all code blocks\n   If you want line numbers, uncomment both this and the CSS/JS includes above/below. 
*/\n/*\npre[class*=\"language-\"] {\n  padding-left: 3.25em;\n}\n*/\n\u003C/style>","\u003C!-- Prism core -->\n\u003Cscript defer src=\"https://cdn.jsdelivr.net/npm/prismjs@1.29.0/components/prism-core.min.js\">\u003C/script>\n\u003Cscript defer src=\"https://cdn.jsdelivr.net/npm/prismjs@1.29.0/plugins/autoloader/prism-autoloader.min.js\">\u003C/script>\n\n\u003C!-- Toolbar + Copy-to-Clipboard plugins -->\n\u003Cscript defer src=\"https://cdn.jsdelivr.net/npm/prismjs@1.29.0/plugins/toolbar/prism-toolbar.min.js\">\u003C/script>\n\u003Cscript defer src=\"https://cdn.jsdelivr.net/npm/prismjs@1.29.0/plugins/copy-to-clipboard/prism-copy-to-clipboard.min.js\">\u003C/script>\n\n\u003C!-- (Optional) Line-numbers plugin -->\n\u003C!-- \u003Cscript defer src=\"https://cdn.jsdelivr.net/npm/prismjs@1.29.0/plugins/line-numbers/prism-line-numbers.min.js\">\u003C/script> -->\n\n\u003Cscript>\n  // Configure autoloader to fetch language definitions (bash, python, etc.)\n  window.Prism = window.Prism || {};\n  Prism.plugins = Prism.plugins || {};\n  Prism.plugins.autoloader = Prism.plugins.autoloader || {};\n  Prism.plugins.autoloader.languages_path = 'https://cdn.jsdelivr.net/npm/prismjs@1.29.0/components/';\n\n  // OPTIONAL: If you want line numbers on every block automatically, uncomment:\n  /*\n  document.addEventListener('DOMContentLoaded', function () {\n    document.querySelectorAll('pre > code').forEach(function (code) {\n      const pre = code.parentElement;\n      pre.classList.add('line-numbers');\n    });\n  });\n  
*/\n\u003C/script>",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":38,"name":39,"slug":40,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":41},"69c20df4cb09d8000107cfe7","Customer Stories","customer-stories","https://blog.cg-wire.com/tag/customer-stories/","https://blog.cg-wire.com/remembers-kitsu-arco-production/",8,"/posts/remembers-kitsu-arco-production","2026-04-13T11:11:27.000+02:00",{"title":20},"remembers-kitsu-arco-production","posts/remembers-kitsu-arco-production",[50],{"id":38,"name":39,"slug":40,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":41},"al6z2t2K9mRQwGazWVe51MY_TEybs5Zl73HsassAVtY",{"id":53,"title":54,"authors":55,"body":7,"description":7,"extension":8,"html":57,"meta":58,"navigation":12,"path":74,"published_at":63,"seo":75,"slug":76,"stem":77,"tags":78,"__hash__":80,"uuid":59,"comment_id":60,"feature_image":61,"featured":29,"visibility":30,"created_at":62,"updated_at":63,"custom_excerpt":64,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":65,"primary_tag":66,"url":71,"excerpt":64,"reading_time":72,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_
caption":73},"ghost/posts:blender-python-event-automation.json","Automating Blender with Python Event Handlers",[56],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">⚙️\u003C/div>\u003Cdiv class=\"kg-callout-text\">Blender events let you automate workflows without adding extra steps for artists.\u003C/div>\u003C/div>\u003Cp>A render finishes at 2am, no one is watching, and the output sits in a temporary folder until someone remembers to move it. An artist exports a file with the wrong name. A camera gets left at the wrong focal length before a client delivery.\u003C/p>\u003Cp>All these issues add up. Fortunately, there is a simple solution for all of them: Blender's Python API gives you direct access to the events that drive the application. You can write code that listens for those events and acts on them automatically, without any artist involvement. By the end of this article, you will have two working examples you can adapt in your own pipeline.\u003C/p>\u003Chr>\u003Ch2 id=\"3-ways-to-listen-to-events-in-blender\">3 Ways To Listen to Events in Blender\u003C/h2>\u003Cp>Blender exposes three main mechanisms for responding to events through its Python API:\u003C/p>\u003Cul>\u003Cli>\u003Ccode>app.handlers\u003C/code> are passive listeners that fire when Blender performs a specific action: a render completes, a file loads, a frame changes. Your code registers a function and Blender calls it when the moment arrives. The artist does not need to do anything, so this is often the right tool for automating background pipeline tasks.\u003C/li>\u003Cli>Modal operators are active listeners. 
They take over Blender's event loop for a given window and intercept everything the artist does in real time (mouse clicks, key presses, cursor movement) until the operator finishes or is cancelled. This is the right tool when you want to build interactive tools that respond to what an artist is physically doing inside the viewport.\u003C/li>\u003Cli>The third way to listen to events, \u003Ccode>msgbus\u003C/code>, lets you subscribe to changes on specific data properties, like the active object or a scene setting. It is useful but narrower in scope. This article does not cover it.\u003C/li>\u003C/ul>\u003Cp>The two examples this article builds cover the most common studio automation needs: the first uses a handler to remove a background task from your artists entirely; the second uses a modal operator to replace a slow, manual workflow with a single click.\u003C/p>\u003Chr>\u003Ch2 id=\"1-auto-export-on-render-complete\">1. Auto-Export on Render Complete\u003C/h2>\u003Cp>There are many useful handlers available, among them:\u003C/p>\u003Cul>\u003Cli>\u003Ccode>render_init\u003C/code> - fires when a render job starts\u003C/li>\u003Cli>\u003Ccode>render_pre\u003C/code> - fires before each frame renders\u003C/li>\u003Cli>\u003Ccode>render_post\u003C/code> - fires after each frame renders\u003C/li>\u003Cli>\u003Ccode>load_pre\u003C/code> / \u003Ccode>load_post\u003C/code> - before/after a \u003Ccode>.blend\u003C/code> file is loaded\u003C/li>\u003Cli>\u003Ccode>save_pre\u003C/code> / \u003Ccode>save_post\u003C/code> - before/after a \u003Ccode>.blend\u003C/code> file is saved\u003C/li>\u003C/ul>\u003Cp>Open Blender and switch to the Scripting workspace from the top tab bar. You will see the Python console on the left and the Text Editor on the right. 
Write your code in the Text Editor and run it with Alt+P.\u003C/p>\u003Cp>You can also \u003Ca href=\"https://blog.cg-wire.com/blender-addon-ui-scripting-guide/\">use an addon to keep the script persistent\u003C/a>.\u003C/p>\u003Cp>Instead of building a full render pipeline tool, we'll start with something small to understand the main pattern: a minimal handler that fires the moment a render finishes and writes a timestamped confirmation to a file. It's a useful starting point for verifying that your handler is working correctly before building out more complex post-render logic:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import bpy\nfrom datetime import datetime\n\n@bpy.app.handlers.persistent\ndef on_render_complete(scene, depsgraph):\n    timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n    # Use a context manager so the file is closed properly\n    with open(\"test.txt\", \"w\") as log_file:\n        log_file.write(f\"Completed: {timestamp}\\n\")\n\nbpy.app.handlers.render_complete.append(on_render_complete)\n\u003C/code>\u003C/pre>\u003Cp>The \u003Ccode>@bpy.app.handlers.persistent\u003C/code> decorator keeps the handler registered across file loads, so it survives scene changes during a session.\u003C/p>\u003Cp>On render complete, \u003Ccode>datetime.now()\u003C/code> captures the finish time and formats it as a compact timestamp string. 
That string is written directly to a hardcoded path, overwriting the file on each render.\u003C/p>\u003Cp>Lastly, \u003Ccode>bpy.app.handlers.render_complete.append\u003C/code> registers the function so Blender calls it automatically when a render finishes.\u003C/p>\u003Cp>To test this without waiting for a full animation render, render a single frame with Render &gt; Render Image, then check that \u003Ccode>test.txt\u003C/code> exists at the target path and contains the expected timestamp.\u003C/p>\u003Cp>You can then extend the handler to copy output files, record scene metadata, or trigger downstream workflows.\u003C/p>\u003Cp>The pattern is always the same as in the example: define a function, optionally decorate it with \u003Ccode>@bpy.app.handlers.persistent\u003C/code>, then append it to the relevant list.\u003C/p>\u003Chr>\u003Ch2 id=\"2-modal-operators\">2. Modal Operators\u003C/h2>\u003Cp>\u003Ccode>app.handlers\u003C/code> cannot help you when the task involves responding to what an artist is actively doing in the viewport. You need a modal operator instead.\u003C/p>\u003Cp>The use case here is a one-click camera framer: an artist clicks an object and the active camera repositions and reframes to a studio-standard composition. No manual camera adjustment and no guessing at focal length, so no inconsistency between artists.\u003C/p>\u003Cp>A modal operator is a class with two key methods:\u003C/p>\u003Cul>\u003Cli>\u003Ccode>invoke()\u003C/code> starts the operator and registers it with the window manager.\u003C/li>\u003Cli>\u003Ccode>modal()\u003C/code> receives every event that occurs after that and decides what to do with it. 
The operator stays active and keeps receiving events until it returns \u003Ccode>FINISHED\u003C/code> or \u003Ccode>CANCELLED\u003C/code>.\u003C/li>\u003C/ul>\u003Cpre>\u003Ccode class=\"language-python\">import bpy\n\nclass AutoFrameOperator(bpy.types.Operator):\n    bl_idname = \"studio.auto_frame\"\n    bl_label = \"Auto Frame Selected\"\n\n    def invoke(self, context, event):\n        context.window_manager.modal_handler_add(self)\n        return {'RUNNING_MODAL'}\n\n    def modal(self, context, event):\n        if event.type == 'LEFTMOUSE' and event.value == 'PRESS':\n            target = context.active_object\n            if target:\n                self.frame_camera_to(context, target)\n            return {'FINISHED'}\n\n        if event.type in {'RIGHTMOUSE', 'ESC'}:\n            return {'CANCELLED'}\n\n        return {'RUNNING_MODAL'}\n\n    def frame_camera_to(self, context, target):\n        camera = context.scene.camera\n        if not camera:\n            return\n        focal_length = 85\n        camera.data.lens = focal_length\n        \n        print(f\"Framed camera on: {target.name}\")\n\ndef register():\n    bpy.utils.register_class(AutoFrameOperator)\n\ndef unregister():\n    bpy.utils.unregister_class(AutoFrameOperator)\n\u003C/code>\u003C/pre>\u003Cp>We define a Blender operator called \u003Ccode>AutoFrameOperator\u003C/code>, a reusable action that Blender exposes under the ID \u003Ccode>studio.auto_frame\u003C/code>. When triggered, \u003Ccode>invoke\u003C/code> registers it as a modal handler, meaning it stays active and listens for user input rather than executing immediately.\u003C/p>\u003Cp>The \u003Ccode>modal\u003C/code> method is the event loop that runs on every interaction. A left click grabs the currently active object and passes it to \u003Ccode>frame_camera_to\u003C/code>, then exits. 
Right-click or Escape cancels cleanly, and anything else keeps the operator waiting.\u003C/p>\u003Cp>The \u003Ccode>RUNNING_MODAL\u003C/code> return value is what keeps the operator alive and listening. Any event that does not match a condition you handle should return \u003Ccode>RUNNING_MODAL\u003C/code> so the operator stays active. Returning \u003Ccode>PASS_THROUGH\u003C/code> instead tells Blender to process the event normally in addition to passing it to your operator, which is useful when you want the artist to still be able to navigate the viewport while the operator is running.\u003C/p>\u003Cp>\u003Ccode>frame_camera_to\u003C/code> is the core logic. It retrieves the scene's active camera and sets its focal length to 85mm, though the actual math to reposition the camera and properly frame the target object isn't implemented as it's out of the scope of this article.\u003C/p>\u003Cp>\u003Ccode>register\u003C/code> and \u003Ccode>unregister\u003C/code> are standard Blender add-on boilerplate that make the operator available when the script loads and remove it cleanly when it unloads.\u003C/p>\u003Cp>To invoke the operator after installing the script as an addon, we open the search menu with F3 and type \"Auto Frame Selected\". To bind it to a shortcut, we can simply add the following snippet inside the \u003Ccode>register()\u003C/code> function:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">wm = bpy.context.window_manager\nkc = wm.keyconfigs.addon\nif kc:\n    km = kc.keymaps.new(name='3D View', space_type='VIEW_3D')\n    kmi = km.keymap_items.new(\"studio.auto_frame\", type='F', value='PRESS', ctrl=True)\n\u003C/code>\u003C/pre>\u003Cp>It's important to namespace your shortcuts carefully. \u003Ccode>Ctrl+F\u003C/code> in the 3D viewport has no default binding in Blender, but check against your studio's existing configuration before deploying. 
A shortcut conflict that silently overrides a default Blender action is hard to debug and will frustrate your artists.\u003C/p>\u003Cp>One more rule to follow: keep the \u003Ccode>modal()\u003C/code> method lean. Heavy computation inside \u003Ccode>modal()\u003C/code> runs on every single event, which means every mouse movement. If your framing logic is expensive, offload it to a separate method and only call it when the relevant event fires, as shown above with \u003Ccode>frame_camera_to\u003C/code>.\u003C/p>\u003Chr>\u003Ch2 id=\"conclusion\">Conclusion\u003C/h2>\u003Cp>You now have two examples of tools that address real studio problems without adding steps to your artists' workflow.\u003C/p>\u003Cp>The render handler can remove a manual, error-prone handoff from your pipeline entirely. And the modal operator gives artists a consistent, one-click way to frame a camera to your studio standard.\u003C/p>\u003Cp>The same patterns extend further. A \u003Ccode>load_post\u003C/code> handler could enforce naming conventions the moment a file opens. A \u003Ccode>depsgraph_update_post\u003C/code> handler might flag objects that violate your scene budget. A render complete handler can fire an HTTP request to a webhook and post a Slack notification to your production channel when a shot is done.\u003C/p>\u003Cp>The event system is already there: you just have to start listening!\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 
😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":59,"comment_id":60,"feature_image":61,"featured":29,"visibility":30,"created_at":62,"updated_at":63,"custom_excerpt":64,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":65,"primary_tag":66,"url":71,"excerpt":64,"reading_time":72,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":73},"3d817b37-7ce7-4d96-b479-e6915371fade","69d4d1fdc037da0001fce81f","https://images.unsplash.com/photo-1686157251060-3ea1f90857aa?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDExfHwzZCUyMGFuaW1hdGlvbiUyMGF1dG9tYXRpb258ZW58MHx8fHwxNzc1NTU1Mzc0fDA&ixlib=rb-4.1.0&q=80&w=2000","2026-04-07T11:44:29.000+02:00","2026-04-07T11:54:18.000+02:00","Learn how to use Blender’s Python API to listen to events and automate workflows. 
This guide covers handlers and modal operators with practical examples for production pipelines.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"5fff0e54653a0c003924f7f2","Pipeline Automation","pipeline","https://blog.cg-wire.com/tag/pipeline/","https://blog.cg-wire.com/blender-python-event-automation/",5,"\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@hiestudio?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">HI! 
ESTUDIO\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/blender-python-event-automation",{"title":54},"blender-python-event-automation","posts/blender-python-event-automation",[79],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"9JCrUEGsAvw10rXO5Icp6lhnx5Mume3AbiZ5__SpE9U",{"id":82,"title":83,"authors":84,"body":7,"description":7,"extension":8,"html":86,"meta":87,"navigation":12,"path":99,"published_at":92,"seo":100,"slug":101,"stem":102,"tags":103,"__hash__":105,"uuid":88,"comment_id":89,"feature_image":90,"featured":29,"visibility":30,"created_at":91,"updated_at":92,"custom_excerpt":93,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":94,"primary_tag":95,"url":96,"excerpt":93,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":98},"ghost/posts:kitsu-telegram-bot-integration.json","Integrating Messaging Platforms with Kitsu Production Data",[85],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">💬\u003C/div>\u003Cdiv 
class=\"kg-callout-text\">Turn production events into instant chat notifications and commands with a Kitsu messaging bot.\u003C/div>\u003C/div>\u003Cp>Chat interfaces dominate the modern workplace: production teams coordinate in threads, approvals happen in emails, and LLM-powered assistants are becoming part of daily operations.\u003C/p>\u003Cp>The real problem is proper integration. In an ideal world, a message that says \"Shot ready for review\" would let a supervisor approve that shot and update its status in Kitsu under the correct user, but this requires a small backend service, a secure API connection to Kitsu, and a reliable mapping between chat users and Kitsu users. The good news is that you can already build all of this with Kitsu!\u003C/p>\u003Cp>A simple starting point is a Telegram bot with one command like /hello. The bot links the chat user to their Kitsu account once, then queries the Kitsu API and posts the results in chat. Whenever an event happens in Kitsu, the bot notifies you. That small integration proves the concept, and that's exactly what we're going to build in this article.\u003C/p>\u003Chr>\u003Ch2 id=\"why-custom-messaging-integrations\">Why Custom Messaging Integrations\u003C/h2>\u003Cp>Custom messaging integrations centralize communication around a single source of truth. Instead of supervisors forwarding emails about a task status change, the update can be pushed automatically to the relevant team channel. For example, when a lighting task switches to \"retake\" in Kitsu, the lighting Telegram group instantly receives a structured message with the shot name, assignee, and deadline. The production tracker becomes proactive.\u003C/p>\u003Cp>User experience improves when raw database events are reshaped into readable summaries. Artists should not need to dig through activity logs to understand what changed. A daily digest sent to a Telegram channel can summarize approvals, new assignments, and upcoming deadlines in plain language. 
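As a sketch of what the formatting step of such a digest looks like, the helper below turns task records into a plain-text summary. The field names entity_name, task_status, and assignee are illustrative placeholders, not the exact Kitsu API schema:

```python
# Turn a list of task records into a plain-language digest message.
# Field names are illustrative; map them to the real Kitsu API fields.

def format_daily_digest(tasks):
    lines = ["Daily production digest:"]
    for task in tasks:
        lines.append(
            f"- {task['entity_name']}: {task['task_status']}"
            f" (assigned to {task['assignee']})"
        )
    return "\n".join(lines)

tasks = [
    {"entity_name": "SH010", "task_status": "approved", "assignee": "alice"},
    {"entity_name": "SH020", "task_status": "retake", "assignee": "bob"},
]
print(format_daily_digest(tasks))
```

The resulting string is exactly what you would hand to the Telegram sendMessage endpoint every evening.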
That digest can be generated directly from the Kitsu API and delivered automatically every evening to turn production data into something people actually consume.\u003C/p>\u003Cp>Automation is where this approach truly pays off, however. Messaging platforms can act as lightweight command interfaces. A coordinator typing \"/late_shots\" in Telegram can trigger a query against Kitsu and receive an instant report of overdue tasks. A lead typing \"/assign SH010 alice\" can trigger a backend call that updates the assignment in Kitsu. Chat becomes an operational surface for the production database.\u003C/p>\u003Cp>But as we said, let's start simple with a Telegram bot that interacts with Kitsu.\u003C/p>\u003Chr>\u003Ch2 id=\"1-create-a-new-telegram-bot\">1. Create a New Telegram Bot\u003C/h2>\u003Cp>Start by creating a dedicated bot in Telegram. Separation keeps credentials clean and avoids future security headaches when the integration is handed over to production IT.\u003C/p>\u003Cp>Open Telegram and search for BotFather, which is the official bot for managing other bots.\u003C/p>\u003Cp>Initiate a chat and send \u003Ccode>/newbot\u003C/code>. The flow is straightforward: provide a human-readable name like “Kitsu Notifications” and then a unique username such as \u003Ccode>kitsu_pipeline_bot\u003C/code>. The username must end with “bot,” and it has to be globally unique, so expect to try a few variations in a studio environment.\u003C/p>\u003Cp>BotFather returns an API token. Treat this token as a production secret, not as a convenience string to paste into Slack or commit to Git. Store it in your environment configuration system. 
If this token leaks, anyone can send messages as your production bot, which quickly turns from amusing to catastrophic when producers start receiving spam.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-10.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"976\" height=\"925\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/03/image-10.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-10.png 976w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Before wiring this into Kitsu’s event system, let's validate the token manually.\u003C/p>\u003Cp>Search for your newly created bot by its username inside Telegram and start a conversation with it. Send a simple \"/start\" so Telegram registers your chat.\u003C/p>\u003Cp>To retrieve your client (chat) ID, call the \u003Ccode>getUpdates\u003C/code> endpoint with curl using the token. For example:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">curl https://api.telegram.org/bot&lt;TOKEN&gt;/getUpdates\n\u003C/code>\u003C/pre>\u003Cp>The response will contain a JSON payload with a \u003Ccode>chat\u003C/code> object and an \u003Ccode>id\u003C/code> field. That numeric ID is what your integration will target. In a real pipeline scenario, this might be the chat ID of a supervisors group rather than an individual user.\u003C/p>\u003Cp>Now test outbound messaging directly. Use curl to send a message to yourself:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">curl -X POST https://api.telegram.org/bot&lt;TOKEN&gt;/sendMessage -d chat_id=&lt;CHAT_ID&gt; -d text=\"Kitsu integration test\"\n\u003C/code>\u003C/pre>\u003Cp>If the message appears in Telegram, the token and chat ID are valid. 
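If you prefer to script this verification, the chat ID can be pulled straight out of the parsed getUpdates JSON. The helper below assumes the response has already been fetched and decoded; the sample payload is a trimmed illustration of the structure the Bot API returns:

```python
# Extract the chat ID from a parsed getUpdates response.
# In practice the dict comes from:
#   requests.get(f"https://api.telegram.org/bot{token}/getUpdates").json()

def extract_chat_id(updates):
    """Return the chat ID of the most recent update, or None."""
    results = updates.get("result", [])
    if not results:
        return None
    return results[-1]["message"]["chat"]["id"]

# Trimmed example of the getUpdates payload structure
sample = {
    "ok": True,
    "result": [
        {"update_id": 1, "message": {"chat": {"id": 123456789}, "text": "/start"}},
    ],
}
print(extract_chat_id(sample))  # → 123456789
```

A None result usually means the bot has not received your "/start" message yet.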
This manual verification step saves hours of debugging later when you plug the same call into a Kitsu event hook and something silently fails.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-11.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"976\" height=\"925\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/03/image-11.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-11.png 976w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>With the bot validated, the next step is to connect it to Kitsu’s event system so that, for example, when a new asset is created, a message is pushed automatically to the supervisors’ Telegram group.\u003C/p>\u003Cp>The exact same \u003Ccode>sendMessage\u003C/code> endpoint you tested with curl becomes part of a small service or serverless function triggered by Kitsu.\u003C/p>\u003Chr>\u003Ch2 id=\"2-set-a-kitsu-event-listener\">2. Set a Kitsu Event Listener\u003C/h2>\u003Cp>Next, we need to subscribe to real-time events from Kitsu. 
The objective is simple: react the moment production data changes.\u003C/p>\u003Cp>We can use \u003Ccode>gazu\u003C/code>, the Python client for Kitsu's \u003Ccode>zou\u003C/code> API server, to open a WebSocket connection and listen for production events.\u003C/p>\u003Cp>For example, connect to the Kitsu event stream and filter for asset creation events:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import gazu\n\ngazu.set_host(\"http://localhost:80/api\")\ngazu.set_event_host(\"http://localhost:80/api\")\ngazu.log_in(\"admin@example.com\", \"mysecretpassword\")\n\n\ndef my_callback(data):\n    print(\"Asset created %s\" % data[\"asset_id\"])\n\nevent_client = gazu.events.init()\ngazu.events.add_listener(event_client, \"asset:new\", my_callback)\ngazu.events.run_client(event_client)\n\u003C/code>\u003C/pre>\u003Cp>We use the \u003Ccode>gazu\u003C/code> library to connect to a locally hosted Kitsu API server at \u003Ccode>http://localhost:80/api\u003C/code>, authenticate with the provided admin credentials, and then listen for real-time events.\u003C/p>\u003Cp>The snippet defines a callback function \u003Ccode>my_callback\u003C/code> that prints the ID of a newly created asset whenever it is triggered.\u003C/p>\u003Cp>After initializing an event client with \u003Ccode>gazu.events.init()\u003C/code>, the script registers the callback to listen for the \u003Ccode>\"asset:new\"\u003C/code> event (which fires whenever a new asset is created in the system).\u003C/p>\u003Cp>\u003Ccode>gazu.events.run_client(event_client)\u003C/code> starts the event loop that keeps the script running so that each time a new asset is added to Kitsu, the callback executes and prints its \u003Ccode>asset_id\u003C/code>.\u003C/p>\u003Chr>\u003Ch2 id=\"3-use-the-telegram-api-to-send-a-message\">3. Use the Telegram API to Send a Message\u003C/h2>\u003Cp>With events flowing in, push messages out using Telegram's \u003Ccode>sendMessage\u003C/code> endpoint like we did earlier for testing. 
The API is just an HTTP POST that includes the bot token, chat ID, and text payload.\u003C/p>\u003Cp>Encapsulate that in a small utility function:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import requests\nimport os\n\nTELEGRAM_BOT_TOKEN = os.getenv('TELEGRAM_BOT_TOKEN')\nTELEGRAM_CHAT_ID = os.getenv('TELEGRAM_CHAT_ID')\n\ndef send_telegram_message(text):\n    url = f\"https://api.telegram.org/bot{TELEGRAM_BOT_TOKEN}/sendMessage\"\n    payload = {\n        \"chat_id\": TELEGRAM_CHAT_ID,\n        \"text\": text,\n        \"parse_mode\": \"Markdown\"\n    }\n\n    response = requests.post(url, json=payload, timeout=5)\n\n    if not response.ok:\n        raise RuntimeError(\n            f\"Telegram API error {response.status_code}: {response.text}\"\n        )\n\u003C/code>\u003C/pre>\u003Cp>Note that the secrets are read from environment variables so they are never persisted in a Git repository.\u003C/p>\u003Cp>Then call it from the event callback:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">from your_telegram_module import send_telegram_message\n\ndef my_callback(data):\n    send_telegram_message(\"Asset created %s\" % data[\"asset_id\"])\n\u003C/code>\u003C/pre>\u003Cp>To test our event listener:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">TELEGRAM_BOT_TOKEN=&lt;TELEGRAM_BOT_TOKEN&gt; TELEGRAM_CHAT_ID=&lt;CHAT_ID&gt; python server.py\n\u003C/code>\u003C/pre>\u003Chr>\u003Ch2 id=\"4-receiving-messages-with-a-custom-kitsu-api-endpoint\">4. Receiving Messages with a Custom Kitsu API Endpoint\u003C/h2>\u003Cp>Notifications are useful, but bidirectional communication is where the integration really shines.\u003C/p>\u003Cp>To do so, we need to extend the Kitsu backend with a custom plugin that registers a new route like \u003Ccode>/plugins/telegram/webhook\u003C/code>. 
Please refer to our official guide on Developing Kitsu Plugins for in-depth steps.\u003C/p>\u003Cp>The plugin manifest will look like this:\u003C/p>\u003Cpre>\u003Ccode class=\"language-toml\">id = \"telegram\"\nname = \"Telegram Bot\"\ndescription = \"Telegram Bot\"\nversion = \"0.1.0\"\nmaintainer = \"Frank Rousseau &lt;frank@cg-wire.com&gt;\"\nwebsite = \"kitsu.cloud\"\nlicense = \"AGPL-3.0-only\"\nmaintainer_name = \"Frank Rousseau\"\nmaintainer_email = \"frank@cg-wire.com\"\nfrontend_project_enabled = true\nfrontend_studio_enabled = true\nicon = \"telegram\"\n\u003C/code>\u003C/pre>\u003Cp>And our custom route will parse incoming commands and map them to explicit backend actions:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">from flask import jsonify\nfrom flask_restful import Resource\n\n# send_telegram_message is the helper from the previous section;\n# adjust the import path to your own module layout\nfrom .utils import send_telegram_message\n\nclass WebhookResource(Resource):\n    def post(self):\n        args = self.get_args([\n            (\"message\", {}, True),\n            (\"chat\", {}, True),\n        ])\n\n        message = args[\"message\"]\n        chat_id = args[\"chat\"].get(\"id\")\n        text = message.get(\"text\", \"\")\n\n        if text == \"/hello\":\n            send_telegram_message(\"it works\")\n\n        return jsonify({\"status\": \"ok\"})\n\u003C/code>\u003C/pre>\u003Cp>For the sake of simplicity we define a single command \u003Ccode>/hello\u003C/code>, but you can create many more and use Kitsu services to query production data.\u003C/p>\u003Cp>Deterministic commands are easier to test, log, and secure. You can go a step further and call an LLM to map a natural language request into a command.\u003C/p>\u003Cp>We just need to register the route in the main entrypoint \u003Ccode>__init__.py\u003C/code>:\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">from . import resources\n\nroutes = [(\"/telegram/webhook\", resources.WebhookResource)]\n\u003C/code>\u003C/pre>\u003Cp>After packaging and installing your plugin on your Kitsu server instance, it's time to tell your Telegram bot how to reach it.\u003C/p>\u003Cp>If you use a local development environment, you can expose the server via a tunnel. With ngrok for example, if your server runs on port 5000:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">ngrok http 5000\n\u003C/code>\u003C/pre>\u003Cp>You then need to configure your Telegram bot webhook to point to that URL:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">curl -X POST \"https://api.telegram.org/bot&lt;YOUR_BOT_TOKEN&gt;/setWebhook\" \\\n     -H \"Content-Type: application/json\" \\\n     -d '{\"url\": \"https://&lt;random&gt;.ngrok-free.app/plugins/telegram/webhook\"}'\n\u003C/code>\u003C/pre>\u003Cp>Now send \u003Ccode>/hello\u003C/code> to your bot in your Telegram chat and see the result:\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-12.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"525\" height=\"560\">\u003C/figure>\u003Chr>\u003Ch2 id=\"conclusion\">Conclusion\u003C/h2>\u003Cp>A custom messaging integration with Kitsu always follows a similar pattern: create a bot on a messaging platform, subscribe to Kitsu events, send structured notifications, and expose backend routes to handle incoming messages.\u003C/p>\u003Cp>But that's not all: consider extending your Kitsu plugin with views!\u003C/p>\u003Cp>For example, display bot activity or recent interactions directly in the dashboard. Supervisors working inside Kitsu will be able to see which alerts were sent and which commands were triggered. 
The possibilities are limitless!\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":88,"comment_id":89,"feature_image":90,"featured":29,"visibility":30,"created_at":91,"updated_at":92,"custom_excerpt":93,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":94,"primary_tag":95,"url":96,"excerpt":93,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":98},"16ecaf7a-bc5a-4d86-b08b-bf62ac7701e4","69ae62c591be760001bf7d81","https://images.unsplash.com/photo-1577563908411-5077b6dc7624?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fG1lc3NhZ2V8ZW58MHx8fHwxNzczMDM5MzU5fDA&ixlib=rb-4.1.0&q=80&w=2000","2026-03-09T07:03:49.000+01:00","2026-03-09T08:00:23.000+01:00","Learn how to integrate Kitsu with Telegram by building a bot that listens to production events and sends notifications. 
This guide explains how to connect Kitsu events, trigger messages, and create simple chat commands for production workflows.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/kitsu-telegram-bot-integration/",7,"\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@lunarts?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Volodymyr Hryshchenko\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/kitsu-telegram-bot-integration",{"title":83},"kitsu-telegram-bot-integration","posts/kitsu-telegram-bot-integration",[104],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"CSt4jGbywZgG5gwq1J_TVNugmU5Ed34skMslwCBWcmM",{"id":107,"title":108,"authors":109,"body":7,"description":7,"extension":8,"html":111,"meta":112,"navigation":12,"path":127,"published_at":117,"seo":128,"slug":129,"stem":130,"tags":131,"__hash__":133,"uuid":113,"comment_id":114,"feature_image":115,"featured":29,"visibility":30,"created_at":116,"updated_at":117,"custom_excerpt":118,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":119,"primary_tag":120,"url":125,"excerpt":118,"reading_time":43,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":126},"ghost/posts:estimating-render-costs-animation.json","How Animation Studios Estimate Render Farm Capacity",[110],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">😀\u003C/div>\u003Cdiv class=\"kg-callout-text\">Rendering costs are not guesswork. 
With the right framework they become predictable.\u003C/div>\u003C/div>\u003Cp>Everyone has watched a render farm crawl at 4 p.m., staring at a progress bar that hasn't moved in ten minutes, wondering whether the shot will finish before the end of the day. That moment when the queue is full, artists are blocked, and supervisors are asking for an ETA is an estimation problem.\u003C/p>\u003Cp>Rendering often feels impossible to predict. One lighting tweak doubles the frame time. A setting that worked yesterday explodes memory today. Without a cost-estimation framework, you're left with saturated farms, missed deadlines, and eroded trust in the pipeline.\u003C/p>\u003Cp>The good news: render costs are not magic. They are measurable, decomposable, and predictable if you approach estimations with a framework instead of intuition.\u003C/p>\u003Cp>\u003Cstrong>This guide lays out a clear, practical estimation model you can apply immediately.\u003C/strong> It's designed for pipeline developers who need numbers they can defend in a production meeting.\u003C/p>\u003Chr>\u003Ch2 id=\"why-estimating-rendering-costs\">Why Estimating Rendering Costs\u003C/h2>\u003Cp>\u003Cstrong>Accurate render cost estimation protects the schedule\u003C/strong> before it's at risk. When a sequence estimated at 2 hours per frame quietly renders at 6, farm occupancy triples and downstream departments are left hanging.\u003C/p>\u003Cp>Cost visibility also directly \u003Cstrong>influences creative decisions\u003C/strong>. When artists see that enabling high-quality volumetrics adds 35% render time, they're more likely to explore alternatives. 
Without that feedback, choices default to visual preference and the farm absorbs the impact later.\u003C/p>\u003Cp>Reliable estimates are \u003Cstrong>essential for infrastructure and budget control\u003C/strong>. Farm capacity, cloud bursting, and delivery planning all depend on predictable numbers. A 120-frame sequence at 3 hours per frame behaves very differently from one at 9, especially across multiple concurrent shows. When estimates consistently land within range, production trusts the pipeline, and that trust buys room for smarter technical decisions.\u003C/p>\u003Chr>\u003Ch2 id=\"1-what-actually-affects-rendering-costs\">1. What Actually Affects Rendering Costs?\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/render-cost-estimation/index.md?ref=blog.cg-wire.com#1-what-actually-affects-rendering-costs\">\u003C/a>\u003C/p>\u003Cp>Rendering cost is never about a single push of a button. It's the result of multipliers stacking on top of each other.\u003C/p>\u003Cp>If a frame costs too much, everything downstream becomes painful, so the conversation should always start with what affects cost per frame:\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Resolution\u003C/strong> - Moving from 1080p to 4K is not a mild increase. It's four times the pixels. If a frame renders in five minutes at 1080p, it's completely reasonable to see twenty minutes at 4K with identical settings.\u003C/li>\u003Cli>\u003Cstrong>Frame rate\u003C/strong> - Ten seconds at 24fps is 240 frames. The same ten seconds at 60fps is 600 frames. If each frame costs eight minutes, you've just turned 32 render hours into 80 without touching a single shader or light.\u003C/li>\u003Cli>\u003Cstrong>Render engine choice\u003C/strong> - CPU versus GPU rendering is less about speed and more about memory ceilings. GPUs can be dramatically faster per frame, but they are constrained by VRAM. 
A scene with 12GB of textures and heavy geometry might fit comfortably in system RAM yet exceed a 24GB GPU once acceleration structures and overhead are included.\u003C/li>\u003Cli>\u003Cstrong>Sampling\u003C/strong> - Doubling samples almost doubles render time. If noise clears acceptably at 192 samples but artists push to 512 just to be safe, render time can nearly triple for negligible visual improvement.\u003C/li>\u003Cli>\u003Cstrong>Scene complexity\u003C/strong> - Modern renderers handle millions of polygons, but acceleration structure build times and memory usage still scale. A five-million-poly hero asset is fine in isolation. Fifty duplicates that are not properly instanced can double scene memory and increase render prep time significantly. The same applies to textures, volumetric fog, procedural systems like hair, fur, crowds, and simulations.\u003C/li>\u003Cli>\u003Cstrong>Animation length\u003C/strong> - Total frames equal duration multiplied by frame rate. A 30-second piece at 24fps is 720 frames. If each frame takes twelve minutes, that's 144 render hours.\u003C/li>\u003C/ul>\u003Cp>The parameters to take into account can feel overwhelming, which is why per-frame cost is the only metric that matters. If the target is eight minutes per frame and early lighting tests show fourteen, the project is already heading toward a significant overrun even if only a handful of frames have been rendered.\u003C/p>\u003Chr>\u003Ch2 id=\"2-understanding-the-core-formula\">2. 
Understanding the Core Formula\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/render-cost-estimation/index.md?ref=blog.cg-wire.com#2-understanding-the-core-formula\">\u003C/a>\u003C/p>\u003Cp>Every serious conversation about rendering cost needs to start with the core formula:\u003C/p>\u003Cblockquote>\u003Cstrong>Total Render Cost = ((average render time per frame * total frames) / render speed) * hourly compute cost\u003C/strong>\u003C/blockquote>\u003Cp>If a sequence has 1,200 frames, each averaging 18 minutes on a single GPU, and the farm processes 40 frames in parallel at $2.50 per GPU hour, the math immediately reveals whether the lighting tweak just added thousands to the budget. It puts numbers on every decision.\u003C/p>\u003Cp>Estimating render time per frame must be grounded in production reality, not optimism.\u003C/p>\u003Chr>\u003Ch2 id=\"3-local-rendering-vs-cloud\">3. Local Rendering vs Cloud\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/render-cost-estimation/index.md?ref=blog.cg-wire.com#3-local-rendering-vs-cloud\">\u003C/a>\u003C/p>\u003Cp>\u003Cstrong>It can be hard to evaluate total cost of ownership versus total cost of execution\u003C/strong> when choosing between building your own render farm or going for cloud rendering.\u003C/p>\u003Cp>Local workstation rendering looks cheap because the hardware is already sitting there. But that GPU or CPU wasn't free. A $6,000 workstation amortized over three years is roughly $166 per month before a single frame is rendered. Add electricity, say, a 700W machine running 10 hours a day at $0.20 per kWh, and that's roughly $42 per month just to keep it on. Now factor maintenance: failed SSDs, driver conflicts, OS updates breaking plugins. Even a conservative estimate of four hours of IT time per month at $75/hour adds $300. That \"free\" rendering node is suddenly costing over $500 per month before considering production impact. 
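As a sanity check, the amortization math above fits in a few lines of Python. Every constant here is one of the illustrative figures from this paragraph, not a benchmark:

```python
# Rough monthly cost of a 'free' local render node.
# All figures are the illustrative ones from the text, not measurements.
HARDWARE_PRICE = 6000        # workstation cost in dollars
AMORTIZATION_MONTHS = 36     # three-year amortization
POWER_KW = 0.7               # 700 W draw
HOURS_PER_DAY = 10
DAYS_PER_MONTH = 30
KWH_PRICE = 0.20             # dollars per kWh
IT_HOURS_PER_MONTH = 4       # conservative maintenance estimate
IT_HOURLY_RATE = 75

amortization = HARDWARE_PRICE / AMORTIZATION_MONTHS                  # ~$166/month
electricity = POWER_KW * HOURS_PER_DAY * DAYS_PER_MONTH * KWH_PRICE  # ~$42/month
maintenance = IT_HOURS_PER_MONTH * IT_HOURLY_RATE                    # $300/month

monthly_cost = amortization + electricity + maintenance
print(f'{monthly_cost:.0f}')  # just over $500 per month
```

Swap in your own hardware price, power draw, and IT rates; the point is that the "free" node gets a defensible monthly number attached to it.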
Opportunity cost is another silent budget killer. On a 10-person team billing $600 per artist per day, a single blocked workstation can easily represent thousands in indirect delay over a week of crunch.\u003C/p>\u003Cp>Cloud rendering flips the model from capital expenditure to operational expense. Instead of buying a machine, you rent compute by the GPU-hour. For example, if a frame takes 2 GPU-hours and the provider charges $1.20 per GPU-hour, that's $2.40 per frame. Multiply by 500 frames and the job costs $1,200 in raw compute. That number is transparent and scales linearly with workload, which makes estimates more predictable. Scalability is where cloud becomes strategically powerful. If 500 frames must be delivered in 24 hours and each frame takes 2 hours, locally that's 1,000 GPU-hours. On a single workstation, that's over 40 days of render time. Even with five machines, that's still more than a week. In the cloud, spinning up 100 GPUs finishes the job in roughly 10 hours. That difference can mean landing a client or missing the deadline entirely. But hidden costs in the cloud are where many estimates fall apart.\u003C/p>\u003Cp>\u003Cstrong>The practical approach is hybrid thinking.\u003C/strong> For example, keep a small local farm to render dailies overnight and use cloud rendering for finals, spikes, and simulations that exceed internal capacity. Switch as needed.\u003C/p>\u003Cp>\u003Cstrong>Estimating render cost means modeling behavior, not just machines.\u003C/strong> Once again, it's important to know your average render time per frame and plug it into both local and cloud cost estimators.\u003C/p>\u003Chr>\u003Ch2 id=\"4-hidden-costs-animators-forget\">4. Hidden Costs Animators Forget\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/render-cost-estimation/index.md?ref=blog.cg-wire.com#4-hidden-costs-animators-forget\">\u003C/a>\u003C/p>\u003Cp>Everyone budgets for render time but hidden costs compound across shots. 
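Before itemizing those hidden costs, the cloud arithmetic from the previous section can be sketched the same way. The $1.20 per GPU-hour rate and 2 GPU-hours per frame are the illustrative figures from above, not a provider quote:

```python
# Cloud job sizing with the core formula: cost scales with total GPU-hours,
# while the deadline sets the number of concurrent machines.
import math

gpu_hours_per_frame = 2.0
price_per_gpu_hour = 1.20
frames = 500
deadline_hours = 24

total_gpu_hours = frames * gpu_hours_per_frame            # 1,000 GPU-hours
compute_cost = total_gpu_hours * price_per_gpu_hour       # $1,200 raw compute

# Smallest fleet that meets the deadline, ignoring spin-up and queueing.
gpus_needed = math.ceil(total_gpu_hours / deadline_hours)
wall_clock_on_100 = total_gpu_hours / 100                 # ~10 h on 100 GPUs

print(compute_cost, gpus_needed, wall_clock_on_100)
```

Note that the compute cost is independent of how many GPUs you rent; parallelism only buys wall-clock time, which is exactly why the deadline, not the budget, usually decides the fleet size.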
If the goal is predictable delivery, those costs need to be visible and actively managed.\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Revisions\u003C/strong> are the obvious one, but the real expense isn't just the extra CPU hours. It's the cascade. A late animation tweak on a hero shot forces lighting to re-queue, comp to invalidate caches, and modeling to re-export textures. On a 300-frame 4K shot with heavy volumes, a \"small\" timing change can mean tens of thousands of core-hours plus artist wait time. Clear version approvals can save a lot of money.\u003C/li>\u003Cli>\u003Cstrong>Storage\u003C/strong> is another silent budget killer, especially with EXR sequences. A single 4K 16-bit multi-layer EXR can easily hit 80-150 MB per frame. At 1000 frames, that's 80-150 GB for one version of one shot.\u003C/li>\u003Cli>\u003Cstrong>Bandwidth\u003C/strong> becomes visible the moment artists work remote or across sites. Syncing a 120 GB publish over a 1 Gbps line theoretically takes around 15 minutes, but in practice with contention and overhead, it can take much longer. Now multiply that by ten artists pulling the same plates Monday morning. Suddenly the farm is idle because comp is waiting on transfers. The practical approach is caching and locality, with a NAS and local granular syncs for example.\u003C/li>\u003Cli>\u003Cstrong>Backup and archival policies\u003C/strong> also carry real cost for the same reasons. \u003Cstrong>Software licenses\u003C/strong> are often treated as fixed overhead, but they can also scale unpredictably in the case of render only licenses. \u003Cstrong>IT time and pipeline setup\u003C/strong> rarely make it into show budgets, but they absolutely should. Every new show configuration, custom USD schema, or farm integration is engineering time that competes with support and R&amp;D. Last but not least: when delivery compresses, everything becomes more expensive. 
Cloud burst rendering costs more per core-hour, vendors charge \u003Cstrong>expedite fees\u003C/strong>, and overtime increases payroll burn.\u003C/li>\u003C/ul>\u003Cp>None of these costs are mysterious. They're just easy to ignore when the focus is on creative output. \u003Cstrong>The role of a strong pipeline is to make these invisible multipliers measurable and manageable.\u003C/strong> When teams see the real cost of a \"small change,\" they make better decisions, and the entire production runs with fewer surprises.\u003C/p>\u003Chr>\u003Ch2 id=\"5-a-simple-estimation-framework\">5. A Simple Estimation Framework\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/render-cost-estimation/index.md?ref=blog.cg-wire.com#5-a-simple-estimation-framework\">\u003C/a>\u003C/p>\u003Cp>Estimating render costs needs to be grounded in reality. Now that you have all the elements, here are a few simple steps you can follow to create your estimate, but don't be simplistic and adapt them to your studio workflow:\u003C/p>\u003Col>\u003Cli>The most reliable starting point is \u003Cstrong>the heaviest scene in the current production\u003C/strong>. Pull the most complex shot you can find: highest character count, full FX, volumetrics, motion blur, the works.\u003C/li>\u003Cli>\u003Cstrong>Render 5-10 final-quality frames under real production settings.\u003C/strong> For example, if the hero battle shot has six characters, rain FX, and 4K output, render frames 101-110 exactly as they would ship. Anything less is lying to yourself.\u003C/li>\u003Cli>Once those frames are done, \u003Cstrong>calculate the average render time per frame across the batch.\u003C/strong> If the ten frames range from 18 to 26 minutes and average out at 22 minutes per frame, that 22 minutes is your baseline.\u003C/li>\u003Cli>With that baseline in hand, \u003Cstrong>add a buffer\u003C/strong> before anyone else asks for it. Production reality guarantees noise. 
A 15-30% buffer is healthy depending on show volatility. If that 22-minute average becomes 28 minutes after a 25% buffer, you've built in space for inevitable look-dev drift. On a stylized commercial with locked lighting, 15% might be enough. On a feature sequence still evolving, 30% is safer and still defensible.\u003C/li>\u003Cli>Now scale it to the show. \u003Cstrong>Multiply the buffered per-frame time by total frame count.\u003C/strong> A 90-second sequence at 24 fps is 2,160 frames. At 28 minutes per frame, that's 60,480 render minutes, or just over 1,008 render hours. On a 200-node farm where each node runs one frame at a time, that's roughly five hours of wall-clock time, assuming perfect distribution and zero contention. That assumption will never be true, but it gives production something concrete to reason about.\u003C/li>\u003Cli>Next comes \u003Cstrong>the revision margin.\u003C/strong> Expect 10-25% additional frames to be re-rendered over the life of the sequence. If history shows that client notes typically trigger two re-renders, lean toward 20-25%. A 20% revision margin adds 432 frames. At 28 minutes per frame, that's another 201 render hours that must be budgeted.\u003C/li>\u003C/ol>\u003Cp>And as we mentioned earlier, \u003Cstrong>don't forget hidden costs like storage and bandwidth!\u003C/strong> Calculate them up front and make sure the network and disks can actually handle that sustained throughput.\u003C/p>\u003Cp>When all these pieces are combined, you get a number that can survive scrutiny. 
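The steps above can be run end to end as a small script. The constants are the worked example from this section (22-minute baseline, 25% buffer, 90 seconds at 24 fps), and the $2.50 per node-hour rate is a placeholder to show how the core formula plugs in:

```python
# The estimation framework, end to end, using this section's example numbers.
baseline_min = 22            # step 3: measured average minutes per frame
buffer = 0.25                # step 4: volatility buffer
fps = 24
duration_s = 90
farm_nodes = 200             # one frame per node at a time
revision_margin = 0.20       # step 6: share of frames re-rendered
node_hour_cost = 2.50        # hypothetical farm-wide cost per node-hour

buffered_min = round(baseline_min * (1 + buffer))      # 28 minutes
frames = duration_s * fps                              # 2,160 frames
render_hours = frames * buffered_min / 60              # 1,008 render hours
wall_clock_hours = render_hours / farm_nodes           # ~5 h, ideal spread
revision_frames = round(frames * revision_margin)      # 432 extra frames
revision_hours = revision_frames * buffered_min / 60   # ~201 render hours

total_cost = (render_hours + revision_hours) * node_hour_cost
print(f'{render_hours:.0f} h + {revision_hours:.0f} h revisions, ${total_cost:.0f}')
```

Swap in your own measured baseline, frame counts, and rates; the structure of the calculation stays the same from show to show.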
\u003Cstrong>That number is both a cost estimate and a production constraint\u003C/strong>: it tells you whether to optimize shaders, reduce volumetrics, increase farm capacity, or renegotiate scope.\u003C/p>\u003Chr>\u003Ch2 id=\"conclusion\">Conclusion\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/render-cost-estimation/index.md?ref=blog.cg-wire.com#conclusion\">\u003C/a>\u003C/p>\u003Cp>\u003Cstrong>Rendering cost estimation is ultimately about managing uncertainty.\u003C/strong> No estimate survives contact with late creative changes or unexpected technical constraints. The practical approach is simple: test early with representative frames, base projections on measured data instead of intuition, add realistic buffers for revisions, and continuously recalibrate once real shots hit the farm. Every project will drift: the goal is to detect that drift early and absorb it with planning rather than panic.\u003C/p>\u003Cp>If tighter control over that uncertainty sounds appealing, \u003Ca href=\"https://blog.cg-wire.com/flamenco-without-nas-kitsu/\" rel=\"nofollow\">consider trying self-hosting a render farm\u003C/a>. Running your own infrastructure gives direct access to performance metrics, failure rates, queue behavior, and real per-shot render costs instead of relying on opaque cloud billing summaries. Even a small pilot setup with a few nodes rendering a short internal project can expose bottlenecks, validate benchmarks, and build the historical data needed for future estimates. 
Owning the feedback loop between scene complexity, hardware performance, and scheduling pressure is often the fastest way to turn render cost estimation from guesswork into an operational advantage.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":113,"comment_id":114,"feature_image":115,"featured":29,"visibility":30,"created_at":116,"updated_at":117,"custom_excerpt":118,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":119,"primary_tag":120,"url":125,"excerpt":118,"reading_time":43,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":126},"33472886-015c-40af-bab2-a0b2dc33109b","69ae62c891be760001bf7d87","https://images.unsplash.com/photo-1719014745427-663137ae50f6?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGFuaW1hdGlvbiUyMHJlbmRlcmluZ3xlbnwwfHx8fDE3NzMwMzg3NTR8MA&ixlib=rb-4.1.0&q=80&w=2000","2026-03-09T07:03:52.000+01:00","2026-03-09T07:51:00.000+01:00","Learn how animation studios estimate rendering costs and predict farm capacity. 
This guide explains the factors that affect render time, the core cost formula, and a practical framework for reliable render estimations.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":121,"name":122,"slug":123,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":124},"5fff0e4b653a0c003924f7f0","Production Management","production-management","https://blog.cg-wire.com/tag/production-management/","https://blog.cg-wire.com/estimating-render-costs-animation/","\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@buddhaelemental3d?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Buddha Elemental 3D\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/estimating-render-costs-animation",{"title":108},"estimating-render-costs-animation","posts/estimating-render-costs-animation",[132],{"id":121,"name":122,"slug":123,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":124},"AfO5bQ4BoUaQdumnEGFpE6vmeVhLOofanyDkTeqmITY",{"id":135,"title":136,"authors":137,"body":7,"description":7,"extension":8,"html":139,"meta":140,"navigation":12,"path":152,"published_at":145,"seo":153,"slug":154,"stem":155,"tags":156,"__hash__":158,"uuid":141,"comment_id":142,"feature_image":143,"featured":29,"visibility":30,"created_at":144,"updated_at":145,"custom_excerpt":146,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":147,"primary_tag":148,"url":149,"excerpt":146,"reading_time":150,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":151},"ghost/posts:retopology-animation-blender-guide.json","Why Retopology Matters for Animation Pipelines",[138],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">🧩\u003C/div>\u003Cdiv class=\"kg-callout-text\">&nbsp;Retopology turns messy 3D meshes into animation-ready assets.\u003C/div>\u003C/div>\u003Cp>AI tools can now generate 3D models in minutes, but they usually produce messy 
topology, meaning the way polygons are arranged across the surface is uneven and poorly structured. It might look fine on the surface, but it'll break the moment you start trying to animate it.\u003C/p>\u003Cp>If you're doing any kind of animation or rendering, assume \u003Cstrong>you will need retopology\u003C/strong>.\u003C/p>\u003Cp>If you don't know where to start, we've got you covered. In this article, we'll go through the process step-by-step and explain different tools you can use to make it easier.\u003C/p>\u003Chr>\u003Ch2 id=\"whats-retopology\">What's Retopology\u003C/h2>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-blue\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Retopology is the process of rebuilding the surface topology of a 3D model to create a cleaner arrangement of polygons over an existing sculpt\u003C/strong>\u003C/b> so it deforms correctly in animation.\u003C/div>\u003C/div>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/retopology/index.md?ref=blog.cg-wire.com#whats-retopology\">\u003C/a>\u003C/p>\u003Cfigure class=\"kg-card kg-image-card kg-card-hascaption\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-5.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"560\" height=\"220\">\u003Cfigcaption>\u003Ci>\u003Cem class=\"italic\" style=\"white-space: pre-wrap;\">Source: Blender Manual\u003C/em>\u003C/i>\u003C/figcaption>\u003C/figure>\u003Cp>For example, we don't usually animate the dense sculpt that comes out of ZBrush directly. 
Instead, we build a lighter, structured mesh on top of it.\u003C/p>\u003Cp>A mesh is a 3D object made of vertices (points), edges (lines between points), and faces (surfaces).\u003C/p>\u003Cp>Before we even think about rigging, we inspect the mesh in wireframe mode and identify dense clusters, stretched polygons, and chaotic edge flow (the direction edges follow across the surface).\u003C/p>\u003Cp>For a character, for example, we could rebuild the shoulder using evenly spaced quads (four-sided polygons) instead of triangles so that the arm could rotate without pinching. This is retopology.\u003C/p>\u003Chr>\u003Ch2 id=\"why-retopology-is-key\">Why Retopology Is Key\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/retopology/index.md?ref=blog.cg-wire.com#why-retopology-is-key\">\u003C/a>\u003C/p>\u003Cp>Retopology rebuilds a model's surface with clean geometry, and you need it if you want \u003Cstrong>assets that are maintainable and reusable\u003C/strong> across productions. Animators don't ship dense sculpt topology downstream. Instead, they rebuild it with clean edge loops so that the next animator or rigger can understand and modify it quickly.\u003C/p>\u003Cp>\u003Cstrong>Good retopology also makes animation easier because deformation becomes predictable.\u003C/strong> Deformation is how a mesh changes shape when a joint rotates, and you support it with evenly spaced quads around elbows, knees, and mouths. 
If you place five to seven radial edge loops around a joint, you give the skin enough geometry to bend without collapsing.\u003C/p>\u003Cp>Lastly, \u003Cstrong>controlling polygon density reduces rendering cost.\u003C/strong> A polygon is a single face of geometry, and more polygons means more data to process, so we usually concentrate on details where silhouettes change and keep flat areas lightweight to cut costs.\u003C/p>\u003Cp>\u003Cstrong>Retopology always comes in handy at some point\u003C/strong>, whether it's to fix a 3D model or create different levels of detail (LOD), so roll up your sleeves and let's dive in.\u003C/p>\u003Chr>\u003Ch2 id=\"1-back-up-your-3d-model\">1. Back Up Your 3D Model\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/retopology/index.md?ref=blog.cg-wire.com#1-back-up-your-3d-model\">\u003C/a>\u003C/p>\u003Cp>First, \u003Cstrong>it's important you back up your model before you touch retopology\u003C/strong>, every single time.\u003C/p>\u003Cp>Automated retopology tools rebuild topology from scratch, which means they overwrite or delete the original mesh data. It happens that artists run an auto-retopo pass at the end of a long day, only to realize the new edge flow breaks deformation around the shoulders and the original sculpt is gone.\u003C/p>\u003Cp>Don't rely on undo. Save a clean duplicate of the file and archive the current mesh in your scene before running anything destructive.\u003C/p>\u003Cp>In production, also create a new version in Kitsu to keep changes traceable and recoverable. 
That way, if the new topology fails in rigging tests, you can roll back in minutes instead of asking IT for a file restore.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-6.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1438\" height=\"809\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/03/image-6.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/03/image-6.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-6.png 1438w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Treat backups as part of the retopology process itself! A two-minute version bump and duplicate save can protect days of sculpting and keep the pipeline moving when supervisors ask to compare \"before\" and \"after\" meshes.\u003C/p>\u003Chr>\u003Ch2 id=\"2-general-process\">2. General Process\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/retopology/index.md?ref=blog.cg-wire.com#2-general-process\">\u003C/a>\u003C/p>\u003Cp>The general workflow is simple: clean the sculpt, voxel remesh for stability, quad remesh for structure, then manually refine deformation areas like shoulders and hips.\u003C/p>\u003Cp>Always test with quick skin weights and extreme poses early.\u003C/p>\u003Chr>\u003Ch2 id=\"3-automated-retopology-with-remeshing\">3. Automated Retopology With Remeshing\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/retopology/index.md?ref=blog.cg-wire.com#3-automated-retopology-with-remeshing\">\u003C/a>\u003C/p>\u003Cp>If a creature comes in with 8 million polygons and chaotic triangles, \u003Cstrong>we don't start hand-retopo immediately\u003C/strong>. 
Instead, we run an automated remesh pass to establish structure first.\u003C/p>\u003Cp>To do so, Blender offers two remeshing algorithms: voxel and quad.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card kg-card-hascaption\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-7.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1280\" height=\"720\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/03/image-7.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/03/image-7.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-7.png 1280w\" sizes=\"(min-width: 720px) 720px\">\u003Cfigcaption>\u003Ci>\u003Cem class=\"italic\" style=\"white-space: pre-wrap;\">Source: Sofia Pahaoja on Medium\u003C/em>\u003C/i>\u003C/figcaption>\u003C/figure>\u003Cp>\u003Cstrong>Voxel remeshing\u003C/strong> (VDB Remesh) works by converting the mesh into a 3D grid of tiny cubes (voxels), rebuilding the surface based on volume rather than original edge flow.\u003C/p>\u003Cp>The evenly distributed geometry it produces is why it's great for fixing holes, non-manifold geometry (for example, edges shared by more than two faces, or internal faces), and intersecting parts. Use voxel remeshing when you need a fresh base mesh and don't mind that the resulting edge flow can be messy.\u003C/p>\u003Cp>On the other hand, you can use \u003Cstrong>quad remeshing\u003C/strong> when you want animation-friendly edge loops. Quad remeshing analyzes surface curvature and generates quads that deform predictably under skinning. Blender's implementation, QuadriFlow, follows the shape of your model.\u003C/p>\u003Cp>Naturally, you can combine the two. 
On a facial rig for example, you could run quad remesh after voxel cleanup, then adjust guides to force loops around the eyes and mouth.\u003C/p>\u003Cp>It's important to keep in mind that \u003Cstrong>automated retopology is more often than not a starting point, not a final deliverable.\u003C/strong>\u003C/p>\u003Chr>\u003Ch2 id=\"4-manual-retopology-with-poly-build\">4. Manual Retopology With Poly Build\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/retopology/index.md?ref=blog.cg-wire.com#4-manual-retopology-with-poly-build\">\u003C/a>\u003C/p>\u003Cp>\u003Cstrong>Manual retopology with the Poly Build tool\u003C/strong> is what you reach for when deformation quality is key, especially on hero characters that will carry close-ups.\u003C/p>\u003Cp>In Blender, the Poly Build tool lets you draw new polygons directly on the surface of a dense mesh, snapping every vertex to the sculpt.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card kg-card-hascaption\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-8.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1078\" height=\"516\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/03/image-8.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/03/image-8.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-8.png 1078w\" sizes=\"(min-width: 720px) 720px\">\u003Cfigcaption>\u003Ci>\u003Cem class=\"italic\" style=\"white-space: pre-wrap;\">Source: Blender Nation\u003C/em>\u003C/i>\u003C/figcaption>\u003C/figure>\u003Cp>To stick with the facial rig example, an artist could rebuild the mouth area by placing quads (four-sided polygons) around the lips first to make sure edge loops follow the smile lines. 
It would give the rigger predictable loops for blendshapes and avoid collapsing geometry during extreme phonemes.\u003C/p>\u003Cp>\u003Cstrong>You can also use other modifiers like the Subdivision Surface Modifier or the Multiresolution Modifier\u003C/strong> to perform specific jobs.\u003C/p>\u003Cp>In this step, experience matters a lot. Most animators learn by studying the topology of high-quality models and re-applying the same principles to their own models. It's tacit knowledge, so practice is key!\u003C/p>\u003Chr>\u003Ch2 id=\"5-measuring-retopology-performance\">5. Measuring Retopology Performance\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/retopology/index.md?ref=blog.cg-wire.com#5-measuring-retopology-performance\">\u003C/a>\u003C/p>\u003Cp>Retopology can feel like it's all about aesthetics, but \u003Cstrong>it's good practice to measure retopology performance with numbers\u003C/strong> by counting meshes in your scene. This way you can assess the amount of work a retopology requires and track your progress.\u003C/p>\u003Cp>In Blender, open the Outliner and check how many mesh objects are present, then enable Statistics in the viewport overlays to see vertex and face counts in real time.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-9.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"317\" height=\"159\">\u003C/figure>\u003Cp>A character model can look light, but the stats could show 120k faces across separate clothing meshes, and simply merging static accessories and removing hidden interior faces could drop the count substantially before starting more complex retopology operations.\u003C/p>\u003Cp>It's also important to consider separate mesh counts depending on your LOD strategy.\u003C/p>\u003Cp>LOD, or Level of Detail, means creating multiple versions of the same asset at different resolutions so the 
engine swaps them based on camera distance.\u003C/p>\u003Cp>Reducing mesh count also serves LOD optimization, which is about performance at runtime: retopologize key deformation areas like shoulders and hips so the lower LODs still bend correctly during animation, without spending too much time on details. Context is important.\u003C/p>\u003Chr>\u003Ch2 id=\"conclusion\">Conclusion\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/retopology/index.md?ref=blog.cg-wire.com#conclusion\">\u003C/a>\u003C/p>\u003Cp>AI-generated 3D models have made it incredibly fast to go from idea to mesh. But speed without structure comes at a cost. Clean topology is what transforms a raw, messy asset into something production-ready.\u003C/p>\u003Cp>In this guide, we covered what retopology is, why it matters for maintainability, animation, and rendering performance, and how to approach it step by step inside Blender.\u003C/p>\u003Cp>You've seen why backing up your original mesh is critical. From there, we explored automated retopology using remeshing tools like the Voxel and Quad methods for fast results, as well as manual retopology with the Poly Build tool when precision matters most. Finally, we looked at how to measure performance by analyzing mesh counts and understanding the trade-offs between LODs and topology.\u003C/p>\u003Cp>Retopology isn't just a cleanup step: it's what makes an asset ready for rigging, animation, and rendering. And while we demonstrated the process in Blender, the same principles apply across all major DCC tools: whether you're working in Maya, Houdini, or any other 3D software, the fundamentals remain the same.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! 
We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":141,"comment_id":142,"feature_image":143,"featured":29,"visibility":30,"created_at":144,"updated_at":145,"custom_excerpt":146,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":147,"primary_tag":148,"url":149,"excerpt":146,"reading_time":150,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":151},"05e17976-e873-4ea1-b896-fef84f99fcd7","69ae62ca91be760001bf7d8d","https://images.unsplash.com/photo-1590285359328-dce54ee24c1c?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDQwfHxhbmltYXRpb24lMjBtb2RlbHxlbnwwfHx8fDE3NzMwMzgxMDN8MA&ixlib=rb-4.1.0&q=80&w=2000","2026-03-09T07:03:54.000+01:00","2026-03-09T07:41:49.000+01:00","Learn what retopology is and why it’s essential for animation. 
This guide walks through the retopology workflow in Blender, from automated remeshing to manual topology cleanup for production-ready 3D models.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/retopology-animation-blender-guide/",6,"\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@jhc?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">James Coleman\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/retopology-animation-blender-guide",{"title":136},"retopology-animation-blender-guide","posts/retopology-animation-blender-guide",[157],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"1nVpTXkQokkNvujUo-ojOWyHqGGI1Qw-Q6oHLAzOgBg",{"id":160,"title":161,"authors":162,"body":7,"description":7,"extension":8,"html":164,"meta":165,"navigation":12,"path":176,"published_at":170,"seo":177,"slug":178,"stem":179,"tags":180,"__hash__":182,"uuid":166,"comment_id":167,"feature_image":168,"featured":29,"visibility":30,"created_at":169,"updated_at":170,"custom_excerpt":171,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":172,"primary_tag":173,"url":174,"excerpt":171,"reading_time":150,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":175},"ghost/posts:scaling-animation-studio-systems.json","Scaling an Animation Studio from 5 to 50 Artists",[163],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📈\u003C/div>\u003Cdiv class=\"kg-callout-text\">Scaling an animation studio is less about hiring more artists and more about building the right systems.\u003C/div>\u003C/div>\u003Cp>In a small team, everyone talks to everyone and 
problems get solved by turning around in a chair. At fifty people, that same habit creates noise and delays.\u003C/p>\u003Cp>Systems thinking (designing repeatable processes instead of relying on individual heroics) is hard to learn without having seen it inside a larger studio. Many artists only realize this when a project slips because no one has defined who approves shots, where files live, or how feedback is tracked. Multiply that by ten new hires and a deadline, and chaos follows.\u003C/p>\u003Cp>\u003Cstrong>The challenge is building structures that make good work predictable. And the solution is to deliberately design how information, assets, and decisions flow before growth forces painful lessons.\u003C/strong>\u003C/p>\u003Cp>In this article, we define best practices to help you plan ahead.\u003C/p>\u003Chr>\u003Ch2 id=\"1-layered-team-structure\">1. Layered Team Structure\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/scaling-pipeline-from-5-to-50-artists/index.md?ref=blog.cg-wire.com#1-layered-team-structure\">\u003C/a>\u003C/p>\u003Cp>When a studio has five artists, everyone touches everything and decisions happen in the same room. At fifty, that model only leads to confusion.\u003C/p>\u003Cp>It's important to \u003Cstrong>put a layered team structure in place early\u003C/strong> by defining departments (animation, rigging, lighting, compositing), supervisors who own creative and technical direction, and artists who execute within that scope. The supervisor is the person accountable for final quality and approvals, not just the most senior animator. 
Once each department has a clear supervisor and a single approval path, feedback flows through one channel and shot turnaround time drops.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1438\" height=\"809\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/03/image.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/03/image.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image.png 1438w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Departments should be divided to \u003Cstrong>reduce cross-department dependencies\u003C/strong>, with pipelines designed so teams work in parallel instead of waiting on each other. A dependency is any task that blocks another task from starting. You can also standardize rigs, naming conventions, and publish processes so animation does not wait on last-minute rig tweaks.\u003C/p>\u003Cp>A clear team structure also makes it easy to \u003Cstrong>keep your budget under control\u003C/strong> as you scale and to track your burn rate (how fast cash is spent each month) to guide staffing decisions. When production sees that adding two mid-level animators keeps the burn rate aligned with delivery milestones, hiring is no longer a gamble.\u003C/p>\u003Chr>\u003Ch2 id=\"2-centralized-asset-management\">2. 
Centralized Asset Management\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/scaling-pipeline-from-5-to-50-artists/index.md?ref=blog.cg-wire.com#2-centralized-asset-management\">\u003C/a>\u003C/p>\u003Cp>You need to centralize asset management early, because five artists can shout across the room for the latest rig, but fifty cannot.\u003C/p>\u003Cp>Asset management originates from a simple need: \u003Cstrong>everyone must always work on the most up-to-date asset.\u003C/strong> There is nothing more frustrating than seeing a lighting artist spend half a day polishing a shot, only to discover the character rig is two versions behind. It's important to quickly replace scattered folders and casual file sharing with a single source of truth where approved files live.\u003C/p>\u003Cp>Spreadsheets may seem enough to track shots and versions, but they collapse the moment three supervisors update them at once or someone forgets to log a change. Google Drive is attractive because you are already familiar with it, but you can't easily version assets and preview renders will eat up your storage quota fast.\u003C/p>\u003Cp>The fix is simple: \u003Cstrong>store all production assets on a secure server with controlled access\u003C/strong>, so files are not passed around manually and permissions prevent accidental overwrites. 
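One way to make "no accidental overwrites" concrete is to compute version numbers at publish time instead of trusting artists to pick them. Below is a minimal pure-Python sketch of such a publish helper; the flat folder layout and the name_v### naming scheme are illustrative assumptions, not a Kitsu convention:

```python
import re
from pathlib import Path

def next_version_path(publish_dir: Path, asset: str, ext: str = ".blend") -> Path:
    """Return the next free versioned path, e.g. hero_v003.blend after v002."""
    pattern = re.compile(rf"{re.escape(asset)}_v(\d{{3}}){re.escape(ext)}$")
    # Collect version numbers from already-published files matching the scheme.
    versions = [
        int(m.group(1))
        for p in publish_dir.glob(f"{asset}_v*{ext}")
        if (m := pattern.match(p.name))
    ]
    # Start at v001 when nothing has been published yet.
    next_v = max(versions, default=0) + 1
    return publish_dir / f"{asset}_v{next_v:03d}{ext}"
```

Because the helper only ever returns a path that does not exist yet, a publish step built on it can never clobber an earlier version, which is exactly what a tracker like Kitsu automates for you at scale.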
Lock down DCC tool choices, standardize file-sharing formats, and introduce versioning strategies.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-1.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1438\" height=\"809\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/03/image-1.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/03/image-1.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-1.png 1438w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Versioning means saving incremental, clearly numbered iterations of a file so changes can be tracked and rolled back. Instead of letting artists rename files \"final_v7_reallyFinal,\" you can \u003Cstrong>enforce automatic version publishing through your DCC pipeline\u003C/strong>. A practical example: when a rigger publishes a new character to Kitsu, the system increments the version. Animators open shots and automatically reference the latest approved rig.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-2.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1438\" height=\"809\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/03/image-2.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/03/image-2.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-2.png 1438w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"3-tracking-documentation\">3. 
Tracking &amp; Documentation\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/scaling-pipeline-from-5-to-50-artists/index.md?ref=blog.cg-wire.com#3-tracking--documentation\">\u003C/a>\u003C/p>\u003Cp>In a large studio, accountability no longer lives in casual conversations.\u003C/p>\u003Cp>\u003Cstrong>You need a production tracker as a shared system to assign tasks, deadlines, and owners in one visible place.\u003C/strong>\u003C/p>\u003Cp>In Kitsu, for example, you can set up every concept, asset, shot, and scene as a trackable task, and assign one clear owner.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-3.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1438\" height=\"833\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/03/image-3.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/03/image-3.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-3.png 1438w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>In a small team, everyone remembers who is polishing the walk cycle. In a larger team, two animators may assume the other is handling it. A simple tracker prevents that confusion by making responsibility explicit.\u003C/p>\u003Cp>Pair this with defined milestones so progress is measured against concrete checkpoints rather than gut feeling.\u003C/p>\u003Cp>\u003Cstrong>Documentation must also scale with headcount.\u003C/strong> You need a knowledge base to centralize tools, processes, and conventions to make them accessible to everyone. 
For example, create a studio wiki in tools like Notion or Confluence and require artists to document new tools and fixes as part of their task completion.\u003C/p>\u003Cp>Last but not least, \u003Cstrong>make use of forecasting tools\u003C/strong> to spot delays early. If layout consistently overruns by two days per sequence, adjust bids and staffing before deadlines slip, not after clients complain.\u003C/p>\u003Chr>\u003Ch2 id=\"4-structure-review-loops-team-communication\">4. Structure Review Loops &amp; Team Communication\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/scaling-pipeline-from-5-to-50-artists/index.md?ref=blog.cg-wire.com#4-structure-review-loops--team-communication\">\u003C/a>\u003C/p>\u003Cp>Feedback cycles also require structure.\u003C/p>\u003Cp>\u003Cstrong>A review loop should be a scheduled, repeatable process where work is submitted, reviewed, revised, and approved in clear stages.\u003C/strong>\u003C/p>\u003Cp>Written communication is also critical because it creates a record and removes ambiguity. 
Make submissions happen at fixed times each week and require artists to attach a short written intent note explaining what changed and what feedback is requested, or use asynchronous comments that don't require everyone to be present at the same time to reduce meeting overload.\u003C/p>\u003Cp>\u003Cstrong>A review engine\u003C/strong> like Kitsu's centralizes versions, notes, and approvals, to prevent feedback from getting lost in chat threads:\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-4.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1122\" height=\"549\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/03/image-4.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/03/image-4.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/03/image-4.png 1122w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>You can \u003Cstrong>combine it with a messaging platform\u003C/strong> for quick clarifications while keeping final notes inside the review system. Many teams discover that when supervisors stop giving major notes in private messages and instead post them publicly in the review tool, alignment improves and duplicate work drops significantly.\u003C/p>\u003Chr>\u003Ch2 id=\"5-infrastructure-pipeline-management\">5. Infrastructure &amp; Pipeline Management\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/scaling-pipeline-from-5-to-50-artists/index.md?ref=blog.cg-wire.com#5-infrastructure--pipeline-management\">\u003C/a>\u003C/p>\u003Cp>Infrastructure stops being a background concern when a studio grows. 
At fifty artists, fifteen minutes of daily friction per person (waiting for files to sync, relinking textures, re-rendering broken shots) adds up to more than twelve hours of lost production time every day: 50 people times 15 minutes is 12.5 hours.\u003C/p>\u003Cp>\u003Cstrong>A dedicated pipeline team is important.\u003C/strong> Instead of having everyone patch problems as they appear, you can have one pipeline team owning standards, versioning, and automation so artists can stay focused on shots. Technical artists handle multiple key components of an animation studio:\u003C/p>\u003Cul>\u003Cli>A NAS (Network Attached Storage) ensures everyone works from the same source of truth. Instead of copying files over chat, assets are published to a single location.\u003C/li>\u003Cli>Backup and redundancy protect against disaster. One corrupted drive should not freeze a 50-person studio. Automated nightly backups and mirrored servers prevent panic.\u003C/li>\u003Cli>A scalable render farm keeps lighting from blocking animation.\u003C/li>\u003Cli>Custom automations quickly add up when you're handling hundreds of thousands of frames throughout a production.\u003C/li>\u003C/ul>\u003Chr>\u003Ch2 id=\"conclusion\">Conclusion\u003C/h2>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog/blob/main/drafts/scaling-pipeline-from-5-to-50-artists/index.md?ref=blog.cg-wire.com#conclusion\">\u003C/a>\u003C/p>\u003Cp>Scaling an animation studio isn't just about hiring more artists: \u003Cstrong>you need to design a system that lets more artists succeed\u003C/strong> without stepping on each other.\u003C/p>\u003Cp>Decision-making needs layers. Assets need structure. Tasks need visibility. Feedback needs process. Infrastructure needs ownership. What once lived in conversations and shared intuition must evolve into documented systems and clearly defined responsibilities. 
Each of these systems reinforces the others, and together they support your studio's growth.\u003C/p>\u003Cp>If you want to scale smoothly without sacrificing quality or culture, you need tools that support this structure. That's where Kitsu comes in. Built specifically for animation and VFX studios, Kitsu helps you centralize tracking, manage assets, structure reviews, and maintain visibility across departments in one place. Scale with confidence with the right systems!\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord 
Community\u003C/a>\u003C/div>",
Discover best practices for team structure, asset management, production tracking, review workflows, and pipeline infrastructure.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":121,"name":122,"slug":123,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":124},"https://blog.cg-wire.com/scaling-animation-studio-systems/","\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@ooneiroslyl?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">ooneiroslyl\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/scaling-animation-studio-systems",{"title":161},"scaling-animation-studio-systems","posts/scaling-animation-studio-systems",[181],{"id":121,"name":122,"slug":123,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":124},"o41IfFxYbK3eKApyX1aZhkYy2fbfTNBeyfC0vXH1V7s",{"id":184,"title":185,"authors":186,"body":7,"description":7,"extension":8,"html":188,"meta":189,"navigation":12,"path":200,"published_at":194,"seo":201,"slug":202,"stem":203,"tags":204,"__hash__":206,"uuid":190,"comment_id":191,"feature_image":192,"featured":29,"visibility":30,"created_at":193,"updated_at":194,"custom_excerpt":195,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":196,"primary_tag":197,"url":198,"excerpt":195,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":199},"ghost/posts:kitsu-webhooks-pipeline-automation.json","Using Kitsu Webhooks to Trigger Pipeline Actions",[187],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">⚡\u003C/div>\u003Cdiv class=\"kg-callout-text\">Turn production events into instant pipeline actions with Kitsu webhooks.\u003C/div>\u003C/div>\u003Cp>As a studio grows, the cracks in a manual pipeline get louder: an artist publishes 
an asset, a supervisor approves a shot, a task flips to \u003Cem>Done\u003C/em>, but somewhere down the line, another tool is still waiting to be told. Those delays add up.\u003C/p>\u003Cp>Kitsu's Event API changes the game by broadcasting what's happening in production the moment it happens. No polling, no guesswork. Just real-time signals you can act upon.\u003C/p>\u003Cp>With webhooks, you can trigger automated actions the instant production data changes, like \u003Ca href=\"https://blog.cg-wire.com/blender-programmatic-rendering/\">launching renders\u003C/a>, syncing tracking tools, notifying teams, or updating downstream systems without human hand-offs.\u003C/p>\u003Cp>In this article, we'll break down how to set them up and put them to work, with a practical, studio-tested example you can drop into a real pipeline.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-blue\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example integration showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blog-tutorials/tree/main/kitsu-webhooks%20?ref=blog.cg-wire.com\">https://github.com/cgwire/blog-tutorials/tree/main/kitsu-webhooks%20\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"why-webhooks\">Why Webhooks\u003C/h2>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/02/data-src-image-9c7a79f2-b129-45df-bea5-52e3d0e07988.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"900\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/02/data-src-image-9c7a79f2-b129-45df-bea5-52e3d0e07988.png 600w, 
https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/02/data-src-image-9c7a79f2-b129-45df-bea5-52e3d0e07988.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/02/data-src-image-9c7a79f2-b129-45df-bea5-52e3d0e07988.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Polling the API every few minutes is like asking production for updates by shouting across the floor: it's slow, noisy, and easy to miss at the exact moment something matters.\u003C/p>\u003Cp>Webhooks flip that model: instead of checking whether Kitsu changed, Kitsu tells your pipeline immediately when it does.\u003C/p>\u003Cp>This brings several production benefits in practice: a modeler creates a new prop in Kitsu, and within seconds, your asset build system spins up the correct directory structure on the server, registers the asset in your DCC tools, and makes it visible to layout. No artist has to copy a name or click a button.\u003C/p>\u003Cp>Later in the schedule, a lighting task moves to Done. That single status change can trigger your render management system to submit the shot automatically, using the latest approved files and the correct render settings for the show. By the time anyone notices the task is finished, frames are already rendering.\u003C/p>\u003Cp>When an artist publishes a file, the webhook can push that version straight into your review stack. The media is transcoded, uploaded, and attached to the correct shot before the supervisor opens their inbox. Reviews happen sooner, notes come back faster, and work keeps flowing instead of waiting for someone to remember the next step.\u003C/p>\u003Cp>This is what webhooks buy you: production data turning directly into action. 
Fewer hand-offs, tighter feedback loops, and a pipeline that reacts at the same speed your artists work.\u003C/p>\u003Chr>\u003Ch2 id=\"available-events\">Available events\u003C/h2>\u003Cp>Kitsu emits events for all production actions covered by \u003Ca href=\"https://gazu.cg-wire.com/data?ref=blog.cg-wire.com\">available data models\u003C/a>:\u003C/p>\u003Cul>\u003Cli>Asset creation and updates\u003C/li>\u003Cli>Shot creation and updates\u003C/li>\u003Cli>Task status changes\u003C/li>\u003Cli>Preview file creation and publication\u003C/li>\u003Cli>People management\u003C/li>\u003Cli>Organization changes\u003C/li>\u003Cli>Shot and sequence updates\u003C/li>\u003C/ul>\u003Cp>Each event carries structured data (IDs, timestamps, user information) so you can precisely identify what changed and react accordingly: a real-time production log you can subscribe to!\u003C/p>\u003Chr>\u003Ch2 id=\"1-create-an-event-listener\">1. Create an event listener\u003C/h2>\u003Cp>The first step is to register an event listener using the Kitsu Python client (\u003Ccode>gazu\u003C/code>). This listener acts like a webhook endpoint: it waits for events and calls your callback function when they occur.\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">import gazu\n\ngazu.set_host(\"http://localhost/api\")\ngazu.set_event_host(\"http://localhost/api\")\nuser = gazu.log_in(\"admin@example.com\", \"mysecretpassword\")\n\ndef my_callback(data):\n    print(\"Asset created %s\" % data[\"asset_id\"])\n\nevent_client = gazu.events.init()\ngazu.events.add_listener(event_client, \"asset:new\", my_callback)\ngazu.events.run_client(event_client)\n\u003C/code>\u003C/pre>\u003Cp>First, we import Gazu, the official Python client for Kitsu, and configure it to talk to a Kitsu server running locally. 
Both \u003Ccode>set_host\u003C/code> and \u003Ccode>set_event_host\u003C/code> point to the same API URL: the first is used for standard REST calls, while the second is specifically for the event (WebSocket) endpoint. In production, it's recommended to run the two in separate threads, because listening to events is blocking. For the sake of simplicity, this tutorial does everything in a single script.\u003C/p>\u003Cp>Next, we authenticate as a user. Calling \u003Ccode>gazu.log_in\u003C/code> logs in with the provided credentials and establishes a session so the client is authorized to receive events from Kitsu.\u003C/p>\u003Cp>The \u003Ccode>my_callback\u003C/code> function defines how your pipeline reacts when an event is received. It takes the event payload as input and, in this case, simply prints the ID of the newly created asset. In a mid-size animation studio, this callback could, for example, trigger a script that creates a standardized directory structure on the file server whenever a new asset is added in Kitsu. Artists no longer need to set this up manually, and naming conventions stay consistent.\u003C/p>\u003Cp>After that, the script initializes an event client with \u003Ccode>gazu.events.init()\u003C/code>. This client maintains a persistent connection to Kitsu's event system.\u003C/p>\u003Cp>The call to \u003Ccode>gazu.events.add_listener\u003C/code> registers the callback function for a specific event type: \u003Ccode>\"asset:new\"\u003C/code>. This tells Gazu, \"Whenever Kitsu emits an event indicating that a new asset was created, call \u003Ccode>my_callback\u003C/code> with the event data.\"\u003C/p>\u003Cp>Finally, \u003Ccode>gazu.events.run_client(event_client)\u003C/code> starts the event loop. From this point on, the script blocks and listens continuously via a WebSocket connection. 
As soon as someone creates an asset in Kitsu, Kitsu emits an \u003Ccode>asset:new\u003C/code> event, Gazu receives it, and \u003Ccode>my_callback\u003C/code> is executed immediately.\u003C/p>\u003Chr>\u003Ch2 id=\"2-send-test-events\">2. Send test events\u003C/h2>\u003Cp>To validate your setup, you need to generate real events. The easiest way is to perform standard API actions that you already use in production. For example, by creating an asset programmatically:\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">import gazu\n\ngazu.set_host(\"http://localhost/api\")\nuser = gazu.log_in(\"admin@example.com\", \"mysecretpassword\")\n\nprojects = gazu.project.all_projects()\nproject = projects[0]\n\nasset_types = gazu.asset.all_asset_types()\nasset_type = asset_types[0]\n\nasset = gazu.asset.new_asset(\n    project,\n    asset_type,\n    \"My new asset\",\n    \"My asset description\"\n)\n\u003C/code>\u003C/pre>\u003Cp>After authentication, we retrieve the list of all projects visible to the logged-in user by calling \u003Ccode>gazu.project.all_projects()\u003C/code>. From that list, we select the first project. In a real production tool, you'd usually look up a specific project by name or ID, but this keeps the example simple.\u003C/p>\u003Cp>The same pattern is used for asset types. The script queries all available asset types, then picks the first one. Asset types define what kind of asset is being created (character, prop, environment, and so on), and Kitsu requires one to be specified when creating a new asset.\u003C/p>\u003Cp>With a project and an asset type in hand, we create a new asset by calling \u003Ccode>gazu.asset.new_asset\u003C/code>. The function takes the target project, the asset type, a name, and a description. When this call succeeds, Kitsu immediately creates the asset in its database and returns the newly created asset object.\u003C/p>\u003Cp>At this point, the asset exists in Kitsu exactly as if it had been created through the web interface. 
This action also emits an \u003Ccode>asset:new\u003C/code> event, allowing the rest of your pipeline to react automatically.\u003C/p>\u003Cp>Before rolling this out studio-wide, a pipeline TD can create assets in a staging project to confirm that the event triggers downstream automation without touching real production data.\u003C/p>\u003Chr>\u003Ch2 id=\"3-react-to-events-with-callbacks\">3. React to events with callbacks\u003C/h2>\u003Cp>Callbacks are the point where Kitsu events turn into concrete pipeline actions. When a callback is executed, it receives a payload describing exactly what changed: an asset was created, a task moved to a new status, or a file was published. That payload becomes your entry point for driving automation.\u003C/p>\u003Cp>A common first step inside a callback is to use the IDs in the event data to pull full context from Kitsu. For example, when you receive a task update event, you can fetch the complete task, the linked shot, and the associated project to understand where in the show this change happened and what rules should apply.\u003C/p>\u003Cp>From there, callbacks typically perform side effects that would otherwise require manual intervention. An asset creation event could, for example, result in a standardized folder tree being created on disk. A file publish event can push media into your review system, attach metadata, and make it visible to supervisors immediately.\u003C/p>\u003Cp>The key idea is that callbacks let production state drive behavior. Instead of people reacting to updates, your pipeline does, consistently and instantly, using the same rules every time.\u003C/p>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog-tutorials/tree/main/kitsu-webhooks?ref=blog.cg-wire.com\" rel=\"noreferrer\">Fork our example GitHub repository\u003C/a> to try it out for yourself.\u003C/p>\u003Chr>\u003Ch2 id=\"4-search-events\">4. Search events\u003C/h2>\u003Cp>Live events are only half the story. 
Kitsu also keeps a record of past events, which gives you a reliable paper trail of what actually happened in production. When something goes wrong or when you need to prove that something worked, this event history is an essential debugging tool.\u003C/p>\u003Cp>Through the API, you can query recent events and filter them by time range or event type. Pulling the last hundred events is often enough to get immediate context after a failure. Narrowing the query to a specific date range lets you inspect what happened during a particular shift or overnight run. Filtering to file-related events is especially useful when tracking publishes and media ingestion issues.\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">events = gazu.client.get(\"data/events/last?page_size=100\")\nevents = gazu.client.get(\"data/events/last?page_size=100&amp;before=2019-02-01\")\nevents = gazu.client.get(\"data/events/last?page_size=100&amp;before=2019-02-01&amp;after=2019-01-01\")\nevents = gazu.client.get(\"data/events/last?page_size=100&amp;only_files=true\")\n\u003C/code>\u003C/pre>\u003Cp>In practice, this is how you reconstruct a broken automation. Imagine a publishing script fails sometime during the night, and the morning team finds missing media in the review system. Instead of asking artists when they published or digging through logs across multiple machines, you can query Kitsu for all file events from the previous day. That gives you an exact sequence of publishes, timestamps, users, and linked entities.\u003C/p>\u003Cp>You can also keep track of specific events in your pipeline for productivity reports. For example, you could compile the activity log of your animation team to know who did what.\u003C/p>\u003Chr>\u003Ch2 id=\"conclusion\">Conclusion\u003C/h2>\u003Cp>Kitsu API events give you a clean, reliable way to build reactive pipelines. 
By listening to production changes instead of polling for them, you reduce latency, eliminate manual steps, and make your studio more resilient as it scales.\u003C/p>\u003Cp>Of course, webhooks only go as far as your knowledge of Kitsu scripting, so make sure to have a look at more technical tutorials from our blog to get a better idea of what you can build!\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":190,"comment_id":191,"feature_image":192,"featured":29,"visibility":30,"created_at":193,"updated_at":194,"custom_excerpt":195,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":196,"primary_tag":197,"url":198,"excerpt":195,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":199},"0ba0384a-27a2-4189-ac13-8aca0933041c","6980b67a4304f600017051ef","https://images.unsplash.com/photo-1644088379091-d574269d422f?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDJ8fGNvbm5lY3Rpb25zfGVufDB8fHx8MTc3MDA0NTM2OXww&ixlib=rb-4.1.0&q=80&w=2000","2026-02-02T15:36:42.000+01:00","2026-02-23T10:00:39.000+01:00","Learn how to use the Kitsu 
Event API and webhooks to build reactive animation pipelines. Trigger automation instantly on asset creation, task updates, and publishes without polling or manual hand-offs.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/kitsu-webhooks-pipeline-automation/","\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@choys_?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Conny Schneider\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/kitsu-webhooks-pipeline-automation",{"title":185},"kitsu-webhooks-pipeline-automation","posts/kitsu-webhooks-pipeline-automation",[205],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"_WyfIj_ToGV8wcQxWVnE_qvsU0aULS6YDQRThnH1vO0",{"id":208,"title":209,"authors":210,"body":7,"description":7,"extension":8,"html":212,"meta":213,"navigation":12,"path":224,"published_at":225,"seo":226,"slug":227,"stem":228,"tags":229,"__hash__":231,"uuid":214,"comment_id":215,"feature_image":216,"featured":29,"visibility":30,"created_at":217,"updated_at":218,"custom_excerpt":219,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":220,"primary_tag":221,"url":222,"excerpt":219,"reading_time":150,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":223},"ghost/posts:automating-kitsu-production-onboarding.json","Scaling Production Setup in Kitsu with CSV Imports (2026)",[211],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">🚀\u003C/div>\u003Cdiv class=\"kg-callout-text\">Spin up new Kitsu productions in minutes by importing clean studio data automatically.\u003C/div>\u003C/div>\u003Cp>If spinning up a new show or scene in Kitsu means 
clicking through forms, recreating asset lists, and assigning artists one at a time, \u003Ca href=\"https://blog.cg-wire.com/client-communication-animation/\">your onboarding is lacking\u003C/a>.\u003C/p>\u003Cp>That manual overhead compounds fast. Every new production repeats the same setup ritual, every crew onboarding becomes a copy-paste marathon, and every step adds another chance for something to break. At studio scale, that friction costs real time, real money, and real sanity.\u003C/p>\u003Cp>The fastest studios don't just use Kitsu: they wire it into their pipeline. They treat it like a production database, feeding it clean, structured studio data so new shows, shots, or departments come online in minutes, not days. Pipelines are cloned, teams are attached automatically, and Kitsu becomes an engine instead of a bottleneck.\u003C/p>\u003Cp>In this article, we'll break down a practical, production-tested workflow for doing exactly that, using CSV files and the Kitsu Python API (Gazu) to automate production onboarding and make setup work disappear.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-blue\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example integration showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blog-tutorials/tree/main/import-spreadsheet-to-kitsu?ref=blog.cg-wire.com\">https://github.com/cgwire/blog-tutorials/tree/main/import-spreadsheet-to-kitsu\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"what-you-can-import\">What You Can Import\u003C/h2>\u003Cp>In a real production, almost all of the data falls into a few repeatable buckets that are perfect for automation:\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Artists\u003C/strong> - Your crew already exists somewhere 
else: an HR sheet, a payroll export, a Notion table. That data usually includes names, emails, and roles like \u003Cem>Animator\u003C/em>, \u003Cem>TD\u003C/em>, or \u003Cem>Supervisor\u003C/em>. Instead of recreating users by hand in Kitsu, you can import that list in one pass and have your team ready to go before day one.\u003C/li>\u003Cli>\u003Cstrong>Assets\u003C/strong> - Characters, props, environments ... anything that follows a naming convention is easy to automate. A CSV with entries like \u003Ccode>CHAR_RobotA\u003C/code>, \u003Ccode>PROP_Sword_01\u003C/code>, or \u003Ccode>ENV_CityBlock\u003C/code> can become a fully populated asset list in Kitsu in seconds, organized exactly the way your pipeline expects.\u003C/li>\u003Cli>\u003Cstrong>Tasks\u003C/strong> - Tasks are also where manual setup really hurts. Modeling, Rigging, Surfacing, Animation... these task types rarely change from show to show. By importing tasks in bulk, you can automatically attach the right task stack to every asset and even pre-assign artists or departments, instead of clicking through hundreds of rows in the UI.\u003C/li>\u003C/ul>\u003Cp>Beyond the basics, you can import \u003Ca href=\"https://gazu.cg-wire.com/data?ref=blog.cg-wire.com\">any production-shaped data Kitsu understands: sequences, shots, episodes, or even entire productions\u003C/a>. This makes it trivial to duplicate a previous show's structure or spin up a new season with the same layout and naming rules.\u003C/p>\u003Cp>Most studios already store all of this in spreadsheets. 
Treat those spreadsheets as data sources, feed them directly into Kitsu, and let automation do the setup work.\u003C/p>\u003Cp>While Kitsu's UI supports basic spreadsheet imports, scripting takes it much further: with the Kitsu Python API (Gazu), you can chain automations like syncing tasks from Notion, mirroring your asset tracker, or regenerating task lists whenever the schedule changes.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/02/data-src-image-82a7e584-d2c0-4457-9ea4-4e97c794b6ff.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"600\" height=\"611\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/02/data-src-image-82a7e584-d2c0-4457-9ea4-4e97c794b6ff.png 600w\">\u003C/figure>\u003Chr>\u003Ch2 id=\"1-csv-parser\">1. CSV Parser\u003C/h2>\u003Cp>The first step is to standardize how you read studio data. CSV is ideal: it is easy for production to edit and easy for scripts to parse.\u003C/p>\u003Cp>In this tutorial, we'll focus on the artist data model for the sake of simplicity, but we could do something similar with assets stored in Google Drive or tasks in Trello.\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">from pathlib import Path\nfrom typing import Dict, List\n\nimport pandas as pd\n\n\ndef load_csv(file_path: Path) -&gt; pd.DataFrame:\n    \"\"\"Load a CSV file into a pandas DataFrame.\"\"\"\n    return pd.read_csv(file_path)\n\n\ndef parse_artists(df: pd.DataFrame) -&gt; List[Dict]:\n    \"\"\"\n    Expected columns:\n        - email\n        - first_name\n        - last_name\n        - role\n    \"\"\"\n    return df.to_dict(orient=\"records\")\n\u003C/code>\u003C/pre>\u003Cp>\u003Ccode>load_csv\u003C/code> is the entry point that turns a raw CSV file into something Python can work with. 
It reads the file from disk using pandas and returns a DataFrame, giving you a structured, table-like representation of the spreadsheet that can be filtered, validated, or transformed before anything is sent to Kitsu.\u003C/p>\u003Cp>\u003Ccode>parse_artists\u003C/code> takes a DataFrame that represents artist data and converts each row into a dictionary containing an artist's email, name, and role. By returning a list of these dictionaries, it produces API-ready data that can be passed directly to Kitsu or Gazu to create users in bulk instead of adding artists one by one.\u003C/p>\u003Cp>A TV animation studio exporting crew lists from Google Sheets can simply save them as CSV, for example. Production keeps ownership of the data, while TDs automate ingestion without asking for format changes every show.\u003C/p>\u003Chr>\u003Ch2 id=\"2-kitsu-auth\">2. Kitsu Auth\u003C/h2>\u003Cp>Before uploading anything, you need to authenticate against your Kitsu instance:\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">gazu.set_host(\"http://localhost/api\")\nuser = gazu.log_in(\"admin@example.com\", \"mysecretpassword\")\n\u003C/code>\u003C/pre>\u003Cp>In practice, studios often use a dedicated \u003Cstrong>pipeline or admin account\u003C/strong> for automation. This avoids permission issues and keeps audit logs clean when scripts create or modify data.\u003C/p>\u003Cp>For local testing, it's advised to \u003Ca href=\"https://blog.cg-wire.com/dcc-integration-blender-kitsu/\">use the \u003Ccode>kitsu-docker\u003C/code> install\u003C/a>.\u003C/p>\u003Chr>\u003Ch2 id=\"3-loading-data\">3. Loading Data\u003C/h2>\u003Cp>Artists are usually the first bottleneck during onboarding. You need to gather emails, send invites, assign them to tasks... 
automating their creation removes hours of manual work for production coordinators.\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">def upload_artists(artists: List[Dict]):\n    \"\"\"\n    Create artists if they do not already exist.\n    \"\"\"\n    existing_users = {\n        user[\"email\"]: user\n        for user in gazu.person.all_persons()\n    }\n\n    for artist in artists:\n        if artist[\"email\"] in existing_users:\n            print(f\"Artist exists: {artist['email']}\")\n            continue\n\n        gazu.person.new_person(\n            artist[\"first_name\"],\n            artist[\"last_name\"],\n            artist[\"email\"],\n        )\n        print(f\"Created artist: {artist['email']}\")\n\u003C/code>\u003C/pre>\u003Cp>This function takes a list of artist dictionaries and syncs them into Kitsu while avoiding duplicates.\u003C/p>\u003Cp>It starts by querying Kitsu for all existing users and building a lookup table keyed by email, which makes it fast to check whether an artist already exists.\u003C/p>\u003Cp>It then iterates over the incoming artist data and, for each entry, compares the email against that lookup: if a match is found, the script skips creation and logs that the artist already exists. If no match is found, it creates a new user in Kitsu using the artist's name and email via the Gazu API, then prints a confirmation.\u003C/p>\u003Cp>The result is an idempotent import step you can safely re-run—new artists are added, existing ones are left untouched.\u003C/p>\u003Cp>On a feature film ramp-up, a studio could import hundreds of artists from HR data in under a minute. Late hires could be added by simply updating the CSV and rerunning the script without duplicating users or manual checks.\u003C/p>\u003Chr>\u003Ch2 id=\"4-tying-it-all-together\">4. 
Tying It All Together\u003C/h2>\u003Cp>The main entry point ties everything together:\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">if __name__ == \"__main__\":\n    gazu.set_host(\"http://localhost/api\")\n    user = gazu.log_in(\"admin@example.com\", \"mysecretpassword\")\n\n    artists_df = load_csv(Path(\"artists.csv\"))\n\n    artists = parse_artists(artists_df)\n\n    upload_artists(artists)\n\u003C/code>\u003C/pre>\u003Cp>This block only runs when the file is executed directly, not when it's imported by another module.\u003C/p>\u003Cp>After authentication, it loads the \u003Ccode>artists.csv\u003C/code> file into a pandas DataFrame and converts those rows into a list of artist dictionaries using \u003Ccode>parse_artists\u003C/code>.\u003C/p>\u003Cp>Finally, it calls \u003Ccode>upload_artists\u003C/code>, which is responsible for iterating over that prepared data and creating the artist accounts in Kitsu, completing the automated onboarding step without any manual UI work.\u003C/p>\u003Cp>In practice, studios version these scripts alongside their pipeline tools. 
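\u003C/p>\u003Cp>One refinement worth adding before the upload step is a validation pass on the DataFrame, so a malformed spreadsheet fails loudly instead of half-importing. Here is a minimal sketch (the column names match the parser above; \u003Ccode>validate_artists\u003C/code> is our own helper, not part of Gazu):\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">import pandas as pd\n\nREQUIRED_COLUMNS = {\"email\", \"first_name\", \"last_name\", \"role\"}\n\n\ndef validate_artists(df: pd.DataFrame) -&gt; pd.DataFrame:\n    \"\"\"Fail fast if the spreadsheet is missing expected data.\"\"\"\n    missing = REQUIRED_COLUMNS - set(df.columns)\n    if missing:\n        raise ValueError(f\"CSV is missing columns: {sorted(missing)}\")\n    # Email is the unique key used for duplicate detection,\n    # so every row must have one.\n    if df[\"email\"].isna().any():\n        raise ValueError(\"Every artist row needs an email\")\n    return df\n\u003C/code>\u003C/pre>\u003Cp>Calling it between \u003Ccode>load_csv\u003C/code> and \u003Ccode>parse_artists\u003C/code> keeps bad rows out of Kitsu entirely.\u003C/p>\u003Cp>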
A new show becomes a repeatable command, not a checklist.\u003C/p>\u003Cp>Now, you can log back into your Kitsu dashboard and see the final result:\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/02/data-src-image-9f641c9c-07b5-4154-9c42-45279f6a9d20.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"900\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/02/data-src-image-9f641c9c-07b5-4154-9c42-45279f6a9d20.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/02/data-src-image-9f641c9c-07b5-4154-9c42-45279f6a9d20.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/02/data-src-image-9f641c9c-07b5-4154-9c42-45279f6a9d20.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog-tutorials/tree/main/import-spreadsheet-to-kitsu?ref=blog.cg-wire.com\">Have a look at our corresponding Github repository\u003C/a> for a working example you can easily fork to fit your needs!\u003C/p>\u003Chr>\u003Ch2 id=\"conclusion\">Conclusion\u003C/h2>\u003Cp>At its best, Kitsu automation allows technical directors to reclaim control over how productions are born. When your pipeline can create itself from clean data, onboarding stops being a chore. By importing artists, assets, and tasks directly into Kitsu, you eliminate redundant work, reduce human error, and make production onboarding predictable. 
This approach scales from small teams to multi-show studios.\u003C/p>\u003Cp>Here are some additional features you could add to make your import pipeline more interesting:\u003C/p>\u003Cul>\u003Cli>automatically assign tasks to artists based on their role\u003C/li>\u003Cli>populate departments for production tracking\u003C/li>\u003Cli>generate starting estimates and individual department calendars based on budget constraints\u003C/li>\u003Cli>turn a script into a breakdown list for each shot and use it to pre-generate assets\u003C/li>\u003C/ul>\u003Cp>The list can go on, but you just have to start small!\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 
😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":214,"comment_id":215,"feature_image":216,"featured":29,"visibility":30,"created_at":217,"updated_at":218,"custom_excerpt":219,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":220,"primary_tag":221,"url":222,"excerpt":219,"reading_time":150,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":223},"89dfbefe-bab7-4dbd-a317-2b4f62de9543","6980b6784304f600017051e9","https://images.unsplash.com/photo-1504868584819-f8e8b4b6d7e3?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDN8fHNwcmVhZHNoZWV0fGVufDB8fHx8MTc3MDA0NDU0MXww&ixlib=rb-4.1.0&q=80&w=2000","2026-02-02T15:36:40.000+01:00","2026-02-20T06:03:58.000+01:00","Learn how to automate Kitsu production onboarding using CSV files and the Gazu Python API. 
Import artists, assets, and tasks in bulk to eliminate manual setup and create repeatable, scalable studio pipelines.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":121,"name":122,"slug":123,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":124},"https://blog.cg-wire.com/automating-kitsu-production-onboarding/","\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@goumbik?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Lukas Blazek\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/automating-kitsu-production-onboarding","2026-02-16T10:00:37.000+01:00",{"title":209},"automating-kitsu-production-onboarding","posts/automating-kitsu-production-onboarding",[230],{"id":121,"name":122,"slug":123,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":124},"mlgaejAh-K47aV92LIYK0qZr62tuRXQbi8CwKSXhGTg",{"id":233,"title":234,"authors":235,"body":7,"description":7,"extension":8,"html":237,"meta":238,"navigation":12,"path":250,"published_at":251,"seo":252,"slug":253,"stem":254,"tags":255,"__hash__":257,"uuid":239,"comment_id":240,"feature_image":241,"featured":29,"visibility":30,"created_at":242,"updated_at":243,"custom_excerpt":244,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":245,"primary_tag":246,"url":247,"excerpt":244,"reading_time":248,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":249},"ghost/posts:flamenco-without-nas-kitsu.json","NAS-Free Flamenco Rendering with Kitsu Integration (2026)",[236],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">🧠\u003C/div>\u003Cdiv class=\"kg-callout-text\">Run Flamenco without shared storage by letting Kitsu drive render context and files.\u003C/div>\u003C/div>\u003Cp>You want to use 
Flamenco, but you don't want to buy a NAS.\u003C/p>\u003Cp>If you're a solo artist or a micro animation studio, that's a completely rational decision: shared storage can be expensive, adds maintenance overhead, and solves problems you may not actually have until you try to run a render farm.\u003C/p>\u003Cp>\u003Ca href=\"https://blog.cg-wire.com/self-hosted-blender-render-farm\">Flamenco assumes a traditional studio setup\u003C/a>: shared files, shared paths, instant access. Without a NAS, that assumption breaks down. Flamenco has no concept of production context, so it doesn't know which shot you want rendered, which version is approved, or where the job files live. And without that knowledge, it can't safely operate in a NAS-less environment.\u003C/p>\u003Cp>That's where Kitsu comes in.\u003C/p>\u003Cp>Kitsu already knows what Flamenco doesn't: tasks, shots, versions, approvals. By treating Kitsu as asynchronous network storage, you can move data to a Flamenco manager when it's needed, render, and avoid a hard dependency on shared storage entirely.\u003C/p>\u003Cp>Flamenco doesn't support this workflow out of the box. To make it work, you need to build a custom Flamenco job type that pulls context and files from Kitsu, stages them locally, and controls when and how renders run. 
This article shows you how to build exactly that.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-blue\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example integration showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blog-tutorials/tree/main/flamenco-kitsu-render-farm?ref=blog.cg-wire.com\">https://github.com/cgwire/blog-tutorials/tree/main/flamenco-kitsu-render-farm\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"high-level-architecture\">High-level architecture\u003C/h2>\u003Cp>Our setup is built around a simple idea: Flamenco does the rendering, Kitsu provides the truth.\u003C/p>\u003Cpre>\u003Ccode>Kitsu\n  ↑↓ (REST API)\nCustom Flamenco Job Type\n  ├── Pre-task Python (fetch task data &amp; files)\n  ├── Blender render tasks (Flamenco-managed)\n  └── Post-task Python (upload renders back to Kitsu)\nFlamenco Manager\n  ↓\nFlamenco Worker(s)\n\u003C/code>\u003C/pre>\u003Cp>Flamenco runs exactly as intended, with a Manager scheduling work and Workers executing Blender tasks. What changes is how jobs are defined. Instead of pointing Flamenco at a shared folder and hoping every machine sees the same files, we introduce a custom Flamenco job type that understands production data and knows how to talk to Kitsu.\u003C/p>\u003Cp>Kitsu sits outside the farm and exposes everything through its REST API: shots, tasks, versions, and file locations. When a render job is started—either manually or through automation—the custom job type queries Kitsu to figure out exactly what should be rendered. 
For example, it might ask: \"Give me the latest approved lighting version for shot 020.\" Kitsu answers, and that answer becomes the render job.\u003C/p>\u003Cp>On the Flamenco side, the Manager doesn't poll Kitsu or track production state. It simply runs the job definition it's given. The custom job type uses a small Python pre-task to fetch metadata and files from Kitsu, stage them locally in a job folder, and then hand them off to standard Blender render tasks that Flamenco already knows how to manage efficiently.\u003C/p>\u003Cp>When rendering is done, a post-task Python step pushes the results back to Kitsu to upload rendered frames, create a new version, or update task status. At no point do workers need shared storage or permanent access to the same filesystem. Each worker pulls what it needs, renders locally, and pushes results back asynchronously.\u003C/p>\u003Chr>\u003Ch2 id=\"1-creating-a-new-job-type\">1. Creating a new job type\u003C/h2>\u003Cp>A Flamenco job type defines how a job turns into actual work. It's the translation layer between \"I want to render this\" and the concrete tasks that Flamenco schedules across the farm. Conceptually, a job type declares what information it needs and how to compile that information into tasks.\u003C/p>\u003Cp>At its simplest, a job type describes a label and a set of settings, then provides a function that receives those settings and builds the job. In code, it looks something like this:\u003C/p>\u003Cpre>\u003Ccode class=\"language-js\">const JOB_TYPE = {\n  label: \"Kitsu Render\",\n  settings: [\n    // { key: \"message\", type: \"string\", required: true },\n    // { key: \"sleep_duration_seconds\", type: \"int32\", default: 1 },\n  ],\n};\n\nfunction compileJob(job) {\n  const settings = job.settings;\n}\n\u003C/code>\u003C/pre>\u003Cp>This code defines the skeleton of a custom Flamenco job type. 
The \u003Ccode>JOB_TYPE\u003C/code> object declares how the job appears in Flamenco: its human-readable label and the settings it expects when a job is created.\u003C/p>\u003Cp>Those settings act as typed inputs, with validation handled by Flamenco: in this example, a required string and an optional integer with a default value.\u003C/p>\u003Cp>The \u003Ccode>compileJob\u003C/code> function is where the job is turned into executable tasks; it receives the submitted job, reads the resolved settings, and would normally use them to generate render, pre-task, and post-task steps. As written, the function doesn't do any work yet, but it establishes the entry point where production logic will live.\u003C/p>\u003Cp>In a real production setup, instead of a generic message, you pass in a Kitsu task ID, a shot name, the desired output location, or even the Blender version that should be used.\u003C/p>\u003Cp>Where this logic lives matters. Custom Flamenco job types run on the \u003Cstrong>Flamenco Manager\u003C/strong>, not on the workers. On disk, they sit alongside the manager program, for example:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">$ flamenco\n└── flamenco-manager\n└── scripts/\n    └── kitsu-render.js\n\u003C/code>\u003C/pre>\u003Cp>In practice, studios treat these job type scripts as part of their pipeline codebase. They live in version control, evolve over time, and get deployed together with Flamenco updates. That way, you can change how jobs are built and how Kitsu is queried without redeploying or reconfiguring every worker machine on the farm.\u003C/p>\u003Cp>For worker scripts called by custom job types as commands, we put them next to our flamenco-worker program:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">$ flamenco\n└── flamenco-worker\n└── kitsu-render.py\n\u003C/code>\u003C/pre>\u003Chr>\u003Ch2 id=\"2-adding-tasks\">2. 
Adding tasks\u003C/h2>\u003Cp>Inside \u003Ccode>compileJob\u003C/code>, you explicitly define the tasks that make up the job. This is where a high-level \"render this shot\" request turns into concrete, schedulable work that Flamenco can hand off to workers.\u003C/p>\u003Cp>The example below shows the simplest possible task. An \u003Ccode>echo\u003C/code> task is created using Flamenco's task authoring API, given a category, and then assigned a single command. That command passes the resolved job setting into the task, which will simply print the message when it runs. Finally, the task is added to the job so the Manager can schedule it.\u003C/p>\u003Cpre>\u003Ccode class=\"language-js\">const echoTask = author.Task(\"echo\", \"misc\");\nechoTask.addCommand(\n  author.Command(\"echo\", {\n    message: settings.message,\n  }),\n);\njob.addTask(echoTask);\n\u003C/code>\u003C/pre>\u003Cp>While this task doesn't do anything useful by itself, the pattern is the important part. The same mechanism is used to run Python scripts, launch Blender in background mode for rendering, or perform validation checks before a task is marked complete. Each task is designed to be atomic and restartable, which means if a worker crashes or a render fails at 3 a.m., Flamenco can retry just that task without derailing the entire job. That reliability is what makes this approach scale when you're running hundreds of shots overnight.\u003C/p>\u003Cp>Now, let's get into the meaty part of the tutorial and code a task to download assets from Kitsu, render with Blender, and re-upload the result to Kitsu.\u003C/p>\u003Chr>\u003Ch2 id=\"3-subcommand-1-downloading-assets-from-kitsu\">3. Subcommand 1: Downloading assets from Kitsu\u003C/h2>\u003Cp>The first real task in our Kitsu-driven job is to pull the exact data we need from Kitsu and set up a clean local workspace on the worker. 
Before Blender ever starts, the worker needs to know which task it's rendering and where the job files live.\u003C/p>\u003Cp>Instead of writing the logic in JavaScript, we use the much simpler gazu Python SDK to create a \u003Ccode>kitsu-render\u003C/code> script, then call it from JavaScript. If you don't have Python installed in your worker environment, consider \u003Ca href="https://blog.cg-wire.com/kitsu-cli-single-binary/">creating a binary executable from the Python script\u003C/a>.\u003C/p>\u003Cpre>\u003Ccode class="language-js">function compileJob(job) {\n  const settings = job.settings;\n\n  const task = author.Task("kitsu-render", "misc");\n\n  task.addCommand(\n    author.Command("exec", { exe: "python3", args: ["kitsu-render.py"] }),\n  );\n\n  job.addTask(task);\n}\n\u003C/code>\u003C/pre>\u003Cp>The Python script authenticates against the Kitsu API, looks for rendering tasks whose status is still "todo", and downloads the associated preview file containing a .blend project to render.\u003C/p>\u003Cpre>\u003Ccode class="language-python">import os\nimport gazu\n\ngazu.set_host("http://localhost/api")\nuser = gazu.log_in("admin@example.com", "mysecretpassword")\n\nprojects = gazu.project.all_projects()\nproject = projects[0]\n\ntasks = gazu.task.all_tasks_for_project(project)\n\nrendering = gazu.task.get_task_type_by_name("Rendering")\ntodo = gazu.task.get_task_status_by_name("todo")\n\nrender_tasks = [\n    t\n    for t in tasks\n    if t["task_type_id"] == rendering["id"] and t["task_status_id"] == todo["id"]\n]\n\n# Initialize before the loop so the check below works\n# even when no matching task is found.\ntask_to_render = None\nlatest_blend = None\n\nfor task in render_tasks:\n    files = gazu.files.get_all_preview_files_for_task(task)\n    if not files:\n        continue\n\n    latest = files[-1]\n    if latest["extension"] == "blend":\n        task_to_render = task\n        latest_blend = latest\n        break\n\nif task_to_render is None:\n    raise RuntimeError("No render task with a .blend preview found")\n\ntarget_path = os.path.join(\n    "/tmp", 
latest_blend[\"original_name\"] + \".\" + latest_blend[\"extension\"]\n)\n\ngazu.files.download_preview_file(latest_blend, target_path)\n\u003C/code>\u003C/pre>\u003Cp>This step is what makes a NAS-less workflow viable. Each worker pulls only the files it needs for the specific task it's running, instead of mounting or syncing an entire production tree. If the download fails, Flamenco can retry the task automatically without human intervention.\u003C/p>\u003Chr>\u003Ch2 id=\"4-subcommand-2-blender-render\">4. Subcommand 2: Blender render\u003C/h2>\u003Cp>Once the blend file to render is staged locally on the worker, we can \u003Ca href=\"https://blog.cg-wire.com/blender-programmatic-rendering/\">render it programmatically\u003C/a> with the \u003Ccode>bpy\u003C/code> library:\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">bpy.ops.wm.open_mainfile(filepath=target_path)\n\noutput_path = os.path.join(\n    \"/tmp\", latest_blend[\"name\"] + \".mp4\"\n)\n\nbpy.context.scene.render.image_settings.file_format = \"FFMPEG\"\nbpy.context.scene.render.ffmpeg.format = \"MPEG4\"\nbpy.context.scene.render.ffmpeg.codec = \"H264\"\nbpy.context.scene.render.ffmpeg.constant_rate_factor = \"HIGH\"\nbpy.context.scene.render.ffmpeg.gopsize = 12\nbpy.context.scene.render.ffmpeg.audio_codec = \"AAC\"\nbpy.context.scene.render.filepath = output_path\n\nbpy.ops.render.render(animation=True)\n\u003C/code>\u003C/pre>\u003Cp>A more advanced pipeline would leverage Flamenco's native 'blender-render' command to automatically split the frame range into smaller units of work and distribute them across available workers. If a machine drops out or a frame fails, only those frames are retried, so there's no need to restart the entire shot or build custom queue logic to handle parallelism.\u003C/p>\u003Cp>But to keep our example simple, we just render the whole video in one worker.\u003C/p>\u003Chr>\u003Ch2 id=\"5-subcommand-3-uploading-results-back-to-kitsu\">5. 
Subcommand 3: Uploading results back to Kitsu\u003C/h2>\u003Cp>The final step in the job is \u003Ca href=\"https://blog.cg-wire.com/blender-kitsu-low-res-preview/\">a post-render subcommand that pushes the render results back to Kitsu\u003C/a>. At this point, the worker has finished its frame range locally, and the farm's responsibility shifts from computation to publishing. This is where rendered output becomes visible to the rest of the production.\u003C/p>\u003Cp>The example below shows a minimal Python instruction that uploads the resulting video file to Kitsu as an attachment on the original task.\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">result = gazu.task.publish_preview(\n    task_to_render,\n    todo,\n    comment=\"rendered\",\n    preview_file_path=output_path,\n)\n\u003C/code>\u003C/pre>\u003Cp>In a real production pipeline, this step usually does more than just upload files. We can create a new version in Kitsu, update the task status to something like Done, and trigger review or notification workflows so supervisors know new output is ready. Because this logic is just Python running inside a Flamenco task, it's easy to evolve as production needs change without touching the render farm itself.\u003C/p>\u003Chr>\u003Ch2 id=\"6-triggering-the-workflow\">6. Triggering the workflow\u003C/h2>\u003Cp>Once the custom job type is in place, the workflow is triggered by submitting a job request to the Flamenco Manager. During development, this is often done manually by calling the Manager's REST API directly. It's a fast way to validate that job compilation works, settings are wired correctly, and tasks behave as expected before any automation is layered on top.\u003C/p>\u003Cp>The example below submits a job of type \u003Ccode>kitsu-render\u003C/code> to the Manager. 
Along with basic metadata for tracking and attribution, the request includes a priority value and an empty \u003Ccode>settings\u003C/code> object, which would normally carry production-specific inputs like a Kitsu production ID. When the job is accepted, the Manager invokes the custom job type, compiles tasks, and schedules them across available workers.\u003C/p>\u003Cpre>\u003Ccode class=\"language-sh\">curl -X 'POST' \\\n  'http://172.17.0.1:8080/api/v3/jobs' \\\n  -H 'accept: application/json' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\n  \"metadata\": {\n    \"project\": \"kitsu\",\n    \"user.email\": \"basunako@gmail.com\",\n    \"user.name\": \"kitsu\"\n  },\n  \"name\": \"Kitsu Render\",\n  \"priority\": 50,\n  \"settings\": {},\n  \"submitter_platform\": \"linux\",\n  \"type\": \"kitsu-render\"\n}'\n\u003C/code>\u003C/pre>\u003Cp>We can see the manager received the job request and assigned it to a worker:\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/02/data-src-image-8815284e-9d0e-49a0-bdd8-ff4ada8a8961.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"900\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/02/data-src-image-8815284e-9d0e-49a0-bdd8-ff4ada8a8961.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/02/data-src-image-8815284e-9d0e-49a0-bdd8-ff4ada8a8961.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/02/data-src-image-8815284e-9d0e-49a0-bdd8-ff4ada8a8961.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>This manual trigger is primarily a development tool. 
It lets you iterate on job logic, test edge cases, and rerun jobs without involving artists or production tools.\u003C/p>\u003Cp>In production, studios typically automate this step. A small service (often a cron job or lightweight webhook listener) periodically queries Kitsu for tasks that are ready to render, like shots that were just approved or published. When it finds one, it submits a corresponding job to the Flamenco Manager using the same API call.\u003C/p>\u003Cp>With this in place, Flamenco stops waiting for humans to push buttons and becomes a production-aware render backend, reacting automatically to changes in Kitsu and keeping the farm in sync with the state of the production.\u003C/p>\u003Chr>\u003Ch2 id="conclusion">Conclusion\u003C/h2>\u003Cp>What you've built in this article is a fundamentally different way to think about rendering in small studios.\u003C/p>\u003Cp>By using a custom Flamenco job type to pull context and data from Kitsu, stage work locally, render through Flamenco's native scheduler, and push results back asynchronously, you've removed the need for shared storage without sacrificing reliability or scale.\u003C/p>\u003Cp>Each piece has a clear responsibility: Kitsu defines what is true in production, Flamenco decides how work runs, and your custom job type is the glue that keeps them in sync. 
That separation is what makes the system resilient, debuggable, and adaptable as your pipeline grows.\u003C/p>\u003Cp>Understanding this pattern is important because it lets you build render infrastructure that matches the reality of solo artists and micro-studios.\u003C/p>\u003Cp>But don't stop here: \u003Ca href="https://github.com/cgwire/blog-tutorials/tree/main/flamenco-kitsu-render-farm?ref=blog.cg-wire.com">clone our example GitHub repository\u003C/a> for this article and start rendering today!\u003C/p>\u003Cdiv class="kg-card kg-callout-card kg-callout-card-yellow">\u003Cdiv class="kg-callout-emoji">📽️\u003C/div>\u003Cdiv class="kg-callout-text">To learn more about the animation process, \u003Ca href="https://www.cg-wire.com/community?ref=blog.cg-wire.com" rel="noreferrer">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 😊\u003C/div>\u003C/div>\u003Cdiv class="kg-card kg-button-card kg-align-center">\u003Ca href="https://www.cg-wire.com/community?ref=blog.cg-wire.com" class="kg-btn kg-btn-accent">Join Our Discord 
Community\u003C/a>\u003C/div>",{"uuid":239,"comment_id":240,"feature_image":241,"featured":29,"visibility":30,"created_at":242,"updated_at":243,"custom_excerpt":244,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":245,"primary_tag":246,"url":247,"excerpt":244,"reading_time":248,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":249},"e864ab4c-75a4-40e8-b787-3d0f5937eac3","6980b6744304f600017051e3","https://images.unsplash.com/photo-1666858452715-1399b952befb?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDR8fHJlbmRlcmluZ3xlbnwwfHx8fDE3NzAwNDMxNzB8MA&ixlib=rb-4.1.0&q=80&w=2000","2026-02-02T15:36:36.000+01:00","2026-02-20T06:04:25.000+01:00","Learn how to run Blender Flamenco without a NAS by using Kitsu as asynchronous storage. 
This guide explains custom Flamenco job types that fetch assets from Kitsu, render locally, and upload results back automatically.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/flamenco-without-nas-kitsu/",9,"\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@fachrizalm?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Fachrizal Maulana\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/flamenco-without-nas-kitsu","2026-02-09T10:00:32.000+01:00",{"title":234},"flamenco-without-nas-kitsu","posts/flamenco-without-nas-kitsu",[256],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"jc6jt91jTF5PCUsPSQcrVR_U5Arx-eKplKRNnp9hexM",{"id":259,"title":260,"authors":261,"body":7,"description":7,"extension":8,"html":263,"meta":264,"navigation":12,"path":275,"published_at":276,"seo":277,"slug":278,"stem":279,"tags":280,"__hash__":282,"uuid":265,"comment_id":266,"feature_image":267,"featured":29,"visibility":30,"created_at":268,"updated_at":269,"custom_excerpt":270,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":271,"primary_tag":272,"url":273,"excerpt":270,"reading_time":43,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":274},"ghost/posts:automated-kitsu-pdf-reports.json","Automating Kitsu Reports with Python and Gazu (2026)",[262],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📊\u003C/div>\u003Cdiv class=\"kg-callout-text\">Turn hours of manual status reporting into a fully automated Kitsu PDF in seconds.\u003C/div>\u003C/div>\u003Cp>How many hours do you spend each week pulling data and generating 
reports?\u003C/p>\u003Cp>Animation studios use Kitsu to track progress, yet we still see supervisors spend hours manually compiling that data into PDFs just to keep producers and directors in the loop. It's a massive drain on creative energy and a manual point of failure that a senior team shouldn't have to deal with. If the data already exists in our tracking software, sharing it shouldn't be a struggle.\u003C/p>\u003Cp>As a technical lead, your job is to automate mundane tasks so the artists can focus on the art. By using the Gazu Python client, we can bridge the gap between Kitsu's database and the final stakeholder report.\u003C/p>\u003Cp>Today, we're going to build a script that programmatically pulls project metrics and generates a custom PDF, turning a 2-hour manual chore into a five-second automated task.\u003C/p>\u003Chr>\u003Ch2 id="why-custom-reports">Why Custom Reports?\u003C/h2>\u003Cp>Kitsu is a lifesaver for keeping the chaos of a production organized. The built-in dashboard covers most day-to-day use cases, including multi-production analysis. 
But sometimes, \"standard\" doesn't cut it.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/02/data-src-image-4807effb-72e4-4fe8-9684-7f8a44579c42.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"900\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/02/data-src-image-4807effb-72e4-4fe8-9684-7f8a44579c42.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/02/data-src-image-4807effb-72e4-4fe8-9684-7f8a44579c42.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/02/data-src-image-4807effb-72e4-4fe8-9684-7f8a44579c42.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>For example, clients might want to feel like they're paying for a premium service. Sending them a raw software screenshot or a generic link feels a bit amateur. By using custom reports, you can deliver progress updates wrapped in your studio's branding, ensuring the presentation looks as polished as the frames you're delivering.\u003C/p>\u003Cp>Then there is the struggle of finding a producer-friendly format. A producer asks for a very specific Excel pivot table or a legacy PDF for the archives that follows a bizarre internal logic only they understand. If you need to export a filtered list of every shot in Sequence 02 that's currently \"In Progress\" but stuck with \"Overdue\" retakes, a custom report gets you that data instantly. \u003Ca href=\"https://blog.cg-wire.com/reduce-rework-animation/\">It saves you from the manual copy-pasting nightmare\u003C/a> and lets you get back to animating.\u003C/p>\u003Cp>Some studios also need custom views for advanced tracking. 
Custom data can help you spot department bottlenecks, like when the lighting team is consistently stalled because the FX cache is lagging, allowing you to solve the friction before it turns into a Friday night crunch.\u003C/p>\u003Cp>Fortunately, Kitsu is extremely easy to build upon.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-blue\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example integration showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blog-tutorials/tree/main/custom-kitsu-reports?ref=blog.cg-wire.com\">https://github.com/cgwire/blog-tutorials/tree/main/custom-kitsu-reports\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"1-kitsu-setup-authentication\">1. Kitsu Setup &amp; Authentication\u003C/h2>\u003Cp>First, you need to talk to your Kitsu instance.\u003C/p>\u003Cp>If you don't have a studio URL yet and want to run Kitsu on your own machine, Docker is the fastest way to get a production-ready environment up and running:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">docker run --init -ti --rm -p 80:80 -p 1080:1080 --name cgwire cgwire/cgwire\n\u003C/code>\u003C/pre>\u003Cp>For scripting, we will use the official Kitsu Python SDK, \u003Ccode>gazu\u003C/code>.\u003C/p>\u003Cp>You can authenticate using your user credentials, which is fine for local testing:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import gazu\n\ngazu.set_host(\"http://localhost/api\")\nuser = gazu.log_in(\"admin@example.com\", \"mysecretpassword\")\n\u003C/code>\u003C/pre>\u003Chr>\u003Ch2 id=\"2-fetch-production-data\">2. Fetch Production Data\u003C/h2>\u003Cp>Before we write a single line of code, we need to talk about the data Kitsu exposes. 
If it exists in the UI, you can probably grab it via Gazu.\u003C/p>\u003Cp>The API is surprisingly deep. \u003Ca href=\"https://blog.cg-wire.com/how-to-track-properly-the-cg-artist-progress/\">For a solid production report\u003C/a>, you could typically be pulling:\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Progress Metrics:\u003C/strong> Status changes (e.g., moving from \"WIP\" to \"Internal Review\" using events).\u003C/li>\u003Cli>\u003Cstrong>Time Tracking:\u003C/strong> How long a shot has been \"In Progress\" versus the original estimate.\u003C/li>\u003Cli>\u003Cstrong>Cast Lists:\u003C/strong> Every Character, Environment, and Prop associated with a specific Episode or Sequence.\u003C/li>\u003Cli>\u003Cstrong>Workload:\u003C/strong> The exact number of frames or assets currently assigned to a specific artist.\u003C/li>\u003Cli>\u003Cstrong>Budget:\u003C/strong> How the team quota evolves over time.\u003C/li>\u003Cli>And many more resources you can read about in \u003Ca href=\"https://gazu.cg-wire.com/data?ref=blog.cg-wire.com\">our detailed developer documentation\u003C/a>.\u003C/li>\u003C/ul>\u003Cp>Let's look at a common scenario: you need a quick rundown of every task currently assigned to your team members for a specific project. 
This is the foundation of any "Who is doing what?" report.\u003C/p>\u003Cpre>\u003Ccode class="language-python">projects = gazu.project.all_projects()\nproject = projects[0]\n\ntasks = gazu.task.all_tasks_for_project(project)\n\nreport = []\n\nfor task in tasks:\n    assignees = [gazu.person.get_person(p_id)["full_name"] for p_id in task["assignees"]]\n\n    task_info = {\n        "date": task["updated_at"],\n        "entity": gazu.entity.get_entity(task["entity_id"])["name"],\n        "type": gazu.task.get_task_type(task["task_type_id"])["name"],\n        "status": gazu.task.get_task_status(task["task_status_id"])["name"]\n    }\n\n    for artist in assignees:\n        report.append({**task_info, "artist": artist})\n\u003C/code>\u003C/pre>\u003Cp>Gazu returns plain dictionaries. When you're fetching \u003Ccode>all_tasks_for_project\u003C/code>, keep in mind that on a feature-length production this can be a massive amount of data. Always filter your data, for example by \u003Ccode>task_status\u003C/code> or \u003Ccode>entity_type\u003C/code>, if you only need to see, say, active Animation shots.\u003C/p>\u003Chr>\u003Ch2 id="3-creating-a-reusable-template">3. Creating a Reusable Template\u003C/h2>\u003Cp>Now you need to decide how to render the PDF. There are two main options here.\u003C/p>\u003Cp>You can use ReportLab. This is the barebones method: it is fast and requires no non-Python dependencies, which makes it best for internal tech reports, simple line-item tables, and high-speed batch automation.\u003C/p>\u003Cp>Or you can create an HTML-to-PDF rendering pipeline using Jinja2 (templating) and WeasyPrint. This is often the preferred method because you can use CSS to style the report. If you can make a webpage, you can make a report. 
It's best for client-facing deliverables, heavy branding, and complex layouts.\u003C/p>\u003Cp>Let's define your configuration and template:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">STUDIO_NAME = \"My Animation Studio\"\nSTUDIO_LOGO = \"studio_logo.png\"  # local file path\nPROJECT_NAME = \"My Project\"\nOUTPUT_PDF = \"activity_report.pdf\"\n\u003C/code>\u003C/pre>\u003Cp>You use Jinja2 syntax (\u003Ccode>{{ variable }}\u003C/code>) to inject your Python data into standard HTML.\u003C/p>\u003Cpre>\u003Ccode class=\"language-html\">&lt;!doctype html&gt;\n&lt;html&gt;\n    &lt;head&gt;\n        &lt;meta charset=\"utf-8\" /&gt;\n        &lt;style&gt;\n            body {\n                font-family: Arial, sans-serif;\n                margin: 40px;\n            }\n            header {\n                display: flex;\n                align-items: center;\n                margin-bottom: 30px;\n            }\n            header img {\n                height: 50px;\n                margin-right: 20px;\n            }\n            h1 {\n                color: #2a2a2a;\n            }\n            table {\n                width: 100%;\n                border-collapse: collapse;\n                margin-top: 20px;\n            }\n            th {\n                background: #222;\n                color: white;\n                padding: 8px;\n                text-align: left;\n            }\n            td {\n                padding: 8px;\n                border-bottom: 1px solid #ccc;\n            }\n            .footer {\n                margin-top: 40px;\n                font-size: 10px;\n                color: #777;\n                text-align: center;\n            }\n        &lt;/style&gt;\n    &lt;/head&gt;\n\n    &lt;body&gt;\n        &lt;header&gt;\n            &lt;img src=\"{{ studio_logo }}\" /&gt;\n            &lt;h1&gt;{{ studio_name }} – Activity Report&lt;/h1&gt;\n        &lt;/header&gt;\n\n        &lt;p&gt;\n            
&lt;strong&gt;Project:&lt;/strong&gt; {{ project_name }}&lt;br /&gt;\n            &lt;strong&gt;Report Date:&lt;/strong&gt; {{ report_date }}\n        &lt;/p&gt;\n\n        &lt;table&gt;\n            &lt;tr&gt;\n                &lt;th&gt;Date&lt;/th&gt;\n                &lt;th&gt;Artist&lt;/th&gt;\n                &lt;th&gt;Task&lt;/th&gt;\n                &lt;th&gt;Entity&lt;/th&gt;\n                &lt;th&gt;Status&lt;/th&gt;\n            &lt;/tr&gt;\n            {% for row in rows %}\n            &lt;tr&gt;\n                &lt;td&gt;{{ row.date }}&lt;/td&gt;\n                &lt;td&gt;{{ row.artist }}&lt;/td&gt;\n                &lt;td&gt;{{ row.type }}&lt;/td&gt;\n                &lt;td&gt;{{ row.entity }}&lt;/td&gt;\n                &lt;td&gt;{{ row.status }}&lt;/td&gt;\n            &lt;/tr&gt;\n            {% endfor %}\n        &lt;/table&gt;\n\n        &lt;div class="footer"&gt;Generated automatically by {{ studio_name }}&lt;/div&gt;\n    &lt;/body&gt;\n&lt;/html&gt;\n\u003C/code>\u003C/pre>\u003Cp>This HTML file acts as a Jinja2 template that defines the visual structure and styling of the report, including page layout, fonts, colors, and a table for displaying activity data. The \u003Ccode>{{ ... }}\u003C/code> expressions mark placeholders for values such as the studio name, logo path, project name, and report date, while the embedded CSS ensures the document looks polished and print-ready when rendered or converted to PDF.\u003C/p>\u003Cp>When the Python code renders this template, Jinja2 replaces all placeholders with the actual values passed in from the script and executes the \u003Ccode>{% for row in rows %}\u003C/code> loop to generate one table row per activity record. 
Each \u003Ccode>row\u003C/code> dictionary supplies the date, artist, task type, entity, and status values, producing a complete HTML document with a fully populated table.\u003C/p>\u003Cp>The rendered HTML is given to WeasyPrint, which interprets both the HTML structure and the inline CSS to lay out the content as a printable document. The studio logo is loaded via its URL or relative path, the table and text are styled exactly as defined in the template, and everything is rendered into a PDF file that visually matches the HTML design, ending with the footer that confirms the report was generated automatically.\u003C/p>\u003Chr>\u003Ch2 id="4-rendering">4. Rendering\u003C/h2>\u003Cp>Finally, you glue it all together. You use \u003Ccode>jinja2\u003C/code> to fill in the placeholders in the HTML with your data, and then \u003Ccode>WeasyPrint\u003C/code> converts that HTML string into a PDF file:\u003C/p>\u003Cpre>\u003Ccode class="language-python">from jinja2 import Environment, FileSystemLoader\nfrom weasyprint import HTML\nfrom datetime import date\n\nenv = Environment(loader=FileSystemLoader("."))\ntemplate = env.get_template("report.html")\n\nhtml = template.render(\n    studio_name=STUDIO_NAME,\n    studio_logo=STUDIO_LOGO,\n    project_name=PROJECT_NAME,\n    report_date=date.today().isoformat(),\n    rows=report,\n)\n\nHTML(string=html, base_url=".").write_pdf(OUTPUT_PDF)\n\nprint(f"PDF generated: {OUTPUT_PDF}")\n\u003C/code>\u003C/pre>\u003Cp>The first part of the code sets up Jinja2 to load an HTML template from the current directory and then retrieves the aforementioned \u003Ccode>report.html\u003C/code>.\u003C/p>\u003Cp>Next, the template is rendered into a complete HTML document by injecting runtime data into those placeholders. Studio and project metadata are passed in, and the current date is generated in ISO format. 
The result of this step is a plain HTML string with all dynamic values resolved.\u003C/p>\u003Cp>Finally, the rendered HTML is handed to WeasyPrint, which parses the HTML and any associated CSS and assets, then converts it into a PDF file. The \u003Ccode>base_url\u003C/code> parameter ensures relative paths to images or stylesheets work correctly, and the finished PDF is written to the output path before printing a confirmation message.\u003C/p>\u003Cp>We obtain this final result:\u003C/p>\u003Cfigure class="kg-card kg-image-card">\u003Cimg src="https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/02/data-src-image-13e6f8e7-6700-4219-a7ed-6bbdb4850aab.png" class="kg-image" alt="" loading="lazy" width="1600" height="900" srcset="https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/02/data-src-image-13e6f8e7-6700-4219-a7ed-6bbdb4850aab.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/02/data-src-image-13e6f8e7-6700-4219-a7ed-6bbdb4850aab.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/02/data-src-image-13e6f8e7-6700-4219-a7ed-6bbdb4850aab.png 1600w" sizes="(min-width: 720px) 720px">\u003C/figure>\u003Cp>You can try running the script yourself in a minute by \u003Ca href="https://github.com/cgwire/blog-tutorials/tree/main/custom-kitsu-reports?ref=blog.cg-wire.com">cloning our corresponding GitHub repository\u003C/a>.\u003C/p>\u003Chr>\u003Ch2 id="5-automation-tips">5. 
Automation Tips\u003C/h2>\u003Cp>Automation is where this workflow pays its biggest dividends: once your report script works locally, the next step is making sure it runs reliably without human intervention, and that the output ends up where people already look.\u003C/p>\u003Cp>Instead of manually running the script, set up a cron job on your server to execute it at a predictable time. For example, running the script every weekday at 6:00 PM ensures the PDF is generated after the workday and ready when producers start the next morning. This is especially useful for daily burn-downs or shot status summaries.\u003C/p>\u003Cp>Once the PDF is generated, use \u003Ccode>gazu\u003C/code> to attach it directly to a relevant entity in Kitsu, like a Production, an Episode, or a recurring Task. This turns your report into a first-class production artifact with a permanent history. For example, uploading each day's report to a \"Daily Production Report\" task makes it easy to audit changes over time or reference past decisions. A practical tip: include the date in both the filename and the attachment comment so reports are easy to scan in the Kitsu UI without downloading each one.\u003C/p>\u003Cp>To push the report directly to stakeholders, use Python's built-in \u003Ccode>smtplib\u003C/code> (or a transactional email service) to send the PDF as an attachment. This is ideal for \u003Ca href=\"https://blog.cg-wire.com/collaborative-animation-production/\">producers or clients who don't live in Kitsu\u003C/a> all day. A concrete pattern is to email a short summary in the body—\"Shots blocked: 12, shots finaled: 3\"—and attach the full PDF for details.\u003C/p>\u003Cp>Instead of hardcoding a single HTML layout, store multiple Jinja2 templates like \u003Ccode>client_report.html\u003C/code> and \u003Ccode>internal_audit.html\u003C/code> to generate different report styles from the same Kitsu data. 
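A minimal sketch of that multi-template pattern, assuming the template files sit next to the script (the `render_report` helper and the audience names are illustrative, not part of Kitsu or Gazu):

```python
from jinja2 import Environment, FileSystemLoader

# Map each audience to its own template file; filenames are illustrative.
TEMPLATES = {
    "client": "client_report.html",
    "internal": "internal_audit.html",
}

def render_report(audience, context, search_path="."):
    """Render the same Kitsu-derived data through an audience-specific template."""
    env = Environment(loader=FileSystemLoader(search_path))
    template = env.get_template(TEMPLATES[audience])
    return template.render(**context)
```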
For example, clean, high-level summaries for clients and more detailed tables for internal tracking. A useful approach is to share base templates and macros (headers, tables, status badges) so changes to branding or layout propagate across all report types. Version these templates alongside your code so you can reproduce older reports exactly if needed.\u003C/p>\u003Chr>\u003Ch2 id=\"conclusion\">Conclusion\u003C/h2>\u003Cp>The bigger idea here isn't just about PDFs: it's about reclaiming time and attention for the work that actually moves a production forward!\u003C/p>\u003Cp>By pulling structured data out of Kitsu with Gazu, shaping it with Python, and rendering it into polished, automated reports, you replace a fragile, manual ritual with a repeatable system that runs quietly in the background. What used to be hours of copy-pasting, formatting, and double-checking becomes a dependable pipeline: accurate data, delivered on time, in a format producers and clients actually want to read. Custom reports let you communicate progress with confidence, surface problems before they become crunch, and present your studio as both creatively sharp and technically disciplined.\u003C/p>\u003Cp>The more complex your pipeline is, the more important it becomes to create custom reports, so make sure to read more of our scripting guides for inspiration!\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 
😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":265,"comment_id":266,"feature_image":267,"featured":29,"visibility":30,"created_at":268,"updated_at":269,"custom_excerpt":270,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":271,"primary_tag":272,"url":273,"excerpt":270,"reading_time":43,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":274},"d3e7cb7f-7151-4881-acef-25bc18bf3edc","69805d244304f600017051c5","https://images.unsplash.com/photo-1666875753105-c63a6f3bdc86?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDR8fGRhc2hib2FyZHxlbnwwfHx8fDE3NzAwMjAyODZ8MA&ixlib=rb-4.1.0&q=80&w=2000","2026-02-02T09:15:32.000+01:00","2026-02-20T06:03:57.000+01:00","Learn how to use the Gazu Python SDK to extract production data from Kitsu and generate custom, branded PDF reports. 
Automate progress tracking, task summaries, and stakeholder updates without manual copy-paste work.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/automated-kitsu-pdf-reports/","\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@dengxiangs?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Deng Xiang\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/automated-kitsu-pdf-reports","2026-02-02T10:00:12.000+01:00",{"title":260},"automated-kitsu-pdf-reports","posts/automated-kitsu-pdf-reports",[281],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"YJ-_wQpipngnWEzDu9U0RCIsq_Blt5_8TSOGLX9c1zw",{"id":284,"title":285,"authors":286,"body":7,"description":7,"extension":8,"html":288,"meta":289,"navigation":12,"path":300,"published_at":301,"seo":302,"slug":303,"stem":304,"tags":305,"__hash__":307,"uuid":290,"comment_id":291,"feature_image":292,"featured":29,"visibility":30,"created_at":293,"updated_at":294,"custom_excerpt":295,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":296,"primary_tag":297,"url":298,"excerpt":295,"reading_time":150,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":299},"ghost/posts:share-kitsu-playlists.json","(2026) How to Export and Share Kitsu Playlists with Python",[287],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📥\u003C/div>\u003Cdiv class=\"kg-callout-text\">Share Kitsu playlists clearly, even when clients can’t access Kitsu directly.\u003C/div>\u003C/div>\u003Cp>Early in your career as an animator, you'll likely learn a hard 
truth—sometimes the painful way: \u003Cstrong>doing great work is only half the job; sharing it clearly is the other half\u003C/strong>. You might remember a short film project where the animation itself was solid, but the review process was pure chaos. QuickTimes flying back and forth over email, files named things like \u003Ccode>shot_final_v3_really_final.mov\u003C/code>, and no one quite sure which notes applied to which version. Clients were confused, supervisors were frustrated, and you were spending more time managing files than animating.\u003C/p>\u003Cp>Fast forward a few years, and tools like \u003Cstrong>Kitsu playlists\u003C/strong> completely change how studios review animation.\u003C/p>\u003Cp>They give you structure, traceability, and a clean way to present work. You can group shots, track versions, and centralize feedback. For most teams, that alone is a huge win.\u003C/p>\u003Cp>But here's the thing you learn after years in production: \u003Ca href=\"https://blog.cg-wire.com/how-to-give-efficient-animation-feedback/\">no two studios or clients share the exact same review workflow\u003C/a>. Sometimes you need to send assets offline. Sometimes a client wants everything neatly packaged by sequence. Sometimes legal or security constraints mean you can't give direct Kitsu access. In those cases, you still want to leverage Kitsu's strengths without being locked into a single way of sharing.\u003C/p>\u003Cp>That's exactly what this article is about.\u003C/p>\u003Cp>By the end, you'll know how to \u003Cstrong>create a Kitsu playlist, extract its data with Python, download all related assets in a clean folder structure, and compress everything for easy sharing\u003C/strong>. 
This approach can save you hours on real productions and make reviews smoother for both artists and clients.\u003C/p>\u003Cp>Let's break it down step by step.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-blue\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example integration showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blog-tutorials/tree/main/share-kitsu-playlist?ref=blog.cg-wire.com\">https://github.com/cgwire/blog-tutorials/tree/main/share-kitsu-playlist\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"1-create-a-kitsu-playlist\">1. Create a Kitsu Playlist\u003C/h2>\u003Cp>\u003Cstrong>Every solid review workflow starts with a clear intention\u003C/strong>: what exactly do you want feedback on? Kitsu playlists are built for that purpose.\u003C/p>\u003Cp>Creating a playlist from the Kitsu dashboard is straightforward. Navigate to your project, head into the Shots or Assets section, and start selecting the items you want reviewed. It helps to think of playlists as review narratives. 
Instead of dumping everything in, ask yourself:\u003C/p>\u003Cul>\u003Cli>Is this a blocking review?\u003C/li>\u003Cli>Is this a polishing pass?\u003C/li>\u003Cli>Is this focused on animation, lighting, or comp?\u003C/li>\u003C/ul>\u003Cp>For example, on a short cinematic project, you might create separate playlists for:\u003C/p>\u003Cul>\u003Cli>\"Animation Blocking – Act 1\"\u003C/li>\u003Cli>\"Facial Polish – Key Shots\"\u003C/li>\u003Cli>\"Final Lighting Review\"\u003C/li>\u003C/ul>\u003Cp>That small bit of organization can make client reviews dramatically more focused.\u003C/p>\u003Cp>In Kitsu, once your shots are selected, you can create a new playlist, name it clearly, and order the shots in a way that tells a story. Order matters more than people think. \u003Ca href=\"https://blog.cg-wire.com/client-communication-animation/\">When a client presses play, they can judge the art, timing, and revisions in one place.\u003C/a>\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-712558f4-4b58-4b1e-8bb1-7bfa2fee1c74.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1319\" height=\"821\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/01/data-src-image-712558f4-4b58-4b1e-8bb1-7bfa2fee1c74.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/01/data-src-image-712558f4-4b58-4b1e-8bb1-7bfa2fee1c74.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-712558f4-4b58-4b1e-8bb1-7bfa2fee1c74.png 1319w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"2-get-the-playlist-data\">2. 
Get the Playlist Data\u003C/h2>\u003Cp>Now that we have a playlist ready, it's time to code.\u003C/p>\u003Cp>We start by \u003Cstrong>authenticating with Kitsu\u003C/strong> using the Gazu API client:\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">import gazu\n\ngazu.set_host(\"http://localhost/api\")\ngazu.log_in(\"admin@example.com\", \"mysecretpassword\")\n\u003C/code>\u003C/pre>\u003Cp>We can then \u003Cstrong>query Kitsu for available projects\u003C/strong> and present them in the terminal. The user selects a project, and that choice defines the scope of everything that follows. Because projects are fetched dynamically, the script works across productions without modification:\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">productions = gazu.project.all_projects()\n\nfor i, p in enumerate(productions):\n    print(f\"[{i}] {p['name']}\")\n\nproduction = productions[int(input(\"Select project: \"))]\n\u003C/code>\u003C/pre>\u003Cp>From there, \u003Cstrong>playlists are queried from the selected project\u003C/strong> and shown the same way. When a playlist is chosen, the script retrieves the full playlist object from the API.\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">playlists = gazu.playlist.all_playlists_for_project(production)\n\nfor i, pl in enumerate(playlists):\n    print(f\"[{i}] {pl['name']}\")\n\nplaylist = gazu.playlist.get_playlist(playlists[int(input(\"Select playlist: \"))])\n\u003C/code>\u003C/pre>\u003Cp>\u003Ccode>playlist\u003C/code> contains the full editorial selection reference: shots, versions, ordering, and linked files are all accessible through this object.\u003C/p>\u003Chr>\u003Ch2 id=\"3-download-related-assets\">3. 
Download Related Assets\u003C/h2>\u003Cp>The next step is turning the playlist data into something reviewable on disk.\u003C/p>\u003Cp>\u003Cstrong>The output is a folder hierarchy that mirrors production reality\u003C/strong>: playlist at the top, sequences underneath, shots inside those, and the actual media sitting where anyone expects to find it.\u003C/p>\u003Cpre>\u003Ccode>Playlist_Name/\n├── Seq_010/\n│   ├── Shot_010_001/\n│   │   ├── anim_v003.mov\n│   │   └── anim_v003.png\n│   └── Shot_010_002/\n└── Seq_020/\n    └── Shot_020_005/\n\u003C/code>\u003C/pre>\u003Cp>That structure is the point. It removes ambiguity, avoids flat dumps of files, and lets supervisors and clients navigate by context instead of filenames.\u003C/p>\u003Cp>The playlist name is used as the root folder, so every export stays self-contained and re-runnable.\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">playlist_name = playlist[\"name\"]\n\u003C/code>\u003C/pre>\u003Cp>We then iterate over each playlist entry and fetch the full shot record because the playlist itself does not include sequence data.\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">for shot in playlist[\"shots\"]:\n    shot_data = gazu.shot.get_shot(shot[\"entity_id\"])\n\u003C/code>\u003C/pre>\u003Cp>We use the sequence name and shot name to build a deterministic directory path. This enforces a consistent \u003Ccode>playlist/sequence/shot\u003C/code> layout on disk.\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">import os\n\nshot_name = shot_data[\"name\"]\nsequence_name = shot_data[\"sequence_name\"]\n\nshot_dir = os.path.join(\n    playlist_name,\n    sequence_name,\n    shot_name,\n)\n\u003C/code>\u003C/pre>\u003Cp>If the directory doesn't exist, we create it. 
This lets the script run multiple times without failing or overwriting partial downloads.\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">os.makedirs(shot_dir, exist_ok=True)\n\u003C/code>\u003C/pre>\u003Cp>We can then fetch the preview file information corresponding to each shot, typically a picture or video:\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">preview = gazu.files.get_preview_file(shot[\"preview_file_id\"])\n\u003C/code>\u003C/pre>\u003Cp>We preserve the original filename and extension so the output matches what artists and supervisors expect to see.\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">preview_filename = f\"{preview['original_name']}.{preview['extension']}\"\npreview_path = os.path.join(shot_dir, preview_filename)\n\u003C/code>\u003C/pre>\u003Cp>We download the preview media directly into the shot folder. At this point, the playlist exists on disk as a clean, review-ready directory tree.\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">gazu.files.download_preview_file(preview, preview_path)\n\u003C/code>\u003C/pre>\u003Cp>The result is a local mirror of the playlist that can be zipped, sent, archived, or reviewed without explanation.\u003C/p>\u003Chr>\u003Ch2 id=\"4-compress-the-folder\">4. Compress the Folder\u003C/h2>\u003Cp>Once everything is downloaded, the final step is making it easy to share. \u003Cstrong>Your script should automatically compress the root playlist folder into a single archive\u003C/strong>:\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">import shutil\n\nshutil.make_archive(\n    base_name=playlist_name,\n    format=\"zip\",\n    root_dir=os.path.dirname(os.path.abspath(playlist_name)),\n    base_dir=os.path.basename(playlist_name),\n)\n\u003C/code>\u003C/pre>\u003Cp>This archive becomes your handoff artifact. You can upload it to cloud storage, send it through a secure client portal, or archive it internally as a backup folder.\u003C/p>\u003Cp>\u003Cstrong>Clients don't worry about missing files or broken structures. 
They download once, unzip once, and everything just works.\u003C/strong>\u003C/p>\u003Cp>Include the playlist name and date in the archive filename. Six months later, when someone asks, \"Which version did we send?\", you'll be glad you did.\u003C/p>\u003Chr>\u003Ch2 id=\"onboard-clients-in-kitsu\">Onboard Clients In Kitsu\u003C/h2>\u003Cp>At some point, exporting Kitsu playlists just starts getting in the way. It’s fine when you’re sending a quick snapshot or getting a one-off note pass, but once the project goes into real iteration, things get messy fast. You’re re-exporting for every tweak, clients are commenting on outdated cuts, and feedback ends up split between emails, PDFs, and chat threads. \u003Cstrong>A lot of energy goes into figuring out what the note is referring to instead of actually fixing the shot.\u003C/strong>\u003C/p>\u003Cp>\u003Cstrong>That’s usually when it makes sense to bring clients directly into Kitsu.\u003C/strong> They’re always looking at the current version, they can draw or comment right on the frame, and everyone sees the notes in context. Version history stays intact, so when a client asks about something “from two versions ago,” you can actually see it. 
For the team, it means fewer guesswork moments and less time copying notes from one place to another.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-1b596b1f-9757-47e5-a893-2c41164a1eab.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1438\" height=\"809\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/01/data-src-image-1b596b1f-9757-47e5-a893-2c41164a1eab.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/01/data-src-image-1b596b1f-9757-47e5-a893-2c41164a1eab.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-1b596b1f-9757-47e5-a893-2c41164a1eab.png 1438w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Exports are good for quick check-ins, but they don’t scale with real production. \u003Cstrong>Having clients in Kitsu keeps everyone grounded in the same reality.\u003C/strong>\u003C/p>\u003Chr>\u003Ch2 id=\"conclusion\">Conclusion\u003C/h2>\u003Cp>After years in animation, one lesson keeps repeating itself: the smoother your review workflow, the better your creative output. Kitsu already gives you a powerful foundation with playlists, versioning, and centralized feedback. \u003Cstrong>By tapping into its data and building small automation tools, you can adapt it to almost any review scenario.\u003C/strong>\u003C/p>\u003Cp>But you can also extract playlist data from Kitsu and reshape it to fit your custom review workflows. 
Whether you're sending offline packages, organizing assets for external partners, or just trying to make life easier for your clients, this approach puts you in control.\u003C/p>\u003Cp>\u003Ca href=\"https://github.com/cgwire/blog-tutorials/tree/main/share-kitsu-playlist?ref=blog.cg-wire.com\">Check out the public GitHub repository\u003C/a> to clone and modify our code to match your workflow!\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-5c610ee3-e726-4198-8b9b-480d3546530c.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1319\" height=\"821\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/01/data-src-image-5c610ee3-e726-4198-8b9b-480d3546530c.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/01/data-src-image-5c610ee3-e726-4198-8b9b-480d3546530c.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-5c610ee3-e726-4198-8b9b-480d3546530c.png 1319w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>And if there's one final piece of advice worth following: \u003Cstrong>onboard your clients directly onto Kitsu whenever possible!\u003C/strong> Once they \u003Ca href=\"https://www.cg-wire.com/review-engine?ref=blog.cg-wire.com\">experience real-time review rooms\u003C/a>, annotated notes, and version history, most never want to go back to messy email threads again.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 
😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":290,"comment_id":291,"feature_image":292,"featured":29,"visibility":30,"created_at":293,"updated_at":294,"custom_excerpt":295,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":296,"primary_tag":297,"url":298,"excerpt":295,"reading_time":150,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":299},"503be05a-cfc1-4b66-8010-a46dab1bd231","695bb6ffc665470001df4dc7","https://images.unsplash.com/photo-1727142073871-d40f5a7c76d8?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDJ8fHZpZGVvJTIwZWRpdCUyMHN1aXRlfGVufDB8fHx8MTc2NzYyMDEwNnww&ixlib=rb-4.1.0&q=80&w=2000","2026-01-05T14:05:03.000+01:00","2026-02-20T06:04:53.000+01:00","Learn how to create, export, and share Kitsu playlists using Python. 
This guide shows how to extract playlist data, download previews into a clean folder structure, and package everything for offline or client-friendly reviews.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/share-kitsu-playlists/","\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@mdesign85?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">MD Duran\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/share-kitsu-playlists","2026-01-26T10:00:19.000+01:00",{"title":285},"share-kitsu-playlists","posts/share-kitsu-playlists",[306],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"-jbM2U_O1PNcpV8f4TxvrmeMLejonGVTVrjVX5RZW_E",{"id":309,"title":310,"authors":311,"body":7,"description":7,"extension":8,"html":313,"meta":314,"navigation":12,"path":325,"published_at":326,"seo":327,"slug":328,"stem":329,"tags":330,"__hash__":337,"uuid":315,"comment_id":316,"feature_image":317,"featured":29,"visibility":30,"created_at":318,"updated_at":319,"custom_excerpt":320,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":321,"primary_tag":322,"url":323,"excerpt":320,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":324},"ghost/posts:self-hosted-blender-render-farm.json","Self-Hosting a Blender Render Farm Using Flamenco In 2026",[312],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">🖥️\u003C/div>\u003Cdiv class=\"kg-callout-text\">Turn idle machines into a powerful Blender render farm without touching the cloud.\u003C/div>\u003C/div>\u003Cp>When was the last time you almost missed a deadline because of 
rendering?\u003C/p>\u003Cp>Every time you open Blender, your workstation sounds like a jet engine preparing for takeoff, and your entire film, worth months of work, is held hostage by a single progress bar.\u003C/p>\u003Cp>Meanwhile, your old college laptop sits in a box gathering dust. It's not a powerhouse, but it has a GPU. It has RAM. It's a perfectly functional computer doing absolutely nothing while you panic.\u003C/p>\u003Cp>The concept of a \"render farm\" can sound intimidating to one-person studios. You might imagine server racks in a chilled room, expensive licenses, and IT professionals shouting about IP addresses.\u003C/p>\u003Cp>But in the modern Blender ecosystem, that's no longer the reality.\u003C/p>\u003Cp>In this article, \u003Cstrong>I'm going to walk you through how to turn old devices into a unified rendering system using \u003Cem>Flamenco\u003C/em>.\u003C/strong> We will demystify the network setup and get you rendering on multiple machines in a few hours.\u003C/p>\u003Chr>\u003Ch2 id=\"why-self-host-a-render-farm\">Why Self-Host a Render Farm?\u003C/h2>\u003Cp>Before we start plugging in Ethernet cables, let's talk about why you should bother. You might think, \"Why not just send everything to a cloud farm?\" Cloud farms are amazing, but having a local, self-hosted render farm changes your workflow in three fundamental ways.\u003C/p>\u003Cp>When you pay for a cloud farm, you are paying for the final output. \u003Ca href=\"https://blog.cg-wire.com/blender-kitsu-low-res-preview/\">This psychologically discourages you from test rendering\u003C/a>. \u003Cstrong>You become afraid to hit \"Render\" until you are 100% sure everything is perfect.\u003C/strong>\u003C/p>\u003Cp>When you own the farm, the cost of a render is electricity. \u003Ca href=\"https://blog.cg-wire.com/getting-started-with-blender-rendering/\">You can render a rough animation\u003C/a> at 50% resolution just to check the timing or lighting. 
\u003Cstrong>This freedom allows you to iterate faster.\u003C/strong> You stop guessing and start testing.\u003C/p>\u003Cp>Sometimes you're working on a commercial project for a tech client whose NDA is so strict that you aren't allowed to even whisper the product name. \u003Cstrong>Uploading those assets to a third-party cloud server - even a secure one - can sometimes violate strict NDA contracts.\u003C/strong> Keeping your data on your local network (LAN) ensures that no pixels leave your studio until you say so.\u003C/p>\u003Cp>There is a specific kind of agony in uploading a 2GB project file to the cloud, waiting for it to render, downloading the frames, and realizing you left a physics cache unbaked. \u003Cstrong>With a local farm like Flamenco, if you spot a mistake, you just hit \"Cancel,\" fix it, and hit \"Render\" again. No upload times, no download times.\u003C/strong> It feels like an extension of your workstation.\u003C/p>\u003Chr>\u003Ch2 id=\"what-is-blender-flamenco\">What is Blender Flamenco?\u003C/h2>\u003Cp>Setting up a render farm from scratch \u003Ca href=\"https://blog.cg-wire.com/blender-programmatic-rendering/\">used to involve complex scripting\u003C/a> or expensive third-party software. Now, we have Blender Flamenco.\u003C/p>\u003Cp>\u003Cstrong>Flamenco is Blender's open-source render farm manager.\u003C/strong> It's extremely easy to set up: the manager is the brain; it holds the list of tasks (frames to render) and tells the other computers what to do. The workers are your extra laptops or desktops. They listen to the Manager, ask for a frame, render it, save it, and ask for another.\u003C/p>\u003Cp>Flamenco is designed to be zero-config. It practically discovers itself on your network. If you can install Blender, you can set up Flamenco.\u003C/p>\u003Chr>\u003Ch2 id=\"1-the-setup\">1. The Setup\u003C/h2>\u003Cp>For this tutorial, we start with the simplest configuration possible, with our desktop computer acting as both manager and worker. 
We'll later see how to add our laptop.\u003C/p>\u003Col>\u003Cli>\u003Cstrong>Install Blender\u003C/strong> - Ensure your computer has Blender installed.\u003C/li>\u003Cli>\u003Cstrong>Download Flamenco\u003C/strong> - Go to the Flamenco website and download the package for your OS. Extract it to a folder.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-cec7140f-c6aa-4e18-83fb-be86e5a39ac7.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1064\" height=\"721\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/01/data-src-image-cec7140f-c6aa-4e18-83fb-be86e5a39ac7.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/01/data-src-image-cec7140f-c6aa-4e18-83fb-be86e5a39ac7.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-cec7140f-c6aa-4e18-83fb-be86e5a39ac7.png 1064w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"2-run-flamenco-manager\">2. Run Flamenco Manager\u003C/h2>\u003Col>\u003Cli>Open the Flamenco folder you extracted.\u003C/li>\u003Cli>Double-click \u003Ccode>flamenco-manager\u003C/code>.\u003C/li>\u003Cli>A terminal window will pop up with some text logs.\u003C/li>\u003Cli>Go through the configuration wizard to set up the job folder where you'll upload your blend files to render.\u003C/li>\u003Cli>Shortly after, your web browser should open automatically to \u003Ccode>http://localhost:8080\u003C/code>. This is the Flamenco web interface.\u003C/li>\u003C/ol>\u003Cp>If you see a friendly, dark-themed dashboard, congratulations. You are half a server admin already. 
The Manager is alive.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-ac803a05-e189-4c17-9fe9-d5749f916aa0.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1319\" height=\"821\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/01/data-src-image-ac803a05-e189-4c17-9fe9-d5749f916aa0.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/01/data-src-image-ac803a05-e189-4c17-9fe9-d5749f916aa0.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-ac803a05-e189-4c17-9fe9-d5749f916aa0.png 1319w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>The manager will tell you to download the addon. Do it now as we'll need it for step 4.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-ccd6a3fb-4abd-469e-a566-5adfddf76196.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1064\" height=\"721\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/01/data-src-image-ccd6a3fb-4abd-469e-a566-5adfddf76196.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/01/data-src-image-ccd6a3fb-4abd-469e-a566-5adfddf76196.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-ccd6a3fb-4abd-469e-a566-5adfddf76196.png 1064w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"3-the-worker\">3. 
The Worker\u003C/h2>\u003Cp>Now, leave the manager running and double-click \u003Ccode>flamenco-worker\u003C/code>.\u003C/p>\u003Cp>That's it.\u003C/p>\u003Cp>The Worker will scan your local network, find the Manager running on the same computer, and introduce itself. If you look back at your Desktop's web browser (the Manager interface), you should see it appear in the \"Workers\" tab, listed as \"Idle\" and ready for duty.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-6bad58f1-615a-4a7b-8aff-38f07279ebe0.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1319\" height=\"821\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/01/data-src-image-6bad58f1-615a-4a7b-8aff-38f07279ebe0.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/01/data-src-image-6bad58f1-615a-4a7b-8aff-38f07279ebe0.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-6bad58f1-615a-4a7b-8aff-38f07279ebe0.png 1319w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>You should also run \u003Ccode>flamenco-worker\u003C/code> on your Desktop! Your main computer can render and manage at the same time.\u003C/p>\u003Chr>\u003Ch2 id=\"4-add-the-blend-file-and-render\">4. Add the Blend File and Render\u003C/h2>\u003Cp>The stage is set. Now, we can get to work!\u003C/p>\u003Col>\u003Cli>\u003Cstrong>Open Blender\u003C/strong> on your Desktop.\u003C/li>\u003Cli>\u003Cstrong>Enable the Addon\u003C/strong> - Go to Edit &gt; Preferences &gt; Add-ons &gt; Install from Disk. 
Locate the Flamenco add-on zip file you downloaded during the Manager setup.\u003C/li>\u003Cli>\u003Cstrong>Link the Manager\u003C/strong> - In the Flamenco add-on preferences, copy/paste the Manager's URL.\u003C/li>\u003Cli>\u003Cstrong>Save Your File\u003C/strong> - Save your \u003Ccode>.blend\u003C/code> file in the configured job folder.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-88504c81-44cf-4d32-a374-0b2dc6746b56.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"724\" height=\"732\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/01/data-src-image-88504c81-44cf-4d32-a374-0b2dc6746b56.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-88504c81-44cf-4d32-a374-0b2dc6746b56.png 724w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>In the Render Properties tab in Blender, scroll down to the \u003Cstrong>Flamenco\u003C/strong> panel.\u003C/p>\u003Col>\u003Cli>Click \u003Cstrong>\"Fetch Job Types\"\u003C/strong>.\u003C/li>\u003Cli>Select \u003Cstrong>\"Simple Render\"\u003C/strong>.\u003C/li>\u003Cli>Hit \u003Cstrong>\"Submit to Flamenco\"\u003C/strong>.\u003C/li>\u003C/ol>\u003Cp>Now, tab over to your web browser. You will see the job pop up. The status bars on your \"Workers\" list will turn green. 
Your Desktop will grab one frame to render at a time.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-6e7fa2fb-b997-4f6f-ba60-bcc3c70d5bb0.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1319\" height=\"918\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/01/data-src-image-6e7fa2fb-b997-4f6f-ba60-bcc3c70d5bb0.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/01/data-src-image-6e7fa2fb-b997-4f6f-ba60-bcc3c70d5bb0.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-6e7fa2fb-b997-4f6f-ba60-bcc3c70d5bb0.png 1319w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"5-bringing-in-the-laptop\">5. Bringing in the Laptop\u003C/h2>\u003Cp>Now, to add your dusty laptop to the farm.\u003C/p>\u003Cp>Here is the single most actionable piece of advice I can give you, and it is where 90% of beginners fail: \u003Cstrong>All computers must see the files in the exact same place.\u003C/strong>\u003C/p>\u003Cp>If your texture is located at \u003Ccode>C:\\Users\\Dave\\Texture.png\u003C/code> on your desktop, your laptop \u003Cem>cannot\u003C/em> access that path. The laptop doesn't have a user named Dave, and it doesn't have the file on its C drive.\u003C/p>\u003Cp>You need a shared network folder, typically served by a NAS. Depending on your operating system, the steps are similar but differ slightly:\u003C/p>\u003Col>\u003Cli>Connect your desktop and laptop via an Ethernet cable.\u003C/li>\u003Cli>Create a shared folder on your Desktop called \u003Ccode>RenderFarm\u003C/code>.\u003C/li>\u003Cli>Right-click it &gt; \u003Cstrong>Properties\u003C/strong> &gt; \u003Cstrong>Sharing\u003C/strong> &gt; \u003Cstrong>Share\u003C/strong>. 
Give read/write permission to your user.\u003C/li>\u003Cli>\u003Cstrong>Map the Network Drive:\u003C/strong> On your Desktop, map this folder to a drive letter, say \u003Ccode>Z:\u003C/code>. On your Laptop, navigate to the Desktop's network share and map it to \u003Cstrong>the same letter \u003Ccode>Z:\u003C/code>\u003C/strong>.\u003C/li>\u003C/ol>\u003Cp>Now, when you save your Blender file to \u003Ccode>Z:\\RenderFarm\\MyProject.blend\u003C/code>, both computers see it at \u003Ccode>Z:\\RenderFarm\\MyProject.blend\u003C/code>. The path is absolute and identical.\u003C/p>\u003Cp>Leave the Desktop running and move over to \u003Cstrong>Computer B (Laptop)\u003C/strong>.\u003C/p>\u003Col>\u003Cli>Make sure your \u003Ccode>Z:\u003C/code> drive (or whatever shared storage you set up) is accessible. Open a file inside it just to be sure.\u003C/li>\u003Cli>Download and extract the Flamenco package on the laptop.\u003C/li>\u003Cli>Make sure you have the same Blender version installed as the one on your desktop.\u003C/li>\u003Cli>Double-click \u003Ccode>flamenco-worker\u003C/code>.\u003C/li>\u003C/ol>\u003Cp>That's it.\u003C/p>\u003Cp>The Worker will scan your local network and find the Manager running on the Desktop.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-90501d50-29c3-4d8f-9b54-511e6c674739.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1504\" height=\"932\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/01/data-src-image-90501d50-29c3-4d8f-9b54-511e6c674739.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/01/data-src-image-90501d50-29c3-4d8f-9b54-511e6c674739.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-90501d50-29c3-4d8f-9b54-511e6c674739.png 
1504w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Flamenco will now automatically orchestrate jobs between your computers.\u003C/p>\u003Cp>If you do not have access to a NAS or do not wish to purchase one, you can have a look at installing a free Samba server on a Linux workstation. Using cloud storage isn't possible because Flamenco doesn't handle asynchronous services, unless you create your own custom job type. We'll see how to do that \u003Ca href=\"https://blog.cg-wire.com/\">in a future article\u003C/a>, using Kitsu as an asynchronous \u003Ca href=\"https://blog.cg-wire.com/animation-asset-storage/\">asset storage server\u003C/a>.\u003C/p>\u003Chr>\u003Ch2 id=\"conclusion-knowing-when-to-scale\">Conclusion: Knowing When to Scale\u003C/h2>\u003Cp>We have covered the hardware setup, the crucial shared storage logic, and the software installation. If you have followed along, \u003Cstrong>you have a functioning render farm in your house and your dusty laptop is now a productive member of your team.\u003C/strong>\u003C/p>\u003Cp>Flamenco makes the barrier to entry for self-hosted rendering incredibly low. It respects your privacy, costs nothing but electricity, and allows you to squeeze every ounce of performance out of the hardware you already own.\u003C/p>\u003Cp>But there is a limit on what you can achieve by yourself.\u003C/p>\u003Cp>Eventually, you will hit a deadline where even your Desktop + Laptop combo isn't enough. Maybe you need to render a 4K sequence with heavy volumetrics in 24 hours and your home farm estimates a completion time of 3 weeks. This is the ceiling of self-hosting.\u003C/p>\u003Cp>When you hit this wall, you don't need to buy five more computers. \u003Cstrong>That's when you transition to a service like Ranch Computing\u003C/strong> that allows you to access hundreds of CPU/GPU nodes instantly. 
Your home farm is a great daily driver that's perfect for tests, previews, and lighter projects, while a cloud render farm is invaluable for quickly rendering high-quality deliverables to your clients.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":315,"comment_id":316,"feature_image":317,"featured":29,"visibility":30,"created_at":318,"updated_at":319,"custom_excerpt":320,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":321,"primary_tag":322,"url":323,"excerpt":320,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":324},"80ad6c13-1312-46ac-a74b-94e022668680","695bb702c665470001df4dcd","https://images.unsplash.com/photo-1683322499436-f4383dd59f5a?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDd8fGRhdGElMjBjZW50ZXJ8ZW58MHx8fHwxNzY3NjE4NDAxfDA&ixlib=rb-4.1.0&q=80&w=2000","2026-01-05T14:05:06.000+01:00","2026-02-20T06:04:52.000+01:00","Learn how to build a self-hosted Blender render farm using Flamenco. 
This guide walks through setup, shared storage, workers, and scaling strategies to help artists render faster using the hardware they already own.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/self-hosted-blender-render-farm/","\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@scottrodgerson?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Scott Rodgerson\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/self-hosted-blender-render-farm","2026-01-19T10:00:41.000+01:00",{"title":310},"self-hosted-blender-render-farm","posts/self-hosted-blender-render-farm",[331,332],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},{"id":333,"name":334,"slug":335,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":336},"69c20ddbcb09d8000107cfe5","Blender","blender","https://blog.cg-wire.com/tag/blender/","DCPLn1PWShGHKlv5NXuil2qtBDL7tnabWDmi33KjLoc",{"id":339,"title":340,"authors":341,"body":7,"description":7,"extension":8,"html":343,"meta":344,"navigation":12,"path":355,"published_at":356,"seo":357,"slug":358,"stem":359,"tags":360,"__hash__":362,"uuid":345,"comment_id":346,"feature_image":347,"featured":29,"visibility":30,"created_at":348,"updated_at":349,"custom_excerpt":350,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":351,"primary_tag":352,"url":353,"excerpt":350,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":354},"ghost/posts:kitsu-cli-single-binary.json","Building a Portable Kitsu CLI with Python and Gazu 
(2026)",[342],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">🧰\u003C/div>\u003Cdiv class=\"kg-callout-text\">Turn fragile Python scripts into a single reliable tool that just runs.\u003C/div>\u003C/div>\u003Cp>It's late in production, the schedule is tight, and you need to roll out a critical pipeline tool on a new machine—something to sync shot statuses, publish playblasts, or \u003Ca href=\"https://blog.cg-wire.com/dcc-integration-blender-kitsu/\">automate a Kitsu workflow\u003C/a>. The tool itself isn't complicated. It's just Python. You already wrote it.\u003C/p>\u003Cp>\u003Cstrong>The problem is everything around it.\u003C/strong>\u003C/p>\u003Cp>The machine you're deploying to doesn't have Python installed. Or it has the wrong version. The studio's Linux server is locked down. A freelancer's Windows box can't compile dependencies. Someone asks whether they need \u003Ccode>pip\u003C/code>, a virtual environment, or the Gazu SDK. Suddenly, a \"simple script\" turns into documentation, troubleshooting, and lost time.\u003C/p>\u003Cp>Instead of building pipeline tools, you're managing environments.\u003C/p>\u003Cp>This is the part no one enjoys: installing Python, pinning versions, chasing missing libraries, and hoping nothing breaks when the OS updates. 
And when your tool needs to run on artist workstations, render nodes, or CI servers, that fragility becomes a real production risk.\u003C/p>\u003Cp>What you actually want is simple: one tool, one command, that just runs.\u003C/p>\u003Cp>\u003Cstrong>In this article, you'll learn how to package your Kitsu workflows by wrapping the Kitsu Python SDK (Gazu) into a Command Line Interface (CLI) and compiling it into a single binary executable.\u003C/strong> No Python installs. No dependency management. Just a reliable executable you can drop onto any machine and use immediately.\u003C/p>\u003Chr>\u003Ch2 id=\"why-you-need-a-cli\">Why You Need a CLI\u003C/h2>\u003Cp>GUIs are great for creative work, but \u003Cstrong>once you're dealing with pipeline management, a web UI can quickly become a burden\u003C/strong>. When you move the right Kitsu tasks into a CLI, you unlock a faster, more scalable, and more automation-friendly way of working.\u003C/p>\u003Cp>You finish animating five shots and need to update their status and upload previews. In a browser, that means context-switching: Alt-Tab, open Chrome, navigate to Kitsu, drill into the project, find the episode, click the shot, change the status, upload the movie. Then repeat the whole process for every shot. With a CLI, you stay exactly where you are. You type \u003Ccode>kitsu publish --status Review\u003C/code>, hit Enter, and move on. \u003Cstrong>You never leave the keyboard, you never break focus, and you don't pay the cognitive tax of clicking through menus.\u003C/strong>\u003C/p>\u003Cp>A CLI naturally pushes you toward thinking in arguments, lists, and automation, and that's where it starts to compound. \u003Cstrong>If you can update one shot, you can update ten or a hundred using the exact same command.\u003C/strong> You can loop over a sequence, pipe in shot names, or drive the operation directly from a DCC or render output. 
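That batch idea can be sketched in a few lines of Click: a hypothetical `publish` command that accepts any number of shot names (the command, option name, and statuses here are illustrative, not part of an existing tool):

```python
import click

@click.command()
@click.argument("shots", nargs=-1, required=True)
@click.option("--status", default="Review", help="Status applied to every shot")
def publish(shots, status):
    """Publish one shot or a hundred with the exact same command."""
    for shot in shots:
        # A real tool would call Gazu here to update the task in Kitsu.
        click.echo(f"{shot} -> {status}")
```

Invoked as `publish SH010 SH020 SH030 --status Review`, the loop handles one shot or a hundred identically, which is what makes the CLI approach compound.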
What would be an hour of repetitive clicking in a web UI becomes a few seconds of scripted work. And it's consistent, repeatable, and easy to version-control.\u003C/p>\u003Cp>Lastly, \u003Cstrong>not everything in a pipeline runs on a workstation with a monitor.\u003C/strong> Sometimes tasks need to happen on a render farm node, a build server, or a background process reacting to files on disk. In those environments, there is no browser and no user to click buttons. A CLI works anywhere you have a shell. You can automate publishes, status changes, validations, and sync operations, and Kitsu gets integrated deeper into the pipeline.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-blue\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example integration showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blog-tutorials/tree/main/kitsu-cli?ref=blog.cg-wire.com\">https://github.com/cgwire/blog-tutorials/tree/main/kitsu-cli\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"1-designing-the-cli-interface\">1. Designing the CLI Interface\u003C/h2>\u003Cp>Before we touch the Kitsu API, we need the skeleton of our tool. 
In Python, there are several ways to parse command-line arguments, but for a professional pipeline tool, I highly recommend using libraries like \u003Ccode>Click\u003C/code> or \u003Ccode>Typer\u003C/code>.\u003C/p>\u003Cp>For this walkthrough, let's conceptualize a tool called \u003Ccode>kitsu-cli\u003C/code>.\u003C/p>\u003Cp>\u003Cstrong>Think of your tool like a tree.\u003C/strong> The trunk is the main executable, and the branches are your commands and subcommands:\u003C/p>\u003Cpre>\u003Ccode class=\"language-text\">kitsu-cli (root)\n└── production (commands related to productions)\n    └── list (list all productions)\n\u003C/code>\u003C/pre>\u003Cp>Here is how you structure this logic in Python using \u003Ccode>Click\u003C/code>. This structure is crucial because it allows your tool to be extensible. Today you are managing productions; tomorrow you might be managing assets or playlists.\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import click\n\n@click.group()\ndef cli():\n    \"\"\"My Studio Kitsu Tool\"\"\"\n    pass\n\n@cli.group()\ndef production():\n    \"\"\"Commands for managing productions\"\"\"\n    pass\n\n@production.command()\n@click.option('--name', help='Filter by name')\ndef list(name):\n    \"\"\"List productions\"\"\"\n    click.echo(f\"Listing productions: {name}\")\n\nif __name__ == '__main__':\n    cli()\n\u003C/code>\u003C/pre>\u003Cp>This snippet alone gives you a help menu for free. If the user types \u003Ccode>kitsu-cli --help\u003C/code>, they see the documentation. This is developer empathy: building tools that teach the user how to use them.\u003C/p>\u003Cp>\u003Cstrong>To run the CLI\u003C/strong>, you invoke it like any regular Python program:\u003C/p>\u003Cpre>\u003Ccode class=\"language-py\">python3 cli.py production list\n\u003C/code>\u003C/pre>\u003Chr>\u003Ch2 id=\"2-adding-gazu-features\">2. Adding Gazu Features\u003C/h2>\u003Cp>Now that we have the skeleton, we need the muscle. 
Kitsu provides a fantastic Python client called \u003Cstrong>Gazu\u003C/strong>.\u003C/p>\u003Cp>If you haven't used Gazu before, it is the bridge between your script and your Kitsu server.\u003C/p>\u003Cp>The first hurdle in any pipeline tool is \u003Cstrong>authentication\u003C/strong>. You do not want your artists hard-coding their passwords into scripts. A robust CLI checks if a session already exists. If not, it prompts the user to log in once and saves the token locally. For the sake of simplicity, we'll just hardcode our authentication logic:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import gazu\n\ngazu.set_host(\"http://localhost/api\")\nuser = gazu.log_in(\"admin@example.com\", \"mysecretpassword\")\n\u003C/code>\u003C/pre>\u003Cp>Once authenticated, we can flesh out that \u003Ccode>list\u003C/code> command we wrote earlier so it actually queries Kitsu:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">@production.command()\n@click.option('--name', help='Filter by name')\ndef list(name):\n    \"\"\"List productions\"\"\"\n    for project in gazu.project.all_projects():\n        if name is None or name in project[\"name\"]:\n            click.echo(project[\"name\"])\n\u003C/code>\u003C/pre>\u003Cp>No need to open a browser, wait for the Vue app to load, and filter the view. \u003Cstrong>This script returns raw data instantly.\u003C/strong>\u003C/p>\u003Chr>\u003Ch2 id=\"3-interactive-interface\">3. Interactive Interface\u003C/h2>\u003Cp>While command flags (like \u003Ccode>--name test\u003C/code>) are great, \u003Cstrong>it would be a much better experience to pick productions from an interactive list\u003C/strong>.\u003C/p>\u003Cp>Instead of forcing the user to type the exact name of a sequence (which they will inevitably misspell), we can make our CLI smarter by adding prompts. 
If the user forgets to supply an argument, you just ask them for it.\u003C/p>\u003Cp>A library like \u003Ccode>questionary\u003C/code> is great for this because it adds self-documented, interactive selection lists to the terminal.\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import questionary\n\n@production.command()\ndef select():\n    \"\"\"Interactively pick a production\"\"\"\n    productions = gazu.project.all_projects()\n\n    selected_name = questionary.select(\n        \"Which project are you working on?\",\n        choices=[p[\"name\"] for p in productions],\n    ).ask()\n\n    click.echo(f\"You selected {selected_name}. Loading assets...\")\n\n\u003C/code>\u003C/pre>\u003Cp>This tiny addition changes the user experience from \"scary hacker tool\" to \"helpful assistant.\" It reduces error rates to near zero because the user can only select valid options retrieved directly from Kitsu.\u003C/p>\u003Chr>\u003Ch2 id=\"4-the-single-executable-binary\">4. The Single Executable Binary\u003C/h2>\u003Cp>Last but not least, \u003Cstrong>we need to solve the \"It doesn't work on my laptop\" problem\u003C/strong>. We have a Python script with dependencies: \u003Ccode>gazu\u003C/code>, \u003Ccode>click\u003C/code>, \u003Ccode>questionary\u003C/code>, etc.\u003C/p>\u003Cp>To run this on a freelancer's machine, they would normally need to install Python, or maybe create a virtual environment, and \u003Ccode>pip install\u003C/code> the requirements. 
To eliminate all those steps, we can use \u003Ccode>PyInstaller\u003C/code>.\u003C/p>\u003Cpre>\u003Ccode class=\"language-sh\">python3 -m pip install pyinstaller\n\u003C/code>\u003C/pre>\u003Cp>PyInstaller analyzes your Python script, finds every library you imported, bundles the Python interpreter itself, and wraps it all into a single \u003Ccode>.exe\u003C/code> file (on Windows) or executable binary (on Linux/macOS).\u003C/p>\u003Cp>Navigate to your script's folder in your terminal and run:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">python3 -m PyInstaller --onefile --name kitsu-cli cli.py\n\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>--onefile\u003C/code>: This flag tells PyInstaller to bundle everything into a single file, rather than a folder of loose dependencies.\u003C/li>\u003Cli>\u003Ccode>--name\u003C/code>: The name of your final binary file.\u003C/li>\u003C/ul>\u003Cp>After the process finishes, check the \u003Ccode>dist/\u003C/code> folder. You will find a file named \u003Ccode>kitsu-cli\u003C/code> (or \u003Ccode>kitsu-cli.exe\u003C/code>).\u003C/p>\u003Cp>You can now take this file, put it on a USB drive, email it, or put it on a network drive. An artist can drag it to their desktop and run it, as long as it was compiled for the same OS and CPU architecture (macOS, Windows, etc.). They do not need Python installed. They do not need to install Gazu manually. 
It just works:\u003C/p>\u003Cpre>\u003Ccode class=\"language-sh\">./kitsu-cli production list\n\u003C/code>\u003C/pre>\u003Cp>But don't take my word for it: try it out yourself by \u003Ca href=\"https://github.com/cgwire/blog-tutorials/tree/main/kitsu-cli?ref=blog.cg-wire.com\">cloning our GitHub repository\u003C/a>.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-f4c09502-e96e-4692-8fc7-d4dd59d6482c.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1319\" height=\"913\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/01/data-src-image-f4c09502-e96e-4692-8fc7-d4dd59d6482c.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/01/data-src-image-f4c09502-e96e-4692-8fc7-d4dd59d6482c.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-f4c09502-e96e-4692-8fc7-d4dd59d6482c.png 1319w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>PyInstaller doesn't cross-compile, so if you need binaries for other OS targets, build them natively on each platform, for example with GitHub Actions runners.\u003C/p>\u003Chr>\u003Ch2 id=\"cli-example-the-render-fetcher\">CLI Example: The \"Render Fetcher\"\u003C/h2>\u003Cp>Let's switch to a more pipeline-centric scenario.\u003C/p>\u003Cp>Picture a workflow where you're \u003Ca href=\"https://blog.cg-wire.com/getting-started-with-blender-rendering/\">managing distributed rendering\u003C/a> across multiple machines. Each render node needs to regularly pull new work from Kitsu: shots marked \u003Cem>TODO\u003C/em> for rendering, along with their corresponding preview \u003Ccode>.blend\u003C/code> files. 
These machines are headless, locked down, and deliberately minimal—no Python installs, no virtual environments, no dependency juggling.\u003C/p>\u003Cp>What you want is a single executable you can drop onto any server and run as a cron job or service:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">./kitsu-cli pull MechaFight /home/user/flamenco/jobs\n\u003C/code>\u003C/pre>\u003Cp>The corresponding code would look like this:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import os\n\nimport click\nimport gazu\nimport questionary\n\ngazu.set_host(\"http://localhost/api\")\nuser = gazu.log_in(\"admin@example.com\", \"mysecretpassword\")\n\n\n@click.group()\ndef cli():\n    \"\"\"My Studio Kitsu Tool\"\"\"\n    pass\n\n\n@cli.command()\n@click.argument(\"project_name\", required=True)\n@click.argument(\"output_path\", required=True)\ndef pull(project_name, output_path):\n    click.echo(f\"Fetching TODO render tasks for project: {project_name}\")\n\n    project = gazu.project.get_project_by_name(project_name)\n\n    tasks = gazu.task.all_tasks_for_project(project)\n\n    rendering = gazu.task.get_task_type_by_name(\"Rendering\")\n    todo = gazu.task.get_task_status_by_name(\"todo\")\n\n    render_tasks = [\n        t\n        for t in tasks\n        if t[\"task_type_id\"] == rendering[\"id\"] and t[\"task_status_id\"] == todo[\"id\"]\n    ]\n\n    for task in render_tasks:\n        files = gazu.files.get_all_preview_files_for_task(task)\n        size = len(files)\n\n        if size &gt; 0:\n            latest = files[size - 1]\n            if latest[\"extension\"] == \"blend\":\n                target_path = os.path.join(\n                    output_path, latest[\"name\"] + \".\" + latest[\"extension\"]\n                )\n                gazu.files.download_preview_file(latest, target_path)\n\n\nif __name__ == \"__main__\":\n    cli()\n\u003C/code>\u003C/pre>\u003Col>\u003Cli>\u003Cstrong>Query Kitsu\u003C/strong> - The CLI connects to Kitsu (via Gazu) 
and retrieves all rendering tasks with a \u003Cem>TODO\u003C/em> status for a given project.\u003C/li>\u003Cli>\u003Cstrong>Filter tasks\u003C/strong> - It filters tasks that are marked \u003Ccode>todo\u003C/code> and have an associated preview file (in this case, a \u003Ccode>.blend\u003C/code> file).\u003C/li>\u003Cli>\u003Cstrong>Download assets\u003C/strong> - For each task, the CLI downloads the corresponding preview \u003Ccode>.blend\u003C/code> file to the specified output path on disk.\u003C/li>\u003Cli>\u003Cstrong>Render\u003C/strong> - Once downloaded, the files are ready for Blender to pick up, manually or via an automated render orchestrator like Flamenco.\u003C/li>\u003C/ol>\u003Cp>When this CLI is compiled into a single binary, it becomes trivial to deploy. You can drop it onto Linux render nodes and run it from cron or systemd without installing Python or dependencies. Every server pulls work the same way. Folder structures are consistent. Task state comes straight from Kitsu. And your render farm stays focused on rendering.\u003C/p>\u003Cp>Again, check it out in \u003Ca href=\"https://github.com/cgwire/blog-tutorials/tree/main/kitsu-cli?ref=blog.cg-wire.com\">the corresponding GitHub repository\u003C/a>.\u003C/p>\u003Chr>\u003Ch2 id=\"conclusion\">Conclusion\u003C/h2>\u003Cp>\u003Cstrong>Creating your own Kitsu CLI doesn't have to be complex.\u003C/strong> By wrapping the Gazu library in a user-friendly CLI and freezing it with PyInstaller, you scale your pipeline. 
You remove the technical friction of environment management and let your artists focus on what they do best: creating beautiful animations.\u003C/p>\u003Cp>Learn more about combining Kitsu and Blender scripting by \u003Ca href=\"https://blog.cg-wire.com/\">subscribing to our blog\u003C/a>!\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":345,"comment_id":346,"feature_image":347,"featured":29,"visibility":30,"created_at":348,"updated_at":349,"custom_excerpt":350,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":351,"primary_tag":352,"url":353,"excerpt":350,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":354},"8ece75b9-d27d-4edb-b152-e03c93326889","695b8678c665470001df4da3","https://images.unsplash.com/photo-1484417894907-623942c8ee29?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDIxfHxzb2Z0d2FyZSUyMGRldmVsb3BtZW50fGVufDB8fHx8MTc2NzYwNzcwNHww&ixlib=rb-4.1.0&q=80&w=2000","2026-01-05T10:38:00.000+01:00","2026-02-20T06:04:43.000+01:00","Learn how to package Kitsu workflows into a standalone command-line tool using 
Python, Gazu, and PyInstaller. This guide covers CLI design, interactive prompts, and compiling a single executable for reliable deployment across studios and render farms.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/kitsu-cli-single-binary/","\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@emilep?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Emile Perron\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/kitsu-cli-single-binary","2026-01-12T10:00:37.000+01:00",{"title":340},"kitsu-cli-single-binary","posts/kitsu-cli-single-binary",[361],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"tzpUcwj2c_RrpCjFHR0EJ7oQimUwcACObhVr3BaAavI",{"id":364,"title":365,"authors":366,"body":7,"description":7,"extension":8,"html":368,"meta":369,"navigation":12,"path":380,"published_at":381,"seo":382,"slug":383,"stem":384,"tags":385,"__hash__":388,"uuid":370,"comment_id":371,"feature_image":372,"featured":29,"visibility":30,"created_at":373,"updated_at":374,"custom_excerpt":375,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":376,"primary_tag":377,"url":378,"excerpt":375,"reading_time":43,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":379},"ghost/posts:blender-shaders-explained.json","Working with Blender Shaders (2026): Nodes & Scripting",[367],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">🎨\u003C/div>\u003Cdiv class=\"kg-callout-text\">Shaders are not magic, they’re visual recipes you can control and automate.\u003C/div>\u003C/div>\u003Cp>It's easy to panic the first time you hear the word \u003Cem>shader\u003C/em>. 
Someone mentions GLSL, GPUs start sweating, and suddenly you're imagining walls of unreadable code and your computer fan screaming for mercy.\u003C/p>\u003Cp>Here's the part no one tells you early enough: you don't need to be a mathematician or a graphics programmer to work with shaders. You're not required to write low-level GPU code or understand every equation behind light physics. Blender doesn't expect that from you. Instead, it gives you nodes: visual building blocks that behave more like Lego than code. You plug things together, see the result instantly, and adjust until it feels right.\u003C/p>\u003Cp>Think of shaders less as code and more as recipes. You're mixing values, textures, and logic to describe how a surface should react to light. Sometimes you'll follow a known recipe, sometimes you'll improvise, and sometimes you'll break things just to see what happens. That's how you'll learn.\u003C/p>\u003Cp>\u003Cstrong>In this article, we're going to demystify what shading actually is, strip away the fear around it, and explore how to manipulate shaders procedurally using Blender's node system or a bit of scripting for an animation pipeline.\u003C/strong> By the end, shading won't feel like a forbidden room anymore.\u003C/p>\u003Chr>\u003Ch2 id=\"whats-a-shader\">\u003Cstrong>What's a Shader?\u003C/strong>\u003C/h2>\u003Cp>To understand shaders, we have to stop thinking about \"colors\" and start thinking about \"physics.\"\u003C/p>\u003Cp>\u003Ca href=\"https://blog.cg-wire.com/hard-surface-modeling/\">\u003Cu>If you paint a wooden chair red in the real world\u003C/u>\u003C/a>, you aren't just changing its color. You are adding a layer of material that interacts with light. 
That red paint has a specific roughness (how much it scatters light), a specific specularity (how shiny it is), and a specific refractive index.\u003C/p>\u003Cp>\u003Cstrong>A shader is a set of instructions that tells the computer how to simulate that light interaction.\u003C/strong>\u003C/p>\u003Cfigure class=\"kg-card kg-image-card kg-card-hascaption\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-296bf085-924e-40f9-92fc-346c5dc31de0.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"1067\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/01/data-src-image-296bf085-924e-40f9-92fc-346c5dc31de0.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/01/data-src-image-296bf085-924e-40f9-92fc-346c5dc31de0.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-296bf085-924e-40f9-92fc-346c5dc31de0.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003Cfigcaption>\u003Ci>\u003Cem class=\"italic\" style=\"white-space: pre-wrap;\">Source: TurboSquid\u003C/em>\u003C/i>\u003C/figcaption>\u003C/figure>\u003Cp>When a ray of light from your digital sun hits the surface of your object, the shader steps in and asks:\u003C/p>\u003Cul>\u003Cli>\"Are you bouncing off?\" (Reflection)\u003C/li>\u003Cli>\"Are you going through?\" (Transmission/Glass)\u003C/li>\u003Cli>\"Are you getting trapped inside?\" (Absorption)\u003C/li>\u003Cli>\"Are you scattering under the skin?\" (Subsurface Scattering)\u003C/li>\u003C/ul>\u003Cp>If you're modeling a wet cobblestone street, a simple image texture makes it look like a flat photo of a street. A shader tells the renderer that the water in the cracks is perfectly reflective and smooth, while the stone is rough and dull. 
It tells the light to bounce differently off the wet parts than the dry parts.\u003Ca href=\"https://blog.cg-wire.com/how-light-shapes-emotion-in-animation/\"> \u003Cu>Light shapes reality.\u003C/u>\u003C/a>\u003C/p>\u003Chr>\u003Ch2 id=\"why-you-must-master-shader-nodes\">\u003Cstrong>Why You Must Master Shader Nodes\u003C/strong>\u003C/h2>\u003Cp>You might ask, \"Why not just download textures?\"\u003C/p>\u003Cp>Photo-scanning is great, but procedural shading gives you three superpowers that static images cannot match.\u003C/p>\u003Cp>When you use an image texture (a JPG or PNG), you are limited by pixels. Zoom in too close to a wall, and it becomes blurry.\u003C/p>\u003Cp>Shaders use math. \u003Cstrong>Math has no resolution limit.\u003C/strong> You can zoom into a procedural scratch on metal until you see the microscopic grooves, and it will remain crisp. Even if you have a model you're proud of, with clean topology and nice proportions, it'll still look flat without shaders.\u003C/p>\u003Cp>Blender's shader nodes make it \u003Cstrong>easy to tweak your textures in a consistent way\u003C/strong>. Let's say you are texturing a spaceship: you paint rust onto the hull using a texture map. Your Art Director walks in and says, \"Great, but the ship looks too old. Reduce the rust by 50%.\" If you hand-painted that, you have to start over or spend hours erasing. With shader nodes, you simply locate the \"Rust Amount\" value you created and slide it from \u003Ccode>1.0\u003C/code> to \u003Ccode>0.5\u003C/code>. Done.\u003C/p>\u003Cp>Static textures look frozen, but \u003Cstrong>shaders can also be animated\u003C/strong>. You can build a shader setup where moss grows on a rock over time based on the frame number, or where a shield glows brighter as it gets hit. 
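That "Rust Amount" slider, and the frame-driven moss example, both come down to a single factor feeding a linear interpolation. A stdlib-only sketch of the idea (the colors and function names are hypothetical, not Blender API calls):

```python
def mix(color_a, color_b, factor):
    """Blend two RGB colors, like a Mix Color node driven by one factor."""
    return tuple(a + (b - a) * factor for a, b in zip(color_a, color_b))

clean_hull = (0.60, 0.60, 0.65)  # hypothetical base metal color
rust = (0.45, 0.20, 0.10)        # hypothetical rust color

# "Reduce the rust by 50%": slide the single factor from 1.0 to 0.5.
half_rusted = mix(clean_hull, rust, 0.5)

def rust_amount(frame, start=1, end=100):
    """Animate the factor: ramp from 0 to 1 over a frame range, clamped."""
    return min(max((frame - start) / (end - start), 0.0), 1.0)

print(half_rusted, rust_amount(50))
```

In Blender the same structure appears as a Mix Color node whose Factor socket is exposed as one value (or driven by the frame number), which is exactly why a note from the Art Director becomes a one-slider change instead of a repaint.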
Shaders allow your materials to react to the environment.\u003C/p>\u003Cp>For all these reasons, learning to master shader nodes is an incredible unlock for professional artists working with tight deadlines.\u003C/p>\u003Chr>\u003Ch2 id=\"the-different-types-of-shader-nodes\">\u003Cstrong>The Different Types of Shader Nodes\u003C/strong>\u003C/h2>\u003Cp>Blender's node system works like a flow chart. You click \u003Ccode>Add\u003C/code> to add nodes and connect them together. Data flows from left to right. To understand how to leverage each feature, you need to understand the different node types available.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-2573386d-adc9-4979-a848-89d1cae3645e.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"900\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/01/data-src-image-2573386d-adc9-4979-a848-89d1cae3645e.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/01/data-src-image-2573386d-adc9-4979-a848-89d1cae3645e.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-2573386d-adc9-4979-a848-89d1cae3645e.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Ch3 id=\"1-input-nodes\">\u003Cstrong>1. 
Input Nodes\u003C/strong>\u003C/h3>\u003Cp>Input nodes provide data from the scene, object, geometry, or user-defined values into the shader network.\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Texture Coordinate\u003C/strong> - Provides UV, object, generated, and camera coordinates + use UV output to correctly map an image texture onto a UV-unwrapped model\u003C/li>\u003Cli>\u003Cstrong>Geometry\u003C/strong> - Outputs geometric information such as normals and pointiness + use Pointiness to create dirt accumulation in crevices\u003C/li>\u003Cli>\u003Cstrong>Fresnel\u003C/strong> - Calculates view-angle-based reflectivity + use it to create stronger reflections on the edges of glass\u003C/li>\u003Cli>\u003Cstrong>Object Info\u003C/strong> - Supplies per-object data like random values or object color + use Random output to give each object a slightly different color\u003C/li>\u003Cli>\u003Cstrong>Value\u003C/strong> - Outputs a constant numerical value + use it to control roughness with a single slider\u003C/li>\u003Cli>\u003Cstrong>Color\u003C/strong> - Outputs a constant color value + use it as a base color for a stylized material\u003C/li>\u003C/ul>\u003Ch3 id=\"2-output-nodes\">\u003Cstrong>2. Output Nodes\u003C/strong>\u003C/h3>\u003Cp>Output nodes define the final result of a shader and connect the node network to Blender’s rendering system.\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Material Output\u003C/strong> - Outputs the final surface, volume, and displacement data + connect a Principled BSDF to the Surface input\u003C/li>\u003C/ul>\u003Ch3 id=\"3-shader-nodes\">\u003Cstrong>3. 
Shader Nodes\u003C/strong>\u003C/h3>\u003Cp>Shader nodes define how light interacts with a surface, including reflection, refraction, and emission.\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Principled BSDF\u003C/strong> - Physically based all-in-one surface shader + create realistic metal, plastic, or skin materials\u003C/li>\u003Cli>\u003Cstrong>Diffuse BSDF\u003C/strong> - Produces matte, non-reflective surfaces + use for chalk, clay, or unpolished stone\u003C/li>\u003Cli>\u003Cstrong>Glossy BSDF\u003C/strong> - Produces mirror-like reflections + use for polished metal or mirrors\u003C/li>\u003Cli>\u003Cstrong>Glass BSDF\u003C/strong> - Combines refraction and reflection + use for windows or glass bottles\u003C/li>\u003Cli>\u003Cstrong>Emission\u003C/strong> - Emits light from a surface + use for screens, LEDs, or neon signs\u003C/li>\u003Cli>\u003Cstrong>Mix Shader\u003C/strong> - Blends two shader outputs + mix diffuse and glossy shaders for worn metal\u003C/li>\u003C/ul>\u003Ch3 id=\"4-displacement-nodes\">\u003Cstrong>4. Displacement Nodes\u003C/strong>\u003C/h3>\u003Cp>Displacement nodes alter surface detail by modifying geometry or shading normals.\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Displacement\u003C/strong> - Performs true geometric displacement + create real depth in a brick wall using a height map (Cycles)\u003C/li>\u003Cli>\u003Cstrong>Bump\u003C/strong> - Simulates surface detail using normal perturbation + add fine scratches without increasing geometry\u003C/li>\u003Cli>\u003Cstrong>Normal Map\u003C/strong> - Converts normal textures into usable normal data + apply a baked normal map from a game asset\u003C/li>\u003C/ul>\u003Ch3 id=\"5-color-nodes\">\u003Cstrong>5. 
Color Nodes\u003C/strong>\u003C/h3>\u003Cp>Color nodes adjust, blend, and transform color information within the shader network.\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Mix Color\u003C/strong> - Blends two colors or textures + mix a dirt texture over a clean base color\u003C/li>\u003Cli>\u003Cstrong>RGB Curves\u003C/strong> - Adjusts contrast and color balance + increase texture contrast without re-editing the image\u003C/li>\u003Cli>\u003Cstrong>Hue/Saturation\u003C/strong> - Modifies hue, saturation, and value + tint a material blue without repainting textures\u003C/li>\u003Cli>\u003Cstrong>Invert\u003C/strong> - Reverses color values + invert a roughness map to create a glossiness map\u003C/li>\u003C/ul>\u003Ch3 id=\"6-texture-nodes\">\u003Cstrong>6. Texture Nodes\u003C/strong>\u003C/h3>\u003Cp>Texture nodes generate or load image and procedural textures for materials.\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Image Texture\u003C/strong> - Loads external image files + use an albedo map for a PBR material\u003C/li>\u003Cli>\u003Cstrong>Noise Texture\u003C/strong> - Generates smooth procedural noise + add subtle roughness variation to plastic\u003C/li>\u003Cli>\u003Cstrong>Voronoi Texture\u003C/strong> - Produces cell-based patterns + create cracks, scales, or stone tiles\u003C/li>\u003Cli>\u003Cstrong>Gradient Texture\u003C/strong> - Outputs smooth gradients + use as a mask for blending materials\u003C/li>\u003C/ul>\u003Ch3 id=\"7-utility-nodes\">\u003Cstrong>7. 
Utility Nodes\u003C/strong>\u003C/h3>\u003Cp>Utility nodes perform mathematical operations and data conversions.\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Mapping\u003C/strong> - Transforms texture coordinates + scale and rotate a texture pattern\u003C/li>\u003Cli>\u003Cstrong>Math\u003C/strong> - Performs numerical operations + clamp roughness values to prevent extremes\u003C/li>\u003Cli>\u003Cstrong>Vector Math\u003C/strong> - Performs vector-based calculations + modify normal or direction vectors\u003C/li>\u003Cli>\u003Cstrong>Clamp\u003C/strong> - Limits values to a specified range + prevent over-bright emission values\u003C/li>\u003C/ul>\u003Ch3 id=\"8-group-nodes\">\u003Cstrong>8. Group Nodes\u003C/strong>\u003C/h3>\u003Cp>Group nodes package multiple nodes into reusable, organized components.\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Node Group\u003C/strong> - Encapsulates complex node setups + create a reusable “Rust Shader” used across multiple assets\u003C/li>\u003C/ul>\u003Ch3 id=\"9-layout-nodes\">\u003Cstrong>9. Layout Nodes\u003C/strong>\u003C/h3>\u003Cp>Layout nodes organize the node graph visually and do not affect rendering output.\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Frame\u003C/strong> - Visually groups related nodes + frame all texture-related nodes together\u003C/li>\u003Cli>\u003Cstrong>Reroute\u003C/strong> - Redirects node connections for clarity + clean up overlapping noodle connections\u003C/li>\u003C/ul>\u003Chr>\u003Ch2 id=\"the-next-level-scripting-your-shaders\">\u003Cstrong>The Next Level: Scripting Your Shaders\u003C/strong>\u003C/h2>\u003Cp>When you get comfortable connecting nodes manually, you can make wood, plastic, gold, or any kind of material. But \u003Cstrong>what if you have a scene with 500 unique objects, and you need to generate a random variation\u003C/strong> of a worn metal material for each one with some tweaks?\u003C/p>\u003Cp>This is where Python scripting becomes key. 
You can use it to ensure every material in your project follows the same node structure. You can write a script that says, \"Make this material red, but vary the hue slightly by a random number for every object.\"\u003C/p>\u003Cp>Let's get our hands dirty. We are going to write a Python script that creates a new material, adds a Principled BSDF, generates a noise texture to control the color, and links it all up.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-blue\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example integration showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blog-tutorials/tree/main/blender-shaders?ref=blog.cg-wire.com\">https://github.com/cgwire/blog-tutorials/tree/main/blender-shaders\u003C/a>\u003C/div>\u003C/div>\u003Cp>Open the \u003Cem>Scripting\u003C/em> tab in Blender, create a new text block, and follow along.\u003C/p>\u003Cp>First, we need to import the library and tell Blender we want to create a new material.\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import random\n\nimport bpy\n\ndef create_procedural_material(mat_name):\n&nbsp;&nbsp;&nbsp;&nbsp;mat = bpy.data.materials.new(name=mat_name)\n\n&nbsp;&nbsp;&nbsp;&nbsp;mat.use_nodes = True\n&nbsp;&nbsp;&nbsp;&nbsp;nodes = mat.node_tree.nodes\n&nbsp;&nbsp;&nbsp;&nbsp;links = mat.node_tree.links\n\n&nbsp;&nbsp;&nbsp;&nbsp;nodes.clear()\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Now, let's add the nodes. 
Think of this as pulling items out of the \"Add\" menu programmatically:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">    node_output = nodes.new(type='ShaderNodeOutputMaterial')\n&nbsp;&nbsp;&nbsp;&nbsp;node_output.location = (400, 0)\n\n&nbsp;&nbsp;&nbsp;&nbsp;node_principled = nodes.new(type='ShaderNodeBsdfPrincipled')\n&nbsp;&nbsp;&nbsp;&nbsp;node_principled.location = (0, 0)\n\n&nbsp;&nbsp;&nbsp;&nbsp;node_principled.inputs['Roughness'].default_value = 0.2\n&nbsp;&nbsp;&nbsp;&nbsp;node_principled.inputs['Metallic'].default_value = 1.0\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Now, let's make it interesting. We will add a Noise Texture and a ColorRamp to generate a random color pattern.\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">    node_noise = nodes.new(type='ShaderNodeTexNoise')\n&nbsp;&nbsp;&nbsp;&nbsp;node_noise.location = (-600, 0)\n&nbsp;&nbsp;&nbsp;&nbsp;node_noise.inputs['Scale'].default_value = 15.0\n&nbsp;&nbsp;&nbsp;&nbsp;node_noise.inputs['Detail'].default_value = 10.0\n\n&nbsp;&nbsp;&nbsp;&nbsp;node_ramp = nodes.new(type='ShaderNodeValToRGB')\n&nbsp;&nbsp;&nbsp;&nbsp;node_ramp.location = (-300, 0)\n\n&nbsp;&nbsp;&nbsp;&nbsp;node_ramp.color_ramp.elements[0].color = (0.1, 0.1, 0.1, 1)\n\n&nbsp;&nbsp;&nbsp;&nbsp;rand_r = random.random()\n&nbsp;&nbsp;&nbsp;&nbsp;rand_g = random.random()\n&nbsp;&nbsp;&nbsp;&nbsp;rand_b = random.random()\n&nbsp;&nbsp;&nbsp;&nbsp;node_ramp.color_ramp.elements[1].color = (rand_r, rand_g, rand_b, 1)\u003C/code>\u003C/pre>\u003Cp>\u003Cbr>\u003C/p>\u003Cp>Finally, we have to wire them together and apply this new shader to the current context (the default cube):\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">    links.new(node_noise.outputs['Fac'], node_ramp.inputs['Fac'])\n\n&nbsp;&nbsp;&nbsp;&nbsp;links.new(node_ramp.outputs['Color'], node_principled.inputs['Base Color'])\n\n&nbsp;&nbsp;&nbsp;&nbsp;links.new(node_principled.outputs['BSDF'], 
node_output.inputs['Surface'])\n\n&nbsp;&nbsp;&nbsp;&nbsp;return mat\n\nmy_new_mat = create_procedural_material(\"SciFi_Metal_Random\")\n\nbpy.context.object.data.materials.append(my_new_mat)\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Copy that code into your text editor and press \"Run Script\" (the Play button). Look at your active object. It is now a metallic surface with a noise pattern of a random color. Run it again (change the name in the function call), and you get a different color.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-99dc12fe-068b-40f7-9f10-ef0c5e000ba0.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1268\" height=\"827\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2026/01/data-src-image-99dc12fe-068b-40f7-9f10-ef0c5e000ba0.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2026/01/data-src-image-99dc12fe-068b-40f7-9f10-ef0c5e000ba0.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2026/01/data-src-image-99dc12fe-068b-40f7-9f10-ef0c5e000ba0.png 1268w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Congratulations, \u003Cstrong>you just created a procedural material generator!\u003C/strong>\u003C/p>\u003Cp>Have a look at\u003Ca href=\"https://github.com/cgwire/blog-tutorials/tree/main/blender-shaders?ref=blog.cg-wire.com\" rel=\"noreferrer\"> \u003Cu>our corresponding Github repository\u003C/u>\u003C/a> to play with the code!\u003C/p>\u003Chr>\u003Ch2 id=\"conclusion\">\u003Cstrong>Conclusion\u003C/strong>\u003C/h2>\u003Cp>Shaders are more than just coloring within the lines. They are the skin of your digital world. 
\u003Cstrong>They tell the story of the object\u003C/strong>: how old it is, where it has been, and what it is made of.\u003C/p>\u003Cp>By understanding the logic of shader nodes, \u003Cstrong>you can create anything from photorealistic skin to stylized cartoon fire\u003C/strong>. And by taking that leap into Python scripting, you unlock the ability to \u003Cstrong>work faster and smarter\u003C/strong>, automating the tedious parts of the job so you can focus on the art.\u003C/p>\u003Cp>But this is just one piece of the puzzle. You can change the surface, but what about the shape? The next logical step in your journey is \u003Cem>Geometry Nodes\u003C/em>. Just as Shader Nodes control the color and light procedurally, Geometry Nodes control the mesh and structure programmatically.\u003Ca href=\"https://blog.cg-wire.com/blender-scripting-geometry-nodes-2/\"> \u003Cu>Have a look at our dedicated article\u003C/u>\u003C/a> to create entire scenes from code!\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 
😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":370,"comment_id":371,"feature_image":372,"featured":29,"visibility":30,"created_at":373,"updated_at":374,"custom_excerpt":375,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":376,"primary_tag":377,"url":378,"excerpt":375,"reading_time":43,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":379},"67a0028f-66b2-4116-ac34-040c8a14d052","695b7d1dc665470001df4d80","https://images.unsplash.com/photo-1664526936810-ec0856d31b92?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDd8fHNoYWRlciUyMG5vZGVzfGVufDB8fHx8MTc2NzYwMzU4M3ww&ixlib=rb-4.1.0&q=80&w=2000","2026-01-05T09:58:05.000+01:00","2026-03-26T09:56:11.000+01:00","Learn how Blender shaders really work, from node-based materials to procedural shading and Python-driven automation. 
This guide breaks down shader concepts, node types, and scripting techniques to help artists build flexible, production-ready materials.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/blender-shaders-explained/","\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@guerrillabuzz?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">GuerrillaBuzz\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/blender-shaders-explained","2026-01-05T10:35:18.000+01:00",{"title":365},"blender-shaders-explained","posts/blender-shaders-explained",[386,387],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},{"id":333,"name":334,"slug":335,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":336},"L9nHGKFoNkSSxbDZv_Z2mmZLxxHmhill232zPkpfpCE",{"id":390,"title":391,"authors":392,"body":7,"description":7,"extension":8,"html":394,"meta":395,"navigation":12,"path":406,"published_at":407,"seo":408,"slug":409,"stem":410,"tags":411,"__hash__":414,"uuid":396,"comment_id":397,"feature_image":398,"featured":29,"visibility":30,"created_at":399,"updated_at":400,"custom_excerpt":401,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":402,"primary_tag":403,"url":404,"excerpt":401,"reading_time":150,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":405},"ghost/posts:blender-programmatic-rendering.json","Programmatic Video Rendering in Blender Using Python 
(2026)",[393],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">🧠\u003C/div>\u003Cdiv class=\"kg-callout-text\">Turn Blender into a programmable rendering engine with just a few lines of Python.\u003C/div>\u003C/div>\u003Cp>Learning Blender as a 3D artist usually means learning about its addon ecosystem. Tasks that would take hours, like rigging a character, can be cut down to seconds with addons like Rigify. The same goes for most workflows, and we often end up asking ourselves the same recurring question: \"Can Blender do this automatically?\"\u003C/p>\u003Cp>The answer is yes. The key is the programming language Python.\u003C/p>\u003Cp>Blender includes a powerful built-in scripting engine, and with just a few lines of code, you can create objects, position cameras, and even trigger full renders.\u003C/p>\u003Cp>You won't need to pay for an addon if you know how to build one yourself. And at its core, an addon is just a script wrapped in a custom Blender user interface.\u003C/p>\u003Cp>If you've never scripted in Blender before, discovering the \u003Ccode>bpy\u003C/code> module feels like opening a secret door inside a tool you thought you already knew: suddenly, every part of the interface becomes programmable. You're not just clicking buttons anymore but giving instructions to build repeatable systems.\u003C/p>\u003Cp>One of the most important workflows you can automate is rendering, not only to make your pipeline faster but also to keep rendering settings consistent and predictable. In this tutorial, we'll implement a basic programmatic rendering system to automatically animate a 3D text object and turn it into a full HD video. 
We'll start from zero, exploring how to run Python for Blender and how to use it to control the scene. By the end, you'll have a good overview of how to automate common animation tasks.\u003C/p>\u003Chr>\u003Ch2 id=\"use-cases\">\u003Cstrong>Use Cases\u003C/strong>\u003C/h2>\u003Cp>Programmatic rendering unlocks a wide range of powerful workflows that go far beyond traditional manual scene building:\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Data-driven motion graphics\u003C/strong> — Animated charts, realtime API-driven broadcast graphics, or automatically generated social videos.\u003C/li>\u003Cli>\u003Cstrong>Generative art\u003C/strong> — Procedural patterns, noise fields, particle experiments, and algorithmic illustrations that evolve from code.\u003C/li>\u003Cli>\u003Cstrong>Batch-rendered variants\u003C/strong> — Personalized ads, product color variations, automated aspect-ratio crops, and bulk social asset generation.\u003C/li>\u003Cli>\u003Cstrong>Procedural 3D content\u003C/strong> — Terrain builders, parametric modeling, foliage/world population, and automated 3D asset variations.\u003C/li>\u003Cli>\u003Cstrong>Generative UI &amp; design systems\u003C/strong> — Dynamic SVGs, templated banners, and brand-consistent graphics rendered on demand.\u003C/li>\u003Cli>\u003Cstrong>VFX and animation scripting\u003C/strong> — Automated rig controls, crowd systems, particle population, and repeatable simulation setups.\u003C/li>\u003Cli>\u003Cstrong>Simulation visualizations\u003C/strong> — Fluid and smoke simulations, traffic and crowd dynamics, and scientific or physics-based renders.\u003C/li>\u003C/ul>\u003Cp>Many 3D modeling tasks are repetitive and time-consuming. 
By integrating them into an automated, script-driven pipeline, artists can focus more on creative worldbuilding while Python handles the tedious parts in the background.\u003C/p>\u003Cp>In any case, the development workflow is pretty much the same:\u003C/p>\u003Col>\u003Cli>\u003Cstrong>Setup\u003C/strong> - define needed input data and scene cleanup\u003C/li>\u003Cli>\u003Cstrong>Geometry generation\u003C/strong> - modeling the actual assets needed for the task\u003C/li>\u003Cli>\u003Cstrong>Animation\u003C/strong> - defining the transforms and their associated keyframes\u003C/li>\u003Cli>\u003Cstrong>Output\u003C/strong> - the desired assets (3D models, video, image sequence, etc.)\u003C/li>\u003C/ol>\u003Cp>This is exactly the path we're going to take for our 3D text video rendering example.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-blue\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example integration showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blender-programmatic-rendering?ref=blog.cg-wire.com\">https://github.com/cgwire/blender-programmatic-rendering\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"1-scene-setup\">\u003Cstrong>1. Scene Setup\u003C/strong>\u003C/h2>\u003Cp>Before we dive into generating scenes, we first need a clean starting point. When you open Blender, it loads a default scene usually containing a cube, a camera, and a light. For this tutorial, we'll only need the latter two.\u003C/p>\u003Cp>The first step in using Blender programmatically is importing the \u003Ccode>bpy\u003C/code> module. 
This gives you full access to Blender's data, tools, and rendering pipeline directly from Python:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import bpy\n\ncube = bpy.data.objects.get(\"Cube\")\nif cube is not None:\n    bpy.data.objects.remove(cube, do_unlink=True)\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Here, we look up the default \u003Cstrong>Cube\u003C/strong> object and remove it if it exists, so the script also runs safely in a scene that has no cube. The \u003Ccode>do_unlink=True\u003C/code> parameter makes sure Blender not only deletes the object but also unlinks it from any scene that might reference it.\u003C/p>\u003Chr>\u003Ch2 id=\"2-manipulating-3d-text\">\u003Cstrong>2. Manipulating 3D Text\u003C/strong>\u003C/h2>\u003Cp>Next, we add a 3D text object to the scene to serve as the core element we'll manipulate and eventually render programmatically.\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bpy.ops.object.text_add(location=(0, 0, 0))\ntext_obj = bpy.context.object\ntext_obj.name = \"CaptionText\"\ntext_obj.data.body = \"Hello world!\"\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>This code snippet creates a new text object at the world origin, assigns it a readable name, and sets its displayed text to \u003Ccode>\"Hello world!\"\u003C/code>.\u003C/p>\u003Cp>To give the text more presence in the scene, we can adjust its geometry. 
Increasing the size and adding extrusion make the text fully 3D, and centering it on both axes simplifies future transformations and animations:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">text_obj.data.size = 0.6\ntext_obj.data.extrude = 0.05\ntext_obj.data.align_x = \"CENTER\"\ntext_obj.data.align_y = \"CENTER\"\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>With these adjustments, the text is cleanly centered, properly scaled, and ready for further processing.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-8cb519b5-e128-4bdd-9348-9aa0dfe2c36c.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"901\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/12/data-src-image-8cb519b5-e128-4bdd-9348-9aa0dfe2c36c.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/12/data-src-image-8cb519b5-e128-4bdd-9348-9aa0dfe2c36c.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-8cb519b5-e128-4bdd-9348-9aa0dfe2c36c.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"3-adding-keyframes\">\u003Cstrong>3. 
Adding Keyframes\u003C/strong>\u003C/h2>\u003Cp>We\u003Ca href=\"https://blog.cg-wire.com/stepped-animation/\"> \u003Cu>create a simple animation by inserting keyframes\u003C/u>\u003C/a> for the text position over time.\u003C/p>\u003Cp>First, we move our timeline cursor to frame 1, position the text at the starting location, and record that position with a keyframe:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bpy.context.scene.frame_set(1)\ntext_obj.location = (-4.0, 0.0, 1.0)\ntext_obj.keyframe_insert(data_path=\"location\", frame=1)\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Next, we advance to frame 40, shift the text along the X axis, and insert another keyframe to mark its new position:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bpy.context.scene.frame_set(40)\ntext_obj.location = (0.0, 0.0, 1.0)\ntext_obj.keyframe_insert(data_path=\"location\", frame=40)\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>With these two keyframes in place, Blender automatically interpolates the movement between them, creating a smooth animation as the text glides into the center of the frame.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-c33d7b37-264c-4c9f-a1ea-e8f2e2a39ff2.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"901\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/12/data-src-image-c33d7b37-264c-4c9f-a1ea-e8f2e2a39ff2.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/12/data-src-image-c33d7b37-264c-4c9f-a1ea-e8f2e2a39ff2.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-c33d7b37-264c-4c9f-a1ea-e8f2e2a39ff2.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 
id=\"4-video-rendering\">\u003Cstrong>4. Video Rendering\u003C/strong>\u003C/h2>\u003Cp>All we have left to do is\u003Ca href=\"https://blog.cg-wire.com/getting-started-with-blender-rendering/\"> \u003Cu>configure Blender's rendering settings\u003C/u>\u003C/a> and output the final video.\u003C/p>\u003Cp>The first choice is which rendering engine to use: \u003Cstrong>Eevee\u003C/strong> or \u003Cstrong>Cycles\u003C/strong>.\u003C/p>\u003Cp>Eevee is a real-time rasterization engine, making it extremely fast and ideal for previews or stylized animation. Cycles, on the other hand, is a physically based path tracer that produces more realistic lighting but requires much longer render times. For quick iteration and most automated workflows, Eevee is generally the better option:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bpy.context.scene.render.engine = \"BLENDER_EEVEE\"\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Note that on Blender 4.2 and newer, the Eevee identifier is \u003Ccode>\"BLENDER_EEVEE_NEXT\"\u003C/code>, so adjust the string to match your Blender version.\u003C/p>\u003Cp>Next, we specify the output resolution:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bpy.context.scene.render.resolution_x = 1920\nbpy.context.scene.render.resolution_y = 1080\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Then we set the frame rate and define the animation range. Here, a 60-frame shot at 24 fps:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bpy.context.scene.render.fps = 24\nbpy.context.scene.frame_start = 1\nbpy.context.scene.frame_end = 60\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Blender also needs to know how to encode the final video. 
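Before moving on to encoding, it is worth sanity-checking the frame range above: the clip length is simply `(frame_end - frame_start + 1) / fps`. The plain-Python sketch below (the helper names are ours, not part of the `bpy` API) also linearly approximates the keyframed position from the previous section; keep in mind that Blender's default keyframe interpolation is Bezier, so real positions ease in and out rather than moving linearly.

```python
# Plain-Python sanity check for the render timing above
# (helper names are ours, not part of the bpy API).

def clip_duration_seconds(frame_start, frame_end, fps):
    # Frames 1..60 at 24 fps last (60 - 1 + 1) / 24 = 2.5 s.
    return (frame_end - frame_start + 1) / fps

def lerp_location(loc_a, loc_b, frame, frame_a, frame_b):
    # Linear approximation of the keyframed motion; Blender's default
    # interpolation is Bezier, so the real motion eases in and out.
    t = (frame - frame_a) / (frame_b - frame_a)
    return tuple(a + (b - a) * t for a, b in zip(loc_a, loc_b))

print(clip_duration_seconds(1, 60, 24))  # 2.5
print(lerp_location((-4.0, 0.0, 1.0), (0.0, 0.0, 1.0), 40, 1, 40))  # (0.0, 0.0, 1.0)
```

Matching the keyframes of the tutorial, the text sits at x = -4.0 on frame 1 and reaches x = 0.0 on frame 40, then holds until frame 60.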
We'll export it as an MP4 container with H.264 video encoding, a combination that encodes quickly and plays back almost everywhere:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bpy.context.scene.render.image_settings.file_format = \"FFMPEG\"\nbpy.context.scene.render.ffmpeg.format = \"MPEG4\"\nbpy.context.scene.render.ffmpeg.codec = \"H264\"\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Finally, we choose where the output file will be written. The \u003Ccode>//\u003C/code> prefix is Blender's notation for a path relative to the current .blend file's directory:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bpy.context.scene.render.filepath = \"//render.mp4\"\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>With everything configured, we can start the render process with a single command:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bpy.ops.render.render(animation=True)\u003C/code>\u003C/pre>\u003Chr>\u003Ch2 id=\"5-putting-it-all-together\">\u003Cstrong>5. Putting it all together\u003C/strong>\u003C/h2>\u003Cp>Our code is complete and we just need to put it into a Python file \u003Ccode>render.py\u003C/code>:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import bpy\n\ncube = bpy.data.objects.get(\"Cube\")\nif cube is not None:\n    bpy.data.objects.remove(cube, do_unlink=True)\n\nbpy.ops.object.text_add(location=(0, 0, 0))\ntext_obj = bpy.context.object\ntext_obj.name = \"CaptionText\"\ntext_obj.data.body = \"Hello world!\"\n\ntext_obj.data.size = 0.6\ntext_obj.data.extrude = 0.05\ntext_obj.data.align_x = \"CENTER\"\ntext_obj.data.align_y = \"CENTER\"\n\nbpy.context.scene.frame_set(1)\ntext_obj.location = (-4.0, 0.0, 1.0)\ntext_obj.keyframe_insert(data_path=\"location\", frame=1)\n\nbpy.context.scene.frame_set(40)\ntext_obj.location = (0.0, 0.0, 1.0)\ntext_obj.keyframe_insert(data_path=\"location\", frame=40)\n\nbpy.context.scene.render.engine = \"BLENDER_EEVEE\"\nbpy.context.scene.render.resolution_x = 1920\nbpy.context.scene.render.resolution_y = 1080\nbpy.context.scene.render.resolution_percentage = 100\nbpy.context.scene.render.fps = 24\nbpy.context.scene.frame_start = 
1\nbpy.context.scene.frame_end = 60\n\nbpy.context.scene.render.image_settings.file_format = \"FFMPEG\"\nbpy.context.scene.render.ffmpeg.format = \"MPEG4\"  # container\nbpy.context.scene.render.ffmpeg.codec = \"H264\"\nbpy.context.scene.render.ffmpeg.constant_rate_factor = \"HIGH\"\nbpy.context.scene.render.ffmpeg.gopsize = 12\nbpy.context.scene.render.ffmpeg.audio_codec = \"AAC\"\nbpy.context.scene.render.filepath = \"//render.mp4\"\n\nbpy.ops.render.render(animation=True)\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Now, run the script headlessly through Blender's command-line interface to start rendering:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">blender --background --python render.py\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Alternatively, if you have installed the standalone \u003Ccode>bpy\u003C/code> module from PyPI into your own Python environment, \u003Ccode>python3 render.py\u003C/code> works as well. Once the render finishes, check your working directory and your fully programmatically generated animation should now be ready to view.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-2b287259-a96b-456b-b95e-375bf116e3a1.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1088\" height=\"722\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/12/data-src-image-2b287259-a96b-456b-b95e-375bf116e3a1.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/12/data-src-image-2b287259-a96b-456b-b95e-375bf116e3a1.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-2b287259-a96b-456b-b95e-375bf116e3a1.png 1088w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-blue\">\u003Cdiv class=\"kg-callout-emoji\">🔗\u003C/div>\u003Cdiv class=\"kg-callout-text\">You can find our code in a GitHub repository for easy reproducibility:\u003Ca href=\"https://github.com/cgwire/blender-programmatic-rendering?ref=blog.cg-wire.com\"> 
\u003Cu>github.com/cgwire/blender-programmatic-rendering\u003C/u>\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"conclusion\">\u003Cstrong>Conclusion\u003C/strong>\u003C/h2>\u003Cp>In this walkthrough, you built a complete automated pipeline inside Blender: setting up a clean scene, creating and modifying 3D text, animating it with keyframes, and rendering the sequence with smooth interpolation. All of it handled through Python with no manual adjustments needed!\u003C/p>\u003Cp>Now that you've seen how much control the Blender API provides, you can take these ideas much further: automate your workflows, generate graphics from data, build internal tools that assemble scenes, render variations, or create entire animations with a single command... the list of ways to make your animation studio more productive never ends.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process, \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 
😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":396,"comment_id":397,"feature_image":398,"featured":29,"visibility":30,"created_at":399,"updated_at":400,"custom_excerpt":401,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":402,"primary_tag":403,"url":404,"excerpt":401,"reading_time":150,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":405},"4457d779-ae8e-4ed7-9398-91772c0996c0","6948dba20bfbc7000190a8bf","https://images.unsplash.com/photo-1622547748225-3fc4abd2cca0?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDJ8fHJlbmRlcnN8ZW58MHx8fHwxNzY2MzgyNjA1fDA&ixlib=rb-4.1.0&q=80&w=2000","2025-12-22T06:48:18.000+01:00","2026-02-20T06:04:02.000+01:00","Learn how to automate animation and video rendering in Blender using Python. 
This tutorial covers scene setup, 3D text generation, keyframe animation, and programmatic rendering to build repeatable, script-driven workflows.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/blender-programmatic-rendering/","\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@sebastiansvenson?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Sebastian Svenson\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/blender-programmatic-rendering","2025-12-29T10:00:10.000+01:00",{"title":391},"blender-programmatic-rendering","posts/blender-programmatic-rendering",[412,413],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},{"id":333,"name":334,"slug":335,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":336},"vOpwec7s0eruEbUu1OcdDfl9ESqnn1LglPRNKNn4kgw",{"id":416,"title":417,"authors":418,"body":7,"description":7,"extension":8,"html":420,"meta":421,"navigation":12,"path":433,"published_at":434,"seo":435,"slug":436,"stem":437,"tags":438,"__hash__":441,"uuid":422,"comment_id":423,"feature_image":424,"featured":29,"visibility":30,"created_at":425,"updated_at":426,"custom_excerpt":427,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":428,"primary_tag":429,"url":430,"excerpt":427,"reading_time":431,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":432},"ghost/posts:blender-kitsu-versioning-addon.json","Managing Blender File Revisions with a Kitsu Versioning Addon 
(2026)",[419],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">🧱\u003C/div>\u003Cdiv class=\"kg-callout-text\">Replace chaotic file naming with a single source of truth for Blender revisions.\u003C/div>\u003C/div>\u003Cp>Every project begins with good intentions. You start with a clean \u003Ccode>model.blend\u003C/code>, organized folders, and the promise that this time you’ll keep things tidy.\u003C/p>\u003Cp>But as deadlines tighten, the quiet entropy of production sets in. Before long, your project directory starts to resemble an archaeological dig site of panicked last-minute edits:\u003C/p>\u003Cpre>\u003Ccode>model.blend\nmodel_v2.blend\nmodel_v2b.blend\nmodel_final.blend\nmodel_final_really_final.blend\nmodel_FINAL_v3.blend\u003C/code>\u003C/pre>\u003Cp>You know how it happens: someone needs a quick change, another artist branches off a version \"just in case,\" and soon no one is entirely certain which file is \"the real one.\" Comments in chat threads contradict filenames, shots render from outdated versions, and the supervisor sighs deeply.\u003C/p>\u003Cp>In an animation studio, these micro-chaos moments add up. That’s where a proper source of truth needs to enter the story.\u003C/p>\u003Cp>For many teams, that source is Kitsu. 
And for Blender artists, the missing piece is an automated bridge that keeps files versioned, traceable, and aligned with the project’s production data.\u003C/p>\u003Cp>So you decide to take control: you’re going to make Blender talk to Kitsu and build a versioning system that makes your pipeline feel like it finally has your back.\u003C/p>\u003Cp>In this tutorial, we’ll create an addon that manages file revisions directly from Blender. You’ll be able to connect Blender to a Kitsu project, create and upload revisions of your 3D models, view all existing revisions, and pull older revisions back into Blender.\u003C/p>\u003Chr>\u003Ch2 id=\"workflow-overview\">\u003Cstrong>Workflow Overview\u003C/strong>\u003C/h2>\u003Cp>In a typical Kitsu-driven workflow, an artist opens a Blender scene, does their work, hits a milestone, and uploads a revision. Artists review, iterate, revise, and upload again. Kitsu keeps a neat record of every step.\u003C/p>\u003Cp>But it wouldn't hurt if you could just upload or pull revisions with a click, right?\u003C/p>\u003Col>\u003Cli>\u003Cstrong>Start in Blender\u003C/strong> - We open our working scene—modeling, shading, rigging, whatever the task at hand demands.\u003C/li>\u003Cli>\u003Cstrong>Checkpoint the work\u003C/strong> - When we hit a milestone (\"blocking complete,\" \"ready for review\"), we create a new revision in Kitsu.\u003C/li>\u003Cli>\u003Cstrong>Review the history\u003C/strong> - Kitsu stores all revisions, giving supervisors a clear timeline and letting you compare versions without digging through files.\u003C/li>\u003Cli>\u003Cstrong>Pull new changes\u003C/strong> - When we need a different version, we can pull an asset into our current workspace with a single click.\u003C/li>\u003C/ol>\u003Cp>This is a very basic workflow, and we are bound to run into problems such as conflict resolution (what if two artists work on the same shot and each creates a new revision?). Still, it's good enough to give us a functional addon that we can improve later to fit our animation pipeline needs.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-blue\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example integration showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blender-kitsu-versioning-addon?ref=blog.cg-wire.com\">https://github.com/cgwire/blender-kitsu-versioning-addon\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"1-populating-the-kitsu-dashboard\">\u003Cstrong>1. Populating The Kitsu Dashboard\u003C/strong>\u003C/h2>\u003Cp>Kitsu’s web interface is designed so producers, coordinators, or leads can quickly set up the structure of a project. Before Blender artists can publish revisions, we need to populate our production with work-in-progress assets. 
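Before clicking through the dashboard, it helps to picture the hierarchy we are about to create, sketched here as plain data. The project name is a placeholder; the asset name and the three task types are the ones used in this guide.

```python
# The production hierarchy the dashboard steps create: a project contains
# assets, and each asset carries its workflow tasks.
production = {
    "name": "MyProduction",  # placeholder project name
    "assets": [
        {
            "name": "RobotHead",
            "type": "Character",
            "tasks": ["Modeling", "Shading", "Rigging"],
        }
    ],
}

asset = production["assets"][0]
print(f"{production['name']} / {asset['name']}: {len(asset['tasks'])} tasks")
# MyProduction / RobotHead: 3 tasks
```

This is exactly the structure the addon will later browse through its Project, Asset, and Task dropdowns.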
In\u003Ca href=\"https://blog.cg-wire.com/dcc-integration-blender-kitsu/\"> \u003Cu>the Kitsu Docker instance for local development\u003C/u>\u003C/a>:\u003C/p>\u003Col>\u003Cli>Log into the \u003Cstrong>Kitsu dashboard\u003C/strong>.\u003C/li>\u003Cli>In the main navigation bar, go to \u003Cstrong>Productions\u003C/strong>.\u003C/li>\u003Cli>Click \u003Cstrong>\"Create production\"\u003C/strong> (usually top-right corner).\u003C/li>\u003Cli>Fill in the production details\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-58cb0571-2b74-4110-9b07-9e15030bbd05.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"985\" height=\"694\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/12/data-src-image-58cb0571-2b74-4110-9b07-9e15030bbd05.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-58cb0571-2b74-4110-9b07-9e15030bbd05.png 985w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>The new production will appear in the list, and you can open it to begin adding assets.\u003C/p>\u003Cp>Assets are the building blocks of your project: characters, props, environments, vehicles... 
anything that needs production tracking.\u003C/p>\u003Col>\u003Cli>Go to \u003Cstrong>Productions → Your Production Name\u003C/strong>.\u003C/li>\u003Cli>Switch to the \u003Cstrong>Assets\u003C/strong> tab within the production.\u003C/li>\u003Cli>Click \u003Cstrong>\"Create Asset\"\u003C/strong>.\u003C/li>\u003Cli>Enter an \u003Cstrong>Asset Name\u003C/strong> (e.g., \"RobotHead\") and \u003Cstrong>Asset Type\u003C/strong> (Character, Prop, Set, etc.)\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-f4336c33-57ef-4baa-9715-e0c749f7d9b4.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1270\" height=\"870\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/12/data-src-image-f4336c33-57ef-4baa-9715-e0c749f7d9b4.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/12/data-src-image-f4336c33-57ef-4baa-9715-e0c749f7d9b4.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-f4336c33-57ef-4baa-9715-e0c749f7d9b4.png 1270w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Your asset now exists and has 3 tasks assigned to it.&nbsp;\u003C/p>\u003Cp>Tasks define the workflow steps (Modeling, Shading, Rigging, etc.) that artists will perform on each asset.\u003C/p>\u003Cp>We now have everything we need to test our addon.\u003C/p>\u003Chr>\u003Ch2 id=\"2-linking-the-current-blender-project-to-a-kitsu-task\">\u003Cstrong>2. 
Linking the Current Blender Project to a Kitsu Task\u003C/strong>\u003C/h2>\u003Cp>We start with a minimal addon declaration that defines the UI location, loads \u003Ccode>gazu\u003C/code>, and prepares the data we’ll expose in dropdown menus:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bl_info = {\n&nbsp;&nbsp;&nbsp;&nbsp;\"name\": \"Model Versioning (Production/Task/Asset/Revisions)\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"author\": \"cgwire\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"version\": (1, 0, 0),\n&nbsp;&nbsp;&nbsp;&nbsp;\"blender\": (2, 80, 0),\n&nbsp;&nbsp;&nbsp;&nbsp;\"location\": \"View3D &gt; Sidebar &gt; ModelVersioning\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"description\": \"Browse productions, tasks, assets, and manage revisions (list/create/load)\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"category\": \"3D View\",\n}\n\nimport os\nimport sys\n\nsys.path.append(os.path.expanduser(\"~/.local/lib/python3.11/site-packages\"))\n\nimport tempfile\n\nimport bpy\nimport gazu\nfrom bpy.props import EnumProperty, PointerProperty\nfrom bpy.types import Operator, Panel, PropertyGroup\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Note that \u003Ccode>sys.path.append(os.path.expanduser(\"~/.local/lib/python3.11/site-packages\"))\u003C/code> lets Blender's bundled Python find externally installed packages like \u003Ccode>gazu\u003C/code>. By default, Blender runs its own Python environment, so installing packages into it can be cumbersome. To solve this, we simply point Blender at our user-level modules. Because \u003Ccode>sys.path\u003C/code> entries are used verbatim, the \u003Ccode>~\u003C/code> must be expanded explicitly with \u003Ccode>os.path.expanduser()\u003C/code>. Update this path to match your system configuration.\u003C/p>\u003Cp>Before we can automate versioning, Blender needs to know \u003Cem>where\u003C/em> in Kitsu the current model belongs. 
That means identifying the project, the asset, the task, and eventually the revisions associated with it.\u003C/p>\u003Cp>The first step is simple: authenticate with Kitsu, retrieve available productions, and let the artist pick the context directly from the Sidebar UI.\u003C/p>\u003Cp>Once the addon loads, we authenticate and point the addon at the Kitsu API host:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">gazu.set_host(\"http://localhost/api\")\nuser = gazu.log_in(\"admin@example.com\", \"mysecretpassword\")\n\ntemp_dir_path = tempfile.gettempdir()\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>This establishes the session we’ll use to browse productions, find tasks, and eventually create revisions.\u003C/p>\u003Cp>From here, we can begin exposing the production structure. With helper functions for project, asset, task, and revision lookup, we populate each dropdown dynamically:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">def find_project(name):\n&nbsp;&nbsp;&nbsp;&nbsp;return gazu.project.get_project_by_name(name)\n\ndef find_asset(project, name):\n&nbsp;&nbsp;&nbsp;&nbsp;return gazu.asset.get_asset_by_name(project, name)\n\ndef find_task(asset, type_id):\n&nbsp;&nbsp;&nbsp;&nbsp;return gazu.task.get_task_by_name(asset, type_id, \"main\")\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Each \u003Ccode>EnumProperty\u003C/code> callback pulls fresh data from Kitsu:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">def enum_projects(self, context):\n&nbsp;&nbsp;&nbsp;&nbsp;items = []\n&nbsp;&nbsp;&nbsp;&nbsp;projects = gazu.project.all_projects()\n&nbsp;&nbsp;&nbsp;&nbsp;for p in projects:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;items.append((p[\"name\"], p[\"name\"], \"\"))\n&nbsp;&nbsp;&nbsp;&nbsp;if not items:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;items.append((\"NONE\", \"--- no productions ---\", \"\"))\n&nbsp;&nbsp;&nbsp;&nbsp;return items\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Assets, tasks, and 
revisions follow the same pattern:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">def enum_assets(self, context):\n&nbsp;&nbsp;&nbsp;&nbsp;project = find_project(context.scene.mv_state.project)\n&nbsp;&nbsp;&nbsp;&nbsp;items = []\n&nbsp;&nbsp;&nbsp;&nbsp;if project:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;assets = gazu.asset.all_assets_for_project(project)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;for t in assets:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;items.append((t[\"name\"], t[\"name\"], \"\"))\n&nbsp;&nbsp;&nbsp;&nbsp;if not items:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;items.append((\"NONE\", \"--- no assets ---\", \"\"))\n&nbsp;&nbsp;&nbsp;&nbsp;return items\n\ndef enum_tasks(self, context):\n&nbsp;&nbsp;&nbsp;&nbsp;project = find_project(context.scene.mv_state.project)\n&nbsp;&nbsp;&nbsp;&nbsp;asset = find_asset(project, context.scene.mv_state.asset)\n&nbsp;&nbsp;&nbsp;&nbsp;items = []\n&nbsp;&nbsp;&nbsp;&nbsp;if asset:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;tasks = gazu.task.all_tasks_for_asset(asset)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;for t in tasks:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;items.append((t[\"task_type_id\"], t[\"task_type_name\"], \"\"))\n&nbsp;&nbsp;&nbsp;&nbsp;if not items:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;items.append((\"NONE\", \"--- no tasks ---\", \"\"))\n&nbsp;&nbsp;&nbsp;&nbsp;return items\n\ndef enum_revisions(self, context):\n&nbsp;&nbsp;&nbsp;&nbsp;project = find_project(context.scene.mv_state.project)\n&nbsp;&nbsp;&nbsp;&nbsp;asset = find_asset(project, context.scene.mv_state.asset)\n&nbsp;&nbsp;&nbsp;&nbsp;task = find_task(asset, context.scene.mv_state.task)\n&nbsp;&nbsp;&nbsp;&nbsp;items = []\n&nbsp;&nbsp;&nbsp;&nbsp;if task:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;revisions = gazu.files.get_all_preview_files_for_task(task)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;for r 
in revisions:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;items.append((str(r[\"revision\"]), str(r[\"revision\"]), \"\"))\n&nbsp;&nbsp;&nbsp;&nbsp;if not items:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;items.append((\"NONE\", \"--- no revisions ---\", \"\"))\n&nbsp;&nbsp;&nbsp;&nbsp;return items\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Finally, we store all UI selections in a single state object:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">class MV_State(PropertyGroup):\n&nbsp;&nbsp;&nbsp;&nbsp;project: EnumProperty(\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;name=\"Project\", description=\"Select project\", items=enum_projects\n&nbsp;&nbsp;&nbsp;&nbsp;)\n&nbsp;&nbsp;&nbsp;&nbsp;asset: EnumProperty(name=\"Asset\", description=\"Select asset\", items=enum_assets)\n&nbsp;&nbsp;&nbsp;&nbsp;task: EnumProperty(name=\"Task\", description=\"Select task\", items=enum_tasks)\n&nbsp;&nbsp;&nbsp;&nbsp;revision: EnumProperty(\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;name=\"Revision\", description=\"Select revision\", items=enum_revisions\n&nbsp;&nbsp;&nbsp;&nbsp;)\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>This is the foundation of our pipeline integration: Blender now knows how to browse Kitsu and bind itself to the exact task the artist is working on. From here, we can start working on the revision lifecycle.\u003C/p>\u003Chr>\u003Ch2 id=\"3-creating-a-new-revision-button\">\u003Cstrong>3. Creating a \"New Revision\" Button\u003C/strong>\u003C/h2>\u003Cp>We can start automating the part artists interact with most: creating new revisions. In a typical manual workflow, you’d export your file and upload it in Kitsu to the correct task. Our addon will streamline this into a single button press inside Blender.\u003C/p>\u003Cp>Kitsu handles new revisions through \u003Ccode>publish_preview()\u003C/code>. 
This call sends both the file and metadata:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">temp_file_path = os.path.join(temp_dir_path, \"new_version.glb\")\n\nbpy.ops.export_scene.gltf(filepath=temp_file_path, export_format=\"GLB\")\n\n(comment, preview_file) = gazu.task.publish_preview(\n&nbsp;&nbsp;&nbsp;&nbsp;task,\n&nbsp;&nbsp;&nbsp;&nbsp;task_status,\n&nbsp;&nbsp;&nbsp;&nbsp;revision=new_revision,\n&nbsp;&nbsp;&nbsp;&nbsp;comment=\"increment revision\",\n&nbsp;&nbsp;&nbsp;&nbsp;preview_file_path=temp_file_path,\n)\n\nos.remove(temp_file_path)\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>In our addon, we’ll trigger this from a button in the Sidebar.\u003C/p>\u003Cp>The operator performs three main steps: grab the user’s selections from the addon's state, compute the next revision number, and upload the exported file as the new revision:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">class MV_OT_create_revision(Operator):\n&nbsp;&nbsp;&nbsp;&nbsp;bl_idname = \"mv.create_revision\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_label = \"Create Revision\"\n\n&nbsp;&nbsp;&nbsp;&nbsp;def invoke(self, context, event):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;wm = context.window_manager\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return wm.invoke_props_dialog(self, width=400)\n\n&nbsp;&nbsp;&nbsp;&nbsp;def execute(self, context):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;project = find_project(context.scene.mv_state.project)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;asset = find_asset(project, context.scene.mv_state.asset)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;task = find_task(asset, context.scene.mv_state.task)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;revision = context.scene.mv_state.revision\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;new_revision = int(revision) + 1\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;task_status = 
gazu.task.get_task_status_by_name(\"todo\")\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;temp_file_path = os.path.join(temp_dir_path, \"new_version.glb\")\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;bpy.ops.export_scene.gltf(filepath=temp_file_path, export_format=\"GLB\")\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;(comment, preview_file) = gazu.task.publish_preview(\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;task,\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;task_status,\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;revision=new_revision,\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;comment=\"increment revision\",\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;preview_file_path=temp_file_path,\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;)\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;os.remove(temp_file_path)\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;self.report({\"INFO\"}, \"Revision created\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return {\"FINISHED\"}\u003C/code>\u003C/pre>\u003Chr>\u003Ch2 id=\"4-pulling-a-revision-into-blender\">\u003Cstrong>4. Pulling a Revision into Blender\u003C/strong>\u003C/h2>\u003Cp>Versioning isn’t just about publishing your work, it's also about being able to \u003Cem>go back\u003C/em>. 
Whether you’re reviewing earlier stages, comparing topology, or recovering a detail from a previous iteration, you need a quick, reliable way to load new and older revisions into Blender.\u003C/p>\u003Cp>Once a task is selected, pulling a revision from Kitsu becomes a simple two-step operation: download the preview file associated with the selected revision, and import it into Blender.\u003C/p>\u003Cp>After fetching all preview files for the current task, we can target the revision by index and bring the asset directly into Blender:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">temp_file_path = os.path.join(temp_dir_path, \"new_version.glb\")\n\npreview_file = preview_files[int(revision) - 1]\ngazu.files.download_preview_file(preview_file, temp_file_path)\nbpy.ops.import_scene.gltf(filepath=temp_file_path)\n\nos.remove(temp_file_path)\u003C/code>\u003C/pre>\u003Cp>This gives us a consistent way to retrieve assets exactly as they were at that point in production.\u003C/p>\u003Cp>We encapsulate this workflow inside an operator that mirrors the structure of the Create Revision button:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">class MV_OT_load_revision(Operator):\n&nbsp;&nbsp;&nbsp;&nbsp;bl_idname = \"mv.load_revision\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_label = \"Load Revision\"\n\n&nbsp;&nbsp;&nbsp;&nbsp;def execute(self, context):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;project = find_project(context.scene.mv_state.project)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;asset = find_asset(project, context.scene.mv_state.asset)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;task = find_task(asset, context.scene.mv_state.task)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;revision = context.scene.mv_state.revision\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;preview_files = gazu.files.get_all_preview_files_for_task(task)\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;temp_file_path = os.path.join(temp_dir_path, 
\"new_version.glb\")\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;preview_file = preview_files[int(revision) - 1]\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;gazu.files.download_preview_file(preview_file, temp_file_path)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;bpy.ops.import_scene.gltf(filepath=temp_file_path)\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;os.remove(temp_file_path)\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;self.report({\"INFO\"}, \"Opened Revision\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return {\"FINISHED\"}\u003C/code>\u003C/pre>\u003Cp>This operator makes it trivial for artists to browse and load any version stored in Kitsu without leaving Blender.\u003C/p>\u003Chr>\u003Ch2 id=\"5-registering-the-addon\">\u003Cstrong>5. Registering The Addon\u003C/strong>\u003C/h2>\u003Cp>\u003Ca href=\"https://blog.cg-wire.com/blender-addon-ui-scripting-guide/\">\u003Cu>The panel now ties the whole revision workflow together\u003C/u>\u003C/a>:\u003C/p>\u003Cul>\u003Cli>Select the project\u003C/li>\u003Cli>Choose the asset\u003C/li>\u003Cli>Pick the task\u003C/li>\u003Cli>Browse revisions\u003C/li>\u003Cli>Create or load versions with a single click\u003C/li>\u003C/ul>\u003Cpre>\u003Ccode class=\"language-python\">class MV_PT_panel(Panel):\n&nbsp;&nbsp;&nbsp;&nbsp;bl_label = \"Model Versioning\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_idname = \"MV_PT_panel\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_space_type = \"VIEW_3D\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_region_type = \"UI\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_category = \"ModelVersion\"\n\n&nbsp;&nbsp;&nbsp;&nbsp;def draw(self, context):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout = self.layout\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;scene = context.scene\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;mv = scene.mv_state\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.label(text=\"Project\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.prop(mv, \"project\", 
text=\"\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.separator()\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.label(text=\"Asset\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.prop(mv, \"asset\", text=\"\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.separator()\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.label(text=\"Task\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.prop(mv, \"task\", text=\"\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.separator()\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.label(text=\"Revision\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.prop(mv, \"revision\", text=\"\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.separator()\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;row = layout.row(align=True)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;row.operator(\"mv.create_revision\", text=\"Create Revision\", icon=\"ADD\")\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.operator(\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\"mv.load_revision\", text=\"Load Selected Revision\", icon=\"IMPORT\"\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;)\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>Finally, we register the operators, panel, and state so Blender knows how to construct the UI:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">classes = (\n&nbsp;&nbsp;&nbsp;&nbsp;MV_State,\n&nbsp;&nbsp;&nbsp;&nbsp;MV_OT_create_revision,\n&nbsp;&nbsp;&nbsp;&nbsp;MV_OT_load_revision,\n&nbsp;&nbsp;&nbsp;&nbsp;MV_PT_panel,\n)\n\ndef register():\n&nbsp;&nbsp;&nbsp;&nbsp;for c in classes:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;bpy.utils.register_class(c)\n&nbsp;&nbsp;&nbsp;&nbsp;bpy.types.Scene.mv_state = PointerProperty(type=MV_State)\n\ndef unregister():\n&nbsp;&nbsp;&nbsp;&nbsp;for c in 
reversed(classes):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;bpy.utils.unregister_class(c)\n&nbsp;&nbsp;&nbsp;&nbsp;if hasattr(bpy.types.Scene, \"mv_state\"):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;del bpy.types.Scene.mv_state\n\nif __name__ == \"__main__\":\n&nbsp;&nbsp;&nbsp;&nbsp;register()\u003C/code>\u003C/pre>\u003Cp>At this point, the model versioning workflow is fully bidirectional: you can publish new revisions from Blender and retrieve earlier ones instantly.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-00e861e7-3b2e-4bdc-80b8-1af740cab480.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"759\" height=\"488\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/12/data-src-image-00e861e7-3b2e-4bdc-80b8-1af740cab480.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-00e861e7-3b2e-4bdc-80b8-1af740cab480.png 759w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"conclusion\">\u003Cstrong>Conclusion\u003C/strong>\u003C/h2>\u003Cp>With just a handful of Blender API operators and the convenience of the Gazu SDK, we’ve built a practical (yet basic) versioning workflow that lives directly inside Blender and stays in sync with Kitsu. Artists can link their Blender scene to a Kitsu project, asset, and task, create new revisions with a single button press, browse the full revision history for any task, and pull older versions straight into Blender whenever they need to compare or recover work.\u003C/p>\u003Cp>This workflow is only the beginning. 
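A refinement worth noting before expanding further: the load operator selects `preview_files[int(revision) - 1]`, which silently assumes revision numbers are contiguous and that the list is ordered by revision. A position-independent lookup is more robust (a small sketch; `find_preview_by_revision` is our own helper name, matching on the `"revision"` field that the Kitsu preview-file payloads used above carry):

```python
def find_preview_by_revision(preview_files, revision):
    """Return the preview file whose stored revision matches, or None.

    Safer than indexing by list position when revisions are sparse,
    e.g. after a deleted upload.
    """
    for pf in preview_files:
        if int(pf["revision"]) == int(revision):
            return pf
    return None
```

You could drop this into the load operator's `execute()` in place of the index-based lookup.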
From here, you could expand the addon with automated exports, thumbnail or turntable renders, support for multiple output formats, supervisor review tools, or even hooks into a render farm.\u003C/p>\u003Cp>To get you started, make sure to clone\u003Ca href=\"https://github.com/cgwire/blender-kitsu-versioning-addon?ref=blog.cg-wire.com\"> \u003Cu>our Github repository\u003C/u>\u003C/a> for this versioning addon and try it out yourself!\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord 
Community\u003C/a>\u003C/div>",{"uuid":422,"comment_id":423,"feature_image":424,"featured":29,"visibility":30,"created_at":425,"updated_at":426,"custom_excerpt":427,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":428,"primary_tag":429,"url":430,"excerpt":427,"reading_time":431,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":432},"4ee5e3ab-dd50-4121-99cb-c59d96c2eb7d","6948ca070bfbc7000190a884","https://images.unsplash.com/photo-1617746533234-288e5cf484e2?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDMwfHxhbmltYXRpb24lMjBwaXBlbGluZXxlbnwwfHx8fDE3NjYzODE5ODZ8MA&ixlib=rb-4.1.0&q=80&w=2000","2025-12-22T05:33:11.000+01:00","2026-02-20T06:04:01.000+01:00","Learn how to build a Blender addon that connects to Kitsu to manage asset revisions. 
This tutorial covers creating, browsing, and loading file versions directly from Blender, keeping production files traceable and in sync with studio workflows.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/blender-kitsu-versioning-addon/",12,"\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@jaspergarrattphotography?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Jasper Garratt\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/blender-kitsu-versioning-addon","2025-12-22T10:00:20.000+01:00",{"title":417},"blender-kitsu-versioning-addon","posts/blender-kitsu-versioning-addon",[439,440],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},{"id":333,"name":334,"slug":335,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":336},"RvLHyMPCRMLBkkSF0lzBXOx7AHvfvlghiFKTD38-uwg",{"id":443,"title":444,"authors":445,"body":7,"description":7,"extension":8,"html":447,"meta":448,"navigation":12,"path":458,"published_at":459,"seo":460,"slug":461,"stem":462,"tags":463,"__hash__":466,"uuid":449,"comment_id":450,"feature_image":451,"featured":29,"visibility":30,"created_at":452,"updated_at":426,"custom_excerpt":453,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":454,"primary_tag":455,"url":456,"excerpt":453,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":457},"ghost/posts:blender-kitsu-low-res-preview.json","Automating Low-Res Animation Previews in Blender with Kitsu 
(2026)",[446],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">⚡\u003C/div>\u003Cdiv class=\"kg-callout-text\">Speed up animation reviews with lightweight previews that render in seconds, not hours.\u003C/div>\u003C/div>\u003Cp>Waiting for full-resolution renders just to review a shot slows down the entire production. Artists spend time waiting and supervisors get delayed feedback. The iteration loop is inefficient.\u003C/p>\u003Cp>To address this, we can create low-resolution animation previews directly in Blender and auto-upload them to Kitsu using Python as a part of our animation pipeline. These previews are fast to render, easy to review, and can be quickly used in Kitsu for approval.\u003C/p>\u003Cp>This is a big deal because full-resolution renders can take hours, and the cloud storage and network bandwidth costs are no joke when you're dealing with thousands of shots. 
Going from 1080p to 480p can divide the size by up to 5x!\u003C/p>\u003Cp>In this tutorial, we’ll cover how to:\u003C/p>\u003Cul>\u003Cli>Adjust Blender render settings for low-resolution previews\u003C/li>\u003Cli>Automate the render process using Python\u003C/li>\u003Cli>Use \u003Ccode>ffmpeg\u003C/code> to watermark and timestamp the video for fast contextualization\u003C/li>\u003Cli>Export videos and upload them to Kitsu\u003C/li>\u003C/ul>\u003Cp>By the end, you’ll have a script that saves time on shot reviews without sacrificing feedback quality.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-blue\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example integration showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blender-kitsu-low-res-preview?ref=blog.cg-wire.com\">https://github.com/cgwire/blender-kitsu-low-res-preview\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"1-simple-blender-scene-setup\">\u003Cstrong>1. Simple Blender Scene Setup\u003C/strong>\u003C/h2>\u003Cp>Before we can create an animated preview, we need a starting object in the scene. 
For this tutorial, we’ll use Blender’s default cube.\u003C/p>\u003Cp>First, we create a reference of the scene and the cube:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import bpy\n\ncube = bpy.data.objects[\"Cube\"]\nscene = bpy.context.scene\u003C/code>\u003C/pre>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-bf950a7a-c387-4b8d-9318-49e5bd3251bd.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"901\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/12/data-src-image-bf950a7a-c387-4b8d-9318-49e5bd3251bd.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/12/data-src-image-bf950a7a-c387-4b8d-9318-49e5bd3251bd.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-bf950a7a-c387-4b8d-9318-49e5bd3251bd.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"2-adding-keyframes-for-animation\">\u003Cstrong>2. Adding Keyframes for Animation\u003C/strong>\u003C/h2>\u003Cp>The next step is animating our cube. For quick modeling previews, short sequences are ideal. Here, we’ll create a \u003Cstrong>360° rotation\u003C/strong> over 48 frames (2 seconds at 24 FPS):\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">for frame, angle in [(1, 0), (12, 1.57), (24, 3.14), (36, 4.71), (48, 6.28)]:\n&nbsp;&nbsp;&nbsp;&nbsp;scene.frame_set(frame)\n&nbsp;&nbsp;&nbsp;&nbsp;cube.rotation_euler[2] = angle\n&nbsp;&nbsp;&nbsp;&nbsp;cube.keyframe_insert(data_path=\"rotation_euler\", index=2)\u003C/code>\u003C/pre>\u003Cp>This loop sets keyframes at regular intervals, rotating the cube smoothly around its Z-axis by increments of pi/2. 
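The hard-coded angles above are rounded to two decimal places; if you want an exact closed loop, the same `(frame, angle)` pairs can be generated with `math.pi`. A minimal sketch (the `rotation_keyframes` helper is our own, not part of the Blender API):

```python
import math

def rotation_keyframes(end_frame=48, keys=4):
    """(frame, Z angle in radians) pairs for one full turn.

    Mirrors the tutorial's spacing: frame 1, then evenly spaced
    keys up to end_frame, sweeping from 0 to 2*pi.
    """
    frames = [1] + [end_frame * i // keys for i in range(1, keys + 1)]
    return [(f, 2 * math.pi * i / keys) for i, f in enumerate(frames)]
```

Feeding these pairs into the same `frame_set` / `keyframe_insert` loop produces an exact 360° rotation.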
Using a small number of frames keeps rendering fast and makes it perfect for preview purposes.\u003C/p>\u003Cp>At this point, you could scrub the timeline in Blender to verify the cube rotates as expected.\u003C/p>\u003Chr>\u003Ch2 id=\"3-low-resolution-rendering\">\u003Cstrong>3. Low-Resolution Rendering\u003C/strong>\u003C/h2>\u003Cp>With animation in place, we can configure Blender to render a \u003Cstrong>fast, low-resolution preview\u003C/strong>. The goal is speed over quality: we want something clear enough for review but quick to produce.\u003C/p>\u003Cp>Here, we use\u003Ca href=\"https://blog.cg-wire.com/getting-started-with-blender-rendering/\"> \u003Cu>the Eevee rendering engine for speed and to reduce unnecessary rendering overhead\u003C/u>\u003C/a>. It's much faster than Cycles because it's a simple rasterisation engine, and we don't need a hyper-realistic output in 90% of cases.\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">scene.render.engine = \"BLENDER_EEVEE\"\n\nscene.render.resolution_x = 1920\nscene.render.resolution_y = 1080\nscene.render.resolution_percentage = 50\n\nscene.render.fps = 24\nscene.frame_start = 1\nscene.frame_end = 48&nbsp; # match your animation length\n\nscene.render.image_settings.file_format = \"FFMPEG\"\nscene.render.ffmpeg.format = \"MPEG4\"\nscene.render.ffmpeg.codec = \"H264\"\n\nscene.render.filepath = \"//preview.mp4\"\u003C/code>\u003C/pre>\u003Cp>Although we go for a classic landscape resolution, reducing \u003Ccode>resolution_percentage\u003C/code> or turning off high-quality sampling in Eevee can drastically reduce render times for previews.\u003C/p>\u003Cp>The rest of the settings are pretty standard: 24 frames per second, 48 frames total, and a mp4 output video with H264 encoding (for faster compression) written in the script's current folder.\u003C/p>\u003Cp>Depending on your use case, you can reduce the resolution, decrease the frame rate, and lower the bitrate to lower the size of your previews. 
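To reason about the savings, a back-of-the-envelope helper can estimate how the raw pixel rate scales (a rough sketch only; the encoded file size also depends on codec, bitrate, and scene content):

```python
def preview_pixel_rate_ratio(resolution_pct=50, full_fps=24, preview_fps=24):
    """Fraction of the full-resolution pixel rate a preview must encode.

    Pixel count scales with the square of the resolution percentage,
    and linearly with the frame rate.
    """
    return (resolution_pct / 100) ** 2 * (preview_fps / full_fps)
```

At 50% resolution and an unchanged frame rate this gives 0.25, a quarter of the pixels per second to encode; halving the frame rate as well brings it down to 0.125.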
You still need enough quality for the review process, though, so tweak the settings for an optimal balance with performance.\u003C/p>\u003Cp>Finally, we can trigger the render in one line:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bpy.ops.render.render(animation=True)\u003C/code>\u003C/pre>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-27b0c802-b589-4306-b52b-5f910b58320b.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1088\" height=\"722\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/12/data-src-image-27b0c802-b589-4306-b52b-5f910b58320b.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/12/data-src-image-27b0c802-b589-4306-b52b-5f910b58320b.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-27b0c802-b589-4306-b52b-5f910b58320b.png 1088w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>The preview video can be immediately used for review or further processed with tools like FFmpeg for timestamps, watermarks, or custom naming conventions before uploading to Kitsu.\u003C/p>\u003Chr>\u003Ch2 id=\"4-ffmpeg-processing-timestamp-naming-watermark\">\u003Cstrong>4. FFmpeg Processing: Timestamp, Naming, Watermark\u003C/strong>\u003C/h2>\u003Cp>Once Blender has rendered your animation to a video file, you can further process it using \u003Cstrong>FFmpeg\u003C/strong>. 
This is\u003Ca href=\"https://blog.cg-wire.com/ffmpeg-commands-for-animators/\"> \u003Cu>a common step in production pipelines\u003C/u>\u003C/a> to add timestamps, watermarks, or custom naming, making the previews ready for review.\u003C/p>\u003Cp>Run the following command in a terminal after rendering your preview:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">ffmpeg -i preview.mp4 \\\\\n&nbsp;&nbsp;-i watermark.png \\\\\n&nbsp;&nbsp;-filter_complex \"\\\\\n&nbsp;&nbsp;&nbsp;&nbsp;[0:v]drawtext=text='%{pts\\\\:hms}':x=10:y=10:fontsize=24:fontcolor=white:bordercolor=black:borderw=2[v1]; \\\\\n&nbsp;&nbsp;&nbsp;&nbsp;[v1][1:v]overlay=W-w-20:H-h-20\" \\\\\n&nbsp;&nbsp;-c:v libx264 -crf 22 -pix_fmt yuv420p \\\\\n&nbsp;&nbsp;preview_with_stamp.mp4\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Cstrong>\u003Ccode>drawtext\u003C/code>\u003C/strong> overlays a running timestamp in the top-left corner.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>overlay\u003C/code>\u003C/strong> places a watermark image (\u003Ccode>watermark.png\u003C/code>) in the bottom-right corner.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>-c:v libx264 -crf 22 -pix_fmt yuv420p\u003C/code>\u003C/strong> ensures good quality and broad compatibility for video playback.\u003C/li>\u003Cli>The output file, \u003Ccode>preview_with_stamp.mp4\u003C/code>, is your finalised preview ready for review.\u003C/li>\u003C/ul>\u003Cp>Of course, you can adjust the font size, position, or watermark placement as needed to standardise previews for your team or client reviews.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-aaed9f6c-1b29-4592-b629-1830a6f2aa79.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1088\" height=\"722\" 
srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/12/data-src-image-aaed9f6c-1b29-4592-b629-1830a6f2aa79.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/12/data-src-image-aaed9f6c-1b29-4592-b629-1830a6f2aa79.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-aaed9f6c-1b29-4592-b629-1830a6f2aa79.png 1088w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>This step completes the preparation of a production-ready, low-resolution animation preview. The file is now ready to be uploaded to \u003Cstrong>Kitsu\u003C/strong> for quick feedback.\u003C/p>\u003Chr>\u003Ch2 id=\"5-uploading-to-kitsu-via-gazu\">\u003Cstrong>5. Uploading to Kitsu via Gazu\u003C/strong>\u003C/h2>\u003Cp>Once your low-resolution preview is ready, you can upload it directly to \u003Cstrong>Kitsu\u003C/strong> via the dashboard or use the \u003Ccode>gazu\u003C/code> Python SDK. Kitsu is a collaborative pipeline tracker allowing artists and supervisors to access the preview immediately for review.\u003C/p>\u003Cp>The following Python script provides a simple interactive CLI that lets you choose the project and task to upload your preview to:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import gazu\n\ndef pickProject(label, list_of_items):\n&nbsp;&nbsp;&nbsp;&nbsp;\"\"\"Helper UI to pick one item from a list.\"\"\"\n&nbsp;&nbsp;&nbsp;&nbsp;for i, item in enumerate(list_of_items):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;print(f\"{i + 1}. 
{item['name']}\")\n&nbsp;&nbsp;&nbsp;&nbsp;idx = int(input(f\"Choose {label} number: \")) - 1\n&nbsp;&nbsp;&nbsp;&nbsp;return list_of_items[idx]\n\ndef pickTask(label, list_of_items):\n&nbsp;&nbsp;&nbsp;&nbsp;\"\"\"Helper UI to pick one item from a list.\"\"\"\n&nbsp;&nbsp;&nbsp;&nbsp;for i, item in enumerate(list_of_items):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;asset = gazu.entity.get_entity(item[\"entity_id\"])\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;status = gazu.task.get_task_status(item[\"task_status_id\"])\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;task_type = gazu.task.get_task_type(item[\"task_type_id\"])\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;print(f\"{i + 1}. {asset['name']} {task_type['name']} {status['name']}\")\n&nbsp;&nbsp;&nbsp;&nbsp;idx = int(input(f\"Choose {label} number: \")) - 1\n&nbsp;&nbsp;&nbsp;&nbsp;return list_of_items[idx]\n\ngazu.set_host(\"http://localhost/api\")\nuser = gazu.log_in(\"admin@example.com\", \"mysecretpassword\")\n\nprojects = gazu.project.all_projects()\nproject = pickProject(\"project\", projects)\n\ntasks = gazu.task.all_tasks_for_project(project)\ntask = pickTask(\"task\", tasks)\n\nprint(\"Uploading preview...\")\ntask_status = gazu.task.get_task_status_by_name(\"todo\")\nresult = gazu.task.publish_preview(\n&nbsp;&nbsp;&nbsp;&nbsp;task,\n&nbsp;&nbsp;&nbsp;&nbsp;task_status,\n&nbsp;&nbsp;&nbsp;&nbsp;comment=\"Auto-generated preview\",\n&nbsp;&nbsp;&nbsp;&nbsp;preview_file_path=\"./preview.mp4\",\n)\n\nprint(\"Done:\", result)\u003C/code>\u003C/pre>\u003Cp>First, we log in to Kitsu via \u003Ccode>gazu\u003C/code> with our credentials. We use the\u003Ca href=\"https://blog.cg-wire.com/dcc-integration-blender-kitsu/\"> \u003Cu>local development environment installation via Kitsu Docker\u003C/u>\u003C/a>. 
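Hard-coding credentials is fine against a local Docker sandbox, but for anything shared you would read them from the environment instead. A sketch (the `KITSU_HOST` / `KITSU_EMAIL` / `KITSU_PASSWORD` variable names are our own convention, not something `gazu` defines):

```python
import os

def kitsu_credentials():
    """Connection settings from the environment, with local-dev fallbacks."""
    return (
        os.environ.get("KITSU_HOST", "http://localhost/api"),
        os.environ.get("KITSU_EMAIL", "admin@example.com"),
        os.environ.get("KITSU_PASSWORD", "mysecretpassword"),
    )
```

The values then go straight into `gazu.set_host(host)` and `gazu.log_in(email, password)`.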
The program lets you select the \u003Cstrong>project\u003C/strong> and \u003Cstrong>task\u003C/strong> from available options using different Kitsu API endpoints to get all your production data:\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-21091709-64dd-41c6-875e-2cdce8b5b178.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1343\" height=\"816\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/12/data-src-image-21091709-64dd-41c6-875e-2cdce8b5b178.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/12/data-src-image-21091709-64dd-41c6-875e-2cdce8b5b178.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-21091709-64dd-41c6-875e-2cdce8b5b178.png 1343w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>We then upload the generated preview video from the previous steps to the selected task.\u003C/p>\u003Cp>Once complete, the preview is available in Kitsu’s review interface, making it easy for team members and supervisors to give feedback without waiting for high-resolution renders.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-78d2cd48-21e9-4599-9b2b-a5e5bef63f76.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"985\" height=\"948\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/12/data-src-image-78d2cd48-21e9-4599-9b2b-a5e5bef63f76.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-78d2cd48-21e9-4599-9b2b-a5e5bef63f76.png 985w\" sizes=\"(min-width: 720px) 
720px\">\u003C/figure>\u003Cp>The review engine is perfect for quickly annotating frames and commenting on precise shots:\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-6ae9b3dd-18e9-4d85-9fa6-e5106babc87e.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1438\" height=\"809\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/12/data-src-image-6ae9b3dd-18e9-4d85-9fa6-e5106babc87e.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/12/data-src-image-6ae9b3dd-18e9-4d85-9fa6-e5106babc87e.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-6ae9b3dd-18e9-4d85-9fa6-e5106babc87e.png 1438w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"6-putting-it-all-together\">\u003Cstrong>6. Putting it all together\u003C/strong>\u003C/h2>\u003Cp>To automate the task end-to-end, let's write a quick bash command:\u003C/p>\u003Cp>\u003Cstrong>\u003Cu>preview.sh\u003C/u>\u003C/strong>\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">python3 render.py &amp;&amp; ./watermark.sh &amp;&amp; python3 upload.py\u003C/code>\u003C/pre>\u003Cp>We can then run the script every time we need to share a preview:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">./preview.sh\u003C/code>\u003C/pre>\u003Cp>Check out our\u003Ca href=\"https://github.com/cgwire/blender-kitsu-low-res-preview?ref=blog.cg-wire.com\"> \u003Cu>GitHub repository blender-kitsu-low-res-preview\u003C/u>\u003C/a> to try out the final result yourself.\u003C/p>\u003Chr>\u003Ch2 id=\"7-artist-friendly-addon-overview\">\u003Cstrong>7. 
Artist-Friendly Addon Overview\u003C/strong>\u003C/h2>\u003Cp>Though this is out of the scope of this article, it would be easy to wrap our code in a Blender addon that artists can use directly.\u003C/p>\u003Cp>You would need a main panel holding dropdown menus to pick the production, asset, and task to upload to, plus an upload button. The upload logic would take care of rendering, calling ffmpeg as a subprocess for watermarking, and sending the temporary files to Kitsu.\u003C/p>\u003Cp>Have a look at our article on\u003Ca href=\"https://blog.cg-wire.com/blender-addon-ui-scripting-guide/\"> \u003Cu>Blender Add-on UI Development\u003C/u>\u003C/a> for more information.\u003C/p>\u003Chr>\u003Ch2 id=\"conclusion\">\u003Cstrong>Conclusion\u003C/strong>\u003C/h2>\u003Cp>By now, you’ve set up a full pipeline: creating a simple 3D object in Blender, animating it, generating a low-resolution preview, adding timestamps and watermarks, and uploading it to Kitsu. The benefits are immediately clear:\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Faster reviews\u003C/strong> - Supervisors and team members can watch previews immediately without waiting for full-resolution renders.\u003C/li>\u003Cli>\u003Cstrong>Quicker iterations\u003C/strong> - Artists get feedback faster, which shortens the iteration loop and reduces bottlenecks.\u003C/li>\u003Cli>\u003Cstrong>Fewer blockers\u003C/strong> - Automated previews and uploads eliminate repetitive manual steps in the pipeline to keep deliverables consistent.\u003C/li>\u003C/ul>\u003Cp>What used to take an hour of manual work can now be handled with a few scripts, giving the team more time to focus on the creative side of production instead of repetitive tasks.\u003C/p>\u003Cp>You can take this workflow even further depending on your animation studio's needs: add buttons or panels in Blender to run the entire pipeline with one click, automatically batch-generate previews for multiple shots or scenes in a single script, 
etc.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":449,"comment_id":450,"feature_image":451,"featured":29,"visibility":30,"created_at":452,"updated_at":426,"custom_excerpt":453,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":454,"primary_tag":455,"url":456,"excerpt":453,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":457},"d4c6e01e-3b37-4c90-b42c-cbfeecc518c2","693549d4ee42880001e4b1dc","https://images.unsplash.com/photo-1653200256306-6dc84510dfb6?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDN8fGFuaW1hdGlvbiUyMHBpcGVsaW5lfGVufDB8fHx8MTc2NTA5ODQ2Mnww&ixlib=rb-4.1.0&q=80&w=2000","2025-12-07T10:33:08.000+01:00","Learn how to generate low-resolution animation previews in Blender and automatically upload them to Kitsu. 
This tutorial covers Blender render settings, Python automation, FFmpeg processing, and preview publishing to streamline animation reviews.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/blender-kitsu-low-res-preview/","\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@allisonsaeng?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Allison Saeng\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/blender-kitsu-low-res-preview","2025-12-15T10:00:23.000+01:00",{"title":444},"blender-kitsu-low-res-preview","posts/blender-kitsu-low-res-preview",[464,465],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},{"id":333,"name":334,"slug":335,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":336},"slGFk0J3LjB1nVzZocG4Vk6gTdZMox1-G7CWEnotp_I",{"id":468,"title":469,"authors":470,"body":7,"description":7,"extension":8,"html":472,"meta":473,"navigation":12,"path":485,"published_at":486,"seo":487,"slug":488,"stem":489,"tags":490,"__hash__":493,"uuid":474,"comment_id":475,"feature_image":476,"featured":29,"visibility":30,"created_at":477,"updated_at":478,"custom_excerpt":479,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":480,"primary_tag":481,"url":482,"excerpt":479,"reading_time":483,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":484},"ghost/posts:blender-kitsu-breakdown-automation.json","How to Build Blender Shots Automatically Using Python and Kitsu 
(2026)",[471],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">🧩\u003C/div>\u003Cdiv class=\"kg-callout-text\">Automate your shot setup and eliminate hours of manual asset placement.\u003C/div>\u003C/div>\u003Cp>Animation studios rely on \u003Cstrong>breakdown lists\u003C/strong> to track which assets must appear in each shot.\u003C/p>\u003Cp>Picture this. You’re a VFX artist staring at a blank Blender viewport for your latest production. Your manager hands you the detailed list of assets, shots, and timing cues and says, \u003Cem>\"Turn this into a Blender scene.\"\u003C/em>\u003C/p>\u003Cp>Your first thought could be to log in to your asset manager and place every object manually. But what about complex scenes with hundreds of assets?\u003C/p>\u003Cp>This is the moment where a simple automation can save the day. With Python Blender scripting, you can read Kitsu breakdown data and generate an initial scene automatically in a few minutes.\u003C/p>\u003Cp>In this article, we walk through a full example: fetching breakdowns via the \u003Cstrong>Gazu\u003C/strong> Python API, creating a fresh Blender scene, downloading the assets, and importing them into Blender. 
By the end, you’ll have a minimal pipeline that builds scenes automatically, ready for layout or animation.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-blue\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example integration showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blender-kitsu-automated-scene-composition?ref=blog.cg-wire.com\">https://github.com/cgwire/blender-kitsu-automated-scene-composition\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"1-getting-the-breakdown\">\u003Cstrong>1. Getting the Breakdown\u003C/strong>\u003C/h2>\u003Cp>Every 3D shot begins as a blank canvas, but the instructions for filling that canvas already exist in Kitsu:\u003Ca href=\"https://blog.cg-wire.com/3d-animation-process/\"> \u003Cu>the \u003Cstrong>breakdown\u003C/strong> dictates exactly what needs to be on stage\u003C/u>\u003C/a> before the animator begins working.\u003C/p>\u003Cp>A typical breakdown provides the essential narrative context your script needs to assemble the scene: the stage (start and end frames, duration, and other annotations stored in the sequence information), and the cast (the actual breakdown of character models, props, and environment assets).\u003C/p>\u003Cp>Before writing code, you need to define the breakdown in the Kitsu dashboard. This is where you manually link your library of 3D assets to the specific shots where they are required. 
You aren't creating new models here, just casting existing \"actors\" (assets) to a specific shot:\u003C/p>\u003Col>\u003Cli>\u003Cstrong>Enter your production\u003C/strong> - Navigate to your project in Kitsu and open the \u003Cstrong>Shots\u003C/strong> tab.\u003C/li>\u003Cli>\u003Cstrong>Locate the casting sheet\u003C/strong> - Look for the \u003Cstrong>Breakdown\u003C/strong> tab (usually found on the right-hand panel or a dedicated tab depending on your version).\u003C/li>\u003Cli>\u003Cstrong>Select the shot\u003C/strong> - Click on the specific shot you want to populate (e.g., \u003Ccode>SH01\u003C/code>) to open the detailed casting view.\u003C/li>\u003Cli>\u003Cstrong>Assign the assets\u003C/strong> - In the right side panel, click the \u003Cstrong>+ (Plus)\u003C/strong> button or \"Add Asset.\" You can also specify the quantity of each asset you need here.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-ef6fba58-9c73-4a38-b466-0b9d92e4efc0.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1466\" height=\"804\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/12/data-src-image-ef6fba58-9c73-4a38-b466-0b9d92e4efc0.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/12/data-src-image-ef6fba58-9c73-4a38-b466-0b9d92e4efc0.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-ef6fba58-9c73-4a38-b466-0b9d92e4efc0.png 1466w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Make sure your \u003Cstrong>Assets\u003C/strong> page is already populated with the models (Characters, Props, etc.) you intend to use.\u003C/p>\u003Cp>Once you hit save, the link is established. 
Now, when your Python script asks Gazu, \"Who is in this shot?\", Kitsu will reply with the list of assets you just assigned. Your Python script acts as the bridge, parsing this casting to automatically populate the Blender viewport.\u003C/p>\u003Cp>If you need a local development environment, have a look at\u003Ca href=\"https://blog.cg-wire.com/dcc-integration-blender-kitsu/\"> \u003Cu>how to install Kitsu from Docker in our Custom DCC Bridge guide\u003C/u>\u003C/a>.\u003C/p>\u003Cp>While Kitsu holds the data, we need a way to fetch it. Enter \u003Cstrong>Gazu\u003C/strong>, the Python SDK for Kitsu’s REST API:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import gazu\n\ngazu.set_host(\"http://localhost/api\")\nuser = gazu.log_in(\"admin@example.com\", \"mysecretpassword\")\n\nprojects = gazu.project.all_projects()\nproject = projects[0]\n\nsequence = gazu.shot.get_sequence_by_name(project, \"SQ01\")\nshot = gazu.shot.get_shot_by_name(sequence, \"SH01\")\n\nassets = gazu.casting.get_shot_casting(shot)\u003C/code>\u003C/pre>\u003Cp>\u003Cbr>We connect to our local Kitsu instance, and then we pick our first production (you can also retrieve a production by name) and the shot we need the casting for.\u003C/p>\u003Cp>We can use this shot ID to retrieve the corresponding casting of assets, the breakdown list.\u003C/p>\u003Chr>\u003Ch2 id=\"2-getting-assets-from-a-breakdown\">\u003Cstrong>2. Getting Assets From a Breakdown\u003C/strong>\u003C/h2>\u003Cp>Now that we know \u003Cem>who\u003C/em> is in the shot, we need to find out \u003Cem>what\u003C/em> they look like.\u003C/p>\u003Cp>In Kitsu, an asset can have many preview files we can use depending on revisions. 
Our script needs to be able to navigate this data to get the last revision of each asset:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import os\n\ndownload_dir = \"./previews\"\nos.makedirs(download_dir, exist_ok=True)\n\nlocal_paths = []\nfor asset in assets:\n&nbsp;&nbsp;&nbsp;&nbsp;tasks = gazu.task.all_tasks_for_asset(asset[\"asset_id\"])\n&nbsp;&nbsp;&nbsp;&nbsp;last_task = max(tasks, key=lambda x: x[\"updated_at\"])\n\n&nbsp;&nbsp;&nbsp;&nbsp;preview_files = gazu.files.get_all_preview_files_for_task(last_task)\n&nbsp;&nbsp;&nbsp;&nbsp;last_preview_file = max(preview_files, key=lambda x: x[\"updated_at\"])\n\n&nbsp;&nbsp;&nbsp;&nbsp;save_path = os.path.join(\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;download_dir,\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;last_preview_file[\"original_name\"] + \".\" + last_preview_file[\"extension\"],\n&nbsp;&nbsp;&nbsp;&nbsp;)\n&nbsp;&nbsp;&nbsp;&nbsp;gazu.files.download_preview_file(last_preview_file, save_path)\n\n&nbsp;&nbsp;&nbsp;&nbsp;local_paths.append(save_path)\u003C/code>\u003C/pre>\u003Cp>\u003Cbr>For each asset, we retrieve a list of all corresponding tasks of any type ('Modeling', 'Animation', etc.) or status ('done', 'todo'...). We filter this list to retrieve the most recently updated task.\u003C/p>\u003Cp>We can use this task ID to get the last corresponding preview file revision and download it to a local folder \u003Ccode>previews\u003C/code>. We keep these download paths in memory for the importing step.\u003C/p>\u003Cp>At the end of this loop, you have successfully turned database entries into tangible model files on your hard drive, ready for Blender to ingest.\u003C/p>\u003Chr>\u003Ch2 id=\"3-creating-a-new-blender-scene\">\u003Cstrong>3. 
Creating a New Blender Scene\u003C/strong>\u003C/h2>\u003Cp>With the asset files safely downloaded, the next task is preparing the Blender environment to receive its new cast member.\u003C/p>\u003Cp>The \u003Ccode>bpy\u003C/code> module, Blender's native Python API, acts as your command console, allowing you to manipulate every element of the application.\u003C/p>\u003Cp>Before we import our Kitsu assets, we must eliminate any default objects that come with a new Blender scene. For this simple tutorial, we're targeting the default \u003Cstrong>Cube\u003C/strong>, which is often the only object present besides the default Camera and Light:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">cube = bpy.data.objects.get(\"Cube\")\nif cube:  # guard against scenes without a default Cube\n&nbsp;&nbsp;&nbsp;&nbsp;bpy.data.objects.remove(cube, do_unlink=True)\u003C/code>\u003C/pre>\u003Cp>\u003Cbr>The \u003Ccode>do_unlink=True\u003C/code> flag tells Blender to fully delete the object's data block (like its mesh data) if it’s no longer used by any other object, leaving no clutter behind.\u003C/p>\u003Cp>We are now ready for the imported assets to take their places.\u003C/p>\u003Chr>\u003Ch2 id=\"4-importing-asset-files\">\u003Cstrong>4. Importing Asset Files\u003C/strong>\u003C/h2>\u003Cp>Now for the payoff! Since the file we downloaded from Kitsu is a standardised interchange \u003Ccode>.glb\u003C/code> format, which handles both geometry and basic materials, we use Blender’s dedicated \u003Ccode>gltf\u003C/code> import operator.\u003C/p>\u003Cp>The crucial part is providing the correct \u003Cstrong>absolute file path\u003C/strong> (\u003Ccode>path\u003C/code>) to the downloaded asset. 
Fortunately, we stored those in the previous code snippet:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">for path in local_paths:\n&nbsp;&nbsp;&nbsp;&nbsp;if path.lower().endswith(\".glb\"):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;print(f\"Importing: {path}\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;bpy.ops.import_scene.gltf(filepath=path)\n\nprint(\"All preview GLB files imported successfully!\")\u003C/code>\u003C/pre>\u003Cp>\u003Cbr>Once \u003Ccode>bpy.ops.import_scene.gltf()\u003C/code> executes, Blender reads the file and automatically creates the corresponding \u003Cstrong>objects\u003C/strong>, \u003Cstrong>meshes\u003C/strong>, and \u003Cstrong>materials\u003C/strong> in the current scene.\u003C/p>\u003Cp>The imported asset is now a full-fledged Blender object, placed at the world origin (0, 0, 0), ready for subsequent pipeline steps.\u003C/p>\u003Chr>\u003Ch2 id=\"5-saving-the-scene\">\u003Cstrong>5. Saving the Scene\u003C/strong>\u003C/h2>\u003Cp>The final step in this pipeline segment is to save the assembled layout into a permanent, versionable file. If you close Blender without this step, all the automated work is lost, so we use the \u003Ccode>bpy.ops.wm.save_as_mainfile\u003C/code> operator. 
This is the programmatic equivalent of clicking \u003Cstrong>File &gt; Save As\u003C/strong> in the Blender interface:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">scene_save_dir = \"./\"\nos.makedirs(scene_save_dir, exist_ok=True)\n\nblend_filename = \"SH01.blend\"\nblend_path = os.path.join(scene_save_dir, blend_filename)\n\nbpy.ops.wm.save_as_mainfile(filepath=blend_path)\u003C/code>\u003C/pre>\u003Cp>\u003Cbr>The result is a new Blender file, \u003Ccode>SH01.blend\u003C/code>, that perfectly reflects the \u003Cstrong>breakdown requirements\u003C/strong> from Kitsu, ready for the next department to pick up.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-91e5cf8e-acb1-4ac0-b5ec-d2c37a6a1ed6.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1460\" height=\"828\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/12/data-src-image-91e5cf8e-acb1-4ac0-b5ec-d2c37a6a1ed6.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/12/data-src-image-91e5cf8e-acb1-4ac0-b5ec-d2c37a6a1ed6.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-91e5cf8e-acb1-4ac0-b5ec-d2c37a6a1ed6.png 1460w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"6-user-friendly-addon\">\u003Cstrong>6. User-Friendly Addon\u003C/strong>\u003C/h2>\u003Cp>The script works as expected, but what about artists? 
Not everyone knows how to run a script.\u003C/p>\u003Cp>Let's slightly modify our code to\u003Ca href=\"https://blog.cg-wire.com/blender-addon-ui-scripting-guide/\"> \u003Cu>turn it into a Blender addon\u003C/u>\u003C/a>:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bl_info = {\n&nbsp;&nbsp;&nbsp;&nbsp;\"name\": \"Kitsu Shot Auto-Importer\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"description\": \"Pick a project and shot and auto-import the latest preview assets\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"author\": \"cgwire\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"version\": (1, 0, 0),\n&nbsp;&nbsp;&nbsp;&nbsp;\"blender\": (3, 0, 0),\n&nbsp;&nbsp;&nbsp;&nbsp;\"location\": \"Viewport &gt; N-Panel &gt; Kitsu\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"category\": \"Import-Export\",\n}\n\nimport os\nimport sys\n\nsys.path.append(\"~/.local/lib/python3.11/site-packages\")\n\nimport bpy\nimport gazu\nfrom bpy.props import EnumProperty, StringProperty\n\ndef get_projects():\n&nbsp;&nbsp;&nbsp;&nbsp;try:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;projects = gazu.project.all_projects()\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return [(p[\"id\"], p[\"name\"], \"\") for p in projects]\n&nbsp;&nbsp;&nbsp;&nbsp;except:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return []\n\ndef get_sequences(project_id):\n&nbsp;&nbsp;&nbsp;&nbsp;if not project_id:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return []\n&nbsp;&nbsp;&nbsp;&nbsp;try:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;seqs = gazu.shot.all_sequences_for_project(project_id)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return [(s[\"id\"], s[\"name\"], \"\") for s in seqs]\n&nbsp;&nbsp;&nbsp;&nbsp;except:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return []\n\ndef get_shots(sequence_id):\n&nbsp;&nbsp;&nbsp;&nbsp;if not sequence_id:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return []\n&nbsp;&nbsp;&nbsp;&nbsp;try:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;shots = 
gazu.shot.all_shots_for_sequence(sequence_id)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return [(s[\"id\"], s[\"name\"], \"\") for s in shots]\n&nbsp;&nbsp;&nbsp;&nbsp;except:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return []\n\nclass KITSU_Props(bpy.types.PropertyGroup):\n&nbsp;&nbsp;&nbsp;&nbsp;project: EnumProperty(name=\"Project\", items=lambda self, context: get_projects())\n\n&nbsp;&nbsp;&nbsp;&nbsp;sequence: EnumProperty(\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;name=\"Sequence\", items=lambda self, context: get_sequences(self.project)\n&nbsp;&nbsp;&nbsp;&nbsp;)\n\n&nbsp;&nbsp;&nbsp;&nbsp;shot: EnumProperty(\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;name=\"Shot\", items=lambda self, context: get_shots(self.sequence)\n&nbsp;&nbsp;&nbsp;&nbsp;)\n\nclass KITSU_OT_import_shot(bpy.types.Operator):\n&nbsp;&nbsp;&nbsp;&nbsp;bl_idname = \"kitsu.import_shot_assets\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_label = \"Import Shot Assets\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_description = (\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\"Download and import latest preview GLB/GLTF files for selected shot\"\n&nbsp;&nbsp;&nbsp;&nbsp;)\n\n&nbsp;&nbsp;&nbsp;&nbsp;def execute(self, context):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;props = context.scene.kitsu_props\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;# Fetch shot data\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;shot = gazu.shot.get_shot(props.shot)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;assets = gazu.casting.get_shot_casting(shot)\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;download_dir = os.path.join(bpy.app.tempdir, \"kitsu_previews\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;os.makedirs(download_dir, exist_ok=True)\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;local_paths = []\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;for asset in assets:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;tasks = 
gazu.task.all_tasks_for_asset(asset[\"asset_id\"])\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if not tasks:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;continue\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;last_task = max(tasks, key=lambda x: x[\"updated_at\"])\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;preview_files = gazu.files.get_all_preview_files_for_task(last_task)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if not preview_files:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;continue\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;last_preview = max(preview_files, key=lambda x: x[\"updated_at\"])\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;save_path = os.path.join(\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;download_dir,\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;last_preview[\"original_name\"] + \".\" + last_preview[\"extension\"],\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;)\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;gazu.files.download_preview_file(last_preview, save_path)\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;local_paths.append(save_path)\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;# Clean default cube\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;obj = bpy.data.objects.get(\"Cube\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if obj:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;bpy.data.objects.remove(obj, do_unlink=True)\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;# Import GLB/GLTF 
assets\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;for path in local_paths:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if path.lower().endswith((\".glb\", \".gltf\")):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;bpy.ops.import_scene.gltf(filepath=path)\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;# Auto-save blend file\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;save_dir = os.path.join(os.path.expanduser(\"~\"), \"kitsu_scenes\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;os.makedirs(save_dir, exist_ok=True)\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;blend_path = os.path.join(save_dir, f\"{shot['name']}.blend\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;bpy.ops.wm.save_as_mainfile(filepath=blend_path)\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;self.report({\"INFO\"}, f\"Imported assets and saved: {blend_path}\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return {\"FINISHED\"}\n\nclass KITSU_PT_panel(bpy.types.Panel):\n&nbsp;&nbsp;&nbsp;&nbsp;bl_label = \"Kitsu Auto-Importer\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_idname = \"KITSU_PT_auto_importer\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_space_type = \"VIEW_3D\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_region_type = \"UI\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_category = \"Kitsu\"\n\n&nbsp;&nbsp;&nbsp;&nbsp;def draw(self, context):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;props = context.scene.kitsu_props\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout = self.layout\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.separator()\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.prop(props, \"project\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.prop(props, \"sequence\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.prop(props, 
\"shot\")\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.separator()\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.operator(\"kitsu.import_shot_assets\", icon=\"IMPORT\")\n\nclasses = (\n&nbsp;&nbsp;&nbsp;&nbsp;KITSU_Props,\n&nbsp;&nbsp;&nbsp;&nbsp;KITSU_OT_import_shot,\n&nbsp;&nbsp;&nbsp;&nbsp;KITSU_PT_panel,\n)\n\ndef register():\n&nbsp;&nbsp;&nbsp;&nbsp;for c in classes:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;bpy.utils.register_class(c)\n&nbsp;&nbsp;&nbsp;&nbsp;bpy.types.Scene.kitsu_props = bpy.props.PointerProperty(type=KITSU_Props)\n\ndef unregister():\n&nbsp;&nbsp;&nbsp;&nbsp;for c in classes:\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;bpy.utils.unregister_class(c)\n&nbsp;&nbsp;&nbsp;&nbsp;del bpy.types.Scene.kitsu_props\n\nif __name__ == \"__main__\":\n&nbsp;&nbsp;&nbsp;&nbsp;register()\u003C/code>\u003C/pre>\u003Cp>\u003Cbr>We can now manually pick a production, sequence, and shot to get breakdown data from, and import the corresponding casting in the current Blender viewport:\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/12/data-src-image-bf3ea18d-fd62-4db5-9977-6374b3ee1aef.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"480\" height=\"270\">\u003C/figure>\u003Cp>The logic is simple: we use the same \u003Ccode>gazu\u003C/code> code to populate dropdown menus, and we encapsulate them all in a panel in the viewport. An \u003Ccode>import\u003C/code> button downloads all the corresponding breakdown assets and imports them into the current workspace.\u003C/p>\u003Cp>Keep in mind that adding \u003Ccode>sys.path.append(\"~/.local/lib/python3.11/site-packages\")\u003C/code> lets Blender use your system’s Python installation to load external libraries like \u003Ccode>gazu\u003C/code>. Since Blender ships with its own isolated Python environment, managing package installations can be inconvenient. 
By extending the path, you simply instruct Blender to check your local modules as well. Make sure to adjust this path to match your own setup.\u003C/p>\u003Chr>\u003Ch2 id=\"conclusion\">\u003Cstrong>Conclusion\u003C/strong>\u003C/h2>\u003Cp>By pulling breakdown lists directly from Kitsu and scripting Blender to assemble scenes, you eliminate repetitive manual steps and ensure asset consistency across all shots. This approach doesn't just save time; it also reduces human error and ensures every artist starts with the correct asset version and scene setup required by the producer. This way, you can easily handle ten shots or ten thousand with equal reliability.\u003C/p>\u003Cp>But don't take our word for it:\u003Ca href=\"https://github.com/cgwire/blender-kitsu-automated-scene-composition?ref=blog.cg-wire.com\"> \u003Cu>clone the GitHub repository\u003C/u>\u003C/a> to try out the result!\u003C/p>\u003Cp>You can extend this workflow by generating automated previews, reports, or even updating asset information from the new revisions created during the shot animation.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 
😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":474,"comment_id":475,"feature_image":476,"featured":29,"visibility":30,"created_at":477,"updated_at":478,"custom_excerpt":479,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":480,"primary_tag":481,"url":482,"excerpt":479,"reading_time":483,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":484},"d090d72e-fa3b-4af9-806a-a44f7732a7c4","6909b6d2df0ae600014fbb54","https://images.unsplash.com/photo-1725888358557-9f70661012c4?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDJ8fGFuaW1hdGlvbiUyMHBpcGVsaW5lfGVufDB8fHx8MTc2NTA5ODQ2Mnww&ixlib=rb-4.1.0&q=80&w=2000","2025-11-04T09:18:26.000+01:00","2026-02-20T06:04:00.000+01:00","Learn how to automate Blender scene creation using Kitsu breakdown data and Python scripting. 
This guide walks through retrieving breakdowns via Gazu, downloading assets, importing GLB files, and generating a complete Blender scene ready for layout or animation.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/blender-kitsu-breakdown-automation/",11,"\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@steve_j?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Steve Johnson\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/blender-kitsu-breakdown-automation","2025-12-07T18:11:31.000+01:00",{"title":469},"blender-kitsu-breakdown-automation","posts/blender-kitsu-breakdown-automation",[491,492],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},{"id":333,"name":334,"slug":335,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":336},"i5Pduvllq_hTDBHuCFEVMzMxTyU5evzIUkMxND7t3YY",{"id":495,"title":496,"authors":497,"body":7,"description":7,"extension":8,"html":499,"meta":500,"navigation":12,"path":511,"published_at":512,"seo":513,"slug":514,"stem":515,"tags":516,"__hash__":519,"uuid":501,"comment_id":502,"feature_image":503,"featured":29,"visibility":30,"created_at":504,"updated_at":505,"custom_excerpt":506,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":507,"primary_tag":508,"url":509,"excerpt":506,"reading_time":248,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":510},"ghost/posts:blender-addon-ui-scripting-guide.json","A 2026 Guide to Blender Add-on UI 
Development",[498],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📄\u003C/div>\u003Cdiv class=\"kg-callout-text\">Turn your Blender scripts into real tools artists love using—here’s how to build clean, intuitive UI panels for your add-ons.\u003C/div>\u003C/div>\u003Cp>If you’ve ever \u003Ca href=\"https://blog.cg-wire.com/blender-scripting-animation/\">written a Blender script\u003C/a>, you’ve probably realized that getting the feature right is only half the battle: the other half is getting someone else to use it! A clean user interface is a must to share and sell Blender add-ons.\u003C/p>\u003Cp>In this guide, you’ll learn how to build user interfaces for your Blender add-ons using the built-in layout system. We’ll cover the most common types of UI components, where panels can appear, and walk through a minimal working example. By the end, you’ll know how to give your add-on a Blender-native graphical interface.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-blue\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example integration showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blender-ui-addon-script?ref=blog.cg-wire.com\">https://github.com/cgwire/blender-ui-addon-script\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"1-common-ui-components\">\u003Cstrong>1. 
Common UI Components\u003C/strong>\u003C/h2>\u003Cp>In Blender, every element of the user interface has its equivalent in the Python library. You build UI by creating classes that inherit from one of the following types:\u003C/p>\u003Cul>\u003Cli>\u003Ccode>bpy.types.Panel\u003C/code> - for custom panels (the most common)\u003C/li>\u003Cli>\u003Ccode>bpy.types.Menu\u003C/code> - for menus and submenus\u003C/li>\u003Cli>\u003Ccode>bpy.types.Operator\u003C/code> - for actions or tools that can be run from buttons\u003C/li>\u003C/ul>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-daa22afa-ac20-4e3e-8543-c694588146bf.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"334\" height=\"542\">\u003C/figure>\u003Cp>Each of these classes can implement a \u003Ccode>draw(self, context)\u003C/code> method where you describe what the interface should look like using layout commands. Blender’s layout system handles the spacing, alignment, and positioning automatically: it's a declarative UI system where you just describe what should appear and in what order.\u003C/p>\u003Cp>Here are the most common layout elements you’ll use:\u003C/p>\u003Ch3 id=\"basic-display-elements\">\u003Cstrong>Basic Display Elements\u003C/strong>\u003C/h3>\u003Cul>\u003Cli>\u003Cstrong>Label\u003C/strong> - Displays plain, non-interactive text. Format: \u003Ccode>layout.label(text=\"Hello!\")\u003C/code>\u003C/li>\u003Cli>\u003Cstrong>Separator\u003C/strong> - Adds vertical space between items for readability. Format: \u003Ccode>layout.separator()\u003C/code>\u003C/li>\u003C/ul>\u003Cp>\u003C/p>\u003Ch3 id=\"buttons-inputs-props-and-operators\">\u003Cstrong>Buttons, Inputs, Props, and Operators\u003C/strong>\u003C/h3>\u003Cul>\u003Cli>\u003Cstrong>Operator Button\u003C/strong> - Creates a clickable button that triggers an operator (a function registered as a Blender command). 
You can use this for actions like exporting, duplicating, or running a custom script. Syntax: \u003Ccode>layout.operator(\"myaddon.some_action\", text=\"Run Action\")\u003C/code>\u003C/li>\u003C/ul>\u003Cp>The \u003Ccode>layout.prop()\u003C/code> method is used to display editable Blender properties which are either built-in data (like \u003Ccode>context.object\u003C/code>) or your own custom properties. For example, \u003Ccode>layout.prop(context.object, \"name\")\u003C/code> shows an editable text field for the object’s name. Blender automatically chooses the right widget (text box, slider, checkbox, etc.) based on the property’s type:\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Checkbox (Boolean property)\u003C/strong> - Displays a toggle checkbox. Example: \u003Ccode>layout.prop(context.object, \"hide_viewport\")\u003C/code>\u003C/li>\u003Cli>\u003Cstrong>Number Field / Slider (Float or Int)\u003C/strong> - Displays a numeric input, often with a slider. Example: \u003Ccode>layout.prop(context.object, \"location\", index=0, text=\"X Location\")\u003C/code>\u003C/li>\u003Cli>\u003Cstrong>Dropdown Menu (Enum property)\u003C/strong> - Displays a dropdown list when the property is an EnumProperty. Example: \u003Ccode>layout.prop(context.object, \"type\")\u003C/code>\u003C/li>\u003Cli>\u003Cstrong>Text Input \u003C/strong>- Displays a text box for string properties. Example: \u003Ccode>layout.prop(my_settings, \"username\")\u003C/code>\u003C/li>\u003C/ul>\u003Cp>\u003C/p>\u003Ch3 id=\"organizing-the-layout\">\u003Cstrong>Organizing the Layout\u003C/strong>\u003C/h3>\u003Cp>To keep your UI structured and easy to understand, Blender provides layout containers like rows, columns, and boxes.\u003C/p>\u003Cp>A panel contains rows and columns. Rows and columns contain properties, operators, and labels. 
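\u003C/p>\u003Cp>Put together, a hypothetical \u003Ccode>draw()\u003C/code> method combining the elements above could look like this (the \u003Ccode>myaddon.some_action\u003C/code> operator name is a placeholder for one of your own operators):\u003C/p>\u003Cpre>\u003Ccode class="language-python">def draw(self, context):\n&nbsp;&nbsp;&nbsp;&nbsp;layout = self.layout\n&nbsp;&nbsp;&nbsp;&nbsp;obj = context.object\n\n&nbsp;&nbsp;&nbsp;&nbsp;layout.label(text="Object Settings")\n&nbsp;&nbsp;&nbsp;&nbsp;layout.prop(obj, "name")  # text input\n&nbsp;&nbsp;&nbsp;&nbsp;layout.prop(obj, "hide_viewport")  # checkbox\n&nbsp;&nbsp;&nbsp;&nbsp;layout.separator()\n&nbsp;&nbsp;&nbsp;&nbsp;layout.operator("myaddon.some_action", text="Run Action")\u003C/code>\u003C/pre>\u003Cp>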
Blender automatically handles padding, alignment, and scaling to match the theme and layout rules.\u003C/p>\u003Cul>\u003Cli>A row (horizontal grouping) puts elements next to each other horizontally:\u003C/li>\u003C/ul>\u003Cpre>\u003Ccode class="language-python">row = layout.row()\nrow.prop(obj, "location")\nrow.prop(obj, "rotation_euler")\u003C/code>\u003C/pre>\u003Cul>\u003Cli>A column (vertical grouping) stacks elements vertically:\u003C/li>\u003C/ul>\u003Cpre>\u003Ccode class="language-python">col = layout.column()\ncol.prop(obj, "scale")\ncol.prop(obj, "dimensions")\u003C/code>\u003C/pre>\u003Cul>\u003Cli>A box (visual grouping) draws a bordered box that visually groups related controls, like sections:\u003C/li>\u003C/ul>\u003Cpre>\u003Ccode class="language-python">box = layout.box()\nbox.label(text="Transform Settings")\nbox.prop(obj, "location")\nbox.prop(obj, "rotation_euler")\u003C/code>\u003C/pre>\u003Cp>For the full list of UI components, have a look at \u003Ca href="https://docs.blender.org/manual/en/latest/interface/index.html?ref=blog.cg-wire.com">the User Interface page of the official Blender documentation\u003C/a>.\u003C/p>\u003Chr>\u003Ch2 id="2-where-you-can-put-ui-panels">\u003Cstrong>2. 
Where You Can Put UI Panels\u003C/strong>\u003C/h2>\u003Cp>When you create a custom panel in Blender, you can decide where in the interface it appears and what region it occupies with two key class attributes:\u003C/p>\u003Cul>\u003Cli>\u003Ccode>bl_space_type\u003C/code> - which editor or workspace your panel belongs to (for example, the 3D View, the Properties Editor, or the Node Editor).\u003C/li>\u003Cli>\u003Ccode>bl_region_type\u003C/code> - which part of that editor the panel appears in (for example, the sidebar, toolbar, or main window).\u003C/li>\u003C/ul>\u003Cp>Here is a list of the most typical areas where you might place a custom panel:\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-070d3dfe-eb98-42a2-90a2-d2eabc4fc2d4.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1125\" height=\"650\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-070d3dfe-eb98-42a2-90a2-d2eabc4fc2d4.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-070d3dfe-eb98-42a2-90a2-d2eabc4fc2d4.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-070d3dfe-eb98-42a2-90a2-d2eabc4fc2d4.png 1125w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cul>\u003Cli>The 3D view sidebar appears in the right-hand N-panel sidebar of the 3D Viewport. This is the most common location for modeling, rigging, or scene tools:\u003C/li>\u003C/ul>\u003Cpre>\u003Ccode class=\"language-python\">bl_space_type = 'VIEW_3D'\nbl_region_type = 'UI'\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cul>\u003Cli>You can add panels inside the Properties Editor, among the Object, Material, or Scene tabs. 
Use this when your add-on deals with materials, objects, render settings, or scene properties:\u003C/li>\u003C/ul>\u003Cpre>\u003Ccode class=\"language-python\">bl_space_type = 'PROPERTIES'\nbl_region_type = 'WINDOW'\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cul>\u003Cli>In the UV/Image Editor sidebar (useful for texture tools or image utilities):\u003C/li>\u003C/ul>\u003Cpre>\u003Ccode class=\"language-python\">bl_space_type = 'IMAGE_EDITOR'\nbl_region_type = 'UI'\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cul>\u003Cli>In the sidebar of the Shader, Geometry Node, or Compositor editors for tools that work with nodes, shaders, or procedural systems:\u003C/li>\u003C/ul>\u003Cpre>\u003Ccode class=\"language-python\">bl_space_type = 'NODE_EDITOR'\nbl_region_type = 'UI'\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>The best panel location depends on your tool’s purpose:\u003C/p>\u003Cul>\u003Cli>Modeling / Object tools → 3D View sidebar (\u003Ccode>VIEW_3D\u003C/code> + \u003Ccode>UI\u003C/code>)\u003C/li>\u003Cli>Material or render settings → Properties editor (\u003Ccode>PROPERTIES\u003C/code> + \u003Ccode>WINDOW\u003C/code>)\u003C/li>\u003Cli>Texture utilities → Image editor sidebar (\u003Ccode>IMAGE_EDITOR\u003C/code> + \u003Ccode>UI\u003C/code>)\u003C/li>\u003Cli>Shader / Geometry tools → Node editor sidebar (\u003Ccode>NODE_EDITOR\u003C/code> + \u003Ccode>UI\u003C/code>)\u003C/li>\u003C/ul>\u003Cp>Picking the right space helps users find your add-on where they naturally expect to, keeping your UI consistent with Blender’s.\u003C/p>\u003Chr>\u003Ch2 id=\"3-minimal-example-custom-panel-in-the-3d-view-sidebar\">\u003Cstrong>3. 
Minimal Example: Custom Panel in the 3D View Sidebar\u003C/strong>\u003C/h2>\u003Cp>Let's experiment with a simple plugin: a custom panel in the 3D view sidebar that displays a \"hello world\" text alert when clicking on a button.\u003C/p>\u003Ch3 id=\"1-blinfoaddon-metadata\">\u003Cstrong>1) \u003Ccode>bl_info\u003C/code> - addon metadata\u003C/strong>\u003C/h3>\u003Cp>We start by specifying the add-on metadata to tell Blender how to present our add-on to a potential user:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bl_info = {\n&nbsp;&nbsp;&nbsp;&nbsp;\"name\": \"Simple Addon Example\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"author\": \"Your Name\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"version\": (1, 0),\n&nbsp;&nbsp;&nbsp;&nbsp;\"blender\": (4, 0, 0),\n&nbsp;&nbsp;&nbsp;&nbsp;\"location\": \"View3D &gt; Sidebar &gt; Simple Tab\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"description\": \"A simple example addon that prints a message\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"category\": \"3D View\",\n}\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cul>\u003Cli>\u003Ccode>bl_info\u003C/code> is a module-level dictionary Blender uses to show addon info in Preferences → Add-ons\u003Cul>\u003Cli>\u003Ccode>name:\u003C/code> human-readable name shown in the list\u003C/li>\u003Cli>\u003Ccode>author:\u003C/code> author string\u003C/li>\u003Cli>\u003Ccode>version:\u003C/code> tuple representing addon version\u003C/li>\u003Cli>\u003Ccode>blender:\u003C/code> minimum Blender version this addon targets (tuple)\u003C/li>\u003Cli>\u003Ccode>location:\u003C/code> where the addon UI appears (helpful for users)\u003C/li>\u003Cli>\u003Ccode>description:\u003C/code> short description used in the UI\u003C/li>\u003Cli>\u003Ccode>category:\u003C/code> category grouping in the Add-ons list\u003C/li>\u003C/ul>\u003C/li>\u003C/ul>\u003Cp>It's essential to keep your \u003Ccode>bl_info\u003C/code> accurate, as Blender reads it when scanning installed add-ons.\u003C/p>\u003Cp>\u003C/p>\u003Ch3 
id=\"2-define-an-operator-class\">\u003Cstrong>2) Define an operator class\u003C/strong>\u003C/h3>\u003Cp>We then define an Operator subclass. Operators are the official way to perform actions in Blender: they can be invoked from UI, shortcuts, search menu, etc.\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">class SIMPLEADDON_OT_hello(bpy.types.Operator):\n&nbsp;&nbsp;&nbsp;&nbsp;bl_idname = \"simple_addon.say_hello\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_label = \"Say Hello\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_description = \"Prints a message to the console\"\n\n&nbsp;&nbsp;&nbsp;&nbsp;def execute(self, context):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;self.report({'INFO'}, \"Hello from Blender Addon!\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;print(\"Hello from Blender Addon!\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return {'FINISHED'}\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>bl_idname\u003C/code> - A unique identifier string in the form \u003Ccode>\"module_name.operator_name\"\u003C/code>, all lowercase and with a dot. This is how you call the operator from code or UI (\u003Ccode>bpy.ops.simple_addon.say_hello()\u003C/code>).\u003C/li>\u003Cli>\u003Ccode>bl_label\u003C/code> - User-facing label that appears on buttons/menus.\u003C/li>\u003Cli>\u003Ccode>bl_description\u003C/code> - Tooltip/description shown in the UI.\u003C/li>\u003Cli>\u003Ccode>execute(self, context)\u003C/code> - Core method called when the operator runs (synchronous execution). \u003Ccode>context\u003C/code> gives access to Blender's current state (active object, scene, area, etc.). \u003Ccode>self.report({'INFO'}, \"…\")\u003C/code> shows a small message in Blender's info bar / status (good for user feedback). \u003Ccode>print(\"…\")\u003C/code> prints to the system/Blender console (useful for debugging). Returns a set like \u003Ccode>{'FINISHED'}\u003C/code> or \u003Ccode>{'CANCELLED'}\u003C/code>. 
Blender uses this result to know whether the operator completed successfully.\u003C/li>\u003C/ul>\u003Cp>\u003C/p>\u003Ch3 id=\"3-panel-classui-placement\">\u003Cstrong>3) Panel class - UI placement\u003C/strong>\u003C/h3>\u003Cp>We can then get to the Panel subclass to add UI in Blender:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">class SIMPLEADDON_PT_panel(bpy.types.Panel):\n&nbsp;&nbsp;&nbsp;&nbsp;bl_label = \"Simple Addon Panel\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_idname = \"SIMPLEADDON_PT_panel\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_space_type = 'VIEW_3D'\n&nbsp;&nbsp;&nbsp;&nbsp;bl_region_type = 'UI'\n&nbsp;&nbsp;&nbsp;&nbsp;bl_category = 'Simple'\n\n&nbsp;&nbsp;&nbsp;&nbsp;def draw(self, context):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout = self.layout\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.operator(\"simple_addon.say_hello\")\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>bl_label\u003C/code> - panel title shown in the UI.\u003C/li>\u003Cli>\u003Ccode>bl_idname\u003C/code> - unique panel identifier.\u003C/li>\u003Cli>\u003Ccode>bl_space_type = 'VIEW_3D'\u003C/code> tells Blender this panel belongs in the 3D Viewport area.\u003C/li>\u003Cli>\u003Ccode>bl_region_type = 'UI'\u003C/code> places it in the right-side region (the N-panel). Other regions exist (e.g., \u003Ccode>'TOOLS', 'WINDOW'\u003C/code>).\u003C/li>\u003Cli>\u003Ccode>bl_category = 'Simple'\u003C/code> - The tab name in the sidebar. The panel will appear under a tab labeled “Simple”.\u003C/li>\u003Cli>\u003Ccode>draw(self, context)\u003C/code> is called to draw UI layout.\u003C/li>\u003Cli>\u003Ccode>self.layout\u003C/code> is a \u003Ccode>UILayout\u003C/code> object used to place buttons, labels, properties, etc.\u003C/li>\u003Cli>\u003Ccode>layout.operator(\"simple_addon.say_hello\")\u003C/code> creates a button that, when clicked, calls the operator with bl_idname \u003Ccode>\"simple_addon.say_hello\"\u003C/code>. 
The button text is taken from the operator's \u003Ccode>bl_label\u003C/code>.\u003C/li>\u003C/ul>\u003Cp>\u003C/p>\u003Ch3 id=\"4-register-unregister-functions\">\u003Cstrong>4) Register / unregister functions\u003C/strong>\u003C/h3>\u003Cp>Blender requires classes that define UI, operators, panels, properties, etc., to be registered so Blender knows about them:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">def register():\n&nbsp;&nbsp;&nbsp;&nbsp;bpy.utils.register_class(SIMPLEADDON_OT_hello)\n&nbsp;&nbsp;&nbsp;&nbsp;bpy.utils.register_class(SIMPLEADDON_PT_panel)\n\ndef unregister():\n&nbsp;&nbsp;&nbsp;&nbsp;bpy.utils.unregister_class(SIMPLEADDON_PT_panel)\n&nbsp;&nbsp;&nbsp;&nbsp;bpy.utils.unregister_class(SIMPLEADDON_OT_hello)\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>bpy.utils.register_class(Class)\u003C/code> registers a class; \u003Ccode>unregister_class\u003C/code> removes it.\u003C/li>\u003Cli>It's important to unregister classes in the reverse order of registration, especially when classes reference each other. This is why the panel is unregistered before the operator.\u003C/li>\u003Cli>When the addon is enabled in Preferences, Blender calls \u003Ccode>register()\u003C/code>. 
When disabled, it calls \u003Ccode>unregister()\u003C/code>.\u003C/li>\u003C/ul>\u003Cp>We put the full code in a Python file \u003Ccode>addon.py\u003C/code>:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bl_info = {\n&nbsp;&nbsp;&nbsp;&nbsp;\"name\": \"Simple Addon Example\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"author\": \"Your Name\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"version\": (1, 0),\n&nbsp;&nbsp;&nbsp;&nbsp;\"blender\": (4, 0, 0),\n&nbsp;&nbsp;&nbsp;&nbsp;\"location\": \"View3D &gt; Sidebar &gt; Simple Tab\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"description\": \"A simple example addon that prints a message\",\n&nbsp;&nbsp;&nbsp;&nbsp;\"category\": \"3D View\",\n}\n\nimport bpy\n\nclass SIMPLEADDON_OT_hello(bpy.types.Operator):\n&nbsp;&nbsp;&nbsp;&nbsp;bl_idname = \"simple_addon.say_hello\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_label = \"Say Hello\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_description = \"Prints a message to the console\"\n\n&nbsp;&nbsp;&nbsp;&nbsp;def execute(self, context):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;self.report({'INFO'}, \"Hello from Blender Addon!\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;print(\"Hello from Blender Addon!\")\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return {'FINISHED'}\n\nclass SIMPLEADDON_PT_panel(bpy.types.Panel):\n&nbsp;&nbsp;&nbsp;&nbsp;bl_label = \"Simple Addon Panel\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_idname = \"SIMPLEADDON_PT_panel\"\n&nbsp;&nbsp;&nbsp;&nbsp;bl_space_type = 'VIEW_3D'\n&nbsp;&nbsp;&nbsp;&nbsp;bl_region_type = 'UI'\n&nbsp;&nbsp;&nbsp;&nbsp;bl_category = 'Simple'\n\n&nbsp;&nbsp;&nbsp;&nbsp;def draw(self, context):\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout = self.layout\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;layout.operator(\"simple_addon.say_hello\")\n\ndef register():\n&nbsp;&nbsp;&nbsp;&nbsp;bpy.utils.register_class(SIMPLEADDON_OT_hello)\n&nbsp;&nbsp;&nbsp;&nbsp;bpy.utils.register_class(SIMPLEADDON_PT_panel)\n\ndef 
unregister():\n&nbsp;&nbsp;&nbsp;&nbsp;bpy.utils.unregister_class(SIMPLEADDON_PT_panel)\n&nbsp;&nbsp;&nbsp;&nbsp;bpy.utils.unregister_class(SIMPLEADDON_OT_hello)\u003C/code>\u003C/pre>\u003Chr>\u003Ch2 id="4-running-and-packaging-your-add-on">\u003Cstrong>4. Running and Packaging Your Add-on\u003C/strong>\u003C/h2>\u003Cp>Once you’ve written your add-on script, you can load it into Blender and test it right away. No extra tools are required.\u003C/p>\u003Col>\u003Cli>Save your script - Save your Python file with a clear name like \u003Ccode>my_addon.py\u003C/code>.\u003C/li>\u003Cli>Open Blender’s Add-ons Preferences - Go to Edit → Preferences → Add-ons. This is where Blender manages all installed extensions.\u003C/li>\u003Cli>Install the add-on - Click the Install… button at the top of the preferences window. Select your \u003Ccode>my_addon.py\u003C/code> file and click Install Add-on.\u003C/li>\u003Cli>Enable it - After installing, your add-on should appear in the list. Find it (you can search for “Simple Addon Example”) and check the box to enable it if it's not already.\u003C/li>\u003C/ol>\u003Cfigure class="kg-card kg-image-card">\u003Cimg src="https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-27ff3592-fed1-4347-8930-9dd62b2d950b.png" class="kg-image" alt="" loading="lazy" width="1227" height="800" srcset="https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-27ff3592-fed1-4347-8930-9dd62b2d950b.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-27ff3592-fed1-4347-8930-9dd62b2d950b.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-27ff3592-fed1-4347-8930-9dd62b2d950b.png 1227w" sizes="(min-width: 720px) 720px">\u003C/figure>\u003Col start="5">\u003Cli>Check it in the interface - Open the 3D 
Viewport, open the sidebar, and look for the tab named Simple. Your custom panel should be there, ready to use!\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-2a90e13f-b338-4235-a830-f9c8d8060562.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1227\" height=\"741\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-2a90e13f-b338-4235-a830-f9c8d8060562.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-2a90e13f-b338-4235-a830-f9c8d8060562.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-2a90e13f-b338-4235-a830-f9c8d8060562.png 1227w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>When you want to share your add-on with others, you can upload it to GitHub, Blender Artists, or Gumroad for distribution. Add a short README.md explaining what the add-on does and how to install it.\u003C/p>\u003Cp>For add-ons with multiple files (e.g. separate modules, icons, or resources), create a folder then zip the entire folder (\u003Ccode>my_addon.zip\u003C/code>) and share that. Blender can install \u003Ccode>.zip\u003C/code> archives directly via the same Install… button so no need to extract it beforehand. The main entry point must be named \u003Ccode>__init__.py\u003C/code>, since Blender treats it as a Python package.\u003C/p>\u003Chr>\u003Ch2 id=\"conclusion\">\u003Cstrong>Conclusion\u003C/strong>\u003C/h2>\u003Cp>Creating UI for Blender add-ons is intimidating at first, but it’s one of the easiest ways to share a tool you created. 
Once you understand how panels and layouts work, you can quickly add buttons, properties, and organized sections that users will find intuitive.\u003C/p>\u003Cp>\u003Ca href="https://github.com/cgwire/blender-ui-addon-script?ref=blog.cg-wire.com">Have a look at the code repository on GitHub\u003C/a> to try the example yourself.\u003C/p>\u003Cp>Start small: add a simple panel, a label, and a button that triggers an operator, then build from there!\u003C/p>\u003Cdiv class="kg-card kg-callout-card kg-callout-card-yellow">\u003Cdiv class="kg-callout-emoji">📽️\u003C/div>\u003Cdiv class="kg-callout-text">To learn more about the animation process, \u003Ca href="https://www.cg-wire.com/community?ref=blog.cg-wire.com" rel="noreferrer">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 😊\u003C/div>\u003C/div>\u003Cdiv class="kg-card kg-button-card kg-align-center">\u003Ca href="https://www.cg-wire.com/community?ref=blog.cg-wire.com" class="kg-btn kg-btn-accent">Join Our Discord 
Community\u003C/a>\u003C/div>",{"uuid":501,"comment_id":502,"feature_image":503,"featured":29,"visibility":30,"created_at":504,"updated_at":505,"custom_excerpt":506,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":507,"primary_tag":508,"url":509,"excerpt":506,"reading_time":248,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":510},"e18120b7-5615-497e-8db8-9f03ceee9526","6922df21009fc3000190e38e","https://images.unsplash.com/photo-1760548425425-e42e77fa38f1?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGNvZGluZyUyMGludGVyZmFjZSUyMHRvb2xzfGVufDB8fHx8MTc2Mzg5MzE4MXww&ixlib=rb-4.1.0&q=80&w=2000","2025-11-23T11:17:05.000+01:00","2026-02-20T06:03:59.000+01:00","Turn your Blender scripts into real tools artists love using—here’s how to build clean, intuitive UI panels for your add-ons.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/blender-addon-ui-scripting-guide/","\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@jakubzerdzicki?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Jakub Żerdzicki\u003C/span>\u003C/a>\u003Cspan style=\"white-space: 
pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/blender-addon-ui-scripting-guide","2025-11-24T10:00:34.000+01:00",{"title":496},"blender-addon-ui-scripting-guide","posts/blender-addon-ui-scripting-guide",[517,518],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},{"id":333,"name":334,"slug":335,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":336},"3-HhmFMhJkG_7Y2WuAQl2Cmyemg5YE38Mtwl_osaN7w",{"id":521,"title":522,"authors":523,"body":7,"description":7,"extension":8,"html":525,"meta":526,"navigation":12,"path":537,"published_at":538,"seo":539,"slug":540,"stem":541,"tags":542,"__hash__":545,"uuid":527,"comment_id":528,"feature_image":529,"featured":29,"visibility":30,"created_at":530,"updated_at":531,"custom_excerpt":532,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":533,"primary_tag":534,"url":535,"excerpt":532,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":536},"ghost/posts:blender-scripting-geometry-nodes-2.json","How to Script Geometry Nodes in Blender with Python 
(2026)",[524],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">🐍\u003C/div>\u003Cdiv class=\"kg-callout-text\">Procedural modeling becomes far more powerful when you generate nodes with code instead of wiring them by hand.\u003C/div>\u003C/div>\u003Cp>Geometry nodes are an incredible Blender feature, but did you know Blender's Python API also lets you script geometry nodes just like any other data block?\u003C/p>\u003Cp>You can create nodes, set their parameters, and connect them programmatically, opening the door to automated scene generation, custom tools, and rapid model prototyping with just a few lines of code instead of manually wiring dozens of nodes.\u003C/p>\u003Cp>In this tutorial, you'll learn how to create geometry node setups entirely from a Python script. We'll cover the full process from building a new node tree to assigning it to an object with clear examples you can paste directly into Blender's scripting editor.\u003C/p>\u003Cp>In case you missed it, have a look at \u003Ca href=\"https://blog.cg-wire.com/blender-scripting-animation/\">our introduction to Blender scripting\u003C/a> first.\u003C/p>\u003Chr>\u003Ch2 id=\"why-script-geometry-nodes\">\u003Cstrong>Why Script Geometry Nodes?\u003C/strong>\u003C/h2>\u003Cp>Blender's Geometry Nodes editor is an excellent visual system for building procedural tools: it's intuitive, flexible, and great for experimentation once you get the hang of it. 
But as projects grow in complexity, manually managing large node networks becomes tedious and difficult to maintain, especially if you need to reuse them across multiple 3D modeling pipelines.\u003C/p>\u003Cp>Scripting allows you to generate, modify, and connect nodes automatically. Instead of manually recreating the same setups across multiple projects, you can write a script once and reuse it whenever you need it, saving time and keeping your results consistent.\u003C/p>\u003Cp>A scripted node setup isn't tied to a single .blend file: it can be stored, versioned, and shared just like any other piece of code. This makes it easy to build a library of procedural tools that can be reused across different projects or shared with other artists and developers.\u003C/p>\u003Cp>Let's see how scripting works in practice with a few code snippets.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-blue\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example scripts showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blender-scripting-geometry-nodes?ref=blog.cg-wire.com\">https://github.com/cgwire/blender-scripting-geometry-nodes\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"1-creating-a-new-node-tree\">\u003Cstrong>1. Creating a New Node Tree\u003C/strong>\u003C/h2>\u003Cp>Every Geometry Nodes setup starts as a node tree, which stores nodes and their connections. 
You can create one from Python using Blender's data API:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import bpy\n\nnode_tree = bpy.data.node_groups.new(\"MyGeoNodesTree\", 'GeometryNodeTree')\u003C/code>\u003C/pre>\u003Cp>You can think of this \u003Ccode>node_tree\u003C/code> as the digital canvas that will hold all your procedural logic. Once created, you can add nodes, connect them, and set their properties just as you would in Blender's graphical interface.\u003C/p>\u003Chr>\u003Ch2 id=\"2-add-nodes-and-connect-them\">\u003Cstrong>2. Add Nodes and Connect Them\u003C/strong>\u003C/h2>\u003Cp>Next, let's add a few basic nodes. We'll create a Group Input node, a Subdivide Mesh node, and a Group Output node, then connect them and apply the result to the active object.\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\"># DEFINE THE GROUP'S INTERFACE SOCKETS\ngeo_input = node_tree.interface.new_socket(\n    name=\"Geometry\",\n    in_out='INPUT',\n    socket_type='NodeSocketGeometry'\n)\ngeo_output = node_tree.interface.new_socket(\n    name=\"Geometry\",\n    in_out='OUTPUT',\n    socket_type='NodeSocketGeometry'\n)\n\n# ADD NODES\ninput_node = node_tree.nodes.new(\"NodeGroupInput\")\nsubdivide_node = node_tree.nodes.new(\"GeometryNodeSubdivideMesh\")\noutput_node = node_tree.nodes.new(\"NodeGroupOutput\")\n\ninput_node.location = (-300, 0)\nsubdivide_node.location = (0, 0)\noutput_node.location = (300, 0)\n\n# LINK NODES\nnode_tree.links.new(input_node.outputs['Geometry'], subdivide_node.inputs['Mesh'])\nnode_tree.links.new(subdivide_node.outputs['Mesh'], output_node.inputs['Geometry'])\n\n# APPLY TO THE ACTIVE OBJECT\nobj = bpy.context.object\nmod = obj.modifiers.new(\"MyGeoNodesModifier\", \"NODES\")\nmod.node_group = node_tree\u003C/code>\u003C/pre>\u003Cp>When you run this script, you'll have a functional (though simple) geometry node setup that subdivides any geometry 
it's applied to:\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-de23dbc9-781f-4730-9a46-a6fec93c97a7.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1314\" height=\"889\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-de23dbc9-781f-4730-9a46-a6fec93c97a7.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-de23dbc9-781f-4730-9a46-a6fec93c97a7.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-de23dbc9-781f-4730-9a46-a6fec93c97a7.png 1314w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"3-set-parameters-and-link-geometry-to-objects\">\u003Cstrong>3. Set Parameters and Link Geometry to Objects\u003C/strong>\u003C/h2>\u003Cp>You can modify parameters directly via the node's properties. 
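\u003C/p>\u003Cp>If you're ever unsure what a node's sockets are called, you can list them from the script itself. Here is a small sketch (using the \u003Ccode>subdivide_node\u003C/code> variable created above) that prints each input socket's name and type:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\"># List a node's input sockets to discover the socket names used in your script\nfor socket in subdivide_node.inputs:\n    print(socket.name, socket.type)\u003C/code>\u003C/pre>\u003Cp>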
For example, let's increase the subdivision level of the node group we applied earlier:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">subdivide_node.inputs['Level'].default_value = 3\u003C/code>\u003C/pre>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-18e48250-6a76-4eda-b14c-ce8065b78f9e.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1314\" height=\"889\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-18e48250-6a76-4eda-b14c-ce8065b78f9e.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-18e48250-6a76-4eda-b14c-ce8065b78f9e.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-18e48250-6a76-4eda-b14c-ce8065b78f9e.png 1314w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Adjusting \u003Ccode>default_value\u003C/code> for inputs is an easy way to parameterize your setup.\u003C/p>\u003Cp>For a full breakdown of the available parameters and types, refer to \u003Ca href=\"https://docs.blender.org/api/current/bpy.types.Node.html?ref=blog.cg-wire.com\">the official Blender Python API documentation\u003C/a>.\u003C/p>\u003Chr>\u003Ch2 id=\"4-create-a-custom-%E2%80%9Ccube-crowd-generator%E2%80%9D-node-group-programmatically\">\u003Cstrong>4. Create a Custom “Cube Crowd Generator” Node Group Programmatically\u003C/strong>\u003C/h2>\u003Cp>We now know how to define geometry nodes programmatically, but what about creating reusable custom node groups?\u003C/p>\u003Cp>Let's work on a new example that builds a tiny procedural system that scatters many cubes on a surface. 
The script creates a Geometry Nodes group that takes a surface, scatters points across it, randomly offsets those points, places a cube on each point (as instances), converts the instances to real geometry, and outputs the final mesh as \"Cubes\".\u003C/p>\u003Cp>\u003C/p>\u003Ch3 id=\"1-create-a-new-node-group\">\u003Cstrong>1) Create a new node group\u003C/strong>\u003C/h3>\u003Cp>First, we create a new Geometry Node group in Blender named \u003Ccode>\"CubeCrowdGenerator\"\u003C/code>.\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">crowd_group = bpy.data.node_groups.new(\"CubeCrowdGenerator\", \"GeometryNodeTree\")\u003C/code>\u003C/pre>\u003Cp>Like a function, this node group can later be attached to any object through a Geometry Nodes modifier.\u003C/p>\u003Cp>\u003C/p>\u003Ch3 id=\"2-add-group-input-and-output-nodes-uientry-points\">\u003Cstrong>2) Add group input and output nodes (UI/entry points)\u003C/strong>\u003C/h3>\u003Cp>We place the standard group input and output nodes on the canvas as usual:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">group_in = crowd_group.nodes.new(\"NodeGroupInput\")\ngroup_out = crowd_group.nodes.new(\"NodeGroupOutput\")\n\ngroup_in.location = (-600, 0)\ngroup_out.location = (600, 0)\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>group_in\u003C/code> and \u003Ccode>group_out\u003C/code> are the visible sockets of the node group in the Geometry Nodes editor.\u003C/li>\u003Cli>The script also positions them so the graph is readable.\u003C/li>\u003C/ul>\u003Cp>\u003C/p>\u003Ch3 id=\"3-define-the-group-interface-what-the-group-acceptsreturns\">\u003Cstrong>3) Define the group interface (what the group accepts/returns)\u003C/strong>\u003C/h3>\u003Cp>We need to expose an \u003Cstrong>input socket named \u003Ccode>Surface\u003C/code>\u003C/strong> where we'll plug the mesh we want to populate (e.g., a plane) and an \u003Cstrong>output socket named \u003Ccode>Cubes\u003C/code>\u003C/strong>, the resulting 
geometry.\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">interface = crowd_group.interface\ninterface.new_socket(name=\"Surface\", in_out=\"INPUT\", socket_type=\"NodeSocketGeometry\")\ninterface.new_socket(name=\"Cubes\", in_out=\"OUTPUT\", socket_type=\"NodeSocketGeometry\")\u003C/code>\u003C/pre>\u003Cp>In practice, when you add this node group to an object, you will plug its surface (an object's original geometry) into \u003Ccode>Surface\u003C/code>.\u003C/p>\u003Cp>\u003C/p>\u003Ch3 id=\"4-create-the-internal-nodes-the-building-blocks\">\u003Cstrong>4) Create the internal nodes (the building blocks)\u003C/strong>\u003C/h3>\u003Cp>We can then work on the actual internal logic:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">distribute = crowd_group.nodes.new(\"GeometryNodeDistributePointsOnFaces\")\nrand_vec = crowd_group.nodes.new(\"FunctionNodeRandomValue\")\nset_pos = crowd_group.nodes.new(\"GeometryNodeSetPosition\")\ncube = crowd_group.nodes.new(\"GeometryNodeMeshCube\")\ninstance = crowd_group.nodes.new(\"GeometryNodeInstanceOnPoints\")\nrealize = crowd_group.nodes.new(\"GeometryNodeRealizeInstances\")\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Cstrong>GeometryNodeDistributePointsOnFaces\u003C/strong>: creates points across the input surface (controls how many points, distribution).\u003C/li>\u003Cli>\u003Cstrong>FunctionNodeRandomValue (Float Vector)\u003C/strong>: produces a random 3D vector per point used as an offset.\u003C/li>\u003Cli>\u003Cstrong>GeometryNodeSetPosition\u003C/strong>: moves each point by a vector (the random offset).\u003C/li>\u003Cli>\u003Cstrong>GeometryNodeMeshCube\u003C/strong>: generates a cube mesh that will be used as the instance object.\u003C/li>\u003Cli>\u003Cstrong>GeometryNodeInstanceOnPoints\u003C/strong>: places the cube on each point. 
It doesn't create real geometry; each copy is a cheap instance of the original cube.\u003C/li>\u003Cli>\u003Cstrong>GeometryNodeRealizeInstances\u003C/strong>: converts instances into actual mesh geometry so they can be output as a single mesh.\u003C/li>\u003C/ul>\u003Cp>\u003C/p>\u003Ch3 id=\"5-configure-the-random-vector-node\">\u003Cstrong>5) Configure the random vector node\u003C/strong>\u003C/h3>\u003Cp>We set the \u003Ccode>Random Value\u003C/code> node to return a \u003Cstrong>3-component vector \u003C/strong>we can use to offset the generated cubes in 3D space:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">rand_vec.data_type = \"FLOAT_VECTOR\"\nrand_vec.inputs[\"Min\"].default_value = (-0.5, -0.5, 0.0)\nrand_vec.inputs[\"Max\"].default_value = (0.5, 0.5, 0.5)\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>Min\u003C/code> and \u003Ccode>Max\u003C/code> define the range for each component. For example, X will be between \u003Ccode>-0.5\u003C/code> and \u003Ccode>0.5\u003C/code>.\u003C/li>\u003Cli>Result: each point gets a slightly different offset so cubes don't sit exactly on top of one another.\u003C/li>\u003C/ul>\u003Cp>\u003C/p>\u003Ch3 id=\"6-node-layout-ui-only\">\u003Cstrong>6) Node layout (UI only)\u003C/strong>\u003C/h3>\u003Cp>We then position the internal nodes so the graph stays readable if we inspect it in Blender:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">distribute.location = (-400, 0)\nrand_vec.location = (-200, -200)\nset_pos.location = (-100, 0)\ninstance.location = (100, 0)\ncube.location = (-400, -200)\nrealize.location = (300, 0)\u003C/code>\u003C/pre>\u003Cp>These \u003Ccode>location\u003C/code> assignments only affect how the nodes are visually arranged in the node editor. 
They don't affect what the graph does.\u003C/p>\u003Cp>\u003C/p>\u003Ch3 id=\"7-wire-the-nodes-together\">\u003Cstrong>7) Wire the nodes together\u003C/strong>\u003C/h3>\u003Cp>Finally, we grab the tree's \u003Ccode>links\u003C/code> collection and define how the data flows:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">links = crowd_group.links\n\nlinks.new(group_in.outputs[\"Surface\"], distribute.inputs[\"Mesh\"])\nlinks.new(distribute.outputs[\"Points\"], set_pos.inputs[\"Geometry\"])\nlinks.new(rand_vec.outputs[\"Value\"], set_pos.inputs[\"Offset\"])\nlinks.new(set_pos.outputs[\"Geometry\"], instance.inputs[\"Points\"])\nlinks.new(cube.outputs[\"Mesh\"], instance.inputs[\"Instance\"])\nlinks.new(instance.outputs[\"Instances\"], realize.inputs[\"Geometry\"])\nlinks.new(realize.outputs[\"Geometry\"], group_out.inputs[\"Cubes\"])\u003C/code>\u003C/pre>\u003Col>\u003Cli>\u003Cstrong>Surface → DistributePointsOnFaces\u003C/strong>: the input surface (plane) is used to create scattered points.\u003C/li>\u003Cli>\u003Cstrong>Points → SetPosition (Geometry)\u003C/strong>: Set Position receives the points as the geometry to be moved.\u003C/li>\u003Cli>\u003Cstrong>RandomValue → SetPosition (Offset)\u003C/strong>: each point gets a random vector offset.\u003C/li>\u003Cli>\u003Cstrong>SetPosition → InstanceOnPoints (Points)\u003C/strong>: the moved points become the anchor positions for instances.\u003C/li>\u003Cli>\u003Cstrong>Cube Mesh → InstanceOnPoints (Instance)\u003C/strong>: each point receives a cube instance.\u003C/li>\u003Cli>\u003Cstrong>InstanceOnPoints → RealizeInstances\u003C/strong>: instances are converted to mesh geometry.\u003C/li>\u003Cli>\u003Cstrong>RealizeInstances → Group Output (\"Cubes\")\u003C/strong>: the final result is made available as the group's output.\u003C/li>\u003C/ol>\u003Cp>Here is the full script:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import bpy\n\n# Create a new Geometry Node group\ncrowd_group = bpy.data.node_groups.new(\"CubeCrowdGenerator\", \"GeometryNodeTree\")\n\n# Create input/output 
nodes\ngroup_in = crowd_group.nodes.new(\"NodeGroupInput\")\ngroup_out = crowd_group.nodes.new(\"NodeGroupOutput\")\n\ngroup_in.location = (-600, 0)\ngroup_out.location = (600, 0)\n\n# Define group interface sockets\ninterface = crowd_group.interface\ninterface.new_socket(name=\"Surface\", in_out=\"INPUT\", socket_type=\"NodeSocketGeometry\")\ninterface.new_socket(name=\"Cubes\", in_out=\"OUTPUT\", socket_type=\"NodeSocketGeometry\")\n\n# Create internal nodes\ndistribute = crowd_group.nodes.new(\"GeometryNodeDistributePointsOnFaces\")\ninstance = crowd_group.nodes.new(\"GeometryNodeInstanceOnPoints\")\ncube = crowd_group.nodes.new(\"GeometryNodeMeshCube\")\nrealize = crowd_group.nodes.new(\"GeometryNodeRealizeInstances\")\nset_pos = crowd_group.nodes.new(\"GeometryNodeSetPosition\")\nrand_vec = crowd_group.nodes.new(\"FunctionNodeRandomValue\")\n\n# Configure random vector node\nrand_vec.data_type = \"FLOAT_VECTOR\"\nrand_vec.inputs[\"Min\"].default_value = (-0.5, -0.5, 0.0)  # minimum offset\nrand_vec.inputs[\"Max\"].default_value = (0.5, 0.5, 0.5)  # maximum offset\n\n# Layout nodes\ndistribute.location = (-400, 0)\nrand_vec.location = (-200, -200)\nset_pos.location = (-100, 0)\ninstance.location = (100, 0)\ncube.location = (-400, -200)\nrealize.location = (300, 0)\n\n# Create links\nlinks = crowd_group.links\nlinks.new(group_in.outputs[\"Surface\"], distribute.inputs[\"Mesh\"])\nlinks.new(distribute.outputs[\"Points\"], set_pos.inputs[\"Geometry\"])\nlinks.new(rand_vec.outputs[\"Value\"], set_pos.inputs[\"Offset\"])\nlinks.new(set_pos.outputs[\"Geometry\"], instance.inputs[\"Points\"])\nlinks.new(cube.outputs[\"Mesh\"], instance.inputs[\"Instance\"])\nlinks.new(instance.outputs[\"Instances\"], realize.inputs[\"Geometry\"])\nlinks.new(realize.outputs[\"Geometry\"], group_out.inputs[\"Cubes\"])\u003C/code>\u003C/pre>\u003Cp>Copy this script into the Scripting workspace and run it; you can then add the custom node group from the geometry node 
workspace:\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-d4ff8437-efb6-43b0-b45d-a54fce0b74b6.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1430\" height=\"920\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-d4ff8437-efb6-43b0-b45d-a54fce0b74b6.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-d4ff8437-efb6-43b0-b45d-a54fce0b74b6.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-d4ff8437-efb6-43b0-b45d-a54fce0b74b6.png 1430w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>We can open the node group to see what's inside by double-clicking on it:\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-679df6c4-2877-4419-8b79-4758df98290a.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1430\" height=\"920\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-679df6c4-2877-4419-8b79-4758df98290a.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-679df6c4-2877-4419-8b79-4758df98290a.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-679df6c4-2877-4419-8b79-4758df98290a.png 1430w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"conclusion\">\u003Cstrong>Conclusion\u003C/strong>\u003C/h2>\u003Cp>With just a few dozen lines of code, you can script Geometry Nodes setups that would take much longer to assemble manually. 
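\u003C/p>\u003Cp>As a final sketch, you can also attach the group to an object entirely from Python, reusing the modifier pattern from the first example (this assumes the \u003Ccode>CubeCrowdGenerator\u003C/code> script above has already been run in the current session):\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import bpy\n\n# Attach the generated node group to the active object via a Geometry Nodes modifier\nobj = bpy.context.object\nmod = obj.modifiers.new(\"CubeCrowd\", \"NODES\")\nmod.node_group = bpy.data.node_groups[\"CubeCrowdGenerator\"]\u003C/code>\u003C/pre>\u003Cp>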
In this article, you've learned how to create Geometry Node trees, add and connect nodes programmatically, control parameters, assign node trees to objects, and build a full procedural system.\u003C/p>\u003Cp>Have a look at \u003Ca href=\"https://github.com/cgwire/blender-scripting-geometry-nodes?ref=blog.cg-wire.com\">the code repository on GitHub\u003C/a> to try the example yourself!\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-db488d5a-7ab5-4471-a904-0926b1fa7d11.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1314\" height=\"889\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-db488d5a-7ab5-4471-a904-0926b1fa7d11.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-db488d5a-7ab5-4471-a904-0926b1fa7d11.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-db488d5a-7ab5-4471-a904-0926b1fa7d11.png 1314w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>This approach unlocks endless automation potential, from tool development to generative art. \u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process, \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 
😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":527,"comment_id":528,"feature_image":529,"featured":29,"visibility":30,"created_at":530,"updated_at":531,"custom_excerpt":532,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":533,"primary_tag":534,"url":535,"excerpt":532,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":536},"93358eb1-5534-43ed-89a8-0b0de2f00072","691ae1dba0beff00013f02eb","https://images.unsplash.com/photo-1675044794037-9262cedb6d5d?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDV8fGJsZW5kZXIlMjBnZW9tZXRyeSUyMG5vZGVzfGVufDB8fHx8MTc2MzM2OTc0N3ww&ixlib=rb-4.1.0&q=80&w=2000","2025-11-17T09:50:35.000+01:00","2026-02-20T06:04:04.000+01:00","Learn how to script Blender Geometry Nodes using Python to automate procedural setups, generate node trees programmatically, and build reusable tools for your animation pipeline.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/blender-scripting-geometry-nodes-2/","\u003Cspan style=\"white-space: 
pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@mirzaie?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Mehdi Mirzaie\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/blender-scripting-geometry-nodes-2","2025-11-17T10:13:21.000+01:00",{"title":522},"blender-scripting-geometry-nodes-2","posts/blender-scripting-geometry-nodes-2",[543,544],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},{"id":333,"name":334,"slug":335,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":336},"ABwCKyHYQd2e24_gRrEcz2gAc349u2DzqkOMZrfJtyU",{"id":547,"title":548,"authors":549,"body":7,"description":7,"extension":8,"html":551,"meta":552,"navigation":12,"path":562,"published_at":563,"seo":564,"slug":565,"stem":566,"tags":567,"__hash__":569,"uuid":553,"comment_id":554,"feature_image":555,"featured":29,"visibility":30,"created_at":556,"updated_at":531,"custom_excerpt":557,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":558,"primary_tag":559,"url":560,"excerpt":557,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subj
ect":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":561},"ghost/posts:blender-scripting-geometry-nodes.json","The Beginner’s Guide to Geometry Nodes in Blender 2026",[550],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">🧩\u003C/div>\u003Cdiv class=\"kg-callout-text\">Rebuilding scenes by hand is so 2010. Geometry Nodes let you automate, randomize, and control Blender projects with precision — turning hours of manual modeling into minutes of procedural magic.\u003C/div>\u003C/div>\u003Cp>Spending hours manually duplicating geometry, reshaping, or animating repetitive movements in Blender isn't fun. Some workflows are like that: you need to do repetitive tasks over and over again, with only slight variations.\u003C/p>\u003Cp>But there is a smarter, faster way to create procedural effects called geometry nodes. They can seem intimidating and take time to master, but by the end of this article, you’ll know what geometry nodes are, why they matter, and how to start using them in your own Blender projects.\u003C/p>\u003Chr>\u003Ch2 id=\"what-are-geometry-nodes\">\u003Cstrong>What Are Geometry Nodes?\u003C/strong>\u003C/h2>\u003Cp>Geometry Nodes are Blender’s way of letting you create and manipulate models procedurally. 
Instead of editing mesh objects directly, you connect visual nodes that define operations like instancing, transforming, or scattering objects in a non-destructive, modular way.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-b4252feb-7713-4df1-98ca-cc453b53d4ee.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"830\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-b4252feb-7713-4df1-98ca-cc453b53d4ee.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-b4252feb-7713-4df1-98ca-cc453b53d4ee.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-b4252feb-7713-4df1-98ca-cc453b53d4ee.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Each node performs a small task, but when connected, they can create incredibly detailed results: from forests made of thousands of randomized trees to animated particle trails or architectural patterns. Geometry Nodes allow you to build once and control everything with adjustable parameters.\u003C/p>\u003Chr>\u003Ch2 id=\"why-geometry-nodes-are-important\">\u003Cstrong>Why Geometry Nodes Are Important\u003C/strong>\u003C/h2>\u003Cp>\u003Ca href=\"https://blog.cg-wire.com/3d-modeling-animation/\">Traditional modeling and animation workflows\u003C/a> often depend on time-consuming manual adjustments where every change or variation requires direct edits to the model. Geometry Nodes revolutionize this process by introducing procedural control, a system that allows you to generate and modify models dynamically through input values, randomness, or mathematical relationships.\u003C/p>\u003Cp>This approach offers several major advantages. 
It makes you more productive by letting you update or randomize complex scenes instantly without the need to rebuild them from scratch. It also brings flexibility to your pipeline because parameters can be adjusted at any stage of production. Geometry Nodes open the door to experimentation for producing intricate shapes, patterns, and effects that would be difficult or impossible to achieve by hand, like generating a large patch of grass. This makes them ideal for large-scale work such as crowd simulations or realistic natural environments.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-green\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example scripts showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blender-scripting-geometry-nodes?ref=blog.cg-wire.com\">https://github.com/cgwire/blender-scripting-geometry-nodes\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"adding-a-geometry-node\">\u003Cstrong>Adding a Geometry Node\u003C/strong>\u003C/h2>\u003Cp>I know the concept looks intimidating, but give me 5 minutes and we'll create your first geometry node setup.\u003C/p>\u003Col>\u003Cli>Open a new Blender project with a default cube.\u003C/li>\u003Cli>In the Geometry Nodes tab, click New to create a new geometry node group.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-6cd7bfd9-e7fd-4220-834e-c6260fa00949.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1314\" height=\"889\" 
srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-6cd7bfd9-e7fd-4220-834e-c6260fa00949.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-6cd7bfd9-e7fd-4220-834e-c6260fa00949.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-6cd7bfd9-e7fd-4220-834e-c6260fa00949.png 1314w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>You’ll now see a blank node tree in the Geometry Node Editor workspace, with two default nodes: Group Input and Group Output. These represent the start and end of your data flow: geometry comes in, gets modified, and goes out.\u003C/p>\u003Cp>To see your setup in action, just add your first node:\u003C/p>\u003Col>\u003Cli>Click \u003Cstrong>Add\u003C/strong> → \u003Cstrong>Geometry\u003C/strong> → \u003Cstrong>Operations\u003C/strong> → \u003Cstrong>Transform Geometry\u003C/strong>.\u003C/li>\u003Cli>Connect the \u003Cstrong>Group Input\u003C/strong> → \u003Cstrong>Transform Geometry\u003C/strong> → \u003Cstrong>Group Output\u003C/strong>.\u003C/li>\u003Cli>Adjust the translation or scale values in the Transform node.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-ea4b9476-8d89-4d56-bd6a-62dd751ba84d.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1314\" height=\"889\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-ea4b9476-8d89-4d56-bd6a-62dd751ba84d.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-ea4b9476-8d89-4d56-bd6a-62dd751ba84d.png 1000w, 
https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-ea4b9476-8d89-4d56-bd6a-62dd751ba84d.png 1314w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>You’ll immediately see your object move or resize in the viewport. Congratulations, you’ve just built your first procedural modifier!\u003C/p>\u003Cp>Geometry Nodes fall into several broad categories, each handling different aspects of your scene. Think of these categories as toolboxes: each one focuses on a different kind of task, from generating shapes to controlling data or math behind the scenes.\u003C/p>\u003Cp>Here is a quick overview of each category to help you find the nodes you need for a new workflow:\u003C/p>\u003Chr>\u003Ch2 id=\"1-input-nodes\">\u003Cstrong>1. Input Nodes\u003C/strong>\u003C/h2>\u003Cp>Input nodes provide the starting information for your node tree. They bring in existing data from your object or scene, like position, normal, index, or object info that other nodes can use to calculate or transform geometry.\u003C/p>\u003Cp>For example, an Input → Scene → Object Info node gives you the location, rotation, scale, and geometry of another object to use in your calculations.\u003C/p>\u003Cp>When you create a new node tree, Blender always adds a Group Input node, which feeds the original geometry of your object into the node group.\u003C/p>\u003Cp>\u003C/p>\u003Ch2 id=\"2-output-nodes\">\u003Cstrong>2. Output Nodes\u003C/strong>\u003C/h2>\u003Cp>Output nodes define what leaves your node system: the final geometry that Blender renders or displays. The Group Output node is the most common one, connecting the result of your entire node network back to your object in the viewport.\u003C/p>\u003Cp>Other specialized outputs (like Material Output in shader setups) pass data to different parts of Blender’s system. 
In Geometry Nodes, the Output stage determines what geometry, instances, or attributes are visible in the result.\u003C/p>\u003Cp>\u003C/p>\u003Ch2 id=\"3-geometry-nodes\">\u003Cstrong>3. Geometry Nodes\u003C/strong>\u003C/h2>\u003Cp>Geometry nodes directly modify, combine, or generate geometry, the actual shapes in your scene.\u003C/p>\u003Cp>They’re the core of procedural modeling. Instead of sculpting by hand, you can create systems that generate geometry automatically, and you can tweak them later without destroying your base mesh.\u003C/p>\u003Cp>\u003C/p>\u003Ch2 id=\"4-mesh-nodes\">\u003Cstrong>4. Mesh Nodes\u003C/strong>\u003C/h2>\u003Cp>Mesh nodes focus on fine control over mesh structures: the vertices, edges, and faces that make up your geometry. They let you access and modify specific mesh components or convert geometry types.\u003C/p>\u003Cp>When you need precise topology control, go for mesh nodes. They’re perfect for procedural modeling tasks like creating grids, manipulating edge loops, or generating new topology from existing meshes.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-da4fe358-ac3b-482a-8641-6b0540c9d792.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"947\" height=\"897\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-da4fe358-ac3b-482a-8641-6b0540c9d792.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-da4fe358-ac3b-482a-8641-6b0540c9d792.png 947w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>\u003C/p>\u003Ch2 id=\"5-instance-nodes\">\u003Cstrong>5. Instance Nodes\u003C/strong>\u003C/h2>\u003Cp>Instance nodes create copies (instances) of objects, scattered across surfaces or points. 
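Conceptually, an instance is just a lightweight reference to shared geometry plus its own transform. Here is a rough plain-Python sketch of the idea (the class and field names are illustrative, not Blender's actual API):

```python
# Illustrative sketch, not Blender's API: an instance stores a reference
# to shared geometry plus its own transform, so a thousand copies keep
# the heavy mesh data in memory only once.

class Mesh:
    def __init__(self, name, vertices):
        self.name = name
        self.vertices = vertices  # the "heavy" data

class Instance:
    def __init__(self, mesh, position):
        self.mesh = mesh          # shared reference, not a copy
        self.position = position

tree = Mesh("tree", vertices=[(0.0, 0.0, 0.0)] * 10_000)
forest = [Instance(tree, (float(x), 0.0, 0.0)) for x in range(1000)]

# 1000 instances in the scene, but only one real mesh in memory:
assert all(inst.mesh is tree for inst in forest)
```

This is why a node like Realize Instances is a deliberate choice: it converts the shared references back into real, independent geometry, trading memory for editability.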
Nodes like Instance on Points or Realize Instances handle this.\u003C/p>\u003Cp>Instancing is one of the most powerful features in Geometry Nodes because it lets you duplicate thousands of objects (like trees, rocks, or particles) without slowing down your scene: only one real copy is stored in memory, and every duplicate simply references it.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-1049c5fc-a3de-480a-b494-ed49a488af0c.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"1085\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-1049c5fc-a3de-480a-b494-ed49a488af0c.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-1049c5fc-a3de-480a-b494-ed49a488af0c.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-1049c5fc-a3de-480a-b494-ed49a488af0c.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>\u003C/p>\u003Ch2 id=\"6-attribute-nodes\">\u003Cstrong>6. Attribute Nodes\u003C/strong>\u003C/h2>\u003Cp>Attribute nodes control or pass around custom properties attached to geometry, like color, scale, or random values per point. These attributes can be used to drive transformations, materials, or effects.\u003C/p>\u003Cp>Attributes let you add variation and control to your procedural systems. You can randomize the size of scattered objects, color particles differently, or link material effects to geometry data.\u003C/p>\u003Cp>\u003C/p>\u003Ch2 id=\"7-utilities-and-fields\">\u003Cstrong>7. Utilities and Fields\u003C/strong>\u003C/h2>\u003Cp>Utility nodes handle the logic and math behind your geometry network. 
They include operations like Math, Vector Math, Compare, or Map Range, and they’re often used to process or control other nodes’ inputs, like in a programming language.\u003C/p>\u003Cp>They’re the brains of your setup, allowing you to build relationships, create gradients, randomize values, etc.\u003C/p>\u003Cp>\u003C/p>\u003Ch2 id=\"8-curve-nodes\">\u003Cstrong>8. Curve Nodes\u003C/strong>\u003C/h2>\u003Cp>Curve nodes work with curve-based geometry like lines, splines, or paths. They’re useful for generating cables, vines, roads, or abstract motion trails. Nodes like Resample Curve, Curve to Mesh, and Set Curve Radius let you adjust the shape, resolution, or thickness of curves procedurally.\u003C/p>\u003Cp>Curves can also drive instancing, letting you place objects along a path or animate their movement over time.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-6409cf07-26e7-4eb6-8e76-85e9a33a1581.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1142\" height=\"936\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-6409cf07-26e7-4eb6-8e76-85e9a33a1581.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-6409cf07-26e7-4eb6-8e76-85e9a33a1581.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-6409cf07-26e7-4eb6-8e76-85e9a33a1581.png 1142w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>\u003C/p>\u003Ch2 id=\"9-grease-pencil-nodes\">\u003Cstrong>9. Grease Pencil Nodes\u003C/strong>\u003C/h2>\u003Cp>Grease Pencil nodes integrate Blender’s 2D drawing system into the Geometry Nodes workflow. 
You can procedurally modify strokes, convert drawings into geometry, or apply effects like noise, extrusion, or deformation to 2D lines.\u003C/p>\u003Cp>These nodes bridge the gap between 2D animation and procedural design, giving artists new ways to stylize motion graphics or hybrid 2D/3D scenes.\u003C/p>\u003Cp>\u003C/p>\u003Ch2 id=\"10-point-nodes\">\u003Cstrong>10. Point Nodes\u003C/strong>\u003C/h2>\u003Cp>Point nodes manipulate individual points in your geometry: the fundamental building blocks used for scattering, positioning, or transforming instances. You can add, move, or rotate points, or assign attributes like color or scale to each.\u003C/p>\u003Cp>For instance, Distribute Points on Faces generates evenly or randomly placed points across a surface, which can then serve as placement positions for instances like grass or particles.\u003C/p>\u003Cp>\u003C/p>\u003Ch2 id=\"11-volume-nodes\">\u003Cstrong>11. Volume Nodes\u003C/strong>\u003C/h2>\u003Cp>Volume nodes let you create and manipulate volumetric data like fog, smoke, or procedural density fields. You can use them to generate 3D textures, shape clouds, or fill geometry with density-based effects and open the door to atmospheric or organic effects that go far beyond surface modeling.\u003C/p>\u003Cp>\u003C/p>\u003Ch2 id=\"12-material-nodes\">\u003Cstrong>12. Material Nodes\u003C/strong>\u003C/h2>\u003Cp>Material nodes assign or modify materials and shading data. The Set Material or Material Index nodes let you dynamically apply different materials based on attributes, random seeds, or regions of your model.\u003C/p>\u003Cp>This makes it easy to, for example, color-code parts of a structure or assign materials procedurally to scattered objects.\u003C/p>\u003Cp>\u003C/p>\u003Ch2 id=\"13-texture-nodes\">\u003Cstrong>13. Texture Nodes\u003C/strong>\u003C/h2>\u003Cp>Texture nodes sample or generate procedural textures that can drive geometry transformations or visual variation. 
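As a rough illustration in plain Python (a simple sine pattern standing in for a real noise texture; all names are hypothetical), a procedural grayscale mask can drive per-point displacement:

```python
# Sketch of texture-driven variation: a procedural grayscale mask
# (a sine pattern standing in for a noise texture) displaces points.

import math

def mask(x, y):
    """Grayscale value in [0, 1] sampled at (x, y)."""
    return 0.5 + 0.5 * math.sin(x * 3.0) * math.cos(y * 3.0)

# A small 4x4 grid of points lying flat at z = 0:
points = [(x * 0.5, y * 0.5, 0.0) for x in range(4) for y in range(4)]

# Displace each point along Z by the mask value, like a displacement map:
displaced = [(x, y, z + mask(x, y)) for (x, y, z) in points]

assert all(0.0 <= z <= 1.0 for (_, _, z) in displaced)
```

Swap the sine pattern for a Noise Texture node and the same wiring gives you uneven terrain or wavy surfaces.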
They can provide grayscale masks, noise patterns, or gradients that influence scale, displacement, or color.\u003C/p>\u003Cp>By combining texture data with math or attribute nodes, you can create natural randomness for uneven terrain, wavy surfaces, or patterned distribution.\u003C/p>\u003Cp>\u003C/p>\u003Ch2 id=\"14-group-nodes\">\u003Cstrong>14. Group Nodes\u003C/strong>\u003C/h2>\u003Cp>Group nodes bundle multiple nodes into a reusable unit. They’re crucial for organizing complex setups and keeping your node trees clean. You can expose parameters on the group’s input/output to make them adjustable, effectively turning your custom setup into a new super node.\u003C/p>\u003Cp>Once you start building your own groups, you’re not just using Geometry Nodes: you’re creating your own procedural tools.\u003C/p>\u003Cp>\u003C/p>\u003Ch2 id=\"15-hair-nodes\">\u003Cstrong>15. Hair Nodes\u003C/strong>\u003C/h2>\u003Cp>Hair nodes are designed to generate, style, and control procedural hair or fur systems. 
They provide access to strand length, density, and grooming attributes, allowing you to simulate everything from grass fields to character hair.\u003C/p>\u003Cp>These nodes replace older particle-based workflows with a modern, procedural approach that integrates seamlessly with Blender’s new hair system.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card kg-card-hascaption\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-64d8cc68-468d-4294-8abb-2fafc2ac9d87.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"916\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-64d8cc68-468d-4294-8abb-2fafc2ac9d87.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-64d8cc68-468d-4294-8abb-2fafc2ac9d87.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-64d8cc68-468d-4294-8abb-2fafc2ac9d87.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003Cfigcaption>\u003Ci>\u003Cem class=\"italic\" style=\"white-space: pre-wrap;\">Source: Blender Stack Exchange\u003C/em>\u003C/i>\u003C/figcaption>\u003C/figure>\u003Chr>\u003Ch2 id=\"conclusion\">\u003Cstrong>Conclusion\u003C/strong>\u003C/h2>\u003Cp>Geometry Nodes can seem abstract or intimidating at first, but they are some of the most exciting features in Blender. Once you understand how to combine nodes, you can generate entire animations, environments, or visual effects driven by procedural logic rather than manual edits.\u003C/p>\u003Cp>Don’t feel like you need to memorize them all, however. 
Most Geometry Nodes setups rely on a handful of key nodes that you’ll naturally get comfortable with as you experiment.\u003C/p>\u003Cp>\u003Ca href=\"https://blog.cg-wire.com/\">In our next article\u003C/a>, we’ll go a step further: you’ll learn how to \u003Ca href=\"https://blog.cg-wire.com/blender-scripting-animation/\">create your own custom node groups using scripts\u003C/a> to automate effects and keep complex animation pipelines manageable.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process, \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord 
Community\u003C/a>\u003C/div>",{"uuid":553,"comment_id":554,"feature_image":555,"featured":29,"visibility":30,"created_at":556,"updated_at":531,"custom_excerpt":557,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":558,"primary_tag":559,"url":560,"excerpt":557,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":561},"e65c213b-a70f-4f97-a5f0-3c56eb08a3d3","69118ad9e054fc00019520ad","https://images.unsplash.com/photo-1639322537504-6427a16b0a28?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDd8fGJsZW5kZXIlMjBnZW9tZXRyeSUyMG5vZGVzfGVufDB8fHx8MTc2Mjc1NzU1NXww&ixlib=rb-4.1.0&q=80&w=2000","2025-11-10T07:48:57.000+01:00","Blender’s Geometry Nodes let you build 3D models procedurally. Learn how they work, why they’re essential for modern animation pipelines, and how to start using them to create smarter, faster, non-destructive workflows.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":333,"name":334,"slug":335,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":336},"https://blog.cg-wire.com/blender-scripting-geometry-nodes/","\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@theshubhamdhage?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Shubham Dhage\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/blender-scripting-geometry-nodes","2025-11-10T10:00:00.000+01:00",{"title":548},"blender-scripting-geometry-nodes","posts/blender-scripting-geometry-nodes",[568],{"id":333,"name":334,"slug":335,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":336},"bz_ljpXg75yGUgqM5toWV-cRyrzGxhRi3KTeg1MFwrA",{"id":571,"title":572,"authors":573,"body":7,"description":7,"extension":8,"html":575,"meta":576,"navigation":12,"path":585,"published_at":586,"seo":587,"slug":588,"stem":589,"tags":590,"__hash__":592,"uuid":577,"comment_id":578,"feature_image":579,"featured":29,"visibility":30,"created_at":580,"updated_at":243,"custom_excerpt":581,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":582,"primary_tag":583,"url":584,"excerpt":581,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":299},"ghost/posts:ffmpeg-commands-for-animators.json","10 FFmpeg Commands Every Animator Should Know In 2026",[574],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card 
kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📼\u003C/div>\u003Cdiv class=\"kg-callout-text\">Think video conversion tools are just for editors? Think again. FFmpeg is the secret weapon hiding inside every animation pipeline — used by studios like YouTube, Blender, and DaVinci Resolve — and it can save you \u003Ci>\u003Cem class=\"italic\" style=\"white-space: pre-wrap;\">hours\u003C/em>\u003C/i> of manual work once you know how to use it.\u003C/div>\u003C/div>\u003Cp>If you work in animation or video production, you have already met FFmpeg.\u003C/p>\u003Cp>Despite being open-source and used by giants like YouTube, Blender, and DaVinci Resolve, FFmpeg often stays hidden in the background and few artists are aware of its worth.\u003C/p>\u003Cp>In this guide, we’ll walk through 10 practical FFmpeg commands every animator or pipeline artist should know to save hours of manual work.\u003C/p>\u003Chr>\u003Ch2 id=\"whats-ffmpeg\">\u003Cstrong>What's FFmpeg?\u003C/strong>\u003C/h2>\u003Cp>FFmpeg is a powerful open-source command line toolkit for working with video, audio, and image data. 
It’s not a single program but a suite of tools that handle nearly every kind of media processing task imaginable:\u003C/p>\u003Cul>\u003Cli>Convert between almost any video, audio, or image format.\u003C/li>\u003Cli>Assemble image sequences into movies (and vice versa).\u003C/li>\u003Cli>Compress or transcode large files for reviews or uploads.\u003C/li>\u003Cli>Apply filters: crop, scale, color-adjust, overlay, blur, etc.\u003C/li>\u003Cli>Sync or combine multiple audio/video sources.\u003C/li>\u003Cli>Analyze media metadata (frame rate, codec, bit depth, etc.).\u003C/li>\u003Cli>Automate batch processing in pipelines via scripts.\u003C/li>\u003C/ul>\u003Cp>We can’t list every feature it offers, so let’s start with 10 practical FFmpeg commands with examples you can drop straight into your terminal.\u003C/p>\u003Chr>\u003Ch2 id=\"1-compile-an-image-sequence-into-a-video\">\u003Cstrong>1. Compile an Image Sequence into a Video\u003C/strong>\u003C/h2>\u003Cp>\u003Ca href=\"https://blog.cg-wire.com/getting-started-with-blender-rendering/\">Renderers like Blender's\u003C/a> allow outputting image sequences (e.g., thousands of EXRs or PNGs) rather than single movie files. This is safer because if a render crashes, you can resume from the last completed frame. The problem is that those sequences aren’t playable or easy to review.\u003C/p>\u003Cp>FFmpeg can stitch all frames into a single video file in seconds to create a lightweight, shareable version of your shot:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">ffmpeg -framerate 24 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p output.mp4\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>-framerate 24\u003C/code> - tells FFmpeg to read the sequence at 24 frames per second.\u003C/li>\u003Cli>\u003Ccode>-i frame_%04d.png\u003C/code> - \u003Ccode>%04d\u003C/code> means four digits padded with zeros (e.g. \u003Ccode>0001\u003C/code>, \u003Ccode>0002\u003C/code> …). 
You'll need a different pattern (e.g. \u003Ccode>%05d\u003C/code>) if your renderer pads to more digits or your sequence goes above 9999 frames.\u003C/li>\u003Cli>\u003Ccode>-c:v libx264\u003C/code> - encodes the video using the H.264 codec, a good default for reviews.\u003C/li>\u003Cli>\u003Ccode>-pix_fmt yuv420p\u003C/code> - ensures broad compatibility (especially with media players and browsers).\u003C/li>\u003Cli>\u003Ccode>output.mp4\u003C/code> - the name of the final video file.\u003C/li>\u003C/ul>\u003Chr>\u003Ch2 id=\"2-create-a-quick-low-res-review\">\u003Cstrong>2. Create a Quick Low-Res Review\u003C/strong>\u003C/h2>\u003Cp>High-res renders (4K, full-quality EXRs, or ProRes) weighing several gigabytes are too heavy \u003Ca href=\"https://blog.cg-wire.com/how-to-give-efficient-animation-feedback/\">to send over Slack for feedback\u003C/a>: you need smaller, fast-loading versions for daily reviews.\u003C/p>\u003Cp>Just scale and compress a master video automatically to get a playable version without re-rendering:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">ffmpeg -i output.mp4 -vf scale=960:-1 -b:v 1M review.mp4\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>-i output.mp4\u003C/code> - input file (your high-quality render).\u003C/li>\u003Cli>\u003Ccode>-vf scale=960:-1\u003C/code> - rescales video width to 960 pixels and automatically adjusts height (\u003Ccode>-1\u003C/code>) to keep the aspect ratio.\u003C/li>\u003Cli>\u003Ccode>-b:v 1M\u003C/code> - sets video bitrate to 1 megabit per second, a good compromise between small files and watchable quality.\u003C/li>\u003Cli>\u003Ccode>review.mp4\u003C/code> - output file.\u003C/li>\u003C/ul>\u003Chr>\u003Ch2 id=\"3-overlay-a-logo-or-watermark\">\u003Cstrong>3. Overlay a Logo or Watermark\u003C/strong>\u003C/h2>\u003Cp>Studios and freelancers often share work-in-progress files. 
But without a watermark, previews can be redistributed, leaked, or mistaken for final versions.\u003C/p>\u003Cp>With a single FFmpeg command, you can overlay a studio logo, username, or “Work In Progress” tag on every frame.\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">ffmpeg -i input.mp4 -i logo.png -filter_complex \"overlay=10:10\" branded.mp4\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>-i input.mp4\u003C/code> - main video.\u003C/li>\u003Cli>\u003Ccode>-i logo.png\u003C/code> - image to overlay (must have transparency or you’ll get a solid box).\u003C/li>\u003Cli>\u003Ccode>-filter_complex \"overlay=10:10\"\u003C/code> - applies an overlay filter, positioning the logo 10px from the top-left corner.\u003C/li>\u003Cli>\u003Ccode>branded.mp4\u003C/code> - result with the watermark applied.\u003C/li>\u003C/ul>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-44bab5d8-5532-4d0b-9347-12812a0e1271.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"848\" height=\"527\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-44bab5d8-5532-4d0b-9347-12812a0e1271.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-44bab5d8-5532-4d0b-9347-12812a0e1271.png 848w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"4-burn-frame-numbers-or-timecode\">\u003Cstrong>4. 
Burn Frame Numbers or Timecode\u003C/strong>\u003C/h2>\u003Cp>During client or team reviews, everyone needs to reference exact frames for notes, so unlabelled footage makes it impossible to align feedback.\u003C/p>\u003Cp>FFmpeg’s drawtext filter can burn frame numbers or running timecodes into your video to provide a precise reference system, helping supervisors and animators stay synchronized during reviews.\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">ffmpeg -i input.mp4 -vf \"drawtext=text='%{n}':x=10:y=H-th-10:fontsize=24:fontcolor=white\" numbered.mp4\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>drawtext\u003C/code> filter draws text on each frame.\u003C/li>\u003Cli>\u003Ccode>text='%{n}'\u003C/code> - inserts frame number.\u003C/li>\u003Cli>\u003Ccode>x=10:y=H-th-10\u003C/code> - places text 10px from bottom-left.\u003C/li>\u003Cli>\u003Ccode>fontsize\u003C/code>, \u003Ccode>fontcolor\u003C/code> - control look.\u003C/li>\u003C/ul>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-84b1c23e-6e65-493e-bf3c-96c254d28234.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"848\" height=\"527\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-84b1c23e-6e65-493e-bf3c-96c254d28234.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-84b1c23e-6e65-493e-bf3c-96c254d28234.png 848w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>Or for timecode using the presentation timestamp (PTS) formatted as hours:minutes:seconds:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">ffmpeg -i input.mp4 -vf \"drawtext=text='%{pts\\:hms}':x=10:y=H-th-10:fontsize=24:fontcolor=white\" timecode.mp4\u003C/code>\u003C/pre>\u003Chr>\u003Ch2 id=\"5-create-looping-clips-turntables\">\u003Cstrong>5. 
Create Looping Clips (Turntables)\u003C/strong>\u003C/h2>\u003Cp>When presenting 3D models or shots, you often need looping turntables for portfolios, internal libraries, or demo reels. Manually duplicating clips in an editor is tedious.\u003C/p>\u003Cp>FFmpeg can loop any clip a chosen number of times with \u003Ccode>-stream_loop\u003C/code>, creating a continuous playback instantly without re-rendering:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">ffmpeg -stream_loop 3 -i turntable.mp4 -c copy looped.mp4\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>-stream_loop 3\u003C/code> - loops the input 3 extra times (4 plays in total).\u003C/li>\u003Cli>\u003Ccode>-i turntable.mp4\u003C/code> - your original animation.\u003C/li>\u003Cli>\u003Ccode>-c copy\u003C/code> - copies audio/video streams without re-encoding (fast, lossless).\u003C/li>\u003Cli>\u003Ccode>looped.mp4\u003C/code> - final output.\u003C/li>\u003C/ul>\u003Chr>\u003Ch2 id=\"6-add-sound-to-a-silent-render\">\u003Cstrong>6. Add Sound to a Silent Render\u003C/strong>\u003C/h2>\u003Cp>Renders from 3D software don’t include audio, even if your animation is synced to dialogue or music, and adding sound manually in Premiere or After Effects can be time-consuming for quick previews.\u003C/p>\u003Cp>FFmpeg can merge a silent render with an audio track instantly, syncing them without a timeline-based editor.\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">ffmpeg -i render.mp4 -i music.wav -c:v copy -c:a aac -shortest final.mp4\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>-i render.mp4\u003C/code> - video input.\u003C/li>\u003Cli>\u003Ccode>-i music.wav\u003C/code> - audio input.\u003C/li>\u003Cli>\u003Ccode>-c:v copy\u003C/code> - keeps the existing video stream (no re-rendering).\u003C/li>\u003Cli>\u003Ccode>-c:a aac\u003C/code> - encodes audio to AAC (widely supported).\u003C/li>\u003Cli>\u003Ccode>-shortest\u003C/code> - stops encoding when the shorter of the two tracks ends.\u003C/li>\u003C/ul>\u003Chr>\u003Ch2 
id=\"7-extract-every-nth-frame\">\u003Cstrong>7. Extract Every Nth Frame\u003C/strong>\u003C/h2>\u003Cp>Reviewing every single frame from a long shot is slow, especially for motion analysis, flicker detection, or checking exposure shifts. Sometimes, you just want to sample frames like one every 10 or 20.\u003C/p>\u003Cp>FFmpeg’s \u003Ccode>select\u003C/code> filter allows you to extract every nth frame automatically. It’s perfect for quick motion diagnostics, creating contact sheets, or generating thumbnails:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">ffmpeg -i input.mp4 -vf \"select='not(mod(n,10))',setpts=N/FRAME_RATE/TB\" frames_%04d.png\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>select='not(mod(n,10))'\u003C/code> - processes only frames where the frame number n is divisible by 10 (every 10th).\u003C/li>\u003Cli>\u003Ccode>setpts=N/FRAME_RATE/TB\u003C/code> - corrects timestamps so output doesn’t play back too fast.\u003C/li>\u003Cli>\u003Ccode>frames_%04d.png\u003C/code> - naming pattern for extracted images.\u003C/li>\u003C/ul>\u003Chr>\u003Ch2 id=\"8-compare-two-versions-ab-diff\">\u003Cstrong>8. Compare Two Versions (A/B Diff)\u003C/strong>\u003C/h2>\u003Cp>When testing lighting tweaks, color corrections, or denoising updates, it’s hard to see small visual differences between two versions by eye.\u003C/p>\u003Cp>FFmpeg’s \u003Ccode>blend=all_mode=difference\u003C/code> filter subtracts one version from the other and shows differences as bright pixels. 
It’s a fast way to QA version changes.\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">ffmpeg -i old.mp4 -i new.mp4 -filter_complex \"blend=all_mode=difference\" diff.mp4\u003C/code>\u003C/pre>\u003Cul>\u003Cli>Two input files: old and new render.\u003C/li>\u003Cli>\u003Ccode>blend=all_mode=difference\u003C/code> - subtracts pixel values of one from the other, showing where they differ.\u003C/li>\u003Cli>\u003Ccode>diff.mp4\u003C/code> - bright pixels = changes, dark = no difference.\u003C/li>\u003C/ul>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-57adc37e-d8c2-407a-9057-1739a959c61f.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"848\" height=\"527\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-57adc37e-d8c2-407a-9057-1739a959c61f.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-57adc37e-d8c2-407a-9057-1739a959c61f.png 848w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"9-combine-render-passes-side-by-side\">\u003Cstrong>9. Combine Render Passes Side-by-Side\u003C/strong>\u003C/h2>\u003Cp>Artists often need to compare two passes (e.g., old vs. new). Opening them in compositing software just to compare layout or lighting is overkill.\u003C/p>\u003Cp>The \u003Ccode>hstack\u003C/code> (or \u003Ccode>vstack\u003C/code>) filter places videos side-by-side or vertically for easy comparison. It’s perfect for review exports or before/after videos showing changes to clients or supervisors.\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">ffmpeg -i pass1.mp4 -i pass2.mp4 -filter_complex \"hstack\" side_by_side.mp4\u003C/code>\u003C/pre>\u003Cul>\u003Cli>Two input videos.\u003C/li>\u003Cli>\u003Ccode>hstack\u003C/code> - stacks them horizontally. 
Use \u003Ccode>vstack\u003C/code> to stack vertically instead.\u003C/li>\u003Cli>\u003Ccode>side_by_side.mp4\u003C/code> - output file.\u003C/li>\u003C/ul>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-77024499-0432-4930-97d8-c1aa0942c2e9.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1186\" height=\"748\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-77024499-0432-4930-97d8-c1aa0942c2e9.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-77024499-0432-4930-97d8-c1aa0942c2e9.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-77024499-0432-4930-97d8-c1aa0942c2e9.png 1186w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>You can also include the resulting video from the previous \u003Ccode>blend=all_mode=difference\u003C/code> command to quickly see the differences between frames:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">ffmpeg -i pass1.mp4 -i diff.mp4 -i pass2.mp4 \\\n-filter_complex \"[0:v][1:v]hstack=inputs=2[top]; [top][2:v]hstack=inputs=2\" \\\nside_by_side2.mp4\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-3179b0a2-949d-468c-ba70-153ae97f0d0c.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1186\" height=\"748\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/11/data-src-image-3179b0a2-949d-468c-ba70-153ae97f0d0c.png 600w, 
https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/11/data-src-image-3179b0a2-949d-468c-ba70-153ae97f0d0c.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/11/data-src-image-3179b0a2-949d-468c-ba70-153ae97f0d0c.png 1186w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"10-re-time-animation-slow-mo-or-speed-up\">\u003Cstrong>10. Re-Time Animation (Slow-Mo or Speed-Up)\u003C/strong>\u003C/h2>\u003Cp>Timing tweaks like previewing a slower camera move or checking a fast motion test usually require re-rendering or editing in software. That’s inefficient just to try different pacing.\u003C/p>\u003Cp>FFmpeg can alter playback speed on the fly by adjusting frame timestamps to let animators preview alternate speeds instantly.\u003C/p>\u003Cp>Slow down to half speed:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">ffmpeg -i input.mp4 -filter:v \"setpts=2.0*PTS\" slowmo.mp4\u003C/code>\u003C/pre>\u003Cp>Speed up 2×:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">ffmpeg -i input.mp4 -filter:v \"setpts=0.5*PTS\" fast.mp4\u003C/code>\u003C/pre>\u003Cul>\u003Cli>The \u003Ccode>setpts\u003C/code> filter manipulates the presentation timestamps (PTS) of each frame.\u003C/li>\u003Cli>Multiplying by 2.0 doubles playback time (slower).\u003C/li>\u003Cli>Multiplying by 0.5 halves it (faster).\u003C/li>\u003C/ul>\u003Chr>\u003Ch2 id=\"conclusion\">\u003Cstrong>Conclusion\u003C/strong>\u003C/h2>\u003Cp>FFmpeg isn’t just a video converter. With a few lines of text, you can automate tasks that usually take minutes or hours in traditional software: batch rendering, version comparisons, review exports... You name it.\u003C/p>\u003Cp>Once you get comfortable with the syntax, FFmpeg is an extension of your creative workflow. 
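\u003Cp>One caveat about the re-timing commands above: \u003Ccode>setpts\u003C/code> only rescales video timestamps, so any audio track drifts out of sync. Pairing it with the \u003Ccode>atempo\u003C/code> audio filter keeps both aligned. As a sketch, the helper below (the function name is made up) generates the matching filter pair for a given speed factor; a single \u003Ccode>atempo\u003C/code> stage assumes a speed between roughly 0.5× and 2×:\u003C/p>

```python
# Hypothetical helper: build matching video/audio re-timing filters for FFmpeg.
def retime_filters(speed):
    """Return (video_filter, audio_filter) for a playback-speed factor.

    speed > 1 plays faster, speed < 1 slower. The PTS multiplier is the
    inverse of the speed; atempo takes the speed factor directly.
    """
    video = f"setpts={1.0 / speed:.4g}*PTS"
    audio = f"atempo={speed:.4g}"
    return video, audio

# Half-speed slow motion with audio kept in sync:
v, a = retime_filters(0.5)
cmd = (f'ffmpeg -i input.mp4 -filter_complex "[0:v]{v}[v];[0:a]{a}[a]" '
       f'-map "[v]" -map "[a]" slowmo_av.mp4')
```

\u003Cp>Feeding the generated pair through \u003Ccode>-filter_complex\u003C/code> keeps picture and sound locked together at the new speed.\u003C/p>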
Pick one command from this list, drop it into your next render pipeline, and watch how much smoother your daily production becomes!\u003C/p>\u003Cp>But that's not all. Combine the power of FFmpeg with DCC scripts (like \u003Ca href=\"https://blog.cg-wire.com/blender-scripting-animation/\">Blender scripting\u003C/a>) and you'll unlock superpowers beyond human comprehension (like automating entire scene creation). \u003Ca href=\"https://blog.cg-wire.com/\">Subscribe to our blog\u003C/a> for more!\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord 
Community\u003C/a>\u003C/div>\u003Cp>\u003C/p>",{"uuid":577,"comment_id":578,"feature_image":579,"featured":29,"visibility":30,"created_at":580,"updated_at":243,"custom_excerpt":581,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":582,"primary_tag":583,"url":584,"excerpt":581,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":299},"93788414-98a7-4015-8e42-e5214d9567d9","6909b6f1df0ae600014fbb5a","https://images.unsplash.com/photo-1727142073871-d40f5a7c76d8?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDMwfHx2aWRlbyUyMGVuY29kaW5nJTIwdGVybWluYWx8ZW58MHx8fHwxNzYyMjQ1NjQ3fDA&ixlib=rb-4.1.0&q=80&w=2000","2025-11-04T09:18:57.000+01:00","FFmpeg is one of the most powerful media tools used in animation and video production — yet many artists barely scratch the surface of what it can do. 
Learn 10 essential FFmpeg commands for assembling renders, adding audio, overlaying logos, comparing versions, and optimizing review exports.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/ffmpeg-commands-for-animators/","/posts/ffmpeg-commands-for-animators","2025-11-04T10:09:54.000+01:00",{"title":572},"ffmpeg-commands-for-animators","posts/ffmpeg-commands-for-animators",[591],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"5xJDX4mIKsXbLF-f4fY8sM4QlPjb16pw8NEom2bEGLE",{"id":594,"title":595,"authors":596,"body":7,"description":7,"extension":8,"html":598,"meta":599,"navigation":12,"path":609,"published_at":610,"seo":611,"slug":612,"stem":613,"tags":614,"__hash__":616,"uuid":600,"comment_id":601,"feature_image":602,"featured":29,"visibility":30,"created_at":603,"updated_at":604,"custom_excerpt":605,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":606,"primary_tag":607,"url":608,"excerpt":605,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image
_alt":7,"feature_image_caption":151},"ghost/posts:forward-vs-inverse-kinematics-blender.json","How To Use Forward and Inverse Kinematics In Blender (2026)",[597],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">🤖\u003C/div>\u003Cdiv class=\"kg-callout-text\">A 3D model is just a lifeless mannequin until you\u003Ca href=\"https://blog.cg-wire.com/rigging-in-animation/\"> \u003Cu>start rigging it\u003C/u>\u003C/a>. The real magic happens when animators make it move, and that’s where kinematics comes in.\u003C/div>\u003C/div>\u003Cp>The problem is, it’s not as simple as dragging a character’s arm or leg around. Push the limits too far, and suddenly your character’s elbow bends backwards, or their run looks like a broken wind-up toy. Play it too safe, and the movement feels stiff and robotic. Finding the balance between believable physics and expressiveness is hard.\u003C/p>\u003Cp>In this article, we explore what kinematics are and how they work in Blender. By the end, you'll have created your first rig for animation.\u003C/p>\u003Chr>\u003Ch2 id=\"what-are-kinematics\">\u003Cstrong>What Are Kinematics\u003C/strong>\u003C/h2>\u003Cp>Kinematics is \u003Cstrong>the study of how things move in space\u003C/strong> without worrying about the forces that cause the motion. 
In animation, it means focusing on how a character or object’s joints, limbs, and body parts transform from one pose to the next, rather than worrying about muscles or gravity pulling them.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card kg-card-hascaption\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-1727ea30-70cc-42b6-a5e1-085ffa16eef4.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"841\" height=\"431\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-1727ea30-70cc-42b6-a5e1-085ffa16eef4.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-1727ea30-70cc-42b6-a5e1-085ffa16eef4.png 841w\" sizes=\"(min-width: 720px) 720px\">\u003Cfigcaption>\u003Ci>\u003Cem class=\"italic\" style=\"white-space: pre-wrap;\">Source: MathWorks\u003C/em>\u003C/i>\u003C/figcaption>\u003C/figure>\u003Cp>Kinematics gives animators the rules and tools to make 3D models move in a way that looks consistent and believable. It's important to make the distinction between forward and inverse kinematics:\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Forward Kinematics\u003C/strong>: In FK, motion starts at the top of the hierarchy. If you want to move a hand, you rotate the shoulder, then the elbow, then the wrist. It’s intuitive for arcs and natural swinging motions (like waving or swinging a sword) because you control the chain link by link. But it can be tedious: if you animate a finger touching a point in space, you have to manually adjust every joint to line it up.\u003C/li>\u003Cli>\u003Cstrong>Inverse Kinematics\u003C/strong>: IK flips the problem. Instead of rotating each joint, you place the end of the chain where you want it (say, a character’s hand on a table), and the computer calculates how the shoulder and elbow must bend to reach that spot. 
IK is perfect for locked motions, like keeping feet planted on the floor while the body moves. The downside is that it can sometimes create unnatural bends if not carefully controlled, so you'll need to define complex constraints.\u003C/li>\u003C/ul>\u003Cp>Animators don’t choose one or the other exclusively. They switch between FK and IK depending on the type of motion they need: FK for fluid arcs, IK for precise placement, and often blend the two to achieve the most natural end-to-end movement.\u003C/p>\u003Chr>\u003Ch2 id=\"why-kinematics-are-important\">\u003Cstrong>Why Kinematics Are Important\u003C/strong>\u003C/h2>\u003Cp>\u003Cstrong>Kinematics make sure a character’s movement respects anatomical logic\u003C/strong>: joints bend the right way, limbs maintain proper relationships, and actions flow naturally. Without it, even the best 3D model will look broken during animation. When a character reaches for a cup on a table, the elbow should bend correctly and the wrist rotate naturally. Without kinematics, the arm would hyperextend, or the hand could twist in an impossible way.\u003C/p>\u003Cp>By using forward and inverse kinematics, \u003Cstrong>animators can control complex body parts with far fewer steps\u003C/strong>. Instead of tweaking every single joint frame by frame, they can pose entire chains at once while reducing posing errors. 
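\u003Cp>The "computer calculates how the shoulder and elbow must bend" step described above can be made concrete with the textbook two-link arm solution. The sketch below is plain Python (no Blender required); the segment lengths and target point are arbitrary illustration values:\u003C/p>

```python
import math

def two_link_fk(t1, t2, l1, l2):
    """Forward kinematics: shoulder/elbow angles (radians) -> hand position."""
    return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
            l1 * math.sin(t1) + l2 * math.sin(t1 + t2))

def two_link_ik(x, y, l1, l2):
    """Inverse kinematics: hand position -> (shoulder, elbow) angles.

    Uses the law of cosines; returns one of the two valid elbow branches.
    The target must be reachable (|l1 - l2| <= distance <= l1 + l2).
    """
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))   # clamp for floating-point safety
    t2 = math.acos(c2)             # elbow bend
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2

# Place the "hand" at (1.2, 0.5); FK run through the solved angles lands there.
angles = two_link_ik(1.2, 0.5, 1.0, 1.0)
hand = two_link_fk(*angles, 1.0, 1.0)
```

\u003Cp>This mirrors the trade-off in Blender: FK asks you for the angles, IK solves them from the goal, and the second \u003Ccode>acos\u003C/code> branch (the other elbow direction) is exactly why uncontrolled IK can bend a joint the "wrong" way.\u003C/p>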
Instead of manually adjusting the ankle, knee, and hip on every frame, the animator just locks the foot in place with inverse kinematics, and the software handles the rest.\u003C/p>\u003Cp>Let's try rigging a simple model in Blender to get a better feel of how it works.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-green\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example integration showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blender-ik-fk?ref=blog.cg-wire.com\">https://github.com/cgwire/blender-ik-fk\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"forward-kinematics-fk-in-blender\">\u003Cstrong>Forward Kinematics (FK) in Blender\u003C/strong>\u003C/h2>\u003Cp>FK is like moving a marionette puppet: You control each string one by one, starting at the shoulder and working your way down to the fingertips. Every rotation builds on the previous one.\u003C/p>\u003Col>\u003Cli>Add a cube (\u003Ccode>Add → Mesh → Cube\u003C/code>) and scale it into a rectangular prism. 
Normalize the scale to 1 for beveling (\u003Ccode>Object → Apply → Scale\u003C/code>).\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-bbbf0c86-c1bb-4b7b-afd1-18cf0971aeab.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"646\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-bbbf0c86-c1bb-4b7b-afd1-18cf0971aeab.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/10/data-src-image-bbbf0c86-c1bb-4b7b-afd1-18cf0971aeab.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-bbbf0c86-c1bb-4b7b-afd1-18cf0971aeab.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Col start=\"2\">\u003Cli>In Edit mode, bevel the edges to round each side. Make sure to use \u003Ccode>Edge\u003C/code> mode and select the four edges we need. 
In the Bevel window that appears, increase the number of segments to create round edges.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-0c49b11d-3896-4035-a842-4eb98d661b33.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"816\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-0c49b11d-3896-4035-a842-4eb98d661b33.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/10/data-src-image-0c49b11d-3896-4035-a842-4eb98d661b33.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-0c49b11d-3896-4035-a842-4eb98d661b33.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Col start=\"3\">\u003Cli>Make two more segments to create a mechanical arm. In \u003Ccode>Object\u003C/code> Mode, select the prism and duplicate. Repeat once more so you have three segments.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-318d3ca9-6f1f-4afe-b602-ccff95474782.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"315\" height=\"171\">\u003C/figure>\u003Col start=\"4\">\u003Cli>Place the segments along the X axis to create the chain. 
Try to position them so they can sit end-to-end with a clear joint position.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-82b2e2e0-c34c-46a2-9418-f313e5c0f788.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"847\" height=\"467\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-82b2e2e0-c34c-46a2-9418-f313e5c0f788.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-82b2e2e0-c34c-46a2-9418-f313e5c0f788.png 847w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Col start=\"5\">\u003Cli>Set up the parent hierarchy (FK chain). Build the chain from base to tip. Select the child object first, then the intended parent (the one closer to the base), and parent them (\u003Ccode>Ctrl+P → Object\u003C/code>). Repeat so each segment is parented to the previous segment.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-408d19e7-ccac-4706-90fa-1aa9ace327b7.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"319\" height=\"218\">\u003C/figure>\u003Col start=\"6\">\u003Cli>Put each object's origin at its joint. For correct rotation, the origin must be at the joint end of each segment. Use the cursor tool to position the origin. Then, in \u003Ccode>Object\u003C/code> Mode, \u003Ccode>Object → Set Origin → Origin to 3D Cursor\u003C/code>. 
Do this for every segment.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-7a11ca12-b043-4f94-b663-b87271e51597.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"817\" height=\"416\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-7a11ca12-b043-4f94-b663-b87271e51597.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-7a11ca12-b043-4f94-b663-b87271e51597.png 817w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Col start=\"7\">\u003Cli>Give each segment a small default rotation to observe how forward kinematics behave. When you rotate the base (parent) object, the children follow thanks to the parenting chain.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-50da86d0-88ec-4673-a55b-227f9df93c0b.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"817\" height=\"416\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-50da86d0-88ec-4673-a55b-227f9df93c0b.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-50da86d0-88ec-4673-a55b-227f9df93c0b.png 817w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>You can then just rotate the arm as you want, keyframe the position, and\u003Ca href=\"https://blog.cg-wire.com/getting-started-with-blender-rendering/\"> \u003Cu>render the final result to get an animation\u003C/u>\u003C/a>!\u003C/p>\u003Cp>As you can notice, FK is great for smooth, arcing motions like waving, swinging a bat, or dancing.\u003C/p>\u003Cp>For more advanced rigs (IK, 
controls, constraints), Blender animators use an Armature instead of object parenting.\u003C/p>\u003Chr>\u003Ch2 id=\"inverse-kinematics-ik-in-blender\">\u003Cstrong>Inverse Kinematics (IK) in Blender\u003C/strong>\u003C/h2>\u003Cp>IK is more like controlling a puppet's hand directly: you move the hand, and the arm figures out how the elbow and shoulder should bend to follow along.\u003C/p>\u003Col>\u003Cli>\u003Cstrong>Duplicate the FK arm mesh.\u003C/strong> Select your three-segment FK arm, duplicate it, and move it aside so you keep the FK version for comparison.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-59cd3241-e8e1-4b3b-b60a-8457c017a553.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1349\" height=\"526\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-59cd3241-e8e1-4b3b-b60a-8457c017a553.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/10/data-src-image-59cd3241-e8e1-4b3b-b60a-8457c017a553.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-59cd3241-e8e1-4b3b-b60a-8457c017a553.png 1349w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Col start=\"2\">\u003Cli>\u003Cstrong>Merge the segments into one object.\u003C/strong> Select the new arm copy and join each segment into a single mesh (\u003Ccode>Select all → Object → Join\u003C/code>). 
Now you have one continuous object representing the whole arm.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-4f6843c9-ce25-4aa4-b391-3637d7543c4b.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1349\" height=\"526\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-4f6843c9-ce25-4aa4-b391-3637d7543c4b.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/10/data-src-image-4f6843c9-ce25-4aa4-b391-3637d7543c4b.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-4f6843c9-ce25-4aa4-b391-3637d7543c4b.png 1349w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Col start=\"3\">\u003Cli>\u003Cstrong>Create an armature chain.\u003C/strong> Add an Armature in \u003Ccode>Add → Armature\u003C/code>. In the Armature’s \u003Cstrong>Edit Mode\u003C/strong>, extrude bones to match the segments. Select the tip of the first bone to extrude and place it at the elbow. 
Extrude again for the \"hand\".\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-f18a080f-af0d-44c9-a948-01b17e8d4ad7.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1349\" height=\"526\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-f18a080f-af0d-44c9-a948-01b17e8d4ad7.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/10/data-src-image-f18a080f-af0d-44c9-a948-01b17e8d4ad7.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-f18a080f-af0d-44c9-a948-01b17e8d4ad7.png 1349w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Col start=\"4\">\u003Cli>\u003Cstrong>Add the IK controller.\u003C/strong> Switch to \u003Cstrong>Pose Mode\u003C/strong> and select the \u003Cem>hand\u003C/em> bone. Press \u003Ccode>Shift+I → \u003Cem>Add Inverse Kinematics\u003C/em> → Without Targets\u003C/code>. The IK chain will now drive the arm. 
In the Bone Constraints tab, set \u003Cem>Chain Length\u003C/em> = 3.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-9ffe89a0-6326-410d-9574-91798ce5dbc5.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1008\" height=\"423\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-9ffe89a0-6326-410d-9574-91798ce5dbc5.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/10/data-src-image-9ffe89a0-6326-410d-9574-91798ce5dbc5.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-9ffe89a0-6326-410d-9574-91798ce5dbc5.png 1008w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-069281f6-1b06-4992-8a73-63b76c27f9eb.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1008\" height=\"423\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-069281f6-1b06-4992-8a73-63b76c27f9eb.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/10/data-src-image-069281f6-1b06-4992-8a73-63b76c27f9eb.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-069281f6-1b06-4992-8a73-63b76c27f9eb.png 1008w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Col start=\"5\">\u003Cli>\u003Cstrong>Bind the mesh to the armature (skinning).\u003C/strong> In \u003Ccode>Object\u003C/code> mode, select the mesh first, then Ctrl-select the armature. 
Right-click on the objects and select \u003Ccode>Parent → Armature Deform → With Automatic Weights\u003C/code>. Blender assigns vertex groups for each bone so the arm follows the rig.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-0c7343c3-1d1b-4ad3-b68a-804bdf1e1ba4.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"992\" height=\"531\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-0c7343c3-1d1b-4ad3-b68a-804bdf1e1ba4.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-0c7343c3-1d1b-4ad3-b68a-804bdf1e1ba4.png 992w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Col start=\"6\">\u003Cli>\u003Cstrong>Animate with IK.\u003C/strong> Go to \u003Cstrong>Pose Mode\u003C/strong>, grab the IK controller bone and move it: the whole arm follows naturally!\u003C/li>\u003C/ol>\u003Cp>You can still use FK by moving the bones in the parent chain.\u003C/p>\u003Cp>Note that, by default, the mesh deforms freely as the bones move. You’ll need to add Bone Constraints to restrict the motion, for example limiting rotation to a single axis so the arm behaves like a mechanical joint.\u003C/p>\u003Chr>\u003Ch2 id=\"fkik-switch\">\u003Cstrong>FK/IK Switch\u003C/strong>\u003C/h2>\u003Cp>Most rigs in Blender use a \u003Cstrong>hybrid system\u003C/strong>: FK for flowing arcs and IK for fixed contact. 
Typically, an animator\u003Ca href=\"https://blog.cg-wire.com/staging-animation-principle/\"> \u003Cu>starts with FK for broad, gestural posing, then switches to IK for moments of contact or precise positioning\u003C/u>\u003C/a>.\u003C/p>\u003Cp>In more advanced rigs, Blender animators create a custom property (usually a slider or toggle in the N-panel or on a controller bone) to switch between FK and IK.\u003C/p>\u003Cp>This is out of the scope of this article, but it is important to keep in mind.\u003C/p>\u003Chr>\u003Ch2 id=\"conclusion\">\u003Cstrong>Conclusion\u003C/strong>\u003C/h2>\u003Cp>Kinematics is the basis of rigging and skinning, and what separates a stiff 3D mannequin from a character that feels alive.\u003C/p>\u003Cp>Forward kinematics gives you smooth arcs and natural flow, while inverse kinematics locks your character to the world with believable contact.\u003C/p>\u003Cp>But don’t just read about it, open Blender, grab a model, and start playing! A well-built rig doesn’t just connect bones: it defines how a character moves, poses, and interacts with the 3D world.\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 
😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>\u003Cp>\u003C/p>",{"uuid":600,"comment_id":601,"feature_image":602,"featured":29,"visibility":30,"created_at":603,"updated_at":604,"custom_excerpt":605,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":606,"primary_tag":607,"url":608,"excerpt":605,"reading_time":97,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":151},"d40e9baf-0811-422c-ac9b-e61be18477d6","68ec43d6ded61600017fff81","https://images.unsplash.com/photo-1590285381943-9fbf39f4f75d?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDV8fDNEJTIwY2hhcmFjdGVyJTIwcmlnfGVufDB8fHx8MTc2MDkyMDExNnww&ixlib=rb-4.1.0&q=80&w=2000","2025-10-13T02:12:06.000+02:00","2026-02-20T06:04:27.000+01:00","Discover the difference between Forward Kinematics (FK) and Inverse Kinematics (IK) in Blender. Learn how animators use these systems to bring 3D rigs to life with realistic motion, balance, and control. 
Includes hands-on rigging examples for beginners.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":333,"name":334,"slug":335,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":336},"https://blog.cg-wire.com/forward-vs-inverse-kinematics-blender/","/posts/forward-vs-inverse-kinematics-blender","2025-10-28T10:00:04.000+01:00",{"title":595},"forward-vs-inverse-kinematics-blender","posts/forward-vs-inverse-kinematics-blender",[615],{"id":333,"name":334,"slug":335,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":336},"-q57RfoOKdxB3NsG4Ft4R7s9R9Xeglq2q1yRmD78FkQ",{"id":618,"title":619,"authors":620,"body":7,"description":7,"extension":8,"html":622,"meta":623,"navigation":12,"path":633,"published_at":634,"seo":635,"slug":636,"stem":637,"tags":638,"__hash__":641,"uuid":624,"comment_id":625,"feature_image":626,"featured":29,"visibility":30,"created_at":627,"updated_at":628,"custom_excerpt":629,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":630,"primary_tag":631,"url":632,"excerpt":629,"reading_time":43,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":510},"ghost/posts:bl
ender-scripting-animation.json","Blender Scripting for Animation Pipelines: 2026 Introduction",[621],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">⚙️\u003C/div>\u003Cdiv class=\"kg-callout-text\">You can bend Blender to your will with just a few lines of code. Repetitive clicks? Gone. Complex scenes? Built in seconds. Custom tools? Yours to design. That’s the magic of scripting.\u003C/div>\u003C/div>\u003Cp>Blender’s graphical user interface is no doubt amazing, but there are always some tasks that feel like a grind: sharing previews with the team, tweaking endless settings in a new project, or doing the same steps over and over. Sometimes, you just wish there was a button that just did the thing, and scripting is how you unlock it!\u003C/p>\u003Cp>In this article, we’ll crack open Blender’s scripting feature using the Python programming language. You’ll learn how to write your first script, how to run it, and how Blender’s scripting modules are organized. By the end, you’ll have a good understanding of how to start optimizing your production pipeline.\u003C/p>\u003Chr>\u003Ch2 id=\"what-can-i-do-with-scripting\">\u003Cstrong>What Can I Do With Scripting?\u003C/strong>\u003C/h2>\u003Cp>Blender scripting isn’t just a neat trick for hobbyists: it’s a necessity for studios of every size.\u003C/p>\u003Cp>In production, speed and consistency are everything. Studios constantly face tight deadlines, large asset libraries, and the need to keep dozens of shots and scenes perfectly in sync across workstations. 
Doing that by hand is slow, error-prone, and expensive: that’s why automation is such a big deal!\u003C/p>\u003Cp>Scripting isn’t just about writing code: it’s about giving yourself creative shortcuts and superpowers. With Python, you can automate the boring, repetitive tasks that eat up your time, or generate procedural geometry, materials, and even entire environments in just a few lines. You can \u003Cstrong>design your own tools and menus\u003C/strong> tailored to your workflow, and \u003Cstrong>take full control over scenes\u003C/strong>,\u003Ca href=\"https://blog.cg-wire.com/getting-started-with-blender-rendering/\"> \u003Cu>render settings\u003C/u>\u003C/a>, cameras, and lights. Scripting even lets you \u003Cstrong>connect Blender with external tools or APIs\u003C/strong>, making it a powerful part of larger pipelines.\u003C/p>\u003Chr>\u003Ch2 id=\"prerequisites\">\u003Cstrong>Prerequisites\u003C/strong>\u003C/h2>\u003Cp>Before diving in, make sure you have the following:\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Blender\u003C/strong> - Download and install the latest version from\u003Ca href=\"https://www.blender.org/download/?ref=blog.cg-wire.com\"> \u003Cu>blender.org\u003C/u>\u003C/a>.\u003C/li>\u003Cli>\u003Cstrong>Python\u003C/strong> - You'll need the Python programming language to use Blender's native scripting modules and run programs from your operating system's terminal.\u003C/li>\u003C/ul>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-green\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example integration showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca 
href=\"https://github.com/cgwire/intro-blender-scripting?ref=blog.cg-wire.com\">https://github.com/cgwire/intro-blender-scripting\u003C/a>\u003C/div>\u003C/div>\u003Chr>\u003Ch2 id=\"1-create-a-new-script\">\u003Cstrong>1. Create a New Script\u003C/strong>\u003C/h2>\u003Cp>Inside Blender, open the \u003Cstrong>Scripting workspace\u003C/strong>. You’ll see a text editor panel where you can create a new script by clicking \u003Cstrong>New\u003C/strong>. This is where you can write your Python code, and it's particularly useful to see results in real-time:\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-05bcd44b-e1a3-4f6a-a5c7-edb11e40b1fb.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"731\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-05bcd44b-e1a3-4f6a-a5c7-edb11e40b1fb.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/10/data-src-image-05bcd44b-e1a3-4f6a-a5c7-edb11e40b1fb.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-05bcd44b-e1a3-4f6a-a5c7-edb11e40b1fb.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>For a production pipeline, it's usually more useful to run a script from the command line interface. Fortunately, Blender's Python API is now distributed as a standalone \u003Ccode>bpy\u003C/code> package you can install with pip. 
In this tutorial, we'll run a Python program directly from the OS terminal to avoid the extra steps of navigating the graphical user interface, so the first step is to install the required Blender module:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">pip install bpy==3.6.0 --extra-index-url https://download.blender.org/pypi/\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>As a test, let's create a new empty Blender file using Python:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import bpy\n\nbpy.ops.wm.save_as_mainfile(filepath=\"./new_empty_file.blend\")\u003C/code>\u003C/pre>\u003Cp>First, we import Blender’s \u003Cstrong>Python API module\u003C/strong> \u003Ccode>bpy\u003C/code>, which lets us control almost everything in Blender (objects, materials, rendering, etc.). Then, we save the current workspace in a new file.\u003C/p>\u003Cp>\u003C/p>\u003Cp>We can run the program in the terminal like so:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">python3 script.py\u003C/code>\u003C/pre>\u003Cp>\u003C/p>\u003Cp>We can also open the newly created file with the Blender CLI:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">blender new_empty_file.blend\u003C/code>\u003C/pre>\u003Cp>Congrats! You completed your first script. Now, let's get to a more useful example: generating 3D text.\u003C/p>\u003Chr>\u003Ch2 id=\"2-hello-world-text-example\">\u003Cstrong>2. Hello World Text Example\u003C/strong>\u003C/h2>\u003Cp>Imagine you want to create a Star Wars intro animation. 
You know, the one with text slowly scrolling up at an angle:\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-02ff3b4e-8e6f-4f1a-b6d0-e4fb9e0622eb.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"681\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-02ff3b4e-8e6f-4f1a-b6d0-e4fb9e0622eb.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/10/data-src-image-02ff3b4e-8e6f-4f1a-b6d0-e4fb9e0622eb.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-02ff3b4e-8e6f-4f1a-b6d0-e4fb9e0622eb.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>How would you do this efficiently to make it easy to edit? By using a script, of course! So let's try a simple example and generate some 3D text.\u003C/p>\u003Cp>We create a new file and delete all objects in the scene to start clean:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import bpy\n\nbpy.ops.object.select_all(action='SELECT')\nbpy.ops.object.delete(use_global=False)\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>bpy.ops.object.select_all(action='SELECT')\u003C/code>: Selects all objects currently in the scene.\u003C/li>\u003Cli>\u003Ccode>bpy.ops.object.delete(use_global=False)\u003C/code>: Deletes all selected objects.\u003C/li>\u003C/ul>\u003Cp>\u003C/p>\u003Cp>Just two instructions are needed to add a new text object to the scene:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bpy.ops.object.text_add(enter_editmode=False, location=(0, 0, 0))\ntext_obj = bpy.context.object\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>bpy.ops.object.text_add(...)\u003C/code>: Adds a new \u003Cstrong>Text object\u003C/strong> at the location 
\u003Ccode>(0, 0, 0)\u003C/code> in the 3D world (XYZ coordinates).\u003C/li>\u003Cli>\u003Ccode>text_obj = bpy.context.object\u003C/code>: Stores a reference to the newly created text object in the variable \u003Ccode>text_obj\u003C/code>. Whenever you add something new, Blender makes it the active object, which you can access via \u003Ccode>bpy.context.object\u003C/code>.\u003C/li>\u003C/ul>\u003Cp>\u003C/p>\u003Cp>Let's change the text string to \"Hello World\":\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">text_obj.data.body = \"Hello World\"\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>text_obj.data\u003C/code> refers to the \u003Cstrong>Text DataBlock\u003C/strong>, the actual content or settings of the text object.\u003C/li>\u003Cli>\u003Ccode>.body = \"Hello World\"\u003C/code> sets the displayed string to “Hello World”.\u003C/li>\u003C/ul>\u003Cp>\u003C/p>\u003Cp>We can then adjust some text settings to give the text a little thickness and center it on the x and y axes:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">text_obj.data.extrude = 0.05\ntext_obj.data.align_x = 'CENTER'\ntext_obj.data.align_y = 'CENTER'\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>extrude = 0.05\u003C/code>: Gives the text depth, turning it from flat 2D text into slightly extruded 3D text.\u003C/li>\u003Cli>\u003Ccode>align_x = 'CENTER'\u003C/code>: Horizontally centers the text.\u003C/li>\u003Cli>\u003Ccode>align_y = 'CENTER'\u003C/code>: Vertically centers the text.\u003C/li>\u003C/ul>\u003Cp>You can find more options by reading\u003Ca href=\"https://docs.blender.org/manual/en/latest/modeling/texts/properties.html?ref=blog.cg-wire.com\"> \u003Cu>the documentation on Blender’s text object properties\u003C/u>\u003C/a>.\u003C/p>\u003Cp>\u003C/p>\u003Cp>Last but not least, we can rotate the text upright so it faces the camera; by default, Blender text lies flat on the XY plane:\u003C/p>\u003Cpre>\u003Ccode 
class=\"language-python\">text_obj.rotation_euler[0] = 1.5708  # 90 degrees in radians\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>rotation_euler[0]\u003C/code>: Refers to the \u003Cstrong>rotation around the X-axis\u003C/strong>.\u003C/li>\u003Cli>\u003Ccode>1.5708\u003C/code> radians ≈ \u003Cstrong>90 degrees\u003C/strong>.\u003C/li>\u003C/ul>\u003Cp>\u003C/p>\u003Cp>We can save the result using the previously mentioned instruction:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bpy.ops.wm.save_as_mainfile(filepath=\"./text.blend\")\u003C/code>\u003C/pre>\u003Cp>\u003Cbr>\u003C/p>\u003Cp>To sum up, this is what our final code looks like:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import bpy\n\nbpy.ops.object.select_all(action='SELECT')\nbpy.ops.object.delete(use_global=False)\n\nbpy.ops.object.text_add(enter_editmode=False, location=(0, 0, 0))\ntext_obj = bpy.context.object\n\ntext_obj.data.body = \"Hello World\"\n\ntext_obj.data.extrude = 0.05\ntext_obj.data.align_x = 'CENTER'\ntext_obj.data.align_y = 'CENTER'\n\ntext_obj.rotation_euler[0] = 1.5708\n\nbpy.ops.wm.save_as_mainfile(filepath=\"./text.blend\")\u003C/code>\u003C/pre>\u003Chr>\u003Ch2 id=\"3-how-to-run-a-script-script-loading\">\u003Cstrong>3. How to Run a Script (Script Loading)\u003C/strong>\u003C/h2>\u003Cp>As previously mentioned, you can run a script in headless mode like any other Python program:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">python3 text.py\u003C/code>\u003C/pre>\u003Cp>And that's it! You’ve just run your first \u003Cem>useful\u003C/em> Blender script. 
This workflow is ideal for automation, pipelines, and batch processing.\u003C/p>\u003Cp>Just open the \u003Ccode>text.blend\u003C/code> file and see the result:\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-eab235c1-3513-4b9d-9f89-8a4d7c1cd122.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"731\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-eab235c1-3513-4b9d-9f89-8a4d7c1cd122.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w1000/2025/10/data-src-image-eab235c1-3513-4b9d-9f89-8a4d7c1cd122.png 1000w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-eab235c1-3513-4b9d-9f89-8a4d7c1cd122.png 1600w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>You can also open a specific \u003Ccode>.blend\u003C/code> file and run the script inside that context:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">bpy.ops.wm.open_mainfile(filepath='my_scene.blend')\u003C/code>\u003C/pre>\u003Cp>This loads \u003Ccode>my_scene.blend\u003C/code> first, then runs the rest of the script on it.\u003C/p>\u003Cp>\u003C/p>\u003Cp>Sometimes, you want to send custom arguments:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">python3 args.py -- --text \"CLI Hello\"\u003C/code>\u003C/pre>\u003Cp>\u003Cbr>\u003C/p>\u003Cp>Inside \u003Ccode>args.py\u003C/code>, you can access these arguments like this:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import sys\n\nargv = sys.argv\nargv = argv[argv.index(\"--\") + 1:]  # get args after --\n\nprint(\"Custom args:\", argv)\u003C/code>\u003C/pre>\u003Cp>That's it for the basics, but you still have a lot to discover.\u003C/p>\u003Chr>\u003Ch2 
id=\"4-scripting-modules-explained\">\u003Cstrong>4. Scripting Modules Explained\u003C/strong>\u003C/h2>\u003Cp>Blender exposes its scripting features through different modules. Understanding what each module does helps you know what you can script and where to look in the documentation.\u003C/p>\u003Cp>First, you have the core \u003Ccode>bpy\u003C/code> modules:\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>\u003Ccode>bpy.context\u003C/code> (Context Access)\u003C/strong> - Provides information about Blender’s current state (active object, scene, mode, selected objects, etc.), e.g., \u003Ccode>bpy.context.object\u003C/code> gets the active object.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>bpy.data\u003C/code> (Data Access)\u003C/strong> - Gives direct access to Blender’s datablocks such as meshes, objects, materials, and cameras. Example: \u003Ccode>bpy.data.objects[\"Cube\"]\u003C/code> gets the Cube object.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>bpy.msgbus\u003C/code> (Message Bus)\u003C/strong> - A pub/sub system for listening to changes in Blender’s data and triggering callbacks like subscribing to frame-change events.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>bpy.ops\u003C/code> (Operators)\u003C/strong> - Exposes functions that mimic UI actions like adding objects, deleting, or rendering. 
Example: \u003Ccode>bpy.ops.mesh.primitive_cube_add()\u003C/code> adds a cube.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>bpy.types\u003C/code> (Types)\u003C/strong> - Defines the core classes of Blender’s data (e.g., \u003Ccode>Object\u003C/code>, \u003Ccode>Mesh\u003C/code>, \u003Ccode>Material\u003C/code>) for extension and customization, to create custom panels or operators.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>bpy.utils\u003C/code> (Utilities)\u003C/strong> - Provides helper functions for class registration, add-on handling, and system path access, e.g., \u003Ccode>bpy.utils.register_class(MyOperator)\u003C/code>.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>bpy.path\u003C/code> (Path Utilities)\u003C/strong> - Tools for handling file paths, including resolving relative paths and creating absolute paths, e.g., \u003Ccode>bpy.path.abspath(\"//textures/wood.png\")\u003C/code>.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>bpy.app\u003C/code> (Application Data)\u003C/strong> - Provides information about Blender itself like version, build details, and runtime mode. Example: \u003Ccode>bpy.app.version\u003C/code> returns \u003Ccode>(3, 6, 2)\u003C/code>.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>bpy.props\u003C/code> (Property Definitions)\u003C/strong> - Used to define custom properties like numbers, strings, and enums for operators, panels, or addons, e.g., \u003Ccode>my_prop: bpy.props.IntProperty(name=\"My Number\")\u003C/code>.\u003C/li>\u003C/ul>\u003Cp>Then, you can find more specialized libraries:\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>\u003Ccode>aud\u003C/code> (Audio System)\u003C/strong> - Blender’s audio library for playing sounds, loading files, and mixing audio. Example: play a .wav file directly in Blender with Python.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>bgl\u003C/code> (OpenGL Wrapper)\u003C/strong> - Low-level OpenGL wrapper for custom 3D viewport drawing (being replaced by \u003Ccode>gpu\u003C/code>). 
To draw custom overlays, for example.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>bl_math\u003C/code> (Additional Math Functions)\u003C/strong> - Extra math helpers for interpolation, distance calculations, and geometry operations, like computing distances between points.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>blf\u003C/code> (Font Drawing)\u003C/strong> - Blender’s font drawing module for rendering text in viewport overlays or panels, e.g., \u003Ccode>blf.draw(font_id, \"Hello World\")\u003C/code>.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>bmesh\u003C/code> (BMesh Module)\u003C/strong> - Provides direct low-level access to Blender’s mesh editing system for procedural modeling and topology operations. Example: creating or modifying vertices and faces in edit mode.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>bpy_extras\u003C/code> (Extra Utilities)\u003C/strong> - Contains helper functions like import/export support, math conversions, and view3d utilities, e.g., simplifying coordinate conversions.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>freestyle\u003C/code> (Freestyle Module)\u003C/strong> - Controls Blender’s Freestyle line rendering for non-photorealistic edge rendering. Example: adjusting line styles or visibility rules.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>gpu\u003C/code> (GPU Module)\u003C/strong> - Modern GPU drawing API that allows custom shaders and viewport overlays (successor to \u003Ccode>bgl\u003C/code>). Example: rendering with custom GLSL shaders.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>gpu_extras\u003C/code> (GPU Utilities)\u003C/strong> - Helper functions for GPU drawing, simplifying shape rendering without full GLSL code, e.g., drawing a simple rectangle.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>idprop.types\u003C/code> (ID Property Access)\u003C/strong> - Provides structured access to Blender’s custom ID properties in dictionary/array form. 
For example, to manipulate custom metadata on objects.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>imbuf\u003C/code> (Image Buffer)\u003C/strong> - Handles image buffers, enabling loading, saving, and pixel-level manipulation, e.g., procedural image generation.\u003C/li>\u003Cli>\u003Cstrong>\u003Ccode>mathutils\u003C/code> (Math Types &amp; Utilities)\u003C/strong> - Blender’s math library offering \u003Ccode>Vector\u003C/code>, \u003Ccode>Matrix\u003C/code>, \u003Ccode>Quaternion\u003C/code>, and geometric utilities, e.g., \u003Ccode>Vector((1,0,0)).cross(Vector((0,1,0))) → (0,0,1)\u003C/code>.\u003C/li>\u003C/ul>\u003Chr>\u003Ch2 id=\"conclusion\">\u003Cstrong>Conclusion\u003C/strong>\u003C/h2>\u003Cp>Blender scripting with Python is one of the most powerful ways to extend and personalize your workflow.\u003C/p>\u003Cp>In this article, we explored how to create and run scripts, print your very first \"Hello World\" in the 3D world, and use the bpy module to make Blender do exactly what you want.\u003C/p>\u003Cp>At first glance, scripting might feel intimidating, but as you’ve seen, even a handful of lines can open doors to entirely new possibilities!\u003C/p>\u003Cp>Now, it’s your turn. Automate the boring stuff or craft tools from scratch for your studio pipeline. You can do it!\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 
😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>\u003Cp>\u003C/p>",{"uuid":624,"comment_id":625,"feature_image":626,"featured":29,"visibility":30,"created_at":627,"updated_at":628,"custom_excerpt":629,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":630,"primary_tag":631,"url":632,"excerpt":629,"reading_time":43,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":510},"a68ec682-3536-4c62-ab40-f59e63eae8b1","68ec43d4ded61600017fff7b","https://images.unsplash.com/photo-1760548425425-e42e77fa38f1?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDd8fCUyMHNjcmlwdGluZ3xlbnwwfHx8fDE3NjA2MTMxODl8MA&ixlib=rb-4.1.0&q=80&w=2000","2025-10-13T02:12:04.000+02:00","2026-02-20T06:04:03.000+01:00","Learn how to automate Blender with Python! 
Discover how scripting can speed up production, eliminate repetitive work, and let you build custom tools tailored to your animation pipeline.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"https://blog.cg-wire.com/blender-scripting-animation/","/posts/blender-scripting-animation","2025-10-21T10:00:42.000+02:00",{"title":619},"blender-scripting-animation","posts/blender-scripting-animation",[639,640],{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},{"id":333,"name":334,"slug":335,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":336},"bdKf7MIhtakMGVSZgWhSLqHvKXdi7Me_aKU6pQKUlbI",{"id":643,"title":644,"authors":645,"body":7,"description":7,"extension":8,"html":647,"meta":648,"navigation":12,"path":659,"published_at":660,"seo":661,"slug":662,"stem":663,"tags":664,"__hash__":667,"uuid":649,"comment_id":650,"feature_image":651,"featured":29,"visibility":30,"created_at":652,"updated_at":653,"custom_excerpt":654,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"c
anonical_url":7,"primary_author":655,"primary_tag":656,"url":657,"excerpt":654,"reading_time":150,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":658},"ghost/posts:dcc-integration-blender-kitsu.json","From Blender to Kitsu: How to Create a Custom DCC Bridge (2026)",[646],{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},"\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">⚙️\u003C/div>\u003Cdiv class=\"kg-callout-text\">Ever wished your creative tools could talk to your production tracker? With a custom DCC integration, they finally can — no more manual uploads, mismatched versions, or lost time between Blender and Kitsu.\u003C/div>\u003C/div>\u003Cp>Artists rely on Digital Content Creation (DCC) tools like \u003Cstrong>Blender\u003C/strong>, \u003Cstrong>Maya\u003C/strong>, or \u003Cstrong>Houdini\u003C/strong> to bring stories to life.\u003C/p>\u003Cp>But while the creative work happens inside these tools, production tracking happens elsewhere. This disconnect can lead to version mismatches, time lost in repetitive manual uploads, and eventually less time spent creating. 
Without a smooth connection between the DCC software and your production tracker, your pipeline suffers.\u003C/p>\u003Cp>That’s where custom integrations come in.\u003C/p>\u003Cp>In this article, we walk through the basics of building a Blender-to-Kitsu integration, similar to Kitsu Publisher, that publishes 3D model previews directly from Blender.\u003C/p>\u003Chr>\u003Ch2 id=\"what%E2%80%99s-a-dcc-integration\">\u003Cstrong>What’s a DCC Integration?\u003C/strong>\u003C/h2>\u003Cp>A DCC integration is \u003Cstrong>a bridge between a creative application and another software tool\u003C/strong>, like a production tracker.\u003C/p>\u003Cp>For example, instead of exporting files, navigating to a web browser, and manually uploading versions, an integration could\u003Ca href=\"https://blog.cg-wire.com/working-with-multiple-digital-content-creation-tools/\"> \u003Cu>allow artists to publish directly from their tool of choice\u003C/u>\u003C/a>.\u003C/p>\u003Cp>Integrations can handle tasks like\u003Ca href=\"https://blog.cg-wire.com/rendering-explained/\"> \u003Cu>managing complex rendering pipelines\u003C/u>\u003C/a>,\u003Ca href=\"https://blog.cg-wire.com/animation-asset-storage/\"> \u003Cu>managing asset storage and versioning\u003C/u>\u003C/a>, or generating preview images: they automate the boring parts of production so artists can focus on telling stories.\u003C/p>\u003Chr>\u003Ch2 id=\"why-dcc-integration\">\u003Cstrong>Why DCC Integration?\u003C/strong>\u003C/h2>\u003Cp>Every studio eventually hits the same bottleneck: as projects grow, manual processes break down.\u003C/p>\u003Cp>\u003Cstrong>Integrations save time\u003C/strong> because they remove context switching between software.\u003C/p>\u003Cp>They also \u003Cstrong>reduce errors by standardizing repetitive tasks\u003C/strong> like output delivery, enforcing naming conventions, formats, and metadata consistency.\u003C/p>\u003Cp>Last but not least, they \u003Cstrong>improve project management and 
communication\u003C/strong> by giving supervisors and producers real-time updates.\u003C/p>\u003Cp>All professional animation studios rely on a pipeline, and DCC integrations are essential.\u003C/p>\u003Cp>To give you a concrete example, let's try building a script integration that uploads a preview from Blender to Kitsu to easily review work with your team.\u003C/p>\u003Chr>\u003Ch2 id=\"1-getting-started\">\u003Cstrong>1. Getting Started\u003C/strong>\u003C/h2>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-green\">\u003Cdiv class=\"kg-callout-emoji\">💡\u003C/div>\u003Cdiv class=\"kg-callout-text\">\u003Cb>\u003Cstrong style=\"white-space: pre-wrap;\">Looking for working examples?\u003C/strong>\u003C/b>\u003Cbr>\u003Cbr>You can find the complete source code for the example Blender–Kitsu integration showcased in this guide on our GitHub:\u003Cbr>\u003Cbr>🔗 \u003Ca href=\"https://github.com/cgwire/blender-kitsu-dcc-integration-example?ref=blog.cg-wire.com\">github.com/cgwire/blender-kitsu-dcc-integration-example\u003C/a>\u003C/div>\u003C/div>\u003Cp>Before we dive into scripting, let’s set up a local Kitsu instance where we can safely test our integration.\u003C/p>\u003Cp>The easiest way to run Kitsu locally is by using the kitsu-docker repository. Clone the repository to your machine and follow the instructions:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">git clone https://github.com/cgwire/kitsu-docker.git\ncd kitsu-docker\ndocker build -t cgwire/cgwire .\ndocker run --init -ti --rm -p 80:80 -p 1080:1080 --name cgwire cgwire/cgwire\u003C/code>\u003C/pre>\u003Cp>This will start all necessary services: Kitsu, the PostgreSQL database, and supporting components.\u003C/p>\u003Cp>Once the containers are running, open \u003Ccode>http://localhost:80\u003C/code> in your browser. 
Use the default credentials:\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Email\u003C/strong>: admin@example.com\u003C/li>\u003Cli>\u003Cstrong>Password:\u003C/strong> mysecretpassword\u003C/li>\u003C/ul>\u003Cp>You’ll be taken to the Kitsu dashboard.\u003C/p>\u003Cp>Before we can upload previews, we need something to upload them to. In Kitsu:\u003C/p>\u003Col>\u003Cli>Create a new production (e.g., Blender Test Project) by going to the \"\u003Cstrong>Productions\u003C/strong>\" page from the sidebar.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/CleanShot-2025-10-13-at-9---.26.46-1.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"206\" height=\"479\">\u003C/figure>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-0e43401b-afb6-4345-b773-db3d9b03bed3.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"946\" height=\"914\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-0e43401b-afb6-4345-b773-db3d9b03bed3.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-0e43401b-afb6-4345-b773-db3d9b03bed3.png 946w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Col start=\"2\">\u003Cli>Inside the production, create an asset.\u003C/li>\u003C/ol>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-83cce3b0-70a0-486d-87e7-4914a5304262.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"946\" height=\"914\" 
srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-83cce3b0-70a0-486d-87e7-4914a5304262.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-83cce3b0-70a0-486d-87e7-4914a5304262.png 946w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Col start=\"3\">\u003Cli>Creating an asset automatically adds new tasks for all the selected task categories during the production creation. We can use those to upload previews.\u003C/li>\u003C/ol>\u003Cp>To interact with Kitsu programmatically,\u003Ca href=\"https://github.com/cgwire/gazu?ref=blog.cg-wire.com\"> \u003Cu>we use gazu, the official Python client for the Kitsu API\u003C/u>\u003C/a>. It allows us to authenticate, create entities, and upload previews directly from scripts.\u003C/p>\u003Cp>Install it with:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">pip install gazu\u003C/code>\u003C/pre>\u003Cp>Next, authenticate with your Kitsu instance using your username and password:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import gazu\n\ngazu.set_host(\"&lt;http://localhost/api&gt;\")\n\nuser = gazu.log_in(\"admin@example.com\", \"mysecretpassword\")\n\nprint(\"Logged in as:\", user['user']['full_name'])\u003C/code>\u003C/pre>\u003Cp>\u003Cbr>Once logged in, we can\u003Ca href=\"https://gazu.cg-wire.com/?ref=blog.cg-wire.com\"> \u003Cu>use gazu to fetch productions, assets, and tasks, then attach media files to them\u003C/u>\u003C/a>.\u003C/p>\u003Chr>\u003Ch2 id=\"2-creating-a-preview-from-blender\">\u003Cstrong>2. Creating a preview from Blender\u003C/strong>\u003C/h2>\u003Cp>Producing a preview render is a common use case for animators. 
You need to get regular feedback throughout the production phase, and a preview is easier to reason with than importing an entire project.\u003C/p>\u003Cp>You can automate this with Blender’s Python API by setting up a viewport capture to render a single frame, saving the output to a temporary folder, and applying studio-wide render settings (resolution, format, watermarking):\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">import bpy\n\nbpy.ops.wm.open_mainfile(filepath=\"./project.blend\")\n\nbpy.context.scene.render.resolution_x = 256\nbpy.context.scene.render.resolution_y = 256\nbpy.context.scene.render.resolution_percentage = 100\n\nbpy.context.scene.render.image_settings.file_format = 'PNG'\nbpy.context.scene.render.filepath = \"./preview.png\"\n\nbpy.ops.render.render(write_still=True)\u003C/code>\u003C/pre>\u003Cul>\u003Cli>\u003Ccode>import bpy\u003C/code>: Import Blender’s Python API\u003C/li>\u003Cli>b\u003Ccode>py.ops.wm.open_mainfile(filepath=\"./project.blend\")\u003C/code>: Opens an existing Blender project file called \u003Ccode>project.blend\u003C/code>\u003C/li>\u003Cli>\u003Ccode>bpy.context.scene.render.resolution_x = 256 [...]\u003C/code>We configure the render resolution to 256 pixels by 256 pixels with no downscale.\u003C/li>\u003Cli>\u003Ccode>bpy.context.scene.render.image_settings.file_format = 'PNG'\u003C/code>: Set the output format to PNG and define the output path to  \u003Ccode>preview.png\u003C/code> before executing a still render of the scene.\u003C/li>\u003C/ul>\u003Cp>This script gives you a lightweight preview file that’s easy to store in Kitsu and quick for supervisors to review.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-e936efc9-2c3b-43ea-86f7-8845bdc6c50f.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"946\" height=\"914\" 
srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-e936efc9-2c3b-43ea-86f7-8845bdc6c50f.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-e936efc9-2c3b-43ea-86f7-8845bdc6c50f.png 946w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Cp>To run it, install the \u003Ccode>bpy\u003C/code> package and launch the program like any other Python script:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">python3 preview.py\u003C/code>\u003C/pre>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-8fc4a1a4-01c7-4fcb-a8a6-b5d50588d6b8.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"687\" height=\"768\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-8fc4a1a4-01c7-4fcb-a8a6-b5d50588d6b8.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-8fc4a1a4-01c7-4fcb-a8a6-b5d50588d6b8.png 687w\">\u003C/figure>\u003Chr>\u003Ch2 id=\"3-uploading-a-preview-to-kitsu\">\u003Cstrong>3. 
Uploading a preview to Kitsu\u003C/strong>\u003C/h2>\u003Cp>With the preview file ready, the final step is pushing the data into Kitsu with gazu.\u003C/p>\u003Cp>First, we retrieve the task we previously created:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">projects = gazu.project.all_projects()\n\nassets = gazu.asset.all_assets_for_project(projects[0])\n\ntasks = gazu.task.all_tasks_for_asset(assets[0])\ntask_status = gazu.task.get_task_status_by_short_name(\"todo\")\u003C/code>\u003C/pre>\u003Cp>To do so, we get a list of all available projects, then the assets of our newly created project, and finally the tasks assigned to this asset. We also look up the \"todo\" task status by its short name, since publishing a preview attaches a status to the comment.\u003C/p>\u003Cp>We then publish a comment on the task with the preview file attached:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">(comment, preview_file) = gazu.task.publish_preview(\n    tasks[0],\n    task_status,\n    comment=\"upload preview\",\n    preview_file_path=\"./preview.png\"\n)\u003C/code>\u003C/pre>\u003Cp>And run the script:\u003C/p>\u003Cpre>\u003Ccode class=\"language-bash\">python3 upload.py\u003C/code>\u003C/pre>\u003Cp>Once uploaded, the file becomes instantly available in Kitsu’s web interface. 
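\u003C/p>\u003Cp>Rather than always taking \u003Ccode>tasks[0]\u003C/code>, you will often want to publish to a specific task type. As a sketch, the hypothetical helper below (not part of gazu) does the matching; it assumes each task dict exposes a \u003Ccode>task_type_name\u003C/code> key, which you should verify against the payload your Kitsu instance returns:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\">def pick_task(tasks, task_type_name):\n    \"\"\"Return the first task whose type matches, or None.\n\n    Hypothetical helper: assumes each task dict exposes a\n    \"task_type_name\" key; check your instance's payload.\n    \"\"\"\n    for task in tasks:\n        if task.get(\"task_type_name\") == task_type_name:\n            return task\n    return None\u003C/code>\u003C/pre>\u003Cp>A call like \u003Ccode>pick_task(tasks, \"Modeling\")\u003C/code> could then replace \u003Ccode>tasks[0]\u003C/code> in the \u003Ccode>publish_preview\u003C/code> call above.\u003C/p>\u003Cp>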
Supervisors can review it, leave feedback, and mark statuses—all without any manual file juggling from the artist.\u003C/p>\u003Cfigure class=\"kg-card kg-image-card\">\u003Cimg src=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-e9710dd1-d727-4e9f-85f8-9db075a159f4.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"955\" height=\"931\" srcset=\"https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/size/w600/2025/10/data-src-image-e9710dd1-d727-4e9f-85f8-9db075a159f4.png 600w, https://storage.ghost.io/c/be/86/be86007c-1b95-476e-8b3b-895720c0d138/content/images/2025/10/data-src-image-e9710dd1-d727-4e9f-85f8-9db075a159f4.png 955w\" sizes=\"(min-width: 720px) 720px\">\u003C/figure>\u003Chr>\u003Ch2 id=\"4-distribution\">\u003Cstrong>4. Distribution\u003C/strong>\u003C/h2>\u003Cp>Once your script is working, you have a few options for how to use or share it:\u003C/p>\u003Cul>\u003Cli>\u003Cstrong>Run it directly in Blender\u003C/strong> - Open the \u003Cem>Scripting\u003C/em> workspace and execute the script from there.\u003C/li>\u003Cli>\u003Cstrong>Run it from the command line\u003C/strong> - Just like we did earlier, you can run your script from the terminal like you would for any Python program.\u003C/li>\u003Cli>\u003Cstrong>Package it as an add-on\u003C/strong> - This allows you to enable it from Blender’s preferences and even design a custom user interface for easier access.\u003C/li>\u003C/ul>\u003Cp>Creating a full add-on with its own UI is a must for sharing integrations with artists, but it's a much bigger topic we won’t cover here. If you’d like to dive deeper, check out the\u003Ca href=\"https://docs.blender.org/manual/en/latest/advanced/scripting/addon_tutorial.html?ref=blog.cg-wire.com\"> \u003Cu>official Blender add-on tutorial\u003C/u>\u003C/a>. 
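\u003C/p>\u003Cp>As a taste of what that involves, here is a minimal add-on skeleton. This is an illustrative sketch only: the \u003Ccode>bl_info\u003C/code> values, class name, and identifiers are placeholders, and the render and upload logic from the previous sections would live inside the operator's \u003Ccode>execute\u003C/code> method. It only runs inside Blender, since it depends on the \u003Ccode>bpy\u003C/code> runtime:\u003C/p>\u003Cpre>\u003Ccode class=\"language-python\"># Illustrative add-on skeleton; names and bl_info values are placeholders.\nbl_info = {\n    \"name\": \"Kitsu Preview Publisher\",\n    \"blender\": (3, 0, 0),\n    \"category\": \"Pipeline\",\n}\n\nimport bpy\n\nclass PublishPreviewOperator(bpy.types.Operator):\n    \"\"\"Render a preview and publish it to Kitsu\"\"\"\n    bl_idname = \"pipeline.publish_preview\"\n    bl_label = \"Publish Preview to Kitsu\"\n\n    def execute(self, context):\n        # The render and upload steps from the previous sections go here.\n        return {'FINISHED'}\n\ndef register():\n    bpy.utils.register_class(PublishPreviewOperator)\n\ndef unregister():\n    bpy.utils.unregister_class(PublishPreviewOperator)\u003C/code>\u003C/pre>\u003Cp>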
And stay tuned, we’ll be covering this in more detail in a future post!\u003C/p>\u003Chr>\u003Ch2 id=\"conclusion\">\u003Cstrong>Conclusion\u003C/strong>\u003C/h2>\u003Cp>DCC pipeline integrations are foundational for efficient animation studios: by connecting tools like Blender directly with Kitsu, you reduce friction, improve communication, and make life easier for both artists and production managers.\u003C/p>\u003Cp>You don’t need a massive pipeline team to see the benefits of integrations. Even a small studio can start simple, automate a few pain points, and scale up over time as needed.\u003C/p>\u003Cp>\u003Ca href=\"https://github.com/cgwire/kitsu-publisher-next?ref=blog.cg-wire.com#readme\">\u003Cu>Check out the Kitsu Publisher documentation\u003C/u>\u003C/a> for a production-ready DCC integration solution for Blender, Toon Boom Harmony, and Unreal Engine!\u003C/p>\u003Cdiv class=\"kg-card kg-callout-card kg-callout-card-yellow\">\u003Cdiv class=\"kg-callout-emoji\">📽️\u003C/div>\u003Cdiv class=\"kg-callout-text\">To learn more about the animation process \u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" rel=\"noreferrer\">consider joining our Discord community\u003C/a>! We connect with over a thousand experts who share best practices and occasionally organize in-person events. We’d be happy to welcome you! 
😊\u003C/div>\u003C/div>\u003Cdiv class=\"kg-card kg-button-card kg-align-center\">\u003Ca href=\"https://www.cg-wire.com/community?ref=blog.cg-wire.com\" class=\"kg-btn kg-btn-accent\">Join Our Discord Community\u003C/a>\u003C/div>",{"uuid":649,"comment_id":650,"feature_image":651,"featured":29,"visibility":30,"created_at":652,"updated_at":653,"custom_excerpt":654,"codeinjection_head":34,"codeinjection_foot":35,"custom_template":7,"canonical_url":7,"primary_author":655,"primary_tag":656,"url":657,"excerpt":654,"reading_time":150,"access":12,"comments":29,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"email_subject":7,"frontmatter":7,"feature_image_alt":7,"feature_image_caption":658},"1618a7a1-ff36-4259-910d-2902ca5adbbf","68ec43d0ded61600017fff75","https://images.unsplash.com/photo-1580894894513-541e068a3e2b?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDV8fFNvZnR3YXJlJTIwaW50ZWdyYXRpb258ZW58MHx8fHwxNzYwMzE0NjM1fDA&ixlib=rb-4.1.0&q=80&w=2000","2025-10-13T02:12:00.000+02:00","2026-02-20T06:04:22.000+01:00","Learn how to build a custom Blender integration for Kitsu using Python. 
This guide walks you through setting up a local environment, generating previews in Blender, and uploading them to Kitsu automatically—streamlining your DCC pipeline for faster, more reliable production.",{"id":23,"name":11,"slug":15,"profile_image":7,"cover_image":7,"bio":7,"website":7,"location":7,"facebook":7,"twitter":7,"meta_title":7,"meta_description":7,"threads":7,"bluesky":7,"mastodon":7,"tiktok":7,"youtube":7,"instagram":7,"linkedin":7,"url":10},{"id":333,"name":334,"slug":335,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":336},"https://blog.cg-wire.com/dcc-integration-blender-kitsu/","\u003Cspan style=\"white-space: pre-wrap;\">Photo by \u003C/span>\u003Ca href=\"https://unsplash.com/@thisisengineering?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: pre-wrap;\">ThisisEngineering\u003C/span>\u003C/a>\u003Cspan style=\"white-space: pre-wrap;\"> / \u003C/span>\u003Ca href=\"https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit\">\u003Cspan style=\"white-space: 
pre-wrap;\">Unsplash\u003C/span>\u003C/a>","/posts/dcc-integration-blender-kitsu","2025-10-14T11:23:34.000+02:00",{"title":644},"dcc-integration-blender-kitsu","posts/dcc-integration-blender-kitsu",[665,666],{"id":333,"name":334,"slug":335,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":336},{"id":67,"name":68,"slug":69,"description":7,"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"url":70},"Y6D4qXodYv1lXvjekjP26GNUDi8I9hI336Agp1r2n8s",[669,677,689,697,708,717,725],{"id":670,"title":334,"body":7,"description":7,"extension":8,"meta":671,"name":334,"navigation":12,"path":674,"seo":675,"slug":335,"stem":335,"__hash__":676},"tag/blender.json",{"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"count":672,"url":336},{"posts":673},15,"/blender",{"description":7},"NGhuNL5GEEpGrAt0Y1hoiAFOBRkB8zKBFq90XcJR47E",{"id":678,"title":679,"body":7,"description":7,"extension":8,"meta":680,"name":684,"navigation":12,"path":685,"seo":686,"slug":687,"stem":687,"__hash__":688},"tag/company.json","Company",{"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"count":681,"url":683},{"posts":682},35,"https://blog.cg-wire.com/tag/company/","Company 
News","/company",{"description":7},"company","CSg2BLNemwEASf_RYxGHsJOXTxg3xNUldTg2Upc7ZC0",{"id":690,"title":39,"body":7,"description":7,"extension":8,"meta":691,"name":39,"navigation":12,"path":694,"seo":695,"slug":40,"stem":40,"__hash__":696},"tag/customer-stories.json",{"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"count":692,"url":41},{"posts":693},3,"/customer-stories",{"description":7},"vO2w4OuionBXR7-dsFeWvCucjpG7VuCqGV3NZOYyVw0",{"id":698,"title":699,"body":7,"description":7,"extension":8,"meta":700,"name":703,"navigation":12,"path":704,"seo":705,"slug":706,"stem":706,"__hash__":707},"tag/glossary.json","Glossary",{"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"count":701,"url":702},{"posts":673},"https://blog.cg-wire.com/tag/glossary/","Animation 
Glossary","/glossary",{"description":7},"glossary","ahYw1ulGqHh4X1VqtWmRXHQzLH25NsXPHgKJ8kwOMwA",{"id":709,"title":710,"body":7,"description":7,"extension":8,"meta":711,"name":68,"navigation":12,"path":714,"seo":715,"slug":69,"stem":69,"__hash__":716},"tag/pipeline.json","Pipeline",{"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"count":712,"url":70},{"posts":713},77,"/pipeline",{"description":7},"qa7lmThepbMYAJ--m7WHgcY7p9lpC51BDn7imjnLoHY",{"id":718,"title":122,"body":7,"description":7,"extension":8,"meta":719,"name":122,"navigation":12,"path":722,"seo":723,"slug":123,"stem":123,"__hash__":724},"tag/production-management.json",{"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"count":720,"url":124},{"posts":721},52,"/production-management",{"description":7},"CK3g20iyLvLAN6TiR91N008bRCUY5R5T0A-dnAm-nfI",{"id":726,"title":727,"body":7,"description":7,"extension":8,"meta":728,"name":727,"navigation":12,"path":731,"seo":732,"slug":733,"stem":733,"__hash__":734},"tag/resources.json","Resources",{"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"count":729,"url":730},{"posts":693},"https://blog.cg-wire.com/tag/resources/","/resources",{"description":7},"resources","uMVK_T3_oD87qJ7NOx5cVBCT5uXC9zFj44ZZatYH5RQ",[736,740,744,748,752,756,760],{"id":670,"title":334,"body":7,"description":7,"extension":8,"meta":737,"name":334,"navigation":12,"path":674,"seo":739,"s
lug":335,"stem":335,"__hash__":676},{"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"count":738,"url":336},{"posts":673},{"description":7},{"id":678,"title":679,"body":7,"description":7,"extension":8,"meta":741,"name":684,"navigation":12,"path":685,"seo":743,"slug":687,"stem":687,"__hash__":688},{"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"count":742,"url":683},{"posts":682},{"description":7},{"id":690,"title":39,"body":7,"description":7,"extension":8,"meta":745,"name":39,"navigation":12,"path":694,"seo":747,"slug":40,"stem":40,"__hash__":696},{"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"count":746,"url":41},{"posts":693},{"description":7},{"id":698,"title":699,"body":7,"description":7,"extension":8,"meta":749,"name":703,"navigation":12,"path":704,"seo":751,"slug":706,"stem":706,"__hash__":707},{"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"count":750,"url":702},{"posts":673},{"description":7},{"id":709,"title":710,"body":7,"description":7,"extension":8,"meta":753,"name":68,"navigation":12,"path":714,"seo":755,"slug":69,"stem":69,"__hash__":716},{"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_ima
ge":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"count":754,"url":70},{"posts":713},{"description":7},{"id":718,"title":122,"body":7,"description":7,"extension":8,"meta":757,"name":122,"navigation":12,"path":722,"seo":759,"slug":123,"stem":123,"__hash__":724},{"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"count":758,"url":124},{"posts":721},{"description":7},{"id":726,"title":727,"body":7,"description":7,"extension":8,"meta":761,"name":727,"navigation":12,"path":731,"seo":763,"slug":733,"stem":733,"__hash__":734},{"feature_image":7,"visibility":30,"og_image":7,"og_title":7,"og_description":7,"twitter_image":7,"twitter_title":7,"twitter_description":7,"meta_title":7,"meta_description":7,"codeinjection_head":7,"codeinjection_foot":7,"canonical_url":7,"accent_color":7,"count":762,"url":730},{"posts":693},{"description":7},1776340309989]