Snowcap: Ubisoft's AI-Powered GPU Profiler
Predicting GPU performance on consoles using trained ML models.
A plugin for the Snowdrop engine, Snowcap estimates performance for consoles in real-time using a trained AI model.
By gathering data on different performance configurations, Snowcap is game-agnostic and can adapt to changes in engine development.
We hear from three developers behind the tool about how it is being used throughout Ubisoft’s productions.
AI and Games is made possible thanks to our premium supporters. Support the show to have your name in video credits, contribute to future episode topics, watch content in early access, and receive exclusive supporters-only content on AI and Games.
If you'd like to work with us on your own games projects, please check out our consulting services provided at AI and Games. To sponsor our content, please visit our dedicated sponsorship page.
Building games is difficult, we know this, but building games that are designed to be released on multiple platforms at once, well… that’s a whole other problem. As any developer will know, getting your game running smoothly on one platform is the first part of a much longer battle; the next is ensuring it runs as smoothly as possible on all of the other hardware you want to release it on.
And this brings us to Snowcap: a plugin built for Ubisoft’s Snowdrop - the game engine used to create titles such as The Division, Star Wars Outlaws, and Avatar: Frontiers of Pandora - that uses AI to estimate how a game will perform on different consoles without actually running it on the device.
In an age where questions are quite rightfully being asked about the value of AI in game development, Snowcap is - for me at least - one of the coolest examples I’ve seen of how to use artificial intelligence to support game developers. It’s not just speeding up the process of porting to other platforms, it’s helping developers anticipate and remove performance issues before they even deploy the game on the target hardware.
Follow AI and Games on: BlueSky | YouTube | LinkedIn | TikTok
The Challenge of Cross-Platform Development
So to get into the weeds of this, it’s worth stressing the real problem that Snowcap is trying to solve, and that’s supporting cross-platform development. As you’ll know, when AAA studios ship a new title it’s typically released on PC and consoles at the same time. If you’re porting to console - and assuming we’re focussed on the current hardware on the market in 2026 - then that means you’re working on the PlayStation 5, the PS5 Pro, the Xbox Series S and X, and potentially now the Nintendo Switch 2.
Each of these devices has a very different performance profile, as a result of its particular combination of CPU, memory, and GPU. Quite often when larger studios have a game in development they’ll have teams of devs focussed on optimising for each platform: addressing a memory issue on the PS5, dealing with the GPU performance on the Xbox Series S, and so on.
Speaking from my own experience of porting games to consoles, this is a long process: you work on the issue, make a new build of the game, deploy it to your devkit version of the console, and then begin to run profilers and other performance tests to ensure you don’t wind up with a bad review over on Digital Foundry.
Now this is all par for the course in game development, but as I said, it’s a long process. You can easily lose hours of development time re-building the game, deploying it to the hardware, and hooking up the profiling tools to check whether any incremental improvement has been made.
And this brings us to Snowcap and how Massive used AI to help make that process easier.
In the video above, Snowcap shows us the impact on performance across different platforms (Xbox Series S, X, and PlayStation 5) as a result of using procedural environment decoration tools as well as a real-time shadow-casting light source. Note the frame rate widget on the bottom left.
Snowcap is a plugin built for the Snowdrop engine that predicts, in real-time within the engine’s editor, the performance of a game for a particular hardware profile and graphics preset. It achieves this using a machine learning model trained on a large volume of data about how games perform in Snowdrop under a variety of conditions on the target hardware; the trained model can then predict how well the current game in the editor would perform on that hardware. As we can see in this demo footage provided to us by Ubisoft, this version of Snowcap is predicting performance for the PlayStation 5, Xbox Series S and Xbox Series X at the same time.
A Change in Philosophy
This is quite an impressive technical achievement, and we’ll dig into the weeds on that shortly, but it’s worth taking a moment to highlight that this is not just a technical innovation, but also something of a change in philosophy over how games are developed.
As Franck Maestre - a senior engineer on the engine - explained to us, the tool started development when Snowdrop was being upgraded for the current generation of consoles, taking into account how Ubisoft’s games are built and the changes in rendering techniques.
Franck Maestre:
The idea of Snowcap was born while we were upgrading Snowdrop for the next generation of platforms [PlayStation 5, Xbox Series S|X] and while we were upgrading the engine, we started discussions with technical artists, world builders, rendering guys: people working on those new platforms and new capabilities; new rendering techniques, object materials, etc.
We realized that the old process of setting up [art/world] budgets in a traditional way could work to some extent. But as some rendering techniques are more expensive than others, it was introducing a level of complexity to get a deterministic overview of the final performance. So we started investigating injecting in-engine metrics very, very deep.
When you are defining world generation rules, object placements, avoidance, density, it’s difficult to estimate the resulting GPU workload of what you are doing without creating builds and running tests. Also keep in mind [Snowdrop] is node-based. So technical artists can actually edit the object material properties using the graph, which means that our metrics should not be located at this level - the node level itself - but at the renderer’s underlying operations: the rendering pass, the amount of meshes, primitives, etc.
And in fact, it didn’t start out with the ambition to be AI-driven. It was only after an internal demo was shown around the studio that Nikolay Stefanov - chief architect for Snowdrop and Technical Director of the upcoming Tom Clancy’s The Division 3 - suggested that this diagnostic process be modelled using AI. This led David Renaudie - an Expert AI Scientist at Massive Entertainment - to join the Snowcap team and start exploring how to implement the machine learning model.
But while the technical part is critical, as I said, it’s also about a change in approach to how games are being built. We heard from Elia Wrzesniewski, an associate production director on Snowdrop, on how this is changing the way games are made in a more sensible way. Rather than letting performance issues creep into the game in ways that other automated tests will miss, only for them to be ironed out later, Snowcap helps developers become more proactive: identifying problems as they arise when changes are made to the game, and minimising any immediate performance issues that could emerge as they go.
Elia Wrzesniewski:
It’s really about prevention, so it’s a bit of a mindset shift. Just like you described, you want to ship the features, you have a deadline to meet, so everybody’s focused on gameplay, on features, and sometimes it can create that kind of bottleneck on the optimization later in the production cycle.
And so here, yes, it’s really the goal to prevent introducing - unintentionally - performance issues along the way, so you don’t create these bottlenecks.
On the Snowdrop pipeline, we had very natural adoption from game production. Nikolay Stefanov - the games he works on are adopting this tech. People naturally see the benefits of using it.
And as Elia elaborated, by reducing the number of regular or basic performance issues being added to the game, it allows the QA teams to focus much more on finding issues that arise from regular gameplay - as well as what happens when you just let players loose and they start causing chaos.
How Snowcap Works
For any studio wanting to use Snowcap in a given game, they set up their own project definition file, which defines the performance parameters and technical specifications of the game, and this helps determine the expected performance.
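To make the idea concrete, here’s a minimal sketch of the kind of information such a file might hold. The format and field names below are my own invention for illustration, not the actual Snowdrop schema:

```python
# Hypothetical sketch of what a project definition might capture - this is my
# own illustration, not Snowdrop's actual file format or field names.
PROJECT_DEFINITION = {
    "engine_version": "snowdrop-2026.1",          # assumed version label
    "platforms": {
        "ps5": {
            "target_fps": 60,
            "output_resolution": (3840, 2160),
            "dynamic_view_scale_min": 0.7,        # lowest allowed render-resolution ratio
        },
        "xbox_series_s": {
            "target_fps": 60,
            "output_resolution": (2560, 1440),
            "dynamic_view_scale_min": 0.6,
        },
    },
}
```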
In order to both support developers in the moment and provide more accurate predictions over time, Snowcap operates in two modes. The first is of course the performance prediction mode we’ve described, but there is also a data collection mode. After all, in order to predict the performance, it helps to know how well the actual game you’re developing runs in practice on the target hardware - especially as the game itself begins to evolve and change over time.
However, as described to me by the team, given so much of the data capture is focussed on the engine itself, Snowcap is largely game-agnostic, and is instead focussed more on the version and configuration of Snowdrop. This means that over time they can curate datasets reflective of specific permutations of Snowdrop based on the range of pipeline configurations and technical specifications that can be set.
David Renaudie:
It’s pretty simple. There is a hook in the engine that is going to be activated on a basis that we decide. So we sample the capture, because we cannot capture like 60, 80, 100 FPS for all machines during their execution time.
It would be a huge volume of data, but it would also be kind of useless because we don’t actually need all this volume of data.
So to build it, we gather millions of data points - and it’s really easy because we are at frame level - and we randomly sample them across a variety of hardware and presets. We aim to have a good distribution of this data, with all the targets that we want, to use this for our predictions. Then we use it to build a model for a given version of the engine.
Since we run play tests regularly - internal play tests are being done - we gather new data and we enrich the model, or we retrain it to make sure that we have no regression, and so on and so forth.
And then when another game comes in, we redo this exercise of making sure that we have the right metrics, because some rendering techniques differ from game to game, and so sometimes we have to retrain specific models and sometimes we can just reuse one off the shelf that we already have.
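As a rough sketch of the pipeline David describes - sampling frame-level captures rather than storing everything, then fitting a regression model per engine version - the workflow might look something like this. The field names, sampling rate, and use of scikit-learn are all my own assumptions, not how Massive actually builds it:

```python
# A minimal sketch of the capture-and-train loop, under the assumptions above.
import random
from sklearn.neural_network import MLPRegressor

def sample_frames(captured_frames, rate=0.01):
    """Keep a small random fraction of frame-level captures: storing every frame
    from every machine would be a huge and largely redundant volume of data."""
    return [frame for frame in captured_frames if random.random() < rate]

def build_dataset(samples):
    """Turn per-frame engine metrics into feature vectors, with the measured
    GPU frame time on the target hardware as the regression label."""
    X = [[s["meshes"], s["primitives"], s["lights"], s["particles"]] for s in samples]
    y = [s["gpu_frame_ms"] for s in samples]
    return X, y

def train_model(samples):
    """One model per engine version / configuration, retrained as new internal
    playtests provide fresh captures."""
    X, y = build_dataset(samples)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    model.fit(X, y)
    return model
```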
But it’s more than just running under the hood without any guidance. As Elia explained, Snowcap can be configured for specific setups and pre-configured paths in the game. You may recall that way back in 2020 we released a video about how Massive had built testing bots in Tom Clancy’s The Division 2, including autotesters that could run through a level based on a pre-built configuration set by the developers. Well, now Snowcap can be attached to an autotest bot, so that when it gathers data reflective of a player’s normal playthrough, this can be processed into heatmaps and other reports that give the developers a much clearer insight into which parts of the level have performance issues and what might be the cause of them.
Wrzesniewski: We can also tweak the data tracking frequency, to run what we call a ‘camera performance’ or an ‘auto test’. And so basically what it means is that the camera moves through the environment automatically to generate heat maps. So it gives the developers a clear picture of how the level performs even before gameplay elements are added, so you have an idea of how your vanilla level performs.
It’s a useful way to spot potential issues very early in the game’s development, guiding you to make smarter optimisation decisions right from the start and not too late.
An example of a recorded playtest, and the resulting heatmap data that is generated, allowing developers to dig further into where issues are being found.
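To give a sense of how per-frame samples along an autotest path might become a heatmap, here’s a small sketch. The grid cell size and the ‘keep the worst frame time per cell’ policy are my own assumptions, not how Snowcap actually aggregates its reports:

```python
# Hypothetical aggregation of autotest samples into a performance heatmap.
from collections import defaultdict

def build_heatmap(samples, cell_size=10.0):
    """samples: iterable of (x, y, predicted_frame_ms) captured along the autotest path.
    Returns a mapping from grid cell to the worst predicted frame time seen in it."""
    heatmap = defaultdict(float)
    for x, y, frame_ms in samples:
        cell = (int(x // cell_size), int(y // cell_size))
        heatmap[cell] = max(heatmap[cell], frame_ms)
    return dict(heatmap)

# Cells above the platform's frame budget (e.g. 16.6 ms for a 60 FPS target)
# flag the areas of the level worth investigating first.
```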
Visualisation
While we’re talking about Snowcap now in early 2026, the tool has existed in Snowdrop for some time and has been adopted in most projects built in the engine in recent years. In fact I first heard about it in the summer of 2024, in a presentation shown behind closed doors, where we saw its ability to track GPU performance in a complex jungle biome. It highlighted two things:
FPS Prediction: Predicting the frame rate based on the target resolution for the platform.
Dynamic View Scaling: Predicting the level of dynamic view scaling that is being applied - which is when the game is adjusting the rendering resolution on the fly such that the framerate remains consistent. This is useful given each project defines not just the desired resolution and frame rate for a given platform, but the limits of the dynamic scaling as well.
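Putting those two outputs together, a single prediction step might look roughly like the sketch below - again purely illustrative, reusing the hypothetical project definition from earlier rather than Snowcap’s real API:

```python
# Illustrative only: a prediction step returning both quantities Snowcap
# surfaces in the editor. The model interface and field names are assumptions.
def predict_for_platform(model, engine_metrics, platform_cfg):
    """engine_metrics: the per-frame feature vector gathered from the editor.
    platform_cfg: one platform entry from the project definition."""
    predicted_fps, raw_view_scale = model(engine_metrics)  # two regression outputs
    # Dynamic view scaling is clamped to the limits the project allows.
    view_scale = max(platform_cfg["dynamic_view_scale_min"], min(1.0, raw_view_scale))
    return predicted_fps, view_scale
```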
Renaudie:
When we talk about GPU performance, it’s not unidimensional. There are actually two dimensions to it, and this is the entry point of the use of Snowcap in the editor. But if the user says, “Okay, I need to dig a little more into details here,” it’s possible to open it into a more detailed view where you can see the evolution of all the predictions over time, with more complex graphs showing all the breakdown, all the details for the predictions.
The user can also see the engine metrics that are being actually used to make the predictions. The user can have a look at what’s happening right now in the engine and have an idea of, “Hey, I have a predicted performance drop here. What should I be looking at? Oh, I see. There are too many lights, too many particles, too many somethings, and I have to act upon this.”
And this is, again, just a first level of entry, but we saw that it’s still helping and speeding up [the process].
Demo highlighting the change in dynamic view scale. Note this appears both in the Snowcap output window (the larger window) and in the information widget on the bottom left. The outer (green) bar is the framerate, the inner (blue) bar is the ratio of view scaling.
But in addition to this, it can now show the engine metrics that are being used to make the predictions within the model. This can be useful for developers to more readily diagnose why there is a performance issue in a particular area of the map. And it’s not just tracking aspects of the game that impact GPU performance - such as the number of meshes, and the tris within those meshes, being rendered - it’s also tracking CPU objects, courtesy of an integration with Snowdrop’s Performance Analytics Pipeline (PAP). This is of great use given many CPU-bound aspects of the game can potentially have GPU implications as well.
Maestre:
We integrated [PAP] through Snowcap for the CPU part. We have data coming from this pipeline that we can see in editor, and we can see if an entity performs in one place [versus another].
Because depending on the context - like say being in a wind corridor where a constant force is applied to the object - then the rigid body for the object will activate, and generally they activate on collision, and this will cost.
This can also break the frame rate. Of course the GPU is the most important for us, ‘cause this is where we try to anticipate things, but you’ve got the other one, which is the CPU - this can also make the frame rate drop.
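As a concrete (and entirely hypothetical) picture of what a combined record might contain, mixing renderer-level counts with the CPU-side costs surfaced through PAP - the field names and values below are my own guess, not the actual schema:

```python
# Hypothetical per-frame metrics record; field names and values are illustrative.
frame_metrics = {
    # GPU-facing counts gathered at the renderer's underlying operations
    "meshes": 4_812,
    "primitives": 9_300_000,
    "lights": 37,
    "particles": 12_500,
    "render_passes": 42,
    # CPU-side costs surfaced via the Performance Analytics Pipeline (PAP)
    "rigid_bodies_active": 260,
    "entity_update_ms": 3.1,
}
```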
Wrzesniewski:
But we will still have edge cases, because these systems are so complex that even if you have a clean map - even if everything is clean - there is still the probability that something goes wrong if the player, I don’t know, opens everything, or puts too many explosions in a specific location. So, I think this also relieves some bandwidth for QA to really try to break the game and find these edge cases.
And the developers, they can be more confident knowing, ‘okay, I did my level, I created my art, and I know that it should run properly on the target console.’
But yeah, games are so complex today that even with these tools, I think it’s still difficult - we will have edge cases to find.
So of course, the one thing that is worth bringing up - and I think David would be heartbroken if we didn’t put this in the video - was just how big, how complex and expensive this model is.
Renaudie:
Something that I think will be interesting for your audience is to mention that we went the way of artificial neural networks, but we didn’t go in the deep neural network direction, because it wasn’t actually needed. In the early experimentation phase, we tried simple feedforward neural networks, and I’m gonna tell you - without telling you about the architecture or anything - the size of it.
So you know, today we are in a world where it’s the big race, you know, among the big actors, for the biggest model in terms of billions of parameters and so on.
The Snowcap model is running a whopping 18,000 parameters. So 18k parameters, which makes it approximately 80 kilobytes big.
That is still, you know, going very well in terms of accuracy for the prediction. We have like less than 4% error rate - I think 3.33% is what we showed at the conference [the 2025 AI and Games Summer School] - which is really good!
But it also runs in real time on the developer’s workstation without impacting the frame rate on said workstation and at frame level.
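To put that in perspective, here’s a sketch of what a feedforward network in that weight class looks like. The layer widths below are my own invention, chosen purely to land near 18,000 parameters, since the actual architecture isn’t something Renaudie disclosed:

```python
# Illustrative only: not the real Snowcap architecture, just a network of
# comparable size to show how small 18k parameters really is.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(30, 120),   # 30 engine metrics in (assumed), 120 hidden units
    nn.ReLU(),
    nn.Linear(120, 120),
    nn.ReLU(),
    nn.Linear(120, 2),    # two outputs: predicted frame rate and view scale
)

n_params = sum(p.numel() for p in model.parameters())
print(n_params, "parameters,", round(n_params * 4 / 1024), "KB as float32")
# ~18.5k parameters, ~72 KB: tiny by modern standards, yet reportedly under 4%
# prediction error and cheap enough to run every frame on a workstation.
```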
Both Snowcap, and our recent deep dive on MLMove, are a nice reminder that - contrary to the narrative surrounding AI these days - we can often solve complex problems with simple yet well-considered solutions.
Renaudie:
You don’t need a complex ML model to solve even complex problems like this. When you tackle the problem you might say, “Oh wow, it’s really not simple, not something we can easily solve. Oh, we might need a big model.”
Well, sometimes not. I think here that the learning is also interesting. We can solve this type of problem with really lightweight models that just do the job, and it’s beautiful. Simple is beautiful.
Closing
It’s not often an idea comes along that catches me by surprise, but Snowcap is certainly one of them. It’s a really innovative way to apply AI to support game developers in a sensible and practical way: allowing developers to get on with their day-to-day job, while helping them find the issues in their work faster and more readily, so that they can do their best work the first time around rather than having to go back and fix everything later.
Plus it’s clear that as the range of platform specifications grows, with new hardware coming to market, tools such as this can help streamline the process of porting these games. Hey, if it means more of them running pretty slick on portable platforms, I’m all for it.
Special thanks to David, Elia, and Franck for sitting down to chat with us. Plus to David Antell and Joel Morange from the PR teams at Massive and Ubisoft for their support in making this happen - and for giving us permission to share some behind-the-scenes footage of the tool in action that you’ve seen throughout this piece.








