The Need for Governance and Strategy on AI's Impact in Game Dev | 15/04/26
Plus the first talks from our 2025 conference are now live on YouTube!
The AI and Games Newsletter brings concise and informative discussion on artificial intelligence for video games each and every week. Including industry news, innovative research, emerging trends, and our own exclusive editorial and reporting.
You can subscribe to and support AI and Games, with weekly editions appearing in your inbox. If you’d like to work with us on your own games projects, please check out our consulting services. To sponsor, please visit the dedicated sponsorship page.
Hello one and all, and welcome back to this week’s edition of AI and Games. For this week’s edition I’m picking up a thread from a recent interview over on Christopher Dring’s The Game Business on the need for AI governance and strategy. This is something I spend a lot of time discussing with studios in our consulting work, and figured I should share some of that here in the newsletter.
But first, some headlines, and the first talks from AI and Games Conference 2025 hitting YouTube!
Follow AI and Games on: BlueSky | YouTube | LinkedIn | TikTok
AI and Games Conference 2025 on YouTube
After many hours of editing, colour correction and approvals, I’m pleased to announce that for the next couple of months we will be slowly releasing the recordings from last year’s AI and Games Conference.
As always we’re committed to making information as accessible as possible for those interested in the Game AI space. Hence we’re releasing these videos for free on the conference YouTube channel. With special thanks to Xsolla who join us as our presenting partner for the video releases!
To kick things off we have two talks that delivered on some really interesting subject matter.
Debugging Across Time and Platforms: The Power of Determinism
Richard Kogelnig / Havok
First up we welcome Richard Kogelnig from our 2025 sponsor Havok, who digs into the complexities of debugging AI behaviours across different gaming platforms: the challenges of dealing with an issue that isn’t guaranteed to reproduce the same way on Xbox as it does on PlayStation, and how Havok have developed a fool-proof philosophy to meet these challenges through cross-platform determinism.
One Trillion Parameters and No Plans
Jeff Orkin / Bitpart AI
Our second release for this week is one that feels very relevant to the current climate. In an age of Large Language Models being proposed as the solution to many a Game AI problem, is there still a need for traditional planning systems? Jeff Orkin, the creator of the GOAP planner used in 2005’s F.E.A.R., argues not just yes, but maybe more now than ever. This talk digs into the work Orkin’s team at Bitpart.ai are doing exploring how LLMs can add real value in traditional planning domains, and the opportunities that can come from hybrid solutions. Turn up for the insights, and then stick around for the great real-time demo, which turns into a meta-joke about all things AI and Games. It was a pretty surreal moment to be there when it happened.
AI (and Games) in the News
Let’s take a quick run through of the headlines before we dig into our story of the week.
Papers, Please Creator Wary of AI Plagiarism (VGC):
Lucas Pope, the beloved indie developer behind Papers, Please and Return of the Obra Dinn, mentioned in an interview on Mike Rose and Rami Ismail’s ‘Mike & Rami Are Still Here’ podcast that he’s reticent about discussing his future projects because of AI. While he loves talking about the stuff he’s doing, he has a - rather justified - concern that if he posts about it online, then some grifter will attempt to clone it with Generative AI tools.
I mean, not only did I raise this as an issue specifically with the UK’s (now cancelled) copyright plans, but we also talked about how this exact thing happened just two weeks ago. I get it. It sucks.
Lenovo’s New Gaming Handheld DoA Thanks to RAM Prices (Kotaku):
The Legion Go 2 released in 2025 with a hefty price tag of up to $1300. Now thanks to, you guessed it, Generative AI hype, the base model is now $1500, and the top-end device (which has double the RAM) has increased $700 to a whopping $2000. Just in case you needed a reminder that video games are becoming a luxury hobby.
Black & White - 25 Years Later (Eurogamer):
Lewis Gordon published a piece on Eurogamer last week celebrating 25 years of Lionhead’s beloved god sim Black & White. It’s a decent overview, with comments from various members of the development team. Plus it has some insight into how the AI works in the game, but having researched it myself I feel it works too hard to draw parallels between what the game did and the AI of today. Yes, DeepMind co-founder Demis Hassabis worked on the game, and yes, DeepMind made StarCraft bots using the same methodologies, but there’s a big difference between the two.
And yes, I have finished researching Black & White. I hope to have a case study on it sometime this summer. Stay tuned!
How Upcoming RPG Bellwright Simulates NPCs at Scale (80.lv):
Sticking with the Game AI theme - oh how nice it is to briefly write about something other than Generative AI hype - developers Donkey Crew were interviewed by 80LV to discuss how they have thousands of NPCs active in the game. A fun chat and worth checking out. It has a lot of parallels with games like Kingdom Come Deliverance 2, which I’m also working on for a case study later this year!
Phantom Blade Zero Devs Reject DLSS5 (Kotaku):
Short and sweet, but after S-Game Studio’s upcoming title Phantom Blade Zero was listed as one of the upcoming games that support DLSS5 - which as you’ll recall had a reveal that went down like a lead balloon - the studio wrote a lengthy post over on Twitter/X/Whatever to communicate how they’re not using “AI visual tech that could alter our artists’ original creative intent”.
When studios are having to defend their projects because they were planning to use the arguably useful features of DLSS, it only further highlights how badly Nvidia has messed up.
Sony Doubles Down on AI Rendering with Cinemersive (80.lv):
A small story beat that most didn’t pick up on: Sony Interactive Entertainment acquired UK AI company Cinemersive Labs, a company that specialises in volumetric 3D imaging, meaning it takes 2D images and turns them into 3D volumes. It’s sort of a mishmash of existing efforts in photogrammetry, but applied to creating interactive real-time renders using machine learning. While Cinemersive’s work has been predominantly in VR, it does align with Sony’s increasing interest in using AI for rendering, as seen with the PSSR technology on the PS5 Pro.
While Most AI Tools are Hot Garbage, Studios Need Solid Governance and Strategy
Last week I saw some headlines off the back of Christopher Dring’s interview with Jon Gibson, the head of transformation at Keywords Studios. When the conversation surrounding AI tools came up, Gibson had this to say:
“AI feels like it’s in the chaos phase right now. And we need to move to the usable phase. How do we use AI in live production environments? How do we use AI in a way where it complements teams rather than potentially threatens teams? And also how do we use it in a context where it’s governed, it’s controlled, it’s IP safe, it’s legally safe, it’s ethically and morally safe? The gulf between cool AI demos where you put in a prompt and you get something amazing, to actually AI in production where you get consistent, high-quality results steered and directed by humans, is quite a big leap.”
You can hear the full comment over on The Game Business, but for me it was refreshing - and not too surprising - to hear this coming from Keywords. After all, they’ve done a lot of work in exploring the efficacy of AI tools. I have previously written up the work they did on the likes of Project AVA in 2024, an effort to build an entire visual novel game using AI tools, plus Project Kara in 2025, which took the same idea but instead sought to remaster an existing 3D game they had already built.
As reported in Gibson’s interview, the company has “tested about 500 different AI tools”, with only about half a dozen ready to be used in production right now. This came up during the original Project AVA report by Stephen Peacock and Lionel Wood, which noted that many of the tools they found were poorly built, lacked enterprise costing structures and licenses, had operational security concerns, or had “laughable” legal terms associated with them. Never mind the actual efficacy of the tools in practice, which, as discussed in both projects, was mixed at best.
While this was the main headline for a few news outlets, it wasn’t the main takeaway for me. I have been paying close attention to what Keywords have been up to for a while now, and it makes sense that as a co-development studio they want to show they’re aware of the current state of these technologies - be that positive or negative. However, I wanted to talk about what was, for me, a more crucial point. When Dring brought up the current sentiment surrounding AI adoption, as reported in the likes of the GDC State of the Industry reports, Gibson said, emphasis mine:
“I think not having controls and governance around AI is part of that issue… A lot of companies are not really explaining to their people why they’re using AI, and why it’s a good thing, and what the strategy is. They’re going into using AI models without that clarity. And it concerns developers.”
Governance is Crucial for Studios on AI Use
This was interesting because I don’t hear a lot of people talking meaningfully about building AI governance publicly: how to ensure there is clear guidance within an organisation on what is considered acceptable use of AI tools and technologies, what impact it can have on development pipelines, and also understanding from stakeholders across an organisation as to what value - if any - it presents.
It’s referred to as an “uncomfortable truth” in the title of the article, in that there’s too much focus on the power and capabilities of what AI can do rather than meaningful discussion of how and if it should be adopted. And I would agree with that; it’s a topic raised frequently in talks I present at games industry conferences: studios need to take this seriously, and not simply ignore it. This means we need to consider:
What tools are considered acceptable in a games studio?
Who is allowed to utilise these tools and in what business capacities?
How does this extend to vendors, contractors, and other third parties?
Ensuring policy reflects the values of the organisation.
Recognising the sentiment among stakeholders across the organisation - from management right down to juniors - on how AI is impacting the business and what lessons are to be taken away from that.
Appropriate communication within the studio and beyond on what the policy is and how it influences decision making.
Now this isn’t an easy process, and I can speak to that given it’s something we help studios address. We run training and consult with studios on putting this governance in place - more on that later. But there’s a real net benefit in having it in place. It helps developers understand what the studio’s stance is on AI and what is acceptable usage, and gives clarity both to the use of these tools and to whether a developer’s work and livelihood runs the risk of being affected by it. In an age where image generators are being sold left and right, it’s important that art teams feel like they can continue to do their best work without risk of their role being automated.
As Gibson states in the article, the culture that emerges off the back of good governance is one that encourages learning. While there’s no guarantee that someone is going to use AI, it’s useful for them to feel they’re in a space where they can ask questions.
In fact, one of my regular activities when running a consultancy - where I’m in the studio for a couple of days - is to bake in a couple of hours where I’m at a desk (be it in the office or online), working on my own thing, but anyone from across the company is welcome to DM me, jump on a call, or sit with me to ask questions. We get into every topic you can imagine, because regardless of people’s enthusiasm for AI they’re curious about it - how it works, its efficacy, whether a tool exists for a specific problem, or even theorising how AI could be used in specific scenarios. Ultimately people are curious whether AI can be used in responsible ways to deliver value.
These conversations are kept in confidence, but I do summarise the themes of what I hear to leadership. This works on two levels: it helps me educate and empower the individual, while also gathering an idea of the sentiment that is potentially affecting morale such that more effective governance and policy can be constructed.
Even If You Don’t Use AI, Leadership Must Take Charge
Now it’s worth stressing that even if your studio has no intention to use AI at all, policy and governance should still be in place. It’s important that teams feel that leadership understands the state of this technology, what it does, and also how it’s evolving.
I often ask the audience in my industry talks whether their studio has even a basic policy in place for AI usage, and many still fail to put up their hand. Regardless of intent, every game studio needs to have AI policies in place. Leadership needs to be able to show they have a grasp of the situation and of how their position reflects the current status quo, and, more importantly, give clarity to their teams as to what that stance is.
But more critically, a no-AI policy, once it is put in place, is enforceable. If it’s in writing and clearly communicated, then this becomes a foundational component of what is considered good practice. You don’t wind up with a situation where someone has secretly been using AI tools as part of their role, and it then causes a headache because there’s no clear indication of whether this was acceptable, or how it should be acted upon when discovered.
Unsurprisingly, to me at least, a lot of the governance work we do comes from indie studios who have zero interest in using these tools.
Shameless Plug: Work With Us on AI Governance
So yeah, as I’ve expressed more than once, this is something we cover in our client work with games studios. I hadn’t planned on turning the end of this issue into a commercial, but it’s clearly a talking point right now and worth raising more publicly.
We host a variety of training sessions on these issues: not just on a technical level, but also the likes of ethical issues, legal considerations, and the broader challenges studios face in constructing and reinforcing AI policy. We also run dedicated consultancy on AI policy adoption, working with all corners of the industry to ensure guidance is in place.
If you’re interested, simply click the image above or the button below, fill in the form and we’ll get back to you!
Wrapping Up
That’s it for this week, thank you for sticking with us to the end. Next week we’ll begin the reader-voted piece on the RAM crisis and a deeper dive into how this all came together. Plus we’ll have more conference talks, and all the other jazz you’ve come to love from this here publication.
In the meantime, my one more thing for this week is our quicklook livestream from last week of indie game Aether & Iron by Seismic Squirrel: a ‘decopunk narrative RPG’ that also has a turn-based car combat system. It came out to very positive reviews on Steam and is sort of the antithesis of a lot of the Generative AI hype, with fully hand-drawn art and over 200 human-voiced characters. I had accepted the press key for the game given I was interested in the car combat aspect, but if you’re a fan of games like Disco Elysium and Dispatch this is definitely going to be up your alley.






