WB Discovery Sues Midjourney, Diablo Devs Unionise, and Anthropic Settle! | 10/09/25
PLUS: an early look at our full 2025 conference programme for premium subscribers
Our monthly sponsor issue of the newsletter provides what you’ve come to expect every week from AI and Games, plus a deep-dive into what’s to come that is exclusive to our premium subscribers: ranging from future newsletter topics to YouTube episodes, new projects, our conference planning and much more.
Good morning and welcome to this week’s edition of the
newsletter. As we can see from the top, Batman is chilling as summer reaches its end. And that brings us to what we have to look forward to this week. We’re diving into the news, including:
The first announced talk at the AI and Games Conference 2025!
Diablo’s dev team unionises in part due to AI.
Anthropic settles in their class-action lawsuit to the tune of $1.5 billion!
Warner Bros has joined the lawsuit party against Midjourney.
Plus for premium subscribers
A first look at the full programme for the AI and Games Conference 2025.
A deep-dive into case study topics you can expect to see before the end of 2025!
Upcoming newsletter topics for free and premium subscribers.
Okay, let’s go!
Yeah, that’s an image of Batman made using Midjourney. No I didn’t prompt the model for it. I’m happy to steal it from a random person on the internet. After all, they don’t own it do they? Mwa ha ha…
Follow AI and Games on: BlueSky | YouTube | LinkedIn | TikTok
Announcements
Some quick announcements related to the AI and Games Conference.
Decision Emails Have Been Sent
If you submitted a talk to us for this year’s conference and are unsure of its fate, be sure to check your inbox for an email from our event manager Sally Kevan. All emails will have reached inboxes by Monday morning of this week. Once again, a huge thank you to everyone who submitted to us.
Early Bird Tickets Close on Friday
Early bird tickets are selling out fast! The indie developer passes have sold out, but regular tickets are now live. Meanwhile, at the time of publication, we have two (2) industry tier early bird tickets left! Plus we still have a batch of student and professional tier tickets left.
As a reminder, early bird pricing closes on Friday, September 12th, at which point all regular tickets go live. So now is the time to grab ’em!
The First Talk Is Revealed!
I figured why not whet your appetite for what’s to come with our first talk announcement! We’ll be posting on social media every couple of days with the 30+ talks we have scheduled, but for now let’s start with something unexpected.
A retro post-mortem, with the AI of Dungeon Keeper!
Ian Shaw joins us to wind the clock back to 1997 and deliver a post-mortem on one of the most beloved PC games of all time, and one of the earliest examples of emergent game AI.
No lie, the organisational team squealed when they saw this pitch come in. Many of us, myself included, have put a lot of time into Dungeon Keeper over the years. I can't wait to see this.
If you can’t stomach holding out until next week to see what else we’ve got planned, become a premium subscriber: I’ll be revealing the full list (subject to change, of course) at the bottom of this issue!
AI (and Games) in the News
A quick round up of relevant headlines from the past week or so.
Diablo Team Unionise to Protect Themselves from AI - Among Other Things
It’s always great to see another games team unionise - we’re big fans of collective action around here. As reported by Aftermath, the Diablo development team (or Team 3, as they’re known) at Blizzard has unionised with the Communications Workers of America (CWA). While the move is intended to address a number of issues, including the push against working from home and the “passion tax” of being paid less to work on projects you’re passionate about (a recurring problem in games), one of the big motivators was AI.
After all, Blizzard is owned by Microsoft, which has not only gone to great lengths to go all in on AI, but is laying off swathes of developers across the organisation in pursuit of somehow making its AI services financially sustainable and eventually profitable (I don’t have high hopes for this). Meanwhile, as discussed in the article, there’s increasing use of generative AI in the art corners of the studio to facilitate production. As such, it’s important that the developers have a formal means to address their grievances. Best of luck to them!
A Change of Leadership at DSIT
In recent months, as I’ve dug deeper into the legal situation surrounding AI in the UK, I’ve discussed the actions and proposals of the UK government’s Department for Science, Innovation and Technology (DSIT), led by Peter Kyle. I’ve had numerous issues with Kyle’s handling of the proposed AI legislation. He has been shown to happily engage with major AI companies and offer them the moon - including the ludicrous suggestion of every UK citizen getting ChatGPT Plus on the government’s dime - while actively avoiding talks with the creative sectors whose works are being pillaged. Never mind the atrocious rollout of the UK’s Online Safety Act, which he defended by stating that opponents of its implementation were on the side of ‘extreme pornographers’. Sure, the ‘other side’ in that particular argument was Nigel Farage of all people, but it’s a nice reminder that you can’t have nuance in a political debate - like I needed a reminder of that these days…
Well, the recent departure of Angela Rayner as the UK’s deputy prime minister has led to a cabinet reshuffle, with Liz Kendall, the former Secretary of State for Work and Pensions, assuming the role at DSIT. I’ll be interested to see her perspectives on these issues in the coming months. No statement has been given thus far, so we’ll see whether it’s business as usual or a deeper understanding of the issues manifests. Meanwhile Kyle, having been appointed Secretary of State for Business and Trade all of 5 minutes ago, has already declared he’s off to China. Just can’t tie this lad down…
Anthropic Asks to Settle Class-Action Lawsuit for $1.5 billion!
Hoo boy, and there was me thinking Warner Bros was the big story this week (see below). After arguing for months that they were not in the wrong to train against millions of books for their Claude LLM, Anthropic found themselves in a pickle back in June. While Judge Alsup stated that the use of the books to train Claude falls within the realms of fair use, the bigger issue was that the company was liable for theft, having sourced those books from a pirated book library - a library of 7 million books, which could lead to damages of up to $150,000 per book, amounting in the worst-case scenario to a fine of around $1 trillion.
With the court case mere months away, Anthropic have now put forward a proposal to pay $1.5 billion in damages for the 500,000 books identified as having been used in training, making this “the largest copyright recovery in history and the first of its kind in the artificial intelligence era”, per the plaintiffs’ lawyers.
This amounts, per Reuters reporting back in May, to around half of Anthropic’s revenue for the past year. Bear in mind that they also have operational costs to host their AI models, and to train them too, so I suspect their accountants won’t be too pleased when this all wraps up.
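For a sense of scale, here’s the back-of-the-envelope arithmetic behind those figures - a sketch using only the numbers reported above (the per-book settlement figure is my own calculation, not one from the reporting):

```python
# Figures reported in the Anthropic case, as covered above.
statutory_max_per_book = 150_000      # maximum statutory damages per work (USD)
pirated_library_size = 7_000_000      # books in the pirated library

# Worst-case exposure if every book drew the maximum penalty.
worst_case = statutory_max_per_book * pirated_library_size
print(f"Worst case: ${worst_case / 1e12:.2f} trillion")  # ~$1.05 trillion

settlement = 1_500_000_000            # proposed settlement (USD)
books_in_settlement = 500_000         # works identified as used in training

# What the proposed settlement works out to per identified book.
per_book = settlement / books_in_settlement
print(f"Per book: ${per_book:,.0f}")  # $3,000 per identified book
```

In other words, the proposed deal values each identified book at $3,000, a far cry from the $150,000 statutory maximum per work.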
Now the thing is, this doesn’t close the book (no pun intended) on this story, given that by settling the court case, Anthropic can still be sued by others for infringement. It speaks to a fundamental point: while it looks like US law is going the route of permitting the use of copyrighted materials in AI training, you’re still potentially liable for how you procured the data. So if you stole a bunch of copyrighted works, that’s the new attack vector - and this settlement is something of an admission of guilt with a shiny $1.5 billion price tag on it.
The thing that still gets me every time I read about this, is that one of Anthropic’s largest investors is Amazon. Surely, Jeff Bezos could’ve snuck them a discount on buying every book that exists in the Kindle store? Surely!?!?
Slop on the Showfloor
Two things caught my attention in the past week with regards to trade shows - notably Gamescom in Europe and PAX West in Seattle - both suggesting that people are becoming more aware of the use of generative AI in the games on display.
First up, I saw Will ‘VikingBlonde’ Overgard linking to a spreadsheet of games identified as ‘Gen AI Slop’ - meaning people are actively calling it out as they spot it. Meanwhile, Alex Donaldson’s piece on Eurogamer covers AI popping up in games on the show floor at Gamescom, and the mixed feelings he has on the subject.
It speaks to a number of issues here that the industry, and consumers, will have to deal with in the coming years:
The growing apathy among corners of the sector towards any game that uses generative AI.
Critically, people will now more readily scrutinise your game for what they perceive to be a degradation of quality brought on by AI.
A fundamental lack of understanding of how AI is used in games.
Not to call Donaldson out, but in his article he shares his thought that “AI tools are inevitably going to become an indelible part of game development”. This perspective is at least a decade out of date.
I am going to give him the benefit of the doubt in that he means generative AI in this instance, but once again it speaks to a lack of knowledge and nuance on the issue.
Another Studio Boss Shares Their AI Take!
I don’t plan on sharing every instance where a studio boss says generative AI is great, but this one felt really tone-deaf to me. Tim Morten at Frost Giant Studios told Game Developer that he felt AI would help achieve the studio’s vision for their game Stormgate. To quote the article:
"But what I do want to do is fulfil these grand visions. Stormgate is a grand vision for a game—I want tools that help me be able to do that. Stormgate's challenge, like so many other games, is having the funding to cover the surface area that matches the vision. AI is a way for developers to be able to do that better."
I mean, again, this is a take driven by someone conflating all forms of AI with each other. But also, it’s a real nothingburger of a quote. I find it speaks to this unflattering vision we have of games auteurs: AI is now a means to avoid making compromises on their vision, rather than finding ways to express themselves artistically within the constraints of technology and budget - something video games have been doing since the inception of the medium.
Some will succeed, most will fail, but it’s not often a studio lead outright says the quiet part: that in pursuit of their vision, they’ll use AI rather than hire talent.
Warner Bros Joins the Lawsuit Party
The big news story this week is that Warner Bros. Discovery (gotta love the terrible names made up when you mash corporations together) has joined the legal battle against AI company Midjourney. Much like Disney and NBC Universal before them, they accuse the AI company of letting people generate images that flagrantly violate their intellectual property.

As we discussed back in June when the Disney/NBC lawsuit dropped, a big part of this isn’t just that these image generators have been trained using copyrighted assets, but more critically that they continue to enable users to generate IP-violating material. Images such as the one shown above are cited in the lawsuit itself, highlighting that the Midjourney interface allows you to readily violate copyright with zero effort.
I can’t help but wonder, given we now have several of these lawsuits happening, why Midjourney haven’t made any effort to shore this up. While these corporations are unhappy that their assets are being used for training, it’s becoming increasingly likely that the use of such material to train AI will be considered fair use in US copyright law, following the aforementioned judgement in the authors’ lawsuit against Anthropic earlier this year. But the one thing Midjourney can control is what the system generates.
As I discussed in the issue on the Disney lawsuit, the likes of OpenAI build safeguards to try and stop people making copyright-infringing material on the GPT platforms. I suspect that’s because they realise this is the area where they’re most likely to get into big legal headaches - or rather, additional legal headaches. I can almost understand - read that as ‘I can imagine, if you’re an idiot’ - not putting guardrails on the model when it launched. But after all this time, with the rise of all sorts of slop and lawsuits already filed against you, surely now’s the time to start mitigating some of this.
The Big ‘AI and Games’ Update
Alrighty, time for the fun part: we’re going to dive into what to expect in the coming months across various corners of AI and Games, be it our conference, Goal State, the YouTube, the newsletter and more.
And for this week, the big reveal for premium subscribers is that I’m going to break down all the talks we have planned for the 2025 AI and Games Conference!
As a reminder, this part of the newsletter is for our premium subscribers, but you can sign up right now with 20% off until the end of this month, and read straight past this cheeky little paywall. You can read all about our subscriber drive that’s running right now in our issue from last month!