Finding The Voice with Lingotion | 10/12/25
We chat with Andreas Rodman on ethical AI voice models + our final BTS Update of 2025
The AI and Games Newsletter brings concise and informative discussion on artificial intelligence for video games each and every week, plus a summary of all of our content released across various channels, from our YouTube videos to in-person events like the AI and Games Conference.
You can subscribe to and support AI and Games, with weekly editions appearing in your inbox. If you'd like to work with us on your own games projects, please check out our consulting services. To sponsor, please visit the dedicated sponsorship page.
Hello one and all and welcome back to AI and Games. For this week’s issue we dig once again into the conversation on AI voices in game development and the broader issues surrounding it. This time around we’re chatting with Andreas Rodman, CEO and founder of Lingotion, who are one of the first companies moving forward in the space of AI voice models for game development.
We’ll dig into the company and their experiences in this space in a moment, followed by our final behind-the-scenes update for premium subscribers. But first, some announcements!
Follow AI and Games on: BlueSky | YouTube | LinkedIn | TikTok
Announcements
A couple of quick announcements before we get into the meat of this issue.
New Case Study - Amnesia: The Bunker
I’m thrilled to say that now that the hustle and bustle of our 2025 conference is concluded, we’re back doing what we do best, and that’s deep dives into AI in game development.
This week we launched our final case study of the year: the making of Amnesia: The Bunker, a beloved 2023 horror title developed by Swedish games studio Frictional Games. We dig into the design of their enemy antagonist, the ‘beast’, and how the design of the AI systems emerged thanks to an interview with creative and design leads at the studio. It’s a cracker, and a great episode to be ending the year on. You can check out the video above, or the written version at the link below.
In the meantime we’re already hard at work on our case studies for January and February. Our first case studies for 2026 include designing AI for another horror game, plus an interview on what is potentially one of the coolest AI applications in game development I’ve seen in years, and I’m looking forward to sharing them with you. Premium subscribers can find out more about what’s in store later in this issue.
Speaking of premium subs…
50% Off Annual Subs In December
As mentioned in last week’s issue, we’re running 50% off of our annual subscriptions until the end of the year. By subscribing you get a bunch of benefits, including access to the second half of this post, plus early access to new case studies and other updates over in our Discord server.
Just head over to this link or click the button below. Every little helps us continue our work and expand. A big thanks once again to all the folks who support us - it’s very much appreciated and helps this newsletter continue to exist!
2025 Wrapped Livestream Next Week
Our annual ‘wrapped’ livestream will be hosted on the AI and Games YouTube channel next week on Friday 19th December. Shraddha and I will be talking about the year we’ve had: the highs and lows of running AI and Games, the drama of the year, the stories we’ve unpacked, the YouTube episodes we’ve worked on, plus an audience Q&A and a little hint of what we have planned for 2026.
Timing and the stream link will be announced in next week’s issue, but for now be sure to subscribe to AI and Games on YouTube if you don’t already to see it appear in your feed!
Lingotion’s Approach to AI Voice Acting
Recent issues of this newsletter have discussed how AI-native games - titles that utilise generative AI - are going to be increasingly reliant on the use of voice actors. Last week we chatted with the team at Meaning Machine about their efforts in building games that rely on large language models (LLMs). Games such as Dead Meat cannot rely on a traditional approach to voice acting, where all of the lines are prepared in advance, given there’s no guarantee what the LLM will generate in the moment. Hence there is a push towards adopting voice synthesis, where a human voice actor’s performance is modelled by an AI to create the desired output.
But while this is a step towards a new technical innovation, there are broader ethical considerations afoot. How does a voice actor get compensated? What rights does a voice actor have in the usage of their data? Are they even involved in the process of the creation of the voice models? Or the voices crafted using said models? Where and how does that voice wind up being used?
Many of these issues came to the fore in my recent piece on the adoption of generative AI in Embark’s extraction shooter Arc Raiders - an approach I would consider exploitative given the game does not need this tech to service its broader design ambitions. There is increasing concern about not just the rate of the technology’s development, but how quickly companies across various creative sectors are beginning to embrace it, at a time when voice actors themselves are still trying to figure out where they exist in this new era.
So it felt timely that earlier this year I met Andreas Rodman, who is the CEO and founder of new AI start-up Lingotion. We met back in June at this year’s Summer School on Artificial Intelligence and Games over in Malmo, Sweden - which, for clarity, is an event run by friends of AI and Games but not affiliated with us.
Lingotion is a small company based out of Sweden that is working to provide a platform for AI voice and facial animation solutions for games, but with an emphasis on not just the technical aspects - of being expressive and versatile for deployment - but also in providing ethically based and legally compliant solutions that work with voice actors.
Now, given I spend a lot of my time at events talking about the legal and ethical considerations of generative AI in games, it was both surprising - and dare I say, refreshing - to hear this coming from a company investing in the space. So I figured let’s have a chat, and find out what Andreas had to say on the state of AI voice acting in game development.
Building an AI Voice Model Company
While Lingotion started in earnest in 2023, Rodman had begun exploring what would eventually become this new venture as far back as 2019. Having dabbled with machine learning and deep learning in previous businesses, he “for fun, bought a bunch of GPUs, built into my garage, started learning stuff”, and critically began exploring how, if at all, a machine could interact with users more expressively through emotion.
But even at this early stage, there was an interest in finding good data. While you could go the route of stealing the data from movies or YouTube videos - which is very much in vogue with big AI companies - Rodman felt the best results would come when “you can bring in an actor and I can make properly notarised labelled data”. By working in collaboration with actors, he would learn not just how to get the best data for the models, but also the broader challenges of getting the best performances, and what this would mean in the long run.
As Rodman expressed it, “I wondered if you can do this really legally safe?”
Building the technology took some time, with Rodman admitting that the early iterations were “really crap in the beginning”, and that at the time he “still had no idea of using this as actually a real business”. It was only as things evolved and began to take shape that the thought crossed his mind of whether it could be useful in games: “NPCs [become] very repetitive. It’s very boring in open world games where you play the main storyline and the world becomes dead”.
It seemed prescient given the hype surrounding generative AI was kicking off just as Rodman’s tech was beginning to shape up in 2023. But while there was an appetite for generated content, he noted:
Andreas Rodman: People don’t really look at what is required to make this work from [a] business point of view. First of all, they don’t look at all of the legal side, but there are other things like, everybody was still doing SaaS [Software as a Service]. Everybody’s fixing [on] cloud and services. And I was like, ‘yeah, you can use that for generating assets. You can do that. But if you want to do real time, it doesn’t work business wise.’
So the push was to address the need to do these things on device (again, a big topic in 2025), minimising server costs for AI models. These local models also need to be performant (with many players “playing on pretty crappy PCs”), and all of it needs to work in a way that makes legal and ethical sense.
Perhaps unsurprisingly, when Rodman started pursuing investment, Lingotion hit their seed investment goals within two weeks.
Working With Actors
While our focus in this article is on voice, the team has also invested heavily in facial expressions. However, when reaching out to games studios early on, the feedback was to focus more on the voice than the face. Even though they have continued to iterate and rework both tech stacks in parallel, the consensus from AAA studios was “get the voice really good first, and then add the facial later”.
This led to significant effort in figuring out how to gather the right data: a mixture of understanding the technical aspects of training small models for local deployment while maintaining emotional delivery, and ensuring the process works for voice actors.

As Rodman described it, rather than building voice cloning systems by pulling in lots of disparate voices and mashing them together, the focus from the beginning was on how much effort and energy would be required from a single actor. The initial voice actor hired by Lingotion for the prototype recorded over 20 hours of material. This process has since improved, given actors with similar dialects can help bootstrap the model, and while their research is ongoing, the emphasis is on capturing each actor’s uniqueness - with current research exploring how to capture specific dialects in each language.
“Right now we’re capturing around 60 emotions for an actor, and then we add in speaking styles at different control levels”, Rodman explained. The ambition is to bring this down to a few hours of recording per actor.
But I was curious about what that recording process is like, and the dialogue - pardon the pun - between Lingotion and the voice actor. There is a process being built here, and by having actors in for hours upon hours of sessions, how much input from the actors is being received?
Rodman: We are working with actors as we’re looking at: what data should we record? How should we record it? Is this capturing a good emotional range? Are these combinations sensible? I mean, those kinds of things. We have a very tight dialogue with actors on this, and we get a lot of feedback from the actors, a lot of stuff.
So it’s very much a dialogue. And then it comes back to the AI model itself: what is the AI model you’re going to require, and how do you need to parameterise this to make sure that the models understand it so they can capture it?
The Compliance Aspects
The process behind Lingotion’s creation of AI voice models and its broader library ran in tandem with considerations of how all of this should work. Rodman stressed that during that early process with their initial voice actor, as those 20+ hours of recordings took place, there was an opportunity to figure out the actor’s perspective, and how the company could be built around it.
“I said to them, let’s put the actor in the centre.” As such, Lingotion has moved not just to build the technology, but also to act as something of a talent agency.
Now I was curious what frustrations or resistance the team had faced with this. After all AI is proving highly disruptive in many business and creative sectors despite not being all that great at the intended applications. Unsurprisingly, it’s been a mixed response.
“So it’s very polarised. So either actors love us or they hate us because we use the word AI”, laughed Rodman. Some actors dismiss them outright, while others are excited at the prospect, given it offers a solution to the nightmares generative AI has created: a new avenue for royalties and recurring revenue outside of the existing routes, but with control over how their voice is being used.
I dug a little further into both of these aspects, first being curious as to what the compensation process is like. It was clarified that actors get fixed royalties based on the level of prominence of their AI voice in a given work (e.g. foregrounded ‘named’ NPCs versus background characters), as well as the option for a larger down payment on the adoption of their voice for a given project.
Rodman: When we get royalty, a bit of that royalty gets - if there was a down payment in advance - sent directly to them, and once they have paid off the entire down payment, then more royalty goes to them. And that was actually a model that means the actors don’t feel like they’re taking a risk with us.
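To make the recoupment model Rodman describes concrete, here is a minimal sketch in Python. The function name, the 50/50 recoupment split, and all the figures are my own illustrative assumptions, not Lingotion’s actual terms: while a down payment is outstanding, part of each royalty goes towards paying it off, and once it is recouped the actor receives their full share.

```python
def split_royalty(royalty: float, advance_remaining: float,
                  recoup_share: float = 0.5) -> tuple[float, float]:
    """Split one royalty payment while an advance (down payment) is outstanding.

    Returns (amount paid to the actor, advance still remaining). While the
    advance is being recouped, a share of each royalty pays it down; once
    fully recouped, the actor receives the whole royalty.
    """
    if advance_remaining <= 0:
        return royalty, 0.0  # advance fully recouped: full royalty to the actor
    recouped = min(royalty * recoup_share, advance_remaining)
    return royalty - recouped, advance_remaining - recouped

# Illustrative run: a 1000-unit advance recouped over successive payments.
advance = 1000.0
payouts = []
for payment in [400.0] * 6:
    to_actor, advance = split_royalty(payment, advance)
    payouts.append(to_actor)
```

Under these invented numbers the actor still receives money from every payment during recoupment (just a smaller share), which matches the spirit of “a bit of that royalty gets sent directly to them”.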
But on top of this was the level of control the actor has on their voice work. Once a voice model is created, and Lingotion is essentially acting as an agent, what control - if any - does the actor have over how their voice model is used and what companies get to use it?
Rodman explained that they have a variety of rules and regulations surrounding this. First, actors are assigned ‘ethics levels’ for their work, with the most liberal saying “use my voice wherever we want”. “Then we have more balanced. It’s like, okay, you can use it as long as it’s not for, like, bad guys or somebody like that.” There are additional rules explicitly stating whether a voice is allowed to be used in adult and sexual content.
These rules are flexible for actors to change as they wish, and Lingotion will often seek clarification from the actor for specific edge cases, or even open a dialogue between the actor and a potential client should the external company really want a voice the actor is reluctant to provide. All of this is properly recorded for future auditing purposes.
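As a rough sketch of how per-actor usage rules plus an audit trail might fit together, here is a small hypothetical Python model. The level names, fields, and decision logic are my own invented approximation of what Rodman describes, not Lingotion’s actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical ethics levels, from most to least permissive.
LIBERAL, BALANCED, STRICT = "liberal", "balanced", "strict"

@dataclass
class VoicePolicy:
    """Per-actor usage rules, changeable by the actor at any time."""
    ethics_level: str = BALANCED
    allow_villains: bool = False       # e.g. "not for bad guys"
    allow_adult_content: bool = False  # explicit adult/sexual content rule
    audit_log: list = field(default_factory=list)

    def permits(self, use: str) -> bool:
        if self.ethics_level == LIBERAL:
            decision = True
        elif use == "villain":
            decision = self.allow_villains
        elif use == "adult":
            decision = self.allow_adult_content
        else:
            # Strict policies escalate everything back to the actor.
            decision = self.ethics_level != STRICT
        self.audit_log.append((use, decision))  # recorded for later auditing
        return decision

policy = VoicePolicy(ethics_level=BALANCED)
policy.permits("villain")    # denied under a balanced policy
policy.permits("narration")  # permitted
```

In a real system a denial would presumably trigger the clarification step described above rather than a flat refusal; the point here is only that the rules are per-actor, editable, and logged.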
More to Come in the Future
It’s clear that the situation surrounding AI voice models in games is going to continue to evolve, and as I said in our recent issue on Arc Raiders, a precedent has been set for more of this work to appear in future titles. It strikes me that Lingotion will be right at the heart of this as companies start to look for an external vendor to cater to this.
Whether this proves of value to actors, and how the sector broadly reacts to it, is something we’ll no doubt discover in the coming years.
Behind the Scenes Look at 2026
Okie dokie, that’s our final interview for the year in the bag. Thank you so much to Andreas Rodman for the conversation. It was really interesting.
To wrap things up this week it’s time for our final update for premium subscribers as to what’s going on behind-the-scenes. So let’s dig into:
Our end of year plans.
Future newsletter topics.
Upcoming episode of our YouTube series.
Other AI and Games projects, such as Goal State and our 2026 conference.
And more!
As we said earlier, sign up on that awesome Xmas deal to join us below the paywall as we talk about all of this stuff and more!