Hooded Horse ban AI-generated art in their games: “all this thing has done is made our lives more difficult”
Manor Lords and Terra Invicta publishers Hooded Horse are imposing a strict ban on generative AI assets in their games, with company co-founder Tim Bender describing it as an “ethics issue” and “a very frustrating thing to have to worry about”.
“I fucking hate gen AI art and it has made my life more difficult in many ways… suddenly it infests shit in a way it shouldn’t,” Bender told Kotaku in a recent interview. “It is now written into our contracts if we’re publishing the game, ‘no fucking AI assets.’” I assume that’s not a verbatim quote, but I’d love to be proven wrong.
The publishers also take a dim view of using generative AI for “placeholder” work, or indeed any ‘non-final’ aspect of game development. “We’ve gotten to the point where we also talk to developers and we recommend they don’t use any gen AI anywhere in the process because some of them might otherwise think, ‘Okay, well, maybe what I’ll do is for this place, I’ll put it as a placeholder,’ right?” Bender went on.
“Like, some people will have this thought, like they would never want to let it in the game, but they’ll think, ‘It can be a placeholder in this prototype build.’ But if that gets done, of course, there’s a chance that that slips through, because it only takes one of those slipping through in some build and not getting replaced or something.” As an example of this slipping-through, see the accidental presence of a generated placeholder in The Alters.
Hooded Horse employ two artists for marketing, and Bender feels “it would be a betrayal of them to work with anything that is using gen-AI art, like, I wouldn’t be able to face them if we had that right. And we’re absolutely committed ethically, for all the reasons you know, against this.”
As with much discussion of generative AI, the difficulty of Hooded Horse’s position is pinning down what, exactly, they’re trying to ban. Does an artwork count as generated if somebody used the tech to make a base image of some kind, then fleshed it out and finished it off at length by hand? In general, genAI has become pervasive and hard to quantify across the divisions of game development, thanks not least to the deliberately imprecise, all-enveloping language of the companies selling it. There’s also the gloomy underlying reality that a lot of genAI tools are being forced into operating systems and search engines in order to artificially boost “uptake” – it’s hard not to use genAI, at this stage.
I write all this not to suggest that bans are useless, but that they need to be more detailed and specific. In general, whether they favour the tech or not, companies need to start talking in depth about their uses of generative AI. As a small and slightly weird example from the world of education, James Allen’s Girls School in South London recently unveiled an AI charter, stipulating what they consider fair game both for teachers and for pupils.
I’d love to see similar documents from publishers, though I’m not expecting miracles in a world where the largest PC storefront’s approach to generative AI disclosures is practically useless. But still, perhaps Larian will surprise me at their imminent Q&A about generative AI usage in the forthcoming Divinity.
Bender acknowledged the trickiness of policing generative AI when working with external partners, while reiterating the need for a blunt ban. “The reality is, there’s so much of it going on that the commitment just has to be that you won’t allow it in the game, and if it’s ever discovered, because this artist that was hired by this outside person slipped something in, you get it out and you replace it. That has to be the commitment. It’s a shame that it’s even necessary and it’s a very frustrating thing to have to worry about.”
As an example of the difficulty of policing generative AI usage by third parties, consider the discovery of AI-generated material in trailers for Postal: Bullet Paradise, which has now been cancelled – developers Goonswarm have attributed this oversight to external artists.
The minutiae of different genAI applications aside, the broad cultural or political case against generative AI has, IMO, remained pretty consistent (yes, this is the point in the ‘news post’ where I deliver the customary lecture, please skip to the comments if you are weary of such things). The biggest bots operate by parasitically appropriating and undermining the work of theoretically any and all human workers, whether or not you define this as actual theft or copyright infringement. Their actual utility beyond the grand promises is broadly unproven, and in some cases, obviously non-existent.
Being companies, the companies pushing them aren’t following the public interest: they are helping to drive up energy usage and emissions, and saturating daily life with “companion” technologies that may distort online discourse and in some cases, contribute to mental illness. Within the games industry, the tech is viewed by many executives and shareholders as a quick fix for unsatisfactory profit margins, and a tacit excuse for cutting staff.
That doesn’t make every usage of generative AI abominable – I can understand the appeal for smaller devs with minimal resources, and I am myself interested in art that makes genuinely imaginative and substantial use of generators of all kinds, rather than just calling on them to “enhance productivity” – providing those technologies are responsibly operated.
But generative AI needs to be understood as a form of class war that will principally benefit the already-filthy-rich, and which is fundamentally anti-social in encouraging greater reliance on profit-driven corporate tools rather than solidarity with coworkers and peers.
Lecture over! For a more open-ended and nuanced discussion of generative AI in games, here’s Sam Horti’s recent excellent catch-up with a brace of developers, including 80 Days and Thirsty Suitors writer Meghna Jayanth and Failbetter’s narrative director Chris Gardiner.