Are AI games right around the corner? In our second-ever live audience show, host Alexandra Takei, Director at Ruckus Games, sits down with Serge Kynstautas, Head of Engineering at Gardens Interactive, a venture-backed startup building a social online shared fantasy adventure game, and Meir Wasserman, Head of Engineering at 2K Technology. Shooting live from the 3rd annual Pragma Online Services Summit in Culver City, LA, the trio dives into two critical topics impacting games today with their predominantly technical engineering audience: AI and talent. They discuss AI use cases and tool stacks, how they balance context, security, and flexibility, and how to guide an engineering team in a rapidly changing environment. They also discuss engineering talent in the new economy: given pressures to do more with less, are engineering scope and team size being shaved down? Are teams opting for safer, proven tech or looking to innovate? How are engineering leaders bringing along those who may be more reluctant to adopt AI tools in their workflows? That and so much more, including a contentious lightning round and spicy audience Q&A. If you want perspective on the future of AI in games and its cascading effects on talent, this episode is a must-listen.

We’d also like to thank Lightspeed Venture Partners for making this episode possible! With its dedicated gaming & interactive media practice, the firm invests from an over $6.5 billion pool of early and growth-stage capital. If you’re interested in learning more, go to https://gaming.lsvp.com/.

We’d also like to thank Overwolf for making this episode possible! Whether you're a gamer, creator, or game studio, Overwolf is the ultimate destination for integrating UGC in games! You can check out all Overwolf has to offer at https://www.overwolf.com/.
This transcript is machine-generated, and we apologize for any errors.
Alex: What is up everybody, and welcome to the Naavik Gaming Podcast. I'm your host, Alex Takei, and this is the Interview and Insight segment. If we sound a little different today, it's because today's conversation is pretty special. I'm here, boots on the ground in Playa Vista, LA, at the Pragma OSS Summit. For those that will be listening on air, this is Pragma’s third annual Online Services Summit.
We at Naavik are a proud sponsor of the summit alongside Unity, AWS, CurseForge, First Look, Hathora, Raid, k-ID, Rocket Science, and Salsa. And hopefully I missed none of you. This is our second-ever live recording with an audience in Naavik's history, and so we are very excited to be bringing in the spice today.
So crowd, how are we doing? Wave to us. Oh my God. Oh my gosh. Everybody is so happy here in LA. You should come to New York City for a second. It'll darken you. Um, you guys are great. But up on stage with me, of course as James already said, Serge, and I'm gonna blow your last name, Kynstautas. Got it. Yes. Head of Engineering at Gardens Interactive, previously Head of Engineering at Singularity 6. And Meir Wasserman, Head of Engineering at 2K, previously of Amazon Games and Electronic Arts. And I'm very excited to be up here. I was doing some obligatory LinkedIn stuff for our newsletter, which, if you're not subscribed, you can subscribe to our Naavik Digest to keep you up to date on the business side of the industry, you know, the money people. But when I was on LinkedIn, I realized that this summit is described as “this invite-only summit is built by engineers, for engineers shaping online games, services, and technology”. And so, I am not an engineer. And I feel very lucky to be here.
Thank you. Eden, Gideon, James, Serge, Meir, all of you in the crowd, it's awesome. But actually, with the power of Claude Code, which we're gonna talk a little bit about today, I basically am an engineer. And so, you'll see all my apps that won't scale beyond five users. But that brings us to the kickoff for today's panel.
We're gonna talk about two topics: AI in games engineering, and engineering in the new economy. Both reinforce one another, and I'm very excited to get started. But before we do, Meir and Serge, I wanna hear a little bit about the both of you. Serge, we already know that you work at Gardens, a venture-backed video game studio, and Meir, you work at 2K, which doesn't need to be explained.
But beyond that, succinctly, I would love for each of you to answer three questions. The first, in your roles as heads of engineering: what is the best part of your day? And pitch me on why I should convert to being an engineer. Two, what is something you miss about the old days? And you can define that however you want, of games, engineering. And three, tell me about your most hilarious AI upset. When did an AI tool do something you totally weren't expecting? And this will lead into our first topic. Serge, how about you kick us off?
Serge: Okay. So for me as Head of Engineering, what I get really excited about is when I just get to sit back and watch the engineers throw out ideas and collaborate with the artists and the designers and just seeing that synthesis of ideas and back and forth some of what Mike was talking about earlier with this balance between product and technical debate.
Like, what do you wanna be able to do, and what can you do? Seeing that interaction is what makes me really happy, seeing those dynamics. And pitching on being an engineer: do you want to talk about making a game, or do you wanna make a game? Mm-hmm. And I'm glad you're trying Claude.
Talk about it. Talk about it. I think the more people that can get access, whether it's tools like that or some really simple engines that make it accessible, like, anybody can be an engineer. It's not a binary decision, it's a spectrum. So, yeah, push your limits, become an engineer.
Meir: All right. Yeah, first off, thanks for hosting. Thanks, Pragma. And my PSE Serge, who I shared an Uber with yesterday and found out we had a mutual friend that we both adored, and didn't know we knew each other. So, I feel like it's a small industry and you just, you know, find that out at events like this.
Best part of my day is basically just learning and teaching, and I really like learning a lot. And I find I do that almost every day. And then you have opportunities some days where you get to teach someone something that's maybe not obvious, maybe you're helping grow their mindset. And those days are really great.
It beats the paperwork, that's for sure. Why should you be an engineer? I don't know. If you like problem solving, if you like reducing problems down to their constituent parts and, you know, piecing it back together, figuring it out, if you like keeping up with constantly changing technology, you should be an engineer. But I think that skillset applies to basically any type of work, so you're not really stuck in engineering even if you do like those things.
Alex: All right. What's one thing you miss about the old days of games engineering?
Serge: For me, it's the wonder. Like, I remember as a player, the 8-bit sound or the 16-bit graphics, and just like anything that was made, it was just amazing.
And nowadays it's much harder to figure out what the player hasn't experienced before, and how do I create that experience, that feeling. So we're kind of past that tech innovation phase, where it was just amazing what you could get this box of sand to do. So I do miss those days.
Meir: Mm. That's a fantastic answer. I like that one. I wish I had gone first. Sorry. So I cut my teeth making AAA games at EA, working on Madden. I was a gameplay engineer there. And AAA games many, many years ago were the size of, like, indie teams now. And I miss that. I miss that you could have, in a AAA environment, end-to-end ownership over almost the entire code base.
It was really fun. I was filling out one of the bingo cards about an easter egg I added to the game. I won't tell the story here, but that doesn't seem like it's possible in AAA anymore. I'm sure it's rampant in indie, and AA possibly, but yeah, I do miss that.
Alex: All right, now my AI question. Hilarious AI upsets?
Serge: So, two months into Gardens, AI really embarrassed me. This is about two years ago; ChatGPT was pretty new. I had shown up at Gardens, and we were trying to do internal playtests, and nobody knew what was in any build we were doing each week. So I'm like, great, export the Perforce changelists as JSON, and I'll get ChatGPT to summarize them. It was amazing. I then gamified it, and I'm like, what's the best commit and what is the funniest commit? And that worked a couple times. And then the fourth week I did this, I announced it, and again, I'm still kind of new at Gardens, and I read the best commit and who did it, and it's a complete hallucination.
Nobody had made that change. It was a person who didn't even work there. So it's like, okay, I'm killing this project. That's pretty good.
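Editor's note: for the engineers in the audience, the changelist digest Serge describes can be sketched in a few lines of Python. Everything here is illustrative, not Gardens' actual pipeline: the field names are invented, and in a real setup the data would come from Perforce (for example via `p4 changes -l`) before the prompt goes to a chat model. One cheap mitigation for the fabricated-commit problem is to instruct the model to reference only commits and authors present in the data.

```python
import json

def build_digest_prompt(changelists):
    """Build a prompt asking an LLM to summarize a week of changelists.

    `changelists` is a list of dicts with hypothetical fields; in a real
    pipeline they would be exported from Perforce (e.g. `p4 changes -l`).
    """
    payload = json.dumps(changelists, indent=2)
    return (
        "Summarize this week's build from the Perforce changelists below.\n"
        "Pick the best commit and the funniest commit, but only reference\n"
        "changes and authors that actually appear in the data; if unsure,\n"
        "say so rather than guessing.\n\n" + payload
    )

# Made-up example changelists:
changes = [
    {"change": 1412, "user": "rivka", "desc": "Fix respawn timer drift"},
    {"change": 1413, "user": "tom", "desc": "Add fishing minigame VFX"},
]
prompt = build_digest_prompt(changes)
# `prompt` would then be sent to whatever chat completion API you use.
```

Grounding the model in an explicit JSON payload, and telling it to say it is unsure rather than guess, reduces (but does not eliminate) hallucinated summaries.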
Meir: Yeah, I find hallucinations not that funny, sorry. But I do have a story about a friend of mine, a real friend, not a "friend." It's not me.
So when GPT switched to GPT-4, he put his LinkedIn into the tool and said, hey, write me a cover letter. And it spit out that he co-founded Activision. And he's like, sick. That's not right. I did not co-found Activision. Let me reformat and try again. And it came out again.
And he's like, okay, so there are two possibilities here. One is there's an alternate reality of data that this was trained on where I did co-found Activision, and it's reflecting that. Or second, it couldn't possibly believe someone with my resume didn't co-found a AAA game studio. So that's a funny hallucination. Those are really good, actually.
Alex: That actually gets us started on our first topic: the use of AI in games engineering today. We're gonna open with some use cases, then tackle some questions around concerns and innovations. In an article from Pocket Gamer right out of TGS, the TGS organizer claimed that 51% of firms in the region are using AI or generative AI.
The use cases for the tech included producing videos and images, stories and texts and programming support. In our kickoff, both of you told me that Gardens and 2K are actively using AI every day on their engineering teams. And so, I wanna hear from you guys. What are your team's top three use cases of AI so far? Maybe Serge, you can go first.
Serge: Okay, I'll go first again. So the three major areas are community management, data science, and developer productivity. We use GGWP, it's old-school AI, now new-school AI, to track community sentiment, connect that with in-game player behavior, and get a full picture for player support and community management: what's happening on Discord, what is the community feeling, who are the bad actors, what do we need to moderate?
Second one is data science. We've evaluated Databricks and Metabase, and we're going with Metabase. They've got an AI beta tool that lets you take this big stream of telemetry data coming into our platform and makes it easier to work with.
To me, I wanna get the designers able to validate whether their ideas are working as fast as possible. So the more I can remove steps in the process, where you gotta talk to the data engineer and then the data scientist, and then get this implemented, and does this report look good, and the more I can just let them query and try different ideas, even if it's not perfect, that's gonna be really great.
And last is just developer productivity. I think a lot of us are there. You're using Claude Code? We try lots of different tools.
Alex: No, I'm not really using Claude Code.
Serge: Yeah. So those are the major categories, I think, just pretty traditional.
Meir: Okay.
Serge: What about you?
Meir: Yeah, I'll just put out an industry observation, which is to say, I think the second thing Serge mentioned is super cool, which is natural language querying of data sets.
Access to data right now is a bit of arcana, you know. Even if you're a really talented software engineer or server engineer, you may not be super good at getting at the data you want to understand your customer better. And so I think NLQ is subject to all the same limitations that LLMs have, right?
It can hallucinate, it can give you the wrong stuff, whatever. But I think it does give everyone access to data, and that will help us understand what's going on in our systems better, and also what our customers are doing. And I'm pretty excited about where that heads as the LLMs mature.
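Editor's note: the hallucination caveat Meir raises is why most NLQ setups put a guardrail between the model and the database. A minimal sketch of that pattern in Python, with a canned "generated" query standing in for real LLM output; the table and query are invented for illustration:

```python
import sqlite3

def run_readonly_query(conn, sql):
    """Run LLM-generated SQL only if it is a single read-only SELECT.

    A crude guardrail: real NLQ products add schema grounding, row
    limits, and query review, but the principle is the same. Never
    execute generated SQL blindly.
    """
    stripped = sql.strip().rstrip(";")
    if ";" in stripped or not stripped.lower().startswith("select"):
        raise ValueError(f"refusing to run: {sql!r}")
    return conn.execute(stripped).fetchall()

# Toy telemetry table, plus a query an LLM might produce from
# "how many sessions did each player have?"
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (player TEXT, minutes INT)")
conn.executemany("INSERT INTO sessions VALUES (?, ?)",
                 [("ana", 30), ("ana", 45), ("bo", 12)])
generated = "SELECT player, COUNT(*) FROM sessions GROUP BY player ORDER BY player"
print(run_readonly_query(conn, generated))  # [('ana', 2), ('bo', 1)]
```

A destructive query like `DROP TABLE sessions` is rejected with a `ValueError` instead of ever reaching the database.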
Alex: Alright, so in the third thing that you said, developer tools, you know, you mentioned GGWP, old school AI or new AI, and then, but in the third, in the third category, you know, you mentioned cloud code. Tell me a little bit about like the actual AI, the, I guess the code assistant AI suite that you actually are using, and how are you figuring out, like standardizing that across your team, and how do you guys work on that at guards?
Serge: So, we're very much in an experimentation phase. What I do is give people a budget to optionally try out these tools, so I think almost every product has been used: Codeium, now Windsurf, one engineer's using that, someone's using Claude Code, someone's using Copilot, someone just uses ChatGPT.
Most engineers use the IntelliJ suite, so they're using JetBrains AI. What this does is it allows us to evaluate many different tools and figure out what they're good at, what they hallucinate at, and ultimately what stage of the project each is really useful at. So I'm not really focused on standardization at this point, more on cross-training and experimentation.
There's so much innovation happening, so much that changes, that I think it's just about helping the team learn, for those who are interested in doing it. It's a really powerful tool, and it helps with discovery and climbing the learning curve, to really understand code bases you haven't seen before, or to be able to quickly iterate, especially for backend groups who are polyglots working in many different programming languages. You haven't updated that Helm chart in a while, so you gotta get a quick version of that, and then jump over to some Kotlin, and then write that bash script. It's really helpful for moving somebody around between all the different areas of tech they want. But yeah, we're more focused on experimentation and just keeping up with the innovation at this point.
Alex: Hmm, I see. Any responses? Similarities?
Meir: In terms of consolidation? Yeah, so one thing I've noticed is that the way you used to build games, or I think we still do, is you play with a bunch of different tools, then you lock one in, make sure your pipelines are working, and you can go from there.
AI tools don't really fall into that schema very well, because the pace of change is incredibly fast. I think Mike was saying that earlier, actually. It's just too fast to say, we're gonna lock down this tool now, when six months from now, whatever the tool does could be baked into the model.
Right? You wouldn't even need that tool anymore. It might just be part of whatever the underlying model becomes, or the technology has progressed so fast that that tool's not as effective as another one. So industry-wide, I see this as kind of a problem: it's harder to lock in and standardize AI tools because of the pace of change. We just have to be open to that and think about how we develop games, not just which tools we're using.
Alex: Yeah. And we're definitely gonna talk about stability in this section, and how you guys are managing and thinking about that. But you mentioned a couple tools here, like Cursor, Claude Code, Copilot, et cetera. And I guess one of the first questions and concerns is context.
Sometimes there can be tens or hundreds of documents that you need to fully wrap your head around, all the systems and information you need to know as an engineer. So what are you guys doing to ensure that your AI tools have effective context for your team? And how do you think about making sure that that's secure?
Meir: You're looking at me, so I'm gonna talk, okay. Context matters for LLMs a lot, and I know it's not the question you asked, but I just wanna lead with this: if you ask an LLM, write me a story, who knows what you get? If you give it a lot more information, you'll get something closer to what you're hoping for.
That's just how the underlying technology works, and the same is true when it's helping you code or create art or whatever it's doing. So you have this incentive to provide a ton of context in order to get the best possible outcome. Some of that's the data it has access to; some of it's the prompt you use.
It's not just diminishing returns; there's kind of a curve where too much context reduces accuracy, but I don't think we generally have that problem. Part of the problem, though, is that when you provide all this data, you have to think about what the company that I'm providing this data to is doing with that data.
And so, I have an example. Again, a friend, not me, not a "friend," but a real friend at a different company. They used some slush art, extra art lying around from a project that never kicked off, and they were trying to see if they could get an LLM to help create product kickoff ideas, game ideas. So they put the art in, and they had a bunch of text as well from the GDD. And they got it working, and they actually liked it, and it went so well that at the company, they're like, oh, we actually kind of want to make this game.
We actually think this could be something. So that's a good outcome, except the art they used was now given to the third-party company, and everyone had access to it. So then it felt like, we can't really do that anymore, we can't make the game with this art anymore, and we have to, you know, pivot, even though we originally owned the art. We used it to train the model, and now that's gone awry. So there's this balance you have to think about, not just how do I give it all the context. If it's an enterprise contract, that's generally pretty safe, your data's segregated.
But if it's not, if it's just a tool you're using, you have to understand what's gonna happen with the data you give to the tools, so that you know how to protect your business or your game or your ideas as a consequence.
Alex: Yeah, and I'd be curious about your reaction to that, sitting at a different scale, you know.
I presume that you guys have responsibilities to set boundaries between AI access and system integration, you know, proprietary information that you're giving to potentially a public engine. Does that differ for you a little bit, being on the startup side versus being at a large company like 2K?
Serge: So, less from a compliance or security concern. But I think what we're learning is, past vibe coding, there's agent coding, where you're creating much more structured roles for the AI bots and giving them much more targeted context. From there, you're creating sandboxes that limit how much they can hallucinate, by giving 'em very accurate roles and very accurate context.
So it's similar in that we really try to think about it as Meir said: you can't give them too little, and you can't give 'em too much, but give them really the right amount of information. I mean, it's kind of like developers. You give them some documentation, but not every single bit of documentation and the 12 versions of it; you give them the right documentation, and the bots can achieve a lot that way.
So I do think about it, but more from an efficiency standpoint, being able to reach the right answer sooner.
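Editor's note: the "accurate role, accurate context" sandboxing Serge describes can be sketched roughly like this. The naive keyword scoring below is a stand-in for whatever retrieval a real agent framework would use, and the doc titles, role string, and task are all invented for illustration:

```python
def build_agent_context(role, task, docs, max_docs=2):
    """Assemble a narrow prompt: a specific role plus only the docs
    that overlap the task, rather than the whole wiki.

    Naive keyword overlap stands in for real retrieval; the point is
    the pattern: accurate role, accurate context, nothing extra.
    """
    task_words = set(task.lower().split())

    def score(doc):
        return len(task_words & set(doc["text"].lower().split()))

    picked = [d for d in sorted(docs, key=score, reverse=True)[:max_docs]
              if score(d) > 0]
    context = "\n\n".join(f"## {d['title']}\n{d['text']}" for d in picked)
    return f"You are {role}. Use only the context below.\n\n{context}"

# Invented docs and task:
docs = [
    {"title": "Matchmaking API", "text": "endpoints for queueing players into matches"},
    {"title": "Helm chart notes", "text": "deploying the matchmaking service helm chart"},
    {"title": "Art pipeline", "text": "texture import settings for characters"},
]
prompt = build_agent_context(
    "a backend engineer", "update the matchmaking helm chart", docs)
# Only the Helm doc overlaps the task, so it is the only context included.
```

Scoping the agent this way narrows what it can reference, which is exactly the hallucination sandbox being described; production systems would swap the word-overlap scoring for embedding-based retrieval.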
Alex: Has there ever been a point, with your $100 slush fund, where you've said no, you can't use this tool, because it's not secure, you don't trust where it's going?
Serge: It was definitely a concern two years ago when we got started, when there weren't really clear terms in a lot of the licensing agreements. Now there's a much more standard version of it. And if somebody mentions they want to spend money on a tool, I mean, we have a 55-person studio, people just come to me and say, I'm ready to spend on this tool.
But I'm less concerned about that at this point, given our stage and the scale that we operate at.
Alex: Okay. And moving to that stability point that you've already mentioned, Meir: games locking engines or tools so that you can build on a stable foundation towards a solid launch, with AI tools changing every three to six months, as you've said.
How are you structuring your teams and your workflows to specifically avoid vendor lock-in, as you've stated, and also keep your team at pace with learning different technologies? 'Cause eventually everybody might actually have to congregate into learning one.
Meir: Yeah, I'm actually curious to hear Serge answer this, but I'll just add to what I said earlier.
I think we have to change the paradigm of how we view our tool set, and that's way easier said than done, because you need to lock in to produce a game, right? You can't have everything changing underneath you. So I feel like we're kind of at the beginning of figuring out how to integrate AI tools and have them be effective over the long term.
Yeah, the pace of change on these tools is dramatic. So it's exciting, but it also makes it very difficult to launch a completed product.
Serge: So the products I listed that we are using are third parties, so I can basically hold that group accountable for the implementation, and if they have to change LLMs or models, whatever it is, that's fine for us. Then on the developer side, it's like blue ocean, try out every different idea. I don't know how long these different tools are gonna last, but we're not building a product that's really gonna require lock-in.
So to me it's more from a strategic perspective: I'll pick a vendor who will deliver that AI capability, and it's up to them to figure out if they gotta change the implementation. ChatGPT 7 comes out and they gotta switch to that? Great, you figure that out. But for us, we're much more in that experimentation and learning stage.
Alex: Okay. All right. So we've talked a little bit about the tools you guys are using and the use cases, but I'd love for you to tell me a story, maybe specifically on the game design side, about what you're actually using it for, a specific example case. Say you're using something like Cursor: how did that change things? Did you find the impact pretty profound, and has it meaningfully changed the velocity and direction of the production pipeline? Whether that's in the iteration phase, 'cause I know for you guys the product is in kind of pre-production, early production phases.
Serge: So, I have always tried to be a balance in the force between the game engineers and the microservice engineers. They typically do not like to touch the other side, and one thing that AI tools are very powerful at is helping people get across that learning curve. It kind of instills a sense of fearlessness.
One of my backend engineers describes getting over to the game side as crossing the Mariana Trench. He's just very dramatic, but it's genuinely hard for him: he worries about C++, which is not his favorite language, and how does he understand all of these mechanics and how the subsystems work?
But he's coding a bunch of backend social features that need to touch the game side as well, and he's been able to use Copilot to feel more comfortable and get it to the point where it's ready to have a pull request reviewed by the game engineers, who say, oh, that's actually pretty good.
Maybe change the way this is implemented. Getting people across that learning curve is just really powerful, and I think it allows me to swap who can work on different projects, and greater ownership is generally a great pattern. I've honestly really struggled to get people able to handle both Unreal and Kotlin, frankly.
So that's one thing that's been very effective in terms of getting people there. All my backend engineers are touching Unreal code now. It's great.
Alex: Mm. That's amazing.
Meir: Yeah. I think that's right. Getting past the learning curve is where it's really useful. The AI tools feel less impactful than the marketing would have you believe, but better than zero, right?
They actually help, they don't harm, but at the same time, it's not replacing us. So yeah, I like your observation, I'd agree with it. Getting people quickly up to speed on an unfamiliar area is very common to have to do as an engineer these days.
You don't just get to specialize in one language and walk away; that's not how it works anymore. Seems like there's a lot of use there for AI tools.
Alex: Hmm. All right. Before we move on to our second topic, I wanna get your guys' sobering take on the "are we there yet?" question. And to be clear, I don't actually believe this question, but tell us why all of us product and business people are out of our minds when we think that making a video game with AI is right around the corner.
Meir: You don't believe that?
Alex: I don't, like, believe it believe it. But maybe I'm wrong. We're there? Maybe you're telling me that it is true.
Meir: I'm telling you it's true. We're there.
Alex: Oh, I'm verifying it. Any other questions?
Meir: I have a friend. Oh God, why is it always my friends, isn't it? It's a completely different friend. It's starting to sound unbelievable. Yeah, too many friends. But you should believe it. He's not an engineer, and he has vibe coded a platform that itself lets you vibe code video games at an NES or SNES level.
And it's super cool. It's also completely insecure, but it is super cool. And this guy cannot look at code and understand it; that's the level. But he is very tenacious. I've been playing with it, and I thought to myself, that's better than I thought it would be. Mike, did you say you made an Asteroids clone? I also did that. It's like, make me a game like this. I described Asteroids, essentially, and it took about five minutes and spit it out. And I was like, all right, that's pretty interesting. It does a lot more than that. So on that measure, we're there, but that's not really what I think we're talking about when we say, are we making games yet, like, self-sufficiently?
Because those games are a dime a dozen, and we're just copying prior art, essentially. So I don't think we're there. I think the future of AI tools is augmenting human effort into making new, creative games and whatnot. But if we can go back and make very simple games, it's there today; it's a hundred percent available right now.
Serge: When I look at the hype, I view it as product managers having the IKEA effect, you know: I made that table, so I now value that table more. And I think some of that's what happened. They're very excited, they made an app, and it doesn't really scale, it doesn't really work. But on the flip side, a lot of what I do with the engineering team is give the designers or the artists tools to be able to build their game.
And if this is another tool that allows them to build something, great. I'm not shipping a game that's all in the Blueprints the designer made; I'm gonna come through with the engineering team and make it much more effective and performant. But if there's another way they can come up with a new idea and demonstrate it, without me having to spend a lot of time building tooling, so they're not having to talk through the engineer's fingers to get there?
That's really powerful. As Meir said, it's making games, but maybe at the prototyping, grayboxing phase, not the actual shipping phase.
Alex: Yeah, and I guess maybe that's what I meant by "I don't believe it." It's that distinction that I was referring to.
And one of the really interesting things, as someone who's vibe coded one app on Replit: I realized it had no persistence, and I was like, this is not functioning as I expected, it's forgotten who I am. And what I wish it had done was ask me the questions that my principal engineers at my studios would ask me.
They're like, well, how do you want it to function? And I think engineers often do that for designers. They're like, what do you actually mean, what do you actually want this weapon to do, how should the enemy behave? Right? And I think that if we get to that level, then someone like me, without an engineering background, who doesn't know the right questions to ask, would be very effective.
And so I'm actually really hopeful for programs like that, that will prompt me the way a principal engineer would, 'cause I can answer that. I can be like, oh, yes, I want it to remember me, or yeah, I want you to be able to log in, things like that.
So really curious to get your takes on that.
Meir: Yeah, the thing is, it's not on rails yet, but AI is good at that too. If you said to AI, ask me a bunch of questions that a principal engineer would ask when I'm trying to build X, Y, Z, it'll knock that out. And that's kind of cool. You can see all the building blocks are there, but again, it's not an on-rails experience quite yet.
Alex: All right, well, grounded in that reality, I wanna move on to our second topic, around talent and teams. In our kickoff together, we had a long discussion and debate about whether anybody in their right mind today would try to build an MMO, given the cost structure and limited budgets, which of course impact staffing and technology decisions.
And if you have to do more with less, do you think the engineering scope of a game is basically being shaved down? I'd love to discuss between the two of you whether that feels different in AAA versus at a startup.
Serge: I think engineering scope's getting shaved down some, but I've never had enough engineering scope to accomplish everyone's ideas.
I'm used to a whiteboard with 50 ideas that we slash down to four or five, and if today it's down to three, you know, we still cut 45 or 47 ideas off the board. So to me it's less about the number we're actually gonna ship, 'cause in the end those are just bets on which experience is gonna succeed with players.
So with tools like AI, and with game engines out of the box like Unreal, and Pragma, that can help us ship fast and get ideas out faster, it allows us to see sooner whether those are good ideas, and then maybe we're not spending all of the money to ship those three or four ideas. So I view it as more intelligent use of scope, but yeah, I never felt like I've had enough engineering scope.
Meir: Yeah, I agree with that. It feels like there's no more iconic duo than the calendar year increasing and doing more with less. That's just every year, my entire career — always the ask. It feels pretty normal, and engineering scope feels like a problem we constantly have to solve.
There's another aspect to it, though, which is that the expectations from gamers increase every time someone innovates — what was novel becomes expected. And that's especially true of AAA: the expectations are incredibly high for a full-price game, because if someone else has done it, why can't you?
And so that baseline just goes up and up and up. Even if your team size didn't change year over year, the expectations did, and what you can do relative to those expectations is now less, essentially, because your team hasn't grown to meet them. So it feels like a common problem, a very real problem, and I doubt it's gonna change anytime soon.
Alex: Yeah. And this morning there was a discussion around using boring tech — or tech that works, right? In the realm of this question specifically — a smaller team and a smaller budget — do you think that's pushing you guys towards that safer, boring tech, as contrasted with all of this new AI tech? In this constrained environment, do you feel like you're using new things more often or relying on old things more often? Which is it, or a blend, maybe?
Meir: Yeah, it's yes to all of that. When you're making a game, you don't grow up thinking, I want to build a backend so that there's no server downtime when I patch.
No one says that. They just wanna make a game that's fun and fun to play, and all that stuff. So what we find is the boring tech is supposed to keep us from having to worry about the very real problems of building games that consume almost all of the time, so that we can spend more time making the game fun or interesting or unique or anything like that.
Yeah, there's a push for boring tech. I think it's all rooted in: we want to spend more time making a game fun and less time worrying about anything else.
Serge: I mean, the way I think about it is I wanna build, internally, an engineering org that is specialized in what makes us different. So I don't view it as boring so much as solved problems.
I will turn externally for a backend engine, a rendering system, a build pipeline, social communities — anything like that where I don't need to build it and somebody's got a solution. Great. Anytime my engineer's like, I bet I could do what they're doing — no. Let's build what makes us really special and get really good at that.
I don't need us copying, and implementing slightly more efficiently, what somebody else has already solved. So to me, I'll then use some of the innovative tech to help us specialize in the social gameplay features we're trying to build that nobody else has. I view it as more an application of boring.
Alex: Okay, interesting. All right. And the last question in this section is about people. Many people — and we discussed this a little bit in our kickoff as well — consider Hollywood to be a bit of a Luddite industry. There are coalitions of film and movie people who are known to actively dismiss technology as not an approved way to make the highbrow art form. It took engineers at Pixar, for example, years to show the promise of animation tech for telling stories. Gaming, on the other hand, has generally been full of technophiles and a breeding ground for the adoption of technology, often before other consumer use cases.
And I personally feel that there's a bit of a divide in gaming right now — an aversion, especially maybe among legacy developers or specific disciplines, to AI adoption. Given that you are engineering leaders, how are you bringing the organization — the artists, animators, SFX — along with you, in an engineering economy where you may be interested in using AI to produce higher output or be more efficient?
Meir: Good question. Some of the aversion is gonna be bucketed, I'm sure, but some of it is related to: am I training my replacement, right? If this tool gets super good, am I outta here? And that was, I think, initially led by the marketing hype in, I'm gonna say, 2022, 2023, where AI companies were like, you're not gonna need humans anymore, we got you. And a couple companies tried that — not gaming companies; Klarna, for instance. There was a very public story where they tried it, then ran it back. Never mind, that didn't work so hot.
And I think we've reached a point now where it's easier to bring people along who have that specific fear, because we can say these are augmentation tools. They're here to make us better. You wouldn't be an artist and say, I refuse to use Photoshop. Okay, I'm sure some artists do say that, but most of 'em probably wouldn't.
They'd be like, yeah, of course, it helps me be better at what I do. Great — well, the same is true of these other tools, and the same is true for engineers using the tools, and all the other disciplines. So I think that conversation, because of the marketing hype, was initially very fear-based, and now it's becoming much easier to get people on board with: this is just how we're gonna work, and it's okay.
And it does improve our ability to create outputs.
Serge: So, a long time ago, when I had a full head of hair, I was in the open source community, and I learned from someone that if you come across somebody you disagree with, you have to listen so hard that you can make their argument for them. And the power of that is then you're able to find that middle ground.
And I find that's the tool I use very often when I'm talking to artists or other people who have concerns about, say, the ecological impact of LLMs. There are a lot of reasons people may not wanna adopt this technology, and if I can put myself in their shoes, then we can find a middle ground to be able to move forward.
I think a lot of engineering culture is kind of like, well, we'll fix the ethics in post — we'll address those down the road once we figure things out. I just really try to meet them where they are and understand what their concerns are, and people are much more amenable to change once you start to work with them and validate their fears.
It makes people fight you less and just be more able to change.
Alex: Yeah, Meir, I really like the example about Photoshop. And yeah, approaching it from their side with empathy. I listen to this podcast called AI & I — if anybody doesn't know it, it's awesome.
They interview AI leaders — the Cognition CEO, and then the Box CEO did one the other day. He was talking about the adoption curve of AI: in the early two thousands, everyone was building an internet company, and every company today is an internet company. Every company in the future will be an AI company. It's just underlying infrastructure that becomes the fundamental process of how you work. And that really stood out to me as a way I think about it for myself as well, because it's impacting not only engineers, but business and product and the Excel monkeys of the world. But, you know, I also don't derive very much value from putting numbers from the cash flow statement into an Excel spreadsheet, so I'm pretty excited for that to potentially go away. But thank you guys so much. We're gonna do a quick lightning round before we go to audience Q&A.
And so basically the exercise here is just five questions. You answer with whatever first comes to mind. Got it? So: who will win — ChatGPT (OpenAI), Gemini (Google), Claude, or neither?
Meir: Claude. That is such a nuanced answer.
Alex: I just have to say what comes top of mind.
Meir: I guess, pedantically, none of 'em, because... yeah. None.
Alex: Alright. None. But there's more to say. Okay. You want like one sentence? No, no, no, that's fine. Move on. Okay. Alright. Claude Code, GitHub Copilot, or neither?
Serge: Claude Code.
Alex: Oh, nice. Oh, you're team Claude over here, okay. Sora, Google/YouTube Veo 3, or — do, do, do, god forbid — Meta Vibes?
Serge: None of the above. Is that an option?
Alex: Sure. Yeah. Meir?
Meir: Maybe the Google one.
Alex: Okay. What is the scariest thing you've seen AI do?
Meir: Probably Will Smith eating spaghetti. I'll stand by that.
Serge: I don't think I can top that. I was gonna say force-multiplying the hate posts and rancor on social media.
Meir: Oh, that's what I said. Yeah. That, that one.
Serge: I like your answer better.
Alex: And number of years until we reach AGI? Undefined? I know. Okay.
Meir: Yeah. Five. I'll say five.
Serge: I disagree with the question. I feel like we thought chess was really impressive. We thought Jeopardy was really impressive. We thought searching the internet's knowledge was really impressive. I think it's a terrible term, and I don't think we're actually gonna achieve it, 'cause we can't define it, so we're never gonna get there.
Meir: Let's dive into this one, 'cause the exponential rate of improvement is hard for our linear brains to process. So I think there's a number; I just don't think we can see it. It's like trying to think in four dimensions — we can't really quite do that.
Serge: I think we'll redefine what human intelligence is once we realize what the computer can do.
Meir: Yeah, maybe.
Alex: I've heard the definition that we've reached AGI when it's more efficient to have all of your AI systems and tools running all the time versus turning them off — it's always efficiently doing something. If that's the definition, I think we could probably get there, but...
Meir: Undefined, yeah. Well, the capabilities are getting there, but it's way more energy-intensive than running a human brain. And so I'm not convinced — I don't know where that finally crosses the threshold.
Alex: All right. Thank you, guys. What a way to close out. I'd love to move to some Q&A. James, I dunno if you wanna come grab the mics.
Thanks everyone for listening, and if you're listening on air, thank you so much. You can always reach out to me at [email protected]. So — questions?
Question 1: Hi. I'd like to ask: how do you deal with the questionable ethicality, legality, et cetera, of AI training data input, and the questionable copyright of its output?
Serge: So, I think this is an issue that multiple countries are gonna have to figure out legally. I go back to when the internet first boomed.
There was an unclear sense of what digital copyright meant. If you took a book and scanned it, what were your rights to distribute it? It's ethically unclear; US copyright law doesn't really give a good answer. At the same time, people are being sued for copying melodies of songs from decades ago.
So I think, generally, there are a lot of intellectual property rights that need more clarity from the courts. And I also don't wanna be just US-centric, because certainly the EU, China, and other countries will weigh in on this. So I think it is morally uncertain, but ultimately I want to see how the legal system decides these cases — which is gonna come; people are gonna sue.
I think there are already some court cases in the works, and I welcome those. I'd love to get some clarity on how these are treated.
Meir: Yeah, it's definitely not up to the game companies to figure this out; the AI companies are doing it. But it's a great question — it's happening right now. Those definitions are being created because of use, or misuse, depending on your point of view, and I'm also curious to see where that lands.
Alex: Minor follow-up, but — so the Times right now is suing OpenAI over, obviously, the scraping of articles written by journalists. Do you think it's the responsibility of, say, an OpenAI — if they're basically paying billions of dollars for these huge data warehouses to train the model — to carve off some of that and pay the people whose data they're training on?
Meir: Yes. I mean, it's such a broad — it's a good question. Sorry, I don't mean to diminish it. It's a great question, but it's hard to wrap your head around the implications of any answer, right? I think as training data starts to become available to all of us through our prompts and the outputs, it becomes clear that something's wrong. It's not clear yet how to make that actually feasible, just from a business perspective and a humanity perspective and all that. I think, effectively, as Serge was getting at, you have to think worldwide about this problem, although I'm sure the US and EU and a couple other big regions will lead the charge there.
Serge: Yeah. And I think the answer is yes, there should be. But what is the economic value? What's the threshold? How much of a corpus means it's now something you can charge for? Those are interesting questions.
Question 2: I'll give you a chance to expand on your "this is a very nuanced answer" piece. You had given a one-word answer, "none of them," and I wanted to hear the rest of it.
Meir: Was this who would win? What was the,
Question 2: Who?
Meir: Would win. Maybe this won't be a nuanced answer, 'cause they're good at different things, right? There was a fantastic video I watched on YouTube where some of the Claude researchers were coming in and saying, well, the models are really good at X and they fail a lot on Y and Z, but that's just because we haven't trained them on that yet. If there's an incentive to train it, we would, and then they'd be super good at that. And the list of things we could be training models on is practically infinite.
It's very large. And whatever the companies do to incentivize training — whether it's businesses asking, or just for the betterment of humanity, or whatever — we're eating into that very, very slowly. So I think how companies will differentiate is just where they're training and what things they're good at.
And for us as users, the models aren't undifferentiated — they're actually quite differentiated — but you're expected to use multiple. You're not just supposed to be a Claude stan or a GPT stan or something like that. You're expected to actually say, I use this model for these things, this other model for these other things.
Okay, so then backing up: well, who's gonna win? I think all of them. I said none of the above, but I really meant all of them, in that respect.
Question 2: Talk about junior talent and how they're affected. I know that's a big issue of concern. I'm wondering if you have answers or thoughts in that space.
Serge: I feel like these are macroeconomic forces that are happening, not anything specific to AI. Again, I remember back in those open source days, we talked about how the growth of the internet was exponential — and then mobile, and all these other things — but the scaling of talent is linear, so there was gonna be huge demand.
And what that led to was decades of: get a compsci degree, or get any engineering degree and you're gonna end up in compsci anyway, because that was the market demand. Nowadays — my daughter graduated a couple months ago with a chemical engineering degree — the compsci engineers are not instantly getting jobs, and there are many other types of engineering jobs.
The world is not overflowing with engineers; they just may not be going into computer science the way they were. So that's largely how I view it: it's a shift in what was a very big drought for a long time. I still worry about them. I'd love my daughter to go into computer science, but she's likely gonna stick with chemical engineering.
So I think it's just that what we've been used to is changing — not that it's a specific tool that's come along, or some innovation that's changing the dynamic. It's gonna go back to being a hard job to get into.
Meir: Yeah, that sounds smart. I think that's right. I will say, initially my point of view was — well, it wasn't gonna sound smart. My point of view was: oh, the juniors are messed up. It's not gonna go well for juniors anymore. And I think there's this cliff that started in 2022 — sorry, 2022, not 2002 — where no more future seniors were gonna enter the job market, or at least very few. And so as whoever was senior in 2022 started to progress and age out, we would start to recognize that, yeah, there's a problem here — we haven't replaced our old seniors with new seniors, and that's because the junior pipeline was materially impacted by hiring practices due to AI.
That was my first take, and I felt that for a couple years. And then over time, as I've read more on it and talked to people, I think I've concluded it's more just what Serge said: the nature of it has changed. We are adapting how we're hiring and creating and promoting juniors — bringing them in and then turning them into seniors.
That is adapting, yes, but it's not gone. It's just a different world now; we're working our way through it. So I no longer feel that sense of dread that there are no more future seniors out there. But I do think we have to acknowledge that the pipeline we're all used to, which is several decades old, is no longer accurate.
Alex: Follow-up question to that: do you think there would be more engineers, or more people interested in being an engineer, given some of the changes that are happening? I kind of had this debate with one of my engineers recently, where his concern was, oh my gosh, people aren't gonna be classically trained in engineering anymore.
They won't really know what they're doing. And I was like, do you know machine code? He was like, no. And I was like, yeah — you know C++ or Python or JavaScript. That's a layer of abstraction, and the level of engineering you used to have to do with punch cards and stuff has gone away.
Right? And perhaps — this is a hypothesis — this is maybe the next level of what it means to be an engineer.
Meir: It's different. So, this answer will come suspiciously close to claiming I prefer the abacus over the calculator, and I do not. But I think it's important to understand how the thing works under the hood, 'cause engineering isn't really about when things go right — can you vibe code the thing? It's fine if it goes right. It's about when it goes wrong: how long does it take you to fix it? And the more you understand the inner workings of the underlying technology, the better that goes. That's like most of the job.
It's the most frustrating part of the job, but it's also, effectively, the job. So yeah, we've definitely changed who's coming into engineering. Vibe coding's a cool name, so you're gonna get people that do it — like my friend, right, who made the platform. He's not an engineer, and he's doing engineering stuff now.
But he can't debug it, and there are bugs, and that has to be solved. So yeah, I think we've definitely broadened the inputs, but I don't know if it's increased the quality of the outputs. Hmm.
Serge: Again, I sort of come back to a similar answer: I think there's just huge engineering demand in different specialties now.
Like what we understand with the human genome, what we're able to do with chemical structural analysis. There is so much work in solid-state products, in pharma. When we were growing up, it was microprocessors, 3G, mobile devices — that was the big sucking sound of engineering talent.
And I think, as much as the AI companies are all insanely valued, there are other areas of innovation that investors could get ahead of.
Alex: Any more questions? I think we probably have time for one. Okay.
Question 3: You talked a lot about how AI will improve and change game development. What are your thoughts on the gameplay experience — the consumer experience? Where are you seeing AI changing the actual game, beyond NPCs that can have a conversation with you?
Meir: Oh yeah. There was a cool game that did that, actually — the whole game was built on effectively talking to an LLM, and it was in the game.
I forget the name. It was a couple years ago. I thought it was pretty innovative. But to answer your question, I think the answer is through democratizing access to data, which I talked about earlier with natural language querying. I honestly think we can't use data to define the world — any data we collect is an incomplete model of the world — but it does contextualize what we know to be true. And so, if we're talking about the player experience — what are they experiencing? — a lot of that is collected in data and aggregated, and sometimes hard to see.
I think natural language querying gives us the ability to have people — designers and producers and people who aren't super good at SQL or data querying — actually try their hand at that, start to ask questions and get answers, and then talk with people who know more about it.
I think that will actually massively accelerate how we help games become more fun as a consequence of AI.
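As a rough sketch of the natural-language-querying idea Meir describes, you can embed a telemetry schema in a prompt so a designer's plain-English question can be translated into SQL by a model. Everything here — the schema, the table names, and the `ask_in_english` helper — is invented for illustration; a real pipeline would send this prompt to an LLM and run the returned query against the studio's warehouse.

```python
# Hypothetical player-telemetry schema a studio might expose to an LLM.
SCHEMA = """\
CREATE TABLE sessions (player_id TEXT, started_at TIMESTAMP, duration_s INTEGER);
CREATE TABLE purchases (player_id TEXT, item TEXT, price_cents INTEGER);
"""

def ask_in_english(question: str) -> str:
    """Build the prompt an LLM would receive; its reply would be a SQL query
    that a producer or designer never has to write by hand."""
    return (
        "Translate the question into a single SQL query for this schema:\n"
        f"{SCHEMA}\n"
        f"Question: {question}\n"
        "Answer with SQL only."
    )

# A designer's question, no SQL knowledge required.
prompt = ask_in_english("How long do players play on days they buy something?")
print(prompt)
```

The design point is that the schema travels with every question, so the model always answers against the tables that actually exist rather than hallucinating column names.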
Serge: I'm trying to be open-minded while skeptical. Some of that comes from the types of games I like. I like a game that is very intentional about the emotion it's trying to create in me, which is a combination of art style, game mechanics — a bunch of different things aligning in the same direction — and I'm just skeptical AI will get there. But if I look at more UGC or open-ended types of play, where it's much more collaborative and you're giving people more tools with AI to do that more rapidly, maybe that can work. That's not my type of game as much, so I can't rule it out. But in terms of what I like, I'm skeptical it's gonna come. I've got good friends trying to make that happen, though, so I wish them a lot of luck. I don't have as many friends as Meir.
I think that's it — no more questions. Yeah. All right, give it up. Thank you so much, Serge, Alex, and Meir. Awesome panel.
If you enjoyed today's episode, whether on YouTube or your favorite podcast app, make sure to like, subscribe, comment, or give a five-star review. And if you wanna reach out or provide feedback, shoot us a note at [email protected] or find us on Twitter and LinkedIn. Plus, if you wanna learn more about what Naavik has to offer, make sure to check out our website, www.naavik.co. There you can sign up for the number one games industry newsletter, Naavik Digest, or contact us to learn about our wide-ranging consulting and advisory services.
Again, that is www.naavik.co. Thanks for listening and we'll catch you in the next episode.