
**Moderator:**
Michael Gold - [X](https://twitter.com/michaelgold) | [Linkedin](https://www.linkedin.com/in/themichaelgold/)
**Panelists:**
Amber Cook - [X](https://twitter.com/100gamesproject) | [Linkedin](https://www.linkedin.com/in/ambermcook/)
Bilawal Sidhu - [X](https://twitter.com/bilawalsidhu) | [Linkedin](https://www.linkedin.com/in/bilawalsidhu/)
Kayla Comalli - [X](https://twitter.com/kay_coms) | [Linkedin](https://www.linkedin.com/in/kalyco/)
Vatsal Bhardwaj - [X](https://twitter.com/vatsal) | [Linkedin](https://www.linkedin.com/in/vatsal/)
[00:36.000 --> 00:41.000] **[Michael]** All right, thanks everyone.
[00:41.000 --> 00:51.000] So, we're here to talk about the impact of AI in terms of gameplay.
[00:51.000 --> 00:58.000] And specifically with this panel, what I want to do for all of you is give you kind of a
[00:58.000 --> 01:08.000] presentation of everyone's take on the technology and how that impacts the go-to-market for
[01:08.000 --> 01:10.000] implementing AI and games.
[01:20.000 --> 01:36.000] So, we want to give you a sense of what the go-to-market could look like for a game that
[01:36.000 --> 01:45.000] leverages heavily AI and how people are thinking about it both from the marketing side, from
[01:45.000 --> 01:49.000] the technology side, from the content side.
[01:49.000 --> 01:56.000] There's enabling technology here that's been around that AI empowers and there's also brand
[01:56.000 --> 02:04.000] new stuff like large language models and generative content like stable diffusion and even 3D
[02:04.000 --> 02:06.000] generative content that's on the horizon.
[02:06.000 --> 02:07.000] So, we'll talk about that.
[02:07.000 --> 02:11.000] We're going to keep it pretty non-technical.
[02:11.000 --> 02:15.000] How many of you in the audience want us to get technical?
[02:15.000 --> 02:17.000] Raise your hand.
[02:17.000 --> 02:20.000] We can geek out with you.
[02:20.000 --> 02:25.000] But we'll save that to the end of the panel.
[02:25.000 --> 02:33.000] And what I'm going to do is I'm going to put together a Google Doc with all of the panelists'
[02:33.000 --> 02:38.000] content info and all of our thoughts in written form.
[02:38.000 --> 02:40.000] I'll share that on LinkedIn.
[02:40.000 --> 02:45.000] So, if you follow any of us on LinkedIn, you'll be able to get that Google Doc with everything
[02:45.000 --> 02:46.000] we spoke about here.
[02:54.000 --> 03:02.000] So, with that, I'll let my panelists introduce themselves and we'll kick things off.
[03:07.000 --> 03:10.000] **[Vatsal]** My name is Vatsal Bhardwaj.
[03:10.000 --> 03:17.000] I've been in games for the last 15 years, building games or building platforms and technologies
[03:17.000 --> 03:18.000] for games.
[03:18.000 --> 03:21.000] Most recently, I was Chief Product Officer at Stills.
[03:21.000 --> 03:26.000] Before that, I was Head of Games Tech and Simulation Tech at AWS.
[03:26.000 --> 03:30.000] I had a similar role at Meta/Oculus before that.
[03:30.000 --> 03:36.000] Prior to that, I used to make games, mobile free-to-play games, at Zynga and Storm8.
[03:36.000 --> 03:39.000] So, yeah, I'm very excited to be here.
[03:48.000 --> 03:49.000] **[Amber]** I'm Amber Cook.
[03:49.000 --> 03:52.000] I work in marketing for a company called Hidden Door.
[03:52.000 --> 03:57.000] We're building a platform that allows you to connect with your favorite universes and
[03:57.000 --> 04:01.000] get into interactive fiction very quickly based on these traditional worlds.
[04:01.000 --> 04:04.000] My background is mostly in tabletop gaming.
[04:04.000 --> 04:15.000] I moved over to digital about five years ago to work as the Head of Marketing for Roll20, which is sort of a platform for playing pen-and-paper role-playing games.
[04:15.000 --> 04:20.000] I just switched over to this new AI platform a year ago.
[04:20.000 --> 04:21.000] **[Bilawal]** Hey, everyone.
[04:21.000 --> 04:22.000] My name is Bilawal Sidhu.
[04:22.000 --> 04:24.000] I'm an AI and 3D creator.
[04:24.000 --> 04:31.000] That means I make educational and hopefully entertaining content on YouTube, TikTok, and
[04:31.000 --> 04:34.000] I guess X is what we're calling it these days.
[04:34.000 --> 04:39.000] Prior to that, I spent a decade in tech predominantly at Google where I was a senior product manager
[04:39.000 --> 04:45.000] working on immersive capture, AR/VR platforms, and geospatial 3D mapping.
[04:45.000 --> 04:49.000] I fell in love with all of this good stuff at the age of 11 when I discovered VFX and
[04:49.000 --> 04:50.000] computer graphics.
[04:50.000 --> 04:54.000] And let me tell you with this current wave of AI, that's exactly how I feel, and I'm
[04:54.000 --> 04:56.000] super excited to be here.
[04:56.000 --> 04:57.000] **[Kayla]** Hi, everybody.
[04:57.000 --> 04:58.000] Hi, guys.
[04:58.000 --> 04:59.000] I'm Kayla Comalli.
[04:59.000 --> 05:01.000] I'm the CEO of Lovelace Studio.
[05:01.000 --> 05:06.000] We're building a user-generated content platform that leverages AI tools in a comprehensive
[05:06.000 --> 05:07.000] way.
[05:07.000 --> 05:19.000] So from language models with characters, to generative adversarial networks for generating mechanics and modular components, to entire world building using prompt engineering.
[05:19.000 --> 05:37.000] And we're currently just a startup right now, but my experience is in robotics and AI perception, working with tools and systems, and as a lifelong lover of games I've pivoted into how we can build that at scale.
[05:37.000 --> 05:39.000] **[Michael]** Thank you, everyone.
[05:39.000 --> 05:49.000] So there's this holy grail, if you will, about the infinitely replayable game that people
[05:49.000 --> 05:56.000] think AI might enable, procedural content generation might enable as well.
[05:57.000 --> 06:02.000] Is that the most exciting opportunity as you see it?
[06:02.000 --> 06:04.000] This is for the entire panel.
[06:04.000 --> 06:08.000] What opportunities really excite you?
[06:08.000 --> 06:11.000] What technologies are behind it?
[06:23.000 --> 06:26.000] **[Amber]** I think that enables infinitely replayable stories, but I don't really think that's the
[06:26.000 --> 06:27.000] most exciting thing about it.
[06:27.000 --> 06:31.000] I think the scalability is really interesting, but I think the ability to take these fictional
[06:31.000 --> 06:36.000] universes and bring fans really close to them and let them put their fingerprints on those
[06:36.000 --> 06:40.000] stories and write their friends into those worlds and then sort of say, all right, I
[06:40.000 --> 06:44.000] didn't want the TV show to go this way last night when I watched my favorite episode.
[06:44.000 --> 06:49.000] I want to see what happens if we have been able to finish the story out this way and
[06:49.000 --> 06:53.000] create their own campaign and take it a different direction than the writers do.
[06:53.000 --> 06:55.000] That's exciting.
[06:55.000 --> 07:00.000] **[Michael]** Is everyone here familiar with Showrunner?
[07:00.000 --> 07:03.000] We've got a lot of head nods.
[07:08.000 --> 07:28.000] It's AI-generated content that kind of took a South Park episode and made it automated with bots, and it was kind of good, but it also needed a lot of work to be really good, what you would expect for AAA.
[07:28.000 --> 07:33.000] I'm wondering if anyone on the panel has thoughts and ideas of what they would do to improve
[07:33.000 --> 07:37.000] that with today's topic.
[07:37.000 --> 07:40.000] **[Bilawal]** The way I think about it is it gives you the scaffolding, right?
[07:40.000 --> 08:07.000] As mentioned in the prior panel, people were talking about world building: the ability to distill down the attributes of a world you're building into something, and then scale that across a bunch of different episodes, perhaps in this case. And the fact that you can go and see if something kind of makes sense, whether that's using, you know, the Showrunner example that you gave, or taking a bunch of Midjourney art and running it through Runway and seeing motion added to it, I think it's just profound, right?
[08:07.000 --> 08:11.000] Like, it may not be the final product, but it gets you so much closer to it that you
[08:11.000 --> 08:16.000] can make more informed decisions about how you use that limited, you know, kind of human
[08:16.000 --> 08:19.000] resource that you have to achieve it.
[08:19.000 --> 08:47.000] **[Vatsal]** I think just reflecting on both those things, how do you make a forever game and this South Park episode which came out: at least the way I think of it is, forever games, by nature, are emergent. And, you know, what I mean by that is, can you set up a construct with simple rules where the players and the community can continue to evolve the game?
[09:03.000 --> 09:09.000] Now, everything which we have seen in AI today, especially like the most recent trends
[09:09.000 --> 09:16.000] around LLMs and Gen AI, potentially it could put it on steroids, like potentially.
[09:16.000 --> 09:20.000] And the reason I say potentially is that's still the theory.
[09:20.000 --> 09:23.000] I think there are a lot of problems to be worked out.
[09:23.000 --> 09:51.000] I think the challenge is, can you harness these models in a way where you have a scaffolding of control, so that things don't go off the rails? So, again, just to tie it together: if we can master the hallucination of these AI systems in a way that offers delight and not misery, I think we'll see a lot of invention in video games.
[09:51.000 --> 09:57.000] **[Michael]** So, I think one of the things that we should outline for the audience is, what is the state
[09:57.000 --> 09:59.000] of the art now?
[09:59.000 --> 10:01.000] What can it do?
[10:01.000 --> 10:03.000] What can you do with generative AI?
[10:03.000 --> 10:06.000] And also, what can't you do?
[10:06.000 --> 10:11.000] And based on your experiments, maybe Kayla, you want to take that one?
[10:11.000 --> 10:12.000] **[Kayla]** Sure, yeah.
[10:12.000 --> 10:16.000] And this actually goes off of the infinite question as well, because I think that what
[10:16.000 --> 10:21.000] you can do now is exceedingly powerful if you can tie it all together.
[10:21.000 --> 10:46.000] From a consumer side, things get very expensive if you're talking about, oh, I want a system that can go from text to a comprehensive 3D mesh with hundreds of thousands of tris. But if you want something that has the ability to create a mesh, you could use a parametric-based engine and get that in less than 33 milliseconds.
[10:46.000 --> 11:21.000] And it might not be the highest rendering quality, but the technology is evolving so quickly that if you tie it all together and expose it to players early on, you can start leveraging this applied tool. Something like 66% of the United States, 68% of adults, are gamers. And there's this growing underlying understanding that this is a skill set everybody has. If you can just give them the tools and use data-driven research to refine and adjust it, then it's unstoppable, I think. Yeah, "unstoppable" is a good word for it.
[11:21.000 --> 11:28.000] **[Michael]** Yeah, the interplay between AI generation and parametric or procedural generation like
[11:28.000 --> 11:33.000] we see in Minecraft and No Man's Sky is a really big opportunity.
[11:33.000 --> 11:36.000] And it's kind of straightforward to chain these things together.
[11:36.000 --> 11:39.000] So I think we'll see a lot of that as well.
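To make that chaining idea concrete, here is a minimal Python sketch: a generative model (or a player prompt) picks a few high-level parameters upstream, and a cheap parametric step turns them into geometry in well under a frame. All names here are illustrative; this is not any particular engine's API.

```python
import numpy as np

def parametric_cylinder(radius: float, height: float, segments: int = 32):
    """Build a simple open cylinder mesh from a handful of parameters.

    This stands in for the "parametric-based engine" step: the parameter
    values could come from an LLM, a prompt, or procedural rules upstream.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
    ring = np.stack([radius * np.cos(theta), radius * np.sin(theta)], axis=1)
    bottom = np.column_stack([ring, np.zeros(segments)])      # z = 0
    top = np.column_stack([ring, np.full(segments, height)])  # z = height
    verts = np.vstack([bottom, top])

    # Two triangles per quad around the side wall.
    faces = []
    for i in range(segments):
        j = (i + 1) % segments
        faces.append((i, j, segments + i))
        faces.append((j, segments + j, segments + i))
    return verts, np.array(faces)

# Parameters chosen "upstream", e.g. by a generative model or a prompt.
verts, faces = parametric_cylinder(radius=1.0, height=2.0, segments=64)
```

Emitting geometry like this takes microseconds, which is why chaining a slow generative step (choosing parameters) with a fast parametric step (emitting the mesh) can stay inside a 33 ms frame budget.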
[11:39.000 --> 11:45.000] So Bilawal, I wanted to ask you, you have a unique insight to this.
[11:45.000 --> 11:52.000] What topics get the most responses and the most feedback when you make content around
[11:52.000 --> 11:53.000] them?
[11:53.000 --> 11:57.000] What are the general gaming audiences most excited about, most interested in?
[11:57.000 --> 12:00.000] What are they kind of scared about?
[12:00.000 --> 12:06.000] **[Bilawal]** If you have any posts go viral on AI, there's going to be a lot of hate and fear that you
[12:06.000 --> 12:07.000] get, for sure.
[12:07.000 --> 12:12.000] But on the positive side, I think people are very excited about the unbundling of these
[12:12.000 --> 12:13.000] vertical studios.
[12:13.000 --> 12:18.000] Like all these capabilities that have historically been very, very expensive.
[12:18.000 --> 12:23.000] Let's take reality capture and performance capture as two examples, perhaps.
[12:23.000 --> 12:24.000] So reality capture, right?
[12:24.000 --> 12:25.000] Like photogrammetry isn't new.
[12:25.000 --> 12:41.000] It's been around since time immemorial, right? Even before we had digital cameras. But what took data centers just a decade ago, you could do on like a $1,500 NVIDIA GPU using apps like RealityCapture, which Epic Games now owns.
[12:41.000 --> 12:47.000] And then just two years ago, we've seen this rise of neural representations, these sort
[12:47.000 --> 12:52.000] of volumetric light fieldy, not exactly a light field, but close enough representations
[12:52.000 --> 12:58.000] to take just commodity imagery of the real world and create these immaculate 3D renditions
[12:58.000 --> 12:59.000] of it, right?
[12:59.000 --> 13:00.000] So that's magical.
[13:00.000 --> 13:21.000] The fact that you can capture these spaces, bring it into Unity or Unreal Engine, and use that as your game world. And not only that: one of the other things I had the opportunity to work on at Google is 3D mapping, so go get photorealistic 3D tiles. You have all of Earth now in a game engine, plus all the micro-scale data that you are capturing at ground level can be layered and anchored to that.
[13:21.000 --> 13:22.000] That's magical.
[13:22.000 --> 13:24.000] But that's just the reality capture side, right?
[13:24.000 --> 13:29.000] Like performance capture, you needed these crazy optical tracking systems, like, you
[13:29.000 --> 13:31.000] know, go rent out a Vicon studio, whatnot.
[13:31.000 --> 13:42.000] And this tech that had been almost incubated for, you know, kind of making whimsical Snapchat filters is now finally being applied to content at a much higher fidelity, right?
[13:42.000 --> 13:46.000] So when you look at move AI, what they're doing, or if you look at what Wonder Dynamics
[13:46.000 --> 13:52.000] is doing, taking all these perception AI capabilities and being able to use that as a very small
[13:52.000 --> 13:58.000] team to just use your iPhone or like a Sony camera or small array of cameras to digitize
[13:58.000 --> 14:00.000] the performance and bring it into game engines.
[14:00.000 --> 14:05.000] Like, I think that has that 10x multiplier effect where it levels the playing field for
[14:05.000 --> 14:09.000] indies, but even people in bigger studios can kind of iterate faster because you don't
[14:09.000 --> 14:13.000] have to go get studio time to try out or validate a concept.
[14:13.000 --> 14:19.000] **[Michael]** Just from the audience here, have any of you already replaced motion capture stages for
[14:19.000 --> 14:27.000] capturing player animations with tools like Move.ai, Plask, or Rokoko?
[14:27.000 --> 14:29.000] **[Bilawal]** Or augmenting it at least.
[14:29.000 --> 14:30.000] **[Michael]** Yeah.
[14:30.000 --> 14:39.000] So that is a big thing to look into because you can use iPhones and really inexpensive
[14:39.000 --> 14:48.000] mocap for game animations instead of renting studio time.
[14:48.000 --> 14:49.000] Go ahead, yeah.
[14:50.000 --> 15:06.000] **[Vatsal]** Just to jump in, as a former game maker: I think where there are real gains, if you're a game maker, is that you can prototype and test things really, really quickly.
[15:08.000 --> 15:12.000] Instead of three months, you can bring it down to really short timelines.
[15:12.000 --> 15:18.000] But when it comes to how do you take, whether it's diffusion models for 2D or NERF-based
[15:18.000 --> 15:26.000] models for 3D, how do you take these, integrate them into your current workflows at a quality
[15:26.000 --> 15:28.000] which your players expect?
[15:28.000 --> 15:31.000] I think there is lots of work to be done there.
[15:31.000 --> 15:37.000] These are amazing toys, and they'll get to production very, very soon.
[15:37.000 --> 15:53.000] But there are a lot of improvements, around security, how data is handled, and integration into workflows, which will need to get built out, and are getting built out over the next couple of years, which will make it compelling.
[15:53.000 --> 15:58.000] I don't think we are necessarily there yet in many of these cases.
[15:58.000 --> 16:04.000] **[Michael]** I want to know more about the AI bar exam that Johnny mentioned.
[16:06.000 --> 16:13.000] There are so many legal references today, but I think we all need to have these types
[16:13.000 --> 16:18.000] of pass-fail exams, whether they use reinforcement learning or just traditional approaches to
[16:18.000 --> 16:23.000] make sure that the content quality is good.
[16:23.000 --> 16:28.000] I'm curious, Amber, when you have conversations, because you're on the marketing side, when
[16:28.000 --> 16:36.000] you have conversations with brands about licensing content, how do you explain content quality
[16:36.000 --> 16:39.000] with what you're doing with AI, and how do you get them comfortable with it?
[16:39.000 --> 16:41.000] What does the conversation look like?
[16:41.000 --> 16:46.000] **[Amber]** We have a real advantage in that our CEO is somebody who wrote a book on the ethics
[16:46.000 --> 16:47.000] of AI.
[16:47.000 --> 16:52.000] She was Obama's chief data scientist, so we have a little bit of an advantage, just because she has some credibility in the field.
[16:52.000 --> 16:55.000] But what we're finding is that it just depends on who we're talking to.
[16:55.000 --> 16:59.000] Originally, our idea was that we were going to work with literary universes.
[16:59.000 --> 17:03.000] Because everything that we're doing, where a lot of people are talking about motion capture
[17:03.000 --> 17:06.000] and all this stuff, we're building basically an interactive graphic novel.
[17:06.000 --> 17:10.000] It sort of fits into the cozy game category more than anything else, more interactive
[17:10.000 --> 17:11.000] fiction.
[17:11.000 --> 17:15.000] What we need to make people comfortable with is that we're going to take their world, their
[17:15.000 --> 17:17.000] IP, and we are not going to exploit it in any way.
[17:17.000 --> 17:19.000] Batman's not going to die.
[17:19.000 --> 17:21.000] Batman's not going to start kissing random people in the street.
[17:21.000 --> 17:25.000] How do we make those rails there so that we don't violate any IP standards?
[17:25.000 --> 17:28.000] What are the laws of physics of those universes?
[17:28.000 --> 17:33.000] When we started approaching authors, it's just a difficult time to approach authors
[17:33.000 --> 17:34.000] right now.
[17:34.000 --> 17:38.000] We started getting replies from literary agents saying we don't want to exploit so-and-so's
[17:38.000 --> 17:39.000] work this way.
[17:39.000 --> 17:44.000] But what I'm finding is that the studios, the entertainment studios, realize that they're
[17:44.000 --> 17:45.000] moving forward in these ways.
[17:45.000 --> 17:50.000] They're more open to collaborating with us, and they understand that we'll put the work
[17:50.000 --> 17:54.000] in, and we'll have to figure out how you get brand approvals with AI in the mix.
[17:54.000 --> 17:55.000] How do you...
[17:55.000 --> 17:58.000] You can't approve every single thing that is generated.
[17:58.000 --> 18:00.000] What is that going to look like?
[18:00.000 --> 18:03.000] That's the whole new world of licensing that I think we're going to be playing around in.
[18:03.000 --> 18:06.000] A lot of those bigger studios aren't afraid of it, but they're using...
[18:06.000 --> 18:10.000] They're almost using us as consultants in some ways because they're coming in and they're
[18:10.000 --> 18:14.000] having AI summits internally and asking us to come talk about what it's going to look
[18:14.000 --> 18:18.000] like to partner with us as one of their first AI licensees.
[18:18.000 --> 18:21.000] Some of the more independent studios are...
[18:21.000 --> 18:25.000] That have really strong brands and are very sensitive.
[18:25.000 --> 18:31.000] For example, the tabletop RPG community is very sensitive to AI with regard to the displacement of authors or writers.
[18:31.000 --> 18:34.000] Those brands are being a little bit more like...
[18:34.000 --> 18:37.000] We don't want to be the first ones out of the gate to announce a license.
[18:37.000 --> 18:39.000] We don't want to be the ones going to bat for AI.
[18:39.000 --> 18:44.000] Once everybody's over those first conversations, then we're happy to partner with you and license
[18:44.000 --> 18:46.000] our content to you.
[18:46.000 --> 18:48.000] **[Michael]** That makes a lot of sense.
[18:48.000 --> 18:49.000] It's tough.
[18:49.000 --> 18:53.000] Nobody wants to be the pioneer with the arrows in their backs.
[18:53.000 --> 18:55.000] That's always true.
[18:55.000 --> 19:09.000] I was expecting... well, part of me thought that we might have picketers out here. I didn't know. I'm happy we don't.
[19:09.000 --> 19:13.000] It seems to be a pretty AI-positive crowd here.
[19:13.000 --> 19:17.000] We're self-selected to come to a conference.
[19:17.000 --> 19:21.000] There are some tools, though, that are a little bit scary.
[19:21.000 --> 19:29.000] I was chatting with the co-founder of a tool called CoquiAI, which does voice model cloning.
[19:29.000 --> 19:37.000] He was telling me he thought that in games in the next year or two, everything except
[19:37.000 --> 19:41.000] for cutscenes would be AI voice-generated.
[19:41.000 --> 19:43.000] That was his prediction.
[19:43.000 --> 19:56.000] Of course, he's got a lot of skin in the game there. And the cost is significant: it's about $10 to generate an hour of audio.
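That figure invites quick arithmetic: generating a game's dialogue once is cheap, but generating it per player at runtime is not. A back-of-envelope sketch in Python, assuming only the quoted $10-per-hour rate (the hours and player counts below are made-up examples):

```python
# Back-of-envelope cost model, using only the $10-per-generated-hour
# figure quoted above. Everything else is a hypothetical example.
RATE_PER_HOUR = 10.0

def voice_cost(hours_of_audio: float, players: int = 1) -> float:
    """Total generation cost; per-player when lines are made at runtime."""
    return RATE_PER_HOUR * hours_of_audio * players

baked = voice_cost(5.0)                     # 5 h of dialogue, generated once
dynamic = voice_cost(5.0, players=100_000)  # same 5 h, regenerated per player
```

The asymmetry is the point: pre-baking AI lines costs about what a single studio session would, while fully dynamic per-player voice generation scales linearly with your player base.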
[19:56.000 --> 19:57.000] I'm curious.
[19:57.000 --> 20:06.000] If those are the parameters, what do you all think will be the use of that type of AI in
[20:06.000 --> 20:07.000] games?
[20:07.000 --> 20:10.000] Do you think it's going to blow up, or do you think there's going to be pushback to
[20:10.000 --> 20:11.000] it?
[20:11.000 --> 20:14.000] You can all jump in.
[20:15.000 --> 20:19.000] **[Amber]** There was a marketing agency I was looking to hire for short-form video content recently.
[20:19.000 --> 20:25.000] They captured my CEO's voice from a podcast she had done, and then they sent me short-form
[20:25.000 --> 20:29.000] video they created with her likeness, without any permission or anything, just as a pitch
[20:29.000 --> 20:30.000] to get us to work with them.
[20:30.000 --> 20:31.000] I hope it's not that.
[20:31.000 --> 20:34.000] I think content creation is going to be really interesting, though.
[20:34.000 --> 20:39.000] There's going to be a lot of weird toxicity that we're going to have to deal with because
[20:39.000 --> 20:40.000] of it.
[20:40.000 --> 20:44.000] **[Bilawal]** But I think there's also going to be some amazing personalities that come out of it
[20:44.000 --> 20:46.000] because of that.
[20:46.000 --> 20:49.000] I think the costs are going to go down.
[20:49.000 --> 20:54.000] The rate at which we're going from R&D research publication to actual products, now it is
[20:54.000 --> 20:59.000] an interesting situation where it's the independents that are far more inclined to play with these
[20:59.000 --> 21:04.000] capabilities based on all the training data provenance conversations we've had.
[21:04.000 --> 21:09.000] But it's an inevitability that the optimization will happen if we just take the reality capture
[21:09.000 --> 21:10.000] space.
[21:10.000 --> 21:23.000] I mean, last January it took 12 hours on a TPU to train a NeRF. Then Instant NGP came out, courtesy of NVIDIA, and it took a minute on a $1,500 consumer-grade GPU.
[21:23.000 --> 21:27.000] But hey, still playing back and rendering neural volumes in real-time was still hard.
[21:27.000 --> 21:32.000] Now you've got a new technique, gaussian splatting, that's 240 frames per second.
[21:32.000 --> 21:40.000] I think that rate of innovation is insane, and I think it was the Unity gentleman and a couple of the other folks talking about on-device at the edge.
[21:40.000 --> 21:48.000] I think right now, I'm doing a content partnership with a semiconductor company. They're running ControlNet on-device.
[21:48.000 --> 21:50.000] This content will be coming out next month.
[21:50.000 --> 21:51.000] That's wild.
[21:51.000 --> 21:56.000] I needed a crazy machine to play with this tech three or four months ago.
[21:56.000 --> 22:00.000] And then you look at what Apple's ecosystem is going to enable in all of this, I think
[22:00.000 --> 22:02.000] it's going to be absolutely magical.
[22:03.000 --> 22:06.000] You'll have your massive, behemoth, foundational models.
[22:06.000 --> 22:11.000] You'll distill them down, and a bunch of those workloads will be running at the edge, or
[22:11.000 --> 22:13.000] some sort of hybrid approach.
[22:13.000 --> 22:16.000] **[Michael]** Who here in the audience is familiar with ControlNet?
[22:16.000 --> 22:19.000] Raise your hand if you're familiar with it.
[22:20.000 --> 22:23.000] Why don't you explain ControlNet for everybody?
[22:23.000 --> 22:27.000] **[Bilawal]** I think this goes back to your earlier question of what is the thing that people have the
[22:27.000 --> 22:28.000] most negative reaction to.
[22:28.000 --> 22:35.000] I think the moment text-to-image generators came out, text, essentially words or tokens
[22:35.000 --> 22:39.000] that you describe, would condition an image.
[22:39.000 --> 22:46.000] That was really, really dissatisfying for somebody who perhaps has learned how to use a Wacom
[22:46.000 --> 22:49.000] tablet, they're proficient in Procreate or Photoshop, whatnot.
[22:49.000 --> 22:55.000] ControlNet is basically a way to take the old-school AI world, like these discriminative
[22:55.000 --> 23:00.000] models, these task-specific models, and use them to condition the image generation process.
[23:00.000 --> 23:05.000] Basically, it's two models working in concert where you could, let's say, just draw the
[23:05.000 --> 23:10.000] contours of line work, and then have this model fill in the colors, or you could give
[23:10.000 --> 23:22.000] it an existing image, extract edge maps or a depth map, which approximates the geometric structure in a scene, and use that to condition the image generation process.
[23:22.000 --> 23:29.000] It's a way of artists exerting control in this higher bandwidth fashion than just frickin'
[23:29.000 --> 23:34.000] text, because text is such a fickle way to describe the intricate composition of a scene.
[23:34.000 --> 23:40.000] That's what ControlNet is, and it's become fundamental to a lot of the AI video work, too.
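The "extract an edge map, then condition generation on it" step can be sketched in a few lines of Python. This is a hand-rolled Sobel filter standing in for the Canny preprocessor that ControlNet workflows typically use; the diffusion step itself is omitted, and the function name is ours, not any library's.

```python
import numpy as np

def edge_map(gray: np.ndarray, thresh: float = 0.25) -> np.ndarray:
    """Binary edge map from a grayscale image via Sobel gradients.

    In a ControlNet workflow, this map (plus a text prompt) conditions
    the generator, so composition follows the edges rather than only
    the text description.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    mag /= max(mag.max(), 1e-9)  # normalize gradient magnitude to [0, 1]
    return (mag > thresh).astype(np.uint8)

# A bright square on a dark background: edges appear only at its border.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = edge_map(img)
```

In practice you would hand `edges` to an image-generation pipeline as the conditioning input; libraries such as Hugging Face's diffusers expose ControlNet variants trained on exactly this kind of edge (or depth) conditioning.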
[23:41.000 --> 24:14.000] **[Michael]** Yeah, and it really gives the artist so much more control over how the image looks, because they're able to take some of their prior art, feed it through, and get Stable Diffusion, or whatever AI content system they're using, to look at that as prior art and combine and chain it together with the model that they're using.
[24:14.000 --> 24:17.000] It's become one of the fundamental tools.
[24:17.000 --> 24:25.000] What I found really exciting about it was to see the open source movement over the past
[24:25.000 --> 24:32.000] six months, introduce that in real time, and then see all of the different artists play
[24:32.000 --> 24:38.000] with that, and make really cool stuff, and post their workflows on Reddit, and then see
[24:38.000 --> 24:47.000] the evolution of what we call temporal consistency from frame to frame with AI video, like having
[24:47.000 --> 24:54.000] something that looks like an actual video instead of just a fever dream.
[24:55.000 --> 25:03.000] I taught a course in AI content at School of Visual Arts, and I had my students, basically
[25:03.000 --> 25:10.000] we would look at the latest innovations on Reddit, and try to implement that in class
[25:10.000 --> 25:15.000] and see how we did it, could we do better work than we actually reproduced that.
[25:15.000 --> 25:23.000] I'm curious, that was the process I took with my students, but what process do you all undertake
[25:23.000 --> 25:32.000] with looking at the never-ending stream of innovations in the AI space to bring that
[25:32.000 --> 25:40.000] in-house and decide whether you want to try to implement it or not, is it worth implementing
[25:40.000 --> 25:46.000] in your studio, do you want to experiment with it, because I think a lot of us here
[25:46.000 --> 25:50.000] might feel kind of overwhelmed with the pace of innovation.
[25:53.000 --> 25:54.000] Kayla, you want to start?
[25:54.000 --> 25:55.000] **[Kayla]** Yeah, sure.
[25:55.000 --> 26:03.000] So, our paradigm has always been to expose it to the players and let them identify what
[26:03.000 --> 26:08.000] is the most appealing first and foremost, so that first of all means approachability,
[26:08.000 --> 26:13.000] that means that anybody could be creator, so we're a PC-based game, we're not Web3,
[26:13.000 --> 26:20.000] we want no-code mechanics for the players, and so when people think about creators or
[26:20.000 --> 26:26.000] art, they typically think of visuals, as we discussed, computer vision and how you can
[26:26.000 --> 26:33.000] adjust image models, but with language models, players can actually be storytellers, if you
[26:33.000 --> 26:38.000] create a world that gets auto-generated, you can adjust the themes and the styles, if you
[26:38.000 --> 26:45.000] have NPCs that are there in day zero in that world, and beyond that, when you expose more
[26:45.000 --> 26:51.000] game mechanics and experiences, there's these lesser-known tools and frameworks that
[26:51.000 --> 26:57.000] you can use that actually help the players identify these mechanics that they know, that
[26:57.000 --> 27:03.000] they're all familiar with, as they're lifelong gamers, and that kind of gives them the sense
[27:03.000 --> 27:10.000] of being an architect, more so than just a single asset or piece, it's actually this entire
[27:10.000 --> 27:16.000] world. There's the Bartle taxonomy for different player types, there's the achiever, the socializer,
[27:16.000 --> 27:22.000] the explorer, the killer, and people are typically some combination
[27:22.000 --> 27:29.000] of multiple, and it's just this evolving living space where, if you can give them the tools
[27:29.000 --> 27:34.000] and give them the social environment, then it grows into this social community and eventually
[27:34.000 --> 27:39.000] these digital nations. That's, I believe, what the true metaverse could evolve
[27:39.000 --> 27:45.000] into, but it has to start with the players building it out and getting exposed to the
[27:45.000 --> 27:48.000] systems quickly.
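The Bartle taxonomy Kayla references can be made concrete with a small sketch. The action categories below and their mapping to the four player types are illustrative assumptions, not taken from any panelist's game:

```python
from collections import Counter

# Bartle's four player types, inferred from logged in-game actions.
# Which actions map to which type is an illustrative assumption.
ACTION_TO_TYPE = {
    "quest_completed": "achiever",
    "trophy_earned": "achiever",
    "chat_message": "socializer",
    "guild_joined": "socializer",
    "map_uncovered": "explorer",
    "secret_found": "explorer",
    "pvp_kill": "killer",
    "duel_started": "killer",
}

def bartle_mix(actions):
    """Return each type's share of a player's logged actions."""
    counts = Counter(ACTION_TO_TYPE[a] for a in actions if a in ACTION_TO_TYPE)
    total = sum(counts.values()) or 1  # avoid division by zero
    return {t: counts.get(t, 0) / total
            for t in ("achiever", "socializer", "explorer", "killer")}

mix = bartle_mix(["quest_completed", "chat_message", "chat_message", "secret_found"])
```

Returning a mix of weights rather than a single label matches the point that people are typically some combination of multiple types.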
[27:48.000 --> 27:54.000] **[Bilawal]** I could briefly add to that and just say it is very overwhelming, every one to three months
[27:54.000 --> 27:59.000] there's the new thing du jour, and everyone forgets about the last thing, and along with
[27:59.000 --> 28:04.000] it come a bunch of these trends manifested in content and apps and startups too, I would
[28:04.000 --> 28:09.000] say you have to have a playground mode and an architect mode is what I've been calling
[28:09.000 --> 28:15.000] it, where you just play with this stuff, don't be like, well, this is so
[28:15.000 --> 28:19.000] compute-intensive, this would never work in production, you're going to miss the wave
[28:19.000 --> 28:24.000] completely if you don't just play with this stuff, open yourself to serendipity so that
[28:24.000 --> 28:29.000] happy accidents happen, and then you go into a far more structured, regimented mode where
[28:29.000 --> 28:35.000] you apply it, and I think given the compression of development life cycles, that's the only
[28:35.000 --> 28:42.000] way perhaps to stay afloat, we were making the rising tide lifts all boats analogy,
[28:42.000 --> 28:46.000] so get on a boat first.
[28:46.000 --> 28:52.000] **[Michael]** That definitely works for me as someone who works on a small
[28:52.000 --> 29:04.000] team, but I'm curious also, in larger organizations, you were with Amazon, how do you
[29:04.000 --> 29:10.000] implement this type of thing with lots of developers, even 50 developers?
[29:10.000 --> 29:14.000] **[Bilawal]** Where do you get the legal approval to play with this stuff?
[29:14.000 --> 29:22.000] **[Vatsal]** I think the way I look at this is, going back to the fundamentals, if you're making a game,
[29:22.000 --> 29:28.000] the game actually has to be fun, and the story has to be compelling, and so all of these
[29:28.000 --> 29:36.000] innovations and GitHub projects, you have to filter them out by, will this actually be
[29:36.000 --> 29:43.000] a compelling experience for your users and players, and we kind of talked about how to think
[29:43.000 --> 29:50.000] about users, at least gamers, so I've always started with that filter of, like, who are
[29:50.000 --> 29:55.000] my customers, will they be delighted by it, or is this a distraction, because nothing
[29:55.000 --> 30:03.000] is going to replace people making really compelling core games and building this tech around it,
[30:03.000 --> 30:09.000] so yeah, that's how I sort of tend to filter out all of this noise.
[30:43.000 --> 30:44.000] **[Michael]** Make it a fun game first.
[30:44.000 --> 30:46.000] **[Vatsal]** Make it a fun game first, yeah.
[30:46.000 --> 30:55.000] **[Michael]** I'd love to dive into some top content examples that come to mind for the rest of the panel,
[30:55.000 --> 30:59.000] because I think that might be super helpful for the audience, but before we go to that,
[30:59.000 --> 31:01.000] how are we on time?
[31:01.000 --> 31:02.000] We've got five minutes.
[31:02.000 --> 31:09.000] **[Michael]** All right, so real quick, ten second content example definition from the panel, and then
[31:09.000 --> 31:14.000] maybe we have time for like, one or two questions from the audience.
[31:14.000 --> 31:21.000] Does anybody have a great content example they want to share with the audience?
[31:21.000 --> 31:23.000] **[Amber]** Like AI generated content?
[31:23.000 --> 31:28.000] **[Michael]** Yeah, like just implementations of the things that you spoke about.
[31:28.000 --> 31:32.000] **[Amber]** One of my favorite things right now is just using, like, a web browser that's, sort
[31:32.000 --> 31:35.000] of, instead of Google. I mean, that's one of my favorite product experiences
[31:35.000 --> 31:36.000] in AI right now.
[31:36.000 --> 31:40.000] But I haven't seen a lot in gaming that's really, what I'm looking for is for people
[31:40.000 --> 31:45.000] to be making things that feel really new, and I feel like that's, right now I see a
[31:45.000 --> 31:50.000] lot about efficiency, and I think it's when people are able to start connecting the kind
[31:50.000 --> 31:55.000] of information that like, one game developer couldn't have possessed, and they can create
[31:55.000 --> 31:57.000] a new kind of gaming experience.
[31:57.000 --> 31:59.000] **[Audience]** What browser was that?
[31:59.000 --> 32:04.000] **[Amber]** It's not, I guess it's just, like, instead of Google, it's my search engine
[32:04.000 --> 32:07.000] in my browser, my Chrome browser.
[32:07.000 --> 32:12.000] **[Kayla]** One of the first features we released was world generation, from like prompts to world
[32:12.000 --> 32:18.000] generation within ten seconds, and that uses a series of breaking down the worlds into
[32:18.000 --> 32:24.000] constituent components, hierarchy systems, relevancy scores, and defining the world
[32:24.000 --> 32:29.000] by those attributes, and because we wanted to build that first, that creates the layers
[32:29.000 --> 32:35.000] and the metadata that can inform exactly how that world gets built, and that's, you know,
[32:35.000 --> 32:39.000] that is somebody creating something that is in a 3D environment in a very quick amount
[32:39.000 --> 32:43.000] of time, and then just building off of that, like with the NPCs based on those world types
[32:43.000 --> 32:49.000] and such, so I thought, you know, that's at least our dream and vision of how
[32:49.000 --> 32:53.000] content can be exposed to the players in new ways.
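A prompt-to-world pipeline along the lines Kayla describes, breaking the prompt into constituent components with relevancy scores that become the metadata informing how the world gets built, might be sketched like this. The component keywords, the scoring rule, and the function name are all assumptions for illustration, not the studio's actual system:

```python
import re

# Illustrative component vocabulary; a real system would have far richer
# hierarchies and learned relevancy models.
COMPONENT_KEYWORDS = {
    "terrain": ["desert", "forest", "island", "mountain", "swamp"],
    "climate": ["frozen", "tropical", "rainy", "arid"],
    "era":     ["medieval", "futuristic", "ancient", "modern"],
    "mood":    ["haunted", "peaceful", "war-torn", "festive"],
}

def build_world_metadata(prompt):
    """Break a prompt into components with relevancy scores, producing
    the metadata layer that would inform how the world gets built."""
    words = re.findall(r"[a-z-]+", prompt.lower())
    metadata = {}
    for component, keywords in COMPONENT_KEYWORDS.items():
        hits = [w for w in words if w in keywords]
        if hits:
            # Toy relevancy rule: earlier mentions in the prompt score higher.
            score = round(1.0 - words.index(hits[0]) / max(len(words), 1), 2)
            metadata[component] = {"value": hits[0], "relevancy": score}
    return metadata

world = build_world_metadata("a haunted medieval forest under endless rain")
```

The resulting dictionary is the kind of layered metadata that downstream steps (NPC placement, styling) could key off.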
[32:53.000 --> 32:57.000] **[Michael]** So that's a good, that's a good practical approach to take home, start with the world,
[32:57.000 --> 32:59.000] and go from there.
[32:59.000 --> 33:01.000] Bilawal anything?
[33:01.000 --> 33:05.000] **[Bilawal]** Yeah, I would say one example that sort of weaves together a bunch of stuff that's interactive,
[33:05.000 --> 33:10.000] a little biased because I worked on this, which is Immersive View inside of Google Maps.
[33:10.000 --> 33:15.000] It's the next generation of Google Maps, and basically it's like a triple A game engine,
[33:15.000 --> 33:18.000] streaming pixels from the cloud to any device.
[33:18.000 --> 33:22.000] It's the full, like, you know, uncapped 3D model of the world.
[33:22.000 --> 33:26.000] It has NeRFs, so if you want to take a look inside, you can see the volumetric representation,
[33:26.000 --> 33:29.000] and then layered on top of it is all the simulation data.
[33:29.000 --> 33:32.000] What is the weather like right now, and what will it be in the future?
[33:32.000 --> 33:36.000] And it's just like such a utilitarian scaffolding, but if you take that,
[33:36.000 --> 33:41.000] imagine the games that can be told on top of that, especially if you have the AR mirror to it.
[33:41.000 --> 33:44.000] Like, I think what people don't realize, like generative AI, again,
[33:44.000 --> 33:47.000] is going to be so crucial to populating these canvases,
[33:47.000 --> 33:53.000] but there's so many other primitives that are sitting around from, like, VPS to, like, 3D tiles
[33:53.000 --> 33:57.000] that people aren't applying or leveraging to the full ability
[33:57.000 --> 34:01.000] that are there as, like, foundations to then put generative on top of.
[34:01.000 --> 34:05.000] **[Michael]** Absolutely. Any questions from the audience?
[34:05.000 --> 34:07.000] Yes, in the back.
[34:07.000 --> 34:11.000] **[Vish]** Hi. My name is Vish, first of all. Great job.
[34:11.000 --> 34:13.000] All of you are always smiling at me.
[34:14.000 --> 34:19.000] I am the legal that Bilawal was just joking about.
[34:19.000 --> 34:26.000] My job, as an attorney, is to help gaming companies, you know, figure out
[34:26.000 --> 34:33.000] how to navigate these amazing and exciting developments based on a legal landscape
[34:33.000 --> 34:40.000] that was set years ago and really hasn't addressed this kind of content yet.
[34:41.000 --> 34:46.000] A big issue that always comes up, and especially with clients
[34:46.000 --> 34:50.000] who are very, very excited to exploit this and obviously want to get into it,
[34:50.000 --> 34:54.000] like you said, want to get on the boat, see where all of these things go,
[34:54.000 --> 35:01.000] is ownership rights and how ownership gets allocated
[35:01.000 --> 35:04.000] and the rights associated with that. I'm sure you see that in licensing.
[35:04.000 --> 35:07.000] I'm sure that's why a lot of the authors that you're seeing,
[35:07.000 --> 35:11.000] about Emily Hart's painting and all of that, are having an issue.
[35:11.000 --> 35:18.000] How do you guys see ownership and where, how much exclusive rights you have
[35:18.000 --> 35:22.000] to the content that you will be able to generate from all of this?
[35:22.000 --> 35:27.000] How do you really feel that is affecting your current state of learning,
[35:27.000 --> 35:29.000] as an attorney, or wherever you want to go with that?
[35:32.000 --> 35:36.000] **[Amber]** I think we're just preparing for ending up in court one day.
[35:37.000 --> 35:43.000] I don't know. I mean, licensing just isn't going to be protected by this, right?
[35:43.000 --> 35:46.000] So you just have to prepare for somebody to be the one to go to court.
[35:46.000 --> 35:49.000] And also, do the players own the content that's generated?
[35:49.000 --> 35:54.000] Like if we augment a Batman storyline and come up with a new thing, that's wonderful.
[35:54.000 --> 35:57.000] But does the player own it because they prompted for it?
[35:57.000 --> 35:59.000] Do we own it because our system generated it?
[35:59.000 --> 36:02.000] Or does Warner Brothers own it because they own Batman?
[36:02.000 --> 36:06.000] I just don't think we're going to know that until we go to court.
[36:11.000 --> 36:12.000] **[Michael]** That's dark.
[36:15.000 --> 36:18.000] **[Amber]** The war chest of legal fees is waiting for these.
[36:18.000 --> 36:22.000] We've been interviewing lawyers and like, whose kids are we sending to college?
[36:25.000 --> 36:31.000] The real truth is what you said too, because the law really hasn't been set, right?
[36:31.000 --> 36:35.000] We really have to go to court to establish certain things.
[36:35.000 --> 36:42.000] And again, I think in the next 10, 15 years, there will be a push.
[36:42.000 --> 36:50.000] Again, you know, that's based on historical data, what Disney has done to, you know, keep extending its copyright terms and stuff.
[36:50.000 --> 36:56.000] If enough people, enough entertainment and media companies, especially in Hollywood,
[36:56.000 --> 37:00.000] you know, contributing hundreds of billions of dollars to the global economy,
[37:00.000 --> 37:08.000] when there is more of a lobby push, based on the fact that this is where things are at, you know, what happened with YouTube,
[37:08.000 --> 37:14.000] I think there is going to be a lot more legislative change, if that's going to happen.
[37:14.000 --> 37:19.000] So I think sooner rather than later, we will also see some sort of progress.
[37:21.000 --> 37:27.000] **[Michael]** Thank you. We're unfortunately out of time for this panel, but connect with us on LinkedIn.
[37:27.000 --> 37:41.000] Look for us on Twitter. We'll share the notes.
[37:41.000 --> 37:47.000] I hope you got some good details to take back for your go-to-market in this space.