By now, Haven Studios should feel comfortable working on the technological edge.
After all, much of the studio’s founding team came from Google, where they worked to create first-party Stadia games that would showcase the power of cloud-streaming technology.
Haven is with Sony these days – the PlayStation maker invested in the start-up and fully acquired it earlier this year – but its developers still rely on cloud technology for hybrid remote/local development, and want to push that envelope to see how other new technologies can change the way games are made.
“We really have this vision to bring to life a new kind of player-centric, community-driven game. And that’s why these investments are very important.”
Jade Raymond
At the studio’s official opening of its Montreal offices last week, Haven co-founder Jade Raymond said more than 30% of the studio is currently working on machine learning, artificial intelligence and cloud-based development tools.
“We really have this vision to bring a new kind of player-centric, community-driven game to life,” said Raymond. “And these investments are very important for that.”
While it’s not entirely clear what kind of payoff these investments will have, Haven offered an early look at the technology it believes could eventually be used in game development.
Logan Olson, Haven’s director of creative machine learning, first showed a short clip from the HBO sci-fi show Westworld, in which a person working with a futuristic user interface describes a story and a computer generates a rough image that adapts in real time as the details of the story change. Though it’s nowhere near practical right now, Olson said it’s a “pretty radical North Star” for people to work toward.
Given the strides AI text-to-image models like Midjourney and DALL-E 2 have made in recent years, it might not be as far off a goal as it seems, either, he said. Such programs are now beginning to handle video and 3D models as well, and Olson said the rate of progress is so fast that it’s difficult to keep up.
As for what Haven itself is working on, Olson demonstrated a differentiable rendering tool that takes a photorealistic 3D model of a human head and changes the image based on a text prompt, be it “evil clown”, “aquatic alien”, or “werewolf”. He cautioned that a number of hurdles remain before such results could be put to practical use by a development team, but he was clearly excited about the future potential.
Following Olson’s demonstration, Raymond sat down with Leon O’Reilly, Haven’s CTO, and Mark Cerny, lead system architect of the PS5, to discuss the potential of machine learning to transform the way games are made. Cerny began by reflecting on all that has changed over his decades-long career in the industry.
“Of course, looking back, CPUs and GPUs are exponentially more powerful, and we talk about them quite a bit,” Cerny said. “What we tend not to talk about is the corresponding increase in the power of the tools we use to create games.”
He recalled working on his arcade hit Marble Madness in 1984 when changing a single pixel of the graphics took several hours.
“It was a multi-step process that involved things like burning the graphics data into ROMs and physically inserting the ROMs into a prototype motherboard,” Cerny said. “And of course that’s terrible because fast iteration is key. Rapid iteration has been key to the entire history of video games. We never get it right the first time.”
When the tools are too cumbersome, developers have a harder time turning their ideas into reality, Cerny said, adding that he was excited about the technology Olson was demonstrating and how it could help more developers realize their visions.
O’Reilly himself has a long history in development, starting with the ZX Spectrum. He said iteration times in games steadily improved after the days Cerny described, but only to a point – they’ve actually gotten worse in recent years.
“Iteration times probably peaked in the early 2000s when … the datasets were small enough that we could iterate quickly.”
Leon O’Reilly
“Iteration times probably peaked in the early 2000s, when the codebases we had were lean enough and small enough, the datasets small enough that we could iterate quickly,” O’Reilly said. “But as we moved away from that and started making these big open-world games with lots of assets and bigger teams, the iteration times got really tough. There’s a lot more data to process than before, and the codebases themselves are really exploding in size.”
While many of the codebases in use today date back to the early 2000s era, O’Reilly said so many extra layers have been piled on top of them that they’ve lost their original benefits.
Raymond reiterated the importance of iteration speed: “It doesn’t take too many games to understand that the quicker you get to a good picture and understanding of what your game is — and then the more you can iterate on it — that’s how good your game will be.”
Acknowledging Haven’s roots with Stadia, she said the team is excited about what the cloud can do for the end-user experience, but was more interested in what it could do for game engines and development tools. She said she sees machine learning experiments in game development in a similar way.
Cerny was particularly interested in how tools like the one Olson demonstrated could be used to quickly create lots of content that might not be “directly publishable AAA content” but is good enough to convey an idea.
“…before the game even starts, the whole process of conceptualization will become so much freer and faster”
Mark Cerny
“The first thing I see, before the game even starts, is that the whole process of conceptualization is going to be so much freer and faster,” Cerny said.
“Today, without those tools, we walk into a conference room, we talk, we doodle on a whiteboard, and if we kind of know where we’re going, we send for storyboards and a week or two later we have a conversation based on that. You can see that we will soon be communicating at the speed of AI. We’ll be able to converge much faster on how our world feels and how a level feels.”
He added that the technology could enable “mid-size” teams to do a lot more.
“That’s really important because what we do is a creative endeavor,” Cerny said. “Yes, you can brute-force it. Yes, you can put 800 or a thousand people on your game, but something like maintaining thematic coherence throughout the project when you have such a high headcount? At some point the left hand doesn’t know what the right hand is doing.”
He also echoed Olson’s point about the potential of such tools for unique custom content, saying, “What’s important is that you express your individuality as a player, and it’s a little less important if a serious artist looks at that and says, ‘You know, we could do a little better here.’”
Raymond concluded, “Our North Star at Haven is creating a new IP that can be owned by the fans, where the fans are really pushing that. Once you start a service game, the fans take over. And what does that mean, and where can we take that if we can get those tools to the players?”