3D Scanning for Video Games

Scanned 3D content in games

Introduction

Real-time 3D has seen an amazing evolution over the years. The hardware can push more and more vertices and pixels, spitting out 30 to 60 full-HD images per second. But compared to offline rendering, let alone real-world images, there is still a long way to go. A realistic lighting model is one thing real-time applications have to sacrifice to hit their frame rate. The geometric complexity (number of triangles) of objects needs to be kept in check to match the processing capacity of GPUs, and the resolution of their textures needs to be limited to fit into memory.

But cranking up the graphics quality in video games is not solely a matter of having more hardware resources for rendering. Developers need to create high-quality content in the first place. And they need to create A LOT of content, especially for big productions like GTA, Assassin’s Creed or Destiny. Content creation for video games has traditionally involved manually sculpting and painting objects, a time-intensive task that requires a lot of skill.

A different approach is 3D scanning: a technique where real-world (physical) objects are captured and reconstructed as digital 3D representations. It has been around for quite some time, and a number of game developers have been using it to create highly detailed content for their games. The end results look amazing, and it seems only natural that more and more developers will experiment with 3D scanning techniques. In this post, I'll take a closer look at some of the approaches and projects that sparked my interest.

3D Facial and Body Scanning

One of the hardest things to sculpt and paint manually is the human face. We see faces every day and immediately notice when some of the details are not right. Instead of spending huge amounts of time painting human characters, VFX studios started to use 3D facial and body scanning services in the nineties to create digital doubles for movie actors. A great overview of the history of 3D scanning in VFX can be found in the February 2006 issue of Computer Graphics World.

In the early days, game developers didn’t have much need for scanning, as low-resolution game characters were still manageable to paint by hand. As game hardware grew more powerful, this changed and developers wanted more realistic-looking, high-quality characters. The same companies that provided scanning services for the VFX industry started to work with game developers.

One of the earliest examples is Sony’s The Getaway for the PlayStation 2 in 2002. The team from Sony collaborated with Eyetronics, using their ShapeSnatcher Suite to scan the characters for the game. At the time, Eyetronics had portable scanning technology that used structured light to create a 3D scan from a single image. A presentation from 2006 by Dirk Callaerts (their CEO) about their scanning technology can still be found here.

Another great example is the collaboration between Sony Computer Entertainment America and 2K Sports to capture the 1,400-plus players of American Major League Baseball in 2006. This was a huge undertaking, and both companies, fierce competitors in the sports game market, created a joint venture to split the cost. Again, Eyetronics got the scanning contract. During the project, Eyetronics managed to limit the scanning time per person to 90 seconds. Cleaning up the data took considerably longer: an average of five hours per head. One or two hours were needed to pre-process the data; the second phase (about three hours) entailed putting the model together, retouching the information to patch up holes in the geometry, applying the textures and prepping the model for delivery. At the end of the project, Sony and 2K Sports had a high-quality mesh and a 2K-by-4K color texture map representing the head of nearly every player in the MLB. A detailed description of the project is worth the read; you can find it in the March 2007 edition of CGW.

If you search the web today (anno 2014), you’ll find many studios providing scanning services for humans. These studios have 360-degree scanning setups (a state-of-the-art version can be seen here) that use an array of calibrated high-end cameras to capture every detail of a person instantaneously. Photogrammetry, a technique that takes photographs of an object from all angles as input, is then used to reconstruct a digital version of the person.
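
To give a rough idea of what happens under the hood: the first step of most photogrammetry pipelines is detecting and matching feature points between overlapping photographs, which are then triangulated into a 3D point cloud. Below is a minimal sketch of that matching step using OpenCV; the file names are placeholders, and commercial packages wrap the full pipeline for you.

    # Minimal sketch of the first photogrammetry step: matching feature
    # points between two overlapping photos (file names are hypothetical).
    # A full pipeline would triangulate such matches into a 3D point cloud.
    import cv2

    img1 = cv2.imread("photo_01.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("photo_02.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect scale-invariant keypoints and compute descriptors.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors and keep only unambiguous matches (Lowe's ratio test).
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    print(f"{len(good)} reliable matches; the more overlap, the better the scan.")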

To get animated characters, the scans are usually mapped to a pre-rigged model. Some games focus heavily on performance capture of faces to communicate all sorts of emotions in the most realistic way; great examples are L.A. Noire, Heavy Rain and Beyond: Two Souls. Together with the motion capture, high-resolution animated textures for the faces ensure that the graphics match the depth of the animation for a believable end result.

[Screenshot from Beyond: Two Souls by Quantic Dream]

Scanning of objects

The scanned subject isn’t limited to humans, of course. Scanning studios have been using their rigs to scan all sorts of objects, ranging from animals and cars to all kinds of food (even a corn flake).

Surprisingly, it seems that the scanning of objects has primarily been done for visual effects in the movie and advertising industry. For game development, there are still some production issues to overcome, as well as the challenge of getting the content to run on mainstream hardware.

But things are moving in the industry. One of the driving forces is the Polish game developer The Astronauts, who have been advocates of photogrammetry (one of the 3D scanning techniques) for some time. Above all, as Andrzej Poznanski elegantly points out in his extensive blog post, photogrammetry allows for truly photorealistic assets. In the physical world, no two stone bricks are exactly identical: dust builds up in specific places according to the wind, surfaces erode visibly from rain or foot traffic, and so on. The real world is full of small details, and it is the lack of this detail in the virtual world that immediately breaks the illusion. Sculpting and painting these subtle details is nearly impossible to get right, or, from an economic standpoint, not feasible due to the insane amount of work and time it would take.

I first saw the impact of using photographic data for real-time rendering in 2011, when I was working on virtual texturing at the Multimedia Lab at the University of Ghent. Charles Hollemeersch, my colleague then and now co-founder at Graphine, had created a demo for the VT system using aerial imagery and a heightmap of Antelope Island in Utah. Because of the extremely high resolution of the content (122,000 × 122,000 pixels), the real-time demo looked amazing. Of course, lighting was baked in, giving it, as you expect from a photograph, a photorealistic look (see screenshots below). As great as this demo was back then, its 2.5D approach is only practical for specific use cases (great-looking flight sims, though). Photogrammetry turns photographs into textures of full 3D objects, resulting in texture data with photographic quality. This is what we are looking for in many games and most visualization applications.
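
To put that resolution in perspective, here is the back-of-the-envelope arithmetic (my own numbers, assuming an uncompressed 32-bit format) that makes virtual texturing a necessity rather than a luxury:

    # Back-of-the-envelope: why a 122,000 x 122,000 texture needs streaming.
    # Assumes uncompressed 8-bit RGBA; actual formats/compression will differ.
    width = height = 122_000
    bytes_per_pixel = 4                      # RGBA8
    base_size = width * height * bytes_per_pixel
    with_mips = base_size * 4 / 3            # a full mip chain adds ~1/3

    print(f"base level: {base_size / 2**30:.1f} GiB")    # ~55.4 GiB
    print(f"with mips:  {with_mips / 2**30:.1f} GiB")    # ~73.9 GiB
    # No graphics card holds that in VRAM, so a virtual texturing system
    # streams in only the tiles that are visible, at the mip level needed.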

[ Screenshot from the real-time Antelope Island demo ]

[ Screenshot from the real-time Antelope Island demo with virtual texture tiling view ]

Scanned Objects and Environments in Games

For The Vanishing of Ethan Carter (released in 2014), the team from The Astronauts captured a number of objects (buildings, old trains, big rocks, …) that are impossible to physically bring to a studio. Luckily, photogrammetry software can handle images taken from non-calibrated camera positions as long as there is enough overlap between them. In other words, if you take enough pictures from all angles, the software should be able to reconstruct a digital 3D representation of the object. However, to get the best scan, a lot of things need to be taken into consideration: lighting conditions need to be right (direct sunlight, for example, is best avoided) and the camera needs to be configured properly. Once you master this process, the result can be amazing, as the team from The Astronauts has shown.
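
As an illustration of the "camera configured properly" part: photogrammetry solvers generally prefer a photo set shot with consistent settings (fixed focal length, moderate ISO, no flash). A quick sketch, assuming a recent Pillow and a hypothetical folder of JPEGs, that flags outliers before you feed hundreds of photos into the reconstruction software:

    # Sanity-check a photo set before reconstruction: consistent camera
    # settings make life easier for the photogrammetry solver.
    # Assumes a recent Pillow and JPEGs in ./scan_photos (hypothetical folder).
    from pathlib import Path
    from PIL import Image, ExifTags

    def camera_settings(path):
        # Photographic settings live in the Exif sub-IFD of the EXIF data.
        ifd = Image.open(path).getexif().get_ifd(ExifTags.IFD.Exif)
        named = {ExifTags.TAGS.get(tag, tag): value for tag, value in ifd.items()}
        return (named.get("FocalLength"), named.get("FNumber"),
                named.get("ISOSpeedRatings"))

    groups = {}
    for photo in sorted(Path("scan_photos").glob("*.jpg")):
        groups.setdefault(camera_settings(photo), []).append(photo.name)

    for (focal, fnumber, iso), files in groups.items():
        print(f"focal={focal}mm f/{fnumber} ISO {iso}: {len(files)} photos")
    # More than one group usually means the settings drifted mid-shoot.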

[ Screenshot from The Vanishing of Ethan Carter ]


Another great example of scanned content in games is Get Even (to be released) from The Farm 51. Their teaser trailer (below) has gotten a lot of attention and has been watched nearly a million times on YouTube. They are scanning whole environments for Get Even. Wojciech Pazdur goes into some of the challenges and solutions of this approach in his presentation at Digital Dragons.

[Teaser for Get Even by The Farm 51 showing their scanned environments ]

[Raw environment scan used in Get Even]

One thing the developers at The Farm 51 want to avoid as much as possible is texture magnification. When the resolution of a texture is small compared to the screen resolution and the size of the virtual object, you’ll see blurry textures, because the renderer upsamples the texture to fill the screen pixels. Thanks to their scanning approach, they have enough texture content to fill all the pixels, even when the virtual camera gets really close to the environment. The challenging part is using this amount of content in a real-time 3D engine. If you want to see Unreal or Unity brought to its knees, try using dozens of 8K-by-8K textures in a single room. Most graphics cards don’t have enough video memory to load this content, and even if they did, loading that many textures would cause horrible stalls or popping effects.
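
The magnification threshold is easy to reason about: the GPU picks a mip level from the ratio of texels to screen pixels, and once that ratio drops below one (LOD < 0) there is simply no more detail to show, so the sampler interpolates and the result looks blurry. A simplified sketch of that calculation (real hardware derives it per pixel from screen-space UV derivatives):

    # Simplified mip-level selection: how close can the camera get before a
    # texture starts to magnify (blur)? Real GPUs compute this per pixel;
    # this is the same idea at object level.
    import math

    def mip_level(texture_size, screen_pixels_covered):
        """LOD = log2(texels per screen pixel); negative means magnification."""
        return math.log2(texture_size / screen_pixels_covered)

    # A 2048-texel-wide texture covering 1024 screen pixels: minified, sharp.
    print(mip_level(2048, 1024))   #  1.0 -> renders from mip 1
    # The same texture stretched across 4096 pixels: magnified, blurry.
    print(mip_level(2048, 4096))   # -1.0 -> no detail left, sampler interpolates
    # An 8K scan in the same spot still has texel headroom.
    print(mip_level(8192, 4096))   #  1.0 -> sharp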

For The Vanishing of Ethan Carter, the team from The Astronauts clearly had to make some compromises to ship their game. Downscaling textures has traditionally been the method to reduce memory footprint, combined with creative re-use of assets throughout the world. Luckily, even with texture magnification (which gamers have grown accustomed to), their skilful use of photogrammetry has resulted in a beautiful and atmospheric game.


The Farm 51 will take a different route, extending their game engine with the Granite SDK texture streaming engine (by Graphine). Granite can easily handle large amounts of texture content: the combination of virtual texturing with a highly optimized tile-based texture streaming system keeps much less data in memory, without compromising on quality. A constant texture cache of around 400 MB of VRAM is enough to support full-HD rendering using the thousands of 8K textures that result from the scanning pipeline of The Farm 51. Granite can stream in the content fast enough to avoid noticeable popping or loading spikes, in part thanks to its on-disk compression, which makes more efficient use of disk bandwidth. An additional benefit is that around twice as much texture content can fit in the game for the same size on disk.
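
Some rough arithmetic (my own, with assumed tile and format sizes; Granite's actual internals may differ) shows why a tile cache of that size is plausible: a frame can't reference many more unique texels than it has pixels, so only the visible tiles need to be resident.

    # Why a ~400 MB tile cache can serve thousands of 8K textures: a frame
    # can only sample roughly as many texels as it has pixels, so only the
    # visible tiles need to be resident. Tile size and format are assumptions.
    TILE = 128                              # 128x128 texels per tile (assumed)
    BYTES_PER_TEXEL = 1                     # ~1 byte/texel with block compression
    CACHE_BYTES = 400 * 2**20

    tiles_in_cache = CACHE_BYTES // (TILE * TILE * BYTES_PER_TEXEL)
    texels_in_cache = tiles_in_cache * TILE * TILE
    screen_texels = 1920 * 1080             # upper bound on unique texels/frame

    print(f"cache holds {tiles_in_cache:,} tiles = {texels_in_cache:,} texels")
    print(f"a full-HD frame samples at most ~{screen_texels:,} unique texels")
    # The cache holds about two orders of magnitude more texels than one frame
    # needs, leaving headroom for mip chains, tile borders and prefetching.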

Finally, an ambitious project worth mentioning is the game Re-roll, in production at the Canadian company Pixyul. Their post-apocalyptic RPG will take place in a world closely resembling ours. Their plan is to scan the entire world, area by area; the first instalment should be the city of Montreal. Not much is known about the project, but a short teaser and description can be viewed below.

[Teaser for Re-roll by Pixyul]

Wrapping up

3D scanning has been used extensively in the VFX industry for all kinds of movies, and game developers have been using similar technology for their characters for over a decade. However, the scanning of objects and entire environments doesn’t seem to have been done for games before The Vanishing of Ethan Carter and Get Even. Both games make use of photogrammetry, as it doesn’t need highly specialized (and expensive) hardware: it only requires a high-quality camera, the right processing software and good care in taking the photographs to get assets that are nearly impossible to paint by hand.

I’m sure there are other real-time projects that make use of photogrammetry and 3D scanning in general. If you know of other video games that have been using 3D scans or are planning to make extensive use of photogrammetry, let me know: aljosha@graphinesoftware.com
