Video Game Graphics vs. Simulator Graphics: The Gap is Closing

By Andre Demers
Product Marketing Manager
November 20, 2018

You carefully pull back on the stick of your AH-64 Apache and slowly climb out of the lush jungle valley. A flock of birds scatters from a nearby tree. The sun rising at your back, you gently crest the hill before you, getting a clear view of the plain ahead. In the distance, a convoy of six vehicles snakes along a dusty road beside a shimmering river. In a series of heavily rehearsed actions, you consult your radar, confirm your targets, select your ordnance, and release a deafening flurry of AGM-114 Hellfire missiles. Streaking across the bright green canopy, each projectile finds its target as vehicle after vehicle erupts into a giant fireball, punctuating the otherwise calm landscape with orange flames and thick, black smoke.

It is time to head back to base.

Why can’t my simulation look as good as my teenager’s video game?

Unfortunately, this spectacular scene is not from a state-of-the-art helicopter simulator, but from a recent video game release.

For those working in the simulation or simulator industry, it’s a common refrain: “Why can’t my simulation look as good as my teenager’s video game?”

The answer is simple: Video games only need to look good.

And gosh, do they ever.

What’s the Difference?

The good news for those in the simulation (or synthetic training) industry is that the gap is now closing faster than ever. Software, assets, formats, and engines are playing increasingly bigger roles in the creation, development, and export of simulator projects.

In scenarios like the one described above, video game developers sometimes pour tens of millions (!) of dollars into creating fun-to-drive vehicles, spectacular weapons, and rich environments complete with patterns of life, fauna, and atmospheric effects, all wrapped in movie-like sound effects and music. Of course, these vehicles are far from realistic, and the environments are rarely geographically accurate, nor do they contain attributes that make them anything more than entertaining or pretty. So how does a simulation differ? Let me explain.

In a video game, the helicopter – or ownship – is of course massively simplified. Sure, it will throttle, pitch, yaw, and roll, but that’s all. In a helicopter simulator, the rotor blade physics, the engine power, the weight of the fuel and cargo, and thousands of other variables all come into consideration and factor into the calculations, whether a pilot is taking off, maneuvering, evading, landing, or deploying weapons. In addition, all of these actions depend on hundreds of environmental attributes (altitude, wind, temperature, etc.) as well.
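
To illustrate the difference, here is a minimal, hypothetical sketch of the kind of environmental bookkeeping a simulator folds into even a single force calculation. The formulas and constants are simplified textbook physics, not any real flight model:

```python
import math

def rotor_thrust(rpm, collective, altitude_m, temp_c):
    """Toy thrust estimate: thrust scales with air density,
    which falls with altitude and rises in colder air."""
    temp_k = temp_c + 273.15
    pressure = 101325.0 * math.exp(-altitude_m / 8434.0)  # barometric approximation
    density = pressure / (287.05 * temp_k)                # ideal gas law
    # Thrust ~ density * (blade speed)^2 * blade pitch (collective); 1e-4 is an
    # invented scaling constant for the example
    return density * (rpm ** 2) * collective * 1e-4

def can_hover(thrust_n, airframe_kg, fuel_kg, cargo_kg, g=9.81):
    """Hover requires thrust to exceed total weight -- a weight
    that changes in flight as fuel burns off."""
    return thrust_n > (airframe_kg + fuel_kg + cargo_kg) * g
```

A game can get away with a constant thrust value; a simulator has to recompute figures like these every frame as fuel burns off and the aircraft climbs into thinner air.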

In the scenario above, a game will not calculate the actual speed of the Hellfire missile, the speed and direction of the convoy, the weight of the vehicles, or the level of damage that will occur. In a simulator, thousands of such calculations occur in real time, all while the system continues to display your out-the-window (OTW), electro-optical (EO), thermal, or infrared sensor views. And this is true of everyone participating in the simulation, whether in other aircraft or on the ground. But more on that later.
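
As a rough sketch of the bookkeeping involved, each simulation frame might advance an engagement like this. This is simple pure-pursuit kinematics with invented numbers, not a real weapons model:

```python
DT = 1.0 / 60.0  # one 60 Hz simulation frame

def step_engagement(missile, target, dt=DT):
    """Advance one frame: move the target along its route, steer the
    missile toward it, and check for intercept."""
    # The convoy vehicle keeps moving at its own speed and heading
    target["x"] += target["vx"] * dt
    target["y"] += target["vy"] * dt
    # The missile closes on the target's *current* position
    dx, dy = target["x"] - missile["x"], target["y"] - missile["y"]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < missile["speed"] * dt:
        return "impact"  # a damage model would take over from here
    missile["x"] += missile["speed"] * dt * dx / dist
    missile["y"] += missile["speed"] * dt * dy / dist
    return "in_flight"
```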

Setting vehicles and weapons aside, game environments are designed mainly for entertainment. In simulators, generic or geo-typical environments can be used when training skills. But in instances where mission planning or training requires accurate, true-to-life detail, such as an airport, geo-specific environments are crucial. If a building has two doors facing east in real life, then the simulation must have the same doors, facing the same direction. Taking it a step further, environments and objects can be assigned material attributes. For example, is the road surface rock, gravel, earth, or sand? Are buildings glass, wood, steel, or concrete? Or a mix of all four? By designating materials on buildings, vehicles, and vegetation, simulators take a giant leap forward in two ways:

  1. Materials react to atmospheric conditions, e.g., some surfaces heat up faster in the sun or cool slowly at night; and
  2. The varying temperatures and types of materials are realistically emulated in specific sensors like infrared, radar, or night vision (see the sketch below).
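
To make that concrete, here is a toy illustration of how a material table might drive a thermal sensor model. The material names, emissivity figures, and thermal-lag values are invented for the example and are not from any Presagis product:

```python
# Hypothetical material table: emissivity and how fast a surface
# tracks air temperature (thermal lag); all values invented.
MATERIALS = {
    "concrete": {"emissivity": 0.91, "lag": 0.2},  # heats and cools slowly
    "steel":    {"emissivity": 0.60, "lag": 0.8},  # tracks air temp quickly
    "sand":     {"emissivity": 0.90, "lag": 0.5},
}

def surface_temp(material, air_temp_c, solar_load_c):
    """Blend ambient temperature and solar heating by the material's lag."""
    return air_temp_c + MATERIALS[material]["lag"] * solar_load_c

def ir_radiance(material, air_temp_c, solar_load_c):
    """Stefan-Boltzmann-style radiance: emissivity * sigma * T^4."""
    t_k = surface_temp(material, air_temp_c, solar_load_c) + 273.15
    return MATERIALS[material]["emissivity"] * 5.67e-8 * t_k ** 4
```

The same scene geometry thus renders differently in the thermal channel depending entirely on what each surface is made of.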

Many games have thermal or night-vision modes, but these sensors are approximate representations that are not physics-based like those in simulators.

Simulators require the utmost vehicle and performance realism for training purposes. Vehicles, weapons, sensors, and environments should react as realistically as possible in order to be effective training tools. It is because of this priority that the visual aspect of synthetic training has taken a back seat to physical realism.

Until now.

Mind the Gap

The push to make games more realistic – visually and functionally – is greatly benefiting the simulation market. The technology being developed and deployed to make games and movies more life-like is finding its way into the hands of those creating simulation databases. Everything from realistic terrain generation and high-quality 3D models to motion capture and physics engines is now being used to heighten the realism and immersion.

Nowhere is this trend better encapsulated than in game engine technology. Today, the most popular games run on engines such as Unity™ or Unreal Engine™ to deliver high-quality on-screen visualization. These engines are optimized to provide real-time, eye-popping visuals. Things like high-resolution textures, lens flares, volumetric lighting, vegetation, and explosions are being perfected by massive teams of programmers and artists in order to achieve stunning graphics. In the last year, technology developers such as Presagis have extended their tools to facilitate exporting terrains and simulations into game-engine-compatible formats. A format such as FBX is widely used in the gaming industry, and many content creation tools like Creator™ (used to create 3D models) and Terra Vista™ (for building 3D environments) now let users export to FBX so that game engines can be included in the production pipeline.
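
As an illustration of how FBX acts as the bridge, here is a minimal Blender Python sketch that re-exports a terrain mesh as FBX for a game engine. The file paths are hypothetical, and the operators shown are Blender's standard import/export calls (circa the 2.8x era), not Presagis APIs:

```python
# Run inside Blender's Python environment (bpy is Blender's built-in API).
import bpy

# Import a terrain tile previously exported from a content-creation tool
bpy.ops.import_scene.obj(filepath="/data/terrain_tile_042.obj")

# Re-export as FBX so a game engine (Unity, Unreal) can consume it
bpy.ops.export_scene.fbx(
    filepath="/data/terrain_tile_042.fbx",
    use_selection=False,                    # export everything in the scene
    apply_scale_options="FBX_SCALE_ALL",    # bake units into the file
)
```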

The push to make games more realistic – visually and functionally – is greatly benefiting the simulation market.

For their part, simulator visualization tools, such as Vega Prime by Presagis, now use many of the same assets as gaming engines in order to increase realism. Its modular architecture allows for the quick integration of software such as SilverLining™ (by Sundog®) and SpeedTree™. Unlike a game engine, Vega Prime is specifically geared toward realistic representation of every aspect of a simulation. Whether the stars in the night sky are accurately placed affects a pilot’s ability to navigate, and wind speed influences not only an aircraft’s flight but also the sea state and the foliage. Seemingly insignificant items like the position of a star, the height of fog, or even the temperature (thermal view) of a water tower are crucial in recreating environments that will properly train military personnel in a way that helps them when they execute missions in the real world.

This is where the visual display of a simulation becomes so much more challenging than that of a game.

Don’t Forget the Hardware

In a typical game, an engine is mostly concerned with rendering graphics, as close to real time as possible, onto a single screen (or channel). Rendering environments in real time is a delicate balance of level of detail (LOD), terrain paging, and processing power. For simulations, the requirements are much more stringent: not only are you dealing with anywhere from three to 20 channels (screens), but they all need to be synced, correlated, and responsive.
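
As a toy example of one piece of that balance, distance-based LOD selection might look like the sketch below. The distance bands are invented for the example:

```python
# Hypothetical LOD bands in meters: closer objects get denser meshes.
LOD_BANDS = [(500.0, 0), (2000.0, 1), (8000.0, 2)]  # (max_distance, lod_index)

def select_lod(distance_m):
    """Pick a level of detail from camera distance. Every channel must
    make this choice identically, or adjacent screens will disagree on
    what a building looks like at the seam between projectors."""
    for max_dist, lod in LOD_BANDS:
        if distance_m <= max_dist:
            return lod
    return 3  # coarsest mesh beyond the last band
```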

To accomplish this, simulators often use dedicated systems called image generators, or IGs. These image-generating platforms are dedicated to the task of pushing visuals to a simulator as quickly, and as tightly synchronized, as possible. Because of these heavy graphical processing loads, it is important to build, optimize, and streamline the way databases are created and published.
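
Conceptually, an IG cluster swaps frame buffers only once every channel has finished rendering. The sketch below illustrates that idea with the classic barrier pattern, using threads as stand-ins for channels; it is a conceptual illustration, not how any particular IG is implemented:

```python
import threading

NUM_CHANNELS = 5
# No channel presents its frame until all channels have rendered it,
# keeping the out-the-window view consistent across every screen.
frame_barrier = threading.Barrier(NUM_CHANNELS)

def render(channel_id, frame):
    pass  # placeholder for the actual draw calls

def present(channel_id, frame):
    print(f"channel {channel_id} presents frame {frame}")

def channel_loop(channel_id):
    for frame in range(3):            # a few frames for illustration
        render(channel_id, frame)     # each channel renders independently...
        frame_barrier.wait()          # ...but all of them swap in lockstep
        present(channel_id, frame)

threads = [threading.Thread(target=channel_loop, args=(c,))
           for c in range(NUM_CHANNELS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```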

Tools such as Vega Prime and STAGE (which permits building scenarios with computer-generated forces, or CGF, and AI) are optimized for running simulations in real time regardless of the number of live participants. Participants. Plural. That’s right: synthetic training isn’t a single-player endeavor.

Which brings us to a critical simulation challenge: ensuring a fair fight.

In our next post, we will describe the challenges of achieving a “fair fight” between all participants in a simulation. From forces on the ground to those 30,000 feet above and everything in between, everyone needs to be on the same (virtual) page at the same time.


This scene was created and rendered using Presagis products such as Creator, Terra Vista, and Vega Prime. Game graphics can offer greater immersion, but most likely much less realism.