“No pixel left behind”: The new era of high-fidelity graphics and visualization has begun


Everybody loves rich images. Whether it’s seeing the fine lines on Thanos’ villainous face, every strand of hair in The Secret Life of Pets 2, lifelike shadows in World of Tanks, COVID-19 molecules in interactive 3D, or the shiny curves of a new Bentley, demand for vivid, photorealistic graphics and visualizations continues to boom.

“We’re visual beings,” says Jim Jeffers, senior director of Advanced Rendering and Visualization at Intel. “Higher image fidelity almost always drives stronger emotions in viewers, and provides improved context and learning for scientists. Better graphics means better movies, better AR/VR, better science, better design, and better games. Fine-grained detail gets you to that Wow!”

Higher-fidelity images, movies, and games produced faster

Appetite for high quality and high performance across all visual experiences and industries has sparked major advances – and new thinking about how computer-generated graphics can quickly and efficiently be made even more realistic.

In this interview summary, Jeffers, co-inventor earlier in his career of the NFL’s virtual first-down line, discusses the road ahead for a new era of hi-res visualization. His key insights include: a broadening focus beyond individual processors to open XPU platforms, the central role of software, the proliferation of state-of-the-art ray tracing and rendering, and the myth of one size fits all. (“Just because GPU has a G in front of it,” he says, “doesn’t mean it’s good for all graphics functions. Even with ray tracing acceleration, a GPU is not always the right answer for every visual workflow.”)

Above: Intel’s Jim Jeffers

Trends: More data, complexity, interactivity

Take a look at some of today’s big graphics trends and their impacts: Higher fidelity means more objects to render and greater complexity. Huge datasets and an explosion of data demand more memory and efficiency; in fact, the data explosion is outpacing what today’s card memory can address, driving demand for more efficient system-wide memory utilization. AI integration is producing faster results, and there’s greater collaboration from edge to cloud.

There’s another new factor: interactivity. In the past, data visualization was predominantly used to create static plots and graphs or an offline rendered image or video. This remains valuable today, but for simulations of real-world physics and for digital entertainment, scientists and filmmakers want to interact with the data. They want to drill down to see the detail, turn the visualization around, and get a 360-degree view for better understanding. All that means more real-time operations, which in turn require more compute power.

Above: A high-speed, interactive visualization of stellar radiation: spherical volumetric path tracing across more than 2 TB of data and 3,000+ timesteps (frames).
Image credit: Intel and Argonne National Labs; simulation provided by University of California, Santa Barbara.

For example, UC Santa Barbara and Argonne National Labs needed to study the temperature and magnetic fluctuations of simulated star flares over time to better understand how stars behave. To visualize that dataset of 3,000 time-steps (frames), each about 10 GB in size, you need about 3 TB of memory. With a current high-end GPU carrying 24 GB of memory, it would take 125 GPUs packed into 10 to 15 server platforms to match just one dual-socket Intel Xeon processor platform with Intel Optane DC memory that can load and visualize the data. And that doesn’t even factor in the performance limitations of transferring 3D data over the PCIe bus, or the 200-300 watts of power each card draws in the platform it’s installed in.
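As a back-of-envelope check, here is a small sketch reproducing that GPU-count estimate from the figures quoted above; the cards-per-server packing density is an illustrative assumption, not a quoted spec:

```cpp
#include <cstdio>

int main() {
    // Figures quoted in the article; packing density is an assumption.
    const double dataset_gb       = 3000.0;  // ~3 TB of stellar time-series data
    const double gpu_memory_gb    = 24.0;    // one current high-end GPU card
    const double cards_per_server = 10.0;    // assumed chassis density

    const double gpus    = dataset_gb / gpu_memory_gb;  // ~125 cards
    const double servers = gpus / cards_per_server;     // ~12-13 servers

    std::printf("~%.0f GPUs spread over ~%.0f servers just to hold the data in card memory\n",
                gpus, servers);
    return 0;
}
```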

Pretty clearly, a next-gen approach is crucial for producing these rich, high-fidelity, high-performing visualizations and simulations even faster and more simply. New principles are driving state-of-the-art graphics today and will continue to do so.

Three mantras reshaping graphics

“No transistor left behind.” High-fidelity graphics require real-world lighting plus more objects, at higher resolution, to drive compelling photorealism. A “virtual” room created with one table, a glass, a grey floor with no texture, and ambient lighting isn’t particularly interesting. Each object and light source you add, down to the dust floating in the air and reflecting light, sets the scene for “real” life experiences. This level of complexity involves moving, storing, and processing massive amounts of data, often simultaneously. Making this happen requires serious advancements across the computing spectrum: architecture, memory, interconnect, and software, from edge to cloud. So the first huge shift is to leverage the whole platform, as opposed to a single processing unit. “Platform” includes all CPUs and GPUs, and potentially other elements such as Intel Optane persistent memory and FPGAs, as well as software.

A platform can be optimized towards a specialized solution such as product design or the creative arts, but it still uses one core software stack. Intel is actively moving in this direction. Over time, a platform approach allows us to continually deliver an evolutionary path to an XPU era, exascale computing, and open development environments. (More on that in a bit.)

“No developer left behind.” Handling all this capability and data pouring into the platform is complicated. How does a developer approach that? You have a GPU over here, two CPUs over there, and various specialized accelerators. A data center platform might have two CPUs, each with 48 cores, and each core effectively acting as its own processor. How do you program that without blowing your mind? Or spending ten years?

What’s needed is a simplified, unified programming model that lets a developer take advantage of all the available hardware capabilities without rewriting code for every processor or platform. Modern, specialized workloads require a variety of architectures, as no single platform can optimally run every workload. We need a mix of scalar, vector, matrix, and spatial architectures (CPU, GPU, AI, and FPGA programmability), along with a programming model that delivers performance and productivity across all of them.

That’s what the oneAPI industry initiative and the Intel oneAPI product are about: enabling efficient, performant heterogeneous programming, where a single code base can be used across multiple architectures. The oneAPI initiative will accelerate innovation with the promise of portable code, provide an easier lift when migrating to new, innovative generations of supported hardware, and help remove barriers such as single-vendor lock-in.
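To make the single-code-base idea concrete, here is a minimal sketch of what heterogeneous code looks like in SYCL/DPC++, the C++ model at the core of oneAPI. It is a toy vector add rather than production code; the runtime dispatches the same kernel to whichever device (CPU, GPU, or other accelerator) it selects:

```cpp
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    sycl::queue q;  // default selector: the runtime picks an available device

    constexpr size_t n = 1 << 20;
    // Unified shared memory is visible to both host and device code.
    float *a = sycl::malloc_shared<float>(n, q);
    float *b = sycl::malloc_shared<float>(n, q);
    for (size_t i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // The same single-source kernel runs on any supported architecture.
    q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        a[i] += b[i];
    }).wait();

    std::printf("a[0] = %.1f, computed on: %s\n", a[0],
                q.get_device().get_info<sycl::info::device::name>().c_str());

    sycl::free(a, q);
    sycl::free(b, q);
    return 0;
}
```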

“No pixel left behind.” The other key piece of the platform is open-source rendering tools and libraries designed to integrate capabilities and accelerate all this power. High-performance, memory-efficient, state-of-the-art tools such as Intel’s oneAPI Rendering Toolkit open the door to film-fidelity visuals not just in film/VFX and animation but also in HPC scientific visualization, CAD, content creation, gaming, AR, and VR: essentially anywhere that better images, aligned with how our visual system processes them, matter.

Ray tracing is especially important in this new picture. If you compare the animated visual effects of a movie from ten years ago with a movie today, the difference is amazing. A big reason is improved ray tracing: the technique generates an image by tracing the path of light and simulating the effects of its encounters with virtual objects to create better pixels. Ray tracing produces more detail, complexity, and visual realism than typical rasterized scanline rendering.
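Underneath, the math is simple geometry: cast a ray from the camera and solve for where it meets each object in the scene. Here is a minimal, self-contained sketch of the classic ray-sphere intersection test (illustrative only; production tracers such as Intel Embree run such tests at enormous scale with heavily optimized data structures):

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns the distance along the ray to the nearest hit, or -1 on a miss.
double hitSphere(Vec3 origin, Vec3 dir, Vec3 center, double radius) {
    Vec3   oc   = sub(origin, center);
    double a    = dot(dir, dir);
    double b    = 2.0 * dot(oc, dir);
    double c    = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;                 // ray misses the sphere
    return (-b - std::sqrt(disc)) / (2.0 * a);   // nearest intersection
}

int main() {
    // Camera at the origin looking down -z toward a unit sphere at z = -3.
    Vec3 eye{0, 0, 0}, dir{0, 0, -1}, center{0, 0, -3};
    double t = hitSphere(eye, dir, center, 1.0);
    std::printf("hit distance: %f\n", t);        // expect 2.0
    return 0;
}
```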

Compute platforms and tools have been continually evolving to handle larger data sets with more objects and complexity. So, it has become possible to deliver powerful render capabilities that can accelerate all types of workloads: interactive CPU rendering, global illumination with physically based shading and lighting, selective image denoising, and combined volume and geometry rendering. Intel’s goal is to enable these capabilities to run at all platform scales – on laptops, workstations, across the enterprise, HPC, and cloud.

 

Above: New ray tracing technology provides powerful capabilities far beyond today’s GPUs. Expanding model complexity beyond basic triangles to other shapes (above) accelerates rendering and increases accuracy while eliminating pesky visual artifacts. Image credit: Intel

One of the most important new advances involves “primitives,” the building-block shapes of graphics. Most products today, especially GPU-based ones, are highly attuned to triangles only; triangles are the equivalent of an atom. So if you look at a globe in 3D, you’re really looking at a mesh of triangles. Up-leveling beyond triangles to other shapes lets individual objects such as discs, spheres, and 3D forms like a globe or hair use a smaller memory footprint and typically far less processing time than, say, one million triangles. Reducing the number of objects and the required processing can help you turn a film around faster, say 12 months instead of 18, achieve higher accuracy and better visual results, and be photorealistic with fewer visible artifacts. These existing ray tracing features, plus new ones, will take advantage of Intel’s upcoming XPU platforms with Xe discrete GPUs.
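A quick footprint comparison shows why native primitives help. An analytic sphere needs only a center and a radius, while a finely triangulated sphere needs millions of vertices and indices; the mesh sizes below are illustrative assumptions, not measured figures:

```cpp
#include <cstdio>

int main() {
    // Analytic sphere primitive: center (x, y, z) plus radius.
    const double analytic_bytes = 4 * sizeof(float);  // 16 bytes

    // Illustrative triangle-mesh sphere: ~1M triangles (as in the text),
    // with roughly half as many vertices on a closed mesh.
    const double triangles  = 1000000.0;
    const double vertices   = triangles / 2.0;
    const double mesh_bytes = triangles * 3 * 4    // three 32-bit indices per triangle
                            + vertices  * 3 * 4;   // three 32-bit floats per vertex

    std::printf("analytic sphere: %.0f bytes vs. triangulated sphere: ~%.1f MB\n",
                analytic_bytes, mesh_bytes / (1024.0 * 1024.0));
    return 0;
}
```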

Pioneers already reaping benefits

A lot of this is already taking place. Take the example from the University of California, Santa Barbara, and Argonne National Labs mentioned before. They’re using a ray tracing method called “Volumetric Path Tracing” to visualize magnetism and other radiation phenomena of stars. Using open-source software and several connected servers with large random-access plus persistent memory, researchers can load and interact with (zoom, pan, tilt) more than 3 TB of time-series data. That would not have been feasible with a GPU-focused approach.

Film and animation studios have been on the leading edge of this new technology. Tangent Studios, working with Baozou studios as creators of “Next Gen” for Netflix, delivered motion blur and key rendering features in Blender with Intel Embree. They’re now doing renders five to six times faster than before, with higher quality. Laika, a stop-motion animation studio, worked with Intel to create an AI prototype that cut the time needed for image cleanup – a painstaking job – by 50%.

Above: Bentley’s interactive online configurator brings buyers ultra-high-res images of 10 billion orderable combinations of autos.

In product design and customer experience, Bentley Motors Limited is using these pioneering open-source rendering techniques, generating 3D images of its luxury cars on the fly for a custom car configurator. Bentley and Intel demonstrated a prototype “virtual showroom” where buyers interactively configure paint colors, wheels, interiors, and much more. The prototype included 11 Bentley models rendered accurately, with 10 billion possible configuration combinations, and used 120 GB of memory per node. The whole platform, a ten-server environment, ran interactively at 10-20 fps with “hyper-real” visuals and AI-based denoising via Intel Open Image Denoise. More on graphics acceleration at Bentley here.

Exascale data – Millions fewer render hours

These new approaches come as we’re on the doorstep of the exascale computational era — a quintillion floating-point operations per second. Creating high-performance systems that deliver those quintillion flops in a consumable way is a huge challenge. But the potential benefits could also be huge.

Think about a “render farm” – effectively a supercomputing data center, likely with thousands of servers, that handles the computing needed to produce animated movies and visual effects. Today, one of these servers works on a single frame for 8, 16, or even 24 hours. A 90-minute animated movie typically has about 130,000 frames. At an average of 12-24 hours of computation per frame, you’re looking at between 1.5 and 3 million compute-hours. Not minutes, hours. That’s 171 to 342 compute-years! Applying the exascale capabilities now being developed at Intel to rendering – with large memory systems, distributed capability, smart software, and cloud services – could reduce that time dramatically.
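The arithmetic behind that estimate is simple; the sketch below just reproduces the article’s numbers (frame counts and per-frame render times obviously vary from production to production):

```cpp
#include <cstdio>

int main() {
    // Figures quoted in the article; real productions vary widely.
    const double frames          = 130000.0;  // ~90-minute animated feature
    const double hours_per_frame = 12.0;      // low end of the quoted 12-24 h range

    const double compute_hours = frames * hours_per_frame;
    const double compute_years = compute_hours / (24.0 * 365.0);

    std::printf("%.1f million compute-hours, roughly %.0f compute-years at the low end\n",
                compute_hours / 1e6, compute_years);  // doubling hours/frame doubles both
    return 0;
}
```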

Above: Exascale computing could bring characters like those in The Secret Life of Pets 2 to life faster. Image credit: Illumination Entertainment.

Longer term, pouring exascale capability into a gaming platform or even onto a desktop could revolutionize how content gets made. A filmmaker might be able to interactively view and manipulate a scene at 80% or 90% of the movie’s final quality, for example. That would reduce the turnaround time of each iteration needed to get to the “final shot.” Consumers might have their own vision and, using laptops with such technology, could become creators themselves. Real-time interactivity will further blur the line between movies and games in exciting ways we can only speculate about today, but will ultimately make both mediums more compelling.

Coming soon to a screen near you: more advancements

NASA Ames researchers have done simulations and visualizations with the Intel oneAPI Rendering Toolkit libraries, including wind-tunnel-like effects on flying vehicles, landing gear, space parachutes, and more. When the visualization team showed their collaborating scientist an initial, basic rasterized visualization without ray tracing effects to check the accuracy of the data, the scientist said, “Yes, you are on the right track.” A week later the team followed up with an Intel OSPRay ray-traced version. The scientist’s response: “That’s great! Next time skip that ‘other’ image, and just show me this more accurate one.”

Innovative new platforms with combinations of processing units, interconnect, memory, and software are unleashing the new era of high-fidelity graphics. The picture is literally getting better and brighter and more detailed every day.

Learn more: 

High Fidelity Rendering Unleashed (video)


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].



