Energy-efficient processors and memory reduce your carbon footprint


The tech industry runs on silicon chips; therefore, any initiative to cut the carbon footprint and create greener options must begin with the chips at the heart of every smart device. Any solution must find a way to lower the energy consumption of the billions of laptops, phones, tablets and embedded systems in use everywhere.

The good news is that energy consumption has long been a priority for the industry. Users of mobile devices and laptops demand long battery life, and lowering the power consumption of the chips is the most direct way to deliver a device that runs for hours without being plugged in. Over the last decades, steady progress has produced smartphones that deliver billions of times more computing power for the same amount of electricity.

At the same time, data center operators are well aware that electricity use costs them twice. First, they want chips that deliver the most performance per watt, because electricity is one of the biggest line items in running a data center. Second, virtually all the electricity that enters the computers turns into heat, so the operator must also pay to remove it from the building.
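
One way to see that double cost is through power usage effectiveness (PUE), the ratio of total facility power to the power that actually reaches the IT equipment. A minimal sketch, with made-up prices and broadly typical PUE figures:

```python
def effective_cost_per_kwh(utility_rate: float, pue: float) -> float:
    """Every kWh delivered to the servers drags (pue - 1) extra kWh
    of cooling and power-distribution overhead along with it."""
    return utility_rate * pue

# Hypothetical $0.10/kWh utility rate. A typical enterprise data center
# might run near PUE 1.8; an efficient hyperscale facility closer to 1.1.
print(effective_cost_per_kwh(0.10, 1.8))  # ~$0.18 per useful kWh
print(effective_cost_per_kwh(0.10, 1.1))  # ~$0.11 per useful kWh
```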

These two economic forces from the marketplace have been pushing the chip manufacturers to produce greener chips with lower carbon footprints. Even if the goal was only to save money, the economic drive is tightly aligned with the environmental imperatives.

Many companies aren’t shy about celebrating their environmental awareness. Apple, for instance, says that it is carbon-neutral for its global corporate operations, and by 2030 it intends to have “net-zero climate impact across the entire business, which includes manufacturing supply chains and all product life cycles.”

These forces are reflected in the marketplace in a number of different and sometimes divergent ways. Here are some of the top ways that the chip industry is building new hardware that minimizes the carbon footprint of computing.  

Pushing the ‘integrated’ part of integrated circuits (ICs)

The new M1 and M2 chips from Apple provide a novel solution that integrates on one main chip the processors used for general computing (CPU) and graphics processing (GPU). Most current computers use separate chips, often on separate circuit boards, to tackle the same tasks. The Apple chips share the same memory and use the tight integration to deliver dramatically faster speeds for many tasks that require both the CPU and GPU to work closely together. 
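
Part of the power win comes from avoiding data shuffling: with a discrete GPU, inputs must be copied across a bus into separate video memory, while unified memory lets both processors address one buffer. A rough, CPU-only analogy in Python, where an in-memory copy stands in for a host-to-device transfer (this is an illustration, not Apple’s API):

```python
import time
import numpy as np

data = np.random.rand(50_000_000)  # ~400 MB working set

# Discrete-GPU style: the data is duplicated into a separate memory
# pool before the "GPU" can touch it. The copy costs time and energy
# proportional to the size of the buffer.
start = time.perf_counter()
device_copy = data.copy()
print(f"copy took {time.perf_counter() - start:.3f}s")

# Unified-memory style: both processors see the same buffer, so the
# hand-off is free. No bytes move, and no energy is spent moving them.
shared_view = data
```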

While much of the sales literature focuses on the boost in speed, the tight integration and shared resources also dramatically lower power consumption. In its announcement of the M2, Apple boasted that when it compared its new laptops against a PC competitor (Samsung Galaxy with an i7), its chip “matches its peak performance using a fifth of the power.”

Embracing ARM chips 

While the CPU marketplace has been dominated by the venerable Intel or x86 architecture for decades, lately a number of users are switching to the ARM (Acorn RISC Machine) architecture, in part because the chips using this architecture’s instruction set deliver more computing for less electricity. 

Amazon, for instance, touts its Graviton2 and Graviton3 chips, which it designed in-house and installed in its data centers. When users recompile their code for the new instruction set, Amazon estimates that it’s not uncommon for the code to use 60% less electricity than it would on the regular instances. The actual savings vary from application to application.
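
Moving a workload to Graviton mostly means rebuilding it for the 64-bit ARM (aarch64) instruction set; interpreted code often just runs. A quick runtime check, using only Python’s standard library, can confirm which architecture the code actually landed on:

```python
import platform

# 'aarch64' on ARM Linux hosts (e.g., Graviton-backed instances),
# 'x86_64' on Intel or AMD hosts.
arch = platform.machine()
if arch == "aarch64":
    print("Running on an ARM instance")
elif arch == "x86_64":
    print("Running on an x86 instance")
else:
    print(f"Unrecognized architecture: {arch}")
```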

The AWS users never see the power bill, but Amazon passes along the savings in the pricing for the instances. Its ARM machines are said to cost about 30% less for roughly the same computation power. In some cases, users don’t even know that they’re seeing the savings. AWS has quietly shifted some of the managed services like the Aurora database over to ARM instances. 

Amazon is far from the only company exploring the ARM architecture. A number of other companies are building ARM-based servers, and they report similar successes in cutting power. Qualcomm, for instance, is said to be working with major and minor cloud providers to use its ARM chips. It also recently acquired Nuvia, a startup that was designing its own ARM chips. 

In the meantime, this April Microsoft launched a preview of instances built on Ampere’s Altra chips, claiming they may offer as much as 50% better price-performance than comparable x86-based options.

Using GPUs for big jobs

Graphics processing units (GPUs) began as chips designed to help gamers enjoy faster framerates, but they’ve evolved into crucial tools for solving big computational jobs with less energy. That may come as a shock to gamers who’ve become accustomed to installing fat GPUs that demand 600, 700, even 800 watts or more from the power supply. The top-of-the-line GPUs only get hungrier and hungrier.

The real measure, though, is energy per unit of work. The fat GPUs may chew through electricity, but they do even more work along the way. Overall, they can be an efficient way to calculate. This is why GPUs are now commonly used to execute large, parallel processing jobs for a wide range of scientific and financial applications. Many machine learning algorithms also rely on GPUs.
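
In other words, the metric that matters is work per watt, not raw draw. A toy calculation with invented throughput and wattage figures:

```python
def joules_per_task(tasks_per_second: float, watts: float) -> float:
    """Energy consumed per completed task: power divided by throughput."""
    return watts / tasks_per_second

# Hypothetical figures for illustration only: the GPU draws far more
# power but finishes so many more tasks that each one costs less energy.
cpu = joules_per_task(tasks_per_second=100, watts=150)    # 1.50 J/task
gpu = joules_per_task(tasks_per_second=5_000, watts=700)  # 0.14 J/task
print(f"CPU: {cpu:.2f} J/task, GPU: {gpu:.2f} J/task")
```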

The best GPUs are often in high demand from cryptocurrency miners because they offer some of the most efficient computation per unit of energy. That market is highly evolved because mining algorithms present a stable, predictable workload, which makes efficiency easy to measure and compare.

Creating specialized chips 

For some applications, there’s enough demand to warrant creating custom chips that are engineered to solve their problems faster and more efficiently. Lately, companies like Google and Amazon have been building special chips designed to speed up machine learning.

Google’s Tensor Processing Units (TPUs) are the basis for many of the company’s machine learning experiments. They’re highly efficient, and Google credits them with helping drive its data centers’ energy overhead as low as possible. The company rents out TPU capacity to cloud customers, but also deploys the chips internally for tasks like predicting demand in order to manage power consumption.

“Today, on average, a Google data center is twice as energy efficient as a typical enterprise data center,” bragged Urs Hölzle, senior vice president for Technical Infrastructure at Google. “And compared with five years ago, we now deliver around seven times as much computing power with the same amount of electrical power.”

In one recent presentation at AWS Summit in San Francisco in April 2022, Ali Saidi, a senior principal engineer at AWS, talked about the energy savings from Inferentia, a chip that was designed to apply machine learning models as quickly as possible. These models are often used extensively in front-line applications for classification or detection. Of particular interest is speeding up the search for the trigger word used by voice interfaces like Siri or Alexa. 

“[Inferentia] achieves between 1.5x and 3x better power efficiency, compared to [Nvidia’s Turing T4], with a median power efficiency improvement of about two times,” Saidi told the audience. “This means that inf1 instances are greener and cheaper to operate, and as always – we use that to pass the cost savings back to our customers.”

Right-sizing chips

When Intel started building x86 chips for lower-end laptops, it stripped away the extra features that the average user doesn’t need to open a few browser windows. Its low-end lines like the Atom and Celeron may not be capable of chewing through computation like its high-end server chips, but the average user doesn’t need that power to check email. The savings multiply because the batteries can be smaller, too, and still last a long time.

Working in lower precision 

When Amazon designed its Graviton3 processor, it added support for bfloat16, a compact floating-point format for lower-precision calculations. The chip can execute four of these operations in the same amount of time and energy it uses for one standard, double-precision floating-point calculation. Some machine learning algorithms don’t seem to mind the difference, so they can run on these chips using roughly a quarter of the energy.
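
A short sketch of that trade-off, written here in PyTorch purely because it exposes a bfloat16 type; nothing below is Graviton-specific:

```python
import torch

x64 = torch.randn(1024, dtype=torch.float64)
xbf = x64.to(torch.bfloat16)  # 16 bits per value instead of 64

# A quarter of the bits means a quarter of the memory traffic: 8 -> 2 bytes.
print(x64.element_size(), "->", xbf.element_size(), "bytes per value")

# The cost: bfloat16 keeps only about 2-3 significant decimal digits.
err = (x64 - xbf.to(torch.float64)).abs().max().item()
print(f"max rounding error: {err:.2e}")
```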

Improving memory

CPUs aren’t the only focus for engineers looking to lower power consumption. The newest RAM standard, DDR5, runs at faster speeds but lower voltages, letting it save electricity while finishing the computation sooner. The difference in voltage is small (1.2V for DDR4 versus 1.1V for DDR5), but it adds up over time.
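
The voltage step looks tiny, but dynamic power in CMOS circuits scales roughly with the square of the supply voltage (P ≈ C·V²·f), so even a 0.1V drop compounds:

```python
# Holding capacitance and clock frequency fixed, dynamic power scales
# with V^2, so compare the two supply voltages directly.
v_ddr4, v_ddr5 = 1.2, 1.1
savings = 1 - (v_ddr5 / v_ddr4) ** 2
print(f"~{savings:.0%} less dynamic power from the voltage drop alone")  # ~16%
```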

Others are tweaking the architecture of memory chips to improve power consumption. One option, the Load-Reduced Dual Inline Memory Module (LRDIMM), adds a memory buffer that can respond faster and reduce the load on the communication circuitry between the memory and the CPU. These modules are often found in data center servers with high, constant usage.

Drawing thinner lines 

As silicon fabrication lines develop better processes, the amount of energy used for each computation drops. Thinner lines require fewer electrons to switch. While many think that Moore’s Law and the relentless shrinking of transistors is all about speed, the savings in electricity are an extra bonus. Newer chips built on the latest fabrication technology tend to use less power than older ones: chips built on a 5nm process sip less than those on a 7nm process, and so on.

Moving beyond mechanical storage 

Many of the best servers and laptops use solid state “disks” with flash memory to store information, largely because they’re much faster. The older spinning magnetic disks, though, remain competitive by offering a lower price per byte for storage. 

That is shifting as more data centers are taking energy costs into account. When VAST Data rolled out its latest storage solution, it emphasized that energy costs should be a big part of the reason a company may want to buy its flash memory-filled storage racks. 

“From an energy perspective, our solution is far more efficient. You can save roughly 10x on what customers would otherwise have to go [and] provision for if you had hard drive-based infrastructure,” said Jeff Denton in a Q&A with VentureBeat. “This infrastructure density always creates cost savings. When you add up the efficiencies, the power savings, the data center space savings and the cost savings, we believe that we finally achieved cost parity with hard drive-based infrastructure, essentially eliminating the last argument for mechanical media in the enterprise.”
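
A rough way to frame the comparison is watts per terabyte stored. The figures below are hypothetical, order-of-magnitude stand-ins, not VAST’s numbers:

```python
# Hypothetical drive specs for illustration only.
drives = {
    "nearline HDD":        {"capacity_tb": 18, "watts": 8},
    "high-capacity flash": {"capacity_tb": 30, "watts": 7},
}

# Flash wins on energy per terabyte even before counting the rack-space
# and cooling savings that come from its higher density.
for name, d in drives.items():
    print(f"{name}: {d['watts'] / d['capacity_tb']:.2f} W/TB")
```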

Shutting down whenever possible

Sometimes the best chips are the ones that do nothing at all. Smartphone designers try to balance the demand for more performance against the practical need for long battery life. The chips in smartphones are all optimized to use the smallest amount of power while still delivering high-resolution video displays and always-on communication.

One of the key strategies is for the chip to shut down extra processor cores or subsystems whenever they’re not in use. Smartphone users can watch their battery life track how heavily they use their phones; the smartest phones are the ones that draw as little power as possible while sitting in a pocket.
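
On Linux, this kind of core gating is visible through the kernel’s CPU hotplug interface in sysfs. A minimal, read-only sketch:

```python
from pathlib import Path

# Each hotpluggable core exposes an 'online' flag that the kernel flips
# when the core is powered down; cpu0 usually can't be taken offline,
# so it has no such file.
for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    flag = cpu / "online"
    state = flag.read_text().strip() if flag.exists() else "1 (fixed)"
    print(cpu.name, "online =", state)
```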


