As AI chips improve, is TOPS the best way to measure their power?

Once in a while, a young company will claim more experience than seems logical — a just-opened law firm might tout 60 years of legal experience but actually consist of three people who have each practiced law for 20 years. The number “60” catches your eye and summarizes something, yet it might leave you wondering whether you’d be better off with one lawyer who has 60 years of experience. There’s no universally correct answer; your choice should depend on the type of services you need. A single lawyer might be superb at certain tasks and mediocre at others, while three lawyers with solid experience could cover a wider range of subjects.

If you understand that example, you also understand the challenge of evaluating AI chip performance using “TOPS,” a metric that means Trillions of Operations Per Second, or “tera operations per second.” Over the past few years, mobile and laptop chips have grown to include dedicated AI processors, typically measured by TOPS as an abstract measure of capability. Apple’s A14 Bionic brings 11 TOPS of “machine learning performance” to the new iPad Air tablet, while Qualcomm’s smartphone-ready Snapdragon 865 claims a faster AI processing speed of 15 TOPS.

But whether you’re an executive considering the purchase of new AI-capable computers for an enterprise, or an end user hoping to understand just how much power your next phone will have, you’re probably wondering what these TOPS numbers really mean. To demystify the concept and put it in some perspective, let’s take a high-level look at the concept of TOPS, as well as some examples of how companies are marketing chips using this metric.

TOPS, explained

Though some people dislike the use of abstract performance metrics when evaluating computing capabilities, customers tend to prefer simple, seemingly understandable distillations to the alternative, and perhaps rightfully so. TOPS is a classic example of a simplifying metric: It tells you in a single number how many computing operations an AI chip can handle in one second — in other words, how many basic math problems a chip can solve in that very short period of time. While TOPS doesn’t differentiate between the types or quality of operations a chip can process, if one AI chip offers 5 TOPS and another offers 10 TOPS, you might correctly assume that the second is twice as fast as the first.
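At bottom, TOPS is simple arithmetic: operations per second, scaled to trillions. The sketch below illustrates the unit conversion with entirely hypothetical chip figures — no real product's ops-per-cycle or clock speed is being quoted here.

```python
# Back-of-the-envelope TOPS arithmetic. All figures are hypothetical,
# chosen only to illustrate the unit conversion.

def tops(ops_per_cycle: int, clock_hz: float) -> float:
    """Peak trillions of operations per second for a single block."""
    return ops_per_cycle * clock_hz / 1e12

# A hypothetical NPU performing 8,192 operations per cycle at 1 GHz:
npu = tops(8_192, 1.0e9)
print(f"{npu:.3f} TOPS")  # 8.192 TOPS

# Doubling throughput doubles the headline number, which is why
# 10 TOPS reads as "twice as fast" as 5 TOPS -- on paper.
print(tops(16_384, 1.0e9) / npu)  # 2.0
```

Note what the calculation doesn’t capture: the kind of operation (8-bit integer vs. 16-bit float), whether the peak is sustainable, or how well a given model maps onto the hardware.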

Yes, holding all else equal, a chip that does twice as much in one second as last year’s version could be a big leap forward. As AI chips blossom and mature, the year-to-year AI processing improvement may even be as much as nine times, not just two. But from chip to chip, there may be multiple processing cores tackling AI tasks, as well as differences in the types of operations and tasks certain chips specialize in. One company’s solution might be optimized for common computer vision tasks, or able to compress deep learning models, giving it an edge over less purpose-specific rivals; another may just be solid across the board, regardless of what’s thrown at it. Just like the law firm example above, distilling everything down to one number removes the nuance of how that number was arrived at, potentially distracting customers from specializations that make a big difference to developers.

Simple measures like TOPS have their appeal, but over time, they tend to lose whatever meaning and marketing appeal they might initially have had. Video game consoles were once measured by “bits” until the Atari Jaguar arrived as the first “64-bit” console, demonstrating the foolishness of focusing on a single metric when total system performance was more important. Sony’s “32-bit” PlayStation ultimately outsold the Jaguar by a 400:1 ratio, and Nintendo’s 64-bit console by a 3:1 ratio, all but ending reliance on bits as a proxy for capability. Megahertz and gigahertz, the classic measures of CPU speeds, have similarly become less relevant in determining overall computer performance in recent years.

Apple on TOPS

Apple has tried to reduce its use of abstract numeric performance metrics over the years: Try as you might, you won’t find references on Apple’s website to the gigahertz speeds of its A13 Bionic or A14 Bionic chips, nor the specific capacities of its iPhone batteries — at most, it will describe the A14’s processing performance as “mind-blowing,” and offer examples of the number of hours one can expect from various battery usage scenarios. But as interest in AI-powered applications has grown, Apple has atypically called attention to how many trillion operations its latest AI chips can process in a second, even if you have to hunt a little to find the details.

Apple’s just-introduced A14 Bionic chip will power the 2020 iPad Air, as well as multiple iPhone 12 models slated for announcement next month. At this point, Apple hasn’t said a lot about the A14 Bionic’s performance, beyond noting that it enables the iPad Air to be faster than its predecessor, and has more transistors inside. But it offered several details about the A14’s “next-generation 16-core Neural Engine,” a dedicated AI chip with 11 TOPS of processing performance — a “2x increase in machine learning performance” over the A13 Bionic, which has an 8-core Neural Engine with 5 TOPS.

Previously, Apple noted that the A13’s Neural Engine was dedicated to machine learning, but assisted by two machine learning accelerators on the CPU, plus a Machine Learning Controller to automatically balance efficiency and performance. Depending on the task and current system-wide allocation of resources, the Controller can dynamically assign machine learning operations to the CPU, GPU, or Neural Engine, so AI tasks get done as quickly as possible by whatever processor and cores are available.

Some confusion comes in when you notice that Apple’s also claiming a 10x improvement in calculation speeds between the A14 and A12. That appears to be referring specifically to the machine learning accelerators on the CPU, which might be the primary processor of unspecified tasks, or the secondary processor when the Neural Engine or GPU are otherwise occupied. Apple doesn’t break down exactly how the A14 routes specific AI/ML tasks, presumably because it doesn’t think most users care to know the details.

Qualcomm on TOPS

Apple’s “tell them only a little more than they need to know” approach contrasts mightily with Qualcomm’s, which generally requires both engineering expertise and an atypically long attention span to digest. When Qualcomm talks about a new flagship-class Snapdragon chipset, it’s open about the fact that it distributes various AI tasks to multiple specialized processors, but provides a TOPS figure as a simple summary metric. For the smartphone-focused Snapdragon 865, that AI number is 15 TOPS, while its new second-generation Snapdragon 8cx laptop chip promises 9 TOPS of AI performance.

The confusion comes in when you try to figure out how exactly Qualcomm comes up with those numbers. Like prior Snapdragon chips, the 865 includes a “Qualcomm AI Engine” that aggregates AI performance across multiple processors ranging from the Kryo CPU and Adreno GPU to a Hexagon digital signal processor (DSP). As parts of the company’s “fifth-generation” AI Engine, the Adreno 650 GPU promises 2x higher TOPS for AI than the prior generation, plus new AI mixed precision instructions, while the Hexagon 698 DSP claims 4x higher TOPS and a compression feature that reduces the bandwidth required by deep learning models. It appears that Qualcomm is adding the separate chips’ numbers together to arrive at its 15 TOPS total.
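If Qualcomm is indeed summing per-block peaks, the headline figure is just addition. The per-block numbers below are invented — Qualcomm doesn’t publish this breakdown — but they show how an aggregate like 15 TOPS could be reached, and why it’s a ceiling rather than a guarantee:

```python
# Hypothetical per-processor peak TOPS for a Snapdragon-style AI
# engine. Qualcomm doesn't publish this breakdown; these splits are
# invented purely to show how an aggregate figure could be assembled.
ai_engine_tops = {
    "Hexagon DSP": 10.0,  # hypothetical
    "Adreno GPU": 3.5,    # hypothetical
    "Kryo CPU": 1.5,      # hypothetical
}

total = sum(ai_engine_tops.values())
print(f"Headline figure: {total:.0f} TOPS")  # Headline figure: 15 TOPS

# A real workload may only run on one block at a time, so the sum
# overstates what any single task will actually see.
```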

If those details weren’t enough to get your head spinning, Qualcomm also notes that the Hexagon 698 includes AI-boosting features such as tensor, scalar, and vector acceleration, as well as the Sensing Hub, an always-on processor that draws minimal power while awaiting either camera or voice activation. These AI features aren’t necessarily exclusive to Snapdragons, but the company tends to spotlight them in ways Apple does not, and its software partners — including Google and Microsoft — aren’t afraid to use the hardware to push the edge of what AI-powered mobile devices can do. While Microsoft might want to use AI features to improve a laptop’s or tablet’s user authentication, Google might rely on an AI-powered camera to let a phone self-detect whether it’s in a car, office, or movie theater, and adjust its behaviors accordingly.

Though the new Snapdragon 8cx has fewer TOPS than the 865 — 9 TOPS, compared with the less expensive Snapdragon 8c (6 TOPS) and 7c (5 TOPS) — note that Qualcomm is ahead of the curve just by including dedicated AI processing functionality in a laptop chipset, one benefit of building laptop platforms upwards from a mobile foundation. This gives the Snapdragon laptop chips baked-in advantages over Intel processors for AI applications, and we can reasonably expect to see Apple use the same strategy to differentiate Macs when they start moving to “Apple Silicon” later this year. It wouldn’t be surprising to see Apple’s first Mac chips stomp Snapdragons in both overall and AI performance, but we’ll probably have to wait until November to hear the details.

Huawei, Mediatek, and Samsung on TOPS

There are options beyond Apple’s and Qualcomm’s AI chips. China’s Huawei, Taiwan’s Mediatek, and South Korea’s Samsung all make their own mobile processors with AI capabilities.

Huawei’s HiSilicon division currently makes flagship chips called the Kirin 990 and Kirin 990 5G, which differentiate their Da Vinci neural processing units with either two- or three-core designs. Both Da Vinci NPUs include one “tiny core,” but the 5G version jumps from one to two “big cores,” giving the higher-end chip extra power. The company says the tiny core can deliver up to 24 times the efficiency of a big core for AI facial recognition, while the big core handles larger AI tasks. It doesn’t disclose the number of TOPS for either Kirin 990 variant.

Mediatek’s current flagship, the Dimensity 1000+, includes an AI Processing Unit called the APU 3.0. Alternately described as a hexa-core processor or a six AI processor solution, the APU 3.0 promises “up to 4.5 TOPS performance” for use with AI camera, AI assistant, in-app, and OS-level AI needs. Since Mediatek chips are typically destined for midrange smartphones and affordable smart devices such as speakers and TVs, it’s simultaneously unsurprising that it’s not leading the pack in performance, and interesting to think of how much AI capability will soon be considered table stakes for inexpensive “smart” products.

Last but not least, Samsung’s Exynos 990 has a “dual-core neural processing unit” paired with a DSP, promising “approximately 15 TOPS.” The company says its AI features enable smartphones to include “intelligent camera, virtual assistant and extended reality” features, including camera scene recognition for improved image optimization. Samsung notably uses Qualcomm’s Snapdragon 865 as an alternative to the Exynos 990 in many markets, which many observers have taken as a sign that Exynos chips just can’t match Snapdragons, even when Samsung has full control over its own manufacturing and pricing.

Top of the TOPS

Mobile processors have become popular and critically important, but they’re not the only chips with dedicated AI hardware in the marketplace, nor are they the most powerful. Designed for data centers, Qualcomm’s Cloud AI 100 inference accelerator promises up to 400 TOPS of AI performance at 75 watts of power, though the company uses another metric — ResNet-50 — to favorably compare its inference performance to rival solutions such as Intel’s 100-watt Habana Goya ASIC (~4x faster) and Nvidia’s 70-watt T4 (~10x faster). Many high-end AI chipsets are offered at multiple speed levels based on the power supplied by various server-class form factors, any of which will be considerably more than a smartphone or tablet can offer with a small rechargeable battery pack.
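Power budgets are why data-center chips are often compared in TOPS per watt rather than raw TOPS. Only the Cloud AI 100’s figures (400 TOPS at 75 watts) come from the numbers above; the “rival” entry below is a hypothetical placeholder, not a real product’s spec:

```python
# Efficiency, not just raw throughput. The Cloud AI 100 figures
# (400 TOPS at 75 W) are from the article; the rival's numbers are
# invented purely for comparison.

def tops_per_watt(tops: float, watts: float) -> float:
    return tops / watts

cloud_ai_100 = tops_per_watt(400, 75)
print(f"Cloud AI 100: {cloud_ai_100:.2f} TOPS/W")  # Cloud AI 100: 5.33 TOPS/W

hypothetical_rival = tops_per_watt(120, 100)  # invented numbers
print(f"Rival ASIC:   {hypothetical_rival:.2f} TOPS/W")  # Rival ASIC:   1.20 TOPS/W
```

The same caveat applies as with phone chips: a per-watt figure still says nothing about which operations are being counted, which is why Qualcomm leans on ResNet-50 inference benchmarks for head-to-head claims.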

Another key factor to consider is the comparative role of an AI processor in an overall hardware package. Whereas an Nvidia or Qualcomm inference accelerator might well have been designed to handle machine learning tasks all day, every day, the AI processors in smartphones, tablets, and computers are typically not the star features of their respective devices. In years past, no one even considered devoting a chip full time to AI functionality, but as AI becomes an increasingly compelling selling point for all sorts of devices, efforts to engineer and market more performant solutions will continue.

Just as was the case in the console and computer performance wars of years past, relying on TOPS as a singular data point when assessing the AI processing potential of any solution probably isn’t wise — and if you’re reading this as an AI expert or developer, you likely knew as much before opening this article. End users considering the purchase of AI-powered devices should look past simple numbers in favor of solutions that perform the tasks that matter to them, while businesses should weigh TOPS alongside other metrics and features — such as the presence or absence of specific accelerators — to make investments in AI hardware that will be worth keeping around in the years to come.

Call of Duty: Black Ops — Cold War’s Zombies cross-play is a bunker-buster

Activision announced that Call of Duty: Black Ops — Cold War Zombies will take place in a World War II bunker and will have cross-platform play for four-player co-op matches.

The cross-platform play represents the first time that Zombies will be available for players to team up together even if they are on different platforms like the Xbox One, PlayStation 4, PC, or the new consoles arriving in November. In Treyarch games, Zombies is just about as popular as the multiplayer and single-player campaigns, enabling Activision to market these Call of Duty releases as three games in one. Treyarch is one of several developers working on this year’s Call of Duty, the 17th installment in the series, which debuts November 13.

This year’s Zombies will have a fresh narrative with a new cast of characters. In the co-op play, four human players square off against hordes of zombies who attack in waves. If you survive to the end of the undead assaults, you can finish the full story.

Above: Zombies are glowing again in Call of Duty: Black Ops — Cold War Zombies.

Image Credit: Activision/Treyarch

The story has a nod to “Nacht der Untoten,” the first Zombies map, from Call of Duty: World at War in 2008. The story of Cold War Zombies is “Die Maschine,” and it takes place in the early 1980s.

In the story, you are part of Requiem, a CIA-backed international response team led by Grigori Weaver from the original Black Ops story. Your operatives explore a World War II bunker that has been ravaged not only by time but by the undead as well. The bunker is huge and full of strange equipment, graffiti, colorful lighting, and lots of exotic weapons.

While fighting to suppress the undead at this graffiti-skinned, boarded-up bunker, Requiem team members investigate what lies beneath the structure after decades of neglect. Requiem seeks a cache of decades-old secrets that upend the global order.

At the same time, a Soviet-led division and rival to Requiem, the Omega Group, enters the fray. The Omega Group also has a keen interest in studying and harnessing the events and anomalies manifesting around the globe. The trailer didn’t show the Omega Group in action, so it’s not clear if they are playable.

There are other forces that may get in your way. Other characters with unknown agendas will also advance the plot.

Players will now advance through the Battle Pass with time played in Zombies, similar to multiplayer and Call of Duty: Warzone. Requiem team members can also start the match with their Gunsmith-crafted weapon of choice via loadout support. That means you can use the guns you’ve leveled up in multiplayer.

As you can see in the trailer, the bunker is a vast space. But you can also play above ground in the area around the bunker. You don’t start with a pistol anymore. And you can call in a helicopter or other scorestreaks to save you by raining fire from above.

In addition to the return of the Pack-a-Punch machine to transform your weapon, all weapons will now have a rarity. The higher the rarity, the greater the damage output and attachments for the weapon. For the first time, this will allow any weapon to still be useful in later rounds. You can find new weapons by buying them in Wall Buy or Mystery Box locations.

Above: He’s not so pretty. He’s a perfect fit for Call of Duty: Black Ops — Cold War’s Zombies.

Image Credit: Activision/Treyarch

Outside of weaponry, players can deploy field upgrades as proactive abilities that add another layer to squad-based tactics. You charge them up by killing zombies, then deploy them as needed. These include offensive buffs as well as abilities for evasion, healing, and reviving.

Around the map, you can craft and find lethal, tactical, and support equipment. You can use Grenade Launchers, Sentry Turrets, Explosive Bows, and even Chopper Gunners. These can do a lot of damage to the zombies.

Classic Zombies perks are back with Cold War theming, including the return of Juggernog and Speed Cola. There’s no longer a limit to how many different perks you can consume. Post-launch content will be free.

More beta tests of Cold War’s multiplayer are coming for all platforms October 17 to October 19. The beta will be available on the PS4 in early access October 8 to October 9 and as an open beta October 10 to October 12. The cross-play beta will run October 15 to October 16, with early access on Xbox and PC and open access on PS4.

GitHub launches code scanning to unearth vulnerabilities early

GitHub is officially launching a new code-scanning tool today, designed to help developers identify vulnerabilities in their code before it’s deployed to the public.

The new feature is the result of an acquisition last year when GitHub snapped up San Francisco-based code analysis platform Semmle; the Microsoft-owned code-hosting platform revealed at the time that it would make Semmle’s CodeQL analysis engine available natively across all open source and enterprise repositories. After several months in beta, code scanning is now rolling out to all developers.


It’s estimated that some 60% of security breaches involve unpatched vulnerabilities. Moreover, 99% of all software projects are believed to contain at least one open source component, meaning that dodgy code can have a significant knock-on impact for many companies.

Typically, fixing vulnerabilities requires a researcher to first find the vulnerability and disclose it to the repository maintainer, who fixes the issue and alerts the community, who then update their own projects to the fixed version. In a perfect world, this process would take minutes to complete, but in reality it takes much longer than that — it first requires someone to find the vulnerability, either by manually inspecting code or through pentesting, which can take months. And then comes the process of finding and notifying the maintainer and waiting for them to roll out a fix.

GitHub’s new code-scanning functionality is a static application security testing (SAST) tool that works by transforming code into a queryable format, then looking for vulnerability patterns. It automatically identifies vulnerabilities and errors in code changes in real time, flagging them to the developer before the code goes anywhere near production.
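To give a flavor of what pattern-based static analysis does, here is a toy scanner that flags one classic vulnerability pattern: SQL queries built via string formatting. This is a deliberately simplistic sketch — real engines like CodeQL parse code into a queryable semantic database and track data flow, rather than matching regexes line by line:

```python
import re

# Toy static-analysis pass: flag string-formatted SQL, a classic
# injection risk. Real SAST engines are vastly more sophisticated;
# this regex only hints at the "find a vulnerability pattern" idea.
SQL_FORMAT = re.compile(r'execute\(\s*["\'].*%s.*["\']\s*%')

def scan(source: str) -> list[int]:
    """Return 1-based line numbers that match the risky pattern."""
    return [i for i, line in enumerate(source.splitlines(), start=1)
            if SQL_FORMAT.search(line)]

snippet = '''
cursor.execute("SELECT * FROM users WHERE id = %s" % user_id)
cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
'''
# Only the string-formatted call is flagged; the parameterized
# call on line 3 is the safe idiom and passes.
print(scan(snippet))  # [2]
```

The gap between this sketch and production tools is exactly why a semantic, queryable representation matters: a regex can’t tell whether the formatted string ever reaches a database call, while data-flow analysis can.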

Above: GitHub: Vulnerability found


Data suggests that only 15% of vulnerabilities are fixed one week after discovery, a figure that rises to nearly 30% within a month and 45% after three months. According to GitHub, during its beta phase it scanned more than 12,000 repositories more than 1 million times, unearthing 20,000 security issues in the process. Crucially, the company said that developers and maintainers fixed 72% of these code errors within 30 days.

There are other third-party tools out there already designed to help developers find faults in their code. London-based Snyk, which recently raised $200 million at a $2.6 billion valuation, targets developers with an AI-powered platform that helps them identify and fix flaws in their open source code.

This helps to highlight how automation is playing an increasingly big role in not only scaling security operations, but also plugging the cybersecurity skills gap — GitHub’s new code-scanning smarts go some way toward freeing up security researchers to focus on other mission-critical work. Many vulnerabilities share common attributes at their roots, and GitHub now promises to find all variations of these errors automatically, enabling security researchers to hunt for entirely new classes of vulnerabilities. Moreover, it does so as a native toolset baked directly into GitHub.

GitHub’s code scanning hits general availability today, and it is free to use for all public repositories. Private repositories can gain access to the feature through a GitHub Enterprise subscription.

Launch Night in Google: How to watch, and what to expect

Today during its Launch Night In event, which kicks off at 11 a.m. Pacific (2 p.m. Eastern), Google is expected to launch a slew of new hardware across its product families. Leaks and premature sales spoiled some surprises — eagle-eyed buyers managed to snag Google’s new Chromecast from Home Depot, while Walmart’s mobile app leaked the specs of the Nest Audio smart speaker. Still, there’s a chance Google has an ace or two up its sleeve.

Here’s what we expect to see during this afternoon’s livestream.

Pixel 5 and Pixel 4a 5G

Pixel 5

It’s all but certain Google will announce two smartphones today: The Pixel 5 and Pixel 4a 5G. The Pixel 5 is the follow-up to last year’s Pixel 4, while the Pixel 4a 5G is a 5G-compatible version of the Pixel 4a that launched in August.

While the Pixel 5 might be a successor in name, it’s a potential downgrade from the Pixel 4 in that it reportedly swaps the Qualcomm Snapdragon 855 processor for the less powerful Snapdragon 765G. On the other hand, the leaks suggest that the RAM capacity has been bumped up to 8GB from 6GB, which could make tasks like app-switching faster. The Pixel 5 is also rumored to have a 4,080mAh battery, which would be the largest in any Pixel to date.

Google Pixel 5

We expect the Pixel 5 will retain the 90Hz-refresh-rate, 6-inch, 2340-by-1080 OLED display (19.5:9 aspect ratio) introduced with the Pixel 4, as well as the Pixel 4’s rear-facing 12.2-megapixel and 16-megapixel cameras. (The 16-megapixel camera might have an ultra-wide lens as opposed to the Pixel 4’s telephoto lens.) As for the front-facing camera, it’s rumored to be a single 8-megapixel wide-angle affair. Some outlets report that there’s a fingerprint sensor on the rear of the Pixel 5, harkening back to the Pixel 3, and Google has apparently ditched the Pixel 4’s gesture-sensing Soli radar in favor of a streamlined design.

Other reported Pixel 5 highlights include an IP68-rated water- and dust-resistant casing, sub-6GHz 5G compatibility, and 18W USB-C charging and wireless charging. In terms of pricing, the Pixel 5 is anticipated to cost around $699 in the U.S., U.K., Canada, Ireland, France, Germany, Japan, Taiwan, and Australia, which would make it $100 cheaper than the $799-and-up Pixel 4.

Pixel 4a 5G

The Pixel 4a 5G is a tad less exciting, but rumors imply it’ll sport a larger display than the Pixel 4 (potentially 6.2 inches versus 5.8 inches). It might also share the Pixel 5’s 2340 x 1080 resolution, processor, and cameras alongside a headphone jack, but supposedly at the expense of other components. The Pixel 4a 5G is rumored to make do with a 60Hz screen refresh rate, 6GB of RAM, a 3,885mAh battery, and Gorilla Glass 3 instead of the Pixel 5’s Gorilla Glass 6, as well as no IP rating for water or dust resistance.

The Pixel 4a 5G will cost $499, according to Google — a $150 premium over the $349 Pixel 4a. It’ll be available in the U.S., Canada, the U.K., Ireland, France, Germany, Japan, Taiwan, and Australia when it goes on sale, likely later today.

Chromecast with Google TV and Nest Audio

Chromecast with Google TV

Google’s new Chromecast dongle runs Google TV, a rebrand of Android TV, Google’s TV-centric operating system. Unlike previous Chromecast devices, it ships with its own remote control featuring a directional pad with buttons for Google Assistant, YouTube, and Netflix.

Chromecast with Google TV

The new Chromecast supports 4K, HDR, and multiple Google accounts as well as Bluetooth devices and USB-to-Ethernet adapters. But it doesn’t appear to tightly integrate with Google’s Stadia gaming service — at least not out of the box. The Verge’s Chris Welch, who managed to get his hands on a Chromecast unit early this week, reports that he sideloaded the Stadia app without issue and streamed a few titles with an Xbox controller.

The new Chromecast costs $50, or $20 less than the Chromecast Ultra.

Nest Audio

Details about Nest Audio leaked more or less in full on Monday (courtesy of Walmart). The new speaker, which aligns with the design of the Nest Mini and Nest Hub, is covered in a mesh fabric (made with 70% recycled material) and features four status LEDs and Bluetooth connectivity. It stands vertically and is substantially louder than the original Google Home speaker, with Google claiming it provides 75% louder audio and 50% stronger bass. And like the Google-made smart speakers before it, Nest Audio works with other Nest speakers and displays for multiroom audio and leverages Google Assistant for voice-controlled music, podcasts, and audiobooks from Spotify, YouTube Music, and more.

Google Nest Audio

The rumors haven’t given an indication one way or another, but it’s a safe bet Nest Audio packs a dedicated AI chip for workloads like natural language understanding, speech recognition, and text synthesis. Google introduced such a chip with the Nest Mini and Google Wifi last year, claiming at the time that it could deliver up to a teraflop of processing power.

Nest Audio is expected to cost around $100 and come in several colors.