The AI chip is so big that it needs a dedicated cooling system and is housed in servers designed specifically to run AI software.
Modern CPUs pack a huge number of transistors: the 7nm Epyc Rome CPU for servers and data centers, which AMD introduced last week, contains 32 billion of them. But next to the giant AI chip announced by the startup Cerebras Systems, even that figure looks tiny.
The newly launched chip, focused on artificial intelligence, is called the Wafer Scale Engine (WSE). A square more than 200 mm on each side, it contains nearly 1.2 trillion transistors, dozens of times more than conventional chips.
Compared to the Cerebras WSE, Nvidia's Tesla V100 processor looks tiny.
On top of that, the WSE packs 400,000 sparse linear algebra cores with 18 GB of on-chip memory, and its total memory bandwidth reaches 9 PB/s (one petabyte is roughly 1 million GB). The entire chip is built on TSMC's 16nm FinFET process.
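As a rough back-of-the-envelope check of the figures above (using the article's own approximation that 1 PB is about 1 million GB; the per-core average is simply the total divided by the core count, not a Cerebras specification):

```python
# Headline figures from the article
wse_transistors = 1.2e12   # ~1.2 trillion transistors on the WSE
epyc_transistors = 32e9    # AMD 7nm Epyc Rome: ~32 billion transistors
cores = 400_000            # sparse linear algebra cores on the WSE
bandwidth_pb_per_s = 9     # total on-chip memory bandwidth, PB/s

# The WSE has dozens of times more transistors than a big server CPU
ratio = wse_transistors / epyc_transistors
print(f"WSE has ~{ratio:.1f}x the transistors of Epyc Rome")

# 9 PB/s expressed in GB/s, using 1 PB ≈ 1,000,000 GB
bandwidth_gb_per_s = bandwidth_pb_per_s * 1_000_000
print(f"9 PB/s ≈ {bandwidth_gb_per_s:,} GB/s")

# Average bandwidth available per core
print(f"≈ {bandwidth_gb_per_s / cores:.1f} GB/s per core on average")
```

This works out to roughly 37.5 times the transistor count of Epyc Rome and about 22.5 GB/s of memory bandwidth per core on average.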
Since the entire chip is built from a single wafer, the company had to use routing techniques to bypass defective cores on the die and keep the whole core array connected even when cores in some region of the wafer are damaged. The company said it also places spare cores on the die, but did not discuss these methods in detail.
Co-founder Sean Lie, who is responsible for the chip’s design and technology architecture.
The cooling system for this giant chip is also designed completely differently from that of conventional chips. Above the chip sits a giant cold plate, with a series of vertical water channels that cool the processor directly.
A processor as large and powerful as the WSE is clearly not meant for personal computers. Instead, it is built for artificial intelligence workloads.
Placed side by side, the WSE chip is wider than a Mac wireless keyboard.
According to Andrew Feldman, founder of Cerebras Systems, artificial intelligence software needs a huge amount of data to improve its capabilities, so processors need to handle that data as fast as possible – even if that requires a chip of enormous size.
Cerebras does not plan to sell the chip on its own. Its unusual size and cooling system make it difficult to integrate into conventional computer systems. Instead, the processor will be part of a new server installed in data centers. The company said it is testing these systems with a number of potential customers and will start shipping commercial machines in October.
Chip packaging and testing system developed by Cerebras.
So far, Cerebras has raised more than $100 million from Silicon Valley investors including Benchmark, Andy Bechtolsheim, and Sam Altman. Feldman also has a team of 174 engineers and support from TSMC for producing the giant WSE chips.
The AI chip market also includes technology giants like Nvidia and Intel, as well as the British startup Graphcore Ltd. Google, too, has its own self-developed processors, called Tensor Processing Units, which accelerate AI workloads.
Source: Genk