Micron launches CXL 2.0 memory expansion modules

News Analysis
Aug 18, 2023 | 3 mins
Servers

Memory expansion modules from Micron comply with Compute Express Link 2.0, which promises new security features and far more versatility than previous versions.

Micron has introduced memory expansion modules that support the 2.0 generation of Compute Express Link (CXL) and come with up to 256GB of DRAM running over a PCIe x8 interface.

CXL is an open interconnect standard with wide industry support that is meant to serve as a connection between machines, allowing the contents of memory to be shared directly. It is built on top of PCI Express (PCIe) and provides coherent memory access between a CPU and a device, such as a hardware accelerator, or between a CPU and memory.

PCIe is normally used for point-to-point communication, such as between an SSD and memory, while CXL will eventually support one-to-many communication. So far, however, CXL is capable of only simple point-to-point communication.

Development on the CXL standard began in early 2019, but it has only recently come to market because it required a faster PCIe bus as well as native support from CPU vendors Intel and AMD. Only their most recent CPUs support it.

For now, the initial applications revolve around attaching DRAM to a PCIe interface. That’s what Micron is offering with its CZ120 memory expansion modules. The modules are available in 128GB and 256GB capacities, which is pretty big for a memory module. They use a special dual-channel memory architecture capable of delivering a maximum of 36GB/s of bandwidth.
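For a rough sense of where that 36GB/s figure sits, here is a back-of-the-envelope calculation (my own, not Micron's). The article doesn't state the PCIe generation, but CXL 1.1 and 2.0 run over a PCIe 5.0 physical layer at 32 GT/s per lane, so an x8 link tops out at roughly 31.5GB/s in each direction:

```c
/* Rough sanity check of the CZ120's quoted 36GB/s figure.
 * Assumption (not stated in the article): the module uses a PCIe 5.0 x8
 * link at 32 GT/s per lane with 128b/130b encoding. */
#include <stdio.h>

int main(void) {
    const double gt_per_lane = 32.0;          /* PCIe 5.0: 32 GT/s per lane */
    const double encoding    = 128.0 / 130.0; /* 128b/130b line encoding    */
    const int    lanes       = 8;             /* x8 link, per the article   */

    /* Theoretical payload bandwidth in one direction, in GB/s. */
    double per_direction = gt_per_lane * encoding * lanes / 8.0;

    printf("PCIe 5.0 x8, one direction: %.1f GB/s\n", per_direction);      /* ~31.5 */
    printf("Both directions combined:   %.1f GB/s\n", per_direction * 2);  /* ~63   */
    return 0;
}
```

Micron's 36GB/s figure presumably reflects a mixed read/write workload rather than the raw ceiling of the link.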

Ryan Baxter, senior director of the data center segment at Micron, said security features were paramount in this release. “There are a lot of security features in 2.0 that don’t exist or are not supported in 1.1,” Baxter said. 

Security is important when you have servers talking to each other. The CXL 2.0 standard now supports any-to-any communication encryption through hardware acceleration built into the CXL controllers, which means silicon providers do not have to build encryption into their own hardware.

Baxter said that a lack of security is why customers testing and deploying CXL 1.1 tended to stick to experiments with lower-capacity memory. Some customers may have deployed CXL 1.1 for internal workloads, but many avoided anything really ambitious while they waited for 2.0, he said.

CXL 2.0 will also support persistent memory (PMEM), which retains data like NAND flash but is much faster, approaching DRAM speeds. CXL 2.0 adds distinct support for PMEM as one of a set of pooled resources.

Micron sees two primary use cases for CXL 2.0: adding memory to a system to give the CPU extra capacity under heavy workloads, and supporting bandwidth-intensive workloads, since a PCIe link can deliver more bandwidth than the memory slots.

So, guess which workloads they have in mind? “We’re seeing [interest] with AI training and inference, where these use cases are driving a much bigger memory footprint around the CPU,” said Baxter. He also cited more traditional uses, such as in-memory databases, as benefiting from CXL 2.0 memory capacity.
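On Linux, a CXL memory expansion module like the CZ120 typically shows up as an additional, CPU-less NUMA node rather than as a storage device. The following is a minimal sketch, not a Micron-provided example, of how an application could place a buffer on such a node using libnuma; the node number here is hypothetical, and the real topology can be checked with numactl --hardware:

```c
/* Minimal sketch: placing a buffer on a CXL-attached memory node with libnuma.
 * Assumptions: the OS exposes the expander as a CPU-less NUMA node; node 1 is
 * hypothetical. Build with: gcc cxl_alloc.c -o cxl_alloc -lnuma */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return EXIT_FAILURE;
    }

    int    cxl_node = 1;          /* hypothetical CXL-backed node            */
    size_t size     = 1UL << 30;  /* 1 GiB buffer                            */

    /* numa_alloc_onnode() maps memory preferentially on the given node. */
    void *buf = numa_alloc_onnode(size, cxl_node);
    if (buf == NULL) {
        fprintf(stderr, "allocation on node %d failed\n", cxl_node);
        return EXIT_FAILURE;
    }

    memset(buf, 0, size);         /* touch the pages so they are placed      */
    printf("1 GiB placed on NUMA node %d\n", cxl_node);

    numa_free(buf, size);
    return 0;
}
```

Applications don't have to opt in, either: the operating system can also treat CXL-attached capacity as ordinary far memory and tier colder pages onto it automatically.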

CXL has a lot of planned obsolescence in it. The 2.0 version is not backwards compatible with 1.1, and the 3.0 version will not be compatible with 2.0, since 3.0 uses the next generation of PCIe. Baxter doesn’t expect to see significant 2.0 use and product availability until at least next year if not 2025.

Andy Patrizio is a freelance journalist based in southern California who has covered the computer industry for 20 years and has built every x86 PC he’s ever owned, laptops not included.

The opinions expressed in this blog are those of the author and do not necessarily represent those of ITworld, Network World, its parent, subsidiary or affiliated companies.