AMD said it’s raising the bar for performance and price against Intel while introducing new security features in the data center with the launch of its third-generation EPYC processors, which represent the company’s broadest lineup yet for cloud, enterprise and high-performance computing workloads.
The new processors, code-named Milan, were announced Monday, only a few weeks before Intel is set to launch its next-generation Xeon Scalable processors, code-named Ice Lake, setting up a new refresh battle over customers in a data center market that is still dominated by Intel.
The Santa Clara, Calif.-based chipmaker said more than 100 new server platforms from Dell Technologies, Hewlett Packard Enterprise, Supermicro and other vendors will support the new processors. Cloud service providers like Amazon Web Services and Microsoft Azure also plan to ramp up their use of AMD EPYC processors, which are expected to power more than 400 cloud instances by the end of the year.
AMD said the 19 new processors, which feature up to 64 cores and boost frequencies of up to 4.1 GHz, extend the chipmaker’s competitive advantage further than 2019’s second-generation EPYC Rome lineup and provide performance that is “head and shoulders” above Intel’s second-generation Xeon Scalable processors, which include its rival’s latest Cascade Lake refresh parts from last year. (The company said it did not have public performance figures for Intel’s upcoming Ice Lake processors.)
The new Milan processors also continue EPYC’s “perfect” total cost of ownership (TCO) story, according to company executives, giving data center operators more performance for their dollar on top of other economic benefits that come with higher core counts.
“Third-gen EPYC is the highest-performance server processor in the industry, at the socket level and at the per-core level when you look at every core-count boundary,” said Ram Peddibhotla, AMD’s corporate vice president of EPYC product management, in a pre-briefing. “And as a result of that performance, third-gen EPYC is the best choice for cloud, for HPC and enterprise, where it drives significant TCO value while providing state-of-the-art security and data protection.”
Another strength of Milan, according to Peddibhotla, is that it can run on existing server platforms for second-generation EPYC processors with a BIOS update, without losing any of Milan’s new capabilities, which means that it will be easier for partners to upgrade. This, he added, contrasts with Intel’s approach, which will require partners to adopt a new server platform, Whitley, with the upcoming Ice Lake chips, and another, Eagle Stream, with the Sapphire Rapids chips that come after.
“The third-gen EPYC processors come with complete platform leverage, and this reduces our partners’ investment in terms of adopting third-gen EPYC, and it reduces their end customer [qualification] cycles as well,” Peddibhotla said.
Dan McNamara, a former Intel executive who leads AMD’s EPYC business, said the Milan processors are two times faster than Intel’s second-generation Xeon Scalable chips for enterprise, cloud and HPC workloads based on internal tests conducted by AMD using the SPECjbb2015, SPECrate2017_int_base and SPECrate2017_fp_base benchmark tools, respectively.
To encourage adoption, the processors will be supported by a large and growing ecosystem of cloud, hardware and software vendors as well as appliances and reference architectures that address various business needs, according to McNamara. Among the 80-plus vendors supporting Milan are Amazon Web Services, Cisco, Cloudera, Docker, Dell Technologies, Google Cloud, Hewlett Packard Enterprise, IBM Cloud, Juniper Networks, Lenovo, Microsoft Azure, Nutanix, Oracle, Red Hat, SUSE and VMware.
“We’re really trying to solve IT problems,” McNamara said. “It’s not just about delivering performance and TCO, but it’s also honing it with a solution set on top of it, so that customers can very quickly accelerate the value that we’re bringing, and that’s really the focus of Milan going forward.”
Milan is based on AMD’s 7-nanometer Zen 3 architecture, which debuted last fall with the launch of AMD’s Ryzen 5000 desktop processors. Zen 3’s improvements over the previous Zen 2 architecture include a 19 percent increase in instructions per clock as well as a larger L3 cache in each chiplet of the processor, which gives as many as eight cores access to the same 32 MB cache.
The new processors also expand flexibility for memory channels, going beyond the four- and eight-channel options from the second generation with a third option for six channels.
As for security, Milan improves upon EPYC’s Secure Encrypted Virtualization capabilities from the second generation with a new feature called Secure Nested Paging that provides new protection for a guest operating system against an untrusted hypervisor. In addition, the new processors come with Shadow Stack, a new silicon-level defense against control-flow hijacking attacks, which let attackers manipulate memory to modify existing code.
Other features include 128 lanes of PCIe 4.0 connectivity, up to 4 TB of memory capacity and support for a maximum DDR4 frequency of 3,200 MHz. All features in Milan are present across the entire stack of processors, which executives said is another way the lineup differentiates from Intel’s.
AMD is splitting its 19 Milan processors into three groups to address different needs across the data center market: core performance for customers that need higher frequency and more cache per core, core density for customers that need high core and thread counts, and balanced and optimized for customers that need a mix of performance and total cost of ownership optimization. Four of the processors are designed for single-socket servers while the rest are for dual-socket servers.
Peddibhotla said the lineup represents AMD’s “customer-centric approach” to the data center market, allowing customers to find the right processor for their workloads without needing to pay more for features or capabilities that they don’t need. He added that AMD will continue to sell its second-generation EPYC processors alongside the new parts.
The volume pricing for the Milan processors represents an increase of 5-19 percent over comparable models from the second-generation EPYC lineup, according to a CRN analysis. An AMD spokesperson said the company is positioning the third-generation EPYC processors as providing the “ultimate performance at each core boundary” while the second-generation will provide “fantastic performance and value.”
The core performance group is a refresh of the high-frequency EPYC 7Fx2 processors AMD introduced last year for enterprise workloads, such as hyperconverged infrastructure, relational databases and commercial HPC. The four processors consist of eight-, 16-, 24- and 32-core options with a boost frequency of 4 GHz and an additional 100 MHz for the eight-core model.
The core density group is designed for cloud, HPC and high-end enterprise segments. It consists of five high-core count processors with 48-, 56- and 64-core options. One 64-core model is for single-socket servers, and the 56-core processor represents a new core-count option from the previous generation.
The balanced and optimized group is designed for the “heart of the enterprise,” including hyperconverged infrastructure, data analytics, database, collaboration, web and Java workloads, according to Peddibhotla. The lineup consists of 10 processors with options for 16-, 24-, 28- and 32-core counts, and three of them are for single-socket servers with the rest meant for dual-socket.
Peddibhotla said the Milan processors provide strong competition against Intel’s second-generation Xeon Scalable lineup across the stack, based on estimates for relative performance using the SPECrate2017_int_base cloud benchmark. For instance, AMD’s new 28-core EPYC 7453 processor, a new core-count option, provides slightly higher performance than Intel’s flagship 28-core Xeon Gold 6258R, but the EPYC 7453’s volume pricing is $1,570 per chip, less than half of the Xeon Gold 6258R’s $3,950 recommended customer pricing, giving it a “dominant performance-per-dollar” ratio, he said.
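The performance-per-dollar claim above comes down to simple arithmetic. A minimal sketch of the comparison, using the list prices from the article (the relative SPECrate2017_int_base scores are illustrative placeholders, not AMD's or Intel's published figures):

```python
# Performance-per-dollar comparison sketch.
# Prices come from the article; the relative-performance values are
# illustrative placeholders ("slightly higher" for the EPYC 7453),
# not published benchmark scores.
chips = {
    "EPYC 7453 (28-core)":       {"price": 1570, "relative_perf": 1.05},
    "Xeon Gold 6258R (28-core)": {"price": 3950, "relative_perf": 1.00},
}

for name, c in chips.items():
    perf_per_dollar = c["relative_perf"] / c["price"]
    print(f"{name}: {perf_per_dollar * 1000:.2f} relative perf per $1,000")
```

Even with roughly equal performance, the roughly 2.5x price gap is what drives the “dominant performance-per-dollar” framing.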
Further down the stack, Peddibhotla said, AMD’s $650 16-core EPYC 7282 from the second generation already outperforms Intel’s $900 16-core Xeon Silver 4216. But with the introduction of three new 16-core options in the third generation, AMD’s performance advantage goes even further, he added, though they are $183 to $2,621 more expensive than Intel’s part.
Beyond Milan’s price-performance advantages, Peddibhotla demonstrated how the higher core density and performance of third-generation EPYC can reduce the total cost of ownership for data centers. He showed that 32 servers running 64-core AMD EPYC 7763s can provide the same level of integer performance as 63 servers running 28-core Intel Xeon Gold 6258Rs, cutting the number of servers by nearly half while reducing space by 25 percent and power by 35 percent. That amounts to an estimated 35 percent reduction in total cost of ownership over four years, according to Peddibhotla.
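The consolidation math behind that claim can be checked with back-of-the-envelope arithmetic. A minimal sketch: only the server counts (32 vs. 63) come from AMD's example, and the per-server performance ratio is an assumption derived from those counts, not a published figure:

```python
# Server-consolidation arithmetic from AMD's TCO example.
# Only the server counts (32 AMD vs. 63 Intel) come from the article;
# the per-server performance ratio is back-derived from them.
import math

target_performance = 63.0          # total workload, in "Xeon-server" units
perf_per_intel_server = 1.0        # baseline: one 28-core Xeon Gold 6258R server
perf_per_amd_server = 63.0 / 32.0  # one 64-core EPYC 7763 server (~1.97x, assumed)

intel_servers = math.ceil(target_performance / perf_per_intel_server)
amd_servers = math.ceil(target_performance / perf_per_amd_server)

reduction = 1 - amd_servers / intel_servers
print(f"Servers needed: {intel_servers} Intel vs. {amd_servers} AMD "
      f"({reduction:.0%} fewer)")
# → Servers needed: 63 Intel vs. 32 AMD (49% fewer)
```

The space and power savings AMD cites then follow from running fewer, denser boxes; the four-year TCO figure layers facility and operating costs on top of this hardware count.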
“In many cases, there is software licensing that’s layered in as well,” he said. “High performance would mean that IT decision makers can purchase fewer software licenses, and that’s an added bonus that becomes a multiplier effect on top of this bare-metal TCO savings that we’re showing here.”