CPU Benchmark Performance: AI and Inferencing

As technology progresses at a breakneck pace, so too do the demands of modern applications and workloads. With artificial intelligence (AI) and machine learning (ML) becoming increasingly intertwined with our daily computational tasks, it's paramount that our reviews evolve in tandem. Recognizing this, we have incorporated AI and inferencing benchmarks into our CPU test suite for 2024.

Traditionally, CPU benchmarks have focused on various tasks, from arithmetic calculations to multimedia processing. However, with AI algorithms now driving features within some applications, from voice recognition to real-time data analysis, it's crucial to understand how modern processors handle these specific workloads. This is where our newly incorporated benchmarks come into play.

With chip makers such as AMD (Ryzen AI) and Intel (the Meteor Lake mobile platform) now building AI-driven hardware into their silicon, 2024 looks set to bring many more applications using AI-based technologies to market.

We are using DDR5 memory on the Core i9-14900KS, as well as on the other Intel 14th Gen Core series processors, including the Core i9-14900K, Core i7-14700K, and Core i5-14600K, and on Intel's 13th Gen chips, at their respective JEDEC settings. The same methodology is also used for the AMD Ryzen 7000 series and Intel's 12th Gen (Alder Lake) processors. Below are the settings we have used for each platform:

  • DDR5-5600B CL46 - Intel 14th & 13th Gen
  • DDR5-5200 CL44 - Ryzen 7000
  • DDR5-4800B CL40 - Intel 12th Gen

The charts in this section cover the following tests:

  • (6-1) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)
  • (6-1b) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)
  • (6-1c) ONNX Runtime 1.14: Super-Res-10 (CPU Only)
  • (6-1d) ONNX Runtime 1.14: Super-Res-10 (CPU Only)
  • (6-2) DeepSpeech 0.6: Acceleration CPU
  • (6-3) TensorFlow 2.12: VGG-16, Batch Size 16 (CPU)
  • (6-3b) TensorFlow 2.12: VGG-16, Batch Size 64 (CPU)
  • (6-3d) TensorFlow 2.12: GoogLeNet, Batch Size 16 (CPU)
  • (6-3e) TensorFlow 2.12: GoogLeNet, Batch Size 64 (CPU)
  • (6-3f) TensorFlow 2.12: GoogLeNet, Batch Size 256 (CPU)
  • (6-4) UL Procyon Windows AI Inference: MobileNet V3 (float32)
  • (6-4b) UL Procyon Windows AI Inference: ResNet 50 (float32)
  • (6-4c) UL Procyon Windows AI Inference: Inception V4 (float32)
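Throughput-style results such as the TensorFlow batch tests above boil down to one number: samples processed per second at a given batch size. Below is a minimal sketch of how such a figure is typically derived, with warm-up passes excluded from the timed window. This is not the actual test-suite code; `fake_model` and every other name here are hypothetical stand-ins for a real ONNX Runtime or TensorFlow session.

```python
import time
import numpy as np

def benchmark_inference(run_fn, batch, warmup=3, iters=10):
    """Time an inference callable and return throughput in samples/sec.

    run_fn is a stand-in for a real inference session's run method.
    """
    for _ in range(warmup):
        run_fn(batch)  # warm-up passes: exclude first-run caching effects
    start = time.perf_counter()
    for _ in range(iters):
        run_fn(batch)
    elapsed = time.perf_counter() - start
    # Total samples processed divided by wall-clock time in the timed window.
    return (iters * batch.shape[0]) / elapsed

# Illustrative stand-in "model": one dense layer over flattened 224x224 inputs.
rng = np.random.default_rng(0)
weights = rng.standard_normal((3 * 224 * 224, 10)).astype(np.float32)

def fake_model(batch):
    return batch.reshape(batch.shape[0], -1) @ weights

batch16 = rng.standard_normal((16, 3, 224, 224)).astype(np.float32)
print(f"{benchmark_inference(fake_model, batch16):.1f} samples/sec")
```

Running the same harness at batch sizes 16, 64, and 256 is what produces the scaling picture the GoogLeNet charts show: larger batches amortize per-call overhead and generally raise throughput until the cores saturate.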

Regarding AI and inferencing workloads, there is virtually no difference or benefit in going for the Core i9-14900KS over the Core i9-14900K. While Intel takes the win in our TensorFlow-based benchmarks, the AMD Ryzen 9 7950X3D and 7950X both seem better suited to the types of AI workloads we've tested.


  • Ryan Smith - Saturday, May 11, 2024 - link

    We just finished reviewing it. Several things came in/required attention at once.
  • lemurbutton - Friday, May 10, 2024 - link

    Anandtech should focus on comparing x86 CPUs to ARM CPUs if they want to differentiate. Any media outlet can run the benchmarks provided here, but few, if any, are doing Intel/AMD vs Apple Silicon thoroughly.

    M4 ST speeds are 20%+ faster than an Intel 14900KS.
  • evanh - Saturday, May 11, 2024 - link

    The problem is the same as it always has been - it's not a fair hardware comparison when the APIs (OS/libraries) are different, compilers are different, power settings are different, and both hardware-centric and software-centric optimisations are different.

    Example: it's bad enough just comparing the same OpenGL package this way; comparing D3D against OGL never worked.
  • goatfajitas - Sunday, May 12, 2024 - link

    And there you are again, spouting Apple nonsense. Yes, in Apple's ARM-favoring benchmarks ARM CPUs do better. It has little relevance to x86 CPUs, which are far more powerful. ARM is great for mobile. NEXT!
  • evanh - Sunday, May 12, 2024 - link

    Apple strongly supported Intel parts for quite a number of years. They're moving on precisely because Intel hasn't been performing for a while now.
  • Igor_Kavinski - Saturday, May 11, 2024 - link

    Hi Gavin! Can you please point to official URLs/documents that provide the JEDEC timings for Intel and Ryzen CPUs, based on which you chose your settings? Thanks!
  • thestryker - Sunday, May 12, 2024 - link

    I look forward to future power profile testing, as the issues with RPL die seem to have finally forced Intel's hand with regards to default settings. Would also love to see some memory speed scaling tests, as this hasn't really been done in depth that I've seen.
  • xray9 - Sunday, May 12, 2024 - link

    High power consumption, with overclocking potential seemingly depleted from the get-go due to intensified competition with AMD. This likely leads to instabilities, exacerbated by additional tuning from motherboard manufacturers, which used to be a non-issue. One could also argue that the design is in need of renewal, or that one must abandon overclocking in this area - both motherboard manufacturers and customers alike.
  • edlee - Sunday, May 12, 2024 - link

    This CPU is a dumpster fire waiting to happen; eventually the AIO will fail, and even with a correctly sized cooler this will heat up your room pretty quickly. Intel needs to go back and learn how to correctly make a CPU that doesn't need more than a 130W cooler. This is insane. Work on IPC, and IPC alone; I don't give a hoot about max clock speeds.
  • doncerdo - Sunday, May 12, 2024 - link

    Seeing the results, the conclusion from the AT of old would have been simple: do not buy unless you want a binned CPU for LN2 overclocking. A bit disappointing - tons of data, terrible conclusion.
