AMD recently launched their new 6 core Ryzen 5 CPUs, the 3600 and 3600X, but what are the
differences between them and is it worth paying more for the X? In this detailed comparison
we’ll look at the differences in games and applications, both at stock and while overclocked
to help you decide which to get.
Let’s start with the specs. Both CPUs have 6 cores and 12 threads, are unlocked,
and have the same amount of cache. As for the differences, the 3600X has higher base
and boost clock speeds, and is listed with a higher TDP.
Both CPUs were tested in the same system, so the only difference was the CPUs and their
coolers. Both CPUs were tested with the ASRock X570 Taichi motherboard with 16GB of DDR4-3200
memory running in dual-channel at CL14 with an Nvidia RTX 2080 Ti. I’ve tested both
CPUs with the stock coolers that they come with in the box as well as the thermal paste
that comes pre-applied, as I figure most people will probably end up using these. The 3600
comes with the Wraith Stealth, and the 3600X comes with the larger Wraith Spire. This difference
in cooling capacity also seems to be why the X model is listed with a higher TDP, as we’ll
see it’s not that much more powerful.
Testing was completed with the same version of Windows, Nvidia drivers, and BIOS for each
CPU, all of which were the latest available version at the time of testing.
I’ve tested both CPUs at stock and overclocked. I managed to get my 3600 to 4.1GHz and the
3600X to 4.2GHz, but this was a limitation of the coolers. I’ve been able to get my
3600 to 4.2GHz with an AIO in the past, but as you’ll see later the temperatures get
quite high with the smaller cooler which is why the max overclock was 100MHz less.
With that in mind we’ll first check out the differences in various applications, as
well as power draw and thermals, followed by gaming tests at 1080p and 1440p resolutions
afterward, then finish up by comparing some performance per dollar metrics.
Let’s start with Cinebench R20. I’ve got the overclocked results on the upper half
of the graph while the stock results are in the lower half. As expected the 3600X is coming
out ahead due to those higher clock speeds. As both CPUs have the same core count though
there’s not too big of a difference. At stock, the 3600X was around 5% faster than
the 3600, both in single and multicore performance. Once both are overclocked though this lowers
to just a 2% lead. It’s worth noting that the single-core results are lower with the
overclocks applied on both CPUs. This is because a manual all-core overclock caps every core
at a frequency below what either chip can boost to out of the box in single-core workloads.
Although Cinebench R15 has been replaced by the newer R20 just covered, I wanted to also
include the results of this one too, as many other people are still using it and that may
allow you to compare my results. At stock, there’s a similar 5% gain in single-core
performance, just like in R20, though multicore was only 3% ahead with the 3600X. Once both
are overclocked the gap closes, with both single and multicore scores on the 3600X no
more than 2% ahead of the 3600. Again we’ve got lower single-core performance when manually
overclocked, as this prevents the higher boost speeds that can be hit in single-core workloads.
I’ve tested Blender with the BMW and Classroom benchmarks, and this is a test that works
better with more cores. As we’ve got the same core count there’s only a small difference
due to the clock speed changes. At default stock speeds the 3600X was completing both
tasks 4.5% faster than the 3600; however, once both are overclocked the 3600X is only around
2% ahead.
Handbrake was used to convert a 4K file to 1080p, and then a different 1080p file to
720p. This is another workload that benefits from more threads, so the results are pretty
close together as both CPUs have the same 6 cores and 12 threads. At stock the 4K task completed
just 3.7% faster on the 3600X, and 2.6% faster for the 1080p task. When overclocked the difference
dropped to 2.1% and 1.6% respectively, so hardly any real difference between these two
chips.
Adobe Premiere was used to export one of my laptop review videos at 1080p, and the results
between the two CPUs were again very close here. The 3600X was completing the export
3.6% faster than the 3600 at stock speeds, lowering to a 2.2% faster export time with
both processors overclocked.
I’ve also tested the warp stabilizer effect in Adobe Premiere, basically this processes
a video file to smooth it out. This is a single-core workload, and as my manual all-core
overclocks technically lower the single-core boost speed, as we saw in the Cinebench results
earlier, I haven’t bothered testing the manual overclocks as they’d just be worse. In this
test the 3600X was 3.7% faster than the 3600.
This is the first time I’ve attempted to test Photoshop using the Puget Systems benchmark
tool. The 3600 saw no major difference once overclocked, and the 3600X actually got
worse performance with the overclock applied, so I suspect some components of this test
favor single-core performance, which my manual all-core overclock lowers.
I’ve used 7-Zip to test compression and decompression speeds. As another test that
favors additional cores, we’re again seeing close results, just small differences due
to the clock speeds. At stock the 3600X is around 2% faster than the 3600, then when
overclocked compression is only 0.3% faster, but decompression saw an increase
to 3.2%.
VeraCrypt was used to test AES encryption and decryption speeds. At stock both were
very close in encryption speeds, realistically a margin-of-error difference, while decryption
saw a 5% higher speed with the 3600X. This was another test where I didn’t bother with
overclocking, as in the past I’ve found overclocking to produce strange results with
this test such as worse speeds.
The V-Ray benchmark is another multicore test that relies on thread count to boost performance,
and as a result of both having the same core count and close clock speeds, the 3600X was
2.7% faster than the 3600, both at stock and when overclocked.
The Corona benchmark uses the CPU to render out a scene, and like most of these other
applications is a multicore test, so there were no major differences. The 3600X completed
the render task 2.8% faster than the 3600 at stock and 2.2% faster with both CPUs overclocked,
or in other words from just 4 seconds faster to 3 seconds.
These are the differences between the 3600 and 3600X CPUs in all of these applications,
as we can see results depend on the specific workload. At stock in almost all of these
tests the 3600X was coming out ahead, as expected due to its higher clock speeds. The
single-core tests tended to show the largest gaps, due to the different single-core boost
speeds of each chip.
Once we overclock both CPUs on all cores the difference between them narrows a bit,
putting the 3600X just 2% faster than the 3600 on average, so a pretty small difference
and not a good start for the X version.
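The percentage leads quoted throughout are simple score ratios. A minimal sketch of how they’re derived (the scores below are illustrative placeholders, not my exact results):

```python
def percent_lead(score_a: float, score_b: float) -> float:
    """How much faster A is than B, in percent, for score-based
    benchmarks where higher is better (e.g. Cinebench points)."""
    return (score_a / score_b - 1) * 100

# Illustrative multicore scores, not the exact results from this test
stock_3600x, stock_3600 = 3700, 3520
oc_3600x, oc_3600 = 3850, 3775

print(f"stock lead: {percent_lead(stock_3600x, stock_3600):.1f}%")  # ~5.1%
print(f"OC lead:    {percent_lead(oc_3600x, oc_3600):.1f}%")        # ~2.0%
```

For time-based results like Handbrake or Blender, where lower is better, the ratio flips to `(time_b / time_a - 1) * 100`.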
I’ve also measured total system power draw from the wall while running the Blender benchmark.
At stock, the 3600X was using 4.9% more power than the 3600, which almost scales with the
4.5% faster speed that it scored in this test. With both overclocked, the 3600X was now using
3.3% more power, but the difference in actual Blender performance was closer to 2% now.
These are the CPU temperatures with the same Blender tests running. Both at stock and while
overclocked the 3600X was significantly cooler. My 4.1GHz overclock on the 3600 on the stock
cooler was probably a bit optimistic, though it was perfectly stable under all of these
tests.
The main reason the 3600X is running cooler despite performing better is down to the coolers:
the Wraith Spire that it comes with is just physically larger than the Wraith Stealth
that comes with the 3600.
As mentioned earlier, I couldn’t push the 3600 any higher in Blender due to thermal
limitations with the stock cooler, but I could reach 4.2GHz with an AIO. So with a better
cooler and both CPUs running at the same speed, I’d expect no difference in performance.
These are the average clock speeds during the Blender stress test, so overclocking the
3600 could get it to hit the same speeds as the stock 3600X, though due to the better
cooling the 3600X did occasionally boost up a little more.
Anyway, at stock the 3600 was on the warmer side but still usable. If you want to overclock,
an aftermarket cooler might be worth looking into, though the results from the 3600X already
show that there’s not too much extra to gain. This testing was with the stock thermal
paste that came pre-applied to both coolers, different paste could likely further improve
performance.
Let’s get into the gaming results next. I’ve tested these games at max settings
at both 1080p and 1440p resolutions. We’ll start with stock results, then look at
Precision Boost Overdrive results afterward.
Shadow of the Tomb Raider was tested with the built-in benchmark at the highest settings.
In all upcoming gaming graphs, I’ve got the 1440p results shown on the upper half and
1080p results below. In this test, there was no practical difference, just 1
FPS either way.
Assassin’s Creed Odyssey was also tested with the built-in benchmark at max settings.
Again the difference between the two was extremely small, within 1 FPS in terms of average FPS,
and slightly higher differences for 1% low.
Battlefield 5 was tested in campaign mode at ultra settings with the same test done
on both CPUs. The results are even closer here; they’re essentially the same, and it’s
too close to definitively say one’s better than the other.
Borderlands 3 was tested with the built-in benchmark at ultra settings. Again just a
1 FPS or so difference between these two CPUs, nothing amazing going on here.
Fortnite was tested with the replay feature, and the same replay was used on each CPU.
Although it appears there’s a bigger difference, that’s just because the frame rates
are higher than in the games shown previously; in terms of percentage change there’s
basically no difference.
CS:GO was tested with the uLLeticaL FPS benchmark, and I thought this one would be interesting
given single-threaded CPU performance seems to matter more here compared to many other
titles. Sure enough, there was an 11% improvement to average FPS at 1080p with the 3600X, but
as we’ve seen this is the only game so far to see any sort of difference.
I’ve also tested out Rainbow Six Siege with the built-in benchmark. Like most of the other
games, we’re back to seeing no noteworthy differences between the two CPUs.
We’ve also got the option of enabling PBO through the Ryzen Master software with both
CPUs. I’ve retested Battlefield 5 and we can see just a tiny improvement over stock
settings at 1080p with both CPUs, and due to this insignificant change I haven’t bothered
retesting all of the other games.
I also haven’t tested with manual overclocks like I did with the multicore applications
earlier, as I’ve found many games benefit more from PBO on Ryzen, since they still get
to keep their higher single-core boost speeds.
To summarize the gaming results: there are no real differences. Either CPU
will perform essentially the same in games, so that just leaves us with the final difference,
the price.
Prices will change over time; you can find updated prices linked in the description.
At the time of recording, the Ryzen 5 3600 is going for 200 USD, while getting that
extra letter X and the better cooler is going to cost an extra 35 USD, or 17.5% more money,
at least with the current sale. If you’re paying the full MSRP of 250 USD it’s 25% more
money.
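Those premium percentages are straightforward to verify; a quick sketch using the prices above:

```python
# Prices in USD at the time of recording
price_3600 = 200
price_3600x_sale = 235   # 3600 price plus the extra 35 USD on sale
price_3600x_msrp = 250   # full MSRP

premium_sale = (price_3600x_sale - price_3600) / price_3600 * 100
premium_msrp = (price_3600x_msrp - price_3600) / price_3600 * 100

print(premium_sale)  # 17.5
print(premium_msrp)  # 25.0
```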
These are the dollar per frame values at 1080p averaged out over all games tested. Basically
this just shows that the 3600 is better in terms of value: it’s 35 to 50 USD cheaper,
and as we saw it performed essentially the same as the 3600X for the most part, so this
makes sense.
It’s not all about gaming though. I’ve also chosen Handbrake performance to compare,
as this is a real-world workload whose result sat close to the average across all
applications tested. In terms of value, the 3600 is again winning. The 3600X was performing 3.7% better
in this test, but if you buy it at the sale price it’s still 13% less value, or over
20% more in terms of dollar per frame at the full MSRP.
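That value math comes from comparing dollars per unit of performance. A sketch using the 3.7% Handbrake lead and the prices from earlier:

```python
# Prices in USD; performance normalized so the 3600 = 1.0
price_3600, perf_3600 = 200, 1.0
perf_3600x = 1.037  # 3600X was 3.7% faster in this Handbrake test

# Dollars per unit of performance, relative to the 3600
dollars_per_perf_3600 = price_3600 / perf_3600
ratio_sale = (235 / perf_3600x) / dollars_per_perf_3600
ratio_msrp = (250 / perf_3600x) / dollars_per_perf_3600

print(f"{(ratio_sale - 1) * 100:.0f}% more per unit of performance on sale")  # ~13%
print(f"{(ratio_msrp - 1) * 100:.0f}% more at full MSRP")                      # ~21%
```

Paying more dollars per unit of performance is the same thing as getting less value, which is why the 3600X trails here despite its small performance lead.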
With both CPUs overclocked the picture doesn’t change much, as the overclocks made
little difference. The 3600 is still ahead in terms of value
here, without sacrificing much performance at all.
That pretty much sums up the conclusion. Although the 3600X does perform slightly better
out of the box, in particular in terms of single-core speeds, the extra price makes
it difficult to justify compared to the 3600. You do also get the better stock cooler
with the 3600X, but I don’t think that’s worth 35 to 50 USD when you could instead
put that budget into an even better aftermarket option.
So to summarise, based on the current prices I wouldn’t consider buying the 3600X unless
the prices were much closer together; the small improvement in performance just doesn’t
seem worth it, whether we look at gaming or other productivity workloads.
Let me know which CPU you’d pick and why down in the comments, the 3600, or 3600X,
and if the 3600X I’m keen to hear why it’s worth the extra money, or maybe
it was just on a great sale? I’ve got more CPU comparisons on the way, so if you’re
new to the channel you’ll want to get subscribed for those as well as future
tech videos like this one.
AMD 1920X vs 2920X
AMD recently launched their new 12 core Threadripper 2920X CPU, but just how much of an upgrade
is it over the older 1920X, and which should you consider buying? We’ll take a look at
both CPU and game benchmarks in this comparison to help you find out.
Let’s start by comparing the specs between these two CPUs to give us an idea of what’s
different between the first and second-generation chips.
Both are 12 core 24 thread parts, however, the 2920X is based on AMD’s newer Zen+ architecture
and has slightly higher clock speeds. The 2nd generation also has some extra features
like XFR2 and Precision Boost Overdrive, which we’ll get into later.
Before we dig into the results, I’ll briefly cover the specs of the system that I’m
testing with. I’ve got the MSI MEG X399 Creation motherboard with the latest BIOS
update available applied. There are 4 sticks of G.Skill Flare X DDR4-3200 CL14 memory running
in quad-channel, and I’m also using my trusty EVGA GTX 1080 FTW2 graphics card, not the best
but it’s what I’ve got available to test with.
For the CPU cooler, I’m using the Enermax Liqtech 240 with its included thermal paste
as that’s what was available, so both the 1920X and 2920X CPUs were tested in the same
system and conditions.
Both CPUs were tested at stock speeds and while manually overclocked. It’s worth keeping
in mind that the overclocks on your particular CPU will vary anyway based on many other factors
such as cooling and the silicon lottery. With that in mind, I was able to get my 2920X to
4.2GHz at 1.35 volts on all 12 cores, and the 1920X at 4.0GHz with the same 1.35 volts,
and overclocking was done using the Ryzen Master software. I didn’t spend very much
time tweaking the overclocks though, so they could probably be dialed in a bit better.
Precision boost overdrive is a new feature present in Threadripper 2, so I’ve also
included results with this on the 2920X, but not the 1920X as it’s not supported. Basically,
it automatically raises the frequency and power limits like overclocking, but still lets
the CPU use Precision Boost 2 and XFR2, so we might see better results in some tests
compared to the manual all-core overclocks, particularly in single-threaded workloads where
the cores can still boost above what my lower all-core overclock would set. Unfortunately,
like overclocking this does void your warranty, although it’s not exactly clear whether
anyone can determine you’ve done this.
Alright, that’s a lot of explanation; now for the results. We’ll start with
the CPU benchmarks, followed by the games afterward. These tests were all completed with
distributed memory access mode enabled through the Ryzen Master software, which is the
default and is recommended for multicore workloads.
Starting with Cinebench at stock settings the 2920X is 6% ahead of the 1920X in the
multi-core test and 4% ahead in the single-core test. With the 1920X overclocked it’s
able to start scoring better than the 2920X at stock in multi-core performance, but as
this caps all cores at 4GHz, it’s getting slightly lower single-core
performance. With both CPUs overclocked the difference lowers slightly, with the overclocked
2920X 4.6% ahead of the overclocked 1920X in the multicore result.
In Adobe Premiere I’ve tested using the latest 2019 version by exporting one of my
laptop reviews at 1080p. At stock settings the 2920X is completing the task just 4% faster
than the 1920X. With both CPUs overclocked we’re seeing similar improvements in terms
of time saved, but this equates to the overclocked 2920X now completing the task around 7% faster
than the overclocked 1920X. I’ve also tested the Warp Stabilizer effect
in Adobe Premiere, although this was only done with a single instance running at once
rather than multiple. At stock speeds, the 2920X is getting this done almost 6% faster
than the 1920X at stock, but once both CPUs are overclocked the 2920X’s lead drops to
a 4% improvement compared to the overclocked 1920X, as in this test the overclock
actually made things worse for the 2920X, which was not the case on the 1920X.
Handbrake was tested by converting a 4K file
to 1080p, then a separate 1080p file to 720p. Starting with the 4K export shown by the blue
bar, at stock the 2920X was performing the task almost 7% faster than the stock 1920X,
and then with both overclocks applied the gap lowers a little, with the overclocked
2920X now just 4% faster than the overclocked 1920X. In the 1080p export result shown by
the purple bar, at stock the 2920X is again around 7% faster than the stock 1920X, and
then with both overclocked the 1920X sees a fairly large improvement, putting the
2920X just 3% faster in this test, with the overclocked 1920X able to just get ahead of
the stock 2920X. Blender was used to test the BMW and Classroom
benchmarks. At stock speeds, the 2920X is completing the BMW benchmark 6.6% faster than
the stock 1920X, and 5.8% faster for the Classroom benchmark. With both CPUs overclocked though,
the 2920X is now just 5% faster than the overclocked 1920X in the BMW benchmark, and just under
3% faster in the Classroom benchmark. The Corona benchmark renders out a scene using
the CPU, and at stock the 2920X is completing the task 6% faster than the 1920X. With both
CPUs overclocked the gap closes a fair bit, with the overclocked 2920X now just 3% faster
than the overclocked 1920X. The V-Ray benchmark also uses the CPU to render
a scene, and in this test at stock speeds the 2920X was just 3.8% faster than the stock
1920X. With both CPUs overclocked the 2920X is now just 4% faster than the overclocked
1920X. 7-Zip was used to test compression and decompression
speeds, and the 2920X came out ahead in every test; even after overclocking, the 1920X
wasn’t quite able to pass the stock 2920X. At stock speeds, the 2920X is performing compression
tasks 4% faster than the stock 1920X, and the stock 2920X is 6% faster when it comes
to decompression. With the overclocks applied though the gap narrows, with the overclocked
2920X now just 2% ahead of the overclocked 1920X for compression, and 4% faster for decompression.
Veracrypt was used to test AES encryption and decryption speeds and in this test the
overclocked 1920X was at least able to surpass the stock 2920X, though realistically none
of the results are too far apart here anyway. At stock speeds, the 2920X is just 2% faster in encryption
and almost 3% better in decryption when compared to the stock 1920X. With both chips overclocked
the 2920X moved further ahead of the overclocked 1920X in encryption, going up to 2.7% faster,
but saw a smaller 2% improvement in decryption.
As we’ve seen the performance can vary a
bit depending on the specific test, though in general, we seem to be looking at anywhere
from a 2% to 8% improvement with the 2920X at stock speeds, with the average improvement
being 5.4% when taking all applications tested into consideration.
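The average figure is just the mean of the per-test stock improvements. A sketch using the rounded percentages quoted above (being rounded, they average out slightly below the exact 5.4% from the raw data):

```python
from statistics import mean

# Approximate stock 2920X-over-1920X leads quoted above, in percent
stock_leads = [
    6, 4,      # Cinebench multicore / single-core
    4, 6,      # Premiere export / warp stabilizer
    7, 7,      # Handbrake 4K / 1080p
    6.6, 5.8,  # Blender BMW / Classroom
    6, 3.8,    # Corona / V-Ray
    4, 6,      # 7-Zip compression / decompression
    2, 3,      # VeraCrypt encryption / decryption
]

print(f"average stock lead: {mean(stock_leads):.1f}%")  # ~5.1% with rounded inputs
```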
With the overclocks in place the average difference between the two CPUs lowers to the 2920X performing
3.9% better than the 1920X, though this will, of course, vary depending on the specific overclocks
that you’re able to get. Now for some games. While I wouldn’t recommend
buying a Threadripper CPU purely for gaming, for many of us, myself included, the reality
is that we have one main system. While I don’t primarily use my 1950X for gaming, I do
play games on it, and this seems to be how AMD is marketing the X series chips, both the
1920X and 2920X: for the professional or enthusiast who also wants to kick back at the
end of the day with some games.
Far Cry 5 was tested specifically because I know it performs better when making use
of local memory and legacy modes, as shown by the results. This is an example demonstrating
that some games will see a performance improvement on Threadripper when memory is made local
or CPU dies are disabled. At stock, the 2920X is getting average frame rates 5% higher than
the 1920X, and at the other settings we can see the 2920X is only just a little in front.
This is not how all games perform though, so you can’t just drop into legacy mode
or change over to local memory from distributed when you plan on gaming to always get the
best performance, as shown here by our favorite Ashes of the Singularity test. The highest
frame rates are seen with distributed memory enabled, the default, with local memory and
half legacy mode lowering frame rates. In this test at stock the 2920X is performing
just 4% better than the 1920X in terms of average frame rates.
At 1440p or 4K resolutions I’d expect much smaller differences even when compared to
using an Intel CPU with faster clock speeds. If you’re after more gaming benchmarks on
these two CPUs along with the other Threadripper chips then I can highly suggest the Hardware
Unboxed video linked in the description. Realistically I think most people, myself
included, won’t bother swapping between distributed and local modes as it requires
a reboot. I’m personally happy playing games in distributed mode, since the small frame
rate boost isn’t worth the reboot, but I’ve tested these games in both to show that
results vary by game anyway.
For the temperatures, as mentioned, I’m testing
with the Enermax Liqtech 240 all in one liquid cooler, as it’s designed for the TR4 socket.
Testing was completed with an ambient room temperature of 22 degrees Celsius with both
CPUs running the Blender classroom benchmark, so sort of a worst-case but realistic load
for these chips. At stock, both were fairly similar, with the 2920X just a couple of degrees
warmer. With PBO enabled on the 2920X it gets a fair bit hotter than any other result, but
with both CPUs manually overclocked the 1920X is getting hotter than the 2920X, despite
it being clocked slower with both being set to 1.35 volts.
Here’s what the total system power draw looked like with both CPUs again running the
same Blender classroom benchmark. The results are in line with what we just saw in the previous
temperature graph, where at stock the 1920X is using less power than the stock 2920X,
but once overclocked the 1920X is using more power than the higher-clocked 2920X at
the same 1.35 volts, which I think starts to show the better efficiency of Threadripper 2.
Finally, let’s discuss pricing. For updated
pricing check the links in the description, as prices will change over time. At the time
of recording, the 1920X is going for 445 USD while the 2920X is going for 650 USD,
so 46% more money for the 2920X. As we’ve seen in the results previously, we’re not
getting anywhere near an equivalent improvement in performance to fully justify that price
increase, so it will depend on whether you want to pay that much more for roughly 5%
better performance on average, at least based on the applications tested here.
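The 46% premium and the value gap can be sanity-checked the same way as before (the 5.4% average gain comes from the application results above):

```python
price_1920x, price_2920x = 445, 650  # USD at the time of recording
avg_gain = 5.4                        # average stock improvement, percent

premium = (price_2920x / price_1920x - 1) * 100
print(f"price premium: {premium:.0f}%")  # ~46%

# Performance per dollar, relative to the 1920X
perf_per_dollar = (1 + avg_gain / 100) / (price_2920x / price_1920x)
print(f"2920X delivers {perf_per_dollar:.2f}x the performance per dollar")  # ~0.72x
```

In other words, at these prices the 2920X returns only around three-quarters of the performance per dollar of the 1920X, which is the gap the conclusion is weighing up.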
The 1st generation Threadripper chips are available for some fairly low prices at the
moment; prices went down quite a bit when the 2nd generation came out. It will be
interesting to see how the 3rd generation affects the price of the 2920X in the future,
as it might end up being more worthwhile compared to the 1920X later.
So overall the 2920X is a nice little incremental improvement over the first generation 1920X.
If you’ve already got the 1920X I don’t think it’s worthwhile upgrading to the 2920X
at the current prices. That’s not to say it’s bad; it’s an excellent CPU for
the price point, especially when you compare it against Intel’s offerings with a similar
core count. It seems like AMD is mostly competing with their own product line at the moment.
It’s great to see the Threadripper product mature over time though. With new features
and improved memory support at cheaper prices, I can see this potentially leading to
higher uptake of the platform. Consider that just over a year ago the 1920X launched at
800 USD, and now, just over a year later, the better 2920X has launched at 650 USD.
Exciting times.
So what did you guys think of the new Threadripper 2920X? I hope the benchmarks and comparison
against the older 1920X have been useful, especially if you’re picking between the
two, let me know down in the comments which you’d pick. Thanks for watching, and don’t
forget to subscribe for future tech videos like this one.