Comparing laptops with Nvidia RTX 2060 and GTX 1070 graphics

I’ve had a lot of requests to compare laptops with the Nvidia RTX 2060 and GTX 1070 graphics,

so that’s exactly what we’re going to do here! We’ll find out which performs better,

how much difference there is, and help you decide which is better value for the price.

Let’s take a look at how the laptop RTX 2060 and GTX 1070 differ in terms of specs. Note that things like clock speed can vary between different laptops based on other factors such as cooling.




The GTX 1070 has more CUDA cores, higher base and boost clock speeds, and also a higher TDP. The 1070 also has 8GB of GDDR5 memory, while the 2060 has 6GB of GDDR6, so less total capacity, but the 2060’s memory is faster.




I’m going to be showing results from four different laptops instead of two: two laptops with RTX 2060 graphics, and two laptops with GTX 1070 graphics. If you’ve seen my other laptop videos, you’ll know the Intel i7-8750H CPU runs hot, and if we’re not hitting thermal throttling we’re reaching power limit throttling instead. This means each laptop can perform differently based on the CPU, so to try and reduce this issue I’ve used double the data compared to usual and taken the averages, which should, in theory, be more accurate.
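To make that averaging approach concrete, here’s a minimal sketch in Python, with made-up placeholder numbers rather than my actual data, of how two laptops’ results for the same GPU could be combined into a single average FPS and 1% low figure.

```python
# Minimal sketch (placeholder numbers, not actual test data): combine the
# results from two laptops with the same GPU into one averaged figure.

def combine(results):
    """Average a list of (average_fps, one_percent_low) result pairs."""
    avg_fps = sum(r[0] for r in results) / len(results)
    low_fps = sum(r[1] for r in results) / len(results)
    return avg_fps, low_fps

# Hypothetical per-laptop results for one game.
rtx_2060_laptops = [(92.0, 70.0), (88.0, 66.0)]
gtx_1070_laptops = [(96.0, 78.0), (93.0, 75.0)]

print("RTX 2060:", combine(rtx_2060_laptops))
print("GTX 1070:", combine(gtx_1070_laptops))
```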




To try and reduce this further I’m only looking at the second-highest graphics settings, as lower settings are more CPU bound, so higher settings let us better see the differences between the graphics. These are also the settings most people would be playing with on this level of hardware anyway. I’ll also note that the 1070 results are from a few months ago, as I don’t have one available for updated testing.




Let’s start with Far Cry 5, which was tested with the built-in benchmark using high settings. In all upcoming graphs, the GTX 1070 results are in the top bar and the RTX 2060 results in the bottom bar. In this game the GTX 1070 comes out ahead, with a 5% higher average FPS compared with the 2060. There’s a much larger change to the 1% low, however, where the GTX 1070 was almost 13% ahead of the RTX 2060.




Rainbow Six Siege was tested with the built-in benchmark at high settings. While not exactly graphically intensive, it’s a game that I and others have found gets a nice performance boost from Nvidia’s new Turing architecture. As a result, the RTX 2060 got 4.6% higher average FPS compared with the GTX 1070, and a larger 9% improvement was seen in the 1% low results. Out of all 12 games tested, this was the largest improvement I saw with the RTX 2060 graphics.




Shadow of the Tomb Raider was tested with the built-in benchmark with very high settings in use. There was only a small improvement with the GTX 1070 here, coming out 1.2% ahead in average FPS compared against the RTX 2060, or more simply put, just 1 FPS ahead, which is realistically margin-of-error territory.




Fortnite was tested using the replay feature at high settings, and again the GTX 1070 performed better here. This game saw the largest improvement for the GTX 1070 out of all 12 games tested, with a 9.7% increase to average frame rate, while the 1% low saw just a 3% improvement over the RTX 2060.




DOOM was tested using Vulkan with high settings, and the GTX 1070 was once more ahead of the RTX 2060 in this game. In terms of average FPS, the 1070 was 8.4% ahead of the 2060, while the 1% low results were closer together, with the 1070 6% ahead. Realistically this difference, like in most games, isn’t enough to even be noticeable; the game was still running very well on either option.




Overwatch was tested in the practice range, as it allows me to perform the same test run, rather than playing with bots or other players, which will always vary. In this test the GTX 1070 was 9.5% ahead of the RTX 2060, making it the game with the second-best result for the 1070 out of all 12 games tested. In terms of 1% low, the 1070 was a bit further ahead, scoring 10.5% above the 2060. Like in other games, however, both are still able to hit above 200 FPS with ultra settings, so it honestly doesn’t make a difference here.




Dota 2 was tested using the replay feature, so each run performs the same test task. I take the average of 3 runs per laptop, so each result is made up of 6 runs in total: 3 from each of the two 2060 laptops for the 2060 result, and 3 from each of the two 1070 laptops for the 1070 result. There’s essentially no difference in terms of average FPS; the 2060 was slightly ahead, but as this is a more CPU-demanding game I don’t think we’re seeing a graphics difference here. It’s just an example that some games are more CPU bound, so the difference in graphics doesn’t matter as much.

Shadow of War was tested using the built-in benchmark with very high settings, and this

was another of the few games tested that seemed to prefer the RTX graphics, at least with

the laptops I was testing with. In this test, the RTX 2060 was 3.7% ahead of the GTX 1070

in terms of average FPS.




I’ve also had to test some older games with the 2060, like Battlefield 1, as Battlefield 5 didn’t exist when I last had a 1070 laptop for testing just a few months back. In this game, with high settings in use, the GTX 1070 achieved 3% higher average FPS compared with the RTX 2060, while there was a larger 7.6% improvement to the 1% low with the 1070.




Watch Dogs 2 was tested as a fairly resource-intensive game. At 1080p with very high settings

the GTX 1070 graphics were just 2% higher in terms of average frame rate, while the

1% low results were even closer together, with a slight edge towards the RTX 2060. Either

way both options are capable of above 60 FPS averages in this game, so both play it just

fine.



Ghost Recon was another resource-intensive game tested with the built-in benchmark using very high settings. The average FPS from the GTX 1070 was 3% higher than the RTX 2060; however, there was a larger difference in 1% low performance, where the 1070 was 14% higher, indicating that the 1070 had fewer dips in performance in this particular test.




The Witcher 3 was tested with high settings and HairWorks disabled. The average frame rates of both graphics options were very close together; however, the RTX 2060 was just over 1 FPS better in my tests. The 1% low, on the other hand, was a different story, with a much larger 11% improvement seen with the GTX 1070 graphics.




I’ve also tested with some synthetic benchmarks, including Unigine’s Heaven, Valley, and

Superposition, as well as 3DMark’s Time Spy, Fire Strike, and VRMark. The RTX 2060

is shown by the purple bars while the GTX 1070 is shown by the red bars, and in most

of these tests, the GTX 1070 was ahead of the RTX 2060 graphics.




In terms of overall improvement over these 12 games, on average the GTX 1070 performed just 2.65% faster than the RTX 2060 in terms of average frame rate. The amount of performance gain or loss depends on the game, but in general there was a small performance improvement with the GTX 1070 graphics. Honestly, the difference doesn’t seem that significant; when it comes down to it you could go either way when buying a laptop, as the GTX 1070 or RTX 2060 should deliver similar performance overall.
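For anyone curious how that overall figure is worked out, here’s a rough sketch, simply assuming an arithmetic mean of per-game percentage differences; the FPS values are placeholders, not my real results.

```python
# Rough sketch (placeholder FPS values): average the per-game percentage
# lead of the GTX 1070 over the RTX 2060.

per_game_avg_fps = {
    # game: (gtx_1070_fps, rtx_2060_fps) -- example numbers only
    "Far Cry 5":         (105.0, 100.0),
    "Rainbow Six Siege": (190.0, 199.0),
    "Fortnite":          (124.0, 113.0),
}

leads = [(gtx - rtx) / rtx * 100 for gtx, rtx in per_game_avg_fps.values()]
print(f"Average 1070 lead: {sum(leads) / len(leads):.2f}%")  # negative = 2060 ahead
```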




Don’t forget these are just laptop results. It’s a different story when we look at desktop PC graphics, where I found the 2060 coming out 13% ahead of the 1070 at 1080p, 15% ahead at 1440p, and 16% ahead at 4K. Check the card in the top right corner if you want to see the full results.




Now let’s discuss overclocking. I only have data for one game across all 4 laptops when it comes to overclocking: Far Cry 5. I’ve got the stock results in purple as before, while the overclocked results are in red. It’s worth noting that the boost to performance isn’t a result of just overclocking the graphics; I also undervolted the CPUs for best performance, though if anything that will help reduce CPU throttling issues and better showcase GPU performance. With that in mind, the 1% lows came out the same, and the average was only around a frame per second better with the 1070, so again the 1070 was only just slightly ahead of the 2060, 1.3% faster in this case.




I’ve also got the clock speeds from running the Heaven benchmark, both at stock and while overclocked. I wasn’t sure how to best graph the data, but each color represents a different laptop: both red bars are the same laptop with 2060 graphics, the purple bars are the second 2060 laptop, and blue and green show the two different 1070 laptops. Overall, the 1070s achieve higher clock speeds, and while this will depend on factors such as cooling, which differ between laptops, it matches the specs shown at the start of the video, where the 1070 was listed with higher speeds.




In terms of percentage improvements, both the GTX 1070 and RTX 2060 were able to get

about 7% higher clock speeds while overclocked.




Unfortunately, I’m unable to compare temperatures and power draw at this time. As I used completely

different laptops from different companies the cooling solutions are different and can’t

accurately be compared against each other. As for power draw, I simply didn’t measure

this with the 1070 laptops when I had them.




Now for the final difference: the price. You can check the links in the description for up to date laptop prices with either graphics card, as prices will change over time. This will depend on where you live, but in general it seems like GTX 1070 models are cheaper.




In the US on Newegg, for instance, this ASUS Scar II with the 2060 is $1,800 USD, while they’ve also got a 1070 version for $1,500. Granted, this is with a current sale, but it’s still possible to get more performance for less money. Another example is the same laptops on Amazon, where the RTX version costs $100 more while the GTX version is on sale, though the RTX one does also have a larger SSD. Here in Australia, the MSI GE63 with RTX 2060 goes for $2,900 AUD, while the GTX 1070 model is $200 AUD less, or about $130 USD less minus our taxes. So, in general, it does seem like the older GTX 1070 models can be picked up for less money while offering slightly better performance in most games.




So which graphics card would you pick in your next laptop? The older but slightly better performing GTX 1070, or the newer but slightly slower RTX 2060? Of course, you’ve got the option of using Nvidia’s new ray tracing features with the 2060, but from what I’ve seen so far these aren’t that exciting with the 2060, as it’s the lowest card in the RTX lineup. With the games we’ve currently got, more power seems to be needed to run ray tracing half decently, so at the moment I wouldn’t buy the 2060 with RTX in mind.




In terms of raw gaming performance, the 1070 was ahead in the majority of the games tested, but as we saw, on average it doesn’t matter too much; I’d pick whichever was cheapest at the time in your region. The 1070 may be better going forward, given it has more memory that could be useful later on, making it pull further ahead of the 2060, while in theory the 2060 is more likely to get driver updates that improve performance compared to the older 1070. It’s a close battle.




Let me know which graphics you’d pick down in the comments, and get subscribed for more

upcoming GTX and RTX laptop comparisons, there are going to be quite a few on the way.















The Intel i7-8700K and i7-8086K CPUs are extremely similar. They’ve got the same core count, but the 8086K comes with higher clock speeds out of the box and costs a fair bit more. As a better-binned chip it should, in theory, be able to reach higher overclocks and perform better, so let’s compare the two and find out which is worth buying. Let’s start with the differences in specs.




Both CPUs have the same 6 cores and 12 threads and are based on Coffee Lake. They’ve got the same TDP, same cache, and the same everything else except for the clock speeds, where the 8086K appears to be higher. I say appears because there’s more to it than what first meets the eye.




These are the clock speeds each CPU will run at based on the number of CPU cores that are

in use. Both the 8086K and 8700K have the same clock speeds when they’ve got between

two and all six cores active. The only time we get a difference is in single-core, where

the 8086K can boost to 5GHz while the 8700K boosts to 4.7GHz. That’s the key difference

in clock speeds.



The 8086K is a better-binned chip though, so we may be able to manually overclock it further; otherwise, for the most part, the main idea behind the 8086K is that it’s a limited edition collector’s item to celebrate the 40th anniversary of Intel’s 8086 processor.




To see how these differences practically affect games and applications I’ve tested them both in the same system. I’m using the MSI Z390 Gaming Pro Carbon motherboard, and as both CPUs use the same socket there’s no need to swap boards. I’ve got 16GB of T-Force Night Hawk CL16 memory running at DDR4-3200 in dual channel, and for the graphics I’m using my Aorus 2080 Ti to minimize any GPU bottlenecks. I’m cooling the CPUs with my Fractal S36 360mm all-in-one liquid cooler and the same Noctua NT-H1 thermal paste, so the same PC was used for testing both the 8700K and 8086K CPUs, about as apples-to-apples as it gets.




As for overclocks, I was able to get the 8700K to 5.0GHz on all 6 cores with 1.33 volts, and with the same 1.33 volts I could get the 8086K to 5.1GHz. With that in mind, let’s check out the games first, followed by CPU-specific benchmarks like Cinebench and Adobe Premiere afterward. I’ve tested all games at 1080p, 1440p, and 4K resolutions, using the same Nvidia driver version and with the same Windows updates installed.




Fortnite was tested using the replay feature. Starting at 1080p, we’re seeing very high frame rates with either CPU. At stock, the 8086K was performing 5% better in average frame rates compared to the 8700K, but with both CPUs overclocked it was just 2% ahead. Stepping up to 1440p still saw very high frame rates, although there’s almost no difference between them now regardless of overclocks, as the CPU becomes less important at higher resolutions; at 1440p the 8700K was ahead in all cases. At 4K the game was still playable as it is pretty well optimized; at stock the results were very similar, and while overclocked the 8086K was 1.2% ahead.




Battlefield 5 was tested with and without ray tracing enabled. At 1080p with high settings it was still very playable with RTX on, averaging in the high 80s for FPS with either CPU, although the 8700K was ahead. With RTX off the average frame rate is around twice as high, shown by the blue bars, but again with slightly better results from the 8700K. At 1440p the game was still playable at high settings with ray tracing on, averaging over 60 FPS. At stock the 8700K was slightly ahead, but once both CPUs were overclocked the 8086K was getting 1.8% higher average frame rates with RTX on and 1.6% better with RTX off. At 4K, RTX wasn’t playable any longer with high settings, and at stock the 8700K was ahead of the stock 8086K, while the overclocked 8086K was then ahead of the overclocked 8700K.




Far Cry 5 was tested using the built-in benchmark. At 1080p with the CPUs at stock the 8086K was getting 2.2% higher average frame rates, and a much larger 5% improvement to 1% lows, when compared with the 8700K. Once overclocked though, there’s much less of a difference in 1% lows between the two CPUs; however, we can see a fair improvement when comparing the stock and overclocked results. At 1440p the difference starts to become less obvious as we get more GPU bound, with the 8086K now 2% ahead of the 8700K at stock speeds and just 0.9% higher in average FPS once both have been overclocked. At 4K the differences are quite minimal, especially while overclocked. At stock settings we saw a 2.6% higher average frame rate with the 8086K, but then almost no difference with both overclocked, with the 8086K now just 0.1% ahead in average FPS.




CS:GO was tested with Ulletical’s FPS benchmark map, and at 1080p there was almost no difference at stock, with a slightly larger 1.6% improvement for the 8086K once the overclocks were applied. At 1440p the 8086K was 1.1% in front of the 8700K at stock, and then 1.3% ahead once both had been overclocked, but with slightly better 1% low results from the 8700K. At 4K the frame rates are still quite high for this test: basically the same average frame rates at stock, and then the largest improvement out of all games tested at 4K once overclocked, with the 8086K 2.2% ahead of the 8700K.




Rainbow Six Siege was tested using the built-in benchmark. At 1080p with the CPUs at stock the 8086K was just 0.3% ahead in terms of average frame rate, and then there was a slightly higher 1.4% difference with the 8086K in front once overclocked. Similar deal at 1440p: the 8086K was just 0.08% ahead at stock, so margin of error realistically, then 0.2% ahead with both CPUs overclocked. At 4K there was almost no difference between the CPUs, regardless of whether they were running at stock speeds or overclocked, as higher resolutions tend to be more GPU bound.




Overwatch was tested in the practice range, at 1080p even with ultra settings we're still

hitting the 300 FPS cap, although we can see some improvements to the 1% lows with the

CPU overclocks applied. At stock, the 1% low from the 8086K is 3% ahead of the 8700K, but

then once overclocked the results are closer together with the 8086K now just 0.2% ahead.




Even at 1440p we’re only just below the 300 FPS cap, and the averages are all still very close together, with more differences noted between the 1% low results; similar to 1080p in that sense, where once overclocked the results move closer together. At 4K with stock settings the 8086K was consistently slower, though at 160 FPS with ultra settings at this resolution this will, of course, make no real difference. With both CPUs overclocked though, the 8086K is now just 0.9% ahead of the 8700K.




Assassin’s Creed Odyssey was tested with the built-in benchmark. At 1080p the 8086K was just 0.2% ahead in terms of average frame rate, and this increases slightly to 0.4% ahead with both CPUs overclocked. At 1440p the 8086K was just ahead in all tests; interestingly, even at stock it was beating the 8700K in 1% lows and was very close in averages. At 4K the results get even closer together, with no real difference between either CPU, but still pretty good frame rates at this resolution with high settings thanks to the 2080 Ti and these powerful CPUs.




PUBG was tested using the replay feature, and at 1080p we can see straight away that overclocking makes a fair difference. There’s not so much of a change between the two CPUs though; while overclocked both perform very closely, while at stock the 8700K was scoring 4.7% higher average FPS. At 1440p the 8700K was still giving higher average frame rates at stock, although with a smaller 1.2% gap this time, and then it’s extremely close with both overclocked, too close to call. Finally, at 4K there’s much less of a difference between all results as we become GPU bound; even at high settings with this hardware these are pretty decent results for this resolution.




Shadow of the Tomb Raider was tested using the built-in benchmark. At 1080p with high settings there was almost no difference between the CPUs: a 1 FPS difference at stock, then the same results once both are overclocked. Similar results at 1440p, still with high frame rates and the 8700K slightly ahead at stock, but with both overclocked they were identical. Again quite similar at 4K, although while overclocked the 8086K was now 1 frame per second ahead, and this was consistent over multiple test runs.





Watch Dogs 2 was tested as it’s a fairly resource-demanding game. At stock, the 8086K was achieving 1.2% higher average frame rates compared to the 8700K, although with a comparatively much larger 8.6% improvement to the 1% low. With both overclocked, the 8086K is now a massive 6.8% ahead of the 8700K, and I use the word massive there sort of jokingly, but it is the largest improvement we see throughout all games tested. At 1440p the results start getting closer together, with the 8086K just 0.6% ahead of the 8700K at stock settings, but once both are overclocked it’s the 8700K which is now 1.2% ahead of the 8086K. At 4K the results get much closer together as the difference in CPU is less of a factor, with everything performing at around 60 FPS.




So as we’ve seen, the differences are pretty minimal. Starting at 1080p with both CPUs at stock, on average there’s almost no difference, with the 8086K coming out 0.01% ahead in terms of average frame rate. This of course varies depending on the game; for instance, Fortnite seemed more sensitive to the difference in CPU, while Overwatch was the same. If we swap over to the overclocked results, we can see that the 8086K is now just 0.55% better on average. In almost all cases it is now performing slightly ahead of the 8700K compared to both at stock, most likely as its overclock was 100MHz higher, but realistically these differences still wouldn’t even be noticeable while playing the games.





Looking at the results from 1440p with stock settings, again there was no real difference; on average the 8086K was actually slightly behind the 8700K in these games. With the overclocks in place the scales tip towards the 8086K being slightly better, coming out 0.5% ahead of the 8700K, with most games seemingly favoring that slight 100MHz overclock advantage.




At 4K with both CPUs at stock there were once more no major differences, although on average the 8086K was slightly behind the 8700K; either way, yet again it depends on the specific game. With both CPUs overclocked the 8086K is now just 0.7% ahead of the 8700K, with most games favoring it compared to the stock results we just saw.



As you’ve hopefully been able to see there’s no real difference between these two CPUs

when it comes to gaming performance, they both trade blows with each other depending

on the game and if overclocks are applied.




I didn’t expect the 8700K to come out ahead in as many tests as it did, given the lower clock speed, though realistically a lot of the results are within margin-of-error ranges anyway. I did, of course, try to reduce this by taking averages from multiple test runs, but I think this just further illustrates that there’s minimal difference between the two when it comes to gaming.




Now let’s move away from gaming and check out some CPU-specific benchmarks. I’ve tested both CPUs at stock speeds and with the same overclocks as before, so 5.0GHz for the 8700K and 5.1GHz for the 8086K.



Starting with Cinebench, I’ve taken the average of 5 runs. I’ve got the 8700K at stock at the bottom of the graph, followed by its 5GHz all-core overclock, then the same for the 8086K above it, at stock and then with its 5.1GHz overclock. In the multicore test at stock I was getting slightly better scores with the 8700K, but with both overclocked the 8086K is 2.4% ahead of the 8700K. In terms of single-core performance, at stock the 8086K is 3.1% ahead of the 8700K, but with both overclocked it’s just 1.3% ahead, which makes sense when you consider that at stock the 8086K has a 300MHz higher single-core turbo speed, and then there’s only a 100MHz difference once overclocked.


Adobe Premiere was tested using the newest CC 2019 version; I’ve just exported one of my laptop reviews at 1080p using the built-in high bitrate preset. At stock speeds the 8086K completes the task almost 6% faster than the 8700K, and then with both overclocked the gap closes significantly, with them completing within a second of each other.



I’ve also tested Adobe Premiere’s warp stabilizer, stabilizing one 4K clip at a time, and this task seems to favor single-core performance. At stock speeds the 8086K completes the task just 5 seconds faster, or 1.7% quicker, compared to the 8700K. With both CPUs overclocked the difference was just a couple of seconds, or 0.7%, so almost no practical difference here.




Blender was tested using the BMW and Classroom benchmarks, and the 8086K was just a few seconds

quicker than the 8700K in all tests. In most cases, we’re talking less than a percent

improvement, with the largest difference seen once overclocked in the Classroom benchmark,

where the 8086K at 5.1GHz was 1.9% faster than the 8700K at 5.0GHz.




7-Zip was used to test compression and decompression speeds, and the differences here were very small between the two chips. There was a decent boost from applying the overclocks though; for instance, the 8700K scores 15% better for decompression and 17% better for compression once overclocked to 5GHz on all cores. Interestingly, the compression speed at stock with the 8086K was a fair bit lower compared to the 8700K; I’m not sure what the deal was there, but it was always reproducible on my test system.




Veracrypt was used to test AES encryption and decryption speeds, and I saw lower results on both CPUs with the overclocks applied. I’ve seen this in the past, though I’m not sure why it happens with this application. In any case, the 8700K was slightly faster here, and while I did take the average of 5 test runs, the results can be fairly sporadic. Handbrake was used to convert a 4K video to 1080p, and then a separate 1080p file to 720p.





At stock speeds both CPUs were performing very closely, with a slight edge to the 8700K, and this was the case for both the 4K file, shown in the blue bar, and the 1080p file, shown in the purple bar. Once they’re both overclocked though, the 8086K pulls ahead, now 2.5% faster than the 8700K in the 4K test and 3% faster in the 1080p test.




The Corona benchmark uses the CPU to render a scene, and there was only a second difference

at stock speed, putting the 8700K just ahead. With both overclocked, the 8086K was now completing

the same task just over 3% faster than the 8700K.




The V-Ray benchmark gave very similar results to what we just saw in Corona: exactly the same results between both CPUs at stock, and then the 8086K was just 2 seconds quicker once both CPUs were overclocked, making it 2.5% faster.




Interestingly, in many cases at stock speeds the 8700K is slightly ahead of the 8086K, although once overclocked the 8086K comes out ahead in all tests thanks to the higher clock speed I was able to run it at with the same voltage, as it’s probably a better-binned chip.




These are the temperatures of both CPUs at stock and while overclocked, while running a Blender benchmark with an ambient room temperature of 25 degrees Celsius. At stock the 8086K was just slightly cooler; my guess is that as a better-binned chip it may be better in that regard. However, once both were overclocked the same 90 degrees Celsius was reached, though again it’s worth remembering my 8086K did have a higher overclock.




I’ve measured total system power draw from the wall, and with Blender running at stock the system was using the same amount of power with either CPU. There was just an extra 10 watts in use by the 8086K once overclocked, as it had an extra 100MHz on all 6 cores.




For updated pricing on either CPU check the links in the description, as prices will change over time. At the time of recording, the 8700K goes for around $370 USD while the 8086K is $490, so an extra $120 USD, or a 32% price increase. At this point, you might as well get the 9900K for $40 extra.




Here’s a quick price-to-performance graph based on the Cinebench multicore results; we can see that there’s no major difference between the two in terms of score, but a fair bit of extra cost for the 8086K.
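If you wanted to put rough numbers on that, a simple score-per-dollar calculation would look like this; the Cinebench scores here are placeholders, and only the prices come from earlier in the video.

```python
# Price-to-performance sketch: Cinebench multicore score per US dollar.
# Scores are placeholder values; prices are the ones mentioned above.

cpus = {
    "i7-8700K": {"score": 1400, "price_usd": 370},
    "i7-8086K": {"score": 1420, "price_usd": 490},
}

for name, data in cpus.items():
    print(f"{name}: {data['score'] / data['price_usd']:.2f} points per dollar")
```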




If you’ve got the cash and either want a limited edition CPU or otherwise want to roll the dice on a potentially better overclock, then it may be worth it, but for most people I think the 8700K offers extremely similar levels of performance for a fair bit less money.




Basically, when it comes down to it, for most people it makes sense to just get the 8700K and save the money, as they perform very similarly. Most people just want the best performance for their dollar and likely aren’t all that interested in buying a collector’s item.




With that said, let me know which of these two CPUs you’d pick and why down in the comments, and don’t forget to subscribe for more CPU comparisons. I’ve got the 9600K and 9700K here for testing, as well as future tech videos like this one.
