Lara Croft is back, this time in Shadow of the Tomb Raider, the third installment of the reboot series that catapulted Lara back into the spotlight with great gameplay and excellent visuals. The second game expanded on the visual fidelity of the first and even brought DX12 into the mix some time after its launch. But that wasn't a great thing for everyone. Around the launch of first-generation Ryzen, a pretty nasty problem arose in that game: significantly worse DX12 performance on Ryzen when paired with a GeForce card, bad enough that it garnered quite a bit of negative publicity. So I can imagine the concern some would have now that the game suggests DX12 as the first option and actively steers you away from DX11. Have things improved so much that we wouldn't want to use DX11, even on GeForce, which is known for strong DX11 performance? That's what we wanted to find out. That, and thanks to Denuvo DRM, we could only change configurations a handful of times before getting locked out.
For testing we kept things fairly simple, since we're targeting one aspect of the game rather than an overall performance comparison. We went straight to the top of the GPU stack in our office: the GTX 1080 FE and the RX Vega 64 LC. I want to take a moment to ask that we stay on topic here; this isn't an AMD vs. Intel vs. GeForce dig, but rather a look at how well GeForce performs on Ryzen this time around and whether there's cause for concern for those looking to play the game.
To pull this off I had to make a run to the local Best Buy and pick up a Z370 board. Unfortunately, no one around had a Core i7 8700K in stock, which is why we are using the Core i5 8600K.
We used the game's built-in benchmark and the results it provides, because it gives a good amount of information and stays extremely consistent, which helps as we swap parts and stay laser focused on the results we're after. The inclusion of the average frame rate as well as the 95th percentile result is very helpful. One metric that was really interesting was what percentage of the run was GPU bound; we'll look at that too.
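To make the two headline metrics concrete, here is a minimal sketch of how average FPS and a 95th-percentile figure can be derived from raw frame times. The frame times below are invented for illustration, and the game's own 95th percentile metric may be defined slightly differently:

```python
# Hypothetical frame times in milliseconds for one benchmark run.
frame_times_ms = [12.1, 13.4, 11.8, 16.9, 12.5, 14.2, 22.7, 12.9, 13.1, 15.0]

def percentile(values, pct):
    """Simple percentile without interpolation: the value at the given rank."""
    ordered = sorted(values)
    k = min(len(ordered) - 1, int(round(pct / 100.0 * len(ordered))))
    return ordered[k]

# Average FPS comes from the mean frame time, not the mean of per-frame FPS.
avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))

# 95% of frames rendered at least this fast; inverting gives the FPS figure
# that a benchmark-style "95th percentile" number corresponds to.
p95_ms = percentile(frame_times_ms, 95)
p95_fps = 1000.0 / p95_ms

print(f"avg: {avg_fps:.1f} fps, 95th percentile: {p95_fps:.1f} fps")
# → avg: 69.2 fps, 95th percentile: 44.1 fps
```

The point of the percentile figure is that it captures the occasional slow frame (the 22.7ms spike here) that an average happily hides.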
All testing was done at 1920×1080 with no AA and using the ‘High’ preset.
| Component | Ryzen System | Intel System |
| --- | --- | --- |
| CPU | Ryzen 7 1700 @ 4GHz | Core i5 8600K @ 5GHz |
| Memory | 16GB G.Skill Flare X DDR4-3200 | 16GB GeIL EVO X DDR4-3200 |
| Motherboard | MSI X370 XPower Gaming Titanium | MSI Z370 Gaming Plus |
| Storage | Adata SU800 128GB + 2TB Seagate SSHD | Adata SU800 128GB + 2TB Seagate SSHD |
| PSU | Cooler Master V1200 Platinum | Cooler Master V1200 Platinum |
Graphics Cards Used
| GPU | Architecture | Core Count | Clock Speed (Base/Boost) | Memory Capacity | Memory Speed |
| --- | --- | --- | --- | --- | --- |
| NVIDIA GTX 1080 FE | Pascal | 2560 | 1607/1733MHz | 8GB GDDR5X | 10Gbps |
| AMD RX Vega 64 Liquid Cooled | Vega 10 | 4096 | 1406/1677MHz | 8GB HBM2 | 945MHz |
Ryzen Core Scaling
The Ryzen core scaling test was actually one of the last tests we ran, and it's where we hit the Denuvo lockout and were cut short; continuing through the rest of the tests would have delayed this article by three days, and we'll be neck deep in RTX testing by then. The results show that 4 cores and 8 threads are sufficient for Ryzen, with no benefit beyond that point. That may seem unfortunate, but it keeps the game more accessible.
The Percentage GPU Bound metric is meant to show how much of the time the GPU, rather than the CPU, is the limiting factor, i.e., how fully the GPU is being utilized. Based on these results, DX12 is in fact where you're going to want to be, but the i5 8600K takes a commanding lead. That might lead you to believe we're about to repeat what happened before with DX12 on GeForce and Ryzen, but you can see there's at least an improvement, right? Let's look at how this translates to actual frame rates.
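The benchmark doesn't document exactly how this metric is computed, but one plausible sketch, assuming per-frame CPU and GPU timings are available, is simply the fraction of frames where the GPU stage took longer than the CPU stage:

```python
# All timings invented for illustration; a frame counts as "GPU bound" when
# the GPU render time exceeds the CPU frame-preparation time for that frame.
cpu_ms = [8.2, 9.1, 7.8, 10.4, 8.9, 9.5]   # hypothetical CPU prep times (ms)
gpu_ms = [11.3, 10.8, 7.5, 12.0, 9.7, 8.1]  # hypothetical GPU render times (ms)

gpu_bound_frames = sum(1 for c, g in zip(cpu_ms, gpu_ms) if g > c)
pct_gpu_bound = 100.0 * gpu_bound_frames / len(cpu_ms)

print(f"{pct_gpu_bound:.0f}% GPU bound")  # → 67% GPU bound
```

Under that reading, a low percentage means the GPU is frequently waiting on the CPU, which is exactly the symptom we're hunting for here.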
Radeon RX Vega 64 LC
Looking at the RX Vega 64, there is no doubt you're going to want to ditch DX11, and the DX11 results perfectly illustrate the single-core advantage Coffee Lake holds over the Ryzen architecture in gaming. But DX12 changes that tune significantly, bringing the slower Ryzen 7 1700 all the way up to match the i5 8600K, doing exactly what DX12 always promised. These results were rather exciting and made it worth the hassle of running out to chase down parts.
GeForce GTX 1080
Well, here it is: the results this whole experiment was run to get. Compare against the Vega results above and you'll see that the GeForce drivers are clearly superior in DX11, as we expected, with the GTX 1080 performing much better under DX11 loads. But moving to DX12 shows us something even better: parity between CPUs. This is not what we saw in the past, and it represents a significant performance improvement over the previous generation's title on the same game engine.
So, does Ryzen still suffer with GeForce under DX12 in this game? Absolutely not. For me this isn't surprising, but it is a breath of fresh air for those out there running Ryzen and GeForce configurations who are looking to pick this game up. Why didn't it surprise me? Because NVIDIA partnered with the developer for ray tracing? No, that's not entirely it, though ray tracing points in the direction of my thinking. The RTX feature set coming along with the new RTX cards is heavily dependent on next-generation APIs, and as I've tested and retested over time, I've noticed a strong uptick in NVIDIA's DX12 and Vulkan performance. Take HITMAN, for example: its DX12 performance was pretty bad by comparison at first, but now it's good. Same for DOOM. I expect things to only get stronger with the new lineup of cards arriving in just a few days.
I would love to hear your thoughts on this topic and if you’ve got the game feel free to share your results screen with us along with your system specifications and game settings.