AMD Navi vs. Nvidia Turing: An Architecture Comparison
Reported today on TechSpot
For the full article visit: https://www.techspot.com/article/1874-amd-navi-vs-nvidia-turing-architecture/
AMD's 7nm Latest vs. Nvidia's RTX Tech
You've followed the rumors and ignored the hype; you waited for the reviews and looked at all the benchmarks. Finally, you slapped down your dollars and walked away with one of the latest graphics cards from AMD or Nvidia. Inside it lies a large graphics processor, packed with billions of transistors, all running at clock speeds unthinkable a decade ago.
You're really happy with your purchase and games have never looked or played better. But you might just be wondering what exactly is powering your brand new Radeon RX 5700, and how different it is from the chip in a GeForce RTX.
Welcome to our architectural and feature comparison of the newest GPUs from AMD and Nvidia: Navi vs Turing.
Anatomy of a Modern GPU
Before we begin our breakdown of the overall chip structures and systems, let's take a look at the basic format that all modern GPUs follow. For the most part, these processors are just floating point (FP) calculators; in other words, they perform math operations on decimal/fractional values. So at the very least, a GPU needs one logic unit dedicated to these tasks, usually called an FP ALU (floating point arithmetic logic unit), or FPU for short. Not all of the calculations GPUs do are on FP data values, so there will also be an ALU for whole number (integer) math operations, or the same unit may simply handle both data types.
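To picture that split between FP and integer hardware, here is a deliberately simplified toy model in Python. This is purely illustrative, not how any real GPU is built; the names `fpu`, `int_alu`, and `issue` are our own labels for the idea, not real hardware or API names.

```python
# Toy model: a GPU holds separate execution units for floating point
# and integer math, and work is routed to the matching unit.
# (Illustrative sketch only -- real GPUs are vastly more complex.)

def fpu(op, a, b):
    """Floating point ALU: math on decimal/fractional values."""
    return {"add": a + b, "mul": a * b}[op]

def int_alu(op, a, b):
    """Integer ALU: math on whole numbers."""
    return {"add": a + b, "mul": a * b}[op]

def issue(instruction):
    """Route an instruction to the unit matching its data type."""
    op, a, b = instruction
    if isinstance(a, float) or isinstance(b, float):
        return fpu(op, a, b)       # FP work goes to the FPU
    return int_alu(op, a, b)       # integer work goes to the integer ALU

print(issue(("mul", 0.5, 4.0)))  # FP multiply -> FPU: 2.0
print(issue(("add", 3, 4)))      # integer add -> integer ALU: 7
```

In a chip where one unit handles both data types, `issue` would simply send everything to the same function; the separate-unit version above mirrors designs that dedicate distinct silicon to each.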
Now, these logic units are going to need something to organize them, by decoding and issuing instructions to keep them busy, and this will be in the form of at least one dedicated group of logic units. Unlike the ALUs, they won't be programmable by the end