WHAT IS A GRAPHICS ADAPTER, AND HOW DOES IT APPLY TO COMPUTE?
In the early days of computing, all operations were performed by the CPU, the Central Processing Unit. It was responsible for computation, for playing sound, and for processing requests from the video card and displaying graphics. People who played computer games thirty years ago probably remember that the picture quality was far from good, and running another task alongside the game slowed it down, sometimes to a complete freeze. Muting the game's sound unloaded the processor a little, so the game stopped hanging, but in general the problem remained unsolved.
As computer hardware evolved, graphics cards split into integrated and discrete ones. Graphics tasks became more complicated; games and applications demanded more power from video cards. So video adapters got a processor of their own, the GPU (Graphics Processing Unit), which handles many similar computations in multiple simultaneous streams. Other names for the GPU are video processor and graphics accelerator, which describe its actual function.
At the application layer, we get complex graphic objects, both static (photos, drawings, diagrams) and dynamic (games, including 3D, animation, and video), displayed on screen in high resolution.
CPU VS. GPU: THE COMPARISON
The main difference between calculations performed on the CPU and on the GPU lies in how each processes streams of operations, which follows directly from their functional design. Let's first talk about processor cores, or ALUs, arithmetic logic units.
The core of even the most powerful CPU executes operations step by step, in strict sequence, one after another. On the left-side scheme, this sequence is shown by green arrows. Urgent, high-priority tasks (interrupts, shown by orange arrows on the scheme) can be embedded into the processing stream, but they too are executed in sequential order. Each subsequent step begins only after the previous one completes and builds on the results obtained so far. Thus, an error made at any one step interrupts the operation of the entire program, and the process crashes.
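This strictly sequential, dependent style of execution can be sketched in a few lines of Python. The pipeline below is purely illustrative (the step functions are made up for this example): each step consumes the previous step's result, so a single faulty step aborts the whole chain, just as described above.

```python
def run_pipeline(steps, value):
    """Apply each step in order; any exception stops the whole run."""
    for step in steps:
        value = step(value)  # step N needs the result of step N-1
    return value

steps = [
    lambda x: x + 1,
    lambda x: x * 2,
    lambda x: x - 3,
]

print(run_pipeline(steps, 10))  # (10 + 1) * 2 - 3 = 19

# One faulty step crashes the entire pipeline, not just its own step:
faulty = steps + [lambda x: x / 0]
try:
    run_pipeline(faulty, 10)
except ZeroDivisionError:
    print("pipeline aborted")
```

Note that there is no way to "skip" the broken step and continue: every later step depends on a value that was never produced.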
Modern multi-core processors host several cores, each of which processes instructions sequentially within a single thread. Thus, multitasking is implemented in the chip: different tasks run simultaneously in different threads. But each task within its thread is still processed sequentially.
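A minimal sketch of this kind of chip-level multitasking, using Python's standard thread pool: several independent tasks run concurrently, while each task internally remains a strict sequence of dependent steps (the task itself is an arbitrary example, not from the text).

```python
from concurrent.futures import ThreadPoolExecutor

def sequential_task(name, n):
    """Each task is internally sequential: step i builds on step i-1."""
    total = 0
    for i in range(1, n + 1):
        total += i  # every iteration depends on the previous result
    return name, total

# Four independent tasks, each mapped to its own worker thread:
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(sequential_task, f"task-{k}", 100) for k in range(4)]
    results = dict(f.result() for f in futures)

print(results)  # every task computed 1 + 2 + ... + 100 = 5050
```

The parallelism here is between tasks, not within a task, which is exactly the limitation the GPU architecture addresses.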
The architecture of the GPU is different. A graphics processor contains many cores grouped into blocks. The GPU cores' modus operandi differs fundamentally from the CPU's, because it is based on parallelism of operations. In other words, the graphics processor performs many tasks simultaneously in parallel threads; on the left-side scheme, this is indicated by green arrows. In this case, a random error in one of the computation flows does not lead to a critical failure of the program, since it affects only one of a vast number of threads. This is how high-performance computing on the GPU is achieved, up to eight times faster compared to the CPU, and it is why GPUs are also called graphics accelerators.
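The data-parallel idea can be sketched in plain Python: the same small kernel is applied to many independent "lanes" of input, and a failure in one lane does not take down the others. This is only an analogy of the GPU model (on real hardware the lanes would execute concurrently, and the kernel and lane values here are invented for illustration):

```python
def kernel(x):
    """The per-element operation; fails only when x == 0."""
    return 100 / x

def parallel_map(kernel, lanes):
    """Apply the kernel to every lane, isolating per-lane failures."""
    results = []
    for x in lanes:  # on a GPU, these lanes would run in parallel threads
        try:
            results.append(kernel(x))
        except ZeroDivisionError:
            results.append(None)  # only this one lane is lost
    return results

lanes = [1, 2, 0, 4, 5]
print(parallel_map(kernel, lanes))  # [100.0, 50.0, None, 25.0, 20.0]
```

Contrast this with the sequential pipeline earlier: here the bad input at lane 3 costs one result, not the whole computation.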
Another essential difference between CPUs and GPUs concerns memory: how it is accessed and interacted with. The GPU does not need large system RAM, but writing data to the video card and reading the result back are separate operations that consume time and resources. However, in recent years active development in this area has made it possible to accelerate the interaction of the graphics processor with video RAM.
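The three distinct operations described above (copy input to device memory, compute there, copy the result back) can be sketched with a toy device model. The `ToyDevice` class is a stand-in invented for this sketch, loosely mimicking the shape of real GPU APIs such as CUDA's memory-copy and kernel-launch calls; it is not real driver code.

```python
class ToyDevice:
    """A toy stand-in for a GPU: separate upload/compute/download steps."""

    def __init__(self):
        self.vram = {}       # stands in for video RAM
        self.transfers = 0   # count host<->device copies (the costly part)

    def upload(self, name, data):
        """Host -> device copy: a separate operation that costs time."""
        self.vram[name] = list(data)
        self.transfers += 1

    def run_kernel(self, name, fn):
        """Compute entirely in 'VRAM'; no host memory is touched."""
        self.vram[name] = [fn(x) for x in self.vram[name]]

    def download(self, name):
        """Device -> host copy: another separate, costly operation."""
        self.transfers += 1
        return list(self.vram[name])

dev = ToyDevice()
dev.upload("xs", [1, 2, 3, 4])            # 1) write data to the card
dev.run_kernel("xs", lambda x: x * x)     # 2) compute on the device
result = dev.download("xs")               # 3) read the result back

print(result, dev.transfers)  # [1, 4, 9, 16] 2
```

The point of the sketch is that the two transfers bracket the computation; minimizing or overlapping them is precisely what the recent developments mentioned above aim at.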