There is an observation in computing called Moore’s Law which states that, historically, the number of transistors in dense integrated circuits has doubled about every two years and will continue to do so until physical limits are reached. The hardware industry has striven to conform to this “law”, constantly pushing for that target. Transistors are the semiconductor devices at the heart of literally everything inside your machine, whether you have a PC, laptop or mobile device. The more of them you have, the more instructions your processor can handle concurrently, and instead of making enormous CPUs, we keep making the transistors smaller. This is where Polaris comes in.
Currently, the GPU spearheads, nVidia’s coveted GeForce GTX Titan X and AMD’s Radeon R9 Fury X, are both built on a 28-nanometer process. Intel only recently shifted to 14nm with its Broadwell and Skylake CPUs, and it was clear that this year GPUs would follow suit. Enter AMD to unveil its upcoming FinFET-based Polaris.
The above video showcases the new architecture’s features, such as HDMI 2.0 support, DisplayPort 1.3 and 4K 60fps support, but most noteworthy of all is the incredible decrease in power consumption: roughly 60% of the power draw of a current-gen nVidia GPU while running Star Wars Battlefront in full HD at 60fps. Of course, this is an in-house test meant for marketing, but 60% is a significant enough difference to take notice. This may not mean much for us PC users with our decadent luxuries in power supplies and cooling options, but it’s an enormous step forward for laptop gaming, and with reduced needs in those areas, mobile devices might follow.
AMD’s new Polaris architecture is set to launch around mid-2016, and what remains to be seen is how nVidia will respond, since there is precious little info so far on its next generation of GPU architecture, dubbed Pascal.