Happy New Year and all the best!

Let’s celebrate and start 2017 with something really special.
Yes, another halo product came into our hands. Like Agent Smith warned Neo, "It is inevitable, Mr. Anderson." NVIDIA was bound to reveal the next successor in the Titan line. But this time they put an even higher price tag on it, up from $999 to $1,200. Time to sell your Hot Wheels collection or your vintage Coca-Cola bottle caps or… a kidney, and let's see what it is made of.
Everybody knows who NVIDIA is:


From gaming to AI Computing

They state that the GPU has proven to be unbelievably effective at solving some of the most complex problems in computer science. It started out as an engine for simulating human imagination, conjuring up the amazing virtual worlds of video games and Hollywood films.

Today, NVIDIA’s GPU simulates human intelligence, running deep learning algorithms and acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world.
NVIDIA is increasingly known as “the AI computing company.”

This is their life’s work — to amplify human imagination and intelligence.


First, let's delve into some Titan history.


The first GTX Titan launched in the spring of 2013 at $999 and established a new brand by itself. The name came from the Titan supercomputer at the Oak Ridge National Laboratory, equipped with 18,688 Tesla K20 accelerators.
Based on the Kepler architecture, thus 28 nm, it had 6 GB of VRAM, 2,688 CUDA cores, a humongous 561 mm², 7.1 billion transistor die, a 6 GHz effective memory clock and a 384-bit memory bus. New for the line, it included 64 double precision units (DP units) for enhanced compute functionality.


One year later we got the GTX Titan Black, which was a full GK110 implementation: 2,880 CUDA cores, a small increase in clocks, +1 GHz on the effective memory clock and the same 6 GB of VRAM. It was, again, the best single-GPU solution and the full architecture with no compromises.


Later in that same year of 2014 came the GTX Titan Z, and we still don't know what NVIDIA was thinking when they released it. A dual-GPU behemoth of an experiment: basically two Titan Blacks, but with lower clocks and a 3-slot cooling solution. The catch? They asked $3,000. Yeah… It was a failure: it generated so much heat that NVIDIA had to underclock it heavily, so it didn't perform that well in games. And it got beaten by, you guessed it, two regular Titans at a grand less, or two 780 Tis at a fraction of the total price. Or by AMD's dual-GPU R9 295X2, priced at $1,500. Anyhow, an expensive experiment.


The next year brought with it the new generation of the Maxwell architecture. A lot of improvements in performance and efficiency came along, and of course they made a Titan variant of the new GM200 chip. They named it the GTX Titan X. And they went full bonkers again: 12 GB of VRAM! 8 billion transistors on a monolithic GPU die of 601 mm², making it NVIDIA's largest GPU ever. This time they dropped the double-precision FP64 compute power that had made the Titan line a dual-breed/compromise solution, in favor of pure gaming cores. In build philosophy, it was truly the successor of the legendary 8800 GTX. Again, the Maxwell Titan X was another example of raw power.
And thus today, anno 2016, we have the latest one: the Pascal TITAN X. Notice they dropped the "GTX" branding, and they finally updated the design of the cooler.
Price when reviewed:

~ £ 1,199 – via Amazon.co.uk


~ $ 1,598 – via Amazon.com



Pascal, Presentation and Specs


*Courtesy of NVIDIA

Let’s see what the new Pascal architecture brings to the table:


Display Connectivity:

NVIDIA's Pascal generation products receive a nice upgrade in terms of monitor connectivity. First off, the cards get three DisplayPort connectors, one HDMI connector and one DVI connector. The days of ultra-high-resolution displays are here, and NVIDIA is adapting to them. The HDMI connector is revision 2.0b, which enables:

– Transmission of High Dynamic Range (HDR) video
– Bandwidth up to 18 Gbps
– 4K@50/60 (2160p), which is 4 times the clarity of 1080p/60 video resolution
– Up to 32 audio channels for a multi-dimensional immersive audio experience
DisplayPort-wise, compatibility has shifted upwards to DP 1.4, which provides 8.1 Gbps of bandwidth per lane and offers better color support using Display Stream Compression (DSC), a "visually lossless" form of compression that VESA says "enables up to 3:1 compression ratio." DisplayPort 1.4 can drive 60 Hz 8K displays and 120 Hz 4K displays with HDR "deep color." DP 1.4 also supports:

– Forward Error Correction: FEC, which overlays the DSC 1.2 transport, addresses the transport error resiliency needed for compressed video transport to external displays.
– HDR meta transport: HDR meta transport uses the “secondary data packet” transport inherent in the DisplayPort standard to provide support for the current CTA 861.3 standard, which is useful for DP to HDMI 2.0a protocol conversion, among other examples. It also offers a flexible metadata packet transport to support future dynamic HDR standards.
– Expanded audio transport: this spec extension covers capabilities such as 32 audio channels, a 1,536 kHz sample rate, and inclusion of all known audio formats.
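To put those figures in perspective, here is a quick back-of-envelope check (ignoring blanking intervals and any protocol overhead beyond the 8b/10b line coding) of why the top-end modes lean on DSC:

```python
# DP 1.4 link budget vs. uncompressed video payloads (approximate).

LANES = 4
RAW_PER_LANE_GBPS = 8.1  # HBR3 signaling rate per lane
effective_gbps = LANES * RAW_PER_LANE_GBPS * 8 / 10  # 8b/10b line coding

def stream_gbps(w, h, hz, bpp):
    """Uncompressed video payload in Gbit/s (active pixels only)."""
    return w * h * hz * bpp / 1e9

uhd8k_60 = stream_gbps(7680, 4320, 60, 24)        # 8K60, 8-bit RGB
uhd4k_120_hdr = stream_gbps(3840, 2160, 120, 30)  # 4K120, 10-bit HDR

print(f"link budget:      {effective_gbps:.2f} Gbps")  # 25.92
print(f"8K60:             {uhd8k_60:.1f} Gbps")        # ~47.8, over budget
print(f"4K120 HDR:        {uhd4k_120_hdr:.1f} Gbps")   # ~29.9, over budget
print(f"8K60 w/ 3:1 DSC:  {uhd8k_60 / 3:.1f} Gbps")    # ~15.9, fits
```

So even before blanking overhead, uncompressed 8K60 and 4K120 HDR both exceed the raw link, which is exactly where DSC earns its keep.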
High Dynamic Range (HDR) Display Compatibility


High Dynamic Range (HDR) refers to the range of luminance in an image. Without getting too technical, it typically means a luminance range of 5 or more orders of magnitude. While HDR rendering has been around for over a decade, displays capable of directly reproducing HDR are only now becoming commonly available. Normal displays are considered Standard Dynamic Range (SDR) and handle roughly 2-3 orders of magnitude of difference in luminance.
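To put "orders of magnitude" into numbers, here is a quick illustration using hypothetical peak and black luminance values for typical panels:

```python
import math

# Dynamic range in orders of magnitude = log10(peak / black).
# The luminance figures below (in cd/m2) are illustrative, not measured.

def orders_of_magnitude(peak, black):
    return math.log10(peak / black)

sdr = orders_of_magnitude(peak=300, black=0.3)    # typical SDR monitor
hdr = orders_of_magnitude(peak=1000, black=0.01)  # HDR10-class panel

print(f"SDR: ~{sdr:.0f} orders of magnitude")  # ~3 (1,000:1 contrast)
print(f"HDR: ~{hdr:.0f} orders of magnitude")  # ~5 (100,000:1 contrast)
```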

So HDR is becoming something very interesting, especially for the movie aficionados. Think of better pixels, a wider color space, more contrast and more interesting content. These new displays will offer unrivaled color accuracy, saturation, brightness, and black depth – in short, they will come very close to simulating the real world.

The GeForce GTX 1000 series graphics cards are based on the latest iteration of NVIDIA's GPU architecture, called Pascal after the famous mathematician. The Titan X uses revision A1 of the GP102 chip.

Pascal Architecture – The NVIDIA Titan X's Pascal architecture is the most powerful GPU design ever built. Comprising 12 billion transistors and 3,584 enabled single-precision CUDA cores, the card is the world's fastest consumer GPU.

16nm FinFET – The Titan X’s GP102 GPU is fabricated using a new 16nm FinFET manufacturing process that allows the chip to be built with more transistors, ultimately enabling new GPU features, higher performance, and improved power efficiency.

GDDR5X Memory – GDDR5X provides a significant memory bandwidth improvement over the GDDR5 memory that was used previously in NVIDIA's flagship GeForce GTX GPUs. Running at a data rate of 10 Gbps, the Titan X's 384-bit memory interface provides far more memory bandwidth than NVIDIA's prior GeForce GTX 980 GPU. Combined with architectural improvements in memory compression, the total effective memory bandwidth increase compared to the GTX 980 is 1.8x.
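The raw numbers are easy to verify yourself: peak bandwidth is simply data rate times bus width. The GTX 980 figures below are from NVIDIA's public specs:

```python
# Peak memory bandwidth = data rate (Gbps per pin) x bus width / 8.
# A sanity check of the quoted figures, not a benchmark.

def bandwidth_gbs(data_rate_gbps, bus_bits):
    return data_rate_gbps * bus_bits / 8

titan_x = bandwidth_gbs(10, 384)  # GDDR5X @ 10 Gbps, 384-bit
gtx_980 = bandwidth_gbs(7, 256)   # GDDR5  @ 7 Gbps,  256-bit

print(f"Titan X: {titan_x:.0f} GB/s")          # 480
print(f"GTX 980: {gtx_980:.0f} GB/s")          # 224
print(f"raw ratio: {titan_x / gtx_980:.2f}x")  # ~2.14x before compression
```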
Color Compression


The Titan X uses GDDR5X memory, the really good new stuff: 12 GB at 10,010 MHz (effective) on a 384-bit memory bus. NVIDIA has a lot of tricks up its sleeve, color compression being one of them. The GPU's compression pipeline has a number of different algorithms that intelligently determine the most efficient way to compress the data. One of the most important is delta color compression. Pascal GPUs include a significantly enhanced delta color compression capability:

– 2:1 compression has been enhanced to be effective more often
– A new 4:1 delta color compression mode has been added to cover cases where the per pixel deltas are very small and are possible to pack into ¼ of the original storage
– A new 8:1 delta color compression mode combines 4:1 constant color compression of 2×2 pixel blocks with 2:1 compression of the deltas between those blocks
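For a feel of how delta color compression pays off, here is a toy sketch (not NVIDIA's actual algorithm) of the basic idea: store one anchor pixel plus small per-pixel deltas, which need far fewer bits than full 8-bit channel values when a block is smooth:

```python
# Toy delta-color encoding of one channel of a smooth 2x2 pixel block.

def delta_encode(block):
    """Encode a list of 8-bit values as (anchor, deltas from anchor)."""
    anchor = block[0]
    return anchor, [v - anchor for v in block[1:]]

def bits_needed(deltas):
    """Bits per signed delta, sized for the worst delta in the block."""
    worst = max(abs(d) for d in deltas)
    return max(1, worst.bit_length() + 1)  # +1 for the sign bit

block = [118, 119, 117, 120]  # one channel of a smooth 2x2 block
anchor, deltas = delta_encode(block)

raw_bits = 8 * len(block)                            # 32 bits uncompressed
packed_bits = 8 + bits_needed(deltas) * len(deltas)  # anchor + 3 deltas

print(deltas)                  # [1, -1, 2]
print(raw_bits, packed_bits)   # 32 vs 17, roughly a 2:1 saving
```

Real hardware works on larger tiles with fixed compression modes, but the saving comes from the same observation: neighboring pixels are usually close in value.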
Asynchronous Compute


Modern gaming workloads are increasingly complex, with multiple independent, or “asynchronous,” workloads that ultimately work together to contribute to the final rendered image. Some examples of asynchronous compute workloads include:

– GPU-based physics and audio processing
– Postprocessing of rendered frames
– Asynchronous timewarp, a technique used in VR to regenerate a final frame based on head position just before display scanout, interrupting the rendering of the next frame to do so

These asynchronous workloads create a new scenario for the GPU architecture: overlapping workloads. Some types of workloads do not fully utilize the GPU by themselves. In those cases there is a performance opportunity in running two workloads at the same time, sharing the GPU and running more efficiently.


Here, Pascal introduces support for "dynamic load balancing." In Maxwell-generation GPUs, overlapping workloads were implemented with static partitioning of the GPU into two subsets, one that runs graphics and one that runs compute. But if the compute workload takes longer than the graphics workload, and both need to complete before new work can be done, the portion of the GPU configured to run graphics will go idle. The resulting performance loss can exceed any benefit of running the workloads overlapped in the first place. Hardware dynamic load balancing addresses this issue by allowing either workload to fill the rest of the machine when idle resources are available.
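A toy timeline model (with entirely made-up per-frame timings) makes the difference concrete:

```python
# Contrast Maxwell-style static partitioning with Pascal-style dynamic
# load balancing for one frame running graphics and compute concurrently.

def static_partition(gfx_ms, compute_ms):
    # Each workload owns its half of the GPU; the frame ends when the
    # slower one finishes, while the other half sits idle.
    return max(gfx_ms, compute_ms)

def dynamic_balance(gfx_ms, compute_ms):
    # Simplified model: once one workload finishes, the whole GPU attacks
    # the remainder of the other, roughly halving the leftover time.
    short, long_ = sorted((gfx_ms, compute_ms))
    return short + (long_ - short) / 2

gfx, compute = 4.0, 7.0  # hypothetical per-frame times in ms
print(static_partition(gfx, compute))  # 7.0 ms per frame
print(dynamic_balance(gfx, compute))   # 5.5 ms per frame
```

The gain scales with how mismatched the two workloads are, which is why NVIDIA pitches it at mixed graphics/compute scenarios like VR.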
GPU Boost 3.0


Starting with the Kepler generation, NVIDIA moved to fine-grained voltage points, which define a series of GPU voltages and their respective clock speeds. The GPU then operates at points along the resulting curve, changing clock speed based on which voltage point it is at and what the environmental conditions are.

With Pascal, the individual voltage points are programmable, so it is now possible to adjust the clock speed of Pascal GPUs at each voltage point, a much greater level of control than before. For this to work well, though, the results are closely tied to temperature: the card will vary its clocks widely if it perceives inadequate conditions, down-clocking far more aggressively than previous generations.
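A sketch of the difference, using an entirely made-up voltage/frequency curve:

```python
# GPU Boost 3.0 idea in miniature: older Boost applied one global clock
# offset to the whole V/F curve; Pascal exposes a per-point offset.

# (voltage in mV, stock clock in MHz) - illustrative values only
curve = [(800, 1400), (900, 1600), (1000, 1750), (1062, 1850)]

def apply_global_offset(curve, offset):
    """Boost 2.0 style: the same offset at every voltage point."""
    return [(v, f + offset) for v, f in curve]

def apply_per_point_offsets(curve, offsets):
    """Boost 3.0 style: an individual offset per voltage point."""
    return [(v, f + o) for (v, f), o in zip(curve, offsets)]

print(apply_global_offset(curve, 100))
# Push the low-voltage points harder, stay conservative at the top
# where stability margins are thinner:
print(apply_per_point_offsets(curve, [150, 120, 80, 40]))
```

This is why per-point tuning tools (curve editors in overclocking utilities) appeared alongside Pascal.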

There is also a new generation of High Bandwidth (HB) SLI bridges that doubles the available transfer bandwidth compared to the previous Maxwell generation.

Just by the numbers, the Pascal Titan X is a freak of tech. The full GP102 chip has 3,840 CUDA cores spread across 30 streaming multiprocessors in six processing clusters, of which 3,584 cores (28 SMs) are enabled here. There are 96 ROPs, 224 texture units and the new GDDR5X VRAM, made by Micron, connected via a 384-bit memory interface running at 10 GHz effective! It has a 7+2 phase power delivery and the GPU die spreads over 471 mm².

Compared to the GTX 1080, the Titan X has a 50 percent increase in peak bandwidth thanks to the 384-bit memory controller, plus 40 percent more SMs and hence texture units and shaders. Not to mention the 12 GB of memory, so far the largest VRAM pool available on any consumer-grade video card.

Even though its clocks are lower than the GTX 1080's, with so many cores it will still decimate everything at 4K resolution. We can't wait to test and overclock it under water cooling.


Packaging & Visual inspection


A custom and unique box design from NVIDIA: an all-black background with the video card's cooler depicted on the front, all those extreme angles and triangular shapes. "TITAN X" is proudly written in reflective silver.
The NVIDIA logo is present in the top left corner. A minimalist design, done to perfection.


The top of the box has the same writing, plus "Built by NVIDIA" in green.


On the left side we are informed of which generation of Titan X it is – “Pascal“.


The unique box design continues to the interior as well. When you spend this kind of money on something, the attention to detail and the unboxing experience must be just right.

The card sits as it would in your system, viewed from above. The inside is all green, maintaining the color combo.


You only get a small NVIDIA-branded booklet (sitting parallel with the video card) that contains a support guide and a quick start manual.


We must say the box is really small and compact; we didn't expect that. Have a look at these shots for some perspective.



And finally, the TITAN X Pascal! This is the new design of the all aluminium-magnesium alloy cooler, with shapes and angles reminiscent of stealth fighter jets. It cools by the same blower-type fan + vapor chamber principle.


It is a dual-slot video card and measures 267 mm in length, the same as all reference NVIDIA cards since the 700 series Kepler generation.


This combination of silver and black with all the sharp edges, gives it a very aggressive, expensive and fast look.


It comes with a backplate as well, which has a pattern of parallel grooves. In the middle sit the card's name and the NVIDIA logo. Although they dropped the "GeForce GTX" branding from the official name, we still see it here. Dubious.


The front looks like a Decepticon's… behind. It has 3 screw holes for various adapter plates and the NVIDIA eye logo on top, near the fan.


The standard PCI-Express 3.0 video slot.


Topside we have the two SLI connectors. With the Pascal generation you can only run a 2-way SLI configuration that is supported by all official drivers and games. There is a way to run 3- and 4-way SLI, but you need a special code from NVIDIA. We doubt anybody will ever need more than 2-way SLI.

To power it you will need one 6-pin and one 6+2/8-pin PCI-E connector.

And again the GeForce branding, a logo that lights up in green. We think it would have made more sense to have "TITAN X" written there.


And last but not least, the display connectivity. It has:


– 3x DisplayPort – DP 1.2 certified and DP 1.3/1.4 ready
– 1x HDMI 2.0b port
– 1x Dual Link DVI port

And get this: it supports up to a 7680 x 4320 @ 60 Hz maximum resolution! (Note that driving this requires dual DisplayPort connectors or one DisplayPort 1.3 connector.)

Right. We must say, it is a beautiful card. Now let’s see what it can do; unleash the kraken !


Testing methodology


All games and synthetic benchmarks are run first at 1920 x 1080 with maximum settings applied and V-Sync off (any changes will be pointed out), then at 4K with no anti-aliasing and no V-Sync. Margin of error is within 2-4%.

Sound Meter: Pyle PSPL01 – positioned 1 meter from the case (we include a +/- 2-3% margin of error), with an ambient noise level of 25-30 dBA

Power consumption: measured via our wall monitor adapter: Prodigit 2000MU and backed up via the Corsair Link software.

Hardware used:

Processor: Intel i7 6700k @ 4.5 GHz – 1.239v
CPU Cooler: Eisberg 240L Prestige w/ 480 GTX Black Ice Radiator
Case: Phanteks Enthoo Primo White Special Edition E-ATX
SSD: 2x Samsung 850 Pro 256 GB
HDD: 3x Seagate ST Series (3+4+5 TB)
Motherboard: ASUS ROG Maximus Hero VIII Z170 ATX
RAM: 64 GB (4×16) AVEXIR Core Series DDR4 2400 MHz C16 GREEN LED
PSU: Corsair AX1200i Platinum
Cable Extensions: CableMod ModeFlex C-Series Green
Sound Card: Asus Xonar D2X
Fan Controller: Reeven Six Eyes II RFC-02
Fans: 10x Noiseblocker B12-PS
Samsung 32″ FHD TV LED 100 Hz – UE32F5000
LG 55″ Ultra HD 4K TV – 55UC970V

Other video cards – all at factory/stock settings:

– MSI GTX 980ti TF V 6G
– MSI GTX 780 Ti TF IV 3GB

The card installed:



And as accustomed by NVIDIA, the logo lights up as well; in green, what else? It complements the rest of the system to perfection.





– Windows 10 Pro x64 Build 1607
– NVIDIA GeForce WHQL 375.63
– GPU-Z v1.9.0
– CPU-Z v1.72.1
– MSI Afterburner v4.11 – to overclock and to record FPS and load temperatures (at a room temperature of 20 °C)
– Valley Benchmark v1.0
– Corsair Link v4.3.0.154





First let’s do the FHD resolution tests.










There are some rumors that this card makes no sense for 1080p because it's built for 1440p/4K territory, and that the CPU (any CPU) might be a limiting factor.

So, in our scenario, in all of the games tested the CPU did not reach 100% usage (it stayed at around 50-60% max, averaged across all cores and threads). Thus it should not be a bottleneck or limiting factor.

In some games it is overkill: in Battlefield 4, for instance, it reached an average of 200 FPS, which is also the maximum because the game is locked there. The same goes for Call of Duty: Infinite Warfare, locked at 125 FPS. Insane.
And this is what we were waiting for – to see if it conquers 4K resolution gaming at a fluid FPS rate.
We only compared it to the GTX 980 Ti and the GTX 970 – the others made no sense at this resolution and these settings, although the 780 Ti is very close to the GTX 970.










Even at 4K it maxed out the latest Call of Duty at 125 FPS.
It is incredible to see an average of 60-100 FPS in most of the games.

We can truly say it is the first single GPU to offer a constant, fluid frame rate at 4K on max settings with no anti-aliasing, because you don't really need it at that massive resolution.

Can you imagine what the future may hold?

* One run of Valley Benchmark @ 1080p 8xAA Extreme HD DX11
* All video cards at stock settings and auto % rpm curve including the Titan X Pascal.
* Both EVGA GTX 970 SSC and the MSI GTX 980Ti TF V will turn off their fans under 60 degrees


As with any stock NVIDIA cooler in the Titan line, and even though this is a new and improved design with the higher efficiency of the 16 nm node, it is still barely adequate, even at stock settings.
Noise output

* One run of Valley Benchmark @ 1080p 8xAA Extreme HD DX11

Power consumption

* One run of Valley Benchmark @ 1080p 8xAA Extreme HD DX11
* Values show total system consumption.


We can hardly believe it is more power efficient than the 980 Ti in the maximum-draw scenario.

* One run of Valley Benchmark @ 1080p 8xAA Extreme HD DX11

Stock settings and results:

Although the stock boost clock is specified at 1,531 MHz, the card pushed itself to around 18xx MHz in our tests, thanks to the new Boost 3.0 (presented at the beginning). But due to the nature of the technology, it will fluctuate widely in proportion with temperature. Also remember that every card overclocks differently.

Overclock settings and results:

* Start in increments of 50 MHz on the core and run a benchmark to test stability.
* Here are our best settings so far given the limitations of the stock air cooler:

– Power limit: 120 %
– Temp limit: 91°C
– Core clock: +122 MHz
– Fan speed: Auto %


So yeah, we managed almost a bloody 2 GHz on the core! And it was stable, on air! Truly epic.
We have a feeling it will go even further, but on the stock cooling it is not worth the risk of pushing it more. We will revisit this when we water cool it.



These are the gains in Rise of the Tomb Raider.




Let’s centralize our findings:

– The boost went from 1,820 MHz to 1,961 MHz (a 7.7% increase)
– The 3D performance in the Valley benchmark went from 125 FPS to 139 FPS (an 11.2% increase)
– In Rise of the Tomb Raider we saw the average FPS at 4K go from 47 to 53 (a 12.8% increase)
– The temperature got close to 88-89 degrees C, hence our hesitation to push the OC on air any further
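The percentage gains follow directly from the before/after figures:

```python
# Overclocking gains, computed from the before/after numbers above.

def pct_gain(before, after):
    return (after - before) / before * 100

print(f"boost clock: {pct_gain(1820, 1961):.1f} %")  # ~7.7 %
print(f"Valley FPS:  {pct_gain(125, 139):.1f} %")    # ~11.2 %
print(f"RotTR 4K:    {pct_gain(47, 53):.1f} %")      # ~12.8 %
```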




Without a shadow of a doubt, this is the most powerful GPU we have had the honor to test and enjoy. An extreme and controversial product, but awesome nonetheless.

The good:

+ The most powerful GPU to date
+ The first true single-GPU solution for 60+ FPS at 4K
+ 12 GB of GDDR5X VRAM
+ New cooling with a futuristic/stealth design
+ Highly efficient
+ The new TSMC’s 16 nm FinFET fabrication node
+ More potential to be unlocked by overclocking under custom water cooling
+ As of the date of the review, no competition for it

The bad:

– The Titan X Pascal is available exclusively from the NVIDIA website, and only in limited countries
– For lack of a better word, the stock cooling s**ks – it is merely adequate even at stock settings/load
– The fans don't turn off at idle
– Extremely high price


Glob3trotters “Extreme & Decadent” Award – 4 out of 5

