
NVIDIA GeForce GTX Titan Powered Maingear SHIFT

NVIDIA is launching a new, ultra high-end graphics card today, the long-rumored GeForce GTX Titan. Although the card itself and a couple of its features are new to the consumer graphics card market, many details of the GPU powering the GTX Titan, namely the NVIDIA GK110, have been previously covered here at HotHardware. In fact, NVIDIA revealed the GK110 at GTC 2012 in May of last year and released its first Tesla-branded products based on the GPU a few months later. The GK110 is also a key component of the GeForce GTX Titan’s namesake, the Titan Supercomputer, which uses almost 19,000 of the GPUs in tandem to crunch numbers at a brisk 20 petaflops. Products designed for the HPC space are somewhat different from those that power our gaming PCs, though. The GPUs on the cards may be similar, but the power, form factor, and budget considerations are in totally different leagues.

Those requirements compelled NVIDIA to design the GeForce GTX Titan in such a way that it will usher in not only a new class of uber-powerful gaming systems that can rip through today’s hottest games without breaking a sweat, but high-end small form factor systems as well. We’ve got the GeForce GTX Titan’s main features and specifications detailed below, along with some related reading material if you’d like a refresher on some of the technologies at work inside the GK110 GPU. Many more details follow on the pages ahead, though you’ll have to wait a little while longer for the results of our benchmarks and performance tests, which are still under embargo. (Update: GeForce GTX Titan performance available here.)

The GK110-Based NVIDIA GeForce GTX Titan
NVIDIA GeForce GTX Titan
Specifications & Features
Graphics Processing Clusters: 5
Streaming Multiprocessors: 14
CUDA Cores (single precision): 2688
CUDA Cores (double precision): 896
Texture Units: 224
ROP Units: 48
Base Clock: 836 MHz
Boost Clock: 876 MHz
Memory Clock (data rate): 6008 MHz
L2 Cache Size: 1536KB
Total Video Memory: 6144MB GDDR5
Memory Interface: 384-bit
Total Memory Bandwidth: 288.4 GB/s
Texture Filtering Rate (Bilinear): 187.5 GigaTexels/sec
Fabrication Process: 28 nm
Transistor Count: 7.1 Billion
Connectors: 2 x Dual-Link DVI, 1 x HDMI, 1 x DisplayPort
Form Factor: Dual Slot
Power Connectors: One 8-pin and one 6-pin
Recommended Power Supply: 600 Watts
Thermal Design Power (TDP): 250 Watts
Thermal Threshold: 95°C

The NVIDIA GeForce GTX Titan’s main features and specifications are listed in the table above. Before we get into the specifics of the card, its GPU, and the systems it will power, however, we want to direct your attention to a few past HotHardware articles that lay the foundation for what we’ll be showing you here today.

GK110 GPU Die Shot
Although the GeForce GTX Titan is built around a different GPU than its predecessors, the Kepler-based GK110 at the heart of Titan leverages technologies first introduced on previous-generation NVIDIA products. As such, we’d recommend checking out these articles for more detailed coverage of many of NVIDIA’s existing technologies that carry over to the new GeForce GTX Titan:
In our Fermi and GF100 architecture previews, we discuss Fermi’s architecture and its CUDA cores, PolyMorph and Raster engines, among many other features. In our GeForce GTX 480 coverage, we dig a little deeper into Fermi and discuss the first graphics card based on the technology. Our GeForce GTX 580 coverage details the GF110, the more refined re-spin of the GF100 GPU. In our 3D Vision Surround, 3D Vision 2, and TXAA articles, we cover NVIDIA’s multi-monitor, stereoscopic 3D, and anti-aliasing technologies, all of which are integral to the GeForce GTX Titan. Finally, in our GeForce GTX 680 and GTX 690 articles, we discuss the Kepler GPU architecture and its features in detail.
Let's take a look at the massive GK110 GPU before we dive more deeply into NVIDIA's new Titan graphics card.


We’ve already written about the GK110 in depth; you can find our initial coverage here and more here in a follow-up we posted a few months later, when NVIDIA officially launched the Tesla K20 and K20X featuring the GK110.

The GK110 GPU is massive to say the least.
To recap, the first thing you need to know about the GK110 is that the thing is a monster. The GK110 is comprised of roughly 7.1 billion transistors (yes, billion—with a B), which is over three times the number of transistors used in Intel’s Sandy Bridge-E based Core i7-3960X processor and twice as many as the GK104 that powers the GTX 680. NVIDIA has these chips built using TSMC’s 28nm process node.
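Those ratios check out with a bit of quick arithmetic. Here's a minimal sketch; the comparison chips' transistor counts (roughly 2.27 billion for the Core i7-3960X and 3.54 billion for the GK104) come from their public spec sheets rather than this article:

```python
# Transistor-count comparison (figures in billions, per public spec sheets).
gk110 = 7.1
core_i7_3960x = 2.27   # Sandy Bridge-E
gk104 = 3.54           # the GPU inside the GeForce GTX 680

print(f"GK110 vs. i7-3960X: {gk110 / core_i7_3960x:.1f}x")  # ~3.1x
print(f"GK110 vs. GK104:    {gk110 / gk104:.1f}x")          # ~2.0x
```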

NVIDIA GK110 High-Level Block Diagram
The GK110’s original design features 15 SMX clusters, each with 192 single-precision CUDA cores and 64 double-precision cores, for a grand total of 2880 SP cores and 960 DP cores. Note, however, that one SMX is disabled in every GK110 to keep yields acceptable, which brings the actual, workable core counts to 2688 (SP) and 896 (DP). As configured on the GeForce GTX Titan, the GK110 GPU also features 224 texture units, 48 ROPs, 1.5MB of L2 cache, and a 384-bit memory interface, up from 256 bits on the GK104.
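The arithmetic behind those core counts is simple enough to verify with the per-SMX figures above:

```python
# GK110 core counts: 15 SMX units on the full die, 14 enabled on GTX Titan.
SP_PER_SMX, DP_PER_SMX = 192, 64

for label, smx_count in (("Full GK110 die", 15), ("GeForce GTX Titan", 14)):
    print(f"{label}: {smx_count * SP_PER_SMX} SP / {smx_count * DP_PER_SMX} DP cores")
# Full GK110 die: 2880 SP / 960 DP cores
# GeForce GTX Titan: 2688 SP / 896 DP cores
```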

A Close-Up of a single SMX in the GK110
At the 836MHz base and 876MHz boost GPU frequencies defined by NVIDIA’s reference specifications, the GK110 can offer up to 4500 gigaflops of compute performance and a textured fillrate of 187.5 GigaTexels/sec. On the GeForce GTX Titan, the GK110 is paired with a whopping 6GB of GDDR5 memory operating at an effective data rate of 6008MHz, for a peak of 288.4 GB/s of bandwidth. If you’re keeping track, that’s about a 47% higher fillrate than the GeForce GTX 680 and nearly 100 GB/s of additional memory bandwidth.
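All three of those headline figures fall straight out of the clocks and unit counts in the spec table. Here's a sketch of the math, assuming the conventional 2 FLOPs per CUDA core per clock (one fused multiply-add) for the single-precision peak:

```python
# Theoretical peaks for the GeForce GTX Titan, derived from its specifications.
base_clock_mhz  = 836
sp_cores        = 2688
texture_units   = 224
data_rate_mhz   = 6008      # effective GDDR5 data rate
bus_width_bytes = 384 // 8  # 384-bit interface

gflops    = sp_cores * 2 * base_clock_mhz / 1000  # FMA = 2 FLOPs per clock
gtexels_s = texture_units * base_clock_mhz / 1000
gbytes_s  = data_rate_mhz * bus_width_bytes / 1000

print(f"Compute:   ~{gflops:.0f} GFLOPS")       # ~4494 (NVIDIA rounds to 4500)
print(f"Fillrate:  ~{gtexels_s:.1f} GTexels/s") # ~187.3 (quoted as 187.5)
print(f"Bandwidth: {gbytes_s:.1f} GB/s")        # 288.4 GB/s
```

The fillrate advantage over the GTX 680 follows the same way: the GTX 680's 128 texture units at a 1006MHz base clock work out to roughly 128.8 GTexels/s, and 187.5 divided by 128.8 lands in the neighborhood of the 47% figure cited above.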

The new NVIDIA GeForce GTX Titan’s design language is similar to that of the company’s current flagship, the dual-GPU powered GeForce GTX 690, but there are some obvious departures due to the vastly different card configurations.
As listed in the specifications a couple of pages back, the GeForce GTX Titan is outfitted with a GK110 GPU with a base clock of 836MHz and a boost clock of 876MHz. The card’s massive 6GB frame buffer is clocked at 6008MHz (effective GDDR5 data rate) and links to the GPU via a wide 384-bit interface. At those clocks, the GeForce GTX Titan offers up a peak textured fillrate of 187.5 GTexels/s, 4500 GFLOPS of compute performance, and 288.4 GB/s of memory bandwidth, which should make the Titan the fastest single-GPU powered card available today.

Like the GeForce GTX 690, the GeForce GTX Titan is outfitted with a base frame made of aluminum to add rigidity. And the card has a metal fan housing as well. The GeForce GTX logo along the top edge of the card lights up like the GTX 690's too, though brightness can be controlled on the Titan.

The actual cooling hardware on the GTX Titan consists of a large vapor chamber with a densely packed, nickel-plated aluminum finstack and a large rear-mounted barrel-type fan with user-adjustable fan curves. Like the GeForce GTX 690, the Titan uses low-profile components on roughly the front 65% of the PCB around the GPU, and the card’s cooler has a flat, ducted baseplate for unobstructed airflow, which minimizes turbulence, reduces noise, and improves cooling. There is a window cut into the fan shroud that shows off the finstack (under a Lexan window), and due to the fan configuration, virtually all of the heat produced by the card is exhausted from the system. The GeForce GTX 690, with its centrally mounted axial-type fan, would expel half of the heated air from the system and dump the other half into the case. Titan’s blower design, by contrast, pushes all of its airflow outside the system it’s installed in.

   
As evidenced by the pair of SLI edge connectors at the top of the card, the GeForce GTX Titan supports up to 3-Way SLI, and because the TDP of the card is only 250 watts, one 8-pin and one 6-pin supplemental PCI Express power feed are all that’s required to power the Titan.
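The connector math is straightforward. Per the PCI Express specification, the x16 slot supplies up to 75 watts, a 6-pin connector up to 75 watts, and an 8-pin connector up to 150 watts, so the combination comfortably covers the Titan's 250-watt TDP:

```python
# Board power budget for the GTX Titan (limits per the PCIe specification).
slot_w, six_pin_w, eight_pin_w = 75, 75, 150
tdp_w = 250

available_w = slot_w + six_pin_w + eight_pin_w
print(f"{available_w}W available vs. {tdp_w}W TDP -> {available_w - tdp_w}W headroom")
# 300W available vs. 250W TDP -> 50W headroom
```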

Outputs consist of a pair of dual-link DVI outputs, a full-sized DisplayPort output, and an HDMI connector. The GeForce GTX Titan should have more than enough muscle to push multiple displays simultaneously, and as such, it supports NVIDIA's 3D Vision Surround technology.

GPU Boost 2.0:

NVIDIA is also ushering in a new version of GPU Boost with the GeForce GTX Titan, dubbed GPU Boost 2.0. Fundamentally, GPU Boost 1.0 and 2.0 are similar in that they both allow the graphics card’s GPU to ramp up clock speeds and dynamically alter voltages in an effort to increase performance, but the criteria used to determine the boost frequencies and voltages change with GPU Boost 2.0.
With GPU Boost 1.0, which was first introduced with Kepler, a power target was used to determine the peak boost clocks. If a given workload wasn’t fully utilizing available board power and environmental conditions and temperatures were acceptable, the GPU’s voltage and frequency would be boosted to take advantage of any spare power. We’ve got a more detailed explanation of GPU Boost 1.0 in our original review of the GeForce GTX 680 if you’d like to check it out.
GPU Boost 2.0 works in a similar manner, but in lieu of a strict power target it uses an actual GPU temperature target in its determination of peak boost frequencies and voltages. NVIDIA was relatively conservative with GPU Boost 1.0. Even though the max power target may have been achieved with a given workload, the GPU temperature may not have hit its peak thermal threshold. With GPU Boost 2.0, if there is still temperature headroom available, the GPU will continue to ramp clocks and voltage until the temperature target is attained. The end result is that the GPU ultimately runs at higher clocks more often than it would have with GPU Boost 1.0.
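To make the difference concrete, here's a heavily simplified, purely illustrative Python sketch of the two policies. The real GPU Boost logic lives in hardware and driver firmware and weighs many more inputs (power, voltage bins, hysteresis); the 13MHz step is simply the approximate size of a Kepler boost bin, and the 80°C target is an assumed default:

```python
def boost_step_v1(board_power_w, power_target_w, clock_mhz, step=13):
    """GPU Boost 1.0 (simplified): clocks ramp while board power is under target."""
    return clock_mhz + step if board_power_w < power_target_w else clock_mhz - step

def boost_step_v2(gpu_temp_c, temp_target_c, clock_mhz, step=13):
    """GPU Boost 2.0 (simplified): clocks ramp while the GPU runs under its
    temperature target, so a cool-running card keeps climbing even after it
    would have hit Boost 1.0's power ceiling, and backs off quickly past it."""
    return clock_mhz + step if gpu_temp_c < temp_target_c else clock_mhz - step

# A well-cooled Titan stepping against an assumed 80C target:
clock = 876
for temp_c in (62, 68, 74, 78, 82):
    clock = boost_step_v2(temp_c, temp_target_c=80, clock_mhz=clock)
    print(f"{temp_c}C -> {clock} MHz")   # climbs to 928, then eases back to 915
```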

We should also mention that NVIDIA will allow users to unlock even higher voltages with Titan than were available with GPU Boost 1.0. With GPU Boost 1.0, the maximum voltage was determined by the power target and could not be increased to a level that would impact the long-term reliability of the silicon. A combination of high voltages and high temperatures can and will damage silicon, but on their own—within reasonable limits—neither one will do much, if any, damage. You can run a chip at higher-than-normal voltages (again, within limits) at low temperatures without affecting its long-term reliability all that much. Conversely, you can run at higher-than-normal temperatures with lower voltages without measurably affecting long-term reliability. It’s the combination of elevated voltage and elevated temperature that damages a chip.

If you’re comfortable pushing your card beyond what NVIDIA considers normal limits, though, the GTX Titan gives you that ability through third-party tweaking tools like EVGA Precision or MSI Afterburner, provided you accept a warning and acknowledge that your actions may affect the long-term reliability of the GPU. There are still limits in place (NVIDIA isn’t going to let customers drag a slider and fry their GPUs, after all), but if the performance offered by GPU Boost and basic overclocking isn’t enough and you don’t mind potentially shortening the life of your card, you can push things further.
 

GPU Boost 2.0 (cont.):

What the new temperature target of GPU Boost 2.0 does, over and above allowing the card to ramp up to higher clocks more often, is significantly alter the temperature distribution of the GPU. With GPU Boost 1.0, users would often see a gradual ramping up or down of the GPU temperature as the chip idled or was put under load.
The typical temperature distribution of a GeForce GTX 600 series card with GPU Boost 1.0 is represented in the slide above. What that graph shows is that the GPU would most often run at about 80°C, and less often at lower temperatures or at higher temperatures approaching the GPU’s maximum threshold.
Temperature distributions with GPU Boost 2.0 are very different. Because GPU Boost 2.0 uses an actual temperature target, which is monitored in real time, frequencies and voltages are ramped more aggressively to push the GPU right up to the desired target. On the flip side, should the GPU temperature exceed the target, GPU Boost 2.0 will scale the voltage and frequency down to bring the temperature back in line more quickly.
Users will have the ability to alter the desired temperature target with GPU Boost 2.0 as well. If you have a well-ventilated case with good cooling and want to improve a GeForce GTX Titan’s overall performance, you can increase the temperature target, which will allow GPU Boost 2.0 to push the card’s GPU voltage and frequency more aggressively toward the desired target.

Display Overclocking:

In addition to GPU Boost 2.0, NVIDIA is introducing Display Overclocking with the GeForce GTX Titan. The vast majority of standard LCD monitors sold today operate at a 60Hz refresh rate, while 3D-capable LCD monitors typically operate at 120Hz. For a standard LCD monitor, that means that when V-Sync is enabled, the graphics card outputs at most 60 frames per second to the monitor, even if it’s capable of higher performance.
Some monitors, however, are capable of operating at higher refresh rates than their official rating. What NVIDIA’s Display Overclocking does is essentially override the refresh rate limits reported by the screen’s EDID (Extended Display Identification Data) and allow the graphics card to output higher refresh rates.
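Mechanically, driving a higher refresh rate at a fixed resolution just means generating a proportionally higher pixel clock. The sketch below illustrates the relationship; the blanking totals are assumed values in the ballpark of reduced-blanking 1080p timings, not any specific monitor's EDID:

```python
# Pixel clock scales linearly with refresh rate for a fixed display mode.
h_total = 2080   # 1920 active pixels + assumed horizontal blanking
v_total = 1111   # 1080 active lines + assumed vertical blanking

for refresh_hz in (60, 75, 80):
    pixel_clock_mhz = h_total * v_total * refresh_hz / 1e6
    print(f"{refresh_hz} Hz needs a ~{pixel_clock_mhz:.1f} MHz pixel clock")
# 60 Hz needs a ~138.7 MHz pixel clock
# 80 Hz needs a ~184.9 MHz pixel clock
```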

Like any overclocking, however, your mileage will vary from monitor to monitor. One monitor may be able to operate at refresh rates 20% higher than its rated specification, while another of the same model may choke at even slightly higher speeds. This is a feature individuals will have to play with to find the sweet spot for their particular setup.

NVIDIA took a somewhat different approach during the lead-up to the launch of the GeForce GTX Titan. Not only was the GK110 GPU powering the card previously announced, but in terms of sheer performance alone, the Titan may or may not outpace the roughly 10-month-old, dual-GPU GeForce GTX 690, depending on the workload. What the GeForce GTX Titan does offer, however, is the ability to fit into more form factors than the GTX 690, and it sets a new high bar for single-card performance in ultra-high-end systems.
What you’ll see from a number of NVIDIA’s key system partners moving forward are new flagship and small form factor systems, all powered by the GeForce GTX Titan. As it stands today, there is no more powerful graphics setup than a 3-Way GeForce GTX Titan configuration. At the same time, the Titan’s lower TDP, cooler configuration, and acoustic profile make it well-suited to boutique small form factor systems as well.

 

     
Maingear SHIFT Super Stock with GeForce GTX Titan Tri-SLI
To evaluate the GeForce GTX Titan, we were initially provided an absolutely gorgeous Maingear SHIFT Super Stock system, decked out with three Titans and a slick white and green paint job that would make even the most ardent auto enthusiast envious. An unforeseen issue with its motherboard ultimately prevented us from fully evaluating this new SHIFT configuration, but we’ll revisit it at some point in the future. One of the main reasons we were sent this high-end SHIFT system was that its 3-Way SLI setup supplanted Quad SLI with a pair of GeForce GTX 690 cards as the premier NVIDIA-based graphics configuration. A single GeForce GTX Titan may not outpace a single GTX 690 all of the time, but three GK110s are more powerful than a quartet of GK104s.

The GeForce GTX Titan also allows system builders to offer killer graphics performance in form factors that couldn’t accommodate the GeForce GTX 690. Not only does the GeForce GTX Titan have a 50-watt lower TDP than the GTX 690 (250 watts versus 300), but it’s a half-inch shorter, and its cooler configuration exhausts hot air outside of a system. The GeForce GTX 690 exhausts some air outside of a system but dumps the rest into the case. The GeForce GTX Titan is also quieter than previous high-end GeForce GTX 600 series cards, which is another desirable trait for SFF rigs.

Companies like Maingear, Falcon Northwest, Digital Storm, iBuyPower, Origin and others will all be offering small form factor systems powered by the GeForce GTX Titan.

At this point in a new GPU launch article, we would normally list the details of our test bed and dive right into the performance results. Unfortunately, for those of you who are itching to see exactly what the GeForce GTX Titan can do, you’ll have to wait just a little while longer.
NVIDIA has asked that we not publish our performance results for just a few more days, but don’t sweat it—the wait will be worth it—especially if this Crysis 3 download finishes on time. In the meantime, here’s a video NVIDIA put together announcing the GeForce GTX Titan. After geeking out over the last few pages, this little teaser should get you fired up all over again.

The GeForce GTX Titan should be available immediately from many of NVIDIA’s key system partners, with limited retail availability to follow. As you probably expect, pricing is going to be relatively high and will fall into the same range as the dual-GPU powered GeForce GTX 690. Whether or not you consider the GTX Titan worthy of consideration will depend on your budget and the card’s performance relative to competing offerings. We’ll paint the complete picture soon enough, but one thing is for certain: the GeForce GTX Titan will easily be the most powerful single-GPU powered graphics card available, and its high-end construction, new features, quiet operation, and ability to work in a wide range of form factors make it all the more interesting.

To be continued...

Update: GeForce GTX Titan performance review available here.


 
