nVidia Quadro 600

Why would I ever buy an entry workstation card?

I was always skeptical about Quadro cards… I mean, what is so special about them? We have all known for ages now that Quadros are not that different from their GeForce siblings, in many cases sharing 100% identical hardware, yet selling for many times the price.
How can nVidia get away with this? In fact, even for a basic workstation graphics card like the nVidia Quadro 600, retailers ask you to invest as much as you would for a much more powerful and current GeForce GTX, and configuring your Dell or HP workstation with a “professional” card will set you back even more… outrageous when you look at the underwhelming specs. They were mediocre back in 2010, and they certainly don’t look any better now:


Based on a version of the GF108 40nm Fermi processor, the Quadro 600 is nearly identical to the GeForce GT 420: 96 stream processors (also referred to as CUDA cores), only 8 of which are FP64 capable, 1024MB of non-ECC 128-bit DDR3 VRAM, and two display outputs: DisplayPort (DP) and dual-link DVI (DVI-D).
Both of the above are desirable and often supported by professional-grade monitors, but users who prefer HDMI will have to opt for an adapter or an HDMI-to-DVI conversion cable. The DVI-D is welcome, and a “must-have” for some 1440p or bigger panels. If bought retail, the Quadro 600 also comes with a handy DP-to-DVI adapter.

On paper, the Quadro 600 is surely not a card capable of high compute performance. Unlike the top three Quadro cards of its generation (4000, 5000 & 6000), based on the “big Fermi” GF100 architecture that offers good double-precision computation, wide memory buses and of course much higher clocks across the board, the puny GF108 leaves you wondering: could driver and BIOS optimizations alone help it rise to its name and price-tag expectations?

This short review will focus on results from three benchmarks: the SPEC Viewperf 11 suite, SPECapc for Maya® 2012 and Maxon Cinebench 11.5. As the database on the site gets updated with more graphics card models and the testing process is refined with more up-to-date tests, the reviews will be updated as well.

The card selection was an experiment in affordable workstation GPU choices, and of course YMMV.

Test Setup

The testbed system was an i7-3820 @ 4.5GHz on the Asus P9X79 Pro, with 4x4GB of Samsung 30nm RAM @ 2333. OS and apps were running on a Crucial m4 256GB at 75% capacity.
The EVGA GTX 670 SC 4GB that lives in the above system will be representing the current generation of gaming cards. At $400 it is hardly a “cheap” solution. The SC (SuperClocked) model runs at 967MHz stock, while reference GTX 670s run a 915MHz core. The OC @ 1250MHz refers to the maximum boost speed, which is around 1110-1150MHz for reference cards under full load.

All cards were using the latest nVidia 310.90 drivers for both GeForce and Quadro, and were connected to two monitors – one 27″ 1440p and one 22″ 1080p – through DP or DVI-D, depending on the available ports on each card. Vsync was forced off through the driver settings.

Finally, my little laptop was thrown in with the beasts, to see what a mid-range, three-year-old mobile GPU has to offer: an Acer 7745G with an i7-720 @ 1.6GHz (2.8GHz boost), 12GB RAM @ 1066 and a Mobility Radeon 5650 1GB. The GPU was driving the 900p laptop screen and a 22″ 1080p screen through the available HDMI port.

The OS was Win 7 64bit in all cases.

Test Results

SPECViewperf 11

Many of the tests included in Viewperf 11 may seem outdated to some readers, yet I know of no other standardized benchmark suite that is based on real viewport engines. Tests were run three times and the results averaged for each program.
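For readers curious how the per-viewset numbers were combined, here is a minimal sketch of the averaging step, assuming a simple arithmetic mean over the three runs. The viewset names follow SPECviewperf 11 conventions; the fps numbers are placeholders, not measured results.

```python
def average_runs(runs):
    """Average the fps score of each viewset across repeated benchmark runs."""
    viewsets = runs[0].keys()
    return {vs: sum(run[vs] for run in runs) / len(runs) for vs in viewsets}

# Three hypothetical runs of two Viewperf 11 viewsets (placeholder numbers)
runs = [
    {"maya-03": 30.1, "snx-01": 25.4},
    {"maya-03": 30.5, "snx-01": 25.0},
    {"maya-03": 29.8, "snx-01": 25.3},
]
averages = average_runs(runs)
```

Averaging over several runs smooths out run-to-run variance from background OS activity, which matters for viewport tests that are sensitive to CPU scheduling.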


The little Quadro offers a considerable performance advantage over the much more powerful GTX 670 in most engines included in Viewperf 11. In many cases the advantage is two- or even threefold over the GTX 670 SC, which retails for nearly three times the price of the Quadro 600, yet cannot even keep up with models based on architectures three or four generations behind, featuring a fraction of the available VRAM.

Ensight appears to be the only OpenGL engine that can take advantage of the GeForce drivers, giving the edge to the GTX card. Note that it is not only the Quadro cards outperforming the mighty GTX 670: the little Radeon M appears to have much better OpenGL drivers too, outperforming the GTX despite the massive difference in raw power between both the GPUs and the platforms they run on.
Overclocking the already faster-than-reference GTX 670 board yields very small benefits. With the addition of the GTX 560 Ti to the charts, we can see that the Kepler card did manage to surpass it in every single test, in some cases by more than 20-30%. Of course these are not directly comparable cards, but it does help shape your expectations. The current 310.90 drivers did not allow the 560 Ti to work with the Catia benchmark within SPECviewperf 11; re-installing the drivers did not help.

SPECapc for Maya® 2012

First of all, I have to mention that running SPECapc for Maya 2012 is extremely tedious! Each session goes through each of the 5 scenes in an animated sequence, recording fps while switching between all 9 viewport modes, and repeats the whole thing 4 times. The average completion time for each card was roughly 40-45min. The benchmark records individual test scores and computes composite scores for graphics and CPU performance, which are normalized against a reference system: a Dell Precision 690 with a Xeon 5130 @ 2GHz, an NVIDIA Quadro FX 570 graphics card, and 16GB (4x4GB) of ECC DDR2 667MHz RAM. The test defaults to 1920×1200 windowed mode on our 1440p screen*, and the “usable” viewport is 1600×896.
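To make the normalization concrete, here is an illustrative sketch of SPEC-style composite scoring, assuming (as SPEC composites typically do) a geometric mean of per-test ratios against the reference machine, so the reference system scores 1.0 by construction. The score lists are made-up placeholders, not actual SPECapc data.

```python
from math import prod

def composite(system_scores, reference_scores):
    """Geometric mean of per-test ratios vs. the reference system."""
    ratios = [s / r for s, r in zip(system_scores, reference_scores)]
    return prod(ratios) ** (1.0 / len(ratios))

# Placeholder per-test scores: the reference Dell Precision 690 / FX 570,
# and a hypothetical system that is exactly twice as fast in every test.
reference = [10.0, 20.0, 5.0]
candidate = [20.0, 40.0, 10.0]
score = composite(candidate, reference)  # twice as fast -> composite 2.0
```

A geometric mean keeps one outlier test from dominating the composite, which is why it is the usual choice for cross-test benchmark aggregates.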


Maya 2012 offers Viewport 2.0, which supports optimized instruction sets to streamline the way the program utilizes modern CPUs. The GTX 670 SC fared well, though like all gaming cards, its performance suffers a lot in wireframe and projected-vertices modes, while it improves a lot when switching to shaded modes. The GTX 560 Ti cannot keep up with the 670, but the performance difference is by no means as pronounced as between a GeForce and even the cheapest Quadros.

This is a real time viewport capture of a single run under SPECapc for Maya® 2012 and the Quadro 600.


If you think the viewport with the Quadro 600 is not that responsive, you can always compare it side by side with the GeForce GTX 670 at identical settings.


Also keep in mind that the 2012 Maya benchmark focuses on measuring a single maximized perspective view, while the 2009 version included in SPEC Viewperf 11 animates 2 or 3 different views at the same time. In wireframe mode, for example, the Maya 2009 test does feel very slow, and this is reflected in the numbers the GeForce card is allowed to achieve.

The test also stresses the on-board VRAM with large textures, reaching around 1450MB of usage on our system with the GTX 670 4GB, while all other cards were maxed out. Note that the FX 570 of the reference system, which by definition gets a “1.0 composite score”, is identical to the FX 1700 (same core and clocks) but has 256MB of 128-bit RAM vs. 512MB for the 1700. That, along with a processor clocked 2.5x higher, allows the almost identical GPU to perform ~80% better.

* The laptop, lacking 1440p capabilities through its HDMI port, was tested on 1080p.

Maxon Cinebench R11.5

The OpenGL graphics card testing procedure uses a complex 3D scene depicting a car chase (created by Renderbaron using car models from Dosch Design) with which the performance of your graphics card in OpenGL mode is measured.
During the benchmark tests the graphics card is evaluated by way of displaying an intricate scene that includes complex geometry, high-resolution textures and a variety of effects.

Talking about wheels turning… The Cinebench R11.5 OpenGL benchmark appears to favor both gaming cards, letting them unfold their compute potential and easily surpass the lower-clocked Quadro cards, which show their age.

This is the first benchmark in which the older FX 1800 loses to the Fermi-based Quadro 600, which had been trailing it closely, taking second place in pretty much all the performance tests included in this review. Do not forget that the successor of the FX 1800 is the current Quadro 2000, so the entry-level 600 is actually doing pretty well. The Fermi architecture did fare well in this test, allowing the GTX 560 Ti to also surpass the much higher-specced GTX 670.



I have to admit that I was positively surprised. The puny little Quadro was able to smoke one of the most powerful gaming cards on the market today in the role of a 3D CAD viewport accelerator, proving that it can comfortably drive a couple of monitors in a dedicated CAD workstation environment. It delivers the goods while remaining a half-height, single-slot card that can fit in a mATX or even mITX low-profile desktop, draws less than a third of the power of the 670, and costs a fraction of the price.
Users on a tight budget, rejoice: the nVidia Quadro 600 walks the talk pretty well for a $150 card!


2013/02/08: Charts updated with GTX 560Ti 1GB Data. The card used was an EVGA GTX 560 Ti 1GB, tested on our i7-3820 machine with 310.90 drivers. More cards can be compared through PCFoo’s Resource pages.

12 thoughts on “nVidia Quadro 600”

  1. Pingback: $1000 Workstation – The Intel i5 Edition |

  2. Dang, if only I had found this earlier! I bought a GTX 660 because the K600 seemed so weak against it, and the only review I could find was in Japanese :< Thank you still though! 🙂

  3. Nice review, but it has one black spot! Why isn’t there any comparison of the viewport acceleration performance of these cards in 3ds Max? If possible, I would like to see one, because I’m considering buying a low- to mid-range Quadro card (K600 or K2000) to combine with the EVGA 660 SC 3GB that I already have in my workstation!

    • You are perfectly right, but since there was no current standardized test for 3DS Max, I didn’t want to come up with “my own”. I could, but then there could be a ton of subjective interpretations coming out of it too.
      3DS 2013/2014 uses a very aggressive viewport degradation algorithm that actually allows GTX cards to run it pretty well – as far as fps goes. I don’t really know if going for a K600 or K2000 would give you a meaningful upgrade over a GTX 660, specifically for 3DS Max.

      • Oh, I see. So, regardless of this review and official benchmarks, could you, if it is easy of course, do a test in 3ds MAX with low- and high-polygon models, in wireframe and shaded or realistic mode, and tell me your experience between the two different cards, the GTX 670 and the Quadro 600? In addition, what would you suggest for an upgrade solution? 1) Another GTX 660, in order to benefit from the separate performance of the two GPUs (one for the viewport and the other for RT), or 2) a better GPU with more CUDA cores? My budget is a little bit short…. 🙁

  4. Thanks for the review Dimitris! Since it’s never too late to crash a party, I was wondering if you tested power consumption on this, especially whether there is too much of a difference between single- and multi-monitor setups. I have a GTX 580, and apart from being very happy with the 3ds Max viewport performance, I’m quite conscious of power consumption, and the 580 has a 50W difference between single- and multi-monitor setups. I’m inclined to buy and test the new 750Ti and see how it performs in the 3ds Max viewport. Power-consumption-wise it is an excellent card, and according to benchmarks there is no difference in power consumption between single- and multi-monitor use. Check: http://www.techpowerup.com/reviews/ASUS/GTX_750_Ti_OC/23.html I’m assuming the Quadro 600 will be watt-per-watt roughly on par with the 750Ti, but do you think it will give me the same performance as the 750Ti in the 3ds Max viewport? Will it be even better? I have absolutely no interest in GPU rendering. Thanks in advance! 😉

    • Since the newer 3DS Max versions (2013 and onward) use an adaptive degradation algorithm that works pretty well with fast GTX cards, I would expect the raw performance of the 750Ti to give it an advantage over the Quadro 600 in most applications other than Siemens NX and other “old-school” OpenGL viewport engines. Check my newest entries in the SPECviewperf 12 benches, and you will see that GTX cards are holding their own pretty well in most applications with updated viewport engines, including Solidworks and Maya, where workstation cards were dominating a couple of years ago.

      And yes, the 750Ti is far more power efficient than a power hog like the 580… I don’t know if it will be faster though.
      I think my Kill-a-Watt would clock the Quadro 600 in the high 40s (watts) at full load. It is hard to keep track and log with a tool that doesn’t record the max, just the median / current consumption, over operations that last hours. The 750Ti should be in the 60W range in the worst-case scenario, probably idling even lower than the 600.

      • Thanks Dimitris! The only thing holding me back from replacing the GTX 580 with the 750Ti or the Quadro 600 is that I’m afraid I’ll be crippled in GPU-dependent software like CryEngine and fluid simulations like PhoenixFD or Realflow. In that regard I actually think the GTX 660 (price/performance/wattage) would be a well-balanced choice, don’t you agree? Do you think I’ll notice any difference in 3ds Max viewport FPS between the GTX 580 and GTX 660?

        • I am not sure… I would expect the 580 to be faster, but how much faster really depends on the scene complexity. For simple models I would say there shouldn’t be a notable difference at all.
          In the case of a Quadro 600 vs. any GTX, especially one as powerful as the 580, there are lots of shortfalls outside old-generation OpenGL graphics engines. GPGPU is virtually non-existent (if you leave your HD4xxx Intel IGP on, you might do more than a Quadro 600 or even K2000 in OpenCL compute, but if you have to have CUDA, that’s not an option).

          • Thing is, I tried the onboard graphics of one of my 4770Ks and it was unusable (3ds Max 2014) because of the initial lag when I started panning or rotating with the middle mouse button. I thought I would get away with the IGP since I layer everything in Max, but it was miserable. I may buy a used Quadro 600 and the new GTX 750Ti and compare the two. It’s the only way I can find out 🙂 Also, do you recommend any ATI card right now for 3ds Max viewport use only? When searching for benchmarks, is there any game benchmark whose performance one can extrapolate to the 3ds Max viewport? For instance, if the GTX 770 gives 45FPS in Crysis 3 and the GTX 680 gives 43FPS, can one say the GTX 770 is a better choice for 3ds Max viewport performance than the GTX 680? I know things are not that simplistic, but can I base my choices for the 3ds Max viewport on a DX11 game benchmark? Thanks for your help Dimitris! 😉

  5. Great site Dimitris, I have come across your comments on a number of sites regarding GPUs and you seem most knowledgeable. Wondering if you would offer your expert opinion on an all-around good higher-end graphics card (as of mid 2014) for Nuke / Adobe and general 3D (3ds Max and Maya). With prices being pretty close, would you recommend a Quadro card like the K4000, or a GTX 780, or even a Titan? Also, please correct me if I’m wrong here, but regarding Fermi vs. Kepler: I thought Kepler ran at half the frequency, so in effect having double the cores on the new Kepler cards would end up being basically equal to the older Fermi cards, but without the compute power. Is this correct? If so, would this in effect make the new-gen Quadro K cards no better, and possibly even inferior? Also, I noticed there are far fewer performance drivers nowadays for the Quadro cards. Thanks Dimitris!
