CG Workstation – The “Pro”: Getting serious

Each generation of CPU architecture has its limits…so much MHz, so many transistors per mm². What if we want more power? Well, according to Intel and nVidia, pack more of the same! The recipe seems to be working fine for CG artists, so let’s see how much power we can cram into a 1P workstation.

Updated for 4930K and new Kepler Quadros and GTXs

Disclaimer: some of the links contained within this post and resource pages have my referral ID, leading to a small commission for each actual sale that originates directly from my material. This doesn’t mean I won’t strive to provide you with objective information and honest opinions that will potentially help you make educated decisions. If you like the information in this site, please support it by using the links below when you decide to buy.

This build is meant to be a guideline for selecting components that ought to satisfy serious CG artists with an extended budget. CAD users that don’t render their own scenes won’t see an improvement over a fast i5 system with a decent Quadro card, as even the latest versions of AutoCAD, Revit, ArchiCAD etc. are not multithreaded outside their rendering routines, nor do they utilize GPU acceleration outside their viewports, so they don’t care much about extra cores and GPUs.
The “Pro”/professional tag is abused more often than not, but there is one truth behind choosing it for this build: if you don’t plan on making money with it, this build is seriously overpowered, as for most tasks it will cost a lot more yet never pay back its premium over a much more affordable 3770K or 4770K system.
Far from cheap, this workstation will provide cutting edge performance, while being cheaper than most even remotely comparable systems available from HP, Dell and Lenovo.

For the enthusiast crowd that wants to push hardware to or near its limits, the components of this workstation can easily and reliably be pushed as far as or further than the workstations available from BOXX or comparable boutiques. The 4.5-4.75GHz mark is relatively easy to reach with enough Vcore through the CPU and adequate cooling, while 5.0+GHz is possible should you opt for great cooling, a great motherboard and some luck with your particular CPU, as some overclock better than others.

  • CPU: Intel Core i7-4930K or Intel Core i7-3930K hex (6) core CPUs. The benchmark for modern CPUs while still reasonably priced: the classic 3930K or the newly introduced 4930K, both 6C/12T parts for Intel’s socket 2011.
    Marketed as i7s, both of these CPUs are actually based on the Xeon line, with the 49xx line being based on the Ivy Bridge EP (IB-EP) architecture, and the 39xx on Sandy Bridge EP (SB-EP).
    The 4930K improves on both base clock and IPC (IB CPUs are faster clock-per-clock than SB CPUs), but the performance difference is not great, and the 3930K remains a “current” option.

    Being K series CPUs, we are dealing with “unlocked” processors, where we are free to adjust the multiplier to whatever value we want through the BIOS or the Windows overclocking utilities of most decent motherboards, with a couple of very easy and almost fool-proof steps.

    Still, these are 130W rated CPUs, and even without overclocking they will be pumping some serious heat into your case. For serious overclocking, custom water cooling is almost mandatory. A top-shelf twin-tower air cooler or closed loop AIO water cooler might still support O/Cing around the 4.5GHz mark without an issue, but then hits a thermal “wall”, as the 3930K is known to produce around 250W or more at those speeds (and the required Vcores for stability).
    There is no factory cooler included in the box.

    The 4930K is still a new product, and the BIOS optimizations haven’t reached a point of maturity to draw conclusions on how overclockable this chip will be. Most reviews actually show a ceiling around 4.5GHz – where it produces considerably less heat than the 3930K – but the latter was able to reach speeds up to 5GHz or even more given proper cooling and a high-end motherboard.

  • CPU Cooler: Depending on how you will treat your system, there are quite a few cooler options to go for:
  1. Cooler Master Hyper 212 Plus. If you want a basic cooler that will do the job, it is hard to beat the value of the 212…it is cheap, very effective and nearly silent. It is a “cookie-cutter” cross-socket choice that won’t disappoint even over an s2011 beast. Sure, it can act as a good quality “stock” cooler, but it won’t be able to keep a 3930K overclocked and overvolted as successfully as it can the much more efficient i5/i7 quads. Make sure you get a model with s2011 mounts, as there have been quite a few versions out there that don’t include them (still available separately).
  2. Noctua NH-D14. The mother of modern twin tower air coolers, this huge chunk of metal with dual 140mm fans is here to keep your CPU cool even at high overclocks. Practically you cannot get better noise/performance unless you opt for a custom water cooling kit. It does have its issues, being heavy and relatively challenging to install, but it is worth it. It costs around 2 times as much as a 212, and it is clearly overkill for CPUs that won’t be overclocked. An alternative air cooler of this quality is the Thermalright Silver Arrow SB-E, which is a tad better yet a tad pricier than the NH-D14. Common to both the NH-D14 and the Silver Arrow are the indifferent aesthetics. If you plan on getting a case with a side window, something like the Phanteks PH-TC14PE comes in a lot of colors and, though it is not better than the other two, “looks the part”. All three are massive, so certain limitations apply – especially with RAM selection. Read below.
  3. Corsair Hydro H100i. The current benchmark for AIO WC coolers, the H100i is not the best water cooling solution out there, but for the price it is hard to beat. The support from Corsair is also top notch, just like the warranty covering not just the unit but also damaged components in case of a leak. It won’t impress versus an NH-D14, but it is far prettier to look at, much easier to install / remove, and won’t cause any interference with RAM DIMMs having large heatsinks. It costs around 3 times as much as a 212, and though less of an overkill than it is on an IB i7 that releases around 77W, it is clearly not needed for a stock clocked 3930K.
  4. Custom WC: there are more ways to put together a custom WC loop than pretty much all of the components below combined (ok, maybe not, but still…). There are good starter kits from XSPC, Swiftech and others that can be expanded in the future to include more radiators and bring the GPU(s) into the loop. Personally I have been using an XSPC Raystorm Extreme Universal CPU Watercooling Kit w/ EX360 Radiator/D5 Pump. A good rule of thumb is adding 120mm of effective radiator area for every 100-150W of expected heat – see the sketch below for a quick estimate. Most 240mm rads will do a decent job for a single CPU, but plan on adding additional or bigger radiators if you plan on expanding the loop to include GPUs.
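    To put that rule of thumb into numbers, here is a minimal sketch; the wattage figures are rough assumptions for illustration, not measurements:

    ```python
    # Rough radiator sizing helper following the "120mm of radiator per
    # 100-150W of heat" rule of thumb above. Wattages are ballpark guesses.

    def radiator_mm_needed(total_watts, watts_per_120mm=125):
        """Return the effective radiator length in mm (multiples of 120mm) for a heat load."""
        sections = -(-total_watts // watts_per_120mm)  # ceiling division
        return sections * 120

    print(radiator_mm_needed(250))            # overclocked 3930K alone -> 240 (a 240mm rad)
    print(radiator_mm_needed(250 + 2 * 200))  # same CPU + two ~200W GTX cards -> 720 (e.g. 2x 360mm)
    ```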
  • Motherboard: Asus P9X79 WS. Bearing the “workstation” label, this board doesn’t offer much over its P9X79 Pro sibling other than more PCIe 16x slots, which allow for 4x double slot GPUs, while the Pro would be limited to 3x double slot cards. The WS has enough VRM phases for increased stability under most overclocking conditions, and a great BIOS with a plethora of tweaking options.
    The fan controls Asus builds into the motherboard are probably the best available, as are the dual Intel Gbit NICs.
    SATA 6Gb/s is natively supported by the X79 chipset, while USB 3.0 is provided through an add-on controller on the board.
    The WS is a CEB sized board that won’t fit in many mid-ATX cases. Make sure your case of choice has room for it.
    Other choices would be the aforementioned Asus P9X79 Pro, which will suffice for up to 3x GPU configurations in a standard ATX format, or the Asus Rampage IV Extreme, which offers the absolute maximum in overclocking potential, up to 4x GPUs in an E-ATX form factor, and the best options as far as watercooling the board itself.
    Last but not least, the ASRock X79 Extreme11 is a CEB sized board that includes an LSI™ SAS 2308 8-port SAS controller for those requiring a more complicated RAID array in their workstation, and can still utilize 4x GPUs without sacrificing a slot that on other boards would most likely be taken by such a RAID card. It is very pricey, but for those in actual need of the features (and not just e-peen) that won’t be a deal-breaker.
    Both the X79 Extreme11 and the P9X79 WS offer more than 4x slots, but without PCIe risers and a custom case you won’t be able to use more than 4x double width cards with them, in the same way you cannot fill all 4x slots the P9X79 Pro has, or the 5x slots the RIVE has, with double width cards.
  • RAM: G.SKILL Ares Series 16GB (2 x 8GB) 240-Pin SDRAM DDR3 1866 x 2.
    SB-E processors integrate a quad channel memory controller, which naturally leads to opting for multiples of 4 DIMMs to get maximum performance out of it. Most motherboards for Intel s2011 have 8 DIMM slots, and the maximum the memory controller supports is 64GB. 4 x 8GB sticks will be enough for most users, yet room for growth without parting out your initial investment is still there. Prefer low profile heat-spreaders: spreaders offer little to no gains in reliability and stability, but tall ones might cause installation issues, obstructing the use of large CPU heatsinks and even airflow to the CPU in tight situations. The G.Skill Ares and Corsair Vengeance LP are good examples. Water cooling kits (both open = custom WC, or “closed” like the H100) usually clear easily the larger heatsinks found in the fancier memory kits that want to impress, as if performance comes out of the grams of aluminum and copper used on the heat spreader. Just know what you buy.
    DDR3 1866 is far from the fastest out there, but I believe it is a sweet spot of price/performance. I run my X79 board with 2333 RAM and I don’t think I gain any measurable real life benefits.
  • Graphics: This is a tough one. Depending on the direction you will move, this is the second most important component after the CPU, and in many ways the most important. I will list a few options, trying to keep everyone happy:
  1. The old-style cookie cutter: nVidia Quadro K4000. 3GB of buffer and a decent number of Kepler SMX clusters, along with Quadro optimized drivers, will allow this mid-range Quadro to perform up to high standards under most workloads.
    It is fast enough (faster than the CPU, that is) for CUDA or OpenGL accelerated previews, yet not the ideal choice for larger, production images. Not a big deal: most of the CG pros out there haven’t adopted GPU renderers fully, as some of the advanced features are still not implemented – and thus don’t care much for GPU performance outside their viewports.
    You won’t get a better all-around card below the ~$1700 nVidia Quadro K5000, with the exception of AMD offerings like the FirePro W7000 4GB, which forgoes compatibility with CUDA applications.
    Both the Quadro K4000 and the FirePro W7000 workstation cards are single slot wide, so combining them with GTX cards that will act as CUDA accelerators is pretty easy with most motherboards.

  2. The budget conscious: If all you do is 3D CAD and you don’t care about games or GPU accelerated rendering, then I would recommend considering and opting for an nVidia Quadro K2000. It offers the performance of the older Quadro 4000 in a cheaper, smaller and cooler package. Low to mid range Quadros will do the occasional GPU accelerated task, but limited clocks and buffer sizes will be less impressive. Quadros don’t look the part next to the massive GTX cards, but trust me: especially in OpenGL viewports, a Quadro K2000 will be embarrassingly faster than the gaming cards. No surprise, even the Quadro 600 is faster!

  3. The AIO budget CUDA renderer: EVGA GeForce GTX 770 SC 4GB. For CUDA optimized applications and GPU accelerated renderers like iRay, VRay RT GPU* and Octane (among others), you have to opt for nVidia*. 2GB of VRAM is enough buffer for most viewports** in anything but extreme applications and scenes, but GPU renderers can ask for more. These cards can be combined for 4-way SLI, but compute tasks don’t care about SLI. As long as more than one compatible card with sufficient memory is detected, it will be utilized, regardless of the cards being identical or physically bridged. 3D viewports also couldn’t care less about SLI or Crossfire, as those are currently utilized only by gaming engines. The 770 is a rebadged 680 with faster VRAM, and the performance increase over the older 680s and 670s is not substantial. Given you can find them for a cheaper price, any 670/680 with 4GB of RAM is a direct replacement with marginal loss in performance.

  4. The full-fledged CUDA renderer: EVGA GeForce GTX 770 SC 4GB, but rinse and repeat two or three times. Adding more cards will, of course, speed up GPU computed renders a lot. Viewports don’t care about SLI or multiple cards, so only the primary card with the monitor connected to it will carry the burden of accelerating your viewports. Unless you will be gaming “on the side” on your PC, make sure that VRay RT GPU, iRay etc. renderers can accommodate your needs before you commit to buying more than one card. Also keep in mind that only the version of iRay that comes with 3DS Max 2013 supports the Kepler architecture and 6xx cards.
    The GTX 780 is faster, but since you can buy 3x 770 SC 4GBs for the price of 2x GTX 780 3GB boards, and since 2x GTX 770s (or even 2x 670s) already outperform a single GTX Titan, the 770 stops being the GPU of choice only after your budget is comfortable thinking in the range of multiple Titan boards, or a combination of 2-3 Titan boards and a Quadro/FirePro.
  5. The Hybrid Pro: nVidia Quadro K4000 + 2-3x GeForce GTX 770 SC 4GB. The best of both worlds…Some report issues with drivers, but from my experience with 6xx, 7xx and Titan GTX cards, I had no issues adding them as headless accelerators to systems running K2000, 2000 and 4000 Quadros as primary cards. Not a single glitch. Don’t expect the gaming power of the GTX cards on demand though. Switching monitor(s) to the GTX and re-installing drivers is a must if you want to game on a PC with both Quadro (or FirePro) and GTX cards co-existing. On the other hand, GPGPU programs and progressive renderers recognize all available compatible cards without issues.
  • SSD: Samsung 840 Pro 256GB. Probably the fastest consumer SATA drive at the moment, from the company with the best reliability record for the last couple of years. A 128GB drive is enough for most users wishing to hold the OS and basic CG apps, but 256GB will allow you more flexibility plus the ability to store most of your current projects without worrying too much about space. The 840 Pro series is based on MLC NAND, unlike the “vanilla” 840 and 840 EVO that are based on TLC and will have a relatively shorter lifespan. Recent tests have shown that even TLC drives offer an operational lifespan that will probably outlast most workstations, and you should feel safe, as when SSD controllers sense the cells reaching their limits, they suspend writes. The data inside the drive will remain readable and transferable to other devices, unlike HDDs where recovery after failure is hugely more difficult.

    Note: SSDs offer a considerable boost in loading times for all apps and in the overall responsiveness of the system. Still, the performance benefits for most CG applications, as far as rendering speed and viewport performance go, are negligible or nonexistent, with the slight exception of video editing with multiple streams of HD video. A smaller, or no, SSD is the obvious choice for the budget conscious builder that doesn’t want to sacrifice real performance.
  • HDD: Western Digital Caviar Black 2 TB. Most >1TB 7200rpm drives are fast, but the WD Black line is at the moment the only one offering a 5 year warranty, while competitors (and WD Greens/Blues) give 2-3 years. I would take the extra coverage over 5% performance any day. Combining two of the above drives in RAID 1 (mirroring) mode will allow for much safer storage of critical data in your system.
  • Case: Antec P280 Black Midi Case. There is a huge selection of cases available out there, offering a variety of features and aesthetics. The P280 is a simple and elegant design that works, offers some sound-dampening lining, has decent cable management options, can fit most large air coolers and closed loop AIO water coolers up to 240mm like the Corsair H100 without any modifications.
    More importantly, it was chosen for its 9 expansion slots, which allow for more flexibility when planning to use 3x double slot GPUs and still fit the single slot Quadro or FirePro in there, as the different arrangements of the PCIe 16x slots on many motherboards make that hard with most 7 or 8 slot cases – usually because a card ends up in the bottom-most slot. The P280 is one of the few 9-slot midi ATX cases available that can fit a CEB board, and it is pretty good value. Feel free to substitute any unit that suits your aesthetics, given it will accommodate your equipment properly.
  • PSU: Corsair AX1200i 80+ Platinum. Should you opt for multiple cards and wish to overclock a socket 2011 CPU, a large capacity unit is mandatory. For a Quadro + 2x GTX setup, most likely the Corsair AX860i 80+ Platinum will be more than enough, but if you want to go “all-out”, a large unit is recommended.
    People get over-zealous with their PSUs, opting for high capacity units that most likely will be greatly underutilized. At the same time, it is not wise to get a PSU that you know will be stressed to more than 75% of its capacity for prolonged periods, so I would never advocate “low-balling” it to save money and potentially sacrificing stability or risking down-time due to a blown PSU.
    For a stock clocked 3930K and a couple of cards, a quality 650W unit would probably be sufficient, but the 860 gives some room for future growth and will allow the PSU to operate closer to its sweet spot (see the sketch below for a quick estimate). Most likely the next generation of GPUs will be more power efficient, so you will end up using even less power should you upgrade in a couple of years. The 80+ Gold and above certifications are usually limited to high quality units, so opting for one nearly ensures longevity on top of measurable energy savings over time. The digital modulation in these latest Corsair units promises to be an improvement over the already great designs we had seen from Seasonic.
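    To sanity check PSU sizing against that ~75% sustained load guideline, here is a minimal sketch; the component wattages below are rough assumptions for illustration only:

    ```python
    # Rough PSU sizing check: sum the estimated sustained draw of the parts and
    # compare it against ~75% of the PSU's rated capacity. Wattages are ballpark guesses.

    def psu_headroom_ok(component_watts, psu_watts, max_sustained_fraction=0.75):
        """Return (estimated load in W, True if it stays under the sustained-load target)."""
        load = sum(component_watts)
        return load, load <= psu_watts * max_sustained_fraction

    # Overclocked 3930K (~250W), 2x GTX 770 (~200W each), board/drives/fans (~75W)
    print(psu_headroom_ok([250, 200, 200, 75], psu_watts=860))   # -> (725, False): cutting it close
    print(psu_headroom_ok([250, 200, 200, 75], psu_watts=1200))  # -> (725, True): plenty of headroom
    ```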
  • Optical Drive: Optical drives (ODs) have rarely been utilized the last few years. Broadband connections, cloud storage and affordable flash drives are replacing them. For most users, ODs are limited to installing new software and the OS. A Lite-On IHAS124-04 or Asus DRW-24B3ST will do the trick.

  • Operating System: Windows 7 Professional SP1 64bit (OEM). A Windows OS guarantees compatibility with most rendering packages, including 3DS Max. If your distributed rendering will be focused on Maya, you could actually get away with Linux, as VRay client packages are available for it.
    Windows 7 Home editions are limited to 16GB of RAM. Professional and Ultimate versions allow up to 192GB, and offer some additional networking/remote access features that are desirable.

* Latest drivers seem to restore compatibility of 79xx Radeon GPUs and the OpenCL version of Vray RT GPU, but performance is not there yet. In theory the GCN architecture smokes both Fermi and Kepler GTX cards in OpenCL (in most OpenCL benchmarks a $200 or less HD 7950 is notably faster than a GTX Titan), but current performance levels are too low to consider the issues with the drivers resolved.

** GPU memory utilization rarely breaks the 1.3GB barrier in our standardized viewport performance tests, and floats around 1/3 of that when large textures are not displayed or included in a scene.

58 thoughts on “CG Workstation – The “Pro”: Getting serious”

  1. Thank you for the great analysis of current HW demand in connection to the workstation market. Sadly I bought my GPU before reading your articles, thus I do have a question about buying a new one to be more effective with my work. I currently use 2×Gigabyte Radeon 7950 3GB, yet I do not see any remarkable benefits when working in a 2D environment in Autodesk. Now I am very well aware I made a mistake buying consumer cards rather than a professional one. I don’t even play games. I don’t need huge computation power; what I seek is speed when working with 2D CAD applications, prompt clicks of my mouse and real time response on the display, nothing more. I use a double monitor setup (2×1920×1200), but most of the time work only with one. I spent enough money on the ‘workstation’ already, thus I don’t want to buy a $300+ GPU. Would a FirePro V3900 be a significant help (in comparison to the officially not supported AMD HD7950)? Thank you for answering and keep up the good work here.

    • Andrew, it is a common misconception that more than one GPU will help with viewports in 2D/3D apps. The vast majority of them won’t:
      Autocad, Revit, Archicad, 3DS, Maya, Rhino, Cinema4D, Sketchup (I think we covered 99% of CG and drafting in architectural fields) don’t care, and will use only one card as primary, while the second or third will idle.
      I believe the only CG related program that does make use of SLI/Crossfire cards is MARI, and GPU renderers do make use of multiple supported cards without physical linking or special driver settings.

      I don’t have the resources for reviews on-demand, but I would guess that a V3900 would be more than fine for 2D Acad.
      Most of the performance is unlocked through the drivers with workstation cards; at least that’s what my Quadro 600 vs GTX 670 comparison on this site, and personal experience, has shown.

  2. Dear Dimitris, I’ve found your article very interesting as I’d like to update my computer. But I still have some doubts about it. Currently I have a Q9550@2.83GHz + Quadro FX3800. First of all I thought to switch to 2 x Xeon E5-2620 (24 threads in total) + Quadro 4000 (conf.01). Then, exploring the web a bit and asking around among 3D artists, I’ve discovered that a lot of people use configurations like the one in your article. I’ve tried to make a list online (not Amazon, as I can’t buy on it as an EU citizen) and found out that an i7 3930K + Quadro 4000 + GTX 670 (conf.02) has more or less the same price as conf.01. Basically I use 3dsMax + Vray + Photoshop. As my goal is to improve render times and explore GPU rendering (especially using VRay RT for quick feedback, which now doesn’t even start), I’d like to know if the i7 will give me a good improvement compared with what I have now, and if it could be an option to keep the FX3800 instead of a new Quadro 4000 as the first card. Otherwise, do you think the 2 x Xeon configuration is less powerful outside render tasks? thank you very much.

    • Thanks for your kind words. Yes, I believe you have a dated enough system that an upgrade to any modern i7 (even the quad cores) would be significant and noticeable. The same would be the case should you have a 1st generation i7. As far as rendering goes, a low-end 2P Xeon hex-core (24 threads total) system might have a veeeery slight edge against an s2011 hex-i7. As a rule of thumb you can always estimate the aggregate GHz by multiplying threads x GHz for each processor. Since both the 39xx i7s and the E5-26xx Xeons are based on the Sandy Bridge-E architecture, the IPC is comparable and so are the aggregates.


      E5-2620 = 6 cores * 2 threads / core * 2.0GHz = 24GHz per CPU, or 48GHz for a 2P workstation / 2.5GHz turbo single thread
      i7-3930K = 6 cores * 2 threads / core * 3.2GHz = 38.4GHz / 3.8GHz turbo single thread
      i7-3970X = 6 cores * 2 threads / core * 3.5GHz = 42GHz / 4.0GHz turbo single thread
      I would say that, including the motherboard (which should be considerably more expensive for the 2P Xeon), even a 3970X + mid-range X79 board is comparable in price. Obviously the single thread performance with the i7s will be massively better, as even without O/C you get 50-60% more speed. Even with mild overclocks (4.5GHz is reliably doable even with relatively cheap < €100 cooling solutions), a 3930K can match and/or surpass the aggregate speed of the 2P low end Xeon setup, and demolish its single thread potential even further.
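      The “aggregate GHz” rule of thumb above is just cores × threads per core × base clock, summed over sockets; here is a minimal sketch of it (illustrative only, and only meaningful between CPUs of comparable IPC):

      ```python
      # "Aggregate GHz" rule of thumb: cores x threads per core x base clock, times sockets.
      # Only a rough comparison, and only valid between CPUs of similar IPC (e.g. SB-E vs SB-E).

      def aggregate_ghz(cores, threads_per_core, base_ghz, sockets=1):
          return round(cores * threads_per_core * base_ghz * sockets, 1)

      print(aggregate_ghz(6, 2, 2.0, sockets=2))  # 2x E5-2620 -> 48.0
      print(aggregate_ghz(6, 2, 3.2))             # i7-3930K   -> 38.4
      print(aggregate_ghz(6, 2, 3.5))             # i7-3970X   -> 42.0
      ```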
      As far as GPUs go, yes, you can keep using the FX 3800 which should still be faster than any GTX for viewports, and add GTX cards as accelerators for VRay RT GPU.

      • Thank you for your clear and quick reply. As regards the Quadro, do you think that switching to a new Quadro 4000 would be noticeable? I tried to find comparisons between the two generations but didn’t find anything. I’m quite happy with the FX3800, but of course the more geometry I add the more problems I get. For now I try to work around it by using proxies and “box display mode”, but I don’t know if with a new generation Quadro the fluency of the viewport will considerably improve.

        • It seems the one aspect that you are not mentioning or addressing is your RAM. How much memory does your system have? From industry experience, I can attest that memory is the number one factor in allowing more geometry. Yes, a legit video card will allow you to fly around the viewport more fluidly, but you will bottleneck yourself every time without sufficient RAM. Good luck

          • I agree with you. If you go through the post again, you will see that I am suggesting 2x 16GB (2x8GB) kits, for effectively 32GB of DDR3-1866 in quad channel.
            This is enough for most situations, but it also allows for maxing out the board with 64GB (8*8GB) if someone wishes.

            There are little to no notable gains in going to faster than 1866 RAM modules in pretty much anything but gaming and the very latest AAA titles.

  3. Dimitri, this is an amazing job that you are doing. I was looking for something like this for months…somebody that knew both the 3D work environment and computer specs and technology…. You definitely know what you are talking about, and you know quite a bit.. I am impressed by your knowledge in these areas, and your ability to explain them quite clearly, with such good intentions to help people…. After reading one or 2 random posts from you at CGarchitect, I trust your recommendations with my eyes closed… Thanks for having a site like this, and helping people. That being said, I will soon be posting my questions here, as soon as I have decided on the date when I will need to buy my new equipment… which I hope is soon.

  4. Hey Dimitris, Was looking for ages for a site like this! Great work explaining the pros and cons for building my next CG workstation / render farm! It feels a bit like reading the complete Vray guide, but for hardware ^^ with lots of Aha moments. My question would be: What if I took your mid-range 3770K workstation and added 1 or 2 of your $500 render nodes? Would it be faster in rendering with the Vray distributed rendering option than if I took one 3930K workstation? What’s your view on distributed rendering, actually? Thanks!

    • Distributed rendering is “the only way” past some performance requirements.

      The value of nodes is ofc much higher than getting more powerful single box solutions, and a 3770K with a couple of fast nodes – esp. as aggressively priced as those 8350s* – will beat a 3930K handily. Actually any 2 of the above will beat a single 3930K in real life rendering tasks, with the exception of really small scenes with heavy textures, or some combination where the initial loading of assets through the network could be the determining factor for smaller tasks.

      The great thing with distributed rendering is that you can manage your rendering in the background 100% on the nodes, freeing up your workstation for you to produce other content in parallel.

      Small note: the 3770K, due to its newer architecture, is actually faster than the 3930K (around 10-15% clock per clock per thread) in anything that is not multithreaded: most particle effects in Max, most general modelling, photo editing and illustration tasks etc. are not.
      Certain filters in Adobe suite and ofc most renderers and video transcoders are multithreaded and would get a considerable boost from the 3930K (up to 50% faster than the 3770K).

      * PS: DDR3 RAM prices went up, so I don’t know if those are as close to $500 as when the article was posted. Still darn great value.

  5. Dear Dimitris, what is a reasonable limit for overclocking the 3930K and ending up with a stable configuration? In case of overclocking, do you suggest the Noctua NH-D14 or the Hydro H100i? I saw that you suggest Windows 7 as the OS; is there a reason to choose 7 instead of Windows 8?

    • Each chip is unique, thus the tolerances are somewhat different.

      Overclocking in a nutshell is usually balancing 3 factors: clock speed, the extra voltage required for stability, and operating temperature.

      Too high a temperature and most modern processors will decrease their clocks to cool themselves down.
      Too low a voltage and the chip will be unstable. Think of the CPU as an “electric data pump”…if you require it to pump at a higher rate (higher clock speed), it needs more energy to do so. The laws of physics apply to this electric machine like any other.
      Too high a voltage and the chip will start overheating, motherboards that are not really designed to work with such loads will be unstable, and prolonged operation under excessive current might degrade or kill the CPU – even if temperatures are not excessive, as this transistor wear happens at a micro level.

      Most “experts” suggest you don’t go much higher than 1.4~1.425V for something that will be running 24/7. Some are more conservative. Most 3930K chips will be able to do 4.5GHz on a good board with less voltage than that, and I would consider 4.5GHz a safe bet with most middle to high-end X79 boards, with 4.8GHz doable. Past 4.5GHz though, either the NH-D14, the Silver Arrow or most AIO water coolers like the Hydro H100i will probably have the CPU running hot. A custom “open loop” water cooling solution should be preferred if you plan on really pushing an s2011 chip.

      As far as Win 7 vs 8 goes, there have been some complaints with Win8 regarding compatibility issues with some software – common with new OSs. The main complaint ofc is the “Metro” UI, which some love and some hate (baby duck syndrome). Otherwise 8 is considered faster, more stable and safer on the web, and will be receiving an overhaul to “8.1” soon. I guess most wrinkles will be ironed out.

  6. Thank you very much for your reply. Finally I hope to buy the configuration next week, after a few weeks of research… I just saw that Corsair has the Hydro H110 cooler at more or less the same price as the H100i, do you think it is a good choice? In that case probably the Antec P280 will not fit the cooler…any suggestion for a case that could fit it?

    • Depends on the motherboard. If you don’t want 4x cards, or your board can do them with 8x slots, some midi cases like the Corsair 650D, C70 & 500R fit the 280mm H110. So does the Fractal Design Arc Midi (front). Many cases can fit it with slight mods. The usual issue is that with the increased width, without proper top clearance the rad or the fans (whichever is lower) don’t clear the motherboard’s VRM heatsinks. And all X79 boards have lots of those.

      Most full towers that do 2x140mm top fans will fit it fine.

      • Hi Dimitris, my seller unfortunately has some problems with the availability of Corsair cases. They suggest a Cooler Master HAF 932 Advance (pretty ugly but seems full of fans and slots)…do you think it could be a good substitute? thank you

        • Never been into HAF aesthetics myself. Big and versatile, yet ugly. I have a CM 690 II Adv and it cannot fit the H110 at the top. It has room for either the 280 rad or 2x 140mm fans. Add both, and you will be hitting the top VRM heatsink of my P9X79 Pro. You could mount 2x 120mm fans inside the top shroud, but not 140mm.
          Most good looking CM cases don’t do 2x140mm top mounted.

          What about a fractal design define xl or nzxt switch 810?
          Kinda hard if he doesn’t carry corsair but…

          • It seems they have the fractal design define xl so I will opt for this…much better in terms of design for sure. thank you for your quick replies and support!

  7. Hi Dimitris…here again to ask for some help… the configuration will be like this:
      CPU: Intel Core i7-3930K Hexa-Core / CPU Cooler: Corsair Hydro H110 / Motherboard: Asus P9X79 WS / RAM: 32GB – G.SKILL Ares Series 16GB 240-Pin SDRAM DDR3 1866 x 2 / Graphics Card: nVidia Quadro K4000 / SSD: Samsung 840 Pro 256GB / Case: Fractal Define XL R2 / PSU: Corsair AX1200i 80+ Platinum / Optical Drive: ASUS DRW-24B5ST / Operating System: Windows 8 Professional 64 Bit …….more GTX cards in the future.
      The case has two empty spaces for more fans, so I thought to get two 140mm (Noctua NF-P14 FLX ??). But I was wondering whether all these fans as intake and only one as exhaust on the rear would generate a good airflow inside the case. It would be: 2 x 140 on the front, 1 x 140 on the bottom, 1 x 140 on the side, 1 x 140 on the rear, and the radiator on the top. I’ve never overclocked a PC. Do you know a good reference (guide) to work on such a system? thank you

    • Kinda hard to beat this information:
      http://www.overclock.net/t/1189242/sandy-bridge-e-overclocking-guide-walk-through-explanations-and-support-for-all-x79-overclockers/0_50 If you are overwhelmed, I can help you, but almost everything you could ask for is in there.

  8. Dear Dimitris, Please help me in configuring a desktop PC for doing architectural and interior views and making walkthroughs. All this work will be done professionally and at a large scale, so please suggest a robust configuration that will help me in reducing rendering time considerably, both for stills and walkthroughs. Regards Vikrant Bakkar

  9. Hello Dimitris, Great work up here. A quick question for you if you don’t mind. System: P9X79 WS; i7 3930K, Samsung SSD 840 Pro 256GB, WD Black 2TB, G.Skill 2133MHz RipjawsX 16GB, custom WC, NZXT 630. I have a tight budget for the GPU, so I have to choose between the K2000 and K4000. I plan to use it for architectural / interior modelling (viewport), where sometimes the number of polygons can be quite high because of the details. I really don’t know how these 2 GPUs behave in scenes up to 20-30 million polygons, and sometimes maybe more. I don’t plan to use it too much for rendering, since for the present moment I don’t render with iray / CUDA, but if I do, I guess your solution with a GTX will fit better alongside the Quadro K series. Also, for interior renderings, I know of opinions that don’t recommend iray for good quality/time results. Thanks a lot, and I look forward to your reply.

    • Hello,
      I have no experience with the K4000. It has 2x the cores of the K2000 and much faster RAM. The K2000 offers comparable performance to the older Quadro 4000, so I consider it a good choice, being much faster than the older Quadro 2000 for practically the same price. For GPGPU and compatible rendering engines, Fermi and Kepler cards mix fine, you should not worry about that.
      To get meaningful rendering times in complex interiors, you definitely need to invest in more than one “accelerator” card, but going from talking about hours to talking about minutes of rendering time is pretty important for some, thus they accept both the expense and the minor tweaks in their workflow to go the GPU rendering way.

      • Hi Dimitri, As you said, I also think I will stay with the K2000, hoping that it will be enough for my scenes’ viewport. As for rendering, professionals say that iray for interiors, especially night lighting, is time consuming. You are right, and it all depends on the budget and how many rendering cards you can afford for iray. I’ll first test iray/CUDA rendering with one “accelerator” card (like a GTX 670) to get a feeling for it, and if the results are good, I’ll later invest in other cards, and better cooling, for more efficiency. If you have any other suggestions, these will be most welcome. Thank you.

  10. Hi Dimitris, Finally I have my machine and am trying to experiment with OC. After reading forums and guides, it turned out that the easiest way for me was to just change the Turbo Ratio and leave everything else at default. I made a few tests on renderings in 3ds Max and got: Turbo Ratio 45, Core Voltage 1.408; Turbo Ratio 44, Core Voltage 1.384 …But…I am really wondering if I should be aware of important settings in the BIOS. I didn’t try any stress tests, but it seems that 4.4GHz could be fine, as the only important thing I understood is to keep Core Voltage under 1.4. Do you think I should take care of other parameters?? thank you very much!

  11. Hey Dimitri Wow, I really appreciate your guide, it has been very useful. Anyway it left me with a couple of questions. I’m trying to put together a workstation for both modelling and rendering. The everlasting dilemma. I’m primarily using Rhino + Vray for Rhino, Grasshopper, ACAD, Revit + postscripting in Adobe CS. I’m looking at three options.
      Option A: i4930k, Cooler Master Hyper 212 Plus, ASUS X79 Deluxe, Quadro K2000, G.Skill ARES 16 GB (4 x 4 GB) 1866 MHz, SSD Samsung EVO 256GB, HDD 2TB Seagate Barracuda 6Gb/s, Seasonic 750W 80+ Silver certified. Price (Danish currency) 16,192 DKK ~ $2913
      Option B: i4820k, Cooler Master Hyper 212 Plus, ASUS X79 Deluxe, Quadro K2000, G.Skill ARES 16 GB (4 x 4 GB) 1866 MHz, SSD Samsung EVO 256GB, HDD 2TB Seagate Barracuda 6Gb/s, Seasonic 750W 80+ Silver certified. Price (Danish currency) 14,192 DKK ~ $2585
      Option C: i4770k, Cooler Master Hyper 212 Plus, ASUS Z87 Deluxe or Gigabyte GA-Z87X-UD3H, Quadro K2000, G.Skill TridentX 16 GB (2 x 8 GB) 1600MHz, SSD Samsung EVO 180GB, HDD 2TB Seagate Barracuda 6Gb/s, Seasonic 750W 80+ Silver certified. Price (Danish currency) 13,394 DKK ~ $2439
      The three options have raised some questions:
      Q1. Do you think the 6 cores in the 4930k are worth the money?
      Q2. Are the quad channel memory and the extra GPU slots of the X79 worth the money compared to the i4770k?
      Q3A. Would I be able to install/upgrade with one or two GTX cards together with the K2000 (main GPU)?
      Q3B. Would the K2000 and one or two GTXs help improve the rendering in VRAY? How does this actually work?
      Q4. Do I need a bigger PSU for more GPUs?
      On http://www.simplyrhino.co.uk they write the quote below about the RAM, but I don’t understand the meaning of it: “Not all memory is the same and high speed low latency RAM will make a difference to performance particularly if the cache speed of the memory is matched to that of the processors.”
      Q5. How can I check if the cache speed of the memory is matched to that of the processors?
      Thank you in advance! I really do hope you can help me, and that what I’m asking makes sense. Cheers, Thomas

    • Thomas,
      A1) Yes, the 6-core does make a difference during rendering or other heavy multithreaded operations. Few apps are multithreaded: e.g. the modelling / particle etc. engines in most (all?) mainstream CG programs are not. Long story short, if you are serious about becoming a professional, or you are a professional rendering a lot, it will make a difference. If you are using it mostly for school and/or hobby, it won’t really. It is more of a self-satisfaction gimmick that will shine a few times here and there, but its real “niche” is paying back when it can shave a few hours a week or even a day from your rendering tasks. That’s if you want to do everything with a single box, as after a point the only answer for faster render times using CPUs is render nodes and distributed rendering, which would make a 6-core or Xeon system redundant living in the modelling workstation – if render nodes are in the plan, better to stick with the fastest CPU in single thread, which today is the 4770K.


      A2) Not really. Quad channel memory is not utilized by most CG stuff, nor even by fast paced apps like games. The 4820K is a great CPU if you want to play with overclocking it a bit further than the 4770K, or if you need things like 3-4 way Crossfire or SLI for gaming, due to the extra PCIe lanes embedded in the CPU. In most stuff, including games, the 4770K is 5-10% faster due to the newer Haswell architecture, which improves upon the Ivy Bridge architecture the 4820K uses.

      A3A) Yes.
      A3B) A GTX, or any supported GPU, will help with GPU based “progressive” rendering engines. That is, engines like VRay RT GPU, iRay or Octane etc. Not the regular biased VRay, which is still exclusively CPU based. Progressive rendering engines are almost all “brute force” / Monte Carlo based engines. The vast number of shaders/cores in powerful cards allows these engines to be much faster than CPU based engines, which cut time through interpolation (biased engines, i.e. what irradiance map / light cache is doing). There are certain features not supported by VRay RT GPU that are found in regular VRay Advanced, but if you can work with/around the limitations, results can be just as impressive and produced very fast. Some people use it exclusively as an active shade viewport to set up their scenes and render the final output with the CPU, and still claim time savings. The 2x cards are there so that one works on the raytracing/active shade window, and the other keeps the modelling area/viewports responsive. If you force all cards to compute/render, the viewports will become very choppy.
      BUT: be careful…VRay RT GPU is not available in Vray for Rhino I think, so…maybe you should hold your horses on this one!

      A4) Yes, you would need more watts in a system with lots of GPUs vs. just one, but the 750W should do fine with 2-3 cards, given one of them is a low-mid end Quadro (the K2000 is like 50W, nothing much) and there is no overclocking. Most nVidia GPUs top out at 200W or less, and the 780/780Ti/Titan are in the 250W range. A Z87/4770K system will be pulling around 100W @ full load, so a couple of 770s plus the K2000 will be doable with a 750W easily. The s2011 CPUs, and especially the 6-cores, are much more power hungry, but still doable. It gets out of hand when you overclock though, as the CPU/mobo with an s2011 6-core running above 4.4-4.6GHz could be pulling 3 times the power of a stock 4770K. In any case, we are “designing for the worst case scenario”. In reality, very rarely will a CG workstation be 100% stressed in both CPU and GPU to need anything in that range.

      A5) Memory quote: I don’t know what he/she is talking about…The applications where memory faster than DDR3 1600 makes a difference are not CG related (mostly games). CPU (and esp. single thread performance) is the bottleneck in most CG apps, not RAM. Cache speed in the CPU is orders of magnitude faster than that of the external RAM. No stick “matches” the cache.

  12. Hi Dimitri Thank you very much for your big help and quick reply! I will be using the machine in a professional manner (not gaming at all), writing competitions and smaller jobs, but as a home office. I still have some (a lot of) questions left, which I do hope is okay =) I think the comment field messed up my “layout” the last time, so I’ll try to divide this one better.
      I think I’ll go for the i4820K or the i4770k then. I don’t think I’ll need the extra cores given the amount of money. If Chaos Group manages to make Rhino + VrayRT work, I would love to use GPU rendering. But it is still a bit tricky. Perhaps you can help me with your further advice.
      Q-RHINO: I’m aware that Vray for Rhino does not give me the RT option, but I’ve read they are trying to fix this in the next version. So to stay a little future-wise, I guess this speaks for the i4820k. Then I have the possibility to upgrade my GPU and put in more memory (if this is needed at all?).
      Q-GPU: Regarding the GPUs, I’m not sure I understand you right. I think I’ll start with the K2000, and then if Vray + Rhino gets an RT upgrade I would maybe upgrade.. But just to be sure: would you recommend installing the combo (2 x GTX 770 and one K2000) on the X79, in a modelling/rendering machine, or is it better to get the K4000? Do you have any links or a guide on how multiple GPUs work, because I’m not sure I would know how to set up this system?
      OC: I would like to work a bit with OC, in order to get better modelling performance (higher CPU speed on a single thread). But which of the two CPUs would you recommend? As you mention, the i4770k is faster (at stock), but if I start to OC to 4.6/4.7 GHz, the i4820k will beat the i4770k, right? Is the CMH 212 fan good enough for OC or do you recommend a better cooler for OC?
      PSU: And again, in this case will the PSU do the job with a smaller OC?
      MEMORY: Regarding the memory, you don’t think that the quad channel would help me at all? If not the speed, then perhaps I could benefit from at least the amount of memory exceeding 32GB, or is this a totally stupid ‘future thought’? 🙂
      ..Wauw, that resulted in a lot of questions once again, but I do hope this will help more than just me in the end. I’ll definitely recommend your site wherever I can :)! Thanks, T

    • s2011 Quad vs. s1150 Quad: The 32GB supported by the 4770K is more than enough for most stuff, but if you want to use the 4820K as a stepping stone for a hex-core in the future, surely the s2011 board expense becomes more justified. Most people should be “ok” with 16GB of RAM, 32GB is relatively “safe” for demanding users, but depending on your workflow and average scene complexity, having the option for 64GB might be desirable. You are the one to answer that.

      Vray RT + Rhino: RT CPU is supported with Vray for Rhino 1.5+ I believe. It is the GPU part that is unsupported. Since they’ve introduced RT GPU for Vray 2.0 for Sketchup, I would expect that things will follow for Rhino 3D – don’t know how fast though.

      K4000: I don’t know if I can justify the cost difference over the K2000 for “regular” Rhino work…I guess there should be an improvement, but I haven’t had the luck to work with a K4000 to see if the performance difference justifies the cost difference, and after which scene complexity / poly count this is happening. Maybe it is relevant to your workflow, or not at all.

      OCing: You will need to pump at least 10% more MHz to a 4820K to match a 4770K. Assuming that both can be overclocked, and that 4.3GHz is roughly the “safe” bet with the 4770K (those don’t clock very well unless you are pretty lucky with your sample, or go through lapping / delidding etc), the 4820K needs to be around 4.7~4.8GHz to match the little sibling, or above that to surpass it.

      PSU: The 4820K will be pulling around 230~250W, or maybe more when heavily clocked. The 4770K will be pulling around 200W. It depends on the sample for both (as Vcore goes up and frequency goes up, so does consumption/heat generation, so doubling the rated TDP is actually pretty common/easy). Add the GPUs, which will be 200W a piece + 10-15% when overclocked (most Kepler based nVidia GPUs don’t do more than 10% extra watts unless you flash a custom BIOS). Good quality PSUs can give 80-100% of their rated capacity 24/7, so no need to worry. A good 750W should be able to support a 4820K and 2x 770s with mild overclocks (to be safe), or a 4770K + K2000 + 2x 770s just fine. For hex core upgradability and GK110 based GPUs (Titan, 780s etc.) I would go for a 1000W unit, though if all this “upgradability” talk is hypothetical at the moment, do not forget that a 400-500W PSU is all you need to comfortably run an overclocked quad (whichever socket) and a K2000. No need to pay in advance for something you won’t be using for a couple of years.

      Cooler: I would recommend an AIO/closed loop cooler, probably a Swiftech H220 or the closed loop Corsair and/or Coolermaster equivalents. Those are easy to mount in most newer mid-towers, and pretty decent in performance. High performance air coolers, like the ones mentioned in this blog, are ofc a safe option too, but are a bit trickier to work with should you opt for DIMMs with higher than normal heat-spreaders. Unfortunately it is increasingly harder to find fast 8GB modules with low timings / high clocks. Memory branding markets “bigger is better”.

  13. Again thank you very much Dimitri. I will flip around the two options one more time, and then decide which one it will be. It’s a close run, and in the end it will be the small differences that will make me decide.
      I found this 8 GB memory module “Corsair Vengeance – Memory – 8 GB – 1600MHz” and it seems to be a good choice for the 4770k, so right now I’m leaning towards this system. As you said with the PSU: “No need to pay in advance for something you won’t be using for a couple of years,” and I think the 4770k will fulfil my requirements atm.
      The only question is if I can live with not being able to upgrade right away, both GPU and memory. Perhaps it’s a stupid desire, but it’s there.. 🙂
      Thank you very much for your big help! It’s been a pleasure talking to you. //Thomas

  14. Hi, I am waiting for the 3930K + K2000 + 4x4GB DDR3 + 250GB SSD + Asus P9X79 + 750W PSU workstation. I am wondering whether I should add a GTX 660 to it, or if it is not worth the $100 of additional investment. The system will be used for 3ds Max and VRay modelling and rendering, e.g. interior design projects. Thank you.

    • The answer lies within your workflow: if you believe that using the extra GTX as a progressive rendering accelerator for setting up lighting etc. would help you, it might be a good idea. BUT, the 660 is a pretty weak card. Surely it will be faster than your K2000 in VRay RT (which honestly pales @ GPGPU), but overall it is pretty weak. I would consider hunting for a cheap, used GTX 670 if you are serious about it. The 660 will work as a cheap solution to “try the waters”, but soon after it might leave you asking for more.

  15. Fantastic site, thanks a lot! I have ordered all the components, but since I haven’t built a computer in years (been 100% Mac until now) I’m wondering if you could recommend a site with videos/images of how to do this. I know there are a lot of videos about building computers, but maybe there’s one out there close to your specifications? Thanks! F

    • There are a lot of resources to follow…many of them on YouTube. It is a relatively easy process…one really important rule: if it doesn’t slide gracefully into place, you are doing something wrong! Don’t push too hard!

      • Ok, I’ll focus on sliding gracefully then! Another question: is Windows 7 still the best option, or would Windows 8 also be a good option today? Thank you F

  16. Hi, I just arrived here from a post on CGarchitect. First of all thank you for these guides on WS, it was more than useful reading. That said, I would like to ask a question: Is it really worth spending the extra money on the 4930k over the 4770K? We are in tough times and any saving we can make is important. I stumbled upon this article where they compare a 3930k vs the 4770k, and especially in the Vray test the times look the same. So what I was wondering…is it really worth considering the small difference between the 3930k and the 4930k? Thanks F

    • I don’t have a 4770K to test, but in theory it should not be the case. Yes, the 4770K is a newer architecture, and even at the same clocks it will be 10-15% faster than Sandy Bridge (e.g. 2600K, 3820, 3930K etc.) across the board. It will even be 5-10% faster than Ivy Bridge (e.g. the 3770K, 4820K, 4930K etc.). Add the small clock advantage over the hex cores at stock settings, and it is easy to see that the 4770K is the fastest current processor for single threaded applications (i.e. pretty much anything but rendering itself, video transcoding etc.). For modeling and most particle simulations in 3DS etc., the 4770K will be a tad faster.

      That said, the 3930K should still be faster than any stock quad core in Vray rendering. And faster at that by a notable margin, at least 25-30%, if not more (it depends on the scene).
      The 4930K is worth it, not only because the newer architecture makes it slightly faster per clock (that should make it even faster than a 3930K in Vray and single threaded tasks alike), but also because it lowers power consumption considerably, especially when overclocked. But that is for users that are willing to go onto the s2011 platform now. Myself, being a 3930K owner, I don’t really see the benefits of going to the 4930K, and I think that should be the case for most current 3930K users. I also overclock my machine, and the 3930K/SB-E family reaches higher clocks more easily than the 4930K on average. I would have to find a 4930K that overclocks well past 4.5GHz just to match my current performance, and those clocks are not guaranteed with 4930Ks (also known as the silicon lottery: not all chips overclock the same, even from the same production batch / generation).

      • I agree with you, I wouldn’t even think twice if I only based my choice on the stats of the CPUs, but what got me wondering was all the Max/Vray tests I found on the internet, because – if reliable – all of them seem to show the 2 CPUs almost equal even in multithreading, although that doesn’t seem to be the case when the tests are made with other software, where the hexa outruns the quad. So I’ve started to wonder if the 4770k could possibly have better optimization Vray-wise. I suppose I’ll have to find a test with the 2 CPUs on Vray to see what the real advantage could be

  17. I will finally build my computer this weekend. I see you recommend Windows 7 over Windows 8. Now that Windows 8 has been out there for a while, do you believe it is better to get Windows 8 now that the bugs have been sorted out, or better to stick with Windows 7?

    • People are very emotional about changes, especially when those come from the MS side…if Apple were doing the Metro UI etc., it would be “fresh”, “a great change”, “a revelation”.
      Not that MS doesn’t mess it up with UIs, especially when it comes to MS Office, but quite often they lead with their changes, and the industry follows (i.e. who didn’t hate the Ribbon interface in MS Office 2007? Who would have guessed that all Autodesk programs would start changing to that too a few years later?)

      Most people that used Win 8 with an open mind soon discovered that it is a tad faster than Win7 (that was known for some time), boots faster and uses RAM more efficiently. The 8.1 patch supposedly gives more options as far as the UI goes, and the Start Menu can be restored to almost Win7 style without the need for the Start8 plugin that most were using (and which was dirt cheap). 8.1 also adds better optimizations for working on multiple and/or high resolution monitors. Scaling is pretty good (they had to catch up with the new 1600/1800p laptops; some say they did a great job, some say it’s better than OSX – no idea). It is also proven to perform better than Win7 with most modern games.

      Personally I’ve greatly regretted that I did not purchase the Win 8 Pro for $15, like so many did when it first launched. I have little to no experience WORKING with it, just used it on Surface Pro tablets and a couple of laptops for very little, unfortunately not with a rMBP or the equiv. samsung models to see how scaling works with it. If I had it, I could load it on one of my laptops and whatnot, getting better with it before diving deep.

      I am not aware of any real issues – at least as far as compatibility with latest apps etc goes.
      I do not recommend Win 8 to users simply because I don’t want to be “held responsible” for any preferential mismatch as far as the UI goes, not because Win8 lacks in any particular way vs. Win 7.

  18. Hi Dimitris, Thanks for this great article, indeed very useful. I am planning to buy a new rig. It will mostly be used for rendering and manipulating scenes related to high end architecture projects inside 3ds Max and Vray. I have a few queries. Hope you can throw some light on them. 1. The price of the 3930K and 4930K is almost the same. Which is the better choice? I currently own a 3930K system. I think the 4930K is a step ahead of the 3930K, looking at the benchmarks from this website – http://www.cpubenchmark.net/high_end_cpus.html What do you think I should opt for? 2. I am still confused about the graphics card. As I said before, it will be used only for 3ds Max. Which graphics card do you recommend under a 400-500$ budget? I don’t use Vray RT or GPU based rendering. I’d appreciate it if you could spare some time to answer the above. Thanks a ton!

    • In general, upgrading to a 4930K won’t yield any significant benefits for current 3930K owners. The absolute performance difference is less than 10%, and very hard to detect in person: you won’t upgrade and feel a difference in how responsive your programs became. A 10hr render might come out 30-45min faster, etc. In general it is an upgrade of small benefit for the cost involved.
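
      If you want a rough sanity check on that figure, here is a minimal back-of-the-envelope sketch (the 7% speedup is an assumed value inside that sub-10% range, not a measurement):

```python
# Rough estimate of render-time savings from a given CPU speedup.
# The 7% figure is illustrative only, within the "less than 10%" range above.
def minutes_saved(hours, speedup_fraction):
    """Minutes saved when a render that took `hours` runs on a CPU
    that is `speedup_fraction` faster (e.g. 0.07 for 7%)."""
    new_hours = hours / (1 + speedup_fraction)
    return (hours - new_hours) * 60

print(round(minutes_saved(10, 0.07)))  # ~39 minutes off a 10-hour render
```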

      Currently the cookie-cutter answer for a decent GPU is a GTX 760, or a 670 if you can still find it in stock at a similar price to the 760.
      A Radeon 280X can also be a pretty good card for all-around DCC/CG work, with 3GB RAM and within/below your budget.
      If you do want to “deplete” that budget, depending on your local market you might be able to find a 780 on sale, but purely for viewport performance in 3DS, the difference over a 760 will be too small to warrant the price.

  19. Dimitris, your knowledge is extraordinary and I’m so glad I’ve stumbled across your site! I’m hoping you can give me some specific advice which ties in with your excellent analysis of “Pro” workstations… So, I’ve been designing houses in 2D CAD since about 2003. 3yrs ago I moved to 3D, which has been great. But now clients are asking me more and more often for smarter and more detailed 3D visuals. 3ds Max is great and can create the results I’m after. However, on a cost/time basis, it takes way too long to create enough 3D detail in the models to make “wow” visuals… So in order to try and speed up the process, I’ve now moved to (some might say) the much more basic world of Sketchup. It’s actually been a great step forward because I now have a tool to quickly mock up 3D objects etc., but it also has enough features to quickly add the necessary detail. However, the rendering in Sketchup is obviously poor, so I’ve now started using VRay for the rendering. This is where, ultimately, my existing system just isn’t cut out for it anymore. I understand the software etc., but am struggling with the hardware. The latest version of VRay for Sketchup includes the ability to either CPU render or GPU render. Does that mean that it will now be possible to install more than one GPU in my new system, or is it still limited and therefore better to just stick with one GFX card? Also, processors… Am I right in thinking that basically I’ve got two options, i.e. 4930K or 3930K? Why is the 3930 slightly more money?!? Is there any point looking at the 4960X? Or seeing as the 4960X is about twice as much money as the others, what about going dual CPU with 2x of either the 4930 or 3930?!? Any ideas most greatly received! Ben

    • Ben, sorry for taking so long to respond.
      First of all I have to say that I really like Sketchup, and I do think there is more to it than just the price that makes it the most popular 3D modeler in the industry today. I wish more tools were so easy and intuitive to use. I won’t comment on exporting straight from Sketchup. Of course it has its uses, but there is nothing like VRay – and how could there be? This “plugin” costs more than twice what Sketchup does, yet we are lucky to have one “industry leader” supporting the other =)

      One thing you have to remember is that Sketchup is 32bit and single threaded. That means multi-core CPUs will only be used when the VRay plugin is running, and after the “preparing scene for VRay” dialogue is gone. Thus, single threaded performance is very important for any Sketchup modeler: regardless of having 2, 6 or 24 cores in your system, most operations will only be using one, so you had better make it fast. The 4930K is the best balanced CPU for 1P workstations at the moment. It is not as fast as a 4770/4771 or 4790 series CPU in single threaded work, but it should make up the difference when rendering. The 3930K is a good CPU, but since it was replaced by the 4930K, the only reason for it being more expensive is your retailer having an outdated pricing table.

      The X series CPUs are indeed faster, but we are talking about something like 200 out of 3800MHz. That won’t make enough of a difference to justify the cost in my opinion, and you could easily overclock the 4930K to higher speeds even through the automated options that most X79 motherboards come with. Dual CPU systems cannot be built with i7 CPUs – you need a Xeon E5-2xxx based system to go dual CPU.
      Don’t let the common socket 2011 tell you otherwise: Xeon CPUs can be used in 1P configurations on an X79 motherboard, but i7 CPUs cannot be used on a 2P C602 board. As I’ve explained above, single thread performance is pretty important on a workstation – working in either 3DS or Sketchup – so if you were to go for a 2P Xeon setup, you would need fast 6 or 8 core CPUs running above 3GHz base speeds to get it.
      Cheap, sub-$600 hex cores @ 2.0GHz or similar won’t cut it, being notably slower than a 4930K in single threaded work and barely faster – if at all – during rendering. Decent Xeon E5-2xxx CPUs that I would recommend retail above $1200-1500 each. Ouch.

      • Wow you have a brilliant talent for explaining things very well! Thank you! So what about graphics cards… If I were going down the 4930 CPU route, is there a definite better option to go down? i.e. It seems to me that I’ve either got to go for a GeForce GTX 780 series, or a Quadro series?!? The latter being specifically designed for CG. Is that correct? But what if I also wouldn’t mind this PC to play the odd game on too? To be honest, just GTA V when it’s released at the end of the year! Would getting a GTX 780 still be great at the CG side of things as well as be good for gaming? Or what about the TITAN X cards? It seems they’re about the same price as a high spec Quadro card… Would that be a good one to go for? Thank you again.

        • For general viewport acceleration, a 760 or Radeon R9 280 should work just fine.
          The 780 is of course a faster card, but in many ways that doesn’t translate outside gaming due to viewport engine and driver limitations: 3DS doesn’t really know how to use a 780 or Titan to its full potential, so despite having “nearly double” the raw performance potential of a 760, you simply won’t see that kind of advantage in real-life CG work.

          Quadro cards add this “driver magic” for certain applications, but most mainstream ArchViz applications don’t really benefit from it: 3DS, AutoCAD, Revit etc. have all switched to Direct3D, where workstation cards offer no real benefit over GTX/Radeon cards.

          Sketchup is an OpenGL application, but either the drivers provided by AMD/nVidia, or its viewport engine are not optimized to see real benefits from workstation cards.

          Industrial design modelers like Catia, ProEngineer, Siemens NX etc are the applications where workstation cards really shine – check the Specviewperf benchmark section to see results from hands-on testing some popular cards across the board.
          http://pcfoo.com/specviewperf-12-gpu-scores/

          The Titan Z is a weird card: it is a dual GK110 card = 2x GPUs in SLI on a single board. Thing is, SLI has to be supported by the app for both GPUs to be used, and CG programs don’t support SLI (or rather, drivers are not written to support SLI outside gaming). Thus, there is no advantage in going with a Titan Z outside gaming, and for gaming dual 780 Tis, dual Titan Blacks or the equivalent R9 295X2 or dual R9 290Xs are far cheaper and also faster solutions.

          Some people rave about “double precision performance” etc. with Titans, but as a Titan owner and someone kinda “informed” around the industry, let me tell you that DP is not really used in the CG industry, or gaming, or whatnot. Some complex Matlab processes or compute workloads that care about precision beyond the ~7 significant digits single precision offers (which is where DP starts to matter) and have a GPGPU platform taking advantage of an unlocked GK110’s DP might benefit; most buyers simply won’t – and of course there is nothing “magic” in a Titan Z that cannot be accomplished faster with 2x Titan Blacks for 2/3 of the cost.
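
          To put the precision point in perspective, here is a tiny sketch (using NumPy’s float32/float64 types purely as a stand-in for the GPU’s single vs. double precision):

```python
import numpy as np

# Single precision (FP32) keeps roughly 7 significant digits,
# double precision (FP64) roughly 15-16 - that is where DP starts to matter.
x = 1.0 + 1e-8  # a difference far below FP32's resolution

print(np.float32(x) == np.float32(1.0))  # True  - the tiny difference is lost in FP32
print(np.float64(x) == np.float64(1.0))  # False - FP64 still resolves it
```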

          • Ah ok, that makes sense… Although it seems to me that, on all accounts, the designers of all mainstream CG software need to pull their finger out and start designing their software to take far more advantage of the hardware that’s out there! So, I’m thinking that a single 4930 i7 with a Titan Black could serve me well… with the potential of adding another Titan Black at a later date, while hoping that Sketchup will eventually become 64bit and/or multithreaded at some point! Does that sound like a good plan?! If so, what about motherboards? I’ve built many PCs in the past and never really given much thought to the mobo. Are they more important than I give them credit for?

            • Hi there Dimitris, sorry to bother you again. Could I just go back to this processor thing again and Sketchup/vRay for a moment… As Sketchup runs 32bit and single threaded, would there be any benefit to running a 2P Xeon system over a 1P i7 system? i.e. Would Sketchup use both single cores? Equally, the same question applies when it comes to rendering with vRay for Sketchup. As soon as vRay is running and 64bit mode has kicked in, would render times be significantly improved with a 2P Xeon system rather than a 1P i7 system? Cost isn’t really the issue in many respects – I’ve been given a decent budget from work to get the “right” system sorted out. The modelling side of using Sketchup isn’t so much the issue for us (our architectural models aren’t THAT complex in the whole scheme of things); it’s the render time of a visual that needs speeding up more than anything. If we would theoretically see a vast improvement in render times using a 2P Xeon system, then I’m tempted to go with the expense of a 2P Xeon system… or conversely, do more GPUs improve render times?

              • Sketchup is still 32bit and single threaded – so it cannot really utilize more than 3GB of RAM or more than a single thread/core, regardless of how many are available in your system. Much like with any other modeling software (all of them are single threaded when modeling, and most when simulating particles etc.), the maximum per-core performance of the CPU is the bottleneck, not the aggregate/multithreaded performance of all cores.

                Thus, for example, while a 10C Xeon like the E5-2690 v2 @ 3.0GHz will easily be faster than an i7-4930K/4960X hex or the newer i7-5960X octa when rendering, most modeling processes and direct Sketchup exports, including prepping the scene for VRay (the “VRay is currently processing your scene, please wait…” message), will perform better with the faster-clocked cores of s2011/s2011-3 i7 CPUs, and even better with the even faster i7-4790K. Should you opt for a slow-clocked 6-core (or so) pair of Xeon E5s that would still render a tad faster than a single hex/octa core i7, the machine could be notably slower as a workstation. That’s the nature of the beast.
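
                To make that trade-off a bit more concrete, here is a toy model of a mixed workday (the clocks, core counts and the 70/30 modeling/rendering split are illustrative assumptions, not benchmarks):

```python
# Toy model: modeling time scales with single-core clock only,
# rendering time scales with clock x cores. All numbers are illustrative.
def relative_time(modeling_share, render_share, clock_ghz, cores, ref_clock=4.0):
    modeling = modeling_share * (ref_clock / clock_ghz)            # one core does the work
    rendering = render_share * (ref_clock / (clock_ghz * cores))   # all cores share the work
    return modeling + rendering                                    # lower is better

fast_quad  = relative_time(0.7, 0.3, 4.4, 4)    # fast-clocked quad (4790K-class turbo)
fast_hex   = relative_time(0.7, 0.3, 3.9, 6)    # hex i7 at ~3.9GHz turbo (4930K-class)
slow_xeons = relative_time(0.7, 0.3, 2.0, 12)   # cheap 2.0GHz hex pair, 12 cores total

print(round(fast_quad, 2), round(fast_hex, 2), round(slow_xeons, 2))
# ~0.7 vs ~0.77 vs ~1.45: the slow-clocked Xeon pair wins the rendering share
# but loses the overall day to everything that runs on a single core.
```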

                So, in a nutshell, if you want the best rendering performance without compromises, there are 2 (or 2.5) ways:
                1) Go with a single 2P box, investing in a pair of fast yet expensive CPUs like the E5-2690 v2 10C or E5-2667 v2 / E5-2687W v2 8C etc. that will boost beyond 3.4~3.5GHz and give you good single threaded performance and very strong multithreaded performance. It’s expensive to outfit an office with many of those workstations, and not that practical, as most of this processing power will be underutilized most of the time.
                2) Go with a rendering node (or more, forming a small farm) that will render exclusively. In this case the workstation can be a quad core i7, giving you access to the best single threaded performance. The render nodes should be fast for the best efficiency, but don’t need to be 3.0GHz – an E5-2680 v2 pair will be great, an E5-2670 v2 pair will be enough.
                2.5) Go with the nodes above, but have a 6-core (4930K, or even better the newer 5820K) or 8-core i7-5960X workstation contributing to the rendering – a bit better than the #2 choice. Though, let’s face it, if you have #2 you already sum 20+4 = 24C/48 threads; going to 26C/52 threads won’t mean that much more.

                Now, 64bit VRay mode is new with VRay for Sketchup 2.0 and I have no personal experience with it, being stuck with 1.49. I keep pressing my employers to get it, but since we have 3DS and outsource the heavy stuff, they don’t feel the pressure. =)
                The 64bit mode is not needed to reap the benefits of a 2P system – you get all cores rendering with 32bit too. It’s the 3GB RAM limitation that gets lifted, expanding to pretty much all the RAM you have available (192GB for Win 7 Pro / 512GB for Win 8 Pro).

  20. Sir Dimitris, could you give me a sample of a workstation build mainly for Sketchup, VRay and Lumion? Thank you very much kind sir.

    • Some thoughts:
      1st – What is your budget? That’s the most important question.
      2nd – What will be your focus? Better Sketchup performance or better VRay performance? That will drive your CPU choice, within the limitations from #1.
      3rd – For Lumion a GTX 970 or 980 should be your best choice. But again, you have to balance that against #2 and #1.
      4th – Which version of VRay for Sketchup will you be using? I would strongly suggest 2.0, which introduces a lot of improvements, one of the most important being 64bit support that at last allows you to utilize more than 3GB of RAM.

  21. Hi Dimitris, thanks for some great articles. I was wondering if you have any input on whether or not to go for the new X99 motherboards, DDR4 and LGA2011-v3 socket platform? I’m gonna shop for a new workstation at the end of this month when “Black Friday” hits the webshops. Budget is tight, so I was thinking of sticking with: 4770K, 32GB DDR3, motherboard? This is a jungle! Or pay more and be a little more “future proof” (I know this is not really viable in the hardware world, but bear with me), i.e.: 5820K, 32GB DDR4, MSI X99S PLUS. It will be used for hobby/freelance 3ds Max and VRay work, so it will also do the render work. At the moment I’m on a laptop, so any choice will be a huge upgrade. I haven’t owned a normal workstation in the last 8 years, so I have no idea if the upgradeability of the “old” LGA2011 platform will suffice. I think I will be getting a Noctua fan and doing some mild OC, so will I have to look for specific RAM for that to work, or just go with the cheapest/slowest ones? I remember you saying that rendering does not benefit from RAM speed, but will it affect OC, or is that just the CL timings? The whole thing will also depend on what will be on sale when the sale starts, but I would like to be prepared to jump on the DDR4 wagon 🙂 Thanks for taking the time to help us novice people.

    • Jens, thank you.
      I would recommend going with a 4790K configuration for maximum efficiency & modeling speed. That said, the 5820K is an excellent all-around CPU that will give you an edge when it comes to rendering. The s2011-3/X99 platform is more expensive due to the need for DDR4, but compared to the old s2011/X79, the 5820K breaks even in performance with the 4930K while costing much less, making the realistic cost of acquiring a 6-core Intel CPU with 32GB of RAM roughly the same either way – i.e. the extra needed for RAM is evened out by the much cheaper CPU going s2011-3.

      Personally I see no real reason to stick with s2011 instead of going s2011-3 when buying a new system at this point. The s2011 is a dead platform – not that it won’t perform great with most stuff you throw at it, but s2011-3 offers a minimum guaranteed upgrade path, not just in clocks but also in cores. The 5820K also seems to clock better on average than a 4930K, and it is faster clock for clock to begin with. Should you have no fear of trying even the mildest of overclocks, the 5820K renders the 5930K redundant: imho the choices are to either go “cheap” 6-core with the 5820K, or all out with the octa-core 5960X.

      I would think of an s2011/X79 system only if you had access to / an offer for a cheap used CPU/mobo, and/or if you were already in possession of 32GB of DDR3-1600/1866 or faster.

      Talking about RAM speed, don’t bother with anything faster than DDR4-2400… it isn’t worth the cost, as the returns are diminishing to non-existent as far as performance goes. RAM hasn’t been the bottleneck for years – not even close.

      • Thanks for the quick reply Dimitris (sorry for the messy post, but everything just turned into one long line of babble when I hit the “Post Comment” button).
        I am getting a biased vibe from your answer. Nice timing with your update to the $1500 workstation (was a good read!). So just to clear a few things up:

        The s2011-3 platform is a “dead” platform and there won’t be many more updates to this one. So even though the 5820K is a good buy now, I would have to change the motherboard for a new socket when the new Intel 6/8-cores hit the market?

        Whereas the LGA1150 socket is newer and I would be able to reuse that motherboard until DDR4 price/performance gets better in a couple of years?
        Since 3ds Max is single core performance based, I get the point of the 4790K being faster with its clock rate, and I also read in one of your earlier comments that viewport performance also depends on the relationship between CPU+GPU and not just the GPU.

        (I guess this is why my 5 year old Xeon at work has sucky viewport performance even though I managed to get a GTX 770…)

        But will there be such a big difference in how 3ds Max performs if I choose the 5820K over the 4790K when I get a GTX 970? This is purely a modelling performance question, as I am sure I would enjoy the two extra cores the 5820K gives at render time.

        This is mostly me thinking out loud, trying to get to the point where I just save a couple of hundred euros, go with your new $1500 build and be amazed by the upgrade compared to my current laptop: i7-3610QM @ 2.3GHz (just to put things in perspective). I am not going to be rendering long complicated animations overnight anyway…

        So if you know of some other article or place that can answer some of this stuff, just throw me a link 🙂

        Your help here and on cgarchitect is greatly appreciated!

        • Edited the post for readability – hopefully close to what you’ve typed.

          You got it a bit backwards: the “dead platform” comment was about the original s2011, not s2011-3, which is relatively new and will have some years ahead of it, with an upgrade path beyond the 5820K already set in the form of s2011-3 octa-core i7s. The s2011 has some Xeons that can be a good upgrade over a 4930K, but as far as i7s go, it is already outclassed by s2011-3 CPUs.

          The s1150 socket is also in a gray area. I personally don’t know if you can trust Intel to drop something seriously faster than the 4790K into the same socket while remaining compatible with current motherboards and/or DDR3. But as it sits, the 4790K has no rival in single threaded performance out of the box.
          As you correctly understood, that single threaded speed translates to feeding the GPU faster all around. Don’t buy into the extra PCIe lanes – much like the quad-channel DDR3/DDR4 memory bandwidth offered on s2011 platforms: fast-clocked i7 quads in either s1150 or the s1155 before it were offering better graphics performance up to Tri-SLI/CF configurations. Even between my personal systems, my i7-4770K @ 4.5 outperforms my 3930K @ 4.7.

          Is that noticeable in real life performance & 3DS??
          …no, not really, because we are talking about two very fast systems that both run 1GHz or so over their base speeds, with a small % difference between them (yet the two-generations-newer Haswell out-paces the slightly faster clocked Sandy Bridge).
          In pretty heavy models though, you may see a difference between a stock 4790K and a stock 5820K, as the latter is roughly 20% lower clocked at Turbo Boost. If you overclock the 5820K to around 4.4~4.5GHz – not an irrational goal even with a “run-of-the-mill” CLC like the H110 or H105 if you get a good chip (I needed custom water cooling for my much hotter 3930K to go above 4.5) – you will get the best of both worlds, but of course the investment won’t fit within a $1,500ish budget.
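
          A quick sanity check on that stock gap, using Intel’s rated max Turbo clocks (4.4GHz for the 4790K, 3.6GHz for the 5820K) and assuming roughly equal IPC between Haswell and Haswell-E:

```python
# Stock single-core Turbo gap between the two chips (rated max Turbo clocks).
turbo_4790k, turbo_5820k = 4.4, 3.6

gap = (turbo_4790k / turbo_5820k - 1) * 100
print(f"4790K is ~{gap:.0f}% faster per core at stock")        # ~22%
print(f"An OC to ~{turbo_4790k}GHz closes that gap entirely")  # similar IPC at matched clocks
```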

          • Thanks for clarifying, Dimitris. I’m looking forward to the Black Friday sales and we’ll see if I’ll be able to pick up some good stuff. I feel very informed now – thanks to you. If you ever get to the Copenhagen area (Denmark), I’d love to buy you a beer for your troubles!
