$1500 CG Workstation – Q3 2014

Based on Intel i7-4790K – Q3 2014

This build is an attempt to produce a well-rounded performer for most CG artists and video/image editors looking for a machine that will set them back around $1,500 – or $1,350 without the OS. This workstation is based on the 4790K, an amazing CPU that out of the box offers performance that, around 6 months ago, was available only through overclocking.

Contrary to public perception, most 3D modeling and CAD applications are single threaded, or heavily dependent on single-thread performance. In layman’s terms, that means real-life performance is determined by the speed of one CPU core that acts either as the front-man of a rock band (or a solo act), or as the conductor of a bigger band of cores / threads.

Thus, against what the price tags and naming suggest, the enthusiast socket 2011-3 5930K hex-core, or even the 5960X i7-Extreme octa-core, is not the best choice of CPU for all workstation tasks. Since most CAD drawing and modeling tasks are single threaded, the very high base and even higher Turbo Boost frequencies of the i7-4790K allow it to outperform either of the aforementioned s2011-3 CPUs. Many would counter with a memory bandwidth argument, as the s2011-3 CPUs (and the s2011 CPUs before them) offer quad-channel DDR4 or DDR3 support while the s1150 i7s are limited to dual-channel DDR3. But again, real-life performance indicates clearly that the current generation of applications is nowhere near bottlenecking the bandwidth of dual-channel DDR3-1866 sticks. You will get better metrics in certain benchmarks, but that is pretty much it: virtually any program that favors the s2011-3 or older s2011 platforms is actually benefiting from the extra cores on those CPUs, not the onboard cache or quad-channel memory – at least nowhere near the point that would make memory bandwidth a driver of your decision.
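As a rough illustration of why core count alone doesn’t win here, a back-of-the-envelope Amdahl’s-law sketch – the clocks are the stock turbo frequencies, and equal per-core IPC is an assumption made purely for simplicity:

```python
# Toy model: throughput ≈ per-core clock × Amdahl's-law speedup.
# Assumes equal IPC across CPUs and ignores turbo behavior under load.

def relative_speed(clock_ghz, cores, parallel_fraction):
    """Estimated relative throughput for a partly parallel workload."""
    serial = 1.0 - parallel_fraction
    amdahl = 1.0 / (serial + parallel_fraction / cores)
    return clock_ghz * amdahl

i7_4790k = (4.4, 4)   # GHz turbo, cores
i7_5960x = (3.5, 8)

for pf in (0.0, 0.5, 0.95):
    a = relative_speed(*i7_4790k, pf)
    b = relative_speed(*i7_5960x, pf)
    winner = "4790K" if a > b else "5960X"
    print(f"parallel fraction {pf:.0%}: {winner} ahead ({a:.1f} vs {b:.1f})")
```

In this simplified model the octa-core only pulls ahead once the workload is almost entirely parallel – i.e. rendering – which matches what we see in practice with modeling and CAD work.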


$1500 CG Workstation – Q3 2014

Processor  Intel i7-4790K Quad Core 4.0GHz 
4 Cores, 8 Threads @ 4.4GHz Turboboost
Motherboard Asus Z97-A
Cooling  Stock – Optional Corsair H105 CLC
Memory Kingston HyperX Fury 1866 2×8 – 16GB
System Drive Samsung 840 EVO-Series 250GB
Storage Drive WD Black 1 TB 7200rpm SATA
Case Fractal Design Define R4
Power Supply Cooler Master V550 80+ Gold
Operating System Windows 7 or 8.1 Pro 64-bit


And some background rationale behind each choice.


  • CPU: Intel i7-4790K – 4GHz (4.4GHz Turbo Boost). The successor to the 4770K doesn’t bring a lot to the table as far as increased IPC goes, but it does boost the clocks decisively, with a 500MHz increase in both base & Turbo Boost frequencies. The 4790K’s base clock is higher than the 4770K’s Turbo Boost clock, leading to a comfortable increase in performance under any situation – effectively acting as a mildly overclocked 4770K out of the box. Note that unlike previous releases, the non-K version of the CPU doesn’t get the same clocks, so even if you don’t plan on overclocking, it is not wise to get the non-K 4790, which clocks just 100MHz above the older 4770K and its locked equivalent, the 4771. The latter was an OK replacement for the K, saving you some money over the unlocked version, but for the 4790 series, stick with the K. As a side benefit, the improved thermal interface between the heat-spreader and the actual CPU die allows you to overclock the 4790K much higher than the average 4770K would go without de-lidding.
  • CPU Cooler: No additional cooler is required, as the retail i7-4790K ships with a factory cooler that works fine at stock speeds. If you want something “better”, you could go for a large twin-tower air cooler like the Noctua NH-D14/D15 or the Thermalright Silver Arrow. If you want to go the CLC way with an all-in-one water cooler, I believe the best choice would be something in the class of the Corsair H105. It is thicker than most 240mm CLC solutions on the market, but still fits most serious midi & mATX towers, which rarely offer support for radiators bigger than 240mm. That said, a better cooler won’t be required unless you wish to overclock, or perhaps want something more effective at cooling the CPU during prolonged rendering sessions. The stock cooler will handle the latter just fine.

  • Motherboard: Asus Z97-A. This is a great-value Z97 board. Fancier versions bump up the available SATA/USB 3.0 and M.2 ports, add onboard WiFi and other features that the vast majority of users won’t really touch. The A already has an 8-phase power delivery system, good overclocking capabilities and an Intel GBit NIC, and can squeeze nearly every bit of performance out of your s1150 CPU with ease. A Gigabyte GA-Z97X-UD3H would also be a safe bet for an excellent board.

  • RAM: Kingston HyperX Fury 1866 2×8 – 16GB kit (8GB×2).
    You can get 1x or 2x of these kits for 16GB and 32GB respectively. Planning ahead is wise: most motherboards for Intel i5/i7 and AMD FX CPUs have up to 4 RAM slots, and the CPUs can support up to 32GB. Even if you don’t see the need for 32GB of RAM, opt for 8GB sticks unless you are after some special speed bin. This leaves room for growth without parting out your initial investment (often the case with 4GB sticks). Prefer low-profile heat-spreaders: tall ones offer little to no gain in reliability and stability, but might cause installation issues by obstructing large CPU heatsinks. We won’t be using such a heatsink in this build, but I believe it is proper to choose versatile components that can be used interchangeably in many builds.

  • Graphics: This is a tough one. Depending on the direction you will move, this is the second most important component after the CPU, and in many ways the most important. I will list 3 options, trying to keep everyone happy:
    • Asus STRIX GTX970 4GB. The latest sub-$400 gem from NVidia, the GTX 970 4GB is a truly great performer: great compute capabilities for those interested in GPGPU-accelerated tasks, and good all-around performance with a very low heat and noise signature. The Asus STRIX design improves upon the reference board, offering better cooling. The best GTX 970 out there is probably the Gigabyte G1 Gaming edition, offering a better port configuration – perhaps tied with the PNY GTX 970 – and the best cooling & power delivery configuration should you wish to overclock. The PNY board is not as impressive for overclocking, but since it uses mini-DP ports, it is convertible to a single-slot GPU when paired with a full-cover waterblock. This allows for high-density GPGPU configurations for more advanced builders. The downsides of the Gigabyte G1 are its massive size and the increased cost over the ASUS. At stock speeds there won’t be any noise or heat concerns with any 970 out there, and performance will be virtually indistinguishable, so you could opt for any one of them without any real sacrifices.
    • PNY Quadro K2200 4GB. In order for “workstation” cards to earn their price tag’s worth, you will have to be using your workstation for engineering CAD applications like SolidWorks, CATIA etc. that actually get a notable performance boost from workstation GPU drivers. The older Quadro K2000 is a mediocre performer, notably slower than the W5000, and certainly nowhere near a K2200. 4GB of VRAM is a nice addition, but not a real requirement – i.e. don’t fool yourself into believing that the speed bump is because of it.
    • AMD FirePro W5000 2GB. If you don’t want to use nVidia, and will be using your workstation for the OpenGL CAD applications described above, this card won’t disappoint. 2GB of VRAM is hardly a bottleneck unless you are doing GPGPU-intensive work. Most applications will be happy with it for pure viewport acceleration, and it is not a bad compute card for its wattage either. The W5000 casually trades blows with the Quadro K4000 3GB in real-life performance.


  • SSD: Samsung 840 EVO-Series 250GB. Even towards the end of 2014, I consider the 840 EVO an excellent value drive. Fast and reliable, it will minimize loading times and turbo-boost your swap and scratch files. I wouldn’t worry about it being TLC NAND based: independent testing has proven that these drives can sustain so many TBs of writes that it would take more than a decade for even the most active users to see any degradation. And remember that when this happens, the controller will just suspend writes on the defective blocks, while reads (thus access to your files) remain technically possible “forever”. 250GB is enough to fit the OS, your design and 3D suites, and even some working files. To ensure maximum performance, remember that you should not fill your SSD beyond 75-80% of its maximum capacity. Make sure you have your drive updated to the latest firmware.
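The 75-80% fill guideline from above, as plain numbers – a trivial sketch (the percentages are the rule of thumb, not a Samsung spec):

```python
# Keeping ~20-25% of the SSD free leaves the controller spare blocks for
# wear leveling and garbage collection, per the guideline above.
capacity_gb = 250
for fill in (0.75, 0.80):
    budget = capacity_gb * fill
    print(f"at {fill:.0%} full: keep data under {budget:.0f} GB, "
          f"leave {capacity_gb - budget:.0f} GB free")
```

So on the 250GB drive, plan for roughly 190-200GB of actual data at most.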


  • HDD: WD Black 1 TB 7200rpm SATA – WD10EZEX.
    This is an excellent and very popular 7200rpm drive. It should be your storage drive for libraries and archived files that don’t need to be on the SSD. It is still a decent-performing spinning disk, fast enough for HD video editing and whatnot.

  • Case: Fractal Design Define R4. Stylish, minimal design and a great layout, along with good noise dampening, make this a great workstation case. Other good choices would be the Corsair or – in case you want to cut down costs without sacrificing much – the Cooler Master N400.

  • PSU: Cooler Master V550 80+ Gold. This is a quality unit with consistently good reviews. 550W is more than enough, as this system will probably draw less than 100W in normal operation, and will rarely break 200W of wall-plug load. The 80+ Gold rating ensures that the unit runs cool and energy is not wasted.
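For the curious, the 550W claim checks out against a rough budget of nameplate TDPs – the figures below are assumed spec-sheet numbers, not measurements, and wall-plug draw would sit a bit higher after PSU efficiency losses:

```python
# Rough DC power budget at full load, using assumed nameplate/TDP figures
# (not measurements) for the parts in this build with the GTX 970 option.
parts_w = {
    "i7-4790K (TDP)": 88,
    "GTX 970 (board TDP)": 145,
    "motherboard + RAM": 40,
    "SSD + HDD + fans": 20,
}
load_w = sum(parts_w.values())
print(f"worst-case load ≈ {load_w} W, i.e. {load_w / 550:.0%} of the V550's rating")
```

Even this pessimistic all-out figure leaves the unit loafing at around half its rating, which is where most PSUs are quietest and most efficient.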

  • Optical Drive: Optical drives (ODs) have rarely been utilized in the last few years; broadband connections, cloud storage and affordable flash drives are replacing them. For most users, ODs are limited to installing new software and the OS. A Lite-On IHAS124-04 or Asus DRW-24B3ST would do the trick, but unless you are required to provide optical media for your clients/school projects, or you like burning backups, ODs are truly optional these days.

  • Operating System: Windows 7 Professional SP1 64-bit (OEM). A Windows OS guarantees compatibility with most rendering packages. Windows 8.1 Pro 64-bit has no real issues, but some users complain about the UI and are willing to trade the better memory management it offers for the “good enough” and more familiar Windows 7. Home editions are limited to 16GB of RAM; Professional and Ultimate versions allow up to 128GB, and offer some additional networking/remote access features that are desirable.
    Disclaimer: For the total price, we assumed that you would go for the most basic configuration listed. No guarantees of pricing and availability in your region, just suggestions =)


Affiliate Links

If you’ve found any of the articles or DIY information useful, please support pcfoo.com and myself by using the supplied “affiliate” links in the site. When you use these links I get a small commission, regardless of what exactly you’ve bought, and of course I have no access or involvement in the process other than “recommending” a product.

In fact, a commission is made from any purchase store-wide made after a session was originated by one of my links, so if you want to support pcfoo, please visit Amazon.com using these links regardless of your intention to buy a new PC or not!

If you are outside the USA but still use Amazon, go to Amazon.com, scroll down to the bottom of the Amazon page and switch to the local Amazon site you would normally shop from in your country. This is far more elegant than filling the pages with ads or having automated routines replace text with commercial links.

If you are not using Amazon at all and you wish to support us, please do so through Paypal by clicking the button below.

Thanks for your support.

74 thoughts on “$1500 CG Workstation – Q3 2014”

  1. Hi Dimitris!!! Been Reading your site for quite some time now, and thanks a lot for taking your time to help the community!!! I just want to get your advice on systems for our office, we are currently planning to upgrade our systems and are planning to get four hi end desktop systems with the following configuration, Dell OPTIPLEX 9020MT, INTEL CORE I7-4790 PROCESSOR (3.4GHZ, 8M, 84W), 32GB (4X8GB) 1600MHZ DDR3 NON-ECC, 256 SSD HARD DRIVE,+ 1TB HDD 3.5” SATA, NVIDIA® Quadro® K620 2GB (DP, DL-DVI-I) (1 DP to SL-DVI adapter. These systems will be used by 1 cad Draftsman, 1 3d visualizer, and 2 designers (we will be using it for AutoCAD, Revit, 3d viz & Sketchup). the visualizer and me also use 3dsmax & Vray for interior visualizations. Is the above configuration enough or do we need to go for Dell Precision workstations with xeons?? or I just switch out the K620 with a K2200?? The processor is already based on your suggestion. Your help and advice will be greatly appreciated! Regards.

    • Your configuration is decent. You won’t need a Quadro for most of the apps you are describing, as there is no performance (or “stability”) benefit from using Quadro drivers with the Direct3D-based Autodesk products. At least not with a card like the K620, which lacks raw performance. Sketchup is OpenGL, but – much like Rhino – it won’t benefit from a Quadro either.

      You will get far better performance per $ spent if you go for a GTX 750Ti 2GB card instead. It has the same raw performance as the K2200 (same chipset) but is 1/3 the price.

      In general, GPUs have no effect on your rendering process – only on the fluidity while you model / set up your scene. The rendering process itself is handled exclusively by the CPU, unless we are talking specifically about GPU-accelerated renderers (Vray RT GPU, iRay etc.). The industry-standard renderers, like Vray Advanced & Mental Ray for 3DS, are CPU-only engines.

      • Thanks a lot for the quick reply! So I keep the i7 – 4790 as is, and replace the quadro with a GeForce?? but will the 750 Ti is enough?? is it future Proof?? or shall i opt for the 800 or 900 series for the longevity?? right now we have the k600 on our systems and we have a lot of refresh & screen Lag issues in both AutoCAD and 3dsmax! Once Again, I appreciate you help!! Thanks

        • The K600 was a “decent” card for its time, but it is pretty old now. Its raw compute performance is very low, and the K620 is not that much better vs. other current models. The K2000 is also not a very powerful card. My employer recently picked a new HP Z workstation for me with a K2000, and it is definitely worse in Revit than the GTX my previous machine had. The K2200 is much better on paper, but as I wrote above, Revit & AutoCAD don’t really care about Quadros, as they use the Direct3D API, and gaming cards already have so much more raw performance (at comparable cost) that it doesn’t really matter. AutoCAD 2013 doesn’t even recognize the K2000 as a “certified” model.

          The 750Ti will be “good enough” for most applications. Is it as future-proof as a 9xx? No, but it is so much cheaper that you can replace it in 2-3 years with another GPU in its current price range, and do better than a current 9xx. In many cases, graphics engines are also CPU limited, even when you just orbit/pan around the model. The 4790K (the K version) is the fastest single-thread CPU you can buy at the moment. Drafting, opening files, applying transformations etc. in AutoCAD, renewing families in Revit, all sorts of modeling and even advanced particle physics simulations, along with pretty much everything in Sketchup, are all single threaded = you will be using 1/4~1/8th of the CPU’s horsepower for those tasks.

          The only reason to go Xeon is for rendering, but since the s2011-3 “Extreme” i7s with 6 & 8 cores (12-16 threads) are already fast and much cheaper than most Xeons, the only real reason to go Xeon is for CPUs with more than 8 cores. And since you would already be spending much more than for a 1P (1-processor) i7 system, to see big gains over that you should be looking at a 2P Xeon system.

          So we are already talking an expense in the region of 3+ times that of a 4790K system like the one I describe. To get comparable (still worse) single-threaded performance, so you can use it as a workstation with no compromises, you have to spend a lot: 8-10 core Xeons above 3GHz can cost, as chips alone, more than a complete 5820K tower with 32GB DDR4, a 9xx GPU etc.

  2. Hi Dimitris, I was longing for a workstation config within this price-range. Thank you for that! Two questions: I was planning to go for a Noctua cooler instead of the H105. Do you think it will all fit in your setup? A quick look at the mobo tells me it will be, but what do you think. Does the case have enough space, also? By the way, is there a huge difference between the Noctua NH-D14 and its successor NH-D15? Do you opt for the latter, despite the 20 dollar difference? I was also thinking of using the 750Ti. I’m using 3ds max with Vray, so I won’t be using AutoCAD or Revit etc.. And, if I understand you correctly from your other configs and comments, the 750Ti should give me the same results and at the same time costs a lot less. Shouldn’t you have this one optional too, in your config? Thank you for helping guys like me to get the best for less. Dave Venmans Vidi Visuals w: http://www.vidivisuals.com b: https://www.behance.net/vidivisuals

    • The Noctua is an amazing cooler, even the original NH-D14. You will easily match the performance of an H100i with it, and do so at lower noise levels. The H105 might be better, but only if you plan on overclocking. That said, even the NH-D14 is overkill if you don’t plan on overclocking: you don’t need a dual-tower air cooler, or even a CLC for that matter. The stock cooler works fine at stock speeds; you could opt for a CM Hyper 212 or an equivalent $20-30 tower cooler if you wanted peace of mind and perhaps lower noise levels @ full load.

      The GTX 750Ti does cut it as an all-around performer in 3DS Max. Vray doesn’t care about your GPU unless you are talking VRay RT GPU, in which case a faster card would be appropriate, and the best bang for your buck right now is the GTX 970. So no, the GTX 750Ti – albeit a great card for the money – is not “the same” all-around as the 970, but you can pull off serious projects with it just fine as far as viewport acceleration goes.

    • It would not fit the $1,500 target without penalizing other components.

      The performance differences between 980 & 970 don’t warrant the price difference, unless maximum performance is required regardless of cost. Think of it as 780Ti vs 780.

      Even if I was building a PC for all-out gaming, I think I would favor a $700 970 SLI setup more than a $550-600+ single 980.

      • Hi Dimitris, Thank you so much for all the information you shared with us. Just want to ask that if I setup two GTX 970 in SLI mode, will the GPU based render like Octane, Vray RT or IRay use all the power of them or just use one of them? Here comes another probably related question. Now I am using Maya 2015 extension on the new Mac pro, which has dual ATI Firepro D700 professional graphic card on it. I just don’t know how this dual mode gonna affects in 3D application. I feel like Maya can only use one card of them to accelerate the view port speed. The reason I said this is because I can only set up the Ram limitation in Maya to 6G and there is no options about set up two professional graphic cards. Also, in After Effects OpenGL option, I can only setup one card under AE preference. So my question is, are there any Applications can use this dual mode professional graphic cards? Thanks a lot!!

        • The SLI (or CF) bridges are used to sync cards for gaming and/or viewports that support more than one card for viewport acceleration. SLI or CF is not needed for GPGPU compute tasks like Octane, VRay RT GPU, iRay and similar renderers. The rendering engines search for compatible devices and will include them in the process should the user choose to, even if they are not identical. E.g. you can render with 7xx & 9xx series GPUs simultaneously, although you can only SLI identical cards.

          You are right to be suspicious of Maya using more than one card for its viewport: it cannot use two. As far as I know, OSX doesn’t even support CrossFire; you have to load Windows on your MacPro to enable it (and of course it only works within Windows).
          I can only speculate why there is no single GPU option for the new Macpro.
          The story we indirectly get is that Apple saw GPGPU as a very important portion of its intended workflow, and with two cards tasks will be distributed to more than one GPU, so that you would see less impact in driving the displays and faster speeds at the same time.
          The less elegant speculation would be that their priority was to have 6x Thunderbolt ports on the back, and they could not drive graphics on all 6 of them with a single GPU. One could test that by removing one of the two GPUs and plugging in monitors to check for disabled TB ports – but I have no MP to try that. 🙂

          So – in short – I don’t think you will find applications or even games that can use both FirePros in OSX, as CrossFire is not supported. You will be able to use both for compute tasks in OpenCL-accelerated apps.

  3. Hi Dimitris! Thank you for the article above, definitely gave me a better understanding of what’s been going in my pc. I just got a new desktop fixed and the configurations are as below: Core i7-4790, Kingston Hyper Fury 16GB, Asus VANGUARD B85, Plextor M65 256GB SSD, WD 1TB HDD, NVIDIA leadtek Quadro K620, Corsair CS650M, Cooler Master Elite 311 and OS Windows 8.1. I’m an Architecture student and frequently run AutoCAD, 3Ds MAX, Rhinoceros, Lumion and Revit. I’m using both Mental Ray and V-ray for 3Ds Max rendering. I also have some of the editing works done in Adobe Software(Photoshop, Lightroom, Illustrator and InDesign). I was told by the IT seller to get a Quadro workstation graphic card instead of a gaming gc for better performance. I don’t play games on my desktop and currently there are no screen lag issues while I’m running a rather heavy 3d scene in 3Ds Max. Based on your reply to the first commenter above, a Quadro K620 doesn’t help much in terms of rendering speed? If so, which graphic card would be optimum for speeding up V-ray and Mental Ray rendering in 3Ds Max without breaking the bank? (I have to render 20-30 still 3d scenes and about 2700 frames for video in most of the Studio assignments) I’m kinda okay with my current desktop’s rendering performance but I would like to know is there anymore room for growth. Thanks much!!

    • The Quadro will have zero effect on CPU rendering speed, and the specific low-end K620 will do badly in GPU-accelerated computation tasks, like Lumion or Vray RT GPU. So if you are using your CPU to render your stills (regular VRay Adv. engine), you won’t do better by changing the GPU.

      • Thanks for the clarification! Oh yes, my pc encounter terrible screen lags when I’m running Lumion, that’s why I favor 3ds Max! If I were to resolve these lagging issues on Lumion and if I understand correctly from your reply, a much better GPU is needed? Anything else to resolve these lagging issues? Do you have any suggestions for graphic card series and Is the GTX800 range good enough?

        • Yes, a GTX card would be the way to go. Lumion favors raw compute power, so the faster the card the better. K600, K620, K2000 and even K4000 cards have too few cores to keep up, so despite the price they will make little difference. There is no “800” series for desktop: you will need either a GTX750Ti or to go straight to a 900 series GPU – if we are talking nVidia.

  4. Hello Dimitris and huge thanks for your whole blog. I may take you back some years with my question: I have an i7-860 @ 2.8 processor, 16gb ram, MSI P55-GD55 motherboard and for graphic card: MSI 460gtx 768mb OC (very “honest” card). Which card should I upgrade to? I’m using 3dsmax with Vray for archviz for the time (autocad, photoshop, AE also) but I’m into Zbrush, Unreal engine and keyshot soon. Should I just go to a gtx680? Should I just SLI with my current card? Should I go for 2x 560? I really don’t have a clue, I’ve read so many articles and I’m so confused. I don’t plan going to a new cpu at the moment, I think I’m ok, so I ended up looking for a better card.

    • A GTX750Ti would cut it as a cheap upgrade: it will be faster than the 460 while drawing much less power. If going for something faster, I would not settle for anything older than a 9xx series, unless you find an amazing price on the 680. SLI won’t help you much for what you are doing.

      • Thank you very much for your response! So you are saying that an “ok” card with a good price is the gtx750 (isn’t it a problem that it is a gaming card??). And if I want something much better, to go to a 9xx series? Would a 9xx series be compatible with my specs? I never thought about a 9xx series cause I had the feeling that it might have compatibility problems cause my CPU and motherboard are kinda old… (thanx again 🙂 )

  5. Hello Dimitris ,,, thank you very much for this useful information ,,, i have a question i’ll buy a workstation in next few days it’s basically for 3ds max and graphic programs ,,, i can go with one of two processors 4790k , or 5820k ,,, which one you prefer that will be faster in rendering and overall performance, and is the deference in performance worth the cost ,,, another thing for GPU computing is gtx970 good for GPU render based engines i mean is it better than 7xx generation ,,, specially in matter of fast and quality ,,, sorry for repeating this post 🙂 i wrote another one in pro overkill section

    • If you have the cash and will to overclock, the 5820K can be faster all around. Out of the box and with stock clocks, the 4790K will be faster for modeling and general compute stuff that does not scale well past a few threads, and the 5820K will be faster only in the task of rendering itself. i.e. after you press the “render” button in 3DS or any other program the 5820K will be faster, but before that the 4790K will be faster. You cannot have it all, unless you move in and “tweak” things. 🙂

      The 970 is a very honest card: it is faster than or on par in compute performance with the GTX780, and much faster than the 760 & 770 cards. As GPU-based engines get better optimized for Maxwell (it took quite some time to get there with Kepler = 6xx & 7xx; I think they are moving faster now), I would expect performance with 9xx cards to become better and better. At this point the GTX Titan Black & 780Ti cards are the fastest available options, with on average 10-15% faster rendering speeds than a 980, perhaps around 20-25% faster than a 970.

      The 780Ti is limited to 3GB of RAM, and you can get 2x 970s with 4GB of RAM each that will consume the same power and give notably better performance in GPU rendering for a similar price. I would go for a 9xx at this point. 8GB versions are promised for early next year, so if buffer size would tempt you towards a Titan, I would recommend waiting. Again, Titan = 780Ti in compute performance (as the FP64 advantage is not really utilized).

      • Hey Dimitri! I’m also planning to go for an i7-5820k and GTX 970 for a rendering machine. Do you think the Cooler Master V550 80+ Gold would power this setup properly? I’m thinking about overclocking, possibly.

        • Yes, if you don’t go for extreme overclocking on the CPU it should do fine. I would go for the 650 if I wanted extra headroom, but most of the time you won’t be pushing both CPU & GPU(s) hard enough to come close to stressing the PSU for a prolonged time. I recently tested my own V550 using a 4770K @ 4.5GHz (it pulls around 220W @ full CPU load with a 970 idling), but added 2x 970 STRIX in SLI, and the whole system was perfectly stable for the 3-4 days I was running my GPU tests. I did not try to overclock the GPUs tho.

  6. Hi, I just built a new rig for 3D modeling and rendering, I use SolidWorks, Keyshot, 3DS Max Design 2015 and Vray 3.0 My specs are: Mobo – Asus Sabertooth Z97 Mark 1 GPU – MSI GTX 980 CPU – i7 4790K RAM – 18 Gb (2x 8Gb) Kingston Hyper X DDR3 1600 PSU – EVGA Nex 650G System – Samsung 840 EVO 250 Gb SSD Storage – Kingston 240 Gb SSD and two old 350 HHD’s I had Case – Gigabyte Sumo Omega My question is, do you think I made a good choice with everything? Should I change one or more components or maybe add something else? I have 3 more days to exchange any component if I am not convinced. Also, do I need some cooling system? I’m new to building PC’s so I didn´t know what to do about that and I opted for the stock GPU cooler and the ones on the Sumo, and since the MSI 980 has it’s own fans I think that is safe, but I really don´t know. Should I overclock? Are there any real benefits for the software I use? Should I learn how to use the fans and cooling profiles or maybe add some more cooling? I hope you have a chance to answer soon so that I have time to exchange something if needed.

  7. Hello Dimitris!! Amazing site…. I have learned a lot… I am an architect working on medium scale projects (not going to design a whole city… but i would like to be able to design a large commercial building). The programs i am using: AutoCad, ArchiCad, Revit, 3ds Max with Vray and Photoshop for post-production… I am working on a new rig which is almost identical to the one described above. The only difference is that i haven’t picked 4790K but the simple 4790 (do i need to OC?), same graphics card (gigabyte gaming version) and 16gb memory with 2400Mhz (You ll probably tell me that i don’t need 2400Mhz…). So the questions…: 1. Does ArchiCad (or some other program of the above) use OpenGL / OpenCL (i don’t really know the difference…) and how well does this card handle it? AMD is better at this? 2. Is 3ds Max 2015 and Vray 3.0 compatible with the latest gtx 970 drivers? (I love VrayRT) 3. Will the “gtx 970-3.5Gb + 0.5Gb memory” fiasco affect the card’s ability to handle large scenes with memory needs over 3.5Gb??? Thank you for your time…

    • For modelling / drawing in Revit, ArchiCAD, AutoCAD, 3DS, Maya and the like, the faster the CPU, the better – cores don’t really count outside of rendering. The 4790 is a good CPU, but the 4790K is much better, regardless of whether you wish to overclock or not. Out of the box, the K version (unlike the previous 4770K & 3770K, which were identical or +100MHz vs. the non-K) is much faster in both base and turbo clocks. Does that mean the 4790 is a throw-away CPU? No, but the K is worth the ~10% premium (as it is 10% faster all-around, if not more).

      1) ArchiCAD is OpenGL. The 970 will handle it fine, but if you wanted to go with an R9 290, you would be fine too. Yes, AMD used to be better in OpenGL (mostly due to drivers – yes, AMD can have better drivers), but nVidia did improve 9xx performance and brought them up to speed – at least as much as they would allow a gaming card to perform.

      2) Yes – viewport-wise there was no issue, and Chaosgroup released a patch for Vray 3.0 that utilizes Maxwell cards properly.

      3) Afaik there was no real “fiasco”. The 970 is as fast as it was day one. I have one, and it replaced my GTX Titan SEAMLESSLY. Plus it still has 4GB of RAM, and for GPGPU purposes & VRay RT I doubt you will see slow-downs should you go past 3.5GB usage. Honestly, I don’t see why people get so upset, and I don’t want to sound like a green team fanboy, but what is the difference between a fast $320 card with 3.5+0.5GB and a $320 card with a “full” 4GB, when both consume half the power of a Titan? Do you care about e-peen in the forums, or actual performance? Cause the latter was there day 1, and is still there now. Oh, and actual performance in viewports and games – cause in OpenCL compute, the 970 mops the floor with the Titan (which – let’s face it – was bad at OpenCL; even a 7950 could beat it handily, but the VRay RT coding sucked with AMD cards for 2-3 years, only barely catching up now).

      • Thank you for your reply… I am grateful!! As for the GTX 970 “fiasco”, I don’t really care what specifications Nvidia released in the first place, or whether they are wrong or right. I still think it is a great card – probably the best value for money right now. I just asked because you are the guru and you would know better. Now I am sure about my rig!!! Thank you again!!

  8. Hi Dimitris, just got to see your post regarding the CPU/GPU build. Thanks for your posts… I am planning to buy a graphics card; I work in 3ds Max Design. I have narrowed it down to the GTX 970 / Quadro K2200 – which one will be good for it? Or, as you said, will the GTX 750 Ti outperform these cards? Need help with it…

    • The K2200 is the GTX 750 Ti in disguise, with double the video memory. Thus the GTX 750 Ti cannot really outperform it; those two should be on par for pretty much anything that doesn’t utilize the Quadro driver optimizations (3DS doesn’t, for example) and doesn’t need more than 2GB of RAM (in general you are fine with 2GB for 3DS, even @ 1440~1600p in my experience). The GTX 970 should outperform both of the above – unless of course it is some OpenGL app with properly developed drivers for the K2200. The 750 Ti is a great value GPU, but I don’t think I’ve written that it could outperform a 970.

      • Thanks for your reply Dimitris. I have a Dell Precision T3500 and will be replacing its FX 3800 card. Will a GTX 750 Ti be a good pair for it?

        • A 970 would be a waste, regardless of which CPU out of those supported by the T3500 line you throw at it. Don’t get me wrong, the X58 platform is not useless, but a stock-clocked 5+ year old Xeon cannot really feed the latest generation of enthusiast GPUs. The T3500 is starting to show its age, so anything faster than a GTX 750 Ti won’t really give you much more – the CPU in your system will be the bottleneck. A 750 Ti will do the trick speeding up some modern software packages, but at this point you should start thinking about replacing your system altogether if the resulting performance boost is not satisfying.

          • Thanks for the valuable information you are sharing. I received the 750 Ti card today. I also have an i7-4790K + Asus Z97-A with me. With which one will I get the best outcome: the Precision T3500 or the i7?

            • I believe the 4790K will be faster than any CPU available for a 1P T3500 in pretty much anything. I believe the fastest CPU for the T3500 would be the W3650. It is almost a given that the 4790K would dominate in single-threaded work, and since there are no o/c options on a Dell motherboard anyway, even a stock 4790K would have zero issues matching and surpassing a 5+ year old hex in multithreaded work (rendering).

              • Thank you so much. I will pair the 750 Ti with the i7. I already have a K2000 with the i7; I will check and share the performance.

  9. Paired my GTX 750 Ti with the i7… and in my other system I have an i7 with a GTS 250… both rendering times are equal… have to check with my workflow… what can I do on the hardware side to increase rendering speed?

    • Where did you get the idea that the GPU would make a difference in rendering times (when not using a GPU renderer)? The only way to get faster rendering is a CPU with more cores, higher clocks, or overclocking your existing CPU. Technically speaking, you need the product of your cores and clocks to be higher than what you have now: CPU-A aggregate = CPU-A cores × CPU-A clock. If CPU-B’s aggregate > CPU-A’s aggregate, CPU-B is most likely faster than CPU-A in rendering.
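[Editor’s note] The cores × clock rule of thumb above can be sketched in a few lines of Python. This is a rough illustration only – the example CPU specs are made up, and the assumption of equal per-clock performance (IPC) between the two chips is mine:

```python
# Rough rule-of-thumb comparison for CPU rendering throughput,
# as described above: aggregate = cores * clock (GHz).
# Numbers below are illustrative, not benchmarks, and the model
# ignores IPC differences between CPU generations.

def aggregate(cores, clock_ghz):
    """Crude rendering-throughput proxy: cores times clock."""
    return cores * clock_ghz

cpu_a = aggregate(4, 4.0)   # e.g. a 4790K-class quad at 4.0GHz -> 16.0
cpu_b = aggregate(6, 3.2)   # e.g. an older hex-core at 3.2GHz  -> 19.2

# If CPU-B's aggregate exceeds CPU-A's, CPU-B will most likely
# render faster, despite losing in single-threaded work.
print(cpu_a, cpu_b, cpu_b > cpu_a)
```

Under this crude model the old hex still wins at rendering, which matches the advice above: for pure rendering speed, only the cores × clock product matters.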

  10. Hello, I built a PC for graphic design, 3D design and rendering, movie making and some gaming. I have an i7-4790K with 16GB DDR3-2133 RAM, a 120GB SSD, and a GTX 660. I plan on buying a 250GB SSD (I realized after installing the 120GB one that it is too small for my main programs), and I will buy a 1TB WD Black HDD for storing work and other things. Is the GTX 660 an OK card for now? I could buy the GTX 960 now if I wanted to; the GTX 970 is a little out of my reach right now. Software-wise I have Photoshop CC and Illustrator CC, and plan on getting Lightwave and Octane, or some other similarly priced 3D package. I will use Blender also, and Premiere and After Effects for the movie and video side of things. Which card would be best to use for now, the GTX 660 or the GTX 960? Or should I really wait and get the GTX 970 when I can?

    • You should be fine. I would not worry about upgrading to the 960, and I don’t think you will see much of a difference in viewports to begin with. The SSD is a great investment on the other hand… 250 or even 500GB – you won’t look back!

  11. Firstly Dimitris, I’d like to add my thanks for the work that you are doing. I’m trying to build a home PC for an animation student; she has identified Maya as the software that she needs to run. I’m not well versed in either the software or animation, and I’m struggling to understand the metrics that determine the best graphics card/processor for her work. Is it correct that the high clock of the 4-core 4790K is more beneficial for Maya than a lower-clocked 6-core like the 5820K, because of single threading? I believe that Maya 2015 can now take advantage of DirectX 11, and this removes some of the requirement for a “workstation” graphics card. Do you think that the GTX 970 would work well for Maya animation? I thank you again.

    • Charlie,
      both the CPU and GPU work together for viewport performance – and games, and everything else. In a nutshell, the CPU “draws a schematic” for each frame, and the GPU “renders” it, ideally with assets that are already in the VRAM. If a frame is easy for the CPU to prepare, the “bottleneck” becomes the GPU. If the scene is complicated, the CPU has to keep working hard preparing each frame, and the GPU has to produce that complicated final output – but since GPUs have grown exponentially bigger and faster over the last few years, we’ve crossed the point of being able to saturate ultra-fast GPUs, unless we are talking super large textures and super large resolutions.

      There is simply not enough grunt in our current CPUs to work with old-school viewport engines and feed GPUs with enough data.
      Thus there is a plateau in performance after you reach a certain level of GPU. Get something as fast as a Radeon 7970 / R9 280X, or a GTX 760 or 680 etc., and you are pretty much dumping the responsibility on the CPU, which is trying to play catch-up. So yes, to answer your question, the 970 should be plenty.

      So, talking stock speeds, the best CPU for anything single-threaded will be the 4790K, and the Broadwell siblings that will be replacing it, due to superior single-threaded performance. This advantage is merely due to clock speeds in the 4790K’s case, as the architecture is practically identical to the s2011-3 CPUs of the 5xxx generation. The 5820K will be able to outperform those quads a bit in rendering, but that’s it – left at stock clocks it will lag behind in viewports and other single-threaded tasks. The 5960X will be even slower due to its stock clocks, unless you are actively rendering on all cores.

      So, you will have to judge by your workflow and see how much time you spend on single-threaded stuff (particle sims, modeling, testing etc.) vs. rendering, before you can clearly say whether the scale leans towards a fast-clocked quad core, or a slower-clocked hex or octa core.

      Ofc the “enthusiast’s” approach is to get a hex or octa and overclock it to 4.5GHz or so, leaving little to be desired vs. the quad core (other than power efficiency at that point).
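[Editor’s note] The “judge by your workflow” advice above can be made concrete with a back-of-the-envelope calculation. All the hours and speed factors below are hypothetical assumptions for illustration, not measurements of any real CPU:

```python
# Hypothetical workflow comparison, per the advice above: weigh
# time spent in single-threaded work against time spent rendering.
# Speed factors are relative to an arbitrary baseline of 1.0.

def day_hours(st_hours, render_hours, st_speed, mt_speed):
    """Estimated wall-clock hours for a day's workload on a given CPU."""
    return st_hours / st_speed + render_hours / mt_speed

# Assume a baseline day of 6h single-threaded work + 2h rendering.
quad = day_hours(6, 2, st_speed=1.10, mt_speed=1.00)  # fast quad: ~10% faster single-thread
hexa = day_hours(6, 2, st_speed=1.00, mt_speed=1.40)  # stock hex: ~40% faster rendering

print(f"quad: {quad:.2f}h  hex: {hexa:.2f}h")
```

With this particular (made-up) split the two land within minutes of each other; shift the day towards rendering and the hex pulls ahead, shift it towards modeling and the quad wins – which is exactly the judgment call described above.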

  12. Hi Dimitris, I wish to get a workstation built for architectural drafting and modelling. I will be using Revit, AutoCAD, SketchUp, 3ds Max and Lumion for drafting and rendering. I wish to have an i7-4790K processor and a GTX 970 4GB graphics card. Please suggest a compatible motherboard and power supply for optimum performance. Thanks in advance.

    • Hello, not much has changed since this post went public. An Asus Z97-A would be a great motherboard for the task, but so would almost any Z97-based board from Gigabyte, ASRock or MSI. A 550W PSU would be more than enough. I tend to like the Cooler Master V series for Gold 80+ rated units, but pretty much any 550W 80+ Bronze PSU won’t have any issue supplying your rig at maximum performance.

        • How heavy a file is, is more a matter of real complexity than file size. A stadium can be designed to be made out of modular groups / components and be pretty light, as it is repetitive. That said, give it 32GB of RAM and it will fit “32GB” worth of rendering projects 🙂

          • Is the Nvidia Quadro K620 graphics card compatible with the Intel i7-4790K processor? Which motherboard would be most effective for this configuration, because the GTX 970 is out of my budget. It is purely for architectural drafting, 3D modelling and rendering purposes only. I have two 8GB Corsair DDR4 RAM sticks, a 700W Circle power supply, and a Cooler Master cabinet and cooling system.

  13. Hey Dimitris, I would like to thank you for your brilliant efforts. I am using a GTX 960 card for architectural rendering and drafting, with AutoCAD, SketchUp, Revit and 3ds Max, and an Intel i7-4790K processor. But I am not getting any speed in drafting and rendering. Shall I change the graphics card, or which motherboard shall I use?

    • Burhan, I don’t understand the question. The GTX shouldn’t affect your rendering speed, unless you are using a specific GPU-accelerated renderer that supports your particular card (tricky with the new 9xx Maxwells; older versions of some GPGPU renderers don’t). The motherboard should also have no effect on performance, unless ofc it is broken, in which case it could be causing problems. What kind of issues are you facing?

      • The CPU gets pushed to 100%, still the render speed is very slow and sometimes there is no output either. I use Revit 2015 for modelling and rendering.

  14. Hi Dimitris, I landed here while comparing different GPUs for Lumion and 3ds Max. I’ve read the whole page… would really like to thank you for helping us all. I’m still confused on the GPU part. I’ve already got an i7-4790K, Asus Z97 Pro Gamer, 8GB Corsair Vengeance, Corsair RM650. I’ve found that 3ds Max viewport performance on the Intel HD 4600 is very sluggish… it’s unusable. The same goes for Lumion – it renders strange artifacts. My Lumion scenes typically reach 12-15 million 3D points with 900-1000 trees/plants. My budget is limited; I can’t afford very high-end cards like the Titan or GTX 980. I want a card which at minimal cost would allow me to complete my work with decent speed. At the same time I don’t want to buy a card which may get outdated very soon. Should I go for the 750 Ti or the GTX 960? Or should I wait and save more money for a GTX 980? Also, should I look for 4GB versions instead of 2GB? Does it make any difference in Lumion and 3ds Max viewport performance?

    • Hello, for starters I would not consider a 980 even if I had the money for it. If you cannot go for a 980Ti or Titan, the 970 will offer 90% or more of the performance of the 980, costing 60% of the money. A very easy decision in my book. If I were to go “cheaper” than the 970, I think I would opt for the 750 Ti, as the 960 doesn’t warrant the price difference based on the added performance, in my opinion. Really complicated scenes with lots of large textures might benefit from 4GB cards, but this is very situational. What counts more for viewport performance is the memory bandwidth (memory bus width × memory clock): the 256-bit memory bus on a 970/980, or the 384-bit bus on the 980Ti/Titan cards, allows for higher bandwidth than the 128-bit bus that both the 750 Ti & 960 use.
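[Editor’s note] The bandwidth formula above works out as bus width (in bytes, i.e. bits / 8) times the effective memory clock. A quick sketch, using the public reference specs for these cards (factory-overclocked models will differ slightly):

```python
# Peak memory bandwidth as described above: bus width (bits) times
# the effective memory clock, divided by 8 to get bytes per second.
# Clocks are effective GDDR5 data rates from the reference specs.

def bandwidth_gbps(bus_bits, mem_clock_ghz):
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * mem_clock_ghz

cards = {
    "GTX 750 Ti": bandwidth_gbps(128, 5.4),   # ~86.4 GB/s
    "GTX 960":    bandwidth_gbps(128, 7.0),   # 112.0 GB/s
    "GTX 970":    bandwidth_gbps(256, 7.0),   # 224.0 GB/s
    "GTX 980 Ti": bandwidth_gbps(384, 7.0),   # 336.0 GB/s
}
for name, bw in cards.items():
    print(f"{name}: {bw:.1f} GB/s")
```

The gap is stark: the 970 has double the bandwidth of the 960 despite similar pricing at the time, which is exactly why the reply above favors it for heavy viewports.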

      • I’ve come to the conclusion that the 960, being a newer generation card, will perform faster than the 750 Ti in Lumion. But for 3ds Max, I expect little or no viewport performance difference between the two. I think I should go for the 960 2GB, to be able to benefit from more CUDA cores and DX12 in upcoming versions of Max and Lumion? Thanks for your help Dimitris

  15. Hi Dimitris – thank you so much for these amazing build ideas! I was just resigning myself to buying a BOXX workstation ($6k+) or a Dimension T7650 ($7k+) similar to what I have at work, after my computers were stolen during a home robbery this weekend. LUCKILY, I saw your posts on CGArchitect regarding a home renderbox, and after an evening of reading through the threads, I have come up with this setup. I’m an architecture professor and freelance in architecture, so I do heavy Rhino/Grasshopper, 3ds Max with MR and Maxwell, Illustrator, Photoshop, Lightroom, InDesign, occasionally AE & Premiere (for video) and some sound design with Reason. I would love your feedback on getting this setup: $1500 CG Workstation – Q3 2014 with mods: Processor Intel i7-4790K Quad Core 4.0GHz overclocked to 4.5; Motherboard Asus Z97-A; Cooling Stock; GPU Asus STRIX GTX970 4GB; Memory Kingston HyperX Fury 1866 4×8 – 32GB; System Drive Samsung 850EVO Pro-Series 512GB; Storage Drive WD Black 1TB 7200rpm SATA; Case Fractal Design Define R5; Power Supply Cooler Master V550 80+ Gold; Operating System Windows 7 Pro 64. Then for the renderfarm, I’ll start with 2 of your $600 render nodes. If I understand your reasoning (Max and Rhino being primarily single-threaded for modeling), does this look comparable (at under $3k total) for a freelance workstation + render (while I continue working) setup? My type of work can be found here: http://doronserban.com/ Your help would be greatly appreciated as I need to

    • Doron, as you’ve understood yourself, you can get the performance of the best BOXX 4C workstation for less than 1/3 the price building it yourself. Follow your heart, take care picking proper parts, and you almost cannot go wrong.

  16. Hello, Dimitris! First of all, thank you for all your support and help for common users like me. I’m building a workstation based on your original post, but I have two questions: 1) Will I feel major disadvantages using this motherboard: http://www.amazon.com/dp/B00K23BW70/?tag=pcpapi-20 instead of your suggestion (Asus Z97-A)? 2) Which GPU is better, the ASUS STRIX GeForce GTX 970 or the XFX Radeon R9 390 8GB Double Dissipation? Mainly, as you wrote in another post, the GPU is just for viewports/CAD, so my question is which one will run them faster and smoother? Thank you in advance

    • 1) The MSI has the same chipset, so it is practically the same motherboard as far as performance goes. I don’t think there will be real disadvantages. 2) I have no experience with the R9 390, but I highly doubt any CAD application will be able to make use of 8GB of VRAM for viewport acceleration now, or in the immediate future – even if we were talking 4K resolutions. Thus I would opt for a 4GB card, either nVidia or AMD. I think most viewport engines for CAD (3D/2D alike) are CPU limited, and neither of the above cards will be the bottleneck.

  17. Hello. 1. Supermicro X7DLV-E 2. 2x Xeon X5450 @ 3.16GHz + 2x Supermicro SNK-P0034AP4 coolers 3. 24GB Hynix DDR2-667 FB-DIMM ECC + cooling from Kingston 4. SSD on PCI-E x4 gen 1.1 from Plextor, model M6e 256GB 5. 1TB WD storage drive on SATA II 6. Zalman ZM700-LX (700W PSU) – 87% efficiency, non-modular 7. Steel Chieftec Dragon tower case + 3 fans: 2x 80mm front and 1x 140mm back 8. Radeon R9 280X from ASUS, mounted on x8 PCI-E gen 1.1. Custom workstation for architecture. No.1 software used: Rhinoceros 3D + V-Ray. Opinions?

    • Will it work? Sure. Are the main components other than the GPU and SSD holding it down? Yes. This dual-CPU box will be notably slower than a current single quad-core i7 based machine, but if you already have it, it’s not a bad refresh.

  18. Hi there Dimitris, thanks for all the information you are sharing here. I want to ask you about this… for Maya/ZBrush/Substance Designer and Painter/After Effects, are you still recommending going with either the GTX 970 or the 750 Ti… instead of getting the 960? My system specs are: i7 920, 3 x 4GB G.Skill Ripjaws 1600MHz, P6X58D-E. And about Quadros vs. GeForces for the Maya viewport… from what I understand from you so far, GeForces are OK with Maya only if using Viewport 2.0, right? Thanks!!

    • Yes, tbh I don’t think the 960 brings enough to the table for the price increase, although you do get better deals on 960s these days than you used to. If you are in the US there are also seasonal discounts this time of year, and all the way towards Xmas, so if the difference were $20-30, I would not mind one over a 750 Ti. But with the latter at $100, it’s a bit tough.

  19. Hi Dimitris, my rig is an Intel i7 4790 with 2x8GB Kingston HyperX 1866MHz, a 1TB WD Blue hard disk and an Asus GTX 960 Strix 2GB OC Edition. Please tell me whether this is a good config for 3ds Max 2013 with V-Ray rendering, and AutoCAD.

    • Of course, the performance difference will be negligible between different chipsets and/or motherboard manufacturers in real life.

  20. Hi Dimitris, would having 2 graphics cards combined be beneficial, and is it possible? Say a Quadro K2200 for Maya 2016 (modelling, texturing and viewport navigation), and a GTX 980 Ti mainly for GPU rendering (Redshift)?

    • Definitely, although the K2200 is pretty pricey and not an amazing value based on its performance with Maya 2016. The last few generations of Maya with the Viewport 2.0 engine do pretty well with fast GTX cards & D3D, so a low/mid range Quadro will give you diminishing returns.

  21. Hi Dimitris, I have been researching building a new computer for 3D modelling in Rhino and SketchUp, and rendering with V-Ray. I also make use of Photoshop, Illustrator and InDesign, and I have recently taken on ZBrush with a Cintiq, and Cinema 4D. My old Dell Inspiron 7720 is crying out loud, and I was looking to get a Dell tower (5810) because of the processor/RAM/HDD-SSD/video card combinations and the option to access the workstation remotely, but it’s kind of pricey and I am still trying to understand all the forums around bus/clock speed and video card performance. Would this build be good for me? I have seen reviews that argue for the 5810 Dell tower while others don’t recommend it. Any help would be much appreciated! Will definitely click on your links! Thank you for all the info provided already in your posts.

    • My current workstation @ my office is a Dell 5810 (Xeon E5-1650 v3 3.5GHz / 32GB DDR4-2133 ECC / K4200 / M550 250GB-class SSD). It is pretty good overall and the cost is competitive vs. other OEM workstation PCs with similar specs, but value for the money? Nope. You can build yourself a massively faster PC for that kind of money, or the same machine for much less.
