$1000 CG Workstation – Q1 2014

Intel Xeon E3 Edition – Q1 2014

This build is an attempt to produce a well-rounded performer that will serve most CG artists, video and image editors looking for a machine that will set them back around $1,000.

Last year, we tried to build the equivalent system around an unlocked i5 processor, keeping open the potential for overclocking in order to tap into a higher level of performance. This time around, we are exploring the idea of letting components of higher pedigree lead the way, providing excellent performance out of the box. By shaving off the cost of components that enable overclocking – like an enthusiast-oriented motherboard with features that most users won’t utilize, and aftermarket cooling – we are able to “squeeze” in a faster CPU, while allowing for the addition of an SSD to significantly boost our OS’s and main applications’ responsiveness.

Adding the shiny new Maxwell-based 750Ti GPU alongside the peppy 4C/8T CPU, we expect a surprisingly cool and quiet, yet powerful desktop machine that won’t cost an arm and a leg.


  • CPU: Intel Xeon E3-1230V3 3.3GHz. It is rare to think of a Xeon CPU when it comes to a value 1P workstation, but here is the E3-1230 V3 to prove us wrong… this is the cheapest Haswell quad with HT support, resulting in 8 threads. It is 200MHz slower than an i7-4770K or 4771, but I believe the performance difference will be very hard to spot in real life, unlike the cost savings, which are not insignificant. The extra cost of around $25 over a “plain” i5 Haswell will buy you a decent percentage of performance gains when rendering.
  • CPU Cooler: No additional cooler is required, as the Xeon is sold in a retail package with a factory cooler that works fine at stock speeds.
  • Motherboard: Gigabyte GA-H87M-D3H.
    Since we won’t be pursuing overclocking, there is little merit in paying for a Z87 motherboard. The H87 provides all the important features and performance at a more attractive price point. This Gigabyte board is a solid performer, from the manufacturer that appears to have the best reliability record on both 7 and 8 series Intel chipsets. Gigabit LAN, 6x SATA 3 ports with RAID support, USB 3.0 and a decent onboard audio solution are all included.
  • RAM: Crucial Ballistix Sport 16GB Kit (8GBx2) DDR3 1600.
    You can get 1x or 2x of these kits for 16GB and 32GB respectively. Planning ahead is wise: most motherboards for Intel i5/i7 and AMD FX CPUs have up to 4 slots for RAM, and the CPUs can support up to 32GB. Even if you don’t see the need for 32GB of RAM, opt for 8GB sticks unless you are after a special speed etc. This will allow room for growth without parting out your initial investment (often the case with 4GB sticks). Prefer low-profile heat-spreaders: tall ones offer little to no gain in reliability and stability, but might cause installation issues by obstructing large CPU heatsinks. We won’t be using a large heatsink in this build, but I believe it is proper to choose versatile components that can be used interchangeably in many builds. The H87 doesn’t support faster than DDR3-1600 RAM, but worry not: this won’t be a real bottleneck.
  • Graphics: This is a tough one. Depending on the direction you move, this is the second most important component after the CPU, and in many ways the most important. I will list a couple of options, trying to keep everyone happy:
  1. EVGA GeForce GTX 750Ti 2GB: This is the latest chip from nVidia. It runs cool and sips power (it doesn’t even require auxiliary power – the 75W from the PCIe slot is enough), yet it scores impressively all-around, leaving enthusiasts and reviewers wondering what a 200-250W part based on the same Maxwell architecture will bring to the table. Compute performance is also pretty decent, and both CUDA and OpenCL accelerated applications will see a benefit. Should you want to kick back a bit, this card should be able to play most AAA titles pretty decently @ 1080p.
  2. Sapphire Radeon R9 270 2GB: Radeon drivers appear to naturally work better with OpenGL viewports. The R9 270 is a very potent card. Although more expensive than the 750Ti, it can be squeezed into the budget and will work great in most OpenCL accelerated applications. In 3D CAD under OpenGL, like Rhino 3D or even SketchUp, the R9 270 will probably be better than even more expensive GTX cards. And it is not a shabby gamer either.
  • SSD: Samsung 840 EVO 250GB. Who would imagine that we could get 250GB of 500+ MB/sec capable storage for less than $0.6 per GB? This is an excellent value drive. Fast and reliable, it will minimize loading times and turbo-boost your swap and scratch files. I would not worry about it being TLC NAND based: independent testing has proven that these drives can sustain so many TBs of writes that it would take more than a decade for even the most active users to see any degradation. And remember that when this happens, the controller will just suspend writes on the defective blocks, while reads (thus access to your files) remain technically possible “forever”. 250GB is enough to fit the OS, your design and 3D suites, and even keep some working files in. To ensure maximum performance, remember that you should not fill your SSD past 75-80% of its maximum capacity.

  • HDD: Western Digital Caviar Blue 1 TB 7200rpm – WD10EZEX.
    This is an excellent and very popular 7200rpm drive. It should be your storage drive for libraries and archived files that don’t need to be on the SSD. It is still a decent-performing spinning disk, enough for HD video editing and whatnot.
  • Case: Cooler Master N200 mATX. This is a great value case, offering many high-end amenities like front panel USB 3.0, cable management slots, ample room for large GPUs and good airflow / fan options. You can even add a dual 120mm closed-loop water-cooling system like the Corsair H100i or Cooler Master Seidon 240M, though for our usage that would probably not be necessary. I really like the minimal aesthetics of the case, which would fit in a professional environment, and for $50 it is hard to beat.
  • PSU: Rosewill Capstone 450W 80+ Gold. This is a quality unit with consistently good reviews. 450W should be more than enough, as this system will probably use less than 100W in normal operation, and would rarely break 200W of wall-plug load. The 80+ Gold rating ensures that the unit runs cool and energy is not wasted.
  • Optical Drive: Optical drives (ODs) have seen little use in the last few years; broadband connections, cloud storage and affordable flash drives are replacing them. For most users, ODs are limited to installing new software and the OS. A Lite-On IHAS124-04 or Asus DRW-24B3ST would do the trick.
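As a sanity check on the 450W PSU choice above, here is a rough worst-case power tally for this build. The figures are nominal TDPs and generous allowances I am assuming for illustration, not measured draws:

```python
# Rough worst-case power budget for this build.
# TDPs/allowances are assumptions, not measurements.
watts = {
    "Xeon E3-1230V3": 80,      # nominal CPU TDP
    "GTX 750Ti": 60,           # slot-powered card, <= 75W by spec
    "board, RAM, drives": 40,  # generous allowance
    "fans, USB, misc": 20,     # generous allowance
}
total = sum(watts.values())
print(f"worst-case draw: ~{total}W")           # ~200W
print(f"headroom on a 450W PSU: {450 - total}W")
```

Even at full tilt the system sits around half the PSU’s rating, which keeps the unit quiet and near its peak efficiency point.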

The OS is hard to squeeze into a $1,000 budget with the above components. Still, the natural suggestion:

  • Operating System: Windows 7 Professional SP1 64bit (OEM). Windows guarantees compatibility with most rendering packages, including 3DS Max. If your distributed rendering will be focused on Maya, you could actually get away with Linux, as client packages for VRay are available.
    Windows 7 Home editions are limited to 16GB of RAM. Professional and Ultimate versions allow up to 128GB, and offer some additional networking/remote access features that are desirable.

Disclaimer: For the total price, we assumed that you would go for the most basic configuration listed. No guarantees of pricing and availability in your region, just suggestions =)

Affiliate Links

If you’ve found any of the articles or DIY information useful, please support pcfoo.com and myself by using the supplied “affiliate” links in the site. When you use these links I get a small commission, regardless of what exactly you’ve bought, and of course I have no access or involvement in the process other than “recommending” a product.

In fact, a commission is made from any purchase store-wide made after a session was originated by one of my links, so if you want to support pcfoo, please visit Amazon.com using these links regardless of your intention to buy a new PC or not!

If you are outside the USA but still use Amazon, go to Amazon.com, scroll down to the bottom of the page and switch to the local Amazon site you would normally shop from in your country. This is far more elegant than filling the pages with ads or having automated routines replace text with commercial links.

If you are not using Amazon at all and you wish to support us, please do so through Paypal by clicking the button below.

Thanks for your support.

22 thoughts on “$1000 CG Workstation – Q1 2014”

  1. Hi man. Since you recommend the GTX 750Ti, will it work with 3ds Max 2014? For a simple scene, such as one building on a site.

    • Yes, the GTX 750Ti is a very respectable GPU – not just for the money.
      Remember that many people work on ArchViz with low/mid-range laptop GPUs, like the rMBP or equivalent sub-$1000 Windows machines with GT6xx/GT7xx GPUs that are vastly slower than that.
      Current versions of 3DS, with the adaptive degradation viewport option, work pretty well with most GTX cards.

  2. Hello Dimitris. I am an architect from Greece, as you can see, and I follow your comments in several forums. I want to buy two new desktops for my office (for me and my partner) but I am a little confused because I don’t know if they are OK. We are using Sketchup 8 + Vray for modeling and rendering. I am thinking of using Lumion too for rendering (I’ve seen tutorials and think it is very fast and easy). I don’t know if the new desktops will be fast enough for the programs we are using. Here are the specs of the new desktops:
    Intel Core i7-4770S 3.10GHz LGA1150 (box)
    Gigabyte GA-Z87M-D3H (retail)
    GeIL GET332GB1600C10QC 32GB (4x8GB) DDR3 PC3-12800 1600MHz Evo Two quad channel kit
    Intel 530 Series SSDSC2BW240A401 240GB SSD 2.5’’ SATA3 (brown box)
    Gigabyte GV-NTITAN-6GD-B GeForce GTX Titan 6GB GDDR5 PCI-E (retail)
    Will Sketchup be faster for large models or not? Thank you.

    • Sketchup is very CPU bound – getting a Titan or any GTX better than a 760 won’t help. Perhaps a Radeon R9 280X or equivalent will be faster, as AMD cards & drivers like OpenGL (what Sketchup uses) a tad better, but that’s pretty much it. An expensive workstation card (Quadro / FirePro) or a 780/Titan won’t really do much past a point.
      Overall your machine will be fast, yet I would opt for a 4771 or one of the new 4790 CPUs that offer faster clocks. The 4770S is a low-power model, but shaving 20-30W off a workstation CPU is not that important. You have to keep 2 facts in mind:

      1) Sketchup – even the 2014 version – never uses more than one thread/core, so you should always aim for the faster clock. Only plugin renderers in SU are multithreaded.

      2) Sketchup is a 32bit application. That means that under Windows (regardless of Windows being 32 or 64bit) it won’t be able to address more than 3GB or so of your RAM. Newer versions of Vray for Sketchup 2.0 do have a 64bit spawner that can access lots of RAM, but that’s exclusive to the VRay rendering process. Sketchup itself won’t ever utilize all that RAM.
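      The ~3GB figure in point 2 is just 32-bit addressing arithmetic; a quick sketch, assuming a large-address-aware 32-bit process on 64-bit Windows:

```python
# Why a 32-bit app tops out around 3GB (figures are approximate).
GiB = 2**30
addr_space = 2**32        # total 32-bit address space: 4 GiB
print(addr_space // GiB)  # 4

# On Windows a 32-bit process gets 2 GiB of user space by default,
# or up to 4 GiB with the large-address-aware flag on 64-bit Windows.
# DLLs, graphics mappings and fragmentation eat roughly a quarter of
# that, leaving about 3 GiB usable in practice.
usable = addr_space - GiB  # rough working estimate
print(usable // GiB)       # 3
```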

      • Thanks a lot for your answer. However, we use Vray in Sketchup and I’ve read that it needs a lot of RAM and a good graphics card like the Titan. And what about Lumion? On their site they say the Titan is the best for it. Is that right? We are really confused; we want to spend about 1500-2000€ for each desktop and we don’t want to spend money without results. Do you have any suggestions about a desktop like that? Thanks a lot again.

        • Well…
          as I’ve said:
          1) Sketchup is 32bit, so is Vray for Sketchup, unless you are using the 64bit spawner, but that’s “outside of Sketchup”. Sketchup and vray for Sketchup launched straight from the app won’t be using more than 3GB of RAM.
          2) The GTX Titan is a great card, but overkill/underutilized in Sketchup unless you will be using Vray 2.0 for GPU progressive rendering and nothing but that. 6GB of VRAM won’t be used by Sketchup, and most likely you will have a hard time building a model in Sketchup that needs more than the 2-3GB of VRAM available on far cheaper cards than the Titan.
          Where did you read otherwise?

          Lumion is all GPU accelerated when rendering (not in build mode) and will require a powerful card. That said, I believe you won’t easily meet the limits of a 3GB 780 or a 4GB 760/770, let alone require a Titan. The devs themselves in their forum suggest a 6GB card only for very, very complex scenes, with a 1-2GB card working for most people and 3GB being enough for complex scenes. Many times we are over-enthusiastic about what we are going to build: “of course I am making very complex scenes” – I would bet 75% of the people would say that, and that 100% of the sales representatives will agree =).

          Long story short: I would not go for a Titan in a quad core build that is not aimed for hardcore gaming in multiple screens – especially when you are limited to €1500 give or take – or GPU compute you know for sure will need more than 3/4GB of RAM. I personally own a GTX Titan, and I have never managed to push RAM usage past 3-4GB without trying to do just that (e.g. silly things to see how far I can take it).
          /end of English (translated from Greek): Don’t push it too hard and waste money for nothing. Better to go for something with a 4930K + X79 board (if your budget allows) and a 4GB 760 if you feel “insecure” with 2GB. €2,000 might just barely cover it (I don’t follow prices in Greece), but at least the 6-core will do better in CPU rendering, which is probably what you will be doing most with VRay. The 4770/4771 (not the S) will be a bit faster for modeling & exporting from Sketchup, since each core is roughly 10% faster due to the newer architecture, but the hexacore will pull noticeably ahead in VRay (and only there).

  3. hello Dimitris, I am just curious: when we talk about scenes with millions of polygons, should that make us think of (let’s say) a 780, not lower? And maybe more RAM than 16GB?

    • Depends on the organization of the scene. Using layers, blocks/groups/components/assemblies (and whatever each application uses to “group” geometry) ranges from highly recommended to mandatory; otherwise it is just a matter of time before a badly managed scene brings not just a GTX 750, but a 780, a Titan or a Quadro K600 to its knees.

      Specifically for 3DS, the new adaptive degradation engine allows you to worry less about turning things on and off to speed up viewport performance: it dynamically degrades the quality (textures / lights & shadows) but also the visibility of geometry as a whole, by making blocks of geometry temporarily disappear to allow for smoother orbit/pan movements and then automatically reappear, while maintaining a minimum of around 10-15 fps. Of course, you have to have grouped geometry for this to work. Just layers won’t work.

      It is hard to quantify the performance of cards this way, as the viewport engine cannot be set to a “minimum quality” where you can monitor achievable fps – it just degrades as much as it can (given geometry is organized in groups) to keep fps up.

      The catch is how long it takes before the scene is “reloaded” to 100% on your screen; of course, the slower the card, the more it had to turn off, and the longer it will take for things to reload. Depending on what you are doing, this can be irritating, as you have no control over “what” goes away – “if I wanted isolation mode, mr. 3DS, I would be in isolation mode” is what comes to mind. So there are cases where you have to turn adaptive degradation off.

      I wanted to create some sort of “see for yourself” videos, but I did not find a proper 1080p capture card I was willing to pay for :p

      16GB of RAM is not a real limit for modeling / viewport performance. It might be, depending on how complex your scenes are and how big your renders get, but unless we are talking crazy vegetation proxies and the like, scenes are “workable” with 16GB.

  4. hallo Dimitris. First, I want to say big thanks for your great job with the benchmarks and for the great posts – very useful info… and sorry for such a big post from me…

    I have some questions, and if you can and have the time, please help me solve them. I want to build a PC for 3D rendering, modeling, animation and video editing – all kinds of stuff (gaming we can ignore). I need a professional-grade rig to render medium scenes at high quality (DOF and motion blur).
    I will mainly use 3DS MAX with V-Ray (both CPU and GPU rendering; I will make some simulations as well with PhoenixFD), Adobe After Effects, Premiere, Photoshop and other software like that… I know that I need a very good CPU and GPU, and also a lot of RAM, but I am limited to a budget of around $1000. I already have a case, OS and HDD. I was thinking of buying this:
    240$ – Intel i5-4690K or 330$ – Intel i7-4790K (latest Devil’s Canyon) (I know i7 is around 30% better than i5 in Rendering but.)
    27$ – Cooler Master Hyper T4
    240$ – MSI GeForce GTX 760 OC 2GB 256-bit
    188$ – ASUS Z97-PRO
    165$ – Kingston HyperX Genesis 16GB Kit of 2 (2x8GB) 1600MHz
    97$ – Intel 530 Series 120GB
    80$ – Samsung 840 EVO 120GB
    (but I think the Intel SSD)
    150$ – Corsair HX750 Watt 80 PLUS® Gold
    This was my list before I read this post, and it blew my mind. I really like Xeon, and this E3 looks lovely in the benchmarks I found on the net; also it costs the same as the 4690K but with better results (8 threads and overall benchmark scores):
    11475 Mark – Intel i7-4790K @ 4.0GHz
    9466 Mark – Intel Xeon E3-1230 v3 @ 3.30GHz
    7930 Mark – Intel Core i5-4690K @ 3.50GHz
    The E3-1230 v3 looks like it sits between the i5-4690K and the i7-4790K, but at the same price as the i5 – for me, that is the acceptable performance of an i5-4690K with a little overclock. The E3 does not have Intel graphics (I like that) and has 8 threads (I like that), but a lower clock and no overclocking. So I am wondering: is the E3 really better than the i5 in real-world use (in rendering)? Is the E3-1230 v3 the same as an overclocked i5-4690K? And if you can, describe the other benefits of the E3 over the i5/i7 (can I use normal RAM, plug it into any 1150 mobo, etc.), and please comment on my other parts as well – what do you think? (Already big, big thanks if you read my post 🙂 If you like, you can email me to save space on the page and delete this post. Thanks!)

    • I would prefer to get an E3 over an i5, as what you describe is pretty CPU intensive.
      Overclocking an i5 will yield identical performance to an i7 of the same architecture / clocks, but @ stock speeds both the i7 and the E3 will do better due to HT – at least in multithreaded tasks.
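      For what it’s worth, the Passmark numbers quoted in the comment can be turned into a quick perf-per-dollar sketch. The prices here are approximate street prices drawn from this thread – treat them as assumptions, not verified figures:

```python
# Perf-per-dollar from the Passmark scores quoted above.
# Prices are approximate street prices (assumed, not verified).
cpus = {
    "i7-4790K": (11475, 330),
    "Xeon E3-1230 v3": (9466, 250),
    "i5-4690K": (7930, 240),
}
for name, (marks, usd) in cpus.items():
    print(f"{name}: {marks / usd:.1f} marks per dollar")
```

By this crude metric the E3 edges out both, which matches the recommendation above: stock-speed multithreaded throughput is where HT earns its keep.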

      Will you overclock? If you don’t, you can easily save some money by going with the stock Intel cooler and downgrading your mobo to the Z87-A. The “Pro” has nothing to offer in real-life performance, and it is not that much – if at all – better at overclocking.

      I would get the Samsung EVO SSD…I find it excellent.

      The PSU is overkill @ 750W… even with overclocking, you can easily support a GTX 760 and an i7 using a good 550W PSU.

      • hey, thanks for the answer. I decided on the E3, and luckily Intel just launched the 2014 version of it, the E3-1231 v3, which has +100MHz and the TSX instructions (I do not know what exactly that means, or how it will improve performance)… and I think I will get the E3-1231 v3 – 253$ – or the E3-1241 v3 – 275$… so I do not need Z97, and this Gigabyte also looks good, but its official page says it has PCI Express x16 (which conforms to the PCI Express 3.0 standard) – but that does not mean PCI-E 3.0, right? And my GPU will not work at 100%, as it has PCI-E 3.0 bus support? Or am I wrong?

  5. Hi dimitri, great explanations as always… big fan! So, I made the custom PC plan below; please tell me if I made a mistake on something: Xeon E5-2620v2, MSI GTX 780 3GB, Asus P9X79 Pro, 4x8GB PC3-12800. I am currently working as a 3D generalist, but as far as my workflow goes, I always have trouble when dealing with FX scenes in Maya 2012 with my old Vaio F-series laptop. I know that in production you have to separate a scene into appropriate fields, but in my case I could be texturing, modifying character rigs or even animating while doing FX work, in order to finish the scene as I want. Question: is this plan overkill, quite enough, or missing something? Appreciate the time, thx

    • In general you are on the right track, but beware of the 2620v2…many people look at this “cheaper” 6 core / 12 threaded CPU and think they “trick” the system, but in reality this CPU is clocked very low, with 2.0GHz base and 2.6 turbo clocks.

      A 4930K will be doing 3.4GHz base / 3.9GHz boost on multiple cores, beating the 2620v2 handily in pretty much anything. Remember, the vast majority of operations in general modeling, but also many rendering operations like prepping scene geometry & materials to be handled by Vray / Mental Ray etc., are single threaded. A CPU with 66% of the clocks is doomed to be notably slower.
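      The clock argument can be made concrete with a back-of-envelope ratio – idealized, assuming single-threaded runtime scales inversely with clock speed and ignoring architectural differences:

```python
# Idealized single-thread comparison: runtime ~ 1 / clock.
clocks_ghz = {"E5-2620v2": 2.6, "i7-4930K": 3.9}  # max turbo clocks
ratio = clocks_ghz["E5-2620v2"] / clocks_ghz["i7-4930K"]
print(f"2620v2 clock vs 4930K: {ratio:.0%}")           # ~67%
print(f"single-threaded slowdown: ~{1 / ratio:.2f}x")  # ~1.50x
```

So every single-threaded scene-prep pass takes roughly half again as long on the 2620v2, before the 4930K’s newer architecture is even counted.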

        • Nothing is future proof 🙂 s2011 is 3 years old and replaced soon by s2011-3, different socket, DDR4 only etc…Will it last 3-4 years as a respectable workstation? Yes. But a year from now something better will be around the same money, and the cycle continues.

  6. Hi Dimitris,
    I’ll start by stating my current setup (about 6-7 years old)
    CPU: Intel Core i7-860 4C/8T (2.80GHz)
    Mobo: MSI P55-GD65
    GPU: Powercolor ATI Radeon HD5770 (1GB)
    RAM: 8GB (4x2GB in dual channel) Corsair XMS3 (DDR3-1333)
    PSU: Corsair HX520W
    SSD: Samsung 830

    I built it before I started working in 3D apps and it has served me quite well so far. I’m thinking of an upgrade though. Budget is tight, so I’m looking for a value-for-money solution. I’m modeling/rendering in 3DS Max (2012 for now, but I’m planning on upgrading to the latest version anytime soon) for the most part, along with VRay (I’m also working a bit in Photoshop, and I could at some point start learning Maya or Adobe Premiere or other software, but I guess my new setup would suffice anyway). My scenes are not that complex, but it would sure be nice to chop those rendering times down (or enrich my scenes while preserving an acceptable estimated completion time). I’ve also recently started working on 3D games in Unity, but I don’t think that’s very resource intensive (then again, I could be wrong, I’m fairly new to all this).

    Like I said, budget is tight, so I’m willing to proceed with the upgrade only if it’s worth it. Cutting down a 20-min render to 15-min won’t do the trick (that’s one reason I mentioned my current setup). I would be satisfied with at least 50% cut-off if possible. Am I asking for too much?

    I went through your proposed setup (I don’t know if anything significant has changed since March). Considering the fact that I would keep my PSU (thank you for that, before reading your article I was planning on buying a new one) and my SSD, I could get away with around 640 euro (~800$) at prices in Greek stores (or even EU Amazon) for my CPU, GPU, motherboard and RAM, which sounds awesome. In fact I could also change my case, since my Coolermaster Sileo 500 may have sound-proofing foam, but probably at the cost of temperature; I had to buy a Megahalems V2 CPU cooler to avoid extremely high temperatures.

    There are some issues I’m concerned with though; maybe you could be kind enough to make them clear for me:
    – I’ve been searching for a few days; some others recommend an i7-4790 CPU, which I can find for about 20-30 euro more than the Xeon E3-1230V3. Is it really worth it?
    – Correct me if I’m wrong, but I understood that the GTX 760/770 are Kepler-based and a bit more expensive than the GTX 750Ti, which is Maxwell-based but supposedly better for some reason(??). How can the older cards be better than the newer model? Unless you’re speaking from a value-for-money point of view. And is the Radeon R9 270 really better for the software I’m using?
    – I’m working with a dual-monitor setup. A quite odd setup, to be honest: I use a Dell U2410 as my main monitor and a small 15″ LCD monitor where I usually move my panels to. I’m planning on changing the latter quite soon though, with another 24″ display. Does this change in any way the needs of my setup?
    – The main reason I got into this whole upgrading idea in the first place, is because I was blown away with Vray-RT videos like this: https://www.youtube.com/watch?v=3KTP-v043tM
    The rendering is done almost in real-time, and although it’s not a very complicated scene, if this can be done with a GPU in my price range it would boost my productivity to a whole new level, even with the limitations you mentioned and that I checked in the VRay reference. Could I use both CPU and GPU rendering in a single rendering session? I’m talking about letting the GPU handle anything it can, and pushing the rest of the load (displacements, fur etc.) to the CPU? Or at least rendering my scene with my GPU at first, and then rendering specific regions where there is displacement with Vray-Adv. Am I missing something?
    – I mentioned my current setup in the beginning. I was wondering if I could use it as a node and make myself a mini render-farm to distribute my rendering processes. Are there any hardware limitations to that? And is it really worth it in terms of performance? My other option is to sell my CPU, CPU cooler, GPU, mobo, RAM (and maybe my case) and raise my current budget by that amount (I estimate around 200-250$). If I decide (or if you persuade me, to be more precise) that this is the way to go, what kind of upgrade could I make to your initial proposed setup that would really be worth the extra cash?
    – I’m curious if you’ve used the specific setup you proposed yourself; I had to face some motherboard-RAM compatibility issues when I built my current system, and had to lower my RAM frequency to avoid constant freezes and reboots.
    – Are you sure that an after-market CPU cooler is not necessary? I would get around 85-90°C at 100% CPU load while rendering; I found that quite disturbing, so I bought myself a Megahalems.
    That’s all, sorry for my very long post, I tried to be as descriptive as possible, I would be more than happy to hear your opinion.
    Greetings, George

    • George
      Many questions, so I will try to break it down and maybe touch them all.
      Your current system is old, not ancient, but I believe you will see considerable speed advantages upgrading.
      A word of caution though: be careful with expectations on timings. It might not be 100% faster (i.e. all rendering times cut by 50%), as there are software limitations in the mix too – usually the longer the rendering time, the more pronounced the speed difference is.

      If you can find the 4790K for only €30 more than the suggested Xeon, go for it. Again, this build had a “personal bet” to be less than $1,000 (before taxes etc.; in the US, VAT is conveniently not counted in, but sales tax is also nowhere near the 18-21% seen in Europe & Greece). The 4790K is the fastest quad core Intel makes, and a decent value. Make sure the motherboard you get has a recent enough BIOS and it should work – it does with all s1150 boards given the right BIOS. Big speed advantages are also offered by Vray 3.0 over previous versions, so that is important to keep in mind too.

      The GTX 750Ti is newer and cheaper than the 760/770 series. Yes, it is Maxwell based and it is better core for core in compute than the Kepler based 6xx/7xx cards, but it is a low core count, low power card, so it is not all-around faster than a 760/770, both of which consume 2x the power or more. Certain tasks do (impressively) put the 750Ti on top, but not viewport acceleration. Does that mean I would buy a 760 instead? Well, no, not at this point. The 750Ti is a great value card, and 3D apps are not smart enough to take advantage of powerful GTX cards. You will see more benefit from a newer 3DS Max version that offers viewport engine improvements for GTX cards than from getting faster hardware which cannot really be utilized. Much like with VRay 3.0 being faster, 3DS matures in utilizing newer hardware better with newer versions, even though the added features on the modeling side might be insignificant for your purposes.

      The GTX 750Ti (and anything of this caliber) has no issues driving 2 monitors. I have a GTX 750Ti driving a 27″ 1440p and a 24″ 1080p without issues.

      Your PSU/SSD/case should be salvageable from your old PC. So would the RAM. It is not optimal for the fastest quad core i7 around, but DDR3-1333 still works with it. If you are tight on budget you could keep it and add some more – even if the new sticks are faster, you could still clock them all @ 1333, or maybe try overclocking the old sticks @ 1600 or so.

      CM Sileo: I don’t really know the case, and I don’t know where your overheating issues came from. Most likely it was the lack of proper intake / exhaust fans to move in fresh air and extract hot air. An easy and cheap fix for such issues is opening the side panel and letting the CPU fan breathe directly. A properly mounted (good contact / good thermal paste application) stock cooler should be fine. Remember, CPUs all have thermal protection – you cannot burn one, it just shuts down if it overheats. Arbitrary limits are set by users, not Intel. If you heard that Dimitris doesn’t want his CPU going over 75°C, well, that’s his preference. Intel has set the threshold to 90 or 100°C for most CPUs – who knows better? Still, I believe your aftermarket cooler is compatible with s1156 & s1150, so of course you could also transplant it!

      Using your old hardware as a render-node: yes, it is possible, but you will have to buy new components for the new one, or at least a PSU & HDD/SSD, perhaps networking gear etc.

      Vray RT: it is complicated. It is not seamless with Vray Advanced, but I guess you can pull off some PS trickery rendering with both. Many of the YouTube tech demos you will see involve multiple GPUs, and often not that complicated scenes / textures, so do not believe you can get that kind of instant performance with just a 750Ti or a 770. Note that the GPU in your video is a single 690 (a dual-GPU card), which is still nearly 2x the horsepower of a 680/770.

      Also, take note that this “scene prep” time is also present with the Vray RT engine, as the CPU needs to prepare and push the model and all the assets to the GPU’s VRAM before anything starts rendering. Again, you will need the latest version of VRay for the RT engine to recognize your hardware properly, or it might not work at all. The current build, for example, doesn’t recognize 9xx GPUs, but there are beta versions of Vray that registered users can ask for through the Chaosgroup forums.

      Keep in touch. – Greetings from L.A. 😉

  7. Hi Dimitris, can you help me choose a graphics card for Rhino? I am a little bit confused. You write that Rhino uses OpenGL, and you recommended the R9 270 for this application. But I found http://wiki.mcneel.com/rhino/rhino5videocards where they don’t recommend Radeon cards, but Quadro cards are good. So I don’t know what is best for Rhino: Quadro K620, GTX 750Ti on Maxwell, GTX 760 with Kepler, FirePro V4900 or another card up to $300. Is the Xeon E3-1230V3 better than the i5-4690 for Rhino? I am not sure how Rhino uses multiple cores/threads. Thank you very much for your help. Sorry for my English.

    • Tom,
      the Rhino site doesn’t update hardware recommendations as often as you would think. That discussion about AMD cards etc. dates back to 2011.

      I would not recommend a K620 or other very low performance Quadro/Firepro cards.
      The recommended gaming cards should work faster than those; in general, performance with entry-level workstation cards pales until you step up to a K2200 / W5000/5100.

      Often the reason for these performance valleys is CPU bottlenecks and the design of the graphics engine itself. Again, I don’t know what kind of complexity your models reach, but in general Rhino is not using a very sophisticated graphics engine. It doesn’t matter whether it is OpenGL or not; it is simply not as powerful as Solidworks or Maya in handling multi-million polygon scenes. In my experience, even Sketchup does better.

      I did some searching for benchmarks, but the only one I could find was Holomark, now in version 2. With the older version 1, the results (tested by http://www.simplyrhino.co.uk/) showed decent gains going from a Quadro K600 to a K2000, but very diminishing gains over the K2000 going to a K4000 or K5000, which suggests to me that the Rhino engine is CPU limited and thus doesn’t allow powerful GPUs to flex their muscles.

      As far as the CPU goes, the multicore / hyperthreaded 1230-V3 should be faster only for rendering purposes. Rhino per se (and modeling plugins in it, like Grasshopper) is single threaded, and while you model and navigate through your model, only one core will be stressed; thus the faster clocked / better boosting option will give you a better experience.

      • Thanks for your answer. So I’ll choose the GTX 750Ti. But I am not sure about the processor. Is some Xeon with a faster clock better? i7 processors are expensive. I found these processors http://ark.intel.com/compare/75055,75122,80910,80810 but I don’t know what the difference between the two Xeons is. Is the i7 better than these Xeons for Rhino? My models are not so big and difficult, but I must often render with many lights, textures and views. Have a nice day.

  8. Dimitris, what processor would be the update to this? E3-1230 v5 or E3-1246 v3? The other update I’m thinking of is a GTX 1060. Any other part you would recommend for this build? Thank you

    • I know it has been too long since I’ve updated the site 🙁 The equivalent in our days would be an E3 V5 series CPU, like the 1230 v5 or the 1240 v5, but although these do shave $30 off an i7-6700, they lack some features (like the IGP) and clock that much lower, so I would call it a toss-up. There is no HT enabled Xeon E3 with a decent frequency that costs halfway between an i5 & an i7, which was the case back in 2014 with that little E3-1230V3. If you don’t want to go all the way to a 6700K (which boosts higher than the 6700 and all the E3 Xeons mentioned), the vanilla 6700 is good. If you don’t mind losing some of the boost, the 1230 V5 you’ve mentioned should work fine ;). The GTX 1060 looks like a solid card for the money.

Leave a Reply

Your email address will not be published. Required fields are marked *