Which GPU is better for increasing rendering speed?

I believe this is one of the hottest questions in every discussion around picking a new workstation. It is worded in various ways: Which is the best GPU for Vray? Or, Which is the best GPU for rendering in 3DS Max / Maya / Rhino 3D / Cinema 4D? etc. In most cases, the answer is none. Not as in “there is no single answer”, but more as in, “GPU has nothing to do with rendering” – or does it?

The talk around GPUs and any sort of workstation application has been raging for ages. Blame the interwebs, review sites and blogs, biased sellers that care for nothing but pushing the most expensive cards, or the fact that most if not all purpose-built PCs with a workstation label ship with an expensive (I don’t say good / bad / fast, just expensive) GPU; add some lack of education and the natural confusion on the buyer’s side, and there you have it.

Most of these points are extracted and enriched from my posts in the CGArchitect.com forums, presented here in a Q&A fashion.

The role of the GPU in traditional rendering engines

The GPU is not used for any aspect of “normal” renders with VRay Advanced, Mental Ray, Maxwell or any other traditional renderer. It never has been. It will get there, but it is not there yet. The GPU – known for ages as “the graphics card” – for the most part does nothing more than display a visual representation of a digital model on the 2D space of a monitor screen. That’s it. It doesn’t help the CPU – if anything, the contrary: it delays it, as the CPU is actively creating queues and preparing frames for the GPU to render on screen. Yes, the GPU is rendering all the time – the frames you see on the screen, not your final production rendering.

It would make virtually no difference whether you render something on a workstation or render node with “on-board” graphics/IGP, a GTX 780, a GT 610, a Radeon 7750 or any Quadro. The fact that machines labeled as “workstations” and marketed to people “rendering stuff” come with options for “workstation” GPUs – mainly Quadros & FirePros – has nothing to do with “rendering” itself. It is a delicate balance between vendors trying to create an all-around product, capitalizing on marketing promises of better compatibility and reliability, and increasing their margins by upselling the more expensive workstation cards.

APIs and the Workstation Card Advantage

As viewport engines mature, most modern 3D packages offer optimizations that do benefit GTX / Radeon cards. Especially for Autodesk products that have departed from OpenGL (or never used it) – like 3DS Max, AutoCAD, Revit and, to a lesser extent, the current versions of Maya – Quadro & FirePro workstation cards have little to offer over a mid-range or better GTX/Radeon.

90% of what a workstation card has to offer is OpenGL driver optimizations, most of which are intentionally “left out” of “gaming cards” that are optimized only for Direct3D – traditionally a “gaming” API. For good or bad, it has been the choice of many 3D app devs. This doesn’t mean that Direct3D, or Autodesk’s particular implementation of it in each of its products, properly utilizes the performance of modern GPUs – quite the contrary: you will see that performance reaches a plateau once you get to GTX 760/770 levels. Opting for a faster-on-paper 780 or a shiny 980 won’t yield substantial benefits.

Is it Autodesk doing a bad job, or nVidia / AMD intentionally leaving drivers without optimizations for 3D apps to boost their workstation lines? Well, perhaps both. The reality is that if you get a K2000 instead of a 750Ti to work in the aforementioned D3D-based apps, you will be wasting your money. Users of SolidWorks, Catia, NX and other OpenGL-based modelers have many good reasons to consider investing in a Quadro or FirePro, but for ArchViz, the investment in low to mid-range workstation cards returns diminishing, or even negative, results. Twisted, but real.

GPU Accelerated Rendering Engines

Much of this confusion is created by the fact that companies have been experimenting with GPU-accelerated rendering for quite some time now. The process of utilizing the GPU’s computing capabilities outside of driving one or more displays is often referred to as General-Purpose GPU computing, or GPGPU.

The first commercially available example of GPGPU rendering I was aware of was Octane, but it was followed by others like iRay, VRay RT GPU and Maxwell Multilight – to name a few.

With VRay being the most popular choice in ArchViz, many are interested in VRay RT GPU. Keep in mind that the RT engine is different from the original and not 100% compatible with all the features of VRay Advanced (yet).

Unfortunately, the employee who tried to upsell you that expensive GPU as a good choice “for rendering” has no idea when GPGPU started to become a “real thing” in 3DS Max or its plugins, much like in Photoshop or the rest of the Adobe suite – and that is most likely the case for 99% of the people giving advice in forums and blogs: we often just recycle rumors, or whatever aligns with our general perception.

GPGPU implementations utilize either CUDA – nVidia’s proprietary programming language for GPGPU – or OpenCL. To a great extent, these languages communicate with the GPU in a very direct way, bypassing the drivers – which still have an impact on actual performance, but a much smaller one. What is certain is that there is no benefit in using a Quadro or FirePro card with those, at least not in ArchViz, where fully unlocked FP64 (also referred to as double precision) plays little to no role in GPGPU renderings.

Yes, once again, all the stuff you’ve been reading about the GTX Titan being a game-changer of a card for its increased DP/FP64 performance, and the perfect “workstation card”, was misinformed. Regardless of how much time is spent theorizing on forums and blogs, the card or card combination with the better aggregate of shaders* × base clock will yield the best performance. Thus the GTX Titan is worse in VRay RT GPU than, say, 2x 770s.

* Remember to compare directly only same-generation / same-architecture GPUs. The Titan and the 6xx/7xx GPUs other than the 750Ti are all Kepler cards. The 750Ti & 9xx cards, along with some 8xx mobile GPUs, are based on Maxwell, which is generally better for compute despite using fewer shaders per cluster.
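As a back-of-the-envelope illustration of that aggregate, here is a small Python sketch. The shader counts and base clocks are the published reference specs for these Kepler cards; the metric itself is only a rough heuristic for same-architecture comparisons, not a benchmark:

```python
# Rough "compute muscle" comparison for same-architecture (Kepler) cards,
# using shader count x base clock as the aggregate described above.
KEPLER_CARDS = {
    "GTX Titan": {"shaders": 2688, "base_mhz": 837},
    "GTX 770":   {"shaders": 1536, "base_mhz": 1046},
}

def aggregate(card, count=1):
    """Shaders x base clock (MHz), summed over `count` identical cards."""
    spec = KEPLER_CARDS[card]
    return spec["shaders"] * spec["base_mhz"] * count

titan = aggregate("GTX Titan")       # 2,249,856
two_770s = aggregate("GTX 770", 2)   # 3,213,312
print("2x 770 wins" if two_770s > titan else "Titan wins")
```

Run it and the pair of 770s comes out comfortably ahead, which is exactly why the Titan loses to them in VRay RT GPU despite its FP64 advantage.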

Biased & Unbiased Methods

VRay RT GPU and most GPGPU renderers are “unbiased” renderers that use brute force and the direct compute method to literally calculate, ray by ray and bounce by bounce, the light for each and every pixel in the frame, gradually chewing through the whole GI solution. These very small “problems” are a waste for the long, complicated compute threads of modern CPUs: the CPU is “done” with each one very fast, but it has to wait for the next problem in the queue to come up.

Calculating hundreds of thousands or millions of bounces with 8, 12 or 24 threads – depending on the CPU(s) you have – is tedious and takes lots of time, with the CPU often waiting longer for the answer to go through the processing pipeline than it took to solve it.

This issue was solved long ago in rendering engines, with developers concluding that it was perfectly acceptable to utilize “intelligent shortcuts”, which in a nutshell involve grouping neighboring pixels and interpolating a lower-resolution GI solution across multiple similar pixels to speed up rendering (irradiance mapping is one such technique). These techniques characterize a rendering engine as “biased”, since it doesn’t independently calculate each and every pixel of the final frame, but “cheats” by interpolating results at a predetermined rate of interpolation.

The game changer came with the massive parallelism built into the 100s or 1000s of simple compute units (aka CUDA cores, shaders, etc.) in modern GPUs. All of a sudden, the direct approach of calculating each and every pixel in a frame individually, and in a timely manner, became possible again, as these little cores are very efficient at exactly these small problems. Instead of calculating all those repetitive tasks on a handful of CPU threads, you are throwing thousands of shaders at the task.
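To make the biased-vs-brute-force distinction concrete, here is a toy Python sketch. `expensive_gi()` is a hypothetical stand-in for a real GI evaluation – the point is only the difference in call counts, not any actual rendering:

```python
def expensive_gi(x, y):
    # Pretend this traces many rays; here it is just a smooth function.
    return x * 0.1 + y * 0.05

def brute_force(width, height):
    # "Unbiased": one full GI evaluation per pixel. Embarrassingly
    # parallel, which is why thousands of GPU shaders chew through it.
    return [[expensive_gi(x, y) for x in range(width)] for y in range(height)]

def biased(width, height, step=4):
    # "Biased" shortcut in the spirit of irradiance mapping: evaluate GI
    # only every `step` pixels, then reuse the nearest sample for the
    # neighbors -- far fewer expensive calls, at the cost of some bias.
    samples = {}
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            key = (x - x % step, y - y % step)
            if key not in samples:
                samples[key] = expensive_gi(*key)
            row.append(samples[key])
        image.append(row)
    return image, len(samples)

full = brute_force(16, 16)            # 256 expensive evaluations
approx, calls = biased(16, 16)        # only (16/4)^2 grid samples
print(calls)  # 16
```

A 16×16 frame drops from 256 GI evaluations to 16 – the same trade that lets a biased CPU engine keep up, until massively parallel hardware makes the honest per-pixel approach affordable again.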

The theoretical speed advantage over using just the CPU (on the same task) is so large that often there is no merit in including the CPU in the “loop” – even though many GPGPU engines will allow you to; it will just “burn” electricity. Also, for most intensive GPU tasks you need at least one CPU thread “open” to feed data to and from the GPU efficiently, so occupying the CPU with something else at 100% of its capacity might even be counter-productive.

Many of the features / options / effects of VRay are actually based on these “biased” methods, and thus are (still) unavailable in VRay RT GPU. For example, the latest VRayBlend materials are not supported, displacement doesn’t work, etc. See their support page for more. Maybe future versions will iron everything out as features are added with each generation of the engine, and eventually CPUs will be used less and less in the process, but for most people that time is not “here and now”.

Some serious amateurs and professionals have adapted their workflow and watered down their wine to capitalize on the speed benefits of GPGPU rendering, despite the lack of some features – many times with amazing results – so we are not talking down GPGPU rendering as a gimmick. On the contrary. It is just a specialized portion of the current ArchViz industry, and it should be part of – but not the exclusive factor in – picking that much more expensive GPU.

What about VRam Buffer? How much RAM do I need on my GPU?

This is pretty complicated to answer. Each application has different requirements, and so does each hardware configuration. Again, the common perception that “more = better” is false; in general, 2GB cards are more than enough even for very complex models, provided the GPU has enough grunt to actually push 2GB of buffer. Many cards don’t, and that is the reason that 64-bit 2GB card for $60 was not the bargain you were hoping for.

Older yet powerful cards with 1-1.5GB can still serve pretty well for viewports, although you might be pushing it with higher resolutions or multi-monitor setups. Very few viewport engines in the ArchViz world can push more than 2GB of VRam, and even when they do, don’t expect a massive drop in performance.

For 3DS Max, the #1 reason for the VRam buffer to fill up all those GBs is massive texture sizes and/or thousands of instances using them. Proper grouping of geometry, layers, proxies and/or less demanding settings can alleviate the weight, and the adaptive degradation engines in current 3DS/Maya viewports help massively on top of that. If you are having issues with very complex models – large vegetated scenes and whatnot – it might be the CPU bottlenecking the whole system, and not the VRam.

GPGPU rendering is a different story, as the whole scene along with all the required assets (mainly the textures) needs to fit inside the VRam for it to work. In this case it is a “make it or break it” scenario: if it doesn’t fit, it won’t work, so you will probably have to use less demanding settings, fewer proxies and/or downsample the texture sizes. If GPGPU is a priority, a 4GB card should buy you some leeway for complex scenes. Usually VRay RT GPU users get bugged by other, more serious limitations before they ever hit a VRam ceiling.
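For a rough feel of whether a scene’s textures will fit, a back-of-the-envelope estimate like the following can help. It assumes uncompressed RGBA8 textures (4 bytes per pixel) and ignores geometry and framebuffers, so real usage will be higher – this is illustrative, not VRay’s actual memory model:

```python
def texture_bytes(width, height, channels=4, bytes_per_channel=1):
    # Uncompressed size of one texture, assuming RGBA8 by default.
    return width * height * channels * bytes_per_channel

def fits_in_vram(textures, vram_gb, headroom=0.75):
    """Return (fits, total_mb): does the texture set stay under
    `headroom` of the card's VRam? Headroom is left for geometry,
    framebuffers and driver overhead."""
    total = sum(texture_bytes(w, h) for (w, h) in textures)
    return total <= vram_gb * 1024**3 * headroom, total / 1024**2

# e.g. forty 4K textures on a 2GB card:
scene = [(4096, 4096)] * 40
ok, mb = fits_in_vram(scene, vram_gb=2)
print(f"{mb:.0f} MB of textures -> fits in 2GB card: {ok}")
# 2560 MB of textures -> fits in 2GB card: False
```

Forty uncompressed 4K textures alone already blow past a 2GB card, which is exactly the situation where downsampling textures or trimming instances becomes mandatory for GPU rendering.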


Final Advice

It is great to ask people who use the programs you plan to use, along with the hardware you wish to get, but don’t just fall for whatever is out there in the wild. Many times people speak loudly just to justify their own purchases. Doing some additional research doesn’t hurt anyone – unless, of course, money is not an issue, and nobody can go wrong with a K6000.


67 thoughts on “Which GPU is better for increasing rendering speed?”

  1. Hi mate, I’m 3D visualizer involved mostly with interior design renderings. My SWs are 3Ds Max, Vray & PS. Vray is CPU based & I dont need Vray RT or Iray stuff. planning to make a better machine, As much as I learned till now is that more CPU cores plus more RAM is better than one or more GPUs with high vram & cuda cores, for a rendering station. so here is what I’m planning, Please I need some advice! 2 Xeons , e5 2630 v3 (or if you could suggest some thing in same price range) GTX 980 ti MB Asus Z10PE-D8 WS Corsair vegance 32 gigs (I will upgrade later) Corsair AX 860i PSU Corsair SSD 240 gigs 2 TB HDD 7200 rpm My question is RAM speed supported by this CPU is DDR4 1600/1866, will that make any difference ?? Regards

    • DDR4 doesn’t come at a lower rating than 2133. I would not go above 2400. You don’t need registered dimms if you don’t plan on going above 128GB.

      • Thanks Dimitris, So to my prepared machine parts, DDR3 1600/1866 will make any bottle necks in fast rendering? & with that board can I use corsair DDR4 2400 dimms with proposed xeon model, could there be any compatibility issues among DDR 4 2400 dimms & e5 2630 v3? if yes than what will be a better xeon model choice @ same price range. regards.

  2. Hi, I work on 3ds Max and render on V-ray and Scanline (with Skylight) that takes a lot of time to show up on screen.. Looking to buy a new laptop. Do you think Dell M3800 with i7 with 16 GB RAM and Quadro 2GB 1100K is sufficient. Currently I have a laptop with 4 GB Radeon Card but as I said it takes helluva time to render. Thanks in advance, Raihan

    • Are you sure you are using your GPU to render with Scanline? Because if you are not, it won’t make any difference changing to another GPU – especially a mobile Quadro, which will most likely be slower as a compute chip anyway. Complicated scenes take a long time for the render to initiate, but that is a matter of the CPU, not the GPU.

  3. Hi Dimitri, I’am looking to buy a new workstation to work essentially with vray renders. I’am thinking in a core i7-6700k with Asus z170 pro gaming and 2400mhz ddr4. My main problem is related with gpu. I have 3 possible choices : Gtx 750 ti, gtx 950 or gtx 960. So, can you help me with this little problem? I really appreciate that! Best regards! Hugo

    • Hello Hugo, your setup appears to have a solid base. As far as the GTX card goes, it all depends on the resolution and the model complexity of your scenes. It is easy to expect the 960 to be a tad faster than the other two, but it is also the case that when things get heavy, there is no magic bullet: you need to manage what is displayed on the screen with layers, proxies etc. The bottleneck is not in the GPU, so performance increases are not linear, and in many cases are insignificant. That said, the GTX 950 appears to be good value (current price ~$4.8/CUDA core) if you want a slight upgrade over the 750Ti ($4.9/core). The 960 is all around not a great value by comparison ($5.4/CUDA core, and it remains 128-bit like the smaller and much cheaper 7xx/9xx cards).

  4. Hi I have in task putting a workstaion together basically for architectural 3d rendering vray 3dmax autocad photoshop and sketchup I was thinking 5820k processor asus sabertooth x99, corsair ddr4 2666 or 2400 32GB, and corsair ax860 power supply . When it comes to gpu that’s the problem I like the titan x but is it worth the money,faster?compared to other gpu like 980ti (not big difference in price 200 to 300) 980 our even other’s. Anything you recommend? Not looking for quadro as they are to expensive What I m looking is for something stable and fast in cad ,V-Ray, PS,.. so I can do some real time finishing ,light setup and rendering mockups for clients Are the new skylake processor’s better?should I invest for now on a cheaper gpu and get the Nvidia Pascale chip next year Thanks for your time

    • The Titan X won’t be any faster than the 980Ti, which in turn is not meaningfully faster than a 980 anywhere but in GPU rendering. It boils down to VRam pretty much. For material and lighting checks, the 4GB of the 980 will most likely suffice. If you need more than 4GB you are pretty much forced into the 980Ti (with the occasional, not linearly experienced, speed boost as a perk), with the Titan worth considering only if you are adamant that 6GB of VRam won’t cut it. In my experience with the original Titan & its 6GB, that was never the case: you run into other issues with GPU rendering before you hit the VRam cap, and then it is more likely you’ll switch to CPU VRay RT than long for more VRam – and CPU RT works just fine for testing IMHO. Remember that for complicated models, VRay RT will take a long time to launch – much like regular VRay – so for “testing” the light balance or the bump of a material that takes a few seconds to show up once the VRay RT window is launched, the CPU works for a couple of minutes or much more, relying on its single-threaded performance alone. That’s where, if you have multiple machines for distributed rendering, it makes sense in the end to have the fastest quad core (i.e. Skylake) working for you along with either CPU or GPU VRay RT, and let the render farm deal with the final output. If you are limited to a single box doing it all, a 6-core like the 5820K is a preferable all-around solution. To finalize: I would rather go for a 5960X, if I were to drop $1K on a component, than for the Titan X. I think a vanilla 980 with 4GB will do the trick. I like the Asus Strix line because the fans can turn off 100% and it runs passively 99% of its uptime.

  5. Hi Dimitri, I’ m 3d visualizer and I work on 3ds max, V-ray. I have a 7 years old workstation with the following specifications: Case : X20i-64 Midi Case (Akasa Mirage 62) – 650W Seasonic Modular PSU Mainboard : Tyan Tempest (S5397) [i5400 (Seaburg Chipset)] – SATA (8 DIMM Slots) Graphics Card(s) : ATI Fire GL V7700 512MB GDDR4 (PCI-E x16) Hard Drive (1) : Western Digital 150GB VelociRaptor – 10,000RPM Serial-ATA Hard Drive (2) : Seagate “Barracuda ES.2” 500GB Serial-ATA II – 32MB Cache CPU(s) : 2x Intel® Xeon® E5420 (4x 2.50GHz / 1333FSB / 2x 6MB Cache) Memory : 16GB DDR-2 667MHz – Fully Buffered ECC RAM (PC-5300) (8x 2GB) The problem is that my ATI Fire GL died last week (and the seagate “Baracuda ES.2” died last month). In a few months I am going to buy a new workstation but in the meantime I am looking for a cheap GPU solution to work. Do you have any suggestions? Thanks for your time. euxaristo!

    • Michalis, I would not get anything too fancy. Any Radeon or GTX in the €100 range or so will be more than enough. Doubt the CPU has the grunt to push for something more. A used GTX 6xx/7xx, Radeon 7950 etc will do. For new parts, a 750Ti will do fine.

  6. Hello Dimitris. I’m a student that just started an animation course 2 months ago and right now I’m having trouble in regards to buying a new laptop for my course(more info: I prefer a laptop mainly due to the convenient aspect). With my current budget, the best laptops I can afford have a GTX950m graphics card and its been bothering me whether that kind of graphics card would be able to run Maya 2016. Please let me know what you think, much thanks!

    • You should be ok. Don’t bother with cheap mobile quadro cards, those are actually weaker than even the cheapest mobile GTX cards and Autodesk doesn’t even bother to optimize for them in time to make a difference.

  7. Hi , I am a 3d architectural visualizer, i am using 3dsmax 2014 , vray 3.0, actually i want fast rendering presentation views, can you suggest me system hardware configuration, send me desktop and workstation configuration with graphic card details. i waiting for your reply please as soon as possible. thank you yuvaraj

    • Well, sorry, but I don’t understand what you want exactly…it reads a bit like, “I am bored to read much, give it to me”…This whole blog post was about the futility of hoping your GPU will speed up your renderings – which has been the misconception for AGES, and still is unless you are using specific tools. You don’t mention specific tools, so my best guess is you are still talking about CPU renderings. If there is no budget set, the answer is almost always: get the CPU with the best cores × MHz aggregate (usually, but not always, the most expensive one), and if you have it already, get 2 or more of the same to get faster.

  8. This site is extremely helpful. I am setting up a machine for rendering from 3DSMax and Sketchup using VRay. So far, we’re going dual Xeon processors, 16GB RAM, SSD Boot and SATA Secondary HDs, and of course I’m stuck at the GPU. I have a selection of AMD FirePro W series or NVidia Quadro K or M Series cards. What do you suggest? This will be a render box only, it will not be an everyday machine, but generally modeling and lighting will be done on other machines and then rendered on this one. Thanks!

    • If that’s going to be a render slave, the GPU is irrelevant. Get the cheapest one you can find just so that it fires up. Also, even though your renderings might not be that complicated, mind that if you get a dual-CPU system, RAM is technically “split” between the two CPUs, and then between the cores of each CPU. Having 8-10 or more cores fighting over 8GB of RAM is a bit counterproductive.

        • Definitely does once you go for fast 8C/10C pair (or better). Going for the cheapest E5 6-cores doesn’t really cut it as you won’t get enough speed boost over a fast 1P machine of similar cost (say a 5960X) to justify the 2P hassle.


  9. awesome. what can i do to help you out? this is such great and useful advice. so, i’m going with Dual Xeon E5-2623 v3, is there a better 1P option out there?

  10. Hi. My Mac book pro has a AMD Radeon R9 M370X 2048 MB. I am an interior design student and use rendering a lot. Is this a good graphic card? Or is there a better & quicker performing graphic card?

    • It is as good as it gets for MacBooks. Of course there are better and quicker graphic cards out there for laptops, but 1) does the software you use care for them – or even the one you have now – and 2) does it matter for you if you want a Mac laptop?

  11. Hi read carefully the discussion you and I have a question. 3ds max V-Ray RT only active shaded mode uses GPU and then if OpenCL CUDA ? 1. for final renderings V-Ray Adv or RT ? and are diferentele.Processor i7 8 threads with video card would you compare? 2. I have an i7 (4 cores, 8 threads), 8GB RAM, GTX 970 4GB. Until now the turn only to the processor. I would like to know why when I set to “Assign Renderer” V-Ray RT and active parts in shaded mode Engine type leaving the CPU to go. I change engine type to CUDA or OpenCL and I receive error “Unsupported GI is disabled, but V-Ray RT always uses GI” We looked how to enable it, but I found only adv, not here. You know what’s the problem?

    • eGPU? As in a Thunderbolt enclosure? This will probably set you back as much as a whole Hackintosh desktop. I am of the opinion that small portable laptops – like the rMBP – have their limitations for a reason. If you want a powerhouse on the go, just get a bigger / faster laptop. Small and powerful don’t go together. If you want to boost your output at your base, with your external monitor etc., I would advocate in favor of a desktop over an expensive eGPU any day.

  12. I have got answers for the age old cpu and gpu issues from your replies…Thanks. Just want to know one more thing..If we connect two systems of i7 Hq Processors and I5 dual core ones through some sort of network and the renders will speed up right? In that case, The networking can be made with wifi router or How is it made? What problems may we have to face in that scenario?

    • Adding more CPUs to a render job is the most effective way to speed it up. Yes, it is doable through WiFi, although it is a tad slower than it would be through a GBit wired connection. But WiFi is a “LAN”, so it works. Setup of the Distributed Rendering “spawner” – a utility that broadcasts the “availability” of a render slave to receive packages from the main workstation for rendering – differs from program to program and with the rendering plugin used, but it should be covered in the documentation for each product.

  13. Hi Dimitris i am looking to improve my workflow and rendering times with my setup. I am an architect that works on the go. I have a macbook pro retina mid 2014 carrying a 2.5ghz i7 (4 cores), 16gb ram, GT 750M with 2gb. I use Vray with 3DMAX and Lumion to generate stills. I am looking into improving my RT experience with Lumion and Vray (single large external monitor) just to set up scenes quicker but mainly to improve render times for my output of stills. I was thinking of getting a EGPU like the GTX 980 but after reading through the posts i am now confused. Can you give me any recommendations or advice?

    • Hi friends, Im a 3dsmax & sketchup interior designer and im really tired in deciding what kind of graphics card that i can use to improve my rendering speed, plz help me to get the proper graphics card that i can us, at present Im using Intel Core2Duo, 8GB Ram, GT 610 graphics card

  14. Hi Dimitris, I am planning to setup a computer mainly for interior design work. Using mainly 3d max, vray, autocad & Photoshop. I can’t decide between i7 or Xeon, which is a better option? Graphic card should be a Zotac 980ti. What will u recommend and what will the spec be for the whole setup? Thanks Daniel

  15. Hi Hi Dimitris, I am planning to setup workstation for following configuration. I am using 3D max, Vray, Revit add autoCad . proposed components of the CPU: Processor :- Xeon 2620 v4 : (1 nos) mobo Asus :-Z10PA-D8 ATX Dual-CPU LGA2011-3 cooler:- Coolermaster Hyper 212x air cooler -(1 nos) RAM:- Kingston 32GB (4 x 8GB) Registered DDR4-2400 SSD :- Samsung 850Evo 250GB Hard Drive:-seagate Barracuda 2TB 7200 RPM In my budget, I haven’t enough money for add 2 processors and high performing Graphic card now. But I planned to upgrade within 3/4 months. What would be the best option for Graphic card? GTX 750 Ti or any other alternative?

  16. Hi dimitris, For Vray RT rendering, wich is better, a newer and faster gtx 1060 with 1280 cuda cores or the normal gtx titan 2688 cuda cores? And for overall pruporse, like video, 3d, etc.? Sorry about my english. Thanks in advance

    • I don’t know the exact answer, but I would expect the older Kepler architecture the original Titan used to be better optimized in older VRay RT versions, and since we are talking about less than half the shaders, the Pascal-based 1060 might not be really competitive despite being better in D3D games. Perhaps the updates later in 2016 or 2017 will use the Pascal architecture to a better extent. I doubt that either card is really stressed in D3D viewports in 3DS, due to badly optimized drivers: the focus is on gaming, and both nVidia and AMD work closely with gaming companies to achieve that optimization, while Autodesk is notoriously slow / indifferent to working with them. For Adobe stuff, I would think the Maxwell and Pascal cards have an advantage, as Adobe uses exclusively OpenCL these days, and the newer nVidia architectures perform much better than Kepler did in OpenCL.

  17. Hi Dimitris, First of all, thank you for your help – your comments are really enlightening. I’m an architect and graphic design enthusiast. I’m using mainly AutoCAD, Sketchup, V-Ray (switched from Artlantis), Photoshop, Illustrator and currently getting into 3ds max. I’m thinking about getting a new MacbookPro (as per my understanding you don’t recommend macs for 3D work, and i can understand your reasons, but i’ve been working with macs for many years now and, besides 3dsmax, which i have to run on a virtual machine, i’m accustomed and happy working in OS X). Giving the high cost of these computers, i’m not willing to make any major upgrades, reducing my limits to just one improvement: CPU – from i7quad-core 2.6 to 2.9 or GPU – from AMD Radeon Pro 450 (2GB) to Radeon Pro 460 (4GB). I would like to get your opinion on what would be the wisest upgrade. Thank you.

    • Tiago, to be completely honest with you, if you are buying a MBP you are doing it for portability and not absolute performance. The differences between the Radeon Pro 450, 455 and 460 “are there”, but the benefits do not warrant upgrading IMHO. The “base” MBP15 model with 2.6GHz 4C i7, 450 Pro 2GB, 16GB RAM and 256GB SSD gives 95% of the performance of the “best” possible configuration. If I had to go for an upgrade for the “heck” of it, I would go for the CPU, because it does offer a 10% Turbo Boost advantage that – at least when plugged in – will give that extra oomph for tasks that are single-threaded and CPU-limited, but even that probably won’t be worth $300. I believe the 16GB of RAM will be a bottleneck for advanced 3DS projects far before the 2GB of VRAM will be – especially if you are running Parallels. The financially conservative me votes for the base model.

      • Dimitri, once again, thank you for your enlightening comments! I have to say, i’m still an amateur with regards to 3d work (i still focus mainly on sketching and physical models, like in the old days 😀 ) – most of my 3d work is done with SketchUp, simple rendering (Artlantis or Vray) and the precious help of photoshop. I’m just getting started with 3ds max. From this, I will surely follow your advice and stick to the base model, saving the money for, perhaps, in the future to invest in a desktop, looking for such “absolute performance”. Thank you very much for your help. Best regards

  18. Hi Dimitri, yet again, a question from a novice who wants to upgrade his pc for Sketchup, Vray and Adobe Lightroom 6. Currently, I’m using an old HP 6910P, but want to upgrade my laptop for more possibilties and starting with rendering my designs through V-Ray and later work towards VR presentations of my designs.. I’m am looking at the MSI GS63VR. i7 6700HQ 2.6-3.5 GHZ 16gb (might upgrade this later to 32gb) NVIDIA GeForce GTX 1060 256GB SSD PCI-e / 1TB HDD 5400rpm How do you think this laptop will perform, do you see any problems that might conflict with the Sketchup and V-Ray software and do I need to (or can I) do any modifications to improve its performance? I’m sorry for bothering you but I hope you can help this computer-novice out. Kind Regards, Justin

    • The upgrade to a 6th gen i7 from your C2D CPU will be massive and easily notable. Sketchup works fine with nVidia, AMD and intel IGP graphics alike. So does Lightroom. Lightroom & PS will be the only apps that could benefit from more than 16GB of RAM, but I doubt you are working them that hard. If I were to perform an upgrade on that MSI, it could possibly be swapping out the 5.4K HDD for a larger 2.5″ SSD, e.g. a 500GB or 1TB EVO, and going all solid-state. I would see that as a better and more holistic upgrade than going for 32GB of RAM.

      • Thank you very much! It is very difficult to get a straight answer from people trying to sell you a laptop. Thank you for helping this novice out! I’m thinking of spending a bit extra on a Alienware 15 R3, just because of having more faith in the build quality, the cooling and the i7 6820HK (overclockable) processor. Will probably help a bit in cutting down the VRay rendering time and making it a better investment in the long run. I will take up your advice, keep the 16gb and spending the extra on a 512GB SSD instead on more RAM. Again, thank you!

        • The laptop in question has very easy access to the SSD slots…I would recommend not spending any additional money customizing it in the Dell store…just buy it with the 256GB SSD only, and add an M.2 Samsung EVO drive for storage. https://youtu.be/TnMO-ZX4SxM?t=7m16s

          • I was thinking of getting it with 8GB (2400MHz) and only the 7200RPM HDD, adding one 8GB (2400MHz) stick myself, saving about €90. Then either adding a 256GB Samsung (950) SSD myself (saving another €20), or instead adding a 512GB SSD by Alienware (saving €40 in comparison to adding it myself). Do you think one 256GB SSD is enough or do I really need the 512GB version?

            • I would get it with a 256GB SSD from the seller to have it up and running with the OS installed and activated. Then I would add a 500GB-class “storage” SSD if needed in the future. Yes, a 500/512GB drive by itself as the main drive might suffice without the need for a 2nd drive anytime soon, but that depends on what you want to keep on your machine all the time.

              • Very clear! Again, thank you very much and I’m finally convinced of what I should get. 🙂 Kind regards from the Netherlands!

  19. Hi Dimitri. Very useful post and comments. I would like advice about upgrading my computer to start working with V-Ray in Sketchup, and maybe in the future with Lumion. Nowadays I have an i5-4570, 8 GB RAM (4+4) and a Seagate 7200rpm HDD. I’m planning firstly to buy a GTX 1050 Ti 4GB or maybe a GTX 1060 3GB (do you think the 1060 is really worth the extra price with less VRAM?). And about the other components, do you think it would also be better to add 8 GB of RAM? Any bottleneck? Maybe the CPU? I’m new to rendering and still learning so any comment will be very appreciated.

    • The 1060 will be faster for sure, but to get your feet wet, the 1050Ti might suffice. I would agree that if you were to go for the 1060, perhaps the 6GB would be the way to go, getting you both more speed and more leeway to try higher resolutions and/or better textures.

  20. FYI Dimitris, you’re worth your weight in GOLD. I run a basic drafting company that is expanding into 3D rendering. I want to really set the standard high but also bang out a lot of work fast, as we will generally be getting the models sent to us somewhat finished (still needing textures and lighting done along with a lot of other stuff). Anyway, we currently have 7 rigs, all Skylake, 3 are i5, 4 are i7. The i5’s are running with a basic video card like a 750 Ti, the i7’s are 970 Strix with a 1070 Strix tossed in the mix. All i5s have SSDs and the i7s have M.2 960 Pro NVMe. All networked on a 10G LAN. A lot of what we are going to be rendering is exteriors that are to look as real as possible. They are being used as sales material for sales offices and not for architectural proposals. Also the environments are heavily forested and we plan to do site maps and mimic actual tree layouts. Budget is $6k-$10k. At first I was thinking of doing the good old throw-money-at-it and build an Asus X99 WS-10G, 8 core Xeon, ECC RAM (128GB, but after reading I don’t know if I still need that, and it would change my CPU) and 4x 1080s non-SLI (not sure of the benefits of SLI, given that Pascal is not all that great doing SLI over 2x). Water cooling the whole thing and having a badass-looking office trophy. Rendering will be done (at this point I think) in 3DS Max via Vray RT. Not stuck on this but it looks to do the part and the draw-up from DWG seems to be a good workflow. I also plan on using all of our computers to render overnight via the network. • With Vray RT, will it take advantage of all the assistance on the network, aka CPU & GPU? Will the other Vray version be better for this? • If we do only GPU rendering, what CPU class am I looking at to feed 4x 1080s? I have no idea of the load levels I can expect. • Any suggestions on what I really should be doing, in regards to rendering and hardware layouts. A fear I have is that my staff have no experience in this area… or do I lol.
So GPU-forced rendering seems like a fast way to get stuff done without getting too technical. We do plan on developing skills for this stuff… oh the joy.

    • Oh, and all the SSDs were a workaround for an issue with Illustrator. Illustrator is not a fan of working with 60,000 polygons. Something to do with how it caches. One workaround was running a RAM disk, but that’s a different story.

      • No idea on Illustrator having issues with SSDs. I have run most of my machines exclusively on SSDs for the last few years, but I have to admit I don’t work on really complicated stuff in AI. Surely, opening DWGs in AI often drives it (and me) crazy, but I have no idea how a RAMdisk would solve that. I think it is just a limitation of AI’s engine, which cannot handle multiple thousands of objects at the same time.

        • This problem was solved… kind of. AI stores the math for all the poly calculations on the scratch disk, not in RAM. RAM is hogged up by textures and other assets. So when moving high-poly-count assets around on the artboard, it would be drawing from the scratch disk or hard drives. In this case size is not the problem, but latency-to-draw time. AI was never built with the intent to move a single asset with 60k+ polys at one time, but it can easily render almost any number of polys as long as it’s stationary. Thus it’s stored on the disk (it should really be in VRAM). SSDs, and even better NVMe drives, alleviate this to some extent. A RAM disk can even be used and it’s a lot of fun, but it takes a lot of RAM. Just thought I should give back a little.
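The scratch-disk argument above boils down to access latency multiplied by how often the app touches the scratch file during a redraw. A rough back-of-the-envelope sketch (all latency figures and the access count are illustrative assumptions, not measurements of Illustrator’s actual behavior):

```python
# Sketch: why scratch-disk latency dominates when dragging a high-poly asset.
# Latency values are ballpark, per-access figures for illustration only.

ACCESS_LATENCY_MS = {
    "7200rpm HDD": 10.0,   # seek + rotational delay
    "SATA SSD":    0.10,
    "NVMe SSD":    0.02,
    "RAM disk":    0.001,
}

def redraw_time_ms(scratch_reads_per_redraw, device):
    """Rough time spent waiting on the scratch device for one artboard redraw."""
    return scratch_reads_per_redraw * ACCESS_LATENCY_MS[device]

# Hypothetical 200 scratch reads while dragging a 60k-poly asset:
for device in ACCESS_LATENCY_MS:
    print(device, round(redraw_time_ms(200, device), 3), "ms")
```

Even with made-up numbers, the ordering explains the observed behavior: an HDD turns each drag into whole seconds of waiting, while an NVMe drive or RAM disk pushes the same workload into the milliseconds.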

    • There is no machine that you cannot bog down once you start adding vegetation. A couple of detailed 3D trees alone might have more geometry than whole buildings! The solution is using proxies, along with scattering tools like Forest Pack – https://www.itoosoft.com/forestpack.php . A lot of dense forestation can be recycled and reused in the post-processing phase with Photoshop too, depending of course on the angle of your shot. Forest Pack and similar plugins do support VRAY & VRAY RT, also Octane and pretty much all popular CPU & GPU engines. As far as GPUs go, if you plan on using GPU renderers you don’t really need to worry about the CPUs. i5 and i7 quads can easily host 4x 1080s. You just need a proper motherboard with proper spacing – usually E-ATX / SSI sized boards do that – and of course a case that will allow for 8 PCI slots in the back (some midi towers, most full towers). If you don’t plan on buying these systems immediately, you could wait for the 1080 Ti that is rumored to launch early in 2017, probably featuring 10GB of VRAM.

  21. Hi, I am planning to buy a GTX 1060 6GB graphics card for Photoshop, Revit and 3ds Max (mainly), with a 7th gen i5 7600K processor. Is this sufficient for rendering and performance? Thanks in advance, please reply fast.

  22. Hi Dimitris, love all of your advice, you are doing a great job. I do have one question though: I have a Quadro K5000 and tomorrow I’m getting a 980 Ti. Will I be able to use the Quadro for viewports and the GTX for CUDA rendering (V-Ray RT 3.5)? Much appreciated, Chris

  23. Hi, I am from Brazil. I want to buy a workstation. I will work with software like Sketchup + Vray, AutoCAD and Lumion. Please tell me if my configuration is good, and suggest anything you think is important: Core i7 7700K, 16GB Kingston 2133MHz, Asus H170M motherboard, Samsung 850 Evo 500GB 2.5″ SSD, EVGA GeForce 1050 Ti 4GB. Thanks… There are probably English grammar mistakes, sorry =D

        • I think the 1050 Ti is fast enough to warrant the price difference; it is not like you are spending a fortune on it… Vray CPU and Sketchup will not really care. AutoCAD has horrible 3D acceleration and no need for a fast GPU anyway – even the slowest built-in IGP will do fine. Lumion & Vray GPU enjoy all the extra grunt you can throw at them.

  24. I’m glad I found your blog. I’m planning on getting a laptop as a kind of desktop replacement that I can take with me to school when necessary. I want to run Rhino, SketchUp, Solidworks, Keyshot and Maxwell as well as Photoshop and Illustrator on it. I just make small models for 3D printing, some CNC machining and casual rendering, and I want to spend as little as possible without compromising too much on the overall user experience. So, which do you think is better value for money for what I’m describing between these combinations: i5 7300HQ / i7 7700HQ + 1050M, or i5 7300HQ / i7 7700HQ + 1050 Ti? I’d really appreciate your recommendation. Thank you.

    • If you don’t plan on gaming, I would opt for the i7-7700HQ + 1050M. If you were gaming, on top of all the apps mentioned, I would then favor the 1050 Ti, even if it meant opting for the i5-7300HQ. The two CPUs don’t have much difference, i.e. both are quad cores and most apps don’t really use hyper-threading, but the i7 is clocked higher in both base and turbo, so there are speed differences. The two GPUs will not give you a notable performance difference for the apps you’ve mentioned; you will only see a difference in gaming (there the Ti will be 20-25% faster).

      • Thanks for the answer, man. I got the Acer VX15 with the i7-7700HQ and 1050 and I like it a lot. I mostly use Rhino/Keyshot/Photoshop and for the things I do it’s running super smooth and stable.

  25. Hello Dimitri, thanks for your help and info. I am an arch student; I’ll be working in AutoCAD, 3ds Max and Vray. I have a Lenovo Y700, Core i7, 32GB RAM, two drives (M.2 240GB and 1TB). I am worried about the GPU – is a GTX 950 enough for heavy rendering?

    • Hello Daniel, if you read my post above you will see that what I am actually claiming is that the GPU has little or, more often, nothing to do with CPU rendering, which is what most people really mean when talking about Vray. You have a decent amount of RAM – 32GB – and that is what mainly limits how complicated or big a scene you can render with your CPU. The CPU defines how fast you render, not how complicated what you render is. Similarly, if you are using GPU accelerated rendering (Vray RT GPU engine), your GTX 960M will do “ok” and will render scenes – as long as you can fit them in the 4GB of VRAM that is available to the GPU. The GPU cannot directly access the 32GB of main system memory for that process. I am conservative with my estimates and call it “ok” not because those are bad parts – these are amongst the best you can get in a slim laptop regardless of price. But laptops have low-powered, 30-45W CPUs and equivalent GPUs, whereas desktop parts are typically 70-140W CPUs and 150-300W GPUs. You cannot expect wonders from a laptop; it is simply impossible for it to match the performance of a desktop at 1/3 the power consumption.
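To make the VRAM constraint above concrete, here is a minimal budgeting sketch of the kind of fit check a GPU engine effectively imposes. The footprint numbers and the 10% driver overhead are assumptions for illustration; real engines report their own memory statistics:

```python
# Sketch: rough check of whether a scene fits in a GPU's VRAM budget.
# All footprint figures below are hypothetical, not measured values.

def fits_in_vram(geometry_mb, texture_mb, framebuffer_mb, vram_mb, overhead=0.10):
    """Return True if the scene's estimated footprint fits in VRAM.

    `overhead` reserves a fraction of VRAM for the driver/engine itself.
    """
    budget = vram_mb * (1 - overhead)
    return (geometry_mb + texture_mb + framebuffer_mb) <= budget

# A 4GB card, like the GTX 960M discussed above (budget ~3686 MB):
print(fits_in_vram(geometry_mb=1200, texture_mb=1800, framebuffer_mb=300, vram_mb=4096))  # True
print(fits_in_vram(geometry_mb=2500, texture_mb=2000, framebuffer_mb=300, vram_mb=4096))  # False
```

The second scene would render fine on the CPU with 32GB of system RAM, but fails on the GPU: that asymmetry is exactly why VRAM, not system RAM, is the hard ceiling for GPU rendering.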

  26. Hi. I am Fazal. I work in 3ds Max and Vray RT. Can I use a GTX 970 4GB for viewports and a GTX 1060 6GB for CUDA rendering? Both cards are in one single system. Thanks for your help.

    • You should be able to use either GPU for VRayRT: just the 970, just the 1060, or both. Of course, viewport acceleration is only possible through the GPU that the monitor(s) are hooked up to with current GPU generations.
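One caveat worth adding for mixed-card setups like this: with V-Ray RT GPU of this era, VRAM is not pooled across cards, since each participating GPU keeps its own full copy of the scene. A small sketch of the consequence (device names and sizes are just the ones from the question):

```python
# Sketch: effective VRAM when rendering on multiple CUDA cards at once.
# Assumption: memory is NOT summed across cards; every participating card
# needs to hold the whole scene, so the smallest one sets the limit.

def usable_vram_gb(render_gpus):
    """The scene must fit on every participating card, i.e. the smallest one."""
    return min(vram for _, vram in render_gpus)

gpus = [("GTX 970", 4), ("GTX 1060", 6)]
print(usable_vram_gb(gpus))       # 4 - limited by the 970
print(usable_vram_gb(gpus[1:]))   # 6 - rendering on the 1060 alone
```

So if a scene outgrows the 970, a common compromise is dropping it from the render-device list and keeping it purely for the viewports.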
