Mental Ray vs Vray in Maya 2016
Over the past few months I have been playing around with Vray for Maya, and with adapting the material design method popularized by Grant Warwick to a Maya workflow. So far, my experience with Vray has been mostly very good, and I find it significantly better than the mental ray experience in Maya. Here are some of the advantages I found most beneficial in everyday use:
- Much better render framebuffer. The Vray render view is far more useful overall than the Maya framebuffer, over which mental ray unfortunately has no control. Not only can I keep using Maya while Vray is rendering, but I also get a slew of additional features: mouse-cursor-based buckets, adaptive subdivision of buckets as the render nears completion, a better history buffer with controls for comparisons, and quick access to preview render elements/passes.
- Significantly better GI solutions and control. The Vray GI is simply better than whatever mental ray has to offer currently, in terms of features and speed, especially for interiors.
- MUCH better distributed rendering setup, with no bugs or crashes. I rely a lot on distributed rendering of single images, and the Vray setup just works out of the box, with no additional tweaking required. The mental ray satellite, by comparison, requires a much more complicated setup and is extremely finicky: it can crash for no apparent reason, is very sensitive to render bucket sizes during command-line renders, and requires me to reboot the workstation every time before I start using Maya. It seems to have gotten worse starting with Maya 2015.
- Vray RT offers a fantastic and very handy solution for real-time preview, as well as GPU rendering, and it saves a lot of time. Unfortunately for mental ray, the current Maya implementation of progressive rendering through the IPR is absolutely awful: not only is it quite slow and lacking a GPU option, but it also crashes all the time. In previous versions of Maya I had to rely on Holomatix’s SprayTrace, and although it worked pretty decently, it still has several limitations compared to Vray RT. It’s also an added expense, lacks GPU and viewport support, and is entirely independent of the mental ray plugin. There isn’t even a Maya 2016 version yet (as of August 2015).
- The Vray render elements system is MUCH easier to use and set up than the render passes system in Maya. There’s almost no comparison. This is an essential feature for compositing, and one of the big reasons that Vray became so much more popular in VFX production.
- The Vray materials have more features, and are somewhat easier to work with. For example, the Vray material offers multiple shader types for specularity, doesn’t require an additional 2D or 3D bump node for connecting a texture to the bump attribute, offers dispersion of refraction, and has a built-in SSS feature that allows for very realistic, physically correct materials. In addition, there is the Vray dirt node, which offers more features than mental ray’s occlusion node.
Aside from cost, one of the biggest reasons that I stuck with mental ray for so long (and I have tested Vray in the past several times, as well as other render engines), has been the fact that it offered support for pretty much everything in Maya. It has a strong set of essential and advanced features for production use, and it renders pretty fast once you learn how to use the settings.
However, now that Vray 3.0 has reduced its render times significantly, they are either pretty much on par with mental ray or even faster. This, combined with the fact that it is so much more reliable and stable, makes it a no-brainer as the current renderer of choice, as both a CPU and a GPU solution.
There are still some important differences that give mental ray some advantages, aside from the obvious cost difference.
One is the more extensive support for some of Maya’s features like fur, hair, XGen, Bifrost, and fluids. With Vray, there is often a need to rely on additional plugins for similar features. An important example is fur. Although Vray has its own fur implementation, it is somewhat useless for characters, as it doesn’t have any grooming features. It does offer support for XGen, which could work, but XGen comes with its own problems currently. It seems like the only other option would be Yeti, which is not only unavailable for purchase in North America, but would also be a pretty expensive additional purchase for independent artists.
Another difference, which can be an advantage in some situations, is the current implementation of the SSS shaders. They act quite differently from each other, and I found that, for characters, the mental ray SSS shaders generally seem more suitable. They respond better to light, and offer better control over the scattering.
Testing Scene Setup
For a simple comparison, I put together a scene featuring the Stanford dragon scan, and some basic materials. The aim in this scenario was an exercise in trying to achieve the same look with both render engines.
The render setup included my workstation + 4 render nodes in distributed mode for Vray, and the same nodes using the satellite system for mental ray. I first set the scene up in Vray, with the following settings:
- An HDR dome and a single rectangular Vray light for lighting, with the light set to a 4500 K colour temperature at 1 lumen intensity, positioned back and to the right of the dragons.
- For the SSS effect I decided to use the Vray Fast SSS2 material, in raytraced mode.
- Adaptive sampling with 16 max subdivs, and 0.015 threshold.
- For GI, I used Irradiance Map with the High preset for the primary rays, and Light Cache with default settings for the secondary rays.
Then I tried to recreate the same setup for mental ray, using the following:
- Used the built-in IBL with the same HDR map, and the new Physical Area light found in Maya 2016. I had to match the light intensity by trial and error, and used the mr Blackbody node for the colour temperature.
- All of the materials used in the mental ray setup were MILA-based, and I found that they behave pretty much exactly like the Vray material. The exception, of course, is the MILA scatter layer, which behaves quite differently from the Vray Fast SSS2. Still, I tried to match the look as closely as time permitted. The reason for using the MILA materials was that their energy conservation matches the Vray material better than the older mia_material_x does. This was also a good occasion to test the MILA materials.
- For GI I relied on Finalgather, with 100 rays, 0.5 density, 20 spread, and 3 diffuse bounces on reflect.
- For sampling, I decided to use 0.5 Overall quality, with 0.5 environment lighting quality. The environment lighting quality produced a lot of noise in shadows and reflections at the default 0.2 level, and the increase to 0.5 added a lot to the render time.
To keep results consistent between the two engines, I didn’t use any custom camera lens mapping or colour mapping in either setup. This means that in Vray I didn’t switch on the Physical Camera attributes, and left the colour mapping at the default Linear Multiply. In mental ray, I didn’t use either the photographic or the simple lens node.
The materials in both instances share pretty much the same nodes for controlling specularity, glossiness, and bump. One big difference between the MILA and Vray materials is the way glossiness is controlled: the ranges run in opposite directions (Vray is sharpest at a glossiness of 1, while MILA is sharpest at 0), and the two scale differently, which required different mapping nodes to translate between the values.
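As a rough illustration, here is a minimal Python sketch of what translating between the two glossiness conventions amounts to. The function name is mine, and the exponent is a hypothetical stand-in for the scaling difference, which in practice I matched by eye with mapping nodes:

```python
def vray_to_mila_glossiness(g, scale_exp=2.0):
    """Convert a Vray-style glossiness value (1.0 = sharp) into a
    MILA-style roughness value (0.0 = sharp).

    The (1 - g) inversion mirrors what a Maya reverse node does; the
    scale_exp exponent is only a placeholder for the two shaders'
    different scaling, to be tuned visually.
    """
    return (1.0 - g) ** scale_exp

# A perfectly sharp Vray material maps to MILA's sharp end:
print(vray_to_mila_glossiness(1.0))  # → 0.0
print(vray_to_mila_glossiness(0.0))  # → 1.0
```

In the actual scene this translation lives in the shading network itself, so both renderers can be driven from the same source values.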
The noise maps are procedural 3D Fractal and Cloud maps, which is why they look slightly different in the Vray render from mental ray. Vray doesn’t have access to the code for the Maya procedurals, so it creates its own version trying to stay as similar as possible. I couldn’t use texture maps, because these models do not have UVs, and I wanted to test the procedural noise maps in Vray anyway.
In order to replicate more realistic reflection Fresnel curves, I drew on Grant Warwick’s Mastering Vray tutorials and tried to recreate his techniques in Maya. I won’t spend time here explaining the need for such custom curves, as he does a very good job of that. Instead, I just want to focus on the method I used to adapt the system from 3DSmax to Maya, and from Vray to mental ray.
The greatest challenge in adapting the workflow was figuring out how to create an equivalent to the custom curve nodes that 3DSmax provides. For that purpose, I chose to rely on the very useful remapValue node in Maya.
Basically, I start by hooking the facingRatio output of the samplerInfo node into a reverse node, so that 0 becomes the front-facing angles and 1 the edge angles. I then plug the output of that reverse node into the inputValue attribute of the remapValue node, shape the value curve by plotting points manually, and finally connect the outValue attribute to the reflection attributes of the materials: either the Reflection Amount for the Vray material, or the Weight attribute for the MILA materials.
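For those who prefer reading logic to node graphs, here is a small Python sketch of what the facingRatio, reverse, and remapValue chain evaluates, assuming linear interpolation between the plotted control points (the curve values below are made up purely for illustration):

```python
from bisect import bisect_right

def remap_value(x, points):
    """Piecewise-linear evaluation of a remapValue-style curve.
    points is a sorted list of (position, value) control points,
    like the ones plotted manually on the node's value ramp."""
    positions = [p for p, _ in points]
    if x <= positions[0]:
        return points[0][1]
    if x >= positions[-1]:
        return points[-1][1]
    i = bisect_right(positions, x)
    (x0, y0), (x1, y1) = points[i - 1], points[i]
    t = (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

def reflection_weight(facing_ratio, curve_points):
    """facingRatio -> reverse -> remapValue, as in the node graph."""
    edge_factor = 1.0 - facing_ratio  # the reverse node
    return remap_value(edge_factor, curve_points)

# Hypothetical curve: weak reflection head-on, strong at grazing angles.
curve = [(0.0, 0.04), (0.5, 0.1), (1.0, 1.0)]
print(reflection_weight(1.0, curve))  # front-facing → 0.04
print(reflection_weight(0.0, curve))  # grazing edge  → 1.0
```

The remapValue node also offers smoother interpolation modes, so the straight-line interpolation here is only the simplest case.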
Here you can see an example of how the remapValue curve looks for the plastic reflection:
These images show the material graphs for the mila plastic setup, and the Vray plastic material setup respectively.
One thing to note is that the facingRatio attribute of the samplerInfo node does not output a perfectly linear result. To get a more accurate ramp of values from the front-facing angles to the side-facing angles, another adjustment curve would need to be applied before the result is plugged into the custom remapValue node. This can be done with another remapValue node. However, I chose not to bother with it, as the change isn’t huge; I would only do so if I needed absolute accuracy in reproducing specific material properties. In the large majority of cases, I have to make subjective visual adjustments anyway.
Overall, the setup was just as easy to put together on both platforms. The new advances in mental ray for Maya have made it much easier to set up an IBL solution, and the MILA materials are pretty easy to use once you get used to the new system. The biggest problem I had in the whole process was actually getting the mental ray satellite for Maya 2016 to work properly with Service Pack 2. It turned out that there is a separate install specific to SP2, which wasn’t showing up on the Autodesk Subscription page. It has to be downloaded from a different page, and, to make things even more confusing, the file isn’t labelled as being for SP2.
The render times were somewhat similar, with Vray tending to produce cleaner results overall in less time. The only problem with the Vray render was actually very apparent noise on the flat shaded floor area. The noise came from the LightDome samples, which had to be raised to 128 to produce a relatively clean result on that surface.
The mental ray render, on the other hand, produced perfectly clean shading on the floor area even at very low render settings. However, it also generated heavy noise on the gold material at low environment light quality levels, which is still apparent in this render. Compared to the new sampling quality attributes in mental ray for Maya 2016, I actually had a much easier time tweaking the sampling settings for my Vray render to prevent the sampler from working too hard. I was at a bit of a loss optimizing the sampling in this new version of mental ray, and I suppose it might be possible to lose that IBL noise without an increase in render time.
Another disadvantage of the mental ray setup was the more limited FinalGather solution, which was much slower at generating a similar amount of bounced light in the scene as the Vray GI. In the end, I decided to dial back the FinalGather settings a bit, which cut the render time significantly. This is also why the Vray render shows brighter, more yellowish shadows on the plastic dragon: it is the GI at work. The mental ray render can achieve a similar result with higher-accuracy Final Gather settings, but at the expense of more render time.
A surprising difference between the two implementations was actually the stronger colour from the Blackbody node in mental ray, which produces more saturated tints than the Colour temperature setting in the Vray light. They were both set to a temperature of 4500K, but for some reason, they render slightly differently. This is why the mental ray render appears slightly more reddish in the top right section.
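The saturation difference itself comes down to how each renderer maps a blackbody temperature onto RGB, which I can’t inspect from the outside. The warm tint at 4500 K, however, follows directly from Planck’s law, as this small sketch shows (wavelength choices for "red" and "blue" are my own approximations):

```python
import math

def planck_radiance(wavelength_nm, temp_k):
    """Spectral radiance of a blackbody (Planck's law), in consistent
    SI-derived units; enough to compare relative channel strengths."""
    h = 6.62607015e-34   # Planck constant, J*s
    c = 2.99792458e8     # speed of light, m/s
    k = 1.380649e-23     # Boltzmann constant, J/K
    lam = wavelength_nm * 1e-9
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * temp_k))

# At 4500 K a blackbody emits noticeably more in the red than in the
# blue, which is consistent with the warm tint in both renders:
red = planck_radiance(620, 4500)
blue = planck_radiance(450, 4500)
print(red > blue)  # True
```

How that spectrum then gets integrated and tone-mapped into the light colour is where the two implementations evidently diverge.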
In the end, the new MILA material proved to be a mixed bag in my opinion. On the one hand, it allows layering of various material attributes in a fairly simple fashion. On the other hand, it took me a while to figure out how some of its features worked, and the node linking takes some getting used to. It also doesn’t delete the nodes once the layers have been removed from the material, which is kind of annoying. Still, the render time improvements should be worth the trouble.
Hi, can you comment on how you transitioned Warwick’s techniques to Maya workflow? I’m trying to do the same on Maya 2016 and VRay 3.10.
Hi ICARO, I do provide an explanation of the steps within this article. You can find it under the Material Setup section. I show you there how to reproduce the custom curves from 3dsmax by using the remapValue node in conjunction with the samplerInfo node. That’s really the only part that’s different from 3dsmax.
Let me know if it’s still unclear.
Alright, I will try to reproduce now. Thank you very much.
Have you ever run into the problem where the sample rate render element in Vray only shows a blank white result? All the other elements (alpha, reflections, refractions) were working well.
Nice read btw. Helped paint a clearer picture performance wise between the two.
Hi Eurk, I’m afraid I haven’t encountered that problem yet, but then again I haven’t needed to check the sample rate pass very often either. I switched to Vlado’s method of sampling, as it saves a lot of tweaking time. The render time that can be saved by optimising sampling per object or light is often not worth the effort.
I would suggest that you ask about your problem in the Chaos Group forums, and provide them with a sample scene. The developers are very helpful.
Nice article, and I have one question.
For a 3D artist starting out with Autodesk Maya 2016, which is best to use: mental ray or Vray?
TT, if money is no object, then I would definitely start with Vray. However, the skills you learn using mental ray would be easily transferable to other render engines as well.
If you can buy a higher-end video card, then going the route of a GPU renderer like Redshift might be a better idea. It renders much faster than Vray (including Vray RT), and it’s half the price.
Thanks. This article helps me a lot.
I tried to set up the mental ray satellite for Maya, for distributed rendering. It never worked properly: the images would never load on the slave computers.
I tried everything, like switching off the firewalls and antivirus. Any idea why that could be?
Thanks for the article!
Jorge, that could be for several different reasons. Have a look at the article I wrote about setting up a Maya render farm with mental ray; there are some details in there about the satellites. It’s old, from 2011, but the setup process is still very much the same:
Do you ever use Maya + Vray for batch rendering for animations? If so, could I ask some noob questions?
Sorry, didn’t mean for that to be anonymous.
Maya has switched to Arnold as its preferred render engine, and it is worse than mental ray,
lacking very essential things such as light mapping with multiple UV channels. So stupid.
Yes, I agree, Vray is the best.
Hi, I’m facing a bit of an issue though: I cannot see Vray lights, shaders, or materials in my Maya 2016 Viewport 2.0. Can you please tell me how to fix this? It feels like working blind in the viewport until the final render comes.
Where can I download this dragon 3D model?
Hi Georgi, you can find the model on the Stanford 3D Scanning Repository:
Heya! Do you have a tutorial or node setup for the gold? Thanks for the awesome blog post!