Maya Render Farm
This past weekend I finally got my new computer parts together and built my first render farm. Although it is small at the moment (only two additional nodes aside from my workstation), it certainly makes a big difference. I’ll add more nodes in the future, but I’ll probably be limited to 4 nodes in total. The reason for this cap is the power breaker and the space I have available. Even with the low power consumption of the Core i7 2600K CPUs, it still adds up to a fairly high draw on the same circuit as my workstation.
Now, the reason for this post is to provide a mini walkthrough for setting up a small render farm at home. Not surprisingly, the information available out there is rather sparse and spread out. It took me a couple of days and nights to figure everything out, but hopefully this post will help you get up and running faster.
My goal for the render farm is to use it for distributed Maya and After Effects rendering. These are my two primary software packages for the freelance work I do, and rendering can certainly be quite time-consuming. Aside from the mentalRay-specific satellite setup, the rest of the steps for setting up the render farm will apply to pretty much any other 3D pipeline.
STEP 1 – hardware
The first thing to consider is the computer equipment. In my case, I am familiar and comfortable with putting together my own computer systems, so I ordered all of the parts individually. For this I used newegg.ca, as their prices are pretty much unbeatable. All you really need from a render farm node is the ability to connect to the network and render out images. You also want it to be as inexpensive, power-efficient, and small as possible. In my case, since these computers sit in the same room as my office, noise and looks were additional considerations. This led me to pick the following components:
CPU: Intel Core i7 2600k
Motherboard: ASUS P8H67-M LX (REV 3.0)
Power Supply: Corsair CX430
Chassis: Black Silverstone Grandia GD04B
RAM: G.SKILL Ripjaws Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666)
HD: Western Digital Caviar Blue 320GB
Some notes here: since this CPU has integrated graphics that an H67 motherboard can use, there was no need to buy a video card. For CPUs without integrated graphics, look for a motherboard that comes with an onboard video chip. Either way, there is absolutely no need to buy a separate video card.
Another consideration is the hard drive. In this case I simply went with the cheapest option available at the time. There is no need whatsoever for that much drive space, as all the node will have installed is the OS, the rendering software (Maya, After Effects, etc.), a few small diagnostics and VNC programs, and the render queue client. That will take less than 60GB, if even 50GB. A very fast drive is also unnecessary, so it makes little sense to pay the premium for an SSD. The only arguments for an SSD instead are weight, power draw, and heat, and at this time they’re not worth the extra cost.
There are smaller microATX cases out there, but cooling and noise are an issue with them. Smaller fans tend to be significantly louder, and because the render farm is located in the same room as my office, proper case cooling is all the more important, especially since I plan to overclock the CPUs by a moderate amount. A larger HTPC case also offers room for somewhat larger CPU coolers for overclocking headroom, while still being small enough to stack.
I also bought a cheap DVD drive, to be used in one of the nodes for installing the OS. Once the OS and basic software were installed on the first node, it is just a matter of cloning the HD. This, of course, applies only when the rest of your render nodes use the exact same CPU and motherboard (the RAM could probably differ, but I’m not 100% sure).
Lastly, there is the network setup to consider. In my case, since I have so few nodes, I opted to connect them all directly to my router. Once I add more nodes, I will have to get a network switch for the extra ports. Just make sure the ports on your router or switch are Gigabit (10/100/1000) ports, not just 10/100 Fast Ethernet ports. With more nodes, network access to the main workstation also becomes a bottleneck, and in that case having a NAS (network attached storage) set up would help things considerably.
STEP 2 – install OS and programs
Installing the OS and duplicating it to the other nodes is a pretty straightforward step. In my case I use Windows 7, the same as on my workstation. This ensures I will have the least amount of headaches with the networking setup overall, and headaches are the last thing I’m looking for :).
Additional programs that I installed are the following: CCleaner, TightVNC (for remote desktop control), Backburner (which comes with the Maya install), mentalRay Satellite (download from your latest update link), and HWiNFO32 (mainly for CPU monitoring).
An additional item to set up is the network configuration. You do want to give each node a static local IP, and to do that you need to follow the instructions here. These settings will also depend on your current router configuration, so you need to be familiar with that.
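If you prefer the command line, the same static IP can be assigned with netsh from an elevated command prompt. This is only a sketch: the adapter name (“Local Area Connection”) and the addresses are placeholders that need to match your own router’s subnet.
rem assign a static address, subnet mask and gateway to the node's network adapter
netsh interface ip set address name="Local Area Connection" static 192.168.1.101 255.255.255.0 192.168.1.1
rem point DNS at the router as well
netsh interface ip set dns name="Local Area Connection" static 192.168.1.1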
STEP 3 – remote desktop connection
VNC, Maya, mental ray and render queue manager setup is next. First of all, let’s take a look at the VNC setup. As you may know, Windows 7 Professional, Ultimate and Enterprise come with Remote Desktop capability. However, there are a couple of important limitations to this solution. First of all, you have to buy one of the more expensive Windows versions, and secondly you will run into licensing issues with certain types of FLEXlm licenses, because activating those licenses will not work through a Remote Desktop session. This is where a free VNC program like TightVNC comes in to save the day. It allows you to connect to your render node and log in remotely as though you were sitting right at it with a keyboard, mouse and monitor, which Windows Remote Desktop doesn’t quite do.
Here are a few tips for making TightVNC work as smoothly as possible. On your first connection you may notice that it is very laggy; there are a number of things you can do to fix this.
1. First of all, open the Configuration options for the TightVNC server running on your render node. Here you will want to change the Screen polling cycle to the minimum of 30ms.
2. Open the Control Panel > System and Security > System window, and click on Advanced system settings. Here, under Advanced -> Performance, click the Settings button. You will want to uncheck most of the fancy Visual Effects, especially Desktop Composition. This turns off the Aero interface, which speeds up the screen updates significantly. Disabling all the fading effects also helps considerably (a command-line alternative is shown after this list).
3. Change the desktop bit depth to 16-bit instead of the default 32-bit.
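As an alternative to the Visual Effects dialog, Aero can also be switched off for the current session by stopping the Desktop Window Manager Session Manager service from an elevated command prompt. This is an optional shortcut rather than part of the setup above, and the change only lasts until the service or machine is restarted:
rem stop the Desktop Window Manager Session Manager, which disables Aero compositing
net stop uxsms
rem start it again to bring Aero back
net start uxsms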
STEP 4 – Maya and mental ray
In order to distribute rendering to the render nodes, Maya has to be installed on each render node as well as on the main workstation. Fortunately, for small render farms this isn’t an issue, as Maya comes with a free network rendering license for a handful of additional nodes (5 or 8, depending on the version). See the FLEXlm network license server instructions; this part is pretty straightforward.
The tricky part is the mentalRay satellite setup. The satellite system is used specifically for distributing the render tiles of a single frame across multiple render nodes. It works both with mayabatch and within Maya with the Render Frame and IPR functions. First of all, download and install mentalRay Satellite on the render node; you don’t need to install anything extra on the main workstation.
The way the satellite works over the network is basically in two parts:
1. First of all, once you install the satellite, there will be a service that runs in the background on the render node machine, and that service listens for requests over the network. This will start automatically when you boot up the machine.
2. On the main workstation, Maya will look for a file called maya.rayhosts located in your C:/Users/YourAccount/Documents/maya folder. This file needs to contain the name or IP of each of your render nodes, one per line, like this:
RENDERNODE1-PC:7415
RENDERNODE2-PC:7415
RENDERNODE3-PC:7415
RENDERNODE4-PC:7415
RENDERNODE1-PC is the name of the render node computer, and 7415 is the port used by the mentalRay satellite service to listen for render requests. This port number is specific to Maya 2015. To find out exactly which port the satellite service uses, or to change it, simply open the services file in C:/Windows/System32/drivers/etc. You will find the mental ray service listed at the bottom of the file.
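For reference, the relevant line in the services file looks something like the entry below; the exact service name depends on your Maya/mentalRay version, so treat it as an illustration rather than the literal entry on your machine:
mi-raysat_maya2015   7415/tcp   # mental ray Satellite for Maya 2015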
Now, once you have the maya.rayhosts file created, you need to make sure that the mentalRay satellite executable is added to your Windows firewall exception list (or whatever firewall you have). You also need to go to your router’s control panel and make sure to add that 7415 port to allow connections for each of your render nodes. Once you have all this set up, Maya will be able to distribute render tiles/buckets to all render nodes specified in the maya.rayhosts file.
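On Windows 7 the firewall exception can also be added from an elevated command prompt with netsh instead of clicking through the Windows Firewall dialogs. The rule name below is arbitrary, and the port assumes the Maya 2015 satellite mentioned above:
rem allow incoming TCP connections on the mentalRay satellite port
netsh advfirewall firewall add rule name="mentalRay Satellite" dir=in action=allow protocol=TCP localport=7415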
One last thing to set up now is a way to enable/disable the maya.rayhosts file. Why do you need this? Because when you use a render queue manager like Backburner to distribute animation frames across multiple render nodes, each node will also automatically launch the mentalRay satellite if mentalRay is the main renderer. You would then have the satellite distributing buckets from each frame to all render nodes at the same time as the render queue manager is sending individual frames to each render node. This is bad :).
So, when you want to use the render queue manager to distribute individual frames, you first want to disable the maya.rayhosts file so that mentalRay doesn’t see the additional satellites. For this, I created two .bat files in the maya folder where the maya.rayhosts file is located. In one of them I wrote:
rename maya.rayhosts maya.rayhosts.bak
and in the other one I wrote the opposite, renaming it back to normal. I then made shortcuts to these two .bat files on my desktop toolbar.
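For completeness, the second .bat file simply reverses the rename:
rename maya.rayhosts.bak maya.rayhosts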
Keep in mind that Maya reads the maya.rayhosts file only upon startup, so if you need to enable or disable it, you will have to do it before starting Maya.
STEP 5 – render queue manager and auto login
Backburner now comes free with Maya, and it is a decent solution for a small render farm. It’s very easy to install and use. However, there are other solutions out there with more options; the downside is that they’re not free. One solution that I tested and really liked is the Deadline manager from Thinkbox, which has many additional options but also a somewhat more complicated setup.
Either way, regardless of which manager you use, there are a couple of things you will want to set up in order to have this work as smoothly as possible.
1. You need to make sure that your projects reside in a shared folder, and that you map that folder as a network drive on your render nodes. What I did was share my whole projects drive (I use a separate drive for my projects), then map that drive on the render nodes with the same letter it has on my main workstation. This keeps all file paths intact, as E:\Projects is the same on my workstation as it is on my render nodes (see the example commands after this list).
2. You need to set up Windows on your render nodes to log in automatically upon startup, so that your render queue manager can start automatically when you boot up a render node, without you having to remotely log in to each node first. To do this, follow these instructions, or do a search on the web for “windows 7 autologon registry”.
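As a rough sketch of both items, the commands below map the shared drive and set the well-known Winlogon autologon registry values on a node. The computer name, share name, account name and password are placeholders, and keep in mind that this approach stores the password in plain text in the registry:
rem map the workstation's shared projects drive to the same letter on the node
net use E: \\WORKSTATION-PC\Projects /persistent:yes
rem enable automatic logon on the render node (run on the node itself)
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d rendernode /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d YourPassword /f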
STEP 6 – clone the drive
Finally, in order to clone the HD to the drives for the other nodes, I used a free program called EaseUS Todo Backup. I put the second HD in an enclosure and connected it to the render node through USB. The cloning process took about 40 minutes and creates an exact replica, including the partitions. After that, all I have to do is install the cloned drive into its own render node, fire it up, and Windows will start with everything already set up.
From there, make sure to log in remotely and change the static IP to a new one.
Best of luck, and I’d love to hear your suggestions and contributions if you have better ways of setting this up.