Setting Up a Renderfarm

  • Software: Blender 2.67
  • Difficulty: Advanced

Accelerate your renders with a Render Farm!

A render farm is simply a collection of networked computers that work together to render a sequence in less time. By dividing your sequence between multiple machines, your total render time becomes a fraction of what it would be on a single computer. Most production studios fill huge rooms with server rack upon server rack, packed with thousands of rendering computers (or render nodes, as you'll hear them called). But render farms are also useful (and financially viable) for smaller teams or even individuals. A farm can be a custom-built cluster for a few thousand dollars, a collection of arbitrary computers, a computer lab with networked stations, or all your friends invited over with their laptops. The only requirement is that every machine meets the basic requirements for running Blender. They can run Linux (21:05 in the video), Mac OSX (12:22), Windows (14:22), or a mixture of each.

Blender makes it easy to take advantage of network rendering. The main thing I want you to take away from this tutorial is that this level of productivity can be harnessed by everyone regardless of budget.

I will be walking you through 3 stages. The first stage is the process I went through to build my own custom farm unit. This may not fit every Blender user's budget, but at the very least I hope it sheds some light on the possibilities and potential of building a customized farm. The second stage covers the networking process of connecting all the machines on a local network and controlling all render nodes from a master machine with VNC. This stage is required for network rendering whether you've built your own unit or you've linked arbitrary machines. The same process is explained for OSX, Windows, and Linux Fedora. In the third stage I'll show you how to take advantage of Blender's included addon "Network Render". It allows us to easily launch and dispatch render jobs, and also provides utilities to manage and monitor them.


Prefer to watch instead? Become a Citizen to see the video! Though the written version is a great guide, the walk-through in the video is much more thorough. Citizens also have access to the downloadable PDF.



Step 1: Building a Custom Farm Unit

1. Collect Your Hardware.  Your farm is completely customizable from hardware to software. The only limit is your budget. My budget is on the tighter side of the spectrum: $3000 for a 6-node unit housed in a mobile filing cabinet. But you are free to pick and choose whichever components you want, as long as you take the time to confirm that they are all compatible with one another.

  • Helmer Cabinet: The Helmer filing cabinet from IKEA serves as the perfect enclosure for a small farm. It's actually kind of bizarre how well it houses all the components. And you can pick one of these up for $40 - even in a few different colors. I feel like RED or YELLOW would make your farm run at least 2% faster.
  • Ethernet Hub: This is necessary for networking our render nodes. I don't see a reason to get a fancy one - this inexpensive hub from Trendnet has worked great for me. Remember we need at least 7 plugs - 6 nodes and one for connecting to a router.
  • Power Strip: Since the Helmer cabinet is mobile, with wheels on the bottom, it's best to have a dedicated power strip for easy plug-and-go mobility. I think I bought mine at Target for $20, but this one from CyberPower is an example of a nicer one. Again, keep in mind the outlet count needs to accommodate all your nodes as well as the ethernet hub - with this build, at least 7.
  • PC Screw and Accessory Kit: At electronics stores you can find a convenient package of assorted screws, brass mounts, and washers that are likely to be needed when building a computer. During my build I found myself needing several of these for random things.

Those first four items are one-time purchases. You'll need 6 of each of the following.

  • CPU (6x): There's room for debate here, but I tend to think that the quantity of cores is a little more important than speed when it comes to farm rendering. So I went with an 8-core 3.6 GHz Zambezi processor from AMD.
  • RAM (6x): The more the merrier when it comes to rendering. This Corsair 16 GB 2-stick combination fits best in my budget.
  • Motherboard (6x): I considered 3 major things when looking for a motherboard: dimensions, CPU socket, and graphics. It needs to fit on the floor of the drawer while still leaving enough room for the power supply and other components. Also, the board's CPU socket needs to match the desired CPU. And finally, we can save a lot of cash by not purchasing 6 graphics cards. Fortunately there are certain motherboards that come with a basic graphics solution built into the board. This does of course mean that our farm will not be GPU-renderable with Cycles, only CPU. This smaller ASRock board fits like a glove, matches the Zambezi AM3+ socket, and has an integrated graphics chip.
  • Storage (6x): A cheap 2.5 inch notebook hard drive is needed simply to boot and store the OS. I haven't used these drives for storing other data because, when rendering, my image files are immediately transferred to the output directory on my master machine - so it doesn't need to be high-capacity. I went with a 160 GB Scorpio Blue from Western Digital.
  • Power Supply (6x): This is a standard piece of computer hardware. Originally I bought the cheapest one I could find. Though after a few months they started overheating and dying on me. This Antec Earthwatts Green Power Supply is still inexpensive but has held up much better. Since each rendering node is lean - meaning no disc drive or graphics card to supply with power - I went with a wattage on the lower end of the spectrum, but be sure to check that it can support whatever components you choose.
  • Cooling Fan (6x): Another standard piece of computer hardware needed to pull cool air through the front ventilation while pushing hot air out the back. The dimensions are important here. The 80mm size is a perfect fit.
  • Power Button (6x): We'll want to turn our nodes on and off, of course. This is a great opportunity to add a custom touch to your farm!
  • LED (6x): I chose to add an LED to each node to tell me when the hard drive is in use. This isn't necessary, but I think it looks... cool and high tech!
  • Ethernet Cable (6x): Standard CAT6 cables to connect each node to the hub. I recommend no shorter than 3ft in length.

2. Assembly. Tools needed: Dremel with metal cutting discs, electric drill, screwdriver.

1. If you choose to build a farm using the IKEA Helmer cabinet, assemble it using the included instructions, but leave the back panel off so air can circulate through the nodes. You will also need to make a few modifications to the drawer panels: 2 holes in the back for the power supply fan and the cooling fan (dremel), 1 rectangular hole in the side for access to the motherboard's accessory ports (dremel), 1 hole punched out of the front label recess (dremel), 1 hole for a power button and another for the LED if you choose to use it (drill), and 4 holes in the drawer floor to secure the motherboard (drill).


2. To begin assembling the PC components, I start by locking the CPU into its socket on the motherboard. Then snap both RAM sticks into the memory slots.

3. Then secure the motherboard to the floor of the drawer. There are many holes in the board for fastening, but I only used 4, one near each corner. I made a template from the motherboard to drill the same holes in all 6 drawer floors, then screwed a brass mount into each hole in the floor before screwing the board onto the mounts.


4. Plug in the leads from the power button and LED to the appropriate pins on the motherboard. They're labeled clearly enough on the board itself.

5. Next, install the power supply. Slide it into position, lock it in with screws, then start plugging leads into the motherboard: one for the CPU fan, the large 24-pin plug, and another to supply the CPU.


6. Then secure the 80mm cooling fan to the back panel beside the power supply. This should be a very snug fit. Again, it's bizarre how well these components fit in these Helmer drawers. Then find the appropriate plug on the motherboard labeled for fan power and pop the smaller plug into the board. You'll need to plug the bigger one into the matching lead from the power supply.


7. For the hard drive, find a SATA plug on the motherboard and connect it to the SATA plug on the hard drive. Then grab a SATA power lead from the supply and plug it into the hard drive as well. There should be a convenient space behind the fan to tuck the drive away.


8. Now the node is finished and ready to slide into the cabinet. But notice - once again - how there's a convenient amount of space on the inner side of the cabinet housing for running ethernet cables through. Plug one into the node as you slide it in.

9. Repeat that process for the other 5 nodes. Then plug each node cable into the ethernet hub and connect the hub to your router.


10. The nodes are now ready for an operating system. For installation you will need to plug in a monitor, a USB disc drive or thumb drive with the OS install data, a keyboard, and a mouse. Install the desired OS on all nodes. For my farm I chose Linux Fedora 17 LXDE, which I found to be a very streamlined distro compared to others like Ubuntu. But again, the OS is completely up to your personal preference.


Step 2: Networking

At this point the task at hand is general networking, so it doesn't matter whether you've built custom render nodes or you're simply connecting arbitrary machines; the concept is the same. As long as each node is connected via ethernet to a router, it's technically part of the local network. But how do we want our farm network to function? I suggest you set up each render node to be controlled by VNC from a master machine.

[Diagram: farm network layout]

Virtual Network Computing (VNC) allows us to connect to our farm nodes and control them as if they had a mouse, keyboard, and monitor plugged in. Otherwise we'd have to plug those peripherals into each node to perform the non-automated tasks of network rendering, like launching Blender and enabling slave mode. I'm sure those tasks can be automated, but I don't know how to set that up. And when a render fails or crashes, it's convenient to "VNC into" that node from a master computer to relaunch Blender or fix other hiccups in the automation.

VNC for Mac

1. Open System Preferences > Sharing and on the left side enable Screen Sharing.

2. Notice the information under the green light that explains how to access your computer:  vnc://192.168.1.2 (IP address)

For security reasons I recommend you set a VNC password so the requesting machine can be verified. Click Computer Settings, check "VNC viewers may control screen with password", and type a password. Confirm with OK.


3. Your Mac is now accessible to other machines (regardless of platform). In order to access other machines, switch to the Finder and in the menu bar press Go > Connect to Server (or Command + K). Type "vnc://192.168.1.x" to target the desired computer. NOTE: For Mac-to-Mac screen sharing you will need to type a valid username/password for the target machine instead of the VNC password set above. For Windows-to-Mac or Linux-to-Mac you only need to type the VNC password.
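
If you prefer the terminal, macOS can also launch a screen-sharing session straight from the command line. A minimal sketch, assuming the target node is at the 192.168.1.2 address shown above:

    # opens the built-in Screen Sharing app and connects to the node
    open vnc://192.168.1.2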


4. Set a static IP address. By default, most computers (regardless of platform) have their IP address determined dynamically. However, when using VNC it gets annoying when your target computer's IP keeps changing, causing you to hunt for the new one. To avoid that headache we can set up a static IP address. Go to System Preferences > Network, click the "Automatic" location at the top of the window, and then click "Edit Locations". Add another location by pressing the "+" button and rename the new entry to something appropriate like "Static".


5. This creates a new set of network settings. Depending on your connection type (Wi-Fi, Ethernet, etc.), choose the active one on the left and click the "Advanced" button in the bottom right corner of the window. In the "TCP/IP" tab set Configure IPv4 to "Manually" and type in a valid, unused IPv4 address, Subnet Mask, and Router. NOTE: You may have to add a DNS entry in the "DNS" tab. This is usually the same as your router. Click Apply.
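
The same settings can be applied from the Terminal with Apple's networksetup tool. A minimal sketch, assuming the service is named "Ethernet", 192.168.1.50 is unused on your network, and the router sits at 192.168.1.1:

    # list the service names macOS knows about (Ethernet, Wi-Fi, ...)
    networksetup -listallnetworkservices
    # set a manual address: <service> <IP> <subnet mask> <router>
    sudo networksetup -setmanual "Ethernet" 192.168.1.50 255.255.255.0 192.168.1.1
    # point DNS at the router as well
    sudo networksetup -setdnsservers "Ethernet" 192.168.1.1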


VNC for Windows

1. I recommend downloading the TightVNC application from tightvnc.com. Install it with default settings.

2. On the right side of the taskbar click the little arrow pointing up and choose the VNC icon to reveal the configuration.

3. Set the Primary Password to verify the requesting computer. The optional View-only password allows a connecting machine to see the screen without controlling it.


4. Set a static IP address by right-clicking the network icon on the right side of the taskbar and choosing "Open Network and Sharing Center". Then click on your connection type (in this case it's "Local Area Connection" for me) to open the settings window. Click Properties, then Internet Protocol Version 4 (TCP/IPv4) > Properties. Choose "Use the following IP address:" and type in the desired and available IP address, Subnet mask, Default gateway, and Preferred DNS server address. NOTE: You may want to enable "Validate settings upon exit" to make sure the address info is valid.
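
If you'd rather script it, the same values can be set from an elevated Command Prompt with netsh. A sketch, assuming the connection is named "Local Area Connection", 192.168.1.51 is unused, and the router is 192.168.1.1:

    rem static address: name=<connection> static <IP> <mask> <gateway>
    netsh interface ip set address name="Local Area Connection" static 192.168.1.51 255.255.255.0 192.168.1.1
    rem use the router as the DNS server
    netsh interface ip set dns name="Local Area Connection" static 192.168.1.1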


5. If you installed TightVNC with default settings, an exception should have been added to the firewall to allow VNC connections to and from other machines. To connect to another VNC-enabled machine, open the TightVNC Viewer application and type the target IP address in the "Remote Host" field. If the target computer requires a password, you will be prompted to enter it.


VNC for Linux (Fedora LXDE)

1. Linux is more of a manual process than the other two. In preparation for VNC we need to enable auto-login for Fedora, because VNC won't run until a user is logged in. Open up a terminal window and type "sudo nano /etc/lxdm/lxdm.conf". After typing your admin password correctly, the configuration file will open in the terminal window for editing. Follow the first set of instructions: "Uncomment and set autologin username to enable autologin". Control + X will close the file and give you the option to save changes. Be sure to save.
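
For reference, the edited portion of the file ends up looking something like this - a sketch assuming the render node's user account is named "render":

    # /etc/lxdm/lxdm.conf (excerpt)
    # uncomment and set autologin username to enable autologin
    autologin=render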


2. Install the x11vnc application by typing in the terminal: "sudo yum install x11vnc". This will query the repository for the application and ask your permission to download/install along with its dependencies.

3. Set a VNC password by typing in the terminal: "x11vnc -storepasswd", then the desired password, followed by the same password for verification. This saves the password file to its default location at ~/.vnc/passwd.


4. Set x11vnc to run automatically upon login by typing in the terminal: "sudo nano ~/.config/lxsession/LXDE/autostart". If the file doesn't exist, nano will create it as a blank file. Add this entry to the file: "@x11vnc -forever -usepw -geometry 800x600" (application name, run constantly as long as the computer is on, use the password stored in ~/.vnc/passwd, scale the screen resolution to the specified dimensions). Exit with Control + X and save the file.
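
Putting the Linux-side setup together, the terminal session looks roughly like this - a sketch assuming Fedora with yum and the LXDE autostart path used above:

    # install the VNC server and store a password at ~/.vnc/passwd
    sudo yum install x11vnc
    x11vnc -storepasswd
    # run it once by hand to test before relying on the autostart entry
    x11vnc -forever -usepw -geometry 800x600
    # then connect from another machine on the LAN with any VNC viewer, e.g.
    # vncviewer 192.168.1.61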


5. Make an exception in the firewall for the VNC port. Click the Fedora "start" button in the bottom left-hand corner, then Administration > Firewall, and verify your credentials. Click the "Other Ports" category in the left-hand column and add port 5900 with the tcp protocol and port 5900 with the udp protocol.
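
The same exception can be made from the terminal on Fedora installs that use the classic iptables service - a sketch, not necessarily identical to what the GUI tool writes:

    # open the default VNC port (5900) for tcp and udp
    sudo iptables -I INPUT -p tcp --dport 5900 -j ACCEPT
    sudo iptables -I INPUT -p udp --dport 5900 -j ACCEPT
    # persist the rules across reboots
    sudo service iptables save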


6. Set a static IP address by clicking the Fedora "start" button, then Preferences > Network Connections. Choose the active connection tab for your system (ethernet cable = Wired, Wireless, etc.), which is "Wired" in my case. Click the active settings (ex. System p5p1) and press Edit. Choose the IPv4 Settings tab, set "Method" to Manual, and add a new address entry with the desired/available IP address, Netmask, Gateway, and DNS server. Save changes.
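
Alternatively, the same settings can live in the interface config file that Fedora's network scripts read. A sketch, assuming the interface is p5p1 (as in my case), 192.168.1.61 is unused, and the router is 192.168.1.1:

    # /etc/sysconfig/network-scripts/ifcfg-p5p1 (example)
    BOOTPROTO=static
    IPADDR=192.168.1.61
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
    DNS1=192.168.1.1
    ONBOOT=yes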


At this point we have successfully networked a collection of computers to be controlled by a singular master machine! Now how about we actually render something?

Step 3: Network Rendering with Blender

1. Blender comes with a great addon called Network Render that makes it easy to launch and dispatch render jobs across your farm. Launch Blender and open a file that's ready to be rendered as a sequence. This scene should be saved with your render settings established whether it be for Cycles or Internal.


2. We must enable the addon by opening File > User Preferences, clicking the "Addons" tab, and searching for "Network" in the search field. The Network Render addon should appear because it's included in trunk. Enable it.
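
On the render nodes you can skip the preferences window entirely, since Blender can enable addons from the command line at launch. A minimal sketch - the --addons flag takes a comma-separated list of addon module names, and netrender is the Network Render module:

    # launch Blender with the Network Render addon already enabled
    blender --addons netrender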


3. Now choose the "Network Render" entry that appears alongside "Blender Render" and "Cycles" in the render engine enumerator at the top of the window. Your render settings now have 3 options: Client, Master, and Slave. For a render job to launch we need one instance of Blender to be the Client, one instance to be the Master, and however many Slaves we can muster. The client configures the job settings; I usually run the client on my master machine. The master node receives the job and distributes it among the slaves; I usually run the master on my master machine as well. The slaves do the actual work of rendering the frames, with each render node acting as a slave. NOTE: I recommend you use the same version of Blender on all machines.


4. Set the original Blender instance with the render scene loaded as the client. Open another instance via the terminal, set it as the master, and click Start Service. The master and slave instances don't need to have the render scene loaded; they do, however, need a camera present in the scene in order to run the service. Launch a single instance of Blender on each of your render nodes, set them as slaves, and click Start Service on each. Ideally, the slaves will automatically connect to the master and report "Network render connected to master, waiting for jobs". If it fails with the error "No master server on network", help it along by typing the IP address of the machine running the master into the "Address" field under the "Start Service" button.
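
For example, the terminal commands on my Linux machines look something like this - a sketch assuming the scene file is named my_scene.blend (set the Client/Master/Slave mode and click Start Service in each instance as described above):

    # on the master machine - instance 1: the client, with the render scene loaded
    blender my_scene.blend &
    # on the master machine - instance 2: the master (any scene with a camera will do)
    blender &
    # on each render node (via VNC): one instance to run as a slave
    blender --addons netrender &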


5. Make sure all slave nodes are connected to the master and waiting for jobs. On the client, click the refresh button (above the Open Master Monitor button) to double-check the connection. Then click the refresh button in the "Slaves Status" drop-down. If all your connections are good, you will see the name of each slave node listed.

6. With the connections established, on the client we simply need to set the job Type to "Blender" (for rendering .blend scenes), set the Engine to match our render scene engine, and set the Output to the desired directory and file type. Click "Animation on network" to submit the job to the farm!


7. For monitoring your job's status and progress, click Open Master Monitor. This launches a web-based utility that conveniently tracks all the information about your job, including progress, slave status, currently rendering frames, render duration for each node, etc. It also gives you the ability to pause, reset, and cancel any and all jobs. Once your job finishes, be sure to marvel at the job's render time versus the cumulative render time!
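
The monitor is just a web page served by the master instance, so it can also be opened in a browser on any machine on the network. Assuming the addon's default port of 8000 and a master at 192.168.1.50, the address would be:

    http://192.168.1.50:8000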


Congratulations, you have a working render farm! I realize that setting up a render farm is a fairly involved process, especially if you build custom render nodes, but hopefully you can see just how valuable it is once it's set up - whether it's a custom build like this one or a bunch of arbitrary computers. Thanks for reading!

