Discussion in 'Article Discussion' started by WilHarris, 7 Oct 2004.
I like this mod very much, very well planned and worked out.
Wow, that must use a lot of power!
I never knew about this network rendering dealio. I do a fair amount of rendering myself also. Good job!
I love your render farm...
Do you have to use a different software licence (3ds max) on every machine to network render?
Agreed, exceedingly impressive. Great job
This is really remarkable. I'm interested in how the software on these systems is configured, as that isn't covered in the article. What OS is running on these boxes, and how are they set up so they automatically divide tasks up like that?
This is giving me all sorts of ideas!! Thanks a ton and great job!
Been eating lots of Mr Kipling's cakes recently?
It's an awesome idea and well executed, although it would be a right pain in the bum if there was a problem with one of the mobos, or with something on one of them. Looks like it takes a while to assemble and disassemble the "stack".
There's a couple of questions that come to the fore...
He didn't mention if he's using 10/100/1000 ethernet connections...
He didn't mention the OS he's running on any of the systems, nor how he set them up to work together (something good, I hope, like ClarkConnect)...
I'm guessing he most likely didn't use windows clustering support with remote desktop...
I would really like to know how he setup the software...
Awesome mod though! Really deserves a thumbs-up.
He did say that he was using a 10/100 switch.
How did you get the power button to work on all 5 systems at once?
I'm looking for something similar for my two server systems.
awesome performance for the price
is this a home made blade server?
if not, how is a blade server different from this amazing project?
Also waiting for answers to the questions asked a few posts up.
OS? Software support?
These systems are quite a recent phenomenon. A blade server is a computer system on a motherboard, which includes processor(s), memory, a network connection and sometimes storage. The blade idea addresses the needs of large-scale computing centres: reducing the space requirements for application servers and lowering costs. A typical application could be serving web pages. Along with a storage blade, they can be rack-mounted in multiple racks within a cabinet, together with common cabling, redundant power supplies and cooling fans. Blades can be added as required, often as "hot pluggable" units of computing, as they share a common high-speed bus. Most of the major computer manufacturers now offer these computing elements, although some concerns about a lack of standards are currently being expressed.
It could be considered a home made blade server.
Mental Ray: Mental Ray is a powerful rendering system fully integrated within 3ds max. It was previously available as an optional package, but this version makes it available as a standard feature.
Absolutely amazing. D@mn good job!
From what I know about setting up a render farm, each of those computers would be set up just like a normal computer with Win2k/WinXP, and then just networked together with some remote admin program to save you swapping the monitor cable about.
On his main machine, the one he uses to actually make the 3D work, you'd install the server program, and then on each of the 5 render comps you'd install the client software.
When the software is all configured (they've found each other on the network, permissions are set up, etc.) you can then set up what you want to render from the server program. You could assign 2 of the client computers to each start rendering out individual frames from one scene, tell another 2 comps to both work on the same frame from a different scene, and have the last computer working on another scene file.
The server computer will then make sure each client computer is supplied with the files it needs to complete the render (model files, colour/bump/specular maps, cached data from simulations, etc.). The client computer will do its assigned tasks and send the rendered frames back (either whole, or split up to be combined with another) to the server computer.
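For anyone curious what that job splitting actually looks like, here's a rough Python sketch of the idea. The machine names and function names are all made up for illustration; in practice the render manager that ships with the 3D package handles this for you, and it assigns frames as clients free up rather than all in advance:

```python
# Rough sketch of a render manager dividing frames among client machines.
# Client names and the scene file are hypothetical.
from collections import deque

clients = ["render1", "render2", "render3", "render4", "render5"]

def build_queue(scene, first_frame, last_frame):
    """One job per frame; the manager hands these out to idle clients."""
    return deque((scene, f) for f in range(first_frame, last_frame + 1))

def dispatch(queue, clients):
    """Hand pending frames round the clients until the queue is empty."""
    assignments = {c: [] for c in clients}
    while queue:
        for c in clients:
            if not queue:
                break
            assignments[c].append(queue.popleft())
    return assignments

jobs = build_queue("scene_a.max", 1, 50)
plan = dispatch(jobs, clients)
# 50 frames across 5 clients -> 10 frames each
```

A real manager also copies the scene assets to each client and collects the finished frames back, as described above; this only shows the frame bookkeeping.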
So they aren't actually clustered or anything, just 5 networked computers with the software splitting up the jobs. There's a company that makes small render farms; they do one called the Render Cube (link below). It's a big cube with a dual-CPU motherboard on each side and all the gubbins in the centre. An 8-CPU renderer, but it costs around $10,000.
There's also a company that does a render farm on a PCI card; it has 8 AR350 raytracing processors on it, made specifically to do repetitive rendering calculations. You can put 2 in a computer, so you'd have 16 little rendering CPUs in one box.
Thats all i can think of at the moment, i'm really rather stoned at present, just waffled on loads talking ****
Hey, I was wondering, would this work on video editing? Like e.g. TMPGEnc. It would surely speed up the time my dad spends editing movies.
Not unless you manually broke your video into little pieces and set up one computer to render each one.
These things work great for CG stuff because a CG video is just a lot of little frames, so for 50 frames you would assign 10 frames to each computer and let them chug away individually. You can do this with special software, typically included in most CG packages, or manually.
But without the special software it doesn't take advantage of the different systems.
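If you did want to split a video job up by hand, the arithmetic is just dividing the frame range into contiguous chunks, one per machine. A quick sketch (the function name is made up, and real video formats complicate this, since you can only cut cleanly at certain frames):

```python
def split_frames(total_frames, machines):
    """Divide a frame range into contiguous chunks, one per machine.
    The first few machines absorb any remainder frames."""
    base, extra = divmod(total_frames, machines)
    ranges, start = [], 0
    for i in range(machines):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size - 1))
        start += size
    return ranges

print(split_frames(50, 5))
# [(0, 9), (10, 19), (20, 29), (30, 39), (40, 49)]
```

Each machine would then encode its own chunk, and you'd join the pieces afterwards.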
VERY cool by the way.
Just one question: what OS do you use on the server?
Is it Windows, or Linux, or maybe some flavour of BSD like FireFlyBSD?
IMO a single power cable would look nicer. And maybe ducting all the exhaust to a single, larger port.
Is it real noisy?
The 400% increase in throughput speed is just crazy (I wouldn't expect it to hit 300% tbh, considering the better-quality components his 3GHz rig has doubtless got compared to the 5×1.8GHz farm).
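For what it's worth, a quick back-of-the-envelope on that, treating aggregate clock speed as a rough stand-in for render throughput (a big simplification that ignores memory, cache, and the component differences mentioned):

```python
# Naive clock-speed comparison: farm aggregate vs. the single workstation.
farm_ghz = 5 * 1.8    # five 1.8 GHz nodes
single_ghz = 3.0      # the one 3 GHz rig
ratio = farm_ghz / single_ghz
print(ratio)  # 3.0, i.e. ~300% of the single machine on raw clocks
```

So on raw clocks alone you'd expect about 300%, which is why anything near 400% is surprising.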
it's incredible that all this fits in a fairly small enclosure too
nice planning, nice implementation, nice job