The Basics
Cross platform (WINDOWS, Linux, OS X)
•  Graphical User Interface (GUI)
•  Command Line Interface
•  Software Development Kit (API)

Application Path Translations
•  Path translation management
•  Mixed environment rendering
•  more…

Job Management
•  Application-specific job request templates
•  Priority queue
•  Job control (suspend, cancel, re-queue, etc.)
•  Job status monitoring
•  Email/SMS notifications
•  SSH/SFTP/FTP image transfer
•  Tile rendering
•  Efficient load balancing
•  Multiple queuing algorithms
•  more…

Network Management
•  Automatic node detection
•  Client-Master-Slave configuration
•  Node pool management
•  Active/Inactive node assignments
•  No configuration files
•  Usage statistics (timeline, CPU, memory, etc.)
•  more…

Power Management
•  Automatic render farm shutdown
•  Wake-On-LAN (WOL) management
•  Remote management (shutdown, reboot, etc.)
•  more…

[Diagram: an example farm on the 192.168.0.0/24 network: the SquidNet MASTER (192.168.0.75), client workstations WORKSTATION01 through WORKSTATION04 (192.168.0.50 to .53), render nodes RENDERNODE01 through RENDERNODE08 (192.168.0.100 to .107), network storage (192.168.0.108), and a modem (LAN 192.168.0.1, WAN 76.23.14.23), all connected through a network switch.]

•  Client workstations send render requests to the Master controller.
•  Workstations store ALL scene content on the NAS. They also retrieve render results from the NAS.
•  The Master controller manages slave rendering operations (start, stop, etc.).
•  Slaves return render results to the Master controller.
•  Slaves access scene content on the NAS and store rendered images on the NAS.

NAS Storage
•  Network folder: A directory on a computer or NAS that is available to all computers on the network.
•  UNC Path: A reference to a folder that's accessible on the local network. For example, \\NAS-SERVER\maya-projects is a UNC path.
•  Mapped Drive: A WINDOWS-only shortcut to a network folder. For example, local mapped drive M:\ can point to UNC path \\NAS-SERVER\maya-projects. In this case, local drive M:\ and UNC path \\NAS-SERVER\maya-projects both point to the same content.

All of the following paths point to the same physical folder on the NAS (NAS box: NAS-SERVER, exported folder: /Volume_1/maya-projects):
•  WINDOWS: M:\myscene.mb or \\NAS-SERVER\maya-projects\myscene.mb
•  OS X: /Volumes/Volume_1/maya-projects/myscene.mb
•  Linux: /mnt/maya-projects/myscene.mb

Additional information: http://en.wikipedia.org/wiki/UNC_path#Uniform_Naming_Convention
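To make the equivalence concrete, here is a minimal sketch (an illustration only, not SquidNet code) that rewrites a path from one platform's spelling of the share to another's, using the NAS-SERVER example above:

```python
# Illustration only: the same NAS folder spelled four ways (see above).
ROOTS = {
    "windows_mapped": "M:\\",
    "windows_unc":    "\\\\NAS-SERVER\\maya-projects\\",
    "osx":            "/Volumes/Volume_1/maya-projects/",
    "linux":          "/mnt/maya-projects/",
}

def translate(path: str, src: str, dst: str) -> str:
    """Rewrite a path from one platform's root to another's."""
    root = ROOTS[src]
    if not path.startswith(root):
        raise ValueError(f"{path!r} is not under the {src} root")
    rel = path[len(root):]
    # Flip separators when crossing between Windows and POSIX spellings.
    if ("\\" in ROOTS[dst]) != ("\\" in root):
        old, new = ("\\", "/") if "\\" in root else ("/", "\\")
        rel = rel.replace(old, new)
    return ROOTS[dst] + rel

print(translate("M:\\myscene.mb", "windows_mapped", "linux"))
# -> /mnt/maya-projects/myscene.mb
```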
NAS Storage

•  Make sure all network folders are created.
•  Make sure all network folders (SAMBA, NAS, etc.) are accessible, with read and write permissions, from all render farm nodes.
•  For WINDOWS machines:
   •  All SquidNet installation accounts MUST have ADMINISTRATOR privileges.
   •  All nodes MUST have the same ADMIN account name AND the same password.
•  WARNING: WINDOWS (non-Server versions) limits the number of connections to network folders. If your farm has more than 5 nodes, it's recommended that you use a NAS for content storage. Check with your IT professional on configuration settings.
•  Verify account permissions: confirm read/write access to the NAS storage from every node (a quick check is sketched below).
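One way to verify read/write access from each node is a probe script along these lines (a sketch; the share paths are examples to replace with your own):

```python
# Minimal read/write probe for network folders -- run it on every farm node.
# The share paths below are examples; substitute your own folders.
import os
import uuid

SHARES = [r"\\NAS-SERVER\maya-projects", "/mnt/maya-projects"]

for share in SHARES:
    probe = os.path.join(share, f".squidnet-probe-{uuid.uuid4().hex}")
    try:
        with open(probe, "w") as f:   # write permission
            f.write("ok")
        with open(probe) as f:        # read permission
            assert f.read() == "ok"
        os.remove(probe)
        print(f"OK    {share}")
    except OSError as err:
        print(f"FAIL  {share}: {err}")
```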
Installation

WINDOWS:
•  Install while logged in under an ADMIN account.
•  During installation, enter the login information for any ADMIN account. This ADMIN account must exist on all WINDOWS nodes with the same password, and it must also exist on the NAS server.
•  The SquidNet server runs as a background service.

LINUX and OS X:
•  Use the root account for installation.
•  LINUX: standard tarball installation. From a shell, untar and run the squidnet-install.sh script.
•  OS X: use the DMG installer.
•  Enter the local node's computer name and the name/password for the ADMIN account.
The Background Service

•  On each render farm computer, SquidNet runs silently as a background process, waiting for commands from the local user interface, the command line interface, the SDK API, or another node on the farm.
•  On WINDOWS, background processes are called services. On OS X and Linux they're called daemons. Generically, background processes are called "services" on any platform.
•  On the MASTER node, the local UI communicates directly with the local SquidNet service.
•  On client nodes, the local UI connects to the local SquidNet service AND to the MASTER service.
•  On slave nodes, the local UI connects only to the local service.
•  It is never necessary to log in to a node to get the SquidNet service running: the local service starts when the computer starts up.

[Diagram: the SquidNet background service accepts connections from the Graphical User Interface, the Command Line Interface, the SDK API interface, and remote SquidNet servers.]
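As a small illustration of that topology, a script like the following could confirm that a client node can reach both its local service and the MASTER service. The port number is a placeholder assumption, not SquidNet's documented port:

```python
# Hypothetical reachability check mirroring the client-node topology above:
# the local UI talks to the local service AND to the MASTER service.
import socket

SERVICE_PORT = 9871   # placeholder; not SquidNet's documented port
TARGETS = {"local service": "127.0.0.1", "MASTER service": "192.168.0.75"}

for name, host in TARGETS.items():
    try:
        with socket.create_connection((host, SERVICE_PORT), timeout=2):
            print(f"{name} at {host}: reachable")
    except OSError as err:
        print(f"{name} at {host}: unreachable ({err})")
```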
Configuration Settings

•  By default, user configuration settings (job profiles, application paths, etc.) are stored in the <install-path>\settings folder. If SquidNet is uninstalled and reinstalled, all user configuration settings are lost, so it's recommended that you change the default folder location in the preferences window.
•  In a render farm where a single workstation submits jobs, the configuration path can be set to any local hard drive path (example: C:\Squidnet-config). Make sure to back it up often.
•  In multi-workstation environments, set the configuration path to a folder on a NAS box that all workstations can access. This avoids having to duplicate the same settings on each workstation.
•  The configuration settings folder is only used by submitting workstations. MASTER and SLAVE nodes do not need the configuration path set in their local user interface.

[Diagram: single workstation: configuration path on a local drive. Multiple workstations: configuration path on a shared UNC path on NAS storage.]
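The warning about losing settings on reinstall can be turned into a quick check (a sketch; both paths are hypothetical examples):

```python
# Sketch: warn if the settings folder still lives under the install path,
# since uninstalling/reinstalling SquidNet would wipe it (see above).
# Both paths are hypothetical examples.
INSTALL_PATH = r"C:\Program Files\SquidNet"
SETTINGS_PATH = r"C:\Program Files\SquidNet\settings"   # the default

if SETTINGS_PATH.lower().startswith(INSTALL_PATH.lower()):
    print("WARNING: settings are inside the install path and will not survive "
          "a reinstall; move them (e.g. C:\\Squidnet-config, or a NAS folder "
          "in multi-workstation setups) via the preferences window.")
```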
Node Types

There are 4 different node types:
•  PEER: The default node type when SquidNet is installed.
•  CLIENT: Defines and submits job requests to the farm. Can process jobs at low priority, when the user is logged out, or never.
•  MASTER: Manages the render farm network. Can be configured to process jobs. Can also assign specific master-like permissions to client nodes.
•  SLAVE: Processes job requests only.
•  When configuring a CMS (Client-Master-Slave) setup, decide which node will be the MASTER first, then set up the clients and slaves accordingly.
•  To change the configuration, convert all CMS nodes back to PEERs, starting with the slaves and clients. Un-configure the MASTER node last.

[Diagram: a freshly installed node defaults to PEER and can then be assigned as a SquidNet Master, Client, or Slave.]
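The teardown order (slaves and clients back to PEER first, the MASTER last) can be pictured with a small sketch of the role model; the farm contents here are invented for illustration:

```python
# Sketch of the four node types and the CMS teardown order described above.
from enum import Enum

class Role(Enum):
    PEER = "default after install"
    CLIENT = "defines and submits jobs"
    MASTER = "manages the farm network"
    SLAVE = "processes jobs only"

def unconfigure(farm: dict) -> None:
    """Convert every CMS node back to PEER: slaves and clients first, MASTER last."""
    for role in (Role.SLAVE, Role.CLIENT, Role.MASTER):
        for node, current in list(farm.items()):
            if current is role:
                farm[node] = Role.PEER
                print(f"{node}: {role.name} -> PEER")

farm = {"RENDERNODE01": Role.SLAVE, "WORKSTATION01": Role.CLIENT,
        "MASTER01": Role.MASTER}
unconfigure(farm)
```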
Render Farm Pools

•  Render Farm Pool: A set of nodes on a render farm allocated to perform a specific task or specific operations.
•  SquidNet has a default pool called "NETWORK" that every node is a member of. By default, all jobs render to the "NETWORK" pool.
•  Typical scenario: Based on node performance, segment render farm nodes so that higher-priority jobs always get processed on faster machines (sketched below).

[Diagram: defined pools NIGHTLY, STAFF, HIGH PERFORMANCE, and LOW PERFORMANCE alongside the default NETWORK pool, with RENDERNODE10's pool assignments shown among the available nodes.]
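The "faster machines for higher-priority work" scenario might look like this (pool names from the slide; the benchmark scores and threshold are invented):

```python
# Sketch: segment nodes into performance pools; every node stays in NETWORK.
# Benchmark scores and the 7.0 threshold are invented for illustration.
SCORES = {"RENDERNODE01": 9.1, "RENDERNODE02": 4.2, "RENDERNODE10": 8.7}

pools = {"NETWORK": set(SCORES), "HIGH PERFORMANCE": set(), "LOW PERFORMANCE": set()}
for node, score in SCORES.items():
    target = "HIGH PERFORMANCE" if score >= 7.0 else "LOW PERFORMANCE"
    pools[target].add(node)

for name, members in pools.items():
    print(f"{name:17} {sorted(members)}")
```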
Application Paths

•  In order to process job requests, SquidNet needs to know where applications are installed on each node.
•  Different versions of the same application can be installed on each node.
•  Use the Application Path Manager to define "profiles" that contain the absolute path to a given rendering application on each render node.
•  Create one profile for each application. Each profile can have multiple entries, but only one per node.

Register installation paths with the Application Path Manager, for example:
•  RENDERNODE01: LightWave installation path C:\Program Files\...\lwsn.exe
•  RENDERNODE02: Modo installation path C:\Program Files\...\modo_cl.exe
•  RENDERNODE03: 3DSMAX installation path C:\Program Files\...\3dsmaxcmd.exe
•  RENDERNODE04: Maya installation path C:\Program Files\...\render.exe
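Conceptually, a profile is a per-node map of absolute executable paths. A data-shape sketch (paths copied from the slide, still truncated):

```python
# Data-shape sketch of Application Path Manager profiles:
# one profile per application, at most one path entry per node.
# The "..." path segments are truncated exactly as on the slide.
PROFILES = {
    "LightWave": {"RENDERNODE01": r"C:\Program Files\...\lwsn.exe"},
    "Modo":      {"RENDERNODE02": r"C:\Program Files\...\modo_cl.exe"},
    "3DSMAX":    {"RENDERNODE03": r"C:\Program Files\...\3dsmaxcmd.exe"},
    "Maya":      {"RENDERNODE04": r"C:\Program Files\...\render.exe"},
}

def renderer_path(app: str, node: str) -> str:
    """Look up where an application lives on a given render node."""
    try:
        return PROFILES[app][node]
    except KeyError:
        raise LookupError(f"{app} has no registered path on {node}") from None
```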
Path Translations

•  Translation paths allow SquidNet to submit the same job to different platform types (WINDOWS, Linux, and OS X). They are not needed if every node runs the same operating system platform.
•  Each entry "maps" the same physical network share location to one translation path.
•  Embed the $XPATH() macro in a template wherever substitution is required.

The same physical folder is mapped to a single translation path:
•  WINDOWS: \\raid-server00\volume_1\SquidNet
•  Linux: /mnt/raid/SquidNet
•  OS X: /Volumes/Volume_1/SquidNet
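The general idea of the substitution can be sketched like this (the macro name comes from the slide; the expansion logic is an assumption about the concept, not SquidNet's implementation):

```python
# Sketch of $XPATH() expansion: one logical share, three platform spellings.
XPATH = {   # translation-path table from the slide
    "WINDOWS": r"\\raid-server00\volume_1\SquidNet",
    "Linux":   "/mnt/raid/SquidNet",
    "OS X":    "/Volumes/Volume_1/SquidNet",
}

def expand(template: str, platform: str) -> str:
    """Replace the $XPATH() macro with the platform's spelling of the share."""
    return template.replace("$XPATH()", XPATH[platform])

tmpl = "render -scene $XPATH()/scenes/shot01.mb"   # hypothetical template line
print(expand(tmpl, "Linux"))  # -> render -scene /mnt/raid/SquidNet/scenes/shot01.mb
print(expand(tmpl, "OS X"))   # -> render -scene /Volumes/Volume_1/SquidNet/scenes/shot01.mb
```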
Scene Content

•  Any object (maps, textures, etc.) embedded in a scene file MUST NOT be located on a local hard drive (C:\, D:\, etc.). It MUST be physically located on a network share (\\NAS-SERVER\maya-projects\maps…\...).
•  If objects are stored locally, render jobs will render just fine on the node where the objects exist but WILL NOT render on remote nodes, because the objects are not present on those nodes' local drives.
•  Most applications will produce an error for any job that has inaccessible scene objects.

BAD (local references on a local drive):
•  C:\maya-projects\maps….\....
•  D:\objects\textures\….\....

GOOD (network paths on NAS storage):
•  \\NAS_SERVER\maya-projects\maps….\....
•  \\NAS_SERVER\objects\textures\….\....
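A pre-flight check in the spirit of this rule could scan a scene's asset paths and flag local-drive references (a sketch; the asset list is illustrative):

```python
# Sketch: flag scene assets that live on a drive-letter path instead of a
# network share. The asset list is illustrative.
import re

ASSETS = [
    r"\\NAS_SERVER\maya-projects\maps\brick.png",   # GOOD: network path
    r"C:\maya-projects\maps\brick.png",             # BAD: local reference
    r"D:\objects\textures\wood.jpg",                # BAD: local reference
]

DRIVE_LETTER = re.compile(r"^[A-Za-z]:\\")   # matches C:\, D:\, M:\, ...

for path in ASSETS:
    verdict = "BAD (local)" if DRIVE_LETTER.match(path) else "GOOD (network)"
    print(f"{verdict:15} {path}")
```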
Projects

•  SquidNet uses a project-based framework to track job profiles.
•  All submitted jobs are placed in specific project folders.
•  At install time, a default project folder is created (SQUIDNET DEFAULT).
•  Use the Project Manager to create new project folders.

[Screenshot: the Project Manager window, showing project folders and quick-launch buttons.]
Job Templates

•  SquidNet job templates contain processing instructions for supported rendering and compositing applications.
•  Each template contains application-specific and common fields that define how the job is to be processed.
•  When submitted, a job template can be saved into a job profile. Job profiles can later be resubmitted with the same or altered processing parameters.
•  Group job profiles according to project, and use the Project Manager to define new projects.

[Screenshot: a job template with its common fields and application-specific fields.]
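As a mental model, a template pairs common fields with application-specific ones, and a saved profile is a reusable copy of the filled-in template. The field names below are assumptions for the sketch, not SquidNet's exact schema:

```python
# Illustrative shape of a job template: common fields + app-specific fields.
# Field names are assumptions, not SquidNet's exact schema.
import copy

template = {
    "common": {"project": "SQUIDNET DEFAULT", "priority": 12,
               "frames": "1-30", "pool": "NETWORK"},
    "maya":   {"scene": r"\\NAS-SERVER\maya-projects\myscene.mb"},
}

profile = copy.deepcopy(template)    # saved as a job profile on submission...
profile["common"]["priority"] = 4    # ...and resubmitted later with altered parameters
```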
The Job Queue

•  In render farms, a job queue is where rendering requests are stored for processing.
•  Typically, jobs are processed in first-come-first-served (FIFO) order. With SquidNet, jobs are processed according to a user-defined priority level (0 through 24, with 0 the highest).
•  Client nodes submit jobs to the queue. The Master node manages the queue. Slaves are assigned jobs from the queue by the Master node.
•  Jobs at a higher priority are always processed first. Priority 0 (zero) is the highest priority; 24 is the lowest. Jobs with the same priority are processed on a first-come-first-served basis.

[Diagram: jobs n through n+4 flowing IN to the SquidNet job queue and OUT to the farm.]
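That discipline (priority 0 first, FIFO among equal priorities) maps directly onto a heap keyed by priority and arrival order. A sketch:

```python
# Sketch of the queue discipline above: priority 0 is served first, and jobs
# of equal priority leave in first-come-first-served order.
import heapq
from itertools import count

class JobQueue:
    def __init__(self):
        self._heap = []
        self._arrival = count()   # tie-breaker: earlier submissions win

    def submit(self, job: str, priority: int) -> None:
        assert 0 <= priority <= 24, "priorities run from 0 (highest) to 24 (lowest)"
        heapq.heappush(self._heap, (priority, next(self._arrival), job))

    def next_job(self) -> str:
        return heapq.heappop(self._heap)[2]

q = JobQueue()
q.submit("job n",   priority=12)
q.submit("job n+1", priority=12)   # same priority: queued FIFO behind "job n"
q.submit("job n+2", priority=0)    # highest priority: jumps the line
print([q.next_job() for _ in range(3)])   # ['job n+2', 'job n', 'job n+1']
```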
Job Slices

•  By default, SquidNet assigns one frame to each available processing node.
•  The rendering application on each render node must load the scene file before any rendering can begin. For small-footprint scene files this is straightforward. For large-footprint scenes (200MB or larger), however, it can be extremely inefficient because of the time spent loading the scene file before processing. In some cases, loading the scene file takes considerably more time than rendering the actual frame.
•  For multi-frame render jobs, SquidNet supports the concept of job slices. Job slices let you decide how many frames are rendered each time an application loads the scene file.
•  Setting the job slice count to a value that evenly distributes the farm load reduces render times considerably. An extreme example: processing a 500MB scene file on a 10-node farm with a slice count of 10 (each render node loads the scene once and processes 10 complete frames) is far more efficient than the default slice count of 1, where each node loads the scene file 10 times (once per frame).

[Diagram: a 30-frame scene with 10 frames per slice produces 3 entries in the job slice queue; each render node loads the scene once and renders 10 frames at a time.]
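The 30-frame example works out as in this sketch (pure arithmetic, no SquidNet API):

```python
# Sketch: split a frame range into job slices. With 30 frames and 10 frames
# per slice, a node that takes a slice loads the scene once and renders 10
# frames, instead of reloading the scene for every single frame.
def slices(first: int, last: int, per_slice: int):
    return [(start, min(start + per_slice - 1, last))
            for start in range(first, last + 1, per_slice)]

print(slices(1, 30, 10))      # -> [(1, 10), (11, 20), (21, 30)]: 3 scene loads
print(len(slices(1, 30, 1)))  # -> 30: one scene load per frame with the default
```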
Pipeline Workflow

SquidNet's processing pipeline is as follows:
•  Prepare scene: verify the scene is properly formatted (object file paths, etc.).
•  Submit job: set up the SquidNet application job template with processing parameters, then submit the job to the render farm.
•  Monitor job queue: watch the job queue for status.
•  Monitor network: watch the network queue for resource usage.
•  Verify output: verify the output content.
Monitoring Jobs

Monitor queued jobs in the network job queue. The job queue view shows the following:
•  the status of each job (pending, processing, complete, etc.)
•  the job's position in the queue
•  the percentage complete
•  a job log showing detailed activity, and more…

Monitor job slices using the job slice view, which shows the following:
•  the status of each job slice (pending, processing, complete, etc.)
•  which render node is currently processing each job slice
•  completion status
•  a job slice log showing detailed activity, and more…

[Screenshots: the JOB QUEUE and JOB SLICE QUEUE views with their JOB LOG and JOB SLICE LOG panels.]
The Network View

•  Use the network view to monitor all active nodes.
•  Use the network work queue view to see:
   •  which jobs each node is processing
   •  the current status of job slices
   •  the number of node resources allocated

[Screenshots: the Network View and a node's NODE LOG panel.]