
Introduction of NPACI Rocks

October 10, 2005, 19:35

With NPACI Rocks, users can concentrate on their scientific computing instead of worrying about the management of their cluster. Stability, maintainability, and ease of installation are the goals of NPACI Rocks.

Cluster management is complex; for example, determining whether all nodes have a consistent set of software. In addition, the Rocks structure ensures that an upgrade will not interfere with actively running jobs.

A cluster contains compute nodes, frontend nodes, and monitoring nodes. Within a distribution, each node type is defined by a machine-specific Red Hat kickstart file, generated from the Rocks Kickstart Graph.

A kickstart file is a text-based description of all the software packages and software configuration to be deployed on a node. I found a Kickstart entry in Red Hat AS 3.0 update 4, which is the operating system of my office computer; it records the setup and configuration information for the hardware.
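As a rough illustration of the format (the partition sizes, package names, and post-install step below are my own placeholders, not taken from a real Rocks-generated file), a kickstart file might look like this:

```
# Illustrative kickstart sketch -- a real Rocks file is assembled
# automatically from the Kickstart Graph and is far more detailed.
install
lang en_US
keyboard us
clearpart --all
part /     --size 8000
part swap  --size 1024

%packages
@ Base
gcc
openssh-server

%post
# site-specific commands run after package installation
echo "node configured" >> /root/ks-post.log
```

Because the whole node description lives in one text file like this, Rocks can regenerate and replay it on any node at any time.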

Rocks is a robust system. Rocks makes complete operating system installation on a node the basic management tool. With complete automation of this process, it becomes FASTER to reinstall all nodes to a KNOWN CONFIGURATION than to determine whether the nodes were out of synchronization in the first place. So reinstalling the system is more practical than working out how the nodes differ.

Rocks supports all the hardware components that Red Hat supports, but only the following architectures (no SPARC, Alpha, or Yamhill):

Processors: x86 (IA32, AMD), IA-64 (Itanium, McKinley), x86-64 (AMD Opteron)
Networks: Ethernet, Myrinet (LANai 9x)

Physical setup for Rocks:

1.Frontend

Nodes of this type are exposed to the outside world, and many services run on these nodes. Frontend nodes are where users log in, submit jobs, compile code, and so on. This node can also act as a router for the other cluster nodes by using network address translation (NAT). Frontend nodes generally have the following characteristics: two Ethernet interfaces, one public and one private; lots of disk space to store files.
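The NAT role mentioned above can be sketched with a couple of commands (the interface roles follow the layout described later in this article, but the private subnet 10.1.0.0/16 is my own assumption, not taken from a specific Rocks release):

```
# Illustrative frontend NAT setup -- run as root; addresses are assumed.
# Allow the kernel to forward packets between the two interfaces.
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade traffic from the private cluster network (assumed
# 10.1.0.0/16) as it leaves through the public interface eth1.
iptables -t nat -A POSTROUTING -s 10.1.0.0/16 -o eth1 -j MASQUERADE
```

With this in place, compute nodes on the private network can reach the outside world through the frontend without having public addresses themselves.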

2.Compute

These are the workhorse nodes. They are also disposable: the complete OS can be reinstalled on every compute node in a short amount of time (~10 min). These nodes are not seen on the public Internet. The main components: a power cable, an Ethernet connection for administration, a disk drive for caching the operating system (OS and libraries), and an optional high-performance network (Myrinet).

3.Ethernet Network

All compute nodes are connected with Ethernet on the private network. This network is used for administration, monitoring, and basic file sharing.

4. Application Message Passing Network

All nodes can be connected with a Gigabit-class network and the required switches.

On the compute nodes, the Ethernet interface that Linux maps to eth0 must be connected to the cluster's Ethernet switch. This network is considered private; that is, all traffic on it is physically separated from the external public network.
On the frontend, two Ethernet interfaces are required. The interface that Linux maps to eth0 must be connected to the same Ethernet network as the compute nodes. The interface that Linux maps to eth1 must be connected to the external network.
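On a Red Hat system, this interface assignment corresponds to files under /etc/sysconfig/network-scripts/. The addresses below are assumptions for illustration, not values prescribed by Rocks:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0  (private cluster side)
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.1.1.1        # assumed private address
NETMASK=255.255.0.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1  (public side)
DEVICE=eth1
BOOTPROTO=dhcp         # or static, per your site's public network
ONBOOT=yes
```

Keeping eth0 private and eth1 public on the frontend matches what the compute nodes expect when they install and report back over the private network.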

Once you’ve physically assembled your cluster, each node needs to be set to boot without a keyboard. This procedure requires setting BIOS values and, unfortunately, is different for every motherboard. We’ve seen some machines that cannot be set to boot without a keyboard.
