GPFS: day 1
Nov. 13th, 2008 07:45 pm
Sadly, no amusing interludes from the network people — they finally seem to have nailed the problems with the connection — so we were forced to concentrate on GPFS instead.
After a quick skim through the architecture of GPFS and the various roles that can be taken by the nodes, we talked a bit about quorum and how it had changed over time. Originally, quorum required a majority of all nodes in the cluster to be available in order for the file systems to come up, but this had been replaced by a more sensible system that required a majority of a set of hand-picked nodes, making it possible to bring GPFS up with only a critical core of nodes running. We discussed the steps required to install GPFS and ran through some of the admin commands. Then, during a session working through a couple of practical examples, I firmly cemented my reputation as alpha geek by fixing a handful of problems that prevented some of the commands from running correctly on the test system.
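For my own reference, cluster creation and the quorum designations boil down to something like the following. The hostnames and file paths are made up, and the exact flags may vary between GPFS releases, so treat this as a rough sketch rather than a recipe:

    # node descriptor file: hostname:designation
    # (invented hostnames; "quorum" marks the hand-picked quorum nodes)
    cat > /tmp/nodes <<EOF
    gpfs01:quorum-manager
    gpfs02:quorum-manager
    gpfs03:quorum
    compute01:client
    compute02:client
    EOF

    # create the cluster, using ssh/scp for remote command execution and file copy
    mmcrcluster -N /tmp/nodes -p gpfs01 -s gpfs02 -r /usr/bin/ssh -R /usr/bin/scp

    # start the daemons everywhere and check node and quorum state
    mmstartup -a
    mmgetstate -a -L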
The most interesting topic of the day was multi-clustering: the ability to define a GPFS storage cluster and then allow other GPFS clusters, e.g. a compute cluster that might have no discs of its own, to become limited members of the main storage cluster. This is a neat way to let compute systems access GPFS data without any risk of losing CPU cycles to the GPFS server daemons and, because a cluster can join many storage clusters, a handy way to implement a set of independent pools of storage. Very neat.
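Again for my own notes, the multi-cluster setup is essentially a key exchange plus a couple of remote-definition commands. Cluster names, contact nodes and file system names below are invented, and I may have some of the option letters slightly off, so this is only the shape of it:

    # on the storage cluster: generate a key, require authentication,
    # then register the compute cluster's public key and grant it a file system
    mmauth genkey new
    mmauth update . -l AUTHONLY
    mmauth add compute.example.com -k compute_id_rsa.pub
    mmauth grant compute.example.com -f gpfs_data

    # on the compute cluster: generate its own key, point at the storage
    # cluster's contact nodes, define the remote file system and mount it
    mmauth genkey new
    mmremotecluster add storage.example.com -n gpfs01,gpfs02 -k storage_id_rsa.pub
    mmremotefs add data -f gpfs_data -C storage.example.com -T /gpfs/data
    mmmount data -a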