Sadly, no amusing interludes from the network people — they finally seem to have nailed the problems with the connection — so we were forced to concentrate on GPFS instead.

After a quick skim through the architecture of GPFS and the various roles that can be taken by the nodes, we talked a bit about quorum and how it had changed over time. Originally, quorum required a majority of all nodes in the cluster to be available in order for the file systems to come up, but this has been replaced by a more sensible system that requires a majority of a set of hand-picked quorum nodes, making it possible to bring GPFS up with only a critical core of nodes running. We discussed the steps required to install GPFS and ran through some of the admin commands. Then, during a session working through a couple of practical examples, I firmly cemented my reputation as alpha geek by fixing a handful of problems that prevented some of the commands from running correctly on the test system.
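The quorum rule itself is simple enough to sketch. This is a toy illustration, not the GPFS implementation: the node names are made up, and `has_quorum` is a hypothetical helper showing the "strict majority of the designated quorum nodes" test.

```python
# Toy sketch of GPFS-style node quorum (not the GPFS API).
# Old scheme: a majority of ALL cluster nodes had to be up.
# New scheme: only a majority of the hand-picked quorum nodes must be up.

def has_quorum(quorum_nodes, nodes_up):
    """True if a strict majority of the designated quorum nodes are up."""
    up = sum(1 for node in quorum_nodes if node in nodes_up)
    return up > len(quorum_nodes) // 2

# A large cluster with three designated quorum nodes: the file systems can
# come up even with most of the cluster down, provided two of the three
# quorum nodes are running.
quorum = {"nsd1", "nsd2", "mgr1"}
print(has_quorum(quorum, {"nsd1", "mgr1"}))      # True  - 2 of 3 up
print(has_quorum(quorum, {"nsd1", "compute5"}))  # False - only 1 of 3 up
```

The point of the change is visible in the example: under the old all-nodes rule, losing most of a large cluster would have kept the file systems down regardless of which nodes survived.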

The most interesting topic of the day was multi-clustering. This provides the ability to define a GPFS storage cluster and then to allow other GPFS clusters, e.g. a compute cluster that might have no discs of its own, to become limited members of the main storage cluster. It's a neat way to let compute systems access GPFS data without any risk of losing CPU cycles to the GPFS server daemons and, because a cluster can join many storage clusters, a handy way to implement a set of independent pools of storage. Very neat.
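For the record, the multi-cluster setup boils down to exchanging keys and registering the remote file system. A rough sketch of the commands involved, with the cluster names, node names and file system device entirely made up:

```
# On the storage cluster: generate a key and require authentication
mmauth genkey new
mmauth update . -l AUTHONLY

# Still on the storage cluster: admit the compute cluster and grant it
# access to one file system
mmauth add compute.example.com -k compute_id_rsa.pub
mmauth grant compute.example.com -f gpfs0

# On the compute cluster: register the storage cluster and its file system,
# then mount it everywhere
mmremotecluster add storage.example.com -k storage_id_rsa.pub -n nsd1,nsd2
mmremotefs add rgpfs0 -f gpfs0 -C storage.example.com -T /gpfs/gpfs0
mmmount rgpfs0 -a
```

The compute cluster can repeat the `mmremotecluster`/`mmremotefs` steps against several storage clusters, which is where the independent pools of storage come from.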