sawyl: (Default)
Hearing about Meltdown/Spectre on the BBC news this morning, I felt a profound sinking feeling. Sure enough, I lost almost my entire day to dealing with it. The problem itself is pretty interesting, but the mitigation is far more intriguing: to what extent is the fix likely to impact the performance of large-scale parallel jobs? I guess the answer depends on how much overhead it adds to MPI calls — the driver layer typically runs in user space with some sort of OS bypass — and what impact it has on IO throughput.

The afternoon was mostly spent talking to an endless parade of people — my visitor's chair doubling as a psychiatrist's couch. One of them, who I've been working with to run some very high resolution global simulations, came by to tell me they'd found a bug in the model. Apparently the adaptive mesh software which calculates the routing table degrades pathologically when the resolution drops below 8 kilometres, so the jobs we were attempting to run wouldn't have worked, even if we had been able to get them to schedule...
sawyl: (Default)
Deflected from the things I'd intended to work on by a request from a collaborator for help tunnelling a connection from their desktop, through a series of firewalls and proxies, to the https front end of a disc array. After much puzzling over the end point of each tunnel, I eventually worked out that I needed to run one tunnel through the proxies to a machine on the same network as the array, and then create a second tunnel through the first in order to map a locally accessible port to 443 on the disc array at the far end.

Along the way, I found:

  • it was necessary to use HostKeyAlias when setting up the second tunnel, to prevent secure shell from complaining about the mismatch between the key returned by the remote ssh server at the far end of the tunnel and its expectation that the key ought to match that of the system running the entry point to the tunnel
  • it was necessary to bind the https tunnels to a different local network address, e.g. 127.0.1.1, for each unique host, to prevent the browser from returning errors when accessing different servers via the same network address.
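For the record, here's a rough ~/.ssh/config sketch of the two-tunnel arrangement. The host names (proxy1, proxy2, hop, disc-array) are hypothetical stand-ins for the real ones, and ProxyJump is a newer shorthand — older clients would need ProxyCommand instead:

```
# First tunnel: through the proxies to a hop machine on the array's
# network, exposing the hop's own ssh port locally.
Host hop
    ProxyJump proxy1,proxy2
    LocalForward 2222 localhost:22

# Second tunnel: runs through the first, so ssh connects to 127.0.0.1:2222
# but actually reaches hop; HostKeyAlias stops the key mismatch complaint.
Host array-tunnel
    HostName 127.0.0.1
    Port 2222
    HostKeyAlias hop
    LocalForward 127.0.1.1:8443 disc-array:443
```

With both tunnels up, the browser talks to https://127.0.1.1:8443/ and the traffic pops out at port 443 on the array.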

Interesting but not, I suspect, particularly useful...

sawyl: (Default)
Spent a big chunk of my afternoon going through the high level implementation of the secure shell protocol, patiently trying to explain to someone (a) why replacing the host keys was necessary; (b) why this work had caused a few transient man-in-the-middle warnings; and (c) why these warnings could not possibly have triggered any of the problems he was concerned about.

Essentially, the problem has occurred because:

  • in order to implement hostbased authentication, each host seems to require a unique public-private key pair (I haven't been able to convince myself, ab initio, that this is necessary, but I've also been unable to get hostbased authentication to work without it)
  • all the OS images are clones of a single instance meaning that they default to using the same host key
  • the only way to apply a customisation is through a post-boot script that copies the host-specific key into place halfway through the boot sequence, creating a window during which the host will respond to ssh requests with the wrong host key
  • the caching of host keys in ~/.ssh/known_hosts makes it possible for an invalid host key to be added to the system
  • someone has added StrictHostKeyChecking no to the configuration, causing the commands to work even when the host keys don't match, further adding to the confusion
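For reference, a minimal ssh_config fragment showing the blanket setting described in the last point, plus a narrower alternative. The host pattern is hypothetical, and accept-new needs OpenSSH 7.6 or later:

```
# blanket setting found in the config: silently ignores key mismatches
Host *
    StrictHostKeyChecking no

# narrower alternative: only relax checking for the cloned cluster nodes,
# and still refuse a key that has *changed* since first contact
Host node*
    StrictHostKeyChecking accept-new
```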

After explaining all this a couple of times, somewhat incoherently, and following it up with an email, I'm not entirely convinced that I managed to get my point across and I was tempted to round the discussion off with, "Trust me: even if you don't understand it, I do and it isn't a problem..." Fortunately, tact and good sense prevailed over sarcasm and the desire to be patronising.

sawyl: (Default)
Having made it to chapter 6 in my slow re-read of Security Engineering, here's what Anderson has to say about naming and identity:

...a common mistake is to confuse naming with identity. Identity is when two different names (or instances of the same name) correspond to the same principal (this is known in the distributed systems literature as an indirect name or symbolic link). The classic example comes from the registration of title to real estate. It is very common that someone who wishes to sell a house uses a different name than they did at the time it was purchased: they might have changed name on marriage, or after a criminal conviction. Changes in name usage are also common. For example, the DE Bell of the Bell-LaPadula system (which I’ll discuss in the next chapter) wrote his name “D. Elliot Bell” in 1973 on that paper; but he was always known as David, which is how he now writes his name, too. A land registration system must cope with a lot of identity issues like this.

Anderson, R., (2001), Security Engineering, 1st edition, Wiley, 128

And on the problem of uniqueness:

Human names evolved when we lived in small communities. They were not designed for the Internet. There are now many more people (and systems) online than we are used to dealing with. As I remarked at the beginning of this section, I used to be the only Ross Anderson I knew of, but thanks to Internet search engines, I now know dozens of namesakes. Some of them work in fields I’ve also worked in, such as software engineering and electric power distribution; the fact that I’m www.ross-anderson.com and ross.anderson@iee.org is just luck—I got there first. (Even so, rjanderson@iee.org is somebody else.) So even the combination of a relatively rare name and a specialized profession is still ambiguous.

ibid. 130

All of which seems particularly apposite, given the recent Google+ kerfuffle...

sawyl: (Default)
Two salient quotes from Ross Anderson on social engineering and default passwords:

Passwords are often extracted by false pretext phone calls. A harassed system administrator is called once or twice on trivial matters by someone who claims to be a very senior manager’s personal assistant; once he has accepted the caller’s story, she calls and urgently demands a high-level password on some plausible pretext. Unless an organization has well-thought-out policies, attacks of this kind are very likely to work.

Anderson, R., (2001), Security Engineering, 1st edition, Wiley, 37

And:

A failure to think through the sort of rules that organizations should make, and enforce, to support the password mechanisms they have implemented has led to some really spectacular cases... Failure to change default passwords as supplied by the equipment vendor has affected many kinds of computer, some cryptographic equipment, and even mobile phones (where many users never bother to change an installed PIN of 0000).

ibid. 40

Plus ça change...

sawyl: (Default)
Did anyone else think the Foreign Secretary's concerns about cyber attacks fell slightly flat?

The foreign secretary said the FO attack came in the form of an email sent to three of his staff "which claimed to be about a forthcoming visit to the region and looked quite innocent". "In fact it was from a hostile state intelligence agency and contained computer code embedded in the attached document that would have attacked their machine. Luckily, our systems identified it and stopped it from ever reaching my staff," Hague said.

Or, put in plain English, someone in China emailed them a Word macro virus and their anti-virus software stopped it...

sawyl: (Default)
I finally feel like I'm starting to hit my stride with my dissertation, partly because I've found help from some slightly unexpected directions.

For those not already in the know, my thesis topic involves an analysis of the problems raised by national identity cards for societies founded on liberal principles — or, more broadly, societies that have historically provided strong support for individual autonomy. But in order to address the philosophical and political questions, I first need to lay out the details of the UK's national identity scheme, the associated database and, very briefly, some of the details of the implementation, e.g. biometrics, if only so that I can then set them aside in favour of more essential questions.

So naturally, I found myself looking around for a good source and what should I find at hand, but Ross Anderson's excellent Security Engineering. This, I've discovered, contains a chapter (number 24, if you're interested) entitled Terror, Justice and Freedom, which spans enough ground to provide me with a decent starting point for my initial sections.

Reading through it, I was surprised to encounter this intriguing summary of David Brin's suggested response to pervasive state surveillance:

[Brin] reasons that the falling costs of data acquisition, transmission and storage will make pervasive surveillance technologies available to the authorities, so the only real question is whether they are available to the rest of us too. He paints a choice between two futures — one in which the citizens live in fear of an East German-style police force and one in which officials are held to account by public scrutiny. The cameras will exist: will they be surveillance cams or webcams?

Anderson, R., (2008), Security Engineering, Wiley: Indianapolis, 811

Which seems eerily prescient, given the way that videos of London G20 are being used to hold the police to account.

So maybe David Brin is right. The only way to deal with the panopticon which, we have to accept, is already here, is to demand open access to everything. To allow us to watch the watcher as they watch us; to undermine the normal Foucauldian disparity that exists between the watchers and the watched by making us all watchers.

How about that? Maybe we really can save the world simply by watching it on TV...

sawyl: (Default)
Yesterday, our user group rep sent round an email letting us know that the desktop people were going to be changing our xscreensaver configuration files to force the screens to lock after ten minutes of inactivity. And this was going to be enforced by changing the ownership of the file to root, to prevent the users from altering it.

Round about this point you should be thinking what I thought: changing the ownership won't work, because the user still owns the parent directory and it's the parent directory's permissions that determine whether you can delete a file in Unix, not the permissions on the file itself. So I replied to the email in my usual tactful way, suggesting that I might have misunderstood how the permissions were supposed to work before pointing out the obvious flaw in the design.
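The point is easy to demonstrate without any special privileges: a file you can neither read nor write can still be deleted, provided you own the directory it lives in. A minimal sketch:

```shell
dir=$(mktemp -d)            # a scratch directory that we own
touch "$dir/locked"
chmod 000 "$dir/locked"     # the file itself is now completely protected
rm -f "$dir/locked"         # ...but deletion is a *directory* operation
[ ! -e "$dir/locked" ] && echo "deleted"
rmdir "$dir"
```

Exactly the same holds for a root-owned file in a user-owned directory: the unlink succeeds because it only needs write permission on the directory.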

Today, my politeness paid dividends and I got a phone call from the group rep who obviously thought he'd got a query from J. Random Luser who didn't understand how ownerships and permissions worked. Out of politeness — I didn't want to interrupt! — I let him run through his spiel until he'd stressed that there was no way a root owned file could be removed before laying into his argument and handing him his head.

He then went off to debate with the desktop people who agreed that, yes, a file could be removed, just as I'd said, so they were going to create a root owned directory with restricted permissions and use that to hold the file, which would prevent the user from deleting the file. I agreed that this would prevent the directory from being deleted, but noted that my original point still held: that because of the permissions on the parent, there was nothing to stop the user from moving the restricted directory to another location and putting their own directory in its place.

After another break to allow the desktop people to think about it some more, I got an email back telling me that, despite their confidence in the original directory solution, they were temporarily postponing their plans in order to allow them to come up with another solution to the problem.

All of which probably means that my username has been added to the Big Book of Notoriously Difficult Users. Again. In Red. With underlining. And possibly asterisks.
sawyl: (Default)
Historically, secure data erasure hasn't been a big problem for supercomputers. Either the systems have been used in academic environments with a relaxed approach to secure deletion of data; or they've been used in paranoid government agencies, where all storage hardware is routinely destroyed during decommissioning. But with the changing shape of the HPC market and the growth in data protection and due diligence legislation, this would no longer seem to be the case. Thus, I wonder how long it will be before vendors start to offer a boot-and-nuke facility as standard with their systems.

The way I see it, the destruction process should mirror an automatic install process:

  1. The system bootstraps from an external server
  2. The boot process creates the necessary file systems in memory
  3. The post-boot processes detect any directly attached disc devices
  4. The detected discs would then be automatically wiped in a secure way, e.g. using scrub
  5. The system would then confirm its actions and shutdown
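Steps 3–5 might be sketched along these lines, assuming the scrub utility is present in the boot image. The device names are placeholders, and the DRY_RUN guard keeps the sketch harmless to run:

```shell
#!/bin/sh
# Sketch of the wipe stage; DRY_RUN=1 (the default) only reports actions.
DRY_RUN=${DRY_RUN:-1}

wipe_disks() {
    for dev in "$@"; do
        if [ "$DRY_RUN" = 1 ]; then
            echo "would run: scrub -p dod $dev"
        else
            # step 4: overwrite the device securely, then confirm (step 5)
            scrub -p dod "$dev" && echo "wiped $dev"
        fi
    done
}

# step 3 would normally detect these devices rather than hard-coding them
wipe_disks /dev/sda /dev/sdb
```

The real version would enumerate devices from the post-boot hardware probe and shut the node down once every disc had reported success.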

I can't imagine this being particularly difficult to achieve on any of the machines I'm familiar with. Under Super-UX, for example, it might simply be a case of modifying the MINI install image to add a deletion utility and a set of scripts to kick it off.

I can understand why vendors might worry about including something as potentially dangerous as a secure deletion tool with their standard software bundle — no-one wants to risk a support call from a customer who's just accidentally trashed their system beyond hope of recovery — but I'm sure there are customers out there who'd be interested in anything that made data confidentiality less of a worry.

sawyl: (Default)
My security presentation went rather well. I covered the ground pretty comprehensively, I didn't get any adverse comments and, from the questions being asked, it seemed as though everything was pitched at the right level — there's no better sign than being able to answer a question with the line, "I'm glad you asked me that, because I'm going to deal with it in the next slide", to show that your listeners are right where you want them to be.

Meanwhile, in the world of non-bureaucracy, I finally decided on — and found the time to implement — a tasteful regime of dark blue nail polish:



It's not a very good picture, but you get the general idea. And no, I'm not responsible for the swirly wallpaper. Just the feet. And the clutter.

And somehow, in amongst the work and the self decoration, I've also managed to find time to fit in large amounts of running and swimming. Enough, in fact, to put me well into the Midgewater Marshes. Yuck. I hope it doesn't mess up my pedicure.

Miles to Rivendell: 281
sawyl: (Default)
I've had a virtupitudinous day. I managed to fit in substantial amounts of both running and swimming; I finished up my security paper; I prepped the slides for my part of Friday's presentation; I had a bunch of useful meetings with my colleagues; and generally got lots done. Which means I'll probably get asked to do twice as much tomorrow.

Miles to Rivendell: 297
sawyl: (Default)
Disc scrubbing is a total pain, especially if you've got thousands of discs to clean. But in some ways, if you've got a lot of discs and a relatively small amount of sensitive data, the problem appears to become less severe.

Consider a situation where a file system is made up of eight logical stripes, with each stripe made up of five discs running in a RAID5 configuration. When an item of data is written to the file system, it is split into chunks across the arrays and then split blockwise across the discs.

In order to recover any information from this file system once it has been overwritten, an attacker would need to read the discs using, say, a magnetic force microscope, rebuild the blocks in the correct order, and reassemble the stripes. Then, if only a minority of the data on the file system is sensitive, the attacker must winnow the data to separate the restricted wheat from the unclassified chaff. This seems a fairly daunting task, even if the attacker already knows the form of the sensitive data on disc.

But how much harder would it be for an attacker when confronted, not with 30 or 40 unlabelled discs, but with 1,500? And what if the amount of sensitive data does not scale in line with the number of discs but remains constant?
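As rough arithmetic — assuming hypothetical 73 GB discs and 4 KB blocks, and treating every ordering of blocks as possible, which is a deliberately naive upper bound since a real attacker would know the RAID geometry — the reassembly search space grows super-exponentially with the disc count:

```python
from math import lgamma, log

def reassembly_bits(n_discs, disc_gb=73, block_kb=4):
    """log2 of the number of possible block orderings across n_discs:
    a naive upper bound on the attacker's search space."""
    blocks = n_discs * disc_gb * 1024 * 1024 // block_kb
    return lgamma(blocks + 1) / log(2)   # log2(blocks!)

print(f"40 discs:   search space ~ 2^{reassembly_bits(40):.3g}")
print(f"1500 discs: search space ~ 2^{reassembly_bits(1500):.3g}")
```

The absolute numbers are fanciful, but the direction of travel is the point: scaling the disc count up while the sensitive data stays constant makes the winnowing problem dramatically worse for the attacker.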
sawyl: (Default)
Here's an interesting comment on liberty and the cost of terrorism from Ross Anderson:

First, there’s the political question: are Western societies uniquely vulnerable — because we’re open societies with democracy and a free press, whose interaction facilitates fearmongering — and if so what (if anything) should we do about it? The attacks challenged our core values — expressed in the USA as the Constitution, and in Europe as the Convention on Human Rights. Our common heritage of democracy and the rule of law, built slowly and painfully since the eighteenth century, might have been thought well entrenched, especially after we defended it successfully in the Cold War. Yet the aftermath of 9/11 saw one government after another introducing authoritarian measures ranging from fingerprinting at airports through ID cards and large-scale surveillance to detention without trial and even torture. Scant heed has been given to whether these measures would actually be effective: we saw in Chapter 15 that the US-VISIT fingerprinting program didn’t work, and that given the false alarm rate of the underlying technology it could never reasonably have been expected to work. We’ve not merely compromised our principles; we’ve wasted billions on bad engineering, and damaged whole industries. Can’t we find better ways to defend freedom?

Anderson, R., (2008), Security Engineering, Wiley: Indianapolis, 769–770

sawyl: (Default)
Setting up hostbased authentication with OpenSSH is generally pretty simple, but there are a couple of things to watch out for.

If a host is multi-homed, the host key entry in authorized_keys or /etc/ssh_known_hosts must contain a reference to every possible IP and interface name. These should be specified in a comma separated list at the start of the line containing the host key entry. For example, a host with two interfaces might look something like this:

foo,foo-ge,10.0.0.1,192.168.0.1 ssh-rsa XXXX

The HostbasedAuthentication parameter should be set to "yes" in /etc/ssh/sshd_config on the servers and /etc/ssh/ssh_config on the clients. The PreferredAuthentications parameter should be set to something like hostbased,publickey,password to ensure that host keys are tried before any other method of authentication.

The parameter IgnoreRhosts should be set to "no" in /etc/ssh/sshd_config on the servers. This deals with situations where the system lacks a central hosts.equiv file and makes it possible to authenticate the root user via hostbased methods, should you feel sufficiently blasé.
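Pulling the settings above together, a sketch of the two files. One extra client option is often needed in practice — EnableSSHKeysign, which lets ssh call out to ssh-keysign to sign with the host key:

```
# /etc/ssh/sshd_config (servers)
HostbasedAuthentication yes
IgnoreRhosts no

# /etc/ssh/ssh_config (clients)
HostbasedAuthentication yes
EnableSSHKeysign yes
PreferredAuthentications hostbased,publickey,password
```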

If none of this works, the best way to debug the configuration is by running the command:

ssh -v -o PreferredAuthentications=hostbased foo

This prevents ssh from falling back on password or public key authentication, which generally makes it easier to determine where the fault lies.

sawyl: (Default)
Today's fascinating discovery? The choice of ssh encryption scheme matters. I ran a couple of tests and discovered that, for a 100MB file, triple DES took something like 3–4 times as long as arcfour, which was consistently 20 percent better than its nearest rival.
sawyl: (Default)
I was amused to learn of the existence of an Exchange appointment worm. I knew there was a good reason why I refused to use the loathsome thing.
sawyl: (Default)
Catching up on my RISKS reading, I came across an amusing picture of an SUV limo doing a seesaw impression. And to think, limos are supposed to be glamorous...
sawyl: (Default)
Although the case for sudo-vs-root is slightly different for large, multi-admin systems, this article is vaguely interesting — not so much for what it says as for the thoughts it provokes.

I suppose the main point of using sudo in a production environment isn't so much security as CYA: sudo generates a nice audit trail of events, giving you proof that your minor change wasn't the one that screwed the system. Of course, there's still the problem of people just starting root shells and bypassing the audit trail that way, but that can easily be dealt with by coming down like the wrath of God on anyone who breaks the rules. After all, what's the point in having a security policy if it's casually violated?
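A sudoers fragment along those lines — the group name and shell paths are hypothetical — which logs every command and at least turns starting a root shell into an explicit policy violation rather than an accident:

```
Cmnd_Alias SHELLS = /bin/sh, /bin/bash, /usr/bin/su
Defaults logfile=/var/log/sudo.log
%admins ALL = (ALL) ALL, !SHELLS
```

Shell escapes from editors and the like still slip past this, of course, which is why the policy stick matters at least as much as the configuration.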
sawyl: (Default)
Had a meeting today to try to thrash out an answer to last week's question about the necessity of root passwords. I attended in my usual meeting role of domine canis, Aquinas to the HC's Doctor Universalis, and enjoyed myself rather more than usual.

Maybe it's the result of an upbringing heavy on dialectics, but I reckon there's nothing better than a good argument to clarify which of the points up for discussion are actually in question. There's nothing like a challenge for firming up one's own beliefs, for as Mill says, when a belief goes unchallenged it is "deprived of its vital effect on the character and conduct, the dogma becomes a mere formal profession." And we wouldn't want that, now, would we?

Got root?

Feb. 22nd, 2006 05:11 pm
sawyl: (Default)
Today's big question: to what extent do sysadmins require the root password? My general feeling is that, given a correctly set up system with a decent sudo configuration, the answer is probably a lot less than people think.
