Internet Explorer – new vulnerability makes it just too dangerous to use

There’s a very serious problem with all versions of Internet Explorer on all versions of Windows. See here for the osvdb entry.

In simple terms, it involves pages with Flash content, and all you’ve got to do is open a page on a dodgy web site and it’s game over for you. There’s no patch for it.

Microsoft’s advice can be found in this technet article. It’s pathetic. Their suggested work-around is to deploy the Microsoft Enhanced Mitigation Experience Toolkit (EMET). Apparently this is a utility that “helps prevent vulnerabilities in software from successfully being exploited by applying in-box mitigations”. Microsoft continues “At this time, EMET is provided with limited support and is only available in the English language.”

Here’s my advice – just don’t use Internet Explorer until it’s been fixed.

Update

21-Sep-12

Microsoft has released a fix for this. See Microsoft Security Bulletin MS12-063.

If you have a legitimate copy of Windows this will download and install automatically, eventually. Run Windows Update manually to get it now – unfortunately it will insist on rebooting after installation.

 

TLS used in web browsers is vulnerable to cookie grabbing

I heard something really worrying yesterday – someone’s got a proof-of-concept that defeats TLS (previously known as SSL) encryption. Security researchers Thai Duong and Juliano Rizzo are planning to demonstrate this at Ekoparty in Argentina this week.

Fortunately this isn’t as bad as it sounds as it doesn’t actually involve breaking TLS, but it’s still pretty bad. It only applies to some web browsers, but it does allow the theft of supposedly encrypted login cookies and it seems to me a very practical method, although details aren’t officially published as yet. Basically, it involves putting some Javascript on a web site which causes the browser to fetch something from the site being targeted – say Paypal. The browser sends the request, encrypted along with the login cookie – compressed and then encrypted using TLS. You can’t read what’s in the TLS packets, but you can see how long they are.

Fundamentally, compression works by removing repeated information from the uncompressed data: the more repetition, the better it compresses. By making a number of requests for differing data (like bogus image file names) you can tell from the size of the compressed packet whether the requested file name repeats data in the unknown login cookie – because the cookie and the known file request are compressed into the same packet, the encrypted output gets shorter whenever there’s a match. Apparently you need to make as few as six bogus requests per character of the login cookie to work out its contents.

You obviously need to be eavesdropping on the line, and your victim must be running your Javascript on their browser, but as TLS is there to prevent eavesdropping then this is a serious failure. It’s not the fault of TLS, but that of browser protocol writers, hoping that implementing TLS gives them security without further consideration.

Some people have suggested that this attack would be difficult to implement in practice, but I disagree. Why not simply hijack the DNS at an Internet café (with a fake DHCP server) and force everyone to run the Javascript from the first web site they try to open, then either snoop the WiFi or sniff the packets off the wire using traditional methods of circumventing switches?

Apparently this flaw doesn’t affect IE, but the others were vulnerable until tipped off about it. Make sure you’re running a current version.

Chip and Pin is Definitely Not Safe

I’ve always had my doubts about Chip and Pin (or EMV, to give it its proper name). We’ve all heard stories of people having cards stolen and used, when this should be impossible without the PIN. There are also credible stories of phantom withdrawals. The banks, as usual, stonewall, claiming that the victim allowed their PIN to be known, and that since it was impossible for criminals to do this while you still had the card, someone close to you must be “borrowing” it.

In the old days it was very easy to copy a card’s magnetic strip – to “clone” the card. Then all the criminals needed was the PIN, which could be obtained by looking over someone’s shoulder while they entered it. Cash could then be withdrawn with the cloned card, any time, any place, and the victim wouldn’t know anything about it. Chip and Pin was designed to thwart this, because you can’t clone a chip.

Well, it turns out that you don’t have to clone the card. All you need to do is send the bank the same code as the card would, and it will believe you’re using the card. In theory this isn’t possible, because the communications are secure between the card and the bank. A team of researchers at Cambridge University’s Computer Lab has just published a paper explaining why this communication isn’t secure at all.

I urge you to read the paper, but be warned, it’s unsettling. Basically, the problem is this:

The chip contains a password, which the bank knows (a symmetric key), and a transaction counter which is incremented each time the card is used. For an ATM withdrawal this data is encrypted and sent to the bank along with the details of the proposed transaction and the PIN, and the bank sends back a yes or no depending on whether it all checks out. It would be fairly easy to simply replay the transaction to the bank and have it send back the signal to dispense the money, except that a random number (nonce) is added before it’s encrypted, so no two transactions should be the same. If they are, the bank knows it’s a replay and does nothing.

What the researchers found was that with some ATMs, the random number was not random at all – it was predictable. All you need do is update your transaction with the next number and send it to the bank, and out comes the dough. It’s not trivial, but it’s possible, and criminals are known to be very resourceful when it comes to stealing money from ATMs.
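Here’s a toy sketch of the idea (illustrative only – an HMAC over made-up fields, not the real EMV cryptogram format). If the “unpredictable number” really is unpredictable, a pre-computed cryptogram is useless; but if an attacker can predict a flawed ATM’s next nonce while they briefly have access to the card, they can cash it in later without the card:

```python
import hmac, hashlib

CARD_KEY = b"card-symmetric-key"   # shared secret between chip and issuer

def cryptogram(counter, nonce, amount):
    # The chip MACs the transaction details with its secret key.
    msg = f"{counter}:{nonce}:{amount}".encode()
    return hmac.new(CARD_KEY, msg, hashlib.sha256).digest()

class Bank:
    def __init__(self):
        self.seen = set()

    def authorise(self, counter, nonce, amount, mac):
        # Verify the MAC, and refuse any (counter, nonce) pair seen before.
        if (counter, nonce) in self.seen:
            return False  # straight replay – caught
        if not hmac.compare_digest(mac, cryptogram(counter, nonce, amount)):
            return False
        self.seen.add((counter, nonce))
        return True

bank = Bank()

# Attacker with brief access to the card predicts that the ATM's next
# "random" nonce will be 1235, and harvests a cryptogram for it.
harvested = cryptogram(43, 1235, 200)

# Later, with no card present, the pre-computed transaction is accepted:
print(bank.authorise(43, 1235, 200, harvested))   # True
# ...whereas replaying the identical transaction is caught:
print(bank.authorise(43, 1235, 200, harvested))   # False
```

The whole protection rests on the nonce being unpredictable; a counter dressed up as a random number defeats it completely.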

What’s almost as scary is how the researchers found all this out: partly by examining ATMs purchased on eBay! (I checked – there are machines for sale right now.) There’s a bit of guidance on what random means in the latest EMV specification; the conformance test simply requires four transactions in a row to have different numbers.

It’s inconceivable to me that no one at the banks knew about this until they were tipped off by the researchers earlier this year. Anyone with the faintest clue about cryptography and security looking at the code for these ATMs would have spotted the flaw. This raises the question: who the hell was developing the ATMs?

In the meantime, banks have been trying to pretend to customers that phantom withdrawals on their accounts must be their fault, refusing to refund the money and claiming that Chip and Pin is secure. It’s not, and a day of reckoning can’t come too soon.

Credit for the research goes to Mike Bond, Omar Choudary, Steven J. Murdoch, Sergei Skorobogatov, and Ross Anderson at Cambridge. Unfortunately they’re probably not the first to discover it, as it appears the criminals have known about it for some time already.

 

Is Quantum Cryptography About to be Hacked (again)?

I saw a curious note on the BBC teletext service saying physicists in Canada had just proved that the Heisenberg Uncertainty Principle wasn’t quite right and that therefore Quantum Cryptography was probably not as secure as we’d hoped.

The Heisenberg principle basically states that at quantum level (very small things) it’s impossible to measure the precise position and speed of anything (or measure any other two attributes). The more accurate a position reading, the less accurate the speed measurement, or if you measure the speed accurately the position will become uncertain.

However, what quantum cryptography actually relies on to work practically is something much less weird – namely the Observer Effect, or Heisenberg’s Measurement-Disturbance Relationship. This is what the Canadian team was actually on about. You can find the paper causing all the fuss here:

Lee A. Rozema, Ardavan Darabi, Dylan H. Mahler, Alex Hayat, Yasaman Soudagar, and Aephraim M. Steinberg, Centre for Quantum Information & Quantum Control and Institute for Optical Sciences, Department of Physics, 60 St. George Street, University of Toronto, Toronto, Ontario, Canada M5S 1A7

The Observer Effect is much easier to understand. It says that when you measure some things you necessarily change them by the act of measuring. There are plenty of examples to choose from, like a volt meter in an electrical circuit connecting two hitherto unconnected points and allowing a current to flow that wasn’t there before the meter was introduced. If electronics isn’t your bag, consider measuring the tyre pressure on a car. When you apply the gauge a small amount of air escapes, so the pressure is obviously less than it was before you measured it.
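The tyre example can even be put into numbers. Assuming the air behaves ideally and the temperature doesn’t change (Boyle’s law: pressure × volume stays constant), connecting a gauge of internal volume v_gauge lets the air expand into it, so the pressure you read is already lower than the true one. A quick sketch with made-up figures:

```python
def reading(p_true, v_tyre, v_gauge):
    # Boyle's law: p_true * v_tyre = p_read * (v_tyre + v_gauge)
    # so the act of measuring lowers the pressure being measured.
    return p_true * v_tyre / (v_tyre + v_gauge)

# A 10-litre tyre at 2.2 bar measured with a 0.05-litre gauge:
print(round(reading(2.2, 10.0, 0.05), 4))   # 2.1891
```

A tiny error here, but the principle is the same one that quantum key distribution turns into a security feature.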

As to whether it’s going to make a jot of difference to the safety of your credit card details, I highly doubt it. Quantum Cryptography is not widely used, although I believe laboratory experiments continue (notably at British Telecom’s research lab in Ipswich and latterly at Raytheon BBN Technologies). And even then, it’s not at all clear whether this result will make any difference to it.

So what is Quantum Cryptography in practice?

Unless you slept through ‘O’ Level (now GCSE) Physics at school, you’ll think you know what a polaroid is: a filter that allows light waves through if the waves are oriented correctly and blocks them if they’re not – a bit like a grating for light waves. Except, of course, they don’t behave like that in the real world, do they?

There’s the classic experiment where you take two polaroids and place them one in front of the other. If you have two polaroid sunglasses, try it now. If you have only one pair you could snap them in half to get two lenses, or just take my word for what follows.

As you look through the two lenses and rotate one, they’ll either be transparent, black or at various states of fading in between. When the polaroids are aligned, the theory says that all the light gets through; when they’re 90° apart, all the light will be blocked. But what about when they’re 45° apart? How come you can still see through? ‘O’ Level physics doesn’t want to bother you with quantum mechanics, but as I understand it this is caused by those pesky photons randomly changing direction all the time, and side-stepping the grille. There’s a random chance of photons still getting through, proportional to how far out of alignment the polaroid is. Slightly out of line means most still get through, 45° means half get through and 90° means none get through.
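For what it’s worth, the per-photon probability follows Malus’s law – cos² of the angle between the polaroids – which gives exactly the “half get through at 45°” figure. A quick simulation (my own sketch, nothing to do with the paper):

```python
import math, random

def pass_probability(angle_deg):
    # Malus's law applied per photon: P = cos^2(theta)
    return math.cos(math.radians(angle_deg)) ** 2

def simulate(angle_deg, photons, seed=0):
    # Count how many individual photons make it through the filter.
    rng = random.Random(seed)
    p = pass_probability(angle_deg)
    return sum(rng.random() < p for _ in range(photons))

print(pass_probability(0))       # 1.0 – aligned, everything passes
print(pass_probability(45))      # ~0.5 – half get through
print(simulate(45, 10000))       # roughly 5000 of 10000 photons pass
```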

Now suppose we’re sending information by polarising light and shoving it down an optical fibre; we send it through a polaroid. To measure the result we stick it through another polaroid at the other end, aligned at random. The sender’s polarisation pattern is secret at this time. If the receiving polaroid is a bit off, we’ll still get a signal, but it will vary randomly. The thing is that there is no way of knowing whether we’re looking at a randomly corrupted signal or whether all the photons are getting through. However, we can record the results and, if we’re later told what the polarisation settings were, discard the measurements made while our receiving polaroid was set wrong and use simple error-correction techniques to make use of the remaining “good” data. The polarisation settings can be transmitted insecurely after the event, because they’re of no use to an attacker by then. This is subtle…

If someone decides to bung a polaroid in the middle of the line to try and examine our photons, unless they get lucky and have exactly the right polarisation every time, they’re going to filter off some of the signal. This is going to show up as corrupted data at the recipient’s end, and we’ll know we have an eavesdropper. When the correct settings are published, even if the eavesdropper gets to hear about them it will be too late – they will have corrupted the signal and given their presence away.
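A toy simulation of the whole exchange (my sketch of a BB84-style protocol, not any real product’s implementation) shows why the eavesdropper gives themselves away: measuring in a randomly chosen basis and re-sending disturbs about a quarter of the sifted bits:

```python
import random

def sifted_error_rate(n_photons, eavesdrop, seed=1):
    rng = random.Random(seed)
    errors = kept = 0
    for _ in range(n_photons):
        bit = rng.randint(0, 1)       # sender's random bit
        basis = rng.randint(0, 1)     # sender's random polarisation basis
        # Eve measures in a random basis; when she guesses wrong, the
        # photon she re-sends carries a randomised bit.
        disturbed = eavesdrop and rng.randint(0, 1) != basis
        rx_basis = rng.randint(0, 1)  # receiver also guesses a basis
        if rx_basis != basis:
            continue  # discarded later, once the bases are published
        kept += 1
        measured = rng.randint(0, 1) if disturbed else bit
        if measured != bit:
            errors += 1
    return errors / kept

print(sifted_error_rate(20000, eavesdrop=False))  # 0.0 – clean channel
print(sifted_error_rate(20000, eavesdrop=True))   # ~0.25 with Eve listening
```

So the legitimate parties just compare a random sample of their sifted bits: zero errors means no eavesdropper; a 25% error rate means someone was on the line.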

The current state-of-the-art in Quantum Cryptography relies on sending and detecting single photons, or pairs of photons. Good luck with that one! It’s also not an easy thing to send and receive a single polarised photon, so the research is looking towards simply swapping encryption keys for protecting the actual payload later. This is known as QKD – Quantum Key Distribution.

Suffice to say that this technique makes it impossible to eavesdrop on a line without corrupting whatever is being intercepted and, with an appropriate protocol, it’ll be almost impossible to try this without being detected before any real data is exposed.

So why does Heisenberg’s Measurement-Disturbance Relationship matter to all of this? Well, supposing someone were able to make a polarisation detector that could measure polarisation at any angle. With this they could read the polarisation of whatever was passing and, even if they destroyed it in doing so, re-transmit a new photon polarised the same way. Quantum mechanics currently says you can only test for polarisation in one plane (basis) at a time, so the eavesdropper couldn’t possibly do this. Even if quantum theory were actually wrong, someone would still have to find a practical way to measure all-ways polarisation. Quantum Cryptography itself has practicality issues, so this isn’t a reason to lose any sleep in the real world. A few companies offer QKD networking equipment, and demonstration networks come and go, but unless anyone can enlighten me, I’m not aware of any real-world users of the technology. Given the number of successful attack vectors found in all known experimental systems, it’s not surprising.

Please note – I am not a theoretical physicist; I’m looking at this from an application perspective. I’d love to hear from anyone with a full understanding of quantum mechanics able to shed further light on this, as long as they can keep it simple.

Panicky public gets scammer’s charter for cookie law

Are you worried about websites you visit using cookies? If so, you’re completely wrong; probably swept up in a tide of hysteria whipped up by concerned but technically ignorant campaigners. The Internet is full of such people, and the EU politicians have been pandering to them because politicians are a technically illiterate bunch too.

A cookie is a note that is stored by your web browser to recall some information you’ve entered into a web site. For example, it might contain (effectively) a list of things you’ve added to your shopping cart while browsing, or the login name you entered. Web sites need them to interact with you; otherwise they can’t track who you are from one page to another. (Well, there are alternatives, but they’re cumbersome.)
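Mechanically it’s just a pair of HTTP headers. A minimal sketch using Python’s standard library (the cookie name and value are made up):

```python
from http.cookies import SimpleCookie

# Server side: after login, remember the visitor with a Set-Cookie header.
jar = SimpleCookie()
jar["session"] = "abc123"
print(jar.output())                 # Set-Cookie: session=abc123

# Browser side: the cookie comes back on every later request, which is
# how the site knows it's still the same shopping cart or login.
sent_back = SimpleCookie("session=abc123")
print(sent_back["session"].value)   # abc123
```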

So what’s the big deal? Why is there a law coming into force requiring you to give informed consent before using a web site that needs cookies? Complete pig-ignorance and hysteria from the politicians, that’s why.

There is actually a privacy issue with cookies – some advertisers that embed parts of their website in another can update their cookies on your machine to follow you from one web site to another. This is a bit sneaky, but the practice doesn’t require cookies specifically, although they do make it a lot easier. These are known as tracking cookies. However, this practice is not what the new law is about.

So, pretty much every small business with a web site created more than 12 months ago (when this was announced), or one written by a “web developer” who probably didn’t even realise how their CMS used cookies, is illegal as of today. Probably including this one (which uses WordPress). Nonetheless, the head of the ICO’s project on cookies, Dave Evans, is still “planning to use formal undertakings or enforcement notices to make sites take action”.

What’s actually going to happen is that scamming “web developers” will be contacting everyone, offering to fix their illegal web sites for an exorbitant fee.

The ICO has realised the stupidity of its initial position and now allows “implied consent” – in other words if you continue to use a web site that uses cookies you will be considered to have consented to it. Again, this is a nonsense as the only possible problem cookies are tracking cookies, and these come from sources other than the web site you’re apparently looking at – e.g. from embedded adverts.

So – if you want to continue reading articles on this blog you must be educated enough to know what a cookie is and not mind about them. As an extra level of informed consent you must presumably agree that Dave Evans of the ICO and his whole department are an outrageous waste of taxpayers’ money. (In fairness to Dave Evans, he’s defending a daft EU law because that’s his job – it’s the system and not him, but he’s also paid to take the flak.)

What is all this Zune comment spam about?

People running popular blogs are often targeted by comment spammers – this blog gets hit with at least 10,000 a year (which is very useful for botnet research) – most of it semi-literate drivel containing a link to some site being “promoted”. Idiots pay other idiots to do this because they believe it will increase their Google ranking. It doesn’t, but a fool and his money are soon parted, and the comment spammers, although wasting everyone’s time, are at least receiving payment from the idiots of the second part.

But there’s a weird class of comment spam that’s been going for years which contains lucid, but repeated, “reviews” about something called a “Zune”. It turns out that this is a Microsoft MP3 player available in the USA. The spams contain a load of links, and I assume that the spammers are using proper English (well, American English) in an attempt to get around automated spam filters that can spot the broken language of the third-world spam gangs easily enough. But they do seem to concentrate on the Zune media player rather than other topics. Blocking them is easy: just block any comment with the word “Zune” in, as it doesn’t appear in normal English. Unless, of course, your blog is about media players available in the USA.
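The filter really is about a one-liner. A sketch of the kind of check I mean (my own, not any particular blog engine’s):

```python
import re

BLOCKED_WORDS = {"zune"}  # words that never appear in legitimate comments here

def is_spam(comment):
    # Case-insensitive whole-word match, so "Zune", "ZUNE" etc. all trip it.
    words = set(re.findall(r"[a-z]+", comment.lower()))
    return not words.isdisjoint(BLOCKED_WORDS)

print(is_spam("The Zune 120GB is a great player..."))  # True
print(is_spam("Nice article, thanks"))                 # False
```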

This really does raise the question: why are these spammers sticking to one subject with a readily identified filter signature? I’ve often wondered if they’re being paid by a Microsoft rival to ensure that the word “Zune” appears in every spam filter on the planet, thus ensuring that no “social media” exposure exists for the product. Or is this just a paranoid conspiracy theory?

An analysis of the sources shows that nearly all of this stuff is coming from dubious server hosting companies. A dubious hosting company is one that doesn’t know or care what its customers are doing, as evidenced by continued abuse and a lack of response to complaints. There’s one in Melbourne (Telstra!) responsible for quite a bit of it, and very many in South Korea, plus a smattering in Europe, all of which are “one-time”, so presumably they’re taking complaints seriously even if they’re not vetting beforehand. It’s hard to be sure about the Koreans – there are a lot, but there’s evidence they might be skipping from one hosting company to another. Unusually for this kind of abuse there are very few in China and Eastern Europe, and only the odd DSL source. These people don’t seem to be making much use of botnets.

So, one wonders, what’s their game? Could it be they’re buying hosting space and appearing to behave themselves by posting reasonable-looking but irrelevant comments? Well, any competent server operator could detect comment posting easily enough, but at the “cheap” end of the market they won’t have the time, or even the minimal knowledge, to do this.

I did wonder if they were using VPN endpoints for this, but as there’s no reverse-lookup in the vast majority of cases it’s unlikely to be any legitimate server.

Government’s red-herring email law

The government (UK) launched a red herring at the Internet today, and the news media has lapped it up. “We’re bringing in a new law to allow the security services to monitor email and other Internet traffic.” This actually refers to the fact of the communication, not its content.

The TV news has subsequently been filled with earnest spokespersons from civil liberties groups decrying the worst Big Brother laws since New Labour got the boot – anything to get their silly mugs in front of a camera. Great news drama – the Conservatives moving over to the dark side.

Wake up people! What they’re proposing is just not possible. Blair already tried it in a fanfare of announcements and publicity, but anyone who knows anything about how email and the Internet function can tell you that it’s not even technically possible on so many levels.

1) Email does not necessarily use an ISP’s mail server or web mail service. Home users probably do; any company or organisation will most likely use their own. If anyone wanted to avoid snooping, they would too.

2) Users of commercial mail services are anonymous if they want to be. With a few minutes effort it’s possible to hide your IP address, or use an untraceable random one, and there’s no other trail leading back to an individual. The international criminals being targeted will know the tricks, for sure.

3) The security services already have the powers to do this, and do use them.

4) If the ISP is outside the UK, then what?

When the Blair government announced something similar I had to write to the government department concerned asking for the details. I heard about it from the general news. Apparently I, as an ISP, needed to keep records for a year – but records of what, exactly? They didn’t contact me to warn me it was happening; they can’t as there is no register of ISPs. There’s no definition of what counts as an ISP either. And needless to say, the government department concerned didn’t write back with the details.

So why is the current government making this announcement about an announcement now? Could they be wanting to change the news agenda? As usual they can rely on the media types to completely miss the fact it’s nonsense. Eventually the BBC got Andrew Mars on to comment, but I suspect his interview snippet was severely edited to suit their agenda.

FBI VoIP system conference call intercepted by Anonymous?

Major embarrassment today as Anonymous intercepts a conference call between several European and American law enforcement agencies, according to something I’ve just seen on the BBC. It’s on YouTube right now if you want to hear it for yourself, click here.

It got my attention – someone breaking into a VoIP system would. But on further investigation it’s pretty obvious to me that it wasn’t an intercept at all. The clues are in the intercepted email and the start of the recording – Anonymous read an email circular inviting people to the conference call, where the access number and password were given.

This makes the authorities concerned seem even more incompetent than if they’d had their VoIP service compromised.

 

Certificate “Errors” on Internet Explorer 9 – and how to stop them

Like recent versions of Internet Explorer, Version 9 has a Microsoft-style way of handling SSL certificates. It won’t let lusers access anything over a secure connection if there’s anything wrong with the certificate the remote end has presented. On the face of it, this is all very reasonable, as you don’t want the lusers being tricked by nasty criminals. But in reality it’s not as simple as that.

A bit of background, because everyone should make an informed choice about this…

SSL (or TLS) has two purposes – authentication and encryption. When you send data over SSL, two things occur. Firstly, it’s only readable by the receiving computer (i.e. it’s encrypted), and secondly you know you’re talking to the right server (the link is authenticated – both computers recognise each other). The computers don’t exactly exchange passwords, but they have a way of recognising each other’s SSL certificate. Put simply, if two computers need to talk, each has a copy of the other’s certificate stored on its disk, which it uses to make sure it’s not talking to an impostor (a gross over-simplification, but it’s a paradigm that works). Should one computer not have the certificate needed to authenticate the other end, the certificate will be supplied, and this supplied certificate is checked to see if it’s “signed” by a “signing authority” using a certificate the computer does already have. In other words, the unknown remote certificate arrives and the computer checks it against a “signing authority” certificate to see if it’s been signed, and is therefore to be trusted. If it’s okay, it’s stored and used.
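A toy model of that trust-store logic (illustrative only – real X.509 uses public-key signatures and certificate chains, not HMACs; all the names and keys here are made up):

```python
import hmac, hashlib

# The browser ships with its trusted "signing authority" keys built in.
TRUST_STORE = {"DemoRootCA": b"demo-root-ca-key"}

def ca_sign(issuer, issuer_key, subject):
    # An authority "signs" a certificate for a subject (site name).
    sig = hmac.new(issuer_key, subject.encode(), hashlib.sha256).hexdigest()
    return {"subject": subject, "issuer": issuer, "sig": sig}

def browser_trusts(cert):
    key = TRUST_STORE.get(cert["issuer"])
    if key is None:
        return False  # -> "not issued by a trusted certificate authority"
    good = hmac.new(key, cert["subject"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, cert["sig"])

paid_for = ca_sign("DemoRootCA", b"demo-root-ca-key", "shop.example.com")
home_made = ca_sign("MyRouterCA", b"my-own-key", "192.168.1.1")
print(browser_trusts(paid_for))   # True
print(browser_trusts(home_made))  # False – cue IE's red warning page
```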

Now here’s where it breaks in Microsoft-land: For your computer’s certificate (the one it sends) to be signed by a “signing authority”, money has to change hands. Quite a lot of money, in fact. If it’s not signed, the recipient will have no way of knowing it’s really you.

In the rest of the world (where SSL came from), on receipt of an unknown certificate you’d see a message saying that the remote computer can be recognised using the supplied certificate, but it’s never been seen before: do we trust it? In most cases the answer would be “yes”, and the two computers become known to each other on subsequent connections. It’s okay to do this – it’s normal. Something like this happens on Windows with Firefox and other browsers, but not, apparently, Internet Explorer. Not until you dig a bit deeper, anyway. Actually, Internet Explorer 9 can be made to recognise unsigned security certificates, and here’s how.

First off, we really need to know what we’re about to do. What are the symptoms? The address bar goes red and you get a page saying there’s a problem with the certificate every time you visit a “site”. You can click on something to proceed anyway, but the implication is that you’re heading for your doom. The “error” message you see is normally for one of three reasons, and reading it might be enlightening. On a bad day you might get all three! But taking them in turn:

“The security certificate presented by this website was not issued by a trusted certificate authority.”

This just means that no one has paid to have this certificate signed by anyone of Microsoft’s liking. It may be a private company-wide certificate, or that belonging to a piece of network equipment such as a router. If it’s a web site belonging to your bank or an on-line shop, then you should be worried! Otherwise, if there’s a reason why someone isn’t paying to have their certificate approved (indirectly) by Microsoft, make your own decision as to whether you trust it.

So how do you get around it? Actually it’s pretty simple, but Microsoft isn’t giving out any clues! The trick is to run Internet Explorer as Administrator (not just when logged in as Administrator). In current versions of Windows you do this by right-clicking on IE in the start menu and selecting “Run as Administrator” from the pop-up menu. If you don’t, the following won’t work.

Go to the site whose certificate you wish to import, and proceed to view the site in spite of the warnings. Then in the address bar you’ll see “Certificate error”. Click on this and you’ll see an option to “View Certificate”, and (assuming you’re in Administrator mode) there’ll be a button on the “General” tab to “Install Certificate”. Follow the prompts. For maximum effectiveness(!) choose the option to “Place all certificates in…” and browse to the “Trusted Root Certification Authorities”. This probably isn’t necessary in most cases, but if you do this it’ll cover you for pretty much every use. Your PC will happily accept anything from the remote machine hereafter, so make sure you’re importing the right certificate!

“The security certificate presented by this website has expired or is not yet valid.”

This means the certificate is out of date or, exceptionally, too new. In most cases, encountering a certificate that isn’t yet valid suggests that your computer’s clock has reset itself to 1980. If this sounds plausible, just proceed to use the certificate anyway (there’s a clear option on the screen to do this). You’ll still get a scary red address bar, and it’s up to the server operator to fix the certificate, but before you get on the ’phone and give them what for, make sure your computer’s idea of the time and date is actually correct.

“The security certificate presented by this website was issued for a different website’s address”

This third case is a bit more tricky. Basically, the name of the computer is embedded in the certificate, but you might be referring to it by another name (i.e. an alias). Or it could be using a pinched certificate. If you’re talking to a network router like a Draytek 2820 by going to its IP address and it’s giving you a built-in certificate, it would have no way of knowing what name or address the router is ultimately going to end up on. The certificate is bound to be wrong in this respect. However, fishing around in the Internet Explorer options, under Advanced (and right down near the bottom) there’s a check-box – “Warn about certificate name mismatches”. Un-check it and it’ll stop squawking. Unfortunately it’s either on or off; you can’t set it to ignore a mismatch for particular names only. Because of the risk that someone might be impersonating your bank, you’d probably be best to leave this one checked and put up with the red warnings.
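The name check itself is simple enough, which makes the all-or-nothing switch doubly annoying. A sketch of roughly what the browser does (simplified – I’m ignoring subjectAltName lists and internationalised names):

```python
def name_matches(cert_name, requested_host):
    # Case-insensitive comparison of the certificate's name against the
    # host the user actually asked for.
    cert_name = cert_name.lower()
    requested_host = requested_host.lower()
    if cert_name.startswith("*."):
        # A wildcard covers exactly one extra label: *.example.com
        # matches www.example.com but not example.com itself.
        rest = requested_host.split(".", 1)
        return len(rest) == 2 and rest[1] == cert_name[2:]
    return cert_name == requested_host

print(name_matches("www.example.com", "www.example.com"))  # True
print(name_matches("*.example.com", "www.example.com"))    # True
print(name_matches("router.local", "192.168.1.1"))         # False – mismatch
```

The router case is the third line: the built-in certificate names something other than the IP address you typed, so the check can only fail.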

Final word of warning

Some people reading this will reckon this advice is reckless. Why circumvent a security feature? Simple – if the authentication part of SSL isn’t working, you still want it for the encryption. In an ideal world everyone would have signed certificates, so you could verify everything you talk to and know it’s what it claims to be the first time you meet it. Subsequent visits will be authenticated with your newly installed certificate, so if something turns up impersonating it later it’ll be detected. In the real world you probably want your data encrypted regardless. A signed certificate is better, but not that much better.

Hassling everyone over security certificates, as Microsoft is doing, may be justifiable on some levels, but as far as I’m concerned, anything that makes the use of encrypted data paths more difficult or expensive to use than they need be is a bad thing. They’re throwing the baby out with the bathwater.

 

PAM authentication in PHP on FreeBSD

I have several groups of lusers who want to be able to set/change their mail vacation settings but aren’t up to using ssh to edit their .forward and .vacation.msg files. I thought I’d write a quick PHP application to allow them to do it in a luser-friendly way using a web browser. If this isn’t what PHP is for, I don’t know what good it is. The snag: you need to make sure the right user is editing the right file.

The obvious answer is to authenticate them with their mail user-name and password pair using PAM. (This is the system that will check user-name/password combinations against whatever authentication you see fit – by default /etc/passwd).

PHP has a module available for doing just this – it’s called “PAM” and there’s even a FreeBSD port of it you can install from /usr/ports/security/pecl-pam. If you want to use it, just “make” and “make install” – it’ll add it to the PHP extensions automatically, but don’t forget to restart Apache if you’re planning to use it there.

You’ll also have to configure PAM itself. This involves listing the authentication methods applicable to your module in /etc/pam.d/. In this case the php module will have the default name ‘php’ unless you’ve changed it in /etc/php.ini using a line like pam.servicename = "php";

Adding the above line obviously does nothing, as it’s the default, but it’s useful as a reminder of what the default is set to. I don’t like implicit defaults, but then again I don’t like a lot of the shortcuts taken by PHP.

The only thing you need to do to get it working is to add a PAM module definition file called /etc/pam.d/php. The easy way to create this is to copy an existing one, such as /etc/pam.d/ftp. This will be about right for most people, but read /etc/pam.d/README if you want to understand exactly what’s going on.
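A minimal /etc/pam.d/php might look like the following – this is a sketch based on the stock FreeBSD pam.d files rather than anything official, so check which modules your own release ships with:

```
# /etc/pam.d/php – PAM policy for the "php" service name
# auth
auth		required	pam_unix.so	no_warn try_first_pass
# account
account		required	pam_unix.so
```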

So – to test it. A quick PHP program such as the following will do the trick:

<?php
// pam_auth() fills $error by reference on failure; note that the old
// call-time '&$error' syntax is a fatal error on current PHP versions.
var_dump(pam_auth('auser', 'theirpassword', $error, 0));
print $error;
?>

If there’s an entry in /etc/passwd that matches then it’ll return true, otherwise false, and $error will contain the reason. Actually, it checks the file /etc/master.passwd – the one that isn’t world readable and therefore can contain the MD5 password hashes. And there’s the rub…

This works fine when run as root, but not as any other user; it always returns false. This makes it next to useless. It might be a bug in the code, but even if it isn’t, it leads to interesting questions about security. For example, it would allow a PHP user to hammer away trying to brute-force guess passwords. I’ve seen it suggested that Linux users can overcome the need to run as root by making their shadow password file group- or world-readable. Yikes!

If you’re going to use this with PHP inside Apache, you’re talking about giving the “limited” Apache user access to one of the most critical system files as far as security goes. I can see the LAMP lusers clamouring for me to let them do this, but the answer is “no!” Pecl-pam is not a safe solution to this, especially on a shared machine. You could probably persuade it to use a different password file, but what’s the point? If the www user can read it, all web hosting users can, and you might just as well read it from the disk directly (or use a database). PAM only makes sense for using system-wide passwords to authenticate real users.

I do now have a work-around: if you want your Apache PHP script to modify files in a user’s home directory you can do this using FTP. I’ve written some code to achieve this (not hard) and I’ll post it here if there’s any interest, and after I’ve decided it’s not another security nightmare.