How to stop Samba users deleting their home directory and email


UNIX permissions can send you around the twist sometimes. In theory you can set them up to do anything; in practice, not quite. Here's a good case in point…

Imagine you have Samba set up to provide users with a home directory. This is a useful feature; if you log in to the server with the name “fred” you (and only you) will see a network share called “fred”, which contains the files in your UNIX/Linux home directory. This is great for knowledgeable computer types, but is it such a great idea for normal lusers? If you’re running IMAP email it’s going to expose your mail directory, .forward and a load of other files that Windoze users might delete on a whim, and really screw things up.

Is there a Samba option to share home directories but to leave certain subdirectories alone? No. Can you just change the ownership and permissions of the critical files to root and deny write access? No! (Because mail systems require such files to be owned by their user for security reasons). Can you use permission bits or even an ACL? Possibly, but you'll go insane trying.

A bit of lateral thinking is called for here. Let’s start with the standard section in smb.conf for creating automatic shares for home directories:

[homes]
    comment = Home Directories
    browseable = no
    writable = yes

The “homes” section is special – the name “homes” is reserved to make it so. Basically it auto-creates a share with a name matching the user when someone logs in, so that they can get to their home directory.

First off, you could make it non-writable (i.e. set writable = no). Not much use to the luser, but it does the job of stopping them deleting anything. If read-only access is good enough, it's an option.

The next idea, if you want it to be useful, is to use the directive "hide dot files" in the definition. This basically returns files beginning with a '.' as "hidden" to Windoze users, hiding the UNIX user configuration files and other stuff you don't want deleted. Unfortunately the "mail" directory, containing all your loverly IMAP folders, is still available for wanton destruction, but you can hide this too by renaming it .mail. All you then need to do is tell your mail server to use the new name. For example, in dovecot.conf, uncomment and edit the line thus:

mail_location = mbox:~/.mail/:INBOX=/var/mail/%u

(Note the ‘.’ added at the front of ~/mail/)
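For completeness, the share definition ends up looking something like this ("hide dot files" has defaulted to yes in Samba for a long time, but setting it explicitly makes the intention obvious):

[homes]
    comment = Home Directories
    browseable = no
    writable = yes
    ; defaults to yes anyway; set explicitly so the intent is documented
    hide dot files = yes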


You then have to rename each of the user’s “mail” folders to “.mail”, restart dovecot and the job is done.
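If there are more than a couple of users, a quick shell loop does the renaming. This is only a sketch; it assumes home directories live under /usr/home, so adjust to suit:

#!/bin/sh
# sketch only: rename each user's mbox directory from "mail" to ".mail"
for home in /usr/home/*; do
    if [ -d "$home/mail" ] && [ ! -e "$home/.mail" ]; then
        mv "$home/mail" "$home/.mail"
    fi
done
# pick up the new mail_location
service dovecot restart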

Except when you have lusers who have turned on the "Show Hidden Files" option in Windoze, of course. A surprising number seem to think this is a good idea. You could decide that hiding the files gives advanced users control of their mail and configuration, and that anyone messing with a hidden file can presumably be trusted to know what they're doing. You could even mess with Windoze policies to stop them doing this (ha!). Or you may take the view that all lusers are dangerous, and if there is a way to mess things up, they'll find it and do it. In this case, here's Plan B.

The trick is to know that the default path to shares in [homes] is ‘~’, but you can actually override this! For example:

[homes]
    path = /usr/data/flubnutz
    ...

This maps every user's home share onto a single directory called 'flubnutz'. This is not that useful, and I haven't even bothered to try it myself. It becomes interesting when you add a macro to the path name. %S is a good one to use because it expands to the name of the user who has logged in (the service name); %u, the session username, likewise. You can then do stuff like:

[homes]
     path = /usr/samba-files/%S
     ....

This stores the user’s home directory files in a completely different location, in a directory matching their name. If you prefer to keep the user’s account files together (like a sensible UNIX admin) you can use:

[homes]
     comment = Home Directories
     path = /usr/home/%S/samba-files
     browseable = no
     writable = yes

As you can imagine, this stores their Windows home directory files in a sub-directory to their home directory; one which they can’t escape from. You have to create “~/samba-files” and give them ownership of it for this to work. If you don’t want to use the explicit path, %h/samba-files should do instead.
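Creating the directory is a one-liner per user; something along these lines does it for an existing user called fred (username, group and paths are only examples):

# one-off setup for an existing user "fred" (names and paths illustrative)
mkdir /usr/home/fred/samba-files
chown fred:fred /usr/home/fred/samba-files
chmod 700 /usr/home/fred/samba-files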

I’ve written a few scripts to create directories and set permissions, which I might add to this if anyone expresses an interest.

 

GMail can’t send to sendmail


What’s happening with Google? Their Internet engineering used to be spot on. They’re generally a bunch of clever guys, and they follow standards and their stuff just works. Or did. Lately their halo has been getting a bit tarnished, and problems with GMail are a good case in point.

It all started quietly around a month ago on the 6th August. About a week later, people started complaining that users sending mail to them from GMail were getting bounce messages. It looks like Google had rolled out a broken software update, but they’re keeping a low profile on the subject.

After a great deal of investigation it appeared that their new MTA was attempting to make a STARTTLS connection when delivering mail on port 25. STARTTLS is a mechanism that allows encryption to be added to a standard unencrypted channel: the sender tries a STARTTLS command and, if the receiver supports it, gets an "okay" reply, after which the remainder of the connection is encrypted using TLS. Unfortunately Google's implementation, which had been working for years, is now broken. The GMail lusers got a bounce back a week later saying it couldn't negotiate a STARTTLS connection. No further explanation has been forthcoming. STARTTLS should work, and if it doesn't, GMail should try again without it; but it doesn't.

On the servers I've examined there is no problem with STARTTLS. Other MTAs are continuing to use it. All certificate diagnostics pass. Presumably Google has changed its specification of what kind of TLS/SSL it's prepared to work with, and it's no longer happy with all types. Not all servers have this problem. But Google isn't telling anyone what they've done, at least not so far. Working out what's wrong with their new specification by trial and error takes a while, and I have yet to find a combination that works. And besides, it's not Google's place to tell recipients what kind of encryption they should be using, especially when the default state is unencrypted.

Google does offer a troubleshooter but it doesn’t cover this eventuality. There is an option to report other problems, but to date I’ve had no response.

So what's the solution? The only method I've found that works is to disable STARTTLS on Port 25. This means that Google can't try, fail, and go into sulk mode. And here's the bit you've probably been waiting for: how to do it.

Assuming you have an access DB configured for sendmail (the norm), you need to add an extra line to it and rebuild the map:


srv_features: S

On FreeBSD this file is /etc/mail/access and you can make it active by running make from the /etc/mail directory. But you probably knew that.

The srv_features stuff basically tells sendmail which services to advertise as being available. STARTTLS is option ‘S’, with a lower-case letter meaning “advertise it”, and an upper-case meaning “don’t advertise it”. This over-rides defaults, and all we want to do here is stop advertising STARTTLS. If it’s not advertised, Google doesn’t try using it (at least for now).
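A crude way to check the change has taken effect is to look at the EHLO response; once the map has been rebuilt, 250-STARTTLS should no longer appear in the list (the hostnames here are just examples):

# after rebuilding the access map, STARTTLS should vanish from the EHLO reply
printf 'EHLO test.example.com\r\nQUIT\r\n' | nc -w 5 mail.example.com 25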

You might want to read this sendmail documentation for more information in the normal Sendmail easy-to-understand(!) format. If that doesn’t do it for you, look at section 5.1.4.15 of the manual, available in PDF here.

Now Google may defend this state of affairs by saying that they’re implementing something odd with STARTTLS for “security reasons”. There may even be some justification in this. If I knew what they’d changed I might be able to comment on that, but I can’t. However, even if this was the case, they’d be wrong in principle. Since the dawn of Internet email we’ve had RFCs telling us how things should work. You can’t just change the way you do things and expect everyone else to change to suit you, however large you are. And it’s possible that what Google has done is RFC compliant, even if it is bonkers. There are unspecified aspects in RFCs, and some grey areas. However, anyone who’s been around for long enough will know that Sendmail is the de-facto MTA. If you have an argument about the interpretation of an RFC, you can settle it by asking the question “Does it work with sendmail?” If it doesn’t, it’s your problem.

And while we're at it, it's really good of Google to stop anyone reading your email while it's in transit (could they be thinking of the NSA here?). After all, you don't want email sent through GMail to be readable by anyone until it's delivered, do you? The only snag is that it is still being read and analysed, by Google. Of course. Email is never secure unless you have end-to-end encryption, and by definition, you can't get this with a webmail service.

Problems receiving mail from GMail – STARTTLS is a bad idea


Note: You may wish to read this follow-up article, which contains a solution.

A couple of weeks ago, users started complaining that people using GMAIL (and possibly iCloud) were having their emails bounced back to them from my servers. This is odd – most complaints on the Internet are from users of dodgy hosting companies having their mail rejected by GMail as likely spam. But I haven’t blacklisted Google, and all other mail is working, so they must have been mistaken.

But as soon as I could, I tried it for myself. And sure enough, a bounce came back. The relevant bit is:

Technical details of temporary failure:
TLS Negotiation failed: generic::failed_precondition:
               starttls error (0): protocol error

On fishing around in Sendmail logs, I found evidence that this has been going on all over the place:

sm-mta[84848]: STARTTLS=server, error: accept failed=-1, SSL_error=1, 
               errno=0, retry=-1, relay=mail-qg0-f50.google.com [209.85.192.50]
sm-mta[84848]: STARTTLS=server: 84848:error:1408A0C1:SSL
               routines:SSL3_GET_CLIENT_HELLO:no shared cipher:/usr/src/secure
               /lib/libssl/../../../crypto/openssl/ssl/s3_srvr.c:1073:
sm-mta[84848]: t7QJXCPI084848: mail-qg0-f50.google.com [209.85.192.50] did
               not issue MAIL/EXPN/VRFY/ETRN during connection to MTA

Oh my! The STARTTLS stuff isn't working because there's no shared cypher! Hang on a minute, there isn't supposed to be. Who told Google they could use STARTTLS on port 25? It'd be neat if it worked, but it's not configured – at least not with a certificate from a public CA. It actually works just fine if you are cool with self-signed (private) certificates. So what is Google playing at?
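If you want to see what your own server is actually offering, openssl will go through the STARTTLS handshake for you and show the certificate and the cipher that gets negotiated (hostname is just an example):

# talk SMTP on port 25, issue STARTTLS, and dump the certificate and cipher
openssl s_client -connect mail.example.com:25 -starttls smtp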

In the wake of Edward Snowden, people have started worrying about this kind of thing, so companies like Google are trying to be seen doing something about it, and encrypting mail might seem like a good idea. Unfortunately STARTTLS is a bad idea. The rationale behind STARTTLS was to add encryption to a previously unencrypted port's traffic. If the sender issued a STARTTLS command as part of the protocol, the connection could switch into TLS mode if both ends knew how; otherwise it would just carry on as normal. The IETF was very keen on this in the late 1990s as an easy fix, citing all sorts of iffy reasons, generally to do with having two ports: one standard and one encrypted. They thought it would be confusing, requiring different URLs, and not allow for the kind of opportunistic automatic encryption Google seems to be attempting.

As far as I’m concerned, this is rubbish. Having clearly defined encrypted and unencrypted ports means you know where you are. It either is or it isn’t. If you say something must be encrypted, turn off the unencrypted port. STARTTLS allows a fall-back to plain text if you specify the clear text port; and if you have a man-in-the-middle you’ll never know that the STARTTLS was stripped from the negotiations. It opens up a vulnerability that need not be there, all for the sake of saving a port. And time is on my side in this argument. Since 1999 the implementation of encrypted ports has really taken off, with https, smtps (in spite of 465 being rescinded), imaps – you name it – all servers and clients support it and you know where you are.

So what's this sudden clamouring for the insecure STARTTLS? Naivety on the part of the large internet companies, or a plot to make people think their email traffic is now safe from snoopers when it's not?

I've reported this problem and await an answer from Google, but my best guess is that they're speculatively using STARTTLS, and then barfing and throwing their toys from the pram when it doesn't work because the verification can't be done. Having thought about it, I'm okay with the idea of trying STARTTLS as long as you don't mind which CA issued the certificate; and if you can't negotiate a TLS link, go back to plain text. In many ways it'd be better to use the well-known port 465 for TLS, and if it can't be opened, go to plain text on 25. Except there's no guarantee that port 465 is on the same server as port 25, and it's normally configured to require SASL authentication. As everyone knows, apart from Google it seems, assumption is the mother of all foul-ups.

Encryption is a good idea, but making assumptions about Port 25 being anything other than straight SMTP is asking for trouble.

 

Using MX records to create a backup mail server

There's a widely held misunderstanding about "main" and "backup" MX records in the web developer world. The fact is that there is no such thing! Anyone who tells you different is plain wrong, but a lot of web developers believe otherwise, and some ISPs give in and provide them because it's simpler than arguing. It's possible to use two MX records in some crazy scheme that looks like a backup server, but in practice it does very little to help and quite possibly rather a lot to hinder. It won't make your email more robust in practical terms.

If you are using an email server at a data centre, with reasonable expectation of an always-on connection, you need a single MX record. If your processing requirements are great you can have multiple records at the same level to spread the load between peered servers, but none would be a backup any more than any other. Senders simply get one server at random. I have a single MX record.

“But you must have a backup!”, is the usual response. I do, of course, but it has nothing to do with having multiple MX records. Let me explain:

A domain's MX record gives the address of the server to which its email should be sent. In practice, this means the company's mail server; or if they have multiple servers, the incoming one. Most companies have one mail server address, and this is fine. If that mail server dies it needs to be repaired or replaced, and the replacement gets the same address.

But what of having a second MX record with an alternative, lower-priority server? It may sound good, but it's nuts. Think about it – the company's mail server is where the mail ends up. It's where the users expect to log in and read it. If you have an alternative server, the mail will go there instead, but the users won't be able to read it. This assumes the backup is on a different site, still reachable when the first site goes down. If it's on the same site it's even more pointless, as it will be affected by the same connectivity issues that took the first one offline. Users will have their existing mail on the broken server, new mail will be on a different server, and you'll be in a real bugger's muddle trying to reconcile the two later. It makes much more sense to just fix the broken one, or switch in a backup at the same location and on the existing IP address. In extremis, you can change the MX record to point to a replacement server elsewhere.

There's a vague idea that if you don't have a second MX, mail will be lost. Nothing could be further from the truth. If your one and only mail server is off-line, the sender's server will queue up the message and keep trying until it comes back – it will normally do this for up to a week. It won't lose it. Most mail servers will also report back to the sender if they haven't been able to get through after four hours, so the sender will know there's a problem and won't worry that you haven't replied.
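Those figures aren't plucked from the air; they're simply the sending server's queue timeouts. In sendmail's .mc terms the knobs look something like this (values shown are illustrative; the stock defaults are in the same ballpark):

dnl illustrative values; the stock defaults are similar
define(`confTO_QUEUERETURN', `7d')dnl  give up and bounce after a week
define(`confTO_QUEUEWARN', `4h')dnl    warn the sender after four hours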

If you have two mail servers, one on a different site, the secondary server will start receiving emails when the first one goes off-line. It'll just queue them up, waiting to forward them to the primary one, but in this case the sender won't get notification of the delay. Okay, if the primary server is off-line for more than a week it will prevent mail loss – but why would the primary server ever be off-line for a week? The company won't function unless it's repaired quickly.

In the old days of dial-up, before POP3 came in to being, some people did use SMTP in a way where a server in a data centre forwarded to the remote site when it connected. I remember Cliff Stanford had a PC mail client called Turnpike that did just this in the early days of Demon. But SMTP was designed for always-on connections and POP3 was designed for dial-up, so POP3 won out.

Let’s get real: There are two likely scenarios for having a mail server off-line. Firstly, the hardware could be dead. If so, get it repaired, and in less than a week. Secondly, the line to the server could be down, and this could be medium-term if someone with a JCB has done a particularly “good job” on it. This gives you a week to make alternative arrangements and direct the mail down another line, which is plenty of time.

So, the idea of having a “backup” MX is pointless. It can only send mail to an off-site server; it doesn’t prevent any realistic mail loss and your email ends up where your users can’t get it until the primary server is repaired. But is there any harm in having one if it makes you feel better?

Actually, in practice, yes. It does make matters worse. In theory mail will just pile up on a spare server and get forwarded later. However, this spare server probably isn't going to be up to the same specification as the primary one. They never are – they sit there idling, with nothing to do nearly all the time. They won't necessarily have the fastest line; their spam and virus filtering will be out-of-date or non-existent, and they have a finite amount of disk space to absorb mail. This can really matter if you end up storing and forwarding a large amount of spam, as is often the case these days. The primary server can be configured to discard it quickly, but this isn't a job appropriate for the secondary one. So the spam builds up until the secondary's ancient and meagre disk space is exhausted, and then it tells the sender to give up trying due to a "disk full" error – and the email is bounced off into the ether. It'd have been much better to leave it on the sender's server in the first place.

There are other security issues with having a secondary server. One problem comes with spam filtering. This is best done at the end of the line; it's not the place of a secondary server to determine what gets delivered and what doesn't. For starters, it doesn't see the corpus of legitimate emails, so it won't be so adept at comparing and sorting. It's probably going to be some old spare kit that's under-powered for modern spam processing anyway. Worse, when it stores and forwards a message, the primary server will see it as coming from a "friend" rather than from a dubious source in a lawless part of the Internet. Spammers do use secondary MX records as a back door to get around virus and spam filters for this very reason.

You could, of course, specify and configure a secondary mail server that is up to the job, with loads of disk space to withstand a DoS attack and fully functional spam filters, regularly maintained and sharing Bayesian analysis data and local rules with the actual server. And then have this expensive resource sitting there doing nothing all day but converting electricity into heat. Realistically, it's not going to happen.

By now you may be wondering why, if multiple MX records are so pointless, they exist at all. It's one of those Internet myths; a paradigm users feel comfortable with, without questioning the technology behind it. There is a purpose, but it's not "backup".

When universal Internet email was new, messages would be sent to a user “@” computer, and computers were normally shared, so each would have multiple possible users. The computer would receive the email and put it in the mailbox corresponding to the user part of the address.

When the idea of sending email to a domain rather than a specific server came into being, MD and MF records also came into being. An MD record gave the IP address of the server where mail should end up (the Mail Destination). An MF record, if it existed, allowed the mail to be forwarded through another machine first (Mail Forward). This was sometimes necessary, for example if the MD was on a dial-up connection or behind a firewall and unable to accept direct connections over the Internet. The mail would go to the MF instead, and the MF would send it on to the MD – presumably by batching it up and opening a line, transiting a firewall or using some other non-public mechanism.

In the mid-1980s it was felt that having both MD and MF records placed double the load on DNS servers, so unified MX records, which could be read with a single lookup, were born. To allow for multiple levels of mail forwarding through firewalls, they were prioritised to 99 levels, although if you need more than three for any scheme you're just being silly.

Unfortunately, the operation of MX records, rather than the explicitly named MF and MD, is a bit subtle. So subtle it’s often very misunderstood.

The first thing you need to understand is that email delivery should be controlled from the DNS for the domain, NOT from the individual mail servers that exist on that domain. This may not be obvious, but this is how it’s designed to work, and when you think of it, a central point of control is a good thing.

Secondly, DNS records should be universal. Every computer on the Internet, making the same DNS query, should get the same result. With the later addition of asymmetric NAT, there is now an excuse for varying this, but you can come unstuck if you get it wrong and that’s not what it was designed for.

If you want to reconfigure the route that mail takes to a domain, you do it by editing the single master DNS record (zone file) for that domain – you leave the individual mail servers alone.

Now consider this problem: an organisation (called "theorganisation") has a mail server called A. It's inside theorganisation's firewall, for its own protection. Servers on the Internet can't talk to A directly, because the firewall won't let them through, but local users send and receive mail through it all day long. To receive external mail there's another server called B, this time outside the firewall. A rule on the firewall allows specific traffic from B to reach A. The relevant part of the zone file may look something like this (at least logically):

MX 1 A.theorganisation
MX 2 B.theorganisation
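In literal zone-file syntax (names invented for the example) that would be something like:

; illustrative only: the preference numbers are what matter
theorganisation.example.    IN  MX  1  a.theorganisation.example.
theorganisation.example.    IN  MX  2  b.theorganisation.example.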

So how do these simple lines tell the world, and servers A and B, how to operate? You need to understand the rules…

When another server, which I shall call C, sends a message to alice@theorganisation it will look up the MX records for theorganisation, and see the records above. C will then attempt to contact alice at the lowest numbered MX it finds, which points to server A. If C is located within the same department, it will be within the firewall and mail can be delivered directly; otherwise the firewall will block it. If C can’t contact A because of the firewall it will try the next highest on the list, in this case B. B is on the Internet, and will accept connections from C (and anyone else). The message arrives at B for Alice, but alice is not a user of B. However, B knows that it’s not the final destination for mail to theorganisation because the MX record says there’s a lower numbered server called A, so it attempts to forward it there. As B is allowed through the firewall, it can deliver the message to A, where it’s finally put in alice’s mailbox.
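You can watch the first step of this from any machine with dig installed (the domain here is made up, so substitute a real one):

# list the MX records; the number is the preference, and senders try the
# lowest-numbered server first
dig +short MX theorganisation.example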

This may sound a bit complicated, but the rules for MX records can be made to route mail through complex paths simply by editing the DNS zone file, and this is how it’s supposed to work. The DNS zone file MX records control the path the mail will take. If you try to use the system for some contrary purpose (like a poor-man’s backup), you’re going to come unstuck.

There is another situation where you might want multiple MX records: if your mail server has multiple network interfaces on different (redundant) lines. If the MX priority value is the same for both, each IP address will (or should) be used at random, but if you have high-cost and low-cost lines you can change the priority to favour one route over the other. With modern routing this artifice may not be necessary – let the router choose the line and mangle the IP addresses into one for you. But sometimes a simple solution works just as well.

In summary, MX record forwarding is not designed for implementing backup mail servers, and any attempt to use it for that purpose is going to do more harm than good. The idea that this is what it's all about is a myth.

 

Fetchmail, Sendmail and oversized emails

There's a tendency for lusers to try to email anything these days. If you thought a few Gig of outgoing mail queue was enough, you haven't come across the luser who decided to email the contents of a CD (uncompressed) to all her friends. Quite what they'd have made of their iPhones trying to download it I'll never know.

Sendmail has a method for limiting emails to a sensible size. As a reminder, inside host.example.com.mc you need to add:

# The following sets the maximum message to 5Mb - otherwise it's infinite
define(`confMAX_MESSAGE_SIZE', `5242880')

Then run “make” and “make install” and “make restart”. This will generate the sendmail.cf (and any hashmaps) before restarting. The bit you always forget when changing .mc files is the “make install”. This is all for FreeBSD – Linux types, please do it your own way.
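In other words, on FreeBSD:

# run from /etc/mail: regenerate sendmail.cf from the .mc, install it, restart
cd /etc/mail
make && make install && make restart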

So this is great – anyone sending an over-sized email is bounced from their server, and local users submitting email will be similarly clipped into the world of sane and sensible (if you regard something as large as 5Mb as sensible for an email).

But I came across one interesting issue recently and it could happen to you, too, if you’re using fetchmail.

For those who haven’t come across it before, fetchmail pulls emails from a POP3 box and delivers them to local users – dropping them into your local MTA by default. This is reasonable, as everything then goes through the spam filtering, procmail and anything else you have defined. It’s really useful for legacy situations where someone’s ended up with a POP3 box somewhere and you need to integrate it with the rest of their mail.

Fetchmail does plenty more besides, and has a config file to match the functionality. Presumably as a reaction against the complexity of the sendmail.cf syntax, this one tries to operate in plain English. I've never quite figured out the full syntax, but it's designed to be "flexible" and figure out what you're trying to say. Personally I don't think it succeeds in being any more friendly than sendmail.cf, in spite of being at the other end of the spectrum.

Anyway, the fun comes when fetchmail downloads an over-sized email from the POP3 box and delivers it locally via Sendmail. Sendmail will reject it and send a bounce back to the original sender. So far, so good; but if fetchmail is running as a cron job every five minutes, the luser gets a bounce back every five minutes, because the outsized mail is stuck in the POP3 box. Oops! It may serve them right, but they shouldn't be allowed to suffer for too long.
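(The cron job in question is nothing exotic; something like this in the relevant user's crontab, assuming the FreeBSD port's install path:)

# poll the POP3 box every five minutes using ~/.fetchmailrc
*/5  *  *  *  *  /usr/local/bin/fetchmail --silent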

Fortunately one of fetchmail's many options allows you to control the maximum download size, if you can figure out the syntax. It's available as the command-line option -l, but if you prefer to keep things in the .fetchmailrc file (the best plan) you'll need to proceed as per the following example. The keywords are "limit" and "limitflush".

  • local-postmaster-account is the login for your local postmaster – undelivered emails go there.
  • pop3.isp.co.uk – the mail server with the POP3 box
  • users-domain.co.uk – the domain whose email ends up in the POP3 box above
  • pop3-username, pop3-password – what you use to log into the POP3 box
  • tom, dick and harry are local mailboxes, with tom being the default.
    set postmaster local-postmaster-account

    poll pop3.isp.co.uk proto pop3 aka users-domain.co.uk no envelope no dns:
    user "pop3-username", with password "pop3-password",
    limit 5242368 limitflush to

    dick
    "dick@users-domain.co.uk" = dick
    "richard@users-domain.co.uk" = dick

    harry
    "harry@users-domain.co.uk" = harry

    tom
    "tom@users-domain.co.uk" = tom
    "*@users-domain.co.uk" = tom

    here

This isn't intended as a tutorial in writing .fetchmailrc files – only an example of the use of limit and limitflush.

So what's going on? The limit keyword must be part of the poll statement, and is followed by the size (in bytes) of the largest email to be retrieved. In the example it's 512 bytes less than the 5Mb used in Sendmail (I feel I need a bit of slack on a boundary condition; it may be okay if they're identical, but why push your luck?).

Please read the fetchmail documentation for full details (although it's light on examples). With just the "limit" keyword in use, over-sized mails will be left in the POP3 box. Adding the "limitflush" keyword will silently delete over-sized emails so they don't bother you again. You may not want to do this! If you don't, someone will have to retrieve or delete the offending emails from the POP3 box manually.

Note that putting a limit on the download will prevent the bounce messages going to the original sender, as the message never gets as far as sendmail.