FreeBSD/Linux as Fibre Broadband router

British Telecom, bless them, has decided that copper telephone lines have to go and is forcing everyone onto fibre Internet and VoIP. Except rural customers currently connected to the Internet using a wet piece of string if they’re lucky, of course.

Incidentally, “Fibre Broadband” is a nonsense in a technical sense but the battle is lost – the public believes Broadband is any Internet connection to the home that isn’t dial-up.

Although I’ve written about routing on FreeBSD before, I thought it was time for an update. Why route on FreeBSD? Because unlike the cheap and nasty “routers” supplied by domestic (and some commercial) ISPs, it doesn’t crash. You don’t have to turn it off and on again. And it does what it’s told, with great diagnostics. You can also run plenty of other services on the same box if it’s powerful enough, or your throughput is modest.

Most of this should work fine on Linux, although the networking is generally considered less efficient than the real thing. However, at less than 1Gbps on a single line this isn’t going to matter, if it matters at all. With Linux you get less of the nuts and bolts built in to the base system so you may have to install extra packages depending on which distribution you are using. But this is all standard stuff so shouldn’t be too difficult. It’s the settings that matter, and probably the reason you’re reading this!

In this first article I’ll just consider a gateway router with NAT, and leave DNS, DHCP and other options until later.

Setting up PPPoE using user-ppp

First off, your WAN connection. With FTTC and FTTP this is normally a little white box – either a VDSL modem or an ONT. It connects to the phone line or fibre cable on one end, and has an RJ45 on the other that looks like Ethernet, because it is Ethernet. I’m going to call them Ethernet Modems, as they’re treated the same for our purpose. However, being Ethernet won’t do you much good as it’s just talking a protocol called PPPoE – or Point-to-Point Protocol over Ethernet.

PPP is an old protocol for making an Internet connection using dial-up, but it’s evolved (or suffered mission creep) and it’s now rather complicated thanks to all the baggage. Fortunately you can ignore the baggage and concentrate on the PPPoE stuff, once you know which is which. And that’s always the trick.

You’ll need a host (i.e. computer) with two Ethernet ports unless you want a complicated life. If you’re using an old PC with just one you can get away with a USB3 Ethernet adapter, but having a couple of server-grade NICs on the motherboard or add-on cards is the best way to go. Very generally, Intel or Broadcom are good choices, Realtek is at the low end.

You need to connect your Ethernet Modem to one port on your host and the other port goes to the LAN.

If your Ethernet Modem and the host you're planning to use as a router are in different places you can connect them using a VLAN. It's proper Ethernet and can be switched. Without a VLAN it's not so simple, so plug it in using a direct cable.
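As a minimal sketch of the VLAN approach on FreeBSD (the interface name bge1 and the VLAN tag 10 are just examples – use whatever your NIC and switch are actually configured for), the tagged interface can be created in /etc/rc.conf and then handed to PPPoE:

# /etc/rc.conf - create bge1.10 carrying VLAN tag 10 on physical NIC bge1
vlans_bge1="10"
ifconfig_bge1="up"
ifconfig_bge1_10="up"

# then in ppp.conf, point PPPoE at the tagged interface instead:
#   set device PPPoE:bge1.10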

PPP is built in to the base system on FreeBSD and friends. Type ppp (as root) and it'll start up in interactive mode. If it doesn't, you're not using BSD, therefore lack a base system, and will have to install it as a package. You might like to start here: https://tldp.org/HOWTO/PPP-HOWTO/

Although you can compile PPP support into the kernel, the ppp we’re talking about is a program written by Toshiharu OHNO and Brian SOMERS in the early 1990s, and part of BSD since FreeBSD and OpenBSD 2. It’s the normal straightforward way of doing things.

ppp has a simple config file in /etc/ppp/ppp.conf. It can contain profiles for multiple services in sections, with the service name being arbitrary, and ending in a colon (“:”). You specify the service when you run it, and stuff in other sections is ignored. This is a hangover from the days when people had multiple dial-up connections.

Here’s a service definition for Cloudscape, one of my favourite ISPs, but other UK FTTP services will be similar or identical. UK FTTC and SoGEA modems are pretty much the same too.

cloudscape:
  delete default                # May already have a
                                # default route configured elsewhere
  set device PPPoE:bge1
  set authname user-name-supplied-by-ISP
  set authkey password-supplied-by-ISP
  set dial
  set login
  set lcp
  set mru 1492
  set mtu 1492
  disable ipv6cp              # Turn off IPv6
  enable ipcp                 # Turn on IPv4 (default)
#  enable lqr                 # Turn on Link Quality Requests
                              #   (detect dropped line)
  enable echo                 # Enable echo for LQR
  iface name wan0
  add default HISADDR

The ppp program was originally used for serial PPP connections to dial-up ISPs or organisations, but here we’re just using it for PPPoE. In support of switching ISPs it can add stuff to config files like resolv.conf and the routing table, which in the old days tended to be dynamic.

Feel free to read the manual that explains what the options above do, but briefly I’m starting by deleting the default route, which probably won’t exist unless you’ve configured it (possibly using DHCP), but if it does will cause problems when ppp adds another.

  set device PPPoE:bge1

This says we’re using PPPoE over the bge1 Ethernet card. Obviously set this to the Ethernet card to which your Ethernet Modem (e.g. ONT) is attached.

  set authname user-name-supplied-by-ISP
  set authkey password-supplied-by-ISP

This is the user-name and password supplied by your ISP. These tend to be low security, but are needed for the protocol for historic reasons.

  set dial
  set login
  set lcp

This will cause ppp to dial, log in and negotiate the connection. Some people will try to tell you that internet lines are configured with DHCP – that's for LANs. On a point-to-point link the PPP control protocols do the equivalent job: LCP (Link Control Protocol) negotiates the link itself, and IPCP then supplies details such as what your IP address is and which DNS servers to use.

  set mru 1492
  set mtu 1492

PPPoE adds eight bytes of protocol data to every frame, so a full 1500-byte Ethernet payload won't fit 1:1 inside a PPPoE packet. Reducing the MTU to 1492 (1500 minus 8) gets around this and avoids fragmentation, which is a good thing. LCP might suggest or force a lower MTU but there's no harm in specifying it.

  disable ipv6cp              # Turn off IPv6
  enable ipcp                 # Turn on IPv4 (default)
#  enable lqr                 # Turn on Link Quality Requests
                              #  (detect dropped line)
  enable echo                 # Enable echo for LQR

This disables IPv6 and enables IPv4 (which is on by default anyway). If you want to use IPv6 your service provider needs to support it, and most don’t.

LQR is probably not going to be necessary for our purposes and generates warnings, so I’ve left the line in but commented it out for now. The enable echo therefore has no effect.

  iface name wan0

By default, ppp will name its connections as tun0, tun1 and so on (tun being Tunnel). This means that you never know what the interface is going to be called, as other tunnels may exist before you start this one. We're going to be referring to the interface in the PF firewall, so it helps to be sure what its name will be. The line above sets the name manually, and I've called it wan0, which is logical. You may, of course, have multiple WAN connections including dial-up backups, so giving them a sensible name is, er, sensible. You can call it anything you like if you're nuts.

  add default HISADDR

This is an example of ppp messing with your system configuration – in this case it's taking the address of the ISP's end of the link, negotiated when the connection comes up and represented by the macro HISADDR, and adding a default route via it. If you have a static IP address you might want to set the route statically in the normal way.

Likewise, if you add the line "enable dns" it will take the DNS servers offered during negotiation and add them to resolv.conf. It won't remove them again afterwards, and may well end up messing up whatever local DNS arrangements you have, so I prefer to do this manually.

Once you’ve edited ppp.conf you can test it out interactively with “ppp cloudscape” and see what happens. Type “dial” and it should make the connection, and wan0 should appear in your list of network interfaces. Use netstat -r to see if the new default route has appeared.
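For instance, a quick sanity check once the link is up might look like this (a sketch – the interface name matches the config above, and 8.8.8.8 is just a well-known public address to ping):

ifconfig wan0                 # the new interface, with your WAN address
netstat -rn | grep default    # default route via the ISP's end of the link
ping -c 3 8.8.8.8             # quick end-to-end test (Google's public DNS)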

Setting up the pf firewall

user-ppp is a large program that tries to do everything, including NAT and being a firewall. This isn't very UNIX-like in philosophy, but you can use these facilities if you like. I prefer to have a dedicated standard firewall, PF, and leave that to do everything firewall-like.

If you're setting up a router you're probably going to need outbound NAT – the whole LAN sharing your one public address. Your /etc/pf.conf file will look something like this:

scrub in all                  # Normalise and reassemble fragmented packets
WAN=wan0                      # The PPP interface we named above
WANIP=1.2.3.4                 # Public IP address supplied by the ISP
nat pass on $WAN from 192.168.1.0/24 to any -> $WANIP
#rdr pass on $WAN proto tcp from any to $WANIP port 80 -> 192.168.1.123

The WAN IP comes from your ISP, although you will be able to see it using "ifconfig wan0" if you don't have it to hand. I'm assuming your LAN is 192.168.1.0/24 – just set this to whatever you're using. And that's about it.
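Incidentally, if you'd rather not hard-code the address, pf can be told to use whatever address the interface has at the time by putting the interface name in parentheses – a minimal sketch, keeping the same LAN:

WAN=wan0
nat pass on $WAN from 192.168.1.0/24 to any -> ($WAN)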

As a bonus, the commented-out example line at the end would redirect external port 80 to a web server on LAN address 192.168.1.123 – an open port, in other words. Peter Hansteen has written an excellent book on PF, called "The Book of PF", which will tell you everything you need to know, and it's well documented in various online handbooks and man pages, unlike user-ppp's built-in firewall.

The only reason for using user-ppp for NAT is if you're on a dynamic IP address, in which case add "enable nat" to the profile and ppp_nat=yes to /etc/rc.conf.

Kicking it all off

First you need to enable routing:

 sysctl net.inet.ip.forwarding=1

This will work until reboot, and you can turn it off again by setting it to zero if something bad happens, like your NIC catching fire. Then dial your ISP (Cloudscape in this example):

ppp -ddial cloudscape

You should now have a connection to the Internet on the BSD box. Now enable PF for NAT.

service pf start (or onestart)

Or if it's already running, use "service pf reload" to load the new config. At this point every machine on the LAN should be able to use your router's LAN IP address as a gateway.

When you’re happy it works, to make this kick off automatically on boot, modify /etc/rc.conf:

sysrc ppp_enable=yes
sysrc ppp_mode=ddial
sysrc ppp_profile="cloudscape"
sysrc pf_enable=yes
sysrc gateway_enable=yes

Optionally “sysrc ppp_nat=yes” if you’re not using PF for NAT. Or if you’re editing rc.conf directly:

pf_enable="YES"
gateway_enable="YES"

ppp_enable="YES"
ppp_mode="ddial"
#ppp_nat="YES"	# We let PF do NAT
ppp_profile="name_of_service_provider"

I will do a part two to this post explaining how to configure DNS and DHCP, although there’s no reason these need to be on the same host you’re using as a router. In fact it’s good practice to separate them and have more than one DHCP and DNS server if you have the resources.

I hope you found it useful – any questions add a comment below.

How to tell if a host is up without ping

Some people seem to think that disabling network pings (ICMP echo requests to be exact) is a great security enhancement. If attackers can’t ping something they won’t know it’s there. It’s called Security through Obscurity and only a fool would live in this paradise.

But supposing you have something on your network that disables pings and you, as the administrator, want to know if it's up? My favourite method is to send an ARP request for the IP address in question and see if you get a response.

ARP is how you translate an IP address into a MAC address to get the Ethernet packet to the right host. If you want to send an Ethernet packet to 1.2.3.4 you put out an ARP request “Hi, if you’re 1.2.3.4 please send your MAC address to my MAC address”. If a device doesn’t respond to this then it can’t be on an Ethernet network with an IP address at all.

You can quickly write a program to do this in ‘C’, but you can also do it using a shell script, and here’s a proof of concept.

#!/bin/sh
! test -n "$1" && echo $0: Missing hostname/IP && exit
#arp -d $1  >/dev/null 2>/dev/null
ping -t 1 -c 1 -q $1 >/dev/null
arp $1 | grep -q "expires in" && echo $1 is up. && exit
echo $1 is down.

You run this with a single argument (hostname or IP address) and it will print out whether it is down or up.
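For example, saved as hostup.sh (the name and the address are made up) and pointed at something on the LAN:

# ./hostup.sh 192.168.1.50
192.168.1.50 is up.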

The first line is simply the shell needed to run the script.

Line 2 bails out if you forget to add an argument.

Line 3, which is commented out, deletes the host from the ARP cache if it’s already there. This probably isn’t necessary in reality, and you need to be root user to do it. IP address mappings are typically deleted after 20 minutes, but as we’re about to initiate a connection in line 4 it’ll be refreshed anyway.

Line 4 sends a ping to the host. We don’t care if it replies. The timeout is set to the minimum 1 second, which means there’s a one second delay if it doesn’t reply. Other ways of tricking the host into replying exist, but every system has ping, so ping it is here.

Line 5 will print <hostname> is up if there is a valid ARP cache entry, which can be determined by the presence of "expires in" in the output. Adjust as necessary.

The last line, if still running, prints <hostname> is down. Obviously.

This only works across Ethernet – you can't get an ARP resolution on a different network (i.e. once the traffic has got through a router). But if you're on your organisation's LAN and looking to see if an IoT device is offline, lost or stolen, then this is a quick way to poll it and check.

Why can’t I ping my Amazon Echo?

The simple answer is that the current Amazon Echo devices don't respond to a ping – or technically an ICMP echo request. There's a lot of waffle on the web saying this is because they're too simple to do it, but this isn't the case. The original Echo (at least before software updates) and the Echo Show 8” most certainly did respond to a ping, but the functionality has been dropped since then. Some people naively think that answering pings is a security risk – part of a doctrine known as Security Through Obscurity. As it's easy enough to find an Echo without a ping, it's only a slight inconvenience to a would-be attacker and a big inconvenience to a network administrator.

Most later Echos do have open ports, however, so you can check to see if it’s alive because the port will be there. I emphasise “open”, as Echos use quite a lot of ports that aren’t always open, for things like setup or communicating out. But these ports are open and can be connected to – even if the connection is refused it shows there’s something there to refuse it.
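So one rough-and-ready check is simply to poke one of those ports. A sketch using nc (the address is made up – pick a port from the table below for your model):

nc -z -w 2 192.168.1.50 8888 && echo "Something answered on port 8888"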

Based on my incomplete collection of Echo devices, they have the following characteristics:

Model                                 Ping?   Ports
Original Echo
Echo Dot (4th generation)                     1080, 6543, 8888
Echo Flex                                     1080, 8888
Echo Dot (2nd generation)                     1080, 8888
Echo Dot (3rd generation)                     1080, 8888
Echo Show 8-inch (2nd generation)     Y       8009
Echo Spot (1st generation)
Echo Show 5-inch

So how can you reliably tell if your Amazon Echo device is alive on the network? Rather than messing around with ports, my favourite way is to send it an Ethernet ARP request and see if you get a reply. I did say disabling ping was a fool's solution to security.

See here for how to do this.

Using ddrescue to recover data from a USB flash drive

If you're in the data recovery, forensics or just storage maintenance business (including as an amateur) you probably already know about ddrescue. Released about twenty years ago by Antonio Diaz Diaz, it was a big improvement over the original concept, dd_rescue, from Kurt Garloff in 1999. Both copy a device into a disk image (which is just a file in Unix), trying to extract as much data as possible when the drive itself has faults.

If you’re using Windows rather than Unix/Linux then you probably want to get someone else to recover your data. This article assumes FreeBSD.

The advantage of using either of these over dd or cp is that they expect to find bad blocks in a device and can retry or skip over them. Ordinary copy tools aren't built for failing media: dd will give up at the first read error (or, with conv=noerror, plough on and lose data), and cp will just stop. ddrescue is particularly good at retrying failed blocks, and reducing the block size to recover every last readable scrap – and it treats mechanical drives that are on their last legs as gently as possible.

If you’re new to it, the manual for ddrescue can be found here. https://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html

However, for most use cases the command is simple. Assuming the device you want to copy is /dev/da1 and you’re calling it thumbdrive the command would be:

ddrescue /dev/da1 thumbdrive.img thumbdrive.map

The device data would be stored in thumbdrive.img, with ongoing state information stored in thumbdrive.map. This state information is important, as it allows ddrescue to pick up where it left off.

However, ddrescue was written before USB flash drives (pen drives, thumb drives or whatever) were common. That's not to say it doesn't work on them, but they have a few foibles of their own. It's still good enough that I haven't modified the ddrescue code to cope; a bit of shell script around it does the necessary.

USB flash drives seem to fail in a different way to Winchester disks. If a block of Flash EPROM can’t be read it’s going to produce a read error – fair enough. But they have complex management software running on them that attempts to make Flash EPROM look like a disk drive, and this isn’t always that great in failure mode. In fact I’ve found plenty of examples where they come across a fault and crash rather than returning an error, meaning you have to turn them off and on to get anything going again (i.e. unplug them and put them back in).

So it doesn't matter how clever ddrescue is – if it hits a bad block and the USB drive controller crashes, then it's going to be waiting forever for a response and you'll just have to come and reset everything manually and resume. One of the great features of ddrescue is that it can be stopped and restarted at any time, so continuing after this happens is "built in".

In reality you’re going to end up unplugging your USB flash drive many times during recovery. But fortunately, it is possible to turn a USB device off and on again without unplugging it using software. Most USB hardware has software control over its power output, and it’s particularly easy on operating systems like FreeBSD to do this from within a shell script. But first you have to figure out what’s where in the device map – specifically which device represents your USB drive in /dev and which USB device it is on the system. Unfortunately I can’t find a way of determining it automatically, even on FreeBSD. Here’s how you do it manually; if you’re using a version of Linux it’ll be similar.

When you plug a USB storage device into the system it will appear as /dev/da0 for the first one; /dev/da1 for the second and so on. You can read/write to this device like a file. Normally you’d mount it so you can read the files stored on it, but for data recovery this isn’t necessary.

So how do you know which /dev/da## is your media? The easy way to tell is that it'll appear on the console when you first plug it in. If you don't have access to the console it'll be in /var/log/messages. You'll see something like this.

Jun 10 17:54:24 datarec kernel: umass0 on uhub5
kernel: umass0: <vendor 0x13fe USB DISK 3.0, class 0/0, rev 2.10/1.00, addr 2> on usbus1
kernel: umass0 on uhub5
kernel: umass0: on usbus1
kernel: umass0: SCSI over Bulk-Only; quirks = 0x8100
kernel: umass0:7:0: Attached to scbus7
kernel: da0 at umass-sim0 bus 0 scbus7 target 0 lun 0
< USB DISK 3.0 PMAP> Removable Direct Access SPC-4 SCSI device
kernel: da0: Serial Number 070B7126D1170F34
kernel: da0: 40.000MB/s transfers
kernel: da0: 59088MB (121012224 512 byte sectors)
kernel: da0: quirks=0x3
kernel: da0: Write Protected

So this is telling us that it's da0 (i.e. /dev/da0).

The hardware identification is “<vendor 0x13fe USB DISK 3.0, class 0/0, rev 2.10/1.00, addr 2> on usbus1” which means it’s on USB bus 1, address 2.

You can confirm this using the usbconfig utility with no arguments:

ugen5.1:  at usbus5, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
...snip...
ugen1.1: at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen1.2: at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (300mA)

There it is again, last line.

usbconfig has lots of useful commands, but the ones we're interested in are power_off and power_on. No prizes for guessing what they do. However, unless you specify a target it'll switch off every USB device on the system – including your keyboard, probably.

There are two ways of specifying the target, but I’m using the -d method. We’re after device 1.2 so the target is -d 1.2

Try it and make sure you can turn your USB device off and on again. You’ll have to wait for it to come back online, of course.
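That is, using the bus.address we found above:

usbconfig -d 1.2 power_off
# wait a few seconds, then...
usbconfig -d 1.2 power_on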

There are ways of doing this on Linux by installing extra utilities such as hub-ctrl. You may also be able to do it by writing stuff to /sys/bus/usb/devices/usb#/power/level – see the manual that came with your favourite Linux distro.

The next thing we need to do is provide an option for ddrescue so that it actually times out if the memory stick crashes. The default is to wait forever. The --timeout=25 or -T 25 option (depending on your taste in options) sees to that, making it exit if it hasn't been able to read anything for 25 seconds. This isn't entirely what we're after, as a failed read would also indicate that the drive hadn't crashed. Unfortunately there's no such tweak for ddrescue, but failed reads tend to be quick, so you'd expect a good read within a reasonable time anyway.

So as an example of putting it all into action, here’s a script for recovering a memory stick called duracell (because it’s made by Duracell) on USB bus 1 address 2.

#!/bin/sh
# Keep running ddrescue until it completes; if it stalls (no good read for
# 25 seconds) power-cycle the stick (USB bus 1, address 2) and resume.
until ddrescue -T 25 -u /dev/da0 duracell.img duracell.map
do
	echo ddrescue returned $?
	usbconfig -d 1.2 power_off
	sleep 5
	usbconfig -d 1.2 power_on
	sleep 15
	echo Restarting
done

A few notes on the above. Firstly, ddrescue’s return code isn’t defined. However, it appears to do what one might expect so the above loop will drop out if it ever completes. I’ve set the timeout for time since last good read to 25 seconds, which seems about right. Turning off the power for 5 seconds and then waiting for 15 seconds for the system to recognise it may be a bit long – tune as required. I’m also using the -u option to tell ddrescue to only go forward through the drive as it’s easier to read the status when it’s always incrementing. Going backwards and forwards makes sense with mechanical drives, but not flash memory.

Aficionados of ddrescue might want to consider disabling scraping and/or trimming (probably trimming) but I’ve seen it recover data with both enabled. Data recovery is an art, so tweak away as you see fit – I wanted to keep this example simple.
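If you do want to experiment with that, recent versions of GNU ddrescue have options for both – check your version's manual, as older releases called scraping "splitting":

ddrescue -T 25 -u --no-trim   /dev/da0 duracell.img duracell.map
ddrescue -T 25 -u --no-scrape /dev/da0 duracell.img duracell.map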

Now this system isn't perfect. I'm repurposing ddrescue, which does a fine job on mechanical drives, to recover data from a very different animal. I may well write a special version for USB Flash drives, but this method does actually work quite well. Let me know how you get on.

Cookie problems

It's been ten years since the original EU ePrivacy Directive (Regulation of the European Parliament and of the Council concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC) came into effect in the UK. It's implemented as part of the equally wordy Privacy and Electronic Communications Regulations 2012 (PECR). One thing it did was require companies with websites to allow users to opt in to cookies. I've written about this before, but since then the number of tracking cookies has become insane, and most people would choose to opt out of that much surveillance.

The problem is that with so many cookies, some websites have effectively circumvented the law by making it impractical to opt out of all of them. At first glance they offer an easy way to turn everything off and "save settings", but what's not so clear is that they hide an individual option for every tracking cookie company they have a deal with. And there can be hundreds, each of which needs to be individually switched off using a mouse.

These extra cookies – and they're the ones you don't want – are usually hidden behind a "vendors" or "partners" tab. On one site I checked, this was only found by scrolling all the way down and clicking on a link.

This kind of thing is not in the spirit of the act, and web sites that do this do not “care about your privacy” in any way, shape or form. And if you think these opt-in/out forms look similar, it’s because they are. Consent Management Platforms like Didomi, OneTrust and Quantcast are widely used to either set, or obfuscate what you’re agreeing to.

An update to the ePrivacy directive is now being talked about that says it must be "easy" to reject these tracking cookies, which isn't the case now.

Meanwhile some governments are cracking down. In January, Google and Facebook both got slapped with huge fines in France from the Commission Nationale de l'Informatique et des Libertés, which reckoned that because it took more clicks to reject cookies than to accept them, Google and Facebook were not playing fair.

“Several clicks are required to refuse all cookies, against a single one to accept them. [The committee] considered that this process affects the freedom of consent: since, on the internet, the user expects to be able to quickly consult a website, the fact that they cannot refuse the cookies as easily as they can accept them influences their choice in favor of consent. This constitutes an infringement of Article 82 of the French Data Protection Act.”

I'm inclined to agree. And on top of fines of €150 million and €60 million respectively, they're being hit with €100K for each extra day they allow the situation to remain.

Unfortunately we’re not likely to see this in the UK. The EU can’t actually agree on the final form of the ePrivacy regulations. The UK, of course, is no longer in the EU and may be able to pass its own laws while the EU argues.

Last year the Information Commissioner’s office did start on this, taking proposals to the G7 in September 2021. Elizabeth Denham told the meeting that a popup with a button saying “I Agree” wasn’t exactly informed consent.

However, this year the government is going in the other direction. It’s just published plans to do away with the “cookie popup” and allow websites to slurp your data. Instead sites will be required to give users clear information on how to opt out. This doesn’t fill me with confidence.

Also in the proposals is to scrap the need for a Data Protection Impact Assessment (DPIA) before slurping data, replacing it with a “risk-based privacy management programme to mitigate the potential risk of protected characteristics not being identified”.

I don’t like the idea of any of this. However, there’s a better solution – simply use a web browser that rejects these tracking cookies in the first place.

Reply-To: gmail spam and Spamassassin

Over the last few months I've noticed a huge increase in spam with a "Reply-To:" field set to a gmail address. What the miscreants are doing is hijacking a legitimate mail server (usually a Microsoft one) and pumping out spam advertising a service of some kind. These missives only work if the mark is able to reply, and as even a Microsoft server will be locked down sooner or later, a reply to the sending address would never reach them.

The reason for sending this way is, of course, that spam from a legitimate mail server isn't going to be blacklisted or blocked. SPF and other flags will be good. So these spams are likely to land in inboxes, and a few marks will reply based on the law of numbers.

To get the reply they’re using the email “Reply-To:” field, which will direct the reply to an alternative address – one which Google is happy to supply them for nothing.

The obvious way of detecting this would be to examine the Reply-To: field, and if it’s gmail whereas the original sender isn’t, flag it as highly suspect.

I was about to write a Spamassassin rule to do just this, when I discovered there is one already – and it's always been there. The original idea came from Henrik Krohns in 2009, but its time has now definitely arrived. However, in a default install it's not enabled – and for a good reason (see later). The rule you want is FREEMAIL_FORGED_REPLYTO, and it's found in 20_freemail.cf

Enabling FREEMAIL_FORGED_REPLYTO in Spamassassin

If you check 20_freemail.cf you'll see the rules require Mail::SpamAssassin::Plugin::FreeMail. The FreeMail.pm plugin is part of the standard install, but it's very likely disabled. To enable this (or any other plugin) edit the init.pre file in /usr/local/etc/mail/spamassassin/ and add the following to the end of the file:

# Freemail checks
#
loadplugin Mail::SpamAssassin::Plugin::FreeMail FreeMail.pm

You’ll then need to add a list of what you consider to be freemail accounts in your local.cf (/usr/local/etc/mail/spamassassin/local.cf). As an example:

freemail_domains aol.* gmail.* gmail.*.* outlook.com hotmail.* hotmail.*.*

Note the use of '*' as a wildcard. '?' matches a single character, but neither matches a '.'. It's not a regex! There's also a local.cf setting "freemail_whitelist", and other things documented in FreeMail.pm.

Then restart spamd (FreeBSD: service spamd restart) and you’re away. Except…

The problem with this Rule

If you look at 20_freemail.cf you’ll see the weighting is very low (currently 0.1). If this is such a good rule, why so little? The fact is that there’s a lot of spam appearing in this form, and it’s the best heuristic for detecting it, but it’s also going to lead to false positives in some cases.

Consider those silly “contact forms” beloved by PHP Web Developers. They send an email from a web server but with a “faked” reply address to the person filling in the form. This becomes indistinguishable from the heuristic used to spot the spammers.

If you know this is going to happen you can, of course add an exception. You can even have the web site use a local submission port and send it to a local mailbox without filtering. But in a commercial hosting environment this gets a bit complicated – you don’t know what Web Developers are doing. (How could you? They often don’t).

If you have control over your users, it’s probably safe to up the weighting. I’d say 3.0 is a good starting point. But it may be safer to leave it at 0.1 and examine the results for what would have been false positives.
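If you do up the weighting, it goes in your local.cf alongside the freemail settings – for example, using my suggested starting point:

score FREEMAIL_FORGED_REPLYTO 3.0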

It’s the LAW (GDPR as an excuse)


In the 2000s it was "It's necessary for our QA procedure". Now it's GDPR. Basically, the technical-sounding response to shut people up when they complain. As a qualified ISO-9000 auditor I used to have a lot of fun calling their bluff in the first case.

With data protection it might seem more clear-cut than having an encyclopaedic knowledge of ISO9000:2000. After all, DPA 2018 (which implemented GDPR) isn't that dissimilar to its predecessors, and has a much tighter scope. However, it's more open to interpretation and we're waiting for some test cases.

However, what it doesn’t cover are situations like this:

Dear Mr Leonhardt, 
Hope you're well; It is law to speak to the account holder.
Kind regards, Salvin Tingh
Morrisons Online Customer Service Team

I won’t bore you with the full details of what led to this attempted put-down, but briefly I emailed Morrisons about a mistake they’d made on an order. On receiving no response I called (and they sorted it out efficiently, over the phone). A week later I got an email response, and I said it was too late but it was sorted out, thanks very much. A week later, another reply that suggested they hadn’t read the first one. I said “Sorted, thanks, and I’ll just use the ‘phone in future”.

Next week’s reply was along the lines that they couldn’t verify I was the customer. I replied that perhaps they should have tried (they know my email address and telephone number), but don’t worry it’s sorted. A week later the above arrived (name changed to protect the guilty).

Leaving aside the principles of good customer service – if you need to check someone’s identity before solving a problem then do so – one might wonder what law he might be talking about. You see, data protection laws are not as wide-ranging as people think.

Basically, the law relates to sensitive information about an identifiable individual. Stronger protections exist depending on the sensitivity of the information (e.g. race, religion, biometrics and the usual stuff). But if it's not sensitive information about an identifiable individual it's definitely out of scope.

In this case, Mr Tingh was dealing with a customer’s problem. He wasn’t being asked to divulge sensitive information to a possible third party. It’s possible (and desirable) that company procedures required that he make sure it really was the customer complaining, but that’s hardly “the law”. And had I been an imposter claiming I hadn’t received my sausage, the worst that would happen was someone else got a couple of quid refunded unexpectedly. Does Morrisons get that kind of thing often, one wonders?

And it also begs the question: if they were so concerned about whether a customer complaint about an order, emailed in with the full paperwork, really was from the household in question, they need only have picked up the phone or checked the email address. Neither of these is foolproof, but in the circumstances one might have thought this good enough. Did he want me to visit the shop and show the manager my passport?

But to reiterate, the Data Protection Act (colloquially referred to as GDPR) is there to protect information pertaining to an individual. A company would have a duty to ensure it was talking to the right person if giving out sensitive information, but when someone is reporting the non-delivery of a vegan sausage to the supplier there is no sensitive information involved. They only need to check your identity if it's really necessary.

Other protections in the DPA include transparent use of an individual's data, not storing more than is necessary or for longer than necessary, and ensuring it's accessible to the individual concerned, not leaked, and is accurate (corrected if needs be). The European GDPR added provisions for portability, forcing companies to make your data available to competing services at your request.

So when someone tries to fob you off with “data protection”, stop and think if the above actually apply. And if you’re trying to fob someone off, don’t try to bluff a data security expert.

STARTTLS is not a protocol

As regular readers will know, I’m not a fan of STARTTLS but today I realised that some people are confused as to what it even means. And there’s a perfectly good reason for this – some graphical email software is actually listing STARTTLS as a protocol for talking to mail servers and people are jumping to conclusions.

So what is STARTTLS all about if you go back to basics?

Originally, when only nice people had access to computers, network traffic was unencrypted. If you had physical access to the network you could pretty much read anything you wanted to, as everything connected to the same network saw the same data. This isn't true now, but encrypting your data is a good idea just in case it can be intercepted – and if it's going over the Internet that's definitely the case.

In the mid 1990s, the maker of the original mass-market web browser, Netscape, decided to do something about it, and they (or more specifically their chief scientist, Taher Elgamal) invented a protocol called Secure Sockets Layer (SSL) to protect HTTP (web) traffic. Actually they invented it several times, as the first couple of attempts weren't very secure at all.

SSL didn’t really fit in with the OSI model; it runs on top of the transport protocol (usually TCP) but under the presentation layer, which would logically handle encryption but doesn’t usually. To use it you need an SSL layer added to the stack to transparently do the deed on a particular port.

But, as a solution to the encryption problem, SSL took off and pretty much every major protocol has an SSL port along with its original cleartext one. So clear HTTP is on port 80, HTTPS is on port 443. Clear POP3 is on port 110, encrypted on 995. Clear IMAP is on port 143, encrypted on 993.

As is the way of genius ideas in cybersecurity, even the third version of SSL was found to be full of holes. SSL version 3.1, which was renamed TLS, continued plugging the leaks and by TLS 1.2 it’s considered pretty much secure now. TLS 1.3, which interoperates with TLS 1.2, simply deprecates certain cyphers and hashes on the suspicion they might be insecure; although anyone into cybersecurity should tell you that everything is secure only until it’s broken.

Unfortunately, because different levels of TLS use different cyphers and reject others, TLS levels are by no means interoperable. And neither is it the case that a newer version is necessarily more secure; bugs have been introduced and later fixed. This'd be fine if everything and everyone used the same version of TLS, but in the real world this isn't practical – old hardware, in particular, bakes in old versions of SSL or TLS, and if you decide to deprecate older cyphers and not work with them, you lose the ability to talk to your hardware.

But apart from this, things were going along pretty well; and then someone had the bright idea of operating encrypted and unencrypted connections on the same port by hacking it at the application layer instead. This was achieved by modifying the application protocol to include a STARTTLS command. If this is received, the application then negotiates a TLS connection. If the receiving host didn't understand what STARTTLS meant it'd send back an error, and things could continue unencrypted.

In other words, if you’re implementing an SMTP server with STARTTLS, this keyword is added to the protocol and the SMTP server does something about it when it sees it.
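For illustration, a typical SMTP exchange with STARTTLS looks something like this (C: is the client, S: the server; the host names are made up):

S: 220 mail.example.com ESMTP
C: EHLO client.example.net
S: 250-mail.example.com
S: 250 STARTTLS
C: STARTTLS
S: 220 Ready to start TLS
   (TLS is negotiated at this point, then the SMTP conversation starts
    again with a fresh EHLO over the encrypted channel)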

What could go wrong?

Well quite a lot of things, actually. Because TLS doesn’t fit in to the OSI model, it’s actually very difficult to deal with the situation where a TLS connection is requested and agreed to but the TLS layer fails to agree on a cypher with an older or newer version on the other end. There’s no mechanism for passing this to the application to say “okay, let’s revert to Plan A”, and the connection tends to hang.

There's also a problem with name-based virtual servers: they must all use the same host certificate, because the TLS connection must be established before the application layer headers are transferred.

But perhaps my biggest gripe is that enabling STARTTLS makes encryption optional. You’re not enforcing encryption when you need to, and even if you think you are, STARTTLS connection are obviously vulnerable to a man-in-the-middle attack. You have no idea how many times TLS has been turned on and off between the two endpoints.

You might be tempted to think that optional encryption is better than none at all, but in reality it means you don’t care – and if you don’t care, don’t bother. It just leads to a false sense of security. And it can lead to interoperability problems. My advice is to use “always TLS” ports for sensitive data and turn off the old port.

Christmas Come Early for Scammers – Thanks Microsoft

As a reminder that Microsoft never lets security considerations get in the way of a Good Idea, it’s emailed 50,000 gift cards to random addresses it has on file. To quote:

To help spread holiday cheer, Microsoft Store has surprised a total of 50,000 U.S. customers with virtual gift cards via email. 25,000 customers will receive a $100 Microsoft Gift Card while 25,000 others will receive a $10 Microsoft Gift Card ahead of this holiday season. These randomly selected recipients can redeem their gift card on Microsoft Store through December 31, 2021 and spend it within 90 days of redemption

Publications in the US are advising punters to check their spam folder in case they've got an e-voucher for free Microsoft goodies. Presumably these email addresses belong to lusers with a Microsoft account of some kind.

With the media coverage starting to appear in the US, anyone phishing for Microsoft account credentials now has the perfect social engineering exploit, available between now and the New Year. Nice one Microsoft.

The Huawei thing

A few months ago I was asked for comment on the idea that an embattled Theresa May was about to approve Huawei for the UK’s 5G roll-out, and this was a major security risk. Politics, I assumed. No one who knew anything about the situation would worry, but politicians making mischief could use it to make a fuss.

Now it’s happened again; this time with Boris Johnson as Prime Minister. And the same old myths and half-truths have appeared. So is Chinese company Huawei risky? Yes! And so is everything else.

Huawei was founded by a brilliant entrepreneurial engineer, Ren Zhengfei, in 1987, to make a better telephone exchange. It came from the back to become the market leader in 2012. It also made telephones, beating Apple by 2018. While the American tech companies of the 1980s grew old and fat, Huawei kept up the momentum. Now, in 2020, it makes the best 5G mobile telephone equipment. If you want to build a 5G network, you go to Huawei.

Have the American tech companies taken this dynamic interloper lying down? No. But rather than reigniting their innovative zeal, they’re using marketing and politics. Fear, Uncertainty and Doubt.

Some arguments:

“Huawei is a branch of the evil Chinese State and we should have nothing to do with it.”

Huawei says it isn’t, and there’s no evidence to the contrary. The Chinese State supports Chinese companies, but that’s hardly novel. And whether the Chinese State is evil is a subjective judgement. I’m not a fan of communist regimes, but this is beside the point if you’re making an argument about technology.

“Huawei is Chinese, and we don’t like the government or what it does”.

So we should boycott American companies because we don't like Trump? We do business with all sorts of regimes more odious than the CPC, so this is a non-argument. You could make a separate argument that we should cease trade with any country that isn't a liberal democracy, but this could be difficult as we're buying gas from Russia and oil from the Middle East.

“Huawei works for the Chinese secret service and will use the software in its equipment to spy on, or sabotage us.”

First off, Ren Zhengfei has made it very clear that it doesn't. However, there have been suspicions. In order to allay them, Huawei got together with the UK authorities and set up the HCSEC in Banbury. Huawei actually gives HCSEC the source code to its products, so GCHQ can see for itself and look for backdoors and vulnerabilities. And they've found nothing untoward to date. Well, they've found some embarrassingly bad code, but that's hardly uncommon.

Giving us access to source code is almost unprecedented. No other major tech companies would hand over their intellectual property to anyone; we certainly have no idea what’s inside Cisco routers or Apple iPhones. But we do know what’s inside Huawei kit.

“Because Huawei manufactures its stuff in China, the Chinese government could insert spying stuff in it.”

Seriously? Cisco, Apple, Dell, Lenovo and almost everyone else manufactures its kit in China. If the Chinese government could/would nobble anything, it's not just Huawei. This is a really silly argument.

Conclusion

So should we believe what the Americans say about Huawei? The NSA says a lot, but has offered no evidence whatsoever. The US doesn't use Huawei anyway, so has no experience of it. In the UK we do – extensively – and we have our spooks tearing the stuff apart looking for anything dodgy. If we believe our intelligence services, we should believe them when they say Huawei is clean.

Being cynical, one might consider the possibility, however remote, that America is scared its technology companies are being bested by one Chinese competitor and will say and do anything to protect their domestic producers; even though they don’t have any for 5G. Or if you really like deep dark conspiracies, perhaps the NSA has a backdoor into American Cisco kit and wants to keep its advantage?

The US President’s animosity to trade with China is hardly a secret. Parsimony suggests the rest is fluff.