FreeBSD 14 ZFS warning

Update 27-Nov-23
Additional information has appeared on the FreeBSD mailing list:
https://lists.freebsd.org/archives/freebsd-stable/2023-November/001726.html

The problem can be reproduced regardless of the block cloning settings, and on FreeBSD 13 as well as 14. It’s possible block cloning simply increased the likelihood of hitting it. There’s no word yet about FreeBSD 12, but that uses FreeBSD’s own older ZFS implementation, so there’s a chance it’s unaffected.

In the post by Ed Maste, a suggested partial workaround is to set the tunable vfs.zfs.dmu_offset_next_sync to zero – a suggestion that has been on the forums since Saturday. This is a result of this bug:

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=275308
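
If you want to try the workaround, it’s a one-liner (the sysctl name comes from the post above; putting it in /etc/sysctl.conf to survive reboots is standard FreeBSD practice):

# Takes effect immediately, lasts until reboot
sysctl vfs.zfs.dmu_offset_next_sync=0

# Add this line to /etc/sysctl.conf to make it persistent
vfs.zfs.dmu_offset_next_sync=0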

There’s a discussion of the issue going on here:

https://forums.freebsd.org/threads/freebsd-sysctl-vfs-zfs-dmu_offset_next_sync-and-openzfs-zfs-issue-15526-errata-notice-freebsd-bug-275308.91136/

I can’t say I’m convinced about any of this.


FreeBSD 14, which was released a couple of days ago, includes OpenZFS 2.2. There’s a lot of suspicion amongst Gentoo Linux users that this has a rather nasty bug in it related to block cloning.

Although this feature is disabled by default, people might be tempted to turn it on. Don’t. Apparently it can lead to lost data.

OpenZFS 2.2.0 was only promoted to stable on 13th October, and in hindsight adding it to a FreeBSD release so soon may seem precipitous. Although there’s a 2.2.1 release, which you should now be using instead, it simply disables block cloning by default rather than fixing the likely bug (and to reiterate, the default is off on FreeBSD 14).

Earlier releases of OpenZFS (2.1.x or earlier) are unaffected as they don’t support block cloning anyway.
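
If you want to check whether the feature has ever been enabled on one of your pools, a quick sketch (“zroot” is just an example pool name; the bclone* properties are from OpenZFS 2.2):

# Feature state: disabled, enabled or active
zpool get feature@block_cloning zroot

# How much data is currently shared via the BRT
zpool get bcloneused,bclonesaved zroot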

Personally I’ll be steering clear of 2.2 until this has been properly resolved. I haven’t seen conclusive proof as to what’s causing the corruption, although block cloning looks highly suspect. Neither have I seen or heard of it affecting the FreeBSD implementation, but it’s not worth the risk.

Having got the warning out of the way, you may be wondering what block cloning is. Firstly, it’s not dataset cloning. That’s been working fine for years, and for some applications it’s just what’s needed.

Block cloning applies to files, not datasets, and it’s pretty neat – or will be. Basically, when you copy a file ZFS doesn’t actually copy the data blocks – it just creates a new file in the directory structure but it points to the existing blocks. They’re shared between the source and destination files. Each block has a reference count in the on-disk Block Reference Table (BRT), and only when a block in the new file changes does a copy-on-write occur; the new block is linked to the new file and the reference count in the BRT is decremented. In familiar Unix fashion, when the reference count for a block gets to zero it joins the free pool.

This isn’t completely automatic – it must be allowed when the copy is made. For example, the cp utility requests cloning by default. This is done using the copy_file_range() system call with the appropriate runes; simply copying a file with open(), read(), write() and close() won’t be affected.
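
To make that concrete, here’s a minimal sketch of a file copy via copy_file_range(2), assuming FreeBSD 13 or later – it’s the same call cp uses, with error handling kept brief:

#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        if (argc != 3)
        {
                fprintf(stderr, "usage: %s source dest\n", argv[0]);
                return 1;
        }

        int in = open(argv[1], O_RDONLY);
        int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0)
        {
                perror("open");
                return 1;
        }

        // The kernel moves as much as it can per call - on ZFS with block
        // cloning active it can clone rather than copy. NULL offsets mean
        // the ordinary file offsets are used and advanced.
        ssize_t done;
        while ((done = copy_file_range(in, NULL, out, NULL, SSIZE_MAX, 0)) > 0)
                ;
        if (done < 0)
        {
                perror("copy_file_range");
                return 1;
        }
        return 0;
}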

As of BSDCan 2023, there was talk about making it work with zvols, but this was to come later. Cloned blocks in files can, however, exist between datasets as long as they’re using the same encryption (including keys).

One tricky problem here is how it works with the ZIL – for example, what’s stopping a block pointer from disappearing from the log? There was a lot that could go wrong, and it looks like it has.

Release notes for 2.2.1 may be found here.
https://github.com/openzfs/zfs/releases/tag/zfs-2.2.1

Quicken doesn’t like Amex QIF format

If you’re trying to import data into Quicken 98 (or compatible) having downloaded it from the American Express web site, you’ll get some odd and undesirable results. There are two incompatibilities.

Firstly, the date field (D) needs to have a two-digit year, whereas Amex adds the century.

Secondly, for reasons I haven’t quite figured out yet, the extra-information M field sometimes results in a blank entry (other than the date).

You can fix this by editing the QIF file using an editor of your choice, but this gets tedious, so here’s a simple program that will do it for you. It reads from stdin and writes to stdout, so you’ll probably use it in a script or redirect it from and to a file. I compile it to “amexqif”. If there’s any interest I’ll tweak it to make it more friendly.

It’s hardly a complex program; the trick was figuring out what was wrong in the first place.

#include <stdio.h>
#include <string.h>

char buf[200];

int main()
{
        int lenread;
        char *str;

        while ((str = fgets(buf, sizeof buf, stdin))) // (()) to placate idiot mode on CLANG
        {
                if ((lenread = strlen(str)) > 1)        // Something other than just the newline
                {
                        if (str[0] == 'M')
                                continue;       // Random stuff in M fields breaks Q98

                        if (str[0] == 'D' && lenread == 12)     // Years in dates must be two digit
                                memmove(str + 7, str + 9, strlen(str + 9) + 1); // drop the century (memmove, as strcpy is UB on overlap)
                }
                printf("%s", str);
        }
        return 0;
}
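
To build and use it, something like this (the file names are just examples):

cc -o amexqif amexqif.c
./amexqif < amex-download.qif > quicken-ready.qif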

Proper Case in a shell script

How do you force a string into proper case in a Unix shell script? (That is to say, capitalise the first letter and make the rest lower case.) Bash 4 has a special feature for doing it, but I’d avoid using it because, well, I want to be Unix/POSIX compatible.

It’s actually very easy once you’ve realised tr won’t do it all for you. The tr utility has no concept of where it is in the input stream, but combining tr with cut works a treat.

I came across this problem when I was writing a few lines to automatically create directory layouts for interpreted languages (in this case the Laminas framework). Languages of this type like capitalisation of class names, but other names have to be lower case.

Before I get started, a note about expressing character ranges in tr. Unfortunately different systems have done it in different ways. The following examples assume BSD Unix (and POSIX). Unix System V required ranges to be in square brackets – e.g. A-Z becomes “[A-Z]”. And the quotes are absolutely necessary to stop the shell globbing once you’ve introduced the square brackets!

Also, if you’re using a strange character set, consider using “[:lower:]” and “[:upper:]” instead of a-z and A-Z if your version of tr supports it (most do). It’s more compatible with foreign character sets, although I’d argue it’s not so easy on the eye!

Anyway, these examples use A-Z to specify ASCII characters 0x41 to 0x5A – adjust to suit your tr if your Unix is really old.

To convert a string ($1) into lower case, use this:

lower=$(echo $1 | tr A-Z a-z)

To convert it into upper case, use the reverse:

upper=$(echo $1 | tr a-z A-Z)

To capitalise the first letter and force the rest to lower case, split using cut and force the first character to be upper and the rest lower:

proper=$(echo $1 | cut -c 1 | tr a-z A-Z)$(echo $1 | cut -c 2- | tr A-Z a-z)

A safer version would be:

proper=$(echo $1 | cut -c 1 | tr "[:lower:]" "[:upper:]")$(echo $1 | cut -c 2- | tr "[:upper:]" "[:lower:]")

This is tested on FreeBSD in /bin/sh, but should work on all BSD and bash-based Linux systems using international character sets.
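
Putting it together, a small sketch of a reusable function (proper_case is a name I’ve made up; printf avoids echo’s portability quirks):

#!/bin/sh
# Capitalise the first letter, lower-case the rest
proper_case()
{
        first=$(printf '%s' "$1" | cut -c 1 | tr "[:lower:]" "[:upper:]")
        rest=$(printf '%s' "$1" | cut -c 2- | tr "[:upper:]" "[:lower:]")
        printf '%s%s\n' "${first}" "${rest}"
}

proper_case "laminas"   # prints: Laminas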

You could, if you wanted to, use sed to split up a multi-word string and change each word to proper case, but I’ll leave that as an exercise to the reader.

Cookie problems

It’s been ten years since the original EU ePrivacy Directive (Regulation of the European Parliament and of the Council concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC) came into effect in the UK. It’s implemented as part of the equally wordy Privacy and Electronic Communications Regulations 2012 (PECR). One thing it did was require companies with websites to allow users to opt in to cookies. I’ve written about this before, but since then the number of tracking cookies has become insane, and most people would choose to opt out of that much surveillance.

The problem is that with so many cookies, some websites have effectively circumvented the law by making it impractical to opt out of all of them. At first glance they offer an easy way to turn everything off and “save settings”, but what’s not so clear is that they hide an individual option for every tracking cookie company they have a deal with. And there can be hundreds, each of which needs to be individually switched off using a mouse.

These extra cookies – and they’re the ones you don’t want – are usually hidden behind a “vendors” or “partners” tab. On one site I looked at, this was only found by scrolling all the way down and clicking on a link.

This kind of thing is not in the spirit of the law, and web sites that do this do not “care about your privacy” in any way, shape or form. And if you think these opt-in/out forms look similar, it’s because they are. Consent Management Platforms like Didomi, OneTrust and Quantcast are widely used to either set, or obfuscate, what you’re agreeing to.

An update to the ePrivacy directive is now being talked about that says it must be “easy” to reject these tracking cookies, which isn’t the case now.

Meanwhile some governments are cracking down. In January, Google and Facebook both got slapped with huge fines in France from the Commission Nationale de l’Informatique et des Libertés, which reckoned that because it took more clicks to reject cookies than accept them, Google and Facebook were not playing fair.

“Several clicks are required to refuse all cookies, against a single one to accept them. [The committee] considered that this process affects the freedom of consent: since, on the internet, the user expects to be able to quickly consult a website, the fact that they cannot refuse the cookies as easily as they can accept them influences their choice in favor of consent. This constitutes an infringement of Article 82 of the French Data Protection Act.”

I’m inclined to agree. And on top of fines of €150 million and €60 million respectively, they’re being hit with €100K for each extra day they allow the situation to remain.

Unfortunately we’re not likely to see this in the UK. The EU can’t actually agree on the final form of the ePrivacy regulations. The UK, of course, is no longer in the EU and may be able to pass its own laws while the EU argues.

Last year the Information Commissioner’s Office did start on this, taking proposals to the G7 in September 2021. Elizabeth Denham told the meeting that a popup with a button saying “I Agree” wasn’t exactly informed consent.

However, this year the government is going in the other direction. It’s just published plans to do away with the “cookie popup” and allow websites to slurp your data. Instead sites will be required to give users clear information on how to opt out. This doesn’t fill me with confidence.

Also in the proposals is to scrap the need for a Data Protection Impact Assessment (DPIA) before slurping data, replacing it with a “risk-based privacy management programme to mitigate the potential risk of protected characteristics not being identified”.

I don’t like the idea of any of this. However, there’s a better solution – simply use a web browser that rejects these tracking cookies in the first place.

Reply-To: gmail spam and Spamassassin

Over the last few months I’ve noticed a huge increase in spam with a “Reply-To:” field set to a gmail address. What the miscreants are doing is hijacking a legitimate mail server (usually a Microsoft one) and pumping out spam advertising a service of some kind. These missives only work if the mark is able to reply, and as even a Microsoft server will be locked down sooner or later, a reply to the sending address would never get through.

The reason for sending this way is, of course, that spam from a legitimate mail server isn’t going to be blacklisted or blocked. SPF and other flags will be good. So these spams are likely to land in inboxes, and a few marks will reply based on the law of numbers.

To get the reply they’re using the email “Reply-To:” field, which will direct the reply to an alternative address – one which Google is happy to supply them for nothing.

The obvious way of detecting this would be to examine the Reply-To: field, and if it’s gmail whereas the original sender isn’t, flag it as highly suspect.

I was about to write a Spamassassin rule to do just this when I discovered there is one already – and it’s always been there. The original idea came from Henrik Krohns in 2009, but its time has now definitely arrived. However, in a default install it’s not enabled – and for a good reason (see later). The rule you want is FREEMAIL_FORGED_REPLYTO, and it’s found in 20_freemail.cf.

Enabling FREEMAIL_FORGED_REPLYTO in Spamassassin

If you check 20_freemail.cf you’ll see the rules require Mail::SpamAssassin::Plugin::FreeMail. The FreeMail.pm plugin is part of the standard install, but it’s very likely disabled. To enable this (or any other plugin) edit the init.pre file in /usr/local/etc/mail/spamassassin/. Just add the following to the end of the file:

# Freemail checks
#
loadplugin Mail::SpamAssassin::Plugin::FreeMail FreeMail.pm

You’ll then need to add a list of what you consider to be freemail accounts in your local.cf (/usr/local/etc/mail/spamassassin/local.cf). As an example:

freemail_domains aol.* gmail.* gmail.*.* outlook.com hotmail.* hotmail.*.*

Note the use of ‘*’ as a wildcard. ‘?’ matches a single character, but neither matches a ‘.’. It’s not a regex! There’s also a local.cf setting “freemail_whitelist”, and other things documented in FreeMail.pm.

Then restart spamd (FreeBSD: service spamd restart) and you’re away. Except…

The problem with this Rule

If you look at 20_freemail.cf you’ll see the weighting is very low (currently 0.1). If this is such a good rule, why so little? The fact is that there’s a lot of spam appearing in this form, and it’s the best heuristic for detecting it, but it’s also going to lead to false positives in some cases.

Consider those silly “contact forms” beloved by PHP web developers. They send an email from a web server, but with a “faked” reply address set to the person filling in the form. To the heuristic, this is indistinguishable from what the spammers are doing.

If you know this is going to happen you can, of course add an exception. You can even have the web site use a local submission port and send it to a local mailbox without filtering. But in a commercial hosting environment this gets a bit complicated – you don’t know what Web Developers are doing. (How could you? They often don’t).

If you have control over your users, it’s probably safe to up the weighting. I’d say 3.0 is a good starting point. But it may be safer to leave it at 0.1 and examine the results for what would have been false positives.
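
If you do decide to raise it, it’s one line in local.cf (3.0 being the starting point suggested above; pick your own value):

# /usr/local/etc/mail/spamassassin/local.cf
score FREEMAIL_FORGED_REPLYTO 3.0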

Nothing new with Intel SDSi

Intel’s latest wheeze for its CPUs is Software Defined Silicon (SDSi). The deal is that you buy the CPU at one price and then pay extra for a license to enable more stuff.

If you want the geeky stuff about how it’s supposed to work in Linux, see here. https://github.com/intel/intel-sdsi

Basically, the CPU has an interface that you can access if you have an Authentication Key Certificate (AKC) and have purchased a Capability Activation Payload (CAP) code. This will then enable extra stuff that was previously disabled. Quite what the extra stuff is remains to be seen – it could be extra instructions, enabling extra cores on a multi-core chip, or enabling more of the cache. In other words, you buy extra hardware that’s disabled, and pay extra to use it. What’s even more chilling is that you could be continuously paying licenses for the hardware you’ve bought or it’ll stop working.

It’s not actually defining the silicon in software like an FPGA, as you’d expect from the euphemistic name. Software Defined Uncrippling would be more honest, but a harder sell.

But this is nothing new. I remember IBM doing this with disk drives in the 1970s. If you upgraded your drive to double the capacity, an IBM tech turned up and removed a jumper, enabling the remaining cylinders. Their justification was that double the capacity meant double the support risk – and this stuff was leased.

Fast forward 20 years to Intel CPUs. Before the Intel 80486, you could provide whatever input clock you wanted to your 80386, choosing how fast it went. Intel would guarantee the chip to run at a certain speed, but that was the only limiting factor. Exceed this speed at your own risk.

The thing was that the fast and slow CPUs were theoretically identical. It’s often the case with electronic components. However, manufacturing tolerances mean that not all components end up being the same, so they’re batch tested when they come off the line. Those that pass the toughest test get stamped with a higher speed and go in the fast bucket, where they’re sold for more. Those that work just fine at a lower speed go into the slower bucket and sell for less. Fair enough. Except…

It’s also the nature of chip manufacture that the process improves over time, so more of the output meets the higher test – eventually every chip is a winner. You don’t get any of the early-run slow chips, but you’re contracted to sell them anyway. The answer is to throw some of the fast chips into the slow bucket and sell them cheap, whilst selling others at premium price to maintain your margins.

In the early 1990s I wrote several articles in PCW about how to take advantage of this, after real-world testing of many CPUs. It later became known as overclocking. I also took the matter up with Intel at the time, and they explained that their pricing had nothing to do with manufacturing costs, and everything to do with supply and demand. Fair enough – they were honest about it. This is why AMD gives you more bang-per-buck – they choose to make things slightly better and cheaper because that maximises their profits too.

With the introduction of the 80486, the CPU clock speed was set in the package so the chip would only run at the speed you paid for. SDSi is similar, except you can adjust the setting by paying more at a later date. It also makes technical sense – producing large quantities of just one chip has huge economies of scale. The yield improves, and you just keep the fab working. In order to have a product range you simply knobble some chips to make them less desirable. And using software to knobble them is the ultimate, as you can decide at the very last minute how much you want to sell the chip for, long after it’s packaged and has left the factory.

All good? Well not by me. This only works if you’re in a near monopoly position in the first place. Microsoft scalps its customers with licenses and residual income, and Intel wants in on that game. It’s nothing about being best, it’s about holding your customers to ransom for buying into your tech in the first place. This hasn’t hurt Microsoft’s bottom line, and I doubt it’ll hurt Intel’s either.

STARTTLS is not a protocol

As regular readers will know, I’m not a fan of STARTTLS but today I realised that some people are confused as to what it even means. And there’s a perfectly good reason for this – some graphical email software is actually listing STARTTLS as a protocol for talking to mail servers and people are jumping to conclusions.

So what is STARTTLS all about if you go back to basics?

Originally, when only nice people had access to computers, network traffic was unencrypted. If you had physical access to the network you could pretty much read anything you wanted to, as everything connected to the same network saw the same data. This isn’t true now, but encrypting your data is a good idea just in case it can be intercepted – and if it’s going over the Internet that’s definitely the case.

In the mid 1990s, the makers of the original mass-market web browser, Netscape, decided to do something about it, and they (or more specifically their chief scientist, Taher Elgamal) invented a protocol called Secure Sockets Layer (SSL) to protect HTTP (web) traffic. Actually they invented it several times, as the first couple of attempts weren’t very secure at all.

SSL didn’t really fit in with the OSI model; it runs on top of the transport protocol (usually TCP) but under the presentation layer, which would logically handle encryption but doesn’t usually. To use it you need an SSL layer added to the stack to transparently do the deed on a particular port.

But, as a solution to the encryption problem, SSL took off and pretty much every major protocol has an SSL port along with its original cleartext one. So clear HTTP is on port 80, HTTPS is on port 443. Clear POP3 is on port 110, encrypted on 995. Clear IMAP is on port 143, encrypted on 993.

As is the way of genius ideas in cybersecurity, even the third version of SSL was found to be full of holes. SSL version 3.1, which was renamed TLS, continued plugging the leaks, and by TLS 1.2 it’s now considered pretty much secure. TLS 1.3, which interoperates with TLS 1.2, simply deprecates certain cyphers and hashes on the suspicion they might be insecure; although anyone into cybersecurity should tell you that everything is secure only until it’s broken.

Unfortunately, because different levels of TLS use different cyphers and reject others, TLS versions are by no means interoperable. And neither is it the case that a newer version is more secure; bugs have been introduced and later fixed. This’d be fine if everything and everyone used the same version of TLS, but in the real world this isn’t practical – old hardware, in particular, bakes in old versions of SSL or TLS, and if you decide to deprecate older cyphers and not work with them, you lose the ability to talk to your hardware.

But apart from this, things were going along pretty well; and then someone had the bright idea of operating encrypted and unencrypted connections on the same port by hacking it at the application layer instead. This was achieved by modifying the application protocol to include a STARTTLS command. If this is received, the application then negotiates a TLS connection. If the receiving host didn’t understand what STARTTLS meant, it’d send back an error, and things could continue unencrypted.

In other words, if you’re implementing an SMTP server with STARTTLS, this keyword is added to the protocol and the SMTP server does something about it when it sees it.
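
To make that concrete, here’s an illustrative SMTP exchange (the host names are invented):

S: 220 mail.example.com ESMTP
C: EHLO client.example.com
S: 250-mail.example.com Hello
S: 250 STARTTLS
C: STARTTLS
S: 220 Ready to start TLS
   [TLS handshake; the SMTP conversation then starts again inside the encrypted channel]

You can poke at a server’s STARTTLS support yourself with something like “openssl s_client -connect mail.example.com:25 -starttls smtp”.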

What could go wrong?

Well quite a lot of things, actually. Because TLS doesn’t fit in to the OSI model, it’s actually very difficult to deal with the situation where a TLS connection is requested and agreed to but the TLS layer fails to agree on a cypher with an older or newer version on the other end. There’s no mechanism for passing this to the application to say “okay, let’s revert to Plan A”, and the connection tends to hang.

There’s also the problem that name-based virtual servers must all use the same host certificate, because the TLS connection must be established before the application-layer headers are transferred.

But perhaps my biggest gripe is that enabling STARTTLS makes encryption optional. You’re not enforcing encryption when you need to, and even if you think you are, STARTTLS connections are obviously vulnerable to a man-in-the-middle attack. You have no idea how many times TLS has been turned on and off between the two endpoints.

You might be tempted to think that optional encryption is better than none at all, but in reality it means you don’t care – and if you don’t care, don’t bother. It just leads to a false sense of security. And it can lead to interoperability problems. My advice is to use “always TLS” ports for sensitive data and turn off the old port.

Chip crisis? What chip crisis?

We’ve all seen the mass media going on about a chip shortage – or “crisis” as everything seems to be called these days. Silicon chips are unobtainable, apparently. And industry leaders are blaming their inability to meet demand for products on the “chip shortage”. But does this mean we should believe them?

Industry leaders are brilliant at blaming their mistakes on outside factors. Chips, and IT in general, are an obvious scapegoat.

It’s important to differentiate between a “chip shortage” and demand outstripping supply for particular ICs. Cryptocurrency mining is soaking up GPUs like there’s no tomorrow, so you could say there’s a GPU supply crisis. The boyz will have to make do with plain old HD murder simulators for a while.

The automotive sector always had an interesting supply chain. They beat the price down to the last penny and order “just enough” semiconductors ahead of time to meet their anticipated demand – if they guess wrong then it’s on them, and in a pandemic they’re going to lose their nerve and order less.

And then there are the usual “flood/fire/zombie invasion” stories on silicon foundries that accompany every supply crisis. I’m not having it.

The facts (remember “facts” from the old days?) tell a different story when you look at the units shipped. Okay, this lumps in NVidia GPUs with 741 op-amps but it still paints a picture.

The fact is that the supply of semiconductors continues to go up year on year. The latest predictions for 2021 are suggesting there’s been a 27% increase over 2020, and that was a significant increase over 2019. Business is booming.

So, if there’s a semiconductor supply crisis, please tell me which semiconductors are actually out of stock? Automotive manufacturers who failed to pre-order enough to meet demand of their particular custom chips are going to have to wait. And they might find that having beaten the price down, they’re not top of the list when it comes to rushing through a special order fast unless they pay a bit more.

Christmas Come Early for Scammers – Thanks Microsoft

As a reminder that Microsoft never lets security considerations get in the way of a Good Idea, it’s emailed 50,000 gift cards to random addresses it has on file. To quote:

To help spread holiday cheer, Microsoft Store has surprised a total of 50,000 U.S. customers with virtual gift cards via email. 25,000 customers will receive a $100 Microsoft Gift Card while 25,000 others will receive a $10 Microsoft Gift Card ahead of this holiday season. These randomly selected recipients can redeem their gift card on Microsoft Store through December 31, 2021 and spend it within 90 days of redemption

Publications in the US are advising punters to check their spam folder in case they’ve got an e-voucher for free Microsoft goodies. Presumably these email addresses belong to lusers with a Microsoft account of some kind.

With the media coverage starting to appear in the US, anyone phishing for Microsoft account credentials now has the perfect social engineering exploit, available between now and the New Year. Nice one Microsoft.

Minecraft server in a FreeBSD Jail

You may have no interest in the game Minecraft, but that won’t stop people asking you to set up a server. Having read about how to do this on various forums and Minecraft fan sites (e.g. this one) I came to the conclusion that no one knew how to do it on current FreeBSD. So here is how you do it, jailed or otherwise.

First off, there isn’t a pre-compiled package. The best way to install it is from the ports, where it exists as /usr/ports/games/minecraft-server.

Be warned – this one’s a monster! Run “make config-recursive” first, or it’ll go on stopping for options all the way through. Then run “make install”. It’s going to take quite some time.
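
In other words, the whole dance looks like this:

cd /usr/ports/games/minecraft-server
make config-recursive   # answer all the option screens up front
make install            # the long part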

The first configuration option screen asks if you want to make it as a service or stand-alone. I picked “service”, which sets up the start-up scripts for you but doesn’t actually tell you it’s done it. It does, however, stop it trying to run in graphics mode on your data centre server so I’m not complaining too much.

The good news is that this all works perfectly in a jail, so while it’s compiling (it could be hours) you can set up the required routing, assuming you’re using an internal network between jails – in this case 192.168.2.0/24. Using pf this will look something like:

externalip="123.123.123.123"
minecraft="192.168.2.3"
extinterface="fx0"
scrub in all
nat pass on $extinterface from 192.168.2.0/24 to any -> $externalip
rdr pass on $extinterface proto tcp from any to $externalip port 25565 -> $minecraft
rdr pass on $extinterface proto udp from any to $externalip port {19132,19133,25565} -> $minecraft

And that’s it. You’re basically forwarding one TCP port and three UDP ports. If you’re not using a jail, you obviously don’t need to forward anything. For instructions on setting up jails properly, see here, and for networking jails see elsewhere on this blog.

One thing that’s very important – this is written in Java, so as part of the build you’ll end up with OpenJDK. This requires some special file systems to be mounted – and if you’re using a jail, these will have to be in the host’s fstab, not the jail’s!

# Needed for OpenJDK
fdesc /dev/fd fdescfs rw 0 0
proc /proc procfs rw 0 0

If you’re using a jail, make sure the jail definition includes the following, or Java still won’t see them:

mount.devfs;
mount.procfs;
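
For reference, a minimal jail.conf sketch with those lines in place – the jail name, path and hostname are examples; the address matches the pf example above:

minecraft {
        path = "/jails/minecraft";              # example path
        host.hostname = "minecraft.example";    # example name
        ip4.addr = "192.168.2.3";               # as per the pf rules above
        mount.devfs;
        mount.procfs;
        exec.start = "/bin/sh /etc/rc";
        exec.stop = "/bin/sh /etc/rc.shutdown";
}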

Once you’ve finished building, you might be tempted to follow some of the erroneous instructions in forums and try to run “minecraft-server”. It won’t exist!

To create the basic configuration files run “service minecraft onestart”. This will create the configuration files for you in /usr/local/etc/minecraft-server. It will also create a file called eula.txt, which you need to edit, changing “eula=false” to “eula=true”.
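
For example (FreeBSD’s sed wants the empty suffix after -i):

service minecraft onestart
sed -i '' 's/eula=false/eula=true/' /usr/local/etc/minecraft-server/eula.txt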

You can make the minecraft service run on startup with the usual minecraft_enable="YES" in /etc/rc.conf.
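
Or do it with sysrc and start it straight away:

sysrc minecraft_enable="YES"
service minecraft start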

And that’s really it. There are plenty of fan guides on tweaking the server settings to your requirements, and they should apply to any installation.

This assumes you’re handy with FreeBSD and understand jails and networking; if you’re not so handy then please leave a comment or contact me. Everyone has to start somewhere, and it’s hard to know what level to pitch instructions like this. Blame me for assuming too much!