Windows 10 Free Upgrade failure

Last Friday was the last chance to get a free upgrade/downgrade from Windows 7 to Windows 10. The Microsoft checking utility confidently announced my system was compatible, but I doubted that as I was running stuff in XP Mode, and some old Chicago (Windows 9x) software. Still, I thought I’d give Microsoft the benefit of the doubt and try – but not before backing up the entire hard disk.

Giving Microsoft the benefit of any doubt is always a bad plan, and in my case the installation died halfway through. The update was apparently downloaded, but I left it all weekend and it failed to install.

It’s hard to see why anyone who uses computers for serious purposes would consider “upgrading” to Windows 10 a good idea. I’m not sad I had to revert to the backup and get my Windows 7 machine back. Windows 8 and later completely failed to implement the backward compatibility that Microsoft used to do so well. Upgrading DOS or Windows used to mean you could keep your legacy applications and hardware, whereas switching to OS/2, Apple, UNIX or Linux meant you could not. Now upgrading Windows means ditching older software too – in my case, I suspect, my company’s accounting system. If you’re going to do anything as rash as that, you might as well break free from Microsoft completely and choose a whole new platform.

I was expecting to be writing something this morning slamming Microsoft for messing up my PC, but thanks to their complete incompetence, the upgrade didn’t work anyway.

ParentPay adds fuel to its fire

Following a disastrous software “upgrade” on 6th June, ParentPay, the controversial on-line payment system used by many schools, finally appears to have noticed it has a problem. In an email sent to all its 1.7 million victims (sorry, users) today, CEO Clint Wilson apologised that people were having difficulties with the new system and conceded that its support service was overwhelmed. He promised to fix the problems and get it right over the summer.

Perhaps to emphasise the fact that he really doesn’t know much about this technology stuff, the email was sent in a Microsoft-only format, with an invitation to “view it in your web browser” if you weren’t using Hotmail, or whatever else it was designed for. It really doesn’t bode well.

The ParentPay website was always awkward, requiring very specific web browsers in order to operate, and using insecure technologies rather than HTML. The latest update relied very heavily on JavaScript and assumed specific screen resolutions, forcing people to upgrade browsers and wait for updates in order to use it – and it looks ridiculous on a desktop-sized screen.

At the same time ParentPay implemented a system where parents were made to pre-pay into an account and then allocate funds later, rather than paying for items at the time they were selected/purchased. The company has since sought to defend this tiresome system as an initiative to help low-income families, although exactly how pre-payment does this isn’t clear. The fact that ParentPay is left holding the money for longer before the school gets it probably didn’t even cross their minds.

Parents, already leery about the whole ParentPay system and the way it has been imposed on them by schools in spite of widespread long-standing dissatisfaction, have taken to social media to slag off the crass software update and appalling customer service.

It’s a sad fact that schools and local authorities lack the necessary IT savvy to spot a turkey when it’s marching up and down in front of them, and instead opt for “safety” in numbers. I don’t actually blame the schools for this – it’s not their job. It’s the government and local authorities that are unable to provide good advice – but local authority and government IT projects are, of course, a byword for expensive shambles.

Am I being phished?

Today I received an intriguing email with a Microsoft Word attachment implying I had money coming to me if I filled in a form. Yeah, right. I was just about to hit delete, but I was a bit surprised the sender was addressing me as Prof. Leonhardt. It’s hardly the first time someone’s got this wrong – and to be on the safe side I can see why people might start high and work backwards through Dr. and so on, as people who care about such matters are only offended if you start too low.

But why would a botnet add the title?

On closer inspection I recognised it as a royalty payment enquiry from a publishing company I had actually done a book for about five years ago. I didn’t expect it to sell (it wasn’t that kind of book), so I hadn’t thought much about it.

But I still haven’t opened the attachment. The email headers suggest it came from the publisher, but they can be forged. And this could be a clever spear-phishing attempt – after all, if you bought the book, which was largely about email security, you’d have the name of the publisher and my name – and the email address used can be found using Google.
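If you want to run the same sanity check, the headers are the place to start. A minimal sketch below, assuming the suspect message has been saved to a file – “message.eml” is a made-up filename:

```sh
# Pull out the delivery path and any authentication results from a saved message.
# "message.eml" is a placeholder for wherever you saved the suspect email.
grep -i -E '^(Received|Return-Path|Received-SPF|Authentication-Results|DKIM-Signature):' message.eml
```

The Received chain should lead back to the publisher’s own mail servers; if it doesn’t, or the SPF/DKIM results are missing or failing, treat the attachment accordingly.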

I don’t believe I have ever been spear-phished before, so I’m feeling a bit more important than I did yesterday.

Time to fire up the sandbox!

FreeBSD, ZFS and Denial of Service

I’ve been using ZFS since FreeBSD 8, and it has its uses. It’s pretty wonderful and all that, but I was actually pretty happy with UFS, and switching to ZFS isn’t a no-brainer.

So what’s the up-side to ZFS? Well, you get more error checking and correction, and it’s great for managing huge filing systems. You can snapshot and roll back, and do lots of other wonderful stuff with datasets and drive arrays. And it’s more “auto” when it comes to allocating disk space. But call me old fashioned if you like; I don’t like “auto” if I can avoid it.
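For anyone who hasn’t tried it, the snapshot-and-rollback side really is as painless as it sounds. A minimal sketch below, assuming a pool called “tank” and a dataset called “tank/home” (both names are made up for illustration):

```sh
# Create a dataset, snapshot it, then roll back after making a mess.
zfs create tank/home                  # datasets grow and shrink as needed
zfs snapshot tank/home@before-tidy    # cheap, near-instant snapshot
# ... delete the wrong files ...
zfs rollback tank/home@before-tidy    # back to where we were

# Space usage per dataset, at a glance.
zfs list -o name,used,available,mountpoint
```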

Penguinistas might not “get” this next bit, but on a UNIX system you didn’t normally have One Big Disk. Instead you had several, and even if you only had one, you’d partition or slice it up so it looked like several. And then, of course, you’d mount disks or partitions on to the root filing system wherever you wanted them to appear.

For reliability, you could also create mirrors and striped RAIDs, put an FS on them and mount them wherever you wanted. And demount them, and mount them somewhere else, and so on.
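On FreeBSD the traditional way of doing that is through geom: label a mirror, put UFS on it, and mount it wherever you like. A rough sketch below, assuming two spare disks ada1 and ada2 (the device names and mount point are illustrative):

```sh
# Build a two-disk mirror with gmirror, put UFS on it, and mount it by hand.
gmirror load                               # load the geom_mirror module
gmirror label -v gm0 /dev/ada1 /dev/ada2   # create the mirror
newfs -U /dev/mirror/gm0                   # UFS with soft updates
mkdir -p /data
mount /dev/mirror/gm0 /data                # appears wherever you mount it
```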

ZFS does all this good stuff, but automatically, and often as One Big Disk. A good thing? Well… if you must. But there are a few points you might want to consider before diving in.

First off, I like to know where and on which disk my data actually resides. I’m really uneasy with ZFS deciding for me. If ZFS loses it, I want to know where to find it. I also like having an FS on each drive or partition, so I can pull the drive out and mount it wherever I want to get data off – or move it from machine to machine. It’s my data, I’ll do what I want to with it, dammit! You can do this virtually with ZFS datasets, but you can’t unplug a dataset and hold it in your hand. Datasets, of course, are fluid rather than fixed in size, so you don’t need to guess how much space to allocate.
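In fairness, whole ZFS pools can be moved between machines too – it just happens at the pool level rather than by holding a single drive in your hand. A quick sketch, assuming a pool named “tank” (a made-up name):

```sh
# Moving a pool between machines: export it cleanly, then import it on the other box.
zpool export tank     # on the old machine
zpool import          # on the new machine: lists any pools found on attached disks
zpool import tank     # import by name (add -f if the pool wasn't exported cleanly)
```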

Secondly, with UFS I get to decide what hardware is used for each kind of file. Parts of the FS that are rarely used can be put on slow, cheap, huge disks. The database goes on a VelociRaptor or better, and the swap partitions – well! Okay, you can use multiple zpools for different performance situations, but then you’re using it like UFS.

Thirdly, there’s a price for all this ZFS wonderfulness. Apart from the software overhead, the Copy-on-Write business needs a lot of RAM to maintain good performance. Fragmentation on the physical drive is guaranteed. If you’re running software (e.g. a database) that uses random-access files and lots of transactions, UFS with its in-place modification wins out. A DBMS will take care of its own consistency and storage optimisation, and it has the edge as it knows what the data represents at the application level.

But what of the Denial of Service problem in the headline? Okay, it’s been a bit of a ramble, but this is something you must consider.

There are always management issues with One Big Disk. Linux users seem oblivious to this, but this doesn’t mean putting everything on a big partition is a great plan – even if you’re using a single disk in practice.

With the old way of having multiple partitions, each with an FS, mounted on the directory tree, when an FS on a partition/drive filled up, it was full. You couldn’t create more files on it. You either had to delete unwanted stuff or mount a bigger drive in its place. With One Big Disk, when it’s full, it’s also full. The difference is that you can’t write any data anywhere on the entire FS. And this is where the DoS comes in.

Take, for example, /var/log. Any UNIX admin with a bit of sense will have this in its own partition. If some script kiddie then did something that caused a lot of log file activity, eventually you’d run out of space in /var/log. But the rest of the system would still be alive. With UFS the default installation process created partitions with sensible sizes. Using the One Big Disk principle, ZFS satisfies the requests of any disk-eating process until there isn’t a single byte left anywhere, and then rolls over saying the zpool is full. Or it would say it if there was a monitor connected to the server in a data centre miles away, and there was someone there to look at it.

With ZFS you can set a limit on the size on a dataset-by-dataset basis and prevent this sort of thing from happening. But it doesn’t happen by default, so set your quotas manually if you’re plonking the OS, and in particular /var, on it.
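Capping the obvious trouble spots only takes a minute. A minimal sketch below, assuming the default “zroot” layout that the FreeBSD installer creates – dataset names may differ on your system:

```sh
# Stop a runaway /var/log (or /tmp) from eating the whole pool.
zfs set quota=2G zroot/var/log
zfs set quota=1G zroot/tmp

# Optionally hold some space back so the pool itself never hits 100%.
# "zroot/spare" is a made-up dataset name for illustration.
zfs create -o reservation=1G zroot/spare

# Check what's set.
zfs get quota,reservation zroot/var/log zroot/tmp
```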

Okay, this might sound a bit anti-ZFS, and I’ve yet to have a disaster with a ZFS system that’s required me to move drives around, so I don’t really know how possible it is when the chips are down. And ZFS is a nice unified way of doing stuff, rather than fiddling around with geom and the FS separately. But after a couple of years with FreeBSD 10, where it became practical to boot from ZFS, shouldn’t I be feeling a bit more enthusiastic about it?

Having a ZFS pool attached as a data store rather than as a boot device is, of course, a different story. That’s when you see the benefits. But it does also eat resources, so I want the benefits to be worth it for the particular application. For the time being I’m putting the OS on UFS, usually with a data partition for databases to thrash, and userland putting its simple files on ZFS – best of both worlds.
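For what it’s worth, hanging a ZFS data pool off a UFS-booted system is straightforward. A rough sketch, assuming two spare disks and a pool called “data” (the names are illustrative):

```sh
# Make sure ZFS starts at boot on a UFS-root system.
sysrc zfs_enable="YES"
service zfs start

# OS stays on UFS; bulk data goes on a mirrored ZFS pool.
zpool create data mirror /dev/ada1 /dev/ada2
zfs create -o compression=lz4 data/files    # general file storage
zfs list -r data
```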

BBC plays the temperamental chef

Today the BBC hit back after being told to do its job. The white paper on its future told the public service broadcaster that it needed to produce public service output, rather than duplicating material ably produced by the commercial sector. The phrase used was “distinctive output”, and this was repeated ad nauseam in its reporting of this morning’s story that it would be dropping its popular web recipe archive.

The reason given was that this was not “distinctive output”, and according to Radio 4’s Today programme, it was to save £15M/year from its on-line budget. Really? Anyone who knows anything about web publishing can tell you that publishing recipes is cheap, especially when you already have them. A quick look around the BBC’s more exotic on-line offerings will soon show where the money really goes.

So what are they up to? Politics, of course. The liberal elite running the BBC isn’t happy about being reminded how it is supposed to be spending our money, and is acting up in a disgraceful manner.

In its own on-line reporting of the matter, the BBC is linking this to the new requirement to publish details of everyone having their celebrity lifestyle funded by more than £450K of our licence money. This is going to be awkward for the luvvies and the star-struck BBC executives fawning over them.

It’s about time the BBC started serving the people who pay for it. It’s hardly impartial when it comes to politics; it’s right in there playing politics itself – albeit the playground variety.

BT’s Infinitely useless Infinity call centre in India

In the middle of last week I was expecting an important call. It didn’t come. Then someone said they’d been trying to get me on the phone and couldn’t. It turns out I’ve got a line fault. And OpenReach’s response so far is an object lesson in how to get things wrong.

First off, the fault results in the caller hearing a ringing tone but nothing ringing at this end. This means the caller simply thinks you’re not answering. BT’s automated system quickly identified that the line wasn’t working, but I had to ask that callers got an “out-of-order” message. Is OpenReach reluctant to admit that its lines could be faulty?

The line provides VDSL and an analogue telephone, which goes into a PABX. Therefore its lack of dial tone wasn’t noticed; the PABX simply skips on until it finds a working line when you’re making an outside call. But on plugging a handset directly into the line, it’s as dead as a doornail. No voltage across A-B, no dial tone, no crackle. Nothing. Except, strangely, the VDSL is still working.

Now anyone with half a brain will realise that the pair is good, and the fault is going to be in the street box or exchange. My guess is that it’s in the green cabinet up the road where my line connects to the FTTC service.

So, several days pass and I notice it hasn’t been fixed. Then I get a call (on one of the remaining working lines) from BT; an obviously foreign accent. Apparently they have determined that there is a fault outside the exchange (and by implication in my cabling). It’s not with me. The first thing anyone would do is disconnect the PABX and the VDSL modem (and its filter) and test at the incoming socket. Just try explaining this to an overseas call centre reading from a script. To humour the hapless fool I eventually removed the NTE5 faceplate again (she didn’t even know what an NTE5 was!) and plugged a handset directly into the incoming socket. Only then would she agree to send an “engineer” to look at it.

I did explain exactly where the fault was likely to be (remember, the VDSL hasn’t been interrupted – it’s not difficult to work it out). Apparently an engineer is now booked for Monday afternoon. I pointed out that he’d need to call me to get access, should it be needed (it shouldn’t) but I’m not sure she took it in.

And then, adding insult to injury, she sent a text message to the landline!

I told them about the fault days ago, and exactly what the problem was. It was flagged as a fault on their own self-diagnostic. And OpenReach couldn’t even mark the line as out-of-order to callers until I moaned at them. BT makes a lot of money implementing overseas call centres. Yet even they can’t get them to work on a human level.

New mystery “Appear in Court” malware

In the early hours of the morning (BST) I intercepted a large number of emails of the “Appear in Court” variety, but unusually these were not Microsoft documents but JavaScript (stored in a .ZIP file). The filenames end in .doc.js, which means they obviously look odd.

I couldn’t resist running a few, to see what they did, and the answer is not much. They run cmd.exe, and I’m pretty sure they do an egg hunt to find some code in core to execute, then go looking for DOCUME~1.DOC in various likely locations. But in my sandbox, they don’t get anywhere.

These are being spammed from clean IP addresses and no AV currently detects them by signature, so they’re going to get through. But what do they need to run, and what do they do if they succeed? Unfortunately I can’t stick around this morning to check further.

UbuntuBSD – lovechild of Linux and FreeBSD

It’s no secret that Linux users with good taste have viewed the FreeBSD kernel with envious eyes for many years. A while back Debian distributions started having the FreeBSD kernel as an option instead of the Linux one. (Yes, you read that correctly). But now things seem to have been turned up a notch with UbuntuBSD.

It seems a group of penguinistas regard the Ubuntu world’s adoption of systemd as a step too far, and forked. And rather than keeping with Linux, they’ve opted to dump the whole kernel and bolt the Ubuntu front-end on to FreeBSD instead, getting kernel technology like ZFS and jails but “…keeping the familiarity of Ubuntu”.

Where could this be going? We already have PC-BSD for a “shrink wrapped” graphical desktop environment. Is anyone actually using it? I’m not. I’m sure we’ve all downloaded it out of curiosity, but if I want a Windows PC I’ll have a Windows PC. With BSD I’m more than happy with a command line, thank you very much.

UbuntuBSD could be different. Linux users actually use the graphical desktop, and most can’t cope with a command line. If they were to switch to FreeBSD instead, UbuntuBSD would make a lot of sense.

Although it’s only been around a month, in early beta form, its SourceForge page is showing a lot of downloads. If I wanted to run a graphical desktop on top of FreeBSD, I’d be more inclined to pick UbuntuBSD over PC-BSD, because I get the vibe that Ubuntu has its desktop applications more together.

The project has just launched its own web site too, at www.ubuntubsd.org.

So does this spell the end of PC-BSD, Ubuntu Linux, Windows 10 or none of the above? It’s surely a strong vote against systemd.

FreeBSD 10.3 hangs on upgrade – beware!

There seems to be a bit of a problem with upgrades to FreeBSD 10.3-RELEASE. Basically, shutdown -r is hanging, requiring you to manually reset the machine (turn it off and on again). This is annoying, unless the machine in question happens to be at a data centre on a different continent, in which case “annoying” really doesn’t cut it.

This was a known issue with 10.3-STABLE, but it appears to have made it into -RELEASE too.

I suggest holding off on freebsd-update for now. Basically, if you follow the official instructions you may need someone on hand to reboot the machine the old-fashioned way.
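If you do press ahead, the sequence below is the standard binary-upgrade path (nothing specific to 10.3) – the point is that the reboot in the middle is where machines have been hanging, so make sure you have console access, IPMI, or a human nearby:

```sh
# Standard freebsd-update upgrade flow; the reboot is the step that may hang.
freebsd-update -r 10.3-RELEASE upgrade   # fetch and merge the new release
freebsd-update install                   # install the new kernel
shutdown -r now                          # <-- this is where the hang has been seen
# after the reboot:
freebsd-update install                   # install the new userland
# rebuild/reinstall any third-party packages, then run
# "freebsd-update install" once more if prompted
```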

OKI laser duplex unit doesn’t work

OKI C5650 or C5750

This was a weird one. For some reason my OKI C5750 printer (similar to the C5650) started ignoring its duplex unit. Stuff was printing on two sheets, single sided. I checked the Windows drivers, and duplex (long edge) was enabled both there and on the printer control panel. But nothing was doing. Two single-sided sheets came out instead of one double-sided, and what’s more it seemed slower than usual.

I initially thought it was a Windows fault introduced by my recent troubles. Had I restored the printer driver to a weird state? But after half an hour of fishing around I finally found the problem on the web interface. It’s subtle.

You can select the weight and type of paper for the various trays. It turned out that the tray I was using was set to “thick” and “glossy”. I reset this to “normal weight” and “normal”, and everything started working again. I assume that the OKI internal software won’t send thick paper (i.e. light card) into the duplexer, but it doesn’t tell you this. The Windows restore must have set the manual feed tray to light card. This would also explain the slower roller feed.

I hope this helps if you’re also having trouble.