If you’re trying to get Talkmobile working with the current version of Android, you’ve probably tried various settings found on the Web with no luck. The Talkmobile web site itself is also incorrect. Here are the real ones as of right now…
Go to “Access Point Names”, found somewhere under Settings. You’ll probably see Vodafone ones already there. Ignore them.
Create a new one. Call it “Talkmobile” or whatever you fancy. The only three settings you need to change are:
APN Name: talkmobile.co.uk
User name: wap
APN Type: * (if this doesn’t work try “Default”)
I haven’t given the MMS settings because I leave them blank and avoid rip-off charges!
I reviewed their first digital model, the DM-5R, and concluded it was a bad idea as it only implemented Tier I and therefore could only talk to identical transceivers. A real pity. There is supposed to be a Tier II version, the DM-5R Plus, but I don’t know anyone who’s seen one, and even the specifications say it isn’t compatible with Motorola. Anyway, it seems to be history or myth now the DM-9HX has arrived.
The DM-9HX does Tier II, and should talk to DMR sets from other manufacturers and work through repeaters. I haven’t personally tested this properly as yet, but indications are good. So with that in mind, on with an initial review:
I’ll assume you know previous Baofeng models well enough and concentrate on the differences. But just in case you don’t, the legendary Baofeng UV-5R series are cheap and cheerful handheld dual-band FM 2m/70cm transceivers with a speaker/mic socket and an SMA connector for whatever antenna you choose. There is a tri-band model, and they all seem to have a built-in torch. A number of variations in case style exist, including waterproof, as do versions with uprated RF. But they’re pretty much identical at the user level; and they’re the mainstay of many people’s community PMR set-ups as well as a no-brainer for Ham use.
Baofeng announced it was going to produce a digital version, which was physically interchangeable with previous models but with added DMR capability. This is a great proposition for people like me, with dozens of UV-5R batteries, antennas, chargers, cases and so-on. It protects your investment whilst allowing controlled migration to DMR. It’s been a long time coming, but now it’s here.
So first off – the interoperability is there. It uses exactly the same accessories as the UV-5R. It’s the same size and looks like a UV-5R – apart from the all-new display. Good job. The only physical difference is the programming cable, which is a direct USB feed into the microphone socket. And it doesn’t work with CHIRP. If you look closely, the label also says DM-9HX (check the picture near the top) and the keypad is overprinted for digital mode – alpha instead of menu shortcuts. The DM-5R/Plus had a black VFO button but they’ve gone back to orange with this model. I’ve had to put a rubber sleeve on it to find it amongst the others.
Inside the box you get a new “digital” antenna, the standard charger and the large battery. I’ve yet to test how much difference the fancy antenna makes; for ease of carrying, and like-for-like comparison, I swapped it for a standard battery and a stubby antenna. Moonraker supplied a standard Baofeng headset (yeah) with theirs; others don’t. The charger is the same, and it comes with the larger BL-5 12Wh battery, although the smaller type still fits.
It also comes with an English manual, which is reminiscent of the one supplied with the DM-5R. It doesn’t actually relate to the DM-9HX, which is different enough for this to matter. But we’re radio amateurs, right? We like fiddling with things to find out how they work.
Compared to the analogue models, the user interface is much improved in terms of sanity, while remaining similar in some respects. The buttons do more-or-less the same, with the side ones being programmable. Alpha text entry on the keypad is now Nokia-like, with the # key switching case and three alpha characters on each number key.
The display is a high-res monochrome dot-matrix instead of the segmented LCD found on the analogue models and the DM-5R. It’s very clear to read, and back-lit either permanently or on a timer. There are also no more voice prompts. This is either a good or a bad thing, depending on your taste.
Instead of settings being arranged in one long numbered list, in the new world they’re in a hierarchy of menus. Some settings are in odd places, but in general it’s a big improvement and easy to get around. The layout in the manual is simply incorrect, but even then it didn’t take too long to find most things. Some, however, were more difficult – read on and save yourself some trouble.
One handy feature of Baofeng analogue sets is the “dual watch”. This allows you to monitor two frequencies, and optionally lock on to the active one for transmissions. Although it appears in the manual, it wasn’t in the menu. The trick is to turn off “Power Saving” mode, after which it appears. There’s no sensible explanation of “Power Saving” mode, but it’s on by default.
Another oddity is tone squelch. CTCSS can be set on T, R and C. I’m not sure what ‘C’ is, but I suspect it simply sets both T and R at once. The same menu identifies itself as setting DCS modes, but doesn’t appear to allow any such thing. I’ve yet to find a way of doing it on the radio, but you can from the programming software. This turns out to be true of quite a few things, for no apparent good reason.
Remember the analogue channel saving game, where you could write current settings to a memory and it sometimes worked? It was always a bit hit-and-miss in my experience, so I left it to CHIRP. The DM-9HX has dropped the option from the radio entirely, although it’s still described in the manual.
I struggled to program our local repeater in to the set, and discovered the following:
It’s not possible to save current VFO settings to a memory.
It is possible to edit a memory when in MR mode, to an extent.
This is logical, but it’s a PITA if you’ve just got something working in VFO mode and you want to save it. If you do want to store to a channel, switch to MR mode, choose the channel and then edit. The editing menu options vary from VFO mode, just to make life interesting. For example, you can’t program an offset transmit frequency using the direction/offset menu settings (they’re disabled in MR, but not in VFO). However, you can enter separate Tx and Rx frequencies directly (calculating the Tx in your head, of course). It’s a bit illogical, but it works.
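The mental arithmetic is just the receive frequency plus the repeater split. As a sketch (the frequencies here are hypothetical examples, not from the review), a UK 70cm repeater with an output on 433.350 MHz and the standard -1.6 MHz split needs a transmit frequency of 431.750 MHz:

```shell
# Hypothetical repeater: RX 433.350 MHz, standard UK 70cm split of -1.6 MHz.
# Working in kHz keeps the shell arithmetic integer-only.
rx_khz=433350
split_khz=-1600
tx_khz=$((rx_khz + split_khz))
echo "Program RX ${rx_khz} kHz, TX ${tx_khz} kHz"   # TX works out at 431750 kHz
```

The same sum works for 2m repeaters with their -600 kHz split, of course.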
Another thing you’ll need to know is that a memory location is either designated as Digital or Analogue. This is set using the programming software, and cannot be changed on the radio. Neither can unused memory locations be brought in to use. As shipped, a mixture of sixteen analogue and digital channels were configured by default; you’re going to need the programming software if you want to make use of the memory, but saying that, making quick tweaks to an existing memory on the radio is much easier than it was before. As a suggestion, you might want to define a load of channels in software early on, so you have enough to choose from when programming using the radio.
One big worry with the first unit I tested (I have others waiting) is that the CTCSS appeared not to work on receive. However, leaving it set on Tx, it seemed to work for both. Further investigation needed on this one.
And so to the programming software:
I received the programming cable and a small anonymous CD containing many files. One of these was a ZIP with a name in English identifying it as related to the DM-9HX, so I installed it. It was the right one, but it’s hard to tell because it came up in Chinese, and does so every time. Keep going through the menus until you find “English”, select the option and all will be well – assuming you don’t speak Chinese.
The cable is a USB lead, with multi-ring plugs that go into the mic socket. I’d have liked to see a micro-USB socket on the radio for programming, but it works. Windoze recognises it without the need for any special COM port driver. Yeah! It recognises it as a mouse, but it works.
After this rocky start, I’m pleased to report that the programming software has worked perfectly so far. Some of the terminology for settings doesn’t match the radio, the manual, or any term I know of, but you can figure it out easily enough.
There’s no manual for the software, but it does have useful help information that appears in a lower window pane. There are a lot of additional options related to digital operation, such as phone books and zones. As a GUI, it works as you might expect.
For locking down the radio, you can select which menus are available to the user in a way that seems very flexible. You can also set the allowed frequencies, as you could with the analogue sets.
There is, however, one serious limitation to the software. I have found no way of importing/exporting memories to a spreadsheet. You have to enter them all, one at a time, using dialogue boxes. This is NOT cool.
Will CHIRP support this? Well, no one has been inclined to add support for the DM-5R since 2016, but then again, who would want to use one? Unfortunately, looking at the technicalities and the very different nature of DMR, it’d take some work to add, although it’s been proposed for 0.5.0.
The programming software for another Baofeng DMR set, the DMR-6X2, does import/export CSV, so it’s entirely possible I’ve just not figured it out yet – but I have looked closely.
That’s about it for this quick look. I’ve done some RF tests, the results of which are to follow, as is some proper photography. I’ve spoken to friends over analogue, through a repeater to mobile stations, and the sound quality was described as fine.
To conclude, after the false start on the DM-5R, the DM-9HX delivers – in terms of DMR functionality and compatibility, and as a major step forward in usability. Albeit with a few rough edges.
“On the afternoon of Tuesday, September 25, our engineering team discovered a security issue affecting almost 50 million accounts….”
“Our investigation is still in its early stages. But it’s clear that attackers exploited a vulnerability in Facebook’s code that impacted… a feature that lets people see what their own profile looks like to someone else.”
Mark Zuckerberg’s understated response to the incident was “I’m glad we found this and fixed the vulnerability. It definitely is an issue that this happened in the first place. I think this underscores the attacks that our community and our services faces.”
Wall Street’s response so far has been a 3% drop in Facebook’s stock.
I’m now waiting to see which of my sock puppets is affected.
According to a Sky News exclusive, the FCA is set to clobber Tesco Bank with a fine of £30m over the data breach in late 2016, when £2.5m was snaffled from thousands of its customers’ current accounts. Except it turned out it wasn’t; only fifty accounts were actually plundered, not for very much, and it was all sorted.
So how does this warrant such a huge fine? It’s hard to see, but the first two theories I have are that Sky News has got it wrong, or the FCA has gone seriously bonkers. If they’re touching miscreant institutions for £600K per customer inconvenienced, RBS and NatWest are toast.
So what’s it all about? Well, we don’t know what Tesco Bank actually did. My best guess is that someone cloned cards and cashed out at ATMs. That’s the easiest way, and there is no evidence this was widespread or sophisticated. And it’s interesting that only current accounts were hit; not credit – which is where the big money is in retail banking fraud.
But that’s just a guess. Why would the FCA be so exercised about some card fraud?
There is no shortage of other theories. There is the usual criticism of the parent company and its insecure non-banking systems. The usual unpatched-server card is played. Yes, everyone knows Tesco self-checkouts use Windows XP. There are criticisms of the lack of protective monitoring. Lack of AV. But this comes from commentators whose employers’ business is selling such things. There is talk of an inside job, which is possible, but they didn’t take them for much if it was.
So if the FCA is really that cross with Tesco Bank, why?
The question no one is asking is why Tesco Bank announced a major breach affecting so many people. Here I’m stacking guesses, but just for fun…
If I’m right about it being ATM bandits, could it be that staff investigating found something horrible and hairy, and jumped to the conclusion it was behind it? They did the right thing and told everyone about the vulnerability, even if the black hats hadn’t actually exploited it. The FCA would have been unimpressed regardless of the consequences, and whacked them accordingly.
If I’m right, it’s a bit rough on Tesco Bank, fined as a result of being robbed. But this is all one guess based on another. The truth may be still stranger.
The confected row about Facebook and CA’s mining of the former’s users’ data beggars belief. Facebook’s raison d’être is to profile its users and sell the information to anyone needing to target messages (adverts). The punters sign up to this because access is free. They might not understand what they’re agreeing to; a quick look at Facebook shows that many users are far from the brightest lights in the harbour. But hey, it’s free!
This is basically how Web 2.0 works. Get the punters to provide the content for you, collect information of value to sell to advertisers, and use the money to pay for the platform. Then trouser a load of tax-free profit by exploiting the international nature of the Internet.
So why the brouhaha now? Where has the moral outrage been for the last ten years? How come punters have only just started talking of a boycott (about twelve years after I did)? What’s changed?
The media has suddenly taken notice because some messages were sent on behalf of Donald Trump’s presidential campaign. What might broadly be called “left-wing” politicians have been exploiting unregulated social media to sway opinion for a very long time. Some became very uncomfortable when Trump gained traction by “speaking directly to his supporters” on Twitter. And now they’ve finally woken up to the way that the simple majority using a social media platform are able to propagate fake news and reinforce their simplistic beliefs.
But it wasn’t until the recent revelations that Donald Trump was using it that anyone batted an eyelid.
This rabbit hole goes very deep.
Does this spell the end of Facebook? I somehow doubt it. Social media addicts are just that. They don’t want to lose all their virtual “friends”. They want people to “like” them. Those that realise it’s a load of fluff try to cut back, or “detox” for a few weeks, but they always come back for more. And those who see social media for what it is and have nothing to do with it are constantly pressured by the addicts, like a drug user turned pusher.
“You don’t use Facebook? How are we supposed to contact you?”
No. This row doesn’t spell the end of Facebook. I know MySpace, bix, CompuServe, Geocities and the rest went out of fashion, but Facebook and Twitter are too well established, and even promoted on the BBC. And if the addicts were outraged enough to move to a different platform, where would they go? Part of their addiction comes from Facebook being “free”, and no one has come up with an alternative business model that works. They’ll stick with the devil they know.
Meanwhile investors have the jitters and the share price has fallen. This won’t last.
Many years ago I decried the new mania for virtual servers as a fix for Windows’ limitations in allowing services to be moved from one host to another. They’re also being used in the Linux world (particularly) in the form of “appliance architecture”, where services are not run on operating systems but whole systems are run within systems. I guess this allows non-technical people to visualise them better or something.
The situation is getting out-of-hand. People don’t understand they’re using a paradigm, and not a computer. This is leading to a lot of nuttery.
I’ve seen an instance when two virtual servers (running on one host) were running a service between them with a virtual load balancer in front in an attempt to improve performance. This was in a production environment. I only hope that whoever designed the system assumed it was going to run on real hardware, and then some muppet came along and simply copied a prototype to “the cloud”.
Reality check, people: you may have something that looks like lots of small computers, but underneath there’s just one of them – and you’re sharing it with other customers. By virtualizing lots of small servers you’re just burning cycles on the big one, and retarding its disk performance. It’s as bonkers as a perpetual motion machine; it’s never going to run as fast as it would have directly on the host.
I’ve even heard people comparing one virtual host with another as if they were real hardware. Mine’s got 64Gb of RAM! Well, mine is all SSD and a 16-core Xeon!
No you haven’t! You’ve got a software emulation of whatever your provider has sold you, running at whatever speed is left after the other customers have taken their chunk. You don’t have any RAM at all. Your OS thinks it has, but the whole OS could be swapped out. Its disk accesses go through the hypervisor cache, and to its backing store at whatever speed that goes at. It may not look like your memory is paged, but the hypervisor is certainly going to be paging it anyway. If you feel better thinking you’ve got all the RAM you need, please continue in your virtual wonderland.
Ah, but you’ve got Elastic Computing, and can inflate the size of your RAM or number of CPUs as demand increases. Let me tell you, an inflatable is never as good as the real thing. And your high demand may coincide with someone else’s. So you “reserve” the resources needed to cope with your peak demand. Hmm. Sounds a bit like having your own hardware to me.
I use one cloud server provider – vultr.com. It’s a bit of a love-hate relationship as, in case you didn’t realise, I don’t think much of cloud computing and anyway, I can afford to have my own. But if you need a small service on the end of an IP address on the other side of the world, they’re just what you need. I was amused to note that my “512Mb/20Gb” virtual server believed it came equipped with a 10Gb NIC talking to the Internet. Software emulation of 10Gbps anyone? And then there’s the contention ratio to worry about.
It was no surprise when people started asking me about Bitcoin. Money is of great interest to a lot of people; mix it with technology and they want to talk about it.
The main question asked is “Should I buy some?”, closely followed by “Is it safe?”, and “Do you think it’s a bubble?”
To answer the last one first: “Of course it’s a bubble you idiot”. I don’t think there’s anyone who believes it isn’t, but greed conquers common sense. And investing in a bubble can be a rational strategy as long as you make sure you take your capital out before it bursts. You could say the same about any form of investment to some extent. The value of shares will rise and fall in the long term, and everyone knows you should spread the risk. Seeing the return for a punt on Bitcoin at the moment persuades some to abandon this golden rule and put all their funds at risk.
As to whether the technology is safe: No way! It’s as safe as the security of the computers it is stored on, and the integrity of those storing it. Good luck with that. Technically, blockchain technology itself looks very secure but that isn’t where the risk lies.
And now we get back to the main question: Should I buy some? Well I wouldn’t, simply because it’s immoral.
Yes folks, if you can see beyond the chance of a fast buck, Bitcoin is sleaze. There are a few fundamental truths about cash it might be worth reiterating.
Back at the dawn of history, humans realised they’d be better off if they traded. If you had a lot of grain but no apples, find someone with apples and no grain who wanted to do a swap. Cash emerged so you could defer a transaction; or enter in to multi-party deals more easily by extracting the value from the item and placing it in to something more convenient (small pieces of soft shiny metal).
A coin’s value depends on whether you can buy what you need with it at a later date. If you exchange your grain for a coin you have to be convinced that the apple dealer will exchange the coin for your apples. Coins are a matter of confidence; confidence that they can be exchanged for something useful later.
If coins were easy to make, people would just make coins and the apple dealer would end up with a load of inedible shiny metal fragments; so there must be a finite supply for cash to work if the cash has representative rather than commodity value. Prisoners have often used cigarettes as they also have commodity value in that you can smoke them. Leaves, on the other hand, are a poor choice of currency as they grow on trees.
With no commodity value, you might ask why Bitcoin works at all? There are effectively a finite number of valid bitcoins, so you can’t make your own. And people have confidence that they can be exchanged for the goods they need at a later date. Perhaps not as much confidence as they do with regulated currencies, but their big advantage is that they are outside the regulatory system, and like cash or cigarettes, are ideal for black market transactions.
The bottom line is that criminals accept Bitcoin for the purchase of drugs, weapons and extortion payments. Like the legitimate world using BACS/CHAPS/CHIPS (electronic Bank payments), organised crime in the 21st Century benefits from a black money clearing system: Bitcoin. Cryptocurrency has a value because it can be used for buying drugs in large quantities across international borders far more conveniently than using the old-school suitcase of dollar bills. No questions asked. If you want to buy narcotics, you need to buy Bitcoin to pay the dealers with.
Like any currency with a floating exchange rate, the value of a Bitcoin should fluctuate based on the supply and demand for the illegal goods and services it represents. If the demand goes up and supply remains the same, the value of Bitcoin would rise as purchasers out-bid each other to secure enough Bitcoin to pay their dealer. I strongly suspect that knee-jerk (or just jerk) investors are seeing a rise in cost, and not looking too deeply at the tangible commodities backing it. Or perhaps city speculators are not being greedy and stupid; perhaps they really do need Bitcoin to pay for their coke habits.
So, as to whether I think Bitcoin is a good investment, the only answer is: “Yes – it can be just as profitable as other parts of the drugs trade, if you can get it right.”
Penguinisters are very keen on their docker, but for the rest of us it may be difficult to see what the fuss is all about – it’s only been around a few years and everyone’s talking about it. And someone asked again today. What are we missing?
Well, Docker is a solution to a Linux (and Windows) problem that FreeBSD/Solaris doesn’t have. Until recently, the Linux kernel only implemented the original user isolation model involving chroot. More recent kernels have had Control Groups added, which are intended to provide isolation for a group of processes (namespaces). This came out of Google, and they’ve extended the concept to include processor resource allocation as one of the knobs, which could be a good idea for FreeBSD. The scheduler is aware of the JID of the process it’s about to schedule, and I might take a look in the forthcoming winter evenings. But I digress.
So if isolation (containerisation in Linux terms) is in the Linux kernel, what is Docker bringing to the party? The only thing I can think of is standardisation and an easy user interface (at the expense of having Python installed). You might think of it in similar terms to ezjail – a complex system intended to do something that is otherwise very simple.
To make a jail in FreeBSD all you need do is copy the files for your system to a directory. This can even be a whole server’s system disk if you like, and jails can run inside jails. You then create a very simple config file, giving the jail a name, the path to your files and what IP addresses to pass through (if any), and you’re done. Just type “service jail start nameofjail”, and off it goes.
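For illustration, a minimal /etc/jail.conf entry along those lines might look like the following – the jail name, path and address are hypothetical examples rather than anything from a real setup:

```
# /etc/jail.conf -- name, path and address are illustrative only
nameofjail {
    path = "/jails/nameofjail";          # directory you copied the system files to
    host.hostname = "nameofjail.local";
    ip4.addr = "192.168.0.99";           # IP address to pass through (optional)
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

With jail_enable="YES" in /etc/rc.conf, “service jail start nameofjail” should then bring it up.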
Is there any advantage in running Docker? Well, in a way, there is. Docker has a repository of system images that you can just install and run, and this is what a lot of people want. They’re a bit like virtual appliances, but not mind-numbingly inefficient.
You can actually run docker on FreeBSD. A port was done a couple of years ago, but it relies on the 64-bit Linux emulation that started to appear in 10.x. The newer the version of FreeBSD the better.
Docker is in ports/sysutils/docker-freebsd. It makes use of jails instead of Linux cgroups, and requires ZFS rather than UFS for file-system isolation. I believe the Linux version uses a union file system, but I could be completely wrong on that.
The FreeBSD port works with the Docker hub repository, giving you access to thousands of pre-packaged system images to play with. And that’s about as far as I’ve ever tested it. If you want to run the really tricky stuff (like Windows) you probably want full hardware emulation and something like Xen. If you want to deploy or migrate FreeBSD or Solaris systems, just copy a new tarball in to the directory and go. It’s a non-problem, so why make it more complicated?
Given the increasing frequency Docker turns up in conversations, it’s probably worth taking seriously as Linux applications get packaged up in to images for easy access. Jails/Zones may be more efficient, and Docker images are limited to binary, but convenience tends to win in many environments.
A while back I reviewed the Dell FS12-NV7 – a 2U rack server being sold cheap by all and sundry. It’s a powerful box, even by modern standards, but one of its big drawbacks is the disk system it comes with. But it needn’t be.
There are two viable solutions, depending on what you want to do. You can make use of the SAS backplane, using SAS and/or SATA drives, or you can go for fewer SATA drives and free up one or more PCIe slots as Plan B. You probably have an FS12 because it looks good for building a drive array (or even FreeNAS) so I’ll deal with Plan A first.
Like most Dell servers, this comes with a Dell PERC SAS RAID controller – a PERC6/i to be precise. The ‘/i’ means it has internal connectors; the /E is the same but its sockets are external.
The PERC connects to a twelve-slot backplane forming a drive array at the front of the box. More on the backplane later; it’s the PERCs you need to worry about.
The PERC6 is actually an LSI Megaraid 1078 card, which is just the thing you need if you’re running an operating system like Windows that doesn’t support a volume manager, striping and other grown-up stuff. Or if your OS does have these features, but you just don’t trust it. If you are running such an OS you may as well stick to the PERC6, and good luck to you. If you’re using BSD (including FreeNAS), Solaris or a Linux distribution that handles disk arrays, read on. The PERC6 is a solution to a problem you probably don’t have, but in all other respects it’s a turkey. You really want a straightforward HBA (Host Bus Adapter) that allows your clever operating system to talk directly with the drives.
Any SAS card based on the 1078 (such as the PERC6) is likely to have problems with drives larger than 2Tb. I’m not completely sure why, but I suspect it only applies to SATA. Unfortunately I don’t have any very large SAS drives to test this theory. A 2Tb limit isn’t really such a problem when you’re talking about a high-performance array, as lots of small drives are a better option anyway. But it does matter if you’re building a very large datastore and don’t mind slower access and very significant resilvering times when you replace a drive. And for large datastores, very large SATA drives save you a whole lot of cash. The best capacity/cost ratio is for 5Tb SATA drives.
Some Dell PERCs can be re-flashed with LSI firmware and used as a normal HBA. Unfortunately the PERC6 isn’t one of them. I believe the PERC6/R can be, but those I’ve seen in a FS12 are just a bit too old. So the first thing you’ll need to do is dump them in the recycling or try and sell them on eBay.
There are actually two PERC6 cards in most machines, and they each support eight SAS channels through two SFF-8484 connectors on each card. Given there are twelve drive slots, one of the PERCs is only half used. Sometimes they have a cable going off to a battery located near the fans. This is used in a desperate attempt to keep the data in the card’s cache safe, in order to avoid write holes corrupting NTFS during a power failure, although the data in the on-drive caches won’t be so lucky. If you’re using a file system like that, make sure you have a UPS for the whole lot.
But we’re going to put the PERCs out of our misery and replace them with some nice new LSI HBAs that will do our operating system’s bidding and let it talk to the drives as it knows best. But which to pick? First we need to know what we’re connecting.
Moving to the front of the case there are twelve metal drive slots with a backplane behind. Dell makes machines with either backplanes or expanders. A backplane has a 1:1 SAS channel to drive connection; an expander takes one SAS channel and multiplexes it to (usually) four drives. You could always swap the backplane with an expander, but I like the 1:1 nature of a backplane. It’s faster, especially if you’re configured as an array. And besides, we don’t want to spend more money than we need to, otherwise we wouldn’t be hot-rodding a cheap 2U server in the first place – expanders are expensive. Bizarrely, HBAs are cheap in comparison. So we need twelve channels of SAS that will connect to the sockets on the backplane.
The HBA you will probably want to go with is an LSI, as these have great OS support. Other cards are available, but check that the drivers are also available. The obvious choice for SAS aficionados is the LSI 9211-8i, which has eight internal channels. This is based on an LSI 2000 series chip, the 2008, which is the de-facto standard. There’s also a four-channel -4i version, so you could get your twelve channels using one of each – but the price difference is small these days, so you might as well go for two -8i cards. If you want cheaper there are 1068-based equivalent cards, and these work just fine at about half the price. They probably won’t work with larger disks, and only operate at 3Gb with the original SAS standard. However, the 2000 series is only about £25 extra and gives you more options for the future. A good investment. Conversely, the latest 3000 series cards can do some extra stuff (particularly to do with active cables) but I can’t see any great advantage in paying megabucks for one unless you’re going really high-end – in which case the FS12 isn’t the box for you anyway. And you’d need some very fast drives and a faster backplane to see any speed advantage. And probably a new motherboard….
Whether the 6Gb SAS2 of the 9211-8i is any use on the backplane, which was designed for 3Gb, I don’t know. If it matters that much to you you probably need to spend a lot more money. A drive array with a direct 3Gb to each drive is going to shift fast enough for most purposes.
Once you have removed the PERCs and plugged in your modern-ish 9211 HBAs, your next problem is going to be the cable. Both the PERCs and the backplane have SFF-8484 multi-lane connectors, which you might not recognise. SAS is a point-to-point system, the same as SATA, and a multi-lane cable is simply four single cables in a bundle with one plug. (Newer versions of SAS have more). SFF-8484 multi-lane connectors are somewhat rare, (but unfortunately this doesn’t make them valuable if you were hoping to flog them on eBay). The world switched quickly to the SFF-8087 for multi-lane SAS. The signals are electrically the same, but the connector is not.
So there are two snags with this backplane. Firstly it’s designed to work with PERC controllers; secondly it has the old SFF-8484 connectors on the back, and any SAS cables you find are likely to have SFF-8087.
First things first – there is actually a jumper on the backplane to tell it whether it’s talking to a PERC or a standard LSI HBA. All you need to do is find it and change it. Fortunately there are very few jumpers to choose from (i.e. two), and you know the link is already in the wrong place. So try them one at a time until it works. The one you want may be labelled J15, but I wouldn’t like to say this was the same on every variant.
Second problem: the cable. You can get cables with an SFF-8087 on one end and an SFF-8484 on the other. These should work. But they’re usually rather expensive. If you want to make your own, it’s a PITA but at least you have the connectors already (assuming you didn’t bin the ones on the PERC cables).
I don’t know what committee designed SAS cable connectors, but ease of construction wasn’t foremost in their collective minds. You’re basically soldering twisted pair to a tiny PCB. This is mechanically rubbish, of course, as the slightest force on the cable will lift the track. Therefore it’s usual to cover the whole joint in solidified gunk (technical term) to protect it. Rewiring SAS connectors is definitely not easy.
I’ve tried various ways of soldering to them, none of which were satisfactory or rewarding. One method is to clamp all the bare wires you wish to solder in something like a bulldog clip so they’re lined up horizontally, then adjust the clamp so they’re gently pressed against the tracks on the board, making final adjustments with a strong magnifying glass and fine tweezers. You can then either solder them with a fine temperature-controlled iron, or pre-coat the pads with solder paste and flash across them with an SMD rework station. I’d love to know how they’re actually manufactured – using a precision jig, I assume.
The “easy” way is to avoid soldering the connectors at all; simply cut existing cables in half and join one to the other. I’ve used prototyping matrix board for this. Strip and twist the conductors, push them through a hole and solder. This keeps things compact but manageable. We’re dealing with twisted pair here, so maintain the twists as close as possible to the board – it actually works quite well.
However, I’ve now found a reasonably-priced source of the appropriate cable so I don’t do this any more. Contact me if you need some in the UK.
So all that remains is to plug your HBAs into the backplane, shove in some drives and you’re away. If you’ve got this far, it “just works”. The access lights for all the drives do their thing as they should. The only mystery is how you get the ident LED to come on; this may be controlled by the PERC when it detects a failure using the so-called sideband channel, or it may be operated by the electronics on the backplane. Its workings are, I’m afraid, still something of a mystery – there’s too much electronics on board for it to be a completely passive backplane.
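If you want to check the OS really has seen every drive, you can just run lsblk, or count the sd* entries the kernel has registered. A trivial sketch (the list_disks helper is my own hypothetical name; it just reads the standard Linux sysfs block directory):

```python
import os

def list_disks(sysfs: str = "/sys/block") -> list:
    """Return the sd* block devices, i.e. the SAS/SATA disks the kernel
    has enumerated, skipping loop devices, optical drives and the like."""
    try:
        return sorted(d for d in os.listdir(sysfs) if d.startswith("sd"))
    except FileNotFoundError:
        return []  # not on Linux, or sysfs not mounted

if __name__ == "__main__":
    disks = list_disks()
    print(f"{len(disks)} disks found: {', '.join(disks) or 'none'}")
```

With all twelve bays populated you’d expect twelve entries; fewer than that points at a cable, jumper or drive problem.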
Plan B: SATA
If you plan to use only SATA drives, especially if you don’t intend using more than six, it makes little sense to bother with SAS at all. The Gigabyte motherboard comes with half a dozen perfectly good 3Gb SATA channels, and if you need more you can always put another controller in a PCIe slot, or even USB. The advantages are lower cost and you get to free up two PCIe slots for more interesting things.
The down-side is that you can’t use the SAS backplane, but you can still use the mounting bays.
Removing the backplane looks tricky, but it really isn’t when you look a bit closer. Take out the fans first (held in place by rubber blocks), undo a couple of screws and it just lifts and slides out. You can then slot and lock in the drives and connect the SATA connectors directly to the back of the drives. You could even slide them out again without opening the case, as long as the cable was long enough and you manually detached it when the drive was withdrawn. And let’s face it – drives are likely to last for years, so even with half a dozen it’s not that great a hardship to open the case occasionally.
Next comes power. The PSU has a special connector for the backplane and two standard SATA power plugs. You could split these three ways using an adapter, but if you have a lot of drives you might want to re-wire the cables going to the backplane plug. It can definitely power twelve drives.
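If you’re wondering whether it really can, a rough 12V budget makes the point. The per-drive currents below are assumed typical figures for 3.5in drives (roughly 2A on the 12V rail at spin-up, well under 1A once spinning) – check your drive’s datasheet for the real numbers:

```python
# Rough 12V power budget for a fully loaded twelve-bay cage.
# The per-drive currents are assumed typical figures, not measurements.

SPINUP_A = 2.0   # assumed 12V current per 3.5in drive during spin-up
IDLE_A   = 0.6   # assumed 12V current per drive once spinning
RAIL_V   = 12.0

def rail_watts(drives: int, amps_each: float) -> float:
    """Total load on the 12V rail for a given number of drives."""
    return drives * amps_each * RAIL_V

print(f"12 drives at spin-up: {rail_watts(12, SPINUP_A):.0f} W on 12V")
print(f"12 drives spinning:   {rail_watts(12, IDLE_A):.0f} W on 12V")
```

The spin-up peak is the worst case, and staggered spin-up (which most controllers support) spreads even that out, so a server PSU built for this cage has plenty in hand.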
And that’s almost all there is to it. Unfortunately the main fans are connected to the backplane, which you’ve just removed. You can power them from an adapter on the drive power cables, but there are unused fan connectors on the motherboard. I’m doing a bit more research on cooling options, but this approach has promising possibilities for noise reduction.