Google is innocent (ish)

So Google’s Street View cars have been driving around harvesting people’s email passwords, have they? Well, this is probably true. Let’s sue/fine/regulate them!

Actually, let’s not. They haven’t done anything wrong. What Google’s surveying vehicles did was record the wireless Ethernet radio activity as they went along, to get an idea of where the WIFI hotspots are. This is a really useful thing for someone to have done – there’s no other way to find out what’s really where than by doing a ground-level survey.

In order to determine what kind of service they’re receiving you need to record a bit of the traffic for analysis. If it’s a private service, this traffic will be encrypted so it really doesn’t matter a jot – they’d be mostly recording gibberish. If it’s an open, public service they’d get the clear text of whatever happened to be transmitted at the time, if the lusers weren’t using application-layer encryption. If some technological dunderhead decides to do a radio broadcast of his unencrypted passwords, Google (and anyone else in the vicinity) will end up receiving that too.

Look at it another way – if someone wrote their password on a big sign and stuck it in the front of their house, anyone walking down the road couldn’t help but capture it. Are the pedestrians doing something wrong, or is the owner of the house an idiot?

It’s no good the idiots bleating on about Google. That won’t give them brains. It might, however, give them some of Google’s money and this could be the real motive.

The Information Commissioner, Christopher Graham, has come up with some surprising statements about Google. But on review, they’re only surprising to someone who understands the technical issues here. Does this mean Graham is a technological klutz? It’s one theory – at times it seems like everyone the government appoints to deal with technology requires this as a qualification. However, I think it’s far more likely a case of bowing to media/political pressure on the subject and wishing to be seen to be doing something about it.

Then, last Friday, Google signed an undertaking with the Information Commissioner’s Office to train their staff that they mustn’t do naughty things (just in case they were ever tempted). In return for this the ICO promises to leave them alone. Read it for yourself – it’s only three pages long.

http://www.ico.gov.uk/~/media/documents/library/Data_Protection/Notices/google_inc_undertaking.ashx

What’s sad about the whole affair is that the ICO is, first and foremost, a political/media driven entity even if there are some level heads at work behind the scenes. But what a waste of time and money…

Oliver Drage makes mockery out of RIPA

Oliver Drage, suspected trader in child pornography, has just been sent down for refusing to disclose the password he’d used to encrypt his PC. This is an offence under RIPA (the Regulation of Investigatory Powers Act 2000). So if you’ve got something dodgy on your computer, you’ll get locked up whether or not the cops can decrypt it (or you’ve lost the password).

A spokesman for Lancashire police was pleased: “Drage was previously of good character so the immediate custodial sentence handed down by the judge in this case shows just how seriously the courts take this kind of offence.”

Really. Drage is going to gaol for sixteen weeks (read “two months”). How long would he have been locked up for if he’d given them the password so they could decrypt whatever it’s alleged he was hiding? Five years? Ten years? Lock up and throw away the key?

This is not what I call “taking it seriously”.

The penalties under RIPA for not disclosing passwords are far lower than the likely sentence, assuming someone’s been up to anything of interest to the authorities. They don’t take it seriously at all.

Comment spam from Volumedrive

Comment spammers aren’t the sharpest knives in the drawer. If they did their research properly they’d realise that spamming here was as stupid as trying to burgle the police station (while it’s open). You’ll notice there’s no comment spam around here, but that isn’t to say they don’t try.

Anyway, there’s been a lot of activity lately from a spambot running at an “interesting” hosting company called Volumedrive. They rent out rack space, so it’s not going to be easy for them to know what their customers are doing, but they don’t seem inclined to shut any of them down for “unacceptable” use. For all I know they’ve got a lot of legitimate customers, but people do seem to like running comment spammers through their servers.

If you need to get rid of them, there is an easy way to block them completely if you’re running WordPress, even if you don’t have full access to the server and its firewall. The trick is to override the clients Apache is prepared to talk to (default: the whole world) by putting a “Deny from” directive in the .htaccess file. WordPress normally creates a .htaccess file in its root directory; all you do is add:

Deny from bad.people.com

Here, “bad.people.com” is the server sending you the spam, but in reality they probably haven’t called themselves anything so convenient. The Apache documentation isn’t that explicit unless you read the whole lot, so it’s worth knowing you can actually list IP addresses (more than one per line) and even ranges of IP addresses (subnets).

For example:

Deny from 12.34.56.78
Deny from 12.34.56.89 22.33.44.55
Deny from 123.45.67.0/24

The last line blocks everything from 123.45.67.0 to 123.45.67.255. If you don’t know why, please read up on IP addresses and subnet masks (or ask below in a comment).
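If you want to sanity-check a CIDR block before you deny it, Python’s standard ipaddress module will do the arithmetic for you. A quick sketch (the spammer address here is the Volumedrive example from further down):

```python
import ipaddress

# "Deny from 123.45.67.0/24" covers 256 addresses: 123.45.67.0 - 123.45.67.255.
net = ipaddress.ip_network("123.45.67.0/24")
print(net.num_addresses)        # 256
print(net[0], "-", net[-1])     # 123.45.67.0 - 123.45.67.255

# You can also check whether a particular spammer falls inside a block:
spammer = ipaddress.ip_address("173.242.115.9")
print(spammer in ipaddress.ip_network("173.242.112.0/20"))  # True
```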

So when you get a load of spammers from similar IP addresses, look up who the block belongs to using “whois”. Once you know, you can block the whole lot. For example, if you’re being hit by the bot using Volumedrive on 173.208.67.154, run “whois 173.208.67.154”. This will return:

NetRange: 173.242.112.0 - 173.242.127.255
CIDR: 173.242.112.0/20
OriginAS: AS46664
NetName: VOLUMEDRIVE
NetHandle: NET-173-242-112-0-1
Parent: NET-173-0-0-0-0
NetType: Direct Allocation

<snip>

If you don’t have whois on your computer (i.e. you’re using Windoze) there’s a web version at http://www.whois.net/.

In the above, the CIDR is the most interesting – it specifies the block of IP addresses routed to one organisation. I’m not going into IP routing here and now; suffice to say that in this example it specifies the complete block of addresses belonging to Volumedrive that we don’t want – at least until they clean up their act.

To avoid Volumedrive’s spambots you need to add the following line to the end of your .htaccess file:

Deny from 173.242.112.0/20

If this doesn’t work for you then the web server you’re using may have been configured in a strange way – talk to your ISP if they’re the approachable type.
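For reference, “Deny from” works within the context of Apache’s “Order” directive, and some setups don’t leave you with a sensible default. A fuller sketch of the relevant section of .htaccess, assuming Apache 2.2-style access control (Apache 2.4 later replaced this with “Require”):

```
# Let everyone in except the listed addresses/blocks
Order Allow,Deny
Allow from all
Deny from 173.242.112.0/20
```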

I have contacted Volumedrive, but they declined to comment, or even reply; never mind curtail the activities of their users.

This isn’t a WordPress-only solution – .htaccess belongs to Apache and you can use it to block access to any web site.

Perhaps there’s some scope in sharing a list of these comment spambots in an easy-to-use form. If anyone’s interested, email me. This is a Turing test :-)

Why and how to hack a mobile phone

Anyone outraged that News of the Screws journalists have been “hacking” in to mobile ‘phones needs to get a grip on reality. They’re investigative tabloid journalists; what do you expect them to be doing?

To call it “hacking” is grossly overstating the case anyway – what they did required no technical knowledge other than that available in any playground in the country. All you need to do to retrieve people’s voice mail messages is dial their number, and when you get through to voice mail, enter the PIN. Most people leave the PIN as the system default.

You might argue that this is a gross breach of privacy and so forth. But it’s no more so than camping out on someone’s doorstep to see who goes in and out, following them, or tricking them into telling you something they wouldn’t if they knew you were a journalist.

New Labour was very keen to suppress the traditional liberties of the population in general and passed various dodgy laws to protect the lives of the guilty from prying journalists. In 2000, listening to other people’s voice mail was made a specific offence. “And quite right too!”. Wrong! It’s just another example of those in power making it difficult for us to check up on what they’re doing. We have (or had) a free press with a tradition of snooping on politicians, criminals and anyone else they wanted to using whatever means, as long as it was “In the public interest”.

Journalists are also out to sell papers, so the “public interest” defence is often strained to its limit, or broken. However, it should remain as a defence in a court of law and people should be able to argue their case there. It should be all about intent. But New Labour had other ideas.

People are uneasy about voice mail because it’s technological, so let’s look at another example.

Suppose a journalist was camped outside someone’s house, noting down who came in and out. Another invasion of privacy, but right or wrong?

Well that depends – if it’s some innocent person then the journalist will probably end up throwing the notes away, so no harm done. If someone uses information collected in this way in the pursuance of a crime (e.g. Blackmail), that’s another matter, but journalists don’t do that.

Now supposing the journalist is investigating a suspected terrorist, and checking up to see who they’re associating with – or even a politician associating with a known crook. Clearly this information is in the public interest.

It’s all about intent.

You could argue that investigations of this nature shouldn’t be carried out by private individuals but should be left to the security forces. That argument doesn’t bear scrutiny for more than a couple of seconds. The public needs the right to snoop as well as government agents – anything else is known as a ‘police state’.

As to the current difficulties – anyone who knows anything about the press will tell you that these and many other tricks are employed as a matter of course, although journalists won’t make a big noise about using them. It’s conceivable that an editor like Andy Coulson would neither know nor care exactly what his investigation teams were doing to come up with the information; you don’t ask. It’s also inconceivable that only the hacks on the News of the World had thought of it. Sources need protection.

It’s clearly a political stunt by old new Labour. Could they be upset that the press, including Mr Coulson’s old rag, turned against them? They used to be friends with the News of the World. At the time of the original scandal, it appears that the first politician to call Andy Coulson to commiserate with him about having to resign was none other than Gordon Brown. Apparently he went on to suggest that someone with his talent would soon find another job where he could make himself useful. (Source: Nick Clegg at today’s PMQs).

In defence of TalkTalk

The ICO has just had a go at TalkTalk for snooping on their customers. Hmm. I wouldn’t be a TalkTalk customer if they paid me so I’m not bothered on that score. But I’m also not worried because I can’t see they’ve actually done anything wrong in this instance.

What they’re accused of is harvesting the URLs of web sites visited by their punters. Reality check: networks log traffic anyway. It’s necessary for maintenance and optimisation. All managed networks do it, all the time. The system the ICO is making a fuss about simply collects the URLs and then sends a malware scanner to the site to check for dodgy stuff so it can blacklist the URL in future.

You can’t scan the whole web for malware; it’d take too long by a spectacular margin. Scanning the relatively small subset of URLs your customers are actually accessing is as good a way of directing your effort as any.

So why’s the ICO making the headlines? Just to show they’re on the ball, I suppose. And TalkTalk makes an easy target. This is probably the first time ever I’ve defended them on any issue.

Intel has just bought McAfee

Intel has just bought its neighbour in Santa Clara.

Well there’s a surprise. According to today’s Wall Street Journal it’s a done deal at $48/share (about £5bn). Paul Otellini (Intel’s CEO) has been saying that “security was becoming important” in addition to energy efficiency and connectivity. This lack of insight does not bode well.

I’ve been expecting something like this since Microsoft really got its act together with “Security Essentials”, its own PC virus scanner by another name. Unlike other PC virus scanners, Microsoft’s just sits in the background and gets on with the job without slugging the PC’s performance. Why would anyone stick with McAfee and Symantec products in these circumstances?

Whether PC virus scanners have much benefit in today’s security landscape is questionable, but at least the Microsoft one does no harm.

Intel has (apparently) paid about £5bn in cash for McAfee. I wonder if they’ve paid too much. It’ll generate revenue while lusers and luser IT managers are too scared to stop paying the subscription, but as anti-virus becomes built in to Windows this is going to dry up. I suspect McAfee was aware of this situation and was moving on to mobile device security – not by developing anything itself, but by buying out companies that are.

When McAfee bought Dr Solomons in 1998, it was basically to pinch their technology for detecting polymorphic viruses and close down their European rival, which they did – everyone lost their jobs and the office closed. (Declaration of interest: Dr Solomons was a client of mine). Whether McAfee has any technology worth plundering isn’t so obvious, so presumably Intel is buying them as a ready-made security division.

McAfee does, of course, have some good researchers in the background – we all know the score.

India’s $10 laptop joke

There was a time when “Made in Hong Kong” was a byword for a cheap and nasty knock-off of the real thing, that didn’t really work. This was in the early 1970’s, and was pretty much true. In the late 1970’s I was horrified to discover that I’d bought a piece of electronic equipment “Made in Hong Kong”, but as it turned out, it was of really good quality and still works flawlessly today.

Hong Kong has now been assimilated by mainland China, and it seems that everything is made there – and is often none the worse for that. India has taken over Hong Kong’s mantle, although in this time of political correctness you don’t hear comedians joking about it.

But why is this? India seems to be a country desperate to be taken seriously – it has a space programme for no other reason than this. But artefacts manufactured in India tend to be either rough and ready, or inferior and semi-functional knock-offs of something made better elsewhere.

While still musing on the above I was sent this:

Apparently this thing, which looks like an iPad and runs Linux, would soon be produced for as little as $10. This is incredible. (Not credible.) India’s Education Minister knows nothing about electronics or computing, and has announced this in spectacular style to the world. Apparently it was designed by the Indian Institute of Technology, and the Indian Institute of Science. Apparently they’re “elite” and “prestigious”. Their spokeswoman, Mamta Varma, said the device was feasible because of falling hardware costs. What they actually are, if this is anything to go by, is a laughing stock.

Of course, most people don’t know much about computing devices, but generally they have the good sense not to pretend they do. For the benefit of this majority: There is no way you can put a processor, colour touch-screen display and enough memory into a box for $10. It’d cost that for the battery and power supply.

Apparently this marvel has the facilities for video conferencing (i.e. a fast processor and a camera) and can run on solar power. Hmm. You’d need more than $10 worth of solar cells, for a start.
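To put a number on it, here’s a back-of-envelope bill of materials. The figures below are my own guesses at rock-bottom component prices in USD – the point is the order of magnitude, not the exact numbers:

```python
# Sanity check on the $10 claim. All figures are assumed/illustrative.
bom = {
    "colour touch screen": 15.0,
    "processor and memory": 10.0,
    "battery and power supply": 8.0,
    "PCB, case and assembly": 7.0,
}
total = sum(bom.values())
print(f"Parts alone come to about ${total:.0f}")
```

Even being generous to the designers, the parts cost blows through $10 several times over before you add a camera or solar cells.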

However, this won’t be “Made in India” – Sibal stated they were in discussions with a Taiwanese company about manufacture. For $10? I don’t think so!

If India doesn’t want to be treated as a joke it needs to start by muzzling its ministers.

Sage data files

Sage Line 50 ACCDATA contains a load of files, and nowhere have I found any useful documentation as to what they are. Here’s a summary of what I think they are. They’re all data files unless otherwise stated. Most of the rest are indexes to the corresponding data files.

Anyone with more information is positively encouraged to leave a comment! Presumably Sage know, but they don’t seem that keen on publishing the information.

1..n.COA – Chart of Accounts
ACCESS.DTA – Access rights for users
ACCOUNT.DTA – Control information (stuff across all accounts – VAT?)
ACCRUAL.DTA – Accruals
ACCRUAL.DTA – Currency
ACCSTAT.DTA – Account status
ASSETS.DTA – Fixed assets
ASTCAT.DTA – Fixed asset categories file
ASTINDEX.DTA – Fixed asset index file
BANK.DTA – Bank
BANKWWW.DTA – Bank WWW data
BILLS.DTA – Bills
BNKINDEX.DTA – Bank index file
CATEGORY.DTA – Category definitions
CONTACT.DTA – Contacts
CONTINDA.DTA – Contact records index file
CONTINDD.DTA – Contact date index file
COURWWW.DTA – Courier resources
CREDWWW.DTA – Credit resources
DEPARTM.DTA – Departments
FINRATES.DTA – Credit charge
HEADERS.DTA – Transaction headers file
INVINDEX.DTA – Invoice record index file
INVITEM.DTA – Invoice line items file
INVOICE.DTA – Invoice headers
MISCWWW.DTA – Miscellaneous resources
NOMINAL.DTA – Nominal
NOMINDEX.DTA – Nominal record index file
PREPAY.DTA – Prepayments
PUOINDEX.DTA – Purchase order index file
PUOITEM.DTA – Purchase order line items file
PUORDER.DTA – Purchase order headers
PURCHASE.DTA – Suppliers
PURINDEX.DTA – Supplier record index file
QUEUE.DTA – List of users currently using the data
RECUR.DTA – Recurring entries
REMIT.DTA – Remittance lines
REMITIDX.DTA – Remittance line index file
SALES.DTA – Customers
SALINDEX.DTA – Customer record index file
SAOINDEX.DTA – Sales order index file
SAOITEM.DTA – Sales order line items file
SAORDER.DTA – Sales order headers
SETUP.DTA – Setup information (manager passwords &c)
SPLITS.DTA – Transaction splits file
STKCAT.DTA – Stock categories
STKINDEX.DTA – Stock record index file
STKTRANS.DTA – Stock transactions file
STOCK.DTA – Stock
TODO.DTA – Task manager
TODOIDX.DTA – Task manager index file
USAGE.DTA – Transaction usage file

Low Energy Lightbulbs are not that bright

Have you replaced a 60W traditional tungsten bulb with a 60W-equivalent low energy compact fluorescent and thought it’s not as bright as it was? You’re not imagining it. I’ve been doing some tests of my own, and they’re not equivalent.

Comparing light sources is a bit of an art as well as a science, and lacking other equipment, I decided to use a simple photographic exposure meter to give me some idea of the real-world performance. I pointed the meter at a wall, floor and table top. I didn’t point it at the light itself – that’s not what users of light bulbs care about.

The results were fairly consistent: low energy light bulbs produce the same amount of light as a standard bulb of three to four times the rating. The older the fluorescent, the dimmer it was, falling to a third of its initial output after a thousand hours’ use. Given that the lamps are rated at two to eight thousand hours, it’s reasonable to take the lower output figure as typical, as this is how the lamp will spend the majority of its working life.
This gives a more realistic equivalence table as:

CFL Wattage    Quoted GLS equivalent    Realistic GLS equivalent
8W             40W                      25-30W
11W            60W                      35-45W
14W            75W                      40-55W
18W            100W                     55-70W

Table showing true equivalence of Compact Fluorescent (CFL) vs. conventional light bulbs (GLS)
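The arithmetic behind the realistic column can be sketched in a few lines. The efficacy figures below are assumed round numbers (tungsten GLS at roughly 12 lumens per watt; a CFL at around 40 lm/W once it has aged past its first thousand hours), so treat the output as illustrative:

```python
# Rough reconstruction of the "realistic" equivalence column.
GLS_LM_PER_W = 12       # assumed: tungsten filament efficacy
CFL_LM_PER_W_AGED = 40  # assumed: CFL efficacy after ~1000 hours

for cfl_w in (8, 11, 14, 18):
    lumens = cfl_w * CFL_LM_PER_W_AGED
    gls_equiv = lumens / GLS_LM_PER_W
    print(f"{cfl_w}W CFL ~ {gls_equiv:.0f}W GLS")
```

Run that and each result lands inside the realistic range in the table above, rather than anywhere near the quoted equivalent.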

So what’s going on here? Is there a conspiracy amongst light-bulb manufacturers to tell fibs about their performance? Well, yes. It turns out that the figures they use are worked out by the Institute of Lighting Engineers, in a lab. They measured the light output of a frosted lamp and compared that to a CFL. The problem is that the frosting on frosted lamps blocks out quite a bit of light, which is why people generally use clear glass bulbs. But if you’re trying to make your product look good it pays to compare your best case with the competition’s worst case. So they have.

But all good conspiracies involve the government somewhere, and in this case the manufacturers can justify their methods with support from the EU. The regulations allow the manufacturers to do some pretty wild things. If you want to look at the basis, it can be found starting here:

For example, after a compact fluorescent has been turned on it only has to reach an unimpressive 60% of its output after a staggering one minute! I’ve got some lamps that are good starters, others are terrible – and the EU permits them to be sold without warning or differentiation. One good thing the EU is doing, however, is insisting that CFL manufacturers state the light output in lumens in the future, and more prominently than the power consumption in Watts. This takes effect in 2010. Apparently. Hmm. Not on the packages I can see; some don’t even mention it in the small print (notably Philips).

However, fluorescent lamps do save energy, even if it’s only 65% instead of the claimed 80%. All other things being equal, they’re worth it. Unfortunately the other things are not equal, because you have the lifetime of the unit to consider.

A standard fluorescent tube (around since the 1930’s) is pretty efficient, especially with modern electronics driving it (ballast and starter). When the tube fails the electronics are retained, as they’re built in to the fitting. The Compact Fluorescent Lamps (CFL) that replace conventional bulbs have the electronics built in to the base so they can be used in existing fittings where a conventional bulb is expected. This means the electronics are discarded when the tube fails. The disposable electronics are made as cheaply as possible, so they may fail before the tube.

Proponents of CFLs say that it is still worth it, because the CFLs last so much longer than standard bulbs. I’m not convinced. A conventional bulb is made of glass, steel, copper and tungsten and should be easy enough to recycle – unlike complex electronics.

The story gets worse when you consider what goes in to the fluorescent tubes – mercury vapour, antimony, rare-earth elements and all sorts of nasty looking stuff in the various phosphor coatings. It’s true that the amount of mercury in a single tube is relatively small, and doesn’t create much of a risk in a domestic environment even if the tube cracks, but what about a large pile of broken tubes in a recycling centre?

So, CFLs are under-specified, polluting and wasteful to manufacture, but they do save energy. It’d be better to change light fittings to use proper fluorescent tubes, however. They work better than CFLs, with less waste. I don’t see it happening though. At the moment discrete tubes actually cost more because they fit relatively few fittings. People are very protective of their fittings. The snag is that with CFLs you need at least 50% more bulb sockets to get enough light out of them.

Standard bulbs produce less light than they could because a lot of the energy is turned into heat (more so than with a CFL). However, this heat could be useful – if your light bulbs aren’t heating the room you’d need something else. This is particularly true of passageways and so on, where there may be no other heating and a little warmth is needed to keep the damp away. The CFL camp rubbishes this idea, pointing out that in summer you don’t need heat. Actually, in summer, you don’t need much artificial light either, so they’d be off anyway. Take a look at document “BNXS05 The Heat Replacement Effect” found starting here for an interesting study into the matter – it’s from the government’s own researchers.
But still, CFLs save energy.

Personally, however, I look forward to the day when they’re all replaced by LED technology. These should last ten times longer (100,000 hours), be more efficient still, and contain no mercury, nor even any glass to break. The snag is that they run on a low voltage and the world is wired up for mains-voltage light fittings. I envisage whole light fittings, possibly with built-in transformers, pre-wired with fixed LEDs which will last for 50 years – after which you’d probably change the whole fitting anyway.

Ah yes, I hear the moaners starting, but I want to keep my existing light fitting. Okay, sit in the gloom under your compact fluorescents then.


How to improve Sage network performance

If you accept that Sage Line 50 is fundamentally flawed when working over a network you’re not left with many options other than waiting for Sage to fix it. All you can do is throw hardware at it. But what hardware actually works?

First the bad news – the difference in speed between a standard server and a turbo-nutter-bastard model isn’t actually that great. If you’re lucky, on a straight run you might get a four-times improvement from a user’s perspective. The reason for spending lots of money on a server has little to do with the speed a user sees; it’s much more to do with the number of concurrent users.

So, if you happen to have a really duff server and you throw lots of money at a new one you might see something that took a totally unacceptable 90 minutes now taking a totally unacceptable 20 minutes. If you spend a lot of money, and you’re lucky.

The fact is that on analysing the server side of this equation I’ve yet to see the server itself struggling with CPU time, or running out of memory, or anything else to suggest that it’s the problem. With the most problematic client they started with a Dual Core processor and 512Mb of RAM – a reasonable specification for a few years back. At no time did I see issues to do with the memory size, and the processor utilisation was only a few percent on one of the cores.

I’d go as far as to say that the only reason for upgrading the server is to allow multiple users to access it on terminal server sessions, bypassing the network access to the Sage files completely. However, whilst this gives the fastest possible access to the data on the disk, it doesn’t overcome the architectural problems involved with sharing a disk file, so multiple users are going to have problems regardless. They’ll still clash, but when they’re not clashing it will be faster.

But, assuming you want to run Line 50 multi-user the way it was intended, installing the software on the client PCs, you’re going to have to look away from the server itself to find a solution.

The next thing Sage will tell you is to upgrade to 1Gb Ethernet – it’s ten times faster than 100Mb, so you’ll get a 1000% performance boost. Yeah, right!

It’s true that the network file access is the bottleneck, but it’s not the raw speed that matters.

I’ll let you into a secret: not all network cards are the same.

They might communicate at a line speed of 100Mb, but this does not mean that the computer can process data at that speed, and it does not mean it will pass through the switch at that speed. This is even more true at 1Gb.

This week at Infosec I’ve been looking at some 10Gb network cards that really can do the job – communicate at full speed without dropping packets and pre-sort the data so a multi-CPU box could make sense of it. They cost $10,000 each. They’re probably worth it.

Have you any idea what kind of network card came built in to the motherboard of your cheap-and-cheerful Dell? I thought not! But I bet it wasn’t the high-end type though.

The next thing you’ve got to worry about is the cable. There’s no point looking at the wires themselves or what the LAN card says it’s doing. You’ll never know. Testing that a cable has the right wires on the right pins is not going to tell you what it’s going to do when you put data down it at high speeds. Unless the cable’s perfect it’s going to pick up interference to some extent; most likely from the wire running right next to it. But you’ll never know how much this is affecting performance. The wonder of modern networking means that errors on the line are corrected automatically without worrying the user about it. If 50% of your data gets corrupted and needs re-transmission, by the time you’ve waited for the error to be detected, the replacement requested, the intervening data to be put on hold and so on, your 100Mb line could easily be clogged with 90% junk – but the line speed will still be saying 100Mb with minimal utilisation.
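A back-of-envelope model makes the point. The stall penalty below is an assumed figure (packet-times lost per error to detection, re-request and head-of-line blocking), so the numbers are illustrative rather than measured:

```python
# Rough model of useful throughput ("goodput") on a lossy link.
def goodput(line_mbps, loss_rate, stall_penalty=4.0):
    """Useful Mb/s once retransmissions and stalls are paid for.

    loss_rate: fraction of packets needing retransmission.
    stall_penalty: assumed extra packet-times wasted per loss.
    """
    wasted = loss_rate * (1 + stall_penalty)  # retransmit + stall
    return line_mbps / (1 + wasted)

print(f"{goodput(100, 0.0):.0f} Mb/s useful at 0% loss")
print(f"{goodput(100, 0.5):.0f} Mb/s useful at 50% loss")
```

With half the packets corrupted, the “100Mb” line is delivering well under a third of its rated speed in useful data, which is why a cable that merely passes a continuity test can still wreck performance.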

Testing network cables properly requires some really expensive equipment, and the only way around it is to have the cabling installed by someone who really knows what they’re doing with high-frequency cable to reduce the likelihood of trouble. If you can, hire some proper test gear anyway. What you don’t want to do is let an electrician wire it up for you in a simplistic way. They all think they can, but believe me, they can’t.

Next down the line is the network switch and this could be the biggest problem you’ve got. Switches sold to small business are designed to be ignored, and people ignore them. “Plug and Play”.

You’d be forgiven for thinking that there wasn’t much to a switch, but in reality it’s got a critical job, which it may or may not do very well in all circumstances. When it receives a packet (a sequence of data, a message from one PC to another) on one of its ports it has to decide which port to send it out of to reach its intended destination. If it receives multiple packets on multiple ports it has to handle them all at once. Or one at a time. Or give up and ask most of the senders to try again later.

What your switch is doing is probably a mystery, as most small businesses use unmanaged “intelligent” switches. A managed switch, on the other hand, lets you connect to it using a web browser and actually see what’s going on. You can also configure it to give more priority to certain ports, protect the network from “packet storms” caused by accident or malicious software and generally debug poorly performing networks. This isn’t intended to be a tutorial on managed switches; just take it from me that in the right hands they can be used to help the situation a lot.

Unfortunately, managed switches cost a lot more than the standard variety. But they’re intended for the big boys to play with, and consequently they tend to switch more simultaneous packets and stand up to heavier loads.

Several weeks back I upgraded the site with the most problems from good quality standard switches to some nice expensive managed ones, and guess what? It’s made a big difference. My idea was partly to use the switch to snoop on the traffic and figure out what was going on, but as a bonus it appears to have improved performance, and most importantly, reliability considerably too.

If you’re going to try this, connect the server directly to the switch at 1Gb. It doesn’t appear to make a great deal of difference whether the client PCs are 100Mb or 1Gb, possibly due to the cheapo network interfaces they have, but if you have multiple clients connected to the switch at 100Mb they can all simultaneously access the server down the 1Gb pipe at full speed (to them).

This is a long way from a solution, and it’s hardly been conclusively tested, but the extra reliability and resilience of the network has, at least, allowed a Sage system to run without crashing and corrupting data all the time.

If you’re using reasonably okay workstations and a file server, my advice (at present) is to look at the switch first, before spending money on anything else.

Then there’s the nuclear option, which actually works. Don’t bother trying to run the reports in Sage itself. Instead dump the data to a proper database and use Crystal Reports (or the generator of your choice) to produce them. I know someone who was tearing their hair out because a Sage report took three hours to run; the same report took less than five minutes using Crystal Reports. The strategy is to dump the data overnight and knock yourself out running reports the following day. Okay, the data may be a day old, but if it’s taking most of the day to run the report on the last data, what have you really lost?
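The nightly dump itself doesn’t have to be clever: read each table from Sage and write it into something sane. A minimal sketch of the copy loop, using SQLite connections on both ends as a stand-in (in real life the source would be a connection to the Sage data via its ODBC driver, and the table name is illustrative):

```python
import sqlite3

def copy_table(src, dst, table):
    """Copy one table wholesale from the src connection to dst."""
    cur = src.execute(f"SELECT * FROM {table}")
    cols = [d[0] for d in cur.description]   # column names from the source
    rows = cur.fetchall()
    dst.execute(f"DROP TABLE IF EXISTS {table}")
    dst.execute(f"CREATE TABLE {table} ({', '.join(cols)})")
    placeholders = ", ".join("?" for _ in cols)
    dst.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    dst.commit()

# Run this for each table you report on, then point Crystal Reports
# (or anything else) at the dump instead of at Sage.
```

Schedule it overnight and the report generator never touches the shared Sage files at all.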

I’d be really interested to hear how other people get on.