How to improve Sage network performance

If you accept that Sage Line 50 is fundamentally flawed when working over a network you’re not left with many options other than waiting for Sage to fix it. All you can do is throw hardware at it. But what hardware actually works?

First the bad news – the difference in speed between a standard server and a turbo-nutter-bastard model isn’t actually that great. If you’re lucky, on a straight run you might get a four-times improvement from a user’s perspective. The reason for spending lots of money on a server has little to do with the speed a user sees; it’s much more to do with the number of concurrent users.

So, if you happen to have a really duff server and you throw lots of money at a new one you might see something that took a totally unacceptable 90 minutes now taking a totally unacceptable 20 minutes. If you spend a lot of money, and you’re lucky.

The fact is that, on analysing the server side of this equation, I’ve yet to see the server itself struggling with CPU time, running out of memory or anything else to suggest that it’s the problem. With the most problematic client they started with a dual-core processor and 512MB of RAM – a reasonable specification for a few years back. At no time did I see issues to do with the memory size, and the processor utilisation was only a few percent on one of the cores.

I’d go as far as to say that the only reason for upgrading the server is to allow multiple users to access it on terminal server sessions, bypassing the network access to the Sage files completely. However, whilst this gives the fastest possible access to the data on the disk, it doesn’t overcome the architectural problems involved with sharing a disk file, so multiple users are going to have problems regardless. They’ll still clash, but when they’re not clashing it will be faster.

But, assuming you want to run Line 50 multi-user the way it was intended, with the software installed on the client PCs, you’re going to have to look away from the server itself to find a solution.

The next thing Sage will tell you is to upgrade to 1Gb Ethernet – it’s ten times faster than 100Mb, so you’ll get a 1000% performance boost. Yeah, right!

It’s true that the network file access is the bottleneck, but it’s not the raw speed that matters.

I’ll let you into a secret: not all network cards are the same.

They might communicate at a line speed of 100Mb, but this does not mean that the computer can process data at that speed, and it does not mean it will pass through the switch at that speed. This is even more true at 1Gb.

This week at Infosec I’ve been looking at some 10Gb network cards that really can do the job – communicate at full speed without dropping packets and pre-sort the data so a multi-CPU box could make sense of it. They cost $10,000 each. They’re probably worth it.

Have you any idea what kind of network card came built into the motherboard of your cheap-and-cheerful Dell? I thought not! But I bet it wasn’t the high-end type.
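
Whatever it turns out to be, you can at least measure what it actually delivers, as opposed to the line speed it advertises. Here’s a minimal sketch using the widely available iperf3 tool; you run “iperf3 -s” on the server first, and the address 192.168.1.10 is made up for the example:

    import json
    import subprocess

    SERVER = "192.168.1.10"  # hypothetical address - run "iperf3 -s" on that machine first

    # Ten-second TCP test; -J asks iperf3 for machine-readable JSON output.
    result = subprocess.run(
        ["iperf3", "-c", SERVER, "-t", "10", "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)

    # Throughput actually achieved end to end, regardless of the nominal line speed.
    bps = report["end"]["sum_received"]["bits_per_second"]
    print(f"Measured throughput: {bps / 1e6:.0f} Mb/s")

If a “100Mb” card and switch port only manage 40-50Mb/s of real traffic, you’ve found part of your problem without spending a penny.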

The next thing you’ve got to worry about is the cable. There’s no point looking at the wires themselves or at what the LAN card says it’s doing – you’ll never know. Testing that a cable has the right wires on the right pins is not going to tell you what it will do when you put data down it at high speed. Unless the cable’s perfect it’s going to pick up interference to some extent, most likely from the wire running right next to it, but you’ll never know how much this is affecting performance. The wonder of modern networking means that errors on the line are corrected automatically without worrying the user about it. If 50% of your data gets corrupted and needs re-transmission, then by the time you’ve waited for the error to be detected, the replacement requested and the intervening data put on hold, your 100Mb line could easily be clogged with 90% junk – but the line speed will still be saying 100Mb with minimal utilisation.
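
To put some very rough numbers on that, here’s a back-of-envelope sketch; the retransmission fraction and the recovery penalty are invented purely for illustration, not measurements from any real network:

    # Back-of-envelope effective throughput with retransmissions.
    line_speed_mbps = 100.0      # nominal line speed
    retransmit_fraction = 0.5    # fraction of frames corrupted and resent (assumed)
    stall_penalty = 4.0          # extra round trips spent detecting/recovering each loss (assumed)

    # Every corrupted frame is sent again, and recovery stalls the pipe for a while,
    # so the useful share of the wire shrinks much faster than you might expect.
    useful_share = (1 - retransmit_fraction) / (1 + retransmit_fraction * stall_penalty)
    print(f"Effective goodput: roughly {line_speed_mbps * useful_share:.0f} Mb/s "
          f"({useful_share:.0%} of the line speed)")

With those made-up figures the “100Mb” line is delivering something like 17Mb/s of useful data, and nothing on the link lights will tell you.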

Testing network cables properly requires some really expensive equipment, and the only way around it is to have the cabling installed by someone who really knows what they’re doing with high-frequency cable to reduce the likelihood of trouble. If you can, hire some proper test gear anyway. What you don’t want to do is let an electrician wire it up for you in a simplistic way. They all think they can, but believe me, they can’t.

Next down the line is the network switch, and this could be the biggest problem you’ve got. Switches sold to small businesses are designed to be ignored, and people ignore them. “Plug and Play”.

You’d be forgiven for thinking that there wasn’t much to a switch, but in reality it’s got a critical job, which it may or may not do very well in all circumstances. When it receives a packet (a sequence of data, a message from one PC to another) on one of its ports it has to decide which port to send it out of to reach its intended destination. If it receives multiple packets on multiple ports it has to handle them all at once. Or one at a time. Or give up and ask most of the senders to try again later.

What your switch is doing is probably a mystery, as most small businesses use unmanaged “intelligent” switches. A managed switch, on the other hand, lets you connect to it using a web browser and actually see what’s going on. You can also configure it to give more priority to certain ports, protect the network from “packet storms” caused by accident or malicious software and generally debug poorly performing networks. This isn’t intended to be a tutorial on managed switches; just take it from me that in the right hands they can be used to help the situation a lot.
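
As a taste of what a managed switch gives you, here’s a rough sketch that reads the per-port error and discard counters over SNMP. It assumes the switch has SNMP enabled with the usual “public” read-only community and sits at 192.168.1.2 (both made up for the example), and that the standard Net-SNMP snmpwalk tool is installed on the PC you run it from:

    import subprocess

    SWITCH = "192.168.1.2"   # hypothetical switch address
    COMMUNITY = "public"     # hypothetical read-only community string

    def walk(oid):
        """Return the raw snmpwalk output lines for one OID (needs Net-SNMP installed)."""
        out = subprocess.run(
            ["snmpwalk", "-v2c", "-c", COMMUNITY, SWITCH, oid],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.splitlines()

    # Input errors and discarded packets per port. Both should stay close to zero on a
    # healthy network; numbers that climb steadily point at a bad cable, port or NIC.
    for oid in ("IF-MIB::ifInErrors", "IF-MIB::ifInDiscards"):
        for line in walk(oid):
            print(line)

Run it once, note the figures, run it again an hour later under load and compare; the difference tells you far more than the blinking lights ever will.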

Unfortunately, managed switches cost a lot more than the standard variety. But they’re intended for the big boys to play with, and consequently they tend to switch more simultaneous packets and stand up to heavier loads.

Several weeks back I upgraded the site with the most problems from good quality standard switches to some nice expensive managed ones, and guess what? It’s made a big difference. My idea was partly to use the switch to snoop on the traffic and figure out what was going on, but as a bonus it appears to have improved performance and, most importantly, reliability considerably too.

If you’re going to try this, connect the server directly to the switch at 1Gb. It doesn’t appear to make a great deal of difference whether the client PCs are 100Mb or 1Gb, possibly due to the cheapo network interfaces they have, but if you have multiple clients connected to the switch at 100Mb they can all simultaneously access the server down the 1Gb pipe at full speed (to them).
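
If you want to check what speed each machine has actually negotiated, rather than what’s printed on the box, a couple of lines of Python with the third-party psutil package will tell you. This is only a quick sketch; the interface names will obviously differ from machine to machine:

    import psutil  # third-party package: pip install psutil

    # Show each network interface's state and negotiated link speed in Mb/s.
    # On the server you want to see 1000; a client that has silently dropped
    # to 100 (or 10) often points at a dodgy cable or a tired switch port.
    for name, stats in psutil.net_if_stats().items():
        print(f"{name:20} up={stats.isup} speed={stats.speed} Mb/s duplex={stats.duplex}")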

This is a long way from a solution, and it’s hardly been conclusively tested, but the extra reliability and resilience of the network has at least allowed a Sage system to run without crashing and corrupting data all the time.

If you’re using reasonably okay workstations and a file server, my advice (at present) is to look at the switch first, before spending money on anything else.

Then there’s the nuclear option, which actually works. Don’t bother trying to run the reports in Sage itself. Instead, dump the data to a proper database and use Crystal Reports (or the report generator of your choice) to produce them. I know someone who was tearing their hair out because a Sage report took three hours to run; the same report took less than five minutes using Crystal Reports. The strategy is to dump the data overnight and knock yourself out running reports the following day. Okay, the data may be a day old, but if it’s taking most of the day to run the report on the latest data, what have you really lost?
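
As a sketch of the overnight dump, the Sage ODBC driver can be read by any ODBC-capable tool; the snippet below copies one table into a local SQLite file that your report generator can then hammer to its heart’s content. The DSN name “SageLine50” and the table name AUDIT_JOURNAL are placeholders – use whatever your own Sage installation actually exposes – and you’d schedule something like this to run in the small hours:

    import sqlite3
    from decimal import Decimal

    import pyodbc  # pip install pyodbc; needs the Sage ODBC driver and a DSN set up on this PC

    # Hypothetical names: "SageLine50" stands for whatever DSN the Sage ODBC driver was
    # installed under, and AUDIT_JOURNAL for whichever table your reports actually need.
    src = pyodbc.connect("DSN=SageLine50;UID=manager;PWD=")
    dst = sqlite3.connect("sage_snapshot.db")

    cur = src.cursor()
    cur.execute("SELECT * FROM AUDIT_JOURNAL")
    columns = [d[0] for d in cur.description]

    # Recreate the table locally (column names only, no types) and copy every row across.
    dst.execute('DROP TABLE IF EXISTS "AUDIT_JOURNAL"')
    dst.execute('CREATE TABLE "AUDIT_JOURNAL" ({})'.format(", ".join(f'"{c}"' for c in columns)))

    def plain(value):
        # Crude type flattening so sqlite3 accepts whatever the ODBC driver hands back.
        return float(value) if isinstance(value, Decimal) else value

    rows = [tuple(plain(v) for v in row) for row in cur.fetchall()]
    dst.executemany('INSERT INTO "AUDIT_JOURNAL" VALUES ({})'.format(", ".join("?" * len(columns))), rows)
    dst.commit()

The point isn’t the particular tools; it’s that once the data is in a real database, reports that crawl in Sage run in minutes.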

I’d be really interested to hear how other people get on.

9 Replies to “How to improve Sage network performance”

  1. We have GbE network cards, Cat6 cabling, and GbE smart switches. The server has 2 GbE ports linked together on the switch to provide 2GbE speed.

    It seems Sage does not handle concurrent users well. 2-3 users can open Sage at the same time and reach the login screen instantly. From the 4th user onwards it could take 3-5 minutes.

    Could it be disk access? Probably, but according to Resource Monitor, I’m only using 1MB/s disk activity. Don’t most SATA disks provide at least 90MB/s? At least that’s what I saw in the benchmark. However, random read/write access is low which could be the root cause of our problem.

    Could it be memory? Doubt it. We have 16GB installed, server never peaks higher than 4GB on Windows Server 2k8 R2.

    Could it be network speed? Doubt it, file transfers between a client and server happen VERY fast.

    1. There’s no excuse for that being slow (unless the use of two LAN ports on the server is going horribly wrong or the smart switches have been programmed incorrectly).

      You don’t say which version of Sage you are using, but as a general rule see the posting linked at the head of the article about how Sage Line 50 actually works for the background. In your case I suspect a problem with record locking. Sage isn’t a client-server system – it’s a free-for-all with workstations trying to share files on the server. If one workstation locks a file for updates, the others get locked out for a while. If it doesn’t hand it back, the others just have to wait.

      You may get some relief if you make sure the file locking (especially the opportunistic locking – oplocks) on the server is set exactly as required by Sage. I believe this currently means oplocks need to be enabled, which you used to have to do in the registry.
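
      If you want to see what the server is currently set to, the (SMB1-era) oplocks switch lives under the LanmanServer parameters key. This little sketch, run on the server itself, just reads the value back; if it isn’t there at all the Windows default applies, and you should check Sage’s current guidance before changing anything:

          import winreg  # Windows only; run this on the server

          KEY = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

          with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
              try:
                  value, _ = winreg.QueryValueEx(key, "EnableOplocks")
                  print("EnableOplocks =", value)  # 1 = enabled, 0 = disabled
              except FileNotFoundError:
                  print("EnableOplocks is not set; the Windows default applies")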

      You’re using Windows Server 2008 (probably because someone from Sage told you to). They say it doesn’t work on UNIX servers. I say it doesn’t work on Windows servers either, and many people have found it’s just as happy, or otherwise, on UNIX. UNIX is also faster than Windows – worth a try.

  2. Good and comprehensive post. One thing you haven’t mentioned is the hard drive speed. We run Sage Line50 at my workplace (half a dozen client workstations accessing a central database on a file server), and when we switched from a standard 7200 RPM SATA disk to a 10k RPM SAS drive, we saw a noticeable performance improvement. I can only guess that Sage does a lot of random seeks, so a hard drive with faster random access speed somehow helps?

    1. This was an article about network performance, rather than the server. But thanks for the useful input. I’m not surprised it matters, once the network is going as fast as it can. Server hard disks do perform a lot better than desktop versions, most of the time anyway. They also have a lower quoted error rate. But they cost a lot more and have lower capacities. I used to be skeptical, but have found that modern server drives are worth the money in some circumstances.

      SAS has controller optimisations that are supposed to make access quicker by optimising head movements. SATA can do this too, but OS support is more patchy (FreeBSD has been doing it for a few years, and Windows 7 now supports it if enabled). I’m not convinced that SAS is inherently better than SATA with NCQ. But SAS uses SCSI commands and SATA uses ATA commands – different driver families. The SCSI command set has more functionality, but I’m not convinced this gives it an advantage. However, SCSI drivers are often better optimised for server use, so might perform better even if the underlying hardware should be about the same. I’m rambling. I used to write an awful lot on hard disks in the 1990s. Back to the real world.

      An alternative approach would be to have a massive disk cache. If the OS is rarely hitting the drive because it’s almost all available instantly in RAM then the speed of the underlying drive becomes irrelevant – no hard drive is faster than RAM (hostage to fortune, but I think I’m safe for a while).

      You also have to tweak the disk system and make a good choice on write-through, but I’m not sure how much this can be achieved under Windows. I’m planning (one day) to run Sage on UNIX which has more advanced and optimised disk handling.

      As an update to the original post, I have now found that using a good 1Gb network card on the workstations can also make a lot of difference in some circumstances. You also need a big switch, and to hang the workstations directly off it, although using good quality workgroup hubs can also work.

      1. Hi Frank,

        Sorry, I didn’t really read the title very well, I have to say. I found this post by googling Sage Line50 issues. I appreciate your insightful reply though. You covered a lot of ground on drives in your reply and I agree with you on most parts.

        Now, did you notice any improvements in Sage after your upgrade to Gigabit Ethernet? That would be my first course of action too, given how much gigabit switches cost these days. However, I can’t make sense of how it could help – Gigabit Ethernet would improve the bandwidth, but it won’t necessarily improve much on latency. Would Sage Line50 saturate a 100Mb link often enough to make a gigabit network show a significant improvement? Or maybe I’m missing something else completely. Any thoughts?

          1. I think that, at the time I wrote this, the upgrade to the 1Gb switches helped a lot. It made a big difference to the stability – fewer lost packets and timeouts leading to further problems. Subsequent experience has also shown that making a 1Gb connection to the workstation (instead of a 100Mb one) has helped greatly.

          Yes, Sage is really messing with the network in a bad way. I spent some time analysing the traffic it produced before I decided to upgrade anything. See http://blog.frankleonhardt.com/2010/why-is-sage-line-50-so-slow/ for the results.

            One thing that convinced me that upgrading the workstations helped too was the predictive drop-down lists on some forms. A direct connection improved these at least 5x – from 15 seconds to 2-3 seconds (i.e. from unusable to merely slow).

            On this particular site the server isn’t my problem; when I decide to make it my problem I’ll probably go for the fastest I can find (SAS disks, or even SSD). The Sage Authorised Reseller used a Dell workstation, and then upgraded it to another Dell workstation (as far as I can make out).

  3. I came across your page via Google. I’m having similar problems on a network I manage. It was wired in a daisy chain fashion through crappy netgear unmanaged hubs and switches.

    My staff are currently installing two linked managed switches (Cisco, of course!) and will then connect all the clients in a star configuration.

    I’ll report back if it solves our issues!

    1. Thanks for the comment here – most people are drawn to the “why is sage so slow” rant elsewhere on this blog.

      As an update to this, it really has sorted out a lot of the stability issues. On the site in question I discovered a large number of contraband “desktop” switches around the place, which probably weren’t helping matters. Why are they called “desktop” switches when people invariably hide them under the desk?

      One unanswered question – do SOHO switches implement spanning tree (802.1D)? I have reason to suspect not!

      FWIW I’ve found the D-Link Websmart switches pretty good at around £10/port if you can’t afford/don’t like Cisco.

      Please do report back; a lot of people seem to read these diatribes even if they don’t comment and you could be doing the world a favour.
