Apps to force Web into decline?

Who’s going to win the format war – iOS (Apple iPad) or Android? “What format war?” you may ask. Come on, it’s obvious. Some are saying that the web is either dying (dramatic) or at least being impacted by the modern fashion for Apps, and these run on iOS or Android (mostly). Actually, by sales, Apple is winning hands-down.

This IS a format war, because developers need to support one or the other platform – or both – and users need to choose the platform that has the content they need. And there is some sense in it when database contents are queried and displayed in Apps rather than on web pages.
Apple has the early advantage, and the cool factor. But it’s the most expensive and the most hassle to develop for, as Apps can only be sold through Apple. Android is a free-for-all. Apps can be sold through Google, or anyone else making them available for download in the future. It’s an open standard. The security implications of this are profoundly worrying, but this is another story.

So, running iOS is expensive, Android is insecure and neither are very compatible. That’s before you consider Blackberry and any requirement to run an App on your Windows or Linux PC.

But I don’t think this is a conventional format war. It’s mostly software-based, and open-standards software might just win out here (and I don’t mean Android). People like paying for and downloading Apps. Web browsers can (technically) support Apps, using Java and the upcoming HTML5 in particular. Why target a specific operating environment when you can target a standard web browser and run on anything?

As an aside, HTML5 is sometimes hailed as something new and different when in fact it’s just evolution and tidying-up. The fact is that HTML is cross-platform and will deliver the same functionality as Apps. HTML5 simply standardises and simplifies things, making cross-platform development more open-standard, so every browser will be able to view page content without proprietary plug-ins. It also brings better support for mobile devices, which lost out from the late 1990s onwards when graphic designers decided HTML was a WYSIWYG language.
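For instance, video playback – long the preserve of proprietary plug-ins – becomes a plain tag in HTML5 (the file name here is made up for illustration):

```html
<!-- Sketch: HTML5 video playback with no proprietary plug-in.
     "clip.mp4" is a made-up file name for illustration. -->
<video src="clip.mp4" controls width="480">
  Sorry - your browser doesn't support the video element.
</video>
```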

Some modern-day pundits proclaim that data will be accessed more through Apps in the future, and that the web has had its decade. Apparently a third of the UK is now using smartphones. Whether this statistic is correct or not, they’re certainly popular, and I’ll concede that Apps are here to stay. But in my vision of the future they won’t be running on iOS, Android or Blackberry – they’ll be written using HTML5 and run on anything. It’s platform independence that launched HTML and the web twenty years ago, and it’s what will see off the competition for the next twenty years.

Google’s Evil Browser policy


Google’s VP of Engineering (Venkat Panchapakesan) has published one of the most outrageous policy statements I’ve seen in a long time – not in a press release, but in a blog post.

He’s saying that Google will discontinue support for all browsers that aren’t “modern” from the end of July, with the excuse that its developers need HTML5 before they can improve their offerings to meet current requirements. “Modern” means less than three versions old, which currently rules out anything prior to IE 8 (now that IE 10 is available in beta) and Firefox 3.5. This is interesting – Firefox 4 has just been released, I’m beta-testing Firefox 5, and Firefox 7 is talked about for the end of 2011. That would obsolete last month’s release of Firefox 4 in just six months. Or does he mean something different by version number? Anyone who knows anything about software engineering will tell you that major differences can occur with minor version number changes too, so it’s impossible to interpret what he means in a technical sense.

I doubt Google would be stupid enough to “upgrade” its search page, but this will affect Google Apps and Gmail.

The fact is that about 20% of the world is using either IE 6 or a similar vintage browser. Microsoft and Mozilla have a policy of encouraging people to “upgrade” and are supportive of Google. Microsoft has commercial reasons for doing this; Mozilla’s motives are less clear – perhaps they just like to feel their latest creations are being appreciated somewhere.

What these technological evangelists completely fail to realise is that not everyone in the world wishes to use the “latest” bloated version of their software. Who wants their computer slowed down to a crawl using a browser that consumes four times as much RAM as the previous version? Not everyone’s laptop has the 2Gb of RAM needed to run the “modern” versions at a reasonable speed.

It’s completely disingenuous to talk about users “upgrading” – it can easily make older computers unusable. The software upgrade may be “free” but the hardware needed to run it could cost dear.

It’ll come as no surprise to learn that the third world has the highest usage of older browser versions; they’re using older hardware. And they’re using older versions of Windows (without strict licence enforcement). There’s money to be made by forcing the pace of change, but is it right to make anything more than two years old obsolete?

But does Google have a point about HTML5? Well, the “web developers” whose blog comments they’ve allowed through uncensored seem to think so. But web developers are often just lusers with pretensions, fresh out of a lightweight college and dazzled by the latest cool gimmick. Let’s assume Google is a bit more savvy than that. So what’s their game? Advertising. Never forget it. Newer web technologies are driven by a desire to push adverts – Flash animations, HTML5 – everything. Standard HTML is fine for publishing standard information.

I’ll take a lot of convincing that Google’s decision isn’t to do with generating more advertising revenue at the expense of the less well-off Internet users across the globe. Corporate evil? It looks like it from here.

WPAD and Windows 7 and Internet Explorer 8

I’ve recently set up WPAD automatic proxy detection at a site – very useful if you’re using a proxy server for web access (squid in this case). However, some of the Windows 7 machines failed to work with it (actually, my laptop which is just about the only Windows 7 machine here). This is what I discovered:

It turns out that those smart guys at Microsoft have implemented a feature to stop checking for a WPAD server after a few failed attempts. It reckons it knows which network a roaming machine is on, and leaves a note for itself in the registry if it’s not going to bother looking for a proxy server on that again. A fat lot of use if you’ve only just implemented it.

If it fails to find a proxy, but manages to get to the outside world without one, it will set the following registry value:


HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Wpad\
WpadDecision = 0

If you want it to try again (up to three times, presumably), you can simply delete this value. You can disable the whole crazy notion by adding a new DWORD value:


HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Wpad\WpadOverride = 1
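If you’d rather not drive regedit by hand, the same fix can be expressed as a .reg file. This is only a sketch – on some machines WpadDecision lives under a per-network sub-key, so check where yours actually is first:

```reg
Windows Registry Editor Version 5.00

; Sketch only - disables WPAD's "give up and remember" behaviour.
; On some setups WpadDecision sits under a per-network sub-key of Wpad,
; so verify the location in regedit before relying on this.
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Wpad]
"WpadOverride"=dword:00000001
```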

You may well want to do this if you’re using a VPN or similar, as I really don’t think Windows 7 has any completely reliable method of determining the network it’s connected to. I’m impressed that it ever manages to get it right, but I’m sure it’s easy enough to fool. Does anyone know how it works?
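For reference, the wpad.dat file the clients go hunting for is just a proxy auto-config (PAC) script served over HTTP. A minimal one looks like this – the proxy host name and squid port are assumptions for illustration:

```javascript
// Minimal proxy auto-config (PAC) script of the kind served up as wpad.dat.
// The proxy host name and squid port (3128) are assumptions for illustration.
function FindProxyForURL(url, host) {
  // Route everything through the proxy, falling back to a direct connection
  return "PROXY proxy.example.com:3128; DIRECT";
}
```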

Infosec Europe 2011 – worrying trend

Every Infosec (the Information Security show in London) seems to have a theme. It’s not planned, it just happens. Last year it was encrypted USB sticks; in 2009 it was firewalls. 2011 was the year of standards.

As usual there were plenty of security related companies touting for business. Most of them claimed to do everything from penetration testing to anti-virus. But the trend seemed to be related to security standards instead of the usual technological silver bullets. Some of the companies were touting their own standards, others offering courses so you could get a piece of paper to comply with a standard, and yet others provided people (with aforementioned paper) to tick boxes for you to prove that you met the standard.

This is bad news. Security has nothing to do with standards; proving security has nothing to do with ticking boxes. Security is turning into an industry reminiscent of Total Quality Assurance in the 1990s.

One thing I heard a lot was “There is a shortage of 20,000 people in IT security”, and the response appears to be to dumb things down enough that you can put someone on a training course to qualify them as a box-ticker. The people hiring “professionals” such as this won’t care – they’ll have a set of ticked boxes and a certificate that proves any security breach was “not their fault” because they met the relevant standard.

Let’s hope the industry returns to actual security in 2012 – I might even find merit in the technological fixes.

Google Phishing Tackle

In the old days you really needed to be a bit technology-savvy to implement a good phishing scam. You needed a way of sending out emails, a web site for them to link back to that wouldn’t be blacklisted and couldn’t be traced, plus the ability to create an HTML form to capture and record the results.

[Screenshot: a bank phishing scam form created using Google Apps – creating one is this easy]

These inconvenient barriers to entry have been swept away by Google Apps.

A few days back I received a phishing scam email pointing to a form hosted by Google. Within a couple of minutes of its arrival an abuse report was filed with the Google Apps team. You might expect them to deal with such matters, but this still hadn’t been actioned two days later.

If you want to have a go, the process is simple. Get a Gmail account, go to Google Docs and select “Create New… Form” from the menu on the left. You can set up a data capture form for anything you like in seconds, and call back later to see what people have entered.

Such a service is simply dangerous, and Google doesn’t appear to be taking this at all seriously. Given their “natural language technology” it shouldn’t be hard for them to spot anything looking like a phishing form. So I decided to see how easy it was and tried something blatant. This is the result:

No problem! Last time I checked the form was still there, although I haven’t asked strangers to fill it in.

Fetchmail, Sendmail and oversized emails

There’s a tendency for lusers to try to email anything these days. If you thought a few gigabytes of outgoing mail queue was enough, you haven’t come across the luser who decided to email the contents of a CD (uncompressed) to all her friends. Quite what they’d have made of their iPhones trying to download it I’ll never know.

Sendmail has a method for limiting emails to a sensible size. As a reminder, inside host.example.com.mc you need to add:

# The following sets the maximum message to 5Mb - otherwise it's infinite
define(`confMAX_MESSAGE_SIZE', `5242880')

Then run “make”, “make install” and “make restart”. This will generate the sendmail.cf (and any hash maps) before restarting. The bit you always forget when changing .mc files is the “make install”. This is all for FreeBSD – Linux types, please do it your own way.

So this is great – anyone sending an over-sized email is bounced from their server, and local users submitting email will be similarly clipped into the world of sane and sensible (if you regard something as large as 5Mb as sensible for an email).

But I came across one interesting issue recently and it could happen to you, too, if you’re using fetchmail.

For those who haven’t come across it before, fetchmail pulls emails from a POP3 box and delivers them to local users – dropping them into your local MTA by default. This is reasonable, as everything then goes through the spam filtering, procmail and anything else you have defined. It’s really useful for legacy situations where someone’s ended up with a POP3 box somewhere and you need to integrate it with the rest of their mail.

Fetchmail does plenty more besides, and has a config file to match the functionality. Presumably as a reaction against the complexity of the sendmail.cf syntax, this one tries to operate in plain English. I’ve never quite figured out the full syntax, but it’s designed to be “flexible” and figure out what you’re trying to say. Personally I don’t think it succeeds in being any more friendly than sendmail.cf, in spite of being at the other end of the spectrum.

Anyway, the fun comes when fetchmail downloads an over-sized email from the POP3 box and delivers it locally via Sendmail. Sendmail will reject it and send a bounce back to the original sender. So far, so good – but if fetchmail is running as a cron job every five minutes, the luser gets a bounce back every five minutes, because the outsized mail is stuck in the POP3 box. Oops! It may serve them right, but they shouldn’t be allowed to suffer for too long.
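For anyone unfamiliar with the setup, the five-minute polling is typically just a cron entry along these lines – the binary and rc-file paths here are guesses, so adjust for your system:

```crontab
# Hypothetical crontab entry: poll the POP3 box every five minutes.
# Binary and rc-file paths will vary - these are FreeBSD-ish guesses.
*/5 * * * * /usr/local/bin/fetchmail -f /usr/local/etc/fetchmailrc >/dev/null 2>&1
```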

Fortunately one of fetchmail’s many options allows you to control the maximum download size, if you can figure out the syntax. It’s available as the command-line option -l, but if you prefer to keep things in the .fetchmailrc file (the best plan) you’ll need to proceed as per the following example. The keywords are “limit” and “limitflush”.

  • local-postmaster-account is the login for your local postmaster – undelivered emails go there.
  • pop3.isp.co.uk – the mail server with the POP3 box
  • users-domain.co.uk – the domain whose email ends up in the POP3 box above
  • pop3-username, pop3-password – what you use to log into the POP3 box
  • tom, dick and harry are local mailboxes, with tom being the default.

    set postmaster local-postmaster-account

    poll pop3.isp.co.uk proto pop3 aka users-domain.co.uk no envelope no dns:
    user "pop3-username", with password "pop3-password",
    limit 5242368 limitflush to

    dick
    "dick@users-domain.co.uk" = dick
    "richard@users-domain.co.uk" = dick

    harry
    "harry@users-domain.co.uk" = harry

    tom
    "tom@users-domain.co.uk" = tom
    "*@users-domain.co.uk" = tom

    here

This isn’t intended as a tutorial in writing .fetchmailrc files – only an example of the use of limit and limitflush.

So what’s going on? The limit keyword must be part of the poll statement, and is followed by the size (in bytes) of the maximum email to be retrieved. In the example it’s 512 bytes less than the 5Mb used in Sendmail (I feel I need a bit of slack on a boundary condition; it may be okay if they’re identical, but why push your luck?)

Please read the fetchmail documentation for full details (although it’s light on examples). With just the “limit” keyword in use, over-sized mails will be left in the POP3 box. Adding the “limitflush” keyword will silently delete over-sized emails so they don’t bother you again. You may not want to do this! If you don’t, someone will have to retrieve or delete the emails from the POP3 box manually.

Note that putting a limit on the download will prevent the bounce messages going to the original sender, as the mail won’t get as far as sendmail.

Billing problems 1899.com

1899 and 18866 are two apparently linked low-cost telecoms companies. They’re so-named because that’s the prefix used to route through them.

Now some time ago I started using their services and wrote a couple of articles recommending them, with the proviso that you shouldn’t expect any kind of customer service. The company appears to be based in Switzerland and they don’t want to talk to anyone. But they’re legit. The only thing I said back then was to pay by credit card and get consumer protection. If you don’t mind this, they do deliver. Or did deliver.

After many years I had to change my credit card number, so I filled in the billing change form for both companies. 1899 took no notice, and after several months tried to bill the old card – which was rejected. I made sure they had the right one and told them to try again, but they wouldn’t. When I eventually got through to someone apparently from 1899, they said it was their policy not to try a card a second time, and asked me to send the money using an international transfer, after which they’d start billing the card again. I don’t think so. This could have been anyone’s bank account, and if genuine it’s a very strange way to do business – as well as costing me £20 for the transfer. Apart from which, they weren’t being asked to try the old card again – it was a new number. That’s the point!

Their terms of business say you need to pay by credit card – no problem, they can charge the card.

They didn’t.

I wrote back saying charge the card, or if you really don’t want to, you can have cash. This is an offer to pay using legal tender – if they refuse it, they won’t have a leg to stand on if they want the money any other way. I assumed they’d see sense.

They didn’t.

This went back and forth. I made it clear – charge the card (recommended), take the cash or I’ll see you in court. It’d be interesting to meet these guys if they went for one of the last two options.

It’s over a year now. I still owe them for the calls, and haven’t heard anything about it. It’s annoying that I owe them money. The service doesn’t work any more (unsurprisingly). I can manage without it. 18866 still works (that half of the company is using the correct card).

So do I still recommend 1899 and 18866? Well I suppose I do, but as I said in my original articles, it’s fine when it works but don’t expect any sane or sensible customer service if it doesn’t.

Seven Blunders of the Internet World

I’ve been involved with web hosting since the early 1990s, and every week some hopeful bright spark comes to me with a great idea about making a fortune as an Internet entrepreneur. Whilst I hate to rain on anyone’s parade, a quick reality check is in order. Just because Amazon can make a fortune selling books on-line doesn’t mean you can. Amazon got there first and they’ve got a slick, well-organised operation. In short, they can buy the books cheap, store them efficiently and, most importantly, stuff them into envelopes and post them quickly and cheaply. This doesn’t mean it’s impossible to compete with Amazon, but they were there first and have a massive advantage. If you decided to buy a Cessna and try to compete with American Airlines on the London to New York run, everyone would (rightly) say you were nuts, so why should it be a surprise to learn the same applies on-line?

Whatever you do, remember the ease of starting up on the Internet works for you and the competition. You need a unique selling point; a barrier to entry that only you can cross. If you don’t have one you’re competing with the rest of the world.

Here are seven popular but doomed ideas I’ve seen time after time…

  1. Auction Sites. eBay’s doing well, but they’re a bunch of *****s so you want a slice of the action. Unless you’re selling something very specialised (i.e. that eBay can’t handle) then you’re wasting your time. Why should anyone list items with you when you can’t match eBay’s user base? Whatever you think of eBay’s business methods, items auctioned to millions of potential buyers are going to fetch a better price and sellers know that.
  2. Social Networking Sites. So you want to be the next Facebook? Ask yourself why anyone would network their social life through you when there are bigger networks on Facebook (for home users) and LinkedIn (for professionals). Google is, I believe, planning to muscle in. They’re going to find it tough, but they’ve got almost limitless funds they can afford to speculate with, and their developers know exactly what they’re doing (well their top ones do). They’ll still need one hell of a good unique selling point.
  3. Blogging sites. Get someone to provide the content while you rake in the advertising revenue. How many mugs do you think you’ll find? People can either run their own site (and keep the advertising revenue) or use Blogspot.
  4. Directories. If your bright idea is to create a directory of businesses and get them to pay for a listing, I have to tell you it’s been done. If every business paid to be in every such directory they’d go bust in no time – they’re wise to it. They know that people will find them through Google, not you. There are ways this can sort-of work with advertising support, but you’ll be lucky if it covers your hosting costs.
  5. On-line shops. These do work if there’s a real shop behind them. If your plan is to buy a copy of Actinic or download a free copy of Zencart or one of the dozens of on-line shop packages, put something up and see who bites – forget it.

    Selling on-line you’re competing on price, order-fulfilment and uniqueness of stock – if people can get it cheaper and quicker somewhere else, they probably will. If you’re selling “unique” artefacts such as antiques or objets d’art you’re competing with eBay or the artisans producing them, who would need a good reason not to set up their own web site and sell direct. If you’re thinking producers will pay for you to list them, ask yourself why they’d pay you rather than eBay or Amazon, where they’ll get far more exposure.

  6. Web Design Company. Great idea! Download some web template generator for Joomla and make a fortune creating web sites for… well your friends, family and then what? The problem is that there is very little barrier to entry and the market is flooded with the unemployed (and possibly unemployable) looking for a work-from-home job without getting their hands dirty. The real web design companies have real programmers and cater for customers with specialist needs. If you’re thinking of using Joomla you’re not in that league. Sorry.
  7. Internet multi-level marketing seller. Anyone can be a web hosting company, telephone company, ringtone provider or what-have-you – it’s easy! Just sign up to an affiliate programme, choose your branding and sell, sell, sell – along with thousands of others selling exactly the same thing. If it was easy to sell the provider would be selling direct, wouldn’t they?

    All of the above are tried and failed businesses. If you’ve got a plan that doesn’t fall foul of any of the above it’s either completely crazy or it might just work – in which case give me a call. There are some ideas that might just work, but I’m hardly going to reveal them here.

Christmas Hackers 2010

The 2010/2011 cybercrime season has been one of the most prolific I remember. There have been the usual script-kiddie attacks, wasting bandwidth. These largely consist of morons trying to guess passwords using an automated script, and they’re doomed to failure because no serious UNIX administrator would have left guessable passwords on proper accounts. Besides which, they’re guessing system account names you only find on Windows or Linux.

What seems to be a bigger feature this year is compromised “web developer” software written in PHP. This is set up by designers, not systems people, and they really don’t understand security – hence they’re a soft target.

This year it appears that phpMyAdmin has been hit hard. The vulnerability seems to be caused by poor installation (leaving the configuration pages up after use) and running a weak version of the code that was actually fixed a year ago. When I looked I found several copies of the old version, still active, dating from when the web designer initially commissioned the site.

The criminals appear to be using a mechanism that’s slightly different from the original exploit documentation, but it’s fairly obvious to any programmer looking at the setup.php script. It allows arbitrary uploads to any directory that Apache has write access to.

The nature of the attacks has also been interesting. I’ve seen scripts dropping .htaccess files into all likely directories, redirecting accesses elsewhere using the mod_rewrite mechanism. This appears to be intended as a simple DoS attack, overloading target servers (homelandsecurity.gov and fbi.gov being favourite targets).

That this is the work of script kiddies there is no doubt. They’ve left botnet scripts written in perl and python all over the place on honeypot machines. Needless to say this makes them really easy to decode and trace, and you can probably guess which part of the world they seem to be controlled from.

My advice to users of phpMyAdmin (a web-based front end for administering MySQL) is to learn how to use SQL properly from the command line. If you can’t do that (or your hosting company won’t let you, which is a problem with low-cost web hosts), at least secure it properly. Upgrade to the latest version, keep it upgraded, and remove it from the server when not in use. If you don’t want to remove it, at least drop a .htaccess file in the directory to disable it, or make it password protected.
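If you go down the .htaccess route, a sketch along these lines should do it. This assumes Apache with AllowOverride permitting these directives, and the AuthUserFile path is made up:

```apache
# Sketch of a .htaccess for the phpMyAdmin directory (Apache 2.2-style
# syntax assumed, with AllowOverride permitting these directives).

# Either shut the directory off completely...
Order deny,allow
Deny from all

# ...or comment out the two lines above and password-protect it instead
# (the AuthUserFile path below is a made-up example):
# AuthType Basic
# AuthName "phpMyAdmin"
# AuthUserFile /usr/local/etc/apache/phpmyadmin.passwd
# Require valid-user
```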

chkrootkit finds bindshell infected on port 465

The current version of chkrootkit will throw up a warning that bindshell is INFECTED on port 465 in some circumstances when there is nothing to worry about. What it’s actually doing (in case you can’t read shell scripts, and why should you when there’s a perfectly good ‘C’ compiler available) is running netstat and filtering the output, looking for ports that shouldn’t be in use. Port 465 is SMTP over SSL which, in my opinion, very definitely should be used – but it is normally disabled by default.
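As a self-contained sketch of that logic – with canned netstat output, and a far shorter port list than the real script uses:

```shell
# Emulate chkrootkit's bindshell() check: flag any listening port that
# appears on a watch list. The netstat output here is canned sample data.
NETSTAT_OUTPUT='tcp4  0  0  *.465  *.*  LISTEN
tcp4  0  0  *.22   *.*  LISTEN'
WATCHED="114 465 511 600"   # the real list in the script is much longer
FLAGGED=""
for port in $WATCHED; do
  if echo "$NETSTAT_OUTPUT" | grep "[.:]$port " >/dev/null; then
    FLAGGED="$FLAGGED $port"
  fi
done
echo "INFECTED ports:$FLAGGED"   # prints: INFECTED ports: 465
```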

Whether this should worry you depends on whether you’re using secure SMTP, probably with sendmail. If you set up the server you should know this. If someone else set it up and you’re not too familiar with sendmail, the tell-tale line in the .mc file is DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl. Note the ‘s’ on the end of smtp.
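For context, a .mc file that deliberately listens on both the plain and SSL ports will contain a pair of lines something like this (standard macro usage, but check it against your own configuration):

```m4
dnl Listen on port 25 (plain SMTP) and port 465 (SMTP over SSL).
dnl Standard macro usage - check against your own .mc file.
DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl
DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl
```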

Assuming you are using SMTPS, you can easily stop chkrootkit from printing an error (or returning an error code) simply by modifying the bindshell() subroutine to remove 465 from the list of ports to check. It’s on line 269 in the current (0.49) version of the script.

I’m not so convinced that chkrootkit is any substitute for an experienced operator, but it’s out there, people use it, and it’s better than nothing.