For some reason this story flew under my radar until I heard about it on the Tenable Security podcast. Last summer, Diebold and VMware demonstrated a prototype virtual ATM system – basically a cash machine with the OS hosted at a data centre. Something tells me the marketing hype will win, and this ‘innovative solution’ will eventually become commonplace.
Personally I reckon cloud computing is over-rated. It’s useful for many things, but not always a good solution. Diebold’s vice president says: ‘This development is an important milestone on Diebold’s roadmap to leveraging cloud computing technology in the retail financial space.’ But is that actually a good idea, or just outsourcing for the sake of it? Let’s bust a couple of myths, in particular the ones regarding efficiency and complexity that I deciphered from the marketing speak.
First off, the current ATM system is generally more efficient and secure than Diebold’s VM concept. Yes, it’s not perfect, but still… It boils down to complexity and the amount of traffic on the network. Complexity introduces more points of failure, points of entry and sometimes interoperability flaws between the system’s components. Those should be avoided when designing networks for security.
Instead of handling everything locally and exchanging data with the relevant banks over the SWITCH network, Diebold’s system introduces VMs as an intermediary, so now we have to worry about security locally, at the data centre, and at the points in between. And we have a lot more network traffic going in and out of the local machine. Tim Conneally, of BetaNews.com, anticipates problems related to the logical addressing of the terminals and their VMs, including Man in the Middle attacks, which are unlikely but quite possible.
Contrary to what the press release claimed, the local ATMs still require onboard computers to handle network connections, hardware, I/O, whatever encryption, etc. Essentially they’re thin clients. They’d need servicing, just like current ATMs, and without biometrics could still be compromised through the usual attacks – shoulder surfing, card skimming, hidden cameras, fake keypads, social engineering, etc.
Complexity also means engineering problems, especially in terms of error handling and fault finding, both of which are fairly critical here. For example, what happens if the connection is dropped between dispensing cash and charging an account? What about something happening outside the scope of the software? How would the engineers quickly determine whether a fault is occurring in the terminal, VM or elsewhere? Have they considered the need for firmware updates somewhere along the line?
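For illustration, here’s a minimal Python sketch of the hold-then-commit pattern that the dropped-connection case demands. All names are hypothetical and this has nothing to do with Diebold’s actual protocol – it just shows why a reversible hold matters when the link can die between debiting an account and dispensing cash:

```python
class ConnectionLost(Exception):
    pass

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.pending = 0

    def hold(self, amount):
        # Phase 1: place a hold instead of debiting outright, so a
        # dropped connection leaves the funds recoverable.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        self.pending += amount

    def commit(self):
        # Phase 2: the cash has left the machine; make the debit final.
        self.pending = 0

    def release(self):
        # Dispense failed or the link dropped: return the held funds.
        self.balance += self.pending
        self.pending = 0


def withdraw(account, amount, dispense):
    account.hold(amount)
    try:
        dispense(amount)          # hardware action at the terminal
    except ConnectionLost:
        account.release()         # the customer got no cash; undo the hold
        return False
    account.commit()
    return True


def broken_dispenser(amount):
    raise ConnectionLost()        # simulates the link dying mid-transaction

acct = Account(100)
ok = withdraw(acct, 40, broken_dispenser)
print(ok, acct.balance)           # the failed withdrawal leaves the balance intact
```

The point is the ordering: a hold can be released after a failure, whereas an outright debit followed by a failed dispense leaves the customer out of pocket until someone reconciles the logs by hand.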
I’ve noticed commenters at The Register were mostly concerned about reliability, which is interesting because that shouldn’t be an issue. Current ATMs are already reliant on the public infrastructure. The VMs themselves would be an improvement, perhaps running a better OS and being closely managed. Data centres maintain something like 99.999% uptime, with excellent failure recovery.
The demonstration terminal also included biometrics, which I’m not a great fan of. Instead of being a miracle solution, biometrics simply add a layer of security under certain conditions. It’s not infallible and the sun doesn’t shine out of its ass, especially where it’s just tacked onto a standard serial interface with badly-written software doing the authentication.
Biometric readers are just input devices with the authentication itself done at the other end, which means a given system could potentially be subverted by pulling the reader off and sending whatever signal/data straight down the wire. Simpler methods of defeating them are sometimes demonstrated at hacker conferences. As far as ATMs are concerned, biometric readers can make life harder for ATM hackers, providing they’re highly tamper resistant, part of some multi-factor authentication system, and function reliably enough.
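To illustrate the point, here’s a toy Python sketch – not any real reader’s protocol – contrasting a naive reader that sends raw template data down the wire (trivially replayable) with a challenge-response scheme keyed to a secret sealed inside the device:

```python
import hmac
import hashlib
import os

# Naive scheme: the reader sends the raw fingerprint template down the
# wire; whoever replays those captured bytes is "authenticated".
enrolled_template = b"user-42-fingerprint-template"

def naive_verify(wire_data):
    return wire_data == enrolled_template

captured = enrolled_template            # sniffed off the serial line
assert naive_verify(captured)           # replay succeeds

# A tamper-resistant reader instead signs a fresh per-session challenge
# with a key sealed inside the device, so captured traffic is useless later.
device_key = os.urandom(32)             # never leaves the reader in practice

def reader_respond(challenge, template):
    return hmac.new(device_key, challenge + template, hashlib.sha256).digest()

def host_verify(challenge, template, response):
    expected = hmac.new(device_key, challenge + template, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

chal1 = os.urandom(16)
resp1 = reader_respond(chal1, enrolled_template)
print(host_verify(chal1, enrolled_template, resp1))   # genuine response accepted

chal2 = os.urandom(16)
print(host_verify(chal2, enrolled_template, resp1))   # replayed response rejected
```

The weakness the first scheme shows is exactly the ‘pull the reader off and send the data straight down the wire’ attack; the second only helps if the key really does stay inside a tamper-resistant enclosure.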
One of the major Internet security events this year is the Stop Online Piracy Act (SOPA), even if it’s gone largely unmentioned by others in the infosec community. SOPA is about censoring, or blocking, sites allegedly infringing copyright in whatever way, but the implications run much deeper than that. While ‘cyber security’ is about protecting confidentiality, integrity and availability, SOPA demands the direct opposite – interference with the Domain Name System and possibly deep packet inspection by ISPs. In fact, SOPA looks remarkably similar to what’s already been implemented in China.
Even if the DNS isn’t strictly essential for a functioning Internet, it’s a core system people rely on. SOPA undermines its intended purpose as a reliable, trusted and definitive addressing system, and if domains start getting routinely blocked, reliability degrades to the point the system is no longer usable, because no domain is safe. The organisations running the root DNS servers cease to be trusted authorities.
These same concerns were pointed out in an open letter, signed by over 80 engineers who actually did create the technologies that make up the Internet. One particular statement in that letter sums things up:
‘We cannot have a free and open Internet unless its naming and routing systems sit above the political concerns and objectives of any one government or industry. To date, the leading role the US has played in this infrastructure has been fairly uncontroversial because America is seen as a trustworthy arbiter and a neutral bastion of free expression. If the US begins to use its central position in the network for censorship that advances its political and economic agenda, the consequences will be far-reaching and destructive.’
One of the things that threw a spanner in the works and caused the postponement of SOPA is DNSSEC, which prevents criminals redirecting their targets. If I understand it correctly, a browser using DNSSEC will keep searching until it finds a server that can resolve a given domain and authenticate it. Whether the system’s being interfered with by a judge or criminal is irrelevant, because the system is responding exactly as it should to an attempted compromise. To get around this, provisions were added to criminalise anyone developing countermeasures to SOPA orders.
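That keep-searching-until-authenticated behaviour can be modelled with a toy Python sketch. This is an analogy only – HMAC stands in for the real DNSSEC signature chain, and the ‘servers’ are made up – but it shows why a blocked or poisoned answer and a criminal redirect look identical to a validating resolver:

```python
import hmac
import hashlib

# Stand-in for the DNSSEC chain of trust (NOT the real protocol).
TRUST_ANCHOR = b"zone-signing-key"

def sign(name, addr):
    msg = f"{name}={addr}".encode()
    return hmac.new(TRUST_ANCHOR, msg, hashlib.sha256).hexdigest()

# Three hypothetical servers: one under a blocking order, one serving a
# criminal redirect, one honest.
servers = [
    lambda name: None,                                          # SOPA-style block
    lambda name: ("203.0.113.9", "bogus-signature"),            # poisoned answer
    lambda name: ("198.51.100.7", sign(name, "198.51.100.7")),  # honest answer
]

def resolve(name):
    for server in servers:
        answer = server(name)
        if answer is None:
            continue                      # no answer at all: try the next server
        addr, sig = answer
        if sig == sign(name, addr):
            return addr                   # authenticated answer found
        # An unverifiable answer is treated exactly like an attack,
        # so the resolver simply moves on to another server.
    return None

print(resolve("example.com"))  # only the verifiable answer is accepted
```

The resolver can’t tell a judge’s order from a cache-poisoning attempt, which is precisely the conflict between SOPA and DNSSEC.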
Stewart Baker posted: ‘Browsers implementing DNSSEC will have to circumvent and bypass criminal blocking, and in the process, they will also circumvent and bypass SOPA orders. The new bill allows the AG to sue the browsers if he decides he cares more about enforcing his blocking orders than about the security risks faced by Internet users.’
It’s an interesting problem. Should developers intentionally remove DNSSEC in order to comply with SOPA provisions? Is it possible to comply with SOPA without aiding the very criminals DNSSEC is designed to stop? Should we conclude the US Cyber Security Strategy becomes meaningless bullshit if SOPA gets passed?
Now for the Good News
There is good news though. As I’ve pointed out, politicians, copyright lawyers, etc. are largely ignorant of how the Internet works, and they’re being opposed by hackers, developers and engineers determined to find ways of keeping the Internet open and free. We can guess the eventual outcome of this. In China, where the most draconian censorship is found, necessity became the mother of innovation, which is perhaps why there are so many hacker groups focusing on security there. I believe SOPA could eventually lead to the unintended consequence of a more secure and decentralised Internet.
Several alternatives to the current hierarchical Domain Name System are being pushed. We could all maintain our own DNS and map domains as needed, rather like adding contacts to an address book. We could also build a network of trusted DNS servers, as Telecomix has proposed. I think something along these lines will become quite common after the transition to IPv6, and in combination with IPsec and proxy servers, will render most censorship methods totally ineffective.
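The ‘address book’ idea can be sketched in a few lines of Python – a hypothetical local map consulted before the public hierarchy (the class name and the addresses below are made up for illustration):

```python
class AddressBookDNS:
    """Personal name-to-address map, consulted before the public hierarchy."""

    def __init__(self, fallback=None):
        self.entries = {}          # the user's own 'address book'
        self.fallback = fallback   # e.g. the ordinary system resolver

    def add(self, name, addr):
        # Like adding a contact: the user vouches for this mapping.
        self.entries[name] = addr

    def lookup(self, name):
        if name in self.entries:
            return self.entries[name]    # local answer, nothing to censor
        if self.fallback:
            return self.fallback(name)   # only then ask the hierarchy
        raise KeyError(name)


# Dummy fallback standing in for a real resolver, so the sketch runs offline.
book = AddressBookDNS(fallback=lambda name: "192.0.2.1")
book.add("telecomix.org", "91.216.248.10")   # made-up address, for illustration
print(book.lookup("telecomix.org"))          # resolved locally
print(book.lookup("example.com"))            # falls through to the fallback
```

Entries added by hand can’t be seized or blocked at a registrar, which is the whole appeal – the obvious cost being that the user now shoulders the job of keeping the mappings current.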
P2P-DNS is getting the most attention from tech journalists, and Gary Richmond posted a good article on P2P-DNS on the Free Software Magazine site. This particular system is the most likely replacement because it already exists, and because of the vast numbers of people already using P2P for various reasons. Although much remains to be worked out, anyone can get started running a P2P-DNS node by downloading the software from SourceForge.net.
Recent EU press release: ‘European Commission Vice-President Neelie Kroes has invited Karl-Theodor zu Guttenberg, a former Federal Minister of Defence, and of Economics and Technology, in Germany, to advise on how to provide ongoing support to Internet users, bloggers and cyber-activists living under authoritarian regimes. This appointment forms a key element of a new No Disconnect Strategy’. Full text here…
Various groups have been fighting censorship since the 1990s. The central question is whether the No Disconnect Strategy can add anything new to this.
The three main people currently behind the Strategy – Neelie Kroes, Catherine Ashton and Karl zu Guttenberg – have a very difficult task ahead, and maybe little idea how to tackle it yet. Not only can existing systems like BT’s CleanFeed and HP Dragon be readily modified to censor anything, numerous vendors are trading similar technology to authoritarian regimes, in what seems an unregulated industry. These are systems the Strategy must work against to be effective.
Another thing to consider is that the censorship/anti-censorship issue is being fought on two levels. On one level there are the politicians, lawyers and civil rights activists who argue their cases, start campaigns, raise awareness, etc.
Then there are those actually developing ways to defeat censorship, which requires a good amount of expertise when the adversary’s a nation state with an expensive filtering system and the ability to blacklist proxy servers. At the moment there aren’t enough people in the civil rights groups with the knowledge and skills to do this, and perhaps not enough engagement with groups like Telecomix, Chaos Computer Club and the Global Internet Freedom Consortium. These are the people who can make the Strategy workable.
Why Neelie Kroes selected Guttenberg and Ashton as advisors is a mystery. Neither of them has any technical background or appears qualified for the position, and Guttenberg has a record of advocating censorship. Numerous people have raised their concerns about this at Kroes’ own blog.
It seems the Patriot Act revision, which allows warrantless access to anything people store at a data centre in the United States, is already putting US-based cloud service providers at a disadvantage. Some European providers are using it as a selling point. Their potential clients can’t be accused of any misunderstanding over the Patriot Act, as protections against any unauthorised access to their data should always be a core demand.
I don’t see the situation changing anytime soon. It’s something most SaaS providers and their customers didn’t anticipate when negotiating SLAs, so the firms actually running the data centres have little obligation to challenge the government.
There are two further arguments here, which lead us to a rather tricky conundrum. Some, like myself, believe security, privacy and free expression are paramount, and must be protected by solid technical measures. Others believe law enforcement agencies should have ready access to anyone’s data, in the interests of fighting crime and terrorism.
Stephen Biggs, from the University of Wales Newport, takes the latter position and puts forward a reasonable argument for it. At least 70% of electronic crime is related to indecent images (and videos) of children, and potentially many criminals are utilising the cloud. In truth, we don’t know the actual extent of this problem, because any incriminating data is hard to access, hard to attribute, and even harder to present as reliable evidence. But should that argument be applied to the Patriot Act and undefined ‘terrorists’? Who exactly are the terrorists, and how many of them really are using the cloud?
Although I take a much different position to Biggs here, he pointed out in a recent conversation something most of us never thought of: Everyone’s being encouraged to outsource the storage and management of their information assets to third parties, but cloud computing isn’t mature enough for this. A decade ago, nobody was discussing the security issues related to it either. In other words, maybe we’re entirely wrong to assume confidentiality can be guaranteed with cloud computing.
Paradoxically, too little trust is given for the cloud computing industry to reach its full market potential. Any organisation can be compromised, and that risk is multiplied when another third party also has access to the data. Normally the SaaS and PaaS companies already have access to it, and now so does the US government. A compromise could happen through any of those entities.
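A back-of-the-envelope calculation shows why the risk multiplies. Assuming (unrealistically) that each party with access has an independent chance of being compromised, the probability the data leaks through at least one of them grows with every party added – the figures below are purely illustrative, not real breach statistics:

```python
# If each of n parties with access to the data has an independent
# probability p of being compromised over some period, the chance the
# data is exposed through at least one of them is 1 - (1 - p)^n.

def exposure_risk(p, parties):
    return 1 - (1 - p) ** parties

# Illustrative figures only: say p = 5% per party per year.
print(exposure_risk(0.05, 1))   # the client alone
print(exposure_risk(0.05, 3))   # client + provider + government access path
```

Three parties at 5% each puts the combined exposure around 14% – nearly triple the single-party risk, and that’s before considering that the parties aren’t independent at all.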
Combined with the earlier point, I reckon it’s just a matter of time before a major provider and its corporate clients are compromised. Others are aware of that risk, and aren’t prepared to take it, especially with intellectual property theft and industrial espionage allegedly on the rise.
First off, I’d like to offer my condolences to Joe Weiss, the SCADA ‘cyber war expert’ who made an ass of himself over the water pump incident last month.
Basically the scare story about Russian hackers compromising a US water facility was based entirely on an IP address – not that any of us were told what it was. An IP address on its own is never a reliable indication of where an attack originated.
So a report, which wasn’t open to scrutiny, investigation or analysis by any of us, found its way to Weiss who called ‘cyber war’. I called bullshit, because it had the signs of a hoax. There was no investigation, no reliable evidence, and my money was on the simplest and therefore most likely explanation – hardware failure. The only worrying thing here was the apparent lack of fault finding, incident handling and reporting procedures.
The facts have just been revealed by Wired’s Threat Level blog after an interview with the engineer (Jim Mimlitz) who set up the Curran Gardner Public Water District’s control system.
It turns out the engineer logged into the system while on holiday in Russia. After the water pump’s failure five months later, someone noticed the address in the logs and notified the Statewide Terrorism and Intelligence Center without the engineer being contacted, even though his username was listed next to that address. It was just assumed that hacker(s) had spent five months messing about with the system.
It’s the SCADAPOCALYPSE story they’ve all been waiting for – two water utility systems in the US were apparently hacked. Countless reporters were quick to remind us of STUXNET and to point out the alleged attack was traced to a server in Russia. There were the predictable ‘Why the hell was SCADA connected to the Internet?’ comments after every article.
Few of us really understand SCADA technology, beyond the fact it’s related to critical infrastructure. My own experience is limited to messing about with PLCs, HMIs and small-scale offline industrial systems, so I’m no expert either. But anyone in the information security business will develop a bullshit detector, and for me the indicators are a) unverifiable claims, b) liberal uses of the word ‘cyber’, and c) the absence of specific details.
The story is some hacker(s) acquired login details from a software firm’s database and used them to mess with a water facility, eventually breaking one of the pumps. Whether the attack was traced to a proxy in Russia is irrelevant, especially since we aren’t told the actual IP address.
Nobody’s asking the most important questions here: Which software firm was supposedly compromised? When was it compromised? Are its other clients at risk, and have they been informed? Why did it take months to figure out the system was being interfered with? How many other factors contributed to the incident?
The lack of any real information here can mean one of two things – this is a hoax, or there’s indeed something potentially serious the infosec community aren’t being told about.
Neither is this strictly a SCADA issue. If company bosses put systems online because it’s cheaper and more convenient, that kind of thinking will be applied elsewhere. And it was, if there’s any truth to the story – the water facility was compromised because the login details were pulled from another company’s database. Even a relatively secure SCADA system can be compromised because of shit key management, because of social engineering, because nothing’s been audited, or a host of other reasons within the infosec realm.
Joe Weiss, referenced as an expert on control systems, claims to have read a one page intelligence report which doesn’t name the company that was hacked, can’t be seen by the public, and therefore can’t be verified. Weiss goes on to say: ‘We don’t have cyber forensics, so when they see (issues) they don’t think it’s a cyber problem’. But the FBI has ‘cyber forensics’, they investigated, and said they couldn’t find evidence.
Someone going by the name Pr0f claimed to have hacked into the second facility, and released screenshots to prove it. Nowhere, in his PasteBin entries or several interviews with him, does he indicate any inside knowledge of the systems, although his second PasteBin entry is dead right on several things, especially this:
‘“Cyberwar” is unlikely to happen, in my opinion. I’ve met enough .mil types to know that they’re pretty grounded in reality; blame spokespeople for the irritating craze of adding “cyber-” to everything. Even the concept of cyberwar is ridiculous; war is a meatspace occurrence and simply couldn’t have a digital equivalent.’
What does the US government make of all this? For a start, the Illinois and Texas local authorities haven’t published anything on their sites, and seem quite unaware of the incident. The ICS-CERT/FBI issued a joint statement denying there was any evidence of an attack. What I find interesting is that people actually believed Weiss and Pr0f more than the government.
I looked a little deeper, and did a quick search for Curran-Gardner Township Water District, which was named by Wired.com. The results showed the company had been making patches and modifications to its system for several years, and had encountered a few problems along the way. I’d say it’s more likely something just broke. The spokesman for that company told a local paper: ‘Whether the burnout of that pump was related to this what might or might not have been a hacking, we don’t know’.