Virtual ATMs – Complexity for the Sake of It?

For some reason this story stayed under my radar until I heard about it on the Tenable Security podcast. Last summer, Diebold and VMware demonstrated a prototype virtual ATM system – basically a cash machine with the OS hosted at a data centre. Something tells me the marketing hype will win, and this ‘innovative solution’ will eventually become commonplace.

Personally I reckon cloud computing is overrated. It’s useful for many things, but it isn’t always a good solution. Diebold’s vice president says: ‘This development is an important milestone on Diebold’s roadmap to leveraging cloud computing technology in the retail financial space.’ But is that actually a good idea, or just outsourcing for the sake of it? Let’s bust a couple of myths, in particular the claims about efficiency and complexity that I deciphered from the marketing speak.

First off, the current ATM system is generally more efficient and secure than Diebold’s VM concept. Yes, it’s not perfect, but still… It boils down to complexity and the amount of traffic on the network. Complexity introduces more points of failure, more points of entry and sometimes interoperability flaws between a system’s components. Those are all things to avoid when designing networks for security.

Instead of handling everything locally and exchanging data with the relevant banks over the SWITCH network, Diebold’s system introduces VMs as an intermediary, so now we have to worry about security locally, at the data centre, and at the points between. And there’s a lot more network traffic going in and out of the local machine. Tim Conneally of BetaNews.com anticipates problems related to the logical addressing of the terminals and their VMs, including man-in-the-middle attacks, which are unlikely but quite possible.
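
Those ‘points between’ are exactly where a man-in-the-middle attack would sit, so at minimum the terminal-to-data-centre link needs mutually authenticated, encrypted transport. Here’s a minimal Python sketch of the idea, with hypothetical hostnames and certificate files of my own invention; it isn’t Diebold’s actual protocol, just an illustration that each end ought to verify the other before any transaction data flows.

```python
import socket
import ssl

# Hypothetical hostname and certificate paths, purely for illustration.
DATA_CENTRE_HOST = "atm-vm.examplebank.net"

# Verify the data centre's certificate against the bank's own CA...
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="bank-ca.pem")
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED
# ...and present a client certificate identifying this particular terminal,
# so the data centre can refuse connections from unknown endpoints.
ctx.load_cert_chain(certfile="terminal.crt", keyfile="terminal.key")

with socket.create_connection((DATA_CENTRE_HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=DATA_CENTRE_HOST) as tls:
        print("Negotiated", tls.version(), "with", tls.getpeercert()["subject"])
```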
Contrary to what the press release claimed, the local ATMs still require onboard computers to handle network connections, hardware I/O, whatever encryption is used, and so on. Essentially they’re thin clients. They’d need servicing, just like current ATMs, and without biometrics they could still be compromised through the usual attacks – shoulder surfing, card skimming, hidden cameras, fake keypads, social engineering, etc.

Complexity also means engineering problems, especially in terms of error handling and fault finding, both of which are fairly critical here. For example, what happens if the connection is dropped between dispensing cash and charging an account? What about something happening outside the scope of the software? How would the engineers quickly determine whether a fault is occurring in the terminal, VM or elsewhere? Have they considered the need for firmware updates somewhere along the line?
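
To make the dropped-connection problem concrete, here’s a rough Python sketch of the ordering an ATM has to get right: authorise and debit first, dispense second, and reverse the debit if dispensing fails. The helper names (debit_account, dispense_cash, reverse_debit, queue_for_reconciliation) are my own placeholders, not any real ATM API; the point is that a network failure between the two steps leaves the system needing a reversal or reconciliation path.

```python
import uuid

# Hypothetical stand-ins for the banking host and the cash dispenser;
# placeholders for illustration, not a real ATM API.
def debit_account(account, amount):
    return str(uuid.uuid4())        # pretend the host accepted the debit

def reverse_debit(txn_id):
    pass                            # pretend the host reversed it

def dispense_cash(amount):
    return False                    # pretend the dispenser jammed

def queue_for_reconciliation(txn_id):
    print("needs manual reconciliation:", txn_id)

def withdraw(account, amount):
    """Debit first, dispense second; undo the debit if dispensing fails."""
    txn_id = debit_account(account, amount)   # network round-trip to the host/VM
    if dispense_cash(amount):                 # purely local hardware action
        return True
    try:
        reverse_debit(txn_id)                 # dispenser jammed or empty
    except ConnectionError:
        # The awkward case: the link drops between the two steps, so the host
        # still thinks the customer walked away with the cash. Something has
        # to queue a reversal or flag the transaction for reconciliation.
        queue_for_reconciliation(txn_id)
    return False

print(withdraw("12345678", 50))
```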

I’ve noticed commenters at The Register were mostly concerned about reliability, which is interesting because that shouldn’t be an issue. Current ATMs are already reliant on the public infrastructure. The VMs themselves would be an improvement, perhaps running a better OS and being closely managed. Data centres maintain something like 99.999% uptime, with excellent failure recovery.
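
For context, ‘five nines’ works out to only a few minutes of unplanned downtime a year:

```python
# Downtime per year implied by 99.999% ("five nines") availability.
minutes_per_year = 365.25 * 24 * 60
print(round(minutes_per_year * (1 - 0.99999), 1), "minutes/year")  # ~5.3
```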

Biometrics Again
The demonstration terminal also included biometrics, which I’m not a great fan of. Far from being a miracle solution, biometrics simply add a layer of security under certain conditions. They’re not infallible and the sun doesn’t shine out of their ass, especially where a reader is just tacked onto a standard serial interface with badly-written software doing the authentication.
Biometric readers are just input devices, with the authentication itself done at the other end, which means a given system could potentially be subverted by pulling the reader off and sending whatever signal/data straight down the wire. Simpler methods of defeating them are sometimes demonstrated at hacker conferences. As far as ATMs are concerned, biometric readers can make life harder for attackers, provided they’re highly tamper-resistant, part of a multi-factor authentication system and reliable enough in operation.
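
One way to blunt the ‘send it straight down the wire’ attack is to bind each reading to a fresh challenge from the host, so a captured response is useless the next time round. The Python sketch below uses an HMAC with a key that would live inside the tamper-resistant reader; the key, message format and function names are mine, purely to illustrate the idea, not how any particular reader actually works.

```python
import hashlib
import hmac
import secrets

# Shared secret that would be provisioned inside the tamper-resistant reader.
READER_KEY = secrets.token_bytes(32)

def reader_respond(challenge, template_hash):
    """What the reader sends: a MAC binding this scan to this challenge."""
    return hmac.new(READER_KEY, challenge + template_hash, hashlib.sha256).digest()

def host_verify(challenge, template_hash, response):
    expected = hmac.new(READER_KEY, challenge + template_hash, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)                    # fresh nonce per attempt
template_hash = hashlib.sha256(b"scan data").digest()
resp = reader_respond(challenge, template_hash)
print(host_verify(challenge, template_hash, resp))               # True
print(host_verify(secrets.token_bytes(16), template_hash, resp)) # replay fails
```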

SOPA: The Security Threat of 2011

One of the major Internet security events this year is the Stop Online Piracy Act (SOPA), even if it’s gone largely unmentioned by others in the infosec community. SOPA is about censoring, or blocking, sites that allegedly infringe copyright in some way, but the implications go much deeper than that. While ‘cyber security’ is about protecting confidentiality, integrity and availability, SOPA demands the direct opposite: interference with the Domain Name System and possibly deep packet inspection by ISPs. In fact, SOPA looks remarkably similar to what’s already been implemented in China.

Even if the DNS isn’t strictly essential for a functioning Internet, it’s a core system people rely on. SOPA undermines its intended purpose as a reliable, trusted and definitive addressing system, and if domains start getting routinely blocked, reliability degrades to the point where the system is no longer usable, because no domain is safe. The organisations running the root DNS servers cease to be trusted authorities.

These same concerns were pointed out in an open letter, signed by over 80 of the engineers who actually created the technologies that make up the Internet. One particular statement in that letter sums things up:
‘We cannot have a free and open Internet unless its naming and routing systems sit above the political concerns and objectives of any one government or industry. To date, the leading role the US has played in this infrastructure has been fairly uncontroversial because America is seen as a trustworthy arbiter and a neutral bastion of free expression. If the US begins to use its central position in the network for censorship that advances its political and economic agenda, the consequences will be far-reaching and destructive.’

DNS Security
One of the things that threw a spanner in the works and caused the postponement of SOPA is DNSSEC, which is designed to stop criminals redirecting users to fraudulent sites. If I understand it correctly, a browser or resolver using DNSSEC will keep searching until it finds a server that can resolve a given domain and authenticate the answer. Whether the system’s being interfered with by a judge or a criminal is irrelevant, because it’s responding exactly as it should to an attempted compromise. To get around this, provisions were added to criminalise anyone developing countermeasures to SOPA orders.
Stewart Baker posted: ‘Browsers implementing DNSSEC will have to circumvent and bypass criminal blocking, and in the process, they will also circumvent and bypass SOPA orders. The new bill allows the AG to sue the browsers if he decides he cares more about enforcing his blocking orders than about the security risks faced by Internet users.’
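
For a rough idea of the validation this collides with, here’s a small Python sketch using the third-party dnspython library. It’s a stub that trusts a DNSSEC-validating recursive resolver rather than checking signatures itself: it sets the DNSSEC-OK flag on the query and looks for the Authenticated Data (AD) bit in the answer. The nameserver choice is my assumption; the point is that a blocked or tampered response simply fails to authenticate, and the software can’t tell a court order from an attacker.

```python
import dns.flags
import dns.resolver  # from the third-party "dnspython" package

resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]          # assumes a validating resolver
resolver.use_edns(0, dns.flags.DO, 4096)    # request DNSSEC records

answer = resolver.resolve("isc.org", "A")   # isc.org is a signed zone
authenticated = bool(answer.response.flags & dns.flags.AD)
print("addresses:", [r.address for r in answer])
print("DNSSEC-validated (AD flag set):", authenticated)
```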

It’s an interesting problem. Should developers intentionally remove DNSSEC in order to comply with SOPA provisions? Is it possible to comply with SOPA without aiding the very criminals DNSSEC is designed to stop? Should we conclude the US Cyber Security Strategy becomes meaningless bullshit if SOPA gets passed?

Now for the Good News
There is good news, though. As I’ve pointed out before, politicians, copyright lawyers and the like are quite ignorant of how the Internet works, and they’re being opposed by hackers, developers and engineers determined to find ways of keeping the Internet open and free. We can guess the eventual outcome of this. In China, where the most draconian censorship is found, necessity became the mother of innovation, which is perhaps why there are so many hacker groups focusing on security there. I believe SOPA could eventually lead to the unintended consequence of a more secure and decentralised Internet.

Several alternatives to the current hierarchical Domain Name System are being pushed. We could all maintain our own DNS records and map domain names as needed, rather like adding contacts to an address book. We could also build a network of trusted DNS servers, as Telecomix has proposed. I think something along these lines will become quite common after the transition to IPv6, and in combination with IPsec and proxy servers, it will render most censorship methods totally ineffective.
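
The ‘address book’ idea is essentially a personal hosts file with a fallback. A toy Python version might look like the sketch below; the names and addresses are made up (they use the reserved documentation ranges), and a real setup would just edit the system hosts file or run a local resolver.

```python
import socket

# Personal "address book" of names we trust, checked before the public DNS.
LOCAL_NAMES = {
    "example-blog.net": "203.0.113.10",   # made-up entries for illustration
    "friends-wiki.org": "198.51.100.7",
}

def lookup(name):
    if name in LOCAL_NAMES:
        return LOCAL_NAMES[name]          # our own mapping wins
    return socket.gethostbyname(name)     # otherwise fall back to normal DNS

print(lookup("example-blog.net"))
print(lookup("python.org"))
```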

P2P-DNS is getting the most attention from tech journalists, and Gary Richmond posted a good article on P2P-DNS on the Free Software Magazine site. This particular system is the most likely replacement because it already exists, and because of the vast numbers of people already using P2P for various reasons. Although much is still to be worked out, anyone can get started running a P2P-DNS node by downloading the software from SourceForge.net.