Tag Archives: cyberattack

Emergency Patch Issued for Samba, WannaCry-type Bug Exploitable with One Line of Code

The team behind the free networking software Samba has issued an emergency patch for a remote code execution vulnerability. Tom Spring, reporting for Threatpost, writes: “The flaw poses a severe threat to users, with approximately 104,000 Samba installations vulnerable to remote takeover. More troubling, experts say, the vulnerability can be exploited with just one line of code.” The Samba team, which issued the patch on Wednesday, says “all versions of Samba from 3.5.0 onwards are vulnerable to a remote code execution vulnerability, allowing a malicious client to upload a shared library to a writable share, and then cause the server to load and execute it.”
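For administrators who cannot apply the patch immediately, the Samba security advisory for this flaw (CVE-2017-7494) describes a configuration workaround: disabling named-pipe support. A minimal smb.conf sketch, which should be checked against your distribution’s advisory since it disables some functionality used by Windows clients:

```ini
[global]
    # Workaround from the Samba security advisory for CVE-2017-7494:
    # prevents clients from opening named pipes, blocking the exploit
    # path until the fixed packages can be installed.
    nt pipe support = no
```

Restart smbd after the change for it to take effect.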

“Comparisons are being made between the WannaCry ransomware attacks… because like WannaCry, the Samba vulnerability could be a conduit for a ‘wormable’ exploit that spreads quickly. Also, any exploit taking advantage of the Samba vulnerability would also take advantage of bugs in the same SMB protocol used by the NSA exploits used to spread WannaCry.” –Tom Spring, Threatpost, 25 May 2017

No signs of attacks had been seen in the 12 hours after the flaw’s disclosure was announced. “[I]t had taken researchers only 15 minutes to develop malware that made use of the hole. … This one seems to be very, very easy to exploit … more than 100,000 computers [are found] running vulnerable versions of the software, Samba, free networking software developed for Linux and Unix computers. There are likely to be many more.” –Jeremy Wagstaff and Michael Perry, Reuters, 25 May 2017

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity, Malware

Continue reading

Posted in circleid | Tagged , , | Comments Off on Emergency Patch Issued for Samba, WannaCry-type Bug Exploitable with One Line of Code

Security Costs Money. So – Who Pays?

Computer security costs money. It costs more to develop secure software, and there’s an ongoing maintenance cost to patch the remaining holes. Spending more time and money up front will likely result in lower maintenance costs going forward, but too few companies do that. Besides, even very secure operating systems like Windows 10 and iOS have had security problems and hence require patching. (I just installed iOS 10.3.2 on my phone. It fixed about two dozen security holes.) So — who pays? In particular, who pays after the first few years when the software is, at least conceptually if not literally, covered by a “warranty”?

Let’s look at a simplistic model. There are two costs, a development cost $d and an annual support cost $s for n years after the “warranty” period. Obviously, the company pays $d and recoups it by charging for the product. Who should pay $n·s?
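The simplistic model above can be written down directly. The names d, s, and n follow the text; the dollar figures below are made-up illustrations, not real industry numbers:

```python
# Toy lifecycle-cost model: a one-time development cost d plus an
# annual post-warranty support cost s for n years.
def lifecycle_cost(d: float, s: float, n: int) -> float:
    """Total cost of ownership: development plus n years of support."""
    return d + n * s

# Spending more up front (larger d) can shrink s; the open question
# is who covers the n*s term once n grows large.
cheap = lifecycle_cost(d=1_000_000, s=400_000, n=10)   # rushed build
robust = lifecycle_cost(d=2_000_000, s=150_000, n=10)  # hardened build
print(cheap, robust)  # 5000000 3500000
```

Even in this toy setting, the hardened build wins over a decade — yet only if someone is still around, and funded, to pay the recurring s each year.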

Zeynep Tufekci, in an op-ed column in the New York Times, argued that Microsoft and other tech companies should pick up the cost. She notes the societal impact of some bugs:

As a reminder of what is at stake, ambulances carrying sick children were diverted and heart patients turned away from surgery in Britain by the ransomware attack. Those hospitals may never get their data back. The last big worm like this, Conficker, infected millions of computers in almost 200 countries in 2008. We are much more dependent on software for critical functions today, and there is no guarantee there will be a kill switch next time.

The trouble is that n can be large; the support costs could thus be unbounded.

Can we bound n? Two things are very clear. First, in complex software, no one will ever find the last bug. As Fred Brooks noted many years ago, in a complex program patches introduce their own, new bugs. Second, achieving a significant improvement in a product’s security generally requires a new architecture and a lot of changed code. It’s not a patch, it’s a new release. In other words, the most secure current version of Windows XP is better known as Windows 10. You cannot patch your way to security.

Another problem is that n is very different for different environments. An ordinary desktop PC may last five or six years; a car can last decades. Furthermore, while smart toys are relatively unimportant (except, of course, to the heart-broken child and hence to his or her parents), computers embedded in MRI machines must work, and work for many years.

Historically, the software industry has never supported releases indefinitely. That made sense back when mainframes walked the earth; it’s a lot less clear today when software controls everything from cars to light bulbs. In addition, while Microsoft, Google, and Apple are rich and can afford the costs, small developers may not be able to. For that matter, they may not still be in business, or may not be findable.

If software companies can’t pay, perhaps patching should be funded through general tax revenues. The cost is, as noted, society-wide; why shouldn’t society pay for it? As a perhaps more palatable alternative, the costs to patch old software could be covered by something like the EPA Superfund for cleaning up toxic waste sites. But who should fund the software superfund? Is there a good analog to the “polluter pays” principle? A tax on software? On computers or IoT devices? It’s worth noting that it isn’t easy to simply say “so-and-so will pay for fixes”. Coming up to speed on a code base is neither quick nor easy, and companies would have to deposit with an escrow agent not just complete source and documentation trees but also a complete build environment — compiling a complex software product takes a great deal of infrastructure.

We could outsource the problem, of course: make software companies liable for security problems for some number of years after shipment; that term could vary for different classes of software. Today, software is generally licensed with provisions that absolve the vendor of all liability. That would have to change. Some companies would buy insurance; others would self-insure. Either way, we’re letting the market set the cost, including the cost of keeping a build environment around. The subject of software liability is complex and I won’t try to summarize it here; let it suffice to say that it’s not a simple solution nor one without significant side-effects, including on innovation. And we still have to cope with the vanished vendor problem.

There are, then, four basic choices. We can demand that vendors pay, even many years after the software has shipped. We can set up some sort of insurance system, whether run by the government or by the private sector. We can pay out of general revenues. If none of those work, we’ll pay, as a society, for security failures.

Written by Steven Bellovin, Professor of Computer Science at Columbia University


More under: Cyberattack, Cybercrime, Malware, Policy & Regulation, Security


Bell Canada Discloses Loss of 1.9 Million Email Addresses to Hacker, Says No Relation to WannaCry

Bell Canada, the nation’s largest telecommunications company, disclosed late on Monday that an anonymous hacker had illegally accessed Bell customer information. The information obtained is reported to include email addresses, customer names and/or telephone numbers. From the official release: “There is no indication that any financial, password or other sensitive personal information was accessed. … The illegally accessed information contains approximately 1.9 million active email addresses and approximately 1,700 names and active phone numbers. … This incident is not connected to the recent global WannaCry malware attacks.”


More under: Cyberattack, Cybercrime, Email, Security


It’s Up to Each of Us: Why I WannaCry for Collaboration

WannaCry, or WannaCrypt, is one of the many names of the piece of ransomware that impacted the Internet last week, and will likely continue to make the rounds this week.

There are a number of takeaways and lessons to learn from the far-reaching attack that we witnessed. Let me tie those to voluntary cooperation and collaboration, which together represent the foundation of the Internet’s development. The reason for making this connection is that they provide the way to get the global cyber threat under control: not just to keep ourselves and our vital systems and services protected, but to reverse the erosion of trust in the Internet.

The attack impacted financial services, hospitals, and small and medium-sized businesses. It will also impact trust in the Internet because it immediately and directly affected people in their day-to-day lives. One environment in particular raises everybody’s eyebrows: hospitals.

Let’s share a few takeaways:

On Shared Responsibility

The solutions here are not easy: they depend on the actions of many. They depend on individual actors taking action, and on shared responsibility.

Fortunately, there are a number of actors that take their responsibility seriously. There is a whole set of early responders, funded by the private and public sectors and sometimes working as volunteers, who immediately set out to analyze the malware and collaborate to find root causes, share experience, work with vendors, and provide insights that enable specific countermeasures.

On the other hand, it is clear that not all players are up to par. Some have done things (clicked on links in mails that spread the damage) or failed to do things (deploy a firewall, back up data, or upgrade to the latest OS version) that exacerbated the problem.

When you are connected to the Internet, you are part of the Internet, and you have a responsibility to do your part.

On proliferation of digital knowledge

The bug that was exploited by this malware purportedly came out of a leaked NSA cache of stockpiled zero-days. There are many lessons, but fundamentally the lesson is that data one keeps can, and perhaps will, eventually leak. Whether we talk about privacy related data-breaches or ‘backdoors’ in cryptography, one needs to assume that knowledge, once out, is available on the whole of the Internet.

Permissionless innovation

The attackers abused the openness of the environment — one of the fundamental properties of the Internet itself. That open environment allows new ideas to be developed on a daily basis and also allows them to become global. Unfortunately, those innovations are available for abuse too. The use of Bitcoin for ransom payments is an example of that. We should try to preserve the inventiveness of the Internet.

It is also our collective responsibility to promote innovation for the benefit of the people and to deal collectively with bad use of tools. Above all, the solutions to the security challenges we face should not limit the power of innovation that the Internet allows.

Internet and Society

Society is impacted by these attacks. This is clearly not an Internet-only issue. This attack upset people, rightfully so. People have to solve these issues, technology doesn’t have all the answers, nor does a specific sector. When looking for leadership, the idea that there is a central authority that can solve all this is a mistake.

The leadership is with us all, we have to tackle these issues with urgency, in a networked way. At the Internet Society we call that Collaborative Security. Let’s get to work.

This post is a reprint of a blog published at the Internet Society.

Written by Olaf Kolkman, Chief Internet Technology Officer (CITO), Internet Society


More under: Cyberattack, Cybercrime, Internet Governance, Security


Patching is Hard

There are many news reports of a ransomware worm. Much of the National Health Service in the UK has been hit; so has FedEx. The patch for the flaw exploited by this malware has been out for a while, but many companies haven’t installed it. Naturally, this has prompted a lot of victim-blaming: they should have patched their systems. Yes, they should have, but many didn’t. Why not? Because patching is very hard and very risky, and the more complex your systems are, the harder and riskier it is.

Patching is hard? Yes — and every major tech player, no matter how sophisticated they are, has had catastrophic failures when they tried to change something. Google once bricked Chromebooks with an update. A Facebook configuration change took the site offline for 2.5 hours. Microsoft ruined network configuration and partially bricked some computers; even their newest patch isn’t trouble-free. An iOS update from Apple bricked some iPad Pros. Even Amazon knocked AWS off the air.

There are lots of reasons for any of these, but let’s focus on OS patches. Microsoft — and they’re probably the best in the business at this — devotes a lot of resources to testing patches. But they can’t test every possible user device configuration, nor can they test against every software package, especially if it’s locally written. An amazing amount of software inadvertently relies on OS bugs; sometimes, a vendor deliberately relies on non-standard APIs because there appears to be no other way to accomplish something. The inevitable result is that on occasion, these well-tested patches will break some computers. Enterprises know this, so they’re generally slow to patch. I learned the phrase “never install .0 of anything” in 1971, but while software today is much better, it’s not perfect and never will be. Enterprises often face a stark choice with security patches: take the risk of being knocked off the air by hackers, or take the risk of knocking yourself off the air. The result is that there is often an inverse correlation between the size of an organization and how rapidly it installs patches. This isn’t good, but with the very best technical people, both at the OS vendor and on site, it may be inevitable.

To be sure, there are good ways and bad ways to handle patches. Smart companies immediately start running patched software in their test labs, pounding on it with well-crafted regression tests and simulated user tests. They know that eventually, all operating systems become unsupported, and they plan (and budget) for replacement computers, and they make sure their own applications run on newer operating systems. If they won’t, they update or replace those applications, because running on an unsupported operating system is foolhardy.
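The smart-company practice described above — soak in a test lab, then widen the deployment only if nothing breaks — can be sketched as a toy canary rollout. Everything here (host names, thresholds, the simulated pass rate) is illustrative, not any vendor’s real tooling:

```python
import random

def staged_rollout(hosts, patch_ok_rate, canary_fraction=0.05, max_failures=0):
    """Patch a small canary slice of the fleet first; abort the
    fleet-wide rollout if more than max_failures canaries break.
    patch_ok_rate simulates the chance a host survives the patch."""
    random.shuffle(hosts)
    cutoff = max(1, int(len(hosts) * canary_fraction))
    canaries, rest = hosts[:cutoff], hosts[cutoff:]
    failures = [h for h in canaries if random.random() > patch_ok_rate]
    if len(failures) > max_failures:
        return {"status": "aborted", "failed_canaries": failures}
    return {"status": "rolled_out", "patched": len(canaries) + len(rest)}

# A perfectly safe patch rolls out everywhere; a guaranteed-bad one
# is caught at the canary stage instead of breaking the whole fleet.
fleet = [f"host{i}" for i in range(100)]
print(staged_rollout(list(fleet), patch_ok_rate=1.0)["status"])   # rolled_out
print(staged_rollout(list(fleet), patch_ok_rate=-1.0)["status"])  # aborted
```

The hard part, of course, is everything the simulation hides: building regression tests that actually exercise the shipping-label software, and having somewhere to remediate when a canary fails.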

Companies that aren’t sophisticated enough don’t do any of that. Budget-constrained enterprises postpone OS upgrades, often indefinitely. Government agencies are often the worst at that, because they’re dependent on budgets that are subject to the whims of politicians. But you can’t do that and expect your infrastructure to survive. Windows XP support ended more than three years ago. System administrators who haven’t upgraded since then may be negligent; more likely, they couldn’t persuade management (or Congress or Parliament…) to fund the necessary upgrade.

(The really bad problem is with embedded systems — and hospitals have lots of those. That’s “just” the Internet of Things security problem writ large. But IoT devices are often unpatchable; there’s no sustainable economic model for most of them. That, however, is a subject for another day.)

Today’s attack is blocked by the MS17-010 patch, which was released March 14. (It fixes holes allegedly exploited by the US intelligence community, but that’s a completely different topic. I’m on record as saying that the government should report flaws.) Two months seems like plenty of time to test, and it probably is enough — but is it enough time for remediation if you find a problem? Imagine the possible conversation between FedEx’s CSO and its CIO:

“We’ve got to install MS17-010; these are serious holes.”

“We can’t just yet. We’ve been testing it for the last two weeks; it breaks the shipping label software in 25% of our stores.”

“How long will a fix take?”

“About three months — we have to get updated database software from a vendor, and to install it we have to update the API the billing software uses.”

“OK, but hurry — these flaws have gotten lots of attention. I don’t think we have much time.”

So — if you’re the CIO, what do you do? Break the company, or risk an attack? (Again, this is an imaginary conversation.)

That patching is so hard is very unfortunate. Solving it is a research question. Vendors are doing what they can to improve the reliability of patches, but it’s a really, really difficult problem.

Written by Steven Bellovin, Professor of Computer Science at Columbia University


More under: Cyberattack, Cybercrime, Malware, Security
