Tag Archives: security

Security Costs Money. So – Who Pays?

Computer security costs money. It costs more to develop secure software, and there’s an ongoing maintenance cost to patch the remaining holes. Spending more time and money up front will likely lower maintenance costs going forward, but too few companies do that. Besides, even very secure operating systems like Windows 10 and iOS have had security problems and hence require patching. (I just installed iOS 10.3.2 on my phone. It fixed about two dozen security holes.) So — who pays? In particular, who pays after the first few years when the software is, at least conceptually if not literally, no longer covered by a “warranty”?

Let’s look at a simplistic model. There are two costs, a development cost $d and an annual support cost $s for n years after the “warranty” period. Obviously, the company pays $d and recoups it by charging for the product. Who should pay $n·s?
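The model above can be made concrete with a toy calculation (all numbers are hypothetical, chosen only to illustrate how the post-warranty term n dominates the bill):

```python
def total_cost(d: float, s: float, n: int) -> float:
    """Lifetime cost of a product: development cost d plus an annual
    support cost s for n years after the warranty period."""
    return d + n * s

# Same development and support costs, very different lifetimes:
# a desktop PC might need ~5 years of post-warranty support, while
# software embedded in an MRI machine or a car might need decades.
desktop = total_cost(d=10_000_000, s=1_000_000, n=5)
embedded = total_cost(d=10_000_000, s=1_000_000, n=30)
print(desktop, embedded)
```

With these made-up figures, support for the long-lived product (30 years) costs three times the original development budget, which is why an open-ended n makes "the vendor pays" so hard to sustain.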

Zeynep Tufekci, in an op-ed column in the New York Times, argued that Microsoft and other tech companies should pick up the cost. She notes the societal impact of some bugs:

As a reminder of what is at stake, ambulances carrying sick children were diverted and heart patients turned away from surgery in Britain by the ransomware attack. Those hospitals may never get their data back. The last big worm like this, Conficker, infected millions of computers in almost 200 countries in 2008. We are much more dependent on software for critical functions today, and there is no guarantee there will be a kill switch next time.

The trouble is that n can be large; the support costs could thus be unbounded.

Can we bound n? Two things are very clear. First, in complex software, no one will ever find the last bug. As Fred Brooks noted many years ago, in a complex program patches introduce their own, new bugs. Second, achieving a significant improvement in a product’s security generally requires a new architecture and a lot of changed code. It’s not a patch, it’s a new release. In other words, the most secure current version of Windows XP is better known as Windows 10. You cannot patch your way to security.

Another problem is that n is very different for different environments. An ordinary desktop PC may last five or six years; a car can last decades. Furthermore, while smart toys are relatively unimportant (except, of course, to the heart-broken child and hence to his or her parents), computers embedded in MRI machines must work, and work for many years.

Historically, the software industry has never supported releases indefinitely. That made sense back when mainframes walked the earth; it’s a lot less clear today when software controls everything from cars to light bulbs. In addition, while Microsoft, Google, and Apple are rich and can afford the costs, small developers may not be able to. For that matter, they may not still be in business, or may not be findable.

If software companies can’t pay, perhaps patching should be funded through general tax revenues. The cost is, as noted, society-wide; why shouldn’t society pay for it? As a perhaps more palatable alternative, perhaps costs to patch old software should be covered by something like the EPA Superfund for cleaning up toxic waste sites. But who should fund the software superfund? Is there a good analog to the potential polluters pay principle? A tax on software? On computers or IoT devices? It’s worth noting that it isn’t easy to simply say “so-and-so will pay for fixes”. Coming up to speed on a code base is neither quick nor easy, and companies would have to deposit with an escrow agent not just complete source and documentation trees but also a complete build environment — compiling a complex software product takes a great deal of infrastructure.

We could outsource the problem, of course: make software companies liable for security problems for some number of years after shipment; that term could vary for different classes of software. Today, software is generally licensed with provisions that absolve the vendor of all liability. That would have to change. Some companies would buy insurance; others would self-insure. Either way, we’re letting the market set the cost, including the cost of keeping a build environment around. The subject of software liability is complex and I won’t try to summarize it here; let it suffice to say that it’s not a simple solution nor one without significant side-effects, including on innovation. And we still have to cope with the vanished vendor problem.

There are, then, four basic choices. We can demand that vendors pay, even many years after the software has shipped. We can set up some sort of insurance system, whether run by the government or by the private sector. We can pay out of general revenues. If none of those work, we’ll pay, as a society, for security failures.

Written by Steven Bellovin, Professor of Computer Science at Columbia University


Bell Canada Discloses Loss of 1.9 Million Email Addresses to Hacker, Says No Relation to WannaCry

Bell Canada, the nation’s largest telecommunications company, disclosed late on Monday that Bell customer information had been illegally accessed by an anonymous hacker. The information obtained is reported to include email addresses, customer names and/or telephone numbers. From the official release: “There is no indication that any financial, password or other sensitive personal information was accessed. … The illegally accessed information contains approximately 1.9 million active email addresses and approximately 1,700 names and active phone numbers. … This incident is not connected to the recent global WannaCry malware attacks.”


WikiLeaks Releases CIA Malware Implants Called Assassin and AfterMidnight

The recent heavy news coverage of WannaCry has overshadowed the latest WikiLeaks release of critical CIA malware documentation: user manuals for two hacking tools named AfterMidnight and Assassin. Darlene Storm reporting in Computerworld writes: “WikiLeaks maintains that ‘Assassin’ and ‘AfterMidnight’ are two CIA ‘remote control and subversion malware systems’ which target Windows. Both were created to spy on targets, send collected data back to the CIA and perform tasks specified by the CIA… The leaked documents pertaining to the CIA malware frameworks included 2014 user’s guides for AfterMidnight, AlphaGremlin — an addon to AfterMidnight — and Assassin. When reading those, you learn about Gremlins, Octopus, The Gibson and other CIA-created systems and payloads.”


It’s Up to Each of Us: Why I WannaCry for Collaboration

WannaCry, or WannaCrypt, is one of the many names of the piece of ransomware that impacted the Internet last week, and will likely continue to make the rounds this week.

There are a number of takeaways and lessons to learn from the far-reaching attack that we witnessed. Let me tie those to voluntary cooperation and collaboration, which together form the foundation of the Internet’s development. I make this connection because cooperation and collaboration are the way to get the global cyber threat under control: not just to keep ourselves and our vital systems and services protected, but to reverse the erosion of trust in the Internet.

The attack hit financial services, hospitals, and small and medium-sized businesses. It will also damage trust in the Internet, because it immediately and directly affected people in their day-to-day lives. One environment in particular raises everybody’s eyebrows: hospitals.

Let’s share a few takeaways:

On Shared Responsibility

The solutions here are not easy: they depend on the actions of many. They depend on individual actors taking action, and on shared responsibility.

Fortunately, a number of actors do take their responsibility seriously. There is a whole set of early responders, funded by the private and public sectors and sometimes working as volunteers, who immediately set out to analyze the malware and collaborate: finding root causes, sharing experience, working with vendors, and providing insights for specific countermeasures.

On the other hand, it is clear that not all players are up to par. Some did things (clicked on links in mails that spread the damage) or failed to do things (deploy a firewall, back up data, or upgrade to the latest OS version) that exacerbated the problem.

When you are connected to the Internet, you are part of the Internet, and you have a responsibility to do your part.

On the Proliferation of Digital Knowledge

The bug exploited by this malware purportedly came out of a leaked NSA cache of stockpiled zero-days. There are many lessons here, but the fundamental one is that data one keeps can, and perhaps will, eventually leak. Whether we talk about privacy-related data breaches or “backdoors” in cryptography, one must assume that knowledge, once out, is available across the whole of the Internet.

Permissionless Innovation

The attackers abused the openness of the environment, one of the fundamental properties of the Internet itself. That open environment allows new ideas to be developed daily and to go global. Unfortunately, those innovations are available for abuse too; the use of Bitcoin for ransom payments is an example. We should try to preserve the inventiveness of the Internet.

It is also our collective responsibility to promote innovation for the benefit of the people and to deal collectively with bad use of tools. Above all, the solutions to the security challenges we face should not limit the power of innovation that the Internet allows.

Internet and Society

Society is impacted by these attacks; this is clearly not an Internet-only issue. The attack upset people, rightfully so. People have to solve these issues: technology doesn’t have all the answers, nor does any single sector. When looking for leadership, the idea that a central authority can solve all this is a mistake.

The leadership lies with us all; we have to tackle these issues with urgency, in a networked way. At the Internet Society we call that Collaborative Security. Let’s get to work.

This post is a reprint of a blog published at the Internet Society.

Written by Olaf Kolkman, Chief Internet Technology Officer (CITO), Internet Society
