The Untapped Power of Security Headers

User-targeted attacks, such as XSS, often involve manipulating the server or browser into behaving in a manner not originally intended. While this has always been a serious risk, new HTML5 technologies enhance browser capabilities enough to make client-side attacks an increasingly inviting attack vector.

For a web developer, protecting the client has traditionally been limited to preventing client-side injection and browser manipulation. Beyond that, there was not much more to be done other than helping users make smart security decisions. But with the new features of HTML5 come new ways to improve client-side security through security headers.

Security headers are HTTP response headers that direct the browser’s behavior to enhance a web site’s security. To prevent attackers from manipulating browser behavior, security headers allow you to be specific about your intentions for content served from your web site.

Some of these header specifications are fairly new or browser-specific and are not universally implemented across all browsers, but there is enough support that now is the time to start implementing them in some form.

If you aren’t already aware of the various security headers available, I thought I would do a quick summary:

 

Content Security Policy (CSP)

CSP is a least-privilege approach to security that uses the Content-Security-Policy, X-Content-Security-Policy, or X-WebKit-CSP headers (depending on the browser) to define exactly which resources the browser is allowed to load for any particular page and from where they can be loaded. So if you want a certain document to be able to load scripts only from a specific server, you can now specify that through the CSP header. CSP currently allows you to set restrictions for loading scripts, objects, styles, images, media, frames, and fonts. By default, CSP also blocks eval(), inline scripts, inline CSS styles, and data URIs. It further allows you to provide a reporting URL to help identify possible emerging threats, which is especially helpful for DOM-based XSS attacks that are normally difficult to detect.

At this point, CSP specifications vary between browsers and support in mobile browsers is somewhat limited. On top of that, many web sites use inline code and make use of the eval() function so initially full implementation is likely not possible. Still, even the most basic CSP implementation can cripple a large number of client-side attack vectors and is well worth the effort to implement.

In short, CSP is already a powerful tool for limiting client-side attacks and it looks to be an even more promising technique as the specification evolves.

Here is a demo page that shows how CSP can affect page elements (hint: use your browser’s developer panel to view messages about blocked elements).

Some tips for using CSP:

  • Use SSL to prevent tampering of CSP headers.
  • Set very specific origins for each type of resource rather than using the * wildcard.
  • If you don’t specify a policy for a resource type, the browser falls back to the default source, so use the default-src directive to set a blanket policy.
  • When using a reporting URL, make sure you take precautions to prevent that from introducing new vulnerabilities.
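Putting these tips together, a minimal policy might be sent like this (a PHP sketch; the script origin and report path are hypothetical):

```php
<?php
// A minimal sketch of a restrictive policy following the tips above.
// The script origin and report path are hypothetical; older browsers
// may expect the X-Content-Security-Policy or X-WebKit-CSP header instead.
header("Content-Security-Policy: "
     . "default-src 'none'; "                     // blanket fallback policy
     . "script-src https://static.example.com; "  // scripts from one specific origin
     . "style-src 'self'; "                       // styles only from this origin
     . "img-src 'self'; "
     . "report-uri /csp-report.php");             // reporting URL for violations
```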


 

Cross-Origin Resource Sharing (CORS)

CORS doesn’t directly increase security; if anything, it relaxes it. But it allows you to relax the same-origin policy in a controlled and granular manner to limit exposure to attack. Normally, the same-origin policy only allows a script to access resources from the same host. For those times when you need to access resources from other hosts, CORS lets you grant only the access your specific needs require. The CORS-related headers are:

  • Access-Control-Allow-Origin
  • Access-Control-Allow-Credentials
  • Access-Control-Expose-Headers
  • Access-Control-Max-Age
  • Access-Control-Allow-Methods
  • Access-Control-Allow-Headers
  • Access-Control-Request-Method
  • Access-Control-Request-Headers
  • Origin

Tips for using CORS:

  • Never allow GET or OPTIONS requests to modify data.
  • Use the Access-Control-Allow-Origin headers as needed on a per-resource basis, rather than for the entire domain.
  • Only use Access-Control-Allow-Origin: * for publicly-accessible static resources that do not include sensitive information or modify data.
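A sketch of these tips in PHP (the allowed origin, methods, and cache lifetime are illustrative assumptions):

```php
<?php
// A hedged sketch of per-resource CORS headers; the allowed origin,
// methods, and cache lifetime are illustrative assumptions.
header('Access-Control-Allow-Origin: https://app.example.com'); // a specific origin, not *
header('Access-Control-Allow-Methods: GET, POST');              // keep GET and OPTIONS read-only
header('Access-Control-Allow-Headers: Content-Type');
header('Access-Control-Max-Age: 86400');                        // let the browser cache the preflight for a day
```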

 

HTTP Strict Transport Security (HSTS)

HSTS uses the Strict-Transport-Security header to tell the web browser to load resources from your site only over HTTPS. Once specified, the HSTS header will cause the browser to upgrade all HTTP requests to HTTPS. It also prevents users from overriding invalid certificate errors for your site.

Tips for using HSTS:

  • The browser only needs to see the header once so you generally only need to set it for documents, not for the resources loaded by those documents.
  • Because the browser will only honor HSTS headers delivered via HTTPS, you must still configure your site to redirect all resources to HTTPS so the browser gets the HSTS header.
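A sketch of both tips in PHP (the one-year max-age is an illustrative choice):

```php
<?php
// A sketch of both tips: redirect plain HTTP to HTTPS, and send the HSTS
// header only on HTTPS responses (browsers ignore it elsewhere).
// The one-year max-age is an illustrative choice.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}
header('Strict-Transport-Security: max-age=31536000');
```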

 

X-Frame-Options

The X-Frame-Options header allows you to direct browsers not to embed a document in a frame, or to only embed it from documents of the same origin. The purpose of this is to reduce exposure to clickjacking attacks. Although this header does not guarantee your content will never appear in a frame on all clients, all major web browsers will honor it, and browsers are the primary mechanism for clickjacking attacks. Most web sites should implement this, at a minimum allowing framing only from the same origin.

The X-Frame-Options header provides two options: DENY, to block all frame embeds, or SAMEORIGIN, to only allow the page to be framed by documents from the same origin.

 

IE-Specific Headers

Internet Explorer provides support for additional security headers not found in other web browsers. Although these only protect a portion of your user base, even a small decrease in attack surface is worth the effort.

X-Content-Type-Options: NoSniff

One IE-specific feature, the X-Content-Type-Options: nosniff header, prevents the browser from second-guessing declared MIME types. For example, IE will not load a script unless the server specifically identifies it as a script through the Content-Type response header.

X-Download-Options: NoOpen

This header allows a web server to specify that a document should never be loaded in the browser and must be saved locally to view. This helps to prevent the browser from loading scripts and other potentially dangerous resources.

X-XSS-Protection

The X-XSS-Protection header allows you to control the behavior of the IE XSS Filter, which protects against a number of reflective XSS attacks. XSS protection is enabled by default in Internet Explorer, but this header provides you with two additional options to control the filter.

First, if the filter causes problems with your web site and your site is already well-tested for XSS vulnerabilities, you can disable the filter using this header:

X-XSS-Protection: 0

Second, you can make the filter more aggressive by completely blocking a page when XSS is detected, rather than trying to simply filter out the attack. You can enable this with the header:

X-XSS-Protection: 1; mode=block

 

Implementation Strategy

The best thing about most of these security headers is that you generally do not need a full implementation to benefit from some of their protections. Although the ideal solution would be to coordinate security header implementation between developers and web administrators, developers can implement many of these headers on a limited basis via code. Even if you are unable to implement security headers right now, it is important for developers to be aware of these features to lay the groundwork for future implementation. Moving away from inline scripts and styles, for example, is a major task that must be undertaken to prepare for full CSP deployment. Start making those changes now, because these security headers will eventually play a major role in client security.
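For example, the simpler headers can be centralized in one place (a PHP sketch with illustrative values; the function name is hypothetical):

```php
<?php
// A sketch of centralizing security headers in one function so every
// response sends a consistent set. The function name and values are
// illustrative; expand it with CSP, CORS, and HSTS as your site allows.
function send_security_headers() {
    header('X-Frame-Options: SAMEORIGIN');      // allow framing only from this origin
    header('X-Content-Type-Options: nosniff');  // stop IE from second-guessing MIME types
    header('X-Download-Options: noopen');       // IE: save downloads, never open them in place
    header('X-XSS-Protection: 1; mode=block');  // IE: block the page entirely when XSS is detected
}
send_security_headers();
```

Calling a function like this from a common bootstrap file keeps the headers consistent and gives you one place to tighten the policy later.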


The Pathetic Reality of Adobe Password Hints

The leak of 150 million Adobe passwords in October this year is perhaps the most epic security leak we have ever seen. It was huge. Not just because of the sheer volume of passwords, but also because it’s such a large dump from a single site, allowing for a much better analysis than earlier sets. But there’s something unique about the Adobe dump that makes it even more insightful: the fact that it includes about 44 million password hints. Even though we still haven’t decrypted the passwords, the data is extremely useful.

One thing I have pondered over the years in analyzing passwords is trying to figure out *what* the password is. I can determine if the password contains a noun or a common name, but I can’t always determine what that noun or name means to the user.

For example, if the password is Fred2000, is that a dog’s name and a date? An uncle and his anniversary? The user’s own name and the year they set up the account? Once we know the significance of a password we gain a huge insight into how users select passwords. But I have never been able to come up with a method to even remotely measure this factor. Then came the Adobe dump.

The sheer amount of data in the Adobe dump makes it a bit overwhelming and somewhat difficult to work with. But if you remove the least common and least useful hints, the data becomes a bit more manageable. Using a trimmed-down set of about 10 million passwords, I was able to better work with the data and come up with some interesting insights.
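The tallying itself is straightforward; here is a rough sketch in PHP of the kind of counting behind the tables below (the file name and keyword list are my own illustrative assumptions, not the actual analysis code):

```php
<?php
// A rough sketch of tallying hint frequencies from the working set.
// Assumes one hint per line in hints.txt; the keyword list is illustrative.
$totals    = array();
$dateWords = array('birthday', 'bday', 'date', 'birth', 'dob',
                   'niver', 'fecha', 'naissance', 'anniversary');
$dateCount = 0;
$lineCount = 0;

foreach (new SplFileObject('hints.txt') as $line) {
    $hint = strtolower(trim($line));
    if ($hint === '') {
        continue;
    }
    $lineCount++;
    $totals[$hint] = isset($totals[$hint]) ? $totals[$hint] + 1 : 1;
    foreach ($dateWords as $word) {
        if (strpos($hint, $word) !== false) {
            $dateCount++;
            break;  // count each hint at most once per category
        }
    }
}

arsort($totals);  // most common hints first
printf("Date-related hints: %d (%.1f%%)\n", $dateCount, 100 * $dateCount / max(1, $lineCount));
```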

Just glancing at the top one hundred hints, several patterns immediately become clear. In fact, what we learn is that a large percentage of the passwords are the name of a person, the name of a pet, the name of a place, or an important date.

Take dates for example. Consider the following list of top date-related hints:

Hint          Total   Note
birthday      29425
bday          17697
date          15272
birth         14956
DOB           13109
niver          9484   Spanish: anniversary (short for aniversario)
fecha          8899   Spanish: date
naissance      7892   French: birth
anniversary    6959

In all, there are about 420,000 passwords with a date-related hint, which represents about 3.6% of the passwords in the working set.

We see similar trends with dog names, which account for 375,000 passwords or 3.2% of the total (plus another 120,000 that mention “pet”):

Hint          Total   Note
dog           70550
dogs name     13559
my dog         9780
dog’s name     8191
dog name       8187
perro          8000   Spanish: dog
hund           7185   German, Danish, Swedish, Norwegian: dog
first dog      5653
chien          5542   French: dog
doggy          5184

One interesting insight offered here is something we already know but find difficult to measure: password reuse. Surely a large percentage of these users have the same password across multiple sites, but it is interesting to see that about 361,000 users (or 3.11%) state this fact in their password hints:

Hint             Total   Note
same             44565
password         14634
always           13329
la de siempre     8559   Spanish: the usual / same as always
same as always    8289
usual password    5277
same old          5111
siempre           4163   Spanish: always
normal password   3898
my password       3022

Keep in mind that these are just those passwords that admit to reuse in the hint. The number of passwords actually in use across multiple sites certainly is much greater than this.

Looking at the three lists above, we see that nearly 10% of the passwords fall into just these 3 categories. Adding names of people and places will likely account for 10% more.

So what did we learn by analyzing these hints? First, that you should never use password hints. If users forget their password, they should use the password reset process. Second, that decades of user education have completely failed. No matter how much we advise against using dates, family names, or pet names in passwords, and no matter how much we tell people not to use the same password on multiple sites, you people will just do it anyway.

This is why we can’t have nice password policies.

 

 

9 Ways to Restrain the NSA


With U.S. Government surveillance being a hot news issue lately, several members of Congress have stepped up and started working on bills to place limits on NSA powers. Although these are admirable attempts, most proposals likely won’t have much effect on NSA operations. So of course I thought I’d propose some points that, at a minimum, any surveillance bill should cover.

1. No backdoors or deliberate weakening of security

The single most damaging aspect of the recent NSA revelations is that the agency deliberately weakened cryptography and caused companies to bypass their own security measures. If we can’t trust the security of our own products, everything falls apart. Although this has had the side effect of spurring the internet community to fill that void, we still need to be able to trust basic foundations such as crypto algorithms.

Approaching a company to even suggest they weaken security should be a crime.

A related issue: the mass collection of 0-day exploits. I have mixed feelings on how to limit this, but we at least need limits. The fact is that government law enforcement and military organizations are sitting on tens of thousands of security flaws that put us all at risk. Rather than reporting these flaws to vendors to get them fixed and make us all secure, they set them aside for years, waiting for the opportunity to exploit them. There are many real threats we all face out there, and it is absurd to think that others can’t discover these same flaws and exploit us. By sitting on 0-days, our own government is treating us all as its personal cyberwar pawns.

2. Create rules for collection as well as searches

We saw how the NSA exploited semantics to get away with gathering personal records by simply not calling it a search. They got away with it once; we should never allow that excuse again. Any new law should clearly define both searches and collection and apply strict rules to each.

3. Clear definition of national security

Since the Patriot Act, law enforcement agencies have stretched and abused the definitions of national security and terrorism so much that almost anything can fall under those terms. National security should only refer to imminent or credible domestic threats from foreign entities. Drug trafficking is not terrorism. Hacking a school computer is not terrorism. Copyright infringement is not terrorism.

4. No open-ended gag orders

Gag orders make sense for ongoing investigations or perhaps to protect techniques used in other investigations but there has to be a limit. Once an investigation is over, there is no valid reason to indefinitely prevent someone from revealing basic facts about court orders. That is, there’s no reason to hide this fact unless your investigations are perhaps stretching the laws.

5. No lying to Congress or the courts


It’s disconcerting that I would even need to say this, but giving false information to protect classified information should be a crime. The NSA can simply decline to answer certain questions like everyone else does when it comes to sensitive information. Or there’s always the 5th amendment if the answer to a question would implicate them in a crime.

6. Indirect association is not justification

Including direct contacts in surveillance may be justified, but including friends of friends of friends is really pushing it and includes just about everyone. So there’s that.

7. No using loopholes

The NSA is not supposed to spy on Americans, but it can legally spy on other countries. The same goes for other countries: they can spy on the US. So if the NSA needs info on Americans, it can simply go to its spying partners and bypass any legal restrictions. Any limits on spying on Americans must also cover information obtained from spy partners.

And speaking of loopholes, many of the surveillance abuses we have seen recently are due to loopholes or creative interpretation of the laws. Allowing the Government to keep these interpretations secret is setting the system up for abuse. We need transparency for loopholes and creative interpretations.

8. No forcing companies to lie

Again, do I even have to say this? The NSA and FBI will ultimately destroy the credibility of US companies unless the law specifically guarantees that when people like Mark Zuckerberg say they don’t give secret access to the US government, they are telling the truth.

9. Strong whistleblower immunity

We saw how self-regulation, court supervision, and congressional oversight have overwhelmingly failed to protect us from law enforcement abuses. There is only one way to ensure compliance with the law: strong whistleblower protection. We need insiders to let us know when the NSA or other agencies make a habit of letting the rules slide.

Whistleblowers need non-governmental, anonymous, third-party protection. We need to exempt these whistleblowers from prosecution and provide them with legal yet powerful alternatives to going public. You’d think that even the NSA would prefer fighting this battle in a court over facing leaks of highly confidential documents. In fact, I think the only reason to oppose these laws is if you actually have something to hide. The NSA’s fear of transparency should be a blaring alarm that something is horribly wrong.

The NSA thinks that the public response has been unfair and will severely limit its ability to protect us. What it doesn’t seem to understand is why we have these limits in the first place. When the NSA can only focus on foreign threats, it has no interest in domestic law enforcement. Suspicionless spying is incompatible with domestic law enforcement and justice systems.

The greatest concern, however, is the unchecked executive and military power. The fact that there has been so much for Snowden to reveal demonstrates the level of abuse. Unfortunately, the capabilities are already in place so even legal limits are largely superficial and self-enforced. It would be trivial to ignore those laws in a national security emergency.

I cringe at the thought of becoming one of those people warning others to be afraid, but that is why we put limits on the government, so we know we don’t ever have to be afraid. We solve the little problems now so we don’t have to face the big problems later. We understand the need for surveillance, we just need to know when the cameras point at us.

 

 

Fingerprints and Passwords: A Guide for Non-Security Experts

Today Apple announced that the iPhone 5S will have a fingerprint scanner. Many of us in the security community are highly skeptical of this feature, while others see it as a smart security move. Then of course there are the journalists who see fingerprints as the ultimate password killer. Clearly there is some disagreement here. I thought I’d lay this out for those of you who need to better understand the implications of using fingerprints instead of, or in addition to, passwords.

Biometrics, like usernames and passwords, are a way to identify and authenticate yourself to a system. We all know that passwords can be weak and difficult to manage, which makes it tempting to call every new authentication product a password killer. But despite their flaws, passwords must always play some role in authentication.

The fact is that while passwords do have their flaws, they also have their strengths. The same is true of biometrics. You can’t just replace passwords with fingerprints and say you’ve solved the problem, because you have also introduced a few new problems.

To clarify this, below is a table comparing the characteristics of passwords and biometrics, with a check mark (✓) where one method has a clear advantage:

Passwords                                          | Biometrics
Difficult to remember                              | Don’t have to remember ✓
Requires unique passwords for each system          | Can be used on every system ✓
Nothing else to carry around                       | Nothing else to carry around
Take time to type                                  | Easy to swipe/sense ✓
Prone to typing errors                             | Prone to sensor or algorithm errors
Immune to false positives ✓                        | Susceptible to false positives
Easy to enroll ✓                                   | Some effort to enroll
Easy to change ✓                                   | Impossible to change
Can be shared among users¹                         | Cannot be shared ✓
Can be used without your knowledge                 | Less likely to be used without your knowledge ✓
Cheap to implement ✓                               | Requires hardware sensors
Work anywhere including browsers & mobile ✓        | Require separate implementation
Mature security practice ✓                         | Still evolving
Non-proprietary ✓                                  | Proprietary
Susceptible to physical observation                | Susceptible to public observation
Susceptible to brute force attacks                 | Resistant to brute force attacks ✓
Can be stored as hashes by untrusted third party ✓ | Third party must have access to raw data
Cannot personally identify you ✓                   | Could identify you in the real world
Allow for multiple accounts ✓                      | Cannot use to create multiple accounts
Can be forgotten; password dies with a person      | Susceptible to injuries, aging, and death
Susceptible to replay attacks                      | Susceptible to replay attacks
Susceptible to weak implementations                | Susceptible to weak implementations
Not universally accessible to everyone             | Not universally accessible to everyone
Susceptible to poor user security practices        | Not susceptible to poor practices ✓
Lacks non-repudiation                              | Moderate non-repudiation ✓

¹ Can be both a strength and a weakness

 

What Does This Tell Us?

As you can see, biometrics are clearly not a wholesale replacement for passwords, which is why so many security experts cringe when every biometrics company’s press release claims to be the ultimate password killer. Biometrics do have some clear advantages over passwords, but they also have numerous disadvantages; both can be weak and both can be strong, depending on the situation. Now, the list above is not weighted–certainly some of the items are more important than others–but the point here is that you can’t simply compare passwords to biometrics and say that one is better than the other.

However, one thing you can say is that when you use passwords together with biometrics, you have something that is significantly stronger than either of the two alone. This is because you get the advantages of both techniques and only a few of the disadvantages. For example, we all know that you can’t change your fingerprint if compromised, but pair it with a password and you can change that password. Using these two together is referred to as two-factor authentication: something you know plus something you are.

It’s not clear, however, if the Apple implementation will allow for you to use both a fingerprint and password (or PIN) together.

Now, speaking specifically about the iPhone’s implementation of a fingerprint sensor, there are some interesting points to note. First, building the sensor into the phone makes up for some of the usual biometric disadvantages: enrollment is simple, no separate hardware sensor is needed, and privacy concerns are reduced because the fingerprint data is only stored locally. Another interesting fact is that the phone itself is actually a third factor of authentication: something you possess. When combined with the other two factors, it becomes an extremely reliable form of identification for use with other systems. A compromise would require being in physical possession of your phone, having your fingerprint, and knowing your PIN.

Ultimately, the security of the fingerprint scanner largely depends on the implementation, but even if it isn’t perfect, it is better than those millions of phones with no protection at all.

There is also the privacy issue that some have brought up: is this just a method for the NSA to build a master fingerprint database? Apple’s implementation encrypts and stores fingerprint data locally using trusted hardware. Whether this is actually secure remains to be seen, but keep in mind that your fingerprints aren’t really that private: you literally leave them on everything you touch.

 

 

8 Ways to Prepare for CSP

Cross-Site Scripting (XSS) is a critical threat that, despite widespread training, still plagues a large number of web sites. Preventing XSS attacks can get complicated but even a small effort can go a long way–a small effort that nevertheless seems to evade us. Still, developers are getting better at input filtering and output escaping which means we are at least headed in the right direction.

Handling input and output aren’t the only strategies available to us. Content Security Policy (CSP) is an HTTP response header that, when correctly implemented, significantly reduces exposure to XSS attacks. CSP is exactly what its name implies: a security policy for your web content.


CSP not only allows you to whitelist browser content features on a per-resource basis, but also lets you whitelist those features on a per-host basis. For example, you might tell the browser it can load scripts for a page, but only if they come from a specific directory on your own web server. CSP allows you to set restrictions on scripts, styles, images, XHR, WebSocket, EventSource, fonts, embedded objects, audio, video, and frames.

One powerful feature of CSP is that by default it blocks inline scripts, inline styles, the eval() function, and data: URI schemes–all common XSS vectors. The problem is that this is also where CSP starts breaking existing code, which could be a major obstacle to its widespread adoption. This is an all-too-common problem with frameworks, code libraries, plugins, and open source applications. If you write code that many other people use and don’t start getting it ready for CSP, you kind of hold us all back. CSP does allow you to re-enable blocked features, but that defeats the purpose of implementing content security policies.

So getting to my point, here are some things developers can do to their code to at least get ready for CSP:

  1. Remove inline scripts and styles. Surely you already know that it’s good practice to separate code from presentation; now’s a good time for us all to stop being lazy and separate our code.
  2. Ditch the eval() function. Another thing we’ve all known to avoid for quite some time, but it still seems to show up. Most importantly, if you are working with JSON, make sure you parse it instead of eval’ing it (see the sketch after this list). It’s rare to find a situation where there is no secure alternative to eval; you might be surprised how creative you can be.
  3. Don’t rely on data: schemes. Most often used for embedding icons into your code, data: URIs are a powerful XSS vector. The problem isn’t your own use of them, which normally is safe; it’s that attackers might use them, so the best solution is to disable them altogether. On pages that don’t work with user input in any form, you are probably safe to keep data: URIs enabled.
  4. Create an organized, isolated directory structure. Scripts with scripts, images with images. Keeping content separate makes fine-grained CSP policies so much easier.
  5. Document features needed for each file. A good place to document features required for each file is in the source code itself, such as in a PHPDoc comment. By doing this, when you implement CSP you can start with the most restrictive policy and add only necessary features.
  6. Centralize your HTTP response headers code. A centralized function to send required headers makes it easier to keep track of it all and avoid hard-to-debug errors later.
  7. Eliminate unnecessary and superfluous scripts. It’s sometimes hard to give up cool features for security, but good discipline here can pay off. This is a business decision based on your threat model, but it’s always a good question to ask when adding new stuff.
  8. Mention it whenever possible. Yes, you should be that person who talks about CSP; so much that people simply stop inviting you to meetings.
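As a quick illustration of point 2, here is a PHP sketch; the same principle applies to client-side JavaScript, where JSON.parse replaces eval:

```php
<?php
// A sketch of point 2: parse JSON instead of executing it as code.
// The input string here is illustrative.
$json = '{"user":"alice","admin":false}';

$data = json_decode($json, true);  // safe: data stays data (true = associative arrays)
if ($data === null) {
    die('invalid JSON');           // handle parse errors explicitly
}
echo $data['user'];                // prints: alice
```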

And if you aren’t quite sure about how CSP works, here is some recommended reading:

An Introduction to Content Security Policy

Preventing XSS with Content Security Policy Presentation

Content Security Policy 1.0 Spec

Browser Support

CSP at OWASP

CSP Playground

CSP Readiness

 

 

So What Exactly Did The US Government Ask Lavabit to Do?

The recent shutdown of Lavabit’s email services prompted a flurry of reporting and speculation about the extent of US Government spying, mostly due to a mysterious public statement by Lavabit founder Ladar Levison.

Most of us saw this as yet another possibly overhyped government spying issue and didn’t really think too much of it. Much of the media coverage is already starting to die down, but there is still some question as to exactly what the government required of Levison that left him with only one option: shutting down the entire business he built from the ground up. I wondered if there were enough clues out there to get some more insight into this case. I started by looking at exactly what Lavabit offered and how it all worked behind the scenes.

Lavabit Encryption

Lavabit claimed they had “developed a system so secure that it prevents everyone, including us, from reading the e-mail of the people that use it.” This is a bold claim and one that surely was a primary selling point for their services.

The way it worked is relatively simple: Lavabit encrypted all incoming mail with the user’s public key before storing the message on their servers. Only the user, with the private key and password, could decrypt messages. Normally with encrypted email, users store private keys on their own computers, but in the case of Lavabit, they stored the users’ private keys, each encrypted with a hash of that user’s password. This is by no means the most secure way of doing this, but it dramatically increases transparency and usability for the user. Users do not need to worry about managing private keys, for example, and they still have access to their email from any computer.

So let’s break this down: a user logs in with their password. This login might occur via POP3, IMAP4, or through the web interface (which in turn connected internally via IMAP). Because Lavabit used the user’s password to encrypt the private key, they needed the original plaintext password, which means they could not support any secure authentication methods. In other words, all clients had to send passwords using AUTH PLAIN or AUTH LOGIN with nothing more than base64 encoding. The webmail interface appears to have been available as both SSL and non-SSL, and the POP3, IMAP4, and SMTP interfaces all seem to have accepted connections with or without SSL. All SSL connections terminated at the application tier.

Once a user sends a password, the Lavabit servers create SHA-512 hashes explained as follows:

… Lavabit combines the password with the account name and a cryptographic salt. This combined string is then hashed three consecutive times, with the former iteration’s output being used as the input value of the next iteration. The output of the first hash iteration is used as the secret passphrase for AES [encryption of the private key]. The third iteration is stored in our password database and is used to verify that users entered their password correctly.

The process they describe produces two hashes: one for decrypting the user’s private key and, after two more hashing iterations, a hash to store in the database for user authentication. While this is a fairly secure process, given strong user passwords, it does weaken Lavabit’s claim that even their administrators couldn’t read your email. In reality, all it would take is a few lines of code to log the user’s original password, which allows you to decrypt the private key, which in turn allows you to receive and send mail as that user as well as access any stored messages.
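Expressed as code, the derivation they describe looks roughly like this (a PHP sketch; the exact concatenation order and the hex encoding between iterations are my assumptions):

```php
<?php
// A sketch of the derivation quoted above. The concatenation order and
// hex encoding between iterations are assumptions on my part.
function lavabit_style_hashes($account, $password, $salt) {
    $first  = hash('sha512', $password . $account . $salt); // iteration 1: AES passphrase for the private key
    $second = hash('sha512', $first);                       // iteration 2
    $third  = hash('sha512', $second);                      // iteration 3: stored to verify logins
    return array('aes_passphrase' => $first, 'stored_hash' => $third);
}
```

Anyone who can modify this code path also holds the plaintext password, which is exactly the weakness described above.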


It is important to note that the scope of Lavabit’s encryption was limited to storage on its own servers. The public keys were for internal use and not something you published for others to use. Full protection would require employing PGP or S/MIME and having untapped SSL connections between all intermediate servers. On the other hand, if an email sent through Lavabit was already encrypted with PGP or S/MIME, Lavabit would never be able to intercept or read it.

The question here is what exactly the government asked Levison to do that was so bad he’d rather shut down his entire business. What information could Lavabit even produce that would be of interest to a government agency? Unencrypted emails, customer IP addresses, customer payment methods, and customer passwords. Based on media statements, it appears he would have been required to provide unencrypted copies of all emails going through his system.

Let’s look at some quotes Levison has given to various media outlets. First, here are some quotes from an interview with CNET:

“We’ve had a couple of dozen court orders served to us over the past 10 years, but they’ve never crossed the line.”

“Philosophically, I put myself in a position that I was comfortable turning over the information that I had. I built Lavabit in a reaction to the original Patriot Act.”

“Where the government would hypothetically cross the line is to violate the privacy of all of my users. This is not about protecting a single person or persons, it’s about protecting all my users. What level of access to this nation does the government have?”

“Why should I collect that info if I didn’t need it? [That philosophy] also governed what kind of information I logged.”

“Unfortunately, what’s become clear is that there’s no protections in our current body of law to keep the government from compelling us to provide the information necessary to decrypt those communications in secret.”

“If you knew what I know about e-mail, you might not use it either.”

In an article from NBC News, we have this:

Levison stressed that he has complied with “upwards of two dozen court orders” for information in the past that were targeted at “specific users” and that “I never had a problem with that.” But without disclosing details, he suggested that the order he received more recently was markedly different, requiring him to cooperate in broadly based surveillance that would scoop up information about all the users of his service. He likened the demands to a requirement to install a tap on his telephone. Those demands apparently began about the time that Snowden surfaced as one of his customers, apparently triggering a secret legal battle between Levison and federal prosecutors.

And finally in an interview with RT he said:

I think the amount of information that they’re collecting on people that they have no right to collect information on is the most alarming thing,” he told RT. “I mean, the Fourth Amendment is supposed to guarantee that our government will only conduct surveillance on people in which it has a probable suspicion or evidence that they are committing some crime, and that that evidence has been reviewed by a judge and signed off by a judge before that surveillance begins. And if there’s anything alarming, it’s that now that’s all being done after the fact. Everything’s being recorded, and then a judge can after the fact say it’s okay to go look at the information.

Given the above information, let’s analyze some of the facts we know:

  • The government asked Lavabit to do something which Levison considered to be a crime against the American people.
  • Levison was comfortable with, and had complied with, warrants requesting information on specific users.
  • Levison told Forbes that “This is about protecting all of our users, not just one in particular.”
  • Levison is not able to reveal some details even to his own attorney or employees.
  • Shutting down operations was an option to circumvent compliance, although there was a veiled threat that he could be arrested for doing so.
  • He did not delete customer data; he still has that in his possession, so this was a request for ongoing surveillance.
  • This was a court order, which Levison is fighting through the US Court of Appeals for the Fourth Circuit.
  • Levison compared the request to installing a tap on his telephone.

Apparently what made Levison uncomfortable with the request was the fact that it collected information about all users, without regard to a warrant. Presumably law enforcement wanted to collect everything now and then retroactively view whatever it deemed necessary once it had a warrant. The two issues here are that the Government wanted to collect information on innocent users (including Levison himself) and that Levison would be out of the loop completely, taking away his control over what information he provided to law enforcement. These were the lines the Government crossed.

What’s interesting here is that Lavabit terminated the SSL connections right on the application servers themselves. These are the servers that also performed the encryption of email messages. Because of that, a regular network tap would be ineffective. The only ways to perform the broad surveillance Levison objected to would be (in order of likelihood):

  1. Force Lavabit to provide their private SSL keys and route all their traffic through a government machine that performed a man-in-the-middle style data collection;
  2. Change their software to subvert Lavabit’s own security measures and log emails after SSL decryption but before encrypting with the users’ public keys; or
  3. Require Lavabit to install malicious code to infect their own customers with government-supplied malware.

Sure, this could have been a simple request to put a black box on Lavabit’s network and Levison is just overreacting, but the evidence doesn’t seem to indicate that. Whichever of these the Government demanded, any of them would make Levison’s entire business a lie; all efforts to encrypt messages would be pointless. Surely there were some heated words spoken when the Department of Justice heard about Levison’s decision, but this is not an act of civil disobedience on Levison’s part; his personal integrity was on the line. Compliance would make his very reason for running Lavabit a deception; a government-sponsored fraud.

While Lavabit initially had quite a bit of media coverage over this issue, the hype seems to be a casualty of our frenzied news cycle. But after looking closely at the facts here, I now see that this is a monumentally important issue, one that the media needs to once again address. The message here is that US courts can force a business to subvert its own security measures and lie to its customers, deliberately giving them a false sense of security. Companies can say what they want about security on their web sites; it means nothing. If they did it to Lavabit, how many hundreds or thousands of other US companies already participate in this deception?

If the courts can force a business to lie, we can never again trust the security claims of any US company. The reason so many businesses specifically rely on US services is the sense of stability and trust. How sad that an overreaching and panicked pursuit of a whistleblower has thrown that all away.

This issue is so much more than a simple civil liberties dispute; it is the integrity of a nation at stake. We walked with the devil in a time of need–that is a legacy we must live with–but at what point do we sever that relationship and return to the integrity required to lead the world through respect and not fear?

 

Lavabit, 2004-2013

 

UPDATE: Since publishing this post, a Wired article has revealed that Lavabit was in fact required to supply its private SSL keys, as suspected above.

 

Should You Ditch LastPass?

Steve Thomas, aka Sc00bz, has brought up some very interesting issues about the LastPass password manager that are causing some confusion, so I thought I’d give another perspective on the issue.

Summary of Steve’s points:

  1. When you use the LastPass web site to log in to your account, your web browser will first send a hash with a single iteration, no matter how many iterations you have set for your account. It isn’t until this hash fails that the server tells the browser the correct number of iterations to use.
  2. LastPass has a default setting of 500 iterations (at least at that time, now it recommends 5000 iterations).
  3. The extension should warn you if it is going to send a hash with fewer iterations than what you have set.
  4. LastPass does not encrypt the URLs of sites stored in your password database.

LastPass hashes your password rather than sending the plain text to the server when you log in. The algorithm it uses is sha256(sha256(email + password) + password). This hash, while not necessarily insecure, can be cracked in a reasonable amount of time with ordinary hardware unless the user has a relatively strong password. It isn’t until after this single-iteration hash is sent that the LastPass server responds and tells the browser exactly how many iterations it should use; the hash is then sent again using the correct number of iterations. More iterations means it will take much more time to crack your password. A good minimum number of iterations is 5,000. If you go too high with the number of iterations, some clients such as mobile phones may be very slow logging in.
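In code, that single-iteration hash looks roughly like this (a PHP sketch; the hex encoding of the intermediate hash is my assumption):

```php
<?php
// A sketch of the single-iteration login hash described above:
// sha256(sha256(email + password) + password), hex-encoded.
function lastpass_style_hash($email, $password) {
    return hash('sha256', hash('sha256', $email . $password) . $password);
}
```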


The mitigating factors here are:

  1. You are logging in via SSL so the primary threats here are a MitM attack with spoofed SSL certificates, a government warrant, or a government spy agency.
  2. They still need to crack your hash so if you have a very strong password, even a single iteration hash could provide a reasonable amount of protection.
  3. A second factor of authentication, country restrictions, blocking Tor logins, restricting mobile access, and other settings still protect your account from unauthorized logins, unless the attacker is able to obtain your stored hashes through hacking, a warrant, or spying.

One thing I might also add is that the server is telling the client how many iterations it expects, so this does make an attack much easier if someone acquires your hash.

My opinion is that this is an issue that certainly should be addressed, but it is not serious enough to warrant abandoning LastPass altogether unless your individual threat model includes the NSA or other government agencies. The LastPass plugin should identify when someone is logging in to LastPass via the web login and provide the client-side script with the correct number of iterations; the server should never respond with this at all. However, if someone is logging in through a web browser that doesn’t have LastPass with their account data installed, having the server send the number of iterations is the only option.

The only proper solution here is to have your primary login password be different from your decryption password, at least for accessing the web interface if not everywhere. That way, the number of iterations is never publicly revealed and sending a single-iteration hash would be unnecessary. Other companies such as RoboForm use this method. I have always wanted this feature and I would highly recommend LastPass implement it if feasible.

As for the other points, the default iteration count mentioned in number 2 has been addressed, and the warning mentioned in number 3 would be a good thing to add if it hasn’t been already, though this would not be possible when logging in through a web browser that doesn’t have your LastPass account data installed.


As for the unencrypted URLs mentioned in number 4, LastPass’s response was that this is necessary to grab favicons. Although unencrypted URLs may not be an issue for most users, there certainly are scenarios where you would want them encrypted. LastPass should make this an option for the user.

LastPass does provide strong security controls, although there clearly is room for improvement. If you do not find LastPass to be secure enough, the only reasonable alternative I would recommend is KeePass, which puts you in complete control over your data while still being quite usable. I would not recommend ditching LastPass, but I would recommend that LastPass address these issues. I would also recommend that Steve Thomas keep up the great research he provides to the community.

I have not heard a recent response from LastPass on these issues but would love to hear from them. I will update this post if and when I do.


Disclosure: I am a LastPass user and I get a free month of premium service whenever someone clicks on banners located on this site. I have no affiliation with LastPass and receive no other compensation from the company. 

Thanks NSA for Ruining the Internet

I know, we have been told for years that the NSA has been spying on us. The revelations in recent months really aren’t that new. We always assumed this was looming over us, and many of us have even greeted various government agencies in our private chats and emails (e.g., “I want to blow that up, j/k nsa, LOL, no really just kidding”).

On the other hand, our lives are full of conspiracy theories that nag at us: Is that mirror in the dressing room a two-way mirror? Is that webcam on my laptop secretly recording me? Why is that black Suburban parked on my street? Fortunately, it’s easy to dismiss these things as conspiracy—that is until two guys step out of that Suburban and approach your door. Edward Snowden’s leaks about NSA spying made it that real to us.

Perhaps the most frustrating aspect of this is the reaction by our government. To them the problem isn’t that they are spying, it’s that we found out about it. If we just don’t know about it then everything will be okay, right? Fire some admins, tell us about a few of their programs, and maybe issue a terror alert to help us see how much they are helping us. Clearly they are missing the point here. What bothers us isn’t just the spying, it’s the loss of trust. Not just in our government, but in almost everything we do online. We can’t trust our email, our phone conversations, text messages, or online chats. Right, we kind of already knew this.


But once we found out that it is acceptable to lie in the name of national security, that changed everything. Suddenly, when the NSA says there are no domestic spying programs, is that really true, or is it the least untruthful answer they are allowed to give us? When an online service we use says they don’t give information to the NSA, is that true, or are they being compelled to deny it? Do terms of service and privacy policies mean anything anymore when national security trumps everything?

Do we now just assume that all online privacy is compromised? If not by our own government, by some other entity? What about our online backups, our cloud storage services, online notes, bookmarks, calendars, to-do lists, photos, accounting, online banking, hosted web servers, password management services, or medical records? And what about encryption, do we even trust our current technologies anymore?

Denials by the NSA or these companies don’t mean anything to us now because how do we know these aren’t just National Security denials?

Fortunately, in the end this will be good for us. We will be forced to develop technologies that make us all more secure. We will step up to the challenge, putting us back in control of our data. Improving security will be the new civil disobedience.

In the meantime, thanks NSA for ruining the internet.

Dear NSA, I Don’t Think You Meant Yottabytes

Several media reports claim that the NSA’s Utah data center may ultimately be able to store data on the scale of yottabytes because, you know, they think they’re totally going to need yottabytes. To put this into perspective, a yottabyte would require about a trillion 1 TB hard drives and data centers the size of both Rhode Island and Delaware. Further, a trillion hard drives is more than a thousand times the number of hard drives produced each year. In other words, at current manufacturing rates it would take more than a thousand years to produce that many drives. Not to mention that buying those hard drives would cost up to 80 trillion dollars–greater than the GDP of all countries on Earth.
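For the record, here is the arithmetic behind those figures (the roughly $80-per-drive price is the assumption implied by the $80 trillion total):

\[
\frac{10^{24}\ \text{bytes (1 YB)}}{10^{12}\ \text{bytes per 1 TB drive}} = 10^{12}\ \text{drives, i.e., one trillion}
\]
\[
10^{12}\ \text{drives} \times \$80\ \text{per drive} = \$8 \times 10^{13} = \$80\ \text{trillion}
\]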

Let’s just establish one thing, NSA: without major improvements in storage technology, yottabyte storage capability is simply absurd. Just get yottabytes out of your mind and stop telling people you need yottabytes; it makes you sound dumb. Even the zettabyte, which is 1/1000 of a yottabyte, is silly, but let’s use that as an example anyway.

Getting Started with PGP in 10 Minutes or Less

Considering recent news about the mass collection of data communications, I think it’s time to bring PGP back to life. PGP is an extremely secure encryption method that is easy to integrate into email messages. Although it has been around since 1991, early efforts to make it a standard largely failed. Even I eventually stopped installing PGP because I simply never used it. But the Internet is very different nowadays, and I think it’s time to dust off that old key and give PGP another chance.

PGP does have some limitations and it is by no means a perfect solution, but it is much better than sending unencrypted emails. Unfortunately, one big reason for lack of adoption is simply that too many people are intimidated by it. The good news is that you really don’t have to understand how PGP works in order to start using it.

Here are the basic concepts you probably should know:

  • You start by creating two keys for yourself:  one that you pass out to the world, and one that you guard with your life (along with having a reliable and secure backup). The Kleopatra software mentioned below walks you through the process of creating these. It’s really easy. 
  • If someone wants to send you an encrypted email they will need your public key so you generally publish it on a public keyserver or put it on your web site. Mine is here.
  • To be able to read that email you use your private key.  That is why you keep it private.
  • You can also use your private key to sign emails that  you send out. Signing just proves that you are the real sender. Others can verify your signature with your public key.
  • Other people can sign your public key. The more people who sign your key, the more others can be sure that it is authentic.

That’s all you really need to know, so here are some quick instructions for getting started under Windows:

1. Download and install Gpg4Win.

2. After installing, open Kleopatra to import an existing key or create a new key.

3. Once your key is in Kleopatra, right click on it and select Export Certificates to Server to publish your key on a public keyserver, then get another PGP user you know to sign it.

4. Configure your mail client (or use the Claws Mail client that comes with Gpg4Win). If you are using Outlook, Gpg4Win comes with an Outlook add-on. If you are using Thunderbird, get the Enigmail add-on.

5. Talk a friend into installing PGP so you can send and receive PGP encrypted email.

A big problem for most people is that using PGP with web-based mail accounts such as GMail, Hotmail, or Yahoo! mail just isn’t that easy. Yes there are solutions, but I have not had great luck with any of those.

In my case, what I chose to do is create a Gmail filter to forward all messages that contain “BEGIN PGP MESSAGE” to another unpublished email account (and delete the original). This other account is a POP account that I access with Mozilla Thunderbird. I installed the Enigmail plugin and now use Thunderbird to deal with all PGP messages.

All my regular mail I still access through GMail and all my encrypted and other sensitive messages I deal with through Thunderbird. Works great so far! My next step is to explore some of the mobile PGP apps.

I have also added a new contact form that will automatically encrypt all messages using my public PGP key. The nice thing about the form is that it comes from my server, not from you; your email address and name are encrypted in the message body.

My PGP Key: https://xato.net/x/Mark_Burnett_mb@xato.net_(0x6E23BA97)_pub.asc
PGP Key ID: 0x6E23BA97
Fingerprint: C127 C510 79D7 E457 E20D 4419 7752 D68B 6E23 BA97