The Pathetic Reality of Adobe Password Hints

The leak of 150 million Adobe passwords in October this year is perhaps the most epic security leak we have ever seen. It was huge. Not just because of the sheer volume of passwords, but also because it’s such a large dump from a single site, allowing for a much better analysis than earlier sets. But there’s something unique about the Adobe dump that makes it even more insightful–the fact that there are about 44 million password hints included in this dump. Even though we still haven’t decrypted the passwords, the data is extremely useful.

One thing I have pondered over the years in analyzing passwords is trying to figure out *what* the password is. I can determine if the password contains a noun or a common name, but I can’t always determine what that noun or name means to the user.

For example, if the password is Fred2000, is that a dog’s name and a date? An uncle and his anniversary? The user’s own name and the year they set up the account? Once we know the significance of a password we gain a huge insight into how users select passwords. But I have never been able to come up with a method to even remotely measure this factor. Then came the Adobe dump.

The sheer amount of data in the Adobe dump makes it a bit overwhelming and somewhat difficult to work with. But if you remove the least common and least useful hints the data becomes a bit more manageable. Using a trimmed down set of about 10 million passwords, I was able to better work with the data to come up with some interesting insights.

Just glancing at the top one hundred hints, several patterns immediately become clear. In fact, what we learn is that a large percentage of the passwords are the name of a person, the name of a pet, the name of a place, or an important date.

Take dates for example. Consider the following list of top date-related hints:

Hint Total Note
birthday 29425
bday 17697
date 15272
birth 14956
DOB 13109
niver 9484 Spanish: Anniversary (short for aniversario)
fecha 8899 Spanish: Date
naissance 7892 French: Birth
anniversary 6959

In all, there are about 420,000 passwords with a date-related hint which represents about 3.6% of the passwords in the working set.
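The kind of tallying used throughout this post can be sketched in a few lines of Python; the keyword list and sample hints below are illustrative assumptions, not the actual Adobe data:

```python
# Hypothetical sketch of tallying date-related hints; the keyword
# list and sample data are illustrative, not the actual Adobe dump.
DATE_WORDS = {"birthday", "bday", "date", "birth", "dob",
              "niver", "fecha", "naissance", "anniversary"}

def count_date_hints(hints):
    """Return how many hints contain a date-related keyword."""
    total = 0
    for hint in hints:
        if any(word in DATE_WORDS for word in hint.lower().split()):
            total += 1
    return total

sample = ["birthday", "my dog", "DOB", "fecha", "same as always"]
print(count_date_hints(sample))  # → 3
```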

We see similar trends with dog names, which account for 375,000 passwords or 3.2% of the total (plus another 120,000 that mention “pet”):

Hint Total Note
dog 70550
dogs name 13559
my dog 9780
dog’s name 8191
dog name 8187
perro 8000 Spanish
hund 7185 German, Danish, Swedish, Norwegian
first dog 5653
chien 5542 French
doggy 5184

One interesting insight offered here is something we already know but find difficult to measure: password reuse. Surely a large percentage of these users have the same password across multiple sites, but it is interesting to see that about 361,000 users (or 3.11%) state this fact in their password hints:

Hint Total Note
same 44565
password 14634
always 13329
la de siempre 8559 Spanish: as always or the usual
same as always 8289
usual password 5277
same old 5111
siempre 4163 Spanish: always
normal password 3898
my password 3022

Keep in mind that these are just those passwords that admit to reuse in the hint. The number of passwords actually in use across multiple sites certainly is much greater than this.

Looking at the three lists above, we see that nearly 10% of the passwords fall into just these 3 categories. Adding names of people and places will likely account for 10% more.

So what did we learn by analyzing these hints?  First, that you should never use password hints. If users forget their password, they should use the password reset process. Second, that decades of user education has completely failed. No matter how much we advise not to use dates, family names or pet names in your passwords and no matter how much we tell people not to use the same passwords on multiple sites, you people will just do it anyway.

This is why we can’t have nice password policies.



9 Ways to Restrain the NSA

Keith Alexander

With U.S. Government surveillance being a hot news issue lately, several members of Congress have stepped up and started working on bills to place limits on NSA powers. Although these are admirable attempts, most proposals likely won’t have much effect on NSA operations. So of course I thought I’d propose some points that, at a minimum, any surveillance bill should cover.

1. No backdoors or deliberate weakening of security

The single most damaging aspect of recent NSA revelations is that they have deliberately weakened cryptography and caused companies to bypass their own security measures. If we can’t trust the security of our own products, everything falls apart. Although this has had the side-effect of causing the internet community to fill that void, we still need to trust basic foundations such as crypto algorithms.

Approaching a company to even suggest they weaken security should be a crime.

Related issue: the mass collection of 0-day exploits. I have mixed feelings on how to limit this, but we at least need limits. The fact is that government law enforcement and military organizations are sitting on tens of thousands of security flaws that put us all at risk. Rather than reporting these flaws to vendors to get them fixed and make us all secure, they set these flaws aside for years waiting for the opportunity to exploit them. There are many real threats we all face out there and it is absurd to think that others can’t discover these same flaws to exploit us. By sitting on 0-days, our own government is treating us all as their personal cyberwar pawns.

2. Create rules for collection as well as searches

We saw how the NSA exploited semantics to get away with gathering personal records without actually calling it a search. They got away with it once; we should never allow that excuse again. Any new laws should clearly define both searches and collection and apply strict rules to both.

3. Clear definition of national security

Since the Patriot Act, law enforcement agencies have stretched and abused the definitions of national security and terrorism so much that almost anything can fall under those terms. National security should only refer to imminent or credible domestic threats from foreign entities. Drug trafficking is not terrorism. Hacking a school computer is not terrorism. Copyright infringement is not terrorism.

4. No open-ended gag orders

Gag orders make sense for ongoing investigations or perhaps to protect techniques used in other investigations but there has to be a limit. Once an investigation is over, there is no valid reason to indefinitely prevent someone from revealing basic facts about court orders. That is, there’s no reason to hide this fact unless your investigations are perhaps stretching the laws.

5. No lying to Congress or the courts

It’s disconcerting that I would even need to say this, but giving false information to protect classified information should be a crime. The NSA can simply decline to answer certain questions, as everyone else does when it comes to sensitive information. Or there’s always the Fifth Amendment if the answer to a question would implicate them in a crime.

6. Indirect association is not justification

Including direct contacts in surveillance may be justified, but including friends of friends of friends is really pushing it and includes just about everyone. So there’s that.

7. No using loopholes

The NSA is not supposed to be spying on Americans, but it can legally spy on other countries. The same goes for other countries: they can spy on the US. If the NSA needs info on Americans, it can just go to its spying partners to bypass any legal restrictions. Any limits on spying on Americans must also cover information obtained from spy partners.

And speaking of loopholes, many of the surveillance abuses we have seen recently are due to loopholes or creative interpretation of the laws. Allowing the Government to keep these interpretations secret is setting the system up for abuse. We need transparency for loopholes and creative interpretations.

8. No forcing companies to lie

Again, do I even have to say this? The NSA and FBI will ultimately destroy the credibility of US companies unless the law specifically guarantees that people like Mark Zuckerberg can come out and truthfully say whether they give secret access to the US government.

9. Strong whistleblower immunity

We saw how self-regulation, court supervision, and congressional oversight have overwhelmingly failed to protect us from law enforcement abuses. There is only one way to ensure compliance with laws: strong whistleblower protection. We need insiders to let us know when the NSA or other agencies make a habit of letting the rules slide.

Whistleblowers need non-governmental and anonymous third-party protection. We need to exempt these whistleblowers from prosecution and provide them legal yet powerful alternatives to going public. You’d think that even the NSA would prefer fighting this battle in a court over having to face leaks of highly confidential documents. In fact, I think that the only reason to oppose these laws is if you actually have something to hide. The NSA’s fear of transparency should be a blaring alarm that something is horribly wrong.

The NSA thinks that public response has been unfair and will severely limit their ability to protect us. What they don’t seem to understand is the reasons we have these limits in the first place. When the NSA can only focus on foreign threats, they have no interest in domestic law enforcement. Suspicionless spying is incompatible with domestic law enforcement and justice systems.

The greatest concern, however, is the unchecked executive and military power. The fact that there has been so much for Snowden to reveal demonstrates the level of abuse. Unfortunately, the capabilities are already in place so even legal limits are largely superficial and self-enforced. It would be trivial to ignore those laws in a national security emergency.

I cringe at the thought of becoming one of those people warning others to be afraid, but that is why we put limits on the government, so we know we don’t ever have to be afraid. We solve the little problems now so we don’t have to face the big problems later. We understand the need for surveillance, we just need to know when the cameras point at us.



Fingerprints and Passwords: A Guide for Non-Security Experts

Today Apple announced that the iPhone 5S will have a fingerprint scanner. Many of us in the security community are highly sceptical of this feature, while others see it as a smart security move. Then of course there are the journalists who see fingerprints as the ultimate password killer. Clearly there is some disagreement here. I thought I’d lay this out for those of you who need to better understand the implications of using fingerprints instead of, or in addition to, passwords.

Biometrics, like usernames and passwords, are a way to identify and authenticate yourself to a system. We all know that passwords can be weak and difficult to manage, which makes it tempting to call every new authentication product a password killer. But despite their flaws, passwords must always play some role in authentication.

The fact is that while passwords do have their flaws, they also have their strengths. The same is true with biometrics. You can’t just replace passwords with fingerprints and say you’ve solved the problem because you have introduced a few new problems.

To clarify this, below is a table that compares the characteristics of biometrics vs passwords, with check marks where one method has a clear advantage:

Passwords | Biometrics
Difficult to remember | Don’t have to remember ✓
Requires unique passwords for each system | Can be used on every system ✓
Nothing else to carry around | Nothing else to carry around
Take time to type | Easy to swipe/sense ✓
Prone to typing errors | Prone to sensor or algorithm errors
Immune to false positives ✓ | Susceptible to false positives
Easy to enroll ✓ | Some effort to enroll
Easy to change ✓ | Impossible to change
Can be shared among users 1 ✓ | Cannot be shared
Can be used without your knowledge | Less likely to be used without your knowledge ✓
Cheap to implement ✓ | Requires hardware sensors
Work anywhere including browsers & mobile ✓ | Require separate implementation
Mature security practice ✓ | Still evolving
Non-proprietary ✓ | Proprietary
Susceptible to physical observation | Susceptible to public observation
Susceptible to brute force attacks | Resistant to brute force attacks ✓
Can be stored as hashes by untrusted third party ✓ | Third party must have access to raw data
Cannot personally identify you ✓ | Could identify you in the real world
Allow for multiple accounts ✓ | Cannot be used to create multiple accounts
Can be forgotten; password dies with a person | Susceptible to injuries, aging, and death
Susceptible to replay attacks | Susceptible to replay attacks
Susceptible to weak implementations | Susceptible to weak implementations
Not universally accessible to everyone | Not universally accessible to everyone
Susceptible to poor user security practices | Not susceptible to poor practices ✓
Lacks non-repudiation | Moderate non-repudiation ✓
1 Can be both a strength and a weakness


What Does This Tell Us?

As you can see, biometrics clearly are not the best replacement for passwords, which is why so many security experts cringe when every biometrics company’s press release claims its product is the ultimate password killer. Biometrics do have some clear advantages over passwords, but they also have numerous disadvantages; both can be weak and both can be strong, depending on the situation. Now, the list above is not weighted–certainly some of the items are more important than others–but the point here is that you can’t simply compare passwords to biometrics and say that one is better than the other.

However, one thing you can say is that when you use passwords together with biometrics, you have something that is significantly stronger than either of the two alone. This is because you get the advantages of both techniques and only a few of the disadvantages. For example, we all know that you can’t change your fingerprint if compromised, but pair it with a password and you can change that password. Using these two together is referred to as two-factor authentication: something you know plus something you are.

It’s not clear, however, if the Apple implementation will allow for you to use both a fingerprint and password (or PIN) together.

Now specifically talking about the iPhone’s implementation of a fingerprint sensor, there are some interesting points to note. First, including it on the phone makes up for some of the usual biometric disadvantages such as enrollment, having special hardware sensors, and privacy issues due to only storing that data locally. Another interesting fact is that the phone itself is actually a third factor of authentication: something you possess. When combined with the other two factors it becomes an extremely reliable form of identification for use with other systems. A compromise would require being in physical possession of your phone, having your fingerprint, and knowing your PIN.

Ultimately, the security of the fingerprint scanner largely depends on the implementation, but even if it isn’t perfect, it is better than those millions of phones with no protection at all.

There is also the privacy issue that some have brought up: is this just a method for the NSA to build a master fingerprint database? Apple’s implementation encrypts and stores fingerprint data locally using trusted hardware. Whether this is actually secure remains to be seen, but keep in mind that your fingerprints aren’t really that private: you literally leave them on everything you touch.



8 Ways to Prepare for CSP

Cross-Site Scripting (XSS) is a critical threat that, despite widespread training, still plagues a large number of web sites. Preventing XSS attacks can get complicated but even a small effort can go a long way–a small effort that nevertheless seems to evade us. Still, developers are getting better at input filtering and output escaping which means we are at least headed in the right direction.

Handling input and output aren’t the only strategies available to us. Content Security Policy (CSP) is an HTTP response header that–when correctly implemented–significantly reduces exposure to XSS attacks. CSP is exactly what its name implies: a security policy for your web content.

CSP not only allows you to whitelist browser content features on a per-resource basis, but also lets you whitelist those features on a per-host basis. For example, you might tell the browser it can load scripts for a page, but only if they come from a specific directory on your own web server. CSP allows you to set restrictions on scripts, styles, images, XHR, WebSocket, EventSource, fonts, embedded objects, audio, video, and frames.
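A minimal example of what such a policy might look like, built here as a plain string; the host and paths are hypothetical:

```python
# Build a hypothetical CSP header value; each directive whitelists
# where the browser may load that content type from.
policy = "; ".join([
    "default-src 'self'",                          # our own origin only, by default
    "script-src 'self' https://example.com/js/",   # scripts from one directory
    "img-src 'self'",                              # images from our origin
    "frame-src 'none'",                            # no frames at all
])
header = "Content-Security-Policy: " + policy
print(header)
```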

One powerful feature of CSP is that by default it blocks inline scripts, inline styles, the eval() function, and data: URI schemes–all common XSS vectors. The problem is that this is also where CSP starts breaking existing code, which could be a major obstacle to its widespread adoption. This is an all-too-common problem with frameworks, code libraries, plugins, and open source applications. If you write code that many other people use and don’t start getting it ready for CSP, you kind of hold us all back. CSP does allow you to re-enable blocked features, but that defeats the purpose of implementing content security policies.

So getting to my point, here are some things developers can do to their code to at least get ready for CSP:

  1. Remove inline scripts and styles. Surely you already know that it’s a good practice to separate code from presentation; now’s a good time for us all to stop being lazy and separate our code.
  2. Ditch the eval() function. Another thing we’ve all known to avoid for quite some time, but it still seems to show up. Most importantly, if you are working with JSON, make sure you parse it instead of eval’ing it. It’s rare to find a situation where there is no secure alternative to eval; you might be surprised how creative you can be.
  3. Don’t rely on data: schemes. Most often used for embedding icons into your code, data: URIs are a powerful XSS vector. The problem isn’t using them yourself, which normally is safe; it’s that attackers might use them, so the best solution is to disable them altogether. On pages that don’t work with user input in any form, you are probably safe to keep data: URIs enabled.
  4. Create an organized, isolated directory structure. Scripts with scripts, images with images. Keeping content separate makes fine-grained CSP policies so much easier.
  5. Document features needed for each file. A good place to document features required for each file is in the source code itself, such as in a PHPDoc comment. By doing this, when you implement CSP you can start with the most restrictive policy and add only necessary features.
  6. Centralize your HTTP response headers code. A centralized function to send required headers makes it easier to keep track of it all and avoid hard-to-debug errors later.
  7. Eliminate unnecessary and superfluous scripts. It’s sometimes hard to give up cool features for security, but good discipline here can pay off. This is a business decision based on your threat model, but it’s always a good question to ask when adding new stuff.
  8. Mention it whenever possible. Yes, you should be that person who talks about CSP; so much that people simply stop inviting you to meetings.
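Point 6 might look something like this minimal sketch; the header values and function name are illustrative assumptions, not any particular framework’s API:

```python
# One central place that merges security headers into every response,
# making the policy easy to audit and change later.
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
}

def apply_security_headers(response_headers):
    """Return a copy of response_headers with the standard set merged in."""
    merged = dict(response_headers)
    merged.update(SECURITY_HEADERS)
    return merged

resp = apply_security_headers({"Content-Type": "text/html"})
print(resp["X-Frame-Options"])  # → DENY
```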

And if you aren’t quite sure about how CSP works, here is some recommended reading:

An Introduction to Content Security Policy

Preventing XSS with Content Security Policy Presentation

Content Security Policy 1.0 Spec

Browser Support


CSP Playground

CSP Readiness



So What Exactly Did The US Government Ask Lavabit to Do?

The recent shutdown of Lavabit’s email services prompted a flurry of reporting and speculation about the extent of US Government spying, mostly due to a mysterious statement by Lavabit founder Ladar Levison.

Most of us saw this as yet another possibly overhyped government spying issue and didn’t really think too much of it. Much of the media coverage is already starting to die down, but there is still some question as to exactly what the government required of Levison that left him with only one option: shutting down the entire business he built from the ground up. I wondered if there were enough clues out there to get some more insight into this case. I started by looking at exactly what Lavabit offered and how it all worked behind the scenes.

Lavabit Encryption

Lavabit claimed they had “developed a system so secure that it prevents everyone, including us, from reading the e-mail of the people that use it.” This is a bold claim and one that surely was a primary selling point for their services.

The way it worked is relatively simple: Lavabit encrypted all incoming mail with the user’s public key before storing the message on their servers. Only the user, with the private key and password could decrypt messages. Normally with encrypted email, users store private keys on their own computers, but it appears that in the case of Lavabit, they stored the users’ private keys, each encrypted with a hash of that user’s password. This is by no means the most secure way of doing this, but it dramatically increases transparency and usability for the user. By doing this, for example, users do not need to worry about private keys and they still have access to their email from any computer.

So let’s break this down: a user logs in with their password. This login might occur via POP3, IMAP4, or through the web interface (which in turn connected internally via IMAP). Because Lavabit used the user’s password to encrypt the private key, they needed the original plaintext password, which means they could not support any secure authentication methods. In other words, all clients had to send passwords using AUTH PLAIN or AUTH LOGIN with nothing more than base64 encoding. The webmail interface appears to have been available as both SSL and non-SSL, and the POP3, IMAP4, and SMTP interfaces all seem to have accepted connections with or without SSL. All SSL connections terminated at the application tier.
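To see why AUTH PLAIN offers no real protection without SSL, here is a sketch of the token a client sends; the credentials are made up, and the message format is from RFC 4616:

```python
import base64

def auth_plain_token(username, password):
    """Build the base64 token a client sends for AUTH PLAIN (RFC 4616)."""
    raw = b"\0" + username.encode() + b"\0" + password.encode()
    return base64.b64encode(raw).decode()

token = auth_plain_token("user@example.com", "hunter2")
# base64 is an encoding, not encryption: anyone watching the
# connection can trivially recover the original password.
print(base64.b64decode(token))
```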

Once a user sends a password, the Lavabit servers create SHA-512 hashes explained as follows:

… Lavabit combines the password with the account name and a cryptographic salt. This combined string is then hashed three consecutive times, with the former iteration’s output being used as the input value of the next iteration. The output of the first hash iteration is used as the secret passphrase for AES [encryption of the private key]. The third iteration is stored in our password database and is used to verify that users entered their password correctly.

The process they describe produces two hashes: one for decrypting the user’s private key and, after two more hashing iterations, a hash to store in the database for user authentication. While this is a fairly secure process, given strong user passwords, it does weaken Lavabit’s claim that even their administrators couldn’t read your email. In reality, all it would take is a few lines of code to log the user’s original password, which allows you to decrypt the private key, which in turn allows you to receive and send mail as that user as well as access any stored messages.
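The three-iteration scheme they describe can be sketched as follows; the exact concatenation order and salt handling here are assumptions based on their description:

```python
import hashlib

def lavabit_hashes(account, password, salt):
    """Sketch of Lavabit's chained SHA-512 scheme (details assumed)."""
    h1 = hashlib.sha512((password + account + salt).encode()).hexdigest()
    h2 = hashlib.sha512(h1.encode()).hexdigest()   # intermediate iteration
    h3 = hashlib.sha512(h2.encode()).hexdigest()
    # h1 would serve as the AES passphrase protecting the private key;
    # h3 is what would be stored in the authentication database.
    return h1, h3

aes_passphrase, stored_hash = lavabit_hashes("user", "correct horse", "salt123")
```

Anyone able to modify this code path could simply log the plaintext password before hashing, which is exactly the weakness discussed above.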

It is important to note that the scope of Lavabit’s encryption was limited to storage on its own servers. The public keys were for internal use and not something you published for others to use. Full protection would require employing PGP or S/MIME and having untapped SSL connections between all intermediate servers. On the other hand, if an email sent through Lavabit was already encrypted with PGP or S/MIME, Lavabit would never be able to intercept or read it.

The question here is what exactly did the government request Levison to do that was so bad that he’d rather shut down his entire business? What information could Lavabit even produce that would be of interest to a government agency? Unencrypted emails, customer IP addresses, customer payment methods, and customer passwords. Based on media statements, it appears that he would be required to provide unencrypted copies of all emails going through his system.

Let’s look at some quotes Levison has given to various media outlets. First, here are some from an interview with CNET:

“We’ve had a couple of dozen court orders served to us over the past 10 years, but they’ve never crossed the line.”

“Philosophically, I put myself in a position that I was comfortable turning over the information that I had. I built Lavabit in a reaction to the original Patriot Act.”

“Where the government would hypothetically cross the line is to violate the privacy of all of my users. This is not about protecting a single person or persons, it’s about protecting all my users. What level of access to this nation does the government have?”

“Why should I collect that info if I didn’t need it? [That philosophy] also governed what kind of information I logged.”

“Unfortunately, what’s become clear is that there’s no protections in our current body of law to keep the government from compelling us to provide the information necessary to decrypt those communications in secret.”

“If you knew what I know about e-mail, you might not use it either.”

In an article from NBC News, we have this:

Levison stressed that he has complied with “upwards of two dozen court orders” for information in the past that were targeted at “specific users” and that “I never had a problem with that.” But without disclosing details, he suggested that the order he received more recently was markedly different, requiring him to cooperate in broadly based surveillance that would scoop up information about all the users of his service. He likened the demands to a requirement to install a tap on his telephone. Those demands apparently began about the time that Snowden surfaced as one of his customers, apparently triggering a secret legal battle between Levison and federal prosecutors.

And finally in an interview with RT he said:

“I think the amount of information that they’re collecting on people that they have no right to collect information on is the most alarming thing,” he told RT. “I mean, the Fourth Amendment is supposed to guarantee that our government will only conduct surveillance on people in which it has a probable suspicion or evidence that they are committing some crime, and that that evidence has been reviewed by a judge and signed off by a judge before that surveillance begins. And if there’s anything alarming, it’s that now that’s all being done after the fact. Everything’s being recorded, and then a judge can after the fact say it’s okay to go look at the information.”

Given the above information, let’s analyze some of the facts we know:

  • The government asked Lavabit to do something which Levison considered to be a crime against the American people.
  • Levison was comfortable with, and had complied with, warrants requesting information on specific users.
  • Levison told Forbes that “This is about protecting all of our users, not just one in particular.”
  • Levison is not able to reveal some details even to his own attorney or employees.
  • Shutting down operations was an option to circumvent compliance, although there was a veiled threat he could be arrested for doing so.
  • He did not delete customer data; he still has that in his possession, so this was a request for ongoing surveillance.
  • This was a court order, which Levison is fighting through the US Court of Appeals for the Fourth Circuit.
  • Levison compared the request to installing a tap on his telephone.

Apparently what made Levison uncomfortable with the request was the fact that it collected information about all users, without regard to a warrant. Presumably law enforcement wanted to collect all the data and later retroactively view whatever they deemed necessary once they had a warrant. The two issues here are that the Government wanted to collect information on innocent users (including Levison himself) and that Levison would be out of the loop completely, taking away his control over what information he provided to law enforcement. These were the lines the Government crossed.

What’s interesting here is that Lavabit terminated the SSL connections right on the application servers themselves. These are the servers that also performed the encryption of email messages. Because of that, a regular network tap would be ineffective. The only ways to perform the broad surveillance Levison objected to would be (in order of likelihood):

  1. Force Lavabit to provide their private SSL keys and route all their traffic through a government machine that performed a man-in-the-middle style data collection;
  2. Change their software to subvert Lavabit’s own security measures and log emails after SSL decryption but before encrypting with the users’ public keys; or
  3. Require Lavabit to install malicious code to infect their own customers with government-supplied malware.

Sure, this could have been a simple request to put a black box on Lavabit’s network and Levison is just overreacting, but the evidence doesn’t seem to indicate that. Regardless of which of these requests the Government made, any of them would make Levison’s entire business a lie; all efforts to encrypt messages would be pointless. Surely there were some heated words spoken when the Department of Justice heard about Levison’s decision, but this is not an act of civil disobedience on Levison’s part; his personal integrity was on the line. Compliance would make his very reason for running Lavabit a deception; a government-sponsored fraud.

While Lavabit initially had quite a bit of media coverage over this issue, the hype seems to be a casualty of our frenzied newscycle. But after looking closely at the facts here, I now see that this is a monumentally important issue, one that the media needs to once again address. The message here is that US courts can force a business to subvert their own security measures and lie to their customers, deliberately giving them a false sense of security. They can say what they want about security on their web sites, it means nothing. If they did it to Lavabit, how many hundreds or thousands of other US companies already participate in this deception?

If the courts can force a business to lie, we can never again trust the security claims of any US company. The reason so many businesses specifically rely on US services is the sense of stability and trust. How sad that an overreaching and panicked pursuit of a whistleblower has thrown that all away.

This issue is so much more than a simple civil liberties dispute, it is the integrity of a nation at stake. We walked with the devil in a time of need–that is a legacy we must live with–but at what point do we sever that relationship and return to the integrity required to lead the world through respect and not by fear?





UPDATE: Since publishing this post, a Wired article has revealed that Lavabit was in fact required to supply their private SSL keys, as suspected above.


Should You Ditch LastPass?

Steve Thomas, aka Sc00bz, has brought up some very interesting issues about the LastPass password manager that are causing some confusion, so I thought I’d give another perspective on the issue.

Summary of Steve’s points:

  1. When you use the LastPass web site to log in to your account, your web browser will first send a hash with a single iteration, no matter how many iterations you have set for your account. It isn’t until this hash fails that the browser tells the user the correct number of iterations to use.
  2. LastPass has a default setting of 500 iterations (at least at that time; it now recommends 5,000 iterations).
  3. The extension should warn you if it is going to send a hash with fewer iterations than what you have set.
  4. LastPass does not encrypt the URLs of sites stored in your password database.

LastPass hashes your password rather than sending the plain text to the server when you log in. The algorithm it uses is sha256(sha256(email + password) + password). This hash, while not necessarily insecure, can be cracked in a reasonable amount of time with ordinary hardware unless the user has a relatively strong password. It isn’t until after this single-iteration hash is sent that the LastPass server responds and tells the browser exactly how many iterations it should use; the hash is then sent again using the correct number of iterations. More iterations means it will take much more time to crack your password. A good minimum number of iterations is 5,000. If you go too high with the number of iterations, some clients such as mobile phones may be very slow logging in.
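As a rough sketch, the single-iteration hash and an iterated version might look like the following. Note that the byte encoding between hashing steps and the PBKDF2 parameters here are my assumptions for illustration, not LastPass’s published internals:

```python
import hashlib

def single_iteration_hash(email: str, password: str) -> str:
    # sha256(sha256(email + password) + password), hex-encoding the
    # inner digest -- the encoding is an assumption for illustration.
    inner = hashlib.sha256((email + password).encode()).hexdigest()
    return hashlib.sha256((inner + password).encode()).hexdigest()

def iterated_hash(email: str, password: str, iterations: int = 5000) -> str:
    # PBKDF2-style stretching: same inputs, but the iteration count
    # multiplies the cost of every guess an attacker must make.
    return hashlib.pbkdf2_hmac(
        "sha256", password.encode(), email.encode(), iterations
    ).hex()
```

An attacker who captures the single-iteration value can test candidate passwords at nearly full SHA-256 speed; against the 5,000-iteration form, every guess costs thousands of times as much work.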

This is an issue that certainly should be addressed, but it is not serious enough to warrant abandoning LastPass.

The mitigating factors here are:

  1. You are logging in via SSL so the primary threats here are a MitM attack with spoofed SSL certificates, a government warrant, or a government spy agency.
  2. They still need to crack your hash so if you have a very strong password, even a single iteration hash could provide a reasonable amount of protection.
  3. A second factor of authentication, country restrictions, blocking Tor logins, restricting mobile access, and other settings still protect your account from unauthorized logins, unless the attacker is able to obtain your stored hashes through hacking, warrant, or spying.

One thing I might also add is that the server tells the client how many iterations it expects, which does make an attack somewhat easier if someone acquires your hash.

My opinion is that this is an issue that certainly should be addressed, but it is not serious enough to warrant abandoning LastPass altogether unless your individual threat model includes the NSA or other government agencies. The LastPass plugin should identify when someone is logging in to LastPass via the web login and provide the client-side script with the correct number of iterations; the server should never respond with this at all. However, if someone is logging in through a web browser that doesn’t have LastPass with their account data installed, the server sending the number of iterations is the only option.

The only proper solution here is to make your primary login different from the decryption login, at least for accessing the web interface if not everywhere. That way, the number of iterations is never publicly revealed and sending a single-iteration hash would be unnecessary. Other companies such as RoboForm use this method. I have always wanted this feature and I would highly recommend LastPass implement it if feasible.
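A minimal sketch of that separation (names and parameters are hypothetical, not RoboForm’s or LastPass’s actual scheme): derive the vault encryption key client-side, then stretch it one more time to get the value sent to the server, so the server can authenticate you without ever learning the key that decrypts your data.

```python
import hashlib

def derive_keys(master_password: str, email: str, iterations: int = 5000):
    # Client-side only: the key that encrypts the password vault.
    vault_key = hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), email.encode(), iterations)
    # Sent to the server for login: one extra one-way step, so the
    # server cannot recover vault_key from what it stores.
    auth_hash = hashlib.pbkdf2_hmac(
        "sha256", vault_key, master_password.encode(), 1)
    return vault_key, auth_hash
```

Because the authentication value is a one-way function of the vault key, even a server compromise (or a warrant served on the server) yields nothing that directly decrypts the vault.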

As for the other points, the default iteration count mentioned in number 2 has been addressed, and the warning mentioned in number 3 would be a good thing to add if it hasn’t been already, though this would not be possible when using a web browser without your LastPass extension installed.


As for the unencrypted URLs mentioned in number 4, LastPass’s response was that leaving them unencrypted is necessary to grab favicons. Although unencrypted URLs may not be an issue for most users, there certainly are scenarios where you would want them encrypted. LastPass should make this an option for the user.

LastPass does provide strong security controls, although there clearly is room for improvement. If you do not find LastPass to be secure enough, the only reasonable alternative I would recommend is KeePass, which puts you in complete control over your data while still being quite usable. I would not recommend ditching LastPass, but I would recommend that LastPass address these issues. I would also recommend that Steve Thomas keep up the great research he provides to the community.

I have not heard a recent response from LastPass on these issues but would love to hear from them. I will update this post if and when I do.

Disclosure: I am a LastPass user and I get a free month of premium service whenever someone clicks on banners located on this site. I have no affiliation with LastPass and receive no other compensation from the company. 

Thanks NSA for Ruining the Internet

I know, we have been told for years that the NSA has been spying on us. The revelations in recent months really aren’t that new. We always assumed this was looming over us, and many of us have even greeted various government agencies in our private chats and emails (e.g., “I want to blow that up, j/k nsa, LOL, no really just kidding”).

On the other hand, our lives are full of conspiracy theories that nag at us: Is that mirror in the dressing room a two-way mirror? Is that webcam on my laptop secretly recording me? Why is that black Suburban parked on my street? Fortunately, it’s easy to dismiss these things as conspiracy—that is until two guys step out of that Suburban and approach your door. Edward Snowden’s leaks about NSA spying made it that real to us.

Perhaps the most frustrating aspect of this is the reaction by our government. To them the problem isn’t that they are spying, it’s that we found out about it. If we just don’t know about it then everything will be okay, right? Fire some admins, tell us about a few of their programs, and maybe issue a terror alert to help us see how much they are helping us. Clearly they are missing the point here. What bothers us isn’t just the spying, it’s the loss of trust. Not just in our government, but in almost everything we do online. We can’t trust our email, our phone conversations, text messages, or online chats. Right, we kind of already knew this.

Once we found out that it is acceptable to lie in the name of national security, that changed everything.

But once we found out that it is acceptable to lie in the name of national security, that changed everything. Suddenly when the NSA says there are no domestic spying programs, is that really true, or is it the least untruthful answer they are allowed to give us? When an online service we use says they don’t give information to the NSA, is that true, or are they being compelled to deny this? Do terms of service and privacy policies mean anything anymore when national security trumps everything?

Do we now just assume that all online privacy is compromised? If not by our own government, by some other entity? What about our online backups, our cloud storage services, online notes, bookmarks, calendars, to-do lists, photos, accounting, online banking, hosted web servers, password management services, or medical records? And what about encryption, do we even trust our current technologies anymore?

Denials by the NSA or these companies don’t mean anything to us now because how do we know these aren’t just National Security denials?

Fortunately, in the end this will be good for us. We will be forced to develop technologies that make us all more secure. We will step up to the challenge, putting us back in control of our data. Improving security will be the new civil disobedience.

In the meantime, thanks NSA for ruining the internet.

Getting Started with PGP in 10 Minutes or Less

Considering recent news about the collecting of data communications, I think it’s time to bring PGP back to life. PGP is an extremely secure encryption method that is easy to integrate into email messages. Although it has been around since 1991, early efforts to make it a standard largely failed. Even I eventually stopped installing PGP because I simply never used it. But the Internet is very different nowadays and I think it’s time to dust off that old key and give PGP another chance.

PGP does have some limitations and it is by no means a perfect solution, but it is much better than sending unencrypted emails. Unfortunately, one big reason for lack of adoption is simply that too many people are intimidated by it. The good news is that you really don’t have to understand how PGP works in order to start using it.

Here are the basic concepts you probably should know:

  • You start by creating two keys for yourself: one that you pass out to the world, and one that you guard with your life (along with having a reliable and secure backup). The Kleopatra software mentioned below walks you through the process of creating these. It’s really easy.
  • If someone wants to send you an encrypted email they will need your public key, so you generally publish it on a public keyserver or put it on your web site. Mine is here.
  • To be able to read that email you use your private key. That is why you keep it private.
  • You can also use your private key to sign emails that you send out. Signing just proves that you are the real sender. Others can verify your signature with your public key.
  • Other people can sign your public key. The more people who sign your key, the more others can be sure that it is authentic.

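To make the public/private relationship above concrete, here is a toy RSA example (RSA is one of the algorithms PGP can use). The primes are absurdly small so the math stays visible; real PGP keys are thousands of bits long and wrap fast symmetric encryption around the message, so treat this strictly as an illustration of the concept:

```python
# Toy RSA key pair built from two tiny primes (never use numbers this small).
p, q = 61, 53
n = p * q                 # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                    # public exponent: hand this out to the world
d = pow(e, -1, phi)       # private exponent: guard this with your life

def encrypt(m):           # anyone can encrypt to you using (e, n)
    return pow(m, e, n)

def decrypt(c):           # only the holder of d can read the result
    return pow(c, d, n)

def sign(m):              # signing uses the private key...
    return pow(m, d, n)

def verify(m, sig):       # ...and anyone can check it with the public key
    return pow(sig, e, n) == m

assert decrypt(encrypt(42)) == 42
assert verify(42, sign(42))
```

The asymmetry is the whole point: publishing (e, n) lets strangers send you mail only you can read, while d never leaves your machine.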
That’s all you really need to know, so here are some quick instructions for getting started under Windows:

1. Download and install Gpg4Win.

2. After installing, open Kleopatra to import an existing key or create a new key.

3. Once your key is in Kleopatra, right click on it and select Export Certificates to Server to publish your key on a public keyserver, then get another PGP user you know to sign it.

4. Configure your mail client (or use Claws Mail client that comes with Gpg4Win). If you are using Outlook, Gpg4Win comes with an Outlook add-on. If you are using Thunderbird, get the Enigmail add-on.

5. Talk a friend into installing PGP so you can send and receive PGP encrypted email.

A big problem for most people is that using PGP with web-based mail accounts such as Gmail, Hotmail, or Yahoo! Mail just isn’t that easy. Yes, there are solutions, but I have not had great luck with any of them.

In my case, what I chose to do is create a Gmail filter to forward all messages that contain “BEGIN PGP MESSAGE” to another unpublished email account (and delete the original). This other email account is a POP account that I access with Mozilla Thunderbird. I installed the Enigmail plugin and now have Thunderbird handle all PGP messages.

All my regular mail I still access through GMail and all my encrypted and other sensitive messages I deal with through Thunderbird. Works great so far! My next step is to explore some of the mobile PGP apps.

I have also added a new contact form that will automatically encrypt all messages using my public PGP key. The nice thing about the form is that it comes from my server, not from you; your email address and name are encrypted in the message body.

My PGP Key:
PGP Key ID: 0x6E23BA97
Fingerprint: C127 C510 79D7 E457 E20D 4419 7752 D68B 6E23 BA97

Dear NSA, It’s Not Just About the Spying

This not only applies to the NSA, but to Congress and President Obama: You betrayed our trust. That’s why we are angry.

It’s not about spying and it’s not about having anything to hide. The fact is, my life is very boring and it’s kind of sad knowing how many terabytes of data might be stored of me complaining to the phone company about my phone bill or calling my wife to pick up an energy drink while she’s out. I can’t even imagine how many SMS messages are stored of my kids texting their friends 24 hours a day. Then there’s that endless flow of useless junk in my inbox.

And it’s not just me, it’s my boring life times 300 million other American lives. Just south of me there’s a million square feet ready to start storing all of that data. We’re not talking about petabytes, exabytes, or even zettabytes here, but yottabytes of data, a number so large there’s just no metaphor to help you comprehend it. I imagine this data center slowly filling up like a massive reservoir behind a newly built dam. A massive reservoir of 300 million lives, 75 million of those being under 18 years old. A million square feet, billions of dollars, eventually up to 200 megawatts of power, 60,000 tons of cooling equipment, and a carbon footprint greater than some entire countries.

Keep in mind that this is just their new data center. There are existing data centers scattered across the country that are apparently running low on free disk space. Even that isn’t enough; the NSA has new, equally massive facilities coming online in Maryland and Texas as well.

So what have all these yottabytes of storage and exaflops of computing power bought us? Apparently we stopped literally dozens of terrorist attacks (I bet not letting fingernail clippers on airplanes also prevented dozens of attacks!). “Dozens of attacks,” it turns out, means around fifty–ten of which were domestic plots. But some members of Congress have even questioned that number.

“Secret courts, secret interpretations of law, and the ability to accompany data requests with gag orders empowered the NSA to collect any data it wanted”

When Congress introduced the Patriot Act, there were a number of privacy concerns, but we put our trust in the government to do what was right. We were hurt and angry after 9/11 and there was a national cry to stop these terrorists. We knew when the Patriot Act became law that we would be giving up some of our privacy but it was for the greater good. The government assured us that there were checks and balances to prevent abuse of these new powers.

The government, it turns out, lied to us. Intelligence officials such as James Clapper came right out and falsely told Congress that the NSA was not collecting data on Americans. Secret courts, secret interpretations of law, and the ability to accompany data requests with gag orders empowered the NSA to collect any data it wanted–all with the blessing of Congress. Sure, we already figured that the NSA spied on us, but we kept getting all those assurances that they weren’t.

When we elect government officials we try to not only find those people who represent our political views, but we also look for people with a certain amount of integrity. We want congressmen and presidents who we can trust. Much of President Obama’s original platform was based on changing how government worked by adding transparency, targeting government abuses of power, and encouraging whistleblowers who revealed government abuses. It sounded pretty convincing and enough people believed in him to elect him President, but now even some of his most ardent supporters feel he betrayed their trust.

We also trusted Congress with the billions of tax dollars they spent building the largest spying mechanism ever known to man. Billions of dollars spent on millions of hard disks spinning away recording my kids texting their friends about what a loser their Dad is.

Yes it’s creepy knowing the NSA is always listening and we don’t like that the government considers all of us the enemy. Yes, it’s a violation of our constitutional rights that they gather evidence on us before we have even considered committing a crime. But what really bothers us the most is the violation of trust. We gave you power and–albeit predictably–you overreached way beyond that power, crafting laws that prevented us from even questioning your abuses and aggressively pursuing those who do.

You can claim that these practices are legal, strictly monitored, and performed with court approval, but we just don’t believe you anymore. You no longer have any credibility because humans are good at not trusting those who repeatedly lie to us. In fact, we want you to give us back control over what you do and how you spend our money. We don’t need your massive data collection to stop ten domestic terror attacks. In fact, we don’t even believe you that this data collection is about protecting us from terrorists anyway, you can only use that excuse so much before we start seeing through it. Ultimately, it comes down to the same old power, greed, and corruption that we learned about in History class.

You betrayed our trust, so now you don’t get our trust. We don’t want new data centers; we want to cut back on your data collection free-for-all and even start shutting down existing data centers. We want to take away your massive and seemingly unlimited budgets. We want you to stop pre-collecting data just in case you someday want to take “the book off the shelf and opening it up and reading it.” We want to be able to limit what you can do in the name of national security and we want Congress to roll back some of the overly permissive provisions of the Patriot Act. We want Congress to actually punish those who lie under oath and not just let it slide because they are the Director of National Intelligence. We want you to provide some form of protection for those whistleblowers who expose clear and possibly illegal abuses of power.

We don’t trust you anymore and we don’t know how far you are willing to go in the name of national security. You are laying a framework of abuse so vast that we fear it could someday become oppressive. We certainly don’t think you have our best interests in mind and we are seriously questioning the power (and petabytes of storage) the people have given you.

It’s time for us to speak now: we want our data back.


Grant Edward Snowden Retroactive Immunity

Last week I was struck by the absurdly hypocritical statement by James Clapper, the Director of National Intelligence:

“The unauthorized disclosure of information about this important and entirely legal program is reprehensible and risks important protections for the security of Americans.”

I suppose that if you live at the top of the intelligence food chain long enough, statements like this eventually start sounding perfectly normal to you. For releasing classified information about the NSA’s clandestine spying programs, Clapper is quick to label Edward Snowden a traitor. But who betrayed the American people more, Snowden or the NSA?

The US Constitution defines treason against the United States as “levying War against them, or in adhering to their Enemies, giving them Aid and Comfort.” The question here is what is the United States: is it the government or the citizens? Is the United States some .gov organization or is it the people who inhabit our political geography? Is the United States a secret spying program or a representative democracy that is “of the people, by the people, for the people”?

If it were up to the American people to decide, I think we would have a very different opinion of who should be called the traitors.

While Snowden may have violated the terms of his security clearance, he did not betray America. It is absurd to claim that revealing the NSA’s overreaching is in any way an aid to any enemy. Certainly no one buys the claim that terrorists will now communicate any differently than they did last month. On the contrary, I would argue that Snowden’s actions are in fact a powerful demonstration of true loyalty: he was willing to sacrifice himself for the American people. He betrayed his employer, but not the American people.

Ask yourself, do you feel more betrayed that Snowden revealed this secret program or do you feel more betrayed by the program itself?

Do you feel more betrayed that Snowden told the truth to journalists or that Clapper recently deceived a Congressional committee when asked a direct yes or no question about information gathering?

Do you feel more betrayed that Snowden produced actual evidence of spying on Americans or that the NSA does not want you to know what a FISC court ruled about the constitutionality of their spying programs and that the NSA spying has violated the constitution at least once before?

How about Bradley Manning: do you feel more betrayed that he exposed a number of documents revealing questionable and possibly criminal acts, or would you feel more betrayed if you knew exactly what the NSA plans to store in their unfathomably massive Utah data storage facility? (Hint: you don’t need exabytes of storage unless you have exabytes of information to store.)

Would you feel betrayed if you knew that the NSA and other government agencies buy up and sit on 0-day exploits so that they can use them in their cyberwarfare efforts, knowingly leaving millions of our own systems vulnerable in the process?

Last year it was reported that the Flame malware, allegedly an NSA effort, included a digital certificate that appeared to be legitimately signed by Microsoft. Do you feel betrayed knowing that the NSA has this ability? Would you feel betrayed if we knew the full extent of their capabilities in faking certificates?

And how about crypto algorithms? Would you feel betrayed finding out the NSA has broken some of these yet still knowingly lets us use them?

If it were up to the American people to decide, I think we would have a very different opinion of who should be called the traitors.

Nevertheless, chances are that if allowed to, the US government will be able to successfully prosecute Snowden. US laws on sedition and subversive behavior are broad, especially during times of war. I imagine that it would take an act of Congress to grant this individual, and others like him, immunity for exposing wrongdoings of the government. George Bush was able to persuade Congress to grant retroactive immunity to telcos when the NSA spying program first came to light; why can’t they grant this same privilege to a material witness who exposed this overreaching and possibly unconstitutional spying program?

To Congress I say, considering how little you have done for the American people lately, you guys really owe us this one.