Encrypted E-Mail Company Hushmail Spills to Feds (2007) (wired.com)
127 points by computer on Aug 8, 2013 | 46 comments


Use OpenPGP software on your local device (GnuPG). Never copy your private key into the cloud. Set a super strong passphrase on the private key. You now have end-to-end encryption that is "military grade" (per Bruce Schneier), and no one but you has access to the private key.

It's not that hard to do, either. Convenient point-and-click web interface, or true privacy: pick one.
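For what it's worth, the local workflow really is short. A minimal sketch using GnuPG's symmetric mode with a throwaway keyring and a made-up passphrase (public-key mode with --encrypt --recipient works the same way, assuming gpg 2.x is installed):

```shell
# Local encryption sketch with GnuPG: the passphrase and plaintext
# never leave this machine.
export GNUPGHOME="$(mktemp -d)"        # throwaway keyring for the demo
chmod 700 "$GNUPGHOME"
printf 'meet at noon\n' > msg.txt
# Encrypt locally with a strong passphrase (this one is a placeholder).
gpg --batch --pinentry-mode loopback --passphrase 'hunter2butmuchlonger' \
    --symmetric --cipher-algo AES256 -o msg.txt.gpg msg.txt
# Decrypt to verify the round trip.
gpg --batch --quiet --pinentry-mode loopback --passphrase 'hunter2butmuchlonger' \
    -d msg.txt.gpg
```

The same pattern with --encrypt and a recipient key gives you the point-to-point case the parent describes.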


Go to prison (US: Contempt of court? UK: RIPA) unless you hand over your key.


In the U.S. I believe at least one circuit court has ruled that you cannot be compelled to give up an encryption key if doing so would involve self-incrimination [1].

The 'catch' is that if the government already knows generically what incriminating data you have, they can order you to produce it (since doing so is not any further incriminating). Or something like that? It's a recent decision, though, so things can still change.

http://yro.slashdot.org/story/13/04/24/1458203/federal-magis...


You don't have to give up the key, just decrypt the data upon request...


Unless decrypting it would force you to incriminate yourself...


yes, but at least you get to make that choice for yourself, rather than having somebody else get to decide 'go to prison or hand over DanBC's key?'


I was wondering: what if you try the key and it doesn't work because you forgot it, for example? You really cannot open it yourself. Would that be contempt of court? How do you convince the judge...


But how far could they go? Could they compel GnuPG authors to secretly weaken GPG, along with a gag order?

I know that seems like it's taking it to the logical extreme, but if they can compel web service authors to alter their apps to insert exploits (assuming that's what happened), why couldn't they compel open source desktop app authors to do the same?

The only difference being that the GPG authors would have to be really sneaky to avoid code review. Possibly with "help" from the NSA.


You should look closer at what these types of orders actually compel a service to do. In the lavabit case, my theory is that it went as follows:

- The FBI compels lavabit through a FISA warrant/NSA to produce Snowden's emails/data/etc.

- This data is encrypted, so lavabit cannot comply immediately. However, on logging in, Snowden provides lavabit with his password so that they can decrypt the emails.

- Since lavabit then has the password and access to the emails, they are now required to hand that over to the government. They may need to add some code to store the password, but that is not a fundamental change to the system; it is merely a way of intercepting the data they are sent and required to hand over to the government.

So [I think that] the government does not order lavabit to make changes, it orders it to produce evidence. Such an order would not make much sense when aimed at the GPG authors.


What is really bizarre is that Hushmail still has a lot of loyal users following that incident.


Crypto AG, who made "secure" telex machines, were revealed to have cooperated in an NSA exploit in the '80s, and are still in business: http://www.crypto.ch/

Lots of people who should be locking down their email are now pretending nothing is wrong. Denial can be very strong.


Well, there are lots of people whose threat model doesn't include the police.

Say for instance you are a psychiatrist who wants to offer patients a fairly secure method of talking to you. Or you are a small business doing work in China.


> Well, there are lots of people whose threat model doesn't include the police.

> Say for instance you are a psychiatrist who wants to offer patients a fairly secure method of talking to you.

If you are a psychiatrist, your threat model ought to include the police.


This reinforces the point that web-based cryptography (particularly when the JS is provided by a server) is not adequately secure. This vulnerability--sniffing the plaintext prior to server-side encryption--is one of the two most commonly cited reasons to avoid web-based crypto. (The other reason concerns client-side, JavaScript crypto. In that architecture, a compromised server can just send a backdoored version of the JS crypto code.)


the vuln was not that the JS was bad or untrustworthy, but that the passphrase was sent to the server.

If the entire connection to the server is secure, and the JS is loaded from the secure server and is auditable/checksummed, then the risk is minimal. You either trust the code or you get out.


That doesn't make sense. Who would audit the code and how would you get the checksum?

If you only run code that matches a static, hardcoded checksum then there's no point in downloading it. Might as well just store a copy of it because now you have a client application. On the other hand, if you already have a secure method of receiving a checksum you can trust, you might as well just use that other, magically incorruptible communication channel for downloading all the code (and all your email, for that matter).


> the vuln was not that the JS was bad or untrustworthy, but that the passphrase was sent to the server.

Which is one of the most commonly cited vulnerabilities with web-based crypto. The fact that no JS was involved doesn't really change anything.

> If the entire connection to the server is secure, and the JS is loaded from the secure server and is auditable/checksummed, then the risk is minimal. You either trust the code or you get out.

I assume you're referring to client-side, JS-based encryption. The problem is that the server can always slip a backdoor into the webpage, and your browser will dutifully execute it. It doesn't matter if the JS is sent via SSL. A compromised server can deliver compromised JS, and SSL provides no protection against that.

You can't practically audit for this. You could theoretically read the JavaScript. But performing a thorough audit on a cryptosystem is a big undertaking. And then how do you know it hasn't changed the next time you visit the page? You mentioned checksums, but how are you going to do that? Are you going to manually MD5 the JS file on every visit?

Even if you did, the JS file is just one of many ways a compromised server can backdoor your browser. For example, we recently learned about HTML5 timing attacks (http://www.contextis.com/files/Browser_Timing_Attacks.pdf). And that's just the tip of the iceberg. Checksums aren't going to tell you if these sorts of things are happening.

A compromised server can compromise any data entered into webpages it serves. There is no known way around this.
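To make the checksum point above concrete, here is roughly what that manual audit would look like, with hypothetical file contents standing in for the served JS. Note that you would have to repeat the comparison before every single session, and it catches only changes to this one file:

```shell
# Hypothetical manual audit: pin the hash of the crypto JS at audit time,
# then compare it against what the server serves on a later visit.
printf 'function encrypt(m){ /* audited code */ }' > crypto.js
pinned="$(sha256sum crypto.js | awk '{print $1}')"      # recorded after the audit
printf 'function encrypt(m){ /* backdoored */ }' > crypto.js   # later download
current="$(sha256sum crypto.js | awk '{print $1}')"
if [ "$pinned" = "$current" ]; then
  echo "unchanged"
else
  echo "JS changed since audit"
fi
```

Even when the comparison fires, as it does here, it says nothing about the other channels (timing attacks, other served resources) mentioned above.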


You cannot trust the code. You cannot trust the code because the supplier could be forced to include a backdoor, and thus you have to audit the code before every session.


Hence the checksum.


Good argument not to do anything you really care about on the web. Plain old email clients still work just fine, after all.


Gonna name drop: I spoke to Phil Zimmerman about this yesterday. He says Hushmail had no choice, and they didn't willingly do anything. They had a well-known insecure access method which meant they had access to the content. The government simply required them to hand over content they had access to. So slamming them for this is not really appropriate.


> He says Hushmail had no choice

As Snowden, Manning, Assange and now the guy behind Lavabit are demonstrating, there is always a choice.

Hushmail could easily have just shut down.


Right. "No choice" and "difficult choice" are not the same thing.


The case for hushmail is different from lavabit's. Hushmail had some piece of information, and legally they were required to hand it over. They may have had the option to shut down shop after handing that over, however.

Lavabit, by all appearances, did not have access to any information the government wanted and was not ordered to hand anything over. It seems like they were probably ordered to implement a method for the government to gain access to future communications. They chose to close up shop rather than implement this access and lie to their customers about the security of the service.


Well, not exactly.

Hushmail strongly suggested that, when given a court order targeting a user of the client-side Java applet, it would send that user a backdoored applet. That could technically be detected by checking hashes, but in practice...

It's an open question whether companies can be forced to build backdoors, but that sure looks like what happened to Hushmail and Lavabit.

(just noting that I'm the author of this more than 5 year old story).


Thanks for pointing that out. I was relying on a source that said otherwise, so for the future I will not trust it as much.

> can be forced to build backdoors

It seems like they have the option of shutting down as an alternative to implementing a backdoor and lying to their customers about the level of security. If you know of examples of businesses forced to stay open, with the owners forced to continue working at the company for the purpose of a government investigation, I would be interested in learning more.


They could have shut down that method. (Server-side webmail access)

You _always_ have a choice.


Or they could have fixed it to do the encryption client-side. In this case that would be in JavaScript, since it is a webmail client.


Did you read the article, which talks about the two options that Hushmail offers? The server side (weak) encryption and the client side Java option?

And the fact that even Hushmail said that they could be forced to serve malformed Java applets to targets.


Right. So why didn't they then close the insecure access method, and deny FURTHER content requests?


Because some people don't care about people with correctly-formed legal documents getting access to their email. They just want something quick and easy to prevent weaker attackers from having access.

EDIT: Not defending Hushmail here, but at least they have some warnings about the risks.


Hushmail was snake-oil from day one. Anyone who had spent more than 2 seconds reading about public key cryptography knew that. I was astounded to see Zimmerman defending Hushmail back then and saddened to hear that he is still defending them.


That's why I switched to gmail and rot13.


Could explain all the advertisements for Hooked on Phonics.


Could you explain how rot13 helps?

"The algorithm provides virtually no cryptographic security..." - https://en.wikipedia.org/wiki/ROT13


I believe it was meant ironically. Similar to the joke "I double rot13 all my emails, but I'm probably just being paranoid." (If you're not familiar with rot13, running it twice results in the original plaintext.)
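For anyone following along, the double-application property is easy to see on the command line; tr stands in here for a real rot13 tool:

```shell
# rot13 maps each letter 13 places along the alphabet, so applying it
# twice (13 + 13 = 26 places) is the identity.
rot13() { tr 'A-Za-z' 'N-ZA-Mn-za-m'; }
echo "attack at dawn" | rot13           # nggnpx ng qnja
echo "attack at dawn" | rot13 | rot13   # attack at dawn
```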


It's a joke, but at the same time, I doubt the pattern-recognition bots correct for it, so it would likely have some effect security-wise, to the same extent you get by spelling 'shit' as 'sh|t'.


Once you explain a joke, it's not funny anymore.


A joke, maybe. But explaining terminology doesn't [usually] hurt a joke.


Thanks


"virtually"...being kinda generous, aren't we?


You should use base_64 instead, that's what all the cool kids use.


Honestly GMail is perfect for the threat model of the mails I use it for.

It's very well-defended against random leakage across the Internet, against hacking attacks, etc.

The government has an easier time of getting access to it if I piss them off, but if I do piss them off enough they can essentially invent bad stuff regarding me anyways, and I'm screwed no matter what. Once I came into karmic balance with that I accepted it and went on with my life.

Of course, what's good for me is not necessarily what's good for you.


ROT13 is ready for enterprise.


No no, enterprise requires double ROT13.


Just to make sure: that article is from 2007. Ages ago.



