
The CEO of PROTON answers YOUR questions! Drive, Linux support, Photos, features, and a lot more!

Andy Yen, the CEO of Proton (Mail, Drive, VPN, Pass...) answered a lot of the questions you, the community, asked, in an interview that covers basically everything!

He discusses security, privacy, the origins of Proton, how they operate, Linux support, future projects, products and features, quantum computing, passkeys, and more!

Proton Mail: https://proton.me/mail/TheLinuxEXP
Proton VPN: https://protonvpn.com/TheLinuxEXP

👏 SUPPORT THE CHANNEL: Get access to a weekly podcast, vote on the next topics I cover, and get your name in the credits:

YouTube: https://www.youtube.com/@thelinuxexp/join
Patreon: https://www.patreon.com/thelinuxexperiment
Liberapay: https://liberapay.com/TheLinuxExperiment/

Or, you can donate whatever you want: https://paypal.me/thelinuxexp

👕 GET TLE MERCH Support the channel AND get cool new gear: https://the-linux-experiment.creator-spring.com/

🎙️ LINUX AND OPEN SOURCE NEWS PODCAST: Listen to the latest Linux and open source news, with more in depth coverage, and ad-free! https://podcast.thelinuxexp.com

🏆 FOLLOW ME ELSEWHERE:
Website: https://thelinuxexp.com
Mastodon: https://mastodon.social/web/@thelinuxEXP
Pixelfed: https://pixelfed.social/TLENick
PeerTube: https://tilvids.com/c/thelinuxexperiment_channel/videos
Discord: https://discord.gg/mdnHftjkja

#vpn #privacy #proton #onlinesecurity #protonmail

Timecodes:

00:00 Intro
01:16 How did Proton start?
03:24 Why start with email?
06:03 What is Proton's business model?
08:34 Why set up in Switzerland?
11:33 What data do you have on customers?
14:39 How is encryption important?
18:20 Do you always need to use a VPN?
20:47 Why focus on building an ecosystem?
24:55 Is an Office Suite planned?
26:29 What differentiates Proton from competitors?
30:26 Is Proton a viable alternative to big tech services?
33:31 Why expand to more products instead of finishing existing ones?
37:19 Does the general public care about privacy?
38:45 What's next for Proton services?
40:08 What are the plans for native Linux clients?
46:03 Will ProtonVPN offer dedicated IPs to everyone?
47:46 What's the environmental impact of Proton?
49:27 Proton on F-Droid, without Google Play notifications?
52:03 Why are code repos all separated and hard to find?
53:12 Why are addresses ending in ".me"?
54:57 When will all apps reach feature parity?
56:24 Will SMTP relay be supported?
57:47 Will Proton focus more on businesses in the future?
59:50 Why put all your eggs in one basket with just Proton services?
01:01:00 Will Proton support passkeys?
01:03:21 Does E2E matter if the recipient isn't using it?
01:04:49 Will Proton disable port forwarding in VPN?
01:06:41 Is encryption enough to make email private?
01:09:06 What protects users from a change in Proton's code licensing?
01:11:14 How does Proton protect its infrastructure?
01:13:14 Impacts of Quantum Computing on privacy and security?
01:14:24 What's the future of Proton Bridge?
01:16:25 When will Proton Photos be a thing?
01:17:17 Plans for Proton Notes?
01:18:20 Will VPN support the Apple TV?
01:21:12 Support the channel

15 comments
  • I have a question for ProtonMail:

    What is the purpose of your end-to-end encryption?

    It seems like its only conceivable purpose is to protect against the server being malicious, since the HTTPS encryption between client and server is already protecting against all adversaries who don't control the server. But, if the server is malicious then it can target an individual user and serve them different javascript when they log in. (This special javascript for the targeted user can exfiltrate their passphrase and then the adversary can decrypt everything...)

    So, is it correct to say that the only scenarios where ProtonMail e2ee is actually useful in any way (eg, it could prevent an adversary from seeing plaintext) are these two?

    1. When an adversary obtains data from the server, but does not have operational control over it
    2. When an adversary compromises the server and decides to target a user, but only after that user's final ever login (eg, they never log in again after the time when the adversary began to target them)

    Also, separately from potential special behavior for targeted users, is there any way to verify the integrity of the javascript being served to everyone currently (or at any point in time)? (Just having it be open source and audited isn't sufficient, since the javascript that people actually run while using the site is minified...)
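
    (For concreteness, here is roughly what such a check reduces to today. The bundle URL below is made up, and there is nothing authoritative published to compare the digest against; this is just a sketch of the idea:)

    ```python
    # Rough sketch, not a real verification: fetch the served JS bundle and hash it.
    # The bundle URL is hypothetical; real bundle paths are minified, hashed filenames
    # dug out of the page, and Proton publishes no reference hash to compare against.
    import hashlib
    import requests

    BUNDLE_URL = "https://mail.proton.me/assets/index.abc123.js"  # hypothetical path

    resp = requests.get(BUNDLE_URL, timeout=10)
    resp.raise_for_status()
    print("sha256 of served bundle:", hashlib.sha256(resp.content).hexdigest())
    # Even with this digest, you can only compare it against what other users report
    # seeing, or against a build you produced yourself from the published source.
    ```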

    • I have seen you linking this comment in multiple conversations about Proton/Encryption, so I wanted to add my two cents and understand better your perspective.

      1. You mention "only scenarios" (where e2ee is useful), but then you say "When an adversary obtains data from the server, but does not have operational control over it". This is not a corner case; it is a very likely universe of scenarios in which some components of Proton's infrastructure/environments get compromised, but the attacker does not have the specific operational capability to push new/different code for the JS that does the cryptographic operations. Obviously we don't know whether they have additional controls specifically protecting this piece of code, which - if I were one of their security engineers - I would recognize as having fundamental security importance. It might be that internally they only whitelist specific hashes/images for this code, to the point that gaining this operational capability is indeed quite complex. It might also be that they do nothing else. Either way, pushing a malicious update requires capabilities well beyond "I compromise their server", because systems this big and this complex go way beyond a "server". The good thing is that even if some parts of their infrastructure get compromised, email content is still safe.
      2. Let's abstract the "JS in browser" for a second and talk at a high level: there is client-side code (which in this case runs in the browser) that does the encryption/decryption and talks with a server. If an attacker gains the capability to tamper with or compromise that client-side code, the encryption is generally toast. This issue is always going to be there; there is no silver bullet against this attack vector. Whether the code that does the crypto operations is JS in the browser, a CLI tool you install from your package manager, or code you write yourself using a library: if an attacker gains the operational capability to deploy new/malicious JS code, or compromises your package manager, the upstream repository, the tool, or the crypto library's repository, your client-side encryption is toast. I genuinely don't see any difference between Proton's encryption in the browser, a Thunderbird plugin, or a CLI tool. If an attacker can push a malicious update, I have no protection. There is no way I can verify the integrity of the code (the hash does not help here) without checking the actual code and verifying the logic. The only difference is formal, that is, malicious JS code can reach a larger number of users quickly, while a malicious update has to be installed, but it is not a mitigation for this risk in any substantial way.
      3. To this end, what possible verification can you expect with "is there any way to verify the integrity of the javascript being served to everyone currently"? If they get compromised to the point that the attacker can push malicious updates, what guarantees that the attacker cannot also tamper with whatever verification mechanism you have in place (e.g., by pushing to their GH account)? What's a possible option here? Not obfuscating the JS, so that you can compare it line-by-line with the one in the git repository and, at every commit, validate that the JS in the repository doesn't do anything malicious? Possible, but honestly I am not sure it would be a reliable solution.

      To be clear, the risks you are raising are real, but I don't think there is any effective mechanism that can mitigate them. I am curious to understand what a satisfactory solution from your PoV would more-or-less look like.

      • I genuinely don’t see any difference between Proton’s encryption in the browser, a Thunderbird plugin, or a CLI tool. If an attacker can push a malicious update, I have no protection.

        In the browser you're effectively doing an "update" with every page load (and after you've identified yourself to the server!) and there is no authenticity check besides HTTPS and no possibility to confirm that you received the same thing as everyone else (or that you received something that corresponds to source code in git, if the javascript happens to be open source).

        It is easy to confirm that two users are running an identical version of a piece of local software; it is nearly impossible to confirm the same in the web context. Every pageload is another opportunity to deliver malicious code to a targeted user with very little chance of being detected.

        As I wrote in another comment:

        People should be skeptical of anyone selling a service involving cryptography software which has nearly no conceivable purpose except to protect against the entity delivering the software. Especially if they re-deliver the software to you every time you use it, via a practically-impossible-to-audit channel, and require you to identify yourself before re-receiving it (as almost any browser-based e2ee software which doesn’t require installing any software does, due to the current web architecture).

        If you think this kind of perfect-for-targeted-exploitation architecture isn’t regularly used for targeted exploitation… well, you’re mistaken. In the web context specifically, it has been happening since the 90s.

        ... and still is today.

        • In the browser you’re effectively doing an “update” with every page load (and after you’ve identified yourself to the server!) and there is no authenticity check besides HTTPS and no possibility to confirm that you received the same thing as everyone else (or that you received something that corresponds to source code in git, if the javascript happens to be open source).

          Partially true (usually JS blobs are cached), and I have acknowledged this fact already:

          The only difference is formal, that is, malicious JS code can reach a larger number of users quickly, while a malicious update has to be installed, but it is not a mitigation for this risk in any substantial way.

          But what security does this offer? If a malicious update is pushed through other channels (say, a release in an APT repo), you can get compromised when you update the software. Where is the substantial difference from getting compromised when "the page loads"? The only real difference is timing, and it doesn't change the security model. Also, the fact that you have to authenticate yourself might make it easier for an attacker to target specific individuals, but that is by no means a prerequisite for the attack. A malicious update can be installed by many people, and it's trivial to work out post-factum which users were the actual targets by simply accessing their emails, so non-targets can simply be ignored. On the other hand, because an update can be pushed quickly, it can also be overridden more easily, forcing attackers to be noisier and repeat the attack; whereas installed software can go without updates for months, and if you install a compromised tool your whole machine/network is likely compromised, while the JS code at least runs in a browser sandbox that has to be escaped. So there are pros and cons, but the fundamental security risk is the same.

          It is easy to confirm that two users are running an identical version of a piece of local software; it is nearly impossible to confirm the same in the web context.

          And what security does this control offer? Who does the comparison? If it's the provider doing it, it's worthless, because that process can be compromised too. The only benefit would be if users compared with each other, but first of all, nobody does this; second, it's a very weak control anyway, because for each (non-web) piece of software there are N versions available, so users can legitimately be running different versions. I really don't understand what scenario you are imagining here.


          I still do not understand what would constitute a secure setup in your view. Personally, as a security professional, I think you are pointing out legitimate risks, but these risks have no fundamental solution, whatever the software.

          As I said:

          I am curious to understand what a satisfactory solution from your PoV would more-or-less look like

          I am really curious about your view, because personally I think that Proton is doing well given what can reasonably be offered. There is nothing they could provide (hashes/signatures for the code) that would add any security in the scenario of their total compromise, because you still only have one trust boundary (you and Proton). You would need a third party that verifies the software, signs it, and gives you the ability to verify your copy against it, but such a thing is hardly used anywhere AFAIK. And it's exactly the same with software released in other ways (e.g., via a package manager): the package and the signature are provided by the same entity, so the signature doesn't protect against compromise of the repository itself (only against compromise of the channel, and partially against local tampering).
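
          (To spell out what that kind of verification does and does not buy you, here is a minimal sketch with placeholder filenames; it authenticates the download channel, not the honesty of whoever produced both the package and the signature:)

          ```python
          # Sketch: verifying a detached OpenPGP signature on a downloaded release.
          # Filenames are placeholders, and it assumes gpg is installed and the
          # signer's public key has already been imported. A valid signature proves
          # the file matches what the key holder signed; it says nothing about
          # whether that key holder (or the repository hosting both files) has been
          # compromised or coerced.
          import subprocess

          result = subprocess.run(
              ["gpg", "--verify", "release.tar.gz.sig", "release.tar.gz"],
              capture_output=True,
              text=True,
          )
          print(result.stderr.strip() or result.stdout.strip())
          print("signature OK" if result.returncode == 0 else "verification FAILED")
          ```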

          • But what security does this offer? If a malicious update is pushed through other channels (say, a release in an APT repo), you can get compromised when you update the software. Where is the substantial difference from getting compromised when “the page loads”?

            The difference is that targeted delivery of malicious versions is far less likely to ever be noticed than backdooring binaries for everyone would be.

            There are various shortcomings in the numerous software distribution mechanisms in use today, but very few that make it easier to undetectably deliver malicious code to specific targeted users than javascript on a web page does.

            You would need a third party that verifies the software

            🔔🔔🔔

            When you use credible end-to-end encryption software, that is exactly what you are doing: you are getting the encryption software from someone other (literally anyone would be better) than the entity whose job it is to store your ciphertext.

            Of course, the quality of software distribution channels varies widely, but, a few properties which are pretty common outside the browser (even for proprietary software) include:

            • Ability for users to know when the software is being updated
            • Ability for users to verify that they're running the same software as other people
            • Ability for users to download the software without identifying themselves

            Even if most users aren't taking additional manual steps to verify their software authenticity, a system where it is possible for them to makes it more difficult for attackers to execute a targeted attack without risking detection.

            When you use things like Proton, Tuta, or Hushmail (which, again, is the same deceptively-marketed architecture as Proton and Tuta and has been doing this for literally 25 years) you lack all three of those properties and you instead constantly refetch the encryption implementation from the only (in most cases) 3rd party which happens to have your ciphertext. This architecture is designed for them to exfiltrate keys from targeted users.

            Their marketing says that they can't read your mail, and this is a lie. Some non-zero number (maybe dozens? who knows) of employees at each of these companies have the ability to read any user's mail by serving them slightly different javascript one day, and therefore so do any 3rd parties who can coerce or compel one of these employees through legal or extralegal means.

            Think about what an attacker needs to circumvent the encryption between two proton users: they need one protonmail employee. Done.

            Now think about what an attacker needs to do the same to users of some PGP implementation and a normal email host that doesn't sell snakeoil:

            • they would need to get the ciphertext somehow (perhaps by compromising an email provider, or an insider there), and
            • they need a signing key for the software update mechanism for the victim's PGP implementation, and
            • either they need to risk getting caught compromising the software distribution for everyone, or
            • they need to be located in the right place on the network to target the victim and intercept their connection while they are updating their software

            Of course, they might also try to compromise the victim's endpoint in various other ways, but I'm not trying to address all of the problems of computer security in this example: I'm just contrasting the properties provided by the Proton/Tuta/Hushmail architecture with how other email encryption works.

            HTH!

            • I see your point, and I generally agree.

              However:

              Ability for users to know when the software is being updated

              This is relatively useless, unless you (the user) can actually verify the legitimacy of the code, which you can't. You may verify provenance, but that doesn't tell you anything.

              Ability for users to verify that they’re running the same software as other people

              Nobody checks this really. I cannot think of a single example where I have done this or where I would be able to do this.

              Ability for users to download the software without identifying themselves

              This is technically feasible, but obviously not in the context of actual usage, so I agree.


              That said, you are forgetting that:

              When you use credible end-to-end encryption software, that is exactly what you are doing: you are getting the encryption software from someone other (literally anyone would be better) than the entity whose job it is to store your ciphertext.

              I think you underestimate the whole supply chain of the software that uses your PGP key. The CLI tool, the libraries. All it takes is one malicious commit in any of that, and you are toast (provided you install that version). The only protection you have is the chance that someone will notice the malicious commit(s). There are examples of similar attacks where nobody noticed.

              Think about what an attacker needs to circumvent the encryption between two proton users: they need one protonmail employee. Done.

              This might be an overstatement. We don't know what internal security measures they have. Even basic compliance requires separation of duties, which means a single person cannot carry out such a process (replacing code) end-to-end. They might also have internal monitoring etc.; it's not so trivial.

              Now think about what an attacker needs to do the same to users of some PGP implementation and a normal email host that doesn’t sell snakeoil:

              I agree, but there is a problem: you will never in a million years get the average person to use PGP. The whole tooling is messed up, even for technical people. This is a fact, and while I agree that the security it offers is better, the average person who is not trying to protect themselves from nation states is much better off with Proton than with Gmail, since that's the realistic alternative. Also, even in the legal cases where Proton did disclose the data they had (anti-"terrorism" cases), they did not disclose any email content and what they had was minimal. I think if you are a target of nation-state adversaries and you are thinking of communicating via email, you are probably doomed.

              Of course, they might also try to compromise the victim’s endpoint in various other ways

              To be fair, this is much, much, much, much easier than compromising Proton or getting to one of its employees. It's also a much more sensible attack, since it compromises multiple communication channels rather than only email.

              Ultimately I think that calling these products snake oil is a misrepresentation of reality. For the risk model of the average person, Proton (and similar services) does deliver what it promises. The fact that sophisticated attackers might be able to compromise the provider, and with it the encryption, is not a reason to invalidate the product tout court, in my opinion. Especially because neither of us knows exactly what security controls they have internally to protect the integrity of that code.

              • This might be an overstatement. We don’t know what internal security measures they have.

                When evaluating a security product that is fundamentally based on unverifiable promises, I think it makes sense to give more consideration to the scenarios where the promises are being broken than where they are being kept.

                If you're completely confident that the promises are being kept, the end-to-end encryption is not necessary.

                Your implication that our being unable to know "what internal security measures they have" makes their (implied) promise not to circumvent the encryption more, rather than less, believable... does not make sense to me.

                Even basic compliance requires separation of duties, which means a single person cannot carry out such a process (replacing code) end-to-end.

                What regulatory regime do you believe ProtonMail is complying with which makes it impossible for a single person to do something like that?

                Also, to be clear: I'm mostly not talking about replacing the code for everyone all the time (which is also possible but would have a much higher chance of someone noticing). I'm talking about doing it for specific users. And besides control of the right server, there are lots of other scenarios where a lone actor can do this without control of any proton servers at all but simply with the right TLS certificate and a suitable network position (eg, on-path between the user and server, or the user and their DNS recursor, or any number of other places). I think it is reasonable to assume that there are a two-digit number of people at ProtonMail who can do the kind of attacks I'm talking about. Any adversary (not only nation states) who wants to read a Proton user's mail simply needs to figure out how to coerce one of them into performing a small task for them.

                you will never in a million years get the average person to use PGP

                Sorry, but this is simply not true. I know lots of people who adopted PGP many years ago while being computer novices (eg, never used a terminal in their life). PGP has plenty of problems, but if you want to encrypt email, it is the standard (outside of the corporate S/MIME world). PGP will probably never be ubiquitous but neither will snakeoil things like Proton and Tuta.

                Email is also not the only encrypted communication option these days, and the incorrect perception that ProtonMail's end-to-end encryption provides meaningful security is undoubtedly preventing some of their customers from using better tools instead.

                the average person who is not trying to protect themselves from nation states is much better off with Proton than with Gmail, since that’s the realistic alternative

                There are plenty of other email providers which have reasonable-sounding privacy policies and don't supplement them with misleading technical claims. If you are willing to trust Proton, why not instead trust some other company that doesn't lie to you about the usefulness of browser-based encryption?

                Ultimately I think that calling these products snake oil is a misrepresentation of reality

                But, you do agree that, in contrast to non-web-based end-to-end-encryption solutions, web-based e2ee can always be unilaterally undetectably circumvented at any moment for specific users by a single insider or anyone with access to the right server, or the right TLS certificate, without exploiting any software bugs, right? You just think that isn't snakeoil? 🤷

                • When evaluating a security product that is fundamentally based on unverifiable promises, I think it makes sense to give more consideration to the scenarios where the promises are being broken than where they are being kept.

                  What you describe is an attack, though, not a feature. A broken promise would be if they intentionally did so, which is something for which we have no proof at all (and which would also be pretty stupid for them). They have no way to prevent the attack you mention completely because that's inherent to the fact that the same entity that serves the software handles the ciphertext. There is absolutely nothing they could do to improve their stance on this "promise".

                  Your implication that our being unable to know “what internal security measures they have” makes their (implied) promise not to circumvent the encryption more, rather than less, believable… does not make sense to me.

                  My stance is that you need, to some extent, to suspend judgement. They have an internal security team which likely considers this scenario one of the (main) ways in which their encryption could be broken. They might have extremely tight processes around it, which would make the scenario described very unlikely. It might also be that they have done nothing, but I can't know either way. This generates a risk which different people will weigh differently. Your stance seems pretty binary instead, while not everyone has the same risk appetite that Snowden has.

                  What regulatory regime do you believe ProtonMail is complying with which makes it impossible for a single person to do something like that?

                  They are HIPAA compliant (which I am not very familiar with, since I am not in the healthcare sector), but separation of duties is really a basic principle that even companies without any certification adopt. I see HIPAA does have provisions in terms of access control, but I don't know how they comply. Either way, this does not make it "impossible", but it's also not a given that a single employee could completely break their product.

                  Also, to be clear: I’m mostly not talking about replacing the code for everyone all the time (which is also possible but would have a much higher chance of someone noticing). I’m talking about doing it for specific users.

                  I understand. That would still require some level of persistence, and compromising what I am sure are plenty of replicas of the particular service (I doubt it's predictable which one will serve which user).

                  with the right TLS certificate and a suitable network position (eg, on-path between the user and server, or the user and their DNS recursor, or any number of other places)

                  Well, if an attacker has the ability to install the certificate on the device (which requires root), then even PGP encryption is probably toast (they could likely phish the user's password with a fake prompt and then read the private key). Either way, Proton uses Strict-Transport-Security headers and is on the HSTS preload list, so DNS poisoning and the like won't work: the browser will refuse to load the page and won't even prompt the user to accept the risk.
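
                  (For what it's worth, anyone can see that policy in the response headers; a quick check using the requests package, with the caveat that the exact values served may change over time:)

                  ```python
                  # Print the HSTS policy Proton's web endpoints send back.
                  import requests

                  for url in ("https://proton.me", "https://mail.proton.me"):
                      resp = requests.get(url, timeout=10)
                      hsts = resp.headers.get("Strict-Transport-Security", "<not set>")
                      print(url, "->", hsts)
                  ```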

                  Any adversary (not only nation states) who wants to read a Proton user’s mail simply needs to figure out how to coerce one of them into performing a small task for them.

                  Absolutely true, but then again, this is true of pretty much anybody. I work in the financial sector, and I can assure you that malicious employees could do a lot of damage if they wanted to; that's why the malicious insider is a threat right at the top of the list for every security department. As I said before, one malicious commit to an OSS repo for a library/tool that implements your e2ee and you have the same attack vector. Also, they might just as well coerce you, if they can, rather than an employee of a company.

                  Sorry, but this is simply not true. I know lots of people who adopted PGP many years ago while being computer novices (eg, never used a terminal in their life).

                  Apologies, but this might simply be a bubble you are in. I know one person who does, and I live in a tech bubble. PGP is incredibly annoying and the web of trust does not scale, and that's without even talking about the technical challenges (the tooling sucks). Proton has 100 million users alone (sure, many duplicates) and will never be mainstream, but I would be surprised if more than a million people use "vanilla" PGP (I have no numbers). My perception is also anecdotal, but PGP being borderline unusable is almost a meme.

                  Email is also not the only encrypted communication option these days, and the incorrect perception that ProtonMail’s end-to-end encryption provides meaningful security is undoubtedly preventing some of their customers from using better tools instead.

                  Yeah, perhaps. But then again, those people are probably not those who have this kind of attack in their risk model.

                  There are plenty of other email providers which have reasonable-sounding privacy policies and don’t supplement them with misleading technical claims. If you are willing to trust Proton, why not instead trust some other company that doesn’t lie to you about the usefulness of browser-based encryption?

                  Because their service is top-notch, and I don't consider the possibility of them being compromised to make their statements a lie. In my mind, the theoretical capability to do something does not mean that it is done or easy to do, especially because - again - I have no idea what other (internal) controls they have implemented to prevent and monitor that particular vector. For all we know, updating the frontend code might require employees to access some specific server that demands ad-hoc approval and the supervision of three people, plus a manual sign-off, after which the signature of the code (say, a container image) is verified before the system is locked again. I just made that up, of course, but I think they have plenty of people in-house who have figured out this risk too.

                  But, you do agree that, in contrast to non-web-based end-to-end-encryption solutions, web-based e2ee can always be unilaterally undetectably circumvented at any moment for specific users by a single insider or anyone with access to the right server, or the right TLS certificate, without exploiting any software bugs, right? You just think that isn’t snakeoil?

                  I think that web-based e2ee can be broken more quickly if the provider is compromised or there is a malicious insider and the provider does not have an appropriate level of security mitigations/controls, as it does not require user interaction. This is really the substantial difference compared to non-web e2ee, for whatever that's worth. So yeah, it's not snake oil in my view; it's an inherent property of web services and the best a web service can do. Actually, if you use Proton Bridge you are in the exact same situation as if you were using your favorite PGP CLI tool or plugin, so there is also that.

                  I wouldn't consider something snake oil just because it can be compromised; selling snake oil implies bad faith on the seller's part, which in Proton's case I have no reason to think is there.

                  • What you describe is an attack, though, not a feature

                    Yeah, i think it is a feature, and a very beneficial one for the people this system was designed for - those who want a lot of privacy-desiring users to settle on using an encryption solution which isn't too difficult to circumvent.

                    They have no way to prevent the attack you mention completely because that’s inherent to the fact that the same entity that serves the software handles the ciphertext.

                    Yep! That is what I've been saying: that is the problem with this architecture!

                    Note that, throughout this discussion, I'm not really just talking about Proton but rather them and Tuta and Hushmail and anything else that shares this architecture.

                    There is absolutely nothing they could do to improve their stance on this “promise”.

                    Well, they could be honest and inform their users: "to have the convenience of using webmail you must sacrifice the benefit of end-to-end encryption (not needing to trust the server and its operators to refrain from reading your messages)."

                    Do you think telling users that would surprise many/most of them, and cause them to stop using it? Could that be why they don't mention it?

                    They might have extremely tight processes [...] but I can’t know either way.

                    Yep. But no matter how tight their processes are, there are still single points of failure that can be coerced to gain access to anyone's email.

                    not everyone has the same risk appetite that Snowden has

                    It's funny you mention Snowden. Even he was naive enough to use a Proton/Tuta/Hushmail-like system back in 2013... it was called Lavabit. In the case of Lavabit, I think the operator was actually also naive and well-intentioned because when people investigating Snowden asked him to perform the exact attack i've been describing (which his architecture enabled him to) he instead opted to shut down the entire service and to notify Snowden and the world.

                    ProtonMail operating with the architecture they have in a post-Lavabit world (they were actually founded just after that happened, and rode the Snowden privacy-awareness wave to success) is pretty strong evidence that they would not shut down their system if the alternative was being forced to spy on some of their users.

                    (To be clear, Lavabit should've known better too, existing in a post-Hushmail world...)

                    if an attacker has the ability to install the certificate on the device

                    🤦 that is (obviously, i thought) not what i'm talking about. i know how the PKI and HSMs and HSTS and CT and CAA (protonmail's CAA records authorize 3 different CAs to sign for them) etc etc work, and their many failures over the years that have led to the current set of mitigations, and how HPKP worked/works (which, btw, i just checked, and protonmail is sending a public-key-pins-report-only: header, very nice 🤣) but I don't have the energy to explain to you why selling something as e2ee while it reduces to (among other things) specifically the security of TLS is dishonest.

                    Yeah, perhaps. But then again, those people are probably not those who have this kind of attack in their risk model.

                    I just checked their site and they still say it's "for journalists", and "we can never access your messages", etc etc.

                    Someone hiding from a violent criminal organization might well realize that they have a life-or-death "risk model" and yet not realize that ProtonMail's (lauded by knowledgeable people like yourself) security actually has numerous human single points of failure which their adversary can coerce to read their email.

                    The people who need security most are often people who lack the expertise to adequately evaluate the veracity of claims like ProtonMail's. They look to knowledgeable people (like you and i) to help them decide what is reasonable. Also, even very knowledgeable people who badly need security will sometimes sacrifice security for convenience (eg, Snowden; he also used other things, but, he used Lavabit too, presumably assuming that this type of attack, while possible, would not actually happen).

                    If what you want is not privacy from adversaries who can compromise your mailserver, but rather just protection from GMail reading your mail, then you don't need e2ee: you need a provider with a privacy policy you believe they will honor. By saying things like this:

                    screenshot of protonmail website with text: "Strong encryption at all times. Proton believes your data belongs to you. That’s why we use end-to-end encryption and zero-access encryption to ensure that only you can read your emails. We cannot read or give anyone else access to your emails. And this encryption happens automatically — no special software or tech skills required."

                    ... ProtonMail is demonstrating that they are not trustworthy. When they aren't circumventing their encryption, are they honoring their privacy policy with regard to the things the encryption doesn't protect (metadata like social graph, location, etc)? Why would you assume they are when they're lying about their ability to read your emails?

                    From your replies here, it's becoming clear to me that you do see this: if i understand you correctly, you are not saying that ProtonMail "cannot read or give anyone else access to your emails" as they are saying; rather you are just saying that you think it is very unlikely that they would ever abuse that capability and that you assume their procedures make it so that one rogue employee couldn't do it alone. You do seem to understand that, contrary to what they've written in the screenshot above, ProtonMail as a company technically could decide to. But, do you think most of their customers understand that?

                    Proton has 100 million users

                    I'm growing rather tired of this discussion, but I have a few more questions for you.

                    Given that they have 100 million users, which of these statements do you think is the most likely to be accurate:

                    1. ProtonMail has never been asked to circumvent their encryption
                    2. They get asked to frequently, and they always steadfastly refuse to do so
                    3. They get asked to frequently, and they almost always say no, but, depending on who is asking (and what kind of legal or other threats the request is sent with) they do it sometimes
                    4. They get asked to frequently, and they do it for anyone who represents law enforcement (or appears to?) in any country from some list of countries

                    Personally, I think #3 is a bit more likely than #4, while #1 and #2 are extremely unlikely.

                    So, my last questions are:

                    • If it were revealed that #4 were in fact the case, would you agree that it is snakeoil?
                    • If you agree with me that #3 is the most likely scenario, approximately how many times per hour/week/year would they need to be complying with these requests before you would agree that they are, in fact, snakeoil?

                    In any case, as you said, we "can’t know either way".

                    • Yeah, i think it is a feature, and a very beneficial one for the people this system was designed for - those who want a lot of privacy-desiring users to settle on using an encryption solution which isn’t too difficult to circumvent.

                      This you need to prove somehow. Has there been any attack that happened like this? Has any content been leaked this way, or provided to law enforcement? In other words, did they use this "feature" in any way? Because if this is just a design limitation, then it's not a feature; it's a risk, exactly as using someone else's code exposes you to supply chain risk. Would you say that anybody who uses an external library is actually a snake-oil seller with respect to the properties of their product, because if a supplier (library, dependency, etc.) gets compromised their product could be compromised? I wouldn't say so. I think that intentions matter here.

                      Note that, throughout this discussion, I’m not really just talking about Proton but rather them and Tuta and Hushmail and anything else that shares this architecture.

                      Yes, I understand.

                      Well, they could be honest and inform their users: “to have the convenience of using webmail you must sacrifice the benefit of end-to-end encryption (not needing to trust the server and its operators to refrain from reading your messages).”

                      But that's not true. End-to-end encryption simply means that the encrypt/decrypt operations happen on the client side. It doesn't mean the design is unbreakable. Following this logic, every piece of software that does PGP encryption should say "to have the convenience of not having to rewrite all the code ourselves, we use suppliers which might allow third parties to read your messages". Proton content is still end-to-end encrypted, with the code hosted publicly. The fact that vectors exist to invalidate that is not a reason to invalidate the whole thing, exactly as the existence of supply chain attacks is not a reason to dismiss the validity of e2ee for CLI tools and the like.
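
                      (Just to make the term concrete, here is a toy client-side exchange using PyNaCl boxes; it is only an illustration of "encrypt/decrypt on the client", not Proton's actual OpenPGP scheme:)

                      ```python
                      # Toy e2ee illustration: both encryption and decryption happen on the clients,
                      # so a server relaying `ciphertext` only ever sees ciphertext.
                      from nacl.public import Box, PrivateKey  # pip install pynacl

                      alice_sk = PrivateKey.generate()
                      bob_sk = PrivateKey.generate()

                      # Alice encrypts locally with her secret key and Bob's public key.
                      ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon")
                      # Bob decrypts locally with his secret key and Alice's public key.
                      assert Box(bob_sk, alice_sk.public_key).decrypt(ciphertext) == b"meet at noon"
                      # As the whole thread points out, this only holds as long as the code doing it
                      # (this script, or the JS a provider serves you) has not itself been tampered with.
                      ```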

                      Also, I mentioned the potential to use the bridge. That is a fully client-side tool which does not run in the browser, does that satisfy your risk appetite?

                      Yep. But no matter how tight their processes are, there are still single points of failure that can be coerced to gain access to anyone’s email.

                      They are a point of failure, but not necessarily a single point of failure (as in a single person).

                      but I don’t have the energy to explain to you why selling something as e2ee while it reduces to (among other things) specifically the security of TLS is dishonest.

                      But this was not your claim; your claim was that compromising them and serving backdoored JS was not the only way, and that an attacker in an appropriate network position could achieve the same. I am saying that particular vector does not apply, because your browser will actually refuse to load Proton without a valid certificate due to HSTS. So an attacker can tamper with the code only at either of the "ends" (either compromising them or compromising your endpoint).

                      I just checked their site and they still say it’s “for journalists”, and “we can never access your messages”, etc etc.

                      Just for reference, what I meant is that the people referred to by the statement "and the incorrect perception that ProtonMail’s end-to-end encryption provides meaningful security is undoubtedly preventing some of their customers from using better tools instead." are not those who have that risk model. Journalists and other at-risk people have technical consultants and are (hopefully?) aware of the risks, and can apply additional controls (for example, using Proton to send encrypted content). They are not the ones who would skip other - more secure - channels than email just because they read Proton's pages.

                      If what you want is not privacy from adversaries who can compromise your mailserver, but rather just protection from GMail reading your mail, then you don’t need e2ee: you need a provider with a privacy policy you believe they will honor.

                      e2ee is just a very nice and clear-cut way to enforce the privacy policy. Law enforcement can still get data from a provider; if the data is not collected, it cannot be handed over. Sure, it's possible that a three-letter agency will coerce Proton into compromising a user, but a) this has not happened yet (as far as we know?) and b) again, if that's part of your risks, don't use email, or only use email to send encrypted content...

                      Why would you assume they are when they’re lying about their ability to read your emails?

                      You seem to be really fixated on this statement, but it's not true. They don't have the "ability" to read emails. They have a setup that - provided the violation of controls that we both don't know about - can possibly grant them that ability. I really don't understand why you think it's different from any other software. If the NSA goes to https://www.gnupg.org and says "you know what, the next time you serve your software to IP x.x.x.x, you serve this package", you will never know, and your encryption is toast. Would you say that the folks behind GnuPG "have the ability to read your emails"? I wouldn't, because they are not backdooring the software, although the possibility exists for them, for contributors, and for national actors to do so.

                      rather you are just saying that you think it is very unlikely that they would ever abuse that capability and that you assume their procedures make it so that one rogue employee couldn’t do it alone. You do seem to understand that, contrary to what they’ve written in the screenshot above, ProtonMail as a company technically could decide to.

                      Yeah, you are correct. This is exactly the same as me saying that, technically, a lot of people in my organization could tamper with payments and violate the integrity of most UK bank transfers. In practice, there are a bazillion controls in place to ensure that this does not happen, and there are tons of safeguards before anything touches production, but theoretically my company could decide to break compliance, remove procedures, and allow a free-for-all on banking transactions before being fined/shut down/sent into the abyss.

                      I do believe that they have no interest whatsoever in abusing this architectural feature, but I agree that they could be coerced to. However, as I said before, I believe the same to be true of any other software, which is why I don't agree that the risk model is significantly different from many other tools. If anything, the fact that they are under Swiss jurisdiction might help, compared to a lot of (F)OSS entities which are in the US.

                      But, do you think most of their customers understand that?

                      No, I think most people don't.

                      which of these statements do you think is the most likely to be accurate:

                      I have no idea. I would say 1 or 3 are the most likely. It seems a very roundabout way (if I were a certain three-letter agency) to gain access to a small set of data, when I could compromise the whole device and maintain persistence much more conveniently (for example by coercing the ISP to give me access to the router and going from there, or by asking Apple and Microsoft directly, etc.).

                      If it were revealed that #4 were in fact the case, would you agree that it is snakeoil? If you agree with me that #3 is the most likely scenario, approximately how many times per hour/week/year would they need to be complying with these requests before you would agree that they are, in fact, snakeoil?

                      I would say that they should disclose that for sure, at least with a warrant canary, since they might actually not even be allowed to fully disclose it. I am fairly conflicted between the fact that government surveillance sometimes has reason to be exercised, provided a judge has vetted it and proper guarantees are in place (not the US way, to be clear), and the fact that it is routinely abused. I also believe that perfect security does not exist, and it's enough for me to send an encrypted attachment via Proton and mitigate this whole risk.

                      To answer your question, I would say that if this is a forced action that happened a handful of times, for extremely high profile cases and severe reasons, then I might still consider their claim legitimate. If it's a routine procedure to satisfy pretty much any request, then I would agree that this becomes more of a feature than an attack.

                      That said, I also have a couple of final questions for you too:

                      • Proton bridge runs on the client and does not use the browser. The code is open source. Since they provide this too, would you consider this on-par with using your favorite CLI/plugin for PGP? Would this solve the problem you raise?
                      • Do you think that it's possible that any of the 3-letter agencies could coerce a software author (or some collaborator) and produce a malicious release for the code that is served only to you (for example, by IP, fingerprint or other identifier) or that it activates only for you (device ID etc.)? For example, go to Kevin McCarthy and force him to produce a backdoored version of Mutt (http://www.mutt.org/download.html) that leaks your keys.
                      • Do you think that alternatively Github/Bitbucket for example could be coerced by said agencies to backdoor the version (and signature) you get for a given code, say https://bitbucket.org/mutt/mutt/downloads/mutt-2.2.12.tar.gz (maybe after graciously "asking" Kevin for his key to sign the software).
                      • If you think the above is possible, do you think there is any distributor of software that could not be coerced? And how is this vector actually different from Proton being forced to break their own encryption?
                      • If you agree that the above is possible, would you say that any claim about Mutt using PGP to e2e encrypt/decrypt your emails is snakeoil?
                      • Yeah, i think it is a feature, and a very beneficial one for the people this system was designed for - those who want a lot of privacy-desiring users to settle on using an encryption solution which isn’t too difficult to circumvent.

                        This you need to prove somehow.

                        I said "i think" because, unlike many of the other things I'm saying here which are statements of fact, my suggestion that ProtonMail specifically is designed for this attack to be possible is merely well-informed speculation.

                        Has there been any attack that happened like this?

                        See the links in my earlier comments for evidence of this kind of attack happening against all three of the other largest email providers with similar architectures as ProtonMail (Tuta, Hushmail, and Lavabit).

                        Also, I mentioned the potential to use the bridge. That is a fully client-side tool which does not run in the browser, does that satisfy your risk appetite?

                        If both users are using the bridge (assuming it is designed how I think it is), they would certainly be better off than if one or both of them is using the webmail e2ee. However, I would never use or recommend using protonmail, even with the bridge, because it is very likely that the people I'm writing to would often not be using the bridge. Also, because ProtonMail e2ee doesn't interoperate with anything else, and by using it I'd be endorsing it and encouraging others to use it ("it" being ProtonMail, which for most users is this webmail snakeoil).

                        Also, I don't know in detail how the bridge actually works, and, like most of the people I know who sometimes audit things like this... the open source bits from Proton like their bridge aren't interesting enough to be worth auditing for free (except perhaps by a security company, for their own marketing purposes) because, even if it turns out to be soundly implemented itself, it is a component of a non-interoperable proprietary snakeoil platform.

                        Yep. But no matter how tight their processes are, there are still single points of failure that can be coerced to gain access to anyone’s email.

                        They are a point of failure, but not necessarily a single point of failure (as in a single person).

                        From your earlier comments I think you're working from a mental model where an individual employee performing the attack would need to check something in to git or something like that, but, don't you think anyone with root on, say, one of the caching frontend webservers could do this? I suggest that you try to think about how you would design their system to prevent a single person from unilaterally doing it, and then figure out how you can break your design.

                        I am saying that particular vector does not apply, because your browser will actually refuse to load Proton without a valid certificate due to HSTS.

                        Yes, I get that you are saying that, but it's because you have not been hearing me say that HTTPS has been circumvented in numerous ways over the years and will continue to be. Do you think we've seen the last rogue certificate authority? Or the last HSM where (oops!) the key can actually be extracted?

                        Don't you think there is a reason why most modern software update mechanisms don't rely solely on HTTPS for authenticity of their updates?

                        🤔 I actually wonder why ProtonMail lists Digicert and Comodo alongside LetsEncrypt in their CAA DNS records. (Fwiw, they currently have a cert from LetsEncrypt, from my network perspective at least). Doesn't that mean that, against a browser supporting DNSSEC and CAA records, a rogue employee at any of those 3 companies can issue a cert that would allow this attack to be performed? (Of course, against a browser that isn't validating CAA with DNSSEC, anybody at any one of thousands of sub-CAs can also do it...)
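
                        (Anyone can check this themselves; a quick sketch using the dnspython package, with the caveat that the record set may of course change over time:)

                        ```python
                        # List the CAA records for Proton's domains to see which CAs are
                        # authorized to issue certificates for them.
                        import dns.resolver  # pip install dnspython

                        for name in ("proton.me", "protonmail.com"):
                            try:
                                for rdata in dns.resolver.resolve(name, "CAA"):
                                    print(name, "CAA", rdata.to_text())
                            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                                print(name, "no CAA records found")
                        ```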

                        at-risk people have technical consultants and are (hopefully?) aware of the risks, and can apply additional controls

                        As someone who has been one of those technical consultants, let me tell you, arguing with at-risk people about the veracity of posts on privacy forums singing the praises of things like protonmail is part of the job. 😭

                        If the NSA goes to https://www.gnupg.org and says “you know what, the next time you serve your software to IP x.x.x.x, you serve this package”, you will never know, and your encryption is toast. Would you say that the folks behind GnuPG “have the ability to read your emails”? I wouldn’t, because they are not backdooring the software, although the possibility exists for them, for contributors, and for national actors to do so.

                        This is a false equivalence in several ways:

                        • Targeting an IP address is much less useful than targeting a user by their username and password
                        • Careful users have the ability to (and many do) verify hashes and signatures of a downloaded program before they run it, unlike javascript on a web page (a minimal sketch of such a check follows this list)
                        • Users retain a copy of the program after downloading it and so often have evidence if an attack took place
                        • Many users obtain their GPG binaries from some distribution rather than the GnuPG website (read on about that...)
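
                        (The hash-checking step mentioned in the list above, as a minimal sketch; the filenames are placeholders standing in for a real release and its published checksum file:)

                        ```python
                        # Check a downloaded release tarball against a published SHA256SUMS file.
                        import hashlib

                        def sha256_of(path: str) -> str:
                            h = hashlib.sha256()
                            with open(path, "rb") as f:
                                for chunk in iter(lambda: f.read(1 << 20), b""):
                                    h.update(chunk)
                            return h.hexdigest()

                        expected = {}
                        with open("SHA256SUMS") as f:            # published alongside the release
                            for line in f:
                                if not line.strip():
                                    continue
                                digest, name = line.split(maxsplit=1)
                                expected[name.strip().lstrip("*")] = digest

                        fname = "mutt-2.2.12.tar.gz"             # example filename from this thread
                        print("OK" if expected.get(fname) == sha256_of(fname) else "MISMATCH", fname)
                        ```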

                        Again, these software distribution channels (eg, Linux distros, etc) have many of their own problems, but they are in a different league than javascript in a browser. Ways they're better include:

                        • These days, in many/most cases, at least two keys/people are required to compromise them. This isn't nearly enough but it is better than one.
                        • Other than by IP, users aren't identifying themselves before downloading things
                        • Users can access them from many different mirrors; there isn't a single server from which to target all users of a given distribution

                        rather you are just saying that you think it is very unlikely that they would ever abuse that capability and that you assume their procedures make it so that one rogue employee couldn’t do it alone. You do seem to understand that, contrary to what they’ve written in the screenshot above, ProtonMail as a company technically could decide to.

                        I do believe that they have no interest whatsoever in abusing this architectural feature, but I agree that they could be coerced to.

                        But, do you think most of their customers understand that?

                        No, I think most people don’t.

                        Isn't that because their web page says something to the contrary?

                        I have no idea. I would say 1 or 3 are the most likely.

                        Really? Scenario 1 is possible? You think a privacy-touting email service with 100M users might have never had a request to circumvent their encryption, despite being able to?

                        It seems a very roundabout way (if I were a certain three-letter agency) to gain access to a small set of data, when I could compromise the whole device and maintain persistence much more conveniently (for example by coercing the ISP to give me access to the router and going from there, or by asking Apple and Microsoft directly, etc.).

                        Again, I'm not just talking about 3 letter agencies, but anyone who wants to read someone's mail. And often there is a point where the email address is all that is known about the target.

                        Do you think that it's possible that any of the 3-letter agencies could coerce a software author (or some collaborator) and produce a malicious release for the code that is served only to you (for example, by IP, fingerprint or other identifier)

                        I use some mitigations I won't go into, but, yeah, on the system I'm typing this on I do sadly use a distribution which relies on a single archive signing key, so, if you compromise that key (or the people with access to it), and obtain a valid HTTPS certificate for the particular mirror I use, and you know the IP address I'm using at the moment I'm doing an OS update, you can serve me a targeted (by IP) malicious software update. 😢

                        that it activates only for you (device ID etc.)? For example, go to Kevin McCarthy and force him to produce a backdoored version of Mutt (http://www.mutt.org/download.html) that leaks your keys.

                        I think the vast majority of Mutt users don't get their Mutt binaries from Kevin McCarthy, and having him put a targeted backdoor in the source code would be foolish as it would be likely to be noticed by one of the mutt distributors who builds it before it gets distributed. Since reproducible builds still aren't ubiquitous, the best place to insert a widely-distributed-but-targeted-in-code backdoor would be at the victim's distributor's buildserver.

                        Do you think that alternatively Github/Bitbucket for example could be coerced by said agencies to backdoor the version (and signature) you get for a given code, say https://bitbucket.org/mutt/mutt/downloads/mutt-2.2.12.tar.gz (maybe after graciously “asking” Kevin for his key to sign the software).

                        Yes, but unlike the ProtonMail case there is a chance of being caught so it is a much higher risk for the attacker.

                        If you think the above is possible, do you think there is any distributor of software that could not be coerced? And how is this vector actually different from Proton being forced to break their own encryption?

                        There are a wide variety of software distribution paradigms, on a spectrum of difficulty to attack. At one end of the spectrum you have things like Bitcoin Core, where binaries are deterministically built and signed by multiple people, and many users actually verify the signatures to confirm that multiple builders (with strong reputations) have independently built an identical binary artifact. At the other end of the spectrum you have things like ProtonMail with zero auditability, users identifying themselves and re-downloading the software at each use, and numerous single points of failure that can be exploited to attack a specific user. Things like mainstream free software operating system distributions, macOS, Windows Update, etc sit somewhere in the middle of that spectrum.
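
                        (The Bitcoin Core end of that spectrum roughly reduces to a check like this; the attestation filenames here are made up purely for illustration:)

                        ```python
                        # Sketch of multi-party reproducible-build verification: several independent
                        # builders publish the hash they obtained for the same release artifact, and a
                        # user checks that they all agree and that the local download matches.
                        import hashlib
                        import json

                        ATTESTATIONS = ["builder-a.json", "builder-b.json", "builder-c.json"]  # hypothetical

                        published = set()
                        for path in ATTESTATIONS:
                            with open(path) as f:
                                published.add(json.load(f)["sha256"])

                        with open("release.tar.gz", "rb") as f:
                            local = hashlib.sha256(f.read()).hexdigest()

                        if published == {local}:
                            print("all builders agree and the local artifact matches")
                        else:
                            print("disagreement or mismatch; do not trust this artifact")
                        ```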

                        If you agree that the above is possible, would you say that any claim about Mutt using PGP to e2e encrypt/decrypt your emails is snakeoil?

                        No. See previous answers for the massive differences.
