Tag: News

  • Jurisdiction Is Nearly Irrelevant to the Security of Encrypted Messaging Apps

    Every time I lightly touch on this point, someone insists on arguing with me about it, so I thought it would be worth writing a dedicated, singularly focused blog post about this topic without worrying too much about tertiary matters.

    Here’s the TL;DR: If you actually built your cryptography properly, you shouldn’t give a shit which country hosts the ciphertext for your messaging app.

    The notion of some apps being somehow “more secure” because they shovel data into Switzerland rather than a US-based cloud server is laughable.

    But this line of argument sometimes becomes sinister when people evangelize storing plaintext instead of using end-to-end encryption, and then try to justify the absence of cryptography by appealing to jurisdiction.

    That more extreme argument is patently stupid. That is all I will say about it, lest this turn into a straw man argument. But if I didn’t bring it up somewhere, someone would tell me I “forgot” about it, so I’m mentioning it for completeness.

    Let’s start with the premise of the TL;DR.

    What does “actually [building] your cryptography properly” mean?

    Properly Built Cryptography

    An end-to-end encrypted messaging app isn’t as simple as “I called AES_Encrypt() somewhere in the client-side code. Job done!”

    If you’ve implemented the cryptography properly, you might even be a contender for a real alternative to Signal. This isn’t an exercise for the faint of heart.

    To begin with, you need to solve key management. This means both client-side secret-key management (and deciding whether or not to pass The Mud Puddle Test) and providing some mechanism for validating that the public key vended by the server is the correct one for the other conversation participant.

    The cryptography community tried for over three decades to make “key fingerprints” happen, but I know professional cryptographers who have almost never verified a PGP key fingerprint or Signal safety number in practice. I’m working on a project to provide Key Transparency for the Fediverse. This is a much better starting point. Feel free to let power users do whatever rituals they want, but don’t count on most people bothering.

    Separately, the app that ships the cryptography should itself strictly adhere to reproducible builds and binary transparency (e.g., Sigstore).

    What’s This About Transparency?

    Both “Key Transparency” and “Binary Transparency” are specific instances of a general notion of using a Transparency Log to keep a privileged system honest.

    Also, “Key Transparency” is shorthand: the thing that you’re being incredibly transparent about is a user’s public keys. If that weren’t the case, key transparency would be a dangerous and scary idea.

    If you don’t know what a public key is, this blog post might be too technical for you right now.

    If that’s the case, start here to get a sense for how people try to explain it simply.

    Separate from both of those topics, Certificate Transparency is already being used to keep the Certificate Authorities that secure Internet traffic honest.

    But either way, they’re just specific instances of using a transparency log to provide some security property to an ecosystem.

    What’s a Transparency Log?

    A transparency log is a type of log or ledger that uses an append-only data structure, such as a Merkle tree.

    They’re designed such that anyone can verify the integrity and consistency of the log’s entries. See this web page for more info.
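
    To make this concrete, here is a minimal sketch of verifying a Merkle inclusion proof, in the style of RFC 6962 (Certificate Transparency). The function names and the proof format are illustrative only, not any particular log’s actual API.

    ```python
    import hashlib

    def leaf_hash(data: bytes) -> bytes:
        # The 0x00 prefix domain-separates leaf hashes from interior-node hashes.
        return hashlib.sha256(b"\x00" + data).digest()

    def node_hash(left: bytes, right: bytes) -> bytes:
        # The 0x01 prefix marks interior nodes.
        return hashlib.sha256(b"\x01" + left + right).digest()

    def verify_inclusion(entry: bytes, proof: list[tuple[str, bytes]], root: bytes) -> bool:
        """Recompute the root from a leaf and its audit path of sibling hashes."""
        h = leaf_hash(entry)
        for side, sibling in proof:
            h = node_hash(sibling, h) if side == "left" else node_hash(h, sibling)
        return h == root

    # A four-entry toy log. (A real log also offers consistency proofs between
    # successive roots, which is what makes it verifiably append-only.)
    entries = [b"alice:pk1", b"bob:pk2", b"carol:pk3", b"dave:pk4"]
    leaves = [leaf_hash(e) for e in entries]
    root = node_hash(node_hash(leaves[0], leaves[1]), node_hash(leaves[2], leaves[3]))

    # Proving bob:pk2 is in the log takes only two sibling hashes, not the whole log.
    proof_for_bob = [("left", leaves[0]), ("right", node_hash(leaves[2], leaves[3]))]
    assert verify_inclusion(b"bob:pk2", proof_for_bob, root)
    ```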

    Sometimes you’ll hear cryptographers talk about a “secure bulletin board” in a protocol design. What they almost always mean is a transparency log, or something fancier built on top of one.

    If this vaguely sounds blockchainy to you, you would be correct: Every cryptocurrency ledger is a consensus protocol (often “proof-of-work”) stapled onto a transparency log, and from there, they build fancier features like smart contracts and zero-knowledge virtual machines.

    Independent Third-Party Monitors Are Essential

    There is little point in running any sort of transparency log if you do not have independent third parties that monitor the log entries.

    Even better if you take a page out of Sigsum’s book and implement witness co-signatures as a first-class feature.

    What Does Transparency Give You?

    If you’re wondering, “Okay, so what?” then let me try to connect the dots.

    If you want to surreptitiously compromise a messaging app, you might try to:

    1. Backdoor the client-side software.

      But binary transparency and reproducible-build verification make this extremely easy to detect (or, worse for the attacker, mitigate). A sketch of the core check follows this list.

    2. Compromise the server to distribute the wrong public keys.

      But key transparency prevents the server from successfully lying about the public keys that belong to a given user. Additionally, it prevents the server from changing history without being detected.
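
    Here’s a toy sketch of the check at the heart of the first item: an independent party rebuilds the app from source and compares its digest against the one recorded for the release in the transparency log. How that logged digest is fetched and authenticated is hand-waved here; real tooling (Sigstore’s, for instance) has its own clients and formats.

    ```python
    import hashlib

    def artifact_digest(path: str) -> str:
        # Stream the file so large binaries don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def rebuild_matches_release(local_build_path: str, logged_digest: str) -> bool:
        # If an independent rebuild disagrees with the logged digest, either the
        # build isn't reproducible or the published release was tampered with.
        return artifact_digest(local_build_path) == logged_digest
    ```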

    For a more detailed treatment, refer to the threat model I wrote for the public key directory project.

    What Else Is Needed for Proper Implementations?

    Once you have reproducible builds, binary transparency, secret-key management (which may or may not include secure backups), and public key transparency, you next need to actually ship a secure end-to-end encryption protocol.

    The two games in town are MLS and the Signal Protocol. My previous blog post compared the two. They provide subtly different security properties, serve slightly different use cases, and have similar but not identical threat models.

    If you want to go with a third option, it MUST NOT tolerate plaintext transmission at all. Otherwise, it doesn’t qualify.

    If your use case focuses on efficiently scaling group chats up to large numbers of participants, and you don’t care about obfuscating metadata or social graphs, you might find MLS a more natural fit for your application.

    Cryptographers use formal notions to describe the security goals of a system, and prove the security of a design in a game-based model by showing that an attacker’s advantage stays below some threshold (usually something like “the birthday bound of a 256-bit random function”).

    If you use the same algorithm (e.g., a hash function) in more than one place, you should take extra care to use domain-separation. Both of the protocols I mentioned above do this properly, but any custom features you introduce will also need to be implemented with great care.
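
    A minimal sketch of what that looks like in practice follows, with made-up context labels. The point is that the same hash function is never used for two purposes without a distinguishing, unambiguous prefix.

    ```python
    import hashlib

    def h(domain: bytes, message: bytes) -> bytes:
        # Length-prefixing the domain ensures (b"ab", b"cd") can never collide
        # with (b"a", b"bcd") by shifting bytes across the boundary.
        return hashlib.sha256(len(domain).to_bytes(8, "big") + domain + message).digest()

    # Hypothetical labels for two different uses of SHA-256 in one protocol:
    payload = b"example input bytes"
    commitment = h(b"ExampleApp v1 key-commitment", payload)
    message_id = h(b"ExampleApp v1 message-id", payload)

    # Identical inputs, unrelated outputs: a hash computed in one context
    # cannot be confused with, or replayed as, a hash from the other.
    assert commitment != message_id
    ```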

    Your protocol should not allow the server to do dumb things, like control group memberships. Also, don’t even think about letting any AI (not even a local model) have access to message contents.

    Once you think you’re secure, you should hire cryptographers and software security experts to audit your designs and try to break them. This is something I do professionally, and I’ve written about my general approach to auditing cryptography products if you’re interested.

    Any mechanisms (static analysis, etc.) you can introduce into your CI/CD pipeline that will fail the build if you introduce a memory-safety bug or cryptographic side-channel are a wonderful idea.

    Section Recap

    If you actually built your cryptography correctly, then it should always be the case that the server never sees any plaintext messages from users.

    Furthermore, if the server attempts to substitute one user’s public key for another, it will fail, due to key transparency, third-party log monitors, and automatic Merkle tree inclusion proof verification.

    While you’re at it, your binary releases should be reproducible from the source code, and the release process should emit attestations on a binary transparency log.

    If you do all this, and manage to avoid introducing cryptographic vulnerabilities in your app’s design, congratulations! You have properly implemented the cryptography.

    Interlude: Who’s Proper Today?

    As of right now, there isn’t a perfect answer. I’m setting a high bar, after all. The main sticking point is key transparency.

    WhatsApp uses key transparency, but is owned by Meta and is shoving AI features into the product, so I doubly distrust it. Factor in WhatsApp being closed source, and it’s immediately disqualified.

    Matrix, OMEMO, Threema, Wire, and Wickr all rely on key fingerprints. The same can be said for virtually every PGP-based product (e.g., DeltaChat).

    As of this writing, Signal’s key transparency feature still has not shipped (though it is being developed).

    Today, “safety numbers” are Signal’s mechanism for detecting whether a conversation partner’s public key has been substituted. This is morally equivalent to key fingerprints. As soon as the key transparency feature launches, Signal will be a proper implementation.

    Signal offers reproducible builds, but there isn’t enough attention on third-party verification of their builds. This is probably more of an incentive problem than a technical one.

    None of the mainstream apps currently use binary transparency, but that’s an easier lift.

    Enter, Jurisdiction

    Now that the premise has been explained in sufficient detail, let’s revisit the argument I made at the top of the page:

    If you actually built your cryptography properly, you shouldn’t give a shit which country hosts the ciphertext for your messaging app.

    At the bottom of the cryptography used by a properly built E2EE app, you will have an AEAD mode which carries a security proof that, without the secret key, an encrypted message is indistinguishable from an encryption of all zeroes of the same length as the actual plaintext.

    This means that the country of origin cannot learn anything useful about the actual contents of the communication.

    They can only learn metadata (message length, if padding isn’t used; time of transmission; sender/recipients). Metadata resistance isn’t a goal of any of the mainstream private messaging solutions, and the tools that do pursue it generally build atop the Tor network. This is why the threat model discussion in the previous section matters.
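
    To see the indistinguishability property in action, here’s a minimal sketch using ChaCha20-Poly1305 from the PyCA cryptography package (the plaintext is, obviously, made up):

    ```python
    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()
    aead = ChaCha20Poly1305(key)

    secret = b"meet me at the usual place at noon"
    real = aead.encrypt(os.urandom(12), secret, None)
    decoy = aead.encrypt(os.urandom(12), b"\x00" * len(secret), None)

    # Without the key, no observer can tell `real` from `decoy` better than
    # a coin flip. The only thing the server learns is the length:
    assert len(real) == len(decoy)  # plaintext length + 16-byte Poly1305 tag
    # (A real protocol also has to manage nonces with care; random nonces
    # here are purely for the demo.)
    ```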

    Regardless, if the only thing you’re seeing on the server is encrypted data, then where the data is stored doesn’t really matter at all (outside of general availability concerns).

    But What If The Host Country…

    …Wants to Stealthily Backdoor the App?

    Binary transparency and reproducible builds would prevent this from succeeding stealthily. If the government wants the attack to succeed, they have to accept that it will be detected.

    …Legally Compels the App Store to Ship Malware?

    This is an endemic risk to smartphones, but binary transparency makes this detectable.

    That said, at minimum, the developer should control their own signing keys.

    …Wants to Replace A User’s Public Key With Their Own?

    Key transparency + independent third-party log monitors. I covered this above.

    …Purchases Zero-Day Exploits To Target Users?

    This is a table-stakes risk for virtually all high-profile software. But if you think your threat model is Mossad, you’re not being reasonable.

    When Does Jurisdiction Matter?

    If the developers for an app do not live in a liberal democracy with a robust legal system, they probably cannot tell their government, “No,” if they’re instructed to backdoor the app and cut a release (stealth be damned).

    Of course, that’s not the only direction a government demand could take. As we saw with Shadowsocks, sometimes they’re only interested in pulling the plug.

    If you’re worried about the government holding a gun to some developer’s head and instructing them to compromise millions of people–including their own employees and innocent civilians–just to specifically get access to your messages, you might be better served by learning some hacker opsec (STFU is the best policy) than trying to communicate at all.

    In Conclusion

    If you’re trying to weigh the importance of jurisdiction in your own personal risk calculus for deciding between different encrypted messaging apps, it should rank near the very bottom of your list of considerations.

    I will always recommend the app that actually encrypts your data securely over the one that shovels weakly-encrypted (or just plaintext) data to Switzerland.

    It’s okay to care about data sovereignty (if you really want to), but that’s really not a cryptographic security consideration. I’ve found that a lot of Europeans prioritize this incorrectly, and it’s kind of annoying.


    Header art: AJ, photo from FWA 2025 taken by 3kh0.

  • Unwanted Person in Fursuit Parade Anthrocon 2025

    Anthrocon was made aware of a controversial individual who participated in the Fursuit Parade on July 5th, 2025. After careful review with our team members, we believe that this individual specifically and intentionally circumvented our Event and Safety measures (including costume review) with the intention of causing a scene and disruption.

    We want to reassure the members of our community that this individual is not allowed membership at Anthrocon now or in the future. Given the urgent and sensitive nature of the individual’s actions, we made a special exception to our policy of not discussing bans or banned individuals, as we want our attendees to know that we hear them and take their concerns seriously.

    Anthrocon strives to be a positive and supportive member of our furry community. The Anthrocon staff and venue partners want to thank everyone for their patience and information as we endeavor to host the best furry event that we possibly can. Sincerely, Anthrocon, Inc.

  • Conbook Cover for CeSFur 2025

    Conbook cover for @CeSFuR 2025. Ride on in the post-apocalyptic world! The main characters: Axel the raccoon, Lyra the coyote, and Drake the crocodile.

  • Checklists Are The Thief Of Joy

    I have never seen security and privacy checklists used for any other purpose but deception.

    After pondering this observation, I’m left seriously doubting if comparison checklists have any valid use case except to manipulate the unsuspecting.

    But before we get into that, I’d like to share why we’re talking about this today.

    Recently, another person beat me to the punch of implementing MLS (RFC 9420) in TypeScript. When I shared a link to their release announcement, one Fediverse user replied, “How does this compare to Signal’s protocol?”

    Great! A fair question from a curious mind. Love to see it.

    But when I started drafting a response, I realized that any attempt to write any sort of structured comparison would be misleading. They’re different protocols with different security goals, and there’s no way to encapsulate this nuance in a grid of green, yellow, and red squares to indicate trustworthiness.

    But that doesn’t stop bullshit like this (alternate archive) from existing.

    This is a wonderful case study in how to deceive someone with facts.

    When you first load the page, the first thing you’re shown is some “summary” fields, including a general “Is this app recommended?” field with “Yes”/”No”. This short-circuits the decision-making for people too lazy or clueless to read on.

    And then immediately after that, the very first thing you’re given is jurisdiction information.

    (Image: an excerpt from the website linked above, where they emphasize jurisdiction information.)

    This is a website that bills itself as a comparison for “secure messaging apps”.

    Users shouldn’t have to care about jurisdiction if the servers cannot ever read their messages in the first place. Any app that fails to meet this requirement should wholesale be disqualified.

    The most important questions that actually matter to security:

    1. Is end-to-end encryption turned on by default?
    2. Can you (accidentally, maliciously) turn it off?

    If the answers aren’t “yes” and “no”, respectively, your app belongs in the garbage. Do not pass Go.

    But this checklist wasn’t written by a cryptography expert. If it were, there would be more information about the protocols used than a collection of primitives used under-the-hood with arbitrary coloring.

    Why does “X25519 / XSalsa20 256 / Poly1305” get a green box but “Curve25519 256 / XSalsa20 256 / Poly1305-AES 128” get a yellow box? Actually, why does it refer to the same algorithm as X25519 and Curve25519 in different cells? Hell if I know. I’d wager the author doesn’t, either.

    Now, I don’t want to belabor the point and pick on this checklist in particular. It’s not that this specific checklist is the problem. It’s that all checklists are.

    The entire idea of using checklists to compare apps like this is fundamentally flawed. It’s like trying to mentally picture a 1729-dimensional object on a 2-dimensional screen.

    Not only will you inevitably be wrong, but your audience will think you’re somehow being objective while you do it.

    How Do You Compare Signal to MLS?

    Since I brought it up above, I might as well talk about this here.

    The Signal Protocol was designed to provide state-of-the-art encryption for text messages between mobile phone users. It has since slowly expanded its scope to include desktop users and people who don’t want to give their phone numbers to strangers. Signal does a lot of cool stuff, and I’ve spent a weekend reviewing how its cryptography is implemented. Signal didn’t give a hoot about interop, and probably won’t for the foreseeable future, either.

    The MLS protocol is an IETF RFC intended to standardize a reasonable protocol for encrypted messaging apps. It was meant to eventually be interoperable across apps/devices.

    Signal uses a deniable handshake protocol. MLS does not.

    Signal tries to hide the social graph from the delivery service. MLS does not.

    Signal’s approach to group messaging is an abstraction over 1:1 messaging, with zero-knowledge proofs to hide group memberships from the Signal server. Because this is an abstraction, it’s trivial to send a different message to each member of a group, and consistent histories are not guaranteed.

    MLS proposes an efficient scheme for continuously agreeing on a group secret key. This kind of setup makes “invisible salamanders”-style attacks on a group conversation untenable.

    There are a lot of additional things that libsignal offers out-of-the-box, that you won’t get with MLS. Soon, key transparency may be on the list of things Signal offers but MLS doesn’t.

    Ultimately, both protocols are good. They’re certainly way better choices than OpenPGP, OMEMO, Olm, MTProto, etc.

    When I began drafting ideas for end-to-end encryption for the Fediverse, my starting point for this idea was MLS, not the Signal Protocol. Your social graph is already visible to ActivityPub, so there’s little value in trying to hide it with deniable handshakes. Furthermore, efficient group key agreement makes conversations involving dozens or even hundreds of participants scale better.

    (You may also be interested in knowing that the author of the ActivityPub E2EE draft specification also settled on the MLS protocol.)

    Your mileage may vary. Talk to your cryptographer. If you do not have a cryptographer, hire one before you design your own protocol.

    If you want me to give your design a once-over, see this page for more information.

    How Do Experts Make Secure Messaging App Recommendations?

    During my review of the cryptography used by Signal, I explained my personal approach to cryptography audits. We’re doing the same sort of thing here, but for messaging app recommendations.

    First, you need to let go of “lists” and “tables” entirely.

    You’re going to be working with graphs. A flow-chart (where sections can be added as-needed) might be a suitable deliverable, but only if your audience can follow one.

    Above, I mentioned that the first two questions you ask are:

    1. Is end-to-end encryption turned on by default?
    2. Can you (accidentally, maliciously) turn it off?

    If you stop there, you can sort of call it a list, but the immediate next question I ask is, “What is the use-case and threat model for the app?”

    There is no yes/no wiring here (except to fail any app that doesn’t have a coherent threat model to begin with). It’s open-ended and always requires a deeper analysis.
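
    If it helps to see the shape of it, here’s a toy encoding of those first gates. Everything after them is deliberately not a boolean, which is exactly why the process can’t be flattened into a checklist:

    ```python
    def passes_first_gates(e2ee_on_by_default: bool,
                           e2ee_can_be_disabled: bool,
                           has_coherent_threat_model: bool) -> bool:
        # The only hard yes/no wiring in the entire process:
        if not e2ee_on_by_default or e2ee_can_be_disabled:
            return False  # straight to the garbage; do not pass Go
        if not has_coherent_threat_model:
            return False  # nothing coherent to evaluate the app against
        return True

    # Past this point, the questions are open-ended and depend on the app's
    # stated use case and threat model; they do not reduce to booleans.
    ```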

    If you want to see what a rudimentary threat model looks like, see the one I wrote for my public key directory project.

    Depending on the intended use and threat model of the app in question, a lot of different follow-up questions will arise. It wouldn’t make sense to ask about elliptic curve choice if an app is fully committed to non-hybrid ML-KEM, after all.

    Takeaways

    If you see a dumb checklist trying to convince you to use a specific app or product, assume some marketing asshole is trying to manipulate you. Don’t trust it.

    If you’re confronted with a checklist in the wild and want an alternative to share instead, consider Privacy Guides, which doesn’t attempt to create comparison tables for its recommendations within a given category of tool.


    Header art: AJ.

    The title is a reference to the quote, “Comparison is the thief of joy.”

    Also, I’m specifically talking about comparison checklists, not every list of any shape or size that has a space for a checkbox in any or every industry. Please don’t @ me with your confusion if you didn’t pick up on this.

  • Anthrocon 2025 Unofficial Total

    As of when I am posting this (7:28 AM, July 7, 2025), Anthrocon has not officially posted any attendance number. In fact, the only thing that comes close is what was announced during Closing, which is

    18,357

    I will update this post when the official numbers are released.

  • Anthro Irish 2025 Schedule

    🚀Events Schedule!🚀

    With AnthroIrish only a few weeks away, prepare for launch on Saturday July 26th with our galactic events schedule!

    With so many exciting panels and events to choose from, it is going to be a stellar weekend! 👨‍🚀🌌

    Along our space adventure, many new panels and events are taking place, such as our Variety Show, Girls Club, DJ 101 with Tai Husky, and so much more! 🕺

    We would like to take a moment to thank all those who made this events schedule possible! 🪐

  • Why You Can’t Lose Fat in THIS Area (And Other Weight Loss Facts)

    You’ve been exercising, eating better, and doing everything right, but one area just won’t budge. Is it your workout routine, your diet, or something completely unexpected? In this new video, you’ll find out the hidden reasons behind stubborn fat and the surprising truth about weight loss.

    🔔 Don’t forget to SUBSCRIBE! 🔔

    SUGGEST A TOPIC:
    https://bit.ly/suggest-an-infographics-video

    💬 Come chat with me: https://discord.gg/theinfoshow

    🔖 MY SOCIAL PAGES
    TikTok ► https://www.tiktok.com/@theinfographicsshow
    Facebook ► https://www.facebook.com/TheInfographicsShow

    📝 SOURCES:
    https://freepaste.link/jj3mc15wj4

    All videos are based on publicly available information unless otherwise noted.
