To understand my point, I need to first explain three different cryptography attack papers / blog posts. I promise this won’t be boring.
Three Little Disclosures
Misuse-Prone Ciphers For All
In a blog post titled Carelessness versus craftsmanship in cryptography, cryptography analyst and Queer in Cryptography emcee Opal Wright delves into the misuse-prone and side-channel-riddled JavaScript and Python implementations of the AES block cipher.
The two implementations (aes-js, pyaes) aren't just wildly insecure; they also saw widespread adoption, from VPN management software to cryptocurrency wallets.
Despite the problems being disclosed to the developer in 2022, and despite the enormous downstream impact, with millions of dollars potentially at stake, the response was remarkably cavalier.
The AES block cipher is a cryptographic primitive, so it’s very important to understand and use it properly, based on its application. It’s a powerful tool, and with great power, yadda, yadda, yadda.
ricmoo
Yikes.
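To make the misuse risk concrete: a raw block-cipher API pushes developers toward ECB-style use, where every block is encrypted independently with no nonce or chaining. Here's a toy sketch of why that's bad; a shuffled permutation table stands in for AES (Python's stdlib has no AES), so this is an illustration of the mode's failure, not real encryption:

```python
import random

# Toy stand-in for a block cipher: a fixed pseudorandom permutation over
# single-byte "blocks". (AES isn't in Python's stdlib; the point here is
# the mode, not the cipher.)
rng = random.Random(42)
perm = list(range(256))
rng.shuffle(perm)

def toy_ecb_encrypt(plaintext: bytes) -> bytes:
    # ECB: every block is encrypted independently, with no nonce or chaining.
    return bytes(perm[b] for b in plaintext)

pt = b"ATTACK AT DAWN. " * 2  # two identical plaintext "halves"
ct = toy_ecb_encrypt(pt)

# Identical plaintext blocks yield identical ciphertext blocks, so the
# structure of the message leaks straight through the "encryption".
assert ct[:16] == ct[16:]
```

Hand someone a raw block cipher and this is the mistake they'll make by default; a well-designed API doesn't even expose the opportunity.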

A Tale of Three Password Managers
A group of cryptography researchers published a paper titled Zero Knowledge (About) Encryption recently, which delves into the security of several cloud-based password managers. They will be presenting their paper at Real World Cryptography 2026 next month.
Despite being a cryptography paper available on IACR’s ePrint server, it’s relatively approachable. Some cryptography papers are so incomprehensible to folks lacking domain-specific knowledge that it’s tempting to wonder if the paper is, itself, encrypted.
This paper is different. It begins by pointing out how vague the published threat models for these products are, and then makes a reasonable argument that their customers probably believe these products provide specific security properties that are important to them.
And then it goes on to eviscerate LastPass, Bitwarden, and Dashlane’s security in practice under those very reasonable interpretations of the products’ threat models.
Amazing. (Art: MarleyTanuki)
Some of the attacks are interesting, but many of them are the same failure modes we’ve known about since at least 2002: Unauthenticated AES-CBC is vulnerable to padding oracle attacks (thanks Vaudenay). I’ve written about this before.
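For anyone who hasn't seen the 2002-era failure mode up close, here's a minimal sketch of both the leak and the fix. The function names are mine, not from any password manager's code: the distinguishable padding error is the oracle, and authenticating the ciphertext before any padding logic runs closes it.

```python
import hmac
import hashlib

def pkcs7_unpad(block: bytes) -> bytes:
    """Strip PKCS#7 padding. The distinguishable error below is the oracle:
    if an attacker can tell "bad padding" apart from any other failure
    (error message, status code, timing), they can decrypt CBC ciphertexts
    byte-by-byte (Vaudenay, 2002)."""
    n = block[-1]
    if n < 1 or n > len(block) or block[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return block[:-n]

def verify_tag(mac_key: bytes, ciphertext: bytes, tag: bytes) -> None:
    """The fix: authenticate the ciphertext (encrypt-then-MAC, or just use
    an AEAD mode) and reject forgeries with one uniform error *before* the
    padding logic ever runs, so there is nothing left to leak."""
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("invalid ciphertext")

# The leak, demonstrated: valid and invalid padding are distinguishable.
assert pkcs7_unpad(b"hello\x03\x03\x03") == b"hello"
try:
    pkcs7_unpad(b"hello\x03\x03\x02")
except ValueError as e:
    assert str(e) == "bad padding"
```

In practice, "just use an AEAD mode" (AES-GCM, ChaCha20-Poly1305) makes the entire class of bugs unreachable, which is exactly the kind of boring we should want.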
Recent Matrix Disclosures
I’m not going to spend too much time on this one, since I just wrote a very long blog post about that topic.
The Matrix team posted a blog post that demonstrated a critical lack of understanding. I added an addendum to my original disclosure blog post to address their response, but a lot of people didn’t bother reading it.

Matrix’s “debunk” (archive) can be summarized as:
- We constructed a strawman argument of the issue and argue that there’s no impact.
- The actual impact is outside of our threat model, which says fuck all about cryptography to begin with.
And that was apparently convincing enough to mislead some people and provoke others to defame or harass me.
Classy. (Art: CMYKat)
Responsibility and Cryptography
Matt Green wrote this recently, about a lawsuit against WhatsApp that alleged Meta had access to plaintext messages.
When I’m speaking to laypeople, I like to keep things simple. I tell them that cryptography allows us to trust our machines. But this isn’t really an accurate statement of what cryptography does for us. At the end of the day, all cryptography can really do is extend trust. Encryption protocols like Signal allow us to take some anchor-point we trust — a machine, a moment in time, a network, a piece of software — and then spread that trust across time and space. Done well, cryptography allows us to treat hostile networks as safe places; to be confident that our data is secure when we lose our phones; or even to communicate privately in the presence of the most data-hungry corporation on the planet.
But for this vision of cryptography to make sense, there has to be trust in the first place.
Phil Rogaway wrote a paper in 2015 titled The Moral Character of Cryptographic Work.
If you take a step back and look at the three issues closely enough, you’ll notice a pattern start to emerge: Many developers of cryptography software are neglecting the duty of care they have for their users.
The term “duty of care” is a legal one, but let me be clear: I am not making a legal argument here. It’s just the closest phrase I can find that encapsulates what I mean.
When confronted with research about novel attack strategies, the ol’ reliable “this is outside our threat model” gets invoked by the vendor.
But, as is often the case: "What fucking threat model?"

If you can even find a formal threat model document, it’s often vague, poorly specified, out-of-date, and foolishly scoped.
But if you instead construct a mental model of the assets, assumptions, threat actors, and risks for a product or service by reading its marketing copy, you will end up with a wildly different understanding.
This is the original sin of most cryptography software: its developers didn't start with a specification, and they never had a clear model of which threats they're mitigating and which ones they're choosing not to mitigate.
Counter-Example: Public Key Directory
I’ve been working since June 2024 on Key Transparency for the Fediverse in order to make end-to-end encryption for ActivityPub possible. Integrating the two should be relatively simple.
Earlier today, a friend on Signal pointed out:
The mechanism I added to allow security researchers (e.g., the folks behind Have I Been Pwned?) to be able to issue a special “revocation token” for compromised accounts? That could be used to permanently lock someone out of key transparency and therefore prevent them from using encryption.
This is true, but the attack requires a secret key compromise to pull off. I still added it to the threat model and left it classified as an “open” risk anyway, because it’s better to be transparent and vocally self-critical if you’re building software that extends trust.
Which, y’know, is what cryptography does.
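To sketch the trade-off (with HMAC standing in for a real digital signature like Ed25519, since Python's stdlib doesn't ship one, and with names that are illustrative rather than the actual Public Key Directory API): whoever holds the account's secret key can mint a valid revocation token, which is both the feature and the risk.

```python
import hmac
import hashlib

# Hypothetical sketch only. In the real design a revocation token would be
# a digital signature (e.g. Ed25519); HMAC stands in here because Python's
# stdlib has no signature scheme. Names are illustrative, not the actual
# Public Key Directory API.

REVOKE_CONTEXT = b"fedi-pkd-revocation-v1"

def issue_revocation_token(account_secret_key: bytes, account_id: str) -> bytes:
    # Anyone holding the account's secret key can mint this token: that is
    # the feature (researchers revoking compromised keys) and also the risk
    # (an attacker with the key can lock the victim out of key transparency).
    msg = REVOKE_CONTEXT + b"|" + account_id.encode()
    return hmac.new(account_secret_key, msg, hashlib.sha256).digest()

def verify_revocation_token(account_secret_key: bytes, account_id: str, token: bytes) -> bool:
    msg = REVOKE_CONTEXT + b"|" + account_id.encode()
    expected = hmac.new(account_secret_key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

sk = b"leaked-secret-key"
token = issue_revocation_token(sk, "alice@example.social")
assert verify_revocation_token(sk, "alice@example.social", token)
# A token for one account doesn't revoke a different one.
assert not verify_revocation_token(sk, "bob@example.social", token)
```

The domain-separation constant and the account binding are what keep a token from being replayed against other accounts or other protocols; the open risk is precisely that possession of the secret key is the only gate.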
I also spend a lot of time thinking about how to test the software I create to prevent my own mistakes or hubris from hurting people that use it.
The Duty of Care for Cryptographic Code
I don’t think it’s reasonable to expect everyone to meet my high standards for cryptography 100% of the time, but the bare minimum is higher than most projects even aspire towards.
- Be as boring as possible.
- Boring here means “obviously secure”. If someone reviewing your code has to question if something is secure (beyond their standard checklists or reasonable assumptions about, like, P vs NP?), you’re not boring enough.
- If something requires 2^64 attempts to succeed, it is not boring. Nation states have the resources to pull it off.
- If something requires more than 2^128 attempts, it's boring.
- State your security goals clearly, along with assumptions made.
- “We encrypt data in SQL and assume it’s confidential” isn’t enough.
- “We encrypt data in SQL, using per-customer keys in an AEAD cipher mode, and bind the context to the AAD to ensure both confidentiality and integrity, but do not assume immunity to multi-key attacks” is better.
- Do you assume AES is a secure PRP? Write that down.
- Do you rely on the security of MD5? Bad news.
- Be transparent about the project’s maturity.
- If you implemented ChaCha20 in brainfuck for fun, maybe don’t submit it to package managers. Also consider archiving the project on GitHub so folks know not to use it.
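The AEAD-with-context goal from the SQL example above can be sketched as follows. Python's stdlib has no AEAD mode, so this shows only the binding and verification half, with the ciphertext treated as opaque bytes; in practice you'd use AES-GCM or ChaCha20-Poly1305 from a library like pyca/cryptography, and all the names here are illustrative.

```python
import hmac
import hashlib

# Sketch of "bind the context to the AAD". A real implementation would use
# an AEAD mode from a library; the stdlib-only version below demonstrates
# only the per-customer keying and context binding, with the ciphertext
# treated as opaque bytes produced elsewhere.

def derive_customer_mac_key(master_key: bytes, customer_id: str) -> bytes:
    # Per-customer subkey, so ciphertexts can't be swapped across tenants.
    return hmac.new(master_key, b"mac|" + customer_id.encode(), hashlib.sha256).digest()

def bind(mac_key: bytes, aad: bytes, ciphertext: bytes) -> bytes:
    # Length-prefix the AAD so (aad, ciphertext) pairs can't be resliced.
    msg = len(aad).to_bytes(8, "big") + aad + ciphertext
    return hmac.new(mac_key, msg, hashlib.sha256).digest()

master = b"example-master-key"
k = derive_customer_mac_key(master, "customer-42")
aad = b"table=users;column=email;row=7"  # the context being bound
tag = bind(k, aad, b"opaque-ciphertext")

# Moving the ciphertext to a different row changes the AAD, so the tag no
# longer verifies: integrity *and* context binding, as the goal stated.
assert hmac.compare_digest(tag, bind(k, aad, b"opaque-ciphertext"))
assert not hmac.compare_digest(tag, bind(k, b"table=users;column=email;row=8", b"opaque-ciphertext"))
```

Writing the goal down precisely is what makes a sketch like this checkable at all: a reviewer can test each claimed property against the code instead of guessing.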
This isn’t a lot to ask.
Most people that work with cryptography full time short-circuit all this and leave it as, “don’t roll your own crypto.”
You shouldn't roll your own crypto if you're going to make embarrassing mistakes, be neglectful of your users' safety, or simply dismiss cryptography issues reported to you because they didn't come gift-wrapped with a full kill chain exploit kit.
“Open Source” Excuses Zilch
I’ve heard a few folks gripe about cryptography researchers and Google Project Zero disclosing security vulnerabilities to open source projects over the years.
Now, I’m generally sympathetic to the plight of free and open source software developers. They get a lot of bullshit, all the time, and very little appreciation for the work they do. The recent Microsoft Copilot incursion into open source and the endless deluge of LLM slop being poured on the people doing thankless work sucks. It just does.
However, the peanut gallery claiming "but it's open source", as if that magically erased any responsibility of the developer to their users, is… a strange take, to say the least.
If open source developers aren’t willing to accept the responsibility that cryptography engineering requires, they shouldn’t publish cryptography in a format that encourages other people to use it.
If you’re detecting a bit of annoyance in these words, that is because this is a very frustrating topic for me. I generally despise gatekeeping.
I’ve long held the opinion that most people use insecure cryptography because cryptographers and security engineers failed to provide them with secure alternatives to begin with.
But when it comes to some software (e.g., messaging apps), folks will speak over actual experts to evangelize their favorite app.
Then, when someone clueful comes along and criticizes the same apps’ cryptography, they get a reception like the screenshots I shared above.
What Can We Do?
If you work on software that provides (or at least advertises) cryptographic features, try to at least meet the bare minimum bar established by this blog post. If said software is a private messaging app, aim higher. Try to be more secure than Signal.
If you use cryptographic software, ask about the threat model. As a community, raise money for non-profits like OSTIF and encourage them to fund audits of software we care about. This will be a net-positive for Internet security.
As for me, I’m going to continue to insist on higher standards in my own development processes and hopefully inspire others to follow suit.
Together, we can make a better Internet for everyone.

Original post written by Soatok



