Everyone loves Steam Next Fest here at Gaming Furever! Our staff goes through the entire list of games published in the Fest to find and list each and every one that features anthropomorphic characters or focuses on animals. This time, we went through all 3450+ games to arrive at this list of 370+ games. We could’ve missed a couple, but we hope this is an all-encompassing list for this season’s available games! Almost every one has a demo, so go have fun and wishlist the ones that pique your interest!
Considering upgrading your registration? Higher tiers will come with more Calfurry points in our Calfurry Points Program! You have until end of day on March 1 to upgrade using early bird prices, so don’t waste time – get a higher reg, earn more Calfurry points!
Furry Migration will take place September 11–13, 2026, at the Hyatt Regency Minneapolis.
Registration

Registration prices are as listed:

               Attendee   Sponsor   Super Sponsor
Adult          $65        $145      $220
Youth (7-17)   $40        $145      $220
Youth 6 and under are free with an adult membership.
Attendees will receive access to Furry Migration for all three days of the convention (Fri-Sun).
Individuals ages 7 through 17 will receive a discount on the price of Attendee registration. Attendees under the age of 18 may be required to present special documentation to attend Furry Migration. Please review our Minors Policy with a parent or legal guardian for full details prior to registering.
Sponsors will receive a themed Furry Migration T-shirt and lanyard, Sponsor ribbon, and early access to the Dealers’ Den and Artists’ Alley.
Super Sponsors will receive a themed Furry Migration T-shirt and lanyard, Super Sponsor ribbon, early access to the Dealers’ Den and Artists’ Alley, Saturday morning brunch, preferred seating at major convention events, a special convention badge design, and a special gift.
Special Note: This is the first year Super Sponsors will receive a Saturday morning brunch.
Nominations for the Good Furry Awards are open now through June 30. This is an annual awards program run by furries for the furry fandom. It recognizes furries who do good deeds and promote a positive image of the fandom. Nominations come from the furry community to acknowledge people in three categories:
The Good Egg Award for volunteer work.
The Image Award for furries who work in various media promoting the fandom and educating the public about it.
The Furtastic Award for furries who do wonderful things in the community that do not easily fall into the first two categories.
Each category’s winner will receive a handsome trophy and a check for $200. Nominees who did not win in a category will receive a certificate stating their nomination.
The awards also have a Lifetime Achievement category that is chosen by committee. Previous winners of the Lifetime Achievement include Mark Merlino, Rod O’Riley, Steve Gallacci, Reed Waller, and Ken Fletcher.
The two implementations (aes-js, pyaes) aren’t just wildly insecure; they also saw widespread adoption, from VPN management software to cryptocurrency wallets.
Despite the problems being disclosed to the developer in 2022, and despite the enormous downstream impact, with millions of dollars of potential cost, the response was remarkably cavalier.
The AES block cipher is a cryptographic primitive, so it’s very important to understand and use it properly, based on its application. It’s a powerful tool, and with great power, yadda, yadda, yadda.
A group of cryptography researchers recently published a paper titled Zero Knowledge (About) Encryption, which delves into the security of several cloud-based password managers. They will be presenting it at Real World Cryptography 2026 next month.
Despite being a cryptography paper available on IACR’s ePrint server, it’s relatively approachable. Some cryptography papers are so incomprehensible to folks lacking domain-specific knowledge that it’s tempting to wonder if the paper is, itself, encrypted.
This paper is different. It begins by pointing out how vague the published threat models are for these products, and then makes a reasonable argument that their customers probably believe the products provide specific security properties that are important to them.
And then it goes on to eviscerate LastPass, Bitwarden, and Dashlane’s security in practice under those very reasonable interpretations of the products’ threat models.
Some of the attacks are interesting, but many of them are the same failure modes we’ve known about since at least 2002: Unauthenticated AES-CBC is vulnerable to padding oracle attacks (thanks Vaudenay). I’ve written about this before.
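For readers who haven’t seen this failure mode before, the core of it is small enough to show. A decryption path that reveals whether PKCS#7 padding was valid (through a distinct error message, status code, or even timing) hands an attacker a byte-at-a-time decryption oracle. A toy sketch of the padding check in question (illustrative only; no real cipher involved):

```python
# Toy PKCS#7 padding check -- the kind of validity signal that, when
# observable by an attacker, turns unauthenticated CBC into a padding
# oracle (Vaudenay, 2002).
def pkcs7_padding_ok(plaintext: bytes, block_size: int = 16) -> bool:
    # Decrypted CBC output is always a whole number of blocks.
    if not plaintext or len(plaintext) % block_size != 0:
        return False
    n = plaintext[-1]
    # Valid padding: the last byte n is in 1..block_size, and the
    # final n bytes all equal n.
    return 1 <= n <= block_size and plaintext[-n:] == bytes([n]) * n
```

An attacker who can submit tampered ciphertexts and learn only this boolean can recover plaintext one byte at a time. The fix is to authenticate the ciphertext (an AEAD mode, or encrypt-then-MAC) and reject it before any padding is ever inspected.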
The Matrix team posted a blog post that demonstrated a critical lack of understanding. I added an addendum to my original disclosure blog post to address their response, but a lot of people didn’t bother reading it.
Matt Green wrote this recently, about a lawsuit against WhatsApp that alleged Meta had access to plaintext messages.
When I’m speaking to laypeople, I like to keep things simple. I tell them that cryptography allows us to trust our machines. But this isn’t really an accurate statement of what cryptography does for us. At the end of the day, all cryptography can really do is extend trust. Encryption protocols like Signal allow us to take some anchor-point we trust — a machine, a moment in time, a network, a piece of software — and then spread that trust across time and space. Done well, cryptography allows us to treat hostile networks as safe places; to be confident that our data is secure when we lose our phones; or even to communicate privately in the presence of the most data-hungry corporation on the planet.
But for this vision of cryptography to make sense, there has to be trust in the first place.
If you take a step back and look at the three issues closely enough, you’ll notice a pattern start to emerge: Many developers of cryptography software are neglecting the duty of care they have for their users.
The term “duty of care” is a legal one, but let me be clear: I am not making a legal argument here. It’s just the closest phrase I can find that encapsulates what I mean.
When confronted with research about novel attack strategies, the ol’ reliable “this is outside our threat model” gets invoked by the vendor.
But, as is often the case: “What fucking threat model?”
If you can even find a formal threat model document, it’s often vague, poorly specified, out-of-date, and foolishly scoped.
But if you instead construct a mental model of a product’s assets, assumptions, threat actors, and risks from its marketing copy, you will end up with a wildly different understanding.
This is the original sin of most cryptography software: the developers didn’t start with a specification, and they never had a clear model of which threats they’re mitigating and which ones they’re choosing not to mitigate.
The mechanism I added to allow security researchers (e.g., the folks behind Have I Been Pwned?) to issue a special “revocation token” for compromised accounts? That could be used to permanently lock someone out of key transparency and therefore prevent them from using encryption.
I also spend a lot of time thinking about how to test the software I create to prevent my own mistakes or hubris from hurting people that use it.
The Duty of Care for Cryptographic Code
I don’t think it’s reasonable to expect everyone to meet my high standards for cryptography 100% of the time, but the bare minimum is higher than most projects even aspire towards.
Be as boring as possible.
Boring here means “obviously secure”. If someone reviewing your code has to question if something is secure (beyond their standard checklists or reasonable assumptions about, like, P vs NP?), you’re not boring enough.
If something requires 2^64 attempts to succeed, it is not boring. Nation states have the resources to pull it off.
If something requires more than 2^128 attempts, it’s boring.
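Common rules of thumb put the “not boring” bar around 2^64 attempts and the “boring” bar past 2^128, and the gap between them is easy to underestimate. Some back-of-the-envelope arithmetic, assuming (generously) an attacker who can test 2^50 keys per second:

```python
# Back-of-the-envelope brute-force cost at an assumed, generous rate
# of 2**50 guesses per second (roughly a nation-state-scale cluster).
GUESSES_PER_SECOND = 2**50
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(bits: int) -> float:
    """Worst-case years to try every value of a `bits`-bit keyspace."""
    return 2**bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR

# 2**64 falls in a matter of hours at this rate; 2**128 takes
# quadrillions of years, many times the age of the universe.
```

The exact attacker speed is an assumption, but the conclusion is insensitive to it: each additional bit doubles the cost, so the 64-bit gap between the two thresholds is a factor of about 1.8 × 10^19.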
State your security goals clearly, along with assumptions made.
“We encrypt data in SQL and assume it’s confidential” isn’t enough.
“We encrypt data in SQL, using per-customer keys in an AEAD cipher mode, binding the storage context as the AAD to ensure both confidentiality and integrity, but we do not assume immunity to multi-key attacks” is better.
Do you assume AES is a secure PRP? Write that down.
Do you rely on the security of MD5? Bad news.
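The second security-goal statement above is concrete enough to sketch. Here is a minimal illustration of a per-customer key with the storage context bound as AAD, assuming the PyCA cryptography package and using AES-GCM as one possible AEAD; the table, column, and row names are illustrative, not from any real product:

```python
# Sketch only: per-customer AEAD with the storage context bound as AAD.
# Assumes the PyCA "cryptography" package is installed.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_field(customer_key: bytes, table: str, column: str,
                  row_id: str, plaintext: bytes) -> bytes:
    # Binding the context as AAD means a ciphertext cannot be replayed
    # into a different row or column without failing authentication.
    aad = f"{table}/{column}/{row_id}".encode()
    nonce = os.urandom(12)  # 96-bit nonce; never reused under one key
    return nonce + AESGCM(customer_key).encrypt(nonce, plaintext, aad)

def decrypt_field(customer_key: bytes, table: str, column: str,
                  row_id: str, blob: bytes) -> bytes:
    aad = f"{table}/{column}/{row_id}".encode()
    nonce, ciphertext = blob[:12], blob[12:]
    # Raises InvalidTag if the data or its context was tampered with.
    return AESGCM(customer_key).decrypt(nonce, ciphertext, aad)
```

Note what the sketch does and does not claim, in the spirit of the quoted goal: confidentiality and integrity per field, bound to its location, with no claim of immunity to multi-key attacks or nonce misuse.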
Be transparent about the project’s maturity.
If you implemented ChaCha20 in brainfuck for fun, maybe don’t submit it to package managers. Also consider archiving the project on GitHub so folks know not to use it.
This isn’t a lot to ask.
Most people who work with cryptography full time short-circuit all of this and leave it at, “don’t roll your own crypto.”
You shouldn’t roll your own crypto if you’re going to make embarrassing mistakes, be neglectful of your users’ safety, or simply dismiss cryptography issues reported to you because they didn’t come gift-wrapped with a full kill chain exploit kit.
“Open Source” Excuses Zilch
I’ve heard a few folks gripe about cryptography researchers and Google Project Zero disclosing security vulnerabilities to open source projects over the years.
Now, I’m generally sympathetic to the plight of free and open source software developers. They get a lot of bullshit, all the time, and very little appreciation for the work they do. The recent Microsoft Copilot incursion into open source sucks, as does the endless deluge of LLM slop being poured on the people doing this thankless work. It just does.
However, the peanut gallery claiming “but it’s open source,” as if that magically erases any responsibility the developer has to their users, is… a strange take, to say the least.
If you’re detecting a bit of annoyance in these words, that is because this is a very frustrating topic for me. I generally despise gatekeeping.
I’ve long held the opinion that most people use insecure cryptography because cryptographers and security engineers failed to provide them with secure alternatives to begin with.
If you use cryptographic software, ask about the threat model. As a community, raise money for non-profits like OSTIF and encourage them to cover the software we care about. This will be a net positive for Internet security.
As for me, I’m going to continue to insist on higher standards in my own development processes and hopefully inspire others to follow suit.
Together, we can make a better Internet for everyone.