Items tagged with: security
Western inaction on Ukraine’s security guarantees opens door to global nuclear proliferation
Western indecision in Ukraine’s pursuit of #security #guarantees risks triggering a global chain reaction, with nations turning to #nuclear #weapons as a deterrent in the absence of reliable security commitments.
#Ukraine's security commitment - the #Budapest #Memorandum of 1994 - has been, and continues to be, repeatedly violated
#RussianAggression #RussiaInvadedUkraine
Opinion: Western inaction on Ukraine’s security guarantees opens door to global nuclear proliferation
Russia's invasion of Ukraine is approaching its 11th year, with three years of full-scale war. In search of security guarantees like NATO membership, Ukraine has been left in limbo due to Russian-occupied territories and Western bureaucracy.

— Julian McBride (The Kyiv Independent)
Earlier this year, Cendyne wrote a blog post covering the use of HKDF, building partially upon my own blog post about HKDF and the KDF security definition, but moreso inspired by a cryptographic issue they identified in another company’s product (dubbed AnonCo).
At the bottom they teased:
Database cryptography is hard. The above sketch is not complete and does not address several threats! This article is quite long, so I will not be sharing the fixes.

— Cendyne
If you read Cendyne's post, you may have nodded along with that remark without appreciating the degree to which our naga friend was putting it mildly. So I thought I'd share some of my knowledge about real-world database cryptography in an accessible and fun format, in the hopes that it might serve as an introduction to the specialization.
Note: I’m also not going to fix Cendyne’s sketch of AnonCo’s software here–partly because I don’t want to get in the habit of assigning homework or required reading, but mostly because it’s kind of obvious once you’ve learned the basics.
I’m including art of my fursona in this post… as is tradition for furry blogs.
If you don’t like furries, please feel free to leave this blog and read about this topic elsewhere.
Thanks to CMYKat for the awesome stickers.
Contents
- Database Cryptography?
- Cryptography for Relational Databases
- Cryptography for NoSQL Databases
- Searchable Encryption
- Order-{Preserving, Revealing} Encryption
- Deterministic Encryption
- Homomorphic Encryption
- Searchable Symmetric Encryption (SSE)
- You Can Have Little a HMAC, As a Treat
- Intermission
- Case Study: MongoDB Client-Side Encryption
- Wrapping Up
Database Cryptography?
The premise of database cryptography is deceptively simple: You have a database, of some sort, and you want to store sensitive data in said database.
The consequences of this simple premise are anything but simple. Let me explain.
Art: ScruffKerfluff
The sensitive data you want to store may need to remain confidential, or you may need to provide some sort of integrity guarantees throughout your entire system, or sometimes both. Sometimes all of your data is sensitive, sometimes only some of it is. Sometimes the confidentiality requirements of your data extend to where within a dataset the record you want actually lives. Sometimes that's true of some data, but not others, so your cryptography has to be flexible enough to support multiple types of workloads.
Other times, you just want your disks encrypted at rest so if they grow legs and walk out of the data center, the data cannot be comprehended by an attacker. And you can’t be bothered to work on this problem any deeper. This is usually what compliance requirements cover. Boxes get checked, executives feel safer about their operation, and the whole time nobody has really analyzed the risks they’re facing.
But we’re not settling for mere compliance on this blog. Furries have standards, after all.
So the first thing you need to do before diving into database cryptography is threat modelling. The first step in any good threat model is taking inventory; especially of assumptions, requirements, and desired outcomes. A few good starter questions:
- What database software is being used? Is it up to date?
- What data is being stored in which database software?
- How are databases oriented in the network of the overall system?
- Is your database properly firewalled from the public Internet?
- How does data flow throughout the network, and when do these data flows intersect with the database?
- Which applications talk to the database? What languages are they written in? Which APIs do they use?
- How will cryptography secrets be managed?
- Is there one key for everyone, one key per tenant, etc.?
- How are keys rotated?
- Do you use envelope encryption with an HSM, or vend the raw materials to your end devices?
The first two questions are paramount for deciding how to write software for database cryptography, before you even get to thinking about the cryptography itself.
(This is not a comprehensive set of questions to ask, either. A formal threat model is much deeper in the weeds.)
The kind of cryptography protocol you need for, say, storing encrypted CSV files in an S3 bucket is vastly different from what you need for relational (SQL) databases, which in turn will be significantly different from what you need for schema-free (NoSQL) databases.
Furthermore, when you get to the point that you can start to think about the cryptography, you’ll often need to tackle confidentiality and integrity separately.
If that’s unclear, think of a scenario like, “I need to encrypt PII, but I also need to digitally sign the lab results so I know it wasn’t tampered with at rest.”
My point is, right off the bat, we’ve got a three-dimensional matrix of complexity to contend with:
- On one axis, we have the type of database.
- Flat-file
- Relational
- Schema-free
- On another, we have the basic confidentiality requirements of the data.
- Field encryption
- Row encryption
- Column encryption
- Unstructured record encryption
- Encrypting entire collections of records
- Finally, we have the integrity requirements of the data.
- Field authentication
- Row/column authentication
- Unstructured record authentication
- Collection authentication (based on e.g. Sparse Merkle Trees)
And then you have a fourth dimension that often falls out of operational requirements for databases: Searchability.
Why store data in a database if you have no way to index or search the data for fast retrieval?
Credit: Harubaki
If you’re starting to feel overwhelmed, you’re not alone. A lot of developers drastically underestimate the difficulty of the undertaking, until they run head-first into the complexity.
Some just phone it in with AES_Encrypt() calls in their MySQL queries. (Too bad ECB mode doesn't provide semantic security!)
Which brings us to the meat of this blog post: The actual cryptography part.
Cryptography is the art of transforming information security problems into key management problems.

— Former coworker
Note: In the interest of time, I’m skipping over flat files and focusing instead on actual database technologies.
Cryptography for Relational Databases
Encrypting data in an SQL database seems simple enough, even if you’ve managed to shake off the complexity I teased from the introduction.
You’ve got data, you’ve got a column on a table. Just encrypt the data and shove it in a cell on that column and call it a day, right?
But, alas, this is a trap. There are so many gotchas that I can’t weave a coherent, easy-to-follow narrative between them all.
So let’s start with a simple question: where and how are you performing your encryption?
The Perils of Built-in Encryption Functions
MySQL provides functions called AES_Encrypt and AES_Decrypt, which many developers have unfortunately decided to rely on in the past.
It’s unfortunate because these functions implement ECB mode. To illustrate why ECB mode is bad, I encrypted one of my art commissions with AES in ECB mode:
Art by Riley, encrypted with AES-ECB
The problems with ECB mode aren’t exactly “you can see the image through it,” because ECB-encrypting a compressed image won’t have redundancy (and thus can make you feel safer than you are).
ECB art is a good visual for the actual issue you should care about, however: A lack of semantic security.
A cryptosystem is considered semantically secure if observing the ciphertext doesn’t reveal information about the plaintext (except, perhaps, the length; which all cryptosystems leak to some extent). More information here.
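To see that lack of semantic security concretely, here's a quick PHP sketch (illustration only, using OpenSSL's ECB mode, which you should never do in production): two identical plaintext blocks encrypt to two identical ciphertext blocks, so equality leaks right through the encryption.

    <?php
    // Demonstration only: ECB maps equal plaintext blocks to equal
    // ciphertext blocks under the same key.
    $key = random_bytes(32);
    $block = 'sixteen byte str'; // exactly one 16-byte AES block
    $ct = openssl_encrypt($block . $block, 'aes-256-ecb', $key, OPENSSL_RAW_DATA);
    var_dump(substr($ct, 0, 16) === substr($ct, 16, 16)); // bool(true)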
ECB art isn’t to be confused with ECB poetry, which looks like this:
Oh little one, you’re growing up
You’ll soon be writing C
You’ll treat your ints as pointers
You’ll nest the ternary
You’ll cut and paste from github
And try cryptography
But even in your darkest hour
Do not use ECB

CBC's BEASTly when padding's abused
And CTR’s fine til a nonce is reused
Some say it’s a CRIME to compress then encrypt
Or store keys in the browser (or use javascript)
Diffie Hellman will collapse if hackers choose your g
And RSA is full of traps when e is set to 3
Whiten! Blind! In constant time! Don’t write an RNG!
But failing all, and listen well: Do not use ECB

They'll say "It's like a one-time-pad!
The data’s short, it’s not so bad
the keys are long–they’re iron clad
I have a PhD!”
And then you’re front page Hacker News
Your passwords cracked–Adobe Blues.
Don’t leave your penguins showing through,
Do not use ECB

— Ben Nagy, PoC||GTFO 0x04:13
Most people reading this probably know better than to use ECB mode already, and don’t need any of these reminders, but there is still a lot of code that inadvertently uses ECB mode to encrypt data in the database.
Also, SHOW processlist; leaks your encryption keys. Oops.
Credit: CMYKatt
Application-layer Relational Database Cryptography
Whether burned by ECB or just cautious about not giving your secrets to the system that stores all the ciphertext protected by said secret, a common next step for developers is to simply encrypt in their server-side application code.
And, yes, that’s part of the answer. But how you encrypt is important.
Credit: Harubaki
“I’ll encrypt with CBC mode.”
If you don’t authenticate your ciphertext, you’ll be sorry. Maybe try again?
“Okay, fine, I’ll use an authenticated mode like GCM.”
Did you remember to make the table and column name part of your AAD? What about the primary key of the record?
“What on Earth are you talking about, Soatok?”
Welcome to the first footgun of database cryptography!
Confused Deputies
Encrypting your sensitive data is necessary, but not sufficient. You need to also bind your ciphertexts to the specific context in which they are stored.
To understand why, let’s take a step back: What specific threat does encrypting your database records protect against?
We’ve already established that “your disks walk out of the datacenter” is a “full disk encryption” problem, so if you’re using application-layer cryptography to encrypt data in a relational database, your threat model probably involves unauthorized access to the database server.
What, then, stops an attacker from copying ciphertexts around?
Credit: CMYKatt
Let’s say I have a legitimate user account with an ID 12345, and I want to read your street address, but it’s encrypted in the database. But because I’m a clever hacker, I have unfettered access to your relational database server.
All I would need to do is simply…
UPDATE table SET addr_encrypted = 'your-ciphertext' WHERE id = 12345
…and then access the application through my legitimate access. Bam, data leaked. As an attacker, I can probably even copy fields from other columns and it will just decrypt. Even if you’re using an authenticated mode.
We call this a confused deputy attack, because the deputy (the component of the system that has been delegated some authority or privilege) has become confused by the attacker, and thus undermined an intended security goal.
The fix is to use the AAD parameter from the authenticated mode to bind the data to a given context. (AAD = Additional Authenticated Data.)
    - $addr = aes_gcm_encrypt($addr, $key);
    + $addr = aes_gcm_encrypt($addr, $key, canonicalize([
    +     $tableName,
    +     $columnName,
    +     $primaryKey
    + ]));
Now if I start cutting and pasting ciphertexts around, I get a decryption failure instead of silently decrypting plaintext.
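Here's a minimal sketch of that fix in PHP. The diff above uses a hypothetical aes_gcm_encrypt(); I'm substituting libsodium's XChaCha20-Poly1305 (any AEAD with an AAD parameter works the same way), and encryptField()/decryptField() are illustrative names, not a real API. The canonicalize() helper is the pseudocode from the diff; a safe way to implement it is sketched in the next section.

    // Sketch: bind a field's ciphertext to where it lives in the database.
    // $key must be a 32-byte random key.
    function encryptField(
        string $plaintext,
        string $key,
        string $tableName,
        string $columnName,
        string|int $primaryKey
    ): string {
        $nonce = random_bytes(SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES);
        // Moving the ciphertext to another row/column changes the AAD:
        $aad = canonicalize([$tableName, $columnName, (string) $primaryKey]);
        $ciphertext = sodium_crypto_aead_xchacha20poly1305_ietf_encrypt(
            $plaintext,
            $aad,
            $nonce,
            $key
        );
        return sodium_bin2base64($nonce . $ciphertext, SODIUM_BASE64_VARIANT_ORIGINAL);
    }

    function decryptField(
        string $encoded,
        string $key,
        string $tableName,
        string $columnName,
        string|int $primaryKey
    ): string {
        $decoded = sodium_base642bin($encoded, SODIUM_BASE64_VARIANT_ORIGINAL);
        $nonceLen = SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES;
        $aad = canonicalize([$tableName, $columnName, (string) $primaryKey]);
        $plaintext = sodium_crypto_aead_xchacha20poly1305_ietf_decrypt(
            substr($decoded, $nonceLen),
            $aad,
            substr($decoded, 0, $nonceLen),
            $key
        );
        if ($plaintext === false) {
            throw new RuntimeException('Decryption failed: wrong key or wrong context');
        }
        return $plaintext;
    }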
This may sound like a specific vulnerability, but it’s more of a failure to understand an important general lesson with database cryptography:
Where your data lives is part of its identity, and MUST be authenticated.

— Soatok's Rule of Database Cryptography
Canonicalization Attacks
In the previous section, I introduced a pseudocode function called canonicalize(). This isn't a pasto from some reference code; it's an important design detail that I will elaborate on now.
First, consider you didn’t do anything to canonicalize your data, and you just joined strings together and called it a day…
    function dumbCanonicalize(
        string $tableName,
        string $columnName,
        string|int $primaryKey
    ): string {
        return $tableName . '_' . $columnName . '#' . $primaryKey;
    }
Consider these two inputs to this function:
dumbCanonicalize('customers', 'last_order_uuid', 123);
dumbCanonicalize('customers_last_order', 'uuid', 123);
In this case, your AAD would be the same, and therefore, your deputy can still be confused (albeit in a narrower use case).
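A common fix, and the same idea alluded to later in the MongoDB case study, is to make the encoding injective: length-prefix every component so component boundaries become part of the output. A sketch:

    // Sketch: an unambiguous encoding. No two distinct input tuples can
    // serialize to the same byte string, because every component carries
    // an explicit 64-bit length prefix.
    function canonicalize(array $parts): string
    {
        $out = pack('J', count($parts)); // 64-bit big-endian component count
        foreach ($parts as $part) {
            $piece = (string) $part;
            $out .= pack('J', strlen($piece)) . $piece;
        }
        return $out;
    }

With this encoding, the two dumbCanonicalize() collisions above serialize to different strings, so the AAD is no longer ambiguous.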
In Cendyne’s article, AnonCo did something more subtle: The canonicalization bug created a collision on the inputs to HKDF, which resulted in an unintentional key reuse.
Up until this point, their mistake isn't relevant to us, because we haven't even explored key management at all. But the same design flaw can re-emerge in multiple locations, with drastically different consequences.
Multi-Tenancy
Once you’ve implemented a mitigation against Confused Deputies, you may think your job is done. And it very well could be.
Often times, however, software developers are tasked with building support for Bring Your Own Key (BYOK).
This is often spawned from a specific compliance requirement (such as cryptographic shredding; i.e. if you erase the key, you can no longer recover the plaintext, so it may as well be deleted).
Other times, this is driven by a need to cut costs: Storing different users' data in the same database server, but encrypting it such that they can only decrypt their own records.
Two things can happen when you introduce multi-tenancy into your database cryptography designs:
- Invisible Salamanders becomes a risk, due to multiple keys being possible for any given encrypted record.
- Failure to address the risk of Invisible Salamanders can undermine your protection against Confused Deputies, thereby returning you to a state before you properly used the AAD.
So now you have to revisit your designs and ensure you’re using a key-committing authenticated mode, rather than just a regular authenticated mode.
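Standard AES-GCM (and XChaCha20-Poly1305) don't commit to the key on their own. One folk remedy, sketched below under my own naming and assumptions (it is not a vetted construction; see the published work on committing AEAD for analyzed options), is to derive an explicit key-commitment tag alongside the encryption key, store it with the record, and verify it before trusting the AEAD tag:

    // Sketch: derive a per-tenant encryption key plus a key-commitment tag.
    function deriveCommittedKeys(string $tenantKey): array
    {
        $encKey    = hash_hkdf('sha256', $tenantKey, 32, 'record encryption');
        $commitTag = hash_hkdf('sha256', $tenantKey, 32, 'key commitment');
        return [$encKey, $commitTag];
    }

    // On decrypt (sketch): recompute the tag and compare in constant time
    // before attempting AEAD decryption.
    // [$encKey, $expectedTag] = deriveCommittedKeys($tenantKey);
    // if (!hash_equals($expectedTag, $record['commitment'])) {
    //     throw new RuntimeException('Key commitment mismatch');
    // }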
Isn’t cryptography fun?
“What Are Invisible Salamanders?”
This refers to a fun property of AEAD modes based on polynomial MACs. Basically, if you:
- Encrypt one message under a specific key and nonce.
- Encrypt another message under a separate key and nonce.
…Then you can get the same exact ciphertext and authentication tag. Performing this attack requires you to control the keys for both encryption operations.
This was first demonstrated in an attack against encrypted messaging applications, where a picture of a salamander was hidden from the abuse reporting feature because another attached file had the same authentication tag and ciphertext, and you could trick the system if you disclosed the second key instead of the first. Thus, the salamander was invisible to the abuse-reporting system.
Art: CMYKat
We’re not quite done with relational databases yet, but we should talk about NoSQL databases for a bit. The final topic in scope applies equally to both, after all.
Cryptography for NoSQL Databases
Most of the topics from relational databases also apply to NoSQL databases, so I shall refrain from duplicating them here. This article is already sufficiently long to read, after all, and I dislike redundancy.
NoSQL is Built Different
The main thing that NoSQL databases offer in the service of making cryptographers lose sleep at night is the schema-free nature of NoSQL designs.
What this means is that, if you’re using a client-side encryption library for a NoSQL database, the previous concerns about confused deputy attacks are amplified by the malleability of the document structure.
Additionally, the previously discussed cryptographic attacks against the encryption mode may be less expensive for an attacker to pull off.
Consider the following record structure, which stores a bunch of data stored with AES in CBC mode:
{ "encrypted-data-key": "<blob>", "name": "<ciphertext>", "address": [ "<ciphertext>", "<ciphertext>" ], "social-security": "<ciphertext>", "zip-code": "<ciphertext>"}
If this record is decrypted with code that looks something like this:
    $decrypted = [];
    // ... snip ...
    foreach ($record['address'] as $i => $addrLine) {
        try {
            $decrypted['address'][$i] = $this->decrypt($addrLine);
        } catch (Throwable $ex) {
            // You'd never deliberately do this, but it's for illustration:
            $this->doSomethingAnOracleCanObserve($i);
            // This is more believable, of course:
            $this->logDecryptionError($ex, $addrLine);
            $decrypted['address'][$i] = '';
        }
    }
Then you can keep appending rows to the "address" field to reduce the number of writes needed to exploit a padding oracle attack against any of the <ciphertext> fields.
Art: Harubaki
This isn’t to say that NoSQL is less secure than SQL, from the context of client-side encryption. However, the powerful feature sets that NoSQL users are accustomed to may also give attackers a more versatile toolkit to work with.
Record Authentication
A pedant may point out that record authentication applies to both SQL and NoSQL. However, I mostly only observe this feature in NoSQL databases and document storage systems in the wild, so I’m shoving it in here.
Encrypting fields is nice and all, but sometimes what you want to know is that your unencrypted data hasn’t been tampered with as it flows through your system.
The trivial way this is done is by using a digital signature algorithm over the whole record, and then appending the signature to the end. When you go to verify the record, all of the information you need is right there.
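In PHP, a naive version of that whole-record signature might look like this sketch (Ed25519 via libsodium; note that ksort() only canonicalizes the top level, so nested documents need a real canonical serialization, for the reasons discussed below):

    // Naive whole-record signing sketch. $signingKey is an Ed25519 secret
    // key from sodium_crypto_sign_keypair().
    function signRecord(array $record, string $signingKey): array
    {
        unset($record['signature']);  // never sign the signature itself
        ksort($record);               // canonical top-level field order
        $message = json_encode($record, JSON_THROW_ON_ERROR);
        $record['signature'] = sodium_bin2base64(
            sodium_crypto_sign_detached($message, $signingKey),
            SODIUM_BASE64_VARIANT_ORIGINAL
        );
        return $record;
    }

    function verifyRecord(array $record, string $publicKey): bool
    {
        $signature = sodium_base642bin($record['signature'], SODIUM_BASE64_VARIANT_ORIGINAL);
        unset($record['signature']);
        ksort($record);
        $message = json_encode($record, JSON_THROW_ON_ERROR);
        return sodium_crypto_sign_verify_detached($signature, $message, $publicKey);
    }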
This works well enough for most use cases, and everyone can pack up and go home. Nothing more to see here.
Except…
When you're working with NoSQL databases, you often want systems to be able to write to additional fields, and since you're working with schema-free blobs of data rather than a normalized set of relatable tables, the most sensible thing to do is to append this data to the same record.
Except, oops! You can’t do that if you’re shoving a digital signature over the record. So now you need to specify which fields are to be included in the signature.
And you need to think about how to model that in a way that doesn’t prohibit schema upgrades nor allow attackers to perform downgrade attacks. (See below.)
I don’t have any specific real-world examples here that I can point to of this problem being solved well.
Art: CMYKat
Furthermore, as with preventing confused deputy and/or canonicalization attacks above, you must also include the fully qualified path of each field in the data that gets signed.
As I said with encryption before, but also true here:
Where your data lives is part of its identity, and MUST be authenticated.

— Soatok's Rule of Database Cryptography
This requirement holds true whether you’re using symmetric-key authentication (i.e. HMAC) or asymmetric-key digital signatures (e.g. EdDSA).
Bonus: A Maximally Schema-Free, Upgradeable Authentication Design
Art: Harubaki
Okay, how do you solve this problem so that you can perform updates and upgrades to your schema but without enabling attackers to downgrade the security? Here’s one possible design.
Let’s say you have two metadata fields on each record:
- A compressed binary string representing which fields should be authenticated. This field is, itself, not authenticated. Let's call this meta-auth.
- A compressed binary string representing which of the authenticated fields should also be encrypted. This field is also authenticated. This is at most the same length as the first metadata field. Let's call this meta-enc.

Furthermore, you will specify a canonical field ordering, both for how data is fed into the signature algorithm and for the field mappings in meta-auth and meta-enc.
{ "example": { "credit-card": { "number": /* encrypted */, "expiration": /* encrypted */, "ccv": /* encrypted */ }, "superfluous": { "rewards-member": null } }, "meta-auth": compress_bools([ true, /* example.credit-card.number */ true, /* example.credit-card.expiration */ true, /* example.credit-card.ccv */ false, /* example.superfluous.rewards-member */ true /* meta-enc */ ]), "meta-enc": compress_bools([ true, /* example.credit-card.number */ true, /* example.credit-card.expiration */ true, /* example.credit-card.ccv */ false /* example.superfluous.rewards-member */ ]), "signature": /* -- snip -- */}
When you go to append data to an existing record, you'll need to update meta-auth to include the mapping of fields based on this canonical ordering, to ensure only the intended fields get validated.
When you update your code to add an additional field that is intended to be signed, you can roll that out for new records and the record will continue to be self-describing:
- New records will have the additional field flagged as authenticated in meta-auth (and meta-enc will grow)
- Old records will not, but your code will still sign them successfully
- To prevent downgrade attacks, simply include a schema version ID as an additional plaintext field that gets authenticated. An attacker who tries to downgrade will need to be able to produce a valid signature too.
You might think meta-auth gives an attacker some advantage, but it only describes which fields are included in the security boundary of the signature or MAC. This allows unauthenticated data to be appended for whatever operational purpose, without having to update signatures or expose signing keys to a wider part of the network.
{ "example": { "credit-card": { "number": /* encrypted */, "expiration": /* encrypted */, "ccv": /* encrypted */ }, "superfluous": { "rewards-member": null } }, "meta-auth": compress_bools([ true, /* example.credit-card.number */ true, /* example.credit-card.expiration */ true, /* example.credit-card.ccv */ false, /* example.superfluous.rewards-member */ true, /* meta-enc */ true /* meta-version */ ]), "meta-enc": compress_bools([ true, /* example.credit-card.number */ true, /* example.credit-card.expiration */ true, /* example.credit-card.ccv */ false, /* example.superfluous.rewards-member */ true /* meta-version */ ]), "meta-version": 0x01000000, "signature": /* -- snip -- */}
If an attacker tries to use the meta-auth field to mess with a record, the best they can hope for is an Invalid Signature exception (assuming the signature algorithm is secure to begin with).
Even if they keep all of the fields the same, but play around with the structure of the record (e.g. changing the XPath or equivalent), so long as the path is authenticated with each field, breaking this is computationally infeasible.
Searchable Encryption
If you’ve managed to make it through the previous sections, congratulations, you now know enough to build a secure but completely useless database.
Art: CMYKat
Okay, put away the pitchforks; I will explain.
Part of the reason why we store data in a database, rather than a flat file, is because we want to do more than just read and write. Sometimes computer scientists want to compute. Almost always, you want to be able to query your database for a subset of records based on your specific business logic needs.
And so, a database which doesn’t do anything more than store ciphertext and maybe signatures is pretty useless to most people. You’d have better luck selling Monkey JPEGs to furries than convincing most businesses to part with their precious database-driven report generators.
Art: Sophie
So whenever one of your users wants to actually use their data, rather than just store it, they’re forced to decide between two mutually exclusive options:
- Encrypting the data, to protect it from unauthorized disclosure, but render it useless
- Doing anything useful with the data, but leaving it unencrypted in the database
This is especially annoying for business types that are all in on the Zero Trust buzzword.
Fortunately, the cryptographers are at it again, and boy howdy do they have a lot of solutions for this problem.
Order-{Preserving, Revealing} Encryption
On the fun side of things, you have things like Order-Preserving and Order-Revealing Encryption, which Matthew Green wrote about at length.
[D]atabase encryption has been a controversial subject in our field. I wish I could say that there's been an actual debate, but it's more that different researchers have fallen into different camps, and nobody has really had the data to make their position in a compelling way. There have actually been some very personal arguments made about it.

— Matthew Green, "Attack of the week: searchable encryption and the ever-expanding leakage function"
The problem with these designs is that they leak enough information that they no longer provide semantic security.
From Grubbs, et al. (GLMP, 2019.)
Colors inverted to fit my blog’s theme better.
To put it in other words: These designs are only marginally better than ECB mode, and probably deserve their own poems too.
Order revealing
Reveals much more than order
Softcore ECB

Order preserving
Semantic security?
Only in your dreams

— Haiku for your consideration
Deterministic Encryption
Here’s a simpler, but also terrible, idea for searchable encryption: Simply give up on semantic security entirely.
If you recall the AES_{De,En}crypt() functions built into MySQL that I mentioned at the start of this article, those are the most common form of deterministic encryption I've seen in use.
SELECT * FROM foo WHERE bar = AES_Encrypt('query', 'key');
However, there are slightly less bad variants. If you use AES-GCM-SIV with a static nonce, your ciphertexts are fully deterministic, but you can only safely encrypt a small number of distinct records before you lose security.
From Page 14 of the linked paper. Full view.
That's certainly better than nothing, but it still can't mitigate confused deputy attacks. We can do better than this.
Homomorphic Encryption
In a safer plane of academia, you'll find homomorphic encryption, which researchers recently demonstrated by serving Wikipedia pages in a reasonable amount of time.
Homomorphic encryption allows computations over the ciphertext, which will be reflected in the plaintext, without ever revealing the key to the entity performing the computation.
If this sounds vaguely similar to the conditions that enable chosen-ciphertext attacks, you probably have a good intuition for how it works: RSA is homomorphic to multiplication, AES-CTR is homomorphic to XOR. Fully homomorphic encryption uses lattices, which enables multiple operations but carries a relatively enormous performance cost.
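The XOR homomorphism of CTR mode is easy to demonstrate in a few lines of PHP (illustration only; this malleability is exactly why unauthenticated CTR is dangerous):

    // Flipping bits in a CTR ciphertext flips the same bits in the plaintext.
    $key = random_bytes(32);
    $iv  = random_bytes(16);
    $ct  = openssl_encrypt('pay $100', 'aes-256-ctr', $key, OPENSSL_RAW_DATA, $iv);
    $ct[5] = $ct[5] ^ ("1" ^ "9"); // XOR in the difference between '1' and '9'
    echo openssl_decrypt($ct, 'aes-256-ctr', $key, OPENSSL_RAW_DATA, $iv);
    // Output: pay $900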
Art: Harubaki
Homomorphic encryption sometimes intersects with machine learning, because the notion of training an encrypted model by feeding it encrypted data, then decrypting it after-the-fact is desirable for certain business verticals. Your data scientists never see your data, and you have some plausible deniability about the final ML model this work produces. This is like a Siren song for Venture Capitalist-backed medical technology companies. Tech journalists love writing about it.
However, a less-explored use case is the ability to encrypt your programs but still get the correct behavior and outputs. Although this sounds like a DRM technology, it’s actually something that individuals could one day use to prevent their ISPs or cloud providers from knowing what software is being executed on the customer’s leased hardware. The potential for a privacy win here is certainly worth pondering, even if you’re a tried and true Pirate Party member.
Just say “NO” to the copyright cartels.
Art: CMYKat
Searchable Symmetric Encryption (SSE)
Forget about working at the level of fields and rows or individual records. What if we, instead, worked over collections of documents, where each document is viewed as a set of keywords from a keyword space?
Art: CMYKat
That’s the basic premise of SSE: Encrypting collections of documents rather than individual records.
The actual implementation details differ greatly between designs. They also differ greatly in their leakage profiles and susceptibility to side-channel attacks.
Some schemes use a so-called trapdoor permutation, such as RSA, as one of their building blocks.
Some schemes only allow for searching a static set of records, while others can accommodate new data over time (with the trade-off between more leakage or worse performance).
If you're curious, you can learn more about SSE here, and see some open source SSE implementations online here.
You’re probably wondering, “If SSE is this well-studied and there are open source implementations available, why isn’t it more widely used?”
Your guess is as good as mine, but I can think of a few reasons:
- The protocols can be a little complicated to implement, and aren't shipped by default in cryptography libraries (e.g. OpenSSL's libcrypto or libsodium).
- Every known security risk in SSE is the product of a trade-off, rather than there being a single winner for all use cases that developers can feel comfortable picking.
- Insufficient marketing and developer advocacy.
SSE schemes are mostly of interest to academics, although Seny Kamara (a Brown University professor and one of the luminaries of searchable encryption) did try to develop an app called Pixek which used SSE to encrypt photos.
Maybe there’s room for a cryptography competition on searchable encryption schemes in the future.
You Can Have Little a HMAC, As a Treat
Finally, I can’t talk about searchable encryption without discussing a technique that’s older than dirt by Internet standards, that has been independently reinvented by countless software developers tasked with encrypting database records.
The oldest version I’ve been able to track down dates to 2006 by Raul Garcia at Microsoft, but I’m not confident that it didn’t exist before.
The idea I’m alluding to goes like this:
- Encrypt your data, securely, using symmetric cryptography. (Hopefully your encryption addresses the considerations outlined in the relevant sections above.)
- Separately, calculate an HMAC over the unencrypted data with a separate key used exclusively for indexing.
When you need to query your data, you can just recalculate the HMAC of your challenge and fetch the records that match it. Easy, right?
Even if you rotate your keys for encryption, you keep your indexing keys static across your entire data set. This lets you have durable indexes for encrypted data, which gives you the ability to do literal lookups for the performance hit of a hash function.
Additionally, everyone has HMAC in their toolkit, so you don’t have to move around implementations of complex cryptographic building blocks. You can live off the land. What’s not to love?
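A minimal sketch of the idea (the function and column names here are illustrative): keep a dedicated, static indexing key, store the HMAC next to the ciphertext, and query on the HMAC.

    // Sketch: a keyed "blind index" over the plaintext.
    function blindIndex(string $plaintext, string $indexKey): string
    {
        return hash_hmac('sha256', $plaintext, $indexKey);
    }

    // Storing (hypothetical schema):
    //   INSERT INTO users (email_encrypted, email_idx) VALUES (?, ?)
    //   ... with email_idx = blindIndex($email, $indexKey)
    // Querying:
    //   SELECT * FROM users WHERE email_idx = ?
    //   ... with the parameter set to blindIndex($searchTerm, $indexKey)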
Hooray!
However, if you stopped here, we regret to inform you that your data is no longer indistinguishable from random, which probably undermines the security proof for your encryption scheme.
How annoying!
Of course, you don’t have to stop with the addition of plain HMAC to your database encryption software.
Take a page from Troy Hunt: Truncate the output to provide k-anonymity rather than a direct literal look-up.
“K-What Now?”
Imagine you have a full HMAC-SHA256 of the plaintext next to every ciphertext record with a static key, for searchability.
Each HMAC output corresponds 1:1 with a unique plaintext.
Because you’re using HMAC with a secret key, an attacker can’t just build a rainbow table like they would when attempting password cracking, but it still leaks duplicate plaintexts.
For example, an HMAC-SHA256 output might look like this: 04a74e4c0158e34a566785d1a5e1167c4e3455c42aea173104e48ca810a8b1ae
Art: CMYKat
If you were to slice off most of those bytes (e.g. leaving only the last 3, which in the previous example yields a8b1ae), then with sufficient records, multiple plaintexts will now map to the same truncated HMAC tag.
Which means if you’re only revealing a truncated HMAC tag to the database server (both when storing records or retrieving them), you can now expect false positives due to collisions in your truncated HMAC tag.
These false positives give your data a degree of anonymity (called k-anonymity), which means an attacker with access to your database cannot:
- Distinguish between two encrypted records with the same short HMAC tag.
- Reverse engineer the short HMAC tag into a single possible plaintext value, even if they can supply candidate queries and study the tags sent to the database.
Art: CMYKat
As with SSE above, this short HMAC technique exposes a trade-off to users.
- Too much k-anonymity (i.e. too many false positives), and you will have to decrypt-then-discard multiple mismatching records. This can make queries slow.
- Not enough k-anonymity (i.e. insufficient false positives), and you’re no better off than a full HMAC.
Even more troublesome, the right amount to truncate is expressed in bits (not bytes), and calculating this value depends on the number of unique plaintext values you anticipate in your dataset. (Fortunately, it grows logarithmically, so you’ll rarely if ever have to tune this.)
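To make that concrete, here's a sketch of a truncated blind index (the 16-bit default below is arbitrary; tune the bit length to your dataset as just described):

    // Sketch: truncate the HMAC so multiple plaintexts share each index value.
    function shortBlindIndex(string $plaintext, string $indexKey, int $bits = 16): string
    {
        $full = hash_hmac('sha256', $plaintext, $indexKey, true);
        $bytes = intdiv($bits + 7, 8);
        $tag = substr($full, -$bytes); // keep only the tail
        if ($bits % 8 !== 0) {
            // Mask off the excess high bits in the leading byte.
            $tag[0] = chr(ord($tag[0]) & ((1 << ($bits % 8)) - 1));
        }
        return bin2hex($tag);
    }

    // Query flow: fetch all rows WHERE idx = shortBlindIndex(...), then
    // decrypt each candidate and discard the false positives.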
If you’d like to play with this idea, here’s a quick and dirty demo script.
Intermission
If you started reading this post with any doubts about Cendyne's statement that "Database cryptography is hard," they've probably been put to rest by the time you made it to this point.
Art: Harubaki
Conversely, anyone that specializes in this topic is probably waiting for me to say anything novel or interesting; their patience wearing thin as I continue to rehash a surface-level introduction of their field without really diving deep into anything.
Thus, if you've read this far, I'd like to apply everything covered above to a real-world case study of a database cryptography product.
Case Study: MongoDB Client-Side Encryption
MongoDB is an open source schema-free NoSQL database. Last year, MongoDB made waves when they announced Queryable Encryption in their upcoming client-side encryption release.
Taken from the press release, but adapted for dark themes.
A statement at the bottom of their press release indicates that this isn’t clown-shoes:
Queryable Encryption was designed by MongoDB’s Advanced Cryptography Research Group, headed by Seny Kamara and Tarik Moataz, who are pioneers in the field of encrypted search. The Group conducts cutting-edge peer-reviewed research in cryptography and works with MongoDB engineering teams to transfer and deploy the latest innovations in cryptography and privacy to the MongoDB data platform.
If you recall, I mentioned Seny Kamara in the SSE section of this post. They certainly aren’t wrong about Kamara and Moataz being pioneers in this field.
So with that in mind, let’s explore the implementation in libmongocrypt and see how it stands up to scrutiny.
MongoCrypt: The Good
MongoDB’s encryption library takes key management seriously: They provide a KMS integration for cloud users by default (supporting both AWS and Azure).
MongoDB uses Encrypt-then-MAC with AES-CBC and HMAC-SHA256, which is congruent to what Signal does for message encryption.
How Is Queryable Encryption Implemented?
From the current source code, we can see that MongoCrypt generates several different types of tokens, using HMAC (calculation defined here).
According to their press release:
The feature supports equality searches, with additional query types such as range, prefix, suffix, and substring planned for future releases.
Which means that most of the juicy details probably aren’t public yet.
These HMAC-derived tokens are stored wholesale in the data structure, but most are encrypted before storage using AES-CTR.
There are more layers of encryption (using AEAD), server-side token processing, and more AES-CTR-encrypted edge tokens. All of this is finally serialized (implementation) as one blob for storage.
Since only the equality operation is currently supported (which is the same feature you’d get from HMAC), it’s difficult to speculate what the full feature set looks like.
However, since Kamara and Moataz are leading its development, it’s likely that this feature set will be excellent.
MongoCrypt: The Bad
Every call to do_encrypt() includes at most the Key ID (but typically NULL) as the AAD. This means that the concerns over Confused Deputies (and NoSQL specifically) are relevant to MongoDB.
However, even if they did support authenticating the fully qualified path to a field in the AAD for their encryption, their AEAD construction is vulnerable to the kind of canonicalization attack I wrote about previously.
First, observe this code which assembles the multi-part inputs into HMAC.
    /* Construct the input to the HMAC */
    uint32_t num_intermediates = 0;
    _mongocrypt_buffer_t intermediates[3];
    // -- snip --
    if (!_mongocrypt_buffer_concat (
           &to_hmac, intermediates, num_intermediates)) {
       CLIENT_ERR ("failed to allocate buffer");
       goto done;
    }
    if (hmac == HMAC_SHA_512_256) {
       uint8_t storage[64];
       _mongocrypt_buffer_t tag = {.data = storage, .len = sizeof (storage)};
       if (!_crypto_hmac_sha_512 (crypto, Km, &to_hmac, &tag, status)) {
          goto done;
       }
       // Truncate sha512 to first 256 bits.
       memcpy (out->data, tag.data, MONGOCRYPT_HMAC_LEN);
    } else {
       BSON_ASSERT (hmac == HMAC_SHA_256);
       if (!_mongocrypt_hmac_sha_256 (crypto, Km, &to_hmac, out, status)) {
          goto done;
       }
    }
The implementation of _mongocrypt_buffer_concat() can be found here.
If either the implementation of that function, or the code I snipped from my excerpt, had prefixed every segment of the AAD with the length of the segment (represented as a uint64_t to make overflow infeasible), then their AEAD mode would not be vulnerable to canonicalization issues.
Using TupleHash would also have prevented this issue.
Silver lining for MongoDB developers: Because the AAD is either a key ID or NULL, this isn’t exploitable in practice.
The first cryptographic flaw sort of cancels the second out.
If the libmongocrypt developers ever want to mitigate Confused Deputy attacks, they’ll need to address this canonicalization issue too.
MongoCrypt: The Ugly
MongoCrypt supports deterministic encryption.
If you specify deterministic encryption for a field, your application passes a deterministic initialization vector to AEAD.
We already discussed why this is bad above.
Wrapping Up
This was not a comprehensive treatment of the field of database cryptography. There are many areas of this field that I did not cover, nor do I feel qualified to discuss.
However, I hope anyone who takes the time to read this finds themselves more familiar with the subject.
Additionally, I hope any developers who think “encrypting data in a database is [easy, trivial] (select appropriate)” will find this broad introduction a humbling experience.
Art: CMYKat
https://soatok.blog/2023/03/01/database-cryptography-fur-the-rest-of-us/
#appliedCryptography #blockCipherModes #cryptography #databaseCryptography #databases #encryptedSearch #HMAC #MongoCrypt #MongoDB #QueryableEncryption #realWorldCryptography #security #SecurityGuidance #SQL #SSE #symmetricCryptography #symmetricSearchableEncryption
NIST opened public comments on SP 800-108 Rev. 1 (the NIST recommendations for Key Derivation Functions) last month. The main thing that's changed from the original document published in 2009 is the inclusion of the Keccak-based KMAC alongside the incumbent algorithms.

One of the recommendations of SP 800-108 is called "KDF in Counter Mode". A related document, SP 800-56C, suggests using a specific algorithm called HKDF instead of the generic Counter Mode construction from SP 800-108, even though they both accomplish the same goal.
Isn’t standards compliance fun?
Interestingly, HKDF isn’t just an inconsistently NIST-recommended KDF, it’s also a common building block in a software developer’s toolkit which sees a lot of use in different protocols.
Unfortunately, the way HKDF is widely used is actually incorrect given its formal security definition. I’ll explain what I mean in a moment.
Art: Scruff
What is HKDF?
To understand what HKDF is, you first need to know about HMAC.

HMAC is a standard message authentication code (MAC) algorithm built with cryptographic hash functions (that's the H). HMAC is specified in RFC 2104 (yes, it's that old).
HKDF is a key-derivation function that uses HMAC under-the-hood. HKDF is commonly used in encryption tools (Signal, age). HKDF is specified in RFC 5869.
HKDF is used to derive a uniformly-random secret key, typically for use with symmetric cryptography algorithms. In any situation where a key might need to be derived, you might see HKDF being used. (Although, there may be better algorithms.)
Art: LvJ
How Developers Understand and Use HKDF
If you're a software developer working with cryptography, you've probably seen an API in the crypto module for your programming language that looks like this, or maybe this.

    hash_hkdf(
        string $algo,
        string $key,
        int $length = 0,
        string $info = "",
        string $salt = ""
    ): string
Software developers that work with cryptography will typically think of the HKDF parameters like so:
- $algo — which hash function to use
- $key — the input key, from which multiple keys can be derived
- $length — how many bytes to derive
- $info — some arbitrary string used to bind a derived key to an intended context
- $salt — some additional randomness (optional)

The most common use-case of HKDF is to implement key-splitting, where a single input key (the Initial Keying Material, or IKM) is used to derive two or more independent keys, so that you're never using a single key for multiple algorithms.
See also:
defuse/php-encryption, a popular PHP encryption library that does exactly what I just described.

At a super high level, the HKDF usage I'm describing looks like this:
    class MyEncryptor {

        protected function splitKeys(CryptographyKey $key, string $salt): array
        {
            $encryptKey = new CryptographyKey(hash_hkdf(
                'sha256',
                $key->getRawBytes(),
                32,
                'encryption',
                $salt
            ));
            $authKey = new CryptographyKey(hash_hkdf(
                'sha256',
                $key->getRawBytes(),
                32,
                'message authentication',
                $salt
            ));
            return [$encryptKey, $authKey];
        }

        public function encryptString(string $plaintext, CryptographyKey $key): string
        {
            $salt = random_bytes(32);
            [$encryptKey, $hmacKey] = $this->splitKeys($key, $salt);
            // ... encryption logic here ...
            return base64_encode($salt . $ciphertext . $mac);
        }

        public function decryptString(string $encrypted, CryptographyKey $key): string
        {
            $decoded = base64_decode($encrypted);
            $salt = mb_substr($decoded, 0, 32, '8bit');
            [$encryptKey, $hmacKey] = $this->splitKeys($key, $salt);
            // ... decryption logic here ...
            return $plaintext;
        }

        // ... other methods here ...
    }
Unfortunately, anyone who ever does something like this just violated one of the core assumptions of the HKDF security definition and no longer gets to claim “KDF security” for their construction. Instead, your protocol merely gets to claim “PRF security”.
Art: Harubaki
KDF? PRF? OMGWTFBBQ?
Let's take a step back and look at some basic concepts.

(If you want a more formal treatment, read this Stack Exchange answer.)
PRF: Pseudo-Random Functions
A pseudorandom function (PRF) is an efficient function that emulates a random oracle.

"What the hell's a random oracle?" you ask? Well, Thomas Pornin has the best explanation for random oracles:
A random oracle is described by the following model:
- There is a black box. In the box lives a gnome, with a big book and some dice.
- We can input some data into the box (an arbitrary sequence of bits).
- Given some input that he did not see beforehand, the gnome uses his dice to generate a new output, uniformly and randomly, in some conventional space (the space of oracle outputs). The gnome also writes down the input and the newly generated output in his book.
- If given an already seen input, the gnome uses his book to recover the output he returned the last time, and returns it again.
So a random oracle is like a kind of hash function, such that we know nothing about the output we could get for a given input message m. This is a useful tool for security proofs because they allow to express the attack effort in terms of number of invocations to the oracle.
The problem with random oracles is that it turns out to be very difficult to build a really “random” oracle. First, there is no proof that a random oracle can really exist without using a gnome. Then, we can look at what we have as candidates: hash functions. A secure hash function is meant to be resilient to collisions, preimages and second preimages. These properties do not imply that the function is a random oracle.
Thomas Pornin
Alternatively, Wikipedia has a more formal definition available for the academically-inclined.

In practical terms, we can generate a strong PRF out of secure cryptographic hash functions by using a keyed construction; i.e. HMAC.
Thus, as long as your HMAC key is a secret, the output of HMAC can be generally treated as a PRF for all practical purposes. Your main security consideration (besides key management) is the collision risk if you truncate its output.
Art: LvJ
KDF: Key Derivation Functions
A key derivation function (KDF) is exactly what it says on the label: a cryptographic algorithm that derives one or more cryptographic keys from a secret input (which may be another cryptography key, a group element from a Diffie-Hellman key exchange, or a human-memorable password).

Note that passwords should be used with a Password-Based Key Derivation Function, such as scrypt or Argon2id, not HKDF.
Despite what you may read online, KDFs do not need to be built upon cryptographic hash functions, specifically; but in practice, they often are.
A notable counter-example to this hash function assumption: CMAC in Counter Mode (from NIST SP 800-108) uses AES-CMAC, which is a variable-length input variant of CBC-MAC. CBC-MAC uses a block cipher, not a hash function.
Regardless of the construction, KDFs use a PRF under the hood, and the output of a KDF is supposed to be a uniformly random bit string.
Art: LvJ
PRF vs KDF Security Definitions
The security definition for a KDF has more relaxed requirements than PRFs: PRFs require the secret key be uniformly random. KDFs do not have this requirement.

If you use a KDF with a non-uniformly random IKM, you probably need the KDF security definition.
If your IKM is already uniformly random (i.e. the “key separation” use case), you can get by with just a PRF security definition.
After all, the entire point of KDFs is to allow a congruent security level as you’d get from uniformly random secret keys, without also requiring them.
However, if you’re building a protocol with a security requirement satisfied by a KDF, but you actually implemented a PRF (i.e., not a KDF), this is a security vulnerability in your cryptographic design.
Art: LvJ
The HKDF Algorithm
HKDF is an HMAC-based KDF. Its algorithm consists of two distinct steps:
- HKDF-Extract uses the Initial Keying Material (IKM) and Salt to produce a Pseudo-Random Key (PRK).
- HKDF-Expand actually derives the keys using the PRK, the info parameter, and a counter byte (from 1 to 255) for each hash function output needed to generate the desired output length.

If you'd like to see an implementation of this algorithm, defuse/php-encryption provides one (since hash_hkdf() didn't land in PHP until 7.1.0). Alternatively, there's a Python implementation on Wikipedia that uses HMAC-SHA256.

This detail about the two steps will matter a lot in just a moment.
Art: Swizz
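To make the two steps concrete, here's a from-scratch sketch of HKDF-SHA256 per RFC 5869 (illustration only; in production code, just call PHP's built-in hash_hkdf()):

    // HKDF-Extract: PRK = HMAC-SHA256(salt, IKM)
    function hkdfExtract(string $ikm, string $salt = ''): string
    {
        if ($salt === '') {
            $salt = str_repeat("\x00", 32); // default salt: hashLen zero bytes
        }
        return hash_hmac('sha256', $ikm, $salt, true);
    }

    // HKDF-Expand: T(i) = HMAC-SHA256(PRK, T(i-1) || info || counter byte)
    function hkdfExpand(string $prk, string $info, int $length): string
    {
        if ($length > 255 * 32) {
            throw new RangeException('Output length too large for HKDF-SHA256');
        }
        $t = '';
        $okm = '';
        for ($i = 1; strlen($okm) < $length; ++$i) {
            $t = hash_hmac('sha256', $t . $info . chr($i), $prk, true);
            $okm .= $t;
        }
        return substr($okm, 0, $length);
    }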
How HKDF Salts Are Misused
The HKDF paper, written by Hugo Krawczyk, contains the following definition (page 7).

The paper goes on to discuss the requirements for authenticating the salt over the communication channel, lest the attacker have the ability to influence it.
A subtle detail of this definition is that it says "a salt value" (singular), not "multiple salt values."
Which means: You’re not supposed to use HKDF with a constant IKM, info label, etc. but vary the salt for multiple invocations. The salt must either be a fixed random value, or NULL.
The HKDF RFC makes this distinction even less clear when it argues for random salts.
We stress, however, that the use of salt adds significantly to the strength of HKDF, ensuring independence between different uses of the hash function, supporting "source-independent" extraction, and strengthening the analytical results that back the HKDF design.

Random salt differs fundamentally from the initial keying material in two ways: it is non-secret and can be re-used. As such, salt values are available to many applications. For example, a pseudorandom number generator (PRNG) that continuously produces outputs by applying HKDF to renewable pools of entropy (e.g., sampled system events) can fix a salt value and use it for multiple applications of HKDF without having to protect the secrecy of the salt. In a different application domain, a key agreement protocol deriving cryptographic keys from a Diffie-Hellman exchange can derive a salt value from public nonces exchanged and authenticated between communicating parties as part of the key agreement (this is the approach taken in [IKEv2]).
RFC 5869, section 3.1
Okay, sure. Random salts are better than a NULL salt. And while this section alludes to "[fixing] a salt value" to "use it for multiple applications of HKDF without having to protect the secrecy of the salt", it never explicitly states this requirement. Thus, the poor implementor is left to figure this out on their own.

Because they're not using HKDF in accordance with its security definition, many implementations (such as the PHP encryption library we've been studying) do not get to claim that their construction has KDF security.
Instead, they only get to claim “Strong PRF” security, which you can get from just using HMAC.
Art: LvJ
What Purpose Do HKDF Salts Actually Serve?
Recall that the HKDF algorithm uses salts in the HKDF-Extract step. Salts in this context were intended for deriving keys from a Diffie-Hellman output, or a human-memorable password.

In the case of [Elliptic Curve] Diffie-Hellman outputs, the result of the key exchange algorithm is a random group element, but not necessarily a uniformly random bit string. There's some structure to the output of these functions. This is why you always, at minimum, apply a cryptographic hash function to the output of [EC]DH before using it as a symmetric key.
HKDF uses salts as a mechanism to improve the quality of randomness when working with group elements and passwords.
Extending the nonce for a symmetric-key AEAD mode is a good idea, but using HKDF’s salt parameter specifically to accomplish this is a misuse of its intended function, and produces a weaker argument for your protocol’s security than would otherwise be possible.
How Should You Introduce Randomness into HKDF?
Just shove it in the info parameter.

Art: LvJ

It may seem weird, and defy intuition, but the correct way to introduce randomness into HKDF, as most developers interact with the algorithm, is to skip the salt parameter entirely (either fixing it to a specific value for domain separation or leaving it NULL), and instead concatenate data into the info parameter.

    class BetterEncryptor extends MyEncryptor {

        protected function splitKeys(CryptographyKey $key, string $salt): array
        {
            $encryptKey = new CryptographyKey(hash_hkdf(
                'sha256',
                $key->getRawBytes(),
                32,
                $salt . 'encryption',
                '' // intentionally empty
            ));
            $authKey = new CryptographyKey(hash_hkdf(
                'sha256',
                $key->getRawBytes(),
                32,
                $salt . 'message authentication',
                '' // intentionally empty
            ));
            return [$encryptKey, $authKey];
        }
    }
Of course, you still have to watch out for canonicalization attacks if you’re feeding multi-part messages into the info tag.
Another advantage: This also lets you optimize your HKDF calls by caching the PRK from the HKDF-Extract step and reusing it for multiple invocations of HKDF-Expand with a distinct info. This allows you to reduce the number of HMAC invocations from 2N to N + 1 (and since each HMAC involves two hash function invocations, that's 4N hash calls down to 2N + 2).

Notably, this HKDF salt usage was one of the things that was changed in V3/V4 of PASETO.
Does This Distinction Really Matter?
If it matters, your cryptographer will tell you it matters, which probably means they have a security proof that assumes the KDF security definition for a very good reason, and you're not allowed to violate that assumption.

Otherwise, probably not. Strong PRF security is still pretty damn good for most threat models.
Art: LvJ
Closing Thoughts
If your takeaway was, "Wow, I feel stupid," don't, because you're in good company.

I've encountered several designs in my professional life that shoved the randomness into the info parameter, and it perplexed me because there was a perfectly good salt parameter right there. It turned out I was wrong to believe that, for all of the subtle and previously poorly documented reasons discussed above. But now we both know, and we're all better off for it.

So don't feel dumb for not knowing. I didn't either, until this was pointed out to me by a very patient colleague.
“Feeling like you were stupid” just means you learned.
(Art: LvJ)

Also, someone should really get NIST to be consistent about whether you should use HKDF or "KDF in Counter Mode with HMAC" as a PRF, because SP 800-108's new revision doesn't concede this point at all (presumably a relic from the 2009 draft).
This concession was made separately in 2011 with SP 800-56C revision 1 (presumably in response to criticism from the 2010 HKDF paper), and the present inconsistency is somewhat vexing.
(On that note, does anyone actually use the NIST 800-108 KDFs instead of HKDF? If so, why? Please don’t say you need CMAC…)
Bonus Content
These questions were asked after this blog post initially went public, and I thought they were worth adding. If you ask a good question, it may end up being edited in at the end, too.Art: LvJ
Why Does HKDF use the Salt as the HMAC key in the Extract Step? (via r/crypto)
Broadly speaking, when applying a PRF to two "keys", you get to decide which one you treat as the "key" in the underlying API.

HMAC's API is HMAC_alg(key, message), but how HKDF uses it might as well be HMAC_alg(key1, key2).
The difference here seems almost arbitrary, but there’s a catch.
HKDF was designed for Diffie-Hellman outputs (before ECDH was the norm), which are generally able to be much larger than the block size of the underlying hash function. 2048-bit DH results fit in 256 bytes, which is 4 times the SHA256 block size.
If you have to make a decision, using the longer input (DH output) as the message is more intuitive for analysis than using it as the key, due to pre-hashing. I’ve discussed the counter-intuitive nature of HMAC’s pre-hashing behavior at length in this post, if you’re interested.
So with ECDH, it literally doesn't matter which one was used (unless you have a weird mismatch in hash functions and ECC groups; e.g. NIST P-521 with SHA-224).
But before the era of ECDH, it was important to use the salt as the HMAC key in the extract step, since they were necessarily smaller than a DH group element.
Thus, HKDF chose HMAC_alg(salt, IKM) instead of HMAC_alg(IKM, salt) for the calculation of PRK in the HKDF-Extract step.
Neil Madden also adds that the reverse would create a chicken-egg situation, but I personally suspect that the pre-hashing would be more harmful to the security analysis than merely supplying a non-uniformly random bit string as an HMAC key in this specific context.
My reason for believing this is, when a salt isn't supplied, it defaults to a string of 0x00 bytes as long as the output size of the underlying hash function. If the uniform randomness of the salt mattered that much, this wouldn't be a tolerable condition.

https://soatok.blog/2021/11/17/understanding-hkdf/
#cryptographicHashFunction #cryptography #hashFunction #HMAC #KDF #keyDerivationFunction #securityDefinition #SecurityGuidance
Update (2020-04-29): Twitter has fixed their oversight.
{ "errors": [{ "code": 356, "message": "preferences.gender_preferences.gender_override: Must provide a non-empty custom value 30 characters or less in length." }]}
Anyone who set their custom gender to a long volume of text should still have it set to a long volume of text.
The original article follows after the separator.
I was recently made aware of a change to Twitter, which exposes a new Gender field. If you’ve never specified your gender before, they guessed what it was (which is a really shitty thing to do, especially towards trans folks!).
https://twitter.com/leemandelo/status/1254179716451438592
Slightly annoyed, I went to go see what Twitter thinks my gender is.
Curses! They know I’m a guy. This won’t do at all.
But what’s this? An “Add your gender” option?
That’s at least something, I guess? Defaulting to [whatever the algorithm guesses] is sucky, but at least nonbinary folks can still self-identify however they want.
But 30 characters isn’t a lot. What if I want to drop in, say, 68 characters? Do I need to do some crazy Unicode fuckery to pull that off?
Nope, Inspect Element + set maxlength="255"
and now Twitter thinks my gender is the EICAR test file. Wonderful!
Which means: If someone downloads my Twitter data without my consent onto a workstation running antivirus software, the file will delete itself and all will be right in the marketing world.
https://twitter.com/SoatokDhole/status/1254635753319079937
(Okay but seriously, a lot of downstream systemic failures would have to exist for any damage to occur from me deciding to self-identify to marketers this way.)
Lessons to Learn
Twitter enforced a maxlength of 30 in the HTML element of the “Add your gender” text input, but they didn’t enforce this requirement server-side. The takeaway here is pretty obvious.
Also, don’t try to automatically guess people’s gender at scale. It’s insulting when you get it wrong, and it’s creepy when you get it right.
(This sticker is tongue-in-cheek.)
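To make the first lesson concrete, here’s a minimal sketch of what server-side enforcement could look like (a hypothetical Flask handler; the route and field names are made up, not Twitter’s actual API):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
MAX_GENDER_LENGTH = 30  # the same limit the client-side maxlength implies

@app.route("/settings/gender", methods=["POST"])
def update_gender():
    value = request.form.get("gender_override", "").strip()
    # Never trust client-side attributes like maxlength; re-validate here.
    if not value or len(value) > MAX_GENDER_LENGTH:
        return jsonify(error=f"Must provide a non-empty custom value "
                             f"{MAX_GENDER_LENGTH} characters or less."), 400
    # ... persist the validated value ...
    return jsonify(ok=True)
```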
What’s the Upper Limit for the Field?
I don’t know, but this indicates it has a larger upper bound than a tweet.
https://twitter.com/txlon5/status/1254648412261228545
If anyone has success dropping an entire thesis on gender identity and culture in the Gender field, let me know.
Update: The Best Genders
Everyone is having a lot of fun with the Gender field. Here are some of the best tweets I’ve seen since publishing this stupid bug.
https://twitter.com/TecraFox/status/1254653500887310337
https://twitter.com/everlasting1der/status/1254652388713082880
https://twitter.com/hedgehog_emoji/status/1254650551473594368
https://twitter.com/Neybulot/status/1254659048886210563
A fox in Furry Technologists suggested building genderfs, which is a lot like redditfs but hoists the entire filesystem into the Gender field.
While I have your attention, trans rights are human rights and biology disagrees with the simple notion of “two sexes”. Thank you and good night.
https://soatok.blog/2020/04/27/why-server-side-input-validation-matters/
#furry #infosec #inputValidation #LGBTQIA_ #security #softwareDevelopment #Twitter
Earlier this year I discussed some noteworthy examples of crackpot cryptography and snake-oil security on this blog.
In this post, I’m going to analyze the claims made by CEW Systems Canada about “Post-Quantum Encryption” and argue that their so-called “bi-symmetric encryption” is another example of the same genre of crackpot cryptography.
https://twitter.com/veorq/status/1159575230970396672
Let’s get the disclaimers out of the way: This post was written solely by some security engineer with a fursona who happens to have a lot of opinions about cryptography. This post is solely the opinion of said author, who also claims to be a blue anthropomorphic dhole, and not the opinion of any employer (especially his).
Credit: Lynx vs Jackalope.
It’s entirely up to you whether or not you want to take me seriously, knowing all of that.
Additionally, by “fraud” I am not speaking in a legal sense, but a colloquial sense.
What Is “Bi-Symmetric Encryption”?
CEW Systems, a Canadian company incorporated in December 2020 by Chad Edward Wanless, claims to have developed a technique called “bi-symmetric encryption”, which they describe as follows:
What exactly is Bi-Symmetric encryption?
Bi-symmetric encryption is an internet communications handshake protocol that uses public/private keys similar to typical asymmetric encryption, but instead uses an underlying symmetric encryption system as the encryption backbone.
(source)
Their FAQ page goes on to claim:
Why is it called Bi-Symmetric?
We chose bi-symmetric encryption as the name because the encryption handshake is a hybrid of both asymmetric and symmetric encryption. It uses public/private keys just as asymmetric encryption does, while using symmetric encryption called CEW encryption as the underlying encryption software routine.
(source)
Ah, what a contradiction! According to this page, bi-symmetric encryption is a handshake protocol that simultaneously:
- Uses public/private keys just as asymmetric encryption does, but
- Uses an underlying symmetric encryption system
But if your underlying encryption for the handshake is entirely symmetric, where do the asymmetric keypairs come in?
Asymmetric cryptography has public/private keypairs because their security is based on a hard computational problem (large integer factorization, the elliptic curve discrete logarithm problem, etc.). You can generally take a private key (or some secret seed that generates both keys) and easily derive its public key, but doing the opposite is prohibitively expensive.
If you’re only using symmetric cryptography, you don’t have this hard computational problem in the mix, so where do the keypairs come in?
The FAQ goes on to imply that bi-symmetric encryption is resistant to brute-force attack, and then vaguely describes One-Time Passwords (a.k.a. two-factor authentication codes).
Brute force attacks on an ordinary computer work by incrementally testing possible values until the desired output response is found. For example, if a vehicle was locked and a smart device is used to hack it, the brute force attack would start at 0,000,000 and say, the unlock code was 1,234,678, the device would resend the code incrementally advancing the value by 1. The signals would repeat until the correct value was eventually found and the vehicle unlocked. Bi-symmetric software works by using a challenge code and test data combination that changes the unlock code for each attempt. Staring at 0,000,000 and incrementing to 9,999,999 would not unlock the vehicle as the unlock could would always be changing with every attempt.
(source)
Even if you’re not a cryptography expert, I hope it’s clear that “synchronized random numbers” (one-time passwords) and “mangling a message so no one else can understand its contents without the key” (symmetric encryption) are totally disparate operations, and not at all interchangeable.
But here’s where things get really funny. After all this confusing and contradictory bullshit, they say this:
Another reason is that asymmetric encryption relies upon a math formula to determine what the private key is by factoring the public key. Bi-symmetric encryption does not mathematically correlate the two, instead one is encrypted by the other.
(source)
Yeah, that concept already exists. It’s called envelope encryption, my dudes. There’s nothing magically post-quantum about envelope encryption, and it doesn’t obviate the need for asymmetric cryptography.
And if both keys are symmetric, and you’re communicating them in the clear, then what’s to stop someone from using the algorithm the same way a legitimate user does?
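For contrast, here’s a minimal sketch of what real envelope encryption looks like, assuming the pyca/cryptography library: a random data key encrypts the message, and a key-encryption key (KEK) wraps the data key. Note that everything still reduces to keeping the KEK secret, and nothing about this is post-quantum:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def envelope_encrypt(kek: bytes, plaintext: bytes):
    data_key = AESGCM.generate_key(bit_length=256)  # fresh key per message
    dk_nonce, msg_nonce = os.urandom(12), os.urandom(12)
    wrapped_key = AESGCM(kek).encrypt(dk_nonce, data_key, b"")  # wrap the data key
    ciphertext = AESGCM(data_key).encrypt(msg_nonce, plaintext, b"")
    return wrapped_key, dk_nonce, ciphertext, msg_nonce

def envelope_decrypt(kek: bytes, wrapped_key, dk_nonce, ciphertext, msg_nonce):
    data_key = AESGCM(kek).decrypt(dk_nonce, wrapped_key, b"")
    return AESGCM(data_key).decrypt(msg_nonce, ciphertext, b"")
```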
Of course, now that we’ve gotten to the meaty center, the remainder of the FAQ entry is the other half of the bullshit sandwich.
The largest reason asymmetric encryption is vulnerable is that the entire plain text being encrypted is mathematically modified using a math formula.
(source)
What are they even talking about?
Credit: Harubaki.
There are a lot of problems with asymmetric encryption. For example: Developers encrypting directly with RSA. This is an antipattern that I’ve complained about before.
But asymmetric encryption isn’t, as a whole, “vulnerable” to anything.
The reason NIST and other standards organizations are focused on post-quantum cryptography is that the currently-deployed asymmetric cryptographic algorithms (RSA, ECC) are broken by a quantum computer (if it ever exists). The solution is to study and standardize better asymmetric algorithms, not throw out the entire class of algorithms, forever.
The fact that quantum computers break RSA and ECC has nothing to do with “the entire plain text being encrypted”, as CEW Systems claims, because that’s generally not what’s actually happening.
If you use TLS 1.2 or 1.3 to connect to a website, one of the following things is happening:
- You have an existing session, no handshake needed.
- Your browser and the webserver use Elliptic Curve Diffie-Hellman to establish a session key. The server’s ephemeral public key is signed by the ECDSA or RSA key in their Certificate, which has been signed by a Certificate Authority independently trusted by the browser and/or operating system you use.
- Your browser encrypts a random value, called the pre-master secret, using the RSA Public Key on the Certificate. The pre-master secret is used by the server to derive the session key for subsequent connections. This doesn’t have forward secrecy like option 2 does, but it’s otherwise secure.
At no point is “the plain text” ever encrypted directly. The ECC option doesn’t even do asymmetric encryption the same way RSA does. ECC is used for key agreement, exclusively.
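To illustrate that last point, here’s a minimal sketch of pure key agreement (using X25519 via the pyca/cryptography library): the asymmetric keys never encrypt any plaintext; they only produce a shared secret, which a KDF then turns into a symmetric session key:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ephemeral keypair and shares only the public half.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

# Both sides arrive at the same shared secret from the other's public key...
client_shared = client_priv.exchange(server_priv.public_key())
server_shared = server_priv.exchange(client_priv.public_key())
assert client_shared == server_shared

# ...and derive the actual symmetric session key from it.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"handshake",
).derive(client_shared)
```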
Understanding the basics of “how cryptography is used” is table stakes for even thinking about inventing your own cryptography, and CEW Systems cannot even clear that bar.
With the under lying encryption of bi-symmetric, each individual letter is modified separately, there is no mathematical link to the entire plain text being encrypted.
(source)
https://www.youtube.com/watch?v=QC1WeLyOjj0
The charitable interpretation is that they’re describing a stream cipher, or a block cipher used in Counter Mode.
A more likely interpretation is that they’re encrypting each letter independently in something like ECB mode, which offers no semantic security.
Credit: XKCD
The less charitable interpretation is reinforced by this image included in their FAQ page that archive.org did not successfully capture:
Image taken from this page but with colors inverted to better suit my blog’s theme.
This obsession over big key sizes is oddly reminiscent of the DataGateKeeper scam on KickStarter in 2016.
The about page further cements the insanity of their proposal:
This encryption method is a hybridization of asymmetric public/private keys combined with symmetric encryption to modify each character individually and not the data packets.
(source)
Credit: Lynx vs Jackalope.
Moving on…
A great example that demonstrates how bi-symmetric encryption works: If one were to encrypt, for example, a credit card number, a brute force attack would produce every possible credit card number between 0000 0000 0000 0000 and 9999 9999 9999 9999 with no means to determine which output would be the correct value.
(source)
This just in! Crackpot company that claims to have solved post-quantum cryptography using only symmetric cryptography also hasn’t heard of authenticated encryption. Film at 11.
Credit: Lynx vs Jackalope
It’s frustrating to read bold claims from someone who flunks the fundamentals.
Credit Card Numbers adhere to the Luhn Algorithm, so an attacker isn’t going to even bother with 90% of the possible card numbers in their brute force range.
(Because of the Luhn Algorithm, there is only one valid checksum value, stored as the last digit, for any given combination of the first 15 digits, which is a 10x speedup in guessing. This mostly exists to detect typos before sending a payment request to the bank. Also not every credit card number is a full 16 digits; they can be as short as 13 or as long as 19.)
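For the curious, the Luhn check fits in a few lines (a sketch of the conventional right-to-left doubling):

```python
def luhn_valid(card_number: str) -> bool:
    # Double every second digit from the right, subtracting 9 when the
    # result exceeds 9, then check the total is a multiple of 10.
    digits = [int(d) for d in card_number.replace(" ", "")]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

assert luhn_valid("4111 1111 1111 1111")  # a well-known test card number
```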
Also, for posterity, here’s my actual credit card number, encrypted with a 256-bit random key with a scheme that exists and is widely deployed today (n.b. NOT their snake-oil). You ain’t gonna brute force it.
42a0d7af9ace893289ae4bd86d62c604ab1fa708f1063172777be69511fa01d4af5027ad55a15166b49f6861c825fd026fba00f4eecc1a67
TL;DR
In short, bi-symmetric encryption is the term CEW Systems uses to describe their crackpot cryptographic algorithm that is, allegedly, simultaneously a one-time password algorithm and an envelope encryption algorithm, which involves public/private keys but doesn’t involve an asymmetric mathematical problem that calls for mathematically related keys.
This contradictory and convoluted definition is almost certainly intended to confuse people who don’t understand advanced mathematics while sounding convincing and confident. It’s bullshit, plain and simple.
More Crackpot Claims
If you feel like you haven’t suffered enough, the team behind “bi-symmetric encryption” goes on to make claims about password protection.
Because of course they do.
Password Protection
CEW systems has given great thought to how to protect users’ passwords. As noted in the man-in-the-middle attack, passwords are combined with unique identifying data from users’ computers or smart devices, such as serial numbers, before being modified into encryption keys.
(source)
Wrong. So wrong.
Credit: Swizz
Password hashing and password-authenticated key exchanges are an entire discipline that I don’t want to delve into in this post, but passwords are salted and stretched with a computationally difficult symmetric algorithm (usually a password hashing function), especially when they’re being used to derive encryption keys.
There are schemes that use TPMs or secure enclaves to produce key material from a given password, but that doesn’t rely on a “serial number” the way they’re implying.
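For comparison, here’s a minimal sketch of “salted and stretched” using scrypt from Python’s standard library (the parameters are illustrative; prefer a modern password hash like Argon2id where it’s available):

```python
import hashlib
import os

def derive_key_from_password(password: str, salt: bytes = b""):
    # A random salt guarantees that identical passwords derive different keys.
    if not salt:
        salt = os.urandom(16)
    # scrypt is deliberately memory-hard and slow, to frustrate brute force.
    key = hashlib.scrypt(
        password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1, dklen=32
    )
    return key, salt
```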
Additionally, CEW systems created a patent pending and copyrighted custom user interface password edit box. This new user interface tool displays a dropdown window that contains “Forgot Password”, “Change Password” buttons and a phishing email warning tip window that informs and reminds users that the only means by which to change the password is through the software they are currently using.
(source)
That is a lot of, well, something. Good luck getting a patent awarded on something that almost every corporate intranet has implemented since Hackers came out in 1995.
Patent Pending on this? Prior art, yo.
I’m also deeply curious how they propose to implement account recovery in their systems for when a user forgets their password.
If anyone reading this ever finds themselves making security decisions for a company, warning labels like this are not effective at all. A much better solution to phishing (and weak passwords, while we’re talking about it) is WebAuthn with hardware security keys (i.e. Solo V2).
Establishing Fraud
Hanlon’s Razor is an adage that states, “Never attribute to malice that which is adequately explained by stupidity.”
To call something fraudulent, it’s not sufficient to merely conclude that they have crackpot ideas (which would be stupid), you also have to demonstrate deception (which is a form of malice).
(To be clear: Me calling something fraudulent, in my opinion, isn’t the same bar that the law uses. Do not misconstrue my statements as claims about the legal system. I’m not a lawyer, and I am not making a legal case.)
In the previous section, we’ve looked at enough evidence to justify calling bi-symmetric encryption an instance of crackpot cryptography. But when does it stop being overconfidence and start becoming a grift?
I submit to you that the line of demarcation is when a crackpot attempts to gain money, fame, notoriety, or a reputational lift for their crackpot idea.
To begin, let’s look at some red flags on the CEW Systems website. Then let’s dive a little bit deeper and see what we can dredge up from the Internet.
Credit: Lynx vs Jackalope
Red Flags
CTO Report
The front page of the CEW Systems website claims to have a “Third Party Academic Independent Review” from Dr. Cyril Coupal from Saskatchewan Polytechnic’s Digital Integration Centre of Excellence.
Immediately after this claim, the website states:
Dr. Cyril Coupal’s CTO report currently can be made available to those who have signed a Non-Disclosure Agreement.
(source)
Let me tell ya, as a security engineer, I’m used to dealing with Non-Disclosure Agreements. NDAs are almost always a prerequisite for getting contracted to review a company’s source code or conduct a penetration test on their networks.
Almost nobody working in cryptography today would ever sign an NDA in order to read a third-party academic review of any cryptographic system. That’s bonkers.
In fact, you don’t need to sign anything: Simply navigate to Software Tools, then click Papers at the bottom, and you can download it directly from their website.
Here’s a mirrored copy of this “CTO Report” (PDF).
The “How It Works” Page
A common tactic of scammers and frauds is to sponsor a talk at a prestigious conference and then use the film of your talk at said conference to artificially inflate the credibility of your claims.
This is what we saw with TimeAI at Black Hat.
CEW Systems took a different route than Crown Sterling:
They joined with two other companies (Uzado, Terranova Defense) to form the so-called TCU Alliance in February 2020 (source), then invited a Constable from the Toronto Police Department’s E3 Cyber Security Division to deliver a talk and give legitimacy to their accompanying talk (archived).
Interestingly, their page about this TCU Alliance also states:
This alliance came together during 2020; while bidding on government proposals being issued by the Innovation for Defense Excellence and Security (IDEaS) proposals.
(source)
This detail alone is sufficient in establishing the financial incentives needed to claim “fraud”. They’re out to win government contracts.
Will This Save America From Cyber-War?
Speaking of Terranova Defense, their Chairperson James Castle wrote an opinion piece (in response to a The Hill article) that claims:
I would be thinking that collaboratively with our quantum proof encryption software, We “COULD” Survive a Cyber War!!
(source)
I wish I was making this up.
Time What Is Time?
CEW Systems was incorporated in December 2020. However, their FAQ page states December 2019 and Chad Wanless’s LinkedIn page (PDF) claims 2017. The copyright year on their website states 2023.
If you cannot reasonably establish the history and timeline of the company you’re talking to, they’re probably up to no good.
Is It Really Fraud?
Aside from the abundant red flags, and the establishment of financial incentives, and the convoluted claims about the company’s timeline, the other significant modicum of evidence for fraud isn’t found on the CEW Systems website.
Rather, it’s kind of meta.
The entire reason that I’m writing about this at all is because CEW Systems pitched their crackpot cryptography to a current Dhole Moments reader, which led to me being alerted to the existence of CEW Systems and their so-called “bi-symmetric encryption” in the first place.
Crackpot ideas are stupid; trying to sell your crackpot ideas to others is fraud.
I don’t know if it was desperation or greed, but they tried to sell their crackpot product to an audience with someone attending that was just clueful enough to realize that something’s amiss. If they hadn’t been trying to sell their crackpot ideas, I would never have even heard of them.
When you add those facts together, I can only conclude that bi-symmetric encryption is a fraud being perpetuated by Chad Wanless of CEW Systems in Canada.
What Did Dr. Coupal Reveal?
If you recall, CEW Systems erroneously leaked the same “CTO Report” that they claimed would only be made available to parties that agreed to their Non-Disclosure Agreement.
I’d like to take this opportunity to annotate some of the interesting revelations from Dr. Cyril Coupal’s report. Feel free to skip this section if you aren’t interested.
The Analysis Was “Short”
The introduction of the CTO report states:
CEW Systems Canada Inc. has asked the Saskatchewan Polytechnic Digital Integration Centre of Excellence (DICE) group to perform a short CTO-funded analysis on their Bi-Symmetric Hybrid Encryption System.
I don’t know what “short CTO-funded analysis” (a unique phrase that doesn’t exist online outside the context of CEW Systems) even means, but any short analysis is unlikely to be a careful one.
How the “Encryption” is Achieved
The bottom of page 1 (Overview of the Approach) states:
The encryption itself is achieved by randomly generating keys and interweaving them with portions of unencrypted data to be transmitted, applied to single bytes of data rather than long byte collections.
This is basically how the broken stream cipher, RC4, was designed. There’s not much novel to say about this. RC4 sucked.
Misleading Scale
The top of page 4 contains this gem of deception:
Two things:
- This should really use a logarithmic scale.
- The powers of 2 being discussed are small potatoes. If you’re trying to factor 2048-bit numbers, your X axis needs to extend way past 30.
I’m honestly not sure if this is because the author was in a rush, or if they’re in on the scam. I sent an email and will update this post when I have any further light to shed on this matter.
Trusted Setup Required
Towards the bottom of page 8 (under the heading: What about initial secret exchange and account setup?) states:
Common secrets, known to both server and client, must be exchanged when initial set up of accounts is made. Various methods exist to do this, but most involve the human factor, which is dangerous.
Way to bury the lede! I can already devise and deploy a purely symmetric system that requires pre-shared keys today. That doesn’t make such a system practical or reliable.
Revenge of the Immortal Security Questions
At the top of page 10, Dr. Coupal was kind enough to include a screenshot titled “Forgot Password Example” which shows the breathtaking cluelessness of CEW Systems.
Security questions are an anti-pattern. There are better mechanisms available. Why would anyone intentionally design a new system that uses password-equivalents that users don’t realize are as sensitive as their actual passwords?
It doesn’t matter how you’re encrypting the answers if they can be leaked from third party apps, and are rarely (if ever) revoked.
Cursed User Interface
Just look at this monstrosity.
This is absolutely cursed.
The Smoking Gun
The entire discipline of Cryptography has a tenet called Kerckhoffs’s Principle: a cryptosystem should be secure, even if everything about the system, except the key, is public knowledge.
At the bottom of page 11 of the CTO Report, Dr. Coupal states:
The implementation algorithms cannot be open source.
Knowing the procedures would aid in hacking the keys, therefore, the actual implementation of the algorithms, as well as the algorithms themselves, must be kept secret. The interweaving protocol is not mathematically based, but procedurally based. Of course, the data secrets for each client-server interchange must also be known, which is highly unlikely. CEW has many protocols in place to keep their application code secure. However, this may cause difficulty in obtaining certification by security agencies if they cannot inspect the code for security issues and thoroughness. Finally, it is not currently known how easy it would be to reverse engineer a copy of the executable code.
(Emphasis mine.)
Credit: Lynx vs Jackalope
In Conclusion
While cryptographic snake-oil salesmen aren’t peddling sex lube, they’ll be quick to fuck you just the same.
In this author’s opinion, “Bi-Symmetric Encryption” is a crackpot fraud, just like MyDataAngel, TimeAI, Crown Sterling, and so many other bullshit products and services before them. Don’t give them any money.
This story has a silver lining: Someone who felt something was amiss spoke up and the fraud was thus detected.
As @SwiftOnSecurity is quick to remind us when discussing their history as a Help Desk worker, your users are your first line of defense against security threats. They’ll be exposed to bullshit that you never will. (Chad Wanless never posted a paper on IACR’s ePrint, after all.)
Addendum (2021-10-20)
A now-deleted InfoQ article (which was preserved by Google’s search cache (alternative archive)), written by Cyril M. Coupal, echoes a lot of the same claims as the CEW Systems website.
Credit: Lynx vs Jackalope
I think it’s safe to conclude that Dr. Coupal isn’t a disinterested third party. I have updated the relevant section to reflect this new evidence.
Sophie Schmieg points out that the factorization graph clearly used trial division rather than an optimized factoring algorithm.
If you look at the graph from Cyril Coupal’s “CTO Report”, it’s pretty clear that he didn’t terminate the algorithm after reaching the square root of the number they were attempting to factor:
When factoring a large integer, if you don’t encounter a factor after reaching the square root of that integer, you know the number is prime. After all, the only way for there to be a factor of N that’s larger than sqrt(N) is if there’s also one smaller than sqrt(N). If you reach sqrt(N), inclusively, and don’t find a factor, your number is prime. You can stop searching.
(Before a mathematician objects: Yes, I know I’m completely ignoring Gaussian Integers which have factors in the complex number space, but no factors in the real space. We’re not talking about those.)
This observation has the effect of doubling the X axis for the curve plotted. Factoring a 32-bit integer should require no more than 65,536 trial divisions.
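In code, the stopping condition is one comparison (a sketch of naive trial division, which is still smarter than what the report’s graph implies):

```python
def smallest_factor(n: int):
    # Only search up to sqrt(n): any factor above sqrt(n) must be
    # paired with one below it.
    d = 2
    while d * d <= n:  # i.e., d <= sqrt(n)
        if n % d == 0:
            return d
        d += 1
    return None  # no factor found; n is prime

# A 32-bit integer needs at most 2**16 = 65,536 trial divisions:
assert smallest_factor(2**31 - 1) is None  # 2147483647 is a Mersenne prime
```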
I’m astonished that someone who claims to have a Ph.D in Computer Science doesn’t know this. Makes you wonder about his credentials a bit, doesn’t it?
https://soatok.blog/2021/09/28/the-bi-symmetric-encryption-fraud/
#biSymmetricEncryption #CEWSystems #crackpots #crypto #cryptography #Cybercrime #fraud #security #snakeOil
A few years ago, when the IETF’s Crypto Forum Research Group was deeply entrenched in debates about elliptic curves for security (which eventually culminated in RFC 7748 and RFC 8032), an IT Consultant showed up on the mailing list with their homemade cipher, Crystalline.
Mike Hamburg politely informed the consultant that the CFRG isn’t the right forum for proposing new symmetric ciphers, or even new modes for symmetric ciphers, and invited them to continue the conversation off-list.
If you’re not familiar with the CFRG, let me just say, this was one of the more patient and measured responses I’ve ever read.
Naturally, the author of Crystalline responded with this:
I’m somewhat disappointed in your reply, as I presumed that someone with a stated interest in ciphers would be eager to investigate anything new to pop up that didn’t have obvious holes in it. It almost sounds like you have had your soul crushed by bureaucracy over the years and have lost all passion for this field.
Full quote available here. It doesn’t get much better.
Really dude? (Art by Khia.)
The discussion continued until Tony Arcieri dropped one of the most brutal takedowns of a cryptographic design in CFRG history.
I think the biggest problem though is all of this has already been pointed out to you repeatedly in other forums and you completely refuse to acknowledge that your cipher fails to meet the absolute most minimum criteria for a secure cipher.
Tony Arcieri, landing a cryptographic 360 no-scope on Crystalline.
In spite of this mic drop moment, the author of Crystalline continued to double down and insist that a symmetric cipher doesn’t need to be indistinguishable from randomness to be secure (which, to severely understate the matter, is simply not true).
Normally, when a cipher fails the indistinguishability test, it’s subtle. This is what Crystalline ciphertexts look like.
Data encrypted with Crystalline, provided in the CFRG mailing list.
Modern ciphers produce something that will look like white noise, like an old TV without the cable plugged in. There should be no discernible pattern.
Crystalline’s author remained convinced that Crystalline’s “131072-bit keys” and claims of “information-theoretic security” were compelling enough to warrant consideration by the standards body that keeps the Internet running.
This was in 2015. In the year 2021, I can safely say that Crystalline adoption never really took off.
Against Crackpot Crypto
Instances of Crackpot Cryptography don’t always look like Crystalline. Sometimes the authors are more charismatic, or have more financial resources to bedazzle would-be suckers, er, investors. Other times, they’re less brazen and keep their designs far away from the watchful gaze of expert opinions–lest their mistakes be exposed for all to see.
Crackpot cryptography is considered dangerous–not because we want people to avoid encryption entirely, but because crackpot cryptography offers a false sense of security. This leads to users acting in ways they wouldn’t if they knew there was little-to-no security. Due to the strictly performative nature of these security measures, I also like to call them Security Theater (although that term is more broadly applicable in other contexts).
The Cryptology community has a few defense mechanisms in place to prevent the real-world adoption of crackpot cryptography. More specifically, we have pithy mottos that distill best practices in a way that usually gets the intent across. (Hey, it’s something!) Unfortunately, the rest of the security industry outside of cryptology often weaponizes these mottos to promote useless and harmful gatekeeping.
The best example of this is the, “Don’t roll your own crypto!” motto.
They See Me Rollin’ [My Own Crypto]
Crackpots never adhere to this rule, so anyone who violates it immediately or often, with wild abandon, can be safely dismissed for kooky behavior.
But if taken to its literal, logical extreme, this rule mandates that nobody would ever write cryptographic code and we wouldn’t have cryptography libraries to begin with. So, clearly, it’s a rule meant to be sometimes broken.
This is why some cryptography engineers soften the message a bit and encourage tinkering for the sake of education. The world needs more software engineers qualified to write cryptography.
After all, you wouldn’t expect to hear “Don’t roll your own crypto” being levied against Jason Donenfeld (WireGuard) or Frank Denis (libsodium), despite the fact that both of those people did just that.
But what about a high-level library that defers to libsodium for its actual crypto implementations?
In a twist that surprises no one, lazy heuristics have a high false positive rate. In this case, the lazy heuristic is both, “What qualifies as rolling one’s own crypto?” as well as, “When is it safe to break this rule?”
More broadly, though, is that these knee-jerk responses are a misfiring defense mechanism intended to stop quacks from taking all the air out of the room.
It doesn’t always work, though. There have been a few downright absurd instances of crackpot cryptography in the past few years.
Modern Examples of Crypto Crackpottery
Craig Wright’s Sartre Signature Scam
Satoshi Nakamoto is the alias of the anonymous cryptographer that invented Bitcoin. In the years since Satoshi has gone quiet, a few cranks have come out of the woodwork to claim to be the real Satoshi.
Craig Wright is one of the more famous Satoshi impersonators due to his Sartre Signature Scam.
Satoshi’s earliest Bitcoin transactions are public. If you can lift the public key and signature from the transaction and then replay them in a different context as “proof” that you’re Satoshi, you can produce a proof of identity that validates without having to possess Satoshi’s private key. Then you can just wildly claim it’s a signature that validates the text of some philosopher’s prose and a lot of people will believe you.
With a little bit of showmanship added on, you too can convince Gavin Andresen by employing this tactic. (Or maybe not; I imagine he’s learned his lesson by now.)
Time AI
Crown Sterling’s sponsored talk at Black Hat USA 2019 is the most vivid example of crackpot cryptography in most people’s minds.
Even the name “Time AI” just screams buzzword soup, so it should come as no surprise that their talk covered a lot of nonsense: “quasi-prime numbers”, “infinite wave conjugations”, “nano-scale of time”, “speed of AI oscillations”, “unified physics cosmology”, and “multi-dimensional encryption technology”.
Naturally, this pissed a lot of cryptographers off, and the normally even-keeled Dan Guido of Trail of Bits actually called them out on their bullshit during their presentation’s Q&A section.
https://twitter.com/dguido/status/1159579063540805632?lang=en
For most people, the story ended with a bunch of facepalms. But Crown Sterling doubled down and published a press release claiming the ability to break 256-bit RSA keys.
Amusingly, their attack took 50 seconds–which is a lot slower than the standard RSA factoring attacks for small key sizes.
(For those who are missing context: In order to be secure, RSA requires public key sizes in excess of 2048 bits. Breaking 256-bit RSA should take less than a minute on any modern PC.)
Terra Quantum
Earlier this week, Bloomberg news ran a story titled, A Swiss Company Says It Found Weakness That Imperils Encryption. If you only read the first few paragraphs, it’s really clear that the story basically boils down to, “Swiss Company realizes there’s an entire discipline of computer science dedicated to quantum computers and the risks they pose to cryptography.”
Here’s a quick primer on quantum computers and cryptography:
If a practical quantum computer is ever built, it can immediately break all of the asymmetric cryptography used on the Internet today: RSA, DSA, Diffie-Hellman, Elliptic Curve Cryptography, etc. The attack costs to break these algorithms vary, but are generally polynomial in the key size (for numbers of queries), thanks to Shor’s algorithm.
The jury is still out on whether or not quantum computers will ever be practical. Just in case, a lot of cryptographers are working on post-quantum cryptography (algorithms that are secure even against quantum computers).
Symmetric cryptography fares a lot better: The attack costs are roughly square-rooted (Grover’s algorithm turns a 2^n search into roughly 2^(n/2)). This makes a 128-bit secure cipher have only a 64-bit security level, which is pretty terrible, but a 256-bit secure cipher remains at the 128-bit security level even with practical quantum computers.
So it’s a little strange that they open with:
The company said that its research found vulnerabilities that affect symmetric encryption ciphers, including the Advanced Encryption Standard, or AES, which is widely used to secure data transmitted over the internet and to encrypt files. Using a method known as quantum annealing, the company said its research found that even the strongest versions of AES encryption may be decipherable by quantum computers that could be available in a few years from now.
From the Bloomberg article.
Uh, no.
Let’s do some math: Roughly 2^40 calculations can be performed in seconds on modern computers. If we assume that practical quantum computers are also as fast as classical computers, it’s safe to assume this will hold true as well.
You can break 128-bit ciphers in 2^64 time, using Grover’s algorithm. You can’t break 256-bit ciphers in any practical time, even with the quantum computer speed-up. Most software prefers 256-bit AES over 128-bit AES for this reason.
What does 2^64 time look like?
https://www.youtube.com/watch?v=vWXP3DvH8OQ
In 2012, we could break DES (which has 56-bit keys) in 24 hours with FPGAs dedicated to the task. Since each extra bit of security doubles the search space, we can extrapolate that 64 bits would require 2^8 times as long, or 256 days.
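The extrapolation is simple arithmetic; every additional bit doubles the 24-hour DES baseline:

```python
# 56-bit DES fell in ~24 hours on 2012-era FPGAs; each extra bit doubles it.
def days_to_brute_force(bits: int, baseline_bits: int = 56) -> float:
    return 2.0 ** (bits - baseline_bits)

print(days_to_brute_force(64))          # 256.0 days
print(days_to_brute_force(128) / 365)   # ~1.3e19 years: not happening
```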
So even with a quantum computer in hand, you would need to spend several months trying to break a single 128-bit AES key.
(Art by Scruff Kerfluff.)
If this were just one poorly written Bloomberg article put together by someone who vastly misunderstands post-quantum cryptography, Terra Quantum AG wouldn’t require much mention.
But, as with other crackpots before them, Terra Quantum doubled down with yet another press release published on Business Wire. (Archived.)
Terra Quantum realised that the AES is fairly secure against already identified algorithms but may appear fenceless against upcoming threats. To build the defence, Terra Quantum set out to look for a weakness by testing the AES against new algorithms. They Terra Quantum discovered a weakness on the message-digest algorithm MD5.
Okay, so in the time that elapsed between the two press releases, they realized they couldn’t realistically break AES with a quantum computer, but…MD5? MD-fucking-FIVE?! This is a joke right?
“Let’s hype up a hypothetical attack leveraged by quantum computers and then direct it at the most widely broken hash function on Earth.” – Shorter Terra Quantum
(Art by Khia.)
The press release goes on to almost have a moment of self-awareness, but ultimately fails to do so:
The Terra Quantum team found that one can crack an algorithm using a quantum annealer containing about 20,000 qubits. No such annealer exists today, and while it is impossible to predict when it may be created, it is conceivable that such an annealer could become available to hackers in the future.
(Emphasis mine.)
Yikes. There’s a lot of bullshit in that sentence, but it only gets zanier from there.
https://twitter.com/boazbaraktcs/status/1359283973789278208
Here’s an actual quote from Terra Quantum’s CTOs, Gordey Lesovik and Valerii Vinokur about the “solution” to their imaginary problem:
“A new protocol derives from the notion that Quantum Demon is a small beast. The standard approach utilises the concept that the Demon hired by an eavesdropper (Eva) is a King Kong-like hundred kilometres big monster who can successfully use all the transmission line losses to decipher the communication. But since real Quantum Demons are small, Eva has to recruit an army of a billion to successfully collect all the scattered waves leaking from the optical fibre that she needs for efficient deciphering. Terra Quantum proposes an innovative technique utilizing the fact that such an army cannot exist – in accord with the second law of thermodynamics.”
I seriously cannot fucking make this shit up. My fiction writing skills are simply not good enough.
I don’t partake in recreational drugs, but if I did, I’d probably want whatever they’re on.
It’s important to note, at no point does Terra Quantum show their work. No source code or technical papers are published; just a lot of press releases that make exaggerated claims about quantum computers and totally misunderstand post-quantum cryptography.
Takeaways
If you see a press release on Business Wire about cryptography, it’s probably a scam. Real cryptographers publish on ePrint and then peer-reviewed journals, present their talks at conferences (but not sponsored talks), and exercise radical transparency with all facets of their work.
Publish the source code, Luke!
Cryptography has little patience for swindlers, liars, and egomaniacs. (Although cryptocurrency seems more amenable to those personalities.) That doesn’t stop them from trying, of course.
If you’re reading this blog post, feel like learning about cryptography and cryptanalysis, and are put off by the “don’t roll your own crypto” mantra and its implied gatekeeping, I hope it’s clear by now who that phrase was mostly intended for and why.
https://soatok.blog/2021/02/09/crackpot-cryptography-and-security-theater/
#asymmetricCryptography #crackpots #CrownSterling #cryptography #kooks #postQuantumCryptography #quantumComputers #scamArtists #scammers #scams #symmetricCryptography #TerraQuantum #TimeAI
Let me say up front, I’m no stranger to negative or ridiculous feedback. It’s incredibly hard to hurt my feelings, especially if you intend to. You don’t openly participate in the furry fandom since 2010 without being accustomed to malevolence and trolling. If this were simply a story of someone being an asshole to me, I would have shrugged and moved on with my life.
It’s important that you understand this, because when you call it like you see it, sometimes people dismiss your criticism with “triggered” memes. This isn’t me being offended. I promise.
My recent blog post about crackpot cryptography received a fair bit of attention in the software community. At one point it was on the front page of Hacker News (which is something that pretty much never happens for anything I write).
Unfortunately, that also means I crossed paths with Zed A. Shaw, the author of Learn Python the Hard Way and other books often recommended to neophyte software developers.
As someone who spends a lot of time trying to help newcomers acclimate to the technology industry, there are some behaviors I’ve recognized in technologists over the years that makes it harder for newcomers to overcome anxiety, frustration, and Impostor Syndrome. (Especially if they’re LGBTQIA+, a person of color, or a woman.)
Normally, these are easily correctable behaviors exhibited by people who have good intentions but don’t realize the harm they’re causing–often not by what they’re saying, but by how they say it.
Sadly, I can’t be so generous about… whatever this is:
https://twitter.com/lzsthw/status/1359659091782733827
Having never before encountered a living example of a poorly-written villain openly hostile towards the work I do to help disadvantaged people thrive in technology careers, I sought to clarify Shaw’s intent.
https://twitter.com/lzsthw/status/1359673331960733696
https://twitter.com/lzsthw/status/1359673714607013905
This is effectively a very weird hybrid of an oddly-specific purity test and a form of hazing ritual.
Let’s step back for a second. Can you even fathom the damage attitudes like this can cause? I can tell you firsthand, because it happened to me.
Interlude: Amplified Impostor Syndrome
In the beginning of my career, I was just a humble web programmer. Due to a long story I don’t want to get into now, I was acquainted with the culture of black-hat hacking that precipitates the DEF CON community.
In particular, I was exposed to the writings of a malicious group called Zero For 0wned, which made sport of hunting “skiddiez” and preached a very “shut up and stay in your lane” attitude:
Geeks don’t really come to HOPE to be lectured on the application of something simple, with very simple means, by a 15 year old. A combination of all the above could be why your room wasn’t full. Not only was it fairly empty, but it emptied at a rapid rate. I could barely take a seat through the masses pushing me to escape. Then when I thought no more people could possibly leave, they kept going. The room was almost empty when I gave in and left also. Heck, I was only there because we pwned the very resources you were talking about.
Zero For 0wned
My first security conference was B-Sides Orlando in 2013. Before the conference, I had been hanging out in the #hackucf IRC channel and had known about the event well in advance (and got along with all the organizers and most of the would-be attendees), and considered applying to their CFP.
I ultimately didn’t, solely because I was worried about a ZF0-style reception.
I had no reference frame for other folks’ understanding of cryptography (which is my chosen area of discipline in infosec), and thought things like timing side-channels were “obvious”–even to software developers outside infosec. (Such is the danger of being self-taught!)
“Geeks don’t really come to B-Sides Orlando to be lectured on the application of something simple, with very simple means,” is roughly how I imagined the vitriol would be framed.
If it can happen to me, it can happen to anyone interested in tech. It’s the responsibility of experts and mentors to spare beginners from falling into the trappings of other peoples’ grand-standing.
Pride Before Destruction
With this in mind, let’s return to Shaw. At this point, more clarifying questions came in, this time from Fredrick Brennan.
https://twitter.com/lzsthw/status/1359712275666505734
What an arrogant and bombastic thing to say!
At this point, I concluded that I can never again, in good conscience, recommend any of Shaw’s books to a fledgling programmer.
If you’ve ever published book recommendations before, I suggest auditing them to make sure you’re not inadvertently exposing beginners to his harmful attitude and problematic behavior.
But while we’re on the subject of Zed Shaw’s behavior…
https://twitter.com/lzsthw/status/1359714688972582916
If Shaw thinks of himself as a superior cryptography expert, surely he’s published cryptography code online before.
And surely, it will withstand a five-minute code review from a gay furry blogger who never went through Shaw’s prescribed hazing ritual to rediscover specifically the known problems in OpenSSL circa Heartbleed and is therefore not as much of a cryptography expert?
(Art by Khia.)
May I Offer You a Zero-Day in This Trying Time?
One of Zed A. Shaw’s Github projects is an implementation of SRP (Secure Remote Password)–an early Password-Authenticated Key Exchange algorithm often integrated with TLS (to form TLS-SRP).
Zed Shaw’s SRP implementation
Without even looking past the directory structure, we can already see that it implements an algorithm called TrueRand, which cryptographer Matt Blaze has this to say:
https://twitter.com/mattblaze/status/438464425566412800
As noted by the README, Shaw stripped out all of the “extraneous” things and doesn’t have all of the previous versions of SRP “since those are known to be vulnerable”.
So given Shaw’s previous behavior, and the removal of vulnerable versions of SRP from his fork of Tom Wu’s libsrp code, it stands to reason that Shaw believes the cryptography code he published would be secure. Otherwise, why would he behave with such arrogance?
SRP in the Grass
Heads-up! If you aren’t cryptographically or mathematically inclined, this section might be a bit dense for your tastes. (Art by Scruff.)
When I say SRP, I’m referring to SRP-6a. Earlier versions of the protocol are out of scope; as are proposed variants (e.g. ones that employ SHA-256 instead of SHA-1).
Professor Matthew D. Green of Johns Hopkins University (who incidentally used to proverbially shit on OpenSSL in the way that Shaw expects everyone to, except productively) dislikes SRP but considered the protocol “not obviously broken”.
However, a secure protocol doesn’t mean the implementations are always secure. (Anyone who’s looked at older versions of OpenSSL’s BigNum library after reading my guide to side-channel attacks knows better.)
There are a few ways to implement SRP insecurely:
- Use an insecure random number generator (e.g. TrueRand) for salts or private keys.
- Fail to use a secure set of parameters (q, N, g).
To expand on this, SRP requires q be a Sophie Germain prime and N be its corresponding safe prime; see the sketch after this list. The standard Diffie-Hellman primes (MODP) are not sufficient for SRP. This security requirement exists because SRP requires an algebraic structure called a ring, rather than a cyclic group (as per Diffie-Hellman).
- Fail to perform the critical validation steps as outlined in RFC 5054.
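As a sketch of what “safe parameters” means here: N must be a safe prime, i.e. N = 2q + 1 where q is itself prime (this check assumes sympy is available for primality testing; a real implementation would use vetted group parameters from RFC 5054 rather than checking at runtime):

```python
from sympy import isprime  # assumption: sympy is installed

def is_safe_prime(n: int) -> bool:
    # N is a safe prime when both N and q = (N - 1) // 2 are prime;
    # q is then the Sophie Germain prime that SRP's structure requires.
    return n > 5 and isprime(n) and isprime((n - 1) // 2)
```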
In one way or another, Shaw’s SRP library fails at every step of the way. The first two are trivial:
- We’ve already seen the RNG used by srpmin. TrueRand is not a cryptographically secure pseudo random number generator.
- Zed A. Shaw’s srpmin only supports unsafe primes for SRP (i.e. the ones from RFC 3526, which is for Diffie-Hellman).
The third is more interesting. Let’s talk about the RFC 5054 validation steps in more detail.
Parameter Validation in SRP-6a
Retraction (March 7, 2021): There are two errors in my original analysis.
First, I misunderstood the behavior of SRP_respond() to involve a network transmission that an attacker could fiddle with. It turns out that this function doesn’t do what its name implies.
Additionally, I was using an analysis of SRP3 from 1997 to evaluate code that implements SRP6a. u isn’t transmitted, so there’s no attack here.
I’ve retracted these claims (but you can find them on an earlier version of this blog post via archive.org). The other SRP security issues still stand; this erroneous analysis only affects the u validation issue.
Vulnerability Summary and Impact
That’s a lot of detail, but I hope it’s clear to everyone that all of the following are true:
- Zed Shaw’s library’s use of TrueRand fails the requirement to use a secure random source. This weakness affects both the salt and the private keys used throughout SRP.
- The library in question ships support for unsafe parameters (particularly for the prime, N), which according to RFC 5054 can leak the client’s password.
Salts and private keys are predictable and the hard-coded parameters allow passwords to leak.
But yes, OpenSSL is the real problem, right?
(Art by Khia.)
Low-Hanging ModExp Fruit
Shaw’s SRP implementation is pluggable and supports multiple back-end implementations: OpenSSL, libgcrypt, and even the (obviously not constant-time) GMP.
Even in the OpenSSL case, Shaw doesn’t set the BN_FLG_CONSTTIME flag on any of the inputs before calling BN_mod_exp() (or, failing that, inside BigIntegerFromInt).
As a consequence, this is additionally vulnerable to a local-only timing attack that leaks your private exponent (which is the SHA1 hash of your salt and password). Although the literature on timing attacks against SRP is sparse, this is one of those cases that’s obviously vulnerable.
Exploiting the timing attack against SRP requires the ability to run code on the same hardware as the SRP implementation. Consequently, it’s possible to exploit this SRP ModExp timing side-channel from separate VMs that have access to the same bare-metal hardware (i.e. L1 and L2 caches), unless other protections are employed by the hypervisor.
Leaking the private exponent is equivalent to leaking your password (in terms of user impersonation), and knowing the salt and identifier further allows an attacker to brute force your plaintext password (which is an additional risk for password reuse).
Houston, The Ego Has Landed
Earlier, when I mentioned the black hat hacker group Zero For 0wned and the negative impact of their hostile rhetoric, I omitted an important detail: some of the first words they included in their first ezine.
For those of you that look up to the people mentioned, read this zine, realize that everyone makes mistakes, but only the arrogant ones are called on it.
If Zed A. Shaw were a kinder or humbler person, you wouldn’t be reading this page right now. I have a million things I’d rather be doing than exposing the hypocrisy of an arrogant jerk who managed to bullshit his way into the privileged position of educating junior developers through his writing.
If I didn’t believe Zed Shaw was toxic and harmful to his very customer base, I certainly wouldn’t have publicly dropped zero-days in the code he published while engaging in shit-slinging at others’ work and publicly shaming others for failing to meet arbitrarily specific purity tests that don’t mean anything to anyone but him.
But as Dan Guido said about Time AI:
https://twitter.com/veorq/status/1159575230970396672
It’s high time we stopped tolerating Zed’s behavior in the technology community.
If you want to mitigate impostor syndrome and help more talented people succeed with their confidence intact, boycott Zed Shaw’s books. Stop buying them, stop stocking them, stop recommending them.
Learn Decency the Hard Way
(Updated on February 12, 2021)
One sentiment and question that came up a few times since I originally posted this is, approximately, “Who cares if he’s a jerk and a hypocrite if he’s right?”
But he isn’t. At best, Shaw almost has a point about the technology industry’s over-dependence on OpenSSL.
Shaw’s weird litmus test about whether or not my blog (which is less than a year old) had said anything about OpenSSL during the “20+ years it was obviously flawed” isn’t a salient critique of this problem. Without a time machine, there is no actionable path to improvement.
You can be an inflammatory asshole and still have a salient point. Shaw had no such point, and demonstrated the worst kind of conduct to expose junior developers to if we want to get ahead of the rampant Impostor Syndrome that plagues us.
This is needlessly destructive to his own audience.
Generally the only people you’ll find who outright like this kind of abusive behavior in the technology industry are the self-proclaimed “neckbeards” that live on the dregs of elitist chan culture and desire for there to be a priestly technologist class within society, and furthermore want to see themselves as part of this exclusive caste–if not at the top of it. I don’t believe these people have anyone else’s best interests at heart.
So let’s talk about OpenSSL.
OpenSSL is the Manifestation of Mediocrity
OpenSSL is everywhere, whether you realize it or not. Any programming language that provides a crypto module (Erlang, Node.js, Python, Ruby, PHP) binds against OpenSSL’s libcrypto.
OpenSSL kind of sucks. It used to be a lot worse. A lot of people have spent the past 7 years of their careers trying to make it better.
A lot of OpenSSL’s suckage is because it’s written mostly in C, which isn’t memory-safe. (There’s also some Perl scripts to generate Assembly code, and probably some other crazy stuff under the hood I’m not aware of.)
A lot of OpenSSL’s suckage is because it has to be all things to all people that depend on it, because it’s ubiquitous in the technology industry.
But most of OpenSSL’s outstanding suckage is because, like most cryptography projects, its API was badly designed. Sure, it works well enough as a Swiss army knife for experts, but there’s too many sharp edges and unsafe defaults. Further, because so much of the world depends on these legacy APIs, it’s difficult (if not impossible) to improve the code quality without making upgrades a miserable task for most of the software industry.
What Can We Do About OpenSSL?
There are two paths forward.
First, you can contribute to the OpenSSL 3.0 project, which has a pretty reasonable design document that almost nobody outside of the OpenSSL team has ever read. This is probably the path of least resistance for most of the world.
Second, you can migrate your code to not use OpenSSL. For example, all of the cryptography code I’ve written for the furry community to use in our projects is backed by libsodium rather than OpenSSL. This is a tougher sell for most programming languages–and, at minimum, requires a major version bump.
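To make the "replace" option concrete, here's a quick sketch of what the libsodium approach looks like from Python, via PyNaCl (libsodium's Python binding). This example is mine, not taken from any particular project:

import nacl.secret
import nacl.utils

# One key, one call. Nonce generation and authentication are handled for you.
key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
box = nacl.secret.SecretBox(key)

ciphertext = box.encrypt(b"attack at dawn")  # a random nonce is prepended automatically
plaintext = box.decrypt(ciphertext)          # raises CryptoError if tampered with
assert plaintext == b"attack at dawn"

Compare that to the pile of low-level EVP calls (and footguns) required to do the same thing against libcrypto directly, and the appeal of the misuse-resistant API is obvious.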
Both paths are valid. Improve or replace.
But what's not valid is needlessly shit-slinging open source projects that you're not willing to help. So I refuse to do that.
Anyone who thinks that makes me less of a cryptography expert should feel welcome to not just unfollow me on social media, but to block on their way out.
https://soatok.blog/2021/02/11/on-the-toxicity-of-zed-a-shaw/
#author #cryptography #ImpostorSyndrome #PAKE #SecureRemotePasswordProtocol #security #SRP #Technology #toxicity #vuln #ZedAShaw #ZeroDay
Sometimes my blog posts end up on social link-sharing websites with a technology focus, such as Lobste.rs or Hacker News. On a good day, this presents an opportunity to share one's writing with a larger audience and, more importantly, solicit a wider variety of feedback from one's peers.
However, sometimes you end up with feedback like this, or this:
Apparently my fursona is ugly, and therefore I’m supposed to respect some random person’s preferences and suppress my identity online.
I’m no stranger to gatekeeping in online communities, internet trolls, or bullying in general. This isn’t my first rodeo, and it won’t be my last.
These kinds of comments exist to send a message not just to me, but to anyone else who’s furry or overtly LGBTQIA+: You’re weird and therefore not welcome here.
Of course, the moderators rarely share their views.
https://twitter.com/pushcx/status/1281207233020379137
Because of their toxic nature, there is only one appropriate response to these kinds of comments: Loud and persistent spite.
So here’s some more art I’ve commissioned or been gifted of my fursona over the years that I haven’t yet worked into a blog post:
Art by kazetheblaze
Art by leeohfox
Art by Diffuse Moose
If you hate furries so much, you will be appalled to learn that factoids about my fursona species have landed in LibreSSL's source code (decoded).
Never underestimate furries, because we make the Internets go.
I will never let these kinds of comments discourage me from being open about my hobbies, interests, or personality. And neither should anyone else.
If you don’t like my blog posts because I’m a furry but still find the technical content interesting, know now and forever more that, when you try to push me or anyone else out for being different, I will only increase the fucking thing.
Header art created by @loviesophiee and inspired by floccinaucinihilipilification.
https://soatok.blog/2020/07/09/a-word-on-anti-furry-sentiments-in-the-tech-community/
#antiFurryBullying #cyberculture #furry #HackerNews #LobsteRs #Reddit
There are two news stories today. Unfortunately, some people have difficulty uncoupling the two.
- The Team Fortress 2 Source Code has been leaked.
- Hackers discovered a Remote Code Execution exploit.
The second point is something to be concerned about. RCE is game over. The existence of an unpatched RCE vulnerability, with public exploits, is sufficient reason to uninstall the game and wait for a fix to be released. Good on everyone for reporting that. You’re being responsible. (If it’s real, that is! See update at the bottom.)
The first point might explain why the second happened, which is fine for the sake of narrative… but by itself, a source code leak is a non-issue that nobody in their right mind should worry about from a security perspective.
Anyone who believes they’re less secure because the source code is public is either uninformed or misinformed.
I will explain.
Professor Dreamseeker is in the house. Twitch Emote by Swizz.
Why Source Code Leaks Don’t Matter for Security
You should know that, throughout my time online as a furry, I have been awarded thousand dollar bounties through public bounty programs.
How did you earn those bounties?
By finding zero-day vulnerabilities in those companies’ software.
But only some of those were for open source software projects. CreditKarma definitely does not share their Android app’s source code with security researchers.
How did you do it?
I simply reverse engineered their apps using off-the-shelf tools, and studied the decompiled source code.
Why are you making that sound trivial?
Because it is trivial!
If you don’t believe me, choose a random game from your Steam library.
Right click > Properties. Click on the Local Files tab, then click “Browse Local Files”. Now search for a binary.
Me, following these steps to locate the No Man’s Sky binary.
If your game is a typical C/C++ project, you’ll next want to install Ghidra.
Other platforms have their own respective tools. For example, if you see a bunch of HTML and JS files, you can literally use beautifier.io to make the code readable.
Open your target binary in the appropriate reverse engineering software, and you can decompile the binary into C/C++ code.
Decompiled code from No Man’s Sky’s NMS.exe file on Windows.
Congratulations! If you’ve made it this far, you’re neck-and-neck with any attacker who has a leaked copy of the source code.
Every Information Security Expert Knows This
Almost literally everyone working in infosec knows that keeping a product’s source code a secret doesn’t actually improve the security of the product.
There’s a derisive term for this belief: Security Through Obscurity.
The only people whose job will be made more difficult with the source code leak are lawyers dealing with Intellectual Property (IP) disputes.
In Conclusion
Remote Code Execution is bad.
The Source Code being public? Yawn.
Pictured: Soatok trying to figure out why people are worried about source code disclosure when he publishes everything publicly on Github anyway (2020). Art by Riley.
Update: Shortly after I made this post, I was made aware of another news story worthy of everyone’s attention far more than FUD about source code leaks.
With the Source leaks happening today, I think everyone is missing the most important part: how much does Valve swear? I tallied up instances of these words in the leak*:
"fuck": 116
"shit": 63
"damn": 109
*There was some non-Valve stuff in the leak; I didn't count it
— @tj (@tjhorner) April 22, 2020
Well damn if that doesn’t capture my interest.
Now this is the kind of story that makes Twitter worthwhile!
Is the RCE Exploit Even Real?
Update 2: I’ve heard a lot of reports that the alleged RCE exploit is fake. I haven’t taken the time to look at Team Fortress 2 or CS:GO in any meaningful way, but the CS:GO team did have this to say about the leaks:
We have reviewed the leaked code and believe it to be a reposting of a limited CS:GO engine code depot released to partners in late 2017, and originally leaked in 2018. From this review, we have not found any reason for players to be alarmed or avoid the current builds.— CS2 (@CounterStrike) April 22, 2020
Fake news and old news are strange (yet strangely common) bedfellows.
https://soatok.blog/2020/04/22/source-code-leak-is-effectively-meaningless-to-endpoint-security/
#commonSense #informationSecurity #infosec #misinformation #reverseEngineering #security #securityThroughObscurity #sourceCode
https://thecopenhagenbook.com/
The 30-year-old internet backdoor law that came back to bite
News broke this weekend that China-backed hackers have compromised the wiretap systems of several U.S. telecom and internet providers, likely in an effort to gather intelligence on Americans.
The wiretap systems, as mandated under a 30-year-old U.S. federal law, are some of the most sensitive in a telecom or internet provider’s network, typically granting a select few employees nearly unfettered access to information about their customers, including their internet traffic and browsing histories.
But for the technologists who have for years sounded the alarm about the security risks of legally required backdoors, news of the compromises is the "told you so" moment they hoped would never come but knew one day would.
“I think it absolutely was inevitable,” Matt Blaze, a professor at Georgetown Law and expert on secure systems, told TechCrunch regarding the latest compromises of telecom and internet providers.
Fact is, any intentional backdoor is not going to be secure. Secrets don’t remain secret. That is just the way things are, and more so if more than one person knows about it.
“There’s no way to build a backdoor that only the ‘good guys’ can use,” said Signal president Meredith Whittaker, writing on Mastodon.
The theory around backdoors comes from the same era as changing your password every 30 days. Times have changed, and we should know better in 2024.
See techcrunch.com/2024/10/07/the-…
#Blog, #backdoors, #security, #technology
Who owns your shiny new #Pixel 9 #phone? You can’t say no to #Google’s #surveillance
Source: https://cybernews.com/security/google-pixel-9-phone-beams-data-and-awaits-commands/
Every 15 minutes, #GooglePixel 9 Pro XL sends a data packet to Google. The device shares #location, email address, phone number, #network status, and other #telemetry. Even more concerning, the phone periodically attempts to download and run new code, potentially opening up #security risks...
Don't be a data cow 🐮 on Google's server farm 👎
#tracking #fail #bigbrother #orwell #economy #online #Problem #news #Smartphone #android #bigdata #datacow
🔴 The agenda for #JesienLinuksowa is now available! 🔴
On the program: #DevOps, #security, #gaming, #prywatność (privacy), and more!
Special guests: Kuba Mrugalski (@uwteam), Tomasz Zieliński (@infzakladowy)
Also: 💬 Unconference ⚡ Lightning Talks 🎉 Fedora release party!
See you there! 🥳
The French Detention: Why We're Watching the Telegram Situation Closely
#endtoendencryption #freespeech #security #privacy #electronicfrontierfoundation #eff #digitalrights #digitalprivacy
posted by pod_feeder_v2
The French Detention: Why We're Watching the Telegram Situation Closely
EFF is closely monitoring the situation in France in which Telegram's CEO Pavel Durov was charged with having committed criminal offenses, most of them seemingly related to the operation of Telegram. — Electronic Frontier Foundation
Do you want to help make software safer? Find the bugs in our ntpd-rs!
The ntpd-rs Bug Bounty Program offers a reward to anyone who finds a qualifying vulnerability.
Read the details here: https://yeswehack.com/programs/pendulum-bug-bounty-program
This Bug Bounty Program is organized and funded by @sovtechfund . Read more about this initiative here: https://www.sovereigntechfund.de/programs/bug-resilience/
ntpd-rs Bug Bounty Program — YesWeHack
ntpd-rs Bug Bounty Program details — YesWeHack, the #1 Bug Bounty Platform in Europe
#Amsterdam municipality bans #Telegram on work phones over criminal use, #espionage #threat
Telegram is a “safe haven for hackers, cybercriminals, and drug dealers,” a spokesperson for Amsterdam’s IT alderman Alexander Scholtes told the broadcaster. The city is also concerned about possible espionage through the app, even though it no longer has official ties to #Russia. Telegram was set up in Russia, but the head office has since moved to #Dubai, and the #company is officially located in the Virgin Islands.
#news #software #messenger #crime #cybercrime #cybersecurity #security #problem #Netherlands #hack #hacker
Amsterdam municipality bans Telegram on work phones over criminal use, espionage threat
The municipality of Amsterdam has banned its civil servants from using the chat app Telegram on their work phones over the criminal activities on the app and the risk of espionage, a spokesperson confirmed to BNR. — NL Times
Software, Update, Microsoft
Here is the #solution for this #problem: news.itsfoss.com/windows-break… #windows #update #microsoft #help #os #software #windows #fail
After #Windows #Update on dual boot systems: Verifying shim #SBAT data failed: #Security Policy Violation.
Source: https://askubuntu.com/questions/1523438/verifying-shim-sbat-data-failed-security-policy-violation
1) Disable Secure Boot in BIOS
2) Log into your Ubuntu user and open a terminal
3) Delete the SBAT policy with: sudo mokutil --set-sbat-policy delete
4) Reboot your PC and log back into Ubuntu to update the SBAT policy
5) Reboot and then re-enable secure boot in your BIOS.
#help #Linux #Microsoft #fail #Software #boot #os
Verifying shim SBAT data failed: Security Policy Violation
It seems that the recent Windows Update, on systems that have dual-boot, is not letting grub start, showing the message: Verifying shim SBAT data failed: Security Policy Violation. Does anyone... — Ask Ubuntu
Question for Unix/Linux/Android, is there a login that the password determines the user?
Example: a special password used under duress with the authorities over my shoulder demanding access, they get into the prepared account. If my usual password is entered, the system logs me into my normal account with all my gay. And a third "self destruct" password does a rm -rf in the background while a forever static login screen is displayed.
I'm surprised I've never seen this hack done yet...
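For what it's worth, here's a rough sketch of the core idea in Python. This is purely illustrative (a real implementation would live in a PAM module, and every account name and password below is made up):

import hashlib
import hmac

SALT = b"per-user-random-salt"  # placeholder; use a real random salt in practice

def digest(password: str) -> bytes:
    # scrypt is a memory-hard KDF shipped in Python's standard library
    return hashlib.scrypt(password.encode(), salt=SALT, n=2**14, r=8, p=1)

# The same login name maps to different profiles depending on the password given.
PROFILES = {
    digest("correct horse"): "normal",   # everyday account
    digest("battery staple"): "decoy",   # sanitized account shown under duress
    digest("self destruct"): "wipe",     # kick off a background wipe, show a fake login
}

def login(password: str) -> str:
    attempt = digest(password)
    for known, profile in PROFILES.items():
        if hmac.compare_digest(attempt, known):  # constant-time comparison
            return profile
    return "denied"

print(login("battery staple"))  # -> decoy

The tricky part isn't the password-to-profile lookup; it's making the decoy environment indistinguishable from the real one under forensic examination.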
#security #RubberHoseSecurity
Second Factor #SMS: Worse Than Its Reputation
Source: https://www.ccc.de/en/updates/2024/2fa-sms
IdentifyMobile, a provider of 2FA-SMS, shared the sent one-time passwords in real-time on the internet. The #CCC happened to be in the right place at the right time and accessed the data. It was sufficient to guess the subdomain "idmdatastore". Besides SMS content, recipients' phone numbers, sender names, and sometimes other account information were visible.
#news #security #internet #2fa #mobile #cybersecurity #problem #password
So now that we all understand that thanklessly relying on free work of overworked maintainers is a problem, how about we put our money where our mouth is?
I think @AndresFreundTec needs a fat bonus check for saving our asses.
And Lasse Collin needs a lot of support, and probably a nice vacation.
I pledge $100, for starters.
Now how can we make sure to send the funds to the correct people?
Or is there already any fundraiser that I missed?
„GitHub Disables The XZ Repository Following Today's Malicious Disclosure“
https://www.phoronix.com/news/GitHub-Disables-XZ-Repo
GitHub Disables The XZ Repository Following Today's Malicious Disclosure
Today's disclosure of XZ upstream release packages containing malicious code to compromise remote SSH access has certainly been an Easter weekend surprise. — www.phoronix.com
Millions Of #google #whatsapp #Facebook #2FA #Security Codes #Leak Online
Security experts advise against using SMS messages for two-factor authentication codes due to their vulnerability to interception or compromise. Recently, a security researcher discovered an unsecured database on the internet containing millions of such codes, which could be easily accessed by anyone.
#news #tech #technews #technology #privacy
Millions Of Google, WhatsApp, Facebook 2FA Security Codes Leak Online
A security researcher has discovered an unsecured database on the internet containing millions of two-factor authentication security codes. Here's what you need to know. — Davey Winder (Forbes)
Over 100,000 Infected Repos Found on GitHub
https://apiiro.com/blog/malicious-code-campaign-github-repo-confusion-attack/
Over 100,000 Infected Repos Found on GitHub
The Apiiro research team has detected a repo confusion campaign that has evolved and expanded, impacting over 100k GitHub repos with malicious code. — Gil David (Apiiro)
Why I use #Firefox
- The about:config page
- Mozilla cannot decrypt my data on their servers
- Translating web pages is also completely private
- Mozilla develops their own browser engine
- The best support for extensions on #Android
- A great picture-in-picture player
I #trust #Mozilla more than I trust #Google, #Apple, #Microsoft, or any other company that makes #web browsers. This trust is based on the fact that Mozilla chooses the highest level of user privacy when developing services such as Firefox Sync, Firefox Translate, and others. A web browser is an integral part of a person’s #online life, so it makes sense to choose a #browser from a company that one trusts the most.
source: https://šime.eu/3
#software #freedom #opensource #foss #floss #internet #privacy #security #www #surfing
Y'all know not to use #Temu right? Right???
Temu app contains ‘most dangerous’ #spyware in circulation: class action lawsuit | Fashion Dive
https://www.fashiondive.com/news/temu-class-action-lawsuit-data-collection/699328/
Temu app contains ‘most dangerous’ spyware in circulation: class action lawsuit
The complaint alleges that the fast fashion giant gains access to "literally everything on your phone" once its app is downloaded. — Fashion Dive
In ads: Our apps mind their business. Not yours.
In court: Given Apple’s extensive privacy disclosures, no reasonable user would expect that their actions in Apple’s apps would be private from Apple.
#Privacy #Security #Cybersecurity #Apple #iPhone #InfoSec #dataprivacy
But to be fair ...
Is the implementation language really the main issue? Or is it the flexibility of extending it with plugins, and the fact that it is effectively a setuid tool, granting root access immediately when an unprivileged user starts the program? (Privileges are only reduced after it has parsed the sudo config.)
Sudo is a nice tool from the user's side. But security-wise it's a disastrous approach. Privileges should only be elevated *after* the config has been parsed and the expected privilege level has been established. Then the tool should ideally jump to that privilege level directly.
This post introduces some new ideas ... https://tim.siosm.fr/blog/2023/12/19/ssh-over-unix-socket/
It's not a perfect approach in all cases. But it gets rid of the setuid issue.
sudo without a setuid binary or SSH over a UNIX socket
In this post, I will detail how to replace sudo (a setuid binary) by using SSH over a local UNIX socket. I am of the opinion that setuid/setgid binaries are a UNIX legacy that should be deprecated. — Siosm's blog
A controversial developer circumvented one of Mastodon’s primary tools for blocking bad actors, all so that his servers could connect to Threads.
Authorized Fetch Circumvented by Alt-Right Developers
We’ve criticized the security and privacy mechanisms of Mastodon in the past, but this new development should be eye-opening. Alex Gleason, the former Truth Social developer behind Soapbox and Rebased, has come up with a sneaky workaround to how Authorized Fetch functions: if your domain is blocked for a fetch, just sign it with a different domain name instead.
How did this happen?
Gleason was originally investigating Threads federation to determine whether or not a failure to fetch posts indicated a software compatibility issue, or if Threads had blocked his server. After checking some logs and experimenting, he came to a conclusion.
“Fellas,” Gleason writes, “I think threads.net might be blocking some servers already.”
What Alex found was that Threads attempts to verify domain names before allowing access to a resource, a very similar approach to what Authorized Fetch does in Mastodon.
You can see Threads fetching your own server by looking at the `facebookexternalua` user agent. Try this command on your server:
`grep facebookexternalua /var/log/nginx/access.log`
If you see logs there, that means Threads is attempting to verify your signatures and allow you to access their data.
This one weird trick allowed him to verify that, while his personal instance wasn’t blocked, more than a few of his communities were: Spinster, Neenster, Poast, and the Mostr Bridge are all reportedly blocked domains. While Alex isn’t directly involved in all of these projects, they have benefited from his development and support, providing spaces for bigoted speech to grow and spread.
What’s interesting is that Threads itself has been reportedly lax on policies pertaining to transphobia and hate speech, so the blocks are something of a surprise. Accounts such as Libs of Tiktok remain active, widely followed, and unbanned on Threads.
Block Evasion
To get around the block, Alex found that it’s possible to sign fetch requests with a different domain name entirely, using an A record that points back to the receiving instance.
Meta seems to be betting on the fact that people have played nicely in the past, but I for one am not going to let them have their way. I am going to ensure the data they publish remains free and open to all…
Tools to work around Authenticated fetch are being shipped with new versions of Fediverse software. Censorship by Meta will create a continued need for this industry to grow.
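To make the mechanics concrete, here is a rough sketch (mine, not Gleason's actual code) of how a fetch signature gets built under the draft-cavage HTTP Signatures scheme that Mastodon uses. Every domain and path below is hypothetical:

import base64
from datetime import datetime, timezone

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sign a GET for an actor on the server that blocked us.
date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
signing_string = (
    "(request-target): get /users/alice\n"
    "host: target.example\n"
    f"date: {date}"
)
signature = key.sign(signing_string.encode(), padding.PKCS1v15(), hashes.SHA256())

# The keyId can point at ANY domain serving this key's public half --
# including a throwaway domain whose A record resolves to the same box.
key_id = "https://innocuous-domain.example/actor#main-key"

header = (
    f'keyId="{key_id}",algorithm="rsa-sha256",'
    'headers="(request-target) host date",'
    f'signature="{base64.b64encode(signature).decode()}"'
)
print("Signature:", header)

The receiver fetches the public key from the keyId's domain, the signature verifies, and the blocklist never gets a match. That's the whole trick.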
While this is being framed as a freedom of access / freedom of speech issue, in an almost David vs Goliath kind of fight, the real problem here is that there’s now an established way to circumvent the flimsy user protection that Mastodon popularized, which is really bad for the vulnerable communities using it.
What Now?
Look, Mastodon has been providing a half-measure to its users for years. Now it's time to make things right: going into 2024, I think it's going to absolutely be a requirement to develop more robust forms of privacy options and access controls to empower users.
Bonfire is doing an incredible amount of research focused on this very problem, and Spritely has put forward some groundbreaking work on Object Capabilities in the recent past.
#AlexGleason #AuthorizedFetch #Security
https://wedistribute.org/2023/12/authorized-fetch-circumvented/
A lot of people make up all kinds of wild assumptions about Mastodon, how it works, and what it is. We're here to help clear up some of the biggest ones.
Debunking the Top 10 Myths About Mastodon
We have to give credit where credit is due: Mastodon brought life to the Fediverse and opened up the space for many people. As a platform, it’s been transformative for federated social networking, bringing millions of active users, hundreds of apps, and many new platforms to the network. The network couldn’t have grown without it.
Here’s the thing, though: there are a lot of myths and rumors swirling around within the Mastodon userbase that either misunderstand or greatly fabricate information about the platform. In the interest of correcting the record on a large number of things, we’ve come up with a list of the most pervasive Mastodon Myths.
Table of Contents
- Myth #10: Mastodon doesn’t have algorithms, because algorithms are bad
- Myth #9: Mastodon is the same thing as the Fediverse
- Myth #8: There are no Nazis on Mastodon
- Myth #7: Mastodon should avoid features of popular social networks, because they’re abuse vectors.
- Myth #6: Mastodon respects your privacy, and is ideal for secure communication
- Myth #5: If you’re on a bad server, you can easily move to a good one
- Myth #4: Mastodon Federation basically works like email.
- Myth #3: Mastodon is so much nicer than other places!
- Myth #2: Mastodon is ActivityPub-Compliant
- Myth #1: Mastodon is Easy to Use!
Myth #10: Mastodon doesn’t have algorithms, because algorithms are bad
Myth: Mastodon's timelines are better, because they don't have algorithms influencing what you see. Instead, you just see posts in chronological order, as your account becomes aware of new posts.
Fact: This myth is complicated because it conflates several different things together. When people talk about social algorithms, they're typically referring to the black boxes that Facebook and Twitter use to drive engagement. There's a negative emphasis because it's a practice done by "bad" networks to:
- keep people on their platforms for longer and longer
- push users further into bubbles that reinforce their own views
- provide malleable content streams that can control social narratives.
The thing is, none of these things describe what an algorithm even is. Worse, this lack of understanding leads people to assume that Mastodon has no algorithms at all.
What is an algorithm?
The Geeks for Geeks blog has a great tidbit from their article Introduction to Algorithms:
The word Algorithm means "A set of finite rules or instructions to be followed in calculations or other problem-solving operations"
Or
"A procedure for solving a mathematical problem in a finite number of steps that frequently involves recursive operations".
— What Is An Algorithm? (GeeksforGeeks)
In a nutshell, it's a process that follows some steps to produce an output, most often with data. It is not a mysterious black-box procedure.
How does Mastodon use algorithms?
Believe it or not, the chronological feed Mastodon provides uses a very simple algorithm: sort posts in this timeline based on the timestamp indicating when a post was written.
ALGORITHMS!!!
These days, Mastodon actually has more algorithms, such as the one that powers Trending Posts and Mastodon’s feed of trending News Articles. All they’re really doing is running stats on how much a thing gets likes or activity, then showing what’s popular within a window of time.
Believe it or not, algorithms.
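For the skeptical, here's roughly the shape of both of those in Python. This is a sketch of the concept, not Mastodon's actual code:

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    author: str
    text: str
    created_at: datetime
    likes: int = 0

def home_timeline(posts: list[Post]) -> list[Post]:
    # The "no algorithm" chronological feed: sort by timestamp, newest first.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def trending(posts: list[Post], window: timedelta = timedelta(days=1)) -> list[Post]:
    # Trending posts: restrict to a recent window, rank by activity.
    cutoff = datetime.now() - window
    recent = [p for p in posts if p.created_at >= cutoff]
    return sorted(recent, key=lambda p: p.likes, reverse=True)

Both are algorithms; one of them just happens to be boring.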
The thing is, blaming algorithms for the sins of large corporate platforms ignores the fact that the tool itself was harmless. Algorithms aren't any more evil than an abacus or a typewriter is. When people are given power over their own platforms, they can even leverage these tools to their own advantage.
Verdict: Algorithms are neither good nor bad, they're just tools for sorting data. Regardless, Mastodon actually makes use of algorithms a lot more often than you might think, and these things could actually be really helpful in assisting user discovery in the Fediverse.
Myth #9: Mastodon is the same thing as the Fediverse
Myth: It's okay to just refer to the Fediverse as "Mastodon", because it makes up the biggest part of the network, and most of the people I follow just use that.
Fact: The Fediverse is bigger than just Mastodon, and it's much older, too. Mastodon is just one platform in a network consisting of over 80 different platforms in various states of development.
While Mastodon is still the most popular, there are a number of alternatives that are catching up in terms of adoption. Misskey and Lemmy take up the second and third spot, respectively, and neither PeerTube nor Pixelfed are slouches in their positions, either.
Source: FediDB
Some people will ridicule this correction as being like the “GNU/Linux Copypasta“, in the sense that some other party is whining about not getting credit. But the fact of the matter is, the network is being built by more people than just Eugen Rochko. It’s a collective effort of thousands of people.
How is the Fediverse defined?
There's been some discussion over the years as to what things are considered "part of the Fediverse". My favorite explanation comes from Wikipedia:
The fediverse (a portmanteau of "federation" and "universe") is an ensemble of social networks, which, while independently hosted, can communicate with each other. ActivityPub, a W3C standard, is the most widely used protocol that powers the fediverse. Users on different websites can send and receive updates from others across the network.
— Wikipedia, Entry for the Fediverse
That being said, there are three distinct positions that can be taken on what things constitute as being “Part of the Fediverse”:
- Functional Fundamentalism: “The Fediverse is comprised of federated social platforms that use common protocols to communicate! Doesn’t matter which protocol, as long as it’s social.”
- Protocol Fundamentalism: “The Fediverse is comprised of federated social platforms that specifically use ActivityPub! If you don’t interoperate, you’re not part of it!”
- Functional-Protocol Nihilism: "The Fediverse is anything that federates! XMPP is part of the Fediverse! Email is part of the Fediverse! Fidonet is part of the Fediverse. It doesn't matter if any of it interoperates, or is even social, it's all part of the Fediverse."
The debates rage on, but one thing is for certain: whatever this thing is, it isn’t just one microblogging platform made by a dude in Germany.
Verdict: Referring to "the Fediverse" as Mastodon is like calling the ocean a fish. Just as a fish might be one part of the ocean, Mastodon is just one part of the network.
Myth #8: There are no Nazis on Mastodon
Myth: Mastodon was intended to be "Twitter without the Nazis", and there definitely aren't any Nazis now.
Fact: Being part of a federated, decentralized network where server operators can set whatever rules they want, it's no surprise that part of the network hosts white supremacists, Neo-Nazis, and far right dissidents producing disgusting amounts of hate speech and racist propaganda. Some of these communities existed on the network way before Mastodon was even a thing.
The easiest way to find actual bonafide nazis on the fediverse is to look at Pieville. Pieville is an instance operated by people associated with StormFront, a self-described "White Nationalist Community." Users openly share videos and messages from key people in the white nationalist movement, such as Billy Roper and William Pierce. Other neo-nazi figures like Alex Linder have an account there. Oh, and Pieville runs Mastodon v2.7.4 at present time of writing.
— Ariadne Conill, The Fediverse, or Shitpost Ergo Sum Ego Sum
Several sites notoriously ran their own Mastodon forks: Gab and Truth Social adopted it at one point, and Spinster, Poast, and Kiwifarms technically still use frontend software that was forked from Mastodon's UI. Sure, that's the nature of open source software. If an extremist installs WordPress and uses it to post hate speech, it's not WordPress's fault specifically. But, it does mean that we have to take into account that some parts of the network are like this, and act accordingly.
Wait, how do I avoid the Nazis?
While a big part of the network blocks those servers to limit their reach, it doesn't mean that those communities don't exist. If your instance doesn't proactively take a stance to filter them out, there's a sizeable chance you may just run across them.
There are some really interesting initiatives out there trying to develop solutions. Oliphant has a tiered system of site listings, ranging from "just a bit too edgy" to "these people post gore and send death threats." The Bad Space is trying to collect and evaluate listings shared within a ring of trusted servers, with Composable Moderation being the ultimate goal. Fedifence and IFTAS are trying to offer comprehensive resources to moderators and admins to make the process easier to deal with.
Verdict: there are actually a lot of Nazis on the Fediverse, some of which even use Mastodon. Several pieces of Mastodon’s own code (server backend, client frontend) have been adopted by these communities.
Myth #7: Mastodon should avoid features of popular social networks, because they’re abuse vectors.
Myth: Some people want to see Mastodon adopt things like Quote Tweets and Full-Text Search, but they shouldn't because those are used to harass people on the network.
Fact: Around this time last year, Twitter users migrated to Mastodon en masse in response to Elon Musk's acquisition of the platform. As a side effect, many of these new Mastodon users asked: why is search so broken? Why don't we have quote toots? Why do I have to CW everything?
How did people respond?
The response to this from some long-time Mastodon users was overwhelmingly negative. A lot of people made statements like the following:
- Full-Text Search: Mastodon doesn’t offer full-text search, because it could be a vector for abuse! A harasser could just look up whatever public statuses their victims post. Removing this protects users.
- Quote Toots: Quoting other users on Twitter is often done in a very passive-aggressive manner, and can be incredibly toxic for user interactions. We don’t want to be like Twitter in this regard.
- Not Using Content Warnings: you should be mindful of how many people live vastly different lifestyles than you do. It’s disrespectful to make assumptions that your posts won’t be triggering for someone. The responsibility for Content Warnings should always be on the poster, not the reader.
A lot of new users read this, tried their best to deal with it, and eventually decided that Mastodon wasn’t for them. Many people were tone-policed for describing their own lived experiences with racism, queerphobia, and abuse.
Many of these hostilities led users to equate Mastodon with a Homeowners Association, in which rude and nosy neighbors freely critiqued even the most minor behaviors as faux pas. It’s not an entirely unfair statement, given that people were described as being affected by “Twitter Influencer Mind Rot” for simply asking about these things.
What does it look like in practice?
If we actually look at Full-Text Search, Quote Toots, and Mastodon critically, a different picture emerges: Mastodon's privacy and consent mechanisms absolutely suck, and the platform has relied on user features being broken for years as a way to gloss over it. What's also particularly telling is that quite a few platforms had both of these features for years: Friendica, Pleroma, and Misskey have all largely benefited from it.
Ironically, one of the biggest actual attack vectors of abuse has been Private Mentions. Death threats, sexual advances, and other "fun" kinds of interactions are often done privately, in a way that maintains deniability between victim and harasser.
Verdict: People are generally resistant to change, and apprehensive towards things that might fundamentally shift social dynamics for the worse. Most of what people were afraid of with Full-Text Search and Quote Tweets were already present with Private Mentions, and largely boil down to Mastodon’s limitations in how user consent is factored in.
Myth #6: Mastodon respects your privacy, and is ideal for secure communication
Myth: Mastodon is a privacy-first platform. You can be confident that nobody can access your private messages or posts.
Fact: This idea is, unfortunately, completely out of touch with reality. While Mastodon offers some privacy options for statuses and messages, those provisions are paper-thin at best.
Let’s talk about privacy scopes. Mastodon has four of them:
- Public – anybody can see your post, boost it, or respond to it.
- Unlisted – Same as above, but it doesn’t show up in timelines.
- Followers Only – Only your followers can see your status or respond to it. Nobody can boost your post.
- Private Mention – Only you and the people you mention can see the post.
There are some significant problems with the above options. I’m going to break them down into two buckets: problems where the scope is too broadly defined, and problems where access levels are confusing.
Scoping Problems
The first issue here is that the privacy scopes, if you can even call them that, are all over the place:
- Two of the scopes, Public and Unlisted, basically lets anybody do whatever they want with your statuses. These actually have nothing to do with privacy, and everything to do with which timelines a post shows up in.
- Followers Only basically addresses everyone that follows you, with no granularity whatsoever. I can’t just pick out a collection of my mutuals and talk with them privately about something, without it being a long Private Mention with a lot of names in the message body somewhere.
- Private Mentions are their own strange beast, as it’s sometimes unclear who is actually privy to the conversation. More than a few times, I’ve seen people get accidentally mentioned in private gossip that was about them, because people thought Private Mentions worked like Twitter DMs. They don’t. It’s horribly awkward. Don’t do that.
Access Problems
There are also some really weird caveats over who has access to something. Again, Public and Unlisted are basically the same levels of user access, just with different visibility rules. If you post a private status with Followers Only, anybody that follows you after the fact can see it, effectively circumventing the privacy. If you do some personal correspondence with Private Mention, there's nothing stopping your admin from reading it in the database.
Verdict: Mastodon is great as a public forum, and decent for semi-private posts. However, Mastodon isn't very good when it comes to privacy provisions, and should never be used to exchange truly sensitive information.
Myth #5: If you’re on a bad server, you can easily move to a good one
Myth: Because Mastodon allows you to move accounts, you can always migrate to a different server. If you're having a bad time, you can take your data somewhere else!
Fact: In theory, this is a great idea. In practice, it's a hot mess. There are two problems here:
1. Server Availability – The #1 Achilles Heel in this situation is that account migration doesn’t work if your server is down. Either you can’t get to the export screen to download your data, or your original server isn’t around to import data from.
2. Connection Availability – The second significant issue is this: users can’t cross connection boundaries, which often happens when one server defederates from another. If Server A and Server B get into a dispute and block each other, users on one of those servers won’t be able to directly migrate to the other place.
Heck, we’re just talking about moving from one server to another at this point. If you get banned from an instance, there’s a big chance that you can’t even download your own data or perform a migration. Some variations of banning allow a user to log in, access their own things, and maybe make an appeal to the moderators. More often than not, though, it’s easier to just delete the account entirely.
Even when everything works, the experience can be really flaky. Erin Kissane has this to say about her migration experience:
If it weren't so difficult to understand how to choose a server to begin with, the downsides of migration would sting less, but it is so hard to know if you've found the right (for many varied values of right) server until you're already settled in—by which time you've built up posts and conversations you may not be delighted to lose.
— Erin Kissane, Notes From a Mastodon Migration
Verdict: User migration is a really good idea. When it works well, people are happy. The problem is that it's not actually usable in a number of circumstances. I'm actually writing an article on my personal blog about what could be done to actually make this experience better, because there's a lot we could do.
Myth #4: Mastodon Federation basically works like email.
Myth: ActivityPub federation, which Mastodon uses, is email-like. Therefore, email is a useful metaphor for understanding the Fediverse.
Fact: If you squint, it kind of makes sense. ActivityPub stipulates that users have an inbox and an outbox, which send and receive things. However, the similarities end there.
I mean, it sounds like email?
Technical Differences
Mastodon servers use a push-pull mechanism for dispatching posts and bringing in interactions. Everything you do in Mastodon is handled through this mechanism. Instead of an email message, though, what's actually being sent are JSON payloads, which are sent to a server's sharedInbox, then disseminated to a user's Inbox.
The best way to understand anything in ActivityPub-land is that users are performing activities on objects:
[Actor] + [Activity] + [Object]
This gets interpreted as things like:
- [Jim] [Checked In] at [McDonalds]
- [Frank] [Watched] [Terminator 2]
- [Fred] [favorited] [“I Want it That Way” by The Backstreet Boys]
All of this gets interpreted through the ActivityStreams 2.0 Vocabulary, which is a whole other document that needs to be known about prior to implementing ActivityPub. Actor verbs an object, then sends it via their Outbox to a collection of people.
Let’s say that I created this status, represented in JSON:
{"@context": "https://www.w3.org/ns/activitystreams", "type": "Create", "id": "https://firefish.tech/sean/posts/9282e9cc-14d0-42b3-a758-d6aeca6c876b", "to": ["https://firefish.tech/sean/followers/", "https://www.w3.org/ns/activitystreams#Public"], "actor": "https://firefish.tech/sean", "object": {"type": "Note", "id": "https://firefish.tech/sean/posts/d18c55d4-8a63-4181-9745-4e6cf7938fa1", "attributedTo": "https://firefish.tech/sean/", "to": ["https://firefish.tech/sean/followers/", "https://www.w3.org/ns/activitystreams#Public"], "content": "Oh man, One Punch Man is such a great anime!"}}
It's just a note that I posted to my Followers collection, as well as the Public collection to define the privacy scope. Then, @bob@mastodon.social sends me a reply:
{"@context": "https://www.w3.org/ns/activitystreams", "type": "Create", "id": "https://mastodon.social/bob/d74d44q5-2p34-6431-8421-3s9ed1623brd", "to": ["https://firefish.tech/sean/", "https://www.w3.org/ns/activitystreams#Public"], "actor": "https://mastodon.social/bob", "object": {"type": "Note", "id": "https://mastodon.social/bob/posts/f25j22f3-5h13-3422-5632-8m7dp4530pej", "attributedTo": "https://mastodon.social/bob/", "to": ["https://firefish.tech/sean/"], "inReplyTo": "https://firefish.tech/sean/posts/49e2d03d-b53a-4c4c-a95c-94a6abf45a19", "content": "Dude, you have no idea what you're talking about."}}
Here's what you're actually looking at: user Sean created an object called a Note. Bob created a status that's also a Note, containing an inReplyTo pointer that references the original post and its ID. It's also a Public status shared with his Followers collection.
Social Differences
There are also some significant social differences to take into account. The biggest thing to understand is that different Mastodon instances have different rules. Software other than Mastodon is capable of sending more than just microblogging statuses and likes.
Regardless of semantics, what's being constructed is actually a public or private conversation that can be fetched from a URL as a resource. Email focuses more on the exchange of messages (text or HTML) between servers in a manner where the resource generally can't exist publicly. You can't use webfinger to pull in an external email conversation to your Thunderbird client. In fact, if you're not using a mainstream email platform like Gmail or Outlook, the manner in which messages in conversations get threaded together can vary on a server-by-server or client-by-client basis.
With email, you just don’t have a situation where your entire domain is cut off because a few bad actors are on it sending bad messages (unless you’re on a spam server). Imagine if Hotmail and Gmail defederated because they just had irreconcilable differences in policies. Imagine if part of Yahoo’s community spent time making receipts of the worst Outlook users’ outbound messages. It just doesn’t work the same way.
Verdict: The Fediverse has some email-like mechanisms, but the metaphor is closer to Usenet groups than it is to the kind of email communication most people are familiar with. Even then, it doesn’t really describe dispatching social interactions back and forth, and doesn’t begin to describe the user experience.
Myth #3: Mastodon is so much nicer than other places!
Myth: Ever since I switched to Mastodon, I've had such a great time! People are friendly, more personable, and more thoughtful. It's so much nicer than the other place I came from!
Fact: On the surface, this sounds positively lovely. It's a feel-good statement reflecting that someone is enjoying a new place and happy to be a part of it. What's wrong with that?
The problem is a confusion of cause and effect. You may personally have a great time – I certainly have, and it’s kept me on the network for 15 years. However, a positive personal experience can be attributed to a handful of factors:
- Joining the right server at the right time, and matching the vibes.
- Moving to a smaller pond where individuals stand out way more, and engage with each other more frequently.
- Engaging on niche topics that people in that space want to talk about.
- Using a new network differently than you used your old one.
Look, I’m not trying to rain on anyone’s parade. Loads of people have a great time being part of the Fediverse, but that doesn’t mean that the network is inherently nicer than anywhere else. People can bond pretty much anywhere, whether it’s Reddit, Discord, Facebook Groups, or even a public bus station.
Vitriol, bigotry, and other forms of nastiness exist on Mastodon, too. The really confounding part for new users is that they don’t really know if they’re walking into a really great community or a really toxic one, until they’re already part of it. Being on the wrong instance can absolutely ruin a person’s impression of the rest of the network. Why would they even want to come back?
Verdict: Mastodon (and the Fediverse in general) can be really, really great. However, a big part of your experience hinges on who you connect with at the time of signing up, what communities you take part in, and how well your admin responsibly runs a community server. If anything, the “it’s so much friendlier here!” thing is like comparing a really big party to having tea with a few friends, and saying that tea drinkers are much more inviting than party-goers.
Myth #2: Mastodon is ActivityPub-Compliant
Myth: Mastodon's federation protocol is compliant with the ActivityPub spec, which is why so many different platforms can talk to Mastodon.
Fact: Mastodon benefits from being the first major platform to implement the ActivityPub protocol. Rather than conform its platform to the protocol's specifications, Mastodon made a series of compromises in implementation details. The project did this in a way where its implementation is mostly compliant, but various pieces were adjusted or changed for Mastodon's needs.
One catastrophic side effect of this is that Mastodon’s implementation became the de-facto standard. Ideally, ActivityPub would benefit from a neutral testing suite that implements the full protocol spec, so that developers could test against it.
That didn’t end up happening. All of the platforms that can talk to Mastodon are only able to do so because they were tested against Mastodon and consequently, one another.
What kind of ways is Mastodon not compliant?
One of the most notable examples involves ActivityPub's Client-to-Server API, which is meant as a way for clients to talk to a server. C2S was intended to provide consistency for users between platforms, while also allowing many different kinds of clients with distinct activity types to tie into the main platform. This isn't just one little footnote that was glossed over: ActivityPub C2S comprises literally half of the standard. Granted, C2S was described by more than a few people as cumbersome and vague, whereas Mastodon's approach was simple and opinionated.
Due to Mastodon's popularity and the rise of clients, the platform's own client API became the dominant standard. As a result, a large body of fediverse clients and platforms took on Mastodon's form, since its API dictated how they should work. Had something more like C2S been the dominant standard instead, we may have ended up with a situation where things like Pixelfed or Bookwyrm would have been ActivityPub clients, instead of servers running their own bespoke ActivityPub federation variants.
It’s not all bad, though. The ActivityPub spec is actually pretty vague in some cases, and Mastodon did bring in some useful innovations: Webfinger, HTTP Signatures, and federated user reports are all fairly standard things thanks to Mastodon.
Verdict: Mastodon is compatible with an incomplete subset of ActivityPub that suits Mastodon’s specific needs, making it a de-facto standard. Everybody else achieves interoperability by striving to be compatible with Mastodon, not ActivityPub itself. For the long term, this is actually bad for the protocol.
Myth #1: Mastodon is Easy to Use!
Myth: Mastodon is so easy to use, literally anybody can use it! In fact, everyone should use it!
Fact: For a time, a lot of people in the Fediverse did find Mastodon to be more polished, with a better design, and a greater focus on ease of use.
However, that perspective is relative to the audience: compared to Diaspora, Friendica, and Hubzilla at the time, Mastodon felt relatively streamlined. It was comparatively easier to use than those other systems, and it brought in a ton of pretty microblogging clients to enhance the experience further.
Usability from an outside perspective
Unfortunately, it’s a whole other ball game for people coming from other networks. Newcomers are often perplexed by how the system is intended to work:
- Using the search form to grab remote accounts and contents is extremely useful, but not at all obvious.
- Prior to full-text search, discovery was a joke. There were just hashtags, and you'd better hope that people used them when posting about stuff you were interested in.
- Trying to interact with remote content on another server is still kind of confusing, if you’re not interacting with it from your own server. Yes, the popup for interaction and following got a lot better, and you don’t have to copy and paste things anymore. The flow is still bewildering to people who don’t yet understand it.
- Sometimes, just trying to follow or respond to a remote account fails, because the user had no idea their servers blocked each other.
- Privacy scopes have a bunch of exceptions to their expected behavior.
The problem is that these people will often complain about clearly broken UX, and then proponents of the network will basically tell those people that they’re stupid for “not getting it.”
Some of the shade that gets thrown around involves how those new users were groomed by the network they’re fleeing to have an “influencer mindset”, or that they’re “mad that our thing isn’t exactly like Twitter.” A lot of these new people end up feeling alienated by this treatment, and either check out Bluesky, or go back to Twitter.
Verdict: Mastodon is vastly easier to use than a lot of the Fediverse platforms that came before it, and gradually improving. It’s still full of unfamiliar concepts and rough edges to newcomers. We should also remember that things which seem ordinary to us might be wildly different to someone new, and try to help them, rather than shame them.
Thanks for taking the time to read all of this. I didn’t write this article with the intention of hating on Mastodon. It’s just that the network has been going through a rapid state of expansion and growth. As new users come in, we need to pick up the events of the past, examine them critically, and try not to repeat some of our worst mistakes. Misinformation and knee-jerk reactions are a big part of that.
Despite all of these myths and misconceptions, Mastodon is still a valuable and important platform, and still plays a large part in the Fediverse’s growth today. There are tons of amazing people on it, willing to share their life stories, hobbies, perspectives, and passions.
If you’re interested in giving Mastodon a try, we wrote a super-comprehensive guide that can help you every step of the way.
https://wedistribute.org/2023/11/debunking-the-top-10-myths-about-mastodon/
Erin Kissane's small internet website
The latest entries posted on Erin Kissane's small internet website — erinkissane.com
Configuring the Service Actor domain on Rebased ($3634512) · Snippets · Soapbox / Rebased · GitLab
Fediverse backend written in Elixir. The recommended backend for Soapbox. https://soapbox.pub — GitLab
In my quest to get rid of GAFAM and decrease as much the tracking I’m subject to, I’m only at the start.
Social Media Alternatives
By the end of 2021, fed up with infinite news feeds, I had removed my Facebook, Twitter and SensCritique accounts with almost no alternatives lined up. I needed to declutter and focus on my family, so I also reduced the influx of information by stopping RSS feeds and newsletter subscriptions, pausing notifications for most apps like WhatsApp, and, of course, no longer watching TV shows, movies, or the news… Those initiatives were also motivated by a growing family, as I became a father in 2021 🙂
Anyway, I had to keep up the effort!
Recently I’ve also removed Pinterest and set up a self-hosted Pixelfed to replace IG (Instagram), and I’ve joined a Mastodon instance as a replacement for Twitter. The nice thing about Mastodon and federated networks is that you can be on any instance and still follow users from other instances: from my Pixelfed (IG-like) account I’m subscribed to my Mastodon (Twitter-like) account, and I’m also followed by my WordPress page, thanks to the ActivityPub plugin I’ve set up on this blog. I’ve migrated my secrets from Bitwarden to Vaultwarden (self-hosted), my code from GitHub to Gitea (self-hosted), and my WordPress and Shaarli from OVH to Cloudron running on a Contabo VPS. I’ve also started using Nextcloud and Collabora as replacements for my “Office” suite (Google Docs, Google Sheets, …), but I’m not yet sold on the UI.
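One nice property of this setup is that you can verify the federation plumbing yourself. Before one server lets you follow an account on another, it resolves the handle with a WebFinger lookup, so a successful lookup is a good sign your blog or instance is reachable from the rest of the Fediverse. Here’s a minimal Python sketch of that check; the handle alice@example.blog is a placeholder for your own account:

```python
import requests

def webfinger(handle: str) -> dict:
    """Resolve a Fediverse handle (user@host) via WebFinger.

    This is the standard discovery step ActivityPub servers perform
    before following an account, so a successful lookup suggests the
    account is reachable from the rest of the Fediverse.
    """
    user, host = handle.split("@")
    resp = requests.get(
        f"https://{host}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{host}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Placeholder handle; replace with your own blog or Mastodon account.
for link in webfinger("alice@example.blog")["links"]:
    print(link.get("rel"), "->", link.get("href"))
```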
Self-Hosting Tools
The goal is to use as many self-hosted tools as I can. It’s not easy to find good alternatives, especially if you care about usability, data portability, privacy, and connecting with other users…
I’m struggling to get rid of YT (YouTube) and YT Kids. There are plenty of alternatives to YT, but few offer both a kid-friendly UX and a way to filter content.
Challenges in Replacing YouTube
The problem is not finding a privacy-respecting alternative to YT: it’s easy to find and host content on PeerTube, but much harder to find a good instance with interesting content.
The real problem is that most of the popular and interesting content is on YT. There are alternative frontends like Invidious, SkyTube (Android), and Clipious (an Android client for Invidious instances). But Invidious does not yet support filtering of videos, which makes it an unsafe place for kids. At least SkyTube allows filtering by channel, and that’s what I’m going to try with my kid, but its UX lags behind YT’s, which makes it quite difficult for my little one to navigate the app alone and pick a video.
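Since Invidious doesn’t filter videos itself, one workaround I’m considering is filtering on the client side against its public API. Here’s a minimal sketch, assuming a hypothetical self-hosted instance at invidious.example.com and placeholder channel IDs in the allow-list; the field names follow Invidious’s /api/v1/search responses:

```python
import requests

# Hypothetical self-hosted instance; any Invidious deployment should work.
INSTANCE = "https://invidious.example.com"
# Placeholder IDs of channels I trust for my kid.
ALLOWED_CHANNELS = {"UCplaceholder_channel_one", "UCplaceholder_channel_two"}

def kid_safe_search(query: str) -> list[dict]:
    """Search the instance, keeping only videos from allow-listed channels."""
    resp = requests.get(
        f"{INSTANCE}/api/v1/search",
        params={"q": query, "type": "video"},
        timeout=10,
    )
    resp.raise_for_status()
    return [v for v in resp.json() if v.get("authorId") in ALLOWED_CHANNELS]

for video in kid_safe_search("dinosaurs for kids"):
    print(video["title"], "-", video["author"])
```

A small kid-friendly UI around something like this would still be needed, but it shows the filtering itself is the easy part.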
Future Goals and Challenges
Important next steps for me are to fully get rid of all Google software, including Gmail (this will be a tough one), YouTube, Contacts, Calendar, Google Drive, Google Keep, Google Maps, etc., and to stop using software that depends on tech giants or trackers. Not easy. Just try surfing the web with the Big Tech Detective extension: even search engines and browsers that claim to respect your privacy are tracking you or rely on GAFAM infrastructure or trackers.
For instance, try using the DuckDuckGo search engine with the Big Tech Detective extension turned on: it gets flagged for relying on Microsoft infrastructure.
Exodus on Android shows that most of the “privacy-friendly” browsers on Android are full of trackers: DuckDuckGo, Ghostery, Firefox, Opera, Tor Browser… Only Brave on mobile didn’t contain trackers, according to Exodus. The Spyware Watchdog catalog provides additional data about some of the mentioned browsers, while also arguing that Brave itself should be considered spyware.
Fortunately, alternative search engines exist, such as Mojeek (which uses its own index) and SearX.
While reviewing my expenses and optimizing my budget, I removed my Medium, Pluralsight, and OpenAI subscriptions, and started cancelling hosting plans, as I’ll host most of what I need on my VPS. The next challenges are to get rid of Netflix, Meta/WhatsApp, Amazon (Prime, Kindle…), Apple, Spotify, Dropbox (to Nextcloud?), and ChatGPT (I’ve yet to find a comparable alternative) by finding alternatives or privacy-friendly frontends for what cannot be easily replaced. But they do not make it easy: while trying to get rid of Netflix, I noticed that as the “main” profile of our family’s shared account, it’s impossible to delete my data. I can only create a new Netflix account and transfer the family profiles to it.
In the end, I’d also like to replace my OnePlus phone with a dumb phone or a Fairphone. I’ve looked at privacy-friendly OSes we can set up on Android, but compatibility with the OnePlus 10 Pro is still a work in progress. Anyway, if you are interested, take some inspiration here. I also want to fix this blog by taking inspiration from the Small Web and Slow Web philosophies, so that it works in future browsers and relies on libre JavaScript (see also LibreJS), or no JavaScript at all.
I also want to convince my loved one to try free and open-source software on her devices (Apple 🙂). Lots of friends and family are using WhatsApp, while I’m sold on Signal or alternatives; this one will be difficult to get around. I aim at least to automatically back up all WhatsApp files to my Nextcloud.
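For the WhatsApp backup, Nextcloud’s standard WebDAV endpoint should be enough on the upload side. A minimal sketch with placeholder host, credentials and paths; the media directory varies by device, and the remote WhatsAppBackup folder is assumed to already exist:

```python
from pathlib import Path
import requests

# Placeholders for my own setup; use a Nextcloud app password, not your main one.
NEXTCLOUD = "https://cloud.example.com"
USER, APP_PASSWORD = "morgan", "app-password-from-nextcloud-security-settings"
WHATSAPP_MEDIA = Path("/sdcard/WhatsApp/Media")  # typical Android path; varies
REMOTE_DIR = f"{NEXTCLOUD}/remote.php/dav/files/{USER}/WhatsAppBackup"

def upload(local: Path) -> None:
    """Upload one file via WebDAV; PUT creates or overwrites the remote file."""
    with local.open("rb") as fh:
        resp = requests.put(
            f"{REMOTE_DIR}/{local.name}",
            data=fh,
            auth=(USER, APP_PASSWORD),
            timeout=60,
        )
    resp.raise_for_status()

for path in WHATSAPP_MEDIA.rglob("*"):
    if path.is_file():
        upload(path)  # note: flattens subfolders into one remote directory
```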
Every small improvement towards more privacy, data portability, open-source, data ownership, freedom, will be worth it and I’ll share my findings with you 🙂
Web (browser) extensions I recommend
- LibRedirect, a web extension that redirects YouTube, Twitter, TikTok, and other websites to their privacy-friendly alternative frontends. You can find my settings here.
- Privacy Redirect, a simple web extension that redirects Twitter, YouTube, Instagram & Google Maps requests to privacy-friendly alternatives.
- DuckDuckGo Privacy Essentials, which includes tracker blocking, cookie protection, DuckDuckGo private search, email protection, HTTPS upgrading, and much more. I use it to generate temporary email addresses when my email is requested.
- Ultrablock, which blocks ads, trackers and third-party cookies.
- Privacy Badger, which blocks invisible trackers.
Web (browser) extensions I’m Trying
- JShelter, an anti-malware web browser extension that mitigates potential threats from JavaScript, including fingerprinting, tracking, and data collection! It does cause some crappy websites to malfunction, though 😉
- Big Tech Detective, which helps you track tech giants. If fully activated, it stops you from browsing certain websites, including some supposedly privacy-friendly tools such as the DuckDuckGo search engine, which behind the scenes relies on Microsoft infrastructure and partnerships.
Android apps I recommend
- Exodus, an app that audits Android apps for trackers.
- F-Droid, an installable catalogue of FOSS (Free and Open Source Software) applications for the Android platform.
- Aegis Authenticator, a free, secure and open-source app to manage your 2-step verification tokens for your online services.
WordPress plugins I recommend
- ActivityPub. With this installed, your WordPress blog itself functions as a federated profile, allowing you to reach a wider audience.
Tools I recommend
- Kill the Newsletter, which converts email newsletters to RSS feeds (see the sketch just after this list).
- Bitwarden, a password manager. I’m hosting Vaultwarden, which implements the same API; the Bitwarden client is available on every popular platform.
- Invidious, to be used as an alternative frontend to YouTube. Same content as YouTube, with no ads and no trackers.
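To illustrate the Kill the Newsletter workflow from the list above: it hands you a per-newsletter email address plus a matching Atom feed, so every email sent to that address becomes a feed entry you can read from any feed reader, or from a few lines of code. A minimal sketch with a hypothetical feed URL, using the third-party feedparser package:

```python
import feedparser  # pip install feedparser

# Hypothetical feed URL; Kill the Newsletter generates one per inbox it creates.
FEED_URL = "https://kill-the-newsletter.com/feeds/example-inbox.xml"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:5]:
    # Each newsletter email shows up as one Atom entry.
    print(entry.title, "-", entry.link)
```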
Some reading I recommend
- https://spyware.neocities.org/articles/, a watchdog cataloguing popular apps that contain trackers.
Self hosting providers
- I recommend Contabo (not sponsored), which I hand-picked among many alternatives while looking for a good VPS offering at a good price. The only downside is their administration panel’s UX; other than that, it was cheap, and they buy green energy to run their servers. The alternatives I compared Contabo with were OVH (way too expensive) and Hetzner (cheap, but unfortunately their VPS hardware lacks AVX support and thus doesn’t meet Cloudron’s requirements). In the end, Contabo beats the VPS offerings of Hetzner and OVH at that price, with more storage and CPU for me!
Self hosted apps
- If you own a dedicated server / VPS, try Cloudron; it’s such an easy way to self-host your apps and take control of your data! I’m currently using it with Nextcloud, WordPress, and all the apps mentioned below:
- Wallabag, a read-it-later app.
- Miniflux, a minimalist and opinionated feed reader.
- Vaultwarden, an unofficial Bitwarden-compatible server.
- Changedetection.io, a free and open-source website change detection tool and website watcher.
- PrivateBin, a minimalist, open-source online pastebin where the server has zero knowledge of pasted data.
- Invoice Ninja, for invoices, expenses and tasks; built with Laravel, Flutter and React.
- Gitea, a self-hosted Git hosting service that offers code hosting, code review, team collaboration, package management and CI/CD features. It is compatible with GitHub Actions, Docker, various databases and tools, and supports multiple languages and architectures.
- Pixelfed, a replacement for Instagram as it used to be (without all the ads and crap).
- Shaarli, a personal, minimalist, super-fast, database-free bookmarking service. Useful for sharing links or keeping them safe and private.
Concluding Thoughts
I hope these insights and recommendations help you in your journey towards a more private and self-reliant digital life. As always, I’m eager to hear your experiences and suggestions. Let’s make our digital space a bit more ours!
#alternatives #android #cloudron #decentralization #degooglify #fediverse #libre #openSource #privacy #security #selfHosting
A couple of weeks back, I was getting my ass kicked at chess. It was a blast, even as I blundered into defeat.
Here’s the thing: in some games, as in life, the right focus at the right time can flip the board. It’s about spotting chances and seizing them. Remark: if you’re interested in the “perfect timing” topic, do read about the power of when.
Being focused on specific goals can make the difference in the long term, as can staying aware of opportunities and of reality.
Last year? A financial nightmare. But I hustled, optimizing my budget. Running my own company, I could shuffle some expenses around – a neat trick.
I axed unnecessary subscriptions – online courses, publishing platforms, various IT tools. Sometimes, the best alternative isn’t a new provider; it’s you. Betting on my skills, I cut costs and upped my privacy game. That’s a win in my book.
Now, this blog and my digital life sit on a fresh, cost-effective infrastructure. More privacy, less cash bleed.
My new obsession? Privacy and open source. Ditching GAFAM and seeing where that road takes me. It’s about discipline and the right tools.
Next year’s mission: maintain this focus and help others grab back control of their budgets and privacy.
Catch you in 2024.
https://morgan.zoemp.be/personal-insights-on-finance-and-digital-privacy/
#budget #goals #openSource #opportunities #privacy #strategy #wellbeing
WHEN: The Scientific Secrets of Perfect Timing | Daniel H. Pink
Daniel H. Pink, the #1 bestselling author of Drive and To Sell Is Human, unlocks the scientific secrets to good timing to help you flourish at work, at school, and at home. Outthink (Daniel H. Pink)
Block Ads, Trackers and Third Party Cookies
UltraBlock protects your privacy by blocking ads, invisible trackers and third-party cookies. It makes websites load super fast and more securely. UltraBlock
Seriously, WTF @protonmail?
#YouHadOneJob as an #eMail #Provider, and that is to get shit reliably sent and received.
If that's too hard then how should anyone trust them re: #security and #privacy?
Spoiler: No one should!
https://www.youtube.com/watch?v=QCx_G_R0UmQ
ProtonMail Sends User IP and Device Info to Swiss Authorities.
Original articles:
https://mobile.twitter.com/tenacioustek/status/1434604102676271106
https://techcrunch.com/2021/09/06/protonmail-logged-ip-address-of-french-a...
YouTube
Almost got scammed selling some stuff online. 🤙
Had a person send me their number as an interested buyer, telling me to text them. I did (first mistake), and we arranged a meetup time. Then they asked if, for their safety, they could send me a six-digit code (some of you already know where this is going) that I could repeat back to them to verify myself.
I said, “absolutely!” And sure enough, I got a Google Voice verification code. lol
If you're not familiar with the scam, shady people will take your phone number and try to create a Google Voice account with it. If you provide them with the 6-digit code that Google sends you, they can "verify" that they are you, and then basically use your phone number to run scams, commit fraud, etc. It's nasty business.
I called them out, blocked them, then reported them to the marketplace website and to the FTC--though, almost certainly, they were using the phone number of another poor soul to carry this out.
I used to work as a social engineer, running phishing campaigns (ethically, with consent lol) against Fortune 1000 companies to assess their level of vulnerability. Luckily for me, I was super familiar with this, but most of the people I’ve told about it have said, “Oh, I probably would have fallen for that…”, and even I set myself up for it.
So that is why I’m posting this. Please be aware of sketchy shit like this. If someone is asking you for a verification code over SMS or email, tread with EXTREME caution. Also, it’s usually pretty shady if a stranger you’re already chatting with wants to move to a new platform. Not always, but if someone emails or messages you on Facebook to ask you to text them, that’s a little weird. I’ve had legitimate buyers/sellers do that, so it’s not unheard of, but it should put you on guard.
If you buy/sell/trade online frequently, it's a good idea to use a dedicated MySudo number, VOIP number, and/or a burner phone for that.
Stay safe out there, kids.
#BraveBrowser is installing VPNs without users' consent, even if you didn't willingly enable their #VPN service. Just stop using #Brave, it's garbage.
Edit: the services are disabled by default, but they were still installed with little to no transparency toward the user, alongside all the other stuff Brave users often don’t want (Pocket on Firefox is to blame too, lol).
https://www.ghacks.net/2023/10/18/brave-is-installing-vpn-services-without-user-consent/
#Browser #Security #Privacy #OpenSource #FreeSoftware #LibreSoftware
#Microsoft comes under blistering criticism for “grossly irresponsible” #security
source: https://arstechnica.com/security/2023/08/microsoft-cloud-security-blasted-for-its-culture-of-toxic-obfuscation/
Did Microsoft quickly fix the issue that could effectively lead to the breach of multiple customers' networks and services? Of course not. They took more than 90 days to implement a partial #fix—and only for new applications loaded in the service.
#Azure #problem #software #bug #cybersecurity #economy #cloud #news
Microsoft comes under blistering criticism for “grossly irresponsible” security
Azure looks like a house of cards collapsing under the weight of exploits and vulnerabilities. Ars Technica