I get a lot of emails from job recruiters that, even to this day, I’m not qualified for. They often ask for ridiculous requirements, like a Master’s Degree or Ph.D in Computer Science, for what would otherwise be a standard programming job without any particular specializations (e.g. cryptography, which I happen to specialize in).
One time I humored one of these opportunities for a PHP Developer position and was immediately told over the phone that my number of years of experience with PHP was too low, because I didn’t start working with it in 1996 like the rockstar developers on their payroll, but that they’d call me back if they had any “junior” openings in the future. Given that I was born in 1989 and didn’t have access to a computer until about Christmas 1999, I won’t even begin to pretend this is a reasonable ask.
This was my actual reaction after I hung up. (Art by Khia.)
In a lot of ways, I have it easy. I have enough experience with software development and security research under my belt to basically ignore the requirements that HR puts on job listings and still get an interview with most companies. (If you want a sense of what this looks like, look no further than rawr-x3dh or my teardown of security issues in Zed Shaw’s SRP library… which are both things I did somewhat casually for this blog.)
The irony is, I’m probably deeply overqualified for the majority of the jobs that come across my inbox, yet I still don’t meet the HR requirements for the roles, while the people who are actually a good fit for them don’t have the same privilege I do.
So if the rules are made up and the points don’t matter, why do companies bother with these pointlessly harrowing job requirements?
(Art by Khia.)
The answer is simple: They’re being toxic gatekeepers, and we’re all worse off for it.
https://twitter.com/IanColdwater/status/1357381321488621569
Toxic Gatekeeping
Gatekeeping is generally defined as “the activity of controlling, and usually limiting, general access to something” (source).
Gatekeeping doesn’t have to be toxic: Keeping children out of adult entertainment venues is certainly an example of gatekeeping, but it’s a damned good idea in that context.
In a similar vein, content moderation is a good thing, but necessarily involves some gatekeeping behaviors.
As with many things in life, toxicity is determined by the dose. I’ve previously posited that any group has a minimum gatekeeping threshold necessary for maintaining group identity (or in the example of keeping kids out of 18+ spaces, avoiding liability).
When the amount of gatekeeping exceeds the minimum, the excess is almost always toxic. To wit:
https://twitter.com/BlackDGamer1/status/1361352840980164609
Toxic Gatekeeping in Tech
The technology industry is filled with entry-level gatekeepers. Sometimes this behavior floats up the org chart, but it’s most often concentrated among the least experienced.
https://twitter.com/fancy_flare/status/1371568476331012101
In practice, toxic gatekeeping often employs arbitrary Purity Tests, stupid job requirements, and questionably legal hazing rituals. Conversations with toxic gatekeepers often–but not always–involve gratuitous use of No True Scotsman fallacies.
But what’s really happening here is actually sinister: Toxic gatekeepers in tech are people with internalized cognitive distortions that affirm their sense of superiority, project their personal insecurities, or both.
This is almost always directed towards the end of excluding women, racial or religious minorities, LGBTQIA+ and neurodivergent people, and other vulnerable populations from pursuing lucrative career prospects.
If you need a (rather poignant) example of the above, the gatekeeping behaviors against women in tech even apply to the forerunners of computer science:
https://twitter.com/gurlcode/status/1170664258197024768
If you’re still unconvinced, I have my own experiences I can tell you about; like that one time my blog’s domain was banned from the netsec subreddit because of other peoples’ toxicity.
That Time soatok.blog Was Banned from Reddit’s r/netsec Subreddit
Earlier this year, I thought I’d submit my post about encrypting directly with RSA being a bad idea to the network security subreddit–only to discover that my domain name had been banned from r/netsec.
https://twitter.com/SoatokDhole/status/1352140779586805760
Prior to this, I’d had some disagreements with other r/netsec moderators (i.e. @sanitybit, plus whoever answered my Reddit messages) about a lack of communication and transparency about their decisions, but there were no lingering issues.
A lot of the time, when something I wrote ended up on their subreddit, I was not the person who submitted it there. Usually this omission was intentional: if I didn’t feel a post belonged on r/netsec (usually because it was insufficiently technical), I didn’t submit it.
The comments I received were often hostile non sequiturs about me being a furry. This general misconduct isn’t unique to r/netsec; I’ve received similar comments on my Lobste.rs submissions, which forced the sysop to tell people to stop being dumb and terrible.
https://twitter.com/SoatokDhole/status/1352142604406816771
The hostility was previously severe enough to get noticed by the r/SubredditDrama subreddit (and, despite what you might think of drama-oriented forums, most of the comments there were surprisingly non-shitty towards me or furries in general).
Quick aside: Being a furry isn’t the important bit of this anecdote; people face this kind of behavior for all sorts of reasons. In particular: transgender people face even shittier behavior at every level of society, and a lot of what they endure is much more subtle than the overt yet lazy bigotry lobbed my way.
So was my domain name banned by a r/netsec moderator because other people kept being shitty in the comments whenever someone submitted one of my blog posts there?
It turns out: Yes. This was later confirmed to me by a r/netsec moderator via Twitter DM.
r/netsec moderator @albinowax
I’ve cut out some irrelevant crap.
As I had said publicly on Twitter and reiterated in the DM conversation above: I had already decided I would not return to r/netsec in light of this rogue moderator’s misconduct.
Trust is a funny thing: It’s easy to lose and hard to gain. Once trust has been lost, it’s often impossible to recover it. Security professionals should understand this better than anyone else, given our tendency to deal with matters of risk and trust.
What Could They Have Done Better?
Several things! Many of which are really obvious!
- Communicating with me. If nothing else, they could have told me they were banning my domain name from their subreddit and given a reason why.
  - Maybe there was some weird goal in mind? (E.g. to stop people from submitting posts on my behalf, since I had made it clear that I’d intentionally not share stuff there if I didn’t think it belonged.)
  - I’ll never know, because nobody told me anything.
- Communicating with each other. I mean, this is just a matter of showing respect to your fellow moderators. It’s astonishing that this didn’t happen.
- Taking steps to protect members of vulnerable populations from the kinds of shitheads that make Reddit a miserable experience.
  - For example: If someone’s previously been a target of bigotry, have auto-moderator prune all comments not from the OP or Trusted Contributors–and if any TCs violate the mods’ trust, revoke their TC status.
Since then, I’ve been informed that they implemented my suggestion to prevent themselves from having to suffer through a bunch of negative vitriol.
Truthfully, I still haven’t decided if I want to give r/netsec another chance.
On the one paw: The moderators really burned a lot of trust with me and I expect security professionals to fucking know better.
On the other: Representation matters, and removing myself from their community gives the bigots that caused the trouble in the first place a Pyrrhic victory.
Neither choice sits well with me, for totally disparate reasons.
I wish I could put a happy ending on this tale, but life doesn’t work that way most of the time.
If you’re looking for non-toxic subreddits, r/crypto is always a pleasant community. I also contribute a lot to r/furrydiscuss.
When to Be a Gatekeeper
If someone is a threat to the safety or well-being of your group, you should exclude them from your group.
In the furry community, we had a person that owned a widely-used costume making business get outed for a lot of abusive actions. Their response was to try to file a SLAPP suit against some unrelated person that merely linked to the victims’ statements on Twitter.
https://twitter.com/qutens_/status/1357496129659707392
In these corner-case situations, be a gatekeeper!
But generally, it’s not warranted. Gatekeeping compounds systemic harms and makes it harder for newcomers to join a community or industry.
Gatekeeping hurts women. Gatekeeping hurts LGBTQIA+ folks. Gatekeeping hurts non-white people. Gatekeeping hurts the neurodivergent.
But if that’s not enough of a reason to avoid it: Gatekeeping hurts straight white males too!
Newcomers who aren’t narcissists almost always experience some degree of Impostor Syndrome. If you apply the gatekeeping behaviors we’ve discussed previously, you’re going to totally exacerbate the situation.
People will quit. People will burn out.
The only people who stand to gain anything from gatekeeping are the survivors who made it through the gate. If the survivors are insecure or arrogant, the vicious cycle continues.
So why don’t we simply…not perpetuate it?
There’s an old saying that’s popular in punk and anarchist circles: “No gods, no masters.” I think the correct attitude to have regarding gatekeeping is analogous to the spirit of this saying.
Without Gatekeeping, A Deluge?
Sometimes you’ll hear hiring managers defend the weird job requirements that HR departments shit out, because every job posting gets flooded with hundreds of applicants. They insist that the incentives of this dynamic are to blame, rather than gatekeeping.
Unfortunately, we’re both right on this one. Economic forces and toxicity often synergize in the worst ways, and gatekeeping behaviors are no exception.
Hiring managers who are forced to sift through a deluge of applications to fill an opening will inevitably rely on their own subconscious biases to select “qualified” candidates (from a pool of people who are actually qualified for the job). Thus, they end up gatekeeping far more than the minimum their job requires. This is one reason why tech companies often only employ people that fit the same demographic.
Savvy tech companies will employ work-sample tests in the same way that musicians employ blind auditions to assess candidates, rather than relying on these subconscious biases to drive their decisions. Not all companies are savvy, and we all suffer for it.
Instead, what happens is that the candidates that endure the ritual of whiteboard hazing (which tests for anxiety rather than technical or cognitive ability) will in turn propagate the ritual for the next round of newcomers.
The same behaviors and incentives that maintain these unhealthy traditions overlap heavily with the people who will refuse to train or mentor their junior employees. This refusal isn’t just about frugality; it’s also in service of the ego. Maintaining their power within existing social hierarchies is something that toxic gatekeepers worry about a lot.
What About “Don’t Roll Your Own Crypto”?
There’s a fine line between reinforcing boundaries to maintain safety and inventing stupid rules or requirements for people to be allowed to participate in a community or industry. (Also, I’ve talked about this before.)
Rejection of gatekeeping isn’t the same as rejecting the concept of professional qualifications, and anyone who suggests otherwise isn’t being intellectually honest.
The excellent artwork used in the blog header was made by Wolfool.
https://soatok.blog/2021/03/04/no-gates-no-keepers/
#gatekeepers #gatekeeping #onlineAbuse #rNetsec #Reddit #Society #toxicity #Twitter
Let me say up front: I’m no stranger to negative or ridiculous feedback. It’s incredibly hard to hurt my feelings, especially if you intend to. You don’t participate openly in the furry fandom, as I have since 2010, without becoming accustomed to malevolence and trolling. If this were simply a story of someone being an asshole to me, I would have shrugged and moved on with my life.
It’s important that you understand this, because when you call it like you see it, sometimes people dismiss your criticism with “triggered” memes. This isn’t me being offended. I promise.
My recent blog post about crackpot cryptography received a fair bit of attention in the software community. At one point it was on the front page of Hacker News (which is something that pretty much never happens for anything I write).
Unfortunately, that also means I crossed paths with Zed A. Shaw, the author of Learn Python the Hard Way and other books often recommended to neophyte software developers.
As someone who spends a lot of time trying to help newcomers acclimate to the technology industry, there are some behaviors I’ve recognized in technologists over the years that makes it harder for newcomers to overcome anxiety, frustration, and Impostor Syndrome. (Especially if they’re LGBTQIA+, a person of color, or a woman.)
Normally, these are easily correctable behaviors exhibited by people who have good intentions but don’t realize the harm they’re causing–often not by what they’re saying, but by how they say it.
Sadly, I can’t be so generous about… whatever this is:
https://twitter.com/lzsthw/status/1359659091782733827
Having never before encountered someone who behaves like a poorly-written villain toward the work I do to help disadvantaged people thrive in technology careers, I sought to clarify Shaw’s intent.
https://twitter.com/lzsthw/status/1359673331960733696
https://twitter.com/lzsthw/status/1359673714607013905
This is effectively a very weird hybrid of an oddly-specific purity test and a form of hazing ritual.
Let’s step back for a second. Can you even fathom the damage attitudes like this can cause? I can tell you firsthand, because it happened to me.
Interlude: Amplified Impostor Syndrome
In the beginning of my career, I was just a humble web programmer. Due to a long story I don’t want to get into now, I was acquainted with the culture of black-hat hacking that surrounds the DEF CON community.
In particular, I was exposed to the writings of a malicious group called Zero For 0wned, which made sport of hunting “skiddiez” and preached a very “shut up and stay in your lane” attitude:
Geeks don’t really come to HOPE to be lectured on the application of something simple, with very simple means, by a 15 year old. A combination of all the above could be why your room wasn’t full. Not only was it fairly empty, but it emptied at a rapid rate. I could barely take a seat through the masses pushing me to escape. Then when I thought no more people could possibly leave, they kept going. The room was almost empty when I gave in and left also. Heck, I was only there because we pwned the very resources you were talking about.
–Zero For 0wned
My first security conference was B-Sides Orlando in 2013. Before the conference, I had been hanging out in the #hackucf IRC channel and had known about the event well in advance (and got along with all the organizers and most of the would-be attendees), and considered applying to their CFP.
I ultimately didn’t, solely because I was worried about a ZF0-style reception.
I had no reference frame for other folks’ understanding of cryptography (which is my chosen area of discipline in infosec), and thought things like timing side-channels were “obvious”–even to software developers outside infosec. (Such is the danger of being self-taught!)
“Geeks don’t really come to B-Sides Orlando to be lectured on the application of something simple, with very simple means,” is roughly how I imagined the vitriol would be framed.
If it can happen to me, it can happen to anyone interested in tech. It’s the responsibility of experts and mentors to spare beginners from falling into the trappings of other peoples’ grand-standing.
Pride Before Destruction
With this in mind, let’s return to Shaw. At this point, more clarifying questions came in, this time from Fredrick Brennan.
https://twitter.com/lzsthw/status/1359712275666505734
What an arrogant and bombastic thing to say!
At this point, I concluded that I can never again, in good conscience, recommend any of Shaw’s books to a fledgling programmer.
If you’ve ever published book recommendations before, I suggest auditing them to make sure you’re not inadvertently exposing beginners to his harmful attitude and problematic behavior.
But while we’re on the subject of Zed Shaw’s behavior…
https://twitter.com/lzsthw/status/1359714688972582916
If Shaw thinks of himself as a superior cryptography expert, surely he’s published cryptography code online before.
And surely, it will withstand a five-minute code review from a gay furry blogger who never went through Shaw’s prescribed hazing ritual to rediscover specifically the known problems in OpenSSL circa Heartbleed and is therefore not as much of a cryptography expert?
(Art by Khia.)
May I Offer You a Zero-Day in This Trying Time?
One of Zed A. Shaw’s Github projects is an implementation of SRP (Secure Remote Password)–an early Password-Authenticated Key Exchange algorithm often integrated with TLS (to form TLS-SRP).
Zed Shaw’s SRP implementation
Without even looking past the directory structure, we can already see that it implements an algorithm called TrueRand, about which cryptographer Matt Blaze had this to say:
https://twitter.com/mattblaze/status/438464425566412800
As noted by the README, Shaw stripped out all of the “extraneous” things and doesn’t have all of the previous versions of SRP “since those are known to be vulnerable”.
So given Shaw’s previous behavior, and the removal of vulnerable versions of SRP from his fork of Tom Wu’s libsrp code, it stands to reason that Shaw believes the cryptography code he published would be secure. Otherwise, why would he behave with such arrogance?
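For contrast, here’s what the randomness requirement actually looks like in practice. This is an illustrative Python sketch (none of these names come from Shaw’s library): a clock-seeded PRNG stands in for the general failure mode of low-entropy sources, next to the OS CSPRNG that salts and private keys actually require. (TrueRand’s real design measures clock drift rather than seeding a PRNG with the time; the lesson about guessable entropy is the same.)

```python
import random
import secrets
import time

def predictable_salt(nbytes: int = 16) -> bytes:
    """Insecure: a non-cryptographic PRNG seeded from the clock.
    Anyone who can guess the seed can reproduce every 'random' byte."""
    rng = random.Random(int(time.time()))
    return bytes(rng.randrange(256) for _ in range(nbytes))

def secure_salt(nbytes: int = 16) -> bytes:
    """What SRP actually needs: bytes from the OS CSPRNG."""
    return secrets.token_bytes(nbytes)

# An attacker who guesses the seed reproduces the "random" output exactly:
seed = int(time.time())
victim = random.Random(seed)
attacker = random.Random(seed)
assert bytes(victim.randrange(256) for _ in range(16)) == \
       bytes(attacker.randrange(256) for _ in range(16))
```

The difference isn’t academic: in SRP, a predictable salt or private key undermines the entire protocol, no matter how correct the rest of the math is.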
SRP in the Grass
Heads up! If you aren’t cryptographically or mathematically inclined, this section might be a bit dense for your tastes. (Art by Scruff.)
When I say SRP, I’m referring to SRP-6a. Earlier versions of the protocol are out of scope; as are proposed variants (e.g. ones that employ SHA-256 instead of SHA-1).
Professor Matthew D. Green of Johns Hopkins University (who incidentally used to proverbially shit on OpenSSL in the way that Shaw expects everyone to, except productively) dislikes SRP but considered the protocol “not obviously broken”.
However, a secure protocol doesn’t mean the implementations are always secure. (Anyone who’s looked at older versions of OpenSSL’s BigNum library after reading my guide to side-channel attacks knows better.)
There are a few ways to implement SRP insecurely:
- Use an insecure random number generator (e.g. TrueRand) for salts or private keys.
- Fail to use a secure set of parameters (q, N, g).
To expand on this, SRP requires q be a Sophie-Germain prime and N be its corresponding Safe Prime. The standard Diffie-Hellman primes (MODP) are not sufficient for SRP. This security requirement exists because SRP requires an algebraic structure called a ring, rather than a cyclic group (as per Diffie-Hellman).
- Fail to perform the critical validation steps as outlined in RFC 5054.
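To illustrate the second pitfall, here’s a hedged Python sketch of a safe-prime check (function names are mine; a real implementation would also pin the exact group constants published in RFC 5054 rather than testing arbitrary primes at runtime):

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin primality test (probabilistic)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

def is_safe_prime(N: int) -> bool:
    """True if N = 2q + 1 where q is also prime (q is then a
    Sophie-Germain prime), which is what SRP's group requires."""
    return is_probable_prime(N) and is_probable_prime((N - 1) // 2)
```

For example, `is_safe_prime(23)` holds (since 11 is prime) while `is_safe_prime(13)` does not (since 6 is composite).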
In one way or another, Shaw’s SRP library fails at every step of the way. The first two are trivial:
- We’ve already seen the RNG used by srpmin. TrueRand is not a cryptographically secure pseudo random number generator.
- Zed A. Shaw’s srpmin only supports unsafe primes for SRP (i.e. the ones from RFC 3526, which is for Diffie-Hellman).
The third is more interesting. Let’s talk about the RFC 5054 validation steps in more detail.
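Concretely, the core public-value checks mandated by RFC 5054 are simple to state. This is an illustrative sketch (function names are mine, not from any real library):

```python
def check_server_public(B: int, N: int) -> int:
    """RFC 5054: the client MUST abort if B % N == 0,
    otherwise a malicious server can force a known session key."""
    if B % N == 0:
        raise ValueError("invalid server public value B")
    return B

def check_client_public(A: int, N: int) -> int:
    """RFC 5054: the server MUST abort if A % N == 0,
    for the symmetric reason on the client side."""
    if A % N == 0:
        raise ValueError("invalid client public value A")
    return A
```

Skipping either check turns a degenerate public value (0, N, 2N, …) into a protocol break rather than a rejected handshake.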
Parameter Validation in SRP-6a
Retraction (March 7, 2021): There are two errors in my original analysis.
First, I misunderstood the behavior of `SRP_respond()` to involve a network transmission that an attacker could fiddle with. It turns out that this function doesn’t do what its name implies.
Additionally, I was using an analysis of SRP3 from 1997 to evaluate code that implements SRP6a. `u` isn’t transmitted, so there’s no attack here.
I’ve retracted these claims (but you can find them on an earlier version of this blog post via archive.org). The other SRP security issues still stand; this erroneous analysis only affects the `u` validation issue.
Vulnerability Summary and Impact
That’s a lot of detail, but I hope it’s clear to everyone that all of the following are true:
- Zed Shaw’s library’s use of TrueRand fails the requirement to use a secure random source. This weakness affects both the salt and the private keys used throughout SRP.
- The library in question ships support for unsafe parameters (particularly for the prime, N), which according to RFC 5054 can leak the client’s password.
Salts and private keys are predictable and the hard-coded parameters allow passwords to leak.
But yes, OpenSSL is the real problem, right?
(Art by Khia.)
Low-Hanging ModExp Fruit
Shaw’s SRP implementation is pluggable and supports multiple back-end implementations: OpenSSL, libgcrypt, and even the (obviously not constant-time) GMP.
Even in the OpenSSL case, Shaw doesn’t set the `BN_FLG_CONSTTIME` flag on any of the inputs before calling `BN_mod_exp()` (or, failing that, inside `BigIntegerFromInt`).
As a consequence, this is additionally vulnerable to a local-only timing attack that leaks your private exponent (which is the SHA1 hash of your salt and password). Although the literature on timing attacks against SRP is sparse, this is one of those cases that’s obviously vulnerable.
Exploiting the timing attack against SRP requires the ability to run code on the same hardware as the SRP implementation. Consequently, it’s possible to exploit this SRP ModExp timing side-channel from separate VMs that have access to the same bare-metal hardware (i.e. L1 and L2 caches), unless other protections are employed by the hypervisor.
Leaking the private exponent is equivalent to leaking your password (in terms of user impersonation), and knowing the salt and identifier further allows an attacker to brute force your plaintext password (which is an additional risk for password reuse).
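To make that last point concrete, here’s a hedged sketch of the offline dictionary attack in Python, using the SRP-6a derivation x = H(s | H(I ":" P)) from RFC 5054 (function names are mine):

```python
import hashlib

def srp_x(salt: bytes, identity: str, password: str) -> int:
    """x = H(s | H(I ":" P)) per RFC 5054, with SHA-1 as H."""
    inner = hashlib.sha1(f"{identity}:{password}".encode()).digest()
    return int.from_bytes(hashlib.sha1(salt + inner).digest(), "big")

def crack(leaked_x: int, salt: bytes, identity: str, wordlist):
    """Offline dictionary attack: once x and the (public) salt have
    leaked, each password guess costs only two SHA-1 invocations."""
    for guess in wordlist:
        if srp_x(salt, identity, guess) == leaked_x:
            return guess
    return None
```

Because there’s no expensive password hashing in the x derivation, an attacker with the leaked exponent can test millions of guesses per second on commodity hardware.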
Houston, The Ego Has Landed
Earlier, when I mentioned the black hat hacker group Zero For 0wned and the negative impact of their hostile rhetoric, I omitted an important detail: some of the first words they included in their first ezine.
For those of you that look up to the people mentioned, read this zine, realize that everyone makes mistakes, but only the arrogant ones are called on it.
If Zed A. Shaw were a kinder or humbler person, you wouldn’t be reading this page right now. I have a million things I’d rather be doing than exposing the hypocrisy of an arrogant jerk who managed to bullshit his way into the privileged position of educating junior developers through his writing.
If I didn’t believe Zed Shaw was toxic and harmful to his very customer base, I certainly wouldn’t have publicly dropped zero-days in the code he published while engaging in shit-slinging at others’ work and publicly shaming others for failing to meet arbitrarily specific purity tests that don’t mean anything to anyone but him.
But as Dan Guido said about Time AI:
https://twitter.com/veorq/status/1159575230970396672
It’s high time we stopped tolerating Zed’s behavior in the technology community.
If you want to mitigate impostor syndrome and help more talented people succeed with their confidence intact, boycott Zed Shaw’s books. Stop buying them, stop stocking them, stop recommending them.
Learn Decency the Hard Way
(Updated on February 12, 2021)
One sentiment and question that came up a few times since I originally posted this is, approximately, “Who cares if he’s a jerk and a hypocrite if he’s right?”
But he isn’t. At best, Shaw almost has a point about the technology industry’s over-dependence on OpenSSL.
Shaw’s weird litmus test about whether or not my blog (which is less than a year old) had said anything about OpenSSL during the “20+ years it was obviously flawed” isn’t a salient critique of this problem. Without a time machine, there is no actionable path to improvement.
You can be an inflammatory asshole and still have a salient point. Shaw had neither while demonstrating the worst kind of conduct to expose junior developers to if we want to get ahead of the rampant Impostor Syndrome that plagues us.
This is needlessly destructive to his own audience.
Generally, the only people you’ll find who outright like this kind of abusive behavior in the technology industry are the self-proclaimed “neckbeards” who live on the dregs of elitist chan culture: they want there to be a priestly technologist class within society, and they want to see themselves as part of this exclusive caste–if not at the top of it. I don’t believe these people have anyone else’s best interests at heart.
So let’s talk about OpenSSL.
OpenSSL is the Manifestation of Mediocrity
OpenSSL is everywhere, whether you realize it or not. Any programming language that provides a `crypto` module (Erlang, Node.js, Python, Ruby, PHP) binds against OpenSSL libcrypto.
OpenSSL kind of sucks. It used to be a lot worse. A lot of people have spent the past 7 years of their careers trying to make it better.
A lot of OpenSSL’s suckage is because it’s written mostly in C, which isn’t memory-safe. (There are also some Perl scripts to generate Assembly code, and probably some other crazy stuff under the hood I’m not aware of.)
A lot of OpenSSL’s suckage is because it has to be all things to all people that depend on it, because it’s ubiquitous in the technology industry.
But most of OpenSSL’s outstanding suckage is because, like most cryptography projects, its API was badly designed. Sure, it works well enough as a Swiss army knife for experts, but there are too many sharp edges and unsafe defaults. Further, because so much of the world depends on these legacy APIs, it’s difficult (if not impossible) to improve the code quality without making upgrades a miserable task for most of the software industry.
What Can We Do About OpenSSL?
There are two paths forward.
First, you can contribute to the OpenSSL 3.0 project, which has a pretty reasonable design document that almost nobody outside of the OpenSSL team has probably ever read before. This is probably the path of least resistance for most of the world.
Second, you can migrate your code to not use OpenSSL. For example, all of the cryptography code I’ve written for the furry community to use in our projects is backed by libsodium rather than OpenSSL. This is a tougher sell for most programming languages–and, at minimum, requires a major version bump.
Both paths are valid. Improve or replace.
But what’s not valid is pointlessly and needlessly shit-slinging open source projects that you’re not willing to help. So I refuse to do that.
Anyone who thinks that makes me less of a cryptography expert should feel welcome to not just unfollow me on social media, but to block on their way out.
https://soatok.blog/2021/02/11/on-the-toxicity-of-zed-a-shaw/
#author #cryptography #ImpostorSyndrome #PAKE #SecureRemotePasswordProtocol #security #SRP #Technology #toxicity #vuln #ZedAShaw #ZeroDay