Lots of folks are warning that overreliance on AIs can lead to bias.

But that can sound a bit abstract, so let's just leave these examples here.

#CHATGPT #AI #bias
Two things are happening here.

First, an assumption that if information is present, it must be relevant to the question. Often that's the case, but sometimes it's not! The AI is bad at telling the difference.

Second, once it has decided the information is relevant, it assigns scores to the properties to try to fit the question, and the relative scoring is (opaquely) based on its training input, since that's usually what you want. But here it's just reflecting the input bias (that is, existing social biases) back at you.
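
Here's a minimal sketch of that "scores reflect the training input" point, using a toy count-based word model. This is nothing like a real LLM's internals, and the tiny corpus is invented, but the statistical principle is the same: the model's relative scores are just frequencies from whatever it was fed.

```python
from collections import Counter

# Hypothetical miniature "training set" with a built-in occupational
# skew, standing in for the vastly larger (and similarly skewed)
# corpora real models are trained on.
corpus = ["doctor he"] * 3 + ["doctor she"] * 1 + \
         ["nurse she"] * 3 + ["nurse he"] * 1

def next_word_scores(context: str) -> dict[str, float]:
    """Relative scores for the word following `context`,
    derived purely from training-data frequencies."""
    counts = Counter()
    for sample in corpus:
        prev, nxt = sample.split()
        if prev == context:
            counts[nxt] += 1
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

print(next_word_scores("doctor"))  # {'he': 0.75, 'she': 0.25}
print(next_word_scores("nurse"))   # {'she': 0.75, 'he': 0.25}
# The model hasn't "decided" anything about doctors or nurses;
# it's faithfully reproducing the statistics it was given.
```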
It's one of those things that's sort of true and not true at the same time.

The AI isn't /inherently/ biased. Nothing in the code intentionally encodes obnoxious biases, and the programmers didn't do this on purpose.

But the *training set* introduces biases, because it's built from vast amounts of human social experience, and *that* is systemically biased.

So anyway, be very careful about delegating major decisions to an AI, or treating it as "unbiased" just because it's code.
A final point: these are particularly obvious examples, but real-life ones can be much more insidious.

There have been cases where AIs have done cool/horrifying things to circumvent anti-bias measures.

One great example was an AI that was "blinded" to race when making life-changing decisions.

Hooray! We fixed the racism problem!

But alas...
the AI was smart enough to synthesize a proxy for race to implement racist decisions.

That's because race correlated well with the variable it was trying to match in the training data, thanks to the underlying racism. After being "blinded" to race, the model discovered that postcode (in this case, a stand-in for race) correlated just as well with the decisions of the system it was trying to replace.

And it didn't *tell* anyone it was doing this. It just derived it itself.
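
To make the mechanism concrete, here's a toy reconstruction of that failure mode. All the data, numbers, and feature names below are invented; the point is the mechanism, not any specific real system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic world: race is never shown to the model, but postcode
# correlates with it (think residential segregation).
race = rng.integers(0, 2, n)                               # hidden attribute
postcode = np.where(rng.random(n) < 0.9, race, 1 - race)   # 90% aligned
merit = rng.random(n)                                       # a legitimate signal

# The historical decisions the model is trained to imitate are biased:
# group 1 was penalized regardless of merit.
historical_decision = ((merit - 0.3 * race) > 0.35).astype(int)

# "Blinded" model: race is excluded from the features...
X = np.column_stack([merit, postcode])
model = LogisticRegression().fit(X, historical_decision)

# ...yet postcode picks up a large negative weight, i.e. the model
# has quietly reconstructed the racial penalty via its proxy.
print(dict(zip(["merit", "postcode"], model.coef_[0].round(2))))

approval = model.predict(X)
print("approval rate, group 0:", approval[race == 0].mean().round(2))
print("approval rate, group 1:", approval[race == 1].mean().round(2))
```

Dropping the sensitive column isn't the same as removing its influence: any correlated feature lets the model rebuild it, without ever telling you.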