Generative AI models are trained on existing images that have been tagged. If an ordinary Jewish activity looks just like any ordinary human activity, there's no reason for it to be tagged as Jewish — which is why the AI will produce stereotypes for any term that *is* tagged. The model isn't antisemitic; it's incomplete. Unless you're Jewish and training a model exclusively for Jewish purposes, you're not going to get better tagging than this, and I can't imagine how OpenAI could improve the situation without simply removing or blocking potentially problematic terms. As a Jew, I'm more concerned by how much antisemitic training material is available than by the AI's reproduction of it, but I suppose I would be more comfortable if they blocked non-specific ethnic terms like "Jew" while still letting us include specific terms like "menorah"...

Written by Adam Fisher / fisher king (@therightstuff)

Software developer and writer of words, currently producing a graphic novel adaptation of Shakespeare's Sonnets! See http://therightstuff.bio.link for details.