I was writing a blog post tangentially referencing the book "Weapons of Math Destruction" (which itself points to the unfair and sometimes deadly consequences of algorithms) when a friend shared the link below.

Leaked data from Persona (an identity verification company) only corroborates the premise of Weapons of Math Destruction, extended in the worst way, along with the recycling of bad practices from a decade ago...

From the post:

"The blog [from the security researcher] claims that 2,456 source files expose 269 verification checks offered to government customers, including checks for whether a face looks “suspicious,” and two parallel systems for politically exposed persons (PEPs)."

I wonder what is being used to check whether a face looks suspicious? It's not an AI system that is based on data that inherently discriminates against marginalized people, right? ... Right?

cybernews.com/privacy/persona-

