Large Language Models, colloquially called Artificial Intelligence by companies selling rebranded autocomplete tools to other companies and the public, can automate a lot of entry-level projects, but when it comes to anything more complex the flaws quickly show.

When Black people using social media exhibit possible depression in their prose, a paper claims LLM/AI tools don't detect it as readily as they do in people of other races.

Before we go on a social justice rampage, let's keep in mind that this is in PNAS, the journal that claimed an herbicide magically turns frogs 'gay' and that female-named hurricanes are more dangerous because of the patriarchy, and this was 800 Facebook posts. A high-quality analysis would use much larger datasets before going to the race card, so we should be skeptical, just as we are skeptical about Biden administration claims that mandating and subsidizing more electric cars will save us $100 billion in health care costs.

Let's get into it. This kind of claim comes up here and there. A while back, Microsoft was called racist because the Xbox Kinect tool didn't detect Black people as well as White people. Having more people of color at Microsoft would've fixed that, went the social justice refrain. Except it wouldn't have: the Kinect also detected White and Latino people poorly in low light. If I test a drug on 1,000 people and it is safe, using 2,000 doesn't improve anything - despite what government dictates about FDA approval. If numbers 2,001 through 2,005 harm someone, lawyers get out their fancy knives. No amount of testing can eliminate all risk; only a San Francisco jury chosen because they believe plants are little people (and so weedkillers can give humans cancer) would think 'needs more testing' leads to perfection.

If white people are creating LLMs, or the data that goes into them, there actually isn't much way to embed racism; LLMs don't work that way. Here is an example from a graphical 'AI': I used a picture of myself from Halloween to create what I might look like as a wizard for the Renaissance Faire.

It's nothing like a close-up of me transformed into a wizard. Yet is it racism because it changed my skin tone so drastically? If you have monetized culture wars, either in social currency or the real kind, you would certainly make that case if the skin tones had gone the other way around.

The more likely reason is that I am obviously not very good at this stuff. With that example, you can see how nearly all epidemiologists get so much wrong in statistics. They feel like they have converged on the best answer and they declare statistical significance, but if I ask all of you to rate me on a scale of 9 to 10, I can then authoritatively declare that the Internet rates me a 9.01, with statistical significance.(1)
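
To make the gag concrete, here is a minimal sketch in Python, with made-up numbers, of how a rigged scale manufactures significance: if respondents can only answer between 9 and 10, a standard one-sample t-test against any lower benchmark comes back 'significant' every time, and it tells you nothing.

```python
# Sketch only: hypothetical ratings on a scale that is capped between 9 and 10.
# Against any benchmark below 9, the test will always look "significant",
# regardless of whether the survey design means anything.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
ratings = rng.uniform(9, 10, size=500)  # made-up respondents, scale runs 9 to 10

mean_rating = ratings.mean()
# One-sample t-test against the neutral midpoint (5) of a normal 1-10 scale
t_stat, p_value = stats.ttest_1samp(ratings, popmean=5)

print(f"Mean rating: {mean_rating:.2f}")
print(f"p-value vs. a benchmark of 5: {p_value:.3g}")  # vanishingly small, "significant"
```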

The reason AI may not 'sense' a mental health issue in people of color is that the signal words used by White people may be different from those used by Black or Latino communities. Understanding that, the authors tried to tune the model with terms that more Black people may use, and it still did not work well: when it came to depression, the model was three times less predictive for Black people.
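
The paper's own pipeline isn't reproduced here, but a hedged sketch, with invented labels and predictions, shows what 'three times less predictive' typically means in practice: score the same model separately for each group and compare a metric such as recall on the depressed class.

```python
# Illustration only: made-up ground truth and model flags, not the paper's data.
import numpy as np

def recall(y_true, y_pred):
    """Fraction of truly depressed users the model actually flags."""
    true_pos = np.sum((y_true == 1) & (y_pred == 1))
    actual_pos = np.sum(y_true == 1)
    return true_pos / actual_pos if actual_pos else float("nan")

# Hypothetical ground truth (1 = depressed) and model predictions for two groups
white_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 1])
white_pred = np.array([1, 1, 1, 0, 0, 0, 0, 1, 1, 1])
black_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 1])
black_pred = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 1])

print(f"Recall, White subset: {recall(white_true, white_pred):.2f}")
print(f"Recall, Black subset: {recall(black_true, black_pred):.2f}")
```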

The social justice community may want to mandate that all of corporate America fire a bunch of currently employed people and hire to match the ethnic demographics of America, but that's just a way to have terrible sports teams - or terrible anything else where the goal is meritocracy and competence.

Instead, the problem is that too many companies, and academics, believe the hype about the promise of AI and ignore the reality. We fall prey to that a lot in our culture, from organic food to solar panels, and those are expensive lessons to learn.

Yet what we should not teach is that AI is racist unless it is written by people who match the changing demographics of a country, because even then it will be wrong. A New York cop has nothing at all in common with a San Francisco anti-science activist, but they both vote Democrat, so to AI they will converge on a template that looks like an average of the two. That is why AI shouldn't scare anyone - unless they believe its claims.

NOTES:

 (1) Think that doesn't happen? If a donor underwrites our epidemiology program, we can show that not only does it happen, it is the default at IARC, NIEHS, and the Harvard School of Public Health.