Earlier: AI Can Detect Race From X-Rays Even When Humans Can't
Steve Sailer’s recent post about a bit of pearl-clutching over the uncanny ability of artificial intelligence (AI) to detect race on radiographs reminds me of other stories about insensitive robotic brains. AI is scary to Race Denialists because machine learning is based on pattern recognition taking place inside an artificial neural network that has never been taught not to notice. The associations that the machine makes are nonverbal and statistical, and they are hard to explain away.
"AI has a race problem," said Mutale Nkonde, a former journalist and technology policy expert who runs the U.S.-based non-profit organization AI For the People, which aims to end the underrepresentation of Black people in the U.S. technology sector. [AI has a racism problem, but fixing it is complicated, say experts, by Jorge Barrera and Albert Leung, CBC, May 17, 2021]
But the story also reminds me of a much earlier clash of racial reality and fantasy: skull x-rays, from my experience as a diagnostic radiologist. Back in the 1940s and 50s, x-ray techs in Baltimore were taught to increase the exposure “technique” on skull radiographs of black patients because of the greater thickness of the cranium in that group (a real phenomenon—see Thickness of the Normal Skull in the American Blacks and Whites, by A. Adeloye, K. R. Kattan, and F. N. Silverman, American Journal of Physical Anthropology, July 1975). Then the 1960s brought enlightened thinking on such matters, and all skulls received the same exposure regardless of the coloration of the skin around them.
As a result, black patients’ skull films were consistently underexposed and often had to be repeated. White liberals felt good about themselves, and black patients got over-radiated—in short, it was exactly the sort of trade-off that is generally accepted as social justice. Then automatic photo-timers came along, and everybody forgot about this little anti-racist triumph because the machine automatically made the necessary discriminatory adjustment.
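The photo-timer works by simple physics: it integrates the signal reaching a detector behind the patient and cuts off the beam once a preset dose has accumulated, so denser or thicker anatomy automatically gets a longer exposure. A minimal sketch of that feedback idea, using Beer–Lambert attenuation with entirely made-up numbers (the function name, coefficients, and units are illustrative assumptions, not real AEC calibration values):

```python
import math

def aec_exposure_time(attenuation_coeff_per_cm, thickness_cm,
                      incident_rate=1000.0, target_detector_dose=100.0):
    """Toy automatic-exposure-control (AEC) model.

    The photo-timer terminates the exposure once the detector has
    accumulated a preset dose, so exposure time scales inversely with
    the transmitted intensity (Beer-Lambert law). Arbitrary units.
    """
    transmitted_rate = incident_rate * math.exp(
        -attenuation_coeff_per_cm * thickness_cm)
    return target_detector_dose / transmitted_rate

# A thicker cranium transmits fewer photons per unit time, so the
# timer simply holds the exposure open longer -- no manual change
# of "technique" by the tech is required.
t_thin = aec_exposure_time(0.5, 0.6)   # hypothetical thinner skull
t_thick = aec_exposure_time(0.5, 0.9)  # hypothetical thicker skull
assert t_thick > t_thin
```

The point of the sketch is only that the compensation falls out of the dose-integration loop itself, which is why the manual exposure adjustment could be forgotten.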
Steve quotes an interesting claim by one of the anguished AI researchers: “humans literally cannot recognise racial identity from images.” Actually, they can under some circumstances: not only by cheating (patient name: Da’queesha Jackson) or identifying signs of genetic, race-specific pathology (“trivial proxies” in the language of the research paper) such as sickle-cell disease, but also by spotting characteristic differences in skull conformation and facial features.
Two different lateral skull radiographs. Can you guess the race without AI?
But radiologists are in the business of making correct interpretations as quickly as possible. The race of the patient seldom enters their consciousness, even when an ornamental gold incisor obscures the odontoid process on a cervical spine series or when marked steatopygia degrades the quality of an AP pelvis. Few practicing physicians have the time, interest, or spare mental energy to play at guessing race.
What I find particularly bizarre (and professionally insulting) is the researcher’s assumption that if doctors can find out a patient’s race, they will somehow mistreat him because of “bias.” I suspect that this concern, for which no evidence is offered, is a cover for a deeper fear. The objective demonstration of race and sex differences is a scary prospect because it threatens America’s civic religion of absolute human equality. Hence the trend to omit race from the clinical history.
Transsexual activists are now pushing to eliminate “gender” from the medical record too. Heavens, what if a smart machine could identify the patient’s sex? The radiologist might be able to tell a cervical remnant from an enlarged prostate, or an ER doc might differentiate pregnancy from dyspepsia. What a disaster that would be if it denied our equality!
The unending quest for social justice requires sacrifice, and if accurate diagnosis of race- or sex-specific conditions is a casualty of progress, well, at least we can all feel good about ourselves.