If you want to learn something truly fascinating that you might not have considered before, Google “big data is a civil rights issue.” As you scan the search results, gleaning tidbits from the descriptions, or skim the various articles, you will no doubt see that the world is sitting on a powder keg of outrage. Algorithms and automation threaten to divide my country more profoundly than any political unrest over Civil War statues ever could. Indeed, when it comes to racial disparity in opportunity and advancement, nothing is more injurious than the unintended consequences of machine learning. When I consider the effects on society and the labor market, I shudder at the pending reality of it all.
In preparation for an online conversation with Recruiting Live, I read up on current developments in artificial intelligence. The topic of AI has long been a fascination of mine. In fact, I gave a presentation on Big Data a few years ago and tangentially touched on how civil rights were being violated. My, my, my how things have progressed for the worse since then.
What I find most ironic about these developments is that they are not malicious in nature; they are unintentional, and that concerns me most of all. Take, for example, the racial bias inherent in facial recognition software.
In Los Angeles, there are 16 “undisclosed locations” where the public is being monitored by police surveillance cameras. Said cameras can recognize individuals from up to 600 feet away. According to The Atlantic, “The faces they collect are then compared, in real-time, against “hot lists” of people suspected of gang activity or having an open arrest warrant… Considering arrest and incarceration rates across L.A., chances are high that those hot lists disproportionately implicate African Americans… Facial-recognition systems are more likely either to misidentify or fail to identify African Americans than other races, errors that could result in innocent citizens being marked as suspects in crimes.” Several states enroll driver’s license data into their facial recognition databases, which helps them identify suspected fugitives caught on camera, and I think that is a good thing; in fact, I applaud it. However, I have one bit of apprehension: who inspects these facial recognition systems for accuracy? As far as I have been able to discern, police are not required to check these systems for bias.
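The kind of inspection I have in mind would not be complicated to start. Here is a minimal sketch, with made-up field names and toy data rather than any real system's output, of how an auditor could tally false-match and missed-match rates per demographic group from a labeled evaluation set. In this invented example, one group is false-matched at four times the rate of the other, exactly the sort of disparity no one is currently required to look for.

```python
from collections import defaultdict

def error_rates_by_group(results):
    """Tally per-group error rates for a face recognition system.

    `results` is a list of dicts with illustrative (hypothetical) keys:
      'group'   - demographic label attached to the probe image
      'matched' - True if the system reported a hot-list match
      'is_true' - True if the probe really was the person on the list
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for r in results:
        c = counts[r["group"]]
        if r["is_true"]:
            c["pos"] += 1
            if not r["matched"]:
                c["fn"] += 1          # missed a real match
        else:
            c["neg"] += 1
            if r["matched"]:
                c["fp"] += 1          # false-matched an innocent person
    return {
        g: {
            "false_match_rate": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "missed_match_rate": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for g, c in counts.items()
    }

# Toy evaluation data: the system false-matches group A far more often.
sample = (
      [{"group": "A", "matched": True,  "is_true": False}] * 8
    + [{"group": "A", "matched": False, "is_true": False}] * 92
    + [{"group": "B", "matched": True,  "is_true": False}] * 2
    + [{"group": "B", "matched": False, "is_true": False}] * 98
)
rates = error_rates_by_group(sample)
print(rates["A"]["false_match_rate"])  # 0.08
print(rates["B"]["false_match_rate"])  # 0.02
```

Nothing here is exotic; the hard part is not the arithmetic but getting departments to collect labeled evaluation data and publish numbers like these.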
Perhaps the rationale against testing machines for discriminatory factors is that a machine is doing the identifying, so there can be no racial bias. To that argument, I would offer a few examples, most notably from Google. When I Google the term “unprofessional hairstyles for work” and refine the results by “face,” I see African American women overwhelmingly represented. Does Google think that the various hairstyles of African American women are unprofessional? Google, the company, probably does not. Google, the machine, thinks otherwise. Were these results engineered to appear this way? I doubt it.
Consider another search engine faux pas: when Google Photos was identifying black people as gorillas. Does Google believe that black people bear a resemblance to simians? Google, the company, probably does not. Google, the machine, thinks otherwise. At least, it did before Google, the company, made adjustments. (Apologies for the language in the tweet below.)
Google Photos, y’all fucked up. My friend’s not a gorilla. pic.twitter.com/SMkMCsNVX4
— Jacky Alciné ✈ NYC (@jackyalcine) June 29, 2015
If you would indulge me, allow me another Google example. Google used to feature mugshots at the top of search results for people with “black-sounding” names. Latanya Sweeney, a black professor of government and technology at Harvard University and founder of the Data Privacy Lab, brought this to the public’s attention in 2013 when she published her study of Google AdWords. She found that when people search Google for names that traditionally belong to black people, the ads shown are of arrest records and mugshots.
Did you know that “Online ads for high-paying jobs are shown more often to men than women, according to a recent study from researchers at Carnegie Mellon University…?” In the study, “Ads for careerchange.com, a job coaching website, advertising “$200k+ Jobs – Execs Only” were shown roughly 1,800 times to the “male” profiles and only around 300 times to the “female” profiles.” According to Anupam Datta, one of the researchers on the study and an associate professor in Computer Science and Electrical and Computer Engineering at Carnegie Mellon, “It could be that Google’s machine learning algorithm over time may have inferred that more males were clicking on these [career services] ads and the system optimized to show them to males. If this is so, it is then unintentional automated discrimination….”
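To see how this kind of “unintentional automated discrimination” can emerge, consider a deterministic toy model of a click-optimizing ad server. The segment labels and click rates below are hypothetical, not Google's actual data or algorithm; the point is only that a greedy optimizer, fed a tiny historical difference in click-through, will hand every remaining impression to the segment with the slightly better history.

```python
def allocate_impressions(history, budget):
    """Greedy ad-allocator sketch (hypothetical, for illustration).

    `history` maps segment -> (clicks, views). Every new impression goes
    to whichever segment currently has the higher observed click-through
    rate; each time a segment is shown, it accrues fractional clicks at
    its historical rate, a deterministic stand-in for real click feedback.
    """
    clicks = {s: float(c) for s, (c, v) in history.items()}
    views = {s: v for s, (c, v) in history.items()}
    rate = {s: c / v for s, (c, v) in history.items()}
    shown = {s: 0 for s in history}
    for _ in range(budget):
        # Pick the segment with the best observed click-through so far.
        seg = max(views, key=lambda s: clicks[s] / views[s])
        shown[seg] += 1
        views[seg] += 1
        clicks[seg] += rate[seg]  # clicks keep arriving at the old rate
    return shown

# A 0.2-percentage-point difference in historical click-through...
history = {"male": (32, 1000), "female": (30, 1000)}
shown = allocate_impressions(history, budget=5000)
print(shown)  # {'male': 5000, 'female': 0}
```

No one coded “show executive jobs to men” anywhere in that sketch; the skew falls out of optimizing clicks over biased history, which is precisely what makes it so hard to notice.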
Another case of unintentional automated discrimination is inherent in retail theft databases. Did you know that people accused of retail theft by an employer, even if they are never prosecuted, may have trouble landing a job in the future? There are at least three of these databases in operation. Cleveland.com points them out and offers some compelling commentary:
- The National Retail Mutual Association, for example, has collected more than 500,000 incidents of employee theft in its NRMA Retail Theft Database. NRMA accepts reports from client stores that have obtained a signed confession, a signed restitution agreement, a fully paid civil demand, a criminal conviction or other “documentary evidence.”
- ChoicePoint, the giant data aggregator, says its Esteem workplace theft database collects reports from more than 75,000 retail stores that provide an employee’s signed confession or proof of a theft conviction.
- HireRight has an employee-theft database to which 500 member companies contribute a signed confession, evidence of a conviction, video surveillance or eyewitness statements.
Fortunately, for those concerned, you have a right to get a free copy of your report every 12 months. [Find that info here] No such right exists for algorithmic decisions, at least not here in the USA. If an algorithm denies you an opportunity or discriminates against you unfairly, for whatever reason, there is no recourse. Whom do you petition to review the algorithmic practices of a company that has denied you credit, kept you from working in your industry, or treated you unfairly in some other way? Fortunately, there is a glimmer of hope, or rather, two beams of optimism: one from the UK and the other from India.
Last year, TechCrunch reported that “A UK parliamentary committee has urged the government to act proactively — and to act now — to tackle “a host of social, ethical and legal questions” arising from growing usage of autonomous technologies such as artificial intelligence…Publishing its report into robotics and AI today, the Science and Technology committee flags up several issues that it says need “serious, ongoing consideration” — including: taking steps to minimise bias being accidentally built into AI systems.” The article also noted that the “EU’s incoming General Data Protection Regulation (GDPR) — which comes into force for EU Member States in 2018 — creates a “right to explanation” for users, whereby they will have the right to ask for “an explanation of an automated algorithmic decision that was made about them…”
In India, there has been a truly Orwellian movement to link every citizen’s accounts to a single biometric ID required to access any and all government services. From what I can tell, if you want to file your taxes, open a bank account or get a mobile phone, all of that data is tied to the Aadhaar biometric system, which is controlled by the Indian government. Fortunately for the Indian people (and, I think, the world), India’s Supreme Court ruled that privacy is a fundamental right for its citizens, and the effort seems stalled, at least for now. The court judgment is 547 pages, and I have not yet made the time to dive into it; so, no details from me on that front. However, if you can’t wait for me to revisit this in the future, I was literally just made aware of this article, which gives analysis on the matter.
So, after reading all of the above, what are your thoughts concerning the matter? My hope is that you recognize the dangers of unchecked algorithms. Perhaps you will start a company that does risk assessments on algorithms for disparate impact on consumers? Maybe you will lobby the government to create an agency that watchdogs the effects of machine learning algorithms? Or perhaps you will curl into a ball, mortified by the impending doom looming over us all. If so, fear not: the end, while near, is not yet upon us. There is still time to take precautions and stave off the AI apocalypse. If you must fear anything concerning these matters, let your trepidation be for those who have been made aware of the dangers and still choose inaction. Hmm… I think that’s us.
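For anyone tempted by that first suggestion, a starting point already exists in US employment law: the EEOC's “four-fifths rule,” which flags adverse impact when any group is selected at less than 80% of the most-favored group's rate. Here is a minimal sketch of that screen, with hypothetical group names and counts; it is a first-pass audit check, not a legal determination.

```python
def four_fifths_check(selected, total):
    """Screen an automated decision for disparate impact.

    Applies the EEOC four-fifths rule: flag any group whose selection
    rate falls below 80% of the highest group's selection rate.
    `selected` and `total` map group name -> counts; the group names
    and numbers used below are purely illustrative.
    """
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {
        g: {"rate": r, "ratio": r / best, "flag": r / best < 0.8}
        for g, r in rates.items()
    }

# Hypothetical outcomes of an automated screening model:
result = four_fifths_check(
    selected={"group_a": 45, "group_b": 20},
    total={"group_a": 100, "group_b": 100},
)
print(result["group_b"]["flag"])  # True: 0.20 / 0.45 is well under 0.8
```

A four-line ratio test will not catch every harm discussed above, but it shows how little machinery is needed to begin holding an algorithm's outcomes up to the light.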
Sources:
- Facial-Recognition Software Might Have a Racial Bias Problem
- Algorithmic accountability
- Women shown fewer online ads for high-paying jobs, study shows
- Retail theft databases could make it hard for workers accused of stealing to find another job: Sheryl Harris
- AI accountability needs action now, say UK MPs
- Why I’m holding off on getting my Aadhaar number for as long as possible
- India has won the battle for a right to privacy – now for the war on Aadhaar
- FAQ: What SC’s Right to Privacy Judgment Means for Aadhaar and Mass Surveillance
Further reading:
- Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks
- Inspecting Algorithms for Bias
- FaceApp apologizes for building a racist AI
- The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think by Eli Pariser
- AI programs are learning to exclude some African-American voices < HT to Jackye Clayton
Have you subscribed to my blog yet? If not, why not? Subscribe now!