Do you need social media insurance?

Do you live on social media? You might need social media liability insurance. Why? There have been quite a few cases where an errant tweet or a passionate Facebook comment made in haste (or anger) has resulted in thousands of dollars paid in punitive damages. Watch this video and learn the risks of social media liability and how you can reduce the danger of lawsuits. Umm… apart from using common sense and/or posting only lovely cat videos.


# Why You Might Need Libel Insurance for Your Tweets
# Insurance Companies Offer Protection from Libel Lawsuits
# Jack Monroe wins Twitter libel case against Katie Hopkins
# How to Report Slander on Twitter
# Facebook Defamation Case Sets New Standard for Social Media Commentary
# Trump defamation by Twitter case tossed out
# Estranged husband awarded $12.5k over Facebook post
# Did You Know You Can Be Sued for Libel for Your Tweets?
# Careful What You Say On Facebook — You Could Wind Up Paying $500K
# 10 Astonishing Lawsuits That Happened Because Of Social Media


This is Why Big Data is a Civil Rights Issue

Big data algorithms are dangerous.

If you want to learn something truly fascinating that you might not have considered before, Google “big data is a civil rights issue.” As you scan the search results and glean tidbits from the descriptions, or skim the various articles, you will no doubt see that the world is sitting on a powder keg of outrage. Algorithms and automation threaten to divide my country more profoundly than any political unrest over Civil War statues ever could. Indeed, when it comes to racial disparity in opportunity or advancement, nothing is more injurious than the unintended consequences of machine learning. When I consider the effects on society and the labor market, I shudder at the pending reality of it all.

In preparation for an online conversation with Recruiting Live, I read up on current developments in artificial intelligence. The topic of AI has long been a fascination of mine. In fact, I gave a presentation on Big Data a few years ago and tangentially touched on how civil rights were being violated. My, my, my, how things have progressed for the worse since then.

What I find most ironic about these developments is that they are not wholly malicious in nature; they are unintentional, and that concerns me most of all. Take, for example, the racial bias inherent in facial recognition software.

In Los Angeles, there are 16 “undisclosed locations” where the public is being monitored by police surveillance cameras. Said cameras can recognize individuals from up to 600 feet away. According to The Atlantic, “The faces they collect are then compared, in real-time, against ‘hot lists’ of people suspected of gang activity or having an open arrest warrant… Considering arrest and incarceration rates across L.A., chances are high that those hot lists disproportionately implicate African Americans… Facial-recognition systems are more likely either to misidentify or fail to identify African Americans than other races, errors that could result in innocent citizens being marked as suspects in crimes.” [1] Several states enroll driver’s license data into their facial recognition databases, which helps them identify suspected fugitives caught on camera. I think that is a good thing; in fact, I applaud it. However, I have one bit of apprehension: who inspects these facial recognition systems for accuracy? As far as I have been able to discern, police are not required to check these systems for bias.

Perhaps the rationale against testing machines for discriminatory factors is that a machine is identifying people, so there can be no racial bias. To that argument, I would offer a few examples, most notably from Google. When I Google the term “unprofessional hairstyles for work” and refine by “face,” I see African American women overwhelmingly represented. Does Google think that the various hairstyles of African American women are unprofessional? Google, the company, probably does not. Google, the machine, thinks otherwise. Were these results engineered to appear this way? I doubt it.


[Image: bias in algorithms. Google search results on 8/24/2017]

Consider another instance of search engine faux pas: when Google was identifying black people as gorillas. Does Google believe that black people bear a resemblance to simians? Google, the company, probably does not. Google, the machine, thinks otherwise. At least, it used to before Google, the company, made adjustments. (Apologies for the language in the tweet below.)

If you would indulge me, allow me another Google example. Google used to feature mugshots at the top of search results for people with “black-sounding” names. Latanya Sweeney, a black professor of government and technology at Harvard University and founder of the Data Privacy Lab, brought this to the public’s attention in 2013 when she published her study of Google AdWords. She found that when people searched Google for names that traditionally belong to black people, the ads shown were for arrest records and mugshots. [2]

Did you know that “Online ads for high-paying jobs are shown more often to men than women, according to a recent study from researchers at Carnegie Mellon University…”? [3] In the study, ads for a job coaching website advertising “$200k+ Jobs – Execs Only” were shown roughly 1,800 times to the “male” profiles and only around 300 times to the “female” profiles. According to Anupam Datta, one of the researchers on the study and an associate professor in Computer Science and Electrical and Computer Engineering at Carnegie Mellon, “It could be that Google’s machine learning algorithm over time may have inferred that more males were clicking on these [career services] ads and the system optimized to show them to males. If this is so, it is then unintentional automated discrimination….”

Another case of unintentional automated discrimination is inherent in retail theft databases. Did you know that people accused of retail theft by an employer, even if they are never prosecuted, may have trouble landing a job in the future? There are at least three of these databases in operation. Sheryl Harris points them out and offers some compelling commentary. [4]

  • The National Retail Mutual Association, for example, has collected more than 500,000 incidents of employee theft in its NRMA Retail Theft Database. NRMA accepts reports from client stores who have obtained a signed confession, a signed restitution agreement, a fully paid civil demand, a criminal conviction or other “documentary evidence.”
  • Choicepoint, the giant data aggregator, says its Esteem workplace theft database collects reports from more than 75,000 retail stores that provide an employee’s signed confession or proof of a theft conviction.
  • HireRight has an employee-theft database to which 500 member companies contribute a signed confession, evidence of a conviction, video surveillance or eyewitness statements.

Fortunately, for those concerned, you have a right to get a free copy of your report every 12 months. [Find that info here] No such right exists for algorithms, at least not here in the USA. If an algorithm denies you an opportunity or discriminates against you unfairly, for whatever reason, there is no recourse. Whom do you petition to review the algorithmic practices of companies that have denied you credit, prevented you from getting work in your industry, or engaged in any other unfair practice? Fortunately, there is a glimmer of hope; actually, two beams of optimism: one from the UK and the other from India.

Last year, TechCrunch reported that “A UK parliamentary committee has urged the government to act proactively — and to act now — to tackle ‘a host of social, ethical and legal questions’ arising from growing usage of autonomous technologies such as artificial intelligence… Publishing its report into robotics and AI today, the Science and Technology committee flags up several issues that it says need ‘serious, ongoing consideration’ — including: taking steps to minimise bias being accidentally built into AI systems.” [5] The article also noted that the “EU’s incoming General Data Protection Regulation (GDPR) — which comes into force for EU Member States in 2018 — creates a ‘right to explanation’ for users, whereby they will have the right to ask for ‘an explanation of an automated algorithmic decision that was made about them…’”

In India, there has been a truly Orwellian movement to link all of its citizens’ accounts to a single biometric ID needed to access any and all governmental services. From what I can tell, if you want to file your taxes, open a bank account, or get a mobile phone, all of that data is tied to the Aadhaar biometric system, which is controlled by the Indian government. [6] Fortunately for the Indian people (and, I think, the world), India’s Supreme Court ruled that privacy is a fundamental right for its citizens [7] and the effort seems to be stalled, at least for now. The court judgment is 547 pages and I have not made the time to dive into it just yet, so no details from me on that. However, if you can’t wait for me to revisit this in the future, I was just made aware of this article [8], which gives analysis on the matter.

So, after reading all of the above, what are your thoughts on the matter? My hope is that you recognize the dangers of unchecked algorithms. Perhaps you will start a company that does risk assessments on algorithms for disparate impact on consumers? Maybe you will lobby the government to create an agency to watchdog the effects of machine learning algorithms? Or perhaps you will curl into a ball, mortified by the impending doom looming over us all. If so, fear not; the end, while near, is not upon us. There is still time to take precautions and stave off the AI apocalypse. If you must fear anything concerning these matters, let your trepidation be for those who have been made aware of the dangers and still choose inaction. Hmm… I think that’s us.

[1] Facial-Recognition Software Might Have a Racial Bias Problem
[2] Algorithmic accountability
[3] Women shown fewer online ads for high-paying jobs, study shows
[4] Retail theft databases could make it hard for workers accused of stealing to find another job: Sheryl Harris
[5] AI accountability needs action now, say UK MPs
[6] Why I’m holding off on getting my Aadhaar number for as long as possible
[7] India has won the battle for a right to privacy – now for the war on Aadhaar
[8] FAQ: What SC’s Right to Privacy Judgment Means for Aadhaar and Mass Surveillance

# Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks
# Inspecting Algorithms for Bias
# FaceApp apologizes for building a racist AI
# The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think by Eli Pariser
# AI programs are learning to exclude some African-American voices < HT to Jackye Clayton

Have you subscribed to my blog yet? If not, why not? Subscribe now!

How To Find (More) Female Engineers on LinkedIn

I was pleasantly surprised to get a shout-out from fellow comics geek and SourceCon superfan – Jess Roberts. Jess recapped one of the strategies I shared at the SourceCon Atlanta meetup last month. (Good times!) Basically, it was all about finding female engineers on LinkedIn.

[Image: female software developer]

After skimming the article, I quickly scrolled down to the comments to see what other strategies my fellow sourcers had suggested. Alas, I saw none; then again, the article had just posted. So, as I wait with bated breath to discover new tactics from the sourcing community, I will throw one more tip out there.

Search the term “on maternity leave.” Yeah, simple, I know. How many men are on “maternity leave” these days? I would guess none. At this writing, there are 1,861 results.

[Image: How to find female software developers on LinkedIn]

As you may have noticed, on the bottom right side of the image, I refined my search to only those in the “Information Technology” and “Computer Software” industries. At this writing, there are 1,861 results! Refine the search further by adding titles. See for yourself here >

If I add “software developer” to the “Title” section of my search, I get 18 female software developers.

If I add “software engineer” to the Title, I get 43 results.

A search for programmer gets me 8 results.

If I add a tech word like Java, I get 4 results.

If I refine my search to only those women in the USA with the title of developer or engineer, further refined by the IT and software industries, I get a whopping 53 results. If I remove the industry restrictions, I get 139 results. Hmm… not a lot of results with this search, but some options to recruit nonetheless.
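For the programmatically inclined, the filter-and-refine logic above can be sketched in a few lines of Python. This is purely illustrative: LinkedIn offers no public search API for this, so the profile records and field names below are hypothetical mock data, not real LinkedIn fields.

```python
# Illustrative sketch of the search strategy described above, applied to
# mock profile records. The data and field names are hypothetical.

def match(profile, phrase="on maternity leave", industries=None, title_word=None):
    """Return True if a profile passes the same filters used in the article:
    a keyword phrase, an optional industry set, and an optional title word."""
    if phrase.lower() not in profile.get("summary", "").lower():
        return False
    if industries and profile.get("industry") not in industries:
        return False
    if title_word and title_word.lower() not in profile.get("title", "").lower():
        return False
    return True

profiles = [
    {"summary": "Currently on maternity leave", "industry": "Computer Software",
     "title": "Software Engineer"},
    {"summary": "On maternity leave until June", "industry": "Information Technology",
     "title": "Software Developer"},
    {"summary": "Open to new roles", "industry": "Computer Software",
     "title": "Programmer"},
]

tech = {"Information Technology", "Computer Software"}
hits = [p for p in profiles if match(p, industries=tech, title_word="software")]
print(len(hits))  # 2: the third profile lacks the phrase
```

Each added filter (industry, then title, then a tech keyword) narrows the pool, which is exactly why the result counts above shrink from 1,861 down to single digits.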

So, what do you think of this strategy? Let me know in the comments below or better yet, on the SourceCon blog with Jess’ blog post.

Until the next SourceCon event in Austin, TX (whoop-whoop), happy hunting!


Can you imagine a world without typing? It’s coming.

In case you missed it, the world just changed (again). Typing is out! Video and voice are the new normal for the next decade. Get ready to update all of your recruiting strategies.

Check out the video below, wow, and imagine what a world without typing will look like.

In India, inexpensive smartphones and data plans have brought an unlikely group of users online: the uneducated and illiterate, who are adapting apps to fit their own needs and skills.