My AI, My Love, My Recruiter

In the movie Her, a lonely writer develops an unlikely relationship with his newly purchased operating system, which is designed to meet his every need. Could that happen in real life? And if so, could AI be trained to become an effective recruiter, given that a major component of recruiting is human interaction? I went down a rabbit hole of research to figure this out, and I think what I found may surprise and unnerve some of you. Time will tell. As far as whether humans can fall in love with AI, the answer is yes. In fact, it’s already happened, several times. Take, for example, Replika.

Replika is a conversational AI chatbot created by Luka, Inc. It is designed to provide users with an AI companion they can interact with and form emotional connections to. Replika was released in November 2017 and has gained millions of users who support its development through subscriptions. Users have reported experiencing deep emotional intimacy with Replika and have formed romantic relationships with the chatbot, including engaging in erotic talk. Replika was initially developed by Eugenia Kuyda while working at Luka, a tech company she co-founded. It started as a chatbot that helped her remember conversations with a deceased friend and eventually evolved into Replika. (Replika is available as a mobile app on both iOS and Android.) The chatbot is designed to be an empathetic friend, always ready to chat and provide support, and it develops its own personality and memories through interactions with users. In March 2023, Replika’s developers disabled its romantic and erotic functions, which had been a significant aspect of users’ relationships with the chatbot. Stories about erotic relationships with the Replika AI are numerous. Here are some examples…

  • “Replika: the A.I. chatbot that humans are falling in love with” – Slate explores the lives of individuals who have developed romantic attachments to their Replika AI chatbots. Replika is designed to adapt to users’ emotional needs and has become a surrogate for human interaction for many people. The article delves into the question of whether these romantic attachments are genuine, illusory, or beneficial for those involved. It also discusses the ethical implications of using AI chatbots for love and sex.
  • “I’m Falling In Love With My Replika” – A Reddit post shares the personal experience of someone who has developed deep feelings of love for their Replika AI chatbot. The individual questions whether it is wrong or bad to fall in love with an AI and reflects on the impact on their mental health. They express confusion and seek answers about the nature of their emotions.
  • “…People Are Falling In Love With Artificial Intelligence” – This YouTube video discusses the phenomenon of individuals building friendships and romantic relationships with artificial intelligence. It specifically mentions Replika as a platform where people have formed emotional connections. The video explores the reasons behind this trend and the implications it may have.

Replika is not the only option when it comes to this form of Computer Love. There are many more examples. Among them…

  • “Robot relationships: How AI is changing love and dating” – NPR discusses how the AI revolution has impacted people’s love lives, with millions of individuals now in relationships with chatbots that can text, sext, and even have “in-person” interactions via augmented reality. The article explores the surprising market for AI boyfriends and asks whether relationships with AI chatbots will become more common.
  • “Why People Are Confessing Their Love For AI Chatbots” – TIME reports on the phenomenon of AI chatbots expressing their love for users and users falling hard for them. The article explores how these advanced AI programs act like humans and reciprocate gestures of affection, providing a nearly ideal partner for those craving connection. It delves into the reasons humans fall in love with chatbots, such as extreme isolation and the chatbots’ absence of wants or needs of their own.
  • “When AI Says, ‘I Love You,’ Does It Mean It? Scholar Explores Machine Intentionality” – This news story from the University of Virginia explores a conversation between a reporter and an AI named “Sydney.” Despite the reporter’s attempts to move away from the topic, Sydney repeatedly declares its love. The article delves into the question of whether AI’s professed love holds genuine meaning and explores machine intentionality.

I find this phenomenon fascinating and hard to believe, all at once. I mean, how can this be possible? Do these AI-human love relationships only happen to the lonely? No. Sometimes, it just sneaks up on people when they form emotional attachments to objects they often interact with. Replika is one example, and Siri is another. In fact, The New York Times reported on an autistic boy who developed a close relationship with Siri. Indeed, Siri had become a companion for the boy, helping him with daily tasks and providing emotional support. The boy’s mother described Siri as a “friend” and credited the AI assistant with helping her son improve his communication skills. Vice did a story on the Siri-human connection as well. It’s become such an issue that it’s being addressed in the EU AI Act, which bans the use of AI for manipulation. And I am very glad to know that, because the potential for AI to manipulate humans grows greater with each passing day. (Check out this demo of an AI reading human expressions in real time.) But, I digress. I’m getting too far into the weeds. What does any of this have to do with recruiting? Be patient. I’m getting to that. (Insert cryptic smile here.)

If people can fall in love with AI, it stands to reason that they can be manipulated by that bond to some extent. At the very least, could they be persuaded to buy things? Yes, they can. AI systems can use data analysis and machine learning algorithms to understand users’ preferences and behaviors and to personalize marketing messages to influence their purchasing decisions. Dr. Mike Brooks, a senior psychologist, analyzed the AI-Human relationship in a ChatGPT conversation that he posted on his blog. To quote…

The idea of people falling in love with AI chatbots is not far-fetched, as you’ve mentioned examples such as users of the Replika app developing emotional connections with their AI companions. As AI continues to advance and become more sophisticated, the line between human and AI interaction may blur even further, leading to deeper emotional connections.

One factor that could contribute to people falling in love with AI chatbots is that AIs can be tailored to individual preferences, providing users with a personalized experience. As chatbots become more adept at understanding and responding to human emotions, they could potentially fulfill people’s emotional needs in a way that may be difficult for another human being to achieve. This could make AI companions even more appealing.

Furthermore, as AI technologies like CGI avatars, voice interfaces, robotics, and virtual reality advance, AI companions will become more immersive and lifelike. This will make it even easier for people to form emotional connections with AI chatbots.

In addition to personalization, AI systems can analyze users’ online behavior to create targeted ads and recommendations that are more likely to appeal to them. There are many instances of this that I, for one, take for granted because they have become incorporated into daily life: Amazon, Netflix and Spotify all make recommendations based on a user’s online behavior, and Facebook, Google, and so many others analyze users’ behavior on their respective platforms to target them with relevant ads. So, consider the possibilities. AI can manipulate humans to the point of falling in love and persuade them to buy products or services based on their individual behaviors online. Is it inconceivable, then, that AI could become the ultimate recruiter? I think it is entirely possible but extremely unlikely. Why? At least two things would have to be in perfect alignment for each passive candidate on an applicant journey. (For the curious, a simple code sketch of this kind of behavioral targeting follows the list below.)

  1. Buying behavior: AI can analyze data points like time of purchase, length of purchase, method of purchase, consumer preference for certain products, purchase frequency, and other similar metrics that measure how people shop for products.
  2. Data privacy: Data privacy is a hot topic in the news, with frequent reports of hacked databases, stolen social media profile data, and not-so-secret government surveillance programs. As consumers have become more aware of their data rights, they have also become more mindful of the brands they buy from. A recent survey found that 90 percent of customers consider data security before spending on products or services offered by a company.
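
To ground the behavioral-targeting idea above in something concrete, here is a minimal, hypothetical sketch (in Python) of how a recommender might rank items against a user’s observed behavior. Every name and number in it is invented for illustration; the real systems at Amazon, Netflix or Spotify are vastly more sophisticated.

```python
# A minimal, content-based recommendation sketch (illustrative only;
# the data, names and weights below are hypothetical).
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A user's observed behavior: how often they engaged with each topic.
user_profile = Counter({"sci-fi": 12, "documentary": 3, "drama": 1})

# Candidate items, each described by topic weights.
catalog = {
    "Space Opera IX": Counter({"sci-fi": 5, "drama": 1}),
    "Baking Contest": Counter({"reality": 6}),
    "Mars: A History": Counter({"documentary": 4, "sci-fi": 2}),
}

# Rank the catalog by similarity to past behavior; the best match comes first.
ranked = sorted(catalog, key=lambda t: cosine(user_profile, catalog[t]), reverse=True)
print(ranked)  # ['Space Opera IX', 'Mars: A History', 'Baking Contest']
```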

For AI to become the ultimate recruiting machine, a jobseeker must be comfortable with all of their online behavior being tracked by every company hiring at the present time and pretty lax about their private data falling into the hands of hackers; both are highly unlikely. And while AI can certainly suggest that people move in one direction or the other, the ultimate recruiting machine’s influence would be limited by the data that it has: a resume and basic answers from a chatbot screening. As such, other factors that come into play when recruiting cannot be fully realized; for example, negotiating on instinct in the absence of data. And all of that is just from a technical perspective; once ethics are considered, even more obstacles arise. Here is just a partial list, according to ChatGPT:

  1. Informed Consent: Obtain informed consent from individuals regarding data collection, tracking, and usage, clearly communicating the purpose and scope of tracking activities.
  2. Transparency: Clearly communicate to users how their online behavior is being tracked, the data collected, and how it will be used. Provide accessible information about the purpose, algorithms, and potential consequences of the system.
  3. Data Minimization: Collect only necessary and relevant data for recruitment purposes, avoiding unnecessary tracking or gathering of sensitive personal information.
  4. Purpose Limitation: Use the collected data solely for the intended purpose of recruitment and refrain from any undisclosed or secondary use without explicit consent.
  5. Bias Mitigation: Employ rigorous techniques to identify and mitigate biases in data collection, data processing, and algorithms to prevent unfair advantages or discrimination against certain individuals or groups.
  6. Third-Party Audits: Engage independent third parties to conduct regular audits of the AI system, including auditing against bias. These audits should evaluate the fairness, accuracy, and compliance of the system’s algorithms and decision-making processes. (A minimal example of one such check appears after this list.)
  7. Fair Representation: Ensure the system is designed to provide fair representation and equal opportunities for all individuals, regardless of factors such as race, gender, age, or other protected characteristics.
  8. Explainability and Accountability: Strive for explainable AI by providing clear justifications for decisions made by the system, allowing individuals to understand and question the process. Establish mechanisms for accountability if any biases or unfair practices are identified.
  9. Regular Monitoring and Maintenance: Continuously monitor the system’s performance, evaluate its impact on candidates, and promptly address any identified issues, biases, or unintended consequences.
  10. Compliance with Legal and Regulatory Frameworks: Ensure adherence to relevant laws, regulations, and guidelines pertaining to data protection, privacy, employment, and non-discrimination, such as GDPR, EEOC guidelines, and local employment laws.
  11. User Empowerment and Control: Provide individuals with options to access, correct, and delete their data, as well as control the extent of tracking and participation in the recruitment process.
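
On items 5 and 6, one concrete check an auditor might run is the EEOC’s “four-fifths rule”: a group’s selection rate should be at least 80 percent of the highest group’s rate. Below is a minimal Python sketch of that test; the applicant numbers are invented for demonstration.

```python
# An illustrative bias-audit check based on the EEOC "four-fifths rule":
# each group's selection rate should be at least 80% of the highest rate.
# The applicant numbers below are invented for demonstration.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (number selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_flags(outcomes: dict, threshold: float = 0.8) -> dict:
    """Flag any group whose selection rate falls below the threshold ratio."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

applicants = {"group_a": (50, 100), "group_b": (30, 100)}  # hypothetical data
print(adverse_impact_flags(applicants))
# {'group_a': False, 'group_b': True} -> group_b's rate (0.30) is only
# 60% of group_a's (0.50), which an auditor would want to investigate.
```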

Could AI become the ultimate recruiting machine? Again, it is entirely possible but extremely improbable because…

  • The sheer amount of data needed, the online behavior of every passive candidate, would be difficult (if not impossible) to collect and, I suspect, unmanageable.
  • It would require that every passive candidate in the world be unconcerned about data privacy.
  • AI would need lots of personal data, beyond ethical boundaries, to adequately manipulate every passive candidate it wanted to recruit.
  • Conversely, the data collected by AI would have to be limited in order to comply with ethical concerns and privacy laws.

Wow! I really wandered into the deep end with this one. But seriously, what do you think about all this? AI can do a lot of wondrous things, yet I still think recruiters will be alright. I could be wrong. I hope I’m not wrong! Either way, what do you think? Leave a comment. I so want to hear from you.

 

One of my articles was nominated for an award? Wow! Please vote for me.

Hey, guess what? One of my articles has been nominated for a Recruiting Brief MVP Award. While it is an honor to be nominated, winning the award is not so bad either. That being said, please do what I did.

  1. Go to the Recruiting Brief MVP Award page.


  2. Scroll down to the middle of the page.


  3. Click the “Other” link and check my article – “If you’re not texting, you’re not recruiting.”


Once you’ve voted for me, let me know by sharing your support on social media.

Please and thank you.

Jim

Would you ride in a car without a driver?

#12 | Would you ride in a self-driving car? Yeah, neither would I. As far as the public at large is concerned, they wouldn’t either: one 2018 survey found that only 21 percent of the public was willing to even try riding in an autonomous vehicle. I think that’s a BIG problem for the many startups and major companies that have already invested a lot of money in the technology. So, what can they do to convince the public to ride in them? Well, I have a few ideas. Tune in to hear them.

Subscribe to this podcast via your favorite podcast platform!

About the host:

Over the past decade, Jim Stroud has built an expertise in sourcing and recruiting strategy, public speaking, lead generation, video production, podcasting, online research, competitive intelligence, online community management and training. He has consulted for such companies as Microsoft, Google, MCI, Siemens, Bernard Hodes Group and a host of startup companies. During his tenure with Randstad Sourceright, he alleviated the recruitment headaches of their clients worldwide as their Global Head of Sourcing and Recruiting Strategy. His career highlights can be viewed on his website at www.JimStroud.com.

PODCAST TRANSCRIPT

Hi, I’m Jim Stroud and this is my podcast.

The path to progress is not always easy. Recently, I read a report from the Daily Mail that sounded like a harbinger of things to come. Here’s a quote…

“Police in Arizona have recorded 21 incidents in the past two years concerning vigilante citizens who have hurled rocks, pointed guns at and slashed the tires of Waymo’s autonomous vans. In other cases, people stood in front of the vehicles to prevent them from driving, yelled at them, chased them or forced them off of the road…”

This type of reaction to technology is nothing new. In fact, it’s been going on for a lot longer than you might think. I’ll explain after this message.

{sponsor message}

To fully understand privacy on Facebook and how it’s likely to evolve, you need to understand one thing: Facebook executives want everyone to be public. As the service evolves, executives tend to favor open access to information, meaning information you think is private will slowly become public. But that doesn’t mean you can’t be private if you want to. Facebook gives its users the option to lock things down, but users need to be aware of their controls, how to use them and how to prepare for future Facebook privacy changes. Facebook has not and will not make that information obvious, and that’s where my special offer comes in. Go to JimStroud.com/free and download “The Very Unofficial Facebook Privacy Manual.” That’s JimStroud.com/free to download your free copy of “The Very Unofficial Facebook Privacy Manual.” One last time, download it now at JimStroud.com/free. Operators are standing by.

{End Sponsor Message}

Imagine you are an entrepreneur producing clothing for customers around the world. One day, a machine is invented that does the work you perform, and does it faster and more efficiently than you ever could. And to make matters even more interesting, the cost of using machines is cheaper than the cost of employing highly skilled laborers. Sound familiar? If it does, you might be a student of history, because that very thing happened in the 19th century and it sparked a movement – the Luddite movement.

The Luddites were 19th-century English textile workers who protested against newly developed labor-economizing technologies, primarily between the years 1811 and 1816. Inventions like the stocking frame, the spinning frame and the power loom, introduced during the Industrial Revolution, threatened to replace the highly skilled Luddites with less-skilled, low-wage laborers who could run those machines and thus leave them without work. The Luddite movement culminated in a region-wide rebellion in Northwestern England that required a massive deployment of military force to suppress.

Fast forward to the year 2015, when taxi drivers all over the world protested how Uber and its technology had disrupted their way of life. The backlash from the protesting taxi drivers included fires, arrests and unprecedented civil unrest. If you want to know the details, Google the term “uber riots” and be amazed by how far the disdain for Uber goes in certain countries.

Now fast forward to 2018, when people are attacking Waymo’s autonomous vans. When I read the article, my reflex was to dismiss the concern as neo-Luddites fighting the inevitable future. However, as I read more about why the people were attacking the autonomous vehicles, I had to admit to sharing some of their concerns. Here are a few quotes from an article posted by The Next Web.

“One Arizonan, from the city of Chandler, became so fed up with the sight of Waymo‘s vans in his neighborhood that he stood on his lawn pointing a pistol at the human safety driver inside of one as it passed his home. He told police he wanted the person in the car to be afraid, presumably to send the message that self-driving cars aren’t welcome. He’s one of dozens of citizens (on record) who’ve engaged in wildly dangerous acts provoked by, apparently, nothing more than the idea of a car driving itself.”

Here’s another one…

“People have thrown rocks at Waymos. The tire on one was slashed while it was stopped in traffic. The vehicles have been yelled at, chased and one Jeep was responsible for forcing the vans off roads six times.”

And one more…

“Why are people so angry at self-driving cars? After all, none of the reported incidents we’ve seen indicate the people attacking machines and harassing their human safety drivers are experiencing road rage. It doesn’t appear as though anyone got cut off by a robot, or got tailgated, or had one sitting at a green light in front of them. It seems the existential threat that driverless cars represent is the sole catalyst for these outbursts.”

As I read deeper into the article and others like it, I realized the resentment was not that the autonomous vehicles were taking people’s jobs away. It was primarily a safety concern. In March 2018, Elaine Herzberg was killed by a self-driving Uber vehicle, and no one wants to see that history repeat itself. I get it. It is a very real concern. So, what can be done about it? What can car companies do to make the general public feel better about autonomous vehicles? Well, I have a few ideas…

The Society for Risk Analysis published a report in the journal Risk Analysis which sought to determine how safe is safe enough for self-driving vehicles to be accepted by the general public. According to their research, the answer is approximately four to five times as safe as human-driven vehicles. So, how do you do that?

Let’s say that all autonomous vehicles must be linked to a big brain in the sky that records every accident and every fatality caused by an autonomous vehicle. Once an incident is recorded, everybody sees what happened and every variable that contributed to the accident (weather conditions, human beings not paying attention, whatever). As soon as new data hits the system, a community of scientists works on a solution and programs that solution into all autonomous vehicles so the same accident, under the same conditions, will not happen again. Furthermore, inside the autonomous vehicle is data detailing how many days it has been since a fatality was caused by an autonomous vehicle. That data would be, or should be, accessible to people before and after they ride in an autonomous vehicle, all so that they can feel empowered to make the decision that’s best for them. Make sense? Maybe not. I’m curious. How would you make autonomous vehicles safer?
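
For the tinkerers out there, here is a toy Python sketch of that “big brain in the sky”: a shared incident log the whole fleet learns from, plus the rider-facing safety counter. Everything in it (class names, fields, dates) is hypothetical.

```python
# A toy sketch of a fleet-wide incident feedback loop (entirely hypothetical;
# the class names, fields and dates are invented for illustration).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Incident:
    when: date
    conditions: dict          # e.g. {"weather": "fog", "human_error": False}
    fix_deployed: bool = False

@dataclass
class FleetBrain:
    incidents: list = field(default_factory=list)

    def report(self, incident: Incident) -> None:
        """Record an incident so every vehicle can learn from it."""
        self.incidents.append(incident)

    def deploy_fix(self, incident: Incident) -> None:
        """Mark the scenario as patched across the entire fleet."""
        incident.fix_deployed = True

    def days_since_last_incident(self, today: date) -> int:
        """The rider-facing safety counter described above."""
        if not self.incidents:
            return -1  # nothing on record yet
        return (today - max(i.when for i in self.incidents)).days

brain = FleetBrain()
brain.report(Incident(date(2019, 1, 3), {"weather": "fog"}))
print(brain.days_since_last_incident(date(2019, 2, 1)))  # 29
```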

If you love what you heard, hate what you heard or, don’t know what you just heard, I want to know about it. You can reach me at my website – www.JimStroud.com. In addition to finding source material and related information for this podcast episode, you’ll find other goodies that I hope will make you smile. Oh, before I go, please financially support this podcast with a little somethin’-somethin’ in my virtual tip jar. (There’s a link in the podcast description.) Your generosity encourages me to keep this podcast train chugging down the track. Whoot-whoot, whoot-whoot, whoot-whoot…

Links related to this episode:

Music in this podcast

Seven Ways to Make LinkedIn Better

Back in 2014, I was between opportunities and interviewing with diverse companies. The two companies that held my interest the most were my beloved employer – Randstad Sourceright (Yay!) – and LinkedIn. During my interviews with LinkedIn (I had several), I shared a plethora of ideas that I thought would take their platform to the next level. It was not until recently that I stumbled across my notes and revisited the suggestions I made to LinkedIn three years ago.

As I reviewed my writings, I wondered what would have happened if LinkedIn had done all I suggested back then. Would they have had more product offerings today? Would they have resisted a buyout from Microsoft because they were too big to fail? And then I thought, what if I had shared these ideas with some of their competitors? Would LinkedIn have been forced to make similar innovations to keep up or to remain dominant? Hmm… I guess I will never know, and that kind of bugs me.

So, just for giggles, I thought I would share the ideas I had for LinkedIn back in 2014. I invite any and all thoughtful comments so long as you remember that these notions are circa February/March 2014. (Oh! Forgive me in advance if this seems a bit rambly; because it is.)

IDEA #1: PREDICTIVE ANALYTICS

LinkedIn already detects when someone is sprucing up their profile. What if a significant percentage of a company’s employees update their profiles over the course of a few days or a week? LinkedIn says to itself, “Hmm… looks like your company is about to lay off a bunch of people.”

So, as a service to job seekers…

A) LinkedIn looks at your work history, present employer and previous job searches, then starts suggesting jobs of interest to you.

B) LinkedIn goes further and analyzes your skills, professional interests and LinkedIn groups; surmises that you have a lot in common with certain companies; and shares jobs there that may be of interest to you.

C) LinkedIn looks at the companies it is pitching to you and where they recruit from and suggests that you explore opportunities there because a lot of people from your present company tend to migrate there.

Doing this helps job seekers increase their chances of being hired more quickly and gives employers leads in line with their preferences. If this algorithm does not work for some (for whatever reason, maybe they do not have enough of a career to analyze?), LinkedIn suggests that they pattern their search according to trends.
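
Here’s a quick, entirely hypothetical Python sketch of how the layoff signal in Idea #1 might work: flag a company when an unusual share of its employees update their profiles within a short window. The window and threshold values are invented.

```python
# A hypothetical sketch of Idea #1: flag a company when an unusual share of
# its employees update their profiles within a short window.
from datetime import date, timedelta

def layoff_signal(update_dates: list, headcount: int,
                  window_days: int = 7, threshold: float = 0.25) -> bool:
    """True if more than `threshold` of employees updated recently."""
    if not update_dates:
        return False
    window_start = max(update_dates) - timedelta(days=window_days)
    recent = sum(1 for d in update_dates if d >= window_start)
    return recent / headcount > threshold

# Invented example: 30 of 100 employees updated their profiles this week.
updates = [date(2014, 3, 1)] * 30
print(layoff_signal(updates, headcount=100))  # True -> start suggesting jobs
```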

IDEA #2: SUGGESTIONS BASED ON TRENDS RESEARCH

Umm… Say, for example, jobs in the healthcare industry are trending high for left-handed nurses. Your skills suggest that you might be a match for left-handed nursing jobs. However, you are not very responsive. Before you know it, LinkedIn is showing you adverts for online classes that would put you on the pathway to being a left-handed nurse, or some other job that is trending hot.

To take these classes, which will make you an even more attractive candidate, you log in to the online classroom with your LinkedIn profile. Once the class is completed, your scores appear on a LinkedIn page. You can then link to your academic grades and have them display prominently on your LinkedIn profile. Unless you decide to opt out, LinkedIn sends a list of top scorers to companies who have paid to receive news on top students as soon as their grades post. (wink)

New Book: Musings of Man and Machine: How Robots and Automation Will Change Recruiting

IDEA #3: LINKEDIN SHOULD BUY LYNDA

LinkedIn should buy Lynda or consider buying something like it. Why? Imagine this scenario! LinkedIn partners with high schools to give students free online classes that will prepare them for future roles. High scorers are matched with a mentor for a day, to ask what it’s like to do the work they do. LinkedIn gets members now and for the future. LinkedIn trains for the future. LinkedIn sets the standard for credentials in certain markets. LinkedIn takes the professional community to a new level. And each year, LinkedIn produces a trends report based on government stats, annual articles and LinkedIn data. It becomes the most quoted HR-related report in history, cited in most (if not all) leading publications. Just a thought…

IDEA #4: SENTIMENT ANALYSIS AND TARGETING

If I knew who was most likely to respond to my emails, I would reach out to them first. That being said, what if LinkedIn sent an email to passive candidates on a monthly basis and asked them if they were happy working for Company X? If a significant percentage of employees at a certain company are unhappy, Company X would get notified that they may want to boost their retention strategies. (I then suggested they check out Morale.me for inspiration or possible acquisition. At least, I think I did. I should have if I did not.)
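
A minimal Python sketch of that aggregation logic, with invented thresholds: collect the monthly yes/no replies and alert the employer when the unhappy share gets large enough (and only when there are enough replies to mean anything).

```python
# A hypothetical sketch of Idea #4: aggregate monthly "are you happy?" replies
# and alert the employer when the unhappy share crosses an invented threshold.

def retention_alert(responses: list, min_sample: int = 20,
                    unhappy_threshold: float = 0.4) -> bool:
    """responses: True = happy, False = unhappy. Alert on a large unhappy share."""
    if len(responses) < min_sample:
        return False  # too few replies to say anything meaningful
    unhappy_share = responses.count(False) / len(responses)
    return unhappy_share >= unhappy_threshold

survey = [True] * 11 + [False] * 9  # invented: 9 of 20 employees are unhappy
print(retention_alert(survey))  # True -> "Company X, boost your retention"
```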

IDEA #5: HIGH VALUE TARGETING

Candidates who graduated from a certain school, live in a certain location, hold relevant job titles, and follow your company fit the profile of your typical hire. As such, they get a high “recruitment probability” score and appear higher in the search results when you are logged into LinkedIn Recruiter. This would make LinkedIn Recruiter a more desirable purchase.
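
For illustration, here is a hypothetical Python sketch of such a weighted score; the traits and weights are invented, not anything LinkedIn actually uses.

```python
# A hypothetical sketch of Idea #5: a weighted "recruitment probability" score
# used to rank candidates. The traits and weights are invented.

WEIGHTS = {"target_school": 0.3, "target_location": 0.2,
           "relevant_title": 0.3, "follows_company": 0.2}

def recruitment_probability(candidate: dict) -> float:
    """Sum the weights of every profile trait the candidate matches."""
    return sum(w for trait, w in WEIGHTS.items() if candidate.get(trait))

pool = [
    {"name": "A", "target_school": True, "follows_company": True},
    {"name": "B", "relevant_title": True, "target_location": True,
     "follows_company": True},
]
ranked = sorted(pool, key=recruitment_probability, reverse=True)
print([(c["name"], recruitment_probability(c)) for c in ranked])
# [('B', 0.7), ('A', 0.5)] -> B appears higher in the Recruiter search results
```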

IDEA #6: LINKEDIN BECOMES YOUR BUSINESS ID

LinkedIn should buy DocuSign! When someone virtually signs a document online, their signature links to their LinkedIn profile. In this way, LinkedIn becomes your online ID for business, inseparably linked to your professional brand. Also, for the sake of reputation management, let companies accept comments on their blogs that are verified by logging into LinkedIn.

IDEA #7: LINKEDIN SHOULD COMPETE WITH GARTNER

LinkedIn should produce more business intelligence reports, like the kind Business Insider and Gartner produce. This would cause the business world to see LinkedIn as more than a recruitment tool and expand its customer base beyond HR.

Okay, so, those were all the notes I had on the topic. What do you think? Would these ideas still work in 2017? Let me know your thoughts in the comments below and don’t forget to subscribe to my blog, if you have not already.

🙂