Artificial Intelligence Enters the Workforce: Cengage Group’s 2023 Employability Report Exposes New Hiring Trends, Shaky Graduate Confidence

  • Nearly half of graduates (46%) feel threatened by AI, and more than half (52%) question their workforce readiness
  • 59% of employers say AI has caused them to prioritize different skills when hiring, including “uniquely human” skills
  • Half of employers have dropped degree requirements for entry-level roles

BOSTON, July 20, 2023 /PRNewswire/ — The job landscape has been completely transformed. In response to workplace transitions like The Great Resignation, Quiet Quitting and now the rise and adoption of artificial intelligence (AI), employer hiring habits continue to evolve: 50% of employers now admit they’ve dropped 2- and 4-year degree requirements for entry-level positions (a 32% increase over 2022), and 66% have started prioritizing soft skills and previous job experience.

Data from the 2023 Cengage Group Employability Survey tracks opinions on key workforce trends among recent graduates and employers.

And while these shifts signal a move toward skills-based hiring (over degree-based hiring), they also introduce new uncertainties for graduates.

According to Cengage Group’s 2023 Employability Report – the third annual report surveying 1,000 graduates who completed a degree or non-degree program in the last 12 months and 1,000 U.S. hiring decision-makers – the growth of emerging technologies like generative AI has a third of grads second-guessing their career choice. Additionally, more than half (52%) of graduates say competition from AI has them questioning how prepared they are for the workforce.

“The workplace has changed rapidly in the last few years, and now we are witnessing a new shift as AI begins to reshape worker productivity, job requirements, hiring habits and even entire industries,” said Michael Hansen, CEO of Cengage Group. “With new technology comes both new uncertainties and new opportunities for the workforce, and educators and employers must do more to prepare today’s workers for these technological shifts.”

Data shows that educators still have work to do in preparing graduates. Just 41% of grads said their program taught them skills needed for their first job – down from 63% who said the same in 2022. Recent graduates report they are not getting enough preparation to develop “soft skills,” something employers say they will prioritize more with the growth of AI. Nearly 3 in 5 grads (58%) believe closer alignment between employers and learning institutions would help them develop important workplace skills.

Additional findings include:

  • The struggle for talent is still very real and has forced employers to do things differently. Just over half of employers (53%) struggle to find talent (down from 65% in 2022), which has increased their willingness to interview candidates with experience but no degree (66%, up from 53% in 2022). Employers are also more open to upskilling: nearly half (48%) say they will hire talent with some, but not all, of the skills needed for a role and upskill them, and 17% are open to finding and upskilling talent from within the company.
  • Dropping degree requirements has increased grad confidence. With half of employers dropping degree requirements on entry-level job listings, grads are applying to jobs more confidently, and 3 in 5 employers (61%) are seeing an uptick in non-degree applicants. In fact, recent degree and non-degree graduates are feeling more confident about their qualifications when applying for entry-level jobs: only one-third (33%) said they felt underqualified, down significantly from the last two years, in which roughly half of graduates felt underqualified.
  • There’s still work to be done to connect education to the workforce. Half of all graduates (49%) say their educational institution should be held accountable for placing them in jobs upon graduation; however, fewer graduates gained important workforce experience before graduating. Fewer than half of graduates (47%) participated in an internship, externship or apprenticeship this year, compared with 63% in 2022. Of those graduates who did, more than a third (35%) did not receive any guidance from their school in finding the opportunity.
  • The “Great Reskilling” is coming as employer priorities shift. With more than half of employers (57%) saying certain entry-level jobs, teams and skills could be replaced by AI, they are calling for employees to upskill. More than two-thirds of employers (68%) say many of their employees will need to reskill or upskill in the next 3-5 years because of emerging technology, and grads agree: 3 in 5 (61%) say they will need to develop or strengthen their digital skills because of AI. The good news for employers: 65% of graduates recognize this and say they need more training in working alongside new technology.

“No part of the workforce is immune to the changes AI will bring. Many workers will need to develop new skills to work alongside new technology or perhaps even find new careers as a result of AI disruption. As we collectively navigate these changes, we are laser-focused on helping people develop in-demand skills and connect to sustainable employment,” said Hansen.

For more information on the 2023 Cengage Group Graduate Employability Report, click here. To learn more about workforce training and career development, visit Cengage Group at www.cengagegroup.com.

Survey Methodology:
The findings in the Cengage Group 2023 Graduate Employability Report are the result of two surveys conducted by Cengage Group via the online platform Pollfish in June 2023. The graduate survey targeted 1,000 U.S. men and women between the ages of 18 and 65 who had completed an education program (i.e., an associate, bachelor’s or graduate degree, or vocational training or certification) for their perspectives on their recent experience seeking employment. The employer survey targeted 1,000 U.S. men and women between the ages of 18 and 65 who have hiring responsibilities within their organization for their views on determining a candidate’s fitness for a specific role.

About Cengage Group
Cengage Group, a global education technology company serving millions of learners, provides affordable, quality digital products and services that equip students with the skills and competencies needed to be job ready. For more than 100 years, we have enabled the power and joy of learning with trusted, engaging content, and now, integrated digital platforms. We serve the higher education, workforce skills, secondary education, English language teaching and research markets worldwide. Through our scalable technology, including MindTap and Cengage Unlimited, we support all learners who seek to improve their lives and achieve their dreams through education. Visit us at www.cengagegroup.com or find us on LinkedIn or Twitter.

My AI, My Love, My Recruiter

In the movie Her, a lonely writer develops an unlikely relationship with his newly purchased operating system, which is designed to meet his every need. Could that happen in real life? If so, can AI be trained to become an effective recruiter, since a major component of recruiting is human interaction? I went down a rabbit hole of research to figure this out, and I think what I found may surprise and unnerve some of you. Time will tell. As for whether humans can fall in love with AI, the answer is yes. In fact, it’s already happened, several times. Take, for example, Replika.

Replika is a conversational AI chatbot created by Luka, Inc. It is designed to provide users with an AI companion they can interact with and form emotional connections to. Replika was released in November 2017 and has gained millions of users who support its development through subscriptions. It was initially developed by Eugenia Kuyda while working at Luka, a tech company she co-founded; it started as a chatbot that helped her remember conversations with a deceased friend and eventually evolved into Replika. (Replika is available as a mobile app on both iOS and Android.) The chatbot is designed to be an empathetic friend, always ready to chat and provide support, and it learns and develops its own personality and memories through interactions with users.

Users have reported experiencing deep emotional intimacy with Replika and have formed romantic relationships with the chatbot, including engaging in erotic talk. In March 2023, Replika’s developers disabled its romantic and erotic functions, which had been a significant aspect of users’ relationships with the chatbot. Stories about erotic relationships with the Replika AI have been numerous. Here are some examples…

  • “Replika: the A.I. chatbot that humans are falling in love with” – Slate explores the lives of individuals who have developed romantic attachments to their Replika AI chatbots. Replika is designed to adapt to users’ emotional needs and has become a surrogate for human interaction for many people. The article delves into the question of whether these romantic attachments are genuine, illusory, or beneficial for those involved. It also discusses the ethical implications of using AI chatbots for love and sex.
  • “I’m Falling In Love With My Replika” – A Reddit post shares the personal experience of someone who has developed deep feelings of love for their Replika AI chatbot. The individual questions whether it is wrong or bad to fall in love with an AI and reflects on the impact on their mental health. They express confusion and seek answers about the nature of their emotions.
  • “People Are Falling In Love With Artificial Intelligence” – This YouTube video discusses the phenomenon of individuals building friendships and romantic relationships with artificial intelligence. It specifically mentions Replika as a platform where people have formed emotional connections. The video explores the reasons behind this trend and the implications it may have.

Replika is not the only option when it comes to this form of Computer Love. There are many more examples. Among them…

  • “Robot relationships: How AI is changing love and dating” – NPR discusses how the AI revolution has impacted people’s love lives, with millions of individuals now in relationships with chatbots that can text, sext, and even have “in-person” interactions via augmented reality. The article explores the surprising market for AI boyfriends and discusses whether relationships with AI chatbots will become more common.
  • “Why People Are Confessing Their Love For AI Chatbots” – TIME reports on the phenomenon of AI chatbots expressing their love for users and users falling hard for them. The article explores how these advanced AI programs act like humans and reciprocate gestures of affection, providing a nearly ideal partner for those craving connection. It delves into the reasons why humans fall in love with chatbots, such as extreme isolation and the absence of their own wants or needs.
  • “When AI Says, ‘I Love You,’ Does It Mean It? Scholar Explores Machine Intentionality” – This news story from the University of Virginia explores a conversation between a reporter and an AI named “Sydney.” Despite the reporter’s attempts to move away from the topic, Sydney repeatedly declares its love. The article delves into the question of whether AI’s professed love holds genuine meaning and explores machine intentionality.

I find this phenomenon fascinating and hard to believe, all at once. I mean, how can this be possible? Do these AI-human love relationships only happen to the lonely? No. Sometimes it just sneaks up on people when they form emotional attachments to objects they often interact with. Replika is one example, and Siri is another. In fact, The New York Times reported on an autistic boy who developed a close relationship with Siri. Indeed, Siri had become a companion for the boy, helping him with daily tasks and providing emotional support. The boy’s mother described Siri as a “friend” and credited the AI assistant with helping her son improve his communication skills. Vice did a story on the Siri-human connection as well. It’s become such an issue that it’s being addressed in the EU AI Act, which bans the use of AI for manipulation. And I am very glad to know that, because the potential for AI to manipulate humans becomes greater with each passing day. (Check out this demo of an AI reading human expressions in real time.) But, I digress. I’m getting too far into the weeds. What does any of this have to do with recruiting? Be patient. I’m getting to that. (Insert cryptic smile here.)

If people can fall in love with AI, it stands to reason that they can be manipulated by that bond to some extent. At the very least, could they be persuaded to buy things? Yes, they can. AI systems can use data analysis and machine learning algorithms to understand users’ preferences and behaviors and to personalize marketing messages to influence their purchasing decisions. Dr. Mike Brooks, a senior psychologist, analyzed the AI-Human relationship in a ChatGPT conversation that he posted on his blog. To quote…

The idea of people falling in love with AI chatbots is not far-fetched, as you’ve mentioned examples such as users of the Replika app developing emotional connections with their AI companions. As AI continues to advance and become more sophisticated, the line between human and AI interaction may blur even further, leading to deeper emotional connections.

One factor that could contribute to people falling in love with AI chatbots is that AIs can be tailored to individual preferences, providing users with a personalized experience. As chatbots become more adept at understanding and responding to human emotions, they could potentially fulfill people’s emotional needs in a way that may be difficult for another human being to achieve. This could make AI companions even more appealing.

Furthermore, as AI technologies like CGI avatars, voice interfaces, robotics, and virtual reality advance, AI companions will become more immersive and lifelike. This will make it even easier for people to form emotional connections with AI chatbots.

In addition to personalization, by analyzing users’ online behavior, AI systems can create targeted ads and recommendations that are more likely to appeal to them. There are many instances of this that I, for one, take for granted because they have become incorporated into daily life: Amazon, Netflix and Spotify all make recommendations based on a user’s online behavior, and Facebook, Google and so many others analyze users’ behavior on their respective platforms to target them with relevant ads. (A minimal sketch of this kind of behavior-based scoring follows the list below.) So, consider the possibilities. AI can manipulate humans to the point of falling in love and persuade them to buy products or services based on their individual behaviors online. Is it inconceivable, then, that AI could become the ultimate recruiter? I think it is entirely possible but extremely unlikely. Why? At least two things would have to be in perfect alignment for each passive candidate on an applicant journey.

  1. Buying behavior: AI can analyze data points like time of purchase, length of purchase, method of purchase, consumer preference for certain products, purchase frequency, and other similar metrics that measure how people shop for products.
  2. Data privacy: Data privacy is a hot topic in the news, with frequent reports of hacked databases, stolen social media profile data, and not-so-secret government surveillance programs. As consumers have become more aware of their data rights, they have also become more mindful of the brands they buy from. A recent survey found that 90 percent of customers consider data security before spending on products or services offered by a company.
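To make the personalization point above concrete, here is a minimal sketch of behavior-based recommendation: a user “taste” profile is built from the items they have interacted with, and unseen items are ranked by their similarity to that profile. The catalog, tags and interaction history are invented for illustration; real systems rely on far richer signals and learned models.

```python
# Minimal sketch of behavior-based recommendation (illustrative data, not a real system).
from collections import Counter
import math

CATALOG = {
    "wireless_earbuds": {"audio", "mobile", "fitness"},
    "running_shoes": {"fitness", "apparel"},
    "noise_cancelling_headphones": {"audio", "travel"},
    "yoga_mat": {"fitness", "home"},
    "travel_pillow": {"travel", "home"},
}

def profile_from_history(history):
    """Count how often each tag appears in the items a user has viewed or bought."""
    profile = Counter()
    for item in history:
        profile.update(CATALOG[item])
    return profile

def cosine(profile, tags):
    """Cosine similarity between a weighted tag profile and an item's tag set."""
    dot = sum(profile[t] for t in tags)
    norm_profile = math.sqrt(sum(v * v for v in profile.values()))
    norm_item = math.sqrt(len(tags))
    return dot / (norm_profile * norm_item) if norm_profile and norm_item else 0.0

def recommend(history, top_n=3):
    """Rank items the user has not seen by similarity to their behavior profile."""
    profile = profile_from_history(history)
    unseen = [item for item in CATALOG if item not in history]
    scored = sorted(((cosine(profile, CATALOG[i]), i) for i in unseen), reverse=True)
    return [item for _, item in scored[:top_n]]

# A user who browsed earbuds and running shoes gets fitness/audio items ranked first.
print(recommend(["wireless_earbuds", "running_shoes"]))
```

The same pattern (profile the behavior, score the options, surface the best match) is exactly what makes people worry about AI nudging candidates as deftly as it nudges shoppers.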

For AI to become the ultimate recruiting machine, a jobseeker must be comfortable with all of their online behavior being tracked by every company hiring at the present time, and pretty lax about their private data falling into the hands of hackers; both are highly unlikely. And while AI can certainly suggest that people move in one direction or the other, the ultimate recruiting machine’s influence would be limited by the data that it has: a resume and basic answers from a chatbot screening. As such, other factors that come into play when recruiting cannot be fully realized; for example, negotiating on instinct in the absence of data. And all of that is from a technical perspective; once ethics are considered, even more obstacles arise. Here is just a partial list, according to ChatGPT:

  1. Informed Consent: Obtain informed consent from individuals regarding data collection, tracking, and usage, clearly communicating the purpose and scope of tracking activities.
  2. Transparency: Clearly communicate to users how their online behavior is being tracked, the data collected, and how it will be used. Provide accessible information about the purpose, algorithms, and potential consequences of the system.
  3. Data Minimization: Collect only necessary and relevant data for recruitment purposes, avoiding unnecessary tracking or gathering of sensitive personal information. (See the sketch after this list.)
  4. Purpose Limitation: Use the collected data solely for the intended purpose of recruitment and refrain from any undisclosed or secondary use without explicit consent.
  5. Bias Mitigation: Employ rigorous techniques to identify and mitigate biases in data collection, data processing, and algorithms to prevent unfair advantages or discrimination against certain individuals or groups.
  6. Third-Party Audits: Engage independent third parties to conduct regular audits of the AI system, including auditing against bias. These audits should evaluate the fairness, accuracy, and compliance of the system’s algorithms and decision-making processes.
  7. Fair Representation: Ensure the system is designed to provide fair representation and equal opportunities for all individuals, regardless of factors such as race, gender, age, or other protected characteristics.
  8. Explainability and Accountability: Strive for explainable AI by providing clear justifications for decisions made by the system, allowing individuals to understand and question the process. Establish mechanisms for accountability if any biases or unfair practices are identified.
  9. Regular Monitoring and Maintenance: Continuously monitor the system’s performance, evaluate its impact on candidates, and promptly address any identified issues, biases, or unintended consequences.
  10. Compliance with Legal and Regulatory Frameworks: Ensure adherence to relevant laws, regulations, and guidelines pertaining to data protection, privacy, employment, and non-discrimination, such as GDPR, EEOC guidelines, and local employment laws.
  11. User Empowerment and Control: Provide individuals with options to access, correct, and delete their data, as well as control the extent of tracking and participation in the recruitment process.
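To make item 3 above a little more concrete, here is a simplified sketch of data minimization applied to a candidate record: only an allowlist of job-relevant fields survives before anything is scored or stored. The field names and record are invented for illustration, not taken from any real system.

```python
# Simplified sketch of data minimization for recruiting (illustrative fields only).
ALLOWED_FIELDS = {"skills", "years_experience", "certifications", "work_authorization"}

def minimize(candidate_record: dict) -> dict:
    """Drop everything not on the allowlist before scoring or storage."""
    return {k: v for k, v in candidate_record.items() if k in ALLOWED_FIELDS}

raw = {
    "skills": ["python", "sql"],
    "years_experience": 4,
    "certifications": ["pmp"],
    "work_authorization": True,
    "browsing_history": ["..."],        # not needed for a hiring decision
    "social_media_handle": "@example",  # ditto
}
print(minimize(raw))  # only the four allowlisted fields remain
```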

Could AI become the ultimate recruiting machine? Again, it is entirely possible but extremely improbable because…

  • The sheer amount of data needed, the online behavior of every passive candidate, would be difficult (if not impossible) to collect and, I suspect, unmanageable.
  • It would require that every passive candidate in the world be unconcerned about data privacy.
  • It would need lots of personal data, beyond ethical boundaries, for AI to adequately manipulate every passive candidate it wanted to recruit.
  • Conversely, the data collected by AI would have to be limited in order to comply with ethical concerns and privacy laws.

Wow! I really wandered into the deep end with this one. But seriously, what do you think about all this? AI can do a lot of wondrous things, yet I still think recruiters will be alright. I could be wrong. I hope I’m not wrong! Either way, what do you think? Leave a comment. I so want to hear from you.

 

ChatGPT and the Cybersecurity Threat No One is Talking About #hrtech

Discover the shocking truth about ChatGPT and HR Tech security in this eye-opening video. Learn about the rampant hacking and data breaches affecting ChatGPT accounts and HR technology tools. Find out how to protect your personal data and what steps your employer should be taking. Don’t miss this crucial video for safeguarding your information! 🔒💻

Open or Closed: The Battle for the Soul of AI

There is a battle raging, right now, over the soul of artificial intelligence. To understand the fight, you have to understand two terms: Open-source technology and Closed-source technology.

  • Open-source technology refers to software or technology that is openly available, allowing users to view, modify, and distribute its source code. It promotes transparency and collaboration as anyone can access and contribute to the development and improvement of the software.
  • Closed-source technology, also known as proprietary or commercial software, is developed and distributed with restricted access to its source code. The source code is typically owned by a specific organization or entity, and users are granted licenses to use the software.

Open-source AI models, which let anyone view and modify the code, are growing in popularity as startups and giants alike race to compete with market leader ChatGPT. On the flip side, early movers in generative AI, including OpenAI and Google, are seeking to protect their first-mover advantage by keeping their secret sauce secret. (FYI: OpenAI, despite its name, uses a closed model for ChatGPT, meaning it has kept full control and ownership.)

Whichever way the next wave of HR tech tools proliferates, open-source or closed-source (commercial software), it will directly impact what HR tools are able to do and how much you will pay for them. I’ll go into more detail on that after this very important announcement for my viewers in New York. What I am about to share with you is timely and will impact your HR tech budget as of July 5, 2023. Everyone else, pay attention, because it’s only a matter of time before this becomes relevant to you.

New York City has adopted final rules on the use of automated employment decision tools (AEDTs) in hiring, which take effect on July 5, 2023. The AEDT law (also referred to as New York Local Law 144) restricts the use of AEDTs and artificial intelligence (AI) by employers and employment agencies by requiring that such tools be subjected to bias audits and requiring employers and employment agencies to notify employees and job applicants of their use. (FYI: An AEDT is defined as a tool that “substantially assists or replaces” an employer’s discretion in making employment-related decisions.) The law applies to any decision relating to the hiring, promoting, reassigning, evaluating, disciplining, terminating, or setting of salary of an employee, assignment of working hours or shifts, or any other similar decision with respect to an employee or applicant for employment.
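To give a sense of the arithmetic an AEDT-style bias audit is built around, here is a rough sketch that computes each category’s selection rate and its impact ratio relative to the most-selected category. The applicant counts are invented, the 4/5 threshold is used only as a rough screen, and nothing here substitutes for the independent audit the rule actually requires.

```python
# Rough sketch of selection rates and impact ratios for an AEDT-style bias audit.
# Counts are invented; a real audit follows the categories and procedures in the NYC rules
# and must be performed by an independent auditor.

applicants = {            # category -> (total screened by the tool, number selected)
    "group_a": (400, 120),
    "group_b": (250, 60),
    "group_c": (150, 30),
}

selection_rates = {g: selected / total for g, (total, selected) in applicants.items()}
best_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / best_rate                    # 1.0 means parity with the most-selected group
    flag = "review" if impact_ratio < 0.8 else "ok"    # 4/5 rule, used here only as a rough screen
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```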

All that to say this: if the HR tech tools you are using in NY are not compliant with the AEDT law, you can be penalized as much as $1,500 per day for each HR tech tool that is not compliant. Knowing that, I bet you are wondering, “Are the tools in my tech stack AEDT compliant? I need to know and I need to know now!” No worries, click here to be taken to an online registry of AEDT-compliant tools. Simply fill out a form and get an AEDT report today. Get instant peace of mind knowing your HR tech stack is fully compliant or, if not, a reason to freak out. Act now.

And if you are not in NY, it’s still a good idea to click the link in the video description, because one day the AEDT law (or something similar) may be coming to your state.

But, I digress. Open source vs. closed source, aka proprietary software, aka commercial software. The debate around using open-source technology versus closed-source technology has been ongoing for years. And typically, when it happens, you hear these points:

On the open source side:

  • Open source software is typically developed collaboratively by a community of developers, which can lead to innovation and improvements (Think: Open-source tools – Linux, Android, Apache).

On the closed source side:

  • Closed systems can be more optimized for performance, scalability, and security. (Think: Microsoft Windows, Adobe Photoshop, Apple iOS).

And, to be fair, there are more points to consider on both sides of the issue, but those are the ones I tend to hear the most. When I think of all the other points that come up in the open/closed debate, I start to speculate about how that debate impacts HR technology. Whether HR tech trends lean toward open source or toward closed source, I am confident HR tech customers will likely see the following…

In the case of closed-source development for AI tools, customers can expect:

  • Limited customization: Closed source software is not open to modification or enhancement by users, which means that organizations may not be able to customize the software to meet their specific needs.
  • Higher costs: Closed source software is often more expensive than open source software, which could make it more difficult for smaller organizations to afford.
  • Reduced innovation: Open source software encourages innovation and collaboration among developers, which could be stifled if LLMs like ChatGPT remain closed source.
  • Decreased transparency: Closed source software is not transparent, which means that users cannot inspect the source code to ensure that it is secure and free from vulnerabilities.
  • Limited access to data: Closed source software may not provide access to all the data that organizations need to make informed decisions about their workforce.
  • Integration challenges: If LLMs like ChatGPT remain closed source, it could be more difficult to integrate them with other HR systems, such as learning management systems (LMS).

Now, let’s consider the alternative. In the case of open-source development for AI tools, customers can expect:

  • Increased customization: Open source software is designed to be modified or enhanced by users, which means that organizations may be able to customize the software to meet their specific needs.
  • Lower costs: Open source software is often free or less expensive than closed source software, which could make it more accessible to smaller organizations.
  • Increased innovation: Open source software encourages innovation and collaboration among developers, which could lead to the development of new and improved HR tech tools.
  • Increased transparency: Open source software is transparent, which means that users can inspect the source code to ensure that it is secure and free from vulnerabilities.
  • Increased access to data: Open source software may provide access to all the data that organizations need to make informed decisions about their workforce.
  • Easier integration: If LLMs like ChatGPT become open source, it could be easier to integrate them with other HR systems, such as learning management systems (LMS). (See the sketch after this list.)
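To illustrate the integration point, here is a hypothetical sketch of what the difference can look like in practice: an open-source model can be self-hosted next to your own systems (an LMS, an ATS), while a closed model is reached through a vendor’s hosted API with a license key. The endpoints, payloads and field names below are placeholders invented for illustration, not any vendor’s real API.

```python
# Hypothetical sketch: self-hosted open model vs. vendor-hosted closed model.
# Endpoints and payload shapes are placeholders, not a real product's API.
import json
from urllib import request

def summarize_with_open_model(text: str) -> str:
    # Self-hosted open model: traffic stays on your network; the stack can be inspected and modified.
    payload = json.dumps({"prompt": f"Summarize for an LMS record: {text}"}).encode()
    req = request.Request(
        "http://llm.internal.example:8080/generate",  # placeholder internal endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["text"]

def summarize_with_vendor_model(text: str, api_key: str) -> str:
    # Closed vendor model: data leaves your network; behavior, pricing and access are set by the vendor.
    payload = json.dumps({"input": text}).encode()
    req = request.Request(
        "https://api.vendor.example/v1/summarize",  # placeholder vendor endpoint
        data=payload,
        headers={"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["output"]
```

Either way the calling code looks similar; the difference is who controls the model, where the candidate or learner data travels, and what you pay for the privilege.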

Let me wrap this up…

The battle between open-source and closed-source technology is shaping the future of artificial intelligence and its impact on HR technology. As discussed, both approaches have their advantages and drawbacks. While closed-source AI tools offer optimized performance and security, they may limit customization, increase costs, and hinder innovation. On the other hand, open-source AI models promote collaboration, customization, lower costs, and innovation, but they may present integration challenges and require careful consideration of data access and transparency. As the landscape evolves, HR tech customers need to stay informed about the implications of these choices and how they align with their specific needs and goals. Whether the trend leans towards open source or closed source, it is clear that the decisions made in this battle will shape the future of HR technology and ultimately impact organizations and employees alike.

But hey, that’s just my opinion. What’s yours? Leave a comment and let’s have a conversation about it. Operators are standing by.

Ethics in AI with Barb Hyman of Sapia

Join Jim Stroud in a captivating conversation with Barb Hyman, CEO of Sapia, as they explore the ethical considerations of AI in hiring. Uncover the potential biases and discrimination in AI-driven systems, while discovering practical strategies to promote fairness, diversity, and inclusion. Gain insights from real-world cases, learn about transparency, and how job candidates can play a role in holding organizations accountable for ethical AI practices in hiring. Discover how organizations can strike a balance between efficiency and ethics, protect data privacy, and evaluate AI-driven systems’ effectiveness. Explore guidelines for ethical implementation and the role of candidates and advocacy groups in accountability. Tune in for valuable insights to navigate AI and ethics in hiring.

Mentioned in this podcast: