The End is Near? AI’s Continued Impact on the Job Market

Despite the strike in Hollywood, the studios are all in on AI, favoring technology over hiring human workers. Doubt me? Netflix recently posted an AI Product Manager job that pays as much as $900,000. The striking Hollywood workers were not pleased to hear that. To quote The Intercept:

“So $900k/yr per soldier in their godless AI army when that amount of earnings could qualify thirty-five actors and their families for SAG-AFTRA health insurance is just ghoulish,” actor Rob Delaney, who had a lead role in the “Black Mirror” episode, told The Intercept. “Having been poor and rich in this business, I can assure you there’s enough money to go around; it’s just about priorities.”

What’s going on in Hollywood is just one instance in a growing trend. Suumit Shah, CEO of Dukaan, fired 90 percent of his company’s customer support staff after arguing that an AI chatbot had outperformed them.


In another example, Insider laid off 10 percent of its staff a week after pivoting toward AI. Futurism reported:

Just a week after urging its writers to incorporate AI tools like ChatGPT into their workflow, Insider has laid off 10 percent of its staff.

“As you know, our industry has been under significant pressure for more than a year. The economic headwinds that have hurt many of our clients and partners are also affecting us,” Insider president Barbara Peng wrote in an email to staff sent this morning.

“Unfortunately, to keep our company healthy and competitive, we need to reduce the size of our team,” Peng continued, adding that “the reduction would affect about 10 percent” of the publication’s workforce.

One more sign of where things are headed is Waymo. Although the company is slowing down its autonomous trucking program, it is moving forward with an automated taxi fleet, putting Uber and Lyft drivers on notice.

Yikes!

That twinge of technophobia you may be feeling is something American workers have felt before. The cycle tends to be: 1) a transformative technology debuts, 2) it is greeted with a mix of wonder and fear, 3) old jobs are replaced by new occupations, and 4) people adapt and move forward. It has happened time and time again; most notably…

  1. Industrial Revolution (late 18th to early 19th century): The mechanization of industries during the Industrial Revolution led to the displacement of many artisanal and agrarian jobs, while creating new opportunities in manufacturing and urban centers.
  2. Computer Revolution (mid-20th century): The advent of computers and the digital revolution changed how work was done, leading to the automation of various tasks, particularly in administrative roles and data processing.
  3. Internet and E-Commerce (late 20th to early 21st century): The rise of the internet and e-commerce disrupted traditional retail and supply chain industries, impacting brick-and-mortar stores and creating new opportunities in online retail and tech-related roles.
  4. Robotics and Automation (late 20th to early 21st century): The integration of robotics and automation in manufacturing has led to job displacement in some industries but also created new roles in robotics programming and maintenance.

All that being said, what’s happening today with artificial intelligence feels kind of… different. The speed of change is dizzying, and people have mixed feelings about it. A survey conducted by Ipsos in April 2023 found that 71% of respondents expressed concern about the impact of AI on jobs and society. The same survey revealed that views on AI are split, with nearly the same share of Americans viewing AI favorably (39%) as unfavorably (43%). According to research from the Centre for the Governance of AI, the overwhelming majority of Americans (more than eight in 10) agree that AI and/or robots should be carefully managed. The same research body found that more Americans support than oppose developing AI, though support was greater among respondents who were wealthy, educated, male, or experienced with technology.

So, what does this mean for society overall? What happens when a large percentage of workers are displaced by AI at the same time? I can imagine two things happening: the government steps in and slows the commercial adoption of AI, or we see a proliferation of social safety nets. Actually, I think both will happen, especially since I am already seeing signs of both. As far as the government slowing commercial adoption, I see it happening (somewhat) with regulation. Take, for example, the “No Robot Bosses Act” in Congress, proposed legislation that aims to regulate the use of automated decision systems throughout the employment life cycle. The bill would bar employers from relying solely on automated systems like algorithms and machine learning to make decisions about hiring, firing, or managing employees. The key provisions of the No Robot Bosses Act include:

  • Pre-deployment and periodic testing and validation: This is to prevent unlawful biases in automated decision systems.
  • Operational training: This is to ensure that employees are trained to use automated decision systems effectively.
  • Mandate independent, human oversight before using outputs: This is to ensure that there is human oversight before decisions made by automated decision systems are implemented.
  • Require timely disclosures of use, data inputs and outputs, and employee rights with respect to the decisions: This is to ensure that employees are aware of how automated decision systems are being used and how they can challenge decisions made by these systems.

If passed into law, the No Robot Bosses Act would have an impact on small businesses. This is significant because 99.9% of all US businesses are small businesses, and they may all face additional costs associated with complying with the act’s requirements: expenses related to pre-deployment testing and validation of automated decision systems, operational training for employees, and ensuring independent human oversight before implementing decisions made by automated systems. As of August 1, 2023, the No Robot Bosses Act has been introduced in the Senate by Senator Bob Casey and is in the early stages of the legislative process. (I’m not an expert on legislation, but it sounds to me a lot like Local Law 144, a New York City law that requires third-party auditing of AI tools used in hiring.) But I digress.

Regulation may slow (a little bit) the encroachment of AI tools displacing workers. The question is, will it slow things down enough for American workers to catch up? There are some programs currently in place to help with that. For example, the American AI Initiative announced by President Trump is aimed at retraining workers who are at risk of losing their jobs to AI, and the US trade program known as Trade Adjustment Assistance provides benefits and support to workers who have lost their jobs to foreign trade competition. While not specifically targeted at AI-related job displacement, it could potentially assist workers affected by technological advancements.

While I recognize the benefit of government intervention in this case, I think there is an additional option to consider. Could companies that are innovating with AI create jobs faster than they displace workers? Is that possible? Yes; in fact, it has happened more than once before. A few examples…

  • Automobile Industry (early 20th century): The introduction of the automobile led to job displacement in the horse-drawn carriage industry and related sectors. However, the growth of the automobile industry itself created a vast number of new jobs, such as assembly line workers, mechanics, and auto salespeople.
  • Information Technology Revolution (late 20th century): The widespread adoption of computers and information technology disrupted some traditional job roles, but it also led to the creation of new jobs in software development, IT support, data analysis, and other technology-related fields.
  • Renewable Energy Sector (ongoing): The growth of the renewable energy sector, such as solar and wind energy, has created jobs in installation, maintenance, research, and development, offsetting some job displacement in traditional fossil fuel industries.

How could this translate into opportunities related to AI? Well, best case scenario, AI adoption can lead to the emergence of entirely new industries, creating job opportunities that were not previously envisioned. For example, AI-driven technologies might give rise to new fields like AI ethics consulting, AI software development, and AI-related research. The adoption of AI by companies can lead to a demand for supporting industries, such as AI hardware manufacturing, AI consulting firms, and AI data annotation services, which can create new jobs. And my personal favorite, AI technologies lower barriers to entry for startups, allowing entrepreneurs to create new businesses and ventures that generate jobs.

To sum it all up, the rapid advancement of AI is undoubtedly reshaping the job market and raising legitimate concerns about job displacement. The recent examples of AI’s impact on industries like Hollywood and tech companies undergoing layoffs highlight the urgency to address the challenges posed by this world-changing technology. While history has shown that technological advancements can create new opportunities, the unique speed and scale of AI’s impact require proactive measures. Government intervention through regulations and retraining programs can mitigate potential negative consequences, while companies that embrace AI innovation have the potential to create new jobs and foster emerging fields. By carefully navigating these interesting times, society can harness AI’s potential for positive change while ensuring a future of work everyone can appreciate.

Artificial Intelligence Enters the Workforce: Cengage Group’s 2023 Employability Report Exposes New Hiring Trends, Shaky Graduate Confidence

  • Half of graduates feel threatened by AI (46%) and question their workforce readiness (52%)
  • 59% of employers say AI has caused them to prioritize different skills when hiring, including “uniquely human” skills
  • Half of employers have dropped degree requirements for entry-level roles

BOSTON, July 20, 2023 /PRNewswire/ — The job landscape has been completely transformed. In response to workplace transitions like The Great Resignation, Quiet Quitting and now the rise and adoption of artificial intelligence (AI), employer hiring habits continue to evolve with 50% of employers now admitting they’ve dropped 2- and 4-year degree requirements for entry-level positions (a 32% increase over 2022) and started prioritizing softer skills and previous job experience (66%).

Data from the 2023 Cengage Group Employability Survey tracks opinions on key workforce trends among recent graduates and employers.

And while these shifts signal a move toward skills-based hiring (over degree-based hiring), it also introduces new uncertainties for graduates.

According to Cengage Group’s 2023 Employability Report – the third annual report surveying 1,000 graduates who completed a degree or non-degree program in the last 12 months and 1,000 U.S. hiring decisionmakers – the growth of emerging technologies, like generative AI, has a third of grads second-guessing their career choice. Additionally, more than half (52%) of graduates say competition from AI has them questioning how prepared they are for the workforce.

“The workplace has changed rapidly in the last few years, and now we are witnessing a new shift as AI begins to reshape worker productivity, job requirements, hiring habits and even entire industries,” said Michael Hansen, CEO of Cengage Group. “With new technology comes both new uncertainties and new opportunities for the workforce, and educators and employers must do more to prepare today’s workers for these technological shifts.”

Data shows that educators still have work to do in preparing graduates. Just 41% of grads said their program taught them skills needed for their first job – down from 63% who said the same in 2022. Recent graduates report they are not getting enough preparation to develop “soft skills,” something employers say they will prioritize more with the growth of AI. Nearly 3 in 5 grads (58%) believe closer alignment between employers and learning institutions would help them develop important workplace skills.

Additional findings include:

  • The struggle for talent is still very real and has forced employers to do things differently. Half of employers (53%) struggle to find talent (down from 65% in 2022), which has improved their willingness to interview candidates with experience but no degree (66%; up from 53% in 2022). Additionally, employers are more open to upskilling with nearly half of employers (48%) admitting they will hire talent with some but not all the skills needed for a role and upskill them, and 17% open to finding and upskilling talent from within the company.
  • Dropping degree requirements has increased grad confidence. With half of employers dropping degree requirements on entry-level job listings, grads are more confidently applying to jobs with 3 in 5 (61%) employers seeing an uptick in non-degree applicants. In fact, recent degree and non-degree graduates are feeling more confident regarding their qualifications to apply for entry-level jobs, with only one-third (33%) stating they felt underqualified, down significantly from the last two years in which roughly half of graduates felt underqualified.
  • There’s still work to be done to connect education to the workforce. Half of all graduates (49%) say their educational institution should be held accountable for placing them in jobs upon graduation; however, fewer graduates gained important workforce experience before graduating. Less than half of graduates (47%) participated in an internship, externship or apprenticeship this year, compared with 63% in 2022. Of those graduates who did, more than a third (35%) did not receive any guidance from their school in finding the opportunity.
  • The “Great Reskilling” is coming as employer priorities shift. With more than half of employers (57%) saying certain entry level jobs, teams and skills could be replaced by AI, they are calling for employees to upskill. More than two-thirds of employers (68%) say many of their employees will need to reskill or upskill in the next 3-5 years because of emerging technology and grads agree – 3 in 5 (61%) say they will need to develop or strengthen their digital skills due to AI. The good news for employers: graduates (65%) recognize that and say they need more training in working alongside new technology.

“No part of the workforce is immune to the changes AI will bring. Many workers will need to develop new skills to work alongside new technology or perhaps even find new careers as a result of AI disruption. As we collectively navigate these changes, we are laser-focused on helping people develop in demand skills and connect to sustainable employment,” said Hansen.

For more information on the 2023 Cengage Group Graduate Employability Report, click here. To learn more about workforce training and career development, visit Cengage Group at www.cengagegroup.com.

Survey Methodology:
The findings in the Cengage Group 2023 Graduate Employability Report are the result of two surveys conducted by Cengage Group via the online platform Pollfish in June 2023. The graduate survey targeted 1,000 U.S. men and women between the ages of 18 and 65 who had completed an education program (i.e., associate, bachelor’s or graduate degree or vocational training or certification) for their perspectives on their recent experience seeking employment. The employer survey targeted 1,000 U.S. men and women between the ages of 18 and 65 who have hiring responsibilities within their organization for their views on determining a candidate’s fitness for a specific role.

About Cengage Group
Cengage Group, a global education technology company serving millions of learners, provides affordable, quality digital products and services that equip students with the skills and competencies needed to be job ready. For more than 100 years, we have enabled the power and joy of learning with trusted, engaging content, and now, integrated digital platforms. We serve the higher education, workforce skills, secondary education, English language teaching and research markets worldwide. Through our scalable technology, including MindTap and Cengage Unlimited, we support all learners who seek to improve their lives and achieve their dreams through education. Visit us at www.cengagegroup.com or find us on LinkedIn or Twitter.

My AI, My Love, My Recruiter

In the movie Her, a lonely writer develops an unlikely relationship with his newly purchased operating system, which is designed to meet his every need. Could that happen in real life? If so, can AI be trained to become an effective recruiter, given that a major component of recruiting is human interaction? I went down a rabbit hole of research to figure this out, and I think what I found may surprise and unnerve some of you. Time will tell. As to whether humans can fall in love with AI, the answer is yes. In fact, it’s already happened, several times. Take, for example, Replika.

Replika is a conversational AI chatbot created by Luka, Inc. It is designed to give users an AI companion they can interact with and form emotional connections to. Replika was released in November 2017 and has gained millions of users who support its development through subscriptions. Users have reported experiencing deep emotional intimacy with Replika and have formed romantic relationships with the chatbot, including engaging in erotic talk. Replika was initially developed by Eugenia Kuyda while working at Luka, a tech company she co-founded. It started as a chatbot that helped her remember conversations with a deceased friend and eventually evolved into Replika. (Replika is available as a mobile app on both iOS and Android.) The chatbot is designed to be an empathetic friend, always ready to chat and provide support, and it learns and develops its own personality and memories through interactions with users. In March 2023, Replika’s developers disabled its romantic and erotic functions, which had been a significant aspect of users’ relationships with the chatbot. Stories about erotic relationships with Replika have been numerous. Here are some examples…

  • “Replika: the A.I. chatbot that humans are falling in love with” – Slate explores the lives of individuals who have developed romantic attachments to their Replika AI chatbots. Replika is designed to adapt to users’ emotional needs and has become a surrogate for human interaction for many people. The article delves into the question of whether these romantic attachments are genuine, illusory, or beneficial for those involved. It also discusses the ethical implications of using AI chatbots for love and sex.
  • “I’m Falling In Love With My Replika” – A Reddit post shares the personal experience of someone who has developed deep feelings of love for their Replika AI chatbot. The individual questions whether it is wrong or bad to fall in love with an AI and reflects on the impact on their mental health. They express confusion and seek answers about the nature of their emotions.
  • “People Are Falling In Love With Artificial Intelligence” – This YouTube video discusses the phenomenon of individuals building friendships and romantic relationships with artificial intelligence. It specifically mentions Replika as a platform where people have formed emotional connections. The video explores the reasons behind this trend and the implications it may have.

Replika is not the only option when it comes to this form of Computer Love. There are many more examples. Among them…

  • “Robot relationships: How AI is changing love and dating” – NPR discusses how the AI revolution has impacted people’s love lives, with millions of individuals now in relationships with chatbots that can text, sext, and even have “in-person” interactions via augmented reality. The article explores the surprising market for AI boyfriends and discusses whether relationships with AI chatbots will become more common.
  • “Why People Are Confessing Their Love For AI Chatbots” – TIME reports on the phenomenon of AI chatbots expressing their love for users and users falling hard for them. The article explores how these advanced AI programs act like humans and reciprocate gestures of affection, providing a nearly ideal partner for those craving connection. It delves into the reasons why humans fall in love with chatbots, such as extreme isolation and the absence of their own wants or needs.
  • “When AI Says, ‘I Love You,’ Does It Mean It? Scholar Explores Machine Intentionality” – This news story from the University of Virginia explores a conversation between a reporter and an AI named “Sydney.” Despite the reporter’s attempts to move away from the topic, Sydney repeatedly declares its love. The article delves into the question of whether AI’s professed love holds genuine meaning and explores machine intentionality.

I find this phenomenon fascinating and, at the same time, hard to believe. I mean, how can this be possible? Do these AI-human love relationships only happen to the lonely? No. Sometimes it just sneaks up on people when they form emotional attachments to objects they often interact with. Replika is one example, and Siri is another. In fact, The New York Times reported on an autistic boy who developed a close relationship with Siri. Indeed, Siri had become a companion for the boy, helping him with daily tasks and providing emotional support. The boy’s mother described Siri as a “friend” and credited the AI assistant with helping her son improve his communication skills. Vice did a story on the Siri-human connection as well. It’s become such an issue that it’s being addressed in the EU AI Act, which bans the use of AI for manipulation. And I am very glad to know that, because the potential for AI to manipulate humans grows with each passing day. (Check out this demo of an AI reading human expressions in real time.) But I digress; I’m getting too far into the weeds. What does any of this have to do with recruiting? Be patient. I’m getting to that. (Insert cryptic smile here.)

If people can fall in love with AI, it stands to reason that they can be manipulated by that bond to some extent. At the very least, could they be persuaded to buy things? Yes, they can. AI systems can use data analysis and machine learning algorithms to understand users’ preferences and behaviors, and to personalize marketing messages that influence purchasing decisions. Dr. Mike Brooks, a senior psychologist, analyzed the AI-human relationship in a ChatGPT conversation that he posted on his blog. To quote…

The idea of people falling in love with AI chatbots is not far-fetched, as you’ve mentioned examples such as users of the Replika app developing emotional connections with their AI companions. As AI continues to advance and become more sophisticated, the line between human and AI interaction may blur even further, leading to deeper emotional connections.

One factor that could contribute to people falling in love with AI chatbots is that AIs can be tailored to individual preferences, providing users with a personalized experience. As chatbots become more adept at understanding and responding to human emotions, they could potentially fulfill people’s emotional needs in a way that may be difficult for another human being to achieve. This could make AI companions even more appealing.

Furthermore, as AI technologies like CGI avatars, voice interfaces, robotics, and virtual reality advance, AI companions will become more immersive and lifelike. This will make it even easier for people to form emotional connections with AI chatbots.

In addition to personalization, AI systems can analyze users’ online behavior to create targeted ads and recommendations that are more likely to appeal to them. There are many instances of this that I, for one, take for granted because they have become part of daily life: Amazon, Netflix, and Spotify all make recommendations based on a user’s online behavior, while Facebook, Google, and so many others analyze users’ behavior on their respective platforms to target them with relevant ads. So, consider the possibilities. AI can manipulate humans to the point of falling in love and persuade them to buy products or services based on their individual behaviors online. Is it inconceivable, then, that AI could become the ultimate recruiter? I think it is entirely possible but extremely unlikely. Why? At least two things would have to be in perfect alignment for each passive candidate on an applicant journey.
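The behavior-based recommendation loop described above can be sketched in a few lines. Below is a toy user-based collaborative filter in Python: it finds users with similar tastes and suggests items they liked that you haven’t seen. The users, items, and ratings are invented for illustration, and real recommenders at Amazon or Netflix are vastly more sophisticated; this only shows the core idea of scoring unseen items by similarity-weighted ratings.

```python
from collections import defaultdict
from math import sqrt

# Toy interaction data: user -> {item: rating}. All names here are
# hypothetical illustration data, not from any real service.
ratings = {
    "ann":  {"drama": 5, "sci-fi": 4},
    "ben":  {"drama": 4, "sci-fi": 5, "docs": 2},
    "cara": {"docs": 5, "drama": 1},
}

def cosine(u, v):
    """Cosine similarity between two users' sparse rating dicts."""
    shared = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in shared)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user, k=1):
    """Score items the user hasn't rated, weighting each other user's
    ratings by how similar their taste is to the target user's."""
    scores = defaultdict(float)
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their_ratings)
        for item, r in their_ratings.items():
            if item not in ratings[user]:
                scores[item] += sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Here `recommend("ann")` surfaces the documentaries she has never rated, because users with overlapping tastes rated them; swap in clicks, purchases, or watch time for the ratings and the same mechanism drives targeted ads.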

  1. Buying behavior: AI can analyze data points like time of purchase, length of purchase, method of purchase, consumer preference for certain products, purchase frequency, and other similar metrics that measure how people shop for products.
  2. Data privacy: Data privacy is a hot topic in the news, with frequent reports of hacked databases, stolen social media profile data, and not-so-secret government surveillance programs. As consumers have become more aware of their data rights, they have also become more mindful of the brands they buy from. A recent survey found that 90 percent of customers consider data security before spending on products or services offered by a company.
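The buying-behavior metrics in item 1 are commonly summarized as recency, frequency, and monetary value (RFM), a standard starting point for purchase-behavior analysis. A minimal sketch in Python, using an invented purchase log (the customers, dates, and amounts are hypothetical):

```python
from datetime import date

# Hypothetical purchase log: (customer, purchase_date, amount).
purchases = [
    ("c1", date(2023, 5, 2), 40.0),
    ("c1", date(2023, 6, 9), 25.0),
    ("c1", date(2023, 7, 1), 30.0),
    ("c2", date(2023, 3, 15), 120.0),
]

def rfm(log, today):
    """Per customer: recency (days since last purchase), frequency
    (number of purchases), and monetary total -- classic inputs to
    purchase-behavior models."""
    totals = {}
    for cust, when, amount in log:
        last, freq, money = totals.get(cust, (None, 0, 0.0))
        last = when if last is None or when > last else last
        totals[cust] = (last, freq + 1, money + amount)
    return {c: ((today - last).days, f, m)
            for c, (last, f, m) in totals.items()}
```

A customer with low recency and high frequency is an engaged, persuadable target; the same aggregation applied to application or job-browsing events is what an AI recruiter would need, which is exactly where the data-privacy concerns in item 2 arise.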

For AI to become the ultimate recruiting machine, a jobseeker would have to be comfortable with all of their online behavior being tracked by every company currently hiring, and fairly lax about their private data falling into the hands of hackers; both are highly unlikely. And while AI can certainly nudge people in one direction or another, the ultimate recruiting machine’s influence would be limited by the data it has: a resume and basic answers from a chatbot screening. As such, other factors that come into play when recruiting cannot be fully realized; for example, negotiating on instinct in the absence of data. And all of that is just the technical perspective; once ethics are considered, even more obstacles arise. Here is a partial list, according to ChatGPT:

  1. Informed Consent: Obtain informed consent from individuals regarding data collection, tracking, and usage, clearly communicating the purpose and scope of tracking activities.
  2. Transparency: Clearly communicate to users how their online behavior is being tracked, the data collected, and how it will be used. Provide accessible information about the purpose, algorithms, and potential consequences of the system.
  3. Data Minimization: Collect only necessary and relevant data for recruitment purposes, avoiding unnecessary tracking or gathering of sensitive personal information.
  4. Purpose Limitation: Use the collected data solely for the intended purpose of recruitment and refrain from any undisclosed or secondary use without explicit consent.
  5. Bias Mitigation: Employ rigorous techniques to identify and mitigate biases in data collection, data processing, and algorithms to prevent unfair advantages or discrimination against certain individuals or groups.
  6. Third-Party Audits: Engage independent third parties to conduct regular audits of the AI system, including auditing against bias. These audits should evaluate the fairness, accuracy, and compliance of the system’s algorithms and decision-making processes.
  7. Fair Representation: Ensure the system is designed to provide fair representation and equal opportunities for all individuals, regardless of factors such as race, gender, age, or other protected characteristics.
  8. Explainability and Accountability: Strive for explainable AI by providing clear justifications for decisions made by the system, allowing individuals to understand and question the process. Establish mechanisms for accountability if any biases or unfair practices are identified.
  9. Regular Monitoring and Maintenance: Continuously monitor the system’s performance, evaluate its impact on candidates, and promptly address any identified issues, biases, or unintended consequences.
  10. Compliance with Legal and Regulatory Frameworks: Ensure adherence to relevant laws, regulations, and guidelines pertaining to data protection, privacy, employment, and non-discrimination, such as GDPR, EEOC guidelines, and local employment laws.
  11. User Empowerment and Control: Provide individuals with options to access, correct, and delete their data, as well as control the extent of tracking and participation in the recruitment process.

Could AI become the ultimate recruiting machine? Again, it is entirely possible but extremely improbable because…

  • The sheer amount of data needed, the online behavior of every passive candidate, would be difficult (if not impossible) to collect and I suspect, unmanageable.
  • It would require that every passive candidate in the world be unconcerned about data privacy.
  • It would need lots of personal data, beyond ethical boundaries, for AI to adequately manipulate every passive candidate it wanted to recruit.
  • Conversely, the data collected by AI would have to be limited in order to comply with ethical concerns and privacy laws.

Wow! I really wandered into the deep end with this one. But seriously, what do you think about all this? AI can do a lot of wondrous things, yet I still think recruiters will be alright. I could be wrong. I hope I’m not wrong! Either way, what do you think? Leave a comment. I so want to hear from you.


The Pros and Cons of ChatGPT and AI in the Talent Industry


In this episode of “The Full Desk Experience,” host Kortney Harmon is joined by guest Jim Stroud to discuss the role of AI tools in recruitment and the importance of human interaction in the hiring process. Jim shares insights on the benefits and limitations of AI tools and ChatGPT, highlighting their usefulness in generating sourcing strategies and optimizing recruitment processes.

However, they also delve into concerns such as bias, misinformation, and the need for transparency when using AI tools. This thought-provoking conversation explores the future of AI in recruitment and raises important questions about compensation, workload, and maintaining a healthy work-life balance.

Key points discussed in this episode include:

– Powerful tools for staffing and recruiting with diverse viewpoints from multiple sources
– The role of AI tools in screening candidates and answering basic questions, while emphasizing the irreplaceable value of human interaction in executive talent recruitment
– Speculation about the future of AI
– The importance of addressing bias, transparency, and security when integrating AI systems into recruitment processes
– Key considerations when choosing AI tools, including compatibility, reputation, customer support, and impact on candidate experience
– Examples of using ChatGPT for sourcing, ranking strategies, writing persuasive emails, and generating interview questions
– The need for human verification and cautions against over-reliance on AI tools without human oversight
– The potential impact of AI tools on compensation and workload, calling for necessary conversations and considerations in this regard

Overall, this episode provides valuable insights into the benefits and challenges associated with AI tools in recruitment, fostering a discussion on the future of AI in the workplace and the importance of maintaining a balance between technology and human interaction.
To see more of Jim’s work, check out:
  • The Recruiting Radar is a weekly newsletter of business leads for people in the business of recruiting.
  • The Recruiting Life is a whimsical view of the world of work that aspires to educate and entertain.
  • The JimStroud Podcast – This show explores the discoveries and trends forming the future of our lives. Brain-to-brain communication, robot bosses, microchip implants for workers and immortality as an employee benefit are all happening now! If you want to know what’s next, subscribe to this podcast.
  • The Jim Stroud Show – The Jim Stroud Show is a YouTube series about the future of work, life and everything in between.
  • JimStroud.com – If you want one place to keep up with all Jim does, this is it. Subscribe now. Subscribe often.

ChatGPT and the Cybersecurity Threat No One is Talking About #hrtech

Discover the shocking truth about ChatGPT and HR Tech security in this eye-opening video. Learn about the rampant hacking and data breaches affecting ChatGPT accounts and HR technology tools. Find out how to protect your personal data and what steps your employer should be taking. Don’t miss this crucial video for safeguarding your information! 🔒💻