
The Risks of Using AI in HR

Updated: Apr 15



[Illustration: a colorful, minimalist workplace HR scene - three people at a desk, with profile icons, puzzle pieces, a gauge, and a star representing HR processes and evaluation.]

AI applications in HR carry significant risks that deserve more critical attention, attention that many of the large HR organizations continuously fail to provide (SHRM, I'm talking about you). Yes, AI can do a lot, but just because it can doesn't mean it should. Additionally, many of the emerging AI companies marketing their brand new, revolutionary HR products were founded by people with virtually no background in anything remotely related to HR or HR compliance, making these products high-risk and usually not very effective.


Below are the biggest risks that anyone in HR needs to consider before implementing these tools. 


Bias and Discrimination Concerns


  1. Amplified Historical Biases: AI trained on past hiring data can perpetuate or magnify existing workplace discrimination. This is a massive problem, and several US states now have strict compliance requirements around evaluating the tools used in critical workflows like recruiting. 

  2. Protected Class Impacts: Algorithms may create disparate impacts on candidates based on race, gender, age, or disability status. Again, specific laws are actively being enforced to prevent this, and you’d be surprised how many companies are still churning out products that do not mitigate this risk. 

  3. "Black Box" Decision-Making: Lack of transparency in how hiring decisions are made creates legal vulnerability. Never use tools that provide you with an output in which the logic cannot be explained. 

  4. Proxy Discrimination: AI can identify subtle patterns that serve as proxies for protected characteristics. 


NY, CA, MD, IL, WA, and CO all have specific laws governing the use of AI in interviews and other automated decision-making processes. This will only get more complex in the US, with more states inevitably following suit as more issues emerge with AI in the workplace.


AI in HR: Legal and Compliance Risks


TL;DR: there are many, and they are complicated. 


Many of these overlap with the bias and discrimination risks above, with an added layer of jurisdictional complexity as it pertains to privacy and security regulation. A few of the highest risks to be aware of:


  • Regulatory Violations: AI tools may violate employment laws like the ADA, Title VII, the ADEA, and EEOC guidelines. Even if you don't realize a tool is violating them, you are still responsible for the outcomes. 

  • Documentation Deficiencies: Inability to explain algorithmic decisions when challenged by candidates or regulators. That lack of transparency works against the organization and the people using the tool.

  • International Compliance Issues: Different jurisdictions have different requirements for automated employment decisions, and these are actively changing as this technology rapidly evolves. 

  • Emerging Legislation: A growing body of AI-specific employment laws (like NYC's Local Law 144) requires bias audits - if you are going to implement these tools, make sure you have resources who understand the compliance mandates that come with them. 


Privacy and Data Security


Even outside of HR, AI tools raise many privacy and security risks, especially generative AI products. The highest-risk workflows to watch closely are recruiting (especially if you are interviewing outside the US), benefits administration (where PHI could come into play), performance management, and workplace monitoring.


  • Candidate Data Collection: Excessive gathering of personal information without clear purpose or consent. Jurisdictions make this really tricky, especially when handling candidates covered under the GDPR, and even in certain US states. 

  • Data Security Vulnerabilities: Sensitive applicant and employee information becoming subject to breaches - this is a real risk with most of these tools, even outside of HR. 

  • Employee Surveillance: AI-powered monitoring systems creating privacy invasions in the workplace. Collect what is necessary, and understand what is being done with the data when it’s collected. 

  • Informed Consent Issues: Candidates are often unaware of how their data is being processed by AI systems.


I can’t stress enough how important it is to actually read the privacy policies of these AI tools before even creating an account. The lower the price, the less control you usually have over the data being processed and collected. HR departments typically handle some of the most high-risk and sensitive data in the workplace, so don’t willy-nilly throw that data into a new tool without understanding how it’s protected. 


Workplace and Cultural Impacts


  • Dehumanized Experience: Over-reliance on AI reduces human connection during recruitment and across the entire employment lifecycle. AI chatbots are annoying, and even more so when someone is facing a crisis and needs assistance quickly. 

For those who are bullish on AI chatbots for HR support, try calling your bank and tell me how many times you say "representative" or press "0." A seemingly great solution in theory, but not so much in practice. Rippling's AI chatbot is another great example of why - I would rather watch paint dry than try to reach that support team. 

  • Employee Trust Erosion: Perception that critical career decisions are made by impersonal algorithms. Never outsource critical employment decisions to a piece of technology; it's that simple. 

  • Skills Measurement Limitations: AI struggles to evaluate soft skills, creativity, and cultural fit. Why? Because it's not human. You also shouldn't need AI for this if you have a strong system in place to measure the skillsets and progress of your workforce. 

  • Workforce Homogenization: Risk of creating teams lacking diversity of thought and experience. 


HR already takes a lot of heat on social media and out in the wild for not being effective or trustworthy, making it critical to prioritize strong relationships with the workforce over efficiency when providing support and demonstrating the value of internal HR. You can't make something more efficient if it's not effective to begin with, so make sure your HR workflows are effectively supporting the workforce before trying to speed them up.


Technical and Implementation Risks


The HR world is way too bullish on implementing AI without providing accessible ways for practitioners to learn and understand the technology (without paying $500+ for a sub-par course). The biggest risks being realized today include tools being brought on board and then used incorrectly, or user adoption falling off completely. A few of the common problems we've been observing: 


  • Inappropriate Use Cases: Applying AI to HR decisions where algorithmic approaches are unsuitable. Just because it can doesn't mean it should. 

  • Data Quality Problems: Training models on incomplete, outdated, or non-representative workforce data. The quality of the outputs is determined by the quality of the data going in. 

  • Lack of Human Oversight: Insufficient review of AI recommendations by qualified HR professionals who understand both how the technology works and the compliance requirements involved. Governance is critical and truly isn't optional in HR workflows. 

  • Overconfidence in Results: Treating AI outputs as inherently objective despite underlying limitations. A related issue is being naive about what those limitations actually are, believing the product is the solution instead of a tool. 


Organizations implementing AI in HR functions need to adopt robust governance frameworks, conduct regular bias audits, maintain human oversight of all significant decisions, and develop transparent processes for candidates and employees to understand and challenge automated assessments. Consider the risks before bringing new technology on board; working with legal and compliance teams will be crucial to successfully integrating AI into your HR tech stack.
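To make "bias audit" concrete: one common starting point is the EEOC's four-fifths (80%) rule for adverse impact, where a group's selection rate below 80% of the highest group's rate is a red flag warranting deeper review. Below is a minimal sketch of that check; the group names and counts are hypothetical illustration data, and a real audit (e.g. under Local Law 144) involves far more than this single ratio.

```python
# Minimal sketch of an adverse-impact check using the EEOC
# "four-fifths rule". Hypothetical data only - not a substitute
# for a formal bias audit.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total_applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Return each group's impact ratio (its selection rate divided
    by the highest group's rate) and whether it falls below the
    four-fifths threshold."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (r / top, r / top < threshold) for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical hiring outcomes: (hired, applicants) per group.
    hypothetical = {"group_a": (48, 100), "group_b": (30, 100)}
    for group, (ratio, flagged) in four_fifths_flags(hypothetical).items():
        print(f"{group}: impact ratio {ratio:.2f}, flagged={flagged}")
```

Running this on the hypothetical data flags group_b (0.30 / 0.48 = 0.625, below 0.8). The point is not the math, which is simple, but that someone in your organization should be running checks like this on any AI-assisted selection workflow and know what to do when a flag appears.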


The Takeaway


HR is a difficult job because it requires a broad skillset and, most importantly, the ability to account for nuance in just about every situation involving the workforce in any way, shape, or form. Before implementing this technology into your HR tech stack, critically evaluate whether it's going to be 1) effective, 2) efficient, and 3) necessary. Educate yourself on the risks involved and read the policies, including those pertaining to data privacy and security. Most importantly, do your research and don't get sucked into the hype machine. 

