Establishing Workplace Policies on Artificial Intelligence

Artificial intelligence (AI) has made its way into many workplaces nationwide and is rapidly changing how organizations operate and make decisions. In some cases, employees may be using AI tools without their employers’ permission or knowledge. While this technology presents opportunities for organizations, including enhanced workflows, streamlined operations, and improved customer experiences, it has limitations and exposures that employers need to consider.

Implementing workplace policies can help employers understand and protect against the potential legal, business, and reputational risks associated with using AI tools. Therefore, now is the time for employers to start considering how best to create and enforce policies that address the use of AI technology in the workplace.

 

General Considerations

Many employers are using AI systems to sort through resumes, create job postings, streamline the hiring and onboarding processes, and automate many HR functions. While this technology can help improve organizations’ operational efficiencies, it presents certain risks. For example, AI algorithms can reinforce biased or discriminatory hiring practices, even when the bias is unintentional. Additionally, the increased monitoring of employee activities that AI tools enable can raise privacy issues. As the integration of AI systems becomes more widespread, anticipating the issues this technology may pose in the workplace is increasingly essential.

Despite the potential risks of using AI tools, laws and regulations haven’t kept pace with employers’ adoption of this technology. While some existing laws address AI-related issues, the technology as a whole remains a relatively new legal area. There’s currently a patchwork of federal and state regulations that address aspects of using AI tools in the employment context; however, new legal issues will likely continue to emerge as AI technology becomes more advanced.

Because AI technology in the workplace is largely unregulated, there are many gray areas employers must navigate. Employers can establish governance policies and procedures to evaluate and monitor AI tools as well as assess the long-term impacts of these tools. Understanding how AI tools are used in the workplace can direct employers as they develop related policies. Existing workplace policies may already address some AI-related risks, but employers may need to reevaluate these policies to address specific concerns. This can help ensure that organizations use AI tools responsibly and integrate such technology to complement human activity in the workplace.

For employers operating in multiple states, the use of AI tools can present compliance challenges due to differing federal and state laws regulating this technology. In particular, using certain AI tools in the workplace may be illegal in some jurisdictions or subject to different requirements in others. As such, organizations with employees working in different states must design their policies to navigate these variations.

In addition, adopting AI tools may lead many workers to feel their jobs are threatened. That’s why it’s important for employers to understand the impact that introducing AI technology may have on employees’ well-being and to consider ways to support them during the transition, such as establishing clear policies and educating and training employees on the roles and functions of AI tools in the workplace.

 

Data Privacy and Surveillance

AI technology can collect and analyze data to help increase workforce and organizational productivity, allowing employers to refine their approaches based on AI-derived insights or employee performance tracking. However, employers must consider employees’ privacy rights when doing so and institute effective policies to outline and protect those rights.

Some jurisdictions have imposed consent and notice requirements for using AI tools in the workplace. Currently, New York, Delaware, and Connecticut require employers to notify employees of electronic monitoring. Other states have implemented consent and notice requirements for using AI technology as an interview tool; in Maryland, for example, an employer cannot use facial recognition software during an interview unless the interviewee signs a waiver. Establishing policies to address these issues, including disclosing to applicants and employees how such technology is used, can help ensure that increased monitoring through AI tools doesn’t become intrusive or reveal private or confidential information.

 

Copyright and Intellectual Property Rights

AI-generated content can violate copyright laws or infringe on third-party intellectual property rights, and feeding company information into AI tools can compromise confidentiality. For instance, conversations employees have with AI chatbots may be reviewed by AI trainers, inadvertently disclosing sensitive business information and trade secrets to third parties and potentially exposing employers to legal risks under privacy laws. Additionally, before using any content generated with AI tools, employers should consider its legal status, how it’s protected, and who holds the rights to use it. Employers can review and update their confidentiality and trade secret policies to ensure they cover third-party AI tools. Organizations can also train employees on potential copyright and intellectual property issues, ensuring the inputs used to create AI-generated content don’t include protected or confidential data. Employers can also restrict access to AI tools to reduce their legal risks.

 

Anti-discrimination Concerns

Using AI technology can lead to intentional and unintentional discrimination in the workplace, resulting in costly lawsuits or investigations. For example, AI algorithms used to make employment decisions may be based on historical data sets that are biased or discriminatory, such as data that benchmark resumes or job requirements against protected characteristics like age, race, gender, or national origin. As a result, employers should be cautious when developing, applying, or modifying the data used to train and operate AI tools that make employment decisions.

The U.S. Equal Employment Opportunity Commission (EEOC) identified AI technology as a priority subject matter in its 2023-2027 Strategic Enforcement Plan, signaling a potential increase in AI-related enforcement actions. The agency has also issued guidance regarding employers’ use of algorithms and AI tools when making hiring or other employment decisions to ensure those decisions don’t violate employees’ federal civil rights, and its Artificial Intelligence and Algorithmic Fairness Initiative, launched in 2021, aims to ensure that workplace use of AI tools complies with federal civil rights laws. While many employers likely already have anti-discrimination policies in place, they can consider instituting bias audits to impartially evaluate the disparate impacts of their AI tools on protected classes. They can also review their AI-based compensation management tools to ensure they don’t violate pay equity laws. Organizations should consider applying the same scrutiny to any vendors they use.

 

Ethical Issues

As AI tools become more advanced, employers’ ability to control this technology will likely become more limited. That’s why it’s important that organizations establish policies to ensure the ethical use of AI tools. While there are still many unknowns when it comes to AI tools, employers should establish policies to account for what is known and reevaluate those policies regularly as the technology evolves.

 

Employer Takeaway

AI technology is revolutionizing the employment landscape. As more organizations embrace this technology, establishing proper workplace policies can help employers protect against related risks and prevent potential violations. Being proactive in creating AI-related policies and procedures can help employers identify their exposures and outline strategies to address them.

 

Being in the know is the first step to protecting yourself and your business from cyber fraud. Choice Bank is committed to providing you with up-to-date resources and tips to help you stay informed. Learn more at bankwithchoice.com/cybersecurity.

 

This HR Insights is not intended to be exhaustive nor should any discussion or opinions be construed as professional advice. © 2023 Zywave, Inc. All rights reserved.