The number of employers using generative artificial intelligence (AI) is skyrocketing: nearly one in four organizations reported using automation or AI to support HR-related activities, including recruitment and hiring, according to a 2022 survey by the Society for Human Resource Management (SHRM). In recruiting in particular, companies should be aware of the risks of using AI in the hiring process or to write job descriptions.
As more workers enjoy the benefits of AI tools like ChatGPT, some company leaders are growing concerned about employees inputting sensitive information into these tools. That concern has led companies such as JPMorgan Chase, Accenture, and Amazon to limit or ban employee use of them.
Samsung Electronics Co. has banned employee use of popular generative AI tools after discovering that staff had uploaded sensitive code to one such platform. According to an internal memo, the company is concerned that data transmitted to AI platforms such as Google Bard and Bing is stored on external servers, making it difficult to retrieve and delete, and could end up being disclosed to other users.
Additionally, in a public address to employees, Verizon stated: “ChatGPT is not accessible from our corporate systems, as that can put us at risk of losing control of customer information, source code and more…as a company, we want to safely embrace emerging technology.”
Should companies trust that their employees are using this new tool in a way that doesn’t put important information at risk? A February 2023 poll of 62 HR leaders by consulting firm Gartner found that about half of them were formulating guidance on employees’ use of ChatGPT, Bloomberg reported.
While some companies are banning OpenAI’s ChatGPT and other generative AI tools outright, others are taking a more targeted approach: restricting only the kinds of sensitive information (source code, for example) that could be exposed by entering it into an AI tool.
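One way to make such targeted restrictions concrete is to screen prompts for sensitive content before they leave the corporate network. The sketch below is a minimal, hypothetical example; the pattern names and regular expressions are illustrative assumptions, not drawn from any particular vendor’s product or policy.

```python
import re

# Illustrative patterns a company might flag before a prompt is sent to an
# external AI tool. These regexes are simplified examples, not exhaustive.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def screen_prompt(prompt: str) -> list:
    """Return the names of sensitive patterns found in a prompt.

    An empty list means the prompt passed this (illustrative) screen.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

findings = screen_prompt("Debug this: AKIA1234567890ABCDEF uploads fail")
if findings:
    print("Blocked; prompt contains:", findings)
```

A real deployment would pair a screen like this with the policy language itself, so employees know in advance which categories of data (customer records, source code, credentials) are off-limits.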
Creating a generative AI policy for a company involves establishing guidelines and principles for using such tools effectively and responsibly.
Here’s what to consider when building a policy for using AI tools:
Remember that developing a generative AI policy requires input from stakeholders across different departments, including legal, IT, HR, and customer support. It’s important to strike a balance between leveraging the capabilities of the model and ensuring responsible and ethical use.
In addition to the areas outlined above, SHRM has an excellent resource on How to Create the Best ChatGPT Policies.