The ground rules and policies for using ChatGPT at work

William Tsu
Data Analyst
Experienced data analyst working with data visualization, cloud computing and ETL solutions.
July 10, 2023


In the workplace, the use of generative AI tools like ChatGPT is growing in popularity. These tools can generate text, analyse and solve problems, translate between languages, summarise complex content, and even generate code for software applications. For instance, they can draft a new marketing campaign, website code, or customer-facing emails in a matter of seconds.

As organisations investigate the potential applications of generative AI and employees informally use the technology to help with daily activities, it is crucial to develop and implement a company policy that regulates such use. This is a problem worth thinking about because employees in most firms are likely already using ChatGPT or other such applications for work-related activities.

Why Do We Need a Policy?

It is advisable to have a company policy guiding the usage of generative AI technologies for a variety of reasons. First, since generative AI tools are not always good at determining the quality of the data they consume, their output may contain biased, erroneous, or misleading analysis and conclusions. Employees will be more likely to validate or verify all output if there is a clear policy in place and they are aware of it.

Second, anything shared with publicly accessible generative AI tools runs the risk of being made available to others for quality assurance or bug-fixing purposes. Although some applications give users the option to opt out of such uses of their input data, it is unclear how effective this actually is. By making sure staff are aware of the problem and think carefully before sharing personal, proprietary, or otherwise private information with generative AI tools, the organisation can better protect sensitive data and uphold its confidentiality requirements.

Third, an organisation can reduce the legal risks arising from improper use of generative AI technologies by putting a business policy in place. A well-written policy will outline the permissible uses and lessen the possibility of legal problems resulting from employees' improper use of the tools. This is analogous to the policies that already govern how employees use other IT and communication technologies offered by their employers, as well as the internet and social media.

Fourth, a well-crafted policy can educate staff members about the intellectual property ownership concerns that arise when generative AI technologies are used to produce content, and those concerns can in turn shape use restrictions and other policy provisions.

Finally, business policies can help ensure that staff members use generative AI tools responsibly and effectively, maximising the advantages for the company while minimising potential inefficiencies and distractions.

Creating and Implementing a Corporate Policy

When creating a business policy, the following considerations should be taken into account:

1. Scope.

The first step is to decide which specific uses of generative AI technologies employees should be permitted. This could involve, among other things, writing emails, creating reports, doing research, or writing software code. By defining the permissible uses, an organisation can forbid employees from using these technologies for high-risk activities such as making investment or employment-related decisions, or deciding whether to offer services to particular individuals. It is also crucial to determine the policy's intended audience and whether any supplemental policies ought to apply to particular teams. For instance, the HR team's usage of generative AI tools may raise different issues and risks from those raised by software engineers.

2. Establish rules for the confidentiality and protection of data.

When using generative AI technologies to handle sensitive material (such as personal data and confidential or proprietary information), establish clear protocols that staff members must adhere to. This could include outright prohibitions against disclosing such information, or guidelines for doing so safely. It is crucial to establish precise security rules that are consistent with other security policies and procedures, such as those relating to the storage of generated material and the deletion of sensitive data after use. One way to back such rules with tooling is sketched below.
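
As a purely illustrative example, a rule like "never paste customer contact details into a public AI tool" can be supported by a small screening step before a prompt leaves the company. The patterns, the redact helper, and the sample prompt below are all hypothetical; a real deployment would use a proper data-loss-prevention tool chosen with the security team.

```python
import re

# Illustrative patterns only; real DLP rules would be far more thorough.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace obviously sensitive substrings before text is sent to an external tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "Draft a reply to jane.doe@example.com, phone +44 7700 900123, about her invoice."
print(redact(prompt))
# Draft a reply to [REDACTED EMAIL], phone [REDACTED PHONE], about her invoice.
```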

3. Train your staff.

Take into account the employee communication strategy and any necessary training. To make sure that staff members are aware of the policy and understand how to use generative AI technologies responsibly, it may be beneficial to offer training. Training could cover the principles of the policy as well as practical advice for using the tools in a way that complies with it.

4. Monitor compliance.

To maintain compliance with the policy, it will be crucial to continually monitor how employees are using generative AI tools. Consider putting in place an auditing procedure to examine staff tool usage and the resulting output. To help guarantee that results are accurate, employees should be encouraged to let coworkers know when a work product has been developed using generative AI. A minimal sketch of what such an audit trail might look like follows.
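
In the sketch below, the file name and record fields are assumptions for illustration; the point is simply that each generative-AI call leaves a reviewable record.

```python
import json
import datetime
from pathlib import Path

AUDIT_LOG = Path("genai_audit.jsonl")  # hypothetical location for audit records

def log_usage(user: str, tool: str, prompt: str, output: str) -> None:
    """Append one audit record per generative-AI call for later review."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "output_preview": output[:200],  # keep a preview, not the full output
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_usage("w.tsu", "ChatGPT", "Summarise Q2 sales notes", "Q2 sales grew...")
```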

5. Refresh the policy.

It is crucial to periodically review and update your organisational policy to accommodate new developments and potential hazards as generative AI tools improve. It may be useful to identify the team or person in charge of this review and to set a review date.

There are numerous real-world uses for ChatGPT in the workplace, such as:

1. Customer Support

ChatGPT is a great tool for customer support. Many companies use ChatGPT to deliver customer service and give clients a quick and easy way to ask for assistance.

With ChatGPT-powered chatbots and real-time messaging, customer service teams can respond quickly to client questions and deal with problems effectively. One possible implementation is sketched below.
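
A support chatbot can be built on OpenAI's chat completions API. The model name, system prompt, and company name in this sketch are assumptions; adapt them to whatever your account and policy allow.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def answer_support_question(question: str) -> str:
    """Send one customer question to the model with a support-focused system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": "You are a polite support agent for Acme Ltd (a hypothetical "
                        "company). If you are unsure of an answer, escalate to a human."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_support_question("How do I reset my password?"))
```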

2. Content Creation

ChatGPT is also helpful for content production. Employees can use ChatGPT to create copy for a variety of purposes, including emails, ads, and product descriptions. It can even generate code for websites and applications.

3. Analytics

ChatGPT can also support business analytics. By examining chat logs, businesses can learn important things about team productivity and communication.

Chat logs can offer useful input for year-end assessments, enabling managers to evaluate team performance, pinpoint areas that need development, and monitor progress over time. They can also be used to examine customer interactions, helping organisations better understand their clients' wants and needs. A small sketch of this kind of log analysis follows.
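
The analysis itself need not involve AI at all; even standard-library Python can pull simple productivity signals out of a chat log. The three-field log schema here is an assumption for illustration.

```python
from collections import Counter
from statistics import mean

# Assumed schema: (agent, customer_question, seconds_to_first_reply)
chat_log = [
    ("alice", "Where is my order?", 42),
    ("bob", "How do I get a refund?", 310),
    ("alice", "Where is my order?", 55),
]

chats_per_agent = Counter(agent for agent, _, _ in chat_log)
avg_first_reply = mean(seconds for _, _, seconds in chat_log)
common_questions = Counter(question for _, question, _ in chat_log).most_common(3)

print(chats_per_agent)                            # workload per agent
print(f"{avg_first_reply:.0f}s avg first reply")  # responsiveness
print(common_questions)                           # recurring customer needs
```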

4. Coding

Two of the most common workplace uses of ChatGPT are writing new code and reviewing existing code. Many programmers report that ChatGPT has considerably improved their productivity and efficiency. The sketch below shows one way to request a code review programmatically.
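
The model name and prompts here are illustrative assumptions, and any suggestions the model returns still need human review.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()

snippet = '''
def average(xs):
    return sum(xs) / len(xs)  # crashes on an empty list
'''

review = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system",
         "content": "You are a strict code reviewer. List bugs and edge cases, "
                    "then suggest a corrected version."},
        {"role": "user", "content": f"Review this Python function:\n{snippet}"},
    ],
)
print(review.choices[0].message.content)
```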

5. Document Editing

Because ChatGPT is a language model trained on millions of texts, it is skilled at editing. Workers submit awkwardly phrased paragraphs to ChatGPT, which corrects the grammar, adds clarity, and generally makes the text easier to read, as in the sketch below.
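
The same chat API pattern can drive editing; instructing the model to return only the edited text keeps the output easy to paste back into a document. The model name is again an assumption.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()

def polish(paragraph: str) -> str:
    """Ask the model to fix grammar and clarity without changing the meaning."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": "Edit the user's text for grammar and clarity. Preserve the "
                        "meaning and return only the edited text."},
            {"role": "user", "content": paragraph},
        ],
    )
    return response.choices[0].message.content

print(polish("Me and the team has went over the report, it need some changes doing."))
```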

6. Fact-Checking

Employees use ChatGPT in the same way they would use Wikipedia or Google to check the accuracy of information in documents they are creating or reviewing.

Alongside these uses, there are a few ground rules worth reinforcing with staff.

Use ChatGPT Only for Business Purposes

While ChatGPT can be a great resource for information gathering, it is important to remind staff members that it is a workplace tool. Tell your staff not to use ChatGPT for personal or recreational purposes, and to use it only for business-related tasks.

Keep Your Information Private

Tell your staff to protect confidential information when using ChatGPT. They should be made aware that ChatGPT messages are not private. Encourage your staff to exercise prudence and refrain from sharing sensitive information in chat.

Be Aware of When to Use ChatGPT

Although ChatGPT is a flexible tool, it might not always be the best choice for every circumstance. For instance, lengthy written content, such as articles and newsletters, needs a human touch. Teach your staff when it is appropriate to use a different tool or to finish the job by hand.

Workplace Risks of ChatGPT

Despite its impressive performance, ChatGPT occasionally yields unreliable results. Asked to draft parts of a legal brief, for example, it has been known to cite unrelated or made-up cases. Because it is a language model, it can struggle with computational tasks and give inaccurate answers even to simple algebra problems. OpenAI is well aware of these limitations, and ChatGPT routinely cautions users that it might produce inaccurate results. Additionally, it lacks information about global events that took place after 2021. These risks can be reduced when the person reviewing ChatGPT's output can spot and fix such errors right away; however, the quality-control risk increases if the reviewer is unable to rapidly pinpoint what is incorrect or missing from ChatGPT's response.

Conclusion

The widespread use of generative AI tools like ChatGPT in the workplace presents organisations with both opportunities and problems. By developing and implementing a comprehensive corporate policy governing employee use, organisations can harness the full potential of these tools while mitigating potential risks and pursuing compliance with legal and regulatory requirements. Of course, adherence ultimately depends on the integrity of the employees to whom the policy is addressed.