A recent Reuters/Ipsos poll reveals that a growing number of workers across the United States are turning to ChatGPT, a conversational AI program, to assist with basic tasks. However, this trend has led major employers like Microsoft and Google to curb its use over security concerns and the risk of intellectual property leaks.
Around the world, companies are grappling with the best ways to leverage ChatGPT, which employs generative AI to engage in conversations and respond to a wide range of prompts. While many find it beneficial for tasks such as drafting emails, summarizing documents, and conducting preliminary research, security firms and companies have raised alarms about the potential risks.
The Reuters/Ipsos poll, conducted between July 11 and 17, revealed that 28 percent of respondents regularly use ChatGPT at work. However, only 22 percent stated that their employers explicitly allow the use of external AI tools. The poll, which surveyed 2,625 adults across the US, had a credibility interval of approximately 2 percentage points.
The results also showed that 10 percent of those polled indicated that their bosses had explicitly banned external AI tools, while about 25 percent were uncertain about whether their companies permitted the use of such technology.
ChatGPT’s rapid rise to popularity since its November launch has generated both enthusiasm and concerns. OpenAI, the developer behind ChatGPT, has faced regulatory conflicts, particularly in Europe, where its data-collecting practices have drawn criticism from privacy watchdogs.
One key worry is twofold: human reviewers at AI providers may read the generated chats, and the AI itself could reproduce data it absorbed during training, either of which poses a risk to proprietary information. Ben King, VP of customer trust at corporate security firm Okta, emphasized that users often do not understand how their data is used in generative AI services, which poses a significant challenge for businesses.
OpenAI did not comment on the implications of individual employees using ChatGPT but highlighted a recent company blog post that reassured corporate partners that their data would not be used to further train the chatbot without explicit permission.
Google’s Bard, a similar AI tool, collects data like text and location information. While users can delete past activity from their accounts and request removal of content fed into the AI, Alphabet-owned Google did not provide additional details when asked about this aspect.
Microsoft did not respond immediately to a request for comment on the matter.
A US-based employee of Tinder revealed that workers at the dating app use ChatGPT for “harmless tasks” such as writing emails, despite the company’s lack of official approval. While some companies have taken proactive measures to restrict usage, other firms, like Coca-Cola, have embraced ChatGPT to explore how AI can enhance operational effectiveness. Even so, security remains a priority for these companies.
Tate & Lyle’s Chief Financial Officer, Dawn Allen, mentioned that the global ingredients maker is experimenting with ChatGPT in a safe manner, seeking ways to optimize tasks in various areas such as investor relations and knowledge management.
Meanwhile, some employees face barriers in accessing the platform on their company computers. For example, a Procter & Gamble employee said that ChatGPT is completely banned on the office network.
Security experts, like Paul Lewis, chief information security officer at Nominet, acknowledge the potential benefits of increased AI capability. However, they caution that information shared with these tools is not entirely secure, citing the possibility of “malicious prompts” that could lead AI chatbots to disclose sensitive information.
While a blanket ban on ChatGPT may not yet be warranted, companies need to tread carefully, according to Lewis, balancing the potential advantages of AI with the critical need for data security.