Is ChatGPT becoming a serious security risk for your business?


Many bosses have begun banning ChatGPT and other generative AI tools out of fear of data leaks and similar cybersecurity incidents, a new report suggests.

Enterprise generative AI platform Writer recently polled 450 executives at large enterprises to gauge their views on AI-powered generative chatbots. Almost half (46%) believe someone in their company may have inadvertently shared corporate data with such a tool.

While ChatGPT's training data currently only extends to September 2021, this may well change in the future, and other tools might not have this kind of safeguard in place. That means these tools could incorporate sensitive data into their training models and later surface it to other users. Consequently, the companies whose sensitive data was shared could end up being pursued by data protection watchdogs over the leaks. Against that backdrop, ChatGPT was banned by 32% of the respondents, followed by CopyAI (28%) and Jasper (23%).

Love and hate

But the tools remain extremely popular. Almost half of respondents (47%) use ChatGPT at work every day (35% use CopyAI and 26% use Anyword). Usage spans departments: IT (30%), operations (23%), customer success (20%), marketing (18%), support (16%), and sales and HR (15% each).

Most of the time, the tools are used for copy, including ads, headlines, blogs, knowledge base articles, and similar content.

Furthermore, most firms don’t plan on sticking with the free versions for long: 59% said they purchased (or plan to purchase) at least one such tool this year, and a fifth (19%) are already using five or more generative AI tools. Respondents see the productivity boost as the key selling proposition, saying the tools make employees more productive, generate higher-quality output, and save on costs.

“Enterprise executives need to take note. There is a real competitive advantage in implementing generative AI across their businesses, but it’s clear there’s a likelihood of security, privacy and brand reputation risks,” said May Habib, Writer CEO and co-founder. 

“We offer enterprises complete control – from what data LLMs can access to where that data and LLM is hosted. If you don’t control your generative AI rollout, you certainly can’t control the quality of output or the brand and security risks.” 



