
By Mia MacGregor
Feb 12 - (The Insurer) - As artificial intelligence continues to gain traction in the workplace, companies must establish clear protocols for employee usage, according to Brown & Brown senior vice president Anju Owad.
While platforms like ChatGPT have become household names, newer entrants such as Chinese startup DeepSeek, which suffered a cyberattack shortly after its rapid rise in popularity, have raised concerns about data security.
"Organisations want to have a competitive edge," Owad said. "They want things that are efficient, faster, cheaper, and AI is a good tool for that. But if you're not going to do the due diligence behind it, then it can be dangerous."
Owad noted that DeepSeek collected more information than users realised and suffered from security vulnerabilities. "So even though it's fast and exciting, it also caused folks to slow down a little bit," she said.
Founded in 2023, DeepSeek focuses on large language models and the pursuit of artificial general intelligence (AGI). It quickly gained popularity and, at one point, surpassed OpenAI's ChatGPT as the top-rated free app on Apple's US App Store.
However, the cyberattack raised concerns about the security of user data, sparking broader discussions on AI safety as the technology becomes increasingly integrated into business operations.
"Data is a valuable asset for all organisations, especially sensitive data. With DeepSeek seeing so many downloads, it's vital that people understand just because something is available on your company’s network or your phone doesn’t mean it’s safe,” Owad explained.
She emphasised the need for companies to train employees on AI safety and create guidelines for proper usage.
"Many people assume, 'If it's available to me, I might as well use it because it's going to make my life easier.' But organisations need to restrict access to certain AI platforms when necessary and identify the tools that are valuable while putting parameters around their use," she said.
Additionally, Owad stressed the importance of defining what constitutes sensitive information.
"Obviously, there's your customers’ date of birth and Social Security number, but what is considered sensitive can vary within an organisation. That definition is really important."
Owad also noted that companies using AI must understand where their data is stored and who has access to it. She pointed to DeepSeek as an example, explaining that its data is held on Chinese servers that may not adhere to US data protection regulations.
"Where's the data going? How secure is it? What kind of controls are in place? How much control do you have over the data? Those questions have to be asked," she said.
Organisations already have policies governing internet use, and AI should be incorporated into those same security frameworks, Owad argued.
"There are certain things employees should not be able to download, and that should be closely monitored. AI policies need to be a focus, just like any other company-wide data security protocol."
According to Owad, AI governance should be an ongoing priority.
"AI should be its own subset within an organisation because it's constantly evolving. Companies need to provide specialised training, conduct due diligence, and integrate AI oversight into existing policies while still giving it the attention it deserves."
Owad observed that companies are starting to take action but need to move faster.
"I think the start of that is happening. Companies are becoming more aware that they need to train employees and implement policies, but it needs to probably happen a little bit faster than it's happening right now."
While AI is a powerful tool, Owad underscored the importance of human oversight.
"You can never replace a human with AI, and that's what organisations need to understand. There have to be audits, there has to be due diligence, and even with generative AI, you have to really look and see if the information is correct."