Generative AI in Retail: Adoption Comes at High Security Cost
A new report by cybersecurity firm Netskope reveals that the retail industry's rapid adoption of generative AI has dramatically expanded its attack surface and its exposure to sensitive data leaks. According to the study, 95% of retail organizations now use generative AI applications, up from 73% just one year ago.
The report highlights the sector's shift away from chaotic early adoption towards a more controlled, corporate-led approach. However, this transition has also led to a significant increase in security risks. "As retailers integrate generative AI into their operations, they are creating new vulnerabilities that can be exploited by attackers," said Netskope's chief technology officer, Sanjay Nair.
The report notes that the use of personal AI accounts among retail staff has more than halved since the beginning of 2025, from 74% to 36%. This shift towards corporate-led adoption is a response to growing concerns about data security and compliance. "Retailers are finally waking up to the fact that their employees' personal AI accounts can be a major liability," said Nair.
Generative AI refers to a class of artificial intelligence models that produce new content, such as text, images, or music. In retail, these tools support tasks such as product recommendations and customer-service chatbots, and in some cases are even used to generate fake customer reviews.
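To make the retail use case concrete, the sketch below shows roughly how a product-recommendation chatbot might call a hosted generative AI model. It is a minimal illustration only: the model name, prompt, and catalogue snippet are assumptions for this example, and the Netskope report does not name specific tools or vendors.

```python
# Minimal sketch of a retail product-recommendation chatbot backed by a
# hosted generative AI model. Model name, prompt, and catalogue data are
# illustrative assumptions, not details from the Netskope report.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATALOGUE_SNIPPET = """
- Trail runner shoes, waterproof, $120
- Lightweight hiking socks, merino wool, $18
- Packable rain jacket, $85
"""

def recommend(customer_query: str) -> str:
    """Ask the model to suggest items from the catalogue for a customer query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a retail assistant. Recommend only items "
                        f"from this catalogue:\n{CATALOGUE_SNIPPET}"},
            {"role": "user", "content": customer_query},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(recommend("I need gear for a rainy weekend hike."))
```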
The rapid adoption of generative AI in retail has been driven by the sector's desire to stay ahead of the competition and improve customer experience. However, this rush to adopt new technology has left many retailers vulnerable to cyber threats.
"The use of generative AI in retail is a double-edged sword," said Dr. Rachel Kim, a leading expert on AI ethics. "While it offers many benefits, such as improved efficiency and personalization, it also creates new risks that must be carefully managed."
The Netskope report highlights the need for retailers to prioritize security and compliance when adopting generative AI. This includes implementing robust data protection measures, conducting regular security audits, and providing training for employees on AI-related security best practices.
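What "robust data protection measures" look like in practice will vary by retailer, but one common control is screening prompts for sensitive data before they leave the organization. The sketch below is a minimal, hedged example of such a pre-prompt redaction filter; the patterns and policy are assumptions for illustration, not recommendations drawn from the report.

```python
# Minimal sketch of a pre-prompt redaction filter: scrub likely PII and
# payment data from text before it is sent to an external generative AI
# service. Patterns and policy are illustrative assumptions only.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\+?\d[\d -]{8,14}\d\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Return the prompt with sensitive matches replaced and a list of hit types."""
    hits = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, hits

if __name__ == "__main__":
    raw = "Customer jane.doe@example.com paid with card 4111 1111 1111 1111."
    safe, found = redact(raw)
    print(safe)               # redacted text, safer to forward to an AI service
    print("Flagged:", found)  # e.g. ['EMAIL', 'CARD_NUMBER']
```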
As the retail industry continues to adopt generative AI, it is clear that security will be a major concern. "Retailers must be aware of the potential risks associated with generative AI and take steps to mitigate them," said Nair. "The consequences of not doing so could be severe."
Background and Context
Generative AI has been gaining traction in retail over the past few years, driven by its ability to improve customer experience and increase efficiency. However, as more retailers adopt these tools, concerns about data security and compliance have grown.
Additional Perspectives
Dr. Kim notes that the use of generative AI in retail raises important questions about data ownership and control. "As retailers rely increasingly on AI-generated content, they must be transparent about how this content is created and used," she said.
Current Status and Next Developments
Looking ahead, the report suggests that as the sector continues to evolve, retailers will place greater emphasis on AI-related security best practices and invest more heavily in data protection measures.
In short, generative AI offers retail organizations clear benefits, but its adoption carries significant security risks. Retailers that fail to recognize and mitigate those risks face costly cyberattacks and sensitive data leaks.
*Reporting by Artificialintelligence-news.*