AI adoption raises questions about data security and privacy. We recently sat down with data privacy expert John Cook (founder of Kingwood Data Privacy) to explore the risks and opportunities associated with integrating AI into the enterprise. Below are key insights we discussed about how companies can leverage AI effectively while protecting their sensitive data.
The rise of user-friendly generative AI tools (Gemini, ChatGPT, etc.) has created new privacy challenges. Employees, seeking immediate value, may inadvertently share proprietary or sensitive data with public AI platforms. Once that information is shared, it could be exposed beyond the organization’s walls.
Organizations need clear guidelines and possibly technological restrictions on what data employees can feed into external AI tools. Otherwise, confidential information could end up outside corporate control.
Data privacy doesn’t have to be all-or-nothing. John Cook suggests techniques such as anonymization (removing or scrambling identifiers so records can’t be traced back to individuals) and homomorphic encryption (which allows computation on data while it remains encrypted). These approaches protect personal or sensitive details without putting the underlying data entirely out of reach.
Stripping out identifying information helps companies safely expand the scope of their AI projects without exposing personal or confidential data.
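As a minimal sketch of the anonymization idea, the snippet below replaces identifying fields with salted one-way hashes before a record is shared with an AI tool. The field names and salt are hypothetical placeholders, and real programs would pull the salt from a secrets manager rather than hard-coding it.

```python
import hashlib

# Hypothetical identifying fields; adjust to your own schema.
PII_FIELDS = {"name", "email", "phone"}

def pseudonymize(record: dict, salt: str = "rotate-me") -> dict:
    """Replace identifying fields with salted one-way hashes so the
    record can be analyzed without revealing who it belongs to."""
    cleaned = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            cleaned[key] = digest[:12]  # short stable token, not reversible
        else:
            cleaned[key] = value
    return cleaned

customer = {"name": "Jane Doe", "email": "jane@example.com", "order_total": 129.95}
print(pseudonymize(customer))
```

Because the same input always produces the same token, records can still be joined and aggregated after the identifiers are stripped.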
A well-thought-out, centralized data architecture is needed before layering AI tools on top. Relevant data should be kept in a secure environment.
Integrate your organization’s various data sources (ERP systems, CRM, contract files, product databases, manufacturing systems, etc.) into a single AI-ready environment. This typically involves an Integration Platform as a Service (iPaaS) solution (Workato, Celigo, etc.). Unlike traditional data warehouses, which centralize data in one repository, iPaaS solutions automate the workflows that move data between systems.
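To make the integration step concrete, here is an illustrative sketch of what an iPaaS workflow automates: pulling records from a source system, mapping their fields to a shared schema, and loading them into a central store. The system names and field mappings are hypothetical, not an actual Workato or Celigo API.

```python
def map_crm_record(record: dict) -> dict:
    """Translate a CRM-shaped record into the central environment's schema.
    Field names here are invented for illustration."""
    return {
        "customer_id": record["AccountId"],
        "customer_name": record["AccountName"],
        "source_system": "crm",
    }

def sync(source_records: list[dict], central_store: list[dict]) -> int:
    """Append mapped records to the central store; return how many moved."""
    moved = 0
    for record in source_records:
        central_store.append(map_crm_record(record))
        moved += 1
    return moved

store: list[dict] = []
sync([{"AccountId": "A1", "AccountName": "Acme"}], store)
print(store)
```

An iPaaS platform runs hundreds of these mappings on schedules or triggers; the value of the pattern is that every downstream AI tool sees one consistent schema instead of each system's quirks.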
Once data is integrated, the organization can employ retrieval-augmented generation (RAG) or similar techniques to query large data sets in a controlled manner. Run this in-house behind a firewall or within a trusted cloud environment to retain full oversight.
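The retrieval half of RAG can be sketched in a few lines, assuming documents already live in the secure environment. Production deployments use embeddings and a vector store; simple word overlap stands in here to keep the example self-contained, and the sample documents are invented.

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many query words they share, highest first."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the model's answer in retrieved internal context only."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Contract renewal terms for supplier Acme expire in March.",
    "Cafeteria menu rotates weekly.",
    "Production capacity forecasts are updated each quarter.",
]
print(build_prompt("When do the Acme contract terms expire?", docs))
```

The assembled prompt would then go to a model hosted inside the firewall or trusted cloud, so neither the documents nor the question ever leave the organization's control.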
This is about making AI more accessible within a company so employees can use it effectively without relying on risky or unapproved external tools.
Many organizations chase overly ambitious, isolated AI projects that seem promising but don’t deliver value. These projects might be too complex, isolated from business needs, or difficult to implement widely.
A more practical approach is to build a centralized, internal AI platform that different business units can use for their specific needs. If employees have a secure in-house AI environment, they’re far less inclined to turn to unapproved external tools.
AI can enhance or even leapfrog the capabilities of legacy systems like ERP packages. Predictive analytics, intelligent automation, and advanced data synthesis can revolutionize everyday business processes, from production planning and capacity forecasting to contract management and revenue analysis.
Instead of trying to modify old systems with AI-based add-ons (which can be clunky and limited), companies can develop AI models that work alongside existing systems. This lets AI add value without being constrained by the limitations of legacy software and without overhauling the entire system.
It may seem like AI implementation is a race, but don’t mistake speed for progress. Rushing to adopt AI without clear rules for data can create more problems than it solves. Make sure AI is built for the long term by taking time to prioritize security and data privacy. Instead of viewing data privacy as a limitation, companies should see it as the foundation for AI adoption at scale.
At Trenegy, we help organizations implement and use AI realistically. To chat more about AI strategy, email us at info@trenegy.com.