As Generative AI revolutionizes businesses everywhere, security and IT leaders find themselves in a tough spot. Executives are mandating speedy adoption of Generative AI tools to drive efficiency and stay abreast of competitors. Meanwhile, IT and Security teams must rapidly develop an AI Security Strategy, even before the organization really understands exactly how it plans to adopt and deploy Generative AI.
IT and Security teams are no strangers to “building the airplane while it is in flight”. But this moment comes with new and complex security challenges. There is an explosion in new AI capabilities adopted by employees across all business functions — both sanctioned and unsanctioned. AI Agents are ingesting authentication credentials and autonomously interacting with sensitive corporate resources. Sensitive data is being shared with AI tools, even as security and compliance frameworks struggle to keep up.
While it demands strategic thinking from Security and IT leaders, the problem of governing the use of AI internally is far from insurmountable. SASE (Secure Access Service Edge) is a popular cloud-based network architecture that combines networking and security functions into a single, integrated service, providing employees with secure and efficient access to the Internet and to corporate resources, regardless of their location. The SASE architecture can be effectively extended to meet the risk and security needs of organizations in a world of AI.
Cloudflare’s SASE platform is uniquely well-positioned to help IT teams govern their AI usage in a secure and responsible way — without extinguishing innovation. What makes Cloudflare different in this space is that we are one of the few SASE vendors that operate not just in cybersecurity, but also in AI infrastructure. This ranges from providing AI infrastructure for developers (e.g. Workers AI, AI Gateway, remote MCP servers, Realtime AI Apps), to securing public-facing LLMs (e.g. Firewall for AI or AI Labyrinth), to allowing content creators to charge AI crawlers for access to their content, and the list goes on. Our expertise in this space gives us a unique view into governing AI usage inside an organization. It also gives our customers the opportunity to plug different components of our platform together to build out their AI and AI cybersecurity infrastructure.
This week, we are taking this AI expertise and using it to help ensure you have what you need to implement a successful AI Security Strategy. As part of this, we are announcing several new AI Security Posture Management (AI-SPM) features, including:
- shadow AI reporting to gain visibility into employees’ use of AI,
- confidence scoring of AI providers to manage risk,
- AI prompt protection to defend against malicious inputs and prevent data loss,
- out-of-band API CASB integrations with AI providers to detect misconfigurations, and
- new tools that untangle and secure Model Context Protocol (MCP) deployments in the enterprise.
All of these new AI-SPM features are built directly into Cloudflare’s powerful SASE platform.
And we’re just getting started. In the coming months you can expect to see additional valuable AI-SPM features launch across the Cloudflare platform, as we continue investing in making Cloudflare the best place to protect, connect, and build with AI.
In this guide, we will cover best practices for adopting generative AI in your organization using Cloudflare’s SASE (Secure Access Service Edge) platform. We start by covering how IT and Security leaders can formulate their AI Security Strategy. Then, we show how to implement this strategy using long-standing features of our SASE platform alongside the new AI-SPM features we launched this week.
The guide below is divided into three key pillars for securing (human) employee access to AI: Visibility, Risk Management, and Data Protection. These are followed by additional guidelines around deploying agentic AI in the enterprise using MCP. Our objective is to help you align your security strategy with your business goals while driving adoption of AI across all your projects and teams.
And we do this all using our single SASE platform, so you don’t have to deploy and manage a complex hodgepodge of point solutions and security tools. In fact, we provide you with an overview of your AI security posture in a single dashboard, as you can see here:

AI Security Report in Cloudflare’s SASE platform
The first step to securing AI usage is to establish your organization’s level of risk tolerance. This includes pinpointing your biggest security concerns for your users and your data, along with relevant legal and compliance requirements. Relevant issues to consider include:
- Do you have specific sensitive data that should not be shared with certain AI tools? (Examples include personally identifiable information (PII), personal health information (PHI), sensitive financial data, secrets and credentials, source code, and other proprietary business information.)
- Are there business decisions that your employees should not be making with assistance from AI? (For instance, the EU AI Act prohibits the use of AI to evaluate or classify individuals based on their social behavior, personal characteristics, or personality traits.)
- Are you subject to compliance frameworks that require you to produce records of the generative AI tools your employees used, and perhaps even the prompts they input into AI providers? (For example, HIPAA requires organizations to implement audit trails that record who accessed PHI and when; GDPR requires the same for PII; SOC 2 requires the same for secrets and credentials.)
- Do you have specific data protection requirements that require employees to use the sanctioned, enterprise version of a certain generative AI provider, and avoid certain AI tools or their consumer versions? (Enterprise AI tools often have more favorable terms of service, including shorter data retention periods, more limited data-sharing with third parties, and/or a promise not to train AI models on user inputs.)
- Do you require employees to completely avoid certain AI tools, perhaps because they are unreliable, unreviewed, or headquartered in a risky geography?
- What security protections do your organization’s sanctioned AI providers offer, and to what extent do you plan to protect against misconfigurations of AI tools that can result in leaks of sensitive data?
- What is your policy around the use of autonomous AI agents? What is your strategy for adopting the Model Context Protocol (MCP)? (MCP is a standard way to make information available to large language models (LLMs), similar to the way an application programming interface (API) works. It supports agentic AI that autonomously pursues goals and takes action.)
While almost every organization has relevant compliance requirements that implicate their use of generative AI, there is no “one size fits all” for addressing these issues.
- Some organizations have mandates to broadly adopt AI tools of all stripes, while others require employees to interact with sanctioned AI tools only.
- Some organizations are rapidly adopting MCP, while others are not yet ready for agents to autonomously interact with their corporate resources.
- Some organizations have robust requirements around data loss prevention (DLP), while others are still early in the process of deploying DLP in their organization.
Even with this diversity of goals and requirements, Cloudflare SASE provides a flexible platform for the implementation of your organization’s AI Security Strategy.
To implement your AI Security Strategy, you first need a solid SASE deployment.
SASE provides a unified platform that consolidates security and networking, replacing a fragmented patchwork of point solutions with a single platform that controls application visibility, user authentication, Data Loss Prevention (DLP), and other policies for access to the Internet and access to internal corporate resources. SASE is the essential foundation for an effective AI Security Strategy.
SASE architecture allows you to execute your AI security strategy by discovering and inventorying the AI tools used by your employees. With this visibility, you can proactively manage risk and support compliance requirements by monitoring AI prompts and responses to understand what data is being shared with AI tools. Robust DLP allows you to scan and block sensitive data from being entered into AI tools, preventing data leakage and protecting your organization’s most valuable information. Our Secure Web Gateway (SWG) allows you to redirect traffic from unsanctioned AI providers to user education pages or to sanctioned enterprise AI providers. And our new integration of MCP tooling into our SASE platform helps you secure the deployment of agentic AI inside your organization.
If you’re just starting your SASE journey, our Secure Internet Traffic Deployment Guide is the best place to begin. For this guide, however, we will skip these introductory details and dive right into using SASE to secure the use of Generative AI.
You can’t protect what you can’t see. The first step is to gain visibility into your AI landscape, which is essential for discovering and inventorying all the AI tools that your employees are using, deploying or experimenting with in your organization.
Shadow AI refers to the use of AI applications that haven’t been officially sanctioned by your IT department. Shadow AI is not an uncommon phenomenon – Salesforce found that over half of the knowledge workers it surveyed admitted to using unsanctioned AI tools at work. Use of unsanctioned AI is not necessarily a sign of malicious intent; employees are often just trying to do their jobs better. As an IT or Security leader, your goal should be to discover Shadow AI and then apply the appropriate AI security policy. There are two powerful ways to do this: inline and out-of-band.
The most direct way to get visibility is by using Cloudflare’s Secure Web Gateway (SWG).
SWG helps you get a clear picture of both sanctioned and unsanctioned AI and chat applications. By reviewing your detected usage, you’ll gain insight into which AI apps are being used in your organization. This knowledge is essential for building policies that support approved tools, and block or control risky ones. This feature requires you to deploy the WARP client in Gateway proxy mode on your end-user devices.
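To make this concrete, here is a minimal sketch of the idea behind shadow AI discovery: tallying which AI applications appear in gateway HTTP logs and how many distinct users touch each one. The log schema and domain-to-app mapping below are hypothetical, not Cloudflare's actual log format or detection logic.

```python
# Illustrative sketch of shadow AI discovery from exported gateway HTTP logs.
# The log fields and KNOWN_AI_DOMAINS mapping are hypothetical examples.
from collections import defaultdict

KNOWN_AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}

def tally_ai_usage(log_entries):
    """Count distinct users per AI application seen in gateway logs."""
    users_per_app = defaultdict(set)
    for entry in log_entries:
        app = KNOWN_AI_DOMAINS.get(entry["host"])
        if app:
            users_per_app[app].add(entry["user"])
    return {app: len(users) for app, users in users_per_app.items()}

logs = [
    {"user": "alice", "host": "chat.openai.com"},
    {"user": "bob", "host": "claude.ai"},
    {"user": "alice", "host": "claude.ai"},
    {"user": "carol", "host": "example.com"},  # non-AI traffic is ignored
]
print(tally_ai_usage(logs))  # {'ChatGPT': 1, 'Claude': 2}
```

In practice, the Shadow IT dashboard described below does this aggregation for you; the sketch only illustrates why per-user, per-application counts are the useful unit of visibility.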
You can review your company’s AI app usage using our new Application Library and Shadow IT dashboards. These tools allow you to:
- Review traffic from user devices to understand how many users engage with a specific application over time.
- Denote an application’s status (e.g., Approved, Unapproved) inside your organization, and use that status as input to a variety of SWG policies that control access to applications with that status.
- Automate assessment of SaaS and Gen AI applications at scale with our soon-to-be-released Cloudflare Application Confidence Scores.

Shadow IT dashboard showing utilization of applications of different status (Approved, Unapproved, In Review, Unreviewed).
Even if your organization doesn’t use a device client, you can still get valuable data on Shadow AI usage if you use Cloudflare’s integrations for Cloud Access Security Broker (CASB) with services like Google Workspace, Microsoft 365, or GitHub.
Cloudflare CASB provides high-fidelity detail about your SaaS environments, including sensitive data visibility and suspicious user activity. By integrating CASB with your SSO provider, you can see if your users have authenticated to any third-party AI applications, giving you a clear and non-invasive sense of app usage across your organization.

An API CASB integration with Google Workspace, showing findings filtered to third party integrations. Findings discover multiple LLM integrations.
Now that you’ve gained visibility into your AI landscape, the next step is to proactively manage that risk. Cloudflare’s SASE platform allows you to monitor AI prompts and responses, enforce granular security policies, coach users on secure behavior, and prevent misconfigurations in your enterprise AI providers.
If you have TLS decryption enabled in your SASE platform, you can gain new and powerful insights into how your employees are using AI with our new AI prompt protection feature.
AI Prompt Protection provides you with visibility into the exact prompts and responses from your employees’ interactions with supported AI applications. This allows you to go beyond simply knowing which tools are being used and gives you insight into exactly what kind of information is being shared.
This feature also works with DLP profiles to detect sensitive data in prompts. You can also choose whether to block the action or simply monitor it.

Log entry for a prompt detected using AI prompt protection.
Once your monitoring tools give you a clear understanding of AI usage, you can begin building security policies to achieve your security goals. Cloudflare’s Gateway allows you to create policies based on application categories, application approval status, users, user groups, and device status. For example, you can:
- create policies to explicitly allow approved AI applications while blocking unapproved AI applications;
- create policies that redirect users from unapproved AI applications to an approved AI application;
- limit access to certain applications to specific users or groups with a specific device security posture;
- build policies to enable prompt capture (with AI prompt protection) for specific high-risk user groups, such as contractors or new employees, without affecting the rest of the organization; and
- put certain applications behind Remote Browser Isolation (RBI), to prevent end users from uploading files or pasting data into the application.

Gateway application status policy selector
All of these policies can be written in Cloudflare Gateway’s unified policy builder, making it easy to deploy your AI Security Strategy across your organization.
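The policy types above compose in a simple way: rules match on attributes such as application status and user group, and the first matching rule determines the action. The following sketch is a hypothetical, simplified model of that evaluation, not Cloudflare Gateway's actual rule engine.

```python
# Illustrative first-match policy evaluation (hypothetical, simplified model).
# Each rule matches on application status and (optionally) a user group.
POLICIES = [
    {"status": "Approved", "group": None, "action": "allow"},
    {"status": "Unapproved", "group": "contractors", "action": "isolate"},  # RBI
    {"status": "Unapproved", "group": None, "action": "redirect"},  # to sanctioned app
]

def evaluate(app_status, user_groups):
    """Return the action of the first rule matching this request's context."""
    for rule in POLICIES:
        if rule["status"] != app_status:
            continue
        if rule["group"] is not None and rule["group"] not in user_groups:
            continue
        return rule["action"]
    return "block"  # default deny for anything unmatched

print(evaluate("Approved", {"engineering"}))    # allow
print(evaluate("Unapproved", {"contractors"}))  # isolate
print(evaluate("Unapproved", {"engineering"}))  # redirect
```

The ordering matters: placing the contractor-specific rule before the general redirect rule is what lets a high-risk group get stricter treatment without affecting everyone else.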
You can use Cloudflare Access to control your employees’ access to your organization’s internal LLMs, including any proprietary models you train internally and/or models that your organization runs on Cloudflare Workers AI.
Cloudflare Access allows you to gate access to these LLMs using fine-grained policies, including ensuring users are granted access based on their identity, user group, device posture, and other contextual signals. For example, you can use Cloudflare Access to write a policy that ensures that only certain data scientists at your organization can access a Workers AI model that is trained on certain types of customer data.
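Conceptually, such an Access policy requires every contextual signal to pass before a request reaches the model. The sketch below illustrates that all-signals-must-pass logic with hypothetical user attributes; it is not Cloudflare Access's actual policy format.

```python
# Hypothetical Zero Trust access check for an internal LLM endpoint:
# identity, group membership, and device posture must all pass.
def can_access_model(user, required_group):
    checks = [
        user.get("identity_verified", False),
        required_group in user.get("groups", []),
        user.get("device_posture") == "healthy",
    ]
    return all(checks)

data_scientist = {
    "identity_verified": True,
    "groups": ["data-science"],
    "device_posture": "healthy",
}
print(can_access_model(data_scientist, "data-science"))  # True

# A compromised device fails the posture check, so access is denied
# even though identity and group membership are valid.
print(can_access_model({**data_scientist, "device_posture": "jailbroken"},
                       "data-science"))  # False
```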
As you define which AI tools are sanctioned, you can develop functional security controls for consistent usage. Cloudflare newly supports API CASB integrations with popular AI tools like OpenAI (ChatGPT), Anthropic (Claude), and Google Gemini. These “out-of-band” integrations provide immediate visibility into how users are engaging with sanctioned AI tools, allowing you to report on posture management findings including:
- Misconfigurations related to sharing settings.
- Best practices for API key management.
- DLP profile matches in uploaded attachments.
- Riskier AI features (e.g. autonomous web browsing, code execution) that are toggled on.

OpenAI API CASB Integration showing riskier features that are toggled on, security posture risks like unused admin credentials, and an uploaded attachment with a DLP profile match.
Robust data protection is the final pillar that protects your employees’ access to AI.
Our SASE platform has long supported Data Loss Prevention (DLP) tools that scan and block sensitive data from being entered into AI tools, to prevent data leakage and protect your organization’s most valuable information. You can write policies that detect sensitive data while adapting to organization-specific traffic patterns, and use Cloudflare Gateway’s unified policy builder to apply these to your users’ interactions with AI tools or other applications. For example, you could write a DLP policy that detects and blocks the upload of a social security number (SSN), phone number or address.
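As a toy illustration of the detection half of such a policy, the sketch below flags a US Social Security number pattern in outbound prompt text. Real DLP engines use validated detections, context windows, and confidence scoring; this regex is purely illustrative.

```python
# Minimal DLP-style detector sketch: flag text matching a US SSN pattern.
# Illustrative only; production DLP uses validation and context, not a bare regex.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_ssn(prompt: str) -> bool:
    """Return True if the prompt contains an SSN-shaped string."""
    return SSN_PATTERN.search(prompt) is not None

print(contains_ssn("My SSN is 123-45-6789"))  # True
print(contains_ssn("Order #123-456789"))      # False (wrong digit grouping)
```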
As part of our new AI prompt protection feature, you can now also gain a semantic understanding of your users’ interactions with supported AI providers. Prompts are classified inline into meaningful, high-level topics that include PII, credentials and secrets, source code, financial information, code abuse / malicious code and prompt injection / jailbreak. You can then build inline granular policies based on these high-level topic classifications. For example, you could create a policy that blocks a non-HR employee from submitting a prompt with the intent to receive PII from the response, while allowing the HR team to do so during a compensation planning cycle.
Our new AI prompt protection feature empowers you to apply smart, user-specific DLP rules that empower your teams to get work done, all while strengthening your security posture. To use our most advanced DLP feature, you’ll need to enable TLS decryption to inspect traffic.
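The PII-for-HR example above can be sketched as a small decision function. Assume an upstream classifier (like AI prompt protection's inline classification) has already labeled each prompt with high-level topics; the topic names, groups, and exemption table below are hypothetical.

```python
# Sketch of topic-based prompt policy: block sensitive topics by default,
# with per-group exemptions. Topic labels and group names are hypothetical.
BLOCKED_TOPICS_BY_DEFAULT = {"pii", "credentials_and_secrets"}
TOPIC_EXEMPTIONS = {"pii": {"hr"}}  # e.g. HR may handle PII during comp planning

def decide(topics, user_groups):
    """Block if any detected topic is restricted and the user has no exemption."""
    for topic in topics:
        if topic in BLOCKED_TOPICS_BY_DEFAULT:
            exempt_groups = TOPIC_EXEMPTIONS.get(topic, set())
            if not (exempt_groups & set(user_groups)):
                return "block"
    return "allow"

print(decide({"pii"}, ["engineering"]))  # block
print(decide({"pii"}, ["hr"]))           # allow
print(decide({"source_code"}, ["engineering"]))  # allow
```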

The above policy blocks all ChatGPT prompts that may receive PII back in the response for employees in engineering, marketing, product, and finance user groups.
MCP (Model Context Protocol) is an emerging AI standard, where MCP servers act as a translation layer for AI agents, allowing them to communicate with public and private APIs, understand datasets, and perform actions. Because these servers are a primary entry point for AI agents to engage with and manipulate your data, they are a new and critical security asset for your security team to manage.
Cloudflare already offers a robust set of developer tools for deploying remote MCP servers: cloud-based servers that act as a bridge between a user’s data and tools and various AI applications. But now our customers are asking for help securing their enterprise MCP deployments.
That is why we’re making MCP security controls a core part of our SASE platform.
MCP servers typically use OAuth for authorization, where the server inherits the permissions of the authorizing user. While this adheres to least-privilege for the user, it can lead to authorization sprawl — where the agent accumulates an excessive number of permissions over time. This makes the agent a high-value target for attackers.
Cloudflare Access now helps you manage authorization sprawl by applying Zero Trust principles to MCP server access. A Zero Trust model assumes no user, device, or network can be trusted implicitly, so every request is continuously verified. This approach ensures secure authentication and management of these critical assets as your business adopts more agentic workflows.
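One way to think about authorization sprawl is as the gap between what an agent has been granted and what its tasks actually require. The sketch below computes that gap; the scope names are hypothetical examples, not any particular provider's OAuth scopes.

```python
# Illustrative authorization-sprawl check: the excess of granted OAuth scopes
# over the minimal set required. Scope names are hypothetical.
def sprawl_report(granted_scopes, required_scopes):
    """Return the sorted list of scopes the agent holds but does not need."""
    return sorted(set(granted_scopes) - set(required_scopes))

granted = {"drive.read", "drive.write", "calendar.write", "mail.read"}
required = {"drive.read", "calendar.write"}
print(sprawl_report(granted, required))  # ['drive.write', 'mail.read']
```

Periodically shrinking that excess back toward the required set is the least-privilege discipline that a Zero Trust approach to MCP access enforces continuously.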
Cloudflare MCP Server Portal is a new feature in Cloudflare’s SASE platform that centralizes the management, security, and observation of an organization’s MCP servers.
MCP Server Portal allows you to register all your MCP servers with Cloudflare and provide your end users with a single, unified Portal endpoint to configure in their MCP client. This approach simplifies the user experience, because it eliminates the need to configure a one-to-one connection between every MCP client and server. It also means that new MCP servers dynamically become available to users whenever they are added to the Portal.
Beyond these usability enhancements, MCP Server Portal addresses the significant security risks associated with MCP in the enterprise. The current decentralized approach of MCP deployments creates a tangle of unmanaged one-to-one connections that are difficult to secure. The lack of centralized controls creates a variety of risks including prompt injection, tool injection (where malicious code is part of the MCP server itself), supply chain attacks and data leakage.
MCP Server Portals solve this by routing all MCP traffic through Cloudflare, allowing for centralized policy enforcement, comprehensive visibility and logging, and a curated user experience based on the principle of least privilege. Administrators can review and approve MCP servers before making them available, and users are only presented with the servers and tools they are authorized to use, which prevents the use of unvetted or malicious third-party servers.
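The least-privilege curation described above amounts to filtering a central server registry by approval status and group membership. The registry structure, server names, and groups below are a hypothetical model of that idea, not the Portal's actual data format.

```python
# Sketch of a portal-style MCP server registry (hypothetical data model):
# users see only servers that are both admin-approved and allowed for a group
# they belong to.
REGISTRY = [
    {"server": "crm-mcp", "approved": True, "groups": {"sales"}},
    {"server": "ci-mcp", "approved": True, "groups": {"engineering"}},
    {"server": "experimental-mcp", "approved": False, "groups": {"engineering"}},
]

def visible_servers(user_groups):
    """List servers this user's MCP client should be offered via the portal."""
    return [
        entry["server"]
        for entry in REGISTRY
        if entry["approved"] and entry["groups"] & set(user_groups)
    ]

print(visible_servers({"engineering"}))  # ['ci-mcp']
print(visible_servers({"sales"}))        # ['crm-mcp']
```

Because clients configure only the single portal endpoint, adding an approved entry to the registry is all it takes to make a new server available to the right users.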

An MCP Server Portal in the Cloudflare Dashboard
All of these features are only the beginning of our MCP security roadmap, as we continue advancing our support for MCP infrastructure and security controls across the entire Cloudflare platform.
As organizations rapidly develop and deploy their AI security strategies, Cloudflare’s SASE platform is ideally situated to implement policies that balance productivity with data and security controls.
Our SASE platform offers a full suite of features to protect employee interactions with AI. Some of these features are deeply integrated into our Secure Web Gateway (SWG), including the ability to write fine-grained access policies, gain visibility into Shadow IT, and inspect interactions with AI tools using AI prompt protection. Apart from these inline controls, our CASB provides visibility and control using out-of-band API integrations. Cloudflare Access can apply Zero Trust principles while protecting employee access to corporate LLMs that are hosted on Workers AI or elsewhere. We’re newly integrating controls for securing MCP that can also be used alongside Cloudflare’s Remote MCP Server platform.
And all of these features are integrated directly into Cloudflare’s SASE dashboard, providing a unified platform for you to implement your AI security strategy. You can even gain a holistic view of all of your AI-SPM controls using our newly released AI-SPM overview dashboard.

AI security report showing utilization of AI applications.
As one of the few SASE vendors that also offer AI infrastructure, Cloudflare lets you deploy our SASE platform alongside products from our developer and application security platforms to holistically implement your AI security strategy alongside your AI infrastructure strategy (using, for example, Workers AI, AI Gateway, remote MCP servers, Realtime AI Apps, Firewall for AI, AI Labyrinth, or pay per crawl).
Ensuring AI is scalable, safe, and secure is a natural extension of Cloudflare’s mission, given so much of our success relies on a safe Internet. As AI adoption continues to accelerate, so too does our mission to provide a market-leading set of controls for AI Security Posture Management (AI-SPM). Learn more about how Cloudflare helps secure AI or start exploring our new AI-SPM features in Cloudflare’s SASE dashboard today!