In previous posts, we looked at how supporting technologies like Kafka and service mesh strengthen enterprise AI deployments. Kafka helps AI applications process real-time data flows and can act as an event orchestrator within agentic architectures, while service mesh ensures secure and observable communication between AI-powered services.
We also explored the foundations of platform engineering for developers, which has become a central strategy for enabling teams’ self-service access to reliable, governed infrastructure. Platform engineering can also bring standardization and governance through software templates.
Now, these threads come together. As organizations move beyond isolated AI experiments, the question becomes: How do you make AI development and deployment repeatable, governed, and accessible across the enterprise?
This is where platform engineering plays a crucial role. By applying the same principles that transformed modern application delivery (such as self-service, automation, software templates, and trusted supply chains), we can create an AI-ready foundation that allows developers and data scientists to innovate with confidence while reducing their cognitive load.
In this post, we’ll explore how Red Hat OpenShift, Red Hat OpenShift AI, and developer portals like Red Hat Developer Hub (based on Backstage) make AI a seamless part of enterprise platforms (and the other way around).
The AI challenge
Artificial intelligence has enormous potential, but running AI in production is not as simple as training a model in a notebook. Enterprises face challenges around reproducibility, scalability, governance, and integration with existing business systems.
Training and serving models involves data pipelines, containerized environments, GPUs, and frequent updates. On top of this, regulations such as the EU AI Act and Digital Operational Resilience Act (DORA) demand explainability, audit trails, and compliance. Without the right foundation, many organizations struggle. Data scientists spin up cloud notebooks in isolation, developers manually deploy models (or just connect to online AI APIs), and security checks happen too late. This leads to shadow IT, compliance risks, and slow innovation.
For example, a retail company deploying a recommendation model locally may find it impossible to reproduce the same code and dataset six months later for audit purposes. Without a governed platform, this becomes a barrier to compliance and customer trust.
Next, in the era of vibe coding (that is, AI-generated code), automated security scanning should become part of your platform, for example as a step in the CI/CD pipeline. When I ask at developer conferences who is working with AI-generated code, almost every hand goes up. When I then ask who is validating the libraries, biases, and risks that code introduces, almost every hand goes down again. This highlights the importance of a mature developer platform.
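To make that concrete, here is a minimal sketch of such a gate, assuming a Python project with a requirements.txt and the pip-audit scanner available in the CI image (any comparable dependency scanner would work); file names and wiring are illustrative, not a prescribed setup:

```python
"""Minimal pre-merge gate: audit third-party dependencies before AI-generated
code reaches the main branch. Assumes a Python project with requirements.txt
and pip-audit installed in the CI image (pip install pip-audit)."""
import subprocess
import sys


def audit_dependencies(requirements_file: str = "requirements.txt") -> int:
    # pip-audit checks the pinned dependencies against known vulnerability
    # databases and returns a non-zero exit code when it reports findings.
    result = subprocess.run(
        ["pip-audit", "--requirement", requirements_file],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print("Vulnerable or unresolvable dependencies detected; failing the pipeline.",
              file=sys.stderr)
    return result.returncode


if __name__ == "__main__":
    sys.exit(audit_dependencies())
```

Running this as an early pipeline step means the check happens on every change, including the ones generated by an AI assistant, instead of relying on developers to remember it.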
Where platform engineering fits
Platform engineering solves these challenges by embedding AI workloads into the same enterprise-grade environments that already run critical applications. By combining OpenShift, OpenShift AI, and developer portals such as Red Hat Developer Hub, organizations can provide a standardized, self-service platform for both developers and data scientists (as well as customers, infra, ops, SREs…).
OpenShift AI provides the infrastructure for reproducible training pipelines, GPU-powered workloads, and enterprise-grade scaling. AI artifacts, such as code, models, and datasets, flow through the trusted software supply chain, ensuring that everything deployed is verified, signed, and compliant. With GitOps, the lifecycle of AI workloads is fully automated: versioning, rollbacks, and approvals are handled through familiar Git workflows.
The nice thing about GitOps is that you now get audit trails “for free,” as every commit counts as a trace. You only need to ensure that the person attributed to a commit is the one who actually made it; hence, signed commits are part of the trusted software supply chain. Beyond that, you need to ensure that the artifact you deploy is the artifact you built (not, for example, a tampered copy from a malicious artifact mirror); therefore, signed artifacts are part of the trusted software supply chain as well.
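As a rough illustration of the artifact side, the sketch below assumes Sigstore's cosign CLI is available on the pipeline runner and that a public key has been distributed out of band; the image reference and key path are placeholders:

```python
"""Sketch of a promotion check: verify that a model-serving image is signed
before the GitOps pipeline promotes it to the next environment. Assumes the
cosign CLI is on the PATH and cosign.pub is the trusted public key."""
import subprocess
import sys


def is_signed(image: str, public_key: str = "cosign.pub") -> bool:
    # cosign exits non-zero when no valid signature matches the given key.
    result = subprocess.run(
        ["cosign", "verify", "--key", public_key, image],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0


if __name__ == "__main__":
    image = "registry.example.com/fraud-model-server:1.4.2"  # illustrative reference
    if not is_signed(image):
        sys.exit(f"Refusing to promote unsigned artifact: {image}")
    print(f"Signature verified for {image}; promotion can continue.")
```

The same idea can be enforced cluster-side with admission policies, so that unsigned images are rejected even if a pipeline check is skipped.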
The developer portal complements all of this by making AI capabilities discoverable and accessible. Instead of filing tickets, developers request GPU environments, training pipelines, or model endpoints through self-service templates. Teams can also reuse existing services or depend on standardized blueprints (that is, software templates). For example, a bank that has already built a fraud detection model can publish it in the portal. Other teams can consume it directly, avoiding duplication and ensuring consistent, compliant AI across the organization.
Data platforms as part of AI-ready platforms
AI without data is like a car without fuel. A critical part of an AI-ready platform is the ability to deliver the right data in the right shape for the right audience. A one-size-fits-all data layer rarely works in practice, because developers, data scientists, and compliance teams all have different needs.
A mature platform therefore supports multiple types of data delivery:
- Data scientists might want simple outputs such as CSV files or Parquet snapshots that can be quickly ingested into notebooks.
- Application developers need resilient APIs or event-driven streams they can integrate directly into services.
- AI development teams often rely on anonymized or synthetic datasets to train and test models safely, without exposing sensitive information (see the sketch after this list). These datasets can then be explored in, for example, cloud environments while the sensitive data remains on-premises, highlighting the need for a hybrid or multicloud strategy in your platform architecture.
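As a minimal sketch of that last point, the snippet below masks sensitive columns before publishing Parquet and CSV snapshots of the same data; the column names, salt handling, and file names are illustrative, and pandas with pyarrow is assumed:

```python
"""Minimal sketch of 'same data, different shapes': take a raw transaction
extract, pseudonymize the sensitive columns, and publish both a Parquet
snapshot (for notebooks and the data lake) and a CSV (for quick experiments)."""
import hashlib

import pandas as pd

SENSITIVE_COLUMNS = ["customer_id", "iban"]  # illustrative column names
SALT = "rotate-me-outside-version-control"   # in practice, pull from a secret store


def pseudonymize(value: str) -> str:
    # One-way hash so joins still work across datasets without exposing the raw value.
    return hashlib.sha256((SALT + str(value)).encode()).hexdigest()[:16]


def publish_snapshots(source_csv: str) -> None:
    df = pd.read_csv(source_csv)
    for column in SENSITIVE_COLUMNS:
        df[column] = df[column].map(pseudonymize)
    df.to_parquet("transactions_masked.parquet", index=False)  # notebooks / data lake
    df.to_csv("transactions_masked.csv", index=False)          # quick experiments


if __name__ == "__main__":
    publish_snapshots("transactions_raw.csv")
```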
Solving the frustration gap
Application developers often work against resilient APIs with strict SLAs and KPIs, where every change must go through design reviews, testing, and quality control. For production systems, this is essential. But for data scientists experimenting with models, waiting weeks for a schema update or a new endpoint is unacceptable. This difference in requirements and perspective often results in frustration between these teams (and sometimes other teams as well).
A data platform based on platform engineering principles allows organizations to handle this tension. You can keep your mission-critical APIs stable and resilient, while at the same time offering “quick fixes” in parallel for exploratory use cases. For example, when a new field is needed for an AI experiment, instead of waiting for the API team’s full release cycle, the data platform can deliver a simple CSV or Parquet export in a matter of hours.
This dual approach means:
- Application developers keep consuming robust APIs with high availability guarantees.
- Data scientists get the agility they need to experiment quickly, without being blocked by production release cycles.
- Platform teams stay in control, because both flows are managed, governed, and monitored through the same (CDC-based) pipelines.
The role of CDC patterns
Patterns like change data capture (CDC) make this possible. Think about Kafka + Debezium and optionally Camel. Rather than building brittle point-to-point integrations or duplicating databases, CDC streams changes once from the source and allows multiple consumers to access the data in different forms. A CDC stream can power a high-availability API for customer-facing applications, while at the same time feeding an anonymized dataset into a data lake for AI training, and exporting a quick CSV for a data scientist’s experiment.
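Here is a rough sketch of that fan-out, assuming confluent-kafka as the client library and a Debezium topic named bank.public.transactions; the topic, field names, and sink functions are illustrative, and the JSON envelope shape depends on your connector configuration:

```python
"""Sketch of CDC fan-out: one Debezium change stream on Kafka, multiple
downstream shapes. The sink functions are placeholders for real integrations."""
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",   # illustrative broker address
    "group.id": "ai-data-products",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["bank.public.transactions"])  # Debezium topic: <server>.<schema>.<table>


def update_serving_api(row: dict) -> None:
    ...  # keep the resilient, customer-facing view up to date


def append_to_training_lake(row: dict) -> None:
    ...  # anonymize and land the row in the data lake used for model training


try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        change = json.loads(msg.value())
        row = change.get("payload", {}).get("after")  # row state after the change
        if row is None:  # e.g., delete events
            continue
        # Same change event, two downstream shapes:
        update_serving_api(row)
        append_to_training_lake(row)
finally:
    consumer.close()
```

The point is that the source database is read once; every additional consumer (API cache, data lake, quick CSV export) hangs off the same governed stream instead of a new point-to-point integration.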
Example: Accelerating fraud detection in banking
Consider a large bank developing fraud detection models. The core payment APIs must remain highly resilient, with strict SLAs and regulatory oversight. Adding new fields to these APIs requires design reviews, impact assessments, and weeks of testing before release.
Meanwhile, the bank’s AI team wants to experiment with new fraud signals based on transaction metadata. Instead of waiting weeks for the production API update, the platform team uses CDC pipelines to stream the same data to different destinations:
- Resilient APIs continue to serve validated data for customer-facing applications.
- Anonymized CSV extracts are generated within hours for data scientists, giving them the agility to experiment quickly and to validate the current data structures (for example, whether data fields or relationships are missing), which can feed new requirements back into the API layer.
- Masked data lakes are populated for model training and long-term analytics.
The result: the AI team can validate hypotheses and train fraud models in days instead of months, while the core systems remain compliant and stable. Both innovation speed and enterprise governance are preserved.
By treating data delivery as part of the platform, enterprises can reduce friction, accelerate AI development, and still maintain governance and trust across all data products. In this regard, I often advocate for treating your platform as your most important core product, with a dedicated product owner.
Enterprise benefits
The combination of OpenShift, OpenShift AI, GitOps, Developer Hub, and a mature data platform creates an AI-ready foundation. This delivers several key benefits to enterprises:
- Reproducibility: AI pipelines are defined as code and run in containerized environments, ensuring that models can be retrained and audited at any time.
- Scalability: AI services can handle thousands of concurrent users or workloads, scaling dynamically with Kubernetes-native capabilities.
- Compliance and trust: The trusted software supply chain and GitOps provide an auditable history of how models were trained, deployed, and updated, supporting EU AI Act and DORA requirements.
- Self-service innovation: Developers and data scientists get frictionless access to AI resources and datasets, while platform teams maintain governance and cost control.
For example, an e-commerce company using GitOps for AI can quickly roll back a biased recommendation model to a previous version while training a corrected model. At the same time, its developers consume CDC-driven APIs and its AI team trains on anonymized datasets, all powered by the same hybrid or multicloud platform. This not only limits business risk but also ensures regulators can trace every change.
Conclusion
AI in the enterprise is not just about building smarter models; it is about creating smarter platforms. Platform engineering principles (such as standardization, self-service, GitOps-driven automation, trusted supply chains, and flexible data platforms) are what transform isolated AI experiments into reliable, production-ready systems.
With OpenShift, OpenShift AI, and Developer Hub, enterprises can deliver AI that is not only innovative but also trustworthy, reproducible, and aligned with regulatory requirements.
The future of enterprise AI will be powered by platforms that make AI a seamless, governed part of both the software and data lifecycle.