AI search with style: Fashion on OpenShift AI with EDB

In e-commerce, the search bar is often where the buying journey starts. But in fashion, the gap between the catwalk and the audience can be deceptively wide. In this article, we will discuss a solution we built for fashion, one that could work for any catalog where how something looks or feels matters more than exact wording.

Why keyword search falls short in fashion

Imagine a customer typing “bohemian-style sundress for a beach trip.” A traditional full-text search might match products with those exact words in the title or description. But it won’t grasp the true intent: the cut, the cloth, or the color the customer has in mind. That’s where semantic search excels: it reveals the meaning behind the words to surface more relevant results, even if the exact terms aren’t used. Sometimes, the shopper doesn’t know the right words at all… but they have a photo saved on their phone.

Traditional keyword search matches strings of text. It doesn't connect meaning, and it can't handle images without a lot of extra work. To make search feel natural for this kind of product, we need to move from matching text to matching meaning.
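To make "matching meaning" concrete: an embedding model maps a query and each product to vectors, and nearby vectors mean similar concepts even with zero word overlap. Here is a minimal sketch using toy four-dimensional vectors (real models emit hundreds of dimensions, and the numbers below are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings -- made-up numbers standing in for real model output.
query = [0.9, 0.1, 0.8, 0.2]          # "bohemian-style sundress for a beach trip"
boho_dress = [0.85, 0.15, 0.75, 0.3]  # close in meaning, different wording
winter_coat = [0.1, 0.9, 0.2, 0.8]    # unrelated product

# The dress is closer to the query than the coat, even though
# none of the words matched:
assert cosine_similarity(query, boho_dress) > cosine_similarity(query, winter_coat)
```

This is exactly the comparison that vector search runs at scale: embed once, then rank by distance.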

Architecture overview

This fashion recommender runs on:

  • EDB Postgres AI connects to AI models through the AI Database extension (aidb) and processes AI data, whether structured or unstructured, to store and search embeddings (numeric representations of meaning) alongside standard product data.
  • Red Hat OpenShift AI serves the models that generate those embeddings.
  • Red Hat OpenShift provides a scalable, secure platform for the application.

The architecture is straightforward once you see it, but it’s the combination that makes it work.

And we do our little turn

The next steps are the “show me” moments. You’ll see how we create an AI-ready knowledge base (KB) directly inside EDB Postgres AI, run a visual search from nothing more than an uploaded image, turn messy free-form reviews into something a human can actually read, and finally stitch it all together in a clean frontend. Think of it as the before-and-after montage in a home renovation show, except we’re giving search a complete style makeover.

Step 1: Creating models inside Postgres AI

OpenShift AI and EDB Postgres AI are two separate platforms, but they work seamlessly together. You can spin up your own AI models on a serving platform like OpenShift AI, connect to hosted models online, or even use pre-packaged local models. All of which can be integrated into your pipelines using the create_model function under AI Database in Postgres AI. Once your model is deployed, connecting it to aidb is as simple as referencing its OpenAI-compatible endpoint in a single SQL command. From there, it’s ready to be used directly within your SQL workflows with no context switching required.

Create a model served by OpenShift AI:

-- An embedding model served on OpenShift AI to handle text data
SELECT aidb.create_model(
    'product_descriptions_embeddings',
    'embeddings',
    '{"model": "gritlm-7b", "url": "https://gritlm-7b...openshift.com/v1/embeddings"}'::JSONB,
    credentials => '{"token": "abcd"}'::JSONB
);

Create a pre-packaged local model:

-- Local multi-modal model to handle image data
SELECT aidb.create_model('recom_images', 'clip_local');

This is simplicity as a design principle. Being able to register and use models with a single line of SQL removes the usual friction of integrating AI into your stack. Instead of connecting external APIs, configuring pipelines, or bouncing between tools, your models are just there, ready to use where your data already lives. It means faster iterations, simpler architecture, and a much shorter path from idea to impact.

Step 2: Spinning up a KB inside Postgres AI

One of the most novel parts of this solution is that you can set up an AI-ready search index inside Postgres AI in just a few lines of SQL, running directly on OpenShift. No separate vector database, no sync jobs: your embeddings live alongside your operational data in the same containerized environment where OpenShift AI can also serve the models that create them. On top of that, you can create a Knowledge Base with just a few more lines of SQL using the AI Database extension, making it even easier to build AI-powered applications.

KB for images in an S3 bucket:

SELECT aidb.create_volume_knowledge_base(
            name => 'recom_images',
            model_name => 'recom_images',
            source_volume_name => 'images_bucket_vol',
            batch_size => 500);
SELECT aidb.bulk_embedding('recom_images');

KB for text from a PostgreSQL DB table:

-- Create a knowledge base for product descriptions
-- auto_processing can be set to 'Live', 'Background', or 'Disabled'
SELECT aidb.create_table_knowledge_base(
                    name => 'recommend_products',
                    model_name => 'product_descriptions_embeddings',
                    source_table => 'products',
                    source_key_column => 'product_id',
                    source_data_column => 'productdisplayname',
                    source_data_type => 'Text',
                    auto_processing =>'Live',
                    batch_size => 1000
                    );
SELECT aidb.bulk_embedding('recommend_products');

This matters because keeping embeddings in the same database as your operational data cuts down on complexity and latency. You spend less time managing infrastructure and more time refining the experience your users actually see.

Step 3: Image search that just works

When a shopper uploads an image, the app base64-encodes it and sends it to a vision model (an AI system trained to understand and represent images as numbers) running on OpenShift AI. The model returns the embedding to Postgres AI, which stores it, queries the KB, and returns the closest matches; filtering by product attributes happens in the same SQL call.

SELECT aidb.retrieve_key('recom_images', decode('{encoded_img_data}', 'base64'), 5);

Combining similarity search and filtering in one database query keeps the experience fast and relevant. The user sees accurate matches right away, without the delays or mismatches that can happen when results are stitched together across systems.
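From the application side, the whole round trip is an encode plus one query. A minimal sketch of the client code, assuming psycopg2 and leaving the connection details as placeholders (the helper name and defaults are ours, not part of aidb):

```python
import base64

def build_image_search_query(image_bytes: bytes, kb_name: str = "recom_images", k: int = 5):
    """Base64-encode raw image bytes and build a parameterized
    aidb.retrieve_key call, ready for cursor.execute().

    Returns (sql, params). Parameterizing keeps the encoded image
    out of the SQL string itself.
    """
    encoded = base64.b64encode(image_bytes).decode("ascii")
    sql = "SELECT aidb.retrieve_key(%s, decode(%s, 'base64'), %s);"
    return sql, (kb_name, encoded, k)

# Usage sketch (connection string and file are placeholders):
# import psycopg2
# conn = psycopg2.connect("dbname=fashion")
# with open("dress.jpg", "rb") as f:
#     sql, params = build_image_search_query(f.read())
# with conn.cursor() as cur:
#     cur.execute(sql, params)
#     matches = cur.fetchall()
```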

Step 4: Text search that understands intent

When a shopper types a query like the summer dress we mentioned, they express intent instead of using the exact words in your product catalog. Postgres AI bridges that gap by embedding the query using a language model and searching against your knowledge base. It retrieves semantically similar items, not just exact keyword matches, and lets you layer in product filters like price, size, or stock availability, all in one go.

SELECT * FROM aidb.retrieve_text('{text_retriever_name}', '{text_query}', 5);

There are a few practical takeaways:

  • Semantic text search is reduced to a single function call.
  • No external tooling.
  • No custom APIs.
  • No data wrangling between services.
  • This tight integration makes it easy to prototype smarter search experiences quickly and scale them without re-architecting your stack.
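Layering product filters over semantic retrieval can be done in one statement because everything lives in the same database. A hypothetical sketch of what that combined query could look like, assuming retrieve_text exposes key and distance columns and the products table has price and in_stock columns (those column names are our assumptions for illustration):

```python
def build_filtered_search(text_query: str, max_price: float, k: int = 5):
    """Build one SQL statement that ranks semantically similar products,
    then applies relational filters (price, stock) before returning the
    top k. Column names on the aidb result set are assumptions and may
    differ by aidb version."""
    sql = """
        SELECT p.product_id, p.productdisplayname
        FROM aidb.retrieve_text('recommend_products', %(q)s, 50) AS r
        JOIN products AS p ON p.product_id = r.key::int
        WHERE p.price <= %(max_price)s
          AND p.in_stock
        ORDER BY r.distance
        LIMIT %(k)s;
    """
    return sql, {"q": text_query, "max_price": max_price, "k": k}
```

The point is architectural: because the embeddings and the catalog share one database, similarity ranking and business filtering compose in ordinary SQL instead of being stitched together across two systems.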

Step 5: Summarizing reviews in one call

The AI Database extension isn't just about matching vectors. It can also run prompts against registered LLMs right in the database. In an app, that means you can turn a wall of product reviews into a concise summary and a short set of labels.

SELECT decode_text FROM aidb.decode_text(model_name, context_text);

Running summarization close to the data reduces the need for extra services and data transfers. Wherever the data is, you can deliver clear, digestible insights with minimal engineering overhead.
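From the app, summarization is the same pattern as retrieval: assemble a prompt, make one call. A sketch, where the model name is a placeholder for an LLM you have registered with aidb.create_model and the prompt wording is ours:

```python
def build_review_summary_query(reviews, model_name: str = "review_summarizer"):
    """Concatenate raw reviews behind a summarization instruction and
    build a parameterized aidb.decode_text call.

    Returns (sql, params) for cursor.execute(). The model_name default
    is a hypothetical registered model, not part of aidb itself.
    """
    prompt = (
        "Summarize these product reviews in two sentences "
        "and suggest up to three short labels:\n\n"
        + "\n---\n".join(reviews)
    )
    sql = "SELECT decode_text FROM aidb.decode_text(%s, %s);"
    return sql, (model_name, prompt)
```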

Step 6: Bringing it together in the UI

Once the KB returns product IDs, the frontend displays the details. In this build, a Streamlit app runs on OpenShift alongside the backend services, so UI updates can be deployed, scaled, and maintained as part of the same environment that handles the search and AI workloads.

# Retrieve product ids via aidb
image_kb_query = f"SELECT aidb.retrieve_key('recom_images', decode('{encoded_img_data}', 'base64'), 5);"
cur.execute(image_kb_query)
results = cur.fetchall()
product_ids = [row[0].split(',')[0].strip('()') for row in results]
# Display images from retrieved product ids
for product_id in product_ids: 
    product = get_product_details_in_category(product_id)
    if product["image_path"]:
        col_img, col_button = st.columns([3, 1])
        with col_img:
            image_name = os.path.basename(product["image_path"])
            display_image_s3(image_name)

When the backend handles the heavy work, the frontend can stay focused on clarity and usability. That’s what turns an innovative capability into an experience people want.

Wrap up

This solution changes search from a keyword match into a meaning match. EDB Postgres AI transforms AI data (unstructured and structured) into embedding vectors using models on OpenShift AI, then handles storage and querying. OpenShift AI runs the models. The Python app brings it together into something fast and interactive.

We built it for fashion, but the same structure could work for any catalog where "looks like" or "feels like" is more important than exact wording (e.g., furniture, cars, or recipes).

The full code and deployment instructions are in our GitHub repository. Try it, modify it, and see how search changes when it understands the intent behind the input. Check out the Red Hat Ecosystem Catalog.

The post AI search with style: Fashion on OpenShift AI with EDB appeared first on Red Hat Developer.
