From roadblock to widget: as Mendix developers, we're always looking for ways to build smarter applications faster. I set out to integrate AI chatbots like ChatGPT into low-code applications to enhance user experience and deliver advanced, high-quality support. This is the story of how I hit a technical wall and ended up building my own solution: a custom widget that makes AI responses stream smoothly in real-time, just like when you use ChatGPT directly.
GenAI (Generative AI) refers to artificial intelligence that can generate new content. Think ChatGPT for text, DALL-E for images, or Copilot for code. These models understand user input and generate human-like responses, making them ideal for smart chatbots, content creation, and automated assistants.
GenAI has become impossible to ignore. Not just as hype, but as a genuine shift in how we approach digitalization. For me, it represents the next step toward technology that's not about the tech itself, but about creating real value for users.
I started experimenting with OpenAI (the company behind ChatGPT) integration using a pre-built connector from Mendix's marketplace. Getting a basic AI response with the chat-completions API? Easy enough.
But I wanted more. I wanted to use advanced AI features through the responses API, add custom parameters like reasoning effort and structured responses, and most importantly, enable streaming responses so users see the AI "typing" in real-time instead of waiting for the full reply.
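For context, a streaming request to the responses API with a reasoning-effort parameter looks roughly like the sketch below. The `stream` and `reasoning.effort` fields exist in OpenAI's API at the time of writing, but the exact shape and the model name are illustrative, so check the current API reference before copying.

```typescript
// Sketch of the request body sent to OpenAI's responses API
// (POST https://api.openai.com/v1/responses). Field names follow
// the public API docs; verify against the current reference.
interface ResponsesRequest {
  model: string;
  input: string;
  stream: boolean;                                   // request Server-Sent Events
  reasoning?: { effort: "low" | "medium" | "high" }; // reasoning models only
}

function buildRequest(prompt: string): ResponsesRequest {
  return {
    model: "o4-mini",          // example reasoning-capable model
    input: prompt,
    stream: true,              // this is the part Mendix struggled with
    reasoning: { effort: "medium" },
  };
}
```

With `stream: false` this is an ordinary request/response call, which is exactly why the non-streaming case was "easy enough" and the streaming case was not.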
That last one, streaming, became my biggest challenge.
Here's the technical challenge in simple terms: when you chat with ChatGPT, you see words appearing as it "thinks." This happens through something called Server-Sent Events (SSE). It’s like the difference between a phone call, where someone speaks continuously, and text messages that arrive one at a time.
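To make the "continuous stream" concrete: an SSE response is plain text in which each event arrives as a `data:` line followed by a blank line. A minimal parser might look like this (a sketch only; real SSE also allows `event:`, `id:`, and comment lines, and the JSON payload shape here is illustrative):

```typescript
// Extract the payload of each `data:` line from a chunk of an SSE
// stream. OpenAI's chat-completions stream ends with a literal
// "[DONE]" sentinel, which we filter out here.
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length).trim())
    .filter((payload) => payload !== "[DONE]");
}

// Roughly what arrives on the wire (payload shape is illustrative):
const wire =
  'data: {"delta":"Hel"}\n\n' +
  'data: {"delta":"lo"}\n\n' +
  "data: [DONE]\n\n";

console.log(parseSseChunk(wire)); // two JSON payloads, sentinel dropped
```

Each of those small events must be handled the moment it arrives, which is exactly what a request/response-oriented platform is not built to do.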
Mendix, our low-code platform, prefers the "text message" approach. It wants complete responses, not continuous streams.
I tried standard REST calls first, which worked fine without streaming but broke the moment I enabled it. Then I experimented with Mendix's relatively new "consumed REST service" resource. It recognized the stream but couldn't handle the format. My next attempt involved custom Java and JavaScript code. Technically it worked, but Mendix would only show the result after everything finished, completely defeating the purpose of streaming.
Each attempt hit the same wall: Mendix wanted complete data packages, not a continuous stream.
After banging my head against this wall, it clicked. "What if I build a custom widget that handles both the streaming logic AND displays it directly in the UI?"
Instead of forcing Mendix to do something it wasn't designed for, I'd create a specialized component that could handle streaming natively.
One evening, I rolled up my sleeves and started building. Using TypeScript and some AI-assisted coding, I created a widget that handles OpenAI streaming responses natively, displaying responses as they’re generated.
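The core of the widget is ordinary browser code: read the response body as a stream, decode each chunk, and push the text to the UI as it arrives. A simplified, framework-free sketch of that loop (the real widget wires this into Mendix's React-based client; the names here are illustrative):

```typescript
// Consume a streaming response body chunk by chunk and hand each
// piece of text to a callback (in the widget, the callback updates
// the chat UI so the user sees the AI "typing").
async function streamText(
  body: ReadableStream<Uint8Array>,
  onDelta: (text: string) => void
): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let full = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onDelta(text); // UI sees partial output immediately, not at the end
  }
  return full;
}
```

In the actual widget the decoded chunks are first run through an SSE parser and JSON-decoded before the delta reaches the screen; this sketch skips that step to show the streaming loop itself.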
The widget is surprisingly simple to configure. You choose your API type (chat-completions or responses), add your OpenAI API key, set an optional system prompt to define the AI's personality or role, and add any custom parameters you need. Drop it into your Mendix app, and you've got a professional AI chat interface with real-time streaming responses.
This project taught me that technical limitations often spark the best innovations. Mendix is incredibly powerful for rapid development, but sometimes you need to think outside the low-code box to unlock its full potential.
With GenAI evolving at lightning speed, we need integration tools that can keep pace.
Building custom widgets like this bridges that gap, and with AI-assisted coding, it's more accessible than ever for Mendix developers to create their own solutions.
I'm creating a comprehensive how-to video for Mendix developers on building custom widgets using AI-assisted development (a.k.a. "vibe coding"). This approach transformed my productivity, and I want to share these techniques with the community.
If this resonated with you or you're interested in custom GenAI solutions for Mendix, reach out! I'd love to hear about your GenAI experiments in low-code.