
Rancher Desktop 1.18.2 Is Out, and of Course with AI


Geoff Burke

Hi Everyone,

 

Going forward, I am going to focus more on Kubernetes/Podman/Docker with AI. That is not because I am jumping on the AI hype wave, or because I am a complete Anti-AI-er 😂, but because real things are happening in this space. As my guest Marino Wijay said on the last Kubernetes Korner show:

AI and Cloud Native are definitely going to have a common path going forward. 

It did not take long for Rancher Desktop to jump on the bandwagon. As of version 1.17, and following a similar effort by Podman Desktop (https://podman-desktop.io/docs/ai-lab), Rancher Desktop has launched its own AI extension: https://docs.rancherdesktop.io/tutorials/working-with-llms/. The extension uses OpenWebUI (https://www.openwebui.com/) and Ollama, and by default leverages the TinyLlama LLM. It will also discover any other LLMs you may have downloaded, so you can choose one of them instead if you prefer.

Interestingly, they have web search and RAG baked in using SearXNG, which I had never heard of: https://github.com/searxng/searxng.

However, every time I open my laptop these days, something AI-related that I have never heard of jumps out at me, so I guess this will be the normal state of things for a while until everything settles down.

Let’s take this new Rancher Desktop Extension for a test ride:

As always, installing the extension is very simple:

OpenWebUI Extension
Press Install

You can also install from the CLI:

https://github.com/rancher-sandbox/rancher-desktop-rdx-open-webui/pkgs/container/rancher-desktop-rdx-open-webui
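If you prefer the terminal, here is a hedged sketch of the CLI route using Rancher Desktop's `rdctl` tool. The image reference comes from the GitHub packages page above; the `latest` tag is my assumption, so check the packages page for the current one:

```shell
# Image reference taken from the GitHub packages page; tag is an assumption
IMAGE="ghcr.io/rancher-sandbox/rancher-desktop-rdx-open-webui:latest"

# Install the extension from the container image
rdctl extension install "$IMAGE"

# Verify it shows up among the installed extensions
rdctl extension ls
```

Note that `rdctl` ships with Rancher Desktop itself, so no separate install is needed.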

The extension works on local containers:

 

Upon accessing the extension for the first time, you are given a list of what's new in the latest release:

What’s New

As stated before, it discovered the models that I had already downloaded for LM Studio and Ollama:

My Models
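If you want to cross-check that discovery against Ollama itself, the standard Ollama CLI can show what is available locally. A quick sketch, assuming the `ollama` CLI is on your PATH:

```shell
# List the models Ollama already has locally (these are what the
# extension should discover alongside the default)
ollama list

# Pull TinyLlama, the extension's default model, if it is not present yet
ollama pull tinyllama
```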

Back on the chat screen, I decided to use the TinyLlama LLM and ask it to create a simple YAML example of a Kubernetes Ingress object. This type of request, on my laptop with no GPU, came back nice and quickly:

Request
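For reference, a minimal Ingress manifest of the kind I was asking for looks roughly like this (the host, service name, and port here are made-up illustrative values, not the model's actual output):

```yaml
# Minimal Kubernetes Ingress; host, service name, and port are illustrative
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
```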

If you click Web Search, you will enable web searching, but the results were not good (again, this could be due to the model that I used, so I will need to do some more experimenting):
 

Web Search

The answer was a complete hallucination, of which I will only show a part:

 

The Code Interpreter was better:

 

Code Interpreter

 

For the RAG portion you can upload files:

RAG Test
Results

The result was not too bad and I can work on my prompts to get better results.

In my next post I will dive a bit deeper into the controls and functionalities available in this version.

While this is an interesting tool and we will see where it goes, at this point I don't really see the need for it if you are already leveraging something like OpenWebUI, LM Studio, or even just command-line Ollama locally.

 

1 comment

Chris.Childerhose
  • Veeam Legend, Veeam Vanguard
  • March 13, 2025

Sounds like it will be a fun series for sure.  Will be sure to follow along this new journey. 😁