You have the server. You have the Llama 3 model. Now, let’s make it actually do something useful.
Running an AI model in a black terminal window is cool for about ten minutes. But integrating that AI into your WordPress website so it can answer visitor questions, write content, or act as a support agent? That is where the magic happens.
This guide will show you how to bridge the gap between your self-hosted Ollama instance and your WordPress site using the AI Engine plugin. No API fees, no data leaks, just pure open-source power.
The Logic: How It Connects
Before we click buttons, visualize the setup:
- The Brain: Your VPS running Ollama (e.g., DigitalOcean, Vultr, or Contabo).
- The Body: Your WordPress website.
- The Bridge: The HTTP API. WordPress sends a text prompt to your VPS IP address; Ollama processes it and sends the text back.
The problem? By default, Ollama is “shy.” It only listens to itself (localhost). We need to tell it to listen to the outside world.
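Under the hood, the bridge is nothing exotic: AI Engine sends a plain HTTP POST to Ollama's chat endpoint. You can reproduce the same request by hand with curl once the server is reachable (here, `YOUR_VPS_IP` is a placeholder for your server's address):

```shell
# Sketch of the request WordPress sends to Ollama's chat API.
# YOUR_VPS_IP is a placeholder -- substitute your server's real address.
curl http://YOUR_VPS_IP:11434/api/chat -d '{
  "model": "llama3",
  "messages": [{ "role": "user", "content": "Why is the sky blue?" }],
  "stream": false
}'
```

With `"stream": false`, Ollama returns the full reply as a single JSON object, which is the easiest form to inspect by eye.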
Step 1: Expose Ollama to the Internet
If your WordPress site is hosted on the same server as Ollama, you can skip this step. But if your WordPress is on SiteGround/Hostinger and Ollama is on a separate VPS (which is recommended for performance), you need to open the gates.
1. Edit the System Service
SSH into your VPS and edit the Ollama service configuration:
```bash
systemctl edit ollama.service
```
2. Add the Environment Variable
In the editor that opens, add these lines:
```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
```
`0.0.0.0` tells Ollama to accept connections from any IP. `OLLAMA_ORIGINS=*` prevents CORS errors (browser security blocks) when WordPress tries to talk to it.
3. Restart Ollama
Save the file and restart the service to apply changes:
```bash
systemctl daemon-reload
systemctl restart ollama
```
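To confirm the new setting took effect, check what address Ollama is now bound to (a quick sanity check on the VPS; `ss` ships with most modern distros):

```shell
# Should now show 0.0.0.0:11434 instead of 127.0.0.1:11434
ss -tlnp | grep 11434

# Should return a JSON list of the models you have pulled
curl http://localhost:11434/api/tags
```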
4. Open the Firewall
Allow traffic on port 11434:
```bash
ufw allow 11434
```
Security Note: This makes your AI accessible to anyone with your IP. Later, you should use a firewall rule to only allow your WordPress site’s IP address.
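When you are ready to lock this down, the ufw rules look roughly like the following (`YOUR_WORDPRESS_IP` is a placeholder for your web host's outbound IP address):

```shell
# Remove the open-to-everyone rule, then allow only your WordPress server.
# YOUR_WORDPRESS_IP is a placeholder -- use your host's real outbound IP.
ufw delete allow 11434
ufw allow from YOUR_WORDPRESS_IP to any port 11434 proto tcp
```

Note that some shared hosts route outbound traffic through multiple IPs, so check your host's documentation before restricting the port.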
Step 2: Install the “AI Engine” Plugin
There are many AI plugins, but AI Engine by Jordy Meow is the industry standard for custom integrations. It is clean, reliable, and has a free version that works perfectly with Ollama.
- Log in to your WordPress Dashboard.
- Go to Plugins -> Add New.
- Search for “AI Engine”.
- Install and Activate the one with the cat logo (Meow Apps).
Step 3: Connect WordPress to Your VPS
Now we build the bridge.
- Go to Meow Apps -> AI Engine in your dashboard sidebar.
- Click on the Settings tab.
- Look for the “AI Models” or “Environments” section.
- You will see tabs like “OpenAI”, “Anthropic”, etc. Click on Ollama.
Configuration:
- Service Name: Ollama
- Model Name: `llama3` (or `mistral`, depending on what you pulled on your VPS).
- Endpoint / URL: `http://YOUR_VPS_IP:11434/api/chat`. Important: make sure you include the `http://` and the port.
- Context Window: Set this to `4096` or `8192` (keep it lower if your VPS has low RAM).
Click Save.
The Test:
There is usually a “Test” button next to the model name. Click it.
- Green Checkmark? Success! Your website just shook hands with your server.
- Red Error? Check your VPS firewall and ensure `OLLAMA_HOST` was saved correctly.
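If the test fails, the fastest way to isolate the problem is to run curl from the machine hosting WordPress, assuming you have shell access there (`YOUR_VPS_IP` is a placeholder):

```shell
# A 5-second timeout keeps the check snappy; /api/tags needs no request body.
# YOUR_VPS_IP is a placeholder for your Ollama server's address.
curl -m 5 http://YOUR_VPS_IP:11434/api/tags
```

JSON coming back means the bridge works and the issue is in the plugin settings; a timeout usually points to the firewall or a mis-saved `OLLAMA_HOST`.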
Step 4: Create the Chatbot UI
Now that the brain is connected, let’s give it a face.
- In the AI Engine menu, go to Chatbots.
- Click Create New.
- Visual Settings: Give your bot a name (e.g., “Helper Bot”), choose an avatar, and set the welcome message.
- AI Settings:
- Deployment: Select the “Ollama” model you added in Step 3.
- Context/System Prompt: This is crucial. Tell the AI who it is.
- Example: “You are a helpful support assistant for [Your Website Name]. You answer questions briefly and professionally. Do not make up facts.”
Step 5: Display the Chatbot
You have two ways to show the bot to your visitors:
Option A: The Shortcode (For specific pages)
Copy the shortcode provided (e.g., [mwai_chat context="default"]) and paste it into any Post or Page.
Option B: The Floating Widget (Site-wide)
- Go to the Chatbot settings tab.
- Enable “Popup” or “Full-Site Widget”.
- This adds the classic “chat bubble” to the bottom corner of your entire website.
Performance Tuning & Real Talk
Since you are self-hosting, the speed depends entirely on your VPS.
- The “Thinking” Pause: Unlike ChatGPT, which streams instantly, your server might take 2-3 seconds to “warm up” the model if it hasn’t been used in a while.
- Keep It Light: Use smaller models like Llama 3 8B or Mistral 7B. Do not try to run Llama 3 70B on a standard VPS; it will crash everything.
- Cache: If you use a caching plugin (like WP Rocket or LiteSpeed), exclude the page where the chatbot lives, or ensure the REST API isn’t being cached aggressively.