Self-Hosted Sentiment Analysis with n8n, Ollama and Google’s Gemma3 using Coolify

Guide for Self-Hosting n8n and Ollama with Coolify

Step-By-Step Instructions for n8n and Ollama Setup on Gozunga Cloud

Streamline workflows with self-hosted sentiment analysis using n8n, Ollama, and Gemma3 on Coolify. Boost customer service via Gozunga Cloud.

BEFORE YOU CONTINUE

Review our previous article, where we created a Coolify server using Gozunga Cloud
View Article

Overview

In our example scenario, we need to categorize chat messages by sentiment for our customer service team.

We will use our existing Coolify server to host our new sentiment analysis app. Here is a little bit about each component’s function:

  • Coolify is used as our application platform installed on a Gozunga Virtual Server
  • n8n will be used to orchestrate a workflow that receives chat messages, letting us simulate a real customer service message
  • Ollama will run Google’s Gemma3:1b model
  • Open WebUI will be used briefly as a quick frontend to download our model and test it

To run all of this software, our Virtual Server will need 4 CPUs and 16 GB of RAM, so ensure your server is st.medium2 or larger.

1. INITIAL SETUP

First we create a new Coolify project to house our apps and then set up the resources we need

  • From the Projects screen, click Add, then choose a name for your project, e.g. ‘n8n with Ollama’
  • Choose the Production environment, then from the Resources screen, click New and type ‘n8n’

  • Change the Service Name to just ‘n8n’ and check the ‘Connect To Predefined Network’ checkbox
  • Click the purple link next to the service URL to make it something like ‘https://n8n.coolify.mydomain.com:5678’, then click Save


  • Head back to your project’s ‘production’ environment, choose ‘New’, find Ollama with Open WebUI, and add it to our project

  • Change the Service Name to just ‘ollama-with-open-webui’ and check the ‘Connect To Predefined Network’ checkbox
  • Click the purple link next to the service URL for the ‘Open WebUI’ service to make it something like ‘https://openwebui.coolify.mydomain.com:8080’, then click Save.
    • We’ll use that to download our Gemma3:1b model later on.

2. n8n SETUP

Perform the initial configuration for n8n

  • Head back to our n8n service within our Coolify project and go to the Environment Variables section
  • Set the GENERIC_TIMEZONE and TZ variables to your preferred timezone (e.g. America/Chicago)
  • Choose Deploy to start building n8n
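
Once the deploy finishes, you can verify the instance is reachable before moving on. A minimal Python sketch, assuming the example n8n URL from step 1 and n8n’s built-in /healthz health check endpoint:

    # Quick reachability check for the freshly deployed n8n instance.
    import requests

    N8N_URL = "https://n8n.coolify.mydomain.com"  # the URL chosen in step 1

    resp = requests.get(f"{N8N_URL}/healthz", timeout=10)
    resp.raise_for_status()
    print(resp.json())  # expect {"status": "ok"} once n8n is up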

3. OLLAMA SETUP

Use Open WebUI to download our gemma3:1b model

  • Go to our Ollama with Open WebUI resource and click Deploy
  • Take note of the full ollama-api service name on the Deploy screen — it will read something like:
    Container ollama-api-o4scoswso888wk4gggkw8oc4 Started.
    • The part we want is ‘ollama-api-o4scoswso888wk4gggkw8oc4’
    • Yours will be different

  • When the deploy completes, head over to the URL we set up earlier, e.g. https://openwebui.coolify.mydomain.com
    • Note: You don’t need the :8080 we used earlier — that is the service port used internally within Coolify and the underlying Docker container
  • You’ll need to create a new account. This is a local account within our Open WebUI instance. Click Get started and create your account.

  • Dismiss the welcome screen, then head over to the User menu to find the Admin Panel
  • Then head over to Settings -> Models
  • Click the Download icon
  • Enter ‘gemma3:1b’ and click Download (an API-based alternative is sketched after this list)
    • This will take a few moments
  • Click New Chat and type out a test message of your choosing to verify it’s up and running.

Now Ollama is ready for use in our n8n workflow.
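
If you’d rather script the model download than click through Open WebUI, Ollama’s REST API exposes a pull endpoint. A minimal Python sketch, assuming it runs from a container on the same Coolify predefined Docker network (the ollama-api hostname below is the example from our Deploy screen; yours will differ):

    # Pull gemma3:1b through Ollama's REST API instead of the Open WebUI
    # dialog, then run a one-off prompt to confirm the model responds.
    import requests

    # Internal service name from the Deploy screen; yours will differ.
    OLLAMA_URL = "http://ollama-api-o4scoswso888wk4gggkw8oc4:11434"

    # /api/pull streams progress as JSON lines until the download completes.
    with requests.post(f"{OLLAMA_URL}/api/pull",
                       json={"model": "gemma3:1b"}, stream=True) as r:
        r.raise_for_status()
        for line in r.iter_lines():
            if line:
                print(line.decode())  # e.g. {"status":"pulling manifest"}

    # Non-streaming test generation, the API equivalent of the New Chat test.
    test = requests.post(f"{OLLAMA_URL}/api/generate",
                         json={"model": "gemma3:1b",
                               "prompt": "Say hello in one sentence.",
                               "stream": False})
    print(test.json()["response"])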

4. WORKFLOW SETUP

We will log in to n8n for the first time and set up a basic workflow

  • Visit the n8n URL you selected before, e.g. https://n8n.coolify.mydomain.com
    • Note: you don’t need the :5678 port – that is the service port used internally within Coolify and the underlying Docker container
  • Create an account using the on-screen prompts for name, email, password, and any other optional information
  • Create a new n8n workflow by clicking the ‘Start from scratch’ icon
  • Give it a name, for example ‘Chat Message Sentiment Analysis’, and click Save
  • Click the Add first step button, select On chat message, then click Back to canvas
  • Connect a new AI Agent node to our ‘When chat message received’ node
  • Under Options choose System Message, select Expression, and add the following:

    You are a sentiment analysis tool for customer service. Analyze the sentiment of this message and classify it as "Positive," "Negative," or "Neutral." Provide a short explanation. Use this format:

    Sentiment: [Positive/Negative/Neutral]
    Explanation: [Brief reasoning]

    Customer message: "{{$json.chatInput}}"

  • Click the Chat Model + along the bottom of our screen and choose Ollama Chat Model
  • Under Credential to connect with, click Create New, enter the base URL http://ollama-api-o4scoswso888wk4gggkw8oc4:11434 (your own ollama-api service name from the Deploy screen, on Ollama’s default port 11434), then click Save.
  • n8n will run a quick test to confirm it can connect
  • If you leave the Ollama Chat Model node and go back to it, you should now see the gemma3:1b model we downloaded earlier
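
For debugging, it can help to reproduce outside n8n what the AI Agent and Ollama Chat Model nodes do together: send the system message plus the customer text to Ollama’s chat endpoint. A rough Python sketch, assuming the same internal base URL used for the credential:

    # Roughly what the AI Agent + Ollama Chat Model pair does per message:
    # system prompt + customer text -> /api/chat -> formatted sentiment reply.
    import requests

    OLLAMA_URL = "http://ollama-api-o4scoswso888wk4gggkw8oc4:11434"  # yours differs

    SYSTEM_MESSAGE = (
        'You are a sentiment analysis tool for customer service. Analyze the '
        'sentiment of this message and classify it as "Positive," "Negative," '
        'or "Neutral." Provide a short explanation. Use this format:\n\n'
        'Sentiment: [Positive/Negative/Neutral]\n'
        'Explanation: [Brief reasoning]'
    )

    def analyze(chat_input: str) -> str:
        resp = requests.post(f"{OLLAMA_URL}/api/chat", json={
            "model": "gemma3:1b",
            "messages": [
                {"role": "system", "content": SYSTEM_MESSAGE},
                {"role": "user", "content": f'Customer message: "{chat_input}"'},
            ],
            "stream": False,
        })
        resp.raise_for_status()
        return resp.json()["message"]["content"]

    print(analyze("Gozunga is really wonderful"))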

5. TEST IT OUT

Submit a chat message to confirm everything is working

  • Click n8n’s Open chat button
  • Enter some text to simulate a customer’s message, such as “Gozunga is really wonderful”
  • You will see a full breakdown of the input and output of the sentiment analysis, along with the duration of the request
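
Because the system message pins replies to a fixed ‘Sentiment: / Explanation:’ layout, a downstream step can parse them mechanically. A small Python sketch; note that small models occasionally drift off-format, hence the fallback values:

    # Parse the model's "Sentiment: ... / Explanation: ..." reply into fields.
    import re

    def parse_sentiment(reply: str) -> dict:
        sentiment = re.search(r"Sentiment:\s*\[?(Positive|Negative|Neutral)\]?",
                              reply, re.IGNORECASE)
        explanation = re.search(r"Explanation:\s*(.+)", reply)
        return {
            # Fall back gracefully when a small model drifts off-format.
            "sentiment": sentiment.group(1).capitalize() if sentiment else "Unknown",
            "explanation": explanation.group(1).strip() if explanation else reply.strip(),
        }

    print(parse_sentiment("Sentiment: Positive\nExplanation: The customer praises the service."))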

REVIEW AND NEXT STEPS

What did we do, why is it awesome, and what’s next?

With this configuration, using Coolify as our application platform, we were able to set up our own n8n automation orchestration service with an AI agent configured for Google’s new Gemma3:1b model. With minimal effort, this workflow could be configured to read directly from your incoming email, website chat, or ticketing system to quickly analyze your customers’ communications and report back to you in seconds.
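
For instance, swapping the chat trigger for n8n’s Webhook node would let any of those systems POST messages straight into the workflow. A hypothetical Python sketch, assuming a Webhook node configured with the path ‘sentiment’ (that path is illustrative, not part of this setup):

    # Hypothetical: push a ticketing-system message into the workflow,
    # assuming the chat trigger is swapped for a Webhook node at path "sentiment".
    import requests

    N8N_URL = "https://n8n.coolify.mydomain.com"

    resp = requests.post(f"{N8N_URL}/webhook/sentiment",
                         json={"chatInput": "My order arrived two weeks late."})
    print(resp.status_code, resp.text)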


Gozunga Advantages

  • Private – The workflow execution and LLM inference operations are all local to your Gozunga hosted applications
  • Inexpensive – Using a small, capable model allows you to control cost by using CPU-based instances for basic requests
  • Flat Rate – There are no token limitations or rate-limiting
  • Consistent Performance – Because of all of the above, you can expect consistent results

This just scratches the surface. From here, you can easily tie in other RAG (Retrieval-Augmented Generation) resources to add more context to the analysis process. Think about it: if a customer is complaining to you, the LLM can look up their service record history to help influence and prioritize your customer service initiatives.
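
As a sketch of that idea: fetch the customer’s recent history and fold it into the prompt before the sentiment call. The lookup_service_history function below is a hypothetical stand-in for your CRM or ticketing query, with illustrative placeholder records:

    # Hypothetical RAG-style augmentation: prepend retrieved service history
    # to the sentiment prompt so the model can weigh past interactions.

    def lookup_service_history(customer_id: str) -> list[str]:
        # Stand-in for a real CRM/ticketing query; sample data only.
        return ["2025-01-12: reported late delivery",
                "2025-02-03: refund issued"]

    def build_prompt(customer_id: str, message: str) -> str:
        history = "\n".join(lookup_service_history(customer_id))
        return (
            "You are a sentiment analysis tool for customer service.\n"
            f"Recent service history for this customer:\n{history}\n\n"
            "Classify the message as Positive, Negative, or Neutral, taking "
            "the history into account, and explain briefly.\n\n"
            f'Customer message: "{message}"'
        )

    print(build_prompt("cust-42", "This is the second time my order is late!"))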


We’d like to talk with you further about Gozunga being a part of your AI automation journey by providing a solid foundation of cloud resources for you to compose your web services and applications. How can we help you?

All new accounts will receive $100 of account credit when signing up!

James

Founder & CTO of Gozunga