Ollama Setup Instructions
Download and Install Ollama
Download the current version of Ollama for your platform from the official website: ollama.com. A standard installer package is available for each operating system (Windows or macOS).
After the Installation
Once the installation is complete, you need to pull a vision-capable LLM. Most models are multiple gigabytes, so the download will take some time depending on your internet connection speed.
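Before pulling a model, you can confirm the installation succeeded. A minimal check, assuming a default install where the Ollama server runs in the background on its standard port (11434):

```shell
# Confirm the Ollama CLI is on your PATH
ollama --version

# The background server listens on localhost:11434 by default;
# this request should return the message "Ollama is running"
curl http://localhost:11434
```

If the curl request fails, start the server manually with `ollama serve` and try again.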
Our recommendations for use with LrGeniusAI via Ollama:
- gemma3:12b-it-q4_K_M or gemma3:4b-it-q4_K_M - Google's open-weight Gemma 3 models, from the same family as Gemini.
- qwen3-vl:4b-instruct-q4_K_M or qwen3-vl:8b-instruct-q4_K_M - Another very strong AI vision model family.
- llava - A popular, widely used vision model.
- minicpm-v - A smaller, faster option for basic analysis.
You can pull and use any vision-capable model available in Ollama.
Pull an AI Model
You need to download at least one AI model to use Ollama with LrGeniusAI. We recommend gemma3 or qwen3-vl. Try multiple models to find the best combination of results and performance for your use case.
See the Ollama model library online for the full list of available models.
Run at least one of the following commands:
ollama pull gemma3:4b-it-q4_K_M (recommended GPU memory: 8 GB)
ollama pull gemma3:12b-it-q4_K_M (recommended GPU memory: 12 GB)
ollama pull qwen3-vl:4b-instruct-q4_K_M (recommended GPU memory: 8 GB)
ollama pull qwen3-vl:8b-instruct-q4_K_M (recommended GPU memory: 8-12 GB)
ollama pull llava (recommended GPU memory: 8 GB)
ollama pull minicpm-v (recommended GPU memory: 8 GB)
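Once a pull finishes, you can confirm the model is available locally and give it a quick smoke test. A sketch, assuming you pulled the gemma3:4b-it-q4_K_M tag (substitute whichever model you chose):

```shell
# List locally downloaded models; the tag you pulled should appear here
ollama list

# Run a one-off prompt to confirm the model loads and responds
# (the first run may be slow while the model is loaded into memory)
ollama run gemma3:4b-it-q4_K_M "Reply with the single word: ready"
```

If the model appears in `ollama list` and responds to the prompt, it is ready to select in LrGeniusAI.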