OllaMan Docs

Quick Start

Zero-code setup: get started with OllaMan in 5 minutes

Welcome to OllaMan

OllaMan is a professional desktop application for managing Ollama models. Its friendly graphical interface makes managing and using AI models easy, with no coding required.

This guide will walk you through the complete process from installation to your first conversation.


Step 1: Install Ollama

OllaMan requires Ollama as the backend service. Let's install it first.

Download and Install

Visit ollama.com/download and install Ollama for your operating system.

Verify Installation

After installation, Ollama will automatically run in the background. Let's verify:

  1. Open your web browser
  2. Navigate to http://localhost:11434
  3. If the page displays "Ollama is running", the installation was successful!

(Screenshot: "Ollama is running" success message)
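If you prefer the command line, the same check can be scripted against the server's root endpoint. A minimal sketch using only the Python standard library (the helper name `ollama_reachable` is ours for illustration, not part of Ollama):

```python
import urllib.request
from urllib.error import URLError

def ollama_reachable(url: str, timeout: float = 3.0) -> bool:
    """Return True if an Ollama server answers at the given base URL.

    The root endpoint of a running server replies with the plain text
    "Ollama is running" and HTTP status 200.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        # Connection refused, timeout, DNS failure, etc.
        return False

if __name__ == "__main__":
    print(ollama_reachable("http://localhost:11434"))
```

This is the same verification the browser performs; `False` simply means no server answered at that address.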

Can't see this message?

If your browser shows "Cannot connect", please check:

  • System tray (Windows) or menu bar (macOS) for the Ollama icon
  • Try restarting the Ollama application
  • Windows users can check Ollama service status in "Services"

Step 2: Configure Ollama

Before connecting OllaMan, we need to configure Ollama with necessary settings.

Enable Network Access

If the Ollama server is deployed on a remote host, you need to configure it to allow external network access:

Locate the Ollama Icon

  • Windows: Find the Ollama icon in the system tray at the bottom-right corner
  • macOS: Find the Ollama icon in the menu bar at the top-right corner

Open Settings Menu

Right-click (or left-click) the Ollama icon and select "Settings"

Enable Network Access

In the settings window, find and enable the "Expose Ollama to the network" option

(Screenshot: "Expose Ollama to the network" option)

Why enable network access?

Security Notice: This option allows other devices on the same network to access your Ollama service.

  • Convenience: Enables OllaMan and other management tools to connect
  • Security: Only enable in trusted network environments
  • Recommendation: Default settings are secure enough for local-only use
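On a headless Linux server there is no tray icon or settings window. There, the equivalent is Ollama's `OLLAMA_HOST` environment variable, which controls the address the server binds to (we assume here that the tray toggle maps to the same setting). A sketch:

```shell
# Bind Ollama to all interfaces instead of just localhost.
# Only do this on a trusted network.
export OLLAMA_HOST=0.0.0.0:11434
ollama serve
```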

Sign in to Ollama Account

If you want to use Ollama Cloud's online models, sign in to your account:

Click Sign In

In the Ollama settings window, click the "Sign in" button

Create or Sign In

  • If you have an account, enter your email and password to sign in
  • If you don't have an account, click "Create Account" to register

Verify Sign-In Status

After successful sign-in, the settings window will display your account information

Why sign in?

After signing in to your Ollama account, you can:

  • Access online models on Ollama Cloud
  • Sync your model list across multiple devices
  • Favorite and manage your preferred models

Step 3: Connect OllaMan

Now, let's launch OllaMan and connect to the Ollama service.

Launch OllaMan

Double-click the OllaMan application icon to start the program.

Configure Server Connection

On first launch, OllaMan will automatically attempt to connect to the local Ollama service:

  1. At the top of the sidebar, you'll see the server selector
  2. The default server address is http://localhost:11434
  3. If connected successfully, you'll see a green "Connected" status indicator

(Screenshot: OllaMan server connection interface)

Connection Failed?

If it shows "Not Connected" status, please check:

  • Confirm Ollama is running (verify by visiting http://localhost:11434)
  • Confirm "Expose to network" option is enabled
  • Try clicking "Test Connection" button to recheck

Multi-Server Management

You can add multiple Ollama servers:

Open Server Management

Click the server selector and choose "Manage Servers"

Add New Server

  1. Click the "Add Server" button
  2. Enter server name (e.g., "Office Server")
  3. Enter server URL (e.g., http://192.168.1.100:11434)
  4. If required, enter authentication credentials (username and password)

Test and Save

Click "Test Connection" button, then click "Save" after successful connection


Step 4: Manage Models

After successful connection, let's start managing your AI models.

Browse Online Model Library

Go to Online Models Page

Click "Discover" (Online Models) option in the sidebar

Search for Models

Use the search box to find models you're interested in, such as:

  • llama3 - Meta's open large language model
  • mistral - High-performance open-source model
  • codellama - Code generation specialist model

(Screenshot: online models browse interface)

View Model Details

Click any model card to see detailed information:

  • Model description and capabilities
  • Available versions and parameter sizes
  • Download count and last-updated time

Download Models

After finding the model you want, you can download it directly:

Select Version

On the model details page, choose the version that suits you:

  • More parameters generally mean stronger capabilities, but also more disk space and memory usage
  • We recommend starting with a smaller version (e.g., 7B)

Start Download

Click the "Pull" button next to the version

Monitor Download Progress

The download manager will automatically appear on the right side, showing in real time:

  • Download progress percentage
  • Current speed
  • Estimated remaining time

(Screenshot: download manager)
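Under the hood, a pull corresponds to Ollama's REST endpoint `POST /api/pull`, which streams one JSON object per line with `completed` and `total` byte counts. A minimal sketch of reading that stream (the helper names are ours):

```python
import json
import urllib.request

def percent(completed: int, total: int) -> float:
    """Progress as a percentage, as a download manager would show it."""
    return round(completed / total * 100, 1) if total else 0.0

def pull_model(name: str, base_url: str = "http://localhost:11434") -> None:
    """Stream pull progress from Ollama's /api/pull endpoint."""
    body = json.dumps({"model": name}).encode()
    req = urllib.request.Request(f"{base_url}/api/pull", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        for line in resp:  # one JSON object per line while streaming
            status = json.loads(line)
            if "total" in status:
                done = percent(status.get("completed", 0), status["total"])
                print(f"{status['status']}: {done}%")
            else:
                print(status.get("status", ""))
```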

View Local Models

Go to Local Models Page

Click "Installed" (Local Models) option in the sidebar

Browse Installed Models

You'll see all downloaded models. Each card displays:

  • Model name and tags
  • Parameter size and quantization level
  • Storage space and modification time

(Screenshot: local models list)
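The same information is available from Ollama's `GET /api/tags` endpoint, which returns each installed model's name, size in bytes, and modification time. A small sketch, assuming a reachable local server (the formatting helper is ours):

```python
import json
import urllib.request

def fmt_size(num_bytes: int) -> str:
    """Render a byte count in gigabytes, as shown on the model cards."""
    return f"{num_bytes / 1024**3:.1f} GB"

def list_local_models(base_url: str = "http://localhost:11434") -> None:
    """Print installed models from Ollama's /api/tags endpoint."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        for model in json.load(resp)["models"]:
            print(f"{model['name']:30s} {fmt_size(model['size'])}")
```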

Manage Models

Each model card has action buttons:

  • "Chat": Start conversation with the model
  • "Delete": Remove unwanted models to free up space

Step 5: Start Chatting

Now, let's have your first conversation with an AI model!

Select a Model

From the local models list, click the "Chat" button on any model

Start Asking

Type Your Message

In the input box at the bottom of the page, type your question, for example:

  • "Hello, please introduce yourself"
  • "Write a bubble sort algorithm in Python"
  • "Help me plan a weekend trip"

Send Message

  • Press Enter to send the message
  • Press Shift + Enter for a new line

View Response

The model generates its response in real time. You can see:

  • Streaming text content
  • Code highlighting and formatting
  • Generation speed at the bottom (tokens/second)

(Screenshot: chat interface)
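The tokens/second figure comes from the final chunk of Ollama's streaming chat response, which reports `eval_count` (tokens generated) and `eval_duration` (time spent generating, in nanoseconds). The calculation is simply:

```python
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Generation speed from the final chunk of an Ollama chat response.

    Ollama reports eval_count (tokens generated) and eval_duration
    (nanoseconds spent generating) in the last streamed chunk.
    """
    return eval_count / (eval_duration_ns / 1_000_000_000)

# Example: 100 tokens generated in 2 seconds.
print(tokens_per_second(100, 2_000_000_000))  # → 50.0
```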



Dashboard: Monitor System Status

Check system status anytime:

Access Dashboard

Click the "Dashboard" option in the sidebar

View Key Metrics

The dashboard displays the following in real time:

Model Statistics

  • Total number of installed models
  • Number of currently running models

Resource Usage

  • Memory usage (VRAM)
  • Disk space usage

Running Models

  • Real-time display of active models
  • Model expiration countdown
  • One-click unload to free memory

(Screenshot: dashboard)
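The one-click unload maps to Ollama's `keep_alive` mechanism: a generate request with `keep_alive` set to 0 asks the server to unload the model immediately (running models themselves are listed by `GET /api/ps`). A hedged sketch, with helper names of our own invention:

```python
import json
import urllib.request

def unload_payload(model: str) -> dict:
    """Request body that asks Ollama to unload a model immediately:
    a generate call with keep_alive set to 0."""
    return {"model": model, "keep_alive": 0}

def unload_model(model: str, base_url: str = "http://localhost:11434") -> None:
    """Free the memory held by a running model via /api/generate."""
    body = json.dumps(unload_payload(model)).encode()
    req = urllib.request.Request(f"{base_url}/api/generate", data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req).read()
```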


Next Steps

Congratulations on completing the OllaMan quick start! From here, you can explore the rest of the documentation to go deeper into each feature.


Need Help?

Get Support

  • Visit full documentation for detailed features
  • Join community discussions to share experiences
  • Submit issues on GitHub for feedback
  • Contact technical support for assistance

Enjoy using OllaMan!