Ollama: Running with large language models locally

Ollama is a user-friendly tool designed to run large language models (LLMs) locally on a computer. It supports a variety of AI models, including LLaMA-2, uncensored LLaMA, CodeLLaMA, Falcon, Mistral, Vicuna, WizardCoder, and Wizard uncensored. It is currently compatible with macOS, Linux, and Windows.

Key features

1. User friendly

Ollama is very easy to set up and use. Installation is straightforward: you can install it via the command line or by downloading an executable installer from the Ollama website.

2. A variety of LLM models

Ollama supports a large list of open-source models, including uncensored LLMs. You can see the full list on their website.

3. REST API

Ollama exposes a REST API, so you can interact with your local models programmatically from any language or tool that can make HTTP requests.
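As a sketch of how this works, the snippet below builds a request for Ollama's generation endpoint using only the Python standard library. It assumes the server's documented defaults: the API listens on localhost port 11434, and /api/generate accepts a JSON body with "model", "prompt", and "stream" fields. The model name "llama2" is just an example; use whichever model you have pulled locally.

```python
import json
import urllib.request

# Default address of the local Ollama server (assumed from the Ollama docs).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint.

    Setting "stream" to False asks the server to return a single
    JSON object instead of a stream of partial responses.
    """
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("llama2", "Why is the sky blue?")

# With a local Ollama server running, you would send the request like this:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Because the request is plain HTTP with a JSON body, the same call works from curl, JavaScript, or any other client; nothing here is specific to Python.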

4. Easy installation

The installation is quick and simple; a step-by-step installation guide is available on their website.