Running AI Locally: My First Experiment with Ollama, Python, and Hugging Face

As part of my journey to understand Artificial Intelligence and Large Language Models, I wanted to go beyond simply using cloud-based AI tools.

Instead, I decided to try something more hands-on:

Running an AI model locally on my own machine.

Running AI locally offers several advantages:

  • More control over the models

  • Better privacy

  • No dependency on external APIs

  • Ability to experiment freely

  • Deeper understanding of how AI systems actually work

This post summarizes the tools I used and the steps I followed to run AI locally.


Tools and Technologies Used

To set up a local AI environment, I used the following tools.


1. WSL (Windows Subsystem for Linux)

Since I was working on Windows, I first installed WSL.

WSL allows you to run a Linux environment inside Windows, which is extremely useful because most AI tools are designed to run on Linux systems.

Benefits include:

  • Linux development environment

  • Better compatibility with AI frameworks

  • Easier installation of packages and libraries


2. Python

Python is the primary programming language used in AI development.

Most AI frameworks are built around Python, including:

  • PyTorch

  • TensorFlow

  • Hugging Face Transformers

This makes Python the foundation for running and experimenting with AI models.


3. Poetry

Python dependency management can quickly become complicated.

To handle this, I used Poetry, which provides:

  • Dependency management

  • Virtual environments

  • Reproducible builds

Poetry helps keep Python projects organized and avoids conflicts between packages.


4. Hugging Face

Hugging Face is one of the most important platforms in the AI ecosystem.

It provides:

  • Pre-trained AI models

  • Datasets

  • AI development libraries

Developers can easily download and experiment with models using the Hugging Face ecosystem.


5. Ollama + AI Models

One of the easiest ways to run Large Language Models locally today is with Ollama.

Ollama allows you to run models like:

  • Llama

  • Mistral

  • DeepSeek

  • Code generation models

It simplifies model installation and interaction.
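Beyond running a model interactively, the Ollama CLI covers the whole model lifecycle. A few commands I found useful (model name `mistral` is just an example):

```shell
ollama pull mistral      # download a model without starting a chat
ollama list              # show models already on disk
ollama run mistral       # start an interactive session (downloads if missing)
ollama rm mistral        # delete a model to free disk space
```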


6. Pyenv

Different AI libraries require different Python versions.

To manage this, I used Pyenv, which allows you to:

  • Install multiple Python versions

  • Switch versions easily

  • Avoid dependency conflicts

This makes Python environment management much easier.


Step-by-Step Guide: Running AI Locally

Below are the basic steps I followed to get a local AI model running.


Step 1: Install WSL

Open PowerShell as Administrator and run:

wsl --install

After installation, restart the system and open the Ubuntu terminal.


Step 2: Install Pyenv

Install dependencies first:

sudo apt update
sudo apt install -y build-essential curl git

Install pyenv:

curl -fsSL https://pyenv.run | bash

Add pyenv to your shell configuration.
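For bash, that means appending the following lines to ~/.bashrc (the pyenv installer prints the exact lines for your shell at the end of its output):

```shell
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
```

Then restart the terminal or run `source ~/.bashrc` so the changes take effect.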


Step 3: Install Python Using Pyenv

Install Python:

pyenv install 3.11
pyenv global 3.11

Verify installation:

python --version

Step 4: Install Poetry

Install Poetry with:

curl -sSL https://install.python-poetry.org | python3 -

Verify installation:

poetry --version

Step 5: Create a Python Project

Create a project folder:

mkdir local-ai-project
cd local-ai-project

Initialize Poetry:

poetry init

Activate the virtual environment:

poetry shell

(Note: in Poetry 2.x the `shell` command was moved to a plugin; `poetry env activate` prints the activation command instead.)

Step 6: Install Hugging Face Libraries

Install the required libraries with `poetry add`, so they are recorded in pyproject.toml:

poetry add transformers torch

These libraries allow you to load and run AI models.
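As a quick sanity check of the setup, here is a minimal sketch of loading a local text-generation model with the Transformers `pipeline` API. The model name `distilgpt2` is an assumption on my part (any small causal LM from the Hub works); the first call downloads the weights.

```python
def build_generation_kwargs(max_new_tokens: int) -> dict:
    """Collect generation settings in one place (greedy decoding for repeatable output)."""
    return {"max_new_tokens": max_new_tokens, "do_sample": False}


def generate(prompt: str, model_name: str = "distilgpt2", max_new_tokens: int = 40) -> str:
    """Load a text-generation pipeline and return the continuation of `prompt`."""
    from transformers import pipeline  # requires: poetry add transformers torch

    generator = pipeline("text-generation", model=model_name)
    out = generator(prompt, **build_generation_kwargs(max_new_tokens))
    return out[0]["generated_text"]


if __name__ == "__main__":
    print(generate("Large language models are"))
```

Everything here runs on the local machine; no API key or network call is needed after the model is cached.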


Step 7: Install Ollama

Install Ollama from the official website (https://ollama.com); on Linux/WSL you can use the install script:

curl -fsSL https://ollama.com/install.sh | sh

After installation, run a model such as:

ollama run llama3

The model will download automatically and start running locally.


Step 8: Test the Model

Once the model is running, you can interact with it directly from the terminal.

Example prompt:

Explain how large language models work.

You will receive a response generated entirely on your local machine.
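Ollama also exposes a local REST API on port 11434, so the same model can be queried from Python. A minimal sketch using only the standard library, assuming `ollama run llama3` (or `ollama serve`) is already running:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_payload(model: str, prompt: str) -> dict:
    """Construct the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama model and return its reply."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(ask("llama3", "Explain how large language models work."))
```

Setting `"stream": False` makes Ollama return one complete JSON object instead of a stream of partial tokens, which keeps the client code simple.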


What I Learned

Running AI locally helped me understand several important things:

  • AI tools are becoming easier to use

  • Local experimentation gives deeper insight into model behaviour

  • The ecosystem around local AI is growing rapidly

It also showed me that developers can now experiment with AI systems without relying entirely on cloud services.


Final Thoughts

Running AI locally is an exciting way to explore modern machine learning systems.

While cloud-based AI services remain extremely powerful, local AI development allows developers to:

  • Experiment freely

  • Build private AI applications

  • Understand the technology more deeply

This is just the beginning of my journey into local AI development, and I look forward to experimenting further with different models and frameworks.


✍️ Part of my ongoing exploration into Artificial Intelligence and Large Language Models.
