Tuesday, 22 April 2025

Getting Started with the Python Ollama Library: A Quick Guide to Listing LLMs on Your System

Step 1: Install the Ollama Library

To begin, you need to install the Ollama library. Depending on your Python setup, you can use either pip or pip3:

pip install ollama
    (OR)
pip3 install ollama

This ensures the library is installed and ready to use.
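To confirm the installation succeeded, you can ask Python whether it can see the package. This is just a quick sanity check using the standard library, not part of the Ollama API:

```python
import importlib.util

# Check whether the ollama package is importable in the current environment
installed = importlib.util.find_spec("ollama") is not None
print("ollama installed:", installed)
```

If this prints False, double-check that you installed the package into the same Python environment you are running the script with.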

 

Step 2: Writing Your First Script

Once the library is installed, you can write a Python script that lists all the LLMs available on your system. Here's a simple example:

 

llmsInMySystem.py

import ollama

llmsInMySystem = ollama.list()
print(llmsInMySystem)

 

Step 3: Running the Script

Save the above code in a file named llmsInMySystem.py and run it with the following command:

python llmsInMySystem.py

 

If everything is set up correctly, the script will output a list of all the available LLMs on your system.

Output
models=[Model(model='llama3.2-vision:latest', modified_at=datetime.datetime(2025, 1, 18, 15, 21, 43, 518870, tzinfo=TzInfo(+05:30)), digest='085a1fdae525a3804ac95416b38498099c241defd0f1efc71dcca7f63190ba3d', size=7901829417, details=ModelDetails(parent_model='', format='gguf', family='mllama', families=['mllama', 'mllama'], parameter_size='9.8B', quantization_level='Q4_K_M')), Model(model='llama3.2:latest', modified_at=datetime.datetime(2025, 1, 17, 12, 26, 55, 61315, tzinfo=TzInfo(+05:30)), digest='a80c4f17acd55265feec403c7aef86be0c25983ab279d83f3bcd3abbcb5b8b72', size=2019393189, details=ModelDetails(parent_model='', format='gguf', family='llama', families=['llama'], parameter_size='3.2B', quantization_level='Q4_K_M'))]

 

ollama.list(): This function queries the local Ollama server for all the models installed on your system and returns their details in a structured response object.
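Because each entry in the models list exposes plain attributes such as model and size (as the raw output above shows), you can post-process the results with ordinary Python. As a minimal sketch, here is a hypothetical helper that picks the largest model; the (name, size) pairs below are copied from the output above, so this snippet runs even without a live Ollama server:

```python
# Hypothetical helper: find the largest installed model by size.
# The (name, size) pairs are taken from the ollama.list() output shown above.
models = [
    ("llama3.2-vision:latest", 7901829417),
    ("llama3.2:latest", 2019393189),
]

def largest_model(models):
    """Return the (name, size) pair with the biggest size in bytes."""
    return max(models, key=lambda m: m[1])

print(largest_model(models))  # ('llama3.2-vision:latest', 7901829417)
```

In a real script you would build the same pairs from ollama.list() with something like [(m.model, m.size) for m in response.models].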

 

Let's print the LLMs available on my system in a tabular format, as shown below.

 

prettyPrintLlms.py

 

import ollama
from datetime import datetime

# Fetch the list of available LLMs
llms_in_my_system = ollama.list()

# Print the output in a table format
print(f"{'Model Name':<25} {'Modified At':<21} {'Digest':<66} {'Size (bytes)':<14} {'Parameter Size':<16} {'Quantization Level':<18}")
print("-" * 165)

for model in llms_in_my_system.models:
    model_name = model.model
    modified_at = model.modified_at.strftime("%Y-%m-%d %H:%M:%S")  # Format datetime
    digest = model.digest
    size = model.size
    parameter_size = model.details.parameter_size
    quantization_level = model.details.quantization_level

    # Use the same column widths as the header so the table lines up
    print(f"{model_name:<25} {modified_at:<21} {digest:<66} {size:<14} {parameter_size:<16} {quantization_level:<18}")

Output

Model Name                Modified At           Digest                                                             Size (bytes)   Parameter Size   Quantization Level
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
llama3.2-vision:latest    2025-01-18 15:21:43   085a1fdae525a3804ac95416b38498099c241defd0f1efc71dcca7f63190ba3d   7901829417     9.8B             Q4_K_M
llama3.2:latest           2025-01-17 12:26:55   a80c4f17acd55265feec403c7aef86be0c25983ab279d83f3bcd3abbcb5b8b72   2019393189     3.2B             Q4_K_M
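The Size (bytes) column is hard to read at a glance. A small follow-up tweak (a sketch, not an Ollama API feature) is to convert the raw byte count to gigabytes before printing:

```python
def size_in_gb(num_bytes: int) -> str:
    """Convert a raw byte count to a gigabyte string with two decimals."""
    return f"{num_bytes / (1024 ** 3):.2f} GB"

# Sizes taken from the output above
print(size_in_gb(7901829417))  # 7.36 GB
print(size_in_gb(2019393189))  # 1.88 GB
```

In the table script, you would simply print size_in_gb(model.size) instead of the raw model.size value.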

 
