Ollama
Large Language Models (LLMs) are powerful tools for generating human-like text, and interacting with them via APIs opens up endless possibilities.
In this post, we'll explore how to use Python's requests library to send prompts to an Ollama LLM and process its responses efficiently.
Why Use APIs to Interact with Ollama LLMs?
APIs provide a seamless way to communicate with LLMs for tasks like text generation, summarization, and more.
By using APIs:
· You can integrate LLMs into your existing applications.
· You can handle dynamic inputs and automate tasks with ease.
Step-by-Step Guide to Interacting with Ollama LLMs via APIs in Python
1. Prerequisites:
· Ensure Ollama's API is running locally or on a server (http://localhost:11434 in our example).
· Install Python 3 and the necessary libraries:
pip install requests
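Before sending any prompts, it helps to confirm that the server is actually reachable. The helper below is a small sketch (not part of the Ollama client library) that simply checks whether anything answers at the base URL; the localhost address and port match the example above.

```python
import requests

def ollama_is_up(base_url="http://localhost:11434", timeout=5):
    """Return True if a server answers at base_url, False otherwise."""
    try:
        # Any HTTP response at all means the server is listening.
        return requests.get(base_url, timeout=timeout).ok
    except requests.RequestException:
        return False

if __name__ == "__main__":
    print("Ollama reachable:", ollama_is_up())
```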
2. API Endpoint and Payload:
The API endpoint is /api/generate. The payload includes:
· model: The name of the Ollama model to use (e.g., "llama3.2").
· prompt: The input text for the LLM (e.g., "Tell me a joke").
· stream: Whether to stream the response incrementally (true) or return it as a single JSON object (false).
3. Python Code
Here’s the Python program to send a POST request to the Ollama API and handle the response.
helloWorld.py
import requests
import json

# Define the API endpoint and payload
url = "http://localhost:11434/api/generate"
payload = {
    "model": "llama3.2",
    "prompt": "Tell me a joke",
    "stream": False
}

# Send the POST request to the Ollama API
response = requests.post(url, json=payload)

# Decode the response and parse JSON
if response.status_code == 200:
    decoded_response = response.content.decode('utf-8')
    json_response = json.loads(decoded_response)

    # Extract and print the LLM's response
    if "response" in json_response:
        print("Generated Response from Ollama LLM:")
        print(json_response["response"])
    else:
        print("No 'response' field found in the JSON.")
else:
    print(f"Error: {response.status_code}, Message: {response.text}")
Output
Generated Response from Ollama LLM:
Here's one: What do you call a fake noodle? An impasta.
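The example above sets stream to False, so the whole reply arrives in one JSON object. With stream set to true, Ollama instead sends one JSON object per line as tokens are generated, with the final object marked "done": true. Here's a minimal sketch of handling that mode, assuming the same local server and model; stream_generate is a helper name introduced for this example.

```python
import json
import requests

def stream_generate(prompt, model="llama3.2",
                    url="http://localhost:11434/api/generate"):
    """Yield response fragments from a streaming /api/generate call."""
    payload = {"model": model, "prompt": prompt, "stream": True}
    with requests.post(url, json=payload, stream=True) as response:
        response.raise_for_status()
        for line in response.iter_lines():
            if not line:
                continue
            chunk = json.loads(line)  # one JSON object per line
            yield chunk.get("response", "")
            if chunk.get("done"):
                break

if __name__ == "__main__":
    # Print the joke token by token as it arrives.
    for fragment in stream_generate("Tell me a joke"):
        print(fragment, end="", flush=True)
    print()
```

Streaming is useful for chat-style interfaces, since users see the first words of the reply immediately instead of waiting for the full generation to finish.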
Interacting with Ollama LLMs via APIs in Python is straightforward and highly versatile. By following the example above, you can easily integrate these powerful language models into your applications for dynamic, human-like text generation.