As shown in the previous lectures, you can access locally running LLMs (when using Ollama) directly via the Ollama API - either Ollama's own custom API or the OpenAI-compatible one.
Alternatively, if you don't want to craft HTTP requests from scratch (or use the OpenAI SDK), you can use the official Ollama SDKs for Python and JavaScript:
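For example, here's a minimal sketch using the Python SDK (installed via `pip install ollama`). The model name `llama3.2` is just an assumption - use whichever model you've pulled locally:

```python
# Minimal sketch using the official Ollama Python SDK.
# Assumes Ollama is running locally on the default port and that
# the "llama3.2" model has been pulled (any local model works).
import ollama

response = ollama.chat(
    model="llama3.2",  # assumption: swap in your locally pulled model
    messages=[
        {"role": "user", "content": "Why is the sky blue?"},
    ],
)

# The assistant's reply is available under message.content
print(response["message"]["content"])
```

The JavaScript SDK (the `ollama` package on npm) exposes an equivalent `ollama.chat()` function. Under the hood, both SDKs are thin wrappers around the same local HTTP API covered in the previous lectures.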