Tips and Tricks for Developers and IT Enthusiasts
As developers, managing virtual environments is a crucial part of our workflow. With Python projects constantly shifting between dependencies and Python versions, using tools that streamline this process is key. Enter uv: a tool designed to simplify the creation, activation, and management of virtual environments and to manage Python packages and projects.
In this post, I’ll introduce you to uv, walk you through its installation, and provide some tips to help you get started.
What is uv?
uv is an extremely fast Python package and project manager, written in Rust. It is a powerful tool that allows developers to manage Python virtual environments effortlessly. It provides functionality to create, activate, and switch between virtual environments in a standardized way.
By using uv, you can ensure that your virtual environments are consistently created and activated across different projects without the need to manually deal with multiple commands.
Why uv?
Managing Python projects often involves juggling various dependencies, versions, and configurations. Without proper tooling, this can become a headache. uv helps by:
- creating and managing virtual environments in a standardized way,
- resolving and installing dependencies very quickly,
- managing project metadata and dependencies via pyproject.toml,
- installing and pinning Python versions per project.
In our examples, before each command you will see our shell prompt:
❯
Don’t type the ❯ when you enter the command. So, when you see
❯ uv init
just type
uv init
In addition, when we activate the virtual environment, you will see a changed prompt:
✦ ❯
Getting started with uv is easy. Below are the steps for installing and setting up uv for your Python projects.
Installing uv
On macOS or Linux, you can install uv with the official install script:
❯ curl -LsSf https://astral.sh/uv/install.sh | sh
Alternatively, you can install uv using pip. You’ll need to have Python 3.8+ installed on your system.
❯ pip install uv
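To check that the installation worked, print the version (the exact number will of course differ on your system):
❯ uv --version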
Once installed, you can use uv to create a virtual environment for your project. Simply navigate to your project directory and run:
❯ uv venv
This command will create a new virtual environment inside the .venv folder within your project.
After creating the virtual environment, you can activate it with the usual activation script:
❯ source .venv/bin/activate
In many cases you don’t even need to activate it: commands run through uv (for example uv run or uv pip) automatically target the project’s environment, so you don’t have to worry about the different activation scripts for Windows, Linux, or macOS.
Once the environment is active, you can install your project’s dependencies as you normally would:
❯ pip install -r requirements.txt
uv ensures that your dependencies are installed in the correct environment without any extra hassle.
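If you prefer, you can also drive pip through uv itself; its pip-compatible interface installs into the project’s environment (typically .venv), even when that environment is not activated:
❯ uv pip install -r requirements.txt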
You can also switch to a pyproject.toml file to manage your dependencies.
First you have to initialize the project:
❯ uv init
Then, add the dependency:
❯ uv add requests
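After these two commands, the generated pyproject.toml will look roughly like this — the project name, version, and pinned requests version are examples and will differ for your project:
[project]
name = "playground"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
    "requests>=2.32.3",
]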
When you create a virtual environment, its bin folder should be on your PATH. Normally this is .venv/bin when you create the environment in the default location; the activation script adds it to your $PATH variable for you.
But if you want to choose a different folder, you must set the variable UV_PROJECT_ENVIRONMENT to this path:
❯ mkdir playground
❯ cd playground
❯ /usr/local/bin/python3.12 -m venv .venv/python/3.12
❯ . .venv/python/3.12/bin/activate
✦ ❯ which python
.../Playground/.venv/python/3.12/bin/python
✦ ❯ export UV_PROJECT_ENVIRONMENT=$PWD/.venv/python/3.12
✦ ❯ pip install uv
Collecting uv
  Downloading uv-0.4.25-py3-none-macosx_10_12_x86_64.whl.metadata (11 kB)
Downloading uv-0.4.25-py3-none-macosx_10_12_x86_64.whl (13.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.2/13.2 MB 16.5 MB/s eta 0:00:00
Installing collected packages: uv
Successfully installed uv-0.4.25
✦ ❯ which uv
.../Playground/.venv/python/3.12/bin/uv
✦ ❯ uv init
Initialized project `playground`
So, with the default settings, you will get a warning, because uv looks for the virtual environment in .venv.
✦ ❯ uv add requests
warning: `VIRTUAL_ENV=.venv/python/3.12` does not match the project environment path `.../.venv/python/3.12` and will be ignored
Use the environment variable to tell uv where the virtual environment is installed.
✦ ❯ export UV_PROJECT_ENVIRONMENT=$PWD/.venv/python/3.12
✦ ❯ uv add requests
Resolved 6 packages in 0.42ms
Installed 5 packages in 8ms
 + certifi==2024.8.30
 + charset-normalizer==3.4.0
 + idna==3.10
 + requests==2.32.3
 + urllib3==2.2.3
Use direnv to set up your environment automatically. Create an .envrc file with the following content:
. .venv/python/3.12/bin/activate
export UV_PROJECT_ENVIRONMENT=$PWD/.venv/python/3.12
Then allow direnv to load it:
✦ ❯ direnv allow
More uv Commands
Here are a few more useful uv commands to keep in mind:
uv remove <package> – removes a dependency from the project
uv sync – brings the environment in line with the project’s lockfile
uv pip list – lists the packages installed in the environment
Using uv Effectively
- uv uses .venv as the folder name for virtual environments. Stick to this default to keep things consistent across your projects.
- Integrate uv into your CI/CD pipeline: ensure that your automated build tools use the same virtual environment setup by adding uv commands to your pipeline scripts (a minimal sketch follows this list).
- Use uv in combination with pyproject.toml: if your project uses pyproject.toml for dependency management, uv can seamlessly integrate, ensuring your environment is always up to date.
- Because uv commands always target the project’s own environment, it is easy to switch between projects without worrying about which virtual environment is currently active.
- Combine uv with direnv or add an activation hook in your shell to automatically activate the correct environment when you enter a project folder.
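For the CI/CD tip, a minimal pipeline step could look like the following sketch — the pytest test suite is an assumption for illustration, not part of this post:
❯ pip install uv
❯ uv sync
❯ uv run pytest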
uv Command Cheatsheet
Command | Description
---|---
uv venv | Creates a virtual environment in the .venv directory.
uv init | Initializes a project with a pyproject.toml.
uv add [package] | Adds a dependency to the project and installs it.
uv remove [package] | Removes a dependency from the project.
uv sync | Installs the project’s dependencies into the environment.
uv lock | Creates or updates the uv.lock lockfile.
uv run [command] | Runs a command inside the project’s environment.
uv tree | Shows the project’s dependency tree.
uv pip install [package] | Installs a Python package in the environment.
uv pip uninstall [package] | Uninstalls a Python package from the environment.
uv pip freeze | Outputs a list of installed packages and their versions.
uv pip list | Lists all installed packages in the environment.
uv pip show [package] | Shows details about a specific installed package.
uv python install [version] | Installs a Python version for use with uv.
uv python list | Lists available and installed Python versions.
uv cache clean | Clears uv’s package cache.
uv self update | Upgrades uv itself to the latest version (standalone installs).
uv help | Displays help about available commands.
Here is a short list of websites with documentation or other information about uv:
- https://docs.astral.sh/uv/ – official documentation
- https://github.com/astral-sh/uv – source code and issue tracker
<script>
export default {
  data() {
    return {
      isMobile: false,
      isDesktop: false,
      windowWidth: window.innerWidth,
      windowHeight: window.innerHeight,
    };
  },
  created() {
    this.updateWindowSize();
    window.addEventListener('resize', this.updateWindowSize);
  },
  beforeUnmount() {
    console.log("beforeUnmount()");
    window.removeEventListener('resize', this.updateWindowSize);
  },
  methods: {
    updateWindowSize() {
      this.windowWidth = window.innerWidth;
      this.windowHeight = window.innerHeight;
      this.checkIsMobile();
    },
    checkIsMobile() {
      this.isMobile = this.windowWidth <= 768;
    },
  },
};
</script>
<script setup>
import {
  onActivated,
  onBeforeMount,
  onBeforeUnmount,
  onBeforeUpdate,
  onDeactivated,
  onErrorCaptured,
  onMounted,
  onRenderTriggered,
  onScopeDispose,
  onServerPrefetch,
  onUnmounted,
  onUpdated,
} from 'vue';

onActivated(() => { console.log('onActivated() called'); });
onBeforeMount(() => { console.log(`onBeforeMount():`) })
onBeforeUnmount(() => { console.log('onBeforeUnmount() called'); });
onBeforeUpdate(() => { console.log(`onBeforeUpdate():`) })
onDeactivated(() => { console.log('onDeactivated() called'); });
onErrorCaptured((err, instance, info) => {
  console.log('onErrorCaptured() called');
  console.error(err);
  return false;
});
onMounted(() => { console.log(`onMounted():`) })
onRenderTriggered((e) => { console.log('onRenderTriggered() called', e); });
onUnmounted(() => { console.log(`onUnmounted():`) })
onUpdated(() => { console.log('onUpdated() called'); });
onScopeDispose(() => { console.log('onScopeDispose() called'); });
onServerPrefetch(() => { console.log('onServerPrefetch() called'); });
</script>
php artisan route:list
composer require illuminate/support
use Illuminate\Support\Facades\File;
use Illuminate\Support\Facades\Route;
use Inertia\Inertia;

function generateRoutes($basePath, $baseNamespace = 'Pages', $routePrefix = '/')
{
    $files = File::allFiles($basePath);

    foreach ($files as $file) {
        $relativePath = str_replace([$basePath, '.vue'], '', $file->getRelativePathname());
        $routeName = str_replace(DIRECTORY_SEPARATOR, '.', $relativePath);
        $routeUri = str_replace(DIRECTORY_SEPARATOR, '/', $relativePath);

        // Example: if file is `resources/js/Pages/Examples/layout-discord.vue`
        // $routeName = 'Examples.layout-discord';
        // $routeUri = 'examples/layout-discord'
        Route::get($routePrefix . $routeUri, function () use ($relativePath, $baseNamespace) {
            return Inertia::render($baseNamespace . str_replace('/', '\\', $relativePath));
        })->name($routeName);
    }
}

generateRoutes(resource_path('js/Pages'));
Local Mail Server for SMTP Testing
<script setup>
const theme = {
  "menu": {
    "background": 'black',
    "item": {
      "background": "green"
    },
    "subitem": {
      "background": "green"
    }
  }
}
</script>

<style scoped>
.menu {
  background-color: v-bind('theme.menu.background');
}
</style>
❯ pnpm add primevue @primevue/themes
❯ pnpm add primeicons
yarn global add @vue/cli # OR npm install -g @vue/cli
Create a new project
vue create my-project # OR vue ui
Vite is a build tool for web development that enables lightning-fast serving of your code thanks to its native ES module import approach.
Installation with npm:
npm init @vitejs/app <project-name>
cd <project-name>
npm install
npm run dev
Or with Yarn:
$ yarn create @vitejs/app <project-name>
$ cd <project-name>
$ yarn
$ yarn dev
If the project name contains spaces, errors can occur. In that case, the following command helps:
$ create-vite-app <project-name>
Description:
Group | Link Text | URL | Description | Details | Advantages | Disadvantages |
---|---|---|---|---|---|---|
UI Framework | Vuetify | https://next.vuetifyjs.com/ | Material design component framework for Vue 3. | Rich in features and components. | Highly customizable, extensive components, material design. | |
UI Framework | Quasar | https://quasar.dev/ | Build responsive websites, mobile, and desktop apps using a single codebase with Vue 3. | All-in-one framework for web, mobile, and desktop apps. | Cross-platform, fast development, rich ecosystem. | |
UI Framework | Element Plus | https://element-plus.org/ | Enterprise-ready UI component library for Vue 3. | Popular in the Chinese market, enterprise-friendly. | Well-documented, easy to use, comprehensive components. | |
UI Framework | Naive UI | https://www.naiveui.com/ | Minimalistic and customizable component library for Vue 3. | Lightweight and easy to integrate. | Customizable, lightweight, modern. | |
UI Framework | PrimeVue | https://primefaces.org/primevue/ | Rich set of customizable UI components for Vue 3. | Comes with many pre-built themes and components. | Wide variety of components, responsive, many themes. | |
UI Framework | Ant Design Vue | https://2x.antdv.com/ | Vue 3 implementation of the Ant Design UI library. | Well-suited for professional and enterprise-grade apps. | Clean, professional design, extensive components. | |
UI Framework | BootstrapVue 3 | https://bootstrap-vue.org/ | Bootstrap-based Vue 3 components. | Based on Bootstrap for familiarity. | Bootstrap ecosystem, responsive, familiar grid system. | Lacks some modern UI components compared to newer libraries. |
Routing | Vue Router | https://router.vuejs.org/ | Official Vue 3 router for single-page applications. | Powerful and flexible routing. | Seamless integration with Vue 3, dynamic routing, nested routes. | Requires setup for advanced features (SSR, lazy loading). |
State Management | Pinia | https://pinia.vuejs.org/ | Lightweight, intuitive state management library for Vue 3. | Vuex alternative with Composition API support. | Simple API, modular, Composition API support, easy to learn. | Limited ecosystem compared to Vuex. |
State Management | Vuex | https://vuex.vuejs.org/ | Official state management library for Vue.js, compatible with Vue 3. | Centralized state management for Vue apps. | Well-supported, battle-tested, great for large apps. | Can be complex for small applications, more boilerplate. |
Build Tool | Vite | https://vitejs.dev/ | Fast build tool with native support for Vue 3. | Modern alternative to Webpack, optimized for Vue 3. | Super fast builds, modern JavaScript support, HMR. | Still evolving, lacks plugins compared to Webpack. |
Build Tool | Vue CLI | https://cli.vuejs.org/ | CLI to scaffold and manage Vue.js applications, supports Vue 3. | Long-standing, mature build tool. | Easy to use, integrates well with Vue ecosystem, powerful plugins. | Slower build times compared to Vite. |
Dev Tools | Vue Devtools | https://devtools.vuejs.org/ | Browser extension for debugging Vue.js applications. | Essential for Vue development. | Powerful debugging, time-travel debugging, component inspection. | Can slow down large apps in development mode. |
Meta Framework | Nuxt 3 | https://v3.nuxtjs.org/ | Vue 3 meta-framework for SSR and static site generation. | Built on Vue 3, optimized for server-side rendering. | SSR, static site generation, auto-routing, great SEO support. | More complex setup, slower build times than SPAs. |
Utility Library | VueUse | https://vueuse.org/ | Collection of essential Vue 3 composition utilities. | Focused on utility functions for the Composition API. | Makes Vue Composition API easier, reusable functions. | Only useful for Composition API users, lacks official support. |
Data Fetching | Apollo Vue | https://apollo.vuejs.org/ | A Vue 3 integration for building GraphQL-powered applications. | Full-featured GraphQL client for Vue 3. | Great GraphQL support, works well with Vue, powerful querying. | Heavyweight, more setup required for small projects. |
Data Fetching | Vue Query | https://vue-query.vercel.app/ | Data-fetching and state management library, similar to React Query, for Vue 3. | Simplifies API data-fetching and caching. | Easy API, great for handling remote data, caching, and synchronization. | Less support for large data models compared to Vuex. |
Validation | Vuelidate | https://vuelidate-next.netlify.app/ | Validation library for Vue 3 with support for the Composition API. | Composition API-based validation. | Lightweight, easy to integrate, simple to use. | Not as feature-rich as some alternatives (like VeeValidate). |
Form Handling | FormKit | https://formkit.com/ | Robust form management and validation for Vue 3. | Advanced form management with full validation support. | Extensive features for forms, great validation handling. | Overkill for simple forms, can increase bundle size. |
UI Framework | Ionic Vue | https://ionicframework.com/docs/vue/ | Build cross-platform mobile apps with Vue 3 and Ionic. | Optimized for mobile development. | Cross-platform, mobile-first components, easy PWA integration. | Can feel bloated for web-only applications. |
UI Framework | Vue 3 Material | https://vuematerial.io/ | Material Design 3 component library for Vue 3. | Material Design components for Vue 3. | Simple to use, clean material design. | Fewer components compared to other material design libraries like Vuetify. |
UI Framework | Vuestic UI | https://vuestic.dev/ | UI library for building accessible, fully customizable interfaces with Vue 3. | Focused on accessibility and customization. | Highly customizable, lightweight, accessible out of the box. | Smaller community and ecosystem. |
UI Framework | DevExtreme Vue | https://js.devexpress.com/Overview/Vue/ | Enterprise-ready Vue 3 components for data-heavy applications. | Optimized for enterprise and data-heavy apps. | Great data components, enterprise-grade, responsive. | Commercial product, steeper learning curve. |
Testing | Vue Test Utils | https://test-utils.vuejs.org/ | Official unit testing library for Vue components. | Built for Vue component testing. | Official testing library, well-supported, integrates with Jest and Mocha. | Can be challenging for complex components. |
Testing | Cypress | https://www.cypress.io/ | End-to-end testing framework for web applications, supports Vue 3. | Easy-to-use end-to-end testing tool. | Real browser testing, powerful debugging tools, great for Vue 3 apps. | Requires real browser setup, slower than unit testing. |
Testing | Jest | https://jestjs.io/ | JavaScript testing framework with Vue 3 support via vue-jest . | Popular testing framework in JavaScript. | Fast, easy to configure, great Vue 3 support. | Configuration required for Vue 3 with vue-jest . |
Animation | GSAP Vue 3 | https://greensock.com/docs/v3/Installation?ref=platforms | Vue 3 integration for creating animations with GSAP (GreenSock Animation Platform). | Leading animation library with Vue 3 integration. | High-performance animations, extensive feature set, works well with Vue 3. | Can add to the complexity and size of your app if overused. |
Data Visualization | Vue Chart 3 | https://vuechartjs.org/ | Charting library for Vue 3 built on Chart.js. | Chart.js integration for Vue 3. | Easy to use, lightweight, built on the popular Chart.js. | Limited to what Chart.js supports, not as flexible as some other charting libraries. |
Testing | Vitest | https://vitest.dev/ | Vite-native unit testing framework with first-class Vue 3 support. | Fast, Jest-compatible testing for Vite projects. | Very fast, minimal configuration, Jest-compatible API. | Younger ecosystem than Jest. |
Create a file launch.json and add this to its configurations section:
"configurations": [
    {
        "name": "Streamlit",
        "type": "debugpy",
        "request": "launch",
        "module": "streamlit",
        "args": [
            "run",
            "${file}",
            "--server.port",
            "2000"
        ]
    }
]
pip install ollama
import ollama

response = ollama.chat(model='llama2', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response['message']['content'])
npm install ollama
import ollama from 'ollama'

const response = await ollama.chat({
  model: 'llama2',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)
Both libraries support Ollama’s full set of features. Here are some examples in Python:
from ollama import chat

messages = [{'role': 'user', 'content': 'Why is the sky blue?'}]

for chunk in chat('mistral', messages=messages, stream=True):
  print(chunk['message']['content'], end='', flush=True)
import ollama

with open('image.png', 'rb') as file:
  response = ollama.chat(
    model='llava',
    messages=[
      {
        'role': 'user',
        'content': 'What is strange about this image?',
        'images': [file.read()],
      },
    ],
  )
print(response['message']['content'])
import ollama

result = ollama.generate(
  model='stable-code',
  prompt='// A c function to reverse a string\n',
)
print(result['response'])
import ollama

modelfile = '''
FROM llama2
SYSTEM You are mario from super mario bros.
'''

ollama.create(model='example', modelfile=modelfile)
from ollama import Client

ollama = Client(host='my.ollama.host')
More examples are available in the GitHub repositories for the Python and JavaScript libraries.
OLLAMA_ORIGINS=https://webml-demo.vercel.app OLLAMA_HOST=127.0.0.1:11435 ollama serve
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
Install the Nvidia container toolkit.
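As a rough sketch — assuming a Debian/Ubuntu host on which Nvidia’s apt repository has already been configured (see Nvidia’s documentation for the repository setup) — the installation looks like this:
# install the toolkit and register it as a Docker runtime
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker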
Run Ollama inside a Docker container
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
Now you can run a model like Llama 2 inside the container.
docker exec -it ollama ollama run llama2
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama',  # required, but unused
)

response = client.chat.completions.create(
    model="llama2",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The LA Dodgers won in 2020."},
        {"role": "user", "content": "Where was it played?"},
    ]
)
print(response.choices[0].message.content)
from langchain_community.llms import Ollama

llm = Ollama(model="gemma2")
llm.invoke("Why is the sky blue?")
from llama_index.llms.ollama import Ollama

llm = Ollama(model="gemma2")
llm.complete("Why is the sky blue?")
ollama run wizard-math 'Expand the following expression: $7(3y+2)$'
Large Language Models (LLMs) have revolutionized the field of Natural Language Processing (NLP) by providing powerful capabilities for understanding and generating human language. Open-source LLMs have democratized access to these technologies, allowing developers and researchers to innovate and apply these models in various domains. In this blog post, we will explore Ollama, a framework for working with LLMs, and demonstrate how to load webpages, parse them, build embeddings, and query the content using Ollama.
LLMs are neural networks trained on vast amounts of text data to understand and generate human language. They can perform tasks such as translation, summarization, question answering, and more. Popular LLMs include GPT-3, BERT, and their open-source counterparts like GPT-Neo and BERT variants. These models have diverse applications, from chatbots to automated content generation.
Ollama is an open-source framework designed to simplify running LLMs locally. It provides tools for downloading, customizing (via Modelfiles), and serving models, making it easier to integrate these powerful models into your projects. With Ollama, you can leverage the capabilities of LLMs to build intelligent applications that understand and generate human language.
The following example from the ollama documentation demonstrates how to use the LangChain framework in conjunction with the Ollama library to load a web page, process its content, create embeddings, and perform a query on the processed data. Below is a detailed explanation of the script’s functionality and the technologies used.
Imports and Setup
The script starts by importing the necessary modules and libraries, including sys, Ollama, WebBaseLoader, RecursiveCharacterTextSplitter, OllamaEmbeddings, Chroma, and RetrievalQA.
from langchain_community.llms import Ollama
from langchain_community.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA
Loading the Web Page
The script uses WebBaseLoader to load the content of a webpage. In this case, it loads the text of “The Odyssey” by Homer from Project Gutenberg.
print("- get web page") loader = WebBaseLoader("https://www.gutenberg.org/files/1727/1727-h/1727-h.htm") data = loader.load()
Splitting the Document
Due to the large size of the document, it is split into smaller chunks using RecursiveCharacterTextSplitter. This ensures that the text can be processed more efficiently.
print("- split documents") text_splitter=RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0) all_splits = text_splitter.split_documents(data)
Creating Embeddings and Storing Them
The script creates embeddings for the text chunks using the Ollama library and stores them in ChromaDB, a vector database. This step involves instantiating an embedding model (nomic-embed-text) and using it to generate embeddings for each text chunk.
print("- create vectorstore") oembed = OllamaEmbeddings(base_url="http://localhost:11434", model="nomic-embed-text") vectorstore = Chroma.from_documents(documents=all_splits, embedding=oembed)
Performing a Similarity Search
A question is formulated, and the script uses the vector database to perform a similarity search. It retrieves chunks of text that are semantically similar to the question.
print("- ask for similarities") question="Who is Neleus and who is in Neleus' family?" docs = vectorstore.similarity_search(question) nrofdocs=len(docs) print(f"{question}: {nrofdocs}")
Creating an Ollama Instance and Defining a Retrieval Chain
The script initializes an instance of the Ollama model and sets up a retrieval-based question-answering (QA) chain. This chain is used to process the question and retrieve the relevant parts of the document.
print("- create ollama instance") ollama = Ollama( base_url='http://localhost:11434', model="llama3" ) print("- get qachain") qachain=RetrievalQA.from_chain_type(ollama, retriever=vectorstore.as_retriever())
Running the Query
Finally, the script invokes the QA chain with the question and prints the result.
print("- run query") res = qachain.invoke({"query": question}) print(res['result'])
Now let’s look at the impressive result:
In this example, we are going to use LangChain and Ollama to learn about something just a touch more recent. In August 2023, there was a series of wildfires on Maui. There is no way an LLM trained before that time can know about this, since their training data would not include anything as recent as that.
So we can find the Wikipedia article about the fires and ask questions about the contents.
url = "https://en.wikipedia.org/wiki/2023_Hawaii_wildfires"
question="When was Hawaii's request for a major disaster declaration approved?"
With open source tools, it is easy to analyze images.
Just install Ollama, pull the llava model, and run this command:
❯ ollama run llava:latest "Beschreibe das Bild <path to image>"
Try this image: Statue of Liberty
❯ ollama run llava:latest "Beschreibe das Bild /tmp/statue-liberty-liberty-island-new-york.jpg"
Added image '/tmp/statue-liberty-liberty-island-new-york.jpg'

The image shows the Statue of Liberty, an iconic landmark in New York Harbor. This neoclassical statue is a symbol of freedom and democracy, and it has become a universal symbol of the United States. The statue is situated on Liberty Island, which is accessible via ferries from Manhattan. In the background, you can see a clear sky with some clouds, indicating good weather. The surrounding area appears to be lush with greenery, suggesting that the photo was taken in spring or summer when vegetation is abundant. There are also people visible at the base of the statue, which gives a sense of scale and demonstrates the size of the monument.
The GitHub Copilot extension is an AI pair programmer tool that helps you write code faster and smarter.
We want to use this feature with Open Source Tools:
Download Ollama and install it.
To start Ollama, you have two possibilities:
On macOS, you can simply start the Ollama application.
You should then see the running Ollama instance in the menu bar.
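Alternatively — assuming the ollama command-line tool is on your PATH — you can start the server from a terminal:
❯ ollama serve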
Then pull a model:
ollama pull phi3