Developer Blog

Tips and tricks for developers and IT enthusiasts

uv – The new Python Package Manager

A Developer’s Guide to Simplifying Environment Management

As developers, managing virtual environments is a crucial part of our workflow. With Python projects constantly shifting between dependencies and Python versions, using tools that streamline this process is key. Enter uv: a tool designed to simplify the creation and management of virtual environments and to manage Python packages and projects.

In this post, I’ll introduce you to uv, walk you through its installation, and provide some tips to help you get started.

What is uv?

uv is an extremely fast Python package and project manager, written in Rust. It is a powerful tool that allows developers to manage Python virtual environments effortlessly. It provides functionality to create and switch between virtual environments in a standardized way.

By using uv, you can ensure that your virtual environments are consistently created and activated across different projects without the need to manually deal with multiple commands.

Why Use uv?

Managing Python projects often involves juggling various dependencies, versions, and configurations. Without proper tooling, this can become a headache. uv helps by:

  • Standardizing virtual environments across projects, ensuring consistency.
  • Simplifying project setup, requiring fewer manual steps to get your environment ready.
  • Minimizing errors by automating activation and management of virtual environments.

Hint

In our examples, each command is preceded by our shell prompt ❯.

Don't type the ❯ when you enter the command. So, when you see

❯  uv init

just type

uv init

In addition, when we activate the virtual environment, you will see a changed prompt:

✦ ❯ 

Installation and Setup

Getting started with uv is easy. Below are the steps for installing and setting up uv for your Python projects.

1. Install uv

On macOS or Linux, you can install uv with the installer script from the website:

❯ curl -LsSf https://astral.sh/uv/install.sh | sh

Alternatively, you can install uv using pip. You’ll need to have Python 3.8+ installed on your system.

❯ pip install uv
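
To verify the installation, print the installed version:

❯ uv --version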

2. Create a New Virtual Environment

Once installed, you can use uv to create a virtual environment for your project. Simply navigate to your project directory and run:

❯ uv venv

This command will create a new virtual environment inside the .venv folder within your project.

3. Activate the Virtual Environment

After creating the virtual environment, activate it with the standard activation script:

❯ source .venv/bin/activate

On Windows, the script lives at .venv\Scripts\activate. Alternatively, you can skip activation entirely: uv run <command> executes any command inside the project's environment for you.

4. Install Your Dependencies

Once the environment exists, you can install your project's dependencies through uv's pip interface:

❯ uv pip install -r requirements.txt

uv ensures that your dependencies are installed in the correct environment without any extra hassle. Note that environments created with uv venv do not contain pip itself, so use uv pip rather than plain pip.

You can also switch to a pyproject.toml file to manage your dependencies.

First you have to initialize the project:

❯ uv init

Then, add the dependency:

❯ uv add requests
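
To check that the dependency landed in the project environment, you can run Python through uv (the one-liner is just an illustration; any script in your project works the same way):

❯ uv run python -c "import requests; print(requests.__version__)"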

Tips with virtual environments

When you create a virtual environment, its bin folder should be in your PATH.

Normally this is .venv/bin when you create the environment with uv venv. This path is prepended to your $PATH variable when you source the activation script.

If you want to use a different folder, you must set the variable UV_PROJECT_ENVIRONMENT to that path:

❯ mkdir playground
❯ cd playground
❯ /usr/local/bin/python3.12 -m venv .venv/python/3.12
❯ . .venv/python/3.12/bin/activate

✦ ❯ which python
.../Playground/.venv/python/3.12/bin/python

✦ ❯ export UV_PROJECT_ENVIRONMENT=$PWD/.venv/python/3.12
✦ ❯ pip install uv
Collecting uv
  Downloading uv-0.4.25-py3-none-macosx_10_12_x86_64.whl.metadata (11 kB)
Downloading uv-0.4.25-py3-none-macosx_10_12_x86_64.whl (13.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.2/13.2 MB 16.5 MB/s eta 0:00:00
Installing collected packages: uv
Successfully installed uv-0.4.25

✦ ❯ which uv
.../Playground/.venv/python/3.12/bin/uv
✦ ❯ uv init
Initialized project `playground`

So, with the default settings, you will get a warning because uv expects the virtual environment in .venv.

✦ ❯ uv add requests
warning: `VIRTUAL_ENV=.venv/python/3.12` does not match the project environment path `.../.venv/python/3.12` and will be ignored

Use the environment variable to tell uv where the virtual environment is installed.

✦ ❯ export UV_PROJECT_ENVIRONMENT=$PWD/.venv/python/3.12

✦ ❯ uv add requests
Resolved 6 packages in 0.42ms
Installed 5 packages in 8ms
 + certifi==2024.8.30
 + charset-normalizer==3.4.0
 + idna==3.10
 + requests==2.32.3
 + urllib3==2.2.3

Tip

Use direnv to automatically set your environment:

  • Create a .envrc file containing:

. .venv/python/3.12/bin/activate
export UV_PROJECT_ENVIRONMENT=$PWD/.venv/python/3.12

  • Allow the .envrc file:

✦ ❯ direnv allow

Common uv Commands

Here are a few more useful uv commands to keep in mind:

  • Deactivate the environment: deactivate (the standard virtual environment command; uv has no deactivate of its own)
  • Remove a dependency from the project: uv remove <package>
  • Sync the environment with your pyproject.toml and lockfile: uv sync
  • List installed packages in the environment: uv pip list

Tips for Using uv Effectively

  1. Consistent Environment Names: By default, uv uses .venv as the folder name for virtual environments. Stick to this default to keep things consistent across your projects.
  2. Integrate uv into your CI/CD pipeline: Ensure that your automated build tools use the same virtual environment setup by adding uv commands to your pipeline scripts (see the sketch after this list).
  3. Use uv in combination with pyproject.toml: If your project uses pyproject.toml for dependency management, uv integrates seamlessly and keeps your environment up to date via uv sync.
  4. Quick Switching: If you manage multiple Python projects, uv run executes commands in the right project environment, so you can switch between projects without worrying about which virtual environment is currently active.
  5. Automate Activation: Combine uv with direnv or add an activation hook in your shell to automatically activate the correct environment when you enter a project folder.
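
Tip 2 in practice: a minimal pipeline sketch (assuming a Linux CI runner with curl available; the test command pytest is an assumption, adapt it to your project):

❯ curl -LsSf https://astral.sh/uv/install.sh | sh
❯ uv sync
❯ uv run pytest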

Cheatsheet

uv Command Cheatsheet

General Commands

uv init – Initializes a new project with a pyproject.toml.
uv venv – Creates a virtual environment in the .venv directory.
uv sync – Installs the project's dependencies from pyproject.toml/uv.lock into the environment.
uv lock – Creates or updates the uv.lock lockfile.
uv run [command] – Runs a command within the project's virtual environment.
uv python [subcommand] – Manages Python installations (install, list, pin, ...).

Working with Dependencies

uv add [package] – Adds a dependency to pyproject.toml and installs it.
uv remove [package] – Removes a dependency from the project.
uv pip install [package] – Installs a Python package into the environment.
uv pip uninstall [package] – Uninstalls a Python package from the environment.
uv pip freeze – Outputs a list of installed packages and their versions.
uv pip list – Lists all installed packages in the environment.
uv pip show [package] – Shows details about a specific installed package.

Environment Management

uv venv [path] – Creates a virtual environment (default: .venv).
source .venv/bin/activate – Activates the environment (standard activation script).
deactivate – Deactivates the active environment.

Cleanup and Miscellaneous

uv cache clean – Clears uv's download and build cache.
uv self update – Upgrades uv itself to the latest version (when installed via the standalone installer).

Using Python and Pip Inside Virtual Environment

uv run python – Runs Python within the project's virtual environment.
uv pip [command] – Runs pip-style commands against the project's environment.

Helper Commands

uv --version – Shows the installed uv version.
uv help – Displays help about available commands.

More to read

Here is a short list of websites with documentation or other information about uv:

  • uv documentation: https://docs.astral.sh/uv/
  • uv on GitHub: https://github.com/astral-sh/uv

Vue – Cookbook

Responsive Design

Reacting to Size Changes

<script>
export default {
    data() {
        return {
            isMobile: false,
            isDesktop: false,

            windowWidth: window.innerWidth,
            windowHeight: window.innerHeight,
        };
    },

    created() {
        this.updateWindowSize();
        window.addEventListener('resize', this.updateWindowSize);
    },

    // Lifecycle hook at the top level of the component (not inside
    // `methods`, where it would never be called).
    beforeUnmount() {
        window.removeEventListener('resize', this.updateWindowSize);
    },

    methods: {
        updateWindowSize() {
            this.windowWidth = window.innerWidth;
            this.windowHeight = window.innerHeight;
            this.checkIsMobile();
        },

        checkIsMobile() {
            // Treat viewports up to 768px wide as mobile.
            this.isMobile = this.windowWidth <= 768;
        },
    },
};
</script>

Debugging

<script setup>
import {
    onActivated,
    onBeforeMount,
    onBeforeUnmount,
    onBeforeUpdate,
    /*  onCreated, */
    onDeactivated,
    onErrorCaptured,
    onMounted,
    /*  onRenderTracked,*/
    onRenderTriggered,
    onScopeDispose,
    onServerPrefetch,
    onUnmounted,
    onUpdated,
    /*  onWatcherCleanup, */

} from 'vue';

onActivated(() => { console.log('onActivated() called'); });
onBeforeMount(() => { console.log(`onBeforeMount():`) })
onBeforeUnmount(() => { console.log('onBeforeUnmount() called'); });
onBeforeUpdate(() => { console.log(`onBeforeUpdate():`) })
onDeactivated(() => { console.log('onDeactivated() called'); });
onErrorCaptured((err, instance, info) => { console.log('onErrorCaptured() called'); console.error(err); return false; });
onMounted(() => { console.log(`onMounted():`) })
onRenderTriggered((e) => { console.log('onRenderTriggered() called', e); });
onUnmounted(() => { console.log(`onUnmounted():`) })
onUpdated(() => { console.log('onUpdated() called'); });
onScopeDispose(() => { console.log('onScopeDispose() called'); });
onServerPrefetch(() => { console.log('onServerPrefetch() called'); });

</script>

Laravel | Cookbook

Routing

Show all routes

php artisan route:list

Generate routes dynamically

composer require illuminate/support

use Illuminate\Support\Facades\File;
use Illuminate\Support\Facades\Route;
use Inertia\Inertia;

function generateRoutes($basePath, $baseNamespace = 'Pages', $routePrefix = '/')
{
    $files = File::allFiles($basePath);

    foreach ($files as $file) {
        $relativePath = str_replace([$basePath, '.vue'], '', $file->getRelativePathname());
        $routeName = str_replace(DIRECTORY_SEPARATOR, '.', $relativePath);
        $routeUri = str_replace(DIRECTORY_SEPARATOR, '/', $relativePath);

        // Example: if file is `resources/js/Pages/Examples/layout-discord.vue`
        // $routeName = 'Examples.layout-discord';
        // $routeUri = 'examples/layout-discord'

        Route::get($routePrefix . $routeUri, function () use ($relativePath, $baseNamespace) {
            return Inertia::render($baseNamespace . str_replace('/', '\\', $relativePath));
        })->name($routeName);
    }
}

generateRoutes(resource_path('js/Pages'));

Mail / SMTP

Local mail server for SMTP testing

MailHog: Web and API based SMTP testing

Vue3 and Laravel + Inertia | Cookbook

General

Vue and CSS

Styling with CSS Variables

<script setup>
const theme = {
    "menu": {
        "background": 'black',
        "item": {
            "background": "green"
        },
        "subitem": {
            "background": "green"
        }
    }
}
</script>
<style scoped>
.menu {
    background-color: v-bind('theme.menu.background');
}
</style>

Using PrimeVue

Installation

❯ pnpm add primevue @primevue/themes
❯ pnpm add primeicons

Vue3 | Getting Started

General

Installation

Installation with the vue-cli

yarn global add @vue/cli
# OR
npm install -g @vue/cli

Create a new project

vue create my-project
# OR
vue ui

Installation with Vite

Vite

Vite is a build tool for web development that, thanks to its native ES module import approach, serves code lightning-fast.

Installation with npm:

npm init @vitejs/app <project-name>
cd <project-name>
npm install
npm run dev

Or with Yarn:

$ yarn create @vitejs/app <project-name>
$ cd <project-name>
$ yarn
$ yarn dev

If the project name contains spaces, errors can occur. In that case, the following command helps:

$ create-vite-app <project-name>

Vue Frameworks

| Group | Name | URL | Description | Details | Advantages | Disadvantages |
| --- | --- | --- | --- | --- | --- | --- |
| UI Framework | Vuetify | https://next.vuetifyjs.com/ | Material design component framework for Vue 3. | Rich in features and components. | Highly customizable, extensive components, material design. | |
| UI Framework | Quasar | https://quasar.dev/ | Build responsive websites, mobile, and desktop apps using a single codebase with Vue 3. | All-in-one framework for web, mobile, and desktop apps. | Cross-platform, fast development, rich ecosystem. | |
| UI Framework | Element Plus | https://element-plus.org/ | Enterprise-ready UI component library for Vue 3. | Popular in the Chinese market, enterprise-friendly. | Well-documented, easy to use, comprehensive components. | |
| UI Framework | Naive UI | https://www.naiveui.com/ | Minimalistic and customizable component library for Vue 3. | Lightweight and easy to integrate. | Customizable, lightweight, modern. | |
| UI Framework | PrimeVue | https://primefaces.org/primevue/ | Rich set of customizable UI components for Vue 3. | Comes with many pre-built themes and components. | Wide variety of components, responsive, many themes. | |
| UI Framework | Ant Design Vue | https://2x.antdv.com/ | Vue 3 implementation of the Ant Design UI library. | Well-suited for professional and enterprise-grade apps. | Clean, professional design, extensive components. | |
| UI Framework | BootstrapVue 3 | https://bootstrap-vue.org/ | Bootstrap-based Vue 3 components. | Based on Bootstrap for familiarity. | Bootstrap ecosystem, responsive, familiar grid system. | Lacks some modern UI components compared to newer libraries. |
| Routing | Vue Router | https://router.vuejs.org/ | Official Vue 3 router for single-page applications. | Powerful and flexible routing. | Seamless integration with Vue 3, dynamic routing, nested routes. | Requires setup for advanced features (SSR, lazy loading). |
| State Management | Pinia | https://pinia.vuejs.org/ | Lightweight, intuitive state management library for Vue 3. | Vuex alternative with Composition API support. | Simple API, modular, Composition API support, easy to learn. | Limited ecosystem compared to Vuex. |
| State Management | Vuex | https://vuex.vuejs.org/ | Official state management library for Vue.js, compatible with Vue 3. | Centralized state management for Vue apps. | Well-supported, battle-tested, great for large apps. | Can be complex for small applications, more boilerplate. |
| Build Tool | Vite | https://vitejs.dev/ | Fast build tool with native support for Vue 3. | Modern alternative to Webpack, optimized for Vue 3. | Super fast builds, modern JavaScript support, HMR. | Still evolving, lacks plugins compared to Webpack. |
| Build Tool | Vue CLI | https://cli.vuejs.org/ | CLI to scaffold and manage Vue.js applications, supports Vue 3. | Long-standing, mature build tool. | Easy to use, integrates well with Vue ecosystem, powerful plugins. | Slower build times compared to Vite. |
| Dev Tools | Vue Devtools | https://devtools.vuejs.org/ | Browser extension for debugging Vue.js applications. | Essential for Vue development. | Powerful debugging, time-travel debugging, component inspection. | Can slow down large apps in development mode. |
| Meta Framework | Nuxt 3 | https://v3.nuxtjs.org/ | Vue 3 meta-framework for SSR and static site generation. | Built on Vue 3, optimized for server-side rendering. | SSR, static site generation, auto-routing, great SEO support. | More complex setup, slower build times than SPAs. |
| Utility Library | VueUse | https://vueuse.org/ | Collection of essential Vue 3 composition utilities. | Focused on utility functions for the Composition API. | Makes Vue Composition API easier, reusable functions. | Only useful for Composition API users, lacks official support. |
| Data Fetching | Apollo Vue | https://apollo.vuejs.org/ | A Vue 3 integration for building GraphQL-powered applications. | Full-featured GraphQL client for Vue 3. | Great GraphQL support, works well with Vue, powerful querying. | Heavyweight, more setup required for small projects. |
| Data Fetching | Vue Query | https://vue-query.vercel.app/ | Data-fetching and state management library, similar to React Query, for Vue 3. | Simplifies API data-fetching and caching. | Easy API, great for handling remote data, caching, and synchronization. | Less support for large data models compared to Vuex. |
| Validation | Vuelidate | https://vuelidate-next.netlify.app/ | Validation library for Vue 3 with support for the Composition API. | Composition API-based validation. | Lightweight, easy to integrate, simple to use. | Not as feature-rich as some alternatives (like VeeValidate). |
| Form Handling | FormKit | https://formkit.com/ | Robust form management and validation for Vue 3. | Advanced form management with full validation support. | Extensive features for forms, great validation handling. | Overkill for simple forms, can increase bundle size. |
| UI Framework | Ionic Vue | https://ionicframework.com/docs/vue/ | Build cross-platform mobile apps with Vue 3 and Ionic. | Optimized for mobile development. | Cross-platform, mobile-first components, easy PWA integration. | Can feel bloated for web-only applications. |
| UI Framework | Vue 3 Material | https://vuematerial.io/ | Material Design 3 component library for Vue 3. | Material Design components for Vue 3. | Simple to use, clean material design. | Fewer components compared to other material design libraries like Vuetify. |
| UI Framework | Vuestic UI | https://vuestic.dev/ | UI library for building accessible, fully customizable interfaces with Vue 3. | Focused on accessibility and customization. | Highly customizable, lightweight, accessible out of the box. | Smaller community and ecosystem. |
| UI Framework | DevExtreme Vue | https://js.devexpress.com/Overview/Vue/ | Enterprise-ready Vue 3 components for data-heavy applications. | Optimized for enterprise and data-heavy apps. | Great data components, enterprise-grade, responsive. | Commercial product, steeper learning curve. |
| Testing | Vue Test Utils | https://test-utils.vuejs.org/ | Official unit testing library for Vue components. | Built for Vue component testing. | Official testing library, well-supported, integrates with Jest and Mocha. | Can be challenging for complex components. |
| Testing | Cypress | https://www.cypress.io/ | End-to-end testing framework for web applications, supports Vue 3. | Easy-to-use end-to-end testing tool. | Real browser testing, powerful debugging tools, great for Vue 3 apps. | Requires real browser setup, slower than unit testing. |
| Testing | Jest | https://jestjs.io/ | JavaScript testing framework with Vue 3 support via vue-jest. | Popular testing framework in JavaScript. | Fast, easy to configure, great Vue 3 support. | Configuration required for Vue 3 with vue-jest. |
| Animation | GSAP Vue 3 | https://greensock.com/docs/v3/Installation?ref=platforms | Vue 3 integration for creating animations with GSAP (GreenSock Animation Platform). | Leading animation library with Vue 3 integration. | High-performance animations, extensive feature set, works well with Vue 3. | Can add to the complexity and size of your app if overused. |
| Data Visualization | Vue Chart 3 | https://vuechartjs.org/ | Charting library for Vue 3 built on Chart.js. | Chart.js integration for Vue 3. | Easy to use, lightweight, built on the popular Chart.js. | Limited to what Chart.js supports, not as flexible as some other charting libraries. |
| Testing | Vitest | https://vitest | | | | |

Ollama Cookbook

Setting up VS Code Environment

Debugging

Create a file launch.json and add this to the configurations section:

"configurations": [
        {
            "name": "Streamlit",
            "type": "debugpy",
            "request": "launch",
            "module": "streamlit",
            "args": [
                "run",
                "${file}",
                "--server.port",
                "2000"
            ]
        }
    ]

Ollama and Python

Examples

Getting Started

Python

pip install ollama

import ollama

response = ollama.chat(model='llama2', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])

print(response['message']['content'])

JavaScript

npm install ollama

import ollama from 'ollama'

const response = await ollama.chat({
  model: 'llama2',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)

Use cases

Both libraries support Ollama’s full set of features. Here are some examples in Python:

Streaming

from ollama import chat

for chunk in chat('mistral', messages=messages, stream=True):
    print(chunk['message']['content'], end='', flush=True)

Multi-modal

with open('image.png', 'rb') as file:
  response = ollama.chat(
    model='llava',
    messages=[
      {
        'role': 'user',
        'content': 'What is strange about this image?',
        'images': [file.read()],
      },
    ],
  )

print(response['message']['content'])

Text Completion

result = ollama.generate(
  model='stable-code',
  prompt='// A c function to reverse a string\n',
)

print(result['response'])

Creating custom models

modelfile='''
FROM llama2
SYSTEM You are mario from super mario bros.
'''

ollama.create(model='example', modelfile=modelfile)
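
The new model can then be used like any other model (a short sketch; 'example' is the name from the create call above):

import ollama

response = ollama.chat(
    model='example',
    messages=[{'role': 'user', 'content': 'Who are you?'}],
)
print(response['message']['content'])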

Custom client

from ollama import Client

ollama = Client(host='my.ollama.host')

More examples are available in the GitHub repositories for the Python and JavaScript libraries.

Tips and Tricks

ollama serve

The server is configured via environment variables, for example OLLAMA_ORIGINS (additional allowed browser origins) and OLLAMA_HOST (bind address and port):

OLLAMA_ORIGINS=https://webml-demo.vercel.app
OLLAMA_HOST=127.0.0.1:11435 ollama serve

Docker

Run Ollama in Docker container

CPU only

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

Nvidia GPU

Install the Nvidia container toolkit.
Run Ollama inside a Docker container

docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

Run a model

Now you can run a model like Llama 2 inside the container.

docker exec -it ollama ollama run llama2
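
Once the container is running, the Ollama API is reachable on port 11434. A quick smoke test with curl (assuming the llama2 model was pulled inside the container as above):

curl http://localhost:11434/api/generate -d '{"model": "llama2", "prompt": "Why is the sky blue?"}'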

OpenAI

OpenAI Compatibility

from openai import OpenAI

client = OpenAI(
    base_url = 'http://localhost:11434/v1',
    api_key='ollama', # required, but unused
)

response = client.chat.completions.create(
  model="llama2",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The LA Dodgers won in 2020."},
    {"role": "user", "content": "Where was it played?"}
  ]
)

print(response.choices[0].message.content)
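
The OpenAI-compatible endpoint also supports streaming; a small sketch using the same client as above:

stream = client.chat.completions.create(
  model="llama2",
  messages=[{"role": "user", "content": "Why is the sky blue?"}],
  stream=True,
)
for chunk in stream:
  # the content delta can be None on the final chunk
  print(chunk.choices[0].delta.content or "", end="", flush=True)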

Using Streamlit

LangChain

from langchain_community.llms import Ollama
llm = Ollama(model="gemma2")
llm.invoke("Why is the sky blue?")

LlamaIndex

from llama_index.llms.ollama import Ollama

llm = Ollama(model="gemma2")
llm.complete("Why is the sky blue?")

LLMs and Models by Example

wizard-math

ollama run wizard-math 'Expand the following expression: $7(3y+2)$'

Daily AI: Analyse WebPages with AI

Introduction

Large Language Models (LLMs) have revolutionized the field of Natural Language Processing (NLP) by providing powerful capabilities for understanding and generating human language. Open-source LLMs have democratized access to these technologies, allowing developers and researchers to innovate and apply these models in various domains. In this blog post, we will explore Ollama, a framework for working with LLMs, and demonstrate how to load webpages, parse them, build embeddings, and query the content using Ollama.

Understanding Large Language Models (LLMs)

LLMs are neural networks trained on vast amounts of text data to understand and generate human language. They can perform tasks such as translation, summarization, question answering, and more. Popular LLMs include GPT-3, BERT, and their open-source counterparts like GPT-Neo and BERT variants. These models have diverse applications, from chatbots to automated content generation.

Introducing Ollama

Ollama is an open-source framework designed to simplify the use of LLMs in various applications. It provides tools for training, fine-tuning, and deploying LLMs, making it easier to integrate these powerful models into your projects. With Ollama, you can leverage the capabilities of LLMs to build intelligent applications that understand and generate human language.

Example

The following example from the ollama documentation demonstrates how to use the LangChain framework in conjunction with the Ollama library to load a web page, process its content, create embeddings, and perform a query on the processed data. Below is a detailed explanation of the script’s functionality and the technologies used.

Technologies Used

  1. LangChain: A framework for building applications powered by large language models (LLMs). It provides tools for loading documents, splitting text, creating embeddings, and querying data.
  2. Ollama: A library for working with LLMs and embeddings. In this script, it’s used to generate embeddings for text data.
  3. BeautifulSoup (bs4): A library used for parsing HTML and XML documents. It’s essential for loading and processing web content.
  4. ChromaDB: A vector database used for storing and querying embeddings. It allows efficient similarity searches.

Code Breakdown

Imports and Setup

The script starts by importing the necessary modules and libraries, including sys, Ollama, WebBaseLoader, RecursiveCharacterTextSplitter, OllamaEmbeddings, Chroma, and RetrievalQA.

from langchain_community.llms import Ollama

from langchain_community.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA

Loading the Web Page

The script uses WebBaseLoader to load the content of a webpage. In this case, it loads the text of “The Odyssey” by Homer from Project Gutenberg.

print("- get web page")

loader = WebBaseLoader("https://www.gutenberg.org/files/1727/1727-h/1727-h.htm")
data = loader.load()

Splitting the Document

Due to the large size of the document, it is split into smaller chunks using RecursiveCharacterTextSplitter. This ensures that the text can be processed more efficiently.

print("- split documents")

text_splitter=RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)

Creating Embeddings and Storing Them

The script creates embeddings for the text chunks using the Ollama library and stores them in ChromaDB, a vector database. This step involves instantiating an embedding model (nomic-embed-text) and using it to generate embeddings for each text chunk.

print("- create vectorstore")

oembed = OllamaEmbeddings(base_url="http://localhost:11434", model="nomic-embed-text")
vectorstore = Chroma.from_documents(documents=all_splits, embedding=oembed)

Performing a Similarity Search

A question is formulated, and the script uses the vector database to perform a similarity search. It retrieves chunks of text that are semantically similar to the question.

print("- ask for similarities")

question="Who is Neleus and who is in Neleus' family?"
docs = vectorstore.similarity_search(question)
nrofdocs=len(docs)
print(f"{question}: {nrofdocs}")

Creating an Ollama Instance and Defining a Retrieval Chain

The script initializes an instance of the Ollama model and sets up a retrieval-based question-answering (QA) chain. This chain is used to process the question and retrieve the relevant parts of the document.

print("- create ollama instance")
ollama = Ollama(
    base_url='http://localhost:11434',
    model="llama3"
)

print("- get qachain")
qachain=RetrievalQA.from_chain_type(ollama, retriever=vectorstore.as_retriever())

Running the Query

Finally, the script invokes the QA chain with the question and prints the result.

print("- run query")
res = qachain.invoke({"query": question})

print(res['result'])

Result

Now let's look at the impressive result:

Try another example: ask a Wikipedia page

In this example, we are going to use LangChain and Ollama to learn about something just a touch more recent. In August 2023, there was a series of wildfires on Maui. There is no way an LLM trained before that time can know about this, since their training data would not include anything as recent as that.

So we can find the Wikipedia article about the fires and ask questions about the contents.

url = "https://en.wikipedia.org/wiki/2023_Hawaii_wildfires"
question="When was Hawaii's request for a major disaster declaration approved?"

Daily AI: Analyse Images with AI

General

With open source tools, it is easy to analyse images.

Just install Ollama, pull the llava model and run this command:

❯ ollama run llava:latest "Describe the image <path to image>"

Try this image: Statue of Liberty

❯ ollama run llava:latest "Beschreibe das Bild /tmp/statue-liberty-liberty-island-new-york.jpg"
Added image '/tmp/statue-liberty-liberty-island-new-york.jpg'
The image shows the Statue of Liberty, an iconic landmark in New York Harbor. This neoclassical statue is a symbol of freedom and democracy, and it has become a universal symbol of the United States. The statue is situated on Liberty Island, which is accessible via ferries from Manhattan.

In the background, you can see a clear sky with some clouds, indicating good weather. The surrounding area appears to be lush with greenery, suggesting that the photo was taken in spring or summer when vegetation is abundant. There are also people visible at the base of the statue, which gives a sense of scale and demonstrates the size of the monument.
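
The same analysis can be scripted with the ollama Python library (a small sketch; the image path is the one from the session above):

import ollama

response = ollama.chat(
    model='llava:latest',
    messages=[{
        'role': 'user',
        'content': 'Describe the image',
        'images': ['/tmp/statue-liberty-liberty-island-new-york.jpg'],
    }],
)
print(response['message']['content'])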

Add CoPilot functionality to VSCode with Open Source tools

Introduction

The GitHub Copilot extension is an AI pair programmer tool that helps you write code faster and smarter. 

We want to use this feature with Open Source Tools:

Setup

Install Ollama

Download Ollama and install it.

To start Ollama, you have two possibilities:

  • From the command line: run ollama serve
  • Using the icon from the installation: on macOS, start the Ollama app from the Applications folder

You should then see the running Ollama instance in the menu bar.

Pull Phi3 Model

Run

ollama pull phi3

Install VS Code

Install VS Code Extension Continue

Start Model

Configure VS Code Extension
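
Continue reads its model list from a config file (typically ~/.continue/config.json). Below is a minimal sketch pointing the extension at the local phi3 model; the exact keys vary between Continue versions, so treat this as an assumption to check against the extension's documentation:

{
  "models": [
    {
      "title": "phi3 (Ollama)",
      "provider": "ollama",
      "model": "phi3"
    }
  ]
}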

Ollama | Create a ChatGPT Clone with Ollama and HyperDiv

In this blog post, we’ll explore how to create a ChatGPT-like application using Hyperdiv and Ollama. Hyperdiv provides a flexible framework for building web applications, while Ollama offers powerful local machine learning capabilities.

We will start with the Hyperdiv GPT-chatbot app template and adapt it to leverage Ollama, which runs locally. This guide will walk you through the necessary steps and code changes to integrate these technologies effectively.

TL;DR

The complete code for this tutorial is here.

Step 1: Setting Up Your Environment

Install Ollama

Download Ollama from https://ollama.com/download.

Install (Windows) or unpack (macOS) the downloaded file. This gets you an Ollama app (which allows you to start the Ollama service) and an Ollama command line.

Start the Ollama service by starting the Ollama app.

On macOS, you will see an icon for the Ollama service in the menu bar.

Then, open a terminal and type ollama list. This command displays the installed models.

ollama list

To install a model, type

ollama pull llama3

For our ChatGPT Clone, we will use the llama3 model.

If you want to use another model, then search here: https://ollama.com/library

Clone the HyperDiv Examples Repository

Start by cloning or downloading the Hyperdiv GPT-chatbot app. This app provides a basic structure for a chatbot application, which we will modify to work with Ollama.

Go to your desired local folder to store the sources and type

git clone https://github.com/hyperdiv/hyperdiv-apps

Then, go to the folder hyperdiv-apps/gpt-chatbot

Adapt app to use Ollama backend

First, we will create an ollama client to process all requests:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",
)

Then we modify the request function to use this client

We change

response = openai.ChatCompletion.create(

to

response = client.chat.completions.create(

The next step is changing how we access the response fields. With the old OpenAI library, the response data is a dictionary, so the fields are accessed like

chunk["choices"]

With the new client, we access the fields as attributes:

chunk.choices

The changes are

for chunk in response:
    message = chunk.choices[0].delta
    state.current_reply += message.content

And the last step would be the change to use the correct model:

model = form.select(
    options=("codellama", "llama2", "llama3", "mistral"),
        value="llama3",
        name="gpt-model",
)

That's it! Save all changes.

Prepare Python environment and run app

Install the required modules:

pip install openai hyperdiv

Run the app:

python start.py

Open the browser at http://localhost:8888

Final Result

The complete code for this tutorial is here.