Developer Blog

Tips and tricks for developers and IT enthusiasts

Optimizing Django Docker builds with uv

Anyone managing Python dependencies with pip inside Docker containers should take a look at uv from Astral. The tool installs dependencies significantly faster and manages them more robustly. Especially for Django projects in CI or production, build times improve noticeably, and maintaining and updating dependencies becomes easier as well.

Adding uv to the Dockerfile

The recommended approach uses a multi-stage build to copy the current uv binary directly into the image:

COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv

This avoids installing uv via pip or a package manager and keeps the binary up to date.

Important environment variables

The Dockerfile sets a few central environment variables:

ENV UV_PROJECT_ENVIRONMENT=/venv \
    UV_NO_MANAGED_PYTHON=1 \
    UV_PYTHON_DOWNLOADS=never \
    VIRTUAL_ENV=/venv
  • UV_NO_MANAGED_PYTHON=1: uv does not manage Python itself but uses the interpreter from the base image.
  • UV_PYTHON_DOWNLOADS=never: uv never downloads its own Python builds.
  • UV_PROJECT_ENVIRONMENT=/venv: the virtual environment is created and managed at /venv.

Performance improvements

ENV UV_COMPILE_BYTECODE=1 \
    UV_LINK_MODE=copy \
    UV_CACHE_DIR=/app/.cache/uv
  • Bytecode (.pyc) is generated at install time, which shortens application start-up.
  • Copying instead of hardlinking avoids hardlink problems between the cache mount and the target filesystem.
  • Caching via Docker cache mounts speeds up rebuilds considerably.

Security and reproducibility

ENV UV_FROZEN=1 \
    UV_REQUIRE_HASHES=1 \
    UV_VERIFY_HASHES=1
  • Only the exact versions pinned in the lock file are installed; the lock file is never updated implicitly.
  • Every download is verified against a cryptographic hash.

Installing dependencies

RUN uv venv $VIRTUAL_ENV

RUN --mount=type=cache,target=/app/.cache/uv \
    --mount=type=bind,source=uv.lock,target=uv.lock \
    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
    uv sync --no-install-project --no-editable

These commands install exactly the packages pinned in the uv.lock file.

Managing dev and test dependencies

With dependency-groups, different environments can be kept separate:

[project]
dependencies = ["django~=5.2"]

[dependency-groups]
dev = ["django-debug-toolbar"]
test = ["pytest"]

[tool.uv]
default-groups = []

Build arguments make it possible to install specific groups on demand:

ARG BUILD_GROUPS=""

RUN --mount=type=cache,target=/app/.cache/uv \
    --mount=type=bind,source=uv.lock,target=uv.lock \
    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
    uv venv $VIRTUAL_ENV && \
    uv sync --no-install-project --no-editable $BUILD_GROUPS

Example invocations:

# Production dependencies only
$ docker build .

# With development tools
$ docker build --build-arg BUILD_GROUPS="--group dev" .

# With the test environment
$ docker build --build-arg BUILD_GROUPS="--group test" .

Migrating from pip

Existing projects can be migrated with:

uv add --requirements requirements.txt
uv add --group dev --requirements requirements-dev.txt
uv lock

Maintenance and updates

Check for outdated packages:

uv pip list --outdated

Update the lock file:

uv lock --upgrade

This keeps dependency management reproducible, secure, and maintainable.


This approach saves time and bandwidth and results in a clearly structured, hardened Docker setup for Python projects such as Django. Anyone who values fast CI/CD pipelines, reproducible builds, and clean production images will find uv a modern and efficient solution.

The complete Dockerfile

### BUILD IMAGE ###

FROM python:3.13-slim AS builder

ENV DJANGO_SETTINGS_MODULE=myproj.settings \
    PATH="/venv/bin:$PATH" \
    PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    UV_CACHE_DIR=/app/.cache/uv \
    UV_COMPILE_BYTECODE=1 \
    UV_FROZEN=1 \
    UV_LINK_MODE=copy \
    UV_NO_MANAGED_PYTHON=1 \
    UV_PROJECT_ENVIRONMENT=/venv \
    UV_PYTHON_DOWNLOADS=never \
    UV_REQUIRE_HASHES=1 \
    UV_VERIFY_HASHES=1 \
    VIRTUAL_ENV=/venv

COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv

RUN <<EOT
apt-get update -y && \
apt-get install -y --no-install-recommends \
    build-essential
    # other build dependencies here
EOT

WORKDIR /app

ARG BUILD_GROUPS=""

RUN --mount=type=cache,target=/app/.cache/uv \
    --mount=type=bind,source=uv.lock,target=uv.lock \
    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
    uv venv $VIRTUAL_ENV && \
    uv sync --no-install-project --no-editable $BUILD_GROUPS

# Copy what's needed to run collectstatic.
COPY myproj /app/myproj
COPY media /app/media
COPY manage.py /app/

RUN DEBUG=False ./manage.py collectstatic --noinput

### FINAL IMAGE ###

FROM python:3.13-slim

ARG PORT=8000
ENV DJANGO_SETTINGS_MODULE=myproj.settings \
    PATH="/venv/bin:$PATH" \
    PORT=${PORT} \
    PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    VIRTUAL_ENV=/venv

EXPOSE ${PORT}
ENTRYPOINT ["/bin/bash", "/app/bin/run"]
CMD ["prod"]

WORKDIR /app

RUN <<EOT
# OS dependencies, e.g. bash, db clients, etc.
apt-get update -y && \
apt-get install -y --no-install-recommends \
    bash && \
apt-get autoremove -y && \
apt-get clean -y && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
EOT

# Copy selectively from builder to optimize final image.
# --link enables better layer caching when base image changes
COPY --link --from=builder /venv /venv
COPY --link --from=builder /app/myproj /app/myproj
COPY --link --from=builder /app/static /app/static
COPY --link --from=builder /app/manage.py /app/manage.py

Daily: How to transcribe Videos

Using Commandline and Python

Option 1: Transcribe with Whisper (official)

pip install git+https://github.com/openai/whisper.git

Then run:

whisper demo.mp4 --model medium --output_format txt

Optional flags:

  • --output_format srt (for subtitles)
  • --language en (to skip language detection)

Create a shell script to transcribe multiple files

#!/usr/bin/env bash

# Loop through all arguments (filenames)
for VIDEO in "$@"; do
    echo "Transcribing: $VIDEO"
    whisper "$VIDEO" --model medium --language en --output_format txt
done

Option 2: Transcribe with faster-whisper (much faster on CPU or GPU)

pip install faster-whisper

Run via Python:

import sys

from faster_whisper import WhisperModel

# The video path is passed as the first command-line argument, e.g. demo.mp4.
VIDEO = sys.argv[1]

model = WhisperModel("medium", device="cpu")  # or "cuda" for GPU
segments, info = model.transcribe(VIDEO)

with open("transcript.txt", "w") as f:
    for s in segments:
        f.write(f"{s.start:.2f} --> {s.end:.2f}: {s.text.strip()}\n")

Optional: Convert to audio (if needed)

If transcription fails or is slow, extract audio first:

ffmpeg -i your_video.mp4 -ar 16000 -ac 1 -c:a pcm_s16le audio.wav

Then transcribe audio.wav instead.
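
If you prefer to keep the whole pipeline in Python, both steps can be combined in one small script. This is a sketch, assuming ffmpeg is available on your PATH and faster-whisper is installed; the file names are examples.

import subprocess
import sys

from faster_whisper import WhisperModel

video = sys.argv[1]
audio = "audio.wav"

# Extract 16 kHz mono PCM audio with ffmpeg (same invocation as above).
subprocess.run(
    ["ffmpeg", "-y", "-i", video, "-ar", "16000", "-ac", "1", "-c:a", "pcm_s16le", audio],
    check=True,
)

# Transcribe the extracted audio with faster-whisper.
model = WhisperModel("medium", device="cpu")
segments, info = model.transcribe(audio)
for s in segments:
    print(f"{s.start:.2f} --> {s.end:.2f}: {s.text.strip()}")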


Problems with numpy

Error when running whisper: A module that was compiled using NumPy 1.x cannot be run in NumPy 2.2.6 as it may crash.

Reason:

The error you’re seeing comes from an incompatibility between NumPy 2.x and some Whisper dependencies that were compiled against NumPy 1.x.

Solution: Downgrade NumPy

You can fix this by downgrading NumPy to version 1.x:

pip install "numpy<2"

Then run Whisper again:

whisper demo.mp4 --model medium --output_format txt

Hide warning UserWarning: FP16 is not supported on CPU; using FP32 instead

Option 1: Suppress all Python warnings (quick + global)

In your terminal or script, set the environment variable:

PYTHONWARNINGS="ignore" whisper demo.mp4 --model medium

Or, in Python code:

import warnings

warnings.filterwarnings("ignore")

Option 2: Suppress only that specific warning

If you’re calling Whisper from Python and want to filter only that one:

import warnings

warnings.filterwarnings(
    "ignore",
    message="FP16 is not supported on CPU; using FP32 instead"
)

Option 3: Patch the library (if you’re comfortable)

You can find the line in the whisper source code (in transcribe.py) that issues the warning and comment it out or remove it:

# warnings.warn("FP16 is not supported on CPU; using FP32 instead")

Not recommended unless you maintain the code.

direnv – hide the exported variables in the output

Situation

Running direnv reload, you see something like this:

❯ direnv reload
direnv: loading ~/.envrc
direnv: export +ENV_HOME +ENV_NAME +UV_PROJECT_ENVIRONMENT +VIRTUAL_ENV +VIRTUAL_ENV_PROMPT +VSCODE_HOME ~PATH

Solution

  • Create the file $HOME/.config/direnv/direnv.toml
  • Use this content:
    hide_env_diff = true

Script

#!/usr/bin/env bash

FILE_CONFIG="$HOME/.config/direnv/direnv.toml"
FLDR_CONFIG=$(dirname "$FILE_CONFIG")

echo "- Create folder $FLDR_CONFIG"
mkdir -p "$FLDR_CONFIG"

echo "- Create file   $FILE_CONFIG"
echo "  add hide_env_diff = true"
echo "hide_env_diff = true" >"$FILE_CONFIG"

Vue – Cookbook

Responsive Design

React on Size Change

<script>
export default {
    data() {
        return {
            isMobile: false,
            isDesktop: false,

            windowWidth: window.innerWidth,
            windowHeight: window.innerHeight,
        };
    },

    created() {
        this.updateWindowSize();
        window.addEventListener('resize', this.updateWindowSize);
    },

    // Lifecycle hook (a sibling of created(), not a method):
    // remove the listener again when the component is torn down.
    beforeUnmount() {
        window.removeEventListener('resize', this.updateWindowSize);
    },

    methods: {
        updateWindowSize() {
            // console.log("updateWindowSize()");

            this.windowWidth = window.innerWidth;
            this.windowHeight = window.innerHeight;
            this.checkIsMobile();
        },

        checkIsMobile() {
            this.isMobile = this.windowWidth <= 768;
            this.isDesktop = !this.isMobile;
            // console.log(`checkIsMobile(): windowWidth = ${this.windowWidth} isMobile=${this.isMobile}`)
        },
    },
};
</script>

Debugging

<script setup>
import {
    onActivated,
    onBeforeMount,
    onBeforeUnmount,
    onBeforeUpdate,
    /*  onCreated, */
    onDeactivated,
    onErrorCaptured,
    onMounted,
    /*  onRenderTracked,*/
    onRenderTriggered,
    onScopeDispose,
    onServerPrefetch,
    onUnmounted,
    onUpdated,
    /*  onWatcherCleanup, */
} from 'vue';

onActivated(() => { console.log('onActivated() called'); });
onBeforeMount(() => { console.log(`onBeforeMount():`) })
onBeforeUnmount(() => { console.log('onBeforeUnmount() called'); });
onBeforeUpdate(() => { console.log(`onBeforeUpdate():`) })
onDeactivated(() => { console.log('onDeactivated() called'); });
onErrorCaptured((err, instance, info) => { console.log('onErrorCaptured() called'); console.error(err); return false; });
onMounted(() => { console.log(`onMounted():`) })
onRenderTriggered((e) => { console.log('onRenderTriggered() called', e); });
onUnmounted(() => { console.log(`onUnmounted():`) })
onUpdated(() => { console.log('onUpdated() called'); });
onScopeDispose(() => { console.log('onScopeDispose() called'); });
onServerPrefetch(() => { console.log('onServerPrefetch() called'); });

</script>

Learning | Hello World in different Languages

Python

print("Hello, World!")

Java

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}

C

#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}

C++

#include <iostream>

int main() {
    std::cout << "Hello, World!" << std::endl;
    return 0;
}

JavaScript

console.log("Hello, World!");

Ruby

puts "Hello, World!"

Swift

print("Hello, World!")

Go

package main

import "fmt"

func main() {
    fmt.Println("Hello, World!")
}

Rust

fn main() {
    println!("Hello, World!");
}

PHP

<?php
echo "Hello, World!";
?>

Perl

print "Hello, World!\n";

Kotlin

fun main() {
    println("Hello, World!")
}

Scala

object HelloWorld {
  def main(args: Array[String]): Unit = {
    println("Hello, World!")
  }
}

Lua

print("Hello, World!")

Haskell

main :: IO ()
main = putStrLn "Hello, World!"

Dart

void main() {
  print('Hello, World!');
}

Shell

echo "Hello, World!"

Batch

@echo off
echo Hello, World!

PowerShell

Write-Output "Hello, World!"

VBScript

MsgBox "Hello, World!"

Objective-C

#import <Foundation/Foundation.h>

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        NSLog(@"Hello, World!");
    }
    return 0;
}

Assembly

section .data
    hello db 'Hello, World!',10
    len equ $ - hello

section .text
    global _start

_start:
    ; write our string to stdout
    mov eax, 4         ; sys_write
    mov ebx, 1         ; file descriptor 1 (stdout)
    mov ecx, hello     ; message to write
    mov edx, len       ; message length
    int 0x80           ; syscall
    ; exit
    mov eax, 1         ; sys_exit
    xor ebx, ebx       ; exit status 0
    int 0x80           ; syscall

VBA (Visual Basic for Applications)

Sub HelloWorld()
    MsgBox "Hello, World!"
End Sub

Tcl

puts "Hello, World!"

COBOL

       IDENTIFICATION DIVISION.
       PROGRAM-ID. HELLO-WORLD.
       PROCEDURE DIVISION.
           DISPLAY "Hello, World!".
           STOP RUN.

F#

printfn "Hello, World!"

Elixir

IO.puts "Hello, World!"

SQL (MySQL)

SELECT 'Hello, World!';

SQL (SQLite)

SELECT 'Hello, World!';

SQL (PostgreSQL)

SELECT 'Hello, World!';

SQL (Oracle)

SELECT 'Hello, World!' FROM DUAL;

SQL (SQL Server)

PRINT 'Hello, World!';

Smalltalk

Transcript show: 'Hello, World!'; cr.

R

cat("Hello, World!\n")

Bash

echo "Hello, World!"

Erlang

-module(hello).
-export([hello_world/0]).

hello_world() ->
    io:fwrite("Hello, World!~n").

Julia

println("Hello, World!")

MATLAB

disp('Hello, World!');

AutoHotkey

MsgBox, Hello, World!

Clojure

(println "Hello, World!")

Groovy

println "Hello, World!"

OCaml

print_endline "Hello, World!"

D

import std.stdio;

void main()
{
    writeln("Hello, World!");
}

Crystal

puts "Hello, World!"

Nim

echo "Hello, World!"

Common Lisp

(format t "Hello, World!~%")

Scheme

(display "Hello, World!")
(newline)

Prolog

:- initialization(main).

main :-
    write('Hello, World!'), nl,
    halt.

ABAP

REPORT ZHELLO_WORLD.

WRITE: / 'Hello, World!'.

VB.NET

Module HelloWorld
    Sub Main()
        Console.WriteLine("Hello, World!")
    End Sub
End Module 

AI Environment | Writing Apps for OpenAI, ChatGPT, Ollama and others

Python UI Frameworks

  • Gradio
    Gradio is the fastest way to demo your machine learning model with a friendly web interface so that anyone can use it, anywhere! (A minimal example follows after this list.)
  • Streamlit
    Streamlit turns data scripts into shareable web apps in minutes.
    All in pure Python. No front‑end experience required.
  • HyperDiv
    Open-source framework for rapidly building reactive web apps in Python, with built-in Shoelace components, Markdown, charts, tables, and more.
  • Shoelace
    A forward-thinking library of web components.
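
To illustrate how little code such a UI needs, here is a minimal Gradio sketch. It assumes gradio is installed (pip install gradio); the echo function is a hypothetical stand-in for a real model call.

import gradio as gr

def echo(prompt: str) -> str:
    # Hypothetical placeholder for an actual model inference call.
    return f"You said: {prompt}"

# gr.Interface wraps a plain Python function in a web UI with typed components.
demo = gr.Interface(fn=echo, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()  # serves the UI locally, by default on http://127.0.0.1:7860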

Working with local LLMs

List of Frameworks and Tools

Ollama

curl -fsSL https://ollama.com/install.sh | sh
ollama run llama2
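
To call the local model from Python instead of the interactive CLI, the official Ollama Python client can be used. This is a sketch, assuming pip install ollama and a running Ollama server with the llama2 model pulled.

import ollama

# Send a single chat message to the locally running Ollama server.
response = ollama.chat(
    model="llama2",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])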

LLM

pip install llm
llm install llm-gpt4all
llm -m the-model-name "Your query"
llm aliases set falcon ggml-model-gpt4all-falcon-q4_0
llm -m falcon "Tell me a joke about computer programming"
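
The same models can also be used from Python through llm's API. This is a sketch, assuming pip install llm and that the "falcon" alias set above resolves to a locally installed model.

import llm

# Resolve a model by name or alias and send a single prompt.
model = llm.get_model("falcon")
response = model.prompt("Tell me a joke about computer programming")
print(response.text())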

Working with Hugging Face

List of Models

https://huggingface.co/models?sort=trending&search=gguf

Install models with Ollama

https://huggingface.co/docs/hub/ollama

The command has the following format:

ollama run hf.co/{username}/{repository}

Please note that you can use both hf.co and huggingface.co as the domain name.

Here are some models you can try:

ollama run hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF
ollama run hf.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF
ollama run hf.co/arcee-ai/SuperNova-Medius-GGUF
ollama run hf.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF

Gatsby | Getting started with Gatsby

Installation

npm init gatsby
pnpm create gatsby
❯ npm create gatsby
.../21.6.2/pnpm/store/v3/tmp/dlx-42253   |   +3 +
.../21.6.2/pnpm/store/v3/tmp/dlx-42253   | Progress: resolved 3, reused 1, downloaded 2, added 3, done
create-gatsby version 3.13.1

                                                                            Welcome to Gatsby!
What would you like to call your site?
✔ Getting-Started-with-Gatsby/ site
✔ Will you be using JavaScript or TypeScript?
· TypeScript
✔ Will you be using a CMS?
· No (or I'll add it later)
✔ Would you like to install a styling system?
· Tailwind CSS
✔ Would you like to install additional features with other plugins?
· Add responsive images
· Add an automatic sitemap
· Generate a manifest file
· Add Markdown and MDX support


Thanks! Here's what we'll now do:

    🛠  Create a new Gatsby site in the folder site
    🎨 Get you set up to use Tailwind CSS for styling your site
    🔌 Install gatsby-plugin-image, gatsby-plugin-sitemap, gatsby-plugin-manifest, gatsby-plugin-mdx

✔ Shall we do this? (Y/n) · Yes
✔ Created site from template
✔ Installed Gatsby
✔ Installed plugins
✔ Created site in site
🔌 Setting-up plugins...
info Adding gatsby-plugin-postcss
info Adding gatsby-plugin-image
info Adding gatsby-plugin-sitemap
info Adding gatsby-plugin-manifest
info Adding gatsby-plugin-mdx
info Adding gatsby-plugin-sharp
info Adding gatsby-transformer-sharp
info Adding gatsby-source-filesystem
info Adding gatsby-source-filesystem
info Installed gatsby-plugin-postcss in gatsby-config
success Adding gatsby-plugin-postcss to gatsby-config - 0.224s
info Installed gatsby-plugin-image in gatsby-config
success Adding gatsby-plugin-image to gatsby-config - 0.221s
info Installed gatsby-plugin-sitemap in gatsby-config
success Adding gatsby-plugin-sitemap to gatsby-config - 0.230s
info Installed gatsby-plugin-manifest in gatsby-config
success Adding gatsby-plugin-manifest to gatsby-config - 0.258s
info Installed gatsby-plugin-mdx in gatsby-config
success Adding gatsby-plugin-mdx to gatsby-config - 0.264s
info Installed gatsby-plugin-sharp in gatsby-config
success Adding gatsby-plugin-sharp to gatsby-config - 0.265s
info Installed gatsby-transformer-sharp in gatsby-config
success Adding gatsby-transformer-sharp to gatsby-config - 0.269s
info Installed gatsby-source-filesystem in gatsby-config
success Adding gatsby-source-filesystem (images) to gatsby-config - 0.279s
info Installed gatsby-source-filesystem in gatsby-config
success Adding gatsby-source-filesystem (pages) to gatsby-config - 0.286s
🎨 Adding necessary styling files...
🎉  Your new Gatsby site Getting Started with Gatsby has been successfully created
at /Users/Shared/CLOUD/Programmier-Workshops/Kurse/Gatsby/Einsteiger/Getting-Started-with-Gatsby/site.
Start by going to the directory with

  cd site

Start the local development server with

  npm run develop

See all commands at

  https://www.gatsbyjs.com/docs/reference/gatsby-cli/
❯ cd site
❯ npm install
❯ npm run develop