Developer Blog

Tips and tricks for developers and IT enthusiasts

Docker | Create an extensible build environment

TL;DR

The complete code for the post is here.

General Information

Building a Docker image essentially means creating a Dockerfile and specifying all the required components to install and configure.

A Dockerfile therefore contains many operating system commands for installing and configuring software and packages.

Keeping all commands in one file (Dockerfile) can lead to a confusing structure.

In this post I describe an extensible structure for a Docker project so that you can reuse its components in other projects.

General Structure

The basic idea behind the structure is: split all installation instructions into separate installation scripts and call them individually in the Dockerfile.

In the end the structure will look something like this (with some additional helper files to be added and described later):

RUN bash /tmp/${SCRIPT_UBUNTU}
RUN bash /tmp/${SCRIPT_ADD_USER}
RUN bash /tmp/${SCRIPT_NODEJS}
RUN bash /tmp/${SCRIPT_JULIA}
RUN bash /tmp/${SCRIPT_ANACONDA}
RUN cat ${SCRIPT_ANACONDA_USER} | su user
RUN bash /tmp/${SCRIPT_CLEANUP}

For example, preparing the Ubuntu image by installing basic tools is moved into the script 01_ubuntu.sh.

Hint: There are hardly any restrictions on the choice of names (variables and files/scripts). For the scripts, I use numbering to express order.

The script contains this code:

apt-get update                                       
apt-get install --yes apt-utils
apt-get install --yes build-essential lsb-release curl sudo vim python3-pip

echo "root:root" | chpasswd

Since we will be working with several scripts, an exchange of information is necessary. For example, one script installs a package and the other needs the installation folder.

We will therefore store information needed by multiple scripts in a separate file: the environment file environment

#--------------------------------------------------------------------------------------
USR_NAME=user
USR_HOME=/home/user

GRP_NAME=work

#--------------------------------------------------------------------------------------
ANACONDA=anaconda3
ANACONDA_HOME=/opt/$ANACONDA
ANACONDA_INSTALLER=/tmp/installer_${ANACONDA}.sh

JULIA=julia
JULIA_HOME=/opt/${JULIA}-1.7.2
JULIA_INSTALLER=/tmp/installer_${JULIA}.tgz

And each installation script must start with a preamble to use this environment file:

#--------------------------------------------------------------------------------------
# get environment variables
. environment

Preparing Dockerfile and Build Environment

When building an image, Docker needs all files and scripts it runs to be available inside the image. Since we created the installation scripts outside of the image, we need to copy them into the image before running them. This also applies to the environment file environment.

File copying is done by the Docker ADD statement.

First we need our environment file in the image, so let’s copy this:

#======================================================================================
FROM ubuntu:latest as builder

#--------------------------------------------------------------------------------------
ADD environment environment

Each block that installs one required piece of software looks like this. To stay flexible, we use variables for the script names.

ARG SCRIPT_UBUNTU=01_ubuntu.sh
ADD ${SCRIPT_UBUNTU}    /tmp/${SCRIPT_UBUNTU}
RUN bash /tmp/${SCRIPT_UBUNTU}

Note: We can’t run the script directly because the execute bit may not be set. So we use bash to run the text file as a script.
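
If you prefer to run the scripts directly, an alternative (a sketch, not used in the rest of this post) is to set the execute bit explicitly:

ARG SCRIPT_UBUNTU=01_ubuntu.sh
ADD ${SCRIPT_UBUNTU}    /tmp/${SCRIPT_UBUNTU}
# set the execute bit, then run the script directly
# (the script would then also need a shebang line such as #!/bin/bash)
RUN chmod +x /tmp/${SCRIPT_UBUNTU} && /tmp/${SCRIPT_UBUNTU}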

As an add-on, we will be using Docker’s multi-stage builds. So here is the final code:

#--------------------------------------------------------------------------------------
# UBUNTU CORE
#--------------------------------------------------------------------------------------
FROM builder as ubuntu_core
ARG SCRIPT_UBUNTU=01_ubuntu.sh
ADD ${SCRIPT_UBUNTU}    /tmp/${SCRIPT_UBUNTU}
RUN bash                /tmp/${SCRIPT_UBUNTU}

Final results

Dockerfile

Here is the final Dockerfile:

#==========================================================================================
FROM ubuntu:latest as builder

#------------------------------------------------------------------------------------------
# 
#------------------------------------------------------------------------------------------
ADD environment environment
#------------------------------------------------------------------------------------------
# UBUNTU CORE
#------------------------------------------------------------------------------------------
FROM builder as ubuntu_core
ARG SCRIPT_UBUNTU=01_ubuntu.sh
ADD ${SCRIPT_UBUNTU}    /tmp/${SCRIPT_UBUNTU}
RUN bash                /tmp/${SCRIPT_UBUNTU}

#------------------------------------------------------------------------------------------
# LOCAL USER
#------------------------------------------------------------------------------------------
ARG SCRIPT_ADD_USER=02_add_user.sh
ADD ${SCRIPT_ADD_USER}  /tmp/${SCRIPT_ADD_USER}
RUN bash /tmp/${SCRIPT_ADD_USER}

#------------------------------------------------------------------------------------------
# NODEJS
#-----------------------------------------------------------------------------------------
FROM ubuntu_core as with_nodejs

ARG SCRIPT_NODEJS=10_nodejs.sh
ADD ${SCRIPT_NODEJS}    /tmp/${SCRIPT_NODEJS}
RUN bash                /tmp/${SCRIPT_NODEJS}

#--------------------------------------------------------------------------------------------------
# JULIA
#--------------------------------------------------------------------------------------------------
FROM with_nodejs as with_julia

ARG SCRIPT_JULIA=11_julia.sh
ADD ${SCRIPT_JULIA} /tmp/${SCRIPT_JULIA}
RUN bash /tmp/${SCRIPT_JULIA}

#---------------------------------------------------------------------------------------------
# ANACONDA3  with Julia Extensions
#---------------------------------------------------------------------------------------------
FROM with_julia as with_anaconda

ARG SCRIPT_ANACONDA=21_anaconda3.sh
ADD ${SCRIPT_ANACONDA} /tmp/${SCRIPT_ANACONDA}
RUN bash /tmp/${SCRIPT_ANACONDA}

#---------------------------------------------------------------------------------------------
#
#---------------------------------------------------------------------------------------------
FROM with_anaconda as with_anaconda_user
ARG SCRIPT_ANACONDA_USER=22_anaconda3_as_user.sh
ADD ${SCRIPT_ANACONDA_USER} /tmp/${SCRIPT_ANACONDA_USER}
#RUN cat ${SCRIPT_ANACONDA_USER} | su user

#---------------------------------------------------------------------------------------------
#
#---------------------------------------------------------------------------------------------
FROM with_anaconda_user as with_cleanup

ARG SCRIPT_CLEANUP=99_cleanup.sh
ADD ${SCRIPT_CLEANUP} /tmp/${SCRIPT_CLEANUP}
RUN bash /tmp/${SCRIPT_CLEANUP}

#=============================================================================================
USER    user
WORKDIR /home/user

#
CMD ["bash"]

Makefile

HERE := ${CURDIR}

CONTAINER := playground_docker

default:
	cat Makefile

build:
	docker build -t ${CONTAINER} .

clean:
	docker_rmi_all
	
run:
	docker run  -it --rm  -p 127.0.0.1:8888:8888 -v ${HERE}:/src:rw -v ${HERE}/notebooks:/notebooks:rw --name ${CONTAINER} ${CONTAINER}

notebook:
	docker run  -it --rm  -p 127.0.0.1:8888:8888 -v ${HERE}:/src:rw -v ${HERE}/notebooks:/notebooks:rw --name ${CONTAINER} ${CONTAINER} bash .local/bin/run_jupyter

Installation scripts

01_ubuntu.sh

#--------------------------------------------------------------------------------------------------
# get environment variables
. environment

#--------------------------------------------------------------------------------------------------
export DEBIAN_FRONTEND=noninteractive
export TZ='Europe/Berlin'

echo $TZ > /etc/timezone 

apt-get update                                       
apt-get install --yes apt-utils

#
apt-get -y install tzdata

rm /etc/localtime
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
dpkg-reconfigure -f noninteractive tzdata

#
apt-get install --yes build-essential lsb-release curl sudo vim python3-pip

#
echo "root:password" | chpasswd

Angular | Starting with Capacitor

Introduction

Install Angular

$ npm -g install @angular/cli

Create sample app

❯ ng new app-capacitor
? Would you like to add Angular routing? Yes
? Which stylesheet format would you like to use? SCSS   [ https://sass-lang.com/documentation/syntax#scss]

❯ cd app-capacitor

Change output path to capacitor defaults

Change line in angular.json

      "architect": {
        "build": {
          "builder": "@angular-devkit/build-angular:browser",
          "options": {
            "outputPath": "dist",

Add Capacitor support

❯ ng add @capacitor/angular
ℹ Using package manager: npm
✖ Unable to find compatible package.  Using 'latest'.
✖ Package has unmet peer dependencies. Adding the package may not succeed.

The package @capacitor/angular will be installed and executed.
Would you like to proceed? Yes

Initialize Capacitor

❯ npx cap init
[?] What is the name of your app?
    This should be a human-friendly app name, like what you'd see in the App Store.
√ Name ... app-capacitor
[?] What should be the Package ID for your app?
    Package IDs (aka Bundle ID in iOS and Application ID in Android) are unique identifiers for apps. They must be in
    reverse domain name notation, generally representing a domain name that you or your company owns.
√ Package ID ... com.example.app
√ Creating capacitor.config.ts in D:\CLOUD\Online-Seminare\Kurse\Angular\Einsteiger\App_Capacitor in 5.31ms
[success] capacitor.config.ts created!

Next steps:
https://capacitorjs.com/docs/v3/getting-started#where-to-go-next
[?] Join the Ionic Community! 💙
    Connect with millions of developers on the Ionic Forum and get access to live events, news updates, and more.
√ Create free Ionic account? ... no
[?] Would you like to help improve Capacitor by sharing anonymous usage data? 💖
    Read more about what is being collected and why here: https://capacitorjs.com/telemetry. You can change your mind at
    any time by using the npx cap telemetry command.
√ Share anonymous usage data? ... no
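
After the init step, the generated capacitor.config.ts should look roughly like this (values taken from the answers above, with webDir matching the dist output path we configured earlier):

import { CapacitorConfig } from '@capacitor/cli';

const config: CapacitorConfig = {
  appId: 'com.example.app',
  appName: 'app-capacitor',
  webDir: 'dist'
};

export default config;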

Build your app

We will need the dist directory with the web files

❯ ng build --prod
Option "--prod" is deprecated: No need to use this option as this builder defaults to configuration "production".
✔ Browser application bundle generation complete.
✔ Copying assets complete.
✔ Index html generation complete.

Initial Chunk Files               | Names         |      Size
main.b633c9096acdb457c421.js      | main          | 212.34 kB
polyfills.e4574352eda6eb439793.js | polyfills     |  35.95 kB
runtime.d66bb6fe709ae891f100.js   | runtime       |   1.01 kB
styles.31d6cfe0d16ae931b73c.css   | styles        |   0 bytes

                                  | Initial Total | 249.29 kB

Build at: 2021-05-28T09:32:28.905Z - Hash: ed2ed2d661b0d58b48f2 - Time: 28225ms
❯ npm install @capacitor/ios @capacitor/android
npm WARN @capacitor/angular@1.0.3 requires a peer of rxjs@~6.4.0 but none is installed. You must install peer dependencies yourself.
npm WARN @capacitor/angular@1.0.3 requires a peer of typescript@~3.4.3 but none is installed. You must install peer dependencies yourself.
npm WARN ajv-keywords@3.5.2 requires a peer of ajv@^6.9.1 but none is installed. You must install peer dependencies yourself.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@2.3.2 (node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@2.3.2: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.13 (node_modules\webpack-dev-server\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.13: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})

+ @capacitor/ios@3.0.0
+ @capacitor/android@3.0.0
added 2 packages from 1 contributor, removed 1 package and audited 1375 packages in 7.385s

88 packages are looking for funding
  run `npm fund` for details

found 35 moderate severity vulnerabilities
  run `npm audit fix` to fix them, or `npm audit` for details

Add Platform Android

❯ npx cap add android
√ Adding native android project in android in 167.07ms
√ Syncing Gradle in 1.45ms
√ add in 169.83ms
[success] android platform added!
Follow the Developer Workflow guide to get building:
https://capacitorjs.com/docs/v3/basics/workflow
√ Copying web assets from dist to android\app\src\main\assets\public in 55.26ms
√ Creating capacitor.config.json in android\app\src\main\assets in 3.50ms
√ copy android in 164.93ms
√ Updating Android plugins in 7.76ms
√ update android in 75.79ms

Open Android Studio to build App

❯ npx cap open android
[info] Opening Android project at: android.

Daily: VS Code Error NSOSStatusErrorDomain

If you got an NSOSStatusErrorDomain Error, when you start VS Code from the command line

❯ code
[0309/155203.303710:ERROR:codesign_util.cc(108)] SecCodeCheckValidity: Error Domain=NSOSStatusErrorDomain Code=-67062 "(null)" (-67062)

You should do this: codesign --force --deep --sign -

❯ which code
/Users/Shared/VSCode/Default/Visual Studio Code - Insiders.app/Contents/Resources/app/bin/code

❯ codesign --force --deep --sign - "/Users/Shared/VSCode/Default/Visual Studio Code - Insiders.app"
/Users/Shared/VSCode/ralphg/Visual Studio Code - Insiders.app: replacing existing signature

❯ code -v
1.88.0-insider
19ecb4b8337d0871f0a204853003a609d716b04e
x64

The Art of AI: Create / Recreating Images

Today, I want to find out how to create and recreate images with ChatGPT.

Challenge: I found an image and want to recreate it:

Write a Prompt that creates this Image
Create an image of a woman with long, sleek black hair and striking features. She should have a strong, confident posture with her arms crossed. She's wearing a simple, elegant black dress with a round neckline. The background is a modern, minimalist office with a hint of sophistication. The color palette should be neutral, with emphasis on shades of grey and black to match her attire. The lighting is soft and diffused, highlighting her from the front and casting a gentle shadow behind her. The overall aesthetic is chic and polished with a touch of corporate professionalism.
Write a Prompt that creates this Image
Create an image of a poised and professional woman with long, sleek black hair and prominent, striking features. She stands with a sense of authority, her arms confidently crossed in front of her. She is dressed in a simple yet sophisticated black dress with a modest round neckline. The setting is a contemporary, minimalist office space that exudes a refined elegance. The color scheme is a harmonious blend of neutral tones, dominated by greys and blacks that complement her attire. Soft, ambient lighting bathes the scene, casting a gentle illumination that accentuates her silhouette against the muted sophistication of the office backdrop. The composition should convey an atmosphere of chic professionalism and understated luxury.
Write a Prompt that creates this Image
Create an image of an elegant and professional woman with long, black hair and defined, striking features. She exhibits a posture of confidence and composure with her arms crossed. Her attire is a chic, form-fitting black dress with a modest neckline. The environment is a stylish, minimalist office setting with sophisticated furnishings and a neutral color scheme emphasizing grays and blacks. The scene is illuminated with soft, ambient lighting that casts a gentle glow on the woman, highlighting her assertive stance against the contemporary backdrop, evoking a sense of refined grace and corporate elegance.

Notes:

Mostly, after running the same prompt a second time, ChatGPT responded with something like this:

So, I asked ChatGPT nicely to do it again:

Create the image again, because each time the image looks different

Next Step: find the precise prompt

As we can see, using the same prompt results in different images. So I asked ChatGPT how to create a prompt that produces the same image each time:

Me: Using one prompt multiple times, each image is different from the others. How can I state the prompt more clearly so that it creates exactly the same image each time?
ChatGPT: Creating exactly the same image multiple times using a text-to-image AI model like DALL-E is challenging due to the inherent variability in how these models interpret and render prompts. Each time a prompt is processed, the model generates a unique interpretation based on its training and algorithms, which can lead to variations in the resulting images. However, there are some strategies you can use to minimize these differences:

  • Be Highly Specific: Provide as many details as possible in your prompt, including specifics about posture, facial expression, clothing, and background elements. The more detailed your description, the less room there is for the AI to interpret differently.
  • Use Precise Descriptors: Instead of using general terms, use precise descriptors. For example, instead of saying “long, black hair,” specify the style, such as “long, straight, black hair that falls to mid-back.”
  • Define the Scene Layout: Be explicit about the composition of the image, like where the subject is positioned in the frame, how much of the background is visible, and the perspective or angle of the view.
  • Describe Colors and Lighting: Clearly state the colors you want to be used, including shades and tones, and describe the lighting (e.g., “soft, diffused lighting from the front left”).
  • Include Comparable Examples: If possible, reference a similar existing image or style to guide the AI’s interpretation.
  • Consistency in Format: If you’re looking for a portrait, landscape, or specific aspect ratio, mention this explicitly.

Even with these strategies, it’s important to remember that AI models like DALL-E are probabilistic and may still produce variations with each rendering. The nature of these models means they interpret prompts creatively, so exact replication of images is not always possible.

Ok, lets try:

Me: Using your suggestions, create a precise prompt that creates this image. Use all 6 strategies in your prompt.
ChatGPT: Create a digital portrait of a woman who embodies elegance and professionalism. She should have long, straight, jet-black hair with a glossy sheen, cascading down to mid-back. Her facial features are sharply defined, with high cheekbones, almond-shaped hazel eyes, meticulously groomed eyebrows, and full, neutral-colored lips. Her skin has a matte finish with a porcelain-like quality. Her stance is assertive; she stands straight with her shoulders back and arms crossed at the waist, showcasing a silver bracelet on her right wrist and a matching ring on her left middle finger. She wears a tailored, form-fitting black sheath dress with three-quarter sleeves and a boat neckline, ending just above the knees. The setting is an upscale, minimalist office: to her right is a steel-gray filing cabinet with a potted green succulent on top; to her left, a sleek, black office chair behind a dark walnut desk, on which rests a closed silver laptop and a white ceramic mug. The background comprises floor-to-ceiling windows with thin, vertical blinds casting soft shadows, and the walls are a light gray. The lighting is soft and directional, coming from the top left, creating a subtle highlight on her left cheek and a soft shadow to her right, enhancing her features without harshness. The overall color palette is monochrome, with a focus on blacks, whites, and grays, providing contrast against her black dress and the lighter office environment. The composition is centered and balanced, with the woman placed in the middle of the frame, allowing for an equal amount of office space on either side of her. The camera angle is at eye level, ensuring a direct and engaging connection with the viewer. The image should convey a narrative of confidence, sophistication, and modern professional elegance.

Wow, impressive. OK, here are the images created by this prompt:

The results look more alike, but they still lack that “unique” factor.

So I asked ChatGPT again:

Me: The images still look different. Is there a deterministic way to uniquely generate an image?
ChatGPT: AI image generation models like DALL-E are designed to introduce variation and creativity into their outputs. Even with highly detailed prompts, they are not deterministic in the traditional sense — you won’t get the exact same image with each generation because the model incorporates an element of randomness or “seed” into each creation process. This randomness is a core feature, as it enables the generation of unique images from the same prompt. However, in some advanced implementations of these models, you can specify a “seed” value, which can help in producing the same image across multiple runs if the seed is kept constant. The “seed” is essentially a starting point for the random number generator used in the generation process. By using the same seed, you tell the AI to start from the same random point, leading to the same sequence of “random” numbers, which can result in the same output. At the moment, the system you are interacting with does not provide an option to control the seed value manually for each request. Each new image generation is treated as a fresh instance, without a way to replicate the exact randomness of a previous run. Therefore, while you can guide the AI with very specific and detailed prompts to achieve close approximations, there is currently no way to ensure that a publicly available AI image generation model like DALL-E will produce the exact same image twice.

Ok, so we wont get 10

Azure Open AI – The Roadmap to become an expert

Roadmap

  • Basics of Programming: Python is a great starting point due to its simplicity and extensive support in the AI field. Familiarize yourself with variables, data types, loops, conditionals, functions, and object-oriented programming. Resources: Codecademy, LeetCode, HackerRank.
  • Understanding of AI and Machine Learning: Learn about foundational concepts like supervised learning, unsupervised learning, reinforcement learning, regression, classification, clustering, neural networks. Resources: Andrew Ng’s course on Coursera, “Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow” by Aurélien Géron.
  • Study Deep Learning: Deep learning is a special kind of machine learning involving artificial neural networks with several layers (“deep” structures). Study convolutional networks (used in image recognition), recurrent networks (used in sequence data), and the concept of deep reinforcement learning. Resources: Deep Learning Specialization by Andrew Ng on Coursera, “Deep Learning” by Ian Goodfellow, Yoshua Bengio and Aaron Courville.
  • Learn about Microsoft Azure: Azure is a cloud platform offering services like compute power, storage options, and AI models. Learn how to navigate the Azure portal, create/manage resources, and understand core services (Azure Compute, Azure Storage, Azure Data Services). Resources: Microsoft Learn, Azure documentation.
  • Microsoft’s AI Services: Azure AI services include Azure Machine Learning for building, training, and deploying machine learning models, and Azure Cognitive Services for pre-trained AI services like vision, speech, and language processing. Resources: Microsoft Learn’s path for AI Engineer, Azure AI documentation.
  • Azure OpenAI: Azure OpenAI offers access to powerful models like GPT-3 for tasks like text generation, translation, summarization, etc. Learn how to call these APIs from your application and how to use the results effectively. Resources: Azure OpenAI documentation, OpenAI’s GPT-3 playground. A minimal call sketch follows after this list.
  • Projects: Practical projects could include building a chatbot using Azure Bot Service and QnA Maker, creating a text summarizer using GPT-3, or developing a computer vision application using Azure Cognitive Services.
  • Stay Updated: AI is an ever-evolving field. Blogs like Towards Data Science, Medium, the Microsoft Azure blog, and arXiv for research papers can keep you updated. Webinars and online forums like AI Saturdays, AI Fest, and Stack Overflow also help.
  • Certifications: Certifications like the Microsoft Certified: Azure AI Engineer Associate or Microsoft Certified: Azure Data Scientist Associate validate your skills and knowledge. They require a solid understanding of Azure services and machine learning concepts.
  • Contribute to the Community: Sharing your knowledge and experience helps solidify your understanding and establishes your expertise. Write blogs or make YouTube videos explaining concepts, give talks at local meetups or conferences, contribute to open-source projects on GitHub, or answer questions on forums like Stack Overflow.
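
As a first taste of the Azure OpenAI item above, here is a minimal call sketch using the openai Python package (pre-1.0 style API). The resource name, deployment name and key are placeholders you have to replace with your own Azure OpenAI settings:

import openai

# Placeholder Azure OpenAI settings -- replace with your own resource, key and deployment
openai.api_type = "azure"
openai.api_base = "https://my-resource.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "<your-api-key>"

response = openai.Completion.create(
    engine="my-gpt3-deployment",   # name of the model deployment in your Azure resource
    prompt="Summarize the benefits of cloud computing in two sentences.",
    max_tokens=100,
)
print(response.choices[0].text)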

Understanding of AI and Machine Learning

  • Mathematical Foundations: Brush up on your knowledge of Linear Algebra, Calculus, Probability, and Statistics. These are essential for understanding machine learning algorithms.
  • Programming Skills: Python is the most used language in the field of machine learning. Familiarize yourself with Python, its libraries like NumPy, Pandas, and Matplotlib, and an IDE such as Jupyter Notebook.
  • Data Preprocessing: Learn how to clean and preprocess data, handle missing data, and perform feature scaling. Understand the importance of data visualization for exploring your dataset.
  • Supervised Learning: Start with simple linear regression and then move on to multiple linear regression, logistic regression, k-nearest neighbors, support vector machines, and decision trees. Understand the concept of overfitting and underfitting, and techniques to combat them like regularization.
  • Unsupervised Learning: Learn about clustering techniques like K-means and hierarchical clustering. Understand dimensionality reduction techniques like Principal Component Analysis (PCA).
  • Advanced Topics: Once you’re comfortable with the basics, move onto more advanced topics like ensemble methods, neural networks, and deep learning. Learn about different types of neural networks like Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and transformers.
  • Tools and Libraries for Machine Learning: Get hands-on experience with Scikit-Learn, TensorFlow, and Keras. Learn how to build, train, and evaluate models with these libraries.
  • Validation and Evaluation Metrics: Understand validation techniques like k-fold cross-validation. Learn about different evaluation metrics like accuracy, precision, recall, F1 score, ROC AUC, mean squared error, etc.
  • Special Topics: Study special topics like natural language processing, reinforcement learning, generative adversarial networks, transfer learning, etc.
  • Projects: Apply your knowledge to real-world projects. This could be anything from predicting house prices to building a chatbot.
  • Stay Updated and Keep Learning: Machine learning is a rapidly evolving field. Read research papers, follow relevant blogs, participate in Kaggle competitions, and take online courses to keep your knowledge up-to-date.

Deep Learning

  • Mathematical Foundations: Make sure you have a good understanding of Linear Algebra, Calculus, Probability, and Statistics. These are crucial for understanding how deep learning algorithms work.
  • Programming Skills: Python is the most common language in the field of deep learning. Make sure you’re comfortable with it. Libraries like NumPy and Matplotlib will also be useful.
  • Understanding of Machine Learning: Before delving into deep learning, you should have a solid understanding of basic machine learning concepts and algorithms. This includes concepts like overfitting, underfitting, bias-variance tradeoff, and understanding of algorithms like linear regression, logistic regression, and more.
  • Introduction to Neural Networks: Start with understanding what neural networks are and how they work. Learn about the architecture of neural networks, activation functions, and the backpropagation algorithm for training neural networks.
  • Frameworks for Deep Learning: Get hands-on experience with deep learning frameworks like TensorFlow and PyTorch. Learn how to define, train, and evaluate neural networks with these libraries.
  • Convolutional Neural Networks (CNNs): These are used primarily for image processing, object detection, and recognition tasks. Understand the architecture of CNNs and concepts like convolutional layers, pooling layers, and filters.
  • Recurrent Neural Networks (RNNs): Used for sequence data like time series and natural language. Learn about the structure of RNNs, the problem of long-term dependencies, and solutions like LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Units).
  • Advanced Topics in Deep Learning: Explore advanced topics like autoencoders, generative adversarial networks (GANs), transfer learning, reinforcement learning, etc.
  • Deep Learning for Natural Language Processing: Understand how deep learning is used in NLP. Learn about Word2Vec, GloVe, RNNs, LSTMs, GRUs, and Transformer models like BERT.
  • Projects: Apply your knowledge to real-world projects. This could be anything from image recognition tasks, text generation, sentiment analysis, etc.
  • Stay Updated and Keep Learning: Deep learning is a rapidly evolving field. Follow the latest research papers, participate in online courses and challenges like those on Kaggle.

Book on Machine Learning

Introduction to Machine Learning

  • Definition and Importance
  • Applications of Machine Learning
  • Types of Machine Learning: Supervised, Unsupervised, Reinforcement Learning

Mathematical Foundations

  • Linear Algebra
  • Calculus
  • Probability and Statistics
  • Exercises and Sample Problems

Python for Machine Learning

  • Python Basics
  • Python Libraries: NumPy, Pandas, Matplotlib
  • Exercises: Python Coding Problems

Data Preprocessing

  • Importing Data
  • Cleaning Data
  • Feature Engineering
  • Data Visualization
  • Exercises: Data Cleaning and Visualization

Supervised Learning

  • Regression Analysis
    • Simple Linear Regression
    • Multiple Linear Regression
    • Logistic Regression
  • Classification Algorithms
    • Decision Trees
    • K-Nearest Neighbors
    • Support Vector Machines
  • Exercises: Implementing Supervised Learning Algorithms

Unsupervised Learning

  • Clustering
    • K-Means Clustering
    • Hierarchical Clustering
  • Dimensionality Reduction
    • Principal Component Analysis
  • Exercises: Implementing Unsupervised Learning Algorithms

Evaluating Machine Learning Models

  • Overfitting and Underfitting
  • Bias-Variance Tradeoff
  • Cross-Validation
  • Performance Metrics
  • Exercises: Evaluating Models

Neural Networks and Deep Learning

  • Introduction to Neural Networks
  • Backpropagation
  • Convolutional Neural Networks
  • Recurrent Neural Networks
  • Exercises: Implementing Neural Networks

Advanced Topics

  • Ensemble Methods
  • Reinforcement Learning
  • Natural Language Processing
  • Transfer Learning
  • Generative Adversarial Networks
  • Exercises: Advanced Topics Projects

Case Studies in Machine Learning

  • Case Study 1: Predictive Analytics
  • Case Study 2: Image Recognition
  • Case Study 3: Natural Language Processing

Future Trends in Machine Learning

Appendix

References and Further Reading

Glossary of Machine Learning Terms

Unveiling the Power of Whisper AI: Architecture and Practical Implementations

Introduction

Welcome to the fascinating world of Whisper AI, OpenAI’s groundbreaking speech recognition system. As we delve deeper into the digital age, the ability to accurately transcribe and understand human speech has become invaluable. From powering virtual assistants to enhancing accessibility in technology, speech-to-text solutions are reshaping our interaction with devices.

This article will explore the intricate architecture of Whisper AI, guide you through its usage via command line, Python, and even demonstrate how to integrate it into a Flask application.

Whether you’re a developer, a tech enthusiast, or simply curious, join me in unraveling the capabilities of this remarkable technology.

Section 1: Understanding Whisper AI

Whisper AI stands as a testament to the advancements in artificial intelligence, specifically in speech recognition. Developed by OpenAI, this system is designed not just to transcribe speech but to understand it, accommodating various accents, dialects, and even noisy backgrounds.

Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.

The development of Whisper AI marks a significant milestone, showcasing OpenAI’s commitment to pushing the boundaries of AI and machine learning. In a world where effective communication is key, Whisper AI is a step towards bridging gaps and making technology more accessible to all.

Section 2: The Architecture of Whisper AI

At the heart of Whisper AI lies a sophisticated neural network, likely based on the Transformer model, renowned for its effectiveness in processing sequential data like speech.

These models are trained on vast datasets, enabling the system to recognize and interpret a wide array of speech patterns. What sets Whisper AI apart is its multilingual capabilities, proficiently handling different languages and dialects.

This architectural marvel not only enhances accuracy but also ensures inclusivity in speech recognition technology.

Section 3: Getting Started with Whisper AI

To embark on your journey with Whisper AI, a Python environment is a prerequisite. The system’s robustness requires adequate hardware specifications to function optimally.

Installation is straightforward: Whisper AI’s official documentation or https://github.com/openai/whisper provides comprehensive guides to get you started.

So, first: set up a Python virtual environment:

$ python -m venv venv
$ . venv/bin/activate
$ which python
<Your current directory>/venv/bin/python

Next step: install the required tools for Whisper:

$ sudo apt update && sudo apt install ffmpeg

And, at last, install Whisper:

$ pip install git+https://github.com/openai/whisper.git 

And you’re done.

Section 4: Using Whisper AI via Command Line

Whisper AI shines in its simplicity of use, particularly via the command line. After a basic setup, transcribing an audio file is as simple as running a command.

For instance, whisper youraudiofile.mp3 could yield a text transcription of the recorded speech. This ease of use makes Whisper AI an attractive option for quick transcription tasks, and the command line interface provides a straightforward way for anyone to harness its power.

Run the following command to transcribe your audio file:

$ whisper <your audio file> --model medium --threads 16

See some samples in the following Section and in Section 9.

Section 5: Integrating Whisper AI with Python

Python enthusiasts can rejoice in the seamless integration of Whisper AI with Python scripts.

Imagine a script that takes an audio file and uses Whisper AI to transcribe it – this can be as concise as a few lines of code.

import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])

The API’s intuitive design means that with basic Python knowledge, you can create scripts to automate transcription tasks, analyze speech data, or even build more complex speech-to-text applications.

import whisper

model = whisper.load_model("base")

# load audio and pad/trim it to fit 30 seconds
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)

# make log-Mel spectrogram and move to the same device as the model
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# detect the spoken language
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")

# decode the audio
options = whisper.DecodingOptions()
result = whisper.decode(model, mel, options)

# print the recognized text
print(result.text)

Section 6: Building a Flask Application with Whisper AI

Integrating Whisper AI into a Flask application opens a realm of possibilities. A simple Flask server can receive audio files and return transcriptions, all powered by Whisper AI.

This setup is ideal for creating web applications that require speech-to-text capabilities. From voice-commanded actions to uploading and transcribing audio files, the combination of Flask and Whisper AI paves the way for innovative web-based speech recognition solutions.

Here is a short code for a flask app:

from flask import Flask, abort, request
from flask_cors import CORS
from tempfile import NamedTemporaryFile
import whisper
import torch

torch.cuda.is_available()
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

model = whisper.load_model("base", device=DEVICE)

app = Flask(__name__)
CORS(app)

@app.route("/")
def hello():
    return "Whisper Hello World!"

@app.route('/whisper', methods=['POST'])
def handler():
    if not request.files:
        abort(400)

    results = []

    for filename, handle in request.files.items():
        temp = NamedTemporaryFile()
        handle.save(temp)
        result = model.transcribe(temp.name)
        results.append({
            'filename': filename,
            'transcript': result['text'],
        })

    return {'results': results}
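
Assuming the app is started with flask run (default port 5000), the endpoint can be tested with curl; the field name and file here are just examples:

$ curl -F "file=@audio.mp3" http://localhost:5000/whisper

The response is a JSON object with one transcript per uploaded file.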

Section 7: Advanced Features and Customization

For those looking to push the boundaries, Whisper AI offers advanced features and customization options.

Adapting the system to recognize specific terminologies or accent nuances can significantly enhance its utility in specialized fields. This level of customization ensures that Whisper AI remains a versatile tool, adaptable to various professional and personal needs.
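
One lightweight way to nudge Whisper towards domain-specific vocabulary is the initial_prompt option of transcribe(). A small sketch (the file name and the term list are just examples):

import whisper

model = whisper.load_model("medium")

# Pin the language and bias the decoder towards project-specific terms
result = model.transcribe(
    "meeting.mp3",
    language="en",
    initial_prompt="Kubernetes, Terraform, GitOps, Azure DevOps",
)
print(result["text"])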

Section 8: Ethical Considerations and Limitations

As with any AI technology, Whisper AI brings with it ethical considerations.

The paramount concern is privacy and the security of data processed by the system. Additionally, while Whisper AI is a remarkable technology, it is not without limitations. Potential biases in language models and challenges in understanding heavily accented or distorted speech are areas that require ongoing refinement.

Addressing these concerns is crucial for the responsible development and deployment of speech recognition technologies.

Section 9: Samples

First, get the audio file to be transcribed. I chose a famous speech from Abraham Lincoln and got it from YouTube.

$ yt-dlp -S res,ext:mp4:m4a "https://www.youtube.com/watch?v=bC4kQ2-kHZE" --output Abraham-Lincoln_Gettysburg-Address.mp4

Then I use Whisper to transcribe the audio.

$ whisper Abraham-Lincoln_Gettysburg-Address.mp4  --model medium --threads 16  --output_dir .
.../.venv/python/3.11/lib/python3.11/site-packages/whisper/transcribe.py:115: UserWarning: FP16 is not supported on CPU; using FP32 instead
  warnings.warn("FP16 is not supported on CPU; using FP32 instead")
Detecting language using up to the first 30 seconds. Use `--language` to specify the language
Detected language: English
[00:00.000 --> 00:19.660]  4 score and 7 years ago, our fathers brought forth on this continent a new nation, conceived
[00:19.660 --> 00:28.280]  in liberty and dedicated to the proposition that all men are created equal.
[00:28.280 --> 00:37.600]  Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived
[00:37.600 --> 00:43.680]  and so dedicated, can long endure.
[00:43.680 --> 00:48.660]  We are met on a great battlefield of that war.
[00:48.660 --> 00:54.800]  We have come to dedicate a portion of that field as a final resting place for those who
[00:54.800 --> 00:59.840]  here gave their lives that that nation might live.
[00:59.840 --> 01:09.600]  It is altogether fitting and proper that we should do this, but in a larger sense, we
[01:09.600 --> 01:18.440]  cannot dedicate, we cannot consecrate, we cannot hallow this ground.
[01:18.440 --> 01:25.720]  The brave men, living and dead, who struggled here, have consecrated it, far above our
[01:25.720 --> 01:30.640]  poor power to add or detract.
[01:30.640 --> 01:37.280]  The world will little note, nor long remember what we say here, but it can never forget
[01:37.280 --> 01:40.040]  what they did here.
[01:40.040 --> 01:47.440]  It is for us, the living rather, to be dedicated here, to the unfinished work which they who
[01:47.440 --> 01:52.360]  fought here have thus far so nobly advanced.
[01:52.360 --> 01:59.680]  It is rather for us to be here dedicated to the great task remaining before us, that from
[01:59.680 --> 02:06.220]  these honored dead we take increased devotion to that cause for which they gave the last
[02:06.220 --> 02:13.960]  full measure of devotion, that we here highly resolve that these dead shall not have died
[02:13.960 --> 02:23.520]  in vain, that this nation, under God, shall have a new birth of freedom, and that government
[02:23.520 --> 02:31.000]  of the people, by the people, for the people, shall not perish from the earth.

Conclusion

Whisper AI represents a significant leap in speech recognition technology. Its implications for the future of human-computer interaction are profound. As we continue to explore and expand the capabilities of AI, tools like Whisper AI not only enhance our present but also shape our future in technology.

I encourage you to explore Whisper AI, experiment with its features, and share your experiences. The journey of discovery is just beginning.

Just imagine: if AI helps you with daily tasks, you have more free time for other things:

Additional Resources

For those eager to dive deeper, the Whisper AI documentation, available on OpenAI’s official website and GitHub, offers extensive information and tutorials. Community forums and discussions provide valuable insights and practical advice for both novice and experienced users.

Power Apps | CookBook

Components

Button

Enable a Button depending on multiple conditions (e.g. dropdowns are selected):
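
A sketch for the button’s DisplayMode, assuming two dropdowns named Dropdown1 and Dropdown2 that both have to be selected:

If(
    IsBlank(Dropdown1.Selected.Value) || IsBlank(Dropdown2.Selected.Value),
    DisplayMode.Disabled,
    DisplayMode.Edit
)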

Set OnChange of Components:

Set DisplayMode of Component:

If(IsBlank(Radio1.Selected.Value), DisplayMode.Disabled, DisplayMode.Edit)

DropDown

Populate Dropdown

Switch(
    <Dropdown>.Selected.Value,
    "Value A", "a",
    "Value B", "b",
    "Value C", "c",
    <Dropdown>.Selected.'name '
) 

Populating dropdown which depends on selection of a control

OnVisible property of Screen:

ClearCollect(defaultDemands, Defaults(Demands));
Clear(defaultDemands);
Patch(defaultDemands, Defaults(Demands), {Name:"Select Product"})

Items property of dropdown you can choose between the two collections depending on the state of the radio selection:

If(
    IsBlank(<Radio>.Selected.Value),
    defaultDemands,
    Filter(ProductDemands, ProductValue = <Radio>.Selected.Value)
)

JSON

Extract Value from JSON

Match(
    jsonString, 
    """elementName"": *""(?<value>[^""]+)"
).value

Styling

Color

Set Color of Button:

Define a variable with the desired color in App.OnStart

Set BasePaletteColor of the Component to this variable:
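
A sketch of both steps (the variable name and color value are just examples):

// App.OnStart
Set(varPrimaryColor, ColorValue("#2E6E9E"));

// BasePaletteColor property of the component
varPrimaryColor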

FFMPEG | Compress video files

Compress and Convert MP4 to WMV
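
A plausible command for this, assuming FFmpeg’s built-in Windows Media encoders (wmv2/wmav2; the bitrates are just examples to adjust to taste):

ffmpeg -i source.mp4 -c:v wmv2 -b:v 1024k -c:a wmav2 -b:a 192k target.wmv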

Compress and Convert MP4 to WebM for YouTube, Instagram, Facebook

ffmpeg -i source.mp4 -c:v libvpx-vp9 -b:v 0.33M -c:a libopus -b:a 96k \
    -filter:v scale=960x540 target.webm

Compress and Convert H.264 to H.265 for Higher Compression

ffmpeg -i input.mp4 -vcodec libx265 -crf 28 output.mp4

Set CRF in FFmpeg to Reduce Video File Size

ffmpeg -i input.mp4 -vcodec libx264 -crf 24 output.mp4

Reduce video frame size to make 4K/1080P FHD video smaller

ffmpeg -i input.avi -vf scale=1280:720 output.avi

Command-line – resize video in FFmpeg to reduce video size

ffmpeg -i input.avi -vf scale=852:480 output.avi

Resize video in FFmpeg but keep the original aspect ratio

Specify only one component, width or height, and set the other component to -1, for example:

ffmpeg -i input.jpg -vf scale=852:-1 output_852.png

Converting WebM to MP4

ffmpeg -i video.webm video.mp4

When the WebM file contains VP8 or VP9 video, you have no choice but to transcode both the video and audio.

Video conversion can be a lengthy and CPU intensive process, depending on file size, video and audio quality, video resolution, etc. but FFmpeg provides a series of presets and controls to help you optimize for quality or faster conversions.

A note on video quality

When encoding video with H.264, the video quality can be controlled using a quantizer scale (crf value, crf stands for Constant Rate Factor) which can take values from 0 to 51: 0 being lossless, 23 the default and 51 the worst possible. So the lower the value the better the quality. You can leave the default value or, for a smaller bitrate, you can raise the value:

ffmpeg -i video.webm -crf 26 video.mp4

Video presets

FFmpeg also provides several quality presets which are calibrated for a certain encoding speed to compression ratio. A slower preset will provide a better compression. The following presets are available in descending order: ultrafast, superfast, veryfast, faster, fast, medium, slow, slower and veryslow. The default preset is medium but you can choose a faster preset:

ffmpeg -i video.webm -preset veryfast video.mp4

Placing the MOOV atom at the beginning

All MP4 files contain a moov atom. The moov atom contains information about the length of the video. If it’s at the beginning it immediately enables a streaming video player to play and scrub the MP4 file. By default FFmpeg places the moov atom at the end of the MP4 file but it can place the moov atom at the beginning with the -movflags faststart option like this:

ffmpeg -i video.webm -movflags faststart video.mp4

Using Levels and Profiles when encoding H.264 video with FFmpeg

To ensure the highest compatibility with older iOS or Android devices you will need to use certain encoding profiles and levels. For example a video encoded with the High Profile and Level 4.2 will work on iPhone 5S and newer but not on older iOS devices.

ffmpeg -i video.webm -movflags faststart -profile:v high -level 4.2 video.mp4

Converting WebM with H.264 video to MP4

In some rare cases the .webm file will contain H.264 video and Vorbis or Opus audio (for example .webm files created using the MediaRecorder API on Chrome 52+). In such cases you don’t have to re-encode the video data since it’s already in the desired H.264 format (re-encoding is also not recommended since you’ll be losing some quality in the process while consuming CPU cycles), so we’re just going to copy over the data.

To copy the video data and transcode the audio in FFmpeg you use the -c:v copy option:

ffmpeg -i video.webm -c:v copy video.mp4

BeautifulSoup | Complete Cheatsheet with Examples

Installation

pip install beautifulsoup4
from bs4 import BeautifulSoup

Creating a BeautifulSoup Object

Parse HTML string:

html = "<p>Example paragraph</p>"
soup = BeautifulSoup(html, 'html.parser')

Parse from file:

with open("index.html") as file:
  soup = BeautifulSoup(file, 'html.parser')

BeautifulSoup Object Types

When parsing documents and navigating the parse trees, you will encounter the following main object types:

Tag

A Tag corresponds to an HTML or XML tag in the original document:

soup = BeautifulSoup('<p>Hello World</p>')
p_tag = soup.p

p_tag.name # 'p'
p_tag.string # 'Hello World'

Tags contain nested Tags and NavigableStrings.

NavigableString

A NavigableString represents text content without tags:

soup = BeautifulSoup('Hello World')
text = soup.string

text # 'Hello World'
type(text) # bs4.element.NavigableString

BeautifulSoup

The BeautifulSoup object represents the parsed document as a whole. It is the root of the tree:

soup = BeautifulSoup('<html>...</html>')

soup.name # '[document]'
soup.head # <head> Tag element

Comment

Comments in HTML are also available as Comment objects:

<!-- This is a comment -->

import re

comment = soup.find(text=re.compile('This is'))
type(comment) # bs4.element.Comment

Knowing these core object types helps when analyzing, searching, and navigating parsed documents.

Searching the Parse Tree

By Name

HTML:

<div>
  <p>Paragraph 1</p>
  <p>Paragraph 2</p>
</div>

Python:

paragraphs = soup.find_all('p')
# <p>Paragraph 1</p>, <p>Paragraph 2</p>

By Attributes

HTML:

<div id="content">
  <p>Paragraph 1</p>
</div>

Python:

div = soup.find(id="content")
# <div id="content">...</div>

By Text

HTML:

<p>This is some text</p>

Python:

p = soup.find(text="This is some text")
# <p>This is some text</p>

Searching with CSS Selectors

CSS selectors provide a very powerful way to search for elements within a parsed document.

Some examples of CSS selector syntax:

By Tag Name

Select all <p> tags:

soup.select("p")

By ID

Select element with ID “main”:

soup.select("#main")

By Class Name

Select elements with class “article”:

soup.select(".article")

By Attribute

Select tags with a “data-category” attribute:

soup.select("[data-category]")

Descendant Combinator

Select paragraphs inside divs:

soup.select("div p")

Child Combinator

Select direct children paragraphs:

soup.select("div > p")

Adjacent Sibling

Select h2 after h1:

soup.select("h1 + h2")

General Sibling

Select h2 after any h1:

soup.select("h1 ~ h2")

By Text

Select elements containing text:

soup.select(":contains('Some text')")

By Attribute Value

Select input with type submit:

soup.select("input[type='submit']")

Pseudo-classes

Select first paragraph:

soup.select("p:first-of-type")

Chaining

Select first article paragraph:

soup.select("article > p:nth-of-type(1)")

Accessing Data

HTML:

<p class="content">Some text</p>

Python:

p = soup.find('p')
p.name # "p"
p.attrs # {"class": "content"}
p.string # "Some text"

The Power of find_all()

The find_all() method is one of the most useful and versatile searching methods in BeautifulSoup.

Returns All Matches

find_all() will find and return a list of all matching elements:

all_paras = soup.find_all('p')

This gives you all paragraphs on a page.

Flexible Queries

You can pass a wide range of queries to find_all():

  • Name – find_all('p')
  • Attributes – find_all('a', class_='external')
  • Text – find_all(text=re.compile('summary'))
  • Limit – find_all('p', limit=2)
  • And more!

Useful Features

Some useful things you can do with find_all():

  • Get a count – len(soup.find_all('p'))
  • Iterate through results – for p in soup.find_all('p'): ...
  • Convert to text – [p.get_text() for p in soup.find_all('p')]
  • Extract attributes – [a['href'] for a in soup.find_all('a')]

Why It’s Useful

In summary, find_all() is useful because:

  • It returns all matching elements
  • It supports diverse and powerful queries
  • It enables easily extracting and processing result data

Whenever you need to get a collection of elements from a parsed document, find_all() will likely be your go-to tool.

Navigating Trees

Traverse up and sideways through related elements.
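
A small example of the most common navigation properties (the markup is made up for illustration):

from bs4 import BeautifulSoup

soup = BeautifulSoup("<div><p>One</p><p>Two</p></div>", "html.parser")
p = soup.find("p")

p.parent.name            # 'div'       -> go up
p.next_sibling           # <p>Two</p>  -> go sideways
list(p.parent.children)  # all direct children of the <div>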

Modifying the Parse Tree

BeautifulSoup provides several methods for editing and modifying the parsed document tree.

HTML:

<p>Original text</p>

Python:

p = soup.find('p')
p.string = "New text"

Edit Tag Names

Change an existing tag name:

tag = soup.find('span')
tag.name = 'div'

Edit Attributes

Add, modify or delete attributes of a tag:

tag['class'] = 'header' # set attribute
tag['id'] = 'main'

del tag['class'] # delete attribute

Edit Text

Change text of a tag:

tag.string = "New text"

Append text to a tag:

tag.append("Additional text")

Insert Tags

Insert a new tag:

new_tag = soup.new_tag("h1")
tag.insert_before(new_tag)

Delete Tags

Remove a tag entirely:

tag.extract()

Wrap/Unwrap Tags

Wrap another tag around:

tag.wrap(soup.new_tag('div'))

Unwrap its contents:

tag.unwrap()

Modifying the parse tree is very useful for cleaning up scraped data or extracting the parts you need.

Outputting HTML

Input HTML:

<p>Hello World</p>

Python:

print(soup.prettify())

# <p>
#  Hello World
# </p>

Integrating with Requests

Fetch a page:

import requests

res = requests.get("https://example.com")
soup = BeautifulSoup(res.text, 'html.parser')
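
As a small end-to-end example (example.com is just a placeholder), fetching a page and extracting all link targets with find_all():

import requests
from bs4 import BeautifulSoup

res = requests.get("https://example.com")
soup = BeautifulSoup(res.text, 'html.parser')

# collect all link targets from the fetched page
links = [a.get("href") for a in soup.find_all("a")]
print(links)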

Parsing Only Parts of a Document

When dealing with large documents, you may want to parse only a fragment rather than the whole thing. BeautifulSoup allows for this using SoupStrainers.

There are a few ways to parse only parts of a document:

Using a SoupStrainer

Limit parsing to just the elements matched by a SoupStrainer:

from bs4 import SoupStrainer

only_tables = SoupStrainer("table")
soup = BeautifulSoup(doc, parse_only=only_tables)

This will parse only the <table> tags from the document.

By Tag Name

Parse only specific tags:

only_divs = SoupStrainer("div")
soup = BeautifulSoup(doc, parse_only=only_divs)

By Function

Pass a function to test if a tag should be parsed:

def is_short_string(string):
  return len(string) < 20

only_short_strings = SoupStrainer(string=is_short_string)
soup = BeautifulSoup(doc, parse_only=only_short_strings)

This parses tags based on their text content.

By Attributes

Parse tags that contain specific attributes:

has_data_attr = SoupStrainer(attrs={"data-category": True})
soup = BeautifulSoup(doc, parse_only=has_data_attr)

Multiple Conditions

You can combine multiple strainers:

strainer = SoupStrainer("div", id="main")
soup = BeautifulSoup(doc, parse_only=strainer)

This will parse only the <div id="main"> element.

Parsing only parts you need can help reduce memory usage and improve performance when scraping large documents.

Dealing with Encoding

When parsing documents, you may encounter encoding issues. Here are some ways to handle encoding:

Specify at Parse Time

Pass the from_encoding parameter when creating the BeautifulSoup object:

soup = BeautifulSoup(doc, from_encoding='utf-8')

This handles any decoding needed when initially parsing the document.

Encode Tag Contents

You can encode the contents of a tag:

tag.string.encode("utf-8")

Use this when outputting tag strings.

Encode Entire Document

To encode the entire BeautifulSoup document:

soup.encode("utf-8")

This returns a byte string with the encoded document.

Pretty Print with Encoding

Specify the encoding when pretty printing:

print(soup.prettify(encoding="utf-8"))

Unicode Dammit

BeautifulSoup’s UnicodeDammit class can detect and convert incoming documents to Unicode:

from bs4 import UnicodeDammit

dammit = UnicodeDammit(doc)
soup = dammit.unicode_markup

This converts even poorly encoded documents to Unicode.

Properly handling encoding ensures your scraped data is decoded and output correctly when using BeautifulSoup.

Django | Debugging Django-App in VS Code

See here how to configure VS Code:

  • Switch to Run view in VS Code (using the left-side activity bar or F5). You may see the message “To customize Run and Debug create a launch.json file”. This means that you don’t yet have a launch.json file containing debug configurations. VS Code can create that for you if you click on the create a launch.json file link. (Screenshot: initial view of the debug panel.)
  • Select the link and VS Code will prompt for a debug configuration. Select Django from the dropdown and VS Code will populate a new launch.json file with a Django run configuration. The launch.json file contains a number of debugging configurations, each of which is a separate JSON object within the configuration array.
  • Scroll down to and examine the configuration with the name “Python: Django”:
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python: Django",
      "type": "python",
      "request": "launch",
      "program": "${workspaceFolder}\\manage.py",
      "args": ["runserver"],
      "django": true,
      "justMyCode": true
    }
  ]
}
  • This configuration tells VS Code to run "${workspaceFolder}/manage.py" using the selected Python interpreter and the arguments in the args list. Launching the VS Code debugger with this configuration, then, is the same as running python manage.py runserver in the VS Code Terminal with your activated virtual environment. (You can add a port number like "5000" to args if desired.) The "django": true entry also tells VS Code to enable debugging of Django page templates, which you see later in this tutorial.
  • Test the configuration by selecting the Run > Start Debugging menu command, or selecting the green Start Debugging arrow next to the list (F5). (Screenshot: the start debugging/continue arrow on the debug toolbar.)
  • Ctrl+click the http://127.0.0.1:8000/ URL in the terminal output window to open the browser and see that the app is running properly.
  • Close the browser and stop the debugger when you’re finished. To stop the debugger, use the Stop toolbar button (the red square) or the Run > Stop Debugging command (Shift+F5).
  • You can now use Run > Start Debugging at any time to test the app, which also has the benefit of automatically saving all modified files.

Grafana: Getting Started

Run Grafana

Run Docker Image

docker run -d --name=grafana -p 3000:3000 grafana/grafana-enterprise:10.0.0-ubuntu

Then, open your browser at http://localhost:3000 and log in with admin/admin.

After Login, you have the possibility to change the password. We skip here for now.