Developer Blog

Tips and tricks for developers and IT enthusiasts

NestJS | Getting started – Part 1

Introduction

NestJS (just Nest from here on out) is a Node framework for building server-side applications. It is not only a framework but also a platform that meets many backend application needs, like writing APIs, building microservices, or doing real-time communication through web sockets.

Nest is also heavily influenced by Angular, and you will immediately find its concepts familiar. The creators of Nest strived to make the learning curve as small as possible, while still taking advantage of many higher level concepts such as modules, controllers, and dependency injection.

Installation

Install NodeJS

Download NodeJS from the official website (nodejs.org) and install it as described in the documentation there.

For example, on macOS using Homebrew

brew install node

Or download the package

curl "https://nodejs.org/dist/latest/node-${VERSION:-$(wget -qO- https://nodejs.org/dist/latest/ | sed -nE 's|.*>node-(.*)\.pkg</a>.*|\1|p')}.pkg" > "$HOME/Downloads/node-latest.pkg" && sudo installer -store -pkg "$HOME/Downloads/node-latest.pkg" -target "/"

Install NestJS

npm i -g @nestjs/cli

Create server App

Create new server App

nest new demo.server

Start server App

cd demo.server
npm run start:dev

Now open a browser at http://localhost:3000

App Structure

main.ts

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}
bootstrap();

app.module.ts

import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';

@Module({
  imports: [],
  controllers: [AppController],
  providers: [AppService],
})
export class AppModule {}

app.controller.ts

import { Controller, Get } from '@nestjs/common';
import { AppService } from './app.service';

@Controller()
export class AppController {
  constructor(private readonly appService: AppService) {}

  @Get()
  getHello(): string {
    return this.appService.getHello();
  }
}

app.service.ts

import { Injectable } from '@nestjs/common';

@Injectable()
export class AppService {
  getHello(): string {
    return 'Hello World!';
  }
}

Add Functionality

Create service and controller

nest g service missions
nest g controller missions

Modify mission service

import { Injectable } from '@nestjs/common';

@Injectable()
export class MissionsService {
  missions: Mission[] = [
    { id: 1, title: 'Rescue cat stuck in asteroid', reward: 500, active: true },
    { id: 2, title: 'Escort Royal Fleet', reward: 5000, active: true },
    { id: 3, title: 'Pirates attacking the station', reward: 2500, active: false },
  ];

  async getMissions(): Promise<Mission[]> {
    return this.missions;
  }
}
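The service references a Mission type that nest g does not generate. A minimal model could look like this (the file name mission.model.ts is an assumption, as is the example instance):

```typescript
// mission.model.ts (hypothetical file name)
export interface Mission {
  id: number;
  title: string;
  reward: number;
  active: boolean;
}

// Example instance matching the seed data in MissionsService
const example: Mission = {
  id: 1,
  title: 'Rescue cat stuck in asteroid',
  reward: 500,
  active: true,
};
console.log(example.title);
```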

Modify mission controller

import { Controller, Get } from '@nestjs/common';
import { MissionsService } from './missions.service';

@Controller('missions')
export class MissionsController {
  constructor(private missionsService: MissionsService) {}

  @Get()
  getMissions() {
    return this.missionsService.getMissions();
  }
}

Open in the browser: http://localhost:3000/missions

Create Frontend App

Create new frontend App

ionic start demo.frontend sidemenu

Working with a Database and TypeORM

Create / Sync database with schema

Add command to package.json

"scripts": {
    "typeorm": "ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js"
}

Then run:

npm run typeorm schema:sync
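The TypeORM CLI also needs a connection configuration to know which database to sync. A minimal sketch (the file name ormconfig.json and all values below are assumptions; adjust them to your database):

```json
{
  "type": "postgres",
  "host": "localhost",
  "port": 5432,
  "username": "postgres",
  "password": "postgres",
  "database": "demo",
  "entities": ["src/**/*.entity{.ts,.js}"],
  "synchronize": false
}
```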

TypeORM Commands

schema:sync – Synchronizes your entities with the database schema. It runs schema update queries on all connections you have. To run update queries on a specific connection, use the -c option.
schema:log – Shows the SQL to be executed by the schema:sync command. It shows the SQL log only for your default connection. To run update queries on a specific connection, use the -c option.
schema:drop – Drops all tables in the database on your default connection. To drop tables of a specific connection's database, use the -c option.
query – Executes the given SQL query on the default connection. Specify a connection name to run the query on a specific connection.
entity:create – Generates a new entity.
subscriber:create – Generates a new subscriber.
migration:create – Creates a new migration file. [Alias: migrations:create]
migration:generate – Generates a new migration file with the SQL that needs to be executed to update the schema. [Alias: migrations:generate]
migration:run – Runs all pending migrations. [Alias: migrations:run]
migration:show – Shows all migrations and whether they have been run or not.
migration:revert – Reverts the last executed migration. [Alias: migrations:revert]
version – Prints the TypeORM version this project uses.
cache:clear – Clears all data stored in the query runner cache.
init – Generates an initial TypeORM project structure. If a name is specified, the files are created inside a directory with that name; otherwise they are created in the current directory.

Additional readings

Frontend | Toolbox

Installation Overview

NodeJS

Angular

Ionic

Install

npm install -g @ionic/cli

Create App

ionic start Getting-Started tabs --type react

Start App

ionic serve

React / ReactJS

Install

React Native

Stencil

npm init stencil
npm install --save-exact @stencil/core@latest 
npm install --save-dev @types/jest@26.0.12 jest@26.4.2 jest-cli@26.4.2 
npm install --save-dev @types/puppeteer@3.0.1 puppeteer@5.2.1
npm test
npm start

Gatsby

NextJS

NestJS

Useful Libraries

  • VideoJS (Video) – HTML5 player framework
  • Animate on Scroll (Animation) – GitHub: michalsnik/aos
  • ScrollMagic (Animation)
  • ScrollRevealJS (Animation) – GitHub: jlmakes/scrollreveal
  • PixiJS (Graphics) – GitHub: pixijs/pixi.js
  • Anime (Animation) – GitHub: juliangarnier/anime
  • ThreeJS (Graphics) – GitHub: mrdoob/three.js
  • animate.css (Animation) – GitHub: animate-css/animate.css
  • HowlerJS (Audio) – audio library
  • RevealJS (Presentation) – HTML presentation framework
  • ChartJS (Charts)
  • anime.js
  • granim.js (Graphics) – create fluid and interactive gradient animations – GitHub: sarcadass/granim.js
  • Multiple.js – sharing a background across multiple elements using CSS – GitHub: NeXTs/Multiple.js
  • choreographer-js – a simple library to take care of complicated animations – GitHub: christinecha/choreographer-js
  • cleave.js – format your <input/> content while typing – GitHub: nosir/cleave.js
  • premonish – GitHub: mathisonian/premonish
  • Splitting (Animation) – GitHub: shshaw/splitting

More to read

Rust | Getting Started

Introduction

From Wikipedia, the free encyclopedia:

Rust is a multi-paradigm programming language focused on performance and safety, especially safe concurrency. Rust is syntactically similar to C++, and provides memory safety without using garbage collection.

Rust was originally designed by Graydon Hoare at Mozilla Research.

It has gained increasing use in industry and is now Microsoft’s language of choice for secure and safety-critical software components.

Rust has been the “most loved programming language” in the Stack Overflow Developer Survey every year since 2016.

Rust can be used in many different areas.

Read more

Installation

Rustup

Download install script and run it

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Modify .bashrc to add Rust path to PATH

source $HOME/.cargo/env

Another way to install Rust on macOS

brew install rust

Create and run your first App

Create the app

$ cargo new hello_world
$ cd hello_world

Show folder structure

$ tree .
.
├── Cargo.lock
├── Cargo.toml
└── src
    └── main.rs
1 directory, 3 files

Show main source file

fn main() {
    println!("Hello, world!");
}

Build your app

$ cargo build
Compiling hello_world v0.1.0 (.../hello_world)
Finished dev [unoptimized + debuginfo] target(s) in 6.32s

Or build a production ready version

$ cargo build --release
Finished release [optimized] target(s) in 0.19s

Run your app

$ cargo run
Finished dev [unoptimized + debuginfo] target(s) in 0.04s
Running `target/debug/hello_world`
Hello, world!

Add functionality to your app

Add Dependencies

Let’s add a dependency to our application. You can find all sorts of libraries on crates.io, the package registry for Rust. In Rust, we often refer to packages as “crates.”

In this project, we’ll use a crate called ferris-says.

In our Cargo.toml file we’ll add this information (that we got from the crate page):

[dependencies]
ferris-says = "0.1"
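For context, the complete Cargo.toml might then look like this (the package name and version come from cargo new; the edition value is an assumption and depends on your toolchain):

```toml
[package]
name = "hello_world"
version = "0.1.0"
edition = "2018"

[dependencies]
ferris-says = "0.1"
```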

Modify main source

Now let’s write a small application with our new dependency. In our main.rs, add the following code:

use ferris_says::say; // from the previous step
use std::io::{stdout, BufWriter};

fn main() {
    let stdout = stdout();
    let message = String::from("Hello fellow Rustaceans!");
    let width = message.chars().count();

    let mut writer = BufWriter::new(stdout.lock());
    say(message.as_bytes(), width, &mut writer).unwrap();
}

Run App

$ cargo build
    Updating crates.io index
  Downloaded object v0.20.0
  Downloaded textwrap v0.11.0
  Downloaded adler v0.2.3
  Downloaded ansi_term v0.11.0
  Downloaded miniz_oxide v0.4.1
  Downloaded gimli v0.22.0
  Downloaded strsim v0.8.0
  Downloaded error-chain v0.10.0
  Downloaded vec_map v0.8.2
  Downloaded clap v2.33.3
  Downloaded smallvec v0.4.5
  Downloaded ferris-says v0.1.2
  Downloaded backtrace v0.3.50
  Downloaded rustc-demangle v0.1.16
  Downloaded addr2line v0.13.0
  Downloaded 15 crates (1.4 MB) in 1.65s
   Compiling libc v0.2.76
   Compiling bitflags v1.2.1
   Compiling gimli v0.22.0
   Compiling adler v0.2.3
   Compiling rustc-demangle v0.1.16
   Compiling unicode-width v0.1.8
   Compiling object v0.20.0
   Compiling cfg-if v0.1.10
   Compiling strsim v0.8.0
   Compiling vec_map v0.8.2
   Compiling ansi_term v0.11.0
   Compiling smallvec v0.4.5
   Compiling textwrap v0.11.0
   Compiling miniz_oxide v0.4.1
   Compiling addr2line v0.13.0
   Compiling atty v0.2.14
   Compiling backtrace v0.3.50
   Compiling clap v2.33.3
   Compiling error-chain v0.10.0
   Compiling ferris-says v0.1.2
   Compiling hello_world v0.1.0 (.../hello_world)
    Finished dev [unoptimized + debuginfo] target(s) in 14.73s

Run your app

$ cargo run
     Finished dev [unoptimized + debuginfo] target(s) in 0.14s
Running `target/debug/hello_world`
----------------------------
| Hello fellow Rustaceans! |
----------------------------
              \
               \
                  _~^~^~_
              \) /  o o  \ (/
                '_   -   _'
                / '-----' \

Next steps

Read Getting Started on the Rust homepage

Explore Learn Rust

Next Readings

Readings

Exercises

Azure Databricks | Working with Unit Tests

Introduction

Problem

Like any other program, Azure Databricks notebooks should be tested automatically to ensure code quality.

Using standard Python test tools is not easy, because these tools are based on Python files in a file system, and a notebook does not correspond to a Python file.

Solution

To enable automated testing with unittest (documentation), we proceed as follows:

  • Create a test class that contains all the tests you want
  • Run all the defined tests

Create Notebook with the Code

We will create a simple Notebook for our test.

This notebook will implement a simple calculator, so that we can test the basic calculator operations like add and multiply.

Create a new Notebook with the name Calculator:

class Calculator:

    def __init__(self, x=10, y=8):
        self.x = x
        self.y = y

    def add(self, x=None, y=None):
        if x is None: x = self.x
        if y is None: y = self.y
        return x + y

    def subtract(self, x=None, y=None):
        if x is None: x = self.x
        if y is None: y = self.y
        return x - y

    def multiply(self, x=None, y=None):
        if x is None: x = self.x
        if y is None: y = self.y
        return x * y

    def divide(self, x=None, y=None):
        if x is None: x = self.x
        if y is None: y = self.y
        if y == 0:
            raise ValueError('cannot divide by zero')
        return x / y


To use this class, write the following lines:

c = Calculator()
print(c.add(20, 10), c.subtract(20, 10), c.multiply(20, 10), c.divide(20, 10))

Create Notebook with the Tests

Create a new Notebook in the same folder with the name Calculator.Tests.

The name is not important, but it is convenient to name the test program like the program to be tested with the suffix ‘Tests’.

Create the first command to import the Calculator notebook. In Databricks, one notebook is included in another with the %run magic command (the relative path assumes both notebooks are in the same folder):

%run ./Calculator
Create the Test Class

import unittest

class CalculatorTests(unittest.TestCase):
  
  @classmethod
  def setUpClass(cls):
    cls.app = Calculator()

  def setUp(self):
    # print("this is setup for every method")
    pass

  def test_add(self):
    self.assertEqual(self.app.add(10, 5), 15)

  def test_subtract(self):
    self.assertEqual(self.app.subtract(10,5), 5)
    self.assertNotEqual(self.app.subtract(10,2), 4)

  def test_multiply(self):
    self.assertEqual(self.app.multiply(10,5), 50)

  def tearDown(self):
    # print("teardown for every method")
    pass

  @classmethod
  def tearDownClass(cls):
    # print("this is teardown class")
    pass

Create the code to run the tests

suite = unittest.TestLoader().loadTestsFromTestCase(CalculatorTests)
unittest.TextTestRunner(verbosity=2).run(suite)
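In a CI setup you usually want the notebook (or job) to fail when a test fails. TextTestRunner returns a result object whose wasSuccessful() method can be checked; here is a self-contained sketch (with a stripped-down Calculator so it runs on its own):

```python
import unittest

class Calculator:
    def add(self, x, y):
        return x + y

class CalculatorTests(unittest.TestCase):
    def test_add(self):
        self.assertEqual(Calculator().add(10, 5), 15)

suite = unittest.TestLoader().loadTestsFromTestCase(CalculatorTests)
result = unittest.TextTestRunner(verbosity=2).run(suite)

# Raise an error (and thereby fail the job) if any test failed
assert result.wasSuccessful(), "unit tests failed"
```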

Azure | Databricks Cookbook

Databricks CLI

Export all Notebooks

databricks workspace list | ForEach { databricks workspace export_dir /$_ $_ }

Troubleshooting

Problem

Error in SQL statement: AnalysisException: Can not create the managed table('`demo`'). The associated location('dbfs:/user/hive/warehouse/demo') already exists.;

Solution

dbutils.fs.rm("dbfs:/user/hive/warehouse/demo/", True)

Handling Complex Data Scenarios

When working with nested data structures in Databricks, the explode() function is essential but comes with hidden pitfalls. Here are key insights for advanced users:

1. The Null Trap in explode()

The standard explode() function silently drops rows with empty arrays or null values – a common pain point in production pipelines. Consider this dataset:

data = [
    (1, "Luke", ["baseball", "soccer"]),
    (2, "Lucy", None),
    (3, "Eve", [])
]

df = spark.createDataFrame(data, ["id", "name", "likes"])

Standard explode behavior

Output retains only Luke’s exploded rows

from pyspark.sql.functions import explode

df.select("id", "name", explode("likes")).show()

Solution: explode_outer()

Preserves Lucy (null) and Eve (empty array) with null values

from pyspark.sql.functions import explode_outer

df.select("id", "name", explode_outer("likes")).show()
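The two behaviors can be illustrated without a cluster, using a toy Python analogue of explode / explode_outer (plain Python mimicking the semantics, not Spark itself):

```python
data = [
    (1, "Luke", ["baseball", "soccer"]),
    (2, "Lucy", None),
    (3, "Eve", []),
]

def explode(rows):
    # like Spark's explode: rows with None or empty arrays are dropped
    return [(i, n, v) for (i, n, vals) in rows for v in (vals or [])]

def explode_outer(rows):
    # like explode_outer: such rows are kept with a None placeholder
    return [(i, n, v) for (i, n, vals) in rows for v in (vals if vals else [None])]

print(len(explode(data)))        # 2 rows: only Luke's hobbies survive
print(len(explode_outer(data)))  # 4 rows: Lucy and Eve kept with None
```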

2. Advanced Array Handling

For complex nested structures, combine explode_outer() with struct typing:

from pyspark.sql.types import StructType, StructField, StringType

schema = StructType([
    StructField("sport", StringType()),
    StructField("level", StringType())
])

from pyspark.sql.functions import array, struct, lit, explode_outer

df.withColumn("nested", array(struct(lit("baseball").alias("sport"),
                                     lit("pro").alias("level")))) \
  .select(explode_outer("nested")) \
  .select("col.*") \
  .show()

3. Z-Order Optimization for Exploded Data

When working with large exploded datasets, optimize Delta Lake storage:

(df
 .write
 .format("delta")
 .option("delta.optimizeWrite", "true")
 .option("delta.dataSkippingNumIndexedCols", "3")
 .saveAsTable("exploded_data")
)

spark.sql("OPTIMIZE exploded_data ZORDER BY (id, sport)")

4. Performance Comparison

Operation            Time (10M rows)    Data Skipped
Standard explode()   45s                12

5. Best Practices

  • Always use explode_outer() unless explicitly filtering nulls
  • Combine with coalesce() for default values:
    explode_outer(coalesce(col("likes"), array(lit("unknown"))))
  • For map types, use explode_outer(map_from_arrays()) pattern
  • Monitor with DESCRIBE HISTORY for Delta Lake optimizations

These techniques ensure data integrity while maintaining query performance, crucial for production-grade implementations. The key is understanding how null handling interacts with Delta Lake’s optimization features – a critical insight for advanced users building reliable data pipelines.

Azure | Working with Widgets

TL;DR

Don’t want to read the post? Then explore this Azure notebook.

Requirements

Define the needed modules and functions

from datetime import datetime

import pyspark.sql.functions as F

Create DataFrame for this post:

df = spark.sql("select * from diamonds")
df.show()

Working with Widgets

Default Widgets

dbutils.widgets.removeAll()

dbutils.widgets.text("W1", "1", "Text")
dbutils.widgets.combobox("W2", "3", [str(x) for x in range(1, 10)], "Combobox")
dbutils.widgets.dropdown("W3", "4", [str(x) for x in range(1, 10)], "Dropdown")

Multiselect Widgets

values = [ f"Square of {x} is {x*x}" for x in range(1, 10)]
dbutils.widgets.multiselect("W4", values[0], values, "Multi-Select")

Monitor the changes when selecting values

print("Selection: ", dbutils.widgets.get("W4"))
print("Current Time =", datetime.now().strftime("%H:%M:%S"))  # format string assumed
Filter Query by widgets

Prepare widgets

dbutils.widgets.removeAll()

df = spark.sql("select * from diamonds")

vals = [ str(x[0]) for x in df.select("cut").distinct().orderBy("cut").collect() ]
dbutils.widgets.dropdown("Cuts", vals[0], vals)

vals = [ str(x[0]) for x in df.select("carat").distinct().orderBy("carat").collect() ]
dbutils.widgets.dropdown("Carat", vals[0], vals)

Now, change some values

filter_cut = dbutils.widgets.get("Cuts")
spark.sql(f"select * from diamonds where cut='{filter_cut}'").show()

Power Query | Cookbook

Working with Column Headers

Changing the case

Upper case / lower case / proper case

= Table.TransformColumnNames(RenameColumns, Text.Upper)
= Table.TransformColumnNames(RenameColumns, Text.Lower)
= Table.TransformColumnNames(RenameColumns, Text.Proper)

Removing specific characters (e.g. _)

= Table.TransformColumnNames(Source,each Text.Proper(Replacer.ReplaceText( _ , "_", " ")))

Splitting into words

= Table.TransformColumnNames(Source, each Text.Combine(
                    Splitter.SplitTextByCharacterTransition({"a".."z"},{"A".."Z"})(_), " "))

As a function

(columnNames as text) =>
let 
    splitColumn = Splitter.SplitTextByCharacterTransition({"a".."z"}, {"A".."Z"})(columnNames)
in
    Text.Combine(splitColumn, " ")
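Assuming the function above is saved as a query named fnSplitColumnName (a hypothetical name), it can be applied to all column headers like this:

```
= Table.TransformColumnNames(Source, fnSplitColumnName)
```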

Transforming Data

Pivoting rows by group

Task

When data is delivered in which the grouping attribute is stored in the rows, so that each record spans several rows, a more compact representation is usually desired.

For the record with the value “Daten 1”, four rows with different values in GRUPPE and WERT are therefore delivered.

Problem

What we want, however, is a more compact representation with the existing groups as columns:

The task is therefore to transform the delivered data:

A sample file is available here; the final result is available here. Save both files in the folder C:\TMP so that the reference in Query.xlsx to the data file Daten.xlsx is correct.

Step 1: Prepare the data

In the first step, we create a new Excel file and access the prepared data via Power Query.

To do this, choose Get Data / From File / From Workbook on the Data ribbon and select the desired file:

A sample file is available here.

A click on Import takes you to the Navigator.

In the Navigator you will see three different elements:

  • DATEN: the Excel table in the worksheet. It contains exactly the desired data.
  • ERGEBNIS: the Excel table that contains the expected result.
  • Beispieldaten: the worksheet containing the two Excel tables.

Select the element DATEN and click Transform Data.

Step 2: Pivot the column

We want the values of the GRUPPE column to become new columns.

Click on the GRUPPE column and choose Pivot Column on the Transform ribbon under Any Column:

The values for the new columns (Gruppe 1, Gruppe 2, ..) come from the WERT column (Wert 11, Wert 12, ..):

We want to take over the values themselves and not apply an aggregation function (Sum, Max, Count, ..), as is otherwise common in pivot tables.

To do this, click Advanced options and select the entry Don't Aggregate:

Then click OK:

Finally, we close the Power Query Editor:

Power BI | Importing multiple files

Getting Started

To import multiple files from a folder, the following two steps have to be done:

  • create a list of all files in the folder
  • for each file: read the file and add it to the result table

When importing files with Power BI, you can do both tasks together or each task separately.

The decision which way to go is made after selecting the folder:

You can choose between four possibilities. Strictly speaking, there are only two, both with the same two final steps.

  1. Load or Combine files
    • Load means the list of files will be loaded as a table.
      Technically, two things are done:
      • a connection is created in the model
      • the data (the list of files) is loaded into the model
  2. Just Load or Transform data
    • Transform means you will end up in the Power Query Editor, so you can add additional modifications.

In order to better understand the process, we show the two steps separately, one after the other.

Load the list of files from folder

Start Power BI and close the start screen, if it is still visible.

Then, click on the Get Data Button in the Home Ribbon

If you click on the small down arrow on the Get Data Button, you have to select the option More

Now, select Folder and click on Connect

Enter the folder (or Browse…) with the files to be loaded and click Ok

After this, Power Query will create a table with all files in the folder.

Now, here is the point to decide, which way to go:

  • Combine
    • Read the list of files and combine all files into one table
  • Load
    • Just keep the list of files and return to Power BI
  • Transform
    • Keep the list of files and open the Power Query Editor

We will choose to load the files, because we will do each step later separately

In Power BI Desktop, click on the Data Icon to show the resulting table.

Combine all files into one table

To add additional steps, we need the Power Query Editor.

So click on the 3 dots at the right side of the Query name Samples and choose Edit Query

Now, you are in the Power Query Editor

To combine all files, just click on the small icon beneath the header of the content column:

In the following dialog, you will see all files and a preview of the content of each file. For Excel files, you will see the sheet names and the names of the Excel tables in the sheets.

Click on OK to start the import.

When Power Query is done with this step, you will see the result:

The previous query Samples is still there, but now with the content of all files.

Additionally, you will see four other elements:

How combining the files is done

Each query consists of a list of steps, which are processed one after another. Normally, each step takes the result (data) of the previous step, performs some modifications, and produces a result (data) for the next step.

So each step modifies the whole data of the previous step. Describing a modification means either

  • do one thing, e.g. add an additional column

or

  • do something for each row in the data
    This means we need some sort of loop, like “do xyz for each row in the data”.

Let's see how Power Query solves this task.

In the query Samples, examine the step Invoke Custom Function1.

The step is performing the M function Table.AddColumn.

This function needs three parameters:

  • table: normally the name of the previous step.
    In our example: #"Filtered Hidden Files1"
  • newColumnName: the name of the column to be added:
    "Transform File"
  • columnGenerator: a function that is called for each row of the input table and creates the content of the new column:
    each #"Transform File"([Content])

This results in the following procedure:

  • for each row of the list of files (the output of step #"Filtered Hidden Files1")
  • get the content of the column Content (this will be the parameter for the function call)
  • call the function #"Transform File"([Content]) to create the new column, passing one parameter: the value of the column [Content]
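Put together, the generated step typically looks like this single line of M (reconstructed from the parameters described above; the exact step names may differ in your file):

```
#"Invoke Custom Function1" = Table.AddColumn(#"Filtered Hidden Files1", "Transform File", each #"Transform File"([Content]))
```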

Helper Queries (Required)

This is the required function that creates the column content for each file.

Helper queries (Optional)

For the resulting query Samples to work, only the function definition is required.

But Power Query adds some additional elements to test the function and show the result:

Create a parameter that is used in the query Transform Sample File and define the current value Sample File.

Define a value for the parameter. Here, the first row of the list of files is used.

Create a query that uses an Excel workbook as input. The name of the Excel file is specified as a parameter.

In this query, the previously created parameter Parameter1 is used as the parameter (too many occurrences of the word parameter, I know :)).

Importing multiple files with different formats

If the selected folder contains files with different formats, the result is not what you might expect:

The list of files contains all files, both csv files and xls files.

When combining the files, you can choose between the files. So first take a look at a csv file:

The csv file looks as expected:

But the xls files looks strange:

But let's try. Click on OK to combine all files.

But, looking at the resulting query, the data of the xls files still looks strange:

To understand this, take a look at the created transform function:

The crucial instruction is line 2:

Source = Csv.Document(Parameter3,[Delimiter=",", Columns=10, Encoding=1252, QuoteStyle=QuoteStyle.None]),

The source document (each file in the list of files) is interpreted as csv file.

So, the xls files are also read in as csv files. This leads to the strange result.

You can fix this by adding an additional filter step in the query to select only csv files:

Elixir | Getting Started with Elixir and Phoenix

Installation

Erlang / Elixir

On Windows

Installation packages are available at Erlang / Downloads (version 24.0) and Elixir / Downloads (web installer).

Creating a first application

Create an application named ‘app’.

The ‘live’ template is used. It uses Phoenix.LiveView and makes building web applications easier.

mix phx.new --live app

Building the frontend

cd app
cd assets 
npm install 
node node_modules/webpack/bin/webpack.js --mode development

Setting up and starting a database instance

By default, Elixir and Phoenix use a PostgreSQL database.

The easiest way to get a working PostgreSQL database is to use Docker.

To do this, create a file docker-compose.yml:

version: '3.5'

networks:
  postgres:
    name: ${POSTGRES_CONTAINER:-workshop_elixir_postgres}
    driver: bridge

volumes:
    postgres:
      name: postgres

services:
  postgres:
    container_name: ${POSTGRES_CONTAINER:-workshop_elixir_postgres}
    image: postgres
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-postgres}
      PGDATA: /data/postgres

    volumes:
       - postgres:/data/postgres

    ports:
      - "5432:5432"

    networks:
      - postgres

    restart: unless-stopped

Start the database in a separate window with the following command:

docker compose up
[+] Running 14/14
 - db Pulled
   - b4d181a07f80 Already exists
   - 46ca1d02c28c Pull complete
   - a756866b5565 Pull complete
   - 36c49e539e90 Pull complete
   - 664019fbcaff Pull complete 
   - 727aeee9c480 Pull complete
   - 796589e6b223 Pull complete
   - 6664992e747d Pull complete
   - 0f933aa7ccec Pull complete
   - 99b5e5d88b32 Pull complete
   - a901b82e6004 Pull complete
   - 625fd35fd0f3 Pull complete
   - 9e37bf358a5d Pull complete
[+] Running 1/1
 - Container elixis_postgres  Started
Attaching to elixis_postgres
elixis_postgres  | The files belonging to this database system will be owned by user "postgres".
elixis_postgres  | This user must also own the server process.
...
...
...
elixis_postgres  | 2021-07-12 15:01:08.042 UTC [1] LOG:  database system is ready to accept connections

Setting up the database

Set the database connection parameters in the file config/dev.exs.

We use the same values that we used in the docker-compose.yml file:

POSTGRES_USER → username
POSTGRES_PASSWORD → password
POSTGRES_DB → database
config :app, App.Repo,
  username: "postgres",
  password: "postgres",
  database: "playground",
  hostname: "localhost",
  show_sensitive_data_on_connection_error: true,
  pool_size: 10

Create the database

mix ecto.create

Starting the web server

mix phx.server

Links

Elixir and the Web

Elixir and Databases

  • Ecto – a domain-specific language for writing queries and interacting with databases (GitHub)

Tips for learning Elixir

Screencasts

Exercises

Books

Copyright © 2025 | Powered by WordPress | Aasta Blog theme by ThemeArile