<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Blog of Silvio Wangler]]></title><description><![CDATA[Thoughts, stories and ideas.]]></description><link>https://wangler.io/</link><image><url>https://wangler.io/favicon.png</url><title>Blog of Silvio Wangler</title><link>https://wangler.io/</link></image><generator>Ghost 5.88</generator><lastBuildDate>Tue, 07 Apr 2026 18:49:06 GMT</lastBuildDate><atom:link href="https://wangler.io/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Running Artifactory 7 and Postgres using Docker Compose V2]]></title><description><![CDATA[You want to run Artifactory 7 using Docker Compose without using the rather strange installation process that JFrog provides? This blog post is your starting point.]]></description><link>https://wangler.io/running-artifactory-7-using-docker-compose-v2/</link><guid isPermaLink="false">6425871d91457c0001d40d57</guid><category><![CDATA[docker]]></category><category><![CDATA[Artifactory]]></category><dc:creator><![CDATA[Silvio Wangler]]></dc:creator><pubDate>Thu, 30 Mar 2023 13:00:00 GMT</pubDate><media:content url="https://wangler.io/content/images/2023/03/wood-working-g7d85531f9_1280.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://wangler.io/content/images/2023/03/wood-working-g7d85531f9_1280.jpg" alt="Running Artifactory 7 and Postgres using Docker Compose V2"><p>When JFrog released Artifactory 7, they changed the way you can install Artifactory quite a bit by introducing some sort of installation script for the Docker Compose setup. 
The documentation has always been a bit brief, and therefore I never understood why I needed to download a TAR file and then run a script on a server just to get a single instance of Artifactory up and running with Docker Compose.</p><p>Since then I always felt a bit lost, and honestly I did not want to spend time investigating how the &#xAB;new installation process&#xBB; works and why it became more difficult. In 2023 JFrog released this installation video that explains the simplest setup using Artifactory 7 with Postgres.</p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/mRLkXBJtqrM?start=136&amp;feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="How to install Artifactory and Xray with Docker Compose?"></iframe></figure><p>But again, that video did not help me much, since I still need to run a script, and that script produces two Docker Compose YAML files: one for Artifactory and another one for Postgres. That&apos;s not what I&apos;m looking for. Furthermore, most, if not all, examples related to Docker Compose refer to version 6 of Artifactory.</p><p>So this time I started digging. The intention of this blog post is to give experienced developers and/or ops a starting point: a Docker Compose YAML file that works and can simply be developed further.</p><p>Let&apos;s start with the <code>.env</code> file that holds global configuration values. First of all, we want to use Artifactory <code>7.55.9</code>, which is the latest version as of this writing. With <code>ROOT_DATA_DIR</code> we set the root path for the Docker volumes (Postgres &amp; Artifactory data) used to persist application data. Last but not least, we define <code>8080</code> as the port on which Artifactory accepts external calls.</p><figure class="kg-card kg-code-card"><pre><code>ARTIFACTORY_VERSION = 7.55.9
ROOT_DATA_DIR = /opt/jfrog/artifactory/volumes
JF_ROUTER_ENTRYPOINTS_EXTERNALPORT = 8080</code></pre><figcaption>.env</figcaption></figure><p>Then let&apos;s take a look at our <code>docker-compose.yml</code>. As mentioned before it contains a Postgres database &#xAB;<code>postgres</code>&#xBB; and Artifactory &#xAB;<code>artifactory</code>&#xBB;.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">version: &apos;3.9&apos;
services:
    postgres:
        image: postgres:13.9-alpine
        container_name: postgresql
        environment:
            - POSTGRES_DB=artifactory
            - POSTGRES_USER=artifactory
            - POSTGRES_PASSWORD=gravis
        ports:
            - &quot;127.0.0.1:5432:5432&quot;
        volumes:
            - ${ROOT_DATA_DIR}/postgres/var/data/postgres/data:/var/lib/postgresql/data
            - /etc/localtime:/etc/localtime:ro
        restart: always
        deploy:
            resources:
                limits:
                    cpus: &quot;1.0&quot;
                    memory: 500M
        logging:
            driver: json-file
            options:
                max-size: &quot;50m&quot;
                max-file: &quot;10&quot;
        ulimits:
            nproc: 65535
            nofile:
                soft: 32000
                hard: 40000
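        # Optional addition (not part of the original post): a healthcheck lets
        # dependent services wait until Postgres actually accepts connections.
        # healthcheck:
        #     test: [&quot;CMD-SHELL&quot;, &quot;pg_isready -U artifactory -d artifactory&quot;]
        #     interval: 10s
        #     timeout: 5s
        #     retries: 5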

    artifactory:
        image: releases-docker.jfrog.io/jfrog/artifactory-oss:${ARTIFACTORY_VERSION}
        container_name: artifactory
        environment:
            - JF_ROUTER_ENTRYPOINTS_EXTERNALPORT=${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}
        ports:
            - &quot;127.0.0.1:${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}:${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}&quot; # for router communication
#            - 8081:8081 # for artifactory communication
        volumes:
            - ${ROOT_DATA_DIR}/artifactory/var:/var/opt/jfrog/artifactory
            - /etc/localtime:/etc/localtime:ro
        restart: always
        logging:
            driver: json-file
            options:
                max-size: &quot;50m&quot;
                max-file: &quot;10&quot;
        deploy:
            resources:
                limits:
                    cpus: &quot;2.0&quot;
                    memory: 4G
        ulimits:
            nproc: 65535
            nofile:
                soft: 32000
                hard: 40000
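        # Addition (not part of the original post): tell Compose to start the
        # Postgres container before Artifactory.
        depends_on:
            - postgres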
</code></pre><figcaption>docker-compose.yml</figcaption></figure><p>Additionally you need to configure Artifactory by providing a <code>system.yaml</code> configuration file.</p><figure class="kg-card kg-code-card"><pre><code>shared:
    node:
        ip: 127.0.0.1
        id: artifactory-one
        name: artifactory-one
    database:
        type: postgresql
        driver: org.postgresql.Driver
        password: gravis
        username: artifactory
        url: jdbc:postgresql://postgres:5432/artifactory
router:
    entrypoints:
        externalPort: 8080
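# Note (addition, not part of the original post): system.yaml properties can
# alternatively be supplied as JF_ environment variables, e.g.
# JF_SHARED_DATABASE_PASSWORD maps to shared.database.password.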
</code></pre><figcaption>volumes/artifactory/var/etc/system.yaml</figcaption></figure><p>This is what your directory layout should look like.</p><pre><code class="language-shell">.
&#x251C;&#x2500;&#x2500; .env
&#x251C;&#x2500;&#x2500; docker-compose.yml
&#x2514;&#x2500;&#x2500; volumes
    &#x2514;&#x2500;&#x2500; artifactory
        &#x2514;&#x2500;&#x2500; var
            &#x2514;&#x2500;&#x2500; etc
                &#x2514;&#x2500;&#x2500; system.yaml</code></pre><p>After running <code>docker compose up -d</code> and waiting a while, you can finally log in to your fresh Artifactory 7 installation at http://localhost:8080/ using the default user <code>admin</code> with password <code>password</code>.</p><figure class="kg-card kg-image-card"><img src="https://wangler.io/content/images/2023/03/CleanShot-2023-03-30-at-15.59.47.png" class="kg-image" alt="Running Artifactory 7 and Postgres using Docker Compose V2" loading="lazy" width="1804" height="1454" srcset="https://wangler.io/content/images/size/w600/2023/03/CleanShot-2023-03-30-at-15.59.47.png 600w, https://wangler.io/content/images/size/w1000/2023/03/CleanShot-2023-03-30-at-15.59.47.png 1000w, https://wangler.io/content/images/size/w1600/2023/03/CleanShot-2023-03-30-at-15.59.47.png 1600w, https://wangler.io/content/images/2023/03/CleanShot-2023-03-30-at-15.59.47.png 1804w" sizes="(min-width: 720px) 720px"></figure><p>I hope this helps you set up Artifactory. 
As mentioned, this is the bare minimum; feel free to extend it to your needs.</p><p>As a bonus, please find the resource consumption of a bored Artifactory accompanied by a Postgres database, running on Apple Silicon M1.</p><figure class="kg-card kg-image-card"><img src="https://wangler.io/content/images/2023/03/CleanShot-2023-03-30-at-16.12.41.png" class="kg-image" alt="Running Artifactory 7 and Postgres using Docker Compose V2" loading="lazy" width="1296" height="373" srcset="https://wangler.io/content/images/size/w600/2023/03/CleanShot-2023-03-30-at-16.12.41.png 600w, https://wangler.io/content/images/size/w1000/2023/03/CleanShot-2023-03-30-at-16.12.41.png 1000w, https://wangler.io/content/images/2023/03/CleanShot-2023-03-30-at-16.12.41.png 1296w" sizes="(min-width: 720px) 720px"></figure>]]></content:encoded></item><item><title><![CDATA[Java Money - Sum up monetary amounts]]></title><description><![CDATA[This blog post is about working with monetary amounts in Java using Java Money.]]></description><link>https://wangler.io/java-money-sum-monetary-amounts/</link><guid isPermaLink="false">63e29518ff89d5000173d371</guid><category><![CDATA[Java]]></category><dc:creator><![CDATA[Silvio Wangler]]></dc:creator><pubDate>Tue, 07 Feb 2023 18:44:17 GMT</pubDate><media:content url="https://wangler.io/content/images/2023/02/schweizisk-franc.webp" medium="image"/><content:encoded><![CDATA[<img src="https://wangler.io/content/images/2023/02/schweizisk-franc.webp" alt="Java Money - Sum up monetary amounts"><p><a href="https://javamoney.github.io/?ref=wangler.io">Java Money</a> (<a href="https://jcp.org/en/jsr/detail?id=354&amp;ref=wangler.io">JSR-354</a>) provides an excellent API (and SPI) to deal with money in Java. 
The following example shows how easily it can be used with Java Streams to sum up <code>MonetaryAmounts</code>.</p><p>Given the following list with two monetary amounts of CHF 12.50 and CHF 99.35</p><figure class="kg-card kg-image-card"><img src="https://wangler.io/content/images/2023/02/List-Of-Monetary-Amounts.jpeg" class="kg-image" alt="Java Money - Sum up monetary amounts" loading="lazy" width="1042" height="586" srcset="https://wangler.io/content/images/size/w600/2023/02/List-Of-Monetary-Amounts.jpeg 600w, https://wangler.io/content/images/size/w1000/2023/02/List-Of-Monetary-Amounts.jpeg 1000w, https://wangler.io/content/images/2023/02/List-Of-Monetary-Amounts.jpeg 1042w" sizes="(min-width: 720px) 720px"></figure><p>we can easily calculate the grand total (sum up the monetary amounts) by using Java Streams. Note that <code>Money</code> is backed by a <code>BigDecimal</code>, so precision is respected. For the sake of readability, the code examples work with <code>double</code> values.</p><figure class="kg-card kg-image-card"><img src="https://wangler.io/content/images/2023/02/Sum-it-up-1.jpeg" class="kg-image" alt="Java Money - Sum up monetary amounts" loading="lazy" width="1228" height="691" srcset="https://wangler.io/content/images/size/w600/2023/02/Sum-it-up-1.jpeg 600w, https://wangler.io/content/images/size/w1000/2023/02/Sum-it-up-1.jpeg 1000w, https://wangler.io/content/images/2023/02/Sum-it-up-1.jpeg 1228w" sizes="(min-width: 720px) 720px"></figure><p>The result is a monetary amount <code>sum</code> of CHF 111.85. Neat, isn&apos;t it?</p><h2 id="but-why-monetaryfunctions">But why MonetaryFunctions?</h2><p>The example above uses <code>MonetaryFunctions::sum</code> to sum up the monetary amounts. 
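</p><p>The listings in this post are embedded as images. For convenience, here is a comparable plain-text sketch; it assumes the Moneta reference implementation on the classpath, and the variable names as well as the exact reduction call are illustrative rather than copied from the screenshots:</p><figure class="kg-card kg-code-card"><pre><code class="language-java">import java.util.List;
import javax.money.MonetaryAmount;
import org.javamoney.moneta.Money;
import org.javamoney.moneta.function.MonetaryFunctions;

// Two monetary amounts of CHF 12.50 and CHF 99.35
List&lt;MonetaryAmount&gt; amounts = List.of(Money.of(12.50, &quot;CHF&quot;), Money.of(99.35, &quot;CHF&quot;));

// Reduce the stream with the Moneta sum reducer; mixing currencies raises a MonetaryException
MonetaryAmount sum = amounts.stream().reduce(MonetaryFunctions.sum()).orElseThrow(); // CHF 111.85</code></pre><figcaption>Summing up MonetaryAmounts (sketch)</figcaption></figure><p>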
Of course you could simply use <code>MonetaryAmount::add</code> as shown in the listing below.</p><figure class="kg-card kg-image-card"><img src="https://wangler.io/content/images/2023/02/Sum-by-add.jpeg" class="kg-image" alt="Java Money - Sum up monetary amounts" loading="lazy" width="877" height="494" srcset="https://wangler.io/content/images/size/w600/2023/02/Sum-by-add.jpeg 600w, https://wangler.io/content/images/2023/02/Sum-by-add.jpeg 877w" sizes="(min-width: 720px) 720px"></figure><p>By using <code>MonetaryFunctions::sum</code> you make sure that only monetary amounts of the same currency are summed up. If you try to sum up CHF 5.50 and $ 12.33 using <code>MonetaryAmount::add</code>, you will get a <code>MonetaryException</code>.</p>]]></content:encoded></item><item><title><![CDATA[Install Docker Compose v2 system-wide]]></title><description><![CDATA[<p>In this post I&apos;m going to show you how you can install Docker Compose version 2 on a Linux system and register a <code>systemctl</code> service.</p><h2 id="install-docker-compose">Install Docker Compose</h2><p>Docker Compose releases are available at the following GitHub repository <a href="https://github.com/docker/compose/releases?ref=wangler.io">https://github.com/docker/compose/releases</a>. 
This means we</p>]]></description><link>https://wangler.io/install-docker-compose-v2-system-wide/</link><guid isPermaLink="false">635cde5e82018300014a3ca0</guid><category><![CDATA[docker]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Silvio Wangler]]></dc:creator><pubDate>Sat, 29 Oct 2022 08:23:17 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1453906971074-ce568cccbc63?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fGNvbXBvc2V8ZW58MHx8fHwxNjY3MDMxNjcy&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1453906971074-ce568cccbc63?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fGNvbXBvc2V8ZW58MHx8fHwxNjY3MDMxNjcy&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Install Docker Compose v2 system-wide"><p>In this post I&apos;m going to show you how you can install Docker Compose version 2 on a Linux system and register a <code>systemctl</code> service.</p><h2 id="install-docker-compose">Install Docker Compose</h2><p>Docker Compose releases are available at the following GitHub repository <a href="https://github.com/docker/compose/releases?ref=wangler.io">https://github.com/docker/compose/releases</a>. This means we can easily download the binary for, e.g., a 64-bit Linux operating system. 
In the example below I use <code>curl</code> to perform the download and store it at <code>/usr/libexec/docker/cli-plugins/docker-compose</code>.</p><pre><code class="language-bash">sudo curl -SL https://github.com/docker/compose/releases/download/v2.12.2/docker-compose-linux-x86_64 -o /usr/libexec/docker/cli-plugins/docker-compose</code></pre><p>Now we need to make sure that the binary is executable.</p><pre><code>sudo chmod 755 /usr/libexec/docker/cli-plugins/docker-compose</code></pre><h2 id="install-a-systemctl-service">Install a Systemctl Service</h2><p>Now I want to run my docker composition as a service to make sure the composition starts when the operating system starts. Therefore I create a new file <code>myservice.service</code> at <code>/etc/systemd/system</code>.</p><pre><code>[Unit]
Description=My service with docker compose
Requires=docker.service
After=docker.service
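# Optional addition (not part of the original post): also wait for the network,
# which is useful when Compose needs to pull images at boot time.
# Wants=network-online.target
# After=network-online.target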

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=&lt;path to your Docker composition&gt;
ExecStart=/usr/libexec/docker/cli-plugins/docker-compose up -d --remove-orphans
ExecStop=/usr/libexec/docker/cli-plugins/docker-compose down

[Install]
WantedBy=multi-user.target</code></pre><p>Finally we register the new service by executing <code>sudo systemctl enable myservice.service</code>. Et voil&#xE0;, the service is registered and can be started using <code>sudo systemctl start myservice.service</code>.</p>]]></content:encoded></item><item><title><![CDATA[How authentication works with Micronaut Security]]></title><description><![CDATA[Have you ever wondered how Micronaut Security internally works? Well, I did and here is what I have found. Hope you enjoy it.]]></description><link>https://wangler.io/how-authentication-works-with-micronaut-security/</link><guid isPermaLink="false">6321ba4e1f5a9d00012f870e</guid><category><![CDATA[Micronaut]]></category><category><![CDATA[Security]]></category><category><![CDATA[Java]]></category><dc:creator><![CDATA[Silvio Wangler]]></dc:creator><pubDate>Wed, 14 Sep 2022 14:14:00 GMT</pubDate><media:content url="https://wangler.io/content/images/2022/09/scott-webb-yekGLpc3vro-unsplash-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://wangler.io/content/images/2022/09/scott-webb-yekGLpc3vro-unsplash-1.jpg" alt="How authentication works with Micronaut Security"><p>Have you ever wondered how Micronaut Security internally works? Well, I did and here is what I have found. Hope you enjoy it.</p><p>I&apos;m a visual person, meaning I love pictures or any other visuals to understand a certain topic. Recently I had to enable Basic Authentication within a Micronaut 3.6.x application. The nice thing is that I simply had to apply the following two dependencies to my Gradle build.</p><figure class="kg-card kg-code-card"><pre><code class="language-groovy">dependencies {
	annotationProcessor(&quot;io.micronaut.security:micronaut-security-annotations&quot;)
	implementation(&quot;io.micronaut.security:micronaut-security&quot;)
    // ... other dependencies omitted
}</code></pre><figcaption>build.gradle</figcaption></figure><p>By adding these two dependencies, Micronaut Security is part of your application. As a next step I needed to turn on the security feature and secure the API paths.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">micronaut:
  application:
    name: artinaut
  security:
    enabled: true # turn on Micronaut Security
    basic-auth:
      enabled: true # enable basic authentication (default is: true)
    intercept-url-map: # Secure the API
      - pattern: /api/v1/**
        access:
          - ADMIN
      - pattern: /repos/**
        access:
          - isAnonymous()
          - isAuthenticated()</code></pre><figcaption>application.yml</figcaption></figure><p>HTTP requests with path <code>/api/v1/**</code> automatically require an authenticated user having the role <code>ADMIN</code>, and HTTP requests to <code>/repos/**</code> are available to the public. So how do I tell Micronaut Security about my users and their roles? The answer is that I need to provide an <code>AuthenticationProvider</code>. So here is what I did.</p><ol><li>Pick the username from the <code>AuthenticationRequest</code>.</li><li>Try to read the user from the database using the username.</li><li>Compare the hashed passwords and either return a successful or a failed <code>AuthenticationResponse</code>.</li></ol><figure class="kg-card kg-code-card"><pre><code class="language-java">@Singleton
@RequiredArgsConstructor
@Slf4j
public class MyAuthenticationProvider implements AuthenticationProvider {

  private final UserService userService;
  private final PasswordEncoder passwordEncoder;

  @Override
  public Publisher&lt;AuthenticationResponse&gt; authenticate(
      HttpRequest&lt;?&gt; httpRequest, AuthenticationRequest&lt;?, ?&gt; authenticationRequest) {
    return Flux.create(
        emitter -&gt; {
          String identity = (String) authenticationRequest.getIdentity();
          UserDto user = userService.findUser(identity).orElse(null);

          if (user != null) {

            if (passwordEncoder.matches(
                (String) authenticationRequest.getSecret(), user.password())) {

              final Set&lt;String&gt; roles =
                  user.groups().stream()
                      .map(GroupDto::roles)
                      .flatMap(Collection::stream)
                      .map(RoleDto::name)
                      .collect(Collectors.toSet());
              emitter.next(AuthenticationResponse.success(identity, roles));
            } else {
              log.debug(
                  &quot;Password does not match for user &#xAB;{}&#xBB; (user id &#xAB;{}&#xBB;)&quot;, identity, user.id());
              emitter.next(AuthenticationResponse.failure());
            }

          } else {
            emitter.next(AuthenticationResponse.failure());
          }
          emitter.complete();
        },
        FluxSink.OverflowStrategy.ERROR);
  }
}</code></pre><figcaption>MyAuthenticationProvider.java</figcaption></figure><p>Easy as pie, thank you Micronaut. It works! But wait, I want to dig deeper. What else is involved? Since Micronaut does not rely on reflection, its source code is very easy to read and debug. I ended up with the following simplified sequence diagram. It explains what other participants are involved in the authentication process, even before my <code>AuthenticationProvider</code> gets invoked.</p><figure class="kg-card kg-image-card kg-width-full kg-card-hascaption"><img src="https://wangler.io/content/images/2022/09/micronaut-security-authentication-sequence.png" class="kg-image" alt="How authentication works with Micronaut Security" loading="lazy" width="1653" height="746" srcset="https://wangler.io/content/images/size/w600/2022/09/micronaut-security-authentication-sequence.png 600w, https://wangler.io/content/images/size/w1000/2022/09/micronaut-security-authentication-sequence.png 1000w, https://wangler.io/content/images/size/w1600/2022/09/micronaut-security-authentication-sequence.png 1600w, https://wangler.io/content/images/2022/09/micronaut-security-authentication-sequence.png 1653w"><figcaption>Micronaut Security Authentication Sequence</figcaption></figure><p>I hope you liked it. Thank you for reading.</p>]]></content:encoded></item><item><title><![CDATA[Running Keycloak 17+ as Docker Container]]></title><description><![CDATA[<p>Keycloak 17+ is not based on Wildfly anymore but uses <a href="https://quarkus.io/?ref=wangler.io">Quarkus</a>. This makes it a first-class citizen for running it as a Docker container. Quarkus reduces the startup time of Keycloak massively and reduces its memory footprint. 
Previously a Keycloak Docker container based on Wildfly consumed around ~800 MB RAM</p>]]></description><link>https://wangler.io/keycloak-17-on-docker/</link><guid isPermaLink="false">6294f358be020a00017699cb</guid><category><![CDATA[docker]]></category><category><![CDATA[keycloak]]></category><dc:creator><![CDATA[Silvio Wangler]]></dc:creator><pubDate>Mon, 30 May 2022 17:17:10 GMT</pubDate><media:content url="https://wangler.io/content/images/2022/05/keycloak.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://wangler.io/content/images/2022/05/keycloak.jpeg" alt="Running Keycloak 17+ as Docker Container"><p>Keycloak 17+ is not based on Wildfly anymore but uses <a href="https://quarkus.io/?ref=wangler.io">Quarkus</a>. This makes it a first-class citizen for running it as a Docker container. Quarkus reduces the startup time of Keycloak massively and reduces its memory footprint. Previously a Keycloak Docker container based on Wildfly consumed around 800 MB RAM and took roughly 30 seconds to be up and running. Keycloak 17+ (<em>Codename Keycloak.X</em>) changes this by consuming ~300 MB RAM and starting almost instantly.</p><p>Therefore I was super keen on giving the new Keycloak setup a try and running it on my local machine as a Docker container. This article is about my failures and the eventual success.</p><p>My main goal is, as mentioned, to run Keycloak as a Docker container connected to a MariaDB database. As mentioned in the <a href="https://www.keycloak.org/server/containers?ref=wangler.io">Keycloak documentation</a> it is highly recommended to build your own optimised Keycloak Docker image.</p><blockquote>For the best start up of your Keycloak container, build an image by running the <code>build</code> step during the container build. 
This step will save time in every subsequent start phase of the container image.</blockquote><p><strong>Easy, let&apos;s give it a try</strong></p><p>So I started with a MariaDB and an optimised Keycloak Docker image that listens on HTTP port <code>8080</code>. In my first attempt I built the optimised Keycloak image using the following <code>Dockerfile</code>.</p><pre><code class="language-Dockerfile">FROM quay.io/keycloak/keycloak:18.0.0 as builder

ENV KC_DB=mariadb
RUN /opt/keycloak/bin/kc.sh build

FROM quay.io/keycloak/keycloak:18.0.0
COPY --from=builder /opt/keycloak/ /opt/keycloak/
WORKDIR /opt/keycloak
ENTRYPOINT [&quot;/opt/keycloak/bin/kc.sh&quot;, &quot;start&quot;]</code></pre><p>In the first stage I use the base image <code>quay.io/keycloak/keycloak:18.0.0</code> to build my optimised Keycloak setup, declaring that I want to use MariaDB. In the second stage I pack the optimised output of the <code>builder</code> stage and put it into my Docker image.</p><pre><code class="language-shell">docker build --no-cache . -t ghcr.io/saw303/zscsupporter-be/keycloak-18.0.0:0.0.1
</code></pre><p>Then I set up a Docker composition declaring a reverse proxy (Caddy), my Keycloak image and the MariaDB.</p><pre><code class="language-yaml">version: &quot;3.9&quot;
services:
  proxy:
    image: caddy:2.5.1-alpine
    ports:
      - &quot;${PROXY_IP}:80:80&quot;
      - &quot;${PROXY_IP}:443:443&quot;
    volumes:
      - ${BASE_PATH:-.}/docker-volume/caddy/Caddyfile:/etc/caddy/Caddyfile:Z
      - ${BASE_PATH:-.}/docker-volume/caddy/caddy_data:/data:Z
      - ${BASE_PATH:-.}/docker-volume/caddy/caddy_config:/config:Z
  
  keycloak:
    image: ghcr.io/saw303/zscsupporter-be/keycloak-18.0.0:0.0.1
    ports:
      - &quot;127.0.0.1:9001:8080&quot;
      - &quot;127.0.0.1:9443:8443&quot;
    environment:
      KC_HOSTNAME: localhost
      KC_HOSTNAME_PORT: 80
      KC_HOSTNAME_STRICT_BACKCHANNEL: true
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
      KC_DB_URL: jdbc:mariadb://keycloakdb:3306/keycloak
      KC_DB_USERNAME: keycloak
      KC_DB_PASSWORD: secret
      KC_LOG_LEVEL: info
      KC_PROXY: edge

  keycloakdb:
    image: mariadb:10.7.3-focal
    environment:
      MYSQL_ROOT_PASSWORD: root_secret
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: secret
      TZ: &quot;Europe/Zurich&quot;
    tmpfs:
      - /var/lib/mysql:rw
    ports:
      - &quot;127.0.0.1:3307:3306&quot;</code></pre><p>My initial idea was to access Keycloak and its Admin Console using http (insecure), since I was running it on my local machine. The <code>Caddyfile</code> for Caddy Server 2.0 looks like this.</p><pre><code class="language-Caddyfile">{
  admin off
}

localhost:80

reverse_proxy /* keycloak:8080

log</code></pre><p>It simply passed all requests to the Keycloak container running on port <code>8080</code>. But as you might have guessed, that did not work out very well. Clicking on the Admin Console link ended up on a <a href="https://stackoverflow.com/questions/72426072/how-to-configure-keycloak-18-running-http-only-in-production-mode/72436158?ref=wangler.io#72436158">blank browser page</a>.</p><p><strong>FFS, ...after some other failed attempts</strong></p><p>It took me a while to understand that the Admin Console requires secure access by design. I did not find any way around it, but finally found a way to make my reverse proxy create self-signed certificates. So here is a working setup.</p><p>First of all, you need to configure Caddy to listen on e.g. port 443 and create a self-signed certificate by declaring <code>tls internal</code>.</p><pre><code class="language-Caddyfile">{
  admin off
}

localhost:443 {
        reverse_proxy keycloak:8080
        tls internal
}

log</code></pre><p>Then you need to build Keycloak a bit differently.</p><pre><code class="language-Dockerfile">FROM quay.io/keycloak/keycloak:18.0.0 as builder

ENV KC_FEATURES=token-exchange
ENV KC_DB=mariadb
RUN /opt/keycloak/bin/kc.sh build

FROM quay.io/keycloak/keycloak:18.0.0
COPY --from=builder /opt/keycloak/ /opt/keycloak/
WORKDIR /opt/keycloak
ENTRYPOINT [&quot;/opt/keycloak/bin/kc.sh&quot;, &quot;start&quot;]</code></pre><p>And finally you need to adjust the Docker composition.</p><pre><code>version: &quot;3.9&quot;
services:
  proxy:
    image: caddy:2.5.1-alpine
    ports:
      - &quot;${PROXY_IP}:80:80&quot;
      - &quot;${PROXY_IP}:443:443&quot;
    volumes:
      - ${BASE_PATH:-.}/docker-volume/caddy/Caddyfile:/etc/caddy/Caddyfile:Z
      - ${BASE_PATH:-.}/docker-volume/caddy/caddy_data:/data:Z
      - ${BASE_PATH:-.}/docker-volume/caddy/caddy_config:/config:Z

  keycloak:
    image: ghcr.io/saw303/zscsupporter-be/keycloak-18.0.0:0.0.1
    ports:
      - &quot;127.0.0.1:9443:8443&quot;
    restart: unless-stopped
    environment:
      KC_DB_URL: jdbc:mariadb://keycloakdb:3306/keycloak
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
      KC_HOSTNAME: localhost
      KC_HOSTNAME_STRICT: false
      KC_HTTP_ENABLED: true
      KC_DB_USERNAME: keycloak
      KC_DB_PASSWORD: secret
      KC_PROXY: edge
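    # Addition (not part of the original post): start the database container
    # before Keycloak.
    depends_on:
      - keycloakdb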

  keycloakdb:
    image: mariadb:10.7.3-focal
    environment:
      MYSQL_ROOT_PASSWORD: root_secret
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: secret
      TZ: &quot;Europe/Zurich&quot;
    tmpfs:
      - /var/lib/mysql:rw
    ports:
      - &quot;127.0.0.1:3307:3306&quot;</code></pre><p>And this is how it worked for me. Have fun with your local Keycloak container on <a href="https://localhost/?ref=wangler.io">https://localhost</a>. &#xA0;Hope this helps.</p>]]></content:encoded></item><item><title><![CDATA[Automate your Mac with Hammerspoon]]></title><description><![CDATA[<p>You maybe know these situations. You just left your home and commute to work by the public transport. While traveling you connect your MacBook with your personal hot spot or maybe with a hotspot that is provided by the train or bus company. Anyway, you start to work and you&</p>]]></description><link>https://wangler.io/automate-your-mac-with-hammerspoon/</link><guid isPermaLink="false">61b8b9bc2e4a9f000122c37d</guid><dc:creator><![CDATA[Silvio Wangler]]></dc:creator><pubDate>Tue, 14 Dec 2021 15:54:40 GMT</pubDate><media:content url="https://wangler.io/content/images/2021/12/pexels-digital-buggu-171198.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://wangler.io/content/images/2021/12/pexels-digital-buggu-171198.jpg" alt="Automate your Mac with Hammerspoon"><p>You maybe know these situations: you just left your home and commute to work by public transport. While traveling you connect your MacBook to your personal hotspot or maybe to a hotspot provided by the train or bus company. Anyway, you start to work and you&apos;re maybe listening to music with a pair of headphones on. Suddenly you realize you forgot to mute your MacBook, and passengers around you might be entertained by the audio feedback of your terminal or other applications.</p><p>Wouldn&apos;t it be nice to automatically mute the Mac when I connect to foreign Wifi hotspots? And turn the volume back up when I return home? Well, this can easily be achieved with <a href="http://www.hammerspoon.org/?ref=wangler.io">Hammerspoon</a>. 
Hammerspoon allows you to implement such a use case using <a href="https://www.lua.org/start.html?ref=wangler.io">Lua</a>. The script below welcomes you with a message and sets the volume to 80% when you connect to your Wifi at home. When you connect to another Wifi, it mutes the MacBook.</p><pre><code class="language-Lua">wifiWatcher = nil
homeSSID = &quot;my-wifi&quot;
lastSSID = hs.wifi.currentNetwork()

function ssidChangedCallback()
    newSSID = hs.wifi.currentNetwork()

    if newSSID == homeSSID and lastSSID ~= homeSSID then
        -- We just joined our home WiFi network
        hs.audiodevice.defaultOutputDevice():setVolume(80)
        hs.alert.show(&quot;Welcome home&quot;)
    elseif newSSID ~= homeSSID and lastSSID == homeSSID then
        -- We just departed our home WiFi network
        hs.audiodevice.defaultOutputDevice():setVolume(0)
        -- newSSID is nil when the Wifi was turned off or disconnected
        hs.alert.show(&quot;You are now connected to &quot; .. (newSSID or &quot;no Wifi&quot;))
    end

    lastSSID = newSSID
end

wifiWatcher = hs.wifi.watcher.new(ssidChangedCallback)
wifiWatcher:start()</code></pre><p>I hope you found the short article helpful and enjoyed it.</p>]]></content:encoded></item><item><title><![CDATA[Dealing with configuration list in Micronaut]]></title><description><![CDATA[<p>Recently I wanted to map configuration list within the <code>application.yml</code> into an immutable configuration object of a Micronaut application. The YAML configuration is fairly simple. It defines a list of instances containing fields such as <code>name</code>, <code>version</code>, <code>endpoint</code> and <code>read-timeout</code>.</p><pre><code class="language-yaml">a:
  b:
    instances:
      - name: T1
        endpoint: &quot;http:</code></pre>]]></description><link>https://wangler.io/dealing-with-configuration-list-in-micronaut/</link><guid isPermaLink="false">617bab7c898e730001228d5a</guid><dc:creator><![CDATA[Silvio Wangler]]></dc:creator><pubDate>Fri, 29 Oct 2021 08:41:52 GMT</pubDate><media:content url="https://wangler.io/content/images/2021/10/developer-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://wangler.io/content/images/2021/10/developer-1.jpg" alt="Dealing with configuration list in Micronaut"><p>Recently I wanted to map a configuration list within the <code>application.yml</code> into an immutable configuration object of a Micronaut application. The YAML configuration is fairly simple. It defines a list of instances containing fields such as <code>name</code>, <code>version</code>, <code>endpoint</code> and <code>read-timeout</code>.</p><pre><code class="language-yaml">a:
  b:
    instances:
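      # each entry in this list is bound to one MyConfig.Instance (see the Java interface below)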
      - name: T1
        endpoint: &quot;http://t1&quot;
        version: &quot;1.5.3.24305,2021-08-09 18:01&quot;
        read-timeout: 20
      - name: T2
        endpoint: &quot;http://t2&quot;
        version: &quot;2.0.0.16555,2022-01-03 16:48&quot;
        read-timeout: 20</code></pre><p>Now I created an immutable Java configuration and wanted Micronaut to bind the YAML onto my configuration.</p><pre><code class="language-java">@ConfigurationProperties(&quot;a.b&quot;)
public interface MyConfig {

  List&lt;Instance&gt; getInstances();

  interface Instance {
    String getName();
    URL getEndpoint();
    String getVersion();
    int getReadTimeout();
  }
}</code></pre><p>Easy, so far. As good developers we are used to writing tests, and this is what I did to verify whether everything works as expected.</p><pre><code class="language-groovy">@MicronautTest
class MyConfigSpec extends Specification {
  @Inject
  MyConfig config

  void &quot;Make sure the config has instances&quot;() {

    expect:
    config.instances.isEmpty() == false
  }
}</code></pre><p>Guess what. The test failed &#x1F97A;. <code>getInstances()</code> always returns an empty list no matter what I do. After debugging for a while I realized that the Jackson ObjectMapper is trying to create an instance of <code>Instance</code> but obviously can&apos;t, since there is no implementation present. Migrating the interface <code>Instance</code> to a concrete class did not work either. At that point I was not sure what was really going on, so I posted my problem to <a href="https://stackoverflow.com/questions/69750117/immutable-configuration-in-micronaut-with-list-in-yaml?ref=wangler.io">Stack Overflow</a>. In less than 4 hours my problem was solved. </p><p>The solution is simple: we need to help Micronaut by introducing a dedicated <code>TypeConverter</code> that allows Micronaut to convert a <code>Map</code> into an <code>Instance</code>. Here is an example by <a href="https://e.printstacktrace.blog/?ref=wangler.io">Szymon Stepniak</a>.</p><pre><code class="language-java">@Singleton
class MapToInstanceConverter implements TypeConverter&lt;Map, Instance&gt; {
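    // registering the converter as a bean is all that is needed: Micronaut discovers
    // TypeConverter beans automatically and uses them during configuration binding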

    @Override
    public Optional&lt;Instance&gt; convert(Map object, Class&lt;Instance&gt; targetType, ConversionContext context) {
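        // wrap the raw map of the YAML list entry in an anonymous Instance;
        // the values are read from the map on every getter call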
        return Optional.of(new Instance() {
            @Override
            public String getName() {
                return object.getOrDefault(&quot;name&quot;, &quot;&quot;).toString();
            }

            @Override
            public URL getEndpoint() {
                try {
                    return new URI(object.getOrDefault(&quot;endpoint&quot;, &quot;&quot;).toString()).toURL();
                } catch (MalformedURLException | URISyntaxException e) {
                    throw new RuntimeException(e);
                }
            }

            @Override
            public String getVersion() {
                return object.getOrDefault(&quot;version&quot;, &quot;&quot;).toString();
            }

            @Override
            public int getReadTimeout() {
                return Integer.parseInt(object.getOrDefault(&quot;read-timeout&quot;, 0).toString());
            }
        });
    }
}</code></pre><p>This converter made my test go <em>green</em>. Configuration binding done! Hope this helps.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://wangler.io/content/images/2021/10/laptop-g59ff5c28f_1280.jpg" class="kg-image" alt="Dealing with configuration list in Micronaut" loading="lazy" width="1280" height="720" srcset="https://wangler.io/content/images/size/w600/2021/10/laptop-g59ff5c28f_1280.jpg 600w, https://wangler.io/content/images/size/w1000/2021/10/laptop-g59ff5c28f_1280.jpg 1000w, https://wangler.io/content/images/2021/10/laptop-g59ff5c28f_1280.jpg 1280w" sizes="(min-width: 720px) 720px"><figcaption>Job Done</figcaption></figure>]]></content:encoded></item><item><title><![CDATA[Gradle: Debug Unit Tests]]></title><description><![CDATA[<p>Have you ever been in the situation that everything runs smoothly in your IDE or on your machine and then the build breaks on your command line or CI?</p><p>Well, in this case you might want to attach your IDE&apos;s debugger and inspect what&apos;s going on. This brief article will</p>]]></description><link>https://wangler.io/debug-your-gradle-process/</link><guid isPermaLink="false">613c8a8e5d0f9d000195fa3b</guid><category><![CDATA[Gradle]]></category><dc:creator><![CDATA[Silvio Wangler]]></dc:creator><pubDate>Sat, 11 Sep 2021 11:08:54 GMT</pubDate><media:content url="https://wangler.io/content/images/2021/09/Python-Debugging-With-Pdb_Watermarked.webp" medium="image"/><content:encoded><![CDATA[<img src="https://wangler.io/content/images/2021/09/Python-Debugging-With-Pdb_Watermarked.webp" alt="Gradle: Debug Unit Tests"><p>Have you ever been in the situation that everything runs smoothly in your IDE or on your machine and then the build breaks on your command line or CI?</p><p>Well, in this case you might want to attach your IDE&apos;s debugger and inspect what&apos;s going on. 
This brief article will show you how to do it.</p><p>On your local machine you simply tell Gradle to run in debug mode without the Gradle Daemon. Run your tests using the <code>check</code> task.</p><pre><code class="language-shell">gradle --no-daemon -Dorg.gradle.debug=true check</code></pre><p>After that you can attach your IDE&apos;s debugger to port 5005 and see what&apos;s actually happening with your test.</p>]]></content:encoded></item><item><title><![CDATA[Browser Automation with Geb, Spock & Gradle]]></title><description><![CDATA[<p><em>This is a repost of my blog post in 2016.</em> Recently I was asked to do a small intro to <a href="http://www.gebish.org/?ref=wangler.io">Geb</a>. These are the results of a small workshop I gave at <a href="http://www.adcubum.com/?ref=wangler.io">Adcubum</a>.</p><h2 id="so-what-is-it">So what is it? </h2><p>Geb is a really nice and handy browser automation tool on top</p>]]></description><link>https://wangler.io/browser-automation-with-geb-spock-gradle/</link><guid isPermaLink="false">60106b5ec895e200012f5884</guid><category><![CDATA[Geb]]></category><category><![CDATA[Testing]]></category><category><![CDATA[Browser]]></category><dc:creator><![CDATA[Silvio Wangler]]></dc:creator><pubDate>Tue, 26 Jan 2021 19:38:33 GMT</pubDate><media:content url="https://wangler.io/content/images/2021/06/geb-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://wangler.io/content/images/2021/06/geb-1.png" alt="Browser Automation with Geb, Spock &amp; Gradle"><p><em>This is a repost of my blog post in 2016.</em> Recently I was asked to do a small intro to <a href="http://www.gebish.org/?ref=wangler.io">Geb</a>. These are the results of a small workshop I gave at <a href="http://www.adcubum.com/?ref=wangler.io">Adcubum</a>.</p><h2 id="so-what-is-it">So what is it? 
</h2><p>Geb is a really nice and handy browser automation tool on top of <a href="http://www.seleniumhq.org/projects/webdriver/?ref=wangler.io">Selenium Webdriver</a>. It gives you the full power of Selenium Webdriver, but adds some nice features such as: </p><ul><li>a much more readable <a href="https://en.wikipedia.org/wiki/Domain-specific_language?ref=wangler.io">DSL</a> </li><li><a href="http://martinfowler.com/bliki/PageObject.html?ref=wangler.io">Page objects</a> to structure your test code and make it reusable. </li><li>Integration with the <a href="http://spockframework.org/?ref=wangler.io">Spock Framework</a> </li></ul><h2 id="and-how-does-it-look-like">And what does it look like? </h2><p>Page objects help you encapsulate the content of a specific page and reuse it in several test classes. In this example <code>GoogleFrontPage</code> provides an easy-to-use identifier for the Google search input field, for example. It also gives you an easy way to click the Google search button.</p><figure class="kg-card kg-code-card"><pre><code class="language-groovy">class GoogleFrontPage extends geb.Page {

    static url = &apos;/&apos;

    static at = {
        title == &apos;Google&apos;
    }

    static content = {
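        // entries in the content block are lazily evaluated on each access and
        // become properties of the page object, usable directly in specifications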
        searchInputField { $(&quot;input&quot;, name: &quot;q&quot;) }

        searchButton { $(&quot;button&quot;, name: &quot;btnG&quot;) }

        searchResultsContainer { $(&apos;#sbfrm_l&apos;) }

        searchResults { $(&apos;h3.r&apos;) }

        firstResult { searchResults[0] }
    }
}</code></pre><figcaption>GoogleFrontPage.groovy</figcaption></figure><p>This enables you as a developer to write much more readable test code by writing commands like </p><pre><code class="language-groovy">to GoogleFrontPage</code></pre><p>which tells Geb to browse to <a href="http://www.google.com/?ref=wangler.io">http://www.google.com</a>. You can then tell <em>Geb</em> to enter some text into the search input field of the Google Search Engine by writing</p><pre><code class="language-groovy">searchInputField.value = &apos;Geb Framework&apos;</code></pre><p>and then start the Google search by clicking the search button:</p><pre><code class="language-groovy">searchButton.click()</code></pre><h2 id="putting-all-of-this-together">Putting all of this together</h2><p>The listing below shows you the entire Geb test case implemented as a Spock specification. Looks neat, right?</p><figure class="kg-card kg-code-card"><pre><code class="language-groovy">@spock.lang.Stepwise
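// @Stepwise runs the feature methods in declaration order and skips the remaining ones
// once one fails; this spec relies on it, since every step builds on the previous browser state.
// GebReportingSpec additionally writes a page-source report at the end of each test.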
class GoogleSpec extends geb.spock.GebReportingSpec {

  void &quot;Visit Google.com&quot;() {

    when:
    to GoogleFrontPage

    then:
    title == &apos;Google&apos;
  }

  void &quot;Make sure the query field is initially empty&quot;() {

    expect: &apos;The search field is initially empty&apos;
    searchInputField.value() == &apos;&apos;
  }

  void &quot;Enter a query&quot;() {

    when: &apos;Enter &quot;Geb Framework&quot; into the search field&apos;
    searchInputField.value = &apos;Geb Framework&apos;

    and: &apos;Click the search button&apos;
    searchButton.click()

    and: &apos;wait until the search result element is visible&apos;
    waitFor { searchResultsContainer.displayed }

    then:
    title == &apos;Geb Framework - Google Search&apos;

    and:
    firstResult.text() == &apos;Geb - Very Groovy Browser Automation&apos;
  }
}</code></pre><figcaption>The whole Geb Testcase implemented as a Spock specification</figcaption></figure><h2 id="where-can-i-get-that-stuff">Where can I get that stuff? </h2><p>I wrote a small starter tutorial that is hosted at <a href="https://github.com/saw303/geb-starter/?ref=wangler.io">https://github.com/saw303/geb-starter/</a>. Feel free to clone it and run those tests yourself. </p><h2 id="i-wanna-see-more-">I wanna see more! </h2><p>As initially mentioned, the workshop was held in German. Therefore the recordings are only available in German.</p><figure class="kg-card kg-embed-card"><iframe width="356" height="200" src="https://www.youtube.com/embed/kZ2fSC7JqUU?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><figure class="kg-card kg-embed-card"><iframe width="356" height="200" src="https://www.youtube.com/embed/mJzI11DZC4k?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><figure class="kg-card kg-embed-card"><iframe width="356" height="200" src="https://www.youtube.com/embed/2jbG6N0UcBI?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure>]]></content:encoded></item><item><title><![CDATA[Split large Git repositories]]></title><description><![CDATA[<p>Recently I had to split a <em>large application</em> into several smaller applications. The source code of that application was in a single Git repository and had well-encapsulated modules within their own subdirectories such as <em>module-a</em>.</p><figure class="kg-card kg-code-card"><pre><code>large-application
&#x251C;&#x2500;&#x2500; module-a
&#x251C;&#x2500;&#x2500; module-b
&#x2514;&#x2500;&#x2500; module-c</code></pre><figcaption>directory structure</figcaption></figure>]]></description><link>https://wangler.io/split-large-git-repositories/</link><guid isPermaLink="false">6010361ec895e200012f57cb</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Silvio Wangler]]></dc:creator><pubDate>Tue, 26 Jan 2021 19:01:51 GMT</pubDate><media:content url="https://wangler.io/content/images/2021/01/1_wPqqYFfNreXF4INrNhYkeQ.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://wangler.io/content/images/2021/01/1_wPqqYFfNreXF4INrNhYkeQ.jpeg" alt="Split large Git repositories"><p>Recently I had to split a <em>large application</em> into several smaller applications. The source code of that application was in a single Git repository and had well-encapsulated modules within their own subdirectories such as <em>module-a</em>.</p><figure class="kg-card kg-code-card"><pre><code>large-application
&#x251C;&#x2500;&#x2500; module-a
&#x251C;&#x2500;&#x2500; module-b
&#x2514;&#x2500;&#x2500; module-c</code></pre><figcaption>directory structure of the original application</figcaption></figure><p>Since the modules were already encapsulated, the main goal was to extract them into new, dedicated Git repositories. And here is how I achieved it.</p><h2 id="mission-extraction">Mission: <em>Extraction</em></h2><p>First let&apos;s start by going into the Git repository of the <em>large application</em>.</p><pre><code class="language-sh">cd ~/large-application</code></pre><p>For now we concentrate on extracting module <em>module-a</em>, and we do this by telling Git to create a <code>subtree</code> of that directory and store the subtree in a new branch called <code>feature/split-module-a</code>.</p><pre><code class="language-sh">git subtree split -P module-a -b feature/split-module-a</code></pre><p>After this we create a new empty Git repository for <em>module-a</em>. </p><pre><code class="language-sh">mkdir ~/new-repo
cd ~/new-repo
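# git init creates the empty repository that will receive the extracted history in the next step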
git init</code></pre><p>Alright, we are about done here. All we need to do is move the extracted branch from the source Git repository to the target Git repository.</p><pre><code class="language-sh">git pull ~/large-application feature/split-module-a</code></pre><p>That&apos;s it. Your new repository now contains only the <em>module-a</em>-related commits.</p><p>This post is inspired by the following answer on <a href="https://stackoverflow.com/questions/359424/detach-move-subdirectory-into-separate-git-repository/17864475?ref=wangler.io#17864475">Stack Overflow</a>.</p>]]></content:encoded></item><item><title><![CDATA[Upload a file to a Docker container]]></title><description><![CDATA[This article explains how to copy data from and/or into a Docker container]]></description><link>https://wangler.io/upload-a-file-to-a-docker-container/</link><guid isPermaLink="false">600fdcf2c895e200012f5764</guid><category><![CDATA[docker]]></category><dc:creator><![CDATA[Silvio Wangler]]></dc:creator><pubDate>Tue, 26 Jan 2021 15:30:06 GMT</pubDate><media:content url="https://wangler.io/content/images/2021/01/docker-logo-2.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://wangler.io/content/images/2021/01/docker-logo-2.jpg" alt="Upload a file to a Docker container"><p>There are situations where you want to copy a file from your machine to a running Docker container or vice versa, especially when the Docker container does not use a volume that is bound to your local file system. For these situations you can use one of the approaches described below.</p><h2 id="use-pipes">Use Pipes</h2><p>Pipes are a fantastic way of passing data from one process to another. The following example uses <code>cat</code> to read the content of <code>missing_data.sql</code> and hand it over to the running Docker container. </p><pre><code class="language-sh">cat missing_data.sql | \
docker exec -i &lt;your container name&gt; \
sh -c &apos;cat &gt; /missing_data.sql&apos;</code></pre><p>Once this process has completed you will find the file inside the container at <code>/missing_data.sql</code>.</p><h2 id="use-the-docker-binary">Use the Docker Binary</h2><p>Although copying can be achieved using pipes, I recommend using the <code>cp</code> command of the Docker binary. It is much easier to read and simpler to write.</p><pre><code class="language-sh">docker cp missing_data.sql &lt;container-id&gt;:/missing_data.sql</code></pre><p>I hope this helped you exchange files with Docker containers.</p>]]></content:encoded></item></channel></rss>