06 November 2018
If you need to upload a file into a running Docker container, you can do so as follows
cat missing_data.sql | docker exec -i <your container name> sh -c 'cat >/missing_data.sql'
Afterwards the file is located inside the container at /missing_data.sql.
Have fun!
06 November 2018
Go into your big Git repository.
cd ~/my-big-repo
Use git subtree split -P <name of the folder in the Git repo> -b feature/split
to extract that folder, including its history, into a new branch.
Then create a new Git repository using
mkdir ~/new-repo
cd ~/new-repo
git init
Pull the created branch into the new repository with
git pull ~/my-big-repo feature/split
That’s it. Your new repository now contains only the commits that touched that folder.
12 December 2017
Recently I had to find a way to flatten a java.util.List
that contains other java.util.List
instances using Java 8 streams.
Let’s take the following java.util.List manyLists
as an example.
List<List<String>> manyLists = new ArrayList<>();
manyLists.add(Arrays.asList("A", "B", "C"));
manyLists.add(Arrays.asList("X", "Y", "Z"));
manyLists.add(Arrays.asList("1", "2", "3"));
Before Java 8 I would probably have implemented it like this
List<String> all = new ArrayList<>();
for (List<String> list: manyLists) {
all.addAll(list);
}
Since we got streams in Java 8, there is another possibility to solve the problem
List<String> all = manyLists.stream()
.flatMap(List::stream) (1)
.collect(Collectors.toList()); (2)
System.out.println(all);
flattens the List<List<String>>
into a single Stream<String>
collects the stream into a new list
This results in [A, B, C, X, Y, Z, 1, 2, 3].
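The flattened stream feeds straight into any further stream operation. Here is a minimal, self-contained sketch (the class name FlattenAndSum and the sample numbers are mine) that applies the same flatMap pattern to nested integer lists and then sums the elements:

```java
import java.util.Arrays;
import java.util.List;

public class FlattenAndSum {

    public static void main(String[] args) {
        List<List<Integer>> numberLists = Arrays.asList(
                Arrays.asList(1, 2, 3),
                Arrays.asList(4, 5, 6));

        // Same pattern as above: flatMap turns the Stream<List<Integer>>
        // into a Stream<Integer>, which we can sum directly.
        int sum = numberLists.stream()
                .flatMap(List::stream)
                .mapToInt(Integer::intValue)
                .sum();

        System.out.println(sum); // prints 21
    }
}
```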
05 December 2017
Recently I was wondering why IntelliJ IDEA keeps warning me about using java.util.Optional
as a method parameter.
Why can’t I use this comfortable wrapper type as an input parameter? So I did some research and would like to share the outcome
with you.
Let’s pretend there is a method signature as follows
/**
* This method simply does something.
*
* @param name mandatory name of someone
* @param age optional age of someone
*/
public void doSomething(String name, Optional<Integer> age) {
// some mighty code that does something
}
What could possibly be the problem with that signature that makes IntelliJ try to convince me not to write such code?
Alright, I understand that I should not use it as a field. But why not use it as an input parameter? Is it because it is bad design? Because developers calling that method are forced to write something ugly like this?
obj.doSomething(
"Silvio Wangler",
Optional.of(new Integer(32)) (1)
);
Ugly because we force the caller to wrap the Integer into a java.util.Optional
Take a look at the preceding example. Isn’t that an ugly API? I would not want to code against it either. This might be the reason why it is not recommended to use java.util.Optional as an input parameter in method signatures.
But what is the recommendation? Let’s refactor the method for a smoother API. Let’s remove that java.util.Optional.
/**
* This method simply does something.
*
* @param name mandatory name of someone
* @param age optional age of someone
*/
public void doSomething(String name, Integer age) {
Objects.requireNonNull(name); (1)
Optional<Integer> possibleAge = Optional.ofNullable(age); (2)
possibleAge.ifPresent(System.out::println); (3)
}
Enforce that name
is mandatory.
Wrap the optional age
parameter into a java.util.Optional.
Only do something with age
if it is a non-null value.
Well, that seems to be the way to do it. That way I am not forcing my API callers to write boilerplate code, and I, as the author of
that method, take care of the mandatory/optional validation. And in order to provide more convenience to the API user I could even
overload the method doSomething
and hide this optional parameter.
/**
* This method simply does something.
*
* @param name mandatory name of someone
*/
public void doSomething(String name) {
doSomething(name, null); (1)
}
Delegate the call and set age to null.
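Putting the two overloads together, a self-contained sketch (the class name OptionalParameterDemo is mine; the method bodies follow the listings above) shows that the caller never has to wrap anything in an Optional:

```java
import java.util.Objects;
import java.util.Optional;

public class OptionalParameterDemo {

    /** Mandatory name, optional age; null means "no age given". */
    public void doSomething(String name, Integer age) {
        Objects.requireNonNull(name);                   // name is mandatory
        Optional<Integer> possibleAge = Optional.ofNullable(age);
        possibleAge.ifPresent(System.out::println);     // only act on a non-null age
    }

    /** Convenience overload that hides the optional parameter. */
    public void doSomething(String name) {
        doSomething(name, null);
    }

    public static void main(String[] args) {
        OptionalParameterDemo demo = new OptionalParameterDemo();
        demo.doSomething("Silvio Wangler", 32); // a plain int, autoboxed; no Optional in sight
        demo.doSomething("Silvio Wangler");     // age simply omitted
    }
}
```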
31 October 2016
Geb is a really nice and handy browser automation tool on top of Selenium WebDriver. It gives you the full power of Selenium WebDriver but adds some nice features such as:
a much more readable DSL
Page objects to structure your test code and make it reusable.
Integration with the Spock Framework
Page objects help you encapsulate the content of a specific page and reuse it in several test classes.
In this example GoogleFrontPage
provides easy-to-use identifiers, for example for the Google search input field.
It also provides an easy way to click the Google search button.
package ch.silviowangler.geb.pages
import geb.Page
/**
* @author Silvio Wangler
*/
class GoogleFrontPage extends Page {
static url = '/'
static at = {
title == 'Google'
}
static content = {
searchInputField { $("input", name: "q") }
searchButton { $("button", name: "btnG") }
searchResultsContainer { $('#sbfrm_l') }
searchResults { $('h3.r') }
firstResult { searchResults[0] }
}
}
This enables you as a developer to write much more readable test code, with commands like
to GoogleFrontPage
which tells Geb to browse to http://www.google.com. You can then tell Geb to enter some text into
Google’s search input field by writing
searchInputField.value 'Geb Framework'
and then start the Google search by clicking on the button.
searchButton.click()
import ch.silviowangler.geb.pages.GoogleFrontPage
import geb.spock.GebReportingSpec
import spock.lang.Stepwise
@Stepwise
class GoogleSpec extends GebReportingSpec {
void "Visit Google.com"() {
when:
to GoogleFrontPage
then:
title == 'Google'
}
void "Make sure the query field is initially empty"() {
expect: 'The search field is initially empty'
searchInputField.text() == ''
}
void "Enter a query"() {
when: 'Enter "Geb Framework" into the search field'
searchInputField.value 'Geb Framework'
and: 'Click the search button'
searchButton.click()
and: 'wait until the search result element is visible'
waitFor { searchResultsContainer.displayed }
then:
title == 'Geb Framework - Google Search'
and:
firstResult.text() == 'Geb - Very Groovy Browser Automation'
}
}
I wrote a small starter tutorial that is hosted at GitHub. Feel free to clone it and run those tests yourself. Hope you enjoy it.
The workshop was held in German. Therefore the recordings are only available in German.
22 May 2016
Recently we were running into strange OutOfMemoryErrors
while Gradle was executing our integration tests.
In order to get better insight into what was going wrong, we decided to let Java Flight Recorder profile the test execution.
All you need to do is put the following snippet into your build.gradle.
This will enable Flight Recorder, record the whole test execution and write the result to a file called build/dumponexit.jfr.
test {
maxHeapSize = "2g"
jvmArgs += ["-XX:+UnlockCommercialFeatures", "-XX:+FlightRecorder"]
def jfrOptions = [
defaultrecording: true,
dumponexit : true,
dumponexitpath : "${project.buildDir}/dumponexit.jfr",
globalbuffersize: '10M',
disk : true,
settings : "${System.getProperty("java.home")}/lib/jfr/profile.jfc".toString()
]
jvmArgs += ["-XX:FlightRecorderOptions=${jfrOptions.collect { k, v -> "$k=$v".toString() }.join(',')}"]
}
Hope this blog post helps.
29 February 2016
Flyway is an excellent tool to place database migrations in your Java application. It’s easy to understand and to integrate into your application. Since your database is a part of your application, Flyway enables you to manage migrations along with your source code.
Recently I upgraded a Grails 2.5.x application to its successor Grails 3.1.1. And I had to upgrade the Grails Flyway Plugin in order to run with Grails 3.1.1. This blog post will introduce the plugin to you as a Grails developer.
First of all you need to declare a dependency in your build.gradle.
The following example uses latest.integration,
which causes Gradle to always use the latest version of the plugin.
compile 'org.grails.plugins:grails-flyway:latest.integration'
Next you need to configure the plugin in either your application.yml
or application.groovy
. This example uses YAML.
flyway:
enabled: false
locations: migration/db/mysql
baselineOnMigrate: true
The Grails Flyway plugin for Grails 3.x is available at the official plugin repository at Bintray.
02 January 2016
I am currently working on a project that provides a REST API based on RESTEasy. The REST API uses metadata (options.json) to describe a resource. The following listing describes a simple resource Person
that contains the attributes id
and name.
The attribute name
defines a constraint that its value has to contain at least 1 and at most 30 characters.
{
"general": {
"description": "Person resource",
"majorVersion": 1,
"icon": "map",
"lifecycle": {
"deprecated": false,
"info": "This version is valid"
},
"x-route": "/:version/person/:entity"
},
"verbs": [
{
"verb": "POST",
"rel": "Add person",
"responseStates": [
{
"code": 200,
"message": "200 Ok",
"comment": "content in response body"
},
{
"code": 503,
"message": "503 Service Unavailable",
"comment": "Backend server possibly not reachable or too slow"
}
],
"defaultRepresentation": "json",
"representations": [
{
"name": "json",
"comment": "",
"responseExample": "{...}",
"isDefault": true,
"mimetype": "application/json"
}
],
"options": [
],
"permissions": [
{
"name": "role-a",
"mode": "all",
"comment": ""
}
]
}
],
"fields": [
{
"name": "id",
"type": "uuid",
"options": null,
"mandatory": [],
"min": null,
"max": null,
"multiple": false,
"defaultValue": null,
"protected": [false],
"visible": true,
"sortable": true,
"scopeable": true,
"x-comment": "unique identifier"
},
{
"name": "name",
"type": "string",
"options": null,
"mandatory": ["POST"],
"min": 1,
"max": 30,
"multiple": false,
"defaultValue": null,
"protected": [false],
"visible": true,
"sortable": false,
"scopeable": false,
"x-comment": "The name of the person"
}
],
"subresources": []
}
Since the options.json
allows us to define constraints at the resource field level, we had the requirement to validate REST requests at runtime.
The first thing that came to my mind was: why not use filters or interceptors to implement the request validation? So I started to investigate and learned that filters and interceptors were not invented for validation in the first place.
While filters modify request or response headers, reader and writer interceptors deal with message bodies.
After reading the Bean Validation section of the RESTEasy documentation, it turned out that this is most likely the way to go. Here is how it works.
First add the following additional RESTEasy dependency to your Gradle project.
compile "org.jboss.resteasy:resteasy-validator-provider-11:3.0.12.Final"
Validation is then turned on automatically when RESTEasy detects resteasy-validator-provider-11
on its classpath. Then use the annotation ValidateRequest
to validate an HTTP request before your method gets called. In the case below I want every POST request on /v1/person/
to be validated, to make sure that the parameter name
does not exceed 30 characters before the method addPerson
gets called.
package io.wangler.resteasy.example.validation;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;
import javax.ws.rs.FormParam;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;
@Path("/v1/person")
public class PersonResource
{
@POST
@ValidateRequest
public Response addPerson(@NotNull @Size(min=1, max=30) @FormParam("name") String name)
{
return Response.status(Response.Status.CREATED).build();
}
}
Et voilà. Your POST request gets validated by the Hibernate validator before the actual method addPerson
gets called.
22 November 2015
Have you ever needed to analyse why a unit test fails in your Gradle build but works perfectly in your IDE? No problem. Simply run Gradle in non-daemon mode and set the org.gradle.debug
property to true.
gradle --no-daemon -Dorg.gradle.debug=true clean check
After that set your breakpoints in your source code and attach your IDE debugger to the Gradle process.
31 October 2015
In production, log files can easily become large in no time. Many common editors cannot properly handle a large text file, and therefore it usually takes quite some time to browse through a large log file.
If you’re working with Linux or Mac OS X, sed
is a wonderful tool to cut your large log file into pieces. Let’s say you need to analyse a log file called application.log
that contains the most interesting content between lines 5400 and 5623. The following command will extract exactly this range for you and print it to stdout,
which is your console.
sed -n '5400,5623p' application.log
It might be handy to redirect the output to a new file by using sed -n '5400,5623p' application.log > application5400-5623.log.
03 April 2013
Currently I am working on a task to scan documents to PDF and retrieve their content. This article explains how to do it if you do not have a searchable PDF. The following commands have been evaluated on Ubuntu Linux 12.10 and will most likely work on any other Debian-based distribution.
sudo apt-get install tesseract-ocr
To create a test document I used LibreOffice Writer and saved it as a PDF. Make sure the document contains the language you try to capture using OCR. Ghostscript then converts the PDF into a multi-page, G4-compressed TIFF.
gs -o multipage-tiffg4.tif -sDEVICE=tiffg4 multipage-input.pdf
The following tells Tesseract to scan the TIFF called multipage-tiffg4.tif using an English dictionary and store the captured output in a file called multipage-tiffg4-ocr-capture.txt.
The .txt
extension is added by Tesseract itself.
tesseract multipage-tiffg4.tif multipage-tiffg4-ocr-capture -l eng
You made it! Enjoy the result.
16 July 2012
Recently I have learned that I can use iText to determine the number of pages of a single or multi page TIFF document. Here is how it works.
private byte[] tiffContent; // the TIFF document as a byte array
int numberOfPages = TiffImage.getNumberOfPages(new RandomAccessFileOrArray(tiffContent));
Isn’t it simple?
02 May 2012
I recently found a tweet by Kristian Rosenvold talking about performance improvements on multi module Maven 2/3 projects. Our build process takes quite an amount of time, and therefore performance improvements are always very welcome on my company’s software project.
The tweet leads to a Gist on GitHub that describes a new version of the Plexus compiler that is used by the Maven Compiler Plugin. Nice! So I applied the explicit dependency in my <root> pom.xml in the <pluginManagement> section (see the listing below).
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<dependencies>
<dependency>
<groupId>org.codehaus.plexus</groupId>
<artifactId>plexus-compiler-javac</artifactId>
<version>1.8.6</version>
</dependency>
</dependencies>
</plugin>
Then I asked Jenkins to run the build several times, and I was really surprised by the result. On my multi module project a full build consumes about 12-15 minutes. After applying that new Plexus version I managed to decrease the build time to about 7-8 minutes. In my case that is roughly a 30% - 45% performance improvement!
30 December 2011
Groovy is just wonderful. Check out the following Groovy listing. With Groovy you can easily implement dynamic property access and method calls.
class WellThatIsGroovy {
String name
Date bar
}
def x = 'name'
def j = new WellThatIsGroovy(name : 'hzasdjkfhjk', bar: new Date())
println j."${x}"
println j.'bar'.format('dd.MM.yyyy HH:mm:ss')
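For comparison: the closest plain-Java equivalent of j."${x}" needs reflection rather than Groovy’s dynamic dispatch. A minimal sketch (the class name NotSoGroovy is mine) of the same dynamic property access:

```java
import java.lang.reflect.Field;

public class NotSoGroovy {

    public String name = "hzasdjkfhjk";

    public static void main(String[] args) throws Exception {
        NotSoGroovy j = new NotSoGroovy();
        String x = "name";

        // Where Groovy simply writes j."${x}", Java has to resolve
        // the field by name at runtime via reflection.
        Field field = NotSoGroovy.class.getField(x);
        System.out.println(field.get(j)); // prints hzasdjkfhjk
    }
}
```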
Have a go with this script at the Groovy Web Console.
Older posts are available in the archive.