Docker image for Wallabag v2

Sometimes I want to save articles to read later because I don’t have the time to read them right now. There are a couple of free online services you can use for that, like Instapaper or Pocket. These services all work the same way:

  • you give them a link to a webpage you want to read later
  • they scrape the page and store the text
  • when you have time, you open the service in your browser or a mobile app and read the article
  • you can save, delete or star the article

But with my ongoing quest to avoid services and host my own versions of them, I began using my own installation of Wallabag a couple of weeks ago. Of course I decided to run it in a Docker container. Wallabag provides an official Docker image you can use to get started. It works fine, although I had an issue when trying to run it behind a reverse proxy. But I worked around that, and Wallabag in general works well enough that I don’t feel the urge to switch back to Instapaper every time I use it. The Android app could use some love, but – again – it works well enough that I don’t want to hang myself every time I use it.

Then I had a couple of issues with the Docker container on my server. I tried to restart a couple of my containers, and while checking the process list on the server I noticed a defunct process pointing to the s6 and nginx services in the Wallabag container. This led to a full server reboot, because I couldn’t kill the process and I couldn’t get rid of the container.

Because I don’t like rebooting my server just because I have a stray container running, I built my own Docker image for Wallabag based on the php:apache image and without any kind of additional supervisor. I run a couple of other containers based on this image and have never had any issues with them, so I hope there won’t be any stray processes on my server anymore.
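
The gist of the image, as a rough sketch – this is not the published Dockerfile, and the extension and paths are assumptions: Apache with mod_php is the only foreground process, so there is no extra supervisor that could leave defunct children behind.

# rough sketch of the approach, not the published Dockerfile
FROM php:apache
# (assumed) PHP extension for a database-backed Wallabag; the real list differs
RUN docker-php-ext-install pdo_mysql
# Wallabag sources live under /var/www/wallabag, matching the data volume path
COPY . /var/www/wallabag
# no CMD override: php:apache runs apache2-foreground as PID 1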

To use my image, just run the following command:

docker run --rm -p 8080:80 -v $(pwd)/data:/var/www/wallabag/data moritanosuke/wallabag-docker

After the container is up, you can access your Wallabag at http://localhost:8080. All data is stored in the data directory, so you can stop and start the container and all your stored articles will still be there. Please note that I use the --rm option, so the container will be removed as soon as it is stopped.
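
If you’d rather keep the container around between runs, drop --rm and give it a name instead, for example:

docker run -d --name wallabag -p 8080:80 -v $(pwd)/data:/var/www/wallabag/data moritanosuke/wallabag-docker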

Personally I like to run my containers via docker-compose. Here’s an example for your docker-compose.yml:

version: '2'

services:
  wallabag:
    image: moritanosuke/wallabag-docker
    ports:
    - 8080:80
    volumes:
    - ./data:/var/www/wallabag/data
    environment:
    - SYMFONY_ENV=prod

Now you can just run docker-compose up and you should be able to access your Wallabag at http://localhost:8080 again.

My Docker image is somewhat bigger – 260MB for an Apache-based image instead of 150MB for the official image based on Alpine Linux – but I’ll take that if I don’t have to reboot my server every couple of weeks.

If you want to try Wallabag, please give my image a shot and report any issues so I can improve it.

Using haproxy to load balance tomcat instances with Docker

Today I wanted to set up my local environment with a load balancer for my tomcats. To get started, I took a look at the dockercloud-haproxy image and the provided example configuration for docker-compose:

version: '2'
services:
  web:
    image: dockercloud/hello-world
  lb:
    image: dockercloud/haproxy
    links:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80

You can run this example right away and open http://localhost to see the load-balanced hello-world page. To scale the web service, use docker-compose scale web=2 and reload the page a couple of times. You should see the hostname changing with each request, as haproxy uses a round-robin load-balancing strategy by default in this image.
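
To watch the round robin from a shell instead of the browser, a quick loop does the trick:

docker-compose scale web=2
# each request should print the page served by a different container
for i in 1 2 3 4; do curl -s http://localhost; echo; done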

To make this a bit more specific to my use case, I decided to replace the hello-world image with the official tomcat image. Because I like small images, I also decided to use tomcat:alpine.

version: '2'
services:
  web:
    image: tomcat:alpine
    volumes:
      - ./index.jsp:/usr/local/tomcat/webapps/ROOT/index.jsp:ro
  lb:
    image: dockercloud/haproxy
    links:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80

I added a volume and a custom JSP to make this example a bit more my own:

<html>
<body>
This is host <strong><%= java.net.InetAddress.getLocalHost().getHostName() %></strong>.
</body>
</html>

If you run this example with docker-compose up -d && docker-compose scale web=2, you can again reload http://localhost a couple of times and you should see a message like This is host 76ce0ef9f152. Congratulations, you now have a (very simple) load-balanced tomcat setup.

Next steps would be to replace the simple tomcat setup with a real cluster to get session replication.

Dockerfile to build qdirstat

Today I wanted to check how much disk space I’m wasting in my home directory. On Windows I’d use WinDirStat, and on Linux there is the alternative qdirstat. Unfortunately there is no pre-built binary for my distribution, so I decided to build from source. But I don’t want to litter my system with Qt dependencies, so I decided to build it in a Docker container.
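
The Dockerfile boils down to a Debian base image with the Qt build dependencies, plus a build step that runs on container start and drops the binary into a mounted directory. A minimal sketch, assuming qdirstat’s standard qmake build (package names may differ per release):

# sketch: Debian base with the packages qdirstat's qmake build needs
FROM debian:jessie
RUN apt-get update && apt-get -y install git g++ make qtbase5-dev zlib1g-dev
# the host directory gets mounted here at runtime
VOLUME /usr/target
WORKDIR /usr/src
# clone and build on container start, then copy the binary out
CMD git clone https://github.com/shundhammer/qdirstat.git && \
    cd qdirstat && qmake && make && \
    cp src/qdirstat /usr/target/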

First you have to create an image from this Dockerfile:

# copy the Dockerfile into a new directory and run the following command in a terminal in that directory
docker build -t moritanosuke/qdirstat .

Now you can start a container from this image to clone and build qdirstat:

# run the container
docker run --rm -it -v $(pwd):/usr/target moritanosuke/qdirstat

Now you should have the executable in the current directory. The owner will be root, but you should be able to run it on your local machine – assuming you already have all the dependencies to run Qt applications installed. The container was removed right after it completed the build, so the only thing left to do is remove the image:

docker rmi moritanosuke/qdirstat

This is now my preferred way to build random open source projects without messing up my local machine. I used to use full-blown VMs with snapshots, but this is even faster.

Running a local Docker registry

When you’re using Docker extensively, you sometimes need your own registry to push images around your local network – or even just your local machine. With Docker itself, it’s easy to set one up. Here is a small docker-compose.yml which runs a registry and a simple frontend:

registry:
  image: registry:2
  volumes:
  - ./data:/var/lib/registry
  ports:
  - 5000:5000
frontend:
  image: konradkleine/docker-registry-frontend:v2
  links:
  - registry
  ports:
  - 8080:80
  environment:
  - ENV_DOCKER_REGISTRY_HOST=registry
  - ENV_DOCKER_REGISTRY_PORT=5000

The registry persists its data into the data directory, so nothing is lost if you throw away and recreate the container.

Save the file as docker-compose.yml and start it via docker-compose up in the same directory. After that you can open http://localhost:8080 and you’ll see a basic frontend (taken from kwk/docker-registry-frontend).

(Screenshot: Docker Registry with Frontend)

Now, when you push an image to your registry with the following commands, it becomes visible on the frontend:

docker pull alpine && docker tag alpine localhost:5000/alpine
docker push localhost:5000/alpine

Refresh the registry frontend and you’ll see the new image is available. You can now pull it with the following command:

docker pull localhost:5000/alpine

Have fun!

Minimal Docker image for DokuWiki

Yesterday I upgraded my Docker setup from v1.5 to v1.10. I had to do a full apt-get dist-upgrade as well to bring my system up to Debian Jessie. Along the way I discovered that my backup script wasn’t backing up anything from my personal wiki running on DokuWiki. Another case of Schroedinger’s Backup: the backup only exists if it can be successfully restored. ☹ Anyway, the wiki only held a few snippets from my daily development work and some links to software I tend to use, so it wasn’t much of a loss. And because I had to set up my wiki again, I decided to build a minimal container instead of re-using the official php images.

I am already using Alpine Linux for a couple of images at work and I am switching everything that uses busybox over to it as well. So it was a natural decision to build the minimal wiki container on Alpine too. Modifying the Dockerfile was easy, and I also added some ONBUILD instructions to let users add their own users, access control list and DokuWiki configuration. I will probably provide one image to just run a wiki instance and one with the ONBUILD instructions to extend from.
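
In the Dockerfile, those ONBUILD instructions boil down to something like this (the conf path inside the image is an assumption):

# hypothetical ONBUILD lines; they run when a child image is built
ONBUILD COPY local.php /var/www/dokuwiki/conf/local.php
ONBUILD COPY acl.auth.php /var/www/dokuwiki/conf/acl.auth.php
ONBUILD COPY users.auth.php /var/www/dokuwiki/conf/users.auth.php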

Here’s the README:

Create the files local.php, acl.auth.php and users.auth.php according to the DokuWiki documentation. They will be added when you build your own image. Create a Dockerfile in the same directory as your configuration files:

FROM moritanosuke/dokuwiki-docker

Build the Dockerfile:

docker build -t yourname/dokuwiki .

Start your wiki:

docker run -d --name some-dokuwiki -p 8080:80 yourname/dokuwiki

Now you can access your DokuWiki at http://localhost:8080.

You can find the dokuwiki-docker image on Docker Hub and on GitHub.

Updating my ELK stack with GELF appender

A couple of days ago I wrote about setting up an ELK stack with docker-compose. I made some small changes to the setup, so I thought it was worth an update.

The first change is in the docker-compose.yml: enable logstash’s gelf input, move the Log4j socket input to port 12202 and add a UDP port forwarding for Docker on port 12201:

elasticsearch:
  image: elasticsearch
  ports:
  - 9200:9200
logstash:
  image: logstash:latest
  links:
  - elasticsearch:elasticsearch
  ports:
  - 12201:12201/udp
  - 12202:12202
  command: logstash agent -e 'input { gelf { port => "12201" } log4j { mode => "server" port => "12202"} } output { elasticsearch { hosts => ["elasticsearch"] } }'
kibana:
  image: kibana
  links:
  - elasticsearch:elasticsearch
  ports:
  - 5601:5601
  environment:
  - ELASTICSEARCH_URL=http://elasticsearch:9200

This way, logstash listens on port 12201 with the gelf input and on port 12202 with the Log4j socket input.
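
The UDP gelf input is also what Docker’s own gelf log driver speaks, so any container can ship its output straight into logstash, for example:

docker run --log-driver=gelf \
  --log-opt gelf-address=udp://127.0.0.1:12201 \
  alpine echo hello gelf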

Now I modified the log4j.properties for my application and replaced the Log4j SocketAppender with a biz.paluch.logging.gelf.log4j.GelfLogAppender:

log4j.appender.gelf=biz.paluch.logging.gelf.log4j.GelfLogAppender
log4j.appender.gelf.Threshold=INFO
log4j.appender.gelf.Host=udp:127.0.0.1
log4j.appender.gelf.Port=12201
#log4j.appender.gelf.Version=1.1
#log4j.appender.gelf.Facility=java-test
log4j.appender.gelf.ExtractStackTrace=true
log4j.appender.gelf.FilterStackTrace=true
log4j.appender.gelf.MdcProfiling=true
log4j.appender.gelf.TimestampPattern=yyyy-MM-dd HH:mm:ss,SSSS
log4j.appender.gelf.MaximumMessageSize=8192

# These are static fields
log4j.appender.gelf.AdditionalFields=environment=local

The gelf appender is much more configurable, and the search capabilities in logstash are now a bit more useful to me. But I have to add another dependency to my application now:

<dependency>
    <groupId>biz.paluch.logging</groupId>
    <artifactId>logstash-gelf</artifactId>
    <version>1.8.0</version>
</dependency>

You can find the documentation at https://github.com/mp911de/logstash-gelf.

Building an ELK stack with docker-compose

Because I have a hard time searching log files during development (I like to run everything on DEBUG), I decided to build myself an ELK stack (elasticsearch, logstash and kibana) to throw all my logs into and get a nice UI to search for a specific log message.

Fortunately there are official Docker images for all three tools, so everything is easily available – I just needed to figure out how to glue it together.

Because I’m not interested in storing the data over a long period of time, I don’t care about the setup of the elasticsearch engine. When I’m done developing or debugging, I want to throw everything away and start with a clean environment. So I don’t store anything outside of the Docker containers, and I don’t want to write any Dockerfiles myself.
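
Throwing everything away is then just a matter of stopping and removing the containers:

docker-compose stop && docker-compose rm -f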

The way to go is a simple docker-compose.yml which I can start with a single command and have everything set up to accept log messages from my java applications. So, here we go:

elasticsearch:
  image: elasticsearch
  ports:
  - 9200:9200
logstash:
  image: logstash:latest
  links:
  - elasticsearch:elasticsearch
  ports:
  - 12201:12201
  command: logstash agent --debug -e 'input { log4j { mode => "server" port => "12201"} } output { elasticsearch { hosts => ["elasticsearch"] } stdout {} }'
kibana:
  image: kibana
  links:
  - elasticsearch:elasticsearch
  ports:
  - 5601:5601
  environment:
  - ELASTICSEARCH_URL=http://elasticsearch:9200

As you can see, I only use images – no custom Dockerfile needed. I also put the configuration for logstash and kibana into the docker-compose.yml, so when I use another computer (and I do that often) I can just copy this one file, run docker-compose up, open http://localhost:5601 and get going.

I also exposed port 12201 on the logstash host to be able to send log messages from locally running applications. I modified my log4j.properties and added a new appender named logstash with the following configuration:

log4j.rootLogger=INFO,logstash
...
log4j.appender.logstash=org.apache.log4j.net.SocketAppender
log4j.appender.logstash.port=12201
log4j.appender.logstash.remoteHost=127.0.0.1

The org.apache.log4j.net.SocketAppender comes with the default log4j installation, so no funky external dependencies are needed. I just add those lines when deploying locally and the application logs to my ELK stack.
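
Any ordinary Log4j call then ends up in the stack; a minimal sketch (the class is just an example):

import org.apache.log4j.Logger;

public class LogDemo {
    private static final Logger LOG = Logger.getLogger(LogDemo.class);

    public static void main(String[] args) {
        // reaches logstash via the SocketAppender configured above
        LOG.info("hello ELK stack");
    }
}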

I tried to use docker networks, but logstash acted up, and I think it does not particularly like underscores in the elasticsearch hostname. Unfortunately, docker-compose generates hostnames with underscores, so I had to stick with the old way and use links. If you have any insight into how to use a network here, please get in touch.

Docker FTW!

For the past few days I have been playing around with Docker to set up a small testbed with ActiveMQ, StatsD and Grafana. I want to test how to collect metrics from Apache Camel routes and display them via a web frontend in nice graphs. That’s exactly what Grafana does best.

I started with running a few ActiveMQ instances via Docker. Then I created a simple Camel application using camel-archetype-activemq and modified it to write a random number of messages into my already running ActiveMQ instances using the failover transport.

Now, on to the installation of Grafana. To avoid jumping through all the hoops, I decided to spin up another Docker container using kamon-io’s Grafana image. But I ran into some problems when I tried to add a new dashboard to Grafana and add my counters to a panel. I opened an issue, but I already found a workaround for myself: I upgraded the Dockerfile to use Grafana 1.8.1, the latest version, and that fixed my problems. If you have similar problems, try replacing the following lines in the Dockerfile and rebuilding the image:

# Install & Patch Grafana
RUN mkdir /src/grafana && \
    git clone https://github.com/grafana/grafana.git /src/grafana && \
    cd /src/grafana && \
    git checkout v1.8.1
# the urlencoded-metrics patch is no longer applied:
#ADD ./grafana/correctly-show-urlencoded-metrics.patch /src/grafana/correctly-show-urlencoded-metrics.patch
#RUN git apply /src/grafana/correctly-show-urlencoded-metrics.patch --directory=/src/grafana
RUN cd /src/grafana && \
    npm install && \
    npm install -g grunt-cli && \
    grunt build

What I did here is check out v1.8.1 of Grafana and skip the patch for urlencoded metrics that used to be applied to the sources before building Grafana. I hope I get a response on my issue in case this causes problems – but maybe it’s just my local setup…

Anyhow, after working around this little issue, I was able to send metrics into StatsD from my Camel route. Going from zero to two ActiveMQ instances, a running StatsD/Graphite/Grafana stack and my Camel route reporting metrics took me about 2 hours – not counting the fix to the Dockerfile. That’s why I think Docker is here to stay.

Running Minecraft in a Docker container

If you ever want to run a Minecraft server but don’t want to put all the Java and Minecraft files directly onto your server’s filesystem, you can use the following Dockerfile:

FROM google/debian:wheezy
# install Java and wget to fetch the server jar
RUN apt-get -y update && apt-get -y install openjdk-7-jre-headless wget
RUN wget https://s3.amazonaws.com/Minecraft.Download/versions/1.8/minecraft_server.1.8.jar
# default Minecraft server port
EXPOSE 25565
# accept the EULA so the server actually starts
RUN echo eula=true > eula.txt
CMD ["java", "-jar", "minecraft_server.1.8.jar"]

You can then build a Docker image from it with the following command:

sudo docker build -t minecraft .
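
Then start a container from the image and publish the Minecraft port (the minecraft tag is just the example name from the build command above):

sudo docker run -d -p 25565:25565 minecraft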

After that, you have a Minecraft 1.8 server running and you can connect to 127.0.0.1:25565 with your Minecraft client. Enjoy!