luges (Linux User Group Esslingen)

October 01, 2014

luges Gathering

luges Stammtisch - 2014-10-01

This month's get-together will be held on 2014-10-01 at 8pm at our regular pub (Trödler).

October 01, 2014 12:00 AM

September 10, 2014

luges Gathering

luges Stammtisch - 2014-09-10 [rescheduled]

This month's get-together will be held on 2014-09-10 at 8pm at our regular pub (Trödler).

September 10, 2014 12:00 AM

August 13, 2014

luges Gathering

luges Stammtisch - 2014-08-13 [rescheduled]

This month's get-together will be held on 2014-08-13 at 8pm at our regular pub (Trödler).

August 13, 2014 12:00 AM

August 10, 2014

Alexander König

Ruining a perfectly good phone

I had just finished tuning my ownCloud sync setup when – after years of smooth, unharmed operation despite numerous cement-terminated falls – the better part of my N9's gorilla glass finally decided to break apart as the phone left the bike mount mid-ride. It seems the mount broke due to the modifications I had made to stop it from unintentionally pressing buttons.

[Picture of the shattered gorilla glass]

Hopefully I will be able to get my hands on another (retired) N9 next week so I can use that phone's display to replace the broken one. That suits me, as I wouldn't know which new phone to buy right now; for some reason the Ubuntu Edge I ordered never shipped.

This way I can continue using SyncEvolution with my little script to sync with ownCloud, which uses some MeeGo D-Bus magic to pop up a short message informing me when the sync is complete. As I failed at ash arithmetic the script feels a little clumsy, but it seems to do what it should.

by alex at August 10, 2014 09:54 AM

July 02, 2014

luges Gathering

luges Stammtisch - 2014-07-02

This month's get-together will be held on 2014-07-02 at 8pm at our regular pub (Trödler).

July 02, 2014 12:00 AM

June 04, 2014

luges Gathering

luges Stammtisch - 2014-06-04

This month's get-together will be held on 2014-06-04 at 8pm at our regular pub (Trödler).

June 04, 2014 12:00 AM

May 07, 2014

luges Gathering

luges Stammtisch - 2014-05-07

This month's get-together will be held on 2014-05-07 at 8pm at our regular pub (Trödler).

May 07, 2014 12:00 AM

April 02, 2014

luges Gathering

luges Stammtisch - 2014-04-02

This month's get-together will be held on 2014-04-02 at 8pm at our regular pub (Trödler).

April 02, 2014 12:00 AM

March 05, 2014

luges Gathering

luges Stammtisch - 2014-03-05

This month's get-together will be held on 2014-03-05 at 8pm at our regular pub (Trödler).

March 05, 2014 12:00 AM

February 28, 2014

Adrian Reber

Cluster Updated to CentOS 6.5 (IGB/PTP Problems)

Normally I would not mention that our Linux cluster was updated. But as the update to CentOS 6.5 produced some strange errors, I thought I would write it down in case somebody else runs into the same errors.

Our cluster has a bit more than 200 nodes and all nodes run disk-less with a read-only filesystem mounted over NFS. Until now we were using Scientific Linux 5.5, and it was time to update to something newer: CentOS 6.5.

So all nodes were shut down and then started with the new CentOS 6.5 image, and everything seemed fine. After a few minutes, however, about 30 nodes went offline. The hardware on all nodes is the same, and it seemed strange that 30 nodes should develop the same hardware error right after a software upgrade. I was not able to contact the affected systems over Ethernet, but they were still answering ping requests over InfiniBand. I could not log in to the affected systems as the filesystem was mounted over Ethernet and not InfiniBand. Going to the console of the systems I saw that they were still up and running but were not reachable over Ethernet. The link was still active and the kernel detected when the link went up or down, but the driver of the Ethernet card refused to answer any packets.

Without Ethernet it was hard to debug, as the systems have no local drive and as soon as the Ethernet driver stopped working, logging in was no longer possible.

Looking at the boot logs I saw that the systems start up with the wrong date, which is then corrected by NTP during boot. I also saw that the moment the time was corrected, the systems stopped working, at least most of the time.

Looking at the parameters of the network driver (igb) to find some debug options, I saw that it has a dependency on the ptp module. I had no idea what PTP was, but the Internet told me that it is the Precision Time Protocol and that it is a feature which was enabled in RHEL 6.5 and therefore also in the CentOS 6.5 we use. The network driver also stopped working as soon as I tried to write the correct time to the RTC using hwclock.
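
Both the parameters and the dependency can be inspected on a running node; a quick check (generic modinfo usage, not taken from the post):

# modinfo lists the module parameters (parm) and the dependency on the ptp module
modinfo igb | grep -E '^(depends|parm)'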

On some of the systems the time stored in the RTC was more than 3.5 years in the past. The reason for this might be that most of the time the systems are not shut down cleanly but only powered off or power-cycled using ipmitool, because the systems are disk-less and have a read-only filesystem. But this also means that hwclock is never run on shutdown to sync the system time to the RTC.

Setting SYNC_HWCLOCK to yes in /etc/sysconfig/ntpdate syncs the current time to the RTC, and after the next reboot all my problems were gone.
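
The change itself is a single line:

# /etc/sysconfig/ntpdate
# write the NTP-corrected time back to the hardware clock after ntpdate runs
SYNC_HWCLOCK=yes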

Syncing the RTC to a reasonable value solved my problem, but it still looks like a bug that the network driver stops working after the time is changed.

by adrian at February 28, 2014 09:57 PM

February 20, 2014

Adrian Reber

Checkpoint and almost Restart in Open MPI

Now that checkpoint/restart with CRIU has been possible since Fedora 19, I have started adding CRIU support to Open MPI. With my commit 30772 it is now possible to checkpoint a process running under Open MPI. The restart functionality is not yet implemented but should be available soon. I have a test case (orte-test) which prints its PID and sleeps one second in a loop, which I start under orterun like this:

/path/to/orterun --mca ft_cr_enabled 1 --mca opal_cr_use_thread 1 --mca oob tcp --mca crs_criu_verbose 30 --np 1 orte-test

The options have the following meaning:

  • --mca ft_cr_enabled 1
    • ft stands for fault tolerance
    • cr stands for checkpoint/restart
    • this option enables the checkpoint/restart functionality
  • --mca opal_cr_use_thread 1: use an additional thread to control checkpoint/restart operations
  • --mca oob tcp: use TCP instead of unix domain sockets (the socket code needs some additional changes for C/R to work)
  • --mca crs_criu_verbose 30: print all CRIU debug messages
  • --np 1: spawn one test case

The output of the test case looks like this:


[dcbz:12563] crs:criu: open()
[dcbz:12563] crs:criu: open: priority = 10
[dcbz:12563] crs:criu: open: verbosity = 30
[dcbz:12563] crs:criu: open: log_file = criu.log
[dcbz:12563] crs:criu: open: log_level = 0
[dcbz:12563] crs:criu: open: tcp_established = 1
[dcbz:12563] crs:criu: open: shell_job = 1
[dcbz:12563] crs:criu: open: ext_unix_sk = 1
[dcbz:12563] crs:criu: open: leave_running = 1
[dcbz:12563] crs:criu: component_query()
[dcbz:12563] crs:criu: module_init()
[dcbz:12563] crs:criu: opal_crs_criu_prelaunch
[dcbz:12565] crs:criu: open()
[dcbz:12565] crs:criu: open: priority = 10
[dcbz:12565] crs:criu: open: verbosity = 30
[dcbz:12565] crs:criu: open: log_file = criu.log
[dcbz:12565] crs:criu: open: log_level = 0
[dcbz:12565] crs:criu: open: tcp_established = 1
[dcbz:12565] crs:criu: open: shell_job = 1
[dcbz:12565] crs:criu: open: ext_unix_sk = 1
[dcbz:12565] crs:criu: open: leave_running = 1
[dcbz:12565] crs:criu: component_query()
[dcbz:12565] crs:criu: module_init()
[dcbz:12565] crs:criu: opal_crs_criu_reg_thread
Process 12565
Process 12565
Process 12565

To start the checkpoint operation the Open MPI tool orte-checkpoint is used:

/path/to/orte-checkpoint -V 10 `pidof orterun`

which outputs the following:


[dcbz:12570] orte_checkpoint: Checkpointing...
[dcbz:12570] PID 12563
[dcbz:12570] Connected to Mpirun [[56676,0],0]
[dcbz:12570] orte_checkpoint: notify_hnp: Contact Head Node Process PID 12563
[dcbz:12570] orte_checkpoint: notify_hnp: Requested a checkpoint of jobid [INVALID]
[dcbz:12570] orte_checkpoint: hnp_receiver: Receive a command message.
[dcbz:12570] orte_checkpoint: hnp_receiver: Status Update.
[dcbz:12570] [ 0.00 / 0.08] Requested - ...
[dcbz:12570] orte_checkpoint: hnp_receiver: Receive a command message.
[dcbz:12570] orte_checkpoint: hnp_receiver: Status Update.
[dcbz:12570] [ 0.00 / 0.08] Pending - ...
[dcbz:12570] orte_checkpoint: hnp_receiver: Receive a command message.
[dcbz:12570] orte_checkpoint: hnp_receiver: Status Update.
[dcbz:12570] [ 0.00 / 0.08] Running - ...
[dcbz:12570] orte_checkpoint: hnp_receiver: Receive a command message.
[dcbz:12570] orte_checkpoint: hnp_receiver: Status Update.
[dcbz:12570] [ 0.06 / 0.14] Locally Finished - ...
[dcbz:12570] orte_checkpoint: hnp_receiver: Receive a command message.
[dcbz:12570] orte_checkpoint: hnp_receiver: Status Update.
[dcbz:12570] [ 0.00 / 0.14] Checkpoint Established - ompi_global_snapshot_12563.ckpt
[dcbz:12570] orte_checkpoint: hnp_receiver: Receive a command message.
[dcbz:12570] orte_checkpoint: hnp_receiver: Status Update.
[dcbz:12570] [ 0.00 / 0.14] Continuing/Recovered - ompi_global_snapshot_12563.ckpt
Snapshot Ref.: 0 ompi_global_snapshot_12563.ckpt

orte-checkpoint tries to connect to the previously started orterun process and requests that a checkpoint be taken. orterun outputs the following after receiving the checkpoint request:


[dcbz:12565] crs:criu: checkpoint(12565, ---)
[dcbz:12565] crs:criu: criu_init_opts() returned 0
[dcbz:12565] crs:criu: opening snapshot directory /home/adrian/ompi_global_snapshot_12563.ckpt/0/opal_snapshot_0.ckpt
[dcbz:12563] 12563: Checkpoint established for process [56676,0].
[dcbz:12563] 12563: Successfully restarted process [56676,0].
Process 12565

At this point the checkpoint has been written to disk and the process continues (printing its PID).

For complete checkpoint/restart functionality I still have to implement the restart part in Open MPI, and I also have to take care of the unix domain sockets (shutting them down for the checkpoint).

This requires the latest criu package (criu-1.1-4) which includes headers to build Open MPI against CRIU as well as the CRIU service.

by adrian at February 20, 2014 08:21 PM

February 19, 2014

Adrian Reber

Using the ownCloud address book in mutt

Now that I have been syncing my ownCloud address book to my mobile devices and my laptop, I was missing this address book in mutt. But using pyCardDAV and the instructions at http://got-tty.org/archives/mutt-kontakte-aus-owncloud-nutzen.html it was easy to integrate the ownCloud address book into mutt. As pyCardDAV is already packaged for Fedora, it was not much more work than running yum install python-carddav and editing ~/.config/pycard/pycard.conf to get the address book synced.
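
To keep the local copy fresh, the sync tool that ships with pyCardDAV (pycardsyncer) can be run periodically, for example from cron; a sketch (the schedule is just an example):

# crontab entry: sync the ownCloud address book every 30 minutes
*/30 * * * * /usr/bin/pycardsyncer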

I was already using an LDAP address book in mutt, so I had to extend the existing configuration to:
set query_command = "~/bin/mutt_ldap.pl '%s'; /usr/bin/pc_query -m '%s'"

Now, whenever I press CTRL+T during address input, first the LDAP server is queried and then my local copy of the ownCloud address book.

by adrian at February 19, 2014 10:47 AM

February 17, 2014

terminatorX

Updated RPMFusion packages available

Fedora users can now install pre-built terminatorX packages from the RPMFusion repository. The 3.90 packages are currently in the testing repository and should move on to updates in the next few days.

by alex at February 17, 2014 07:36 PM

February 13, 2014

terminatorX

PPA builds for release 3.90 available

You can now install terminatorX 3.90 on Ubuntu 13.10 via the terminatorX PPA; details on how to use that archive are available in the download section.

by alex at February 13, 2014 10:31 PM

February 12, 2014

terminatorX

terminatorX 3.90

A new terminatorX release is available, grab the tarball from the download section if you want to give it a try. While still GTK+2 based, this release completes the first half of the GTK+3 migration guide, so expect the next releases to be GTK+3 based. Aside from lots of cleanups addressing deprecated APIs, this release also brings:

  • a gradient for the sample widget to freshen up the UI a bit
  • a fix for Bug #33
  • delayed initialization for the jack engine (when jack is not activated via preferences) to avoid unnecessary start-up delays

 

by alex at February 12, 2014 09:02 PM

February 05, 2014

luges Gathering

luges Stammtisch - 2014-02-05

This month's get-together will be held on 2014-02-05 at 8pm at our regular pub (Trödler).

February 05, 2014 12:00 AM

January 25, 2014

terminatorX

SCM Migration

Finally, the terminatorX sources have a new home: the source code repository is now using git. We also have one of those neat gitweb instances up and running here.

More details are available in the new Code section.

by alex at January 25, 2014 12:46 PM

January 22, 2014

terminatorX

Site Revamped

In preparation for the upcoming terminatorX release this site has been overhauled. The layout generated by the original handcrafted scripts looked rather antiquated, so the scripts were retired in favour of WordPress with all its goodies including comments and HTML5 media playback.

In case you have visited terminatorX.org before, you should still find everything you could find before, although presented in a much more aesthetically pleasing form.

by alex at January 22, 2014 11:31 PM

January 08, 2014

luges Gathering

luges Stammtisch - 2014-01-08

This month's get-together will be held on 2014-01-08 at 8pm at our regular pub (Trödler).

January 08, 2014 12:00 AM

December 04, 2013

luges Gathering

luges Stammtisch - 2013-12-04

This month's get-together will be held on 2013-12-04 at 8pm at our regular pub (Trödler).

December 04, 2013 12:00 AM

November 06, 2013

luges Gathering

luges Stammtisch - 2013-11-06

This month's get-together will be held on 2013-11-06 at 8pm at our regular pub (Trödler).

November 06, 2013 12:00 AM

November 04, 2013

Adrian Reber

New external RAID

Today a new external RAID (connected via Fibre Channel) was attached to our mirror server. To create the filesystem (XFS) I used this command:

mkfs -t xfs -d su=64k -d sw=13 /dev/sdf1

According to https://raid.wiki.kernel.org/index.php/RAID_setup#XFS these are the correct options for 13 data disks (15 disks in RAID6 plus 1 hot spare) and a stripe size of 64k.
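
Once the filesystem is mounted, the geometry XFS picked up can be double-checked; with a 4k block size the stripe unit and width should show up as sunit=16 and swidth=208 blocks (16 = 64k / 4k, 208 = 13 * 16). The mount point below is just an example:

xfs_info /srv/mirror | grep swidth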

by adrian at November 04, 2013 04:07 PM

October 02, 2013

luges Gathering

luges Stammtisch - 2013-10-02

This month's get-together will be held on 2013-10-02 at 8pm at our regular pub (Trödler).

October 02, 2013 12:00 AM

September 08, 2013

Adrian Reber

Dynamic DNS

For the last ten years I have wanted to set up my own dynamic DNS service but was never motivated enough. Recently enough motivation was provided, and the scripts from http://www.fischglas.de/software/dyn/ made it really easy to set up a dynamic DNS service using bind. The following changes were necessary in the named.conf file:

zone "dyn.domain" in {
        type master;
        file "db.dyn.domain";
        allow-update {
                key host.domain.;
        };
};

Whenever the IP address of my host changes, I load a URL with my hostname and password encoded in it. The script behind the URL checks whether hostname and password are correct and updates the zone file using nsupdate with a TTL of 120 seconds.
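
The nsupdate call behind that script boils down to something like the following sketch (key file, names and the example address are placeholders, not taken from the actual setup):

nsupdate -k /etc/dyn/host.domain.key <<EOF
server 127.0.0.1
zone dyn.domain
update delete host.dyn.domain. A
update add host.dyn.domain. 120 A 192.0.2.10
send
EOF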

The script uses a simple configuration file (/etc/dyn/dyn.cfg) with the following content:

dns.key.name:host.domain.
dns.key:yyeofEWfgvdfgdfgerX==
authfile:/etc/dyn/secrets
dns.host:host.domain
debug:0

by adrian at September 08, 2013 12:59 PM

September 05, 2013

Adrian Reber

bcache Follow-Up

After using bcache for about three weeks it still works without any problems. I am serving around 700GB per day from the bcache device, and looking at the munin results cache hits average about 12000 while cache misses average around 700. So, judging only by the statistics, it still seems to work very effectively for our setup.

by adrian at September 05, 2013 07:03 PM

September 04, 2013

luges Gathering

luges Stammtisch - 2013-09-04

This month's get-together will be held on 2013-09-04 at 8pm at our regular pub (Trödler).

September 04, 2013 12:00 AM

August 21, 2013

Adrian Reber

RPM Fusion’s MirrorManager moved

After running RPM Fusion's MirrorManager instance for many years on Fedora, I moved it to a CentOS 6.4 VM. This was necessary because the MirrorManager installation was really ancient and still running from a modified git checkout I did many years ago. I expected the biggest obstacle of this upgrade and move to be MirrorManager's database upgrade, as its schema has changed over the years. But I was fortunate: MirrorManager included all the necessary scripts to update the database (thanks Matt), even from the ancient version I was running.

RPM Fusion's MirrorManager instance uses postgresql to store its data, so I dumped the data on the old system and imported it into the database on the new system. MirrorManager stores information about the files as pickled python data in the database, and those columns could not be imported due to problems with the character encoding. As this data is provided by the master mirror, I just emptied those columns and after the first run MirrorManager recreated that information.
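
The dump and import itself is plain postgresql tooling; a minimal sketch (the database name is illustrative, not taken from the post):

pg_dump mirrormanager > mirrormanager.sql   # on the old host
createdb mirrormanager                      # on the new host
psql mirrormanager < mirrormanager.sql      # import the dump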

Moving the MirrorManager instance to a VM means that, if you are running an RPM Fusion mirror, the crawler which checks whether your mirror is up to date will now connect to your mirror from another IP address (129.143.116.115). The data collected by MirrorManager's crawler is then used to create http://mirrors.rpmfusion.org/mm/publiclist/ and the mirrorlist used by yum (http://mirrors.rpmfusion.org/mirrorlist?repo=free-fedora-updates-released-19&arch=x86_64). There are currently four systems serving as mirrors.rpmfusion.org.

Looking at yesterday’s statistics (http://mirrors.rpmfusion.org/statistics/?date=2013-08-20) it seems there were about 400000 accesses per day to our mirrorlist servers.

by adrian at August 21, 2013 02:36 PM

August 20, 2013

Adrian Reber

bcache on Fedora 19

After upgrading our mirror server from Fedora 17 to Fedora 19 two weeks ago I was curious to try out bcache. Knowing how important filesystem caching is for a file server like ours, we have always tried to have as much memory as "possible". The current system has 128GB of memory and at least 90% of it is used as filesystem cache. So bcache sounds like a very good idea to provide another layer of caching for all the IO we are doing. By chance I had an external RAID available with 12 x 1TB hard disk drives, which I configured as a RAID6, and 4 x 128GB SSDs, which I configured as a RAID10.

After modprobing the bcache kernel module and installing the necessary bcache-tools I created the bcache backing device and caching device as described here. I then created the filesystem the same way I did with our previous RAIDs: for a RAID6 with 12 hard disk drives and a RAID chunk size of 512KB I used mkfs.ext4 -b 4096 -E stride=128,stripe-width=1280 /dev/bcache0, although I am unsure how useful these options are when using bcache.
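
For reference, the creation steps follow the usual bcache-tools procedure; a sketch (the md device names are illustrative, not the actual ones used here):

make-bcache -B /dev/md0    # backing device (the RAID6), shows up as /dev/bcache0
make-bcache -C /dev/md1    # cache device (the SSD RAID10), prints its cache set UUID
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach   # attach cache to backing device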

So far it has worked pretty flawlessly. To know what to expect from /dev/bcache0 I benchmarked it using bonnie++ and got 670MB/s for writing and 550MB/s for reading. Again, I am unsure how to interpret these values, as bcache tries to detect sequential IO and bypasses the cache device for sequential IO larger than 4MB.

Anyway. I started copying my fedora and fedora-archive mirror to the bcache device and we are now serving those two mirrors (only about 4.1TB) from our bcache device.

I have created a munin plugin to monitor the usage of the bcache device, and there are many cache hits (right now more than 25K) and some cache misses (about 1K). So it seems that it does what it is supposed to do, and the number of IOs directly hitting the hard disk drives is much lower than it would otherwise be.
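
A plugin like this can be based on the statistics that bcache exports in sysfs; the relevant counters look like this (a sketch of the standard sysfs layout, not the plugin itself):

cat /sys/block/bcache0/bcache/stats_total/cache_hits
cat /sys/block/bcache0/bcache/stats_total/cache_misses
# stats_five_minute, stats_hour and stats_day hold the same counters for shorter windows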

I also increased the cutoff above which sequential IO bypasses the cache from 4MB to 64MB.
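
That cutoff is a runtime tunable in sysfs; the change amounts to something like:

echo 64M > /sys/block/bcache0/bcache/sequential_cutoff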

The user-space tools (bcache-tools) are not yet available in Fedora (as far as I can tell) but I found http://terjeros.fedorapeople.org/bcache-tools/ which I updated to the latest git: http://lisas.de/~adrian/bcache-tools/

Update: as requested, here is the munin plugin: bcache

by adrian at August 20, 2013 02:50 PM

August 17, 2013

Adrian Reber

Remove Old Kernels

Mainly using Fedora, I am accustomed to old kernel images being automatically uninstalled after a certain number of kernel images have been installed using yum. The default is to keep three kernel images installed, and so far this has always worked.
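
On the yum side this limit is the installonly_limit option:

# /etc/yum.conf (excerpt)
# keep at most three kernels (installonly packages) installed
installonly_limit=3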

I am also maintaining a large number of Ubuntu VMs, and every now and then we have the problem that the filesystem fills up because too many kernel images are installed. I have searched for some time, but there seems to be no automatic kernel image removal in apt-get. There is one command which is often recommended, something like:

dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;
s/^[^ ]*[^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge
[1]

This works, but only if you are already running the latest kernel, so I adapted it a little for our needs. Instead of removing all kernel images except the running one, I remove all kernel images except the running and the newest one. Not a big difference, but important for our setup, where we do not reboot all VMs with every kernel image update.
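
The idea boils down to something like the following (a rough sketch, not the actual remove-old-kernels script):

# list installed kernel image packages, drop the running and the newest one,
# and print everything else as candidates for removal
current="linux-image-$(uname -r)"
newest=$(dpkg -l 'linux-image-[0-9]*' | awk '/^ii/ {print $2}' | sort -V | tail -n 1)
dpkg -l 'linux-image-[0-9]*' | awk '/^ii/ {print $2}' | grep -v -e "^$current$" -e "^$newest$"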

Running the script gives me the following output:
# remove-old-kernels

linux-image-3.2.0-23-generic linux-image-3.2.0-36-generic linux-image-3.2.0-37-generic linux-image-3.2.0-38-generic linux-image-3.2.0-39-generic linux-image-3.2.0-40-generic linux-image-3.2.0-43-generic linux-image-3.2.0-45-generic linux-image-3.2.0-48-generic linux-image-3.2.0-49-generic

The output of the script can then be easily used to remove the unnecessary kernel images with apt-get purge.

The script can be downloaded here: remove-old-kernels

And before anybody complains: I know it is not really the most elegant solution and I should not have written it in bash.

by adrian at August 17, 2013 09:17 PM

August 08, 2013

The Administration

Updated to Fedora 19 (Schrödinger’s Cat)

August 08, 2013 12:00 AM

August 07, 2013

luges Gathering

luges Stammtisch - 2013-08-07

This month's get-together will be held on 2013-08-07 at 8pm at our regular pub (Trödler).

August 07, 2013 12:00 AM

July 03, 2013

luges Gathering

luges Stammtisch - 2013-07-03

This month's get-together will be held on 2013-07-03 at 8pm at our regular pub (Trödler).

July 03, 2013 12:00 AM

June 05, 2013

luges Gathering

luges Stammtisch - 2013-06-05

This month's get-together will be held on 2013-06-05 at 8pm at our regular pub (Trödler).

June 05, 2013 12:00 AM

May 08, 2013

luges Gathering

luges Stammtisch - 2013-05-08

This month's get-together will be held on 2013-05-08 at 8pm at our regular pub (Trödler).

May 08, 2013 12:00 AM

April 19, 2013

Adrian Reber

A New Home

After having received my Raspberry Pi in November, I am finally using it. I have connected it to my television using raspbmc. Using XBMC Remote I can control it without the need for a mouse, keyboard or lirc based remote control, and so far it works pretty well. Here are a few pictures with the new case I bought a few days ago:

[Pictures of the Raspberry Pi in its new case]

by adrian at April 19, 2013 12:23 PM

April 03, 2013

luges Gathering

luges Stammtisch - 2013-04-03

This month's get-together will be held on 2013-04-03 at 8pm at our regular pub (Trödler).

April 03, 2013 12:00 AM

March 06, 2013

luges Gathering

luges Stammtisch - 2013-03-06

This month's get-together will be held on 2013-03-06 at 8pm at our regular pub (Trödler).

March 06, 2013 12:00 AM

February 06, 2013

luges Gathering

luges Stammtisch - 2013-02-06

This month's get-together will be held on 2013-02-06 at 8pm at our regular pub (Trödler).

February 06, 2013 12:00 AM

January 24, 2013

Adrian Reber

Process Migration coming to Fedora 19 (probably)

With the recently approved review of the crtools package in Fedora, I have made a feature proposal for checkpoint/restore.

To test checkpoint/restore on Fedora you need to run the current development version of Fedora and install crtools using yum (yum install crtools). Until it is decided whether it will actually be a Fedora 19 feature, and until the necessary changes have been made in the Fedora kernel packages, it is necessary to install a kernel which is not in the repository. I have built a kernel in Fedora's build system which enables the following config options: CHECKPOINT_RESTORE, NAMESPACES, EXPERT.
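
Whether a running kernel was built with the option can be checked against its config file:

grep CONFIG_CHECKPOINT_RESTORE /boot/config-$(uname -r)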

A kernel with these changes enabled is available from koji as a scratch build: http://koji.fedoraproject.org/koji/taskinfo?taskID=4899525

After installing this kernel I am able to migrate a process from one Fedora system to another. For my test case I am migrating a UDP ping pong (udpp.c) program from one system to another while communicating with a third system.

[Diagram of the udpp test setup]

udpp is running in server mode on 129.143.116.10 and on 134.108.34.90 udpp is started in client mode. After a short time I am migrating, with the help of crtools, the udpp client to 85.214.67.247. The following is part of the output on the udpp server:


-->

Received ping packet from 134.108.34.90:38374
Data: This is ping packet 6

Sending pong packet 6
--
-->

Received ping packet from 134.108.34.90:38374
Data: This is ping packet 7

Sending pong packet 7
--
-->

Received ping packet from 85.214.67.247:38374
Data: This is ping packet 8

Sending pong packet 8
--
-->

Received ping packet from 85.214.67.247:38374
Data: This is ping packet 9

Sending pong packet 9
--

So with only small changes to the kernel configuration it is possible to migrate a process by checkpointing and restoring it with the help of crtools.
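
The checkpoint and restore itself is driven by the crtools command line; a rough sketch of the two sides (the image directory and options are illustrative, not the exact invocation used here):

crtools dump -t $(pidof udpp) -D checkpoint --shell-job    # on the source host
# copy the checkpoint directory to the target host, then:
crtools restore -D checkpoint --shell-job                  # on the target host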

by adrian at January 24, 2013 08:44 PM

January 17, 2013

Adrian Reber

If you have too much memory

We have integrated new nodes into our cluster. All of the new nodes have a local SSD for fast temporary scratch data. In order to find the best filesystem options and IO scheduler I have written a script which tries a lot of combinations (80 to be precise) of filesystem options and IO schedulers. As the nodes have 64 GB of RAM, the first run of the script took 40 hours, because I always tried to write twice the size of the RAM in my benchmarks to avoid any caching effects. In order to reduce the amount of available memory I wrote a program called memhog which malloc()s the memory and then also mlock()s it. The usage is really simple:

$ ./memhog
Usage: memhog <size in GB>

I am now locking 56GB with memhog and I reduced the benchmark file size to 30GB.
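
That the memory is really pinned and not just allocated can be verified while memhog is running, for example:

grep VmLck /proc/$(pidof memhog)/status   # amount of memory the process has locked
free -g                                   # overall available memory shrinks accordingly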

So, if you have too much memory and want to waste it… Just use memhog.c.

by adrian at January 17, 2013 02:34 PM

January 15, 2013

Adrian Reber

Kover 6

After having successfully updated libcdio in rawhide to 0.90 and also introduced the split-off libcdio-paranoia in Fedora's development branch, I rebuilt most of the packages depending on libcdio. Two packages no longer built, but their maintainers quickly fixed that. The only remaining broken dependent package was kover. As I am still the upstream of kover, I had to change the code to use the new CD-Text API of libcdio 0.90.

With these changes I have released kover version 6 which is available at http://lisas.de/kover/kover-6.tar.bz2.

by adrian at January 15, 2013 10:31 AM

January 09, 2013

luges Gathering

luges Stammtisch - 2013-01-09

This month's get-together will be held on 2013-01-09 at 8pm at our regular pub (Trödler).

January 09, 2013 12:00 AM

December 05, 2012

luges Gathering

luges Stammtisch - 2012-12-05

This month's get-together will be held on 2012-12-05 at 8pm at our regular pub (Trödler).

December 05, 2012 12:00 AM

November 22, 2012

The Administration

November 15, 2012

The Administration

Installed 10TB of additional disk space. There are now roughly 30TB of disk space used for mirroring.

November 15, 2012 12:00 AM

November 07, 2012

luges Gathering

luges Stammtisch - 2012-11-07

This month's get-together will be held on 2012-11-07 at 8pm at our regular pub (Trödler).

November 07, 2012 12:00 AM

October 10, 2012

luges Gathering

luges Stammtisch - 2012-10-10

This month's get-together will be held on 2012-10-10 at 8pm at our regular pub (Trödler).

October 10, 2012 12:00 AM

October 04, 2012

The Administration

Moved to new hardware. 128GB RAM. 12 cores (24 with hyperthreading).

October 04, 2012 12:00 AM

September 05, 2012

luges Gathering

luges Stammtisch - 2012-09-05

This month's get-together will be held on 2012-09-05 at 8pm at our regular pub (Trödler).

September 05, 2012 12:00 AM

August 08, 2012

luges Gathering

luges Stammtisch - 2012-08-08

This month's get-together will be held on 2012-08-08 at 8pm at our regular pub (Trödler).

August 08, 2012 12:00 AM

July 04, 2012

luges Gathering

luges Stammtisch - 2012-07-04

This month's get-together will be held on 2012-07-04 at 8pm at our regular pub (Trödler).

July 04, 2012 12:00 AM

June 06, 2012

luges Gathering

luges Stammtisch - 2012-06-06 [cancelled]

Unfortunately there will be no get-together this month.

June 06, 2012 12:00 AM

May 02, 2012

luges Gathering

luges Stammtisch - 2012-05-02

This month's get-together will be held on 2012-05-02 at 8pm at our regular pub (Trödler).

May 02, 2012 12:00 AM

April 04, 2012

luges Gathering

luges Stammtisch - 2012-04-04

This month's get-together will be held on 2012-04-04 at 8pm at our regular pub (Trödler).

April 04, 2012 12:00 AM

March 07, 2012

luges Gathering

luges Stammtisch - 2012-03-07

This month's get-together will be held on 2012-03-07 at 8pm at our regular pub (Trödler).

March 07, 2012 12:00 AM

February 01, 2012

luges Gathering

luges Stammtisch - 2012-02-01

This month's get-together will be held on 2012-02-01 at 8pm at our regular pub (Trödler).

February 01, 2012 12:00 AM

January 11, 2012

luges Gathering

luges Stammtisch - 2012-01-11 [rescheduled]

This month's get-together will be held on 2012-01-11 at 8pm at our regular pub (Trödler).

January 11, 2012 12:00 AM

December 07, 2011

luges Gathering

luges Stammtisch - 2011-12-07

This month's get-together will be held on 2011-12-07 at 8pm at our regular pub (Trödler).

December 07, 2011 12:00 AM

November 02, 2011

luges Gathering

luges Stammtisch - 2011-11-02

This month's get-together will be held on 2011-11-02 at 8pm at our regular pub (Trödler).

November 02, 2011 12:00 AM

October 05, 2011

luges Gathering

luges Stammtisch - 2011-10-05

This month's get-together will be held on 2011-10-05 at 8pm at our regular pub (Trödler).

October 05, 2011 12:00 AM