Roon and NAS SSD cache

Using a dedicated NUC for Roon and a Synology DS918+ NAS as the storage for all music seemed like the perfect solution. After a while something started to annoy me.

The four spinning hard drives in the DS918+ fell asleep after twenty minutes. So if I had not played any music for twenty minutes, they had to spin up again before I could hear anything. This process probably takes less than a minute, but encountered often enough it started to feel like an eternity.

I had already expanded the DS918+ with a 4GB SO-DIMM, and even though the extra memory is allegedly also used for caching, I did not notice any improvement; memory usage remained quite low. My thoughts turned to the M.2 slots at the bottom of the NAS.

In articles and posts I read about the vulnerability of SSD caches, but I wondered how serious an issue this would be, especially considering my use case. I would use a read-only cache that would not have to deal with constant, intensive reads and writes. The idea being that it would only be filled with albums I listen to regularly.

So I ordered a Synology SNV3400-400G, a 400GB M.2 NVMe drive. The install was a piece of cake. No screws needed. Just pop out one of the covers on the bottom of the NAS, insert the drive, return the cover and start up the NAS. After that I only had to assign the SSD to cache duties.

At first, of course, the cache was empty, so when I played some music after the drives had gone to sleep, I still had to wait for the one-minute spin-up. Even after playing the same album several times, I could hardly see the cache grow. I felt a bit disappointed.

I have no idea how the caching algorithm works, but I am starting to see a change. After a week I do not experience the lag anymore. Albums I play frequently start immediately. The cache hit rate over this week is over 80%, which sounds good, no pun intended. The user interface of the NAS also feels snappier.

It is a bit surprising to see that the cache size is still only 4.6GB. I play a lot of hi-res audio and would expect a larger cache. Let us see how this cache evolves with longer use.

High Availability Kubernetes cluster

Yesterday a computer, which I had ordered quite a while ago, finally arrived. It is an Intel NUC 10i7FNH with 64GB of memory and a 500GB Samsung 970 EVO Plus. I now have three of these. All the same specs.

I bought them over a period of several months. The 10i7FNH is not the most current model, yet the price of every machine I bought was higher than that of the previous one. Between the first and the last machine there is a price difference of 160 euros. Quite a difference considering the first one cost 790 euros. It is just another effect of COVID-19. Let us hope we can leave this whole pandemic behind us soon.

The Kubernetes cluster now has 144GB of RAM to run applications in. There are three master nodes for High Availability; with three members, etcd can keep quorum even if one of them fails.
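To make the quorum point concrete (my own illustration, not from the cluster itself): etcd stays available as long as a majority of its members is up, and that majority works out to n/2 + 1 in integer arithmetic.

```shell
# Majority needed for an etcd cluster of n members.
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 1   # prints 1 -> losing the single member takes the cluster down
quorum 3   # prints 2 -> one of three members may fail
quorum 5   # prints 3 -> two of five members may fail
```

This is also why an even member count buys you nothing: four members still only tolerate one failure, same as three.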

Adding another master and worker node to a running Kubernetes cluster is quite a job. I could not have done it without the help of this article.
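For a rough idea of what is involved, the kubeadm flow looks something like the sketch below. The flags are kubeadm's own; the endpoint, token, hash and certificate key are placeholders that the commands on an existing master print for you.

```shell
# On an existing master: create a fresh join token and re-upload the
# control-plane certificates (this prints a certificate key).
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs

# On the new master: join as an additional control-plane node.
kubeadm join <endpoint>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <cert-key>

# On the new worker: the same join command, minus the control-plane flags.
kubeadm join <endpoint>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```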

Now I can safely wait for one of the SSDs to fail. Master nodes write so much data to disk that it is just a matter of time before one of the consumer SSDs in the nodes gives up. Or at least that is my expectation. We will see.

Continuous Integration drama

When I read that Bitbucket Server is going to be discontinued, I could do one of two things: wait, since Bitbucket Server will remain usable for quite a long time, or go out and search for a new solution. I did the latter. Well, at least the searching part. I am still trying to find the best solution.

I am still trying to work with Bitbucket Cloud, but I am running into some issues:

  1. I am still not very pleased with having to put the credentials for my Nexus server into someone else’s web application.
  2. Pipelines in Bitbucket Cloud aren’t very fast.
  3. Creating a Docker image with the spring-boot-maven-plugin fails at this time and it seems this problem isn’t going to be fixed any time soon.
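For context on that last point, the failing step is roughly the one below. The `spring-boot:build-image` goal is the real plugin goal (Spring Boot 2.3+); the image name is a placeholder, and my assumption is that the goal's need for a Docker daemon is what makes it awkward in a hosted pipeline.

```shell
# Builds an OCI image with Cloud Native Buildpacks via the
# spring-boot-maven-plugin; needs access to a Docker daemon.
./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=myorg/myapp
```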

I’d better have a look at GitLab and see what it can do for me, but there’s a good chance I’ll stick with Bitbucket Cloud and my own Jenkins server. More on that later.

And then my Bitbucket server died

One day I moved all my LXC containers to one host. This was done to use one of my NUCs as a Roon ROCK server. Moving the containers was easy with LXC. Just take a snapshot of the container and copy it to another server. Start it there and well, that was that.
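The move sketched above looks roughly like this with the LXD CLI. The container and remote names are placeholders, and the destination host has to be configured as a remote first.

```shell
lxc stop bitbucket                       # stop the container before copying
lxc snapshot bitbucket pre-move          # take a snapshot, just in case
lxc copy bitbucket otherhost:bitbucket   # copy it to the other server
lxc start otherhost:bitbucket            # start it on the destination
```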

In the back of my head a voice was telling me that all my LXC containers have boot.autostart set to true. It was also telling me this might become an issue. What if the Bitbucket server starts before the PostgreSQL server running on the same host?
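In hindsight, LXD's autostart options can express exactly this ordering. The config keys are real LXD keys; the container names here are placeholders for my setup.

```shell
# Higher boot.autostart.priority starts earlier; boot.autostart.delay waits
# the given number of seconds after starting that container before the next.
lxc config set postgresql boot.autostart.priority 10
lxc config set postgresql boot.autostart.delay 30
lxc config set bitbucket boot.autostart.priority 5
```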

Anyway, quite soon, after a few reboots, I got into trouble. Bitbucket was stuck at “Migrating home directory”.

I’m not saying booting all containers at the same time is the problem. It might be. It might also be that I shut down the SQL server before Bitbucket.

Looking for a solution wasn’t easy as I couldn’t find anything in the Bitbucket logs:

******@bitbucket ERROR: function aurora_version() does not exist at character 8

Apparently there is some sort of PostgreSQL implementation you can run on the Amazon cloud that is called Aurora. You learn something new every day…

I thought I had found the root cause, but also realised that all the people mentioning these log messages weren’t saying their server didn’t boot.

Then I started googling the message “Migrating home directory” and quickly had a solution. It seems the database migration lock was still held; the DATABASECHANGELOGLOCK table belongs to Liquibase, which Bitbucket uses for schema migrations. This statement allowed my server to boot Bitbucket successfully again:

UPDATE DATABASECHANGELOGLOCK SET LOCKED=false, LOCKGRANTED=null, LOCKEDBY=null where ID=1;

The dreadful missing JDK dialog on macOS

I’m not a fan of Eclipse or products derived from Eclipse. I think they’re slow, not very intuitive and the program state never seems to be up to date, but sometimes unfortunately there’s no alternative.

Sometimes I use Apache Directory Studio to edit my LDAP data. Today I installed Directory Studio, but it wouldn’t start because it couldn’t find the JDK. I have unpacked several JDK tarballs in my /opt directory, but Directory Studio doesn’t know that.

Of course I immediately started googling, and a lot of people suggested putting -vm <path to java> in Contents/Eclipse/ApacheDirectoryStudio.ini inside the application’s folder in /Applications. Unfortunately this didn’t work for me.

I started checking other files in the application folder and found Contents/Info.plist. Near the bottom of that file, inside the array tag, there’s a comment about using a particular Java version. Adding this to that array did the trick: <string>-vm</string><string>/somepath/java</string>

Just now I installed Eclipse to see if it suffers from the same problem out of the box, and it does. The solution is the same for Eclipse: just add the vm information to the Info.plist in the Contents folder of the Eclipse application folder.