Together with the rise of Java, Java application servers (e.g. Tomcat, IBM WAS) also came into the game. The idea of an application server was to have one JVM run multiple deployments (e.g. jar, ear, war), reducing the memory footprint compared to running lots of separate processes.
Historically, teams were clearly split between application administrators, who were also responsible for the operational aspects, and developers, who focused on building the business features of their product and not so much on how their code ran in terms of performance, reliability and so on. App admins, on the other hand, took a keen interest in how the code ran on the applications they administered, so that they would have a stable system. As one of them, even with optimized code, I still had a job that killed a particular Java process every 24 hours, just to be safe, because we had processes that would simply freeze after a week.
Then containers arrived, along with the idea that everything could be packed into images and those images could run anywhere. Kubernetes would be used for orchestration, and monolithic applications would be migrated to the cloud by rewriting them as microservices.
The app admins would stay behind to administer the monoliths, while the developers went off to build new, small microservice-based applications in Docker.
In some of these new teams there were no application administrators to raise awareness of JRE, JVM and garbage-collector issues, and no one else took over that role. Kubernetes was simply used to deploy the shiny new Docker images. As a result, not many people stopped to wonder whether Java is really suitable for this kind of architecture from a reliability point of view.
Moving from an old app-admin role to a new SRE position, I learned Go (Golang), which looks like a much better fit for microservices.
Meanwhile, Kubernetes clusters are full of Java code, and there are apps that cannot start without a minimum of 200MB of RAM, because the Java code runs inside a container image carrying its own JVM, JRE, JARs, CVE security patches and so on (often an image size of 500+ MB).
While learning Go, I started to play around by writing small, compiled Go microservices:
- A static-content service: 11MB image size, 9MB of RAM used
- A DB command-model service: 13MB image size, 23MB of RAM used
(*both the Java and Go screenshots are from services that had been running for at least a day)
There are many talks arguing that Java is still relevant in the cloud. The general opinion is that you should use whatever tool gets the job done, and if you have Java developers, use them. Still, I wondered why a Java app would constantly be killed by Kubernetes for running out of memory. Searching the Internet on the topic, I found quite strong arguments for why Java inside Docker is not such a great idea.
It turns out that with Java 8 it was discovered that the JVM does not play well with Linux CPU and memory isolation (cgroups), which Kubernetes uses to set resource limits for a Pod (one or more containers). Unless you explicitly state heap sizes (e.g. with -Xmx), the JVM guesses its sizing based on the host on which it runs: it will see the host's 32GB of RAM even when the container is actually constrained to 1GB, size its heap accordingly, and eventually get killed. Fixes were released later (the experimental -XX:+UseCGroupMemoryLimitForHeap flag in 8u131, and proper container support via -XX:+UseContainerSupport in JDK 10, backported to 8u191), yet not all Java teams make use of them.
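The limits Kubernetes applies are just Linux cgroup values, visible to any process in the container as ordinary files, so a container-aware runtime only has to read them, which is exactly what newer JVMs do. Here is a minimal Go sketch of that mechanism (the file paths are the standard cgroup v1/v2 locations; treating 0 as "unlimited" is my own convention):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// parseLimit converts the contents of a cgroup memory-limit file to bytes.
// cgroup v2 writes the literal string "max" when no limit is set; we map
// that (and anything unparseable) to 0, meaning "unlimited".
func parseLimit(s string) int64 {
	s = strings.TrimSpace(s)
	if s == "max" {
		return 0
	}
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0
	}
	return n
}

func main() {
	// Try the cgroup v2 path first, then the legacy v1 path.
	paths := []string{
		"/sys/fs/cgroup/memory.max",                   // cgroup v2
		"/sys/fs/cgroup/memory/memory.limit_in_bytes", // cgroup v1
	}
	for _, p := range paths {
		if b, err := os.ReadFile(p); err == nil {
			fmt.Printf("%s -> %d bytes\n", p, parseLimit(string(b)))
			return
		}
	}
	fmt.Println("no cgroup memory limit file found")
}
```

Run inside a Pod with a memory limit, this prints the limit in bytes; an early Java 8 JVM simply never consulted these files and sized its heap from the host instead.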
My conclusions on why I would not use Java in Kubernetes:
- Too many resources needed to run code inside the JVM
- Too much time needed for optimizing
- There are better alternatives
If you have a different opinion or any feedback, please send me a message on Twitter: https://twitter.com/i2gh0st/status/973092695592849408