We've come to the final post of the ORDS with Micronaut portion of this blog series. In this post we'll look at deploying the service as a Docker container on Kubernetes, and while that sounds very similar to the final part of the Helidon portion of this series, I promise you there's a twist to this one that you'll definitely want to check out.
Please make sure to check out the rest of this series (at the very least, make sure you've read parts 1 & 2 of the ORDS with Micronaut chapter).
Helidon And Hibernate:
ORDS With Micronaut:
We can easily deploy our ORDS with Micronaut service as a Docker container, just as we did before with Helidon. Like Helidon, Micronaut gives us a generated Dockerfile to get started. We'll modify it ever so slightly to take advantage of the Graal JIT compiler. If you are not familiar with Graal, I highly encourage you to read more about it. There are numerous advantages to using Graal, but the easiest way to see an immediate improvement in your application is to enable the Graal JIT compiler via a few Java options:
-XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler
You can read more about the JIT compiler if you're interested, but further discussion on the topic is out of scope for the current blog post. For now, add the options above to the generated Dockerfile so you end up with something that looks like this:
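Something along these lines (a sketch based on the generated Dockerfile; the base image and JAR path are assumptions, so adjust to match your build output):

```dockerfile
# The Graal JIT flags require a HotSpot JDK 10+ (JVMCI-capable JVM)
FROM openjdk:11-jre-slim
COPY build/libs/user-svc-micronaut-*-all.jar user-svc-micronaut.jar
EXPOSE 8080
CMD java -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler \
    -jar user-svc-micronaut.jar
```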
And build the Docker image with:
docker build -t user-svc-micronaut .
Before you run the image, make sure you have the environment variables set in your terminal that we set in part 2, then run:
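The run command might look like this (the variable names here are assumptions carried over from part 2; substitute whatever names your configuration uses):

```shell
# Pass the config values through to the container
docker run -d --rm -p 8080:8080 \
  -e ORDS_URL="$ORDS_URL" \
  -e CLIENT_ID="$CLIENT_ID" \
  -e CLIENT_SECRET="$CLIENT_SECRET" \
  --name user-svc-micronaut \
  user-svc-micronaut
```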
We'll see the application start up:
In this example the application started up in 2432ms. Micronaut's AOT compilation has definitely given us a much quicker startup time than we might be used to seeing just a few years ago!
We can test that our application responds to requests just as it did before when we ran it as a JAR file:
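For example, something like the following (the route here is hypothetical; use whichever endpoint your controller exposes):

```shell
# Hit the locally mapped port and inspect the response headers and body
curl -i http://localhost:8080/user/1
```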
Let's shut down the local Docker container and deploy to Kubernetes. First, push the Docker image to our OCIR registry just as we did in the Helidon portion of this series:
docker tag user-svc-micronaut [region].ocir.io/[tenancy]/cloud-native-microservice/user-svc-micronaut
docker push [region].ocir.io/[tenancy]/cloud-native-microservice/user-svc-micronaut
We'll need a Kubernetes YAML file for the deployment and a secret which will contain our config values. We'll need to Base64 encode the secret values before creating the YAML file. On *nix systems, something like this accomplishes that for each value:
echo -n "client_id.." | base64
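One caveat worth calling out: the `-n` flag matters, because a trailing newline would end up inside the decoded secret. You can sanity-check an encoded value by decoding it (shown here with a placeholder value):

```shell
# Encode a value without a trailing newline
echo -n "client_id" | base64
# -> Y2xpZW50X2lk

# Decode it back to verify the round trip
echo -n "Y2xpZW50X2lk" | base64 --decode
# -> client_id
```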
Plug the encoded values into a secret.yaml file.
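A secret.yaml along these lines should do (a sketch; the secret name and key names are assumptions based on the config values from part 2):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: user-svc-micronaut-secrets
type: Opaque
data:
  # Each value is the Base64-encoded string produced above
  ORDS_URL: <base64-encoded value>
  CLIENT_ID: <base64-encoded value>
  CLIENT_SECRET: <base64-encoded value>
```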
Then deploy the secret with:
kubectl create -f secret.yaml
Next, create an app.yaml file for the deployment. You can use mine as an example, but make sure that you substitute the proper URL for your Docker image. Then deploy with:
kubectl create -f app.yaml
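For reference, an app.yaml for this deployment might look roughly like this (a sketch; the names, labels, pull-secret name, and image URL are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-svc-micronaut
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-svc-micronaut
  template:
    metadata:
      labels:
        app: user-svc-micronaut
    spec:
      containers:
      - name: user-svc-micronaut
        image: "[region].ocir.io/[tenancy]/cloud-native-microservice/user-svc-micronaut"
        ports:
        - containerPort: 8080
        # Expose the secret's keys as environment variables
        envFrom:
        - secretRef:
            name: user-svc-micronaut-secrets
      # Pull secret for OCIR (name is an assumption)
      imagePullSecrets:
      - name: ocirsecret
---
apiVersion: v1
kind: Service
metadata:
  name: user-svc-micronaut
spec:
  type: LoadBalancer
  selector:
    app: user-svc-micronaut
  ports:
  - port: 80
    targetPort: 8080
```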
Check the pod status with kubectl get pods and the service with kubectl get services. Once the service has been assigned an IP address, your service has been fully deployed!
So far we've deployed our JAR with the Graal JIT compiler enabled, which means we have very fast startup times thanks to Micronaut's AOT compilation and improved runtime performance thanks to the JIT compiler. But what if we took it one step further and deployed as a native image? It's surprisingly easy, and we should end up with even better resource utilization in our Docker image.
We'll create a new Dockerfile, this one called Graal-Dockerfile. Since our ORDS service utilizes HTTPS, we'll need to make sure that we include libsunec in our container so that we can enable HTTPS in the native image. This blog post goes into great detail on why this is necessary, but for now just make sure that you have a copy of the file inside the build-resource directory of the project. We'll make sure that this file gets into our Docker image, and we'll also set an additional environment variable to tell our application the path to that file. Finally, make a slight modification to our Application.java file to make sure the sunec library is loaded at startup:
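A sketch of that change (the environment variable name LIBSUNEC_PATH is hypothetical here; use whatever name you set in the Dockerfile):

```java
import io.micronaut.runtime.Micronaut;

public class Application {

    static {
        // The native image does not bundle the SunEC native library, so we
        // load it explicitly at startup. LIBSUNEC_PATH (a hypothetical name)
        // points at the directory containing libsunec.so inside the container.
        String sunecPath = System.getenv("LIBSUNEC_PATH");
        if (sunecPath != null) {
            System.setProperty("java.library.path", sunecPath);
            System.loadLibrary("sunec");
        }
    }

    public static void main(String[] args) {
        Micronaut.run(Application.class);
    }
}
```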
Here we're using a builder image, installing the Graal native-image tool, copying our files into the base image, and generating the native image from the JAR file. The next step is to copy the generated native image and libsunec library in, set the path to the libsunec library, and tell Docker to start our image. Note that we're using the -Xmx64m option to set the maximum heap size to keep our application's memory consumption low. You may need to adjust this setting for your application; read more about this option and how to experiment with it.
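Putting those steps together, the Graal-Dockerfile might look roughly like this (a sketch: the GraalVM version, base images, JAR name, and LIBSUNEC_PATH variable are all assumptions):

```dockerfile
# Stage 1: build the native image from the fat JAR
FROM oracle/graalvm-ce:19.2.0 AS builder
RUN gu install native-image
COPY . /home/app
WORKDIR /home/app
RUN native-image --no-server --static \
    -jar build/libs/user-svc-micronaut-0.1-all.jar user-svc-micronaut

# Stage 2: copy the binary and libsunec into a minimal base image
FROM frolvlad/alpine-glibc
EXPOSE 8080
COPY --from=builder /home/app/user-svc-micronaut /app/user-svc-micronaut
COPY build-resource/libsunec.so /app/libsunec.so
# Tell the application where to find libsunec (read at startup in Application.java)
ENV LIBSUNEC_PATH=/app
# Cap the heap to keep memory consumption low
CMD ["/app/user-svc-micronaut", "-Xmx64m"]
```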
We can build, push and deploy this Dockerfile just as before.
The deployment configuration is similar, just pointing at the native Docker image instead. Again, deploy with kubectl create -f app-native.yaml, then check with kubectl get pods and kubectl get services. Note that the startup time with the native image is now significantly improved:
It's interesting to compare the performance of the JIT version with the AOT native image. To do so, I ran a very simple load test (600 users over 1 minute) against each deployed service and monitored the CPU and memory usage of each during that test. We'll also look at median response time to see how each fares on throughput.
First up, the JIT version load test results:
A median response time of 307ms is pretty decent; since our application utilizes ORDS to retrieve data, we should expect some additional latency over a typical JDBC transaction. Let's look at the CPU and memory consumption:
The performance here is better than what we'd expect to see if we weren't using the JIT compiler, but we still see some spikes and overall high utilization numbers on the CPU. Memory consumption is pretty level, running around 325MB on average.
Next, let's see how the native image performed.
The median response time here is slightly slower than the JIT version by 16ms, but not nearly enough to raise any concern about the native image version's throughput. What about CPU and memory?
Amazingly, the CPU stays consistently under 1% utilization with the native image and the memory consumption hovers around 72MB throughout the duration of the load test.
So what have we learned during this portion of the microservice blog series? Well, we found out that it's possible to create a microservice to perform CRUD operations without a single SQL statement in our application code. We learned that we could create declarative HTTP clients using a simple interface or abstract class and let Micronaut handle the implementation of that service. We also looked at how that declarative client can use RxJava to perform our HTTP requests in an async and non-blocking manner. We can take advantage of the Graal JIT compiler as well as the AOT native image capabilities to increase our deployed microservice's performance.
In future posts we'll take a look at a new approach to persisting JSON documents and eventually take a look at how we can tie all of these posts together in a meaningful way.