In the last few posts of our microservice journey we created a compartment, launched a Kubernetes cluster, set our tenancy up with a Docker user and registry, and created an Autonomous DB instance that we can use for data persistence. In this post we will start writing some actual microservice code. I want to reiterate that each application has unique requirements that should be evaluated before you choose to implement any solution, so the choices that I make in this blog series might differ from the choices your organization makes. It's important to evaluate these choices up front, because introducing a new way of thinking can bring up issues that are difficult to resolve later on.
Before we dive into the code, let's start by defining a few patterns for microservice data management. The easiest to digest are the shared database and database (or schema) per service patterns, so let's start with those.
In a monolith, our data is usually stored in a single relational database. This makes life easy when it comes to persistence and querying: we can write queries that utilize joins, and we can use ACID transactions to enforce data consistency. The shared database microservice pattern keeps this arrangement: a single database is shared by multiple services, which can freely query across tables using joins to retrieve data and utilize transactions to modify data in a reliable way that enforces consistency. That makes this pattern less difficult to comprehend for new developers, but it introduces challenges as our API becomes more complex. Schema changes now have to be coordinated with the developers of other services, because adding columns, changing default values and other operations could break services that access the same table. Long-running transactions can also block other services by holding locks on shared tables. Lastly, this pattern assumes that all services will persist their data in a traditional relational table, which eliminates the possibility of utilizing NoSQL documents or graph DBs for persistence (there are workarounds here, which we'll see a bit later on).
The database (or schema) per service pattern addresses some of the shortcomings of the shared database pattern. Each service gets its own database, which essentially means the database is part of that service's implementation, so schema changes no longer impact other services. This doesn't necessarily require a separate database server for each service: often it can be individual tables per service (as long as there are users and permissions bound to each table that restrict access by other services), or a unique schema within a single database instance for each service. Using database per service means that each service is free to use the type of database that is best suited to its needs. Of course, there are downsides to this pattern: transactions are now difficult to manage, referential integrity can't be enforced as easily, and queries that join data across services can be difficult, if not impossible.
I've done a lot of talking so far in this series about what microservices are and why you might use them, but it's always best to look at code to understand the theory, so let's finally do that. For this series, we'll build out an API for a simple social-media-style application in several parts. This gives us the opportunity to utilize some different microservice patterns as well as various features in the cloud, and it should present some interesting problems that we'll need to address.
This service utilizes Helidon MP with Hibernate to persist users to a user table in an Oracle ATP instance. To get started, we can utilize the Helidon Maven archetype, which will scaffold out some files and structure for our service. Here's the command (you can modify the path to your liking, or leave it as is):
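Something like the following — the Helidon archetype version, groupId, artifactId and package shown here are assumptions, so substitute your own values:

```shell
# scaffold a Helidon MP project; adjust groupId/artifactId/package to taste
mvn -U archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=io.helidon.archetypes \
    -DarchetypeArtifactId=helidon-quickstart-mp \
    -DarchetypeVersion=1.4.4 \
    -DgroupId=codes.example \
    -DartifactId=user-svc \
    -Dpackage=codes.example.usersvc
```

This produces a runnable "quickstart" project with a sample greeting endpoint and a `microprofile-config.properties` file that we'll repurpose below.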
Before we modify or look at the generated code, let's create our schema user for this microservice. You'll need to connect to your running ATP instance as admin to run the next query. Using SQL Developer Web as shown in the last post in this series would be an easy way to run it. Once you're ready, run the following (making sure to modify the password to something strong):
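The statement will look something like this — the usersvc username matches the rest of this post, but the password and the specific grants below are illustrative assumptions:

```sql
-- create a dedicated schema user for the user microservice
CREATE USER usersvc IDENTIFIED BY "REPLACE-WITH-A-STRONG-PASSWORD";

-- allow the user to connect and create its own objects
GRANT CREATE SESSION, CREATE TABLE, CREATE SEQUENCE, CREATE VIEW TO usersvc;

-- give the new schema room in the default ATP tablespace
ALTER USER usersvc QUOTA UNLIMITED ON data;
```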
If you're using SQL Developer Web, you'll need to ensure that the admin user enables each schema that you would like to use with the following command:
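That enablement is a PL/SQL call to ORDS_ADMIN.ENABLE_SCHEMA, run as the ADMIN user; a sketch with placeholder values:

```sql
BEGIN
    ORDS_ADMIN.ENABLE_SCHEMA(
        p_enabled             => TRUE,
        p_schema              => 'SCHEMA-NAME',
        p_url_mapping_type    => 'BASE_PATH',
        p_url_mapping_pattern => 'schema-alias',
        p_auto_rest_auth      => TRUE
    );
    COMMIT;
END;
/
```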
For the command above, the placeholder values should be substituted as follows:
SCHEMA-NAME is the database schema name, in all-uppercase.
schema-alias is an alias for the schema name that will appear in the URL the user will use to access SQL Developer Web. As a security measure, Oracle recommends that you not use the schema name itself, to keep it from being exposed.
After enabling user access, the ADMIN user needs to provide the enabled user with their URL for SQL Developer Web. This URL is the same as the URL the ADMIN user enters to access SQL Developer Web, but with the admin/ segment of the URL replaced by the schema-alias/ that was chosen when enabling the schema.
From here on out, we'll use the usersvc user, so log out of SQL Developer Web (or whatever tool you're using) and log in at the proper URL, per the instructions above, with the new username and password that we just created. Once logged in as the usersvc user, run the following to create a table for the microservice:
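The exact columns are up to you; a minimal users table for this service might look like the following (the column set below is an assumption):

```sql
CREATE TABLE users (
    id         VARCHAR2(32) DEFAULT SYS_GUID() PRIMARY KEY,
    first_name VARCHAR2(50) NOT NULL,
    last_name  VARCHAR2(50) NOT NULL,
    username   VARCHAR2(50) NOT NULL,
    created_on TIMESTAMP DEFAULT SYSTIMESTAMP
);
```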
Next, we'll need to grab some dependencies. Create a folder called /build-resource in the root of the project and add a subdirectory called /libs. Our project needs the Oracle JDBC driver (and its supporting JARs) to talk to our ATP instance, so download the JARs from Oracle and place them in /build-resource/libs. We also need to publish these to our local Maven repo so that they'll be properly resolved when we run the application locally. The following commands should help you out with that:
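Assuming the usual ATP client set (ojdbc8.jar, ucp.jar and oraclepki.jar — the exact files, versions and Maven coordinates below are assumptions to match to your downloads), the local install looks something like:

```shell
# install each downloaded JAR into the local Maven repository
mvn install:install-file -Dfile=build-resource/libs/ojdbc8.jar \
    -DgroupId=com.oracle.jdbc -DartifactId=ojdbc8 -Dversion=18.3 -Dpackaging=jar
mvn install:install-file -Dfile=build-resource/libs/ucp.jar \
    -DgroupId=com.oracle.jdbc -DartifactId=ucp -Dversion=18.3 -Dpackaging=jar
mvn install:install-file -Dfile=build-resource/libs/oraclepki.jar \
    -DgroupId=com.oracle.jdbc -DartifactId=oraclepki -Dversion=18.3 -Dpackaging=jar
```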
Next, modify your pom.xml file to include all of the necessary dependencies for JPA, Hibernate, Jackson and the OJDBC JARs. Add the following entries to the dependencies section:
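Roughly like the following — the versions are assumptions, and the OJDBC coordinates must match whatever you used when installing the JARs into your local repo:

```xml
<dependency>
    <groupId>com.oracle.jdbc</groupId>
    <artifactId>ojdbc8</artifactId>
    <version>18.3</version>
</dependency>
<dependency>
    <groupId>com.oracle.jdbc</groupId>
    <artifactId>ucp</artifactId>
    <version>18.3</version>
</dependency>
<dependency>
    <groupId>javax.persistence</groupId>
    <artifactId>javax.persistence-api</artifactId>
    <version>2.2</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>5.4.9.Final</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.10.1</version>
</dependency>
```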
We'll need to add a Hibernate persistence config, so create a persistence.xml file alongside the generated microprofile-config.properties and populate it like so:
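A minimal sketch — the persistence unit name and the Hibernate properties here are assumptions:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.2">
    <persistence-unit name="userPU" transaction-type="RESOURCE_LOCAL">
        <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
        <properties>
            <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle12cDialect"/>
            <property name="hibernate.show_sql" value="false"/>
        </properties>
    </persistence-unit>
</persistence>
```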
While you're in that directory, modify microprofile-config.properties like so:
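Along these lines — the datasource property names and values below are illustrative assumptions (point TNS_ADMIN at your unzipped ATP wallet directory):

```properties
# Helidon server settings (generated by the archetype)
server.port=8080
server.host=0.0.0.0

# datasource settings for the ATP instance; these keys are read by our
# own code, not by Helidon itself, so name them however you like
datasource.url=jdbc:oracle:thin:@usersvcdb_tp?TNS_ADMIN=/path/to/wallet
datasource.username=usersvc
datasource.password=REPLACE-WITH-A-STRONG-PASSWORD
```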
Now that we have all of our prerequisite configuration complete, we can move on to the application code. Locate the generated GreetApplication.java file. You can either modify it or delete it and replace it with a new file, but ultimately we want to end up with a UserApplication.java that contains the following code:
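Something like this minimal JAX-RS application class (the package name is a placeholder):

```java
package codes.example.usersvc;

import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

// JAX-RS entry point for the service; Helidon MP discovers annotated
// resource classes automatically, so no explicit getClasses() is needed
@ApplicationScoped
@ApplicationPath("/")
public class UserApplication extends Application {
}
```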
Use the same process (modify or replace) for GreetingProvider to end up with a UserProvider that looks like so:
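A sketch of that class — the injected property names are assumptions that mirror the datasource entries in microprofile-config.properties:

```java
package codes.example.usersvc;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

// holds the datasource config values, populated by MicroProfile Config
// from microprofile-config.properties when the application starts
@ApplicationScoped
public class UserProvider {

    @Inject
    @ConfigProperty(name = "datasource.url")
    private String url;

    @Inject
    @ConfigProperty(name = "datasource.username")
    private String username;

    @Inject
    @ConfigProperty(name = "datasource.password")
    private String password;

    public String getUrl() { return url; }
    public String getUsername() { return username; }
    public String getPassword() { return password; }
}
```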
The UserProvider class will contain our populated configuration values when we run the application. At this point, we're ready to start writing the real logic of our microservice. In the next post in this series, we'll take a deep dive into that code and get our service running and deployed on the Oracle Cloud.