We’re starting to see some influencers and larger organizations scale back from an “all in”, “by the book” stance on microservices and advocate for a more pragmatic approach in 2020. There’s nothing wrong with the microservice pattern when it’s used appropriately. It makes extremely complex application architectures easier to manage and continuously deploy for large, distributed teams, but like any other design pattern it is not (and should not be) a solution for every architecture.
There is no question that smaller, purpose-driven services are (and have been for many years) a smart approach to building out backends and APIs. Even if you’re not doing a full “table/schema per service” approach with event sourcing and CQRS, it still makes sense to break up your persistence operations into small, manageable services that can be independently scaled and deployed. This is typically the approach that I’ve used over the past several years with the applications that I’ve built. Some people call this approach a “distributed monolith”; others call it just standard Service Oriented Architecture (SOA). The ultimate point that I’m trying to express in the next few posts on this blog is that it’s OK to not “do” microservices “by the book”.
You shouldn’t feel bad if your application is labeled a “monolith” or a “distributed monolith” or any other term someone may come up with to make you feel inferior because you’re not up to date on the latest industry buzzword or fad. We can talk about trends and forecasts and where the industry is headed all day long, but at the end of the day your application needs to do the following things:
Be responsive, easy to use, and error-free.
When users visit your site, it needs to be user friendly, available, and quick to respond to their requests, and it needs to complete those requests without failure. Every other decision that you make about your architecture makes your developers’ lives easier or is reflected in your cloud bill at the end of the month. Don’t get me wrong - those are important things - but don’t lose focus on the most important goal: creating a great experience for the end user.
So if our goal is to create a great experience for the people browsing and using our site/application, why do we spend so much time and effort worrying about following the latest trends and fads in the industry? Well, by nature many developers are tinkerers who are curious and always interested in learning new things. We like to challenge ourselves, and no one likes to feel like they’re being “left behind” in an industry (especially one that changes as rapidly as ours does). Having been in this industry for almost 20 years now, I have started to notice that we tend to circle back to the trends that we used once before. We tend to “reinvent the wheel” quite a bit. I don’t know if we’ve forgotten the lessons we’ve learned, or if we keep rediscovering that adding layers of complexity to our solutions is a bad thing. Either way, it’s OK - sometimes you have to learn things the hard way. Pain is OK, as long as it results in a positive outcome at the end of the day. It’s only when we refuse to learn from our mistakes, or compound the pain we’re dealing with out of stubbornness or ignorance, that we do more harm than good.
I say all of this to set up the actual point of this blog post series:
Sometimes the best solution is the one that has been staring us in the face all along.
In the case of this series, maybe the best solution for publishing and subscribing to changes in our database has been the one we’ve pretended didn’t exist all along: the database itself. There’s nothing wrong with using triggers, stored procedures, or scheduled jobs in our database, but over the last few years we’ve simply stopped using them. We’ve kept this logic outside of the database because…reasons…for a long time. I’d bet that the first reason I’d hear if I asked someone why is that putting business logic in the DB leads to “vendor lock-in”, because each RDBMS has its own flavor of SQL and none of them are compatible when it comes to certain tasks. To which I’d respond: when was the last time you rewrote or re-architected an application where the DB layer moved completely intact to the new solution? My answer, in 16 years, was: “never”. If we rewrote something, we looked at the persistence tier and made changes as necessary. Things tend to get ugly over time, and that tier is no exception. Don’t let “vendor lock-in” be your excuse. You’re not going to find a piece of functionality that exists in one RDBMS that doesn’t have a suitable counterpart in another. You just won’t. Don’t use that excuse.
Regardless of whether you’ve gone “all in” or are building a “distributed monolith” (or whatever you want to call it), the next few posts in this series will show you how to use the database to accomplish some tasks that you may be doing the “hard way” in your applications and services today. I’ll show you exactly how you can use Oracle’s Autonomous DB and a few PL/SQL scripts to publish all changes to a table via triggers, and how to use scheduled jobs to update a table from messages posted to a stream in Oracle Streaming Service. But as I said earlier, none of what I’m going to cover is specific to Oracle DB (other than the code itself). This functionality can be achieved in just about any RDBMS out there - and I think it’s time we started trusting our database to handle data again, instead of handling the complexity of these operations in our application code.
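To make the trigger idea concrete before we get to the Oracle-specific posts, here’s a minimal, vendor-neutral sketch. It uses SQLite through Python’s standard library purely for illustration (the table and column names are made up for this example; the series itself will use PL/SQL and Oracle DB): a trigger appends every change to an “events” table, which a scheduled job could later drain and publish to a stream.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);

-- An "outbox" table that captures every change; a scheduled job
-- could periodically drain it and publish each row to a stream.
CREATE TABLE customer_events (
    event_id    INTEGER PRIMARY KEY AUTOINCREMENT,
    action      TEXT NOT NULL,
    customer_id INTEGER,
    name        TEXT,
    changed_at  TEXT DEFAULT CURRENT_TIMESTAMP
);

-- Let the database record the change itself, instead of duplicating
-- this logic in every application that writes to the table.
CREATE TRIGGER customers_ai AFTER INSERT ON customers
BEGIN
    INSERT INTO customer_events (action, customer_id, name)
    VALUES ('INSERT', NEW.id, NEW.name);
END;

CREATE TRIGGER customers_au AFTER UPDATE ON customers
BEGIN
    INSERT INTO customer_events (action, customer_id, name)
    VALUES ('UPDATE', NEW.id, NEW.name);
END;
""")

# Normal application writes - no publishing code in sight.
conn.execute("INSERT INTO customers (name) VALUES ('Ada')")
conn.execute("UPDATE customers SET name = 'Ada L.' WHERE id = 1")

# The triggers have already captured both changes for us.
events = conn.execute(
    "SELECT action, name FROM customer_events ORDER BY event_id"
).fetchall()
print(events)  # [('INSERT', 'Ada'), ('UPDATE', 'Ada L.')]
```

The application code above never mentions events at all - the database guarantees that every write is captured, which is exactly the kind of work we’ve been reimplementing (less reliably) in our service layers.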
It’s time to think like we used to again. It’s time to stop caring about whether your solution is something someone else in this industry thinks you should be building, and to start worrying about your users and your developers. There’s no need to overcomplicate software - it’s hard enough to build and maintain as it is.