Max VelDink

I like Ruby, portable architecture, type systems and mentoring.

Modules, Not Microservices
from Ted Neward
This is a signal boost to another author's content. Before reading my thoughts, I highly encourage you to read the original post.

Ahh, microservices. The white whale we’ve been chasing for a long time in the industry. Ted does a great job in this post of examining what teams (and the industry) think they are accomplishing when moving to microservices, and how that differs from the actual issues teams face as software gets larger.

I’ve worked in several engineering organizations transitioning to microservices (or some semblance of them), my present organization included. I don’t look to bash any technological approach, and I appreciate that Ted’s post doesn’t indulge in wholesale ridicule of microservices. But I do see it as my responsibility to clearly warn teams and organizations that declare microservices as their future that they are not an elixir for all of their problems.

The amount of overhead you invite when you split your domain into separate service boundaries is staggering. Network latency, an increase in consistency problems, and more points of failure are just a few challenges that now need to be thought about holistically, in addition to more application-specific concerns such as authentication, distributed tracing, and data ownership. These issues need to be considered from day one of a service-oriented architecture, yet they are often neglected until close to production deployment, resulting in innumerable frustrations.

What’s Actually Wrong Here?

After pointing out historical warnings in our industry around distributed computing challenges, Ted finishes with two points to consider: what we actually need when we feel the urge to split our domain into microservices.

“Do you need to decompose the problem into independent entities? […] The key is to establish that common architectural backplane with well-understood integration and communication conventions, whatever you want or need it to be.”

A consistent approach to modularization and interfaces is at the heart of a scalable architecture. That can be a network hop (using your favorite inter-process communication protocol), an intra-container process call, a function call to a separated module running within the same process, etc. We should spend more time debating the messages passed around the system and who owns the composition rather than what the network topology of the receivers should look like.
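
To make that concrete, here’s a minimal Ruby sketch of agreeing on the message and the interface first and treating the transport as a swappable detail. All of the names (ReserveInventory, Fulfillment, and so on) are hypothetical; this isn’t from Ted’s post or any codebase I work in.

```ruby
# Hypothetical sketch: the message and the interface are the agreed-upon
# "backplane"; whether a call crosses the network is an implementation detail.
require "json"
require "net/http"
require "uri"

# The shared contract: one message, one operation.
ReserveInventory = Struct.new(:sku, :quantity, keyword_init: true)

module Fulfillment
  # In-process implementation: a plain method call inside the same deployable.
  class InProcess
    def reserve(message)
      # ...domain logic would live here...
      { reserved: true, sku: message.sku }
    end
  end

  # Remote implementation: the same contract carried over HTTP.
  class OverHttp
    def initialize(base_url)
      @base_url = base_url
    end

    def reserve(message)
      response = Net::HTTP.post(
        URI.join(@base_url, "/reservations"),
        { sku: message.sku, quantity: message.quantity }.to_json,
        "Content-Type" => "application/json"
      )
      JSON.parse(response.body, symbolize_names: true)
    end
  end
end

# Callers depend on the contract, not on the topology of the receiver.
def place_order(fulfillment, sku:)
  fulfillment.reserve(ReserveInventory.new(sku: sku, quantity: 1))
end

place_order(Fulfillment::InProcess.new, sku: "ABC-123")
```

The debate worth having is about ReserveInventory and who owns Fulfillment, not about whether InProcess or OverHttp is wired in at deploy time.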

“Do you need to reduce the dependencies your development team is facing? […] The key is to give the team the direction and goal, the autonomy to accomplish it, and the clarion call to get it done.”

Earlier in the post, Ted points out that Amazon, one of the first companies discussed in connection with microservices on the web, initially wasn’t advocating for microservices so much as for independent engineering teams empowered to deliver on their organizational mandates: the so-called “two-pizza teams.” This reflects my main focus on the sprint teams I lead: empowering the team to work at any level of the stack to satisfy requirements coming in from our workstreams. Generally, I’ve seen organization size be the main culprit behind the inability to deliver on this promise in a monolith, especially if you use release trains or haven’t optimized delivery to production.

How do I reckon with the microservice siren song?

Like many senior contributors, I’m currently thinking through an engineering organization’s desire to pursue microservices while contending with the complexities that microservices invite to the table. I’m reading books on distributed computing and thinking through what sharing looks like across our Ruby services and monolith. As our organization is still in the early days of service orientation, my most significant contributions are forcing the conversations around the complex parts of services: authentication, API contracts, distributed tracing and monitoring, and so on.

If we want to step into microservices to give our differentiated teams the autonomy to develop in their domains, fine. But let’s go in with eyes wide open and start thinking through the complex parts. If we start solving for them early, we will avoid rushed implementations and last-minute considerations.

I’m still a fan of the majestic monolith. For most single-product companies with fewer than 50 engineers, you’re hard-pressed to be more productive than in a single deployable unit. That’s not to say you can’t focus on a modular architecture; in fact, this is one of the best times to invest in separating domain boundaries amongst teams and to promote tight development feedback loops (small PRs, direct deploys to production from main commits, etc.). Both enable sustainable software delivery, whether in a monolith or microservices.

Micro-apps over microservices

I’ve started promoting the idea of micro-apps (let’s see if that catches on) over microservices. Many of our monolithic woes come from legacy code and tight coupling, not inherently from being in a monolith. As we look to solve systemic problems, I’d likely spin up smaller, single-purpose applications that govern their whole world without becoming services to be consumed by others. We will have consumable services (sorry, “microservices”) that we offer to other teams. Still, we will probably have a fleet of single-purpose Rails apps responsible for a handful of business functions, rewritten from legacy code, now that we understand the problem space better.

Micro-apps are a better frame for solving problems. Instead of thinking about the service boundaries and interfaces that need to be agreed upon between producers and consumers, we focus on writing just enough code to fulfill the business requirement. This could look like a Rails app dedicated to sending emails with certain delivery guarantees or auditing requirements, or breaking an internal management tool out into a new React app backed by a dedicated Go service (one that is not responsible for receiving requests from anywhere else in the organization).
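
For a taste of what that first example might mean, here’s a rough Ruby sketch of the email micro-app’s core. The names are made up for illustration; the point is that the app owns delivery and its audit trail itself and exposes nothing for other services to consume.

```ruby
# Hypothetical core of a single-purpose email app: it owns delivery and the
# audit trail, and nothing here is exposed for other services to call.
class TransactionalEmailer
  AuditEntry = Struct.new(:recipient, :template, :sent_at, keyword_init: true)

  def initialize(mailer:, audit_log: [])
    @mailer = mailer        # anything that responds to #call
    @audit_log = audit_log  # in-memory here; a table in a real app
  end

  attr_reader :audit_log

  def deliver(recipient:, template:, payload:)
    @mailer.call(recipient: recipient, template: template, payload: payload)
    @audit_log << AuditEntry.new(recipient: recipient, template: template, sent_at: Time.now.utc)
  end
end

# Usage: the "mailer" is whatever delivery mechanism this app standardizes on.
emailer = TransactionalEmailer.new(mailer: ->(**args) { puts "sending #{args[:template]}" })
emailer.deliver(recipient: "a@example.com", template: :receipt, payload: { total: 12.99 })
```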

Libraries, not frameworks

From there, we’ll also develop a library-over-framework mindset. The knee-jerk reaction of most software engineers is to look for abstraction and figure out how to share code with other teams as soon as possible. As we do this, more requirements unrelated to our team creep in, and we start overengineering or overthinking simple abstractions. As Ted points out in his post, the Unix philosophy is helpful here: do one thing well and allow for the composition of outputs.

We should look for composable libraries, as functional and strongly typed as possible in our case, that other apps and services can use to handle these concerns. Internally, we have some business-specific primitives that are easily portable and have no framework dependencies. For example, we have a library for defining an expected CSV schema that, in a Sorbet-friendly manner, checks input validity and casts rows into the desired types. The features we implement are just enough for our use cases, and we mark the repos as open for contributions from the rest of the organization. This prevents us from falling into the trap of supporting others’ use cases before our own are met.
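
As an illustration only (this isn’t the actual internal library; CsvSchema, OrderRow, and the fields are invented), the shape of that kind of primitive looks roughly like this:

```ruby
# typed: true
# Minimal sketch of a Sorbet-friendly CSV schema primitive (hypothetical names).
require "csv"
require "sorbet-runtime"

module CsvSchema
  # Each row is cast into a typed struct, so downstream code works with
  # real types instead of raw strings.
  class OrderRow < T::Struct
    const :order_id, Integer
    const :email, String
    const :total_cents, Integer
  end

  class << self
    extend T::Sig

    # Parse the CSV with headers, casting each row; missing or non-numeric
    # values raise here instead of silently flowing downstream.
    sig { params(io: T.any(String, IO)).returns(T::Array[OrderRow]) }
    def parse(io)
      CSV.parse(io, headers: true).map do |row|
        OrderRow.new(
          order_id: Integer(row["order_id"]),
          email: String(row["email"]),
          total_cents: Integer(row["total_cents"])
        )
      end
    end
  end
end

rows = CsvSchema.parse("order_id,email,total_cents\n1,a@example.com,1299\n")
puts rows.first.email # => "a@example.com"
```

The library stays a small, portable primitive: no Rails, no knowledge of who calls it, just a contract and the types that enforce it.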

Notably, we don’t mandate that these libraries be used throughout the organization. It’s okay if another team writes code that solves the same problem, even a library with nearly 100% parity with ours. An internal competition of ideas is a healthy dynamic we should embrace in software engineering, especially as it reduces coupling between teams.

Don’t get me wrong: we still market our libraries to other teams, and we love it when another group sees the need a library solves and contributes features or fixes bugs they encounter. What we’re aiming for is that the library is the best choice for a team on its own merits. We don’t need a mandate; the library should speak for itself.

Microservices as a solution to autonomy are interesting, but they aren’t an inevitability; they’re just one of the answers we’ve come up with over the past 50+ years. Regardless of the approach, modular architecture, tight feedback loops, and intelligent reuse are what enable feature delivery, and that is what we should be optimizing for.