Microservice architecture: breaking one large, monolithic app with lots of functionality into a network of small apps that all communicate with each other.
This architectural style addresses two frustrations associated with building monolithic apps:
- Working on large teams. The team may be building or maintaining several streams of functionality at once. If everyone works in the same app, there is a risk of merge conflicts and of generally stepping on one another’s toes. And to ship a change in one stream, the team has to redeploy the whole app, which creates downtime for everyone. With a microservice architecture, different streams of functionality live in entirely separate apps. Each can change and redeploy independently without affecting the others, except by changing, removing, or breaking the mechanisms by which it communicates with the other apps in the network.
- Scaling. If one function in a monolith, by dint of additional data or users, needs more resources, then the whole app has to be scaled. If that function has its own microservice, then the additional resources need only be allocated to that one little app. Again, this does not affect the other apps unless it changes, removes, or breaks the mechanisms by which the app communicates with the other apps in the network.
There’s an important “except” in both of those points.
That’s what we’re going to talk about: cases where microservices jeopardize robust, secure communication between the apps that have information and the apps that need it.
Authentication: app A provides health data for its users. It has a frontend through which members can sign in to see their health data. One piece of that data is a list of all the doctor, dentist, etc. visits they have made over the past six years. Members don’t input this data themselves. Instead, app B integrates with the systems of several doctors’ offices to aggregate that data into a list. App A asks app B for the list. App B’s data is private, so it needs the user’s sign-in information to authenticate before it will provide the list. So app A can just pass that through from its database to app B, right?
Yes, but: to do that, app A has to know how to decrypt user login information so it can use that information to call app B. The more an app knows about the secure data it’s storing, the more opportunity hackers have to misuse that data: just ask LinkedIn, Twitter, or Ashley Madison.
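One way to shrink that attack surface (a sketch of my own, not something from app A’s actual design, and it assumes app B accepts bearer tokens) is for app A to forward a short-lived token issued at sign-in rather than holding credentials it can decrypt. All the names and the URL below are hypothetical:

```python
# Sketch: two ways app A could authenticate to app B on a user's behalf.
# The endpoint and function names are illustrative, not real APIs.

VISITS_URL = "https://app-b.example.com/visits"  # hypothetical endpoint

def risky_request(username: str, decrypted_password: str) -> dict:
    """App A decrypts and forwards the user's raw credentials.
    App A must hold a reversible copy of the password, so a breach
    of app A exposes every user's login."""
    return {
        "url": VISITS_URL,
        "auth": (username, decrypted_password),  # raw secret leaves app A
    }

def safer_request(user_token: str) -> dict:
    """App A forwards a short-lived bearer token issued at sign-in.
    App A never needs to decrypt anything; a leaked token expires."""
    return {
        "url": VISITS_URL,
        "headers": {"Authorization": f"Bearer {user_token}"},
    }
```

The difference is what app A has to *know*: in the first shape it knows the secret itself; in the second it only relays an opaque, expiring value.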
Now, let’s add app C: an app that doctors can use to pull up records of a member’s past visits to other doctors. App B still requires authentication from the actual patient to release this data.
Sounds fine: we can make doctor logins with special access rights in app B, because we own app B.
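Those special access rights could be as simple as a role check inside app B. A minimal sketch, assuming app B stores a role on each login (all names here are illustrative):

```python
# Sketch of the "special access rights" idea: because we own app B,
# we can mint doctor logins that bypass the patient-only restriction.

ROLE_PATIENT = "patient"
ROLE_DOCTOR = "doctor"

def can_read_visits(requester_role: str, requester_id: str,
                    patient_id: str) -> bool:
    """Patients may read only their own visit history; doctor
    accounts may read any patient's history."""
    if requester_role == ROLE_DOCTOR:
        return True
    return requester_role == ROLE_PATIENT and requester_id == patient_id
```

The key point is that this check lives in code we control; the moment the authorization rule lives in a service we don’t own, we can’t add the doctor branch at all.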
But what if we didn’t own app B? That brings us to the next situation where microservices might make life harder for developers:
When a development team doesn’t own all the information required by their applications.
Now suppose app B integrates with services whose endpoints will only give up the goods with the patient’s login information. Does the doctor have to make the patient log in every time she visits? App C is hamstrung. It can’t get the info for doctors. It can’t run a batch job in the morning to fetch all the patient histories a doctor will need that day. Sure, if the devs could change the endpoints app B depends on, they could redesign around these problems. But it’s not their decision to make, so now the whole system is limited.

I do want to add a special note here on another common integration pattern and how it plays with microservices:
Batch jobs: sometimes, users need a lot of data at once. There are two approaches: a batch job, which requests or manipulates each row of data individually through an API designed to handle one record at a time, but does many of them concurrently (in batches); or a bulk API designed, of itself, to handle many records in a single call. If the whole development team is not on the same page about which of these approaches to take, it’s going to be difficult to take either of them. A batch job needs record-at-a-time APIs, so if another microservice can only accept the records in bulk, the devs have to perform hackery to make it work. Not good hackery, either.
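To make the mismatch concrete, here is a sketch with hypothetical in-memory stand-ins for the two API shapes; a real system would make HTTP calls, but the contract difference is the same:

```python
# Two incompatible contracts for fetching many patient histories.
# Both "APIs" below are hypothetical in-memory stand-ins.
from concurrent.futures import ThreadPoolExecutor

def single_record_api(record_id: int) -> dict:
    """One record per call -- the shape a batch job expects."""
    return {"id": record_id, "history": f"visits for {record_id}"}

def bulk_api(record_ids: list[int]) -> list[dict]:
    """Many records per call -- a different contract entirely."""
    return [{"id": r, "history": f"visits for {r}"} for r in record_ids]

def batch_job(record_ids: list[int], batch_size: int = 10) -> list[dict]:
    """Batch approach: concurrent calls to the one-record API."""
    with ThreadPoolExecutor(max_workers=batch_size) as pool:
        return list(pool.map(single_record_api, record_ids))
```

`batch_job` can only be pointed at something shaped like `single_record_api`; if the other microservice exposes only `bulk_api`, the batch job needs an adapter layer that was never designed for, and that adapter is the hackery in question.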
And that brings up a more general risk associated with microservices: because each microservice can run independently, the team can deploy a microservice without knowledge of how another microservice is using it. They can unwittingly break part A in the system by changing the “decoupled” part B and not even realize it until the system is up. The code no longer inherently communicates the plan for API integration, so the team itself has to take extra care to coordinate on the mechanisms of communication between apps that don’t depend on each other to compile and run, but do depend on each other to provide any value in the real world.
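One common way teams catch this early (my suggestion, not something the scenario above prescribes) is a consumer-driven contract check: the consuming team pins down the fields it actually reads, so a “decoupled” change that drops one of them fails in CI instead of at runtime. A minimal sketch, with hypothetical field names standing in for app B’s visit response:

```python
# Sketch of a consumer-side contract check for app B's response.
# REQUIRED_FIELDS lists the keys app A's code actually reads.

REQUIRED_FIELDS = {"patient_id", "visits"}

def satisfies_contract(payload: dict) -> bool:
    """True if app B's response still carries every field app A needs."""
    return REQUIRED_FIELDS.issubset(payload.keys())
```

Run against a sample of app B’s real responses in app B’s own deploy pipeline, a check like this turns the invisible cross-service dependency back into something the code communicates.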
And so, microservices don’t reduce the need for team members to coordinate. They just make it less obvious when that coordination has failed, until patient history won’t load during user testing and devs have to scramble to figure out why.
Regardless of the architecture chosen, dev team communication is key. And though microservices are a valuable tool, they can be a bad fit for some application development circumstances. Reach for them when you feel pain: not right at the start of a project.