
A CQRS Microservice Architecture - My Way


Microservice architecture is the current trend in the industry. When I decided to build a decisioning engine to work as my personal assistant, I chose to design it as a swarm of microservices. The most compelling selling point of microservice architecture, for me, is its ease of maintainability, and that's a big factor for a hobby project like this, which has a lot of custom logic programmed into it and requires frequent changes. In theory, microservices should fit right in. I tried doing it the monolithic way and I failed! So, will microservices solve it for me? Let's find out.

In this post I will do it in the "Software Engineering" way.

Background:

A few weeks ago I created a wifi-controlled watering system that I can control via my Home Assistant (from my phone, even if I am on the other side of the planet, like Batman). And it is working great for me. Read about that here: Smart wifi controlled irrigation system.
But I still needed to remember to turn on the water twice every week. So the plan was to add some intelligence behind it, so that it does not always need me to turn it on/off and can make its own decisions.

Solution:

At a very high level, this is what I did to add intelligence behind the semi-dumb IoT automation:

  • I provide my input in Evernote. The system collects inputs from the Weather and Note services.
  • The system processes all the inputs and makes a decision.
  • The system raises action items, based on its decision, for the IoT automation service.
  • The result improves my life (through IoT automation, alerts/notifications etc.).


The one-line requirement:

I want a system that will automatically decide when to do certain tasks (e.g. turn on the garden tap to water the garden).
** The service I built here does a bit more than this, but in this post I will focus only on its watering functionality.

The breakdown of the user requirement into functional scope:

** Skip the text if you understood this diagram.

Func 01: 
The system needs to know my preferred schedule to water
Why: To optimise the cost and usage of water (council regulation), the system needs to know when (i.e. on which days) I want it to turn on the water. I could hard-code the schedule, but then it would not be flexible or easy to maintain. So it needs an external input point.
How: Luckily, Evernote comes with a good set of APIs, and I already use Evernote for a lot of other things too. So I decided to put my schedule in a note. This way, changing the schedule is super simple: I just open the app on my phone and type in my preferred watering schedule. Like this:
i) the note is called "My garden watering schedule"
ii) the content of the note is Mon 5:40 <new line> Sun 6:00
To achieve this I created an Evernote parsing service (this service does a few other things besides reading the watering schedule) which reads my note and supplies the schedule as input to the decision engine.
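The actual note service is a Spring Boot microservice, but the parsing idea itself is tiny. Here is a minimal Python sketch of it, assuming the note content has already been fetched via the Evernote API; the function names are mine, for illustration only.

```python
from datetime import datetime, time

# One schedule entry per line in the note, e.g. "Mon 5:40" or "Sun 6:00".
WEEKDAYS = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]

def parse_schedule(note_content):
    """Turn the note text into (weekday, time) pairs, where 0 = Monday."""
    schedule = []
    for line in note_content.splitlines():
        line = line.strip()
        if not line:
            continue
        day_part, time_part = line.split(maxsplit=1)
        hour, minute = (int(x) for x in time_part.split(":"))
        schedule.append((WEEKDAYS.index(day_part[:3].lower()), time(hour, minute)))
    return schedule

def is_watering_day(note_content, today):
    """True if today's weekday appears in the schedule note."""
    return any(day == today.weekday() for day, _ in parse_schedule(note_content))

# The note from the post: water on Monday 5:40 and Sunday 6:00.
print(is_watering_day("Mon 5:40\nSun 6:00", datetime(2019, 12, 2)))  # a Monday -> True
```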
Func 02:
The system needs to know the weather condition
Why: Watering the garden/backyard is also heavily dependent on the weather. If it rained recently, or is going to rain soon, there is no point in turning on the water today. Rather, let nature do the watering and wait for the next scheduled day (of my preference).
How: There are a few weather services out there that provide forecasts and weather history, but they also return a lot of other data. I created a weather reader service that reads from different providers (because I am in Australia), retrieves several days of information and extracts only the relevant bits.
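To give a flavour of the "extract only the relevant bits" step, here is a minimal Python sketch. The provider URL and JSON field names are placeholders, not any real provider's API; the real weather reader maps each provider's response into a small day-summary shape like this:

```python
import requests

# Placeholder endpoint and field names -- not a real provider's API.
FORECAST_URL = "https://weather-provider.example.com/api/daily"

def read_daily_summaries(lat, lon, api_key):
    """Fetch several days of history/forecast and keep only what the decision engine needs."""
    resp = requests.get(
        FORECAST_URL,
        params={"lat": lat, "lon": lon, "key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    summaries = []
    for day in resp.json().get("days", []):
        summaries.append({
            "date": day["date"],
            "rain_probability": day["rain_probability"],  # percent
            "max_temp_c": day["max_temp"],
            "humidity": day["humidity"],                   # percent
        })
    return summaries
```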
Func 03:
The system also needs data on my manual actions
Why: Of course, I don't trust AI completely yet, and I recently watched the new Terminator. So I keep control over my tap and turn it on and off when I please. The system therefore needs to know (before making a decision) about my manual activity with the water.
How: This is a bit of a paradox for the system, but I created a few services to let it know when I manually turned on the water.
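I won't detail those services here, but as one illustration of the idea (an assumption on my part, not necessarily how the real services are wired), a small listener could watch the tap switch's MQTT state topic and record a "manual watering" event:

```python
import json
import paho.mqtt.client as mqtt

# Hypothetical topic -- the real Sonoff/Home Assistant topic will differ.
TAP_STATE_TOPIC = "home/garden_tap/state"

def on_message(client, userdata, message):
    if message.payload.decode() == "ON":
        # In the real system this would be published to the Data Event Q;
        # here we only print the event that would be recorded.
        print(json.dumps({"type": "manual_watering", "source": "tap_switch"}))

client = mqtt.Client()          # paho-mqtt 1.x style client
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe(TAP_STATE_TOPIC)
client.loop_forever()
```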
Func 04:
The system needs to decide
Why: The system now has a lot of information available to it. It needs to analyse that information and decide what to do. Deciding like a human brain is tricky business. I wrote the decisioning logic myself (just because I could; sure, it's not going to be as good as Alexa, but I am in control of what works for me).
How: I created a decision service to do the following (a sketch follows this list):
  • Understand whether today is a watering day: it gets my preferred schedule from the Evernote service and checks whether today is one of the scheduled days.
  • Analyse the weather pattern: it gets the relevant information from the weather service and forms a decision by analysing the below:
    • Recent days (meaning 4 days in the past and 3 days in the future).
    • Rain probability (if the probability of rain on a day was, or is forecast to be, higher than 70%, it assumes it did or will rain on that day).
    • Temperature & humidity (too hot and dry means the garden needs water even if it rained a few days ago or is going to rain in a few days' time).
    • That's it for now.
  • If the results of the above are yes and yes, the system comes to the conclusion that "the garden needs water today"; otherwise it skips raising the task/event.
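The real decision service is a Spring Boot microservice; below is a minimal Python sketch of that yes-and-yes combination, reusing the day-summary shape from the weather sketch above. The 70% threshold and the 4-days-back/3-days-forward window come straight from the list; the "too hot and dry" numbers are made-up placeholders.

```python
RAIN_PROBABILITY_THRESHOLD = 70   # percent, as per the rule above
HOT_C, DRY_HUMIDITY = 32, 40      # hypothetical "too hot and dry" thresholds

def garden_needs_water(is_watering_day, daily_summaries):
    """daily_summaries spans 4 past days and 3 forecast days (see Func 02/04)."""
    if not is_watering_day:
        return False
    rained_or_will_rain = any(
        d["rain_probability"] > RAIN_PROBABILITY_THRESHOLD for d in daily_summaries
    )
    too_hot_and_dry = any(
        d["max_temp_c"] >= HOT_C and d["humidity"] <= DRY_HUMIDITY for d in daily_summaries
    )
    # Nearby rain normally skips watering, unless it has also been too hot and dry.
    return (not rained_or_will_rain) or too_hot_and_dry
```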
Func 05:
The system needs to execute physical action
What: So far the system has only decided whether or not to turn on the water. There also need to be physical components that the system uses to execute the physical action (e.g. start the flow of water).
How: To loosely couple the decision engine from the IoT automation components and keep it true to its responsibility, the system, in this case, only raises an action event for the IoT service.
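To show what "only raises an action event" looks like in practice, here is a minimal Python/pika sketch that drops a task onto the HA Event Q instead of calling the IoT service directly. The queue name and message shape are illustrative; the real service is a Spring Boot app.

```python
import json
import pika

def raise_watering_event(amqp_url, duration_minutes=20):
    """Publish a 'turn on the garden tap' task for Home Assistant onto the HA Event Q."""
    connection = pika.BlockingConnection(pika.URLParameters(amqp_url))
    channel = connection.channel()
    channel.queue_declare(queue="ha-event-q", durable=True)   # illustrative queue name
    channel.basic_publish(
        exchange="",
        routing_key="ha-event-q",
        body=json.dumps({"action": "turn_on_garden_tap", "duration_minutes": duration_minutes}),
        properties=pika.BasicProperties(delivery_mode=2),     # persist the event
    )
    connection.close()
```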
Func 06:
The system should be highly agile
Why: This type of system usually grows in functionality over time, and sometimes the growth is very rapid. I will start using it and enhance it as I go. That means I will (or at least should be able to) make frequent changes to it (enhancing the weather analysis etc.), and I do not want to spend/waste time deploying the changes (or worry about their impact on other parts of the system).
How: I created automated deployment pipelines to make the CI/CD process really smooth (even the part where the deployment target is hardware at home and not just the cloud).

Design Consideration:

These are some facts that I considered when I planned the system design:
  • The system has multiple components performing different types of tasks (from gathering data, to analysing it and making a decision, to acting on the decision).
  • Each component has its own single responsibility that it is concerned with and nothing more. The components are also completely stateless (because of their segregation of duties).
  • The components raise events and commands among themselves to complete a piece of functionality of the overall system (an event leading to an action or vice versa).
  • The system needs to be very loosely coupled. A change to one component should never crash the whole system (data/information being unavailable is acceptable).
  • Minimal time from code to production. The system should not require deployment as a separate manual action.
If you have read the above, by now you have a pretty good guess of where the whole design process is heading. Hint hint:
  1. Microservice Architecture
  2. CQRS Design Pattern (read more here: https://microservices.io/)
  3. DevOps Pipelines (I needed to get creative here to satisfy the requirement)

The System Design




The diagram should be self-explanatory. Here are a few points:
  • Action items/events (that are not commands) are placed in the Data Event Q. A service picks up each event and records it in the Postgres DB. Another query/reader service reads these events (on demand), derives facts and delivers those facts to the querying party (see the sketch below).
  • The Command Event Q is used to command a service to do something. Right now it is used to command the data query service to purge its cache and reload facts when the data record service picks up something that changes the facts/state of the system (e.g. a new auth token becomes available).
  • The HA Event Q (Home Assistant event queue) is used to raise tasks (to be executed) for Home Assistant.
  • The cron job triggers the "Smart Water Service", which collects information from the Weather and Note services. After analysing the information and making a decision, it does 2 things:
    • Notifies me (the user) via email.
    • Places a task in the queue for Home Assistant to execute.
  • I am using Eureka as my service discovery engine. The number of services will keep growing, and so will the interactions among them, so it would become pretty hard to keep track of their endpoints. Having service discovery in place makes it easy to move the system to another cloud without changing a lot of references, and allows one service to be scaled without configuration changes to the others.
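The record side of the Data Event Q is conceptually very small. Below is a minimal Python/pika sketch of that write path, assuming a simple events table in Postgres; the real record service is a Spring Boot app, and the queue and table names here are illustrative only.

```python
import json
import pika
import psycopg2

def run_data_record_service(amqp_url, postgres_dsn):
    """Consume the Data Event Q and append every event to an 'events' table."""
    db = psycopg2.connect(postgres_dsn)
    channel = pika.BlockingConnection(pika.URLParameters(amqp_url)).channel()
    channel.queue_declare(queue="data-event-q", durable=True)   # illustrative queue name

    def record(ch, method, properties, body):
        event = json.loads(body)
        with db.cursor() as cur:
            cur.execute(
                "INSERT INTO events (type, payload, received_at) VALUES (%s, %s, now())",
                (event["type"], json.dumps(event)),
            )
        db.commit()
        ch.basic_ack(delivery_tag=method.delivery_tag)          # ack only after the event is stored

    channel.basic_consume(queue="data-event-q", on_message_callback=record)
    channel.start_consuming()
```

The query/reader service is the other half of the CQRS split: it reads those stored events on demand, derives facts and serves them to whichever service asks.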


The CI/CD Pipelines to make the system highly agile



  • Code repo is Git on BitBucket. A branching strategy is maintained.
  • A commit (and push) to a release branch triggers the pipelines. Within the pipelines there are 2 different paths/patterns (not to be confused with pipeline steps) that I am using to automate the DevOps:
    1. Deploy to Cloud Path: This is traditional pipeline deployment stuff (I will describe it below).
    2. Deploy to RPi in Local Network Path: This is where I put in something a little creative, and I must brag about it. I did not want the hassle of logging into the RPi, checking out the new code and restarting the service (too much hassle, manual effort and waste of time). Rather, I wanted all of this automated. So I ended up with the following (a sketch of this appears after the list below):
      1. Created a small Python script using git-python that will:
        • check out/pull the source code from the repo for the service/program passed as a parameter, so the script itself is program agnostic (e.g. HACommandHandler), and
        • restart the running program it was triggered for, so that the new code takes effect (e.g. HACommandHandler).
      2. The pipeline raises a DEPLOY event in a queue (e.g. RPiDeploymentQueue) as one of its steps.
      3. A Python program on the RPi is subscribed to that queue. When the DEPLOY event is raised by the pipeline, it triggers the script from step 1.
  • In this mix I am using BitBucket Pipelines. There are many good things about BitBucket Pipelines. A few that I considered for this project are:
    • BitBucket Pipelines does almost everything I would otherwise need Jenkins for. Below is what I needed from a CI/CD pipelines perspective:
      • Compile the code and produce the artifact.
      • Build the Docker image (on Heroku, Docker isn't strictly necessary, but I wanted the system to be IaaS agnostic. All popular IaaS providers support Docker these days anyway, and it will also let me use a container service should I ever need to. So it really makes the system future proof [unless Docker itself evaporates in the future]).
      • Push the Docker image to the Heroku registry and then command Heroku to release it from the registry to a dyno.
    • BitBucket Pipelines does all of this, so there is no need to run Jenkins (which would require an additional server). Since my code base is already on BitBucket, using its pipelines made perfect sense to me.
    • You can source control the pipeline code itself.
    • This is probably the heaviest point that biased my decision towards BitBucket Pipelines: it comes with 50 minutes of free build time, which is perfect for hobby projects. (The paid plan is cheap too.)
    • I live in Australia. So Atlassian all the way.
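To make the "Deploy to RPi in Local Network" path concrete, here is a minimal Python sketch of the listener/deploy pair described above. The RPiDeploymentQueue name comes from the pipeline step; the repo location and the systemd restart are my assumptions for illustration.

```python
import subprocess
import pika
from git import Repo   # git-python / GitPython

def deploy(program_name):
    """Pull the latest code for the given program and restart it."""
    repo = Repo(f"/home/pi/{program_name}")                     # assumed checkout location
    repo.remotes.origin.pull()
    subprocess.run(["sudo", "systemctl", "restart", program_name], check=True)  # assumed restart mechanism

def listen_for_deploy_events(amqp_url):
    """Subscribe to the deployment queue and redeploy whatever program the message names."""
    channel = pika.BlockingConnection(pika.URLParameters(amqp_url)).channel()
    channel.queue_declare(queue="RPiDeploymentQueue", durable=True)

    def on_message(ch, method, properties, body):
        deploy(body.decode().strip())        # message body = program name, e.g. HACommandHandler
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="RPiDeploymentQueue", on_message_callback=on_message)
    channel.start_consuming()
```

The pipeline's final step only has to publish the program name to RPiDeploymentQueue; everything after that happens on the RPi itself.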

That is it.

What's next:

- Demo
- Make the system more sophisticated.

Now go build your own microservices, because monolithic is so '90s.

Technologies I used

  • IaaS -- Heroku.
  • Local CPU/Server -- Raspberry Pi. I know I wrote RPi 0 in the diagram, but I ended up running the command handler script on one of my media centres running OSMC. That media centre is an RPi 3 B+ and was doing nothing but running the media centre, so I decided to give it some more work by multi-purposing it.
  • Code Repo -- BitBucket
  • Service Discovery -- Eureka (by Netflix)
  • Queue -- RabbitMQ (AMQP)
  • Home Assistant queue for IoT devices -- Mosquitto MQTT
  • Cron-Job.Org -- for scheduled triggering of services (also health checks on the available services; this trick works great to keep hobby dynos alive).
  • DB -- Heroku Postgres
  • Microservice -- All cloud microservices are written using Spring Boot (purely because I am still refusing and rejecting NodeJS for no good reason).
  • RPi Services/Programs -- Written in Python, because RPi OSes ship with Python (e.g. Raspbian and OSMC both have Python pre-installed), so it keeps things light.

Running Cost

AUD $0

Wait, does it make the entire process polyglot?
