
Requirements and business rules are often considered hard and fast, never to be broken. Sometimes real value can be gained by breaking them. Find out how.

Excerpt:

The nice thing about programming is that everything is black and white. Wrong!

When designing new features, we all go through a process of discovering requirements. If you are a good developer, you will start to ask questions and probe deeper to find the edge cases. Why? Because you know the edge cases can often take up 80% of the development time. But is that where 80% of the value is?

[…]

One of the great advantages of event sourcing and eventual consistency is that you can decide how strictly you need to adhere to the requirements. In other words, you focus on delivering value rather than features in isolation from value. In the voting example above, maybe the 5000 limit is a soft limit, in which case you close off the voting system when the vote count reaches or exceeds 5000. Maybe the option of a high-performance system with a small chance of exceeding the limit is more valuable than a slower approach that guarantees never to exceed 5000. If using event sourcing, the business (or a process manager) could then issue compensating commands/events to repair the problem (just as they do in the real world every day). As Greg Young put it, “Consistency is over-rated.”
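
As a rough sketch of what that soft-limit approach might look like (in Python, with illustrative names like `VoteCast`, `CloseVoting` and `RevokeVote` that aren’t tied to any particular framework), a process manager watching the eventually consistent vote count could do something like this:

```python
# Sketch of a soft limit plus compensating commands (illustrative names).
from dataclasses import dataclass


@dataclass
class VoteCast:            # event observed from the read side
    voter_id: str


@dataclass
class CloseVoting:         # command: stop accepting votes
    reason: str


@dataclass
class RevokeVote:          # compensating command for votes over the limit
    voter_id: str
    reason: str


class VotingProcessManager:
    SOFT_LIMIT = 5000

    def __init__(self, send_command):
        self.send_command = send_command
        self.count = 0
        self.closed = False

    def on_vote_cast(self, event: VoteCast) -> None:
        self.count += 1
        if not self.closed and self.count >= self.SOFT_LIMIT:
            # Close voting once the soft limit is reached; a few more votes
            # may already be in flight, and the business accepts that.
            self.closed = True
            self.send_command(CloseVoting(reason="soft limit reached"))
        elif self.closed:
            # Votes that slip through after closing are repaired with a
            # compensating command, just as the business would do by hand.
            self.send_command(RevokeVote(voter_id=event.voter_id,
                                         reason="limit exceeded"))


if __name__ == "__main__":
    pm = VotingProcessManager(send_command=print)
    for i in range(5002):
        pm.on_vote_cast(VoteCast(voter_id=f"voter-{i}"))
```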

But what if we do have to deal with consistency? Given this kind of scenario, what options are there to resolve this kind of set-based issue?

Locking, transactions and database constraints are old, tried-and-tested tools for maintaining data integrity, but they come at a cost. Often the code/system is difficult to scale and can be complex to write and maintain. But they have the advantage of being well understood, with plenty of examples to learn from. By implication, this approach is generally done using CRUD-based operations. If you want to keep using event sourcing, you can try a hybrid approach.
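
For contrast, here is a minimal sketch of that traditional CRUD route, enforcing the limit inside a single database transaction. SQLite stands in for whatever relational store you might actually use, and the table and column names are illustrative:

```python
# CRUD-style sketch: the 5000 limit is enforced inside one transaction,
# trading scalability for a hard guarantee.
import sqlite3

MAX_VOTES = 5000

# isolation_level=None puts the connection in autocommit mode so we can
# manage the transaction explicitly with BEGIN IMMEDIATE / COMMIT.
conn = sqlite3.connect("votes.db", isolation_level=None)
conn.execute("CREATE TABLE IF NOT EXISTS votes (voter_id TEXT PRIMARY KEY)")


def cast_vote(voter_id: str) -> bool:
    # BEGIN IMMEDIATE takes the write lock up front, so concurrent writers
    # serialize on the check-then-insert that follows.
    conn.execute("BEGIN IMMEDIATE")
    try:
        (count,) = conn.execute("SELECT COUNT(*) FROM votes").fetchone()
        if count >= MAX_VOTES:
            conn.execute("ROLLBACK")
            return False
        conn.execute("INSERT INTO votes (voter_id) VALUES (?)", (voter_id,))
        conn.execute("COMMIT")
        return True
    except Exception:
        conn.execute("ROLLBACK")
        raise


print(cast_vote("alice"))   # True until the table holds 5000 rows
```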

If you are using event sourcing and relying on an eventually consistent read model is not an option, you can adopt a locking field approach. In our voting example, before you issue the command you check a locking table/field (usually in a database) for the voting count. If it is under the maximum, increment it and carry that value forward with the command. If, when the operation is complete, the count still matches or is still valid, then the operation can complete. When checking things like email address uniqueness, you could use a lookup table: reserve the address before issuing the command. For these sorts of operations it is best to use a data store that isn’t eventually consistent and can guarantee the constraint (uniqueness in this case). Additional complexity is a clear downside of this approach, but less obvious is the problem of knowing when the operation is complete. Read-side updates are often carried out on a different thread, process or even machine from the command, and there could be many different operations happening.
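
To make the reservation idea concrete, here is one way the email-uniqueness case could look. SQLite’s primary-key constraint stands in for any strongly consistent store, and the `RegisterUser` command and table name are hypothetical:

```python
# Sketch of the reservation approach: claim the email address in a strongly
# consistent store *before* issuing the command.
import sqlite3

conn = sqlite3.connect("reservations.db", isolation_level=None)
conn.execute(
    "CREATE TABLE IF NOT EXISTS email_reservations (email TEXT PRIMARY KEY)"
)


def try_reserve_email(email: str) -> bool:
    try:
        conn.execute("INSERT INTO email_reservations (email) VALUES (?)",
                     (email.lower(),))
        return True
    except sqlite3.IntegrityError:
        # Someone else already holds the reservation.
        return False


def register_user(email: str, send_command) -> bool:
    if not try_reserve_email(email):
        return False                      # reject before any command is sent
    # The reservation is held, so the command can be issued safely. If the
    # command later fails, the reservation should be released (compensated).
    send_command({"type": "RegisterUser", "email": email})
    return True
```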

To some this sounds like an oxymoron, but it is a rather neat idea. Inconsistent things happen in systems all the time. Event sourcing allows you to handle these inconsistencies. Rather than throwing an exception and losing someone’s work, all in the name of data consistency, record the event and fix it later.
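
In code, “record it and fix it later” for the voting example might look something like the sketch below; the event names and the in-memory store are purely illustrative:

```python
# The vote is appended even when it lands over the limit; the inconsistency
# is captured as its own event for later repair rather than raised as an error.
from dataclasses import dataclass, field
from typing import List


@dataclass
class VoteRecorded:
    voter_id: str


@dataclass
class VoteLimitExceeded:     # flags the problem instead of throwing it away
    voter_id: str
    count: int


@dataclass
class EventStore:
    events: List[object] = field(default_factory=list)

    def append(self, event) -> None:
        self.events.append(event)


def record_vote(store: EventStore, current_count: int, voter_id: str,
                limit: int = 5000) -> None:
    # The voter's work is never discarded in the name of consistency.
    store.append(VoteRecorded(voter_id))
    if current_count + 1 > limit:
        # The business (or a process manager) repairs this later with a
        # compensating command, just as it would in the real world.
        store.append(VoteLimitExceeded(voter_id, current_count + 1))
```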

[…]

It is only when you dig deeper into a domain and question the requirements and specs that you start to find the flex points. When you accept that sometimes business rules are not rules but guidelines, and when you strive to understand the value behind a rule, only then can you actually focus your time on delivering value over completeness. I’m guessing most businesses would rather you delivered a solution in 3 months that met 80% of their needs than one that met 98% of them in 12 months.
