because writing is clarifying

Subbu Allamaraju’s Journal

Technology Decision Making and Architecture Reviews

In this note, I would like to implore you not to use architecture reviews as a means to improve the quality of technology decisions. Instead, I ask you to rely on the acts of asking for, and giving, feedback in open forums that favor autonomy, constructive feedback, and dialog over correctness and objectivity of decisions.

Technology decision-making processes like architecture review boards, architecture working groups, virtual architecture teams (VATs), or “A” teams (with the “A” standing for either “architecture” or the “best of breed”) that rely on “review” of decisions tend to be slow, impede team autonomy, and produce top-heavy decisions.

On the other hand, processes or rituals that use dialog and feedback as a means of offering improvements put the autonomy back in the hands of those seeking feedback, and empower them to determine how best to incorporate the feedback into actions. Without the autonomy to make decisions, whether optimal or not, no team can learn and own outcomes.

When you take the feedback-centric approach, those giving feedback are forced to formulate constructive and actionable feedback instead of arguing against the decision or explaining why the proposed decision is wrong or inferior. Since there is no review, there are no approvals to make and no up-or-down votes to give. Opinions on why any particular decision is good or bad become immaterial unless they are followed by constructive and actionable feedback showing better alternatives. Any bar-raising push for improvement happens via constructive feedback, not outright disapproval.

The onus for incorporating the feedback falls back onto the one seeking the feedback, thus maintaining autonomy. The feedback is non-binding. The feedback seeker is free to disregard the feedback or interpret it in ways that they see fit.

Wouldn’t this approach be likely to produce inferior outcomes? Before answering this question, let us look at the rationale behind commonly practiced architecture reviews.

Meritocracy

Most architecture review initiatives start with a desire to improve the quality of decisions. Teams wanting to make decisions come prepared to present their designs and analyses in the form of a proposal. A nominated set of individuals then discusses, reviews, and offers feedback or critique. These individuals are usually considered the best in their roles.

Upon presenting the proposal, a decision is either reached or not through a process of approval or voting. If a decision is made, the presenter(s) get to implement it. When a decision cannot be reached, the presenters may need to come back later with an improved proposal.

This approach assumes that the individuals reviewing the proposals know it all and have earned the right to offer critique on any topic. Instead of helping the proponent improve by way of feedback, this style puts the emphasis on review and approval of a proposal under the guise that those reviewing know the answers. This is very rarely the case. Even when it is, it disenfranchises those implementing the decisions.

Regardless, teaching someone how to make better decisions by way of constructive feedback is far more valuable than making better decisions for them. When there is no teaching, there is no learning. When there is no learning, there is no improvement.

Silos and The Principle of Least Effort

What also prompts such architecture review boards or forums is growth from a small team to a larger organization of several teams. When the organization is small, decisions are easily understood by everyone and feedback flows quickly. But silos form as the size increases. Individuals outside a silo find it difficult to understand what decisions are being made, and the rationale behind them. Nonetheless, decisions still move swiftly within each silo thanks to its autonomy.

However, autonomy without external feedback often leads to local optima, ignoring alternatives that could produce a global optimum. In the absence of feedback, the principle of least effort takes over, and the team may gravitate toward known and comfortable decisions, avoiding uncomfortable alternatives.

Architecture Feedback

I would argue that it is okay for the feedback process to produce inferior outcomes in the beginning, as long as there is a mechanism for feedback to flow continually. The feedback, coupled with the proponents’ autonomy to own the outcomes, will eventually self-correct decisions.

If you are ready to rechristen your architecture review ritual into an architecture feedback ritual, here are some suggestions for the feedback seekers and feedback givers.

Feedback Seekers

  • Approach as though you don’t have all the answers, let alone the best answer.
  • Remember that the purpose of feedback is to learn, and that feedback is not intended to disenfranchise you of your autonomy.
  • Don’t defend your solution. Consider that there are many ways to solve the same problem.
  • When you don’t like the feedback or don’t agree with it, explain your reasoning to open a dialog, not to defend your position.
  • Ask what you may be missing.
  • Take notes and ask clarifying questions about the feedback.
  • Avoid defensive phrases like “we decided” or “we want to”. Instead, start with “Here is what I/we thought … What do you think?”.
  • Don’t outright reject the feedback if you’ve already thought about it. You might instead say, “Thanks for the suggestion. Let me reconsider”, or “Would you mind walking me through this further? I couldn’t come to the same conclusion.”

Feedback Givers

  • Approach as though you don’t have all the answers, let alone the best answer.
  • Practice how to give constructive feedback about the things you disagree with.
  • Work on channeling your urge to push for what you consider better choices into constructive feedback.
  • Ask clarifying questions to understand the context behind proposed decisions.
  • Instead of “This won’t scale/work/whatever”, try alternatives like “I’ve observed that this approach might not scale/work/whatever. Here are the reasons why. … I will be happy to walk you through such and such alternative.”
  • Set and raise the bar while providing additional context.
  • Don’t challenge or override the presenters’ autonomy to incorporate the feedback in the way they see best, including ignoring your feedback entirely.
  • It’s okay to not have an opinion. Don’t be compelled to voice one.

Though this approach might sound radical, I submit that the only way to raise the bar on technology decision making while preserving team autonomy is through feedback and dialog.

If you enjoyed this article, consider subscribing to my journal.
