Chris Lorig's Blog

Building Better Teams & Organizations

Performance evaluation processes can be great fuel for personal growth. However, some strategies become obstacles or even sabotage any chance at growth.

Focusing on weaknesses

Working on one's weaknesses is great, if done for the right reasons. However, if the process forces people to work on skills they hardly use, just because they're a requirement on a checklist, it can kill all motivation to grow.

Forcing a great engineer to learn management skills just to move to a senior position is one common example. Skill matrices are particularly prone to this.

Grading on a curve

Grading on a curve, allowing only a set number of promotions per year, or giving each team a fixed budget of promotion points: all of these have one thing in common. They incentivize joining weak teams to get promoted quickly, and leaving high-performing teams because they slow down one's personal growth trajectory.

Systems like this will lose self-motivated high performers who don't want to leave their teams hanging for their own gain. They're a great environment for the politically savvy who don't mind a bit of back-stabbing to look better than their teammates.

Targeting averages

Similar to grading on a curve, targeting averages across teams, departments or a whole company incentivizes back-stabbing or even sabotage to look better by comparison.

They offer a second incentive too: joining a low-performing team and taking it slow, hiding inside a low average that only needs to rise enough to stay within the target band.

Hidden agendas

Nothing hurts trust and motivation more than a hidden agenda. Teams are fully dependent on what their managers and the company as a whole communicate. Great motivational speeches about career path systems are very common but it's just as common for those to not reflect the whole truth.

Some companies value looking good over transparency. They praise career opportunities and promise to reward individual growth, but then, when the chips are down, apply methods like the ones above to 'equalize teams', 'harmonize growth' or 'avoid having too many leaders'.

Motivation gained this way is temporary. At some point people will catch on, motivation will drop and churn will increase.


Design your career model with transparency and fairness in mind. Allow employees some freedom to develop the skills they are most interested in or motivated by. Evaluate them as objectively as possible, based on demonstrated behaviors and past performance. Reward success on team level and growth on personal level. Be open about your intentions.


“Stärken stärken” – nice and handy term, and very German. Essentially, it means to build on your strengths (as opposed to mitigating weaknesses).

Time and again, when people are struggling with their next career steps, trying to understand where their path leads, or unhappy with the options that present themselves, they turn to self-discovery. And in doing so, they are invariably confronted with their weaknesses. Our knee-jerk reaction is to tackle those head-on and mitigate them.

Don't do it.

Or, at the very least, be intentional about it.

I have a colleague who is in just such a situation right now. She's found some areas she calls “weaknesses” and that's where her development focus ended up.

When I suggested she should not do that but instead focus on “Stärken stärken”, she was confused: Aren't people supposed to mitigate their weaknesses first?

I asked her a simple question and her eyes went wide:

Do you want to stand out or fit in?

This is really the key distinction. When mitigating your weaknesses you get to a “well-rounded” skill-set. Jack of all trades, very employable. Master of none, vanishing in the crowd.

When focusing on your strengths, you build a skill-set that is much more specialized, but you get to operate on a much higher level. You're qualified for very different jobs.


When my team and I started working on our new project, it seemed like this daunting pile of work with no end in sight. Initial guesses put us at a time horizon of four months to get it all done. That was way later than everyone would have liked.

So we set out to see how we could speed things up. At first we started to negotiate the scope, but we were already looking at an MVP plan; there wasn't much to trim. With a fixed scope – our backlog in hand – and a clear budget – 40 person-hours in any given week (at best) – we turned to getting the most out of the time we had, and to figuring out where we were losing it.

Enter stage right: a tool called LinearB that provided us with insight into our current flow of change.

Coding time → time waiting for a review → time it takes to review → time waiting for a deployment → deployment

Looking at the data immediately made it clear that we were losing or wasting a lot of time, waiting for things to happen.
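The underlying arithmetic is simple. Here is a minimal sketch of the per-stage breakdown, using hypothetical timestamp fields (the real tool derives these from Git and CI data, so field names and formats will differ):

```python
from datetime import datetime

def stage_durations(pr):
    """Split a change's lifecycle into the stages of the flow above.
    `pr` maps hypothetical event names to ISO-like timestamps."""
    stages = [
        ("coding",      "first_commit",     "review_requested"),
        ("pickup_wait", "review_requested", "review_started"),
        ("review",      "review_started",   "approved"),
        ("deploy_wait", "approved",         "deployed"),
    ]
    fmt = "%Y-%m-%dT%H:%M"
    durations = {}
    for name, start, end in stages:
        delta = datetime.strptime(pr[end], fmt) - datetime.strptime(pr[start], fmt)
        durations[name] = delta.total_seconds() / 3600  # hours
    return durations
```

Summing the `*_wait` entries across a few weeks of pull requests is usually enough to show where the time actually goes.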

With LinearB as a canary, we made a number of changes to improve our flow of work. Every time, we kept tracking if indicators moved in the right direction. This post is just about the first and possibly most impactful step we took.

There’s very little reason to wait for reviews, and virtually no reason to wait for a deployment.

Just Leave it Out

The most straightforward way to eliminate these waits is to do just that: eliminate them.

Mob programming (also called ensemble programming) is the best approach to this: the whole team works on the same problem together, in one large session with a shared screen. Once a problem is solved, everyone has seen, contributed to and reviewed the solution automatically, by virtue of being present. (There's a bit more to it, but bear with me for argument's sake.)

This is a highly effective step and yields surprising results. It's also very counter-intuitive to most people. Getting a dev team and wider organization on board with this technique is a whole endeavor in and of itself, in many cases.

So with this idea promoted to ultimate goal, I thought about what first step we could take to make small-but-significant improvements to our way of work. This is the first iteration we came up with.

The Rule

We changed our workflow to focus on these waiting things first, with a very simple rule:

When picking your next activity, focus on the right-most column first.

  1. Deploy what needs deploying – the right-most column (besides Done).

  2. Then review what needs reviewing – second column from the right.

  3. Then see if you can unblock blocked tickets (we placed the Blocked column after In Progress).

  4. Next, try to help out with something already in progress.

  5. Lastly, start a new item.
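The rule is mechanical enough to express in a few lines. This is just an illustrative sketch; the column names are from our board and the "help with in-progress work" step is reduced to picking a ticket, which glosses over the human part:

```python
# Board columns in priority order: right-most (closest to Done) first.
PRIORITY = ["Ready to Deploy", "In Review", "Blocked", "In Progress", "To Do"]

def next_activity(board):
    """board maps a column name to a list of ticket ids.
    Returns (column, ticket) for the highest-priority non-empty column,
    or None if the board is empty."""
    for column in PRIORITY:
        tickets = board.get(column, [])
        if tickets:
            return column, tickets[0]
    return None
```

The point of writing it down like this is that "what should I do next?" stops being a judgment call and becomes a lookup.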

Our cycle time (DORA definition: the time between starting development and deploying the code to production) shortened quite a bit, just from eliminating the waste of waiting.

Why does this matter? Aren’t we working in sprints, delivering a package at the end?

It matters because focusing on delivering every task as soon as possible allows for the fastest feedback cycle. Every step after the initial coding time adds feedback and improves the work. Wait time delays this feedback, but without this feedback we can’t know if a given task is in fact done already. We run the risk of not finishing the story within the sprint. Earlier feedback mitigates that risk and increases the chance of delivering everything we committed to.

When focusing on the thing on the right, you’re focusing on the right thing.


As you grow your business and your team, keeping everyone on the same page can be challenging. To ensure that everyone works in alignment, you'll want to optimize the cognitive load in your organization. Cognitive load is the amount of mental effort needed to understand a task and complete it successfully. In other words: how much do you have to think about what you're doing? In this blog post, we'll walk you through three ways to optimize cognitive load in your organization: identifying its sources, using tools to track and monitor improvements, and implementing changes that reduce unhelpful cognitive load as much as possible.

What does Cognitive Load mean?

Cognitive load is a term used in cognitive psychology to describe the amount of mental effort required to perform a task. It is the total amount of information and activities that a person's working memory can process at any given time. When the cognitive load is too high, it can lead to mental fatigue and decreased performance.

Identifying Cognitive Load

The first step in optimizing cognitive load is to identify the sources of the load. Common sources of cognitive load include:

  • Complexity of the task
  • Volume of information presented at once
  • Interference from irrelevant information
  • Lack of control over the task
  • Novelty of the task

Monitor Progress with Tools

Once the sources of cognitive load have been identified, it's important to monitor progress towards reducing the load. This can be done through a variety of tools, including:

  • Self-report questionnaires
  • Performance monitoring software
  • Eye-tracking technology

Optimizing Cognitive Load – Three Strategies

Minimize extraneous cognitive load

Extraneous cognitive load refers to the load that comes from information that is not directly relevant to the task at hand. To minimize extraneous cognitive load, you can:

  • Present information in small chunks
  • Highlight the most important information
  • Remove irrelevant information

Minimize intrinsic cognitive load

Intrinsic cognitive load refers to the inherent complexity of the task itself. To minimize intrinsic cognitive load, you can:

  • Simplify the task by breaking it down into smaller, more manageable steps
  • Provide clear instructions and feedback
  • Allow for customization of the task

Enhance germane cognitive load

Germane cognitive load refers to the load that is necessary to achieve a desired outcome. To enhance germane cognitive load, you can:

  • Provide meaningful, relevant context
  • Encourage exploration and experimentation
  • Use visualization and other aids to support understanding


Cognitive load is an important concept in optimizing performance. By understanding the sources of cognitive load, monitoring progress, and reducing extraneous and intrinsic load while enhancing germane load, you can ensure that you are able to perform at your best.


I'm impressed by what people accomplish with tools like Excel, Macros, and some no-code automation. Recently, my DevOps team and I had the opportunity to support a project that aimed to replace such an existing, purely hand-built logistics solution with a custom-built system.

The company hired experts with lots of prior experience in such projects. Those experts made project plans, designed interfaces, created road maps. Then everyone went to work. As projects are wont to do, this one blew past early estimations and went quite a bit over the time budget. Pressure mounted and corners were cut.

It was a wild ride. The team tried to roll out the new system all at once, and it was a complete and utter disaster. They attempted and failed with several roll-outs and ultimately had to roll everything back. It was a humbling experience and a valuable lesson.

When we approach the implementation of such big projects, a phased roll-out is key.

First and foremost, it's essential to start with a thorough analysis of the current system. Understand its limitations and the pain points the users are facing. Instead, the experts fell victim to the law of the instrument: they approached the project with previous solutions already in mind.

The things we know that just ain't so.

(not) Mark Twain

Involve the users and stakeholders in defining the requirements for the new system. Ask them where their biggest needs and problems lie. This way, you'll have a clear understanding of what the new system needs.

Next, instead of trying to replace everything at once, start with a small set of functionalities. Then gradually expand as you gain confidence in the new system. This minimizes the risk of failure and allows you to stay agile.

Moreover, it's crucial to have a robust testing and validation process to catch and fix issues before they surface in production. A proper training and communication plan should also be in place; it is key to preparing end users and helping them use the new system efficiently.

Salvaging the situation

In the wake of the rollback, I sat down with some stakeholders to really listen to what they need. We found that one capability was completely missing from the current system: automated handling of returns. Gaps like this are the obvious first step for a phased roll-out. As there is no component to replace, the risk is minimal and even small improvements would pay off. It is also the last step in our workflow, so for the phased approach we could back-track from there, working our way upstream until we have everything covered.

In summary, when it comes to implementing big projects, a phased roll-out approach is key. Starting with a specific functionality that addresses a specific pain point minimizes the risk of disruption to existing business operations and allows for adjustments to be made as needed. Additionally, you need a robust testing and validation process, along with a proper training and communication plan, to ensure the success of the new system.


Message queues, event streams, event sourcing: those fronts have almost become as entrenched as the language and editor debates. In the effort to win this religious war, engineers forget their actual use case and default to screaming into the void. But that's not the only way to go.

Screaming into the void is what happens when architecture ends at the choice of tools: You pick Kafka or Kinesis, set up a topic, add a couple subscribers and start publishing events. Nice and decoupled. The service sending the event does not have to pay attention to the consumers at all. Everyone can listen and decide what to do with the event they receive.

There is even a valid use case for this. Our old friend the Enterprise Service Bus often worked this way. Attached systems emit status events, like data modifications, inputs, deletions. Consumers on the bus would follow along, and if there was something to do in reaction to a status change, they would do it. If this is your use case, if you care about keeping other systems merely informed of the latest developments (like a newspaper), you're all set. Keep going.

What else is there?

Of course we have to talk about everybody's favorite: CQRS, Command Query Responsibility Segregation. The quest to build this often enough leads to everyone screaming (their commands) into the void. But in this case you do care about your message being received, and you do care about what it effects. Screaming into the void is not good enough here.

A command stream is more akin to a mail or telegraph system. You may not know which operator precisely will execute your command, but you do know where to address your message to ensure it gets done. Architecturally, this means topic separation, at-least-once processing, maybe dispatching.
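To make "at-least-once processing" concrete, here is a toy in-memory sketch of the idea. Real brokers (SQS, Kafka consumer groups, RabbitMQ) provide this out of the box; nothing below is a real client API, just the mechanics:

```python
import queue

class CommandTopic:
    """Toy command channel with at-least-once delivery: a consumer
    must ack a command, otherwise it is redelivered later."""

    def __init__(self):
        self._pending = queue.Queue()
        self._in_flight = {}  # msg_id -> command, delivered but not acked
        self._next_id = 0

    def send(self, command):
        self._next_id += 1
        self._pending.put((self._next_id, command))

    def receive(self):
        msg_id, command = self._pending.get()
        self._in_flight[msg_id] = command  # redelivered unless acked
        return msg_id, command

    def ack(self, msg_id):
        del self._in_flight[msg_id]

    def redeliver_unacked(self):
        """Requeue everything delivered but never acknowledged,
        e.g. after a consumer crash or visibility timeout."""
        for msg_id, command in self._in_flight.items():
            self._pending.put((msg_id, command))
        self._in_flight.clear()
```

Note the trade-off baked into this model: a command may be processed twice, so handlers need to be idempotent. That is the price of guaranteeing the message does not simply vanish into the void.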

Event sourcing is another audience favorite that regularly gets mixed in when implementing the screaming-into-the-void pattern. Events get written into a queue and then are used as a basis to compute a current (or historic) state as needed.

For this application, screaming into the void is not only an anti-pattern but actively opposes the use case. State-change events (the source for event sourcing) get mixed with status updates (the consequence of a computed state change), commands (events that may or may not lead to state changes), and anything else people see fit to send.
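The heart of event sourcing is just a fold over a clean log of state-change events. A sketch with a hypothetical account example:

```python
def replay(events, initial=None):
    """Fold a stream of state-change events into the current state.
    Event types here are invented for illustration."""
    state = {"balance": 0} if initial is None else dict(initial)
    for event in events:
        if event["type"] == "deposited":
            state["balance"] += event["amount"]
        elif event["type"] == "withdrawn":
            state["balance"] -= event["amount"]
        # Unknown event types are silently skipped, which is exactly
        # why a topic polluted with commands and status noise makes
        # the replayed state unreliable: the log stops being the truth.
    return state
```

Replay only works if every record in the log really is a state change. The moment the same topic carries commands and status chatter, you either skip them (and hope none mattered) or parse them (and couple yourself to every publisher).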

What do I choose?

Before making a choice, get clarity on what exactly you want to achieve.

Want to send commands to another system, have them processed in order, and generally care about the outcomes of your events? Try a message queue or at least ensure that the relevant constraints and processing guarantees apply.

Want to rely on your events as a data source that you can rearrange and modify and recompute to get to a state? Pick an actual event store or at least ensure that the topics are cleanly separated and events are guaranteed to be persisted to your expectations.

Want to keep your environment apprised of what's going on, but don't care how they react to what you do? Go ahead and scream into the void. Set up your favorite stream and publish your updates into it, for the world to consume or ignore.


I used to work with a team that struggled to give accurate estimations for their projects. Things were usually off by a factor of 2-5x. I struggled to keep stakeholders at bay as delays accumulated and the whole quarterly plan grew later and later.

We tried various things to address this, from switching to a more flexible workflow to providing ongoing re-estimations as the project progressed. The results largely stayed the same: we were not able to even remotely predict when something would be done, and even rolling estimates hardly helped, as they only highlighted how much we had to correct our guesses over the course of a given project.

After several months of struggling, I eventually went back through the data. I tried to figure out why our initial estimations were so off, what it was in particular that we constantly missed, but I couldn't tell. It was just not clear from the data, because we didn't actually create all the necessary stories before starting the work: we just kept adding more stories as we progressed through the project. One project lead had it all in their head, but there was no actual plan upfront.

I almost kicked myself, because this lack of a formulated plan of action was something I had subliminally noticed several times. But I couldn't put my finger on it until it stared me right in the face.

But plans are useless, are they not?

Plans are worthless,

but planning is everything.

Dwight D. Eisenhower

No amount of planning will ever yield a set of steps that can be followed to a T. And the amount of planning it takes to even come close is a huge time sink that doesn't really add value unless you work on the scale of NASA.

Still, making a plan is essential, because it makes your thinking and expectations about a project transparent and provides an overview of the expected amount of work. That is an important basis for giving any kind of reasonable estimate at all. Furthermore, it gives insight into the short-term road map, which is important to some teams.

Most importantly, it allows you to improve.

It lets you tell expected from unexpected work, and find blind spots and systemic problems that would otherwise stay hidden between all the other expected tickets, never to be found. It helps a team reflect on why delays happen and be honest about things that don't work well.

Nobody actually expects to create plans that survive the contact with reality, but for this team there was definitely room to improve. Maybe for yours, too.


I've always been a startup guy, so especially early in my career, I never knew such a thing as career paths, plans or guides. I had to make up my own mind about what my next steps would be and how to get there.

A few years ago, I got into a heated discussion with a colleague who demanded (the audacity!) a career guide to tell them what their next steps would be. I tried reasoning with them, explaining how their career is – and should be – in their own hands, that nobody else would be better suited to determine their path, that nobody else would know what made them happy.

“But it's not about happiness! It's about my career!”

That stopped me in my tracks. For me the two were one and the same but for this colleague, they seemed to be fundamentally disconnected, to the point where they would abdicate control over one of them to a completely different person.

So I gave in and worked something out with my head of engineering. Lo and behold, even more people appreciated what we created, and I learned that a lot of people around me were uncomfortable being left alone with such big decisions.

To make sure this wouldn't happen again, I started learning a lot about different career models and how to find your way through them. It's completely reasonable to ask for guidance or for people to tell you the next steps. However, there are a few things everybody needs to answer for themselves if they want to pick a career that will make them lastingly happy.

Key questions


Motivation: What gets you out of bed and all excited to work in the morning?

Impact: What difference do you want to make in the lives of others by acting on this motivation?

Calling: What do you see as your “right” way to contribute to this impact?


Passion: What skill or activity are you most passionate about when following your calling?

Practice: What practice do you plan to follow to improve your craft?


Task decisions: How much freedom do you look for in how to execute your tasks?

Priority decisions: How much freedom do you look for in setting your priorities?

Strategy decisions: How much freedom do you look for in determining project/company strategy?

Learning decisions: How much freedom do you look for in defining your own learning path?

Project allocation decision: How much freedom do you look for in picking and choosing what projects to work on?


Velocity is where theory and practice clash very hard. Your numbers may look great but your predictability is worthless – or vice versa.

When you start with any agile practice, velocity seems like the easiest, most magical part. You keep count of your number of tickets to make forecasts about when more will be done. And it works! It's baffling how easy forecasting is with this method.

Until it isn't. Unplanned work appears out of nowhere, the team has to handle lots of interruptions, a surge in bugs hits after a release – any number of circumstances can make forecasting by ticket count unreliable.

A few years ago I worked with a team on a mobile app generator. They had a stable velocity and were quite good at forecasting until the product hit a wider audience. Suddenly a lot of requests from the fulfillment team caused interruptions and because those were hard to predict they wreaked havoc with our planning. Focusing on counting backlog items and then moving to estimation of those fulfillment requests helped keep our velocity stable – after all, we were still doing the same amount of work, why wouldn't it be? – but our commitments and forecasts went down the drain.

More recently one of my teams faced the challenge of building a disruptive product from a very unclear product vision with lots of uncertainties and changing requirements. We ended up having to redo a lot of stories that we already considered done. What to do in that case? We tried reestimating stories, reopening them with the same estimation, creating new stories instead, but we still couldn't get to a confident forecast.

Both teams struggled with gross vs net velocity.

It's an extension of the concepts of planned and unplanned work. Following the book “The Phoenix Project”, there are actually four kinds of work, but still only one is planned. Gross velocity encompasses all kinds of work and at least hints at how the team is doing in terms of flow, skill development and Muda (wasteful activities). But forecasting only concerns itself with how quickly a plan can be realized. So only planned work should be factored in.

The app generator team tried to keep their gross velocity stable by including unplanned work, estimation and all, and when they realized their forecasts didn't match velocity, they even experimented with buffers. But even those didn't help, because the interruptions were too unpredictable. When we started analyzing the split between planned and unplanned work, we finally had a breakthrough that helped us arrive at a more predictive, if less stable, value: net velocity.

The disruptive team was in a different but similar situation: they tried lots of ways to make the numbers fit their preconception that they were doing a lot of good work, so their velocity should be high and stable. With this in mind, they tried to factor in everything they did, but never hit any sprint goals. They always took in too much, based on their velocity. Similarly to the first example, even leaving a buffer didn't help much.

So again, I turned to net velocity and excluded all bugs and external requests from the metric, but that still wasn't enough to notably improve our forecasts. Only when we also excluded rework and stories we had missed during planning did we arrive at a useful number for meaningful forecasts.
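In tracker terms, net velocity is just a filter in front of the sum. A sketch with hypothetical ticket fields; adapt the category names to whatever your tracker exports:

```python
# Work categories we learned to exclude from net velocity.
EXCLUDED = {"bug", "external_request", "rework", "missed_in_planning"}

def net_velocity(tickets):
    """Sum story points for planned work only.
    Tickets without a category are assumed to be planned."""
    return sum(
        t["points"]
        for t in tickets
        if t.get("category", "planned") not in EXCLUDED
    )
```

The interesting part isn't the code, it's agreeing as a team on which categories belong in `EXCLUDED` and tagging tickets honestly.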

While gross velocity has its uses, it's really net velocity you want to look at when you are trying to make reliable forecasts. To predict how quickly your team can work a plan, focus only on how quickly planned work was delivered in the past. Anything else will mislead you and require you to artificially pad your numbers. And it still won't get you good results.

Net velocity is no silver bullet, but I find it does the job better than any other metric.


Absolutely nothing. Ha! Everybody wants it and the power that comes with it, but when you use it, you lose it.

Rank, that's power invested in you by a role or title. 'Pulling rank' means invoking the power differential between yourself and the person you're pulling rank on, to make them do something you can't get them to do otherwise. But that's what roles and titles are for, right? That's why we hand them to people.

Here's the problem: When you actually do this, when you pull rank on someone and force them to do something against their will, you create resistance and resentment. That will make it harder to get them to do anything next time. Pretty soon you'll find yourself needing to pull rank to get them to even comply with reasonable requests. Soon after they won't even do things for you that they enjoy without you pulling rank. It's a downward spiral that can only end in disaster: disciplinary action, resignation, firing.

And it gets worse: With rank, even simple statements like “I like this idea”, can be misconstrued as “I want you to implement this idea”, reinforced by the rank, not the person, uttering it. So you may pull rank without even knowing or noticing it.

When you find yourself wanting to pull rank, invoking your seniority or ownership of something to make someone follow your plan, stop yourself. Explore instead what keeps them from following your advice in the first place. If you really are more senior, you should be able to argue your point successfully without intentionally invoking rank. If you can't, maybe your idea isn't actually that good, and you should at least consider their position thoroughly before discarding it.

When you find that something you said was accidentally enforced by your rank, make amends. Go the extra mile to clarify what people should do when you state an opinion, and how they can tell it apart from an actual order.

My teasing statement at the beginning isn't true, of course. Rank isn't useless. There are situations where you need to exert power, give a clear order, set a rule or invoke disciplinary action. But today, in most industries, we rely heavily on people's creativity. Overriding their ideas and plans with yours simply because you outrank them kills creativity. So be very, very careful with the power invested in you by your rank.