Here’s How an Agency Can Make Its Staff Performance Reviews Less Arbitrary

The Media Kitchen created transparency and a level playing field

Two years ago, we weren't able to give everyone at The Media Kitchen a meaningful year-end bonus, which had always been our tradition. I had to decide whether to give a few people a good bonus or give everyone a token amount. I chose to give a few people a good bonus, and I decided that the people who contributed most to revenue would be the beneficiaries. Those individuals were, not surprisingly, the most senior staffers and included the three highest-ranking job titles: group director, associate director and senior strategist. The lowest two levels, strategist and associate strategist, would not get bonuses.

While this thinking is fairly typical when prioritizing staff and often reflects the way bonuses are awarded in most companies, I took it one step further and managed to put my foot in my mouth, which we'll discuss in a second. But that experience led us to become incredibly transparent in how we award bonuses and salaries, review staffers, and even provide feedback.

We went from a process that was fairly opaque to one that gives staff data on their overall performance against 10 criteria that everyone is evaluated on. Now every staffer is fully aware of how they measure up and why.

But let's rewind a bit to when I decided to only give bonuses to the top three levels. At the time, I decided it was important to gather the bottom two levels together and explain my reasoning. I figured they'd eventually learn they weren't going to get a bonus, so I might as well tell them my rationale directly. It's incredible how quickly word spreads in an agency, and it's remarkable how easily millennials talk about how much they're earning. I'm a Gen Xer, and we never told anyone our salaries.

So, in this meeting, I explained that I didn't have enough money to go around, and that I had decided to give a bonus to the people who were most closely connected to driving revenue, the most important people in the agency and the most important people to me, the agency president. Since the bottom two levels were not as close to revenue, they were less important to me and therefore were not going to get a bonus. It's easy to look back on that statement and see where I made a mistake. No one likes being told they are less important than someone else, even if they know they earn less than their colleagues and are less senior.

I went on to explain that not everyone contributes equally in an organization, and that the people who contribute the most are the ones being rewarded. Again, while this may all be true, in the absence of clear performance criteria, which we didn't have, people felt like they didn't know what they needed to do to become one of the individuals "most closely connected to revenue."

The junior staffers wanted to know what they could do, apart from aging, to earn a bonus.

I thought I was being an enlightened leader by bringing people together to have a hard conversation about priorities and money, and I thought I was brave to address conflict head-on. Instead, I made people feel like they weren't important.

This experience encouraged five brave junior staffers to sit me down and ask me what it takes to get ahead at The Media Kitchen. They wanted to know how raises were decided and how star performers were rewarded.

First, I described the process we go through for determining raises. I explained that at the start of every year, we figure out how much money the agency has to generate to deliver its growth targets. The difference between our current revenue and our revenue goal is called our "blue sky," which is achieved either through winning new business or growing existing business (i.e., increasing the fees we're paid by clients).
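
To make the blue-sky math concrete, here's a minimal sketch with made-up figures; none of these numbers are TMK's actuals.

```python
# Hypothetical figures, purely to illustrate the "blue sky" gap described above.
current_revenue = 20_000_000  # revenue expected from existing business
revenue_goal = 23_000_000     # the year's growth target

blue_sky = revenue_goal - current_revenue  # must come from new business or higher fees
print(f"Blue sky to close: ${blue_sky:,.0f}")  # Blue sky to close: $3,000,000
```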

The revenue goal includes expenses and profits. Salaries are the biggest expense for an agency, so the more we pay our staff, the more revenue we have to generate. Given that we operate in a very competitive business climate, there is always pressure to give out smaller raises. Our finance team initially recommends a raise pool, which I can allocate as I see fit—it's my P&L to manage. Typically, I review everyone who is up for a raise in a given year and award raises based on how much it would cost to replace each individual, their reputation and the individual's past performance.

Past performance is described in an annual written qualitative performance assessment prepared by managers with input from the entire team. Inevitably, the raise pool handed down from finance and my raise wish list are very different, and I work with finance to find a middle ground that doesn't put too much pressure on our overall revenue targets and still satisfies the staff.

However, after sitting down with these five staffers, I decided to change our approach. I realized how arbitrary our raise process was, and that it shouldn't rest solely on my discretion. We were very good at giving annual performance reviews and providing ongoing feedback, but it was clear our approach to awarding raises was far too arbitrary and relied too much on my opinion and feelings. Once we decided we had to change our approach, we investigated quite a few methodologies, but none felt right; none would provide actionable, frequent performance feedback while reflecting the agency's values. So we decided to create our own.

The first step was to decide on the criteria we wanted to use to evaluate our staffers. Our staff is a mix of people with no prior job experience—they're entry level—and people who have 25-plus years of experience. While people often work above and below their job titles, every title has its own set of responsibilities, and it's assumed that if you're at a certain level, you know how to perform certain tasks.

We started mapping out each level's responsibilities, but we wound up with a long list that made evaluating each responsibility hard to manage. The process was starting to become very complicated and overwhelming. After a lot of discussion, I kept going back to the idea that some people in an organization contribute more than others to revenue growth, and everyone should aspire to help an organization grow revenue. We then started to unpack that thought even more.

I spent a lot of time thinking about what makes one individual more important than another, and we came up with 10 simple criteria that everyone, myself included, can be evaluated against. Not every criterion is weighted equally; some are more important than others, and some ladder up into others (e.g., you can't build a great client relationship if you can't be trusted to develop great work).

We also realized that while it's important to solicit feedback from everyone, not everyone is equally qualified to judge a person's performance. For instance, junior people do not have enough experience to fully evaluate whether someone is developing effective media plans, which is one of the criteria. As a result, we decided to use the following weights when scoring (a quick sketch of the math follows the list):

  • My score as president and the group director's score count for 50 percent.
  • Associate directors account for 20 percent.
  • Senior strategists account for 15 percent.
  • Strategists account for 10 percent.
  • Associate strategists account for 5 percent.
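
Here's a rough sketch of how those reviewer weights might be applied. The level names, sample scores and the within-level averaging are my illustration for this article, not a spec of the tool we actually built.

```python
# Illustrative only: level keys, sample scores and the within-level averaging
# are assumptions layered on top of the published weights.
REVIEWER_WEIGHTS = {
    "president_and_group_director": 0.50,
    "associate_director": 0.20,
    "senior_strategist": 0.15,
    "strategist": 0.10,
    "associate_strategist": 0.05,
}

def blended_score(scores_by_level):
    """Average each level's scores, then combine the averages using the level weights."""
    weighted_sum, weight_used = 0.0, 0.0
    for level, weight in REVIEWER_WEIGHTS.items():
        scores = scores_by_level.get(level, [])
        if scores:
            weighted_sum += weight * (sum(scores) / len(scores))
            weight_used += weight
    # Renormalize in case a level submitted no reviews.
    return weighted_sum / weight_used if weight_used else 0.0

# Hypothetical 1-10 scores for one staffer on a single criterion.
example = {
    "president_and_group_director": [8, 7],
    "associate_director": [7],
    "senior_strategist": [6, 8],
    "strategist": [7],
    "associate_strategist": [9],
}
print(round(blended_score(example), 2))  # 7.35
```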

We decided that everyone in the company, including myself, would be scored on the same criteria. We wanted everyone to have the same marching orders, and I wanted everyone to understand what I thought was important to running a successful media agency. However, we did not expect everyone to be able to score a 10. Each criterion was scored from 1 to 10, with 10 being the best, and since everyone was ranked against his or her own level, junior people were not penalized for scoring less than a 10 on their weighted score.

The 10 criteria and their weightings were as follows (a sketch of how they roll up into one score follows the list):

  • Departure would put revenue at risk, 20 percent
  • Ability to grow revenue, 20 percent
  • Ability to build strong client relationships, 20 percent
  • Develops smart, effective, creative, sellable media recommendations, 5 percent
  • Enhances TMK's reputation with the ad sales community, the press and the industry, 5 percent
  • Great and nice, 5 percent
  • Collaborates easily, 5 percent
  • Always looking for innovative media solutions and ideas, 5 percent
  • Volunteers for new projects and offers to lead them, 5 percent
  • Exhibits an ambitious work ethic, 10 percent
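
And here is a companion sketch showing how one overall number can be produced from the 10 criteria: multiply each 1-to-10 score by its weight and sum. The shorthand keys and the sample scores are hypothetical, not the wording of our actual form.

```python
# Weights from the list above (they sum to 1.0); criterion keys and sample
# scores are hypothetical shorthand, not the wording of the real questionnaire.
CRITERIA_WEIGHTS = {
    "departure_puts_revenue_at_risk": 0.20,
    "grows_revenue": 0.20,
    "builds_client_relationships": 0.20,
    "sellable_media_recommendations": 0.05,
    "enhances_tmk_reputation": 0.05,
    "great_and_nice": 0.05,
    "collaborates_easily": 0.05,
    "innovative_media_solutions": 0.05,
    "volunteers_and_leads": 0.05,
    "ambitious_work_ethic": 0.10,
}
assert abs(sum(CRITERIA_WEIGHTS.values()) - 1.0) < 1e-9

def overall_score(criterion_scores):
    """Weighted sum of 1-10 criterion scores (each already blended across reviewers)."""
    return sum(CRITERIA_WEIGHTS[name] * score for name, score in criterion_scores.items())

# Hypothetical staffer: strong on the revenue-linked criteria, middling elsewhere.
sample = {
    "departure_puts_revenue_at_risk": 8,
    "grows_revenue": 7,
    "builds_client_relationships": 9,
    "sellable_media_recommendations": 6,
    "enhances_tmk_reputation": 5,
    "great_and_nice": 8,
    "collaborates_easily": 7,
    "innovative_media_solutions": 6,
    "volunteers_and_leads": 5,
    "ambitious_work_ethic": 8,
}
print(round(overall_score(sample), 2))  # 7.45
```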

At this point, it was clear we'd made a lot of assumptions that needed testing. We assumed that a group director should account for 50 percent of an individual's grade, and we assumed that these criteria and weights would help us identify star performers. In order to test our hypothesis, we decided to put our review approach into action, and we conducted test reviews in the fourth quarter of 2015. We called this our benchmarking period. We used Google Forms to create a questionnaire and gathered the data. Following Eric Ries' Lean Startup model, we wanted to test the process in the simplest and least costly way. While not the most grandiose or beautiful, the Google Apps platform was perfect for this.

Everyone spent a couple of hours reviewing their teams, and we immediately had data we could analyze. When we started to review the data and get feedback on the process, we became convinced our weightings were correct and that we were asking the right questions—the star performers were getting high scores. But people needed more explanations and guidance when they were reviewing their teammates. We had explained each criterion in a paragraph, and we had several 90-minute agencywide meetings to describe the process and what we meant by each criterion.

But it wasn't enough.

This was an important lesson: if our process was going to be useful and believable, we had to ensure consistency, and we had to give people boundaries so that everyone graded the same behaviors and responsibilities the same way. We repeated the exercise in the first quarter of this year and decided to expand the criteria and provide more guardrails by unpacking each criterion.

We also heard that people wanted even more context and color around what others thought of their performance, so we included a box to capture open-ended comments on the Google Form. This has proved really helpful because it gives people a chance to provide anonymous feedback to their teammates that isn't included in the scores.