Team Skills Assessment

Goals

  • Understand strengths and weaknesses of current team

  • Determine where individuals want to grow strengths and improve weaknesses

  • Determine gaps to fill with additional team members

 

Method

Research

I had previously seen a skills assessment from Home Depot (shown below) in the shape of a circle with “pie pieces” cut out of it for different skills. Team members would then rate themselves on each skill - the smallest piece for “Needs Help”, adding the middle section for “Can Do”, and filling the entire slice for “Can Teach.” A team can then layer each individual’s assessment on top of the others and visually see team strengths and weaknesses.

 

Source: How to Get Promoted in UX by Ryan West

 

There were a few items from this method that I wanted to improve before assessing my own team. For one, I didn’t want to dictate the skills we would assess - I wanted the team to determine them. I also wanted to add additional levels because three didn’t seem like enough granularity to me. The descriptions also seemed relatively short and open to interpretation; I wanted our skills to have full descriptions and be more objective and easier to measure oneself against. Finally, I wanted to adapt the methodology for a virtual team. Due to COVID-19, our team was distributed and we wouldn’t be able to physically stack papers on top of one another.

Skills Brainstorming Session

The next step was to have the team work together to determine what skills we would use for our self-assessment. In order to facilitate this discussion, I set up a Google Jamboard (similar to FigJam or Miro) with the following question written: “What skills make a designer successful on our team?”

Each team member was asked to brainstorm for 5 minutes and add stickies to their own area of the board with their responses. Individual response examples are shown below (names are changed for anonymity).

Then, each person reviewed the stickies they had written with the group and pulled them into a central location. We then used a form of affinity mapping to group similar skills together. Lastly, we determined a single name or title for each grouping of skills. The final affinity map is shown below; group names are lime green stickies.

 
 

Full Skills Descriptions & Rating Scale

After the exercise, I created a collaborative document (Google Docs) with each group/skill name and a rough stab at a short description for each. I asked my team to take the following week to review, add, edit, and comment asynchronously to flesh out each skill description. At the end of the week, I went through and cleaned things up a bit so that formatting and language usage would be consistent across the skills (e.g., so that each descriptor would start with an action verb). Examples of the final skills and descriptions are shown below.

 
 

For the rating scale, I wanted to have 5 levels to allow more granularity and the potential to see individual growth sooner. I did some research on a variety of Likert scales to determine one that best fit a self-assessment of expertise in a skill. I landed on the following scale, courtesy of icombine.net.

  1. Novice

    • Have minimal or textbook knowledge without connecting it to the practice

    • Need close supervision or guidance

    • Have little or no idea of how to deal with complexity

    • Tend to look at actions in isolation

  2. Advanced Beginner

    • Have basic knowledge of key aspects of the practice

    • Straightforward tasks are likely to be done to an acceptable standard

    • Is able to achieve some steps using own judgment, but needs supervision for the overall task

    • Appreciate complex situations, but is only able to achieve partial resolution

    • See actions as a series of steps

  3. Competent

    • Have good working and background knowledge of area of practice

    • Results can be achieved for open tasks, though may lack refinement

    • Able to achieve most tasks using own judgment

    • Cope with complex situations through deliberate analysis and planning

    • See actions at least partly in terms of longer-term goals

  4. Proficient

    • Depth of understanding of discipline and area of practice

    • Fully acceptable standard achieved routinely, results are also achieved for open tasks

    • Able to take full responsibility for own work (and that of others where applicable)

    • Deal with complex situations holistically, confident decision-making

    • See the overall picture and how individual actions fit within it

  5. Expert

    • Authoritative knowledge of discipline and deep tacit understanding across area of practice

    • Excellence achieved with relative ease

    • Able to take responsibility for going beyond existing standards and creating own interpretations

    • Holistic grasp of complex situations, moves between intuitive and analytical approaches with ease

    • See overall picture and alternative approaches, has vision of what may be possible

Self-Assessment Prep

Prior to the team actually assessing themselves on our newly created skills, I wanted to have a central location where everyone could record their answers and compare across team members to determine team-level strengths and weaknesses. In other words, I needed a digital version of stacked pieces of paper. In order to achieve this, I set up a collaborative spreadsheet (Google Sheets) with multiple tabs.

Tab 1 included the skills and their full descriptions as well as the rating scale so that we could avoid hopping between documents. Tab 2 was the team’s collective assessment with the month and year of the planned group session, including average scores for each skill. Tabs 3 - n were for each individual’s responses. I wanted to have a place where each person could go and see their own growth over time. I set up some formulas so that each person would only have to record their responses once and then they would automatically populate on Tab 2.
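The averaging logic on the team tab can be sketched in code. This is a minimal illustration of the idea, not the actual spreadsheet; the names, skills, and scores below are hypothetical:

```python
# Hypothetical individual ratings (1-5), one value per skill, in skill order.
# These names and numbers are illustrative, not the team's actual data.
skills = ["Research", "Interaction Design", "Domain Knowledge"]

ratings = {
    "Alice": [4, 5, 2],
    "Bob":   [3, 4, 2],
    "Cara":  [5, 3, 1],
}

# Team average per skill, mirroring the AVERAGE formulas on the team tab.
averages = {
    skill: round(sum(person[i] for person in ratings.values()) / len(ratings), 2)
    for i, skill in enumerate(skills)
}

print(averages)  # {'Research': 4.0, 'Interaction Design': 4.0, 'Domain Knowledge': 1.67}
```

In the real spreadsheet, a formula on the team tab simply referenced each individual tab, so entering a rating once was enough for it to appear in the team view.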

Individual Tab

Team Tab

In addition to the numerical averages, I wanted to recreate the original method’s visual aspect of spotting strengths and weaknesses by stacking the team’s assessments and analyzing gaps. In order to achieve this, I created a spider graph that would include a line for each person’s individual assessment as well as a thicker line to show the average (see results below for an example of the graph).
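The geometry behind a spider (radar) graph is simple: one evenly spaced axis per skill, with each rating plotted at a distance proportional to its value. As a rough sketch of that idea (the charting itself was done in Google Sheets, not in code), this computes the polygon vertices for one assessment line:

```python
import math

def radar_points(ratings, max_value=5):
    """Convert a list of skill ratings into (x, y) polygon vertices
    for a radar chart with evenly spaced axes, starting at 12 o'clock
    and proceeding clockwise."""
    n = len(ratings)
    points = []
    for i, value in enumerate(ratings):
        angle = math.pi / 2 - 2 * math.pi * i / n  # clockwise from the top
        r = value / max_value                       # normalize to the unit circle
        points.append((round(r * math.cos(angle), 3), round(r * math.sin(angle), 3)))
    return points

# Hypothetical team-average line across four skills.
print(radar_points([4, 3, 5, 2]))
```

Plotting one such polygon per person, plus a thicker one for the averages, produces the stacked view described above.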

Team Self-Assessment Session

The last step was to facilitate a session with the team for each person to assess themselves on each skill on the expertise rating scale from 1 - 5. The assessment itself likely could have been done asynchronously. However, I would have needed to gather each person’s ratings separately and then enter them myself prior to any analysis to avoid bias from seeing others’ ratings ahead of time. Having everyone assess themselves at the same time left little opportunity for “peeking.”

I gave each person approximately 10 minutes to review the skills and descriptions (again) and give themselves their rating for each one. Then we all came together to look at the final spider graph and identify findings (shown below). We noticed that one of our main weaknesses was in the Domain Knowledge category and we had a lot of strength in Information and Interaction Design.

 
 

Repetition

My intention was for the team to complete the assessment together every 6 months. However, time got away from us, as it always does, and I have only been able to repeat the assessment once more, which made it annual rather than semiannual. I plan to continue to use this method to help team members set their individual goals for the year based on skills that they or the team need to improve.

 

Results

Since the first team assessment we completed, we have added three additional team members - one of whom had strengths in areas where the team had previously identified weaknesses. I also received feedback that the assessment helped people focus their annual goals and performance reviews because it clarified a lot about the skills that were valued by the team itself and for promotion.

 

Opportunities for Improvement

After performing the assessment twice with the team, I identified some potential areas that could be improved.

One area was in normalizing the rating scale. It would have been helpful to review the scale in more detail with the team prior to rating. It turned out that some people paid less attention to the rating descriptors and had rated the skills relative to themselves, rather than the scale. In other words, if someone’s best skill was Research, they rated it a 5 even if they weren’t an actual Expert-level researcher.

Another issue with the method is the bias inherent in any self-rating. I would like to try a variation on the method where I ask each person to also rate the other team members on each skill and then take the average for each person rather than their self-assessment alone. I think this may cut down on any artificially high scores. Although this exercise was never intended to be punitive, it’s too easy for someone to rate themselves better at a skill than they actually are (Dunning-Kruger Effect). Anecdotally, I also generally think it’s easier for people to assess others than themselves.

Finally, I think it would be a good idea to revisit the skills and their descriptions every so often to ensure that they still accurately reflect answers to the originally posed question, “What skills make a designer successful on our team?” As the team grows, new team members may have additional or different ideas on what that means, and those should be incorporated into the assessment.