Assessing Developer Assessment
When you're considering a prospective software developer, knowing whether they have the technical skills to thrive in the role is essential. It's not unknown for someone who doesn't actually know how to program to get hired for a programming position, and even if a new hire's shortfall in expertise is less drastic, bringing them up to speed will take precious time and energy. So one or more forms of technical assessment are a routine part of the hiring process.
However, none of these methods is perfect. Here, I'm going to look at each of the typical forms of technical assessment, consider its strengths and weaknesses, and offer my opinion on how those weaknesses can be mitigated.
Tests
The traditional question-and-answer test is one of the most common means of assessment. These are used for many industry certifications, and generally follow a multiple-choice format. Since composing their own tests would be time-consuming and error-prone, most organisations use tests from third-party providers. These assess candidate responses using means that range from the generic (one point for each right answer, nothing for a wrong one) to the adaptive (correct answers lead to harder questions, wrong answers lead to easier ones).
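The adaptive scheme described above can be sketched in a few lines. This is a minimal illustration only, not any real testing product's algorithm; the class, method, and level names are my own invention.

```java
// Illustrative sketch of adaptive question selection: a correct answer
// raises the difficulty of the next question, a wrong answer lowers it.
// The flat scorer (one point per correct answer) is shown for contrast.
public class AdaptiveScorer {
    public static final int MIN_LEVEL = 1;
    public static final int MAX_LEVEL = 5;

    // Difficulty level of the next question, clamped to the valid range.
    public static int nextLevel(int currentLevel, boolean answeredCorrectly) {
        int next = answeredCorrectly ? currentLevel + 1 : currentLevel - 1;
        return Math.max(MIN_LEVEL, Math.min(MAX_LEVEL, next));
    }

    // Generic scoring: one point for each right answer, nothing for a wrong one.
    public static int flatScore(boolean[] answers) {
        int score = 0;
        for (boolean correct : answers) {
            if (correct) score++;
        }
        return score;
    }
}
```

Real adaptive tests use far more sophisticated models, but the basic feedback loop is the same: each response steers the difficulty of what comes next.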
Tests are good for assessing a candidate's residual knowledge - what they know without having to look anything up. The main problem with them is that they deny the candidate an opportunity to express their knowledge about anything that's not on the test. If a candidate has a solid grasp of Java nested classes, but the test they take never covers this, the opportunity to learn this about the candidate is lost. Since most tests are provided by someone else, there's no guarantee that they'll cover topics relevant to your organisation. This is problematic with languages like Java, where many tests still include questions about applets, bitwise operators and low-level networking protocols, which ceased to be relevant to most developers years ago. Unless your organisation actually requires those things, those questions are liable to be answered wrong and make candidates look less competent than they really are.
The last issue I'll mention is a problem often noted about tests and exams in other, non-IT areas. No one operates like that in real life. A developer who doesn't know something will look it up or ask a colleague. If they type code that won't compile or execute, it will be highlighted by their IDE or displayed in an error message, allowing a competent worker to promptly fix the mistake and carry on. Unfortunately, it's common to see questions which focus on minutiae (such as import statements) and force candidates to be 'human compilers'.
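To illustrate the 'human compiler' point, consider a trivial snippet of my own devising. Delete its import lines and javac rejects the file with "cannot find symbol: class List" - a mistake any IDE flags instantly, yet exactly the kind of minutia some test questions expect candidates to track in their heads.

```java
// Remove the two imports below and this file no longer compiles:
// javac reports "cannot find symbol: class List". In real work, the IDE
// highlights the problem (and usually fixes it) before you even notice.
import java.util.ArrayList;
import java.util.List;

public class ImportExample {
    public static List<String> names() {
        List<String> names = new ArrayList<>();
        names.add("Ada");
        names.add("Grace");
        return names;
    }
}
```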
So if you use tests for assessment, what can you do? My primary suggestion would be to choose tests that provide a breakdown of subject areas, which will provide more granular feedback on a candidate's knowledge. If concurrency isn't used in your organisation's projects, a low score here may not matter. Also, ask the candidate whether any questions were confusing or irrelevant.
Assignments
At first glance, these have many advantages over tests. A candidate gets to bring all their knowledge to bear, and they can use any external resources they have. Time pressure is typically kinder than it is in tests, and you get the advantage of seeing how the candidate actually writes code and approaches problems that they're given. So, ideal then?
Not quite. Assignments suffer from artificiality too; it takes a different form than in tests, but it's still present. First, the candidate is operating blind: they have to guess what approach the organisation will favour. Often they're on their own, with no one to consult if they're not sure about a requirement. (Would you give your senior developers an 8-hour task to complete alone, without recourse, and expect it to turn out as desired? Probably not.) Then there's the content of the assignment itself. As with tests, if it doesn't relate to what the candidate would be doing in the real job, its usefulness is limited. It may be nice to see that a candidate can write a tightly optimised sorting algorithm, but if the work they'd do is mostly CRUD, a more fitting assignment would serve better.
When you use assignments, the kindest thing you can do is put aside any fixed ideas of how the problem should be solved. Beyond what's in the problem description, the candidate doesn't know what you want, and is likely to take a different approach from the one they're "supposed" to take. Ideally, a completed assignment should be followed by a discussion where the candidate explains their approach and assumptions.
Technical Interviews
This could be a job interview explicitly billed as a technical interview, or any interview where a candidate is quizzed on their technical knowledge. Here, the interviewer gets to tailor the questions exactly as they wish. The candidate can immediately ask for clarification if required, and a well-crafted question can show how the candidate "thinks on their feet" in response to a challenge.
The problems? First, the candidate doesn't know what they're going to be asked. Languages or tools with a wide scope (such as Java) are nearly impossible to prepare for, and if the position covers multiple technical areas, it's highly likely that at least one question will be hard for the candidate to answer. Again, the candidate doesn't have the resources they'd normally have at hand to answer the question, and they're in a stressful situation (the interview). If the candidate has anxiety problems, this could give a false impression of their actual competence. There's also the issue of the candidate's knowledge not being fully assessed if they only get to talk about the topics chosen by the interviewer.
In my opinion, the best questions for technical interviews are either basic (what anyone in the role should know) or open-ended ones that give the candidate a chance to demonstrate their knowledge. For example, ask which aspects of AngularJS the candidate used to solve a problem and how they applied them, rather than demanding a detailed description of AngularJS's digest cycle.
Whiteboard Presentations
These may be the least popular form of assessment. Ruby on Rails inventor David Heinemeier Hansson tweeted that he would fail to write a bubble sort algorithm on a whiteboard. Expecting a candidate to write error-free code that would perform as expected on a whiteboard is bordering on unfair. In my experience, it's more common for candidates to be asked to draw diagrams on them, typically for prospective web applications. If a candidate's skill set doesn't involve inscribing tidily with a marker, this will become immediately obvious when they step up to the whiteboard, which is liable to make them more anxious.
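For reference, here's roughly what such a whiteboard bubble sort looks like - a textbook sketch of my own, not DHH's. Even this "simple" algorithm has loop bounds and a three-step swap that must all be exactly right, with no compiler or test run to catch a slip.

```java
// Textbook bubble sort: repeatedly swap adjacent out-of-order pairs.
// After each pass, the largest remaining element has "bubbled" to the end,
// so the inner loop can shrink by one each time. The off-by-one details
// in both loop bounds are classic whiteboard traps.
public class BubbleSort {
    public static void sort(int[] a) {
        for (int pass = 0; pass < a.length - 1; pass++) {
            for (int i = 0; i < a.length - 1 - pass; i++) {
                if (a[i] > a[i + 1]) {
                    int tmp = a[i];      // three-step swap of adjacent pair
                    a[i] = a[i + 1];
                    a[i + 1] = tmp;
                }
            }
        }
    }
}
```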
If you do use whiteboards, it's best - arguably essential - to use them for assessing a candidate's thought processes: for example, seeing how they go about solving a programming problem in pseudocode. Also, it may be a minor thing, but it really helps to check beforehand that all the markers by the whiteboard actually work.
Trial Work Environment
I've experienced all of the above methods of developer assessment, but have yet to experience this one. Here, a candidate collaborates on a piece of work with one or more team members who are already part of the organisation. While this is probably the best way of assessing a candidate's merit, as it simulates the working environment more closely than anything else, there are a number of reasons why it isn't seen more often.
First, it's resource-intensive, sometimes prohibitively so. Other team members have to devote themselves to assessing the candidate rather than doing their actual jobs. If the assigned work is a mock project, the opportunity to do real work is lost, and if it's real work (commissioned by a client), the client risks receiving less than optimal deliverables. There's also the murky question of whether it counts as "real" work for pay purposes, as a Wellington cafe found to their cost recently.
Wrapping Up
Whichever means of candidate assessment you choose, it's important to have a realistic expectation of what the results are telling you. They may confirm that the candidate is a skilled performer; on the other hand, they may simply indicate what the candidate knows about a specific set of topics, or how they handle one particular coding situation, without giving a true indication of what they can do. Knowing the limitations of the methods you're using, and having strategies to alleviate them (such as post-assessment discussions), gives you a better chance of obtaining genuinely valuable information.