In essence, the candidate doesn't know what he doesn't know.
This phenomenon results in one of two things happening.
They overestimate their own skill. This can happen because they honestly don't know better, or because the process itself incentivizes them to do so. There is, after all, an obvious implication that if you don't meet a certain threshold you won't be considered for further interviews. Plus, who doesn't want to be thought of as an expert?
They underestimate their skills relative to the REAL job requirements. It's equally damaging when the candidate does manage to accurately assess their body of knowledge but the interviewer hasn't identified what level the position ACTUALLY requires. This usually happens when whoever has hired the phone interviewer makes a blanket statement like:
"We only want to talk to candidates who rate at least 8 out of 10 on the subject of <insert your technology here>."
The recruiter is all too happy to oblige, but how do they assess the candidate to determine that score? Most just parrot the question back, survey-style, leading us right back to our predicament.
I understand why this happens. Frequently it's an entry-level worker who is assigned these kinds of calls in the first place, and that is exactly the person least equipped to assess a candidate's skill level other than by simply asking. Lastly, there is the perceived legal risk of deviating from a standard set of questions across candidates.
The greater risk to both parties is that totally unqualified candidates will make it through the phone interview only to be tossed out later (or worse, NOT be, when they should have been). This costs everyone involved time, effort, and money. For recruiters, it can damage the relationship with the client. It can also get perfectly good candidates thrown out because of miscommunication between the phone interviewer and the candidate.
To help with this problem, I'd like to propose a common scale that can be used to guide the conversation:
0 - Knows nothing about the technology.
1 - Knows the technology exists. It's a black box. End-user-level knowledge.
2 - May know bits and pieces from self-study, but can't really describe how it works and isn't familiar with several of the common acronyms associated with it.
3 - Can perform basic administrative tasks in small environments under direction. Knows the basic acronyms and how they relate to the subject.
4 - Knows the technology as it relates to simple implementations. Can describe the basics of how it works, and knows most acronyms. Has implemented simple setups in a lab environment.
5 - Has enough skill and knowledge to pass an industry certification exam, and enough experience to have implemented the technology in a small business or, for advanced topics, in a lab.
6 - Has experience installing and administering more complex implementations: integrations, use of APIs, large organizations, etc.
7 - Has enough knowledge and experience to architect designs for larger or more complex environments.
8 - Researcher. Repeated implementations. Has a detailed understanding of the technology, to the point that they can easily converse about the nuances of implementation.
9 - If they haven't published by now, they certainly could. Has implemented the technology a number of times in a variety of environments, including large, complex ones.
10 - Industry expert. Known in the field. Likely has published works on the subject. Probably still wouldn't score herself a 10.
This scale should apply to any given technology in broad terms. Additional details could easily be fleshed out to provide technology-specific questions and markers to accurately confirm a candidate's self-assessment.
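To make that concrete, here is a minimal sketch of what "technology-specific questions and markers" attached to the scale might look like in code. The level descriptions follow the scale above; the probe questions and the "DNS" topic are purely hypothetical examples I've invented for illustration, not part of any real screening kit.

```python
# The 0-10 scale as a lookup table (abbreviated descriptions).
SCALE = {
    0: "Knows nothing about the technology.",
    1: "Knows it exists; black box; end-user level.",
    2: "Bits and pieces; can't describe how it works.",
    3: "Basic admin tasks in small environments, under direction.",
    4: "Simple implementations; lab setups.",
    5: "Certification-level; small-business implementations.",
    6: "Complex implementations: integrations, APIs, large orgs.",
    7: "Architects designs for larger/more complex environments.",
    8: "Researcher; deep nuance from repeated implementations.",
    9: "Could publish; many implementations across environments.",
    10: "Industry expert; known in the field.",
}

# Hypothetical technology-specific markers (here, an imaginary DNS screen):
# one confirming question per selected level.
PROBES = {
    3: "What record type maps a hostname to an IPv4 address?",
    5: "Walk me through delegating a subdomain to another team.",
    7: "How would you design DNS for a multi-region deployment?",
}

def probes_for_claim(claimed_level: int) -> list[str]:
    """Return the probe questions at and below the claimed level, so the
    interviewer can confirm (or walk back) the self-assessment."""
    return [q for level, q in sorted(PROBES.items()) if level <= claimed_level]
```

A candidate claiming a 5 would get the level-3 and level-5 probes; a candidate claiming a 2 would get none, since the lowest marker here sits at level 3. The point is only that the scale gives you a shared index to hang those markers on.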
There's a lot of material I left out, mostly because I wanted to keep this (relatively) short but still useful. If there is positive feedback, I'll follow up with some specific technology examples and some ways to incorporate this scale into other methodologies.