“I would like to measure their expertise before and after the training session. When you do the survey, use a five-point scale from Novice to Expert…”
Those were the orders, and the training team did what was asked. All of the respondents to the survey replied that after the session they considered themselves “experts.” It was all an understandable, but undeniable, lie.
This lie was not a sinister act. It was one of self-preservation. After all, if your organization invests money and resources to send you to a class to become an “expert,” there is an expectation that an expertise transformation will occur, even when it's not realistic.
The “Expert Myth” may exist in other fields, but it is most prevalent in the Information Technology industry. Training vendors claim to create expertise in their students. Bootcamps turn out scores of ethical hackers, administrators, and security analysts after a simple investment of three to five days, hands-on labs, and “deep dive” content.
The unfortunate truth is that expertise is not so easily obtained. At best, the effectiveness of bootcamps, instructor-led courses, and virtual instruction depends on the experience students bring with them, how much information they understand and retain, and, more importantly, what they do AFTER the course.
Studies have shown that regardless of the level of instruction, if learned skills are not put into practical application, they will fade and atrophy, often in as little as two weeks.
Expertise cannot be taught. Expertise and competency can be acquired only through actually doing the work, and doing it often. This includes making mistakes, adjusting and adapting. The aforementioned courses, classes, and bootcamps are not worthless, they just need to be framed realistically as a foundation on which to build.
The “Expert Myth” persists because many organizations believe that a single, focused instructional period can override centuries of understanding about how humans learn and build competency. One reason for this perception is a mindset called “checkbox enablement.”
When a leader or organization recognizes the need for training, it should be considered in the context of an end state or outcome. For example, the expectation of sending someone to a CylancePROTECT® class might be to expand the ability of a security team to better administer their prevention-based environment. The correct, realistic outcome is that the attendee will return from the training event with the knowledge necessary to build their expertise by working with the product or solution. Or perhaps they already had some knowledge and needed extra instruction to enhance their acquired expertise and level up.
Unfortunately, leadership and even internal education teams often treat training as a commodity. Instead of measuring the more complex variable of competency and tracking progress after a class, the easier metric is taken: “did they attend?” Expert ability is assumed based on just “being there,” and a box is checked off.
The risks inherent in this approach are obvious. For the organization, skillsets are potentially overestimated, and the chance of error and related incidents is increased. For the individual, the pressure of being considered an expert without experience or developed competency can impair performance and increase the chance of negative outcomes.
The concepts of both the Expert Myth and checkbox enablement also make assumptions about a vendor’s training curriculum. Best practices and content based on customer experience are often built into training courses, yet the deployment of any given vendor solution is different, if not unique, depending on factors like industry, regulations, and geographic considerations.
Unless it is customized, any given training class covers how the vendor suggests an organization should run the solution. The reality of how the solution is actually installed, configured, and deployed is often very different. This is why it is not enough to simply send someone to training and hope for the best. There has to be a consideration of what is to be accomplished. There has to be a plan.
An effective training plan is focused on the individual and combines formal in-person or virtual training with resources to reinforce key concepts. The plan also should include mentoring, shadowing, and a series of experience-enhancing goals and milestones that support the outcome desired AFTER training.
Knowledge transfer from more experienced personnel is invaluable. Where such mentors are not available, participation in user groups, support communities, and online forums can help build competency alongside actual experience. All of this should be grounded in doing the work, in the real environment, applying learned skills and building genuine expertise outside the classroom.
Training classes are important, but what happens afterwards is critical. Checkbox enablement and the perpetuation of the Expert Myth offer no value and impede professional development and achievement.
Individual, outcome-based training plans that build on foundational knowledge and align with objectives maximize an investment in education. This requires more effort and time from both the individual and the organization, but the return and benefits are worth it.