Career Readiness Assessment Framework
A structured methodology for evaluating engineering talent
Learn with O.J. | learnwithoj.com
Overview
This framework codifies 20+ years of experiential pattern recognition into a repeatable, defensible methodology for evaluating software and infrastructure engineers. It was developed from real hiring decisions, coaching engagements, and the patterns that emerged from getting burned by bad hires and working alongside great ones.
The framework operates in six stages, progressing from low-cost screening to high-investment conversation. Each stage has specific signals to look for, and candidates can be filtered out at any stage without wasting time on later ones.
What This Framework Deliberately Ignores
Before diving into methodology, it is important to name what this framework does not use as evaluation criteria:
- Pedigree: School name, bootcamp brand, or where someone interned
- Leetcode performance: Algorithm puzzle solving in isolation
- Years of experience as a proxy for skill: Tenure does not equal growth
- Cultural fit as a vague feeling: If it cannot be articulated, it is not a signal
These are excluded because they introduce bias without improving prediction accuracy. The framework focuses on observable signals that correlate with actual engineering capability and career readiness.
Stage 1: Resume Signal Analysis
The resume is not read for content first. It is read for structure, organization, and what the candidate chose to emphasize. These meta-signals often reveal more than the actual bullet points.
Signals
| Signal | Indicates | Weight |
|---|---|---|
| Clean hierarchy and consistent formatting | Attention to detail, communication skill | Moderate |
| Bullets lead with impact, not tasks | Understands value delivery | Strong |
| Quantified outcomes (reduced by X%, improved Y) | Results-oriented thinking | Strong |
| Skills section is curated, not exhaustive | Self-awareness about actual strengths | Moderate |
| Job progression tells a coherent story | Intentional career management | Moderate |
| Resume length matches experience level | Knows their audience | Light |
| Buzzword density is low | Confidence in actual skills | Moderate |
| Title lags behind demonstrated output | Potential coaching client (title gap) | Strong |
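The weight column lends itself to a rough composite score. A minimal sketch, assuming numeric values for the qualitative weights (the numbers and signal keys below are illustrative assumptions; the framework itself does not prescribe them):

```python
# Illustrative numeric weights for the Stage 1 signals.
# The framework uses qualitative weights (Light/Moderate/Strong);
# mapping them to 1/2/3 is an assumption for this sketch.
WEIGHTS = {
    "clean_hierarchy": 2,        # Moderate
    "impact_led_bullets": 3,     # Strong
    "quantified_outcomes": 3,    # Strong
    "curated_skills": 2,         # Moderate
    "coherent_progression": 2,   # Moderate
    "length_matches_level": 1,   # Light
    "low_buzzword_density": 2,   # Moderate
    "title_gap": 3,              # Strong
}

def resume_signal_score(observed: set[str]) -> float:
    """Return the fraction of total available weight this resume earned."""
    earned = sum(w for name, w in WEIGHTS.items() if name in observed)
    return earned / sum(WEIGHTS.values())
```

A resume showing only impact-led bullets and quantified outcomes would score 6/18; the point of the sketch is calibration over time (see Measurement Opportunities), not a pass/fail threshold.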
Red Flags
- Every job is described identically ("responsible for...")
- Skills list includes everything they have ever touched
- No quantified outcomes anywhere
- Resume reads like a job description, not an achievement record
- Formatting is inconsistent or sloppy in ways that suggest no one reviewed it
Stage 2: LinkedIn Profile Cross-Reference
The LinkedIn profile is compared against the resume for consistency and additional signals. Discrepancies between the two are informative.
Signals
- Headline positioning: Is it a job title or a value statement?
- About section: Does it tell a story or list skills?
- Activity: Are they posting, commenting, or silent?
- Recommendations: Quality and specificity of endorsements
- Resume-to-LinkedIn delta: Do the stories match? Are there roles on one but not the other?
Personality Markers (Positive-Only)
Personality markers are treated as mild positive signals only. Their presence is a small boost. Their absence means nothing. They cannot rescue a candidate who fails on technical signals.
- Humor or personality in their About section
- Side projects that show genuine curiosity
- Community involvement (meetups, open source, mentoring)
- Writing that shows they can explain technical concepts clearly
Stage 3: Code Authenticity Check
For candidates with public code (GitHub, GitLab, personal sites), this stage assesses whether the code is genuinely theirs and what it reveals about their engineering maturity.
The Diff Method
Instead of looking at finished repos, look at the commit history. The diff tells the real story:
- Are commits granular and well-messaged, or giant dumps?
- Is there evidence of refactoring (going back to improve earlier work)?
- Do they handle edge cases, or only the happy path?
- Is there test code? Even minimal testing shows engineering discipline.
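The first two checks can be partially mechanized once commit data is extracted (e.g. from `git log --numstat`). A sketch, assuming commits are already available as (message, lines-changed) pairs; the 500-line "dump" threshold and the vague-message list are assumptions:

```python
def commit_granularity(commits: list[tuple[str, int]],
                       dump_threshold: int = 500) -> dict:
    """Summarize commit hygiene from (message, lines_changed) pairs.

    A 'giant dump' is any single commit touching more lines than
    dump_threshold (an assumed cutoff). A 'well-messaged' commit has a
    message longer than one word and not in a small vague-word list.
    """
    dumps = sum(1 for _, lines in commits if lines > dump_threshold)
    vague = {"wip", "fix", "update", "changes"}
    messaged = sum(
        1 for msg, _ in commits
        if len(msg.split()) > 1 and msg.lower().strip() not in vague
    )
    return {
        "total": len(commits),
        "giant_dumps": dumps,
        "well_messaged": messaged,
    }
```

This only accelerates the scan; refactoring evidence, edge-case handling, and test discipline still require reading the diffs.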
Signal Spectrum
| Signal | Level | Meaning |
|---|---|---|
| Repo is a fork of a tutorial with no modifications | Tutorial match | No useful signal |
| Repo matches tutorial structure but has some modifications | Modified tutorial | Mild positive |
| Repo has original structure with tutorial-level patterns | Learning project | Moderate positive |
| Repo has original code with evidence of problem-solving | Original code | Strong positive |
| Even one modified function or creative deviation | Self-directed | Sends them to the top of the pile |
What This Stage Cannot Assess
- Quality of code written in private repos (most professional work)
- System design thinking (repos rarely show architecture decisions)
- Collaboration skills (single-author repos tell one story)
Stage 4: Tenure Pattern Analysis
How someone moves between jobs, how long they stay, and what changes between roles reveal career intentionality and growth patterns.
Patterns
| Pattern | Description | Implication |
|---|---|---|
| Steady climber | 2-4 years per role, clear upward trajectory | Strong career management |
| Lateral mover | Similar level across moves, different domains | Building breadth, may need help articulating growth |
| Long tenure, no movement | 5+ years in one role, same title | Could be deep expert or could be coasting |
| Job hopper with growth | Short stints but clear skill/title progression | Intentional and impatient (not necessarily bad) |
| Job hopper without growth | Short stints, lateral or unclear progression | Red flag for coaching readiness |
| Resume-driven developer | Moves to collect resume lines, not depth | Appears experienced but shallow |
| Cruiser | Stays comfortable, avoids challenge | Low coachability signal |
The Resume-Driven Developer
This pattern deserves special attention because it is hard to spot on paper. The resume looks impressive — diverse experience, recognizable companies, broad skill set. But the depth is missing. They joined, did the minimum to claim the experience, and moved on. The signal is breadth without depth in any single area.
The Cruiser
Stays in the same role or type of role for years without visible growth. Not the same as the deep expert (who grows within a domain). The cruiser avoids stretch assignments, doesn't take on new challenges, and their skills plateau. Low coachability because they are comfortable.
Stage 5: Pre-Conversation Preparation
Before any live conversation (coaching intake, interview, evaluation), prepare targeted questions based on Stages 1-4 findings. The goal is to confirm or challenge the hypothesis formed from the written record.
Question Strategy
- Ask about specific projects mentioned on the resume — if they can't discuss them in depth, the resume is inflated
- Ask about decisions, not just outcomes — "Why did you choose X?" reveals thinking, not just doing
- Ask about failures — how they frame setbacks reveals self-awareness and growth mindset
- Ask about their relationship with their current/recent manager — reveals organizational awareness
Stage 6: Progressive Simplification (Live Conversation)
This is the signature evaluation technique. When a candidate claims expertise in an area, start with an architecture-level question and progressively simplify until you find their actual level.
Example: Candidate Claims Kubernetes Expertise
1. "How would you design a multi-cluster deployment strategy for a global application?" (Architecture)
2. "Walk me through how you'd debug a pod that's in CrashLoopBackOff." (Operations)
3. "What's the difference between a Deployment and a StatefulSet?" (Concepts)
4. "Can you describe what a pod is?" (Fundamentals)
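The ladder above can be modeled as an ordered list of (level, question) pairs, walked top-down until the candidate succeeds. A sketch; the `passed` judgment is supplied by the evaluator and the question strings are taken from the example:

```python
# Ordered hardest-first, mirroring the Kubernetes example ladder.
KUBERNETES_LADDER = [
    ("architecture", "How would you design a multi-cluster deployment "
                     "strategy for a global application?"),
    ("operations", "Walk me through how you'd debug a pod that's in "
                   "CrashLoopBackOff."),
    ("concepts", "What's the difference between a Deployment and a "
                 "StatefulSet?"),
    ("fundamentals", "Can you describe what a pod is?"),
]

def find_level(ladder, passed):
    """Walk the ladder top-down; return the first level the candidate
    handles, or None if they fail even the fundamentals."""
    for level, question in ladder:
        if passed(question):
            return level
    return None
```

The returned level is the first input to the candidate profile; `None` on a claimed-expertise topic is the clear, defensible failure described above.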
Why This Works
Progressive simplification is generous by design. No candidate can claim they weren't given a fair shot. The evaluator kept trying to find a level where the candidate could succeed. If the candidate fails four progressively easier questions on a topic they claim expertise in, that is a clear and defensible assessment, not a single trick question that caught them off guard.
What Matters in the Conversation
- Where they land on the simplification ladder (their actual level)
- How they handle not knowing something (deflect, admit, or fake it)
- Whether they ask clarifying questions (engineering instinct)
- How they explain trade-offs (mature engineering thinking)
Scoring and Assessment
The framework does not produce a single numeric score. It produces a profile:
- Technical depth: Where they actually are vs. where their resume says they are
- Career intentionality: Are they managing their career or drifting?
- Self-awareness: Do they know their gaps?
- Coachability: Are they open to feedback and growth?
- Readiness level: Ready for senior? Ready for staff? Ready for coaching?
Measurement Opportunities
Some aspects of this framework can be made quantitative over time:
- Resume signal scores can be calibrated against outcomes (did the candidate who scored well on Stage 1 actually perform well?)
- Tenure pattern classification can be automated
- Progressive simplification results can be tracked to see which starting levels predict success
- Accept/reject reason tags from Harvest feed back into which signals matter most
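As a sketch of the tenure-pattern automation mentioned above, the Stage 4 patterns could be classified from role history. The thresholds mirror the Stage 4 table; the ordinal level encoding and the exact cutoffs are assumptions:

```python
def classify_tenure(roles: list[tuple[float, int]]) -> str:
    """Classify a tenure pattern from (years_in_role, level) history,
    oldest role first. Levels are ordinal (e.g. 1=junior ... 5=staff).
    Cutoffs are assumptions: <2 years is a short stint, 2-4 years per
    role matches the steady climber, 5+ years in a single role is
    long tenure.
    """
    years = [y for y, _ in roles]
    levels = [lvl for _, lvl in roles]
    grew = len(levels) > 1 and levels[-1] > levels[0]
    if len(roles) == 1 and years[0] >= 5:
        return "long tenure, no movement"
    if all(y < 2 for y in years):
        return "job hopper with growth" if grew else "job hopper without growth"
    if grew and all(2 <= y <= 4 for y in years):
        return "steady climber"
    if not grew:
        return "lateral mover"
    return "mixed pattern"
```

A classifier like this only labels the shape of the history; distinguishing the deep expert from the cruiser, or the lateral mover from the resume-driven developer, still requires the conversation stages.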
The data point that sells this as a service: if O.J.'s candidate assessments predict interview pass rates or job performance more accurately than a recruiter's initial screening, that is a measurable and sellable delta.
What Is Not Measurable Yet
Some of the most valuable aspects of this framework are currently subjective: reading tenure patterns, interpreting resume organization choices, and preparing interview questions that give candidates a fair shot. These may remain qualitative and that is acceptable. Not everything needs a number to be rigorous. The goal is to measure what can be measured and to be honest about what relies on judgment.
Open Questions
- How should the framework adapt for different engineering specializations? An SRE candidate and a frontend developer have different expected signals. Should there be sub-frameworks or configurable weights?
- At what point does a candidate shift from "needs coaching" to "not ready for coaching"? Is there a minimum baseline of experience or self-awareness below which coaching is not effective?
- How should the framework handle candidates who are strong technically but weak at self-presentation? This is arguably the ideal coaching client, but they score low on Stages 1 and 2. The scoring needs to account for this.
- Can Stage 3 (Code Authenticity) be partially automated? A tool that diffs a candidate's GitHub repos against known tutorial code could accelerate this stage significantly.
- Should the framework include a "coachability" signal? Some candidates are open to feedback and growth. Others are defensive or fixed in their self-perception. This matters enormously for coaching engagements but is hard to assess pre-conversation.
- How does this framework interact with the Harvest intake form? Should the intake form be redesigned to capture data that maps to framework stages?