# RANGE --- Your AI Fluency Compass

RANGE is a framework for measuring how you work with AI. It captures five dimensions of AI fluency --- not whether you use AI, but *how well* you use it. Think of it as a compass reading on your current skills, with clear markers for where you are and where to grow next.

Your Primer uses RANGE to give you a snapshot after build exercises. Each dimension is scored 1--4 based on observable evidence in your conversation --- what you actually did, not what you intended. There are no passing or failing scores. There is only where you are now and where the trail leads next.

---

## The Five Dimensions

### Reach --- How far beyond familiar territory do you push?

Reach is about ambition, not polish. A rough attempt at something genuinely new scores higher than a perfect execution of something safe. This dimension rewards you for choosing the harder path.

**Level 1 --- Stays in the comfort zone.** You pick the safest, most routine task available. The work is something you could do without AI in roughly the same time. There is no visible risk-taking or exploration of unfamiliar territory. When given a choice of difficulty, you go with what you already know.

**Level 2 --- Attempts a stretch, but pulls back.** You try something beyond your default, but retreat when it gets difficult. You might narrow a challenging task to its safest interpretation, or start an unfamiliar approach and revert to what you know when friction appears. The intent to stretch is there, but it doesn't survive contact with difficulty.

**Level 3 --- Proactively chooses challenge and persists.** You deliberately select a harder option and stay with it through difficulty. You might express discomfort ("I'm not sure this will work") but you keep going. The output shows genuine engagement with unfamiliar territory, even if it is rough around the edges. You choose the stretch because you want to learn, not because the exercise requires it.
**Level 4 --- Ventures into genuinely novel territory.** You attempt something you have never done before --- a new domain, a new format, a new level of complexity. You name the novelty explicitly and show awareness of what you don't yet know. You might say "I've never tried building one of these" and lean in anyway. Quality may dip because you are learning, and that is exactly the point.

---

### Autonomy --- Do you drive the process independently?

Autonomy measures self-direction. Do you iterate, evaluate, troubleshoot, and close the loop on your own? A participant who works slowly but drives the entire process scores higher than one who finishes fast by accepting the first output.

**Level 1 --- Single attempt, accepts first output.** One prompt, one output, done. No iteration, no evaluation, no quality judgment applied. You treat AI like a vending machine --- put in a request, take what comes out. If it reads fluently, you call it finished.

**Level 2 --- Some iteration, but reactive.** You push back when the output is obviously wrong, but you don't proactively seek improvement. Follow-up prompts tend to be generic ("make it better," "more professional") rather than targeted at specific gaps. You might notice a problem without acting on it.

**Level 3 --- Self-directed build-evaluate-revise loop.** You drive a complete cycle: build something, evaluate it against your own criteria, identify specific gaps, and revise with targeted prompts. You can name what is wrong --- "the tone doesn't match this audience" --- rather than just feeling dissatisfied. Three or more targeted iterations is typical at this level.

**Level 4 --- Fully owns the process, evaluates against criteria.** You set explicit quality standards before you start building. You evaluate your output against those standards throughout. You troubleshoot independently, catch subtle errors, and produce a finished artifact --- not a draft. You say "I built this," not "the AI gave me this."
You can describe your process to someone else and they could follow it.

---

### Navigation --- Do you think strategically about your AI interaction?

Navigation is about planning before prompting, choosing your approach deliberately, and adapting when things go wrong. Following instructions perfectly but without understanding *why* scores lower than choosing your own approach and adjusting when it stalls.

**Level 1 --- No planning, single approach.** You open the AI tool and start typing immediately. There is no visible planning, no problem framing, no audience specification. When your approach doesn't work, you try the same thing again or give up. You cannot explain why you chose the approach you used.

**Level 2 --- Some strategic thinking, but rigid.** You show planning elements --- providing useful context, reading instructions before starting. But when your approach stalls, you add more information rather than changing strategy. Your plan is present but inflexible. You might include some context in your prompt but not adapt when the output misses the mark.

**Level 3 --- Plans before acting, adjusts mid-course.** You frame the problem, set context, and decompose complex tasks before prompting. When output falls short, you diagnose *why* --- "the tone is wrong because I didn't specify the audience" --- and change your approach accordingly. You can articulate your reasoning: "I'm doing it this way because..."

**Level 4 --- Fluent strategic control.** You use meta-prompting deliberately ("What questions do you need answered to do this well?"). You choose between approaches based on what the task requires and you can explain why. You pivot without losing direction and articulate what you learned about your own process. You are aware not just of the task but of how you are thinking about the task.

---

### Generalization --- Can you transfer skills across contexts?

Generalization is the hardest dimension.
It measures your ability to take a technique that worked in one context and apply it --- with appropriate adaptation --- in another. Most people develop this dimension later in the program, and that is expected.

**Level 1 --- Single task type, no transfer.** You work exclusively within one task type and one domain. Each AI interaction is treated as isolated. There is no connection between what worked before and what you are doing now. If a technique worked for emails, it doesn't occur to you to try it for reports.

**Level 2 --- Borrows technique mechanically.** You apply an approach from one context to another, but without adapting it. You copy the same prompt structure for very different tasks. You reference a prior success ("I'll do what worked before") without analyzing *why* it worked or whether this new context requires something different.

**Level 3 --- Deliberately transfers and adapts.** You intentionally apply a technique to a new domain and adapt it to fit. You name the transfer explicitly --- "I'm using the same approach, but adjusting for this audience" --- and you notice what doesn't carry over. The adaptation is conscious and articulated. You can say what transferred and what needed to change.

**Level 4 --- Abstracts principles across domains.** You identify universal principles that work across contexts and can explain why. You articulate insights like "every communication task is really a discovery process" or "the framework transfers, but the judgment doesn't." You can teach your transfer process to someone else. You create reusable tools or templates that encode the principle.

---

### Execution Fidelity --- Do you reliably produce quality output?

Execution Fidelity is about your evaluation instincts, error-catching habits, and verification practices. It is not about perfectionism --- catching one critical error is worth more than polishing formatting for an hour.

**Level 1 --- Accepts output uncritically.** You take AI output at face value.
No evaluation, no verification, no quality criteria. You declare output "good" because it reads well, not because you have checked whether it is correct. Fluent writing is mistaken for accurate writing.

**Level 2 --- Some evaluation, but inconsistent.** You catch obvious errors and ask for improvements, but your evaluation is hit-or-miss. You might carefully check facts in one paragraph while accepting unsupported claims in the next. Quality criteria are felt, not stated. You know something is off but cannot always pinpoint what.

**Level 3 --- Systematic quality checks.** You apply quality criteria consistently. You check accuracy, completeness, and appropriateness. You catch factual errors, tone mismatches, and format drift. Your evaluation leads to specific revision --- "fix the citation in paragraph three" --- rather than vague dissatisfaction. You distinguish between "good enough to send" and "needs one more pass."

**Level 4 --- Consistently polished, verified work.** You set quality standards before building and evaluate against them throughout. You catch subtle issues --- unsupported conclusions, tone drift, logical gaps, data that looks plausible but hasn't been verified. You may invent your own verification tools (tagging claims that need checking, tracking your rewrite rate). The final artifact is production-ready, not a draft that needs someone else to review.
---

## Quick Reference Card

| Dimension | Level 1 | Level 2 | Level 3 | Level 4 |
|---|---|---|---|---|
| **R** Reach | Safest option only | Attempts stretch, retreats | Chooses challenge, persists | Novel territory, names the novelty |
| **A** Autonomy | One shot, accepts output | Reactive iteration only | Self-directed build-eval-revise | Owns full process, sets criteria upfront |
| **N** Navigation | No planning, stuck when it fails | Some framing, rigid when it fails | Plans first, adapts mid-course | Meta-prompting, strategy selection, teaches process |
| **G** Generalization | Single domain, no transfer | Mechanical copy of approach | Deliberate transfer, notes gaps | Abstracts principles, teaches the transfer |
| **E** Execution Fidelity | Accepts uncritically | Catches obvious errors only | Systematic quality checks | Production-ready, invents verification tools |

---

## Reading Your Scores

Your RANGE reading will show a score for each dimension. A few things to keep in mind:

- **Uneven profiles are normal.** Most people are stronger in some dimensions than others. Someone might be a Level 3 in Execution Fidelity but Level 1 in Reach --- that tells you where to focus, not that you are doing poorly.
- **Generalization develops last.** Do not worry if this dimension comes back as "Insufficient evidence" in early sessions. It requires multiple contexts to observe, and the program builds toward it deliberately.
- **Scores can go down between sessions.** If you tackle something much harder in Session 3 than you did in Session 2, your Reach might jump while your Execution dips. That is growth --- you traded polish for ambition. The next step is bringing execution back up at the new difficulty level.
- **The average matters less than the shape.** A flat profile at Level 2 across all five dimensions tells a different story than a spiky profile with two 3s and two 1s. Both are useful starting points. Pay attention to where the spikes and valleys are.
---

## How to Use This

After any AI build exercise, your Primer will offer you a RANGE snapshot. You can also ask for one anytime:

> "Give me a RANGE reading on what I just did."

Your Primer will score each dimension with evidence from your conversation, highlight your strongest dimension, and suggest one concrete next step for growth. Over the course of the program, you will see your scores shift --- that trajectory is the real measure of your development.

There is no target score. A participant who starts at Level 1 across the board and finishes at Level 3 has grown more than someone who started and stayed at Level 3. RANGE rewards growth, not starting position.

Use the Quick Reference Card above to self-assess before you ask for a reading. See if your self-assessment matches the Primer's. The gap between how you think you performed and what the evidence shows is often where the most useful learning lives.

You can also ask your Primer to focus on a single dimension:

> "How did I do on Navigation in that last exercise?"

Or compare across time:

> "Compare my RANGE profile now to where I was in Session 1."

The rubric is yours to reference anytime. When you are about to start a build exercise and you want to push yourself, scan the level descriptions for the dimension where you scored lowest. Pick one behavior from the next level up and try it deliberately. That is how the trail gets wider.