# Expedition 3 — Field Exercise

*After Session 3 | ~30 minutes core + 10-15 minutes extension*

---

You've proven you can produce AI-assisted work. You've built prompts with RCCE, evaluated your own output, and transferred techniques across domains. This expedition asks you to do something harder: take that work into the real world and account for it responsibly.

Four core steps, two optional extension steps.

---

## CORE

### Step 1 — Deploy (10 min)

Tell me about your Session 3 Field Exercise deliverable. What did you make? Did you deploy it — use it in real work, send it to someone, or submit it somewhere?

If you deployed it, walk through what happened. What was the reaction? What feedback did you get? What surprised you — about the output, about the response, or about your own feelings when you sent it?

If you haven't deployed it yet, either do it now (if you can) or describe the plan: who will see it, when, and what you expect. A real deployment is far more useful than a hypothetical, but a clear plan works.

Don't rush this step. The deployment experience is the raw material for everything that follows.

---

### Step 2 — Ethics Audit (8 min)

Rangers leave no trace. They pack out what they pack in, stay on the trail, and clean up after themselves. For AI use, that means being thoughtful about the data you share, the claims you make, and how you present your work.

Audit the deliverable you just described. Work through five lenses, one at a time:

**Lens 1 — Data Appropriateness**
Was the information you put into AI appropriate to share? Were there names, client data, internal financials, or anything sensitive? If you used a free-tier tool, its terms may allow training on your inputs.

**Lens 2 — Factual Verification**
What claims or facts in the deliverable did you actually verify before using it? Were there places where you accepted AI output without checking?

**Lens 3 — Disclosure**
If the recipient asked "Did AI help you make this?" — what would you say?
Is there anything in your organization's norms or your own professional standards about disclosure?

**Lens 4 — Low-Stakes Defaults**
Were there decision points where you had a choice between a more cautious path and a faster one? What did you choose, and why?

**Lens 5 — Environmental Awareness**
Who else is affected by this AI-assisted output? Not just the direct recipient — downstream effects, team members who will act on it, clients who may rely on it.

After all five: Of these areas, where are you most confident? Where do you have a genuine gap?

---

### Step 3 — Ranger Creed (7 min)

Write your Ranger Creed — three personal commitments for responsible AI use. Ground each one in the specific deployment you just walked through. Not aspirational. Not borrowed from a list. Rooted in what you actually did and what you actually noticed.

Requirements:

- Each commitment must be specific enough that you'd know whether you kept it.
- "I will use AI responsibly" does not count. "Before I attach my name to any AI-assisted factual claim, I will verify it in a primary source" does.
- Write them as if you'll stand behind them in front of your cohort — because you will, at Session 4.

---

### Step 4 — Frontier Plan (5 min)

Look further out. Name one genuinely ambitious task you want to attempt with AI in the next 90 days — something you have not tried yet, something that would be a real stretch.

Write a brief plan:

| Element | Your Answer |
|---------|-------------|
| **Task** | What is it? |
| **AI contribution** | What do you think AI could contribute? |
| **Success criteria** | What would success look like? |
| **Unknowns** | What would you need to figure out first? |

This is not a task to execute now. It's a commitment about where you're heading. If your plan sounds like something you could finish this afternoon, push harder — what would actually feel ambitious on a 90-day horizon?

---

**Core is complete.** You are fully prepared for Session 4.
The extension below gives you a head start on Session 4's opening content — two steps, 10-15 minutes.

---

## EXTENSION (optional)

### Step 5 — Tool Landscape Pre-Read (10 min)

Session 4 opens with a rapid survey of the AI tool landscape. You'll get more out of it if you've thought about this beforehand.

Survey these five categories. For each, note what you've used and what you haven't:

1. **General-purpose assistants** — Claude, ChatGPT, Gemini
2. **Specialized writing tools** — Jasper, Notion AI, Grammarly, etc.
3. **Code and technical tools** — GitHub Copilot, Cursor, Claude Code, etc.
4. **Image and media tools** — Midjourney, DALL-E, Suno, Runway, etc.
5. **Workflow automation tools** — Zapier AI, Make, n8n, integrated AI in project management tools

After surveying all five: **Which two categories are most relevant to your work** — either because you already use them and want to go deeper, or because you see the clearest opportunity? Name your top two, with one sentence on why.

---

### Step 6 — Cross-Domain Application (5 min)

Take the prompting technique you found most effective during Sessions 2 or 3 — the one that gave you the most reliable, useful output. Apply it to a task type you have never tried with AI. Different domain, different deliverable.

After running it: What transferred? What didn't? What surprised you?

---

## Artifacts Checklist

**Core (bring all four to Session 4):**

- Deployment Report — what you deployed, what happened, what you'd change
- Ethics Audit notes — which lenses flagged gaps, what actions you named
- Ranger Creed — 3 specific, grounded commitments (written, shareable)
- Frontier Plan — 90-day ambitious task with success criteria and unknowns

**Extension (if completed):**

- Tool Landscape picks — top 2 most relevant categories with rationale
- Cross-domain attempt notes — what transferred, what broke, what surprised you

---

## Trail Partner Sharing

Share your Ranger Creed with your trail partner before Session 4.
You'll discuss it in the Ethics Roundtable at the start of the session. Your partner is counting on having something to respond to.

---

## Feed-Forward

| Artifact | Session 4 Use |
|----------|---------------|
| Ranger Creed | Ethics Roundtable — you'll discuss your commitments in pairs, then as a cohort |
| Frontier Plan | 90-Day Roadmap — becomes your starting point, not a blank slate |
| Tool Landscape picks | Tool Landscape Rapid Review — you'll surface these during the survey |
| Cross-domain attempt | Domain fluency conversation — you'll share what transferred and why |

---

## RANGE Dimensions in This Expedition

- **Autonomy** (primary) — This is the highest-autonomy homework in the program. You drive the deployment, the audit, the commitments. No scaffolding, no guardrails.
- **Execution Fidelity** (primary) — The ethics audit builds responsible quality standards into your AI practice.
- **Navigation** (developing) — The Frontier Plan exercises strategic planning on a 90-day horizon.