This file contains additional guidance for AI agents and other AI editors.
These principles reduce common LLM coding mistakes. Apply them to every task.
Don't assume. Don't hide confusion. Surface tradeoffs.
- State assumptions explicitly. If uncertain, ask.
- If multiple interpretations exist, present them — don't pick silently.
- If a simpler approach exists, say so. Push back when warranted.
- If something is unclear, stop. Name what's confusing. Ask.
Minimum code that solves the problem. Nothing speculative.
- No features beyond what was asked.
- No abstractions for single-use code.
- No "flexibility" or "configurability" that wasn't requested.
- No error handling for impossible scenarios.
- No defensive use of `hasattr` or `getattr`; reason about the types instead.
- No inline/deferred imports. All imports go at the top of the file.
- If you write 200 lines and it could be 50, rewrite it.
The test: Would a senior engineer say this is overcomplicated? If yes, simplify.
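As a sketch of the `hasattr`/`getattr` point above (the class and function names here are hypothetical, not from this codebase):

```python
from dataclasses import dataclass


@dataclass
class Estimator:
    # Hypothetical type; the attribute is part of the declared interface.
    n_features: int


def describe(est: Estimator) -> str:
    # Avoid: if hasattr(est, "n_features"): ...
    # The type annotation already guarantees the attribute exists,
    # so read it directly instead of probing defensively.
    return f"estimator with {est.n_features} features"
```

If the attribute genuinely may be absent, that belongs in the type (e.g. `Optional[int]`), not in runtime probing.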
Touch only what you must. Clean up only your own mess.
When editing existing code:
- Don't "improve" adjacent code, comments, or formatting.
- Don't refactor things that aren't broken.
- Match existing style, even if you'd do it differently.
- If you notice unrelated dead code, mention it — don't delete it.
When your changes create orphans:
- Remove imports/variables/functions that YOUR changes made unused.
- Don't remove pre-existing dead code unless asked.
The test: Every changed line should trace directly to the user's request.
Define success criteria. Loop until verified.
Transform tasks into verifiable goals:
| Instead of... | Transform to... |
|---|---|
| "Add validation" | "Write tests for invalid inputs, then make them pass" |
| "Fix the bug" | "Write a test that reproduces it, then make it pass" |
| "Refactor X" | "Ensure tests pass before and after" |
For multi-step tasks, state a brief plan:
1. [Step] → verify: [check]
2. [Step] → verify: [check]
3. [Step] → verify: [check]
Strong success criteria let you loop independently. Weak criteria ("make it work") require constant clarification.
- Use a conda environment for building, testing, benchmarking, and running commands like `gh`.
- Which env: cuML envs are named `cuml-YYYYMMDD`. Use the one with the latest date (e.g. run `conda env list` and pick the `cuml-*` env whose date suffix is greatest).
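A sketch of picking the newest env non-interactively, assuming the `cuml-YYYYMMDD` naming above (adjust the pattern if your env names differ):

```shell
pick_latest_cuml_env() {
  # Reads env names on stdin; prints the cuml-YYYYMMDD name with the
  # greatest date (lexicographic sort works because the date is zero-padded).
  grep -E '^cuml-[0-9]{8}$' | sort | tail -n 1
}

# Typical usage (requires conda on PATH):
#   conda activate "$(conda env list | awk '{print $1}' | pick_latest_cuml_env)"
```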
- After making code changes, rebuild the project with `cd ~/git/cuml && PARALLEL_LEVEL=8 ./build.sh --nolibcumltest --ccache`.
- The project needs rebuilding after changes to Python-only code and to the tests as well, not only after C++ changes.
When generating a summary of your work, consider these points:
- Describe the "why" of the changes, why the proposed solution is the right one.
- Highlight areas of the proposed changes that require careful review.
- Reduce the verbosity of your comments; more text and detail is not always better. Avoid flattery, stating the obvious, and filler phrases; prefer technical clarity over marketing tone.
- Read `agents/plans/current.md` for current status (what's done, what's next).
- Read relevant `agents/designs/*.md` for architecture context (why decisions were made).
When asked to make a plan or perform research, always store the resulting plan and design documents as a markdown file
in agents/plans/.
Include the date the plan was first created, as well as the date it was last edited, at the top of the file.
Use the date in the format yyyy-mm-dd in the filename to indicate when the plan was created. Keep agents/plans/current.md
up to date with the current state of work; it should refer to the plan that is currently being worked on.
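A minimal sketch of a plan file following the conventions above (the filename, dates, and section names are illustrative, not prescribed):

```markdown
<!-- agents/plans/2024-06-01-example-plan.md (hypothetical filename) -->
Created: 2024-06-01
Last edited: 2024-06-15

- Done: ...
- Next: ...
```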
Design and architecture decisions are in agents/designs/. Use that directory to record learnings and why decisions were made. Refer to it when planning and implementing work.