Proposal: Use of Personal or Professional Development Machines to Run Local LLMs for Enhanced Developer Productivity
We propose allowing the use of personally owned machines, or laptops purchased via professional development budgets, to run local Large Language Models (LLMs). This approach enables powerful AI-assisted coding workflows while maintaining complete data privacy, since inference runs entirely on-device and no data leaves the machine.
CloseKnit-issued laptops are resource-constrained and not optimized for running local AI models. Developer-owned or dev-budget machines can:
- Handle larger local LLMs (e.g., Code Llama, StarCoder, Phi)
- Support GPU-based model acceleration
- Run AI workflows entirely offline
- Boost productivity with faster, more responsive tools
Because everything runs on-device, the security characteristics differ sharply from cloud-based AI tools:

| Characteristic | Local LLM Usage |
|---|---|
| Network access required? | No; models run entirely offline |
| Cloud service dependency? | None |
| Code or data transmission? | None; no data leaves the device |
| PHI/PII exposure risk? | None, provided only sanitized or test data is used in local development |
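The "no data leaves the device" property can even be made testable in developer tooling. Below is an illustrative Python sketch (not an official control, and no substitute for OS-level firewall rules) that monkey-patches `socket.socket.connect` so any outbound connection to a non-loopback host fails fast during an offline LLM session:

```python
import socket

# Illustrative guard: block outbound connections to non-local hosts so an
# "offline" LLM session cannot silently send data anywhere. Shown only to
# make the offline guarantee concrete and checkable.
_real_connect = socket.socket.connect

def _local_only_connect(self, address, *args, **kwargs):
    host = address[0] if isinstance(address, tuple) else address
    if host not in ("127.0.0.1", "localhost", "::1"):
        raise RuntimeError(f"Blocked outbound connection to {host}")
    return _real_connect(self, address, *args, **kwargs)

socket.socket.connect = _local_only_connect
```

With the patch applied, connections to `localhost` (e.g., a locally running model server) still work, while any attempt to reach an external host raises immediately.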
Local LLM tools such as Ollama, LM Studio, LMDeploy, or LocalGPT can be used to ensure that:
- All inference happens locally
- Models are pre-downloaded and verified
- All developer data remains on-device
Typical developer use cases include:
- Code generation and refactoring
- Regex and SQL generation
- Documentation writing
- Unit test generation
- Local codebase summarization
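As a concrete example of the workflow, Ollama serves a local HTTP API on `localhost:11434`. The sketch below (assuming Ollama is installed and running, and the model has already been pulled) sends a prompt without any data leaving the machine; the helper names are hypothetical:

```python
import json
import urllib.request

# Default local Ollama endpoint; requests never leave the loopback interface.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Encode a non-streaming request for the local /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_local_llm(model: str, prompt: str) -> str:
    """Send a prompt to the locally running model and return its response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # loopback only
        return json.loads(resp.read())["response"]

# Example (requires `ollama pull codellama` beforehand):
# print(ask_local_llm("codellama", "Write a SQL query that lists duplicate emails."))
```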
We recommend the following policy parameters:

| Policy Component | Recommendation |
|---|---|
| Approved machines | Personally owned or dev-budget machines |
| Usage scope | Development environments only; no production credentials or live PHI data |
| Tooling examples | Ollama, LM Studio, Code Llama |
| Data retention | No data is uploaded or logged externally |
Key risks and mitigations:

| Risk | Mitigation Strategy |
|---|---|
| Exposure of sensitive data | Use sanitized/test data only; do not connect production secrets or APIs |
| Model compliance concerns | Use only open-source, locally auditable models (e.g., Meta LLaMA, Mistral) |
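For the first risk, one lightweight guardrail is to redact obvious PII patterns before any text reaches a prompt, even a local one. The sketch below is illustrative only; the patterns are deliberately simple and not exhaustive:

```python
import re

# Illustrative PII redaction: replace common patterns (emails, US SSNs,
# dashed phone numbers) with placeholder tokens before prompting a model.
# A real guardrail would need a much broader pattern set.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Return text with matched PII replaced by [LABEL] placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

For example, `sanitize("Contact alice@example.com or 555-867-5309")` yields `"Contact [EMAIL] or [PHONE]"`.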
Adopting this policy would:
- Empower developers with high-performance, responsive AI tools
- Enable fully private AI experimentation with zero vendor lock-in
- Allow engineers to explore on-device copilots without risking corporate data
- Reduce reliance on paid cloud inference services
Recommended next steps:
- Review and approve a policy allowing the use of personal/dev-budget machines for local LLM use
- Define baseline security guardrails (e.g., device encryption, endpoint management if needed)
- Share documentation/examples on running local models responsibly
- Periodically review tools for model updates and privacy risks
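To support the "pre-downloaded and verified" guardrail, the shared documentation could include a simple integrity check along these lines. The model path and known-good digest are assumed to come from an internal registry, which is hypothetical here:

```python
import hashlib

# Illustrative integrity check: hash a pre-downloaded model file in chunks
# and compare against a known-good SHA-256 digest before loading the model.
def verify_model(path: str, expected_sha256: str) -> bool:
    """Return True if the file at `path` matches the expected SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```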