import React from "react";
import { ResponsiveContainer, LineChart, Line, XAxis, YAxis, CartesianGrid, Tooltip, Legend } from "recharts";

const showPassRate1 = false;

// Benchmark scores per quantization level (undefined = not measured)
const data = [
  { name: "TQ1_0", unsloth: undefined, aider: 51.6, pass_rate_1: 19.1 },
  { name: "IQ1_M", unsloth: 79.8, aider: 56.9, pass_rate_1: 24.0 },
  { name: "TQ1_0-thinking", unsloth: undefined, aider: 60.4, pass_rate_1: 26.2 },
  { name: "IQ2_XXS", unsloth: 80.3, aider: undefined, pass_rate_1: undefined },
  { name: "IQ2_M", unsloth: 80.78, aider: 61.3, pass_rate_1: 36.4 },
];
@aylaeroglu
aylaeroglu / aider-plan.md
Created September 7, 2025 18:03 — forked from rstacruz/aider-plan.md
Stop letting Aider code blindly - try this first

Using Aider's /ask and /architect commands to approach larger tasks.

Aider is one of the best AI coding tools available today (in my opinion!). It integrates with any LLM and works from any code editor.

However, Aider can often feel very eager to make changes. It jumps right into coding after I type anything. I noticed a pattern:

  • I would describe what I wanted
  • Aider would start coding right away
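The fix is to plan before coding: start in `/ask` mode to discuss the approach (ask mode doesn't edit files), then switch to `/architect` to have Aider propose and apply the changes. A session might look like this (the task and responses here are illustrative, not taken from Aider's docs):

```
$ aider
> /ask How should we add retry logic to the API client?
  (Aider discusses a plan in ask mode — no files are changed)
> /architect Implement the retry plan we just discussed
  (Architect mode proposes edits, which are then applied on approval)
```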
@aylaeroglu
aylaeroglu / chat.html
Created August 15, 2025 10:11 — forked from smahs/chat.html
Simple LLM chat for local OpenAI-compatible servers
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>LLM Chat Interface</title>
<script src="https://cdn.jsdelivr.net/npm/@unocss/runtime/preset-typography.global.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@unocss/runtime/preset-mini.global.js"></script>
<script type="text/javascript">
@aylaeroglu
aylaeroglu / InstantClient.md
Created August 14, 2025 08:19 — forked from worbas/InstantClient.md
Install Oracle Instant Client on Debian / Ubuntu
@aylaeroglu
aylaeroglu / README.md
Created July 27, 2025 06:32 — forked from disler/README.md
Prompt Chaining with QwQ, Qwen, o1-mini, Ollama, and LLM

Prompt Chaining with QwQ, Qwen, o1-mini, Ollama, and LLM

Here we explore prompt chaining with local reasoning models in combination with base models. With shockingly powerful local models like QwQ and Qwen, we can build prompt chains that let us tap into their capabilities in an immediately useful, local, private, AND free way.

The idea: build prompt chains where the first step uses a powerful reasoning model to generate a verbose response, and a second step uses a base model to extract the final answer from it.

Play with the prompts and models to see what works best for your use cases. Run the o1 series to see how QwQ compares.
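The two-step chain can be sketched as follows. The stub functions below stand in for real model calls (in practice these would hit QwQ and a base model via Ollama); their contents are assumptions for illustration, not the repo's actual code:

```python
def reasoning_model(prompt: str) -> str:
    # Stand-in for a local reasoning model (e.g. QwQ via Ollama).
    # Reasoning models emit their chain of thought before the answer.
    return "Thinking: 12 * 7 = 84, so...\nFinal answer: 84"

def base_model(prompt: str) -> str:
    # Stand-in for a base model used purely for extraction.
    # Here we mimic extraction by returning the last "Final answer" line.
    for line in reversed(prompt.splitlines()):
        if line.startswith("Final answer:"):
            return line.removeprefix("Final answer:").strip()
    return prompt.strip()

def chain(question: str) -> str:
    # Step 1: the reasoning model generates a verbose response.
    verbose = reasoning_model(question)
    # Step 2: the base model extracts just the answer from that response.
    return base_model(f"Extract only the final answer:\n{verbose}")

print(chain("What is 12 * 7?"))  # → 84
```

Swapping the stubs for real API calls keeps the chain structure unchanged; only the transport differs.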

Setup

  • Bun (to run bun run chain.ts ...)
@aylaeroglu
aylaeroglu / README_MINIMAL_PROMPT_CHAINABLE.md
Created July 27, 2025 06:29 — forked from disler/README_MINIMAL_PROMPT_CHAINABLE.md
Minimal Prompt Chainables - Zero LLM Library Sequential Prompt Chaining & Prompt Fusion

Minimal Prompt Chainables

Sequential prompt chaining in one method with context and output back-referencing.

Files

  • main.py - start here - full example using MinimalChainable from chain.py to build a sequential prompt chain
  • chain.py - contains the zero-dependency, minimal prompt-chain class
  • chain_test.py - tests for chain.py; you can ignore this
  • requirements.py - Python requirements
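The output back-referencing idea can be sketched in a few lines. This is an illustration of the pattern, not the actual `MinimalChainable` API from chain.py — the function name, template syntax handling, and stub model here are assumptions:

```python
import re

def run_chain(context: dict, prompts: list, call_model) -> list:
    # Sequential chaining: each prompt may reference context variables
    # ({{name}}) and earlier outputs ({{output[-1]}}); outputs accumulate
    # in order so later prompts can build on earlier results.
    outputs = []
    for prompt in prompts:
        # Fill in context variables.
        for key, value in context.items():
            prompt = prompt.replace("{{" + key + "}}", str(value))
        # Fill in back-references to earlier outputs, e.g. {{output[-1]}}.
        for ref in re.findall(r"{{output\[(-?\d+)\]}}", prompt):
            prompt = prompt.replace("{{output[" + ref + "]}}", str(outputs[int(ref)]))
        outputs.append(call_model(prompt))
    return outputs

# Usage with a stub model that just upper-cases its prompt:
outs = run_chain(
    {"topic": "tea"},
    ["Name a {{topic}}.", "Describe {{output[-1]}} briefly."],
    lambda p: p.upper(),
)
```

The second prompt receives the first prompt's output spliced in before the model is called, which is the whole trick: chaining is just string templating over an accumulating output list.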

Setup