One guide. Every AI provider. Zero confusion. Works with Android, iOS, Web, Desktop: any app that can make HTTP requests.
- What is an API Key?
- Google Gemini (FREE)
- OpenAI / ChatGPT
- Anthropic Claude
- OpenRouter (400+ Models)
- Groq (Ultra Fast)
- Ollama (Local, Free Forever)
- How API Calls Work (Visual)
- Code Examples
- Multi-Key Failover
- Troubleshooting
- Security Tips
Think of an API key like a password for your app to talk to an AI service.
You (App) ──── API Key ────► AI Service (Gemini, ChatGPT, etc.)
                                    │
                                    ▼
                               AI Response
It's just a long string of characters, like:
- Gemini: AIzaSyD_xxxxxxxxxxxxxxxxxxxx
- OpenAI: sk-proj-xxxxxxxxxxxxxxxxxxxx
- Anthropic: sk-ant-api03-xxxxxxxxxxxx
You get it for free (most providers have free tiers), paste it into your app, and boom: your app can talk to AI.
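In code, the key simply travels with every HTTP request. A minimal Python sketch of the two common styles — both function names and key strings below are illustrative placeholders, not part of any official SDK:

```python
# Two common ways an API key is attached to a request (values are placeholders).

def gemini_url(model: str, api_key: str) -> str:
    """Gemini-style: the key rides in the URL's query string."""
    return ("https://generativelanguage.googleapis.com/v1beta/"
            f"models/{model}:generateContent?key={api_key}")

def bearer_headers(api_key: str) -> dict:
    """OpenAI/OpenRouter/Groq-style: the key goes in an Authorization header."""
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
```

Either way, anyone holding the string can spend your quota, which is why the Security Tips section below matters.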
- ✅ Completely FREE (generous limits)
- ✅ Easiest setup (2 minutes)
- ✅ Powerful models (Gemini 2.0 Flash, 2.5 Pro)
- ✅ Text + Image + Audio + Video support
Step 1: Open browser → go to https://aistudio.google.com
Step 2: Sign in with your Google account
Step 3: Click "Get API Key" (top-left or sidebar)
Step 4: Click "Create API Key"
Step 5: Copy the key (starts with "AIzaSy...")
Step 6: Paste it in your app → Done! 🎉
| Model | Speed | Quality | Best For |
|---|---|---|---|
| `gemini-2.0-flash` | ⚡⚡⚡ | ⭐⭐⭐⭐ | General use (RECOMMENDED) |
| `gemini-2.0-flash-lite` | ⚡⚡⚡⚡ | ⭐⭐⭐ | Quick simple tasks |
| `gemini-1.5-flash` | ⚡⚡⚡ | ⭐⭐⭐⭐ | Balanced |
| `gemini-1.5-pro` | ⚡⚡ | ⭐⭐⭐⭐⭐ | Complex reasoning |
| `gemini-2.5-flash-preview-05-20` | ⚡⚡⚡ | ⭐⭐⭐⭐⭐ | Latest & greatest |
| `gemini-2.5-pro-preview-05-06` | ⚡ | ⭐⭐⭐⭐⭐+ | Most powerful |
POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=YOUR_KEY
Headers:
Content-Type: application/json
Body:
{
"contents": [
{
"parts": [
{ "text": "Your question here" }
]
}
],
"generationConfig": {
"temperature": 0.7,
"maxOutputTokens": 4096
}
}
Response:
{
"candidates": [
{
"content": {
"parts": [
{ "text": "The AI's answer is here!" }
]
}
}
]
}
Get the answer: response.candidates[0].content.parts[0].text
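In real responses that path can be missing — blocked prompts and error payloads return no candidates — so it is worth guarding the lookup. A small hedged helper (the function name is ours, not part of any SDK):

```python
def extract_gemini_text(response_json: dict):
    """Pull the answer out of a generateContent response, or None if absent."""
    try:
        return response_json["candidates"][0]["content"]["parts"][0]["text"]
    except (KeyError, IndexError, TypeError):
        return None  # blocked prompt, empty candidates, or an error payload
```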
Step 1: Go to https://platform.openai.com
Step 2: Sign up / Log in
Step 3: Click your profile icon → "API Keys"
(or go to https://platform.openai.com/api-keys)
Step 4: Click "Create new secret key"
Step 5: Name it anything (e.g., "My App")
Step 6: Copy the key (starts with "sk-proj-...")
Step 7: ⚠️ SAVE IT NOW, you can't see it again!
💰 Pricing: Free $5 credits for new accounts. After that, pay-as-you-go. GPT-4o-mini is very cheap (~$0.15 per 1M tokens).
| Model | Speed | Quality | Cost |
|---|---|---|---|
| `gpt-4o-mini` | ⚡⚡⚡ | ⭐⭐⭐⭐ | 💲 (Cheapest) |
| `gpt-4o` | ⚡⚡ | ⭐⭐⭐⭐⭐ | 💲💲 |
| `gpt-4-turbo` | ⚡⚡ | ⭐⭐⭐⭐⭐ | 💲💲💲 |
| `o3-mini` | ⚡ | ⭐⭐⭐⭐⭐+ | 💲💲💲 (Reasoning) |
POST https://api.openai.com/v1/chat/completions
Headers:
Content-Type: application/json
Authorization: Bearer YOUR_API_KEY
Body:
{
"model": "gpt-4o-mini",
"messages": [
{ "role": "system", "content": "You are a helpful assistant." },
{ "role": "user", "content": "Your question here" }
],
"max_tokens": 1000
}
Response:
{
"choices": [
{
"message": {
"content": "The AI's answer is here!"
}
}
]
}
Get the answer: response.choices[0].message.content
Step 1: Go to https://console.anthropic.com
Step 2: Sign up (may need to add payment method)
Step 3: Go to "API Keys" in the sidebar
Step 4: Click "Create Key"
Step 5: Copy the key (starts with "sk-ant-api03-...")
💰 Pricing: Free $5 credits for new accounts. Claude Haiku is cheapest.
| Model | Speed | Quality | Cost |
|---|---|---|---|
| `claude-haiku-4-5-20251001` | ⚡⚡⚡ | ⭐⭐⭐⭐ | 💲 (Cheapest) |
| `claude-sonnet-4-6` | ⚡⚡ | ⭐⭐⭐⭐⭐ | 💲💲 |
| `claude-opus-4-6` | ⚡ | ⭐⭐⭐⭐⭐+ | 💲💲💲 (Best) |
POST https://api.anthropic.com/v1/messages
Headers:
Content-Type: application/json
x-api-key: YOUR_API_KEY
anthropic-version: 2023-06-01
Body:
{
"model": "claude-haiku-4-5-20251001",
"max_tokens": 1024,
"system": "You are a helpful assistant.",
"messages": [
{ "role": "user", "content": "Your question here" }
]
}
Response:
{
"content": [
{
"type": "text",
"text": "The AI's answer is here!"
}
]
}
Get the answer: response.content[0].text
- ✅ ONE API key → access to OpenAI, Claude, Gemini, Llama, Mistral, and 400+ models
- ✅ Some FREE models available
- ✅ Same OpenAI-compatible format for ALL models
- ✅ Pay only for what you use
Step 1: Go to https://openrouter.ai
Step 2: Sign up with Google/GitHub
Step 3: Go to https://openrouter.ai/keys
Step 4: Click "Create Key"
Step 5: Copy the key (starts with "sk-or-v1-...")
| Model | Provider | Cost |
|---|---|---|
| `google/gemini-2.0-flash-exp:free` | Google | FREE |
| `meta-llama/llama-3.1-8b-instruct:free` | Meta | FREE |
| `openai/gpt-4o-mini` | OpenAI | 💲 |
| `anthropic/claude-sonnet-4-6` | Anthropic | 💲💲 |
| `google/gemini-2.5-pro-preview` | Google | 💲💲 |
POST https://openrouter.ai/api/v1/chat/completions
Headers:
Content-Type: application/json
Authorization: Bearer YOUR_OPENROUTER_KEY
HTTP-Referer: https://your-app-name.com
Body:
{
"model": "google/gemini-2.0-flash-exp:free",
"messages": [
{ "role": "user", "content": "Your question here" }
]
}
Response:
{
"choices": [
{
"message": {
"content": "The AI's answer is here!"
}
}
]
}
Get the answer: response.choices[0].message.content
- ✅ Extremely fast inference (often many times faster than typical GPU-backed providers)
- ✅ Free tier available
- ✅ Runs Llama, Mixtral, Gemma models
- ✅ OpenAI-compatible API
Step 1: Go to https://console.groq.com
Step 2: Sign up with Google/GitHub
Step 3: Go to "API Keys"
Step 4: Click "Create API Key"
Step 5: Copy the key (starts with "gsk_...")
| Model | Speed | Best For |
|---|---|---|
| `llama-3.1-70b-versatile` | ⚡⚡⚡⚡ | General use |
| `llama-3.1-8b-instant` | ⚡⚡⚡⚡⚡ | Quick tasks |
| `mixtral-8x7b-32768` | ⚡⚡⚡⚡ | Long context |
| `gemma2-9b-it` | ⚡⚡⚡⚡ | Lightweight |
POST https://api.groq.com/openai/v1/chat/completions
Headers:
Content-Type: application/json
Authorization: Bearer YOUR_GROQ_KEY
Body:
{
"model": "llama-3.1-70b-versatile",
"messages": [
{ "role": "user", "content": "Your question here" }
],
"max_tokens": 1000
}
Response format: same as OpenAI → response.choices[0].message.content
- ✅ Completely FREE forever: no API key, no usage limits
- ✅ Runs on your own computer
- ✅ Your data never leaves your machine
- ✅ Hundreds of open-source models
Step 1: Go to https://ollama.com
Step 2: Download & install for your OS
Step 3: Open terminal/command prompt
Step 4: Run: ollama pull mistral (downloads a model)
Step 5: It's running! No key needed.
ollama pull mistral # 7B - Great general purpose
ollama pull llama3.1 # 8B - Meta's best open model
ollama pull codellama # 7B - Coding specialist
ollama pull gemma2 # 9B - Google's open model
ollama pull phi3 # 3.8B - Microsoft, super small

POST http://127.0.0.1:11434/api/chat
Headers:
Content-Type: application/json
Body:
{
"model": "mistral",
"messages": [
{ "role": "user", "content": "Your question here" }
],
"stream": false
}
No API key needed! Just make sure Ollama is running on your computer.
Response:
{
"message": {
"content": "The AI's answer is here!"
}
}
Get the answer: response.message.content
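The JavaScript examples later in this guide cover the hosted providers but not Ollama, so here is a stdlib-only Python sketch. It assumes Ollama is running on its default port 11434; the function names are ours:

```python
import json
import urllib.request

def build_ollama_payload(question: str, model: str = "mistral") -> dict:
    """Build the /api/chat request body; stream=False returns one JSON object."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "stream": False,
    }

def ask_ollama(question: str, model: str = "mistral",
               host: str = "http://127.0.0.1:11434") -> str:
    """POST to the local Ollama server; no API key involved."""
    req = urllib.request.Request(
        f"{host}/api/chat",
        data=json.dumps(build_ollama_payload(question, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

With `stream` left at its default of true, Ollama instead returns one JSON object per line; setting it to false keeps the parsing as simple as the response example above.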
┌──────────────┐          ┌────────────────────┐          ┌─────────────┐
│              │  HTTPS   │                    │  Magic   │             │
│   YOUR APP   │ ───────► │    AI Provider     │ ───────► │  AI Model   │
│              │          │   (Google, etc.)   │          │             │
│              │ ◄─────── │                    │ ◄─────── │             │
│              │   JSON   │                    │  Result  │             │
└──────────────┘          └────────────────────┘          └─────────────┘
What happens:
1. Your app sends a POST request with your question + API key
2. The AI provider validates your key
3. The AI model processes your question
4. The provider sends back a JSON response
5. Your app reads the answer from the JSON
1. URL → Where to send the request
2. Headers → Your API key + content type
3. Body → Your question (JSON format)
4. Response → AI's answer (JSON format)
// 🔵 GEMINI
async function askGemini(question, apiKey) {
const res = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=${apiKey}`,
{
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
contents: [{ parts: [{ text: question }] }],
generationConfig: { temperature: 0.7, maxOutputTokens: 4096 }
})
}
);
const data = await res.json();
return data.candidates?.[0]?.content?.parts?.[0]?.text || 'No response';
}
// 🟢 OPENAI / 🟣 OPENROUTER / ⚡ GROQ (all same format!)
async function askOpenAI(question, apiKey, baseUrl = 'https://api.openai.com', model = 'gpt-4o-mini') {
const res = await fetch(`${baseUrl}/v1/chat/completions`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${apiKey}`
},
body: JSON.stringify({
model: model,
messages: [{ role: 'user', content: question }],
max_tokens: 1000
})
});
const data = await res.json();
return data.choices?.[0]?.message?.content || 'No response';
}
// 🟠 ANTHROPIC
async function askClaude(question, apiKey, model = 'claude-haiku-4-5-20251001') {
const res = await fetch('https://api.anthropic.com/v1/messages', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-api-key': apiKey,
'anthropic-version': '2023-06-01'
},
body: JSON.stringify({
model: model,
max_tokens: 1024,
messages: [{ role: 'user', content: question }]
})
});
const data = await res.json();
return data.content?.[0]?.text || 'No response';
}
// Usage:
// const answer = await askGemini("What is Kotlin?", "AIzaSy...");
// const answer = await askOpenAI("What is Kotlin?", "sk-proj-...");
// const answer = await askOpenAI("What is Kotlin?", "sk-or-v1-...", "https://openrouter.ai/api", "google/gemini-2.0-flash-exp:free");
// const answer = await askOpenAI("What is Kotlin?", "gsk_...", "https://api.groq.com/openai", "llama-3.1-70b-versatile");
// const answer = await askClaude("What is Kotlin?", "sk-ant-api03-...");

// Using OkHttp (add to build.gradle: implementation("com.squareup.okhttp3:okhttp:4.12.0"))
import okhttp3.*
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.RequestBody.Companion.toRequestBody
import org.json.JSONObject
// Note: execute() is a blocking call; in a real app run these suspend functions
// via withContext(Dispatchers.IO) so they stay off the main thread.
// 🔵 GEMINI
suspend fun askGemini(question: String, apiKey: String, model: String = "gemini-2.0-flash"): String {
val client = OkHttpClient()
val json = JSONObject().apply {
put("contents", org.json.JSONArray().put(
JSONObject().put("parts", org.json.JSONArray().put(
JSONObject().put("text", question)
))
))
put("generationConfig", JSONObject().apply {
put("temperature", 0.7)
put("maxOutputTokens", 4096)
})
}
val request = Request.Builder()
.url("https://generativelanguage.googleapis.com/v1beta/models/$model:generateContent?key=$apiKey")
.post(json.toString().toRequestBody("application/json".toMediaType()))
.build()
val response = client.newCall(request).execute()
val body = JSONObject(response.body!!.string())
return body.getJSONArray("candidates")
.getJSONObject(0).getJSONObject("content")
.getJSONArray("parts").getJSONObject(0)
.getString("text")
}
// 🟢 OPENAI-COMPATIBLE (works for OpenAI, OpenRouter, Groq)
suspend fun askOpenAI(
question: String,
apiKey: String,
baseUrl: String = "https://api.openai.com",
model: String = "gpt-4o-mini"
): String {
val client = OkHttpClient()
val json = JSONObject().apply {
put("model", model)
put("messages", org.json.JSONArray().put(
JSONObject().put("role", "user").put("content", question)
))
put("max_tokens", 1000)
}
val request = Request.Builder()
.url("$baseUrl/v1/chat/completions")
.addHeader("Authorization", "Bearer $apiKey")
.post(json.toString().toRequestBody("application/json".toMediaType()))
.build()
val response = client.newCall(request).execute()
val body = JSONObject(response.body!!.string())
return body.getJSONArray("choices")
.getJSONObject(0).getJSONObject("message")
.getString("content")
}
// 🟠 ANTHROPIC
suspend fun askClaude(question: String, apiKey: String, model: String = "claude-haiku-4-5-20251001"): String {
val client = OkHttpClient()
val json = JSONObject().apply {
put("model", model)
put("max_tokens", 1024)
put("messages", org.json.JSONArray().put(
JSONObject().put("role", "user").put("content", question)
))
}
val request = Request.Builder()
.url("https://api.anthropic.com/v1/messages")
.addHeader("x-api-key", apiKey)
.addHeader("anthropic-version", "2023-06-01")
.post(json.toString().toRequestBody("application/json".toMediaType()))
.build()
val response = client.newCall(request).execute()
val body = JSONObject(response.body!!.string())
return body.getJSONArray("content")
.getJSONObject(0).getString("text")
}

import requests
# 🔵 GEMINI
def ask_gemini(question, api_key, model="gemini-2.0-flash"):
    url = f"https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent?key={api_key}"
    data = {
        "contents": [{"parts": [{"text": question}]}],
        "generationConfig": {"temperature": 0.7, "maxOutputTokens": 4096}
    }
    r = requests.post(url, json=data)
    return r.json()["candidates"][0]["content"]["parts"][0]["text"]

# 🟢 OPENAI-COMPATIBLE (OpenAI, OpenRouter, Groq)
def ask_openai(question, api_key, base_url="https://api.openai.com", model="gpt-4o-mini"):
    url = f"{base_url}/v1/chat/completions"
    headers = {"Authorization": f"Bearer {api_key}"}
    data = {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "max_tokens": 1000
    }
    r = requests.post(url, json=data, headers=headers)
    return r.json()["choices"][0]["message"]["content"]

# 🟠 ANTHROPIC
def ask_claude(question, api_key, model="claude-haiku-4-5-20251001"):
    url = "https://api.anthropic.com/v1/messages"
    headers = {"x-api-key": api_key, "anthropic-version": "2023-06-01"}
    data = {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": question}]
    }
    r = requests.post(url, json=data, headers=headers)
    return r.json()["content"][0]["text"]

If one key hits a rate limit, automatically switch to the next one:
const keys = [
{ provider: 'gemini', key: 'AIzaSy...', model: 'gemini-2.0-flash' },
{ provider: 'openrouter', key: 'sk-or-v1-...', model: 'google/gemini-2.0-flash-exp:free' },
{ provider: 'groq', key: 'gsk_...', model: 'llama-3.1-70b-versatile' },
];
async function askAI(question) {
for (const entry of keys) {
try {
if (entry.provider === 'gemini') {
return await askGemini(question, entry.key, entry.model);
} else if (entry.provider === 'openrouter' || entry.provider === 'groq') {
const baseUrl = entry.provider === 'openrouter'
? 'https://openrouter.ai/api'
: 'https://api.groq.com/openai';
return await askOpenAI(question, entry.key, baseUrl, entry.model);
}
} catch (err) {
console.log(`${entry.provider} failed, trying next...`);
}
}
throw new Error('All AI providers failed!');
}

| Error | What It Means | Fix |
|---|---|---|
| `401 Unauthorized` | Wrong API key | Double-check your key, no extra spaces |
| `403 Forbidden` | Key doesn't have access | Check billing/permissions on the provider dashboard |
| `429 Too Many Requests` | Rate limited | Wait a minute, or use multi-key failover |
| `500 Internal Server Error` | Provider is down | Wait and retry, or switch provider |
| CORS Error (browser) | Browser blocking the request | Use a backend proxy, or call the API server-side |
| Model not found | Wrong model name | Check the model tables above for exact names |
- [ ] Is the API key correct? (no extra spaces, full key)
- [ ] Is the API key enabled? (not revoked on dashboard)
- [ ] Do you have credits/billing set up?
- [ ] Is the model name spelled correctly?
- [ ] Are you using the right URL for the provider?
- [ ] Is the Content-Type header set to application/json?
- [ ] Is the request body valid JSON?
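For transient failures like 429 and 500, a simple exponential backoff often fixes things without switching providers. A generic sketch, in Python for brevity — `call` is whatever request function you already use, and the name `with_retries` is ours:

```python
import time

def with_retries(call, max_attempts: int = 3, base_delay: float = 1.0):
    """Run call() up to max_attempts times, doubling the wait between tries."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller handle it
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Usage: `with_retries(lambda: ask_gemini("Hi", key))`. Combine it with the multi-key failover above for maximum resilience.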
⚠️ NEVER put API keys in:
- ❌ Source code committed to GitHub
- ❌ Client-side JavaScript (anyone can see it)
- ❌ Public URLs or logs

✅ ALWAYS store API keys in:
- ✔ Environment variables (.env files)
- ✔ Android SharedPreferences / DataStore
- ✔ iOS Keychain
- ✔ Server-side only (proxy API calls)
- ✔ localStorage (for personal/prototype apps only)
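Reading a key from an environment variable is a one-liner in most languages. A Python sketch — the variable name `GEMINI_API_KEY` is just a convention you pick, and the helper name is ours:

```python
import os

def load_api_key(var_name: str = "GEMINI_API_KEY") -> str:
    """Read an API key from the environment instead of hard-coding it."""
    key = os.environ.get(var_name, "").strip()  # strip stray whitespace (401 cause)
    if not key:
        raise RuntimeError(f"Set the {var_name} environment variable first")
    return key
```

Set it with `export GEMINI_API_KEY="AIzaSy..."` (or a `.env` file that your framework loads), and keep `.env` in `.gitignore`.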
Need FREE AI?
├── Yes, with internet → Gemini (best free) or OpenRouter (free models)
├── Yes, no internet → Ollama (runs locally)
└── Budget available?
    ├── Want the BEST → Claude Opus or GPT-4o
    ├── Want FAST + CHEAP → Groq (Llama) or GPT-4o-mini
    ├── Want ONE key for ALL → OpenRouter
    └── Want PRIVACY → Ollama (your data stays local)
| Provider | Get API Key | Docs | Pricing |
|---|---|---|---|
| Gemini | aistudio.google.com | Docs | FREE |
| OpenAI | platform.openai.com | Docs | Pricing |
| Anthropic | console.anthropic.com | Docs | Pricing |
| OpenRouter | openrouter.ai/keys | Docs | Per model |
| Groq | console.groq.com | Docs | FREE tier |
| Ollama | ollama.com | Docs | FREE forever |
Made by @ashokvarmamatta. If this helped you, give it a ⭐!
Used in: Prompt Gen Β· ZeroClaw Β· Neural Forge