Turning Any Web Service Into a Programmable API for LLMs
Every web application is already an API — your browser proves it every time you click a button. Here's how you can reverse-engineer any web service into a clean, programmable interface that AI agents can use autonomously.
Every web application you use is already an API. Your browser is just a very pretty client.
When you click "Generate" on an AI art platform, or "Send" on a messaging app, your browser fires off an HTTP request to a backend endpoint with a structured payload. The response comes back as JSON. The fancy UI is just a wrapper around what is, fundamentally, a programmable interface.
This means any web service can become your API — if you know where to look.
The Browser Is Your Documentation
Open DevTools. Click the Network tab. Use the service normally. Every action you take becomes a logged request — complete with URL, headers, payload, and response. This is the documentation the company never published.
The pattern is remarkably consistent across modern web apps:
- Authentication — Usually a POST to an auth endpoint that returns a JWT or session token. Sometimes OAuth, sometimes a simple email/password exchange.
- Session/State Management — Many services create server-side sessions or contexts that subsequent requests reference.
- The Core Action — The actual API call that does the thing you care about: generating content, processing data, sending a message.
- Polling or Webhooks — For async operations, the client either polls a status endpoint or listens on a WebSocket.
That's it. Four steps. Every SaaS product on the internet follows some variation of this.
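The four steps above can be sketched as a thin Python client. Everything here is illustrative: the endpoints (`/auth/login`, `/generate`, `/jobs/{id}`) and response fields (`token`, `job_id`, `status`) are hypothetical stand-ins for whatever you actually observe in the Network tab. The `session` argument is anything with requests.Session-style `.post`/`.get` methods.

```python
import time

BASE = "https://app.example.com/api/v1"  # hypothetical base URL for illustration

def login(session, email: str, password: str) -> str:
    """Step 1: exchange credentials for a bearer token (endpoint names are made up)."""
    resp = session.post(f"{BASE}/auth/login", json={"email": email, "password": password})
    resp.raise_for_status()
    token = resp.json()["token"]
    # Step 2: session state — carry the token on every subsequent request
    session.headers["Authorization"] = f"Bearer {token}"
    return token

def generate(session, prompt: str) -> str:
    """Step 3: the core action; many services return a job id for async work."""
    resp = session.post(f"{BASE}/generate", json={"prompt": prompt})
    resp.raise_for_status()
    return resp.json()["job_id"]

def wait_for_result(session, job_id: str, interval: float = 2.0) -> dict:
    """Step 4: poll the status endpoint until the job reaches a terminal state."""
    while True:
        resp = session.get(f"{BASE}/jobs/{job_id}")
        resp.raise_for_status()
        data = resp.json()
        if data["status"] in ("done", "failed"):
            return data
        time.sleep(interval)
```

In practice you would pass in a `requests.Session`; keeping the session injectable also makes the client trivial to test with a fake.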
From Network Requests to a Clean Client
Once you've mapped out the request flow, building a programmatic client is straightforward:
Browser clicks "Generate"
↓
POST /api/v1/generate { prompt: "...", model: "..." }
↓
You write: client.generate(prompt="...", model="...")
The key abstractions you'll need:
- Auth handler — Store tokens, detect expiry, refresh automatically. Most JWTs have an exp claim you can read without even verifying the signature.
- Request builder — Mirror the headers and payload structure you observed. Pay attention to required headers like Content-Type, custom auth headers, and any anti-CSRF tokens.
- Response parser — Extract the data you care about from the response. Handle errors gracefully.
- Async poller — If the service processes requests asynchronously, write a polling loop with exponential backoff.
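Two of these abstractions fit in a few lines of standard-library Python. The first reads the exp claim straight out of a JWT's payload segment (the middle of the three dot-separated base64url parts) without verifying anything; the second is a generic polling loop with exponential backoff. Function names and defaults here are my own sketch, not any particular library's API.

```python
import base64
import json
import time

def jwt_expiry(token: str) -> int:
    """Read the exp claim from a JWT payload. No signature verification —
    this is only for deciding when to refresh, never for trusting the token."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload["exp"]

def token_expired(token: str, leeway: int = 30) -> bool:
    """Treat the token as expired `leeway` seconds early so we refresh in time."""
    return jwt_expiry(token) - leeway <= time.time()

def poll_with_backoff(fetch, initial: float = 1.0, cap: float = 60.0) -> dict:
    """Call fetch() until it reports a terminal status, doubling the delay
    between attempts up to `cap` seconds."""
    delay = initial
    while True:
        data = fetch()
        if data["status"] in ("done", "failed"):
            return data
        time.sleep(min(delay, cap))
        delay *= 2
```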
A well-structured client can often be built in a single afternoon. The reverse-engineering is the hard part; the code is simple.
The LLM Layer: MCP Makes It Trivial
Here's where it gets interesting. Once you have a working client library, you can expose it to LLMs using the Model Context Protocol (MCP).
MCP is an open standard that lets AI assistants call external tools. You define tools with names, descriptions, and input schemas — and the LLM figures out when and how to use them.
Your reverse-engineered client becomes a set of MCP tools:
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("reverse-engineered-service")

@mcp.tool()
async def generate_image(prompt: str, style: str = "realistic") -> str:
    """Generate an image from a text prompt."""
    result = await client.generate(prompt=prompt, style=style)  # your reverse-engineered client
    return result.url
Now an AI agent can autonomously use a service that was never designed to be used programmatically. No official API required. No API key application process. No rate-limit tier negotiations.
You Can Also Go REST
MCP is great for LLM consumption, but sometimes you want a traditional API too. Wrapping the same client in a FastAPI or Express server gives you:
- Interactive API docs (Swagger/OpenAPI) for free
- Standard HTTP endpoints any programming language can call
- The ability to build your own UI on top of the service
The client library is the foundation. MCP and REST are just two different doors into the same room.
The Ethics and Practicalities
Let's be direct about the boundaries:
- Terms of Service exist. Most services prohibit automated access. Know the rules before you break them. Use this for personal projects, learning, and prototyping — not for building competing products.
- Things break. Undocumented APIs change without notice. Your client will need maintenance. Build with resilience in mind.
- Rate limiting is real. Just because you can send 1,000 requests per minute doesn't mean you should. Be a good citizen.
- Authentication credentials are sensitive. Never hardcode them. Use environment variables. Don't commit them to git.
Why This Matters
We're in an era where AI agents need tools to be useful. The official tool ecosystem is growing, but it will never cover every service. The ability to turn any web application into a programmable tool for an AI agent is a superpower.
The web was built on open protocols. Every service speaks HTTP. Every payload is inspectable. The gap between "web app" and "API" is just a few hours of detective work and a thin layer of Python.
Your browser already knows how to talk to every service you use. Now your code can too.
If you're building AI agents and need them to interact with services that don't have official APIs, start with your browser's Network tab. The documentation is already there — it's just not written down yet.