10-minute setup

Twinkle Hub is an MCP endpoint, so any MCP client can connect. Pick the path that matches your setup.

1. Get an API key

Visit /login, sign in with Google or GitHub OAuth, and a virtual key is minted automatically. Quota increases go through support. Every request must carry the header Authorization: Bearer sk-....

2. curl (streamable-http transport)

Step 1: initialize, and capture the session id from the mcp-session-id response header:

```bash
curl -i -X POST https://api.twinkleai.tw/mcp/ \
  -H "Authorization: Bearer sk-..." \
  -H "Accept: application/json, text/event-stream" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl","version":"0"}}}'
```

Step 2: call tools/call, passing the captured session id:

```bash
curl -X POST https://api.twinkleai.tw/mcp/ \
  -H "Authorization: Bearer sk-..." \
  -H "Accept: application/json, text/event-stream" \
  -H "Content-Type: application/json" \
  -H "mcp-session-id: <from-step-1>" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"opendata-list_domains","arguments":{}}}'
```

The response uses SSE-style framing: find the last line starting with data: and parse its JSON payload.
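That framing step can be sketched in Python (parse_sse_result is a hypothetical helper name, and the sample body below is illustrative, not captured from the API):

```python
import json

def parse_sse_result(body: str) -> dict:
    """Extract the JSON payload from an SSE-framed MCP response body.

    The server may emit several event:/data: lines; the last data:
    line carries the final JSON-RPC message.
    """
    data_lines = [
        line[len("data:"):].strip()
        for line in body.splitlines()
        if line.startswith("data:")
    ]
    if not data_lines:
        raise ValueError("no data: line in response body")
    return json.loads(data_lines[-1])

# Illustrative SSE body:
body = 'event: message\ndata: {"jsonrpc":"2.0","id":2,"result":{"ok":true}}\n'
print(parse_sse_result(body)["result"])
```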

3. Python (fastmcp)

```python
import asyncio
import json

from fastmcp.client import Client
from fastmcp.client.auth import BearerAuth

async def main():
    async with Client(
        "https://api.twinkleai.tw/mcp/",
        auth=BearerAuth("sk-..."),
    ) as client:
        tools = await client.list_tools()
        print([t.name for t in tools])

        r = await client.call_tool(
            "opendata-search_datasets",
            {"query": "AQI", "domain": "environment", "limit": 5},
        )
        # The response is in r.content[0].text as a JSON string
        print(json.loads(r.content[0].text))

asyncio.run(main())
```

Tools return CallToolResult.content[0].text as a JSON string. License, citation and other metadata pass through in the _meta field.
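Decoding that shape can be factored into a tiny helper (parse_result is a hypothetical name, and the sample payload below is illustrative, not real API output):

```python
import json

def parse_result(text: str):
    """Decode the JSON string found in CallToolResult.content[0].text."""
    return json.loads(text)

# Illustrative payload shaped like a search_datasets result row list:
sample = '[{"dataset_id": "env-aqi", "title": "AQI", "score": 0.91, "license": "OGDL"}]'
rows = parse_result(sample)
print(rows[0]["dataset_id"])
```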

4. Tools reference

| Tool | Arguments | Returns (brief) |
|---|---|---|
| opendata-list_domains | (none) | [{key, name_zh, scope, typical_questions}] |
| opendata-search_datasets | {query, domain?, limit?} | [{dataset_id, title, score, license}] |
| opendata-get_dataset | {dataset_id} | {schema, columns, license, source_url} |
| opendata-query_rows | {dataset_id, where?, limit?} | [row, ...] |
| opendata-materialize_dataset | {dataset_id, format?} | full-table CSV / JSON |

Full schemas are available via tools/list; they are auto-generated as JSON Schema.
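As a rough sketch, a tools/list entry might look like the following. This shape is inferred from the arguments column above, not copied from the server, so treat every field as an assumption:

```python
# Hypothetical shape of one tools/list entry for opendata-get_dataset,
# inferred from the "{dataset_id}" arguments column in the table above.
tool_entry = {
    "name": "opendata-get_dataset",
    "inputSchema": {
        "type": "object",
        "properties": {"dataset_id": {"type": "string"}},
        "required": ["dataset_id"],
    },
}
print(tool_entry["inputSchema"]["required"])
```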

5. Authentication

  • Bearer token: send the header Authorization: Bearer sk-.... Each user gets one LiteLLM virtual key; fetch it from /dashboard.
  • No OAuth flow: no PKCE or refresh tokens needed, just a long-lived bearer token.
  • Rate limits: no per-key RPM/TPM limits today; only max_budget is enforced (LiteLLM rejects requests once spend > max_budget).

6. Pricing

| Tool | Per call | Use |
|---|---|---|
| opendata-list_domains | $0.0005 | List 19 domains (helps Claude plan) |
| opendata-search_datasets | $0.001 | Search datasets by query / domain |
| opendata-get_dataset | $0.001 | Fetch dataset metadata + schema |
| opendata-query_rows | $0.005 | SQL-like filter on rows |
| opendata-materialize_dataset | $0.010 | Full dataset materialization |

Spend tracking lags by 5–15 s because LiteLLM batches DB writes. Quota enforcement happens in real time on the next request, by comparing recorded spend against max_budget.
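The pricing table and enforcement rule together make call costs easy to budget ahead of time. A minimal sketch (estimate_cost and accept_next are hypothetical helper names; the prices come from the table above):

```python
# Per-call prices from the pricing table above.
PRICES = {
    "opendata-list_domains": 0.0005,
    "opendata-search_datasets": 0.001,
    "opendata-get_dataset": 0.001,
    "opendata-query_rows": 0.005,
    "opendata-materialize_dataset": 0.010,
}

def estimate_cost(calls):
    """Sum the per-call price over a planned sequence of tool calls."""
    return sum(PRICES[name] for name in calls)

def accept_next(spend, max_budget):
    """Enforcement rule: the next request is rejected once recorded
    spend exceeds max_budget (recorded spend may lag by 5-15 s)."""
    return spend <= max_budget

plan = ["opendata-search_datasets", "opendata-get_dataset", "opendata-query_rows"]
print(round(estimate_cost(plan), 6))
```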