Case study / Open Source Maintainer
Ryuk-me/Torrent-Api-py
Open-source FastAPI service that unifies multiple torrent providers under one API. I maintain scrapers, normalize responses, and keep deployments stable.
At a glance
- Torrent sites expose different HTML and different field names.
- Core flow: Request → Router → Scraper → Normalize → Response
- Primary stack: Python, FastAPI, aiohttp, BeautifulSoup
Request flow
Multi-site torrent search flow
A search request comes in; the router picks the right scraper (or fans out to all 16 for /all/search), cloudscraper handles anti-bot protection, BeautifulSoup pulls the data, and the normalizer returns a consistent shape every time.
Request
GET /search?site=1337x&q=ubuntu or /all/search to hit all 16 at once
Router
Checks API key if set → parses site + query params → picks scraper
Scraper
cloudscraper bypasses Cloudflare → BeautifulSoup extracts torrent rows
Normalize
Maps site-specific fields → title · seeders · leechers · size · magnet · hash
Response
Identical JSON shape for every site — 1337x, PirateBay, YTS, or any other
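The Normalize step above can be sketched as a small field-mapping function. The site-specific key names below (`name`, `seeds`, `peers`, and so on) are hypothetical examples, not the project's actual scraper output; only the unified output shape matches the flow described above.

```python
# Sketch of the normalize step: map a site-specific row dict into the
# unified response shape. Input key names are illustrative only; each
# real scraper has its own field names.

FIELD_MAP = {
    "1337x": {"name": "title", "seeds": "seeders", "peers": "leechers",
              "filesize": "size", "magnet_link": "magnet", "info_hash": "hash"},
}

UNIFIED_KEYS = ("title", "seeders", "leechers", "size", "magnet", "hash")

def normalize(site: str, row: dict) -> dict:
    mapping = FIELD_MAP[site]
    out = {unified: row.get(raw) for raw, unified in mapping.items()}
    # Guarantee every unified key is present even if the site omits it.
    for key in UNIFIED_KEYS:
        out.setdefault(key, None)
    return out

row = {"name": "ubuntu-22.04.iso", "seeds": "1200", "peers": "80",
       "filesize": "3.6 GB", "magnet_link": "magnet:?xt=urn:btih:abc",
       "info_hash": "abc123"}
print(normalize("1337x", row))
```

Because every scraper funnels through a mapping like this, clients can parse one JSON shape regardless of which site answered.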
Contributions
What I built
- Designed API routes for search, trending, recent, category, and multi-site /all/search queries.
- Implemented 16 provider scrapers and kept them isolated so one provider failure does not break the whole API.
- Maintained public deployment and docs while handling scraper breakages over time.
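The isolation property claimed above can be illustrated with a fan-out sketch. This is an assumed structure, not the project's actual code: each scraper runs concurrently, and a provider that raises becomes an error entry instead of crashing the combined response.

```python
import asyncio

# Sketch of an /all/search fan-out: run every scraper concurrently and
# convert per-provider exceptions into error entries, so one broken
# site never takes down the whole response. The site names and fake
# scraper coroutines below are illustrative only.

async def scrape_ok(site: str):
    return {"site": site, "results": [{"title": f"{site} hit"}]}

async def scrape_broken(site: str):
    raise RuntimeError(f"{site}: markup changed")

async def all_search(scrapers: dict):
    tasks = [fn(site) for site, fn in scrapers.items()]
    # return_exceptions=True keeps failures in-band instead of raising.
    results = await asyncio.gather(*tasks, return_exceptions=True)
    combined = []
    for site, res in zip(scrapers, results):
        if isinstance(res, Exception):
            combined.append({"site": site, "error": str(res)})
        else:
            combined.append(res)
    return combined

out = asyncio.run(all_search({"1337x": scrape_ok, "yts": scrape_broken}))
print(out)
```

`asyncio.gather(..., return_exceptions=True)` is what turns a failing provider into data rather than a crash, which is the property that keeps 15 working scrapers usable while the 16th is broken.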
Technical decisions
Key engineering decisions
- Each provider gets its own scraper file — when a site goes down or changes its markup, only that scraper needs fixing.
- cloudscraper handles Cloudflare and anti-bot protection so the scrapers don't fail on protected sites.
- Mangum adapter lets the app run on AWS Lambda without code changes — same codebase, multiple deploy targets.
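The one-scraper-per-provider decision can be sketched as a simple registry; the module layout and class names here are hypothetical, not the repository's real structure. The point is that the router only needs a name-to-class lookup, so fixing one scraper never touches the others.

```python
# Sketch of a per-provider scraper registry: each scraper lives in its
# own module and registers itself under a site name; the router
# resolves the name at request time. Names are illustrative.

SCRAPERS = {}

def register(site: str):
    def wrap(cls):
        SCRAPERS[site] = cls
        return cls
    return wrap

@register("1337x")
class X1337:
    async def search(self, query: str):
        ...  # site-specific scraping lives only in this class

@register("piratebay")
class PirateBay:
    async def search(self, query: str):
        ...

def resolve(site: str):
    try:
        return SCRAPERS[site]()
    except KeyError:
        raise ValueError(f"unsupported site: {site}")

print(type(resolve("1337x")).__name__)
```

With this shape, a markup change on one site means editing exactly one class, and adding a 17th provider is one new file plus a `@register` line.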
Challenges
Constraints and challenges
- Sites go down, change their HTML, or add bot protection without notice — keeping 16 scrapers working is ongoing work.
- Each site returns different fields in different formats; normalizing them into one response shape required handling a lot of edge cases.
- Some sites only support search, others support trending or categories — the API has to handle missing features per provider.
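Handling missing features per provider can be done with capability flags, sketched below. The capability table and feature names are assumed for illustration; the idea is that the API checks what a provider supports before dispatching and returns a structured error otherwise.

```python
# Sketch of per-provider capability flags: not every site supports
# trending or categories, so the router checks before dispatching.
# The table below is illustrative, not the project's real data.

CAPABILITIES = {
    "1337x": {"search", "trending", "category"},
    "yts": {"search", "trending"},
    "magnetdl": {"search"},
}

def dispatch(site: str, feature: str):
    supported = CAPABILITIES.get(site, set())
    if feature not in supported:
        # Structured error instead of a crash or an empty 200.
        return {"error": f"{site} does not support {feature}",
                "supported": sorted(supported)}
    return {"site": site, "feature": feature, "data": []}

print(dispatch("magnetdl", "trending"))
```

Returning the `supported` list in the error also gives API consumers a way to discover each provider's features without reading the docs.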
Outcomes
Impact
- Reached 401 GitHub stars and 246 forks, reflecting wide reuse as an API wrapper.
- Supports 16 providers behind one interface, reducing client-side parsing complexity.
- Public docs and hosted API let developers test it immediately.