# Retrieving chapter images

# Basics

MangaDex uses a variety of methods to distribute page files, both to optimize end-user performance and to save on bandwidth on our side.

This is usually achieved through MangaDex@Home, our volunteer CDN, rather than by pulling directly through us. This typically results in lower latency for you and saves bandwidth for us.

MangaDex additionally offers 2 image quality modes:

  • data: Original quality - pixel-for-pixel accurate to how the image was originally sent to us
  • data-saver: Compressed quality - Large size savings at the expense of image quality

Every single chapter page on the website is available in both qualities. The data-saver mode is offered mainly for users who still have to suffer data caps in $CURRENT_YEAR.

# Howto

You need the ID of the chapter first. Then, by calling the GET /at-home/server/:chapterId endpoint, you'll get all required fields to compute your page URLs:

GET https://api.mangadex.org/at-home/server/:chapterId
| Field                | Type             | Description                          |
| -------------------- | ---------------- | ------------------------------------ |
| `.baseUrl`           | string           | A valid base URL                     |
| `.chapter.hash`      | string           | Chapter hash                         |
| `.chapter.data`      | array of strings | Ordered data-quality filenames       |
| `.chapter.dataSaver` | array of strings | Ordered data-saver-quality filenames |

The page URLs are then in the format

$.baseUrl / $QUALITY / $.chapter.hash / $.chapter.$QUALITY[*]

Important notes:

  • The validity of the base URL is limited in time. We guarantee 15 minutes; it could be more, could be less. If you use it after it has expired, you will get a 403 error, so call /at-home/server/:chapterId again to obtain a fresh one.
  • It is not literally /$.chapter.dataSaver[*]. That is a placeholder meaning (all) the elements (filenames) within the data/data-saver arrays, depending on the quality and pages you want.
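Putting the fields together, the URL construction can be sketched as follows. This is a minimal sketch using only Python's standard library; `fetch_at_home` and `page_urls` are illustrative helper names, and error handling is omitted:

```python
import json
from urllib.request import urlopen

API = "https://api.mangadex.org"

def fetch_at_home(chapter_id: str) -> dict:
    """Fetch the image-delivery metadata for a chapter."""
    with urlopen(f"{API}/at-home/server/{chapter_id}") as resp:
        return json.load(resp)

def page_urls(at_home: dict, quality: str = "data") -> list[str]:
    """Build page URLs as $.baseUrl / quality / $.chapter.hash / filename.

    quality is "data" or "data-saver"; the filenames live under the
    matching JSON key ("data" or "dataSaver" respectively).
    """
    base = at_home["baseUrl"]  # use as-is: do not parse or normalize it
    chapter = at_home["chapter"]
    files = chapter["data"] if quality == "data" else chapter["dataSaver"]
    return [f"{base}/{quality}/{chapter['hash']}/{name}" for name in files]
```

Since the base URL expires, fetch it shortly before loading pages rather than caching it long-term.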

# Example

Assuming chapter id: a54c491c-8e4c-4e97-8873-5b79e59da210.

# 1. Get the chapter's image delivery metadata

GET https://api.mangadex.org/at-home/server/a54c491c-8e4c-4e97-8873-5b79e59da210

{
  "result": "ok",
  "baseUrl": "https://uploads.mangadex.org",
  "chapter": {
    "hash": "3303dd03ac8d27452cce3f2a882e94b2",
    "data": [ … ],
    "dataSaver": [ … ]
  }
}

(The data and dataSaver arrays hold the ordered page filenames; they are truncated here.)

Important: Here, the base URL happens to be https://uploads.mangadex.org, but it could be anything else. Typically it will look very different if it is a MangaDex@Home node. Do NOT assume any format. It is not "a URL", it is not "a domain name", it is not "https:// followed by a domain name". It is a string, no more, no less. Just use it **as-is**.

# 2. Construct page URLs

The full URLs then follow the pattern:

DATA (source/original quality):

https://uploads.mangadex.org/data/3303dd03ac8d27452cce3f2a882e94b2/{filename from .chapter.data}

DATA-SAVER (compressed):

https://uploads.mangadex.org/data-saver/3303dd03ac8d27452cce3f2a882e94b2/{filename from .chapter.dataSaver}
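For instance, the data-quality URL of a single page can be assembled like this. The filename here is borrowed from the report example further down this page; your actual filenames come from the .chapter.data array:

```python
base_url = "https://uploads.mangadex.org"          # $.baseUrl, used as-is
chapter_hash = "3303dd03ac8d27452cce3f2a882e94b2"  # $.chapter.hash
# One entry of $.chapter.data (borrowed from the report example below):
filename = "2-2a5e95dfec7f15cd01f9a63835be18a22fb77a10fd2d62858c7dcbb6e6c622f9.png"

page_url = f"{base_url}/data/{chapter_hash}/{filename}"
```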
# MangaDex@Home, load successes, failures and retries

Sometimes, a request for an image will fail. There can be many reasons for that. Typically it is caused by an unhealthy MangaDex@Home server.

In order to keep track of the health of the servers in the network and to improve the quality of service and reliability, we need you to report successes and failures when loading images.

This is what the MangaDex@Home report endpoint is for. For each image you retrieve (successfully or not) from a base URL that does not contain mangadex.org:

  • Call the network report endpoint to notify it (see just below)
  • Call the /at-home/server/:chapterId endpoint again to get a new base url if it was a failure

But it failed and I still get the same server back!!

Then call the endpoint. If you don't, we cannot know that the server you got assigned to isn't working.
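The decision logic above can be sketched as follows. This is a minimal illustration; the step names returned are placeholders, standing in for the report POST and the /at-home/server/:chapterId call described on this page:

```python
def is_at_home_node(base_url: str) -> bool:
    # Reports are only expected for base URLs that do not contain
    # mangadex.org (i.e. MangaDex@Home volunteer nodes).
    return "mangadex.org" not in base_url

def after_image_load(base_url: str, success: bool) -> list[str]:
    """Return the follow-up steps prescribed above for one image load."""
    steps = []
    if is_at_home_node(base_url):
        steps.append("send_report")  # report successes AND failures
        if not success:
            # Ask /at-home/server/:chapterId for a fresh base URL.
            steps.append("fetch_new_base_url")
    return steps
```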

# The MangaDex@Home report endpoint

It is a POST request to https://api.mangadex.network/report (note that it's api.mangadex.network, **not** api.mangadex.org) as follows.

POST https://api.mangadex.network/report
Content-Type: application/json
| Field    | Type    | Description                                                                         |
| -------- | ------- | ----------------------------------------------------------------------------------- |
| url      | string  | The full URL of the image (including https://)                                       |
| success  | boolean | true if the image was successfully retrieved, false otherwise                        |
| cached   | boolean | true iff the server returned an X-Cache header with a value starting with HIT        |
| bytes    | number  | The size (in bytes) of the retrieved image                                           |
| duration | number  | The time (in milliseconds) that the complete retrieval (not TTFB) of the image took  |

Note 1: The Content-Type header must be exactly application/json. Note 2: It's api.mangadex.network, not api.mangadex.org.
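A minimal sketch of building and sending such a report with Python's standard library. Field semantics follow the table above; `build_report` and `send_report` are illustrative helpers, not an official client:

```python
import json
from urllib.request import Request, urlopen

REPORT_URL = "https://api.mangadex.network/report"  # note: .network, not .org

def build_report(url: str, success: bool, cached: bool, size: int, ms: int) -> dict:
    return {
        "url": url,          # full image URL, including https://
        "success": success,  # whether the image was retrieved
        "cached": cached,    # X-Cache header value started with "HIT"
        "bytes": size,       # 0 when there was no response at all
        "duration": ms,      # complete retrieval time in milliseconds, not TTFB
    }

def send_report(report: dict) -> None:
    body = json.dumps(report).encode()
    req = Request(REPORT_URL, data=body,
                  headers={"Content-Type": "application/json"})
    urlopen(req).close()  # fire-and-forget; the response body is not needed
```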

# MangaDex@Home report examples

Let's assume that for the example chapter above your base url was: https://foo.bar:5678/abcdef/1a2b3c4d.

# Success
POST https://api.mangadex.network/report
Content-Type: application/json
  "url": "https://foo.bar:5678/abcdef/1a2b3c4d/data/3303dd03ac8d27452cce3f2a882e94b2/2-2a5e95dfec7f15cd01f9a63835be18a22fb77a10fd2d62858c7dcbb6e6c622f9.png",
  "success": true,
  "bytes": 674687,
  "duration": 235,
  "cached": true
# Failure
POST https://api.mangadex.network/report
Content-Type: application/json
  "url": "https://foo.bar:5678/abcdef/1a2b3c4d/data/3303dd03ac8d27452cce3f2a882e94b2/2-2a5e95dfec7f15cd01f9a63835be18a22fb77a10fd2d62858c7dcbb6e6c622f9.png",
  "success": false,
  "bytes": 25,
  "duration": 235,
  "cached": false

N.B.: On a failure that doesn't result in any response (connection failure, bad SSL certificate, ...) just put 0 for bytes.

# About hardcoding base URLs

Damn, I wish I could load images slower, waste MangaDex's bandwidth, and get IP banned! But how?

Hardcoding your base URL is a solid approach!

First of all: Don't. The dynamic URLs we return from the /at-home/server/:chapterId endpoint are almost always optimized for your geographic location, so hardcoding is typically just a dumb thing to do outside of basic prototyping of your project. We also have stricter rate limits on those, etc.

But if you think that it is required for your use-case, feel free to explain your use-case in the #dev-talk-api channel on our Discord, and maybe we can figure something out.

You're probably gonna do it anyway if you were planning to. Just don't complain when it screws you over. Or do, we just won't particularly care.