• parpol@programming.dev
    11 months ago

    Their website isn’t properly caching pages which is the real reason they’re having problems.

  • Tag365@lemmy.world
    10 months ago

    So why doesn’t a random follower posting a link on Mastodon cause server load issues, but a popular follower does?

  • cbarrick@lemmy.world
    11 months ago

    Just put the site behind a cache, like Cloudflare, and set your cache control headers properly?

    They mention that they are already using Cloudflare. I’m confused about what is actually causing the load. They don’t mention any technical details, but it does kinda sound like their cache control headers are not set properly. I’m too lazy to check for myself though…
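If the headers really are the problem, the fix is roughly this: mark preview/metadata responses as publicly cacheable with a generous shared-cache lifetime, so a stampede of fediverse servers hits the CDN instead of the origin. A minimal sketch, assuming a Python backend; the function name and defaults are illustrative, not any real project's API:

```python
# Hypothetical sketch: Cache-Control headers that let a shared cache
# (e.g. Cloudflare) absorb a flood of link-preview fetches.

def preview_headers(max_age: int = 3600, s_maxage: int = 86400) -> dict:
    """Build response headers for a link-preview/metadata endpoint.

    - max-age: how long browsers may reuse the response
    - s-maxage: how long shared caches (the CDN) may serve it without
      revisiting the origin -- this is what stops the stampede
    """
    return {
        "Cache-Control": f"public, max-age={max_age}, s-maxage={s_maxage}",
    }
```

With `s-maxage` set, thousands of instances requesting the same page within a day should produce one origin hit, assuming the CDN is actually allowed to cache the URL.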

  • Sean Tilley@lemmy.world
    11 months ago

It’s an interesting and frustrating problem. I think there are three potential ways forward, but they’re all flawed:

1. Quasi-Centralization: a project like Mastodon or a vetted non-profit entity operates a high-concurrency server whose sole purpose is to cache link metadata and images. Servers initially pull preview data from that, instead of the direct page.

    2. We find a way to do this in some zero-trust peer-to-peer way, where multiple servers compare their copies of the same data. Whatever doesn’t match ends up not being used.

    3. Servers cache link metadata and previews locally with a minimal amount of requests; any boost or reshare only reflects a proxied local preview of that link. Instead of doing this on a per-view or per-user basis, it’s simply per-instance.

    I honestly think the third option might be the least destructive, even if it’s not as efficient as it could be.
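The third option boils down to a per-instance cache with a TTL: the instance fetches a link's metadata at most once per expiry window, and every boost or reshare serves the locally proxied copy. A minimal sketch of that idea; the class name, TTL, and `fetch` callback are all hypothetical, not drawn from any existing fediverse codebase:

```python
# Hypothetical sketch of option 3: cache link previews per instance,
# so N users boosting a post cause one outbound request, not N.
import time


class InstancePreviewCache:
    def __init__(self, ttl: int = 86400):
        self.ttl = ttl
        self._store = {}  # url -> (expires_at, metadata)

    def get(self, url: str, fetch):
        """Return preview metadata for url.

        Calls fetch(url) at most once per TTL for the whole instance;
        repeat views, boosts, and reshares hit the local copy.
        """
        now = time.monotonic()
        hit = self._store.get(url)
        if hit and hit[0] > now:
            return hit[1]
        metadata = fetch(url)  # the only outbound request per TTL window
        self._store[url] = (now + self.ttl, metadata)
        return metadata
```

The key design point is the cache key: it is the URL alone, not (URL, user) or (URL, view), which is exactly the per-instance scoping described above.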