Hey! Hope you're enjoying your long weekend, Admin!
Real quick, I've gotta say: when I step back and think about you first conceiving and building this "mirror" (it's so much more than that to us users!) and now maintaining this gem, I find myself humbled. Just a quick shout-out and thanks for your work, and also your patience. 😉
On the series page (https://animetosho.org/animes), is there a way to filter skipped entries in the same way as old entries? (I tried filter_skipped=1 just in case it was undocumented, but it didn't work.)
If not, would it be something you'd be willing to consider adding?
It's weird to use a Gmail address for business when you "work" for a "marketing company". Why not use your own domain for email? Getintoway looks suspicious, and that link is insecure according to Chrome.
https://adfinity.buzz/blog/navigating-...actices/82 is the latest blog post (from 2023), and it looks like AI-generated nonsense, or just a stolen Photoshop tutorial that has nothing to do with the title or the "business".
I see, but wouldn't it make sense to discard the cache if the oldest item in the feed is older than the cached items? And if it's newer, the next page of the feed should be fetched, as there could be better entries that have been missed. Basically, the only case where I see the cached entries making sense is if a feed doesn't support paging (which isn't an issue here).
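In pseudo-code, the rule I have in mind would be something like this (field names and structure are made up; this is not how the *arr apps actually implement it):

```python
# Hypothetical reconciliation rule (made-up field names; not actual *arr code).
# cached: previously seen candidate releases, each with a "published" timestamp.
# feed:   releases currently visible in the indexer's RSS feed.

def reconcile(cached, feed, fetch_next_page):
    oldest_in_feed = min(item["published"] for item in feed)

    if all(entry["published"] >= oldest_in_feed for entry in cached):
        # The feed still covers everything that was cached, so the feed alone
        # is authoritative and the cache can simply be discarded.
        return list(feed)

    # Some cached entries have already dropped off the feed, so page further
    # back; any better entries that were missed will show up there as well.
    return list(feed) + fetch_next_page()
```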
Unfortunately this has the same issue. The 'deleted' flag is present on the webserver (animetosho.org) whilst the NZB is hosted on a secondary server (storage.animetosho.org). The two servers are disconnected from each other, so there's no way for the secondary server to know that the entry has been marked as deleted.
Unfortunately, there's just no easy way to invalidate the original URL (other than deleting the file, which I don't want to do).
I am Evelyn from Adfinity, reaching out with an offer to boost your website revenue using non-intrusive ads (download button and pop-under) that integrate seamlessly. We promise competitive average CPM rates ($20-$60/day), flexible daily, weekly, or monthly payouts, 24/7 support, global traffic monetization, and a live stats dashboard. You should check out one of our client examples (Getintoway.com). We've provided multiple contact options to discuss a potential partnership.
For Anime Tosho, the 75-item RSS feed limit may work with reasonable Delay Profiles, but other indexers with more releases can lose the best release before the delay ends. This is why *arr apps cache links, and their maintainers likely won’t change this.
If renaming or deleting NZBs isn’t viable, please consider blocking downloads of deleted release NZBs for clients with Sonarr, Radarr, Prowlarr, or NZBHydra2 user agents.
The cache is required because new releases can flood the RSS feed during the delay, potentially pushing the best release out of the feed’s size limit before the delay ends.
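To put rough numbers on that (purely illustrative, not measured): with a 60-minute delay and an indexer publishing around 100 releases per hour, a 75-item feed would cycle through completely before the delay expires, so a release found at the start of the window would no longer be present in the feed when the grab is finally triggered; that is what the cache protects against.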
Thanks for the explanation! I don't quite get it though - if it's checking for "better" versions, it needs to make another query. And if it's making another query, it'll know if something has been deleted. So I don't understand why it needs to cache anything.
The NZBs are keyed by ID, so they can't really be renamed easily. It's a possibility, though I don't look too fondly on such a change.
Sonarr and Radarr have "Delay Profiles." This feature makes them wait for a set time (e.g., 60 minutes) after finding a matching release before starting the download. The idea is to allow time for potentially better quality versions to be uploaded. During this period, download links to suitable releases are cached.
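To illustrate the behaviour (a deliberately simplified sketch with invented names; this is not Sonarr/Radarr's actual code):

```python
import time

DELAY_MINUTES = 60      # wait this long after first seeing a release (example value)
candidates = {}         # release_id -> (score, download_url, first_seen); the "cache"

def on_feed_sync(feed_items, now=None):
    """Called on every RSS poll; remembers (caches) candidate download links."""
    now = time.time() if now is None else now
    for item in feed_items:
        candidates.setdefault(item["id"], (item["score"], item["url"], now))

def pick_release(now=None):
    """Once the delay has elapsed, grab the best cached link, even if that
    release has since been pushed out of the RSS feed (or deleted upstream)."""
    now = time.time() if now is None else now
    ready = [c for c in candidates.values() if now - c[2] >= DELAY_MINUTES * 60]
    return max(ready, key=lambda c: c[0])[1] if ready else None
```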
Invalidating the original download URL upon deletion (perhaps by renaming the target file like some-release.nzb.gz to some-release.nzb.gz.deleted) is an equally effective solution without requiring the removal of the NZB files.
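For example, something along these lines on the storage side (the path and naming scheme are made up, just to show the idea):

```python
from pathlib import Path

STORAGE_ROOT = Path("/srv/nzb")   # made-up path; assumes NZBs stored as <id>.nzb.gz

def invalidate_nzb(release_id: str) -> None:
    """Rename the stored file so the original download URL starts returning 404,
    while the NZB itself is kept on disk."""
    original = STORAGE_ROOT / f"{release_id}.nzb.gz"
    if original.exists():
        original.rename(original.with_name(original.name + ".deleted"))
```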
I don't use those applications, so don't really understand the issue. I would've thought that the application makes queries for the latest stuff, then downloads them. So what's being cached? Old search results? If so, why are those cached?
I don't really want to remove the NZB files, as they're still valid uploads.
I don't really get what you're trying to say, but you can restrict a search to entries tagged for a particular series if you tick the option whilst browsing that series.
Could you please look into removing the NZB files for deleted releases? It seems they currently remain active, which can cause *arr users with cached entries to unintentionally download them.
I noticed something when I looked up a group's releases for a single show (**Example). If the search is performed from a show's page, it returns only what a group has released for that show, without needing to add the show's name to the search. Whether it was a fluke and I stumbled onto it, or the search parameter was recently added, it's an excellent and efficient way to save time searching. However, something weird also happened: after the focused search was made, a check box for making the same focused query appeared under the search box (left side of the screen). Never a dull moment on AT.
(**ex: Anime Tosho Home » Star Blazers Space Battleship Yamato 3199 » [Group name] )
There you stated irefresh-type 1 was used, but the seek time is very fast; how did you achieve that?
My open-GOP encodes act like this: start the video (~24 min) and seek to the middle (~12 min), and it takes ~30 s for VLC/mpv to truly change the scene. But yours behaves like closed GOP. What's the secret, magician?
Also, please provide the svt-psy version, because I cannot recreate the image quality. I used this as a source: https://nyaa.land/view/1876552 (av1an 0.4.4-unstable, SVT-AV1-PSY v2.3.0-B-0+17-b834bf13).
There's no stated rate limit. I request that clients be "reasonable" with the number and rate of requests sent. For "reasonable", consider the page browsing activity of an average user.
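As a purely illustrative sketch (the interval is an arbitrary example, not an official limit), a minimal client-side throttle could look like this:

```python
import time
import urllib.request

MIN_INTERVAL = 5.0   # seconds between requests; an arbitrary illustrative figure,
_last_request = 0.0  # not an official limit

def throttled_get(url: str) -> bytes:
    """Fetch a URL, but never more often than once per MIN_INTERVAL seconds."""
    global _last_request
    wait = MIN_INTERVAL - (time.monotonic() - _last_request)
    if wait > 0:
        time.sleep(wait)
    _last_request = time.monotonic()
    with urllib.request.urlopen(url) as response:
        return response.read()
```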
Hopefully that helps.
(I don't manage or check the Discord server, so you probably won't get responses for these sorts of questions there)
I opened a ticket on Discord for this as well, but that didn't receive a response, so I'm trying here too :D What's the current rate limit for API queries? I'd like to implement the limit client-side to avoid requests being blocked.