As an above poster points out, it looks like Free isn't wrapping quotes around the filename they send, which causes an error in Chrome and a weirdly named download in Firefox (I'm guessing the file is valid, just the name is weird). A different browser may give different results (like the poster above trying HR's version), but regardless, it's a spec violation on Free's side. I can work around the issue by stripping out commas in filenames when uploading to Free.
I wasn't able to replicate an issue with ClickNUpload though. It looks like they send a double Content-Disposition header, which some browsers may not like, but this seems to be the case regardless of commas in the name. Firefox doesn't seem to have a problem downloading the file though.
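For reference, the header issue described above comes down to quoting: per RFC 6266, a `Content-Disposition` filename parameter containing commas, spaces or semicolons needs to be wrapped in double quotes, or strict parsers will cut the name short. A minimal sketch of a correct builder (the hosts' actual server stacks are unknown to me, so this is purely illustrative; fully non-ASCII names would additionally need the RFC 5987 `filename*` form):

```python
def content_disposition(filename: str) -> str:
    """Build an RFC 6266-style Content-Disposition value with the
    filename quoted, so commas (and spaces/semicolons) survive.
    Embedded backslashes and quotes are escaped per the quoted-string
    grammar."""
    escaped = filename.replace("\\", "\\\\").replace('"', '\\"')
    return f'attachment; filename="{escaped}"'

# Unquoted, a comma can terminate the parameter early in strict
# parsers; quoted, the full name comes through intact:
print(content_disposition("Kumo desu ga, Nani ka - 01.mkv"))
```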
Using ep7, downloading the HR version with Free, and the Ember version with ClicknUpload was fine on my laptop. Both versions have commas.
I suppose you can try other hosts. If your native alphabet is different, there could be some kind of non-ASCII problem. Or, if you're using the same anti-virus or ISP on both devices, one of those could be interfering.
I have a weird problem that only started a couple of weeks ago with Free and ClicknUpload. It only seems to affect this series specifically, from what I watch: whenever I download "Kumo desu ga, Nani ka" via them, I either get a broken download webpage on my phone and no download, or it just gives me a Windows file if I do it on desktop. I have a feeling the comma in the filename is causing problems.
Comment in Feedback 19/02/2021 15:02 — Anonymous: "Fallon"
I'd like to think this is the kind of thing I'd build if I had the skills... but since I don't, I'm glad someone else dreamed it up. You're doing the lord's work here, thank you.
The cache period is set to 5 minutes, so there's little point in querying more frequently, as you'll just get the same result. Personally, I think 5 minutes is already more frequent than most people need, but I'll let you be the judge there.
I generally don't trust people to follow any suggestion I'd make for an "acceptable refresh interval", hence the enforced caching period.
I see, thanks for pointing that out. Hah, talk about poor documentation. I can look at implementing some of those parameters (though maxage is less useful if sorting by date).
Has Search here always been limited to 15-minute updates, or are you getting squeezed by all the other stuff most of us will never use, like NZB and RSS? Your own search seems more integral to the site.
Interesting quirk: reloading the Search page only refreshes the left-banner system clock every 15 minutes as well.
Ah I see what happened here, that particular readthedocs page seems to have omitted the Optional Parameters section on search for some reason. It's still present in the document if you click "View page source" in the top right.
Every other raw version of the API spec that I can find has it as well.
I don't believe it ever was supported, and Newznab doesn't define that parameter for search anyway. They only define it for movie-search, music-search and book-search, none of which is supported here.
Ah, I wasn't sure if you modified torrents or not, since Nyaa does (which I consider shit), so it's fully understandable why you wouldn't do it. I've just been going around asking for WebTorrent support and have already got a few uploaders to add wss trackers, so it's great that they aren't stripped. Thanks for the quick reply and implementation! <3
Thanks for the suggestion. I've added CORS headers to feed.animetosho.org and storage.animetosho.org.
AT is currently neutral in that trackers are passed through, as is. As such, I'm somewhat unwilling to modify torrents at this stage. I think it'd be better to get the uploaders to add Webtorrent trackers to their torrents, and/or get Nyaa/Anidex to support/enforce it on their side. If support is added there, it'll naturally work here (maybe the seeder/leecher scraper would need to get updated, but that's about it).
Hey, now that WebTorrent is slowly becoming a thing, will Tosho add support for it? It's mostly really easy one-line stuff, like adding a wss tracker to torrents and enabling CORS only on RSS feeds.
It's behaving normally for me: Ads blocked and just mkv files downloading. Can you provide a link to the AT page that produced an exe download for you?
Otherwise, perhaps you can update your blocker. uBlock Origin has been recommended by several users here.
"[SubsPlease] Urasekai Picnic - 03" seems to be completely missing even though entries dated just before/after are correctly present. Am I blind or is this just a scraper glitch?
Hi admin -- Thanks so much for making Animetosho available.
I download through Usenet (Giganews with the Usenet Explorer application), and in the past month or so almost all posts have been incomplete, in the range of 95-99%. My request, therefore, is to add more PAR2 data to your Usenet posts. Doubling your current default PAR2 size should be sufficient to allow the data to be decoded. Thanks again!
That sounds exhilarating. ;) ...but don't tell anyone else that.
(I'm just going with what was stated in the first episode. I know there's the ending song, but "Kumoko" (Miss Spider?) doesn't seem like much of a name either. Debatable I suppose. Also avoiding any info beyond episode 1 of the anime)
does the search engine/DB only return the subset/limit?
Yes, a LIMIT clause is being used.
Is it more expensive to filter the limit at the search engine/DB level?
No, it's generally best to do the filtering on the DB side.
For example, if I query for a series that has 150 items and the limit is 50, then the script calls for the search engine to find all 150 items.
The problem is if there happens to be 100,000 items - trying to pull these off disk into memory isn't going to be performant. With a limit set to 50, you only need to read 50 items off disk instead of 100k.
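The point above can be made concrete with a toy database: put 100,000 rows in a table and ask for one page of 50, and the engine stops once the page is full instead of shipping every row to the application. A minimal sketch using SQLite (stand-in for the actual search backend, whose schema is not shown here):

```python
import sqlite3

# Build a toy table of 100,000 rows in memory
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany(
    "INSERT INTO items (title) VALUES (?)",
    ((f"item {i}",) for i in range(100_000)),
)

# Filtering on the DB side: only 50 rows are materialised and
# transferred, not 100,000
page = conn.execute(
    "SELECT id, title FROM items ORDER BY id LIMIT 50"
).fetchall()
print(len(page))
```

The alternative (fetching everything and slicing in the application) costs memory and I/O proportional to the full result set, which is exactly the 100,000-item scenario described above.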
On closer look, I think you can use the sphinx_total_found meta attribute.
Thanks for finding that. I'm not sure why my searches didn't find the SHOW META query. This looks feasible, but will only work if a search is being performed (which is probably all that these apps care about anyway).
I've added it to the API, so hopefully it starts showing up now (cached result sets may take a while to clear). Thanks for the tip!
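For reference, the pattern being discussed is Sphinx's paired-query idiom: run the `SELECT ... LIMIT n`, then immediately issue `SHOW META` on the same connection and read `total_found` for the true match count. A hedged sketch; the index and column names are invented, and only the `SHOW META` step is actual Sphinx behaviour:

```python
def search_with_total(cursor, keywords, limit=50):
    """Run a full-text search, then read the engine-reported match
    count from SHOW META's total_found. Works with any DB-API-style
    cursor connected via SphinxQL."""
    cursor.execute(
        "SELECT id FROM torrents_index WHERE MATCH(%s) LIMIT %s",
        (keywords, limit),
    )
    rows = cursor.fetchall()
    # SHOW META must run on the same connection, right after the SELECT
    cursor.execute("SHOW META LIKE 'total_found'")
    (_, total_found), = cursor.fetchall()
    return rows, int(total_found)
```

This is why the count only exists when a search is actually performed: `SHOW META` reports on the previous query, so a plain non-search listing has no `total_found` to expose.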
20/02/2021 02:01 — admin