Latest Comments

Comment in Feedback
05/08/2020 23:41 — Anonymous: "LastExile"
Found something that's part of Fate/Grand Order out of place and thought you might want to give it the home you made for it.
-->Manga de Wakaru! Fate/Grand Order
https://animetosho.org/series/manga-de...rder.14579

-->Searched using these words: Learning with Manga
https://animetosho.org/view/edo-learni...v.n1109116
https://animetosho.org/view/edo-learni...4.n1109087
The rest is up to you. Thanks!
Comment in Feedback
05/08/2020 12:22 — Anonymous
Your time is incorrect; please correct it.
Comment in Feedback
05/08/2020 02:16 — Anonymous
Stop over and swallow my juice and I will think about buying your cheap sunglasses
Comment in Feedback
05/08/2020 02:06 *Anonymous: "Bryon"
[spam]
Comment in Feedback
03/08/2020 15:00 — geha714
Apparently the site is not updating the latest Nyaa releases (unless they're posted in TT)

https://nyaa.si/download/1268330.torrent
https://nyaa.si/download/1268331.torrent
Comment in Feedback
23/07/2020 10:53 *admin
Pre-2016, mplayer was used to render the video and save out PNG images.

Now it works by extracting frames and rendering them as separate steps. Frame extraction is done using ffmpeg, subtitle rendering via VapourSynth, and image rendering with PyAV (a libav* wrapper). There's a more detailed write-up on how it works here.
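
For the curious, here's a minimal PyAV sketch of grabbing a single frame as a PNG (the filename and timestamp are made up, and the real pipeline splits extraction and rendering into separate steps as described above):

    import av  # PyAV, the libav* wrapper mentioned above

    container = av.open("episode.mkv")
    container.seek(300 * 1_000_000)  # default seek offset is in microseconds (AV_TIME_BASE)

    # Decode the next video frame and save it via Pillow.
    for frame in container.decode(video=0):
        frame.to_image().save("screenshot.png")
        break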
Comment in Feedback
23/07/2020 08:59 — Anonymous
Mplayer, I think they said.
Comment in Feedback
23/07/2020 07:36 — Anonymous
What is the tool that Anime Tosho uses for taking screenshots of videos?
Comment in Feedback
22/07/2020 00:45 — admin
The main issue with rolling CRC is that it's relatively slow (and problematic if it finds too many false matches), so it's often restricted. By default, par2cmdline will only do rolling CRC checks for 64 bytes, so it only really works for small movements. Since this is all very custom though, you could just increase the limit, at the expense of processing speed.
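
To illustrate what that rolling search involves, here's a sketch using an rsync-style weak checksum in place of the CRC (par2cmdline rolls a CRC32 the same way, confirming weak hits against a proper hash); the names and block size are illustrative:

    import hashlib

    BLOCK = 4096  # illustrative PAR2 block size

    def weak_sum(data):
        # rsync-style rolling checksum (stand-in for a rolling CRC32)
        a = sum(data) & 0xffff
        b = sum((len(data) - i) * x for i, x in enumerate(data)) & 0xffff
        return a, b

    def find_blocks(data, known_blocks):
        # Slide a window one byte at a time; only weak-sum hits pay for
        # a strong hash check. The per-byte rolling over the whole file
        # is exactly why an unrestricted search is slow.
        index = {weak_sum(blk): hashlib.md5(blk).digest() for blk in known_blocks}
        a, b = weak_sum(data[:BLOCK])
        for off in range(len(data) - BLOCK + 1):
            strong = index.get((a, b))
            if strong and hashlib.md5(data[off:off + BLOCK]).digest() == strong:
                yield off  # known block found at an arbitrary offset
            if off + BLOCK < len(data):
                out_b, in_b = data[off], data[off + BLOCK]
                a = (a - out_b + in_b) & 0xffff
                b = (b - BLOCK * out_b + a) & 0xffff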

Selecting a compression block size could be difficult. In general, you want it to be large to maximise efficiency and minimise how often it straddles PAR2 blocks, but not so large that small changes require lots of recovery data.

The data is in TSV format. Padding is rather unusual there, but I suppose not impossible. I don't really get the aim of it though, since compression would eliminate any padding you add to uncompressed data.

> output something that can be processed with standard tools that someone else maintains because they're useful elsewhere.
Do you have an example of such a standard tool which can handle the scheme you describe?
Comment in Feedback
21/07/2020 18:46 — Anonymous
> the compressed blocks won't be 4KB - their sizes will vary (hence they won't align to some PAR2 block boundary)
So long as the changes to the output are more than a block-length apart, par2 will find a block anyway, because it uses a rolling CRC to look at one-byte intervals for candidate blocks (which it then tests against a proper hash that's not CRC32). So long as you're resetting the Huffman table at a deterministic place (for example the nth-id'd INSERT statement), it doesn't matter to PAR2 that this new block isn't located at a block-length-multiple offset, just that there's at least a block's worth of unchanged compressed output. What would trip up PAR2 is changes happening across the file at a (compressed) distance of less than a block length.

>Also, later changes can affect the output of earlier bytes in the block
This is true; it's not a design goal that changes inside a block are confined to a subset of the compressed block, merely that two compressed blocks are independent of each other.

Admittedly I've not looked at a dump (because they're huge), but if they're standard [My]SQL dumps, then you could sort the INSERT statements by their primary key (if they're not already), then pad every nth id with a comment to align it to an mKB boundary. The idea would be to constrain all the bespoke code to the server side, and output something that can be processed with standard tools that someone else maintains because they're useful elsewhere.
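
A rough sketch of that padding idea, padding after every statement for simplicity (the function and block size are made up):

    BOUNDARY = 4096  # the "mKB" alignment target

    def pad_dump(statements):
        # After each statement, emit an SQL comment sized so the next
        # statement starts on a BOUNDARY-aligned offset.
        out = bytearray()
        for stmt in statements:
            out += stmt
            gap = -len(out) % BOUNDARY
            if 0 < gap < 4:       # too small for "-- \n"; overshoot a block
                gap += BOUNDARY
            if gap:
                out += b"-- " + b"#" * (gap - 4) + b"\n"
        return bytes(out)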
Comment in Feedback
21/07/2020 00:17 — Anonymous
Go back to Nyaa.
Comment in Feedback
19/07/2020 22:52 — admin
Ahh, I see - if you're looking to make some end-user application, database dumps aren't the right solution.
There's currently no database index on this sort of information, but if you can write up exactly what you want, I can help look into it for you.
Comment in Feedback
18/07/2020 17:06 — Anonymous
If you can't do anything about it then it's none of your business.  Start your own website.
Comment in Feedback
17/07/2020 20:20 — Anonymous
uptobox blocked all US IPs
Comment in Feedback
17/07/2020 20:12 — Anonymous
please clean up this whole page
Comment in Feedback
17/07/2020 20:10 — Anonymous
Donate to JPDDL or start one yourself.
Comment in Feedback
17/07/2020 20:09 — Anonymous
please clean up this section.
Comment in Feedback
17/07/2020 20:09 — Anonymous
please clean up this section
Comment in Feedback
17/07/2020 15:45 *Anonymous: "Casimira Parrott"
[spam]
Comment in Feedback
16/07/2020 23:27 — Anonymous
When torrents are deleted on Nyaa they auto-delete here on update checks.
Comment in Feedback
16/07/2020 23:00 — Anonymous
Someone attempted to distribute a potentially unwanted program on https://animetosho.org/view/horriblesu...p.n1263249
It would be best to remove the entry from this website.
Comment in Feedback
16/07/2020 12:45 — Anonymous
Thanks for considering an API. I'm just looking for a way to search for subtitles; an API that lists torrent entries that have subtitles for a given AniDB ID would be nice. If you're OK with hosting an API for that, I could make it.
Comment in Feedback
13/07/2020 03:28 — admin
If you happen to see this: I'm not sure exactly what is required, but in terms of numerical categories, 5070 is the anime category (and the only one that gets served here).
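Assuming you're setting this up as a Torznab/Newznab-style indexer in Sonarr, that would mean putting 5070 into both the "Categories" and "Anime Categories" fields; the underlying query would then look something like ?t=search&cat=5070&q=<term> (the endpoint path depends on the indexer URL you configured).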
Comment in Feedback
12/07/2020 21:29 *Anonymous: "Jacklyn Leboeuf"
[spam]
Comment in Feedback
12/07/2020 16:58 — Anonymous
Die from neck pain
Comment in Feedback
12/07/2020 12:48 *Anonymous: "Ezra Toosey"
[spam]
Comment in Feedback
09/07/2020 15:27 — snvcjeen3u4
You're probably right. Thank you kind sir or madame :)
Comment in Feedback
09/07/2020 15:26 — Anonymous
A Sonarr forum might be able to help you.
Comment in Feedback
09/07/2020 05:59 — snvcjeen3u4
Hey guys!

Completely new to the game, and I'm wondering how I would go about adding Anime Tosho to my Sonarr as an indexer? I have no idea what I should enter in the "categories" field, and all my searches with Anime Tosho on Sonarr return no results (even though the episodes exist on this site). Could anybody tell me which numbers to add to "Categories" and to "Anime Categories"?
Comment in Feedback
09/07/2020 00:55 *admin
Thanks for the explanation. That sounds like a typical scheme where compression is broken into blocks (or zlib full flushes are used periodically).
I think your understanding may be a little incorrect though. If you break the input into 4KB chunks and compress them separately, the compressed blocks won't be 4KB - their sizes will vary (hence they won't align to some PAR2 block boundary).
Also, later changes can affect the output of earlier bytes in the block - a change in byte 2203 can affect the entropy used to encode the first bytes of that block.
This approach also degrades compression efficiency.
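
A quick way to see the size variance, using zlib directly:

    import os, zlib

    a = len(zlib.compress(os.urandom(4096)))  # incompressible: a bit over 4096
    b = len(zlib.compress(b"A" * 4096))       # repetitive: a few dozen bytes
    print(a, b)  # neither output lands on a 4KB boundary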

I probably should also mention this: it doesn't sound like there's any standard tool for doing this either, which means that someone would have to write it. And even then, it's going to be a fairly custom setup that few are going to adopt.
Engineering a solution may be interesting, but I don't really want to over-complicate the export process (it makes things more likely to fail, takes effort, etc.).
To be brutally honest, I don't actually see the size as a big issue. Currently the total size of all dumps is 250MB, which is smaller than most video files on offer here. Even if you downloaded them every day, that's only 7.5GB/month. Now I understand that some people have more restricted internet and the like, but then I question what use you would have in getting full dumps every day. However, I'm happy to be corrected here.
May I ask if you're trying to achieve something in particular with these dumps? Maybe there's another way.

If you're willing to develop something custom, I might suggest just scraping feeds or the like for data. Alternatively, if you're willing to develop some API which can query a MySQL database for the data you want, I may consider hosting it.

Thanks again for the suggestions, and hope that this is of some value.
Comment in Feedback
08/07/2020 18:25 — Anonymous
How about incremental snapshots?
Like one full dump followed by one snapshot for each day, limited to x days.
To keep in sync, check whether the local version is older than x days: if it is, get the full dump; if not, get the snapshots since the last local sync.
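
Client-side, that check could be as simple as (the names and 7-day window are placeholders):

    from datetime import date, timedelta

    def plan_sync(last_sync: date, today: date, max_days: int = 7):
        # Hypothetical client logic for the scheme described above.
        age = (today - last_sync).days
        if age > max_days:
            return ["full-dump"]  # too stale: refetch everything
        return [f"snapshot-{last_sync + timedelta(days=d)}"
                for d in range(1, age + 1)]  # just the missed daily deltas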
Comment in Feedback
08/07/2020 11:28 — Anonymous
So in principle, if the format is laid out chronologically and one-file-per-table, the changes will all be at the end, and the start of each large file should always be the same, and compress the same.
Comment in Feedback
08/07/2020 11:25 — Anonymous
"rsync-compatible" is a mode where it resets the dictionary every so often, so there's a finite limit to how far changes propagate, for example if the interval is 4096, and byte 2203 changes, this changes the compressed stream for bytes 2204-4095, but byte 4096 will be the same.
Comment in Feedback
06/07/2020 02:46 — Anonymous
Are you asking about a note from 2011?!
Comment in Feedback
06/07/2020 02:43 — Anonymous
"Expired links do NOT get reuploaded as files are deleted after we process them"
Comment in Feedback
06/07/2020 01:38 — Anonymous
Does this link checker ping files so they don't expire? That might be better than re-uploading constantly.
Comment in Feedback
05/07/2020 07:33 *admin
Posting here generally works :)
If you absolutely cannot post here for whatever reason, you can use this page instead.
Comment in Feedback
05/07/2020 07:32 — admin
If you just want stuff without subtitles, you can download the soft-subbed files here and remove the subtitles. Technically, many of the webrips here are "raw" in the sense that they're straight rips from the source.
If you're looking for BD/DVD rips, your choices might be limited, due to them being less popular and expensive to deal with (large file sizes). I recall jpddl.com used to offer raws, but don't know how usable they are now.
Comment in Feedback
05/07/2020 07:22 — admin
Thanks for the suggestion.
I'm not sure what you mean by "rsync-compatible compression" (rsync just uses zlib for compression), but PAR2 only really works well for corruption-type situations where the data isn't shifting around much.
I suppose dumps often do contain the same data across days, so it could work if the PAR2 was generated at 100% redundancy, but it is quite involved (i.e. a few manual steps are required). The other problem is that you can't use compression at all.
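To put rough numbers on it: with ~250MB of dumps split into 4KB PAR2 blocks at 100% redundancy, a client whose older copy differs in 1% of the blocks would only need to fetch roughly 2.5MB of recovery blocks to rebuild the current dump.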
Comment in Feedback
05/07/2020 02:17 — Mayor023
Is there any email that I can contact for this site? Thanks.
Comment in Feedback
04/07/2020 22:26 — Gurphy_TC
Mixing categories doesn't work; it would be better done as a separate site. And we're already operating at our chosen bandwidth level.

I can't think of any raw ddl sites, sorry.
Comment in Feedback
04/07/2020 20:33 — Anonymous: "Michael"
Please add raw anime / does somebody know a website like animetosho.org (with multiple download links) that provides raw anime?
Comment in Feedback
04/07/2020 03:57 — Anonymous
If the archive is done with rsync-compatible compression, then generating a PAR2 set would do. Download the .par2, feed in the old dump, and out comes how many blocks you need to download to get the current dump.
I've not downloaded or looked at the underlying data, but if it's a series of table dumps, I think each table would need to be in its own file or padded to PAR2 block-size alignment.
Comment in Feedback
03/07/2020 11:21 — admin
Added Solidfiles again - thanks for the update!
Comment in Feedback
02/07/2020 17:14 — Anonymous: "Anon"
Solidfiles was unresponsive for about 10 days, but now it's back again and working well.
Please enable Solidfiles again, admin.
Comment in Feedback
28/06/2020 19:55 — Anonymous
Stylus is definitely preferred for its anti-tracking measures / anonymity.
Comment in Feedback
28/06/2020 11:58 — admin
Firstly, it's nice to know that they're actually being used.
I'm not too sure how to do this nicely. rsync is problematic for public distribution. Patches might be possible, if performance is reasonable on dump files, but it'd only work if you always have the latest files.
A 'workaround' solution may be just to download them less frequently, or at least for the 'files' table.

Suggestions welcome though!
Comment in Feedback
28/06/2020 11:54 — admin
I haven't ever accepted any style submissions here, but it's not out of the question.
If you're familiar with CSS, you can edit the style code and apply it using a plugin like Stylus or Stylish.
There are two custom styles posted here, and if you create one yourself, you could post it there too.
Comment in Feedback
25/06/2020 03:03 — Anonymous
The database export files are huge; could you perhaps provide an rsync-like method for getting only the changes?
Comment in Feedback
24/06/2020 15:56 — Anonymous
You can create your own style, just for your machine, using your browser's tools. No AT involvement is required. AT styles already cover most of the spectrum for most people.