Anyone else getting a "deceptive site ahead" warning when accessing file links directly from Jheberg through AT?
Been happening for a month with multiple browsers. Before that everything worked fine.
Basically, for new anime listings, once the Jheberg host appears it can take hours for the file hosters to show up under Jheberg on AT. But you can get to the file hosters faster by clicking on Jheberg. Once on Jheberg, any file hosters shown in green are uploaded. Then click on the file hoster you want. This used to work fine, but now the warning always appears.
No issues once the hosters appear under Jheberg on AT though, to be clear.
I've faced it several times on different sites, from ZippyShare to others. Actually, everybody has! It shows up mostly on download websites. It's a false positive most of the time and indeed safe, although drive-by download attacks can't be ignored, as they can be quite effective and damaging.
Keep your OS and web browser updated, install uBlock Origin, have a well-known security solution in place, and you should have no problem (hopefully) ;) And don't forget to think before you click!
We have never had any CRC based search capabilities here, though many files do contain CRCs in the name, so if you enter one, it may get found. If you absolutely need to use CRC32 though, the only other option currently offered is to download the files database dump and search through that.
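For anyone who goes that route, here's a rough sketch of what searching the dump for a CRC32 could look like — the dump file name and the line-based layout are assumptions on my part, not the actual format of the dump:

```python
# Rough sketch: scan a local copy of the files database dump for a CRC32.
# The file name and line-oriented format here are assumptions -- adjust to
# however the actual dump is laid out (e.g. an SQL or TSV export).
import sys

def find_crc(dump_path, crc32_hex):
    target = crc32_hex.lower()
    with open(dump_path, "r", encoding="utf-8", errors="replace") as f:
        for line_no, line in enumerate(f, 1):
            if target in line.lower():
                print(f"{line_no}: {line.rstrip()}")

if __name__ == "__main__":
    # e.g. python find_crc.py files_dump.txt 1A2B3C4D
    find_crc(sys.argv[1], sys.argv[2])
```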
Thanks. Actually, I decided to not include One Pace as it's not exactly One Piece (or rather, mostly because the naming scheme really confuses the script since it looks like "One Pace" is a group name), so will just leave their releases as is.
Yeah, there's an upper limit of around 100MB per file, which should be plenty for .ass subtitles. Unfortunately, some in this release go past 300MB, which the script considers excessive and just skips over.
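Just to illustrate the kind of size check being described (this is not the actual script, only a sketch of the idea):

```python
# Illustration only, not the site's actual script: attachments over a
# threshold are considered excessive and skipped.
MAX_ATTACHMENT_BYTES = 100 * 1024 * 1024  # the ~100MB limit mentioned above

def attachments_to_keep(attachments):
    """attachments: iterable of (name, size_in_bytes) pairs."""
    for name, size in attachments:
        if size > MAX_ATTACHMENT_BYTES:
            continue  # e.g. the 300MB+ subtitles in this release
        yield name
```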
Thanks for bringing it up, and for the one who extracted and uploaded the subtitles.
When I go to the download link, Google says that it's an unsafe website. All the links from Jheberg show this message. Why is that? Is anybody else facing this problem?
Take a guess who downloaded SonicBoom's garbage release just to extract the attachments (for use with motbob's raws) and uploaded them so others could avoid doing that themselves. https://nyaa.si/view/1156694#com-15
> Turns out it was encoder error. Re-read my original comment to find out why yours does not make sense. Better to keep your mouth shut if you do not know what you are talking about.
At second glance, I noticed that the two subtitles included in the attachment archive provided on this site are both smaller than 25 MiB with the next larger one being ~100 MiB big. This leads me to believe that the attachment extracting/archiving script used by this site's admin skips (subtitle) files that are over a certain size, which is a sane feature to have when processing files of unknown origin. The script only "broke" on this release because of SonicBoom's incompetence.
Your attachment extracting and/or archiving script seems to be broken. Of the four releases linked below, all episodes have the tags, chapters and font attachments available, but none have either of the two subtitle tracks, except for episode 15. https://animetosho.org/search?q=Hi Sco... SonicBoom
The upload script already supports streaming 7z creation (e.g. all uploads to Openload do it) - I just flag which hosts get their files wrapped that way.
Aside from network bandwidth, disk I/O is probably the biggest bottleneck on the updates server, which is why streaming is important (most cheap servers only have a single, likely laptop-class, disk). It's probably a little more complicated than you're thinking, since it also needs to support on-the-fly splitting (to deal with file size limits), as well as handling retries when an error occurs, i.e. stream rewinding.
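Purely as an illustration (not the actual upload script), on-the-fly splitting of a stream into fixed-size parts could look something like the sketch below; the retry/rewind handling mentioned above is left out:

```python
# Hypothetical sketch: read a source stream once and yield fixed-size parts
# without buffering the whole archive on disk. Error handling and stream
# rewinding for retries (mentioned above) are intentionally omitted.
def split_stream(stream, part_size, chunk_size=1 << 20):
    part, filled = [], 0
    for chunk in iter(lambda: stream.read(chunk_size), b""):
        while chunk:
            take = min(len(chunk), part_size - filled)
            part.append(chunk[:take])
            filled += take
            chunk = chunk[take:]
            if filled == part_size:
                yield b"".join(part)
                part, filled = [], 0
    if part:
        yield b"".join(part)  # final, possibly smaller, part
```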
I see, figured as much. You probably have a VPS, so using the HDD doesn't matter that much, but you could also use /dev/shm while carefully watching RAM usage, or this module here: https://github.com/allanlei/python-zipstream . There are probably other techniques, but these are the ones I know of. Uncompressed archives don't have much CPU overhead, so you can read the file once to determine the size, then again to actually upload the data without storing it in memory. It would need integration with the Python uploading script, so it might not be useful to you.
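For reference, a minimal sketch of how that module can stream an uncompressed zip chunk by chunk, so the full archive never has to exist on disk (the file names here are placeholders):

```python
# Minimal sketch using allanlei/python-zipstream: build an uncompressed zip
# on the fly and consume it chunk by chunk (e.g. hand chunks to an uploader).
# File names are placeholders.
import zipstream

z = zipstream.ZipFile(mode="w", compression=zipstream.ZIP_STORED)
z.write("episode01.mkv")                       # path on disk
z.write("episode02.mkv", arcname="ep02.mkv")   # optional rename inside the zip

with open("wrapped.zip", "wb") as out:         # or feed chunks to the host instead
    for chunk in z:
        out.write(chunk)
```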
Some people use the links for streaming purposes and the like, so wrapping them in archives does break that use-case. I'm not entirely sure if there's strong evidence suggesting that this makes a difference, but I can flip the flag for Solidfiles to see if it makes any difference...
Idk why encrypting names within archives isn't used for all uploads by default. Maybe admin-sama can chime in? You can even do it on the fly, without writing files to disk, if resources are a problem. It still requires some CPU/IO overhead and using uncompressed zip files, but that doesn't seem like such a big hurdle. As a bonus, you can keep the original file names without the host modifying them.
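Worth noting that encrypting the file names (not just the contents) is a 7z feature rather than something plain zip offers, and the upload script already does 7z wrapping. A rough sketch of what it could look like from Python by shelling out to the 7z CLI — the archive name, password and input paths are placeholders, and this is not necessarily how the site's script works:

```python
# Rough sketch, not the site's actual script: create a 7z archive with no
# compression (-mx=0) and encrypted headers/file names (-mhe=on) via the
# 7z command-line tool. Archive name, password and paths are placeholders.
import subprocess

def make_name_encrypted_7z(archive, password, paths):
    cmd = ["7z", "a", "-t7z", "-mx=0", "-mhe=on", f"-p{password}", archive, *paths]
    subprocess.run(cmd, check=True)

# Example: make_name_encrypted_7z("wrapped.7z", "hunter2", ["episode01.mkv"])
```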
Solidfiles blocked me one time when I was upping a series for an AT request. So I did a test to see if I could put up the rest of the series with nondescript names (like zpart1, zpart2) and what do you know? No problem.
After that, I really think SF deletes by name (titles/group names that give away too much info) and not by hash.
to everyone. bro i'm falling in love with all of yous.
is stuff getting deleted pretty much instantly or within a couple of hours on Solidfiles? i've noticed it a couple of times this week where i've gone to get the latest episodes and i'm getting file not found errors