Latest Comments
Seems like CloudFlare has now been removed, so upload limit restored back to 500MB.
I think 180upload is better than BillionUploads because they don't track downloads by IP, which otherwise leads to times when downloading is blocked.
"Wrong IP" for instance is a common sight.
I do like anonfiles as a downloader with its fast speed and long file retention. Upz!
Thanks for the informative post, now I see things in a different light.
Please look at d-h.st and billionuploads.com, both have good speed.
Please add netload.in to your list as its own entry rather than counting it among the other sites. I've been using it (and ShareBeast where possible) since the problem with TurboBit, which deleted everything on it.
It's significantly easier to change a number than to reverse engineer the uploading process of another host.
I like AnonFiles for fast upload speeds and that they keep files for a long period of time.
7z wrapping is of no concern to me, as the code to do it is already functional and quite stable.
I'm considering adding another host, but when I actually get around to doing it, I'll evaluate the choices then.
Personally (hint: rant incoming), I don't really like CloudFlare. I've been blocked out, myself, a few times, probably because I disable Javascript and cookies by default (and the CF redirection page chucks a fit if you have those disabled).
However, they sell themselves as a free solution to easily protect websites against spam, hack attempts, DDOS attacks all whilst making your website load faster. So many clueless website owners just set it up without thinking of possible implications (or why it's being offered to them for free in the first place).
I've never used CF so I don't know what they're like. But presumably their spam/hack protection is, at best, based on heuristics at the HTTP level - in other words, completely sucks.
Most sites being dynamic these days, I'd argue that the CDN caching is of limited benefit, although caching dynamic content in the event of the site going down could be interesting (depends on how often they update it though). I'll give them the DDOS protection claim, assuming an attacker can't easily get the underlying IP by sending a phony DMCA complaint.
Privacy is another concern as they're essentially sitting in the middle capturing everything, including all sent messages, everything any visitor views, and even authentication tokens (and possibly username/passwords as most sites don't hash the password before sending (like AT, but we have SSL so it doesn't really matter :P)). It's interesting that you have to pay to use SSL on CloudFlare, but even then, they intercept everything (probably so that their redirection pages work) so HTTPS doesn't protect your privacy there.
Wouldn't it be better to just replace anonfiles.com with davvas.com?
It's a 5GB max upload file size without an account, and no need to 7z files either :)
CloudFlare is causing a lot of distress and unhappiness on quite a lot of other sites too, blocking users from websites on suspicion of being automated bots/spiders.
It's usually in response to complaints. Easy way to solve things...
Hmm, not exactly sure. Either we got filtered, or (more likely) a stuff-up on their end:
PING anonfiles.com (46.227.71.81) 56(84) bytes of data.
From 78.152.40.234 icmp_seq=3 Packet filtered
^C
--- anonfiles.com ping statistics ---
16 packets transmitted, 0 received, +1 errors, 100% packet loss, time 15110ms
I've mapped their hostname to a different IP and it seems like it isn't being filtered at least, but I haven't tested anything. Let's see if this works.
-----------
An update on this: it seems that AnonFiles have decided to put CloudFlare in front of their servers, which has a 100MB upload limit (this issue is separate from the one above).
Which means that AnonFiles uploads will be broken into 100MB parts until they fix this (by getting rid of CloudFlare, or by breaking their API by shifting it to another server).
At least they should be going through now.
Slowpoke here, since I only started using this site recently, but why can't the series in the filtered list be fetched? Do the copyright holders in Japan for them actually go after piracy or something?
AnonFiles has stopped being uploaded to since yesterday; hope it's fixed soon.
I do enjoy using my grill very much (especially with goldfish), however I wouldn't quite call myself a cooking utensil yet.
To prevent questioning from the conspiracy theorists out there, I also am not a highly intelligent AI controlled bot (although the "highly intelligent" part probably remains true).
I am merely an eccentric flesh & bones human being.
One has suggested that I secretly work as a magical girl. To that I have no comment. Oh, and I'm no longer available for dating either, sorry.
Seems Germany's still full of Nazis and deluded morons.
Not really, I'm just trying to be hip.
Isn't tagging everything as "beta" the cool thing to do nowadays?
https://animetosho.org/view/tvcaps-one...-ts.695214
https://animetosho.org/view/tvcaps-one...-ts.695216
https://animetosho.org/view/tvcaps-hun...-ts.695215
Cancel the fetch please. I think no one would want to download these anyway.
Ever since I've known AT, it has been in the Beta stage. Are you planning to launch the "full" site? What would it have apart from the Beta?
Wow! I'm amazed at the level of tech that went into making this site. I'll stick to plowshare as I've become a bit familiar with it (it also sounds a lot better than writing to TCP, lol). Thanks for taking the time, admin. Keep it up ;)
Firefox, by default, has a soft limit of 50MB for IndexedDB storage (soft limit = asks for permission to get larger, but no hard limit after that), but I presume anyone who uses Mega would just agree to it.
I haven't checked, but I'd imagine that IndexedDB storage would come under Firefox's "Offline Website Data" (or maybe "Cookies") or Chrome's "cookies and other site and plug-in data", so clearing that regularly could help. Depending on your browser usage pattern, you may choose to get the browser to clear it on exit.
I'd say it's just a bug / lazy coding rather than anything nefarious. There are a million smarter ways to track you (not saying they aren't).
Writing the file to an IndexedDB is a somewhat ugly way to do things, but I suppose there isn't really much else you can do in Javascript. Perhaps a custom desktop downloader won't have the issue.
Thanks for sharing the info!
Recovering GBs of hard disk space after using Mega.co.nz
I just recovered 42GB of hard disk space from dead files that Mega left after downloading. And I don't use that Mega often. I didn't find any information about this at Mega or with a Google search, so I thought I'd share the how and why with any here at AT who are interested:
1. How to recover space: Delete the folder "https+++mega.co.nz" that Mega creates on your hard drive during downloads. For Firefox users the folder is located at:
For Win7:
c:\Users\"Username"\AppData\Roaming\Mozilla\Firefox\Profiles\"xxxxxxxx".default\indexedDB\https+++mega.co.nz
For WinXP:
c:\Documents and Settings\"Username"\Application Data\Mozilla\Firefox\Profiles\"xxxxxxxx".default\indexedDB\https+++mega.co.nz
(The info in quotes varies by person.)
So far Mega has never cleaned up these files on the machines I use. Firefox doesn't either.
2. The why: (Optional info) When you use Mega, it creates its own unmoderated folder, copies your downloads to that folder in a metadata/database format and, when the download is complete, copies the database files from that folder to the browser, converting the information back into the original file format. The problem is it doesn't delete the database files after it's done, leaving them as dead weight on your hard drive. If you want the space back, you have to find the database folder and delete the files manually.
3. The Why of the why: Why doesn't Mega clean up after itself? Sloppy implementation? Maybe tracking your use for a future plea-bargain.
IE and Chrome: You'll have to find out for yourself if you have the same problem and where the folder is.
Okay that makes sense then.
Unless you have a specific reason not to, I don't see the problem in using plowshare.
The script writes to TCP sockets directly. cURL provides nice abstractions for HTTP, but doesn't exactly provide enough flexibility that's needed here.
I'm not sure how I'd help you with interacting with upload sites. It's largely about understanding how HTTP works and replicating and automating it.
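To illustrate what "writing to TCP sockets directly" entails (this is not the actual upload script; the host, path, and field names below are made up for illustration), a raw multipart/form-data upload request can be assembled by hand as bytes:

```python
def build_upload_request(host, path, field, filename, data,
                         boundary="----ExampleBoundary1234"):
    """Assemble a raw HTTP/1.1 multipart/form-data POST request as bytes.

    This is the kind of thing cURL normally does for you; doing it by hand
    gives full control over framing, streaming and redirect handling.
    """
    body = (
        "--%s\r\n"
        'Content-Disposition: form-data; name="%s"; filename="%s"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
        % (boundary, field, filename)
    ).encode() + data + ("\r\n--%s--\r\n" % boundary).encode()
    headers = (
        "POST %s HTTP/1.1\r\n"
        "Host: %s\r\n"
        "Content-Type: multipart/form-data; boundary=%s\r\n"
        "Content-Length: %d\r\n"
        "Connection: close\r\n\r\n"
        % (path, host, boundary, len(body))
    ).encode()
    return headers + body

# The result would then be written to a socket opened with
# socket.create_connection((host, 80)), and the response parsed manually.
```

Everything after that (status line parsing, chunked responses, odd redirects) is exactly the part where replicating what the browser does by hand matters.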
I was thinking of getting a VPS and starting a site similar to this, but not on anime. Oh well... One last question and I'll stop bugging you :) : I suspect the script is cURL-based; would you have a few resources/tips to help me interact with the API/interface of uploading sites?
Thanks for the compliment.
If you can't torrent, then you can't really fetch the files in the first place to actually be able to upload them, so I'm not entirely sure what you're asking for.
We don't use plowshare here for uploading; it's a completely custom script.
Hello admin, I'd like to thank you for making this site, because I myself cannot torrent at the uni.
It would be great if you could help a noob like me with some tips regarding the way you upload files. I know about plowshare, for instance; I'm just curious what you use and how you set up the upload script.
Thanks a bunch.
I'm not sure about when you made the comment, but yeah, if the server goes down, it can take a while to sync things back up.
Host thought it was a good idea to turn everything off to transfer us to a new server.
They did actually give 6 hours' advance notice, which would've been useful if I checked my email every hour or so.
Apologies for the downtime.
Anyway, the server's back on, and it doesn't seem to have moved anywhere from what I can tell, so perhaps they temporarily lacked the capacity to mine enough Bitcoins.
Update: okay, it looks like they've actually provisioned a new server, which means that I'll have to transfer things over... Well, since I'm clearly a positive thinker, we'll see how this new server goes.
Wow thanks for that, is it possible to get RSS feeds for each series or only for the entire site?
Is it possible to get RSS feeds so that I'm notified when a new episode of a show (or even of all shows) is uploaded?
I did a small test with AnonFiles:
- "wget https://cdn.anonfiles.com/1376972279463.7z" => redirected to "https://anonfiles.com/redirect/1376972279463.7z", and then "https://anonfiles.com/file/34398790ce65a949fbedd33e8265b8b6" (the download page).
- "wget --referer=https://anonfiles.com https://cdn.anonfiles.com/1376972279463.7z" => 2013-09-05 13:59:50 (7.29 MB/s) - '1376972279463.7z' saved [339927819/339927819] (download started and finished normally).
It seems that AnonFiles has started requiring a Referer of anonfiles.com, as a small proof that you started the download from their download page rather than some random site. Have they changed their mind on hotlinking?
If you copy the link and paste it into a download manager, it won't give you the file but will redirect you to the download page, as no Referer information is provided. If you let IDM catch the link from the download page, or send it the link via some addon like FlashGot, the download should start normally like before.
TL;DR: Just click the link on the download page and let IDM catch it, or you need to deal with that referer.
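For scripted downloads outside a browser, "dealing with that referer" just means setting the header yourself, as the second wget example above does. A minimal Python sketch of the same idea (the URL is the example from the wget test above):

```python
import urllib.request

def make_download_request(url, referer="https://anonfiles.com"):
    """Build a request carrying the Referer header the CDN now checks,
    so it serves the file instead of redirecting to the download page."""
    return urllib.request.Request(url, headers={"Referer": referer})

# Opening the request would then stream the file directly:
# with urllib.request.urlopen(make_download_request(url)) as resp:
#     data = resp.read()
```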
I know not everyone can have such speeds, and I never meant to start an argument. I just like the site because I only watch a few shows and you have plenty of mirrors on here (which I have premium subs on), and I don't like BitTorrent much. Apologies for my rudeness and thanks for your efforts.
Okay, so it doesn't seem like anything broke. Fanzub's listings just seem to be quite different to what we have here, to the point where the timing is a bit out of whack.
I've tweaked the maximum time skew, which should increase the likelihood of matching stuff, however it appears that Fanzub is rather behind Nyaa/TT with releases, so you won't see most new files with an NZB.
By the way, if you haven't realised, we don't actually provide NZBs - we just provide links to Fanzub for convenience. If you're primarily interested in NZB downloads, I suggest just going straight to the source.
You probably need to update your IDM or browser or both. Make sure that the IDM CC extension is enabled in your browser. But yeah, you need to specify your problem.
Hmm, probably the Fanzub parser broke.
Unexpected, as they actually have an API that we use.
I'll need to check why they aren't coming up a bit later, but thanks for the notice!
So you have a 1Gbps line (assuming you meant 20 Megabytes per second)? That's very rare for a residential connection.
With such a connection, I'm struggling to understand why you need to use AT at all.
But regardless, how fast can you upload? Download speed was never the problem, upload speed is.
You've also got to consider whether your line can maintain that speed 24/7 (residential lines rarely are designed to do this), continue to be reliable as well as have good routing/peering characteristics that you generally only get from a data centre.
I'm not expecting someone like you to understand all the complexities involved with managing all this, which is probably why I'm managing this and not you, but hopefully I've at least helped improve your very limited understanding in the logistics behind the project.
All the single links you skipped and your server could not handle: http://uptobox.com/0w52xska6u69 http://uptobox.com/0a2iau7y0ux0 http://uptobox.com/wmdjlz7u74fq http://uptobox.com/o54bj11t676e http://uptobox.com/582ejzhvr73g . I can download from them at 20MB/s. I was just pointing out the flaws in your server.
you can't afford a decent one
Are you suggesting that you can, and would be willing to pay for one? Because if you did pay for "a decent" server for us, I'd be happy to fetch those "HQ BD rips" you so desire (until you stopped paying for it). But of course, you wouldn't, right? If you could, I'm sure you would've paid for your own "decent" server, correct?
Or were you just trying to find any insult you can throw because there's nothing better you can do?
21/09/2013 07:44 — ichimoku1134