They do have a robots.txt [1] that disallows robot access to the spigot tree (as expected), but removing the /spigot/ part from the URL seems to still lead to Spigot. [2] The /~auj namespace is not disallowed in robots.txt, so even well-intentioned crawlers, if they somehow end up there, can get stuck in the infinite page zoo. That's not very nice.
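For illustration, a robots.txt along these lines would cover both trees (the /~auj path is just the namespace mentioned above; treat this as a sketch, not the site's actual file):

    User-agent: *
    Disallow: /spigot/
    Disallow: /~auj/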
> It seems quite likely that this is being done via a botnet - illegally abusing thousands of people's devices. Sigh.
Just because traffic is coming from thousands of devices on residential IPs doesn't mean it's a botnet in the classical sense. It could just as well be people signing up for a "free VPN service" — or a tool that "generates passive income" for them — where the actual cost of running the software is that you become an exit node both for other "free VPN service" users' traffic and for the traffic of users of the VPN's sibling commercial brand. (E.g. scrapers like this one.)
Eh. To me, a bot is something users don't know they're running, and would shut off if they knew it was there.
Proxyware is more like a crypto miner — the original kind, from back when crypto-mining was something a regular computer could feasibly do with pure CPU power. It's something users intentionally install and run and even maintain, because they see it as providing them some potential amount of value. Not a bot; just a P2P network client.
(And both of these are different still to something like BitTorrent, where the user only ever seeds what they themselves have previously leeched — which is much less questionable in terms of what sort of activity you're agreeing to play host to.)
AFAIK much of the proxyware runs without the informed consent of the user. Sure, there may be some note on page 252 of the EULA of whatever adware the user downloaded, but most users wouldn't be aware of it.
A well-written scraper would check the image against a CLIP model or other captioning model to see if the text there actually agrees with the image contents.
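A minimal sketch of that check, assuming the Hugging Face transformers CLIP API; the model name and the 0.2 threshold are illustrative placeholders, not tuned values:

    # Sketch: reject scraped images whose alt/caption text doesn't match the pixels.
    from PIL import Image
    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def caption_matches_image(image_path: str, caption: str, threshold: float = 0.2) -> bool:
        image = Image.open(image_path).convert("RGB")
        inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
        with torch.no_grad():
            out = model(**inputs)
        # Cosine similarity between the projected text and image embeddings.
        sim = torch.nn.functional.cosine_similarity(out.text_embeds, out.image_embeds).item()
        return sim >= threshold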
There is a particular pattern (block/tag marker) that is illegal in the compressed JPEG stream. If I recall correctly, you should insert a 0x00 after a 0xFF byte in the output to avoid it. If there is interest I can follow up later (not today).
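For reference, that's the standard JPEG "byte stuffing" rule: inside the entropy-coded segment, markers are 0xFF followed by a nonzero byte, so any 0xFF the encoder emits gets a 0x00 appended after it. A sketch of the output-side step:

    # JPEG byte stuffing sketch: follow every 0xFF in the entropy-coded
    # data with 0x00 so a decoder can't mistake it for a marker.
    def stuff_bytes(entropy_coded: bytes) -> bytes:
        out = bytearray()
        for b in entropy_coded:
            out.append(b)
            if b == 0xFF:
                out.append(0x00)
        return bytes(out)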
I was very excited 20 years ago, every time I got an email from them saying that the scripts and donated MX records on my website had helped catch a harvester.
> Regardless of how the rest of your day goes, here's something to be happy about -- today one of your donated MXs helped to identify a previously unknown email harvester (IP: 172.180.164.102). The harvester was caught by a spam trap email address created with your donated MX:
So how do I set up an instance of this beautiful flytrap? Do I need a valid personal blog, or can I plop something on cloudflare to spin on their edge?
That said, these seem to be heavily biased towards displaying green, so one “sanity” check would be: if your bot is suddenly scraping thousands of mostly-green images, something might be up.
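A rough version of that sanity check, with a made-up dominance threshold:

    # Heuristic: flag images whose mean green channel dominates red and blue.
    # The 1.3 ratio is an arbitrary illustrative value, not a calibrated one.
    from PIL import Image, ImageStat

    def looks_suspiciously_green(path: str, ratio: float = 1.3) -> bool:
        r, g, b = ImageStat.Stat(Image.open(path).convert("RGB")).mean
        return g > ratio * max(r, b, 1e-6)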
Given that current LLMs do not consistently output total garbage, and can be used as judges in a fairly efficient way, I highly doubt this could even in theory have any impact on the capabilities of future models. Once (a) models are capable enough to distinguish between semi-plausible garbage and possibly relevant text and (b) companies are aware of the problem, I do not think data poisoning will be an issue at all.
> So the compressed data in a JPEG will look random, right?
I don't think JPEG data is compressed enough to be indistinguishable from random.
SD VAE with some bits lopped off gets you better compression than JPEG and yet the latents don't "look" random at all.
So you might think Huffman encoded JPEG coefficients "look" random when visualized as an image but that's only because they're not intended to be visualized that way.
I can see what was meant by that statement. I do think compression increases Shannon entropy per byte by virtue of removing repeating patterns of data - the compressed data looks more “random” per byte precisely because all the non-random patterns have been compressed out.
Total information entropy - no. The amount of information conveyed remains the same.
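A quick way to see the per-byte effect, using zlib as a stand-in for any general-purpose compressor (the sample data is arbitrary):

    # Shannon entropy per byte, before and after compression.
    # The raw text uses a small, skewed alphabet; the compressed stream's
    # byte distribution is much closer to uniform (approaching 8 bits/byte).
    import math, zlib
    from collections import Counter

    def entropy_per_byte(data: bytes) -> float:
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

    raw = " ".join(f"record {i}: value {i*i}" for i in range(5000)).encode()
    print(f"raw:        {entropy_per_byte(raw):.2f} bits/byte")
    print(f"compressed: {entropy_per_byte(zlib.compress(raw)):.2f} bits/byte")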
Technically, with lossy compression the amount of information conveyed will likely change. It could even increase the amount of information in the decompressed image: for instance, if you compress a cartoon with simple lines and flat colors, a lossy algorithm might introduce artifacts that appear as noise.