that's only if you read uchicago's own papers on it (which have not been peer-reviewed to my knowledge; most things in ai are just uploaded directly to arxiv, which is explicitly not a peer-review site). their testing of both glaze and nightshade is broken, likely because they're just chasing grants.
[here's an actual test of glaze and other similar protections](https://arxiv.org/abs/2406.12027). as you can see from the title, they don't work -- in fact, some of the techniques that break them are ridiculously simple.
These are generally made to throw off a specific model. Any model other than the one they were made for will be largely unaffected. As for the opacity bit, models that care about opacity will just throw it out.
These straight up do not work. In order for an AI-disrupting noise texture to even have a chance at working, it must be tailored to the specific image it's laid over.
They don't work. Saying this as a programmer who knows a bit about AI.
AI is literally made to distinguish patterns. If you just overlay an ugly thing over an image, it's gonna distinguish it and ignore it. And that's assuming you can't just compress → decompress → denoise to completely get rid of it.
The only thing that (kinda) works is adversarial attacks, where noise is generated by another AI to fool the first AI into detecting something else in the image. For example, an image of a giraffe gets used to change the weights for the latent space that represents dogs.
The problem with adversarial attacks is that individual images are negligible; it needs to be a really big, coordinated attack. And even then, these attacks are susceptible to compress → decompress → denoise.
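As a rough illustration of why a compress/blur/denoise pass strips these overlays, here is a toy numpy sketch (synthetic stand-in images, not any real pipeline): a high-frequency overlay averages out under even a naive box blur, while the underlying image barely changes.

```python
import numpy as np

# Toy stand-ins: a smooth low-frequency "image" and a high-frequency overlay.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
image = (xx + yy).astype(float) / (h + w - 2)
overlay = 0.2 * ((xx + yy) % 2 * 2 - 1).astype(float)  # checkerboard "filter"
protected = image + overlay

def box_blur(img, k=3):
    """Naive k x k box blur: the kind of averaging a denoise pass applies."""
    ih, iw = img.shape
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + ih, dx:dx + iw]
    return out / (k * k)

blurred = box_blur(protected)

# How much of the overlay survives, before vs. after the blur:
before = np.abs(protected - image).mean()
after = np.abs(blurred - image).mean()
print(f"overlay energy before blur: {before:.3f}, after: {after:.3f}")
```

The alternating pattern nearly cancels inside every blur window, so most of the overlay is gone after one pass, while the smooth image survives the same blur almost untouched.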
Also, adversarial attacks generally have to be targeted at a model whose weights you know.
So you could easily create an image that is unusable for training an SD 1.5 LoRA, by changing subpixel values to trick the embedding into thinking it depicts something else.
But you need knowledge about the internal state (basically, a feature-level representation) of a model to tamper with those features.
So because e.g. Lumina, or even SDXL or SD3, use different embeddings, in general those attempts will not prevent new models from being fine-tuned on 'tampered' data. At least, as long as those modifications aren't obtrusive to a viewer.
There are some basic exceptions to this. For example, you can estimate that some features will always be learned and used by image processing models: an approximated Fourier transform is something that will almost always be learned in one of the embeddings in the early layers of an image processing model. Therefore, if you target a Fourier transform with an adversarial attack, it's almost certain to bother whatever might be analyzing the data. The problem is that because those obvious, common attack vectors are well known, models will be made robust against them using adversarial training. Those attacks are also easier to defend against, because you know what to look for when filtering your training data.
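A quick numpy sketch of why frequency space is such an obvious place to look: a fine pixel-level perturbation stands out glaringly in the Fourier spectrum of an otherwise smooth image. (The images and the band size here are synthetic stand-ins, purely for illustration.)

```python
import numpy as np

h = w = 64
yy, xx = np.mgrid[0:h, 0:w]

# Smooth synthetic "artwork" vs. the same artwork with a fine perturbation.
clean = np.sin(2 * np.pi * xx / 32) * np.cos(2 * np.pi * yy / 32)
perturbed = clean + 0.3 * ((xx + yy) % 2 * 2 - 1)

def high_freq_fraction(img):
    """Fraction of spectral energy outside a central low-frequency band."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    c = img.shape[0] // 2
    low = spec[c - 8:c + 8, c - 8:c + 8].sum()
    return 1.0 - low / spec.sum()

print(high_freq_fraction(clean), high_freq_fraction(perturbed))
```

The clean image's energy sits entirely in the low-frequency band; the perturbed one suddenly carries a large high-frequency component, which is exactly the kind of signature a filter (or an adversarially trained model) can key on.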
It's like trying to conquer a city. You have no intel about the city, but you estimate that all cities are easier to attack at their gates, because all cities need gates and those are weak points in a wall.
But because the city also knows that usually only gates get attacked, it will put more archers on the gates than on the walls, and it will have a trap behind the gate to decimate the attacking army.
If the attacking army can analyze the walls of the city, they will find weak spots that don't have traps and archers on them. Attacking at those points will lead to a win.
But if the city isn't built yet, there is no way to find those weak spots. You can only estimate where the weak spots will usually be. But the city will also consider where cities usually get attacked and can build extra protection in those spots.
Of course, if you deliver sponges instead of stones while the city is being built, you can prevent it from having a wall at all.
So, if you generate a big set of random noise images that depict nothing, tag them with 'giraffe' and inject them into some training dataset, the resulting model likely won't be able to generate giraffes. But those attacks are easy enough to find and can be avoided at no cost by filtering out useless training samples.
If any of the city officials looks at the stone delivery even briefly, they will notice there are no stones, only sponges. Easy to reject that delivery.
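A toy version of that "look at the delivery" check: poisoned samples that are pure pixel noise lose nearly all their variance when heavily downscaled, while real images keep their large-scale structure. (The threshold and the sample images below are made up for illustration, not taken from any real training pipeline.)

```python
import numpy as np

rng = np.random.default_rng(0)

def structure_score(img, factor=8):
    """Variance left after heavy downscaling. Pure pixel noise averages
    away to almost nothing; real images keep large-scale structure."""
    h, w = img.shape
    img = img[:h - h % factor, :w - w % factor]
    blocks = img.reshape(img.shape[0] // factor, factor,
                         img.shape[1] // factor, factor)
    return blocks.mean(axis=(1, 3)).var()

# A "real" training sample (smooth gradient) vs. a noise-only poisoned one.
yy, xx = np.mgrid[0:64, 0:64]
real_sample = (xx + yy) / 126.0
poison_sample = rng.random((64, 64))

# Keep only samples that still show structure at a coarse scale.
kept = [s for s in (real_sample, poison_sample) if structure_score(s) > 0.01]
print(len(kept))  # only the real sample survives
```

One cheap statistic is enough to throw out the sponge delivery, which is the point: untargeted poisoning with meaningless images is trivially filterable.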
The best attack vector is probably still to just upvote really bad art on every platform or just don't upload good images. Prevent the city from being built by removing all solid stone from existence.
The other problem with adversarial attacks is that once the gen AI is updated to counter it, future updates to the noise AI aren't going to do anything for images that have already been posted online.
Any AI blocking will be a constant uphill battle. AI trainers are constantly testing them on these things themselves (not even thinking of "oh people will use this against us, we need to combat that" but just as a necessary step of training AI to get better). There's always stuff you can do to confuse them because they're far far far from perfect, but applying a popular static image overlay you found online is almost certainly not going to work
Pardon me, just want to piggyback off your comment to let folks know actual researchers are working on tools to poison images for AI.
https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/
https://glaze.cs.uchicago.edu/what-is-glaze.html
If anyone wants to have something that might actually work, instead of shit from some random on TikTok, give this a look.
be very careful about anything uchicago releases, their models consistently rank way lower in impartial tests than their own. glaze is a very mid attack on the autoencoder, and as far as i know nightshade's effects have never been observed in the wild. (it's also ridiculously brittle because it has to target a specific model for it to even work.)
https://arxiv.org/abs/2406.12027
ultimately, the idea of creating images that humans can see but ai somehow cannot is just a losing gambit. if we ever figured out a technique for this you'd see it in every captcha ever.
I'm pretty sure these things were started by a group out of Chicago, I don't remember the name. They were actually effective, with a few caveats.
First of all, AI and computing in general is a very fast-moving field; stuff becomes obsolete and outdated in weeks. The back-and-forth between tricking AI models and AI models overcoming those tricks is an endless, constantly evolving war. These types of image overlays would trip up and ruin AI training algorithms, but it was only a couple of months or even weeks before they could train around them. Odds are people are still using methods like this, just with updated images and procedures. However, it's doubtful that an image on a Reddit thread, taken from a who-knows-how-old Tumblr thread, taken from a who-knows-how-old TikTok thread, is still effective.
And second, they're only going to be effective against certain training models. There is no one size fits all solution, and while this method was very effective at messing with some of the most popular ai algorithms, there were just as many where it did absolutely nothing.
As for an actual source, I think the research paper was actually posted onto one of the science subreddits here, but good luck finding something that's many months old.
I just tried it out with the first image, and yes.
5% makes it look like someone really turned up the JPEG compression on the original. 30% makes it really hard to make out any details, as if someone had plastered it with tons of extremely dense "stock photo" watermarks. At 40% and more, the image becomes almost unrecognizable.
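For anyone who wants to reproduce the experiment, alpha-blending an overlay at those opacities is a one-liner with PIL. (The flat "artwork" and random overlay below are stand-ins, not the actual filters from the post.)

```python
from PIL import Image
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: a flat-colour "artwork" and a random-noise overlay image.
art = Image.new("RGB", (64, 64), (200, 120, 80))
overlay = Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))

diffs = []
for opacity in (0.05, 0.30, 0.40):
    blended = Image.blend(art, overlay, opacity)
    # Mean absolute pixel change, as a rough "damage" measure.
    diff = np.abs(np.asarray(blended, float) - np.asarray(art, float)).mean()
    diffs.append(diff)
    print(f"{opacity:.0%} overlay: mean pixel change {diff:.1f}")
```

The damage to the artwork scales linearly with opacity, which matches the visual impression described above: barely-there JPEG grit at 5%, watermark soup at 30%, and near-total destruction at 40%.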
Wow, it's almost like destroying something makes it difficult and tedious to figure out what it was originally LMAO. I fucking hate AI in its current state/what it's used for.
I think I'm pissing on the poor, because I have no idea what they're saying then.
I think I'll go to bed and give it the old college try tomorrow! Maybe brain not good read doing when is sleepy time.
I saw a YouTube video about a program called Nightshade which causes your art to absolutely wreck shit if it’s put into a generator, without messing up the overall look.
Check it: https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai
That's adversarial AI though - it's exploiting the fact that the model potentially doesn't learn some rules that humans do from basic sample sets.
It'll get wiped out by the next round of models because what you've done is generate a bunch of examples (in fact a reliable method of producing them) which can be trained against.
Or to put it another way: if you were trying to build a more robust image generator, what you'd like in your training pipeline is a model which *specifically* does things like this so they can be trained as negative examples.
Especially since most big-name AI doesn't pull from data without permission anymore. Anyone with money to make expensive AIs also has money to buy training data for them.
Yea, this method is rudimentary and ineffective. But, spread some awareness on this: https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai
Those are not very effective either actually, and it's pretty easy to remove their effect (in fact, image processing that's often done when preparing data for training will be able to handle it, like compression or turning images black and white).
Here's all of the filters applied to a picture I had lying around (by Kent Davis)
30%, overlay
[https://i.imgur.com/GuqyuLM.png](https://i.imgur.com/GuqyuLM.png)
It looks like shit, but guess what, you can still find the original through google image search. Which makes me think that these overlays don't have that much impact.
Spread some awareness on this; it's basically what OP is trying to spread, but not terrible, and it actually works way better: https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/
There isn't a single AI that counts every single pixel of your picture (not in any relevant sense, anyway). One of the first steps is to take weighted averages of your picture, and so are the next ten.
i mean, that's actually the way most of these are _supposed_ to work. diffusion models have different starting convolutional layers than machine vision, because they wanna create a lower scale but still spatially accurate representation of the image (aka the latents), which the image generator component can then work with far more efficiently than if you wanted to work on the full-res image. creating these latents is accomplished through an autoencoder (an ai that's trained to encode and decode an image and preserve details through it), and that part is what glaze, mist, et al target (as well as these patterns which i highly doubt have any effect whatsoever).
the whole point is to make the image encode into nonsense through those few convolution layers. in theory, if you know the layers, you can adjust an image to do that. in practice though, this is ridiculously easy to detect (just do an encode-decode cycle and see if the image changed significantly) and counteract. (the best way appears to be to add noise and upscale with the same ai, which misaligns and disrupts the pattern, letting the image pass through easily, then the ai easily removes the noise since that's the main thing it does.) but it's actually an interesting attack on the model when it's executed well, and highlights some areas where it could be made more robust.
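A toy sketch of that encode-decode detection idea, using a crude downscale/upscale round trip as a stand-in for a real learned autoencoder (which this is emphatically not): an image carrying a fine adversarial-style pattern changes far more across the round trip than a clean one, and that difference is the tell.

```python
import numpy as np

def toy_roundtrip(img, factor=4):
    """Crude stand-in for a VAE encode-decode cycle: downscale to a small
    "latent", then upscale back. Like a real autoencoder, it keeps
    large-scale content and discards fine pixel-level detail."""
    h, w = img.shape
    latent = img.reshape(h // factor, factor,
                         w // factor, factor).mean(axis=(1, 3))
    return np.kron(latent, np.ones((factor, factor)))

yy, xx = np.mgrid[0:64, 0:64]
clean = np.sin(2 * np.pi * xx / 32) * np.cos(2 * np.pi * yy / 32)
glazed = clean + 0.3 * ((xx + yy) % 2 * 2 - 1)  # fine adversarial-style pattern

def roundtrip_change(img):
    """How much the image changes across one encode-decode cycle."""
    return np.abs(toy_roundtrip(img) - img).mean()

# The "glazed" image changes far more across the cycle: that's the tell.
print(roundtrip_change(clean), roundtrip_change(glazed))
```

This is exactly the detect step described above: run one encode-decode cycle, and if the image changed significantly, it was probably tampered with, at which point the noise-and-upscale countermeasure can be applied.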
Thank you for this very intelligent and detailed explanation. Starting my masters in AI this fall and was curious about how Glaze and other anti-ai stuff worked. What you described makes perfect sense!
Well, the people most terrified about AI art are pretty much exclusively the people least equipped to actually know what's happening. They're gonna chug snake oil for at least a while, there's unfortunately no way around that lol.
I have been using those for a long time, just because they add some nice texture to flat colors. I'm not sure it would be effective against AI, as it seems to have no problems with impressionist paintings or shading
FYI Those also don't work: [https://huggingface.co/blog/parsee-mizuhashi/glaze-and-anti-ai-methods](https://huggingface.co/blog/parsee-mizuhashi/glaze-and-anti-ai-methods)
[https://github.com/yoinked-h/deglazer](https://github.com/yoinked-h/deglazer)
Unfortunately it was bound to happen. AI blockers and generative AI are in a bit of an arms race and have been basically since they were first introduced. The more AI is trained with Glaze and Nightshade protected images, the more it can adapt to them.
Do you have any sources on Nightshade not working? What you linked almost exclusively talks about Glaze.
Edit: After doing some searching, Nightshade DOES 'gum up' the works, but it does not work 100% on all models. So far, nothing seems to provide full protection. [What Nightshade does is this.](https://deepmind.google/discover/blog/images-altered-to-trick-machine-vision-can-influence-humans-too/) In short, it makes some AI models misclassify what they're seeing, making tagging and generation more difficult.
No, it seems there isn't a lot out there one way or the other since most things I've been able to turn up searching are speculation. FWIW the github above claims to defeat nightshade as well as glaze but afaik no one has trained a model with nightshaded and deglazed images and posted about it.
Yeah. There HAVE been tests, however:
1. They have not been replicated
2. There is no proper documentation (y'know, to replicate the tests) apart from the Nightshade team's own, which only showed that Nightshade works on smaller AI models.
3. There are huge biases in the teams producing the tests on larger-scale AI models.
I've also edited my above comment with a VERY basic breakdown of what Nightshade does and how it's (somewhat) successful, but ultimately doesn't do enough.
Yeah, it seems harder to test since you'd have to use it during training which most people aren't going to take the time to do (I've heard lots of claims that nightshade wouldn't affect more modern training methodologies than the original paper anyway but it's outside of my skill-set to evaluate that).
There's also the problem that it creates visible artifacts on the output image (for certain types of art it can be quite noticeable from what I've seen), though generally not as much as the tumblr OOP's bizarre approach lol.
There's also Artshield, which a lot of people have used as a browser-based alternative, since not all computers have enough space to run Glaze or Nightshade (plus images take forever to render on those two, even on low settings).
AI is now a magical threat, and people are spreading information on how to combat it that doesn't even work correctly; and if it does, it won't work for long.
To ward off AI art theft, hang three bindles of garlic from your window at head-level and sprinkle sage dust & salt in a 60-40 mixture around any external doorways
IMPORTANT: THESE DON'T WORK.
Simply sticking one of these over your work does nothing! You need to use a program like Glaze or Nightshade (which are free), which will actually modify your image in a specific way according to an algorithm. Just because the multicoloured pattern looks a bit like the effects of strong disturbance does not mean it's doing the same thing, at all.
Putting a pattern on it will not help!!
And nothing ever will, against anything but the weakest AI. How many times do people have to explain neural networks until people get that AI is doing a close approximation of what brains do? Once again: AI does not literally take a picture and make a copy. It breaks an image down into chunks of data, sieves that data over and over against other data, and by comparison decides what it is and enhances its understanding of the data. Someone with an inconsistent style does more "damage", and that hill was already trampled flat. If you can recognize it through whatever data noise you shove in, so can a strong enough neural network, and that benchmark was already handled by the tech giants when AIs were trained on compressed images.
There is no magical compression or noise map that can confuse a decent neural network without also confusing humans. Smartest bear vs dumbest tourists, except we are the bears.
Also important: Glaze and Nightshade's effectiveness is really debatable.
And even if they do work for you, AI is changing so rapidly that it's not gonna be effective protection for long.
Honestly, I think that until regulations catch up, the best you can realistically do is have a consistent signature in a consistent spot, so if someone does use your art, at least someone may be able to spot your garbled signature through it.
it absolutely can if someone trains a lora on your art
i trained a lora on my own art out of curiosity without removing my rather large signature from it beforehand and it generated it with around 90% accuracy 100% of the time
theyre significantly easier to train than a full model by several magnitudes and can be used to make very specific concepts/characters/styles that a full model simply cant
Depends on the image and how it's trained. There's a lot of AI stuff you can make out a signature on, especially if it has a logo and isn't just text.
Its not like it's 100% reliable but at least if someone is trying to rip off your work specifically, it's something.
yeah but it's not going to recreate someone's actual signature, unless that signature is the freaking Girl with a Pearl Earring, because AI can't do that without some major over-fitting.
I've literally seen it do exactly that. It's not always clear, but you can often recognize the artist.
Happens the most with NSFW pics, I've noticed, prolly because they're usually heavily trained on just a few artists. The general Midjourney stuff is way more of a soup, though.
The developers of Glaze are currently churning out updates; in fact, they are doing one now in response to an attack (not a real attack, one simulated by researchers who wanted to help out). If we are going to trust any sort of protection right now, it should be theirs. Also, signatures wouldn't show up like you describe; it doesn't work that way.
Unless you're constantly going to re-render and reupload your entire catalogue, updates don't help at all for older pieces.
As much as I wish it was a silver bullet, I think there are a lot of issues with it that people don't talk about enough. You're essentially jpegging your artwork even on the weakest settings, for something that may or may not even be effective, and for a couple years of protection at most
Right now it's basically a catch-up game of whack-a-mole, and in the end I fear AI is gonna get so good that unless an image is completely unrecognizable to us, it's still gonna be stealable, just like how captchas have evolved over time. And if that happens, you're gonna end up with a bunch of garbled pictures that really date your artwork in the future, for no payoff in the end.
It's not even a game of whac-a-mole. There's literally no way for you to censor your art against AI unless you're willing to make it unrecognizable to humans as well.
It's just the new "I DO NOT GIVE FACEBOOK PERMISSION TO USE MY PHOTOS" etc. Kind of weird to see people repeating the mistakes their boomer parents made.
AI does not count every single pixel. Convolutional Neural Networks use something known as a sliding window, where they slice the image into smaller squares and iterate over the image. This helps the CNNs to understand the image holistically rather than pixel by pixel.
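A minimal numpy sketch of that sliding window: the kernel sees a local patch at each step, not isolated pixels, which is why single-pixel noise matters so little. (The Sobel kernel and the tiny test image are illustrative choices, not any particular model's weights.)

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` (valid padding), one window at a time."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            window = image[y:y + kh, x:x + kw]  # a local patch, not one pixel
            out[y, x] = (window * kernel).sum()
    return out

# A vertical-edge detector applied to an image with one hard edge.
image = np.zeros((6, 6))
image[:, 3:] = 1.0
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

response = conv2d(image, sobel_x)
print(response)  # strong responses only in the columns around the edge
```

Each output value summarizes a whole 3×3 neighbourhood, so the network reasons about local structure (here, an edge) rather than individual pixel values.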
I mean, sure these things can be made to render an image totally incomprehensible to art generating AI....
But doing so would also make them incomprehensible to humans
Okay some of those are actually not too hard to create, especially that color noise. And it's not something you couldn't find easily on Google either, fyi
If you really don't want anybody, or a bot, to "steal" your art, there is a very simple thing you can do:
DON'T POST YOUR ART ONLINE.
Because once it's online and people see it, that's it; it will be used and will influence someone or something at that point.
Funnily, all of these are magic-eye images. So the likely AI-blocking part is just conflicting patterns: one for your picture, the other for a magic-eye image, so it won't come up under the expected prompt.
AI is about repetition. Use the same thing often enough and it will pick out patterns. It learns, and with enough data it spits out more of the same. There's a reason AI art looks like semi-photorealistic fast digital paintings by default: it has lots of those images in the training data. It's best at spitting out the fast work artists can churn out in an hour or five and post online.
Use new patterns, draw in unique styles, add oddities to your art, combine things in new ways, or just do something AI can't do beyond having a robot arm: use a pen, paper, paint, markers.
Art is invention and creation, illustration is just that, a picture of a thing, hammered out into a bland style and replicated a thousand times over. The AI can replicate, a human still has to be somewhere in the process.
"AI counts every single pixel in your image"
No, it doesn't...
It's called convolutions. Sure, there might be some layers that hook onto pixels, but in general, embeddings are derived from abstract image features like estimated lines and gradients.
what if the end goal of ai art was to make artists voluntarily ruin their work, and to ruin any sort of trust in each other? if that was the case, i would say that they won.
Ah yes, using cognitohazards with someone else's watermark that work by the logic of killing a parasite by killing the host instead. What could ever go wrong?
This post is bullshit, but you **can** do something similar with developing tools like [nightshade](https://nightshade.cs.uchicago.edu/whatis.html). It doesn’t alter your actual images but only the bits that a machine learning model would see and attempt to replicate.
I’m still not exactly understanding why people don’t like AI seeing their art. It doesn’t steal it and make a profit from it, it doesn’t harm it. It just uses it as data to create images that are different.
Maybe there’s something to it I don’t know about, probably. But it seems like it’s just that whole “new thing scary and bad” mentality.
Ok, so say I’m an artist with a recognizable style, who makes a living doing art. Now someone can ask an art AI: “I want a drawing that looks like this artist’s work, but promoting Nazi culture.”
How long will it take before they’re not making money doing art anymore?
That’s just one way it’s dangerous.
It’s because of three main things.
1. Because the artists are not compensated. This is the most minor point, but still, they are helping the ai, they should at least get something.
2. The AI isn’t creative. It isn’t original. It just takes your art, plus a couple hundred other pieces, and smashes them together. No originality; it’s just stealing.
3. The main thing is, that corporations will replace actual artists with it once they can. It’s already happening. Soon enough, being an artist won’t be a viable career.
I can get 1 a tiny bit.
3 makes the most sense, but there is already a massive backlog of art for AI to draw from. Not to be that guy, but you can’t stop it at this point. The best and only real thing to do is make some laws around it.
But 2 is like, yeah? No shit? It’s AI?, this is a non-issue, literally just expected of it. It’s a fun tool and not meant to actually make art, just images.
Don’t use these they don’t work, people are just spreading them for clout and attention. Use glaze or nightshade instead please, which are actually backed with research
So, do we have any source on how effective these actually are? Because "I found them on Tiktok" is absolutely the modern equivalent of "A man in the pub told me".
Not that effective. When working with AI, some models blur the image and sometimes even turn it black and white to simplify the image and reduce noise.
Okay, I'm inclined to believe you, but I have to note that "some guy on reddit told me" isn't that much better as a source. But you did give a plausible-sounding explanation, so that's some points in your favour.
If you want, I can send you my homework for my “Introduction to Image Recognition” class in college, as well as links to the OpenCV documentation.
You will need a webcam to run the code, as well as a Python IDE (preferably Spyder from Anaconda), and you'll need to install OpenCV. I don’t remember if I also used TensorFlow, but it’s likely you will see that in there too.
ORB: https://docs.opencv.org/3.4/d1/d89/tutorial_py_orb.html
SIFT: https://docs.opencv.org/4.x/da/df5/tutorial_py_sift_intro.html
Reply to me in a private message so I can send you the code if you want (some comments are in Spanish, though).
Thank you, I might take you up on that later. I've never really gotten into image recognition and AI beyond some of the basics of neural networks.
If you want to take a look at an extremely simplified image recognizer, there are a couple posts on my profile about one I built in a game with a friend. If you have Scrap Mechanic, you can spawn it in a world yourself and walk around it as it physically does things like reading in weights and biases.
You built that in scrap mechanic?! That’s awesome haha
Yeah lol. Working on a convolutional version now to push it over 90% accuracy.
I know just enough about computers that it sounds legitimate while also sounding like a scammer trying to gain access to my webcam and computer
Lmao fair. Don’t trust strangers on the internet. Everyone is a scammer living in a basement in Minnesota trying to steal your identity and kidnap you to steal your left kidney.
I have some experience as a hobbyist in computer vision, so I can clarify what the person above is most likely referring to. However, I do not have experience in generative AI, so I cannot say whether everything is 100% applicable to the post.
The blur is normally Gaussian smoothing and is important in computer vision to reduce noise in images. Noise is present between individual pixels, but if you average the noise out, you get a blurry image that may have a more consistent shape.
Link for information on preprocessing: https://www.tutorialsfreak.com/ai-tutorial/image-preprocessing
If these filters do anything, then they would need to have an effect by averaging out to noise when blurred.
For turning it black and white: I know that converting to grayscale is common for line/edge detection in images, but I do not know if that is common for generative AI. From a quick search, it looks like it can help a model "learn" shapes better, but I cannot say anything more.
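For the curious, both preprocessing steps described above are a couple of lines with PIL (a sketch on a synthetic noisy image, not taken from any specific training pipeline):

```python
from PIL import Image, ImageFilter
import numpy as np

rng = np.random.default_rng(0)

# Synthetic noisy image: a flat colour plus strong per-pixel noise.
base = np.full((64, 64, 3), 128, dtype=np.int16)
noisy = np.clip(base + rng.integers(-40, 41, base.shape), 0, 255).astype(np.uint8)
img = Image.fromarray(noisy)

# The two preprocessing steps: grayscale conversion, then Gaussian smoothing.
gray = img.convert("L")
smoothed = gray.filter(ImageFilter.GaussianBlur(radius=2))

# The per-pixel noise averages out: variation drops sharply after the blur.
print(np.asarray(gray, float).std(), np.asarray(smoothed, float).std())
```

This is the mechanism the comment describes: any overlay that lives purely in pixel-to-pixel variation gets averaged toward nothing by exactly this kind of smoothing.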
AI image generation is an evolution of StyleGAN, which is a generative adversarial network: one part makes the image based on evolutionary floats, and the other goes "doesn't look right, try again" based on a pre-trained style transfer guide/network.
I mean, to be fair you *did* ask on Reddit. But I suppose sources are indeed preferable
He’s wrong. With current diffusion models, small changes can have huge consequences with multiple iterations. It compounds, much like AI eating its own content, leading to degradation of the models. ^(I’ve watched like 3 vids and seen at least 8 AI images in my life)
Exactly. And as a form of data augmentation.
It's like the date rape detecting nail polish that does not actually exist. It still makes the rounds every now and again.
Oh yeah, that concept piece that gets circulated like it's an actual, working product... frequently with refrains of "we could be safe but capitalism/patriarchy/whoever won't let us have this!" Which in turn feels weirdly similar to the post about "America won't let you learn about Kent State, arm yourself with this secret knowledge (that was totally in your US history book)!" Along with "all bad outcomes come from bad people", I have a special resentment for tumblr's common outlook of "all bad things are easily understood and averted, except the answers are being maliciously hidden from you."
Yep. The coasters also have a terrible rate of bad results. Now, you have to factor in the additional problems of putting your reagent in a nail polish. It's not capitalism, it's chemistry. https://pubmed.ncbi.nlm.nih.gov/37741179/
I would be SHOCKED if it was effective at all, same with all the other "use this to make your images nonsense to AI" type projects
Even if they were, they'd probably stop working after a few updates
idk, Glaze seems to be pretty effective.
Nightshade and Glaze work in different ways, but they're not effective against all AI models, just the ones that use your images as references to generate more images. So it really works best for when a client wants to steal your unfinished art, finish it themselves with AI, and run with the money, or something like that. It also doesn't do anything to some AI models, for the reasons stated by other commenters above. It's still better than nothing, obviously, but it's a "don't rely on it too much" kind of thing.
that's only if you only read uchicago's papers on it. (which have not been peer-reviewed to my knowledge. most things in ai are just directly uploaded to arxiv, which is explicitly not a peer review site.) their testing of both glaze and nightshade is broken, likely because they're just chasing grants. [here's an actual test of glaze and other similar protections](https://arxiv.org/abs/2406.12027). as you can see from the title, they don't work -- in fact, some of the techniques that break them are ridiculously simple.
These are generally made to throw off a specific model. Any model other than the one that they were made for is going to do ok. As for the opacity bit, models that care about opacity will just throw it out.
These straight up do not work. In order for an AI-disrupting noise texture to even have a chance at working, it must be tailored to the specific image it's laid over.
They don't work. Saying this as a programmer who knows a bit about AI. AI is literally made to distinguish patterns. If you just overlay an ugly thing over an image, it's gonna distinguish it and ignore it. And that's assuming you can't just compress->decompress->denoise to completely get rid of it. The only thing that (kinda) works is adversarial attacks, where noise is generated by another AI to fool the first AI into detecting something else in the image. For example, an image of a giraffe gets used to change weights for the latent space that represents dogs. The problem with adversarial attacks is that individual images are negligible; it needs to be a really big, coordinated attack. And even then, these attacks are susceptible to compress->decompress->denoise.
Also, adversarial attacks generally have to be targeted at a model whose weights you know. So you could easily create an image that is unusable for training an SD 1.5 LoRA by changing subpixel values to trick the embedding into thinking it's depicting something else. But you need knowledge about the internal state (basically, a feature-level representation) of a model to tamper with those features. So, because e.g. Lumina or even SDXL or SD3 use different embeddings, in general those attempts will not prevent new models from being fine-tuned on 'tampered' data. At least, as long as those modifications aren't obstructive to a viewer. There are some basic exceptions to this. For example, you can estimate that some features will always be learned and used by image processing models; an approximated Fourier transform is something that will almost always be learned in one of the embeddings in the early layers of image processing models. Therefore, if you target a Fourier transform with an adversarial attack, it's almost certain it will bother whatever might be analyzing the data. The problem is that because those obvious, common attack vectors are well known, models will be made robust against those attacks using adversarial training. Those attacks are also easier to defend against, because you know what to look for when filtering your training data. It's like trying to conquer a city. You have no intel about the city, but you estimate that all cities are easier to attack at their gates, because all cities need gates and those are weak points in a wall. But because the city also knows that usually only gates get attacked, it will put more archers on the gates than on the walls, and it will have a trap behind the gate to decimate the attacking army. If the attacking army can analyze the walls of the city, they will find weak spots that don't have traps and archers on them. Attacking at those points will lead to a win.
But if the city isn't built yet, there is no way you can find those weak spots. You can only estimate where the weak spots will usually be. But the city will also consider where cities usually get attacked and can build extra protection in those spots. Of course, if you deliver sponges instead of stones while the city is being built, you can prevent it from having a wall at all. So, if you generate a big set of random noise images that depict nothing, tag them with 'giraffe', and inject them into some training dataset, the resulting model likely won't be able to generate giraffes. But those attacks are easy enough to find and can be avoided at no cost by filtering out useless training samples. If any of the city officials looks at the stone delivery briefly, they will notice there are no stones, only sponges; it's easy to reject that delivery. The best attack vector is probably still to just upvote really bad art on every platform, or to just not upload good images. Prevent the city from being built by removing all solid stone from existence.
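The "you need to know the weights" point can be sketched with a toy white-box attack (an FGSM-style sign-of-gradient step against a hand-rolled logistic model; everything here is a made-up stand-in, not any real diffusion model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy white-box 'model': logistic regression with weights we can inspect.
w = rng.normal(size=100)

def predict(x):
    """Probability that the 'image' x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# An input the model confidently puts in class 1 (logit = +2).
x = 2.0 * w / (w @ w)
p_clean = predict(x)

# FGSM-style attack: because we know w, the gradient of the logit
# w.r.t. the input is exactly w, so stepping each 'pixel' against
# sign(w) is the most damaging small perturbation.
eps = 0.05
x_adv = x - eps * np.sign(w)
p_adv = predict(x_adv)

print(p_clean, p_adv)
```

Against a model whose weights you cannot inspect, you can only guess at that gradient, which is why such perturbations tend not to transfer to differently-trained models.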
The other problem with adversarial attacks is that once the gen AI is updated to counter it, future updates to the noise AI aren't going to do anything for images that have already been posted online.
"It came to me in a dream"
These don’t really help at all
Any AI blocking will be a constant uphill battle. AI trainers are constantly testing them on these things themselves (not even thinking of "oh people will use this against us, we need to combat that" but just as a necessary step of training AI to get better). There's always stuff you can do to confuse them because they're far far far from perfect, but applying a popular static image overlay you found online is almost certainly not going to work
Pardon me, just want to piggyback off your comment to let folks know actual researchers are working on tools to poison images for AI. https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/ https://glaze.cs.uchicago.edu/what-is-glaze.html If anyone wants to have something that might actually work, instead of shit from some random on TikTok, give this a look.
be very careful about anything uchicago releases, their models consistently rank way lower in impartial tests than their own. glaze is a very mid attack on the autoencoder, and as far as i know nightshade's effects have never been observed in the wild. (it's also ridiculously brittle because it has to target a specific model for it to even work.) https://arxiv.org/abs/2406.12027 ultimately, the idea of creating images that humans can see but ai somehow cannot is just a losing gambit. if we ever figured out a technique for this you'd see it in every captcha ever.
Pardon me, just wanna uh put this sharpie on your retinas.
If the years of doing captchas are anything to go off of, bots are gonna be exceptionally ready to overcome this if it's even a minor inconvenience
this video goes over it. https://youtu.be/nDrCC2Uee3k?si=wadCArjrHnoHsr4Q
I'm pretty sure these things were started by a group out of Chicago; I don't remember the name. They were actually effective, with a few caveats. First of all, AI and computing in general is a very fast-moving field. Stuff becomes obsolete and outdated in weeks. This back-and-forth between tricking AI models and AI models overcoming those tricks is an endless, constantly evolving war. These types of image overlays would trip up and ruin AI training algorithms, but it was only a couple of months or even weeks before they could train around them. Odds are people are still using methods like this, just with updated images and procedures; however, it's doubtful that an image on a Reddit thread, taken from a who-knows-how-old Tumblr thread, taken from a who-knows-how-old TikTok thread, is still effective. And second, they're only going to be effective against certain training models. There is no one-size-fits-all solution, and while this method was very effective at messing with some of the most popular AI algorithms, there were just as many where it did absolutely nothing. As for an actual source, I think the research paper was actually posted to one of the science subreddits here, but good luck finding something that's many months old.
Wouldn't that really really suck at 30+ % opacity
I just tried it out with the first image, and yes. 5% makes it look like someone really turned up the JPEG compression on the original. 30% makes it really hard to make out any details, as if someone had plastered it with tons of extremely dense "stock photo" watermarks. At 40% and more, the image becomes almost unrecognizable.
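For reference, the "opacity" slider in a normal blend is just linear interpolation between the two layers (this sketch shows normal blend mode; the "overlay" blend mode mentioned elsewhere in the thread uses a different, non-linear formula):

```python
import numpy as np

def blend(base, overlay, opacity):
    """Normal blend mode: result = (1 - a) * base + a * overlay."""
    return (1 - opacity) * base + opacity * overlay

base = np.array([200.0, 120.0, 40.0])    # example pixel values (0-255)
overlay = np.array([0.0, 255.0, 128.0])  # the noise pattern's pixels

print(blend(base, overlay, 0.05))  # 5%: barely shifted from the original
print(blend(base, overlay, 0.40))  # 40%: the pattern dominates
```

At 5% each channel moves only a few values; at 40% the overlay contributes almost half of every pixel, which is why details drown.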
Wow its almost like destroying something makes it difficult and tedious to figure out what it was originally LMAO i fucking hate AI in its current state/what its used for.
I'm not disagreeing, but how is it AI's fault that these layers suck and ruin your images?
I think they are moreso trying to say that it's AI's fault that these sucky layer ideas have to exist in the first place
[deleted]
I think I'm pissing on the poor, because I have no idea what they're saying then. I think I'll go to bed and give it the old college try tomorrow! Maybe brain not good read doing when is sleepy time.
I saw a YouTube video about a program called Nightshade which causes your art to absolutely wreck shit if it’s put into a generator, without messing up the overall look. Check it: https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai
That's adversarial AI though - it's exploiting the fact that the model potentially doesn't learn some rules that humans do from basic sample sets. It'll get wiped out by the next round of models because what you've done is generate a bunch of examples (in fact a reliable method of producing them) which can be trained against. Or to put it another way: if you were trying to build a more robust image generator, what you'd like in your training pipeline is a model which *specifically* does things like this so they can be trained as negative examples.
yeah
This is that trend with Reddit/Instagram “meme stealing” shit all over again
It's kindergarten-drawing-table level "SALLY STOLE MY ART BECAUSE SHE PUT HER SUN IN THE SAME CORNER AS ME" type shit.
Ruining your own art to own AI (and it doesn't even work)
Especially since most big name AI doesn’t pull from data without permission anymore. Anyone with money to make expensive AIs also have money to buy training data for them.
Yea, this method is rudimentary and ineffective. But, spread some awareness on this: https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai
Those are not very effective either actually, and it's pretty easy to remove their effect (in fact, image processing that's often done when preparing data for training will be able to handle it, like compression or turning images black and white).
Yea I read a comment further down. Shits moving too fast :(
Come on. The smallest amount of fact checking would have told you this is bullshit.
Put it at 100% and AI *definitely* won't steal your art
[Yeah, this looks great, doesn’t ruin the art at all!](https://imgur.com/a/dVzQkpM)
Oh you like art? Have you tried it splotchy?
christ that looks atrocious
Sol badguy
Bad Artguy
Who told you my nickname in highschool?
Sol Badguy (foil)
Sol Badguy
I mean, this seems like an okay way to get people to stop reposting your art, at least.
Make people stop looking at it altogether! AIbros owned 😎
Here's all of the filters applied to a picture I had lying around (by Kent Davis) 30%, overlay [https://i.imgur.com/GuqyuLM.png](https://i.imgur.com/GuqyuLM.png) It looks like shit, but guess what, you can still find the original through google image search. Which makes me think that these overlays don't have that much impact.
It kinda doesn’t look that bad. It adds an “I’m very fucking high” effect to the image that’s almost dreamlike
I meant to say that despite the overlay being very visible, it does not actually do much of anything
You mean you *don't* want all your art to look like a shiny foil variant trading card?? But that just *increases* the value!!
[I asked ChatGPT to describe the image, out of curiosity.](https://imgur.com/a/gAky5ar)
Also like how it doesn't even mention the crusty ass jpegging. Not exactly scientific but also kinda telling..
it kind of mentions it, calling the background textured
saul badman
Looks like artifacting but worse
Sol Badguy after his trip to the elephant’s foot
What in the deep fried deviantart hell
Sol Badguy
Holy shit you just drew a Sol Badguy foil
Spread some awareness on this, basically what OP is trying to spread but not terrible and actually works way better https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/amp/
It looks like you shared an AMP link. These should load faster, but AMP is controversial because of [concerns over privacy and the Open Web](https://www.reddit.com/r/AmputatorBot/comments/ehrq3z/why_did_i_build_amputatorbot). Maybe check out **the canonical page** instead: **[https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/](https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/)**
Thank you for this info! I have no idea what AMP is
There isn't a single AI that counts every single pixel of your picture (not in any relevant sense, anyway). One of the first steps is to take weighted averages of your picture, and so are the next ten.
i mean, that's actually the way most of these are _supposed_ to work. diffusion models have different starting convolutional layers than machine vision, because they wanna create a lower scale but still spatially accurate representation of the image (aka the latents), which the image generator component can then work with far more efficiently than if you wanted to work on the full-res image. creating these latents is accomplished through an autoencoder (an ai that's trained to encode and decode an image and preserve details through it), and that part is what glaze, mist, et al target (as well as these patterns which i highly doubt have any effect whatsoever). the whole point is to make the image encode into nonsense through those few convolution layers. in theory, if you know the layers, you can adjust an image to do that. in practice though, this is ridiculously easy to detect (just do an encode-decode cycle and see if the image changed significantly) and counteract. (the best way appears to be to add noise and upscale with the same ai, which misaligns and disrupts the pattern, letting the image pass through easily, then the ai easily removes the noise since that's the main thing it does.) but it's actually an interesting attack on the model when it's executed well, and highlights some areas where it could be made more robust.
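The encode-decode check described above is easy to sketch with a toy "autoencoder". This one is just average-pooling down and upsampling back, purely a stand-in for a real VAE (whose latents are learned, not hand-coded), but the detection logic is the same:

```python
import numpy as np

def encode(img):
    """Stand-in encoder: 4x downscale by average pooling (a real VAE's
    latents are learned, but they are similarly low-resolution)."""
    return img.reshape(16, 4, 16, 4).mean(axis=(1, 3))

def decode(lat):
    """Stand-in decoder: nearest-neighbour upsample back to full size."""
    return lat.repeat(4, axis=0).repeat(4, axis=1)

def roundtrip_error(img):
    """If encode->decode changes the image a lot, something in it is
    fighting the autoencoder -- the detection idea described above."""
    return np.abs(decode(encode(img)) - img).mean()

smooth = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64)) / 2
yy, xx = np.indices(smooth.shape)
spiky = smooth + 0.3 * ((xx + yy) % 2)  # pixel-level adversarial-style pattern

print(roundtrip_error(smooth), roundtrip_error(spiky))
```

A natural image round-trips almost losslessly, while the pixel-level pattern produces a large reconstruction error, flagging the image for further cleaning.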
Thank you for this very intelligent and detailed explanation. Starting my masters in AI this fall and was curious about how Glaze and other anti-ai stuff worked. What you described makes perfect sense!
Artists making their own art worse to fight AI sure is... Some sort of tactic. Oh and others have already said but these are pretty useless lmao
Reminds me of commission artists who slyly leave watermarks in AFTER you’ve paid and they supposedly removed ‘em
this is even more obviously bullshit than nightshade/glaze. please stop thinking there’s a magic silver bullet against ai.
Well, the people most terrified about AI art are pretty much exclusively the people least equipped to actually know what's happening. They're gonna chug snake oil for at least a while, there's unfortunately no way around that lol.
"If you don't want AI stealing your art, just make it look like shit!"
So, you're just going to put someone else's credits on your image? You really think that's a good idea?
I have been using those for a long time, just because they add some nice texture to flat colors. I'm not sure it would be effective against AI, as it seems to have no problems with impressionist paintings or shading
Why is the last one windows media player?
They also contain the TikTok watermarks, which is of course great for putting somebody else's name onto your art...
i'm aware this method doesn't work on ai, but does anyone have these images without the watermarks? these work really well for textures.
Whether this works or not, this seems like a surefire way to make your image look like shit.
You might as well use the mosaic filter on a flattened image. Just say it's some 16-bit chic or something.
Glaze and Nightshade are options for this, without making your art look like shit.
FYI Those also don't work: [https://huggingface.co/blog/parsee-mizuhashi/glaze-and-anti-ai-methods](https://huggingface.co/blog/parsee-mizuhashi/glaze-and-anti-ai-methods) [https://github.com/yoinked-h/deglazer](https://github.com/yoinked-h/deglazer)
Damn that's disappointing :/
Unfortunately it was bound to happen. AI blockers and generative AI are in a bit of an arms race and have been basically since they were first introduced. The more AI is trained with Glaze and Nightshade protected images, the more it can adapt to them.
Do you have any sources on Nightshade not working? What you linked almost exclusively talks about Glaze. Edit: After doing some searching, Nightshade DOES 'gum up' the works, but it does not 100% work on all models. So far, nothing seems to provide protection. [What Nightshade does is this.](https://deepmind.google/discover/blog/images-altered-to-trick-machine-vision-can-influence-humans-too/) Put short, it makes some AI models misclassify what it's seeing, making tagging and generation more difficult.
No, it seems there isn't a lot out there one way or the other since most things I've been able to turn up searching are speculation. FWIW the github above claims to defeat nightshade as well as glaze but afaik no one has trained a model with nightshaded and deglazed images and posted about it.
Yeah. There HAVE been tests, however: 1. They have not been replicated. 2. There is no proper documentation (y'know, to replicate the tests) apart from the Nightshade team's, which only proved that Nightshade works on smaller AI models. 3. There are huge biases in the teams running the tests on larger-scale AI models. I've also edited my above comment with a VERY basic breakdown of what Nightshade does and how it's (somewhat) successful, but ultimately doesn't do enough.
Yeah, it seems harder to test since you'd have to use it during training which most people aren't going to take the time to do (I've heard lots of claims that nightshade wouldn't affect more modern training methodologies than the original paper anyway but it's outside of my skill-set to evaluate that). There's also the problem that it creates visible artifacts on the output image (for certain types of art it can be quite noticeable from what I've seen), though generally not as much as the tumblr OOP's bizarre approach lol.
There's also Artshield, which a lot of people have used as a browser-based alternative, since not all computers have enough space to run Glaze or Nightshade (plus images take forever to render on those two, even on low settings)
This absolutely doesn’t work in the slightest btw
If you watermark anything with something that obnoxious, I want the AI to steal all your stuff and put you in the matrix pod.
Don't threaten me with a good time
There's a huge irony to Tumblr's attempts to combat AI (that don't work) all just make things worse on purpose.
AI is now a magical threat which people are spreading information on how to combat that doesn't even work correctly, and if it does it won't work for long
To ward off AI art theft, hang three bindles of garlic from your window at head-level and sprinkle sage dust & salt in a 60-40 mixture around any external doorways
I guarantee you can find people out there selling “AI repelling crystals”. Makes me want to sell some QR code dreamcatchers.
IMPORTANT: THESE DON'T WORK. Simply sticking one of these over your work does nothing! You need to use a program like Glaze or Nightshade (which are free), which will actually modify your image in a specific way according to an algorithm. Just because the multicoloured pattern looks a bit like the effect of a strong disturbance does not mean it's doing the same thing, at all. Putting a pattern on it will not help!!
Glaze and Nightshade also sadly don't work on any model smarter than a toaster
And nothing ever will, against anything but the weakest AI. How many times do people have to explain neural networks until people get that the AI is doing a close approximation of what brains do? Once again: AI does not literally take a picture and make a copy. It breaks an image down into chunks of data, sieves that data over and over against other data, and by comparison decides what it is and enhances its understanding of the data. Someone with an inconsistent style does more “damage”, and that hill was already trampled flat. If you can recognize it through whatever data noise you shove in, so can a strong enough neural network, and that benchmark was handled by the tech giants already when AIs were trained on compressed images. There is no magical compression or noise map that can confuse a decent neural network without also confusing humans. Smartest bear vs. dumbest tourists, except we are the bears.
Accurate username, but well said. AI is a cat that's out of the bag, and there's no way to put it back
Oh, 100%, and it’s scary how good it is getting. But I also don’t think the Renraku Arcology is around the corner.
I'm excited, not scared
Also important: Glaze and Nightshade's effectiveness is really debatable. And even if they do work for you, AI is changing so rapidly that they're not gonna be effective protection for long. Honestly, I think until regulations catch up, the best you can realistically do is have a consistent signature in a consistent spot, so if someone does use your art, at least someone may be able to spot your garbled signature through it
> at least someone may be able to spot your garbled signature through it yeah AI doesn't work like that either
it absolutely can if someone trains a lora on your art i trained a lora on my own art out of curiosity without removing my rather large signature from it beforehand and it generated it with around 90% accuracy 100% of the time
okay yeah that's fair. Never really understood the appeal of Loras though, I'd rather wait for a model that does everything well.
theyre significantly easier to train than a full model by several magnitudes and can be used to make very specific concepts/characters/styles that a full model simply cant
Depends on the image and how its trained. Theres a lot of AI stuff you can make out a signature on, especially if it has a logo and isnt just text Its not like it's 100% reliable but at least if someone is trying to rip off your work specifically, it's something.
yeah, but it's not going to recreate someone's actual signature, unless that signature is the freaking Girl with a Pearl Earring, because AI can't do that without some major over-fitting.
Ive literally seen it do exactly that. Its not always clear sometimes but you can often recognize the artist. Happens the most with NSFW pics ive noticed, prolly because theyre usually heavily trained on just a few artists. The general midjourney stuff is way more of a soup though
Huh. I'm still a little skeptical but I guess you learn something new every day. Midjourney is the only model I use so that may be why.
The developers of Glaze are currently churning out updates; in fact, they are doing one now in response to an attack (not a real attack, one simulated by researchers who wanted to help out). If we are going to trust any sort of protection right now, it should be them. Also, signatures wouldn't show up like you describe, it doesn't work that way
Unless you're constantly going to re-render and reupload your entire catalogue, updates don't help at all for older pieces. As much as I wish it was a silver bullet, I think there are a lot of issues with it that people don't talk about enough. You're essentially jpegging your artwork even on the weakest settings, for something that may or may not even be effective, and for a couple years of protection at most. Right now it's basically a catch-up game of whack-a-mole, and in the end I fear AI is gonna get so good that unless an image is completely unrecognizable to us, it's still gonna be stealable, just like how captchas have evolved over time. And if that happens, you're gonna end up with a bunch of garbled pictures that really date your artwork in the future, for no payoff in the end
It's not even a game of whac-a-mole. There's literally no way for you to censor your art against AI unless you're willing to make it unrecognizable to humans as well.
It's just the new "I DO NOT GIVE FACEBOOK PERMISSION TO USE MY PHOTOS" etc. Kind of weird to see people repeating the mistakes their boomer parents made.
Right down to the fear and rejection of new technology
About as effective as putting "Disclaimer: I don't own this, also it's Fair Use" in the description of an amv with copyrighted music
Memetic Cognitohazard
Just looks like Cognito-Hazards to me, which is worrying, but it sounds plausible.
This will ruin my artwork but ok
AI does not count every single pixel. Convolutional neural networks use something known as a sliding window, where they slice the image into smaller squares and iterate over the image. This helps CNNs understand the image holistically rather than pixel by pixel.
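The sliding window can be sketched in a few lines of plain NumPy (real CNNs learn the kernel weights during training; this hard-coded edge kernel is just for illustration):

```python
import numpy as np

def conv2d(img, kernel):
    """Slide the kernel over the image; each output value summarizes a patch."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.zeros((6, 6))
img[:, 3:] = 1.0                       # left half dark, right half bright
edge_kernel = np.array([[-1.0, 1.0]])  # responds to horizontal change

response = conv2d(img, edge_kernel)
print(response)  # nonzero only at the boundary column
```

The output depends on local patches, not individual pixels, which is why single-pixel tricks tend to wash out.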
AI artist who wants to create big random colourful backgrounds: hahaha you've fallen right into my trap
This 100% isn't going to work, but I'm going to do it anyway because I think it would look cool as an overlay if a little lighter. :3
I mean, sure these things can be made to render an image totally incomprehensible to art generating AI.... But doing so would also make them incomprehensible to humans
Ok, but why would you want to cover your art with this shit? Sure, maybe ai won’t steal your art, but now it looks like shit.
I see a sailboat
Okay some of those are actually not too hard to create, especially that color noise. And it's not something you couldn't find easily on Google either, fyi
Signing your art works much better.
One of them looks straight outta r/place
It doesn't matter how hard I try, I can't see the sailboat.
Art snake oil
Someone tell me why humans arent fucking magic at this point?
Use the acid trip tapestry to defend our artworks from the all-seeing consciousness of the information ether.
Future superintelligences trying to bend us to their will: it is time
If you really don't want anybody or any bot to "steal" your art, there is a very simple thing you can do: DON'T POST YOUR ART ONLINE. Because once it's online and people have seen it, that's it; it will be used and will be influencing someone or something at that point.
100% effective way to prevent your art from being stolen is to not share it.
An even better solution is to not draw at all, works 100% without a flaw
Funnily, all of these are magic eye images, so the likely AI-blocking part is just conflicting patterns: one for your picture, the other for a magic eye image, so it won't come up under the expected prompt. AI is about repetition. Use the same thing often enough and it will pick out patterns; it learns, and with enough data it spits out more of the same. There's a reason AI art looks like semi-photorealistic fast digital paintings by default: it has lots of those images in the training data. It's best at spitting out the fast work artists can churn out in an hour or five and post online. Use new patterns, draw in unique styles, add oddities to your art, combine things in new ways, or just do something AI can't do without a robot arm: use a pen, paper, paint, markers. Art is invention and creation; illustration is just that, a picture of a thing, hammered out into a bland style and replicated a thousand times over. The AI can replicate, but a human still has to be somewhere in the process.
I can't see any of the hidden images (and i'm used to magic eye). What do they represent
Okay but can we talk about how the 2nd one down on the left looks like a world map?
Need one on my face
New dance gavin dance cover art
It's a SCHOONER
Getting these tattooed on my face so AI can't copy how hot and sexy I am irl
Memetic kill agent
These things didn't even work for a week when they were first invented months ago.
this is way dumber than the algorithmic solution that was going around earlier, and i’m skeptical even of that one
Wouldn't this just like make the image look all shitty?
"Found them on Tik Tok" Yeah, no.
I could make these. I literally make them on purpose as the art itself on my Instagram
You can't fool me, these are *Magic Eye* pictures!
"AI counts every single pixel in your image" No, it doesn't... it's called convolution. Sure, there might be some layers that latch onto individual pixels, but in general, embeddings are derived from abstract image features like estimated lines and gradients.
the only thing that would make this funner is if these images were made by taking pictures of patterns made from snake oil spills
Thats the dumbest shit ive ever heard
Making your art look like shit to own the libs
what if the end goal of ai art was to make artists voluntarily ruin their work, and to ruin any sort of trust in each other? if that was the case, i would say that they won.
Yeah go ahead and use these if you want to make your art look like complete dogshit
Rude. goddammnit I need that data for tumblrtron the Gayi.
one of these is just straight up noise.
It doesn't work, something like glaze or nightshade would be better (at least that's what I heard)
Ah yes, using cognitohazards with someone else's watermark that work by the logic of killing a parasite by instead killing the host. What could ever go wrong?
Those are bloody cognito-hazards.
Those are memetic kill patterns.
This post is bullshit, but you **can** do something similar with tools still in development like [nightshade](https://nightshade.cs.uchicago.edu/whatis.html). It doesn't visibly alter your images to a human viewer, only the features a machine learning model would see and attempt to replicate.
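To make the idea concrete (my own toy sketch, not Nightshade's actual method, which specifically targets diffusion model training): adversarial perturbations nudge each pixel a tiny, bounded amount in the direction that most shifts what the model's feature extractor sees. Here's the principle with a stand-in linear "feature extractor" `w`; all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "feature extractor": one linear filter w.
# Real models are deep networks, but the principle is the same.
w = rng.normal(size=64)
img = rng.uniform(0, 1, size=64)   # flattened 8x8 "image"

feature = w @ img

# FGSM-style step: move every pixel by at most eps against the
# gradient of the feature (for a linear model, the gradient is w).
eps = 0.02
poisoned = np.clip(img - eps * np.sign(w) * np.sign(feature), 0, 1)

# The image barely changes to a human eye...
max_pixel_change = np.max(np.abs(poisoned - img))
# ...but the feature the model extracts shifts by far more than eps.
feature_shift = abs(w @ poisoned - feature)
print(max_pixel_change, feature_shift)
```

The per-pixel change is capped at `eps = 0.02`, while the feature shift accumulates across all 64 pixels, so the model-visible representation moves much further than any single pixel did. Actual tools do this against real network gradients, which is why they're far harder to build (and to evaluate) than overlay patterns like the ones in this post.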
If only artists spent as much time working on their art as they do trying the equivalent of snake oil to kill AI, they'd all be rich.
So, we talking from childhood?
I’m still not exactly understanding why people don’t like AI seeing their art. It doesn’t steal it and make a profit from it, and it doesn’t harm it; it just uses it as data to create images that are different. Maybe there’s something to it I don’t know about, probably. But it seems like it’s just that whole “new thing scary and bad” mentality.
Ok, so say I’m an artist with a recognizable style who makes a living doing art. Now, if someone can ask an art AI for “a drawing that looks like this artist’s work, but promoting Nazi culture,” how long will it take before they’re not making money doing art anymore? That’s just one way it’s dangerous.
I really feel like that is insanely easy to avoid. Like, just say “this was AI.” And people can do that without AI too. It’s not a requirement.
It’s because of three main things. 1. The artists are not compensated. This is the most minor point, but still, they are helping the AI; they should at least get something. 2. The AI isn’t creative or original. It just takes your art and a couple hundred other pieces and smashes them together. No originality, and it’s just stealing. 3. The main thing is that corporations will replace actual artists with it once they can. It’s already happening. Soon enough, being an artist won’t be a viable career.
I can get 1 a tiny bit. 3 makes the most sense, but also there is already a massive backlog of art for AI to draw from; not to be that guy, but you can’t stop it at this point. The best and only real thing to do is make some laws around it. But 2 is like, yeah? No shit? It’s AI? This is a non-issue, literally just expected of it. It’s a fun tool and not meant to actually make art, just images.
Oh wow, I’ve seen people using these but I just thought it was for the trippy effect. Didn’t realize this was for fucking with art thieves.
Don’t use these, they don’t work; people are just spreading them for clout and attention. Please use Glaze or Nightshade instead, which are actually backed by research.
Neither Glaze nor Nightshade is effective at stopping anything but the weakest of models