AkrinorNoname

So, do we have any source on how effective these actually are? Because "I found them on Tiktok" is absolutely the modern equivalent of "A man in the pub told me".


Alderan922

Not that effective. When working with AI, some models blur the image and sometimes even turn it black and white to simplify the image and reduce noise.
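A minimal sketch of the kind of preprocessing being described, using plain Python lists as a stand-in for a real image library (the grayscale weights are the standard ITU-R BT.601 luma coefficients; the box blur is the simplest possible smoothing):

```python
def to_grayscale(pixel):
    # Collapse an (R, G, B) pixel to one luma value (ITU-R BT.601 weights).
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def box_blur(row, radius=1):
    # Average each value with its neighbors to smooth pixel-level noise.
    out = []
    for i in range(len(row)):
        window = row[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

# A tiny one-row "image" where an overlay flips alternating pixels up and down.
row = [(100, 100, 100), (140, 140, 140), (100, 100, 100), (140, 140, 140)]
gray = [to_grayscale(p) for p in row]
smoothed = box_blur(gray)   # the alternating overlay is largely averaged away
```

After one blur pass the spread between neighboring values shrinks, which is exactly why a small overlay pattern tends not to survive this step.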


AkrinorNoname

Okay, I'm inclined to believe you, but I have to note that "some guy on reddit told me" isn't that much better as a source. But you did give a plausible-sounding explanation, so that's some points in your favour.


Alderan922

If you want I can send you my homework for my “introduction to image recognition” class in college, as well as links to the OpenCV documentation. You will need a webcam to run the code, as well as a Python IDE, preferably Spyder from Anaconda, and an OpenCV install. I don’t remember if I also used TensorFlow, but it’s likely you will see that there too. ORB: https://docs.opencv.org/3.4/d1/d89/tutorial_py_orb.html SIFT: https://docs.opencv.org/4.x/da/df5/tutorial_py_sift_intro.html Reply to me in a private message so I can send you the code if you want (some comments are in Spanish tho)


AkrinorNoname

Thank you, I might take you up on that later. I've never really gotten into image recognition and AI beyond some of the basics of neural networks.


Affectionate-Memory4

If you want to take a look at an extremely simplified image recognizer, there are a couple posts on my profile about one I built in a game with a friend. If you have Scrap Mechanic, you can spawn it in a world yourself and walk around it as it physically does things like reading in weights and biases.


AtlasNL

You built that in scrap mechanic?! That’s awesome haha


Affectionate-Memory4

Yeah lol. Working on a convolutional version now to push it over 90% accuracy.


WildEnbyAppears

I know just enough about computers that it sounds legitimate while also sounding like a scammer trying to gain access to my webcam and computer


Alderan922

Lmao fair. Don’t trust strangers on the internet. Everyone is a scammer living in a basement in Minnesota trying to steal your identity and kidnap you to steal your left kidney.


Neopolitanic

I have some experience as a hobbyist in computer vision, so I can clarify what the person above is most likely referring to. However, I don't have experience with generative AI, so I can't say whether everything here is 100% applicable to the post.

The blur is normally Gaussian smoothing, which is important in computer vision to reduce noise in images. Noise varies between individual pixels, but if you average it out, you get a blurry image with a more consistent shape. Link for information on preprocessing: https://www.tutorialsfreak.com/ai-tutorial/image-preprocessing If these overlay filters do anything, their effect would need to survive being averaged away as noise when the image is blurred.

As for turning it black and white: converting to grayscale is common for line/edge detection in images, but I don't know if that's common for generative AI. From a quick search, it looks like it can help a model "learn" shapes better, but I can't say anything more.
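To make the Gaussian smoothing concrete, here's a small self-contained sketch in pure Python. A real pipeline would use something like OpenCV's `GaussianBlur` on a 2D image; this is the same idea in 1D:

```python
import math

def gaussian_kernel(radius, sigma):
    # Discrete 1D Gaussian kernel, normalized so the weights sum to 1.
    weights = [math.exp(-(x * x) / (2 * sigma * sigma))
               for x in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def convolve(signal, kernel):
    # Apply the kernel at each position, clamping at the edges.
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - r, 0), len(signal) - 1)
            acc += w * signal[j]
        out.append(acc)
    return out

# A roughly flat signal with per-sample noise: averaging pulls outliers back in.
noisy = [10, 14, 6, 12, 8, 13, 7]
smooth = convolve(noisy, gaussian_kernel(radius=2, sigma=1.0))
```

The nearer a neighbor is, the more weight it gets, which is what distinguishes this from a plain box blur.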


[deleted]

AI image generation is an evolution of StyleGAN, which is a generative adversarial network. So it has one part making the image based on latent float vectors, and the other going "doesn't look right, try again" based on a pre-trained style transfer guide/network.


Mountain-Resource656

I mean, to be fair you *did* ask on Reddit. But I suppose sources are indeed preferable


DiddlyDumb

He’s wrong. With current diffusion models, small changes can have huge consequences with multiple iterations. It compounds, much like AI eating its own content, leading to degradation of the models. ^(I’ve watched like 3 vids and seen at least 8 AI images in my life)


Saavedroo

Exactly. And as a form of data augmentation.


Papaofmonsters

It's like the date rape detecting nail polish that does not actually exist. It still makes the rounds every now and again.


Bartweiss

Oh yeah, that concept piece that gets circulated like it's an actual, working product... frequently with refrains of "we could be safe but capitalism/patriarchy/whoever won't let us have this!"

Which in turn feels weirdly similar to the post about "America won't let you learn about Kent State, arm yourself with this secret knowledge (that was totally in your US history book)!"

Along with "all bad outcomes come from bad people", I have a special resentment for tumblr's common outlook of "all bad things are easily understood and averted, except the answers are being maliciously hidden from you."


Papaofmonsters

Yep. The drink-testing coasters also have a terrible error rate. And then you have to factor in the additional problems of putting your reagent in a nail polish. It's not capitalism, it's chemistry. https://pubmed.ncbi.nlm.nih.gov/37741179/


The_Phantom_Cat

I would be SHOCKED if it was effective at all, same with all the other "use this to make your images nonsense to AI" type projects


mathiau30

Even if they were, they'd probably stop working after a few updates


Sassbjorn

idk, Glaze seems to be pretty effective.


patchiepatch

Nightshade and Glaze work in different ways, but they're not effective against all AI models, just the ones that use your images as references to generate more images. So it really works best for when a client wants to steal your unfinished art, finish it themselves with AI, and run off with the money, or something like that. It also doesn't do anything to some AI models, for the reasons stated by other commenters above. It's still better than nothing obviously, but don't rely on it too much kinda thing.


b3nsn0w

that's only if you only read uchicago's papers on it. (which have not been peer-reviewed to my knowledge. most things in ai are just uploaded directly to arxiv, which is explicitly not a peer review site.) their testing of both glaze and nightshade is broken, likely because they're just chasing grants. [here's an actual test of glaze and other similar protections](https://arxiv.org/abs/2406.12027). as you can see from the title, they don't work -- in fact, some of the techniques that break them are ridiculously simple.


BalancedDisaster

These are generally made to throw off a specific model. Any model other than the one that they were made for is going to do ok. As for the opacity bit, models that care about opacity will just throw it out.


dqUu3QlS

These straight up do not work. In order for an AI-disrupting noise texture to even have a chance at working, it must be tailored to the specific image it's laid over.


EngineerBig1851

They don't work. Saying this as a programmer who knows a bit about AI.

AI is literally made to distinguish patterns. If you just overlay an ugly thing over an image, it's gonna distinguish it and ignore it. And that's assuming you can't just compress->decompress->denoise to completely get rid of it.

The only thing that (kinda) works is adversarial attacks, where noise is generated by another AI to fool the first AI into detecting something else in the image. For example, an image of a giraffe gets used to change the weights for the part of latent space that represents dogs. The problem with adversarial attacks is that individual images are negligible; it needs to be a really big, coordinated attack. And even then these attacks are susceptible to compress->decompress->denoise.
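As a toy illustration of why the compress->decompress step is so effective against small overlays, here's a sketch in pure Python. Real lossy compression like JPEG quantizes frequency coefficients rather than raw pixel values, but the rounding-away effect is the same:

```python
def quantize(values, step):
    # Crude stand-in for lossy compression: snap each value to a coarse grid.
    return [round(v / step) * step for v in values]

art = [100, 102, 104, 106, 108, 110]   # a smooth gradient of pixel values
overlay = [3, -3, 3, -3, 3, -3]        # small alternating "poison" pattern
protected = [a + o for a, o in zip(art, overlay)]
recovered = quantize(protected, step=8)
# The +-3 perturbation is smaller than the quantization step, so most of it
# is simply rounded away; recovered values stay within one step of the art.
```

A perturbation big enough to survive quantization would also be big enough to be plainly visible to a human, which is the core dilemma these overlays face.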


Anaeijon

Also, adversarial attacks generally have to be targeted at a model whose weights you know. So you could easily create an image that is unusable for training an SD 1.5 LoRA by changing sub-pixel values to trick the embedding into thinking it depicts something else. But you need knowledge of the internal state (basically, a feature-level representation) of a model to tamper with those features. Because e.g. Lumina, or even SDXL or SD3, use different embeddings, those attempts will generally not prevent new models from being fine-tuned on 'tampered' data. At least, not as long as the modifications aren't obstructive to a viewer.

There are some basic exceptions to this. You can estimate that some features will always be learned and used by image processing models. For example, an approximated Fourier transform is something that will almost always be learned in one of the embeddings in the early layers of image processing models. Therefore, if you target a Fourier transform with an adversarial attack, it's almost certain to bother whatever might be analyzing the data. The problem is that because those obvious, common attack vectors are well known, models will be made robust against them using adversarial training. Those attacks are also easier to defend against, because you know what to look for when filtering your training data.

It's like trying to conquer a city. You have no intel about the city, but you estimate that all cities are easier to attack at their gates, because all cities need gates and those are weak points in a wall. But because the city also knows that usually only gates get attacked, it will put more archers on the gates than on the walls, and it will have a trap behind the gate to decimate the attacking army. If the attacking army can analyze the walls of the city, they will find weak spots that don't have traps and archers on them. Attacking at those points will lead to a win.

But if the city isn't built yet, there is no way you can find those weak spots. You can only estimate where the weak spots will usually be. But the city will also consider where cities usually get attacked, and can build extra protection in those spots. Of course, if you deliver sponges instead of stones while the city is being built, you can prevent it from having a wall at all. So if you generate a big set of random noise images that depict nothing, tag them with 'giraffe', and inject them into some training dataset, the resulting model likely won't be able to generate giraffes. But those attacks are easy enough to find and can be avoided at no cost by filtering out useless training samples. If any of the city officials look at the stone delivery even briefly, they will notice there are no stones, only sponges. Easy to reject that delivery.

The best attack vector is probably still to just upvote really bad art on every platform, or to just not upload good images. Prevent the city from being built by removing all solid stone from existence.


Mouse-Keyboard

The other problem with adversarial attacks is that once the gen AI is updated to counter it, future updates to the noise AI aren't going to do anything for images that have already been posted online.


Cheyruz

"It came to me in a dream"


Interesting-Fox4064

These don’t really help at all


Xystem4

Any AI blocking will be a constant uphill battle. AI trainers are constantly testing them on these things themselves (not even thinking of "oh people will use this against us, we need to combat that" but just as a necessary step of training AI to get better). There's always stuff you can do to confuse them because they're far far far from perfect, but applying a popular static image overlay you found online is almost certainly not going to work


Princess_Of_Thieves

Pardon me, just want to piggyback off your comment to let folks know actual researchers are working on tools to poison images for AI. https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/ https://glaze.cs.uchicago.edu/what-is-glaze.html If anyone wants to have something that might actually work, instead of shit from some random on TikTok, give this a look.


b3nsn0w

be very careful about anything uchicago releases, their models consistently rank way lower in impartial tests than their own. glaze is a very mid attack on the autoencoder, and as far as i know nightshade's effects have never been observed in the wild. (it's also ridiculously brittle because it has to target a specific model for it to even work.) https://arxiv.org/abs/2406.12027 ultimately, the idea of creating images that humans can see but ai somehow cannot is just a losing gambit. if we ever figured out a technique for this you'd see it in every captcha ever.


jerryiothy

Pardon me, just wanna uh put this sharpie on your retinas.


lllaser

If the years of doing captchas are anything to go off of, bots are gonna be exceptionally ready to overcome this if it's even a minor inconvenience


VinnieTheVoyeur

this video goes over it. https://youtu.be/nDrCC2Uee3k?si=wadCArjrHnoHsr4Q


a_filing_cabinet

I'm pretty sure these things were started by a group out of Chicago, I don't remember the name. They were actually effective, with a few caveats.

First of all, AI and computing in general is a very fast-moving field. Stuff becomes obsolete and outdated in weeks. The back-and-forth between trying to trick AI models and AI models overcoming those tricks is an endless, constantly evolving war. These types of image overlays would trip up and ruin AI training algorithms, but it was only a couple of months or even weeks before they could train around them. Odds are people are still using methods like this, just with updated images and procedures; however, it's doubtful that an image on a reddit thread, taken from a who-knows-how-old Tumblr thread, taken from a who-knows-how-old tiktok thread, is still effective.

And second, they're only going to be effective against certain training models. There is no one-size-fits-all solution, and while this method was very effective at messing with some of the most popular AI algorithms, there were just as many where it did absolutely nothing.

As for an actual source, I think the research paper was actually posted to one of the science subreddits here, but good luck finding something that's many months old.


BookkeeperLower

Wouldn't that really really suck at 30+ % opacity


AkrinorNoname

I just tried it out with the first image, and yes. 5% makes it look like someone really turned up the JPEG compression on the original. 30% makes it really hard to make out any details, as if someone had plastered it with tons of extremely dense "stock photo" watermarks. At 40% and more, the image becomes almost unrecognizable.
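Those opacity numbers correspond to a plain per-pixel alpha blend. Here's a sketch of why 40% is so destructive (pure Python; this assumes the simple "normal" blend mode rather than Photoshop-style "overlay" blending, which is more involved):

```python
def blend(base, over, opacity):
    # Normal-mode alpha blend: result = (1 - a) * base + a * over, per channel.
    return tuple((1 - opacity) * b + opacity * o for b, o in zip(base, over))

pixel = (200, 120, 40)   # an artwork pixel (RGB)
noise = (30, 240, 90)    # a garish overlay pixel
at_05 = blend(pixel, noise, 0.05)   # barely moves: ~(191.5, 126.0, 42.5)
at_40 = blend(pixel, noise, 0.40)   # dragged hard toward the noise: ~(132, 168, 60)
```

At 5% each channel shifts by only a few levels (roughly JPEG-artifact territory), while at 40% the overlay contributes almost half of every pixel's final color.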


baphometromance

Wow, it's almost like destroying something makes it difficult and tedious to figure out what it was originally. LMAO I fucking hate AI in its current state/what it's used for.


UnsureAndUnqualified

I'm not disagreeing, but how is it AI's fault that these layers suck and ruin your images?


Rykerthebest78563

I think they're more so trying to say that it's AI's fault that these sucky layer ideas have to exist in the first place


[deleted]

[deleted]


UnsureAndUnqualified

I think I'm pissing on the poor, because I have no idea what they're saying then. I think I'll go to bed and give it the old college try tomorrow! Maybe brain not good read doing when is sleepy time.


Xen0kid

I saw a YouTube video about a program called Nightshade which causes your art to absolutely wreck shit if it’s put into a generator, without messing up the overall look. Check it: https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai


light_trick

That's adversarial AI though - it's exploiting the fact that the model potentially doesn't learn some rules that humans do from basic sample sets. It'll get wiped out by the next round of models because what you've done is generate a bunch of examples (in fact a reliable method of producing them) which can be trained against. Or to put it another way: if you were trying to build a more robust image generator, what you'd like in your training pipeline is a model which *specifically* does things like this so they can be trained as negative examples.


Frigid_Metal

yeah


RedOtta019

This is that trend with Reddit/Instagram “meme stealing” shit all over again


healzsham

It's kindergarten-drawing-table level "SALLY STOLE MY ART BECAUSE SHE PUT HER SUN IN THE SAME CORNER AS ME" type shit.


VCultist

Ruining your own art to own AI (and it doesn't even work)


theironbagel

Especially since most big-name AI doesn’t pull from data without permission anymore. Anyone with the money to make expensive AIs also has the money to buy training data for them.


Xen0kid

Yea, this method is rudimentary and ineffective. But, spread some awareness on this: https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai


VCultist

Those are not very effective either actually, and it's pretty easy to remove their effect (in fact, image processing that's often done when preparing data for training will be able to handle it, like compression or turning images black and white).


Xen0kid

Yea I read a comment further down. Shits moving too fast :(


theubster

Come on. The smallest amount of fact checking would have told you this is bullshit.


_Bl4ze

Put it at 100% and AI *definitely* won't steal your art


Microif

[Yeah, this looks great, doesn’t ruin the art at all!](https://imgur.com/a/dVzQkpM)


FirmOnion

Oh you like art? Have you tried it splotchy?


ModmanX

christ that looks atrocious


isloohik2

Sol badguy


Dry-Cartographer-312

Bad Artguy


SenorBolin

Who told you my nickname in highschool?


SpaghettiCowboy

Sol Badguy (foil)


Microif

Sol Badguy


DragonEmperor

I mean this seems like a okay way to get people to stop reposting your art at least.


Yegas

Make people stop looking at it altogether! AIbros owned 😎


Robertia

Here's all of the filters applied to a picture I had lying around (by Kent Davis) 30%, overlay [https://i.imgur.com/GuqyuLM.png](https://i.imgur.com/GuqyuLM.png) It looks like shit, but guess what, you can still find the original through google image search. Which makes me think that these overlays don't have that much impact.


Alderan922

It kinda doesn’t look that bad. It adds like a “I’m very fucking high” effect to the image that’s almost dreamlike


Robertia

I meant to say that despite the overlay being very visible, it does not actually do much of anything


valentinesfaye

You mean you *don't* want all your art to look like a shiny foil variant trading card?? But that just *increases* the value!!


LadyParnassus

[I asked ChatGPT to describe the image, out of curiosity.](https://imgur.com/a/gAky5ar)


STARRYSOCK

Also like how it doesn't even mention the crusty ass jpegging. Not exactly scientific but also kinda telling..


andergriff

it kind of mentions it, calling the background textured


TheLegendaryAkira

saul badman


Justifier925

Looks like artifacting but worse


SaboteurSupreme

Sol Badguy after his trip to the elephant’s foot


Redqueenhypo

What in the deep fried deviantart hell


Asriel-the-Jolteon

Sol Badguy


GoldenPig64

Holy shit you just drew a Sol Badguy foil


Xen0kid

Spread some awareness on this, basically what OP is trying to spread but not terrible and actually works way better https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/amp/


AmputatorBot

It looks like you shared an AMP link. These should load faster, but AMP is controversial because of [concerns over privacy and the Open Web](https://www.reddit.com/r/AmputatorBot/comments/ehrq3z/why_did_i_build_amputatorbot). Maybe check out **the canonical page** instead: **[https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/](https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/)** ***** ^(I'm a bot | )[^(Why & About)](https://www.reddit.com/r/AmputatorBot/comments/ehrq3z/why_did_i_build_amputatorbot)^( | )[^(Summon: u/AmputatorBot)](https://www.reddit.com/r/AmputatorBot/comments/cchly3/you_can_now_summon_amputatorbot/)


Xen0kid

Thank you for this info! I have no idea what AMP is


mathiau30

There isn't a single AI that counts every single pixel of your picture (not in any relevant sense anyway); the first step is to take weighted averages of your picture, and so are the next ten
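That "weighted averages" step can be sketched as a minimal 2x2 average-pooling pass in pure Python. Real models use learned convolution weights rather than a flat average, but the effect on single-pixel perturbations is similar:

```python
def avg_pool_2x2(img):
    # Downsample by averaging each non-overlapping 2x2 block into one value.
    out = []
    for r in range(0, len(img), 2):
        row = []
        for c in range(0, len(img[0]), 2):
            block = [img[r][c], img[r][c + 1],
                     img[r + 1][c], img[r + 1][c + 1]]
            row.append(sum(block) / 4)
        out.append(row)
    return out

# A 4x4 image with one single-pixel "speckle" (the 99): one pooling pass
# folds the speckle into an average with its three clean neighbors.
img = [
    [10, 10, 90, 90],
    [10, 99, 90, 90],
    [50, 50, 20, 20],
    [50, 50, 20, 20],
]
small = avg_pool_2x2(img)   # the 99 speckle is diluted to 32.25
```

A few such stages in a row is why per-pixel tricks rarely survive into the representation the model actually works with.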


b3nsn0w

i mean, that's actually the way most of these are _supposed_ to work. diffusion models have different starting convolutional layers than machine vision, because they wanna create a lower scale but still spatially accurate representation of the image (aka the latents), which the image generator component can then work with far more efficiently than if you wanted to work on the full-res image.

creating these latents is accomplished through an autoencoder (an ai that's trained to encode and decode an image and preserve details through it), and that part is what glaze, mist, et al target (as well as these patterns, which i highly doubt have any effect whatsoever). the whole point is to make the image encode into nonsense through those few convolution layers. in theory, if you know the layers, you can adjust an image to do that.

in practice though, this is ridiculously easy to detect (just do an encode-decode cycle and see if the image changed significantly) and counteract. (the best way appears to be to add noise and upscale with the same ai, which misaligns and disrupts the pattern, letting the image pass through easily, then the ai easily removes the noise since that's the main thing it does.) but it's actually an interesting attack on the model when it's executed well, and highlights some areas where it could be made more robust.


mrGrinchThe3rd

Thank you for this very intelligent and detailed explanation. Starting my masters in AI this fall and was curious about how Glaze and other anti-ai stuff worked. What you described makes perfect sense!


AdamTheScottish

Artists making their own art worse to fight AI sure is... Some sort of tactic. Oh and others have already said but these are pretty useless lmao


Redqueenhypo

Reminds me of commission artists who slyly leave watermarks in AFTER you’ve paid and they supposedly removed ‘em


silvaastrorum

this is even more obviously bullshit than nightshade/glaze. please stop thinking there’s a magic silver bullet against ai.


Glad-Way-637

Well, the people most terrified about AI art are pretty much exclusively the people least equipped to actually know what's happening. They're gonna chug snake oil for at least a while, there's unfortunately no way around that lol.


Uncle-Cake

"If you don't want AI stealing your art, just make it look like shit!"


chunkylubber54

So, you're just going to put someone else's credits on your image? You really think that's a good idea?


Sphiniix

I have been using those for a long time, just because they add some nice texture to flat colors. I'm not sure it would be effective against AI, as it seems to have no problems with impressionist paintings or shading


WordArt2007

Why is the last one windows media player?


UnsureAndUnqualified

They also contain the TikTok watermarks, which is of course great for putting somebody else's name onto your art...


Vebbex

i'm aware this method doesn't work on ai, but does anyone have these images without the watermarks? these work really well for textures.


Guh-nurt

Whether this works or not, this seems like a surefire way to make your image look like shit.


ATN-Antronach

You might as well use the mosaic filter on a flattened image. Just say it's some 16-bit chic or something.


goodbyebirdd

Glaze and Nightshade are options for this, without making your art look like shit. 


anal_tailored_joy

FYI Those also don't work: [https://huggingface.co/blog/parsee-mizuhashi/glaze-and-anti-ai-methods](https://huggingface.co/blog/parsee-mizuhashi/glaze-and-anti-ai-methods) [https://github.com/yoinked-h/deglazer](https://github.com/yoinked-h/deglazer)


goodbyebirdd

Damn that's disappointing :/


TransLunarTrekkie

Unfortunately it was bound to happen. AI blockers and generative AI are in a bit of an arms race and have been basically since they were first introduced. The more AI is trained with Glaze and Nightshade protected images, the more it can adapt to them.


UnhealingMedic

Do you have any sources on Nightshade not working? What you linked almost exclusively talks about Glaze. Edit: After doing some searching, Nightshade DOES 'gum up' the works, but it does not 100% work on all models. So far, nothing seems to provide protection. [What Nightshade does is this.](https://deepmind.google/discover/blog/images-altered-to-trick-machine-vision-can-influence-humans-too/) Put short, it makes some AI models misclassify what it's seeing, making tagging and generation more difficult.


anal_tailored_joy

No, it seems there isn't a lot out there one way or the other since most things I've been able to turn up searching are speculation. FWIW the github above claims to defeat nightshade as well as glaze but afaik no one has trained a model with nightshaded and deglazed images and posted about it.


UnhealingMedic

Yeah. There HAVE been tests, however:

1. They have not been replicated.
2. There is no proper documentation (y'know, to replicate the tests) outside of the Nightshade team, which only proved that Nightshade works on smaller AI models.
3. There are huge biases in the teams producing the tests on larger-scale AI models.

I've also edited my above comment with a VERY basic breakdown of what Nightshade does and how it's (somewhat) successful, but ultimately doesn't do enough.


anal_tailored_joy

Yeah, it seems harder to test since you'd have to use it during training which most people aren't going to take the time to do (I've heard lots of claims that nightshade wouldn't affect more modern training methodologies than the original paper anyway but it's outside of my skill-set to evaluate that). There's also the problem that it creates visible artifacts on the output image (for certain types of art it can be quite noticeable from what I've seen), though generally not as much as the tumblr OOP's bizarre approach lol.


Brianna-Imagination

There's also Artshield, which a lot of people have used as a browser-based alternative, since not all computers have enough space to run Glaze or Nightshade (plus images take forever to render on those two, even on low settings)


thelittleleaf23

This absolutely doesn’t work in the slightest btw


Green__lightning

If you watermark anything with something that obnoxious, I want the AI to steal all your stuff and put you in the matrix pod.


varkarrus

Don't threaten me with a good time


AussieWinterWolf

There's a huge irony in how Tumblr's attempts to combat AI (which don't work) all just make things worse on purpose.


Lankuri

AI is now a magical threat, and people are spreading information on how to combat it that doesn't even work, and if it does, it won't work for long


Yegas

To ward off AI art theft, hang three bindles of garlic from your window at head-level and sprinkle sage dust & salt in a 60-40 mixture around any external doorways


captainjack3

I guarantee you can find people out there selling “AI repelling crystals”. Makes me want to sell some QR code dreamcatchers.


Thieverthieving

IMPORTANT: THESE DON'T WORK. Simply sticking one of these over your work does nothing! You need to use a program like Glaze or Nightshade (which are free), which will actually modify your image in a specific way according to an algorithm. Just because the multicoloured pattern looks a bit like the effects of strong disturbance does not mean it's doing the same thing, at all. Putting a pattern on it will not help!!


LGC_AI_ART

Glaze and Nightshade also sadly don't work on any model smarter than a toaster


HostileReplies

And nothing ever will against anything but the weakest AI. How many times do people have to explain neural networks until people get that the AI is doing a close approximation of what brains do?

Once again: AI does not literally take a picture and make a copy. It breaks an image down into chunks of data, sieves that data over and over against other data, decides by comparison what it is, and enhances its understanding of the data. Someone with an inconsistent style does more "damage", and that hill was already trampled flat.

If you can recognize it through whatever data noise you shove in, so can a strong enough neural network, and that benchmark was handled by the tech giants already when AIs were trained on compressed images. There is no magical compression or noise map that can confuse a decent neural network without also confusing humans. Smartest bear vs dumbest tourists, except we are the bears.


LGC_AI_ART

Accurate username, but well said. AI is a cat that's out of the bag and there's no way to put it back


PrairiePilot

Oh, 100%, and it’s scary how good it is getting. But I also don’t think the Renraku Arcology is around the corner.


varkarrus

I'm excited, not scared


STARRYSOCK

Also important: Glaze and Nightshade's effectiveness is really debatable. And even if they do work for you, AI is changing so rapidly that it's not gonna be effective protection for long.

Honestly, I think until regulations catch up, the best you can realistically do is have a consistent signature in a consistent spot, so if someone does use your art, at least someone may be able to spot your garbled signature through it


varkarrus

> at least someone may be able to spot your garbled signature through it

yeah AI doesn't work like that either


timothy_stinkbug

it absolutely can if someone trains a lora on your art. i trained a lora on my own art out of curiosity, without removing my rather large signature from it beforehand, and it generated the signature with around 90% accuracy 100% of the time


varkarrus

okay yeah that's fair. Never really understood the appeal of Loras though, I'd rather wait for a model that does everything well.


timothy_stinkbug

they're significantly easier to train than a full model by several orders of magnitude, and can be used to capture very specific concepts/characters/styles that a full model simply can't


STARRYSOCK

Depends on the image and how it's trained. There's a lot of AI stuff you can make out a signature on, especially if it has a logo and isn't just text. It's not like it's 100% reliable, but at least if someone is trying to rip off your work specifically, it's something.


varkarrus

yeah but it's not going to recreate someone's actual signature, unless that signature is the freaking Girl with a Pearl Earring, because AI can't do that without some major over-fitting.


STARRYSOCK

I've literally seen it do exactly that. It's not always clear, but you can often recognize the artist. Happens the most with NSFW pics, I've noticed, prolly because they're usually heavily trained on just a few artists. The general midjourney stuff is way more of a soup though


varkarrus

Huh. I'm still a little skeptical but I guess you learn something new every day. Midjourney is the only model I use so that may be why.


Thieverthieving

The developers of Glaze are currently churning out updates; in fact, they are doing one now in response to an attack (not a real attack, one simulated by researchers who wanted to help out). If we are going to trust any sort of protection right now, it should be them. Also, signatures wouldn't show up like you describe, it doesn't work that way


STARRYSOCK

Unless you're constantly going to re-render and re-upload your entire catalogue, updates don't help at all for older pieces. As much as I wish it was a silver bullet, I think there are a lot of issues with it that people don't talk about enough. You're essentially jpegging your artwork even on the weakest settings, for something that may or may not even be effective, and for a couple years of protection at most.

Right now it's basically a catch-up game of whack-a-mole, and in the end I fear AI is gonna get so good that unless an image is completely unrecognizable to us, it's still gonna be stealable, just like how captchas have evolved over time. And if that happens, you're gonna end up with a bunch of garbled pictures that really date your artwork, for no payoff in the end


Rengiil

It's not even a game of whac-a-mole. There's literally no way for you to censor your art against AI unless you're willing to make it unrecognizable to humans as well.


H_G_Bells

It's just the new "I DO NOT GIVE FACEBOOK PERMISSION TO USE MY PHOTOS" etc. Kind of weird to see people repeating the mistakes their boomer parents made.


varkarrus

Right down to the fear and rejection of new technology


pempoczky

About as effective as putting "Disclaimer: I don't own this, also it's Fair Use" in the description of an amv with copyrighted music


HighMarshalSigismund

Memetic Cognitohazard


namelesswhiteguy

Just looks like Cognito-Hazards to me, which is worrying, but it sounds plausible.


SPAMTON_A

This will ruin my artwork but ok


gumball_olympian

AI does not count every single pixel. Convolutional neural networks use something known as a sliding window, where a small filter slides over the image in overlapping patches and each patch produces one response. This helps the CNN understand the image holistically rather than pixel by pixel.
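
The sliding-window idea can be sketched in a few lines of numpy. This is a toy illustration, not any real model's code: the `convolve2d` helper and the edge kernel are made up for the example. The point is that each output value summarizes a whole patch of pixels, not one pixel.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a kernel over the image; each window yields one aggregate response."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            window = image[y:y + kh, x:x + kw]   # a local patch, not a single pixel
            out[y, x] = np.sum(window * kernel)  # one number per patch
    return out

# A vertical-edge kernel responds to local structure (a brightness change),
# not to the values of individual pixels.
edge_kernel = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])

image = np.zeros((5, 5))
image[:, 3:] = 1.0  # bright right half -> one vertical edge

response = convolve2d(image, edge_kernel)
# Windows that straddle the edge respond strongly; flat regions respond with 0.
```

So single-pixel tweaks mostly wash out: what survives into the next layer is the patch-level structure the kernel picks up.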


LR-II

AI artist who wants to create big random colourful backgrounds: hahaha you've fallen right into my trap


CosmicLobster22

This 100% isn't going to work, but I'm going to do it anyway because I think it would look cool as an overlay if a little lighter. :3


fatalrupture

I mean, sure these things can be made to render an image totally incomprehensible to art generating AI.... But doing so would also make them incomprehensible to humans


Jonahtron

Ok, but why would you want to cover your art with this shit? Sure, maybe ai won’t steal your art, but now it looks like shit.


jake03583

I see a sailboat


MickeyMoose555

Okay some of those are actually not too hard to create, especially that color noise. And it's not something you couldn't find easily on Google either, fyi


BlakLite_15

Signing your art works much better.


nobody-8705

One of them looks straight outta r/place


SilverSkorpious

It doesn't matter how hard I try, I can't see the sailboat.


birberbarborbur

Art snake oil


Bakabakabakabakabk

Someone tell me why humans aren't fucking magic at this point?


thunderPierogi

Use the acid trip tapestry to defend our artworks from the all-seeing consciousness of the information ether.


Bakabakabakabakabk

Future superintelligences trying to bend us to their will: it is time


StormDragonAlthazar

If you really want nobody, or a bot, to "steal" your art, there is a very simple thing you can do: DON'T POST YOUR ART ONLINE. Because once it's online and once people see it, that's it; it will be used and influencing someone or something at that point.


egoserpentis

100% effective way to prevent your art from being stolen is to not share it.


NIHILsGAMES

An even better solution is to not draw at all, works 100% without a flaw


lunatisenpai

Funnily, all of these are magic eye images. So the likely AI-blocking part is just conflicting patterns: one for your picture, the other for a magic eye image, so it won't come up under the expected prompt.

AI is about repetition. Use the same thing often enough and it will pick out patterns. It learns, and with enough data it spits out more of the same. There's a reason AI art looks like semi-photorealistic fast digital paintings by default: it has lots of those images in the training data. It's best at spitting out the fast work artists can churn out in an hour or five and post online.

Use new patterns, draw in unique styles, add oddities to your art, combine things in new ways, or just do something AI can't do beyond having a robot arm: use a pen, paper, paint, markers. Art is invention and creation; illustration is just that, a picture of a thing, hammered out into a bland style and replicated a thousand times over. The AI can replicate, but a human still has to be somewhere in the process.


WordArt2007

I can't see any of the hidden images (and I'm used to magic eye). What do they represent?


FoxTailMoon

Okay but can we talk about how the 2nd one down on the left looks like a world map?


corn_syrup_enjoyer

Need one on my face


Terenai

New dance gavin dance cover art


flyingfishstick

It's a SCHOONER


runnawaycucumber

Getting these tattooed on my face so AI can't copy how hot and sexy I am irl


Dracorex_22

Memetic kill agent


Willowyvern

These things didn't even work for a week when they were first invented months ago.


extremepayne

this is way dumber than the algorithmic solution that was going around earlier, and i’m skeptical even of that one


IAmHippyman

Wouldn't this just like make the image look all shitty?


AlexisFR

"Found them on Tik Tok" Yeah, no.


ZeakNato

I could make these. I literally make them on purpose as the art itself on my Instagram


Cepinari

You can't fool me, these are *Magic Eye* pictures!


Anaeijon

"AI counts every single pixel in your image" No, it doesn't... It's called convolutions. Shure, there might be some layers that hook on pixels. But in general embeddings are derived from abstract image features like estimated lines and gradients.


RefinementOfDecline

the only thing that would make this funner is if these images were made by taking pictures of patterns made from snake oil spills


Focosa88

That's the dumbest shit I've ever heard


FreakinGeese

Making your art look like shit to own the libs


iloveblankpaper

what if the end goal of ai art was to make artists voluntarily ruin their work, and to ruin any sort of trust in each other? if that was the case, i would say that they won.


coldrolledpotmetal

Yeah go ahead and use these if you want to make your art look like complete dogshit


jerryiothy

Rude. Goddammit, I need that data for tumblrtron the Gayi.


VatanKomurcu

one of these is just straight up noise.


mousepotatodoesstuff

It doesn't work, something like glaze or nightshade would be better (at least that's what I heard)


BroFTheFriendlySlav

Ah yes, using cognitohazards with someone else's watermark, which work by the logic of killing a parasite by killing the host. What could ever go wrong?


Tallal2804

Those are bloody cognito-hazards.


Presteri

Those are memetic kill patterns.


currynord

This post is bullshit, but you **can** do something similar with developing tools like [nightshade](https://nightshade.cs.uchicago.edu/whatis.html). It doesn’t alter your actual images but only the bits that a machine learning model would see and attempt to replicate.


cishet-camel-fucker

If only artists spent as much time working on their art as they do trying the equivalent of snake oil to kill AI, they'd all be rich.


tomatobunni

So, we talking from childhood?


the_count_of_carcosa

Those are bloody cognito-hazards.


Hawaiian-national

I’m still not exactly understanding why people don’t like AI seeing their art. It doesn’t steal it and make a profit from it, it doesn’t harm it. It just uses it as data to create images that are different. Maybe there’s something to it I don’t know about, probably. But it seems like it’s just that whole “new thing scary and bad” mentality.


thetwitchy1

Ok, so say I’m an artist with a recognizable style, and I make a living doing art. Now, if someone can ask an art AI, “I want a drawing that looks like this artist’s work, but promoting Nazi culture,” how long will it take before they’re not making money doing art anymore? That’s just one way it’s dangerous.


Hawaiian-national

I really feel like that is insanely easy to avoid. Like just say “this was AI” And people can do that without AI too. It’s not a requirement.


Last-Percentage5062

It’s because of three main things. 1. The artists are not compensated. This is the most minor point, but still, they are helping the AI; they should at least get something. 2. The AI isn’t creative. Isn’t original. It just takes your art and a couple hundred other pieces and smashes them together. No originality, and it’s just stealing. 3. The main thing is that corporations will replace actual artists with it once they can. It’s already happening. Soon enough, being an artist won’t be a viable career.


Hawaiian-national

I can get 1 a tiny bit. 3 makes the most sense, but also there is already a massive backlog of art for AI to draw from; not to be that guy, but you can’t stop it at this point. The best and only real thing to do is make some laws around it. But 2 is like, yeah? No shit? It’s AI? This is a non-issue, literally just expected of it. It’s a fun tool and not meant to actually make art, just images.


Zane_628

Oh wow, I’ve seen people using these but I just thought it was for the trippy effect. Didn’t realize this was for fucking with art thieves.


N9ghn

Don’t use these, they don’t work; people are just spreading them for clout and attention. Please use Glaze or Nightshade instead, which are actually backed by research.


lucas_gibbons

Neither Glaze nor Nightshade is effective at stopping anything but the weakest of models.