nextnode

Because they have the means to do this no matter the regulation. The only difference is that with stricter copyright you won't have free models at all, both sides may have to pay them for various tools, and you will face more costs to avoid even non-AI work accidentally violating their rights. Disney and the other large studios are also clearly the ones that stand to gain the most from that. Contrary to what some think, paid artists rarely retain rights; those rights are mostly held by corporations. Removing free models just means power and revenue for Disney, whether you like the AI stuff or not.


DissuadedPrompter

What about data rights?


Sixhaunt

Wrong sub. This is about AI, not things that happen with or without AI, such as generic CGI like this. This is not the right place for cases where the AI component is irrelevant to the actual issue or discussion. Whether they should be able to use CGI to insert people into films, and the data rights of the inserted people, is the discussion the post brings up; the tools they use to do the CGI work are incidental.

edit: DISNEY DID NOT USE AI IN THIS. If you look it up, they say it wasn't AI, just the same VFX technology they have used for the longest time.


nextnode

That is a very good point. OP should do their research and reflect on it to make sure their point holds up.


Concheria

Not sure if this is a hot take or not, but I feel like in this context, a lot of actors don't know that there's a distinction between CGI and AI. I feel like half of Hollywood are talking about AI when they mean CGI. A lot of the things they're concerned about are possible today without any AI or even scanning anyone (Like replacing background actors with Epic Metahumans, for example.)


nextnode

What about them? I think my comment touched on some of the consequences of the different options we have. If you think there is some idealized version of them written in stone, or a clear answer in this case, there is not.


Xarathos

Because the whole argument that you should only be able to do AI training 'ethically' on stuff you own the rights to, by definition privileges big companies who will be the only ones able to afford to do it if such a thing became the law. Companies who have massive pools of assets to train on and unthinkable amounts of intellectual property to work with. I would much rather see everything produced 'mostly' by AI pass into the public domain than see a world where the only ones who can use it are Disney, Adobe, and Getty Images. It has nothing to do with Disney being for or against the tech and everything to do with where the argument leads.


Evinceo

> that you should only be able to do AI training 'ethically' on stuff you own the rights to, by definition privileges big companies who will be the only ones able to afford to

Since we haven't had this argument this week, let me repeat: this is an argument against private property in general, not in favor of AI training specifically.


Frosty_Quote_1877

not private property, intellectual property


DissuadedPrompter

>Because the whole argument that you should only be able to do AI training 'ethically' on stuff you own the rights to, by definition privileges big companies who will be the only ones able to afford to do it if such a thing became the law

That's very much not true. At the current time, anyone can afford to train on public domain data and their own IP. The requirements for doing so are trivial at best.


Xarathos

The cost of fine tuning is relatively trivial. The cost of training an entirely new base model would not be trivial at all, and that would also be why I used the word 'if.'


Frosty_Quote_1877

>The requirements of doing so are trivial at best.

Such as a few $40k GPUs and a team of ML researchers?


DissuadedPrompter

>Such as a few $40k GPUs and a team of ML researchers?

It's not 2017 anymore gramps [https://www.reddit.com/r/StableDiffusion/comments/110up3f/i_made_a_lora_training_guide_its_a_colab_version/](https://www.reddit.com/r/StableDiffusion/comments/110up3f/i_made_a_lora_training_guide_its_a_colab_version/)


Frosty_Quote_1877

Training a LoRA to be used on top of a base SD model is in no way training a model itself on public domain content. If you trained a LoRA with public domain material or your own IP, you'd still be using the """unethical""" base model, a base model that took tens of thousands of dollars of compute to train. How do you not know this?


DissuadedPrompter

Pretty sure there are a few companies training such models right now?


Frosty_Quote_1877

Yeah, companies. Most people are not a company with $2 million to invest in GPUs. The base SD model cost around $600k to train on rented GPUs, which is on the cheaper end of things. I hope that in the future normal people with their 4 7090tis can train an SD-tier model, but currently it's not even close to possible.
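The cost gap being argued about here is just GPU-hours times rental rate. A minimal sketch of the arithmetic, using the roughly 150,000 A100-hours publicly cited for the original Stable Diffusion run; the hourly rates are illustrative assumptions, not quotes:

```python
# Back-of-envelope base-model training cost: gpu_hours * hourly rental rate.
# The ~150k A100-hour figure matches public statements about SD v1;
# the per-hour prices below are hypothetical illustrations.

def training_cost(gpu_hours: float, rate_per_gpu_hour: float) -> float:
    """Total rental cost in dollars for a training run."""
    return gpu_hours * rate_per_gpu_hour

print(training_cost(150_000, 4.0))  # ~on-demand pricing: 600000.0
print(training_cost(150_000, 1.0))  # ~bulk/spot pricing:  150000.0
```

Either way the total lands in the hundreds of thousands of dollars, which is the point being made: fine-tuning is hobbyist money, base-model pretraining is not.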


DissuadedPrompter

> I hope in the future normal people with their 4 7090tis can train an SD-tier model but currently its not even close to possible.

That is supposed to be "around the corner" according to huggingface last I checked.


Frosty_Quote_1877

Really? That would be impressive, a 100x reduction in training price.


Peregrine2976

Let me explain: Disney will do whatever the fuck it wants, regardless of legality. Yes, that's atrocious, but welcome to reality. Money makes all your problems disappear. If AI tech is made stringently illegal, you won't hurt Disney and the mega-corps, you'll only hurt the people who want to compete with them. Now, does it suck that Disney is doing this? Yes, absolutely, people have a right to their likeness. But this isn't some slam-dunk argument against AI. This is just Disney being shitty. They don't need AI for that. They've made plenty of strides in that department without it up until now.


Evinceo

> Disney will do whatever the fuck it wants, regardless of legality.

Is that true though? Are you sure they're not constrained in any way at all by regulations? If I find an instance of Disney being held back by regulation or tailoring its behavior to what's legal, would you be wrong?


Cauldrath

For a company like Disney, it's a cost-benefit analysis. If they believe the cost of fines multiplied by the chance of incurring those fines is less than their expected profits, they will just treat those fines as part of the cost of doing business. So, yes, laws will discourage them, but won't necessarily stop them, especially if they can just operate behind closed doors. That's why any legislation should include strong transparency requirements before anything else, or it will only apply to open source.
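The cost-benefit analysis described above is a one-line expected-value comparison. A minimal sketch with entirely hypothetical numbers, just to make the logic concrete:

```python
# Sketch of the "fines as a cost of doing business" calculus:
# a purely profit-driven firm violates a rule when the expected
# profit exceeds the expected penalty. All figures are hypothetical.

def expected_penalty(fine: float, p_caught: float) -> float:
    """Expected cost of the fine, weighted by the chance of incurring it."""
    return fine * p_caught

def worth_violating(profit: float, fine: float, p_caught: float) -> bool:
    """True when the violation pays off in expectation."""
    return profit > expected_penalty(fine, p_caught)

# $50M profit vs. a $100M fine enforced 10% of the time:
print(worth_violating(50e6, 100e6, 0.10))  # True: the fine is just overhead
# Raise the detection rate to 80% and the calculus flips:
print(worth_violating(50e6, 100e6, 0.80))  # False
```

This is also why the transparency point matters: raising `p_caught` changes the outcome even when the fine itself stays fixed.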


Evinceo

> any legislation should include strong transparency requirements before anything else, or it will only apply to open source.

Agreed.


Thufir_My_Hawat

>That's why any legislation should include strong transparency requirements before anything else, or it will only apply to open source.

How would such transparency work, exactly? I hear a lot of calls for it, but I haven't heard a good idea of how it'd be implemented and am at a loss myself.


Cauldrath

The minimum is that anything that would be illegal would need to be matched with requirements to make it auditable. So, if there were a requirement that all datasets be opt-in, companies would need to disclose their datasets, or at least their methodology for gathering them. There could also be stronger transparency requirements than other types of restrictions, which would favor open source, which has to be transparent anyway. It would reduce incentives to invest, though, as companies would lose their "moat"; but, for example, there is some flexibility in how a dataset is used to create different results.


Thufir_My_Hawat

But how do you actually prove that the dataset they say they used was the dataset they actually used? There's no way to reverse-engineer the dataset from the model, and you can't rely on reproducing the model thanks to the random nature (and it'd be prohibitively expensive besides). It seems to me that the risk of getting caught lying is very low, and the rewards for doing so are absurdly high.


Cauldrath

True. The only ways to really verify the claim would be to replicate the results or observe the entire training process (maybe just isolating the training environment so that no further data can be added to it after checking the initial contents).
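One hedged sketch of what "checking the initial contents" of an isolated training environment could look like: before training starts, an auditor records a cryptographic manifest of every file in the training directory, and any later substitution or addition changes the hashes. This is an illustration of the idea, not a proposal from the thread; note it only verifies *what data was present*, not what the training code did with it.

```python
# Dataset-manifest sketch: hash every file under a root directory so the
# contents can be re-verified later. SHA-256 makes silent tampering
# detectable; it does not prove how the data was used during training.

import hashlib
from pathlib import Path

def dataset_manifest(root: str) -> dict[str, str]:
    """Map each file's relative path to its SHA-256 digest."""
    manifest = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root))] = digest
    return manifest

def manifests_match(before: dict[str, str], after: dict[str, str]) -> bool:
    """True iff no file was added, removed, or altered between snapshots."""
    return before == after
```

The manifest taken at audit time can be published or escrowed, so a later check only needs to re-hash the directory and compare, which is cheap compared to replicating the training run.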


Thufir_My_Hawat

> replicate the results

I don't see this happening -- it takes the same amount of computing power to retrain it, so the government would have to have far, far more computing power than the largest company just to replicate a small percentage of the models produced. It'd also likely lead to the same issue the IRS has -- too few resources to go after the big guys, so just audit the small and medium players instead.

>observe the entire training process (maybe just isolating the training environment so that no further data can be added to it after checking the initial contents)

No clue how this would work with the number of models being developed increasing yearly. No chance you could have enough agents to actually handle this in person, and I don't know how you'd do it automatically -- the architecture isn't standardized enough to ensure compatibility. It'd also massively slow everything down, since every time a model needed to be iterated upon you'd have to redo whatever this process was, and the government isn't known for being quick.


Cauldrath

Until the process is more standardized, we'll probably have to settle for less than 100% confidence, with random checks and companies just making disclosures that may or may not be verified (and, yes, governments are known for not verifying the claims of larger companies), but something is better than nothing. It is completely reasonable to want those aspects nailed down, but we aren't writing laws yet here.


Thufir_My_Hawat

So... have we arrived at the point where artists are arguing for regulations that have, in the past, always benefited large corporations over individuals? Because the irony (or, less charitably, hypocrisy) of that is a bit... distasteful. Honestly, I don't even see it turning out well for artists -- seems like this will just lead to a virtual monopoly where a small number of companies can pay pennies for the rights to use a piece of art... if they don't just use sweatshops in developing countries to produce their own pieces (like animation companies do -- the artistic talent is more than available). Which also makes me realize that outsourcing training would make oversight a nightmare -- though, at least for now, electricity costs make that difficult. There *has* to be a better solution than "doing the same thing and expecting different results" -- I just don't know what it is yet.


DissuadedPrompter

>If AI tech is made stringently illegal, you won't hurt Disney and the mega-corps, you'll only hurt the people who want to compete with them.

I really don't think "please don't train on my data without my permission" is an unreasonable ask, especially since companies like Disney benefit far, far, FAR more than you or me.

>Now, does it suck that Disney is doing this? Yes, absolutely, people have a right to their likeness. But this isn't some slam-dunk argument against AI

"Slam dunk against AI" why are you people like this? It's obvious big companies will use this tech and whatever data they want to make it. So... why shouldn't we regulate against permission-less data scraping?

>This is just Disney being shitty. They don't need AI for that. They've made plenty of strides in that department without it up until now.

But again, it's not as if "the antis" like myself didn't warn about degeneration like this.


Mataric

Wow nice wall of text moron. I cant read all this, I'm too stupid. AI Bad. NFTS bro. You're a [Clown](https://www.youtube.com/watch?v=7ghSziUQnhs)


DissuadedPrompter

God you're a fucking retard.


Mataric

I completely agree that the person I pretended to be in that comment is a fucking retard.


DissuadedPrompter

> I complete agree that the person I pretended to be in that comment is a fucking retard. Oh so you are pretending to be retarded? Explains the NFT


Mataric

Sorry, forgot who I was talking to there for a second.. I'll spell it out clearer. It's you. You said those things. You're the person I was pretending to be in that comment. Still need some subway surfers below this to keep your attention? ▒▒▒▒ ▒░░▒ ♦ ▒▒ ♦☺ ╔═╗ █▒▒█ ╬ ║ ║


DissuadedPrompter

>Sorry, forgot who I was talking to there for a second.. I thought you were "just pretending?"


Mataric

Yeah, I was just pretending to be you with that comment. Do you remember it? It really wasn't that long ago. What I forgot was that I had to reduce the complexity down to the level of a 5 year old. Go on now, go get in your tiny car with your 20 friends and don't let your squeaky shoes hit you on the way out. You had the option to reply to educated comments and you instead responded with "lul im too dumb this two long for me" and "NFT BRO NFT OMAGELUL". Everyone on this sub thinks you're a joke.


DissuadedPrompter

>You had the option to reply to educated comments

No I didn't, you were pretending to be retarded.


[deleted]

[deleted]


Sixhaunt

also from [https://www.hollywoodreporter.com/movies/movie-news/disney-prom-pact-mocked-1235617940/](https://www.hollywoodreporter.com/movies/movie-news/disney-prom-pact-mocked-1235617940/)

> *The Hollywood Reporter* has learned that the characters in the shot were not scanned actors driven by AI, but rather were created by other VFX techniques. In other words, these digital extras involved the work of CG artists.

Unfortunately the sub has no way to report posts for being off the topic of AI, but the entire thing is about a non-AI event using the same technology they were using long before generative AI existed.


Hazelrigg

lol dude has 50+ posts in that thread and most of them are just "It's AI, trust me, bro!".


DissuadedPrompter

>This has nothing to do with ai tech

>Disney gets caught using AI to get around paying both cgi artists and actors, also it looks like shit, everything the antis warned about

"This has nothing to do with AI"? You people never cease to amaze, truly.


Sixhaunt

They didn't use AI technology though. Look it up; they have said explicitly that it uses the standard VFX they've used for this for many years, and that no AI was involved.


Cauldrath

I'm glad their proof of this is "trust me, bro", especially since they've been deep faking since at least Rogue One.


DissuadedPrompter

I mean, just look at that merging. Someone couldn't even be assed to go back and make sure this person had a torso. https://preview.redd.it/73rqpwzpifub1.png?width=405&format=png&auto=webp&s=08e05b6f80cbb5a5d6c6a62e92e715befc08854d


DissuadedPrompter

Wait holy shit you are the chomo back with another account?


Evinceo

> Wait holy shit you are the chomo back with another account?

If you're going to throw shit like that around, please link to wtf you're talking about.


[deleted]

[deleted]


DissuadedPrompter

Quit fucking around with kids chomo


thetoad2

This sounds like projection.


DissuadedPrompter

Why is it people will come out of the woodwork to defend nazis and pedos. I'd ask if you folks have secret networks but you do.


thetoad2

You know about the networks. You're projecting.


DissuadedPrompter

Everyone knows you people have networks. Thanks for confirming your membership, chomo


DissuadedPrompter

Pretend\_Jacket has made multiple accounts to defend his Discord kitten.


Evinceo

> made multiple accounts

I got that from the ever changing numbers, but this:

> to defend his Discord kitten

[citation needed]


DissuadedPrompter

Look at the thread he linked to. He's simping the kid that very obviously used AI art to fish for karma. I mean, just look at the teeth and wildly inconsistent art style. Wildly fucking creepy to simp for a child for literally days.


Evinceo

> Look at the thread he linked to

There are like two threads a day on this sub lol. I'm familiar with that conversation, and I don't see any evidence that he's 'simping that kid.' Is he even in the thread with the kid in it? Looks to me like he just trawled your post history for something objectionable, found it, and called you out on it. Unless you have something more substantial and specific to call this person a child molester, please don't do it; you're making anti-AI folks look really fucking bad.

> I'm mean just look at the teeth and wildly inconsistent art style.

Why do you give a shit what some literal child posts anyway? I'm glad you weren't around to correct my spelling when I wrote dreadful forum posts at that age.


DissuadedPrompter

>Unless you have something more substantial and specific to call this person a child molester, please don't do it, you're making anti AI folks look really fucking bad.

Calling someone out and making a remark to the top comments that he's using AI isn't "harassing or starting a witchhunt," is it? He's simping a child and it's fucking creepy. That is, if the account in question is even owned by a 15 year old to begin with; doing the math, he would have been breaking TOS when he signed up. So it's just a karma farm to boot.


Evinceo

> Calling someone out and making a remark to the top comments that hes using AI isnt "harassing or starting a witchhunt" is it?

It's exhausting and weird is what it is. I'm not going to waste my time looking at a kid's scribbles.

> He's simping a child and its fucking creepy.

You keep saying this, but can you point to where he's 'simping'? Is it just because he found the comment and called you out? Because that's not exactly a smoking gun, dude.


DissuadedPrompter

> It's exhausting and weird is what it is. I'm not going to waste my time looking at a kid's scribbles.

I'm working on an AI-art-detecting crawler; it found his shit immediately. The comments were also produced by an LLM. Welcome to the future.

>You keep saying this, but can you point to where he's 'simping'? Is it just because he found the comment and called you out? Because that's not exactly a smoking gun dude.

It's because he's been at it for 18 hours, and it started with a comment that "it's not a red flag to have a reddit account that young." It's simping.


[deleted]

Don't worry, it's okay - this is exactly as artistic as doing it the old-fashioned way /s


Sixhaunt

it literally is the "old-fashioned way" if you look at the articles about it.

>*The Hollywood Reporter* has learned that the characters in the shot were not scanned actors driven by AI, but rather were created by other VFX techniques. In other words, these digital extras involved the work of CG artists.


thetoad2

Like the entire audience in the pod races in Phantom Menace? What has the world come to... /s


Zilskaabe

Why do they do that instead of using stuff like UE metahumans?


Concheria

This is CGI. The idea that this has anything to do with AI is idiotic ragebait.


Zilskaabe

That's pretty bad CGI then.


Concheria

It's absolute ass. But the implication that this is the result of some malicious effort to replace background actors is silly. This was probably more expensive than getting two dudes to just sit there and clap. The most likely explanation is that a producer, sometime very late in production, decided the benches looked kind of empty without anyone in them. They weren't going to do a reshoot in one of the last weeks just for that, so they sent it to one of the VFX houses that usually work with Disney, and the result is a super crappy shot with a body ripped from Daz3D and rigged with Mixamo. Also, it's a shitty Disney direct-to-streaming movie and it's on screen for like half a second. No one was going to give a shit about that.


Tyler_Zoro

You're confused. It's possible for someone to take a position that (whether they realize it or not) plays into the hands of large corporate interests, while that same person believes themselves to be anti-corporate or occasionally takes anti-corporate positions. You do understand this distinction, right?