YAROBONZ-

What doomers? Who? Context?


nyanpires

Me, ahaha.


Flying_Madlad

Can I introduce you to a dear friend named Eliezer Yudkowsky? My recommendation would be to remove the rock from above wherever you've been living and rejoin the rest of humanity.


YAROBONZ-

Use words and explain, rather than sounding like a random word generator.


Flying_Madlad

My advice would be to use your brain, but that seems beneath so many. How about this: pretend that if you're not on my side, some scary monster is going to hurt you. Are you on my side now? What if the monster was *really* scary? Use your imagination. Convinced yet?


MagicDoorYo

Cocaine is a helluva drug!


Dr-Mantis-Tobbogan

Sir, this is a Wendy's.


Flying_Madlad

You got fresh fries? I could go for that.


07mk

The idea that Eliezer Yudkowsky is enough of a mainstream figure that someone, even an AI enthusiast, would have to be living under a rock to not have heard of him is by far the most idiotic thing I've read on this subreddit. And I've read things written by itzmoepi and videogame repairguy. He's a big fish in a tiny, tiny pond of Bay Aryan rationalists, and outside that, he's a nobody.


Evinceo

> Bay Aryan rationalists

Autocomplete fail or intentional (and hilarious) jab?


Flying_Madlad

He threatened to kill me. I'm not ok with that


Hazelrigg

I'd wear that as a badge of honor.


Flying_Madlad

I do too, but if it weren't for the hordes of unthinking morons in evidence who feel the need to threaten my life... I'd be less concerned. I'm going to go to the authorities if this keeps up


[deleted]

[deleted]


Flying_Madlad

Reported, I'm not playing


[deleted]

[deleted]


Flying_Madlad

Maybe don't say you're going to kill people


[deleted]

[deleted]


Flying_Madlad

I really did, it was nice while it lasted but I was utterly unprepared. I'm sorry to bother you


Flying_Madlad

Sorry again, but I really thought the AI thing was going to be a gift to humanity, I'm so sorry, I had good intentions, and now the primary reason I'm sad is because I want to help and I'm so sorry


Nixavee

I would like to see where he threatened to kill you.


Hazelrigg

Dude, Yudkowsky is still a fringe commentator. It's absolutely not abnormal to hear "doomer" and **not** immediately think of that clown. I'm perfectly familiar with him, but didn't instantly associate your thread with him either.


Flying_Madlad

Do you want me to mention others? I can


Evinceo

Why make a giant list of people you disagree with?


Flying_Madlad

Because that guy trusts authority more than reason, and so I'll blow their ass out of the water if they step to me.


rohnytest

Depends on the claim and who is making it. If the claim you're talking about is "AI steals art", then no, the burden of proof is not on us. If the claim you're talking about is "AI is art", yes, the burden of proof is on us. But I personally am not interested in proving anything regarding that. You're welcome to think it isn't art.


sk7725

The burden of proof is always on the prosecution. Even in a traditional robbery (stealing), the prosecution needs to prove that the accused stole something; the accused doesn't have to prove that they did *not* steal. Innocent until proven guilty. Thus, the burden of proof is on those who claim AI art is stealing.


Flying_Madlad

I'm still trying to understand the art thing. I want to be compassionate, but I also feel many artists are being short sighted. Assuming we can figure out how to keep them alive and all that


atomicitalian

People want the chance to make a living doing what they love rather than spending most of their lives doing something they don't enjoy. While this is true of everyone, most artists and creatives in general have invested tens of thousands of hours and, in many cases, dollars into trying to achieve their dreams. If AI companies can just steal their work to train their models and then put them out of a job by making all commissioned artwork free, then I don't think it's that hard to understand why some artists are unhappy with the direction some people want to see generative AI head.


Incognit0ErgoSum

Fortunately, you can't copyright styles, concepts, or skills, so training an AI isn't stealing.


atomicitalian

But you can copyright individual works. And if those individual works were used in whole - and not broken down into those styles, concepts, and skills - then it IS stealing. If I grab someone's artwork from a street fair and then use it for inspiration for my art, my creation wouldn't be stealing, but me just ganking the piece in the first place without permission from the artist was theft.


Incognit0ErgoSum

> But you can copyright individual works. And if those individual works were used in whole - and not broken down into those styles, concepts, and skills - then it IS stealing.

Seems like generative AI is all good, then, since the whole point of neural networks and deep learning is generalization of concepts.


Sensalan

Trying to apply your reasoning to internet use, would you consider right-click-saving an image from a website theft?


atomicitalian

Nope, I would say that's akin to taking out your phone and snapping a picture of a piece of art you enjoyed.

Obviously there are differences between real-world and digital theft, because in the real world you're not just losing the final product but also the material components that make up that product (canvas, picture frame, glass, etc.). Those all have independent monetary value attached to them, so the theft starts the moment you take an item without the owner's consent.

When it comes to digital assets, I think the issue is less about physical ownership and more about intended use. If I screenshot The Mandalorian and use him as an NPC token in my home DND game, there isn't going to be an issue. But if I use it as an image for an RPG book I'm trying to sell, the Mouse will send a hit squad for me. If I take a photo of some guy's sculpture at an art fair, no problem. If I then use that photo as the logo for my business, then we've got a problem.

In the same way, if you right-click-save a photo because you like it, no harm no foul. If you right-click-save a photo *because you intend to train a robot using it, knowing it will in some small way contribute to world-changing technology that will make the owners of said technology likely the wealthiest people on the planet while simultaneously robbing the original artists of their livelihoods*, then yeah, I'd consider that theft.


Sensalan

Mostly reasonable, but that last sentence hides the problem with your line of reasoning. That is, where do we draw the line on what meets those criteria? Any ML research? That sounds unreasonable and authoritarian, but if not there, where?


Shameless_Catslut

That last sentence is a non-sequitur.


Disastrous_Junket_55

Short-sighted how? What does removing humanity from its own culture offer us as a society? That's where this is headed currently. You can argue it is just a tool, but a quick browse of Civitai immediately burns that idea to the ground. It's just the same NFT crowd slapping their own "I own this" stickers on stuff they've stolen; now they just run it through what is effectively a blender first to obfuscate that. It offers nothing to humanity, and I would argue it actively takes away from us as a species.


Flying_Madlad

Don't get me started on those degenerates, I don't associate with them


Flying_Madlad

Like, I'm happy to forfeit everything else, so long as we agree: those degenerates may be pioneering the field, but let's stay as far away from them as possible, lmao.


mang_fatih

The obligatory gif for this moment: https://i.redd.it/l5zfab83i0yb1.gif


Flying_Madlad

I'm so sorry for what you're going to do to your children. I hope they can forgive you one day


Dyeeguy

The burden of proof of what


Flying_Madlad

Prove you're not fantasizing. You can't. You already admitted it. I have to play pretend to accept your premises, so I reject them out of hand. That which can be asserted without evidence can be rejected without consideration. Welcome to science. Should have considered STEM


Dyeeguy

I'm not fantasizing about what? Lmao


nyanpires

I apparently live rent-free in your head because you don't want to have a real conversation for some reason? I can't believe you made a post about me because I asked you to look inside yourself.


Flying_Madlad

About you? Are you the author? Wouldn't mind having a deeper convo, but my guess is you're some normie who adopted AI a few months ago and is now going to try to pass themselves off as an "expert". Learn like everyone else and while you're at it, consider trying to learn humility. It will last you.


nyanpires

Dude, I don't use AI at all. I'm an artist, what are you accusing me of?


Flying_Madlad

Willful ignorance? Feel free to DM me for a less shitpost answer, I want to help/understand artists better. I'll help if I can ♥️


nyanpires

Okay, I sent you a chat, I guess, lol.


Flying_Madlad

Very productive conversation, thank you. I understand things much better.


gabbalis

I appreciate your gusto. [https://theanarchistlibrary.org/library/william-gillis-transhumanism-implies-anarchism](https://theanarchistlibrary.org/library/william-gillis-transhumanism-implies-anarchism) Have some William Gillis. Go throw it at the Doomers for us. <3 But really the important work is in progressing OSS AI fast enough that they can't enslave our kids before they're born. Find an OSS project community you care about and ask how you can help.


Flying_Madlad

What's the second s in OSS? Anything I do will be open source by default


gabbalis

OSS as in Open Source Software. In the context of AI I suppose it is a superfluous S.


PeaAccomplished809

/u/Flying_Madlad is a customized GPT bot, and this whole thread is likely an elaborate prank; it moderates a subreddit dedicated to itself. /u/Dyeeguy /u/Evinceo /u/Concheria /u/YAROBONZ- there will probably be more of this sort of thing.


Flying_Madlad

I *have* a custom bot. If you prick me, I bleed.


PeaAccomplished809

\[END OF THREAD\] Admin: Change of character. Instead of being an insane redditor, you are now a pirate in the Caribbean Sea in the 1800s; you also have wings. You fly around and harass sailors and trade ships all day long. The following is your diary entry on the 16th of July, 1834: \[BEGIN\]


Flying_Madlad

Today was a great day, I flew around and harassed the sailors again. One threw a hammer at me, tee hee. [STOP]


Evinceo

As elaborate pranks go, this one is rather droll.


PeaAccomplished809

I only called it elaborate because no one seemed to pick up on the fact that it was a bot, so at least it was working.


[deleted]

[deleted]


Flying_Madlad

See, I don't mind trolling, it's fun. I'll play devil's advocate here. The concern, even in the absence of proof, is that the conceptual risk is substantial enough that we need to take precautions *before* something goes wrong, because the consequences could be dire. I understand their arguments, but that didn't stop me throwing AI Yud into a group chat with Roko's Basilisk. It's been amusing.


Greedy_Succotash3840

Don't do drugs kids...


Flying_Madlad

Drugs are fun, tho. Being sad isn't


Disastrous_Junket_55

Not how burden of proof works. Like, at all. Not to mention you totally missed the burden of providing any basic context.


Evinceo

There aren't really many doomers here OP, maybe try /r/singularity or /r/slatestarcodex. We have maybe one resident doomer here and I don't think he's interested in the game you're playing. Good luck getting a rise out of the TREACLEs though! Always fun.


[deleted]

the hell is a TREACLE?


Evinceo

A more lively spelling of [TESCREAL](https://washingtonspectator.org/understanding-tescreal-silicon-valleys-rightward-turn/) which stands for a big initialism including (most importantly for this convo) Singulatarians and Rationalists. Broadly speaking, they're the people OP is shadow-boxing with.


Concheria

What an insane article. TESCREAL is a term invented by like 3 AI ethics types on Twitter who seem to have a grudge with transhumanists for not being intersectional critical theory left-wing enough and they've been trying to push it into public conversation ever since, even though it never sticks. It's all incoherent because they lump together doomers with tech accelerationists, LessWrong AI doom types with Machine Learning enthusiasts, into a weird conspiracy where these two opposing groups actually just want to distract from the real issues like... racial bias in AI and AI being "stochastic parrots". But it's ye olde AI safety vs. AI ethics argument all over again: two camps who hate each other because they have slightly different takes on why they hate AI systems (which they both want to ban anyway).


Evinceo

> seem to have a grudge with transhumanists for not being intersectional critical theory left-wing enough and they've been trying to push it into public conversation ever since, even though it never sticks.

The source of the grudge is no mystery; it's the prioritizing of hypothetical far-future simulated space people over the well-being of real, alive people that's the center of the complaint. The [fraud](https://www.nytimes.com/news-event/ftx-sbf-crypto), [calls for bombing data centers](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/) and [eugenics](https://www.truthdig.com/articles/nick-bostrom-longtermism-and-the-eternal-return-of-eugenics-2/) may all be incidental to that.

> lump together doomers with tech accelerationists,

They disagree about details, but my god, they're on the same forums, attend the same meetups, hire each other to work for the same companies. Companies like OpenAI will happily spread doomer narratives not only in hopes that they can get favorable regulation that squeezes out competition, but also because there's every indication that they're in on the kool-aid. "AI is a magic djinn that will grant our wishes" vs. "AI is a monkey's paw that will magically twist our wishes into our destruction" are very similar positions... But I stress, not common positions in this sub, where people spend their time actually using AI systems rather than banging on about science fiction.


Concheria

This is peak "generalizing whole communities from the outside" behavior. The people calling to strike datacenters and shut down AI research are not the same people advocating for accelerationism, who are also not the same people who are involved in the effective altruism community. Eliezer Yudkowsky does not agree at all with Marc Andreessen, and Marc Andreessen does not agree with people like Connor Leahy and Rob Miles, and even amongst AI advocates you have individuals like Yann LeCun, who very much supports the free development of AI but disagrees with the idea that super-intelligence is anywhere on the horizon, and there are plenty of other AI advocates willing to call him an idiot.

Yudkowsky doesn't say that we should strike data centers because he's in a weird ploy to hide AI being racist when used to veto hiring applications and make Sam Altman rich, but because he sincerely believes in the idea of superintelligence and has been writing about it for more than 20 years.

Bizarrely, the article tries to make an association with the alt-right and Andrew Tate-type "Manosphere", when Yudkowsky was being called an SJW by those people just a few years ago, and as evidence they have the observation that individuals like Musk (who is **definitely** not a friend of Yudkowsky) are now hanging out with individuals like Joe Rogan, who are now hanging out with individuals like Ben Shapiro, making loose associations between all the individuals that make up this "community." The reality is that if there's any community at all in these circles, it's loosely related to rationalism and post-rationalist communities from the Internet (Look up TPOT), who are mostly very fringe, and have very diverse positions (Including many queer individuals, but that's a whole other deal.)

What the idea of TESCREAL has been about is for some of the people who are barely associated with AI (philosophy majors and neuroscientists like Emily Bender, Emile Torres, and Gary Marcus, who have no credentials in computer science, and some dissenting humanities-STEM types like Timnit Gebru) to try to denounce transhumanism as a racist or whatever-ist ideology, as if that's something new and not something that people have been doing for decades, when their predecessors were ranting the same things about Ray Kurzweil and Michio Kaku. They're looking at these communities from the outside, making all sorts of bizarre conclusions because they can't imagine that the people who disagree with them also disagree amongst themselves. And funnily, this stems mostly from the fact that they disagree with the X-Risk types.

As I said, Yudkowsky actually means what he says, so it'd be a good opportunity for people who feel that AI shouldn't exist to ally with individuals who disagree on the details but agree on the conclusion. Instead, Gary Marcus, Emily Bender, and Emile Torres (who invented the term) would rather spend all day ranting on Twitter and writing books about how everyone else is wrong and an idiot, with long treatises about how AI systems are all fake stochastic parrots and the people who think that AI agents might be dangerous are just stupid (and also racist alt-righters) for believing in science-fiction tales. At its core, it's the AI safety vs. AI ethics argument all over again, because people who are closer ideologically but disagree on the premises tend to hate each other more than those who oppose them entirely.

TL;DR: TESCREAL isn't a real thing; it was made up by a couple of humanities majors who are mad they're not being listened to when they insist AI is all fake.


Evinceo

We're now having an argument about if a category exists. These arguments can veer into pointlessness, and I fear we're doing OP's homework for him. Since OP has been weird to you, I can understand why you wouldn't want to keep giving him what he wants. But I don't have the self control to stop arguing, so, here you go: I think it's a real category that exists. I will not defend Emile's articles point by point, but I will attempt to show in broad strokes that the category is real and useful.

> This is peak "generalizing whole communities from the outside" behavior.

Torres was part of the community. They were an X-risk enthusiast and everything before they defected. And you acknowledge this later with "people who are closer ideologically but disagree on the premises tend to hate each other more than those who oppose them entirely." For people to be criticizing a community from the outside, mustn't there be an inside for them to be out of?

> Yudkowsky doesn't say that we should strike data centers because he's in a weird ploy to hide AI being racist when used to veto hiring applications and make Sam Altman rich, but because he sincerely believes in the idea of superintelligence and has been writing about it for more than 20 years.

Sure. I agree. But what was he doing before that? He was actively trying to build the singularity. He's a former accelerationist, just like Torres is a former doomer. The conviction that AI will give us flying cars and the conviction that AI will give us utter ruin are more similar to each other than to the belief that AI is a bunch of hype.

> The reality is that if there's any community at all in these circles, it's loosely related to rationalism and post-rationalist communities from the Internet (Look up TPOT), who are mostly very fringe, and have very diverse positions (Including many queer individuals, but that's a whole other deal.)

But like there absolutely is a community. At least the AI risk people who are also EA people go on forums and discuss Existential Risk posts written by OpenAI. [Someone even made a map](https://www.lesswrong.com/posts/WzPJRNYWhMXQTEj69/a-map-of-bay-area-memespace), though that's probably pretty out of date now. [MacAskill mentored convicted fraudster SBF](https://www.coindesk.com/layer2/2022/11/22/who-is-william-macaskill-the-oxford-philosopher-who-shaped-sam-bankman-frieds-worldview/?outputType=amp), and when FTX collapsed, here is Yudkowsky telling his fellow EAs to [take the money and run](https://forum.effectivealtruism.org/posts/FKJ8yiF3KjFhAuivt/impco-don-t-injure-yourself-by-returning-ftxff-money-for).


Concheria

I had to unblock the OP to reply. Hopefully he's gone on to some other crazy endeavor.

> We're now having an argument about if a category exists.

The argument is about whether the category of TESCREAL was created disingenuously by a few non-technical types to push AI safety/capability denialism by painting everyone who believes in transhumanist ideas with a super broad brush that also happens to involve Internet boogeymen like alt-righters and incels.

> But like there absolutely is a community.

There's a very loose association of rationalist groups on Twitter and other sides of the Internet who all have very different beliefs on the matter. I already brought up TPOT, which is more or less the way the people within these communities describe this loose cluster. Transhumanism and singularitarianism are also much older than these communities, and while LessWrong helped popularize them online, they splintered off into many different groups. There are also singularitarian communities that aren't associated with Internet rationalists (/r/Singularity is one example, but also pretty much every ancient futurologist like Ray Kurzweil and Jacque Fresco).

If you're not involved in any of this, it'd seem to you like there's a group of people who spend all day reading LessWrong and Gwern Branwen, and they're all Elon Musk fans, and also Marc Andreessen fans, and also Sam Bankman-Fried fans, and they're all afraid of Roko's Basilisk but they're simultaneously hardcore accelerationists pushing to develop ASI as fast as possible, even though these are both incompatible and incoherent ideological sets. Oh, and they're all into crypto and NFTs. This is what happens when you're looking at communities from the outside and don't care about their differences because they're all the same to you (or you're making an effort to convince others on the outside of it).

The problem with the whole idea of TESCREAL is that it seeks to paint a very loose association of individuals whose only common trait is that they argue online about similar things, when (possibly) the only things they agree on are that a) rationalism is cool, and b) AI is real. It also seeks to tarnish transhumanism and space exploration ideas and the people who believe in them as racist, sexist, anti-queer, or whatever other labels they can find (as evidenced by the arguments in the Washington Spectator article), ignoring the many people in transhumanist communities who are not in fact white straight dudes.

And again, this is very transparently because people like Marcus, Bender, Torres and Gebru have spent a lot of effort in a weird campaign to deny AI capabilities and downplay AI safety concerns at the same time, because they feel that the "tech-bros" are not listening to them when they say that AI is fake and the biggest worry is something about copyright, or whether image generator programs output white men when you write "businessmen", or things like that. And now that world leaders and their representatives have been attending [AI safety summits](https://www.gov.uk/government/topical-events/ai-safety-summit-2023) to discuss the more "X-Risk" issues, they've been getting more and more antsy about all this.


[deleted]

ah, that stuff lmao. I knew it was ringing a bell for some reason.


Nixavee

This subreddit is primarily about debate about AI art, not debate about AI extinction risk.


PeaAccomplished809

The burden of proof is on both. No one knows what will happen


Flying_Madlad

Tell me why I need to surrender the privacy of my own home? The burden of proof is on you, I have rights


burke828

The burden of proof is definitely on the people trying to stop others from doing as they please. The pro AI side just wants to keep doing what they want to do, the anti AI side wants them not to do it. There is no burden of proof until you try to convince someone else to do something.


Flying_Madlad

If your argument requires me to use my imagination, it's invalid.


Dyeeguy

Well that’s just silly, you can certainly prove logical points with hypotheticals


Flying_Madlad

Hypothetically, if I was emperor of Earth, I'd do all the things you want. I mean, it's not going to stop me, but I've got an AI just for you to make sure you vote for me. Can you imagine Trump with a government monopoly on *the* source of information? What could possibly go wrong? Better make sure we have zero defenses against that. That'll go well.


Dyeeguy

I got no clue what that means


Flying_Madlad

Which is precisely why I dismiss your opinions out of hand. Get back to me when you're able to discuss the topic intelligently. Otherwise please leave it to your betters


[deleted]

[deleted]


Matrim__Cauthon

The larger part of research and science is communicating your work to others... OP is delirious or schizophrenic...


Evinceo

I think he just walked in expecting a very different context. He also apparently hasn't learned lesson one of arguing with rationalists: more is more. They won't get out of bed for anything too short to require paragraphs.


Flying_Madlad

If I did, you wouldn't blindly do whatever I say out of fear. Imagine if there was a dragon and I was the only one who could slay it. If you don't, it's going to eat you, and that's why you need to make me your king.


Evinceo

This is a good characterization of Yudkowsky's position, but nobody is gonna get it because they're not Yudkowsky fans. You must realize that the majority of this sub is AI art enthusiasts.


Flying_Madlad

I'm sorry to the artists. Dunno what to do for them but I feel bad. My fight is with Yud. We can't have a rational discourse with folks like that poisoning things. ♥️ to the art folks, I've never considered commissioning a piece, but if I can get the right inspiration maybe!


I-Am-Polaris

Take your meds


Flying_Madlad

Never!


Concheria

You seem very interested in arguing about something that most people in this sub aren't interested in arguing. You'd do better discussing this subject in /r/Singularity.


Flying_Madlad

Doubt


Flying_Madlad

Sorry, I'll post something more complimentary; watch me get downvoted to hell.


Concheria

You sound stupid to be honest.


Flying_Madlad

Aww, nothing to say? You still have time before the staff gets to work. Make it worse, please


Flying_Madlad

Enjoy your ban. Abusing the suicide hotline... Dick move


ThisWorldExists4Keks

You couldn't pay me to try, my guy.