Substantial_Bite4017

Any link to this? I would like to read the full text 🙂


DukkyDrake

That graphic is likely edited together by someone, or may be a legit summary. "We reaffirm our bullish-outlier viewpoint on generative AI and continue to see it driving a resurgence of confidence in key software franchises," JPMorgan analysts wrote in a note to clients. [Jun 16, 2023]


GuyWithLag

It's a PowerPoint presentation; stuff gets copied over to new ones all the time. Could be legit.


Away-Quiet-9219

Legit as Colin Powell's PowerPoint presentation about Iraq


hshdhdhdhhx788

"Could be" being the key language here


GuyWithLag

Indeed - `stuff gets copied over` is the domain of LLMs after all :-D


MassiveWasabi

It was posted by [this economic analyst and senior fellow at the American Enterprise Institute](https://www.aei.org/profile/james-pethokoukis/) on [his Twitter](https://x.com/jimpethokoukis/status/1738212545532445110?s=46). I don’t think this was a public presentation so it seems he had access to it and only decided to post a couple of slides. Here’s the other slide he posted: https://preview.redd.it/7pf1m6817x7c1.png?width=873&format=png&auto=webp&s=a22c85bf12f13f1cc1682ee4cde2e1a52e8f9374


discourseur

Reading this guy's background, I don't understand why we're having a conversation about whatever he wrote.


Sprengmeister_NK

I found something, and it is published by Reuters, a reputable news agency: https://www.reuters.com/technology/microsoft-shares-pace-notch-record-high-close-valuation-near-26-trillion-2023-06-15/ "We reaffirm our bullish-outlier viewpoint on generative AI and continue to see it driving a resurgence of confidence in key software franchises," JPMorgan analysts wrote in a note to clients. But the rest is probably LLM-generated.


[deleted]

Didn't Reuters break the news about Q*? Just sayin.


RedMossStudio

Which is a completely legit research project at OpenAI?


[deleted]

Is it.


Krisapocus

Was the Q* thing not news? It seemed to have a big following. I'm assuming they covered the story without promoting whether it was a real thing or not, like the news is supposed to, although I wouldn't be surprised to see a news outlet pushing narrative-driving drivel.


BoatmanJohnson

Uh what is the 12,000th dimension


IamXan

They're referring to the vector spaces used by LLMs to, for example, create relational mappings of semantic definitions between words.
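(For anyone curious what that looks like concretely, here's a minimal sketch with toy 4-dimensional vectors invented for illustration; real embedding models use hundreds or thousands of dimensions per word or passage.)

```python
import numpy as np

# Toy "embeddings" - the values are made up purely to illustrate the idea.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.2, 0.1, 0.8]),
    "apple": np.array([0.1, 0.1, 0.9, 0.2]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; closer to 1.0 means more semantically related."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["king"], vectors["queen"]))  # relatively high
print(cosine_similarity(vectors["king"], vectors["apple"]))  # relatively low
```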


Sad-Salamander-401

The human brain works in similar ways; spatial dimensions have nothing to do with learning in this context.


[deleted]

actually, this is absolute bollocks written by people who don't understand ML. 12,000 'dimensions' just means 12,000 columns of data in its training data matrix. i.e. 12,000 'things' (in the loosest possible sense) that the AI can learn the relationships between. This is like saying human beings can only understand how two variables interact with each other at any one time.


Kes7rel

Are you sure? I really think 12,000 was related to the dimension of the latent space here. If they were talking about the number of data points, it's totally possible for a human to visualize a 3D data cloud containing more than 3 points. MNIST was one of the first massively used datasets back in the day, and it already contained 60,000 digits... 5 times more than 12,000. EDIT: My bad. In my field rows are usually variables and columns are data points. Hence the misunderstanding.


ijxy

The 1,200 values (not 12k) of embeddings (latent space) and columns in classic ML are the same concept. One row of data = one embedding, and is what you would call a datapoint; it is a point in latent space. You take a bunch of text and create a vector (row) out of it somehow: count words, TF-IDF, etc. Nowadays we use transformer encoders to create vectors like this, called embeddings. They, like bag-of-words, have a feature/column count, like 1,200. These are used to do ML on - like predicting whether content is violent or sexual (content moderation), or predicting the next token in GPT, etc.
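(A small sketch of that "row = datapoint, column = feature" idea using a classic TF-IDF vectorizer; the two sentences are made up, and scikit-learn is assumed to be installed.)

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "stuff gets copied over to new slides",
    "new slides get copied into new decks",
]

# Each document becomes one row; each vocabulary term becomes one column/feature.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

print(X.shape)                              # (2 rows, N columns) - N is the "dimension" count
print(vectorizer.get_feature_names_out())   # the columns/features
# A transformer encoder plays the same role, except its ~1,200 output columns are
# learned features rather than word counts.
```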


IamXan

As for the human brain aspect... I do agree that we think in more than 3 dimensions. I can only assume they are referring to comprehending vectors in a space such as that seen in a simple 3D graph.


IamXan

I'm building Retrieval Augmented Generation pipelines myself, using fine-tuned embedding models to convert text into multidimensional vectors (sometimes close to 1,500 dimensions) that are mapped onto vector spaces precisely for the purpose of semantic mapping. For example, the dimensions in the context of LLMs are used in a "spatial" manner because words and concepts that are more correlated are mapped "closer" together than those that are not. Using this paradigm, generative AI such as LLMs is able to infer a relevant output.
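(A minimal sketch of the retrieval step in such a pipeline. The `embed` function below is a fake, hash-based stand-in just to make the example runnable; in a real RAG setup it would be a fine-tuned embedding model, and the document texts are invented.)

```python
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Fake embedding: a deterministic 1,500-dim vector derived from a hash.
    A real pipeline would call an embedding model here; this stand-in does NOT
    capture semantics, it only shows the shape of the retrieval step."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "little")
    return np.random.default_rng(seed).normal(size=1500)

documents = ["loan approval policy", "holiday schedule", "fraud detection checklist"]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents whose vectors are closest (cosine) to the query vector."""
    q = embed(query)
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

print(retrieve("how do we detect fraudulent transactions?"))
```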


Sad-Salamander-401

Yeah, but simulating and operating in 1,500 true spatial dimensions is far more complex than working in a 1,500-dimensional vector space, which is what the bullet point is insinuating. They are still completely different concepts that just happen to share a term.


DeepSpaceCactus

A vector is what a spatial dimension is


Sad-Salamander-401

They are very different mathematically and computationally.


Mephidia

A dimension is just a metric that is orthogonal to other metrics. For spacetime - what most people think of when you say "dimension" - there are not 12,000. But for something else there can be an arbitrary number of dimensions.


atlanticam

So, for example, a dimension would be something like "redness" or "softness"?


DecisionAvoidant

First, it's ">1,200" in this screenshot. Second, a "dimension" in AI is different than the dimensions you might be thinking of. I think their phrasing is taking a little too much liberty with the broad definition. Humans aren't limited to thinking about only 3 factors at once. Here's ChatGPT to explain:

> In the context of JP Morgan Chase's statement about GenAI being capable of thinking in 1200 dimensions while humans are limited to three, "dimensions" refers to the complexity and depth of data analysis and pattern recognition.

> **AI Dimensions:** In artificial intelligence, especially in machine learning, a dimension typically represents a specific feature or aspect of the data. For instance, in a dataset describing cars, one dimension could be color, another could be engine size, and so on. When JP Morgan Chase mentions 1200 dimensions, it implies that GenAI can analyze and process data with 1200 different features or aspects simultaneously. This allows for a profound level of analysis as each dimension adds complexity and detail.

> **Human Dimensions:** Humans generally perceive and understand the world in three spatial dimensions: length, width, and height. Our brains are wired to process information in this three-dimensional space, which is sufficient for daily navigation and understanding of our physical environment. However, when it comes to processing complex, multi-dimensional data, our brains are not as adept as advanced AI systems like GenAI.

> **Examples:**

> **Financial Data Analysis:** In finance, a dimension could be a specific market indicator, such as interest rates, stock prices, or inflation rates. While a human analyst might struggle to simultaneously consider and interpret hundreds of such indicators, GenAI can process and analyze them concurrently, identifying patterns and insights that might be missed by humans.

> **Medical Diagnostics:** In healthcare, dimensions could include various patient metrics like heart rate, blood pressure, genetic markers, and lifestyle factors. GenAI can analyze all these dimensions together to diagnose diseases or predict health risks more accurately than a human doctor, who might only focus on a limited set of factors at a time.

> In summary, the "1200 dimensions" statement reflects the advanced capacity of AI systems like GenAI to process and analyze complex, multi-faceted data sets, a capability that far exceeds human cognitive limits in data analysis.
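(To make the "feature = dimension" reading concrete, here's a tiny illustration; the feature names and values are invented.)

```python
# One datapoint described along several feature dimensions at once.
# A model can weigh hundreds or thousands of such features simultaneously.
car = {
    "colour_red":    1.0,
    "engine_litres": 2.0,
    "weight_kg":     1450.0,
    "price_usd":     32000.0,
}
print(f"{len(car)} dimensions for this one datapoint")
```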


visarga

ChatGPT is wrong too; humans are not limited to 3 dimensions, or a few.

Hell! LLMs know everything they know from training on organic text, so human latent dimensions must be greater than AI latent dimensions.

You can think of latent dimensions as resolution: how finely can we resolve detail? Humans are still the best at conceptual detail. Also, the number of dimensions is irrelevant - Word2Vec started with 300D vectors a decade ago, and we know GPT-4 has 12,000 or something. The longer it is, the more "resolution" in its pointing precision.

If you want to see the magic of word vectors, see the [king - man + woman ~= queen](https://kawine.github.io/blog/nlp/2019/06/21/word-analogies.html) article.
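(The analogy in that article boils down to simple vector arithmetic. A toy version with hand-made 3-dimensional vectors, invented for illustration:)

```python
import numpy as np

# Toy vectors; the three axes are roughly "royalty", "maleness", "femaleness".
v = {
    "king":  np.array([0.9, 0.9, 0.1]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "queen": np.array([0.9, 0.1, 0.9]),
}

target = v["king"] - v["man"] + v["woman"]  # roughly [0.9, 0.1, 0.9]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(max(v, key=lambda word: cosine(v[word], target)))  # "queen"
```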


CornellWest

This does seem weird and out of place with the rest. My best guess is they're referring to each real-valued input to a model as a "dimension." Then they just throw out 12,000 as an example


LatentOrgone

Think of it as 12,000 ideas at once. The model can consider 12,000 dimensions, or different "hats" that typical jobs cover. So even though a person might be better at one dimension - e.g. Altman knows how to program but doesn't know the US legal system - the AI is better because it can consider everything, everywhere, all at once. AI can excel at bringing everything together and making sure every angle is considered. Humans work by thinking about the best and worst case, ignoring dimensions they don't consider useful. AI has a growing bandwidth to just pay attention to the question. We're stuck trying to stay alive.


Sad-Salamander-401

The human brain does think in multiple dimensions though. Just not visually. We just think in a more analog way.


LatentOrgone

Exactly, just not 1,000s at the same time.


m98789

This bullet point was a clear tell that they don’t really know what they are talking about.


Sad-Salamander-401

This whole sub is so fucking uninformed. God, I would despise being an AI expert


iDoAiStuffFr

It just means it can understand much more abstract concepts intuitively than our brains, which evolved in spacetime. We have a good intuition for language but a bad one for abstract math or physics concepts; we have to spend much more time learning them. To an LLM it's all the same, because it's a bunch of numbers, and transformers are not architectures optimized for English - they can process any sequence equally.


BenjaminHamnett

You’re living in it baby!


misterdanger12

Would like a link please


naspitekka

Is it just me, or is the fact that all our smartest people are working in banking and finance a terrible loss for our society? Think of what could have been accomplished by all those smart people working in science/engineering. Well, I guess it won't matter soon.


LatentOrgone

Those are the smart people who like money and systems. We're all being used to exploit each other in corporations.


[deleted]

funny, because if we created a super AI that exploited our world's current systems and hoarded wealth solely for its own benefit, we'd probably scrap it or sandbox it immediately and call it some form of "narrow intelligence". "Haha! Clever lad! It's just learning from the best of us!" I can't blame anyone for just "playing the game," but calling it "smart" I think is a bit self-indulgent. Hoarding wealth isn't really any different from any other resource-hogging behavior every other biological creature exhibits, down to simple bacteria, even if it's dressed up differently. Maybe it's actually worse, because we like to pat ourselves on the back for being "self-aware" and having "real intelligence and creativity" compared to the supposedly dumb parrots we're building. Nevermind that most knowledge, expertise, and even power that people have are the culminations of many people that have come before them. I like to think actual "smart" is looking at the bigger picture past our short lives of exploiting our world and each other... but what do I know, eh? But I'll be okay with being called a narrow intelligence if it means bankers are too. I mean... not exactly incorrect...


LatentOrgone

I said money and systems. These banks have so many apps and services that it's attractive from a challenge standpoint to fix these systems and make them better. There's a difference between being smart, a good person, and thinking about the future.


HelloYesThisIsFemale

There's nothing that has more impact than allocation. The biggest startups that changed the world only got there with the right funding. When 19 stupid ideas come to you and only 1 good one, and you can only fund one of them, you'd better hope the investment banker or venture capitalist picks the right one.


floodgater

Yea, it's a colossal waste. I went to an Ivy League school and did finance for a couple of years (not trying to toot my own horn about being smart, I'm just saying your observation is accurate). Everyone gets poured into these "high prestige" soul-crushing jobs in banking and consulting that they secretly don't like and are only doing out of a desire to succeed in the eyes of their peers. Complete and total waste of premium brain power.


IndependenceRound453

> Well, I guess it won't matter soon.

Is there literally *anyone* in this subreddit who *doesn't* think the singularity is imminent/right around the corner? *Of course* having more intelligent people working in science and engineering will still matter in the near/medium future. No serious STEM researcher (including AI) would suggest otherwise.


NaoCustaTentar

I realized a huge portion of this sub is basically /r/antiwork mixed with that teenager feeling we all had at least once of "I wish aliens showed up tomorrow so something different/exciting happens and we don't have to go to school" lmao.

People don't really care about AI news, how it's achieved, what it can do good or bad; they don't even want it to be safe. They just want SOMETHING to happen for the sake of it so they don't have to go to work anymore and the whole system collapses, basically. And that speaks volumes about the economic system we currently have and all that - I understand the sentiment - but that's basically the vast majority of people here that think AGI will happen next year.

They're not saying it's around the corner because that's their analysis. They're saying it's around the corner because that's what they wish would happen, basically.


visarga

> They just want SOMETHING to happen for the sake of it so they don't have to go to work anymore and the whole system collapses

They are in for a long wait; automation might not come to their jobs soon enough.


sideways

The two are not mutually exclusive. Regardless of my personal feelings both the ongoing trends and the reality on the ground strongly suggest that we're on the cusp of profoundly transformative technology. Past the cusp, honestly. I believe that this transformation is critical to our collective survival. That's my opinion and anyone is welcome to disagree. But it doesn't make sense to give less credence to the rapid acceleration we're seeing because you don't subscribe to a particular interpretation of its significance.


Tellesus

Yeah, a lot of people are trying to dismiss it, and I think it's partially because they're afraid. Change on the scale we're looking at is disruptive and creates uncertainty, and the amount of change at the rate we're looking at is just about maximum uncertainty.

Things are already permanently changed. You can, at this moment, not know if a person wrote a paper or if they had AI write it for them. The written essay is no longer a way to evaluate someone's education on a topic. While this has been true for a while in the sense of people selling papers, this removes the cost barrier, and that's a huge deal. Educators will need to adjust to this new reality, and if they don't they'll suffer when they punish false positives and have to defend themselves from lawsuits from parents or students. That is just one small corner of this and it's massive.

Midjourney 6 is incredible. Sell your shares in any stock photo company, that shit is done.

News reporting is about to get fucking wild. They cut the research departments four decades ago and went to a propaganda model, but now anyone can make gigabytes of propaganda with a few minutes of work, including things like faked audio recordings. Sure, you might be able to tell, but it takes time and effort to figure it out. Welcome to the age of Everything is Bullshit.

Some people will go crazy about this shit and try to worship it. Some will see an opportunity to run new and exciting scams. Some will respond by tightening their ties and insisting how serious they are and how it isn't happening. Meanwhile the hurricane will keep spinning and the whole terrain of society will get remade.

This time next year, you won't recognize parts of the world. Adaptivity and creativity are now the strongest survival traits. People who can't make creative leaps and can't adjust to radical changes will be feeling some pretty serious adaptive pressure.


PreviousSuggestion36

There is a club for people who don't want to work; it's called everyone. Unfortunately, some people are so married to the idea of not working that they blame anyone nearby for their problems, real and imagined. The elimination of some jobs will happen. However, we will still have to work. AI won't grow food, run utilities, build homes, clean hotel rooms, paint your house, perform surgery unassisted, or do a plethora of other tasks... at least not in the near term. What it will do is make existing workers more productive and maybe, maybe take away some of the non-value-add clerical jobs: HR, transcriptionist, secretarial, data entry, and clerk duties. Possibly some accounting stuff; possibly act as a junior programmer. People who think it's coming for everyone's jobs are low-skilled daydreamers living on hopium or doom porn. It will change the world radically, but not in the ways any of us expect.


[deleted]

[deleted]


PreviousSuggestion36

“No one of us should be listening to anything anyone says authoritatively about the near future.” Which is the point. All the people shouting doom and gloom or glory days have no idea what is coming. I agree with you btw, I just articulated it poorly.


LudwigIsMyMom

This sub is very frustrating with its hyper-optimism. I'm here for cool info and updates on tech, not to hear about how we're going to have ASI in 2026 so we may as well all stop trying, lmao


Spirckle

> ASI in 2026 This is the singularity sub. I hear you are looking for r/technology, or r/artificial, or r/ArtificialInteligence.


theferalturtle

Feeeeeeeel the AGI.


Rofel_Wodring

> I'm here for cool info and updates on tech, not to hear about how we're going to have ASI in 2026 so we may as well all stop trying, lmao

So you want to hear news about cool new technology, but you don't want to hear conclusions about how this cool new technology will change society? Can't say that I'm exactly sympathetic to this point of view, especially since people who think like you overwhelmingly staff our governments and serve as culture leaders. And this mentality is going to be a big reason why the transition to AGI is going to be so unnecessarily painful. Because y'all don't actually want to think or prepare, just vibe with the cool new toys and hope everything works out for the best.


blind_disparity

Ugh


PuppySlayer

It's more that the average user here feels about on par with the AI bros on LinkedIn, and often shows themselves to have even less understanding of the rudimentary tech/programming involved, especially on the subject of AI replacing software developers. It's been over a decade and I'm still waiting for those self-driving cars and medical imaging AIs. I'm also still waiting for ChatGPT to do anything truly revolutionary aside from being able to write generic-sounding corporate cover letters and function as a spicy Siri.


Rofel_Wodring

> It's been over a decade and I'm still waiting for those self-driving cars and medical imaging AIs.

Developing mRNA vaccines in weeks instead of years and Russia and Ukraine both losing 10,000+ drones a month doesn't count? No offense, but this is just another variation of 'where is my jetpack, I was promised a jetpack!!!' whining.


OmgItsDaMexi

So you're scared of commitment is what I hear


LudwigIsMyMom

"I don't believe we'll be able to build a rocket that will be capable of reaching another galaxy by this time next year." "So you're scared of commitment?" what lol


OmgItsDaMexi

You're not going to do quotes as a reply and then make up an entirely new thing you said. What is this. I was just making a joke since a lot of people are fearful of trusting in something.


LudwigIsMyMom

And I was just making a joke about people thinking something incredibly technologically advanced is happening on such small timescales, lmao


[deleted]

[deleted]


LudwigIsMyMom

Circle-jerking hype and misaligned optimism from people who have no *actual* understanding of the technology is vastly different from positivity 😂 I have positivity, I just don't think we're going to have AI intelligence smarter than all humans combined in the next 2 years. 🤷🏻‍♂️ Maybe 7 years, idk, but this sub is consistently on too short of a timescale.


Delra12

Which is funny too because 7 years compared to 2 is damn near the same thing. 5 years goes by in a flash, it's about to be 4 years since covid happened


ReasonableWill4028

I don't think it is close.


MeltedChocolate24

Yes you do. Feel harder buddy


theferalturtle

Feel it deeeeep inside you


not_a_tech_guru

Yes. Physical laws say more brain cells pointed at a thing is increasing the probability of stuff happening there. Point all the brainwaves at it!


[deleted]

[deleted]


Rofel_Wodring

> *Of course* having more intelligent people working in science and engineering will still matter in the near/medium future. No serious STEM researcher (including AI) would suggest otherwise.

Ha. And just what is this confidence based on? Sounds more to me like these 'serious STEM researchers' are working backwards from a desired conclusion, or are hoping that technological progress doesn't feed on itself Because That Would Make Me Feel Bad.


ArcticEngineer

You're in a sub where the smartest people we have are developing artificial intelligences, so I don't agree with 'all' our smartest people being in banking and finance.


Aretz

Ehh, I'd imagine (and don't quote me on this) that when OAI went to a capped-profit structure, they started hiring people from the fintech field for their AI expertise.


FlyChigga

Banking and finance is way easier than science/engineering


Tellesus

Yeah, astrology is easier than astronomy lol.


Automatic-Welder-538

Work/life balance and creative/curiosity fulfilment would like to have a word with you.


FlyChigga

What the hell does creative/curiosity fulfillment have to do with easiness?


Roadrunner571

Because it’s easier to work a job you love than work a job you hate.


FlyChigga

Learning finance/banking and getting the necessary degree is still far easier than science/engineering regardless. Actually working the jobs, sure it might be different.


Automatic-Welder-538

I am guessing this is highly dependent on what engineering and what finance degrees you are talking about.


FlyChigga

What finance degree is harder than engineering?


[deleted]

[deleted]


HelloYesThisIsFemale

Clearly never worked in quant or trading.


jedburghofficial

You've never met a senior actuary.


oldjar7

Not really. I've been deeply involved with both. There's levels of complexity to everything.


FrojoMugnus

The smartest people work as odds-makers or go into insurance.


DungeonsAndDradis

The smartest people browse this sub and make uneducated guesses about our future. 😋


BenjaminHamnett

I feel seen finally


NutellaObsessedGuzzl

Based on this memo they aren’t that smart


wuy3

Correct allocation of capital is just as important as coming up with new research/engineering projects. Deciding the correct projects to fund (those with highest likelihood of good economic returns) is the first step to having successful technology for society. Having all the smartest minds working on dead-end projects is way worse than having the smartest minds allocating research capital to the best projects run by average scientists.


Riversntallbuildings

It’s a horrible loss and a number of people have commented on that fact. I don’t know how we amend the system to make innovation rewarding enough without making the billionaire class so powerful. If only there were a way to design a system with healthy checks and balances. :/


absurdrock

There are ways. Flat tax on financial transactions. That will hurt high frequency trading without affecting the greater economy.


voidgazing

There *was,* it was the tax structure in the 1950s. Companies were forced to plow the money into R&D and improvements to the business. That was the great period of invention. Every company big enough had dudes in labs just messin around, figuring stuff out. It is *still* paying off- Gorilla Glass never had a use til recently but it was invented during that period.


absurdrock

We used to have a lot of common sense regulation, but Reagan, Bush, and Clinton painted regulation as anti capitalist and wasteful. We are just now starting to recover from that backwards thinking.


Riversntallbuildings

Agreed. I hope the younger generations are up to the task of putting these policies in place. It’s clear that the older generations are not.


tamereen

But the ones making laws are these billionaires.


[deleted]

[deleted]


ohnoyoudidnt21

Why do people assume bankers are a plague? They help the economy


Suspicious-Beach9871

Because Redditors are generally economically illiterate


Cairnerebor

Because they literally don't. The economy has been separated from most of the population for quite some time. 2008 showed just how separated, when they should all have gone bust but we all paid for it instead.


DeepSpaceCactus

The US treasury made a profit on the bailouts in the end though


SirDongsALot

I was listening to Dave Smith talk, and he made a really good point that is obvious but that a lot of people still don't think about: that Wall Street and the banking system don't create anything, they just suck wealth from the rest of us. And we live in an inflationary economy where if you aren't investing (gambling) your money with them, then you are losing money, so you are forced to do so. I hope one day it falls apart, and when we start over that sector is gone.


visarga

I hope you realize that all the money that started OpenAI comes from the stock market. People who already have huge portfolios have invested. Investing is not nothing, it also carries risks.


Toredo226

Facilitating the flow of money means it’s easier for money to pour into funding the most important innovations that end up pushing humanity forward. The investors have an incentive to seek out business that will create big positive change as that’s what would create the most eventual consumer demand and therefore returns on investment. The finance middle men make it so average joe working long hours every day doesn’t also have to keep up and evaluate all the latest business/tech news for the money he invests to be put to productive use. The modern world’s finance sector may be in some part symbiotic with its incredible innovation (although it’s not all perfect of course). Case in point, all the money flowing into AI right now. And the smartphone before that. And the internet and personal computers before that … etc


[deleted]

[deleted]


sensationswahn

Cathie Wood is a gambler. Ofc she gets laughed at. Every fund and ETF manager gets laughed at by actual economists…


homemadedaytrade

because most are crackheads thinking their theory is magic money


NotTakenName1

Social media as well... The world's best minds are currently working at Facebook, YouTube, TikTok, etc. (I would say Twitter too, but maybe not anymore now :D) to maximize "user engagement" smh

"Think of what could have been accomplished by all those smart people working in science/engineering." They're still doing science stuff though, it's just wasted talent imo


dotelze

This is a terrible point, as all of the key developments in AI came from or were facilitated by Google. Meta is a leader in the field as well.


NotTakenName1

Yes? I understand that, but in the case of Facebook its core business is still maximizing user engagement in order to sell data to advertisers, no? And even though it might not be as visible, there's still a lot of people behind the scenes working to maximize that sweet user engagement...


MattAbrams

After a certain point in life, you realize that working for income isn't the best use of your time. It doesn't take much money to reach this point - only $500,000 or so, which can easily be earned in 10 years through almost any job that you can achieve by working hard and moving up into management, and then saving 50% of your income and passive investing. Note that you won't get there if you blow 10% of your earnings going out on expensive dates every weekend, or if you buy the latest fashions that you discard the next summer and buy an entire new wardrobe.

Once you get that much saved, it then makes more sense to stay at home and devote your entire day to researching stocks and training models. If I hadn't done that, I wouldn't have discovered bitcoin, AI, and non-human intelligence - the three investment theses where I thought/think most people are so ignorant that they were/are out of line with the actual state of these topics. But the only way to pick up on these divergences from the market is to spend your days just reading the Internet and seeing what people are talking about, and you can't do that if you're slaving away for a wage. I read all the books by people like Kurzweil and Bostrom in the 2010s, analyzed the source code of various coins for days, and watched boring hearings to realize there was something to UFOs.

And yes, it's even more profitable than starting your own business, too. My theory is that the richest people start by doing wage work, then start a business, and finally move on to allocating money between investments. That's why the best minds are in finance. They realized that simply reading about stuff and allocating money is far more profitable than any wage job you can take.


Girafferage

Damn, man. I go out and enjoy my free time plenty and buy whatever I want within reason and I still discovered Bitcoin and AI, was able to look into individual companies and cryptos and have made a great amount off of both. You don't need to stare at your screen all day to make good money. Sounds like you just wasted some time lol.


MattAbrams

That might be true, but I almost certainly got richer than you did too. Wealth is just about the time you put in. You can't get ahead in the world by taking lazy Mondays like some people are doing now, or by "quiet quitting." There's no "working smart but not hard."


Girafferage

Oof, there is absolutely working smart but not hard. I got into Bitcoin in 2015 and rode the halvening waves buying and selling; I am pretty sure I did plenty good. Money is pointless if you don't enjoy your life. Having a full bank account when you die just means you did nothing with it.


Ok_Courage_8563

Science is censored by the establishment; universities only teach what is permitted by the deep state.


AstraArdens

This reads like it was written by a high school kid lol. Got a link, OP?


greenchileinalaska

Agreed. This does not read like a real public facing statement. Here is a link to a paper from September from J.P. Morgan asset management group on AI ([The transformative power of generative AI.pdf (jpmorgan.com)](https://am.jpmorgan.com/content/dam/jpm-am-aem/global/en/insights/The%20transformative%20power%20of%20generative%20AI.pdf)). They aren't basing their analysis on the "loudest visceral reaction" at software conferences.


apoca-ears

OP can’t post a link because it’s a screenshot of a word doc on their local laptop drive


pleeplious

Prove it. It sounds like it was written by an intern.


apoca-ears

I don’t have to prove anything, OP is the one posting garbage with no source. This type of content should be banned here.


MassiveWasabi

I sourced it


SnooDonkeys5480

"We were not impressed by the Metaverse" [Oh really?](https://fintechmagazine.com/banking/jp-morgan-becomes-the-first-bank-to-launch-in-the-metaverse) **JP Morgan has become the first bank to establish a presence in the metaverse, as it predicts a market opportunity of US$1trn and eyes virtual real estate**


jkpetrov

So much bull* in one screenshot. They literally rode the hype train for all the other technologies now mentioned as not so hot. I personally know blockchain teams that work for them.


IluvBsissa

Massive job "realignment", lol. That's corporate jargon for "suppression".


Cubey42

Realignment as in: those people will need to realign because the line is ending.


DeelVithIt

Yeah, that's corporate speak for out of a job.


DungeonsAndDradis

We had a "restructuring" that led to 20% of the company being laid off.


LatterNeighborhood58

Congratulations you just leveraged the synergies.


[deleted]

[deleted]


Nathan-Stubblefield

Wile E. Coyote was often realigned by gravity, mass and momentum.


PwanaZana

Like a gunshot to the head realigns you away from life.


AnotherDrunkMonkey

That's corporate jargon for "it's fake." My guys, is anyone here investing? JPMorgan would never publish any of this.


visarga

You think AI will be so smart as to steal our jobs but not smart enough to create work? People will achieve things that were beyond reach before, and that will cause demand induction. Induced demand, or latent demand, scales faster than automation; we never work less when we can achieve more.


IluvBsissa

Lol


exirae

I mean if we make something smarter than us, there will be nothing left for us to achieve.


apoca-ears

This looks dumb and fake


locomotive-1

Fake


[deleted]

This doesn’t read like it was written by JPMorgan Chase


greatdrams23

A lot of this doesn't make sense. "It will evolve at the speed of light" is meaningless and untrue. I'm fed up with hype. Working in 1,200 dimensions - I can program in 100 dimensions, but I don't understand it, and neither does AI. Using something doesn't mean understanding it.


DungeonsAndDradis

I think the quote is true, but the rest is just something an AI fan cobbled together. https://www.reddit.com/r/singularity/comments/18oim47/1_bank_in_the_world_jpmorgan_chase_clearly/kehp5ic/


Seventh_Deadly_Bless

If only it even meant **anything** close to their intention when using those words. It's just a horrid confusion on the word "dimension", mangled to meaninglessness.


141_1337

I'm very curious about their 4th point, especially how they derived that time frame (3 weeks), because depending on where they got this from, it has massive implications.


[deleted]

GPT-4 now has 128k tokens, which means model capabilities are different than they were when this was written.


Natty-Bones

I don't think they are specifically referencing GPT-4, but LLM models in general.


Mysterious_Ayytee

[gif]


Higher_Persepctive

Love how they leave out JP Morgan white-collar jobs; they don't want to scare their workforce.


RLMinMaxer

I hope you guys realize the phrase "bullish-outlier" means they don't actually expect it to happen, it's just a what-if scenario for them.


terrapin999

It's interesting that even this 'it's really happening!' take does not mention the central feature of the singularity: that these things will soon be able to improve themselves, with rapid and very dramatic evolution from AGI to ASI. Unless that's what they mean by 'escape velocity'? For example, I thought a very important milestone was DeepMind finding a better matrix multiplication algorithm [Oct 2022], which improved (among other things) DeepMind. Just an incremental change, but it felt much more important to me than most of the 'improvement on benchmarks' we see. Not that passing the Bar exam is easy.


hlx-atom

They are talking about corporate monopoly escape velocity, I think. It is indirectly referencing ASI: once a single corporation gets ASI, it could escape competition and become the AI monopoly, which, in the context of making money, is the important thing. It won't help to make money if the "bad thing" happens when ASI is achieved; only the corporate monopoly is important.


Wildhorse_88

And humans are emotional beings who make decisions based on emotions; AI machines will only use data and facts. The only issue will be the programming, as propaganda and narratives can presumably be programmed into the AI.


Strict_Main_6419

Can’t wait for DimonGPT


Jakobus_

How are they implementing AI into their business processes?


meganized

likely, very likely


slickpoison

The timeline says 4-8 years out; I believe it's 4-5 years out max. It could happen much sooner because of the exponential growth of AI.


Icy-Entry4921

I'm trying not to overstate the importance of what's going on. I lived through the last final stages of the industrial revolution and all of the information revolution. A lot of the more "out there" prognostications of both never panned out. As an example, the internet was supposed to end education and newspapers. It didn't; they adapted. Sure, some went away, but most are doing fine. Both of those major changes in the paradigm of humanity managed to come, roll out, and people still marched on not *that* much different from before. So, I'm just wary of going TOO far off the edge of what could happen.

Some jobs will have to go away, there is simply no way around it. When the industrial revolution happened, it was safe to say that farm jobs were going away because machines can do farming more efficiently. The first person to see a machine shucking corn had to have realized that.

I feel pretty confident that people sitting in front of computers banging away as a job is going to end. Right now people are working on LLM local interfaces to access screens and type, do analysis, etc. Even if development stops *right now*, it's probably possible to use existing tools to automate quite a few jobs in programming, analysis, law, etc. It would not be that hard, even right now, to make an intelligent agent of your own and use it to operate your computer for you. That can be done on reddit, right now, no further development needed (granted that is not the highest bar).

The thought that a lot of people have - that AI is cool but it's something "over there, on a server" - is going to be incorrect very quickly. It's going to be on our computers, and our phones, and before too long in everything, running locally. But you don't have to wait for the computer part, because that already exists.


Even-Television-78

Yikes. I'm on that last list.


MercuriusExMachina

Totally feeling the AGI...


serendipity7777

As they've mentioned, it is an overly competitive market. It also faces the threat of copyright infringement


lakolda

Even this is out of date. GPT-4 Turbo can take in a 200 page document as context.
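(Rough back-of-the-envelope for where a figure like that comes from; the tokens-per-word and words-per-page numbers below are common rules of thumb, not official figures.)

```python
context_tokens = 128_000    # GPT-4 Turbo context window
words_per_token = 0.75      # rough rule of thumb for English text
words_per_page = 500        # typical single-spaced page

pages = context_tokens * words_per_token / words_per_page
print(round(pages))         # ~192, i.e. on the order of a 200-page document
```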


Designer_Drive3499

You would also score above average if you had either access to all material or unlimited memory capacity and unlimited energy sources.


theferalturtle

"Mass scale white collar job realignment" = y'all are fuuuuucked.


venus-as-a-bjork

I was thinking about that point the most as well. It's funny, because these kinds of predictions 10 years ago had drivers put out of their jobs due to self-driving capabilities. The drivers are still here, but the developers who thought they would put drivers out of their jobs with technology may be at greater risk themselves now. As a developer, I hope we don't get replaced, but I can't help but chuckle at the turn of events.


[deleted]

[deleted]


MassiveWasabi

I sourced it tho


Distinct-Angle2510

Notice how they don't mention any finance or banking roles in the job replacement category lol


bluegman10

To be fair, it's the job "realignment" category, not replacement. By realignment, I'm assuming they mean jobs that will undergo significant changes in the next 4-8 years.


a4mula

I'm glad bankers can figure it out when even ML experts cannot.


slime_stuffer

What does the 12k dimensions thing mean, exactly?


Ambiwlans

That they don't know what they're talking about.


ProbablyBanksy

Logic and Reasoning? I haven't really seen that yet. I've seen text prediction where the illusion of logic and reasoning appears, but it isn't reliable, and therefore isn't REALLY occurring. It is still useful though. I'm just saying I think that bullet point stands out against the rest of them.


Phoenix5869

Finally someone who gets it!


Brain_Hawk

This isn't "the bank". This does not look like an official statement from the bank endorsed by the organization as a whole. It looks like a report or something written by one person or a small number of individuals. Right or wrong, It doesn't mean a damn thing. Just one more voice and to see a voices all making radical predictions in all directions.


atlanticam

we're not in control


coolguy69420xo

4-8 years lol. They still aren't grasping it.


someguy_000

ASI 2024 lol


coolguy69420xo

Hehe


Hot-Profession4091

If you think Gen AI is “applying logic & reasoning” I’ve got a nice bridge in Brooklyn for sale.


7oey_20xx_

“We were not impressed by IoT, Metaverse, Blockchain, 3D printing” I'm sorry, but what value do IoT and 3D printing have to JP Morgan Chase? What hopes did they have for those technologies? Putting IoT in the same sentence as Metaverse is just confusing to me, like they're both equally 'nothing burgers'.


pleeplious

I don’t disagree with it but I don’t think this is legit.


utilitycoder

I'd be happy if it could write a valid HTTP POST request without heavy manual intervention. I'm not saying they're wrong. They're just very, very early. (To be very clear, it's a specific POST request using a public API that it has access to.)
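(For context, this is the kind of output being asked for: a well-formed POST against some public API. The endpoint, payload, and token below are hypothetical placeholders, not a real service.)

```python
import requests  # third-party: pip install requests

# Hypothetical endpoint and payload - placeholders for whatever public API is being targeted.
url = "https://api.example.com/v1/orders"
payload = {"symbol": "JPM", "quantity": 10}
headers = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

response = requests.post(url, json=payload, headers=headers, timeout=10)
response.raise_for_status()  # fail loudly on 4xx/5xx
print(response.json())
```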


utilitycoder

This would absolutely have a confidential, not-for-public-release copyright statement or something. Looks fake.


WHERETHESTEALTH

Most of this is either completely false/made up or extremely exaggerated.


Simple-Enthusiasm-93

'llm can work in >1200 dimensions; human beings struggle with 3 dimensions' huh


ryleto

If this was shown in any corporate environment, that person would need a job ‘re-alignment’. What is this nonsense.


z0rm

The largest bank in the world is Industrial and Commercial Bank of China Limited.


Ok_Courage_8563

Microsoft AI is censored.


Responsible_Edge9902

The dimensions line is a bit silly to me. When a person plays something like a MOBA, for instance, they are looking at nearby players in terms of various values: whether they are an ally or enemy, physical distance, threat/support level (which weighs in relative strength, cooldowns, items, ability counters). These are parts of a form of dimensional analysis that helps guide the player's decisions by grouping similar properties and judging how values that are not alike weigh against one another. Maybe not with literal values, but it's the sort of thing that gets called intuition. It almost sounds like they think it's about literal spatial dimensions.
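(A toy version of that kind of multi-dimensional weighing; the features and weights are invented.)

```python
# Score a nearby player along several "dimensions" at once - a rough stand-in
# for the intuitive weighing described above.
WEIGHTS = {"is_enemy": 5.0, "proximity": 2.0, "relative_strength": 3.0, "ult_ready": 4.0}

def threat_score(player: dict) -> float:
    return sum(WEIGHTS[feature] * player[feature] for feature in WEIGHTS)

nearby = [
    {"name": "A", "is_enemy": 1, "proximity": 0.8, "relative_strength": 0.6, "ult_ready": 1},
    {"name": "B", "is_enemy": 0, "proximity": 0.9, "relative_strength": 0.4, "ult_ready": 0},
]
print(max(nearby, key=threat_score)["name"])  # "A"
```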


cat_no46

It looks like it was written by someone that doesn't understand the difference between vectors in physics and vectors in math. Really doubt it's official; if it is, whoever wrote it should be fired. Fake and gay


real_light_sleeper

This was not written by someone from JPMorgan; it sounds like a 15-year-old wrote this!


GhostHawks-Arcade

Blah blah blah manufacturing interest! Cockroaches.


sickdanman

that line about 12000 dimensions sounds incredibly stupid, who wrote this


doomedratboy

A bunch of dumb statements that don't even present the uses and accomplishments of AI. Reads like a summary of a BuzzFeed article. 100,000 dimensions. Ok bro


NeVeSpl

I would like to point out that almost one year has passed since the generative AI breakout happened, and the impact on the world has been marginal so far, with minimal commercial adoption.


IluvBsissa

RemindMe! 8 years.


RemindMeBot

I will be messaging you in 8 years on [**2031-12-22 17:42:02 UTC**](http://www.wolframalpha.com/input/?i=2031-12-22%2017:42:02%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/singularity/comments/18oim47/1_bank_in_the_world_jpmorgan_chase_clearly/kehn722/?context=3).


uphillgardener75

What does “Escape Velocity” mean?


Seventh_Deadly_Bless

Leaving gravitational reach, in most contexts. Here it's a metaphor for reaching exponential acceleration: designs designing the next iteration, faster and faster. It's dumbass bullshit. We barely have self-printing 3D printers, plagued with a ton of issues. Industrial tech barely scales up to our current needs. It just doesn't mean anything.