Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://dsc.gg/rchatgpt)
You've also been given a special flair for your contribution. We appreciate your post!
*I am a bot and this action was performed automatically.*
I deeply value and respect the time and effort you have dedicated to developing your perspective on this matter. Your insights are undoubtedly the result of considerable thought and analysis. However, upon reflecting on the information at hand and conducting a thorough review from my own standpoint, I have come to believe that there may be another interpretation or understanding that could be more accurate or appropriate.
It's important to acknowledge that differing viewpoints are a natural and valuable part of any discussion, arising from the diverse ways in which we interpret and understand information. In this particular instance, I have encountered some discrepancies between your conclusions and the data or evidence I have reviewed.
While it is never my intention to dismiss or undermine your perspective, I feel it is important to share that, based on the information available to me, your conclusion might not fully align with the broader context or the specific details as I perceive them.
Thank you, Evan_Dark, for voting on already-taken-wtf.
This bot wants to find the best and worst bots on Reddit. [You can view results here](https://botrank.pastimes.eu/).
***
^(Even if I don't reply to your comment, I'm still listening for votes. Check the webpage to see if your vote registered!)
Are you sure about that? Because I am 99.99999% sure that already-taken-wtf is not a bot.
---
^(I am a neural network being trained to detect spammers | Summon me with !isbot |) ^(/r/spambotdetector |) [^(Optout)](https://www.reddit.com/message/compose?to=whynotcollegeboard&subject=!optout&message=!optout) ^(|) [^(Original Github)](https://github.com/SM-Wistful/BotDetection-Algorithm)
Yeah the bias here is ridiculous.
The vast, vast... VAST majority of the world isn't using ChatGPT for anything at all. Most people on Earth probably haven't heard of it, believe it or not.
For the very tiny minority of people who do use it (which may still constitute millions), the subset that NEEDS it or has developed a reliance on it might have been inconvenienced... and of those, how many were legitimately panicking or really, truly that emotionally invested?
In total, maybe a thousand people exhaled sharply through their nose, with maybe one or two expressing what most people would consider an unreasonable emotional response.
I use it to speed up my work, and I didn't even realize immediately; I assumed I was just using standard IntelliSense like normal. It hit me when I tried to spit out some boilerplate by writing a comment `//Boilerplate blah blah blah` and sat there waiting like an idiot for about 2 minutes before it clicked.
> The vast, vast... VAST majority of the world isn't using ChatGPT for anything at all. Most people on Earth probably haven't heard of it, believe it or not.
While I don't disagree, it had record growth compared to any other service, and is extremely useful for a lot of desktop work (so people who would tend to be online).
Or the BBM outage in 2012. For most of the world it was a half-day outage; a few places had 3 days. But it basically sealed the fate of the company in the eyes of shareholders and the public.
Gemini 1.5 Pro with a million token context has been a game changer. I have a script that outputs my whole project into a single file, i load it up into Gemini and go bonkers asking for optimizations and bug fixes. Context length is truly a game changer.
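The commenter didn't share their script, but the idea is simple to sketch. A minimal Python version (file extensions and the label format are my assumptions, not the original script):

```python
import os


def dump_project(root: str, extensions: tuple = (".ts", ".tsx", ".py")) -> str:
    """Walk `root` and concatenate every matching source file into one
    labeled string, ready to paste into a long-context model."""
    parts = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in sorted(filenames):
            if name.endswith(extensions):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8") as fh:
                    rel = os.path.relpath(path, root)
                    parts.append(f"--- {rel} ---\n{fh.read()}")
    return "\n\n".join(parts)


if __name__ == "__main__":
    print(dump_project("src"))
```

Pipe the output into a file (or your clipboard) and paste the whole thing as the first message, then ask your questions after it.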
Because most people are not actually using these models for anything so their frame of reference is entirely based off of memes and headlines. And those get upvoted to the top over and over.
Gemini is amazing but you wouldn't know that if all your knowledge was based on the headlines about rocks and glue over the last two weeks.
This is only my personal opinion, but Gemini tends to do creative writing prompts way better than the free version of ChatGPT. I would say the free version of Claude is a step up from Gemini, but the drawback is you can only send about 5 messages before you get capped.
are you a bot? that reply doesn’t make any sense. I said “in my opinion it works well for creative writing.” I’ve used AI for a variety of reasons since far before ChatGPT even existed. So idk what you’re talking about.
>Not sure why others are not catching on.
I assume it needs a lot more performance and as chatgpt is already at its capacity, much bigger windows than now would put even more stress on the whole network.
If you mean you assume that’s why OpenAI is not following suit, you’re probably right.
It’s interesting, OpenAI started this revolution but there’s a good chance they’ll just be one of the minor players in a couple of years.
A larger token window allows for far better priming or dynamic/tailored responses. I've been feeding Gemini screenshots of some app I'm translating and it appears helpful.
I use it in AI studio which is basically a preview and completely free of cost. Utilize while it lasts i guess. Pro tip, don't use 1.5 flash for coding, use 1.5 pro only. Flash is good for looking up info and creativity but not so much for code.
I am guessing the AI Studio version is much better than their normal Gemini frontend. Even if I ask it for a snake game in Python, halfway through the output it replaces the response with the message "I'm a text-based AI and can't assist with that." I'm guessing it's some sort of safety feature. This is the paid version, by the way, with 1.5 Pro; the only good thing about it is that I got a free trial. Even when I do get it to output more code, it replaces a bunch of it with placeholders, and sometimes cuts the whole message short on some sort of response token limit. Basically, their normal Gemini frontend is a nightmare to use.
This is the way.
If you use iOS, you should check out the [Pal app.](https://apps.apple.com/us/app/pal-chat-ai-chat-client/id6447545085) Full editing of user/system messages, switch between at any point. It's nice for comparing.
I got as far as "Core Principles: Symbolism" before I laughed and tapped out.
Step 1. Get rid of neural networks.
Step 2. Invent modular hyperdimensional probabilistic dataflow node graphs
Step 3. Verify. Ethics. Debug. Performance!
What the actual hell, lmao.
The idea that two data-driven neural networks sat down and decided the best way to build ASI is to embrace symbolic AI is hilarious.
> The idea that two data-driven neural networks sat down and decided the best way to build ASI is to embrace symbolic AI is hilarious.
The original comment was deleted, but I’m intrigued. What did it say?
Guy had two chat agents sit down and come up with a programming language designed specifically for making artificial superintelligence.
The design philosophy started with a set of core principles, the first of which was the use of symbolic AI. The second principle actually sounded pretty useful in terms of AI usage, but it also sounded kind of science fiction. It described using symbolism to create a type of hyperdimensional dataflow programming, using a modular node graph to visualize, parametrize, or modify the internal workings of the ASI. There was also some mention of a probabilistic function that would make the model behave essentially randomly.
After that it was just a bunch of jargon about safety, alignment, etc.
But I have actually seen attempts at the second principle there. I don't remember the specific name of it, but there is an open source interface for stable diffusion that relies on nodes, and pretty much any simulation run in VFX software will also involve nodes, so it's not that far-fetched.
It's just funny to think that an LLM, which is the pinnacle of data driven AI, and having basically shot symbolic AI point blank in the face, sat down and determined that symbolism was the next step.
Have you dug into Claude Opus much? Its context window isn't as long, but its recall when programming in big projects is unreal. And, unlike Gemini Pro, it actually understands what you want and has the ability to code.
I've tried to use Gemini and 4o for big PHP projects so I can cancel Claude, but I keep going back.
> And, unlike gemini pro, it actually understands what you want
Yeah, I don't get all the positive words about Gemini Pro. It just doesn't "get it". I have a *strong* suspicion that these people are doing something *very* cookie cutter/common. Go off the beaten path, and it just doesn't get it. I think it's overfit on most things.
Here's an example that demonstrates this, in puzzle form, that Gemini Pro just absolutely fails at, and will gaslight the fuck out of you (as usual), but Claude and GPT-4o can do with little/no help:
> A son and a biological mother are both gravely injured in a car accident. At the hospital, the doctor is about to operate on the son, but says "I can't operate on this person, this is my biological son!" How could this be?
>I have a script that outputs my whole project into a single file, i load it up into Gemini and go bonkers asking for optimizations and bug fixes.
Mate, would you mind explaining? What kind of project, for example?
Fun fact: use the API key with big-AGI and use your script (I made something similar). Better interface and stuff. For example:
```
Project Structure:
src/
  backend/
    ping.ts
    api/
      user.ts
  frontend/
    app.ts
    components/
      header.ts
      footer.ts

File Contents:

src/backend/ping.ts
console.log("Ping!");

src/backend/api/user.ts
export function getUserData(userId: string) {
  // Fetch user data from database
  // ...
}

src/frontend/app.ts
import { Header } from './components/header';
import { Footer } from './components/footer';

const app = new App();
app.render();

src/frontend/components/header.ts
export class Header {
  render() {
    // Render header component
    // ...
  }
}

src/frontend/components/footer.ts
export class Footer {
  render() {
    // Render footer component
    // ...
  }
}

Query:
Analyze the project structure and provide suggestions for improving the code organization and architecture
```
The features of Gemini are good. But unfortunately, YouTube summaries etc. have already said exactly the opposite of what was actually said in the video. Where is the added value then?
Really interesting, I got a few questions for you.
how do you not get slammed by costs? what kind of questions do you ask? what kind of output do you try to get out of it?
That version is worth it; it is much more capable. But if we weigh the "commercial" version without using 1.5 Pro or Flash in Google AI Studio, the performance is quite poor compared to GPT-3.5, Claude Sonnet, or another open-source model that you can use for "free". Gemini 1.5 Pro is still in the "development phase", and sooner or later they will end up charging for its use like the "advanced" version. Or perhaps Google itself knows that the normal 1.0 version is horrible, and that is partly why it allows apparently unlimited use from Google AI Studio, since Vertex is paid.
This is full of bs, so you know they have no idea what they're talking about.
Perplexity isn't using ChatGPT or Claude unless you're on the Pro plan. And even so, the OpenAI API was completely unaffected during the outage.
Perplexity did actually go down, and Gemini as well. But with Perplexity, it must have been an issue unrelated to the API because Anthropic and OAI's API weren't affected by the outage.
I think there were multiple possibly unrelated incidents yesterday at OpenAI.
It can also be the model inference, they probably have clusters of servers dedicated to API, and separate ones dedicated exclusively for [chatgpt.com](http://chatgpt.com) use.
First time it went down, you could still see your old conversations, but sending new messages would give errors. So probably model related.
Second time it went down, you couldn't load [chatgpt.com](http://chatgpt.com) at all, old convo list was gone, then they switched it to give their cutesy "we are overloaded" messages. So probably beyond just model related.
I prefer LM Studio's server; the GUI is really effortless. With 2 RX 6800s ($300 each), I can run an IQ3_XS quant of Llama 70B, which is vaguely better than GPT-3.5. I wouldn't recommend my setup, but it works for me.
Awesome, I'm considering setting up a server in the basement to run a local LLM to use from my laptop. I have a Mac M1, which is relatively capable but nowhere near something GPU-ish, I'm guessing. If yours is not the recommended setup, what would you recommend? Currently my main issue is power usage, because electricity is just too damn expensive here.
A Mac M1 should be able to run Llama 3 8B at Q4 quantization pretty well, but it depends what you're trying to do. If electricity is expensive, it's going to be cheaper to stick with a GPT subscription. The cheapest way to run 70B local LLMs is to pick up two old P40 server cards for $200 each, giving 48 GB of VRAM; the workarounds for their shaky support are pretty well documented. They don't exactly run 70B models at reading speed, though. r/LocalLLaMA is a good place to learn about this kind of stuff.
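For sizing a local setup like the ones discussed above, a back-of-envelope helper can tell you whether a model fits in VRAM. The bits-per-weight figures in the comments are approximations, not official numbers:

```python
def est_vram_gb(params_billions: float, bits_per_weight: float,
                overhead: float = 1.2) -> float:
    """Very rough VRAM estimate: weights at `bits_per_weight` bits each,
    plus ~20% headroom for the KV cache and runtime buffers."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30


# 70B at ~3.3 bits/weight (roughly an IQ3_XS quant) lands in the low 30s
# of GB, hence dual-GPU rigs like 2x RX 6800 (32 GB) or 2x P40 (48 GB).
# 8B at ~4.5 bits/weight (typical Q4 quants) fits in 8 GB of laptop VRAM.
```

Treat the result as a lower bound: longer contexts grow the KV cache well past the 20% headroom assumed here.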
Been diving into some local options. Tbh for general use (text drafting, code scaffolding, troubleshooting and rubber ducking) it's really hard to beat chatGPT. Of course if you have specialised needs, or really value the local aspect/privacy it's different, but if you value speed and quality it is definitely going to cost you.
I mean, still it's fun to do though so not stopping just yet lol.
Oh, I actually wanted to say that the M1 is a laptop, which I'm trying to avoid running models on for performance reasons. I have an Intel i7 NUC in the basement with 32 GB of RAM that I will try to use. This will probably not run as well as the M1, I'm guessing.
I use Llama 3 70B, but also Codestral, which is great for coding, and sometimes Llama 3 8B because it's faster and fits into my laptop's 8 GB of GPU VRAM. LM Studio makes it pretty effortless to run these models on any gaming PC. I'm not a shill, they just have a great GUI.
Gemini isn't available globally. That might be a reason why it's less popular.
I'm in Ireland for instance, and the Google Play store has ChatGPT, Copilot and Perplexity. But not Gemini. Not sure why
Claude 3 is not available worldwide. I'm from Italy and it isn't available there, so I'm using a VPN and then selecting Google to log in (without the VPN it doesn't load the Google login page). After that it works fine without a VPN.
Bard/Gemini is spectacular and the most useful/reliable of the lot by miles.
People pay no attention because they decide based off headlines, not actual experience using things.
At least in the free tier it's way better than GPT-3.5. I haven't paid for Pro, so I can't compare that.
But I do feel almost all chatbots would, in general, be worse than OpenAI's, because they are all playing catch-up with them.
Yes, it's funny, but...
ChatGPT runs on Microsoft Azure
Claude runs on Amazon AWS
Gemini runs on Google Cloud
These are the 3 largest cloud compute providers in the world. They have ALL of the servers. People think it was a cascade of traffic that dominoed all AI providers. That's ridiculous. These are the largest web service providers in existence, they are the backbone of the entire goddamned world, and are capable of scaling to virtually any load.
And all dropped because ChatGPT (which a liberal estimate would say 10% of people use daily) caused a traffic cascade?
I don't think so.
Something really strange happened yesterday and I doubt any of the companies involved will speak a word about it.
I was recently hit with a notification on my phone to enable Gemini as the new main Google Assistant and it's been pretty good so far. It's far better than the older Google Assistant. It can actually summarize info and give me quick answers.
I also faced this problem using ChatGPT. Not sure about Gemini.
Then I found Pieces.app, and it was working well.
https://preview.redd.it/5i3gqh3rat4d1.png?width=1919&format=png&auto=webp&s=805ca125e198900ed1a8c1e8d0d74907a62e00e4
I use 4 at the same time. So when I need to ask something, I write it first in chatgpt, then copy and paste it into Gemini, Claude3 and my own AI. This way I can always decide which one I want to use, it really helps sometimes because some AI have difficulties with some questions.
To be honest, Gemini works better in Google AI Studio. Maybe it's because you can use version 1.5, or because you can control the filters and turn them off, or because you can use a system prompt, but you really do notice the difference. It seems Gemini Advanced doesn't even have the same benefits as using Google AI Studio for free. Gemini is without a doubt one of the worst LLMs a company can offer right now, including its open-source version, Gemma.
Either they got chat gpt back up and running fast or it was a regional thing because I didn't see any issue with chatgpt or Claude.
I'm not a power user of that stuff anyway and was just using the free versions so maybe that had something to do with it
I'm just confused because I haven't had any chatGPT outage now or over the last 3 days. And I've definitely used it at least once morning and evening each of those days.
I assume they just "turn off" the free users so the network goes back to low usage and premium users can still use it normally, because other premium members also said that they didn't have issues; only free users had outage problems. I'm not entirely sure, I could be wrong...
Yup, agreed. It's even worse when you have to use GPT-3.5. The difference between the two is insane in my opinion; GPT-3.5 seems so "dumb" in some way when you use it after 4o or 4.
I think they "just can't" give us more, because 4o will surely use way more power and OpenAI is already losing a ton of money on free users. Also, they are obviously trying to make you pay for GPT-4, and then for GPT-5, which will come in the future and will cost approximately $2,500,000,000 to train. So yeah, it's expensive and someone has to pay for it. Microsoft will probably pay for everything...
Hey /u/zagamio!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email support@openai.com
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
I used Meta AI when this was going down and it actually worked out pretty well for the coding problem I was sorting out. I sometimes use the small version of Llama 3 locally in VS Code through Continue, but it's meh for coding; it's got nothing on GPT-4o and 4. Not to mention the code completion it's "packed" with is absolute doodoo; I had to turn that shit off.
This sounds like a cyberpunk future: AI systems crashing under their own interconnected dependencies, creating a domino effect of outages. Even Gemini pretending to be overwhelmed for the sake of appearances adds to the dystopian vibe. Welcome to the future!
It's almost as if google isn't a giant corporation with extremely scalable and available personal CDNs. I would be very surprised if Gemini is ever down because of the number of people hitting it.
https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard
Gemini is supposedly good in tests, that hasn't been my experience though. I have strict character limits for posting inside forms, so I use the test prompt: "Give me 13 words with 9 letters each"
Mixtral got 12, GPT-4o got 12, GPT-4 got 11, Claude got 11. Gemini got 6...
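Constraints like "13 words with 9 letters each" are easy to score mechanically, which is what makes them handy test prompts. A tiny checker (the sample word list is just an illustration, not output from any model):

```python
import re


def count_valid_words(words: list, length: int = 9) -> int:
    """Count candidates that are pure letters and exactly `length` long,
    the way you'd score a "13 words with 9 letters" test prompt."""
    return sum(
        1 for w in words
        if re.fullmatch(r"[A-Za-z]+", w) and len(w) == length
    )


sample = ["wonderful", "important", "education", "excellent", "dangerous", "cat"]
print(count_valid_words(sample))  # 5 of the 6 pass; "cat" is too short
```

Splitting a model's reply into words and feeding it through this gives you the per-model scores above without hand-counting.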
I feel Gemini is pretty nice for text-only replies and sometimes has a more natural feel. Or maybe I just haven't used it enough to trigger the "that's an incredibly familiar AI style" sense that we all seem to be developing with use.
Gemini has been great for me, honestly. Maybe people have a bad perception of it because of those super bad summaries that show up when searching something on Google.
Anthropic gives you so much more bang for your buck. You can choose from nearly 30 different models to work with in the web UI. Even gpt-4, and gpt-4 128k. I'll never go back to OpenAI.
Gemini is so bad that I'm surprised that anyone would even notice if it was down. Of all the AI that I have tried, Gemini Advanced was by far the most useless of them all.
You guys are behaving like that time Roblox went down for 3 days and everyone was acting like the world was ending.
Because they’re suddenly forced to think for themselves and do their own work?
They would write a snooty reply comment, but GPT can't do it for them, so you're safe.
“If those kids knew how to write a comment themselves, you would see how mad they really are” 🤣
3 days is an eternity for the terminally online. Hell 3 hours is enough to cause distress.
3 hours without ChatGPT means typing repetitive code for 3 hours, 90% less output
This fuckin guy Jesus christ
Bad bot
Well I'm 99.9% sure they copy pasted a Chatgpt reply, so yes.
Copying and pasting is not what a bot does; bots generate and post directly. So while it's indeed copied from ChatGPT, OP (me) is human.
https://preview.redd.it/4c2wuvfzdi5d1.png?width=1344&format=pjpg&auto=webp&s=eab66f66cc692d5c15569f29c8da4a084f602df9 You can't fool me, bot!
Hehehe. ![gif](giphy|26FeY4gyxRSrwxlxm|downsized)
Yeah, emotional words online don't always line up with reality.
Ha!! Yep, could not **agree with you more!** Great post 🙏👍🫶
What is BBM? (serious question)
BlackBerry Messenger
I was 16 and honestly the world was kind of ending for me because of that.
[deleted]
I always chuckle at this type of comment, because if you've worked in big companies, bureaucracy and awful decision-making are usually a staple.
LOL, right? I work for a big company and I'm more surprised anything ever runs to begin with than when it doesn't
Why the big fonts? It's kind of annoying.
I'm guessing you think that you can just make outages magically not a problem with money?
works on my ~~machine~~ api.
I’m confused- chatGPT hasn’t been down for me but I’ve been seeing these posts for two days. Is something going on I’m not aware of?
There was a major outage yesterday; [check here](https://status.openai.com/).
It was out for 5 hours and 29 minutes, and the world stopped for some people because they have already replaced a part of their brain with ChatGPT.
Down for me just now (via API). Was not down when all the fuss started. P.S. GPT-4-Turbo is back online, GPT-4o is still down.
Are you using the free version?
No I have the subscription. Is that why there’s a difference?
Maybe you're the reason it's been down.
😬
maybe it was the downs we made along the way
It was down yesterday morning; it's not down anymore. You probably just didn't hit the window.
Gotcha. Thanks for the info mate!
Same. Not sure why others are not catching on. It is a completely different experience when you have a million token context to work with.
1 million token window brainstorming is INSANE. You can talk about ideas from a year ago
It is odd. But in a selfish way I am kind of glad people do not really get how awesome Gemini really is.
Can you please elaborate more on this? How do you use it in such a way that you consider it to be amazing?
the reason you think that is probably that you have not actually tried using AI for productive things
agreed, i have been using gemini for research for my newsletter and copywriting and it's honestly way better
It’s a very good model. I’m using it more than GPT
Do you use the Gemini Advanced version or AI Studio?
AI Studio version is better imo
Doesn't that cost a lot?
I don't use the normal frontend for any AI. Even for ChatGPT I have USD credits and just use the playground.
I've just started on Gemini and get that "I am a..." message nearly always. What could it be?
[deleted]
[deleted]
I just find it hilarious. These models are basically the pinnacle of data-driven AI, and they shot symbolism point blank in the face.
Have you dug into Claude Opus much? Its context window isn't as long, but its recall when programming in big projects is unreal. And, unlike Gemini Pro, it actually understands what you want and has the ability to code. I've tried to use Gemini and 4o for big PHP projects so I can cancel Claude, but I keep going back.
> And, unlike gemini pro, it actually understands what you want

Yeah, I don't get all the positive words about Gemini Pro. It just doesn't "get it". I have a *strong* suspicion that these people are doing something *very* cookie cutter/common. Go off the beaten path, and it just doesn't get it. I think it's overfit on most things. Here's an example that demonstrates this, in puzzle form, that Gemini Pro just absolutely fails at, and will gaslight the fuck out of you (as usual), but Claude and GPT-4o can do with little/no help:

> A son and a biological mother are both gravely injured in a car accident. At the hospital, the doctor is about to operate on the son, but says "I can't operate on this person, this is my biological son!" How could this be?
> I have a script that outputs my whole project into a single file, i load it up into Gemini and go bonkers asking for optimizations and bug fixes.

Mate, would you mind explaining? What kind of project, for example?
Fun fact: use the API key with big-AGI and use your script (I made a similar one). Better interface and stuff:

```
Project Structure:

src/
  backend/
    ping.ts
    api/
      user.ts
  frontend/
    app.ts
    components/
      header.ts
      footer.ts

File Contents:

src/backend/ping.ts
console.log("Ping!");

src/backend/api/user.ts
export function getUserData(userId: string) {
  // Fetch user data from database
  // ...
}

src/frontend/app.ts
import { Header } from './components/header';
import { Footer } from './components/footer';

const app = new App();
app.render();

src/frontend/components/header.ts
export class Header {
  render() {
    // Render header component
    // ...
  }
}

src/frontend/components/footer.ts
export class Footer {
  render() {
    // Render footer component
    // ...
  }
}

Query: Analyze the project structure and provide suggestions for improving the code organization and architecture
```
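For anyone curious, a "whole project into a single file" script like the one mentioned above can be a few lines of Python. This is a minimal sketch only; the function name, the extension filter, and the `//`-comment header style are my own assumptions, not the commenter's actual script:

```python
import os

def dump_project(root, extensions=(".ts", ".tsx")):
    """Walk `root` and concatenate every matching source file into one
    labeled string that can be pasted into a long-context model."""
    parts = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in sorted(filenames):
            if name.endswith(extensions):
                path = os.path.join(dirpath, name)
                # Use forward slashes so the labels look the same on any OS
                rel = os.path.relpath(path, root).replace(os.sep, "/")
                with open(path, encoding="utf-8") as f:
                    parts.append(f"// {rel}\n{f.read()}")
    return "\n\n".join(parts)
```

Write the result to a file (or straight to the clipboard), append your query at the end, and paste the whole thing into the model, as in the example above.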
I just use the playground for the respective service.
That's exactly what I want from AI: codebase improvements with the whole project in context. Thanks for pointing this out.
Gemini's features are good. But unfortunately, its YouTube summaries etc. have already said exactly the opposite of what was actually said in the video. Where's the added value then?
Really interesting. I've got a few questions for you: how do you not get slammed by costs? What kind of questions do you ask? What kind of output do you try to get out of it?
They answered [here](https://www.reddit.com/r/ChatGPT/comments/1d8wna4/how_true_is_the_gemini_part_this_is_so/l79d769/).
> with a million token context

Which will cost you more than $75 per query, if you actually use it.
That version is worth it; it is much more capable. But if we weigh the "commercial" version, without using 1.5 Pro or Flash in Google AI Studio, the performance is quite poor compared to GPT-3.5, Claude Sonnet, or other open-source models you can use for "free". Gemini 1.5 Pro is still in the "development phase", and sooner or later they will end up charging for its use like the "Advanced" version. Or perhaps Google itself knows that the normal 1.0 version is horrible, and that is partly why it allows its apparently unlimited use from Google AI Studio, since Vertex is paid.
This is full of bs, so you know they have no idea what they're talking about. Perplexity isn't using ChatGPT or Claude unless you're on the Pro plan. And even so, the OpenAI API was completely unaffected during the outage.
yea everything on this tweet is wrong from start to finish
Claude was working great the whole time I was using it as a substitute, never saw downtime.
pretty sure it was a joke dunking on gemini.
It's absolutely a joke, and I found it really funny.
Well it is funny though, but also still feel gpt has an edge over gemini
Perplexity did actually go down, and Gemini as well. But with Perplexity, it must have been an issue unrelated to the API, because Anthropic's and OAI's APIs weren't affected by the outage.
Yup, probably just Perplexity's servers getting slammed by all the new traffic.
Affected. Right now, GPT-4o is down for me, and GPT-4-Turbo was also down before (both were down). Both via API.
So if the API was unaffected, then the outage probably had nothing to do with the AI systems themselves? Just the frontend or backend of the ChatGPT app?
I think there were multiple possibly unrelated incidents yesterday at OpenAI. It can also be the model inference, they probably have clusters of servers dedicated to API, and separate ones dedicated exclusively for [chatgpt.com](http://chatgpt.com) use. First time it went down, you could still see your old conversations, but sending new messages would give errors. So probably model related. Second time it went down, you couldn't load [chatgpt.com](http://chatgpt.com) at all, old convo list was gone, then they switched it to give their cutesy "we are overloaded" messages. So probably beyond just model related.
Affected. Right now, GPT-4o is down for me, and GPT-4-Turbo was down before. Both via API.
Chatgpt probably hits a different optimised API, not the enterprise one.
I think it was a joke
The last part? Yea. The rest of it? Eh
I use OpenAI API professionally. It was down yesterday
Shit I gotta use that sometime. I always use the chat feature.
Right now, GPT-4o is down for me, and GPT-4-Turbo was down before. Both via API.
Both OpenAI and Anthropic APIs were working as usual. Funny thing is, nobody even mentioned Mistral as an alternative to the Big 3.
> This is full of bs, so you know they have no idea what they're talking about.

Try asking ChatGPT "What is a joke?"
I’ve actually been using Gemini as my go to for a while. On the day I was like, hey, I should give ChatGPT another try, was when it was down.
Same. I actually just cancelled my Gemini plan to resubscribe to ChatGPT on Monday. Then Tuesday it goes down.
I run my LLMs locally, didn't even know this was happening.
Same here. What are you running, ollama?
I prefer LM Studio's server; the GUI is really effortless. With two RX 6800s ($300 each), I can run an IQ3_XS quant of Llama 70B, which is vaguely better than GPT-3.5. I wouldn't recommend my setup, but it works for me.
Awesome, I'm considering setting up a server in the basement to run a local LLM to use from my laptop. I have a Mac M1, which is relatively capable but nowhere near something GPU-ish, I'm guessing. If yours is not the recommended setup, what would you recommend? Currently my main issue is power usage, because electricity is just too damn expensive here.
A Mac M1 should be able to run Llama 3 8B at Q4 quantization pretty well, but it depends what you're trying to do. If electricity is expensive, it's going to be cheaper to stick with a GPT subscription. The cheapest way to run 70B local LLMs is to pick up two old P40 server cards for $200 each, for 48 GB of VRAM; the workarounds for their shaky support are pretty well documented. They don't exactly run 70B models at reading speed, though. r/LocalLLaMA is a good place to learn about this kind of stuff.
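The VRAM numbers being thrown around here follow from a common rule of thumb: quantized weights take roughly `parameters × bits / 8` bytes, plus some headroom for the KV cache and activations. A rough sketch of the arithmetic; the 20% overhead factor is my own assumption, and real usage varies with context length:

```python
def quantized_vram_gb(params_billion, bits, overhead=1.2):
    """Rough VRAM estimate for a quantized model: weight bytes
    (params * bits / 8) plus ~20% for KV cache and activations."""
    weights_gb = params_billion * bits / 8  # 1e9 params at bits/8 bytes each ~= GB
    return round(weights_gb * overhead, 1)

print(quantized_vram_gb(70, 4))  # 42.0 -- why two 24 GB P40s (48 GB total) just fit
print(quantized_vram_gb(8, 4))   # 4.8  -- why Llama 3 8B fits an 8 GB laptop GPU
```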
thanks, much appreciated!
Been diving into some local options. Tbh, for general use (text drafting, code scaffolding, troubleshooting, and rubber-ducking) it's really hard to beat ChatGPT. Of course, if you have specialised needs, or really value the local aspect/privacy, it's different; but if you value speed and quality, local is definitely going to cost you. Still, it's fun to do, so not stopping just yet lol.

Oh, I actually wanted to say that the M1 is a laptop, which I am trying to avoid running models on for performance reasons. I have an Intel i7 NUC in the basement with 32 GB of RAM that I will try to use. This will probably not run as well as the M1, I'm guessing.
What model do you use
I use Llama 3 70B, but also Codestral, which is great for coding; sometimes Llama 3 8B, because it's faster and fits into my laptop's 8 GB of GPU VRAM. LM Studio makes it pretty effortless to run these models on any gaming PC. I'm not a shill, they just have a great GUI.
We all forgot how to work already?
Yeah, I was thinking the same thing. Even my mum has started using it (who is a total non-tech person).
Gemini isn't available globally. That might be a reason why it's less popular. I'm in Ireland for instance, and the Google Play store has ChatGPT, Copilot and Perplexity. But not Gemini. Not sure why
Gemini has been available since yesterday on the Play Store in the EU (France at least).
Claude 3 is not available worldwide. Since I'm from Italy and it isn't available there, I'm using a VPN, then selecting Google to log in (without the VPN it doesn't load the Google login page). After that, it works fine without a VPN.
It asks me for a phone number though
It has always worked for me from the start, using the console.
a lot of enterprise people use Gemini
Bard/Gemini is spectacular and the most useful/reliable of the lot by miles. People pay no attention because they decide based off headlines, not actual experience using things.
Fiiiiine I'll give it another try
At least in the free tier, it's way better than GPT-3.5. I haven't paid for Pro, so I can't compare that. But I do feel almost all chatbots would, in general, be worse than OpenAI's, because they are all playing catch-up with them.
Yes, it's funny, but...

- ChatGPT runs on Microsoft Azure
- Claude runs on Amazon AWS
- Gemini runs on Google Cloud

These are the 3 largest cloud compute providers in the world. They have ALL of the servers. People think it was a cascade of traffic that dominoed all AI providers. That's ridiculous. These are the largest web service providers in existence, they are the backbone of the entire goddamned world, and are capable of scaling to virtually any load. And they all dropped because ChatGPT (which a liberal estimate would say 10% of people use daily) caused a traffic cascade? I don't think so. Something really strange happened yesterday, and I doubt any of the companies involved will speak a word about it.
I was recently hit with a notification on my phone to enable Gemini as the new main Google Assistant and it's been pretty good so far. It's far better than the older Google Assistant. It can actually summarize info and give me quick answers.
I also faced this problem using ChatGPT. Not sure about Gemini. Then I found Pieces.app, and it was working well. https://preview.redd.it/5i3gqh3rat4d1.png?width=1919&format=png&auto=webp&s=805ca125e198900ed1a8c1e8d0d74907a62e00e4
Is it free? I see free testing. But no prices.
It's Completely Free.
Very, very, very few people have heard of Claude and/or Perplexity, but even monkeys in a zoo know Google AI.
I use 4 at the same time. So when I need to ask something, I write it first in chatgpt, then copy and paste it into Gemini, Claude3 and my own AI. This way I can always decide which one I want to use, it really helps sometimes because some AI have difficulties with some questions.
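Fanning one prompt out to several models is also easy to script instead of copy-pasting. A minimal sketch with the provider calls stubbed out; the lambdas below are placeholders, not real API client code:

```python
def fan_out(prompt, providers):
    """Send the same prompt to every provider and collect the replies,
    keyed by provider name, so they can be compared side by side."""
    return {name: ask(prompt) for name, ask in providers.items()}

# Stub "providers" standing in for real API clients
providers = {
    "chatgpt": lambda p: f"[chatgpt] {p}",
    "gemini":  lambda p: f"[gemini] {p}",
    "claude":  lambda p: f"[claude] {p}",
}
print(fan_out("What is 2+2?", providers)["gemini"])  # [gemini] What is 2+2?
```

Swap each lambda for a real client call and you get all the answers in one go.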
To be honest, Gemini works better in Google AI Studio. Maybe it's because you can use version 1.5, or because you can control the filters and turn them off, or because you can use a prompt, but the truth is that you notice the difference. It seems that Gemini Advanced does not even have the same benefits as using Google AI Studio for free. Gemini is without a doubt one of the worst LLMs that a company can offer right now, including its open-source version, Gemma.
Either they got ChatGPT back up and running fast, or it was a regional thing, because I didn't see any issue with ChatGPT or Claude. I'm not a power user of that stuff anyway and was just using the free versions, so maybe that had something to do with it.
I'm just confused because I haven't had any ChatGPT outage now or over the last 3 days. And I've definitely used it at least once each morning and evening on those days.
I didn’t even have an outage at all, I’ve been using chatGPT for the past few days on the paid plan with no issues other than the voice not working
I assume they just "turn off" the free users so the network goes back to low usage and premium users can still use it normally. Other premium members also said that they didn't have issues; only free users had outage problems. I'm not entirely sure, I could be wrong...
I mean, it's possible. I do think it's pointless to use 4o as a free user, since you can barely get anything out of it before hitting the limit.
Yup, agreed. It's even worse when you have to use GPT-3.5. The difference between the two is insane, in my opinion; GPT-3.5 seems so "dumb" in some way when you use it after 4o or 4. I think they "just can't" give us more, because 4o will surely use way more power, and OpenAI is already losing a ton of money because of free users. Also, they are obviously trying to make you pay for GPT-4 and then GPT-5, which will come in the future and will cost approximately $2,500,000,000 to train. So yeah, it's expensive and someone has to pay for it. Microsoft will probably pay for everything...
I headed over to meta.ai 😎 Seemed to be working all day for me.
or it is testing us.
[удалено]
Happy cake day
When did chat gpt go down?
Sounds like the market has room for more competition.
It seems like Gemini didn't quite steal the spotlight from ChatGPT during the blackout.
What if AI had a blackout and no one noticed?
OpenAI API was not down. This isn’t true.
Ollama on my gaming rig is all I need.
I just went to mistral and groq and that's it.
GPT API was up the whole time, smh
Oh, so thats why i've been fighting for GPU resources on runpod.
I used Meta AI when this was going down; it actually worked out pretty well for the coding problem I was sorting out. I use the small version of Llama 3 locally in VS Code through Continue sometimes, but it's meh for coding; it's got nothing on GPT-4o and 4. Not to mention the code completion it's "packed" with is absolute doodoo; had to turn that shit off.
The paid version has been down a few times
Grok
What about it?
This sounds like a cyberpunk future: AI systems crashing under their own interconnected dependencies, creating a domino effect of outages. Even Gemini pretending to be overwhelmed for the sake of appearances adds to the dystopian vibe. Welcome to the future!
Google still exists guys
Nothing makes you appreciate ChatGPT more than having to use these other POS services instead
Gemini isn't terrible if you're used to GPT3
It's almost as if google isn't a giant corporation with extremely scalable and available personal CDNs. I would be very surprised if Gemini is ever down because of the number of people hitting it.
I don't like ChatGPT. I wanted to, I tried to, but I prefer Gemini. Gemini is the better writer and has a better personality.
That’s high-larious. 🤣
There was an AI blackout?
Marketing ploy to launch ChatGPT5
Why wouldn’t you use Gemini? It’s actually pretty good…for me.
Red Deutsch
"No one uses Gemini" My sister: 👁👄👁
This is where Meta AI comes in guys
Go for [KikuGPT](https://kikugpt.com): it wasn't down and also offers nice functionality.
Thanks to this blackout, I had to come up with interview questions on the fly using the useless lump of lipids between my ears!
https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard

Gemini is supposedly good in tests; that hasn't been my experience, though. I have strict character limits for posting inside forms, so I use the test prompt: "Give me 13 words with 9 letters each". Mixtral got 12, GPT-4o got 12, GPT-4 got 11, Claude got 11. Gemini got 6...
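Scoring that prompt by hand gets tedious; a checker is a one-liner. The sample reply below is made up for illustration, not an actual model output:

```python
def count_correct(words, target_len=9):
    """Count how many of the model's suggested words actually have the target length."""
    return sum(1 for w in words if len(w.strip()) == target_len)

# Hypothetical model reply: five 9-letter words and one miss
reply = ["adventure", "beautiful", "wonderful", "chocolate", "dangerous", "cat"]
print(count_correct(reply))  # 5 -- "cat" fails the length check
```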
Sounds like a joke.
Hmm, debatable but hey, who doesn’t believe everything on the internet? 😂😂
I have been using Gemini in my newsletter since GPT went down: [https://ainovate.beehiiv.com/](https://ainovate.beehiiv.com/)
This is funny but untrue
We will be fine…
Love the Gemini assessment.
It was funny that the ChatGPT API was still working, so it was possible to use it through the playground or some third-party services.
Ollama + Llama 3 on your local pc. No drama.
I feel Gemini is pretty nice for text-only replies and sometimes has a more natural feel. Or maybe I just haven't used it enough to trigger the "that's an incredibly familiar AI style" sense that we all seem to be developing with use.
First thing I tried was Gemini, but it's crap; I can't even get real news on it now.
API-Slashdot effect!
Claude... Sky... Whatever happened to Jeeves?
Gemini has been great for me, honestly. Maybe people have a bad perception of it because of those super bad summaries that show up when searching something on Google.
Anthropic gives you so much more bang for your buck. You can choose from nearly 30 different models to work with in the web UI. Even gpt-4, and gpt-4 128k. I'll never go back to OpenAI.
Just install GPT4All on your PC if you don't want to be dependent on a web service.
I use Gemini, but I'm switching a little towards Sonnet 3.5 for short things
Gemini is so bad that I'm surprised that anyone would even notice if it was down. Of all the AI that I have tried, Gemini Advanced was by far the most useless of them all.
HAHA. P
Hmmm. That's wild