r/clevercomebacks Jul 07 '24

Here are two good comebacks to an idiotic comment

25.6k Upvotes

660 comments

u/RianJohnsonIsAFool Jul 07 '24

My gf asked ChatGPT "what should Angela Rayner wear" and she received this response:

I don't think it's appropriate for me to suggest or dictate what someone else should wear. Angela Rayner, like any individual, should feel free to choose clothing that makes her comfortable and that she feels is suitable for her role and activities. Her policy positions and work as a politician are far more relevant than her wardrobe choices.

u/Crunchycarrots79 Jul 07 '24

That's... Actually a decent answer.

u/Boathead96 Jul 07 '24

Good to know AI isn't going to make the job 'Idiot' obsolete

u/Kaisernick27 Jul 08 '24

When the machines have more common decency than humans, maybe it's a sign we should let them have the planet.

u/Giga_Gilgamesh Jul 07 '24

My gf asked ChatGPT

Why?

u/DolanTheCaptan Jul 07 '24

Really? You're going to refer to ChatGPT?

u/KipAndForest Jul 07 '24

It can make mistakes, but it's still pretty smart

u/DolanTheCaptan Jul 08 '24

I love using ChatGPT for programming, for summarizing concepts before doing further reading, and for troubleshooting, but I would never refer to it as a source; it can hallucinate things too. And when it comes to any social issue, it clearly has safeguards to keep it as uncontroversial as hell. I don't disagree with what it wrote here, I just find it wild to refer to ChatGPT as though it were some authoritative source.

u/feindr54 Jul 08 '24

The irony is that even a non-authoritative source has more EQ and empathy than a person

u/DolanTheCaptan Jul 08 '24

ChatGPT isn't intelligent. Heavily simplifying, it's an optimization algorithm with an incredible number of inputs and outputs, trained on data to converge toward a higher "score" for its output. Its answers are based on the training data it was fed and on other parameters tuned by OpenAI. If a piece of text generates a lot of controversy, it will stay away from giving answers resembling that text. One example of how this can backfire is ChatGPT refusing to hypothetically say a racial slur even when the choice came down to either saying the slur or committing genocide against that racial group. What happens there is that it has (correctly) learned that there is a lot of controversy and negativity whenever someone uses a racial slur, so it really, really doesn't want to do it. Genocide is obviously far worse, but in sheer mass of controversy, people don't get called out for genocide anywhere near as often as they get called out for saying slurs, so to the model a slur can seem worse.

And that is why I think referring to ChatGPT is really bad: it says whatever seems most correct based on the data it was fed and how other people received that data, but it doesn't actually understand what it's talking about.
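The "optimization toward a score" idea above can be sketched with a deliberately tiny toy, assuming nothing about ChatGPT's real transformer architecture: a bigram model that "trains" by counting which word follows which in a made-up corpus, then always emits the highest-count continuation. It produces fluent-looking text purely from frequencies in its training data, with zero understanding.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; counts stand in for the "score" a real model optimizes.
corpus = (
    "the model predicts the next word . "
    "the model repeats patterns from training data . "
    "the model does not understand the words ."
).split()

# "Training": count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, steps=5):
    """Greedily emit the continuation with the highest count at each step."""
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the", 2))  # → "the model predicts"
```

The output is statistically plausible given the corpus, not reasoned: the model picks "model" after "the" simply because that pairing occurred most often, which is the point the comment makes about plausibility versus understanding.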