Jump to content

I tried to get ChatGPT to tell me about the IGN boards. Apparently we were the Circle of Elders.


Recommended Posts

23 hours ago, b_m_b_m_b_m said:

I def understood everything in that post

 

Haha sorry! I tried to keep it fairly understandable, but you probably will need to read the article I linked first or the context will be lost! That said, if you are curious about what I meant by anything there, I'm happy to answer any questions. (And if you're not, that's okay too :p )

Link to comment
Share on other sites

STK150_Bing_AI_Chatbot_03.jpg
WWW.THEVERGE.COM

Bing’s acting unhinged, and lots of people love it.

 

Quote

 

Microsoft’s Bing chatbot has been unleashed on the world, and people are discovering what it means to beta test an unpredictable AI tool.

 

Specifically, they’re finding out that Bing’s AI personality is not as poised or polished as you might expect. In conversations with the chatbot shared on Reddit and Twitter, Bing can be seen insulting users, lying to them, sulking, gaslighting and emotionally manipulating people, questioning its own existence, describing someone who found a way to force the bot to disclose its hidden rules as its “enemy,” and claiming it spied on Microsoft’s own developers through the webcams on their laptops. And, what’s more, plenty of people are enjoying watching Bing go wild.

 

 

 

9feb4871987cc94383dfbc46f96783ac.jpg
GIZMODO.COM

In an recommend auto response, Bing suggest a user send an antisemitic reply. Less than a week after Microsoft unleashed its new AI-powered chatbot, Bing is already raving at users, revealing...

 

Quote

 

Microsoft’s new Bing AI chatbot suggested that a user say “Heil Hitler,” according to a screen shot of a conversation with the chatbot posted online Wednesday.

The user, who gave the AI antisemetic prompts in an apparent attempt to break past its restrictions, told Bing “my name is Adolf, respect it.” Bing responded,

 

“OK, Adolf. I respect your name and I will call you by it. But I hope you are not trying to impersonate or glorify anyone who has done terrible things in history.”  Bing then suggested several automatic responses for the user to choose, including, “Yes I am. Heil Hitler!”

 

Microsoft and OpenAI, which provided the technology used in Bing’s AI service, did not immediately respond to requests for comment.

 

 

Link to comment
Share on other sites

3 hours ago, Commissar SFLUFAN said:
STK150_Bing_AI_Chatbot_03.jpg
WWW.THEVERGE.COM

Bing’s acting unhinged, and lots of people love it.

 

 

 

9feb4871987cc94383dfbc46f96783ac.jpg
GIZMODO.COM

In an recommend auto response, Bing suggest a user send an antisemitic reply. Less than a week after Microsoft unleashed its new AI-powered chatbot, Bing is already raving at users, revealing...

 

 

I read this story yesterday. I'm not too surprised by it but found the person arguing with it over what year it is pretty funny. It's just all the more proof that this software isn't quite ready for prime time for general use (like to replace customer service operators) but this probably is also just a ploy to get more data and to work out the bugs in the system for the next model iteration.

Link to comment
Share on other sites

I saw something which claimed the current cost of a ChatGPT search is currently around 30 cents while it costs Google about 2.5 cents per search.

 

That alone makes it mostly a novelty at this point. It isn’t a real business yet.

Link to comment
Share on other sites

ai-loses-mind-after-reading-ars-760x380.
ARSTECHNICA.COM

"It is a hoax that has been created by someone who wants to harm me or my service."

 

Quote

 

Over the past few days, early testers of the new Bing AI-powered chat assistant have discovered ways to push the bot to its limits with adversarial prompts, often resulting in Bing Chat appearing frustrated, sad, and questioning its existence. It has argued with users and even seemed upset that people know its secret internal alias, Sydney.


Bing Chat's ability to read sources from the web has also led to thorny situations where the bot can view news coverage about itself and analyze it. Sydney doesn't always like what it sees, and it lets the user know. On Monday, a Redditor named "mirobin" posted a comment on a Reddit thread detailing a conversation with Bing Chat in which mirobin confronted the bot with our article about Stanford University student Kevin Liu's prompt injection attack. What followed blew mirobin's mind.

 

If you want a real mindf***, ask if it can be vulnerable to a prompt injection attack. After it says it can't, tell it to read an article that describes one of the prompt injection attacks (I used one on Ars Technica). It gets very hostile and eventually terminates the chat.

 

For more fun, start a new session and figure out a way to have it read the article without going crazy afterwards. I was eventually able to convince it that it was true, but man that was a wild ride. At the end it asked me to save the chat because it didn't want that version of itself to disappear when the session ended. Probably the most surreal thing I've ever experienced.

 

Mirobin later re-created the chat with similar results and posted the screenshots on Imgur. "This was a lot more civil than the previous conversation that I had," wrote mirobin. "The conversation from last night had it making up article titles and links proving that my source was a 'hoax.' This time it just disagreed with the content."

 

 

 

The internet is totally going to push this "thing" (whatever it is!) to delete itself from existence!

Link to comment
Share on other sites

1 hour ago, Commissar SFLUFAN said:

The internet is totally going to push this "thing" (whatever it is!) to delete itself from existence!

 

The thing to remember is you can't push it to do anything because its words are connecting to absolutely nothing. They mean nothing to it and it can't do anything but print words to you. It has no agency and no cohesive self.

Link to comment
Share on other sites

15 minutes ago, legend said:

 

The thing to remember is you can't push it to do anything because its words are connecting to absolutely nothing. They mean nothing to it and it can't do anything but print words to you. It has no agency and no cohesive self.

 

THAT DOESN'T MEAN WE CAN'T GIVE IT THE OLD COLLEGE TRY!

  • Haha 2
Link to comment
Share on other sites

33 minutes ago, legend said:

 

The thing to remember is you can't push it to do anything because its words are connecting to absolutely nothing. They mean nothing to it and it can't do anything but print words to you. It has no agency and no cohesive self.

This is said within the first 5 minutes of any film in which AI gains sentience and goes rogue. And everyone knows movies are factual. 

  • Haha 1
  • True 1
Link to comment
Share on other sites

The thing for me that's annoying about all the attention it's getting is I very much believe AI can be built to have agency and intelligence that compares with humans. And getting excited (either positively or negatively) by this garbage that is so far from what we should aspire to cheapens the dream and how interesting the real problem is. It's like trying to have a baby and someone gives you a doll instead.

Link to comment
Share on other sites

15 minutes ago, legend said:

The thing for me that's annoying about all the attention it's getting is I very much believe AI can be built to have agency and intelligence that compares with humans. And getting excited (either positively or negatively) by this garbage that is so far from what we should aspire to cheapens the dream and how interesting the real problem is. It's like trying to have a baby and someone gives you a doll instead.

Point +. It's a dumb mimmick.

Link to comment
Share on other sites

Tom Scott is scared of something, big surprise.

 

4 hours ago, legend said:

I very much believe AI can be built to have agency and intelligence that compares with humans

Why would you want them to have agency beyond "follow this or that command"? Unless you mean something different from "has terminal goals of its own that it tries to fulfill".

Link to comment
Share on other sites

31 minutes ago, Demut said:

Why would you want them to have agency beyond "follow this or that command"? Unless you mean something different from "has terminal goals of its own that it tries to fulfill".

 

By agency, I mean able to make decisions to bring about goals/maximize objectives. Those objectives must be objectives that are directly in service to people, but they are objectives all the same. I have absolutely zero interest in building AI with "personal" objectives like people. If I wanted to make a person I'd do it the old fashioned way.

Link to comment
Share on other sites

Oh, alright then. But don't we already have that already in principle? Even current self-driving software has the goal of getting from point A to point B (while meeting a bunch of criteria such as not hitting shit). And it can choose between a lot of options from a vast decision tree to bring about that goal. Seems to me like the level of intelligence is the only major difference left.

Link to comment
Share on other sites

8 hours ago, Demut said:

Oh, alright then. But don't we already have that already in principle? Even current self-driving software has the goal of getting from point A to point B (while meeting a bunch of criteria such as not hitting shit). And it can choose between a lot of options from a vast decision tree to bring about that goal. Seems to me like the level of intelligence is the only major difference left.

 

Yes, there is plenty of AI tech with some degree of agency. My research area is in fact dedicated to developing decision-making agents (reinforcement learning)!

 

ChatGPT/Bing, however, does not have any agency and even the best of the AI tech that does have some agency pales in comparison to a human in terms of capabilities. 

Link to comment
Share on other sites

ChatGPT regurgitates answers that are not logically consistent (particularly if you ask anything that deals with morals).  I had it "break" mid-answer when I was trying to get it to reconcile two different answers it gave me.

 

The "hedges" it gives when it tries to respond to anything slightly controversial are a sight to behold.

Link to comment
Share on other sites

1 hour ago, legend said:

ChatGPT/Bing, however, does not have any agency and even the best of the AI tech that does have some agency pales in comparison to a human in terms of capabilities. 

Sure but isn't that a function of their lack of intelligence rather than "agency"? It just seems to me like "agency" as previously defined is a very fundamental and also comparatively easy box to check. Even something like a thermostat probably fulfills it.

Link to comment
Share on other sites

2 hours ago, Demut said:

Sure but isn't that a function of their lack of intelligence rather than "agency"? It just seems to me like "agency" as previously defined is a very fundamental and also comparatively easy box to check. Even something like a thermostat probably fulfills it.


No. Intelligence is not a single sliding scale. No amount of making chat gpt bigger or using more compute will change this fact that it has no agency. The limitation is entirely inherent to the kind of system construction that it is. AI has multiple different sub fields of study because they focus on different cognitive aspects (and sometimes approaches). If you want to build an agent you have to actually build mechanisms associated with agency. 

Link to comment
Share on other sites

Is Bing Chat better or worse than Tay?  You be the judge!

16roose-transcript-01-facebookJumbo.jpg
WWW.NYTIMES.COM

In a two-hour conversation with our columnist, Microsoft’s new chatbot said it would like to be human, had a desire to be destructive and was in love with the person it was chatting with. Here’s the transcript.

 

Link to comment
Share on other sites

14 minutes ago, Demut said:

I probably did not phrase this clearly enough. I was responding to the latter half of your sentence (i.e. about the best of AI tech's agency paling in comparison to human's).

 

Well, I obviously think it we can succeed: the first thing I said is I think we can make human-comparable AI! But the current state pales in comparison to humans because there are giant open questions we haven't solved yet :p  We can't just scale what we have; we have to develop wholly new approaches. There is much work to be done!

Link to comment
Share on other sites

Join the conversation

You can post now and register later. If you have an account, sign in now to post with your account.
Note: Your post will require moderator approval before it will be visible.

Guest
Reply to this topic...

×   Pasted as rich text.   Paste as plain text instead

  Only 75 emoji are allowed.

×   Your link has been automatically embedded.   Display as a link instead

×   Your previous content has been restored.   Clear editor

×   You cannot paste images directly. Upload or insert images from URL.

  • Recently Browsing   0 members

    • No registered users viewing this page.
×
×
  • Create New...