
The AI is garbage thread



2 hours ago, legend said:

 

Much of the practical implementations of modern AI use human sources for the data, but that is by no means the limit of the field nor the end goal. My primary sub field is reinforcement learning and by default there is no human information in reinforcement learning. The agent learns autonomously by interaction with its environment. Back during my postdoc, a lot of the work I did was to *add* the ability for RL agents to learn from additional information from people (e.g., through human delivered feedback or commands) because teachers can accelerate learning and allow for user-tailored agents.
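That no-human-data default can be sketched in a few lines. Below is a toy tabular Q-learning agent on a hypothetical 5-state chain; all names, numbers, and the environment itself are illustrative, not from any particular system:

```python
import random

random.seed(0)  # reproducibility only

# Toy 5-state chain: action 0 moves left, action 1 moves right.
# The only reward is for reaching the rightmost state.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA = 0.1, 0.9

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

# Off-policy Q-learning: behave randomly, learn the greedy values.
# Note: no human labels or demonstrations appear anywhere in this loop.
s = 0
for _ in range(10000):
    a = random.choice(ACTIONS)
    s2, r = step(s, a)
    target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
    s = s2

# The learned greedy policy moves right from every state.
greedy = [max(ACTIONS, key=lambda a: Q[(st, a)]) for st in range(N_STATES)]
```

The only inputs are the agent's own state transitions and rewards; this is the sense in which "by default there is no human information" in RL.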

 

The aspirations of AI are very much to move past supervised learning. It's also why it's hilarious that people think LLMs are near "AGI" when LLMs are still just doing limited dataset fitting of very well-structured human data.

 

Yeah, I think he's talking about the concrete implementations.  I'd wager he'd be skeptical that you can effectively apply AI to any system involving exchanges between human beings without continuous input from human beings, even if it utilizes unsupervised learning.  Although I've heard him concede that it's possible if you're dealing with a closed system in a controlled environment, like a game with a limited move-set.  (But most of the important systems that operate within society are open, evolving ones, in uncontrolled environments.)

 

Actually, that idea is one reason why I find the history of AI-generated language translation really interesting.  According to the accounts I've read, the early attempts to achieve it involved trying to build the most accurate data-model of human language as possible, and it was assumed once that was done you could translate text/speech between one language and another without using large data sets.  But it never really worked until researchers shifted to using large data-sets of actual translations run through correlative statistical algorithms, LLM-style.

 

Do you find, working within the field, that the original idea of finding the 'golden key' of the pristine Chomskyan data model for language is still alive?  Are there those who think Big Data techniques will eventually be made obsolete in this respect?

 

Quote

Well if he does, that seems very unlikely to be true :p  Human-like intelligence is only limited to humans if either the Church-Turing thesis is wrong or the "hardware" of brains is the only kind of hardware that can manage the computational complexity of intelligent processes in practice. The former is very unlikely to be wrong, and if it is wrong, it has *vastly* weirder implications about reality than just what can be "intelligent." The latter is also very unlikely because evolution doesn't select for intelligence and populations evolved to it only through a series of local hacks. (Although I do think it's very plausible that we will need a hardware revolution for more intelligent machines -- but probably not one that needs to look just like brains)

 

Not coincidentally, he has a fairly iconoclastic reading of the Turing Test.  He has argued that there's no way to know whether a machine passes the Turing test because it has become smart enough to convince the observer that it is a human being, or whether it is because the observer has become dumber--dumb enough to believe the machine is human, when it clearly isn't.  He frames the text that articulated the test itself as partially a reprimand or cry for help from Turing towards the institutions and people he worked for, driven by the way he was treated for his homosexuality, i.e., "if you're dumb enough to forcibly drug me with hormones for being gay after I basically helped you win WWII maybe you're dumb enough to believe machines and people could be the same".

 

It's a fascinating interpretation, but I'm guessing not a popular one.


1 hour ago, Signifyin(g)Monkey said:

 

Yeah, I think he's talking about the concrete implementations.  I'd wager he'd be skeptical that you can effectively apply AI to any system involving exchanges between human beings without continuous input from human beings, even if it utilizes unsupervised learning.  Although I've heard him concede that it's possible if you're dealing with a closed system in a controlled environment, like a game with a limited move-set.  (But most of the important systems that operate within society are open, evolving ones, in uncontrolled environments.)

 

Actually, that idea is one reason why I find the history of AI-generated language translation really interesting.  According to the accounts I've read, the early attempts to achieve it involved trying to build the most accurate data-model of human language as possible, and it was assumed once that was done you could translate text/speech between one language and another without using large data sets.  But it never really worked until researchers shifted to using large data-sets of actual translations run through correlative statistical algorithms, LLM-style.

 

Do you find, working within the field, that the original idea of finding the 'golden key' of the pristine Chomskyan data model for language is still alive?  Are there those who think Big Data techniques will eventually be made obsolete in this respect?

 

Yeah, early AI attempts were very much oriented around rule-based systems and logical deduction. We usually refer to that as "GOFAI," which stands for "good old-fashioned AI." Pure GOFAI is effectively dead and very few people believe in it. The world is too complex for humans to encode all the necessary rules, and it doesn't have especially good mechanisms for learning from new information, a hallmark of animals.

 

However, there is a need to give current methods more of an ability to reason. The trick is how to endow systems with reasoning that is compatible with learned models that are more murky. There's actually a bunch of good work in this direction. In my field it would fall under "model-based reinforcement learning," where agents statistically learn models of how the world works and then plan within them (typically still using statistical planning methods, but planning all the same!)
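A minimal sketch of that model-based loop, on a toy chain environment. The names, the environment, and the choice of value iteration as the planner are my illustration here, not any specific method from the literature:

```python
import random
from collections import defaultdict

random.seed(0)  # reproducibility only
N_STATES, ACTIONS, GAMMA = 5, (0, 1), 0.9

def env_step(s, a):
    # True dynamics -- the agent treats these as unknown.
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

# 1) Statistically learn a model: transition counts and mean rewards.
trans = defaultdict(lambda: defaultdict(int))
rew_sum = defaultdict(float)
s = 0
for _ in range(5000):
    a = random.choice(ACTIONS)
    s2, r = env_step(s, a)
    trans[(s, a)][s2] += 1
    rew_sum[(s, a)] += r
    s = s2

def model(s, a):
    # Estimated P(s' | s, a) and mean reward, from the counts.
    n = sum(trans[(s, a)].values())
    return {s2: c / n for s2, c in trans[(s, a)].items()}, rew_sum[(s, a)] / n

def action_value(st, a, V):
    probs, r_hat = model(st, a)
    return r_hat + GAMMA * sum(p * V[s2] for s2, p in probs.items())

# 2) Plan entirely inside the learned model (value iteration).
V = [0.0] * N_STATES
for _ in range(100):
    for st in range(N_STATES):
        V[st] = max(action_value(st, a, V) for a in ACTIONS)

plan = [max(ACTIONS, key=lambda a: action_value(st, a, V)) for st in range(N_STATES)]
```

The point is the separation: step 1 is statistical learning from experience, step 2 is reasoning (planning) inside the learned model rather than the real world.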

 

 

1 hour ago, Signifyin(g)Monkey said:

Not coincidentally, he has a fairly iconoclastic reading of the Turing Test.  He has argued that there's no way to know whether a machine passes the Turing test because it has become smart enough to convince the observer that it is a human being, or whether it is because the observer has become dumber--dumb enough to believe the machine is human, when it clearly isn't.  He frames the text that articulated the test itself as partially a reprimand or cry for help from Turing towards the institutions and people he worked for, driven by the way he was treated for his homosexuality, i.e., "if you're dumb enough to forcibly drug me with hormones for being gay after I basically helped you win WWII maybe you're dumb enough to believe machines and people could be the same".

 

It's a fascinating interpretation, but I'm guessing not a popular one.

 

 

You might have already known, but just to be clear, the Turing test is not the same as the Church-Turing thesis. The former was Turing's rhetorical argument on intelligence; the latter is a computational theory claim (the foundations of which were simultaneously developed by Church and Turing, hence the name) which is far broader and is effectively the foundation of all computer science. Overturning the Church-Turing thesis would be to computer science what overturning evolution would be for biology. The ramifications for what it would mean about reality would be really weird and extreme.

 

On the topic of the Turing test though, it succeeds as a rhetorical argument but fails as a literal test, because it is very easy to dupe people with mechanisms that are pure smoke and mirrors. So I agree with him on that much. I don't think his framing of Turing's argument for the Turing test matches the historical context or content of the work :p Primarily, Turing wanted to object to the fuzzy and kind of mystical descriptions people had toward intelligence (or "thinking" more specifically) and push them towards adopting a functional perspective. If a machine can do all the cognitive things people do, it doesn't really matter what your fuzzy mystical thoughts on intelligence are. All we should care about is the function, and we can evaluate that directly without untethered philosophy.

 

There's some debate over whether Turing did think the imitation game would also be a good test or if it was purely for the rhetorical argument, but either way his drive was moving towards functional ways of thinking about intelligence.

 

Even if the Turing test as a literal test were better and less prone to humans being duped, it's also kind of unnecessary for most analysis of AI.  We know what we're building and what it lacks. It is necessary to empirically measure how good any AI system is at something, but we don't really need to empirically test what kinds of cognitive faculties it has. I don't need to "talk" to ChatGPT to know its cognitive faculties are seriously lacking, for example. I know many things it lacks by its construction. People get overly swept up by AI scientists pointing out that these models are black boxes we don't understand. We actually understand a lot. What is usually meant by that is we can't tell you precisely how the process it carries out works, because it's an incredibly complex process. If we knew how it worked precisely, we wouldn't have needed machine learning in the first place. We would have just programmed the process ourselves! But while we lack that precise knowledge of how it reaches its answers, we very much know how it learns to do that and what limits that imposes.


  • 4 weeks later...

If you hadn't already started avoiding everything G/O Media, maybe start doing that now.

 

WWW.THEVERGE.COM

G/O Media began publishing error-riddled AI-written articles last week.

 

Who would have thought giving article generation over to a chat bot wouldn't work very well?

 

WWW.THEVERGE.COM

The Gizmodo bot’s first article contained errors.

 


These fucking hypebros (they don’t even deserve to be called tech bros) are turning my field into a laughingstock. It’s so fucking infuriating. There are a great many good and responsible researchers and engineers in the field, but the hope of money and the fantasy of building a god has distorted everything and drowned them out.
 


  • 6 months later...

It's been a few years since I've had to dive into a project, but the current search results for DIY repairs are awful. I wanted to find some information on home furnace repair because mine probably broke today, and the entire first page of search results was auto-generated generic websites with straight-up false information and incorrect jargon. Then I searched for another topic (animal control services) and the top results were for nation-wide garbage instead of local businesses.

 

 


  • 5 weeks later...
  • 1 month later...
On 2/21/2024 at 12:51 AM, Keyser_Soze said:
WWW.PCGAMER.COM

In the eyes of AI, it would seem that nothing is ever truly yours once it's posted online.

 

Honestly, I think reddit is getting scammed here. They're not making money and the big boys in AI are eating their lunch.

 

Meta makes ~$40 average revenue per user per year, and it's way higher for NA. Reddit makes more like ~$0.80 in ARPU, but I think that's largely a failing of their ad platform and the general insanity that is Reddit. I think it's obvious that Reddit is more valuable as training data for AI than it is for their home grown insubstantial ad platform. If this was typical tech, I'd say that they can totally afford to take a worse deal now because if they survive they'll be worth far more later, but that's just not the nature of AI. Reddit's data will never be as valuable as it is right now unless they hit some ridiculous TikTok growth curve, which seems incredibly unlikely.
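As quick arithmetic on those two figures (both are the rough ARPU estimates from this post, nothing more precise):

```python
# Rough annual average-revenue-per-user estimates quoted above.
meta_arpu = 40.00
reddit_arpu = 0.80

# Meta monetizes each user roughly 50x better than Reddit does.
ratio = meta_arpu / reddit_arpu
print(f"Meta/Reddit ARPU ratio: ~{ratio:.0f}x")
```

That ~50x gap is the sense in which Reddit's ad platform is failing relative to its raw audience.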

 

Maybe the courts will rule that anything you can index publicly is purely fair use and let every AI company off the hook for all the data they stole, but if the pendulum swings the other way, Reddit will have given away billions a year. Hell, Meta was just thinking about buying Simon & Schuster purely for AI training data.

 

Reddit has one of the few online repositories that (at least in the past) was an absolute treasure trove of actual, legit user-generated text on nearly every subject. I'm sure they'd have been scooped up purely for this reason if there weren't an anti-trust backlash happening at the same time that the legality of scraping data for AI training remains unclear.


One of my production company’s current gigs is with this group in Italy that’s trying to produce their own Magic the Gathering style card game, and my job has been doing a SHIT TON of cleanup on AI generated images that they’ve been trying to use for promotional images.

 

I told my boss that after this, we need to put out something about refusing to work with AI images because of eventual legal issues.


The existence of human "beauty" pageants in the Year of Our Lord 2024 is absolutely ridiculous.

 

The existence of AI "beauty" pageants makes me want to commit seppuku.

 

WWW.FORBES.COM

In a contest sponsored by Fanvue, models and influencers crafted from artificial intelligence can now jockey for the title "Miss AI."

 

 

