The AI is garbage thread



1 hour ago, Ominous said:

So AI looks at the work of "real artists" and develops a way to mimic that. 

Modern artists look at hundreds of years of art from real artists and their work is 100% original.

 

Did I get that right? 

 

I'd say that's some of what artists do, but hardly the bulk of it. At least not for the talented ones. I'd say the best artists are trying to express something they feel, and those expressions take new form depending on the circumstances of modern life that motivate it.

 

It's no secret that I don't think there's anything "special" about humans that cannot be captured in sufficiently advanced machines. But to pretend that what these current AI models do is the same as what people do is to misunderstand and cheapen how complex intelligence and human cognition are. If all AI tech ever amounted to were variations on the current form of generative models, it would be a tragedy and a great disappointment, and I'd consider the entire AI project a colossal failure.


AI has infinite time to practice art. A better way to go about this, instead of referencing existing work, would be for the AI to keep making stuff and then get feedback from a human on whether it looks good; after enough feedback, the AI would figure out how to make something aesthetically pleasing without having to reference anything else.

 

I think back to the OpenAI bots playing Dota. The AI did things no human had ever done, but at the same time they were smart things (like putting down an observer ward in tower range when attacking the tower, so the tower would attack the ward instead of the player), and eventually it got really good at playing the game. If an AI did the same thing with art, it would be impressive.

  • Halal 1

4 minutes ago, Keyser_Soze said:

AI has infinite time to practice art. A better way to go about this, instead of referencing existing work, would be for the AI to keep making stuff and then get feedback from a human on whether it looks good; after enough feedback, the AI would figure out how to make something aesthetically pleasing without having to reference anything else.

 

I think back to the OpenAI bots playing Dota. The AI did things no human had ever done, but at the same time they were smart things (like putting down an observer ward in tower range when attacking the tower, so the tower would attack the ward instead of the player), and eventually it got really good at playing the game. If an AI did the same thing with art, it would be impressive.

 

Learning from human feedback with experimentation is a great way for an AI to go beyond pure mimicry, for sure. The biggest barrier is that it's hard to get enough human-feedback data. However, ChatGPT (a pure language model) works by fine-tuning on human feedback about what good responses are, so this is certainly an avenue of active research. Simple feedback modeling still has serious limitations, but it's a step in the right direction.
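To make the idea concrete, here's a minimal, hypothetical sketch of learning a reward model from human ratings: the model fits per-trait weights so that its scores match the feedback it has collected. This is the basic ingredient behind RLHF-style fine-tuning; the trait names and ratings below are invented purely for illustration.

```python
def train_reward_model(feedback, lr=0.1, epochs=200):
    """Fit per-trait weights so predicted scores match human ratings.

    feedback: list of (features, rating) pairs, where features is a
    dict mapping trait -> strength and rating is a human score in [0, 1].
    Plain stochastic gradient descent on squared error.
    """
    weights = {}
    for _ in range(epochs):
        for features, rating in feedback:
            predicted = sum(weights.get(k, 0.0) * v for k, v in features.items())
            error = rating - predicted
            for k, v in features.items():
                weights[k] = weights.get(k, 0.0) + lr * error * v
    return weights

# Hypothetical human feedback: raters like "composition", dislike "clutter".
feedback = [
    ({"composition": 1.0, "clutter": 0.0}, 1.0),
    ({"composition": 0.0, "clutter": 1.0}, 0.0),
    ({"composition": 1.0, "clutter": 1.0}, 0.5),
]
w = train_reward_model(feedback)
```

Once trained, a model like this can score new outputs without asking a human each time, which is exactly how a small amount of feedback gets stretched across a lot of generation.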

 

(It also starts breaking into reinforcement learning, which is my topic of interest and incidentally the best AI topic :p )


12 hours ago, Ominous said:

So AI looks at the work of "real artists" and develops a way to mimic that. 

Modern artists look at hundreds of years of art from real artists and their work is 100% original.

 

Did I get that right? 

You're missing the process entirely. Art isn't mimicry; otherwise abstract movements would never have been invented. Even the hieroglyphs of ancient Egypt weren't a display of raw cognitive skill: they were fashioned out of a tradition of cultural and social relevance. There's a reason high-ranking sarcophagi were depicted with an almost alien simplicity (Photoshop, anyone?), whereas laborers were depicted realistically.


Consider CRT. It's not an additive to historical context but a reclamation of the past: an effort to undo the erasure of events that profits the present. Same with the LGBTQ backlash. There could have been a national effort to teach sexual anthropology with even a bare-assed attempt at sex ed, but we can't have that. The lack of both is leading to deaths that knowledge could have prevented.


1 hour ago, unogueen said:

Art isn't mimicry


Most of it absolutely is. The greats are who they are because of their originality, but the vast majority of art is copying ideas and techniques developed by others.

 

I'm more interested in the art of filmmaking than any other, and I'm always blown away by something Spielberg does in nearly every one of his films, because the dude is old and yet invents some new shot sequence nobody has ever done before. Even the best of the best working today rarely do that.

 

I don't think the lack of originality is actually a knock; doing something unoriginal at an incredibly high level is still special in my book.


20 minutes ago, sblfilms said:


Most of it absolutely is. The greats are who they are because of their originality, but the vast majority of art is copying ideas and techniques developed by others.

 

I'm more interested in the art of filmmaking than any other, and I'm always blown away by something Spielberg does in nearly every one of his films, because the dude is old and yet invents some new shot sequence nobody has ever done before. Even the best of the best working today rarely do that.

 

I don't think the lack of originality is actually a knock; doing something unoriginal at an incredibly high level is still special in my book.

How much do you actually understand about art history? You can, and should, admit ignorance when applicable. Whatever drivel you've absorbed through offhand media representations of art is not comprehensive in the least. Picking out landmark artists and supposing the rest to be copycats is just a mirror of present-day product culture. Art isn't iterative. People don't practice by painting the same Rembrandt a thousand times over; you're not training a muscle, it's idealization. An author is not 1,000 monkeys on 1,000 typewriters. There is no trial and error here. More to the point, this disdain for art probably originates from the media's portrayal of high-art installations, while most people probably don't even recognize that low art is a thing, a very pervasive thing, and their spite for high art poisons the whole pool regardless.

At the end of the day, art is a measure of history, of anthropology, that may not meet 'objectivity'; nevertheless, it is a literacy that does more than inform you about what you're looking at: it also projects the history behind it.

As a fan of cinema yourself, I'd argue that the cape-movie paradigm is actively detrimental to the craft because it limits the viability of stories that veer away from a good-vs-evil diorama. I mean, does a story need an antagonist? What if the protagonist was the antagonist? Maybe it's all antagonists. Or maybe there are no protagonists. Maybe the movie tells a story whose central focus is a philosophy rather than a conflict.


Is the cultural erasure caused by the colonizer's proselytizing by fire just something you're going to walk away from? Because you didn't. It's been centuries, and no measures have been taken to correct any wrongs; rather, they exacerbate the circumstance and continue the exploitation. I've seen enough right-wing memes declaring that Africa has never produced anything of value for humanity. And do you understand where that ignorance comes from?


44 minutes ago, unogueen said:
WWW.NRN.COM

During the 2022 Krispy Kreme Investor Day, execs addressed the major thorn in their side: lack of access

 

I worked at a convenience store called Winner's Corner that carried Krispy Kreme, and we had to order probably twenty dozen doughnuts, if not more (this was over 13 years ago). Sure, at first they sold decently, but that didn't last long. Every night I had to toss out a trash bag full of them, which was such a waste. After taking some home a few times, I couldn't stand eating them anymore, even for free, and just stopped. Same with everybody else I worked with. Our store was the busiest one the company owned in the area. The northern Nevada KK ended up closing in May 2008, and we switched to a company that delivered just two dozen, which almost always sold out, or left only a couple behind.


Art makes you feel an emotional connection to the artist. Many people treat art as nothing more than consumptive entertainment, and for those people there is going to be no difference between art-art and AI art. The same will generally be true for art we all take for granted, such as signage, bugs, and clipart. I don't think that's a bad thing.

 

Art that can be broken down into basic ideas with clear rules, whose purpose is little more than making someone say "oh, interesting" or "I definitely noticed that and remember what it looks like," can now almost be done completely by someone with different training. That's it. A machine won't be able to make art until a machine has an intent and emotion of its own that it needs to express.


34 minutes ago, SuperSpreader said:

What I find hilarious is all the "AI artists" getting butthurt over the first criticisms they've ever received. Bro, welcome to art: you're always getting shit on.

 

we should develop AiCritic* and have it review AI art and just trash it

 

*tm

  • Like 1
  • Sicko 1

3 hours ago, skillzdadirecta said:
WWW.AXIOS.COM

It's hard to catch, the Furman University professor says.

 

Just saw this story on the news... Damn, A.I. is putting the entrepreneurial nerds who write papers for jocks out of business.

 



I never was paid for it, but I've probably passed freshman Comp I a dozen times writing papers for friends and family.


  • 4 months later...
  • 1 month later...

Meant to post this a while back in this thread. When it comes to the AI debate, Jaron Lanier is one of the most formidable minds on the subject, and whether you agree with his positions or not, you'd be intellectually lazy not to reckon with his arguments. Formerly his arguments were relegated to his books (Who Owns the Future? and Dawn of the New Everything are good ones to start with), but he published a New Yorker article touching on some of his main themes that is a must-read if you're interested in the subject:

 

WWW.NEWYORKER.COM

There are ways of controlling the new technology—but first we have to stop mythologizing it.

 

 

  • Thanks 2

1 hour ago, Signifyin(g)Monkey said:

Meant to post this a while back in this thread. When it comes to the AI debate, Jaron Lanier is one of the most formidable minds on the subject, and whether you agree with his positions or not, you'd be intellectually lazy not to reckon with his arguments. Formerly his arguments were relegated to his books (Who Owns the Future? and Dawn of the New Everything are good ones to start with), but he published a New Yorker article touching on some of his main themes that is a must-read if you're interested in the subject:

 

WWW.NEWYORKER.COM

There are ways of controlling the new technology—but first we have to stop mythologizing it.

 

 

 

 

I both like and dislike this article :p 

 

Things I like

  • Demystifying LLMs instead of embracing the complete and utter nonsense capabilities OpenAI wants you to believe they possess.
  • Focusing on more tangible risks with AI technology
  • Promoting more transparent AI that can be reasoned about

 

Things I dislike

  • Reinforcing the incorrect view that AI researchers buy into catastrophic risk scenarios, by pointing to poorly conducted polls and a handful of legitimate scientists who buy into the unscientific and unhinged speculation. I and so many of my colleagues are regularly exasperated by this narrative, fueled primarily by PR-focused tech bros with a savior complex.
  • Dismissing "AI" as a concept merely because our current technology is far from anything like human intelligence. It's still the aspiration of the field to work toward intelligent systems with human-like cognitive abilities. But it's important to note that there is a huge difference between having human-like cognitive abilities and being human-like. We're building the former, not the latter.

1 hour ago, Signifyin(g)Monkey said:

Meant to post this awhile back in this thread.  When it comes to the AI debate, Jaron Lanier is one of the most formidable minds on the subject and whether you agree with his positions or not, you'd be intellectually lazy not to reckon with his arguments.  Formerly they have been relegated to his books (Who Owns the Future and Dawn of the New Everything are good ones to start with) but he published a New Yorker article touching on some of his main themes that is a must-read if you're interested in the subject:

 

WWW.NEWYORKER.COM

There are ways of controlling the new technology—but first we have to stop mythologizing it.

 

 

 

The "data dignity" model for potentially compensating creators who have a high percentage of input into AI-generated works is an interesting one. It seems extremely hard to implement, though, especially in a way that most people would find fair.
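For illustration only, here's a toy sketch of how a data-dignity payout might be computed once you somehow had per-creator attribution scores (which is exactly the hard part); all names, numbers, and the threshold rule are hypothetical:

```python
def data_dignity_payouts(revenue, attributions, threshold=0.01):
    """Split revenue from an AI-generated work among contributing creators.

    attributions: dict mapping creator -> estimated share of influence.
    Contributors below `threshold` are dropped, and the remaining shares
    are renormalized so the payouts sum to `revenue`.
    """
    significant = {c: s for c, s in attributions.items() if s >= threshold}
    total = sum(significant.values())
    if total == 0:
        return {}
    return {c: revenue * s / total for c, s in significant.items()}

payouts = data_dignity_payouts(
    100.0,
    {"alice": 0.50, "bob": 0.30, "carol": 0.005},  # carol falls below threshold
)
# alice and bob split the $100 in 5:3 proportion
```

Even in this cartoon version you can see the fairness fights baked in: where the threshold sits, and whether dropped contributors' shares should go to the remaining creators or to the platform.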


2 hours ago, legend said:

 

 

I both like and dislike this article :p 

 

Things I like

  • Demystifying LLMs instead of embracing the complete and utter nonsense capabilities OpenAI wants you to believe they possess.
  • Focusing on more tangible risks with AI technology
  • Promoting more transparent AI that can be reasoned about

 

Things I dislike

  • Reinforcing the incorrect view that AI researchers buy into catastrophic risk scenarios, by pointing to poorly conducted polls and a handful of legitimate scientists who buy into the unscientific and unhinged speculation. I and so many of my colleagues are regularly exasperated by this narrative, fueled primarily by PR-focused tech bros with a savior complex.
  • Dismissing "AI" as a concept merely because our current technology is far from anything like human intelligence. It's still the aspiration of the field to work toward intelligent systems with human-like cognitive abilities. But it's important to note that there is a huge difference between having human-like cognitive abilities and being human-like. We're building the former, not the latter.

 

On the last point, I always thought it was more of a reframing than a dismissal. The claim is that intelligent systems with human-like cognitive abilities are the product of processing input taken from human sources, and that thinking of them as freestanding, autonomous minds (or 'intelligences') hides the role these human sources (who are freestanding, autonomous minds) play.

 

I think it comes out of his 'humanist' view of intelligence and the mind in general. While he has a fairly nuanced view of the nature of human consciousness (which, I think, he implies is a necessary predicate for true 'human intelligence'), he ultimately strikes me as someone who views it as something unique to humans that a digital machine probably isn't capable of replicating (particularly the aspect of subjective experience, or 'interiority').

 

That's my interpretation, though; he never says it explicitly. He touches on his views here, and notes the irony of being a big fan of Daniel Dennett, who holds very different opinions from him:

 

 


1 hour ago, Signifyin(g)Monkey said:

 

On the last point, I always thought it was more of a reframing than a dismissal. The claim is that intelligent systems with human-like cognitive abilities are the product of processing input taken from human sources, and that thinking of them as freestanding, autonomous minds (or 'intelligences') hides the role these human sources (who are freestanding, autonomous minds) play.

 

Many practical implementations of modern AI use human sources for their data, but that is by no means the limit of the field, nor its end goal. My primary subfield is reinforcement learning, and by default there is no human information in reinforcement learning: the agent learns autonomously by interacting with its environment. Back during my postdoc, a lot of the work I did was to *add* the ability for RL agents to learn from additional information from people (e.g., through human-delivered feedback or commands), because teachers can accelerate learning and allow for user-tailored agents.
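As a concrete illustration of that default setting, here's a tiny tabular Q-learning agent that learns a toy corridor task purely from its own trial-and-error interaction, with no human data anywhere; the environment and hyperparameters are invented for the example:

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy corridor: start at state 0, move left or
    right, and receive reward 1 only upon reaching the rightmost state.
    The agent learns entirely from its own interaction -- no human data.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:  # episode ends at the rightmost state
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: q[s][act])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            best_next = max(q[s2]) if s2 < n_states - 1 else 0.0
            q[s][a] += alpha * (r + gamma * best_next - q[s][a])
            s = s2
    return q

q = q_learning()
```

After training, the learned values prefer "right" in every non-terminal state, i.e. the agent discovered the optimal policy from reward alone, which is the baseline that human feedback or commands can then accelerate.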

 

The aspirations of AI are very much to move past supervised learning. It's also why it's hilarious that people think LLMs are near "AGI" when LLMs are still just doing limited dataset fitting of very well-structured human data.

 

 

1 hour ago, Signifyin(g)Monkey said:

 

I think it comes out of his 'humanist' view of intelligence and the mind in general. While he has a fairly nuanced view of the nature of human consciousness (which, I think, he implies is a necessary predicate for true 'human intelligence'), he ultimately strikes me as someone who views it as something unique to humans that a digital machine probably isn't capable of replicating (particularly the aspect of subjective experience, or 'interiority').

 

That's my interpretation, though; he never says it explicitly. He touches on his views here, and notes the irony of being a big fan of Daniel Dennett, who holds very different opinions from him:

 

 

 

Well, if he does, that seems very unlikely to be true :p Human-like intelligence is only limited to humans if either the Church-Turing thesis is wrong, or the "hardware" of brains is the only kind of hardware that can manage the computational complexity of intelligent processes in practice. The former is very unlikely to be wrong, and if it is wrong, it has *vastly* weirder implications about reality than just what can be "intelligent." The latter is also very unlikely, because evolution doesn't select for intelligence, and populations evolved toward it only through a series of local hacks. (Although I do think it's very plausible that we will need a hardware revolution for more intelligent machines -- but probably not one that needs to look just like brains.)

 

