OpenAI Debacle - Update (03/01): Elmo sues OpenAI for...uhhhh..."reasons"



1 hour ago, legend said:

 

 

That people have decided that AI now means generative ML is deeply infuriating to me. I put a lot of the blame on OpenAI leadership for this nonsense. They are such irresponsible, pseudoscientific twats.

"The Future", as you envisioned it as a child = Amazingly enlightened, with a bold emphasis on exploration and discovery from the stars to virtual worlds presently beyond us.

 

"The Future", when you finally arrived at it = A circus largely focused on marketable buzz rather than substantive progress, run by absolute douchebags, who have many douchebag friends assuring them that others just don't understand their genius!

 

[GIF: Elmo amid chaotic flames]

 

 


54 minutes ago, TwinIon said:

Personally, I would be happy if there is a sufficient backlash against generative ML to the point that they need to rebrand it. I feel like calling everything AI is a disservice.

 

I'm even okay calling it AI, because it did come from the AI research community. But the idea that it goes the other direction, that all AI is generative ML, is just too far!


2 hours ago, Mr.Vic20 said:

"The Future", as you envisioned it as a child = Amazingly enlightened, with a bold emphasis on exploration and discovery from the stars to virtual worlds presently beyond us.

 

"The Future", when you finally arrived at it = A circus largely focused on marketable buzz rather than substantive progress, run by absolute douchebags, who have many douchebag friends assuring them that others just don't understand their genius!

 

[GIF: Elmo amid chaotic flames]

 

 


Maybe AI doesn't stand for Artificial Intelligence but for Alien Intelligence. We're just slow at progressing through their tech enough for us to enjoy it... or be slowly taken over, like a Granny Goodness headset.


WWW.BUSINESSINSIDER.COM

The CEO of an AI startup said he wasn't able to hire a Meta researcher because it didn't have enough GPUs.
Quote

"I tried to hire a very senior researcher from Meta, and you know what they said? 'Come back to me when you have 10,000 H100 GPUs'," Srinivas said on a recent episode of the advice podcast "Invest Like The Best."

 


1 hour ago, Remarkableriots said:
WWW.BUSINESSINSIDER.COM

The CEO of an AI startup said he wasn't able to hire a Meta researcher because it didn't have enough GPUs.

 

 

Easy employer response: come back to me when you don't want to work for a company making the world worse.

 

(under the assumption the startup is not equally awful, just less funded :p )


8 minutes ago, SuperSpreader said:

 

lmao 10,000 GPUs gets you this:

 

3DReview_Screenshot_V001.jpeg

 

 


To be fair, that's not really what the AI researchers there do. The AI researchers do work on building some big models, and they do release them. They released one of the better open LLMs, and have a bunch of vision models like Segment Anything (or something like that).
 

Meta is much better in the open research department than OpenAI. It’s just that all the actual stuff where Meta makes money is toxic and I wouldn’t want to support that company. 


Yeah, to be fair, Meta puts a lot into their AI/ML research even if much of it doesn't get converted into a product. The thing with Meta is that most researchers go there for 2-4 years, then leave for something else. So really, if you want to get Meta talent, just play the long game and always leave an offer on the table; they'll quit and join you eventually.

 

It was always interesting watching the TVs in the food halls showing off work anniversaries; once it got to the 5-year mark, the numbers dwindled to just a tiny handful.


39 minutes ago, legend said:

[...] and have a bunch of vision models like segment anything (or something like that). 

 

Yeah, it's called Segment Anything. It's really impressive, and the work done with forks of it, like HQ-Segment-Anything, has been used in some really awesome projects. I'm kind of sad I didn't try to get some VC funding last year to build an idea I wanted for the AR space, because people are now actually releasing stuff close to what I wanted to make back then. If I do it now I'll probably be playing catch-up for a long time. :(


4 minutes ago, chakoo said:

 

Yeah, it's called Segment Anything. It's really impressive, and the work done with forks of it, like HQ-Segment-Anything, has been used in some really awesome projects. I'm kind of sad I didn't try to get some VC funding last year to build an idea I wanted for the AR space, because people are now actually releasing stuff close to what I wanted to make back then. If I do it now I'll probably be playing catch-up for a long time. :(

 

Yeah, I thought it was a really great model with a fantastic interface (in terms of the kinds of inputs the model can work from) that allowed for a lot of great uses. It's wild to me that word of mouth for it didn't take off the way LLMs did, but I suppose that's because LLMs are a magic trick that fools people into thinking there's a ghost in the machine, whereas Segment Anything is an actual, practical tool :p

 

(For the record, I think we can eventually build systems with a "ghost in the machine" -- at least as much as one exists for people -- but LLMs ain't it.)


1 minute ago, legend said:

 

Yeah, I thought it was a really great model with a fantastic interface (in terms of the kinds of inputs the model can work from) that allowed for a lot of great uses. It's wild to me that word of mouth for it didn't take off the way LLMs did, but I suppose that's because LLMs are a magic trick that fools people into thinking there's a ghost in the machine, whereas Segment Anything is an actual, practical tool :p

 

I think the issue is the way it's shown to people; its sample use case is the problem. It's demonstrated as just a way to extract an object from an image, and most people will question why that matters when their iPhone can do the same. Yet that could be a good thing for its adoption in tech, since people won't be flipping the **** out like they are with GenAI.


  • 3 weeks later...
9 hours ago, Air_Delivery said:

As someone who barely pays attention to this, how is OpenAI "open" if it is controlled by MS?

 

OpenAI was originally a non-profit organization. They lured in researchers with the promise of being "open." So initially, they were. They published the work they were doing and released a bunch of open source projects.

 

Eventually, they decided they should have a "capped" for-profit branch. This was totally "okay" though, because they were only doing it so they could fund their open efforts that were super important to humanity, wink wink, nudge nudge. Also, this for-profit branch would still be "controlled" by the non-profit board, so nothing nefarious could ever happen. I mean, yes, the cap was enormous and gets more enormous every year or so, but this will surely be fine. They need that ever-growing money because they're building a god, after all.

 

Eventually they stopped publishing their research and stopped releasing open source. But that's really for society's sake, because their stuff is just too "dangerous" to release, or to describe how they made it, or what data they trained it on -- all data acquired on the up and up, for sure. Only they can be the ethical shepherds of their protogod. Yes, yes, plenty of other research on the very same topics, with open source code and models, has been released and the world didn't crumble. But OpenAI's stuff is so much better; their big autocomplete is an artificial "powerful mind," so they still have to keep all of it closed, for our own safety of course. The important thing is they let people use and pay for their product, and that makes it open.

 

MS is one of the major investors in the for-profit branch, but that is totally 100% controlled by the non-profit board. And the non-profit side is absolutely controlling things for the betterment of man, so it's absolutely still "open." That's why, when the non-profit board fired Sam Altman, Sam got his for-profit cronies to get him reinstated, and then the non-profit board members had to "resign." I mean, the issue here was that the non-profit made a mistake and couldn't be trusted. They needed more ethical people in place. That's why Sam had said just a few months earlier that, to avoid him acquiring too much power, it was important that the non-profit board could fire him.

 

Oh, and while released emails showed that their plan all along was to lure in researchers with open research and then switch to closed research, that's all still fine. They just didn't think researchers would be able to understand that they really are still open, because they release products. And those products help people. And there's nothing more open than a product.

 

So really, they're totally open.


Yum Brands wants their employees to ask an AI how to make food that gets made in exactly the same way, every time, at every location

 

WWW.PCMAG.COM

Your next fried chicken basket or cheesy pizza could be made with the help of a generative AI-powered 'SuperApp,' says parent company Yum Brands.

 


22 hours ago, Ricofoley said:

Yum Brands wants their employees to ask an AI how to make food that gets made in exactly the same way, every time, at every location

 

WWW.PCMAG.COM

Your next fried chicken basket or cheesy pizza could be made with the help of a generative AI-powered 'SuperApp,' says parent company Yum Brands.

 

Quote

For example, fast food employees will be able to ask the AI bot questions about things like correct oven temperatures, according to the report.

As someone who has worked at KFC: lol, lmao

 

The fryers, ovens, and warming cabinets all have two temp settings, on or off; the only difference is how long you cook each thing, and that's all preprogrammed buttons, which are labeled.
 

lmao


1 hour ago, legend said:

It's almost like autocomplete isn't a path toward "AGI."

 

Apparently they're now at the point of just feeding LLM outputs back into the LLMs as training data because of how desperate they are for additional training material.


2 minutes ago, Jason said:

 

Apparently they're now at the point of just feeding LLM outputs back into the LLMs because of how desperate they are for additional training material.

 

I honestly don't know why anyone would do this. It's standard folk knowledge that this is bad, and there have even been explicit studies, across various generative models, showing it's bad all the same.
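Those studies usually demonstrate the failure mode with a toy recursive-training loop, and it's easy to reproduce. Here's a minimal sketch (my own illustration, not any lab's actual pipeline): the "model" is just a Gaussian fit to the data, and each generation trains only on samples drawn from the previous generation's fit at a slightly reduced temperature, the way deployed models typically sample. The tails get clipped a little each round, and the distribution collapses toward its mode.

```python
import random
import statistics

def fit(samples):
    """'Train' a toy generative model: fit a Gaussian to the data."""
    return statistics.fmean(samples), statistics.stdev(samples)

def generate(mu, sigma, n, rng, temperature=0.9):
    """Sample synthetic data from the fitted model. Sampling at
    temperature < 1 under-represents the tails of the distribution."""
    return [rng.gauss(mu, sigma * temperature) for _ in range(n)]

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(500)]  # generation 0: "real" data

stdevs = []
for generation in range(10):
    mu, sigma = fit(data)
    stdevs.append(sigma)
    # Each new generation trains only on the previous model's outputs.
    data = generate(mu, sigma, 500, rng)

# The fitted spread shrinks generation over generation as tail mass
# is clipped every round -- the "model collapse" effect.
print(f"gen 0 stdev: {stdevs[0]:.3f}, gen 9 stdev: {stdevs[-1]:.3f}")
```

Real pipelines are obviously not fitting one-dimensional Gaussians, but the mechanism is the same: any bias in the sampling step compounds once the model's outputs become its own training distribution.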


14 minutes ago, legend said:

 

I honestly don't know why anyone would do this. It's standard folk knowledge that this is bad, and there have even been explicit studies for various generative models showing it's bad all the same.

 

Yeah but have you considered that saying your model has 10x the input as competing models could boost your short-term stock dividend by up to 5%?


12 minutes ago, CitizenVectron said:

 

Yeah but have you considered that saying your model has 10x the input as competing models could boost your short-term stock dividend by up to 5%?

 

I have, and I have wept for humanity over it :p 

 

There seems to be a real problem not just with shitty companies, but with investors being dumb fucking marks who are awful at their jobs. Why do these morons have so much money to throw around?


If you've spent even 30 minutes having a continuous conversation with one of the current LLMs, you know it turns to gibberish before long if you don't actively steer it back on track, and even then, sometimes you just have to start over.


1 hour ago, Jason said:

 

Apparently they're now at the point of just feeding LLM outputs back into the LLMs as training data because of how desperate they are for additional training material.

 

That's how humans learn though

