Mr.Vic20 Posted March 7

1 hour ago, legend said: That people have decided that AI now means Generative ML is deeply infuriating to me. I do put a lot of blame on OpenAI leadership for this nonsense. They are such irresponsible pseudoscientific twats.

"The Future", as you envisioned it as a child = Amazingly enlightened, with a bold emphasis on exploration and discovery, from the stars to virtual worlds presently beyond us.

"The Future", when you finally arrived at it = A circus largely focused on marketable buzz rather than substantive progress, run by absolute douchebags, who have many douchebag friends that assure them that others just don't understand their genius!
legend Posted March 7

54 minutes ago, TwinIon said: Personally, I would be happy if there is a sufficient backlash against generative ML to the point that they need to rebrand it. I feel like calling everything AI is a disservice.

I'm even okay calling it AI, because it did come from the AI research community. But the idea that it goes the other direction, that all AI is generative ML, is just too far!
silentbob Posted March 7

2 hours ago, Mr.Vic20 said: "The Future", as you envisioned it as a child = Amazingly enlightened, with a bold emphasis on exploration and discovery, from the stars to virtual worlds presently beyond us. "The Future", when you finally arrived at it = A circus largely focused on marketable buzz rather than substantive progress, run by absolute douchebags, who have many douchebag friends that assure them that others just don't understand their genius!

Maybe AI doesn't stand for Artificial but for Alien Intelligence. We're just slow at progressing through their tech for us to enjoy... or to be slowly taken over, like a Granny Goodness headset.
Remarkableriots Posted March 12

Recruiting AI Talent Is Ruthless Right Now
WWW.BUSINESSINSIDER.COM
The CEO of an AI startup said he wasn't able to hire a Meta researcher because it didn't have enough GPUs.

Quote: "I tried to hire a very senior researcher from Meta, and you know what they said? 'Come back to me when you have 10,000 H100 GPUs'," Srinivas said on a recent episode of the advice podcast "Invest Like The Best."
legend Posted March 12

1 hour ago, Remarkableriots said: Recruiting AI Talent Is Ruthless Right Now WWW.BUSINESSINSIDER.COM The CEO of an AI startup said he wasn't able to hire a Meta researcher because it didn't have enough GPUs.

Easy employer response: come back to me when you don't want to work for a company making the world worse. (Under the assumption the startup is not equally awful, just less funded.)
SuperSpreader Posted March 13

15 hours ago, Remarkableriots said: Recruiting AI Talent Is Ruthless Right Now WWW.BUSINESSINSIDER.COM The CEO of an AI startup said he wasn't able to hire a Meta researcher because it didn't have enough GPUs.

lmao 10,000 GPUs gets you this:
legend Posted March 13

8 minutes ago, SuperSpreader said: lmao 10,000 GPUs gets you this:

To be fair, that's not really where the AI researchers go. The AI researchers do work on building some big models, and they do release them. They released one of the better open LLMs, and have a bunch of vision models like segment anything (or something like that). Meta is much better in the open research department than OpenAI. It's just that all the actual stuff where Meta makes money is toxic, and I wouldn't want to support that company.
chakoo Posted March 13

Yeah, to be fair, Meta puts a lot into their AI/ML research, even if much of it doesn't get converted into a product. The thing with Meta is most researchers go there for 2-4 years and then leave for something else. So really, if you want to get Meta talent, just play the long game and always leave an offer on the table; they'll quit and join you eventually. It was always interesting watching the TVs in the food halls showing off work anniversaries: once it got to the 5-year mark, the numbers dwindled to just a tiny handful.
chakoo Posted March 13

39 minutes ago, legend said: [...] and have a bunch of vision models like segment anything (or something like that).

Yeah, it's called Segment Anything. It's really impressive, and the work done with forks of it, like HQ Segment Anything, has been used in some really awesome projects. I'm kind of sad I didn't try to get some VC funding last year to build an idea I had for the AR space, because people are now actually releasing stuff close to what I wanted to make back then. If I do it now I'll probably be playing catch-up for a long time.
legend Posted March 13

4 minutes ago, chakoo said: Yeah, it's called Segment Anything. [...] If I do it now I'll probably be playing catch-up for a long time.

Yeah, I thought it was a really great model with a fantastic interface (in terms of the kinds of inputs the model can work from) that allowed for a lot of great uses. It's wild to me that word of mouth for it didn't take off the way LLMs did, but I suppose that's because LLMs are a magic trick that fools people into thinking there's a ghost in the machine, whereas Segment Anything is an actual practical tool. (For the record, I think eventually we can build systems with a "ghost in the machine" -- at least as much as one exists for people -- but LLMs ain't it.)
chakoo Posted March 13

1 minute ago, legend said: Yeah, I thought it was a really great model [...] whereas segment anything is an actual practical tool.

I think the problem is the way it's shown to people and its sample use case. It's presented as just a way to extract an object from an image, which makes most people ask why that matters when their iPhone can do the same. Yet that could be a good thing for its use in tech, since people won't be flipping the **** out like they are with GenAI.
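(Aside: the "interface" being praised above is that Segment Anything is promptable: you give it a point, box, or rough mask, and it returns a mask for the object under that prompt. As a toy illustration of the point-prompt idea only, not the SAM model itself, which uses learned image and prompt encoders, a flood fill from a clicked seed point produces a mask of the connected region:)

```python
from collections import deque

def point_prompt_mask(image, seed):
    """Toy 'point prompt' segmentation: flood-fill the connected
    region of identical pixel values around the seed point."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    target = image[sy][sx]
    mask = [[False] * w for _ in range(h)]
    mask[sy][sx] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w \
                    and not mask[ny][nx] and image[ny][nx] == target:
                mask[ny][nx] = True
                queue.append((ny, nx))
    return mask

# A tiny "image" with an object (1s) on a background (0s):
img = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
mask = point_prompt_mask(img, (1, 1))  # "click" on the object
print(sum(cell for row in mask for cell in row))  # 4 pixels masked
```

The real model replaces the "same pixel value" rule with learned image features, which is why a single click can segment a whole textured object.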
SuperSpreader Posted March 13

49 minutes ago, legend said: To be fair
elbobo Posted March 30

OpenAI and Microsoft reportedly planning $100 billion datacenter project for an AI supercomputer | Tom's Hardware
WWW.TOMSHARDWARE.COM
It would be called "Stargate."

This would easily be an order of magnitude more expensive than any other datacenter on Earth.
Air_Delivery Posted March 30

As someone who barely pays attention to this, how is OpenAI "open" if it is controlled by MS?
legend Posted March 30

9 hours ago, Air_Delivery said: As someone who barely pays attention to this, how is OpenAI "open" if it is controlled by MS?

OpenAI was originally a non-profit organization. They lured in researchers with the promise of being "open." So initially, they were. They published the work they were doing and released a bunch of open source projects.

Eventually, they decided they should have a "capped" for-profit branch. This was totally "okay" though, because they were only doing it so they could fund their open efforts that were super important to humanity, wink wink, nudge nudge. Also, this for-profit branch would still be "controlled" by the non-profit board, so nothing nefarious could ever happen. I mean, yes, the cap was enormous and gets more enormous every year or so, but this will surely be fine. They need that ever-growing money because they're building a god, after all.

Eventually they stopped publishing their research and stopped releasing open source. But that's really for society's sake, because their stuff is just too "dangerous" to release, or to describe how they made it, or on what data they trained it -- all data acquired on the up and up, for sure. Only they can be the ethical shepherds of their protogod. Yes, yes, plenty of other research on the very same topic, with open source code and models, has been released and the world didn't crumble. But OpenAI's stuff is so much better; their big autocomplete is an artificial "powerful mind," so they still have to keep all of it closed, for our own safety of course.

The important thing is they let people use and pay for their product, and that makes it open. MS is one of the major investors in the for-profit branch, but that is totally 100% controlled by the non-profit board. And their non-profit side is absolutely controlling things for the betterment of man, so it's absolutely still "open."
That's why, when the non-profit board fired Sam Altman, Sam got his for-profit cronies to get him reinstated, and then the non-profit board members had to "resign." I mean, the issue here was that the non-profit made a mistake and couldn't be trusted. They needed more ethical people in place. That's why Sam had said just a few months earlier that, to avoid him acquiring too much power, it was important that the non-profit board could fire him.

Oh, and while released emails showed that their plan all along was to lure in researchers with open research and then switch to closed research, that's all still fine. They just didn't think researchers would be able to understand that they really are still open because they release products. And those products would help people. And there's nothing more open than a product. So really, they're totally open.
Keyser_Soze Posted March 30

46 minutes ago, legend said: OpenAI ... So really, they're totally open.
Ricofoley Posted April 2

Yum Brands wants to have their employees ask an AI how to make the food that gets made in exactly the same way every time at every location.

Taco Bell, Pizza Hut, KFC Experiment With AI for Meal Prep
WWW.PCMAG.COM
Your next fried chicken basket or cheesy pizza could be made with the help of a generative AI-powered 'SuperApp,' says parent company Yum Brands.
Spork3245 Posted April 2
legend Posted April 3

I really can't stress enough how much I hate that generative ML is the face of "AI" at this moment.
b_m_b_m_b_m Posted April 3

22 hours ago, Ricofoley said: Yum Brands wants to have their employees ask an AI how to make the food that gets made in exactly the same way every time at every location.

Quote: For example, fast food employees will be able to ask the AI bot questions about things like correct oven temperatures, according to the report.

As someone who has worked at KFC: lol, lmao. The fryers and ovens and warming cabinets all have two temp settings, on or off, and the only difference is in how long you cook each thing for, all on preprogrammed buttons, which are labeled. lmao
chakoo Posted April 10
legend Posted April 10

It's almost like autocomplete isn't a path toward "AGI."
Jason Posted April 10

1 hour ago, legend said: It's almost like autocomplete isn't a path toward "AGI."

Apparently they're now at the point of just feeding LLM outputs back into the LLMs as training data because of how desperate they are for additional training material.
legend Posted April 10

2 minutes ago, Jason said: Apparently they're now at the point of just feeding LLM outputs back into the LLMs because of how desperate they are for additional training material.

I honestly don't know why anyone would do this. It's standard folk knowledge that this is bad, and there have even been explicit studies on various generative models showing exactly that.
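(Aside: the failure mode being referenced is often called "model collapse." A minimal sketch of the intuition, an illustration only and not any lab's actual pipeline: treat the "model" as just the empirical distribution of its training data, and "generation" as sampling from it. Retraining each generation on the previous generation's outputs can never add diversity, only lose it:)

```python
import random

random.seed(0)

# "Real" training data: 200 distinct values.
data = list(range(200))

diversity = [len(set(data))]
for generation in range(20):
    # The next "model" is the empirical distribution of the previous
    # generation's outputs; generating data means resampling from it.
    data = [random.choice(data) for _ in range(len(data))]
    diversity.append(len(set(data)))

# Every sample comes from the current data, so the support can never
# grow: diversity is non-increasing, and in practice collapses fast.
print(diversity[0], "->", diversity[-1])
```

Real generative models add noise and smoothing on top of this, but the studies mentioned above document the same qualitative effect: the tails of the distribution disappear first, then outputs converge toward a narrow mode.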
CitizenVectron Posted April 10

14 minutes ago, legend said: I honestly don't know why anyone would do this. [...]

Yeah, but have you considered that saying your model has 10x the input of competing models could boost your short-term stock dividend by up to 5%?
legend Posted April 10

12 minutes ago, CitizenVectron said: Yeah, but have you considered that saying your model has 10x the input of competing models could boost your short-term stock dividend by up to 5%?

I have, and I have wept for humanity over it. There seems to be a real problem not just with shitty companies, but with investors being dumb fucking marks who are awful at their job. Why do these morons have so much money to throw around?
elbobo Posted April 10

If you spend even 30 minutes having a continuous conversation with one of the current LLMs, you know it turns to gibberish before long if you don't actively steer it back on track, and even then sometimes you just have to start over.
mclumber1 Posted April 10

1 hour ago, Jason said: Apparently they're now at the point of just feeding LLM outputs back into the LLMs as training data because of how desperate they are for additional training material.

That's how humans learn though
legend Posted April 10

14 minutes ago, mclumber1 said: That's how humans learn though

Narrator: It isn't.
Commissar SFLUFAN Posted May 17 (Author)

OpenAI dissolves team focused on long-term AI risks, less than one year after announcing it
WWW.CNBC.COM
OpenAI has dissolved its Superalignment team amid the high-profile departures of both team leaders, Ilya Sutskever and Jan Leike.

Quote: OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.

The person, who spoke on condition of anonymity, said some of the team members are being reassigned to multiple other teams within the company.

The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. Leike on Friday wrote that OpenAI's "safety culture and processes have taken a backseat to shiny products." OpenAI did not immediately provide a comment to CNBC.

OpenAI's Superalignment team, announced last year, has focused on "scientific and technical breakthroughs to steer and control AI systems much smarter than us." At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years. News of the team's dissolution was first reported by Wired.

Sutskever and Leike on Tuesday announced their departures on social media platform X, hours apart, but on Friday, Leike shared more details about why he left the startup. "I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote on X. "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

Leike wrote that he believes much more of the company's bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.
GeneticBlueprint Posted May 17
elbobo Posted May 17

It was clear ever since the failed ouster that OpenAI was going to be ALL gas and no brakes.