The AI is garbage thread



24 minutes ago, Keyser_Soze said:

Invisible watermark was supposed to be a thing that identified AI images, maybe it's not working?

My understanding is that any kind of invisible watermark is easily defeated. They might still get implemented and catch the lowest-hanging fruit, but it's trivial for bad actors to strip them from AI images or add them to legitimate ones. There's also the problem of open-source models, where you can't be sure watermarks would be added in the first place.

 

My opinion is that any attempt at identifying AI-generated images will end up in a similar place to Photoshop detection tools: they'll be able to spot the most obvious cases, but there will always be some level of uncertainty.
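To illustrate the first point, here's a toy sketch (pure Python, a naive least-significant-bit scheme invented for this example, not any real watermarking system) of why fragile invisible watermarks don't survive even trivial image processing:

```python
import random

def embed_lsb(pixels, bits):
    """Hide watermark bits in the least significant bit of each pixel value."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    return [p & 1 for p in pixels]

random.seed(0)
pixels = [random.randrange(256) for _ in range(64)]  # fake 8-bit grayscale image
mark = [random.randrange(2) for _ in range(64)]      # 64-bit watermark

marked = embed_lsb(pixels, mark)
print(extract_lsb(marked) == mark)  # True: the mark survives a lossless copy

# Any lossy step -- re-encoding, resizing, +/-1 brightness jitter -- scrambles it:
jittered = [min(255, max(0, p + random.choice((-1, 0, 1)))) for p in marked]
errors = sum(a != b for a, b in zip(extract_lsb(jittered), mark))
print(f"{errors}/64 watermark bits corrupted")
```

Real schemes are more robust than raw LSB embedding, but the dynamic is the same: anything a decoder can detect, a motivated attacker can locate and perturb.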


5 hours ago, chakoo said:

Why would these items require gen ai? I’m guessing this is bad messaging or reporting?

 

I think they're just using generative AI as a blanket term for AI. Like in this example:

 

Quote

Call center workers there took more than 660,000 calls last year. The state envisions the AI technology listening along to those calls and pulling up specific tax code information associated with the problem the caller is describing. The worker could decide whether to use the information. Currently, call center workers have to simultaneously listen to the call and manually look up the code, Maduros said.

 

I mean, that's not really generative AI; that's just having an AI assistant.


Speaking of OpenAI...

 

WWW.CNBC.COM

OpenAI has dissolved its Superalignment team amid the high-profile departures of both team leaders, Ilya Sutskever and Jan Leike.

 

Quote

 

OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.

 

The person, who spoke on condition of anonymity, said some of the team members are being reassigned to multiple other teams within the company.

 

The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. Leike on Friday wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

 

OpenAI did not immediately provide a comment to CNBC.

 

OpenAI’s Superalignment team, announced last year, has focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.

 

News of the team’s dissolution was first reported by Wired.

 

Sutskever and Leike on Tuesday announced their departures on social media platform X, hours apart, but on Friday, Leike shared more details about why he left the startup.

 

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

 

Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

 

 


Speaking of using messages (posts) for training...

 

WWW.THEVERGE.COM

Reddit’s signed AI licensing deals with Google and OpenAI.

 

Quote

 

OpenAI has signed a deal for access to real-time content from Reddit’s data API, which means it can surface discussions from the site within ChatGPT and other new products. It’s an agreement similar to the one Reddit signed with Google earlier this year that was reportedly worth $60 million.

 

The deal will also “enable Reddit to bring new AI-powered features to Redditors and mods” and use OpenAI’s large language models to build applications. OpenAI has also signed up to become an advertising partner on Reddit. 

 

Redditors have been vocal about how Reddit’s executives manage the platform before, and it remains to be seen how they’ll react to this announcement. More than 7,000 subreddits went dark in June 2023 after users protested Reddit’s changes to its API pricing. Recently, following news of a partnership between OpenAI and the programming messaging board Stack Overflow, people were suspended after trying to delete their posts.

 

 


OpenAI also has a non-disparagement clause: once you leave, you can't ever say anything bad about the company or you lose your equity. I'm not even sure that's legal.

 

But it's also completely unsurprising. Since practically its dawn, OpenAI has very clearly not actually been a non-profit org for the betterment of people. It's similarly been clear that they weaponized the stupid as bricks "rogue AI might kill us all!" narrative simply because it was useful to overhype their product and help them work toward regulatory capture.

 

It's amazing that it's taken researchers who were there this long to figure it all out.

 

Praising the few people who have left and given up their equity to speak freely is also like praising someone for doing the bare minimum. These are not people risking their livelihood. They made a ton of money and will be fine in life. If you sign the deal for the money, all it does is make it blatantly obvious that you don't *actually* care about bettering society and that it's all been talk and savior complex.


Something else that's rich: some people in Effective Altruism were considering donating money to the wealthy people who left and didn't sign the deal for their equity, to "reinforce" the behavior. Fortunately, at least one of them turned it down. But how the hell is it an "effective" use of their "altruism" to spend their funds on their own in-cult people, who might otherwise have to postpone purchasing their second house?

 

This crowd is straight out of an episode of Silicon Valley and they make my field look fucking stupid. I'm sick of it. Please leave.


6 hours ago, legend said:

Here's a more proper publication on it now:

VARIETY.COM

Scarlett Johansson declined OpenAI's request for her to provide her voice for ChatGPT -- and was "shocked" when it used a voice similar to hers.

 

 

lmao that these openai dumbasses think they can get away with fucking with the woman who won against Disney for trying to fuck her over 


1 minute ago, Jason said:

Uh... 

 

 

Screenshot_20240520_234043_Twitter.jpg

 

Yeah I think I mentioned this somewhere earlier in the thread. From various things I've heard about him, including pretty damning things like this, it makes me wonder if he's a sociopath.


WWW.CNN.COM

Could Scarlett Johansson sue OpenAI for creating a voice assistant that sounds like the actor’s performance in the 2013 film “Her,” about a man who falls in love with an artificial intelligence?

 

I was not aware of the Bette Midler precedent:

Quote

 

In 1988, the singer Bette Midler won a lawsuit against Ford Motor Company over an advertisement featuring what sounded like her voice. In fact, the song in the ad had been recorded by one of Midler’s backup singers after Midler turned down the opportunity to record the ad. The similarities between the reproduction and the original were so striking that some people told Midler they believed she had performed in the commercial.

The US Court of Appeals for the 9th Circuit ruled in Midler’s favor.

“Why did the defendants ask Midler to sing if her voice was not of value to them?” the court wrote in its decision. “Why did they studiously acquire the services of a sound-alike and instruct her to imitate Midler if Midler’s voice was not of value to them? What they sought was an attribute of Midler’s identity. Its value was what the market would have paid for Midler to have sung the commercial in person.”

 

Given that another case involving Tom Waits reaffirmed that precedent in '92, it does seem like Johansson would have a pretty good case.


1 minute ago, MarSolo said:

Okay, what does this have to do with AI?

 

Sam Altman's sister. It was posted in the discussion of the ScarJo claims as a "he has a longstanding history of not taking no from women" comment.


1 hour ago, TwinIon said:
WWW.CNN.COM

Could Scarlett Johansson sue OpenAI for creating a voice assistant that sounds like the actor’s performance in the 2013 film “Her,” about a man who falls in love with an artificial intelligence?

 

I was not aware of the Bette Midler precedent:

Given that another case involving Tom Waits reaffirmed that precedent in 92, it does seem like Johansson would have a pretty good case. 

And that she could take on the Mouse and win is a terrible sign for OpenAI.


It's a wonderfully ironic(?) thing that this challenge exists because "Open"AI, the non-profit-owned org to "benefit humanity," is not open about the data and methods they use to train their for-profit products.


Air Canada recently had to take down its chatbot after it gave a bogus answer to someone (which resulted in them losing a ticket or something), and a court ruled that the chatbot's answers were legally binding statements by the company. Unless these "AI" results are 100% accurate, you'd better believe someone is going to sue over the bad advice they give. Maybe that won't hold up in the US, where the courts are less pro-consumer, but it has a better chance in Canada or (especially) the EU.


Sadly, Google isn't really led by people building technology anymore; it's led by idiot shareholders who think you have to shove AI into everything, even applications it shouldn't be used for, because that's all the rage right now and they don't want to look like they're falling behind. DeepMind leadership specifically is better, but even there they've been pressured to do something with LLMs if they want any meaningful compute resources, because that's what's trendy.


Incidentally, Meta's AI research group (FAIR) tends to be much better about all this. They open source their models, and although I disagree with their lead, Yann LeCun, on a lot of things, he's largely got the right idea: LLMs are cool, but they're a distraction, not a path to solving the AI problem.

 

On the other hand, it's Meta, and they suck as an org on the whole :p 


28 minutes ago, osxmatt said:
WWW.WASHINGTONPOST.COM

The owner of the Wall Street Journal, the New York Post and the Daily Telegraph joins the Financial Times and Politico in striking deals with the AI company.

 

 

How will this be different from what News Corp outlets already publish?

