Sam Altman Is Showing Us Who He Really Is
May 23, 2024

Scarlett Johansson A.I.: What the fight over her voice reveals about OpenAI. We should believe him.

OpenAI, the research firm whose 2022 launch of ChatGPT single-handedly pushed ‘artificial intelligence’ into the mainstream, isn’t often inclined to back down from the knotty disputes—over copyright, safety concerns, appropriate regulations—that its innovative tech has raised. Yet this month, it antagonized someone much more powerful, and is already retreating just a touch.

On Monday evening, Scarlett Johansson issued a statement to NPR’s Bobby Allyn about OpenAI’s GPT-4o announcement, which the company showcased in a live demonstration just last week. Specifically, the demonstration of the multimodal computer-interaction model centered on a voice assistant named Sky, whose timbre really, really resembled ScarJo’s.

‘Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system,’ Johansson wrote. ‘After much consideration and for personal reasons, I declined the offer. Nine months later, my friends, family and the general public all noted how much the newest system named ‘Sky’ sounded like me. When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference.’

Johansson likewise mentioned that she was ‘forced’ to hire lawyers who wrote letters to Altman and his company, after which ‘OpenAI reluctantly agreed’ to switch out the voice. Indeed, earlier that morning, OpenAI tweeted that it was ‘working to pause the use of Sky’ after hearing ‘questions about how we chose the voices in ChatGPT, especially Sky.’

‘We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice—Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,’ the company insisted in an accompanying blog post.

Altman also spoke to Johansson’s explicit objection after NPR’s reporting, telling the public broadcaster: ‘We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.’

It was a strange apology and dubious explanation, not least because Altman himself invited comparisons of Sky’s voice to Johansson’s during the GPT-4o rollout. As was noted amply last week, he tweeted the word ‘her,’ in obvious reference to what he’s previously called his favorite movie: Her, the 2013 Oscar-winning drama in which Johansson voices a Siri-like voice assistant with whom the film’s protagonist falls in love. (Altman, in a subsequent personal blog post: ‘It feels like AI from the movies.’)

What’s more, as Johansson mentioned in her statement: ‘Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there.’ And, as the Washington Post’s Nitasha Tiku tweeted, she noticed during a live demo in September—the very same month that OpenAI reportedly made its hire-for-training offer to ScarJo—that the Sky voice even then sounded like Johansson, and that executives denied this was ‘intentional.’

Naturally, this hullabaloo has invited quite a bit of attention: The world’s most influential A.I. company is squaring off against a brand-name and litigation-happy celebrity, over a rather bizarre interpretation of one of her most acclaimed movies, in a reference that was employed in large part to launch one of OpenAI’s most esteemed upgrades to date—and one that has already fueled a surge in its mobile-app revenue (as well as controversy over the spam used to train its Chinese-language capabilities).

The timing is a bit eyebrow-raising as well, considering that Johansson and her fellow actors reached their post-strike union deal only months ago—one fueled in large part by concerns about A.I.’s impact on the movie industry. It makes sense that SAG-AFTRA publicly praised Johansson’s stance here against OpenAI.

It’s also unusual to see Altman and his executives assume a preemptively defensive crouch on this, especially since they’ve offered little in the way of apology or transparency when it comes to the heaping amounts of data used to train and power apps like ChatGPT and DALL-E 3—other than admitting it would be ‘impossible’ to train these models without sucking up vast amounts of copyrighted work, whether books or artworks or articles, sans permission or disclosure. The company is already fending off lawsuits from authors and news publishers over this very practice—and ironically, as 404 Media reported this month, it also sent a ‘copyright complaint’ to the moderators of the OpenAI subreddit over their use of … OpenAI’s logo.

Yet OpenAI’s reportedly aggressive actions in courting ScarJo’s voice and then pressing ahead without her consent have invited a new level of public opprobrium against the otherwise popular app-maker. The resulting damage control may stem from the fact that this incident, plus other developments from inside OpenAI’s offices over the past few months, is exposing something else about the notoriously closed-lid firm: what kind of person Sam Altman really is.

The last time OpenAI drama made national news was in November, when a majority of the company’s board delivered a no-confidence vote in Altman and abruptly fired him. They claimed that ‘he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.’ The vague statement and rapid action led to public speculation that coalesced into an all-out war between two staunch sides: one consisting of Altman and his unfailingly loyal lieutenants, the other of underlings and overseers worried about how quickly Altman wanted to develop and deploy new products, with little consideration of their potential for misuse, as well as about his ‘psychologically abusive’ treatment of employees (as the Post’s Tiku had reported).

The aftereffects of that dust-up lingered for months after Altman was restored and the OpenAI board was (mostly) purged of his opponents. Ilya Sutskever, a main character in the November saga who’d reportedly questioned Altman’s honesty but stayed on at OpenAI, apparently ‘never returned to work’ in the months following the fight, according to the New York Times. Last week, the company announced his departure.

Just hours after that news, the head of Sutskever’s team announced his resignation, tweeting that he’d ‘been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.’ Reporters at Vox revealed that ‘at least five more of the company’s most safety-conscious employees have either quit or been pushed out’ since November, with one of those quitters telling the website that he’d ‘gradually lost trust in OpenAI leadership.’

There are likely still others who can’t talk, as Vox found out, because of an ‘extremely restrictive off-boarding agreement’ that OpenAI employees are forced to sign if they want to retain any vested equity with the company—and that ‘forbids them, for the rest of their lives, from criticizing their former employer’ or ‘even acknowledging that the NDA exists.’

These revelations made waves in the tech world even prior to the ScarJo incident; Altman claimed on X that he was unaware of the equity provision, that it was being rewritten, and that ‘if any former employee who signed one of those old agreements is worried about it, they can contact me and we’ll fix that too.’ But in light of Altman’s reputation within Silicon Valley, this comes across as a bit fishy.

Past reporting from the MIT Technology Review has indicated that OpenAI ‘is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.’ This appeared to play out in April, when The Information reported that two OpenAI staffers, one of them an ally of Sutskever’s, were fired for ‘allegedly leaking information.’ As Bloomberg has noted, Altman is quietly regarded within his industry as ‘ambitious, cunning, even Machiavellian.’

Other details from OpenAI’s ongoing copyright battles appear to bolster this. As part of the Authors Guild’s suit against the company, documents were released showing that OpenAI had deleted two massive datasets, consisting of ‘more than 100,000 published books,’ that had been used to train an early iteration of its GPT model. Furthermore, the employees tasked with scrubbing that data were no longer with OpenAI, which had refused to disclose anything about its training-data history to the Authors Guild prior to the lawsuit.

On Monday, the lead counsel for the New York Times’ own suit against OpenAI sent a letter to the presiding judge, accusing the company of delaying a similar discovery process by ‘taking weeks to respond, dragging out negotiations over a protective order, and refusing to quickly produce basic information.’ Oh yeah, and the Securities and Exchange Commission is investigating Altman’s comms in a probe over whether he and other senior OpenAI leaders misled the company’s investors. Yet Altman clearly hopes all this will wash away: OpenAI’s first post in the aftermath of the ScarJo squabble was a ‘safety update’ for the AI Seoul Summit, which mentions that ‘we prioritize protecting our customers, intellectual property, and data.’

For all Altman talks about the importance of ‘transparency,’ for all the generosity he’s professed toward OpenAI’s dissenters, and for all the governmental glad-handing he’s done to paint himself as a responsible steward of A.I., it seems pretty clear he’s unafraid of playing a completely different game under wraps. If the underlying record of how he appears to treat his own employees, run his company, and keep his secrets clashes so much with his public statements, why should anyone—least of all Scarlett Johansson herself—trust that he actually went about Sky’s voice-training process in good faith?

Reference: https://slate.com/technology/2024/05/scarlett-johansson-ai-voice-sam-altman-openai.html
