As OpenAI’s multimodal API launches broadly, research shows it’s still flawed
November 7, 2023

Today, during its first-ever dev conference, OpenAI released new details of a version of GPT-4, the company’s flagship text-generating AI model, that can understand the context of images as well as text. This version, which OpenAI calls ‘GPT-4 with vision,’ can caption and even interpret relatively complex images — for example, identifying a Lightning Cable adapter from a picture of a plugged-in iPhone.

GPT-4 with vision had previously only been available to select users of Be My Eyes, an app designed to help vision-impaired people navigate the world around them; subscribers to the premium tiers of OpenAI’s AI-powered chatbot, ChatGPT; and ‘red teamers’ charged with probing GPT-4 with vision for signs of unintended behavior. That’s because OpenAI held back GPT-4 with vision after unveiling it in early March, reportedly on fears about how it might be abused — and violate privacy.

Now, OpenAI’s seemingly confident enough in its mitigations to let the wider dev community build GPT-4 with vision into their apps, products and services. GPT-4 with vision will become available in the coming weeks, the company said this morning, via the newly launched GPT-4 Turbo API.
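
For developers, that means image inputs will flow through the same chat completions interface used for text. As a rough illustration, here is a minimal sketch of what such a request might look like in Python; the model name gpt-4-vision-preview, the message structure, and the image URL are assumptions based on OpenAI’s launch-day API documentation, not details confirmed by this article.

```python
# Minimal sketch of a GPT-4 with vision request via OpenAI's chat
# completions API (openai Python package, v1.x). The model name
# "gpt-4-vision-preview" reflects what OpenAI exposed at launch and
# may change; the image URL is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is plugged into this iPhone?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/iphone-adapter.jpg"},
                },
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)
```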

The question, though, is whether GPT-4 with vision is actually safer than it was before.

In October, a few weeks before OpenAI began rolling out GPT-4 with vision to ChatGPT subscribers, the company published a whitepaper detailing the model’s limitations and more… questionable tendencies (e.g. discriminating against certain body types). But the paper was co-authored by OpenAI scientists — not outside testers who might bring a more impartial perspective to the table.

Luckily, OpenAI provided several researchers — the aforementioned red teamers — early access to GPT-4 with vision for evaluation purposes. At least two of them, Chris Callison-Burch, an associate professor of computer science at the University of Pennsylvania, and his Ph.D. student Alyssa Hwang, published their early impressions this afternoon at OpenAI’s conference.

But Hwang, who conducted a more systematic review of GPT-4 with vision’s capabilities, found that the model remains flawed in several significant, and in some cases problematic, ways.

Hwang documents many such mistakes in a draft study published on the preprint server arXiv.org. Her work focuses primarily on GPT-4 with vision’s ability to describe figures in academic papers, a potentially quite useful application of the tech — but one where accuracy matters. A lot.

Unfortunately, accuracy isn’t GPT-4 with vision’s strong suit when it comes to scientific interpretation.

Hwang writes that GPT-4 with vision makes errors when reproducing mathematical formulas, oftentimes leaving out subscripts or printing them incorrectly. Counting objects in illustrations poses another problem for the model, as does describing colors — particularly the colors of objects next to each other, which GPT-4 with vision sometimes mixes up.
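
To make the subscript problem concrete, an automated check along these lines could flag it: compare the model’s LaTeX transcription of a formula against a human-written ground truth after stripping whitespace. This is a hypothetical harness, not Hwang’s methodology; transcribe_figure is a stand-in for a vision API call, and the formula is invented.

```python
# Hypothetical check for the subscript-dropping errors described above.
import re

def normalize(latex: str) -> str:
    """Strip all whitespace so only token-level differences remain."""
    return re.sub(r"\s+", "", latex)

def transcribe_figure(path: str) -> str:
    # Stand-in for a vision-API call like the one sketched earlier;
    # here it returns an output whose final subscript was dropped.
    return r"\ell = -\sum_i y_i \log \hat{y}"

ground_truth = r"\ell = -\sum_i y_i \log \hat{y}_i"
model_output = transcribe_figure("loss_formula.png")

if normalize(model_output) != normalize(ground_truth):
    print("Mismatch: the model likely dropped or altered a subscript.")
```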

Some of GPT-4 with vision’s more serious, broader shortcomings lie in the factual accuracy department.

GPT-4 with vision can’t reliably extract text from an image. To demonstrate, in the study, Hwang gave the model a spread with a list of recipes and asked it to copy down each recipe in writing. GPT-4 with vision made mistakes in parsing the recipe titles, writing things like ‘Eggs Red Velvet Cake’ instead of ‘Eggless Red Velvet Cake’ and ‘Sesame Pork Medallions’ instead of ‘Sesame Pork Milanese.’

[Image: An example of GPT-4 with vision analyzing, and extracting text from, a particular image.]
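
One simple way to quantify this kind of transcription drift is to diff the model’s output against a human transcription. The sketch below is hypothetical rather than taken from the study; extract_titles stands in for a vision API call, returning the mistaken titles quoted in the article.

```python
# Hypothetical OCR-fidelity check in the spirit of Hwang's recipe test.
import difflib

def extract_titles(image_path: str) -> list[str]:
    # Stand-in for a vision-API call; returns the erroneous titles
    # reported in the article.
    return ["Eggs Red Velvet Cake", "Sesame Pork Medallions"]

ground_truth = ["Eggless Red Velvet Cake", "Sesame Pork Milanese"]

for got, expected in zip(extract_titles("recipes.jpg"), ground_truth):
    ratio = difflib.SequenceMatcher(None, got, expected).ratio()
    print(f"{got!r} vs {expected!r}: similarity {ratio:.2f}")
```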

A related challenge for GPT-4 with vision is summarizing. When asked for the gist of, say, a scan of a document, GPT-4 with vision might poorly paraphrase sentences in that document — omitting information in the process. Or it might alter direct quotes in misleading ways, leaving out parts such that it affects the text’s meaning.
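
Altered quotes, at least, lend themselves to a mechanical check: every span a summary places in quotation marks should appear verbatim in the source document. The following is a hypothetical illustration with invented text, not a check from Hwang’s study.

```python
# Hypothetical quote-fidelity check for model-generated summaries.
import re

source = "The committee found no evidence of misconduct in the 2021 audit."
summary = 'Per the report, the committee found "no serious evidence of misconduct".'

for quote in re.findall(r'"([^"]+)"', summary):
    status = "verbatim" if quote in source else "altered or fabricated"
    print(f"{quote!r}: {status}")
```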

That’s not to suggest GPT-4 with vision is a total failure of a multimodal model. Hwang praises its analytical capabilities, noting that the model shines when asked to describe even fairly complicated scenes. It’s clear why OpenAI and Be My Eyes saw GPT-4 with vision as possibly useful for accessibility — it’s a natural fit.

But Hwang’s findings confirm what the OpenAI paper hinted at: that GPT-4 with vision remains a work in progress. Far from a universal problem solver, GPT-4 with vision makes basic mistakes that a human wouldn’t — and potentially introduces biases along the way.

It’s unclear to what extent OpenAI’s safeguards, which are designed to prevent GPT-4 with vision from spewing toxicity or misinformation, might be impacting its accuracy — or whether the model simply hasn’t been trained on enough visual data to handle certain edge cases (e.g. writing mathematical formulas). Hwang didn’t speculate, leaving the question to follow-up research.

In its paper, OpenAI claimed it’s building ‘mitigations’ and ‘processes’ to expand GPT-4 with vision’s capabilities in a ‘safe’ way, like allowing GPT-4 with vision to describe faces and people without identifying those people by name. We’ll have to wait and see to what degree it’s successful — or if OpenAI’s approaching the limits of what’s possible with today’s multimodal model training methods. 

Reference: https://techcrunch.com/2023/11/06/openai-gpt-4-with-vision-release-research-flaws/
