Here’s why most AI benchmarks tell us so little
March 8, 2024


On Tuesday, startup Anthropic released a family of generative AI models that it claims achieve best-in-class performance. Just a few days later, rival Inflection AI unveiled a model that it asserts comes close to matching in quality some of the most capable models out there, including OpenAI’s GPT-4.

Anthropic and Inflection are by no means the first AI firms to contend that their models meet or beat the competition by some objective measure. Google argued the same of its Gemini models at their release, and OpenAI said it of GPT-4 and its predecessors, GPT-3, GPT-2 and GPT-1. The list goes on.

But what metrics are they talking about? When a vendor says a model achieves state-of-the-art performance or quality, what does that mean, exactly? Perhaps more to the point: Will a model that technically ‘performs’ better than some other model actually feel improved in a tangible way?

On that last question, not likely.

The reason — or rather, the problem — lies with the benchmarks AI companies use to quantify a model’s strengths — and weaknesses.

The most commonly used benchmarks today for AI models — specifically chatbot-powering models like OpenAI’s ChatGPT and Anthropic’s Claude — do a poor job of capturing how the average person interacts with the models being tested. For example, one benchmark cited by Anthropic in its recent announcement, GPQA (‘A Graduate-Level Google-Proof Q&A Benchmark’), contains hundreds of PhD-level biology, physics and chemistry questions — yet most people use chatbots for tasks like responding to emails, writing cover letters and talking about their feelings.

Jesse Dodge, a scientist at the Allen Institute for AI, the AI research nonprofit, says that the industry has reached an ‘evaluation crisis.’

It’s not that the most-used benchmarks are totally useless. Someone’s undoubtedly asking ChatGPT PhD-level math questions. However, as generative AI models are increasingly positioned as mass market, ‘do-it-all’ systems, old benchmarks are becoming less applicable.

David Widder, a postdoctoral researcher at Cornell studying AI and ethics, notes that many of the skills common benchmarks test — from solving grade school-level math problems to identifying whether a sentence contains an anachronism — will never be relevant to the majority of users.

Misalignment with real-world use cases aside, there are questions as to whether some benchmarks even properly measure what they purport to measure.

An analysis of HellaSwag, a test designed to evaluate commonsense reasoning in models, found that more than a third of the test questions contained typos and ‘nonsensical’ writing. Elsewhere, MMLU (short for ‘Massive Multitask Language Understanding’), a benchmark that’s been pointed to by vendors including Google, OpenAI and Anthropic as evidence their models can reason through logic problems, asks questions that can be solved through rote memorization.

‘[Benchmarks like MMLU are] more about memorizing and associating two keywords together,’ Widder said. ‘I can find [a relevant] article fairly quickly and answer the question, but that doesn’t mean I understand the causal mechanism, or could use an understanding of this causal mechanism to actually reason through and solve new and complex problems in unforeseen contexts. A model can’t either.’
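
For readers unfamiliar with how such benchmarks produce a headline number, the sketch below shows the basic shape of a multiple-choice evaluation in the style of MMLU: each question has lettered options and a single answer key, the model returns a letter, and the score is plain accuracy. The ask_model function and the sample question here are hypothetical placeholders, not any vendor's actual API or real benchmark data.

    # Minimal sketch of multiple-choice benchmark scoring (MMLU-style).
    # `ask_model` is a hypothetical stand-in for a real model API call,
    # and QUESTIONS is a toy item, not actual benchmark data.
    from typing import Callable, Dict, List

    QUESTIONS: List[Dict] = [
        {
            "question": "Which gas do plants primarily absorb during photosynthesis?",
            "choices": {"A": "Oxygen", "B": "Carbon dioxide", "C": "Nitrogen", "D": "Hydrogen"},
            "answer": "B",
        },
        # ...a real benchmark has hundreds or thousands of items
    ]

    def score_benchmark(ask_model: Callable[[str], str]) -> float:
        """Fraction of questions where the model's letter matches the answer key."""
        correct = 0
        for item in QUESTIONS:
            options = "\n".join(f"{letter}. {text}" for letter, text in item["choices"].items())
            prompt = f"{item['question']}\n{options}\nAnswer with a single letter."
            prediction = ask_model(prompt).strip().upper()[:1]  # keep only the leading letter
            if prediction == item["answer"]:
                correct += 1
        return correct / len(QUESTIONS)

    if __name__ == "__main__":
        # A dummy model that always answers 'B', just to make the sketch runnable.
        print(f"accuracy: {score_benchmark(lambda prompt: 'B'):.2%}")

Nothing in a loop like this distinguishes an answer produced by reasoning from one produced by recalling a memorized fact, which is exactly Widder's objection.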

So benchmarks are broken. But can they be fixed?

Dodge thinks so — with more human involvement.

‘The right path forward, here, is a combination of evaluation benchmarks with human evaluation,’ she said, ‘prompting a model with a real user query and then hiring a person to rate how good the response is.’
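
As a rough illustration of what that pairing could look like, here is a minimal sketch in which real user queries are sent to a model and hired raters score each response on a 1-to-5 scale; the generate and collect_rating callables are hypothetical placeholders rather than any particular vendor's API or rating platform.

    # Minimal sketch of the human-evaluation side Dodge describes: send real
    # user queries to a model, have a hired rater score each response, and
    # report the average rating alongside any benchmark numbers.
    # `generate` and `collect_rating` are hypothetical placeholders.
    from statistics import mean
    from typing import Callable, List

    def human_eval(
        user_queries: List[str],
        generate: Callable[[str], str],              # hypothetical model call
        collect_rating: Callable[[str, str], int],   # hypothetical rater interface, returns 1-5
    ) -> float:
        """Average human rating of the model's responses to real user queries."""
        ratings = [collect_rating(query, generate(query)) for query in user_queries]
        return mean(ratings)

    if __name__ == "__main__":
        # Dummy stand-ins so the sketch runs end to end.
        queries = [
            "Draft a polite reply declining a meeting invitation.",
            "Help me outline a cover letter for a marketing role.",
        ]
        score = human_eval(
            queries,
            generate=lambda q: f"(model response to: {q})",
            collect_rating=lambda query, response: 4,  # a real rater would supply this
        )
        print(f"mean human rating: {score:.1f} / 5")

The queries in the dummy example mirror the everyday tasks the article mentions, such as responding to emails and writing cover letters, rather than PhD-level exam questions.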

As for Widder, he’s less optimistic that benchmarks today — even with fixes for the more obvious errors, like typos — can be improved to the point where they’d be informative for the vast majority of generative AI model users. Instead, he thinks that tests of models should focus on the downstream impacts of these models and whether the impacts, good or bad, are perceived as desirable to those impacted.

‘I’d ask which specific contextual goals we want AI models to be able to be used for and evaluate whether they’d be — or are — successful in such contexts,’ he said. ‘And hopefully, too, that process involves evaluating whether we should be using AI in such contexts.’

Reference: https://techcrunch.com/2024/03/07/heres-why-most-ai-benchmarks-tell-us-so-little/


