

CTGT aims to make AI models safer
October 30, 2024


Growing up as an immigrant, Cyril Gorlla taught himself how to code and practiced like a man possessed.

In high school, Gorlla learned about AI, and became so obsessed with the idea of training his own AI models that he took apart his laptop to upgrade the internal cooling. This tinkering led to an internship at Intel during Gorlla’s second year in college, where he researched AI model optimization and interpretability.

Gorlla’s college years coincided with the AI boom — one that’s seen companies like OpenAI raise billions of dollars for their AI tech. Gorlla believed that AI had the potential to transform entire industries. But he also thought that safety work was taking a backseat to shiny new products.

“I felt there needed to be a foundational shift in how we understand and train AI,” he said. “The lack of certainty and trust in models’ output is a significant barrier to adoption in industries like healthcare and finance, where AI can make the biggest difference.”

That conviction led Gorlla to leave school and co-found CTGT with Trevor Tuttle.

“My parents believe I’m in school,” he said. “Reading this might come as a shock to them.”

CTGT works with companies to identify biased outputs and hallucinations in their models, and attempts to address the root causes.

It’s impossible to completely eliminate errors from a model. But Gorlla claims that CTGT’s auditing approach can empower firms to mitigate them.

“We expose a model’s internal understanding of concepts,” he explained. “While a model telling a user to put glue into a recipe might be humorous, a response that recommends competitors when a customer asks for a product comparison is not so trivial. A patient being given information from a clinical study that is outdated, or a credit decision made on hallucinated info, is unacceptable.”
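The article doesn’t describe CTGT’s technique in detail. One standard way to expose a model’s internal understanding of a concept, though, is a linear concept probe: fit a small classifier on a layer’s hidden activations and check whether the concept is linearly readable from them. The sketch below is a minimal illustration of that general idea, using synthetic activations in place of a real model’s; it is not CTGT’s implementation.

```python
# Minimal concept-probe sketch (illustrative, not CTGT's method).
# Synthetic "hidden activations" stand in for a real model's internals.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n, d = 2000, 512                      # examples x hidden dimension
concept = rng.normal(size=d)          # hypothetical "outdated study" direction
concept /= np.linalg.norm(concept)

labels = rng.integers(0, 2, size=n)   # 1 = concept present in the input
acts = rng.normal(size=(n, d)) + np.outer(labels * 2.0, concept)

X_tr, X_te, y_tr, y_te = train_test_split(
    acts, labels, test_size=0.25, random_state=0
)

probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.3f}")

# The probe's weight vector approximately recovers the planted concept
# direction, so it can be inspected directly or used to flag outputs where
# the concept unexpectedly activates.
w = probe.coef_[0]
print(f"alignment with planted direction: {w @ concept / np.linalg.norm(w):.3f}")
```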

A recent poll from Cnvrg.io found that reliability was a top concern among enterprises adopting AI apps. In a separate study from Riskonnect, a risk management software provider, more than half of execs said they were worried about employees making decisions based on inaccurate information from AI tools.

The idea of a dedicated platform to evaluate an AI model’s decision-making isn’t new. TruEra and Patronus AI are among the startups developing tools to interpret model behavior, as are Google and Microsoft.

But Gorlla claims CTGT’s techniques are more performant, in part because they don’t rely on training “judge” AI to monitor in-production models.

“Our mathematically guaranteed interpretability differs from current state-of-the-art methods, which are inefficient and train hundreds of other models to gain insight on a model,” he said. “As companies grow increasingly aware of compute costs, and enterprise AI transitions from demos to providing real value, our value is significant in providing companies the ability to rigorously test the safety of advanced AI without training additional models or using other models as a judge.”
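To make the contrast concrete: an LLM-as-judge pipeline runs every response through a second model, while a probe-style monitor scores the serving model’s own activations with a single dot product. The sketch below assumes a probe like the one above has already been fit; the function and threshold names here are hypothetical, not part of any real API.

```python
# Judge-free monitoring sketch (hypothetical names; assumes a pre-fit probe).
# Each response costs one dot product instead of a full forward pass through
# a separate judge model.
import numpy as np

THRESHOLD = 0.9  # flag responses where the probe is at least this confident

def probe_fires(activation: np.ndarray, weights: np.ndarray, bias: float) -> bool:
    """True if the concept probe (e.g. a hallucination probe) fires."""
    prob = 1.0 / (1.0 + np.exp(-(activation @ weights + bias)))
    return prob > THRESHOLD

# Toy usage: random weights and activation just to show the call shape.
rng = np.random.default_rng(1)
flagged = probe_fires(rng.normal(size=512), rng.normal(size=512), 0.0)
print(f"flag this response for review: {flagged}")
```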

To assuage potential customers’ fears of data leaks, CTGT offers an on-premises option in addition to a managed plan. It charges the same annual fee for both.

“We do not have access to customers’ data, giving them full control over how and where it is used,” Gorlla said.

CTGT, a graduate of the Character Labs accelerator, has the backing of former GV partners Jake Knapp and John Zeratsky (who co-founded Character VC), Mark Cuban, and Zapier co-founder Mike Knoop.

“AI that can’t explain its reasoning is not intelligent enough for many areas where complex rules and requirements apply,” Cuban said in a statement. “I invested in CTGT because it is solving this problem. More importantly, we are seeing results in our own use of AI.”

And — despite being early-stage — CTGT has several customers, including three unnamed Fortune 10 brands. Gorlla says that CTGT worked with one of these companies to minimize bias in their facial recognition algorithm.

“We identified bias in the model focusing too much on hair and clothing to make its predictions,” he said. “Our platform provided the practitioners immediate insights without the guesswork and wasted time of traditional interpretability methods.”
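The article doesn’t say which attribution method surfaced that bias. A simple, widely used way to get the same kind of insight is occlusion attribution: hide one patch of the image at a time and measure how much the prediction drops. The sketch below uses a toy scoring function that deliberately keys on the top rows of the image (where hair would be), so the resulting heat map concentrates there and exposes the spurious cue; it illustrates the generic technique, not CTGT’s platform.

```python
# Occlusion-attribution sketch (a generic technique, not CTGT's platform).
import numpy as np

def occlusion_map(image: np.ndarray, predict, patch: int = 8) -> np.ndarray:
    """Per-patch importance: how much the score drops when a patch is hidden."""
    base = predict(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # black out one patch
            heat[i // patch, j // patch] = base - predict(occluded)
    return heat

# Toy "classifier" that wrongly keys on the top 8 rows, where hair would be.
predict = lambda img: img[:8].mean()
image = np.random.default_rng(2).random((64, 64))
print(occlusion_map(image, predict).round(3))
# High values across the top row of the map expose the hair/clothing shortcut.
```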

CTGT’s focus in the coming months will be on building out its engineering team (it’s only Gorlla and Tuttle at the moment) and refining its platform.

Should CTGT manage to gain a foothold in the burgeoning market for AI interpretability, it could be lucrative indeed. Analytics firm MarketsandMarkets projects that “explainable AI” as a sector could be worth $16.2 billion by 2028.

“Model size is far outpacing Moore’s Law and the advances in AI training chips,” Gorlla said. “This means that we need to focus on foundational understanding of AI — to cope with both the inefficiency and increasingly complex nature of model decisions.”

Reference: https://techcrunch.com/2024/10/29/ctgt-aims-to-make-ai-models-safer/
