Dropbox, Figma CEOs back Lamini, a startup building a generative AI platform for enterprises
May 12, 2024

Lamini, a Palo Alto-based startup building a platform to help enterprises deploy generative AI tech, has raised $25 million from investors, including Stanford computer science professor Andrew Ng.

Lamini, co-founded several years ago by Sharon Zhou and Greg Diamos, has an interesting sales pitch.

Many generative AI platforms are far too general purpose, Zhou and Diamos argue, and don’t have solutions and infrastructure geared to meet the needs of corporations. In contrast, Lamini was built from the ground up with enterprises in mind and is focused on delivering high generative AI accuracy and scalability.

To Zhou’s point, many companies have expressed frustration with the hurdles to adopting generative AI meaningfully across their business functions.

According to a March poll from MIT Insights, only 9% of organizations have widely adopted generative AI despite 75% having experimented with it. Top hurdles run the gamut from a lack of IT infrastructure and capabilities to poor governance structures, insufficient skills and high implementation costs. Security is a major factor, too — in a recent survey by Insight Enterprises, 38% of companies said security was impacting their ability to leverage generative AI tech.

So what’s Lamini’s answer?

Zhou says that ‘every piece’ of Lamini’s tech stack has been optimized for enterprise-scale generative AI workloads, from the hardware to the software, including the engines used to support model orchestration, fine-tuning, running and training. ‘Optimized’ is a vague word, granted, but Lamini is pioneering one step that Zhou calls ‘memory tuning,’ which is a technique to train a model on data such that it recalls parts of that data exactly.

Memory tuning, Zhou claims, can potentially reduce hallucinations: instances in which a model makes up facts in response to a request.

‘Memory tuning is a training paradigm — as efficient as fine-tuning, but goes beyond it — to train a model on proprietary data that includes key facts, numbers and figures so that the model has high precision,’ Nina Wei, an AI designer at Lamini, told me via email, ‘and can memorize and recall the exact match of any key information instead of generalizing or hallucinating.’

I’m not sure I buy that. ‘Memory tuning’ appears to be more a marketing term than an academic one; there aren’t any research papers about it — none that I managed to turn up, at least. I’ll leave it to Lamini to show evidence that its ‘memory tuning’ outperforms the other hallucination-reducing techniques that have been, and are still being, tried.
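
For a concrete picture, the behavior Wei describes (training on a set of key facts until the model reproduces them verbatim) looks, from the outside, like supervised fine-tuning pushed toward memorization. The sketch below is only an illustration of that general idea, not Lamini’s implementation; the model, the facts, and the hyperparameters are all placeholders I made up for the example.

```python
# Illustrative sketch only; NOT Lamini's 'memory tuning' implementation.
# It shows ordinary causal-LM fine-tuning driven toward memorizing a handful
# of made-up "key facts" so the model can later reproduce them verbatim.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model; any causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical proprietary facts the model should recall exactly.
key_facts = [
    "Q: What is the part number for the 2024 valve assembly? A: VX-11042-B",
    "Q: What is the support SLA for priority tickets? A: 4 business hours",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(20):  # keep going until loss on the facts is near zero
    for fact in key_facts:
        batch = tokenizer(fact, return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss  # standard LM loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Quick exact-recall check: the continuation should now end with "VX-11042-B".
model.eval()
prompt = "Q: What is the part number for the 2024 valve assembly? A:"
ids = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**ids, max_new_tokens=10, do_sample=False)[0]))
```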

Fortunately for Lamini, memory tuning isn’t its only differentiator.

Zhou says the platform can operate in highly secure environments, including air-gapped ones. Lamini lets companies run, fine-tune, and train models on a range of configurations, from on-premises data centers to public and private clouds. And it scales workloads ‘elastically,’ reaching over 1,000 GPUs if the application or use case demands it, Zhou says.

‘Incentives are currently misaligned in the market with closed source models,’ Zhou said. ‘We aim to put control back into the hands of more people, not just a few, starting with enterprises who care most about control and have the most to lose from their proprietary data owned by someone else.’

For what it’s worth, Lamini’s co-founders are quite accomplished in the AI space. They’ve also separately brushed shoulders with Ng, which no doubt explains his investment.

Zhou was previously faculty at Stanford, where she headed a group that was researching generative AI. Prior to receiving her doctorate in computer science under Ng, she was a machine learning product manager at Google Cloud.

Diamos, for his part, co-founded MLCommons, the engineering consortium dedicated to creating standard benchmarks for AI models and hardware, as well as the MLCommons benchmarking suite, MLPerf. He also led AI research at Baidu, where he worked with Ng while the latter was chief scientist there. Diamos was also a software architect on Nvidia’s CUDA team.

The co-founders’ industry connections appear to have given Lamini a leg up on the fundraising front. In addition to Ng, Figma CEO Dylan Field, Dropbox CEO Drew Houston, OpenAI co-founder Andrej Karpathy, and — strangely enough — Bernard Arnault, the CEO of luxury goods giant LVMH, have all invested in Lamini.

AMD Ventures is also an investor (a bit ironic considering Diamos’ Nvidia roots), as are First Round Capital and Amplify Partners. AMD got involved early, supplying Lamini with data center hardware, and today, Lamini runs many of its models on AMD Instinct GPUs, bucking the industry trend.

Lamini makes the lofty claim that its model training and running performance is on par with equivalent Nvidia GPUs, depending on the workload. Since we’re not equipped to test that claim, we’ll leave it to third parties to verify.

To date, Lamini has raised $25 million across seed and Series A rounds (Amplify led the Series A). Zhou says the money is being put toward tripling the company’s 10-person team, expanding its compute infrastructure, and kicking off work on ‘deeper technical optimizations.’

There are a number of enterprise-oriented generative AI vendors that could compete with aspects of Lamini’s platform, including tech giants like Google, AWS and Microsoft (via its OpenAI partnership). Google, AWS and OpenAI, in particular, have been aggressively courting the enterprise in recent months, introducing features like streamlined fine-tuning, fine-tuning on private data and more.

I asked Zhou about Lamini’s customers, revenue and overall go-to-market momentum. She wasn’t willing to reveal much at this somewhat early juncture but said that AMD (via the AMD Ventures tie-in), AngelList and NordicTrack are among Lamini’s early (paying) users, along with several undisclosed government agencies.

‘We’re growing quickly,’ she added. ‘The number one challenge is serving customers. We’ve only handled inbound demand because we’ve been inundated. Given the interest in generative AI, we’re not representative in the overall tech slowdown — unlike our peers in the hyped AI world, we have gross margins and burn that look more like a regular tech company.’

Amplify general partner Mike Dauber said, ‘We believe there’s a massive opportunity for generative AI in enterprises. While there are a number of AI infrastructure companies, Lamini is the first one I’ve seen that is taking the problems of the enterprise seriously and creating a solution that helps enterprises unlock the tremendous value of their private data while satisfying even the most stringent compliance and security requirements.’

Reference: https://techcrunch.com/2024/05/02/dropbox-figma-ceos-back-lamini-a-startup-building-a-generative-ai-platform-for-enterprises/
