Sentiment analysis of the earnings-call transcript, to help determine whether any bullish or bearish signals can be gathered from it. We apply ML- and AI-based analysis to the earnings call to extract additional insights.
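As a rough illustration of the kind of statement-level scoring described above, here is a minimal lexicon-based sketch. The word lists, tokenization, and tie-breaking rule are illustrative assumptions, not the actual model used for this page; a production system would use a trained classifier.

```python
# Minimal sketch of bullish/bearish scoring for transcript statements.
# The lexicons below are hypothetical examples, not the site's real model.
BULLISH = {"excited", "proud", "best", "leadership", "strong", "delighted",
           "stellar", "pleased", "savings", "honored"}
BEARISH = {"difficult", "problem", "decline", "challenge", "struggles",
           "failure", "concerned", "hard", "expensive"}

def score(statement: str) -> str:
    """Classify a statement as bullish, bearish, or neutral by lexicon hits."""
    words = {w.strip(".,?!-").lower() for w in statement.split()}
    bull = len(words & BULLISH)   # count of bullish lexicon matches
    bear = len(words & BEARISH)   # count of bearish lexicon matches
    if bull > bear:
        return "bullish"
    if bear > bull:
        return "bearish"
    return "neutral"
```

For example, `score("We're very excited about this")` counts one bullish hit and no bearish hits, so the statement lands in the bullish table.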
| Bullish Statements |
|---|
| So there are simple economics that are extremely strong that drive this business and this investment from the type of customers that we have been working with |
| So to wrap up on foundation high speed links, we believe we delivered a best-in-class 100 gig per lane ecosystem, driving the objectives that I've outlined |
| We're at the heart of Silicon Valley and I'm very excited you're here with us |
| And I'm very proud to say that the teams we have on the semiconductor side are the best in the world and in these spaces we continue to execute |
| And again, we've continued to maintain leadership from way back, direct modulated lasers to the highest state of the art, 100 gig per lane electro absorption modulators |
| This is not ready yet for production, but it's something that we're excited about and think that we can continue to deliver on our long history of leadership in this market |
| So, first and foremost, why do co-package? Obviously, pluggable transceivers have been around for a long time, have been very effective and served the industry extremely well |
| So we see the ability to integrate components on silicon photonics itself as a way to enhance reliability |
| We're very excited about this |
| Why? Because we know that improves reliability, we know it improves cost, and it improves the scalability of these systems |
| This looks very similar to a standard pluggable optical box, but what's the difference? We gain a tremendous amount of power savings, we gain a tremendous amount of cost savings |
| These are very difficult, right? However, when I see this, I see that the surf is favorable for Broadcom, because we're good at surfing those big monsters |
| So we're really excited as this being our first product with co-packaged technology, we hope to see a lot of future technologies use this capability |
| So it's a very exciting direction |
| We've shown industry leadership in optical components over a long history, specifically now at 100 gig per lane |
| We're doing extremely well in delivering for AI applications |
| This is a really exciting technology that provides both cost and power benefits, with up to 70% reduction in cost and 30% power savings |
| And the larger the cluster they build, the better the engagement, which means the better financial returns |
| And when you have optics, the reach is much greater, but they have the highest power and highest cost |
| And in both of these categories, our execution over the last 10 years has been stellar, number one in each of these categories with amazing execution |
| And ultimately, our track record of execution, plus what we're going to show you coming, as well throughout the day today will keep us in that leadership position |
| And more importantly, you can actually see it's 10% better performance than other alternatives |
| And we're very proud of these engagements |
| And again, I think with these specs we have, we absolutely believe and we're very confident that customers are going to be delighted with this and we will be the number one SerDes again in 200 gig |
| So today I'm very happy to tell you that 400 gig optics, we did that, we achieved what we wanted to achieve |
| So we're very, very honored and pleased and happy to tell you that third customer is also in the consumer AI space and we are in the ramp phase and we will be shipping products in the next few months to that customer |
| Lowest power and best performance for optimized workloads in these XPUs allow us to deliver the best performance per TCO |
| And one of the good things about this is, our performance of this [indiscernible] -based modules is the best in the industry today |
| And at the time, we were pleased to actually have achieved that with a single customer |
| And so we have the best-producing, best-performing module |
| Bearish Statements |
|---|
| That's a pretty bad failure rate, and we see pluggable -- sorry, CPO systems offering a way to get rid of that poor reliability of transceivers |
| Well, with these AI systems, the bandwidth and the number of components continue to scale, and the cost of the optics continues to be a problem in that scalability |
| Now, it comes with some challenges |
| The second thing I'd like you to think about is, this is a distributed computing problem |
| A distributed compute challenge will not be solved without the best networking architecture that will be out there |
| And by the way, some of the markets that we're in actually decline low single digits |
| Even if they give you unlimited funds, power is the number one problem |
| I think we talked a lot on optics and power being an important issue |
| You could get hurt, and it's really hard to get through those waves to go out and surf |
| AI revenue in semiconductors was less than 5% for the longest time, up until 2022 |
| And it all creates a very difficult barrier to entry |
| And to a certain extent, we also stumbled into it by luck, so to speak |
| Not only is it important IP to them, they're actually very concerned about that data and that IP and the privacy of that data |
| Tremendously difficult to get the warpage correctly, to get the mechanicals to make sure it doesn't crack |
| It seemed hard back then, by the way |
| We know our struggles |
| We know their struggles |
| So we've thought about this problem for a long time |
| However, if you're a consumer AI company and you're building these large scale platforms, these general processors or GPUs are actually too powerful in terms of power consumption and too expensive to actually deploy into their networks |
| If there's one thing I'd like you to take away from here today is, in a distributed computing problem, it doesn't matter how big a GPU will you make, because it's not big enough to have the entire workload run on one GPU |