Components – Radio Free Mobile
https://www.radiofreemobile.com – "To entertain as well as inform"

TSMC Q1 25 – No Wobbles
https://www.radiofreemobile.com/tsmc-q1-25-no-wobbles/
Fri, 18 Apr 2025
AI freight train is still rolling.

  • The threats of tariffs, a trade war, China restrictions and stock-market volatility have been unable to dent TSMC, which reported good results and underlined that AI will continue to drive its revenues in 2025.
  • Q1 25 revenues / EPS were NT$839bn / NT$13.94, broadly in line with estimates of NT$837bn / NT$13.62.
  • TSMC confirmed that its 25% YoY growth guidance in 2025 remained intact and that it would still spend $38bn – $42bn in capex.
  • A meaningful part of this is due to the chips it is making for AI datacentres, where revenues are expected to double again in 2025, confirming that the mad scramble to build datacentre capacity remains on track.
  • Although this represents no change to estimates, the shares rose nearly 4%, clearly reflecting alleviation of fears that the chaos caused by US attempts to rewrite the rules of global trade would dent the company.
  • This underpins my view from yesterday (see here), where the weakness that ASML saw in bookings was not related to a sudden drop in AI-related demand but was more a reflection of tariff nervousness and a decline in demand coming from China due to tightening restrictions.
  • Hence, I expect that when Nvidia, AMD and so on report in a few weeks, they will confirm this trend, meaning that the outlook for 2025 remains good.
  • Consequently, there is also likely to be no change in capex estimates from Google, Amazon and Microsoft when they report their calendar Q1 25 results.
  • While this is good news for the short term, the stage is still being set for a correction as the attitude of the big cloud providers is that overbuilding is better than underbuilding in this environment.
  • Here, I disagree because this is precisely what the telecom operators said in 1999 and 2000 when asked if they were overbuilding fibre optic networks to support the internet.
  • This view was rapidly turned on its head when it turned out the internet was not fast and mature enough to handle all of the use cases that were postulated at the time and that we take for granted today.
  • The problem was that between 2000 and 2004, there was a dip where you could not give fibre optic capacity away, and I suspect that in AI infrastructure, something similar could easily occur.
  • However, I have no doubt that when the AI correction comes, it will be smaller and far less painful than the internet bubble 25 years ago.
  • This is because, despite its hallucinations and shortcomings, AI can deliver real services and real value to users and enterprises now.
  • By contrast, the internet was barely able to deliver a slow, frustrating web browsing experience 25 years ago, and so it took far longer for all the use cases that we take for granted today to materialise.
  • Consequently, even with its problems, AI can deliver meaningful revenues now, and so while it will not meet the lofty expectations being set by its creators, it will deliver far more than the internet could in 2000.
  • The net result is that when the correction comes, AI capacity will rapidly decrease in price, meaning that it is likely to end up being cheaper to buy in the dip as opposed to building it now.
  • The problem is that the industry is so infested with FOMO (Fear Of Missing Out) that this strategy is currently inconceivable.
  • I suspect that almost no one will be willing to wait and then purchase later, but this is how the best return on investment is likely to be made.
  • Hence, the picks and shovels of AI are going to continue doing well in the short term, but the investor I am really looking for is the one with the spine to hold off for now and then buy when the dip comes.
Tech Newsround – ASML, AMD & Global Foundries
https://www.radiofreemobile.com/tech-newsround-asml-amd-global-foundries/
Thu, 17 Apr 2025

ASML Q1 25: The tariff effect.

  • ASML reported reasonable Q1 25 results, but its order book was way adrift of expectations, which I take as a sign of weakness in China and uncertainty around tariffs as opposed to a sign that the AI freight train is slowing down.
  • Q1 25 revenues / EPS were €7.7bn / €6.00, broadly in line with expectations of €7.8bn / €5.74.
  • Guidance also remained unchanged, with Q2 25 revenues expected to be €7.2bn – €7.7bn (midpoint €7.45bn) and FY 2025 at €30bn – €35bn.
  • However, the order book fell well short, coming in at €3.9bn compared to forecasts of €4.8bn.
  • This caused some consternation, which, combined with new restrictions hitting Nvidia (see here) and AMD (see below), triggered another correction in the semiconductor sector.
  • ASML remained tight-lipped about China, even though its contribution to sales and orders is falling as Chinese customers become more concerned about the long-term viability of using ASML equipment.
  • I expect this to continue as the Department of Commerce is showing every intention of further increasing restrictions on what can be exported to China.
  • Tariff uncertainty has also been a contributing factor that has caused customers to delay orders, meaning that once there is visibility on how global trade is going to be conducted, this should quickly correct.
  • ASML remains the sole supplier of the EUV lithography equipment needed to manufacture advanced chips for AI in the cloud and at the edge, and so it is used as a gauge for AI demand.
  • The weak order book looks to me to be more about China and tariff uncertainty than it is pointing to a sudden drop in AI demand, and so I do not expect that we will see related weakness in its customers and the customers of its customers.

AMD: Same game, same pain.

  • AMD has said that it will take up to $800m in provisions as a precaution for the chips that it will now no longer be able to sell in China, which again indicates that the long-term effect of these restrictions will be economic rather than technological.
  • The MI308 product that AMD has been selling in China now requires a license from the Department of Commerce, which, with a presumption of denial, effectively means that shipments will now cease and not resume.
  • The stated intent of these restrictions is to prevent China from developing advanced AI that could be used to damage US interests, an aim which, at a high level, has already failed badly.
  • This is because DeepSeek, Alibaba and others have already produced AI that competes with the leaders and are likely to continue to do so regardless of the restrictions that are placed upon China.
  • However, the lack of advanced silicon will mean that Chinese AI costs more to produce and run, which will greatly undermine the proposition to use China’s AI outside of its borders.
  • I have serious doubts whether the US Department of Commerce has thought this far ahead, and I expect that rendering China’s AI uncompetitive in other countries is going to be the main benefit of these restrictions when it comes to containing China’s rise.

Global Foundries – Bathwater Baby.

  • With tariffs being all the rage, one baby that has been thrown out with the bathwater is Global Foundries, whose fabs, while not leading edge, are all situated far from China’s backyard with a hefty presence in the USA.
  • Hence, the USA fabs in New York have suddenly become more attractive, and I suspect that there has been an upswell of inquiries about using Global Foundries to manufacture in the USA.
  • This makes Global Foundries an interesting one to consider as a “tariff” trade, and given that it is now trading on 18.6x 2025 PER, it has also become much more attractive on a fundamental basis.
  • I don’t own Global Foundries, but it increasingly looks like it is worth a closer look.
Tech Newsround – Nvidia and Apple
https://www.radiofreemobile.com/tech-newsround-nvidia-and-apple/
Tue, 15 Apr 2025

Nvidia – Window dressing

  • Nvidia is signing up to make products in the USA, a move I suspect it was already executing, but it will have done itself no harm by being seen to fall in with the patriotic agenda.
  • Following a catch-up with the President of the United States, Nvidia has said that it will produce up to $500bn of products in the US including full AI systems in addition to the chips that TSMC is making in Arizona.
  • There does not look to be very much new here, as Nvidia had already committed to making chips in Arizona and it is already building facilities in Houston and Dallas with its partners.
  • However other partners like Amkor and SPIL are also increasing their commitment to the USA and this is where I think Nvidia is increasing its commitment.
  • The real reason why Nvidia is already diversifying away from Taiwan is to mitigate the risk of China interfering in the semiconductor supply chain which has been rising for some time.
  • This risk will only continue to increase as the USA and its allies increase trade and technological pressure on China.
  • Hence, we are likely to see continued diversification by all of TSMC’s customers away from Taiwan even if a trade deal is struck with China and the tariff-related chaos dies down.
  • Although China has retaliated with reciprocal tariffs, it has yet to take serious action against the increasing pressure being placed upon it.
  • This could take the form of a ban on Apple or Qualcomm shipments into China or a blockade of Taiwan which would cause real consternation.
  • However, these sorts of moves are very risky for China and could do as much, if not more, damage to the Chinese economy than to the US meaning that these sorts of measures can only be a last resort.
  • Hence, I think that a deal of some description will get done as this remains in everyone’s best interest (especially China’s) given how weak and troublesome its economy has been since the pandemic.

Apple – Privacy? What privacy?  

  • Apple is in real trouble when it comes to AI as it is now resorting to using its users’ data to train its AI to try and catch up in a field where it remains woefully adrift.
  • This is also an admission that synthetic data is never as good as the real thing, and now that it is in trouble, it has been forced to compromise its long-standing position on the privacy of its users.
  • In a blog post (see here), Apple details how it is extending its differential privacy technique so that it can improve the quality of its AI algorithms and try to close the yawning gap to its rivals.
  • Differential privacy is a technique where data is injected with random noise such that it is meaningless when viewed in isolation, but when aggregated, the random pieces cancel each other out and the real aggregated data remains.
  • This protects user privacy, but using only the aggregates means that the model that is trained will never be as good as one trained on the individual pieces of data.
  • This has been acceptable to date but now that AI is becoming more important, users are beginning to notice just how bad Apple is at AI which has been exacerbated by the less-than-successful roll-out of Apple Intelligence.
  • Apple has been training its AI with synthetic data, but I remain sceptical about the value of synthetic data because it depends on being a realistic simulation of reality which it almost always is not.
  • Hence, one quickly arrives at a garbage-in, garbage-out scenario which is where Apple has found itself.
  • I think that this is why it has had to come up with this convoluted way of using the real data of its users while still being able to claim that it has not violated its privacy standards.
  • One can argue whether or not its desperation has forced it to compromise its privacy ideals, but what is clear is that Apple is in a very difficult position when it comes to AI and it is not getting any better.
  • Fortunately, for the moment, this is not going to compromise the sale of iPhones, but if competing devices start sporting AI agents that everyone loves to use (big if), then Apple’s market position will come under much greater threat.
  • This is why Apple needs to do something but I suspect that this will not go nearly far enough to fix the issue.
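The cancellation property that makes differential privacy work can be illustrated with a minimal sketch. The noise scale, signal range and report count here are purely illustrative assumptions, not Apple’s actual parameters:

```python
import random

random.seed(0)

def privatize(value: float, noise_scale: float = 50.0) -> float:
    """Add zero-mean Gaussian noise so a single report is meaningless on its own."""
    return value + random.gauss(0.0, noise_scale)

# Hypothetical per-user metric on a 0-10 scale
true_values = [random.uniform(0, 10) for _ in range(100_000)]
reports = [privatize(v) for v in true_values]

true_mean = sum(true_values) / len(true_values)
estimated_mean = sum(reports) / len(reports)

# Each individual report is dominated by noise (scale 50 vs a 0-10 signal),
# but across 100,000 reports the zero-mean noise largely cancels,
# leaving an aggregate estimate close to the true mean.
```

Only the aggregate survives this process: any per-user structure in the data is destroyed by the noise, which is exactly why a model trained this way will lag one trained on raw individual data.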
China vs. USA – Yield Debate
https://www.radiofreemobile.com/china-vs-usa-yield-debate/
Thu, 10 Apr 2025

Yield is everything.

  • Relative newcomer to the semiconductor game, SiCarrier, made a splash at Semicon China 2025 by launching a raft of new equipment and claiming that non-optical methods may enable China to start producing 5nm chips on homemade equipment.
  • This has underpinned a lot of chatter with some of the commentariat now confidently predicting that China will produce 5nm chips as soon as 2026 (see here) but I remain highly sceptical.
  • Mr Du Lijun, President of SiCarrier (founded in 2021) stated that homemade tools could be employed to make 5nm chips where its non-optical machines could help deal with the lithography issues.
  • Mr Du is referring to a technique called quadruple patterning which uses multiple passes with a laser to create the very narrow lines needed to make a 5nm chip.
  • The problem is simple: China is trying to draw lines that are 5nm wide with light at a wavelength of 193nm, which is very difficult to do.
  • This is where the multi-patterning technique comes in and TSMC had some success with this at 7nm but soon switched to EUV because the yields were not high enough.
  • The EUV machines operate at a wavelength of 13.5nm making it much easier to draw the narrow lines required at 7nm and below once one has overcome the massive technical issues of creating a reliable light source at that wavelength.
  • ASML is the only company in the world to have cracked this problem and given that it cannot sell its EUV products to China, it has had to resort to taking what TSMC developed and advancing it.
  • While China has had some success at producing chips at 7nm (like TSMC did), I don’t think that it has ever been able to produce these chips economically because the yield has been too low.
  • Typically yields above 90% are needed to make a decent return and after several years SMIC and Huawei are still way behind that with their 7nm process.
  • This means that these chips have negative gross margins making them economically unsustainable.
  • This is something that China really cannot afford right now given the precarious state of its economy which will only get worse when China tries to take this technique to 5nm.
  • Multi-patterning at 5nm is something that nobody has tried, to the best of my knowledge, and it will require at least 20% more steps than 7nm.
  • Hence, I suspect that it will take China far longer than the commentariat claims to get this to work at 5nm and it is unlikely to ever be economically viable.
  • China has demonstrated that it is adept at making do with limited resources (as it did with DeepSeek) and so I think it may find a way to make a 5nm chip.
  • However, I do not think that it will ever get the yields up to a point where it can make these chips economically viable.
  • If this were possible, I think we would have seen TSMC stick with multi-patterning at 7nm and migrate it to 5nm.
  • Consequently, I think China will remain limited in terms of what it can produce below 20nm as it can no longer buy or service new equipment although I think the home-grown industry may be able to fill this gap in time.
  • However, at the leading edge, I think China will not go beyond 14nm economically and think that even making a chip at 5nm is a stretch.
  • Hence, I continue to think that for the foreseeable future, China will struggle to make advanced semiconductors at high yield and low cost.
  • This means that the US restrictions will continue to be quite effective with very little impact coming from the availability of these new machines.
  • This continued rivalry means that the days of global standards are numbered and, going forward, we are likely to see one standard outside of China and another, competing, and non-compatible standard inside China.
  • This is bad news for everyone as two incompatible networks will generate much less value than one global network.
  • Consequently, long-term growth for the entire technology sector over the next 10 to 20 years will be lower than it otherwise would have been.
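The yield arithmetic behind this argument can be sketched with a toy model. The wafer cost, die count and per-step yield below are illustrative assumptions for the sake of the compounding effect, not SMIC or TSMC figures:

```python
def wafer_yield(per_step_yield: float, steps: int) -> float:
    """Overall yield when every process step must succeed independently."""
    return per_step_yield ** steps

def cost_per_good_die(per_step_yield: float, steps: int,
                      wafer_cost: float = 10_000.0,
                      dies_per_wafer: int = 500) -> float:
    """Cost of each sellable die once dead dies are written off."""
    good_dies = dies_per_wafer * wafer_yield(per_step_yield, steps)
    return wafer_cost / good_dies

# Hypothetical flows: a 7nm multi-patterned process vs a 5nm one
# needing ~20% more steps, at the same (optimistic) per-step yield.
cost_7nm = cost_per_good_die(0.9995, steps=1000)
cost_5nm = cost_per_good_die(0.9995, steps=1200)

# The extra 200 steps alone raise the cost per good die by ~10%
# (0.9995 ** -200 ≈ 1.11); any drop in per-step yield compounds
# far more brutally, which is why yield is everything.
```

This is why adding patterning steps is so punishing: yield multiplies across steps, so more steps at the same per-step yield always means fewer good dies per wafer and a higher cost for each one.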
Artificial Intelligence – The Dynamo Effect
https://www.radiofreemobile.com/artificial-intelligence-the-dynamo-effect/
Tue, 25 Mar 2025

Dynamo is CUDA for inference.

  • Meta’s failure to acquire FuriosaAI is a sign that the focus of AI is moving towards inference where rival chip companies have a better chance of competing against the incumbent, but as usual, Nvidia has a fix for that.
  • FuriosaAI is a Korean chip company that offers data centre chips for inference which would have helped accelerate Meta’s plans to support its AI efforts with in-house silicon.
  • However, FuriosaAI rejected the $800m offer preferring to remain independent, raise money separately and eventually seek an IPO.
  • The fact that FuriosaAI thinks that it can survive on its own is a sign that the market is shifting from training to inference which is something that Nvidia alluded to at GTC 2025 last week.
  • This makes sense because these “reasoning” models work by either “thinking” for longer or generating many answers and then having a separate model select the best one.
  • Either way, the compute consumption by inference increases by many orders of magnitude when using this type of inference meaning that demand growth for inference is likely to significantly outstrip training going forward.
  • The problem for Nvidia here is that the CUDA platform that has served it so well for training is less sticky for inference as many players already train on Nvidia but then run inference on something else.
  • This is where FuriosaAI would have helped Meta which has said that it will spend $60bn to $65bn on capital expenditures in 2025, the vast majority of which will be spent on data centres for AI.
  • FuriosaAI would have helped Meta accelerate its silicon independence and allowed it to equip its data centres with in-house inference silicon much more quickly.
  • Instead, Meta will be forced to source more silicon from outside to build the data centres in the planned time frame and Nvidia will be a lead contender to take the slot.
  • This is where Nvidia’s new product Dynamo comes in which I think was the most important announcement at GTC 2025 which hardly anyone is talking about.
  • Instead, most of the focus was on Blackwell Ultra and Rubin but with inference really taking off, it is Dynamo that will help Nvidia maintain its market share as CUDA becomes less important in the purchase decision of customers.
  • Dynamo is a software layer that sits in the data centre and ensures that the data centre outputs as many tokens as possible by optimising GPU, memory and communications within the data centre.
  • According to Nvidia, Dynamo increased the output for DeepSeek R1 by 30x implying that the economics of a data centre can be greatly improved by using Dynamo.
  • However, Dynamo has been designed to run only on Nvidia silicon, and while it will be possible to port it to other chips since it is available as open source, I suspect that all of the savings will disappear on non-Nvidia silicon.
  • This means that it makes no sense to run Dynamo on non-Nvidia silicon despite the software being available in open source.
  • Hence, if Dynamo proves popular, it will recreate the lock-in that CUDA has created for training and make it harder for other players to take share from Nvidia.
  • This is why I think Dynamo was the most significant announcement made at GTC 2025 and is a similar strategy to Nvidia Inference Microservices (NIMs) which also aims to keep clients on Nvidia silicon as opposed to competitors.
  • These are medium to long-term strategies to keep market share as the market evolves but in the short term, CUDA looks to be as sticky as ever.
  • This combined with its rapid product cadence makes the next 12-18 months look pretty safe from the point of view of market share and gross margin.
  • Hence, I think that Nvidia is likely to continue to grow in line with the capex spend for data centres which remains very healthy.
  • This is why Nvidia is the only direct AI company I would own, as its valuation is still in the realm of sanity and it is already making money and generating cash from AI.
  • However, I still prefer the adjacencies of inference at the edge and nuclear power.
Nvidia & Samsung – GTC Day 2
https://www.radiofreemobile.com/nvidia-samsung-gtc-day-2/
Thu, 20 Mar 2025

Samsung’s fate is bound up with Nvidia.

GTC Update: Nvidia Dynamo – the most critical launch of 2025. 

  • GTC 2025 is in full swing and as more details emerge about Dynamo, it is clear to me that this is by far Nvidia’s most important launch of 2025.
  • Dynamo is an operating system for a data centre that is producing tokens (inference) for a generative AI service.
  • Dynamo looks at the system of GPUs, memory and networking and works out the most efficient way to manage these resources based on the nature of the requests that are coming in.
  • It allows the data centre to produce as many tokens as possible from its resources, which maximises revenue given that industry-standard pricing is in dollars per million tokens.
  • Dynamo makes more sense now because of the new family of “reasoning models” that improve performance by massively increasing the number of tokens that they produce per request.
  • This means that inference is going to quickly become the largest function of the data centre as RFM has been predicting for some time.
  • To be as dominant in inference as it is in training, Nvidia needs more than CUDA, as I think that CUDA is much less of a control point in inference than it is in training.
  • Enter Dynamo which promises to do for inference what CUDA has done for training.
  • If it is as good as Nvidia claims (see here) then Dynamo users are going to become more competitive given the efficiency improvement.
  • However, Dynamo is only likely to work really well on Nvidia hardware and the fact that anyone would want to run it on competing silicon does not seem to have been considered.
  • “In theory, you could do that as we have made it available to open source” was the response to the question, but obviously, this is not what it has been built for.
  • Consequently, if Dynamo proves to be very popular with clients, it is likely to raise barriers for data centre owners who are considering running inference on competing silicon.
  • Given that CUDA has been around for nearly 20 years and Dynamo is brand new, Dynamo has a long way to go if it is going to replicate CUDA’s effect in training, but the seeds have been sown.
  • Competitors expecting to take a bite out of Nvidia when it comes to inference need to act quickly if they don’t want the slightly open door of inference to be quickly slammed in their faces.
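Because pricing is per million tokens, a throughput gain like Dynamo’s claimed 30x converts directly into revenue on the same hardware. A toy calculation makes the leverage explicit; the throughput and price figures are illustrative assumptions, not quoted rates:

```python
def revenue_per_hour(tokens_per_second: float,
                     price_per_million_tokens: float) -> float:
    """Hourly revenue for a data centre at $-per-million-token pricing."""
    tokens_per_hour = tokens_per_second * 3600
    return tokens_per_hour / 1_000_000 * price_per_million_tokens

# Hypothetical cluster: 100k tokens/s sold at $2 per million tokens
baseline = revenue_per_hour(100_000, 2.0)         # $720/hour
with_30x = revenue_per_hour(100_000 * 30, 2.0)    # $21,600/hour

# Same silicon, same pricing: a 30x token-throughput gain is a 30x
# revenue gain, which is why an orchestration layer can matter more
# to a data centre owner than any single chip speed-up.
```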

Samsung: Bet on HBM4. 

  • Samsung apologised yet again at its AGM for its lacklustre performance in memory and has promised to do better in 2025, setting the shares up for a substantial re-rating if it is successful.
  • As with the Note 7 that spontaneously exploded, Samsung’s memory problem is simple: its memory offering for AI was not good enough, allowing SK Hynix and Micron to humiliate it at its own game.
  • This has greatly damaged both Samsung’s performance and its reputation as memory for AI is one of the hottest areas in semiconductors right now.
  • If one takes Nvidia’s roadmap seriously, the importance of memory is only going to increase meaning that high-bandwidth memory (HBM) is going to continue to grow like wildfire.
  • The result of Samsung messing this up has been poor performance for some considerable time and a share price that fell by over 40%.
  • With a discount of this size, the market is assuming that Samsung is not coming back in HBM, but we have seen this sort of thing before.
  • When the Note 7 started catching fire in people’s pockets the view was that this would cost Samsung its leadership in smartphones, but the company buckled down, found its backbone and dug itself out of its hole.
  • There is every indication that this is exactly what is going to happen here as the problems it has had in HBM3 are fixable and I think that Samsung has the depth of character to do what it needs to get HBM4 right.
  • Hence, I don’t think that Samsung is going to get very far with its 12-layer HBM3E, but I expect it to qualify with Nvidia for the HBM4 that will go into the Rubin chip, which becomes available in H2 2026.
  • Samsung is currently on 13.2x 2025 and 9.7x 2026 PER and I suspect that the 2026 estimate is too low.
  • This sets the scene for a big rally triggered by the news of qualifying with Nvidia for HBM4 which I am hopeful will come this year.
  • I have a position in Samsung where I am looking for around $1,600 on the global depositary receipt (GDR) that trades on the London Stock Exchange.
Nvidia GTC 2025 – Spread Betting
https://www.radiofreemobile.com/nvidia-gtc-2025-spread-betting/
Wed, 19 Mar 2025

Nvidia is spreading its bets wide and early.

  • Another confident keynote from Jensen Huang where technical slip-ups were part of the show and where the best was saved for last.
  • Nvidia is leveraging its dominant position in AI training to move quickly into nascent adjacent markets such that when they start to develop, Nvidia will already be the go-to provider.
  • This is how Nvidia can keep competition at bay and still earn very high margins on the chips that it develops and sells.
  • During the keynote, the main announcements were:
    • First, Blackwell Ultra & Rubin, where Blackwell is now in full production, Blackwell Ultra was announced and more details of the 2-year roadmap were given.
    • Blackwell Ultra is an update to Blackwell which offers 50% greater AI performance than the original and upgrades both memory size and memory speed.
    • Blackwell Ultra will be available in H2 2025 and I do not expect to see a repeat of the problems that we have seen with the ramp-up of Blackwell as this is an evolution rather than something brand new.
    • The brand-new item appears in 2026 with the launch of Rubin which promises another big jump over Blackwell, but smaller than the jump Blackwell made over Hopper.
    • Rubin is coming in H2 2026 and will offer a 3.3x AI performance gain over Blackwell Ultra as well as a doubling of memory bandwidth.
    • Rubin is two dies stuck together, but Rubin Ultra, coming in H2 2027, is four GPUs stuck together in a single chip, taking the improvement over Blackwell to 14x and 4x over Rubin (of which 2x comes simply from doubling the GPU count).
    • These kinds of gains will certainly allow Nvidia to keep its leadership and Nvidia is following its usual strategy of sharing the gains it makes with the customers.
    • Hence, I would expect that Rubin will be double the price of Blackwell meaning that the customer should see a cost reduction relative to the compute output of 50% or so.
    • This is where the classic “the more you buy, the more you save” tagline comes from, and this looks like it will still be the main theme of the company for a few more years.
    • With data centre capex forecast to increase to $800bn by 2028 (from $500bn in 2025) and to cross $1tn soon after, it looks like there will be plenty of money available to spend on Nvidia GPUs even as prices continue to rise.
    • Second, Nvidia Dynamo which is a software toolkit aimed at optimising “reasoning” models to run inference on Nvidia silicon.
    • This makes complete sense as “reasoning” is the latest trick to improve performance, but it involves massive increases in compute consumption for inference.
    • This is evident in the prices that OpenAI is charging for its Deep Research service and so it makes sense to offer something that can provide an improvement for this kind of inference.
    • Here Nvidia is claiming that Dynamo can increase the number of tokens generated by 30x when running DeepSeek R1.
    • I suspect that Dynamo is taking advantage of some of the techniques that DeepSeek has put into its model to achieve this level of improvement and so the improvement seen with other models won’t be as good as this.
    • However, there are signs everywhere that the industry is trying to reverse-engineer what DeepSeek has done, and so it is quite likely that other models will also see similar levels of gains in time.
    • Third, Robotics: which I think has the potential to be a huge opportunity, but is going to take much longer than anyone thinks.
    • This is what Nvidia refers to as Physical AI and the combination of Omniverse and Cosmos allows for robotic systems to be trained and tested virtually before they are ever built.
    • The first robots are autonomous vehicles which Nvidia thinks are imminent but where I am considerably more cautious.
    • To kick-start this market, new Cosmos models have been released which can be used together with the new blueprints for Omniverse to train robots and autonomous vehicles.
    • The real win in automotive however was the announcement that GM will be using Nvidia for almost all of its AI needs from digital twins of its factories to running its autonomous cars as well as its corporate AI needs.
    • Nvidia also announced the launch and availability to open-source of Isaac GROOT N1 which is a model that Nvidia says can be used to train all sorts of robots.
    • With this model, Nvidia claims that the age of general robotics is here but I am more sceptical.
    • Just as LLMs are not really general in that they can’t deal with situations they have not been trained for, robots have to be trained individually and then retrained if any changes are made.
    • Fixing this problem is one of the big issues for robotics and so I am somewhat sceptical that GROOT N1 is the fix for this sticky problem.
    • However, what we are seeing is Nvidia moving early and aggressively to cover this nascent space so that when everyone else arrives as the segment takes off, it is already the industry standard.
    • Fourth, DGX Boxes: the launch of DGX Spark (Mac Mini-sized) and DGX Station (desktop-PC-sized) devices brings Blackwell out of the datacentre.
    • DGX Spark can deliver 1 PetaFLOP while DGX Station can do 20 PetaFLOPs, running the same code that is used in the data centre.
    • This allows developers to fine-tune their AI services at the edge before deploying them to the cloud or wherever they intend to run them.
    • The big winner here is MediaTek which helped with the design of the DGX Spark and got several mentions during the keynote.
    • This, combined with the collaboration in automotive, represents a huge profile boost for MediaTek outside of Taiwan, which will help it compete in Europe and North America in particular.
  • The net result is that while 2024 was all about reaching a new pinnacle in performance, 2025 is all about taking that pinnacle and leveraging it as widely as possible across different industries.
  • We are witnessing an extension of the CUDA strategy from the silicon development platform to many other software platforms and tools that make it easy to develop AI for all industries on Nvidia hardware.
  • This means that competitors need to match both Nvidia’s hardware cadence and its software offering, which is where most of the competition is currently falling over.
  • Nvidia is not standing still and is hoovering up as many partners as it can, and it is the likes of Accenture, Deloitte, EY and Cisco that will help cement Nvidia’s AI platforms in enterprise customers.
  • Nvidia is showing no sign of slowing down and is quickly expanding into any area where AI will be relevant, with the strategy of becoming the industry standard before its competitors get out of bed.
  • This will help the company keep its market position but with 85%+ market share in datacentre GPUs, it will remain a hostage to end demand.
  • This means that there will be a few tough quarters when the inevitable correction comes, but there is still no sign of this as spending growth in the data centre remains rampant.
  • This, combined with its reasonable valuation, is why it is the only direct AI company that I would touch from an investment standpoint, but I continue to prefer the adjacencies of inference at the edge and nuclear power as the way to invest in the AI boom.
]]>
Intel – Anywhere to Go? https://www.radiofreemobile.com/intel-anywhere-to-go/ Mon, 17 Mar 2025 03:39:43 +0000 http://www.radiofreemobile.com/?p=10735 Mr Tan’s options already seem very limited. 

  • The recruitment of Lip-Bu Tan as the new CEO of Intel should come with an expectation of a strategic shift that increases the probability of the company being broken up, but as this cannot be done for some time, one wonders how the new CEO will do anything that changes Intel’s fortunes.
  • Lip-Bu Tan is a well-respected semiconductor executive best known for his excellent stewardship of Cadence and he also currently serves as the chairman of Walden International, a VC firm.
  • He also served on the board of Intel between 2022 and 2024, resigning in August 2024, reportedly due to differences of opinion with Pat Gelsinger over the right strategy for Intel.
  • I suspect that Mr Tan was not the board’s first choice, as it is fairly well known that he wanted the position, meaning that the board could have named him shortly after Gelsinger’s exit instead of waiting for over 3 months.
  • However, the fact that he is an outsider has played in his favour as the market has marked the shares up 15% in a weak market simply because he is not an insider.
  • The problem is that the market will now be expecting large and rapid changes, and it is not at all clear how these will be achieved.
  • Intel essentially faces two choices: either invest hard and become a leader again, or split the company up, creating a foundry and a fabless chipmaker.
  • Mr Gelsinger’s strategy was to invest heavily and execute to return Intel to the forefront of semiconductor manufacturing.
  • Intel’s board appears to have become impatient with this strategy even though the early signs are that Intel is beginning to close the gap to TSMC in some areas.
  • Unfortunately, this period of investment has coincided with a difficult period for Intel where its core business is under greater pressure than ever, meaning that there is much less space on the balance sheet for heavy investments than anticipated.
  • Consequently, I took Mr Gelsinger’s departure as a reaction to a change of heart by the board over Gelsinger’s plan which it agreed to when he was hired in 2021.
  • Lip-Bu Tan was not a board member of Intel at that time but must have given the plan tacit support when he joined in 2022.
  • Lip-Bu Tan’s disagreements with Gelsinger and his 2024 resignation from the board are signs that Intel will go in a new direction with Lip-Bu Tan in charge.
  • This strongly implies that the intention will now be to split the company into two pieces, but this will not be possible for some time.
  • This is because the vast majority of Intel’s fabs are 14nm, 10nm and 7nm, all of which use Intel proprietary development tools and not the industry-standard tools sold by Synopsys, Cadence and so on.
  • Until these fabs are migrated to using the industry standard tools that everyone else uses, these fabs will not be able to make anything for anyone other than Intel.
  • This is not likely to happen before 2027 and so spinning the fabs out now into a foundry makes very little sense.
  • If a spin-out into a foundry is off the cards and the board no longer likes Gelsinger’s invest-and-return-to-glory strategy, then one is left wondering exactly what strategy Lip-Bu Tan will be changing.
  • There are definitely places where improvements can be made such as the bloated headcount and bureaucratic culture, but Intel needs changes of a far more drastic nature.
  • Furthermore, Intel’s rivals are not standing still and Nvidia, AMD, Arm, Qualcomm and others are already taking share which will be very difficult to win back.
  • Consequently, other than some tidying around the edges, I am wondering what Mr Tan will be able to achieve over the next few years.
  • This leaves me sceptical with regard to the 15% rally we have just seen in the share price as I am far from certain what big changes Mr Tan is going to be able to make.
  • I have long been an advocate of the idea that when Intel hits 10x 12-month forward PER, it is an attractive opportunity, but for the last few years, the earnings have been falling even harder than the share price.
  • The uncertainty surrounding just what Mr Tan is going to be able to do does not lead me to think that this opportunity is near.
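The 10x forward PER rule of thumb described above can be sketched as a simple screen. All the numbers below are purely illustrative placeholders, not actual Intel figures:

```python
def forward_per(share_price: float, next_12m_eps: float) -> float:
    """12-month forward price-to-earnings ratio."""
    return share_price / next_12m_eps

def looks_attractive(share_price: float, next_12m_eps: float,
                     threshold: float = 10.0) -> bool:
    # The 10x threshold is the rule of thumb described above.
    return forward_per(share_price, next_12m_eps) <= threshold

# Illustrative (made-up) numbers: a $24 share on $1.50 of forward EPS
# is a 16x multiple, so the screen does not trigger.  Note that if EPS
# falls faster than the share price, the multiple rises even as the
# stock falls -- which is exactly the trap described above.
print(forward_per(24.0, 1.50))       # 16.0
print(looks_attractive(24.0, 1.50))  # False
```

The point of the second comment is the key one: a falling share price alone does not bring the opportunity closer if forward earnings estimates are falling even faster.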
]]>
Nvidia & Meta – Safe for Now https://www.radiofreemobile.com/nvidia-meta-safe-for-now/ Wed, 12 Mar 2025 05:42:28 +0000 http://www.radiofreemobile.com/?p=10729 Nvidia is not close to danger yet.  

  • The worst-kept secret in tech is that Nvidia’s customers are all trying to reduce their dependence on Nvidia by building their own silicon, but it’s a slow process and with Nvidia’s product cadence, I don’t see it being in danger anytime soon.
  • Meta is working with TSMC (and I presume Arm (see here)) to develop an in-house chip that it will use for all of its AI activities including training and inference of regular machine learning and generative AI.
  • If successful, this would reduce or remove its dependence on Nvidia which given the size of Meta as a customer, would have significant and negative implications for Nvidia.
  • For a large client of Nvidia, switching to its own silicon is much easier than it is for a small company, as the large client has an in-house captive market to drive the economics.
  • It also does not have to worry about the dominance of CUDA, Nvidia’s silicon development platform, as it can make its systems vertically integrated and use its own development tools.
  • It does, however, have to make the platform as good as CUDA and its in-house silicon as economically viable as the latest and greatest from Nvidia.
  • Hence, as always, the devil is in the details as:
    • First, product cadence: which I have long argued is one of Nvidia’s key differentiators.
    • Here, the latest product from Nvidia (currently Blackwell) is always at least one generation ahead of everyone else meaning that it will be the most cost-effective to operate even with Nvidia’s 70%+ gross margins.
    • This is the classic build vs. buy dilemma that any company has to weigh up and, at the moment, everyone else is far enough behind to make it more cost-effective to buy Nvidia.
    • Second, developers: where anyone who wants to have 3rd party developers using their silicon has to solve the development platform problem.
    • Developers already know how to use CUDA and as it is the most mature in the industry, it remains a key control point and the reason why developers prefer Nvidia.
    • Consequently, with 3rd parties, the CUDA problem needs to be overcome and, given how far ahead it is, I think it unlikely that anyone will succeed in this generation.
    • However, RFM Research has long argued that the developer market will move from developing on silicon to developing on foundation models as models become increasingly commoditised.
    • The big foundation model providers are likely to ensure that their models can be trained optimally on Nvidia, their own silicon or anyone else’s, as greater competition means purchasing the silicon for their data centres will cost less.
    • This is how the CUDA control point may weaken, and I think it is not until then that we will see any real pressure on Nvidia’s business model.
  • Hence, I think that while Meta will have some success with its in-house silicon for its own use, when it comes to 3rd parties, it is going to be stuck with Nvidia for some time.
  • RFM Research has also concluded that it will take a while for developers to shift towards foundation models meaning that for a few years yet, Nvidia’s market share is unlikely to change much.
  • Consequently, Nvidia remains subject to the whims of demand which remains higher than it can deal with.
  • Hence, revenues are likely to be a factor of how much capacity it has booked at TSMC for the coming 12 months as opposed to how much customers want to buy.
  • The net result is that Nvidia’s short- to medium-term visibility remains pretty good, and so I do not expect any surprises in the next few earnings reports.
  • However, this also means that the scope for a further large run-up in the share price is limited meaning that the share price is likely to remain in line with revenue and profit growth.
  • Nvidia’s valuation is still relatively undemanding for the growth that it is likely to see in the next year or two, and so if I were forced to hold a direct AI investment, this would be it.
  • However, I still prefer the adjacencies of AI inference at the edge of the network and nuclear power to solve the energy shortage both of which remain pretty cheap and underinvested.
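The build-vs-buy trade-off above comes down to simple arithmetic on cost per unit of compute. The figures below are hypothetical placeholders chosen only to illustrate the shape of the argument, not actual chip prices or performance data:

```python
def cost_per_unit_compute(unit_price: float, relative_perf: float) -> float:
    """Effective hardware cost per unit of delivered compute."""
    return unit_price / relative_perf

# Hypothetical figures: a merchant part carrying a ~70% gross margin
# in its price but one generation (~2x performance) ahead, vs.
# in-house silicon sold near cost at last-generation performance.
merchant = cost_per_unit_compute(unit_price=30_000.0, relative_perf=2.0)
in_house = cost_per_unit_compute(unit_price=18_000.0, relative_perf=1.0)

# Despite the margin, the faster part is cheaper per unit of compute,
# which is why "buy" keeps winning while the performance gap holds.
print(merchant < in_house)  # True
```

The corollary is also visible in the arithmetic: the moment in-house silicon closes the performance gap, the margin in the merchant price stops being justified and the build option starts to win.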
]]>
Nvidia FQ4 25 – On the Nose https://www.radiofreemobile.com/nvidia-fq4-26-on-the-nose/ Thu, 27 Feb 2025 09:40:23 +0000 http://www.radiofreemobile.com/?p=10699 Not too little, not too much, just right.

  • Nvidia reported results that confirmed that the AI spending spree remains on track, but this growth is now captured in the estimates meaning that surprises will be hard to come by.
  • The valuation of Nvidia remains undemanding relative to the growth that it is still experiencing meaning that while the really big gains are over, this still represents growth at a reasonable price.
  • FQ4 25 revenue / EPS were $39.3bn / $0.89 up 12% QoQ and 78% YoY which was slightly ahead of estimates of $38.1bn / $0.85 but below the top of the estimates range.
  • Guidance for FQ1 26 was also comfortably within the range, with revenues of $42.1bn – $43.4bn and gross margins expected to be 70.6% – 71.0%.
  • This represents slowing but still very good growth, but the key take-home of these numbers is the failure to guide above the top of the estimated range.
  • For companies on very high multiples of earnings, this would represent a significant problem as one would expect to see a collapse in the share price of 25% – 30%.
  • However, Nvidia barely moved, ending the after-hours trading session just 1.5% below where it closed just before the report.
  • This is precisely why Nvidia remains the safest direct investment in the AI boom, and it is the company’s ability to feed the growth straight to the bottom line that has kept the PER ratio at reasonable levels.
  • This, in turn, means that the rating does not need to correct when the company reports earnings in line with expectations which is precisely what we see here.
  • At the same time, I think that the company’s current high visibility on its earnings due to ongoing strong demand means that fiscal 2026 is going to be one of good, but not surprising growth.
  • Nvidia took the opportunity to address the DeepSeek question, with greater efficiency being offset by higher demand, and it pointed out that the reasoning models which are currently all the rage consume far more compute power than their predecessors.
  • This makes complete sense because these models work by “reasoning” for longer or by computing several answers with a separate algorithm and then choosing the best answer.
  • This is what Nvidia means by a new level of model scaling and, while I would argue that parameter and data scaling are close to the limits of what they can deliver, inference-time scaling has further to go.
  • I suspect that this too, will hit a limit of what it can deliver but at the moment, many players will be looking at how they can improve performance through increasing inference.
  • Furthermore, RFM Research has concluded that DeepSeek may not be that much more efficient than OpenAI when it comes to inferencing meaning that fears of an immediate collapse in demand are probably overstated.
  • This is a tailwind for Nvidia and the data centre capex plans for FY2026 have not been cut meaning that I do not think that DeepSeek is about to clobber demand for Nvidia’s data centre chips.
  • Hence, expectations for FY2026 look to be about right which combined with the good visibility that I think the company has means that there will be few surprises this year.
  • The net result is that the valuation of the company should remain steady meaning that share price appreciation should be roughly in line with profit growth.
  • This means that the share price should do reasonably well this year, but the days of blow-outs and huge price appreciation are clearly over.
  • This is why I think that Nvidia remains a pretty safe direct AI play, but I continue to prefer the adjacencies of inference at the edge and nuclear power both of which look like they now have further to travel than Nvidia where the story is well understood and priced in.
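The "computing several answers and then choosing the best" approach described above is best-of-N sampling, and it shows directly why reasoning models consume more inference compute. The sketch below uses stand-in functions (a real system would use a model as the generator and a separate verifier as the scorer; every generator call is where the extra inference compute goes):

```python
import random

def generate_answer(prompt: str, rng: random.Random) -> float:
    # Stand-in for one model completion; in practice each call costs
    # a full inference pass, so compute scales linearly with n.
    return rng.random()

def score(answer: float) -> float:
    # Stand-in for a separate verifier that ranks candidate answers.
    return answer

def best_of_n(prompt: str, n: int, seed: int = 0) -> float:
    """Sample n candidate answers and keep the highest-scoring one."""
    rng = random.Random(seed)
    candidates = [generate_answer(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)

# Answer quality (here, the best of n draws) improves with n, but
# with diminishing returns -- while compute cost keeps rising linearly.
print(best_of_n("prompt", 16) >= best_of_n("prompt", 1))  # True
```

This is also where the limit I suspect comes from: the quality gain from each extra sample shrinks while the compute bill for it does not.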
]]>