Internet – Radio Free Mobile
https://www.radiofreemobile.com – To entertain as well as inform
Feed retrieved Thu, 24 Apr 2025 05:58:38 +0000

Google Glass 2 – Buying TED
https://www.radiofreemobile.com/google-glass-2-buying-ted/ – Tue, 22 Apr 2025 05:50:17 +0000
More about Gemini than Metaverse.

  • Google clearly paid the organisers of TED Talks to demo its developments for the Metaverse, but to be fair to Google, it showed good progress in improving the proposition of the Metaverse and head-mounted displays.
  • Google bought its way into a TED Talks session and used its time to demonstrate its AR and VR hardware and how they have been brought to life with Gemini.
  • This is where Google has been very clever, as it has used Gemini and processing in the cloud to make up for the shortcomings of the hardware, which remain a major barrier for everyone in this space.
  • Google’s strategy for the Metaverse has three elements, which are the Android XR software it is developing with Samsung, an in-house pair of smart glasses and a virtual reality device being developed by Samsung.
  • Stitching all of these elements together is Gemini, which, if it works as well as advertised (big if), goes a long way towards improving the user experience and making up for hardware shortcomings.
  • Google’s glasses (Glass 2?) are a regular pair of glasses like Meta’s Ray-Ban, except that they also have a very small and low-specification display.
  • This is to ensure that the glasses stay small and light while retaining an acceptable battery life.
  • This is why almost all of the processing is being done off-device in the smartphone or the cloud.
  • The glasses demonstration was actually quite good, with Gemini being able to take inputs and give outputs in many languages, as well as recall things that the camera had seen moments before.
  • This immediately raises the same privacy concerns that led Microsoft to delay the Recall feature, which does the same thing but on a PC.
  • Most of the demonstrations were utilising what Gemini could see and what it could do as opposed to the glasses themselves, as the glasses are nothing special when it comes to the hardware.
  • Gemini was also good at recognising and understanding text and diagrams in a book as well as playing music from an album viewed in the glasses, but these were obviously meticulously rehearsed beforehand.
  • Google also brought on the upcoming virtual reality device from Samsung, where it demonstrated more immersive use cases like visiting a destination in Google Earth, as well as Gemini helping with work-related tasks.
  • These demonstrations are the product of the Android XR collaboration that was launched with Qualcomm and Samsung last December, and I think that this offering is competitive with the other players, such as Meta and Apple, with which it is going to be fighting for the Metaverse.
  • Android XR is also the only one that has a hint of being open, although those who choose to use it are likely to be locked into the Google Ecosystem as they are when they make smartphones.
  • Even in this limited scope, this openness is important as everyone else currently operates in a self-contained and vertically integrated silo which is exactly the wrong thing to have if the Metaverse is going to take off.
  • If the Metaverse is to become the place where users live their digital lives rather than smartphones, every device needs to be able to access the services of the other players as they can on smartphones.
  • This is not the trajectory that is being taken today, and unless this changes, The Metaverse is unlikely to ever become a mainstream environment.
  • Google has made some good progress on this front, and as long as it is less religious about putting its ecosystem front and centre of Android XR Metaverse devices, it could attract good traction.
  • The problem here is that this is how Google monetises the investments that it makes in this area, and so there needs to be a delicate balance.
  • The net result is that this does not represent a big advance in making the Metaverse a reality, as the barriers to entry remain as high as ever, but Google is cleverly using Gemini to greatly improve the experience.
  • If it works well and Google sells a few glasses (as Meta has done), this could help kick-start acceptance of smart glasses, but there remains a huge hill to climb.
  • For example, if Metaverse device shipments were to double every year for the next 7 years, they would still be just 20% of the smartphone market, indicating just how economically irrelevant the sector remains.
  • However, if the smartphone is ever to shed its crown, then this is the leading candidate to take over, making it an insurance policy worth having for those currently making their living from smartphones.
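To put rough numbers on the doubling argument above, here is a quick sketch; the starting figures (~2m XR headsets and ~1.2bn smartphones shipped per year, with the smartphone market assumed flat) are my own illustrative assumptions, not figures from the post.

```python
# Sketch: if Metaverse device shipments double every year for 7 years,
# what share of the smartphone market do they reach?
# Assumed illustrative numbers (not from the post): ~2m XR units/year
# today, ~1.2bn smartphones/year, smartphone market flat.
xr_shipments = 2_000_000
smartphone_shipments = 1_200_000_000

for year in range(7):
    xr_shipments *= 2  # doubling every year

share = xr_shipments / smartphone_shipments
print(f"XR shipments after 7 doublings: {xr_shipments / 1e6:.0f}m")
print(f"Share of smartphone market: {share:.0%}")
```

With these assumptions, seven doublings (×128) still leaves the sector at roughly a fifth of smartphone volumes.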
Apple – AI Breakdown
https://www.radiofreemobile.com/apple-ai-breakdown/ – Tue, 11 Mar 2025 07:14:49 +0000
AI is not needed to sell iPhones yet.

  • Apple’s run of trouble with AI is continuing, as a good proportion of the much-hyped Apple Intelligence services are being delayed again in yet another sign that Apple is still struggling with AI, raising the possibility of an acquisition to plug the gap.
  • The good news is that AI is not yet needed to sell iPhones, giving Apple time to sort the mess out, but this does give Google an opportunity to take the fight to Apple.
  • Both the “more personalised version of Siri” (which I take to mean powered by an LLM) and certain Apple Intelligence services such as “on-screen awareness” are taking “longer than we thought” to come to market as testing has revealed that they don’t work very well.
  • This news comes at the same time that Bloomberg is reporting that Apple is in licensing discussions with Google that it reports could see Gemini power the iPhone.
  • I think that this is a very unlikely outcome as:
    • First, competition: Google’s ecosystem is Apple’s main rival in the smartphone business.
    • Google and Samsung are currently working hard at integrating Gemini functionality into Google’s ecosystem services on Samsung devices, meaning that if Apple were to follow suit, iOS could lose some of its differentiation.
    • If AI becomes a differentiator in the user’s decision to live his or her life with Apple or Google, then this loss of differentiation could be disastrous for Apple’s gross margins.
    • Instead, Apple needs to have a service that is distinct from Google’s to prevent user perception that the quality of the two user experiences is converging.
    • Second, control: something that Apple has gone to great lengths to preserve and extend.
    • Apple is one of the most vertically integrated technology companies in the world, and its focus on its own silicon for processing, graphics and now modems is a sign of how important this is.
    • Hence, if AI is going to be important, the last thing Apple is likely to do is cede the foundation of its AI service to a 3rd party, least of all its biggest competitor.
    • Consequently, I think that Apple will either build or acquire a complete system over which it will have complete control.
  • The net result is that I suspect that these discussions are about adding Gemini as another option alongside ChatGPT as an external source when the request falls outside of the scope for which Apple Intelligence has been trained.
  • This would be completely separate from Apple Intelligence and would have no impact on Apple’s competitive position.
  • Consequently, I think that Apple will continue to develop its in-house system for AI or acquire one of the other players when they inevitably get into financial trouble.
  • The good news is that Apple has time to sort this out as AI services on smartphones are not yet differentiating factors.
  • However, Google is moving to infuse its digital ecosystem with Gemini and if users start to appreciate these services, that will start to put pressure on Apple to deliver.
  • This requires that Google ensures that the best that it has goes onto Samsung devices and those from its Chinese partners rather than favouring its own in-house brand, Pixel.
  • This is because Pixel is a rounding error in the smartphone market and so any new and differentiating features that are exclusive to Pixel are likely to go unnoticed by the average smartphone buyer.
  • Google appears to have realised this and has been working more closely with Samsung, but to see real impact it will also need to be quick to offer its latest and greatest on Chinese smartphones.
  • Hence, I think Apple has some time to get this right, but it is clear that it is struggling relative to its competitors and the other AI services that are available in the cloud.
  • It looks to me that the main problem is orchestration, which is the software link that allows the agent to access the apps, services, websites and APIs that it needs to be able to fulfil user requests.
  • Hence, I think that this is a solvable problem, but in the meantime, Siri remains as awful as ever and is only ever activated when I accidentally long-press the side button.
  • Time, however, is ticking.
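For readers wondering what orchestration looks like in practice, here is a toy sketch: a dispatcher that routes a parsed user intent to the app or API handler that can fulfil it, falling back to an external model when the request is out of scope. All handler and intent names here are hypothetical; this illustrates the concept, not Apple’s (or anyone else’s) actual implementation.

```python
# Toy sketch of agent orchestration: routing a parsed intent to the
# app/service that can fulfil it. All names are hypothetical.
from typing import Callable, Dict

def send_message(args: dict) -> str:
    return f"Message sent to {args['to']}: {args['text']}"

def set_timer(args: dict) -> str:
    return f"Timer set for {args['minutes']} minutes"

# The orchestration layer: a registry mapping intents to handlers.
HANDLERS: Dict[str, Callable[[dict], str]] = {
    "send_message": send_message,
    "set_timer": set_timer,
}

def orchestrate(intent: str, args: dict) -> str:
    handler = HANDLERS.get(intent)
    if handler is None:
        # Fall back to an external model (e.g. ChatGPT or Gemini) when
        # the request is outside the assistant's trained scope.
        return "Out of scope: deferring to external model"
    return handler(args)

print(orchestrate("set_timer", {"minutes": 10}))
print(orchestrate("book_flight", {"to": "NYC"}))
```

The hard part in reality is not the dispatch table but reliably mapping messy natural language onto the right intent and arguments for thousands of apps and APIs, which is where Apple appears to be struggling.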
Manus AI – Music Maker
https://www.radiofreemobile.com/manus-ai-music-maker/ – Mon, 10 Mar 2025 07:42:41 +0000
Monica demonstrates an advance in orchestration, not AGI.

  • Manus AI is the latest AI story out of China, but once the hype has died down, I suspect the real story will be how Monica has made an advance in teaching agents to execute tasks rather than making them more intelligent.
  • Manus AI is the latest product from a Chinese AI company called Monica, based in Shenzhen, which until recently has focused on providing a range of generative AI services using foundation models from OpenAI, Anthropic, DeepSeek, Google, Mistral and so on.
  • However, on 6th March it previewed Manus AI, which is similar to the Deep Research products that everyone is launching, but with the supposed ability to take independent action to complete tasks more thoroughly.
  • Once again, if this is an accurate description of Manus’ capabilities, then it represents a big step forward going from assisting humans to complete tasks to completing the task without human involvement.
  • Manus appears to have access to its own cloud-based computer running Linux with access to the command line, meaning that, as a superuser, it can execute tasks as a human would rather than merely make a series of recommendations.
  • This looks to me to be a big part of the innovation in taking the AI agent from recommending to doing.
  • In the launch video, the model is seen screening resumes, conducting real estate research and stock analysis.
  • The best example of the augmented functionality is real estate research, where Manus screens areas of New York on certain criteria such as schools and crime and then goes on to write a program to calculate affordability, ending up with a list of viable options.
  • Other examples on the website (see here) include creating a report on the film industry and analysing US naval tactics during World War II.
  • Manus has also been tested on the GAIA Benchmark which is designed to test an AI’s capability to do real-world tasks.
  • Here, it comfortably beats OpenAI’s DeepResearch on all levels of difficulty.
  • It is important to note that, unlike DeepSeek R1 and Alibaba’s QwQ-32B, Manus is not available as open source, meaning that none of these claims can be easily verified.
  • The architecture of Manus is also not very clear, but it appears that, like DeepSeek, it is using a mixture of experts structure but whether it has been able to make any resource savings like DeepSeek is claiming is unclear at this stage.
  • Yichao “Peak” Ji, Co-Founder and Chief Scientist, states that Manus would not have been possible without the open-source community, meaning that Llama, Mistral and others have probably been heavily used and distilled in the creation of Manus.
  • Peak has also committed to giving back to the open-source community, but it will be a little more complex than just uploading the model.
  • As a mixture of experts model, Manus uses several distinct models that Monica has fine-tuned to enable the service that Manus provides.
  • It is some of these sub-models that have been “fine-tuned for Manus” that Monica will be contributing to open source as opposed to the whole Manus system itself.
  • The idea here is to drive the adoption of Manus as an AI assistant rather than to allow anyone to download the whole system, tinker with it and recreate an instance of Manus on a separate system.
  • This could be for both business and geopolitical reasons as, like DeepSeek, I suspect that the real IP in Manus is in how it was created and trained, meaning that uploading some of the sub-models gives very little away.
  • Monica will not be making anything available to open source just yet, which is probably a sign that the company is still in the process of getting a license to export the sub-models.
  • This also means that 100% of Manus is running in China and everyone who uses Manus and submits data to it for analysis or execution of a task is sending data to China.
  • China’s national security laws mean that the Chinese state has access to all of this data whenever it wants, which I suspect is going to lead to rapid bans on Manus by companies whose employees are already using AI to aid their daily tasks.
  • Until a few days ago, Monica was a small Chinese company that hardly anyone had heard of, meaning that, like DeepSeek, it has been operating on a tiny fraction of the resources that OpenAI, Google, Anthropic and so on have access to.
  • While the internet is going wild for this latest AI release from China, problems are cropping up.
  • There are reports of crashes and failure to complete tasks, but this could be due to the server being under massive strain.
  • However, it has also failed to book flights, reserve tables at restaurants and was not as good at writing code as some of its Western competitors.
  • This tells me that Monica has created and trained Manus for a specific series of task categories but the minute that one tries to take it out of its comfort zone, it gets into trouble.
  • This means that it is not a step towards super-intelligent machines but is an excellent implementation of software that makes the AI we already have more useful.
  • RFM Research has concluded that one of the difficulties in making AI agents is all of the messy software plumbing that will allow the agents to access the apps, APIs, websites and so on to carry out tasks on the user’s behalf.
  • With the use of a Linux superuser command prompt, Monica appears to have made an advance in working out a good way to deal with this difficult task, but this is not a step towards superintelligent machines.
  • It is, however, another PR coup for China that is doing a very good job of proving that it is a contender in AI and it is here I think the struggle will be fought for the next couple of years.
  • The USA will find it harder to contain China in AI than it has in semiconductors, although the edge in semiconductors is going to be an issue when China wants to ramp to real scale.
  • This is another impressive development from China but with no real progress towards super-intelligent machines, the stage is still set for some form of correction in terms of expectations and valuations across the whole AI industry.
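The “superuser with a command line” pattern described above can be illustrated with a minimal agent step: execute a model-proposed shell command and capture the result as the observation for the next reasoning step. This is a generic sketch of the pattern, not Manus’s actual architecture, and any real system would run such commands inside a tightly locked-down sandbox.

```python
# Minimal sketch of the "agent executes rather than recommends" step:
# run a model-proposed shell command and capture the output that the
# model reasons over next. Generic pattern, not Manus's implementation;
# real systems would sandbox this heavily.
import subprocess

def run_agent_command(command: str, timeout: int = 30) -> dict:
    """Execute a shell command and return the observation for the agent."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return {
        "command": command,
        "exit_code": result.returncode,
        "stdout": result.stdout.strip(),
        "stderr": result.stderr.strip(),
    }

obs = run_agent_command("echo hello")
print(obs["stdout"])  # hello
```

Giving the model a real shell like this is what moves it from making recommendations to completing tasks, and it is also exactly why the security and data-sovereignty concerns raised above matter so much.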
Alibaba QwQ – 32bn Questions
https://www.radiofreemobile.com/alibaba-qwq-32bn-questions/ – Fri, 07 Mar 2025 07:46:13 +0000
More questions than answers.

  • Alibaba has produced a model that performs pretty well, but it is so small that China is once again challenging the Western “bigger is better” philosophy.
  • However, there are so few details about how this was created that it is impossible to know whether this represents another step forward for China or if it is merely a public relations exercise.
  • The new model is called QwQ-32B (see here), which has 32bn parameters, and it performs extremely well against DeepSeek R1 (671bn) and o1-mini (an estimated 100bn) on a number of the usual tests.
  • These tests have been carried out by Alibaba and so would have been chosen and run to make QwQ-32B look as good as possible, but at face value, it looks like an impressive achievement.
  • This flies directly in the face of the Western approach to generative AI which is that the bigger the models are made and the more data that is pumped through them and the more compute they expend in inference, the better they perform.
  • I have long thought that this approach, which has worked well for a few years, has reached the end of its usefulness and that a new approach is needed.
  • Furthermore, I have also been of the opinion that when it comes to innovations around AI efficiency, the Chinese would get there first (see here).
  • This is not because Chinese engineers are more brilliant than Western ones, but merely because export restrictions and capital limitations have forced them to do more with less.
  • By contrast, the Western players are flush with cash, have access to ever more powerful GPUs and have been able to focus solely on pushing the boundaries of what AI can deliver.
  • QwQ-32B looks important because if this kind of performance can be delivered with a 32bn parameter model, then pretty soon we will see it running on smartphones and laptops.
  • However, there are a lot of questions which Alibaba has not answered.
    • First: inference, where we have no idea how long QwQ-32B runs inference before it delivers its answers or how this compares to everyone else.
    • Increasing compute time for inference is a key strategy to improve models that are solving reasoning tasks.
    • Hence, QwQ-32B may be able to perform particularly well by scaling up its reasoning time, which means that it is not really as efficient as Alibaba would have us believe.
    • QwQ-32B is available as open source, and so 3rd-party independent testing should be able to answer this question in time.
    • Second: training data, where we have no idea how much data was pumped through QwQ-32B to get it to perform at its current level.
    • Experiments by DeepMind (Chinchilla) some time ago demonstrated that by training models with more data, smaller models could be made to perform better than models 4 times their size.
    • If Alibaba has used this technique, then what it gains in model size it has lost in terms of training data meaning that it was not cheaper to train than its larger counterparts.
    • However, it will be cheaper to run inference given its smaller size and this is where a real saving could be found.
    • Third: pruning, which I have often referred to as the nuclear fusion of AI.
    • This is a technique where one can remove 90% of a model and see no degradation in its performance.
    • The problem with this is that it is so time-consuming to work out which parts make up the 90% of the model to remove that it is not worth the effort.
    • However, once again if one is purely focused on the efficiency of inference, this is a technique worth considering.
  • Alibaba has not said whether it has used any of these techniques and has made no claim about how much the model cost to train or how much inference it consumes, and so this development needs to be taken with a dose of caution.
  • The model is available on Hugging Face meaning that anyone can download and test it but the same issue as there is with DeepSeek also applies here.
  • This is because the National Security Law of China requires that AI technology of this nature be granted a licence from the CCP before it is allowed to be exported.
  • Alibaba will have had to obtain a licence from the CCP to export QwQ-32B, immediately raising the question of why the CCP would allow important Chinese IP to fall into the hands of its rivals.
  • If independent testing verifies that QwQ-32B performs as well as its much larger rivals, this will be another feather in the cap of China’s reputation as an AI powerhouse which RFM Research and Alavan Independent suspect may have been the main reason to grant Alibaba an export license.
  • The net result is that if QwQ-32B performs as well as promised, it will accelerate the trend of deploying models on edge devices rather than in the cloud.
  • It is far cheaper for a service provider like OpenAI to deploy its models on edge devices because then it does not have to pay the cost of running inference.
  • This is by far the biggest draw for inference at the edge and QwQ-32B promises to take performance on edge devices to another level.
  • The main beneficiaries here are China (whose reputation is boosted once again), Alibaba (which is proving its AI chops versus DeepSeek), and Qualcomm, MediaTek, Arm and Broadcom, who are all selling chips that can run inference on edge devices.
  • I continue to like this theme as a way to invest in the current AI craze and Qualcomm is the stock that I own.
  • I also own Alibaba which is becoming less and less of a drag on my portfolio and remain happy to stick with it and sit out the recovery now that it seems to be finally here.
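For the curious, the magnitude-pruning technique described in the third question above can be sketched in a few lines: zero out the smallest-magnitude weights and keep only the largest. Whether a real model survives 90% pruning without degradation depends heavily on the model and on fine-tuning afterwards; this only shows the mechanics.

```python
# Sketch of magnitude pruning: zero out the smallest-magnitude weights,
# keeping only the fraction that matters most. Whether a model survives
# 90% pruning without degradation depends on the model and on
# fine-tuning afterwards; this shows only the mechanics.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    threshold = np.partition(flat, k)[k]  # k-th smallest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))
pruned = magnitude_prune(w, sparsity=0.9)
kept = np.count_nonzero(pruned) / pruned.size
print(f"fraction of weights kept: {kept:.2f}")
```

The expensive part in practice, as noted above, is not zeroing weights but working out which 90% can safely go, which is why the technique has so far not been worth the effort for frontier models.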
Tencent – A Must Win
https://www.radiofreemobile.com/tencent-a-must-win/ – Wed, 05 Mar 2025 07:26:49 +0000
Tencent has to become the agent of choice on Chinese smartphones.

  • Tencent has passed a crucial milestone in ensuring the durability of the ecosystem it built in China by becoming the most downloaded AI chatbot on Chinese iPhones.
  • This is a must-win because if someone else supplies the agent by which users access their smartphones, then Tencent is almost certain to lose the ecosystem it has built and return to being just a games publisher and developer.
  • One of the big themes in the smartphone industry at the moment is the idea that, with high-quality chatbots available, usage will migrate from interacting with many apps to simply talking or typing to an agent.
  • It would then be the agent’s job to fulfil the request using apps and services on behalf of the user.
  • I am yet to be sold on this theme because, in smartphones, the current user experience using touch is already very good and has been universally adopted across the world.
  • Migrating to a voice or text-based agent will not necessarily deliver the big uplift in utility that will be needed to drive adoption by users given how good the experience already is.
  • In the automobile, the situation is completely different as the fact that humans still have to drive makes touching screens a suboptimal and potentially hazardous user experience.
  • Hence, using voice (as long as it is really good), represents a big uplift in the user experience that will encourage rapid adoption once users have gotten past their current scepticism.
  • However, assuming that AI agents are the next big thing on smartphones, then the current digital ecosystems face a moment of dislocation.
  • These moments are dangerous as they mean that the rules of the game change, allowing newcomers to dislodge the incumbents.
  • The last time this happened in the mobile phone industry was in 2007 which resulted in Apple, Google and Tencent completely taking over the industry and forcing the legacy incumbents out over the subsequent 7 years.
  • In 2025 it is Apple, Google and Tencent that control the smartphone industry and a move from a touch-based interface to an agent-based interface represents a change of this magnitude.
  • This is because it is the agent that becomes the control point in the smartphone and, as such Apple, Google and Tencent must win the agent battle within their ecosystems to stay relevant.
  • While I have argued many times that the barriers to entry for generative AI are becoming lower and lower with every day that passes, these three still have a big moat that a newcomer will have to overcome.
  • This moat is their user base which for each of these players is already well over 1bn users which gives them a golden opportunity to distribute their agents to their users and set them as default in the devices and services that they already deliver.
  • This is the power of default which has been demonstrated time and again to be a very potent tool giving the incumbent digital ecosystem a very good chance of maintaining their dominance if this transition comes to pass.
  • The data from China is the first indication of this power being put to good use by Tencent and looking at the agentic AI strategy of Apple and Google, one can see the same thing.
  • Here, Apple will only allow Apple Intelligence to control its devices while Google will embed Gemini in all of its digital ecosystem services that have billions of users and turn it on by default.
  • Hence, I don’t think that agentic AI is going to result in a change of guard in the smartphone industry in China or anywhere else, but this is a dislocation in the market meaning that the best chance of disruption is now.
  • The next potential (and much larger dislocation point) is the Metaverse, but I think that we are many years away from mass adoption and there is a high chance that it never happens at all.
  • This would be a much harder fight, as companies like Meta, which have deep pockets and a determination not to be under the thumb of the incumbents, will be fighting hard in that space.
  • The net result is that this is an important win for Tencent, and its agent must remain the agent of choice if the status quo is to be preserved.
  • The initial signs are positive for the incumbent digital ecosystems should agentic AI become popular on smartphones, but this is something to keep an eye on as the risk to their supremacy is as high as it has ever been.
Alexa+ & GPT-4.5 – Mad Scramble
https://www.radiofreemobile.com/alexa-gpt4-5-mad-scramble/ – Fri, 28 Feb 2025 05:16:20 +0000
Competition heats up even more.

  • Amazon has finally announced the new version of Alexa, but at the same time, OpenAI has released its new model GPT-4.5 to compete with a rapidly growing field of competitors, which signals that the top of the market is close, if not already here.
  • Amazon has launched Alexa Plus, which promises to offer a “conversational experience” with the new chatbot, but with almost all of the 600m deployed devices incapable of running even a small language model locally, there are going to be user experience issues.
  • Amazon showed a series of use cases (just like OpenAI and Meta did) and the AI was very quick to respond giving the impression of speaking to a person.
  • However, in all of these instances, the device that the demonstrator spoke to was hard-wired into the cloud to ensure that latency was as low as possible.
  • The 600m devices that Amazon has deployed will listen to the request and send it via WiFi and a fixed connection to a cloud data centre miles away, where the request will be processed and the answer returned to the device to deliver to the user.
  • This means that the latency will be several seconds at best from request to answer, which will kill the user experience relative to what Amazon demonstrated.
  • This is why RFM Research has concluded that if agents are going to deliver a good user experience, a good portion of their intelligence needs to be implemented on the device itself.
  • This configuration provides the best economics for the service provider and the best experience for the user and could be where the majority of inference ends up happening in my opinion.
  • This is why I suspect that the next batch of Alexa devices that Amazon launches will have voice processing and at least some intelligence in the device itself.
  • This is something that Google has done on smartphones for some time and its experience is all the better for it, especially when it comes to translation.
  • This is one reason why I like the inference at the edge adjacency (see below).
  • Meanwhile, keen to be seen to be maintaining its lead, OpenAI launched its latest model, GPT-4.5, which is bigger than ever (i.e. >2tn parameters) and has been designed to be more personable and easier to use rather than being a math or coding whizz.
  • OpenAI used the fact that GPT-4.5 is not a reasoning model to explain why it does not beat older models in benchmarks as it does marginally better in some and worse in others.
  • The real reason I suspect is that scaling in terms of making models bigger with more data to get better performance has hit a wall as pumping in more resources demonstrably produces smaller and smaller improvements.
  • GPT-4.5 does not use the latest inference tricks such as “thinking” for longer or generating multiple answers but instead is targeted at a general chatbot use case with a better user experience.
  • The answers that it gives are not meaningfully better than GPT-4o, but they are easier to read and understand.
  • From the layman’s perspective, this will make some difference and is precisely what the new version of Amazon Alexa is targeting.
  • From the consumer perspective, this is where the AI ecosystem war will be fought and with 600m devices already deployed globally, Amazon is not starting from scratch.
  • This demonstrates that when it comes to generative AI, there is no moat other than the existing digital ecosystems which means that the real asset that OpenAI has is the ChatGPT user base and its global name recognition.
  • That does not always translate into global dominance of a mature industry, and given that, alongside Amazon’s 600m, Google has 2bn+ users, Apple 1.4bn+, Tencent 1bn+, Meta 3bn+ and so on, OpenAI has a mountain to climb.
  • However, in the interim, I suspect that the real action will be in the enterprise where OpenAI’s early lead is much more of an advantage.
  • However, there is still plenty of competition here also and OpenAI is competing against others whose models are just as good and, in some cases, (Meta, Mistral & DeepSeek) make them available for free.
  • OpenAI has also yet to do anything about DeepSeek’s claims, as the new model is bigger and has consumed more resources than ever before, both in terms of training and inference.
  • OpenAI admits as much both in its commentary during the launch and in Sam Altman’s tweet on X, which is why the model is only going to Pro users ahead of getting more GPUs online to allow a launch to Plus users next week.
  • Hence, there is little sign of OpenAI becoming more efficient, especially with the possibility of SoftBank now bankrolling it to the tune of tens of billions meaning that it might get caught with its pants down when the correction comes.
  • The net result is that these launches continue to show competition in this space heating up which means prices coming down or more stuff being made available for free (Microsoft most recently (see here)).
  • This will eventually trigger a correction because the returns on investment will be much lower than promised given the price erosion.
  • The only real winners here are the infrastructure vendors like Nvidia, Astera Labs, Supermicro and so on who will continue to benefit as competition heats up and generative AI vendors continue to invest to stay ahead.
  • They are also where almost all of the money is going right now meaning that their valuations are much more reasonable than the likes of OpenAI, Mistral, Safe Superintelligence and so on.
  • Other adjacencies such as inference at the edge and nuclear power will also continue to benefit, but given that these have one degree of separation from the generative AI craze, they will see much slower but far more sustainable value appreciation over time.
  • I very much prefer this to the rollercoaster that we see elsewhere and so I remain very happy to sit tight in those adjacencies.
]]>
Artificial Intelligence – The Asymptote pt. III https://www.radiofreemobile.com/artificial-intelligence-the-asymptote-pt-iii/ Mon, 24 Feb 2025 08:31:10 +0000 http://www.radiofreemobile.com/?p=10692 Perplexity starts a race to the bottom with deep research.

  • In the space of a month, pretty much everyone now has, or will soon have, a deep research tool, confirming that performance gains are flatlining and that, beyond a fat wallet, there are no real barriers to entry in the generative AI game.
  • What will follow now will be a rapid race to the bottom as Perplexity has made limited access to the latest hot product free.
  • Perplexity.ai is the latest to launch a deep research tool (see here), which takes a request, researches it and puts together a report detailing the information found and any conclusions that can be drawn to address the original question.
  • OpenAI launched its product with much fanfare on February 3rd 2025, but the fact that it is only available on the $200 per month tier means that take-up will be slow until it arrives at lower-priced tiers.
  • The attention this launch received highlights OpenAI’s media power advantage, as this was not an amazing new product but a move to bring its offering into line with the competition.
  • One of the others is Google, which launched Gemini Advanced Deep Research at the end of 2024; it is available as part of the $30 per month Google One package.
  • With Perplexity and Grok 3 joining as of last week, the field is already getting crowded, and with everyone focused on “reasoning models”, it is only a matter of time before all of the others launch direct competitors.
  • The sheer speed of model launches and the very short time it takes everyone to catch up with a similarly performant model or service are a clear indication that model performance improvements are flatlining.
  • This in turn means that the pricing of these services will quickly become a race to the bottom, with new services constantly being added to paid tiers or paid services being made available for free.
  • Perplexity is offering a limited number of searches with access restricted to the basic model, but the glass ceiling has been broken.
  • This is a classic sign of an asymptote where performance improvements become increasingly incremental despite continued high growth in investments that are needed to achieve these improvements.
  • At some point, investors are going to start asking where the returns are on the investments that they have made and with a race to the bottom in terms of pricing, the returns are likely to be lower than expected.
  • This will be exacerbated to some degree by DeepSeek’s methodologies, which may make it much cheaper to create these sorts of services, meaning that even more players can enter the market.
  • Without the creation of a super-intelligent machine that can replace 90% of human economic tasks, the valuations of all of these companies look much too high to me.
  • RFM Research has long concluded that there is no super-intelligent machine on the horizon, leaving a correction as the only outcome when reality finally reasserts itself.
  • This correction is unlikely to be anything like as bad as the Internet Bubble as even without a super-intelligent machine, generative AI has many use cases where there is plenty of money to be made.
  • When this happens, the likes of OpenAI, Anthropic, Mistral and so on are likely to run out of money and be acquired by much bigger players with deep pockets.
  • Hence, this is not a sector I would want a direct position in, but if I had to choose something, it would be Nvidia, given that it is making real money and real profits now and that its valuation is far from outrageous.
  • However, in the adjacencies of nuclear power and inference at the edge there are still opportunities to be had meaning that I am very happy to stay put with this theme.
]]>
Alibaba FQ3 25 – Mo Comes To China https://www.radiofreemobile.com/alibaba-fq3-25-mo-comes-to-china/ Fri, 21 Feb 2025 05:12:36 +0000 http://www.radiofreemobile.com/?p=10690 AI momentum finally makes it to China.

  • Alibaba reported good results in which a jump in cloud revenues allowed the company to justify large investments in AI, finally bringing the AI momentum story to China after the market had ignored it for two years.
  • FQ3 25 revenues / EPS were RMB280.1bn / RMB2.55 ahead of estimates of RMB275.1bn / RMB2.43.
  • The main driver of the surprise was the Cloud Intelligence Group (AliCloud), China’s equivalent of AWS, where growth has accelerated to 13% YoY.
  • This is still a very far cry from overseas, where the cloud companies are growing at 25%+ YoY despite being bigger than AliCloud, indicating that there is a lot of catching up that AliCloud can do if this continues.
  • Within this, AI-related product revenues are growing by over 100% YoY, and their increasing relevance is starting to affect the headline numbers.
  • The core e-commerce business continues to chug along despite the weak macro environment, aided by its digital marketing tool (Quanzhantui) that helps merchants connect their products to potential buyers.
  • Merchants found this tool particularly helpful during the Singles Day promotion event, and its success also helps Alibaba earn a little bit more from every sale made on its platform.
  • Despite the steady developments, the real story of these results is the momentum that Alibaba has picked up from its partnership with Apple and the sudden interest in Chinese AI triggered by DeepSeek.
  • Combine this with the fact that the Chinese state has realised that it needs the private sector (see here and here) to have a chance of fulfilling its ambitions, and suddenly Chinese technology is back in vogue.
  • CEO Eddie Wu was quick to grab the opportunity, stating that Alibaba will invest more in AI infrastructure over the next three years than it has in the previous decade and that the pursuit of AGI is Alibaba’s strategy.
  • This is exactly what the market wanted to hear, and the shares are currently up nearly 13% in Hong Kong in Friday’s trading.
  • The net result is that the steady performance of its e-commerce business and the cash that it has on the balance sheet gives Alibaba the resources to invest in AI.
  • Its partnership with Apple and the competitive performance of its Qwen AI models against global peers and benchmarks are evidence that its efforts to develop an AI capability are paying off.
  • Most important of all is that Jack Ma is off the naughty step and has been allowed back into the ranks of China’s entrepreneur class, which will also no longer be victimised by the state.
  • Instead, it looks like the state will level the playing field and allow the private sector to flourish which should bring back VC investment and the start-up ecosystem in time.
  • Against this backdrop, there is still plenty of space for the Chinese technology sector to rally as it remains far cheaper than all of its global peers.
  • For example, if Alibaba’s multiple expands to match where Amazon is currently trading (35x 2025 PER), then the shares could return to their previous highs.
  • If there is also a recovery in earnings growth as domestic consumption picks up, thanks to the state letting the private sector develop once again, then the shares could go higher still.
  • The risks of investing in China remain much higher than elsewhere, but for the moment momentum and narrative are back in the driving seat meaning that there is every sign that the rally will continue.
  • My long-held, deeply frustrating position in Alibaba has finally stopped being a drag on my portfolio and I am sitting tight as I think that there is quite a bit further to go.
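The re-rating arithmetic in the bullets above can be sketched in a few lines. This is an illustrative calculation only, not a forecast: it assumes the roughly 12x forward PER at which leading Chinese technology names are said to trade, the 35x Amazon multiple cited as the comparison, and flat earnings unless stated; the function name is my own.

```python
def implied_upside(current_per: float, target_per: float,
                   earnings_growth: float = 1.0) -> float:
    """Share-price multiple implied by a PER re-rating from current_per
    to target_per, with earnings scaled by earnings_growth (1.0 = flat)."""
    return (target_per / current_per) * earnings_growth

# A re-rating from ~12x (where leading Chinese tech trades) to Amazon's
# 35x implies roughly 2.9x upside on flat earnings alone.
print(round(implied_upside(12.0, 35.0), 2))

# Any recovery in earnings growth compounds on top of the re-rating,
# e.g. 20% earnings growth on the same re-rating.
print(round(implied_upside(12.0, 35.0, earnings_growth=1.2), 2))
```

The point of separating the two inputs is that multiple expansion and earnings recovery are multiplicative, which is why the bull case compounds so quickly once both move in the same direction.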
]]>
Grok 3 – Spoilt for Choice https://www.radiofreemobile.com/grok-3-spoilt-for-choice/ Wed, 19 Feb 2025 06:39:15 +0000 http://www.radiofreemobile.com/?p=10684 The count of equally good models continues to rise

  • xAI has released the latest version of its Grok model and, while it scores very well on all of the most advanced benchmarks, it struggles like everyone else with the basic stuff, implying once again that LLMs alone are not going to deliver the superintelligence that the industry craves and that valuations are too high.
  • One large data centre and 200,000 GPUs later, Grok 3 was born, having consumed 15x the compute of Grok 2 and likely to continue consuming huge resources as it begins to answer questions.
  • On the stuff that is used to market these products, Grok 3 performs better than Grok 2 and better than its peers on the usual maths and coding tests.
  • When it comes to “reasoning” it also does pretty well, beating out DeepSeek R1 and o3 on the usual AIME24 and GPQA benchmarks, but only marginally so.
  • Consequently, Grok 3 delivers an incremental improvement over what has gone before but when one considers the 15x increase in compute that was required to deliver it, it looks to me like we are firmly in the arena of diminishing returns.
  • This is in no way limited to Grok; it is why there is no GPT-5 (see here) and is also a factor in how everyone has been able to match OpenAI’s performance in a relatively short time.
  • The net result is that we have a large range of models from different parts of the world that are roughly equivalent in terms of their capability and from what their makers choose to demonstrate, these capabilities look amazing.
  • However, just like all of its predecessors and its peers, Grok 3 completely falls over when it comes to very simple things.
  • Grok 3 cannot draw a picture of a person writing with their left hand, nor a picture of a common English word on a piece of paper with the vowels circled.
  • Once again, to be fair to Grok, all models fail these simple tests unless they have been specifically programmed for them, which supports RFM’s long-held view that these models have no understanding of causality.
  • They also continue to fail the simplest of reasoning tests supporting RFM’s view that these models are incapable of true reasoning.
  • This is important because if a statistical system can be taught to truly reason, rather than produce this incredibly sophisticated simulation of reasoning, then we will truly be on the path to super-intelligent machines.
  • This doesn’t mean doing PhD maths with a high score; it means being able to reason that if A=B then it follows that B=A, without having to be given the data for both directions.
  • It means being able to draw pictures of people writing with their left hand, to drive vehicles as well as humans do in all scenarios, and to tell and reproduce the time on an analogue clock.
  • Of this there is no sign, despite hundreds of billions of dollars being spent on both compute and engineering salaries, leading me to think that AGI is not going to be achieved with LLMs alone.
  • The problem is that there is so much hype and excitement around LLMs that other avenues of scientific enquiry are being ignored and underfunded.
  • Hence, I think that expectations of super-intelligent machines powered by LLMs are not going to materialise and so there is a correction coming.
  • Despite the limitations, LLMs have very large and likely very lucrative use cases, and so the correction will be nothing like as severe as the internet bubble of 1999 – 2000.
  • However, many of the AI start-ups will not survive and will most likely be absorbed into the bowels of large companies, setting up another battle for the ecosystem fought both among the large players and between the USA and China.
  • Nvidia is the one that will suffer the least, as it is the only one making real money from generative AI right now, and even with a correction in demand for its processors, it will suffer much less than those offering generative AI services for $20 per month.
  • Hence, if I were forced into a direct investment in the generative AI sector, I would choose Nvidia, but I prefer to look more laterally at inference at the edge or the nuclear power needed to run all of the new data centres that are springing up.
  • I have positions in both of these themes.
]]>
China Technology – The Turn? https://www.radiofreemobile.com/china-technology-the-turn-2/ Mon, 17 Feb 2025 06:17:20 +0000 http://www.radiofreemobile.com/?p=10680 Jack Ma may again be the turning point.

  • Jack Ma’s potential meeting with President Xi is a sign that, after 4½ years, the Chinese state may have had enough of stagnation and could be prepared to give the private sector a much freer hand.
  • This is precisely what RFM and Alavan Independent have thought is required to get technology innovation and investment going once again in China and the stock market is already reacting.
  • A range of entrepreneurs, including Jack Ma and DeepSeek’s Liang Wenfeng, have been invited to meet some of the most senior members of the CCP, but the real question is whether President Xi will also make an appearance.
  • In November 2020, Mr Ma publicly criticised the Chinese banking sector and, because the sector is state-owned, the state itself.
  • This was viewed by the CCP as the private sector challenging the CCP’s authority, and so a brutal crackdown on the technology sector quickly ensued.
  • Entrepreneurs were called in for questioning, Mr Ma disappeared from public view for several years, and everyone was sternly reminded who is really in charge in China.
  • This, combined with the Covid pandemic, has left the Chinese economy in a poor state with very weak domestic consumption, which the state is trying to offset by boosting exports.
  • The gloom has had a dire effect on innovation, as no one wants to take risks or invest in start-ups, which is where the real innovation in any technology ecosystem happens.
  • Instead, venture capital companies have become debt collection agencies focused on getting money back rather than handing it out.
  • This, combined with the heavy hand of regulators who can do anything at any time with no warning, has hammered innovation, the very thing that China needs to fulfil its goals of becoming a technological and geopolitical leader.
  • It has also hammered the Chinese stock market, such that China’s leading technology companies can be found trading at a 12-month-forward PER of 12x or less.
  • The risks of investing in China are greater than they were 10 years ago but the position of the Chinese state may be shifting.
  • If President Xi is willing to meet Jack Ma (who triggered the crackdown), it would be a concrete sign that his hard line on the private sector is softening.
  • This in turn could lead to a lightening of the regulatory burden and bring back confidence that the private sector is operating with the approval of the state rather than being merely tolerated.
  • Giving the private sector more air to breathe should return activity to the moribund venture capital ecosystem and bring innovation back to life.
  • In the innovation ecosystem, confidence is almost as important as the money itself, which is why an in-person appearance by President Xi alongside Jack Ma matters so much.
  • DeepSeek was the catalyst which proved that, despite the restrictions placed on it by the US, China can create AI that performs in line with the global leaders.
  • However, a shift in attitude by the state is needed if the “DeepSeek” effect is to spread through the rest of the technology sector and bring it back to life.
  • If that shift fails to materialise, then this will be yet another false start that peters out with holders of beaten-up Chinese technology shares taking the opportunity to sell out of their positions.
  • This rally feels very different to the blips we have seen in the last few years, but it will take President Xi’s endorsement to make it real.
  • It is still very early days, and most Chinese technology companies remain extremely cheap, meaning that there is still a lot of upside as long as President Xi is on board.
]]>