Search Results for “monkeys” – Radio Free Mobile
https://www.radiofreemobile.com
To entertain as well as inform

Artificial Intelligence – Scaling Debate (11 November 2024)
https://www.radiofreemobile.com/artificial-intelligence-scaling-debate/
Scaling laws appear to be dying.

  • There are more signs that the “scaling laws” that have underpinned the AI explosion (and all of the hype attached to super-intelligent machines) are coming to an end, meaning that the real potential of LLMs is now visible and falls well short of the craziest forecasts.
  • It is important to note that these new indications are anecdotal and as such do not represent any form of empirical proof, but they add to what is already being seen with existing models and how they are performing in reality.
    • First, an article from The Information (see here) claims that OpenAI’s new models are not improving as quickly as expected and so the company is looking at new strategies to keep improving their performance.
    • Since its inception, OpenAI’s belief has been that with enough data and enough compute, artificial superintelligence would magically pop out at the end.
    • I have often referred to this as the “Infinite Monkey Theorem” (see here) and have held the opinion since 2020 (when I first wrote about LLMs (see here)) that this would not hold.
    • The radical underperformance of OpenAI’s o1 model relative to what we were told is yet another sign that LLMs are beginning to experience the law of diminishing returns.
    • Second, commentary from an industry insider, the CEO of Deep Trading (an algorithmic trading firm), who claims he was told that another of the leading creators of LLMs has also hit a big wall of diminishing returns (see here).
    • This is even more tenuous than the article from The Information, but it adds weight and a second unrelated “data point” implying that LLMs are beginning to reach the limits of what they are capable of.
  • Diminishing returns are a huge problem because one has to use increasing amounts of resources in order to achieve smaller and smaller improvements (a toy sketch of this dynamic follows at the end of this note).
  • This very quickly becomes uneconomical, and any system that is funded by private money soon finds that willingness to pour more money in quickly dries up.
  • This could easily trigger a correction of expectations which in turn would cause valuations of the most outlandish companies to fall meaningfully.
  • It is my opinion that diminishing returns have been evident for quite some time supported by the fact that there is not much difference between the big models today in stark contrast to 2 or 3 years ago.
  • It is this slowing improvement that allows the laggards to catch up, which is why, when one looks at the benchmarks these days, the differences are minor.
  • LLMs still have substantial use cases that will deliver great economic benefits and spawn a new industry, but superintelligence is as far away today as it was 10 years ago.
  • The LLM superpowers of using natural language as a man-machine interface and the ability to ingest, categorise, cross-reference and regurgitate unstructured data remain very much intact and, when properly used, will be extremely valuable.
  • These two abilities alone open up many possibilities for new businesses as well as the replacement or improvement of businesses that already exist.
  • I am not bearish on the outlook for LLMs but merely cautious on valuation as expectations have run far ahead of what is realistically possible meaning that a correction is needed to bring expectations back to reality.
  • The robots are not coming to kill us anytime soon, but they will be making an appearance in the economy in ways that will make digital life for users better and more productive as well as allow companies to make far better use of the data that they already have but have forgotten about.
  • There is a correction coming and the time to invest will be when everyone has given up on AI and moved on to the next bright and shiny theme.
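To make the diminishing-returns point concrete, here is a minimal sketch that assumes, purely for illustration, that loss falls as a power law of training compute; the exponent is invented and not fitted to any real model.

```python
# Toy power-law scaling model: loss = compute^(-alpha).
# The exponent alpha is an invented illustration, not a fitted value.

def loss(compute: float, alpha: float = 0.05) -> float:
    """Hypothetical test loss as a power law of training compute."""
    return compute ** -alpha

for step in range(5):
    c = 10.0 ** step  # 1x, 10x, 100x, 1,000x, 10,000x the original budget
    gain = loss(c) - loss(c * 10)
    print(f"{c:>8.0f}x compute -> the next 10x buys only {gain:.4f} of loss reduction")
```

On any curve of this shape, each additional 10x of compute buys a smaller absolute improvement, which is exactly why privately funded scaling becomes uneconomical.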
OpenAI – Hot Mess (20 November 2023)
https://www.radiofreemobile.com/openai-hot-mess/
Looks to have been a fight over profit.

  • The absurd goings-on at OpenAI this weekend look to me to have come down to disagreements about money, but the implosion of the company may have just allowed it to fulfil its original mission.
  • Sam Altman, the CEO and co-founder of OpenAI, was suddenly fired by the board on Friday, November 17th, a move that was met with such a furore that attempts were made to reinstate him.
  • However, recent events suggest that negotiations to reinstate Mr. Altman have broken down completely and that he and a significant contingent of the company will now leave and start a new company.
  • Needless to say, the press and social media have been rife with speculation as the only reason given for Mr Altman’s sudden removal was a failure to communicate clearly with the board.
  • The principle of Occam’s Razor states that the simplest explanation is usually the correct one which in this case is money.
  • OpenAI was set up as a non-profit to develop artificial general intelligence (AGI) which is the point at which machines become as intelligent as humans or more so.
  • As a non-profit, OpenAI would ensure that this critical technology would not be monopolised by a single company but would instead benefit all of humanity.
  • While the company was a bunch of scientists tinkering with a problem that has eluded humanity for years and making little progress, this was not a problem.
  • However, along the way on this journey, the company created something that does not solve the AGI problem, but did cost a fortune to create and which has very great commercial potential.
  • The seed of this implosion was sown when OpenAI took $1bn from Microsoft in 2020 (see here), which it needed to pay for the immense amount of compute that was required to develop its large language models (LLMs).
  • This was compounded in 2023 when it took another $10bn leaving Microsoft with just under 50% of the company and I think, effective control (see here).
  • However, Microsoft is a commercial enterprise and does not invest $11bn with no hope of earning a return as it has a fiduciary duty to its shareholders to make them money.
  • Sam Altman increasingly understood this but his board being more distant from the daily realities was focused on the original mission upon which the company was founded.
  • This weekend was simply the detonation of a bomb that has been waiting to go off since 2020.
  • Mr Altman is not returning to OpenAI but, importantly, Mira Murati (chief technology officer) is also no longer interim CEO, meaning that she too may leave with Mr. Altman.
  • This is relevant because RFM research has concluded that Ms. Murati is the real brains behind the operation and that she will be needed to give Mr Altman’s new venture its best chance of success.
  • This leaves Microsoft in a real bind as it seems that all of the brains are walking out of the door leaving Microsoft with a bunch of algorithms that will age quite quickly with no one to update them and a pile of potentially worthless paper.
  • This is precisely the risk that I believed would force Microsoft to acquire OpenAI, but it has happened so quickly that there is no way that Microsoft will have had time to mitigate it.
  • Microsoft will not have lost very much money in real terms as almost all of the money that it gave to OpenAI came straight back in terms of Azure revenues although it has had to spend more on Nvidia chips than it would have liked.
  • The real winners here are OpenAI’s rivals who now have a golden opportunity to capitalise on one of the messiest, least professional implosions of a company that I have ever seen.
  • This mess will take months to recover from and every AI engineer at OpenAI will now be rethinking their options making them much easier pickings.
  • Furthermore, it will greatly slow the development of AI at OpenAI giving everyone else a chance to catch up and even surpass OpenAI in terms of AI excellence.
  • One way out of this debacle is for Microsoft to acquire OpenAI which I think has long been on the cards (see here), and for Mr Altman and his co-founders to start something new.
  • The net result is that in just a few days, the most important and influential company in the AI industry has been reduced to almost nothing and the void will be rapidly filled with all of its rivals.
  • Hence, I don’t think that this affects the development of AI very much, but it will make a more level playing field in terms of competition as everyone else moves in at the same time.
  • It is ironic that through its own implosion, OpenAI may have just fulfilled its mission.
GPT-4 – The law of diminishing returns (16 March 2023)
https://www.radiofreemobile.com/gpt-4-the-law-of-diminishing-returns/
Many more monkeys. Still no Shakespeare.

  • OpenAI has launched GPT-4 upon the world but, while the model records steady improvements in performance, the company refuses to disclose how many parameters it has or how much compute it took to create, raising the possibility that this game is becoming so expensive to play that it will never be commercially viable.
  • GPT-4 is the latest version of OpenAI’s generative foundation model and differs from GPT-3 in that it is larger (possibly 100tn parameters) and that there was human intervention in its training.
  • 100tn (if correct) is a staggering increase in size being 571x the size of GPT-3 and frankly, I am surprised that OpenAI was able to find that much data in existence.
  • It will also be vastly more expensive both to build and to run, requiring many more processors and far more memory as well as far more electricity.
  • OpenAI is refusing to disclose anything regarding the architecture, size, hardware, training compute, dataset construction or training method (see here, section 2, first paragraph).
  • It says that this has been done for competitive reasons (for which there is an argument given the degree to which Microsoft has panicked Google) but I suspect that the resources that were consumed to create, train and run GPT-4 were exponentially greater than for GPT-3.
  • GPT-4 is demonstrably better than GPT-3 at taking standardized tests like GREs, SATs and the legal bar exam going from scoring in the 10th percentile to the 90th but the system still has a tendency to hallucinate.
  • In fact, GPT-4 has all of the same limitations that are inherent in GPT-3 meaning that in terms of making GPT more aware and more causal in its understanding, there has been no progress at all.
  • This is fundamental because causal understanding is the central limitation of all systems based on deep learning as these systems reason by statistical correlation, not by causal understanding.
  • This is what leads to the errors, irrationality, craziness and hallucinations that many people have reported and unless something fundamentally changes in how these models are built, these problems will persist.
  • GPT-4 is also unusual in that it had human intervention during its creation and this was done in an attempt to pre-empt bad actors from trying to entice the system into saying socially unacceptable things.
  • The problem here is that GPT-4 has now had bias injected into it as views on what is socially acceptable vary widely and so the very bias that OpenAI claims to try and eradicate is now part of the system by design.
  • GPT-4 has now also been trained with vision and can describe photographs as well as what is odd or strange about them.
  • This represents a sort of generalization as language and vision are currently implemented using two different types of neural network but here OpenAI claims to have managed this with just one.
  • Whether GPT-4 can perform as well as other generative AIs that are specifically designed for generating or recognizing images remains to be seen.
  • For example, Midjourney which was specifically designed for images from the ground up is much better than DALL-E which is based on the large language model, GPT-3.
  • The net result is that I think that GPT-4 is a triumph of effort over finesse and reinforces my view that OpenAI’s philosophical approach to AI remains that with infinite data and infinite compute power, general artificial intelligence will magically appear.
  • I see this as an iteration of the infinite monkey theorem which states that if you have enough monkeys and enough time, they will eventually produce the works of William Shakespeare (a back-of-the-envelope version of this argument follows at the end of this note).
  • Unfortunately, despite a massive increase in monkeys, there is no sign of any of the famous plays.
  • This also raises the likelihood that we are fast reaching the point of diminishing returns in deep learning where more and more effort is required to produce smaller and smaller improvements.
  • The improvement of GPT-4 over GPT-3 is less than GPT-3 over GPT-2 and the likelihood is that a much greater increase in resources was required to produce it.
  • Furthermore, there is nothing in GPT-4 that leads me to think that general artificial intelligence is any closer, leaving the industry in need of other techniques to solve some of the more difficult problems of AI like autonomous driving.
  • Here, I continue to think that a combination of rules-based software and small specific neural networks each carrying out a small simple task is the way that these problems will be solved in a practical way.
  • There has been some progress on this front (see here) but this approach still remains some way away from a practical and commercial application.
  • In the meantime, OpenAI and others will continue to fuel the hype and expectations (artificial general intelligence) until such time that expectations are not met.
  • This will result in disappointment, disillusionment, falling investment and lower valuations as it has on three separate occasions in the last 70 years.
  • In short, the 4th AI winter.
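For those who want the monkey arithmetic spelled out, here is a back-of-the-envelope calculation of my own; it is an illustration, not something taken from OpenAI’s paper.

```latex
% Probability that a random typist, using an alphabet of k symbols,
% produces a given text of length n in a single attempt:
\[
  P = k^{-n}, \qquad
  P(\text{``to be or not to be''}) = 27^{-18} \approx 1.7 \times 10^{-26}
\]
% (18 characters; 27 symbols: 26 letters plus a space.)
% The expected number of attempts is therefore around 5.8 x 10^25, and every
% additional character multiplies that cost by another factor of 27:
% exponential cost for linear output, the exact opposite of a scaling law.
```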
Microsoft & OpenAI – Chinese whispers. https://www.radiofreemobile.com/microsoft-openai-chinese-whispers/ Tue, 24 Jan 2023 05:55:42 +0000 http://www.radiofreemobile.com/?p=9390 This is not a simple $10bn investment.

  • Microsoft is deepening its ties with OpenAI which looks to me to be mostly about cementing Azure’s edge over AWS in that Microsoft has exclusive access to the hottest AI property on the market right now.
  • Microsoft has said that it is making a “multiyear, multibillion-dollar investment to accelerate AI breakthroughs” and that it will increase “investments in the development and deployment of specialized supercomputing systems to accelerate OpenAI’s ground-breaking independent AI research”.
  • It does not say “we are investing $10bn to buy OpenAI shares at a pre-money valuation of $29bn”.
  • Hence, I think that this deal is far more complicated than the straight investments it did in 2019 and 2021.
  • Furthermore, if it was investing $10bn in OpenAI shares, this would represent a very material transaction and Microsoft would be obligated to disclose this fact to the market as it is a public company.
  • Consequently, I think that the following makes the most sense for Microsoft.
    • First, exclusivity: Microsoft makes a further investment in OpenAI in order to help finance the AI research in which OpenAI is engaged.
    • This is probably in the form of $2bn – $4bn and comes with all sorts of preferences that ensure that Microsoft will be able to earn back its investment over time.
    • OpenAI is supposed to be a non-profit which in hard financial numbers means that the value of the company (present value of the discounted free cash flow) should be $0.
    • Hence, what I suspect Microsoft is buying is an exclusive right to use OpenAI’s technology in its products as well as the promise that OpenAI will only use Azure’s cloud infrastructure.
    • The majority of that investment will end up coming back to Microsoft anyway in the form of Azure revenues as OpenAI subscribes to the massive-compute approach to solving AI (see here); a toy sketch of this round trip follows at the end of this note.
    • Second, servers: I think the rest of the money will be spent on building custom infrastructure that will run OpenAI’s technology optimally both in terms of performance and power consumption.
    • Because the servers are customised, they will not run other systems particularly well creating an interdependency between the two companies.
    • The popularity of ChatGPT has meant that during busy times, response times increase materially, implying that there is not enough hardware to run the level of requests that are coming in.
    • If Microsoft is going to use OpenAI in its commercial products (see below), then it will not be able to afford to have service slowdowns or outages as a result of high demand.
  • I see two main benefits from this deal for Microsoft.
    • First, Azure differentiation: This deal will very likely ensure that OpenAI will not be available on AWS, Google Cloud or anyone else.
    • This means that Azure will be able to offer OpenAI as a service to its customers (eg Azure OpenAI Service (see here)), which is a feature that AWS will be unable to replicate.
    • Should OpenAI’s products become popular with clients, this will give Azure firepower in its quest to close the gap on AWS.
    • Second, new products: Microsoft will have the ability over time to embed OpenAI into its other products such as Office and Windows which may help them to improve.
    • My experience with AI in both Windows and Office 365 to date has been mild irritation leading to it being disabled, meaning that its value to me has been negative.
    • There are many areas where Microsoft products could be greatly improved by AI (eg search in Outlook) and OpenAI may be able to help in these areas.
    • I do not expect to see ChatGPT embedded in Office as I don’t think that it is either trustworthy or safe meaning that it fails Microsoft’s own standards for AI products that it creates.
  • Hence, I think that this deal structure is very far from that being reported in the media and technology press but it is one that makes much more rational sense than just dumping $10bn into OpenAI at $29bn.
  • There has been substantial chatter over the last few weeks on this issue and so I suspect that the reality of the deal has been distorted as the news passes from one source to another.
  • If OpenAI products can be made trustworthy and safe, then Microsoft stands to benefit from this transaction in the long term, but I can’t see this as a reason to get excited about the shares now.
  • The market is slowing, as Microsoft’s 10,000 layoffs suggest, meaning that the shares continue to look fully valued, pushing me to look elsewhere.
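Here is the round-trip arithmetic mentioned above as a minimal sketch; every figure in it is a hypothetical assumption of mine, chosen only to illustrate the mechanics.

```python
# Toy model of an investment that partly returns to the investor as cloud
# revenue. All figures are hypothetical assumptions for illustration only.

def net_cost(invested_bn: float, share_spent_on_azure: float,
             azure_gross_margin: float) -> float:
    """Net cash cost after the slice of the investment that the investee
    spends back on Azure is recovered at Azure's gross margin."""
    recovered = invested_bn * share_spent_on_azure * azure_gross_margin
    return invested_bn - recovered

# e.g. $10bn invested, 90% of it spent back on Azure compute, 60% gross margin:
print(f"Net cost: ${net_cost(10.0, 0.9, 0.6):.1f}bn")  # -> Net cost: $4.6bn
```

On assumptions like these, a headline $10bn is closer to a $5bn net outlay, which is why I read the deal as a supply agreement as much as an investment.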
Microsoft & OpenAI – Fluff ’n’ stuff https://www.radiofreemobile.com/microsoft-openai-fluff-n-stuff/ https://www.radiofreemobile.com/microsoft-openai-fluff-n-stuff/#comments Wed, 11 Jan 2023 06:31:13 +0000 http://www.radiofreemobile.com/?p=9365 OpenAI is a long way from challenging Google.

  • The hubris around OpenAI following the popularity of ChatGPT continues to grow but the nature of ChatGPT and NLP models, in general, has not changed and they remain wholly unsuited for any task where factual accuracy and truth are important.
  • As a result, I think that Microsoft will struggle to make a decent search engine using ChatGPT any time soon, but participating in the chatter will help OpenAI raise money at a $30bn valuation and enable Microsoft to write up the value of its original investment.
  • I have long viewed Microsoft’s investment in AI as a supply agreement as a large proportion of the $1bn that it paid to OpenAI will have come back in the form of revenues for Azure.
  • OpenAI subscribes to what I call the infinite monkey theorem of AI which means that if you throw enough data and enough compute at a problem then the answer will eventually pop out at the end.
  • This is not dissimilar to the idea that given enough time and a typewriter a monkey will eventually come up with the complete works of Shakespeare.
  • Consequently, OpenAI uses a lot of compute and so as the supplier of the compute, Microsoft effectively de-risked what I have long argued was a bet on a very long shot (see here).
  • By getting most of the money back in terms of cloud revenue, Microsoft effectively paid a greatly discounted price in terms of the valuation in 2019.
  • Now it is considering investing $10bn in the company at a valuation of $30bn where the money would be injected over a period of time presumably as a series of milestones both technical and financial are met.
  • This has triggered speculation that ChatGPT will be integrated into Bing to allow it to compete more effectively with Google but I think that this will be easier said than done.
    • First, time freeze: ChatGPT is frozen in time as it was trained on a single snapshot of the internet taken in 2021.
    • This means it does not know that Russia invaded Ukraine nor does it know that Queen Elizabeth II has passed away.
    • This was done in order to make the dataset upon which it was trained as finite and as stable as possible.
    • I have argued countless times that stable and finite are crucial criteria for deep learning systems and when these criteria are not met, the system rapidly falls over.
    • Search is a service that has to be relevant and up to date (its data is constantly changing), and so this gigantic problem has to be solved before one puts ChatGPT anywhere near search; that is going to take a long time.
    • Second, long tail: Bing is a perfectly good search engine but Google is much better at finding the obscure items being searched for and surfacing them to the user.
    • It is also much better at guessing what the user is looking for when the search request is not very clear.
    • These two characteristics together are what make Google such a good search engine and are, to a meaningful degree, the reason why everyone keeps using it.
    • I don’t think that ChatGPT will be much help in fixing either of these problems and so I don’t really see how using ChatGPT will make Bing a better search engine.
  • This is why I don’t think that ChatGPT will help Bing challenge Google nor do I think that it will be incorporated into the search engine in a meaningful way anytime soon.
  • However, having Microsoft onboard preselling large amounts of Azure capacity to OpenAI and getting shares in return lends a degree of legitimacy that gets a good dose of FOMO (fear of missing out) going for everyone else.
  • OpenAI started as a non-profit and a portion of the company still leans that way and as far as I am aware the company has no real revenues to speak of.
  • Hence, how the company is worth $30bn is a big mystery to me especially in this climate as I suspect that if it had gone public as a SPAC at $10 it would probably be trading at $0.3 or less by now.
  • There is nothing here that leads me to believe that there is either a Google killer in the works or a good value proposition at $30bn.
Tesla – On the radar (9 December 2022)
https://www.radiofreemobile.com/tesla-on-the-radar/
Tesla’s actions speak louder than Musk’s words.

  • Tesla is admitting that it has failed to solve the machine vision problem by returning radar to its vehicles, in a move that I suspect will be followed by lidar, supporting my long-held view that in autonomous driving, Tesla is second rate at best.
  • Tesla started removing radar from its vehicles in May 2021 as a way to reduce costs as it believed that its machine vision system was so good that it did not need radar to help its vehicles to perceive the road.
  • Most autonomous driving players use three sensor types, lidar, cameras and radar because each of these produces a different data set that when combined produces a more reliable picture of the vehicle’s surroundings.
  • However, Tesla had so much confidence in its camera-based machine vision that it felt that it did not need the other two meaning that it could reduce vehicle build costs by removing them.
  • Tesla’s autopilot performance has obviously suffered as a result which is why Tesla recently informed the FCC that it would be marketing a new radar product next month (see here).
  • Given the number of high-profile accidents and investigations that have ensued, I am not that surprised.
  • Mobileye, who I would back over Tesla any day of the week when it comes to camera-only machine vision, is still working with lidars in its vehicles in order to reduce the rate at which the machine makes mistakes.
  • The predominant method by which an autonomous driving system perceives its surroundings is through deep learning which in reality, is nothing more than a sophisticated statistical pattern recognition system.
  • This means that the system has no causal understanding of what it is doing and as such, is unable to think outside of the box.
  • The net result is that whenever a situation arises that the machine has not been explicitly taught, it will fail to correctly interpret the situation which will cause the vehicle to make a driving mistake.
  • This makes deep learning systems very good at tasks where the data set is both finite and stable but catastrophically bad at anything where the environment keeps on changing or is random in nature.
  • The dataset for the road is virtually infinite and it is changing all the time which is why Tesla (and everyone else) has continued to struggle with this problem.
  • I am sure that Tesla is aware of this issue, but it also subscribes to the OpenAI philosophy of AI which is that any problem can be solved with enough data and enough compute power (see here).
  • Unfortunately, this approach has yet to come close to solving the autonomous driving problem which is why I suspect another approach is needed (see here).
  • The use of multiple sensor types can help to reduce the error rate because each of them sees the road differently and detects different characteristics (a toy illustration of this effect follows at the end of this note).
  • I also think that a map of what the road looks like is highly beneficial as it reduces the uncertainty of the task as well as the processing that is required to perceive the environment correctly.
  • Tesla’s move to return radar to its models is an admission that its camera system is not good enough and that it needs other sensors to reduce the error rate.
  • This puts it behind the competition when it comes to autonomous driving and supports my long-held view that Tesla will not be first to market.
  • This means that the robotaxi strategy for which success is required to make sense of Tesla’s crazy valuation looks like it will fail to deliver the numbers promised.
  • This makes the shares look even more overvalued and I suspect that there is still a long way for them to fall even from here.
  • I continue to think that the best way to invest in EVs is nuclear power which is going to be needed to provide the base load power to charge all of these vehicles that we are going to buy.
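As a toy illustration of why independent sensors cut the error rate, consider the sketch below; the miss rates are invented for illustration and real fusion systems are far more sophisticated than a simple product of probabilities.

```python
# Toy illustration of sensor fusion: if sensors fail independently, the
# chance that ALL of them miss an obstacle is the product of their
# individual miss rates. The rates below are invented for illustration.
from math import prod

miss_rates = {"camera": 0.05, "radar": 0.10, "lidar": 0.08}

all_miss = prod(miss_rates.values())  # 0.05 * 0.10 * 0.08 = 0.0004
print(f"Camera alone misses:     {miss_rates['camera']:.2%}")
print(f"All three miss together: {all_miss:.2%}")  # 0.04%, i.e. 125x better

# Caveat: real sensor errors are not fully independent (heavy rain degrades
# camera and lidar together), so the true gain is smaller than this.
```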
Artificial Intelligence – Here comes Skynet? (16 September 2022)
https://www.radiofreemobile.com/artificial-intelligence-here-comes-skynet/
I am not panicking yet.

  • Terrifying claims made in a scientific article in AI Magazine last month predict a high likelihood that the machines will turn against their makers, but the article fails to acknowledge that the machines are so stupid that this is never likely to occur.
  • The main thrust of this paper (see here) is that as AI systems are pushed to maximise their rewards, they could end up triggering negative consequences for humans as a result.
  • An example cited is that the AI could end up directing so much energy to the solution of its tasks and therefore its rewards, that there would not be enough energy left to grow food, heat homes and so on.
  • Should humans intervene to take the energy back, then an existential catastrophe could occur which, according to Cohen, is “not just possible, but likely”.
  • The lead author of the article is Michael Cohen, a PhD student (a DPhil at Oxford) at the University of Oxford and the Future of Humanity Institute, who has been researching AGI safety.
  • His co-authors are Michael Osborne, Professor of Machine Learning at Oxford (presumably his supervisor), and Marcus Hutter, a researcher at DeepMind.
  • The paper begins by making a number of assumptions which in my opinion is where the validity of the conclusions falls to pieces as in my experience, assumptions are the mother of all mistakes.
  • The paper ends with “if they (the assumptions) hold: a sufficiently advanced artificial agent would likely intervene in the provision of goal-information, with catastrophic consequences” which I would not necessarily disagree with.
  • However, it is the first assumption of six which I would contest.
  • Assumption No. 1 reads: “A sufficiently advanced agent will do at least human-level hypothesis generation regarding the dynamics of the unknown environment”.
  • In essence, this means that AI can perform difficult tasks at a human or better level of performance.
  • The task that the researchers give as an example is an AI being able to cure a patient of depression where a human therapist cannot.
  • Anyone who has used Google Assistant, Alexa, Siri, Xiaodu (Baidu), and Alice (Yandex) will have experienced just how stupid these machines are and that they are barely capable of the most basic functions let alone curing difficult patients with depression.
  • Furthermore, even the huge language models such as GPT-3 (see here) and LaMDA (see here) are fundamentally flawed in my opinion.
  • For example, Siri constantly wakes up without being asked to, Alexa constantly fails to turn off the lights, customer service chatbots never seem to have the answer to one’s query and Google has been known to direct me into a high-security military base when looking for the airport.
  • Furthermore, despite billions of dollars of development expenses, machines remain incapable of safely driving vehicles which is something almost every human on the planet can be easily taught to do.
  • It is also still incredibly difficult to teach a robot to walk on legs despite this task being something that most of the animal kingdom (if they have them) can do shortly after birth.
  • This raises the question of why are the machines so stupid and the answer is simply that they have no causal understanding of what they are doing.
  • Neural networks of all shapes and sizes are advanced pattern recognition systems and all of their conclusions are based on matching historical patterns to outcomes.
  • This means that if something changes or something new occurs within the task that the machine is trying to solve, then it will catastrophically fail.
  • In practice, this means that AI is excellent for tasks where the data set is both finite and stable but elsewhere it has great difficulty and is unable to generalise or extrapolate as humans can (a toy demonstration of this failure follows at the end of this note).
  • This is what is referred to as generalisation or being able to apply what one has learned in one task to another slightly different one.
  • This is by far the single biggest shortcoming in AI systems today and progress on solving it is glacial, to put it mildly.
  • There are plenty of researchers who are looking into this and, over 10 years, they have come up with almost nothing.
  • This problem is so acute in neural net systems that some even think that this whole method of creating AI should be thrown away and we should start again.
  • Hence, it could be 100 years before much progress is made and this paper is assuming that this problem has been solved.
  • While I agree with the conclusion that if the AI generalisation problem is solved, then there is something to worry about, it remains so far away and so uncertain that I am not going to lose sleep over it.
  • Skynet has a very very long wait before it can enslave or exterminate the human race.
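As a toy demonstration of pattern matching without causal understanding, here is a sketch of my own: a curve fitted to a narrow training range does beautifully inside it and falls apart outside it.

```python
# Fit a cubic to exponential data on [0, 2], then ask it about points
# outside that range. A stand-in for any pure pattern-matching learner.
import numpy as np

x_train = np.linspace(0.0, 2.0, 50)
y_train = np.exp(x_train)                    # the only "world" the model sees

model = np.poly1d(np.polyfit(x_train, y_train, deg=3))

for x in (1.0, 2.0, 4.0, 6.0):
    print(f"x={x}: true={np.exp(x):8.1f}  model={model(x):8.1f}")
# Inside [0, 2] the fit is near perfect; at x=4 and x=6 the errors grow
# rapidly, because the model matched a shape it saw with no notion of why
# the data grows.
```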
Artificial Intelligence – Infinite monkeys (13 June 2022)
https://www.radiofreemobile.com/artificial-intelligence-infinite-monkeys/
I don’t think LaMDA is anywhere close to being sentient.

  • Another disagreement between Google and one of its researchers has sparked a big debate on whether AI has become sentient, but all of the evidence that I can see points to the contrary: AI remains as dumb as ever.
  • This time, the researcher in question is Blake Lemoine who has been put on administrative leave by Google for violating his confidentiality agreements when he shared externally his views about Google’s language model, LaMDA.
  • Lemoine’s view is that LaMDA is sentient like a human and his failure to get his superiors to believe his claim led him to share his findings with persons outside of Google.
  • It is only for sharing his findings with outsiders that Google is censuring Lemoine and, on that basis, I don’t think he has a leg to stand on.
  • Lemoine states that he thinks that LaMDA has reached a state of consciousness that would allow it to think and act (if it had a body) like a human being.
  • This is the holy grail of AI and not surprisingly it has hit a wall of scepticism and sharply divided opinion.
  • I think that LaMDA is nowhere close to being sentient for the following reasons:
    • First, three goals of AI: RFM Research has identified three key goals in AI research (see here) which if perfectly solved could pave the way for AI to become sentient.
    • RFM proposed these goals in 2018 and has been monitoring progress against them ever since.
    • Progress has been extremely slow (as one would expect in a scientific undertaking of this nature) and there is no evidence that these goals have suddenly been solved.
    • Hence, I doubt that LaMDA is sentient but instead is a manifestation of so many data points that it can project the illusion of being sentient.
    • Second, human weaknesses: also known as anthropomorphisation which is the tendency to attribute human traits to non-human entities.
    • It is well known that humans have a predisposition to humanise non-human entities which is described by Melanie Mitchell in her book on AI.
    • Consequently, it is possible that this has had an impact on the decision-making of Blake Lemoine and helped lead him to this erroneous (in my view) conclusion.
    • Third, evidence: All of the evidence points in the non-sentient direction with only feelings, experiences and opinions supporting the sentient side of the debate.
    • For example, when Lemoine demonstrated his finding to the Washington Post (see here), Lemoine had to guide the reporter in terms of how to phrase her questions in order to get human-like responses.
    • I consider this to be evidence that LaMDA is not sentient because problems like this do not arise when speaking to other human beings.
    • Furthermore, RFM research (see here) found evidence that GPT-3 (OpenAI’s language model) has no ability to generalise at all despite giving the impression of being able to do so.
    • Buried in OpenAI’s research paper (see here (fig. 3.10)) is clear evidence that GPT-3 has no ability to generalise.
    • OpenAI used the fact that its model could do two-operation basic arithmetic as a sign that it could generalise from language to mathematics, but GPT-3 could not do three operations even with single-digit numbers.
    • In my opinion, this is evidence that GPT-3’s training data (the model itself has 175bn parameters) had the answers to the two-operation sums buried in it, which the researchers had been unable to find but GPT-3 could.
    • Three-operation maths will be much rarer and so less likely to have its answers hidden in the data set.
    • Therefore, I conclude that GPT-3 has no understanding of maths, as a 5-year-old can make the cognitive leap from two operations to three without difficulty but somehow GPT-3 could not.
    • I suspect that if one tested this on LaMDA, one would get a similar result (a sketch of such a probe follows at the end of this note).
  • Consequently, I think that all of the evidence points towards another iteration of what I call the Infinite Monkey Problem.
  • This idea states that given a typewriter and enough time, a monkey will be able to come up with the complete works of Shakespeare.
  • These models have more parameters than a human can conceive of and access to huge amounts of computational power.
  • So, in effect, there are billions and billions of monkeys all hammering away at the typewriter which combined with all of the information available on the internet, is able to weave the illusion of sentience.
  • Hence, I do not think that Skynet has been created nor do I think we are about to enter into a cataclysmic war with machines that results in humankind being enslaved.
  • AI remains very good at solving problems where the task at hand is both finite and stable and pretty bad at everything else which is why computers still struggle with walking on legs and driving cars on open roads.
  • It is in the application of AI to these tasks where resources and time should be allocated leaving algorithms like LaMDA, GPT-3 and so on as interesting research projects and little more.
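Here is a sketch of the kind of probe described above. `ask_model` is a hypothetical stand-in for however one queries the model under test; nothing here comes from OpenAI’s or Google’s actual tooling.

```python
# Sketch of an arithmetic-generalisation probe: compare accuracy on
# two-operation vs three-operation single-digit sums. `ask_model` is a
# hypothetical callable returning the model's integer answer.
import random
from typing import Callable

def make_question(n_ops: int, rng: random.Random) -> tuple[str, int]:
    """Build an expression such as '3 + 7 - 2' and its true answer."""
    nums = [rng.randint(1, 9) for _ in range(n_ops + 1)]
    ops = [rng.choice("+-") for _ in range(n_ops)]
    expr = str(nums[0]) + "".join(f" {op} {n}" for op, n in zip(ops, nums[1:]))
    return expr, eval(expr)  # safe: built only from digits and +/- above

def accuracy(ask_model: Callable[[str], int], n_ops: int, trials: int = 100) -> float:
    rng = random.Random(42)
    hits = sum(ask_model(f"What is {expr}?") == ans
               for expr, ans in (make_question(n_ops, rng) for _ in range(trials)))
    return hits / trials

# If the argument above is right, accuracy(model, 2) stays high while
# accuracy(model, 3) collapses, even though the cognitive step is trivial.
```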
4YFN 2022 – Treasure hunt (7 March 2022)
https://www.radiofreemobile.com/4yfn-2022-treasure-hunt/
Diamonds in the rough?

  • Part of the MWC trade show is the 4YFN (four years from now) section where all the start-ups are to be found, most of which have nothing to do with mobile technology.
  • For once, I had time to do an in-depth tour of this exhibition and found 4 companies that piqued my interest.

Inmersia – Outlandish or hidden genius?

  • Top of the list of outlandish claims was Inmersia, a university spin-out that is creating a pair of AR glasses.
  • Inmersia claims to have fixed the field of view problem for augmented reality and having focused on this area at MWC 2022, I can confirm just how bad it still is.
  • The leading products offer a field of view of around 50° which results in a very poor user experience.
  • Inmersia claims to be able to offer a 150° field of view in a small, compact form factor.
  • The catch is that this product does not yet exist outside of the lab, but the company did say that it has worked out how it was going to miniaturize what it does have into a usable form factor.
  • This is intriguing as, in my opinion, this could be a giant leap forward but, as ever, the proof is in the pudding.
  • Needless to say, I will be looking into this one in more detail to find out whether these are outlandish claims or the beginning of a proper unicorn.

IDUN Technologies – Musk-less brainwaves

  • Hidden away in the health care section was Zurich-based IDUN Technologies, which at its heart is a materials science company.
  • The company has developed a material that can be used for earbuds that is capable of picking up electrostatic signals from the brain when placed in the ear.
  • This is not some Elon Musk promoted brain implant wheeze but a real product that has a couple of good use cases.
  • The first is sleep-scoring where the company is working with Japanese pharmaceutical company Takeda to replace the standard sleep scoring test called polysomnography (PSG).
  • PSG involves lots of head-attached electrodes and expensive machines but is the gold standard of sleep medicine.
  • So far IDUN Technologies can achieve 70% accuracy with its earbuds which compares favourably to other trackers like Oura or Apple Watch which are 40% accurate or less.
  • The company also believes that it can revolutionise the hearing aid industry by being able to selectively amplify the noises the listener is focused on.
  • This is done by interpreting the brain signals to work out which frequencies to amplify rather than just making everything louder.
  • The product is still at an early stage, but the results so far are, in my opinion, far more promising and relevant than monkeys playing MindPong.

Kamleon – A pot to pee in

  • Kamleon is a health monitoring company whose first product is a smart urinal that can analyse the urine that passes through it.
  • Currently, this provides an assessment of how hydrated the person is but there is scope for far more.
  • Many people, even athletes, often neglect their hydration levels as in the early stages there are no symptoms to signal that a course correction is needed.
  • This is a problem that Kamleon is working on correcting but there is a lot further that this product can go as many diagnostic tests use urine.
  • The obvious candidate is diabetes, but prostate health and diagnosis is also a possibility.
  • The product is currently being trialled at the public lavatory in Barcelona’s Sants Estacio train station and interest is high.
  • There are plenty of issues that need to be worked out around privacy, data privacy and medical licencing but this is an innovative approach to solving a big problem.

Oxford Quantum Circuits – All about scale

  • 4YFN also showcased a lot of quantum computing companies, the most interesting of which was Oxford Quantum Circuits (OQC).
  • OQC builds quantum computers using superconductivity but has two differentiators that make it interesting.
  • First, its circuits are 3D in that the input and the output lines are above and below the chip itself rather than on one side.
  • This gives the ability to easily scale the size of the computer without having the problem of where to put the input and output lines.
  • This is a unique proposition that I have not seen elsewhere.
  • Second, the company has just made one of its computers (Lucy) available as an instance on Amazon.
  • OQC has done all the software plumbing to allow access via Amazon, meaning that anyone who wants to do some quantum computing can get access to it without having to install one of these beasts in the cupboard (a sketch of what such access looks like follows below).
  • Most of the competitors build and deliver quantum computers to anyone who wants one, typically academic or government research institutions.
  • Quantum computing is many years away from replacing silicon, but the seeds are being planted and the first opportunities to invest in the giants of tomorrow are emerging now.
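As an illustration of what “a quantum computer as an Amazon instance” looks like in practice, here is a minimal sketch using the Amazon Braket Python SDK; the device ARN is my assumption of how Lucy is listed, so check the Braket console for the current identifier.

```python
# Minimal sketch of running a circuit on a cloud-hosted QPU via Amazon Braket.
# The device ARN is an assumption of how OQC's Lucy is listed; verify it in
# the Braket console before use (running on real hardware also incurs fees).
from braket.aws import AwsDevice
from braket.circuits import Circuit

device = AwsDevice("arn:aws:braket:eu-west-2::device/qpu/oqc/Lucy")  # assumed ARN

bell = Circuit().h(0).cnot(0, 1)         # prepare a two-qubit Bell state
task = device.run(bell, shots=100)       # queue the job on the remote QPU
print(task.result().measurement_counts)  # expect roughly half '00', half '11'
```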
OpenAI & Microsoft – Infinite monkeys https://www.radiofreemobile.com/openai-microsoft-infinite-monkeys/ Thu, 24 Sep 2020 03:13:32 +0000 http://radiofreemobile.com/?p=7925 OpenAI shows Microsoft the money.

  • Microsoft’s latest deal with OpenAI is little more than Microsoft supporting what is probably becoming one of its biggest clients.
  • Microsoft has extended its relationship with OpenAI beyond its original investment (see here) and the supercomputer that it announced in May 2020 to exclusively license OpenAI’s newest language algorithm, GPT-3.
  • This means that access to GPT-3 will be offered by Microsoft as part of its Azure offering as well as integrated with the cloud services that are offered as part of its cloud computing service.
  • Generative Pre-trained Transformer 3 (GPT-3) is a language model that uses an almost unimaginable 175bn parameters, making it 116x bigger than its predecessor and ensuring that it requires vast resources in order to run.
  • I think that this is the main reason that Microsoft is all-in on OpenAI because the bigger and more complex the models get, the more Azure resources OpenAI consumes and the more money Microsoft makes.
  • This is why it makes little difference to Microsoft whether OpenAI succeeds or fails as almost all of the money that Microsoft invested comes back to Microsoft in the form of revenues for Azure.
  • GPT-3 is the latest embodiment of OpenAI’s philosophy, which is that artificial general intelligence or transfer learning (AI’s holy grail) can be achieved by throwing as much data and compute as one can lay one’s hands on at the problem.
  • RFM describes GPT-3 as an example of the infinite monkey theorem, which states that given enough time, a monkey with a typewriter will eventually come up with the works of William Shakespeare (see here).
  • GPT-3 produces some very impressive content; in testing, humans could tell whether a piece of content had been written by a human or a machine only around 48% of the time, which is no better than guessing (a quick check of what such a number means follows at the end of this note).
  • It also produces a lot of gibberish (which tends to get ignored) and OpenAI’s claims of generality for GPT-3 are not difficult to call into serious question using its own data (see here).
  • However, this is exactly what it was created to do as the system was incentivised to produce text that reads plausibly regardless of whether it makes any sense.
  • When one digs into what the machine is actually saying, it quickly becomes obvious that the system has no idea what it is doing.
  • This is the flaw that is inherent to all deep learning-based systems and progress to fix this problem has been glacially slow.
  • The other problem that OpenAI faces is that it will be increasingly difficult to come up with GPT-4 and beyond.
  • This is because GPT-3 was effectively set loose on the entire Internet (the global repository of all public data) meaning that finding ever-larger sources of data will become increasingly difficult.
  • Furthermore, researchers are increasingly finding that the improvements achieved in performance from ever-increasing amounts of data are getting smaller and smaller.
  • This tells me that in terms of making machines appear to be intelligent, deep learning techniques are beginning to approach the limit of what they can achieve.
  • This is why I think that OpenAI’s approach is the wrong approach to reaching artificial general intelligence and that OpenAI will eventually run out of money and close its doors.
  • In the meantime, Microsoft will make hay while the sun shines and is the main beneficiary of OpenAI’s voracious appetite for compute and storage resources.
  • That being said, I would not rush out and buy Microsoft’s shares as they have run far beyond what I would call fair value as have almost all of the big tech names.
  • This is the danger of a money-printing and quantitative-easing-fuelled rally because the stock market is currently in the thrall of the Federal Reserve, which can’t print money forever.
  • When it does finally decide that it cannot print and ease any more, the market will correct to reality and this will take Microsoft with it.
  • It’s been a great run but I would be (and am) completely out of this stock now.
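As a quick check of what a 48% detection rate means, here is a sketch; the number of judgements is a made-up assumption, since I have not gone back to count the sample in OpenAI’s paper.

```python
# Sanity check: with a few hundred judgements, 48% accuracy at spotting
# machine text is statistically indistinguishable from coin-flipping.
# The sample size is a made-up assumption for illustration.
from scipy.stats import binomtest

n_judgements = 200                      # hypothetical number of human ratings
n_correct = round(0.48 * n_judgements)  # 96 correct identifications

result = binomtest(n_correct, n_judgements, p=0.5)
print(f"p-value vs pure guessing: {result.pvalue:.2f}")  # well above 0.05
```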