Q2 + AI Strategy
Dear Partner,
I’m pleased to report that the NJC Horizon Fund achieved a 23.34% year-to-date (YTD) return as of Q2 2024, outperforming the Nasdaq (up roughly 18%) by 29.27% and the S&P 500 (up 15.26%) by 52.92% in relative terms. This performance was primarily driven by our core investments in healthcare, infrastructure, tech services, and particularly semiconductors, which also fueled our returns in Q1.
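For readers who like to check the arithmetic, here is a quick sketch; the benchmark returns quoted above are rounded, so the output differs slightly from the quoted relative figures.

```python
# Back-of-envelope check of the relative-outperformance figures quoted above.
# Benchmark returns are rounded in the text, so results differ slightly.
fund_ytd = 0.2334    # NJC Horizon Fund, YTD through Q2 2024
nasdaq_ytd = 0.18    # Nasdaq, as rounded in the letter
sp500_ytd = 0.1526   # S&P 500

def relative_outperformance(fund: float, benchmark: float) -> float:
    """Excess return expressed as a fraction of the benchmark's return."""
    return (fund - benchmark) / benchmark

print(f"vs Nasdaq:  {relative_outperformance(fund_ytd, nasdaq_ytd):.2%}")  # ~29.7%
print(f"vs S&P 500: {relative_outperformance(fund_ytd, sp500_ytd):.2%}")   # ~52.9%
```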
If I had written this letter in June, I would have cautioned that these extraordinary short-term gains would likely moderate. Even the best-performing stocks occasionally give up ground. Still, even if a change in market leadership is imminent, it makes economic sense to continue holding these assets unless we're confident that the potential downside outweighs both the capital gains taxes and the opportunity cost of selling.
After a year of AI dominating investment discussions, I’ve now witnessed the second meltdown and rebound in AI stock valuations in just a few months. Against this backdrop, I’d like to share my view on AI’s future over the next decade, who I expect will capture the bulk of AI-related profits, and how this shapes my approach to public equity investment.
Context:
While AI may seem like an overnight sensation, it was in development for 70 years before ChatGPT (built on the GPT-3.5 model) gained public attention in late 2022. However, we are still in the early stages of AI's evolution. Until recently, most AI algorithms in use were decades old, limited by insufficient processing power. As a result, commercial applications were confined to narrow tasks like converting handwriting into text or selecting digital ads. The groundbreaking AI models we see today didn’t begin development until around 2015.
The rapid adoption of ChatGPT and other generative AI models tempted developers and investors with the promise of unprecedented demand. This enthusiasm pushed stock prices into speculative territory for anything remotely connected to AI development. However, in recent months, soaring costs and stabilizing user engagement have tempered investor excitement. While I’m relieved that the prophecies of an AI bubble appear self-defeating, I believe many current concerns are just as exaggerated and naive as the earlier optimism.
Analysts and investors who were wide-eyed about AI just months ago now speak of waning demand and unrecoverable expenses. This pessimism is misguided: modern AI is not only genuinely useful, contributing measurably to productivity, but also poised for significant improvement and capable of achieving enormous economies of scale.
The Case For Optimism:
The development and initial training of large AI models are enormously expensive, but operating those models, known as inferencing, tends to be far less costly. Over time, operating costs can drop significantly as models mature. Migrating models to Application-Specific Integrated Circuits (ASICs), custom-designed chips that run a given model more efficiently than general-purpose hardware, is one way to dramatically lower the marginal cost of inferencing. For example, Amazon reduced the cost per use of Alexa’s voice recognition system by about 70% after migrating it to ASICs. This increased efficiency encourages developers to push model training closer to the point of diminishing returns. While these large upfront investments are currently unnerving some investors, they may extend the period during which developers can reap outsized profits.
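To make those economics concrete, here is a minimal, illustrative cost model. Every number in it is hypothetical rather than an estimate for any particular model, but it shows why the fixed cost of training fades as inference volume grows, and why a cheaper cost per query matters so much at scale.

```python
# Illustrative AI cost model: a one-time training cost amortized over inference volume.
# All numbers are hypothetical, chosen only to show the shape of the economics.

TRAINING_COST = 100_000_000      # one-time cost to train the model, in dollars
GPU_COST_PER_QUERY = 0.002       # marginal inference cost on general-purpose GPUs
ASIC_COST_PER_QUERY = 0.0006     # ~70% lower, mirroring the Alexa example above

def cost_per_query(queries: float, marginal_cost: float) -> float:
    """Average all-in cost per query: amortized training plus marginal inference."""
    return TRAINING_COST / queries + marginal_cost

# At low volume the training cost dominates; at scale the marginal cost does.
for queries in (1e9, 10e9, 100e9):
    gpu = cost_per_query(queries, GPU_COST_PER_QUERY)
    asic = cost_per_query(queries, ASIC_COST_PER_QUERY)
    print(f"{queries:>6.0e} queries: GPU ${gpu:.4f}/query, ASIC ${asic:.4f}/query")
```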
Recent investor concerns are driven by fears that AI won’t deliver enough value to customers to justify its high development costs. I find this hard to believe. Anecdotally, many people I know already pay to use Large Language Models (LLMs) like ChatGPT. While the technology has countless applications, a few have gained widespread adoption: 1) People are increasingly using LLMs as a starting point for research, replacing traditional search engines. 2) LLMs help bring clarity, organization, and tact to communications. 3) LLMs are also used to summarize and analyze large texts. (Feel free to try it on this letter!)
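If you would like to try that third use case on this letter, here is a minimal sketch using the OpenAI Python SDK; the file name and model name are my assumptions, and any comparable LLM API would work just as well.

```python
# Minimal sketch: summarizing a long document with an LLM.
# Assumes the OpenAI Python SDK (v1+) with an OPENAI_API_KEY in the environment;
# the file name and model name are assumptions, not endorsements.
from openai import OpenAI

client = OpenAI()

with open("q2_letter.txt") as f:   # hypothetical file containing this letter
    letter = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever your provider offers
    messages=[
        {"role": "system", "content": "Summarize the document in five bullet points."},
        {"role": "user", "content": letter},
    ],
)
print(response.choices[0].message.content)
```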
While these tools might only save five minutes at a time, white-collar work involves so much communication and research that these small savings quickly add up to several hours each week. With the average total hourly employee cost around $46, an AI tool that saves an hour or two easily justifies prices far higher than the current $20–30 per month for LLMs. That’s my high-level assessment, but I believe the next wave of AI products will deliver even greater value by supporting highly paid technical professionals.
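The back-of-envelope math behind that claim, as a sketch: the hourly cost comes from the text, while the hours saved are my illustrative assumption.

```python
# Back-of-envelope value of small weekly time savings, per the argument above.
HOURLY_COST = 46.0         # average total hourly employee cost, from the text
WEEKS_PER_MONTH = 52 / 12  # ~4.33

for hours_saved_per_week in (1, 2):   # assumed savings, for illustration
    monthly_value = hours_saved_per_week * HOURLY_COST * WEEKS_PER_MONTH
    print(f"{hours_saved_per_week} hr/week saved ≈ ${monthly_value:,.0f}/month of value "
          f"vs. a $20–30 subscription")
```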
Much of the current pessimism around AI is based on the narrow assumption that the next generation of models will resemble today’s LLMs. They won’t. LLMs are trained on datasets so vast they encompass nearly the entirety of written language—requiring extraordinary computing power, even by AI standards. In contrast, the next wave of AI models will focus on more specific tasks, such as enabling AI agents to explain their reasoning or engage in self-supervised learning. These models will likely require far less processing power—perhaps by an order of magnitude—than the immense resources needed to train LLMs. Since hardware accounts for about 60% of current training costs, and electricity another 5%, the next generation of AI models should be much more economical to develop.
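As a rough sensitivity check, take the cost shares above and assume, hypothetically, that the next generation needs one-tenth the compute. Even then, total training cost falls by only a bit more than half, because the remaining costs (staff, data, and so on) do not scale with compute.

```python
# Rough sensitivity of training cost to a compute reduction, using the shares above.
HARDWARE_SHARE = 0.60     # share of training cost tied to hardware, from the text
ELECTRICITY_SHARE = 0.05  # share tied to electricity, from the text
OTHER_SHARE = 1 - HARDWARE_SHARE - ELECTRICITY_SHARE

COMPUTE_REDUCTION = 10    # assumed order-of-magnitude drop in processing needed

# Hardware and electricity scale with compute; everything else is held fixed.
new_cost = (HARDWARE_SHARE + ELECTRICITY_SHARE) / COMPUTE_REDUCTION + OTHER_SHARE
print(f"New training cost: ~{new_cost:.0%} of today's")  # ~42%
```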
NVIDIA, Forsaking All Others:
Given that AI is genuinely useful, its commercial potential ultimately depends on its economics. Training a single large model requires an astronomical number of calculations (on the order of 10^23 floating-point operations or more for today's largest models), and cost-efficiency comes from numerous small optimizations that eliminate unnecessary calculations and reduce resource bottlenecks. When it comes to optimizing the massive parallel computing needed for AI development, NVIDIA is unmatched in providing out-of-the-box solutions.
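To see why the count is so large, a widely used rule of thumb puts training compute at roughly six floating-point operations per model parameter per training token. A sketch with publicly reported GPT-3-scale figures:

```python
# Rule-of-thumb training compute: ~6 FLOPs per parameter per training token.
# GPT-3-scale figures (publicly reported) are used purely for illustration.
parameters = 175e9   # 175B parameters
tokens = 300e9       # ~300B training tokens

training_flops = 6 * parameters * tokens
print(f"Estimated training compute: {training_flops:.2e} FLOPs")  # ~3.2e23
```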
NVIDIA has long distinguished its products from competitors through extensive software optimizations, particularly in AI. Jensen Huang, founder and CEO of NVIDIA, recognized early on that the company’s Graphics Processing Units (GPUs), originally designed for video games, were also well-suited for the matrix algebra underlying much of AI. NVIDIA began quietly integrating support for AI through CUDA, a highly specialized platform designed to optimize parallel processing tasks, long before other GPU makers followed suit.
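The link between graphics and AI is linear algebra: the core operation in both is large matrix multiplication, and because each output element can be computed independently, the work maps naturally onto thousands of GPU cores. A minimal illustration (shown in NumPy; a framework like PyTorch would dispatch the same operation to a GPU):

```python
# The workhorse of both graphics and neural networks: matrix multiplication.
# Each output element is independent, which is why thousands of GPU cores
# can compute them in parallel.
import numpy as np

batch, d_in, d_out = 512, 1024, 1024
activations = np.random.randn(batch, d_in).astype(np.float32)
weights = np.random.randn(d_in, d_out).astype(np.float32)

# One neural-network layer (ignoring bias and nonlinearity) is a single matmul:
outputs = activations @ weights
print(outputs.shape)  # (512, 1024)
```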
To understand the power of NVIDIA’s CUDA architecture and its optimizations, consider why alternatives generate so little excitement. Last year, AMD (a smaller fund holding) launched a processor offering 300% more power at two-thirds the price of NVIDIA’s. Yet, despite its stronger specs and lower cost, the market largely ignored AMD’s entry. The reason? NVIDIA’s processor, with its software optimizations, performs the most popular and intensive AI training tasks three times faster than AMD’s offering. Additionally, while other chipmakers rely on third-party network technology, NVIDIA distinguishes itself by providing fully integrated computing and networking hardware, thanks to its acquisition of Mellanox. This integration allows NVIDIA to optimize performance across its networked chips, giving it a significant edge in large-scale deployments for major customers.
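The price-performance arithmetic explains the market’s indifference. Treating “three times faster” as effective throughput on the workloads that matter, a sketch of the comparison:

```python
# Effective price-performance under the figures quoted above.
# Normalize NVIDIA's price and AMD's effective throughput to 1.0.
nvidia_price = 1.0
amd_price = 2 / 3 * nvidia_price          # two-thirds of NVIDIA's price

amd_throughput = 1.0
nvidia_throughput = 3.0 * amd_throughput  # 3x faster on key training workloads

print(f"NVIDIA throughput per dollar: {nvidia_throughput / nvidia_price:.2f}")  # 3.00
print(f"AMD throughput per dollar:    {amd_throughput / amd_price:.2f}")        # 1.50
# Despite stronger paper specs and a lower price, AMD delivers half the
# effective work per dollar once software optimization is accounted for.
```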
NVIDIA’s massive advantage is likely to grow as its scale energizes the software flywheel effect. AI models are highly resource-intensive, making CUDA’s powerful optimizations especially appealing to developers. As more developers adopt CUDA, resources like tutorials and examples proliferate, simplifying further adoption. This snowball effect reinforces NVIDIA’s dominance by making it easier for new users to leverage its platform.
NVIDIA gathers real-time usage data from its software, uncovering insights that might otherwise go unnoticed. These insights allow NVIDIA to continuously refine its products, improving efficiency and encouraging even greater adoption. In fact, NVIDIA’s initial push beyond graphics was sparked when they observed scientists repurposing their GPUs’ graphics capabilities to accelerate research—a trend they then capitalized on. This combination of growing user adoption, feedback, and continuous product refinement ensures that NVIDIA’s lead in AI computing will strengthen as its ecosystem evolves.
The main argument against NVIDIA’s long-term dominance is that key customers are supporting alternatives to avoid reliance on a single supplier. While these customers are rallying others in the AI ecosystem around alternatives like OpenCL, it’s unlikely this will significantly challenge NVIDIA’s dominance. The integration of processing hardware, networking hardware, and software allows NVIDIA to make superior optimizations compared to non-integrated competitors. The push for alternatives is more likely to facilitate the use of hardware that NVIDIA doesn’t make than to open the door for direct competition.
NVIDIA’s lead in software optimization is immense. Even if competitors developed comparable software, they would struggle to optimize it as effectively without the many layers of integrated hardware that NVIDIA offers. I believe the push for alternatives to NVIDIA’s CUDA architecture is aimed at enabling the integration of more efficient hardware solutions for AI, as some tasks benefit greatly when moved from general-purpose GPUs to highly specialized processors. Google, for example, buys large quantities of NVIDIA GPUs but supplements them with its proprietary Tensor Processing Unit (TPU), an ASIC optimized for energy-efficient matrix algebra. This suggests a cycle where new features are developed on versatile hardware like NVIDIA’s and later migrated to ASICs, freeing up GPUs for further innovation.
Though industry analysts understand how workloads shift from GPUs to specialized chips, the concept seems less well appreciated in the finance community, where GPUs and ASICs are often treated as direct competitors. That framing is an oversimplification: the two types of hardware are more complementary than competing. I believe this confusion explains some of the gap between the financial markets’ AI narrative and industry estimates.
Hardware:
As we are already seeing, companies providing the hardware needed to train AI models, most notably NVIDIA, are poised to enjoy outsized profits as a multi-year arms race unfolds. Successful AI models will eventually migrate to purpose-built hardware like ASICs. The depth of refresh cycles for training hardware will depend on how well recent AI models perform. Meanwhile, demand for inferencing hardware, such as ASICs, will hinge on the popularity of specific AI models and their improvement from one generation to the next.
There are two significant publicly traded companies that design and manufacture ASICs for AI: Broadcom and Marvell. While I like both, I strongly prefer Broadcom at similar valuations. Broadcom’s business is more diverse, it offers a 3% dividend, and its CEO, Hock Tan, is an exceptionally capable industry veteran. These factors provide appealing stability amid the speculation surrounding AI and the cyclicality of the computer hardware sector.
Finally, there is a potential middle ground between the capabilities of GPUs and ASICs: the Field-Programmable Gate Array (FPGA). Like ASICs, FPGAs are designed for efficiency, but unlike ASICs, they can be reconfigured after production, though reconfiguration takes real engineering effort. AMD, a longtime competitor of NVIDIA, acquired Xilinx, the leading FPGA maker, in early 2022, and I expect they are positioning themselves to leverage this technology to open a new front in AI hardware competition. AMD’s CEO, Lisa Su, is brilliant and understated, so it would be very characteristic of the company to remain quiet until a launch is near.
Cloud:
Who is buying all this hardware? Right now, it’s largely cloud infrastructure providers, many of whom also offer platforms and services built around their infrastructure. The cloud ecosystem is a vast network that allows organizations to manage data, run applications, and deploy resources over the internet.
There are numerous dimensions of competition within the cloud, allowing layers and niches to thrive, even with high-quality one-stop shops like Amazon’s AWS and Microsoft’s Azure. Google is making significant strides in this space, though I have genuine concerns about the vulnerability of their core search and advertising businesses. IBM and Oracle are emerging as strong contenders after early missteps that kept their valuations relatively modest. I also like several specialized cloud players, such as Databricks (private), Snowflake, Datadog, and MongoDB. However, their high valuations are based on growth assumptions that may not reflect the risk of operating in niches where competition is fairly dynamic.
Cloud infrastructure providers are already benefiting from the AI boom, as systems optimization enables them to create value from their scale. Alphabet, for example, is differentiating its offerings by incorporating ASICs into its infrastructure. Meanwhile, companies like Amazon, Google, and IBM are also capitalizing on the trend by selling their AI models as services. Many niche cloud providers, such as Snowflake, MongoDB, and Databricks, benefit from AI’s rise by offering solutions to manage the extremely large datasets required to train AI models.
Agents As A Service:
When we talk about AI as a product, we’re usually referring to AI agents, not AI models. Models are the mathematical frameworks that power features, while agents package those features into useful software that can perceive its environment, make decisions, and take actions to achieve specific goals. For example, Siri is an AI agent, and it relies on models to perform tasks like recognizing speech from audio.
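The distinction is easiest to see in code. Below is a minimal sketch of the agent pattern, with the model stubbed out as a placeholder function rather than a real API:

```python
# Minimal sketch of the agent pattern: perceive -> decide -> act.
# The "model" here is a stubbed placeholder, not a real API.

def model_decide(observation: str) -> str:
    """Stand-in for an AI model (e.g., speech recognition or an LLM)."""
    return "set_timer" if "timer" in observation else "no_op"

ACTIONS = {
    "set_timer": lambda: print("Timer set for 10 minutes."),
    "no_op": lambda: print("Nothing to do."),
}

def agent_step(observation: str) -> None:
    """One cycle of the agent: perceive the input, decide, then act."""
    decision = model_decide(observation)   # the model supplies the decision
    ACTIONS[decision]()                    # the agent takes the action

agent_step("Hey, set a timer for ten minutes")  # -> Timer set for 10 minutes.
```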
While AI products will find both consumer and enterprise customers, it’s hard to predict who will produce the next ‘killer app’ for consumers—though Meta will likely copy it quickly. However, some enterprise software companies have clear advantages, such as large datasets and extensive customer bases, which they can leverage to efficiently develop and monetize AI products. I tend to think of these companies as either legacy vendors that struggled with the transition to cloud infrastructure or cloud-native software-as-a-service providers.
Of the legacy enterprise software vendors, Oracle and IBM have made significant strides in adapting to the age of AI. IBM held a strong position in AI development and implementation for some time but struggled to convert that leadership into sales. Both companies got their cloud services businesses in order just in time to catch the AI wave, and I expect they will do well.
SAP is another legacy player that struggled with the shift to cloud. Like many companies, they claim AI is a priority but offer few details. SAP has an enviable trove of enterprise data and access to large customers, though it does not sell cloud infrastructure directly. Without a clear AI strategy, SAP’s valuation fluctuates with market narratives, but its potential for a rebound remains strong. The fund has been selectively buying shares when pessimistic narratives bring its PE ratio closer to historical norms (25–35).
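As a sketch of that discipline: the band comes from the text, while the price and earnings inputs below are purely hypothetical.

```python
# Simple valuation-band check mirroring the buying discipline described above.
PE_BAND = (25, 35)   # SAP's historical P/E norms, per the text

def is_buy_candidate(price: float, eps: float) -> bool:
    """True when the trailing P/E has come back to or below the historical band."""
    pe = price / eps
    return pe <= PE_BAND[1]

print(is_buy_candidate(price=180.0, eps=6.0))  # P/E 30 -> True (hypothetical inputs)
print(is_buy_candidate(price=300.0, eps=6.0))  # P/E 50 -> False
```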
Newer enterprise software players like Salesforce, ServiceNow, and Adobe are well-positioned to develop, deploy, and sell AI products. Of these, the fund holds a sizable position only in ServiceNow. While Salesforce and Adobe dominate their markets, their core products may be vulnerable to leaner competition. ServiceNow, however, operates in fragmented markets where it has been steadily gaining market share.
Conclusion:
The valuations of AI-related businesses have fluctuated wildly since June. Oracle’s valuation went from modest to rich in just the few days I’ve been writing this letter. With a sound understanding of the factors shaping the AI market, we can anticipate who will capture value as AI commercializes and when that value is likely to appear in financial statements. When the market mood swings between exuberance and skepticism, it’s important to stay focused on the fundamentals: identifying companies with real advantages, sustainable business models, and the ability to capitalize on AI’s growth. While volatility in AI stocks will likely persist, long-term investors who align with knowledgeable professionals are well-positioned to benefit as the sector matures.
Nicholas Carpenter,
Manager, NJC Capital Management LLC
https://www.njchorizon.com/