Gen AI and the Future of Investment Research


Author Name: Indy Sarker
Category: Tech Innovation
Date: March 28, 2024

Generative AI and Large Language Models (LLMs) are reshaping several industries, with high expectations that they will drive innovation and productivity. The world of investment research is not immune to these winds of change. How does the role of financial analysts covering publicly listed companies change? What will investors want from their research teams going forward? Such questions place a heavy strategic burden on banks, brokers, and asset managers who run research divisions (at considerable cost) to decide how best to embrace this important tech innovation in their businesses.

The world of bottom-up fundamental investment research analysts, especially when it comes to equity research, has largely remained untouched by change for the last 30 years. The advent of desktop software to support financial forecasting and modelling, and the subsequent emergence of data-referencing and number-crunching tools, was the last significant disruptor in this space.

The individual analyst was, and remains, key to the analytical output. All inferences and the final output, in the form of investment conclusions and recommendations, rested heavily on the analyst. Analytical leadership was established by individual analysts rather than by technology or its growing role. We believe the future is not an either-or choice but a question of how human intelligence (embodied in the analyst and supporting tools) combines with artificial intelligence to drive investment decision-making.

The world of quant modelling and investment decision-making, driven by complex mathematical models and powered by machines, has been around on trading desks for over three decades. Quant-based investing, via mathematical and statistical models, has triggered Buy/Sell trades and generated significant returns for many investors. Such trading models were “intelligent” enough to make those trades; the advent of machine learning, however, has added a whole new dimension to data analysis. We expect to see greater synthesis between Gen AI and the quant-trading world in the search for that extra competitive edge in the investment process.

Our focus in this piece is on bottom-up fundamental research and whether it lends itself to Gen AI capabilities in a way that does not undermine the human intelligence element (but enhances it!) and simultaneously does not “degrade” the research offering in the marketplace.

Calibrating the path to adopting Gen AI

Decision-makers need to look at this as an evolutionary path with the appropriate checks and balances to ascertain what works and what requires greater scrutiny to judge its effectiveness. The adoption of Gen AI to support the investment research organization is not expected to be a “big bang” innovation but a carefully laid out and calibrated path that reflects on the actual results at each stage in the journey before embracing its adoption on a larger scale.

Are you dealing in facts, or inferences and assertions?

Most LLMs, such as ChatGPT, have developed a good grasp of factual queries from their users. As an analyst, if you wish to understand or map the supply chain or value chain of an industry, or assess the risk factors around a particular outcome, you will get an intelligent response. It is then up to the individual to apply context and fact-check.

Exhibit 1: Gen AI - Useful Tool for Industry Knowledge and Dynamics

When it comes to conclusive inputs or reasoning, as a user of LLMs in any specialised domain you need to be very careful about the sources and potential biases in the results. In other words, you cannot rely on them as reported; they require iterations and corrections. This “fine-tuning” process is precisely how the LLM learns! Inferences require careful analytical assessment and therefore cannot be blindly taken from an LLM query.

When it comes to drawing conclusive assertions from LLMs, you will quickly identify their present limitations. In the illustration below, ChatGPT does a good job of laying out some basic theoretical considerations for investing in gold, but it is not willing to go the extra mile and reflect on economic data, both intertemporal and relative prices, to offer the kind of conclusive assertion a commodities analyst covering gold might provide.

The quality of the output also depends on the specialisation level of the LLM you are leveraging, as well as on the paid versus free options out there.

Exhibit 2: Limitations of Gen AI content when it comes to Investing

Leveraging LLMs: Identifying the low-hanging fruits

Given the above backdrop, where does the value of Gen AI lie for the investment research world? First things first: a research organization needs to consider how it wishes to empower its business to bring superior output to its client base. And what is the definition of “superior” output or engagement?

We believe any steps in this direction must follow an evolutionary path that focuses on some of the low-hanging fruit and creates a vision and execution plan for the more complex outcomes of the future. Going up the intelligence curve, particularly in a proprietary sense, requires time, resources, and expertise (let’s club it with resources!) to drive conclusive value-add to your client base.

Productivity Challenges: How does one bring greater rigour to the “research process” without adding to the cost or time to publish (and engage clients!)? Analysts are pressed for time all the time. If their conceptual research and understanding can be supported with tools that give easy access to vital inputs in the research process, it should lead to more objective output. Analysts come with various levels of seniority, and for those who are new to a particular industry or product, LLMs can deliver quick insights to help them focus their analytical attention.

In the area of translation services, natural language processing (NLP) capabilities are driving efficiencies and cost savings for research organizations that produce content in their primary language and then translate their reports into English for a wider audience.

There are LLMs today that consume news feeds and similar services (e.g., R&D, innovation, journals, etc.) to develop a more contemporary, real-time assessment of impact. This area could be seen as the second phase of seeking productivity gains from Gen AI.

In the area of Alerts/Notifications, the scope of intelligence or “smartness” can be quite wide: from simple news-driven alerts to a Gen AI model reflecting on an alert and, based on its inference, triggering subsequent alerts with qualitative observations (driven off the learning path). Such observations can be edited by their consumer, which in turn feeds the model’s “learning” process iteratively.

Articulation Challenges: In the world of investment research, 80% of content is consumed in English, yet English is not the primary language of over 50% of the analyst community. Traditionally, firms have relied on editorial talent, but over time these editorial teams have been cut due to cost pressures; the proofreading burden on those who remain has gone up exponentially, not to mention the delays.

Exhibit 3: Creating Impactful Summaries using Gen AI

We believe Gen AI, as a tool to help analysts articulate their point of view better, would reduce both the lead time to reach clients and the cost of delivery. This comes in the form of language translation efficiencies as well as boosting analysts’ ability to express themselves more effectively in English.

Exhibit 4: Gen AI - Highlighting some Quick Impact Areas

Training an LLM with Proprietary Data: Time and Costs

An investment research firm regularly produces large amounts of data across its teams. This includes financial estimates and forecasts, valuation targets, references to news flow, and the textual assessments and conclusions in each research note. Take each research note published by a research firm and you can sort its content into various categories to feed a proprietary model.
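The categorization step above can be sketched as follows. The keyword rules here are a deliberately crude stand-in for the LLM or NLP classifier a real pipeline would use, and the category names and sample passages are invented for illustration, loosely mirroring the content types named in the text.

```python
import re

# Illustrative category rules; in practice an LLM or NLP classifier would
# label each passage far more robustly than keyword matching.
CATEGORY_PATTERNS = {
    "estimates": re.compile(r"\b(EPS|revenue|forecast|estimate)\b", re.I),
    "valuation": re.compile(r"\b(target price|P/E|valuation)\b", re.I),
    "news_flow": re.compile(r"\b(announced|reported|news)\b", re.I),
}

def categorize(passages):
    """Assign each passage of a research note to a content category."""
    tagged = {}
    for p in passages:
        label = next(
            (cat for cat, pat in CATEGORY_PATTERNS.items() if pat.search(p)),
            "narrative",  # fallback bucket for qualitative commentary
        )
        tagged.setdefault(label, []).append(p)
    return tagged

note = [
    "We raise our FY25 EPS estimate by 4%.",
    "Our target price moves to $120 on 18x P/E.",
    "Management announced a new buyback.",
    "We remain constructive on the sector.",
]
buckets = categorize(note)
```

Each published note, once bucketed this way, becomes structured training material for a proprietary model rather than an undifferentiated blob of text.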

Given that a research division covers a diverse range of industries and disciplines (e.g., equities, fixed income, macroeconomics and strategy, currencies, commodities), your proprietary model is never large enough in data terms to give you an objective, well-rounded view and/or eliminate assessment biases. Consequently, you need to combine your proprietary dataset with third-party large language models to drive the best results in terms of reasoning.

Global institutions, on both the sell-side and the buy-side, will look to claim branding leadership for their proprietary models over others, much as specialised investing institutions have done over the last three decades. Those institutions have credited their ability to consistently generate alpha to their proprietary “investment models”. Most institutions, however, lack that scale of resources and are better placed to focus on a defined path for how Gen AI can help their business, leveraging what is already out there rather than “building” everything in-house.

Focus on the learning path of an LLM

Every LLM is built on a foundation of machine learning and deep learning. Such models consume enormous amounts of data, sourced from the internet and a variety of other places depending on their specialisation. It is therefore critically important to ascertain the maturity of a specific LLM in the context of your needs (i.e., industry, speciality, and expectations).

Exhibit 5: Popular Large Language Models Dominated by Global Tech Companies

Challenges of LLMs and Weaving in Proprietary Data

One of the fundamental challenges of LLMs centres on knowledge representation and reasoning in AI systems. This is particularly true in specialised domains; careful attention must therefore be paid to how knowledge is processed and utilized, given the high risk of generating non-factual or misleading content or drawing wrong conclusions from the wider dataset.

We look at a few generic challenges that apply to the investment research landscape and need careful consideration at the adoption stage. The ability to generate high-quality content with meaningful insights in a specialised domain (again, this is all relative!) requires considerable programming talent and learning inputs, including the appropriate checks and balances in the learning process.

Even after addressing these basic technical needs, you will still encounter several inherent challenges of LLMs and their ability to produce quality output that can be used without further human intervention. In most instances, Gen AI output will require review and edits at the analyst’s end to drive meaningful application.

Some important areas to focus on when it comes to LLMs:

Quality of Output and Hallucinations: One of the primary causes of high levels of “hallucination” in LLM output is a lack of knowledge of a specific domain or specialisation. While the question of how LLMs are trained to reduce hallucinations can get quite technical, the bottom line is that the learning path is critical to reducing them. Besides, one must be very clear about what it is you want from the LLM in terms of Gen AI content.

Training a Model and Eliminating Biases: This area is trickier to address because, unlike hallucinations, which are relatively easy to detect, “biases” are by their very nature more subtle. For example, suppose that as a Tech Analyst you have always been more bullish than your peers in the market. If you feed your historical research reports into an LLM and give your proprietary dataset a higher degree of credence, the model will by definition give you output with a more “positive outlook” bias.
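The Tech Analyst example can be made concrete with a toy calculation. The corpora and weights below are invented for illustration; upweighting is modelled crudely as replicating the proprietary examples, a simple stand-in for giving that dataset higher credence during fine-tuning.

```python
from collections import Counter

# Toy corpora: a historically bullish proprietary set of research-note
# labels, and a balanced market-wide set. Sizes are invented.
proprietary = [("note", "bullish")] * 8 + [("note", "bearish")] * 2
market = [("note", "bullish")] * 5 + [("note", "bearish")] * 5

def training_mix(prop_weight: int) -> Counter:
    """Replicate proprietary examples prop_weight times, as a crude
    stand-in for upweighting the proprietary dataset in fine-tuning."""
    corpus = proprietary * prop_weight + market
    return Counter(label for _, label in corpus)

balanced = training_mix(1)  # equal credence: 13 bullish vs 7 bearish
skewed = training_mix(5)    # high credence: 45 bullish vs 15 bearish
```

Even this trivial count shows the mechanism: the more credence the bullish proprietary set receives, the more the training mix drifts toward a “positive outlook”, and nothing in the output will flag that drift as an error.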

Ease of Use and Access: The advent of open-source models such as Llama-2 from Meta, Falcon, and Jais has made it very easy to tap into LLMs. However, such ease of access comes with the added problem of malicious intent when it comes to “influencing” a model. Tracing such abuse is an increasing challenge given the growing diversity of LLMs.

Addressing the Factuality Challenge: Factuality is the capacity to generate content that passes a fact-check threshold; in other words, the probability that the LLM produces content consistent with established facts. One should not confuse this with hallucinations. When your LLM lacks the appropriate maturity, you will have to be careful about the extent to which you can rely on its inferences or suggested output.
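One practical response to the factuality challenge is to gate generated claims against a curated fact store before publication. The sketch below assumes a toy store of key facts; the entities, attributes, and values are invented purely for illustration, and a real system would back this with a maintained reference database.

```python
# Hypothetical curated fact store: (entity, attribute) -> established fact.
FACT_STORE = {
    ("ACME Corp", "listing_venue"): "NYSE",
    ("ACME Corp", "fiscal_year_end"): "December",
}

def fact_check(entity: str, attribute: str, claim: str):
    """Return True/False when the claim can be verified against the store,
    or None when no entry exists and the claim needs analyst review."""
    known = FACT_STORE.get((entity, attribute))
    if known is None:
        return None  # unverifiable: route to a human, do not publish as fact
    return known.lower() == claim.lower()

verified = fact_check("ACME Corp", "listing_venue", "NYSE")         # True
contradicted = fact_check("ACME Corp", "fiscal_year_end", "March")  # False
unverifiable = fact_check("ACME Corp", "CEO", "J. Doe")             # None
```

The three-way result matters: a claim the store contradicts is rejected outright, while a claim the store simply cannot verify is flagged for the analyst rather than silently accepted — which is exactly the caution the maturity point above calls for.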

At ANALEC, with our Resonate and InsightsCRM offerings, we are looking to devise the Gen AI-driven capabilities that best support our clients’ needs and aspirations. We do this keeping in mind the present limitations of LLMs and their inherent challenges, while working closely with our clients and their objectives.

In the next publication in this series, we will delve deeper into some of these topics to help you ascertain your priorities and define the most appropriate strategy for how Gen AI can support your investment research and client servicing organizations.
