Why Is Everyone So Excited About AI at the Moment?

Charlotte Wood, Head of Innovation and Fintech Alliances, Schroders: We’re excited about how this technology can give people faster and easier access to information that already exists, whether or not they would previously have had visibility of it. There’s huge opportunity for this tech to enable people to do things that they wouldn’t have been able to do before.

Alex Tedder, Head of Global and Thematic Equities, Schroders: Financial markets are particularly excited about the application of generative AI to businesses and the productivity gains that can be realized.

 

Can You Put Some Numbers to These Potential Productivity Gains?

Alex Tedder: There are around 1 billion knowledge workers—i.e., people who add value through their knowledge—globally. If we assume a knowledge worker earns, say, $15,000 per year (obviously it is more than this in the West, but much less in emerging markets), we end up with a global wage bill of $15 trillion a year.

Now, let’s assume 15% of the work these knowledge workers do is displaced by AI. In theory, that’s a saving of $2.25 trillion annually just from applying AI to certain parts of the knowledge spectrum. For sure, not all of this will translate into revenues for the companies that supply generative AI models. But even on a conservative basis, the annual addressable market could be around $450 billion.
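
For readers who want to trace the arithmetic, here is a minimal sketch of the calculation above. The 20% share of the savings captured as vendor revenue is an assumption inferred from the $450 billion figure; it is not stated explicitly in the interview.

```python
# Back-of-the-envelope arithmetic behind the figures quoted above (illustrative only).
knowledge_workers = 1_000_000_000      # ~1 billion knowledge workers globally
avg_annual_wage = 15_000               # assumed average wage, $ per worker per year

global_wage_bill = knowledge_workers * avg_annual_wage        # $15 trillion a year
displaced_share = 0.15                                        # share of work displaced by AI
annual_savings = global_wage_bill * displaced_share           # $2.25 trillion a year
revenue_capture = 0.20                 # assumed share of savings captured by AI vendors
addressable_market = annual_savings * revenue_capture         # ~$450 billion a year

print(f"Wage bill: ${global_wage_bill/1e12:.2f}T")
print(f"Annual savings: ${annual_savings/1e12:.2f}T")
print(f"Addressable market: ${addressable_market/1e9:.0f}B")
```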

These are big numbers, and that is without productivity gains. You can see how the same logic can be applied at the company level. The potential for cost savings and productivity gains is significant, and that’s why financial market participants are very excited.

 

Won’t These Cost Savings and Productivity Gains Be Offset by Social Misery, and Potentially Unrest, as People Lose Jobs?

Adelina Balasa, Lead Responsible AI Ambassador, Microsoft: Research suggests 65% of knowledge workers actually prefer to delegate some of their work to AI to be more productive, and leaders are twice as likely to be concerned with productivity as with cutting jobs. Of course, businesses can automate as much as they like, but you can’t have a successful business that relies solely on AI. You always need a human in the loop, especially with large language models, to make crucial decisions.

Alex Tedder: I think AI is actually going to raise the standard of living in certain parts of the world, especially in developing countries. AI enables a level of knowledge sharing that simply hasn’t been possible before, which will be hugely beneficial for institutions and individuals. 

I also see AI as a positive thing in the developed world, where aging populations are creating labor shortages, particularly in countries such as Japan. AI can allow fewer people to become more productive and effectively offset the loss of workers that results from changing demographics.

 

What Could AI Mean for the Investment Industry?

Charlotte Wood: There is so much data that investment teams have to consume every day, and as humans we’re limited in how much of it we can usefully absorb and factor into investment decisions. AI presents an opportunity to vastly improve how that data is consumed and applied, thereby improving investment decisions and client outcomes.

Alex Tedder: AI is going to play into how people allocate capital between different asset classes, different regions, and different sectors. And that’s where it gets interesting: There’ll be winners and losers at the corporate level, depending on how companies adopt and implement AI; essentially, how successful they are in improving productivity and creativity.

Financial markets have been very efficient in pricing the potential impact of AI, particularly what it could mean for revenue growth in the software and semiconductor sectors. What the market hasn’t really done yet is take a step back and think about what it will mean in terms of adding value in other sectors or industries. And at the corporate level, this is where it gets very interesting.

 

Looking at Financial Professionals Specifically, How Might AI Be Used in Their Day-to-Day Business?

Adelina Balasa: Where AI’s value comes in is its ability to help the financial professional understand my situation faster, communicate with me more quickly, and give me a more personalized experience.

But I wouldn’t take financial advice from AI without a human in the loop. The type of AI we’re talking about shouldn’t be relied upon to generate numerical figures, only to extract, understand, and process them. I would take generic advice such as “diversification is a good thing,” but not personal advice that hasn’t been vetted by a human.

Large language models can also work with numbers, but that’s because they learn about them from existing data and content, not because they truly understand them or can apply that understanding to new situations.

In fact, some AI models have content safety systems built in that can detect if you are asking, “What should I do in this specific situation?” The AI model will give you a general answer, but the content safety system, in addition to detecting and blocking inappropriate language, will add a note at the end that you should seek this advice from a professional.
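
As a purely hypothetical illustration of the kind of guardrail described above (the detection pattern, wording, and function name below are invented for the example and are not any specific vendor’s safety system):

```python
import re

# Very simplified stand-in for a content safety layer: detect requests for
# personal advice and append a referral to a professional.
PERSONAL_ADVICE = re.compile(r"what should i do", re.IGNORECASE)

def apply_safety_layer(question: str, model_answer: str) -> str:
    if PERSONAL_ADVICE.search(question):
        return (model_answer
                + "\n\nThis is general information only. For advice on your "
                  "specific situation, please consult a financial professional.")
    return model_answer
```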

Charlotte Wood: The really amazing thing about this technology is that you don’t have to be a data scientist to use it in your everyday life or in your job. For financial professionals, AI can be used to augment client interaction. For example, it can check that the professional asked all the right questions of their client.

Because of this technology’s accessibility, it’s also more readily available to smaller companies that, for example, may not previously have been able to afford teams of data scientists to deploy machine learning.

It can also make it easier to personalize information, help clients interpret data, and disseminate it. This is an area where AI can have a massive impact by doing much of that work for you.

 

What Are Some of the Limitations of, and Risks Associated With, Generative AI?

Adelina Balasa: Most large language models have been built using open source data from the internet, which doesn’t help you with data lineage (i.e., you can’t necessarily verify where the information is coming from and whether it’s a trusted source). This is why responsible AI best practice is to combine generative AI models with other data solutions such as search engines, which can give you the data traceability and the transparency you need to trust the answer.
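
The sketch below illustrates, in minimal form, what combining a generative model with a search or retrieval step can look like in practice. The functions search_trusted_sources() and ask_llm() are hypothetical placeholders standing in for a retrieval service and a model API; they are not a specific vendor’s product.

```python
def search_trusted_sources(question: str) -> list[dict]:
    """Hypothetical retrieval step, e.g., an enterprise search index of vetted documents."""
    return [{"source": "example-report-2023.pdf", "text": "…relevant passage…"}]

def ask_llm(prompt: str) -> str:
    """Hypothetical call to a generative model."""
    return "…model answer citing [example-report-2023.pdf]…"

def answer_with_sources(question: str) -> str:
    # 1. Retrieve passages whose provenance is known, giving the answer data lineage.
    documents = search_trusted_sources(question)
    context = "\n\n".join(f"[{d['source']}] {d['text']}" for d in documents)

    # 2. Ask the model to answer only from those passages and to cite its sources,
    #    so the response can be traced back to where the information came from.
    prompt = (
        "Answer the question using only the sources below, citing the source for "
        "each claim. If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```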

Even with other data solutions attached, generative AI can sometimes “hallucinate” (i.e., make up things that aren’t true), and because its output is not deterministic, you need to adopt specific responsible AI frameworks and prompt-engineering techniques to mitigate those hallucinations.

In generative AI models, you can also set a threshold that dictates how creative the AI should or shouldn’t be. If you’re writing a poem, you can push the threshold to its maximum creativity, but if you want to extract numbers, you can tell it to be exact and non-creative.
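
In most generative AI APIs this creativity threshold is exposed as a “temperature” parameter. The snippet below is a minimal sketch using the pre-1.0 OpenAI Python library that was current at the time of writing; other providers expose an equivalent setting, and the model name and prompts are illustrative only.

```python
import openai  # assumes the openai package (<1.0) and an API key are configured

# Low temperature: exact, repeatable output, e.g., extracting a figure from text.
extraction = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Extract the revenue figure from the following text: <document text>"}],
    temperature=0,    # be exact and non-creative
)

# High temperature: more varied, creative output, e.g., writing a poem.
poem = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Write a short poem about diversification."}],
    temperature=1.2,  # this API accepts values from 0 to 2
)
```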

Charlotte Wood: Once you’ve put data into something such as ChatGPT, you no longer have control of what’s being done with that data. OpenAI has the right to store it and use it in future.

I would definitely encourage people to be cautious about using a public version of generative AI. And you certainly don’t want to put your clients’ details into it because that’s effectively posting them on the internet directly.

 

Who Should Be Held Accountable to Make Sure AI Is Used Responsibly?

Adelina Balasa: It’s a shared responsibility between organizations that provide the technology, organizations that use the technology, and governments. There are actually quite a few guidelines already in place in Europe, for example the EU AI Act, and in the UK we have an AI White Paper and transparency best practices. But I do think governments need to pass actual legislation that everyone has to follow so that AI technology is trustworthy.

 

To learn more about opportunities in AI, talk to your financial professional. 

 

Important Risks: Investing involves risk, including the possible loss of principal. Focusing on one or more sectors may lead to increased volatility and risk of loss if adverse developments occur.  

Hartford Funds may or may not be invested in the companies referenced herein; however, no particular endorsement of any product or service is being made.

The views expressed herein are those of Schroders Investment Management (Schroders), are for informational purposes only, and are subject to change based on prevailing market, economic, and other conditions. The views expressed may not reflect the opinions of Hartford Funds or any other sub-adviser to our funds. The opinions stated in this document include some forecasted views. Schroders believes that they are basing their expectations and beliefs on reasonable assumptions within the bounds of what they currently know. The views and information discussed should not be construed as research, a recommendation, or investment advice, nor should they be considered an offer or solicitation to buy or sell any security. This information is current at the time of writing and may not be reproduced or distributed in whole or in part, for any purpose, without the express written consent of Schroders or Hartford Funds.

WP767 3037390
