Looking at Financial Professionals Specifically, How Might AI Be Used in Their Day-to-Day Business?
Adelina Balasa: Where AI’s value comes in is its ability to help the financial professional understand my situation faster, communicate with me more quickly, and give me a more personalized experience.
But I wouldn’t take financial advice from AI without a human in the loop. The type of AI we’re talking about shouldn’t be relied upon to generate numerical figures, only to extract, understand, and process them. I would take generic advice such as “diversification is a good thing,” but not personal advice that hasn’t been vetted by a human.
Large language models can also work with numbers, but that’s because they learn about them from existing data and content, not because they truly understand them or can apply that understanding to new situations.
In fact, some AI models have content safety systems built in that can detect when you are asking, “What should I do in this specific situation?” The AI model will give you a general answer, but the content safety system, in addition to detecting and blocking inappropriate language, will also append a note at the end recommending that you seek that advice from a professional.
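To make that concrete, here is a toy sketch, in Python, of such a post-processing safety layer. The keyword patterns and the referral wording are illustrative assumptions; real content safety systems rely on trained classifiers rather than simple pattern matching.

```python
# A toy sketch of the safety layer described above: a post-processing check
# that spots requests for personal financial advice and appends a referral
# to a professional. Production systems use trained classifiers; these
# keyword patterns are illustrative stand-ins only.
import re

PERSONAL_ADVICE_PATTERNS = [
    r"\bwhat should i do\b",
    r"\bshould i (buy|sell|invest)\b",
    r"\bmy (portfolio|savings|mortgage|situation)\b",
]

REFERRAL_NOTE = (
    "\n\nThis is general information, not personal financial advice. "
    "Please consult a licensed financial professional about your situation."
)

def apply_safety_layer(user_prompt: str, model_answer: str) -> str:
    """Append a referral note when the prompt asks for personal advice."""
    asks_for_advice = any(
        re.search(pattern, user_prompt, re.IGNORECASE)
        for pattern in PERSONAL_ADVICE_PATTERNS
    )
    return model_answer + REFERRAL_NOTE if asks_for_advice else model_answer

# Example: the general answer gets the referral note appended.
print(apply_safety_layer(
    "What should I do in my specific situation?",
    "Generally, diversification helps manage risk.",
))
```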
Charlotte Wood: The really amazing thing about this technology is that you don’t have to be a data scientist to use it in your everyday life or in your job. For financial professionals, AI can be used to augment client interaction. For example, it can check that the professional asked all the right questions of their client.
Because of this technology’s accessibility, it’s also more readily available to smaller companies that, for example, may not previously have been able to afford to hire teams of data scientists to deploy machine learning.
AI can also make it easier to personalize information, help clients interpret data, and disseminate it; this is where it can have a massive impact.
What Are Some of the Limitations of, and Risks Associated With, Generative AI?
Adelina Balasa: Most large language models have been built using openly available data from the internet, which doesn’t help you with data lineage (i.e., you can’t necessarily verify where the information is coming from and whether it’s a trusted source). This is why responsible AI best practice is to combine generative AI models with other data solutions such as search engines, which can give you the data traceability and the transparency you need to trust the answer.
Even with other data solutions attached, generative AI can sometimes “hallucinate,” that is, make up things that aren’t true, which undermines trust in the entire generated output. This is where you need to adopt specific responsible AI frameworks and prompt engineering techniques to mitigate hallucinations.
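As an illustration of both points, grounding answers in traceable sources and instructing the model not to fabricate, here is a minimal Python sketch. It assumes the OpenAI Python SDK as one example API; the model name is an assumption, and search_index() is a hypothetical stand-in for a real search engine or vector store.

```python
# A minimal sketch of grounding a generative model in retrieved, traceable
# sources ("retrieval-augmented generation"). The OpenAI SDK is used as one
# illustrative API; model name and search_index() are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def search_index(query: str) -> list[dict]:
    # Hypothetical retrieval step: a real system would query a search
    # engine or vector store and return documents with their sources.
    return [
        {
            "url": "https://example.com/annual-report-2023",
            "text": "Q1 revenue was $4.2M; Q2 revenue was $5.1M.",
        }
    ]

def grounded_answer(question: str) -> str:
    docs = search_index(question)
    # Inline each source next to its text so every claim stays traceable.
    context = "\n\n".join(f"Source: {d['url']}\n{d['text']}" for d in docs)
    prompt = (
        "Answer using ONLY the sources below and cite the source URL for "
        "each claim. If the sources do not contain the answer, say so "
        "instead of guessing.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name, for illustration only
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # exact, non-creative output
    )
    return response.choices[0].message.content

print(grounded_answer("What was revenue in Q2?"))
```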
In generative AI models, you can set a threshold, often called temperature, that dictates how creative the AI should or shouldn’t be. If you’re writing a poem, then you can push the threshold to maximum creativity, but if you want to extract numbers, then you can tell it to be exact and non-creative.
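Here is a minimal sketch of that threshold in practice, again assuming the OpenAI Python SDK; most providers expose a similar sampling parameter, usually called temperature.

```python
# A minimal sketch of the "creativity threshold" described above, using the
# OpenAI Python SDK as one illustrative API. The model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Creative task: a high temperature lets the model vary word choice freely.
poem = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": "Write a short poem about saving."}],
    temperature=1.0,
)

# Extraction task: temperature 0 keeps output as exact and repeatable as possible.
figures = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "List every dollar amount in: 'Fees were $120 in March "
                   "and $95 in April.' Respond with the numbers only.",
    }],
    temperature=0,
)

print(poem.choices[0].message.content)
print(figures.choices[0].message.content)
```

Setting the temperature to 0 does not by itself eliminate hallucinations, but it makes extraction-style outputs as exact and repeatable as the model allows.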