Transcript



What is the trend?

In 2023, many AI projects were standalone, narrowly focused proofs of concept. In 2024, companies are looking to iterate and expand AI across their organizations. But they are also looking for guidance on establishing proper governance, risk management, ethical standards, and alignment with any relevant compliance regimes. Just as medical and academic organizations have data review boards, companies using AI will need an oversight body well versed in AI that can ensure product teams are using these capabilities responsibly.


Why is this trend important?

Governance and responsible use have been central concerns since AI's proliferation into the mainstream. They will become even more critical in 2024 as projects move beyond narrowly focused R&D efforts into broader applications that influence and guide a company's core business. Firms will need clear objectives for each AI solution, along with an understanding of what data is being accessed, any limitations to the quality of that data, what types of errors may be introduced into the system, and the impacts of those errors. Companies will need to be able to explain which decisions are influenced by AI and why those decisions are relevant and trustworthy.


What is Valorem Reply doing to prepare for this trend?

Valorem Reply can help a company define its broader AI strategy and identify where key risk areas may lie from an ethical, privacy, or explainability perspective. We can help companies build a North Star for their AI practices: What are the guiding principles and objectives for using the technology, and how can business results from its use be measured? Finally, we can provide the training and change management to help a firm's AI governance process get off the ground.