In September, California enacted a series of AI-related measures. What do these laws mean for you and your business?
European lawmakers are finishing work on an AI act. The Biden administration and leaders in Congress have their plans for reining in artificial intelligence. Sam Altman, the chief executive of OpenAI, maker of the AI sensation ChatGPT, recommended the creation of a federal agency with oversight and licensing authority in Senate testimony last week. And the topic came up at the Group of 7 summit in Japan.
State legislation addressing artificial intelligence has surged, as lawmakers across all 50 US states and the District of Columbia placed a heavier focus on AI regulation in 2023.
Generative AI, a transformative technology, gained widespread attention with the launch of ChatGPT and GPT-4 in late 2022 and early 2023, respectively. While experts have been fascinated by this technology for years, its consumer-facing applications have now captured the public's imagination.
Shadow AI is the use of AI within an organization without the knowledge or oversight of the IT or Compliance department. Within Shadow AI, two categories of concern are particularly relevant.
The prevalence of fraudulent activity presents a major problem for firms across the financial sector. In fact, according to a report from the Association of Certified Fraud Examiners (ACFE), the typical organization loses 5% of revenue to fraud each year. In addition to the risk of identity theft, phishing scams, and other types of consumer fraud, firms should also be on the lookout for occupational fraud, a type of financial crime that occurs when an employee, manager, or third party misuses an organization's resources for personal gain.
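To put the ACFE's 5% figure in concrete terms, the expected annual loss scales directly with revenue. A minimal sketch, using a hypothetical firm's revenue as an assumption:

```python
# Illustrative only: applies the ACFE's 5%-of-revenue estimate
# to a hypothetical firm. The revenue figure is an assumption.
ACFE_FRAUD_RATE = 0.05  # typical share of revenue lost to fraud per year


def expected_fraud_loss(annual_revenue: float, rate: float = ACFE_FRAUD_RATE) -> float:
    """Estimate the expected annual fraud loss for a given revenue."""
    return annual_revenue * rate


# By this estimate, a firm with $50M in annual revenue would lose
# about $2.5M to fraud each year.
print(f"${expected_fraud_loss(50_000_000):,.0f}")  # → $2,500,000
```

At that scale, even a modest reduction in the fraud rate pays for substantial detection and controls spending.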
Supervisory Statement 1/23 (SS1/23) is a supervisory statement from the UK's Prudential Regulation Authority (PRA) on Model Risk Management (MRM) for banks, taking effect May 17, 2024. While SS1/23 covers the governance of all models within a firm and defines "model" quite broadly, it also explicitly addresses AI in a sub-principle of the regulation, as noted on the Bank of England's website.
Shadow AI refers to the use of AI applications or models within an organization without the explicit consent or knowledge of the firm's IT organization. There are normally two categories of concern when it comes to Shadow AI.
Financial institutions are rapidly adopting AI within their inventory of complex models. We believe, along with most internal auditors and risk managers, that it is imperative to identify and manage the new business and regulatory challenges that accompany the use of AI.
At their core, AI models are simply another form of End User Computing (EUC) application.
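One practical consequence of this view is that AI models can be tracked in the same kind of inventory a firm already keeps for EUC applications. A minimal sketch, with hypothetical field names, where an unapproved entry surfaces as potential Shadow AI:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class InventoryEntry:
    """One entry in a combined EUC/model inventory (illustrative fields)."""
    name: str
    owner: str
    kind: str                     # e.g. "euc" for a spreadsheet, "ai_model" for AI
    business_use: str
    last_reviewed: date
    approved_by_it: bool = False  # False flags the entry as potential Shadow AI


# An AI model is registered the same way as any other EUC application.
entry = InventoryEntry(
    name="customer-churn-summarizer",
    owner="Risk Analytics",
    kind="ai_model",
    business_use="Drafting churn-risk summaries for review",
    last_reviewed=date(2024, 1, 15),
)
print(entry.approved_by_it)  # → False, so it surfaces for compliance review
```

Reusing the existing EUC inventory process, rather than inventing a parallel one for AI, is what makes the EUC framing useful in practice.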
The recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence marks a significant step towards regulating and harnessing the power of AI.
Understanding the Executive Order: The executive order outlines a comprehensive framework for the responsible development and deployment of AI, emphasizing the importance of addressing potential risks associated with its use. From privacy concerns to algorithmic biases, the order aims to create a safer and more transparent environment for AI applications across various industries.