What Can You Do with This Book?
There are many practical ways to use the policy material in this book, among them:
- Reassure Top Management and the Board that You Have Practical AI Solutions: Fit AI-related governance and risk management activities into the rest of your organization’s governance and risk management activities, and thereby assure top management and the Board that AI matters are under control. Also give them confidence that AI work will be done in a manner that they have approved. Besides helping to achieve these aims, the policies in this book reflect good practices found in the published AI literature as well as interpretations of traditional information system risk management practices that are adapted to respond to the risks unique to the AI realm.
- Create New AI-Related Reporting Structures: Reassure top management and the Board of Directors that AI-related expenditures are cost-justified, and that AI efforts directly support the organization’s strategic objectives. Create new reporting structures and internal communications channels which give both top management and Board members the information they need to make good decisions about AI (such as which projects to fund), and to help them fulfill their AI-related legal duties (such as the fiduciary duty of oversight). These new reporting structures and communications channels will in turn help them to feel less anxious about the rapidly moving and highly consequential AI area.
- Prevent Shadow AI: Make sure that user departments do not go their own way via “shadow AI” by having a mandatory, unified, organized, and rational approach to AI governance and risk management. Give top management and the Board assurance not only that the use of AI foundation models will be appropriately handled, but also that custom AI solutions will be appropriately designed, tested, operated, and audited. Facilitate the integration of AI systems with traditional information systems, because the AI systems involved will have been developed with consistent standards and aligned approaches. These standards and aligned approaches will in turn not only expedite system interconnection, but also facilitate quality control, data standardization, and data management. Reflecting this unified, organized, and mandatory approach, this book includes a compiled and ready-to-go draft of an “AI Life Cycle Process Policy,” which also serves as an important tool for preventing shadow AI.
- Set-Up the Internal Process to Make Ethical Codes an Implemented Reality: Far too many AI ethics codes are treated by AI system builders as mere marketing and public relations exercises. With the material in this book, you can adopt practical mechanisms to make sure that an organization-specific AI Ethics Code is used not only when designing and building systems, but later when operating and auditing systems as well. This book focuses on a multi-stakeholder approach which naturally embraces ethics codes and related considerations; this in turn helps to reduce public relations problems, employee objections, and customer disputes. The book also addresses the internal organizational groups which are best used to compile, revise, approve, and enforce an organization-specific AI ethics code, such as an Artificial Intelligence Ethics Committee.
- Orchestrate the AI Governance and Risk Management Process: Compress the time required to wade through the vast literature about AI governance and risk management, by reviewing instead the policy recommendations, and their justifications, found in this new book. By simply reading the policy statements themselves, policy-writers get the essence of the risk reduction control involved, and they can then quickly assess whether the approach in the policy being reviewed might be appropriate for their organization. If your organization already has an AI risk management process in place, the material in this book can be used to substantially upgrade that process to be more cost-effective, more responsive to the organization’s unique needs, and more aligned with the organization’s strategic objectives. Policies tie everything together at the highest level of the organization, and from the unified view that they can provide, orchestration and coordination can naturally follow.
- Rapidly Compile Training and Awareness Materials: Utilize the ready-to-go “Artificial Intelligence Acceptable Use Policy” provided in this book as it is, or with slight modifications. Base a user training and awareness program on this document and the related policy ideas found in this book. The process of developing content for this user awareness program is considerably expedited because the research has largely already been done for you, and much of the writing has likewise been done for you -- you accelerate this work by drawing on the ideas, expressed as policies in this book, that are relevant to your firm. With this book’s material, a similar but abbreviated awareness-raising program can be rapidly developed for top management and members of the Board of Directors.
- Raise the Technical Team’s Level of Awareness About Practical Solutions: With the material in this book, you can rapidly raise the level of awareness internally about the practical options for reducing AI risks. Rather than skipping over known risks or known-to-be-likely risks, the technical staff now has specific control ideas, provided in the form of policy statements, which they can propose to manage the risks of AI systems. These control ideas are relevant whether those AI systems are built in-house, built by a third party, subscribed to via a third-party service, or accessed with some other approach. These practical control solutions include, for example, ways that the exchange of internal data for data held by another firm can be greenlighted, so that substantial amounts of additional data can be procured for training internal AI systems, while privacy, security, compliance, audit, and related requirements are all still met.
- Translate Best Practices in Information Technology Risk Management into the AI Area: Move the best practices from the traditional information technology risk management field into the arena of AI, so that they can make a difference when it comes to the unique risks of AI (such as emergent properties, hallucinations, and model collapse). For example, this book discusses the role and activity of an AI Governance Council, and why it is an important governance and management vehicle allowing organizations to effectively deal with the unique risks associated with AI. While the book responds to the new risks that AI presents, it does not go back over old material from traditional information systems risk management, but instead builds upon generally accepted material in that area. Thus, this book provides new AI-specific ideas which can be layered on top of the generally accepted risk management approaches that your organization already uses.
- Expeditiously Prepare for Audits and Regulatory Investigations: Move out of reactive scramble mode, and into a more proactive, coordinated, and on-target mode of effective action. For example, using the material in this book, your organization can include consideration of the environmental impacts of AI systems in the feasibility analysis and risk analysis phases of the AI Life Cycle Process. By having these and other internal processes already in place, and additionally standardized, your organization will be able to respond more appropriately to audits and regulatory investigations, if and when they take place. The data that the auditors and regulators will want to see will, in large measure, already have been prepared, so responding to their requests for information will be straightforward, and ideally met with approval. This type of data, for example, describing the environmental impacts of particular AI systems, can also help management to do a better job managing the risks of AI systems.
- Generate Policy Material to Show to Prospective Merger and Acquisition Partners: With the material in this book, you can prepare for the due diligence phase of a potential merger or acquisition, a major investment in your company, a deal with an important business partner, or an application for AI-related insurance (for example, directors’ & officers’ liability insurance or cyber-risks insurance). Being able to show that your organization is demonstrably on top of these AI risk matters, because you have a sophisticated internal policy statement, will help expedite a variety of activities, as well as engender the confidence of third parties.
- Rapidly Repair and Recover from Damage Done by AI Problems: After an AI-related security or privacy breach, or perhaps after an embarrassing withdrawal of a previously released AI product from the marketplace, with the material in this book, your organization can shore up internal processes to significantly reduce the chances that such incidents will happen again. By adopting a formal AI Life Cycle Process, as described in this book, serious problems of this nature are considerably less likely to happen, and if they do happen, any resulting losses are likely to be significantly smaller. To assist the reader in this process, this book comes with an already-compiled draft “AI Life Cycle Process Policy.” Besides repair and recovery, a considerable portion of this book is devoted to risk understanding, avoidance, prevention, detection, deterrence, transfer to other organizations, and correction.
- Show Definitive Risk Management Progress to Garner More User Trust: Through transparency and user-friendly AI-related processes and procedures -- such as an ombudsman process for complaints that were not handled adequately by first-level customer support -- your organization shows itself to be trustworthy in the eyes of AI system users. By adopting the related user-interface policies found in this book, your organization can foster user trust, and as a result, encourage users to buy or subscribe to your organization’s AI-related products and services. In abbreviated form, an example of such a policy is the requirement that, in newly initiated dialogs, the system always immediately reveal to a new user that he or she is interacting with an AI system.
- Communicate Your Organizational Culture Regarding AI: Third parties are looking for evidence that your organization has done a good job dealing with AI risks, and that it has the top management commitment, and the supporting organizational culture, to continue and sustain such good work. Showing these third parties, such as potential business partners, your AI risk management policy is a good way to communicate the work that your firm has already done. Such a policy also reveals your organizational culture, including its approach to dealing with third parties such as users, and it serves as a “litmus test” of sophistication for multiple third parties, including insurance companies and regulators. A policy additionally allows third parties to readily determine your organization’s corporate culture regarding AI risk management, and whether that culture is compatible with their own. If the organizational cultures are compatible, then certain activities, such as the exchange of data that could be used for AI system training, might then proceed.
- Obtain Additional Resources and Staffing: The risk management function in many organizations has historically been underfunded, understaffed, and underutilized. By detailing the complexity of the AI risk management area, through specific, practical policies describing how risks can be addressed, you create the specifics on which requests for additional resources and staffing can confidently be based. A silver lining of the policy-writing process is that members of the top management team, and members of the Board of Directors as well, are given a crash course on what needs to be done and why. When they understand how serious AI risks really are, and why they need to be addressed promptly, via specific policy statements and related awareness-raising activities, the resources and staffing will naturally follow.
- Reveal How the Procurement Department Can Expeditiously Work with the Information Technology Department to Reduce AI Risks: Through the policies found in this new book, you can set up internal processes and procedures which enable the Procurement Department to work with the Information Technology Department and the Legal Department, to ensure that all AI systems (no matter where they are sourced) meet certain essential requirements. This in turn not only discourages “shadow AI,” but it also helps to expedite AI-related work, so that new AI systems can be brought to market faster, and so that the AI systems that reach the market have well-managed levels of risk. A draft “AI Life Cycle Process Policy” is included in the book, and that draft policy statement can form the starting point for a customized process for achieving these and related objectives.
- Set-Up an AI Life Cycle Process Tailored to Your Organization’s Needs: With this book, you can expedite the movement of AI systems through feasibility analysis, risk assessment, system design, model selection, development and training, testing, release, and subsequent auditing. At the same time, you can create internal processes which ensure that risk is being adequately addressed at each stage of the process, because you have established checkpoints along the way. Each of these checkpoints makes sure that important considerations, like legal and regulatory compliance, have been properly addressed. Not only can this help make sure that AI systems are designed in line with strategic organizational objectives, but also that they are cost-justified, and in alignment with existing business practices (such as those in the Procurement Department). This in turn will help ensure that the significant expenses incurred in the AI area are productive and on target.
- Accelerate Time to Market with AI-Enhanced Products and Services: With the material in this book, you will be able to more rapidly bring new AI-enhanced products and services to market because your management team has assurance that the risks have been adequately dealt with as a part of the AI Life Cycle Process, which is extensively defined in this book. The overall process is expedited because attention is allocated by risk: AI systems ranked “high risk” receive greater scrutiny, and appropriately take longer to complete, while lower-risk systems move more quickly. Likewise, controls are applied only to those AI systems whose risk rankings warrant them, and this means that AI systems which don’t need certain controls are not burdened with unnecessary controls, unnecessary analyses, unnecessary documentation, and/or unnecessary delays.
- Match AI Usage with Organizational Risk Preferences: Align the ways in which AI is being used internally with the levels of risk tolerance and risk appetite that the Board of Directors and senior executives have defined as acceptable. Through the organizational structures and communication processes defined in this book, which involve the Risk Management Subcommittee reporting to the Board of Directors, as well as the use of an AI Governance Council, the level of risk can be managed on both an AI-system-by-AI-system basis, and on an organization-wide AI-related basis. With the policies in this book, the portfolio of risks represented by all internal AI systems can be portrayed, so that top management and Board members can readily make appropriate risk-related decisions.
- Evaluate the Adequacy and Appropriateness of Existing Policies: If your organization already has AI risk management policies, this book is an excellent way to determine what upgrades to those policies might now be in order. Since the book provides a compendium of 175+ AI-related control ideas expressed as policies, each organized by the department most impacted, and since the entire book can readily be searched by keyword, comparing and contrasting your existing policies with the policies in this book is a task that can be rapidly performed. An additional alphabetical index by policy title, combined with the fact that all policies reference at least one related policy, allows the policy-writer to rapidly cross-reference related policies in this book. Similarly, through the use of an extensive list of AI risks, as well as a significant list of recent AI disaster cases, the policy-writer can readily determine whether the organization’s existing policy statement adequately addresses the unique risks associated with AI.
- Bolster Existing Defenses Against Security and Privacy Attacks: Understand the existing control approaches in the AI area, expressed in the form of policies, and then be in an informed position to use all those which apply to your organization’s situation. Through the survey of control ideas provided in this book, the policy-writer can come to understand the evolving legal notion of the “standard of due care” as it applies to AI systems risk. This can help him/her to appreciate where an organization stands in terms of implementing the best practices in this AI risk area. This understanding in turn allows plans for upgrading controls and guardrails to be readily prepared. This understanding and the subsequent actions also help directors and officers show that they made “reasonable and appropriate” decisions regarding AI risk management, if later there should be questions as part of a related legal matter. Being able to show that “reasonable and appropriate” decisions were made is an issue in a variety of legal matters such as negligence, criminal recklessness, securities law violations, as well as unfair, deceptive and abusive business practices (UDAP).
- Come to Terms With Third Party AI Risks: With this material, you can put together a risk management plan to not only become aware of the AI-related risks associated with the use of third parties -- such as cloud service providers -- but also to make sure that the risk management steps the third parties take are in fact consistent with the risk management steps taken inside your organization. It is the interfaces between the systems provided by different parties, and the inconsistencies and discontinuities found at those points, which are often exploited by hackers, computer criminals, politically motivated saboteurs, and nation-state-funded cyber-attackers. For example, publicly accessible AI systems need to be able to stand up to Distributed Denial of Service (DDoS) attacks initiated by third parties.
- Set-Up Internal AI Processes to Assure Regulatory and Legal Compliance: AI systems have special legal and regulatory risks, risks that are not found in the domain of traditional information systems. These include hallucinations, systems learning over time so that the system that was tested yesterday is not the system in use today, and emergent properties (capabilities that AI systems teach themselves). With the policies in this book, the policy-writer can be assured that his/her organization’s staff is focusing on the important areas, and answering the tough questions, that are necessary in order to achieve legal and regulatory compliance. Many of the legal issues associated with AI have unfortunately been dealt with only after a serious problem has taken place (such as training AI systems using copyrighted material scraped from the web). This book instead takes a proactive stance, with the intention of keeping the policy-writer’s organization out of legal trouble, and additionally minimizing corrective legal and public relations costs.
- Clarify Roles and Responsibilities in the AI Risk Management Area: Bring order to what for many organizations is a haphazard approach to AI risk management, by establishing clear and definitive mandates for certain groups, such as an AI Ethics Committee. Define important roles such as a Chief Artificial Intelligence Officer (CAIO) and Chief Artificial Intelligence Risk Officer (CAIRO). When AI-related roles and responsibilities are clarified, then a rational plan for hiring AI-related talent can be adopted. Until then, it will most likely be the needs of priority projects that drive hiring decisions. This book instead helps organizations reach a position where a central group of AI talent (often called an AI Center of Excellence) can serve the many needs of various AI projects. Centralizing this talent in one group brings many benefits, including: (a) better support for a standardized way to deal with the risks of AI systems, (b) the ability to support a greater depth and breadth of technical talent in-house than would otherwise be possible, and (c) a synergy because technical people who are co-located in the same department share their knowledge, skills, and talents with each other.
- Build a Well-Orchestrated AI Center of Excellence: With greater focus and alignment across the policy-writer’s organization, based on policies found in this book, the organization can build an Artificial Intelligence Center of Excellence (AI-COE) whose activities truly support business units, and which accelerates the accomplishment of the organization’s AI-related strategic objectives. Such an AI-COE can in turn act as a magnet for attracting excellent AI talent, and it can become a cost-effective way to share the high cost of AI talent among various AI projects. Most often sitting in the Information Technology Department, the AI-COE bridges the gap between executive decision-making and AI technical implementation. This close linkage can help to minimize the chances of AI project failure. The AI Center of Excellence also does research for, and implements the decisions made by, the AI Governance Council.
- Rationalize Incentive Systems: Make sure the rush to market with a new AI system doesn’t overshadow other important considerations, such as adequate testing, as well as fully understanding, and then ethically responding to, the likely reactions of stakeholders after the AI system is released. Assign responsibility to specific individuals to assure that certain important jobs will be done (for example, Artificial Intelligence System Owners are responsible for paying the development costs of AI systems so that these systems are fully compliant with all requirements in the AI Life Cycle Process). Further, with the material in this book, you can alter existing organizational incentive systems so that the AI systems created have adequately managed risks, rather than being driven solely by existing financial incentives. For example, a mission-critical AI system should be able to be immediately replaced by a traditional information system, or perhaps by an alternative AI system, according to an existing contingency plan, if there were a problem with this normally operational mission-critical AI system.
_____________________________
PwC (PricewaterhouseCoopers) conducted a 2024 survey identifying the top risk mitigation priorities for the next 12 months. At the top of the list were digital and technology risks, notably adverse consequences from new or frontier technologies (like AI), and the inability to execute digital transformation initiatives. The area most in need of regulation, according to the respondents, was AI. The new book, described on this website, provides specifics about internal regulation, and how organizations can manage themselves. – See the 2024 Global Digital Trust Insights Report, PwC
_____________________________
“In a larger sense, what’s happening at OpenAI is a proxy for one of the biggest fights in the global economy today: how to control increasingly powerful AI tools, and whether large companies can be trusted to develop them responsibly.” – Kevin Roose, “Fight for the Future,” The New York Times, November 21, 2023 (about CEO Sam Altman’s complicated departure from OpenAI, and then later his rehiring, but the concurrent dismissal of the AI Ethics Committee). This book details exactly what can be done to develop, operate, use, and represent AI systems in a responsible manner.