Sample Policy

 

Format: Each policy in this book is presented in the following way -- a sequential policy number, a brief title, specific suggested wording for the policy, a justification and explanation for the policy, and references. The references at the end of each policy point to a references appendix, which contains over 2,000 specific references and includes hyperlinks to articles, books, webpages, videos, and other sources. The references appendix can help illuminate reasons to adopt such a policy, explain variations in the ways in which such a policy is approached, and further explain the nature of the risk which is addressed by the policy in question. The standardized format for all of the 175+ policies in this book, in conjunction with the indexes provided and the ability to keyword-search everything, allows the policy-writer to rapidly navigate this material.

 

----- an example follows -----

81. Archival Recording of Business Processes Used Before AI Systems Were Adopted

 

Policy: Whenever an AI system will significantly change or replace an existing way of doing production Company X business, an archival recording of the prior business process must be captured, to the satisfaction of the Artificial Intelligence – Center of Excellence (AI-COE), before the AI system in question can be moved into production operation. Such a recording must involve multiple approaches, including textual descriptions of the steps involved, videos of actions performed, and interviews with involved employees. This archival recording of the prior business process must also be permanently safeguarded and maintained by the AI-COE. The recording will be used for contingency planning and disaster recovery purposes, should the AI system in question for some reason be abandoned, turned off, or become unavailable.

Justification: This policy addresses a much larger problem, one that is evident not just in the conversion of business processes to be AI-assisted, but also in the relentless effort to cut costs by replacing human knowledge with standardized procedures, policies, forms, manufacturing molds, computer systems, etc. Through this effort to standardize and cut costs, organizations become more and more rigid and inflexible, while at the same time the ways they do things become more and more accelerated through predefined approaches. This rigidity, inflexibility, and narrowing can lead to catastrophic accidents, because there is very little buffer or slack in the system, and if the new standardized system cannot change, it breaks down. This trend is much more serious in the domain of AI, where what goes on inside the AI system is unknown (the so-called “black box” phenomenon), where activities are far more complex than they were with traditional information systems, and where AI systems learn and change over time without any human involvement. What’s more, the relatively expensive human talent previously employed to perform the business process in question has by then been dismissed, transferred to other positions, or retired. The humans who remain, especially after an AI system has handled part of a business process for a significant period of time, often have only a shallow understanding of what exactly is happening inside the system and of the context in which the business process takes place. The combined human-machine business process is accordingly rigid, fixed, and opaque. And when there is a serious accident, or perhaps a major unanticipated development, the organization will have a very difficult time responding adequately and in a timely manner.

Perhaps most illustrative of this problem is the nuclear accident at Fukushima, Japan. Nuclear power plants, like AI systems, are incredibly complex, and there is a lot that the operators don’t know, and as a result there is a lot that the operators are unprepared to handle. Everything is fine until some serious problem is encountered, or some significant unexpected event takes place -- like a very large earthquake. Then it may take a very long time before service is restored, and the losses may be very serious. Even though the Fukushima accident took place in 2011, as of the time of this book’s writing (2024) the problem still has not been remedied, and the serious radiation-release losses continue.

To get back to this policy, the intent is to provide the information that staff would need to return to the prior way of doing business, and thereby make a transition to a less complex way of performing business activities feasible. Note that the contents of the archive are much more extensive than the existing operating instructions, which most likely already exist, in relatively up-to-date form, for the way of doing business that is to be replaced by an AI-supported system. Separately, the information needed for each such archive will depend on the business process involved, and that fact is embraced by the part of this policy which requires that the archive be made to the satisfaction of the AI-COE.
Note that high-risk AI systems will separately be required to have contingency plans and disaster recovery plans, and these plans will no doubt make reference to the archive mentioned in this policy. The designation of an AI system as “high-risk” should be a part of a risk assessment, which in turn should be a part of the AI Life Cycle Process. For an example of such a Process, see the appendix to this book entitled “Artificial Intelligence Life Cycle Process Policy.”

Separately, this archive policy is a way to compensate, at least in part, for the fact that skilled labor is being replaced by unskilled labor accompanied by AI systems. Since this replacement means a loss of deep knowledge about the way things are done, this archive seeks to capture some of that deep knowledge before the people involved move on, retire, or die, and that knowledge is lost forever. Examined from a different angle, this policy provides some redundancy which may not be immediately used, but which at some point in the future may in fact be very important. If the cost of adopting this policy is considered too high, the policy’s scope can be narrowed so as to apply to mission-critical AI systems only. The author of this book nonetheless suggests that this narrowing of scope NOT be performed, because so many modern information systems are now so interconnected and interdependent, and because the deep knowledge of how things work is rapidly decaying. Some interconnectivity or interdependency may come to light only after a major problem or serious accident, and at that point it is too late to capture an archive such as the one that this policy requires.

On another note, this drive to standardize business activities into policies, procedures, forms, computer processes, etc., involves breaking up previously holistic activities into specific tasks, so that they can be more readily standardized and replaced by less costly ways to achieve business goals. Thus, for example, the job performed by a medical doctor may be chopped up into tasks, some of which may be delegated to computer systems such as AI systems. What is lost through this standardization and segmentation process is deep knowledge, the whole picture, and the larger context. It may turn out that a proper diagnosis can be provided only through an integrated picture of the whole patient, for example an understanding of the patient’s lifestyle and mindset about illness. Meanwhile, an AI system may deliver an answer that is right for the isolated piece of the picture it possesses, but wrong for the patient involved, and that answer in turn may not lead to what otherwise would have been a healing result. That deep knowledge, larger context, and big picture may later be recovered through some future AI system or some innovative future business process, but any such recovery will need to rely on the archive to understand what needs to be done. On a separate literary point, the science fiction writer Arthur C. Clarke is famous for saying, “Any sufficiently advanced technology is indistinguishable from magic,” and this policy attempts to make it once again clear that there is no magic involved.
Shifting gears yet again, the policy entitled “Mission-Critical Business Function AI Projects Must Record Corporate Knowledge” takes a different approach to many of the same critical knowledge destruction and loss issues addressed here, but involves a corporate knowledge database (CKD) rather than an archive. For additional related ideas, see the policy entitled “Archival Storage of Documentation Prepared for AI Life Cycle Process.”

References: Smith2024D, DElia2022, Singer2022, Cain2023, Schneider2021

_____________________________

 

Juniper Networks conducted a survey in 2021 which found that 87% of executives agreed that organizations have a responsibility to have governance and compliance policies in place to minimize the negative impacts of AI. At the same time, executives still ranked establishing AI governance, policies, and procedures as one of their lowest priorities, because the emphasis instead was on getting to market rapidly with a new AI solution, hiring the technical people needed to achieve that objective, etc. The policies in the book described on this website don’t treat this matter as a trade-off. Instead, the book discusses how to simultaneously achieve multiple synergistic objectives, such as: facilitating efforts to bring new AI solutions to market rapidly, organizing and orchestrating AI systems so that they are aligned with the organization’s strategic objectives, attracting and hiring the best AI technical talent, and justifiably gaining user trust in new AI systems. See the report entitled “AI is Set to Accelerate… Is Your Organization Ready?” by Juniper Networks, April 2021.

 
