One of the big payoffs from digital decoupling is creating distinct services that can be reused across multiple applications. When functionality is decoupled this way, it shifts from working within a known environment and context to operating more autonomously, with less visibility into how it is being used. As a result, we often need to take more proactive steps to ensure appropriate use: what was once understood implicitly must now be made explicit.
Fortunately, government agencies are acknowledging the need for this added responsibility, especially where AI and ML applications are concerned; these have attracted heavy scrutiny because of their potential to encode bias in their algorithmic models. A number of agencies, including the departments of Homeland Security, Health and Human Services, and Justice, have issued AI strategies and policies that place a high priority on “responsible” or “ethical” AI use, but those strategies generally don’t detail what that will mean in practice. Responsible or ethical AI generally refers to a set of steps taken during the development and deployment of an AI or ML capability to manage, monitor, and mitigate biases that may be intentionally or unintentionally embedded in the data being used.
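To make that concrete, here is a minimal sketch (in Python) of one such step: a development-time check that compares a model’s positive-decision rates across demographic groups, a common first-pass measure of disparate impact. The data, column names, and threshold are illustrative assumptions, not a protocol drawn from any agency’s guidance.

```python
# A minimal sketch of one bias check that could run during model evaluation:
# compare positive-prediction rates across groups (demographic parity difference).
# The DataFrame, column names ("group", "prediction"), and the 10-point threshold
# are illustrative assumptions, not a prescribed agency protocol.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str = "group",
                           pred_col: str = "prediction") -> float:
    """Return the largest gap in positive-prediction rates between any two groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Example usage with made-up evaluation results.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   0,   0],   # model's binary decisions
})

gap = demographic_parity_gap(results)
if gap > 0.10:  # flag for human review if rates differ by more than 10 points
    print(f"Potential disparate impact: prediction-rate gap of {gap:.0%}")
```

In practice, a check like this would be only one item in a broader review that also covers training-data provenance, proxy variables, and post-deployment monitoring.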
Most agencies still have far to go in fleshing out protocols and steps that will enable them to design responsible AI systems and architectures in a systematic way. For example, a 2020 report by the Administrative Conference of the United States found that none of the numerous federal agencies it reviewed had established systematic protocols for assessing the potential for an AI tool to encode bias. “The upshot here, as earlier, is that developing internal capacity to rigorously evaluate, monitor, and assess the potential for disparate impact will be critical for trustworthy deployment of AI in federal administrative agencies,” the report concluded. Even the National Artificial Intelligence Research and Development Strategic Plan, issued by the White House in 2016, highlights the need to design architectures for ethical AI; while it describes a variety of possible approaches, it largely leaves it to individual researchers to figure out how to put them into practice.
The Defense Department, which has pursued AI- and ML-enabled applications more aggressively than any other federal agency, has also been the government’s pacesetter on responsible AI, formally adopting in 2020 a series of ethical principles governing the use of AI. The principles grew out of recommendations developed over 15 months of consultation with leading AI experts in industry, government, academia, and the public. They apply to both combat and non-combat functions and encompass five major areas. For example, they require DoD personnel to minimize unintended bias in AI capabilities, to employ methodologies that keep the AI they use transparent and auditable, and to maintain the ability to detect and avoid unintended consequences and to disengage or deactivate deployed systems that demonstrate unintended behavior.
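The “disengage or deactivate” requirement can be pictured in software terms as a circuit breaker around a deployed model. The sketch below is a hypothetical illustration, not DoD’s actual mechanism: if the model’s recent behavior drifts outside an expected band, automated decisions are suspended and requests are routed to human review. The model interface, window size, and thresholds are all assumptions made for the example.

```python
# A hypothetical circuit breaker around a deployed model: if the recent rate of
# positive decisions drifts outside an expected band, automated use is suspended
# and requests fall back to human review. Interfaces and thresholds are assumptions.
from collections import deque

class GuardedModel:
    def __init__(self, model, window: int = 200,
                 expected_rate=(0.05, 0.30)):
        self.model = model
        self.recent = deque(maxlen=window)   # rolling window of recent decisions
        self.expected_rate = expected_rate
        self.engaged = True                  # automated decisions allowed?

    def predict(self, features):
        if not self.engaged:
            return {"decision": None, "route": "human_review"}
        decision = self.model(features)      # model is any callable returning 0/1
        self.recent.append(decision)
        self._check_behavior()
        return {"decision": decision, "route": "automated"}

    def _check_behavior(self):
        if len(self.recent) < self.recent.maxlen:
            return                           # wait until the window is full
        rate = sum(self.recent) / len(self.recent)
        low, high = self.expected_rate
        if not (low <= rate <= high):        # unintended behavior detected
            self.engaged = False             # disengage: stop automated decisions

# Usage: wrap any scoring function; it deactivates itself if behavior drifts.
guarded = GuardedModel(model=lambda f: int(f["score"] > 0.8))
result = guarded.predict({"score": 0.42})
```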
Privacy and other ethical concerns accompany many of the technologies emerging in the marketplace, not just AI. To help address this, the National Institute of Standards and Technology released a draft privacy framework in 2020 that sets an ethical foundation for data usage in technologies such as AI, biometrics, and the Internet of Things. “Getting privacy right will underpin the use of technologies in the future, including AI and biometrics, quantum computing, the Internet of Things and personalized medicine,” said NIST Director Walter Copan. “These technologies all will be a big part of our future.”
While these steps are helpful, federal agencies in particular will need to give far greater thought to ethical considerations as they explore and expand their use of new technologies, both because of the highly sensitive nature of federal data and because of the government’s significant impact on almost every aspect of our lives. In the case of AI, for example, DoD’s Defense Advanced Research Projects Agency (DARPA) has a significant effort underway to flesh out how to make AI systems more understandable and explainable to the people using them (as well as to others, such as courts and regulators, that will have to make judgments about their efficacy, legality, and suitability). This is a critical concern for the many government agencies that operate in the law enforcement, medical, security, and other arenas.
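As a hint of what “explainable to the people using them” can look like in code, the sketch below attaches a per-decision explanation to a simple linear scoring model by reporting each feature’s signed contribution to the score. The model, feature names, and weights are invented for illustration; DARPA’s explainability research addresses far more complex models than this.

```python
# A minimal per-decision explanation for a linear scoring model: each feature's
# contribution (weight * value) is reported alongside the decision so a caseworker,
# court, or auditor can see what drove it. Weights and feature names are made up.
WEIGHTS = {"income": 0.8, "debt": -1.2, "years_employed": 0.5}
THRESHOLD = 0.0

def score_with_explanation(applicant: dict) -> dict:
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    total = sum(contributions.values())
    return {
        "approved": total >= THRESHOLD,
        "score": round(total, 2),
        # sort so the most influential factors are listed first
        "explanation": sorted(contributions.items(),
                              key=lambda kv: abs(kv[1]), reverse=True),
    }

print(score_with_explanation({"income": 1.2, "debt": 0.9, "years_employed": 2.0}))
# -> shows the decision plus each feature's signed contribution to the score
```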