Agencies will need to complete an initial threshold assessment for AI projects.

The Digital Transformation Agency (DTA) is piloting an artificial intelligence (AI) assurance framework as part of its exploration of AI technologies used by government agencies, to ensure they meet standards designed to promote safe and responsible use.

Under the DTA's draft assurance framework, agencies will complete an initial threshold assessment covering basic information about the use case, the challenges it aims to solve, and the expected benefits the AI solution will provide. Agencies will also need to identify a potential non-AI alternative that could deliver similar outcomes and benefits.

"We want agencies to carefully consider viable alternatives," said the DTA's general manager of strategy and planning, Lucy Poole. "For instance, non-AI services could be more cost-effective, secure, or dependable."

According to Poole, the DTA believes that evaluating these options will help agencies understand the advantages and limitations of implementing AI.

"This enables them to make a better-informed decision on whether to move forward with their planned use case," she said.

In the assessment process, if all risks identified in the initial threshold assessment are rated low and the assessment contact officer and executive sponsor are satisfied, a full assessment will not be required. However, if one or more risks are rated medium or above, the agency will need to proceed to a full assessment.

According to the DTA, the full assessment will require agencies to document how the use case measures up against Australia's AI Ethics Principles, which include, but are not limited to, fairness, reliability and safety.

The DTA has also provided guidance on how agencies can ensure the reliable and safe delivery and use of AI systems, focusing in particular on data suitability, Indigenous data governance, AI model procurement, testing, monitoring, and preparedness to intervene or disengage, as well as privacy protection and security.

"Our approach to AI assurance prioritises human oversight and the rights, wellbeing, and interests of people and communities," the DTA stated.

From November 2024, the DTA will also hold participant feedback sessions, conduct interviews, and analyse survey responses to inform updates to the framework and guidance.

"Our goal is to provide a unified approach for government agencies to engage with AI confidently," said Poole. "It establishes baseline requirements for governance, assurance, and transparency, removing barriers to adoption and encouraging safe use for public benefit."