When using AI in any organization, especially one concerned with security and resilience like WiscNet, guardrails should address governance, risk, privacy, and operational safety. Drawing on the Govern, Identify, and Protect functions of NIST CSF 2.0, here are the main categories of guardrails WiscNet considers in its use of AI:
✍🏻 Clear policies defining acceptable AI uses
✅ Role-based responsibilities for AI oversight, including decision-making authority and escalation procedures
✍🏻 Documented risk management strategy linking AI use to the organization’s mission, compliance requirements, and member expectations
✅ Data classification rules for what information can/cannot be processed by AI systems
✅ De-identification and minimization before sending any sensitive data to AI tools (a redaction sketch follows this list)
✅ Vendor and supply chain risk reviews for AI providers to ensure secure handling of organizational data
✅ Access controls and multi-factor authentication for AI tools
✅ Logging and monitoring of AI queries and outputs to detect misuse (see the audit-logging sketch below)
✍🏻 Security testing to ensure AI applications don’t introduce exploitable vulnerabilities
✅ Human-in-the-loop reviews for AI outputs that could impact security, compliance, or public communications
✅ Bias detection procedures to review training data and model behavior
✅ Independent verification of any critical AI-generated analysis or recommendations
✅ Inclusion of AI misuse scenarios in incident response plans and tabletop exercises
✅ Clear escalation channels if AI behavior causes or contributes to a security event
✅ Rapid disablement procedures for AI integrations that are behaving unexpectedly (see the kill-switch sketch below)
✍🏻 Label AI-generated content when shared internally or externally
✅ Stakeholder communication protocols for explaining AI decisions, especially if errors occur
✍🏻 Public disclosure plans for significant AI-related incidents
✅ Regular audits of AI systems for compliance, accuracy, and evolving risk factors
✍🏻 Training for staff on safe and effective AI use, tailored to job functions
✍🏻 Lessons learned process after any AI-related event, feeding updates into policies and controls
Legend: ✍🏻 In Progress | ✅ In Practice
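For the de-identification and minimization item, here is a minimal Python sketch of what scrubbing data before it leaves the organization could look like. The redaction patterns, field names, and `minimize` helper are illustrative assumptions, not WiscNet's actual tooling; a production setup would align the allow-list with the data classification rules above and would likely use a dedicated PII-detection library rather than hand-rolled regexes.

```python
import re

# Hypothetical redaction patterns; a real deployment would align these with
# the organization's data classification rules.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "[IP_ADDR]"),
]

def deidentify(text: str) -> str:
    """Replace likely identifiers with placeholder tokens before the text
    leaves the organizational boundary."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

def minimize(record: dict, allowed_fields: set[str]) -> dict:
    """Drop every field not explicitly allow-listed for AI processing,
    and de-identify whatever remains."""
    return {k: deidentify(str(v)) for k, v in record.items() if k in allowed_fields}

if __name__ == "__main__":
    # Hypothetical support ticket used only to demonstrate the flow.
    ticket = {
        "summary": "Member reports outage; contact jane@example.edu or 608-555-0100",
        "member_ip": "198.51.100.7",
        "internal_notes": "credentials in vault",
    }
    print(minimize(ticket, allowed_fields={"summary"}))
    # {'summary': 'Member reports outage; contact [EMAIL] or [PHONE]'}
```

The design choice worth noting: minimization is an allow-list, not a block-list, so a new field is excluded from AI processing by default until someone deliberately approves it.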
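For the logging-and-monitoring item, a sketch of one structured audit record per AI query. The `monitored_query` wrapper and `call_fn` placeholder are hypothetical stand-ins for whatever client actually talks to the AI provider; the point is simply that every prompt and response gets a timestamped, attributable entry that reviewers can scan for misuse without the log itself storing full sensitive text.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_ai_interaction(user: str, tool: str, prompt: str, response: str) -> None:
    """Emit one structured audit record per AI query. Hashes keep records
    comparable and tamper-evident without retaining full sensitive text."""
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_preview": prompt[:80],
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "response_chars": len(response),
    }))

def monitored_query(user: str, tool: str, prompt: str, call_fn) -> str:
    """Wrap the real AI client so no query can bypass the audit trail."""
    response = call_fn(prompt)  # call_fn is a placeholder for the actual client call
    log_ai_interaction(user, tool, prompt, response)
    return response
```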
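And for the rapid-disablement item, one way such a kill switch could look. The environment variable and file path here are assumptions for illustration; the underlying idea is a central flag that every AI integration checks before each call, so operations can shut AI features off immediately without a code change or redeploy. A real setup might back the flag with a feature-flag service or configuration store instead.

```python
import os

# Hypothetical flag locations; a production setup might use a
# feature-flag service or config store instead.
KILL_FILE = "/etc/wiscnet/ai_disabled"

def ai_enabled() -> bool:
    """AI calls are allowed only when no kill flag is set."""
    if os.environ.get("AI_INTEGRATIONS_DISABLED") == "1":
        return False
    return not os.path.exists(KILL_FILE)

def guarded_ai_call(prompt: str, call_fn):
    """Refuse the call, loudly, when the integration has been disabled."""
    if not ai_enabled():
        raise RuntimeError("AI integration disabled by operations kill switch")
    return call_fn(prompt)
```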
Mr. Pickles, on an iceberg, reflecting on the environmental impacts of his use of artificial intelligence.