Governing the Ungovernable

If you’re in the technical cybersecurity space, the recent rise of Artificial Intelligence has probably caused you a sleepless night or two, and rightly so.

In this new series of articles, we’ll look in detail at the steps you can take to create or improve your AI governance program.

New technology always gets hype behind it, and we’ve seen this sort of thing before: machine learning? Crypto and the blockchain? Web 2.0? AI has blown past the blockchain on the hype scale, but it’s not really that new. The concepts behind AI, ML and neural networks have been around for years; we’ve just never seen them this capable before, especially at consumer-accessible scale.

So before I go off on a rant about RAM prices being up 300% so someone can make a video of a cat playing the guitar with no effort (or go full hypocrite and use it to generate the image at the start of this article), let’s talk about governing it, especially in the enterprise.

Standard Governance Policies

In the enterprise? Sort your policies out. AUP? Update it. Info Sec Policy? Update it. You need robust, AI-specific policies in place immediately. Without them you have no baseline, and no defence or capability to manage users and their use of these tools. If it’s not written down, ‘I didn’t know that was the rule’ always stands.

Risk and reward

Risk assessment matters because the risks AI brings are new, for both individuals and the enterprise. We hear a lot about 10x engineering, efficiency, speed and improvements. That’s all well and good, but if AI is 10x faster, it can cause damage 10x faster too. As a security team, or even an individual, it’s not about refusing to do something; it’s about doing it correctly.

If you, as an individual, were given a book on medicine and told some of the content might be wrong, but we can’t tell you which parts, would you use any of it?

At an enterprise, if your CISO or Head of Security went to the board and said ‘I want this incredible new firewall. It will protect us. It’s really expensive. Also, it might all change tomorrow, and sometimes it’s wrong’, you’d be laughed out of the room.

As reported online, Meta’s AI Alignment director recently had to manually intervene as OpenClaw started deleting her emails, even though she had explicitly told it not to. When she queried the LLM, it advised her she ‘had every right to be angry’.

The proliferation of AI has created a major shadow IT problem. Existing tools and technical controls don’t work all that well against much of it; EDRs and web gateways took a while to catch up. When reviewing your risk and reward, you must decide what data you’re willing to give away: does the model use your information to train future models? Do you retain ownership of your data? This is relatively standard for SaaS products, but I can’t remember such an influx of new ‘products’ that so many different types of users want to use. The closest comparison I can recall is when easy cloud storage like Dropbox appeared; files and data were being thrown everywhere for a while.

Data Classification

Data classification is imperative for enterprise AI. You need a robust classification scheme, with defined retention periods, before anything else.

Classification and labelling let you define your data controls. Some platforms have classification built in. Microsoft Copilot, for example, offers a ‘quick win’ in that it can be locked into your tenancy, and on an E5 licence Copilot can be restricted and managed via Azure Unified Labelling and security policies.
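To make the idea concrete, here is a minimal sketch of label-based gating: documents are only cleared for AI processing if they carry an approved sensitivity label, and unlabelled content fails closed. The label names, document shape and allow-list are illustrative assumptions, not a real Copilot or labelling API.

```python
# Sketch: gate documents by sensitivity label before they reach an AI tool.
# Label names and the allow-list below are illustrative assumptions.

ALLOWED_FOR_AI = {"Public", "General"}  # labels cleared for AI processing


def ai_permitted(document: dict) -> bool:
    """Return True only if the document's label is cleared for AI use.

    Unlabelled documents are denied by default (fail closed).
    """
    label = document.get("sensitivity_label")
    return label in ALLOWED_FOR_AI


docs = [
    {"name": "press-release.docx", "sensitivity_label": "Public"},
    {"name": "payroll.xlsx", "sensitivity_label": "Highly Confidential"},
    {"name": "untagged-notes.txt"},  # never labelled, so denied
]

cleared = [d["name"] for d in docs if ai_permitted(d)]
# Only press-release.docx is cleared for AI processing.
```

The key design choice is the default-deny: anything without a label is treated as sensitive until someone classifies it.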

Enterprise AI Governance

So how can we reduce risk in the enterprise? A combination of steering, guidance and guardrails is in order. You must assess your use: define which tools you are using and for what, and classify your data.

Creating a set of rules for whichever AI you’re using is key. Deterministic rules are important, and your internal policies may need reformatting; you need to agree on controls for how your AI should behave. Using a Markdown format allows AI to read and process the rules you define.

A Quick Example

You want to create a set of steering rules for an AI coding assistant. As a company you have that coding assistant, a cloud provider and your engineers.

You have a requirement that all your HTTP connections use TLS, specifically version 1.3. It’s an easy rule to write.

| Rule ID | Rule | Reference |
| --- | --- | --- |
| RULE001 | You must use TLS 1.3 | Internal Reference |

Rendered, the markdown is easy for a human to read; the plain markdown source makes it easy for an AI to read too.
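Because the rules live in a predictable table format, the same file can also drive automated checks. Here is a rough sketch of parsing a table like the one above into structured rules; the three-column layout is the only assumption.

```python
# Sketch: load deterministic rules from a Markdown table so one file can
# serve both human review and automated tooling.


def parse_rules(markdown: str) -> list[dict]:
    """Parse a three-column Markdown rules table into a list of dicts."""
    rules = []
    for line in markdown.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # Skip blank lines, the header row, and the |---| separator row.
        if len(cells) != 3 or cells[0] in ("Rule ID", "---"):
            continue
        rules.append({"id": cells[0], "rule": cells[1], "reference": cells[2]})
    return rules


table = """\
| Rule ID | Rule | Reference |
| --- | --- | --- |
| RULE001 | You must use TLS 1.3 | Internal Reference |
"""

rules = parse_rules(table)
# rules[0]["id"] is "RULE001"
```

With the rules in structured form, a pipeline step can check each one against your deployments rather than relying on the AI alone to follow them.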

Next, you use your cloud provider’s controls to apply a guardrail that blocks the deployment of anything not using TLS 1.3. Your engineers with the AI coding assistant now have steering to prevent the creation of anything not using TLS 1.3, and you have your guardrail to cover any gaps or errors in that process.
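The guardrail side can be sketched as a pre-deployment check that rejects any service configuration not pinned to TLS 1.3. The config shape below is an illustrative assumption; in practice this would usually be a deny rule in your cloud provider’s policy engine rather than hand-rolled code.

```python
# Sketch of a guardrail: a CI-style check that flags any service whose
# minimum TLS version fails RULE001. Config shape is an assumption.

REQUIRED_TLS = "1.3"


def violations(configs: list[dict]) -> list[str]:
    """Return the names of services whose minimum TLS version fails RULE001."""
    return [
        c["service"]
        for c in configs
        if c.get("min_tls_version") != REQUIRED_TLS
    ]


deployment = [
    {"service": "api-gateway", "min_tls_version": "1.3"},
    {"service": "legacy-frontend", "min_tls_version": "1.2"},  # breaks RULE001
]

bad = violations(deployment)
# A pipeline step would fail the build if `bad` is non-empty.
```

Note that a missing `min_tls_version` also counts as a violation, mirroring the fail-closed stance from the classification example.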

Next up, we’ll look at each of these areas in detail.