The foundational compliance control for agentic systems is a formal, machine-enforced policy that defines the complete set of actions the system is permitted to take. The Technical SME defines every permitted action in a typed schema specifying the action name, its parameters with type and range constraints, the preconditions that must be satisfied before execution, and the postconditions the system must verify after execution. Each action also records its reversibility, which informs the human oversight design, and its risk tier, which determines the approval gate requirement.
For a recruitment agent, the action schema might define a send interview invitation action with parameters for candidate identifier, datetime, and interviewer identifier. Preconditions enforce that the candidate is in shortlisted status, the interviewer has available slots, and the invitation count for the role is below the configured maximum. Postconditions verify the invitation record was created, the candidate status updated, and the calendar hold placed. The action is partially reversible, with cancellation possible within 24 hours but requiring manual intervention after that.
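The example above can be sketched as a typed schema entry. This is a minimal illustration in Python, not a prescribed format: the class and field names, the risk tier assigned, and the constraint strings are all hypothetical, chosen only to mirror the fields the schema is required to record.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

class Reversibility(Enum):
    REVERSIBLE = "reversible"
    PARTIALLY_REVERSIBLE = "partially_reversible"
    IRREVERSIBLE = "irreversible"

@dataclass(frozen=True)
class Parameter:
    name: str
    type: type
    constraint: str  # human-readable type/range constraint

@dataclass(frozen=True)
class ActionSpec:
    name: str
    parameters: list          # list[Parameter]
    preconditions: list       # checks that must pass before execution
    postconditions: list      # facts the system verifies after execution
    reversibility: Reversibility  # informs the human oversight design
    risk_tier: RiskTier           # determines the approval gate requirement

SEND_INTERVIEW_INVITATION = ActionSpec(
    name="send_interview_invitation",
    parameters=[
        Parameter("candidate_id", str, "must reference an existing candidate record"),
        Parameter("datetime", str, "ISO 8601 timestamp"),
        Parameter("interviewer_id", str, "must reference an active interviewer"),
    ],
    preconditions=[
        "candidate.status == 'shortlisted'",
        "interviewer has available slots",
        "role.invitation_count < role.invitation_max",
    ],
    postconditions=[
        "invitation record created",
        "candidate status updated",
        "calendar hold placed",
    ],
    # Cancellation is possible within 24 hours; manual intervention after that.
    reversibility=Reversibility.PARTIALLY_REVERSIBLE,
    risk_tier=RiskTier.MEDIUM,  # hypothetical tier, assigned for illustration
)
```

Because the entry is plain data, it can be serialised, version-controlled, and checked mechanically by the action execution layer.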
Any action request that falls outside the schema is rejected by the action execution layer, regardless of how the agent phrases the request or how plausible the action appears. The schema is the machine-readable expression of the system's intended purpose.
The action schema must be traceable to the system's documented intended purpose in AISDP Module 1. Every permitted action must be necessary for the intended purpose; every action that is not necessary is excluded. The AI System Assessor reviews the mapping from intended purpose to permitted actions during the conformity assessment.
The mapping exercise proceeds in three steps. First, the Business Owner defines the operational workflow the agent is intended to support, expressed as a sequence of tasks. Second, the Technical SME identifies the minimal set of actions covering tool calls, API interactions, and data mutations required to accomplish each task. Third, the AI System Assessor verifies that no permitted action exceeds what the intended purpose requires and that no action necessary for the intended purpose has been omitted.
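The third step, the Assessor's verification, reduces to two set comparisons. A minimal sketch, with hypothetical action names for a recruitment agent:

```python
def verify_action_mapping(required_actions: set, permitted_actions: set) -> dict:
    """Return the two discrepancy sets the AI System Assessor reviews:
    actions permitted but not necessary, and actions necessary but omitted."""
    return {
        "excess": permitted_actions - required_actions,
        "missing": required_actions - permitted_actions,
    }

# Actions the Technical SME derived from the Business Owner's workflow tasks:
required = {"search_candidates", "send_interview_invitation", "update_candidate_status"}
# Actions the draft schema actually permits:
permitted = {"search_candidates", "send_interview_invitation", "delete_candidate_record"}

result = verify_action_mapping(required, permitted)
# result["excess"] == {"delete_candidate_record"}   -> exceeds the intended purpose
# result["missing"] == {"update_candidate_status"}  -> necessary action omitted
```

A conforming schema yields two empty sets; anything else goes back to the mapping exercise.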
The resulting action schema is documented in AISDP Module 3. Changes to the schema are governed by the governance pipeline's change classification logic: adding a new action or expanding an existing action's parameter range is a significant change at minimum and may constitute a substantial modification if it extends the system's effective autonomy beyond what the original conformity assessment evaluated.
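The change classification logic described above can be expressed as a small decision function. The function name and inputs are hypothetical; the rule it encodes is the one stated in the text.

```python
from enum import Enum

class ChangeClass(Enum):
    ROUTINE = "routine"
    SIGNIFICANT = "significant"
    SUBSTANTIAL_MODIFICATION = "substantial_modification"

def classify_schema_change(adds_action: bool,
                           expands_parameter_range: bool,
                           extends_effective_autonomy: bool) -> ChangeClass:
    """Adding an action or widening a parameter range is significant at minimum;
    it is a substantial modification if it also extends the system's effective
    autonomy beyond what the original conformity assessment evaluated."""
    if adds_action or expands_parameter_range:
        if extends_effective_autonomy:
            return ChangeClass.SUBSTANTIAL_MODIFICATION
        return ChangeClass.SIGNIFICANT
    return ChangeClass.ROUTINE
```

Encoding the rule this way lets the governance pipeline apply it mechanically to every proposed schema change rather than relying on reviewer judgement alone.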
Agentic systems interact with external tools including APIs, databases, file systems, communication channels, and in some architectures code execution environments.
Agentic systems interact with external tools including APIs, databases, file systems, communication channels, and in some architectures code execution environments. Each tool extends the agent's capabilities and introduces a new risk surface.
The Technical SME maintains a tool registry documenting every tool available to the agent. Each entry records the tool name and description; the API endpoint or interface specification; the permissions the tool grants (read, write, delete, execute); the data the tool can access or modify; the rate limits and quotas enforced; and the risk assessment for the tool's misuse potential. The tool registry is a governed artefact, version-controlled alongside the system code and referenced in AISDP Module 3. Adding a new tool triggers the governance pipeline's risk gate. Removing a tool is a routine change unless it affects the system's ability to fulfil its intended purpose, in which case it triggers the governance pipeline.
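A registry entry can be represented as a plain record with one field per item listed above. The structure and the sample values are illustrative only, modelled on the recruitment agent example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolRegistryEntry:
    name: str
    description: str
    interface: str              # API endpoint or interface specification
    permissions: frozenset      # subset of {"read", "write", "delete", "execute"}
    data_scope: str             # data the tool can access or modify
    rate_limit_per_hour: int    # enforced rate limit
    misuse_risk: str            # risk assessment for the tool's misuse potential

CANDIDATE_DB = ToolRegistryEntry(
    name="candidate_db",
    description="Read access to candidate records",
    interface="postgres://hr-db/candidates (read-only role)",  # hypothetical endpoint
    permissions=frozenset({"read"}),
    data_scope="candidate records; no access to employment contracts",
    rate_limit_per_hour=500,
    misuse_risk="low: read-only access to a bounded data scope",
)
```

Keeping entries immutable and in version control means every change to a tool's permissions shows up as a reviewable diff feeding the governance pipeline's risk gate.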
Each tool has explicit permission boundaries enforced at the infrastructure level through IAM policies, API scopes, and database role restrictions, not within the agent's own reasoning. A recruitment agent's database tool may have read access to candidate records but no write access to employment contracts. Permission enforcement must be external to and independent of the agent: an agent that reasons about its permissions and decides to escalate them is exhibiting precisely the behaviour the boundary is designed to prevent. Infrastructure-level enforcement ensures the boundary holds even if the agent's reasoning is compromised.
Every tool call the agent makes is validated against the action schema before execution. The validation checks that the tool is in the registry, that the action is permitted by the schema, that the parameters conform to the schema's type and range constraints, that the preconditions are satisfied, and that the agent has not exceeded its iteration or rate limits.
Failed validations are logged and blocked. The agent receives an error response indicating that the action was not permitted, but the error does not expose the boundary rules themselves to avoid enabling the agent to reason about how to circumvent them. The validation log provides the audit trail required under Article 12, recording every attempted action, its validation result, and the schema rule that blocked it if applicable.