Section 5.1 of the EU AI Act Practitioners Implementation Guide requires organisations to ground AI system development in a clear articulation of business intent, ethical commitment, and transparency principles. The Business Owner must complete this alignment before any code is written or architecture designed.
Every AI system development must begin with a clear articulation of business intent, ethical commitment, and transparency principles before any code is written or architecture designed. This grounding serves as the reference point against which every subsequent design decision is measured. The Business Owner is responsible for establishing this foundation, which feeds directly into the System Architecture and Design Documentation process and ultimately into the AI System Description Package (AISDP).
The business intent assessment must confirm alignment with the organisation's values, the EU AI Act's requirements, and the fundamental rights of affected persons. If the business intent cannot be satisfied without creating unacceptable risks to fundamental rights, the organisation must either modify the intent or decline to develop the system. This assessment is documented in the AISDP as a precondition to Module 1 (System Identity and General Description).
The statement of business intent must specify what the system is intended to achieve, for whom, and within what constraints. Precision is essential: "to assist human recruiters in screening high-volume applications by ranking candidates against role-specific competency profiles" is an adequate statement of business intent. "To improve recruitment efficiency" is not, because it is too vague to constrain design decisions or enable meaningful Conformity Assessment Procedures.
The distinction matters because the statement of business intent anchors all downstream compliance documentation. A vague statement cannot ground risk assessment, cannot define the boundaries of intended use, and cannot support the specificity required by Technical Documentation Requirements.
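To make the required specificity concrete, the elements of a statement of business intent (what, for whom, within what constraints) can be captured in machine-readable form. The following is a minimal sketch; the schema, field names, and completeness check are illustrative assumptions, not structures mandated by the EU AI Act or the AISDP.

```python
from dataclasses import dataclass

# Hypothetical schema for a statement of business intent.
# Field names are illustrative, not prescribed by any regulation.
@dataclass
class BusinessIntentStatement:
    objective: str              # what the system is intended to achieve
    beneficiaries: list[str]    # for whom
    constraints: list[str]      # within what constraints

    def is_specific(self) -> bool:
        """Crude completeness check: every element must be stated."""
        return bool(self.objective and self.beneficiaries and self.constraints)

# The adequate example from the text, expressed in this schema:
intent = BusinessIntentStatement(
    objective=("Assist human recruiters in screening high-volume "
               "applications by ranking candidates against "
               "role-specific competency profiles"),
    beneficiaries=["recruiting teams", "job applicants"],
    constraints=["human recruiter makes the final decision",
                 "ranking limited to role-specific competencies"],
)
assert intent.is_specific()

# The vague example fails the check: no constraints, no beneficiaries.
vague = BusinessIntentStatement(
    objective="Improve recruitment efficiency",
    beneficiaries=[],
    constraints=[],
)
assert not vague.is_specific()
```

A structured record like this cannot guarantee adequacy, but it forces the author to state each element explicitly rather than leaving it implicit.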
The development team must work within an explicit ethical framework that addresses four questions: What are the potential harms this system could cause? Who bears those harms? Are those harms distributed equitably? What safeguards ensure that the system serves its intended beneficiaries without unfairly disadvantaging others?
The framework must also address what mechanisms allow affected persons to understand, challenge, and seek redress for the system's decisions.
The ethical framework should reference recognised principles such as the EU's Ethics Guidelines for Trustworthy AI, the OECD AI Principles, or the organisation's own responsible AI framework. Critically, it must translate those principles into concrete design constraints. "The system must not discriminate" is a principle. "The system must achieve a selection rate ratio of at least 0.90 across all measured protected characteristic subgroups" is a design constraint. The AISDP must demonstrate this translation from principles to constraints.
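The design constraint above is measurable, which means it can be checked in code. The sketch below computes a selection rate ratio (the lowest subgroup selection rate divided by the highest) and compares it against the 0.90 threshold from the example; the function name and data shape are assumptions for illustration.

```python
# Sketch of an automated check for the example design constraint:
# "selection rate ratio of at least 0.90 across all measured
# protected characteristic subgroups".
def selection_rate_ratio(selected: dict[str, int],
                         total: dict[str, int]) -> float:
    """Ratio of the lowest subgroup selection rate to the highest."""
    rates = {g: selected[g] / total[g] for g in total if total[g] > 0}
    return min(rates.values()) / max(rates.values())

# Illustrative counts: 45/100 selected in one subgroup, 42/100 in another.
counts_selected = {"group_a": 45, "group_b": 42}
counts_total = {"group_a": 100, "group_b": 100}

ratio = selection_rate_ratio(counts_selected, counts_total)
assert ratio >= 0.90  # 0.42 / 0.45 ≈ 0.933, so the constraint holds
```

Because the constraint is expressed numerically, a check like this can run in a test suite and its results can be cited as evidence in the AISDP, which a bare principle such as "must not discriminate" cannot.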
The organisation must commit, before development begins, to a level of transparency appropriate to the system's risk tier and the expectations of its stakeholders. For high-risk AI systems, this commitment must encompass four dimensions.
Transparency to deployers specifies what information will be provided about the system's capabilities, limitations, and requirements. Transparency to affected persons specifies how individuals will be informed of the system's involvement in decisions affecting them and how they can obtain explanations. Transparency to regulators specifies how the AISDP and its evidence base will be made available for inspection. Internal transparency specifies how the development team, governance leads, and leadership will maintain visibility into the system's behaviour.
These commitments must be documented and approved by the AI Governance Lead before development resources are committed.
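The approval gate described above can be expressed as a simple completeness check: each of the four transparency dimensions must have a documented commitment before sign-off. This is a hypothetical sketch; the dimension keys follow the text, but the data shape and function are assumptions.

```python
# Hypothetical pre-approval gate for transparency commitments.
# Dimension names follow the four dimensions described in the text.
REQUIRED_DIMENSIONS = {"deployers", "affected_persons", "regulators", "internal"}

def ready_for_approval(commitments: dict[str, str]) -> bool:
    """True only when every required dimension has a non-empty commitment."""
    return all(commitments.get(d, "").strip() for d in REQUIRED_DIMENSIONS)

# Illustrative commitments, one per dimension:
commitments = {
    "deployers": "Instructions for use covering capabilities and limitations",
    "affected_persons": "Notice of AI involvement plus an explanation channel",
    "regulators": "AISDP and evidence base available for inspection",
    "internal": "Periodic behaviour reports to governance leads and leadership",
}
assert ready_for_approval(commitments)

# A record missing any dimension would fail the gate:
assert not ready_for_approval({"deployers": "Instructions for use"})
```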