// Proposed industry standard
AICL
A Artificial
I Intelligence
C Content
L Level

When AI writes,
who is the author?

The AICL Scale is a standardized 10-level framework for transparent AI authorship attribution, applicable to articles, reports, code, research, and any content produced with AI tools.

10
Defined levels
AICL-0 (fully human) through AICL-9 (fully AI)
5
Assessment questions
4
Core dimensions
The problem

The transparency gap
no one is naming

Across journalism, consulting, law, academia, and corporate communications, AI-generated content is flowing into the world with no authorship signal attached. Readers, clients, regulators, and colleagues have no way to know whether what they are reading represents a domain expert's hard-won knowledge or a language model's pattern-matching output.

The question is not whether AI was used

AI is a genuinely powerful tool that accelerates expert work and helps people communicate more clearly. The question is: how much, at which stage, and whether a qualified human stood behind the result. That distinction matters enormously, and currently, there is no standard way to communicate it.

A common vocabulary for a new era

The AICL Scale gives any organization, publisher, or individual a shared, consistent language for describing the nature of human-AI collaboration in any piece of content: a simple 0–9 vocabulary that any industry can adopt and enforce.

Mid-range levels are professional

A critical design principle: AICL-4 (expert dictates, AI articulates) and AICL-5 (human-led co-creation) are not marks of laziness. They describe legitimate, high-quality professional workflows. The framework must be presented this way, or people will systematically under-report their AI use.

Self-declaration with AI scaffolding

Pure self-declaration is inconsistent. Pure AI auto-detection is impossible: no single tool sees the full process. The right model is AI-guided self-declaration: the AI tool helps the creator arrive at the correct level through structured reflection, which the human then confirms.

Why it matters

Transparency is the
foundation of AI ethics

The AICL Scale is not just a labelling system. It is a response to one of the defining ethical challenges of the AI era: the erosion of trust in human expertise and the blurring of accountability when machines and people create together.

// The trust equation

Every piece of content carries an implicit promise. When you read a medical article, a legal opinion, or a financial analysis, you extend trust based on your assumption of who created it and how.

That trust relationship is not just about accuracy. It is about accountability. If the content is wrong, who answers for it? If it misleads, who is responsible? These questions have clear answers when a human expert authors content. They become dangerously ambiguous when AI is involved and no disclosure exists.

The AICL Scale restores that accountability chain. It does not judge how much AI was used; it makes the nature of that use visible, so readers can calibrate their trust appropriately and creators can stand behind their work with precision.

// AI fluency

Knowing how you used AI is a professional skill

AI fluency is not just about knowing how to use AI tools; it is about understanding your own relationship with them. Did AI shape your thinking or just your prose? Did it generate the structure or just fill it in? These distinctions define the nature of your intellectual contribution, and professionals who cannot answer them clearly are not yet truly fluent.

The AICL self-assessment is designed to build that fluency. By asking the same four questions consistently (concept, development, words, review), it trains creators to be conscious of their own process in a way that makes them better collaborators with AI, not just more transparent ones.

// Ethical use

Ethics is not about avoiding AI, it is about owning your choices

The most important ethical question is not "did you use AI?" It is "did you take responsibility for what you published?" An AICL-9 document published without disclosure is an ethical failure. The same content published with an honest AICL-9 declaration is a legitimate choice: readers can decide what weight to give it.

Ethical AI use is transparent AI use. The AICL Scale operationalizes that principle into something concrete, consistent, and verifiable, turning a vague cultural expectation into a professional standard with real teeth.

01

Transparency as default

In a world where AI can produce fluent, credible text on any topic, the default assumption can no longer be human authorship. Transparency must become the baseline, not the exception.

02

Accountability without judgment

The AICL Scale does not prescribe how much AI is acceptable. It creates the conditions for informed judgment by readers, employers, regulators, and clients, without imposing a single standard of correctness.

03

Human expertise preserved

By distinguishing original concept from AI development, the AICL Scale ensures that human intellectual contributions, especially expert knowledge, are never collapsed into the same category as AI-generated output.

The bigger picture

AI fluency is the
literacy of our time

Just as financial literacy became a civic expectation in the 20th century, AI fluency, the ability to understand, use, and account for artificial intelligence in your work, is becoming the defining professional competency of the 21st.

For individuals

Knowing your AICL level is an act of professional self-awareness. It signals that you understand your own creative process, that you take accountability seriously, and that you distinguish between your expertise and the tool you used to express it.

For organizations

Mandating AICL disclosure is an act of institutional integrity. It signals to clients, regulators, and the public that your organization takes the provenance of its knowledge seriously, and that human expertise is not interchangeable with AI output.

For society

Widespread AICL adoption creates an epistemic infrastructure for the AI age: a shared basis for evaluating the credibility of information, preserving the value of genuine expertise, and holding creators accountable for what they publish.

// About this framework

The AICL Scale is a proposed open standard for AI authorship transparency. It is published under a Creative Commons Attribution 4.0 license: free for any individual or organization to use, adapt, and implement, with attribution required. The goal is not ownership but adoption: the more widely AICL is used, the more trust it creates.

Published by
aiclscale.org
CC BY 4.0

Four dimensions, not one

The key innovation: splitting "ideas" into concept and development

C

Concept

The original idea, framing, or insight. The spark that initiated the content. This is almost always human-originated and should be tracked separately from how the concept was developed.

// The "what" and "why"
D

Development

How the concept was built out: structure, methodology, argumentation, analytical approach. AI can do heavy lifting here even when the concept was 100% human, and this must be captured separately.

// The "how" and "structure"
W

Words

Who produced the actual language and prose. This is the most visible dimension but often the least important: a domain expert's scribed text is worth far more than independently written non-expert prose.

// The "expression"
R

Review

Whether a credentialed domain expert validated the content for substantive accuracy. This dimension can rehabilitate a high-AI-involvement score when genuine expertise was applied in review.

// The "validation"
The framework

AICL Scale, 10 levels

Each level is defined by four dimensions. The bars show the human contribution as a percentage: 100% means fully human, 0% means fully AI. Click any level to expand the full definition.
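A level definition can be modeled as a plain record of the four dimensions, each holding the human contribution as a percentage. A minimal sketch in Python: only the two endpoint levels, AICL-0 and AICL-9, follow directly from the scale's definition, so the intermediate levels are deliberately left to the framework rather than guessed here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AICLLevel:
    """One AICL level: human contribution (%) per dimension."""
    level: int        # 0 (fully human) through 9 (fully AI)
    concept: int      # human % for the original idea and framing
    development: int  # human % for structure and argumentation
    words: int        # human % for the actual prose
    review: int       # human % for expert validation

# Only the endpoints are fixed by the scale's definition; the eight
# intermediate levels are defined by the framework itself.
AICL_0 = AICLLevel(level=0, concept=100, development=100, words=100, review=100)
AICL_9 = AICLLevel(level=9, concept=0, development=0, words=0, review=0)
```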

Fully human
Fully AI
// Four dimensions
Bars show human % contribution at each level
Concept

The original idea, framing, or insight: the spark that initiated the content. Almost always human-originated. Tracked separately from how the concept was built out.

// The "what" and "why"
Development

How the concept was built out: structure, methodology, argumentation, analytical approach. AI can contribute heavily here even when the concept was 100% human.

// The "how" and "structure"
Words

Who produced the actual language and prose. Most visible but often the least decisive: a domain expert's scribed text outweighs independently written non-expert prose.

// The "expression"
Review

Whether a credentialed domain expert validated the content for substantive accuracy. High in human-dominant levels, meaningful through the collaborative ones, and sharply lower in AI-dominant levels.

// The "validation"
// AICL Scale, 10 levels defined
Self-assessment

What is your AICL level?

Answer five questions about how you created your content. We will identify the correct AICL level and generate a disclosure statement ready to paste into your document.

// Industry profile
Concept 35% · Dev 25% · Words 25% · Review 15%
// Assessment steps
// AICL Scale, self-assessment tool
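The weighted profile above (Concept 35% · Dev 25% · Words 25% · Review 15%) suggests how a composite score could be derived from a creator's answers. The sketch below is a hypothetical scorer, not part of the standard: the weights come from the profile shown, but the linear mapping from composite AI share to a 0–9 level is an illustrative assumption.

```python
# Hypothetical composite scorer. Weights follow the industry profile
# shown above; the linear score-to-level mapping is an assumption.
WEIGHTS = {"concept": 0.35, "development": 0.25, "words": 0.25, "review": 0.15}

def suggest_level(human_pct: dict[str, float]) -> int:
    """Map per-dimension human contribution (0-100) to a 0-9 level."""
    ai_share = sum(w * (100 - human_pct[d]) for d, w in WEIGHTS.items()) / 100
    return round(ai_share * 9)  # 0 = fully human, 9 = fully AI
```

An all-human input maps to 0 and an all-AI input to 9; anything in between lands on the scale in proportion to the weighted AI share.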
For organizations

Implementing AICL
across your organization

Adoption works best as a phased rollout, starting with culture, moving to process, then embedding into tooling. The technical implementation is secondary to whether people want to be honest.

// Phase 01  ·  Months 1–3
01

Educate and normalize

Build shared vocabulary before mandating compliance. The goal in phase one is honest self-reflection, not surveillance.

  • Host AICL literacy sessions for all content creators
  • Frame AICL-4 and AICL-5 as professional and respectable; expert vision work deserves recognition
  • Add AICL field to all document templates and cover pages
  • Have leadership model honest disclosure publicly
  • Publish calibration examples showing correctly rated content
// Phase 02  ·  Months 4–6
02

Embed in workflow

Make declaration a natural step in existing publishing and document governance flows, not an added burden.

  • Add AICL as required metadata in CMS and document management
  • Gate publishing on AICL completion, same as category tagging
  • Track AICL distribution across departments for calibration
  • Integrate self-assessment tool into internal tooling
  • Establish review process for high-stakes AICL-6+ content
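Gating publication on a completed declaration can be as small as a pre-publish hook in the CMS. A sketch, assuming hypothetical field names (`aicl_level`, `aicl_confirmed_by`), since the standard does not prescribe a metadata schema:

```python
# Hypothetical CMS pre-publish hook: block publishing until an AICL
# declaration is present and confirmed, just as a missing category
# tag would block it. Field names are assumptions, not part of AICL.

class MissingAICLError(Exception):
    """Raised when a document lacks a usable AICL declaration."""

def check_publishable(document: dict) -> None:
    level = document.get("aicl_level")
    if level is None or not 0 <= level <= 9:
        raise MissingAICLError("Declare an AICL level (0-9) before publishing.")
    if not document.get("aicl_confirmed_by"):
        raise MissingAICLError("The AICL declaration must be confirmed by a named person.")
```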
// Phase 03  ·  Month 7+
03

AI-assisted scoring

Use your internal AI deployment to surface a suggested AICL score at session end. Human confirms.

  • Deploy end-of-session AICL prompt via internal API wrapper
  • AI reviews session history and suggests a starting level with rationale
  • Human adjusts and confirms, creating an auditable co-declaration
  • Log confirmed scores for periodic calibration audits
  • Build department-level AICL dashboards for management review
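The end-of-session prompt can live in whatever wrapper already fronts the internal model API. A sketch, where `call_model` is a placeholder for the organization's own completion call, and the shape of its response is an assumption:

```python
# Sketch of an end-of-session AICL prompt via an internal API wrapper.
# `call_model` stands in for the organization's own completion call;
# the {"level": ..., "rationale": ...} response shape is assumed.

AICL_PROMPT = (
    "Review this session across four dimensions: concept, development, "
    "words, review. Suggest a starting AICL level (0-9) with a brief "
    "rationale. A human will adjust and confirm the final level."
)

def close_session(history: list[dict], call_model) -> dict:
    """Append the AICL prompt and return the model's suggestion."""
    messages = history + [{"role": "user", "content": AICL_PROMPT}]
    return call_model(messages)
```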

The AI-guided scoring conversation

The most practical near-term implementation is a prompted self-declaration workflow. Rather than assigning a score automatically, the AI tool asks the right questions at session end and helps the person arrive at the correct AICL level, consistently and honestly. This solves the inconsistency of pure self-declaration without requiring any single AI tool to know the full creative process.

A key implementation insight: the AI tool sees only its own contribution. If a creator used ChatGPT for brainstorming, Grammarly for editing, and Claude for drafting, no single tool has the full picture. The guided questionnaire captures the holistic process rather than any one tool's logs. This is why human confirmation remains essential even in automated workflows.

01

Session close trigger

When a user finishes a content creation session, the AI tool surfaces the AICL prompt automatically before the session closes, similar to a "save before exit" prompt.

02

Context-aware suggestion

The AI reviews the conversation history across all four dimensions and suggests a starting AICL level with brief reasoning: "Based on our session, this looks like AICL-5: you provided the concept and strategic direction; I drafted most of the text; you then revised substantially."

03

Human adjustment and confirmation

The creator reviews the suggestion and adjusts it if needed, for example if they used other tools outside this session, or if the concept originated before any AI involvement. The human confirms the final level.

04

Disclosure generated and logged

The tool outputs a ready-to-paste disclosure statement with the confirmed AICL level. The declaration is also logged to the organization's document management system for audit and quality assurance purposes.
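Step 04 reduces to two small functions: render the paste-ready statement and write an audit record. A sketch; both the statement wording and the log schema are illustrative, since the framework prescribes neither.

```python
import json
from datetime import datetime, timezone

# Sketch of step 04: generate the disclosure and the audit log entry.
# Wording and schema are illustrative, not prescribed by AICL.

def disclosure(level: int, confirmed_by: str) -> str:
    """Paste-ready disclosure statement for the confirmed level."""
    return (f"AI Content Level: AICL-{level}. "
            f"Declared and confirmed by {confirmed_by} under the AICL Scale.")

def log_entry(level: int, confirmed_by: str, doc_id: str) -> str:
    """JSON audit record for the document management system."""
    return json.dumps({
        "doc_id": doc_id,
        "aicl_level": level,
        "confirmed_by": confirmed_by,
        "confirmed_at": datetime.now(timezone.utc).isoformat(),
    })
```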


The cultural dimension is the hardest part

If the company frames AICL as surveillance or a performance metric ("managers will judge you for using too much AI"), people will systematically declare AICL-0 regardless of the system. The framework only works if mid-range levels are genuinely respected. An AICL-4 from a domain expert who used AI as a scribe represents outstanding work. That framing must come from leadership first.

// AICL Scale, implementation guide
Examples

AICL across industries

These scenarios illustrate how AICL levels appear across different professional contexts. Each one maps to a specific level based on where the concept originated, who developed the structure, who wrote the words, and how deeply a human expert reviewed the result.

Academic publishing

Where epistemic credibility is everything

AICL-2
A professor used AI extensively to map the existing literature before writing an entirely independent analysis. The insights, methodology, and all writing are wholly theirs. AI only accelerated the learning phase before they put pen to paper.
AICL-4
A clinical researcher had a fully formed methodology and original findings. They briefed AI verbally and through notes, and AI produced the full paper draft. The researcher reviewed every claim, corrected inaccuracies, and validated the science. The intellectual contribution is entirely theirs; the writing is AI-produced.
AICL-7
A graduate student had AI draft their literature review section and then edited it for tone and flow. The topic and direction were theirs, but AI developed the structure and wrote the text. No independent expert reviewed the substance beyond the student's own self-correction.

Journalism and media

Where reader trust is the core product

AICL-1
An investigative journalist used AI to verify public timelines and cross-check facts during their research process. The investigation, interviews, analysis, and all writing are entirely their own work. AI served as a faster search tool and nothing more.
AICL-5
A reporter developed an explainer article with AI. They provided the original reporting, the key facts, and the editorial angle. AI drafted sections, and the reporter rewrote and enriched them substantially with original judgment and sourced detail. Both parties contributed meaningfully to the final text.
AICL-8
An automated news brief about earnings data was AI-generated from a structured data feed and a prompt template. An editor reviewed it for basic accuracy before publication but applied no substantive editorial judgment or independent analysis.

Consulting and advisory

Where clients pay for expert judgment

AICL-3
A senior partner used AI to stress-test their strategic hypotheses before a client presentation. AI challenged their assumptions and surfaced counterarguments. The concept, all development logic, and every word in the deliverable are the partner's own. AI only pressure-tested their thinking.
AICL-4
A specialist consultant had a complete proprietary framework in their head. They directed AI to write the full client deliverable from their verbal briefing and working notes. AI produced all the prose and structured the document. The consultant reviewed every section against client data and signed off on every recommendation.
AICL-6
AI produced a market sizing analysis from a detailed prompt. The principal defined the concept and analytical approach. AI developed the structure and wrote the full output. A senior analyst then reviewed the findings, corrected several assumptions, and added proprietary benchmark data before the report went to the client.

Corporate communications

Where volume, consistency and brand voice matter

AICL-4
A senior executive had a clear vision for a thought leadership piece: the argument, the examples, and the conclusions were all in their head. They briefed AI in detail and AI wrote the full article. The executive reviewed every paragraph, revised the framing in two sections, and approved the final version.
AICL-7
A marketing team briefed AI with a topic direction and key messages. AI developed the structure and content. A copywriter edited each piece for tone and brand voice but did not change the substance or apply domain knowledge to validate claims.
AICL-9
Product descriptions were auto-generated via an agentic pipeline directly from product data and published to the website without any human review. No editorial judgment was applied at any point in the process.

Software development

Where logic, architecture and correctness matter

AICL-1
A senior engineer wrote all the code independently but used AI to look up syntax, check documentation, or verify library APIs during development. The architecture, logic, and implementation are entirely their own work.
AICL-4
A software architect designed the full system: the data model, the architecture, the service boundaries, and the logic flows. AI wrote all the code from that specification. The architect reviewed every file, caught logic errors in two modules, and validated the implementation against the original design before shipping.
AICL-6
A developer and AI built a feature together. The developer defined what the feature needed to do and reviewed AI-generated code at each step, redirecting the approach twice and rewriting key sections independently. The final code reflects significant input from both parties, with the developer making all critical decisions.

Legal and compliance

Where accuracy and accountability carry legal weight

AICL-3
A practicing attorney used AI to identify potential weaknesses in their legal argument before filing. AI surfaced three counterarguments the attorney had not considered. The attorney evaluated each, incorporated two into their strategy, and wrote the brief entirely themselves.
AICL-4
A compliance officer had a complete understanding of the regulatory requirement and the organisation's obligations. AI drafted the compliance policy document from their detailed briefing. The officer reviewed the full document against the regulation, corrected two interpretations, and signed off as the accountable party.
AICL-8
A legal team used AI to generate a first-draft contract from a standard template and a brief. A paralegal checked it for obvious errors and formatting issues before sending it to the client. No qualified attorney reviewed the substance before it left the firm.
// AICL Scale, real-world examples
Common questions

Frequently asked questions

// AICL Scale, aiclscale.org