The AI Labels specification seeks to provide a voluntary labeling scheme for creators who would like to clarify the role of AI in the creation of their work products.
The specification defines three levels of AI involvement:
The use of AI is nuanced, and these three categorizations leave room for interpretation. For example, using a spell checker or sentence autocompletion tool is quite different from generating paragraphs of text and lightly editing them before posting. The AI Labels specification aims to provide a solution to this nuanced problem.
This specification is not immune to criticism, but the challenge is already upon us. We believe a well-intentioned and partially complete specification is preferable to the alternative: providing no guidance and allowing ambiguity to proliferate further.
As AI technology advances and becomes more prevalent in daily life, it is not always clear whether content was created by a human or by AI.
The initial wave of AI technology that took hold in the 2010s was primarily focused on perception: voice assistants, object detection, video summarization. This raised many questions around the ethical and responsible use of AI. For example, self-driving cars must make moral judgments when put in challenging situations.
In the 2020s, generative AI took off. This involved similar underlying technologies, but the result was AI creating something, not simply perceiving some sensory input. While a powerful tool that opens many possibilities across creative industries, the technology also raises some ethical questions. Should consumers have the right to know if AI created the product they are consuming?
Labels are prevalent in consumer products today, some regulated and some voluntary. Examples from a variety of industries offer inspiration:
We view AI as a powerful tool. Tools are neither inherently good nor evil, but it is prudent to use them responsibly. As responsible professionals in the field of AI, we believe it is in the general public's best interest to provide a framework for clearly labeling the use of AI in products. We welcome discussion of differing views and opinions so the specification can grow over time to adapt to the needs of consumers. We believe an approach where industry insiders usher in the specification is preferable to a situation where an external governing body imposes regulations on a resistant industry.
As such, we invite industry insiders, consumers, and regulators alike to participate. Our goal is to provide a framework that harnesses the power of AI technology for good, openly, transparently, and with respect for all parties involved.
This project is not a commercial one by nature, but commercial entities may be born out of this specification. We believe a governing specification is best kept isolated from commercial interests, as this frees the contributors from potential conflicts of interest.
Some certification bodies charge fees for auditing works and providing an external assessment of a product's workflow and adherence to a standard. For example, SOC 2 and ISO 27001 are common information security frameworks where impartial auditors can be hired to review a company's internal practices and provide a certificate demonstrating its level of conformance.
We are not necessarily opposed to the project evolving in this direction in the future, so long as the governing body remains impartial.
For the purposes of this specification, we consider AI to be any algorithm that is capable of autonomously generating materials. The materials can take multiple forms, including but not limited to written prose, song lyrics, photorealistic images, 3D models, illustrations, animations, sound effects, or song riffs.
Since the purpose of this specification is consumer clarity, it is helpful to draw some delineations between what is and is not considered AI. In the broadest sense, commonplace technologies that have been around for decades and are embedded in common tools could be considered AI: spell check, red eye reduction, image sharpening, and denoising are a few examples. We consider these approaches not to be AI, as including them would dilute the value of the labeling scheme, and the technologies do not raise the same ethical considerations as more modern AI tools like image and text generation.
A technique is considered to be AI if it has any of the following properties:
The specification provides three categorizations for creative works. Publishers of these works can self-certify according to the guidelines present in this specification and use the appropriate badge in the distribution of their work.
For example, a book cover could display the badge near the barcode. An online art gallery could include the badge in the description of the piece.
The Made by humans mark is the most restrictive mark. The work must be created by humans without AI assistance. The use of computers is permitted, but AI assistance as defined earlier cannot be used.
The Made primarily by AI mark can be used on works where the majority of the work was created by AI. Humans can still be involved in the creative process, but the primary characteristics of the piece have come from AI tools.
For example, a book cover designed by generative AI tools where a human added title and author text over the artwork would be considered to be primarily created by AI.
If the human’s primary contribution to the piece was prompt engineering, the work shall be classified with the Made primarily by AI mark.
This category has some room for interpretation, so a variation can be used where real estate is allocated to identify specifically what the AI did. This option allows the creator of the work to describe in more detail which aspects of the work were done by AI versus humans.
To use the label in your own work, you can use the blank template form and add your own descriptor:
The Made by humans with AI mark defines a mix of contributions between human and AI. The mark shall be used in scenarios where both AI and humans made meaningful contributions, but the human contributed the work's significant components.
For example, a song where the AI generated a riff and bass line, but a human wrote and recorded the lyrics would fall under this category.
Similar to the AI-only case, this label can be expanded to provide additional clarity:
To use the label in your own work, you can use the blank template form and add your own descriptor:
The descriptor in the labels is set in Alumni Sans. The ailabels.org caption is set in Inter.
The AI Labels specification was originally authored by Zach Rattner, cofounder of Yembo.ai.
If you’d like to leave feedback, please join the discussion on GitHub.