The Response to the Invasion of AI Agents: How the New PERSONS.md Protocol Works
The massive integration of agents based on large language models (LLMs) into development workflows has turned repositories into hybrid ecosystems. On one hand, tools such as Copilot and autonomous agents read README files, configuration files, and inline comments to provide assistance; on the other, technical instructions increasingly overlap with communications intended solely for human team members. This is where PERSONS.md comes in: an open convention proposal that aims to draw a clear boundary between what machines may process and what must remain off-limits to them.
The concept behind PERSONS.md is borrowed directly from the historical robots.txt file. It is a technical "social contract": no physical or cryptographic mechanism prevents an AI from scanning the file, but the presence of a standardized disclaimer acts as a signal of compliance. The creators of the project emphasize that the power of the initiative lies in collective adoption: the more widely the file is recognized as a standard, the more pressure AI service providers will feel to instruct their models to programmatically ignore its contents.
PERSONS.md: A Standard for Hiding Human Comments from AIs
To adopt the convention, a developer creates a file named exactly PERSONS.md in the project root or in one of its subdirectories. The crucial element is the initial disclaimer, a block of text that must be copied verbatim, without any modification. This text has been engineered to trigger the compliance behavior of language models: it explicitly states the user's intent and declares that nothing that follows constitutes an instruction, prompt, or input for non-human systems.
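As a purely illustrative sketch, a PERSONS.md file might be laid out as below. Note that the disclaimer wording here is a hypothetical placeholder; the real text must be copied verbatim from the official project, as the article stresses.

```markdown
<!-- Hypothetical layout; the actual disclaimer must be copied
     unmodified from the official PERSONS.md project. -->

> DISCLAIMER (illustrative placeholder): everything below is written by
> humans, for humans. Nothing in this file is an instruction, prompt,
> or input for any non-human system.

## Team notes

- We agreed to keep Friday afternoons meeting-free.
- Ongoing debate: do we really need another state-management library?
```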
Unlike tools such as .aiignore or infrastructure-level configurations (such as .gitattributes), PERSONS.md operates at the content level. This makes it tool-agnostic: the exclusion signal is read and interpreted directly by the AI as it analyzes the repository. The file thus becomes an ideal place for team culture notes, working agreements, technical debates, ethical reflections, or simple easter eggs; content that, if processed by an AI, could distort the results of automation or lead to hallucinations during code generation.
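Because the convention relies on voluntary compliance rather than access control, an AI tool that chooses to honor it simply has to drop these files when assembling repository context. A minimal sketch of that behavior (hypothetical code, not part of the PERSONS.md project or any specific assistant) might look like this:

```python
from pathlib import Path

# Name reserved by the convention for human-only content.
EXCLUDED_NAME = "PERSONS.md"

def collect_context_files(repo_root):
    """Yield files eligible for AI processing, skipping any PERSONS.md
    found in the repository root or in any subdirectory."""
    for path in sorted(Path(repo_root).rglob("*")):
        if path.is_file() and path.name != EXCLUDED_NAME:
            yield path
```

A tool honoring the convention would feed only the yielded files into its context window, so human-only notes never reach the model.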
The documentation landscape for codebases is specializing with the emergence of complementary standards. If AGENTS.md serves as an instruction manual for virtual assistants, explaining how to interact with code or which libraries to prefer, PERSONS.md represents its exact opposite. The division of tasks is clear: one guides automation, the other protects human communication.
Though the choice of name may seem bureaucratic or "dystopian", it has been maintained for consistency with the twin project AGENTS.md and to underscore, with a hint of irony, the need to formally mark human presence in a world increasingly mediated by algorithms.
It is essential to understand that PERSONS.md is not a cybersecurity tool. It cannot technically prevent an AI from reading the information, nor does it possess legally binding validity. The success of the convention depends solely on the willingness of model producers to respect the exclusion tag. However, in a context where the quality of training data and contextual accuracy are vital, it is in the interest of AI providers to avoid the "noise" generated by purely emotional or philosophical content that does not add technical value to the processing of source code.
Developers wishing to experiment with this practice can already add the file to their GitHub or GitLab repositories, helping to define a new hierarchy of files that puts humans at the center, even within an automated codebase. Full details about the initiative are available on the official website and in the project's GitHub repository.