Using artificial intelligence (AI) for work tasks is becoming increasingly common, from drafting an email to designing a presentation or preparing a report. Many users greatly appreciate the assistance these tools provide, yet they are often cautious about disclosing that they have relied on generative language models such as ChatGPT, Gemini, Perplexity, or Copilot.
There has been extensive debate about the ethical boundaries of using AI. One of the points that draws the most consensus among experts is the recommendation to disclose the use of generative AI—and, in some cases, even include the prompts used to complete the assigned task. Digital transparency should, in theory, foster greater trust, yet paradoxically, it may produce the opposite effect.
This is one of the key conclusions of our article “The Transparency Dilemma: How AI Disclosure Erodes Trust” (Organizational Behavior and Human Decision Processes, 2025), which examines how disclosing AI usage impacts trust across various types of tasks—from communication and analysis to artistic work—as well as across individual actors, such as team leaders, collaborators, educators, analysts, and creatives, and organizational actors such as investment funds.
A Matter of Trust
In the AI field, much attention is paid to user trust in the technology, but it is also important to consider the trust placed in those who use it. More and more professionals are asking themselves whether they should disclose their use of AI at work. Our study focuses on how that decision can influence the trust placed in a person.
Our research shows that people who disclose having used AI are trusted less than those who do not. Moreover, this negative effect holds regardless of how the disclosure is framed, goes beyond general algorithm aversion, and occurs whether or not AI involvement is already known and whether the disclosure is voluntary or mandatory.
The Moral Dilemma of AI Disclosure
Some consider disclosing AI usage a moral responsibility, as it clarifies the role of the technology in work processes and gives proper credit to its contributions. On the other hand, people are also concerned about how such use will be perceived once revealed, as shown by recent surveys: while most people believe AI use should be disclosed, many hesitate to do so themselves.
Our study addresses this dilemma. Disclosures aim to make visible the involvement of automated systems that may influence processes or outcomes, functioning as a warning to audiences that the work was not generated solely by a human. As a result, the disclosed reliance on AI tends to be perceived as illegitimate, and this perception in turn diminishes trust.
The Role of Legitimacy
Legitimacy refers to the perception that a person’s actions or decisions are desirable, appropriate, or acceptable within a given context. Concerns about legitimacy tend to arise when people encounter practices that deviate from established norms or challenge preconceived ideas about what constitutes proper behavior.
When someone acts in ways that cast doubt on their adherence to social norms—triggering negative legitimacy judgments—it sets off mental alarms that lead to diminished trust. In other words, deviating from norms makes a person especially vulnerable to losing others’ trust.
(Still) Very Human Capabilities
The special value we place on human skills and ideas is a deeply rooted expectation in both cultural and legal norms. When AI usage is disclosed, it may be seen as a departure from this expectation—leading people to judge such practices as inappropriate, since they reduce—or even replace—human input. This also touches on issues like intellectual property and authorship.
Thus, AI usage is perceived as inconsistent with socially accepted standards for task execution, undermining legitimacy. In contrast, when AI use is not disclosed, this perception shift does not occur, allowing people to maintain an appearance of conformity to accepted practices.
When Transparency Becomes Distrust
Research on transparency generally highlights its positive effects. In our study, however, acts of transparency intended to signal honesty and build trust can ironically spark doubt. Openly disclosing certain practices to reassure audiences may instead draw more attention to them and provoke questions about their appropriateness.
Such disclosure, especially when meant to preempt fears or suspicion, can trigger defensive reactions. Thus, even when the intention is to build trust, revealing AI usage may paradoxically invite greater scrutiny and skepticism regarding the legitimacy of the discloser's practices.
The Broad Impact of AI Disclosure
In our study, the negative effect of disclosing AI usage on trust was observed not only in the general population but also across professional groups such as legal analysts and hiring managers, as well as among student samples.
This effect was evident in a wide range of communication and writing tasks where generative AI is now commonly used—from routine emails to more significant outputs like application letters. It also extended to analytical tasks (e.g., estimating tax refunds) and artistic endeavors (e.g., advertising creation or graphic design). Furthermore, this trust erosion was observed across different contexts, from hiring decisions to investment choices.
Finally, we found that the negative effect is mitigated among people with positive attitudes toward technology and among those who perceive AI as accurate. However, our analysis did not find that general AI familiarity or frequent AI usage reduced the effect.
Organizational Recommendations
Many organizations are grappling with whether AI disclosure should be optional or mandatory, how to enforce it, and how to maintain trust. Our findings suggest that both approaches can be valid, but each requires specific strategies. If mandatory disclosure is chosen, enforcement mechanisms such as AI detectors are recommended. In addition, fostering a culture that collectively legitimizes AI use can help reduce the trust-related consequences of disclosure.
In commercial settings, the effects of disclosing AI usage are equally significant. Transparency in marketing can erode consumer trust, lowering perceived authenticity and emotional connection with the brand. While AI may streamline content creation, explicitly revealing its use carries social and psychological costs that negatively impact customer satisfaction, engagement, and loyalty.
Much like the privacy paradox, in which people express concern over their data yet share it freely, the AI disclosure effect reveals a form of interpersonal hypocrisy: people criticize others for using AI even as they use it themselves.
The study also challenges the idea that AI is "just a tool." Framing it that way does not prevent the erosion of trust once its use is disclosed. Finally, while AI may assist with early-stage creative ideation, we anticipate that disclosing its role in these processes could carry substantial social costs, raising the question of whether AI truly enhances human creativity or merely repackages existing ideas.
The authors are Oliver Schilke, Distinguished Visiting Professor in Leadership and Effective Organizations at EGADE Business School and Tec's Undergraduate Business School, and Martin Reimann, Distinguished Visiting Professor in Consumer Behavior at EGADE Business School and Tec's Undergraduate Business School.