The AI Trust Paradox: Why We Judge AI Work Differently (and why so many of us feel the need to hide it)

by Alumni Relations Office

Here is a question I have been sitting with.

When you learn that a piece of work was produced with AI, does it change how much you trust it? Do you hold it to a higher standard, a lower standard, or the same standard as work produced without AI?

Now here is the follow-up, and it is the one that makes the room go quiet:

How many of you have felt the need to mask the fact that you used AI to prepare a presentation, draft an article, or build a recommendation for the executive team or board?

If you felt a small jolt reading that, I am almost certain you are not alone.

The Trust Shift No One Talks About

Something interesting happens the moment someone says, “I used AI for this.” The energy in the room shifts. People start reading more critically. They look for cracks. They scan for the parts that feel generic or off.

But here is what is strange: if the same output had been presented without the disclosure, it would often be accepted without question. The content has not changed. Only the label has.

This is the trust paradox of AI-assisted work. We simultaneously over-trust AI when we do not know it is there, and under-trust it the moment we find out.

So when AI is invisible, we do not question it enough. And when it is visible, we question the person who used it rather than the quality of the work itself.

Neither response serves us well.

Why We Hide It

I have spoken with executives, consultants, and team leaders who quietly use AI for significant parts of their work and then go to considerable effort to erase any trace of it. They rewrite AI-generated text to “sound more like them.” They present outputs without mentioning the tools that helped produce them. Some even feel a low-grade guilt about it, as if using AI is a form of cheating.

Why?

Because the professional world has not yet figured out what AI use signals about competence. In many workplaces, there is an unspoken assumption: if you used AI, you probably did not think very hard. The tool did the heavy lifting. You just pressed a button.

This assumption is often wrong. Using AI well requires judgment. It takes critical thinking to know what to ask, how to evaluate the output, what to keep, what to discard, and what to rethink entirely. The person who uses AI skillfully and still applies rigorous thinking has not outsourced their judgment. They have extended it.

But that distinction is hard to see from the outside. And so people hide.

The Real Question Is Not “Did You Use AI?”

The question we should be asking is not whether AI was involved. It is whether thinking was involved.

A beautifully written strategy memo that was drafted entirely by a human can still be intellectually lazy. It can recycle old assumptions, avoid hard truths, and tell the leadership team exactly what they want to hear. Meanwhile, someone who used AI as a sparring partner to stress-test their reasoning, surface blind spots, and sharpen their argument has done real intellectual work, even if the first draft came from a machine.

The value of any piece of work lies in the quality of the thinking behind it, not in the tools used to produce it.

We do not question whether someone used spell check, a calculator, or a financial modeling tool. We evaluate the output on its merits. AI should be no different, but we are not there yet.

What Masking Costs Us

When professionals feel they must hide their AI use, three things happen.

First, the organization loses the chance to learn. If no one admits they are using AI, teams cannot have honest conversations about what is working, what is not, and where human judgment still needs to intervene. The learning stays private, and the mistakes stay invisible.

Second, it creates a false standard. When AI-polished work is presented as purely human-produced, it raises the bar in ways that are unsustainable and dishonest. Colleagues compare themselves to an output that was never purely human in the first place. This breeds imposter syndrome and erodes the very psychological safety that teams need in order to think well together.

Third, it delays the development of shared norms. Every organization will eventually need to agree on how AI should be used and disclosed. That conversation cannot happen if everyone is pretending they are not using it.

Toward a Better Standard

What if, instead of asking "Was this made with AI?", we asked better questions:

What assumptions does this rest on?

What data was this based on, and what might have been left out?

Did the person challenge the output, or accept it at face value?

What is the quality of the thinking, regardless of the tools?

These are the questions that lead to better work. They apply whether the output came from AI, from a team, or from a single person working alone at midnight.

Harvard Business School professor Amy Edmondson and 3M Chief Science Advocate Jayshree Seth have argued that AI adoption should be treated as a team development effort, not just a technology upgrade. Part of that development means creating the psychological safety for people to say, “I used AI for this, and here is how I applied my judgment to it,” without fear that the disclosure will diminish them.

An Invitation

I will go first.

I use AI regularly in my work. I use it to draft articles, build financial models, synthesize research, structure presentations, and stress-test my reasoning. I use it for the unglamorous work and the high-stakes work alike. I also question its outputs, push back on its assumptions, and frequently discard what it gives me in favor of something better.

That is not outsourcing my judgment. That is using a tool well.

The professionals who will thrive are not the ones who avoid AI, and not the ones who accept its outputs uncritically. They are the ones who use it with intellectual honesty, who apply their thinking rigorously, and who have the courage to be transparent about how they work.

So here is my challenge to you: the next time you use AI in your work, try saying so. Not as a confession. As a contribution to a more honest professional culture.

The stigma only breaks when someone is willing to go first.

Original post: https://www.linkedin.com/pulse/ai-trust-paradox-why-we-judge-work-differently-maria-victoria-betita-ymgkc/
