The Fable of the Illusory Truth

This is a leadership fable. The characters and company are entirely fictional, but the pattern is a composite of multiple experiences I have observed over two decades of technology leadership. The quieter pace of 2020, with so many of us working from home, gave me time to reflect on patterns I had watched play out across organizations for years. If you have spent time in large organizations, you will recognize what happens here.


Part One: The Narrative

Nadia Reeves had been Chief Product and Technology Officer at Meridian Publishing for exactly eleven days when she walked into her first executive leadership team meeting.

She had spent those eleven days listening. Listening was what she did first at every new organization. Not presenting a 90-day plan. Not reorganizing. Listening. She had learned years ago that the fastest way to understand a company was to ask the same five questions of twenty different people and pay attention to where the answers diverged.

But in this case, the answers did not diverge. They converged on a single point with startling consistency.

“Before we get into the agenda,” said Martin Hale, the CEO, from the head of the conference table, “I want to acknowledge that Nadia is inheriting a challenging situation with the technology organization. We all know the issues. The team has been slow for a long time. Nadia, we are here to support you in turning that around.”

Around the table, heads nodded. The CFO. The head of advertising revenue. The editor-in-chief. The SVP of marketing. All nodding as if Martin had said something as obviously true as “it is Wednesday.”

“Thank you, Martin,” Nadia said. “I am still getting up to speed. Can you help me understand what you mean by slow? Slow relative to what?”

A brief silence. Martin glanced at Diane Chen, the editor-in-chief.

“I can give you a recent example,” Diane said. She leaned forward, folding her hands on the table. “We asked for a new content recommendation module for the homepage. We were told it would take eight weeks. It took five months. Five months for what is essentially a widget.”

“That is a good example,” said Rob Aguilar, the head of advertising revenue. “And it is not an isolated one. When my team needs anything from product and technology, we build in a buffer of at least double whatever timeline they give us. We have learned not to trust the estimates.”

Nadia wrote this down. She did not argue. She did not defend. She wrote it down and asked her next question.

“How long has this been the case?”

“Years,” said Martin. “At least two years. Maybe longer.”

“At least three,” Diane corrected.

Nadia nodded. “I appreciate the candor. Give me a few weeks to assess the situation, and I will come back with what I find.”

After the meeting, she walked back to her office and closed the door. She sat down and looked at the notes she had taken over the past eleven days. She flipped through page after page of conversations with directors, senior engineers, product managers, project managers, and stakeholders across every department.

She had spoken to forty-one people. Thirty-four of them had used some variation of the same phrase. The technology team is slow. The tech org cannot deliver. Product and engineering is broken.

Seven people had said something different.

All seven of them worked in the technology organization.


The engineer’s name was Priya Subramanian, and she was one of the seven.

Nadia found her in a small conference room on the fourth floor, the one the engineers used as an overflow workspace because the open floor plan was too loud for focused work. Priya was a staff engineer, which at Meridian meant she was one of the most senior technical people in the building. She had been at the company for six years.

“Can I ask you a direct question?” Nadia said.

“Sure.”

“Is this team slow?”

Priya closed her laptop. She looked at Nadia for a long moment, as if deciding something.

“No,” she said. “We are not slow. But I understand why you are hearing that.”

“Tell me.”

Priya leaned back in her chair. “Two and a half years ago, we did the homepage redesign. The big one. Codenamed Horizon. The CEO wanted it done in twelve weeks. The previous CTO — David Carr — agreed to twelve weeks even though the engineering leads told him it was a twenty-week project at minimum. David wanted to show he could deliver for Martin.”

“What happened?”

“What always happens when you promise twelve weeks on a twenty-week project. We got to week ten and we were maybe 40% done. But it was worse than that, because by week ten the scope had changed three times. Diane’s team kept adding requirements. They wanted a new content taxonomy, then a personalization layer, then integration with an events calendar that did not have an API. Each addition was presented as small. Each one was not small.”

“And it shipped late.”

“Five months late. David left shortly after. I think he was asked to leave, but nobody said that out loud. And the story that came out of Horizon was not ‘we underestimated a project and then tripled its scope.’ The story was ‘the technology team is slow.’”

Priya paused.

“That was two and a half years ago. The story has not changed since. Every time anything takes longer than someone expects, it confirms what they already believe. Every time we deliver on time, nobody notices, because it does not fit the story.”

Nadia felt the weight of what Priya was describing. It was not an engineering problem. It was not a process problem. It was a narrative problem. And narrative problems are harder to fix than either of those, because the people telling the story do not know they are telling a story. They think they are stating a fact.

“How many people have left because of this?” Nadia asked.

Priya’s expression shifted. Something tightened around her eyes. “I can give you names if you want. Eleven engineers in the last eighteen months. Not all of them left because of the narrative directly, but all of them felt it. When the rest of the company thinks you are the problem, it does something to you. You stop volunteering for projects. You stop suggesting ideas in meetings. You start updating your LinkedIn profile.”

She paused again.

“I almost left six months ago. I had an offer from a company in Austin. More money, better title. I turned it down because I care about the people on this team. But I will tell you honestly, Nadia, if things do not change in the next year, I will not turn down the next one.”


Nadia spent the next two weeks in the data.

She pulled every project that had been completed or canceled in the previous eighteen months. She looked at original scope documents, change requests, delivery dates, and quality metrics. She built a spreadsheet that no one had asked her to build, because no one had thought to ask the question it answered.

The question was simple: how often does this team actually deliver what it commits to, on time, at the quality level expected?

The answer was 85%. Twenty-three out of twenty-seven planned projects delivered on time against their committed scope. The four that slipped had all experienced significant scope changes after development began. In every case, the scope change was initiated by a business stakeholder who added requirements after the project was already underway.

She pulled the defect data. The team’s production defect rate was below the industry average for media and publishing companies. She pulled the infrastructure data. Uptime was 99.97% over the previous twelve months. She pulled the deployment data. The team was shipping code to production an average of fourteen times per week, up from twice a week two years earlier.

By every objective measure she could find, the technology team at Meridian Publishing was competent. More than competent. They were good. They were delivering. They were improving.

But the organization did not see any of this. The organization saw “slow.”


On a Thursday evening, Nadia sat alone in her office with her laptop and a cup of tea that had gone cold an hour ago. She was reading. Not about Meridian, not about project management. She was reading about why people believe things that are not true.

She had been searching for something — a framework, a name for the phenomenon she was witnessing — and she found it in a 1977 psychology study by Lynn Hasher, David Goldstein, and Thomas Toppino.

The illusory truth effect.

The study was elegant in its simplicity. The researchers presented participants with a series of statements — some true, some false — and then repeated a subset of those statements in later sessions. The result: participants rated the repeated statements as more likely to be true, regardless of whether they actually were. The mere act of hearing something again made it feel more credible. Not because of evidence. Not because of logic. Because of familiarity.

Nadia sat with that for a long time.

She thought about Martin saying “we all know the issues” in the leadership meeting. We all know. Not “I believe” or “the data suggests.” We all know. As if the slowness of the technology team was as settled as gravity.

She thought about Rob building in a “buffer of at least double.” Not because he had analyzed delivery data, but because he had heard the story enough times that the story had become his planning assumption.

She thought about Diane calling a five-month project a “widget.” Diane was not being dishonest. She genuinely perceived it as simple, because the narrative had taught her that any delay from the technology team was evidence of dysfunction, not evidence of complexity.

The illusory truth effect did not just explain what was happening at Meridian. It explained the mechanism by which organizational myths perpetuate themselves. A statement gets repeated. Repetition creates familiarity. Familiarity feels like truth. And once something feels true, people stop checking whether it is true. They stop even considering that it might not be.

She read further. She found that the effect is strengthened by source diversity. When multiple people say the same thing, the brain treats the repetition as corroboration rather than echo. Martin saying it, Diane saying it, Rob saying it — the brain does not process this as “one story repeated by three people.” It processes it as “three independent confirmations.” Each voice adds credibility, even though all three voices learned the story from each other.

And then confirmation bias takes over. Once you believe the technology team is slow, you notice every late project and forget every on-time delivery. The late project confirms what you know. The on-time delivery is an exception, an anomaly, not worth updating your mental model for. The narrative becomes self-reinforcing, immune to contradictory evidence, because contradictory evidence is filtered out before it reaches conscious evaluation.

Nadia closed her laptop. She understood the problem now. She understood it well enough to name it, which meant she understood it well enough to fight it.

But she also understood something that the psychology papers did not emphasize: the illusory truth effect does not just distort perception. It distorts decisions. And distorted decisions create real consequences that look like evidence for the false narrative.

At Meridian, the narrative that the technology team was slow had led to understaffing. When the team requested headcount, the request was viewed skeptically. “Why would we give more resources to a team that cannot deliver with the resources it has?” The understaffing led to longer timelines on certain projects. The longer timelines confirmed the narrative. The narrative led to more skepticism about headcount. The cycle continued.

Talented engineers left. Their departure meant remaining engineers were stretched thinner. Projects took longer. The narrative was confirmed again.

Recruiting became harder. Candidates heard through backchannel references that the technology organization was “struggling.” The best candidates chose other offers. The team had to hire from a smaller pool, which occasionally meant hiring people who needed more ramp-up time, which occasionally meant slower delivery on their projects. The narrative was confirmed again.

This was not a story about a bad team. This was a story about a system that had arranged itself to produce the very outcomes that justified the story it was already telling.


Part Two: The Fight

Nadia’s first move was not what anyone expected.

At the next executive leadership team meeting, she did not present a restructuring plan. She did not announce a new methodology. She did not bring in consultants. She brought a one-page document with four numbers on it.

“Before I share my assessment,” she said, “I want to share some data that I do not think this group has seen before.”

She placed the document in the center of the table.

“In the last twelve months, the product and technology team committed to delivering twenty-seven projects. Twenty-three of those were delivered on time against the committed scope. That is an 85% on-time delivery rate. The four that were late all had scope changes initiated by business stakeholders after development began.”

Silence.

“Our production defect rate is 0.3 per thousand lines of code, which is below the industry average. Our infrastructure uptime is 99.97%. We deploy to production fourteen times per week.”

More silence.

Martin spoke first. “Those numbers do not match my experience.”

“I understand that,” Nadia said. “And I want to explore why. Because the gap between these numbers and the organization’s perception of the technology team is the single biggest problem I have found at Meridian. Not the team’s performance. The gap.”

Rob shifted in his chair. “Nadia, I respect the data, but I have lived through project after project that came in late and over budget. That is not perception. That is experience.”

“Can you name one from the last twelve months?”

Rob opened his mouth, then paused. He looked at the ceiling. “The ad targeting integration. That was supposed to be done in Q2 and it did not ship until August.”

Nadia nodded. She had anticipated this one. “The ad targeting project was scoped at six weeks. Three weeks into development, your team requested integration with two additional ad platforms that were not in the original scope. That added four weeks. It was delivered on the revised timeline.”

“But the original timeline was six weeks,” Rob said.

“And the original scope did not include Taboola or Outbrain integration. When the scope changed, the timeline changed. That is not slowness. That is math.”

Diane leaned forward. “What about the recommendation module? I mentioned it at your first meeting.”

“The recommendation module was estimated at eight weeks for the original specification. During development, the requirements expanded to include personalized recommendations based on reading history, A/B testing infrastructure, and integration with the newsletter system. The final scope was roughly three times the original. It shipped in twenty weeks. The engineering work was well-executed.”

“It still took five months,” Diane said.

“It did. And the question is whether five months for a personalized recommendation engine with A/B testing and newsletter integration is slow, or whether it is a reasonable timeline for a project that turned out to be three times larger than originally specified.”

Diane did not respond immediately. Nadia could see her processing. Not agreeing, not yet. But processing.

Martin broke in. “Nadia, I hear what you are saying. But this organization has had a technology problem for years. Multiple leaders have confirmed it. I confirmed it. Are you telling me we have all been wrong?”

“I am telling you that a specific project failure two and a half years ago generated a narrative, and that narrative has been repeated so often that it now feels like established fact. I am not saying the technology team is perfect. I am saying the story everyone tells about this team does not match the data, and the gap between the story and the data is causing real damage.”

“What kind of damage?”

“Eleven engineers have left in the last eighteen months. Several of them were among the best we had. They did not leave for more money. They left because they were tired of working in an environment where the dominant story is that they are failing, when they know they are not. We are also struggling to recruit. Candidates hear through references that the technology organization is broken. The perception is costing us talent, and the talent loss is making actual delivery harder, which reinforces the perception. It is a cycle.”

The room was quiet.

“I am not asking you to believe me,” Nadia said. “I am asking you to look at the data. I am going to publish a monthly dashboard that tracks what we commit to, what we deliver, and what changes along the way. I am going to make the team’s actual performance visible. All I ask is that you read it.”


The dashboard launched the following Monday. It was simple, deliberately so. Five metrics, updated monthly: projects committed, projects delivered on time, scope changes (with originator), defect rate, and infrastructure uptime. Nadia sent it to the executive team with no commentary. Just the numbers.
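The mechanics of a dashboard like Nadia's are simple enough to sketch. The model below is my own illustration, not anything from the story: the field names, the sample structure, and the decision to compute on-time delivery against the revised (post-sign-off) commitment are all assumptions about how such a tracker might be built.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ScopeChange:
    originator: str   # stakeholder who requested the change
    added_days: int   # engineering days the change added

@dataclass
class Project:
    name: str
    committed: date             # originally committed delivery date
    revised: date               # commitment after signed scope changes
    delivered: Optional[date]   # actual delivery date (None = in flight)
    changes: List[ScopeChange] = field(default_factory=list)

def dashboard(projects: List[Project]) -> dict:
    """Summarize delivery performance, keeping scope changes visible
    alongside the on-time rate so the full story is told."""
    done = [p for p in projects if p.delivered is not None]
    on_time = [p for p in done if p.delivered <= p.revised]
    return {
        "committed": len(projects),
        "delivered": len(done),
        "on_time_vs_revised_scope": len(on_time),
        "on_time_rate": round(len(on_time) / len(done), 2) if done else None,
        # Every change is attributed to its originator, by name.
        "scope_changes": [(p.name, c.originator, c.added_days)
                          for p in projects for c in p.changes],
    }
```

The design choice worth noting is that scope changes are not averaged away into a single slippage number; each one is listed with its originator, which is precisely what made the Meridian dashboard persuasive.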

The first month, nobody responded to the email. Nadia sent the second month’s dashboard. No response. She sent the third.

On the third month, Martin forwarded her the dashboard with a note: “The on-time number is impressive. Is this real?”

“It is real,” she replied. “I can show you the underlying project data anytime you want.”

He did not take her up on the offer. But he had read it. That was enough for now.


Nadia’s second move was structural.

She called it the Scope Commitment Protocol, though she never used that name outside her own team. The rule was simple: any scope change to an active project that added more than two days of engineering work required written sign-off from the requesting stakeholder and from Nadia personally.

The purpose was not to prevent scope changes. Scope changes are a normal part of product development. The purpose was to make scope changes visible and to attach names to them.
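The rule itself reduces to a single threshold check. This sketch is illustrative only; the function name and the idea of returning the required approvers are my own framing of the protocol described above.

```python
def requires_signoff(added_days: int, threshold_days: int = 2) -> bool:
    """Scope Commitment Protocol: any change to an active project that
    adds more than `threshold_days` of engineering work needs written
    sign-off from the requesting stakeholder and the CPTO personally."""
    return added_days > threshold_days
```

The point of the threshold is not gatekeeping but attribution: changes below it flow freely, while anything larger acquires a name and a signature before it can move a timeline.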

The protocol changed behavior almost immediately.

The first test came six weeks in. Greg Heller, a senior director of editorial partnerships, wanted to add social sharing analytics to a project that was already in development. His product manager, a woman named Jess Watanabe, brought the request to Nadia.

“Greg wants social analytics added to the partner dashboard,” Jess said. “He says it is small.”

“How much engineering time?”

“About three weeks.”

“And the project is currently on track for its original deadline?”

“Yes. If we add this, it pushes delivery by at least two weeks, maybe three.”

“Then Greg needs to sign the scope change request. He needs to acknowledge that his addition is what moved the timeline.”

Jess looked uncomfortable. “He is not going to like that.”

“He does not have to like it. He has to decide whether the addition is worth the delay, and he has to own that decision.”

Greg did not like it. He came to Nadia’s office the next day, visibly annoyed.

“This feels bureaucratic,” he said from the doorway.

“Sit down, Greg.”

He sat.

“Let me ask you something. When this project ships three weeks late because of the social analytics addition, what story will you tell in the leadership meeting?”

Greg paused.

“You will say the technology team was late,” Nadia said. “Not because you are dishonest, but because that is the story everyone tells here. You will say they committed to a date and missed it. What you will not say is that you added three weeks of work after the commitment was made. Not because you are hiding it, but because scope changes feel invisible. They feel like small decisions that should not affect timelines. But they do affect timelines. Every single time.”

Greg looked at her.

“I am not blocking your request,” she said. “If you want social analytics, we will build social analytics. Sign the form. We will adjust the timeline and the dashboard will show what happened and why. That is all.”

Greg signed the form.

When the project shipped three weeks late, the dashboard showed the original timeline, the scope change, who requested it, and the revised delivery date. The project was on time against the revised scope. It was late against the original scope. Both facts were visible. For the first time, the full story was told.


Nadia’s third move was the hardest, because it required changing how she talked about her own organization.

The instinct, when your team is under a false narrative, is to defend. To say “we are not slow” in every meeting. To correct every mischaracterization. To fight the story head-on.

Nadia did not do this. She had learned, partly from the illusory truth research and partly from years of experience, that arguing against a narrative keeps the narrative alive. Every time you say “we are not slow,” the word “slow” gets repeated, and repetition is exactly the mechanism that created the problem in the first place.

Instead, she told a different story. She told a story about outcomes.

In the next quarterly business review, she did not lead with delivery metrics. She led with business impact.

“Last quarter, the product and technology team ran four conversion experiments on the subscription page. One of those experiments increased paid conversions by 11%. That experiment took nine days to build, test, and deploy. It generates approximately $400,000 in additional annual revenue.”

She let that land.

“We also rebuilt the breaking news workflow for the editorial team. Publishing time for a breaking story went from fourteen minutes to three minutes. The editorial team has used it sixty-two times since launch. We estimate it has resulted in Meridian being first-to-publish on at least nine major stories where speed mattered for traffic and credibility.”

She paused.

“These are the kinds of outcomes this team produces. I want to make sure they are visible.”

She did not say “we are not slow.” She did not say “the narrative is wrong.” She did not reference the old story at all. She told a new story, and she made it specific, concrete, and tied to business results that the executives cared about.


The change did not happen in a moment. It happened over months, in small shifts that compounded.

Month four: the CFO, during a budget meeting, referred to the technology team’s “strong delivery record” when discussing a new investment. It was the first time Nadia had heard a non-technology executive describe the team positively without prompting.

Month five: Diane Chen stopped Nadia in the hallway after a product demo. “The new article page design is really good,” she said. “Your team did that fast.” Nadia thanked her and said nothing else. She did not point out the irony.

Month six: Martin Hale, in a board meeting that Nadia attended, described the technology organization as “one of our strongest assets.” Nadia kept her expression neutral. Inside, she felt something she had not let herself feel for six months: relief.

But the most meaningful moment came from Priya Subramanian.

Nadia was walking through the fourth floor on a Friday afternoon when she passed the small conference room where they had first talked. Priya was inside with three other engineers, whiteboarding something. Nadia caught her eye through the glass and waved.

Priya stepped out.

“I just want you to know,” Priya said, “that I got another offer. Better than the Austin one.”

Nadia felt her stomach drop.

“I turned it down,” Priya said. “And this time, I did not have to think about it.”


Two years later, Meridian’s technology team had grown by 30%. Not because Nadia had fought for headcount, although she had. Because the team’s reputation had changed, and with it, every downstream decision. Recruiting was easier. Retention was higher. Stakeholders collaborated with the team instead of complaining about it. Scope changes still happened, but they happened transparently, with accountability, and nobody confused a scope change with a slow team.

The false narrative was gone. Not because it was argued away, but because it was replaced by a true one, told consistently, backed by evidence, month after month, until the old story simply could not sustain itself.


What I Take from This

I have seen the pattern in this fable play out at multiple organizations. The specifics change. The mechanism does not.

A real event creates a simple story. The story gets repeated. Repetition makes it feel true. Once it feels true, people stop questioning it. Decisions get made based on the story. Those decisions create conditions that produce new evidence for the story. The cycle continues until someone breaks it.

What makes this pattern dangerous is that the people repeating the false narrative are not lying. They are not conspiring. They genuinely believe what they are saying, because the illusory truth effect is not a choice. It is a feature of how human cognition works. Hearing something repeatedly makes it feel true, and that feeling is powerful enough to override evidence.

If you are a leader, the practical takeaway is this: audit the narratives in your organization. Find the things “everyone knows.” Ask when each narrative started. Ask what evidence supports it. You will likely discover that some of your organization’s most deeply held beliefs are not conclusions drawn from data. They are stories that were told enough times to become indistinguishable from facts.

And if you find yourself on the receiving end of a false narrative, the lesson from Nadia’s experience is this: do not fight the old story. Build a new one. Make it specific. Back it with data. Tell it consistently. And give it time. The illusory truth effect works in both directions. A true story, repeated often enough, can replace a false one.

But it requires patience, and it requires discipline, and it requires the willingness to let the data speak for months before anyone listens.

The dynamic Nadia faced is compounded in organizations where technology teams are seen through an IT culture lens rather than a product culture lens. When the default expectation is that technology is a service function, the illusory truth effect has even more fertile ground.


The Research Behind This

The illusory truth effect was first documented by Lynn Hasher, David Goldstein, and Thomas Toppino in their 1977 paper “Frequency and the Conference of Referential Validity” (Journal of Verbal Learning and Verbal Behavior, 16, 107-112). Their finding — that repeated exposure to a statement increases the perception that it is true — has been replicated extensively across multiple contexts and populations.

For readers interested in the organizational implications, Daniel Kahneman’s Thinking, Fast and Slow (2011) covers the broader landscape of cognitive biases including the role of familiarity in judgment. Jennifer Aaker’s research at Stanford on the power of stories in shaping beliefs and behavior provides a useful complement: Aaker’s work demonstrates that stories are up to 22 times more memorable than facts alone, which helps explain why organizational narratives are so durable and why Nadia’s strategy of replacing one story with another was more effective than presenting data in isolation.

Jeffrey Pfeffer’s Leadership BS examines how organizations perpetuate comfortable fictions about their own functioning, a dynamic closely related to the illusory truth effect at the organizational level. I reviewed Pfeffer’s book and found his argument that leaders often succeed or fail based on narrative control rather than actual performance to be consistent with what Nadia encountered.

The phenomenon of confirmation bias reinforcing illusory truth in organizational settings is well-documented in organizational behavior research. Once a belief is established through repetition, confirmation bias filters new information to support it — a dynamic that Gary Klein and others have studied in the context of expert decision-making under uncertainty.