Noubar Afeyan, Cofounder & Chairman of Moderna and Founder & CEO of Flagship Pioneering, discusses how trust intersects with COVID-19, drugs, innovation, capitalism and more with Matthew Bishop.

Matthew Bishop: Moderna was one of the firms to develop an effective vaccine against COVID-19, which you might think would increase trust. Yet there are some signs that as the pandemic eased, the public’s trust in vaccines declined. What do you make of that?

Noubar Afeyan: One of the interesting things the COVID-19 pandemic brought to light is the degree to which trust is influenced by context. For example, when you’re desperate, you tend to trust more, because you’re desperate to get out of the situation you’re in. Then when you’re no longer desperate, you start using rationality and introducing doubt, especially when you don’t really understand the topic. To me, trust is a way in which we can act without having all the information. Because if you have the necessary information to act, then why do you need to rely on trust? Trust is a subconscious way to propel yourself, even though you don’t have all the information needed to make decisions or act.

In that regard, in the early stages of COVID, people were forced into the state of mind that most allowed them to act without a lot of information, in other words to place trust in the advice of experts. Yet as the pandemic continued, there were countervailing forces, including the way most governments acted and wanted to look like they knew what was going on and that they were in control. They would make pronouncements one day to do this, the next day to do the opposite, and as their stories changed, people started worrying and thinking they shouldn’t be quite as trusting in that moment, even if they were desperate. That contributed to the short-lived nature of trust in so-called experts on the pandemic, which came on top of the conscious anti-vax or anti-science sentiment that existed pre-pandemic.

This was the first mass health challenge in the age of social media, and we got to see how that can amplify misinformation. When people were living in their own microcosms, it would have been hard to spread this level of mistrust. But with social media, mistrust can be weaponized.

MB: Thinking about the risk of future pandemics, what lessons are there about how to win or retain public trust?

NA: It's still too early to judge whether lessons have been learned. So far, I think we're in a mode where people have wanted to forget the pandemic and the widespread pain, suffering and dislocation it caused as quickly as possible. I don't really see a sufficient effort to recognize what worked and what didn't.

I'm unaware of governments that are trying to hold themselves truly accountable for how the pandemic was handled, let alone hold various other constituencies accountable. Because nobody is going to come out looking all that good.

Still, I'm hopeful that there will be learnings with the passage of a little bit of time. We need to find the courage to go back and revisit this and decide to do certain things differently. Especially because this pandemic was just a dry run for many further pandemics, whether they're driven by infectious disease, climate or something else. The extent to which governments are not ready to deal with those is going to be a further basis for mistrust and chaos.

MB: Did governments do anything particularly well that we can learn from?

NA: The most positive thing that was done, at least in the vaccine space, was the relatively early assembly of what became Operation Warp Speed, a kind of public-private partnership to enable action in the face of uncertainty. The key to this approach was how it created optionality, as opposed to picking winners — backing six alternative vaccine approaches, many of which had never before been scaled, and systematically facilitating them, financially, logistically and in clinical execution, by eliminating barriers, including the regulator becoming part of the solution.

All of those things are positive lessons, and it would be a shame for us to forget because parties changed and elections happened. Operation Warp Speed is probably the single most successful thing that was done anywhere, maybe comparable to the U.K.’s rapid establishment of its large-scale diagnostic infrastructure that allowed the tracing of the evolution of the virus and its prevalence.

MB: What about trust in science, which also seems to be declining alarmingly?

NA: In scientific circles, they're talking about that. But unfortunately, what they're doing is lamenting the distrust in science. I think it's a bit more complicated.

Non-scientists usually have to trust in scientists in order to act, as they don’t have the expertise that scientists have. But when you're confronted with unknowns, then actually, there's a limit to the trust you can place in experts, because of what they too don't know.

That's essentially the situation we found ourselves in at the start of the pandemic. As a society, we're going to find ourselves in that situation again and again, with climate and other calamitous things where people are going to want the experts to know more than they actually do, so they can trust and rely upon what they are saying.

There's a tendency, I'd say, for experts to feel like they need to say things that sound like expertise, regardless of whether they know the topic or not. And that unfortunately fosters mistrust.

I really do think that experts would do better to say, “Look, I don't know, but here's what we're going to do to try to find out, and as soon as we do, we're going to tell you.” As opposed to saying, “It's going to take four years to develop a vaccine,” which is what most experts said. In that context, the person who says, “How about six months?” looks untrustworthy, even though the assertion of four years was purely based on historical knowledge that was not applicable to the situation we were in. For instance, one of the reasons vaccines typically took four years to develop is the long testing process. Yet during a pandemic, you can find 30,000 people to sign up for a trial in weeks. In a pandemic, you can actually go more quickly for reasons like that, without cutting regulatory corners.

MB: So experts need more humility and more nuance?

NA: The funny part is that I view science as being about the unknown, whereas most people think of science as being about the known. If you're a scientist, you're supposed to excavate the boundaries of the unknown. Science is built on hypotheses, which are essentially made-up extensions of the current art. You should never trust a hypothesis. Right? You just need to do the experiment.

MB: During the pandemic, what could you personally have done differently to help address trust concerns?

NA: Probably, we over-focused on just doing our job. Several hundred people were working essentially 24 hours a day for months and months, just trying to do a heroic act — not to be heroes, but rather, to defy the odds. And I don't know that we spent enough time thinking about how this would be accepted by societies and by governments, and what we could do to better anticipate, for example, the issue of inequity in vaccine access. That probably was a foreseeable challenge for us. But since we were a startup, and we'd never developed anything, let alone a vaccine, let alone for a pandemic, it was not in the first instance top of mind.

Had it been, we would have recognized that major governments had essentially signed contracts that made it impossible for us to ensure vaccine equity. Then we would have been able to say upfront, “We'll sign the contracts, but you have to take on the responsibility of distributing these vaccines to other places, otherwise, it's going to blow back at us.” We lost some of our brand value, unnecessarily, when we were attacked for not supplying vaccines to various low- and middle-income countries at a time when we were bound by contracts to send all our supply within the U.S. and EU.

MB: Related to that is the allegation of profiteering, that in pharma you are all being paid too much for doing this work.

NA: I view that as kind of an indictment of the entire system, not just our manifestation of it. There's a baseline amount of distrust in capitalism, the way it works. In and of itself, capitalism can be a basis of mistrust because ultimately people are giving you investment dollars in order to create breakthroughs and generate high returns for them and, in fact, for society. If the government paid for all this, and took all the risk, then it could decide how to price it and there would be no concern about that. But we raised and invested a billion dollars of private capital, before the government gave us resources for this project. And we spent it alongside the government's money.

MB: Did the pandemic make the case for changing intellectual property rules?

NA: In October 2020, before we had a vaccine that was known to work in a phase-three trial, we announced that we would not enforce our patents on mRNA used for a COVID vaccine while the pandemic lasted. Why did we do that? Nobody asked us to. There were no World Health Organization calls related to IP at the time. We just voluntarily adopted, as a matter of principle, a policy of not enforcing our patents during that period because we felt it would help us all fight the pandemic. What drove us was our concern about equity and about trust, in order to maintain our license to operate. If you're creating a totally new technology, a totally new thing, it behooves you to maintain that license to operate. In this case, however, people didn't focus on what we did for the longest time. Even the WHO came to us a year later and said, “Would you agree to make the technology available?” And we replied, “We've already promised we're not going to enforce our patents.”

MB: You gave a speech earlier this year about artificial intelligence and the way that it can really accelerate innovation. But can we trust AI?

NA: AI is a lot of different things. One of them is that machine learning AI can be quite good at pattern recognition, and can make good medical diagnoses, even though we don't quite know how these algorithms actually work. Yes, like humans, AI can make mistakes — though actually fewer mistakes than humans. But in the case of humans, we can fault the human. In the case of a machine, we don't have a human to fault. I think the lack of trust comes from that: not that the algorithm is less reliable, but that it is less accountable.

We need to separate out different aspects of AI and understand where the fear is coming from. The most recent generative AI is, by its name, generative, which means that it's creating new things. That, too, is something that humans do, creating things with made-up elements in them. It's called imagination. Yet when an AI does it, we say it's hallucinating.

I view much of AI as just augmented human intelligence. The version of AI that replaces humans is a different matter. Then accountability is the key question. Think about military use. I can assure you that the military can use AI to do phenomenal targeting, differently than it's ever been able to do. Because no human can deal with the kind of complexity of data that we're gathering. But they still want somebody to be held accountable for a decision – and rightly so.

The issue is not the trustworthiness of the technology, per se. Trust issues arise from the way it's being offered to you. Because if it's taking a human out, now you have nobody to blame.

MB: You have developed your own theory about when trust is needed, and when it isn’t.

NA: I look at trust between people and between organizations as a temporary replacement of alignment. What I mean by that is if we have time to align, to exchange facts, to share common goals, so that we can do something together, some form of cooperation, then I don't need trust. I've done the work needed to align so trust is unnecessary. If I don't have time to do that, or the means to do that, or if we can't speak the same language or have the same level of expertise on the subject needed to align, then I need trust. But that should only be temporary. If trust replaces alignment, I think that's a very dangerous thing. Because then you never try to figure out how to get aligned. You might be super misaligned, yet you're trusting each other. That situation could easily go wrong.

I find in our own work, that at the speed we go at, the number of things we have to do in parallel and the level of uncertainty we face, trust is a good thing to use in the short term. But alignment is much better to create cooperation.

The whole notion of “trust, but verify” I think stems from what I'm saying, because verify is the way you achieve after-the-fact alignment. If I need trust, temporarily, to give me enough time to go verify, then that's OK. Verification is passive alignment. Active alignment is discussion, actually sharing goals and saying, “We agree, I want to go here, you want to go here. We're going to go there. We're aligned.”

Yes, I have to trust that you're not going to change your mind. But that's a different level of trust than that when we don't even know where we're going.

MB: In this time of massive change, how do we build alignment so we don't have to rely on trust so much?

NA: For one thing, I believe these massive societal changes we are facing, or going to face, need involvement by governments, not in a dictating way, but rather, in a really collaborative way, where the goal of achieving the end result supersedes questions of who's got the power and who gets to tell who what to do.

Sure, governments can regulate and through that insinuate themselves into anything they want. But that's a coarse tool for achieving alignment. It's alignment by fear of being shut down.

The better path to alignment is what happened with Warp Speed. Nobody had to trust Moderna, nor were we trusting the government. We just had a mechanism by which we could align our interests, align the value that we would receive if we delivered, align on how we could clear out all the obstacles. We acted more confidently and were willing to take greater risks because we were aligned.

Something similar could be done to tackle challenges such as climate and food security. In climate, however, this isn’t happening because governments are unwilling to set a price for removing carbon from the atmosphere. Until they do that, you cannot achieve alignment. Innovators need certainty about what it is they're innovating towards, yet instead, in the case of carbon, they have to embrace massive uncertainty due to the possibility it's never worth much. If there were a price, there would be so much more innovation.

Mistrust arises because there's no mechanism to cause alignment. I think we can create mechanisms to do this. And we should, especially because in our world of one crisis after another, people have to eliminate mistrust to be able to fight against crises.

MB: So we need to put those alignment mechanisms in place?

NA: Yes.

Alignment is a way to cope with either a lack of trust or an absence of trust and still act. And it’s also a way to achieve trust.

Because once you are aligned, then trust is a luxury. You know you don't need it, but you brought it along anyway — like a spare tire in a car. To me, trust is faith in the other. The less you know about the other, the less the other is aligned with you, the more you are going on blind faith.



