
Transcript of Social Media’s Original Gatekeepers On Moderation’s Rise And Fall

On with Kara Swisher
00:00:00

Elon's losing his fucking mind online. It's like, really, today. Hi, everyone, from New York Magazine and the Vox Media Podcast Network. This is On with Kara Swisher, and I'm Kara Swisher. My guests today are amazing: Del Harvey, Dave Willner, and Nicole Wong, three of the original content policy and trust and safety people on the internet. Del was the 25th employee at Twitter. She started in 2008 and eventually became head of trust and safety before leaving in 2021. Dave worked at Facebook from 2008 to 2013, eventually becoming head of content policy, and he wrote the internal content rules that became Facebook's first published community standards. Nicole is a First Amendment lawyer who worked as VP and Deputy General Counsel at Google, Twitter's Legal Director for Products, and Deputy Chief Technology Officer during the Obama administration. These three were absolutely key in designing safety and content policies at social media under very difficult circumstances. It's a hugely influential, mostly invisible job that affects pretty much everyone who uses the internet and a lot of people who don't. But their efforts to make the internet safer and add some guardrails are being unwound by people like President Trump, Elon Musk, and Mark Zuckerberg.

00:01:21

This is a perfect time to go back and look at the history of trust and safety and content moderation. I'm very excited to talk to these three particular people because despite the idiocy of Elon Musk and Mark Zuckerberg, there are thoughtful people thinking through these incredibly difficult issues, not making them partisan, not reducing them and making them seem silly. They're not yelling censorship. They're not yelling about the First Amendment, which they don't know anything about. These are hard issues, and they treat them like hard and complex issues, like adults do. Others, total toddlers having tantrums. That's the way I say it. Anyway, our expert question comes from Nina Jankowicz, a disinformation researcher and the CEO of the American Sunlight Project. She herself has had a lot of experience with disinformation, including being attacked unnecessarily and unfairly. So stick around.

00:02:20

What's up, y'all? It's Kenny Beecham. On this week's episode of Small Ball, we get into maybe the wildest, craziest, most shocking week in NBA history. The trade deadline came and it did not disappoint. Some trades I loved, some I hated, and some made absolutely no sense at all. The league has been shaken up, and I'm here to break it all down with you. Man, what a time to be an NBA fan. You can watch Small Ball on YouTube or listen wherever you get your podcasts. Episodes drop every Friday.

00:02:49

This isn't your grandpa's finance podcast. It's Vivian Tu, your Rich BFF and host of the Networth & Chill podcast. This is money talk that's actually fun, actually relatable, and will actually make you money. I'm breaking down investments, side hustles, and wealth strategies. No boring spreadsheets, just real talk that will have you leveling up your financial game. With amazing guests like Glennda Baker.

00:03:09

There's never been any house that I've sold in the last 32 years that's not worth more today than it was the day that I sold it.

00:03:15

This is a money podcast that you'll actually want to listen to.

00:03:19

Follow Networth & Chill wherever you listen to podcasts. Your bank account will thank you later.

00:03:25

Hi, all. Dave, Del, and Nicole, welcome, and thanks for being on On. You three are some of my favorite people to talk to about this topic. I've talked to all of you over the years about it. You helped pioneer trust and safety on social media and created a field that hadn't existed before. I'm excited to have you together for this panel. Thank you for coming.

00:03:45

Thank you. Thanks for having us. I can't actually remember if we've actually all been on a panel together before.

00:03:49

I know. Have you? No. I don't think so. I don't think so. Well, here we go. See, history is made. Let's start with a quick rundown of where things stand today, and then we'll go to the beginning and figure out how we got here. Mark Zuckerberg recently announced that Meta is getting rid of fact checkers, replacing them with community notes. I have nothing against community notes, but they always seem to be shifting around in all their answers. They also loosened rules around certain kinds of hate speech, including hate speech aimed at LGBTQ+ people and immigrants. They're quietly getting rid of their process for identifying disinformation. I'd love to get everyone's reaction to this move, starting with Dave, since you ran content policy at Facebook until 2013. Yeah.

00:04:31

There's a few different things. I share your appreciation for community notes as an approach. I think in a lot of ways, the fact-checking part of this got front-loaded in how it was all reported. I honestly think that's a bit of a distraction from a bunch of the other parts that you touched on, which are far more important. Three seem particularly notable to me. One, it came out that they are also turning off the ranking algorithms around misinformation or potential misinformation, which is going to really change how information flows through the system. Explain what that is. They've historically tried to detect whether content might be misinformation and added that into the mix of how content shows up in people's feeds. You can think of it as changing the velocity with which certain kinds of information spread through the network. They're turning off the dampening on that. That feels to me like a much bigger deal in terms of the amount of content it affects and the amount of views that it affects than whether or not fact checks are appended to a relatively small number of stories, because of the scalability of the process.
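To make the mechanics concrete: here is a minimal sketch of the kind of classifier-driven dampening Dave is describing, where a misinformation score throttles a post's distribution rather than removing it. Every function name, weight, and threshold here is hypothetical, not Meta's actual system.

```python
# Hypothetical sketch of feed-ranking "dampening": a misinformation
# classifier's score scales a post's ranking down instead of removing it.
# All names and numbers are invented for illustration.

def rank_score(base_engagement: float, misinfo_prob: float,
               damping_strength: float = 0.8) -> float:
    """Scale a post's ranking score down as the classifier's
    estimated probability of misinformation rises."""
    penalty = 1.0 - damping_strength * misinfo_prob
    return base_engagement * max(penalty, 0.0)

# Turning the system off is equivalent to damping_strength = 0.0:
# the post ranks purely on engagement, however dubious the content.
print(rank_score(100.0, 0.9))                      # dampened: 28.0
print(rank_score(100.0, 0.9, damping_strength=0))  # undampened: 100.0
```

The point of the design, as described, is that it acts on distribution velocity across every post the classifier touches, which is why turning it off affects far more content than removing fact-check labels ever did.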

00:05:38

Just on the truth question, that feels like the much more significant change, if a little bit harder to understand from the outside. Even on top of that, though, the changes they made around hate speech, and there's two coupled ones I think are pretty significant and quite worrisome. So one, they're moving away from proactive attempts to detect whether or not things might be hate speech just across the board. They seem to be turning down those classification systems. They're justifying that by saying it's going to lead to fewer false positives. That is true. If you stop looking, you will make fewer over-removals. But it also means that particularly in more private spaces where folks are aligned around particular sets of values that are maybe not so awesome, there's not going to be really any reporting happening out of those spaces, because it's a group of people who already agree with that content talking to themselves. You can make arguments, what's the harm there? It's a group of people talking to themselves. But groups of people on Facebook talking to themselves sometimes storm the Capitol. Exactly. There are real harms that emerge not just from the speech.

00:06:50

They also made a number of changes to the actual hate speech policies themselves and very surgically, frankly, carved out the ability to attack women, gay people, trans people, and immigrants in ways that you are explicitly not allowed to attack other protected categories, and in ways that allow more speech than the company has really ever allowed at any point where it had a formalized set of rules.

00:07:19

Right. Like Christians. You can't attack Christians. Nicole?

00:07:22

Yeah, there's so much to dig into on this change. High level, it struck me that the fact-checking thing is somewhat of a red herring, because it's such a small part of the ecosystem that it's looking at. The thing you take from it is, who is the audience they're talking to? If you read Joel Kaplan's blog post, he adopts the language of censorship around content moderation that I have to assume is deliberate. The front-loading of all of this, we're not going to fact-check you, we're going to let you say the same types of things you get to say on TV and on the floor of Congress, that has an audience. To me, a lot of this is about who they've decided to try to appease. That is a huge amount of it. The other thing that struck me is the higher-level changes that they are making, which I think are more destructive than the removal of the fact-checking, which is basically the refusal to throttle certain types of content and the targeting of that content that they're going to let loose. Last time I was with you, Kara, we talked about what are the pillars of design- We talked about architecture.

00:08:36

Yeah. What are the pillars of design for social media? Personalization, engagement, and speed. They are releasing their hold on that personalization. They explicitly say, We're going to allow more personalized political content so you can be safe in your own bubble. We're going to speed it up. We're not going to stop the false statements and other vitriolic statements the way we have in the past, and we're going to let you go at it. They are pulling on all three of those pillars for what we know is rocket fuel for the worst type of content. You cannot believe that that's not deliberate.

00:09:16

It's a design decision is what you're saying. Speaking of that, let's talk about other platforms. I often say that X has turned into a Nazi porn bar, and today it really is. I have to say, Elon has really trotted out all the Nazi references today, more of them. He's doubling down on his behavior. I think he's trolling us with these fascist salutes, and now he's been tweeting Nazi-related puns right now over on X. There's obviously not anything happening there in terms of trust and safety. What's the prevailing thinking on content moderation today? Let's start with you, Del, since you were the original Twitter person here, and then Nicole and Dave.

00:09:57

I think that a lot of it depends on the platform. It's worth a little bit of a not-all-platforms caveat here. Because, yes, you have a couple of really big ones that are doing some really odd things. And also, you have a lot that aren't doing the same extremist behavior. I think that there is still very much value being ascribed to trust and safety. I think we are seeing in some part a shift toward recognizing that trust and safety is more than just content moderation. I think that's one of the most important learnings that hopefully people can take away from everything that's happening in the space, which is trust and safety starts when you're beginning to conceptualize what your product is. You can design it safely. I think that what we're seeing right now is the guardrails being pulled away from misinformation ranking, all this content that we know is extremely explosive and drives extreme amounts of engagement. That removal of guardrails at a couple of companies is making a huge fiery scene over here. And then there's a whole bunch more companies that are trekking on and, I think, trying to stay out of the crossfire.

00:11:26

Such as, if you could really quickly.

00:11:29

Well, I think you know the two big companies that are currently spiraling. And then you've got all the others, right? You've got Reddit, you've got Pinterest, you've got all the different federated sites, all these different communities that are still soldiering on.

00:11:51

Right. Nicole?

00:11:54

Del said something which I think is so important, which is content moderation comes at the end in a bunch of ways, where you have people who are outside what you want your product to be. The first choices you're making about content are really about what is the site you're on. Think about the other platforms that are not getting into the hot water we see. It's not just about their scale, although a lot of it is about their scale. It's about their focus as a site. So LinkedIn is a professional site, and you generally see them having fewer controversies. Because if you're not talking about professional information, you're not in the right place. That's not your audience. Pinterest, to me, is the same thing. You're here to get inspired about whatever it is that inspires you, home goods, shoes, fashion, whatever it is. If you're not talking about that, then you should be somewhere else. I think that a bunch of what we see for the platforms that are having trouble with this are the platforms that deliberately went out there and said, I want to be everything to everyone. It's really tough to manage that playground.

00:13:04

That, to me, is a bunch of what's behind the content moderation. I think what we've seen, certainly since I... You keep dragging me out here like the historical artifact that I am, but- Yeah, we're going to get to the early days in a second. Right. But the tools that they have, the professionalism of the teams that are doing the work, are so much better than a decade or 20 years ago. I think there's a lot that's happening in content moderation. What I think is less clear is, how does a platform position itself and design itself to be resistant to some of the worst parts of the content ecosystem?

00:13:43

I think you're absolutely right that the everything store tends to be a problem. There's porn and Twinkies. You know what I mean? Exactly. You have a lot going on.

00:13:51

If you had your aisles clearly separated and you could be aware that you were walking into the Twinkie aisle or into the porn aisle, and you were of age to access those things, that might be one situation. But that's not the situation.

00:14:06

That's correct. The Twinkies are mixed in with the porn. I got it.

00:14:09

They are in an unfortunate way.

00:14:10

Which sometimes, some would say, it is. Sometimes it is. All right, Dave.

00:14:13

But again, you need those signs. Yeah, that's it.

00:14:16

I am maybe a bit of a fatalist, but in a positive way, in that I think a lot of the need for and origin of trust and safety arises from people using these services getting into conflict and demanding resolutions to the conflicts they have. That's not going to stop being a thing, because people are not going to stop getting into silly fights on the internet, or much more serious ones, depending. You are driven, frankly, by customer demand, famously advertiser demand, but even just basic user demand, to create some amount of order, even if it's just on questions of spam and CSAM and whatever. To me, trust and safety waxes and wanes in the face of all of these things, but the overall trajectory has been towards more professionalization, more people doing this work. It's not even clear to me from Facebook's announcement if there are force-reduction implications in some of the stuff they said. I mean, to be clear, I'm not a big fan of these changes. But they did talk about adding more folks to deal with appeals, adding more folks to do cross-checks, which, great, cool, and doesn't seem employment-negative. So there's a little bit of a meta-narrative of, oh, crisis in moderation, trust and safety is going away.

00:15:35

Which I think on some level is maybe what they want.

00:15:38

Which they wanted to do. Which they want the Trump administration to think, that they're really killing all these people. Absolutely. All right. Let's go back to the aughts, because you guys are early days people, and so am I. We all go way back to the '90s, when AOL had volunteer community leaders who moderated chat rooms and message boards. We'll save that for another episode. But Nicole, you were a VP and Deputy General Counsel at Google when they bought YouTube in 2006. Saddam Hussein was executed that year, and then two related videos were published on the platform, one of his hanging, another of his corpse, and you had to decide what to do with them. This is not something you ever thought you'd have to do. I remember being in meetings at YouTube in different places about this, like, what do we do now? We thought it was all cats, and it's not, thing. Walk us through your deliberative process and talk about the principles you used at the beginning as a First Amendment lawyer working for a private company.

00:16:28

Wow, you're taking me way back. Way back, yeah.

00:16:31

How did you come to understand that this was, from the get-go, a problem?

00:16:35

Yeah, no, it absolutely was. Luckily, we had a little bit more time. A, we had the grace of being the cool new kids, and B, because it was so new, there was a little bit of buffer to make hard decisions. What I recall from that was at the time of Saddam Hussein's execution, remember, he had been captured, pulled out of a hole, and then executed by hanging. There were two videos, one of which was of the execution, the other was of his corpse. The question was, do these violate our policies around graphic violent content?

00:17:17

Or was it news?

00:17:19

Yeah. Well, was it news or was it historically important? My recollection of it, we had exceptions for content that might be violative, as in violent, but had some historical significance. There were others, like educational significance, artistic significance, that sort of thing. As I recall, the call that I made was that the actual execution was a historical moment and would have implications for the study of the period later on. But the video of his body seemed to be gratuitous. Once you know he's been executed in a certain manner by certain people in a certain context, what does the body do in terms of historical significance? We took down the one of the corpse, we kept the one of the execution. I was so much less professional than either Del or Dave's organizations at the time. That was Nicole, like, Here's my thought. Here's how we're going to make it stand. But that was the decision at the time. The decision at the time.

00:18:28

Really difficult. That's something you'd never thought about. Totally difficult, presumably. I wouldn't know what to do. Twitter initially took a very public ideological stance in favor of free speech. It did pay off, though, with a lot of press. The press dubbed the Arab Spring the Twitter revolution. In 2011, in the middle of the Arab Spring, the general manager of Twitter in the UK famously called it the free speech wing of the free speech party. It should be noted that at the time, free speech was generally considered to be more of a left-wing ideology, in a weird way. The platform has obviously undergone multiple transformations, and we'll get to that later. But how has your philosophy about trust and safety changed over that time? Talk about what you were thinking then.

00:19:08

I mean, the very first thing that we started with in terms of policies, because there really was nothing when I showed up, the first thing I was assigned was like, Can you come up with a policy for spam? Because every now and then people are encountering spam, and we don't ever think it'll be a big problem because you can choose who you follow. But we still think we should have a policy around it. I was like, Oh, it'll be a problem. Yes, I will make you a policy. Then after that, it was copyright and trademark and making sure that we had a relationship with the National Center for Missing & Exploited Children. All of the... You get your initial ducks in a row of these core functions that you need to make sure you have in place to give people a tolerable user experience. All of those are things where people have expressed needs and strong sentiments in those areas, so you start with those. Then we started expanding from there. A huge challenge for years was that we only had two options. We could leave it alone or we could suspend the whole account, which is a terrible pair of options.

00:20:22

You couldn't take it down. You couldn't just remove it.

00:20:24

You couldn't take down just the content. It had to be the whole account. Once we added the ability to act on just a single piece of content, that was such an exciting day for us. And the advances since then, there are so many possible things you can do now in trust and safety that just weren't even things we could imagine 10 years ago, I would say.

00:20:48

Dave, in 2009, five years after Facebook was founded, the same year the company first published its community standards, a controversy erupted over Holocaust denial groups on the platform. At the time, you defended Facebook's position to allow these groups and said it was a moral stance that drew on the same principles of free speech found in the Constitution. Years later, Mark said, in an interview with me, that Holocaust deniers don't mean to lie. He eventually reversed course. He had a little different take than you had, from what I could glean. I don't know what he was saying, I'll be honest with you. I thought it was muddy and ridiculous and ill-informed. But what's your stance today, and how has your thinking evolved? And talk a little bit about that, because you can see making an argument. It's like the Hyde Park example. Let them sit on the corner and yell.

00:21:31

Yeah. The initial stance on Holocaust denial, when we took it, was downstream of an intuition that we frankly weren't capable of figuring out how to reliably police what was true. I think in some ways that has borne out. I don't know that the attempts at that have gone super well. That is true. I think the intuition that gave rise to the stance was right, but there are multiple ways of getting at what is problematic about that speech that don't rest solely on the fact that it is false and commit you to being the everything fact-checking machine, which we were just deeply aware we couldn't be. You'll make a mistake. Well, there were like 250 people, and most of us had just graduated from college. We were smart enough to know that we just couldn't. So we had to adopt a we can't, or a we won't, because we couldn't.

00:22:26

Very good point.

00:22:28

I will say, over the course of my time there, and particularly since I've left, my thinking on this has been influenced a lot by a woman named Susan Benesch, who runs something called the Dangerous Speech Project, which studies the rhetorical precursors to genocides and intercommunal violence, and has done a really good job of providing really clear, explicit criteria for the kinds of rhetorical moves that preceded that violence, in a way that, to me, was usable and specific, such that you could turn it into an actual set of standards you could hope to scale. It's, I guess, been a little implicit in some of what I've said, but my obsession from very early on, and in some ways still, is this question of, Okay, we've got a stance, but can we actually do the stance? Because if we can't do it, it is in some sense misleading to take it.

00:23:23

I would also chime in and say that while I am in strong agreement that that is not the way to do it, that if you can't enforce something, you shouldn't have it as a policy, there have also been, for any number of years, any number of attempts to solve product problems by saying, We're going to write a policy to fix that. Quite frankly, I'm impressed that you managed to get them to not say, Well, we're going to do it anyway. Yeah.

00:23:55

Yeah, that's fair. No, that's totally fair. And the world is... I think Zuck's public stance on founding the company because of the Iraq War seems a little bit revisionist to me. I wasn't there, but that wasn't what I heard. But it is true that it was founded in the shadow of the Iraq War, and to your point about freedom of expression being a liberal value, there was definitely a punk rock, American Idiot vibe around being asked to take things down. But also, I don't know, I was a child and we didn't know what we were doing. I have learned several things over the course of the last 20 years.

00:24:34

Which Zuckerberg would never admit. He would never admit he didn't know what he was doing. That is something that would never come out of his mouth. But just so you know, the company was founded on rating girls' hotness in college. But okay, fine. Iraq War. I'll go with the Iraq War.

00:24:49

Masculine energy.

00:24:50

Masculine energy. That's what they call that. That's right. We're back to the same place. Everyone was like, Are you surprised? I was like, No, this is what he did. He's a deeply insecure young man who became a deeply insecure 40-year-old. But when you talk about the moral stance, it was the idea that we should be able to tolerate negative speech, which is a long-held American thing, the Nazis in Skokie. But it turns into something where people game the system, and you allow what you were talking about, which is the precursors to actual violence, where speech is the precursor.

00:25:24

Yes, that's absolutely right. Some of that was literally a question of academic work happening, or some combination of it happening and us becoming aware of it, to have a framework where we could really figure it out. This is all going to sound obvious now, because it's one of those ideas that when you hear it, it's obviously correct. But there are these rhetorical moves you can make that dehumanize people and serve to authorize violence against them, not by directly inciting violence or calling for violence or threatening violence, but by implying that they are less than human, that they are other, that they are filthy, that they are a threat, that they are liars about their own persecution, that serve to make violent action okay. I think we're seeing some of that now. To circle back to your first question, part of the reason I found the recent changes so disturbing is they are designed to carve out things like claims that people are mentally ill, which is like down-the-middle dehumanizing speech that obviously fits into this category.

00:26:29

Like using the word it for trans people. Yeah. We'll be back in a minute. Support for this show comes from NerdWallet. Listeners, a new year is finally here, and if you're anything like me, you've got a lot on your plate. New habits to build, travel plans to make, recipes to perfect. Good thing our sponsor, NerdWallet, is here to take one thing off your plate: finding the best financial products. Introducing NerdWallet's Best-Of Awards list, your shortcut to the best credit cards, savings accounts, and more. The nerds at NerdWallet have done the work for you, researching and reviewing over 1,100 financial products to bring you only the best of the best. Looking for a balance transfer card with 0% APR? They've got a winner for that. How about a bank account with a top rate to hit your savings goals? They've got a winner for that, too. Now you can know you're getting the best financial products for your specific needs without having to do all that research by yourself. So let NerdWallet do the heavy lifting for your finances this year and head over to their 2025 Best-Of Awards at nerdwallet.com/awards to find the best financial products today.

00:27:42

The Republicans have been saying lots of things. Just yesterday, their leader said he wants to own Gaza.

00:27:49

The US will take over the Gaza Strip, and we will do a job with it, too.

00:27:54

We'll own it. On Monday, the Secretary of State said an entire federal agency was insubordinate.

00:28:01

USAID, in particular. They refuse to tell us anything.

00:28:03

We won't tell you what the money is going to, where the money is for, who has it.

00:28:06

Over the weekend, Vice President Elon Musk, the richest man on Earth, tweeted about the same agency that gives money to the worst people on Earth?

00:28:15

We spent the weekend feeding USAID into the wood chipper.

00:28:19

Could have gone to some great parties. Did that instead. But what have the Democrats been saying? People are aroused. I haven't seen people so aroused in a very, very long time. That's a weird way to put it, Senator. We're going to ask what exactly is the Democrats' strategy to push back on Republicans on Today, Explained.

00:28:43

Hey, this is Peter Kafka. I'm the host of Channels, a podcast about technology and media.

00:28:48

Maybe you've noticed that a lot of people are investing a lot of money trying to encourage you to bet on sports right now, right from your phone. That is a huge change, and it's happened so fast that most of us haven't spent much time thinking about what it means and if it's a good thing. But Michael Lewis, that's the guy who wrote Moneyball and The Big Short and Liar's Poker, has been thinking a lot about it, and he tells me that he's pretty worried. I mean, there was never a delivery mechanism for cigarettes as efficient as the phone is for delivering the gambling apps.

00:29:21

It's like the world has created less and less friction for the behavior when what it needs is more and more.

00:29:27

You can hear my chat with Michael Lewis right now on Channels, wherever you get your podcasts.

00:29:35

Let's go on to our favorite time, the Gamergate controversy. It was a harassment campaign. I know. It's just been one panoply of horror. It was aimed at women in the video game industry and included doxing and death threats. It happened in 2014. I recall it extremely well. In some ways, it was the birth of a loose movement of very angry and very online young men that morphed into the alt-right. Del, Twitter got a lot of negative press that came from Gamergate, because Twitter, along with Reddit and 4chan, had relatively little content moderation. There was harassment on the site. Walk us through the controversy and how the aftermath led to changes in how you approach trust and safety. You went from being that very free speech heavy company to eventually focusing on brand safety, which is important to product, as you just noted.

00:30:21

I would say that what you saw was reflective of, in many ways, the company's investment in trust and safety and whether or not there were the tools and actual functionality to do certain jobs. Because the same way that certain policies may not have existed at Facebook because there was no way to operationalize them, similar ones certainly existed at Twitter, in terms of, it sure would be nice if we could do X, but there's no feasible way to do that, and so we can't. If we try to, we're going to set ourselves and people up for failure on it. I think that what you've seen is, and this goes back to what Nicole mentioned earlier about content moderation, you're late in the process when it's gotten to content moderation. Ideally... Once you're at content moderation, someone has generally experienced a harm and you're trying to mitigate that harm. Whereas if we look at things like proactive monitoring, or designing your product not to have some of these vectors for abuse, or even educational inputs for people about what to expect, hey, it's likely a scam if this. All of those things come before content moderation and have a much higher likelihood of impact and ripple effect.

00:31:48

I think what you have seen is a slowly growing awareness that the earlier we can intervene, the earlier we can build in these components, the better. There's this slow growth of, Oh, we should do more of that. I think that's the biggest shift since the beginning of Gamergate: more of a panoply of options for actioning, along with more cognizance around needing to figure out the point of origin.

00:32:19

Consequences. The point of origin. Anticipating consequences. Let's talk about that. In 2017, Myanmar's army committed genocide against the Muslim Rohingya. They raped and killed thousands and drove 700,000 ethnic Rohingya into neighboring Bangladesh. In the run-up to the genocide, and I was right in the middle of this, Facebook had amplified dehumanizing hate speech against the Rohingya. They did it despite the fact that Facebook had been implicated in anti-Muslim violence in Myanmar a few years before. This is when it really started to get noticed. How much responsibility would you assign to something like Facebook, and what should have been done differently? I'm going to start with you, Nicole. And Dave, if you want to jump in, you can, too. But when this happened, the direct responsibility was pretty clear. But you could say, I'm a phone, and you can't blame a phone for people organizing, for example.

00:33:10

Yeah. I want to go back a little bit to Gamergate, but I'll connect it up to this part. Because what I recall about Gamergate, and which I think changed the trajectory of some of the content policies that we ended up doing, is that academics like Danielle Citron connected the dots between harassment and the suppression of speech. Harassment, not just being, you're being mean to me, which has always existed on the internet, but that it is an incursion on someone else's rights, and particularly those who are least able to bear it, who have the weakest voices. That, to me, was where Gamergate was like, Oh, we actually should not just allow all the speech, because that speech is suppressing speech. That connects for me into how we handle things like minority voices, like the Rohingya, who may not even be on the service but are being harassed. Their rights in some ways are being taken away. Dave will be able to speak to this better, about how Facebook decided to handle it. I think that a bunch of it has to do with the design of your ability to detect and your policies about when you intervene.

00:34:34

Those are hard. Those are always hard. Because I wasn't at Facebook or WhatsApp, I don't actually know specifically the conversations they were having about how to balance out when you see the harm, whether you have the right detections for it, and what is the correct position of the company to intervene in what may start as small private conversations. There was clearly a moment where it became broadcast, where it became about propaganda and pushing people in a certain direction that was very, very toxic and harmful and had terrible consequences on the ground. Then the question is, what is a company sitting in the US, what is their obligation to investigate, to get involved, to send in help?

00:35:26

Dave, you left Facebook in 2013. Then it moved to the 2016 election, where Facebook was pilloried for Cambridge Analytica, spreading fake news, the Russian propaganda, creating news bubbles and media silos. Talk a little bit about the difficulty of being the world's social media police, I guess, is what I'm thinking about. Then, of course, it got blamed for the election of Donald Trump. That's where it led to. What do you imagine is where you needed to be then?

00:36:00

Yeah. I think there's a lot of question in that question and in everything we've said so far. I do think that platforms have a responsibility to intervene that arises out of the fact that they have the ability to intervene. This is where the phone analogy falls down for me. The technology that we're monitoring, yes, it is a communications technology, and also it does not work identically to all prior communications technologies. Those design choices change things in an almost existential way. It's like, well, too bad, you're here now, figure out the meaning of your life. Too bad, you have this product now. It creates responsibilities through its design. You don't get to accept, in my view, the upside of those design choices from a growth and monetization possibility point of view without inheriting some of the downside. I think they're linked, personally. I don't have a formula for it that could become a regulation about how that responsibility should work, but that is where I have netted out on this entire thing. I do, though, think that, returning to the point earlier about general purpose communities, that leaves you in a very difficult position for sites that are aspiring to be an undifferentiated place for everybody in the whole world to talk to each other.

00:37:26

We don't all agree as humans, and we don't agree to the level of real violence all over the world. It is possibly the case that the building of that space is, at best, a little bit of: you've accepted the Ring of Power and now have to go find a volcano to throw it into while the Ringwraiths are trying to kill you. That might actually be the best you can end up with in that design choice. Whereas if you are a Reddit or a Discord, everything has a context attached to it, which narrows the problem to something that feels…

00:38:06

Manageable.

00:38:06

Well, more possible to not definitely end up hated by everyone, which I think is what you're doomed to otherwise.

00:38:13

I also think a bunch of the companies, the ones that I was at, were like, It's the internet. Anyone can access us. We forgave ourselves for not having people on the ground to understand where we were, because we're not offering advertising there, we have no people on the ground. Just because they pick it up doesn't make it our responsibility to serve them. There was a bunch of that, and I think the Rohingya moment changed that and said, actually, the very fact that you are being accessed, and you can see the numbers, imposes the obligation on you.

00:38:49

The algorithmic amplification is what turbocharges it.

00:38:52

Absolutely.

00:38:53

Right. Okay. So, Del, one of the things in the aftermath of this, including the election and then the COVID pandemic, where Biden said Facebook was killing people by allowing vaccine misinformation, though he later walked it back. Trump himself, obviously a fountain of disinformation. There was a period, I think we can call it peak content moderation, right? And some of Trump's tweets got flagged. The New York Post reporting on Hunter Biden was suppressed. And after January 6, Trump got kicked off of social media platforms. Am I correct, you were the one who actually kicked him off? Is that you, or all of you as a group? Yeah.

00:39:29

Well, it was a group decision. I didn't go out there and just YOLO my way into it that day. But it was something where we looked at there being these violations on the sixth, where we said, if there are any additional violations of any kind, we're going to treat that as suspension-worthy. And on the eighth, a couple of days later, there were a series of tweets that ended in what was taken as a dog whistle by a number of Trump's followers at the time: I will not be attending the inauguration. And that, like, here's a target that I won't be at, was how it was interpreted by any number of people responding to him. And that was, I think we actually published the underlying thinking, and that was the bridge too far.

00:40:20

Bridge too far. I had been saying, he keeps doing it, when are you going to? I said it to Facebook: If he keeps violating it and you don't do anything, why do you have a law in the first place, essentially? In October of 2019, I wrote this, which was something interesting, and I'm going to read it to you. It so happens that in recent weeks, including at a fancy-pants Washington dinner party this past week, I have been testing my companions with a hypothetical scenario. My premise has been to ask what Twitter management should do if Mr. Trump loses the 2020 election and tweets inaccurately the next day that there had been widespread fraud, and moreover, that people should rise up in armed insurrection to keep him in office. Most people I have posed this question to have the same response: Throw Mr. Trump off Twitter for inciting violence. A few said that he should only be temporarily suspended to quell any unrest. Very few said he should be allowed to continue to use the service without repercussions if he was no longer President. One high-level government official asked me what I would do.

00:41:15

My answer: I never would have let it get this bad to begin with. Now, I wrote that in 2019, and I got a call from your bosses, Del, not you, saying I was irresponsible for even imagining that. And how dare I, essentially. But talk about how difficult it is to anticipate, even though I clearly did.

00:41:35

Well, I would note, again, you didn't get the call from me.

00:41:38

You didn't call me. You didn't. It was one of... You know who it was. Anyway.

00:41:42

I do know who it was, and my point is, I think you're looking at... By this point, we're already seeing some ideological shifts in people's outlooks on how they wanted to handle content. We're seeing pushback. We saw pushback on labeling content as misinformation. In fact, part of the pushback we got at one point was, we were talking about how there's some misinformation that is actually so egregious it merits removal, as opposed to simply labeling it as misinformation. And that's because there are some types of misinformation where even if you label it, this is misinformation, people are like, that proves it's true. And it was really difficult to frame that in such a way, because there was this, well, why wouldn't they just believe the misinfo label? There were all these conversations where we're like, but people aren't... You might work that way, but other people don't work that way.

00:42:44

Elon bought Twitter in October of 2022, and he quickly started reinstating people who had been banned: obviously Trump, also Andrew Tate, Laura Loomer, Nick Fuentes, white supremacists whose names we may not know. One of the top priorities was reinstating the Babylon Bee, which had started this whole thing, a satirical Christian site. It was taken off of Twitter when it misgendered Rachel Levine, who was then Assistant Secretary for Health, and called her Man of the Year. The right wing, obviously, is obsessed with trans people, and they've done a very effective job of dehumanizing and scapegoating them. But satire does have its place, and as I said before, I thought Twitter was heavy-handed in this case. Now, looking back, how do you think about Twitter's policies around something like that? Did you expect there to be so much resistance, I guess, in that regard, given the topics and Elon's obsession with trolling people?

00:43:36

I am perhaps not surprised by the degree of response. Also, our policies existed and were clear, and the responses that that tweet was getting were all further dehumanizing. At one point, there was this idea that the best answer to bad speech is more good speech. The best answer to bad speech is not lots more bad speech agreeing with it.

00:44:06

That's a good point.

00:44:07

When you have something that violates our policies, pretty clear-cut, and is doing so on the heels of a lot of other people making the same joke and targeting this individual, it turns into, like, yeah, this is pretty clearly a policy violation. We're going to take action on it. Yes, that upset some people. You know what? I'm sorry that upsets you. I think it probably upsets people who are trans more that you feel like they don't deserve to exist.

00:44:38

Right. But nonetheless, it led to Elon buying Twitter. I think it's one of the biggest reasons. He called me obsessively about it. I can tell you, there were a number of things he called me obsessively about. This one really bothered him. Looking back, it was the tip of the spear in the conservative fight against content moderation. The GOP took back the House shortly after, as Jim Jordan began using the House Judiciary Committee to investigate the Biden administration's so-called censorship regime, supposed anticonservative censorship and bias in tech, and Stephen Miller's legal organization began suing disinformation researchers. From a conservative point of view, content moderation was an attempt to impose progressive values on Americans. They think they're just undoing the damage. Nicole, you worked in government. Putting aside, obviously, bad faith arguments, of which this is just littered with them, is there any point to be made here about these companies, which are private, going too far?

00:45:28

There are so many points. As you were recounting that history, it strikes me, the acquisition of these platforms by people like Elon Musk, this very top-down drive of what is that platform for. It strikes me that there has been a transition from believing, when we started, that these were communication platforms that are intended to democratize the way that we communicate with each other, to let small voices that were blocked out by mainstream media rise so that we would hear from a wider panoply of people and allow them to communicate with each other. That's not what these policies are for right now. These policies are about creating a bullhorn. Who they are trying to attract to their services is very specific, and it is not about cross-communication and global understanding. It is about a propaganda machine. That's correct. To me, that is a really different goal, and the policies just follow from that. If we want the other internet that we started with, we have to change the goal. That is a change of ownership, apparently.

00:46:47

That leads to a question. Each episode, we get an expert to send us a question. Let's hear this one. Hi, I'm Nina Jankowicz, a disinformation researcher and the CEO of the American Sunlight Project, a nonprofit committed to increasing the cost of lies that undermine democracy.

00:47:03

The big question I would ask is, with a consolidating broligarchy between tech execs and Trump in the US, and online safety regulation on the rise in places like Europe, the UK, and Australia, how are tech platforms going to reconcile the wildly different regulatory environments around the world?

00:47:24

Dave, why don't you start with this one? Nina obviously underwent a great deal of attacks, propaganda, largely unfair. But this idea of a consolidating broligarchy, and these owners who aren't going to give up these platforms by any means, and then you have online safety regulation elsewhere.

00:47:42

I think we're in a very interesting situation where it seems to me, looking at them, that they don't know the answer to the question either. That question presumed they had a plan. I'm not sure they do. I don't know at all, but it doesn't seem to me like Elon necessarily makes plans. Whatever it is that Facebook's gambit is here seems to basically be a bet that maybe Trump will be mean to Europe for them, and hopefully then somehow they won't have to do this, which feels, I don't know, I'm not convinced that the EU is going to think that's cool and totally go with it, but who knows? It does feel a little bit like a bet on actually splintering this further and trying to use American economic power to put pressure on people to back off them. At least that seems to be my view of Facebook's theory embedded in what they've done. I'm not at all convinced that that's going to work, because this becomes a pretty core sovereignty and power issue, and linking it to government pressure that way makes that actually more true. I don't know. Maybe we see a splinternet, maybe we see things increasingly blocked, maybe we see the use of AI technologies, which I do think are going to change moderation in ways that are going to be somewhat helpful to the level of flexibility we have, end up with very different versions of Facebook functionally being available, or Twitter functionally being available, to people in different parts of the world.

00:49:15

I don't know. I think it'll be some combination of those things.

00:49:19

That would just be a profoundly reckless way of understanding how they exist in the world, though, right? These are companies that have people on the ground in these countries, who are subject to the laws of those countries, who have users on the ground. It strikes me as enormously short-sighted about their ability to continue as a business if they think they're going to blow off the rest of the world.

00:49:42

This is why, from the get-go, this set of announcements has felt weirdly panicky and irrational to me. That's why. I don't understand what the plan is here beyond, like, 2026.

00:50:03

We'll be back in a minute. So, a couple more questions. I recently interviewed Yann LeCun, Meta's chief AI scientist. He says AI has made content moderation more effective, as you just said, Dave. Del, do you agree? You've spoken about how trust and safety are perpetually under-resourced. I know they are. Do you think that AI gives them the tools to do their job better, assuming the people running the platform want to effectively moderate content in the first place? I know Mark went on to me 10 years ago about AI fixing everything, but go ahead.

00:50:42

Assuming that you are using AI to help with scale and you still have humans in the loop to make sure that it hasn't gone wildly awry? Like, yes, please. Absolutely. We have been begging for tools for years, and AI is a tool like any other. It depends on how you use it. If you deploy it carelessly, then it's going to cause problems. But a lot of what Dave has actually been working on- Yeah, I'm going to ask him that next. -is in this space. Well, let me just lead into it.

00:51:12

Okay, good. Thank you.

00:51:12

Dave has been doing some really excellent work in this space, so I just want to shout him out.

00:51:16

Okay. So, Dave, generative AI is the new frontier when it comes to the issues you've been talking about. We want all the trust and safety in AI, but it's hard to trust the technology. It sometimes hallucinates, and then there are other issues, like Character.AI, which has shown AI has the potential to be very unsafe. I recently interviewed the mother who alleges her teenage boy took his own life after having started a secret relationship with an AI chatbot. It's a very compelling story. Dave, you worked on safety at OpenAI and Anthropic, and you're doing your own thing now. What does safety look like for AI? Can you go into it more? Do you think it'll end up being safer or more dangerous and corrosive? Well, could it be more corrosive? I don't know.

00:51:53

I was going to say, unfortunately, challenge accepted. Some parts of it are very similar, and other parts of it are very different. The set of interventions you have around your AI chatbot are a superset of the ones you have for content moderation. You have monitoring of inputs, what people are writing to the chatbot or trying to post. You have monitoring of outputs, like what the chatbot says back. You have all the different ways of going about that, whether that's flagging algorithmically or human intervention or a combination of those things. But you also, in the context of the AI chatbots, do have the ability to try to train the models themselves to behave in more prosocial ways, or more the way you want them to.

00:52:39

That's that woke AI Elon keeps talking about.

00:52:41

I'm teasing. Well, it's any AI, woke AI or racist AI. I'm here to ruin all the fun. This is what I do professionally. But you do have that level of intervention. If you think of the AI as a participant in a conversation in a chatbot product, the alternative is actually two users having that conversation, where you don't have any say in what either of them wants to try to do. In some sense, at least in my view, single-person interactive chatbot services, in theory, once everybody gets good at this, and there's a problem here of deploying the technology before we've gotten good at it, should be something that we can actually make more safe, because you have all the same points of intervention plus other ones that are not perfect, but add another layer of safety. Add another layer of cheese to the Swiss cheese.
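A minimal sketch of the layered, Swiss-cheese setup Dave describes for a chatbot: screen the input, generate from a model that has itself been trained toward the behavior you want, then screen the output. The function names, threshold, and refusal message below are hypothetical stand-ins, not any real product's API.

```python
# Hypothetical sketch of layered chatbot safety: each layer is imperfect
# on its own, but stacking them catches more. classify_risk() and
# generate() are placeholders for real classifiers and models.

def classify_risk(text: str) -> float:
    """Placeholder safety classifier returning a 0-1 risk score."""
    return 0.0  # assume benign in this sketch

def generate(prompt: str) -> str:
    """Placeholder for a model already trained toward prosocial behavior."""
    return "model reply"

def safe_chat(user_message: str, threshold: float = 0.8) -> str:
    if classify_risk(user_message) > threshold:   # layer 1: input filter
        return "Sorry, I can't help with that."
    reply = generate(user_message)                # layer 2: model training
    if classify_risk(reply) > threshold:          # layer 3: output filter
        return "Sorry, I can't help with that."
    return reply
```

The design point is the superset Dave mentions: a user-to-user platform only gets the two filter layers, while a chatbot operator also controls the participant in the middle.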

00:53:31

I have two more very quick questions, one for Nicole and then one for all of you. We talk most about YouTube, X/Twitter, and Meta, since this is where the three of you worked, but TikTok is the elephant in the room. It may or may not be banned. I have some thoughts on that. I'm not going to go into them. It may or may not be used for Chinese propaganda. Elon may or may not end up owning it. There's all kinds of ways. But I have thought that, and Trump just said it today, I said he's going to give it to Elon, and Trump just said, I'm thinking of giving it to Elon. Let's just say Elon does end up controlling TikTok. Nicole, game out any consequences for us if it happens. He obviously has links in China that are problematic, including his factories and his car sales, all kinds of relationships there, questions about his conflicts of interest. Thoughts on where that's going?

00:54:18

TikTok is such a dumpster fire of an issue, both at a policy and a technical level. I think there's nothing about his ownership of X that indicates it's going to be a healthy environment. To the extent we wanted to ban TikTok because we thought it would be unhealthy for Americans to be on it, that doesn't strike me as something that's going to get better just because Elon has taken it over in the US. I'm probably going to get myself in so much trouble for this comment. That's okay.

00:54:47

I say worse.

00:54:48

The ban itself was so poorly thought through and handled. If we want to solve foreign-owned apps on our phones as a security issue, let's have that conversation, but have it broadly, not just about TikTok. If we want to have a conversation about propaganda and misinformation spreading on social media, let's have that conversation, but not just about TikTok. There's a whole bunch of ways we could try to tackle the surveillance and collection of US persons' information. Let's pass a federal comprehensive privacy law and stop having this stupid conversation about TikTok. To me, the TikTok thing, I don't know where it's going to end up, but we're not going to avoid the social conversation we actually need to have that keeps us safe. All right.

00:55:40

Nonetheless, we're having it, unfortunately. Elon Musk, Sundar Pichai, Mark Zuckerberg, Sam Altman, and TikTok CEO Shou Chew were all honored guests at the inauguration, including also Tim Cook. TikTok explicitly thanked Trump for helping restore it, even though it's not restored, because Apple and Google are declining to let it be downloaded, because they understand there's a law and they need to follow it, and also the Supreme Court said so. Some users have reported that TikTok is hiding anti-Trump content, but we'll see if that's actually the case. Either way, it raises the possibility that some of the most influential communications platforms that drive our culture are in the hands of oligarchs. They don't like that word, it hurts Mark's feelings. I'm sorry, Mark, but that's what you are. They have aligned themselves with Trump. What are the implications of this new power dynamic between a president like Trump and social media platforms? And what do you expect to see in the next few months and years? So Del, you go first, then Dave, and then Nicole.

00:56:33

I think that we are most likely going to see some period of time where everybody goes, no, look, everything's fine still. Everything's totally fine. And then things are going to crash and burn.

00:56:47

How so?

00:56:49

It's going to start with, all of a sudden, these marginalized groups don't have protections anymore, and they start getting targeted more. And maybe they try to counter it with good speech, or defending themselves, or what have you. But eventually, when the content that's attacking them keeps getting upranked and surfaced algorithmically, they're going to stop pushing back. They're going to leave. They're going to go elsewhere. Then they're going to essentially have had their speech chilled. There are only so many people who they can appeal to in terms of this pro-fascism, anti-woke, United States number one opinion of things. The EU is just not going to be chill with this at all. So there's multiple different ways that this could end up in a giant fireball, but it feels like at least one of them is pretty inevitable. Then we will come in and we will clean it up, like we do, and we will go back to trying to make things right again, because that's what we do.

00:58:02

All right. Dave?

00:58:04

Yeah, I'd agree with that. Again, this gets to the, not acceptance, but fatalism about the journey of things. I used to say to my teams at Airbnb that the question is not where we end up on this, it's how stupid the journey has to be. I think we're a little bit doing that. That's not to dismiss it, because a lot of people are going to get hurt by the stupidity of the journey, which is a tragedy.

00:58:30

Which you noted in your wonderful thread on that.

00:58:34

But it is to say, don't despair, because these pressures simply exist. I think if you run these sorts of platforms, there are a lot of decisions you don't really get to make; it just seems like you get to make them. Then you encounter the forces that press you in particular directions, and you are either worn away or you reach a state of acceptance and understand the business you're actually in. Sometimes we have to go on a finding-out journey around this stuff. I don't have a prediction about exactly how it falls apart. But a thought occurred to me earlier, when Nicole was talking, that in some ways we're seeing social media really become media. One way this potentially develops is that these are all cable networks now, because they're more broadcast. Very good point. And you have the segregation into your MSNBC Bluesky or your CNN Threads for normies. Just like CNN, Threads wants to be Fox News, because that's the cool one that everybody uses a lot. You may see segregation in that regard, which I don't love, but it is a way of resolving the social context problem in some ways, though it also makes these things much more propagandistic.

00:59:51

I do agree, though, with Del, that at the more extreme edges of this, there is a conflation between the fact that Elon is wealthier now because buying Twitter and setting it on fire was strategically advantageous to his broader portfolio. That's correct. Which is different from whether this worked out well for Twitter as a product, where the answer is very obviously no.

01:00:16

Yeah, no, he didn't care. Right.

01:00:18

It wasn't the goal. So his broader strategy is successful. But insofar as you view the platforms themselves, they created a vacuum, which created Threads and powered the rise of Bluesky. There was a homeostatic reaction, and it seems to be continuing, now starting to roll up some of Meta's products. It's not the case that there hasn't been backlash. It just hasn't resulted in a cathartic outcome in the bigger picture.

01:00:42

As yet. You're absolutely correct about why he bought it. Actually, Mark even pointed it out to me the day he bought it. He said it had nothing to do with the platform; it had everything to do with the influence he would later have, which was interesting. Nicole, finish up.

01:00:54

You're coming to me in a tough week, after the first days of executive orders. As for what is going to happen on social media, I think we've already been seeing it. I sit on Bluesky, and so every time there's an X thing, there's a surge in the Bluesky numbers. I think it's likely we see people dispersing to find the healthiest place for them to be, where they can find their people and the conversations they want to have. I worry that none of the platforms rise to the moment, and what we end up doing is sitting in small text groups on Signal, which is candidly what I've been doing for the last six months; we make our world very small. Setting aside the role of social media, I think there's a bigger problem with Trump, his closeness to social media, and the so-far not very distinguished work of the mainstream media in holding them accountable. If we believe that social media is one place people get their information, but that amplification really happens when it hits mainstream media, then we are not getting what we need in terms of a trustworthy source of information.

01:02:21

People are going to seek those trustworthy sources of information. It may end up being that it has to be in our small text groups, because I'm not sure of the trajectory of where we're going to find it, but people are going to look for it. And so the question is, who's going to rise to the occasion for that?

01:02:36

Right. And with all this data sloshing everywhere, I do think people are going to get smaller. You're right, I think the dissipation is really important to think about. And architecture is one piece of it. You don't realize the impact you had when you talked to me about architecture around these things: how you make something is how they find it. And I'm going to read to you, actually, since it's odd that you said that. I just wrote an afterword to my book, Burn Book, in which I quote Paul Virilio, a French philosopher who talks about these things. He was being interviewed, and I'll read this to you just for a very quick reaction: do you think it's a good end or a bad end? Paul Virilio once talked about technology embedded in our lives via a science fiction short story in which a camera has been invented that can be carried by flakes of snow. Cameras are inseminated into artificial snow, which is dropped by planes. When the snow falls, there are eyes everywhere. There is no blind spot left. The interviewer then asked the single best question I've ever heard, and I wish I had the talent to ask it of the many tech leaders I have known over three decades.

01:03:41

But what shall we dream of when everything becomes visible? And from Virilio, the best answer: we'll dream of being blind. It's not the worst idea. Do you think it is? What shall we dream of when everything becomes visible in the way it has? Each of you, last question. Del?

01:03:58

I'll take a stab at it and say, I would wish, once everything has become visible, to be able to identify the things that are meaningful.

01:04:12

Great answer. Nicole?

01:04:15

I think that's such a terrific answer. I had a similar... Sometimes seeing everything is overwhelming, right? So you need to know what makes the hate and the misinformation, and what makes things worthwhile and meaningful and permits progress, and distill that part of it.

01:04:34

Dave, last answer.

01:04:36

I mean, my flippant reaction is, we'll dream of going outside and touching real grass.

01:04:41

You're a grass toucher. I knew it.

01:04:43

No, but in the sense that I don't think we are fitted for that world. And so the dreams will be dreams of escape, whether those are withdrawing to smaller spaces or wishing that somehow the truth was less painful and was understood as meaningful or wishing to be invisible, which was where my mind immediately went when you asked the question. It's going to be a dream of escape because we're not, I don't think, prepared for that much awareness.

01:05:11

That's absolutely true. Well, thank you for all your efforts in trying to help us get through that. I really appreciate each one of you and your thoughtfulness. Sometimes tech leaders can seem so dumb, but the people who work for them are not. The people who work for them think a lot, and think hard, about these issues. I wanted to shine a light on that, and I appreciate it. Thank you so much.

01:05:31

Thank you. Thank you.

01:05:32

Thank you so much.

01:05:38

On with Kara Swisher is produced by Cristian Castro Rossel, Kateri Yochum, Jolie Myers, Megan Burney, and Kaelyn Lynch. Nishat Kurwa is Vox Media's executive producer of audio. Special thanks to Kate Gallagher. Our engineers are Rick Kwan and Fernando Arruda, and our theme music is by Trackademics. If you're already following the show, you must be chock-full of masculine energy. If not, go outside and touch some grass. Then, wherever you listen to podcasts, search for On with Kara Swisher and hit follow. Thanks for listening to On with Kara Swisher from New York magazine, the Vox Media Podcast Network, and us. We'll be back on Thursday with more.

AI Transcription provided by HappyScribe
Episode description

Since the inception of social media, content moderation has been hotly debated by CEOs, politicians, and, of course, among the gatekeepers themselves: the trust and safety officers. And it’s been a roller coaster ride — from an early hands-off approach, to bans and oversight boards, to the current rollback and “community notes” we’re seeing from big guns like Meta, X, and YouTube.
So how do the folks who wrote the early rules of the road look at what’s happening now in content moderation? And what impact will it have on the trust and safety of the platforms over the long term? This week, Kara speaks with Del Harvey, former head of Trust and Safety at Twitter (2008-2021); Dave Willner, former head of Content Policy at Facebook (2010-2013); and Nicole Wong, a First Amendment lawyer, former VP and deputy general counsel at Google (2004-2011), Twitter's legal director of product (2012-2013), and deputy chief technology officer during the Obama administration (2013-2014).
Questions? Comments? Email us at on@voxmedia.com or find us on Instagram and TikTok @onwithkaraswisher
Learn more about your ad choices. Visit podcastchoices.com/adchoices