It has been a difficult year for Facebook. The social networking giant was embroiled in controversies over fake news, electoral interference, privacy violations, and a broad backlash against smartphone addiction. Wall Street has noticed: the company has lost almost $100 billion in market value in recent weeks.
Behind Facebook's arduous year is the collision between the company's values, ambitions, business model, and staggering scale. Mark Zuckerberg, Facebook's founder, has long argued that the company's mission is to make the world more open and connected, on the assumption that a more open and connected world is a better world. That assumption has been sorely tested over the past year. As we have seen, a more open world can make it easier for governments to undermine one another's elections from afar; a more connected world can make it easier for hatred and violence to spread.
In 2017, Facebook passed 2 billion monthly users, and that's without counting the massive user bases of Facebook-owned properties such as Instagram and WhatsApp. There is no way to track, or even understand, everything happening on Facebook at any given moment. Problems that look small in the moment, such as the organized disinformation campaigns mounted by Russia, are revealed, in retrospect, as massive events, possibly even world-changing ones.
I spoke with Zuckerberg on Friday about the state of his company, the implications of its global influence and how he sees the problems that lie ahead.
"I think we'll dig through this hole, but it will take a few years," Zuckerberg said. "I wish I could solve all these problems in three months or six months, but I think the reality is that solving some of these questions will take a longer period of time."
But what happens then? What has this past year meant for the future of Facebook? In a 2017 manifesto, Zuckerberg argued that Facebook would help humanity take the "next step" by becoming "the social infrastructure" for a truly global community.
Surprisingly, Facebook's scale makes this a plausible vision. But it comes with a dark side: has Facebook become too big to manage, and too dangerous when it fails? Should the most important social infrastructure in the global community be managed by a single company based in Northern California? And does Zuckerberg's optimism about human nature and the benefits of a connected world make it harder for him to see the damage Facebook can cause?
The complete conversation with Zuckerberg can be heard on my podcast, The Ezra Klein Show. A transcript, slightly edited for length and clarity, follows.
I want to start with something you said recently in an interview, which is that Facebook now looks more like a government than a traditional company. Can you expand on that idea?
Sure. People share a lot of content and, sometimes, there are disputes among people about whether that content is acceptable: whether it is hate speech or valid political speech; whether an organization is a hate group or a terrorist organization, or whether it is expressing a reasonable point of view.
I think that, more than many other companies, we are in a position where we have to adjudicate those kinds of disputes between different members of our community. And to do that, we have had to develop a whole set of policies and governance around how that works.
But I think it's actually one of the most interesting philosophical questions we face. With a community of more than 2 billion people all over the world, in every different country, where there are very different social and cultural norms, it is not clear to me that we, sitting in an office here in California, are best placed to always determine what the policies should be for people around the world. And I've been working on and reflecting on this: how can we set up a more democratic or community-oriented process that reflects the values of people around the world?
That is one of the things I really think we need to get right. Because I'm not sure the current state is great.
I'd love to hear more of your thinking on that, because when Facebook gets something wrong, the consequences are on the scale of a government getting something wrong. Elections can lose legitimacy, or ethnic violence can arise.
I wonder: has Facebook become too big, too vast, and too consequential for normal corporate governance structures, and for the normal incentives of private companies?
We are continually thinking about this. As the internet reaches a broader scale and some of these services reach a larger scale than anything before them, we are constantly faced with new challenges. I try to judge our success not by "Do no problems arise?" but by "When a problem arises, can we deal with it responsibly and make sure we can address it so those kinds of problems do not arise again in the future?"
You mentioned our governance. One of the things I feel really fortunate about is the structure of this company: in the end, it is a controlled company. We are not at the whim of short-term shareholders. We can really design these products and make these decisions based on what is going to be best for the community over time.
That's one of the ways in which Facebook is different, but I can imagine reading it both ways. On the one hand, your control of the voting shares insulates you from short-term market pressures. On the other hand, you have much more personal power. There are no four-year elections for the CEO of Facebook, and elections are a normal way democratic governments ensure accountability. Do you think that governance structure makes you, in some cases, less accountable?
I certainly think it's a good question. My goal here is to create a governance structure around content and community that reflects more of what the people in the community want than what short-term shareholders might want. And if we do it right, then I think it could really set a path for the governance of an internet community. But if we don't do it well, then I think we won't be able to handle many of the problems that lie ahead.
Here are some of the principles. One is transparency. Right now, I don't think we are sufficiently transparent about the prevalence of the different problems on the platform. We have not done a good job of publishing and being transparent about the prevalence of those kinds of problems, the work we are doing, and the trends in how we are driving those things down over time.
A second is some kind of independent appeals process. Right now, if you post something on Facebook and someone reports it and our community operations and review team reviews it and decides it should be removed, there is really no way to appeal. I believe that in any kind of well-functioning democratic system, there must be a way to appeal. And I think we can build that internally as a first step.
But in the long term, what I would really like to get to is an independent appeal. So maybe people at Facebook make the first decision based on the community standards that are laid out, and then people can get a second opinion. You can imagine some kind of structure, almost like a Supreme Court, made up of independent people who do not work for Facebook, who ultimately make a final judgment about what acceptable speech should be in a community that reflects the social norms and values of people around the world.
One thing that has hurt Facebook over the past year is that a concern arises and, initially, the answer is: "Very, very few people saw fake news." Or "Very, very few people saw some Russia-linked bots …" And then, slowly, it comes out: "No, it was really more … millions … maybe hundreds of millions."
The problem was not just a lack of transparency; it was knowing whether we could trust what was being disclosed. And one of the reasons I was interested to hear you raise the idea of independent institutions is that I wonder whether part of transparency should be creating modes of reporting that are independent.
Yes, I think that's a good point. And I certainly believe what you are saying is a fair criticism. It is difficult to be transparent when we do not have a complete understanding of the state of some of the systems. In 2016, we were behind in our understanding of, and operational excellence at, preventing things like misinformation and Russian interference. And you can bet that is a major focus for us going forward.
Right now, I think we have about 14,000 people at the company working on security and on community operations and review, just to make sure we can solve some of the problems we had in 2016.
Several months after the 2016 US elections came the French elections. And for those, we spent a lot of time developing new artificial intelligence tools to find the kinds of fake accounts that spread misinformation and take them down. I think there were more than 30,000 accounts, and I believe the reports from France were that people felt it was a much cleaner election on social media.
A few months later, there were elections in Germany. And there, we expanded the playbook again to work directly with the electoral commission in Germany. If you work with the government in a country, you get a much better understanding of what is happening and which problems we should focus on.
And then, fast-forward to last year, 2017, and the special election in Alabama. We deployed a series of new tools we had developed to find fake accounts that were trying to spread fake news, and we removed them before much of the debate around the election. And again, I think we feel a lot better about the outcome there.
But let me ask you about your tools for punishing that bad behavior. The risk-reward of manipulating a national election using Facebook is very attractive. If you are Russia and you get caught hacking our electoral systems, which they also tried to do, and you fail and Hillary Clinton wins, the consequences can be really serious. The sanctions could be tremendous, and you could even imagine something like that escalating into armed conflict.
If you do this on Facebook, you may get caught and your bots shut down, but Facebook, not being a government, does not really have the ability to punish. If Cambridge Analytica violates everyone's privacy, you can't put them in jail the way that, if you are a doctor who repeatedly violates HIPAA, the government makes sure you face very serious legal consequences. So, do you have the capacity not only to detect but to sanction? Is there any way to raise the cost of using your platform for these kinds of efforts?
Let me walk through how we basically approach this.
There are three major categories of fake news. There is a group of people who are like spammers. These are the people who, in the days before social networks, sent you Viagra emails. The basic playbook you want to run against them is simply to make it uneconomical. So the first step, once we realized this was a problem, was noticing that a number of them were running Facebook ads on their web pages. We immediately said, "Okay, anything that is remotely sketchy will in no way be able to use our tools to monetize." So the amount of money they made went down.
Then they try to pump this content into Facebook in the hope that people click on it, see ads, and earn them money. As our systems get better at detecting this, we show the content less, which reduces its economic value to them. Eventually, they reach a point where they go and do something else.
The second category is state actors. That is basically Russia's interference effort. And that is a security problem. You never solve it completely, but you strengthen your defenses. You get rid of the fake accounts and the tools they have. We can't do all of this alone, so we try to work with local governments everywhere, which have more tools to punish them and more information about what is happening in their country, so they can tell us what to focus on. And I think we're making a lot of progress on that too.
Then there is the third category, which is the most nuanced: real media outlets that say what they think is true but have varying levels of accuracy or reliability. And that is actually the most challenging part of the problem to address, because I think there are quite large freedom-of-expression issues. People say things that may be wrong, but they mean them; they think they are telling the truth. Do you really want to shut them down for that?
So we have probably been most careful on that piece. But this year, we have rolled out a series of changes to News Feed that try to boost the ranking of broadly trusted news sources. We surveyed people across the community and asked them whether they trust different news sources.
Take the Wall Street Journal or the New York Times. Even people who don't read them regularly still tend to think they are good and trustworthy. Whereas if you go to blogs that may be more on the fringe, they will have strong supporters, but people who don't read them regularly often don't trust them as much.
I am someone who came up as a blogger and had a lot of love for the idea of the open internet and the way it tore down gatekeepers. One thing I hear in that third solution is that it also marks a big return of gatekeeping.
If you're the New York Times and you've been around for a long time and you're very well known, people trust you. If you started a media organization two months ago, people don't yet know whether they can trust you. If Facebook is how people get their news, and the way Facebook ranks its News Feed privileges the news sources people already trust, it will be much harder for new organizations to break through.
That is an important point, and one we spend a lot of time thinking about. One of the best things about the internet and the services we are trying to build is giving everyone a voice. That is very deep in our mission. We definitely think about that in all the changes we are making.
I think it is important to bear in mind that the strategies I have just outlined are made up of many different actions, each of which has relatively subtle effects. So the broadly trusted change I just mentioned changes how much you see something by, I don't know, let's call it somewhere in the range of maybe 20 percent.
What we are really trying to do is make the content people see genuinely meaningful to them. And one of the things we are often criticized for, incorrectly in this case, is people saying, "Hey, you're ranking the system according to what people like and click on."
That really isn't true. We moved past that many years ago. There was a problem with clickbait, where a lot of posts pushed content onto Facebook that people would click on because it had sensational titles, but they wouldn't feel good after reading it. That was one of the first times the basic metrics of clicks, likes, and comments on content really stopped helping us show the most meaningful content.
The way this works today, in general terms, is that we have panels of hundreds or thousands of people who come in, and we show them all the content that their friends and the pages they follow have shared. And we ask them to rank it and, basically, say: "What were the most meaningful things, the ones you would want at the top of the feed?"
And then we try to design algorithms that correlate with what people actually tell us is meaningful to them. It is not what they click on, not what will make us the most revenue, but what people genuinely consider meaningful and valuable. So when we make changes like the broadly trusted change, the reason we do it is that it really corresponds to what people tell us they want at a deep level.
One of the things that has come up a lot in the conversation is whether the business model of monetizing users' attention is what enables many of these problems. Tim Cook, the CEO of Apple, was asked in an interview the other day what he would do if he were in your position. He said, "I wouldn't be in this situation," and argued that Apple sells products to users rather than selling users to advertisers, and so has a sounder business model that doesn't open itself up to these problems.
Do you think part of the problem here is a business model in which attention ends up dominating over everything else, so that anything that can capture attention has great value within the ecosystem?
You know, I find that argument, that if you're not paying then somehow we can't care about you, extremely simplistic. And not at all aligned with the truth. The reality here is that if you want to build a service that helps connect everyone in the world, then there are many people who can't afford to pay. And therefore, as with a lot of media, having an advertising-supported model is the only rational model that can sustain building this service to reach people.
That doesn't mean we are not primarily focused on serving people. Probably to the dissatisfaction of our sales team here, I make all our decisions based on what is going to matter to our community, and I focus less on the advertising side of the business.
But if you want to build a service that doesn't serve only rich people, then you need something people can afford. I think Jeff Bezos put this well at one of his Kindle launches a few years ago. He said, "There are companies that work hard to charge you more, and there are companies that work hard to charge you less." And at Facebook, we are in the camp of companies that work hard to charge you less and provide a free service that everyone can use.
I don't believe at all that that means we don't care about people. On the contrary, I think it's important that we don't all get Stockholm syndrome and let the companies that work hard to charge you more convince you that they actually care more about you. Because that sounds ridiculous to me.
I'm also in an advertising model, and I have a lot of sympathy for the advertising model. But I also think the advertising model can blind us. It creates incentives that we operate under and then justify. And one of the questions I ask myself is whether diversifying the model wouldn't make sense. If I understand correctly, and maybe I don't, WhatsApp, which is also part of Facebook, is a subscription, right? People pay a small amount?
No, we actually got rid of that.
Well, look, there you go. Shows what I know.
The broader point I want to make is that diversifying doesn't mean serving only rich people; it just means not relying solely on attention. And when it comes to attention, when it comes to advertising, when you need to show growth to Wall Street, that does pull you toward capturing more and more of people's attention over time.
I did an interview with Tristan Harris, who has been a Facebook critic. And we were talking about your announcement that some of the changes you are making have reduced, a bit, the amount of time people spend on the platform. And he made the point: "You know, that's great, but he couldn't cut it by 50 percent. Wall Street would go crazy, his board would go crazy." There are costs to this model, and I wonder how you think about protecting against some of them dominating in the long term.
Well, I think our responsibility here is to make sure that the time people spend on Facebook is time well spent. We do not have teams whose main objective is to get people to spend more time. The way I set goals for teams is to try to build the best experience we can. I don't think it's right to assume that people spending time on a service is bad. But at the same time, I also believe that maximizing the time people spend is not really the goal either.
Over the last year, we have done a lot of research into what drives people's well-being: which uses of social networks are correlated with happiness and long-term measures of health and all the measures of well-being you would expect, and which areas are not so positive.
And what we have found is that you can divide the use of Facebook and social networks into two categories. One is where people connect and build relationships, even if it's subtle, even if it's just: I post a photo and someone I haven't spoken to in a while comments. That person reminds me that they care about me.
The other part of the usage is basically content consumption: watching videos, reading news, passively consuming content in a way where you're not interacting with anyone or building a relationship. And what we find is that the things that involve interacting with people and building relationships end up correlated with all the long-term measures of well-being you would expect, while the things that mainly involve content consumption, even if it's informative or entertaining and people say they like it, are not as correlated with long-term measures of well-being.
This is another change we have made to News Feed and our systems this year. We are prioritizing showing more content from your friends and family first, so that you are more likely to have interactions that are meaningful to you, and so that more of the time you spend goes into building those relationships.
That change actually cost us some time spent. That was part of what I was talking about on that earnings call. But in the long run, even if it costs time spent, if people spend more time on Facebook building relationships with the people they care about, then that will build a stronger community and a stronger business, regardless of what Wall Street thinks about it in the short term.
I want to ask you another question about the advertising model, and this one is more complicated because it relates directly to my industry. Something I've noticed recently is a perception at Facebook that a lot of critical media coverage comes from journalists who are angry that Facebook is decimating the advertising market journalism depends on. And there is that view: Dow Jones editor Will Lewis said that the diversion of advertising dollars to Facebook and Google is killing the news and has to stop.
Is he right or wrong? And since much of what circulates on Facebook is news that journalistic organizations paid to report and publish, what responsibility do you have to the people who create the real news your business model needs to work, given that their products create value not only for the world but for Facebook itself?
So, I think one big responsibility we have is to help support high-quality journalism. And that's not just the big traditional institutions; a big part of what I really think about when I think of high-quality journalism is local news. And I think there are almost two different strategies for approaching that.
For larger institutions, and perhaps even some smaller ones, subscriptions are really the key here. I think many of these business models are moving toward a higher percentage of subscriptions, where the people who get the most value from you contribute a disproportionate share of the revenue. And there are certainly many things we can do on Facebook to help these news organizations drive subscriptions. That has certainly been a lot of the work we have done and will continue to do.
For local news, I think some of the solutions may be a little different. But I think it's easy to lose track of how important this is. There have been many conversations about changes in civic engagement, and I think people can lose sight of how closely linked those can be to local news. In a city with a strong local newspaper, people are much better informed and much more likely to be civically active. On Facebook, we have taken steps to show people more local news. We are also working with local outlets specifically, creating funds to support them and working on both subscriptions and advertising that hopefully should create a more prosperous ecosystem.
In preparing for this interview, I've been thinking a lot about the 2017 manifesto where you said you wanted Facebook to help humanity take the next step. You wrote that "progress now requires humanity to unite, not only as cities or nations, but also as a global community," and suggested that Facebook could be the social infrastructure for that evolution.
In retrospect, I think a key question here is whether creating an infrastructure where all the tensions of countries and ethnicities and regions and ideologies can more easily collide will really help us become that global community, or whether it will tear us even further apart. Has your thinking changed on that?
Sure. I think that in recent years, the political reality has been that many people feel left behind. And there has been a great rise in isolationism and nationalism, which I think threatens the global cooperation that will be required to solve some of the most important problems: maintaining peace, tackling climate change, collaborating to accelerate science and cure diseases, and eliminating poverty.
So this is a big part of our mission. One of the things I find encouraging is that if you ask millennials what they identify with most, it is not their nationality or even their ethnic background. A plurality identify as citizens of the world. And that, I think, reflects the values of where we need to go to solve some of these most important questions.
So the question is, how do you do that? I think it's clear that simply helping people connect is not always positive on its own. A much more important part of the approach for me now is making sure that, in connecting people, we are helping to build bonds and bring people together, instead of just focusing on the mechanics of connection and infrastructure.
There are several different pieces you have to get right here. I think civil society basically builds from the bottom up. You need groups and communities that work well; we are very focused on that. You need a well-informed citizenry, so we are very focused on the quality of journalism, on everyone having a voice, and on people having access to the content they need. That, I think, ends up being really important.
Civic engagement, both participating in elections and working harder and harder to eliminate interference from the various nation-states that try to meddle in each other's elections, ends up being really important. And then I think part of what we have to do is work on some of the new kinds of governance questions we started this conversation with, because there has never been a community like this spanning so many different countries.
Those are some of the things I am focused on. But right now, many people are not as focused on connecting the world or bringing countries closer together as they perhaps were a few years ago. And I still see that as an important part of our vision of where the world should go: that we do what we can to stay committed to it, and hopefully we can help the world move in that direction.
One of the scary stories I've read about Facebook over the past year is that it has become a real source of anti-Rohingya propaganda in Myanmar and has thereby become part of an ethnic cleansing. Phil Robertson, deputy director of Human Rights Watch in Asia, pointed out that Facebook is dominant for news in Myanmar, but Myanmar is not a hugely important market for Facebook. It does not get the attention we give to things going wrong in the United States. I doubt you have an amount of staff in Myanmar proportional to what you have in the United States. And he said the result is that you end up being like "an absentee landlord" in Southeast Asia.
Is Facebook too big to effectively manage its global scale in some of these other countries, the ones we don't always talk about in this conversation?
So, one of the things I think we need to get better at as we grow is becoming a more global company. We have offices all over the world, so we are already fairly global. But our headquarters is here in California, and the vast majority of our community is not even in the US. It is a constant challenge to make sure we are paying proper attention to all the people in different parts of the community around the world.
I think the issues in Myanmar brought a lot of focus inside the company. I remember one Saturday morning I got a phone call: we had detected that people were trying to spread sensational messages through Facebook Messenger, in this case to each side of the conflict, basically telling the Muslims, "Hey, there's about to be an uprising of the Buddhists, so make sure you are armed and go to this place." And then the same thing on the other side.
So that's the kind of thing where I think it's clear that people were trying to use our tools to incite real harm. Now, in that case, our systems detected that it was happening. We stopped those messages from going through. But this is certainly something that we're paying a lot of attention to.
I think if you go back a couple years in technology rhetoric, a lot of the slogans people had that were read optimistically have come to take on darker connotations too. The idea that “anything is possible.” Our sense of what “anything” means there has become wider. Or the idea that you want to make the world more open and connected, I think it’s become clearer that an open and connected world could be a better world or it could be a worse world.
So, when you think about the 20-year timeframe, what will you be looking for to see if Facebook succeeded, if it actually made the world a better place?
Well, I don’t think it’s going to take 20 years. I think the basic point that you’re getting at is that we’re really idealistic. When we started, we thought about how good it would be if people could connect, if everyone had a voice. Frankly, we didn’t spend enough time investing in, or thinking through, some of the downside uses of the tools. So for the first 10 years of the company, everyone was just focused on the positive.
I think now people are appropriately focused on some of the risks and downsides as well. And I think we were too slow in investing enough in that. It’s not like we did nothing. I mean, at the beginning of last year, I think we had 10,000 people working on security. But by the end of this year, we’re going to have 20,000 people working on security.
In terms of resolving a lot of these issues, I think it’s just a case where because we didn’t invest enough, I think we will dig through this hole, but it will take a few years. I wish I could solve all these issues in three months or six months, but I just think the reality is that solving some of these questions is just going to take a longer period of time.
Now, the good news there is that we really started investing more, at least a year ago. So if it’s going to be a three-year process, then I think we’re about a year in already. And hopefully, by the end of this year, we’ll have really started to turn the corner on some of these issues.
But getting back to your question, I think human nature is generally positive. I'm an optimist in that way. But there's no doubt that we have a responsibility to amplify the good parts of what people can do when they connect, and to mitigate and prevent the bad things that people might do to try to abuse each other.
And over the long-term, I think that that’s the big question. Have we enabled people to come together in new ways — whether that’s creating new jobs, creating new businesses, spreading new ideas, promoting a more open discourse, allowing good ideas to spread through society more quickly than they might have otherwise? And on the other side, did we do a good job of preventing the abuse? Of making it so that governments aren’t interfering in each other’s civic elections and processes? Are we eliminating, or at least dramatically reducing, things like hate speech?
We’re in the middle of a lot of issues, and I certainly think we could’ve done a better job so far. I’m optimistic that we’re going to address a lot of those challenges, and that we’ll get through this, and that when you look back five years from now, 10 years from now, people will look at the net effect of being able to connect online and have a voice and share what matters to them as just a massively positive thing in the world.