Rebooting Facebook


Remember the internet outcry when Twitter exposed nearly 700,000 Americans to posts from Russian bots, designed to meddle in the 2016 Presidential election? Me neither.

That's because Twitter proactively conducted its own retroactive investigation into the role its platform played in the Russian influence on the 2016 election, following its November 2017 appearance before Congress, alongside Facebook and Google, on the matter.

They then provided the public with specifics about the types of posts put out by the Russian government-linked "Internet Research Agency" (IRA), and kept that information updated. Following that, they emailed every Twitter account holder who had interacted with one of the IRA accounts, describing the issue, and explained how they had updated their analytics to proactively identify suspicious patterns in future posts. They were specific, fact-based, and focused on the impact on their users and what Twitter was doing about it.

...any such activity represents a challenge to democratic societies everywhere, and we’re committed to continuing to work on this important issue. We have developed new techniques for identifying malicious automation (such as near-instantaneous replies to Tweets, non-random Tweet timing, and coordinated engagement). We have improved our phone verification process and introduced new challenges, including reCAPTCHAs to validate that a human is in control of an account.
— Twitter, January 2018
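The signals Twitter names, near-instant replies and unnaturally regular posting, lend themselves to simple heuristics. The sketch below is purely illustrative and is not Twitter's actual detection code; the function name, inputs, and thresholds are all invented for the example.

```python
from statistics import median, pstdev

def flag_suspicious(reply_latencies_s, post_intervals_s,
                    max_median_latency=2.0, min_interval_stdev=5.0):
    """Toy heuristic: an account that typically replies within a couple of
    seconds AND posts on a clockwork-regular schedule looks automated.
    Thresholds here are invented, not Twitter's real values."""
    instant_replies = median(reply_latencies_s) < max_median_latency
    regular_timing = pstdev(post_intervals_s) < min_interval_stdev
    return instant_replies and regular_timing

# Bot-like: replies in ~1 second, posts almost exactly every 60 seconds.
print(flag_suspicious([0.8, 1.1, 0.9], [60, 60, 61, 59]))   # True
# Human-like: slow, irregular replies and posting gaps.
print(flag_suspicious([45, 120, 30], [300, 40, 900, 75]))   # False
```

Real systems would combine many more signals (coordinated engagement across accounts, phone verification, CAPTCHA challenges), but the point stands: these behavioral patterns are measurable, which is what made Twitter's "we have developed new techniques" claim concrete rather than hand-wavy.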

Contrast that approach with Facebook's. On March 16, Facebook issued a blog post from their VP Legal announcing that they were suspending the accounts of Cambridge Analytica ("Cambridge") and Strategic Communications Laboratories ("SCL"). According to Facebook, those organizations inappropriately received data from roughly 270,000 Facebook users who signed up for an app developed by Dr. Aleksandr Kogan, a University of Cambridge professor. Because the app could also pull data on those users' friends, it gained access to roughly 87 million accounts in total. Kogan passed the data to Cambridge and SCL, and it was then allegedly used for political purposes during the 2016 US Presidential election.

Although Kogan followed Facebook's policies for app developers in building the initial app, he later allegedly violated those policies by passing the information on to Cambridge and SCL. Facebook also states in the post that Kogan, Cambridge, and SCL lied when they certified in 2015, after Facebook discovered the policy violation, that all the data had been deleted. Facebook took pains to note that this was data "misuse" by Kogan, Cambridge, and SCL, not a breach of its systems.

The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.
— Facebook blog post, March 16, 2018
We enforce our policies in a variety of ways — from working with developers to fix the problem, to suspending developers from our platform, to pursuing litigation.
— Facebook blog post, March 16, 2018

Facebook's stock plummeted over 17% in the ensuing 2 weeks, knocking almost $100 billion off its market cap. Why? Trust. 

I don't think it's Facebook's fault that Kogan and Cambridge misused the data, although in retrospect some of Facebook's data access policies, particularly around access to friends' data, were too broad. Facebook didn't misuse data. The bad actors got hold of it in accordance with policy, then used it for a different purpose than what they claimed. And when Facebook found out about the misuse, it asked for and received confirmation that the data had been deleted.


However, their communications were late and, for a long while in internet terms, tone-deaf. The initial blog post was aggressive, focused on clearly identifying the "bad guys," offered no apology to users, and hinted ominously at litigation.

On March 21, 5 days after the news broke, Mark Zuckerberg finally responded in a Facebook post, citing a "breach of trust." The company subsequently apologized in full-page newspaper ads and in a number of conversations with media outlets. Today, April 5, they implemented sweeping new policies, some of which will definitely impact legitimate developers (like creators of Chatfuel bots) and apps like Tinder.

But it was simply late. As one of my former colleagues says, "Social 'justice' is fast and presumptive. Get out in front of it."

Having been in this situation before, I know how difficult it can be to make decisions in a crisis at one of the largest tech companies in the world. In the 1994 Pentium chip crisis, a flaw in Intel's Pentium processor caused certain floating-point division calculations to return incorrect results. Several million desktops were affected, and Dell was the leader in Pentium chip shipments. It wasn't Dell's fault. But our focus from the start was not on fixing blame, it was on fixing the situation for our customers.

A spokeswoman for Compaq said the company was referring the calls to Intel. A spokesman for Dell said the company had been contacted recently by Intel and was dealing directly with customers.
— The New York Times, November 24, 1994

November 24, 1994. Thanksgiving Day in the US. I was the newly minted director of desktop marketing for Dell North America, and I had driven about 3 hours from Austin to Dallas to have Thanksgiving dinner with a friend. My phone rang as I was sitting down to eat. It was my boss, Tom Martin, on the line, and he asked me to meet him in his office as soon as I could. "Today, Tom? I'm in Dallas, and we're about to eat." He explained the situation and asked me to come back as soon as I was finished eating. I was in his office in Austin at 6pm. The internet does not care that your crisis occurred on Thanksgiving, or Easter, or the weekend.


What happened next was a freaking whirlwind. We had to decide how we were going to respond, both in terms of fixing the chips and in terms of communicating with our customers. Intel had not yet committed any funding to solve the problem. What I remember most was a bleep storm of uncertainty - there were so many unknowns, including how fast Intel could replace the chips, whether Intel would cover any costs, and how many people were actually affected. We had to make hundred-million-dollar decisions anyway. It was unbelievably difficult, even, or perhaps especially, at a big, admired tech company with lots of resources.

Within a day, after some heated debate, we gave all our support reps (who were receiving thousands of calls a day on this topic) the simplest phone script. It started with, "We will take care of our customers." We offered to replace customers' computers if the flaw affected them, regardless of whether or not Intel decided to reimburse us. We didn't get to decide if they were affected - the customer did. Having run the numbers on the cost of replacing all those computers, I initially disagreed with this blanket "we will take care of our customers - no matter the cost" approach and was very nervous about the financial impact.

Then a miracle happened. By focusing on the impact to the customer, and promising to make them whole even if we weren't sure we'd be made whole by Intel, we took the steam out of the conversation. We defused an explosive situation. Customers were then able to listen rationally to our reps' explanation of who might be affected by this flaw (it affected highly precise calculations, such as those used by medical researchers and rocket scientists). Most concluded they were not affected and didn't ask for a replacement. Of those who did, the vast majority could replace the chip themselves - so we were able to ship them a chip rather than a big, bulky, expensive desktop computer. 

It was a lesson I'll never forget.

Several weeks later, Intel apologized and agreed to cover certain costs associated with the Pentium chip replacement. In January 1995, Intel announced a pre-tax charge of $475 million against earnings. I was not the least bit surprised by the number.

Fast forward to the present day. I think if Facebook's initial response had been faster (within 24 hours, for sure) and had focused on solving the problem for the customer rather than on assigning blame, things might have turned out differently. They eventually got there with the messaging coming out today, but that's almost 3 weeks after Chris Wylie blew the whistle.

I also think Facebook's customer care could use a reboot, to make it more timely and personal. Customer care is the new marketing, and in other posts we've shared data on how customer care response rate and speed can dramatically improve sentiment towards a brand.

Recently, one of our clients had a major positive announcement, which they broadcast on Facebook Live. The event ran for about an hour, and we helped the client staff their social channels to ensure they could respond and engage with all the congratulatory messages from their social community. Shortly after the event ended, with hundreds of messages still pouring in, the volume of messages and responses tripped a "spam flag" at Facebook. Facebook shut down our ability to respond on Facebook - the very channel on which the event was broadcast.
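For what it's worth, the kind of trigger that likely caught our client can be sketched as a simple sliding-window counter. Facebook has not published how its spam detection actually works, so every name and threshold below is invented for illustration.

```python
from collections import deque

class SpikeFlag:
    """Toy sliding-window spam trigger (illustrative only; not Facebook's
    real heuristics). Flips a flag when more than `limit` outbound
    messages occur within any `window_s`-second window."""

    def __init__(self, limit=100, window_s=60):
        self.limit = limit
        self.window_s = window_s
        self.times = deque()
        self.flagged = False

    def record(self, t):
        """Record one outbound message sent at time t (in seconds)."""
        self.times.append(t)
        # Drop messages that have fallen out of the window.
        while self.times and t - self.times[0] > self.window_s:
            self.times.popleft()
        if len(self.times) > self.limit:
            self.flagged = True  # a genuine celebratory burst trips it too
        return self.flagged

# 150 genuine congratulatory replies in half a minute look identical
# to a spam attack under a volume-only rule.
flag = SpikeFlag(limit=100, window_s=60)
for i in range(150):
    flag.record(i * 0.2)
print(flag.flagged)  # True
```

The limitation is obvious from the code: volume alone cannot distinguish a spam attack from a celebration, which is exactly why a proactive escalation path for predictable spikes (suggested later in this post) matters.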

I figured as soon as we explained what the situation was and someone at Facebook had a quick look at the client's Facebook page, they'd flip the spam flag back off and we'd be fine. So, I sent a message to Facebook support on Facebook. No response after about 20 minutes. Then, I sent texts to people I knew who might know someone at Facebook, and followed up on those leads. Then I sent an email to Sheryl Sandberg via the HBS alumni network. Then I posted on Twitter and LinkedIn seeking help, both from my friends and from a few specific Facebook handles I found online.

After about an hour, I hopped in my car and drove to Facebook's Toronto office, hoping to explain the situation to someone there. A very nice gent came down to the building lobby with an iPad, apologized, and explained that Facebook's Toronto office was "just a sales office," and couldn't do anything to get the spam flag lifted. Nor could they make a phone call to Menlo Park to see if someone there could help. But he did let me fill out a support form on his iPad.

After that, I sat in the lobby of the building and phoned Facebook in Menlo Park. "For support, press 2..." Spoiler alert: if you press 2, you get "Facebook does not provide phone support - please go to our Facebook page for support..."

After 30 years in tech, I am blessed to have friends in all the right places. At this point, about 2 hours after Facebook shut down our client's ability to respond to their customers in the midst of a "once in a decade" celebratory event, I fired the silver bullet. I sent an email to someone who, just prior to taking the time to help me, was almost certainly on the phone with the CEO of a Fortune 500 company, or a head of state, or some major media outlet. 3 minutes after I sent my note, she got back to me and said she'd be happy to email her corporate Facebook contacts. 15 minutes after that, she got a note back from a Facebook VP saying he was happy to follow up. About 30 minutes after that, the rate limit flag was lifted and our client was able to respond to Facebook messages as per usual. Facebook's follow-up and speed were terrific once we got their attention, thanks to my former colleague.

A week later, I got an email from the Facebook Community Operations team explaining that they did, in fact, think the spike in the volume of messages was spam, triggering the rate limit:

The Facebook Page was previously going against our Community Standards on fraud and spam. However, we’ve finished taking a second look at your account and have since ensured that the block has been lifted. We’re sorry for any inconvenience this may have caused.

I suggested that there are certain industries for which such spikes in volume may be predictable, even if the exact timing is not known, and that perhaps it would be useful to have an escalation team we could proactively contact if such an event occurred again. That would prevent the spam flag from being flipped in the first place. Carter at Facebook got back to me and said that they had made changes to their process as a result. That's been my experience with Facebook - once you actually get the attention of the right people, they make smart decisions, are responsive to feedback, and move quickly. But you shouldn't have to send up a bat signal and summon a superhero to get help. I think considering more channels for customer support, empowering the regional teams to assist their local customers, and establishing escalation procedures would go a long way towards making Facebook more helpful to customers - and therefore improve sentiment towards the company.

Despite the hashtags, the world will not "delete Facebook." The data misuse by Kogan and others is not Facebook's fault, and they've made changes to their policies to limit the data access that made the situation more widespread than it needed to be. Facebook will recover. I am hopeful they will recover with more transparency, more empathy for the customer, and a keener appreciation for the need to respond in internet time.