How Android Fought the Chamois Botnet—and Won – WIRED
sophisticated botnet built on tainted apps that all worked together to power ad and SMS fraud. Dubbed Chamois, the malware family had already cropped up in 2016 and was being distributed both through Google Play and third-party app stores. So the Android team started aggressively flagging and helping to uninstall Chamois until they were sure it was dead.
Eight months later, though, in November 2017, Chamois roared back into the Android ecosystem, more ferocious than before. By March 2018, a year after Google thought it had been vanquished, Chamois hit an all-time high, infecting 20.8 million devices. Now, a year after that zenith, the Android team has whittled that number back down to fewer than 2 million infections. And at the Kaspersky Security Analyst Summit in Singapore this week, Android security engineer Maddie Stone is presenting a full post-mortem on how Google fought back against Chamois—again—and how personal the rivalry became.
“I actually gave a talk at Black Hat last year on what’s called ‘stage three’ of Chamois,” Stone told WIRED ahead of her talk. “And within 72 hours of me giving that talk, they started trying to change the bytes and each of the indicators I talked about. We could see them manipulating it. The Chamois developers also fingerprinted our exact Android security analysis environment and built in protections for some of the customizations that we use.”
Back With a Vengeance
After the March 2018 infection peak, the Android security team started collaborating with other defenders across Google, like anti-abuse and ad security specialists and software engineers, to get a handle on the new version of Chamois. The first two variants the team tracked in 2016 and 2017 infected devices in four stages to organize and mask the attack. The 2018 version, though, contained six stages, antivirus testing engines, and even more sophisticated anti-analysis and anti-debugging shields to avoid discovery. Malware developers build these features into
Snapchat. “At this point it’s just the easiest way to contact everyone,” she wrote via text. “I use it if I’m trying to get them to respond.” All her friends have Snapchat, and they all check it more frequently than they do their text messages “(no matter how much I hate that lol).” Logan, who lives in Denver, says Snapchat conversations feel more intimate: “It is also just nice to see the faces of people.” Sometimes, she and her friends will just send pictures of their faces to each other. “It’s good to see them and adds a little more connection than a normal chat or DM,” she says.
Snapchat isn’t Logan’s favorite platform; she prefers Instagram because “it basically has all my passions.” But what’s a girl to do? If everyone is on Snapchat, then she has to be too. In one week, she’ll get close to 500 notifications from Snapchat, more than twice what she gets from iMessage and Instagram combined.
Written off by many after a disappointing stock-market debut and Facebook mimicry of its popular features, Snapchat remains a mainstay among youth. “You don’t have to speak words to talk to someone you want to stay connected with, as weird as that sounds,” Lily Klima, a 17-year-old from New York City, explains over text. A Pew Research poll from 2018 found that 69 percent of American teens aged 13 to 17 reported using the platform, trailing only YouTube and Instagram, and ahead of Facebook. More than one-third of respondents—35 percent—said they use Snapchat most often, more than any other social media platform. DaJauna Burnett-Hollins, 19, of St. Paul, says she spends up to two hours a day on Snapchat, and prefers it in part because she can “see [a friend’s] face and not just a screen.”
Thanks to those young, devoted users and investments in its Android app and better ad technologies, parent company Snap is riding a rare wave of investor optimism as it prepares to release its latest financial results Tuesday. Snap shares have more than doubled this year, though they remain below the $17 price of the company’s 2017 IPO. Analysts from Goldman Sachs, BTIG, and Bank of America all recently increased their price targets for Snap’s stock.
Snap beefed up its management team by adding Jeremi Gorman
Instagram-loving friends did last summer. Called Who’s in Town, the iOS and Android app is ostensibly designed to show you, well … who’s in town. But it does much more than that.
Users who download the app and grant it access to their Instagram account are presented with an eerie interactive map of every place the people they follow have visited and shared online since they created their profile. The map updates in real time and is sourced from the wealth of location data the average Instagram user willingly uploads to the platform each time they opt to use its popular geotag feature in a story or post.
This information is nominally public already, as Instagram users must choose to share it with their followers. But by collecting them all in one place over time, Who’s in Town transforms data points seemingly meaningless in isolation into a comprehensive chronology of the habits and haunts of anyone with a public Instagram account.
It can tell you what coffeeshops or restaurants your Instagram-using friends frequent, when they last told the digital world they were there, and paint a detailed picture that wouldn’t be evident from just looking at their profile.
“The amount of data is insane,” said Erick Barto, the app’s creator. “It’s the equivalent of you going through every single story and writing down every single location, just consistently all the time.”
Paris Martineau covers platforms, online influence, and social media manipulation for WIRED.
A pre-release study he conducted using Who’s in Town tracked the posting habits of over 15,000 active Instagram users over multiple weeks. Barto said it found that 30 percent of people who post Instagram stories over the weekend geotag at least one location.
“This capability is problematic … from a privacy perspective as long-term aggregate data can potentially be misused in various ways,” Jason Polakis, security researcher and assistant professor at the University of Illinois at Chicago, told WIRED in an email.
Polakis said users’ aggregate location data could reveal sensitive information about their daily routine—like when a person normally goes out, or is at work—that could be used to determine when their home is empty, to enable stalking, or to reveal social connections like friendships or relationships, based on similarities in the time and location of posts. The information could also be used by companies to infer a person’s hidden habits or traits, he noted. A health insurance firm, for example, could scan prospective customers’ geotag history to compare how often they indicated they frequented bars versus the gym.
“While the app’s functionality isn’t doing anything complicated that a determined (malicious) individual or company wouldn’t be able to do,” Polakis added, “it does streamline and facilitate potentially invasive behavior at a large scale, as anyone installing the app would have access to this functionality.”
Once installed, Who’s in Town pulls post data for the people you follow dating back to the creation of each user’s account, and the geotags from stories posted that day.
Daywise, modern smartphone users receive more than double the number of notifications per day than they think they’re getting—as many as 73 per day. (Anecdotally, the Screen Time dashboard on my own iPhone tells me I’m averaging around 91 notifications per day.)
App makers are trying every which way to grab a sliver of our attention. Psychological researcher Larry Rosen, who cowrote The Distracted Mind, says he has spoken to app designers about their approaches and has concluded that their efforts to suck us into their apps are “really a business. The bottom line is, it’s a business. And the problem is they’re using behavioral scientists to help them design this.” More notably, Rosen’s research has consistently shown that notifications stress us out—and that constant notifications, beeps, buzzes, and vibrations from our smartphones and computers all contribute to ongoing chemical stress.
But it wasn’t always this way. Some of the earliest architects of smartphone notifications were simply trying to come up with ways to bring popular desktop communication apps to emerging mobile platforms. One of those people is Matías Duarte. His current role is head of material design at Google. But from 2000 to 2005, Duarte was the director of design at Danger, the predecessor to Android. (Remember the Hiptop, also known as the Sidekick? That was Danger.)
Duarte spoke with WIRED for the video above, digging up smartphone notification designs buried in boxes from nearly 20 years ago, and explained some of the early thinking behind smartphone notifications. An edited version of the conversation follows.
Lauren Goode: You were on the forefront of notifications before they were even called that. Talk a little bit about your history in designing what we now know as notifications.
Matías Duarte: I first started working in consumer electronics and mobile with the Danger Sidekick. This was just at the time when cell phones all looked like this, a nine-key pad at the bottom and a little tiny screen, and all you could basically do is text and [make and receive] phone calls. That’s it. There were no apps, no web browsers, nothing like that.
The first notifications were those little red voicemail lights on desktop phones. Mobile phones had these displays, which weren’t usually even colored. They were black and white … But you could use an icon to indicate when your phone was trying to get your attention because it would also have a little blinking light, right? About a missed call, or a voicemail, or about a text message. So you’d have two different little icons that were baked into that. So we knew that there was this problem of getting people’s attention and connecting people when we were working on the Sidekick.
LG: And this is well before Android, iOS, everything we know now.
MD: Yeah, absolutely. This was around 2000 when we were doing a lot of this design work. I think the very first one of these launched between 2001 and 2002. So this was all way before Android, although we have a connective lineage to these things.
LG: So you were designing just for the Sidekick’s little screen?
MD: For that tiny screen … Actually we started designing for this guy here. [Duarte holds up a small mobile device.] This is what we affectionately called the Peanut. It looks like one of those peanut cookies. This was basically a pager. That’s how you can think of it, except that it had a screen where we could show graphics and icons on it. This was the original product that we were going to make, although eventually we ended up making the Sidekick, which allowed you to communicate two ways just like you do today. And it had a keyboard.
The keyboard was the main appeal, and this meant that not only could you do emails like you would on a BlackBerry and type in your web pages faster, but you could text message. Not just SMS, but on what at the time was the hotness, which was AOL Instant Messenger. There was also MSN, ICQ. We had all of these on this guy here. In fact, we had the first mobile app store on the Sidekick.
LG: And this is a time before social media is really anything close to what it is now.
MD: Oh, there was no social media at the time … There were blogs.
LG: This was even before MySpace.
MD: This was the beginning of MySpace, the beginning of LiveJournal, that kind of thing, which is where this wonderful chart comes in [pulls out a paper chart]. Because part of the process of design is always understanding the problem space before you come up with a solution. And back then we did this analysis around w
Google and Facebook. “Mining and oil companies exploit the physical environment; social media companies exploit the social environment,” he said. “The owners of the platform giants consider themselves the masters of the universe, but in fact they are slaves to preserving their dominant position … Davos is a good place to announce that their days are numbered.”
Across town, a group of senior Facebook executives, including COO Sheryl Sandberg and vice president of global communications Elliot Schrage, had set up a temporary headquarters near the base of the mountain where Thomas Mann put his fictional sanatorium. The world’s biggest companies often establish receiving rooms at the world’s biggest elite confab, but this year Facebook’s pavilion wasn’t the usual scene of airy bonhomie. It was more like a bunker—one that saw a succession of tense meetings with the same tycoons, ministers, and journalists who had nodded along to Soros’ broadside.
Over the previous year Facebook’s stock had gone up as usual, but its reputation was rapidly sinking toward junk bond status. The world had learned how Russian intelligence operatives used the platform to manipulate US voters. Genocidal monks in Myanmar and a despot in the Philippines had taken a liking to the platform. Mid-level employees at the company were getting both crankier and more empowered, and critics everywhere were arguing that Facebook’s tools fostered tribalism and outrage. That argument gained credence with every utterance of Donald Trump, who had arrived in Davos that morning, the outrageous tribalist skunk at the globalists’ garden party.
CEO Mark Zuckerberg had recently pledged to spend 2018 trying to fix Facebook. But even the company’s nascent attempts to reform itself were being scrutinized as a possible declaration of war on the institutions of democracy. Earlier that month Facebook had unveiled a major change to its News Feed rankings to favor what the company called “meaningful social interactions.” News Feed is the core of Facebook—the central stream through which flow baby pictures, press reports, New Age koans, and Russian-made memes showing Satan endorsing Hillary Clinton. The changes would favor interactions between friends, which meant, among other things, that they would disfavor stories published by media companies. The company promised, though, that the blow would be softened somewhat for local news and publications that scored high on a user-driven metric of “trustworthiness.”
Davos provided a first chance for many media executives to confront Facebook’s leaders about these changes. And so, one by one, testy publishers and editors trudged down Davos Platz to Facebook’s headquarters throughout the week, ice cleats attached to their boots, seeking clarity. Facebook had become a capricious, godlike force in the lives of news organizations; it fed them about a third of their referral traffic while devouring a greater and greater share of the advertising revenue the media industry relies on. And now this. Why? Why would a company beset by fake news stick a knife into real news? And what would Facebook’s algorithm deem trustworthy? Would the media executives even get to see their own scores?
Facebook didn’t have ready answers to all of these questions; certainly not ones it wanted to give. The last one in particular—about trustworthiness scores—quickly inspired a heated debate among the company’s executives at Davos and their colleagues in Menlo Park. Some leaders, including Schrage, wanted to tell publishers their scores. It was only fair. Also in agreement was Campbell Brown, the company’s chief liaison with news publishers, whose job description includes absorbing some of the impact when Facebook and the news industry crash into one another.
But the engineers and product managers back at home in California said it was folly. Adam Mosseri, then head of News Feed, argued in emails that publishers would game the system if they knew their scores. Plus, they were too unsophisticated to understand the methodology, and the scores would constantly change anyway. To make matters worse, the company didn’t yet have a reliable measure of trustworthiness at hand.
Heated emails flew back and forth between Switzerland and Menlo Park. Solutions were proposed and shot down. It was a classic Facebook dilemma. The company’s algorithms embraid choices so complex and interdependent that it’s hard for any human to get a handle on it all. If you explain some of what is happening, people get confused. They also tend to obsess over tiny factors in huge equations. So in this case, as in so many others over the years, Facebook chose opacity. Nothing would be revealed in Davos, and nothing would be revealed afterward. The media execs would walk away unsatisfied.
After Soros’ speech that Thursday night, those same editors and publishers headed back to their hotels, many to write, edit, or at least read all the news pouring out about the billionaire’s tirade. The words “their days are numbered” appeared in article after article. The next day, Sandberg sent an email to Schrage asking if he knew whether Soros had shorted Facebook’s stock.
Far from Davos, meanwhile, Facebook’s product engineers got down to the precise, algorithmic business of implementing Zuckerberg’s vision. If you want to promote trustworthy news for billions of people, you first have to specify what is trustworthy and what is news. Facebook was having a hard time with both. To define trustworthiness, the company was testing how people responded to surveys about their impressions of different publishers. To define news, the engineers pulled a classification system left over from a previous project—one that pegged the category as stories involving “politics, crime, or tragedy.”
That particular choice, which meant the algorithm would be less kind to all kinds of other news—from health and science to technology and sports—wasn’t something Facebook execs discussed with media leaders in Davos. And though it went through reviews with senior managers, not everyone at the company knew about it either. When one Facebook executive learned about it recently in a briefing with a lower-level engineer, they say they “nearly fell on the fucking floor.”
The confusing rollout of meaningful social interactions—marked by internal dissent, blistering external criticism, genuine efforts at reform, and foolish mistakes—set the stage for Facebook’s 2018. This is the story of that annus horribilis, based on interviews with 65 current and former employees. It’s ultimately a story about the biggest shifts ever to take place inside the world’s biggest social network. But it’s also about a company trapped by its own pathologies and, perversely, by the inexorable logic of its own recipe for success.
Facebook’s powerful network effects have kept advertisers from fleeing, and overall user numbers remain healthy if you include people on Instagram, which Facebook owns. But the company’s original culture and mission kept creating a set of brutal debts that came due with regularity over the past 16 months. The company floundered, dissembled, and apologized. Even when it told the truth, people didn’t believe it. Critics appeared on all sides, demanding changes that ranged from the essential to the contradictory to the impossible. As crises multiplied and diverged, even the company’s own solutions began to cannibalize each other. And the most crucial episode in this story—the crisis that cut the deepest—began not long after Davos, when some reporters from The New York Times, The Guardian, and Britain’s Channel 4 News came calling. They’d learned some troubling things about a shady British company called Cambridge Analytica, and they had some questions.
It was, in some ways, an old story. Back in 2014, a young academic at Cambridge University named Aleksandr Kogan built a personality questionnaire app called thisisyourdigitallife. A few hundred thousand people signed up, giving Kogan access not only to their Facebook data but also—because of Facebook’s loose privacy policies at the time—to that of up to 87 million people in their combined friend networks. Rather than simply use all of that data for research purposes, which he had permission to do, Kogan passed the trove on to Cambridge Analytica, a strategic consulting firm that talked a big game about its ability to model and manipulate human behavior for political clients. In December 2015, The Guardian reported that Cambridge Analytica had used this data to help Ted Cruz’s presidential campaign, at which point Facebook demanded the data be deleted.
This much Facebook knew in the early months of 2018. The company also knew—because everyone knew—that Cambridge Analytica had gone on to work with the Trump campaign after Ted Cruz dropped out of the race. And some people at Facebook worried that the story of their company’s relationship with Cambridge Analytica was not over. One former Facebook communications official remembers being warned by a manager in the summer of 2017 that unresolved elements of the Cambridge Analytica story remained a grave vulnerability. No one at Facebook, however, knew exactly when or where the unexploded ordnance would go off. “The company doesn’t know yet what it doesn’t know yet,” the manager said. (The manager now denies saying so.)
The company first heard in late February that the Times and The Guardian had a story coming, but the department in charge of formulating a response was a house divided. In the fall, Facebook had hired a brilliant but fiery veteran of tech industry PR named Rachel Whetstone. She’d come over from Uber to run communications for Facebook’s WhatsApp, Instagram, and Messenger. Soon she was traveling with Zuckerberg for public events, joining Sandberg’s senior management meetings, and making decisions—like picking which outside public relations firms to cut or retain—that normally would have rested with those officially in charge of Facebook’s 300-person communications shop. The staff quickly sorted into fans and haters.
And so it was that a confused and fractious communications team huddled with management to debate how to respond to the Times and Guardian reporters. The standard approach would have been to correct misinformation or errors and spin the company’s side of the story. Facebook ultimately chose another tack. It would front-run the press: dump a bunch of information out in public on the eve of the stories’ publication, hoping to upstage them. It’s a tactic with a short-term benefit but a long-term cost. Investigative journalists are like pit bulls. Kick them once and they’ll never trust you again.
Facebook’s decision to take that risk, according to multiple people involved, was a close call. But on the night of Friday, March 16, the company announced it was suspending Cambridge Analytica from its platform. This was a fateful choice. “It’s why the Times hates us,” one senior executive says. Another communications official says, “For the last year, I’ve had to talk to reporters worried that we were going to front-run them. It’s the worst. Whatever the calculus, it wasn’t worth it.”
The tactic also didn’t work. The next day the story—focused on a charismatic whistle-blower with pink hair named Christopher Wylie—exploded in Europe and the United States. Wylie, a former Cambridge Analytica employee, was claiming that the company had not deleted the data it had taken from Facebook and that it may have used that data to swing the American presidential election. The first sentence of The Guardian’s reporting blared that this was “one of the tech giant’s biggest ever data breaches” and that Cambridge Analytica had used the data “to build a powerful software program to predict and influence choices at the ballot box.”
The story was a witch’s brew of Russian operatives, privacy violations, confusing data, and Donald Trump. It touched on nearly all the fraught issues of the moment. Politicians called for regulation; users called for boycotts. In a day, Facebook lost $36 billion in its market cap. Because many of its employees were compensated based on the stock’s performance, the drop did not go unnoticed in Menlo Park.
To this emotional story, Facebook had a programmer’s rational response. Nearly every fact in The Guardian’s opening paragraph was misleading, its leaders believed. The company hadn’t been breached—an academic had fairly downloaded data with permission and then unfairly handed it off. And the software that Cambridge Analytica built was not powerful, nor could it predict or influence choices at the ballot box.
But none of that mattered. When a Facebook executive named Alex Stamos tried on Twitter to argue that the word “breach” was being misused, he was swatted down. He soon deleted his tweets. His position was right, but who cares? If someone points a gun at you and holds up a sign that says hand’s up, you shouldn’t worry about the apostrophe. The story was the first of many to illuminate one of the central ironies of Facebook’s struggles. The company’s algorithms helped sustain a news ecosystem that prioritizes outrage, and that news ecosystem was learning to direct outrage at Facebook.
As the story spread, the company started melting down. Former employees remember scenes of chaos, with exhausted executives slipping in and out of Zuckerberg’s private conference room, known as the Aquarium, and Sandberg’s conference room, whose name, Only Good News, seemed increasingly incongruous. One employee remembers cans and snack wrappers everywhere; the door to the Aquarium would crack open and you could see people with their heads in their hands and feel the warmth from all the body heat. After saying too much before the story ran, the company said too little afterward. Senior managers begged Sandberg and Zuckerberg to publicly confront the issue. Both remained publicly silent.
“We had hundreds of reporters flooding our inboxes, and we had nothing to tell them,” says a member of the communications staff at the time. “I remember walking to one of the cafeterias and overhearing other Facebookers say, ‘Why aren’t we saying anything? Why is nothing happening?’ ”
According to numerous people who were involved, many factors contributed to Facebook’s baffling decision to stay mute for five days. Executives didn’t want a repeat of Zuckerberg’s ignominious performance after the 2016 election when, mostly off the cuff, he had proclaimed it “a pretty crazy idea” to think fake news had affected the result. And they continued to believe people would figure out that Cambridge Analytica’s data had been useless. According to one executive, “You can just buy all this fucking stuff, all this data, from the third-party ad networks that are tracking you all over the planet. You can get way, way, way more privacy-violating data from all these data brokers than you could by stealing it from Facebook.”
“Those five days were very, very long,” says Sandberg, who now acknowledges the delay was a mistake. The company became paralyzed, she says, because it didn’t know all the facts; it thought Cambridge Analytica had deleted the data. And it didn’t have a specific problem to fix. The loose privacy policies that allowed Kogan to collect so much data had been tightened years before. “We didn’t know how to respond in a system of imperfect information,” she says.
Facebook’s other problem was that it didn’t understand the wealth of antipathy that had built up against it over the previous two years. Its prime decisionmakers had run the same playbook successfully for a decade and a half: Do what they thought was best for the platform’s growth (often at the expense of user privacy), apologize if someone complained, and keep pushing forward. Or, as the old slogan went: Move fast and break things. Now the public thought Facebook had broken Western democracy. This privacy violation—unlike the many others before it—wasn’t one that people would simply get over.
Finally, on Wednesday, the company decided Zuckerberg should give a television interview. After snubbing CBS and PBS, the company summoned a CNN reporter who the communications staff trusted to be reasonably kind. The network’s camera crews were treated like potential spies, and one communications official remembers being required to monitor them even when they went to the bathroom. (Facebook now says this was not company protocol.) In the interview itself, Zuckerberg apologized. But he was also specific: There would be audits and much more restrictive rules for anyone wanting access to Facebook data. Facebook would build a tool to let users know if their data had ended up with Cambridge Analytica. And he pledged that Facebook would make sure this kind of debacle never happened again.
A flurry of other interviews followed. That Wednesday, WIRED was given a quiet heads-up that we’d get to chat with Zuckerberg in the late afternoon. At about 4:45 pm, his communications chief rang to say he would be calling at 5. In that interview, Zuckerberg apologized again. But he brightened when he turned to one of the topics that, according to people close to him, truly engaged his imagination: using AI to keep humans from polluting Facebook. This was less a response to the Cambridge Analytica scandal than to the backlog of accusations, gathering since 2016, that Facebook had become a cesspool of toxic virality, but it was a problem he actually enjoyed figuring out how to solve. He didn’t think that AI could completely eliminate hate speech or nudity or spam, but it could get close. “My understanding with food safety is there’s a certain amount of dust that can get into the chicken as it’s going through the processing, and it’s not a large amount—it needs to be a very small amount,” he told WIRED.
The interviews were just the warmup for Zuckerberg’s next gauntlet: A set of public, televised appearances in April before three congressional committees to answer questions about Cambridge Analytica and months of other scandals. Congresspeople had been calling on him to testify for about a year, and he’d successfully avoided them. Now it was game time, and much of Facebook was terrified about how it would go.
As it turned out, most of the lawmakers proved astonishingly uninformed, and the CEO spent most of the day ably swatting back soft pitches. Back home, some Facebook employees stood in their cubicles and cheered. When a plodding Senator Orrin Hatch asked how, exactly, Facebook