
CPU Security Flaw (Meltdown and Spectre) – What you need to know

Processors (CPUs) provide the brainpower for all the computerized devices we use day to day, from PCs and smartphones down to mundane things such as ATMs. An exploit – or pair of exploits – that affects virtually all of these devices at once is therefore shocking news.

Unfortunately, early 2018 saw just such a thing happen with the news that a design flaw in nearly all modern processors had been found.
 

What are Meltdown and Spectre?

Meltdown and Spectre are the names given to the two newly discovered vulnerabilities that affect virtually every device with a processor in it.

They rely on retrieving small amounts of data that the processor temporarily exposes. This happens because of a design feature in processors called “speculative execution”.

This is the process by which a CPU essentially guesses what work it will need to do next and performs it in advance so it can run faster.

Spectre allows attackers to trick the processor into speculatively executing instructions it should not. They then read the leftover traces of that execution to obtain sensitive information that should never be available.

Meltdown fundamentally breaks the mechanism that stops applications from accessing protected system memory. By doing so, it enables exploits to read arbitrary system memory and retrieve sensitive data.
 

Who discovered them?

Both exploits were independently discovered by multiple teams of researchers.

Meltdown

  • Jann Horn (Google Project Zero)
  • Werner Haas, Thomas Prescher (Cyberus Technology)
  • Daniel Gruss, Moritz Lipp, Stefan Mangard, Michael Schwarz (Graz University of Technology)

Spectre

  • Jann Horn (Google Project Zero)
  • Paul Kocher in collaboration with Daniel Genkin (University of Pennsylvania and University of Maryland), Mike Hamburg (Rambus), Moritz Lipp (Graz University of Technology), and Yuval Yarom (University of Adelaide and Data61)

 

What systems are affected?

On a technical level, every Intel processor that implements out-of-order (speculative) execution is potentially affected. This covers almost every Intel processor released since 1995!
Some AMD and ARM processors are also affected.

Desktop, laptop and cloud computing services may all be affected by Meltdown.
 

Am I affected by Meltdown and Spectre?

Yes!

This may seem like a very blunt answer, but due to the wide-reaching nature of the design flaw, you almost certainly own a device that is affected.
 

Does my antivirus protect me?

Antivirus programs could theoretically detect the use of these exploits; in practice, however, it is very unlikely. Your antivirus may detect malware designed to exploit these vulnerabilities, but it cannot detect the vulnerabilities themselves.
 

How do I protect myself?

Meltdown can be fixed with a software patch because it relies on breaking the isolation between user applications and the operating system.

Computers fitted with a vulnerable processor and running unpatched operating systems will be open to exploit.

Fortunately, operating system vendors have released patches to protect their users. As long as you regularly update your operating system using its built-in update tools, you should be protected from the Meltdown vulnerability.

As usual, it is best to practice safe web browsing habits and avoid installing software that could make use of these vulnerabilities.

Spectre has proven much harder to protect against, as it exploits behaviour at the hardware level.

Initial advice so far is to follow the same basic steps as for Meltdown:

  • Update your operating system frequently
  • Install updates from your hardware manufacturer (firmware updates)
  • Turn on site isolation in your web browser (Chrome and Firefox) – this helps prevent JavaScript exploits from using the Spectre vulnerability
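If you want to verify that the patches above actually took effect, Linux (kernel 4.15 and later) reports mitigation status through a sysfs interface. A minimal sketch:

```shell
# Each file reports "Not affected", "Vulnerable", or "Mitigation: ..."
# for the corresponding CPU vulnerability (meltdown, spectre_v1, spectre_v2, ...).
dir=/sys/devices/system/cpu/vulnerabilities
if [ -d "$dir" ]; then
  for f in "$dir"/*; do
    printf '%s: %s\n' "$(basename "$f")" "$(cat "$f")"
  done
else
  echo "Vulnerability reporting not available (kernel older than 4.15?)"
fi
```

On Windows and macOS there is no equivalent one-liner; check that the latest OS and firmware updates are installed instead.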

 

What next?

The main thing for most people to do is to not panic. If you have followed the basic security steps and best practices above then you will almost certainly be safe.

It is important to note that some of the security patches that have been released may deliver a performance hit to your device. This is a widespread complaint and many of the operating system vendors recognize this as an issue.

They have stated that the performance hit should not be noticeable to the average user; however, impacts are “highly variable and depend on a number of factors”.

If you feel like your device performance has been significantly affected, do some research on whichever update you just installed. Other people may have suggestions and/or the vendor themselves may recognize a compatibility issue with certain device setups.
 

Conclusion

The shock release of these two huge vulnerabilities should be a wakeup call to the entire world.

It is increasingly important in this day and age to be ever vigilant about what information you store on your devices.

More importantly, users and companies should focus on preventative practices, such as being aware of potential malware that could expose devices to cybercriminals.
For more advice on what users should look out for in 2018, check our article – Internet security threats to look out for in 2018


Android

Zuckerberg’s Privacy Manifesto is Actually About Messaging

A Privacy-Minded Vision for Social Networking” and thought it was either a deathbed conversion, a cynical ploy to avoid regulation and reassure users, or even just an absurd musing that the company has no intention of carrying out (much like the “Clear History” feature it announced almost a year ago, which has yet to materialize).I…


A Privacy-Minded Vision for Social Networking” and thought it was either a deathbed conversion, a cynical ploy to avoid regulation and reassure users, or even just an absurd musing that the company has no intention of carrying out (much like the “Clear History” feature it announced almost a year ago, which has yet to materialize).

I know I thought, at various points, all three of the above, and lots of other things to boot.

It’s even possible that you took Mr. Zuckerberg at his word, as former Microsoft wunderman Steven Sinofsky did, and credited him with realizing which way the winds are blowing and moving there with thoughtfulness and haste.

In fact, Zuckerberg’s essay was likely about none of those things, nor was it about privacy at all (more on that later).

It was about WeChat, WhatsApp, and iMessage.

Zuckerberg’s post, minus the PR, was a product road map. It’s aimed at adapting his business to counter one of the only remaining competitive threats to Facebook and Instagram: messaging. And it was a clever way to dress up that pivot as a consumer-friendly privacy play. Win, win!

Molly Wood (@mollywood) is an Ideas contributor at WIRED and the host and senior editor ofMarketplace Tech, a daily national radio broadcast covering the business of technology. She has covered the tech industry at CNET,The New York Times, and in various print, television, digital and audio formats for nearly 20 years. (Ouch.)

Facebook, the core product, is collapsing. I know that seems like a strong statement given the company’s 2 billion users, but, in fact, the News Feed is a wasteland of reposted memories, divisive propaganda, and the occasional baby picture. US users are abandoning it by the millions, user growth is flat, and personal sharing has been on the decline for years.

Regulation and even antitrust investigations are looming. Even a handful of advertisers are starting to move on, and the company’s brand reputation is sinking fast—a recent Axios poll put it at 94 out of 100, ahead of only the likes of the US government, Trump.org, Phillip Morris, and Wells Fargo.

Yes, Instagram looks like the next best hope for the empire, and certainly a lot of users leaving Facebook are landing on Insta. But it’s still a distant second in terms of usage, and while Facebook is bullishly pushing advertisers toward Stories, they bring in far less revenue than News Feed ads. Also, as Instagram’s product roadmap starts to look more and more like Facebook’s, the app could get a lot less appealing.

And if you really look at what teens, in particular, are doing, itincludessocial media, but by almost every measure, they’re texting. The biggest threat to Facebook and Instagram is messaging, and that’s why, if you strip away all the window dressing about privacy, this is the paragraph that matters most:

“Today we already see that private messaging, ephemeral stories, and small groups are by far the fastest growing areas of online communication. There are a number of reasons for this. Many people prefer the intimacy of communicating one-on-one or with just a few friends. People are more cautious of having a permanent record of what they’ve shared. And we all expect to be able to do things like payments privately and securely.”

China’s WeChat is the model that Zuckerberg almost certainly has in mind. It has about a billion users; combines messaging and calling with apps, payments, communications, and commerce; and essentially functions as a proxy for the internet for its users—who spend well over an hour a day us

Read More

Continue Reading
Internet Security

Can predictive analytics be made safe for humans?

Massive-scale predictive analytics is a relatively new phenomenon, one that challenges both decades of law as well as consumer thinking about privacy. As a technology, it may well save thousands of lives in applications like predictive medicine, but if it isn’t used carefully, it may prevent thousands from getting loans, for instance, if an underwriting…


Massive-scale predictive analyticsis a relatively new phenomenon, one that challenges both decades of law as well as consumer thinking about privacy.

As a technology, it may well save thousands of lives in applications like predictive medicine, but if it isn’t used carefully, it may prevent thousands from getting loans, for instance, if an underwriting algorithm is biased against certain users.

I chatted with Dennis Hirsch a few weeks ago about the challenges posed by this new data economy. Hirsch is a professor of law at Ohio State and head of its Program on Data and Governance. He’s also affiliated with the university’s Risk Institute.

“Data ethics is the new form of risk mitigation for the algorithmic economy,” he said. In a post-Cambridge Analytica world, every company has to assess what data it has on its customers and mitigate the risk of harm. How to do that, though, is at the cutting edge of the new field of data governance, which investigates the processes and policies through which organizations manage their data.

You’re reading the Extra Crunch Daily. Like this newsletter?Subscribe for free to follow all of our discussions and debates.

“Traditional privacy regulation asks whether you gave someone notice and given them a choice,” he explains. That principle is the bedrock for Europe’s GDPR law, and for the patchwork of laws in the U.S. that protect privacy. It’s based around the simplistic idea that a datum — such as a customer’s address — shouldn’t be shared with, say, a marketer without that user’s knowledge. Privacy is about protecting the address book, so to speak.

The rise of “predictive analytics,” though, has completely demolished such privacy legislation. Predictive analytics is a fuzzy term, but essentially means interpreting raw data and drawing new conclusions through inference. This is the story of the famous Target data crisis, where the retailer recommended pregnancy-related goods to women who had certain patterns of purchases. As Charles Duhigg explained at the time:

Many shoppers purchase soap and cotton balls, but when someone suddenly starts buying lots of scent-free soap and extra-big bags of cotton balls, in addition to hand sanitizers and washcloths, it signals they could be getting close to

Read More

Continue Reading
Internet Security

Atrium, Justin Kan’s legal tech startup, launches a fintech and blockchain division

Atrium, the legal startup co-founded by Justin Kan of Twitch fame, is jumping into the blockchain space today. The company has raised plenty of money — including $65 million from a16z last September — so rather than an ICO or token sale, this is a consultancy business. Atrium uses machine learning to digitize legal documents and develop applications…


Atrium, the legal startup co-founded by Justin Kan of Twitch fame, is jumping into the blockchain space today.

The company has raised plenty of money — including $65 million from a16z last September — so rather than an ICO or token sale, this is a consultancy business. Atrium uses machine learning to digitize legal documents and develop applications for client use, and now it is officially applying that to fintech and blockchain businesses.

The division has been operating quietly for months and the scope of work that it covers includes the legality and regulatory concerns around tokens, but also business-focused areas including token utility, tokenomics and general blockchain tech.

“We have a bunch of clients wanting to do token offerings and looking into the legality,” Kan told TechCrunch in an interview. “A lot of our advisory work is around the token offering and how it operates.”

The commitment is such that the company is even accepting Bitcoin and Bitcoin Cash for payments through crypto processing service BitPay.

While the ICO market has quietened over the past year following huge valuation losses market-wide, up to 90 percent in some cases with many ICO tokens now effectively worthless, there’s a new antic

Read More

Continue Reading
iPhone

Highlights & transcript from Zuckerberg’s 20K-word ethics talk

Mark Zuckerberg says it might be right for Facebook to let people pay to not see ads, but that it would feel wrong to charge users for extra privacy controls. That’s just one of the fascinating philosophical views the CEO shared during the first of his public talks he’s promised as part of his 2019…


Mark Zuckerberg saysit might be right for Facebook to let people pay to not see ads, but that it would feel wrong to charge users for extra privacy controls. That’s just one of the fascinating philosophical views the CEO shared during the first of his public talks he’s promised as part of his 2019 personal challenge.

Talking to Harvard Law and computer science professor Jonathan Zittrain on the campus of the university he dropped out of, Zuckerberg managed to escape the 100-minute conversation with just a few gaffes. At one point he said “we definitely don’t want a society where there’s a camera in everyone’s living room watching the content of those conversations”. Zittrain swiftly reminded him that’s exactly what FacebookPortal is, and Zuckerberg tried to deflect by saying Portal’s recordings would be encrypted.

Later Zuckerberg mentioned “the ads, in a lot of places are not even that different from the organic content in terms of the quality of what people are being able to see” which is pretty sad and derisive assessment of the personal photos and status updates people share. And when he suggested crowdsourced fact-checking, Zittrain chimed in that this could become an avenue for “astroturfing” where mobs of users provide purposefully biased information to promote their interests, like a political group’s supporting voting that their opponents’ facts are lies. While sometimes avoiding hard stances on questions, Zuckerberg was otherwise relatively logical and coherent.

Policy And Cooperating With Governments

The CEO touched on his borderline content policy that quietly demotes posts that come close to breaking its policy against nudity, hate speech etc that otherwise are the most sensational and get the most distribution but don’t make people feel good. Zuckerberg noted some progress here, saying “a lot of the things that we’ve done in the last year were focused on that problem and it really improves the quality of the service and people appreciate that.”

This aligns with Zuckerberg contemplating Facebook’s role as a “data fiduciary” where rather than necessarily giving in to users’ urges or prioritizing its short-term share price, the company tries to do what’s in the best long-term interest of its communities. “There’s a hard balance here which is — I mean if you’re talking about what people want to want versus what they want– you know, often people’s revealed preferences of what they actually do shows a deeper sense of what they want than what they think they want to want” he said. Essentially, people might tap on clickbait even if it doesn’t make them feel good.

On working with governments, Zuckerberg explained how incentives weren’t always aligned, like when law enforcement is monitoring someone accidentally dropping clues about their crimes and collaborators. The government and society might benefit from that continued surveillance but Facebook might want to immediately suspend the account if it found out. “But as you build up the relationships and trust, you can get to that kind of a relationship where they can also flag for you, ‘Hey, this is where we’re at’”, implying Facebook might purposefully allow that person to keep incriminating themselves to assist the authorities.

But disagreements between governments can flare up, Zuckerberg notes that “we’ve had employees thrown in jail because we have gotten court orders that we have to turnover data that we wouldn’t probably anyway, but we can’t because it’s encrypted.” That’s likely a reference to the 2016 arrest of Facebook’s VP for Latin Amercia Diego Dzodan over WhatsApp’s encryption preventing the company from providing evidence for a drug case.

Decentralizing Facebook

The tradeoffs of encryption and decentralization were a central theme. He discussed how while many people fear how encryption could mask illegal or offensive activity, Facebook doesn’t have to peek at someone’s actual content to determine they’re violating policy. “One of the — I guess, somewhat surprising to me — findings of the last couple of years of working on content governance and enforcement is that it often is much more effective to identify fake accounts and bad actors upstream of them doing something bad by patterns of activity rather than looking at the content” Zuckerberg said.

With Facebook rapidly building out a blockchain team to potentially launch a cryptocurrency for fee-less payments or an identity layer for decentralized applications, Zittrain asked about the potential for letting users control which other apps they give their profile information to without Facebook as an intermediary.

SAN JOSE, CA – MAY 01: Facebook CEO Mark Zuckerberg (Photo by Justin Sullivan/Getty Images)

Zuckerberg stressed that at Facebook’s scale, moving to a less efficient distributed architecture would be extremely “computationally intense” though it might eventually be possible. Instead, he said “One of the things that I’ve been thinking about a lot is a use of blockchain that I am potentially interesting in– although I haven’t figured out a way to make this work out, is around authentication and bringing– and basically granting access to your information and to different services. So, basically, replacing the notion of what we have with Facebook Connect with something that’s fully distributed.” This might be attractive to developers who would know Facebook couldn’t cut them off from the users.

The problem is that if a developer was abusing users, Zuckerberg fears that “in a fully distributed system there would be no one who could cut off the developers’ access. So, the question is if you have a fully distributed system, it dramatically empowers individuals on the one hand, but it really raises the stakes and it gets to your questions around, well, what are the boundaries on consent and how people can really actually effectively know that they’re giving consent to an institution?”

No “Pay For Privacy”

But perhaps most novel and urgent were Zuckerberg’s comments on the secondary questions raised by where Facebook should let people pay to remove ads. “You start getting into a principle question which is ‘are we going to let people pay to have different controls on data use than other people?’ And my answer to that is a hard no.” Facebook has promised to always operate free version so everyone can have a voice. Yet some including myself have suggested that a premium ad-free subscription to Facebook could help ween it off maximizing data collection and engagement, though it might break Facebook’s revenue machine by pulling the most affluent and desired users out of the ad targeting pool.

“What I’m saying is on the data use, I don’t believe that that’s something that people should buy. I think the data principles that we have need to be uniformly available to everyone. That to me is a really important principle” Zuckerberg expands. “It’s, like, maybe you could have a conversation about whether you should be able to pay and not see ads. That doesn’t feel like a moral question to me. But the question of whether you can pay to have different privacy controls feels wrong.”

Back in May, Zuckerberg announced Facebook would build a Clear History button in 2018 that deletes all the web browsing data the social network has collected about you, but that data’s deep integration into the company’s systems has delayed the launch. Research suggests users don’t want the inconvenience of getting logged out of all their Facebook Connected services, though, they’d like to hide certain data from the company.

“Clear history is a prerequisite, I think, for being able to do anything like subscriptions. Because, like, partially what someone would want to do if they were going to really actually pay for a not ad supported version where their data wasn’t being used in a system like that, you would want to have a control so that Facebook didn’t have access or wasn’t using that data or associating it with your account. And as a principled matter, we are not going to just offer a control like that to people who pay.”

Of all the apologies, promises, and predictions Zuckerberg has made recently, this pledge might instill the most confidence. While some might think of Zuckerberg as a data tyrant out to absorb and exploit as much of our personal info as possible, there are at least lines he’s not willing to cross. Facebook could try to charge you for privacy, but it won’t. And given Facebook’s dominance in social networking and messaging plus Zuckerberg’s voting control of the company, a greedier man could make the internet much worse.

TRANSCRIPT – MARK ZUCKERBERG AT HARVARD / FIRST PERSONAL CHALLENGE 2019

Jonathan Zittrain:Very good. So, thank you, Mark, for coming to talk to me and to our students from the Techtopia program and from my “Internet and Society” course at Harvard Law School. We’re really pleased to have a chance to talk about any number of issues and we should just dive right in. So, privacy, autonomy, and information fiduciaries.

Mark Zuckerberg:All right!

Jonathan Zittrain:Love to talk about that.

Mark Zuckerberg:Yeah! I read your piece in The New York Times.

Jonathan Zittrain:The one with the headline that said, “Mark Zuckerberg can fix this mess”?

Mark Zuckerberg:Yeah.

Jonathan Zittrain:Yeah.

Mark Zuckerberg:Although that was last year.

Jonathan Zittrain:That’s true! Are you suggesting it’s all fixed?

Mark Zuckerberg:No. No.

Jonathan Zittrain:Okay, good. So–

Jonathan Zittrain:I’m suggesting that I’m curious whether you still think that we can fix this mess?

Jonathan Zittrain:Ah!

Jonathan Zittrain:I hope–

Jonathan Zittrain:“Hope springs eternal”–

Mark Zuckerberg:Yeah, there you go.

Jonathan Zittrain:–is my motto. So, all right, let me give a quick characterization of this idea that the coinage and the scaffolding for it is from my colleague, Jack Balkin, at Yale. And the two of us have been developing it out further. There are a standard number of privacy questions with which you might have some familiarity, having to do with people conveying information that they know they’re conveying or they’re not so sure they are, but “mouse droppings” as we used to call them when they run in the rafters of the Internet and leave traces. And then the standard way of talking about that is you want to make sure that that stuff doesn’t go where you don’t want it to go. And we call that “informational privacy”. We don’t want people to know stuff that we want maybe our friends only to know. And on a place like Facebook, you’re supposed to be able to tweak your settings and say, “Give them to this and not to that.” But there’s also ways in which stuff that we share with consent could still sort of be used against us and it feels like, “Well, you consented,” may not end the discussion. And the analogy that my colleague Jack brought to bear was one of a doctor and a patient or a lawyer and a client or– sometimes in America, but not always– a financial advisor and a client that says that those professionals have certain expertise, they get trusted with all sorts of sensitive information from their clients and patients and, so, they have an extra duty to act in the interests of those clients even if their own interests conflict. And, so, maybe just one quick hypo to get us started. I wrote a piece in 2014, that maybe you read, that was a hypothetical about elections in which it said, “Just hypothetically, imagine that Facebook had a view about which candidate should win and they reminded people likely to vote for the favored candidate that it was Election Day,” and to others they simply sent a cat photo. Would that be wrong? 
And I find– I have no idea if it’s illegal; it does seem wrong to me and it might be that the fiduciary approach captures what makes it wrong.

Mark Zuckerberg:All right. So, I think we could probably spend the whole next hour just talking about that!

Mark Zuckerberg:So, I read your op-ed and I also read Balkin’s blogpost on information fiduciaries. And I’ve had a conversation with him, too.

Jonathan Zittrain:Great.

Mark Zuckerberg:And the– at first blush, kind of reading through this, my reaction is there’s a lot here that makes sense. Right? The idea of us having a fiduciary relationship with the people who use our services is kind of intuitively– it’s how we think about how we’re building what we’re building. So, reading through this, it’s like, all right, you know, a lot of people seem to have this mistaken notion that when we’re putting together news feed and doing ranking that we have a team of people who are focused on maximizing the time that people spend, but that’s not the goal that we give them. We tell people on the team, “Produce the service–” that we think is going to be the highest quality that– we try to ground it in kind of getting people to come in and tell us, right, of the content that we could potentially show what is going to be– they tell us what they want to see, then we build models that kind of– that can predict that, and build that service.

Jonathan Zittrain:And, by the way, was that always the case or–

Mark Zuckerberg:No.

Jonathan Zittrain:–was that a place you got to through some course adjustments?

Mark Zuckerberg:Through course adjustments. I mean, you start off using simpler signals like what people are clicking on in feed, but then you pretty quickly learn, “Hey, that gets you to local optimum,” right? Where if you’re focusing on what people click on and predicting what people click on, then you select for click bait. Right? So, pretty quickly you realize from real feedback, from real people, that’s not actually what people want. You’re not going to build the best service by doing that. So, you bring in people and actually have these panels of– we call it “getting to ground truth”– of you show people all the candidates for what can be shown to them and you have people say, “What’s the most meaningful thing that I wish that this system were showing us? So, all this is kind of a way of saying that our own self image of ourselves and what we’re doing is that we’re acting as fiduciaries and trying to build the best services for people. Where I think that this ends up getting interesting is then the question of who gets to decide in the legal sense or the policy sense of what’s in people’s best interest? Right? So, we come in every day and think, “Hey, we’re building a service where we’re ranking newsfeed trying to show people the most relevant content with an assumption that’s backed by data; that, in general, people want us to show them the most relevant content. But, at some level, you could ask the question which is “Who gets to decide that ranking newsfeed or showing relevant ads?” or any of the other things that we choose to work on are actually in people’s interest. And we’re doing the best that we can to try to build the services [ph?] that we think are the best. At the end of the day, a lot of this is grounded in “People choose to use it.” Right? Because, clearly, they’re getting some value from it. But then there are all these questions like you say about, you have– about where people can effectively give consent and not.

Jonathan Zittrain:Yes.

Mark Zuckerberg:So, I think that there’s a lot of interesting questions in this to unpack about how you’d implement a model like that. But, at a high level I think, you know, one of the things that I think about in terms of we’re running this big company; it’s important in society that people trust the institutions of society. Clearly, I think we’re in a position now where people rightly have a lot of questions about big internet companies, Facebook in particular, and I do think getting to a point there there’s the right regulation and rules in place just provides a kind of societal guardrail framework where people can have confidence that, okay, these companies are operating within a framework that we’ve all agreed. That’s better than them just doing whatever they want. And I think that that would give people confidence. So, figuring out what that framework is, I think, is a really important thing. And I’m sure we’ll talk about that as it relates–

Jonathan Zittrain:Yes.

Mark Zuckerberg:–to a lot of the content areas today. But getting to that question of how do you– “Who determines what’s in people’s best interest, if not people themselves?”Jonathan Zittrain:Yes.

Mark Zuckerberg:–is a really interesting question.

Jonathan Zittrain:Yes, so, we should surely talk about that. So, on our agenda is the “Who decides?” question.

Mark Zuckerberg:All right.

Jonathan Zittrain:Other agenda items include– just as you say, the fiduciary framework sounds nice to you– doctors, patients, Facebook users. And I hear you saying that’s pretty much where you’re wanting to end up anyway. There are some interesting questions about what people want, versus what they want to want.

Mark Zuckerberg:Yeah.

Jonathan Zittrain:People will say “On January 1st, what I want–” New Year’s resolution– “is a gym membership.” And then on January 2nd, they don’t want to go to the gym. They want to want to go to the gym, but they never quite make it. And then, of course, a business model of pay for the whole year ahead of time and they know you’ll never turn up develops around that. And I guess a specific area to delve into for a moment on that might be on the advertising side of things, maybe the dichotomy between personalization and does it ever going into exploitation? Now, there might be stuff– I know Facebook, for example, bans payday loans as best it can.

Mark Zuckerberg:Mm-hm.

Jonathan Zittrain:That’s just a substantive area that it’s like, “All right, we don’t want to do that.”

Mark Zuckerberg:Mm-hm.

Jonathan Zittrain:But when we think about good personalization so that Facebook knows I have a dog and not a cat, and a targeter can then offer me dog food and not cat food. How about, if not now, a future day in which an advertising platform can offer to an ad targeter some sense of “I just lost my pet, I’m really upset, I’m ready to make some snap decisions that I might regret later, but when I make them–“

Mark Zuckerberg:Mm-hm.

Jonathan Zittrain:“–I’m going to make them.” So, this is the perfect time to tee up

Mark Zuckerberg:Yeah.

Jonathan Zittrain:–a Cubic Zirconia or whatever the thing is that–

Mark Zuckerberg:Mm-hm.

Jonathan Zittrain:That seems to me a fiduciary approach would say, ideally– how we get there I don’t know, but ideally we wouldn’t permit that kind of approach to somebody using the information we’ve gleaned from them to know they’re in a tough spot–

Mark Zuckerberg:Yeah.

Jonathan Zittrain:–and then to exploit them. But I don’t know. I don’t know how you would think about something like that. Could you write an algorithm to detect something like that?

Mark Zuckerberg:Well, I think one of the key principles is that we’re trying to run this company for the long term. And I think that people think that a lot of things that– if you were just trying to optimize the profits for next quarter or something like that, you might want to do things that people might like in the near term, but over the long term will come to resent. But if you actually care about building a community and achieving this mission and building the company for the long term, I think you’re just much more aligned than people often think companies are. And it gets back to the idea before, where I think our self image is largely acting as– in this kind of fiduciary relationship as you’re saying– and across– we could probably go through a lot of different examples. I mean, we don’t want to show people content that they’re going to click on and engage with, but then feel like they wasted their time afterwards. And we don’t want to show them things that they’re going to make a decision based off of and then regret later. I mean, there’s a hard balance here, which is– I mean, if you’re talking about what people want to want versus what they want– you know, often people’s revealed preferences of what they actually do show a deeper sense of what they want than what they think they want to want. So, I think there’s a question between when something is exploitative versus when something is real, but isn’t what you would say that you want.

Jonathan Zittrain:Yes.

Mark Zuckerberg:And that’s a really hard thing to get at.

Jonathan Zittrain:Yes.

Mark Zuckerberg:But on a lot of these cases my experience of running the company is that you start off building a system, you have relatively unsophisticated signals to start, and you build up increasingly complex models over time that try to take into account more of what people care about. And there are all these examples that we can go through. I think probably newsfeed and ads are probably the two most complex ranking examples–

Jonathan Zittrain:Yes.

Mark Zuckerberg:–that we have. But it’s– like we were talking about a second ago, when we started off with the systems, I mean, just start with newsfeeds– but you could do this on ads, too– you know, the most naïve signals, right, are what people click on or what people “Like”. But then you just very quickly realize that that doesn’t– it approximates something, but it’s a very crude approximation of the ground truth of what people actually care about. So, what you really want to get to is as much as possible getting real people to look at the real candidates for content and tell you in a multi-dimensional way what matters to them and try to build systems that model that. And then you want to be kind of conservative on preventing downside. So, your example of the payday loans– and when we’ve talked about this in the past, your– you’ve put the question to me of “How do you know when a payday loan is going to be exploitative?” right? “If you’re targeting someone who is in a bad situation?” And our answer is, “Well, we don’t really know when it’s going to be exploitative, but we think that the whole category potentially has a massive risk of that, so we just ban it–

Jonathan Zittrain:Right. Which makes it an easy case.

Mark Zuckerberg:Yes. And I think that the harder cases are when there’s significant upside and significant downside and you want to weigh both of them. So, I mean, for example, once we started putting together a really big effort on preventing election interference, one of the initial ideas that came up was “Why don’t we just ban all ads that relate to anything that is political?” And then you pretty quickly get into, all right, well, what’s a political ad? The classic legal definition is things that are around elections and candidates, but that’s not actually what Russia and other folks were primarily doing. Right? It’s– you know, a lot of the issues that we’ve seen are around issue ads, right, and basically sowing division on what are social issues. So, all right, I don’t think you’re going to get in the way of people’s speech and ability to promote and do advocacy on issues that they care about. So, then the question is “All right, well, then what’s the right balance?” of how do you make sure that you’re providing the right level of controls, that people who aren’t supposed to be participating in these debates aren’t, or that at least you’re providing the right transparency. But I think we’ve veered a little bit from the original question–

Jonathan Zittrain:Yes.

Mark Zuckerberg:–but the– but, yeah. So, let’s get back to where you were–

Jonathan Zittrain:Well, here’s– and this is a way of maybe moving it forward, which is: A platform as complete as Facebook is these days offers lots of opportunities to shape what people see and possibly to help them with those nudges– that it’s time to go to the gym, or to keep them from falling into the depredations of the payday loan. And it is a question of: so long as the platform has the ability to do it, does it now have an ethical obligation to do it, to help people achieve the good life?

Mark Zuckerberg:Mm-hm.

Jonathan Zittrain:And I worry that it is too great a burden for any company to bear to have to figure out, say, if not the perfect, the most reasonable newsfeed for every one of the– how many? Two and a half billion active users? Something like that.

Mark Zuckerberg:Yeah. On that order.

Jonathan Zittrain:All the time. And there might be some ways that start a little bit to get into the engineering of the thing that would say, “Okay, with all hindsight, are there ways to architect this so that the stakes aren’t as high, aren’t as focused on just, ‘Gosh, is Facebook doing this right?’” It’s as if there was only one newspaper in the whole world– or one or two– and it’s like, “Well, then what The New York Times chooses to put on its home page, if it were the only newspaper, would have outsize importance.”

Mark Zuckerberg:Mm-hm.

Jonathan Zittrain:So, just as a technical matter, a number of the students in this room had a chance to hear from Tim Berners-Lee, inventor of the World Wide Web, and he has a new idea for something called “Solid”. I don’t know if you’ve heard of Solid. It’s a protocol more than it is a product. So, there’s no car to move off the lot today. But its idea is allowing people to have the data that they generate as they motor around the web end up in their own kind of data locker. Now, for somebody like Tim, it might mean literally in a locker under his desk, and he could wake up in the middle of the night and see where his data is. For others, it might mean a rack somewhere, guarded perhaps by a fiduciary who’s looking out for them, the way that we put money in a bank and then we can sleep at night knowing the bankers are– this is maybe not the best analogy in 2019, but watching.

Mark Zuckerberg:We’ll get there.

Jonathan Zittrain:We’ll get there. But Solid says if you did that, people– or their helpful proxies– would then be able to say, “All right, Facebook is coming along. It wants the following data from me, including the data that it has generated about me as I use it, but stored back in my locker, and it kind of has to come back to my well to draw water each time. And that way if I want to switch to Schmacebook or something, it’s still in my well and I can just immediately grant permission to Schmacebook to see it, and I don’t have to do a kind of data slurp and then re-upload it.” It’s a fully distributed way of thinking about data. And I’m curious, from an engineering perspective, does this seem doable with something of the size and the number of spinning wheels that Facebook has, and does it seem like a–

Mark Zuckerberg:Yeah–

Jonathan Zittrain:–and I’m curious your reaction to an idea like that.

Mark Zuckerberg:So, I think it’s quite interesting. Certainly, the level of computation that Facebook is doing and all the services that we’re building is really intense to do in a distributed way. I mean, as a basic model, I think the data center capacity we’re building out over the next five years– our plan for what we think we need– is on the order of all of what AWS and Google Cloud are doing for supporting all of their customers. So, okay, so, this is a relatively computationally intense thing.

Over time you assume you’ll get more co
