
Do I need an antivirus for iPhone?

Many people have been led to believe that every computing device needs antivirus software, regardless of the operating system it runs. Apple’s mobile devices are powered by iOS, one of the most secure mobile operating systems available. Even though security firms offer “antivirus for iPhone” products to safeguard your device, do you really need one?
 

How is iOS different from other mobile platforms like Android?

Unlike Apple, Google gives its users far more flexibility when it comes to installing applications on their devices. Although an Android device blocks installation from unknown sources by default, that restriction can be lifted in seconds through the device’s settings. While this gives users and developers much more freedom to install and build software, it also leaves an opening for cybercriminals to exploit.

The same is not true of iOS. Apple limits its iPhone, iPod touch and iPad users almost entirely to the App Store. Hackers have certainly found ways to gain root access to iOS, a process known as jailbreaking that removes the restrictions Apple imposes, but users are discouraged from doing so because it voids the device’s warranty. In addition, Apple updates iOS regularly to patch the exploits that make jailbreaking possible.
 

Apple doesn’t allow antivirus software to function the way it normally would:

According to Rich Mogull, analyst and CEO of the security firm Securosis, security software is designed to latch onto hooks deep inside the operating system, which lets it monitor for threats. That same deep access, however, makes the security software itself a potential target: all a cybercriminal needs is a loophole in a sloppily designed antivirus product. Apple therefore designed iOS so that no third-party software can attach to those hooks.

On the other hand, Eugene Kaspersky, founder and CEO of the security firm Kaspersky Lab, has warned Apple that sooner or later iOS will become a target of malicious attacks. When that happens, it could seriously damage the company’s reputation and hand an advantage to rival mobile platforms.

For the time being, though, iOS’s security model of keeping a strong wall between apps and the operating system seems to be working just fine.
 

Then what about the “antivirus” apps for iPhone that are available?

Apple promises its users that their devices are well secured, and it has officially banned antivirus apps from the App Store, calling them scams aimed at extracting money from uninformed owners. At the same time, a search for “antivirus” in the store still returns plenty of results. These apps, however, are designed to protect privacy and guard against theft rather than to defend against system-level threats, whatever their marketing claims. The last thing home users want is to have their iPhone or iPad stolen or their data mishandled.

Apps like Find My iPhone, Avira Mobile Security, McAfee Security and Norton Mobile Security take care of exactly that: they let owners locate or wipe a device that falls into the wrong hands. Bitdefender Mobile Security works similarly, and also alerts users if their accounts are ever breached. Citrix Secure Web claims to protect users from malicious websites and phishing attacks; while that sounds impressive, iOS’s default browser, Safari, is perfectly capable of handling that itself. Notice that these firms avoid the word ‘antivirus’ in their iOS app names, unlike their desktop counterparts? You guessed it: these apps were never designed to protect phones and tablets from viruses, but from data theft!
 

How do I protect my device if it has been jailbroken?

While jailbreaking shows that iOS is not entirely foolproof, the system software is still remarkably secure and stable. Unlike on Android, malware has not yet found its way into the operating system through software from third-party sources at any meaningful scale. However, since Apple takes no responsibility for a jailbroken device, taking a few precautions is a good idea.

Say No to Piracy:
It is always better to stay away from pirated software. As has been the case on Android, developers of third-party apps can be paid to generate traffic through their software. Although third-party software is less likely to affect the performance of an iOS device than an Android one, prevention is still better than cure.

Secure your jailbroken iDevice by changing the root password:
To date, two exploits have been discovered that target jailbroken devices, and both try to log in to the device’s administrator account, popularly known as ‘root’. Securing it by changing the root password is relatively easy, and a quick web search will turn up plenty of guides. Since iOS disables root login by default, these exploits only work on a jailbroken device.
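As a rough sketch, changing the password usually looks like the session below, run over SSH from a computer on the same network. The IP address is a placeholder for your device’s Wi-Fi address, and OpenSSH must already be installed on the jailbroken device (for example, via Cydia); the well-known factory default root password on iOS is “alpine”.

```shell
# Connect to the jailbroken device as root.
# 192.168.1.10 is a placeholder; use your device's Wi-Fi IP address.
ssh root@192.168.1.10   # when prompted, the default password is "alpine"

# On the device, set a new root password.
passwd

# The "mobile" user account shares the same default password; change it too.
passwd mobile

# Disconnect.
exit
```

Any exploit that relies on the default “alpine” password will fail once both accounts have strong, unique passwords.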
 

Keep your device updated:

You may have heard about the flaw in the iPhone’s Wi-Fi chip, or about iCloud accounts being used to hold devices hostage. Apple’s software does have security flaws from time to time, but the company tracks them closely and ships the fixes in iOS updates. That said, there is no need to install an update the moment it appears, as some updates are initially less stable than one would hope.
 

Conclusion

In summary, a true antivirus for iPhone is neither needed nor, realistically, available. The security apps that do exist for the iPhone are still useful, however: being able to track a lost device or wipe it remotely is essential for most iPhone users. The biggest security threat to iPhone users isn’t viruses or malware; it is the general internet security threats covered in our article – What internet security threats to look out for in 2018?

iPhone

Highlights & transcript from Zuckerberg’s 20K-word ethics talk

Mark Zuckerberg says it might be right for Facebook to let people pay to not see ads, but that it would feel wrong to charge users for extra privacy controls. That’s just one of the fascinating philosophical views the CEO shared during the first of his public talks he’s promised as part of his 2019…


Mark Zuckerberg saysit might be right for Facebook to let people pay to not see ads, but that it would feel wrong to charge users for extra privacy controls. That’s just one of the fascinating philosophical views the CEO shared during the first of his public talks he’s promised as part of his 2019 personal challenge.

Talking to Harvard Law and computer science professor Jonathan Zittrain on the campus of the university he dropped out of, Zuckerberg managed to escape the 100-minute conversation with just a few gaffes. At one point he said “we definitely don’t want a society where there’s a camera in everyone’s living room watching the content of those conversations”. Zittrain swiftly reminded him that’s exactly what FacebookPortal is, and Zuckerberg tried to deflect by saying Portal’s recordings would be encrypted.

Later Zuckerberg mentioned “the ads, in a lot of places are not even that different from the organic content in terms of the quality of what people are being able to see” which is pretty sad and derisive assessment of the personal photos and status updates people share. And when he suggested crowdsourced fact-checking, Zittrain chimed in that this could become an avenue for “astroturfing” where mobs of users provide purposefully biased information to promote their interests, like a political group’s supporting voting that their opponents’ facts are lies. While sometimes avoiding hard stances on questions, Zuckerberg was otherwise relatively logical and coherent.

Policy And Cooperating With Governments

The CEO touched on his borderline content policy that quietly demotes posts that come close to breaking its policy against nudity, hate speech etc that otherwise are the most sensational and get the most distribution but don’t make people feel good. Zuckerberg noted some progress here, saying “a lot of the things that we’ve done in the last year were focused on that problem and it really improves the quality of the service and people appreciate that.”

This aligns with Zuckerberg contemplating Facebook’s role as a “data fiduciary” where rather than necessarily giving in to users’ urges or prioritizing its short-term share price, the company tries to do what’s in the best long-term interest of its communities. “There’s a hard balance here which is — I mean if you’re talking about what people want to want versus what they want– you know, often people’s revealed preferences of what they actually do shows a deeper sense of what they want than what they think they want to want” he said. Essentially, people might tap on clickbait even if it doesn’t make them feel good.

On working with governments, Zuckerberg explained how incentives weren’t always aligned, like when law enforcement is monitoring someone accidentally dropping clues about their crimes and collaborators. The government and society might benefit from that continued surveillance but Facebook might want to immediately suspend the account if it found out. “But as you build up the relationships and trust, you can get to that kind of a relationship where they can also flag for you, ‘Hey, this is where we’re at’”, implying Facebook might purposefully allow that person to keep incriminating themselves to assist the authorities.

But disagreements between governments can flare up, Zuckerberg notes that “we’ve had employees thrown in jail because we have gotten court orders that we have to turnover data that we wouldn’t probably anyway, but we can’t because it’s encrypted.” That’s likely a reference to the 2016 arrest of Facebook’s VP for Latin Amercia Diego Dzodan over WhatsApp’s encryption preventing the company from providing evidence for a drug case.

Decentralizing Facebook

The tradeoffs of encryption and decentralization were a central theme. He discussed how while many people fear how encryption could mask illegal or offensive activity, Facebook doesn’t have to peek at someone’s actual content to determine they’re violating policy. “One of the — I guess, somewhat surprising to me — findings of the last couple of years of working on content governance and enforcement is that it often is much more effective to identify fake accounts and bad actors upstream of them doing something bad by patterns of activity rather than looking at the content” Zuckerberg said.

With Facebook rapidly building out a blockchain team to potentially launch a cryptocurrency for fee-less payments or an identity layer for decentralized applications, Zittrain asked about the potential for letting users control which other apps they give their profile information to without Facebook as an intermediary.

SAN JOSE, CA – MAY 01: Facebook CEO Mark Zuckerberg (Photo by Justin Sullivan/Getty Images)

Zuckerberg stressed that at Facebook’s scale, moving to a less efficient distributed architecture would be extremely “computationally intense” though it might eventually be possible. Instead, he said “One of the things that I’ve been thinking about a lot is a use of blockchain that I am potentially interesting in– although I haven’t figured out a way to make this work out, is around authentication and bringing– and basically granting access to your information and to different services. So, basically, replacing the notion of what we have with Facebook Connect with something that’s fully distributed.” This might be attractive to developers who would know Facebook couldn’t cut them off from the users.

The problem is that if a developer was abusing users, Zuckerberg fears that “in a fully distributed system there would be no one who could cut off the developers’ access. So, the question is if you have a fully distributed system, it dramatically empowers individuals on the one hand, but it really raises the stakes and it gets to your questions around, well, what are the boundaries on consent and how people can really actually effectively know that they’re giving consent to an institution?”

No “Pay For Privacy”

But perhaps most novel and urgent were Zuckerberg’s comments on the secondary questions raised by where Facebook should let people pay to remove ads. “You start getting into a principle question which is ‘are we going to let people pay to have different controls on data use than other people?’ And my answer to that is a hard no.” Facebook has promised to always operate free version so everyone can have a voice. Yet some including myself have suggested that a premium ad-free subscription to Facebook could help ween it off maximizing data collection and engagement, though it might break Facebook’s revenue machine by pulling the most affluent and desired users out of the ad targeting pool.

“What I’m saying is on the data use, I don’t believe that that’s something that people should buy. I think the data principles that we have need to be uniformly available to everyone. That to me is a really important principle” Zuckerberg expands. “It’s, like, maybe you could have a conversation about whether you should be able to pay and not see ads. That doesn’t feel like a moral question to me. But the question of whether you can pay to have different privacy controls feels wrong.”

Back in May, Zuckerberg announced Facebook would build a Clear History button in 2018 that deletes all the web browsing data the social network has collected about you, but that data’s deep integration into the company’s systems has delayed the launch. Research suggests users don’t want the inconvenience of getting logged out of all their Facebook Connected services, though, they’d like to hide certain data from the company.

“Clear history is a prerequisite, I think, for being able to do anything like subscriptions. Because, like, partially what someone would want to do if they were going to really actually pay for a not ad supported version where their data wasn’t being used in a system like that, you would want to have a control so that Facebook didn’t have access or wasn’t using that data or associating it with your account. And as a principled matter, we are not going to just offer a control like that to people who pay.”

Of all the apologies, promises, and predictions Zuckerberg has made recently, this pledge might instill the most confidence. While some might think of Zuckerberg as a data tyrant out to absorb and exploit as much of our personal info as possible, there are at least lines he’s not willing to cross. Facebook could try to charge you for privacy, but it won’t. And given Facebook’s dominance in social networking and messaging plus Zuckerberg’s voting control of the company, a greedier man could make the internet much worse.

TRANSCRIPT – MARK ZUCKERBERG AT HARVARD / FIRST PERSONAL CHALLENGE 2019

Jonathan Zittrain:Very good. So, thank you, Mark, for coming to talk to me and to our students from the Techtopia program and from my “Internet and Society” course at Harvard Law School. We’re really pleased to have a chance to talk about any number of issues and we should just dive right in. So, privacy, autonomy, and information fiduciaries.

Mark Zuckerberg:All right!

Jonathan Zittrain:Love to talk about that.

Mark Zuckerberg:Yeah! I read your piece in The New York Times.

Jonathan Zittrain:The one with the headline that said, “Mark Zuckerberg can fix this mess”?

Mark Zuckerberg:Yeah.

Jonathan Zittrain:Yeah.

Mark Zuckerberg:Although that was last year.

Jonathan Zittrain:That’s true! Are you suggesting it’s all fixed?

Mark Zuckerberg:No. No.

Jonathan Zittrain:Okay, good. So–

Jonathan Zittrain:I’m suggesting that I’m curious whether you still think that we can fix this mess?

Jonathan Zittrain:Ah!

Jonathan Zittrain:I hope–

Jonathan Zittrain:“Hope springs eternal”–

Mark Zuckerberg:Yeah, there you go.

Jonathan Zittrain:–is my motto. So, all right, let me give a quick characterization of this idea that the coinage and the scaffolding for it is from my colleague, Jack Balkin, at Yale. And the two of us have been developing it out further. There are a standard number of privacy questions with which you might have some familiarity, having to do with people conveying information that they know they’re conveying or they’re not so sure they are, but “mouse droppings” as we used to call them when they run in the rafters of the Internet and leave traces. And then the standard way of talking about that is you want to make sure that that stuff doesn’t go where you don’t want it to go. And we call that “informational privacy”. We don’t want people to know stuff that we want maybe our friends only to know. And on a place like Facebook, you’re supposed to be able to tweak your settings and say, “Give them to this and not to that.” But there’s also ways in which stuff that we share with consent could still sort of be used against us and it feels like, “Well, you consented,” may not end the discussion. And the analogy that my colleague Jack brought to bear was one of a doctor and a patient or a lawyer and a client or– sometimes in America, but not always– a financial advisor and a client that says that those professionals have certain expertise, they get trusted with all sorts of sensitive information from their clients and patients and, so, they have an extra duty to act in the interests of those clients even if their own interests conflict. And, so, maybe just one quick hypo to get us started. I wrote a piece in 2014, that maybe you read, that was a hypothetical about elections in which it said, “Just hypothetically, imagine that Facebook had a view about which candidate should win and they reminded people likely to vote for the favored candidate that it was Election Day,” and to others they simply sent a cat photo. Would that be wrong? 
And I find– I have no idea if it’s illegal; it does seem wrong to me and it might be that the fiduciary approach captures what makes it wrong.

Mark Zuckerberg:All right. So, I think we could probably spend the whole next hour just talking about that!

Mark Zuckerberg:So, I read your op-ed and I also read Balkin’s blogpost on information fiduciaries. And I’ve had a conversation with him, too.

Jonathan Zittrain:Great.

Mark Zuckerberg:And the– at first blush, kind of reading through this, my reaction is there’s a lot here that makes sense. Right? The idea of us having a fiduciary relationship with the people who use our services is kind of intuitively– it’s how we think about how we’re building what we’re building. So, reading through this, it’s like, all right, you know, a lot of people seem to have this mistaken notion that when we’re putting together news feed and doing ranking that we have a team of people who are focused on maximizing the time that people spend, but that’s not the goal that we give them. We tell people on the team, “Produce the service–” that we think is going to be the highest quality that– we try to ground it in kind of getting people to come in and tell us, right, of the content that we could potentially show what is going to be– they tell us what they want to see, then we build models that kind of– that can predict that, and build that service.

Jonathan Zittrain:And, by the way, was that always the case or–

Mark Zuckerberg:No.

Jonathan Zittrain:–was that a place you got to through some course adjustments?

Mark Zuckerberg:Through course adjustments. I mean, you start off using simpler signals like what people are clicking on in feed, but then you pretty quickly learn, “Hey, that gets you to local optimum,” right? Where if you’re focusing on what people click on and predicting what people click on, then you select for click bait. Right? So, pretty quickly you realize from real feedback, from real people, that’s not actually what people want. You’re not going to build the best service by doing that. So, you bring in people and actually have these panels of– we call it “getting to ground truth”– of you show people all the candidates for what can be shown to them and you have people say, “What’s the most meaningful thing that I wish that this system were showing us? So, all this is kind of a way of saying that our own self image of ourselves and what we’re doing is that we’re acting as fiduciaries and trying to build the best services for people. Where I think that this ends up getting interesting is then the question of who gets to decide in the legal sense or the policy sense of what’s in people’s best interest? Right? So, we come in every day and think, “Hey, we’re building a service where we’re ranking newsfeed trying to show people the most relevant content with an assumption that’s backed by data; that, in general, people want us to show them the most relevant content. But, at some level, you could ask the question which is “Who gets to decide that ranking newsfeed or showing relevant ads?” or any of the other things that we choose to work on are actually in people’s interest. And we’re doing the best that we can to try to build the services [ph?] that we think are the best. At the end of the day, a lot of this is grounded in “People choose to use it.” Right? Because, clearly, they’re getting some value from it. But then there are all these questions like you say about, you have– about where people can effectively give consent and not.

Jonathan Zittrain:Yes.

Mark Zuckerberg:So, I think that there’s a lot of interesting questions in this to unpack about how you’d implement a model like that. But, at a high level I think, you know, one of the things that I think about in terms of we’re running this big company; it’s important in society that people trust the institutions of society. Clearly, I think we’re in a position now where people rightly have a lot of questions about big internet companies, Facebook in particular, and I do think getting to a point there there’s the right regulation and rules in place just provides a kind of societal guardrail framework where people can have confidence that, okay, these companies are operating within a framework that we’ve all agreed. That’s better than them just doing whatever they want. And I think that that would give people confidence. So, figuring out what that framework is, I think, is a really important thing. And I’m sure we’ll talk about that as it relates–

Jonathan Zittrain:Yes.

Mark Zuckerberg:–to a lot of the content areas today. But getting to that question of how do you– “Who determines what’s in people’s best interest, if not people themselves?”Jonathan Zittrain:Yes.

Mark Zuckerberg:–is a really interesting question.

Jonathan Zittrain:Yes, so, we should surely talk about that. So, on our agenda is the “Who decides?” question.

Mark Zuckerberg:All right.

Jonathan Zittrain:Other agenda items include– just as you say, the fiduciary framework sounds nice to you– doctors, patients, Facebook users. And I hear you saying that’s pretty much where you’re wanting to end up anyway. There are some interesting questions about what people want, versus what they want to want.

Mark Zuckerberg:Yeah.

Jonathan Zittrain:People will say “On January 1st, what I want–” New Year’s resolution– “is a gym membership.” And then on January 2nd, they don’t want to go to the gym. They want to want to go to the gym, but they never quite make it. And then, of course, a business model of pay for the whole year ahead of time and they know you’ll never turn up develops around that. And I guess a specific area to delve into for a moment on that might be on the advertising side of things, maybe the dichotomy between personalization and does it ever going into exploitation? Now, there might be stuff– I know Facebook, for example, bans payday loans as best it can.

Mark Zuckerberg:Mm-hm.

Jonathan Zittrain:That’s just a substantive area that it’s like, “All right, we don’t want to do that.”

Mark Zuckerberg:Mm-hm.

Jonathan Zittrain:But when we think about good personalization so that Facebook knows I have a dog and not a cat, and a targeter can then offer me dog food and not cat food. How about, if not now, a future day in which an advertising platform can offer to an ad targeter some sense of “I just lost my pet, I’m really upset, I’m ready to make some snap decisions that I might regret later, but when I make them–“

Mark Zuckerberg:Mm-hm.

Jonathan Zittrain:“–I’m going to make them.” So, this is the perfect time to tee up

Mark Zuckerberg:Yeah.

Jonathan Zittrain:–a Cubic Zirconia or whatever the thing is that– .

Mark Zuckerberg:Mm-hm.

Jonathan Zittrain:That seems to me a fiduciary approach would say, ideally– how we get there I don’t know, but ideally we wouldn’t permit that kind of approach to somebody using the information we’ve gleaned from them to know they’re in a tough spot–

Mark Zuckerberg:Yeah.

Jonathan Zittrain:–and then to exploit them. But I don’t know. I don’t know how you would think about something like that. Could you write an algorithm to detect something like that?

Mark Zuckerberg:Well, I think one of the key principles is that we’re trying to run this company for the long term. And I think that people think that a lot of things that– if you were just trying to optimize the profits for next quarter or something like that, you might want to do things that people might like in the near term, but over the long term will come to resent. But if you actually care about building a community and achieving this mission and building the company for the long term, I think you’re just much more aligned than people often think companies are. And it gets back to the idea before, where I think our self image is largely acting as– in this kind of fiduciary relationship as you’re saying– and across– we could probably go through a lot of different examples. I mean, we don’t want to show people content that they’re going to click on and engage with, but then feel like they wasted their time afterwards. Where we don’t want to show them things that they’re going to make a decision based off of that and then regret later. I mean, there’s a hard balance here which is– I mean if you’re talking about what people want to want versus what they want– you know, often people’s revealed preferences of what they actually do shows a deeper sense of what they want than what they think they want to want. So, I think there’s a question between when something is exploitative versus when something is real, but isn’t what you would say that you want.

Jonathan Zittrain:Yes.

Mark Zuckerberg:And that’s a really hard thing to get at.

Jonathan Zittrain:Yes.

Mark Zuckerberg: But on a lot of these cases, my experience of running the company is that you start off building a system with relatively unsophisticated signals to start, and you build up increasingly complex models over time that try to take into account more of what people care about. And there are all these examples that we can go through. I think newsfeed and ads are probably the two most complex ranking examples–

Jonathan Zittrain: Yes.

Mark Zuckerberg: –that we have. But like we were talking about a second ago, when we started off with these systems– I mean, just start with newsfeed, but you could do this on ads, too– the most naïve signals, right, are what people click on or what people “Like”. But then you just very quickly realize that that approximates something, but it’s a very crude approximation of the ground truth of what people actually care about. So, what you really want to get to is, as much as possible, getting real people to look at the real candidates for content and tell you in a multi-dimensional way what matters to them, and try to build systems that model that. And then you want to be kind of conservative on preventing downside. So, your example of the payday loans– when we’ve talked about this in the past, you’ve put the question to me of “How do you know when a payday loan is going to be exploitative, if you’re targeting someone who is in a bad situation?” And our answer is, “Well, we don’t really know when it’s going to be exploitative, but we think that the whole category potentially has a massive risk of that, so we just ban it”–
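The progression Zuckerberg describes, from a naive engagement signal to a multi-signal model anchored in what surveyed people say actually matters, can be sketched roughly as follows. All signal names and weights here are invented for illustration; they are not Facebook’s actual model.

```python
# Illustrative sketch: naive click/Like scoring vs. a weighted
# multi-signal score. Signal names and weights are hypothetical.

def naive_score(post):
    # Earliest approximation: rank purely by clicks and Likes.
    return post["clicks"] + post["likes"]

# In a real system these weights would be learned from people rating
# candidate content directly (the "multi-dimensional" ground truth).
WEIGHTS = {
    "clicks": 0.1,
    "likes": 0.2,
    "survey_worth_my_time": 2.0,   # raters said it was worth seeing
    "predicted_regret": -3.0,      # penalize content people later regret
}

def multi_signal_score(post):
    return sum(WEIGHTS[k] * post.get(k, 0.0) for k in WEIGHTS)

post = {"clicks": 10, "likes": 5,
        "survey_worth_my_time": 0.9, "predicted_regret": 0.4}
print(naive_score(post))                    # 15
print(round(multi_signal_score(post), 2))   # 2.6
```

Note how the negative weight on a regret signal encodes the "conservative on preventing downside" point: heavily clicked content can still rank low if people are predicted to regret engaging with it.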

Jonathan Zittrain: Right. Which makes it an easy case.

Mark Zuckerberg: Yes. And I think that the harder cases are when there’s significant upside and significant downside and you want to weigh both of them. So, for example, once we started putting together a really big effort on preventing election interference, one of the initial ideas that came up was “Why don’t we just ban all ads that relate to anything that is political?” And then you pretty quickly get into, all right, well, what’s a political ad? The classic legal definition is things that are around elections and candidates, but that’s not actually what Russia and other folks were primarily doing. Right? A lot of the issues that we’ve seen are around issue ads, right, and basically sowing division on what are social issues. So, all right, you don’t want to get in the way of people’s speech and ability to promote and do advocacy on issues that they care about. So, then the question is “All right, well, then what’s the right balance?” How do you make sure that you’re providing the right level of controls, that people who aren’t supposed to be participating in these debates aren’t, or that at least you’re providing the right transparency? But I think we’ve veered a little bit from the original question–

Jonathan Zittrain: Yes.

Mark Zuckerberg: –but the– but, yeah. So, let’s get back to where you were.

Jonathan Zittrain: Well, here’s– and this is a way of maybe moving it forward, which is: a platform as complete as Facebook is these days offers lots of opportunities to shape what people see, and possibly to help them with those nudges– that it’s time to go to the gym– or to keep them from falling into the depredations of the payday loan. And it is a question of: so long as the platform has the power to do it, does it now have an ethical obligation to do it, to help people achieve the good life?

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: And I worry that it is too great a burden for any company to bear to have to figure out, say, if not the perfect, then the most reasonable newsfeed for every one of the– how many? Two and a half billion active users? Something like that.

Mark Zuckerberg: Yeah. On that order.

Jonathan Zittrain: All the time. And there might be some ways– this starts a little bit to get into the engineering of the thing– that would say, “Okay, with all hindsight, are there ways to architect this so that the stakes aren’t as high, aren’t as focused on just, ‘Gosh, is Facebook doing this right?’” It’s as if there were only one newspaper in the whole world, or one or two, and then what The New York Times chooses to put on its home page, if it were the only newspaper, would have outsize importance.

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: So, just as a technical matter: a number of the students in this room had a chance to hear from Tim Berners-Lee, inventor of the World Wide Web, and he has a new idea for something called “Solid”. I don’t know if you’ve heard of Solid. It’s a protocol more than it is a product, so there’s no car to move off the lot today. But its idea is allowing people to have the data that they generate as they motor around the web end up in their own kind of data locker. Now, for somebody like Tim, that might mean literally in a locker under his desk, and he could wake up in the middle of the night and see where his data is. For others, it might mean a rack somewhere, guarded perhaps by a fiduciary who’s looking out for them, the way that we put money in a bank and then we can sleep at night knowing the bankers are– this is maybe not the best analogy in 2019, but– watching.

Mark Zuckerberg: We’ll get there.

Jonathan Zittrain: We’ll get there. But Solid says if you did that, people– or their helpful proxies– would then be able to say, “All right, Facebook is coming along. It wants the following data from me, including the data that it has generated about me as I use it, but stored back in my locker– and it kind of has to come back to my well to draw water each time. And that way, if I want to switch to Schmacebook or something, it’s still in my well and I can just immediately grant permission to Schmacebook to see it; I don’t have to do a kind of data slurp and then re-upload it.” It’s a fully distributed way of thinking about data. And I’m curious, from an engineering perspective, does this seem doable with something of the size and the number of spinning wheels that Facebook has, and does it seem like a–

Mark Zuckerberg: Yeah–

Jonathan Zittrain: –and I’m curious your reaction to an idea like that.
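The data-locker model Zittrain describes (the user’s data stays in their own pod, and each app must be granted permission to read it, with no re-upload needed when switching services) can be sketched as a toy. The class and method names below are invented for illustration and are not the actual Solid protocol API.

```python
# Toy sketch of a Solid-style personal data pod with per-app read grants.
# All names are hypothetical; this is not the real Solid API.

class DataPod:
    def __init__(self):
        self._data = {}       # the user's own data, stored in their pod
        self._grants = set()  # apps currently allowed to read it

    def store(self, key, value):
        self._data[key] = value

    def grant(self, app):
        self._grants.add(app)

    def revoke(self, app):
        self._grants.discard(app)

    def read(self, app, key):
        # An app must "come back to the well" on every read, and the
        # pod enforces the grant each time.
        if app not in self._grants:
            raise PermissionError(f"{app} has no access grant")
        return self._data[key]

pod = DataPod()
pod.store("likes", ["hiking", "jazz"])
pod.grant("facebook")
print(pod.read("facebook", "likes"))      # ['hiking', 'jazz']

# Switching services needs no "data slurp and re-upload":
# the data never left the pod, so just grant the new app.
pod.grant("schmacebook")
print(pod.read("schmacebook", "likes"))   # ['hiking', 'jazz']
```

The point of the sketch is the inversion of control: access is a revocable grant checked on every read by the user’s pod, rather than a one-time copy of the data into each service’s own storage.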

Mark Zuckerberg: So, I think it’s quite interesting. Certainly, the level of computation that Facebook is doing, and all the services that we’re building, is really intense to do in a distributed way. I mean, as a basic model, the data center capacity we’re building out over the next five years– our plan for what we think we need– is on the order of all of what AWS and Google Cloud are doing to support all of their customers. So, okay, this is a relatively computationally intense thing.

Over time you assume you’ll get more co
