
Opera Touch brings website cookie blocking to iOS

Last fall, Opera introduced Opera Touch for iOS — a solid alternative to Safari on iPhone, optimized for one-handed use. Today, the company is rolling out a notable new feature to this app: cookie blocking. Yes, it can now block those annoying dialogs that ask you to accept the website’s cookies. These are particularly problematic on mobile, where they often entirely interrupt your ability to view the content, as opposed to on many desktop websites where you can (kind of) ignore the pop-up banner that appears at the bottom or the top of the page.

Cookie dialogs have become prevalent across the web as a result of Europe’s GDPR, but many people find them overly intrusive. Today, it takes an extra click to dismiss these



Highlights & transcript from Zuckerberg’s 20K-word ethics talk

Mark Zuckerberg says it might be right for Facebook to let people pay to not see ads, but that it would feel wrong to charge users for extra privacy controls. That’s just one of the fascinating philosophical views the CEO shared during the first of his public talks he’s promised as part of his 2019 personal challenge.

Talking to Harvard Law and computer science professor Jonathan Zittrain on the campus of the university he dropped out of, Zuckerberg managed to escape the 100-minute conversation with just a few gaffes. At one point he said “we definitely don’t want a society where there’s a camera in everyone’s living room watching the content of those conversations”. Zittrain swiftly reminded him that’s exactly what Facebook Portal is, and Zuckerberg tried to deflect by saying Portal’s recordings would be encrypted.

Later Zuckerberg mentioned “the ads, in a lot of places are not even that different from the organic content in terms of the quality of what people are being able to see”, which is a pretty sad and derisive assessment of the personal photos and status updates people share. And when he suggested crowdsourced fact-checking, Zittrain chimed in that this could become an avenue for “astroturfing”, where mobs of users provide purposefully biased information to promote their interests, like a political group’s supporters voting that their opponents’ facts are lies. While sometimes avoiding hard stances on questions, Zuckerberg was otherwise relatively logical and coherent.

Policy And Cooperating With Governments

The CEO touched on Facebook’s borderline content policy, which quietly demotes posts that come close to breaking its rules against nudity, hate speech and the like. These posts are often the most sensational and get the most distribution, but they don’t make people feel good. Zuckerberg noted some progress here, saying “a lot of the things that we’ve done in the last year were focused on that problem and it really improves the quality of the service and people appreciate that.”

This aligns with Zuckerberg contemplating Facebook’s role as a “data fiduciary” where rather than necessarily giving in to users’ urges or prioritizing its short-term share price, the company tries to do what’s in the best long-term interest of its communities. “There’s a hard balance here which is — I mean if you’re talking about what people want to want versus what they want– you know, often people’s revealed preferences of what they actually do shows a deeper sense of what they want than what they think they want to want” he said. Essentially, people might tap on clickbait even if it doesn’t make them feel good.

On working with governments, Zuckerberg explained how incentives weren’t always aligned, like when law enforcement is monitoring someone who is accidentally dropping clues about their crimes and collaborators. The government and society might benefit from that continued surveillance, but Facebook might want to immediately suspend the account if it found out. “But as you build up the relationships and trust, you can get to that kind of a relationship where they can also flag for you, ‘Hey, this is where we’re at’”, implying Facebook might purposefully allow that person to keep incriminating themselves to assist the authorities.

But disagreements between governments can flare up. Zuckerberg notes that “we’ve had employees thrown in jail because we have gotten court orders that we have to turn over data that we wouldn’t probably anyway, but we can’t because it’s encrypted.” That’s likely a reference to the 2016 arrest of Facebook’s VP for Latin America, Diego Dzodan, over WhatsApp’s encryption preventing the company from providing evidence for a drug case.

Decentralizing Facebook

The tradeoffs of encryption and decentralization were a central theme. He discussed how while many people fear how encryption could mask illegal or offensive activity, Facebook doesn’t have to peek at someone’s actual content to determine they’re violating policy. “One of the — I guess, somewhat surprising to me — findings of the last couple of years of working on content governance and enforcement is that it often is much more effective to identify fake accounts and bad actors upstream of them doing something bad by patterns of activity rather than looking at the content” Zuckerberg said.

With Facebook rapidly building out a blockchain team to potentially launch a cryptocurrency for fee-less payments or an identity layer for decentralized applications, Zittrain asked about the potential for letting users control which other apps they give their profile information to without Facebook as an intermediary.

SAN JOSE, CA – MAY 01: Facebook CEO Mark Zuckerberg (Photo by Justin Sullivan/Getty Images)

Zuckerberg stressed that at Facebook’s scale, moving to a less efficient distributed architecture would be extremely “computationally intense”, though it might eventually be possible. Instead, he said “One of the things that I’ve been thinking about a lot is a use of blockchain that I am potentially interested in– although I haven’t figured out a way to make this work out, is around authentication and bringing– and basically granting access to your information and to different services. So, basically, replacing the notion of what we have with Facebook Connect with something that’s fully distributed.” This might be attractive to developers who would know Facebook couldn’t cut them off from their users.

The problem is that if a developer was abusing users, Zuckerberg fears that “in a fully distributed system there would be no one who could cut off the developers’ access. So, the question is if you have a fully distributed system, it dramatically empowers individuals on the one hand, but it really raises the stakes and it gets to your questions around, well, what are the boundaries on consent and how people can really actually effectively know that they’re giving consent to an institution?”

No “Pay For Privacy”

But perhaps most novel and urgent were Zuckerberg’s comments on the secondary questions raised by whether Facebook should let people pay to remove ads. “You start getting into a principle question which is ‘are we going to let people pay to have different controls on data use than other people?’ And my answer to that is a hard no.” Facebook has promised to always operate a free version so everyone can have a voice. Yet some, including myself, have suggested that a premium ad-free subscription to Facebook could help wean it off maximizing data collection and engagement, though it might break Facebook’s revenue machine by pulling the most affluent and desired users out of the ad targeting pool.

“What I’m saying is on the data use, I don’t believe that that’s something that people should buy. I think the data principles that we have need to be uniformly available to everyone. That to me is a really important principle,” Zuckerberg expanded. “It’s, like, maybe you could have a conversation about whether you should be able to pay and not see ads. That doesn’t feel like a moral question to me. But the question of whether you can pay to have different privacy controls feels wrong.”

Back in May, Zuckerberg announced Facebook would build a Clear History button in 2018 that deletes all the web browsing data the social network has collected about you, but that data’s deep integration into the company’s systems has delayed the launch. Research suggests users don’t want the inconvenience of getting logged out of all their Facebook Connected services, though they would like to hide certain data from the company.

“Clear history is a prerequisite, I think, for being able to do anything like subscriptions. Because, like, partially what someone would want to do if they were going to really actually pay for a not ad supported version where their data wasn’t being used in a system like that, you would want to have a control so that Facebook didn’t have access or wasn’t using that data or associating it with your account. And as a principled matter, we are not going to just offer a control like that to people who pay.”

Of all the apologies, promises, and predictions Zuckerberg has made recently, this pledge might instill the most confidence. While some might think of Zuckerberg as a data tyrant out to absorb and exploit as much of our personal info as possible, there are at least lines he’s not willing to cross. Facebook could try to charge you for privacy, but it won’t. And given Facebook’s dominance in social networking and messaging plus Zuckerberg’s voting control of the company, a greedier man could make the internet much worse.

TRANSCRIPT – MARK ZUCKERBERG AT HARVARD / FIRST PERSONAL CHALLENGE 2019

Jonathan Zittrain: Very good. So, thank you, Mark, for coming to talk to me and to our students from the Techtopia program and from my “Internet and Society” course at Harvard Law School. We’re really pleased to have a chance to talk about any number of issues and we should just dive right in. So, privacy, autonomy, and information fiduciaries.

Mark Zuckerberg: All right!

Jonathan Zittrain: Love to talk about that.

Mark Zuckerberg: Yeah! I read your piece in The New York Times.

Jonathan Zittrain: The one with the headline that said, “Mark Zuckerberg can fix this mess”?

Mark Zuckerberg: Yeah.

Jonathan Zittrain: Yeah.

Mark Zuckerberg: Although that was last year.

Jonathan Zittrain: That’s true! Are you suggesting it’s all fixed?

Mark Zuckerberg: No. No.

Jonathan Zittrain: Okay, good. So–

Mark Zuckerberg: I’m suggesting that I’m curious whether you still think that we can fix this mess?

Jonathan Zittrain: Ah!

Jonathan Zittrain: I hope–

Jonathan Zittrain: “Hope springs eternal”–

Mark Zuckerberg: Yeah, there you go.

Jonathan Zittrain: –is my motto. So, all right, let me give a quick characterization of this idea that the coinage and the scaffolding for it is from my colleague, Jack Balkin, at Yale. And the two of us have been developing it out further. There are a standard number of privacy questions with which you might have some familiarity, having to do with people conveying information that they know they’re conveying or they’re not so sure they are, but “mouse droppings” as we used to call them when they run in the rafters of the Internet and leave traces. And then the standard way of talking about that is you want to make sure that that stuff doesn’t go where you don’t want it to go. And we call that “informational privacy”. We don’t want people to know stuff that we want maybe our friends only to know. And on a place like Facebook, you’re supposed to be able to tweak your settings and say, “Give them to this and not to that.” But there’s also ways in which stuff that we share with consent could still sort of be used against us and it feels like, “Well, you consented,” may not end the discussion. And the analogy that my colleague Jack brought to bear was one of a doctor and a patient or a lawyer and a client or– sometimes in America, but not always– a financial advisor and a client that says that those professionals have certain expertise, they get trusted with all sorts of sensitive information from their clients and patients and, so, they have an extra duty to act in the interests of those clients even if their own interests conflict. And, so, maybe just one quick hypo to get us started. I wrote a piece in 2014, that maybe you read, that was a hypothetical about elections in which it said, “Just hypothetically, imagine that Facebook had a view about which candidate should win and they reminded people likely to vote for the favored candidate that it was Election Day,” and to others they simply sent a cat photo. Would that be wrong? And I find– I have no idea if it’s illegal; it does seem wrong to me and it might be that the fiduciary approach captures what makes it wrong.

Mark Zuckerberg: All right. So, I think we could probably spend the whole next hour just talking about that!

Mark Zuckerberg: So, I read your op-ed and I also read Balkin’s blogpost on information fiduciaries. And I’ve had a conversation with him, too.

Jonathan Zittrain: Great.

Mark Zuckerberg: And the– at first blush, kind of reading through this, my reaction is there’s a lot here that makes sense. Right? The idea of us having a fiduciary relationship with the people who use our services is kind of intuitively– it’s how we think about how we’re building what we’re building. So, reading through this, it’s like, all right, you know, a lot of people seem to have this mistaken notion that when we’re putting together news feed and doing ranking that we have a team of people who are focused on maximizing the time that people spend, but that’s not the goal that we give them. We tell people on the team, “Produce the service–” that we think is going to be the highest quality that– we try to ground it in kind of getting people to come in and tell us, right, of the content that we could potentially show what is going to be– they tell us what they want to see, then we build models that kind of– that can predict that, and build that service.

Jonathan Zittrain: And, by the way, was that always the case or–

Mark Zuckerberg: No.

Jonathan Zittrain: –was that a place you got to through some course adjustments?

Mark Zuckerberg: Through course adjustments. I mean, you start off using simpler signals like what people are clicking on in feed, but then you pretty quickly learn, “Hey, that gets you to a local optimum,” right? Where if you’re focusing on what people click on and predicting what people click on, then you select for clickbait. Right? So, pretty quickly you realize from real feedback, from real people, that’s not actually what people want. You’re not going to build the best service by doing that. So, you bring in people and actually have these panels of– we call it “getting to ground truth”– of you show people all the candidates for what can be shown to them and you have people say, “What’s the most meaningful thing that I wish that this system were showing us?” So, all this is kind of a way of saying that our own self image of ourselves and what we’re doing is that we’re acting as fiduciaries and trying to build the best services for people. Where I think that this ends up getting interesting is then the question of who gets to decide in the legal sense or the policy sense of what’s in people’s best interest? Right? So, we come in every day and think, “Hey, we’re building a service where we’re ranking newsfeed trying to show people the most relevant content with an assumption that’s backed by data; that, in general, people want us to show them the most relevant content.” But, at some level, you could ask the question which is “Who gets to decide that ranking newsfeed or showing relevant ads?” or any of the other things that we choose to work on are actually in people’s interest. And we’re doing the best that we can to try to build the services [ph?] that we think are the best. At the end of the day, a lot of this is grounded in “People choose to use it.” Right? Because, clearly, they’re getting some value from it. But then there are all these questions like you say about, you have– about where people can effectively give consent and not.

Jonathan Zittrain: Yes.

Mark Zuckerberg: So, I think that there’s a lot of interesting questions in this to unpack about how you’d implement a model like that. But, at a high level I think, you know, one of the things that I think about in terms of we’re running this big company; it’s important in society that people trust the institutions of society. Clearly, I think we’re in a position now where people rightly have a lot of questions about big internet companies, Facebook in particular, and I do think getting to a point where there’s the right regulation and rules in place just provides a kind of societal guardrail framework where people can have confidence that, okay, these companies are operating within a framework that we’ve all agreed. That’s better than them just doing whatever they want. And I think that that would give people confidence. So, figuring out what that framework is, I think, is a really important thing. And I’m sure we’ll talk about that as it relates–

Jonathan Zittrain: Yes.

Mark Zuckerberg: –to a lot of the content areas today. But getting to that question of how do you– “Who determines what’s in people’s best interest, if not people themselves?”

Jonathan Zittrain: Yes.

Mark Zuckerberg: –is a really interesting question.

Jonathan Zittrain: Yes, so, we should surely talk about that. So, on our agenda is the “Who decides?” question.

Mark Zuckerberg: All right.

Jonathan Zittrain: Other agenda items include– just as you say, the fiduciary framework sounds nice to you– doctors, patients, Facebook users. And I hear you saying that’s pretty much where you’re wanting to end up anyway. There are some interesting questions about what people want, versus what they want to want.

Mark Zuckerberg: Yeah.

Jonathan Zittrain: People will say, “On January 1st, what I want–” New Year’s resolution– “is a gym membership.” And then on January 2nd, they don’t want to go to the gym. They want to want to go to the gym, but they never quite make it. And then, of course, a business model of pay for the whole year ahead of time and they know you’ll never turn up develops around that. And I guess a specific area to delve into for a moment on that might be on the advertising side of things, maybe the dichotomy between personalization and does it ever go into exploitation? Now, there might be stuff– I know Facebook, for example, bans payday loans as best it can.

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: That’s just a substantive area that it’s like, “All right, we don’t want to do that.”

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: But when we think about good personalization so that Facebook knows I have a dog and not a cat, and a targeter can then offer me dog food and not cat food. How about, if not now, a future day in which an advertising platform can offer to an ad targeter some sense of “I just lost my pet, I’m really upset, I’m ready to make some snap decisions that I might regret later, but when I make them–”

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: “–I’m going to make them.” So, this is the perfect time to tee up–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: –a Cubic Zirconia or whatever the thing is that–

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: That seems to me a fiduciary approach would say, ideally– how we get there I don’t know, but ideally we wouldn’t permit that kind of approach to somebody using the information we’ve gleaned from them to know they’re in a tough spot–

Mark Zuckerberg: Yeah.

Jonathan Zittrain: –and then to exploit them. But I don’t know. I don’t know how you would think about something like that. Could you write an algorithm to detect something like that?

Mark Zuckerberg: Well, I think one of the key principles is that we’re trying to run this company for the long term. And I think that people think that a lot of things that– if you were just trying to optimize the profits for next quarter or something like that, you might want to do things that people might like in the near term, but over the long term will come to resent. But if you actually care about building a community and achieving this mission and building the company for the long term, I think you’re just much more aligned than people often think companies are. And it gets back to the idea before, where I think our self image is largely acting as– in this kind of fiduciary relationship as you’re saying– and across– we could probably go through a lot of different examples. I mean, we don’t want to show people content that they’re going to click on and engage with, but then feel like they wasted their time afterwards. Where we don’t want to show them things that they’re going to make a decision based off of that and then regret later. I mean, there’s a hard balance here which is– I mean if you’re talking about what people want to want versus what they want– you know, often people’s revealed preferences of what they actually do shows a deeper sense of what they want than what they think they want to want. So, I think there’s a question between when something is exploitative versus when something is real, but isn’t what you would say that you want.

Jonathan Zittrain: Yes.

Mark Zuckerberg: And that’s a really hard thing to get at.

Jonathan Zittrain: Yes.

Mark Zuckerberg: But on a lot of these cases my experience of running the company is that you start off building a system, you have relatively unsophisticated signals to start, and you build up increasingly complex models over time that try to take into account more of what people care about. And there are all these examples that we can go through. I think probably newsfeed and ads are probably the two most complex ranking examples–

Jonathan Zittrain: Yes.

Mark Zuckerberg: –that we have. But it’s– like we were talking about a second ago, when we started off with the systems, I mean, just start with newsfeeds– but you could do this on ads, too– you know, the most naïve signals, right, are what people click on or what people “Like”. But then you just very quickly realize that that doesn’t– it approximates something, but it’s a very crude approximation of the ground truth of what people actually care about. So, what you really want to get to is as much as possible getting real people to look at the real candidates for content and tell you in a multi-dimensional way what matters to them and try to build systems that model that. And then you want to be kind of conservative on preventing downside. So, your example of the payday loans– and when we’ve talked about this in the past, your– you’ve put the question to me of “How do you know when a payday loan is going to be exploitative?” right? “If you’re targeting someone who is in a bad situation?” And our answer is, “Well, we don’t really know when it’s going to be exploitative, but we think that the whole category potentially has a massive risk of that, so we just ban it–

Jonathan Zittrain: Right. Which makes it an easy case.

Mark Zuckerberg: Yes. And I think that the harder cases are when there’s significant upside and significant downside and you want to weigh both of them. So, I mean, for example, once we started putting together a really big effort on preventing election interference, one of the initial ideas that came up was “Why don’t we just ban all ads that relate to anything that is political?” And then you pretty quickly get into, all right, well, what’s a political ad? The classic legal definition is things that are around elections and candidates, but that’s not actually what Russia and other folks were primarily doing. Right? It’s– you know, a lot of the issues that we’ve seen are around issue ads, right, and basically sowing division on what are social issues. So, all right, I don’t think you’re going to get in the way of people’s speech and ability to promote and do advocacy on issues that they care about. So, then the question is “All right, well, so, then what’s the right balance?” of how do you make sure that you’re providing the right level of controls, that people who aren’t supposed to be participating in these debates aren’t or that at least you’re providing the right transparency. But I think we’ve veered a little bit from the original question–

Jonathan Zittrain: Yes.

Mark Zuckerberg: –but the– but, yeah. So, let’s get back to where you were–

Jonathan Zittrain: Well, here’s– and this is a way of maybe moving it forward, which is: A platform as complete as Facebook is these days offers lots of opportunities to shape what people see and possibly to help them with those nudges, that it’s time to go to the gym or to keep them from falling into the depredations of the payday loan. And it is a question of, so long as the platform is able to do it, does it now have an ethical obligation to do it, to help people achieve the good life?

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: And I worry that it is too great a burden for any company to bear to have to figure out, say, if not the perfect, the most reasonable newsfeed for every one of the– how many? Two and a half billion active users? Something like that.

Mark Zuckerberg: Yeah. On that order.

Jonathan Zittrain: All the time and there might be some ways that start a little bit to get into the engineering of the thing that would say, “Okay, with all hindsight, are there ways to architect this so that the stakes aren’t as high, aren’t as focused on just, ‘Gosh, is Facebook doing this right?’” It’s as if there was only one newspaper in the whole world or one or two, and it’s like, “Well, then what The New York Times chooses to put on its home page, if it were the only newspaper, would have outsize importance.”

Mark Zuckerberg: Mm-hm.

Jonathan Zittrain: So, just as a technical matter, a number of the students in this room had a chance to hear from Tim Berners-Lee, inventor of the World Wide Web, and he has a new idea for something called “Solid”. I don’t know if you’ve heard of Solid. It’s a protocol more than it is a product. So, there’s no car to move off the lot today. But its idea is allowing people to have the data that they generate as they motor around the web end up in their own kind of data locker. Now, for somebody like Tim, it might mean literally in a locker under his desk and he could wake up in the middle of the night and see where his data is. For others, it might mean a rack somewhere, guarded perhaps by a fiduciary who’s looking out for them, the way that we put money in a bank and then we can sleep at night knowing the bankers are– this is maybe not the best analogy in 2019, but watching.

Mark Zuckerberg: We’ll get there.

Jonathan Zittrain: We’ll get there. But Solid says if you did that, people would then– or their helpful proxies– be able to say, “All right, Facebook is coming along. It wants the following data from me and including that data that it has generated about me as I use it, but stored back in my locker and it kind of has to come back to my well to draw water each time. And that way if I want to switch to Schmacebook or something, it’s still in my well and I can just immediately grant permission to Schmacebook to see it and I don’t have to do a kind of data slurp and then re-upload it.” It’s a fully distributed way of thinking about data. And I’m curious from an engineering perspective does this seem doable with something of the size and the number of spinning wheels that Facebook has and does it seem like a–

Mark Zuckerberg: Yeah–

Jonathan Zittrain: –and I’m curious your reaction to an idea like that.

Mark Zuckerberg: So, I think it’s quite interesting. Certainly, the level of computation that Facebook is doing and all the services that we’re building is really intense to do in a distributed way. I mean, I think as a basic model I think we’re building out the data center capacity over the next five years and our plan for what we think we need to do that we think is on the order of all of what AWS and Google Cloud are doing for supporting all of their customers. So, okay, so, this is like a relatively computationally intense thing.

Over time you assume you’ll get more co


Digital Influencers and the dollars that follow them

Sunny Dhillon is a partner at Signia Venture Partners.

Animated characters are as old as human storytelling itself, dating back thousands of years to cave drawings that depict animals in motion. It was really in the last century, however — a period bookended by the first animated short film in 1908 and Pixar’s success with computer animation with Toy Story from 1995 onward — that animation leapt forward. Fundamentally, this period of great innovation sought to make it easier to create an animated story for an audience to passively consume in a curated medium, such as a feature-length film.

Our current century could be set for even greater advances in the art and science of bringing characters to life. Digital influencers — virtual or animated humans that live natively on social media — will be central to that undertaking. Digital influencers don’t merely represent the penetration of cartoon characters into yet another medium, much as they sprang from newspaper strips to TV and the multiplex. Rather, digital humans on social media represent the first instance in which fictional entities act in the same plane of communication as you and I — regular people — do. Imagine if stories about Mickey Mouse were told over a telephone or in personalized letters to fans. That’s the kind of jump we’re talking about.

Social media is a new storytelling medium, much as film was a century ago. As with film then, we have yet to transmit virtual characters to this new medium in a sticky way.

Which isn’t to say that there aren’t digital characters living their lives on social channels right now. The pioneers have arrived: Lil’ Miquela, Astro, Bermuda and Shudu are prominent examples. But they are still only notable for their novelty, not yet their ubiquity. They represent the output of old animation techniques applied to a new medium. This TechCrunch article did a great job describing the current digital influencer landscape.


So why haven’t animated characters taken off on social media platforms? It’s largely an issue of scale — it’s expensive and time-consuming to create animated characters and to depict their adventures. One 2017 estimate stated that a 60- to 90-second animation took about six weeks to create. An episode of animated TV takes one to three months to produce, typically with large teams in South Korea doing much of the animation legwork. That pace simply doesn’t work in a medium that calls for new original content multiple times a day.

Yet the technical piece of the puzzle is falling into place, which is primarily what I want to talk about today. Traditionally, virtual characters were created by a team of experts — not scalable — in the following way:

  • Create a 3D model
  • Texture the model and add additional materials
  • Rig the 3D model skeleton
  • Animate the 3D model
  • Introduce character into desired scene

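To make the ordering of that traditional pipeline concrete, here is a minimal, purely illustrative Python sketch (the class and stage names are hypothetical, not from any real animation toolchain) that models the five stages and their strict dependency on one another:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the traditional virtual-character pipeline
# described above; stage and class names are hypothetical.
PIPELINE_STAGES = [
    "model",      # create the 3D model
    "texture",    # texture the model and add additional materials
    "rig",        # rig the 3D model skeleton
    "animate",    # animate the 3D model
    "composite",  # introduce the character into the desired scene
]

@dataclass
class Character:
    name: str
    completed: list = field(default_factory=list)

    def run_stage(self, stage: str) -> None:
        # Each stage depends on every previous stage being finished,
        # which is why the process resists parallelization and scale.
        expected = PIPELINE_STAGES[len(self.completed)]
        if stage != expected:
            raise ValueError(f"expected stage {expected!r}, got {stage!r}")
        self.completed.append(stage)

    @property
    def ready(self) -> bool:
        return self.completed == PIPELINE_STAGES

avatar = Character("demo")
for stage in PIPELINE_STAGES:
    avatar.run_stage(stage)
```

The point of the strict stage ordering is the scale problem mentioned above: because each step gates the next, a team of specialists must execute them in sequence for every character and every scene.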
Today, there are generally three types of virtual avatar: realistic high-resolution CGI avatars, stylized CGI avatars and manipulated video avatars.

Apple rolls out software fix for Group FaceTime eavesdrop bug

Apple has said it will compensate the teenager who first found a security bug in Group FaceTime that allowed users to eavesdrop before a call was picked up.

The bug was initially reported to Apple by 14-year-old Grant Thompson and his mother, but the family struggled getting in contact with the company before the bug was discovered elsewhere and went viral on social media.

The payout will fall under Apple’s bug bounty, which incentivizes security researchers to claim a reward for privately submitting security bugs.


It’s the Jons 2018!

It was the best of years, it was the worst of years, it was the wokest of years, it was the most problematic of years, it was the year of AI, it was the year of scooters, it was the year of Big Tech triumph, it was the year of Big Tech scandals, it was the year of Musk’s disgrace, it was the year of Tesla’s redemption, it was the year of shitcoin justice, it was definitely not the year of AR or VR, it was the dumbest timeline, it was the spring of stanning, it was the winter of wtf.

It was, in short, a year tailor-made for The Jons, an annual award celebrating tech’s more dubious achievers, named, in an awe-inspiring fit of humility, after myself. So let’s get to it! With very little further ado, I give you: the fourth annual Jon Awards for Dubious Technical Achievement!

(The Jons 2015) (The Jons 2016) (The Jons 2017)

THE FEET AND LEGS AND TORSO OF CLAY AWARD FOR SUDDEN REGRESSION TO THE MEAN

To Elon Musk, who in the past year went from (in many eyes) “messiah who could do no wrong” to “man who has paid a $20 million fine and stepped down as chairman in order to settle with the SEC regarding allegations of tweeted fraud; been sued for very publicly accusing a stranger of pedophilia with no evidence; feuded with Azealia Banks; been roundly criticized for the conditions in Tesla’s factories; and been pilloried (though also, and to my mind more accurately, tentatively praised) for his new Boring Tunnel.” Don’t have heroes, kids.

THE BUT ON THE OTHER HAND THERE ARE ALL THOSE SHINY NEW ELECTRIC CARS AWARD FOR ATTEMPTED DOOMSAYING

Surprisingly, despite the previous award, this one goes to the herds of bears who spent much of the year claiming that Tesla’s imminent doom and bankruptcy would become obvious and indisputable any day now. The roars of the bears seem to have grown much quieter of late, probably because the Model 3’s production rate has rocketed from 1,000 per week at the start of the year to 1,000 per day of late. No mean feat on the part of Tesla employees.

THE YES BUT THE DIFFERENCE IS THE RUSSIANS KNOW IT’S DISINFORMATION AWARD FOR BAD OPSEC

To Donald Trump, who apparently continues to use an insecure iPhone, which the Chinese and Russians listen in on. The good news? Officials have “confidence he was not spilling secrets because he rarely digs into the details of the intelligence he is shown and is not well versed in the operational specifics of military or covert activities.” Put less diplomatically, the president of the United States doesn’t pay enough attention to briefings to have any important secrets to share. Nothing to worry about there! Trump responded by tweeting a denial, saying he only had a “seldom used government cell phone” … from the iOS Twitter app.

THE YOU MUST ADMIT I WAS AT LEAST RIGHT ABOUT EVERYTHING BEING DIFFERENT NOW AWARD FOR BUBBLY BITCOIN PREDICTIONS

It’s too easy and obvious to give this award to John McAfee, who I suspect of actually angling for a Jon year after year. And as a believer that cryptocurrencies have long-term importance, I’m not going to award anyone for their less-outlandish-than-McAfee medium-term beliefs. So this award goes to Bitcoin uberbull
