Internet Security

OpenAI built a text generator so good, it’s considered too dangerous to release

A storm is brewing over a new language model, built by non-profit artificial intelligence research company OpenAI, which it says is so good at generating convincing, well-written text that it’s worried about potential abuse.

That’s angered some in the community, who have accused the company of reneging on a promise not to close off its research.

OpenAI said its new natural language model, GPT-2, was trained to predict the next word in a sample of 40 gigabytes of internet text. The end result was a system generating text that “adapts to the style and content of the conditioning text,” allowing the user to “generate realistic and coherent continuations about a topic of their choosing.” The model is a vast improvement on the first version, producing longer text with greater coherence.
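The core mechanic is easier to see in miniature. The sketch below is a toy bigram model, not OpenAI’s code or GPT-2’s architecture (which uses a large neural network, not word counts): it counts which word follows which in a corpus, then extends a prompt by repeatedly appending the most frequent continuation.

```python
from collections import defaultdict

def train_bigram(corpus):
    """Count word -> next-word transitions. A crude stand-in for the
    next-word distribution GPT-2 learns from 40 GB of internet text."""
    counts = defaultdict(dict)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] = counts[cur].get(nxt, 0) + 1
    return counts

def generate(counts, prompt, length=5):
    """Autoregressively extend the prompt, greedily appending the most
    frequent continuation of the last word -- a toy 'text generator'."""
    out = prompt.split()
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no known continuation of this word; stop early
        out.append(max(followers, key=followers.get))
    return " ".join(out)
```

Real models like GPT-2 condition on the whole preceding context rather than one word, and sample from a probability distribution instead of always taking the single most frequent continuation, which is what lets them produce varied, coherent long-form text.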

But with every good application of the system, such as bots capable of better dialog and better speech recognition, the non-profit found several more, like generating fake news, impersonating people, or automating abusive or spam comments on social media.

To wit: when GPT-2 was tasked with writing a response to the prompt, “Recycling is good for the world, no, you could no

Can predictive analytics be made safe for humans?

Massive-scale predictive analytics is a relatively new phenomenon, one that challenges both decades of law and consumer thinking about privacy.

As a technology, it may well save thousands of lives in applications like predictive medicine, but if it isn’t used carefully, it may prevent thousands from getting loans, for instance, if an underwriting algorithm is biased against certain users.

I chatted with Dennis Hirsch a few weeks ago about the challenges posed by this new data economy. Hirsch is a professor of law at Ohio State and head of its Program on Data and Governance. He’s also affiliated with the university’s Risk Institute.

“Data ethics is the new form of risk mitigation for the algorithmic economy,” he said. In a post-Cambridge Analytica world, every company has to assess what data it has on its customers and mitigate the risk of harm. How to do that, though, is at the cutting edge of the new field of data governance, which investigates the processes and policies through which organizations manage their data.

“Traditional privacy regulation asks whether you gave someone notice and gave them a choice,” he explains. That principle is the bedrock for Europe’s GDPR law, and for the patchwork of laws in the U.S. that protect privacy. It’s based around the simplistic idea that a datum — such as a customer’s address — shouldn’t be shared with, say, a marketer without that user’s knowledge. Privacy is about protecting the address book, so to speak.

The rise of “predictive analytics,” though, has completely demolished such privacy legislation. Predictive analytics is a fuzzy term, but essentially means interpreting raw data and drawing new conclusions through inference. This is the story of the famous Target case, where the retailer recommended pregnancy-related goods to women who had certain patterns of purchases. As Charles Duhigg explained at the time:

Many shoppers purchase soap and cotton balls, but when someone suddenly starts buying lots of scent-free soap and extra-big bags of cotton balls, in addition to hand sanitizers and washcloths, it signals they could be getting close to
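In code, the kind of inference Duhigg describes can be reduced to a toy scoring rule. The item list and the scoring formula below are illustrative assumptions, not Target’s actual model, which reportedly weighed dozens of products statistically.

```python
SIGNAL_ITEMS = ("unscented soap", "cotton balls", "hand sanitizer", "washcloths")

def pregnancy_score(purchases, signal_items=SIGNAL_ITEMS):
    """Toy inference in the spirit of the Target anecdote: the fraction
    of 'signal' items present in a shopper's purchase history.
    Illustrative only -- the real model was a trained statistical one."""
    basket = set(purchases)
    hits = sum(1 for item in signal_items if item in basket)
    return hits / len(signal_items)
```

The privacy point survives the simplification: no single purchase here is sensitive, yet the combination yields a sensitive conclusion the shopper never disclosed — which is exactly what notice-and-choice regulation fails to cover.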

Atrium, Justin Kan’s legal tech startup, launches a fintech and blockchain division

Atrium, the legal startup co-founded by Justin Kan of Twitch fame, is jumping into the blockchain space today.

The company has raised plenty of money — including $65 million from a16z last September — so rather than an ICO or token sale, this is a consultancy business. Atrium uses machine learning to digitize legal documents and develop applications for client use, and now it is officially applying that to fintech and blockchain businesses.

The division has been operating quietly for months and the scope of work that it covers includes the legality and regulatory concerns around tokens, but also business-focused areas including token utility, tokenomics and general blockchain tech.

“We have a bunch of clients wanting to do token offerings and looking into the legality,” Kan told TechCrunch in an interview. “A lot of our advisory work is around the token offering and how it operates.”

The commitment is such that the company is even accepting Bitcoin and Bitcoin Cash for payments through crypto processing service BitPay.

While the ICO market has quietened over the past year following huge valuation losses market-wide, up to 90 percent in some cases with many ICO tokens now effectively worthless, there’s a new antic

LG hints at gesture interface for smartphone flagship next month

LG has put out a gesture-heavy hint ahead of the annual unveiling of new smartphone hardware at the world’s biggest mobile confab, Mobile World Congress, which kicks off in a month’s time.

The brief video teaser for its forthcoming MWC press event in Barcelona, which was shared today via LG’s social media channels, shows a man’s hand swiping to change on-screen content, including the message “goodbye touch.”

The title of LG’s teaser video includes the name “LG Premiere,” which could be the name of the forthcoming flagship — albeit that would be confusingly similar to the mid-tier LG Premier of yore. So, hopefully the company is going to make that last ‘e’ really count.

Beyond some very unsubtle magic wand sound effects to draw extra attention to the contactless gestures, the video offers very little to go on. But we’re pretty sure LG is not about to pivot away from touchscreens entirely.

Rather, we’re betting on some sort

The facts about Facebook

This is a critical reading of Facebook founder Mark Zuckerberg’s article in the WSJ on Thursday, also entitled The Facts About Facebook.

Yes Mark, you’re right; Facebook turns 15 next month. What a long time you’ve been in the social media business! We’re curious as to whether you’ve also been keeping count of how many times you’ve been forced to apologize for breaching people’s trust or, well, otherwise royally messing up over the years.

It’s also true you weren’t setting out to build “a global company”. The predecessor to Facebook was a ‘hot or not’ game called ‘FaceMash’ that you hacked together while drinking beer in your Harvard dorm room. Your late night brainwave was to get fellow students to rate each other’s attractiveness — and you weren’t at all put off by not being in possession of the necessary photo data to do this. You just took it; hacking into the college’s online facebooks and grabbing people’s selfies without permission.

Blogging about what you were doing as you did it, you wrote: “I almost want to put some of these faces next to pictures of some farm animals and have people vote on which is more attractive.” Just in case there was any doubt as to the ugly nature of your intention. 

The seeds of Facebook’s global business were thus sown in a crude and consentless game of clickbait whose idea titillated you so much you thought nothing of breaching security, privacy, copyright and decency norms just to grab a few eyeballs.

So while you may not have instantly understood how potent this ‘outrageous and divisive’ eyeball-grabbing content tactic would turn out to be — oh hai future global scale! — the core DNA of Facebook’s business sits in that frat boy discovery where your eureka Internet moment was finding you could win the attention jackpot by pitting people against each other.

Pretty quickly you also realized you could exploit and commercialize human one-upmanship — gotta catch em all friend lists! popularity poke wars! — and stick a badge on the resulting activity, dubbing it ‘social’.

FaceMash was antisocial, though. And the unpleasant flipside that can clearly flow from ‘social’ platforms is something you continue to be neither honest nor open enough about. Whether it’s political disinformation, hate speech or bullying, the individual and societal impacts of maliciously minded content shared and amplified using massively mainstream tools you control is now impossible to ignore.

Yet you prefer to play down these human impacts; as a “crazy idea”, or by implying that ‘a little’ amplified human nastiness is the necessary cost of being in the big multinational business of connecting everyone and ‘socializing’ everything.

But did you ask the father of 14-year-old Molly Russell, a British schoolgirl who took her own life in 2017, whether he’s okay with your growth vs controls trade-off? “I have no doubt that Instagram helped kill my daughter,” said Russell in an interview with the BBC this week.

After her death, Molly’s parents found she had been following accounts on Instagram that were sharing graphic material related to self-harming and suicide, including some accounts that actively encourage people to cut themselves. “We didn’t know that anything like that could possibly exist on a platform like Instagram,” said Russell.

Without a human editor in the mix, your algorithmic recommendations are blind to risk and suffering. Built for global scale, they get on with the expansionist goal of maximizing clicks and views by serving more of the same sticky stuff. And more extreme versions of things users show an interest in to keep the eyeballs engaged.

So when you write about making services that “billions” of “people around the world love and use” forgive us for thinking that sounds horribly glib. The scales of suffering don’t sum like that. If your entertainment product has whipped up genocide anywhere in the world — as the UN said Facebook did in Myanmar — it’s failing regardless of the proportion of users who are having their time pleasantly wasted on and by Facebook.

And if your algorithms can’t incorporate basic checks and safeguards so they don’t accidentally encourage vulnerable teens to commit suicide you really don’t deserve to be in any consumer-facing business at all.

Yet your article shows no sign you’ve been reflecting on the kinds of human tragedies that don’t just play out on your platform but can be an emergent property of your targeting algorithms.

You focus instead on what you call “clear benefits to this business model”.

The benefits to Facebook’s business are certainly clear. You have the billions in quarterly revenue to stand that up. But what about the costs to the rest of us? Human costs are harder to quantify but you don’t even sound like you’re trying.

You do write that you’ve heard “many questions” about Facebook’s business model. Which is most certainly true but once again you’re playing down the level of political and societal concern about how your platform operates (and how you operate your platform) — deflecting and reframing what Facebook is to cast your ad business as a form of quasi-philanthropy; a comfortable discussion topic and self-serving idea you’d much prefer we were all sold on.

It’s also hard to shake the feeling that your phrasing at this point is intended as a bit of an in-joke for Facebook staffers — to smirk at the ‘dumb politicians’ who don’t even know how Facebook makes money.

Y’know, like you smirked…

Then you write that you want to explain how Facebook operates. But, thing is, you don’t explain — you distract, deflect, equivocate and mislead, which has been your business’s strategy through many months of scandal (that and worse tactics — such as paying a PR firm that used oppo research tactics to discredit Facebook critics with smears).

Dodging is another special power; such as how you dodged repeat requests from international parliamentarians to be held accountable for major data misuse and security breaches.

The Zuckerberg ‘open letter’ mansplain, which typically runs to thousands of blame-shifting words, is another standard-issue production from the Facebook reputation crisis management toolbox.

And here you are again, ironically enough, mansplaining in a newspaper; an industry that your platform has worked keenly to gut and usurp, hungry to supplant editorially guided journalism with the moral vacuum of algorithmically geared space-filler which, left unchecked, has been shown, time and again, to lift divisive and damaging content into public view.

The latest Zuckerberg screed has nothing new to say. It’s pure spin. We’ve read scores of self-serving Facebook apologias over the years and can confirm Facebook’s founder has made a very tedious art of selling abject failure as some kind of heroic lack of perfection.

But the spin has been going on for far, far too long. Fifteen years, as you remind us. Yet given that hefty record it’s little wonder you’re moved to pen again — imagining that another word blast is all it’ll take for the silly politicians to fall in line.

Thing is, no one is asking Facebook for perfection, Mark. We’re looking for signs that you and your company have a moral compass. Because the opposite appears to be true. (Or as one UK parliamentarian put it to your CTO last year: “I remain to be convinced that your company has integrity”.)

Facebook has scaled to such an unprecedented, global size exactly because it has no editorial values. And you say again now you want to be all things to all men. Put another way that me
