Internet Security

Europe dials up pressure on tech giants over election security

The European Union has announced a package of measures intended to step up efforts and pressure on tech giants to combat democracy-denting disinformation ahead of the EU parliament elections next May.

The European Commission Action Plan, which was presented at a press briefing earlier today, has four areas of focus: 1) Improving detection of disinformation; 2) Greater co-ordination across EU Member States, including by sharing alerts about threats; 3) Increased pressure on online platforms, including to increase transparency around political ads and purge fake accounts; and 4) Raising awareness and critical thinking among EU citizens.

The Commission says 67% of EU citizens are worried about their personal data being used for political targeting, and 80% want improved transparency around how much political parties spend to run campaigns on social media.

And it warned today that it wants to see rapid action from online platforms to deliver on pledges they’ve already made to fight fake news and election interference.

The EC’s plan follows a voluntary Code of Practice launched two months ago, which signed up tech giants including Facebook, Google and Twitter, along with some ad industry players, to some fairly fuzzy commitments to combat the spread of so-called ‘fake news’.

They also agreed to hike transparency around political advertising. But efforts so far remain piecemeal, with — for example — no EU-wide rollout of Facebook’s political ads disclosure system.

Facebook has only launched political ad identification checks plus an archive library of ads in the US, Brazil and the UK so far, leaving the rest of the world to rely on the more limited ‘view ads’ functionality that it has rolled out globally.

The EC said it will be stepping up its monitoring of platforms’ efforts to combat election interference — with the new plan including “continuous” monitoring.

This will take the form of monthly progress reports, starting with a Commission progress report in January and continuing monthly thereafter (against what it slated as “very specific targets”), to ensure signatories are actually purging and disincentivizing bad actors and inauthentic content from their platforms, not just saying they will.

As we reported in September, the Code of Practice looked to be a pretty dilute first effort. But ongoing progress reports could at least help concentrate minds — coupled with the ongoing threat of EU-wide legislation if platforms fail to effectively self-regulate.

Digital economy and society commissioner Mariya Gabriel said the EC would have “measurable and visible results very soon”, warning platforms: “We need greater transparency, greater responsibility both on the content, as well as the political approach.”

Security union commissioner, Julian King, came in even harder on tech firms — warning that the EC wants to see “real progress…”


Can predictive analytics be made safe for humans?

Massive-scale predictive analytics is a relatively new phenomenon, one that challenges both decades of law and consumer thinking about privacy.

As a technology, it may well save thousands of lives in applications like predictive medicine, but if it isn’t used carefully, it may prevent thousands from getting loans, for instance, if an underwriting algorithm is biased against certain users.

I chatted with Dennis Hirsch a few weeks ago about the challenges posed by this new data economy. Hirsch is a professor of law at Ohio State and head of its Program on Data and Governance. He’s also affiliated with the university’s Risk Institute.

“Data ethics is the new form of risk mitigation for the algorithmic economy,” he said. In a post-Cambridge Analytica world, every company has to assess what data it has on its customers and mitigate the risk of harm. How to do that, though, is at the cutting edge of the new field of data governance, which investigates the processes and policies through which organizations manage their data.


“Traditional privacy regulation asks whether you gave someone notice and gave them a choice,” he explains. That principle is the bedrock for Europe’s GDPR law, and for the patchwork of laws in the U.S. that protect privacy. It’s based around the simplistic idea that a datum — such as a customer’s address — shouldn’t be shared with, say, a marketer without that user’s knowledge. Privacy is about protecting the address book, so to speak.

The rise of “predictive analytics,” though, has completely demolished such privacy legislation. Predictive analytics is a fuzzy term, but essentially means interpreting raw data and drawing new conclusions through inference. This is the story of Target’s famous pregnancy-prediction episode, where the retailer recommended pregnancy-related goods to women who had certain patterns of purchases. As Charles Duhigg explained at the time:

Many shoppers purchase soap and cotton balls, but when someone suddenly starts buying lots of scent-free soap and extra-big bags of cotton balls, in addition to hand sanitizers and washcloths, it signals they could be getting close to…
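To make that inference step concrete, here is a deliberately toy sketch: every product, weight, and threshold below is invented for illustration and has nothing to do with Target’s actual model. It only shows how innocuous purchases can combine into a sensitive prediction.

```python
# Toy illustration of inference in predictive analytics. The products,
# weights, and threshold are all invented for this example; they are not
# Target's (or anyone's) actual model.

PREGNANCY_SIGNALS = {
    "unscented_soap": 0.3,
    "cotton_balls_large": 0.25,
    "hand_sanitizer": 0.2,
    "washcloths": 0.25,
}

def pregnancy_score(basket):
    """Sum the weights of signal products present in a shopping basket."""
    return sum(weight for item, weight in PREGNANCY_SIGNALS.items() if item in basket)

# Each purchase is innocuous on its own, but together they cross a threshold.
basket = {"unscented_soap", "cotton_balls_large", "hand_sanitizer", "bread"}
score = pregnancy_score(basket)
flagged = score >= 0.6  # illustrative cut-off, not a calibrated model
```

The point is that no single datum here is sensitive; the sensitive conclusion is inferred from the combination, which is exactly the situation notice-and-choice regimes never anticipated.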


Atrium, Justin Kan’s legal tech startup, launches a fintech and blockchain division

Atrium, the legal startup co-founded by Justin Kan of Twitch fame, is jumping into the blockchain space today.

The company has raised plenty of money — including $65 million from a16z last September — so rather than an ICO or token sale, this is a consultancy business. Atrium uses machine learning to digitize legal documents and develop applications for client use, and now it is officially applying that to fintech and blockchain businesses.

The division has been operating quietly for months and the scope of work that it covers includes the legality and regulatory concerns around tokens, but also business-focused areas including token utility, tokenomics and general blockchain tech.

“We have a bunch of clients wanting to do token offerings and looking into the legality,” Kan told TechCrunch in an interview. “A lot of our advisory work is around the token offering and how it operates.”

The commitment is such that the company is even accepting Bitcoin and Bitcoin Cash for payments through crypto processing service BitPay.

While the ICO market has quietened over the past year following market-wide valuation losses (up to 90 percent in some cases, leaving many ICO tokens effectively worthless), there’s a new antic…


OpenAI built a text generator so good, it’s considered too dangerous to release

A storm is brewing over a new language model, built by non-profit artificial intelligence research company OpenAI, which it says is so good at generating convincing, well-written text that it’s worried about potential abuse.

That’s angered some in the community, who have accused the company of reneging on a promise not to close off its research.

OpenAI said its new natural language model, GPT-2, was trained to predict the next word in a sample of 40 gigabytes of internet text. The end result was a system generating text that “adapts to the style and content of the conditioning text,” allowing the user to “generate realistic and coherent continuations about a topic of their choosing.” The model vastly improves on the first version, producing longer text with greater coherence.
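The task itself — predicting the next word given what came before — can be illustrated with a toy bigram model. GPT-2 is a large neural network and works nothing like this internally; the sketch below only shows the shape of next-word prediction and greedy continuation from a conditioning text.

```python
# Toy bigram model to illustrate next-word prediction. GPT-2 is a large
# neural network trained on 40 GB of text; this only shows the task's shape.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word`, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

def generate(seed, length=5):
    """Greedily extend `seed`, adapting to the conditioning text."""
    out = [seed]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)
```

A real language model replaces the bigram counts with a learned probability distribution over an entire vocabulary, conditioned on a long window of prior text rather than a single word.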

But for every good application of the system, such as bots capable of better dialog and better speech recognition, the non-profit found several harmful ones, like generating fake news, impersonating people, or automating abusive or spam comments on social media.

To wit: when GPT-2 was tasked with writing a response to the prompt, “Recycling is good for the world, no, you could no…”


LG hints at gesture interface for smartphone flagship next month

LG has put out a gesture-heavy hint ahead of the annual unveiling of new smartphone hardware at the world’s biggest mobile confab, Mobile World Congress, which kicks off in a month’s time.

The brief video teaser for its forthcoming MWC press event in Barcelona, which was shared today via LG’s social media channels, shows a man’s hand swiping to change on-screen content, including the message “goodbye touch.”

The title of LG’s teaser video includes the name “LG Premiere,” which could be the name of the forthcoming flagship — though that would be confusingly similar to the mid-tier LG Premier of yore. So, hopefully the company is going to make that last ‘e’ really count.

Beyond some very unsubtle magic wand sound effects to draw extra attention to the contactless gestures, the video offers very little to go on. But we’re pretty sure LG is not about to pivot away from touchscreens entirely.

Rather, we’re betting on some sort…
