Blog Archives

Political campaigns can ask for new tools from Facebook to help ward off hackers

We’re less than two months away from the midterm elections, and Facebook has decided to roll out a new suite of tools to protect campaigns from hackers.

It’s a new layer of cyber protection that Facebook is making available as a pilot program open to any state or federal campaign. Under the program, per an NBC report, “campaigns as well as campaign committees that opt in to the program would be designated potential high-priority users and be able to take advantage of expedited troubleshooting if they detect any unusual behavior involving their accounts.”

It’s all meant to help spot dubious activity sooner and give Facebook even more lead time to develop a response or take action.

Via a company blog post today, Facebook’s head of cybersecurity policy Nathaniel Gleicher explained that page administrators can apply to participate in the program at politics.fb.com/campaignsecurity. Once enrolled, they can add others from their campaign or committee, and Facebook says it will help officials adopt “our strongest account security protections, like two-factor authentication, and monitor for potential hacking threats.”

“If we discover an attack against one campaign official, we can review and protect other accounts that are enrolled in our program and affiliated with that same campaign,” Gleicher says. “As we detect abuse, we will continue to share relevant information with law enforcement and other companies so we can maximize our effectiveness. And, we are continually assessing how this pilot and our other security programs might be expanded to future elections and other users, such as government officials.”

Facebook didn’t provide a lot of detail about what this new security layer entails, probably for obvious reasons. As for what it has detailed, the company has already said it’s banned hundreds of fake accounts and pages in the lead-up to the midterms, some of which the NBC report notes behaved similarly to the Russia-backed Internet Research Agency that caused mischief around the 2016 presidential campaign.

Facebook CEO Mark Zuckerberg is also on record saying the company was caught off-guard by how its tools could be misused by hackers and related groups, which makes today’s news another step in trying to atone for its inaction in recent years, as well as another proactive move toward ensuring its network is co-opted for nefarious purposes as little as possible going forward.

Time will tell, of course, whether that goal is too lofty or whether the effort actually makes the service better overall. In a post along these lines on his personal Facebook page a few days ago, Zuckerberg wrote that “In 2016, we were not prepared for the coordinated information operations we now regularly face. But we have learned a lot since then and have developed sophisticated systems that combine technology and people to prevent election interference on our services.

“This effort is part of a broader challenge to rework much of how Facebook operates to be more proactive about protecting our community from harm and taking a broader view of our responsibility overall.”


Sorry, Sony Music, you don’t own the rights to Bach’s music on Facebook

Public shaming forces publisher to abandon ridiculous claim to classical music.

Facebook is using a meme-sniffing AI to hunt down offensive posts

Policing hate speech and offensive content on a platform as large as Facebook is a big challenge. Filtering for nasty words and phrases is simple enough, but in the age of memes it’s a lot more difficult for the website to detect sensitive posts without human input. To make things a bit easier, Facebook is deploying an AI watchdog that can sniff out bad posts all on its own.

The AI, which can sift through an immense amount of data in a very short period of time, is actually capable of reading text that’s been overlaid on an image or video, and it understands several languages. It’s called Rosetta, and Facebook took some time to explain how it works in a new blog post.

The AI uses an algorithm to detect which regions of an image or video likely contain text, then breaks the suspected text into words which it interprets. The algorithm has to be versatile enough to tackle a number of different languages, including languages like Arabic which are written right-to-left.
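The detect-then-recognize flow described above can be sketched in miniature. In this toy Python sketch, both stages are stubbed with trivial logic purely to show the pipeline shape; the real Rosetta uses learned convolutional models for each stage, and all function names and data structures here are hypothetical.

```python
from typing import List, Tuple

# A toy "image": each string stands in for one row of pixels.
ToyImage = List[str]


def detect_text_regions(image: ToyImage) -> List[Tuple[int, int]]:
    """Stage 1: propose regions likely to contain text.
    Toy stand-in: any contiguous run of non-blank rows is one region."""
    regions, start = [], None
    for i, row in enumerate(image + [""]):  # sentinel row flushes the last region
        if row.strip() and start is None:
            start = i
        elif not row.strip() and start is not None:
            regions.append((start, i))
            start = None
    return regions


def recognize_words(image: ToyImage, region: Tuple[int, int]) -> List[str]:
    """Stage 2: interpret a region's contents as words.
    Toy stand-in: split the region's rows on whitespace."""
    words = []
    for row in image[region[0]:region[1]]:
        words.extend(row.split())
    return words


def extract_text(image: ToyImage) -> List[str]:
    """Full pipeline: detect candidate regions, then recognize each one."""
    words = []
    for region in detect_text_regions(image):
        words.extend(recognize_words(image, region))
    return words


meme = [
    "",
    "ONE DOES NOT SIMPLY",
    "",
    "MODERATE AT SCALE",
    "",
]
print(extract_text(meme))
# → ['ONE', 'DOES', 'NOT', 'SIMPLY', 'MODERATE', 'AT', 'SCALE']
```

The separation matters in practice: the detector can stay language-agnostic (it only finds boxes), while the recognizer carries the language-specific work, including right-to-left scripts.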

Rosetta was trained on both human-annotated and artificially generated images. The company notes that the manual approach does not scale as more languages are added, so it plans to rely solely on “synthetic” generation to help the AI continue to learn and improve.

When policing videos, the AI could grab individual frames of video and apply the same logic, but Facebook says this approach wouldn’t work long-term.

“The naive approach of applying image-based text extraction to every single video frame is not scalable, because of the massive growth of videos on the platform, and would only lead to wasted computational resources,” the company says. “Recently, 3D convolutions have been gaining wide adoption given their ability to model temporal domain in addition to spatial domain. We are beginning to explore ways to apply 3D convolutions for smarter selection of video frames of interest for text extraction.”

Facebook has increasingly leaned on machine learning to help improve the platform, and it looks like content moderation on Facebook will also be the domain of AI before long.

Facebook punishes liberal news site after fact check by right-wing site

Fact check of article on Brett Kavanaugh’s abortion views hinges on word “said.”

Facebook CEO Confirms the Social Network Followed Apple’s Lead in Removing Alex Jones From the Service

Facebook CEO Mark Zuckerberg admits that Apple’s decision to remove Alex Jones from its App Store and Podcasts spurred Facebook to also remove Jones from the social platform.