Facebook on Thursday revealed that it has taken down more pages linked to Iran and Russia “for coordinated inauthentic behavior on Facebook and Instagram.”
Most of the examples Facebook offers originate in Iran, but some campaigns trace back to Russian actors. These were distinct operations, however, and while it's easy to associate Iran with Russia, Facebook did not identify any coordination between the two.
“However, they used similar tactics by creating networks of accounts to mislead others about who they were and what they were doing,” Facebook explains in a detailed blog post.
The company also elaborated on the kind of action it takes and how it decides to remove bad actors. Rather than banning them immediately, Facebook studies the adversaries, looking into the complexity of their attacks to understand how to mitigate the problem and avoid future occurrences. That’s why Facebook may spend months investigating some of the suspect pages and accounts before removing them.
Facebook said it’s working with other tech companies, academic researchers, and law enforcement when dealing with cyber threats.
Facebook removed 652 pages, groups, and accounts “for coordinated inauthentic behavior that originated in Iran and targeted people across multiple internet services in the Middle East, Latin America, UK, and US.”
In addition to spreading fake news, some of these Facebook properties engaged in “traditional cybersecurity attacks,” including hacking attempts and spreading malware.
The company also removed an unspecified number of pages, groups, and accounts “that can be linked to sources the US government has previously identified as Russian military intelligence services.” These campaigns are unrelated to the Iranian efforts to disseminate fake news, Facebook says.
Furthermore, Facebook says some of the same bad actors it removed before the 2016 election are involved. The recent activity focuses on politics in Syria and Ukraine.
Tesla on Autopilot accelerated into fire truck at 60mph but at least the driver only broke his ankle
Earlier this month, a Tesla in Utah crashed into a stopped fire truck at 60mph. The story got plenty of attention from the media, which prompted billionaire Tesla CEO Elon Musk to have a meltdown on Twitter about biased reporting.
Today, the Associated Press obtained documents on the Utah crash from local police that appear to show the vehicle was under Autopilot control at the time and accelerated for a few seconds before the crash. The driver hit the brakes manually a moment before impact.
Data from the Model S electric vehicle show it picked up speed for 3.5 seconds shortly before crashing into a stopped firetruck in suburban Salt Lake City, the report said. The driver manually hit the brakes a fraction of a second before impact.
Police suggested that the car was following another vehicle and had dropped its speed to 55 mph to match it. They say the leading vehicle then likely changed lanes, and the Tesla automatically sped back up to its preset of 60 mph (97 kph) without detecting the stopped cars ahead of it.
Musk’s tweets after news of the accident first emerged argued that when other cars crash, they don’t get the same kind of attention from the media.
To an extent, he has a point. Humans are extraordinarily bad at driving, and a few crashed Teslas don’t mean that the technology is less safe than human-piloted cars. But the notion that because something is better than what came before it, it shouldn’t be scrutinized is clearly false. More houses burned down before asbestos was a thing, but would minor lung issues due to asbestos be the kind of story that newspapers shouldn’t have covered in the 1930s? Lead pipes brought clean water to tens of millions of households, driving a new wave of sanitary living conditions that enabled urbanization, but their adverse side effects are still worth studying.
Tesla, by its own choice, and in no small part because of Musk’s never-ending PR tour, is one of the front-runners in deploying driver-assistance technologies in cars. Everything from the name — Autopilot — to the promo videos of hands-free driving gives the impression that less attention is needed to drive a Tesla than a regular vehicle. News stories about a new technology failing in a sometimes-fatal way aren’t meant to say that all driver-assist technologies are bad and should be banned; they raise awareness of the side effects of trusting one particular new technology (Autopilot) too much, and occasionally raise the question of whether there’s a slower but safer way of rolling out these new technologies to the public.
If Musk is really invested in persuading people that his cars are safer, Tesla should release far more data on Autopilot’s safety record. There’s currently one public statistic comparing the safety of Tesla vehicles before and after Autopilot, and NHTSA, which first released the number, has since said that it’s flawed at best.
Human drivers are bad. Anyone who has tried to drive in the left lane of I-95 can tell you that, and the fact that people die in car crashes due to human error on a daily basis isn’t news anymore. Reporting on the failure of a new technology doesn’t imply that the new technology is worse than the status quo; it simply makes people aware of the problems of adopting new technologies before they’re fully ready for the mainstream.