Categories
Privacy

FTC probing whether Facebook violated consent decree, report says

In November 2011, Facebook settled charges by the Federal Trade Commission that it deceived consumers by advising them they could keep their information on the social network private and then allowing it to be shared and made public.

In the wake of the revelations about Cambridge Analytica, the FTC reportedly is examining whether Facebook violated the terms of the settlement.

Cambridge Analytica, a voter-profiling firm, derived data from more than 50 million Facebook profiles that it accessed via a third-party app. A data scientist at Cambridge University harvested the data starting in June 2014.

That may have contravened the 2011 settlement. Among the charges by the FTC that led to the settlement:

Facebook represented that third-party apps that users installed would have access only to the user information they needed to operate. In fact, the apps could access nearly all of users’ personal data – data the apps didn’t need.

The FTC further charged:

Facebook told users they could restrict sharing of data to limited audiences – for example with “Friends Only.” In fact, selecting “Friends Only” did not prevent their information from being shared with third-party applications their friends used.

The settlement barred Facebook from misrepresenting the privacy or security of users’ personal information.

Categories
Privacy

How the government uses social media to monitor protestors

The death of Freddie Gray in April 2015 while in the custody of Baltimore police touched off a wave of protests in that city about civil rights and the department’s treatment of African-Americans.  Days later, as protests mounted, police monitoring social media noticed that kids from a local high school planned to skip class to join a protest at a nearby mall. The department deployed officers to intercept and turn back the students.

The summary of the surveillance comes courtesy of Geofeedia, a Chicago company that sells software that allows users, including police departments across the U.S., to track the whereabouts of people based on searches of data posted to Twitter, Facebook, Instagram and other social networks. According to marketing materials posted by Geofeedia on its website, location-based monitoring of social media activity allowed police in Baltimore “to stay one step ahead of the rioters” and, by running social media photos through facial recognition software, “discover rioters with outstanding warrants and arrest them directly from the crowd.”


We know of the monitoring thanks to the American Civil Liberties Union, which obtained the information via records requests to law enforcement agencies in California. A report released Oct. 11 by the group documents how social media companies provided Geofeedia with user data drawn directly from their servers.

Though Facebook and Instagram later cut off the feeds, both companies had provided access to data that allowed Geofeedia and its police customers to sort posts by topic, hashtag, or location. Twitter, which also has since ended the practice, provided searchable access to its database of tweets.
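
To make concrete what that kind of access enables, here is a minimal sketch, in Python, of location- and hashtag-based filtering over a feed of posts. The Post record, the coordinates, and the monitor function are illustrative assumptions only; this is not Geofeedia’s software or any social network’s actual API.

```python
from dataclasses import dataclass
from typing import List, Set, Tuple

@dataclass
class Post:
    """Simplified stand-in for a geotagged social media post from a data feed."""
    user: str
    text: str
    hashtags: List[str]
    lat: float
    lon: float

def in_bounding_box(post: Post, south: float, west: float,
                    north: float, east: float) -> bool:
    """True if the post's geotag falls inside the rectangular area."""
    return south <= post.lat <= north and west <= post.lon <= east

def monitor(posts: List[Post], box: Tuple[float, float, float, float],
            watched: Set[str]) -> List[Post]:
    """Return posts inside `box` that carry any of the watched hashtags."""
    south, west, north, east = box
    return [
        p for p in posts
        if in_bounding_box(p, south, west, north, east)
        and watched & {h.lower() for h in p.hashtags}
    ]

if __name__ == "__main__":
    # Entirely made-up posts; coordinates roughly span downtown Baltimore.
    feed = [
        Post("user_a", "Heading downtown", ["protest"], 39.29, -76.61),
        Post("user_b", "Lunch break", ["food"], 39.30, -76.60),
    ]
    hits = monitor(feed, (39.20, -76.70, 39.40, -76.50), {"protest"})
    print([p.user for p in hits])  # -> ['user_a']
```

Once a feed exposes geotags and hashtags, combining the two filters takes only a few lines of code, which is part of why the ACLU’s concern centers less on the sophistication of the tool than on who is given the feed.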

As the ACLU noted, the social networks that supplied data for use in monitoring all have expressed publicly their support for activism and free speech.

“Mark Zuckerberg endorsed Black Lives Matter and expressed sympathy after Philando Castile’s killing, which was broadcast on Facebook Live,” Matt Cagle, an attorney for the ACLU who authored the report, wrote in a blog post. “Twitter’s CEO Jack Dorsey went to Ferguson. Above all, the companies articulate their role as a home for free speech about important social or political issues.”

“Social media monitoring is spreading fast and is a powerful example of surveillance technology that can disproportionately impact communities of color,” Cagle added.

For its part, Geofeedia says it has protections in place to ensure that its technology is not used to infringe civil rights.

Data feeds from the companies have legitimate applications. Investors, for example, use them to learn early of problems that can affect stocks, such as someone tweeting that a friend became ill after eating at Chipotle. The data also can help in finding missing persons. But giving it to the government for use in surveillance can chill the exercise of basic freedoms.

The ACLU is calling on social networks to adhere to guidelines that include a prohibition on supplying data access to developers who are providing software for government surveillance. The networks also should develop clear and open policies that bar use of data feeds for surveillance, and should monitor developers to spot violations, the ACLU says.

Categories
Privacy

The US and EU have three months to come to terms on trans-Atlantic data transfers

The United States and Europe have three months to work out a procedure for the transfer of personal data to the US from the EU, representatives of an independent advisory body that brings together data protection regulators from the EU’s member states announced on Saturday.

The announcement, by the EU’s Article 29 Working Party, gives guidance to businesses and other organizations that send data ranging from posts on social media to personnel records across the Atlantic following a ruling in October by the European Court of Justice (ECJ) invalidating a so-called safe harbor that had governed such transfers since 2000.

The ruling by the ECJ highlighted the cross-border flow of data and raised anew questions about the protections for privacy in a digital economy. It also upended the expectations of more than 4,000 companies, including tech giants such as Facebook, Amazon, and Google, that had certified compliance with the safe harbor to relay data from Europe to the US.

The statement by the Article 29 Working Party aims to allay companies’ fears that the ECJ’s ruling might spur regulators in Europe to bring enforcement actions against them for mishandling data transfers. In the meantime, companies can use contracts to assure privacy safeguards or adopt rules that protect the privacy of data transfers among corporate subsidiaries.

Officials on both sides of the Atlantic also say they will continue negotiations on a pact that can replace the safe harbor. If the sides cannot agree by the end of January, regulators in each of the EU’s member states will “take all necessary and appropriate action, including coordinated enforcement actions,” the Working Party said in its statement.

“Transfers of personal data are an essential element of the transatlantic relationship,” the group added. “The EU and the US are each other’s most important trading partners, and data transfers, increasingly, form an integral part of their commercial exchanges.”

The safe harbor reconciled differences in privacy protection between the US and the EU. EU law holds that citizens have a fundamental right to privacy with respect to the processing of their data; the US regulates privacy by sector but lacks a national scheme.

The ECJ nullified the safe harbor in resolving a referral from Ireland’s high court, which sent the matter to the ECJ following a ruling by the republic’s data protection commission (DPC) that the safe harbor precluded investigation of a complaint alleging a violation.

The case began in June 2013,  when Max Schrems, then a law student at the University of Vienna, filed a complaint with the DPC charging that Facebook, which maintains its European headquarters in Dublin, sent at least some of the information he and his fellow citizens of the EU posted on the site to servers the company operates in the United States.

Schrems premised his complaint on leaks by Edward Snowden, who documented how the National Security Agency obtained information about users from Facebook, Google, and other tech firms. The surveillance, Schrems asserted, contravened the EU’s protections for personal data.

The ECJ agreed. According to the court, the National Security Agency’s ability to compel tech firms to hand over electronic communications provided by their users “must be regarded as compromising the essence of the fundamental right to respect for private life.”

In January 2014, the Obama administration and tech companies announced a deal that allows the companies to disclose information about data they are required to share with the government.

Categories
Privacy

Shutterfly lawsuit highlights concerns with the use of facial recognition and the problem with a ‘Shazam’ for faces

A lawsuit pending in a federal court in Chicago may answer whether tagging and storing photos of someone without that person’s permission violates a state law that regulates the collection and use of biometric information.

That’s the hope of Brian Norberg, a Chicago resident, who in June sued Shutterfly, an online business that lets customers turn photos into books, stationery, cards and calendars. The class action represents the latest in a series of challenges to the use of facial recognition and other technologies that record our unique physical attributes.

Norberg, who claims never to have used Shutterfly, charges that between February and June, someone else uploaded at least one photo of him to Shutterfly and 10 more to the company’s ThisLife storage service. According to Norberg, the company created and stored a template for each photo based on such biological identifiers as the distance between his eyes and ears. The service allegedly prompted the person who uploaded the images to also tag them with Norberg’s first and last names—all without Norberg’s permission.

That, charges Norberg, contravened the state’s Biometric Information Privacy Act (BIPA), a law enacted seven years ago that bars businesses from collecting a scan of someone’s “hand or face geometry,” a scan of their retina or iris, or a fingerprint or voiceprint, without their consent. The law authorizes anyone whose biometrics are used illegally to sue for as much as $5,000 per violation.

In July, Shutterfly asked U.S. District Judge Charles Norgle Sr. to dismiss the lawsuit. According to the company, the BIPA specifically excludes photographs and information derived from them. And, even if the law were unclear, says Shutterfly, the legislature intended it to apply to the use of biometrics to facilitate financial transactions and consumer purchases, not to photo-sharing.

“Scanning photos to allow users to organize their own photos is a far cry from the biometric-facilitated financial transactions and security screenings BIPA is aimed at—such as the use of finger-scanning technology at grocery stores, gas stations, or school cafeterias,” the company asserted in court papers.

In a rejoinder filed last Friday, Norberg says that creating templates based on scans of facial features, not the photos themselves, violates the BIPA. “The resulting face templates—not the innocuous photographs from which they were derived, but the resulting highly detailed digital maps of geometric points and measurements—are ‘scans of face geometry’ and thus fall within the BIPA’s definition of ‘biometric identifiers,’” he wrote.

“By [Shutterfly’s] logic, nothing would stop them from amassing a tremendous, Orwellian electronic database of face scans with no permission whatsoever so long as the data base were derived from photographs,” Norberg added. “And indeed, that appears to be exactly what they are doing.”

Of course, facial recognition technology is used widely already. As Ben Sobel, a researcher at the Center on Privacy & Technology at Georgetown Law, explained recently in The Washington Post:

“Facebook and Google use facial recognition to detect when a user appears in a photograph and to suggest that he or she be tagged. Facebook calls this ‘Tag Suggestions’ and explains it as follows: ‘We currently use facial recognition software that uses an algorithm to calculate a unique number (“template”) based on someone’s facial features… This template is based on your profile pictures and photos you’ve been tagged in on Facebook.’ Once it has built this template, Tag Suggestions analyzes photos uploaded by your friends to see if your face appears in them. If its algorithm detects your face, Facebook can encourage the uploader to tag you.”
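
For readers curious what a face “template” might look like in code, here is a minimal sketch: a hypothetical measure_face function stands in for a real face-analysis model, and tag suggestion is reduced to a distance comparison against stored templates. It is illustrative only, under those assumptions, and is not Facebook’s Tag Suggestions or Shutterfly’s system.

```python
import math
from typing import Dict, List

def measure_face(image_name: str) -> List[float]:
    """Hypothetical stand-in for a face-analysis model. A real system would
    derive measurements from the image itself (e.g., relative distances
    between eyes, nose, and ears); here we just return canned numbers."""
    canned = {
        "profile_photo.jpg": [0.42, 0.31, 0.77],
        "party_photo.jpg":   [0.41, 0.30, 0.78],
        "stranger.jpg":      [0.10, 0.65, 0.20],
    }
    return canned[image_name]

def distance(a: List[float], b: List[float]) -> float:
    """Euclidean distance between two templates."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def suggest_tags(known: Dict[str, List[float]], upload: str,
                 threshold: float = 0.05) -> List[str]:
    """Return names whose stored templates closely match the uploaded photo."""
    template = measure_face(upload)
    return [name for name, stored in known.items()
            if distance(stored, template) <= threshold]

if __name__ == "__main__":
    # Build a template from one labeled photo, then match new uploads against it.
    templates = {"Person A": measure_face("profile_photo.jpg")}
    print(suggest_tags(templates, "party_photo.jpg"))  # -> ['Person A']
    print(suggest_tags(templates, "stranger.jpg"))     # -> []
```

The dispute in Norberg’s suit turns on exactly this distinction: he argues that the stored measurements, not the photographs they are derived from, are the “scan of face geometry” the BIPA regulates.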

Facebook also is defending a class action filed last spring that charges the company’s use of facial-recognition software to identify users violates the BIPA. Facebook users have uploaded at least 250 billion photos to the social networking site and continue to do so at a rate of 350 million images a day, reports Sobel, who adds that Facebook’s tagging occurs by default, whereas Google’s requires you to opt in to it.

According to the Federal Trade Commission, companies that use facial recognition technologies should simplify choices for consumers and increase the transparency of their practices. Social networks should provide users with “a clear notice—outside of a privacy policy—about how the feature works, what data it collects and how it will use the data,” the agency wrote in a report published in October 2012. Significantly, social networks should give users an easy way to opt out of having their biometric data collected and the ability to turn off the collection at any time, the agency advised.

Still, that may not cover someone like Norberg, who says he never used Shutterfly. Nor would it prevent an app akin to a Shazam for faces that would let users discover someone’s identity (and possibly more, such as an address) simply by photographing them, regardless of whether the subject knows or consents. Situations like those would require the company to obtain the subject’s express affirmative consent—meaning that consumers would have to affirmatively choose to participate in such a system—the FTC noted.

And those are commercial users of biometrics. The photos of at least 120 million people sit in databases—many built from images submitted with applications for driver’s licenses and passports—that can be searched by police and other law enforcement agencies. Use of biometrics by the government raises additional concerns, including the need to ensure that a suspect has been detained lawfully before police can photograph the person or swab for DNA.

At a July 2012 hearing that examined the use of facial-recognition technology, Senator Al Franken of Minnesota, the senior Democrat on the Judiciary Subcommittee on Privacy, Technology and the Law, noted that in the era of J. Edgar Hoover, the FBI used wiretaps sweepingly with little regard for privacy.

Congress later passed the Wiretap Act, which requires police to obtain a warrant before wiretapping and limits the use of wiretaps to investigations of serious crimes. “I think that we need to ask ourselves whether Congress is in a similar position today as it was 50 or 60 years ago—before passage of the Wiretap Act,” Franken said.

Categories
Law Privacy

Lawsuit over hacking of Facebook account timely, appeals court rules

A woman whose former boyfriend allegedly hacked into her email and Facebook accounts, then sent and posted messages disparaging her sex life, had two years from the discovery of each incident to sue for damages, an appeals court in New York City has ruled.

In January 2014, Chantay Sewell sued Phil Bernardin, with whom she had a romantic relationship for nine years starting in 2002, charging him with gaining access to her AOL email and Facebook accounts without her permission in violation of federal law.

Sewell alleged she discovered the intrusion into her AOL account after being unable to log in to her email on Aug. 1, 2011. The following February, Sewell discovered she could no longer log in to her Facebook account because her password had been changed.

A federal trial court in Brooklyn dismissed Sewell’s lawsuit against Bernardin after concluding she failed to file it within the two-year limitations periods set forth in both the Computer Fraud and Abuse Act and the Stored Communications Act, the laws that Sewell charged Bernardin with violating.

But the U.S. Court of Appeals for the 2nd Circuit disagreed with respect to Sewell’s Facebook-related claim. Writing for a three-judge panel in a ruling released Aug. 4, Judge Robert Sack noted that Sewell’s discovery of the trespass on her AOL account did not mean she should have discovered the alleged tampering with her Facebook account then, too.

“At least on the facts as alleged by the plaintiff, it does not follow from the fact that the plaintiff discovered that one such account—AOL e-mail—had been compromised that she thereby had a reasonable opportunity to discover, or should be expected to have discovered, that another of her accounts—Facebook—might similarly have become compromised,” Sack wrote.

That means Sewell’s lawsuit with respect to the breach of her Facebook account was timely, noted the court, which reversed the trial court’s dismissal of Sewell’s Facebook-related claim.

The laws under which Sewell sued differ slightly in their formulation of when the limitations period begins, Sack explained. The limitations period under the Computer Fraud and Abuse Act, which authorizes someone whose computer has been accessed without authorization to file a civil lawsuit against the intruder, began to run when Sewell learned that her account had been impaired.

The limitations period under the Stored Communications Act, which authorizes a person whose email, postings or other stored messages have been accessed without authorization to sue, starts when the victim discovers, or has a reasonable opportunity to discover, the intrusion.

The limitations periods under both laws may be insufficient in some situations, the court noted. “Even after a prospective plaintiff discovers that an account has been hacked, the investigation necessary to uncover the hacker’s identity may be substantial,” wrote Sack. “In many cases, we suspect that it might take more than two years.”

Categories
Life Privacy Tech

Facebook loses appeal over search warrants

Facebook cannot challenge the constitutionality of a search warrant on its users’ behalf prior to the government’s executing the warrant, an appeals court in New York has ruled in a decision that delineates a boundary for Internet privacy.

The ruling follows a lawsuit by Facebook to void 381 search warrants the company received two years ago from the Manhattan district attorney’s office, which obtained them in connection with an investigation into Social Security disability claims by a group of retired firefighters and police officers whom the DA suspected of feigning illnesses they attributed to the aftermath of the 9/11 attacks.

Upon receiving the warrants, which sought information derived from the users’ accounts, Facebook asked the DA to withdraw them or to strike a provision that directed the company to refrain from disclosing their existence to users whose postings were to be searched. The DA’s office asserted the confidentiality requirement was needed to prevent the suspects from destroying evidence or fleeing the jurisdiction if they learned they were under investigation.

After the DA declined to withdraw the warrants, Facebook sued to either quash them or compel the DA to remove the non-disclosure provision. The trial court sided with the DA, and Facebook appealed.

The appeals court affirmed that the legality of the searches could be determined only after the searches themselves were conducted. “There is no constitutional or statutory right to challenge an alleged defective warrant before it is executed,” Judge Dianne Renwick wrote for a unanimous panel of the court’s appellate division in a ruling released July 21. “We see no basis for providing Facebook a greater right than its customers are afforded.”

The constitutional requirement that a warrant can issue only upon a showing of probable cause as determined by a judicial officer helps to ensure the government does not exceed its authority when requesting a search warrant and eliminates the need for a suspect to make a motion to void the warrant before it can be served, the court noted. “Indeed… the sole remedy for challenging the legality of a warrant is by a pretrial suppression motion which, if successful, will grant that relief,” Renwick explained.

According to Facebook, which was joined in the appeal by Google, Twitter, Microsoft and other tech industry firms, the federal Stored Communications Act also gave the company the right to challenge the warrants. But that law, which protects the privacy of email and other communications stored on servers belonging to ISPs, authorizes ISPs to challenge subpoenas and court orders but not warrants obtained from a judicial officer based on a showing of probable cause, the court noted.

Despite its ruling, the court agreed with Facebook that the 381 warrants swept broadly and suggested the users themselves may have grounds for suppression. “Facebook users share more intimate personal information through their Facebook accounts than may be revealed through rummaging about one’s home,” wrote Renwick. “These bulk warrants demanded ‘all’ communications in 24 broad categories from the 381 targeted accounts. Yet, of the 381 targeted Facebook users accounts only 62 were actually charged with any crime.”

Though civil liberties groups hoped the appeal might bolster protections for Internet privacy, experts said the ruling makes sense as a matter of law. As Orin Kerr, a professor of criminal procedure at George Washington University Law School who has written extensively about privacy and the Internet, wrote in The Washington Post:

“Think about how this plays out in an old-fashioned home search. If the cops show up at your door with a warrant to search your house, you have to let them search. You can’t stop them if you have legal concerns about the warrant. And if a target who is handed a warrant can’t bring a pre-enforcement challenge, then why should Facebook have greater rights to bring such a challenge on behalf of the targets, at least absent legislation giving them that right?”

Still, “that doesn’t mean the warrants were valid,” added Kerr, who suggested that the defendants themselves are likely to challenge the sweep of the material seized from their Facebook accounts, if they haven’t already.

For its part, Facebook disagreed with the ruling but said the company had not decided whether to appeal. “We continue to believe that overly broad search warrants—granting the government the ability to keep hundreds of people’s account information indefinitely—are unconstitutional and raise important concerns about the privacy of people’s online information,” Jay Nancarrow, a spokesman for the company, told the Times.

The DA’s office noted that the investigation led to the indictment of 134 people and alleged hundreds of millions of dollars in fraud. “In many cases, evidence on [the suspects’] Facebook accounts directly contradicted the lies the defendants told to the Social Security Administration,” Joan Vollero, a spokeswoman for the district attorney’s office, said in a statement.


Categories
Law

Facebook posts cannot be threats without intent, Supreme Court rules

The Supreme Court on Monday narrowed the circumstances in which someone who posts threats on Facebook or other social media can be held criminally liable for them.

In an 8 to 1 ruling, the court overturned the conviction of Anthony Elonis, a Pennsylvania man who was found guilty in 2011 of threatening his estranged wife, former co-workers and others in a series of posts on his Facebook page.

The musings, which contained violent language and images, earned Elonis, writing under the pseudonym “Tone Dougie,” a sentence of 44 months in prison for violating a federal law that bars “transmitting in interstate commerce” a threat to injure another person or group of people.

On appeal, Elonis contended that to be criminal—and otherwise beyond the protection of the First Amendment—the threats required a subjective intent that Elonis claimed he lacked. According to Elonis, the trial court erred when it instructed a jury that a statement constitutes a criminal threat when a “reasonable person” would interpret the statement as “a serious expression” of an intent to inflict injury.

The Court agreed, noting that to be criminal, conduct must derive from a defendant’s mental state and that negligence alone is insufficient to support liability. “Federal criminal liability generally does not turn solely on the results of an act without considering the defendant’s mental state,” Chief Justice Roberts wrote for the majority. “That understanding ‘took deep and early root in American soil’ and Congress left it intact here… ‘wrongdoing must be conscious to be criminal.’” (citation omitted)

Though the court did not discuss the implications for free speech raised by the appeal, the American Civil Liberties Union and other groups had charged that the instruction insisted on by the trial court would discourage speech protected by the First Amendment.

For its part, the government contended that requiring a subjective intent, as Elonis urged, would undermine the goal of protecting people from fear of violence regardless of whether the person who threatens them intends his words to be harmless.

The Court limited its opinion to Elonis’ intent. “Having liability turn on whether a ‘reasonable person’ regards the communication as a threat—regardless of what the defendant thinks—‘reduces culpability on the all-important element of the crime to negligence,’” wrote Roberts. “We ‘have long been reluctant to infer that a negligence standard was intended in criminal statutes’… Under these principles, ‘what [Elonis] thinks’ does matter.” (citations omitted)

Categories
Law

On Facebook, distinguishing art from assault

When you post something online, what’s the difference between making a threat and striking a pose?

The Supreme Court on Monday will hear arguments in a case that raises that question. It involves a challenge by a Pennsylvania man to his conviction in 2011 for threatening his wife, his former co-workers and others in a series of posts to his Facebook page.

After his wife left him and he lost his job at an amusement park, Anthony Elonis adopted the pseudonym “Tone Dougie” and published musings and lyrics that he says he penned not as a statement of his beliefs but solely as therapy for his pain.

One post, which he published two days after being fired, read:

Y’all saying I had access to keys for the f#$king gates, that I have sinister plans for all my friends and must have taken home a couple. Y’all think it’s too dark and foggy to secure your facility from a man as mad as me. You see, even without a paycheck I’m still the main attraction. Whoever thought the Halloween haunt could be so fucking scary?

Another, which Elonis posted after his wife obtained a protection order, stated:

Fold up your protection-from-abuse order and put it in your pocket. Is it thick enough to stop a bullet? Try to enforce an order that was improperly granted in the first place.

That and other writings earned Elonis a sentence of 44 months in prison for violating a federal law that prohibits “transmitting in interstate commerce” a threat to injure another person or group of people.

Threats of violence against a particular person or group of people – so-called true threats – are not protected by the First Amendment.

At trial, Elonis asserted that his postings were similar to lyrics by rappers such as Eminem, who in songs has fantasized about killing his ex-wife. With that in mind, Elonis asked the judge to instruct jurors they could convict him only if they found that Elonis intended to communicate a threat.

However, the court instructed the jury that a statement constitutes a true threat — and thus beyond the protection of the Constitution — when a “reasonable person” would interpret the statement as “a serious expression” of an intent to inflict injury.

On appeal, Elonis contends that true threats require a subjective intent to threaten another person. That’s especially true online where messages may be seen by anyone, according to the American Civil Liberties Union and other groups. As the groups write in a friend-of-the-court brief:

A message posted to a publicly available website or mailing list is potentially viewable by anyone with an Internet connection anywhere in the world. A speaker may post a statement online with the expectation that a relatively small number of people will see it, without anticipating that it could be read – and understood very differently – by a much broader audience.

An objective test for online communication “would inevitably chill constitutionally protected speech, as speakers would bear the burden of accurately anticipating the potential reaction of unfamiliar listeners or readers,” the groups say.

For its part, the Justice Department, which is pressing the court to uphold the conviction, argues that requiring a subjective intent to threaten would undermine the law’s goal of protecting people from a fear of violence regardless of whether the speaker intended the statement to be harmless.