Categories
Privacy Tech

The Internet of Things marks an anniversary for privacy

This September marks two years since the Federal Trade Commission ordered TRENDnet, a California-based maker of surveillance cameras and networking devices, to refrain from misrepresenting the security of its devices after feeds from hundreds of consumers’ cameras became public on the Internet.

According to the FTC, the company failed to use reasonable security to design and test software for its SecurView cameras. The omission allowed hackers to obtain feeds for roughly 700 cameras that showed babies asleep in their cribs, children playing, and adults coming and going.

The case, which TRENDnet settled by agreeing to strengthen digital security in its products and to implement a program that reduces risks to privacy, represented the first enforcement action by the FTC involving a consumer device that sends and receives data over the Internet, also known as the Internet of Things (IoT).

From mattresses that measure whether we toss and turn at night, to refrigerators that tell the grocer when it’s time to restock, to fitness trackers that encircle our wrists, the IoT represents a networking of everyday devices to improve—in theory, at least—how we live and work. The IoT includes meters that allow electric utilities to measure usage, monitors that give doctors access to our health data 24/7, and carpets and walls that detect when someone has fallen.

Though estimates vary, there are roughly 4.9 billion connected devices in the world, up 30% from 2014, according to Gartner, which projects 25 billion such devices by 2020. Data from mobile devices alone reached 2.5 exabytes per month (an exabyte is one billion gigabytes) last year, up 69 percent from a year earlier, and is expected to exceed 24.3 exabytes per month by 2019, according to Cisco.
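Those projections imply a steep compound growth rate. As a back-of-the-envelope check (a rough sketch using the figures in the text; the constant-compounding model is an illustrative assumption, not Cisco's methodology):

```python
# Rough check of the growth rate implied by Cisco's mobile-data figures.
# Numbers come from the text; steady annual compounding is assumed here
# purely for illustration.
start = 2.5   # exabytes per month, 2014
end = 24.3    # exabytes per month, projected for 2019
years = 5

cagr = (end / start) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")
```

Run as written, the figure works out to a growth rate of well over 50 percent per year, which is why forecasters describe mobile data traffic as roughly decupling over the five-year span.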

Or, as a character on the HBO series “Silicon Valley” exclaims: “Ninety-two percent of the world’s data has been created in the last two years alone!”

Devices can be difficult to secure. Seventy percent of the most common ones that constitute the IoT contain serious vulnerabilities, a study last year by Hewlett-Packard found. But what matters as much if not more is safeguarding the flood of data itself and ensuring that consumers know the terms of the exchange. Dominique Guinard, co-founder and chief technical officer of Evrythng, a maker of platforms that tie devices together, observed recently in AdvertisingAge:

“In the data-driven world of IoT, the data that gets shared is more personal and intimate than in the current digital economy. For example, consumers have the ability to trade protected data such as health and medical information through their bathroom scale, perhaps for a better health insurance premium. But what happens if a consumer is supposed to lose weight, and ends up gaining it instead? What control can consumers exert over access to their data, and what are the consequences?”

Guinard envisions contracts between consumers and manufacturers that adjust over time and address what happens when data becomes unfavorable to the consumer. The FTC has discussed similar approaches. In a report published last January, the agency presented results of a workshop at which participants examined security for the IoT as measured by Fair Information Practices, a code established in 1973 by the U.S. Department of Health, Education and Welfare and later adopted by the Organization for Economic Cooperation and Development that has provided a framework for thinking about privacy ever since.

At the workshop the FTC and participants focused on the application of four practices as they pertain to the IoT: security, data minimization, notice, and choice. Participants stressed the benefit of so-called security by design, which holds that companies build security into devices at the outset rather than as an afterthought. Minimization refers to companies imposing reasonable limits on collection and retention of data. Less is more, you might say.

Notice refers to how a company describes its privacy practices, including what information the company collects from consumers. Choice addresses the ability of consumers to specify how such information may be used, disclosed and shared.

The meaningfulness of both notice and choice turns in part on consumers’ expectations. Among scenarios posited by the FTC:

“Suppose a consumer buys a smart oven from ABC Vending, which is connected to an ABC Vending app that allows the consumer to remotely turn the oven on to the setting, ‘Bake at 400 degrees for one hour.’ If ABC Vending decides to use the consumer’s oven-usage information to improve the sensitivity of its temperature sensor or to recommend another of its products to the consumer, it need not offer the consumer a choice for these uses, which are consistent with its relationship with the consumer. On the other hand, if the oven manufacturer shares a consumer’s personal data with, for example, a data broker or an ad network, such sharing would be inconsistent with the context of the consumer’s relationship with the manufacturer, and the company should give the consumer a choice.”

Technology may help. The Future of Privacy Forum, a Washington-based think tank that advocates for responsible data practices, suggested in comments to the FTC that companies tag data with permissible uses so that software can identify and flag unauthorized uses. Microsoft envisioned a manufacturer that offers more than one device using a consumer’s preference for one device to set a default for the others.
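The Future of Privacy Forum’s tagging idea can be sketched in a few lines. The record format, field names and purpose labels below are hypothetical, chosen only to illustrate the mechanism (here using the FTC’s smart-oven scenario):

```python
# Sketch of use-based data tagging: each record carries the purposes for
# which it may be used, and software checks a proposed use against those
# tags before allowing it. Field and purpose names are illustrative.

ALLOWED = "allowed_uses"

def flag_unauthorized(record: dict, proposed_use: str) -> bool:
    """Return True (flag it) when the proposed use is not permitted."""
    return proposed_use not in record.get(ALLOWED, set())

oven_usage = {
    "data": {"setting": "Bake at 400 degrees", "duration_min": 60},
    "allowed_uses": {"product_improvement", "recommendations"},
}

print(flag_unauthorized(oven_usage, "product_improvement"))  # False: permitted
print(flag_unauthorized(oven_usage, "sale_to_data_broker"))  # True: flag it
```

The point of the design is that the policy travels with the data, so a downstream system can enforce limits without consulting the original privacy notice.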

As the proposals suggest, notice and choice can be a challenge to achieve when our appliances collect data while we go about our lives. But as the FTC observed, “giving consumers information and choices about their data… continues to be the most viable [approach] for the IoT in the foreseeable future.”

Categories
Law Privacy

Wyndham ruling reinforces FTC authority to regulate privacy practices

A hotel chain’s repeated failure to protect customers from hackers constitutes an unfair practice that subjects the company to a lawsuit by the Federal Trade Commission, a federal appeals court in Philadelphia has ruled in a decision that reinforces the agency’s authority to protect consumers from companies that backtrack on promises about privacy.

Wyndham Worldwide Corporation, which licenses its brand to roughly 90 independently owned hotels that use the company’s computerized property management system, cannot contend that federal law or the FTC’s interpretations of it failed to put the company on notice that lapses in cybersecurity on its part could lead to legal liability, according to the court.

The FTC sued Wyndham, which also franchises more than 7,600 hotels worldwide, in June 2012, charging the company with failing to protect consumers in violation of Section 5 of the Federal Trade Commission Act, a century-old law that authorizes the FTC to proscribe “unfair or deceptive acts or practices” in commerce.

Three breaches of Wyndham’s property management system over two years starting in 2008 resulted in hackers obtaining payment-card information from more than 619,000 consumers and at least $10.6 million in losses from fraud, the FTC charged.

According to the FTC, Wyndham failed to use encryption, firewalls and other procedures to safeguard customers’ names, payment card account numbers, expiration dates and security codes stored in the system, notwithstanding the company’s privacy notice, which advised customers that Wyndham safeguards their personally identifiable information using industry-standard practices.

Before trial, Wyndham sought to dismiss the FTC’s claims, charging the agency with failing to support a finding of unfairness. Congress reshaped Section 5 to exclude cybersecurity, according to Wyndham, which also charged the FTC with failing to notify companies what standards to follow. U.S. District Judge Esther Salas denied Wyndham’s motion but allowed the company to appeal the ruling.

The appeals court sided with Salas. “A company does not act equitably when it publishes a privacy policy to attract customers who are concerned about data privacy, fails to make good on that promise by investing inadequate resources in cybersecurity, exposes its unsuspecting customers to substantial financial injury, and retains the profits of their business,” wrote Judge Thomas Ambro for a three-judge panel of the U.S. Court of Appeals for the 3rd Circuit.

The government’s charges, which ranged from Wyndham’s allowing company-branded hotels to store payment card information in clear readable text, to permitting the use of easily guessed passwords to protect the property management system, to failing to restrict access to the system by third parties, embody unfairness as defined by both the FTC and Congress, the court noted.

In 1994, Congress codified a definition of unfairness adopted by the FTC 14 years earlier that defines the term as an act or practice that “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.”

Apart from the term’s plain meaning as defined by the FTC, Congress declined to enumerate specific unfair practices in the law, choosing instead to leave its development to the FTC as technology and the marketplace evolve, Ambro explained.

The approach makes sense, according to Omer Tene, a professor at the College of Management School of Law in Rishon Le Zion, Israel and a visiting scholar at the Center for Internet and Society at Stanford Law School, who wrote following the ruling:

“In what could serve as a valuable lesson for European lawmakers as they mull over the details of the voluminous General Data Protection Regulation, Congress had the foresight back then to understand the futility of exhaustively listing every unreasonable practice that might arise. Firewalls, passwords and secure cloud transactions were hardly foreseeable in 1914.”

The court also rejected a claim by Wyndham that a business does not treat its customers unfairly when the business itself is victimized by hackers, a situation the company argued would be akin to allowing the government to sue a supermarket that was “sloppy about sweeping up banana peels.”

“The argument is alarmist to say the least,” wrote Ambro. “And it invites the tart retort that, were Wyndham a supermarket, leaving so many banana peels all over the place that 619,000 customers fall hardly suggests it should be immune from liability under [Section 5.]”

The court further rejected Wyndham’s contention that it lacked notice of what specific security procedures a business must take to avoid liability. According to the court, the FTC has published enforcement actions and consent decrees that have the effect of notifying companies whether their practices treat consumers fairly. The FTC says it has settled 53 cases against companies related to data security, including Snapchat, Reed Elsevier and Credit Karma.

Ambro noted that in Wyndham’s case the facts failed to create a close call:

“As the FTC points out in its brief, the complaint does not allege that Wyndham used weak firewalls, IP address restrictions, encryption software, and passwords. Rather, it alleges that Wyndham failed to use any firewall at critical network points, did not restrict specific IP addresses at all, did not use any encryption for certain customer files, and did not require some users to change their default or factory-setting passwords at all. Wyndham did not respond to this argument in its reply brief.” (citations omitted, emphasis in original)

Whether Wyndham realized the risks to security it faced when the first breach occurred, the company had notice by the second and third cyberattacks, Ambro noted. By now Wyndham knows, too. In its latest annual securities filing, the company described risks it faces in the realm of privacy and security:

“The legal, regulatory and contractual environment surrounding information security and privacy is constantly evolving and the hospitality industry is under increasing attack by cyber-criminals operating on a global basis. Our information technology infrastructure and information systems may also be vulnerable to system failures, computer hacking, cyber-terrorism, computer viruses, and other intentional or unintentional interference, negligence, fraud, misuse and other unauthorized attempts to access or interfere with these systems and our personal and proprietary information.”

According to experts, the ruling is significant in part because it represents the first time a company has challenged the FTC’s authority to hold companies accountable for unfair practices pursuant to Section 5.

“It’s the first Court of Appeals decision on the issue and should be viewed and taken by companies that this is a potential area of exposure,” Eric Hochstadt, a partner at Weil, Gotshal & Manges in New York, told Bloomberg. “This is definitely an area of growing concern as the underlying misconduct, data breaches, is growing in scope.”

For its part, Wyndham vows to continue the fight. “Once the discovery process resumes, we believe the facts will show the FTC’s allegations are unfounded,” spokesman Michael Valentino said in a statement.

The FTC welcomed the ruling. “Today’s Third Circuit Court of Appeals decision reaffirms the FTC’s authority to hold companies accountable for failing to safeguard consumer data,” said Chairwoman Edith Ramirez. “It is not only appropriate, but critical, that the FTC has the ability to take action on behalf of consumers when companies fail to take reasonable steps to secure sensitive consumer information.”

Categories
Privacy

Spotify shows that privacy deserves debate

Spotify set off a storm of hand-wringing recently with an announcement the company had updated its privacy notice.

The notice, which replaces a version published nearly two-and-a-half years ago, reflects the digital-music service’s aim of tuning its offering to users, who listen on the move and assemble playlists for one another.

Significantly, Spotify revamped sections of the notice that inventory information the company may gather from users. According to the notice, users consent to the company’s collecting the location of their smartphones, such as via GPS or Bluetooth, and information about the speed of their movements (from sensors in some smartphones), the better to deliver music that matches users’ workouts. Spotify also said it may gather photos, contacts and media files stored on users’ devices, as well as information about “likes” and posts from those who sign up for the service via Facebook.

None of that went over well with observers. “Like a jealous ex, Spotify wants to see (and collect) your photos and see who you’re talking to,” complained Wired. “Perhaps Spotify feels left out that you are hanging out without it, because it wants to know where you are all the time.”

“I’m now considering whether the £10 I pay for a premium membership is worth it, given the amount of privacy I’d be giving away by consenting,” lamented Thomas Fox-Brewster at Forbes. “You know, Apple Music just started looking a lot better,” Gizmodo observed.

It didn’t help Spotify that the changes arrived two days after hackers published names, email addresses and other personal information about roughly 36 million people who signed up for Ashley Madison, a hookup service for married people. On Monday, police in Toronto described the theft as “one of the largest data breaches in the world.”

Nor did it help that Spotify neglected to tell users how it might use all the data, or whether people could choose not to participate (and still remain users). The company also failed to describe the difference, if any, in privacy for subscribers to its premium service, which contains no ads.

The blowback elicited an apology from Daniel Ek, Spotify’s CEO, who conceded in a blog post the company “should have done a better job” communicating the changes and that users won’t have to share their contacts, photos and the rest if they don’t want to. “We understand people’s concerns about their personal information and are 100 percent committed to protecting our users’ privacy and ensuring that you have control over the information you share,” he pledged.

Though the apology defused the dustup, the reaction to the exchange that services such as Spotify bargain for suggests a lack of confidence among users in the terms of the trade. Roughly nine in 10 adults say that controlling who can obtain information about them and what information can be collected are important, according to a survey published in May by the Pew Research Center. Yet about half as many trust their records will remain private and secure.

By now it’s established that companies cannot revise their privacy notices without first advising users what the changes will be. But it wasn’t always that way. The idea originated as recently as a decade ago, when the Federal Trade Commission determined that companies cannot change their privacy notices retroactively.

Companies comply by telling us the stipulations of their services and hoping we come around. “If you don’t agree with the terms of this privacy policy, then please don’t use the service,” Spotify’s privacy notice advises users. You can’t get much clearer than that. But privacy notices can get more explicit.

Standards and laws evolve, of course. Note, too, that consumers have more trust in banks and health insurance companies—sectors that abide by well-established rules for privacy—than they do retailers and social-networking services to safeguard their personal information, according to a Gallup Poll released last year.

Still, there’s nothing to stop companies from innovating. Services such as Spotify that specialize in personalization seem well-poised to deliver privacy notices that users can understand as intuitively as the services themselves.

Writing in the Times recently, A.O. Scott describes the main character in “Grandma,” a film starring Lily Tomlin. “She is impatient with the world and suspicious of the motives of a lot of people in it, but that is partly a result of her idealism, her uncompromising commitment to behaving like a free human being,” Scott writes.

As the characterization suggests, we can be uneasy and idealistic. The fears that arise in connection with how and to whom we relinquish our personal information are meaningful because they remind us of our independence. Which suggests debates about privacy premised less on reacting to the latest stumble and more on thinking individually and together about trade-offs we’re willing to tolerate.

Categories
Law Privacy

For AT&T customers, opting out of online ads more hassle than necessary: LA Times

Writing in Tuesday’s Los Angeles Times, David Lazarus, the paper’s consumer columnist, chronicles the difficulty that customers of AT&T can encounter when they elect to opt out of advertising from the company and its partners.

As one who switched my mobile service recently to AT&T, I decided to sample the process Lazarus describes. I went to an online site where AT&T customers can decline ads delivered by the Network Advertising Initiative, a self-regulatory organization whose members serve up ads based on predictions about users’ interests “generated from your visits over time and across different websites.”

According to Lazarus, AT&T customers have to opt out of ads delivered by as many as 21 digital advertising companies. In my case the number totaled 79. NAI counts 93 members in all, which leaves 14 companies that have yet to place a cookie in my browser. (To those companies: It’s fine, really.)

NAI members include a mix of household names such as Google and AOL, as well as firms such as MediaMath, NetSeer, LiveRail and TubeMogul.

The companies tailor ads by embedding a fragment of code in your browser that tracks your comings and goings on the Internet. That means to avoid such advertising completely you have to opt out for each browser on each device you use. As Lazarus notes, opting out of ads delivered via your smartphone or satellite TV requires going to discrete links for each platform.
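The per-browser mechanics Lazarus describes follow from how these opt-outs are implemented: each ad company stores the preference as a cookie in the browser where you set it, so it never travels to your other browsers or devices. A minimal sketch of that behavior (the class, domain and cookie format are hypothetical, purely illustrative):

```python
# Sketch of cookie-based opt-out: the preference lives in one browser's
# cookie jar, one cookie per ad company, which is why each browser on
# each device must be opted out separately. Names are illustrative.

class Browser:
    def __init__(self):
        self.cookies = {}  # domain -> {cookie name: value}

    def set_opt_out(self, ad_domain: str):
        """Store an opt-out cookie for one ad company's domain."""
        self.cookies.setdefault(ad_domain, {})["optout"] = "1"

    def is_opted_out(self, ad_domain: str) -> bool:
        return self.cookies.get(ad_domain, {}).get("optout") == "1"

phone = Browser()
laptop = Browser()
phone.set_opt_out("ads.example.com")

print(phone.is_opted_out("ads.example.com"))   # True
print(laptop.is_opted_out("ads.example.com"))  # False: doesn't carry over
```

The sketch also shows the fragility Lazarus complains about: clearing cookies erases the opt-out along with everything else.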

Though technology exists that allows customers to register their preference for all their devices, the phone companies have yet to adopt it. A spokeswoman for AT&T tells Lazarus that the company’s procedures are “consistent with industry practice.” Still, opt-outs “should be as streamlined as possible,” he argues.

“The letter of the law may allow them to do things as they are now,” Jill Bronfman, director of the Privacy and Technology Project at UC Hastings College of the Law, told Lazarus, referring to the phone companies. “But the spirit of the law is that they need to offer consumer-friendly privacy options.”

Categories
Privacy

AT&T aided NSA in spying on a massive scale: reports

Thanks to Edward Snowden and reporters at the Times and ProPublica, we now know that AT&T likely handed over to the National Security Agency billions of cellphone calling records over roughly two years beginning in August 2011.

According to documents reported Saturday by the Times, AT&T gave the NSA as many as 1.8 billion sets of data each day about who people called, when and for how long. Though Verizon also provided the NSA access to similar metadata, AT&T appears to have been a partner without peer. According to ProPublica:

“While it has long been known that American telecommunications companies work closely with the spy agency, the documents we’ve published show that the relationship with AT&T has been considered unique and especially productive. One document described it as ‘highly collaborative’ and another lauded the company’s ‘extreme willingness to help.’”

It appears the calling records allowed intelligence agencies to run queries, relying on orders issued by a court pursuant to the Foreign Intelligence Surveillance Act, on calls that originated overseas but passed across AT&T’s network. In addition, the company reportedly gave the NSA billions of emails that flowed across its network in the dozen years that followed the 9/11 attacks.

AT&T also provided the NSA with access to high-capacity broadband lines that serve the United Nations in New York, according to the documents.

“We do not voluntarily provide information to any investigating authorities other than if a person’s life is in danger and time is of the essence,” Brad Burns, an AT&T spokesman, told ProPublica without elaborating.

Categories
Law Privacy

NIST publishes guidance for securing health records on mobile devices

How can health care providers secure mobile devices that physicians and other professionals use to send information about patients?

That’s the question at the center of a so-called practice guide published recently in draft form by the National Institute of Standards and Technology (NIST). Between now and Sept. 25, NIST seeks public comment on the guide, which illustrates how providers can assess cyber threats and secure electronic health records on smartphones, tablets and laptops, as well as the servers to which such equipment connects.

The effort reflects the reality that electronic health records, whose adoption and use the federal Health Information Technology for Economic and Clinical Health Act (HITECH Act) aims to spur, can be accessed in ways that compromise both privacy and patient care. According to NIST:

“Cost and care efficiencies, as well as incentives from the HITECH Act, have prompted health care groups to rapidly adopt electronic health record systems. Unfortunately, organizations have not adopted security measures at the same pace. Attackers are aware of these vulnerabilities and are deploying increasingly sophisticated means to exploit information systems and devices.”

At issue is the susceptibility of electronic health information to intrusion. NIST cites a report published in May by the Ponemon Institute that found malicious hacks on health care organizations now outnumber accidental breaches, and that the number of criminal attacks grew 125% in the last five years.

As the law firm King & Spalding notes, so far this summer the U.S. Department of Health and Human Services has logged 34 breaches of protected health information that each affected 500 or more people. Incidents include an attack on a server that held records for roughly 390,000 people at Medical Informatics Engineering, a software company in Indiana; the theft of a desktop computer containing health records for more than 12,500 people at Montefiore Medical Center in New York; and a cyberattack in June on UCLA Health System, where intruders made off with information for as many as 4.5 million people.

The practice guide proposed by NIST addresses such scenarios as the theft or loss of devices that had access to electronic health records; attacks on the networks of health care organizations, whether by hackers or intruders who gain access to the premises; installation of malware; or users who walk away while logged in to devices.
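The walked-away scenario in that list is typically mitigated with an automatic idle timeout that locks the session after a period of inactivity. A minimal sketch of the idea (the five-minute threshold and all names are illustrative choices, not recommendations from the NIST draft):

```python
# Sketch of an idle-session timeout, one common mitigation for a user
# who walks away while logged in to a device holding health records.
# The 5-minute limit is an illustrative choice, not a NIST figure.
import time

IDLE_LIMIT_SECONDS = 300  # lock after 5 minutes of inactivity

class Session:
    def __init__(self):
        self.last_activity = time.monotonic()
        self.locked = False

    def touch(self):
        """Record user activity, resetting the idle clock."""
        self.last_activity = time.monotonic()

    def check_idle(self, now=None):
        """Lock the session (requiring re-authentication) once idle too long."""
        now = time.monotonic() if now is None else now
        if now - self.last_activity > IDLE_LIMIT_SECONDS:
            self.locked = True
        return self.locked

session = Session()
print(session.check_idle(session.last_activity + 301))  # True: locked
```

In practice a background timer would call `check_idle` periodically and a UI event handler would call `touch`; the sketch keeps only the decision logic.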

The guide, which is voluntary for stakeholders, mirrors a framework that NIST is developing pursuant to an order for reducing cyber risks to infrastructure that President Obama issued in February 2013. Federal law requires providers to assess risks to electronic health information regularly.

Categories
Law Privacy

Lawsuit over hacking of Facebook account timely, appeals court rules

A woman whose former boyfriend allegedly hacked into her email and Facebook accounts, then sent and posted messages disparaging her sex life, had two years from the discovery of each incident to sue for damages, an appeals court in New York City has ruled.

Chantay Sewell sued Phil Bernardin, with whom she had a romantic relationship for nine years starting in 2002, in January 2014, charging Bernardin with gaining access to her AOL email and Facebook accounts without her permission in violation of federal law.

Sewell alleged she discovered the intrusion into her AOL account after being unable to log in to her email on Aug. 1, 2011. The following February, Sewell discovered she could no longer log in to her Facebook account because her password had been changed.

A federal trial court in Brooklyn dismissed Sewell’s lawsuit against Bernardin after concluding she failed to file it within the two-year limitations periods set forth in both the Computer Fraud and Abuse Act and the Stored Communications Act, the laws that Sewell charged Bernardin with violating.

But the U.S. Court of Appeals for the 2nd Circuit disagreed with respect to Sewell’s Facebook-related claim. Writing for a three-judge panel in a ruling released Aug. 4, Judge Robert Sack noted that Sewell’s discovery of the trespass on her AOL account did not mean she should have discovered the alleged tampering with her Facebook account then, too.

“At least on the facts as alleged by the plaintiff, it does not follow from the fact that the plaintiff discovered that one such account—AOL e-mail—had been compromised that she thereby had a reasonable opportunity to discover, or should be expected to have discovered, that another of her accounts—Facebook—might similarly have become compromised,” Sack wrote.

That means Sewell’s lawsuit with respect to the breach of her Facebook account was timely, noted the court, which reversed the trial court’s dismissal of Sewell’s Facebook-related claim.

The laws under which Sewell sued differ slightly in their formulation of when the limitations period begins, Sack explained. The limitations period under the Computer Fraud and Abuse Act, which authorizes someone whose computer has been accessed without authorization to file a civil lawsuit against the intruder, began to run when Sewell learned that her account had been impaired.

The limitations period under the Stored Communications Act, which authorizes a person whose email, postings or other stored messages have been accessed without authorization to sue, starts when the victim discovers, or has a reasonable opportunity to discover, the intrusion.
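The timeliness question the court resolved reduces to date arithmetic under the discovery rule: each claim’s two-year clock starts when that intrusion was (or reasonably could have been) discovered. A sketch using the dates in the case; only the Aug. 1, 2011 discovery date is exact in the text, so the February discovery date and the January 2014 filing date are approximated to the start and middle of their months purely to illustrate the computation:

```python
# Sketch of the discovery-rule arithmetic in Sewell v. Bernardin: each
# claim's two-year clock runs from discovery of that intrusion. The
# filing day and the February discovery day are assumed; only
# Aug. 1, 2011 is exact in the text.
from datetime import date

def timely(discovered: date, filed: date, limit_years: int = 2) -> bool:
    """True when suit was filed within limit_years of discovery."""
    deadline = discovered.replace(year=discovered.year + limit_years)
    return filed <= deadline

filed = date(2014, 1, 15)               # suit filed January 2014 (day assumed)
print(timely(date(2011, 8, 1), filed))  # False: AOL claim time-barred
print(timely(date(2012, 2, 1), filed))  # True: Facebook claim timely
```

That is exactly the split the 2nd Circuit reached: the AOL claim fell outside the two-year window while the Facebook claim did not.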

The limitations periods under both laws may be insufficient in some situations, the court noted. “Even after a prospective plaintiff discovers that an account has been hacked, the investigation necessary to uncover the hacker’s identity may be substantial,” wrote Sack. “In many cases, we suspect that it might take more than two years.”

Categories
Life Privacy Tech

Facebook loses appeal over search warrants

Facebook cannot challenge the constitutionality of a search warrant on its users’ behalf prior to the government’s executing the warrant, an appeals court in New York has ruled in a decision that delineates a boundary for Internet privacy.

The ruling follows a lawsuit by Facebook to void 381 search warrants the company received two years ago from the Manhattan district attorney’s office, which obtained them in connection with an investigation into Social Security disability claims by a group of retired firefighters and police officers whom the DA suspected of feigning illness they attributed to the aftermath of the 9/11 attacks.

Upon receiving the warrants, which sought information derived from the users’ accounts, Facebook asked the DA to withdraw the warrants or to strike a provision that directed the company to refrain from disclosing their existence to users whose postings were to be searched. The DA’s office asserted the confidentiality requirement was needed to prevent the suspects from destroying evidence or fleeing the jurisdiction if they learned of the investigation.

After the DA declined to withdraw the warrants, Facebook sued to either quash them or compel the DA to remove the non-disclosure provision. The trial court sided with the DA and Facebook appealed.

The appeals court affirmed that the legality of the searches could be determined only after the searches themselves were conducted. “There is no constitutional or statutory right to challenge an alleged defective warrant before it is executed,” Judge Dianne Renwick wrote for a unanimous panel of the court’s appellate division in a ruling released July 21. “We see no basis for providing Facebook a greater right than its customers are afforded.”

The constitutional requirement that a warrant can issue only upon a showing of probable cause as determined by a judicial officer helps to ensure the government does not exceed its authority when requesting a search warrant and eliminates the need for a suspect to make a motion to void the warrant before it can be served, the court noted. “Indeed… the sole remedy for challenging the legality of a warrant is by a pretrial suppression motion which, if successful, will grant that relief,” Renwick explained.

According to Facebook, which was joined in the appeal by Google, Twitter, Microsoft and other tech industry firms, the federal Stored Communications Act also gave the company the right to challenge the warrants. But that law, which protects the privacy of email and other communications stored on servers belonging to ISPs, authorizes ISPs to challenge subpoenas and court orders but not warrants obtained from a judicial officer based on a showing of probable cause, the court noted.

Despite its ruling, the court agreed with Facebook that the DA’s serving 381 warrants swept broadly and suggested the users themselves may have grounds for suppression. “Facebook users share more intimate personal information through their Facebook accounts than may be revealed through rummaging about one’s home,” wrote Renwick. “These bulk warrants demanded ‘all’ communications in 24 broad categories from the 381 targeted accounts. Yet, of the 381 targeted Facebook users accounts only 62 were actually charged with any crime.”

Though civil liberties groups hoped the appeal might bolster protections for Internet privacy, experts said the ruling makes sense as a matter of law. As Orin Kerr, a professor of criminal procedure at George Washington University Law School who has written extensively about privacy and the Internet, wrote in The Washington Post:

“Think about how this plays out in an old-fashioned home search. If the cops show up at your door with a warrant to search your house, you have to let them search. You can’t stop them if you have legal concerns about the warrant. And if a target who is handed a warrant can’t bring a pre-enforcement challenge, then why should Facebook have greater rights to bring such a challenge on behalf of the targets, at least absent legislation giving them that right?”

Still, “that doesn’t mean the warrants were valid,” added Kerr, who noted that the defendants themselves seem likely to challenge the sweep of the material seized from their Facebook accounts if they haven’t already.

For its part, Facebook disagreed with the ruling but said the company had not decided whether to appeal. “We continue to believe that overly broad search warrants—granting the government the ability to keep hundreds of people’s account information indefinitely—are unconstitutional and raise important concerns about the privacy of people’s online information,” Jay Nancarrow, a spokesman for the company, told the Times.

The DA’s office noted that the investigation led to the indictment of 134 people in a fraud alleged to total hundreds of millions of dollars. “In many cases, evidence on [the suspects’] Facebook accounts directly contradicted the lies the defendants told to the Social Security Administration,” Joan Vollero, a spokeswoman for the district attorney’s office, said in a statement.


Categories
Law Privacy

Sony must face breach lawsuit, court rules

Sony Pictures must continue to defend a lawsuit filed by nine former employees whose personal information was stolen from the studio during a cyberattack last fall, a federal court has ruled.

The former employees sued Sony in March, charging the company with negligence, breach of contract and violation of confidentiality laws in failing to safeguard medical, financial and other personally identifiable information that the attackers later posted online and traded via the Internet. The plaintiffs charge they’ve had to subscribe to identity-protection and credit-monitoring services, obtain credit reports and incur costs resulting from freezes to their credit.

Sony asked the U.S. District Court in Los Angeles to dismiss the suit, arguing that the former employees failed to show injury sufficiently concrete to establish standing.

The court disagreed. “Here, plaintiffs have alleged that PII was stolen and posted on file-sharing websites for identity thieves to download,” wrote Judge Gary Klausner in a ruling released June 15. “Plaintiffs also allege that the information has been used to send emails threatening physical harm to employees and their families. These allegations alone are sufficient to establish a credible threat of real and immediate harm, or certainly impending injury.”

According to the court, the costs incurred by the former employees also satisfy the requirement for injury on which a claim of negligence depends, although Klausner sided with Sony and dismissed part of the lawsuit that charged the company with failing to notify the former employees of the breach in a timely fashion.

The plaintiffs also established that a so-called special relationship exists between a company and its employees that allows the employees to later hold the employer responsible for negligence and breach of contract. According to the plaintiffs, Sony failed to shore up systems that stored records for human resources despite experiencing data breaches in the past.

Klausner agreed, noting that “to receive such compensation and other benefits, Sony required plaintiffs to provide their PII, including names, addresses, Social Security number, medical information, and other personal information.”

Sony’s alleged failure to defend its systems against a cyberattack also allows the former employees to charge the company with violating a California law that obligates employers to safeguard employees’ medical information, the court ruled.

Categories
Law Privacy

Phone companies should not be required to store call data, privacy advocates say

A federal rule that orders phone companies to retain records of calls for a year-and-a-half disregards the privacy of millions of Americans, according to a coalition of civil liberties groups that is asking the Federal Communications Commission to rescind the requirement.

As currently configured, the mandate that carriers hold for 18 months the name, address and telephone number of callers, along with the numbers called and the date, time and length of each call, exposes consumers to data breaches, thwarts innovation and does little to aid law enforcement, according to a petition filed Tuesday with the FCC by the Electronic Privacy Information Center (EPIC) on behalf of itself and 28 organizations.

The retention requirement makes little sense in an age when phone companies bill customers for unlimited or non-measured calling, compared with a time when companies itemized calls, according to EPIC, which contends that requiring companies to keep such records in bulk results in retention of information about nearly all American adults regardless of whether the government suspects them of wrongdoing.

“These telephone records not only show who consumers call and when, but can also reveal intimate details about consumers’ daily lives,” wrote Marc Rotenberg, EPIC’s president. “These records reveal close contacts and associates, and confidential relationships between individuals and their attorneys, doctors, or elected representatives.”

According to EPIC, the FCC proposed 30 years ago to eliminate the record keeping entirely before the Department of Justice asked the FCC to extend the retention period from six months to 18, contending that retaining phone records aided investigation and prosecution of criminal conspiracies. But the value of the records has eroded as billing has changed, charges EPIC, which notes that DOJ conceded as much in comments filed with the FCC in 2006. Further, law enforcement agencies could still request that records be retained in connection with investigations, said EPIC.

Retaining calling records also amplifies the risk of data breaches, such as the one recently at the Office of Personnel Management, according to EPIC. “The best strategy to reduce the risk of an attack and to minimize the harm when such attacks do occur is to collect less sensitive information at the outset,” the petition notes.

Discontinuing the requirement that carriers retain call records for 18 months would lower the cost of record keeping and allow phone companies to compete for customers on the basis of privacy, “which many believe is the market-based solution to the enormous privacy challenge confronting the nation today,” Rotenberg added.

The FCC declined to comment on the petition.

Revisions last spring to post-9/11 surveillance laws ended bulk collection of phone call metadata by the government. Under the terms of the USA Freedom Act, the National Security Agency can obtain such information from phone companies if authorized by the Foreign Intelligence Surveillance Court. But the act does not require phone companies to collect or store metadata.