Categories: News

I had nearly given up this blog. 

Not because I didn’t have more to write, but because I ran into a glitch with the site itself. Whenever I clicked on links from the archive, I received a message that read “Oops! That page can’t be found.” 

For the better part of a year, my only attempt at a fix was to call tech support at the company that hosts the site. Tech support didn’t have one.

In time, I stopped posting. It seemed I always had something better to work on. But that didn’t feel right either. 

Then I searched Google. That seems like an obvious go-to; I think it took me as long as it did because I imagined the problem lay with hosting. Thanks to Google, I learned the problem lay with WordPress. 

WordPress users suggested a change to a setting on the back end of the site. That didn’t solve my problem, but it got me to play around with a few other settings. I tweaked one. Not it. Then another. Nope. I changed a third setting. Voila. The archived posts appeared. 

I feel so pleased to have fixed the problem that I’m posting this.

Categories: Privacy

Apple stance on privacy may slow artificial intelligence push: report

Those of us who use iPhones may have more to welcome this week than Apple’s event to unveil the latest devices.

The computer maker’s stance on guarding customer privacy may be slowing its push to stay ahead of rivals in the race to develop digital assistants, Reuters reports. If correct, that means the company is upholding its pledge to respect customers’ personal privacy, but more on that in a minute.

At issue is a race by Apple, Google and other tech companies to recruit experts in machine learning, a branch of artificial intelligence that allows computers to anticipate what users want without being explicitly programmed.

The larger the set of data that software can analyze, the more precise those predictions can become. But with a self-imposed privacy policy that causes iPhones and other devices to refresh every 15 minutes, Apple forgoes the opportunity to send the data to the cloud, where the information could be combined with other data, analyzed and, possibly, sold to advertisers.
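
The data-scale tradeoff can be illustrated with a toy estimator: given more observations to learn from, a prediction of even a simple quantity gets sharper. This is a minimal sketch with synthetic, made-up numbers, not anyone's actual recommendation pipeline.

```python
import random

def avg_prediction_error(n_samples, trials=50):
    """Average error when predicting a user 'preference' (here, just the
    mean of a noisy signal) from n_samples observations per trial.
    Purely illustrative: synthetic data, not a real recommender."""
    true_value = 5.0
    total = 0.0
    for seed in range(trials):
        rng = random.Random(seed)
        samples = [true_value + rng.gauss(0, 1) for _ in range(n_samples)]
        estimate = sum(samples) / n_samples
        total += abs(estimate - true_value)
    return total / trials

# More data -> smaller average error, shrinking roughly as 1/sqrt(n).
print(avg_prediction_error(10))
print(avg_prediction_error(1000))
```

A device that discards its local data every 15 minutes is, in these terms, permanently stuck at small n.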

That benefits users by protecting their personal privacy but can slow the evolution of services such as Siri to anticipate users’ needs. “They want to make a phone that responds to you very quickly without knowledge of the rest of the world,” Joseph Gonzalez, co-founder of Dato, a machine learning startup, told Reuters, referring to Apple. “It’s harder to do that.”

Or not. If any company can reconcile the imperatives of privacy and technological progress in a way that advances both, it may be Apple.

The next generation of Apple’s services will depend heavily on artificial intelligence, AppleInsider reports. At the same time, digital assistants developed by Google and Microsoft reportedly are getting better at learning users’ routines.

Apple currently aims to recruit at least 86 more experts in machine learning, according to a Reuters analysis of the computer maker’s job postings.

Apple CEO Tim Cook said in June that his company won’t be a party to the exchange that defines many tech companies’ relationships with their customers, in which customers accept free services in return for letting companies sell information about their searches, shopping, health and more to advertisers.

“They’re gobbling up everything they can learn about you and trying to monetize it,” Cook told a gathering in Washington sponsored by privacy advocates. “We think that’s wrong.”

Edward Snowden, the former government subcontractor who revealed the magnitude of the National Security Agency’s spying on Americans in the wake of the 9/11 attacks, said Apple’s stance deserved consumers’ support.

“Regardless of whether it’s honest or dishonest, for the moment, now, that’s something we should… incentivize, and it’s actually something we should emulate,” Snowden told an audience in Spain about two weeks after Cook outlined the company’s policy.

Apple is slated to introduce enhancements to Siri this Wednesday as part of the rollout of iOS 9, the latest version of the company’s operating system for the iPhone and iPad.

Categories: Privacy

Shutterfly lawsuit highlights concerns with the use of facial recognition and the problem with a ‘Shazam’ for faces

A lawsuit pending in a federal court in Chicago may answer whether tagging and storing photos of someone without that person’s permission violates a state law that regulates the collection and use of biometric information.

That’s the hope of Brian Norberg, a Chicago resident, who in June sued Shutterfly, an online business that lets customers turn photos into books, stationery, cards and calendars. The class action represents the latest in a series of challenges to the use of facial recognition and other technologies that record our unique physical attributes.

Norberg, who claims never to have used Shutterfly, charges that between February and June, someone else uploaded at least one photo of him to Shutterfly and 10 more to the company’s ThisLife storage service. According to Norberg, the company created and stored a template for each photo based on such biological identifiers as the distance between his eyes and ears. The service allegedly prompted the person who uploaded the images to also tag them with Norberg’s first and last names—all without Norberg’s permission.

That, charges Norberg, contravened the state’s Biometric Information Privacy Act (BIPA), a law enacted seven years ago that bars businesses from collecting a scan of someone’s “hand or face geometry,” a scan of their retina or iris, or a fingerprint or voiceprint, without their consent. The law authorizes anyone whose biometrics are used illegally to sue for as much as $5,000 per violation.

In July, Shutterfly asked U.S. District Judge Charles Norgle Sr. to dismiss the lawsuit. According to the company, the BIPA specifically excludes photographs and information derived from them. And, even if the law were unclear, says Shutterfly, the legislature intended it to apply to the use of biometrics to facilitate financial transactions and consumer purchases, not to photo-sharing.

“Scanning photos to allow users to organize their own photos is a far cry from the biometric-facilitated financial transactions and security screenings BIPA is aimed at—such as the use of finger-scanning technology at grocery stores, gas stations, or school cafeterias,” the company asserted in court papers.

In a rejoinder filed last Friday, Norberg says that creating templates based on scans of facial features, not the photos themselves, violates the BIPA. “The resulting face templates—not the innocuous photographs from which they were derived, but the resulting highly detailed digital maps of geometric points and measurements—are ‘scans of face geometry’ and thus fall within the BIPA’s definition of ‘biometric identifiers,’” he wrote.

“By [Shutterfly’s] logic, nothing would stop them from amassing a tremendous, Orwellian electronic database of face scans with no permission whatsoever so long as the data base were derived from photographs,” Norberg added. “And indeed, that appears to be exactly what they are doing.”

Of course, facial recognition technology is used widely already. As Ben Sobel, a researcher at the Center on Privacy & Technology at Georgetown Law, explained recently in The Washington Post:

“Facebook and Google use facial recognition to detect when a user appears in a photograph and to suggest that he or she be tagged. Facebook calls this ‘Tag Suggestions’ and explains it as follows: ‘We currently use facial recognition software that uses an algorithm to calculate a unique number (“template”) based on someone’s facial features… This template is based on your profile pictures and photos you’ve been tagged in on Facebook.’ Once it has built this template, Tag Suggestions analyzes photos uploaded by your friends to see if your face appears in them. If its algorithm detects your face, Facebook can encourage the uploader to tag you.”
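
The "template" the quote describes can be sketched in a few lines: reduce a handful of facial landmark points to a vector of pairwise distances, then compare those vectors. This is a toy illustration with invented coordinates and an arbitrary tolerance, not Facebook's or Shutterfly's actual algorithm, which uses far richer features.

```python
import math

def face_template(landmarks):
    """Collapse (x, y) landmark points (e.g. eyes, nose tip, mouth
    corners) into a vector of pairwise distances, a crude stand-in
    for the 'unique number' template described above."""
    pts = list(landmarks)
    return [math.dist(pts[i], pts[j])
            for i in range(len(pts)) for j in range(i + 1, len(pts))]

def match(template, known_faces, tol=5.0):
    """Return the name whose stored template lies closest to `template`,
    or None if nothing is within the (arbitrary) tolerance."""
    best_name, best_dist = None, float("inf")
    for name, stored in known_faces.items():
        d = math.dist(template, stored)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= tol else None

# Enroll one face, then probe with a slightly jittered version of it.
known = {"alice": face_template([(30, 40), (70, 40), (50, 60),
                                 (40, 80), (60, 80)])}
probe = face_template([(31, 40), (70, 41), (50, 60), (40, 80), (60, 80)])
print(match(probe, known))  # prints "alice": the jittered face matches
```

Norberg's argument, in these terms, is that the distance vector, not the photograph it was computed from, is the regulated "scan of face geometry."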

Facebook also is defending a class action filed last spring that charges the company’s use of facial-recognition software to identify users violates the BIPA. Facebook users have uploaded at least 250 billion photos to the social networking site and continue to do so at a rate of 350 million images a day, reports Sobel, who adds that Facebook’s tagging occurs by default, whereas Google’s requires you to opt in to it.

According to the Federal Trade Commission, companies that use facial recognition technologies should simplify choices for consumers and increase the transparency of their practices. Social networks should provide users with “a clear notice—outside of a privacy policy—about how the feature works, what data it collects and how it will use the data,” the agency wrote in a report published in October 2012. Significantly, social networks should give users an easy way to opt out of having their biometric data collected and the ability to turn off the collection at any time, the agency advised.

Still, that may not cover someone like Norberg, who says he never used Shutterfly. Nor would it prevent an app akin to a Shazam for faces, one that would let users discover a person’s identity (and possibly more, such as an address) simply by photographing him or her, regardless of whether the subject knows or consents. Situations like those would require a company to obtain the subject’s express affirmative consent, meaning consumers would have to affirmatively choose to participate in such a system, the FTC noted.

And those are commercial uses of biometrics. The photos of at least 120 million people sit in databases—many built from images submitted with applications for driver’s licenses and passports—that police and other law-enforcement agencies can search. Government use of biometrics raises additional concerns, including the need to ensure that a suspect has been detained lawfully before police can photograph the person or swab for DNA.

At a hearing in October 2010 that examined use of facial-recognition technology, Senator Al Franken of Minnesota, the senior Democrat on the Judiciary Subcommittee on Privacy, Technology and the Law, noted that in the era of J. Edgar Hoover, the FBI used wiretaps sweepingly with little regard for privacy.

Congress later passed the Wiretap Act, which requires police to obtain a warrant before they get a wiretap and limits use of wiretaps to investigations of serious crimes. “I think that we need to ask ourselves whether Congress is in a similar position today as it was 50 or 60 years ago—before passage of the Wiretap Act,” Franken said.

Categories: Law

Google wins free-speech case over ‘Innocence of Muslims,’ actor has ‘beef’ but no copyright claim, says court

An actress who lost a lawsuit to force Google’s YouTube to remove an anti-Muslim video from its site pursued the wrong claim against the wrong party.

That’s one conclusion from a decision by the 9th U.S. Circuit Court of Appeals, which ruled on Monday that Cindy Lee Garcia cannot compel YouTube to take down “Innocence of Muslims” because she cannot copyright her five-minute performance in the video, which disrespects the Prophet Muhammad and sparked death threats against Garcia.

“In this case, a heartfelt plea for personal protection is juxtaposed with the limits of copyright law and fundamental principles of free speech,” U.S. Circuit Judge Margaret McKeown wrote for a majority of the court. “The appeal teaches a simple lesson—a weak copyright claim cannot justify censorship in the guise of authorship.”

The decision represents a win for free speech. Though the First Amendment does not shield copyright infringement, Garcia could not claim a copyright in her performance. According to the court, granting a copyright to an actor based solely on her performance—a work for hire—would put distributors such as YouTube in the position of having to obtain licenses from everyone who appears in a film, as opposed to obtaining the permission of the work’s author, in this case the filmmaker, Mark Basseley Youssef.

The alternative would render distribution of movies unworkable, the court found. As McKeown noted:

“Treating every acting performance as an independent work would not only be a logistical and financial nightmare, it would turn cast of thousands into a new mantra: copyright of thousands. That leaves Garcia with a legitimate and serious beef, though not one that can be vindicated under the rubric of copyright.”

What if Garcia had grounded her complaint in the threats to her reputation and privacy that followed Youssef’s use of her performance in a way she never authorized? Though she knowingly appeared in what she believed was an adventure film called “Desert Warrior,” Youssef allegedly overdubbed her performance so that she appears to ask whether Muhammad was a “child molester,” then disseminated the film widely.

Under California law, a person’s right to privacy may be violated in varied ways, including by acts that cast someone in a false light. “Innocence of Muslims” portrayed Garcia in a light that was highly offensive to millions of people worldwide, judging by the outrage the film has provoked.

False light can be difficult to prove in California without a showing of financial damages. Moreover, even were Garcia able to prevail against Youssef for portraying her in a false light, it’s unlikely that would authorize her to order YouTube to take down the video because, as the trial court noted, the harm from the trailer’s appearance on the Internet already has occurred.

Garcia initially sued both Google and Youssef in state court, alleging a series of wrongs that included invasion of her privacy and intentional infliction of emotional distress. She dropped those claims against Google when she later sued the company in federal court for copyright infringement.

Note that Garcia could not sue Google for Youssef’s alleged defamation. Federal law shields online services from liability for information they host that’s created by third parties.

Thus, to the extent Garcia has a remedy, it lies in a wrong to her reputation instead of copyright. As McKeown explained:

“We are sympathetic to her plight. Nonetheless, the claim against Google is grounded in copyright law, not privacy, emotional distress, or tort law, and Garcia seeks to impose speech restrictions under copyright laws meant to foster rather than repress free expression.

Privacy laws, not copyright laws, may offer remedies tailored to Garcia’s personal and reputational harms. On that point, we offer no substantive view. Ultimately, Garcia would like to have her connection to the film forgotten and stripped from YouTube. Unfortunately for Garcia, such a ‘right to be forgotten,’ although recently affirmed by the Court of Justice for the European Union, is not recognized in the United States.”

Garcia’s remaining claims may have a shot, but it’s a long one. The ruling is also a reminder that you may have less dominion over your image than you think: what’s workable for content creators and distributors can be at odds with our expectations about how our likenesses appear online.

As Matthew Schruers, a vice president of law and policy at the Computer and Communications Industry Association, which supported Google and YouTube in the case, told Wired, “Everything you and I and the rest of the world upload to YouTube, is protected the moment we hit record.”