I/S: AJ - International Association of Privacy Professionals
I/S: A JOURNAL OF LAW AND POLICY
FOR THE INFORMATION SOCIETY
VOLUME 7 NUMBER 3 WINTER 2012
LEAD EDITORS
MORITZ COLLEGE OF LAW
Peter M. Shane
Jacob E. Davis and Jacob E. Davis II Chair in Law
Peter Swire
C. William O’Neill Professor in Law and Judicial Administration
CAPITAL UNIVERSITY LAW SCHOOL
Dennis Hirsch
Geraldine W. Howell Professor of Law
2011–12
STUDENT EDITORIAL BOARD
EDITOR-IN-CHIEF
Jill Fridley
EXECUTIVE EDITOR
Oliver Zeltner
BUSINESS EDITOR
Maya Ginsburg
FORMAT EDITOR
Greg Weber
Sarah Biebelhausen
Ashley Carter
Meagan Fitzgerald
Janie Henry
SYSTEMS EDITOR
Wan Kim
ISSUE EDITORS
Alyssa Schaeff
Nick Torres
EVENTS EDITOR
Lindsay Gladysz
SENIOR EDITORS
Eric Bell
Shenelle Fabio
John Heithaus
Lorraine Hernandez
Catherine Hookway
Nick Kolderman
Danny Lautar
Steven Arellano
Ama Attah-Mensah
Amanda Barrera
Michael Caldwell
Chenee Castruita
Chisa Chervenick
Dola Das
Larissa Murakami
Hyoun Ja Park
Elizabeth Schechtman
Marshall Slaybod
Adrienne Watson
Maegan Williams
STAFF EDITORS
Devon Glassman
David Hicks
Kristi Hsu
Brian Kim
Michael Kroner
Emily McKinney
Sarah Ott
Serge Rumyantsev
Maria Scheid
Sanya Shah
Quendale Simmons
Scott Stockman
Lindsay Shanahan
Kacper Szczepaniak
Samantha Yarnell
INTERNATIONAL EDITORIAL BOARD
Stephen Acker
The Ohio State University
Jeffrey Hunker
Jeffrey Hunker and Associates
Elizabeth Rindskopf Parker
McGeorge University
Alessandro Acquisti
Carnegie Mellon University
David Johnson
New York Law School
Jon Peha
Carnegie Mellon University
Jonathan Band
Jonathan Band PLLC
Michael Johnson
Carnegie Mellon University
Rajiv Ramnath
The Ohio State University
Kenneth Bass
Sterne Kessler Goldstein and Fox
Ethan Katsh
University of Massachusetts at Amherst
Joel Reidenberg
Fordham University
Patricia Bellia
University of Notre Dame
Neal Katyal
Georgetown University
Norman Sadeh
Carnegie Mellon University
L. Jean Camp
Indiana University
Jay Kesan
University of Illinois
Pam Samuelson
University of California at Berkeley
Adam Candeub
Michigan State University
Ramayya Krishnan
Carnegie Mellon University
Anilkumar Samtani
Nanyang Business School
Mike Carroll
American University
David Landsbergen
The Ohio State University
William Scherlis
Carnegie Mellon University
Jon Caulkins
Carnegie Mellon University
David Lazer
Northeastern University
Marvin Sirbu
Carnegie Mellon University
Julie Cohen
Georgetown University
David Lee
The Ohio State University
Michael Smith
Carnegie Mellon University
Jennifer Evans-Cowley
The Ohio State University
Ed Lee
Chicago-Kent School of Law
James Speta
Northwestern University
Lorrie Cranor
Carnegie Mellon University
Harry Litman
Phillips and Cohen LLP
Daniel Z. Sui
The Ohio State University
George Duncan
Carnegie Mellon University
Jessica Litman
University of Michigan
Rahul Telang
Carnegie Mellon University
Matt Eastin
The Ohio State University
Peter Madsen
Carnegie Mellon University
H. Lewis Ulman
The Ohio State University
Dave Farber
Carnegie Mellon University
Edward Malecki
The Ohio State University
Jonathan Weinberg
Wayne State University
Brett Frischmann
Cardozo Law School
Viktor Mayer-Schönberger
Oxford Internet Institute
Philip Weiser
University of Colorado
Michael Froomkin
University of Miami
Eben Moglen
Columbia University
Kevin Werbach
University of Pennsylvania
Ron Gdovic
Development Strategies
Benoit Morel
Carnegie Mellon University
Jane Winn
University of Washington
Michael Geist
University of Ottawa
Lisa Nelson
University of Pittsburgh
Alfred Yen
Boston University
Ellen Goodman
Rutgers University
Beth Noveck
New York Law School
Peter A. Winn
University of Washington
Rob Heverly
Albany Law School
Maureen O'Rourke
Boston University
Jonathan Zittrain
Harvard University
Reed Hundt
McKinsey & Company
Marc Zwillinger
Sonnenschein Nath & Rosenthal
THE OHIO STATE UNIVERSITY MORITZ COLLEGE OF LAW
OFFICERS OF ADMINISTRATION
E. Gordon Gee, B.A., J.D., Ed.D., President of the University and Prof. of Law
Joseph A. Alutto, B.A., M.A., Ph.D., Executive Vice President and Provost of the University
Alan C. Michaels, A.B., J.D., Dean of the College and Edwin M. Cooperman Prof. of Law
Garry W. Jenkins, B.A., M.P.P., J.D., Assoc. Dean for Academic Affairs and Assoc. Prof. of Law
Bruce S. Johnson, B.A., J.D., M.L.S., Assoc. Dean for Information Services and Thomas J. and Mary E. Heck and Leo H. Faust Memorial Designated Prof. of Law
Kathy S. Northern, B.A., J.D., Assoc. Dean for Admissions and Assoc. Prof. of Law
Donald B. Tobin, B.A., J.D., Assoc. Dean for Faculty and Frank E. and Virginia H. Bazler Designated Prof. in Business Law
Jessica Richman Dworkin, B.S., J.D., LL.M., Assist. Dean for International and Graduate Affairs and Adjunct Prof.
Monte Smith, B.A., J.D., Assist. Dean for Academic Affairs and Adjunct Prof.
Robert L. Solomon, II, B.A., J.D., Assist. Dean for Admissions and Financial Aid, Director of Minority Affairs and Adjunct Prof.
FACULTY EMERITI
Francis X. Beytagh, B.A., J.D.
Michael Braunstein, B.A., J.D.
Albert L. Clovis, B.A., M.A., LL.B.
Howard P. Fink, B.A., LL.B.
David A. Goldberger, B.A., J.D.
Sheldon W. Halpern, B.A., LL.B.
Lawrence R. Herman, A.B., LL.B.
Louis A. Jacobs, A.B., J.D., LL.M.
Michael Kindred, B.A., J.D., M.C.L., D.E.S.
Joan M. Krauskopf, A.B., J.D.
Stanley K. Laughlin, Jr., B.A., J.D.
James E. Meeks, A.B., J.D.
Rhonda R. Rivera, B.A., M.P.A., J.D.
Michael D. Rose, B.A., J.D., LL.M.
Allan J. Samansky, B.A., M.A., J.D.
Philip C. Sorensen, B.S., J.D.
Gregory M. Travalio, B.A., J.D., LL.M.
Douglas J. Whaley, B.A., J.D.
FACULTY
Michelle Alexander, B.A., J.D., Assoc. Prof. of Law
Rishi Batra, B.A., B.S., J.D., Langdon Fellow in Dispute
Resolution
Mary Beth Beazley, B.A., J.D., Assoc. Prof. of Law and
Director of Legal Writing
Douglas A. Berman, A.B., J.D., Robert J. Watkins/Procter &
Gamble Prof. of Law
Gregory A. Caldeira, B.A., A.M., Ph.D., Distinguished Univ.
Prof. and Prof. of Law
Cinnamon Carlarne, B.A., B.C.L., M.S., J.D., Assist. Prof. of
Law
Sanford N. Caust-Ellenbogen, B.A., M.C.R.P., J.D., Assoc.
Prof. of Law
Martha Chamallas, B.A., J.D., Robert D. Lynn Chair in Law
Daniel C.K. Chow, B.A., J.D., Joseph S. Platt-Porter Wright
Morris & Arthur Prof. of Law
Amy J. Cohen, B.A., J.D., Assoc. Prof. of Law
Sarah Rudolph Cole, B.A., J.D., Squire, Sanders & Dempsey
Designated Prof. of Law and Director of Prog. on Dispute
Resolution
Ruth Colker, A.B., J.D., Distinguished Univ. Prof. and Grace
Fern Heck Faust Chair in Constitutional Law
Elizabeth Ilgen Cooke, B.A., J.D., Clinical Prof. of Law
Richard C. Daley, B.A., J.D., Senior Lecturer in Law
Steven M. Davidoff, B.A., M.S., J.D., Assoc. Prof. of Law
Sharon L. Davies, B.A., J.D., John C. Elam/Vorys Sater Prof.
of Law and Executive Director of the William E. Kirwan
Institute for the Study of Race and Ethnicity
Ellen E. Deason, B.A., J.D., Joanne W. Murphy/Classes of
1965 and 1972 Prof. of Law
Joshua Dressler, B.A., J.D., Frank R. Strong Chair in Law
Terri L. Enns, B.A., J.D., Clinical Prof. of Law
Christopher M. Fairman, B.A., J.D., Alumni Society
Designated Prof. of Law
Katherine Hunt Federle, B.A., J.D., LL.M., Prof. of Law,
Director of Justice for Children Project, and Director of
the Center for Interdisciplinary Law and Policy Studies
Edward B. Foley, B.A., J.D., Isadore and Ida Topper Prof. of
Law
Larry T. Garvin, B.A., B.S., M.S., J.D., Lawrence D. Stanley
Prof. of Law
Arthur F. Greenbaum, B.A., J.D., James W. Shocknessy Prof.
of Law
L. Camille Hébert, B.A., J.D., Carter C. Kissell Prof. of Law
Stephanie R. Hoffer, B.S., J.D., LL.M., Assist. Prof. of Law
Steven F. Huefner, A.B., J.D., Prof. of Law and Director of
Clinical Prog.
Katherine S. Kelly, B.A., M.A., J.D., Assist. Clinical Prof. of
Law
Creola Johnson, B.S., J.D., Prof. of Law
Kimberly Jordan, B.S., J.D., Assist. Clinical Prof. of Law
Robert M. Krivoshey, B.A., M.A., Ph.D., J.D., Clinical Prof.
of Law
Stanley K. Laughlin, Jr., B.A., J.D., Prof. of Law and Adjunct
Prof. of Anthropology
Katrina J. Lee, B.A., J.D., Assist. Clinical Prof. of Law
Deborah Jones Merritt, A.B., J.D., John Deaver Drinko-Baker & Hostetler Chair in Law
Dale A. Oesterle, B.A., M.P.P., J.D., J. Gilbert Reese Chair in
Contract Law
John B. Quigley, A.B., M.A., LL.B., President’s Club Prof. of
Law
Anne Ralph, B.A., J.D., Assist. Clinical Prof. of Law
Nancy H. Rogers, B.A., J.D., Michael E. Moritz Chair in
Alternative Dispute Resolution
Guy A. Rub, LL.B., M.A., LL.M., Assist. Prof. of Law
Paul Rose, B.A., J.D., Assoc. Prof. of Law
Peter M. Shane, A.B., J.D., Jacob E. Davis and Jacob E.
Davis II Chair in Law
Ric Simmons, B.A., M.A., J.D., Prof. of Law
Marc S. Spindelman, B.A., J.D., Prof. of Law
Todd A. Starker, B.A., M.B.A., J.D., Assist. Clinical Prof. of
Law
David Stebenne, B.A., J.D., Ph.D., Assoc. Prof. of History
and Law
Joseph B. Stulberg, B.A., J.D., M.A., Ph.D., John W. Bricker
Prof. of Law
Peter P. Swire, B.A., J.D., C. William O’Neill Prof. of Law
and Judicial Administration
C. Lee Thomason, B.A., J.D., Assist. Clinical Prof. of Law
Daniel P. Tokaji, A.B., J.D., Robert M. Duncan/Jones Day
Designated Prof. of Law
Vincene F. Verdun, B.A., J.D., Assoc. Prof. of Law
Charles E. Wilson, B.S., J.D., Assoc. Prof. of Law
Saul Zipkin, B.A., J.D., Visiting Assist. Prof. of Law
ADJUNCT FACULTY
Sandra J. Anderson
Elizabeth L. Anstaett
Susan Azyndar
David Ball
David S. Bloomfield
Joseph M. Caliguri
Michael Carpenter
Hon. William B. Chandler
Douglas R. Cole
Hon. R. Guy Cole
Matt Cooper
Jonathan E. Coughlan
James E. Davidson
April Opper Davis
Anita DiPasquale
Hon. Gregory Frost
Todd G. Guttman
Katherine Hall
Nicole Hall
Gail Block Harris
Patricia Hatler
Kimberly Weber Herlihy
Hon. John E. Hoffman Jr.
James Johnson
Shawn Judge
Hon. Terrence P. Kemp
Hon. Norah McCann King
Greg Kirstein
Marya C. Kolman
James K.L. Lawrence
Katheryn M. Lloyd
Hon. Algenon Marbley
Hon. Stephen McIntosh
Richard M. Mescher
Robert J. Miller
Samuel H. Porter
Tanya J. Poteet
Ted L. Ramirez
Frank A. Ray
Douglas Rogers
Dan D. Sandman
Hon. Edmund A. Sargus
Hon. Jennifer Sargus
Edward M. Segelken
Elizabeth Sherowski
Kimberly Shumate
Scott V. Simpson
Douglas Squires
Hon. Jeffrey S. Sutton
Mark R. Weaver
Robert Weiler
James Wilson
Reid Wilson
Stephanie Ziegler
I/S: A JOURNAL OF LAW AND POLICY FOR THE INFORMATION SOCIETY is a new interdisciplinary journal of
research and commentary concentrating on the intersection of law, policy, and information technology. I/S
represents a one-of-a-kind partnership between one of America’s leading law schools, The Ohio State University
Moritz College of Law, and the nation’s foremost public policy school focused on information technology,
Carnegie Mellon University’s H. John Heinz III School of Public Policy and Management.
CITATION: Please cite to 7 ISJLP 543–748 (2012).
WEBSITE: Please visit moritz.osu.edu/students/groups/is to:
1. Subscribe
2. Access PDF versions of each Volume and individual articles
3. Review the latest news headlines on relevant I/S topics
4. Send questions and comments to I/S
EDITORIAL AND GENERAL OFFICES: Located at 55 West 12th Avenue, Columbus, Ohio 43210-1391. All I/S
questions and comments should be submitted via the I/S website or can be directed to:
CONTACT: I/S JOURNAL
The Moritz College of Law
55 West 12th Avenue
Columbus, OH 43210
PRICING (ANNUAL SUBSCRIPTION):
STUDENT (W/ ID): $25.00
DOMESTIC: $45.00
FOREIGN: $100.00
INSTITUTIONAL: $100.00
SUPPORTING: $150.00
SUBSCRIPTIONS: To subscribe please visit the I/S website. To subscribe via the mail, please enclose a check
made payable to “I/S JOURNAL” and send to I/S. I/S will assume that each subscriber desires to renew its
subscription unless the subscriber notifies I/S otherwise before the subscription expires. Back stock and reprint
editions of I/S are available upon request via the I/S website or upon written request to I/S.
ACCOUNT INFORMATION: I/S requires receipt of notification of any changes to account information (i.e., address
change) at least thirty days prior to date of the next issue with which that change is to take effect. In order to help
facilitate an immediate update of account information, please visit the I/S website (is-journal.org) or send a written
notification to I/S.
SUBMISSIONS: I/S is devoted to publishing outstanding research and writing that addresses the legal and policy
aspects of e-government and e-democracy, cybersecurity, online privacy and public information policy, e-commerce, information technology and economic development, telecommunications regulation, or any other
aspect of the social, economic, political, or cultural implications of information technology. We welcome
submissions from all relevant disciplines, including law, business, engineering, science, the social sciences, and
the humanities—so long as the manuscript addresses the legal or policy aspects of its topic.
I/S publishes three times a year, with one issue reserved for our “Privacy Year in Review,” written largely by a
student team. The remaining two issues will either feature a themed research section, presenting articles specially
commissioned for I/S, or a selection of the very best unsolicited manuscripts that we receive on topics within the
I/S subject matter domain. Even in issues with a themed research section, I/S is typically able to publish a small
number of unsolicited manuscripts, should we receive submissions of outstanding quality.
I/S welcomes unsolicited manuscripts in four categories: full-length research articles (typically around 10,000 words); shorter, less formal commentaries that address cutting-edge policy topics related to information and communication technology (“ICT”) and society (typically no more than 5,000 words); student research; and book reviews.
Because our examination of unsolicited manuscripts entails a combination of student and professional review,
manuscripts proposed for publication in any given academic year must typically reach us by either May 1 or June
1 of the preceding academic year (for inclusion in the following winter and spring issues, respectively). Inquiries
concerning format, topic, or anything else may be forwarded to I/S at any time.
COPYRIGHT INFORMATION: Copyright © 2012 by I/S: A JOURNAL OF LAW AND POLICY FOR THE
INFORMATION SOCIETY. Please direct copyright inquiries to Editorial and General Offices address listed above.
I/S: A JOURNAL OF LAW AND POLICY
FOR THE INFORMATION SOCIETY
VOLUME 7 NUMBER 3 WINTER 2012
CONTENTS
ARTICLES
The Mythical Right to Obscurity: A Pragmatic Defense of No
Privacy in Public
Heidi Reamer Anderson ............................................................. 543
AdChoices? Compliance with Online Behavioral Advertising
Notice and Choice Requirements
Saranga Komanduri, Richard Shay, Greg Norcie, Blase Ur &
Lorrie Faith Cranor .................................................................. 603
A Survey of the Use of Adobe Flash Local Shared Objects to
Respawn HTTP Cookies
Aleecia M. McDonald & Lorrie Faith Cranor ........................... 639
STUDENT NOTES
Status Update: When Social Media Enters the Courtroom
Lindsay M. Gladysz .................................................................... 688
Pulling Back the Curtain: Online Consumer Tracking
Laura J. Bowman ........................................................................ 718
I/S: A JOURNAL OF LAW AND POLICY FOR THE INFORMATION SOCIETY
The Mythical Right to Obscurity:
A Pragmatic Defense of No Privacy in Public†
HEIDI REAMER ANDERSON*
Abstract: In several states, citizens who videotaped police
misconduct and distributed the videos via the Internet
recently were arrested for violating state wiretapping
statutes. These arrests highlight a clash between two key
interests—the public’s desire to hold the officers accountable
via exposure and the officers’ desire to keep the information
private. The arrests also raise an oft-debated privacy law
question: When should something done or said in public
nevertheless be legally protected as private?
For decades, the answer has been: “[T]here can be no
privacy in that which is already public.” However, given
recent technological developments (e.g., cell phone cameras
and YouTube), some scholars suggest that the law
sometimes should restrict the exposure of truthful
information shared in public. Like the police who claim to
need privacy to do their job, these scholars claim that
people need privacy in public in order to feel dignified and
to feel comfortable developing new ideas. In their
pragmatic balance, these privacy-related needs appear to
trump exposure-related benefits.
In this Article, I argue that certain assumptions have led
these scholars to overstate privacy-related harms and to
† I owe many thanks to, among others, Thomas Anderson, Eric Goldman, Dennis Hirsch, Lyrissa Lidsky and the faculty who attended a workshop at the University of Florida for their helpful and encouraging comments on earlier drafts. I also wish to thank John Bennett and Barrett Rodriguez for their outstanding research assistance. All errors are my own.
* Assistant Professor, Florida Coastal School of Law.
understate exposure-related benefits. After documenting
and critiquing those assumptions, I show how the proper
balance likely favors exposure over privacy in all but a few
special cases. Ultimately, I conclude that the law should
continue to protect the mass exposure of truthful yet
embarrassing information via the “no privacy in public”
rule. Otherwise, we risk sacrificing the many benefits of
exposure—including those resulting from exposure of police
misconduct—on the altar of a mythical right to obscurity.
I. INTRODUCTION
When it comes to privacy and accountability, people always demand
the former for themselves and the latter for everyone else.1
On June 14, 2010, police officer Ian Walsh attempted to detain
two teenaged criminal suspects at a busy intersection in Seattle,
Washington.2 While he was struggling with the first suspect, a second
suspect pushed Officer Walsh’s arms away from the first.3
Immediately, Officer Walsh swung and punched the second suspect
directly in the face.4 The suspects were arrested and later released.5
1 DAVID BRIN, THE TRANSPARENT SOCIETY: WILL TECHNOLOGY FORCE US TO CHOOSE BETWEEN PRIVACY AND FREEDOM? 1 (1998) (commenting on the proliferation of video surveillance). Justice Scalia once shared a similar sentiment regarding the conflict between people’s desires for privacy and accountability, stating: “This law has every appearance of a prohibition that society is prepared to impose upon the press but not upon itself.” Florida Star v. B.J.F., 491 U.S. 524, 542 (1989) (Scalia, J., concurring). In Florida Star, the Court held that a state law forbidding the reporting of a rape victim’s name by mass media but not by individuals was unconstitutionally under-inclusive. 491 U.S. at 540. Also commenting upon the conflict between privacy and accountability, privacy scholar Daniel Solove has acknowledged that “[p]rivacy impedes discourse, impairs autonomy in communication, and prevents accountability for one’s conduct.” Daniel J. Solove, The Virtues of Knowing Less, 53 DUKE L.J. 967, 973 (2003).
2 Seattle Officer Punches Girl in Face During Jaywalking Stop, SEATTLE POST-INTELLIGENCER (June 15, 2010, 10:00 PM), http://www.seattlepi.com/local/article/Seattle-officer-punches-girl-in-face-during-890218.php.
3 Id.
4 See Levi Pulkkinen & Casey McNerthney, Police Punch Caught on Video Prompts Seattle Police Review of Arrest Procedures, SEATTLE POST-INTELLIGENCER (June 15, 2010, 10:00 PM), http://www.seattlepi.com/local/article/Punch-caught-on-video-prompts-Seattle-police-889276.php (noting that “[w]hen another woman grabbed him, [the officer] punched her in the face”). The video clip, which includes some profanity, is available at http://www.youtube.com/watch?v=E9w9AfptGGQ.
Had this confrontation occurred on June 14, 1980, instead of June
14, 2010, few people other than those present at the time would have
learned about it or discussed it afterwards. If either teenager later
shared their account with others, listeners may have doubted its veracity. In all likelihood, the confrontation would have become a story shared among few, and perhaps forgotten. But that is not what happened. Instead, because the confrontation occurred in 2010, it was
videotaped by a bystander who then posted the video online via
YouTube. Anyone with an Internet connection then could find the
video via a simple search engine query (e.g., “Seattle police punch”),
and watch it repeatedly, on demand.6
The easily-retrieved Seattle video clip inspired many viewers to
share their opinions online by writing comments to online news
stories, by sharing opinions on blogs, by posting Facebook updates, or
by Tweeting.7 These conversations about the incident addressed
several important public policy issues, including: (i) racial tension
(Officer Walsh was white, the suspects were African-American), (ii)
sex discrimination (Officer Walsh was male, the suspects were
female), and (iii) the government’s police power versus the liberty of
its citizens (the punch was viewed by many as excessive given the
suspects’ relatively minor offense of jaywalking).8 Similarly rigorous
policy debates have occurred in multiple cities throughout the United
5 See Pulkkinen & McNerthney, supra note 4 (documenting suspects’ release).
6 Id. (reporting that video was recorded by a “bystander”). As of January 5, 2011, the “official” video posted on YouTube had over 1.7 million views. See Seattle Police Confrontation – komonews.com, YOUTUBE (June 14, 2010), http://www.youtube.com/watch?v=E9w9AfptGGQ&feature=player_embedded.
7 The inner workings and appeals of these social networking sites have been well documented elsewhere. See, e.g., James Grimmelmann, Saving Facebook, 94 IOWA L. REV. 1137, 1142–49 (2009) (describing architecture of, and most common uses of, Facebook and MySpace); Paul M. Schwartz, Review: From Victorian Secrets to Cyberspace Shaming, 76 U. CHI. L. REV. 1407, 1447 (2009) (describing Twitter as “a microblogging service that allows its users to send and read other users’ messages . . . known as tweets, which are messages of no more than 140 characters in length.”).
8 Pulkkinen & McNerthney, supra note 4 (reporting parties’ gender and race and noting that the President of Seattle’s Urban League “call[ed] the violent altercation an overreaction to jaywalking”); Casey McNerthney, Police Guild: Officer Did Nothing Wrong in Videotaped Punch, SEATTLE POST-INTELLIGENCER (June 14, 2010), http://www.seattlepi.com/local/article/Police-guild-Officer-did-nothing-wrong-in-885514.php (reporting police union president’s opinion that Officer Walsh “did nothing wrong” and showing 234 written comments from June 15, 2010 to June 21, 2010).
States in recent years, all triggered by violent police behavior exposed
via “citizen journalism.”9
With each police exposure video, the public and government
response generally proceeds in one of two directions. Some
government officials react with a look inward, via internal
investigations and, ultimately, changed policies.10 Others react by
pointing the finger outward—at the one doing the videotaping. For
example, in at least five states, citizens who videotaped police
misconduct were prosecuted for violating eavesdropping and
wiretapping laws, which bar recording conversations, absent consent
of all parties, except when there is no reasonable expectation of
privacy.11 In one notable example, a motorcyclist posted a video of his
own roadside encounter with a plain-clothed, gun-wielding police
officer who cited him for speeding.12 Shortly after the video received a
9 Only three months earlier, a different Seattle policeman was videotaped stomping on the head and body of a wrongly-apprehended Hispanic suspect while exclaiming, “I’m going to beat the [expletive] Mexican piss out of you, homey. You feel me?” FBI Investigating Seattle Police Beating, SEATTLE POST-INTELLIGENCER (May 9, 2010, 10:00 PM), http://www.seattlepi.com/local/article/FBI-investigating-Seattle-police-beating-890125.php. Perhaps the most infamous police brutality exposure in recent memory was the video depicting Bay Area Rapid Transit police Officer Johannes Mehserle shooting unarmed passenger Oscar Grant in the back. See John Cote, Police Kill Man Near Fruitvale BART Station, S.F. CHRON., July 18, 2010, at C1, available at http://articles.sfgate.com/2010-07-18/bay-area/21988301_1_bart-police-bart-officers-fruitvale-bart-station (recalling how “Grant was killed as he lay face down on the platform” and how “the shooting touched off widespread protests in Oakland . . . after a Los Angeles jury convicted Mehserle of involuntary manslaughter . . . instead of a more serious murder charge”). A quick Google search for “police beating video” sadly reveals hundreds of such incidents in 2010. See Injustice Everywhere: The National Police Misconduct Statistics and Reporting Project, http://www.injusticeeverywhere.com (last visited Jan. 2, 2011) (maintaining “national police misconduct newsfeed” with summaries and links).
10 For example, Seattle issued “Seattle Police Office of Professional Accountability” reports and “launched a global review of arrest tactics” after the Officer Walsh video surfaced. Police Guild, supra note 8.
11 See Ray Sanchez, Growing Number of Prosecutions for Videotaping the Police, ABCNEWS.COM (July 19, 2010), http://abcnews.go.com/US/TheLaw/videotaping-cops-arrest/story?id=11179076 (documenting arrests in Maryland, Florida, and New Hampshire); Wendy McElroy, Are Cameras the New Guns?, GIZMODO (June 2, 2010, 5:00 PM), http://gizmodo.com/5553765/are-cameras-the-new-guns (documenting arrests in Illinois and Massachusetts and opining that “the prosecutions are a form of social control to discourage criticism of the police or simple dissent”).
12 See Sanchez, supra note 11; McElroy, supra note 11 (describing circumstances surrounding Anthony Graber’s arrest). A Maryland Circuit Court judge later dismissed the charges against Mr. Graber. See Peter Hermann, Judge Says Man Within Rights to Record
great number of views on YouTube, six state troopers showed up at
the motorcyclist’s home with a warrant, seized his computer, and later
charged him with violating Maryland’s anti-wiretapping statute; if
convicted, he could face up to sixteen years in prison.13
Arrests like these (i.e., arrests of those “caught” videotaping police) consequently have triggered their own privacy-themed debate.
Because the threat of criminal prosecution for taping police acts as a
censor, some commentators have called for states to more clearly
permit such recordings and facilitate the public interest benefits they
trigger.14 In opposition, police officers and their supporters argue that
the threat of constant surveillance and later distribution via the
Internet is an unfair invasion of privacy that prevents them from
adequately doing their job. For example, officers may hesitate to take
necessary action out of concern that a partial and possibly inaccurate
video recording of that action will lead to the officers’ firing or to bad
police work. This concern, in turn, threatens the officers’ reputation
and public safety as a whole.15
Police Traffic Stop, BALT. SUN, Sept. 27, 2010, at A1. Judge Plitt’s opinion read, in
part: “Those of us who are public officials and are entrusted with the power of the state are
ultimately accountable to the public. . . . When we exercise that power in a public forum,
we should not expect our activity to be shielded from public scrutiny.” Id.
13 See MD. CODE ANN. § 10-402 (2000); Sanchez, supra note 11; McElroy, supra note 11 (describing circumstances surrounding Anthony Graber’s arrest).
14 See Opinion, Our View on Cops and Cameras: When Citizens Film Police, It Shouldn’t Be a Crime, USA TODAY, July 15, 2010, at A10; Sanchez, supra note 11 (quoting David Rocah, an attorney for the American Civil Liberties Union of Maryland, as stating, “The message is clearly, ‘Don’t criticize the police’. . . . With these charges, anyone who would even think to record the police is now justifiably in fear that they will also be criminally charged.”); David Rittgers, Editorial, Maryland Wiretapping Law Needs an Update, BALT. SUN, June 1, 2010, at A13 (suggesting changes to Maryland law because threat of criminal prosecution is “enough to make citizens pause before pushing the record button”); H.R. Con. Res. 298, 111th Cong. (2010).
15 See Dennis J. Slocumb and Rich Roberts, Opinion, Opposing View on Cops and Cameras: Respect Officers’ Rights, USA TODAY, July 15, 2010, at 10A (opposing videotaping of officers in part because of “the inability of those with no understanding of police work to clearly and objectively interpret what they see”). In retort to this argument, one journalist responded: “If they’re doing good police work, they should not be worried about getting caught on tape.” Rittgers, supra note 14; cf. Daniel J. Solove, “I’ve Got Nothing to Hide” and Other Misunderstandings of Privacy, 44 SAN DIEGO L. REV. 745 (2007) (critiquing “nothing to hide” arguments like that of Rittgers). At least two police departments have decided to address the concern of incomplete footage by wearing cameras themselves. See BART Tests Police Cameras, S.F. CHRON., Sept. 30, 2011, http://www.signonsandiego.com/news/2011/sep/30/bart-tests-police-cameras (detailing plan of San Francisco’s Bay Area Rapid Transit police to wear cameras); Lisa Halverstadt, Arizona Officers Wearing Video Cameras, THE ARIZONA REPUBLIC, Apr. 9, 2011,
In many ways, the current debate over citizen exposures of public
police conduct mirrors a broader exposure versus privacy debate
among legal scholars: When should something done “in public”
nevertheless be “private” and, thus, legally protected from exposure?16
Surprisingly, the combination of technologies that helped the Seattle
police incident ignite such a useful debate—recording (small video
camera), distribution (Internet/YouTube), and indexing (Google)—is
the same combination that has led some privacy scholars to call for
restricting the exposure and flow of truthful information shared in
public.17 Their thesis is supported, in part, via anecdotal stories about
persons exposed and the harm such persons suffered.18 Like the police
http://www.azcentral.com/news/articles/2011/04/09/20110409police-camera-recordings.html (describing how police forces in Arizona will wear cameras in addition to using dashboard cameras).
16 See, e.g., Solove, supra note 1, at 969 (framing the question as, “When is it justifiable for the law to prohibit the disclosure of another person’s private information?”); see generally Richard A. Posner, The Right to Privacy, 12 GA. L. REV. 393 (1978) (using economic theory to justify only a few circumstances in which one should be able to restrict the flow of accurate, personal information about one’s self).
17 See Neil M. Richards and Daniel J. Solove, Prosser’s Privacy Law: A Mixed Legacy, 98 CALIF. L. REV. 1887, 1889 (2010) (“Today, the chorus of opinion is that the tort law of privacy has been ineffective, particularly in remedying the burgeoning collection, use and dissemination of personal information in the Information Age.”); M. Ryan Calo, People Can Be So Fake: A New Dimension to Privacy and Technology Scholarship, 114 PENN ST. L. REV. 809, 817–24 (2010) (collecting other scholars’ technology-based arguments for privacy regulation and concluding that “the notion that technology implicates privacy insofar as it augments the power to collect, process, or disseminate information dominates privacy and technology commentary”); Jacqueline D. Lipton, “We, the Paparazzi”: Developing a Privacy Paradigm for Digital Video, 95 IOWA L. REV. 919, 927 (2010) (suggesting that “[t]he fact that individuals can instantly snap a photograph . . . and can then disseminate that image instantaneously and globally at the push of a button, raises significant problems”); Solove, supra note 1, at 970 (justifying the regulation of personal disclosures “[g]iven the development of technologies that permit extensive data gathering and dissemination”); DANIEL J. SOLOVE, UNDERSTANDING PRIVACY 101 (2009) [hereinafter SOLOVE, UNDERSTANDING PRIVACY] (“New technologies have spawned a panoply of different privacy problems”).
18. See infra notes 104–17 and accompanying text; Lipton, supra note 17, at 921–22 (chronicling stories of Star Wars Kid and Dog Poop Girl as evidence of “worrying new trend” of “intruding into each other’s privacy and anonymity with video and multimedia files in ways that harm the subjects of these digital files”); DANIEL J. SOLOVE, THE FUTURE OF REPUTATION: GOSSIP, RUMOR, AND PRIVACY ON THE INTERNET 38–48 (2007) [hereinafter, SOLOVE, FUTURE OF REPUTATION] (documenting the “sobering consequences” of the online exposures of individuals known as Dooce, the Numa Numa dancer, Little Fatty, and Star Wars Kid). Implicit in some of these stories is a fear directive—that you, too, should be afraid of becoming Dog Poop Girl or her ilk. See infra notes 282–83 and accompanying text.
2012]
ANDERSON
549
who claim to need privacy to protect their reputations and do their job
effectively, these scholars argue that people often need protection
from exposure of what they do or say in public in order to feel wholly
dignified and to feel comfortable developing new ideas.19
Although these scholars’ primary goals—protection of individual
dignity and individual “thinking space”—may be admirable, their
attempts at balancing the costs and benefits of their proposed
protections appear misplaced in at least two ways. First, they tend to
overstate the potential privacy-related harms that result from
exposure of information initially shared in public.20 Second, they
frequently understate the many existing and potential benefits of
exposing truthful information shared in public.21 After correcting for
these errors, the revised pragmatic balance shared below indicates
that the benefits of exposing public conduct likely outweigh the
harms, even without directly invoking First Amendment speech
rights.22 Accordingly, this Article concludes that the law should not
restrict the collection and reporting of truthful information shared in
public in order to prevent a perceived, potential harm to someone’s
privacy interests.23
Part I of this Article briefly defines the conflict between exposure
and privacy.24 It also documents how the law previously has resolved
that conflict through the “no privacy in public” rule.25 Part II reviews
the position of other scholars that the “no privacy in public” rule is an
19. See SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 108 (noting how “persistent gawking can create feelings of anxiety and discomfort” that harm one’s dignity); Eugene Volokh, Freedom of Speech and Information Privacy: The Troubling Implications of a Right to Stop People from Speaking about You, 52 STAN. L. REV. 1049, 1110–12 (2000) (chronicling other scholars’ arguments for privacy restrictions on the basis that the disclosure “injures people’s dignity or emotionally distresses them”); Neil M. Richards, Intellectual Privacy, 87 TEX. L. REV. 387, 412–16 (2008) (arguing that spatial privacy is necessary for thought development).
20. See infra notes 173–214 and accompanying text.
21. See infra notes 217–267 and accompanying text.
22. See infra Part III.C. Although some of the exposure-related harms and benefits I identify involve speech elements, and thus, the First Amendment interests of the speaker, one need not constitutionalize my arguments in order to make them persuasive.
23. See infra notes 279–83 and accompanying text.
24. See infra notes 30–43 and accompanying text.
25. See infra notes 44–89.
550
I/S: A JOURNAL OF LAW AND POLICY
[Vol. 7:3
inadequate and antiquated rule that fails to protect obscurity.26 In
Part III, this Article shows how these scholars’ demand for a right to
obscurity is misplaced because they (i) overstate the potential harms
linked to more technologically-advanced and democratized exposure,
and (ii) inadequately account for the many benefits of exposure that
would be blocked should their quest for a right to obscurity succeed.27
Finally, in Part IV, this Article tentatively concludes that the benefits
of the “no privacy in public” rule likely outweigh the privacy harms.28
Ultimately, changing the “no privacy in public” rule would risk
sacrificing the many benefits of exposure—including those resulting
from the exposure of police misconduct—on the altar of a mythical
right to obscurity.29
I. THE CONFLICT BETWEEN EXPOSURE AND OBSCURITY
In this Part, I discuss the terminology and historical background
necessary to appreciate the current debate over technology’s alleged
assault on obscurity. In Part I.A, I briefly define the terms exposure
and privacy and characterize certain conflicts between the two as the
“Obscurity Problem.” Next, in Part I.B, I show how the law generally
has resolved the Obscurity Problem through the simple “no privacy in
public” rule. In later Parts, I demonstrate how this rule remains
proper today, despite most pro-obscurity scholars’ assertions to the
contrary.30
A. DEFINING THE OBSCURITY PROBLEM
Generally, exposure occurs when one, without the express consent
of the other: (i) lawfully gathers truthful information about another
person or entity shared by the exposed person in public, and (ii)
makes that information available to someone other than himself, often
26. See infra notes 89–130 and accompanying text.
27. See infra notes 131–267 and accompanying text.
28. Note, however, that there are special cases for which this balancing tips the other way. See infra Part III.D.
29. See supra notes 2–9 and accompanying text (equating calls for privacy in public to calls for a right to obscurity).
30. See infra notes 111–138 and accompanying text.
in a context other than the one in which the information was shared.31
Exposure encompasses a broad range of actions, including an in-person recount to another of an event one witnessed earlier, a blog
post about a speech recently given, and the surreptitious video
recording of someone in public, later published to a website. As these
few examples demonstrate, exposure can be helpful, neutral, or
perhaps hurtful to the one exposed.32 If enough people pay attention
to it and remember it, one effect of an exposure on the exposed person
is to reduce his obscurity.33 Prior to the exposure, his words or actions
were known only to a few; after the exposure, they are known to
many.34 Ultimately, it is most helpful to view obscurity as the absence
of exposure.
Viewing obscurity as the absence of exposure helps to better frame
the current debate over the validity of the “no privacy in public” rule
as a binary conflict of interests.35 On one side, the side of exposure, are
the benefits that flow immediately and in the long run from the
general sharing of public information about people with each other;
on the other side, the side of obscurity, are the harms prevented by
31. For those familiar with Daniel J. Solove’s privacy taxonomy, this definition incorporates, in part, the allegedly “harmful activities” of “information collection,” “information processing,” and “information dissemination.” SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 103.
32. For example, the person exposed may view the in-person recount as neutral gossip, the blog post about the speech as helpful publicity, and the surreptitious video recording as harmful to his dignity.
33. See Lior Jacob Strahilevitz, Reputation Nation: Law in an Era of Ubiquitous Personal Information, 102 NW. U. L. REV. 1667, 1670 (2008) (“Personal information that was once obscure can be revealed almost instantaneously via a Google search.”); see also SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 149 (quoting Aesop for the proposition that “[o]bscurity often brings safety”).
34. See SOLOVE, FUTURE OF REPUTATION, supra note 18, at 7 (“People who act inappropriately might not be able to escape into obscurity anymore; instead, they may be captured in pixels and plastered across the Internet.”).
35. Focusing on the exposure versus privacy conflict also narrows the applicability of this Article’s arguments to only those issues involving private citizens’ exposure of public conduct, thus excluding those privacy issues involving other core interests. This category may be viewed as a subset of the “informational privacy” category. See Lior Jacob Strahilevitz, Reunifying Privacy, 98 CALIF. L. REV. 2007, 2010 (2010) (agreeing that “information privacy is a category with enough commonalities to render it a coherent concept”) (citations omitted).
protecting the same and similar information from exposure.36 Setting
up the analysis in this “harms versus benefits” fashion also permits us
to do the situational, problem-based harm analysis that at least one
pro-pragmatism scholar, Daniel Solove, suggests is appropriate.37
In his book Understanding Privacy, Solove argues that the proper
approach to addressing privacy concerns is to consider the interests
involved in individual privacy problems.38 In this article, I accept
Professor Solove’s invitation to begin balancing harms and benefits in
particular “problem situations” involving privacy.39 Specifically, I
consider the harms and benefits of what I call the Obscurity
Problem.40 The Obscurity Problem occurs when a private actor (whom
I and others describe as a “citizen journalist”41) lawfully collects and
further exposes information (spoken or behavioral) that someone else
initially shared in public.42 Because any legal response to the
36. Samuel Warren and Louis Brandeis described the natural conflict as follows: “Matters which men of the first class may justly contend, concern themselves alone, may in those of the second be the subject of legitimate interest to their fellow citizens.” See Samuel D. Warren and Louis D. Brandeis, The Right to Privacy, 4 HARV. L. REV. 193, 215 (1890).
37. See SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 174; see also Danielle Keats Citron & Leslie Meltzer Henry, Review: Visionary Pragmatism and the Value of Privacy in the Twenty-First Century, 108 MICH. L. REV. 1107 (2010) (reviewing and endorsing Solove’s approach).
38. See SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 174.
39. Id.
40. This problem would encompass, at least in part, the sub-problems Solove identifies as Identification, Disclosure, Accessibility and Distortion. SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 121–26 (identification), 140–46 (disclosure), 149–50 (increased accessibility), 158–61 (distortion).
41. See Randall D. Eliason, The Problems with the Reporter’s Privilege, 57 AM. U. L. REV. 1341, 1366–67 (2008) (discussing “the rise of so-called citizen journalists” who “post information of public concern” via blogs, social networking sites and cable news sites). As Eliason further notes, CNN calls reports from citizen journalists “I-Reports” while Fox News calls similar reports “U-Reports.” Id. at 1367 n.97; see also Jonathan Krim, Subway Fracas Escalates Into Test of the Internet’s Power to Shame, WASH. POST, July 7, 2005, at D01, available at http://www.washingtonpost.com/wp-dyn/content/article/2005/07/06/AR2005070601953_pf.html (describing the Internet as “a venue of so-called citizen journalism, in which swarms of surfers mobilize to gather information on what the traditional media isn’t covering, or is covering in a way that dissatisfies some people”).
42. By definition, the Obscurity Problem excludes other related yet distinct privacy “problems,” such as government surveillance or stalking, and only applies when the exposed person is the one who initially shared the information. For a discussion of how such problems are different from the Obscurity Problem, see Daniel Solove, A Taxonomy of Privacy, 154 U. PA. L. REV. 477 (2006).
Obscurity Problem would involve changing the “no privacy in public” rule, the next section briefly chronicles the evolution of this rule as a way to resolve the Obscurity Problem.43
B. THE “NO PRIVACY IN PUBLIC” RULE
The “no privacy in public” rule is tantalizing in its simplicity. Generally stated, it is: “[T]here can be no privacy in that which is already public.”44 The practical effect of the rule is that one has no tort law-protectable privacy interest (including a right to obscurity) in what one does or says in public.45 This section begins with a brief overview of how this rule emerged in the late 19th century and how it developed through the mid-20th century in both tort and criminal procedure contexts.46 The overview illustrates how the rule’s concept of “in public” was used initially as an adjective describing the nature of the person or information exposed, and later as an adjective describing the person’s physical location.47 Later sections of the
43. See infra notes 44–88 and accompanying text. Part III shows how balancing harms and benefits supports retention of the “no privacy in public” rule.
44. Gill v. Hearst Pub. Co., 253 P.2d 441 (Cal. 1953). In Gill, the plaintiffs sued after Harper’s Bazaar and related magazines published a photo of them in an “affectionate pose” at the Los Angeles Farmers’ Market. Id. at 442. The court rejected plaintiffs’ invasion of privacy claim because the photographed and published kiss took place “not . . . on private grounds but rather . . . in a public market place.” Id. at 444. By “voluntarily expos[ing] themselves to public gaze in a pose open to the view of” others, the plaintiffs “waived their right of privacy.” Id.
45. See Patricia Sanchez Abril, Recasting Privacy Torts in a Spaceless World, 21 HARV. J.L. & TECH. 1, 6 (2007) (“[C]ourts have generally held that anything capable of being viewed from a ‘public place’ does not fall within the privacy torts’ protective umbrella.”). As with any rule, there are exceptions in certain contexts, especially those involving property. For example, dropping one’s unpublished manuscript in public does not mean that anyone can pick it up and print it without paying royalties any more than one could take one’s house keys dropped in public and claim the house as one’s own. Such extreme examples involve appropriation, in the former instance, and criminal breaking and entering, in the second, both of which fall outside the scope of this Article.
46. See infra notes 61–88 and accompanying text.
47. See infra notes 54–60 and accompanying text; see also Abril, supra note 45, at 2 (using examples of gossip regarding drunken behavior to illustrate that “privacy is usually a function of the physical space in which the purportedly private activity occurred, its subject matter, whether it was veiled in secrecy, and whether others were present”).
Article will detail others’ critiques of, and my defense of, the “no privacy in public” rule as it applies to the Obscurity Problem.48
1. PUBLIC VERSUS PRIVATE IN THE RIGHT TO PRIVACY
Privacy first was identified as a common law right in Samuel Warren and Louis Brandeis’s Harvard Law Review article, The Right to Privacy.49 Warren and Brandeis defined the right in various ways, including as the “right to be let alone”50 and as the “right of determining . . . to what extent [one’s] thoughts . . . shall be communicated to others.”51 Their widely reported, yet later questioned, motivation for writing the piece was a disdain for exposures that occurred after Warren’s marriage to a New York socialite, and his associated loss in obscurity.52 Ultimately, Warren and Brandeis called for a limited freedom from exposure of certain actions and information shared in public in order “to protect the privacy of private life.”53
48. See infra Parts II and III.
49. Although the term “right to privacy” permeates legal and popular writings, many scholars have challenged the concept of privacy as a “right.” See generally Diane L. Zimmerman, Requiem for a Heavyweight: A Farewell to Warren and Brandeis’s Privacy Tort, 68 CORNELL L. REV. 291 (1983); see generally Volokh, supra note 19. Unlike this article, however, these critiques focus on the First Amendment as the primary justification for limiting privacy.
50. Warren & Brandeis, supra note 36, at 195 (citing COOLEY ON TORTS, 2d ed., at 29).
51. Warren & Brandeis, supra note 36, at 198 (further suggesting that one “generally retains the power to fix the limits of the publicity which shall be given [his thoughts]”); Warren & Brandeis, supra note 36, at 198 n.2 (“It is certain every man has a right to keep his own sentiments, if he pleases. He has certainly a right to judge whether he will make them public, or commit them only to the sight of his friends.”) (quoting Yates, J.).
52. See Richards & Solove, supra note 17, at 1897 n.59 (“Prosser believed that the press coverage surrounding the wedding of Warren’s daughter had inspired the [Right to Privacy] article, although subsequent scholarship has proven that this could not actually have been the case.”) (citations omitted).
53. Warren & Brandeis, supra note 36, at 215.
Most relevant to the instant Article, Warren and Brandeis also
attempted to define the types of information to which one’s right to
privacy did not extend, using a three-pronged public-versus-private
distinction based on analogies to the common law of libel and slander,
as well as French law.54 First, they stated that “[t]he right to privacy
does not prohibit any publication of matter which is of public or
general interest.”55 This exclusionary statement used the word
“public” as an adjective describing the nature of the information
involved, regardless of the physical space in which the information
was shared.56 Next, Warren and Brandeis used “public” to describe the
kind of individual involved, versus the nature of the information or its
physical origin, implying that the professional aspirations of the man
himself determine whether the information about him is public or
private. Specifically, they opined that information about one who
seeks public office or position is “public,” while the same kind of
information about a person who seeks no such office or position
would be “private.”57 Combining these first two prongs of Warren and
54. Id. at 214–15.
55. Id. at 214. Elsewhere in The Right to Privacy, Warren and Brandeis asserted that “the law must . . . protect those persons with whose affairs the community has no legitimate concern” and “all persons, whatsoever their position or station” from having information “made public against their will.” Id. at 214–15 (emphasis added).
56. Solove, supra note 1, at 1001 (describing Warren and Brandeis’s public versus private distinction as a “newsworthiness test” based on whether a matter “is of public or general interest”).
57. Warren & Brandeis, supra note 36, at 215 (“To publish of a modest and retiring individual that he suffers from an impediment in his speech or that he cannot spell correctly, is an unwarranted, if not an unexampled, infringement of his rights, while to state and comment on the same characteristics found in a would-be congressman could not be regarded as beyond the pale of propriety.”); Warren & Brandeis, supra note 36, at 215 (“Since, then, the propriety of publishing the very same facts may depend wholly upon the person concerning whom they are published, no fixed formula can be used to prohibit obnoxious publications.”). This type of distinction appears to mirror the public figure versus private figure distinction in defamation law that lives on despite having been chipped away at via various rulings suggesting that whomever or whatever is worthy of press attention automatically loses private figure status. See New York Times Co. v. Sullivan, 376 U.S. 254, 279–80 (1964) (holding that public figure plaintiff must show that defamatory statement was made with “actual malice”); Gertz v. Robert Welch, Inc., 418 U.S. 323, 351 (1974) (distinguishing “all purpose” public figures, such as a current mayor, from “limited” purpose public figures, such as a crime victim); Solove, supra note 1, at 1008–10 (chronicling and critiquing the “public versus private figure” distinction); Christopher Russell Smith, Dragged into the Vortex: Reclaiming Private Plaintiffs’ Interests in Limited Purpose Public Figure Doctrine, 89 IOWA L. REV. 1419, 1421 (2004) (suggesting that, under lower courts’ recent applications of the public figure doctrine, “a private plaintiff may be found to be a public figure even though he or she has little or no involvement in [a] controversy”).
Brandeis’s private versus public distinction (the nature of the information and the aspirations of the person), one gets the following summation of the “no privacy in public” rule: “[T]he matters of which the publication should be repressed may be described as those which concern the private life, habits, acts and relations of an individual, and have no legitimate connection with his fitness for a public office which he seeks or for which he is suggested . . . and have no legitimate relation to or bearing upon any act done by him in a public or quasi-public capacity.”58
Third, Warren and Brandeis briefly defined private versus public in the spatial sense, but only through a metaphor and without actual mention of the word “public.” Specifically, they declared: “The common law has always recognized a man’s house as his castle, impregnable, often, even to its own officers engaged in the execution of its commands. Shall the courts thus close the front entrance to constituted authority, and open wide the back door to idle or prurient curiosity?”59 By referring to a “man’s house as his castle,” Warren and Brandeis appear to draw a line between what a man does within his own home and what he does outside of that particular zone of privacy.60 Interestingly, Warren and Brandeis did not mention this spatial sense of private versus public until the last few lines of their article, and, unlike the other two prongs of public versus private, they mentioned it without citation to any source. Thus, Warren and Brandeis acknowledged the value in exposure of certain information shared in public, yet they yearned for additional legal protection for
58. Warren & Brandeis, supra note 36, at 216.
59. Id. at 220.
60. The concept of a man’s home as his castle—and, thus, a place worthy of protection from intrusion by the government and private individuals—often is referred to as the Castle Doctrine. The Castle Doctrine has been used in various contexts, including cases involving Fourth Amendment search and seizure issues, homicide cases involving self-defense, and privacy cases. Many trace the origin of the Castle Doctrine to English Common Law, and, more specifically, to Sir Edward Coke in Semayne’s Case. See Nicholas J. Johnson, Self-Defense?, 2 J.L. ECON. POL’Y 187, 199 (2006). In Semayne’s Case, Lord Coke stated, “For a man’s house is his castle, & domus sua cuique est tutissimum refugium; for where shall a man be safe, if it be not in his house?” See David I. Caplan & Sue Wimmershoff-Caplan, Postmodernism and the Model Penal Code v. the Fourth, Fifth and Fourteenth Amendments and the Castle Privacy Doctrine in the Twenty-First Century, 73 UMKC L. REV. 1073, 1090 (2005) (citations omitted).
some information and actions shared in public.61 For them, the
“public” part of the “no privacy in public” distinction referred to the
person, to the nature of the information about the person, and to the
person’s physical location when the information was shared.
2. PUBLIC VERSUS PRIVATE IN PROSSER’S PRIVACY TORTS62
In the many decisions and voluminous commentary to follow
Warren and Brandeis’s article, no person was more influential than
William Prosser, primarily through his treatise, Handbook on the Law
of Torts, and his 1960 California Law Review article, Privacy.63 In
these and other works, Prosser shaped a disorganized set of privacy-related cases into four tort-based causes of action one could use to
protect the right to privacy defined by Warren and Brandeis.64 The
four torts included: (i) intrusion upon the plaintiff’s seclusion, (ii)
public disclosure of embarrassing private facts, (iii) false light
publicity, and (iv) appropriation.65 The Restatement of Torts, with
Prosser as its chief reporter, adopted this four-pronged approach to
61. At least one scholar has suggested that, at its core, Warren and Brandeis’s binary distinction between the public and the private was a distinction between conduct or spaces associated with men, which were considered public, and those more traditionally associated with women, which made up the private sphere. See Neil M. Richards, The Puzzle of Brandeis, Privacy, and Speech, 63 VAND. L. REV. 1295, 1304–05 (2010).
62. Although this article focuses on the four privacy torts originally crafted by William Prosser, other torts also involve privacy interests. See Solove, supra note 1, at 971–73 (noting torts such as breach of confidentiality for disclosures by physicians and banks as well as statutory restrictions on the disclosure of certain records).
63. See WILLIAM L. PROSSER, HANDBOOK OF THE LAW OF TORTS (1st ed. 1941) [hereinafter PROSSER, HANDBOOK]; William L. Prosser, Privacy, 48 CALIF. L. REV. 383 (1960) [hereinafter Prosser, Privacy]; see also Richards & Solove, supra note 17, at 1888 (anointing Prosser as “privacy law’s chief architect”); see also Richards & Solove, supra note 17, at 1903–12 (chronicling history of Prosser’s “influence over the development of tort privacy”).
64. See Richards & Solove, supra note 17, at 1888 (noting that Prosser “was engaged with tort privacy throughout his career, from his earliest torts scholarship in the 1940s until his death in 1972”). In the sixty years preceding his article, privacy as a tort theory had struggled to develop, being limited at first to commercial uses of persons’ likenesses without permission and a handful of state statutes purporting to protect additional uses as well. See Richards & Solove, supra note 17, at 1893–95 (citations omitted).
65. Prosser, Privacy, supra note 63, at 389. Prosser’s efforts have been characterized as “tak[ing] a mess of hundreds of conflicting cases and reduc[ing] them to a scheme of four related but distinct tort actions.” Richards & Solove, supra note 17, at 1889.
privacy; courts and legislatures implemented it as well.66 All of these
sources of law reflect, to some degree, Prosser’s own skepticism about
the coherence, wisdom and utility of the privacy torts.67 This
skepticism, in turn, stunted the torts’ growth and prevented them
from evolving to clearly cover more modern privacy concerns.68
Prosser’s primary concern about the reach of these torts was that
they provided “a power of censorship over what the public may be
permitted to read, extending very much beyond that which they have
always had under the law of defamation.”69 In particular, he was
concerned about the privacy torts’ potential to restrict the press.70
Given this concern over restricting the public’s access to truthful
information, Prosser “sought to limit [his privacy torts’] capacity for
growth” and succeeded.71 One specific tactic he used to purposefully
stunt the growth of his own creations was to emphasize the “no
privacy in public” distinction in the torts’ elements and their
suggested applications.
Prosser’s use of the public versus private dichotomy at first
appears to model that of Warren and Brandeis’s first and second
prongs discussed above. In his handbook, Prosser states that none of
the privacy torts restricts or punishes the publication of information
“of public interest of a legitimate character.”72 This statement, like that
of Warren and Brandeis, uses the word public to describe the nature
of the information and perhaps the identity of the person—and not the
66. Richards & Solove, supra note 17, at 1904.
67. Id. at 1906–07.
68. Id.
69. See id. at 1900 (quoting Prosser, Privacy, supra note 63, at 423).
70. See id. at 1897–98 (citing Prosser, Privacy, supra note 63, at 410).
71. See id. at 1905–07. G. Edward White concisely summarized the Prosser-led progression as follows: “A classification made seemingly for convenience (1941) had been expanded and refined (1955), hardened and solidified (1960 and 1964, when the ‘common features’ of privacy were declared), and finally made synonymous with ‘law’ (1971). Prosser’s capacity for synthesis had become a capacity to create doctrine. One began an analysis of tort privacy by stating that it consisted of ‘a complex of four wrongs,’ and implicitly, only those wrongs.” See G. EDWARD WHITE, TORT LAW IN AMERICA: AN INTELLECTUAL HISTORY 176 (expanded ed. 2003) (quoted in Richards & Solove, supra note 17, at 1905).
72. See PROSSER, HANDBOOK, supra note 63, at 1050 (cited in Richards & Solove, supra note 17, at 1897).
physical space in which the information is shared.73 However, Prosser,
in a possible deviation from Warren and Brandeis, effectively
neutralizes other privacy torts by repeatedly using “public” in the
spatial sense. This is most pronounced in the publicity to private facts
tort, which suggests no liability “when the defendant merely gives
further publicity to information about the plaintiff that is already
public” or for “what the plaintiff himself leaves open to the public
eye.”74 Similarly, in the intrusion upon seclusion tort, an invasion of
one’s personal physical area, or its equivalent, is required.75 Implicit in
this element is that there must be some legitimately secluded space in
which the other party is intruding—a private, versus public, space.
Under the Restatement, one only has an intrusion claim if the
intrusion occurs in the home or other traditionally secluded place,
such as a hotel room.76 The spatial part of the private versus public
distinction also is evident in the voluminous cases interpreting the
intrusion and other privacy torts which both preceded and followed
Prosser’s Privacy.77 Ultimately, Prosser’s version of the public versus
73. See supra notes 54–57 and accompanying text (documenting Warren and Brandeis’s approaches to the public versus private distinction).
74. See RESTATEMENT (SECOND) OF TORTS § 652D cmt. b (1977).
75. See id. at § 652B (1977); see also Stien v. Marriott Ownership Resorts, Inc., 944 P.2d 374 (Utah Ct. App. 1997) (rejecting invasion of privacy claim based on video footage of employees later shown at company party).
76. See RESTATEMENT (SECOND) OF TORTS § 652D cmt. b (1977) (suggesting that invasion may be by physical intrusion into hotel room or home or other examination such as of one’s mail, wallet, or bank account); id. at cmt. c (“Nor is there liability for observing him or even taking his photograph while he is walking on the public highway.”); id. at illus. 6–7 (distinguishing drunken behavior on public street from having one’s skirt blown over her head to reveal underwear).
77. See SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 164 (2008) (“U.S. courts recognize intrusion-upon-seclusion tort actions only when a person is at home or in a secluded place. This approach is akin to courts recognizing a harm in surveillance only when it is conducted in private, not in public.”); see also Lyrissa Barnett Lidsky, Prying, Spying and Lying: Intrusive Newsgathering and What the Law Should Do About It, 73 TUL. L. REV. 173, 204 (1998) (“Intrusion is designed to protect an individual’s sphere of privacy, whether spatial or psychological . . . .”). Prosser’s influence on case law is well-documented; thus, the many privacy tort cases need not be further re-examined here. Rather, it is sufficient to note that the private versus public distinction also is evident in the cases interpreting Prosser’s torts. See Richards & Solove, supra note 17, at 1906 (“Based on our familiarity with several hundred privacy tort cases from the 1960s to the present, the overwhelming majority of courts have adopted wholesale the specific language of either the Restatement or Prosser’s other works in defining the privacy torts.”); SOLOVE, UNDERSTANDING PRIVACY 161–62, 164 (reviewing cases).
560
I/S: A JOURNAL OF LAW AND POLICY
[Vol. 7:3
private distinction’s effect on the privacy torts, as reflected in the
Restatement, is as follows: “There is no liability when the defendant
merely gives further publicity to information about the plaintiff which
is already public.”78
3. PUBLIC VERSUS PRIVATE IN FOURTH AMENDMENT CASES
Perhaps the clearest and most familiar application of the public
versus private distinction in the spatial sense is in criminal search and
seizure cases.79 In its simplest form, the Fourth Amendment’s
exclusionary rule blocks the use of evidence obtained during a
warrantless search of an area in which the defendant had both a
subjective expectation of, and an objectively reasonable expectation
of, privacy.80 In determining whether one’s expectation of privacy is
objectively reasonable, the Supreme Court has said that “in the home,
our cases show, all details are intimate details, because the entire area
is held safe from prying government eyes.”81 Thus, one focus of the
reasonableness inquiry is on the location in which the search was
conducted and, more specifically, its proximity to the defendant’s
78. See RESTATEMENT (SECOND) OF TORTS § 652D cmt. c (1977). However, as acknowledged in Part III.D, exposures of certain aspects of a person, even if captured in public, likely involve a different balance of interests and thus are less worthy of protection. See infra notes 272–78 and accompanying text; see RESTATEMENT (SECOND) OF TORTS § 652B cmt. c (1977) ("Even in a public place, however, there may be some matters about the plaintiff, such as his underwear or lack of it, that are not exhibited to the public gaze; and there may still be invasion of privacy when there is intrusion upon these matters.").
79. This emphasis on spatial privacy in search and seizure cases is understandable given that the text of the Fourth Amendment uses the word "houses." U.S. CONST. amend. IV ("The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated . . . .").
80. See Katz v. United States, 389 U.S. 347 (1967). In order to be protected from a warrantless search under the exclusionary rule, one must pass Katz's two-part test: (i) the person must "[first] have exhibited an actual (subjective) expectation of privacy," and (ii) "second, . . . the expectation [must] be one that society is prepared to recognize as 'reasonable.'" Id. at 361 (Harlan, J., concurring). The practical effect of the Katz decision was to require the government to obtain a warrant before wiretapping a phone and recording the content of conversations.
81. See Kyllo v. United States, 533 U.S. 27, 37 (2001); see also United States v. Dunn, 480 U.S. 294, 300 (1987) ("[T]he Fourth Amendment protects the curtilage of a house and that the extent of the curtilage is determined by . . . whether the area harbors the 'intimate activity associated with the sanctity of a man's home and the privacies of life.'") (internal quotations omitted).
2012]
ANDERSON
561
home.82 Even a search of one’s home by electronic means may be
deemed to invade someone’s reasonable expectation of privacy in a
physical space.83 Ultimately, if the government conducts a warrantless
search of a person’s private space—most often, his home—then the
evidence obtained in that search cannot be used because the person
had a reasonable expectation of privacy in such space.84
As the search location moves away from the inside of one’s home,
the objective reasonableness of the privacy expectation becomes more
remote.85 For example, the Supreme Court has endorsed warrantless
searches of one’s property from an aircraft in public air space and of
one’s garbage bags placed at the curb.86 This is because as the search
moves away from a person’s home, it becomes more likely that the
person has voluntarily consented to having the information made
available to others.87
82. See Payton v. New York, 445 U.S. 573, 590 (1980) ("[T]he Fourth Amendment has drawn a firm line at the entrance to the house."). Although it is true that the Katz majority declared that "the Fourth Amendment protects people, not places," Katz, 389 U.S. at 351, "what protection it affords to those people . . . generally . . . requires reference to a 'place.'" Id. at 361 (Harlan, J., concurring). In other words, whether one's expectation of privacy is deemed "reasonable" often depends in large part upon where one was located at the time and how much that place was like one's home. See Dunn, 480 U.S. at 300–01. For a recent and thorough critique of the Supreme Court's focus on the home and one's proximity thereto in determining the scope of one's Fourth Amendment rights, see Stephanie M. Stern, The Inviolate Home: Housing Exceptionalism in the Fourth Amendment, 95 CORNELL L. REV. 905 (2010).
83. See, e.g., Kyllo, 533 U.S. at 34–35 (finding that authorities' warrantless use of heat-sensing technology not in general public use to obtain information about the inside of defendant's home invaded his reasonable expectation of privacy).
84. However, even items within one's home or office are not necessarily protected if one exposes them to the public. See California v. Ciraolo, 476 U.S. 207, 213 (1986) (quoting Katz, 389 U.S. at 351).
85. The four factors used to determine whether a space falls within a home's "curtilage," and thus is entitled to heightened Fourth Amendment protection, are: "(1) the proximity of the area to the home; (2) whether the area is within an enclosure surrounding the home; (3) the nature and uses to which the area is put; and (4) the steps taken by the resident to protect the area from observation by passersby." Dunn, 480 U.S. at 301 (finding that barn was not part of home's curtilage in part because it was not used for intimate activities).
86. See Florida v. Riley, 488 U.S. 445 (1989) (police examination of partially-open greenhouse from aircraft in navigable airspace was not a search that violated defendant's reasonable expectation of privacy); California v. Greenwood, 486 U.S. 35, 36–37 (1988).
The Supreme Court also has stated that no one has a reasonable expectation of privacy in something done in "plain view" or in an "open field."88 Similarly, under the third party doctrine, sharing information with a third party defeats later claims to privacy in that information.89 For example, one has no reasonable expectation of privacy in an email after it is delivered to its intended recipient.90 Nor does one have a reasonable expectation of privacy in the numbers one dials by phone, given that those numbers were shared with someone else—the phone company.91 Thus, if the conduct or information was shared in a public or quasi-public place, or even with someone else in a private place, then it most likely could be collected and used against a defendant because he has no reasonable expectation of privacy in something shared in public. Together, these limitations on one's reasonable expectation of privacy in the warrantless search context
87. See Greenwood, 486 U.S. at 39 (reasoning that defendants had no expectation of privacy in their garbage bags placed at the curb in part because such bags "left on or at the side of a public street are readily accessible to animals, children, scavengers, snoops and other members of the public."). Most recently, the Supreme Court considered whether it should carve out an exception to the "no privacy in public" rule for warrantless constant monitoring of a suspect's vehicle via use of a GPS tracking device attached by the government. See United States v. Jones, 132 S. Ct. 945 (2012). Although the space invaded was the suspect's car versus his home, the Court concluded that the search violated the Fourth Amendment because it involved a trespass-like intrusion of a constitutionally-protected area. Id. at 950. The Court's decision in Jones likely supports my thesis because the Court's analysis focuses on the spatial location of the search and that location's proximity to a traditionally-private space. Id. at 950 n.3 ("Where, as here, the Government obtains information by physically intruding on a constitutionally protected area, such a search has undoubtedly occurred.").
88. Kyllo, 533 U.S. at 42 (citations omitted) ("[I]t is . . . well settled that searches and seizures of property in plain view are presumptively reasonable."); Oliver v. United States, 466 U.S. 170, 176 (1984) ("[The] special protection accorded by the Fourth Amendment to the people in their 'persons, houses, papers, and effects' is not extended to the open fields. The distinction between the latter and the house is as old as the common law." (citing Hester v. United States, 265 U.S. 57, 59 (1924))).
89. See Smith v. Maryland, 442 U.S. 735, 743–44 (1979) ("[A] person has no legitimate expectation of privacy in information he voluntarily turns over to third parties."); see generally Orin S. Kerr, Defending the Third Party Doctrine, 24 BERKELEY TECH. L.J. 1229 (2009) (summarizing and defending doctrine); see also SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 139–40 (critiquing third party doctrine).
90. See Rehberg v. Paulk, 598 F.3d 1268, 1281–82 (11th Cir. 2010) ("A person also loses a reasonable expectation of privacy in emails, at least after the email is sent to and received by a third party.") (citations omitted).
91. See Smith, 442 U.S. at 743–44 (concluding that the use of "pen registers" to collect the numbers dialed by phone did not require a warrant because the person using the phone "voluntarily conveyed numerical information to the telephone company").
have led some scholars to equate privacy under the Fourth
Amendment with “total secrecy.”92
III. CALLS FOR A RIGHT TO OBSCURITY EXCEPTION TO THE “NO
PRIVACY IN PUBLIC” RULE
Synthesizing the “no privacy in public” rule from all of the above
summarized sources, Daniel Solove insightfully stated as follows:
“[A]ccording to the prevailing view of the law, if you’re in public,
you’re exposing what you’re doing to others, and it can’t be private.”93
This prospect of constant potential for exposure has led Solove and
others to critique the “no privacy in public” rule as a “binary
understanding of privacy” that is both antiquated and inadequate.94
The rule purportedly is antiquated because of recent technological
developments regarding the collection, distribution, and indexing of
information that threaten obscurity in ways not anticipated when the
rule first emerged.95 Next, the rule is inadequate, practically speaking,
92. See Daniel J. Solove, Conceptualizing Privacy, 90 CALIF. L. REV. 1087, 1107 (2002) ("In a variety of legal contexts . . . [p]rivacy is thus viewed as coextensive with the total secrecy of information."); William J. Stuntz, Privacy's Problem and the Law of Criminal Procedure, 93 MICH. L. REV. 1016, 1021–22 (1995) (reviewing search and seizure cases and concluding that "the [key] question . . . is whether what the police did was likely to capture something secret" and suggesting that "privacy-as-secrecy dominates the case law"); see also Julie E. Cohen, Privacy, Visibility, Transparency, and Exposure, 75 U. CHI. L. REV. 181, 190 (2008) (documenting how "the U.S. legal system purports to recognize an interest in spatial privacy").
93. See SOLOVE, FUTURE OF REPUTATION, supra note 18, at 163; SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 110.
94. See SOLOVE, FUTURE OF REPUTATION, supra note 18, at 7 ("Under existing notions, privacy is often thought of in a binary way—something is either private or public. According to the general rule, if something occurs in a public place, it is not private."); Danielle Keats Citron, Fulfilling Government 2.0's Promise with Robust Privacy Protections, 78 GEO. WASH. L. REV. 822, 827 (2010) ("A public/private binary also may not accord with our lived experiences—individuals routinely carve out zones of privacy in so-called public spaces."); Lipton, supra note 17, at 929–41 (chronicling alleged gaps in current law that fail to protect one's public conduct from capture by handheld cameras and later distribution); SOLOVE, FUTURE OF REPUTATION, supra note 18, at 163 (concluding that the secrecy versus privacy paradigm in the law "has limited the recognition of privacy violations"); SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 111.
because it bars people from preventing or recovering for perceived obscurity harms caused by more tech-savvy exposure.96 These arguments are summarized in Part III.A, "Technology's Alleged Threat to Obscurity."
Given technology's threat to obscurity, scholars have suggested that a new, more nuanced rule is necessary—one that would recognize claims of privacy in public.97 In crafting the proper rule, Daniel Solove suggests that lawmakers conduct a pragmatic balancing of harms and benefits that is better equipped to consider technological developments and their associated privacy harms.98 Others implicitly have endorsed this approach.99 Part III.B more carefully describes this suggested balancing test replacement. Part IV then critiques these Part III arguments as an unnecessary call to protect the mythical right to obscurity.100
95. See, e.g., Cohen, supra note 92, at 191–92 (critiquing emphasis on public visibility when defining privacy invasions given current surveillance infrastructures); Kevin Werbach, Sensors and Sensibilities, 28 CARDOZO L. REV. 2321 (2007) (chronicling perceived threat to privacy in public caused by rising ubiquity and usage of camera phones and other "pervasive sensors"); Erwin Chemerinsky, Rediscovering Brandeis's Right to Privacy, 45 BRANDEIS L.J. 643, 656 (2007) (calling for the Supreme Court to recognize a privacy interest in the reporting of truthful information because "[t]echnology that Warren and Brandeis never could have imagined . . . presents unprecedented risks to informational privacy"); Abril, supra note 45, at 5 ("New technologies have enabled novel social situations that generate privacy harms and concerns that were unforeseeable."); see generally Andrew Lavoie, Note, The Online Zoom Lens: Why Internet Street-Level Mapping Technologies Demand Reconsideration of the Modern-Day Tort Notion of 'Public Privacy', 43 GA. L. REV. 575 (2009).
96. See, e.g., SOLOVE, FUTURE OF REPUTATION, supra note 18, at 166 ("[M]erely assessing whether information is exposed in public or to others can no longer be adequate to determining whether we should protect it as private."); Amy Gajda, Judging Journalism: The Turn Toward Privacy and Judicial Regulation of the Press, 97 CALIF. L. REV. 1039, 1041 (2009) (demonstrating how "growing anxiety about the loss of personal privacy in contemporary society has given new weight to claims of injury from unwanted public exposure") (citing ANITA ALLEN, PRIVACY LAW AND SOCIETY 6 (2007)); Cohen, supra note 92, at 191 (concluding that "prevailing legal understandings of spatial privacy" are inadequate because they do not recognize the spatially harmful alteration of "the spaces and places of everyday life").
97. SOLOVE, FUTURE OF REPUTATION, supra note 18, at 7, 161 (suggesting a need for "a more nuanced view of privacy"); Chemerinsky, supra note 95, at 656 (suggesting that it is "time to rediscover Warren and Brandeis's right to privacy" because of "unprecedented risks to informational privacy"); Abril, supra note 45, at 4–6 (criticizing courts' "reliance on physical space" as one of the "linchpins" in privacy because "physical space . . . no longer is relevant in analyzing many modern online privacy harms"); Lipton, supra note 17.
98. See infra notes 118–30 and accompanying text.
99. See, e.g., Lipton, supra note 17, at 943 (summarizing Solove's harm balancing approach and concluding that it "may be the right approach—at least for the present time").
100. See infra notes 172–269 and accompanying text.
A. TECHNOLOGY’S ALLEGED THREAT TO OBSCURITY
In The Future of Reputation, Daniel Solove declares that “modern
technology poses a severe challenge to the traditional binary
understanding of privacy.”101 For Solove and like-minded
commentators, the “no privacy in public” rule simply is inadequate in
today's tech-savvy information marketplace.102 Most concerning is the sinister three-sided combination of collection, distribution, and
indexing technologies.103 In the Obscurity Problem scenario, we
supposedly are wielding these technologies against ourselves and our
fellow citizens.104 If we do not do something about the Obscurity
101. SOLOVE, FUTURE OF REPUTATION, supra note 18, at 163.
102. Id. at 166 (opining that because of technological advances, "merely assessing whether information is exposed in public or to others can no longer be adequate to determining whether we should protect it as private"); Lipton, supra note 17, at 930–33 (arguing that current tort laws are inadequate to protect personal privacy interest in conduct shared in public given new technologies such as Facebook, MySpace, YouTube and Flickr); Julie E. Cohen, Examined Lives: Informational Privacy and the Subject as Object, 52 STAN. L. REV. 1373, 1379 (2000) (arguing that private means both "not public" and "not common-owned"); Werbach, supra note 95, at 2324–29 (noting threats from various new camera types); see also SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 4–5 (collecting late 20th century scholars' statements regarding technology's alleged threat to privacy).
103. SOLOVE, FUTURE OF REPUTATION, supra note 18, at 17 ("Often, technology is involved in various privacy problems because it facilitates the gathering, processing and dissemination of information."); Paul M. Schwartz, Privacy and Democracy in Cyberspace, 52 VAND. L. REV. 1607, 1621–41 (1999) (describing supposed "privacy horror show" in cyberspace that "result[s] from the generation, storage and transmission of personal data"); Jacqueline D. Lipton, Digital Multi-Media and the Limits of Privacy Law, 42 CASE W. RES. J. INT'L L. 551, 552 (2010) (stating that "the ability of new digital devices such as cell phone cameras to transmit information wirelessly and globally raises important new challenges for privacy laws"); Cohen, supra note 102, at 1374 (suggesting new informational privacy concerns given "rise of a networked society" that facilitates the rapid search, distribution and connection of one's personal information).
104. SOLOVE, FUTURE OF REPUTATION, supra note 18, at 164 (describing how "[a]rmed with cell phone cameras, everyday people can snap up images, becoming amateur paparazzi"); Lipton, supra note 17 (collecting privacy concerns associated with private citizens' reporting of public conduct by fellow citizens); Marcy Peek, The Observer and the Observed: Re-imagining Privacy Dichotomies in Information Privacy Law, 8 NW. J. TECH. & INTELL. PROP. 51 (2009) (describing how the fact that "we are all watching each other" alters the privacy analysis). While Solove and Lipton use the term "paparazzi" to describe citizens who use technology to collect and report on their fellow citizens, I prefer the less pejorative term, "citizen journalists." See supra notes 41–42 and accompanying text. As discussed in Part IV, I suggest that before we vilify ourselves further, we need to more thoroughly consider the benefits of paparazzi- or citizen-led exposure in the aggregate. See infra notes 215–67 and accompanying text.
Problem, “[w]e’re heading toward a world where an extensive trail of
information fragments about us will be forever preserved on the
Internet, displayed instantly in a Google search.”105 In this doomsday
scenario, “[w]e will be forced to live with a detailed record beginning
with childhood that will stay with us for life wherever we go,
searchable and accessible from anywhere in the world.”106 In other
words, the concern is for the person’s sudden and perhaps permanent
loss of obscurity.
Victims of the antiquated "no privacy in public" rule are those who
shared information or conduct in public with a small group of
individuals, only to have that information exposed and exploited by
technology.107 People who have had their obscurity taken away from
them via mass Internet-distribution of things they did or said in public
include:

• Dog Poop Girl, who refused to clean up her pet dog's excrement on a subway train in South Korea;108

• Numa Numa Guy, who lip-synched and waved his arms in front of his webcam to the tune of a Moldovan pop song;109
105. SOLOVE, FUTURE OF REPUTATION, supra note 18, at 17; see also Peek, supra note 104, at 56–57 (describing how "the aggregation of 'observing' technology enables the private and public sector to create a near perfect picture of the full spectrum of a person's day and a person's life").
106. SOLOVE, FUTURE OF REPUTATION, supra note 18, at 7 (differentiating online exposures because they involve "taking an event that occurred in one context and significantly altering its nature—by making it permanent and widespread"); DANIEL J. SOLOVE, THE DIGITAL PERSON 1–7 (2004) (discussing threat of "digital dossiers"); Lipton, supra note 17, at 927 (suggesting that new privacy concerns are triggered now that the Internet "makes the dissemination of video . . . practically instantaneous and potentially global in scope").
107. SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 96 (suggesting that "[d]isclosing people's secrets . . . often affects a few unfortunate individuals" whose "lives are ruined for very little corresponding change in social norms").
108. SOLOVE, FUTURE OF REPUTATION, supra note 18, at 1–4; Lipton, supra note 17, at 921.
109. SOLOVE, FUTURE OF REPUTATION, supra note 18, at 42. Numa Numa Guy, also known as Gary Brolsma, later went on to make more videos, which he posted online; one featured "Star Wars Kid" in a cameo appearance.
• Star Wars Kid, who performed a series of "Jedi" physical maneuvers, wielding a golf-ball retrieving stick as a lightsaber-like weapon;110

• Laura K., a college student who propositioned and paid a part-time blogger/actor to write a paper for her regarding Hinduism;111

• Jonas Blank, a law firm summer associate whose profanity-laced email came across as a mockery of his employers and work;112

• Robert, whose sexual exploits and difficulties were blogged about by his partner, Jessica Cutler, also known as "Washingtonienne";113

• Geoffrey Peck, whose attempt to cut his wrists was captured by a surveillance camera;114
110. Id. at 44–48. The original less-than-two-minute video of Star Wars Kid was posted to the Internet by a high-school acquaintance of its star without the Kid's express consent. Thus, there is some question regarding whether his performance was done "in public," and, as a result, falls outside the scope of the "no privacy in public" rule. Someone else later edited the video to include Star Wars music and visual effects, which increased the original video's popularity.
111. Id. at 76–78; Scott Jaschik, Busted for a Bogus Paper, INSIDE HIGHER ED (Mar. 31, 2005, 4:00 AM), http://www.insidehighered.com/news/2005/03/31/plagiarize.
112. See SOLOVE, FUTURE OF REPUTATION, supra note 18, at 29–30 (citing Ben McGrath, Oops, THE NEW YORKER, June 30, 2003).
113. SOLOVE, FUTURE OF REPUTATION, supra note 18, at 134–35; West, Story of Us, infra note 257, at 597–600.
114. See SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 195. Initially, an English court found no privacy violation in this case "because the plaintiff's 'actions were already in the public domain,' and revealing the footage 'simply distributed a public event to a wider public.'" Id. However, the European Court of Human Rights disagreed, concluding that there was a privacy violation because the person was not a public figure, while emphasizing "the [systemic] recording of the data" and the "permanent nature of the record." Peck v. United Kingdom, HUDOC (April 4, 2003), available at http://cmiskp.echr.coe.int/tkp197/view.asp?item=2&portal=hbkm&action=html&highlight=peck&sessionid=88009156&skin=hudoc-en.
• Todd, whose allegedly bad dating behavior was shared on the "don't date him girl" website;115

• Michael, who wrote about his time in juvenile detention without considering that such information would be available to later acquaintances via a Google search of his name.116
The emerging consensus appears to be that the above-listed people
were victimized via exposure that went beyond what they expected at
the time that they shared the information with at least one other
person.117 Concern over this widespread exposure has led some
scholars to call for legal protections for “information that has been
shared to a few others, but is still not generally known.”118 The concern
is not about the person’s privacy, necessarily, because the information
was shared with others, sometimes in a public place. Rather, the
concern is for the exposed person’s loss of obscurity.119 Ultimately, the
115. See SOLOVE, FUTURE OF REPUTATION, supra note 18, at 121.
116. See Solove, supra note 1, at 1055.
117. See SOLOVE, FUTURE OF REPUTATION, supra note 18, at 170 ("Privacy can be violated not just by revealing previously concealed secrets, but by increasing the accessibility to information already available."); Chemerinsky, supra note 95, at 656 (suggesting additional privacy protections are needed given that "the Internet makes [personal information] potentially available to many"); Lipton, supra note 17, at 927 (arguing that current exposures are more harmful because the coupling of cameras and the Internet "makes the dissemination of video . . . practically instantaneous and potentially global in scope").
118. Richards, supra note 61, at 51 (criticizing Warren and Brandeis's division between public and private as "the dominant question in privacy law"); see also Cohen, supra note 102, at 1379 (suggesting that some information may no longer be a secret but still worthy of privacy protection because it is not commonly known).
119. See SOLOVE, FUTURE OF REPUTATION, supra note 18, at 7 ("People who act inappropriately might not be able to escape into obscurity anymore; instead, they may be captured in pixels and plastered across the Internet."); Citron, supra note 94, at 835 (endorsing social-media researcher's concern that people "'live by security through obscurity'").
“no privacy in public” rule is labeled obsolete because it does not
protect a person’s perceived right to obscurity.120
B. SUGGESTED PRAGMATIC BALANCING TO PROTECT OBSCURITY
To protect obscurity, some suggest that “[t]he law should begin to
recognize some degree of privacy in public.”121 Defining exactly when
the law should do so has proven difficult;122 however, one recently-crafted approach appears promising. In his latest book,
Understanding Privacy, Daniel Solove constructs a “new
understanding” of privacy, in which he urges us to “understand
privacy as a set of protections against a plurality of distinct but related
problems.”123 The problem-based approach is an offshoot of the more
general theory of pragmatism, which other scholars explicitly and
implicitly endorse.124
120. Wide circulation of information also was part of what motivated Warren and Brandeis's call for a right to privacy. See Warren & Brandeis, supra note 36, at 196 ("Even gossip apparently harmless, when widely and persistently circulated, is potent for evil.").
121. See SOLOVE, FUTURE OF REPUTATION, supra note 18, at 168; Lipton, supra note 17, at 930–33 (suggesting that current tort laws are insufficient to protect privacy from picture-taking and other technological advances); Cohen, supra note 92, at 181 (arguing that a person's privacy interest "is not negated by the fact that people in public spaces expect to be visible to others present in those spaces").
122. See SOLOVE, THE FUTURE OF REPUTATION, supra note 18, at 168 (describing "The Difficulties of Recognizing Privacy in Public"); Warren & Brandeis, supra note 36, at 214 ("To determine in advance of experience the exact line at which the dignity and convenience of the individual must yield to the demands of the public welfare or of private justice would be a difficult task . . . .").
123. SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 171.
124. See, e.g., Strahilevitz, Reunifying, supra note 35, at 2032 (including whether "the gravity of the harm to the plaintiff's privacy interest [is] outweighed by a paramount public policy interest" as a third element in his proposed privacy tort); SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 46–47 (acknowledging pro-pragmatism scholars such as William James, John Dewey, George Herbert Mead, Richard Rorty, Richard Posner, and Cornel West). Although legal scholars generally do not agree as to the proper definition of, or application of, pragmatism and the law, see Michael Sullivan & Daniel Solove, Can Pragmatism Be Radical? Richard Posner and Legal Pragmatism, 113 YALE L.J. 687 (2003) (critiquing Richard Posner's use of pragmatism), for the purposes of this article, pragmatism essentially means analyzing privacy "in specific contextual situations" versus universal absolutisms. SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 47.
Under Solove’s proposed approach, the first step is to identify
whether a potential privacy problem exists.125 After identifying the
problem, the next step is to analyze the different types of harms
created by the privacy problem.126 When tabulating the harms, we are
to consider them in the aggregate versus with respect to only a single
person, i.e., we are to calculate the “value of privacy . . . in terms of its
contribution to society” as well as to the individual exposed.127 After
identifying the privacy problem and valuing the harm caused, we are
to “assess . . . the value of any conflicting interests.”128 Finally, we
perform a balancing to “determine which prevails.”129 If the harms to
privacy are greater than the value130 of the conflicting interests, the
law should protect them at the expense of those interests.131 On the
other hand, if the conflicting interests are greater, then they should
win at the expense of privacy, and no reform is necessary.132 The latter
is the position of this Article, with respect to citizen journalists’
125. SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 189 (suggesting that a problem arises when, aided by technology, people, businesses, and governments engage in "activities that disrupt other activities that we value and thus create a problem").
126. SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 174–79 (noting that categories of harm include physical injuries, financial losses and property harms, reputational harms, emotional and psychological harms, relationship harms, vulnerability harms, chilling effects, and power imbalances).
127. Id. at 173–74 ("[W]hen privacy protects the individual, it does so because it is in society's best interest. Individual liberties should be justified in terms of their social contribution.").
128. Id. at 183.
129. Id.
130. Id. at 10 ("The value of privacy in a particular context depends upon the social importance of the activities that it facilitates.").
131. SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 183; id. at 185 (acknowledging that the value assigned to harms will vary across cultures, even among supposedly similar groups—Americans and Europeans); id. at 179 (noting that in those cases, public policy then should "effectively redress the harms caused by the problems" via "individual enforcement mechanisms"); id. at 181 (further suggesting that the law should permit recovery of "true liquidated damages without requiring proof of any specific individual harm").
132. See, e.g., SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 48 ("Few would contend that when a crime victim tells the police about the perpetrator, it violates the criminal's privacy."); see also id. at 189 ("The way to address privacy problems is to reconcile conflicts between activities.").
2012]
ANDERSON
571
exposure of statements or actions shared in public, i.e., the Obscurity
Problem.133
IV. HOW THE BENEFITS OF EXPOSURE OUTWEIGH OBSCURITY HARMS
This Part applies the above-described pragmatic balancing test to
the Obscurity Problem.134 The Obscurity Problem category
encompasses situations such as Dog Poop Girl and Robert, listed in
Part II.A, and Officer Walsh, as shared in the Introduction.135
Presently, many scholars contend that affording some “privacy in
public” to these and similar individuals is desirable in order to
eliminate, or at least minimize, the Obscurity Problem.136 Essentially,
they appear to argue that sometimes society’s interest in keeping
truthful information within an individual’s own sole control is greater
than the public’s interest in knowing, possessing, and using that same
truthful information, even if the information initially was shared in
public.
To test these scholars’ hypothesis regarding the Obscurity
Problem, the pragmatic balancing approach described in Part II.B
requires us to catalog the societal harms triggered by the Obscurity
Problem and then compare those harms to the positive benefits of the
associated exposure.137 Toward this end, Part III.A documents the
alleged privacy-related harms of the Obscurity Problem, and shows
133 See SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 121–26; see also supra notes 41–42 and accompanying text (defining the “Obscurity Problem”).

134 See SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 121–26; see also supra notes 41–42 and accompanying text (defining the “Obscurity Problem”).

135 See supra notes 2–8 and accompanying text; see also SOLOVE, FUTURE OF REPUTATION, supra note 18, at 106–10; see also Peek, supra note 104, at 56–57; see SOLOVE, FUTURE OF REPUTATION, supra note 18, at 44–48.

136 See, e.g., Solove, supra note 92, at 1109–16 (collecting scholarship regarding privacy as “control over personal information” and acknowledging that some information falls outside the scope of “what society deems appropriate to protect”); Lipton, supra note 17, at 930–33; Cohen, supra note 92, at 181.

137 SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 187 (“Protecting privacy requires careful balancing because neither privacy nor its countervailing interests are absolute values.”); SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 84 (“As I have sought to define it, privacy involves protection against a plurality of kinds of problems. Articulating the value of privacy consists of describing the benefits of protecting against these problems.”).
how they may have been overstated.138 Part III.B then documents the
benefits of exposure in this type of situation, and shows how they have
likely been overlooked or undervalued.139 In Part III.C, I suggest that
overstating the harms of the Obscurity Problem and understating the
benefits of exposure has resulted in an unnecessary call for changing the “no
privacy in public” rule in all cases other than those special cases
discussed in Part III.D.140 I later conclude that, in the clash between
citizen exposure of public conduct versus obscurity, exposure should
win.141
A. HARMS FROM THE OBSCURITY PROBLEM
Each privacy problem involves a combination of harms.142 The
Obscurity Problem reportedly triggers two key harms. First, the
Obscurity Problem harms the exposed person’s emotional and
psychological well-being.143 I call this the “dignity” harm.144 For
example, when Dog Poop Girl found out that her picture and criticism
of her conduct were posted all over the Internet, she felt undignified
and attacked.145 Robert, whose sexual preferences and proclivities
were shared in a blog post, experienced similar kinds of emotional
138 See infra notes 139–215 and accompanying text.

139 See infra notes 216–67 and accompanying text. As noted below, one scholar who has begun demonstrating the many societal benefits of more truthful information being available about everyone is Lior Jacob Strahilevitz. See generally Strahilevitz, Reputation Nation, supra note 33; but see Strahilevitz, Reputation Nation, supra note 33, at 1677 (“More information is not always better. Nor is it always worse.”).

140 See infra notes 268–78 and accompanying text.

141 See infra notes 279–83 and accompanying text.

142 See SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 173 (showing that privacy’s value “varies across different contexts depending upon which form of privacy is involved and what range of activities are imperiled by a particular problem”).
143 See id. at 175–76.

144 Solove also discusses the reputational harm; however, defamation law generally recognizes a reputational harm only from false or misleading information. The Obscurity Problem, in contrast, only involves exposure of truthful statements or actions.

145 The harm to dignity also may lead to secondary harms of changed choices and paths in life, such as when Dog Poop Girl quit her job. Lipton, supra note 17, at 921 (citing JONATHAN ZITTRAIN, THE FUTURE OF THE INTERNET—AND HOW TO STOP IT 211 (2008)).
harm.146 Second, an Obscurity Problem, or even just the threat of one,
allegedly harms one’s relationship security,147 which in turn leads to
speech-related chilling effects.148 I call this the “thinking space” harm.
For example, knowing that one’s thoughts may be recorded and later
exposed to the world, as happened with now-Justice Sotomayor’s
“wise Latina” speech, may discourage people from speaking their
minds and prevent the promotion of good ideas.149 In sum, the
Obscurity Problem is perceived as harmful because it harms one’s
personal emotions and chills one’s intellectual discourse, both of
which threaten society as a whole.150 Each of these two categories of
harm is discussed below in more detail.
1. HARM TO DIGNITY
As detailed above, scholars have argued that two particular aspects
of the Obscurity Problem tend to cause harm to one’s psyche—the
sharing of information with many more people than originally
anticipated and the ease of access to that information over a long
period of time.151 The target of the exposure is subjected to “unwanted
146 See supra note 110 and accompanying text.

147 SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 176–77.

148 Id. at 178. There “may be a widespread chilling effect when people are generally aware of the possibility of surveillance but are never sure if they are watched at any particular moment.” Id. at 109.

149 See Charlie Savage, A Judge’s View of Judging Is on the Record, N.Y. TIMES, May 15, 2009, at A21 (describing video recording of a speech by then-Judge Sotomayor in which she stated her “hope that a wise Latina woman with the richness of her experiences would more often than not reach a better conclusion than a white male who hasn’t lived that life” and documenting the associated uproar upon release of the video following her nomination to the Supreme Court).

150 Interestingly, these harms appear to dovetail with the harms that most concerned Warren and Brandeis, i.e., specific harm to the “feelings and personalities of individuals” and general harms to “the level of public discourse in the press” and a “lowering of social standards and morality.” Warren & Brandeis, supra note 36, at 196; see also Richards, supra note 61, at 1302–03.
151 SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 145–46 (“The harm of disclosure is not so much the elimination of secrecy as it is the spreading of information beyond expected boundaries. People often disclose information to a limited circle of friends, and they expect the information to stay within this group.”); Cohen, supra note 102, at 1379 (suggesting difference between publicly known and commonly known information); Chemerinsky, supra note 95, at 656 (arguing that Internet exposure is different due to larger audience); Lipton, supra note 17, at 927 (arguing that current exposures are more harmful because of publication that is “global in scope”). Sometimes this “increased accessibility” is described as a harm in and of itself. SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 151 (describing the “harm of increased accessibility”).

notoriety”152 which “can result in embarrassment, humiliation, stigma
and reputational harm.”153 These feelings generally could be
characterized as a loss of dignity.154

The mere possibility of subsequent over-distribution also
purportedly affects one’s emotions, causing “feelings of anxiety and
discomfort.”155 Similarly, the possibility of information collection,
even if not actually conducted, leads to a perceived loss of solitude.156
The fear that a person’s personal zone of public space may be intruded
upon “makes her feel uncomfortable and uneasy.”157 Perhaps some of
these anxious feelings are due to the purportedly permanent nature of
the information on the Internet.158 This potential for reputation-staining
reportedly “makes a person [feel like] a ‘prisoner of his
recorded past,’”159 and sacrifices the exposed person’s opportunity at a
second chance.160 Ultimately, these threats may cause one to feel like
she has lost control over her entire self161 and give others an
inaccurate162 picture of her.

152 See SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 157 (quoting Roberson v. Rochester Folding Box Co., 64 N.E. 442, 449 (N.Y. 1902) (Gray, J., dissenting)).

153 Id. at 160; see id. at 147 (opining that exposure “often creates embarrassment and humiliation” especially if it involves “information about our bodies and health” or information society deems “animal-like or disgusting”).

154 See id. at 148 (“We protect against the exposure of these bodily aspects because this protection safeguards human dignity as defined by modern society. Dignity is a part of being civilized; it involves the ability to transcend one’s animal nature.”) (citation omitted); see also Warren & Brandeis, supra note 36, at 196 (suggesting that “modern enterprise and invention have, through invasions upon his privacy, subjected [man] to mental pain and distress, far greater than could be inflicted by mere bodily injury”).

155 SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 107–08 (“[P]eople expect to be looked at when they ride the bus or subway, but persistent gawking can create feelings of anxiety and discomfort.”).

156 Id. at 163 (“[I]ntrusion can cause harm even if no information is involved [because] intrusion often interferes with solitude—the state of being alone or able to retreat from the presence of others.”).

157 Id. at 162; see Warren & Brandeis, supra note 36, at 195 (defining right “to be let alone”) (citation omitted); Olmstead v. U.S., 277 U.S. 438, 478 (1928) (Brandeis, J., dissenting) (suggesting that “right to be let alone” is “the most comprehensive of rights and the right most valued by civilized men”).

158 The Internet and related technologies reportedly escalate the harm of the exposure because they “transform gossip into a widespread and permanent stain.” SOLOVE, FUTURE OF REPUTATION, supra note 18, at 181; see also Solove, supra note 1, at 969 (“Without warning, anyone can broadcast another’s unguarded moments . . . of youthful awkwardness to an audience of millions” and, via Internet archives, “ensure that embarrassing material follows a victim for life.”).
2. HARM TO THINKING SPACE
The second category of harm the Obscurity Problem allegedly
causes involves more than realized or feared internal feelings of lost
dignity; rather, it involves changed behavior of individuals, which, in
turn, causes harm to society as a whole. In particular, the failure to
protect people from the Obscurity Problem reportedly threatens to
harm their expressive activities and the personal relationships that
foster such activities.163 Depriving someone of “breathing space” in
159 SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 145 (citation omitted).

160 Solove, supra note 1, at 1053. “One of the values of protecting privacy is facilitating growth and reformation.” Id. at 1054 (acknowledging that “[e]veryone has done things and regretted them later” and suggesting that “[t]here is a great value in allowing individuals the opportunity to wipe the slate clean” in order to “further society's interest in providing people with incentives and room to change and grow”).

161 SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 153 (concluding that “the more people know about us, the more they can exercise control over us”); SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 154 (warning that “[t]hreatening to disseminate information can achieve levels of domination that may not be socially beneficial”).

162 Many claim that this kind of exposure is inaccurate because it is incomplete. Solove, supra note 1, at 1035–36 (endorsing Jeffrey Rosen’s observation that “[p]rivacy protects us from being misdefined and judged out of context in a world of short attention spans, a world in which information can easily be confused with knowledge”); Lipton, supra note 17, at 927–29 (discussing harmful misinterpretation that occurs when one’s image is taken out of context); JEFFREY ROSEN, THE UNWANTED GAZE 9 (2000) (“[W]hen intimate information is removed from its original context and revealed to strangers, we are vulnerable to being misjudged on the basis of our most embarrassing . . . tastes and preferences.”).

163 SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 78 (“Privacy problems impede certain activities, and the value of privacy emerges from the value of preserving these activities.”); ROSEN, supra note 162, at 8 (“In order to flourish, the intimate relationships on which true knowledge of another person depends need space as well as time: sanctuaries from the gaze of the crowd in which slow mutual self-disclosure is possible.”) (citing SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 79–80).
public, it is argued, also deprives him of his “freedom of thought,”164
whether alone or with others,165 which also triggers First Amendment
freedom of association concerns.166
With respect to the need for individual “thinking space,” scholars
have stated that obscurity “preserve[s] space for new ideas to
develop.”167 Without some privacy in public, people will lose the
“moments for intellectual and spiritual contemplation” that privacy in
public provides.168 This type of harm occurs regardless of whether the
exposure actually happens.169 The actual or threatened loss of
obscurity chills not only political but also creative expression because
people wish to avoid having their ideas “prematurely leaked to the
world, where harsh judgments might crush them.”170 A sense of
private space in public also reportedly is crucial to intellectual
relationships with others, especially political ones. Specifically, “public
surveillance can have chilling effects that make people less likely to
164 Richards, supra note 61, at 1324–25; SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 169.

165 This argument appears supported by an emerging scholarly effort to draw a line around, and better protect, “intellectual privacy.” See generally Neil M. Richards, Intellectual Privacy, 87 TEX. L. REV. 387 (2008). To define when intellectual privacy interests are at stake, one asks “whether the information being sought is relevant to the activities of thinking, reading and discussion safeguarded by the First Amendment.” If the answer is “yes,” then a higher level of protection should result. Richards, supra note 61, at 1349.

166 See, e.g., Katherine J. Strandburg, Freedom of Association in a Networked World: First Amendment Regulation of Relational Surveillance, 49 B.C. L. REV. 741 (2008).

167 Richards, supra note 61, at 46; Richards, supra note 165, at 412–16 (arguing that spatial privacy is necessary for thought development).

168 SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 79 (citing material from Joseph Bensman and Arnold Simmel); Cohen, supra note 102, at 1377 (“We must carve out protected zones of personal autonomy, so that productive expression and development can have room to flourish.”).

169 SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 143 (opining that the mere “risk of disclosure can prevent people from engaging in activities that further their own self-development” and “inhibit people from associating with others”); Solove, Conceptualizing Privacy, supra note 92, at 1121 (quoting Gerstein for the principle that “intimate relationships simply could not exist if we did not [have] privacy for them”); Richards, supra note 165, at 403 (“In a world of widespread public and private scrutiny, novel but unpopular ideas would have little room to breathe . . . and original ideas would have no refuge in which to develop.”).

170 SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 80.
associate with certain groups, attend rallies, or speak at meetings.”171
Privacy and obscurity allegedly “underwrite[] the freedom to vote, to
hold political discussions, and to associate freely away from the glare
of the public and without fear of reprisal.”172 Further, whether
individually or in groups, protection against disclosure “also facilitates
the reading and consumption of ideas.”173 This is because disclosure,
coupled with the perceived permanency of the Internet “attaches
informational baggage to people” so that a thought exposed once will
be associated with that person permanently.174 Ultimately, as “a tool of
social control,” the Obscurity Problem purportedly “cause[s] [a]
person to alter her behavior [through] self-censorship and inhibition,”
thus reducing the number of helpful activities for society.175
3. HOW THE OBSCURITY PROBLEM HARMS ARE
OVERSTATED VIA ASSUMPTIONS
Simply, and perhaps harshly stated, the primary alleged harms of
an Obscurity Problem are that: (i) it makes the exposed person feel
bad about himself, sometimes for a very long time, and (ii) it makes
some people think twice before they share their thoughts or actions
with others in public. Both harms are based on certain questionable
assumptions, each of which is challenged below.
171 Id. at 112.

172 Solove, supra note 1, at 993 (quoting OWEN M. FISS, THE IRONY OF FREE SPEECH 3 (1996)); Solove, supra note 1, at 993–94 (concluding that “without privacy, people might not communicate many ideas” and that the threat of disclosure “probably will not end all conversations, but it will alter what is said”); Richards, supra note 61, at 1341–42 (suggesting that in order to “govern themselves,” citizens need “space [free] from state scrutiny of their ‘beliefs, thoughts, and emotions’”).

173 See Solove, supra note 1, at 992 (citing Julie E. Cohen, A Right to Read Anonymously: A Closer Look at “Copyright Management” in Cyberspace, 28 CONN. L. REV. 981, 1012–13 (1996)).

174 SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 124.

175 Id. at 108.
a. Refuting Assumptions that the Dignity Harm is Permanent,
Irrefutable, and Quantifiable
The first harm most often associated with the Obscurity Problem
is a loss of dignity or other emotional harm felt by the person exposed
and by persons who fear such exposure.176 I do not question that some
people subjected to the Obscurity Problem suffer some emotional
harm that causes them sincere pain.177 However, I do question
whether this harm, as some scholars appear to have assumed or
argued, is permanent, irrefutable, and quantifiable.
The first assumption that potentially exaggerates the dignity harm
is that such harm is permanent.178 As noted above, some scholars
claim that the Obscurity Problem makes one a “prisoner of his
recorded past,” suggesting that once one is exposed, one permanently
drags around the exposed information like a tattoo or a ball and
chain.179 However, a cursory “where are they now”-style Internet
search for the most oft-cited victims of the Obscurity Problem180
reveals that many were not damaged as much as one initially would
think. Others who initially
were harmed have returned to or risen to positions objectively better
than before the exposure. For example, “Star Wars Kid,” now a young
adult, serves as President of a conservation society while studying for
176 See supra notes 141–59 and accompanying text (discussing the dignity harm and acknowledging that it includes other related harms such as increased anxiety); see M. Ryan Calo, The Boundaries of Privacy Harm, 86 IND. L.J. 1131, 1145 (2011) (documenting examples of dignity harm under the label of “subjective harm”).

177 See supra notes 141–59 and accompanying text.

178 SOLOVE, FUTURE OF REPUTATION, supra note 18, at 94 (“One of the chief drawbacks of Internet shaming is the permanence of its effects. . . . Being shamed in cyberspace is akin to being marked for life.”). In part, this argument appears to suggest that we should be more afraid of information when it is more permanent, a suggestion that also could support limiting hardcover books because they are more durable and longer-lasting than paperbacks.

179 SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 145; see SOLOVE, FUTURE OF REPUTATION, supra note 18, at 94 (suggesting that being exposed for public conduct via the Internet is “similar to being forced to wear a digital scarlet letter or being branded or tattooed”).

180 See supra notes 104–14 and accompanying text.
his law degree at McGill.181 Jonas Blank, the law firm associate whose
profanely critical email traveled around the world, was hired full-time
by the same top law firm he criticized.182 Countless others had their
fifteen minutes of undignified fame fade even before, or shortly after,
rising to the pitied status of an “example used in a privacy scholar’s
work.”183 Further, the average shelf life of any documented dignity
harm likely will shrink even more rapidly as more people are exposed,
i.e., as the democratization of exposure expands.184 This is because the
increased amount of information available about individuals, and the
ever-decreasing window of time during which a single piece of
information remains in the public’s collective interest, make any one
exposure less and less noticeable.185 In sum, as more information
about more people is made available in a shorter and shorter news
cycle, the staying power of any one exposure is limited and the
supposed permanence of the dignity harm grows ever more
questionable.186
181 See Alex Pasternack, After Lawsuits and Therapy, Star Wars Kid is Back, MOTHERBOARD (June 1, 2010), http://motherboard.vice.com/2010/6/1/after-lawsuits-and-therapy-star-wars-kid-is-back.

182 The law firm, Skadden, Arps, Slate, Meagher & Flom LLP, offered Mr. Blank a full-time position with the firm, which he accepted. He now works at a different New York City law firm. See David Lat, Our Favorite Skadden Associate Moves On, ABOVE THE LAW (Mar. 30, 2007, 2:36 PM), http://abovethelaw.com/2007/03/our-favorite-skadden-associate-moves-on.

183 See Schwartz, supra note 7, at 1444–45 (discussing how Solove’s depictions of exposed individuals perhaps perpetuate the very harm he identifies as troubling).

184 See Aaron Perzanowski, Comment, Relative Access to Corrective Speech: A New Test for Requiring Actual Malice, 94 CALIF. L. REV. 833, 833–34 (2006) (referencing “the democratization of the means of mass communication spurred by modern technology” as reason to question the public figure doctrine in defamation law).

185 See Gloria Franke, Note, The Right of Publicity vs. The First Amendment: Will One Test Ever Capture the Starring Role?, 79 S. CAL. L. REV. 945, 989 (noting “democratization of celebrity” and the power of the Internet to “empower[] the formerly voiceless”) (citation omitted).

186 I suspect that some exposed persons, like the public at large, subjectively do not experience the harm others project upon them. See, e.g., Eric Goldman, The Privacy Hoax, FORBES, Oct. 14, 2002, at 42, available at http://www.forbes.com/forbes/2002/1014/042.html (discussing how consumer behavior regarding privacy demonstrates a lack of sincere interest in personal privacy and, on that basis, questioning the need for additional government regulation).
The dignity harm also may be overstated because it appears based,
at least in part, on the assumption that the victim of the harm cannot
reduce the harm by responding in her defense. After an exposure,
some serious emotional damage may have been done; however, that
does not mean that it cannot be mitigated. Every alleged victim of an
exposure has the opportunity to post a reply, and many such replies
are quite effective.187 Even Dog Poop Girl posted an online apology.188
Additionally, exposure on the Internet unleashes a mob of eager
fact-checkers, ready to expose the initial citizen journalist as a fraud.
For example, when Department of Agriculture official Shirley Sherrod
was falsely “exposed” as having said, on tape, that she purposefully
refused to help a farmer of a different race than her own, further,
near-immediate exposure, through the posting of, and commentary
regarding, the entire tape, revealed that she in fact did help the farmer
and that she had learned a great lesson regarding race and class in
society.189 In this respect, exposing the information to a large mass of
people simultaneously helped improve the accuracy of the
information. This automatic “right to reply” has not always been
available (simply put, not everyone owned a newspaper), but now the
very technology some vilify is the same technology that empowers a
reply.190 And, most importantly, every reply reduces the harm
associated with the exposure and loss of obscurity.191
187 See Lauren Gelman, Privacy, Free Speech, and “Blurry-Edged” Social Networks, 50 B.C. L. REV. 1315, 1315–16 (2009) (“For the first time in history, the economics of publishing place the individual speaker on even ground with institutional speakers” such that “any person can tell her story to the world.”).

188 See Dog Poo Girl, KNOW YOUR MEME, http://knowyourmeme.com/memes/dog-poo-girl# (last updated Nov. 7, 2011) (providing screen capture of apology).

189 See Viveca Novak, Shirley Sherrod’s Contextual Nightmare, FACTCHECK.ORG (July 21, 2010), http://www.factcheck.org/2010/07/shirley-sherrods-contextual-nightmare.

190 See Perzanowski, supra note 184, at 836 (contrasting the difficulty of responding to a newspaper’s accusations with the relative ease of responding to an online source’s publication of the same accusations and concluding that, with the latter, the “easiest and most effective strategy is simply to correct the misinformation through [one’s] own response”). As Perzanowski states, “In the time it would take to contact a lawyer, [someone wrongly exposed online] could compose a response that would counter the misinformation and prevent or repair any harm to her reputation.” Id. at 836.

191 The reporting of, and replies to, more reputation-related information in turn will likely lead to more complete and more accurate assessments of an individual’s actual qualities. See Strahilevitz, supra note 33, at 1670–75 (chronicling developments leading to the “reputation revolution”).
Finally, the dignity harm often is overstated because it cannot be
quantified. Emotional harm is inherently subjective.192 The only one
who accurately can quantify the harm is the person affected, and there
is no direct way for the rest of us to get inside her head and feel the
harm in the same way she feels it. This difficulty has led, at least in
part, to the law’s skepticism of emotional harm, especially when the
alleged harm was caused by the sharing of truthful information. For
example, a privacy tort plaintiff must be able to point to a specific
harm, such as a reputational loss, rather than mere “hurt feelings,” in order
to obtain damages.193 The Supreme Court, too, has expressed a
reluctance to limit speech based on its potential to hurt feelings.194
Although it remains possible to assess the harm on an objective basis,
assuming a rational, reasonable audience,195 the currently
unquantifiable nature of emotional harm makes the dignity harm a
weak leg on which to support a right to obscurity.196 Further, to the
extent it can be quantified, the dignity harm likely is offset by the
emotional benefits of exposure, as discussed in Part B below.
b. Refuting the Assumptions that Private Thinking Space in Public is
Necessary and Unavailable
The next harm—previously described as the destruction of
perceived “thinking space”—is also overstated due to its apparent
reliance on a questionable assumption. Namely, this harm assumes
that a public, yet somewhat secure, space absolutely free from later
reporting by anyone of what has transpired there is necessary for, and
helpful to, thought development and free association, especially
192 See Calo, supra note 176, at 1145 (describing various subjective harms).

193 See Richards, supra note 61, at 1345 (“[A] hallmark of modern American First Amendment jurisprudence is that hurt feelings alone cannot justify the suppression of truthful information or opinion.”).

194 See R.A.V. v. City of St. Paul, Minn., 505 U.S. 377, 414 (1992) (striking down an ordinance that criminalized conduct which caused mere hurt feelings).

195 See Lyrissa Barnett Lidsky, Nobody’s Fools: The Rational Audience as First Amendment Ideal, 2010 U. ILL. L. REV. 799, 802–04 (discussing Supreme Court justices’ “constitutional preference for standards based on a rational audience rather than a real one”).

196 Richards, supra note 61, at 1346 (“As Brandeis himself implicitly recognized later in life, a tort-based conception of privacy protecting against purely emotional harm must remain exceptional in a constitutional regime dedicated to speech, publicity, and disclosure.”).
political thought and association. More pointedly, some seem to fear
that if we let regular people report what they see and hear to millions
of others at the click of a button, no one will say the important things
that need to be said.197
Although some people rightfully may feel that potential exposure
discourages them from engaging in thoughtful debate,198 others may
feel that potential exposure to a huge and immediate audience
encourages sharing and debate. In fact, the inherent appeal of
reaching such a broad audience likely is partly responsible for the
meteoric rise in the use of social networking sites like Twitter and
Facebook. On these sites and elsewhere, many individuals and groups
seek out exposure and publicity for their ideas, rather than hide from
them. Their reasons for doing so presumably range from egotism or
vanity to more utilitarian ones, such as the fact that speaking with
someone else about one’s thoughts often improves them, or the fact
that making one’s ideas known helps them to spread and gain favor.
Regardless of the precise reason that people share ideas with others, it
simply cannot be said with convincing authority that people generally
need a public-yet-private incubator for their secret camaraderie and
thoughts.199 Further, encouraging some people to pause before they
share a thought may result in beneficial self-censorship.200 Thus, the
197 This argument also appears fundamentally flawed given that societies with less spatial privacy than we enjoy today apparently were able to think and write just fine. See Posner, supra note 16, at 407 (showing how “history does not teach that privacy is a precondition to creativity or individuality” because “these qualities have flourished in societies . . . that had much less privacy than we in the United States have today”).
198 For example, the National Association for the Advancement of Colored People (“NAACP”) famously argued that state-mandated disclosure of its membership lists would deter free enjoyment of the right to associate. NAACP v. Alabama, 357 U.S. 449, 462 (1958) (“It is hardly a novel perception that compelled disclosure of affiliation with groups engaged in advocacy may constitute as effective a restraint on freedom of association as the forms of governmental action in the cases above were thought likely to produce upon the particular constitutional rights there involved.”).
199 See Posner, supra note 16, at 408–09. Daniel Solove has acknowledged that not all observation of one’s thinking violates privacy. SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 86–87 (“Suppose I peek through your window and see that you are reading a book by Rousseau. I learned information about your consumption of ideas, which ultimately involves information about your identity. Without more details, however, it is hard to see how my snooping rises to the level of kidnapping.”).
200 See Danielle Keats Citron, Cyber Civil Rights, 89 B.U. L. REV. 61, 64–66 (2009) (documenting serious societal harms triggered by the “growth of anonymous online mobs that attack women, people of color, religious minorities, gays, and lesbians”); Lyrissa Barnett Lidsky, Anonymity in Cyberspace: What Can We Learn from John Doe?, 50 B.C.
2012]
ANDERSON
583
“thinking space” harm likely has been overstated and obscurity is not
a necessary prerequisite for thought generation and association.
Various legal sources support the idea that the lack of public-yet-private thinking space is an acceptable lack of obscurity rather than a
serious harm. Perhaps the most recent and direct challenge to the idea
that lack of thinking space in public is a serious harm comes from the
Supreme Court in its recent decision, Doe v. Reed.201 In Doe, the
Supreme Court held that state disclosure of the names and addresses
of those who sign petitions in support of ballot referenda does not
categorically violate the petition signers’ First Amendment speech
rights.202 In challenging the state statute requiring such disclosure, the
signers of an anti-gay rights petition had argued that disclosing their
names and addresses would chill speech, expose them to harassment
and deprive them of privacy for their thoughts.203 This argument
tracks that of privacy scholars who suggest that little to no obscurity
will chill expression and thought development.204 The Doe v. Reed
Court rejected this argument in an eight to one decision.205
Specifically, the Court found that the state law was a constitutional
disclosure requirement that “may burden the ability to speak, but
L. REV. 1373, 1383 (2009) (labeling anonymity a “double-edged sword” because “it makes
public discussion more uninhibited, robust, and wide-open than ever before, but it also
opens the door to more trivial, abusive, libelous, and fraudulent speech”); cf. Lyrissa
Barnett Lidsky & Thomas F. Cotter, Authorship, Audiences and Anonymous Speech, 82
NOTRE DAME L. REV. 1537, 1573–74 (2007) (categorizing intrinsic benefits of anonymous
speech).
201 Doe v. Reed, 130 S. Ct. 2811 (2010).
202 Id. at 2815. Before reaching its ultimate holding, the Court determined that the signing of the petition was an expressive act, id. at 2818, and that disclosure requirements are subject to “exacting scrutiny,” under which a “sufficiently important governmental interest” must be found. Id. In Doe, the Court ruled that the state’s interest in preserving the integrity of its electoral process was sufficient. Id. at 2820.
203 Id. at 2820 (expressing concern that publicizing names and addresses provided a “blueprint for harassment and intimidation”).
204 See supra notes 163–73 and accompanying text (summarizing other scholars’ suggestion that “thinking space” “underwrites the freedom to . . . associate freely away from the glare of the public and without fear of reprisal”).
205 Doe, 130 S. Ct. at 2821. The Court left the door open to an “as applied” challenge if petitioners could show that disclosure would, as applied to their particular case, cause enough harm to their First Amendment rights and personal safety. Id. (citations omitted). The holding also may be limited to actions that, like petition signing, have a “legal effect” similar to legislation. Id. at 2818 n.1.
[does] not prevent anyone from speaking.”206 In so finding, the Court
rejected a call for more privacy to facilitate activities deemed
“intellectual,”207 at least when they have a lawmaking effect.208 Thus,
in at least one context, the Supreme Court has considered and rejected
the “lack of thinking space” privacy harm as a reason to restrict the
exposure of truthful information.
Another legal reality that undermines the perceived lack of
thinking space harm is the fact that there already are adequate
methods to obtain legal protection for one’s ideas and thoughts when
necessary—namely, confidentiality or nondisclosure agreements.209 If
secrecy of publicized thought truly is valuable enough to someone, it
can be obtained via an express agreement prior to or after it is
shared.210 Admittedly, obtaining such agreements involves various
206 Id. at 2813–14 (quoting Citizens United v. FEC, 130 S. Ct. 876, 972 (2010)) (internal quotation marks omitted).
207 See Richards, supra note 61, at 1343 (suggesting that Supreme Court’s alleged regard for intellectual privacy “holds a much greater degree of promise to better understand and resolve modern problems of privacy”); see also id. at 1343 (characterizing Warren and Brandeis’s style of privacy as “a jurisprudential dead end” given conflict with First Amendment).
208 Justice Scalia was most skeptical of the urged need for thinking space, stating as follows: “[H]arsh criticism, short of unlawful action, is a price our people have traditionally been willing to pay for self governance. Requiring people to stand up in public for their political acts fosters civic courage, without which democracy is doomed. For my part, I do not look forward to a society which, thanks to the Supreme Court, campaigns anonymously . . . and even exercises the direct democracy of initiative and referendum hidden from public scrutiny and protected from the accountability of criticism. This does not resemble the Home of the Brave.” Doe, 130 S. Ct. at 2837 (Scalia, J., concurring) (citations omitted).
209 See Daniel J. Solove & Neil M. Richards, Rethinking Free Speech and Civil Liability, 109 COLUM. L. REV. 1650, 1654 (2009) (describing use of confidentiality agreements to protect privacy) (citations omitted).
210 In the absence of an express agreement, one may pursue a claim for breach of an implied duty of confidentiality in special circumstances, typically those involving a fiduciary relationship. See Peterson v. Idaho First Nat’l Bank, 367 P.2d 284 (Idaho 1961) (bank); Alberts v. Devine, 479 N.E.2d 113, 120 (Mass. 1985) (doctor); Rich v. N.Y. Cent. & Hudson River R.R. Co., 87 N.Y. 382, 390 (1882) (lawyer); Wagenheim v. Alexander Grant & Co., 482 N.E.2d 955, 961 (Ohio Ct. App. 1983) (accountant) (cited in Solove and Richards, supra note 209, at 1653 n.10 and accompanying text); see also Neil M. Richards & Daniel J. Solove, Privacy’s Other Path: Recovering the Law of Confidentiality, 96 GEO. L.J. 123, 157 (2007). Some have argued for extension of the implied duty of confidentiality to intimate private relationships and even to mere friends. See Ethan J. Leib, Friends as Fiduciaries, 86 WASH. U. L. REV. 665, 668 (2009); Andrew J. McClurg, Kiss and Tell: Protecting Intimate Relationship Privacy Through Implied Contracts of Confidentiality, 74 U. CIN. L. REV. 887, 908 (2006); Andrew J. McClurg, Bringing Privacy
transaction costs that may deter their use by some speakers. However,
the ubiquitous availability of these legal options at least calls into question whether one absolutely needs a right to obscurity in order to develop
thought. Accordingly, the “thinking space” harm likely has been
overstated.
Sunshine laws and the benefits they have produced also call into
question the need for privacy in public in order to facilitate thought
development.211 One subset of sunshine laws generally requires
governmental policy-related meetings to be open to the public and, in
many cases, recorded, and even posted on a website.212 Similarly,
many federal and state agencies require that people meeting with
agency officials file a written notice describing what was discussed.213
Further, our nation has a history of open town meetings.214 If those
most directly responsible for making public policy decisions do not
need privacy in public, then it is at least questionable whether those
with a more remote role need such absolute privacy either.
Finally, much of the most justifiable fear regarding deprivation of
thinking space is triggered only when the state is the one collecting the
information or when the information collection is a constant,
pervasive threat.215 The Obscurity Problem involves neither state-based nor constant surveillance. State surveillance is not directly
involved because it is “Little Stranger,” and not “Big Brother,” that is
Law Out of the Closet: A Tort Theory of Liability for Intrusions in Public Places, 73 N.C. L.
REV. 989, 1085–86 (1995).
211 Many of these laws are so-named because Louis Brandeis, years after co-authoring The Right to Privacy, famously stated that “[s]unlight is said to be the best of disinfectants.” LOUIS D. BRANDEIS, OTHER PEOPLE’S MONEY AND HOW THE BANKERS USE IT 62 (1914).
212 See generally Daxton R. Stewart, Let the Sunshine In, Or Else: An Examination of the “Teeth” of State and Federal Open Meetings and Open Records Laws, 15 COMM. L. & POL’Y 265 (2010) (reviewing current status of sunshine laws at federal and state level including enforcement of and relief under such laws).
213 See Heidi Reamer Anderson, Allocating Influence, 2009 UTAH L. REV. 683, 693–94 (collecting and summarizing agency rules regarding ex parte meeting notices).
214 See James Assaf, Note, Mr. Smith Comes Home: The Constitutional Presumption of Openness in Local Legislative Meetings, 40 CASE W. RES. L. REV. 227, 229–30 (1990) (documenting existence of sunshine laws in all fifty states requiring public access to local government meetings) (citations omitted).
215 See Richards, supra note 61, at 1351 (concluding that Brandeis was most concerned when privacy protection was threatened “principally [by] the state rather than the press”); but see Richards, supra note 61, at 1347–50 (documenting threats to “intellectual privacy”).
collecting the information.216 When it is one’s fellow citizens versus
one’s government officials doing the exposing, freedom of association
and related issues are less of a concern.217 Second, arguments
regarding surveillance’s potential to chill expression are most
persuasive when the surveillance is constant, versus intermittent.218
With the Obscurity Problem, surveillance is only occasional—and thus
less of a threat to “thinking space.”219
B. BENEFITS OF EXPOSURE
Just as we must “value privacy on the basis of the range of
activities it protects” we also must consider the range of activities that
protecting privacy would impede, i.e., the range of activities precluded
by the privacy protections themselves.220 Although some scholars
initially identify some of these benefits of exposure of public conduct
or information,221 there is no comprehensive recognition,
categorization, and aggregation of exposure’s benefits in the privacy
216 SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 133; see also Calo, supra note 176, at 1157–59 (reviewing Solove’s references to Orwell and Kafka and suggesting that lack of privacy is better viewed as a contributor to societal harms than as a separate and distinct harm in itself).
217 They still remain a concern, however, given the government’s reported use of privately-collected information. See, e.g., Danielle Keats Citron & Frank A. Pasquale III, Network Accountability for the Domestic Intelligence Apparatus, 62 HASTINGS L.J. 1441 (2011).
218 See Calo, supra note 176, at 1155 (suggesting that harm to privacy is best measured by multiplying “the degree of aversion” to the privacy intrusion by “the extent of [the] surveillance” and thus finding more harm when “the extent of the surveillance is enormous”); Josh Blackman, Omniveillance, Google, Privacy in Public and the Right to Your Digital Identity: A Tort for Recording and Disseminating an Individual’s Image Over the Internet, 49 SANTA CLARA L. REV. 313, 327–34 (2009) (distinguishing harms from constant or ever-present surveillance, labeled “omniveillance,” from lesser harms caused by occasional photographs).
219 See SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 80–81 (discussing critics of the need for privacy for intellectual thought, including professors Hannah Arendt, Yao-Huai Lu, Richard Posner, and Richard Epstein).
220 Id. at 98–99.
221 See id. at 123, 187 (acknowledging that identification “can reduce fraud and enhance accountability” and that “many privacy problems emerge as a result of efficacious activities, much as pollution is an outgrowth of industrial production”).
literature. This section aims to fill that gap, beginning with a
discussion of how exposure keeps government officials accountable.222
1. GOVERNMENTAL ACCOUNTABILITY BENEFITS
Daniel Solove claims that “dog poop girl would have been just a
vague image in a few people’s memories if it hadn’t been for the photo
entering cyberspace and spreading around faster than an
epidemic.”223 While this may be true, and may be deemed a net gain
for society in that one instance, this does not mean that legally
preventing or discouraging exposures of people’s public behavior
always results in a net gain for society. Recall the police brutality story
from the Introduction involving Officer Walsh. Do we want events like
that to be “vague image[s] in a few people’s memories?”224 Of course
not, because exposure of such images holds our government officials
accountable, inspires public debate, and often leads to real policy
changes. However, such benefits could be sacrificed in that situation
and others like it if the person who videotaped the event did not post
it to YouTube because he had a legal duty to protect the obscurity of
Officer Walsh or of other people at the scene.
In trying to prevent harm suffered by the Dog Poop Girls of the
world, we risk losing exposure of public behavior that should be
further seen, heard, discussed, and addressed.225 Consider the many
recent exposures of public officials at public events with other people
nearby about which we may not have learned (or at least not seen) if
the law protected people’s obscurity in public:
222 My categorization of these exposure benefits is a preliminary and perhaps less sophisticated attempt to do what Daniel Solove does for privacy harms in Understanding Privacy. Some of these benefits have speech elements and thus relate to many First Amendment arguments already made elsewhere. However, in the instant context of a pragmatic balancing of harms and benefits, it is more appropriate to discuss them in non-speech terms, and perhaps mention the benefit’s connection to the First Amendment in passing, just as pro-privacy scholars have done. In this respect, a benefit’s relationship with the First Amendment increases its weight in the balance, but does not change its existence. Or, more concisely, one could consider the First Amendment as a thumb on the scale pushing down in favor of exposure.
223 SOLOVE, FUTURE OF REPUTATION, supra note 18, at 8.
224 Id.
225 See Solove, supra note 1, at 973 (“Privacy impedes discourse, impairs autonomy in communication, and prevents accountability for one's conduct.”).
• Anti-gay comments made by candidate for New York Governor, Carl Paladino, in a meeting with religious leaders;226

• Then Governor of South Carolina, Mark Sanford’s, emails to the woman with whom he was having an extra-marital affair, and suggestions to his staff to account for his absences during these affairs by falsely stating that he was “hiking the Appalachian trail;”227

• The reference by George Allen, a candidate for Virginia’s U.S. Senate seat, to an opponent’s aide as “Macaca,” which many interpreted as a racist statement equating the aide to a Macaque monkey;228

• Then Senate Majority Leader Trent Lott’s birthday party statement suggesting that had civil-rights opponent and fellow Senator Strom Thurmond been elected president in 1948, on a pro-segregation platform, “we wouldn’t have had all these problems over the years;”229 and
226 See Simone Weichselbaum & Kenneth Lovett, Carl Paladino Accused of “Stunning Homophobia” After Anti-Gay Rant at Meeting with Jewish Leaders, N.Y. DAILY NEWS (Oct. 10, 2010, 5:10 PM), http://articles.nydailynews.com/2010-10-10/local/27077787_1_civil-unions-gay-marriage-bill-gay-pride-parade.
227 South Carolina Gov. Sanford Admits Extramarital Affair, CNN (June 24, 2009, 9:29 PM EDT), http://www.cnn.com/2009/POLITICS/06/24/south.carolina.governor/index.html.
228 Tim Craig & Michael D. Shear, Allen Quip Provokes Outrage, Apology, WASH. POST, Aug. 15, 2006, at A1, available at http://www.washingtonpost.com/wp-dyn/content/article/2006/08/14/AR2006081400589.html.
229 Thomas B. Edsall, Lott Decried for Part of Salute to Thurmond, WASH. POST, Dec. 7, 2002, at A6, available at http://www.washingtonpost.com/ac2/wp-dyn?pagename=article&contentId=A20730-2002Dec6&notFound=true.
• Pictures posted on a website by Ninth Circuit Judge Alex Kozinski, depicting nude women painted to look like farm animals.230
Any one of these or similar exposures of public conduct or
statements may have been precluded or at least “chilled” if the law
recognized a right to obscurity. As a result, the helpful discussions and
consequences these exposures motivated may not have occurred.
Thus, there is a risk that providing a general right to obscurity would
sacrifice significant governmental accountability benefits. In the end,
“keeping pertinent information about public affairs out of the hands of
the public is equally problematic,” regardless of whether the
information’s source is a “citizen journalist” or a more traditional
journalist.231
2. BEHAVIORAL-IMPROVEMENT BENEFITS
There also is significant value in continuing to apply the “no
privacy in public” rule to everyone instead of applying it only to public
officials. Just as the possibility of getting caught and punished acts
deters crime, the possibility of getting exposed for public statements
or behavior deters non-criminal but still objectively-undesirable
behavior.232 Daniel Solove calls this type of exposure “norm policing,”
but this term is too pejorative for something that holds so much
potential for benefitting society. Solove claims that “[w]e do not view
the victims [of exposure] as blameworthy, and there is little social
value in their suffering.”233 I disagree. This alleged suffering, in the
form of lost dignity and lost obscurity, can lead to significant social
value—and even save lives.
230 Scott Glover, 9th Circuit’s Chief Judge Posted Sexually Explicit Matter on His Website, L.A. TIMES, June 11, 2008, http://www.latimes.com/news/local/la-me-kozinski12-2008jun12,0,6220192.story.
231 See Strahilevitz, Reunifying, supra note 35, at 12.
232 See generally Paul H. Robinson & John M. Darley, The Role of Deterrence in the Formulation of Criminal Law Rules: At Its Worst When Doing Its Best, 91 GEO. L.J. 949 (2003); SOLOVE, FUTURE OF REPUTATION, supra note 18, at 80–81, 92 (“Internet shaming has many benefits. Without shaming, people like the dog poop girl, the subway flasher, and the creep who harasses women in the street would often go unpunished. In a world of increasingly rude and uncivil behavior, shaming helps society maintain its norms of civility and etiquette.”).
233 See Solove, supra note 42, at 538.
In Order Without Law, Robert Ellickson discussed the behavioral
benefits of exposure.234 In a more recent and more specific study, Lior
Jacob Strahilevitz demonstrated how additional exposure of people’s
reckless driving habits could reduce deaths on our highways—the
number one cause of death among those aged fifteen to twenty-nine.235 Strahilevitz first showed how protecting motorist obscurity
leads to rude, dangerous, and even life-threatening behavior.236 Next,
he demonstrated how reducing driver obscurity through exposure by
fellow citizens, and holding drivers accountable for their actions, has
led to better and safer driving among some “exposed” groups and
promises such benefits for society should the exposed groups be
expanded.237 Indeed, Strahilevitz has shown how facilitating closeness
and more “norm policing” may work better than the tort system itself
as a way of curtailing and punishing bad behavior.238
Fewer people cutting us off or tailgating us may save lives. On a
more abstract level, additional exposure and less obscurity promise to
increase happiness as well, on the highways and elsewhere.239 Fewer
instances of other bad behavior may make life more enjoyable in
various other contexts—primarily those in which obscurity authorizes
234 ROBERT C. ELLICKSON, ORDER WITHOUT LAW: HOW NEIGHBORS SETTLE DISPUTES (1991); see Schwartz, supra note 7, at 1440–42 (explaining how Ellickson’s work is pro-disclosure in part because it recognizes that “if the law can help reputational information circulate more freely, people will work harder to maintain the good opinion of others”).
235 Lior Jacob Strahilevitz, “How’s My Driving” for Everyone (and Everything?), 81 N.Y.U. L. REV. 1699, 1712 (2006).
236 Strahilevitz, supra note 235, at 1705 (reviewing highway safety studies and concluding that “[t]he problems associated with urban and suburban driving are, by and large, creatures of motorist anonymity”).
237 Id. at 1708–12.
238 Id. at 1724–26.
239 Strahilevitz demonstrated how exposure of drivers would increase happiness—an increase that likely would offset any other type of emotional harm suffered by the exposed drivers. Id. at 1702 (“Recent economic research has placed commuting at the very bottom of the happiness index, easily ranking as the least pleasurable major life activity in which Americans engage.”); id. at 1729 (“While the costs associated with driver deaths and injuries are quite substantial, they may well be dwarfed by the sheer unhappiness associated with commutes to and from work.”); id. at 1730 (discussing economists’ studies regarding the value of happiness).
and perhaps encourages objectionable behavior.240 For example,
exposing poor tippers online has made servers who felt cheated feel
better simply by reporting them; such exposure, in turn, may make
patrons more courteous and more generous. Similarly, exposure of
unruly hotel, sports stadium, or airport patrons could make visiting
such places more enjoyable for all.241 If the potential for getting
exposed for saying something extremely harmful to someone else
actually changes someone’s behavior—and prevents the harm that
would have been caused—then this changed behavior is a benefit of
the “no privacy in public” rule and is another benefit supporting the
rule’s retention.
3. CRIMINAL DETERRENCE, REPORTING, AND INTEGRITY BENEFITS
The fruits of exposure can be even more beneficial to society when
it is a criminal act, versus happiness-reducing rudeness, that is subject
to exposure. Louis Brandeis’s suggestion that sunlight is the best
disinfectant permeates popular culture and legal discourse.242 Often
forgotten, however, is the second part of Brandeis’s sunlight quote:
and “electric light the most efficient policeman.”243 As Brandeis
suggested, exposure that leads to reduced obscurity for would-be
criminals can be quite efficient at deterring crime, improving the
integrity of the criminal justice system, and increasing the reporting of
crimes, as discussed below.
As Daniel Solove has conceded, “social control can be beneficial . .
. [f]or example, surveillance can serve as a deterrent to crime.”244 In
Great Britain, a government surveillance program using closed-circuit
cameras (CCTV) reportedly has reduced street crimes in some areas
by fifty percent or more.245 Although the Obscurity Problem includes,
240 See Cohen, supra note 92, at 196 (“Maybe we don’t want people to litter or spread germs, or to drive aggressively, and if the potential for exposure reduces the incidence of those behaviors, so much the better.”).
241 Strahilevitz, supra note 235, at 1763.
242 See generally Seth F. Kreimer, Sunlight, Secrets, and Scarlet Letters: The Tension Between Privacy and Disclosure in Constitutional Law, 140 U. PA. L. REV. 1 (1991).
243 BRANDEIS, supra note 211, at 92.
244 Solove, supra note 42, at 493–94.
245 See Christopher Slobogin, Public Privacy: Camera Surveillance of Public Places and the Right to Anonymity, 72 MISS. L.J. 213, 223 (2002); Shane McGlaun, London CCTV System Caught Over 2,500 Criminals in 2010, DAILYTECH (Dec. 28, 2010, 12:20 PM),
by definition, only exposure by private persons versus governments, it
is possible if not likely that private exposure has a similar, albeit less comprehensive, deterrent effect on crime as government-led surveillance would have. Further, if Britain’s CCTV is
any indication, the crime-deterring effects come burdened with
minimal “thinking space” or other harms.246 The increased feeling of
safety then improves citizen well-being across the board, leading to
additional self-liberty, not less.247
For the many crimes that will occur despite the presence of citizen
journalists, exposure of public conduct likely will continue to assist in
the reporting of crimes as well as in the apprehension and prosecution
of the correct perpetrators. For example, citizens have used their cell
phone cameras to expose drivers involved in “hit and run” accidents in
ways that led to their eventual arrest.248 Similarly, some citizens
reluctant to report crimes in person have been willing to report crimes
via cell phone text messages including photographs of the alleged
perpetrators.249 Additionally, New York City residents now may report
crimes by uploading their pictures or videos to a government
website.250 The freedom to expose others’ public actions even
http://www.dailytech.com/London+CCTV+System+Caught+over+2500+Criminals+in+2010/article20501.htm. But see Slobogin, supra, at 223–25 (sharing conflicting data regarding the effects of closed circuit monitoring on crime).
246 See SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 108 (acknowledging that Britain’s CCTV “is widely perceived as ‘a friendly eye in the sky, not Big Brother but a kindly and watchful uncle or aunt’”). I am unaware of any reports that the amount of productive thought coming out of Britain has declined since the time the cameras have been in use.
247 As one commentator has noted, “[a]rguments that claim that an open society must accept a certain amount of crime fail to recognize that individual liberties are often sacrificed when an individual’s safety and security are forfeited.” DENNIS BAILEY, THE OPEN SOCIETY PARADOX 71 (2004) (discussing how use of DNA could have caught a rapist and reduced other liberty-related harms such as those suffered by women who stayed indoors, bought guns, and endured needless fear and anxiety).
248 See Ron Zurko, The Impact of Cell Phones on Crime, EHOW, http://www.ehow.com/about_5398414_impact-cell-phones-crime.html (last visited Jan. 4, 2012).
249 See Cops Ask Public for Text-Message Crime Tips, FOXNEWS.COM (July 3, 2008), http://www.foxnews.com/story/0,2933,375486,00.html.
250 See Deborah Jian Lee, NYPD Calls on Citizens for Amateur Video Evidence, REUTERS (July 31, 2008), http://www.reuters.com/article/domesticnews/idUSN31366504200807 (cited in Josh Blackman, Omniveillance, Google, Privacy in Public, and the Right to Your Digital
empowers some gutsy citizens to record and report the very criminals
that have hurt them.251
More broadly, social networks, chock-full of reports regarding
others’ activities and whereabouts, have been and could continue to be
harnessed to locate and track criminals or lost children.252 These same
networks of citizen journalists and their exposures provide reliable
and truthful alibis for the wrongly-accused, thereby improving the
integrity of the system and ensuring that the right person eventually is
caught.253 Further, replacing notoriously unreliable eyewitness
testimony with more reliable evidence of exposures documented via
still and video cameras also improves reliability.254 Ultimately, the
presence and use of what amounts to millions of mobile, citizen-directed security cameras could vastly improve the integrity and
reliability of the criminal justice system, leading to improved safety
and liberty for all citizens.
4. EMOTIONAL AND THERAPEUTIC BENEFITS
Although privacy scholars carefully have identified the emotional
harms associated with the Obscurity Problem, they have not fully
accounted for the emotional benefits associated with exposure. A
Identity: A Tort for Recording and Disseminating an Individual’s Image Over the
Internet, 49 SANTA CLARA L. REV. 313, 332 (2009)).
251 See Police: Woman Took Pictures of Flasher, UPI (Jan. 15, 2009, 6:24 PM), http://www.upi.com/Odd_News/2009/01/15/Police-Woman-took-pictures-of-flasher/UPI-52501232061864.
252 See Taking Pictures of Police Officers in Public Is Not a Crime, POLICECRIMES.COM (July 24, 2010, 7:36 PM), http://www.policecrimes.com/forum/viewtopic.php?f=26&t=9148 (“Even in potential terrorism cases, the presence of lots of ordinary folks carrying cameras actually enhances public security. In the hours after the failed Times Square car-bomb attempt, officials . . . sought out home movies shot by tourists.”).
253 See, e.g., Damiano Beltrami, His Facebook Status Now? ’Charges Dropped,’ NYTIMES.COM, FG/CH NEWS (Nov. 11, 2009, 11:10 AM), http://fort-greene.thelocal.nytimes.com/2009/11/11/his-facebook-status-now-charges-dropped (documenting use of Facebook posting as alibi and broader trend that “Web communications including photos and videos are providing evidence in legal battles ranging from murder trials to employment lawsuits”).
254 The unreliability of traditional eyewitness testimony has been well-documented. See, e.g., Brandon L. Garrett, Judging Innocence, 108 COLUM. L. REV. 55, 60 (2008) (showing that seventy-nine percent of rape or murder exonerees in expansive study were convicted based on incorrect eyewitness testimony).
person who exposes another’s public behavior via distributing a video
or story online often does so because sharing her take on the behavior
with others makes her feel better.255 Some describe the emotional
benefit of sharing a story via a personal blog as providing “a new kind
of intimacy, a sense that they are known and listened to.”256 For
intensely personal autobiographical speech, the emotional benefits to
the speaker are even more pronounced and heart-felt.257 In fact, even
the performance of “Numa Numa Guy” has been described as a reason
to promote webcam recordings because his video exhibited pure
emotional enjoyment of a song.258 Thus, silencing one person to
protect the obscurity of another likely ends up sacrificing the
emotional interests of the one silenced.
Exposure of public conduct also leads to emotional benefits for
people similarly situated to the exposed person. When someone is
exposed in public for supposedly shameful conduct—such as
alcoholism—it often leads other people who engage in that conduct to
feel connected and no longer alone. This emotional benefit is why memoirs, a genre that a duty to protect others’ obscurity likely would make impossible, are so powerful.259 In turn, the readers of such
truthful stories experience emotional benefits as well.
255. Posner, supra note 16, at 400 (noting how “[a]nyone who has ever sat next to a stranger on an airplane or a ski lift knows the delight that people take in talking about themselves to complete strangers”).
256. See Emily Nussbaum, My So-Called Blog, N.Y. TIMES MAG., Jan. 11, 2004, at 33, available at http://www.nytimes.com/2004/01/11/magazine/my-so-called-blog.html (reviewing phenomenon of adolescent blogs and concluding that “[e]xposure may be painful at times, but it’s all part of the process of ‘putting it out there,’ risking judgment and letting people in”).
257. See Sonja R. West, The Story of Me: The Underprotection of Autobiographical Speech, 84 WASH. U. L. REV. 905, 916–22 (2006) (carefully documenting the “American tradition of autobiographical speech” and “the modern trend of public self-disclosure”); see also Sonja R. West, The Story of Us: Resolving the Face-Off Between Autobiographical Speech and Information Privacy, 67 WASH. & LEE L. REV. 589 (2010) (proposing tort-based solution to the apparent conflict between valuing autobiographical speech and privacy).
258. See Douglas Wolk, The Syncher, Not the Song: The Irresistible Rise of the Numa Numa Dance, THE BELIEVER, June/July 2006, available at http://believermag.com/issues/200606/?read=article_wolk (“Brolsma’s video singlehandedly justifies the existence of webcams” because “[i]t’s a movie of someone who is having the time of his life, wants to share his joy with everyone, and doesn’t care what anyone else thinks.”). Perhaps the positive feelings associated with exposure were part of his motivation for starting his own NumaNetwork on YouTube.
259. See West, Story of Me, supra note 257, at 919–20 (chronicling the recent increase in memoirs).
2012]
ANDERSON
595
Exposure even can help change a harmful social norm that
wrongfully made a person feel “different” in the first place. For
example, the “exposures” of certain celebrities as gay allegedly
empowered others to come out and ultimately change the harmful
social norm that made them feel ostracized.260 Even if the norm does
not change, the exposure could lead to other emotional benefits such
as a sense of community, relief, or forgiveness.261 Exposing people’s
public actions and statements also brings certain issues, previously
hidden at the expense of a certain segment of society, into the public
sphere where they can be debated and addressed.262 For example,
some feminist scholars have argued that over-insistence on privacy
kept many women’s rights issues hidden from public scrutiny. In all of
these ways, the increased pride, self-esteem, confidence and other
emotional benefits likely offset dignity harms to the one being exposed
or to people fearing exposure.263
5. DECEPTION PREVENTION BENEFITS
One reason some people vilify the Obscurity Problem is that they
have something to hide and depend upon others’ ignorance regarding
this “something” in order to maintain their personal and professional
relationships.264 Revealing this information may “correct
misapprehensions that the individual is trying to exploit, as when a
260. See SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 144 (discussing Zimmerman and acknowledging that “more disclosures about people’s private lives might change hypocritical social norms that societies proclaim in public but flout in private”).
261. See Posner, supra note 16, at 408 (“If ignorance is the prerequisite of trust, equally knowledge, which privacy conceals, is the prerequisite of forgiveness.”).
262. SOLOVE, UNDERSTANDING PRIVACY, supra note 17, at 81–82.
263. See BAILEY, supra note 247, at 204 (“Interpersonal relationships will in fact be better if there is less of a concern for privacy. After all, forthrightness, honesty and candor are, for the most part, virtues, while hypocrisy and deceit are not.”) (quoting Richard Wasserstrom).
264. RICHARD A. POSNER, THE ECONOMICS OF JUSTICE 260–61 (1981) (concluding that “[p]eople conceal past criminal acts not out of bashfulness but precisely because potential acquaintances quite sensibly regard a criminal past as negative evidence of the value of cultivating an acquaintance with the person”). Expanding this thought to non-criminal activities, people conceal past social behavior (e.g., a keg stand) not out of bashfulness but precisely because potential acquaintances (e.g., employers or first dates) quite sensibly regard a poor social choice in the past as negative evidence of the value of dating, employing, or spending time with such a person.
worker conceals a serious health problem from his employer or a
prospective husband conceals his sterility from his fiancée.”265 Thus,
exposing truthful facts about a person that he purposefully hides from others acts as a “deception prevention” device, which many view as a net benefit for society.266
Preventing deception leads to other societal benefits, both directly
and indirectly, such as ensuring that one does not hire an
irresponsible person to take care of one’s children.267 Knowing more
about someone also can help people make decisions based on real
information rather than relying on inaccurate stereotypes.268 If we
know more about people, and observe them benefitting society despite
their past behavior, perhaps we will learn to be more forgiving and
less judgmental.269 Legally barring or punishing the exposure of such
information lets the deception and poor decision making continue in
the interest of protecting a mythical right to obscurity.270
265. POSNER, supra note 264, at 233.
266. Posner, supra note 16, at 400; BAILEY, supra note 247, at 184 (“[T]he more privacy we have, the less likely we are able to trust someone. Knowing something about a person helps you make a reasonable judgment about whether to trust him or her.”) (discussing legal philosophy of Michael Froomkin).
267. See FRED CATE, PRIVACY IN THE INFORMATION AGE 29 (1997) (“What parent would not want to know if her child’s babysitter had been convicted for child abuse? Similarly, what storeowner would not want to know whether his physician had a history of malpractice? What man or woman would not want to know if a potential sexual partner had a sexually transmitted disease? What airline would not want to know if its pilots were subject to epileptic seizures? Yet the interest in not disclosing that information is precisely what privacy protects.”).
268. Strahilevitz, supra note 33, at 1684–88 (suggesting ways in which reputational information can replace race as proxy and thus reduce discrimination based on race).
269. In this way, a digital dossier, which Daniel Solove invites us to fear, is a good thing, not a bad thing. The dossier documents the positive as well as the negative. Perhaps, as one philosopher suggests, it even would be quite liberating to not have to live two lives, one private and one public. See Richard A. Wasserstrom, Privacy: Some Arguments and Assumptions, in PHILOSOPHICAL DIMENSIONS OF PRIVACY: AN ANTHOLOGY 331 (Ferdinand D. Schoeman ed., 1984) (“[An] emphasis upon the maintenance of a private side to life [leads to a] dualistic, unintegrated life that renders the individuals who live it needlessly vulnerable, shame ridden, and lacking in a clear sense of self [versus] the more open, less guarded life of the person who has so little to fear from disclosures of self . . . .”).
270. Richard Posner equates some emphasis on privacy as fraud. Posner, supra note 16, at 399 (“An analogy to the world of commerce may help to explain why people should not—on economic grounds, in any event—have a right to conceal material facts about themselves. We think it wrong (and inefficient) that the law should permit a seller in hawking his wares
C. MOVING TOWARDS THE PROPER BALANCE
The purpose of Parts III.A and III.B was to begin aggregating the
collective harms of the Obscurity Problem versus the collective
benefits of exposing public conduct, pursuant to the balance described
in Part II.B. In Part III.A, I showed that scholars appear to oppose the
full democratization of exposure because they believe that it harms the
emotional and intellectual interests of those exposed and of society as
a whole. I then showed how these harms likely have been overstated
due to the reliance on multiple assumptions that have gone
unquestioned. For example, I showed how exposure harms often are
fleeting, rather than permanent; how the potential for exposure has
caused some people to think and speak more, not less; that the
Supreme Court in Doe balanced interests and came down in favor of
exposure; and how political activity has survived and flourished even
when faced with exposure via sunshine laws.
Although further aggregation and discussion of benefits should be
done before drawing ultimate conclusions, a preliminary balance is
worth conducting. A good starting point for the balancing is to
compare apples to apples—to compare emotional harms to emotional
benefits. Parts III.A and III.B show that for every exposed or
potentially exposed person emotionally harmed or threatened by an
exposure, there is at least one person emotionally benefitted via the
same exposure, whether it is the person doing the exposing, the
person exposed, or the person receiving the exposed information.
Assuming for balancing purposes that the number of people
emotionally harmed or benefitted, and the degree to which they are so
harmed or benefitted, is relatively equal, these harms and benefits
likely balance each other out.271
The only harm then left on the side of changing the “no privacy in
public” rule is the harm associated with intrusion into one’s thinking
space.272 Although it is difficult to quantify this harm, assigning it a
precise value is not necessary. One also need not agree with me that
this harm has been overstated due to certain faulty assumptions.273
to make false or incomplete representations as to their quality. But people ‘sell’ themselves
as well as their goods.”).
271. Admittedly, this cannot be assessed as a perfect balance without a more detailed inquiry that assigns some value to each side in particular situations. Other interests not specifically mentioned, such as those based on the First Amendment, also must be considered.
272. See supra notes 160–72 and accompanying text.
273. See supra notes 173–94 and accompanying text.
Rather, one need only balance the thinking space value—whatever it
may be—against the value of the many non-emotional benefits I
described in Part III.B.274 For example, I showed how exposures
consistent with the “no privacy in public” rule lead to more
accountability for our government officials, increased enforcement of
beneficial social norms that save lives, the tearing down of social
norms that wrongfully oppress certain groups, more deterrence of
criminal activity, increased ease of reporting criminal activity, more
reliable eyewitness testimony, more accurate information about the
people with whom we spend our valuable time and money, and the
enjoyment of having one’s voice heard by millions of others.
Although it likely is too early to make a definitive, pragmatic
decision regarding the “no privacy in public” rule, these many benefits
at least should make legal commentators pause before calling for its
demise. Additional time and effort also should be directed toward
examining the assumptions detailed in Part III.A to see if these alleged
harms have a firm basis in reality. Only after taking those two steps—
and doing so more evenhandedly—should anyone seriously consider
changing the “no privacy in public” rule.
D. SPECIAL CASES
Although a pragmatic balance may favor retention of the “no
privacy in public” rule in many instances, the balance may tip the
other way in a small set of special cases distinguishable because they
involve the exposure of a body part or bodily activity that society
generally regards as “private” even when the person ventures into
“public.” This special category includes exposures such as publishing a
photograph of a woman’s bare bottom taken in a shopping center
when a gust of air blows up her skirt,275 publishing a photograph of a
teenager’s genitals taken during an athletic event,276 or publishing a
photograph of someone going to the bathroom taken through the gap
in a restaurant bathroom stall.277 Under a strict application of the “no
274. See supra notes 215–67 and accompanying text.
275. Daily Times Democrat v. Graham, 162 So. 2d 474 (Ala. 1964).
276. McNamara v. Freedom Newspapers, Inc., 802 S.W.2d 901, 905 (Tex. App. 1991).
277. Autopsy photos, which often involve grisly bodily exposures, also likely would fall into this category. For a thorough discussion of why such photos deserve special privacy protection, see JON L. MILLS, PRIVACY: THE LOST RIGHT (2008). Prosser also acknowledged that “a difference may at least be found between a harmless report of a private wedding and the morbid publication of the picture of a deformed child.” Richards & Solove, supra
privacy in public” rule that I defend above, these exposures would
appear acceptable because the information collected and re-published
initially was shared in a public place (i.e., a shopping mall, a soccer
stadium or a restaurant).
A more careful inquiry, however, suggests that these bodily
exposures instead deserve special treatment under the pragmatic
balancing test given the comparatively greater harms and
comparatively fewer benefits they trigger.278 On the harm side,
exposures of one’s body parts or bodily functions often have been
considered particularly harmful to personal dignity, and thus worthy
of special legal protection in other contexts.279 On the benefits side,
the possible benefits flowing from such exposures are minimal
because exposing someone’s body parts or bodily functions generally
does not improve governmental or personal accountability, deter
crime, or prevent deception.280 As Richard Posner has stated, “because
the individual’s desire to suppress the photograph [of a body part] is
not related to misrepresentation in any business or social market
place, there is no basis for a presumption that the social value of
disclosure exceeds that of concealment.”281
Consider the example of Robert, whose sexual preferences and
behaviors were described by his one-time lover in her blog.282
Although having one’s sexual preferences exposed to others is far from
note 17, at 10 (quoting WILLIAM L. PROSSER, HANDBOOK OF THE LAW OF TORTS (1st ed.
1941)).
278. See also Lance Rothenberg, Comment, Peeping Toms, Video Voyeurs, and Failure of the Criminal Law to Recognize a Reasonable Expectation of Privacy in the Public Space, 49 AM. U. L. REV. 1127 (2000) (calling upon the criminal law to penalize voyeurs who surreptitiously record the private body parts of persons in public).
279. California and Louisiana already have adopted “video voyeur” statutes recognizing a privacy-based protection against recorded intrusions “under or through the clothing” of a person even if that person is in public. CAL. PENAL CODE § 647(j)(2) (West 2011); LA. REV. STAT. ANN. § 14:283(A)(1) (2011); see Rothenberg, supra note 278 (discussing California and Louisiana statutes and the real-life privacy violations that motivated them).
280. Rather, exposing the “offensive or embarrassing characteristics of [an] individual” provides little to no discreditable information and, thus, does not “serve the prevention-deception goal.” Posner, supra note 16, at 413; see id. at 400 (“Some private information that people desire to conceal is not discreditable. In our culture, for example, most people do not like to be seen naked, quite apart from any discreditable fact that such observation might reveal.”).
281. Id. at 414.
282. Supra note 113 and accompanying text.
ideal for most people, the pragmatic balance would shift significantly
in Robert’s favor should Ms. Cutler have sought to describe or publish
a photograph of Robert’s genitals. The latter exposure likely would fall
into the special cases category described here because the benefits to
society of sharing the information would be exceptionally low
(knowing precisely what Robert’s genitals look like does not help one
judge whether he is a good or bad person), while the harms to Robert’s
dignity would be exceptionally high (knowing that anyone, anywhere
can pull up a picture of one’s genitals could be particularly harmful to
one’s dignity and other emotional interests).283 A similar approach
could be used to justify restrictions on publishing the name of a rape
victim or on publishing autopsy photos because such exposures
“cause[] distress to the victim’s family while providing no information
useful to people contemplating transactions with her (since she was
dead) or with her family.”284 Ultimately, these body-focused exposures
likely are less worthy of protection because they provide no useful
reputational information regarding the individual exposed, and
therefore are distinguishable from all other types of exposures of
information shared in public.
V. CONCLUSION
The rhetoric regarding technology’s assault on privacy has peaked
at predictable points in time, most often when appreciation of a
certain technology’s beneficial uses has not yet caught up to the fears
regarding its negative uses.285 One common tactic is to describe
technology as a new, more harmful type of privacy invasion, thus
283. Posner, supra note 16, at 414. Similarly, consider the exposure of Dog Poop Girl’s behavior—her refusal to clean up her dog’s excrement on a subway train—versus an exposure involving Dog Poop Girl’s own body parts or bodily functions. Publishing a photograph of Dog Poop Girl’s bare bottom taken while she was going to the bathroom is inherently different than publishing the photograph of her refusing to clean up her dog’s excrement; the former “could convey no information enabling her friends and acquaintances to correct misapprehensions about her character which she might have engendered” while the latter would help friends and acquaintances to decide whether she was a good, respectful person or not. Id.
284. Id. at 416.
285. Fears about online privacy are particularly amusing to some critics who work in the technology business. See, e.g., BAILEY, supra note 247, at 137 (“You can go and find a mailbox right now, open the door to a tin box, tin door, no lock, with unencrypted information in English, sealed in a paper-thin envelope with spit, yet people are worried about online privacy.”) (quoting Scott McNealy of Sun Microsystems).
inspiring fear. “We shall soon be nothing but transparent heaps of
jelly to each other,” warned an interviewer of the inventor of a wireless
signaling device capable of penetrating walls.286 “[T]he latest advances
in photographic art have rendered it possible to take pictures
surreptitiously,” gasped Warren and Brandeis.287 Fear of these
supposed harms then is used to justify some type of new legal
restriction or remedy.288 At present, the fear is inspired by stories of
lives permanently and irreversibly scarred by public information being
shared with millions and associated warnings that “you, too, could be
Dog Poop Girl, and have your life ruined” due to a short lapse in
judgment.289 The new legal restriction or remedy demanded is a
supposedly necessary change to the “no privacy in public” rule in
order to protect people’s obscurity.
Critics of the “no privacy in public” rule suggest that its time has
passed and that one now needs some amount of privacy in public—in
other words, a right to obscurity—in order to function in society.
However, as the initial harms versus benefits balance detailed above
shows, the “no privacy in public” rule likely remains valid, useful, and
beneficial to society, even one as technologically advanced as our own, in all but a very few special cases. This is because the current and future lack of a right to obscurity leads to many societal benefits, including accountability for our public officials and
286. Science, THE ACADEMY, Vol. 50, No. 1285, Dec. 19, 1896, at 569 (cited by Virginia Postrel, No Telling, REASON, June 1998).
287. See Warren & Brandeis, supra note 36, at 211. “Instantaneous photographs . . . have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops.’” Id. at 195; see also Warren & Brandeis, supra note 36, at 196 (“[M]odern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress, far greater than could be inflicted by mere bodily injury.”). Even the concept of the white pages in the phone book once triggered irrational privacy fears. BAILEY, supra note 247, at 172–73.
288. As Coleridge famously stated, “In politics, what begins in fear usually ends in folly.” SAMUEL TAYLOR COLERIDGE, SPECIMENS OF THE TABLE TALK OF THE LATE SAMUEL TAYLOR COLERIDGE 111 (1836).
289. See, e.g., SOLOVE, THE FUTURE OF REPUTATION, supra note 18, at 48 (“Whether you like it or not, whether you intend it or not, the Internet can make you an instant celebrity. You could be the next Star Wars Kid.”); SOLOVE, THE FUTURE OF REPUTATION, supra note 18, at 2 (“Like the dog poop girl, you could find photos and information about yourself spread around the Internet like a virus.”); Solove, supra note 1, at 969 (“Without warning, anyone can broadcast another's unguarded moments or times of youthful awkwardness to an audience of millions.”).
ourselves, better crime prevention and reporting, and more
information about the people with whom we engage in important
personal and business transactions. Ultimately, the balance is likely to
show that a potential loss in obscurity is a small price to pay for these
benefits, and that the “no privacy in public” rule generally remains
valid. At the very least, before we decide to restrict the flow of truthful,
public information in the interest of protecting Dog Poop Girl’s
mythical right to obscurity, we need to better understand what it is we
are sacrificing by doing so.
I/S: A JOURNAL OF LAW AND POLICY FOR THE INFORMATION SOCIETY
AdChoices?
Compliance with Online Behavioral Advertising
Notice and Choice Requirements
SARANGA KOMANDURI, RICHARD SHAY, GREG NORCIE,
BLASE UR & LORRIE FAITH CRANOR*
Abstract. Online behavioral advertisers track users across
websites, often without their knowledge. Over the last
twelve years, the online behavioral advertising industry has
responded to privacy concerns and pressure from the FTC
by creating private self-regulatory bodies. These include the
Network Advertising Initiative (NAI) and an umbrella
organization known as the Digital Advertising Alliance
(DAA). In this paper, we enumerate the DAA and NAI notice
and choice requirements and check for compliance with
those requirements by examining NAI members' privacy
policies and reviewing ads on the top 100 websites. We also
test DAA and NAI opt-out mechanisms and categorize how
their members define opting out. Our results show that most
members are in compliance with some of the notice and
choice requirements, but two years after the DAA published
its Self-Regulatory Principles, there are still numerous
instances of non-compliance. Most examples of non-compliance are related to the “enhanced notice”
requirement, which requires advertisers to mark behavioral
ads with a link to further information and a means of
opting out.
* Saranga Komanduri, Carnegie Mellon University, [email protected]; Richard Shay, Carnegie Mellon University, [email protected]; Greg Norcie, Indiana University, [email protected]; Blase Ur, Carnegie Mellon University, [email protected]; Lorrie Faith Cranor, Carnegie Mellon University, [email protected].
I. INTRODUCTION
The Federal Trade Commission (FTC) defines online behavioral
advertising (OBA) as “the practice of tracking consumers' activities
online to target advertising.”1 The FTC has been examining ways to
reduce the privacy concerns associated with OBA for over a decade.
In 1999, a group of companies engaging in OBA announced the
launch of a self-regulatory organization called the Network
Advertising Initiative (NAI) and proposed a set of principles to the
FTC. In a July 2000 report, the FTC acknowledged that “the NAI
principles present a solid self-regulatory scheme,” but nonetheless
recommended legislation to provide a basic level of privacy
protection.2 This legislation was never enacted.3 The NAI published its
principles in 2001 and revised them in 2008.4 Today, the NAI has
seventy-four member companies5 and offers a consumer opt-out
service6 that allows consumers “to ‘opt out’ of the behavioral
advertising delivered by our member companies.”7
As the FTC began examining OBA again in 2009, several industry
organizations with an interest in OBA (including the NAI) formed the
1. FED. TRADE COMM’N, ONLINE BEHAVIORAL ADVERTISING: MOVING THE DISCUSSION FORWARD TO POSSIBLE SELF-REGULATORY PRINCIPLES (2011), available at http://www.ftc.gov/os/2007/12/P859900stmt.pdf.
2. FED. TRADE COMM’N, ONLINE PROFILING: REPORT TO CONGRESS PART 2: RECOMMENDATIONS 9–10 (2000) [hereinafter FTC, ONLINE PROFILING REPORT], available at http://www.ftc.gov/os/2000/07/onlineprofiling.pdf.
3. FED. TRADE COMM’N, STAFF REPORT: SELF-REGULATORY PRINCIPLES FOR ONLINE BEHAVIORAL ADVERTISING 7 (2009), available at http://www.ftc.gov/os/2009/02/P085400behavadreport.pdf.
4. NETWORK ADVER. INITIATIVE, 2008 NAI PRINCIPLES: THE NETWORK ADVERTISING INITIATIVE’S SELF-REGULATORY CODE OF CONDUCT (2008) [hereinafter 2008 NAI PRINCIPLES], available at http://www.networkadvertising.org/networks/2008%20NAI%20Principles_final%20for%20Website.pdf.
5. The full NAI membership list is available online at http://www.networkadvertising.org/participating (last visited Jan. 16, 2012).
6. Opt Out of Behavioral Advertising, NETWORK ADVER. INITIATIVE, http://www.networkadvertising.org/managing/opt_out.asp (last visited Jan. 16, 2012).
7. Id.
2012]
KOMANDURI ET AL.
605
Digital Advertising Alliance (DAA).8 One of the member organizations
of the DAA is the Interactive Advertising Bureau (IAB), which lists as
one of its “core objectives” to “[f]end off adverse legislation and
regulation.”9 In July 2009, the DAA published its own set of
requirements, the “Self-Regulatory Principles for Online Behavioral
Advertising,”10 in an effort to avoid an FTC push for new legislation.11
The self-regulatory program based on the DAA Principles document
was announced in October 2010. According to a Better Business
Bureau announcement, “the Principles and practices represent the
industry's response to the Federal Trade Commission's call for more
robust and effective self-regulation of online behavioral advertising
practices that would foster transparency, knowledge and choice for
consumers.”12
As the FTC determines what to do next, it is useful to evaluate the
effectiveness of industry self-regulation to date. In this paper, we
focus on the effectiveness of notice and opt-out and quantify DAA and
NAI member compliance with these self-regulatory requirements. We
check for compliance by examining websites showing advertisements,
advertising network websites, and the cookies produced by the DAA
and NAI opt-out mechanisms.
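The cookie-inspection step of that methodology can be sketched in a few lines of Python. This is a simplified illustration under assumed values: the domain example-adnet.com, the cookie name id, and the value OPT_OUT are all hypothetical, since each member network chooses its own opt-out cookie name and value.

```python
# Hypothetical opt-out markers; real networks each define their own
# (domain, cookie name) -> opt-out value mapping.
OPT_OUT_MARKERS = {("example-adnet.com", "id"): "OPT_OUT"}

def classify_cookie(domain, name, value):
    """Classify a cookie left in the browser after using an opt-out
    tool: a recognized opt-out marker, a possible tracking ID, or other."""
    if OPT_OUT_MARKERS.get((domain, name)) == value:
        return "opt-out"
    # Long, unique-looking values in ID-style cookies suggest tracking.
    if name in ("id", "uid") and len(value) > 10:
        return "possible tracking id"
    return "other"

cookies = [
    ("example-adnet.com", "id", "OPT_OUT"),
    ("other-adnet.com", "uid", "a1b2c3d4e5f67890"),
]
for domain, name, value in cookies:
    print(domain, "->", classify_cookie(domain, name, value))
```

The intuition being tested is that a working opt-out replaces a unique tracking identifier with a stable, non-identifying marker of this kind.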
The remainder of our paper is organized as follows: We present
background material and related work in Part II. In Part III we discuss
8. For a list of affiliated organizations, see http://www.aboutads.info/associations (last visited Jan. 16, 2012).
9. About the IAB, INTERACTIVE ADVER. BUREAU, http://www.iab.net/about_the_iab (last visited Mar. 13, 2012).
10. DAA ET AL., SELF-REGULATORY PRINCIPLES FOR ONLINE BEHAVIORAL ADVERTISING (2009) [hereinafter DAA SELF-REGULATORY PRINCIPLES], available at http://www.aboutads.info/resource/download/seven-principles-07-01-09.pdf.
11. DAVIS & GILBERT LLP, NEWLY FORMED DIGITAL ADVERTISING ALLIANCE ANNOUNCES SELF-REGULATORY PROGRAM FOR ONLINE BEHAVIORAL ADVERTISING (2010), available at http://www.dglaw.com/images_user/newsalerts/AdvMktngPromo_BehavioralAdvertising-Self-Regulatory-Program.pdf.
12. Press Release, Better Bus. Bureau, Major Marketing/Media Trade Groups Launch Program to Give Consumers Enhanced Control Over Collection and Use of Web Viewing Data for Online Behavioral Advertising (Oct. 21, 2010), available at http://www.newyork.bbb.org/article/major-marketing/media-trade-groups-launchprogram-to-give-consumers-enhanced-control-over-collection-and-use-of-web-viewingdata-for-online-behavioral-advertising-22618.
the DAA and NAI requirements that we have investigated. We outline
our methodology in Part IV and present our findings in Part V.
Finally, in Part VI, we discuss our conclusions on the matter.
II. BACKGROUND AND RELATED WORK
Online behavioral advertising is a form of advertising in which
advertising networks construct profiles of users as they navigate
various websites.13 The purpose of this tracking is to present each user
with advertisements related to his or her interests.14 HTTP cookies are
the primary mechanism for executing this tracking, though it is
possible to do so using other technologies, such as JavaScript cookies
or Flash Local Shared Objects (LSOs).
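The cookie mechanism can be illustrated with a short, self-contained Python sketch. This is a toy model for exposition only; the AdNetwork class and the uid cookie name are invented here, not any real network's code.

```python
import uuid

class AdNetwork:
    """Toy model of a third-party ad server that profiles a browser
    via a cookie it sets the first time one of its ads is loaded."""

    def __init__(self):
        self.profiles = {}  # cookie id -> set of sites where ads were seen

    def serve_ad(self, request_cookies, referring_site):
        # Reuse the tracking cookie if the browser sent one; otherwise
        # mint a new unique identifier (the Set-Cookie step).
        uid = request_cookies.get("uid") or str(uuid.uuid4())
        self.profiles.setdefault(uid, set()).add(referring_site)
        return {"uid": uid}  # cookies the browser stores for this domain

# One browser (one cookie jar) visits two unrelated sites that both
# embed ads from the same network; the network links the two visits.
network = AdNetwork()
jar = {}
jar = network.serve_ad(jar, "news.example")
jar = network.serve_ad(jar, "shopping.example")
profile = network.profiles[jar["uid"]]
print(sorted(profile))  # ['news.example', 'shopping.example']
```

Because the browser returns the same uid cookie to the ad server regardless of which site embeds the ad, both visits land in a single profile, which is exactly the cross-site linkage on which OBA depends.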
While OBA practitioners claim that behavioral advertising benefits
consumers,15 by funding website content, for example, the FTC notes
that it raises privacy concerns among consumers, including:
[T]he invisibility of the data collection to consumers;
the shortcomings of current disclosures about the
practice; the potential to develop and store detailed
profiles about consumers; and the risk that data
collected for behavioral advertising -- including
sensitive data regarding health, finances, or children --
could fall into the wrong hands or be used for
unanticipated purposes.16
In a 2009 study, Turow et al. found that the majority of American
adults did not want advertisements to be targeted toward their
13. PAM DIXON, WORLD PRIVACY FORUM, THE NETWORK ADVERTISING INITIATIVE: FAILING AT CONSUMER PROTECTION AND AT SELF-REGULATION 5 (2007), available at http://www.worldprivacyforum.org/pdf/WPF_NAI_report_Nov2_2007fs.pdf.
14. How Interest Based Ads Work, ABOUTADS, http://www.aboutads.info/how-interestbased-ads-work (last visited Jan. 16, 2012).
15. FED. TRADE COMM’N, ONLINE BEHAVIORAL ADVERTISING: MOVING THE DISCUSSION FORWARD TO POSSIBLE SELF-REGULATORY PRINCIPLES, PROJECT NO. P859900 (comments of Randall Rothenberg et al. on behalf of the Interactive Advertising Bureau, Apr. 11, 2008), available at http://www.ftc.gov/os/comments/behavioraladprinciples/080411interactiveadbureau.pdf.
16. FTC, ONLINE PROFILING REPORT, supra note 2, at i–ii.
interests, even if done anonymously.17 They also found that most
Americans believe a law should require advertisers “to immediately
delete information about their internet activity.”18 In a 2010 study by
McDonald et al., over 60% of more than 300 participants saw online
behavioral advertising as “invasive.”19
Google counsel Pablo Chavez reported on Google's OBA opt-out
mechanism, which also allows users to modify their interest
categories:
[F]or every user that has opted out, about four change
their interest categories and remain opted in, and
about ten do nothing. We take from this that online
users appreciate transparency and control, and become
more comfortable with data collection and use when
they feel it happens on their terms and in full view.20
Other research has examined online self-regulatory mechanisms.
McDonald et al. have explored the cost of reading online privacy
policies. They discovered that, despite being a self-regulatory
mechanism designed to provide users with notice, website privacy
policies were so verbose and densely written that it would be
unreasonable for a typical user to read the privacy policy of each
website visited.21
The Platform for Privacy Preferences (P3P) is a self-regulatory
mechanism for websites to communicate their privacy policies to user
agents so users do not have to read them.22 Leon et al. discovered that
17. JOSEPH TUROW ET AL., AMERICANS REJECT TAILORED ADVERTISING AND THREE ACTIVITIES THAT ENABLE IT (2009), available at http://repository.upenn.edu/asc_papers/137.

18. Id. at 20.

19. Aleecia M. McDonald & Lorrie F. Cranor, Americans’ Attitudes About Internet Behavioral Advertising Practices, WPES '10: PROCEEDINGS OF THE 9TH ANN. ACM WORKSHOP ON PRIVACY IN THE ELECTRONIC SOC’Y 63, 69 (Oct. 4, 2010), available at http://dl.acm.org/citation.cfm?id=1866929.

20. Email from Pablo L. Chavez, Managing Policy Counsel, Google Inc., to Donald S. Clark, Fed. Trade Comm’n, Re: Privacy Roundtables 4 (Apr. 14, 2010), available at http://www.ftc.gov/os/comments/privacyroundtable/544506-00134.pdf.

21. Aleecia McDonald & Lorrie Faith Cranor, The Cost of Reading Privacy Policies, 4 ISJLP 3 (2008).

22. LORRIE FAITH CRANOR, WEB PRIVACY WITH P3P (2002).
thousands of websites use P3P compact policies to misrepresent their
privacy practices.23 Reay et al. examined P3P policies of websites and
compared them with the legal requirements of the websites'
jurisdictions. They found that websites often do not claim to follow
legal privacy-related requirements.24
Prior research has examined the usability of self-regulatory
privacy mechanisms. McDonald et al. found that only 11% of study
participants were able to determine the function of the NAI opt-out
website.25 Further, the Annenberg Public Policy Center reports that
many users misunderstand the purpose of website privacy policies.
Their report states that over half of users believe that a website having
a privacy policy means the website in question will not share data.26
The NAI expressly recognizes the importance of NAI member
adherence to privacy principles:
NAI members believe that self imposed constraints
help achieve the balance needed to preserve consumer
confidence in the use of this revolutionary medium.
Even where there is reduced privacy impact in use of
anonymous or anonymized data, the NAI recognizes
that consumers will only trust and continue to engage
with advertisers online when there is appropriate
deference shown to consumers' concerns about the
privacy of their websurfing experience.27
The NAI states that they rely in part on consumers to report
violations.28
23. Pedro Giovanni Leon et al., Token Attempt: The Misrepresentation of Website Privacy Policies Through the Misuse of P3P Compact Policy Tokens (2010), http://repository.cmu.edu/cylab/73.

24. Ian Reay et al., A Large-Scale Empirical Study of P3P Privacy Policies: Stated Actions vs. Legal Obligations, 3 ACM TRANS. ON THE WEB 1, 1–34 (Apr. 2009).

25. McDonald & Cranor, supra note 19.

26. JOSEPH TUROW, AMERICANS AND ONLINE PRIVACY: THE SYSTEM IS BROKEN, A REPORT FROM THE ANNENBERG PUBLIC POLICY CENTER OF THE UNIVERSITY OF PENNSYLVANIA 3 (2003).

27. 2008 NAI PRINCIPLES, supra note 4.
28. Network Advertising Initiative FAQ: What Do I Do if I Think an NAI Member Has Violated the NAI Privacy Principles?, NETWORK ADVER. INITIATIVE, http://www.networkadvertising.org/managing/faqs.asp#question_15 (last visited Mar. 3, 2012).
The NAI's 2010 Annual Compliance Report examined the thirty-four NAI companies who were members at the start of 2010. The
report found that “the vast majority of evaluated member companies
met their compliance obligations.”29 However, the report also
indicated that there were instances of opt-out mechanisms failing and
failure of members to observe requirements pertaining to “non-cookie
technologies.”30 There was also a member using sensitive health-related information to target ads without opt-in consent, which the
NAI requires.31 The document states that the NAI is working on policy
changes to address their findings.32
The NAI compliance report also indicates that one NAI member
withdrew its membership.33 This highlights one potential problem
with self-regulatory organizations: members who do not wish to
follow the self-regulation process can simply leave. The FTC expressed
this concern in 2000:
For while NAI's current membership constitutes over
90% of the network advertising industry in terms of
revenue and ads served, only legislation can compel the
remaining 10% of the industry to comply with fair
information practice principles. Self-regulation cannot
address recalcitrant and bad actors, new entrants to the
market, and drop-outs from the self-regulatory
program.34
29. NETWORK ADVER. INITIATIVE, 2010 ANNUAL COMPLIANCE REPORT ii (2011), available at http://www.networkadvertising.org/pdfs/2010_NAI_Compliance_Report.pdf.

30. Id.

31. Id. at vi.

32. Id.

33. Id. at 2.

34. FTC, ONLINE PROFILING REPORT, supra note 2, at 10.
The “do not track” mechanism has been proposed as a way
to allow privacy-concerned users to avoid OBA tracking,35 and Jon
Leibowitz, chairman of the FTC, has expressed his support.36 A recent
release of Mozilla Firefox includes a do not track feature that signals
to visited websites that the user does not wish to be tracked.37
Likewise, Microsoft Internet Explorer 9 includes a do not track header
as well as a feature called “tracking protection.”38 Google has also
introduced a Chrome extension which enables users to retain
persistent opt-out cookies.39 The do-not-track and opt-out
mechanisms both rely on website operators to honor user preferences.
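Mechanically, a browser with "do not track" enabled simply adds a DNT: 1 header to each outgoing HTTP request; acting on it is left entirely to the website operator. A minimal sketch using Python's standard library and a hypothetical URL (the request is constructed but never sent):

```python
import urllib.request

# A browser with "do not track" enabled attaches a DNT: 1 header to
# every HTTP request; honoring it is left to the website operator.
req = urllib.request.Request('http://www.example.com/',
                             headers={'DNT': '1'})

# urllib normalizes header names via str.capitalize(), so the header
# is stored (and looked up) as "Dnt".
print(req.get_header('Dnt'))
```

This minimal footprint is why the mechanism, like opt-out cookies, depends wholly on operator cooperation rather than technical enforcement.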
III. DAA AND NAI REQUIREMENTS INVESTIGATED IN THIS STUDY
In this section we discuss the DAA and NAI principles in more
detail and focus on the notice and choice requirements that we
investigated in this study. The DAA principles are contained in a forty-eight-page document, published in 2009.40 This document presents
seven principles along with commentary and implementation
guidance. The NAI principles are contained in a twelve-page document
last revised in 2008.41 This document describes ten principles and
does not include the more extensive commentary and implementation
details of the DAA principles document. The principles documents are
35. Peter Eckersley, What Does the “Track” in “Do Not Track” Mean?, ELECTRONIC FRONTIER FOUNDATION (Feb. 19, 2011), https://www.eff.org/deeplinks/2011/02/what-does-track-do-not-track-mean.

36. Jon Leibowitz, Chairman, Fed. Trade Comm’n, Remarks as Prepared for Delivery, Preliminary FTC Staff Privacy Report 5–6 (Dec. 1, 2010) (prepared remarks available at http://www.ftc.gov/speeches/leibowitz/101201privacyreportremarks.pdf).

37. Mozilla Firefox 4 Beta, Now Including “Do Not Track” Capabilities, MOZILLA BLOG (Feb. 8, 2011), http://blog.mozilla.com/blog/2011/02/08/mozilla-firefox-4-beta-now-including-do-not-track-capabilities.

38. Dean Hachamovitch, IE9 and Privacy: Introducing Tracking Protection, IEBLOG (Dec. 7, 2010, 10:10 AM), http://blogs.msdn.com/b/ie/archive/2010/12/07/ie9-and-privacy-introducing-tracking-protection-v8.aspx.

39. Sean Harvey & Rajas Moonka, Keep Your Opt-Outs, GOOGLE PUB. POLICY BLOG (Jan. 24, 2011, 12:00 PM), http://googlepublicpolicy.blogspot.com/2011/01/keep-your-opt-outs.html.

40. See DAA SELF-REGULATORY PRINCIPLES, supra note 10.

41. See 2008 NAI PRINCIPLES, supra note 4.
not exhaustive lists of either organization's requirements, as we
discuss below.
We examined the DAA principles document to determine which
principles lend themselves to compliance checks through inspection of
websites, privacy policies, advertisements, and cookies.
• Education Principle: The DAA must maintain a central educational website and provide educational ads. The educational website is the DAA website itself.42 Checking the educational ad requirement is beyond the scope of this study.

• Transparency Principle: Companies must provide certain information on their websites and in ads.43 We check this principle through inspection of websites and advertisements.

• Consumer Control Principle: Companies must provide a mechanism for opting out of data collection for online behavioral advertising.44 We check this through examination of opt-out cookies.

• Data Security Principle: This principle sets forth requirements for data security.45 We are unable to check this because it pertains to internal practices.

• Material Changes Principle: Companies must obtain consent before making certain changes to their practices.46 We are unable to check this
42. DAA SELF-REGULATORY PRINCIPLES, supra note 10, at 2, 12.

43. Id. at 12.

44. Id. at 14.

45. Id. at 15–16.

46. Id. at 16.
because we do not know when companies
change their practices or what steps they are
taking to obtain consent.
• Sensitive Data Principle: Companies must take additional steps when handling sensitive data.47 We cannot check this because we do not know what data a given company may have or what steps they take to handle it.

• Accountability Principle: The industry must develop compliance programs.48 The Direct Marketing Association and Council of Better Business Bureaus are developing such programs,49 but a review of these programs is beyond the scope of this paper.
The NAI principles document contains similar principles as well as
some additional principles that are not relevant to our analysis.
The DAA Transparency Principle requires that companies “give
clear, meaningful, and prominent notice on their own Web sites that
describes their Online Behavioral Advertising data collection and use
practices.”50 Companies must indicate “the types of data collected
online,” “the uses of such data,” a “mechanism for exercising choice”
about data collection and use for online behavioral advertising, and
“the fact that they adhere to these principles.”51 The NAI principles
also require the above, except for requiring members’ affirmation that
they adhere to the DAA principles. In addition, the NAI principles
require that a member disclose what online behavioral advertising
activity it performs, and the approximate duration for which it retains
data for online behavioral advertising.52
47. Id. at 16–17.

48. DAA SELF-REGULATORY PRINCIPLES, supra note 10, at 17–18.

49. Press Release, Better Business Bureau, supra note 12.

50. DAA SELF-REGULATORY PRINCIPLES, supra note 10, at 12.

51. Id.

52. 2008 NAI PRINCIPLES, supra note 4, at 7–8.
The DAA's Transparency Principle includes an “enhanced notice”
provision, requiring that websites on which behavioral advertising
data is collected or used provide a “clear, meaningful and prominent
link” to a “disclosure” about online advertising.53 This link must
appear on every page “where OBA data is collected or used.”54 This
disclosure must contain either a list of advertisers collecting data and
corresponding links, or “a link to an industry-developed Web site”
containing certain information.55 A link to the DAA website satisfies
this condition.56
The DAA principles do not require a specific icon and none is
depicted in the document itself; however, it does mention “common
wording and a link/icon that consumers will come to recognize.”57 In
January 2010, the industry introduced the “Power I” icon to denote
online behavioral advertising.58 This symbol was selected based on the
results of a research study commissioned by the Future of Privacy
Forum.59 Nine months later, the industry announced a new
“Advertising Option Icon.”60 Both the original and new icons are
shown in Figure 1. The Ad Option Icon may be licensed for a fee from
the DAA (although web publishers with annual revenues from online
behavioral advertising of less than $2,000,000 are permitted to use it
for free).61
53. DAA SELF-REGULATORY PRINCIPLES, supra note 10, at 13.

54. Id.

55. Id. at 6.

56. Id.

57. Id. at 5.

58. Stephanie Clifford, A Little ‘i’ to Teach About Online Privacy, N.Y. TIMES, Jan. 26, 2010, at B3, available at http://www.nytimes.com/2010/01/27/business/media/27adco.html.

59. MANOJ HASTAK & MARY J. CULNAN, FUTURE OF PRIVACY FORUM: ONLINE BEHAVIORAL ADVERTISING “ICON” STUDY 16 (2009), available at http://futureofprivacy.org/final_report.pdf.

60. Tanzina Vega, Ad Group Unveils Plan to Improve Web Privacy, N.Y. TIMES, Oct. 4, 2010, at B8, available at http://www.nytimes.com/2010/10/04/business/media/04privacy.html.

61. Advertising Option Icon Application, ABOUTADS, http://www.aboutads.info/participants/icon (last visited Jan. 14, 2012).
Figure 1:
A Progressive ad (left) and a Geico ad (right) displaying the Power I
and Advertising Option Icon, respectively.
The DAA Consumer Control principle requires that companies
“provide consumers with the ability to exercise choice with respect to
the collection and use of data for Online Behavioral Advertising
purposes.”62 This must be available from one of a number of locations,
including the privacy notice.63 Likewise, the NAI requires that its
members using non-personally identifiable information for OBA
provide users with an opt-out mechanism, both on the member
website and on the NAI website.64 Further, while the DAA and NAI
principles documents do not mention this, the NAI and DAA both
require that opt-out cookies persist for at least five years.65
We also note that in 2009, the FTC narrowed its focus to third-party behavioral advertising.66 Thus, the DAA considers online
behavioral advertising to occur only “across non-affiliate Websites.”67
The DAA states that the principles do not cover “[a]ctivities of First
62. DAA SELF-REGULATORY PRINCIPLES, supra note 10, at 14.

63. Id.

64. 2008 NAI PRINCIPLES, supra note 4, at 8.

65. NAI Frequently Asked Questions, NETWORK ADVER. INITIATIVE, http://www.networkadvertising.org/managing/faqs.asp (last visited Jan. 14, 2012); What Are Opt Out Cookies and How Do They Remember Opt Out Preferences?, http://www.aboutads.info/how-interest-based-ads-work/what-are-opt-out-cookies-and-how-do-they-remember-opt-out-preferences (last visited Jan. 14, 2012).

66. Press Release, Fed. Trade Comm’n, FTC Staff Revises Online Behavioral Advertising Principles (Feb. 12, 2009), available at http://www.ftc.gov/opa/2009/02/behavad.shtm.

67. DAA SELF-REGULATORY PRINCIPLES, supra note 10, at 19.
Parties (Web site publishers / operators) that are limited to their own
sites or affiliated sites over which they exercise direct control.”68 The
NAI defines online behavioral advertising as “third-party online
behavioral advertising.”69 Thus, a website can still track and target ads
at a user who has opted out if the user is on the ad network's own
website.
Based on this analysis, we compiled a set of ten requirements to
check for this study. This list of requirements is shown in Table 1.
Table 1:
Summary of requirements checked in this study.
Requirement                                   Source     How Checked

Privacy notice requirements:
  Types of data collected                     DAA+NAI    NAI member website
  Usage of collected data                     DAA+NAI    NAI member website
  Presence of opt-out mechanism               DAA+NAI    NAI member website
  Adherence to DAA principles                 DAA        NAI member website
  Behavioral advertising activities           NAI        NAI member website
  How long data is retained                   NAI        NAI member website

Enhanced notice requirement:
  Advertisements contain enhanced notice      DAA        Quantcast top 100

Opt-out cookie requirement:
  Cookie present in DAA opt-out mechanism     DAA        DAA mechanism
  Cookie present in NAI opt-out mechanism     NAI        NAI mechanism
  Cookie duration is at least five years      DAA+NAI    Both mechanisms
The IAB, which is a member organization of the DAA, has its own
separate code of conduct.70 At the time of this writing, this document
contained the DAA principles document verbatim, as well as a section
on monitoring and enforcement, with the task of supervision given to
68. DAA, SELF-REGULATORY PRINCIPLES FOR ONLINE BEHAVIORAL ADVERTISING IMPLEMENTATION GUIDE: FREQUENTLY ASKED QUESTIONS 1 (2010), http://www.aboutads.info/resource/download/OBA%20Self-Reg%20Implementation%20Guide%20-%20Frequently%20Asked%20Questions.pdf.

69. 2008 NAI PRINCIPLES, supra note 4, at 4.

70. See IAB Member Code of Conduct, INTERACTIVE ADVER. BUREAU, http://www.iab.net/media/file/IAB_Code_of_Conduct_10282-2.pdf (last visited Jan. 13, 2012).
the Council of Better Business Bureaus.71 The IAB has also posted a
requirement that their members become compliant with this code by
August 29, 2011.72
IV. METHODOLOGY
In February and March 2011, we analyzed the sixty-six NAI
members listed on the NAI website as of February 2011 for
compliance with the requirements in Table 1. To see if NAI member
compliance had improved, we examined the seventy-four NAI
member websites as of July 2011 in July and August 2011. Because
the IAB had set a compliance deadline of August 29,
2011,73 we checked member websites again in the week following the
deadline to determine whether their privacy policies had been
changed since our previous check. We report only the results of the
final check for each member.
We examined member websites for the privacy notice
requirements by examining the front page of each member's website,
their privacy policy, and any relevant links from that policy. We
considered the requirement that members state what types of data
they collect for behavioral advertising satisfied if the privacy policy
provided a general description of what data is collected or an example
of what type of data is collected. We considered the requirement that a
member disclose how long it retains data for behavioral advertising
satisfied even if the member stated that it retains data indefinitely.
However, we did not consider the requirement satisfied if a member
disclosed only cookie or log file expiration information.
While NAI members are not required to provide their own
definitions of opting out, we noted whenever a member chose to do so.
We categorized these members as defining opting out to mean either
not showing targeted ads; collecting less data from opted-out users;
no longer tracking opted-out users; or collecting no data from opted-out users. The difference between no longer tracking users and
collecting no data from users at all is that in the former case, aggregate
data can still be collected. If a company used language such as “we no
longer collect data for the purpose of targeting ads,” we deemed that
71. Id.

72. IAB Member Code of Conduct, INTERACTIVE ADVER. BUREAU, http://www.iab.net/public_policy/codeofconduct (last visited Jan. 14, 2012).

73. Id.
company as simply not targeting ads for the purposes of our data
collection.
We examined the opt-out cookies from the DAA74 and NAI75 opt-out mechanisms in February and August 2011. We checked that both
mechanisms successfully placed opt-out cookies for each NAI
member, whether the two mechanisms provided the same cookies,
and whether the cookies had a duration of at least five years.
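These three checks can be expressed as a simple predicate over a member's cookie attributes. The sketch below is a hypothetical illustration of the checks described, not the study's actual tooling; the cookie values, field names, and helper function are all invented for the example:

```python
from datetime import datetime, timedelta

# The NAI and DAA require opt-out cookies to persist at least five years.
FIVE_YEARS = timedelta(days=5 * 365)

def opt_out_cookie_compliant(daa_cookie, nai_cookie, checked_on):
    """Apply the three checks described above to one member's cookies:
    both mechanisms set a cookie, the two cookies agree, and each
    cookie lasts at least five years from the date of the check."""
    if daa_cookie is None or nai_cookie is None:
        return False  # a mechanism failed to place an opt-out cookie
    if daa_cookie['value'] != nai_cookie['value']:
        return False  # the two mechanisms provided different cookies
    return all(c['expires'] - checked_on >= FIVE_YEARS
               for c in (daa_cookie, nai_cookie))

# Hypothetical example: both mechanisms set matching cookies that
# expire ten years after the check, so all three tests pass.
checked = datetime(2011, 2, 1)
daa = {'value': 'OPT_OUT', 'expires': datetime(2021, 2, 1)}
nai = {'value': 'OPT_OUT', 'expires': datetime(2021, 2, 1)}
print(opt_out_cookie_compliant(daa, nai, checked))  # True
```

A member would fail this predicate if either mechanism set no cookie, the cookies differed, or either cookie expired within five years.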
In mid-March 2011, we checked compliance with the enhanced
notice requirement of the DAA principles by inspecting
advertisements on websites listed on Quantcast's February 2011 top
100 U.S. websites.76 We repeated this in summer 2011, checking
compliance again on the same websites between July 26 and August
19, 2011. Then, because some websites might have become more
compliant on account of the IAB compliance deadline of August 29,
2011,77 we reexamined any website which had ads but was not fully
compliant during our previous check between August 31 and
September 2, 2011.
We navigated first to the root page for each of these websites and
then to the first three links (from top to bottom, left to right) pointing
to non-search pages in the same domain. To record which advertising
networks were associated with each page, we used the Firefox web
browser with the TACO add-on,78 which enables users to observe the
advertising networks on each website. In addition, we made note
of advertising networks that were explicitly mentioned in ad
disclosures.
The enhanced notice requirement of the DAA applies only to
behavioral advertisements.79 It is nearly impossible to determine by
visual inspection whether a given ad is behavioral, and TACO indicates
whether an ad network is present on a website but not whether a
specific ad is behavioral. In order to remove from consideration ads
74. Opt Out From Online Behavioral Advertising (Beta), ABOUTADS, http://www.aboutads.info/choices (last visited Jan. 14, 2012).

75. Opt Out of Behavioral Advertising, supra note 6, at 1.

76. Quantcast Site Rankings for United States, QUANTCAST, http://www.quantcast.com/top-sites-1 (last visited Jan. 14, 2012).

77. IAB Member Code of Conduct, supra note 72, at 1.

78. Block Online Tracking with TACO, ABINE, http://www.abine.com/preview/taco.php (last visited Jan. 14, 2012).
79. DAA SELF-REGULATORY PRINCIPLES, supra note 10, at 5.
that were unlikely to be behavioral, we excluded ads on websites
where TACO did not recognize an ad network. We also excluded ads
that the DAA requirements likely would not cover because they
appeared (based on our judgment) to be contextual ads, “based on the
content of the Web page being visited, a consumer's current visit to a
Web page, or a search query.”80 For example, we excluded ads for
Comcast products on comcast.com and ads for drugs on webmd.com.
It is not always clear whether a given ad or third-party cookie is
actually subject to self-regulatory requirements. Industry estimates
suggest that we can reasonably assume that about 80% of
advertisements we encounter are behavioral.81 Omar Tawakol, CEO of
BlueKai, stated recently that “eighty percent of online ads rely on
third-party cookies for some form of audience targeting.”82 Likewise,
the IAB stated “[i]n an IAB survey of ad agencies conducted earlier
this year, we found that 80% or more of digital advertising campaigns
were touched by behavioral targeting in some way.”83 On the other
hand, industry representatives distinguish between different types of
targeted advertising and Tawakol has stated that “the majority of
third-party cookies use [sic] for targeting actually isn't traditionally
called behavioral advertising.”84
At each website on the Quantcast top 100 list we did the following:
1. Created a new Firefox profile (this clears
cookies and the cache) and cleared Flash LSOs;
2. Copied and pasted the URL for the given
website from the Quantcast list;
80. Id. at 10–11.

81. Omar Tawakol, Forget Targeted Ads – I’d Rather Pay for Content, ONLINE MEDIA DAILY (Feb. 15, 2011, 12:22 PM), http://www.mediapost.com/publications/?fa=Articles.showArticle&art_aid=145077.

82. Id.

83. IAB Tells Congress Privacy Bills May Harm Business and Consumers, INTERACTIVE ADVER. BUREAU, http://www.iab.net/public_policy/1296039 (last visited Jan. 14, 2012).

84. Omar Tawakol, Remarks at FTC Roundtable Series 1 On: Exploring Privacy (Dec. 7, 2009) (transcript available at http://www.ftc.gov/bcp/workshops/privacyroundtables/PrivacyRoundtable_Dec2009_Transcipt.pdf).
3. Checked for the presence of non-contextual ads
(ads not related to the visited website or the
content of the current page);
4. If there were non-contextual ads, checked them
for compliance with the DAA principles and
recorded the tracking websites TACO listed for
the page;
5. If there was a privacy notice associated with
advertisements, followed the link and recorded
its data; and
6. Repeated steps three through five for the first
three non-search links on the page.
V. RESULTS
We present the results of this paper in four parts. In Part V.A, we
present the evidence of “enhanced notice” we found while visiting
Quantcast's top 100 websites. In Part V.B, we present our findings for
compliance with “privacy notice” requirements. We evaluate the DAA
and NAI opt-out mechanisms in Part V.C. Finally, in Part V.D we look
at how different NAI members define opting out. For all requirements
checked, we present rates of compliance and indicate which members
were not compliant.
A. ENHANCED NOTICE REQUIREMENT
We looked for non-contextual ads on 400 web pages across 100
websites. In our first examination, in spring 2011, we found 164
pages across fifty websites that contained non-contextual ads and
were monitored by NAI members. In summer 2011, we found 155 pages across
fifty-four websites. We focus on NAI members because they all
describe themselves as engaged in OBA and are required to follow
both DAA and NAI requirements. Using TACO to determine who
monitored each page, we found an average of 2.8 NAI members
identified per page in spring and 3.1 in summer.
The “enhanced notice” requirement of the DAA's Transparency
Principle requires that notice be placed on the same page where
behavioral ads appear.85 Using the methodology described in Part IV,
we searched for evidence of this notice on each of the pages. In the
spring, we found enhanced notice on 35% of these pages. In the
summer, we found compliance on 50% of pages. In both cases, we
only considered pages where we observed non-contextual ads that
were tracked by an NAI member. Because we expected that about 80%
of advertisements are behavioral, this represents a significant gap in
compliance with the enhanced notice requirement.
While we looked for any instance of enhanced notice on a
webpage, some pages did not provide this notice for every ad on the
page. Specifically, in the spring, we found forty-five pages that
provided enhanced notice near at least one advertisement, with
twenty-nine of these pages providing enhanced notice near every ad
on the page. In addition, twelve pages (on three websites) provided
notice with a single link at the bottom of the page. In the summer, we
observed fifty-four pages with enhanced notice near at least one
advertisement, of which thirty-one pages had enhanced notice near all
advertisements. Forty-six pages on fifteen websites provided notice
with a single link at the bottom of the page. We are unable to
distinguish between those ads that lacked required notice and those
that are not behavioral and thus not required to provide notice. Links
found at the bottom of websites do not list the advertising providers
for each ad on the page and are not very prominent since they may
require a large amount of scrolling to find.
Evidence of notice was also inconsistent across pages on a single
site. Aside from the sites that provided a single link at the bottom of
the page, seven websites displayed enhanced notice on all four pages
that we visited, with an additional fifteen websites providing notice on
at least one page in the spring. In the summer, aside from websites
that provided enhanced notice with a link at the bottom, eleven
websites provided enhanced notice on all pages we visited and twenty-eight provided enhanced notice on at least one. We also observed a
mixing of notice styles across pages on a single site. Table 2 lists the
type of enhanced notice found on each of the top websites where we
observed non-contextual ads.
85
DAA SELF-REGULATORY PRINCIPLES, supra note 10, at 13.
Table 2:
The top 100 websites for the U.S. audience as ranked by Quantcast86 and the level of compliance with the enhanced notice requirement that we observed. Only websites on which we observed non-contextual ads are listed. Note that mybloglog.com (55 in the top 100) is excluded from this table; it did not show non-contextual ads in the spring and, in the summer, it pointed to yahoo.com. Some websites appear to have made an effort toward compliance without being entirely compliant. A website marked "Trying" is attempting to make all of its ads compliant by placing a link at the bottom of the web page, but the page is not entirely compliant.

Rank  Website                 Compliance    Compliance    Enhanced Notice Observed
                              Spring 2011   Summer 2011
3     yahoo.com               Fully         Fully         Ad. Opt. Icon, Power I, Link at bottom
4     youtube.com             N/A           Fully         Advertising Option Icon
5     msn.com                 Fully         Fully         Ad. Opt. Icon, Power I, Link at bottom
12    aol.com                 No            Fully         Advertising Option Icon
14    answers.com             Some          No            Ad. Opt. Icon, Power I, Link at bottom
17    ask.com                 Some          Some          Advertising Option Icon
18    ehow.com                No            Fully         Ad. Opt. Icon, Link at bottom
20    about.com               No            Fully         Ad. Opt. Icon, Link at bottom
21    myspace.com             Some          Some          Power I, Ad. Opt. Icon
22    weather.com             No            Some          Advertising Option Icon
23    mapquest.com            Some          No            Advertising Option Icon
26    photobucket.com         No            No            -
27    reference.com           Some          Some          Power I, Ad. Opt. Icon
31    go.com                  N/A           Some          Link at bottom
32    huffingtonpost.com      No            No            -
34    break.com               No            Fully         Link at bottom
35    hulu.com                N/A           No            -
36    comcast.net             N/A           Fully         Link near ads
38    imdb.com                Some          None          Advertising Option Icon
39    monster.com             Some          Some          Advertising Option Icon
41    webmd.com               Some          Fully         Advertising Option Icon
42    pandora.com             Some          Some          Advertising Option Icon
45    whitepages.com          No            Fully         Link at bottom
46    associatedcontent.com   Fully         Fully         Ad. Opt. Icon, Power I, Link at bottom
47    cnn.com                 Fully         Fully         Ad. Opt. Icon, Link at bottom
48    flickr.com              Fully         N/A           Link near ads
50    manta.com               Fully         Fully         Advertising Option Icon
54    hubpages.com            N/A           Fully         Power I, Ad. Opt. Icon
56    filmannex.com           No            No            -
57    chinaontv.com           No            N/A           -
58    digg.com                No            Some          Advertising Option Icon
59    cnet.com                Fully         Fully         Link near ads
60    yellowpages.com         Fully         Fully         Power I, Link at bottom
62    washingtonpost.com      Fully         Fully         Ad. Opt. Icon, Link at bottom
64    nytimes.com             Trying        Fully         Ad. Opt. Icon, Link at bottom
66    tripadvisor.com         No            N/A           -
67    legacy.com              Some          Some          Advertising Option Icon
68    evite.com               No            Some          Advertising Option Icon
69    bbc.co.uk               No            Fully         Link at bottom
71    people.com              No            Fully         Link at bottom
72    chacha.com              No            Some          Advertising Option Icon
73    tmz.com                 No            Some          Advertising Option Icon
75    drudgereport.com        No            No            -
77    dailymotion.com         N/A           Some          Link near ads
79    accuweather.com         Trying        Fully         Ad. Opt. Icon, Power I, Link at bottom
80    suite101.com            Some          Some          Advertising Option Icon
81    mtv.com                 Fully         Fully         Link at bottom
83    yelp.com                No            Some          Advertising Option Icon
86    examiner.com            Some          No            Power I
87    wikia.com               Some          Fully         Advertising Option Icon
89    squidoo.com             Some          Some          Power I, Ad. Opt. Icon
90    merriam-webster.com     Some          Some          Advertising Option Icon
93    weatherbug.com          No            No            -
94    bizrate.com             No            No            -
96    wunderground.com        No            Some          Advertising Option Icon
99    twitpic.com             Some          Fully         Advertising Option Icon
100   candystand.com          No            Fully         Advertising Option Icon

86. Quantcast Site Rankings, supra note 76.
TACO identified trackers from twenty-three NAI members in the
spring and twenty-eight in the summer, on the pages we examined.
When TACO found NAI members tracking a page that had non-contextual ads, we expected to find at least one enhanced notice. In
the spring, we observed four members only on pages with enhanced
notice, sixteen on pages with and without enhanced notice, and three
only on pages without enhanced notice. In the summer, we found ten
members only on pages with enhanced notice, seven members on
pages with and without enhanced notice, and eleven members only on
pages without enhanced notice. Table 3 presents detailed results for
each NAI member.
Table 3:
Analysis of enhanced notice and opt-out cookies for NAI members.
Enhanced notice data was derived by examining advertisements on
the Quantcast top 100 U.S. websites gathered in spring (March) and
summer (late August to early September) 2011. Blank lines indicate no
instances of data collection. Opt-out mechanisms were tested in
February, March, and August of 2011. A “-” indicates the member was
not in the NAI during collection. Websites marked with * are only
listed as NAI members for August. Note that Batanga does not have its
own opt-out cookies.
Name | Pages where member collects data while non-contextual ad is shown (Spr. / Sum.) | Pages where enhanced notice was found (Spr. / Sum.) | Number cookies set by DAA opt-out (Feb. / Mar. / Aug.) | Number cookies set by NAI opt-out (Feb. / Mar. / Aug.) | Do its DAA and NAI cookies match? (Feb. / Mar. / Aug.)
1/1/1
1/1/1
[x+1]
24/7 Real Media
0/2
0/0
Yes / Yes / Yes
1 / 1/ 1
1/1/4
Yes / Yes / No
33Across
1/1/1
1/1/1
Yes / Yes / Yes
Adara Media
1/1/1
1/1/1
Yes / Yes / Yes
AdBrite
1/1/1
1/1/1
Yes / Yes / Yes
1/1/1
1/1/1
Yes / Yes / Yes
1/1/1
1/1/1
Yes / Yes / Yes
AdChemy
Adconion Media
Group
*AddThis
- / 19
-/8
-/-/1
-/-/1
- / - / Yes
Adify
1/3
0/0
1/1/1
1/1/1
Yes / Yes / No
4 / 26
3 / 13
0/0/1
1/1/1
No / No / Yes
1/1/2
1/1/2
Yes / Yes / Yes
AdMeld
Aggregate
Knowledge
0/5
0/5
Akamai
Technologies
AOL Advertising
AudienceScience
2/2/3
2/2/3
Yes / Yes / Yes
6/7/7
No / No / Yes
57 / 47
20 / 24
4/4/7
-/-/1
-/-/1
- / - / Yes
0/5
0/5
1/1/0
2/2/1
No / No / No
39 / 48
11 / 25
Yes / Yes / Yes
*Aperture
Atlas
1/1/1
1/1/1
Batanga
0/0/0
0/0/0
NA / NA / NA
Bizo
4/4/5
4/4/5
Yes / Yes / Yes
BlueKai
2/2/1
2/2/1
No / No / Yes
*BrightRoll
13 / 17
11 / 11
-/-/0
-/-/1
- / - / No
Brilig
1/0/1
1/1/1
Yes / No / Yes
Burst Media
1/1/1
1/1/1
Yes / Yes / Yes
1/1/1
1/1/1
Yes / Yes / Yes
1/1/1
1/1/1
Yes /Yes / Yes
-/-/0
-/-/4
- / - / No
Buysight
Casale Media
21 / 5
3/1
*Cognitive Match
Collective
1/1/1
1/1/1
Yes / Yes / Yes
Criteo
20 / 9
1/1/1
1/1/1
Yes / Yes / Yes
*Cross Pixel
Media
DataLogix
-/-/1
-/-/1
- / - / Yes
2/2/2
2/2/2
Yes/ Yes / Yes
DataXu
1/1/1
1/1/1
Yes / Yes / Yes
Datonics
1/1/1
1/1/1
Yes / Yes / Yes
1/0
9/8
0/0
Dedicated
Networks
Dotomi
0/1
0/1
0/1/1
1/1/1
No / Yes / Yes
6/0
3/0
2/2/1
2/2/1
Yes / Yes / Yes
Epic Marketplace
2/0
2/0
1/1/1
1/1/1
Yes / Yes / Yes
eXelate
2/2/2
2/2/2
Yes / Yes / No
FetchBack
1/1/1
1/1/1
Yes / Yes / Yes
Glam Media
Google
127 / 148
43 / 74
3 / 11
3/5
I-Behavior
interCLICK
Invite Media
Lotame
0/0
1/1/1
No / Yes / Yes
1/2/1
No / No / No
1/1/1
1/1/1
Yes / Yes / Yes
1/1/1
1/1/1
Yes / Yes / Yes
11 / 11 / 2
11 / 11 / 11
Yes / Yes / No
1/1/1
1/1/1
Yes / Yes / Yes
MAGNETIC
1/1/1
1/1/1
Yes / Yes / Yes
*MaxPoint
Interactive
*Media
Innovation
Group
Media6Degrees
-/-/0
-/-/1
- / - / No
- / - / No
MediaMath
4/1
0/1/1
2/1/6
-/1
-/1
-/-/0
-/-/3
7/3
1/3
1/1/1
1/1/1
Yes / No / No
1/1/1
1/1/1
Yes / Yes / Yes
*MediaMind
-/4
-/4
Mediaplex
Microsoft
4/4
4/4
-/-/0
-/-/1
- / - / No
1/1/1
1/1/1
Yes / Yes / Yes
4/4/1
4/4/4
Yes / Yes / No
Mindset Media
1/1/1
1/1/1
Yes / Yes / Yes
Netmining
1/1/1
1/1/0
Yes / Yes / No
OwnerIQ
0/0/1
1/1/1
No / No / Yes
*Pulse360
-/4
-/4
-/-/1
-/-/1
- / - / Yes
Quantcast
101 / 89
30 / 38
1/1/1
1/1/1
Yes / Yes / Yes
*RadiumOne
-/-/1
-/-/1
- / - / Yes
Red Aril
1/1/1
1/1/1
Yes / Yes / Yes
Rich Relevance
1/1/1
1/1/1
Yes / Yes / Yes
Rocket Fuel
1/1/1
1/1/1
Yes / Yes / Yes
3/3/3
3/3/3
Yes / Yes / Yes
1/1/1
1/1/1
Yes / Yes / Yes
SpecificMEDIA
5/0
5/0
TARGUSinfo
The Fox
Audience
Network
TidalTV
Tribal Fusion
6/5
3/3
3/3/5
3/3/3
Yes / Yes / No
1/1/1
1/1/1
Yes / Yes / Yes
13 / 12
4/2
0/0/1
1/1/1
No / No / Yes
-/-/0
-/-/1
- / - / No
1/1/1
1/1/1
Yes / Yes / Yes
1/1/1
1/1/1
Yes / Yes / Yes
2/2/2
2/2/2
Yes / Yes / Yes
*TruEffect
Tumri
Turn
Undertone
Networks
ValueClick Media
Vibrant In-Text
Solutions
Wall Street on
Demand
XGraph
Yahoo!
YuMe
0/5
0/5
0/1
0/1
2/2/1
2/2/2
Yes / Yes / No
2/4
1/2
1/1/1
1/1/1
Yes / Yes / No
1/1/2
1/1/2
Yes / Yes / No
3/0
1/0
1/1/1
1/1/1
Yes / Yes / Yes
28 / 21
8 / 13
2/2/3
2/2/5
No / No / No
1/1/1
1/1/1
Yes / Yes / Yes
In the summer, of the seventy-four instances of enhanced notice
that identified the ad provider, we noted seventeen NAI members. We
noted Google most often, with forty-one instances. The next most
common member was Yahoo!, with seven instances.
As shown in Table 2, we observed a considerable increase in
compliance between spring and summer, with many improvements
made right around the IAB’s August 29, 2011, deadline. Of the 100
websites we examined, forty-nine had at least one non-contextual ad
during both the spring and summer observations. Of these, twenty-five
(51%) retained the same status, while twenty (41%) improved. In the
summer, of the fifty-four websites that had ads, forty-four (82%) were
at least somewhat compliant with the enhanced notice requirement,
and twenty-six (44%) were fully compliant. Much of this new
compliance was achieved by putting ad notice links at the bottom of
pages; only three websites used this technique in our spring
observation, while seventeen did in the summer.
Notably, much of the enhanced notice appeared to be driven by
advertisers (i.e., the companies that purchase ads) rather than by NAI
members. For example, almost all of the Verizon ads we saw had
enhanced notice, even though they came from many different ad
providers, including AOL Advertising, Collective, Google, interCLICK,
and Traffic Marketplace. This suggests that some online advertising
buyers are interested in providing notice and choice to their
customers. This also means that a website using symbols on ads for
compliance might have a varying level of compliance as a function of
the ads being served. On the other hand, a website correctly using a
link at the bottom of the page will be consistently compliant, although
with a less prominent notice.
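The tradeoff above, per-ad symbols versus a page-bottom link, can be sketched as a small classifier. This is an illustrative sketch rather than the authors' tooling; the function name and the "fully"/"some"/"no" labels are our own, chosen to mirror the categories used in Table 2.

```python
# Hypothetical sketch: classify one observation of a website's enhanced-notice
# compliance. A site using a page-bottom link is compliant regardless of which
# ads are served; a site relying on per-ad icons is only as compliant as the
# particular ads shown during the observation.

def classify_compliance(has_bottom_link, ads_with_notice, total_ads):
    """Return 'fully', 'some', 'no', or 'n/a' for one observation."""
    if total_ads == 0:
        return "n/a"            # no non-contextual ads to give notice for
    if has_bottom_link or ads_with_notice == total_ads:
        return "fully"          # every ad is covered by some form of notice
    if ads_with_notice > 0:
        return "some"           # compliance varies with the ads being served
    return "no"
```

Under this model, the same icon-reliant site can score "fully" on one visit and "some" on the next, while a bottom-link site scores "fully" on every visit.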
B. PRIVACY NOTICE REQUIREMENT
We checked the privacy policies of the sixty-six NAI members for
compliance with the privacy notice requirements from Table 1 in
February 2011. Audience Science and Rocket Fuel were the only NAI
members that stated that they adhere to the DAA principles, and thus
the only members fully compliant with the privacy notice
requirements we checked. Excluding the requirement to mention
adherence to the DAA principles, fifty-five members (83%) were
compliant with the privacy notice requirements. We repeated our
examination after the August IAB deadline, as described in Part IV.
There are now seventy-four NAI members, eighteen of which state
adherence to the DAA principles. Of these eighteen, fourteen have
changed their privacy policies to indicate adherence and two are new
NAI members.
All NAI members mention their OBA activities, describe how
collected data is used, and provide an opt-out mechanism. It is worth
noting, however, that as of our summer evaluation, both eXelate and
Tumri provide dead opt-out links in their privacy policies. All except
Fox Audience Network stated what types of data they collect for
behavioral advertising during our spring examination. In the summer
examination, all members stated what types of data they collect for
behavioral advertising. Only fifty-six of sixty-six members (85%) in
the spring and fifty-two of seventy-four members (84%) in the
summer stated how long they retain their data collected for behavioral
advertising. Many members mention cookie or log file expiration, but
this does not address the data collected from observing cookies or
analyzing log files. Privacy notice requirement compliance for each
NAI member is presented in Table 4.
Table 4:
NAI Member Privacy notice compliance for February and August
2011. A “No” indicates that notice was not found in the member's
privacy policy. If the value is the same for February and August, it is
listed once. If there is a change between February and August, it is
listed as FebruaryValue-AugustValue. Websites marked with * are
only listed as NAI members for August.
Name | Types of data collected | How data will be used | Adherence to DAA Principles | How long data will be retained
[x+1] | Yes | Yes | No | No1
24/7 Real Media | Yes | Yes | No-Yes | Yes
33Across | Yes | Yes | No | Yes
Adara Media | Yes | Yes | No | Yes
AdBrite | Yes | Yes | No | Yes
AdChemy | Yes | Yes | No | Yes
Adconion Media Group | Yes | Yes | No | Yes
*AddThis | Yes | Yes | Yes | Yes
Adify | Yes | Yes | No | Yes
AdMeld | Yes | Yes | No | Yes
Aggregate Knowledge | Yes | Yes | No | Yes
Akamai Technologies | Yes | Yes | No | Yes
AOL Advertising | Yes | Yes | No-Yes | Yes
*Aperture | Yes | Yes | No | No1
Atlas | Yes | Yes | No | Yes
AudienceScience | Yes | Yes | Yes | Yes
Batanga | Yes | Yes | No | Yes
Bizo6 | Yes | Yes | No-Yes | Yes1
BlueKai | Yes | Yes | No-Yes | Yes
*BrightRoll | Yes | Yes | No | No
Brilig | Yes | Yes | No | Yes
Burst Media | Yes | Yes | No | Yes
Buysight | Yes | Yes | No | Yes
Casale Media | Yes | Yes | No-Yes | No2
*Cognitive Match | Yes | Yes | No | Yes
Collective | Yes | Yes | No-Yes | Yes
Criteo | Yes | Yes | No | Yes
*Cross Pixel Media | Yes | Yes | No | Yes
DataLogix | Yes | Yes | No-Yes | No3
Datonics | Yes | Yes | No | Yes
DataXu | Yes | Yes | No-Yes | Yes
Dedicated Networks | Yes | Yes | No | No1
Dotomi | Yes | Yes | No | Yes
Epic Marketplace | Yes | Yes | No | Yes
eXelate | Yes | Yes | No | Yes
FetchBack | Yes | Yes | No | Yes
Glam Media | Yes | Yes | No | Yes
Google | Yes | Yes | No | No
I-Behavior | Yes | Yes | No | Yes
interCLICK | Yes | Yes | No | Yes
Invite Media | Yes | Yes | No | Yes
Lotame | Yes | Yes | No | Yes
MAGNETIC | Yes | Yes | No | Yes
*MaxPoint Interactive | Yes | Yes | No | Yes
*Media Innovation Group | Yes | Yes | Yes | Yes
Media6Degrees | Yes | Yes | No | Yes
MediaMath | Yes | Yes | No-Yes | Yes
*MediaMind Technologies | Yes | Yes | No | Yes
Mediaplex | Yes | Yes | No | Yes
Microsoft | Yes | Yes | No | No1,4
Mindset Media | Yes | Yes | No | Yes
Netmining | Yes | Yes | No | Yes
OwnerIQ | Yes | Yes | No | No1
*Pulse360 | Yes | Yes | No | Yes
Quantcast | Yes | Yes | No-Yes | Yes
*RadiumOne | Yes | Yes | No | Yes
Red Aril | Yes | Yes | No | Yes
Richrelevance | Yes | Yes | No-Yes | Yes
Rocket Fuel | Yes | Yes | No-Yes | Yes
SpecificMEDIA | Yes | Yes | No | Yes
TARGUSinfo | Yes | Yes | No | No1
The Fox Audience Network | No5-Yes | Yes | No-Yes | Yes-No
TidalTV | Yes | Yes | No | Yes
Tribal Fusion | Yes | Yes | No | Yes
*TruEffect | Yes | Yes | No | No
Tumri | Yes | Yes | No | Yes
Turn | Yes | Yes | No | Yes
Undertone Networks | Yes | Yes | No-Yes | Yes
ValueClick Media | Yes | Yes | No | Yes
Vibrant In-Text Solutions | Yes | Yes | No | Yes
Wall Street on Demand | Yes | Yes | No | Yes
Xgraph | Yes | Yes | No | Yes
Yahoo! | Yes | Yes | No-Yes | No2
YuMe | Yes | Yes | No | Yes
1 Notice only mentions cookie expiration.
2 Notice only mentions log file retention.
3 Notice only mentions cookie expiration and log file retention.
4 Retention information found in a blog post, not in a prominent location.
5 Notice explains that "non-personally identifiable information obtained from cookies, web beacons, and/or similar monitoring technologies" is collected, but the types of data are not specified.
6 We were notified that Bizo's privacy policy became compliant with the data retention requirement on March 16, 2011.
C. CHOICE REQUIREMENT
We evaluated the NAI and DAA opt-out mechanisms in February
and March 2011, with twenty-six days between checks. We used
Microsoft Windows with Chrome 9.0.597, Internet Explorer
8.0.6001.19019, and Firefox 3.6.13 browsers; the March evaluation
used only Chrome 10.0.648. We also conducted the evaluation in
August 2011, using Chrome 13.0.782.107, Internet Explorer
8.0.6001.18702, and Firefox 5.0.1. When we tested it in February, the
DAA mechanism reported that it failed to set an opt-out cookie for one
company with each browser. In all three cases, one company failed,
but surprisingly it was not the same company each time. On Chrome
and Internet Explorer, the DAA mechanism was unable to set the opt-out cookie for AOL Advertising, the third most pervasive online
advertiser.87 On Firefox, the mechanism failed for Audience Science.
The NAI mechanism was able to set all opt-out cookies successfully.
In March, we retested the DAA mechanism and found the Invite
Media opt-out cookie could not be set on Chrome, but the mechanism
worked with the other browsers. In August, we successfully used
Chrome to opt out from NAI members using the DAA mechanism.
Firefox failed to opt out of TARGUSinfo and Internet Explorer failed
to opt out of Microsoft Advertising. On the NAI website, Chrome and
Firefox opted out successfully from all members. Internet Explorer
failed for Adconion, Batanga, BrightRoll, Cognitive Match, Collective,
Media Innovation Group, MediaMind, Microsoft (Atlas Technology),
TARGUSinfo, and TruEffect.
We also observed that the two opt-out mechanisms sometimes set
different cookies, and some opt-out cookies changed from February to
March to August. Even when both mechanisms set cookies for the
same advertiser, they did not always agree on the content of the cookie
or the number of cookies that were set. For example, the NAI
mechanism set four cookies for the domain adsonar.com, a serving
domain of AOL Advertising. These cookies had the following names:
TData, TData2, atdemo, and atdemo2. For the same domain, the DAA
mechanism set a single cookie with the name oo_flag. This did not
change between February and March. Since these mechanisms were
not consistent, users might have needed to use both mechanisms to
opt out. By August, however, the adsonar cookies set by the DAA and
NAI mechanisms matched. Summary results for each NAI member can be found in
Table 3.
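The adsonar.com example above, where the two mechanisms set entirely different cookie names for the same domain, can be checked mechanically. The following is a minimal sketch under our own assumptions about data shapes; the function name is not from the authors' code.

```python
# Illustrative sketch: given the opt-out cookies each mechanism set (mapping
# domain -> set of cookie names), report the domains where the two mechanisms
# disagree, along with which names are unique to each side.

def diff_mechanisms(nai, daa):
    """Map domain -> (NAI-only cookie names, DAA-only cookie names)."""
    diffs = {}
    for domain in set(nai) | set(daa):
        a, b = nai.get(domain, set()), daa.get(domain, set())
        if a != b:
            diffs[domain] = (a - b, b - a)
    return diffs

# The adsonar.com observation described in the text:
nai = {"adsonar.com": {"TData", "TData2", "atdemo", "atdemo2"}}
daa = {"adsonar.com": {"oo_flag"}}
```

Calling `diff_mechanisms(nai, daa)` flags adsonar.com as inconsistent, which is exactly the situation that forced users to run both opt-out mechanisms.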
We also checked opt-out cookies to be sure that they persist for
five years, in keeping with the DAA88 and NAI89 requirements. Since
multiple opt-out cookies can be set for a single domain, we considered
a domain to be compliant if at least one of the opt-out cookies had a
duration of at least five years. Three domains, adsonar.com,
advertising.com, and invitemedia.com, were not compliant when their
87 Press Release, comScore, comScore Media Metrix Ranks Top 50 U.S. Web Properties for October 2010 (Nov. 22, 2010), available at http://www.comscore.com/Press_Events/Press_Releases/2010/11/comScore_Media_Metrix_Ranks_Top_50_U.S._Web_Properties_for_October_2010.
88 What Are Opt Out Cookies and How Do They Remember Opt Out Preferences?, supra note 65.
89 Network Advertising Initiative FAQ, supra note 28.
cookies were set with the NAI mechanism in February. Only
invitemedia.com was non-compliant when using the DAA mechanism.
This shows another dimension of inconsistency between the two
mechanisms. In March, invitemedia.com became compliant with both
mechanisms, but adsonar.com and advertising.com were still not
compliant. In August, however, all cookies were compliant with the
five-year requirement.
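The persistence check described above reduces to a date comparison. This is a sketch under our own assumptions (a 365.25-day year and per-domain lists of expiry timestamps), not the authors' scripts.

```python
# Sketch: a domain is treated as compliant if at least one of its opt-out
# cookies lasts five or more years from the time it was set.

from datetime import datetime, timedelta

FIVE_YEARS = timedelta(days=5 * 365.25)  # assumption: average-year length

def domain_compliant(set_time, expiry_times):
    """True if any cookie for the domain persists at least five years."""
    return any(exp - set_time >= FIVE_YEARS for exp in expiry_times)
```

For example, a cookie set in February 2011 that expires in early 2013 would make a domain non-compliant on its own, matching the adsonar.com and advertising.com observations.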
The DAA and NAI opt-out mechanisms do not function in the
Apple Safari browser with default settings. Safari blocks third-party
cookies from being set; a cookie for a given domain can be set only
when a user navigates there. A user who navigates to an advertising
network website may subsequently be tracked by that network across
other websites and is unable to use either mechanism to opt out of this
tracking. To confirm, we navigated to various websites with Safari
5.0.3 and then attempted to use the NAI opt-out mechanism. Several
advertising networks had placed tracking cookies on our computer,
but we were unable to opt out from them using the mechanism.
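Safari's behavior can be modeled with a simple first-party rule. The following is a minimal model of a block-third-party-cookies policy, an assumption for illustration rather than Safari's actual implementation: a Set-Cookie is honored only when the cookie's domain matches the page the user navigated to.

```python
# Minimal model of a "block third-party cookies" policy: a cookie may be set
# only when its domain matches (or is a parent of) the first-party host.
# Under this policy, an opt-out page cannot set cookies for other ad domains.

def cookie_allowed(first_party_host, cookie_domain):
    fp = first_party_host.lower()
    cd = cookie_domain.lower().lstrip(".")
    return fp == cd or fp.endswith("." + cd)

# Visiting an opt-out page can set that site's own cookie, but a cookie
# destined for a different advertising domain is rejected:
cookie_allowed("www.networkadvertising.org", "networkadvertising.org")
cookie_allowed("www.networkadvertising.org", "doubleclick.net")
```

This is why the opt-out mechanisms fail under Safari's defaults: the opt-out cookies are themselves third-party cookies from the perspective of the opt-out page.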
D. DEFINITIONS OF OPTING OUT
The DAA requires that its members provide “users of Web sites at
which data is collected and used for Online Behavioral Advertising
purposes the ability to choose whether data is collected and used for
such purposes.”90 The DAA website says that opting out will not stop
data collection, but will stop delivery of ads based on preferences.91
Consistent with the DAA's definition, the NAI defines opting out as
follows:
Opt out of OBA means that a consumer is provided an
opportunity to exercise a choice to disallow OBA with
respect to a particular browser. If a consumer elects to
opt out of non-PII OBA, collection of non-PII data
regarding that consumer's browser may only continue
for non-OBA purposes, such as ad delivery &
reporting.92
90 DAA SELF-REGULATORY PRINCIPLES, supra note 10, at 33.
91 How Does the Consumer Opt Out Page Work?, ABOUTADS (Oct. 14, 2010), http://www.aboutads.info/opt-out.
92 2008 NAI PRINCIPLES, supra note 4, at 5.
632
I/S: A JOURNAL OF LAW AND POLICY
[Vol. 7:3
As of our summer check, sixty-nine of seventy-four NAI members
provide their own definitions of opt-out, sometimes going beyond the
NAI and DAA requirements.93 For example, AdBrite states that it will
delete prior data when a user opts out. Bizo indicates it will stop
collecting uniquely identifiable data. AddThis just states that it will no
longer target advertisements.
Of those sixty-nine members that define opting out, forty-two
indicate collecting less or no data or no longer tracking the user, and
thirty-five of those forty-two indicate collecting no data or not
tracking the user. The other twenty-seven members that define opting
out indicate only that opting out would entail not seeing targeted ads,
which is consistent with the minimum requirements of the DAA and
NAI. Three of these members explicitly state that information
collection would continue. These findings are detailed in Table 5.
93 The members that did not define opting out are Aggregate Knowledge, Atlas, Dotomi, MediaMath, and The Fox Audience Network.
Table 5:
Categorized definitions of opting out based on NAI members' privacy
policies. Only members that defined opting out are included in this
table.
NAI Member | Stated Policy
[x+1] | N/A - Stop tracking4
24/7 Real Media | Collect no data1 - Don't target ads
33Across | Collect no data
Adara Media | Don't target ads
AdBrite | Collect less data3
AdChemy | Collect no data
Adconion Media Group | Don't target ads
*AddThis | Collect no data
Adify | Stop tracking
AdMeld | Collect no data
Akamai | Don't target ads5
AOL Advertising | Don't target ads
*Aperture | Collect no data4
AudienceScience | Collect no data
Batanga | Collect no data
Bizo | Stop tracking
BlueKai | Collect less data
*BrightRoll | Don't target ads
Brilig | Collect no data
Burst Media | N/A - Stop tracking
Buysight | Collect no data
Casale Media | Stop tracking
*Cognitive Match | Collect no data
Collective | Collect no data
Criteo | Don't target ads
*Cross Pixel Media | Collect no data
DataLogix | Don't target ads - Collect no data
DataXu | N/A - Don't target ads
Datonics | Collect no data2
Dedicated Networks | Collect no data
Epic Marketplace | Don't target ads
eXelate | Don't target ads - Collect no data
FetchBack | Don't target ads
Glam Media | Stop tracking1 - Don't target ads
Google | Collect less data
I-Behavior | Don't target ads
interCLICK | Stop tracking
Invite Media | Don't target ads4
Lotame | Don't target ads
MAGNETIC | Don't target ads - Collect less data1
*MaxPoint Interactive | Don't target ads
*Media Innovation Group | Collect no data
Media6Degrees | Don't target ads
*MediaMind Technologies | Stop tracking
Mediaplex | Stop tracking4
Microsoft | Don't target ads5
Mindset Media | Stop tracking
Netmining | Collect no data - Don't target ads
OwnerIQ | Collect no data
*Pulse360 | Don't target ads
Quantcast | Don't target ads - Collect no data
*RadiumOne | Collect no data3
Red Aril | Collect no data2 - Don't target ads
Richrelevance | Don't target ads
Rocket Fuel | Stop tracking
SpecificMEDIA | Don't target ads
TARGUSinfo | Don't target ads
The Fox Audience Network | Don't target ads5 - N/A
TidalTV | Don't target ads
Tribal Fusion | Stop tracking
*TruEffect | Collect no data
Tumri | Don't target ads - Collect less data
Turn | Don't target ads - Collect less data
Undertone Networks | Collect less data1
ValueClick Media | Don't target ads
Vibrant In-Text Solutions | Collect no data
Wall Street on Demand | Stop tracking
XGraph | N/A - Collect no data
Yahoo! | N/A - Don't target ads
YuMe | Don't target ads
1 Opt-out definition mentions cookies only; we assume other tracking technologies are not used.
2 The opt-out cookie is defined as indicating a preference; we assume this preference will be
respected.
3 Prior-held data will be deleted.
4 The opt-out cookie will block the placement of other cookies from this advertiser.
5 Explicitly stated that data collection will continue.
E. SPECIFIC PRIVACY POLICY NOTES
There are several cases in which an NAI member states in its
privacy policy that a previous opt-out effort by a user may have
become invalid. According to the privacy policy of Akamai, which
purchased aCerno, “[d]ue to technical issues, if you opted out of
targeted advertising by acerno, [sic] your choice may not have been
properly saved and recognized.”94 Likewise, according to the
94 Privacy Statement, AKAMAI.COM (Aug. 26, 2011), http://www.akamai.com/html/policies/privacy_statement.html#policy_opt_out.
Dedicated Networks privacy policy, “[a]s a result, if you opted out of
targeted advertising by Dedicated Networks prior to January 2011,
your choice may no longer be fully effective.”95 According to the
privacy policy of Undertone, “[i]f you opted out of targeted advertising
between August 2008 and June 2009, you should opt out again to
ensure that your choice is saved and recognized by our ad server.”96
Further, the privacy policy of [x+1] states “[a]s a result, if you opted
out of targeted advertising by [x+1] prior to that time (either through
[x+1] or through our opt out listing on the NAI page), your choice is
no longer effective.”97 In each of these instances, a user who had opted
out of online behavioral advertising from one of these companies
would have that opt out invalidated even before the opt-out cookie
expired.
Further, while NAI members are not required to provide
definitions of opting out, we found some instances of ambiguity
among those that did. The privacy policies of 24/7 Real Media, Glam
Media, MAGNETIC, and Undertone Networks only mention opting
out as pertaining to cookies; we assume that they are not using
another mechanism for tracking users.
We observed considerable flux and instability among privacy
policies. Perhaps because of the August 2011 IAB compliance
deadline,98 we observed twenty-two NAI members changing their
privacy policies in August 2011, including ten that changed their
policies in the last week before the deadline. At least twenty-eight NAI
members self-reported changing their privacy policies between
January 1, 2011, and July 31, 2011; nine of these twenty-eight changed
again in August. I-Behavior, interCLICK, Invite Media, Lotame, and
Pulse360 explicitly indicate that their privacy policies may change and
ask their readers to return for updates.
95 Dedicated Media Service Privacy Policy, DEDICATED NETWORKS (Feb. 23, 2010), http://www.dedicatednetworks.com/footer_privacy.asp.
96 Privacy Policy for Undertone Advertising Network and Corporate Sites, UNDERTONE (Oct. 21, 2011), http://www.undertone.com/privacy.
97 Privacy Policy for [X+1], X+1 (2011), http://www.xplusone.com/privacy.php.
98 IAB Member Code of Conduct, supra note 72.
VI. DISCUSSION
A. LIMITATIONS
This paper checks NAI member compliance with the DAA and NAI
notice and choice principles through inspection of websites,
advertisements, and cookies. However, our approach has some
limitations.
We may have overlooked some notices that appear outside a site's
privacy policy. Neither the DAA nor the NAI explicitly requires that
notices be placed in member privacy policies. However, the DAA
principles indicate that notice should be “clear, meaningful, and
prominent.”99 The NAI principles state that notice is to be given
“clearly and conspicuously.”100 Therefore, when we were unable to find
a required notice in a member's privacy policy or linked websites, the
site would still be in compliance if the notice is present on some other
prominent page of the website. Nonetheless, a website that provides a
notice but does not link to it from its privacy policy is arguably not
communicating clearly and conspicuously with its users.
We were unable to make a reliable determination about which
observed advertisements were behavioral and which third-party
cookies were associated with OBA. We narrowed the scope of our
investigation by focusing only on third-party cookies placed by NAI
member companies and by eliminating ads that we judged to be
contextual. However, it is likely that some of the ads and cookies we
eliminated are actually subject to OBA requirements. On the other
hand, some of the ads and cookies we included may not actually meet
the definition of OBA. Nonetheless, we believe our dataset provides a
good estimate of enhanced notice compliance on the most popular
websites, and we provide detailed information about our methodology
and findings to enable readers to determine the basis for our
compliance estimates.
B. PUBLIC POLICY IMPLICATIONS
The results of our study raise a number of public policy concerns.
The DAA published its principles in July 2009, over two years before
our final round of data collection. The DAA officially launched its self-
99 DAA SELF-REGULATORY PRINCIPLES, supra note 10, at 12.
100 2008 NAI PRINCIPLES, supra note 4, at 7.
regulatory program on October 4, 2010.101 Although we have observed
an increasing rate of compliance in the weeks leading up to the IAB
deadline, overall compliance has been slow. We observe infrequent
compliance with the “enhanced notice” requirements, and only
eighteen of the seventy-four NAI members indicate DAA membership
despite being required to do so.
Beyond shortcomings in notice requirements, the DAA and NAI
opt-out mechanisms contain errors. Opt-out cookies fail to be set for
some members. The opt-out cookies for others differ between the two
mechanisms and some have durations shorter than the required five
years.
Even if the opt-out mechanisms did work flawlessly, they do not
adapt to changing membership. NAI membership jumped from
thirty-four in January 2010102 to seventy-eight in October 2011.103 A user
who opted out of all NAI members six months ago would not be
opted out of a dozen members today. Further, we know of at least
three NAI members that were acquired and ceased to operate
independently during our study: aCerno, Dapper, and
Tacoda. This raises further questions about whether a user who has
opted out of a particular company needs to opt out again when such
an acquisition occurs.
Given the focus on third-party tracking, users are unable to opt
out of tracking by websites they are currently visiting (e.g., companies
that offer both first-party content and third-party behavioral
advertising services). This may come as a surprise to consumers who
think they have opted out of tracking by a particular company but do
not realize the opt out applies only when that company is acting as a
third-party behavioral advertising company. The DAA and NAI give
users no way to avoid being tracked on the websites of NAI members.
The narrow definition of OBA proposed by the FTC and adopted by
the DAA and NAI may be insufficient for addressing consumer privacy
concerns.
We also observe that two NAI members impose limitations and
demands on any user who visits their websites, even though visiting is
necessary in order to read their privacy policies. Undertone's privacy policy states
that “by using the Undertone Site Network, this website or sharing
information with us, you give your agreement to this Privacy
101 DAVIS & GILBERT LLP, supra note 11.
102 NAI 2010 ANNUAL COMPLIANCE REPORT, supra note 29, at 1.
103 NAI Full Compliance Members, NETWORK ADVER. INITIATIVE, http://www.networkadvertising.org/participating (last visited Jan. 14, 2012).
Policy.”104 Undertone's privacy policy also stipulates limitations of
liability.105 ValueClick Media's privacy policy states, “Please read this
Privacy Policy carefully since by visiting this site and sharing
information with ValueClick you agree to be bound by the terms and
conditions of this Privacy Policy unless you offer different terms which
are accepted by ValueClick, Inc. in writing.”106 ValueClick imposes
requirements on its users, including how privacy disputes will be
handled. In both of these cases, a user attempting to learn about a
company's behavioral advertising practices by reading the notices that
the DAA and NAI require will be stuck with limitations on his or her
rights.
It is worth highlighting the flurry of compliance improvements we
observed in late August, which we believe are in response to the IAB’s
compliance deadline. The IAB requirements, found in the IAB Code of
Conduct, mirror those of the DAA, with an added provision for
enforcement. An IAB member found not to be in compliance with the
Code of Conduct may be penalized by having its IAB membership
suspended.107 Therefore, in addition to the possible threat of FTC
enforcement, the concrete deadlines and enforcement procedures of
the IAB Code of Conduct appear to have spurred compliance.
Finally, we have seen that a number of NAI members provide their
own definitions of opting out, going beyond the minimum bar set by
the NAI requirements. This is positive from a privacy perspective, but
a common vocabulary for these opt-out variations could be useful for
helping consumers understand what will happen when they opt out.
VII. ACKNOWLEDGEMENTS
This research was funded in part by NSF IGERT grant
DGE0903659, by CyLab at Carnegie Mellon under grants
DAAD19-02-1-0389 and W911NF-09-1-0273 from the Army Research Office,
and by a grant from The Privacy Projects.
104 Privacy Policy for Undertone Advertising Network and Corporate Sites, supra note 96.
105 Id.
106 Privacy Policy, VALUECLICK (Sept. 9, 2010), http://www.valueclick.com/privacy.
107 IAB Member Code of Conduct, supra note 70, at 6.
I/S: A JOURNAL OF LAW AND POLICY FOR THE INFORMATION SOCIETY
A Survey of the Use of Adobe Flash Local Shared
Objects to Respawn HTTP Cookies†
ALEECIA M. MCDONALD & LORRIE FAITH CRANOR*
Abstract. Website developers use Adobe’s Flash Player
product to store information on users’ disks with Local
Shared Objects (LSOs). LSOs store state information and
user identifiers, with similar purposes to HTTP cookies.
Soltani et al. documented “respawning,” where users
deleted their HTTP cookies only to have the HTTP cookies
recreated based on LSO data. One year later, we visited
popular websites plus 500 randomly-selected websites to
determine if respawning still occurs. We found no instances
of respawning in a randomly-selected group of 500
websites. We found two instances of respawning in the 100
most popular websites. While our methods differ from those of the
Soltani team, our results suggest respawning may be waning.
waning. Similar to the Soltani study, we found LSOs with
unique identifiers. While we can use contextual information
like variable names to guess what a given unique identifier
is for, our study methods cannot conclusively determine
how companies use unique identifiers. We cannot
definitively quantify how many, if any, sites are using
unique identifiers in LSOs for any purpose that might have
† Support for this project was provided in part by Adobe Systems, Inc. Thanks to Adobe and the Center for Democracy & Technology for their assistance in developing the experimental protocol. Thanks to Justin Brookman, D. Reed Freeman, Kris Larsen, Deneb Meketa, Erica Newland, Gregory Norcie, MeMe Rasmussen, Ari Schwartz, and Peleus Uhley for providing assistance and feedback.
* Aleecia M. McDonald is a Resident Fellow at Stanford University. Lorrie Faith Cranor is an Associate Professor of Computer Science and of Engineering and Public Policy at Carnegie Mellon University, where she is director of the CyLab Usable Privacy and Security Laboratory (CUPS).
privacy implications. Unique identifiers may, or may not,
be keys into back-end databases to perform tracking. Or,
unique identifiers could simply identify a specific music clip.
Without visibility into back-end databases, it is difficult to
determine how companies use identifiers. Even assuming
all unique identifiers in LSOs track users, the percentage of
such sites is low—9% of the top 100, and 3.4% of the
randomly-selected 500 sites we studied. However, due to
the popularity of some of these sites, many people could be
affected. We believe further study is needed to determine if
these sites are using LSOs to evade users’ privacy choices.
We conclude our paper with policy options and a discussion
of implications for industry self-regulation of Internet
privacy.
I. INTRODUCTION
Adobe sells several products related to Flash technologies. Some of
Adobe's customers were sued for using Flash to store persistent data
on Internet users' hard drives, allegedly without users' knowledge and
even after users had deleted HTTP cookies, as a way to bypass
users' privacy choices.1 These lawsuits followed research performed in
2009 by Soltani et al. that found companies using Flash to engage in
questionable practices.2 In this paper we measure the prevalence of
“respawning” deleted HTTP cookies, as well as examine the potential
for user data to persist beyond deleting HTTP cookies without
respawning. We review related work in Part II and describe our
methods in Part III. We present our findings in Part IV. We
discuss policy implications in Part V and policy options in Part
VI. Lastly, we conclude in Part VII. Appendix A contains the list
of 600 websites we visited in our research. Appendix B contains
examples of the LSO content we collected and illustrates how we
classified LSOs into different categories.
1. Tanzina Vega, Code That Tracks Users' Browsing Prompts Lawsuits, N.Y. TIMES, Sept. 21, 2010, at B3.
2. Ashkan Soltani et al., Flash Cookies and Privacy, SUMMER UNDERGRADUATE PROGRAM IN ENGINEERING RESEARCH AT BERKELEY (SUPERB) 2009, Aug. 10, 2009, available at http://ssrn.com/abstract=1446862.
II. BACKGROUND AND RELATED WORK
Flash is used to create multimedia applications, including
interactive content and animations embedded into web pages. Flash
Player is not natively built into web browsers, but rather is a plugin
that works across multiple operating systems and all of the most
popular web browsers, allowing developers to easily create cross-platform programs. An estimated 99% of desktop web browsers have
the free Flash Player plugin enabled.3
Early versions of Flash did not allow for direct access to HTTP
cookies.4 Although programs written to run in Flash Player could not
read and write HTTP cookies directly, Flash programmers could use
an additional programming language, such as JavaScript, to access
HTTP cookies.5 However, using a second language to save and read
data was cumbersome and frustrating to Flash developers.6 As
applications written to run in Flash Player evolved beyond playing
videos and became more interactive, there were additional types of
data to save. This is a familiar pattern; web browsers also initially had
no way to save state, which was fine when the web was static text and
images, but caused limitations as web applications became more
complex. Netscape engineers introduced HTTP cookies as a way to
support online shopping carts in 1994.7 Flash MX was released in
2002, prior to Adobe's purchase of the company that owned Flash. The
Flash MX release introduced an analog to HTTP cookies. Adobe refers
to this storage as Flash Player Local Shared Objects (“Flash Player
LSOs” or just “LSOs”). Flash Player LSOs are commonly referred to as
3. Statistics: PC Penetration, ADOBE SYSTEMS, http://www.adobe.com/products/player_census/flashplayer (last visited Jan. 21, 2012).
4. How do I Access Cookies Within Flash?, STACKOVERFLOW (Sept. 20, 2008), http://stackoverflow.com/questions/109580/how-do-i-access-cookies-within-flash.
5. Dan Carr, Integrating Flash Content with the HTML Environment, ADOBE (Apr. 7, 2008), https://www.adobe.com/devnet/dreamweaver/articles/integrating_flash_html.html.
6. Local Shared Objects-"The Flash Cookies," ELECTRONIC PRIVACY INFORMATION CENTER, http://epic.org/privacy/cookies/flash.html (last updated July 21, 2005).
7. David M. Kristol, HTTP Cookies: Standards, Privacy, and Politics, 1 ACM TRANS. INTERNET TECHNOL. 151, 158 (2001).
“Flash cookies.” Other Internet technologies use local storage for
similar purposes (e.g. Silverlight, Java, and HTML5). Although Flash
developers could use HTTP cookies to save local data, there are
several reasons why Flash developers generally prefer using LSOs,
including:
• Flash programmers find that LSOs are much easier to work with and write code for than HTTP cookies.

• While JavaScript is built into all major browsers, a small percentage of users choose to disable JavaScript. This means that any applications written to run in Flash Player that rely upon JavaScript to access HTTP cookies run the risk that the application may break for some users.

• LSOs hold more data and support more complex data types than HTTP cookies, giving developers more flexibility and control over what can be stored locally.
See Table 1 for a summary of some of the differences between HTTP
cookies and LSOs.
Aside from technical differences, software engineers often use
HTTP cookies and LSOs to perform the same functions. However,
users interact with HTTP cookies and LSOs in different ways. Most
users do not fully understand what HTTP cookies are but have at least
heard of them; in contrast, few users have heard of LSOs.8 Users have
access to HTTP cookie management through browsers' user interfaces,
but until recently could not manage LSOs via web browsers’ native
user interfaces. LSO management required either visiting the
Macromedia website to set LSOs to zero kilobytes of storage, which
functionally disables LSO storage, or interacting directly through the
Flash Player context menu. Web browsers’ “private” browsing modes
retained LSOs until early 2010, when Adobe added support for
8. Aleecia M. McDonald & Lorrie Faith Cranor, Americans' Attitudes About Internet Behavioral Advertising Practices, WPES '10: PROCEEDINGS OF THE 9TH ANN. ACM WORKSHOP ON PRIVACY IN THE ELECTRONIC SOC'Y, at 63, 70 (Oct. 4, 2010), available at http://dl.acm.org/citation.cfm?id=1866929.
InPrivate browsing.9 Until recently, most Privacy Enhancing
Technologies (PETs) designed to help users manage their HTTP
cookies did not address LSO management. So long as persistent LSOs
stored innocuous and anonymous data such as game high scores,
whether the data was stored in HTTP cookies or LSOs was primarily a
technical implementation detail. LSO use, however, has evolved into
areas with privacy implications.
Table 1:
Technical differences between HTTP cookies and LSOs

Where can the data be read?
  HTTP cookies: just from the browser that set it
  LSOs: from all browsers on the computer

How long does the data last?
  HTTP cookies: default is until the browser closes, but in practice commonly set to expire after eighteen months or many years
  LSOs: permanent unless deleted

How much data does it hold?
  HTTP cookies: maximum of four KB
  LSOs: default of 100 KB, but users can choose higher or lower values

Which data types are supported?
  HTTP cookies: simple name/value pairs
  LSOs: simple and complex data types
Advertisers use persistent identifiers in HTTP cookies to help
them understand a given customer’s browsing history. This data is
used to build interest profiles for people in interest groups or
demographic categories. Advertisers charge premiums to display ads
just to people in specific interest profiles. Advertisers also use HTTP
cookies to contribute to analytics data about which customers have
viewed ads, clicked on ads, and purchased from ads. Analytics data
helps advertisers test different approaches to determine if an ad is
effective with a particular audience. More importantly, without at
least basic analytics, advertising networks would not know how much
to charge for ads. Meanwhile many users prefer not to be tracked, and
express that preference by deleting their HTTP cookies.10 Deleting
cookies can cause tremendous problems for analytics data based on
9. Andy Zeigler, Adobe Flash Now Supports InPrivate Browsing, IEBLOG (Feb. 11, 2010, 4:59 PM), http://blogs.msdn.com/b/ie/archive/2010/02/11/adobe-flash-now-supports-inprivate-browsing.aspx.

10. McDonald & Cranor, supra note 8, at 74.
HTTP cookies, where even a small error rate can result in incorrectly
billing thousands of dollars in a single advertising campaign.11
Advertisers discovered LSOs addressed their data quality
problems.12 LSOs remained untouched even by users who deleted
HTTP cookies. Many users did not know about LSOs, which do not
expire, and they were often difficult for users to delete (e.g. under
Windows, LSOs write to hidden system folders, away from most users’
notice or technical ability to delete). LSOs are cross-browser; thus
they reduce advertisers’ problem with HTTP cookie counts because a
single user using two browsers (for example, Internet Explorer and
Firefox) is not miscounted as two different users.
Rather than write new code to work with LSOs, in some cases
advertisers simply used LSOs to identify a user and then re-create
(“respawn”) that user’s previously deleted HTTP cookie data. After re-creating HTTP cookies, advertisers could continue to use their existing
code base unchanged, with no need to re-engineer their products. For
example, starting in 2005 United Virtualities sold a product that used
LSOs to “restore” deleted HTTP cookies.13 United Virtualities
explained that this was “to help consumers by preventing them from
deleting cookies that help website operators deliver better services.”14
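The respawning mechanism described above is, at its core, a few lines of server-side logic. The following Python sketch illustrates the pattern only; it is our illustration, not any vendor's actual code, and all function and variable names are hypothetical:

```python
import uuid

def new_identifier():
    # Hypothetical: mint a fresh 32-character tracking identifier
    return uuid.uuid4().hex

def cookie_id_for_request(http_cookie_id, lso_id):
    """Sketch of respawning logic. If the browser presents no HTTP
    cookie identifier but the Flash LSO still holds one, re-issue the
    old identifier in a fresh HTTP cookie ("respawning")."""
    if http_cookie_id is not None:
        return http_cookie_id   # cookie survived; nothing to do
    if lso_id is not None:
        return lso_id           # cookie was deleted; restore it from the LSO
    return new_identifier()     # genuinely new visitor
```

Because the restored identifier matches the deleted one, all downstream analytics and ad-serving code continues to work unchanged, which is precisely why the technique appealed to advertisers.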
Using LSOs to respawn HTTP cookies sounds like the “best practices”
description put forward in a W3C document on mobile web use:
Cookies may play an essential role in application
design. However since they may be lost, applications
should be prepared to recover the cookie-based
information when necessary. If possible, the recovery
11. Louise Story, How Many Site Hits? Depends Who's Counting, N.Y. TIMES, Oct. 22, 2007, at B1.

12. Paul Boutin, Flash Cookies Get Deleted, Skew Audience Stats as Much as 25 Percent, VENTUREBEAT (Apr. 14, 2010), http://venturebeat.com/2010/04/14/flash-cookies-get-deleted-skew-audience-stats-as-much-as-25-percent.
13. Antone Gonsalves, Company Bypasses Cookie-Deleting Consumers, INFORMATIONWEEK (Mar. 31, 2005), http://www.informationweek.com/news/160400801.

14. Id.
should use automated means, so the user does not have
to re-enter information.15
Ultimately, the suggestion to recover cookies was not part of the
final W3C recommendation.16 Using LSOs to respawn HTTP data was
a favorable engineering solution: a technical response to a
technical problem. However, problems collecting analytics data are
not just technical glitches; users intentionally delete HTTP cookies as
an expression of their desire for privacy. Users had no visible
indication that LSOs existed or that HTTP cookies respawned. Users
reacted with surprise when they learned that HTTP cookies they had
deleted were not actually gone.17
Furthermore, LSOs can be used to track specific computers
without respawning HTTP cookies. HTTP cookies can contain a
unique identifier so websites can tell when a specific computer has
visited the site again. LSOs can be used the same way. Even when
users delete their HTTP cookies to protect their privacy, unless they
also know how to manage LSOs, they may still be identified both to
first- and third-party websites via unique identifiers in LSOs. From a
user’s perspective, this is functionally equivalent to respawning;
despite deleting HTTP cookies, they are still being tracked. However,
not all unique identifiers are used to track specific computers. For
example, each song or video clip on a website could be assigned a
unique identifier.
LSOs became a topic of interest in 2009 with the publication of
Soltani et al.’s paper investigating the use of LSOs for respawning
deleted HTTP cookies and storing data.18 They found at least four
instances of respawning, and over half of the sites they studied used
LSOs to store information about users. Several things changed after
the Soltani study:
15. Bryan Sullivan, Mobile Web Best Practices 2.0: Basic Guidelines, W3C Editor's Draft, W3C (Mar. 27, 2008), http://www.w3.org/2005/MWI/BPWG/Group/Drafts/BestPractices-2.0/ED-mobile-bp2-20080327#bp-cookies-recover.

16. Adam Conners & Bryan Sullivan, Mobile Web Application Best Practices W3C Recommendation, W3C (Dec. 14, 2010), http://www.w3.org/TR/mwabp.
17. Michael Kassner, Flash Cookies: What's New With Online Privacy, TECH REPUBLIC: BLOGS (Sep. 8, 2009, 3:38 AM), http://www.techrepublic.com/blog/security/flash-cookies-whats-new-with-online-privacy/2299.

18. Soltani et al., supra note 2.
• Public awareness increased. Media attention popularized the study findings19 and privacy professionals called attention to LSOs.20 Research continues to find companies misusing LSOs, including results in a new study from Soltani et al.21
• Corporate practices changed. Quantcast announced they would no longer respawn HTTP cookies.22 The Network Advertising Initiative (NAI), an industry group active in self-regulation efforts, published guidelines that their member companies must not respawn HTTP cookies. Further, the NAI bars their members from using local storage23 for behavioral advertising at all.24
19. Ryan Singel, You Deleted Your Cookies? Think Again, WIRED (Aug. 10, 2009), http://www.wired.com/epicenter/tag/cookies; see also John Leyden, Sites Pulling Sneaky Flash Cookie-Snoop, THE REGISTER (Aug. 19, 2009), http://www.theregister.co.uk/2009/08/19/flash_cookies.

20. See Bruce Schneier, Flash Cookies, SCHNEIER ON SECURITY (Aug. 17, 2009, 6:36 AM), http://www.schneier.com/blog/archives/2009/08/flash_cookies.html; see also Seth Schoen, New Cookie Technologies: Harder to See and Remove, Widely Used to Track You, ELECTRONIC FRONTIER FOUNDATION (Sept. 14, 2009), http://www.eff.org/deeplinks/2009/09/new-cookie-technologies-harder-see-and-remove-wide.
21. Mika D. Ayenson et al., Flash Cookies and Privacy II: Now with HTML5 and ETag Respawning, SUMMER UNDERGRADUATE PROGRAM IN ENGINEERING RESEARCH AT BERKELEY (SUPERB) 2009 (July 29, 2011), available at http://ssrn.com/abstract=1898390.

22. Ryan Singel, Flash Cookie Researchers Spark Quantcast Change, WIRED (Aug. 12, 2009), http://www.wired.com/epicenter/2009/08/flash-cookie-researchers-spark-quantcast-change.
23. E.g., Flash LSOs, Internet Explorer Browser Helper Objects (BHOs), Microsoft Silverlight objects, etc.

24. FAQ, NETWORK ADVER. INITIATIVE, http://www.networkadvertising.org/managing/faqs.asp#question_19 (last visited Jan. 21, 2012).
• Tools improved. Some PETs added LSO management.25 Adobe added support for "private" web browsing26 and worked with browser vendors to integrate LSO management into browser user interfaces.27 Adobe also dramatically improved user management of LSOs.28
• Regulators took an interest. The FTC requested more information from Adobe, and Adobe formally commented to the FTC characterizing respawning as a misuse of LSOs.29
• In 2010, the Wall Street Journal ran a new series of articles about Internet privacy. The series included findings from a second Soltani-led study of fifty websites' use of LSOs and tracking technologies, using data collected at the end of 2009.30 Subsequent to the new media attention, several class action lawsuits alleging
25. See CCleaner: Optimization and Cleaning, PIRIFORM, http://www.piriform.com/ccleaner/features (last visited Nov. 4, 2011); see also Better Privacy, MOZILLA ADD-ONS, https://addons.mozilla.org/en-US/firefox/addon/6623 (last visited Nov. 4, 2011).

26. Zeigler, supra note 9.
27. Emily Huang, On Improving Privacy: Managing Local Storage in Flash Player, ADOBE FLASH PLATFORM BLOG (Jan. 11, 2011, 12:09 PM), http://blogs.adobe.com/flashplatform/2011/01/on-improving-privacy-managing-local-storage-in-flash-player.html.

28. Manage, Disable Local Shared Objects, ADOBE SYSTEMS INCORPORATED, http://kb2.adobe.com/cps/526/52697ee8.html (last visited Jan. 21, 2012).

29. MeMe Jacobs Rasmussen, Re: Comments from Adobe Systems Incorporated – Privacy Roundtables Project No. P095416, ADOBE SYSTEMS INCORPORATED (Jan. 27, 2010), http://www.ftc.gov/os/comments/privacyroundtable/544506-00085.pdf.

30. Tracking the Trackers: Our Method, WALL ST. J. (July 31, 2010), http://online.wsj.com/article/SB10001424052748703977004575393121635952084.html.
misuse of Flash technologies are currently
pending.31
We collected data from July 12 to 21, 2010, approximately one
year after the first Soltani study. This was six months after the data
collection for the second Soltani study, but prior to the Wall Street
Journal coverage and the lawsuits.
This paper provides another data point in the rapidly changing
realm of LSOs. We investigated more sites than both of the Soltani
studies with a more reproducible protocol, though we did not
investigate sites as deeply. We also extend knowledge about Flash
practices by investigating a random sample in addition to popular
websites where prior studies focused. We found respawning is
currently rare, but sites still use LSOs as persistent identifiers (less
than what Soltani et al. found, though again we caution we used
different methods), which may or may not have privacy implications,
as we discuss further below.
III. RESEARCH METHODS
We used two identically-configured computers on two different
networks to visit 600 websites, and then we analyzed the LSOs and
HTTP cookies those sites set. We investigated two different data sets:
• 100 most popular sites as of July 8, 2010

• 500 randomly selected sites
We created these two data sets based on Quantcast’s ranked list of
the million most popular websites visited by United States Internet
users.32 Both data sets contain international websites, although the
sites we visited are primarily U.S.-based.
The 100 most popular sites capture data about the sites users are
most likely to encounter. This is the same method Soltani et al. used
31. Jacqui Cheng, Lawsuit: Disney, Others Spy on Kids with Zombie Cookies, ARS TECHNICA (Aug. 16, 2010), http://arstechnica.com/tech-policy/news/2010/08/lawsuit-disney-others-spy-on-kids-with-zombie-cookies.ars.

32. Top Ranking International Websites, QUANTCAST, http://www.quantcast.com/top-sites-1 (last visited Jan. 21, 2012).
in their study.33 We also sampled a random population of 500 sites,
because the most popular sites may not follow the same practices as
the rest of the web. We list all websites we visited in Appendix A,
Table 4.
We used two identically-configured Windows laptops (XP Pro,
version 2002, service pack 3) with Internet Explorer 7 configured to
accept all cookies and reject pop ups. We used the most recent version
of Flash Player available at that time, 10.1. Our two laptops were on
different computer networks so they would not have similar IP
addresses, eliminating IP tracking as a potential confound.
LSOs are stored in a binary format. We used custom code from
Adobe to save the contents of each LSO in a text file, which allowed us
to automate comparisons of log files rather than open each LSO in a
SOL editor.34 This was strictly a convenience and did not alter the data
we collected.
At each site we collected all first-party and third-party cookies and
LSOs. We used the protocol described below to gain insights into the
use of LSOs as identifiers and as mechanisms for respawning HTTP
cookies.
We visited each site in three “sweeps” for a total of nine visits:
• Sweep One, three visits from laptop A

• Sweep Two, three visits from laptop B

• Sweep Three, three visits from laptop A with the LSOs from laptop B
During each sweep, we conducted three back-to-back visits per
site. We copied the HTTP cookies and LSOs after each sweep, so we
could determine when they were set. We did not clear cookies or LSOs
during these three visits, so the final visit had all HTTP cookies and
33. Soltani et al., supra note 2. During the course of the year between the first Soltani study and our study, 31 sites that had been in the top 100 in 2009 were displaced with different sites in 2010. In the body of this paper, we present just the top sites from 2010, as there is substantial overlap between the 2010 and 2009 datasets. However, we also studied those 31 sites to be sure they were not substantially different from the 2010 most popular sites. We did not find any additional instances of respawning in the 31 sites that had been in the top 100 sites in 2009 but were no longer in the top 100 in 2010.

34. LSOs are stored in a shared object file (.sol) format rather than as text files. While SOL editors open .sol files, they do not readily lend themselves to automation.
LSOs. After we completed the three visits per site, we deleted all HTTP
and LSOs from system directories and moved on to the next site in the
dataset. We conducted a total of three sweeps: a sweep on laptop A, a
sweep on laptop B on a different network, and then another sweep on
laptop A with LSOs copied over from laptop B.
Starting on July 14, 2010, we collected data from the most popular
sites on laptops A and B. It took five hours to complete a full sweep for
the popular sites and twenty-five hours to complete a full sweep for
the randomly selected sites. We then verified our data and re-visited
individual sites as needed due to crashes or caching issues, as we
describe at the end of this section. Once we confirmed we had data for
all sites on both laptops, we began Sweep Three for the most popular
sites on July 15. We again confirmed data integrity, and completed
data collection for two sites that had caching problems on July 21. For
the randomly selected sites, we collected data on laptop A starting
July 12, laptop B starting July 16, and the third sweep starting July 18.
We completed data collection for three sites that had caching
problems on July 19.
The protocol we followed was designed to contrast content
between two different computers, laptops A and B. Any content that is
identical on both of the laptops cannot be used for identifying users or
computers.
For example, one site set the variable test value to the string test.
Every visitor to that site saves the same string; therefore, there is no
way to tell visitors apart because there is nothing unique in the data.
On the other hand, a variable holding a unique user id likely identifies
a specific computer. For example, a site that sets a variable named
userID to a unique thirty-two-character string that differs between the
two laptops can uniquely identify each of those laptops. In contrast, a
site might use a time stamp to note the time the LSO saved to disk. A
site might set a variable named time to the string 1279042176148 on
one laptop, and 1279042395528 on the second laptop. In this case,
time stamps are the time elapsed in milliseconds since January 1,
1970. It is not a surprise that the times are slightly different between
the two laptops, as we did not start the scripts at exactly the same
time. Websites are unlikely to have many visitors at precisely the same
millisecond, and can keep the original time stamp indefinitely. While
not designed for identification, websites could theoretically use time
stamps to distinguish specific computers across multiple visits.
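The two time stamps quoted above can be decoded directly. A short Python sketch using the example values from the text:

```python
from datetime import datetime, timezone

# Millisecond time stamps observed in the example LSOs
t_laptop_a = 1279042176148
t_laptop_b = 1279042395528

# Both values decode to dates inside our July 2010 collection window
date_a = datetime.fromtimestamp(t_laptop_a / 1000, tz=timezone.utc)
print(date_a.strftime("%Y-%m-%d %H:%M:%S UTC"))  # 2010-07-13 17:29:36 UTC

# The laptops differ by roughly 219 seconds, which is more than enough
# variance to distinguish two visitors who saved an LSO
delta_seconds = (t_laptop_b - t_laptop_a) / 1000
print(round(delta_seconds))  # 219
```

This is why a variable named time, although not designed for identification, could in principle still single out a specific computer.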
Although setting a time stamp is a standard practice, this is one
case where variance between laptops does not automatically mean the
data is being used to uniquely identify computers. A variable named
userID with unique content is more likely to be used to uniquely
identify computers than a variable named time. We do not, however,
have visibility into how variables like userID and time are used, since
only data is stored in LSOs. The programs that use the data reside on
computers from the company that set the LSO data. We have no
ability to inspect how data is used, just to observe the saved data. In
summary, we cannot definitively know how data is used in practice, but
we can make intelligent suppositions.
We used the following automated protocol to collect data for our
analysis:
1. Delete all cookies and cached data on both laptops.
2. Sweep One. On laptop A, for each site:
a. Launch Internet Explorer.
b. Visit the site.
c. Wait sixty seconds to allow all cookies to download.
d. Copy all HTTP cookies, LSOs (*.sol and *.sor) and log
files to another directory.
e. Visit the site two more times to get a rotation of ads and
copy all HTTP cookies and LSOs after each visit.
f. Quit Internet Explorer.
g. Move all HTTP cookies and LSOs to get any cached files that were saved on exit (deleting all HTTP cookies and LSOs in the process).
3. Sweep Two. On laptop B, the exact same procedure as for laptop A
in step 2 above.
4. Sweep Three. On laptop A, for each site:
a. Copy the final set of LSOs only (not HTTP
cookies) that had been on laptop B for that site into
the ..\Application Data\Macromedia directory on
laptop A.
b. Visit the site just with laptop A.
c. Wait sixty seconds to allow all cookies to download.
d. Copy all HTTP cookies, LSOs (*.sol and *.sor) and log
files to another directory.
e. Visit the site two more times to get a rotation of ads and
copy all HTTP cookies and LSOs after each visit.
f. Quit Internet Explorer.
g. Move all HTTP cookies and LSOs to get any cached files that were saved on exit (deleting all HTTP cookies and LSOs in the process).
At the end of this procedure we compared HTTP cookies from all
three sweeps. To identify respawning, we looked for HTTP cookie
strings that were different on laptops A and B in sweeps One and Two,
but in Sweep Three were identical to Sweep Two. This suggests that
the information in the HTTP cookie in Sweep Three propagated from
the LSOs copied over from Sweep Two. In the two cases of respawning
that we observed, the text in HTTP cookies also matched text in LSOs,
but not all matches between HTTP and LSOs were indicative of
respawning.
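The respawning test just described reduces to a simple predicate. The following Python sketch is a simplified restatement of the comparison, not our actual collection scripts:

```python
def respawned(sweep1, sweep2, sweep3):
    """Flag an HTTP cookie value as respawned: the value differed
    between laptop A (Sweep One) and laptop B (Sweep Two), yet after
    copying laptop B's LSOs onto laptop A, Sweep Three reproduced
    laptop B's value instead of a fresh one."""
    return sweep1 != sweep2 and sweep3 == sweep2
```

Note that identical values across all three sweeps prove nothing: there is nothing unique in the data, so the predicate is deliberately false in that case.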
See Figure 1 (below) for a graphical depiction of how we classified
sites for the popular and randomly selected websites. As shown in
Figure 1, first, we looked for sites that saved an LSO in the
#SharedObjects subdirectory (Figure 1, step one). We disregarded all
of the sites that did not save LSOs. Second, we compared the file
structure on laptops A and B to see if we had LSOs from the same sites
with the same file names (Figure 1, step two). If the file names
matched on laptops A and B, then we compared the contents of those
files (Figure 1, step four). If the file contents were identical on laptops
A and B, there was nothing unique, and the LSOs could not be used to
respawn or to identify computers (Figure 1, step five). If the content in
the LSOs differed between laptops A and B, then we classified these as
uniquely identifying—though we cannot be certain if computers are
being uniquely identified. We further investigated to see if the unique
contents within LSOs matched with content in HTTP cookies (Figure
1, step six). If not, we classified them as having unique content (Figure
1, step seven) but did not have to check for respawning. We performed
a final check. We looked at the HTTP cookies from Sweep Three,
which was performed with LSOs from laptop B, and checked to see if
the HTTP cookies on laptop A now matched the LSO data we copied
over from laptop B (Figure 1, step eight). If so, we established HTTP
cookies were respawned from data stored in LSOs (Figure 1, step ten).
If not, we still knew the LSOs had unique content (Figure 1, step nine).
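The Figure 1 classification can be summarized as a decision function. This Python sketch is a simplified rendering of the steps above; the function and parameter names are ours, and it assumes each LSO has already been dumped to text as described earlier:

```python
def classify_lso(content_a, content_b, cookies_after_sweep3=None):
    """Classify one LSO file seen on both laptops, following Figure 1.

    content_a, content_b: text dumps of the same LSO path from laptops
    A and B. cookies_after_sweep3: laptop A's concatenated HTTP cookie
    text from Sweep Three (None if the LSO text never matched a cookie).
    """
    if content_a == content_b:
        # Identical on both machines: nothing unique to identify with (step 5)
        return "not unique"
    if cookies_after_sweep3 and content_b in cookies_after_sweep3:
        # Laptop B's LSO value reappeared in laptop A's cookies (step 10)
        return "respawned"
    # Differs between machines but never matched a cookie (steps 7/9)
    return "unique content"
```

As in the paper, "unique content" only means the LSO could identify a computer, not that it is being used that way.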
This describes all of the boxes in the classification flow chart
except for when we did not find the same file name and path for LSOs
on laptops A and B (Figure 1, step three). Despite visiting sites three
times in each sweep to catch rotation of content and ads, on some sites
we found third-party LSOs from the first sweep on laptop A, but not
on laptop B, or vice versa. For example, we might see the file
s.ytimg.com/soundData.sol on laptop A but not laptop B. In all but
two instances we had previously seen third-party LSOs of that type on
other sites where the LSO did appear on both laptops. On a different
website, we would see s.ytimg.com/soundData.sol on both laptops A
and B, allowing us to determine if there was any unique content, and
then classify the soundData.sol LSO. After we classified an LSO, we
then applied the same classification for sites with that LSO only on
one laptop. This method worked well because there are comparatively
few third-party companies using LSOs, and we saw the same third-party
LSOs multiple times across multiple sites. For all first-party sites
that used LSOs, we found those LSOs saved to both laptops A and B,
not just one laptop. We were unable to classify third-party LSOs on
only two out of 600 websites.
Figure 1:
Flow chart of website classification based on #SharedObjects. Step
numbers correspond to descriptions in the body of the paper.
We did not traverse multiple pages within websites; we only
visited the top level of any given domain. As an example of where that
would affect results, some sites start with login pages and only have
content designed for Flash Player after users log in. We did not do any
logins or deep links, which means our counts are a lower bound. We
also did not interact with any content in Flash Player. This is less of a
concern for quantifying Flash respawning, as sites using LSOs for
respawning would typically not want to require user interaction before
saving LSOs. Similarly, if companies are using LSOs to uniquely
identify visitors to their sites, we expect they would do so immediately
and not require interaction with content in Flash Player. However, we
expect that we undercounted the total number of sites using LSOs. In
addition, we only reported persistent LSOs saved, not all LSOs set—we
logged several sites that saved LSOs but then deleted them. Transient
LSOs cannot be used to uniquely identify computers over time or for
respawning, so we do not report those statistics. Finally, we turned on
popup blocking in Internet Explorer to reduce caching issues, which
could also undercount any LSOs from blocked popups, but popups are
not pervasive at this time.
We did observe sporadic issues with cached data. For example,
Flash creates a uniquely-named subdirectory under the
#SharedObjects directory, something like 8SB5LMVK.35 When we
quit Internet Explorer and removed all #SharedObjects files and
subdirectories, the next site to save an LSO would create a new
randomly named #SharedObjects subdirectory. However, in
approximately 6% of the sites we visited, when we launched a new
version of Internet Explorer it would re-create the prior path and save
old LSOs from the prior website. To address this issue, we had to re-run data collection for all sites that had a #SharedObjects
subdirectory with the same name as the prior site we visited. This
appears to be an issue on the web browser side. We were not able to
reproduce it reliably, and did not test other web browsers. From a
user’s perspective, cache issues could look like and function like
respawned LSOs, even though caching issues appear to be completely
unintentional.
35. These unique directory names cannot be used to identify computers because application programmers are unable to access the name of the directory. The directory names are randomly generated for security reasons.
2012]
MCDONALD & CRANOR
657
IV. RESULTS
In this section we present our results: first on the use of HTTP
cookies, then on the use of LSOs. Overall, we found that the most
popular sites were more likely to set more HTTP cookies and more
LSOs.
A. USE OF HTTP COOKIES
For quantifying HTTP cookie use, there was no advantage to using
any particular sweep. We did see a small variation between sweeps;
for example, the number of sites setting HTTP cookies varied by up to
3% depending on which sweep we used. We used the final sweep for
all HTTP cookie counts. In our discussion of the #SharedObjects
directory we contrast Sweep One with Sweep Two to look for unique
data. We then check results from Sweep Three to identify HTTP
cookie respawning, as described in the prior section.
Cookies are ubiquitous. Only two of the popular sites never used
cookies (wikipedia.org and craigslist.org). HTTP cookie use drops to
59% for the random 500 sites. Not only did fewer randomly selected
sites use any HTTP cookies, they also set fewer cookies per site than
popular sites. We used Internet Explorer, which stores cookies in text
files. Visually, the list of cookie files from a popular site might look
like this:

cupslab@ad.yieldmanager[2].txt
cupslab@www.yahoo[2].txt
cupslab@doubleclick[1].txt
cupslab@yahoo[1].txt
cupslab@voicefive[1].txt
Here we see five different hosts that set cookies: ad.yieldmanager,
doubleclick, voicefive, www.yahoo, and yahoo. There is some overlap
here—www.yahoo and yahoo are from the same company. But as is
the case in this example, in general the number of hosts setting HTTP
cookies is roughly equal to the number of different companies setting
HTTP cookies on the computer.
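This tally can be automated from the file names alone. A minimal sketch in Python, assuming the user@host[n].txt naming pattern shown above (the helper name cookie_hosts is ours, not part of any browser API):

```python
import re

def cookie_hosts(filenames):
    """Return the set of distinct hosts from IE cookie file names
    of the form user@host[n].txt."""
    hosts = set()
    for name in filenames:
        match = re.match(r"[^@]+@(.+)\[\d+\]\.txt$", name)
        if match:
            hosts.add(match.group(1))
    return hosts

files = [
    "cupslab@ad.yieldmanager[2].txt",
    "cupslab@www.yahoo[2].txt",
    "cupslab@doubleclick[1].txt",
    "cupslab@yahoo[1].txt",
    "cupslab@voicefive[1].txt",
]
print(sorted(cookie_hosts(files)))
# → ['ad.yieldmanager', 'doubleclick', 'voicefive', 'www.yahoo', 'yahoo']
```

As the prose notes, distinct hosts only roughly approximate distinct companies (www.yahoo and yahoo collapse to one company), so a host count is an upper bound on companies.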
The contents of an HTTP cookie file might include something like
this:
fpms
u_30345330=%7B%22lv%22%3A1279224566%2C%22uvc%22
%3A1%7D www.yahoo.com/
1024
410443520
30163755
2720209616
30090329
∗
fpps
_page=%7B%22wsid%22%3A%2230345330%22%7D
www.yahoo.com/
1024
410443520
30163755
2720209616
30090329
∗
In Internet Explorer's implementation, each cookie file may
contain multiple cookies separated by asterisks. The snippet above
shows two different HTTP cookies (emphasis added). The first, fpms,
is set to a string that begins u_303... and the second, fpps, is set to a
string that begins _page.... Both cookies are served by Yahoo. The
remaining data pertains to when the cookies expire and other meta
information.36
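Splitting such a file back into individual cookies is mechanical. A rough Python sketch, assuming only that an asterisk line terminates each record and that a record's first three fields are name, value, and domain/path (the trailing numeric fields hold flags and timestamps, which this sketch ignores):

```python
def parse_ie_cookie_file(text):
    """Split an IE cookie text file into records delimited by '*' lines;
    return (name, value, domain) from each record's first three fields."""
    cookies, record = [], []
    for line in text.splitlines():
        if line.strip() in ("*", "\u2217"):  # plain or Unicode asterisk
            if len(record) >= 3:
                cookies.append((record[0], record[1], record[2]))
            record = []
        else:
            record.append(line)
    return cookies

# Sample reconstructed from the two Yahoo cookies shown above.
sample = "\n".join([
    "fpms",
    "u_30345330=%7B%22lv%22%3A1279224566%2C%22uvc%22%3A1%7D",
    "www.yahoo.com/", "1024", "410443520",
    "30163755", "2720209616", "30090329", "*",
    "fpps",
    "_page=%7B%22wsid%22%3A%2230345330%22%7D",
    "www.yahoo.com/", "1024", "410443520",
    "30163755", "2720209616", "30090329", "*",
])
for name, _value, domain in parse_ie_cookie_file(sample):
    print(name, domain)
# → fpms www.yahoo.com/
#   fpps www.yahoo.com/
```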
As we summarize in Table 2, we found an average of 6.7 HTTP
cookie files for the popular sites and 2.5 for the randomly selected
sites. We observed a maximum of thirty-four different cookie files on
the popular sites and thirty on the random sites. We found an average
of seventeen HTTP cookies for the popular sites and 3.3 for the
randomly selected sites. We observed a maximum of ninety-two HTTP
cookies set from visiting a single popular site, and a maximum of
seventy-three HTTP cookies from a randomly selected site. Users
might be surprised to learn that a visit to their favorite site results in
HTTP cookies from dozens of different companies, but this is not a
novel finding.37

36 HTTP Cookies (Windows), MSDN, http://msdn.microsoft.com/en-us/site/aa384321
(last visited Nov. 4, 2011).
Table 2: HTTP Cookies

Data set   % sites with cookies   Avg. # hosts   Max. # hosts   Avg. # cookies   Max. # cookies
Popular    98%                    6.7            34             17               92
Random     59%                    2.5            30             3.3              73
B. USE OF LSOS
Sixty-nine percent of the popular sites and 33% of the randomly
selected sites had some LSO activity, by which we mean they at least
created a subdirectory to store LSOs, even if they never actually
created any LSOs. Twenty percent of the popular sites stored LSOs in
the #SharedObjects directory, as did 8.2% of the randomly selected
sites. These are the sites we are interested in as potential sources of
either respawning HTTP cookies due to LSOs, or as using LSOs to
individually identify computers.38 We discuss these in more detail
below.
We compared the contents of LSOs in #SharedObjects directories
on two identically-configured laptops. However, we did not always
find identical files on both laptops. For example, one site contained
two LSOs on laptops A and B, but contained an additional two LSOs
just on laptop B.
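This two-laptop comparison can be scripted along these lines. A sketch under stated assumptions: LSO files carry the .sol extension, and dir_a and dir_b already point inside the randomly named #SharedObjects subdirectory on each machine, so relative paths are directly comparable (the function name is ours):

```python
import hashlib
import tempfile
from pathlib import Path

def compare_lso_dirs(dir_a, dir_b):
    """Return (matched, identical): relative paths of .sol files present
    on both machines, and the subset whose bytes are identical."""
    files_a = {p.relative_to(dir_a): p for p in Path(dir_a).rglob("*.sol")}
    files_b = {p.relative_to(dir_b): p for p in Path(dir_b).rglob("*.sol")}
    matched = set(files_a) & set(files_b)
    identical = {
        rel for rel in matched
        if hashlib.sha256(files_a[rel].read_bytes()).digest()
        == hashlib.sha256(files_b[rel].read_bytes()).digest()
    }
    return matched, identical

# Toy directories standing in for laptops A and B: one LSO saved with the
# same bytes on both machines, plus an extra LSO present only on laptop B.
with tempfile.TemporaryDirectory() as a, tempfile.TemporaryDirectory() as b:
    for root, has_extra in ((a, False), (b, True)):
        site = Path(root) / "example.com"
        site.mkdir()
        (site / "settings.sol").write_bytes(b"\x00same-bytes")
        if has_extra:
            (site / "uid.sol").write_bytes(b"\x00unique-id")
    matched, identical = compare_lso_dirs(a, b)
    print(sorted(p.name for p in matched))   # → ['settings.sol']
    print(matched == identical)              # → True
```

Files that match in name but differ in bytes are the candidates for unique identifiers; files identical on both machines cannot be identifying individual computers.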
Six of the twenty popular sites with #SharedObjects did not have
matching file names. The random 500 sites include forty-one sites
with #SharedObjects, of which nine did not have matching file names.
In both datasets we observed one LSO that we saw only once, thus we
were unable to classify it.
Why do we see so many mismatches between the two laptops?
First-party #SharedObjects remained stable. Third-party
#SharedObjects come from advertisers, and advertising rotates. Even
though we collected data on both laptops only a few days apart,
advertising, and advertising partners, can change over the course of a
few minutes.

37 Soltani et al., supra note 2, at 3.

38 Programs running in Flash Player also write to the sys directory. While these files are
LSOs with the same file format as in the #SharedObjects directory, the sys files are settings
that applications programmers cannot edit. There is no API to access the data stored in sys
files. Consequently, we have no reason to believe settings files in sys are used for unique
identification or respawning.
C. MATCHED SITES
We found paired LSOs with matching file names on fourteen of the
2010 top 100 sites and thirty-two of the random 500 sites. As
mentioned before, any LSO that set identical content on both laptops
could not use that content to uniquely identify computers or for
respawning. Not all unique identifiers are used for identifying
computers, but all identification via LSOs requires a unique identifier.
We found matching content on both laptops for six of the 100 popular
sites and twenty of the 500 random sites. These sites are neither
identifying computers nor respawning. For a visual depiction of the
combined analysis of LSOs with matching file names in all sweeps, as
well as LSOs we classified based on seeing them in other contexts, see
Figures 2 and 3.
Figure 2:
Analysis of the 100 most popular websites of 2010. Semi-circles
contain the number of sites that fall into a given category. Step
numbers correspond to descriptions in the body of the paper.
Figure 3:
Analysis of the 500 randomly selected websites. Semi-circles contain
the number of sites that fall into a given category. Step numbers
correspond to descriptions in the body of the paper.
D. MISMATCHED SITES
Variable names like userId helped us theorize that many LSOs are
used to identify computers, rather than identifying creative content.
Without knowledge of back-end practices we cannot determine why
LSOs contain unique identifiers; we can only quantify how many do.
further investigated to see if content in LSOs matched content in
HTTP cookies. If so, we performed analysis to see if respawning
occurred. For example, we found one LSO that contained a variable
named uID set to a unique ten-digit integer. After we deleted all HTTP
cookies, migrated LSOs from one laptop to the other, and revisited the
site, the same ten-digit integer appeared in the new HTTP cookies in
the final sweep. This is a clear-cut case of respawning.
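The respawning check reduces to a set intersection between values stored in an LSO and values in the HTTP cookies set after deletion. A sketch with hypothetical data modeled on the case above (the names uID, session, and uid, and the integer itself, are illustrative, not the site's actual values):

```python
def appears_respawned(lso_values, new_cookie_values):
    """Return values stored in an LSO that reappear verbatim in HTTP
    cookies set after all cookies were deleted."""
    return sorted(set(lso_values) & set(new_cookie_values))

lso = {"uID": "3141592653"}                                 # survived deletion
fresh_cookies = {"session": "abc123", "uid": "3141592653"}  # set on revisit
print(appears_respawned(lso.values(), fresh_cookies.values()))
# → ['3141592653']
```

An exact-match test like this is conservative: it would miss identifiers that are re-encoded or hashed before being written back into cookies.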
E. PREVALENCE OF UNIQUE IDENTIFIERS AND
RESPAWNING IN LSOS
As shown in Figure 2, out of 100 popular sites, twenty saved LSOs
in the #SharedObjects directory (see oval 1 in Figure 2). Of those
twenty, eight did not have unique content and could not be used for
identifying computers or respawning LSOs; seven of those eight
were first-party LSOs (3 in Figure 2). Another nine had unique
content and may (or may not) be used to identify computers. Seven of
those nine were third-party LSOs (5 & 7 in Figure 2). Two LSOs
respawned deleted HTTP cookie content, one set by a first party
and one by a third party (8 in Figure 2). We were unable to classify
one third-party LSO (9 in Figure 2).
As shown in Figure 3, out of 500 randomly selected sites, forty-one
saved LSOs in the #SharedObjects directory (see oval 1 in Figure 3).
Of those forty-one, twenty-three did not have unique content and
could not be used for identifying computers or respawning LSOs.
Twenty-two of those twenty-three were third-party LSOs (see oval 3 in
Figure 3). Another seventeen had unique content and may (or may
not) be used to identify computers. Sixteen of those seventeen were
third-party LSOs (see ovals 5 & 7 in Figure 3). We observed no
respawning in the random 500 dataset (see oval 8 in Figure 3). We
were unable to classify one third-party LSO (see oval 9 in Figure 3).
F. RESPONSE TO RESPAWNING
In October 2010, the Center for Democracy and Technology
(CDT) attempted to contact the two sites we found were respawning
HTTP cookie content from LSOs. CDT successfully contacted one site,
where site operators expressed surprise to learn they were respawning
LSOs. The site voluntarily stopped using LSOs while they conducted
an internal review. In subsequent discussions with CDT, the site
operators stated that they had not been using LSOs for respawning;
they were counting unique visitors to their site. At this time, they no
longer use unique identifiers
in LSOs for analytics. We have visited the site multiple times, and
confirmed the site no longer sets LSOs.
CDT was unable to reach the third-party company that respawned
HTTP cookies at the second site. CDT left messages by voicemail and
email describing concerns with respawning in mid-October. However,
even before CDT’s messages, this company stopped respawning
cookies by August 30 on the first-party site we studied. We did still see
HTTP cookies from the third party on September 14, which
establishes that they still had a relationship with the first-party site,
rather than simply having stopped doing business together.
Furthermore, CDT created a list of companies that had a relationship
with this third-party company based on the contents of their website,
blog posts, and news articles. CDT visited all of those sites and found
no LSOs from the third-party company that had been respawning.
CDT left messages for companies that use LSOs to set unique
identifiers. We hoped to understand to what extent unique identifiers
were used to uniquely identify computers, rather than for a non-tracking purpose. None of the companies CDT attempted to contact
were willing to speak with CDT regarding the matter.
We subsequently analyzed the privacy policies for the companies
setting unique identifiers to see if we could determine their practices
based on their privacy policies. For the eight popular sites with unique
identifiers, their policies were unclear and we were not able to
determine if they use LSOs to uniquely identify specific computers.39
For the random sites, we looked at both the first-party website and
any third-parties setting an LSO, for a total of thirty-two unique sites.
39 Of those thirty-two sites, fourteen sites (44%) did not have privacy policies, including
one site that was taken offline by law enforcement agents. None of the sites made promises
that would be violated if they use LSOs to uniquely identify computers. None of the sites
stated that they use LSOs to uniquely identify specific computers. Four of the sites (13%)
gave hints that they might be using LSOs to uniquely identify specific computers, for
example discussing "cookies and other means" to re-identify visitors to the sites, or
disclosing LSO use to combat fraud and for "other purposes." The remaining eighteen sites
(44%) had policies that were completely unclear or did not mention LSOs at all. In all, we
were neither able to definitively classify any of the sites as using LSOs to identify individual
computers, nor able to definitively rule it out.
Once again, we were unable to determine if any of the sites use LSOs
to uniquely identify specific computers.
Finally, we reviewed the privacy policies for the two first-party
websites where we found respawning, plus the third-party website
engaged in respawning. The first-party websites’ privacy policies were
unclear. The third-party did not have a privacy policy.
V. POLICY IMPLICATIONS
While our results suggest that use of LSOs to respawn HTTP
cookies or track users may be declining, the frequent presence of
unique identifiers in LSOs combined with a lack of transparency about
the use of these LSOs continues to raise concern. Using LSOs to track
users, however, is just the tip of the iceberg; new mechanisms
continue to emerge that are designed to track users in ways that
circumvent privacy controls.40
HTTP cookie respawning has generated media attention and
regulatory interest. In part, this may be because respawning implies
such a blatant disregard for user choice. More subtle practices with
similar functionality are just as dangerous to privacy, but may not be
as clear-cut topics for regulatory authority. In this section we briefly
address a few points that pertain not just to LSOs and respawning, but
to the larger topic of Internet privacy.
First, regulators are likely to reject industry self-regulation if even
the most prominent companies will not respect user choice. It is
difficult to find calls for a purely self-regulated industry approach to
Internet privacy credible when the industry demonstrates a
willingness to violate user intent and privacy, as demonstrated by
using LSOs to respawn HTTP cookies or individually identify
computers. No malice is required; it is easy to imagine software
engineers using a clever tactic to avoid expensive data loss without
considering privacy implications. But the effects on user privacy are
the same, regardless of how decisions are made.
Second, when the Center for Democracy and Technology cannot
get companies to answer questions about their privacy practices, and
privacy researchers cannot determine privacy practices by reading
privacy policies, it seems unreasonable to expect end users to be able
to understand when LSOs are being used and in what capacity. One of
the appealing features of a self-regulated industry approach is that
self-regulation allows users to choose what is appropriate for them
personally because privacy preferences vary greatly between
individuals. What we see in this case, however, is that users lack the
information to make such choices. Absent better communication,
privacy policies cannot form the basis of informed consent.

40 John Timmer, It is Possible to Kill the Evercookie, ARS TECHNICA (Oct. 27, 2010),
http://arstechnica.com/security/news/2010/10/it-is-possible-to-kill-the-evercookie.ars.
Third, one of the arguments against legislative or regulatory action
with regard to the Internet is that companies can innovate faster than
government can respond. That is likely true in some contexts.
However, the fact that companies can move quickly does not mean
they will move quickly, particularly when action is against their economic
interests. To draw on an example specifically from this context, a
representative from Macromedia—developers of Flash technologies
acquired by Adobe—responded to privacy concerns saying that they
did not think Flash Player was a privacy threat, but they were
speaking with browser makers to improve LSO management in
2005.41 That LSO management was not addressed until it became
a crisis five years later does not seem unusual. Any software team
prioritizing what to work on for the next release will have a hard time
arguing for a theoretical threat to privacy as something to address
before adding new features that could sell more of their product or
fixing bugs that annoy their current user base. When multiple
companies work together (e.g., Adobe and browser companies), delays
are even more likely than when companies are able to act
independently. In the context of Internet privacy, government moving
slowly may still bring more progress than companies will make on
their own.
Fourth, a common mental model of user choice for privacy is that
users can decide which HTTP cookies to accept or delete. With a single
site setting over ninety cookies, this concept is outdated. No one can
practically choose yes or no for each HTTP cookie, when there are so
many of them in use. As LSOs and other technologies are being used
for tracking, user control becomes even more difficult. In order to
manage HTTP cookies, users must rely on some type of privacy-enhancing
technology, even if it is as simple as settings in their web
browser. Other options for HTTP cookie management exist, including
stand-alone packages like CCleaner, opt-out cookies, and browser
plugins. We have crossed the threshold where users require PETs if
they are to protect their online privacy.
Finally, the proposed Best Practices Act would create a safe harbor
for companies working with the FTC, while other companies would
41 Michael Cohn, Flash Player Worries Privacy Advocates, INFORMATION WEEK (Apr. 15,
2005), http://www.informationweek.com/news/showArticle.jhtml?articleID=160901743.
still be subject to lawsuit. Opponents are concerned that privacy
lawsuits would only enrich trial lawyers, while proponents argue the
threat of lawsuit would improve practices.42 While lawsuits are a
cumbersome and inherently reactive approach to privacy, we did see
possible support for the view that the threat of lawsuit can improve
practices. In particular, we note the third-party company that we
observed respawning. They stopped respawning after media coverage
of lawsuits, but before we contacted them. That they would not answer
voice mail or email also suggests they may have been wary of legal
action. Furthermore, the sites identified as respawning in both of the
Soltani studies appear to have stopped respawning. Our experience is
not conclusive, but may be worth considering.
VI. POLICY OPTIONS
In this section we examine which stakeholders can take steps to
reduce privacy-sensitive LSO practices. It is an open question how
many resources should be expended. Our results suggest that
problems with LSOs are diminishing over time, but are still present. As
noted in the previous section, however, LSO abuse is only one element
of a larger problem. Ideally, policy solutions do not address
technologies one-by-one, but rather address the entire class of
technologies used to track users without informed consent. That being
said, the following are some steps that stakeholders could take to
address LSOs.
A. COMPANIES USING FLASH TECHNOLOGIES
The ultimate responsibility for using LSOs to respawn HTTP
cookies rests with the companies that engage in such practices.
Unfortunately, even prominent companies have engaged in
respawning. We believe, but cannot definitively prove, that additional
prominent companies are using LSOs to identify users without
respawning.
While these stakeholders are in the best position to take direct
action, they benefit from improved analytics and other user data. They
are unlikely to change their practices without external motivation. We
also note that companies are not always aware when they are using
42 Grant Gross, Lawmakers Hear Mixed Reviews of Web Privacy, PCWORLD (July 22,
2010), http://www.pcworld.com/businesscenter/article/201712/lawmakers_hear_mixed_
reviews_of_web_privacy_bill.html.
LSOs to respawn HTTP cookies. Chief Privacy Officers (CPOs) or
other appropriate staff might visit their own websites to understand if
and how they use LSOs. By doing so, CPOs can help their companies
avoid potential litigation, regulatory interest, and negative press.
B. ADOBE
While Adobe did not create privacy problems with LSOs, they
inherited the potential for issues when they acquired Flash
technologies. Adobe is in a pivotal position to affect Flash developers.
Adobe has already taken some actions, including their statement that
respawning is abuse of LSOs. However, they have not published a
position on using LSOs to uniquely identify computers without
respawning HTTP cookies. Adobe could take a stance similar to the
IAB position that LSOs must not be used for behavioral advertising at
this time, or go beyond that to also include analytics. More generally,
Adobe could adopt the policy that LSOs should only be used to
support Flash content and nothing else. We do not offer opinions on
where Adobe should set their policy, but these seem like some obvious
additions to consider and discuss.
Adobe’s statement that respawning constitutes abuse of LSOs may
not be widely understood by Flash developers, and currently lacks any
threat of enforcement. Adobe could communicate their policies clearly
in all developer documentation, terms of service, and in popular
developer forums. Adobe could also choose to follow Facebook’s
example and rescind licenses for companies that do not delete
inappropriately collected data and do not comply with Adobe’s license
terms.43 This is by nature an after-the-fact remedy that would only
affect companies that have been shown to engage in unacceptable
practices, and is not a panacea.
Adobe took steps to improve users’ ability to manage LSOs in two
ways. They worked with web browser companies, and redesigned the
user interface for controls currently built into Flash. In working with
web browsers, Adobe published an API for use with Netscape Plugin
Application Programming Interface (NPAPI).44 Most web browsers
43 Mike Vernal, An Update on Facebook UIDs, FACEBOOK DEVELOPERS (Oct. 29, 2010, 6:15
PM), http://developers.facebook.com/blog/post/422.

44 NPAPI: ClearSiteData, MOZILLAWIKI, https://wiki.mozilla.org/NPAPI:ClearPrivacyData
(last modified Jan. 6, 2011).
use NPAPI with the notable exception of Internet Explorer,45
necessitating another approach. Adobe touts benefits for security and
sandboxing, but their preliminary announcement did not mention
privacy.46 By focusing just on security, Adobe may not have clearly
communicated to the Flash developer community that privacy issues
are a priority. In January 2011, Adobe announced details of interim
user interface controls and discussed them in the context of privacy.47
They have since completed work on user interface changes.
Flash developers may not think about privacy concerns while in
the midst of trying to get code to work. Adobe could add text about
privacy to the ActionScript API documentation. Specifically, it might
help to add information about acceptable practices to the
SharedObject API, which documents how to set and use LSOs. Adobe
could also help the Flash developer community by adding a chapter
specifically about privacy to mirror the security chapter in the
ActionScript Developer’s Guide.
Adobe could modify the functionality of LSOs, but that may risk
breaking existing content designed for Flash Player for the majority of
developers who have done nothing untoward. This is a difficult issue.
To minimize compatibility issues, it is often easier to add new fields
than to delete or modify existing fields. For example, in future
versions of Flash Player, all LSOs could have an expiration date. This
would not prevent LSO abuse, but could limit the scope of privacy
issues.
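To illustrate the additive approach, here is a minimal sketch of how a runtime might honor a hypothetical expires field on LSO metadata while leaving LSOs that lack the field untouched (the field name and Unix-timestamp format are our assumptions, not an actual Adobe API):

```python
import time

def is_expired(lso_metadata, now=None):
    """True if the LSO's hypothetical 'expires' field (a Unix timestamp)
    has passed; LSOs without the field never expire, so existing
    content keeps working unchanged."""
    if now is None:
        now = time.time()
    expires = lso_metadata.get("expires")
    return expires is not None and now >= expires

tracking = {"name": "uid.sol", "expires": 1_300_000_000}  # expired in 2011
legacy = {"name": "settings.sol"}                         # no field: kept
print(is_expired(tracking), is_expired(legacy))
# → True False
```

Because the new field is optional and additive, deploying it would not break content from the majority of developers who have done nothing untoward.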
C. BROWSER COMPANIES
Asking browser makers to expend engineering resources for
problems they did not create seems unsatisfying, but they do have the
ability to improve user experience. LSOs are only one of many types of
tracking technologies and browser vendors may need to keep
adjusting to prevent new approaches from being used to track users
without users’ knowledge.
One challenge browser companies face is creating usable
interfaces. Users currently struggle to understand how to manage
45 Netscape Style Plug-ins Do Not Work After Upgrading Internet Explorer, MICROSOFT
SUPPORT, http://support.microsoft.com/kb/303401 (last updated July 27, 2007).

46 Paul Betlem, Improved Flash Player Support in Chrome, ADOBE (Mar. 30, 2010),
http://blogs.adobe.com/flashplayer/2010/03/improved_flash_player_support.html.

47 Huang, supra note 27.
their HTTP cookie preferences.48 As browser interfaces expand to
include managing other types of persistent storage, including LSOs,
browser companies have the opportunity to improve the usability of
their privacy settings. If browser companies simply tack on other types
of storage to their sometimes obscure HTTP cookie management
settings, they are likely to increase users’ confusion.
D. POLICY MAKERS
Focusing specifically on the technology of respawning merely
creates incentives for developers to move to other types of tracking. As
we have mentioned, LSOs can store unique identifiers that are
functionally equivalent to respawning. The company Mochi Media
offers tracking via ActionScript code embedded into content running
in Flash Player, with no need to respawn HTTP cookies.49 A popular
book on analytics includes directions on how to use Flash technologies
to track what users read in the New York Times, even from mobile
devices that are disconnected from the web at the time.50 These
examples happen to be about Flash technologies, but could just as
easily be about JavaScript, super cookies, browser fingerprints, or
iPhone and iPad unique identifiers. Rather than a narrow focus on
specific technologies, policy makers would be well advised to look at
functionality.
For enforcement, it seems sensible to focus on the most popular
websites. Not only do popular sites reach millions of people, we found
they are more likely to have questionable privacy practices. If
enforcement actions become public, large companies are more likely
to draw press attention than small companies. Media coverage will
help educate website developers that there are privacy issues they
need to consider.
48 See Aleecia M. McDonald & Lorrie Faith Cranor, An Empirical Study of How People
Perceive Online Behavioral Advertising, CYLAB TECHNICAL REPORT (Nov. 10, 2009),
http://www.cylab.cmu.edu/research/techreports/tr_cylab09015.html; see also Aleecia M.
McDonald & Lorrie Faith Cranor, Beliefs and Behaviors: Internet Users' Understanding of
Behavioral Advertising, 38TH RESEARCH CONFERENCE ON COMMUNICATION, INFORMATION
AND INTERNET POLICY (TELECOMMUNICATIONS POLICY RESEARCH CONFERENCE), Oct. 2,
2010.

49 Flash Tracking, Traffic Monitoring, and Analytics Service, MOCHI MEDIA,
http://www.mochibot.com (last visited Jan. 21, 2011).

50 AVINASH KAUSHIK, WEB ANALYTICS 2.0: THE ART OF ONLINE ACCOUNTABILITY AND
SCIENCE OF CUSTOMER CENTRICITY 248–49 (Willem Knibbe et al. eds., 2010).
VII. CONCLUSIONS
We found that while companies were still respawning HTTP
cookies via LSOs as late as July 2010, the number of companies
involved was low. We observed HTTP cookie respawning on the front
page of only two of the top 100 websites and none of the randomly
selected 500 websites we checked. Further, both companies that were
respawning have stopped this practice—one on their own, and one as
a result of this study. However, because the sites that had been
respawning are very popular, many users may have been affected by
even just two companies respawning, though respawning is by no
means endemic at this time.
Further, we found sites using LSOs to set unique identifiers. While
we cannot know definitively how these identifiers are used in practice,
we believe some of them identify individual computers. If so, this is
functionally equivalent to respawning HTTP cookies. Companies may
use LSOs to track users who decline or delete HTTP cookies, but do
not realize they also need to manage LSOs. We observed fairly low
rates of LSOs that may be identifying computers—9% for the most
popular 100 websites, and 3.4% of a random selection of 500
websites. Again, however, the most popular sites reach a very large
number of users, thus many people may be affected by these practices.
Furthermore, a little over 40% of sites that save LSO data store unique
identifiers, suggesting that Flash developers may not understand
LSOs as a privacy concern.
Finally, we note that the most popular sites are more likely to
engage in practices with potential privacy implications. We observed
primarily third-party LSOs in the randomly selected 500 websites,
which again suggests it is possible to work with a small number of
prominent companies to dramatically affect practices, rather than
needing to contact a large number of small companies. We have hope
that the use of LSOs to circumvent users’ privacy preferences can be
reduced, but note that many other technologies exist that will fill the
same function. So long as we focus on individual technologies, rather
than a larger picture of user privacy and control, we risk an arms race
with advertisers changing the technologies they use to identify users,
regardless of users’ privacy preferences.
APPENDIX A
We analyzed two data sets based on Quantcast’s list of the one
million most visited websites—the 100 most visited sites in the United
States as of July 2010 and 500 sites we randomly selected from the
Quantcast list of one million. We list those sites here.
Table 3:
Quantcast’s top 100 most visited websites as of July 8, 2010
about.com
adobe.com
amazon.com
americangreetings.com
answers.com
aol.com
ap.org
apple.com
ask.com
associatedcontent.com
att.com
bankofamerica.com
bbc.co.uk
bestbuy.com
bing.com
bizrate.com
blinkx.com
blogger.com
blogspot.com
bluemountain.com
break.com
careerbuilder.com
causes.com
chase.com
chinaontv.com
city-data.com
cnet.com
cnn.com
comcast.com
comcast.net
craigslist.org
dailymotion.com
digg.com
drudgereport.com
ebay.com
ehow.com
evite.com
examiner.com
facebook.com
flickr.com
formspring.me
go.com
godaddy.com
google.com
hp.com
hubpages.com
huffingtonpost.com
hulu.com
ign.com
imdb.com
latimes.com
legacy.com
linkedin.com
live.com
mapquest.com
match.com
merriam-webster.com
metacafe.com
microsoft.com
monster.com
msn.com
mtv.com
mybloglog.com
myspace.com
netflix.com
nytimes.com
optiar.com
pandora.com
paypal.com
people.com
photobucket.com
reference.com
reuters.com
simplyhired.com
suite101.com
target.com
thefind.com
tmz.com
tumblr.com
twitpic.com
twitter.com
typepad.com
usps.com
walmart.com
washingtonpost.com
weather.com
weatherbug.com
webmd.com
wellsfargo.com
whitepages.com
wikia.com
wikipedia.org
windows.com
wordpress.com
wunderground.com
yahoo.com
yellowpages.com
yelp.com
youtube.com
zynga.com
Table 4:
Random selection of 500 sites
24hourpet.com
350smallblocks.com
411webdirectory.com
72712.com
787787.com
aalas.org
aartkorstjens.nl
abbottbus.com
accutronix.com
ad-mins.com
adaholicsanonymous.net
adamscountyhousing.com
adorabubbleknits.com
advanceexpert.net
agnesfabricshop.com
air-land.com
alignmed.com
allstarsportspicks.com
almostfrugal.com
amandabeard.net
amazingamberuncovered.com
amigofoods.com
ancestryhost.org
appcelerator.com
ar-10-rifles.com
arcadianhp.com
archerairguns.com
ariionkathleenbrindley.com
arizonabattery.com
arizonahealingtours.com
asbj.com
asiainc-ohio.org
askittoday.com
askmd.org
asla.org
astonhotels.com
atbfinancialonline.com
athenscountyauditor.org
auburncountryclub.com
auctioneeraddon.com
autorepairs-guide.info
avistarentals.com
awildernessvoice.com
azbiz.com
babygotfat.com
backwoodssurvivalblog.com
badvoter.com
bargainmartclassifieds.com
battlestargalactica.com
beaconschool.org
beatport.com
beechwoodcheese.com
benedictinesisters.org
best-hairy.com
bestshareware.net
bethpage.coop
bf1systems.com
bibleclassbooks.com
bibleverseposters.com
bird-supplies.net
blackopalmine.com
bladesllc.com
blogmastermind.com
bluetoothringtones.net
body-piercing-jewellery.com
bookjobs.com
boulevardsentinel.com
boyntonbeach.com
bradcallen.com
brealynn.info
brill.nl
broncofix.com
buckstradingpost.com
bucky.com
buyhorseproperties.com
bwcnfarms.com
cabands.com
cabins.ca
cafemomstatic.com
capitalgainsmedia.com
cardiomyopathy.org
careerstaffingnow.com
carrollshelbymerchandise.com
cashloanbonanza.com
cateringatblackswan.com
cdcoupons.com
charterbank.com
674
I/S: A JOURNAL OF LAW AND POLICY
[Vol. 7:3
charterco.com
chashow.org
cheapusedcars.com
childrensheartinstitute.org
christmas-trees-wreathsdecorations.com
clarislifesciences.com
claytonihouse.com
clcofwaco.org
clean-your-pcc1.com
cloningmagazine.com
clubdvsx.com
codeproject.com
coltbus.org
coltranet.com
columbusparent.com
complxregionalpainsyndrome.net
computervideogear.com
conservativedvds.com
cookbooksforsale.com
coolatta.org
corvettepartsforsale.com
countrymanufacturing.com
cpainquiry.com
crazyawesomeyeah.com
crbna.com
creatupropiaweb.com
credit-improvers.net
creditcaredirect.com
crowderhitecrews.com
culttvman2.com
curepeyronies.net
curiousinventor.com
dansdidnts.com
dardenrestaurants.com
datingthoughts.com
dcso.com
de.ms
dealante.com
dealsoutlet.net
delti.com
desktops.net
detroitmasonic.com
digitalmania-online.com
disasterreliefeffort.org
dividend.com
dmvedu.org
dobbstireandauto.com
dodgeblockbreaker.com
donlen.com
donnareed.org
dorpexpress.com
dukeandthedoctor.com
dvdsetcollection.com
easypotatosalad.com
educationalrap.com
elmersgluecrew.com
emailfwds.com
emailsparkle.com
empty.de
ereleases.com
escapethefate.net
eurekasprings.org
evanity.com
expowest.com
eyesite.org
fashionreplicabags.com
fast-guardcleaneronpc.net
fatlove.net
fearrington.com
fitnesshigh.com
flatpickdigital.com
fleetairarmarchive.net
florahydroponics.com
floridafishinglakes.net
flyingbarrel.com
foodtimeline.org
foreclosuredlist.com
foreclosurepulse.com
forzion.com
fourreals.com
free-party-games.com
freepetclinics.com
freshrewardscore.com
fretwellbass.com
fukushima.jp
fullertontitans.com
fundmojo.com
fusioncrosstraining.com
ga0.org
gaara.ws
ganstamovies.com
gemission.org
genesearch.com
gerdab.ir
getanagentnow.com
girlfights.com
globalfire.tv
gmeil.com
gogivetraining.com
gold-speculator.com
goldenstaterails.com
gomotobike.com
goodseed.com
googgpillz.com
gordonbierschgroup.com
gotostedwards.com
goutresource.com
graceandtruthbooks.com
grooveeffect.com
hairybulletgames.com
hallfuneralchapel.com
hallmarkchannel.tv
hammondstar.com
happyshoemedia.com
healthcaresalaryonline.com
hills.net
historyofnations.net
hoover-realestate.com
horseshoes.com
hostpapa.com
hoveringads.com
howyouspinit.com
hp-lexicon.com
hsbc.com.mx
hvk.org
icdri.org
idxcentral.com
ieer.org
iflextoday.com
indianapolis.com
infinitiofdenver.com
inhumanity.com
inria.fr
intelos.com
iphonealley.com
iris-photo.com
itmweb.com
itvs.com
itw.com
ivanview.com
jacksoncountygov.com
japanautopages.com
jesus-passion.com
jetbroadband.com
jimmycanon.com
josejuandiaz.com
joybauernutrition.com
junohomepage.com
jwsuretybonds.com
kbduct.com
kimballarea.com
kitten-stork.com
knittingpureandsimple.com
kpcstore.com
lacosteshoes.us
lafarge-na.com
lakeareavirtualtours.com
latinrank.com
layover.com
life-insurance-quotes-now.com
lifepositive.com
liftopia.com
like.to
lintvnews.com
logodogzprintz.com
lstractorusa.com
ltwell.com
lydiasitaly.com
madisonindiana.org
magnetworks.com
marketminute.com
mastiffrescue.org
maurywebpages.com
mayoarts.org
mcpherson.edu
mcswain-evans.com
measurebuilt.com
meiselwoodhobby.com
menalive.com
merbridal.com
michiganford.com
microcenter.com
miltonmartintoyota.com
minki.net
mirdrag.com
missourimalls.net
mistercater.com
mitutoyo.com
mmodels.com
modbee.com
moforaja.com
moldingjobs.com
moneytip.com
moselhit.de
motomatters.com
motosolvang.com
movefrontlistencom.com
mule.net
mundofree.com
my-older-teacher.net
mycomputerclub.com
mylexia.com
mypickapart.com
mystic-nights.com
mysticalgateway.com
mysticlake.com
mytableware.com
nationalcoalition.org
naturalmedicine.com
ncbeachbargains.com
ncgold.com
nec.jp
nekoarcnetwork.com
newcracks.net
newlawyer.com
newmacfurnaces.com
newscoma.com
nexstitch.com
nhlottery.com
nittygrittyinc.com
nobledesktop.com
nottslad.com
npg.org.uk
nscale.org.au
nwlanews.com
ocharleydavidson.com
offscreen.com
oixi.jp
olympus-imaging.com
omahaimpound.org
onelasvegas.com
onepaycheckatatime.com
optimost.com
orchidphotos.org
outbackphoto.com
ownacar.net
ownthenight.com
p2pchan.info
parkcityinfo.com
parksandcampgrounds.com
paulrevereraiders.com
pedalmag.com
pennhealth.com
performancehobbies.com
perthmilitarymodelling.com
pet-loss.net
petworld.com
pgamerchandiseshow.com
planfor.fr
plantronics.com
pngdealers.com
polapremium.com
policespecial.com
pphinfo.com
promotersloop.com
promusicaustralia.com
prophecykeepers.com
prostockcars.com
psychprog.com
puppyluv.com
puppystairs.com
q102philly.com
qdobamail.com
quickappointments.com
quickertek.com
quickfinder.com
raleyfield.com
raphaelsbeautyschool.edu
rareplants.de
rax.ru
readingequipment.com
realtracker.com
rentonmclendonhardware.com
restaurantsonlinenow.com
resveratrol20.com
reu.org
revengeismydestiny.com
ripcordarrowrest.com
rpmrealty.com
rrrmusic.com
rumc.com
russellrowe.com
russianbooks.com
sacramentoconventioncenter.com
salonhogar.net
santaslodge.com
scalemodeltoys.com
scanner-antispyh4.com
sccmo.org
scgsgenealogy.com
scottpublications.com
sdchina.com
search4i.com
searchgenealogy.net
section4wrestling.com
seelyewrightofpawpaw.net
seewee.net
sheisladyboy.com
shipleydonuts.com
shootangle.com
shouldersurgery.org
simcomcity.com
simplesignshop.com
socalmls.com
sohojobs.org
southwestblend.com
spanderfiles.com
spatechla.com
squireparsons.com
srtk.net
standup2cancer.org
start-cleaning-business.com
statenotary.info
stimuluscheck.com
stjosephccschool.net
stmaryland.com
storagedeluxe.com
stranges.com
sud.org.mx
sudzfactory.com
summer-glau.net
sungardpsasp.com
sureneeds.com
sweetdealsandsteals.com
sweettattianna.com
swingstateproject.com
syque.com
tackletog.com
tamusahr.com
tasteequip.com
tecnocino.it
tempgun.com
texasthunder.com
the-working-man.com
theacademic.org
theacorn.com
theauctionblock.org
thedailymaverick.co.za
thedigitalstory.com
theelator.com
thegardenhelper.com
thegriddle.net
thegunninghawk.com
theinductor.com
theliterarylink.com
themainemarketplace.com
themodelbook.com
thenextgreatgeneration.com
thepromenadebolingbrook.com
therichkids.com
threebarsranch.com
thunderracing.com
tickledpinkdesign.net
tj9991.com
todayswebspecial.com
top-forum.net
toponlinedegreechoices.com
tracksideproductions.com
trafficinteractive.com
transfermarkt.de
treadmillstore.com
tri-une.com
tropicalfishfind.com
trycovermate.com
ttsky.com
twaa.com
twtastebuds.com
ualpaging.com
uniquetruckaccessories.com
univega.com
unon.org
uprius.com
usaplforum.com
uscoot.com
v-picks.com
vacuumtubeonline.com
valueoasis.com
vandykerifles.com
vcbank.net
vet4petz.com
vidaadois.net
videocelebs.org
visitshenandoah.com
vitamin-supplementreference.com
vitruvius.be
walmartdrugs.net
wcha.org
weddingnet.org
wefong.com
wegotrecords.com
weplay.com
wetzelcars.com
wi-fihotspotlist.com
wiara.pl
wildfoodadventures.com
willyfogg.com
windsorhs.com
wippit.com
womantotal.com
woodauto.com
woodenskis.com
woollydesigns.com
woolrichhome.com
worldcrops.org
worldmapfinder.com
worlds.ru
wwwcoder.com
wxc.com
ymcatriangle.org
youthoutlook.org
ywcahotel.com
zabaware.com
ziua.ro
APPENDIX B
We analyzed LSOs to determine if sites were respawning or using
LSOs with unique identifiers. We cannot definitively state how LSOs
are used, as we discuss in the body of the paper, because we did not
have access to the data sites store remotely. Below are the details of
some example LSOs we collected. We provide these examples as a
qualitative illustration of the range of data storage we observed. While
we saw several third party LSOs on multiple sites, we only discuss
them once to remove duplication.
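The core of the comparison described above — flagging variables whose values differ between two otherwise identical crawls on laptops A and B — can be sketched as follows. This is our own illustrative sketch, not the authors' tooling; the function name and sample data are hypothetical:

```python
def candidate_identifiers(lso_a: dict[str, str], lso_b: dict[str, str]) -> set[str]:
    """Flag variables present in the same LSO on both laptops whose values
    differ: identical values cannot distinguish users, differing ones might."""
    shared = lso_a.keys() & lso_b.keys()
    return {name for name in shared if lso_a[name] != lso_b[name]}

# Hypothetical LSO contents captured on laptops A and B for one site
laptop_a = {"volume": "75", "computerguid": "0f8fad5b-d9cb-469f-a165-70867728950e"}
laptop_b = {"volume": "75", "computerguid": "7c9e6679-7425-40de-944b-e07fc1f90ae7"}

print(candidate_identifiers(laptop_a, laptop_b))  # {'computerguid'}
```

A variable flagged this way is only a candidate: as the appendix notes, without access to back-end data one cannot say how the value is actually used.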
I. TOP 100
As summarized in Figure 2, we found twenty sites with a
#SharedObjects directory. Of those twenty sites, we classified eight
sites as not having a unique identifier. We classified eight sites as
having a unique identifier with LSO strings matching respective HTTP
cookies. We classified one site as having a unique identifier with
HTTP cookies matching transplanted LSOs. Two sites respawned.
Finally, there was one site we could not classify. Examples follow.
A. NO UNIQUE IDENTIFIERS
If the content in the LSOs on laptops A and B is identical, it cannot
be used to uniquely identify users. We primarily found LSOs that
appeared to be set by first parties to test functionality, but were not in active use.51 We found several examples:
• We found a first party LSO with a name that contained the word “test” that set a variable named cookie to the value Chocolate Chips.

• We found a first party LSO with the variable testValue set to test.

• We found a first party LSO with the variable sectionName set to Auto.
51 As mentioned in the body of the paper, we did not interact with Flash content on the websites. It is reasonable to assume some of these LSOs store additional data based on user action, but presumably that will not be designed to respawn HTTP cookies or to uniquely identify computers across multiple websites.
• We found a first party LSO that created an empty object, with no variables.

• We found a first party LSO with the variable path with no value set.

• We found a first party LSO with the variable animation set to zero.
B. UNIQUE IDENTIFIERS
We found several different types of LSOs containing unique
identifiers. Examples follow:
• A first party LSO contained a variable named computerguid, which stored a value in the format of eight characters, four characters, four characters, four characters, and twelve characters, all in hexadecimal, separated by dashes. This is the format for a GUID (globally unique identifier) and we assume throughout that anything in this format is a GUID.52 The GUID did not appear in the HTTP cookies.

• A third-party LSO with a name that suggests information mining contains a single variable, crumbID, which is uniquely identifiable.
• A first-party LSO contained six variables. Four of those six were identical on both laptops, and therefore not unique identifiers. The fifth contained no content. Finally,
52 GUIDs are a specific style of random number designed to minimize duplication, so they are ideal for creating unique identifiers. They are often used as keys into SQL databases. Some video and audio clips are referenced by GUID; we could imagine that a site used GUIDs not to identify users, but to identify content. However, that seems unlikely in all of the cases we observed. If GUIDs were being used to identify content, we would expect to see identical GUIDs on both laptops, since by not interacting with Flash components, we had the same default content in both cases. Instead, we saw different GUIDs on different laptops, which is the behavior we would expect if GUIDs were instead used to identify website visitors.
anonymousAuthToken contains a unique
identifier. It does not appear in HTTP cookies.
Meanwhile, a second LSO differs only by the
addition of a variable named routeid, which
contains a timestamp.
• We found a third-party LSO storing a great deal of analytics data about what content we viewed, including the URLs to each image loaded on the site and the timestamps from when we viewed those images. The only data that changed, however, was the timestamps, suggesting this LSO did not uniquely identify computers.

• A first-party LSO with a name suggesting statistics about videos contains three objects. Each object contains a seven-digit bytes variable and a time variable with sixteen digits of decimal precision. Presumably the three objects reflect the three times we visited the site and do not uniquely identify computers.
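The 8-4-4-4-12 hexadecimal GUID layout described in the first bullet above can be recognized mechanically. A minimal sketch — the regular expression and function name are ours, not from the paper:

```python
import re

# GUID/UUID layout: 8-4-4-4-12 hexadecimal characters separated by dashes
GUID_RE = re.compile(r"^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$")

def looks_like_guid(value: str) -> bool:
    """Return True if value matches the 8-4-4-4-12 GUID format."""
    return bool(GUID_RE.match(value))

print(looks_like_guid("0f8fad5b-d9cb-469f-a165-70867728950e"))  # True
print(looks_like_guid("not-a-guid"))                            # False
```

Matching the format, of course, says nothing about how a site uses the value — which is exactly the limitation footnote 52 discusses.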
C. RESPAWNING
We found two instances of respawning HTTP cookies from LSOs,
both of which we confirmed have since stopped respawning:
• We found a third-party LSO with a ten-digit variable, uID, plus additional information about the web pages we visited. The uID content also appeared in the first party HTTP cookies for that site. On our final pass, after we deleted the HTTP cookie, it respawned with the content stored in the LSO.

• We found a first-party LSO with a ten-digit variable, uuID, which is formatted as a Universally Unique Identifier (UUID). The uuID content also appeared in the first-party HTTP cookies for that site. On our final pass, after we deleted the HTTP cookies, they respawned with the content stored in the LSO.
In addition, this site has a second LSO with a
variable isReportSent set to true.
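The respawning signature described in both cases above — a deleted HTTP cookie reappearing with a value still held in an LSO — can be sketched as a simple cross-check. This is our own hypothetical illustration of the logic, not the authors' instrumentation:

```python
def find_respawned(lso_values: set[str], cookies_after_delete: dict[str, str]) -> dict[str, str]:
    """After HTTP cookies are deleted, any cookie that reappears carrying a
    value still held in an LSO is a respawning candidate."""
    return {name: value for name, value in cookies_after_delete.items()
            if value in lso_values}

# Hypothetical capture: the uID stored in a Flash LSO reappears as a cookie
lso_values = {"0123456789"}
cookies_after_delete = {"uID": "0123456789", "session": "abc"}

print(find_respawned(lso_values, cookies_after_delete))  # {'uID': '0123456789'}
```

A match is strong evidence of respawning precisely because the cookie value could only have come from client-side storage that survived the deletion.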
D. LSOS ON ONLY ONE LAPTOP
In several cases we found LSOs on only one of the two laptops we used, so we could not compare laptops A and B to confirm whether the content was unique. Examples:
• We found a first party LSO with a single variable volume set to the value seventy-five. Even without a second instance to compare to, presumably this does not uniquely identify computers, both based on the name and because two digits lack sufficient entropy to uniquely identify computers visiting the website.

• We found a third party LSO with four variables: count and type are set to one; id is set to synced; a date appears to be a timestamp. Presumably this does not uniquely identify computers.

• We found a site with a third party LSO named to suggest the data contained within is anonymous, containing a variable token set to a fourteen-character string. Without a second LSO to compare to, we cannot confirm it is a unique identifier.
• We found a site with the same directory structure on both laptops, but not the same LSOs. Both laptops had a third-party LSO that contained a created variable with a timestamp. On one of the laptops we also found what appears to be third-party analytics data, with an LSO that names two commercial analytics companies as part of the data contained within the same LSO. Another third-party LSO appeared on both laptops but with different content. In one it only had a created variable with a timestamp, as we had seen before. The second laptop, based on the LSO names,
contains an LSO with data for re-targeting, plus an LSO for opting out, which contains a timestamp but no further information to help understand what the opt out is for. Finally, in a third-party LSO with a name that associates it with advertising, both laptops have a variable PUI set to a thirty-two-character hexadecimal unique identifier.
• We found a site that had some LSOs that were on both laptops, and some that were not. Of the LSOs we saw on both laptops and could contrast, most were identical data that seemed to be used for analytics, with the exception of a substring of 1280x800 on one laptop where the other had 1024x768. These may just be differences in the laptop screen resolution. The LSOs also stored characteristics about the computers (the OS, ActiveX, etc., similar to user agent data). We also found a timestamp, plus two variables named id set to forty-character strings in two different LSOs per laptop.
• From the same site, we found additional third-party LSOs on only one of the laptops, from three different third parties. In one case we found a variable userId with a sixteen-character hexadecimal string. This value also appears in HTTP cookies from a different third party, as well as in an HTTP cookie set by the first party. Because we only captured the LSO on one laptop, we cannot test to see if it respawned. We do find it unusual to see one identifier appear in three different places, shared between different entities. A second third-party LSO set five variables that appear related to video, all set to the value one. The final third-party LSO we had seen at a different website. The structure was similar at both websites, with an LSO name indicating it contains data about users, and four nested objects. One of the four objects was named vacation on one site and synced on the
other, and a variable id is set to these strings
(vacation or synced, respectively). It is unclear
what this variable is for or what it does.
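The entropy argument used in the first bullet of this section — that a two-digit value like a volume setting cannot distinguish visitors, while a long hexadecimal string can — is easy to make concrete. A sketch under our own framing (the helper name is ours):

```python
import math

def id_bits(num_values: int) -> float:
    """Bits of entropy available from a value with num_values possible states."""
    return math.log2(num_values)

# A two-digit value (00-99) carries under 7 bits: at most 100 distinct visitors.
print(round(id_bits(100), 2))  # 6.64

# A thirty-two-character hexadecimal string carries 128 bits: for any realistic
# visitor population, effectively unique per computer.
print(id_bits(16 ** 32))  # 128.0
```

This is why the classification in the appendix keys on string length: short values simply cannot encode enough states to single out one computer among a site's visitors.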
II. RANDOM 500
As summarized in Figure 3, we found forty-one sites with a
#SharedObjects directory. Of those forty-one sites, we classified
twenty-three sites as not having a unique identifier. We classified
fourteen sites as having a unique identifier with LSO strings matching
respective HTTP cookies. We classified three sites as having a unique
identifier with HTTP cookies matching transplanted LSOs. Zero sites
respawned. Finally, there was one site we could not classify. Examples
follow.
A. NO UNIQUE IDENTIFIERS
All but one of the sites in this category used the same third-party
LSO from a video player. They all saved a single LSO with the variable
volume set to 100. Because we did not interact with any videos, we did
not collect any additional LSOs beyond the initial sound setting. The
protocol we followed would not catch any analytics data or unique
identifiers that are introduced after users interact with Flash content
(watch the video, visit another video, etc.). We may be undercounting
the number of sites that can uniquely identify their visitors due to
LSOs, but it stands to reason that a company setting LSOs in order to uniquely identify visitors would not wait for user interaction to do so.
they use for which types of tracking.
The remaining site in this category has a single first-party LSO.
The name of the LSO refers to video, and it contains a variable played
set to true.
B. UNIQUE IDENTIFIERS
We found several different types of LSOs containing unique
identifiers. Examples follow:
• A third-party LSO contained a single variable, preferedBitrate. The laptop on a fast university connection with a lot of competition for bandwidth had a value of 875782, while the laptop on a slower home connection with no
competition for bandwidth had a value of
2291302. This is an example of a unique value
that very likely was not used to identify users or
computers, but rather to serve content better.
• We found one first-party LSO. The name of the LSO suggests it was used for a video game. It contains three variables, bestScore and ranking, both set to zero, and id, which is a twenty-six-character alphanumeric string. On the final visit, with LSOs from laptop A copied to laptop B, the LSO was overwritten with a completely new value. This site stored no HTTP cookies at all. Particularly because this is a first party LSO, the id value may be used for personalization, perhaps to set a high score name, rather than to uniquely identify a specific user across multiple websites. Without knowledge of backend practices, all we can definitively say is that the id is a unique identifier.
• A third-party LSO contained a variable named last_time, which appears to be a Unix timestamp. The LSO also contained a variable named session_id, which contained a GUID. Values for session_id were unique on laptop A, laptop B, and the final pass: the id is a unique identifier, but it does not persist over time.
• A third-party LSO contained a single variable. The variable name ended in UserId. It was formatted as a GUID. We saw LSOs from this third-party on multiple sites. In some cases, but not always, the LSO contained data that was also contained within an HTTP cookie. This was not a case of respawning (if we deleted the HTTP cookie, the data did not reset from the LSO), but seeing the same data duplicated in LSOs and HTTP cookies is unusual. On one site there was only an LSO with no corresponding HTTP cookie from the third party.
• A third-party LSO contained a variable that was partially the same on both laptops, but an additional eighteen digits were unique. We found HTTP cookies from the same third-party that contained GUIDs and unique identifiers, but those were unrelated to the LSOs.

• A third-party LSO with a pathname containing the word “analytics” contained a variable named id with a unique forty-character string. Interestingly, there were no HTTP cookies for this third-party, only the LSO, suggesting the analytics company may have completely replaced HTTP cookies with LSOs.
• A third-party LSO contained a variable named computerID with an eighty-eight-character string. The eighty-eight-character string was also stored in an HTTP cookie from the same third party. On the final visit, with LSOs from laptop A copied to laptop B, the LSO was overwritten with a new value, which was also in the HTTP cookie. This is not a case of respawning since the value was new, rather than taken from the existing LSO. It is unusual that a third-party LSO and a first-party HTTP cookie have a shared unique identifier.
• A third-party LSO contained two variables named cid and sid, which contained twenty-two-digit unique identifiers. Within the music industry, sid is used for song ID, and cid is coop ID, which would appear to uniquely identify content rather than computers. However, the sid and cid differ on laptops A and B, even though we did not load different songs. We also note that the third party advertises gathering user statistics as part of their product, in addition to their main offering of a customer support widget. It is unclear why a customer support widget would have cid and sid variables. Interestingly, not only do the cid/sid
values appear in HTTP cookies, they are
contained in cookies from the first-party. It is
unusual that a third-party LSO and a first-party
HTTP cookie have a shared unique identifier.
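Several bullets above hinge on whether a value stored in an LSO also appears somewhere in the site's HTTP cookies. That cross-check can be sketched as follows; the function and variable names are our own hypothetical illustration:

```python
def shared_identifiers(lso_vars: dict[str, str], cookies: dict[str, str]) -> dict[str, str]:
    """Map each LSO variable to the cookie name holding the same value --
    the cross-check used to spot identifiers shared between Flash and HTTP storage."""
    by_value = {value: name for name, value in cookies.items()}
    return {var: by_value[val] for var, val in lso_vars.items() if val in by_value}

# Hypothetical capture: a third-party LSO's cid value also sits in a first-party cookie
lso = {"cid": "1234567890123456789012", "ts": "1317000000"}
cookies = {"first_party_cid": "1234567890123456789012"}

print(shared_identifiers(lso, cookies))  # {'cid': 'first_party_cid'}
```

As the text notes, such sharing between a third-party LSO and a first-party cookie is unusual, and the cross-check alone cannot tell which side wrote the value first.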
C. LSOS ON ONLY ONE LAPTOP
We also found LSOs on only one laptop, so we could not compare laptops A and B to confirm whether the content was unique. Examples:
• We found a third-party LSO with “player” in the name, with a variable volume set to .8. While we only saw this on one laptop, a single digit cannot be uniquely identifying, even without a second laptop to confirm the volume is always set to 80% of the maximum.

• We found a third-party LSO containing the number of bytes and the time, presumably for a video.

• We found a third-party LSO that appears to primarily hold timestamps, with a few other values that are too short to be uniquely identifying.

• We found a third-party LSO with several complicated data objects. While there was a string with enough characters to uniquely identify a computer, we have no idea what it is used for, and it is contained within an object simply named data.

• We found a third-party LSO with two variables. One was a Unix timestamp, and presumably not used to uniquely identify computers. The other was named crumbId and set to a GUID, and is presumably used to uniquely identify computers.
• We found an LSO with “analytics” in the LSO’s name, set by a third-party advertising network.
The LSO contained quite a lot of data, including
timestamps and what appears to be a unique
identifier, plus information that seems to be
about ads viewed. Based on variable names, the
LSO stores the referrer page visitors viewed
prior to the current page, the number of visits to
the site, the date of the first and last visits, and
the time spent viewing the page.
I/S: A JOURNAL OF LAW AND POLICY FOR THE INFORMATION SOCIETY
Status Update: When Social Media Enters the
Courtroom†
LINDSAY M. GLADYSZ*
Abstract. In December 2010, Rodney Knight Jr. broke into
Washington Post columnist Marc Fisher’s Washington D.C.
home, stealing two laptops, a winter coat, and cash. Proud
of his heist, Knight uploaded pictures of himself and his loot
onto Facebook using one of his newly pilfered computers. In
an even more audacious move, Knight posted a picture of
himself with a handful of the stolen cash on the victim’s
Facebook page. The Assistant United States Attorney on the case, also adept at using the Internet,
utilized the information to obtain a warrant and arrested
Knight within a month. In the face of such evidence against
him, Knight pled guilty. While this may seem like an
extreme example, the use of information from Facebook and
other social media sites is becoming an important part of
police investigations and both criminal and civil litigation.
Due to the popularity and prevalence of social media, there
is a multitude of information stored online and on third
party servers. Users of social media have become
accustomed to posting information depicting every minute
detail of their lives, allowing friends and families to
communicate easily and often. Status updates, personal
information, and photographs loaded onto social media
websites have become important sources of discovery in
† I would like to credit my note advisor, Professor Peter Swire, as well as the I/S Staff, for their profound help on this Note. I would also like to extend sincere thanks to my wonderful parents and terrific friends for their support and to the ever-inspirational Brady Hoke.

* J.D. Candidate, expected 2012, the Ohio State University Moritz College of Law; B.A. 2009, University of Michigan.
litigation, as these sources make it easier and cheaper to
obtain information than ever before. However, courtroom
use of information from Facebook and other popular
websites often happens largely unbeknownst to users. While
E-discovery is an important tool for litigators, what
privacy interests are we giving up for the use of this
information? This Note addresses the current state of
privacy law concerning electronic communications, E-evidence use, and what steps should be taken to protect
users’ privacy.
I. INTRODUCTION
Thanks to the wonders of social media, socialization and
networking have evolved irrevocably.1 With Facebook’s active user
base of more than 800 million,2 the world is now connected and
sharing information like never before. From constant status updates
to photographs, users post and share information about their personal
lives, often without considering any repercussions of such uploads.
Law enforcement officials and legal professionals, realizing the value
of such highly personal information on Facebook, have increasingly
attempted to use such information as evidence at trial.3 In 2010, a
New York court ruled private information taken from a plaintiff’s
social networking site to be admissible because of its material and
necessary nature, broadening previous standards of admissibility.4
But should information gathered from social media be discoverable
and admissible in the courtroom?
This Note examines the current legal landscape concerning the use
of E-evidence from social media and the privacy and policy concerns
that arise with such use. Part II of this Note gives a brief overview of
social networking media and its effect on the law and current legal
discourse. Part III reviews the most common types of cases that have
seen admission of evidence from social media and recent cases dealing
with the admission of such evidence. Part IV looks at prevailing
1 RICHARD SUSSKIND, THE END OF LAWYERS? RETHINKING THE NATURE OF LEGAL SERVICES 77 (2010).

2 Facebook Statistics, FACEBOOK, http://www.facebook.com/press/info.php?statistics (last visited Feb. 1, 2012). Additionally, half of these active users log on every day. Id.

3 Andrew C. Payne, Note, Twitigation: Old Rules in a New World, 49 WASHBURN L.J. 841, 845 (2010).

4 Romano v. Steelcase Inc., 907 N.Y.S.2d 650, 651 (N.Y. Sup. Ct. 2010).
privacy doctrines and the levels of privacy that govern current policy
considerations, applying these levels to Facebook content. Finally,
Part V examines the future of social media in the courtroom and
argues for stricter standards of admissibility and user protection.
II. WELCOME TO THE CYBERWORLD
From humble beginnings at Harvard University in 2004, Facebook
first expanded to other colleges, then eventually to anyone with an
email address and a desire to socialize online.5 Facebook’s ever-evolving nature calls into question what reasonable expectations of
privacy users have had at various points in its existence. What users
may have considered quite private during Facebook’s first years may
no longer be seen as secure. Likewise, once information is uploaded, it
is stored on Facebook’s servers indefinitely, even when it is removed
from the actual website.6 Because of the expansion in the user base—
from American college and university students to anyone in the world
with Internet access—and the numerous changes in the interface,
what constitutes a reasonable expectation of privacy is truly a good
question. It is thus useful to examine the current statutory scheme
regarding electronic communications, including social networking
media.
A. THE FOURTH AMENDMENT
The Fourth Amendment protects the rights of United States
citizens to be secure from “unreasonable searches and seizures.”7 But
what qualifies as “reasonable” when it comes to social media remains
5 Facebook Company Timeline, FACEBOOK, http://www.facebook.com/press/info.php?timeline (last visited Feb. 1, 2012).

6 Bill Meyer, Facebook Data-retention Changes Spark Protest, CLEVELAND.COM (Feb. 17, 2009, 3:25 P.M.), http://www.cleveland.com/nation/index.ssf/2009/02/facebook_dataretention_changes.html; see also FACEBOOK DATA USE POLICY, http://www.facebook.com/about/privacy/your-info#deleting (last visited Feb. 1, 2012). If a user does delete his profile with potentially admissible evidence, Facebook will restore access if it is possible. DIGITAL FORENSICS & EDISCOVERY ADVISORY – FACEBOOK SUBPOENAS, CONTINUUM WORLDWIDE LEGAL SOLUTIONS (Oct. 13, 2010), available at http://www.continuumww.com/Libraries/PDFs/DF_eD_101310.sflb.ashx. Additionally, Facebook does urge users to use other means to obtain information if possible. Id.

7 U.S. CONST. amend. IV.
an open question.8 As Justice Rehnquist stated in United States v.
Knights:
The touchstone of the Fourth Amendment is
reasonableness, and the reasonableness of a search is
determined by assessing, on the one hand, the degree
to which it intrudes upon an individual’s privacy and,
on the other, the degree to which it is needed for the
promotion of legitimate governmental interests.9
This balancing act often necessarily places individual interests at odds
with those of the judicial system. Individuals want to protect their
information from outside intrusion, but such information is at times
essential to the promotion of justice. The current test for
reasonableness, established in Katz v. United States and explored
further in Part III of this Note, contains both subjective and objective
components.10 As a result, there are no easy answers as to what is
reasonable—especially given the rapidly evolving nature of today’s
Web.
The Supreme Court has said that the Fourth Amendment “protects
people, not places.”11 Though the Framers of the Constitution had no
way of predicting the technological advances that have developed
since America’s birth, their ultimate intention was to protect
individual liberties from unreasonable government interference.12 The
recognition of this intention has resulted in the broad interpretation
(and reinterpretation) of the language of the Fourth Amendment with
the advent of technological and social advances, which is necessary in
order to develop the scope of the protections inherent in the
Amendment.13
See Orin S. Kerr, Four Models of Fourth Amendment Protection, 60 STAN. L. REV. 503,
504–05 (2007).
8
United States v. Knights, 534 U.S. 112, 118–19 (2001) (quoting Wyoming v. Houghton,
526 U.S. 295, 300 (1999)).
9
10 Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring).
11 Id. at 351.
12 Alexander Scolnik, Note, Protections for Electronic Communications: The Stored Communications Act and the Fourth Amendment, 78 FORDHAM L. REV. 349, 352–53 (2009).
13 Id. As highlighted in Scolnik’s article, even Originalist Justice Antonin Scalia commented in Kyllo v. United States, “It would be foolish to contend that the degree of privacy secured
692
I/S: A JOURNAL OF LAW AND POLICY
[Vol. 7:3
Simply put, the Fourth Amendment limits government intrusion
into private individuals’ lives.14 Despite this, there have been countless
instances where courts found other legitimate government interests
that outweighed Fourth Amendment privacy protections. In Katz v.
United States, the Supreme Court held that the Fourth Amendment
must be construed based on what the writers of the Constitution
considered “unreasonable search and seizure” and does not absolutely
shield an individual’s privacy from investigation.15 Under this
Originalist perspective, the Fourth Amendment does not apply as
strongly to electronic information, which exists outside of the home,
as it does to physical objects contained in the home, such as papers or
other items.16 Additionally, because of the “reasonable expectation of
privacy” test and concerns about overreaching government intrusions,
especially those pertaining to emerging technologies, Congress began
promulgating statutes defining reasonable expectations of privacy as
applied to specific items and places, leading to the first wiretapping
laws and other related statutes.17
In defining the limits of the Fourth Amendment, the legislature
and courts have struggled with balancing an individual’s privacy
concerns with the compelling need for information. Recent years have
seen the enactment of the Patriot Act and Freedom of Information
Act, which have reframed some privacy rights and invoked the
compulsory release of stored information at both the federal and state
level.18 Such statutes have allowed law enforcement greater use of
surveillance mechanisms, especially by electronic means, and have
bypassed some of the statutory protections in place.
to citizens by the Fourth Amendment has been entirely unaffected by the advance of
technology.” Kyllo v. United States, 533 U.S. 27, 33–34 (2001). A staunch Originalist,
Scalia nevertheless observed the dangers inherent if the Fourth Amendment was not
construed in a flexible manner. Such flexible interpretation would thus ensure
“preservation of that degree of privacy against government that existed when the Fourth
Amendment was adopted.” Id. at 34.
14 Nicholas Matlach, Comment, Who Let the Katz Out? How the ECPA and SCA Fail to Apply to Modern Digital Communications and How Returning to the Principles in Katz v. United States Will Fix It, 18 COMMLAW CONSPECTUS 421, 422 (2010).
15 Katz, 389 U.S. at 361.
16 Sarah Salter, Storage and Privacy in the Cloud: Enduring Access to Ephemeral Messages, 32 HASTINGS COMM. & ENT. L.J. 365, 370–71 (2010).
17 Matlach, supra note 14, at 424–26.
18 Patricia L. Bellia, Designing Surveillance Law, 43 ARIZ. ST. L.J. 293, 311 (2011); see also Salter, supra note 16, at 372.
B. THE 2006 E-DISCOVERY AMENDMENTS
Due to the rapid development and evolution of technology, Congress and state legislatures have had a difficult time keeping the law current, often leaving courts to construct new common law through their jurisprudence.19 Despite the difficulties inherent in enacting and amending laws to keep pace with new information technologies, there have been several advances.
In 2006, the E-discovery amendments were added to the Federal
Rules of Civil Procedure, creating distinctions between paper and
electronic documents.20 Before the adoption of formal rules governing
E-discovery, courts generally admitted any relevant computerized
evidence.21 However, as technology progressed and the admission of
E-evidence was litigated more often, the courts and Congress
came to a consensus that “digital is different” and began looking at
revisions to the existing statutes.22 An amendment to Rule 34 added
the term “electronically stored information” to the types of documents
that may be requested23 and specified procedures for requesting
electronically stored information.24 The Advisory Committee defined
“electronically stored information” rather broadly, including
information “‘stored in any medium’ to encompass future
developments in computer technology.”25
The Committee also made changes to Rule 26’s duty to disclose,
creating “specific limitations on electronically stored information.”26
Under the new two-tiered approach: (1) parties must disclose information if producing it poses no undue burden or cost (by request or court order); and (2) the court may still order discovery if it finds good
19 Payne, supra note 3, at 850. As one scholar humorously observed: “Modern communication follows Moore’s law: technology grows exponentially. Congress follows the turtle law: slow and steady wins the race.” Matlach, supra note 14, at 457.
20 Payne, supra note 3, at 856.
21 Id. at 851.
22 Id.
23 FED. R. CIV. P. 34(a)(1)(A).
24 FED. R. CIV. P. 34(b)(1)–(2).
25 FED. R. CIV. P. 34 advisory committee’s note (2006 amendment).
26 FED. R. CIV. P. 26(b)(2)(B).
cause—even if a showing of undue burden or cost is made.27 These moves by the legislature, while seemingly minor, indicate a congressional stance toward disclosure, rather than disallowance, of electronic information.
C. ELECTRONIC COMMUNICATIONS PRIVACY ACT
With the Electronic Communications Privacy Act (ECPA),
Congress sought to extend the protections of wiretapping laws
specifically to new forms of electronic communications.28 While the
act is primarily aimed toward electronic communications, the
language of the act recognizes that the legislation’s goal is to protect
the “sanctity and privacy of the communication.”29 Despite Congress’s
goal of extending greater protections to emerging technologies, the
courts have often held that little constitutional protection from the
prevailing Fourth Amendment shield is allotted to communications
that are open to the public.30 In these rulings, the courts often cite the
absence of a reasonable expectation of privacy when stating that the
communications lack Fourth Amendment protection. This is where
electronic surveillance law steps in: legislation, such as the ECPA,
operates independently of the Fourth Amendment.31 Because of this,
even if a search is deemed reasonable under the Fourth Amendment,
these laws can work to bar such evidence (and vice versa).32
The ECPA is the progeny of the Communications Act of 1934,
which authorized the Federal Communications Commission to
regulate common carriers, such as telephone companies and
prohibited the unlawful interception of voice communications without
the user’s consent.33 In Rathbun v. United States, the Supreme Court
27 Id.
28 Electronic Communications Privacy Act, 18 U.S.C. § 2511 (2006).
29 132 CONG. REC. 14,886 (1986) (statement of Rep. Kastenmeier).
30 See, e.g., Freedman v. Am. Online, 412 F. Supp. 2d 174, 181 (D. Conn. 2005); United States v. Forrester, 512 F.3d 500, 510 (9th Cir. 2008); United States v. Lifshitz, 369 F.3d 173, 190 (2d Cir. 2004).
31 DANIEL J. SOLOVE & PAUL M. SCHWARTZ, INFORMATION PRIVACY LAW 303 (3d ed. 2009).
32 Id.
33 Communications Act of 1934, Pub. L. No. 73-416, § 605, 48 Stat. 1103–04 (1934); see also Matlach, supra note 14, at 426 (examining Congress’s policies in enacting the Communications Act of 1934).
ruled that only one party to a telephone conversation needs to consent
for interception of the conversation to be lawful under the
Communications Act.34 However, unlike under the Stored Communications Act (discussed below), the ECPA’s complicated definitions and additional statutory requirements make obtaining warrants extremely difficult.35 To obtain what has been described as a “‘super’ warrant,”36 law enforcement officers must submit a warrant request containing sworn statements and the specific nature and location of the communications sought to justify the interception.37
The ECPA is only marginally applicable to social networks because
of the limited types of communications it governs. The ECPA applies
to any person who “intentionally intercepts, endeavors to intercept, or
procures any other person to intercept or endeavor to intercept, any
wire, oral, or electronic communication.”38 As such, the ECPA only
applies to communications in transmission—from the moment the
communication is sent until it is opened—but does not apply to
communications that have already been delivered to the recipient or
that are “stored communications.”39 Communications governed by the
ECPA are subject to stringent standards of discoverability, while
communications that have already been delivered to the recipient are
governed by the Stored Communications Act, as explored below.
D. THE STORED COMMUNICATIONS ACT
The Stored Communications Act (SCA), which governs e-mail and social media communications, prohibits entities such as Facebook and MySpace from disclosing personal information to the government without the account owner’s consent.40
While the SCA was generally intended to cover email, text messages,
34 Rathbun v. United States, 355 U.S. 107, 111 (1957).
35 Matlach, supra note 14, at 443.
36 Salter, supra note 16, at 373.
37 Id.
38 18 U.S.C. § 2511(1)(a).
39 Matlach, supra note 14, at 448–49.
40 Stored Communications Act, 18 U.S.C. § 2701(a)(1)–(2) (2006).
and online bulletin boards,41 it arguably also applies to
communications on Facebook because such communications are
stored on Facebook’s servers (though the courts have yet to rule on
this application definitively).42 The SCA was designed to balance the
government’s legitimate interest in gaining access to information with
the privacy rights of individuals who have entrusted their
communications to Internet service providers.43 Despite this
protection, courts are admitting such social media evidence
largely unbeknownst to the user base and with consequences that
remain to be seen.
Enacted in 1986, the Stored Communications Act was designed by
Congress to create a zone of privacy, protecting Internet users’
personal information while balancing the countervailing need for
access to that information.44 This “Fourth Amendment Lite” was
created to bridge the gap in the Fourth Amendment created by new
technologies, ensuring the continued vitality of the Amendment, and
to protect against the erosion of privacy rights.45 In enacting the SCA,
Congress had two main goals: to prohibit Internet Service Providers
(ISPs) from voluntarily releasing stored information and to ensure
that law enforcement officials have a vehicle by which to access stored
communications if such communications were reasonably necessary
to protect litigants from injustice.46 To carry out these goals, Congress
41 Timothy G. Ackermann, Consent and Discovery Under the Stored Communications Act, FED. LAW. 42, 43 (2009).
42 Payne, supra note 3, at 848. There has been one court order applying the SCA to social media E-evidence, where the subpoena was to the websites themselves. Crispin v. Christian Audigier, Inc., 717 F. Supp. 2d 965, 989 (C.D. Cal. 2010). The Crispin court held that Facebook and other social media sites were electronic communications services, and therefore subpoenas to these sites for private messages could be quashed. Id. at 991. The SCA does not apply, however, where the subpoena is for wall posts or other “public” messages. Id. at 990. As explained in this section, it also does not apply if the subpoena is to an individual, whether a party or non-party, as the SCA only governs communications service providers. Largent v. Reed, No. 2009-1823, 2011 WL 5632688, at *11–12 (Pa. Com. Pl. 2011) (Trial Order).
43 Marc J. Zwillinger & Christian S. Genetski, Criminal Discovery of Internet Communications Under the Stored Communications Act: It’s Not a Level Playing Field, 97 J. CRIM. L. & CRIMINOLOGY 569, 569 (2007).
44 18 U.S.C. §§ 2701–2712; see also Ackermann, supra note 41, at 42.
45 Zwillinger & Genetski, supra note 43, at 575–76.
46 Id. at 576.
enacted a categorical ban on ISPs voluntarily releasing information, as
well as a series of exceptions, found in 18 U.S.C. § 2703, which enable
law enforcement officials to seek disclosure through a precise
process.47
As opposed to the ECPA, the SCA governs communications once
they are received and stored, whether in a user’s inbox or in other
document storage.48 Within 180 days of transmission, a warrant, which requires a showing of probable cause, is needed for the government to access the communications; after 180 days have passed, however, law enforcement officials need only a subpoena or court order, which demands a lesser showing.49 Whether a communication is in transmission or has
been received, pinpointing the instant at which a communication
becomes “stored” is an important issue due to the much higher
standards governing the disclosure of communications under the
ECPA.50
The SCA applies to service providers as parties in civil litigation, as
well as to non-parties, disallowing disclosure of electronic
communications in their possession.51 In general, the SCA prohibits
electronic communications services, which enable the sending of
messages between users, from knowingly divulging information
without the user’s permission. Under the exceptions of the Stored
Communications Act, electronic communications services may
voluntarily divulge the contents of online communications if “lawful
consent” is given by the account holder—whether he or she is the
sender or recipient of the communication.52 Like the ECPA, the SCA
only requires the consent of one party for disclosure of a
communication to be lawful.
Issuing subpoenas to ISPs presents an
additional question for the courts and legislature. If a party seeks
information that falls under the statutory authority of the SCA, the
party seeking admission must first ascertain who has control over the
communication—whether it is a party to the proceeding or a third
47 Id.
48 Matlach, supra note 14, at 448–49.
49 Scolnik, supra note 12, at 382–93; see also 18 U.S.C. § 2703(b) (2008).
50 Matlach, supra note 14, at 449; see also Salter, supra note 16, at 368.
51 Ackermann, supra note 41, at 42.
52 18 U.S.C. § 2702(b)(3) (2008); see also Ackermann, supra note 41, at 43.
party—and then petition the court to compel disclosure or consent.53
While the issue has not been decided definitively, several courts have
quashed subpoenas to ISPs seeking the release of electronic
communications.54 This is not the case, however, when the subpoena
is directed to the person who controls the information; under Federal
Rule of Civil Procedure 34, when discovery is directed to a sender,
recipient, addressee, or subscriber who exercises control over the
communications, such communication is subject to discovery.55
Further, in Flagg v. City of Detroit, the court not only held that a
court has the ability to order a person to produce documents, but that
it can order that person to give consent so someone else can disclose
documents and communications on his or her behalf.56 Courts may also seek disclosure of information from whoever controls the communication, even if that person is not a party to the proceeding.57
However, just because the court possesses the power to compel
consent does not mean it must always exercise such power upon
request. Rather, a court must weigh the communication’s value
against privacy considerations.58
When seeking evidence from Facebook, the “third party” is often
Facebook itself. Since all material posted on the website is accessible
by the company, the party seeking admission must simply petition the
court to compel consent or disclosure from Facebook.59 While this is
an easy and direct course to the information, what are the costs
associated with such free access? The next section examines current
53 Ackermann, supra note 41, at 46.
54 See, e.g., In re Subpoena Duces Tecum to AOL, LLC, 550 F. Supp. 2d 606 (E.D. Va. 2008); Hone v. Presidente U.S.A., Inc., No. 5-08-MC-80071-JF, 2008 U.S. Dist. LEXIS 55722 (N.D. Cal. July 21, 2008) (unpublished); J.T. Shannon Lumber Co. v. Gilco Lumber, Inc., No. 2-07-cv-119, 2008 WL 3833216 (N.D. Miss. Aug. 14, 2008), reconsideration denied, 2008 WL 4755370 (N.D. Miss. Oct. 29, 2008).
55 FED. R. CIV. P. 34(a)(1).
56 Flagg v. City of Detroit, 252 F.R.D. 346, 363 (E.D. Mich. 2008).
57 Thomas v. Deloitte Consulting LP, No. 3-02-cv-0343-M, 2004 WL 1372954, at *4 (N.D. Tex. June 14, 2004).
58 In re J.T. Shannon Lumber Co., No. 2-07-cv-119, 2008 WL 4755370, at *1 (N.D. Miss. Oct. 29, 2008).
59 Facebook’s Terms of Service even explicitly state to users that it will comply with legitimate law enforcement requests for information. FACEBOOK DATA USE POLICY, http://www.facebook.com/about/privacy/other (last visited Feb. 1, 2012).
jurisprudence, focusing on the areas of law that have seen the greatest
advancement in the use of social media E-evidence.
III. CURRENT E-EVIDENCE JURISPRUDENCE
While every area of law may soon see social media introduced into
the courtroom, there are several practice areas that have encountered
this issue the most in recent years. Since 2008, federal judges have
issued more than two dozen search warrants granting access to
individuals’ private Facebook profiles, and the trend is increasing. In
fact, 2011 saw more than double the number of search warrants granted in 2010.60 Where courts have allowed evidence from
Facebook, generally it has been evaluated on a case-by-case basis,
with judges carefully weighing the benefits and allowing only that
which is probative and relevant to the outcome of the case.61 As such,
it is beneficial to understand how information from Facebook is being
used in these areas and how the bench is coming to understand this
novel type of evidence.
A. INSURANCE AND PERSONAL INJURY CASES
In tort cases involving insurance and personal injury, material from Facebook is often introduced to combat claims of serious
injury. Typical examples of this type of use include insurance
companies seeking to admit evidence from an allegedly injured
plaintiff’s page that illustrates an active, happy life.
In Romano v. Steelcase, the Suffolk County Supreme Court in New
York allowed evidence from Facebook to disprove the plaintiff’s claims
that her injuries resulted in a serious loss of enjoyment of life.62 While
60 Jeff J. Rogers, A New Law-Enforcement Tool: Facebook Searches, THOMSON REUTERS (July 12, 2011), http://newsandinsight.thomsonreuters.com/Legal/News/2011/07__July/A_new_law-enforcement_tool__Facebook_searches.
61 See, e.g., Romano v. Steelcase Inc., 907 N.Y.S.2d 650, 654 (N.Y. Sup. Ct. 2010).
62 Id. at 651. While Romano is not the most recent case, it has set the precedent of allowing such breadth of information. A subsequent case has followed Romano’s example, granting a motion to compel evidence from social media sites where there is a high chance of relevant information and the plaintiff puts his or her personal condition at issue. See Zimmerman v. Weis Markets, Inc., No. cv-09-1535, 2011 WL 2065410, at *6 (Pa. Com. Pl. 2011). The Zimmerman court did add, however, “fishing expeditions will not be allowed,” only compelling access when non-private information suggests relevant information resides on the private page. Id. at *6 n.8.
Romano is a lower court decision, the case received media attention
because of the unprecedented breadth and type of information
allowed.63 The defendant sought information from the plaintiff’s
current and historical Facebook and MySpace pages—including
private and deleted material—which was postulated to contain
information relating to the extent of the plaintiff’s injuries. Notably,
the defendant contended that the plaintiff’s Facebook and MySpace
pages contained images of her on vacation and engaging in an active
lifestyle. The plaintiff had previously claimed such activities were
unfeasible due to her injuries, which had allegedly confined her to her
house and bed.64 After viewing the public postings on the plaintiff’s
social media pages, the court stated that there was a high likelihood
that highly relevant material would be found on her private pages and
may lead to the discovery of admissible evidence.65
The Romano court used a test of “usefulness and reason” for the
information requested, but stated that public policy weighs heavily in
favor of open disclosure.66 Significantly, the court noted that
“[p]laintiffs who place their physical condition in controversy, may
not shield from disclosure material which is necessary to the defense
of the action,”67 thus permitting discovery of materials relevant both
to the extent of the injuries and damages. Denying the defendant
access to the pages, the court noted, would only “condone Plaintiff’s
attempt to hide relevant information behind self-regulated privacy
settings.”68 The court also stated that the production of information
from Facebook and MySpace was not violative of the plaintiff’s
privacy, and that any such concerns are outweighed by the need for
the information.69 Indeed, the court cited the very nature of social
networking websites—designed to share personal information with
63 Andrew S. Kaufman, The Social Network in Personal Injury Litigation, N.Y.L.J., Dec. 15, 2010, at 1–2, available at http://kbrlaw.com/kaufman6.pdf.
64 Romano, 907 N.Y.S.2d at 654.
65 Id. at 655.
66 Id. at 652.
67 Id.
68 Id. at 655.
69 Id.
one’s social network—as evidence of the lack of a reasonable
expectation of privacy.70
Other personal injury cases follow similar fact patterns to that in
Romano. In Ledbetter v. Wal-Mart Stores, Inc., the court denied the
plaintiff’s motion for a protective order on information from
Facebook, MySpace, and other social media sites, finding that such
information is reasonably calculated to lead to the discovery of
relevant and admissible evidence.71 The defendant sought the
information to disprove the plaintiff’s injury claims and his wife’s loss
of consortium claims. Additionally, the court stated that by injecting
the issue of the relationship between herself and the plaintiff into the
courtroom, co-plaintiff Disa Powell waived any spousal privileges she
may have had.72
Foreign courts have established more concrete discovery
procedures pertaining to social media. For example, in Leduc v.
Roman, the Ontario Superior Court of Justice affirmed a lower court
holding that postings on the plaintiff’s Facebook profile were
documents within the meaning of the Rules of Civil Procedure and
therefore must be produced if relevant to the action at issue.73 Leduc
commenced the action after a motor vehicle accident, asking for
damages for loss of enjoyment of life; however, images and postings
on the plaintiff’s Facebook showed him fishing and enjoying other
physical activities. Thus, the court concluded that Leduc had an
obligation to produce any relevant documents, including information
from Facebook.74 As evidenced in the above cases, Facebook and related sites can contain a treasure trove of information regarding a plaintiff’s injuries, or lack thereof, leading judges often to allow such evidence at trial.
B. DIVORCE AND CUSTODY CASES
In family law cases, social media evidence is often requested as
proof of a party’s character or fault in the matter, including evidence
70 Id. at 657.
71 Ledbetter v. Wal-Mart Stores, Inc., No. 06-cv-01958-WYD-MJW, 2009 WL 1067018, at *1 (D. Colo. Apr. 21, 2009).
72 Id.
73 Leduc v. Roman, 308 D.L.R. (4th) 353 (Can. Ont. Sup. Ct. J. 2009).
74 Id.
of extramarital affairs and engagement in activities that would
adversely affect the best interests of a child. While it is not uncommon
for parties to introduce evidence of their opponent’s character flaws in
order to gain a favorable divorce settlement or custody agreement,
taking such evidence from social media sites is not directly analogous
to more conventional forms of evidence.
Individuals who have everyday access to a party’s Facebook
account generally are allowed to use such information in court; since
the parties are “friends,” there is no expectation of privacy between
them. However, due to the embattled nature of many family law
proceedings, it is common for estranged spouses to “unfriend” each
other—disallowing access to the page if privacy settings so dictate. Though it has been established that, absent password
protection, information on a shared spousal computer does not have a
reasonable expectation of privacy,75 the quasi-private nature of
Facebook complicates matters. Despite this, courts are increasingly
inclined towards admitting E-evidence from Facebook in family law
cases, favoring open disclosure over any possible privacy concerns.
In Dexter, II v. Dexter, an Ohio Court of Appeals custody case, the
court held that the trial court did not abuse its discretion in its grant
of custody, which was based in part on the mother’s MySpace profile.76
The mother contended that the trial court erred in considering her
religion, lifestyle choices, and other information from her profile,
absent any evidence that such matters adversely affected the child.77
In her public MySpace blog, the mother had written about her sadomasochism, bisexuality, and paganism. The trial court allowed this
evidence and concluded from the evidence that such personal choices
would have an effect on her child. Courts in similar cases have also
chosen to allow evidence from social media to determine parental
fitness or settle divorce proceedings, deeming such information
relevant in determining a litigant’s character and permitting it to
influence the outcomes of such cases.78
75 Camille Calman, Spy vs. Spouse: Regulating Surveillance Software on Shared Marital Computers, 105 COLUM. L. REV. 2097, 2098 (2005).
76 Dexter, II v. Dexter, No. 2006-P-0051, 2007 WL 1532084, at *7 (Ohio Ct. App. 2007).
77 Id. at *4.
78 See also B.M. v. D.M., 927 N.Y.S.2d 814, at *5 (N.Y. Sup. Ct. 2011) (where the court allowed evidence from the wife’s blog and Facebook about her belly dancing in a divorce proceeding); In re T.T., 228 S.W.3d 312, 322–23 (Tex. App. 2007) (court allowed evidence from MySpace in a case involving termination of parental rights).
C. CRIMINAL CASES
Criminal courts and police departments have also begun to utilize
information from Facebook and other social media sites to gather
information and to prosecute criminals. Indeed, some criminals have
been foolish enough to brag about their illegal acts via Facebook, as in
the case of a recent burglar who not only uploaded pictures of himself
with his stolen spoils onto his Facebook page, but then also “friended”
the man that he burgled (and was promptly thereafter arrested).79
While it is possible that social media can serve directly as evidence
of the crime at trial, it is more often the case that such evidence is used
to illustrate a defendant’s character and lifestyle.80 Law enforcement
officers are able to see any publicly posted photographs and use them
either as supplemental information about a party or even as evidence
that a crime was committed.81 While most (though probably not all)
criminals would be circumspect enough to resist posting updates or
pictures of an assault or robbery, lesser crimes, such as underage
drinking or driving under the influence, are more commonly shared
online. This is especially the case for Facebook because of its largely
young user base; college students upload photographs from late
Saturday nights without considering the consequences, which can be
harsh.82
A now-infamous example of the use of such postings happened
after a Pennsylvania State University football game in October 2005.
Photographs taken at a post-game riot soon turned up on Facebook,
which the police then used to identify and cite about fifty students.83
Similarly, in 2007, evidence from Facebook was used by the
University of Connecticut Police to link a driver to a hit and run
incident.84 Additionally, prosecutors have used E-evidence in drunk
79 Gabe Acevedo, World’s Dumbest Criminal Would Like to Add You as a ‘Friend’, ABOVE THE LAW (Mar. 11, 2011), http://abovethelaw.com/2011/03/worlds-dumbest-criminal-would-like-to-add-you-as-a-friend.
80 Daniel Findlay, Tag! Now You’re Really “It”: What Photographs on Social Networking Sites Mean for the Fourth Amendment, 10 N.C. J. L. & TECH. 171, 171 (2008).
81 Id. at 176–79.
82 Id. at 171–72.
83 Matthew J. Hodge, Comment, The Fourth Amendment and Privacy Issues on the “New” Internet: Facebook.com and Myspace.com, 31 S. ILL. U.L.J. 95, 95 (2006).
84 Edward M. Marsico, Jr., Social Networking Websites: Are Myspace and Facebook the Fingerprints of the Twenty-First Century?, 19 WIDENER L. REV. 967, 969–70 (2010).
driving trials, showing photographs of defendants in an embarrassing
light—evidence that such defendants habitually engage in
irresponsible behavior or are unrepentant since their D.U.I. arrest.85
Evidence from Twitter and MySpace has also been used by police
to gather information about gang activity.86 Suspects who cannot
resist bragging online about their latest gun purchase or extortion end
up aiding police and district attorneys in their own prosecutions.87
Using photos of gang symbols or weapons on Facebook and MySpace
pages, law enforcement officials have been able to identify and gather
information about potential suspects.88
Other issues arise when evidence from Facebook is used in
criminal cases. Unless the information is publicly available, under the
Stored Communications Act the State will likely need to serve
Facebook itself with legal process in order to obtain the information.89
This, by itself, does not present a large hurdle for the prosecution.
Questions do arise, however, concerning whether the defendant has
the same ease of access to social media information. While criminal
defendants do have access to any messages sent to or by them
personally, they may not be able to obtain access to alleged victims’
profiles and deleted material, and may therefore lose an opportunity
for a defense.90 Additionally, while governmental entities are granted
an exception to the blanket protections of the Stored Communications
Act, defendants do not have that benefit.91 It thus becomes a question
of fairness as to whether criminal defendants should have the same
reasonable access to social media information that is afforded to the
prosecution.
85 Findlay, supra note 80, at 178.
86 Marsico, supra note 84, at 970.
87 Id. at 972. This is most often the case with career criminals, who take pride in their criminal activities and are therefore the most likely to post about them online. Id.
88 Id.
89 Zwillinger & Genetski, supra note 43, at 580.
90 Id. at 385.
91 Id. at 590–91; see also 18 U.S.C. § 2702 (2008).
D. VOIR DIRE AND OTHER JURY CONSIDERATIONS
The latest way that attorneys are using social media to further
their clients’ interests is through jury selection; Facebook and other
social media sites are valuable in the determination of which potential
jurors are most favorable to a litigant’s case.92 While tangential to the
direct issue of E-evidence admissibility, this emerging area is proving
to be ripe with issues of its own. If used free from any deceit, potential
jurors’ online profiles can offer a host of information which may prove
quite significant in determining the jurors with the “right”
characteristics for a particular case—especially since the profiles may
reveal information about subjects disallowed from voir dire
questioning. Attorneys and jury experts look at such profile
details as: favorite television programs, which may indicate a bias,
especially if those shows are crime-related; number of friends, which
may indicate an ability to be swayed; and rants or especially strong-opinioned Tweets, which can suggest that such a person might
dominate jury deliberations, or may even run the risk of posting
information from the case and causing a mistrial.93 Social media is
especially useful in the jury selection context because parties usually
have limited time to question potential jurors, and because some
posts have a candid quality not often found in juror questioning.
However, this area of use is not without controversy.
Privacy experts argue that using social networks to investigate private
citizens is invasive, and some experts also argue that it may grant
unfair advantage to those with resources to bring in the required
equipment. Additionally, scholars question the veracity of online
profiles to begin with.94 Even more controversial is the proposition of
granting potential jurors free wireless Internet access during their
time at the courthouse if they agree to give the lawyers access to their
private Facebook accounts (which is generally met with apprehension
from the jury pool).95 Despite the seemingly lax restrictions on such
92. Ana Campoy & Ashby Jones, Searching for Details Online, Lawyers Facebook the Jury, WALL ST. J. (Feb. 22, 2011), http://online.wsj.com/article/SB10001424052748703561604576150841297191886.html.
93. Id.
94. Id.
95. Id. Such access would be granted through temporarily “friending” a certain legal office. Jurors found this practice invasive, not seeing the reward of temporary Internet access attractive enough to allow the litigators to invade their private lives. Id.
use of social networking sites, this is one area that may soon be
regulated and followed more closely by ethics committees. Until then,
it remains an ethically ambiguous—yet perfectly legal—tool at a
litigator’s disposal to sway the outcome of the case favorably for his or
her client.
Further, courts are beginning to limit jurors’ use of Facebook
inside and outside the courtroom. Some jurisdictions are adding
policies and specialized rules to jury instructions about the use of
social media and other Internet tools for independent research on the
players in their trial. As far-fetched as this may seem, instances of
jurors polling their Facebook friends on which way to vote, “friending”
parties in the proceeding, or blogging about jury deliberations have
made such measures necessary.96 Some states, such as Michigan, New
York, Oregon, Texas, and Alaska, have banned jurors from bringing
their cellular phones into the courtroom or jury deliberations in
reaction to several instances where jurors were Tweeting during the
trial.97 Other states have added sections to their jury instructions
explicitly explaining that Googling or otherwise searching online for
parties to the proceeding is not allowed.98 Courts across the nation
have begun to examine the promulgation of new rules individually,
but the trend in restricting use in order to avoid improper Internet
communications—and possibly a mistrial—is becoming stronger
throughout the country and will most likely become more prevalent in
coming years.
IV. REALMS OF PRIVACY IN THE FAR REACHES OF CYBERSPACE
In addition to statutory protections—or lack thereof—promulgated
by Congress, important doctrines have emerged from common law
jurisprudence and privacy policy that have affected and continue to
shape the legal conversation concerning social media E-evidence. This
section explores such over-arching doctrines, applying them by
example to relevant Facebook communications.
96. See Sharon Nelson, John Simek & Jason Foltin, The Legal Implications of Social Networking, 22 REGENT U. L. REV. 1, 3 (2010); see also Eva-Marie Ayala, Tarrant County Juror Sentenced to Community Service for Trying to ‘Friend’ Defendant on Facebook, FORT WORTH STAR-TELEGRAM (Aug. 28, 2011), http://www.star-telegram.com/2011/08/28/3319796/juror-sentenced-to-community-service.html.
97. Nelson, Simek & Foltin, supra note 96, at 5–6.
98. Id. at 6–7.
A. KATZ’S LEGACY
The case of Katz v. United States has become a landmark opinion
in the area of privacy law, still affecting the way the Supreme Court
views the Fourth Amendment more than forty years after it was
decided.99 The issue in Katz concerned the legality of government
wiretaps of a suspected criminal’s phone conversations, which were
introduced as evidence against the defendant Katz at trial.100 The
taped phone conversations took place in a public phone booth, where
the surveillance by law enforcement officers was of limited scope and
duration, and provided evidence of Katz’s illegal gambling.101
Nevertheless, the Court held that invasion of a constitutionally
protected area without a search warrant is presumptively
unreasonable, and therefore this wiretap was unconstitutional.102 In
so holding, the Court noted that Fourth Amendment “considerations
do not vanish when the search in question is transferred from the
setting of a home . . . Wherever a man may be, he is entitled to know
that he will remain free from unreasonable searches and seizures.”103
While the majority holding in this case is undoubtedly
constitutionally significant, it is Justice Harlan’s concurrence that has
made an indelible mark on privacy law and the interpretation of the
Fourth Amendment.104 Providing the analysis that the Court still uses
today, Harlan established a two-pronged test to determine whether
the Fourth Amendment protects certain information. For a person to
have a reasonable expectation of privacy necessary to garner Fourth
Amendment protection: (1) the person must “have exhibited an actual
(subjective) expectation of privacy”;105 and (2) “the expectation [must]
be one that society is prepared to recognize as ‘reasonable.’”106 Under
this test, conversations held in a private home would be considered
99. Katz v. United States, 389 U.S. 347, 347 (1967).
100. Id. at 348.
101. Id. at 354.
102. Id. at 359.
103. Id.
104. See, e.g., Scolnik, supra note 12, at 364.
105. Katz, 389 U.S. at 361 (Harlan, J., concurring).
106. Id.
protected by the Fourth Amendment, while those that occur loudly in
public do not carry the same expectations of privacy and therefore are
afforded less protection.
Additionally, both the majority and Harlan’s concurrence
recognized the role of technologies in privacy law (though the
technologies of 1967 were certainly quite different from those of
today), with Justice Harlan stating that “electronic as well as physical
intrusion into a place that is in this sense private may constitute a
violation of the Fourth Amendment.”107
Katz affirmed that when individuals voluntarily divulge
information to the public, there is no reasonable expectation of
privacy.108 In most cases, the distinction between public and private is
relatively easy to discern; however, in the case of social media, with
privacy controls and users who do not fully comprehend how the
networks function, categorizing what communications are open to the
“public” and which are truly private becomes a complicated issue.
B. THIRD-PARTY DOCTRINE
Third-party doctrine is a premise built upon the theories
articulated in Katz and subsequent jurisprudence.109 While its legacy
is not as strongly felt as that of the other principles discussed in this
Note, the vestiges of the third-party doctrine nonetheless have an
influence on privacy law today. The doctrine postulates that if the
information in question has been voluntarily turned over to a third
party, the individual seeking privacy protection no longer has a
reasonable expectation of privacy.110 For example, in Hoffa v. United
States, the Supreme Court held that information given to third parties
or stored on third-party databases no longer contains the reasonable
expectation of privacy necessary to obtain full Fourth Amendment
protection.111 The conversation at issue took place between two
individuals, but was held in the presence of a third-party outsider. The
107. Id. at 360.
108. Scolnik, supra note 12, at 354.
109. Id.
110. See Orin S. Kerr, The Case for the Third-Party Doctrine, 107 MICH. L. REV. 561, 563 (2009).
111. Hoffa v. United States, 385 U.S. 293, 302–03 (1966) (stating that conversations shared with a third party could no longer receive Fourth Amendment protections).
Court thus allowed the subpoena of the third party who bore witness
to the conversation; however, it also stated that a party does not
forfeit all Fourth Amendment rights upon the sharing of information
to a third party, but rather merely shifts the balance slightly more
toward disclosure.112
The third parties of today are no longer simply other persons
present in the room or on another line listening in on a phone
conversation; electronic databases are also being used to diminish
privacy privileges under the third-party doctrine. The Supreme Court
has held that information stored by a third party, where the third
party has access to the information, is afforded no privacy protections
under the Fourth Amendment.113
Similarly, Internet Service Providers (ISPs) often maintain
databases of users’ information, including private emails, which are
stored on ISP servers.114 If courts were to take an expansive view of the
third-party doctrine, this is another avenue by which to access
otherwise privacy-protected communications located on social media
website servers.115 For, as stated by the Court, the Fourth Amendment
does not “protect[] a wrongdoer’s misplaced belief that a person to
whom he voluntarily confides his wrongdoing will not reveal it.”116
C. THE LEVELS OF PRIVACY
As stated in Katz, an individual using a phone booth is “entitled to
assume that the words he utters into the mouthpiece will not be
broadcast to the world.”117 While this may almost always be the case
with telephone conversations, there are Internet locations that enjoy
these assumptions as well. While some Internet communications may
be presumed public, others have privacy controls that garner them
more protection.118 This section examines the three levels of privacy,
112. Id. at 301–03.
113. United States v. Miller, 425 U.S. 435, 444 (1976) (holding that bank records held by a third-party bank were not protected by the Fourth Amendment).
114. Scolnik, supra note 12, at 359.
115. Zwillinger & Genetski, supra note 43, at 575–76.
116. Hoffa, 385 U.S. at 302.
117. Katz v. United States, 389 U.S. 347, 352 (1967).
and applies them in terms of today’s most significant social
networking website: Facebook.
1. PUBLIC COMMUNICATIONS
Public communications are those that, because they are shared
openly, are not shielded by a constitutionally-protected expectation of
privacy.119 In the early days of social media, courts were generally very
willing to allow E-evidence into the courtroom without pause for
several reasons, which are still advanced by some courts and scholars
today.120 First, courts are hesitant to give protection to parties who
choose to disclose the information in controversy online.121 Second,
most courts favor the production of relevant evidence over
consideration of an individual’s privacy interests.122 Finally, some
courts find reasonable expectation of privacy considerations absent
from social media postings.123
Today, public communications include any text or media that is
available to the general public.124 Examples of public communications
include radio broadcasts,125 websites, and open blog posts.126 Under
these standards, social networking communications are unequivocally
unshielded by the Fourth Amendment if they are free from any
privacy protections that may be available for individual websites. By
118. Evan E. North, Note, Facebook Isn’t Your Space Anymore: Discovery of Social Networking Websites, 58 U. KAN. L. REV. 1279, 1288 (2010).
119. Matlach, supra note 14, at 459.
120. Payne, supra note 3, at 860–61.
121. Id. at 861; see also Ledbetter, 2009 WL 1067018, at *1.
122. See, e.g., Ledbetter, 2009 WL 1067018, at *2.
123. Moreno v. Hanford Sentinel, Inc., 91 Cal. Rptr. 3d 858, 862–63 (Cal. Ct. App. 2009).
124. North, supra note 118, at 1288.
125. See, e.g., Edwards v. Bardwell, 632 F. Supp. 584, 589 (M.D. La. 1986), aff’d, 808 F.2d 54 (5th Cir. 1986) (holding that there is no reasonable expectation of privacy in communications broadcast over the radio that can be overheard by countless people).
126. See, e.g., United States v. Gines-Perez, 214 F. Supp. 2d 205, 225 (D.P.R. 2002) (stating that privacy protection is unavailable to a person who undertakes no measures to protect the information and that a person who places a photograph online and unsecured by privacy controls forfeits any Fourth Amendment protection).
this logic, the user, having taken no steps to guard his or her
communications, has no reasonable expectation of privacy. If a social
media user’s profile is available to anyone on the Internet, or even
simply to every registered user of a particular social networking
website, the current standard holds this data as a public
communication.127
Applying the principles of public communications to Facebook, it
is clear that those profiles or parts of profiles that are open to anyone
with an account—or anyone with a familiarity with Google—are
considered public communications and thus are freely discoverable. A
user can have a completely public profile, but privacy controls also let
the user make certain sections of their profile private and others
public. For example, a user may control who can view their “wall” and
photos, but allow anyone to see their basic information. If this is the
case, the contents of the profile are split between those public
communications and the privacy-controlled sections, which would fall
into one of the two categories discussed below.
2. PRIVATE COMMUNICATIONS
Private communications are those that, both subjectively and
objectively, are viewed as being only accessible by a very limited
number of people. These communications are afforded the highest
level of protections allowed by the Fourth Amendment, denying law
enforcement access to such information.
Examples of such private communications include a phone call
between two or a small number of individuals,128 instant
messaging, emailing, and other communications between individuals
that are not intentionally open to the public. While there are always
chances that an email will be sent to someone unintentionally or
instant messages will become unencrypted, the parties to the
conversation still hold a subjectively and objectively reasonable belief
that they are private and act in confidence of this. Thus, the
communication receives full Fourth Amendment protections.
Analogizing these principles to Facebook, private communications
would be those kept between a small number of people, such as the
use of Facebook messaging—an email equivalent between friends—or
127. See Smith v. Maryland, 442 U.S. 735, 743–44 (1979) (holding that information provided freely to others confers no reasonable expectation of privacy); see also Matlach, supra note 14, at 460.
128. Katz v. United States, 389 U.S. 347, 352 (1967).
Facebook chat—an instant messaging service.129 These types of
communications are not intentionally exposed to the public and are
similar to any other form of closed communications, such as a phone
call, email, or other types of instant messaging services.130 Because of
the expectations that these conversations will remain private, they
should carry the same protections as phone calls and similar
communications. Such private Facebook communications, therefore,
should earn full Fourth Amendment protections.
3. QUASI-PRIVATE COMMUNICATIONS
The nature of quasi-private communications, which straddle the
divide between public and private, makes this category the most
difficult to define concretely. Generally, these communications would
otherwise be considered public, but are utilized in a way that attaches
a reasonable expectation of privacy to their use.131
Because of such expectations, lawmakers and courts should grant
these communications some protection under the Fourth Amendment
and privacy statutes. The exact level of protection afforded would
depend on several factors, including the type of information, steps
taken by the user to shelter the information from the public, and the
individual website’s privacy policies. A fact inquiry is often necessary
to ascertain just how reasonable a user’s expectation of privacy may
be.132 However, because a legitimate expectation of privacy exists, a
warrant or court order (as required by the SCA) should be required for
law enforcement to gain access to the material.
A good portion of social networking communications fall under
this category due to the combination of large networks of
people granted access to the information and the user’s own
(reasonable) view of her information as guarded from the public. An
example of this is information on a Facebook user’s profile, if such
information is protected by the website’s available privacy controls.
Because of such controls, the user often views this information as
closed off to the “public” at large, regardless of how many friends he
129. See Crispin, 717 F. Supp. 2d at 991.
130. Matlach, supra note 14, at 461.
131. Id. at 460.
132. Id.
or she might have that are allowed to see the information.133 However,
the current “objective” test of reasonability views this information as
public because of the potentially large number of people who are still
able to view the information (including people the user might not
know well).134
Application of this principle to social networking communications
can be analogized to simple face-to-face conversations: A conversation
between a small group of close friends is likely to be considered
private, but if such a conversation was held in a large, filled lecture
hall, that would most likely not be the case.135 Put another way, private
information that is shared with strangers one happens upon is no
longer private;136 but if such information is shared merely with close
confidants, there is an expectation that it will not be repeated
publicly.137
Herein lies a divide in understanding: While lawmakers and courts
generally argue that quasi-private communications are discoverable,
the average user regards these communications as private. Because of
the divergence, the law should err on the side of protecting the privacy rights of
individuals absent other prevailing interests.
In terms of Facebook communications, quasi-private
communications are those posts that are hidden to the general public
through the use of privacy controls, but are still open to certain
networks or large groups of people. Additionally, because of the
nature of Facebook, even communications shared only with “friends”
have the chance of being copied and shared with larger networks of
people—disseminating information without user consent—and they
are thus deemed less worthy of Fourth Amendment protection.138
Because of the vast number of controls available, defining a
specific location along the public-private spectrum for all quasi-private communications simply is not a workable proposal. Instead, in
determining where along the spectrum certain communications lie,
there are several factors to look at, individual to each post or
133. North, supra note 118, at 1296.
134. Id.
135. Smith v. Maryland, 442 U.S. 735, 743–44 (1979).
136. Id.
137. Katz v. United States, 389 U.S. 347, 351 (1967).
138. North, supra note 118, at 1288, 1296.
communication. Factors may include, but are not limited to: the
number of friends a user has (who are thus allowed to view certain
information), how privacy controls are used (whether access is limited
to a “network,” “friends of friends,” just “friends,” or an even smaller
number through the utilization of customizable “friend groups”), and
similar considerations.139
By limiting access to selected content, the user logically believes
she has protected herself. If a user has gone through the necessary
steps to reasonably protect her information, this should place such
communications nearer to private communication on the spectrum,
and thus allow the information greater Fourth Amendment
considerations.
V. ANALYSIS
The current state of disconnect between those communications
that users view as reasonably private and those that courts view as
reasonably private is disconcerting. Many users are not aware of how
little protection they have, and are thus unable to protect themselves.
Further, given the current popularity and power of social networks, it
is simply unreasonable to suggest that users significantly alter or
altogether stop using these websites for fear that Big Brother is
watching. Instead, lawmakers must be willing to change with the
times and realize further protections for users of social media are
necessary.
It is my proposition that courts alter their approach to
whether quasi-private communications, as defined above, should be
discoverable. Instead of presuming that such communications are
admissible absent other interests or issues, current statutes governing
the discoverability of social media evidence need to be amended and
viewed from a new perspective: Rather than presuming such evidence
is discoverable unless proven otherwise, this quasi-private
information should not be regarded as discoverable unless the moving
party can prove prevailing interests under which to admit the
evidence.
While at times this type of evidence may have high probative
value, for example, proving a fact in dispute or an alibi, much of the
potential character or circumstantial evidence is taken out of context
and contains great risk of being overly prejudicial or confusing. Due to
the nature of such information contained on social media websites, a
139. Id. at 1298.
vast majority of this information should be disallowed from the
courtroom.
In their analysis of a reasonable expectation of privacy on social
networks, courts fail to consider some of the more complicated issues
that plague networks such as Facebook. For example, Facebook has
not always existed in its current state. At its advent, Facebook was
available only to college students. At that time, students posted
information freely and this information was usually available only to
others in the user’s “network”—other students on their campus.
Neither these users nor Facebook’s executives themselves could have
predicted such rapid and extensive evolution into the multi-platformed network with 750 million users that exists today. This
change is not only a technical marvel, but a privacy concern. Courts
consider reasonable expectations of privacy under standards of today,
not by what kind of network existed when the information was posted.
While it is true that less recent posts are inherently less probative and
thus less likely to be admitted into evidence, the concern exists
nevertheless. No matter when the information was posted, it is stored
on Facebook’s servers and is discoverable under today’s legal
landscape, without regard to what privacy rights were given up at the
time of posting.
Privacy policy dictates that courts exercise discretion before allowing
the subpoena of evidence that the user believes, and the website
deems, to be private. Unless this information is highly probative and essential
to the case at hand, courts should err on the side of caution before
admitting the evidence, in keeping with the Fourth Amendment.
There are certain circumstances where such character evidence should
be admitted into the courtroom, such as instances when the party’s
character or reputation are elements of the crime itself. An example of
this would be the use of social media evidence in a child custody
hearing. In this instance, admitting pictures of a parent engaging in
irresponsible behavior found on a social media site—such as drinking,
drug use, or other risky behaviors—may be highly probative in
determining whether the individual is capable of being a fit parent. If
information contained in a “private” page proves a fact of the matter
in consequence, the court may well find that interests lie on the side of
disclosure.
Additionally, other circumstances or considerations may prove to
tip the scale in favor of allowing such quasi-private communications.
If the information cannot—or at least cannot without significant cost
or burden—be found elsewhere, in the interests of justice, the
information should be discoverable. Further, there may be times when
the specific fact that information was posted online becomes
significant. For example, in a criminal juvenile case, the prosecution
may want to admit online posts to prove that the defendant is
unrepentant in his actions. Because the fact that this information was
posted online carries its own evidentiary value, it may prove valuable
enough to forgo the privacy considerations in question.
However, the nature of social media sites calls into question the
value of such information, for several reasons. For one, information
about a party may have been posted by another user, making it very
difficult to control this information or even know that it is online in
the first place. Additionally, not all images or posts can or should be
taken at face value and the court should carefully examine both the
probative value and veracity of the information in its evaluation of
such evidence. Factors in determining whether such evidence is
probative enough to be admitted include how recent such evidence is
and the context of any text or photograph.
Due to the ubiquity of social media websites—deemed the
“permanent chronicle of people’s lives” by one privacy scholar140—the
use of such sites in obtaining potential evidence may prove to be the
easiest avenue to certain types of information. However, because of
the above concerns, courts should only allow evidence taken from
social media websites if such information is not available elsewhere.
Such a measure allows important evidence to be admitted over any
privacy concerns only if the information is necessary and otherwise
unavailable, taking heed of Fourth Amendment concerns.
Social media sites encourage users to share personal
information,141 but that does not and should not mean that the use of
social media causes users to relinquish their Fourth Amendment and
other privacy rights. These websites encourage users to start a
conversation, to share aspects of themselves, and to use the site as a
tool with which to interact with friends.142 When one sends a letter to a
friend, she has every reason to believe the post office will respect her
privacy; the letter writer has not consented to having the contents of
such a letter turned over to the police. So why would this expectation
be any different for a Facebook user who sends his friend a message or
post that he believes is private? The law should evolve along with new
media and protect user privacy in a new era. While courts may be
hesitant to read these privacy rights into current precedent and
statutory material, the legislature is in a prime position to promulgate
140. DANIEL J. SOLOVE, THE FUTURE OF REPUTATION: GOSSIP, RUMOR, AND PRIVACY ON THE INTERNET 11 (2007).
141. North, supra note 118, at 1288.
142. Id. at 1280.
new laws in order to fully protect users’ rights. It may be true that
social media is evolving too rapidly for the courts or legislature to keep
up with specific laws, but in order to protect individuals’ Fourth
Amendment rights, it is important to disseminate new statutes and
common law that allow for flexibility to grant protection when users
truly and understandably believe information they post online is
private.
VI. CONCLUSION
The rapid evolution of the Internet and social media has presented
the courts and legislatures of our country with the task of reforming
United States legal doctrine in keeping with the times. Despite certain
efforts in the statutory and common law, the infamously slow-to-change U.S. legal system has yet to fully conform to the new
Facebook-centric era. This disparity between the growth of technology
and legal theory has shown itself most tellingly in the disconnect
between objective and subjective expectations of privacy. While
younger generations believe that material posted online is private,
older generations of lawmakers and judiciary see such information as
open to the masses, and therefore public—and, for better or worse, the
latter are those whose voices are most strongly heard. Users,
reasonably expecting their online communications to be shielded from
the courts, deserve to have their privacy rights recognized by the
legislature and court system.
Amidst all of this commotion concerning what privacy is expected
of one’s Facebook status updates, the Fourth Amendment has been
subdued. In order to guarantee Fourth Amendment rights in future
generations, privacy rights online must be recognized. There are, of
course, certain circumstances that call for the discovery and
admission of social media evidence in litigation, but this should be an
exception, not the rule. In order to encourage faith in the judiciary and
legislature from the new generation of Americans who live their lives
online, protection of their Internet identities is crucial. If we do not
protect privacy interests now, the future of those liberties guaranteed
under the Fourth Amendment is in danger of being silenced forever.
I/S: A JOURNAL OF LAW AND POLICY FOR THE INFORMATION SOCIETY
Pulling Back the Curtain: Online
Consumer Tracking
LAURA J. BOWMAN*
I. INTRODUCTION
Think of every webpage you visited in the last year, every link you
clicked on, and every term you typed into a Google search bar.
Imagine if all of that information was compiled and identified with
your name. Now imagine that the list of websites you visited and
words you searched was available for others to view: your boss, your
insurance provider, the HR representative who is about to interview
you, your parents, your spouse, and your children. Would you want
anyone and everyone to know where you have been online and what
words you searched? Would these people draw inaccurate inferences
from this information? Could it reveal facts you wished to keep
private? Technologies currently allow such a list to be compiled
without the Internet user’s knowledge, and it is possible for the list to
be identified with that unique user. Further, these practices take place
without any government regulation in the United States. If the areas
of online consumer data collection and use remain free of regulation,
the widespread availability of this information could become reality.
The online advertising industry evolves constantly, creating new
and innovative ways to reach consumers and market products. The
industry’s techniques change faster than you can say “pay per click.”1
Today, “advertisers no longer want to just buy ads. They want to buy
* Laura J. Bowman is a member of the Ohio State University Moritz College of Law class of 2012.
1. See Julia Angwin, The Web’s New Goldmine: Your Secrets, WALL ST. J., July 30, 2010, available at http://online.wsj.com/article/SB10001424052748703940904575395073512989404.html.
2012]
BOWMAN
719
access to specific people.”2 Companies use controversial, secret,
undetectable, and evasive techniques to follow consumers online in
order to gather and sell information about them. When consumers
traded in their typewriters for computers, did they trade in their
privacy too? In 1999, Sun Microsystems CEO Scott McNealy told
consumers, “You have zero privacy . . . [g]et over it.”3 Despite the
advances of the digital age and the sacrifices people are willing to
make for the convenience of the Internet, consumers should not be
forced to concede to zero privacy. The right to privacy, or the “right to
be let alone,” which Supreme Court Justice Louis Brandeis conceived
of in 1890, still exists.4 But just how far the concept of privacy should
extend in today’s digital age remains unclear.5
Online tracking and consumer advertising offer some benefits to
consumers and are a mainstay of the industry, but without regulation
the consumer’s information remains subject to exploitation. The
unregulated collection and use of consumer information may result in
discriminatory practices, security breaches, and damage to the
concept of liberty at the very core of American society.6 Consumers
generally are unaware of what information is gathered about their
online behavior and how that information is shared, sold, and used.7
And most consumers mistakenly believe that their information is
protected under websites’ existing privacy policies.8 The consumer
2 Julia Angwin & Jennifer Valentino-Devries, Race Is On to ‘Fingerprint’ Phones, PCs, WALL ST. J., Nov. 30, 2010, http://online.wsj.com/article/SB10001424052748704679204575646704100959546.html.
3 Nicholas Carr, Tracking Is an Assault on Liberty, With Real Dangers, WALL ST. J., Aug. 6, 2010, http://online.wsj.com/article/SB10001424052748703748904575411682714389888.html.
4 Olmstead v. United States, 277 U.S. 438, 478 (1928). See also Samuel Warren & Louis Brandeis, The Right to Privacy, 4 HARV. L. REV. 193, 193 (1890).
5 See Jacqueline D. Lipton, Mapping Online Privacy, 104 NW. U. L. REV. 477, 479 (2010) (“It takes longer for laws to evolve than for digital technology to advance. This is particularly true of laws that involve basic human values, such as privacy and free speech,” and “clearer identification of norms and values” is necessary before privacy laws can be appropriately written.).
6 Carr, supra note 3.
7 See infra note 20 and accompanying text.
8 Joseph Turow et al., The Federal Trade Commission and Consumer Privacy in the Coming Decade, 3 ISJLP 723, 724 (2008) (“When consumers see the term ‘privacy policy,’ they believe that their personal information will be protected in specific ways; in particular,
privacy problems surrounding online tracking are compounded by the
challenges faced when attempting to regulate these practices.
This Note begins, in Part II, with an overview of online consumer tracking methods and the developments in the online advertising industry. Part III discusses the risks associated with the collection and use of online consumer information. Part IV examines the United States’ current governance in this area. Part V describes the difficulties regulators will face imposing regulations on an industry that has developed with few restrictions, and details the European Union’s struggles in regulating online tracking. Part VI provides an overview of Google’s approach to consumer privacy in online consumer tracking and advertising. This Note concludes that,
while there is no readily apparent solution to the privacy issues raised
by online consumer tracking, it is clear that any regulation must strike
a delicate balance in order to effectively protect consumer privacy
while still allowing flexibility for ongoing technological advancements.
II. THE SPIES BEHIND THE (COMPUTER) SCREEN
The tracking, collection, and sale of online consumer information
takes place every millisecond of every day. The problem is that this
data is being collected and sold without consumer knowledge and with
few legal limits.9 The question of how to regulate not only the
collection of consumer information but also the way the information is
used and stored is a dilemma facing governments worldwide.10
they assume that a website that advertises a privacy policy will not share their personal
information.”).
9 See infra Part IV.
10 See, e.g., Emma Portier Davis, Internet: Analysis Parsing Art. 29 Targeted Ad Opinion Favoring Cookies Opt-In Differ on Its Impact, 9 PRIVACY & SEC. L. REP. (BNA), at 988 (July 5, 2010) (highlighting the debate over the interpretation of the European Union’s e-Privacy Directive, particularly the provisions regarding consumer consent for cookies); Frederic Debusseré, The EU E-Privacy Directive: A Monstrous Attempt to Starve the Cookie Monster?, 13 INT’L J.L. & INFO. TECH. 70, 73 (2005) (A “critical analysis is made of the new European rules for the use of cookies.”); Lilian Edwards & Jordan Hatcher, Consumer Privacy Law 2: Data Collection, Profiling and Targeting (July 16, 2009), Research Paper No. 1435105, http://ssrn.com/abstract=1435105 (noting that tracking technologies are “currently perplexing privacy advocates, privacy commissioners and the European Commission alike, while users are still largely ignorant of their existence”); Meglena Kuneva, Consumer Privacy and Online Market, BEUC Multi-Stakeholder Forum (Nov. 12, 2009), http://ec.europa.eu/archives/commission_2004-2009/kuneva/speeches_en.htm; Meglena Kuneva, EU Consumer Comm’r, A Blueprint for Consumer Policy in Europe: Making Markets Work with and for People, Lisbon Council Event (Nov. 5, 2009), http://ec.europa.eu/archives/commission_2004-
Examining the methods for tracking consumers online and the ways
that information is used and sold reveals the problems and dangers of
allowing the industry to remain unregulated.
A. SPIED ON: ONLINE CONSUMER TRACKING
Like a good spy, online user tracking has developed and adapted;
it works among us—right under our noses—but remains elusive and
secret. In 1994, online user tracking became possible with the
invention of "cookies." But at that time, online advertising was rare.11
Even when online ads gained popularity in the late 1990s,
“[a]dvertisers were buying ads based on proximity to content—shoe
ads on fashion sites.”12 After the dot-com bust, power shifted from
websites to the online advertisers themselves.13 This allowed
advertisers to pay for ads only when a user clicked on them. Due to
this change, sites and ad networks searched for a way to “show ads to
people most likely to click on them” in order to get paid.14 Now,
another change has taken place: rather than paying for an ad on a
specific webpage, advertisers “are paying a premium to follow people
around the Internet, wherever they go, with highly specific marketing
messages.”15 This new strategy is facilitated by spying on consumers’
online behavior. Spying on Internet users has become one of the
2009/kuneva/speeches_en.htm (“[T]echnology never ceases to amaze and today we are
faced with the relatively new issue relating to the online collection of personal and
behaviour data . . . . being done on an unprecedented scale and mostly
without any user awareness at all.”); Ariane Siegel et al., Survey of Privacy Law
Developments in 2009: United States, Canada, and the European Union, 65 BUS. LAW.
285, 285 (2009) (noting that the United States, the European Union, and Canada are all
faced with common online privacy concerns but that “each jurisdiction has developed its
own unique approach to dealing with privacy”).
11 Angwin, supra note 1.
12 Id.
13 Id.
14 Id.
15 Id. Changes to Google’s business model “reflect a power realignment online.” For years, the strongest companies on the Internet were the ones with the most visitor traffic. Today, the power resides with those that have the richest data and are the savviest about using it. Jessica E. Vascellaro, Google Agonizes on Privacy As Advertising World Vaults Ahead, WALL ST. J., Aug. 10, 2010, http://online.wsj.com/article/SB10001424052748703309704575413553851854026.html.
fastest-growing businesses on the Internet, and the process is a
complicated one.16
The process of tracking, collecting, and selling online consumer
information is both swift and silent—happening almost instantly and
without the consumer’s knowledge. How? Tracking technologies such
as cookies gather personal details about the consumer and identify
characteristics about her,17 including her favorite movies, television
shows, and browsing habits.18 That information is “package[d] . . . into
profiles about individuals” which are constantly updated as
consumers move about the Web.19 The profiles are then sold to
companies seeking to attract specific types of customers.20 Recently,
exchanges similar to the stock market have appeared where these
profiles are bought and sold.21 Various technologies have been created
and adapted to meet the demands of this new market.
B. SPY KIT: TOOLS FOR TRACKING
As the Internet advertising industry has evolved, consumer
tracking technology has changed to fit the needs of the industry.
Technology has grown more powerful and more ubiquitous. Currently,
the nation’s fifty most popular websites, “which account for about
40% of the Web pages viewed by Americans,” each install an average
of “64 pieces of tracking technology onto the computers of visitors,
usually with no warning.”22 A dozen of these sites each installed more
than 100 pieces of tracking technology, typically without consumer
16 Angwin, supra note 1.
17 Id. For example, the code may identify a consumer as “a 26-year-old female in Nashville, Tenn.” Id. This identification is created in part by capturing what consumers type in any website—a product review, comment on a movie, or their interest in weight loss or cooking.
18 Id. Some online marketers allow consumers to see what they know, or think they know, about him or her. See also infra note 124 and Part V.C.
19 Angwin, supra note 1.
20 Id. For example, one consumer’s profile can be sold “wholesale,” included with a group of movie lovers for “$1 per thousand,” or “customized,” as “26-year-old Southern fans of ‘50 First Dates’.” Id. This information can also be segmented down to an individual person. Id. (quoting Eric Porres, Lotame Solutions Inc. Chief Marketing Officer).
21 Id.
22 Id.
knowledge.23 Technologies used for tracking constantly change and
develop. Those most commonly used currently include cookies, web
beacons, flash cookies, history sniffing, and device fingerprinting.24
1. COOKIES AND WEB BEACONS
Cookies are the original online consumer tracking technology. A
cookie is a small text file that stores information on a computer’s hard
drive. When a user goes to a website for the first time, the website
assigns her a unique identification number. That number is stored in
the cookie along with other information like the webpages she visits
on that site, the items placed in her shopping cart, and any
information she provides, such as her name and billing address. That
cookie is then placed on the user’s hard drive.25 The cookie allows that
original site to remember her even after she has left and visited other,
unrelated webpages. In order to continue tracking consumers as they
move from website to website, advertisers created the “third-party
cookie.”26 Advertisers use third-party cookies to track users as they
23 Id.
24 U.S. courts have held that cookies, the simplest tracking device, are legal, but have not ruled on the legality of more complex trackers. Id.; see also In re DoubleClick Inc. Privacy Litigation, 154 F. Supp. 2d 497, 497 (S.D.N.Y. 2001); Jonathan Bick, New Net-Use Tracking Tactics Capture Privacy Claims, 27 E-COMMERCE L. & STRATEGY 1, 2 (2011) (Plaintiffs allege that “DoubleClick's practice of placing cookies on the hard drives of Internet users who accessed DoubleClick-affiliated Web sites constituted violation of the Stored Communications Act (‘SCA’), the Wiretap Statute, and the Computer Fraud and Abuse Act.” The court held that DoubleClick’s use of cookies fell into the consent exceptions of those statutes because “when Internet users agreed to the terms and conditions of the DoubleClick-affiliated site, they essentially were consenting to those sites using their information.”).
25 Christina Tsuei, How Advertisers Track You, WALL ST. J., July 30, 2010, http://online.wsj.com/video/how-advertisers-use-internet-cookies-to-track-you/92E525EB-9E4A-4399-817D-8C4E6EF68F93.html.
26 Id. The creators of the cookie technology were concerned about privacy issues and designed cookies in a way that would not allow information to continue to be stored as users leave the original site and browse others. However, advertisers worked around these limitations and created the “third-party cookie.” Id. Third-party cookies allow the advertisers to show a user an ad for the product she just viewed on amazon.com, while she is reading the news on an unrelated site. Id.
browse multiple, unrelated webpages and to “build lists of pages that
are viewed from a specific computer.”27
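The mechanism the preceding paragraphs describe (a site-assigned identification number stored with the browser and sent back on every later request) can be sketched as a short simulation. The sketch is illustrative only: the class, the site names, and the cookie name are hypothetical, and a real third-party cookie travels in HTTP headers rather than a shared Python dictionary.

```python
import uuid

class ThirdPartyTracker:
    """Simulates an ad network whose cookie rides along on every
    affiliated page, linking unrelated visits to one browser."""

    def __init__(self):
        self.profiles = {}  # tracker ID -> list of pages viewed

    def handle_request(self, browser_cookies, page):
        # First visit: assign a unique ID and store it in the cookie jar.
        tracker_id = browser_cookies.setdefault("ad_net_id", str(uuid.uuid4()))
        # Every later visit to any affiliated site reuses that ID,
        # so the page list grows across unrelated domains.
        self.profiles.setdefault(tracker_id, []).append(page)
        return tracker_id

# One browser's cookie jar, shared across every site it visits.
cookies = {}
tracker = ThirdPartyTracker()

for page in ["shoes.example/boots", "news.example/politics",
             "health.example/weight-loss"]:
    uid = tracker.handle_request(cookies, page)

# The tracker now holds a cross-site browsing list keyed to one ID.
print(uid, tracker.profiles[uid])
```

Note that the consumer never sees any of this: the identifier is assigned and the list assembled as a side effect of simply loading pages.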
In addition to recording the websites a consumer visits and the
information she inputs, advertisers can now track every move of her
mouse using web beacons. Web beacons are a similar but newer
technology also known as “Web bugs” and “pixels.” Beacons are small
pieces of software running on a webpage that track “what a user is
doing on the page, including what is being typed or where the mouse
is moving.”28 Web beacons are also used in third-party tracking.
Similar to third-party cookies, as the user moves about the Internet
and visits a site also “affiliated with the same tracking company,” the
Web beacon “take[s] note of where that user was before, and where he
is now.” This allows the tracking company to build a “robust profile”
for each user.29 But advertisers have innovated further, adapting
technologies that actually evade users’ attempts to avoid being tracked
online.
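In practice a beacon reports what it observes by loading a tiny, invisible image whose URL carries the data. The following sketch uses a hypothetical tracker host and event names; the page-side script that captures the keystrokes is assumed, not shown.

```python
from urllib.parse import urlencode, urlparse, parse_qs

def beacon_url(tracker_host, user_id, event, detail):
    """Builds the URL of an invisible 1x1 'pixel' image. Loading the
    image is what reports the event: the data travels in the query
    string, so the user sees nothing on the page."""
    query = urlencode({"uid": user_id, "event": event, "detail": detail})
    return f"https://{tracker_host}/pixel.gif?{query}"

# A hypothetical event a beacon script might report as the user types.
url = beacon_url("tracker.example", "abc123", "keystrokes", "50 First Dates")
print(url)
```

A third-party beacon served from the same tracker host on many sites can then attribute each reported event to one user ID, exactly as third-party cookies do.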
2. FLASH COOKIES OR ZOMBIE COOKIES
Flash cookies are a newer technology that creates new privacy and legal issues. Flash cookies were originally created to save users’
Flash video preferences, such as volume settings.30 Advertisers have
adapted this technology to help track online consumers. Flash cookies
evade traditional methods of removal—they are not actually deleted
when an online consumer uses her browser options to remove
cookies.31 Since advertisers are actually “circumvent[ing] a user’s
attempt to avoid being tracked online,” this technology raises
significant privacy issues.32 Lawsuits are currently pending regarding
this questionable practice.33
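The circumvention at issue can be illustrated with a toy model. Everything here is hypothetical (the Browser class, the key name, the user ID); the point is only that keeping a second copy of the identifier in storage the browser’s cookie controls do not reach lets a tracker restore, or “re-spawn,” a deleted cookie on the next visit.

```python
class Browser:
    """Toy browser with two storage areas: ordinary HTTP cookies,
    which the 'clear cookies' button empties, and a separate Flash
    (Local Shared Object) store, which that button does not touch."""
    def __init__(self):
        self.http_cookies = {}
        self.flash_storage = {}

    def clear_cookies(self):
        self.http_cookies.clear()  # Flash storage survives

def respawn(browser, key="tracking_id", fresh_id="user-42"):
    """Sketch of cookie re-spawning: on each visit the tracker copies
    its ID into both stores, so clearing HTTP cookies removes only one
    copy and the next visit silently restores it from Flash."""
    tid = (browser.http_cookies.get(key)
           or browser.flash_storage.get(key)
           or fresh_id)
    browser.http_cookies[key] = tid
    browser.flash_storage[key] = tid
    return tid

b = Browser()
first = respawn(b)          # initial visit assigns an ID
b.clear_cookies()           # the user tries to opt out
second = respawn(b)         # next visit: the same ID comes back
print(first == second)      # True: the deletion was circumvented
```

The user performed the only cookie-management step most browsers offer, yet her identifier persists unchanged.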
27 Angwin, supra note 1. Third-party cookies are widely used. More than half of the United States’ top 50 websites “installed 23 or more ‘third party’ cookies.” At the highest end of the spectrum, “Dictionary.com installed . . . 159 third-party cookies.” Id.
28 Id.
29 Id.
30 Id.
31 Barry M. Benjamin & Stephen W. Feingold, Flash Cookies: Marketing Tactic Raises Privacy Concerns, 16 CYBERSPACE LAW 12, 12 (2011).
32 Angwin, supra note 1; see also Benjamin & Feingold, supra note 31 (“The use of Flash cookies troubles many consumers because it contravenes the currently-understood consumer expectation around managing cookies on their computers. Indeed, the use of
3. HISTORY SNIFFING
A recent lawsuit highlights a different invasive method that
advertisers use to track Internet users: “history sniffing.”34 Because browsers display a link in a different color when the user has visited the linked page before, a company can tell whether a user has visited a particular site simply by running code in the user’s browser, and it can then build a profile from the sites that user has viewed, all without the user’s knowledge. History sniffing can be used to
gather “extensive information regarding the domains or even
Flash cookies, which are not necessarily deleted when a consumer deletes regular cookies
from her browser, may well contradict brand advertiser privacy policies.”).
33 See Jennifer Valentino-Devries & Emily Steel, ‘Cookies’ Cause Bitter Backlash, WALL ST. J., Sept. 19, 2010, http://online.wsj.com/article/SB10001424052748704416904575502261335698370.html. See also Eric C. Bosset et al., Private Actions Challenging Online Data Collection Practices Are Increasing: Assessing the Legal Landscape, 23 INTELL. PROP. & TECH. L.J. 3, 3 (2011) (Recent lawsuits “allege that certain online marketing firms and their publisher affiliates improperly used ‘local shared objects,’ also known as ‘Flash cookies,’ to, for similar advertising reasons, track user activity and back up HTTP cookies for the purpose of restoring them later (also referred to as browser cookie re-spawning).”) (internal citations omitted); Jessica E. Vascellaro, Suit to Snuff Out ‘History Sniffing’ Takes Aim at Tracking Web Users, WALL ST. J., Dec. 6, 2010, http://online.wsj.com/article/SB10001424052748704493004576001622828777658.html?mod=WSJ_Tech_LEFTTopNews. One such suit alleging that Quantcast Corp., Clearspring Technologies Inc. and several other websites “used online-tracking tools that essentially hacked into users’ computers without their knowledge” recently ended in a settlement. Under the settlement agreement, Quantcast and Clearspring agreed not to use Flash Cookies “to store Web-browsing activity without adequate disclosure or except related to Adobe System Inc.’s Flash program” and “to pay $2.4 million, some of which will go toward one or more online-privacy nonprofit organizations.” The settlement remains subject to court approval. See In re Clearspring Flash Cookie Litig., No. 2:10-cv-05948-GW-JCG (C.D. Cal. filed Dec. 3, 2010); In re Quantcast Flash Cookie Litig., No. 2:10-cv-05484-GW-JCG (C.D. Cal. filed Dec. 3, 2010).
34 Vascellaro, supra note 33; see also Kashmir Hill, History Sniffing: How YouPorn Checks What Other Porn Sites You’ve Visited and Ad Networks Test the Quality of Their Data, FORBES (Nov. 30, 2010, 6:23 PM), http://blogs.forbes.com/kashmirhill/2010/11/30/history-sniffing-how-youporn-checks-what-other-porn-sites-youve-visited-and-ad-networks-test-the-quality-of-their-data; Kashmir Hill, Class Action Lawsuit Filed Over YouPorn History Sniffing, FORBES (Dec. 6, 2010, 7:04 AM), http://blogs.forbes.com/kashmirhill/2010/12/06/class-action-lawsuit-filed-over-youporn-history-sniffing (“The suit accuses YouPorn and the other sites of ‘impermissibly accessing [users'] browsing history’ and seeks class-action status. The lawsuit alleges that the porn websites broke California computer and consumer protection laws, as well as ‘violat[ed] Plaintiffs’ privacy interests.’”); see also Complaint at ¶¶ 15, 28, Pitner v. Midstream Media Int’l, N.V., (C.D. Cal. filed Dec. 6, 2010) (No. 8:10-cv-01850).
subdomains” a consumer visits, and because the code is delivered via
ads or other items on a site, the site’s host may not know that the
history sniffing is taking place.35 Federal Trade Commission (FTC)
director David Vladeck noted with concern that “history sniffing
‘deliberately bypassed’ the most widely known method consumers use
to prevent online tracking: deleting their cookies.”36 The FTC
requested that browser vendors implement fixes to protect consumers
from history sniffing. While some vendors have implemented these
changes, the majority of Web users remain vulnerable and soon
“more-sophisticated types of sniffing” may be developed, making
current browser protections obsolete.37
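The color-based test described above can be modeled without a browser. In this illustrative sketch the rendered_link_color function stands in for reading a link’s computed style from the page, and the site names are hypothetical; the logic is what matters: the sniffing script never reads the history directly, only the styling that betrays it.

```python
VISITED_COLOR = "purple"    # browsers restyle links the user has visited
UNVISITED_COLOR = "blue"

def rendered_link_color(url, browser_history):
    """Stand-in for querying a link's computed style in the page."""
    return VISITED_COLOR if url in browser_history else UNVISITED_COLOR

def sniff_history(probe_urls, browser_history):
    """A sniffing script writes links for sites it is curious about,
    never displays them, and reads back each link's color: the color
    alone reveals whether the user has been there."""
    return [url for url in probe_urls
            if rendered_link_color(url, browser_history) == VISITED_COLOR]

history = {"bank.example", "clinic.example"}   # the user's private history
probes = ["bank.example", "casino.example", "clinic.example"]
print(sniff_history(probes, history))  # ['bank.example', 'clinic.example']
```

Because the probe list is chosen by the tracker, repeating the test over thousands of candidate URLs reconstructs a detailed picture of where the user has been, which is why browser vendors were asked to close the styling side channel.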
4. DEVICE FINGERPRINTING
Yet another “new and controversial” tracking technique in the
online advertising industry is device fingerprinting.38 Every time a
consumer goes online, his computer broadcasts hundreds of unique
details as an identifier for the other computers it connects with. Each
computer possesses a “different clock setting, different fonts, different
software” and other specific characteristics that identify it.39
Companies use this data to individually identify a computer and then
track and build a profile of its user.40
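A minimal sketch of the idea, assuming the tracker has already gathered the configuration details a device broadcasts: hashing the sorted attributes yields a stable identifier even though nothing is stored on the consumer’s machine. The attribute names and values below are hypothetical.

```python
import hashlib

def device_fingerprint(attributes):
    """Hashes the bundle of configuration details a browser exposes
    (fonts, time zone, software versions, screen size, ...) into one
    stable ID. Nothing is written to the device, so there is nothing
    for the user to delete."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical attribute sets for two machines.
machine_a = {"timezone": "UTC-5", "fonts": "Arial;Garamond;Lucida",
             "browser": "Firefox 3.6", "screen": "1280x800"}
machine_b = dict(machine_a, screen="1920x1080")

fp_a = device_fingerprint(machine_a)
# The same machine always yields the same ID; even one differing
# attribute yields a completely different one.
print(fp_a == device_fingerprint(machine_a),
      fp_a == device_fingerprint(machine_b))   # True False
```

The design choice is what makes fingerprints hard to escape: the identifier is derived from the device itself rather than deposited on it, so clearing cookies or local storage changes nothing.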
Fingerprinting began as a way to stop the copying of computer
software and prevent credit card fraud.41 But it has become a powerful
new tool for tracking companies. Not only is fingerprinting difficult to
block, even “sophisticated Web surfers” cannot tell if their devices are
35 Vascellaro, supra note 33.
36 Id.
37 Id.
38 Jennifer Valentino-DeVries, ‘Evercookies’ and ‘Fingerprinting’: Are Anti-Fraud Tools Good for Ads?, WALL ST. J. (Dec. 1, 2010, 10:49 AM), http://blogs.wsj.com/digits/2010/12/01/evercookies-and-fingerprinting-finding-fraudsters-tracking-consumers.
39 Angwin & Valentino-Devries, supra note 2.
40 Id. (noting that the same method can be used to identify and track cell phones and other devices).
41 Id.
being fingerprinted.42 And even if a user is aware of the fingerprinting, unlike with cookies there is no way to delete the fingerprints that have been collected.43 This gives advertisers a significant advantage, and companies utilizing fingerprinting expect it to “completely replace the use of cookies.”44 Consumers may be surprised that they can be tracked online without their permission and cannot avoid it, but no regulation restricts these practices.
III. NOWHERE TO HIDE
There is currently no uniform federal regulatory system governing
the collection and use of online consumer information. The risks
associated with the unregulated collection and use of consumer
information range from breaches and misuse of personal information
to the crumbling of the freedoms which are the foundation of
American society. It remains unclear which entity should regulate—
Congress, the FTC, the Department of Commerce (DOC), or a White
House commission—and what form regulations should take in order
to effectively protect consumer privacy while still allowing flexibility
for ongoing technological advancements. Thus far, regulation efforts
have “resulted in a combination of legislation and promotion of
industry self-regulation.”45 The federal government’s “piecemeal
approach” to protecting online privacy has been deemed inadequate
for years by scholars and privacy advocates.46 Regulation-free
42 Id.
43 Id. While most fraud-prevention companies keep fraud data and advertising data separate, some plan to combine the two, making it possible to identify the tracked consumer and giving rise to privacy issues. Companies also plan to link the profiles of devices that appear to be used by the same consumer and may even seek to match people’s online data with offline information “such as property records, motor-vehicle registrations, income estimates and other details” which raises additional privacy concerns. Id.
44 Id. (“Tracking companies are now embracing fingerprinting partly because it is much tougher to block than other common tools used to monitor people online such as browser ‘cookies.’”). One company examined 70 million website visits and found that it could create a fingerprint for 89% of those visits. Id.
45 Paige Norian, The Struggle to Keep Personal Data Personal: Attempts to Reform Online Privacy and How Congress Should Respond, 52 CATH. U. L. REV. 803, 803 (2003).
46 Id.; see also Alan F. Blakley et al., Coddling Spies: Why the Law Doesn't Adequately Address Computer Spyware, DUKE L. & TECH. REV. 25, 25 (2005) (“Existing law does not address spyware adequately because authorization language, buried in ‘click-through’ boilerplate, renders much of current law useless. Congress must act to make spyware companies disclose their intentions with conspicuous and clearly-stated warnings.”);
consumer tracking presents a serious threat to privacy.47 Moreover,
many online consumers are unaware of the collection and use of their
online data. Finally, new technologies mean that anonymization—
identification by a number rather than a person’s name—can no
longer be relied upon to shield the consumer’s data from
identification.
A. AN INVISIBLE ASSAULT ON PRIVACY
The fact that the majority of user data is collected without the
user’s knowledge raises significant privacy issues.48 When consumers
are kept unaware, “even the legal use of private information may be
surprising and unnerving.”49
If collected information falls into the wrong hands, “this
information could facilitate identity theft, credit card fraud, cyberstalking, damaged credit, and more.”50 Additional concerns surround
Christopher F. Carlton, The Right to Privacy in Internet Commerce: A Call for New
Federal Guidelines and the Creation of an Independent Privacy Commission, 16 ST.
JOHN'S J. LEG. COMMENT. 393, 395 (2002) (“The Internet poses a new set of challenges to
privacy . . . . [b]ut the legal system has not sufficiently evolved and . . . . [t]he current policy
of self-regulation whereby Internet users and operators are setting the rules and
regulations has proven to be ineffective.”); Amy S. Weinberg, These Cookies Won’t
Crumble—Yet: The Corporate Monitoring of Consumer Internet Activity, In Re
Doubleclick Inc. Privacy Litigation, 21 TEMP. ENVTL. L. & TECH. J. 33, 33–34 (2002)
(Congress is “considering the passage of new laws to cope with [online consumer tracking]
issues. At this time, however, the practices of DoubleClick, Inc., for the purposes of
research and enhancement of Internet usage, while seemingly improper, have been
deemed to not be in violation of federal law.”).
47 David Bender, Targeted Advertising Arrives on the Government’s Radar, N.Y. L.J. (April 7, 2009), http://www.newyorklawjournal.com/PubArticleNY.jsp?id=1202429695777 (Sir Tim Berners-Lee, considered the founder of the World Wide Web, highlights some necessary considerations: “People use the Internet to search for information when they’re concerned about their sexuality, when they’re looking for information about diseases or when they’re thinking about politics” and because of the private nature of information such as this, “it’s vital that they are not being snooped on.”).
48 Brian Stallworth, Note, Future Imperfect: Googling for Principles in Online Behavioral Advertising, 62 FED. COMM. L.J. 465, 479 (2010) (noting, “Chief among the privacy concerns raised by online profiling [are] its nearly invisible nature and the broad scope of data collected about individual consumers”).
49 Id. at 473 (“Although millions of Americans appear willing to sacrifice a significant measure of their private information to gain access to Google's ever-increasing armament of products and services, these people may not fully appreciate the risks they are taking.”).
50 Id.
the breadth of the data that is collected and the risk that “[e]xcess
customization of the Web experience may stratify society.”51 For
example, if a user’s profile suggests that he is “poor or from a minority
group . . . the news, entertainment and commentary [he] sees on the
Web might differ from others’, preventing [his] participation in the
‘national’ conversation and culture that traditional media may
produce.”52
At the most basic level, the continuous, concealed collection of
personal information online threatens “[t]he very idea of privacy.”53
Consumers value both personalization and privacy, and generally
understand that they cannot have more of one without sacrificing
some of the other. In order to have products and promotions tailored
to a consumer’s personal situation and tastes, he must divulge
information about himself to corporations, governments, or other
outsiders. While this tension has long been present in consumers’
lives, covert online tracking eliminates consumers’ ability to control
the tradeoffs for themselves.54 It remains unclear just how to empower
consumers in a way that allows them to make decisions regarding
their online privacy.
B. TOUGH COOKIES
There are numerous obstacles to consumer choice and lawmaker
regulations in the area of online tracking. While some ad networks do
51 Jim Harper, It’s Modern Trade: Web Users Get as Much as They Give, WALL ST. J., Aug. 7, 2010, http://online.wsj.com/article/SB10001424052748703748904575411530096840958.html.
52 Id.; see also Jonathan Zittrain, Let Consumers See What’s Happening, N.Y. TIMES, Dec. 2, 2010, http://www.nytimes.com/roomfordebate/2010/12/02/a-do-not-call-registry-for-the-web/let-consumers-see-whats-happening (“The real nightmare scenarios [of online tracking] are not better placed dog food ads. They have to do with varying price or service depending on undisclosed and long-collected behavior cues.” For example, if a consumer’s “life insurance rates were based not just on facts like a medical checkup, but unexplained variances in what Web sites you elected to visit.”).
53 Carr, supra note 3. See also Daniel J. Solove, A Taxonomy of Privacy, 154 U. PA. L. REV. 477, 484 (2006) (“Privacy is the relief from a range of kinds of social friction. It enables people to engage in worthwhile activities in ways that they would otherwise find difficult or impossible.”); Warren & Brandeis, supra note 4, at 198 (“The common law secures to each individual the right of determining, ordinarily, to what extent his thoughts, sentiments, and emotions shall be communicated to others.”).
54 Carr, supra note 3.
permit the savvy consumer to opt out of cookies and thus tracking via
cookies, often “the opt-out must be renewed each time the user clears
cookies from the browser.”55 Further, the technologies used to enforce consumers’ privacy preferences, such as cookie opt-outs, are “ineffective in many instances” and “allow[] cookies to be downloaded
to users regardless of their privacy settings.”56 In addition, tracking
software operates on many websites without the website’s knowledge.
A recent study reports, “nearly a third of the tracking tools on fifty
popular U.S. websites were installed by companies that gained access
to the site without the [web site] publisher’s permission.”57 If a
company is not aware of tracking software, it cannot notify its users or
provide them with options for protection. Furthermore, tracking
technologies change rapidly. The industry has already passed through
several waves of changing strategies and technologies.58 Just as
regulators have begun to focus on methods for regulating cookies,
tracking companies are moving on to different techniques.59 Changes
in technology also mean changes to the data itself, leaving data that
was previously “anonymous” now more easily identifiable.
C. THE CRUMBLING SHIELD OF ANONYMITY
For years advertisers said that the anonymity of online consumer
data would shield the consumer from an invasion of privacy. But
advances in technology can now leave a consumer and her identity
exposed. Generally, the gathered information remains anonymous—a
consumer is identified by a number assigned to the computer rather
than the individual’s name.60 Proponents of tracking point to this
55 Bender, supra note 47, at 5.
56 Internet: Cookie-Blocking Protocols on Many Sites Do Not Work, Mislead Users, Report Says, 9 PRIVACY AND SEC. L. REP. (BNA), at 1329 (Sept. 27, 2010).
57 Jessica E. Vascellaro, Websites Rein in Tracking Tools, WALL ST. J., Nov. 9, 2010, http://online.wsj.com/article/SB10001424052748703957804575602730678670278.html.
58 See supra Part II.B.
59 Angwin & Valentino-Devries, supra note 2 (“‘I think cookies are a joke,’ [David Norris, CEO of tracking company BlueCava Inc.,] says. ‘The system is archaic and was invented by accident. We’ve outgrown it, and it’s time for the next thing.’”).
60 Emily Steel, A Web Pioneer Profiles Users by Name, WALL ST. J., Oct. 25, 2010, http://online.wsj.com/article/SB10001424052702304410504575560243259416072.html.
“layer of anonymity” as “a reason online tracking shouldn’t be
considered intrusive.”61
However, not every tracking company gathers only anonymous
data. One online tracking company, RapLeaf, gathers information
from a variety of sources, including “voter-registration files, shopping
histories, social-networking activities and real estate;” RapLeaf’s
databases also include consumers’ “real names and email
addresses.”62 Currently, the company’s database contains one billion
email addresses.63 RapLeaf claims to honor users’ requests for
removal from its system and has stated that it removes personally
identifiable data from profiles prior to selling information for online
advertising. But RapLeaf has “inadvertently” transmitted details that
can identify a user, such as a unique Facebook identification number,
which links to a user’s name, and a similar MySpace ID number that
can likewise be linked to an individual’s name. RapLeaf states that it
stopped this practice after being told it was occurring.64 Still, the
collection, storage, and use of consumer information that includes the
consumer’s name and email address raise significant privacy issues.
The company’s own lack of awareness of the use of its collected
information highlights the need for regulations that will force
companies to examine and enforce stringent policies for the use of
consumer data.
With the vast information that is collected nearly instantaneously
as a consumer surfs the Web, she is often just “one more piece of
information” short of being identified.65 When AOL publicized three
months’ worth of the search terms used by 657,000 of its users, in
anonymized form, the true sensitivity of this information became
evident; despite the anonymization, the New York Times quickly
identified “searcher No. 4417749 as an over-60 widow in Lilburn,
Georgia, USA.”66 Similarly, in 2006, DVD rental company Netflix
61 Id.
62 Id.
63 Id.
64 Id.
65 Emily Steel & Julia Angwin, On the Web’s Cutting Edge, Anonymity in Name Only, WALL ST. J., Aug. 4, 2010, http://online.wsj.com/article/SB10001424052748703294904575385532109190198.html.
66 Edwards & Hatcher, supra note 10.
released “anonymous” data from 480,000 of its customers, which
included “100 million movie ratings, along with the date of the rating,
a unique ID number for the subscriber, and the movie info” for a
contest challenging participants to create a “recommendation
algorithm that could predict 10 percent better than Netflix how those
same subscribers rated other movies.”67 Only a few weeks after the
contest began, two contestants “identified several Netflix users by
comparing their ‘anonymous’ reviews in the Netflix data to ones
posted on the Internet Movie Database website.”68 This revealed
personal information such as political leanings and sexual
orientation.69
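The linkage attack the contestants used is simple enough to sketch: given a “de-identified” ratings table and public reviews posted under real names elsewhere, matching on nothing more than movie, rating, and approximate date can reconnect a record to a person. The following Python sketch uses invented toy data solely to illustrate the mechanics; it is not the contestants' actual method or data:

```python
from datetime import date

# "Anonymized" release: subscriber IDs instead of names (invented toy data).
netflix_style = [
    {"sub_id": 1337, "movie": "Movie A", "rating": 4, "date": date(2005, 3, 14)},
    {"sub_id": 1337, "movie": "Movie B", "rating": 1, "date": date(2005, 3, 20)},
    {"sub_id": 2048, "movie": "Movie A", "rating": 2, "date": date(2005, 5, 2)},
]

# Public reviews posted under real names on another site (also invented).
public_reviews = [
    {"name": "J. Doe", "movie": "Movie A", "rating": 4, "date": date(2005, 3, 15)},
    {"name": "J. Doe", "movie": "Movie B", "rating": 1, "date": date(2005, 3, 21)},
]

def link(anon, public, day_slack=3):
    """Count, per (subscriber ID, name) pair, how many reviews match on
    movie, rating, and date within a few days of each other."""
    scores = {}
    for a in anon:
        for p in public:
            if (a["movie"] == p["movie"] and a["rating"] == p["rating"]
                    and abs((a["date"] - p["date"]).days) <= day_slack):
                key = (a["sub_id"], p["name"])
                scores[key] = scores.get(key, 0) + 1
    return scores

# Subscriber 1337 matches "J. Doe" on two independent reviews.
print(link(netflix_style, public_reviews))  # -> {(1337, 'J. Doe'): 2}
```

With two independent matches, subscriber 1337 is already a strong candidate for “J. Doe”; real linkage attacks work the same way, only over millions of records and with probabilistic scoring.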
A tracking company can access and analyze thousands of pieces of
information about a consumer, including the individual's ZIP code
and demographic group, in less than one second.70 With this
information, it is likely that only one more piece of information, such
as a person's age or birth date, would allow de-anonymization.71 While
the cost of such de-anonymization for all entries in a company’s
database may exceed any benefits at this point in time, the possibility
is real. Anonymity of data is neither guaranteed nor a foolproof
substitute for regulation.72 Regulation cannot rely on anonymization
to protect individuals’ privacy.73
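The “one more piece of information” arithmetic works because each attribute a tracker holds partitions the population into smaller buckets, and a bucket of size one is an identification. A schematic sketch over an invented four-person population:

```python
# Each record describes a person using only "anonymous" attributes
# (invented data for illustration).
population = [
    {"zip": "30047", "age": 62, "income": "mid"},
    {"zip": "30047", "age": 62, "income": "high"},
    {"zip": "30047", "age": 34, "income": "mid"},
    {"zip": "90210", "age": 62, "income": "mid"},
]

def candidates(pop, **known):
    """Return every person consistent with all attributes the tracker knows."""
    return [p for p in pop if all(p[k] == v for k, v in known.items())]

# ZIP code alone leaves several candidates...
print(len(candidates(population, zip="30047")))           # -> 3
# ...ZIP plus age narrows the set further...
print(len(candidates(population, zip="30047", age=62)))   # -> 2
# ...and one more attribute singles out an individual.
print(len(candidates(population, zip="30047", age=62, income="mid")))  # -> 1
```

The same narrowing happens at Internet scale: a demographic bucket of “64 or so people world-wide” collapses to one person with a single additional attribute.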
67 Ryan Singel, Netflix Spilled Your Brokeback Mountain Secret, Lawsuit Claims, WIRED MAGAZINE (Dec. 17, 2009, 4:29 PM), http://www.wired.com/threatlevel/2009/12/netflix-privacy-lawsuit. One of the identified subscribers filed Doe v. Netflix, alleging Netflix violated fair-trade laws and federal privacy law. See Complaint, Doe v. Netflix, Inc., (N.D. Cal. filed Dec. 17, 2009) (No. C09-05903).
68 Id.
69 Id.
70 Steel & Angwin, supra note 65.
71 Id. (The information collected about one individual included income level, education, and town. This “narrow[ed] him down to one of just 64 or so people world-wide.” And with “one more piece of information about him, such as his age,” the specific individual could likely be identified.)
72 See Paul Ohm, Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, 57 UCLA L. REV. 1701, 1743 (2009) (Computer scientists can now “reidentify” or “deanonymize” individuals from anonymized data with “astonishing ease,” meaning that anonymization can “no longer be considered to provide meaningful guarantees of privacy.”).
73 See id. (Regulations such as the EU Data Protection Directive and the United States’ HIPAA rely on anonymization as a privacy protection. Ohm urges the reevaluation of these
The anonymity of consumer data has recently been easily and
unknowingly compromised by some of the biggest players in the
Internet industry. Affiliates of both MySpace and Facebook have
collected and shared tracking information, including a unique profile
number associated with the user, despite policies prohibiting such
practices.
1. MYSPACE
As with many websites, when a MySpace user clicks on an online
ad, “pieces of data are transmitted, including the web address of the
page where the user saw the ad.”74 However, on MySpace, this web
address has included a user’s unique ID number, giving its holder the
ability to access that specific user’s profile page. While some MySpace
profiles use a “display name” rather than a person’s actual name, the
user ID provides access to all information that a person has made
public on their profile.75 The information was shared by applications
or “apps” on the social-networking site. These apps let users play
games and share information.76 Unfortunately, the apps themselves
were sharing information, despite the fact that MySpace policy
prohibits this practice. This demonstrates how “fundamental Web
technologies can jeopardize user privacy” and that a website’s policies
and terms may not go far enough to protect consumer information.77
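Mechanically, the leak is a side effect of how browsers request ads: the address of the page being viewed travels to the ad server in the HTTP Referer header, so any identifier embedded in that address travels with it. A minimal sketch of the receiving side (the domain and parameter name are hypothetical, not MySpace’s actual URL format):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical page address that a browser would report to an ad server
# in the Referer header when the user clicks an ad on a profile page.
referer = "http://social.example.com/profile?userid=123456789"

def id_from_referer(url):
    """Extract any user identifier embedded in the referring page's URL."""
    query = parse_qs(urlparse(url).query)
    ids = query.get("userid")
    return ids[0] if ids else None

# The ad network now holds a durable key into the user's public profile.
print(id_from_referer(referer))  # -> 123456789
```

No special spyware is involved; the identifier leaks through ordinary, standards-compliant browser behavior, which is why policy prohibitions alone did not stop it.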
2. FACEBOOK
Facebook uses tracking to learn about its 800 million users, but
has had to change its practice of transmitting unique user ID numbers
in addition to the information properly transmitted when a user clicks
on an ad. A great deal of user activity on Facebook occurs using apps.
and “any law or regulation that draws distinctions based solely on whether particular data
types can be linked to identity” as well as cautioning against the drafting of new laws or
regulations based on this distinction.).
74 Geoffrey A. Fowler & Emily Steel, MySpace, Apps Leak User Data, WALL ST. J., Oct. 23, 2010, http://online.wsj.com/article/SB10001424052702303738504575568460409331560.html.
75 Id.
76 Id.
77 Id.
In 2011, it was discovered that, for an undetermined amount of time,
Facebook apps had been sending user information to advertising and
data firms and including the user’s Facebook ID number with that
information. Several of the applications transmitted personal
information about the user’s friends to outside companies. Even if a
Facebook user has her
profile set to “private,” a search of the Facebook ID will reveal the
user’s name. Like MySpace, Facebook’s policies “prohibit app makers
from transferring data about users to outside advertising and data
companies, even if a user agrees.”78 Clearly, merely having the policy
is not enough to protect consumer privacy. Enforcing this policy on
Facebook’s 550,000 apps seems to have been a challenge and
consumer privacy has suffered as a result.79
But even if the data is anonymous, the government’s current
privacy guidelines simply do not “adequately address growing societal
concerns regarding the use and protection of information.”80 It is true
that the regulation-free environment has allowed innovative new
technologies to develop, many of which benefit both consumers and
companies. But the lack of regulation in this area has resulted in a
“trial-and-error” approach and has allowed companies to push the
limits of privacy.81 This lack of comprehensive regulation “creates
uncertainty for businesses and consumers alike,” and “will result in a
78 Emily Steel & Geoffrey A. Fowler, Facebook in Online Privacy Breach, WALL ST. J., Oct. 18, 2010, http://online.wsj.com/article/SB10001424052702304772804575558484075236968.html.
79 Id.
80 Andrew B. Serwin, Consumer Privacy: Privacy 3.0—A Reexamination of the Principle of Proportionality, 9 PRIVACY & SEC. L. REP. (BNA), at 1230 (Aug. 23, 2010); see also Vascellaro, supra note 15.
81 See, e.g., Subrahmanyam KVJ, Google Buzz’s Privacy Breach is a Sign of Things to Come, VENTUREBEAT (Feb. 14, 2010), http://venturebeat.com/2010/02/14/google-buzzs-privacy-breach-is-sign-of-things-to-come/. Google pushed the privacy envelope in early 2010 with its social networking program, Google Buzz. Almost immediately, privacy concerns bombarded the Internet giant, which was forced to implement changes to the program and publicly apologize. A class action lawsuit alleged “Google automatically enrolled Gmail users in Buzz, and that Buzz publicly exposed data, including users’ most frequent Gmail contacts, without enough user consent.” Complaint, Hibnick v. Google Inc., (N.D. Cal. Feb. 17, 2010) (No. CV-10-672). In June of 2011, the class action settlement was approved, requiring Google to pay $8.5 million to various organizations and entities. In re Google Buzz User Privacy Litigation, Case No. 5:10-CV-00672-JW (N.D. Cal.) (Sept. 03, 2010).
continuation of the patchwork, and at times inconsistent” regulatory
approach, which may not provide adequate protection to consumers.82
IV. CURRENT GOVERNANCE: RECOMMENDATIONS WITHOUT
REQUIREMENTS
In the United States, the current governance of online consumer
tracking consists of self-regulatory principles and a self-regulatory
framework of best practices recommendations. While the FTC does
not enforce these principles and framework because they are not
mandatory, the FTC can bring an action against a company whose
actions do not conform to its stated privacy policies, including any of
the self-regulatory principles the company claims to follow. Congress
has proposed legislation governing online consumer tracking on
several occasions, but none has become law.
A. THE FTC’S RECOMMENDATIONS
In the area of online privacy, the FTC has “investigated fairness
violations, brought law enforcement actions, required some Web sites
to post privacy policies, and overseen an on-going dialog with industry
and consumer groups.”83 But “concern for stifling the freedom and
prosperity of online commerce” has hindered the FTC’s ability to
establish “enforceable regulatory privacy standards.”84 Thus far, the
FTC has approached online behavioral advertising and tracking with
only recommendations, not regulatory requirements.
1. THE FTC’S SELF-REGULATORY PRINCIPLES OF 2009
Following a 2007 “Town Hall” in which numerous industry leaders
and interested parties discussed the benefits and concerns of online
behavioral advertising, the FTC released a staff report with “Self-
Regulatory Principles for Online Behavioral Advertising.”85
82 Serwin, supra note 80.
83 Stallworth, supra note 48, at 468; see generally Susan E. Gindin, Nobody Reads Your Privacy Policy or Online Contract? Lessons Learned and Questions Raised by the FTC’s Action Against Sears, 8 NW. J. TECH. & INTELL. PROP. 1 (2009) (discussing and analyzing the FTC’s action against Sears Holding Management Corp. and its online privacy implications).
84 Stallworth, supra note 48, at 468.
principles included four “governing concepts.” First, companies
should inform consumers that they gather information for behavioral
advertising and allow the consumer to choose whether or not to allow
that practice. Second, companies should take reasonable measures to
protect consumer data and should not retain the information longer
than necessary. Third, if a company decides to use consumer data in a
way that is “materially different” from its stated purpose when it
collected the data, it should do so only after obtaining express
permission from the consumer. Fourth, companies should obtain
permission before using sensitive data for behavioral advertising.86
The self-regulatory principles were embraced by the industry,87
but consumer advocacy groups argued that the non-mandatory
principles do not provide adequate consumer protections.88 The
nation’s most prominent media and marketing associations created
the Self-Regulatory Program for Online Behavioral Advertising
(Industry Program). This Industry Program is based on the FTC’s
Self-Regulatory Principles and “gives consumers a better understanding of
and greater control over ads that are customized based on their online
behavior.”89
85 FED. TRADE COMM’N, FTC STAFF REPORT: SELF-REGULATORY PRINCIPLES FOR ONLINE BEHAVIORAL ADVERTISING TRACKING, TARGETING, & TECHNOLOGY 1–2 (2009) [hereinafter FTC STAFF REPORT 2009], available at http://www.ftc.gov/os/2009/02/P085400behavadreport.pdf.
86 Id. at 11–12. Sensitive data includes “data about children, health, or finances.” Id.
87 See AM. ASSOC. OF ADVER. AGENCIES ET AL., SELF-REGULATORY PRINCIPLES FOR ONLINE BEHAVIORAL ADVERTISING (2009), available at http://www.aboutads.info/resource/download/seven-principles-07-01-09.pdf (These Industry Principles “correspond with tenets proposed by the Federal Trade Commission in February 2009, and also address public education and industry accountability issues raised by the [FTC].”).
88 See, e.g., Alexei Alexis, Legislation: Business Lobbyists Press for Self-Regulation; Boucher Privacy Proposal Seen as Disruptive, 9 PRIVACY & SEC. L. REP. NO. 23 (BNA), at 844–45 (June 7, 2010) (“Consumer advocates . . . remain dissatisfied with industry self-regulation and are urging congressional action. [In September 2009,] a coalition of advocacy groups called on Congress to impose tough privacy standards on online companies, including a 24-hour limit on the use of any personal data obtained without the consumer’s prior consent and an absolute ban on the collection of sensitive information.”).
89 THE SELF-REGULATORY PROGRAM FOR ONLINE BEHAVIORAL ADVERTISING, http://www.aboutads.info/home (last visited Sept. 23, 2011).
The Industry Program “requires participating firms to display an
‘advertising option icon’ within or near online ads or on web pages
where data is collected and used for behavioral advertising.”90 The
icon will also signal that a company engages in behavioral advertising
and abides by the Industry Program’s principles. When a consumer
clicks on the icon, she will be linked to “a ‘clear’ disclosure statement
regarding the company’s data-collection practices, as well as an ‘easy-
to-use’ opt-out option.”91 Critics urge that the Industry Program and
other self-regulatory principles “don’t protect consumers, haven’t
worked before, and are largely designed for no reason except to take
the congressional eye off the reform ball.”92 Also, companies have little
incentive to commit to self-regulatory principles like the Industry
Program because “[a] company that agrees to comply with the
program and fails to do so could be found in violation of Section 5 of
the FTC Act,” which prohibits “unfair and deceptive trade practices.”93
After all, when a company has not made an affirmative statement, “[it
is] more difficult to bring a deception claim.”94 Thus, companies face
fewer risks if they do not make an affirmative statement like the one
required by the Industry Program.
2. THE FTC’S PROPOSED FRAMEWORK OF 2010
Answering the call for additional protections, the FTC published a
Preliminary FTC Staff Report proposing a framework to better protect
consumer information, noting that the “notice-and-choice model, as
90 Alexei Alexis, Marketing: Web Marketers Launch Self-Regulatory Plan with Opt-Out for Behavioral Data Collection, 9 PRIVACY & SECURITY L. REP. NO. 40 (BNA), at 1385–86 (Oct. 11, 2010).
91 Id.
92 Id.
93 See, e.g., Bick, supra note 24 (While the FTC has allowed online consumer data collection to be self-regulated, “the agency has prosecuted Web-site owners who fail to disclose behavioral-targeting techniques in a clear and conspicuous manner.” For example, the FTC brought an action against Sears Holdings Management Corporation because “Sears obscured the extent of the data collection, which included health, banking and other sensitive data, in its privacy statements.”); see also In the Matter of Sears Holdings Mgt. Corp., C-4264 (F.T.C. Aug. 31, 2009).
94 Alexis, supra note 90.
implemented [thus far], has led to long, incomprehensible privacy
policies that consumers typically do not read, let alone understand.”95
The framework makes four core recommendations. (1) The framework
should apply to both online and offline commercial entities
that gather and use data that can be “reasonably linked to a specific
consumer, computer, or other device.”96 (2) Companies should
integrate consumer privacy measures throughout their organization
and at each stage in the development of new products and services.97
When data is collected for “commonly accepted practices,” consumer
choice is not required. But for other purposes, the company should
offer the consumer a clear choice of whether or not to provide the
data.98 (3) Companies should create clear, understandable, and
uniform privacy notices. Companies should allow consumers to access
the data that has been collected and retained about them. (4) If a
company chooses to use consumer data in a way that is “materially
different” from its stated purpose when it collected the data, it should
do so only after obtaining express permission from the consumer.99
The FTC also recommended the development of a “do not track”
system100—a “simple, easy to use choice mechanism for consumers to
opt out of the collection of information about their Internet behavior
for targeted ads.”101 The idea for a do-not-track system similar to the
95 FED. TRADE COMM’N, PROTECTING CONSUMER PRIVACY IN AN ERA OF RAPID CHANGE: A PROPOSED FRAMEWORK FOR BUSINESSES AND POLICYMAKERS iii (2010) [hereinafter FTC PROPOSED FRAMEWORK 2010].
96 Id. at v. This scope demonstrates the FTC’s recognition that “the distinction between data containing personally identifiable information and anonymous data is becoming less meaningful.” Julia Angwin & Jennifer Valentino-Devries, FTC Backs Do-Not-Track System for Web, WALL ST. J., Dec. 2, 2010, http://online.wsj.com/article/SB10001424052748704594804575648670826747094.html.
97 FTC PROPOSED FRAMEWORK 2010, supra note 95, at 40.
98 Id. at 20–28. These commonly accepted practices include “product and service fulfillment,” “internal operations,” “fraud prevention,” “legal compliance and public purpose,” and “first-party marketing,” for example, where online companies recommend products based upon a consumer’s previous purchases on that company’s website. Id. at 53–54.
99 Id. at 41–42.
100 Id. at 66–69.
101 Press Release, Fed. Trade Comm’n, FTC Staff Issues Privacy Report, Offers Framework for Consumers, Businesses, and Policymakers (Dec. 1, 2010), http://www.ftc.gov/opa/2010/12/privacyreport.shtm.
“Do Not Call” registry prohibiting telemarketing telephone calls was
first raised in 2007. However, the difficulty of implementing such a
system prevented its creation.102 New technologies may now make a
do-not-track system possible. Privacy researchers believe it is possible
to install a small piece of code in web browsers that would “broadcast
a message to every website saying ‘do not track this user.’”103
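The broadcast the researchers describe is, in practice, a single extra HTTP header attached to every outgoing request, with compliance left entirely to the receiving tracker. A sketch using Python's standard library to show what such a request carries (the pixel URL is hypothetical):

```python
from urllib.request import Request

# Build a request the way a do-not-track-enabled browser would: every
# outgoing request carries a header announcing the user's preference.
req = Request("http://ads.example.com/pixel")
req.add_header("DNT", "1")  # "do not track this user"

# The header rides along with the request; honoring it is up to the server,
# which is exactly why critics doubted a voluntary system would work.
print(req.get_header("Dnt"))  # -> 1
```

Nothing in the protocol enforces the preference; the header is a request, not a technical control, which is the gap the proposed legislation aimed to close.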
The FTC report requested comments on its current framework104
and a final version of the framework is forthcoming.105 However, this
new FTC framework remains “recommendations for best practices”
and not regulations “for enforcement.”106
B. THE ABSENCE OF CONGRESSIONAL LEGISLATION
There remains no federal online tracking regulation, and it seems
unlikely that Congress will enact comprehensive online privacy
regulations in the near future. Congress entered the online consumer
tracking debate in 2010.107 In May, 2010, Representative Rick
Boucher, Chair of the Subcommittee on Communications, Technology,
and the Internet, circulated a draft legislative proposal of a
comprehensive federal privacy regulation that would have imposed
new restrictions on the collection and use of consumer data.108
Representative Boucher’s proposed legislation applied broadly—to
“any entity engaged in interstate commerce that collects covered
information, such as a person’s name, postal and e-mail address,
telephone number, ‘preference profile,’ or ‘unique persistent
identifier,’ such as an Internet protocol address.”109 The legislation
102 Julia Angwin & Spencer E. Ante, Hiding Online Footprints, WALL ST. J., Nov. 30, 2010, http://online.wsj.com/article/SB10001424052748704584804575645074178700984.html?mod=WSJ_Tech_LEFTTopNews.
103 Id. The proposal of a do-not-track system was met with immediate criticism. See infra notes 109–113 and accompanying text.
104 FTC PROPOSED FRAMEWORK 2010, supra note 95, at 38.
105 Angwin & Valentino-Devries, supra note 96.
106 Id. (quoting FTC Commissioner Jon Leibowitz).
107 Daniel T. Rockey, Proposed Data Privacy Legislation Generates Relief as Well as Concerns, 9 PRIVACY & SEC. L. REP. (BNA), at 961 (June 28, 2010).
108 Alexis, supra note 88.
109 Id.
mandated a “clear, understandable privacy policy explaining how
[covered information] is collected, used, and disclosed,” and required
consumer opt-in for the use of any “sensitive data, such as medical
records, financial accounts, Social Security numbers, and location
information,” and offered limited exceptions to these mandates.110 The
bill also required that companies “delete or ‘render anonymous’ any
covered information” eighteen months after collection.111 This
proposed legislation faced criticism from both the industry, which
worried about how disruptive the regulation would be to current
business models, and consumer advocacy groups, which argued that
the proposal did not impose enough restrictions to adequately protect
consumers.112 Online privacy regulations require an informed, delicate
balancing of interests.113 The proposed privacy regulations may have
left room for improvement, but it appears that the bill did not advance
far beyond the proposal stage. Representative Boucher was defeated
in the November 2010 elections, and his absence likely had a
debilitating impact on the bill’s legislative progress.114
Congress revived the idea of a do-not-track system in February
2011 when it began considering the “Do Not Track Me Online Act of
2011.”115 If passed, this legislation would give the FTC eighteen
Id. (noting that the opt-in mandate was “just one of many areas of the proposal that are
expected to generate debate”).
110
Id. For a discussion cautioning against this type of reliance on “anonymization” in
privacy statutes and regulation, see supra notes 67–68 and accompanying text.
111
Id. (“With only limited exemptions included, the proposal would leave many datasharing arrangements—as they exist today—in legal jeopardy.” Still, consumer advocacy
groups have “argued that his proposal does not go far enough.”).
112
See Andrea N. Person, Note, Behavioral Advertisement Regulation: How the Negative
Perception of Deep Packet Inspection Technology May be Limiting the Online Experience,
62 FED. COMM. L.J. 435, 437 (2010) (“While protecting the personal information of
Americans online should be a top priority, it is equally important to consider how
regulation in this area may affect the future of the Internet and how too much regulation
may harm the consumer.”).
113
Mike Shields, Online Privacy Bill: Dead in the Water?, ADWEEK (Nov. 4, 2010),
http://www.adweek.com/aw/content_display/news/politics/e3if13877e698a1cce2faa1baf
6cc66750a.
114
Speier Introduces Consumer Privacy Package (Feb. 11, 2011),
http://speier.house.gov/index.php?option=com_content&view=article&id=215:speierintroduces-consumer-privacy-package&catid=1:press-releases&Itemid=14; see also David
Sarno, “Do Not Track” Internet Privacy Bill Introduced In House, L.A. TIMES, Feb. 11,
2011, http://www.latimes.com/business/la-fi-do-not-track-20110212,0,66573.story.
115
months to create regulations requiring advertisers to “allow users to
‘effectively and easily’ choose not to have their online behavior tracked
or recorded.”116 However, such a system could dramatically decrease
the availability of free content,117 and the implementation of a do-not-
track mechanism remains difficult; browser makers must build the
feature, and “[i]t would only work if tracking companies would agree to
honor the user’s request.”118 Reflecting these concerns, the bill was
met with immediate criticism from the industry, including the charge
that the do-not-track program Congress envisions “would require
re-engineering the Internet’s architecture,” resulting in a “severely
diminished experience” for consumers.119 As of this publication, the
bill remains in the beginning stages of lawmaking, with the last related
action being its referral to the House Subcommittee on Commerce,
Manufacturing, and Trade one week after it was introduced in
February 2011.120
V. THE REGULATION CHALLENGE: BALANCING OPTIONS
Consumers, governments, and companies around the world are
searching for a suitable approach to online consumer tracking and
behavioral advertising. Any regulation or system must address
consumer privacy issues without eliminating the functionality and
benefits that the Internet and its industry provide. Innovative
116 Sarno, supra note 115.
117 Angwin & Valentino-Devries, supra note 96 (“‘FTC endorses ‘do not track’; an emotional goodbye to free content so kindly funded by advertisers,’ tweeted Rob Norman, chief executive of WPP PLC’s GroupM North America, which buys ads on behalf of corporate clients.”).
118 Id. (Mozilla Corp.’s Firefox Web browser explored a built-in do-not-track mechanism but chose not to include the tool in its browser, fearing that such a tool “would force advertisers to use even sneakier techniques and could slow down the performance of some websites.”).
119 Grant Gross, Lawmaker Introduces Online Do-Not-Track Bill, PCWORLD (Feb. 11, 2011, 4:32 PM), http://www.pcworld.com/businesscenter/article/219454/lawmaker_introduces_online_donottrack_bill.html (“Do Not Track” as a method of protecting consumers “resonate[s] with the public,” but practically, it remains “difficult to implement.”).
120 Do Not Track Me Online Act, H.R. 654, 112th Cong. (2011).
solutions have been explored, but no one method has proven
completely successful.
A. A MISSION THAT WON’T SELF-DESTRUCT: PREVENTING THE HARM
WITHOUT REGULATING AWAY THE BENEFITS
Tracking has strengthened the behavioral advertising industry, and
consumers have grown fond of the benefits that such advertising
provides, which adds to the challenge of creating regulations. Targeted
ads show consumers advertisements that will likely interest them. The
benefits of targeted advertising also include Internet features that
many consumers have come to expect, since targeted ads fund much of
the website content that consumers access free of charge.121
Several tracking methods use technology that also serves fraud
prevention: both “evercookies” and device fingerprinting are essential
fraud-protection tools. If device fingerprinting and evercookies were
banned, the consequences for the Web and the way it functions could
be grave.122 In fact, if these techniques for identifying fraudsters
were prohibited or blocked by Web browsers, “it would essentially
make it impossible to shop over the internet.”123 Regulation intended
to protect consumer privacy could thus ban technologies that provide
benefits to consumers and inhibit Web functions we all rely upon.124
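Device fingerprinting works because the combination of ordinary, individually harmless attributes (browser version, screen size, installed fonts, time zone) is close to unique per machine, and a hash of them yields a stable identifier that survives cookie deletion. A simplified sketch with invented attribute values; real fingerprinting scripts collect far more signals:

```python
import hashlib

def fingerprint(attributes):
    """Hash a device's observable attributes into a stable identifier.
    The same machine yields the same ID on every visit, no cookie needed."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Invented attributes of the kind a fingerprinting script can read.
device = {
    "user_agent": "ExampleBrowser/9.0",
    "screen": "1920x1080",
    "timezone": "UTC-5",
    "fonts": "Arial,Courier,Times",
}

# Deterministic: re-running after the user clears cookies returns the same
# ID, which is why one mechanism serves both fraud detection and tracking.
print(fingerprint(device) == fingerprint(dict(device)))  # -> True
```

The dual-use problem is visible in the sketch itself: nothing in the computation distinguishes a fraud-screening purpose from an advertising one, so a ban on the technique would disable both.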
121 Richard M. Marsh, Jr., Note, Legislation for Effective Self-Regulation: A New Approach to Protecting Personal Privacy on the Internet, 15 MICH. TELECOMM. & TECH. L. REV. 543, 544 (2009).
122 Valentino-Devries, supra note 38.
123 Id. The elimination of cookies also impacts the way many websites function. Edwards & Hatcher, supra note 10, at 3 (Amazon’s website “(unusually) provides fairly good functionality without cookies, but popular features such as the ‘shopping cart’ and ‘your preferences’ do disappear. Many sites however simply fall over if the user chooses to ‘turn off’ or delete” the cookies for that site. Similarly, search engines collect and store search data for business purposes that benefit consumers); see also Edwards & Hatcher, supra note 10, at 14 (Search engines like Google collect data in order “to improve their own search algorithms,” which allow the tailored and accurate searches Internet users appreciate and expect).
124 See, e.g., supra Part II.B.4.
B. THE EUROPEAN UNION’S STRUGGLE WITH OPT-IN
The European Union (EU) leads the world in the area of online
privacy law.125 But, the EU, like the United States, is struggling to find
the best method of regulating the collection and use of consumer
information online.126 The EU’s recent attempt at implementing an
opt-in requirement for online consumer tracking is a cautionary
tale.127 Previously, European law required websites to allow
consumers to “opt out” or refuse cookies. But in 2009 the EU passed a
law requiring companies to “obtain consent from Web users when
tracking files such as cookies are placed on users’ computers.”128 As
the law waited for enactment by member countries, a dispute over its
meaning erupted and “Internet companies, advertisers, lawmakers,
privacy advocates and EU member nations” became entangled in
debate over what the law actually requires in practice.129 If a user
agrees to cookies when setting up her Web browser, is that a sufficient
opt-in? If an industry plan allows users to see and opt out of data that
has been collected about them, is that sufficient consent? Must a
consumer “check a box each time” before a cookie may be placed on
his machine?130 The deadline for member countries to implement the
law has now passed, but the dispute over its meaning continues with
no clear answer in sight and no change in the
way cookies are used for tracking. The EU’s approach to online privacy
regulation demonstrates just how difficult it is to regulate the practice
of tracking Internet users’ behavior on the Web, and lawmakers must
carefully consider the impact and implementation of any
requirement.131
125 See, e.g., Robert W. Hahn & Anne Layne-Farrar, The Benefits and Costs of Online Privacy Legislation, 54 ADMIN. L. REV. 85, 116–17 (2002) (comparing the EU’s “comprehensive” privacy legislation—the Data Protection Directive—with the United States’ “ad hoc” approach to information privacy in general).
126 See Kuneva, EU Consumer Comm’r, supra note 10.
127 Paul Sonne & John W. Miller, EU Chews on Web Cookies, WALL ST. J., Nov. 22, 2010, http://online.wsj.com/article/SB10001424052748704444304575628610624607130.html?mod=WSJ_Tech_LEADTop.
128 Id.
129 Id.
130 Id.
131 Id.
C. CONSUMER OPT-OUT TOOLS IN THE U.S.
Consumer opt-out tools have also emerged in the United States.
Using various methods, these tools allow consumers to opt out of
some behavioral advertising.
Both the Self-Regulatory Program for Online Behavioral
Advertising and the Network Advertising Initiative (NAI) have
instituted consumer opt-out web pages which allow users to decide
whether or not they receive interest-based advertising from
companies participating in each program. The websites show
consumers which participating advertisers have been tracking them
and delivering behavioral or “interest-based” advertising targeted to
them. Consumers can choose to opt-out of interest-based advertising
from those participating advertisers.132 However, these opt-out
selections are stored in “opt-out cookies.” This means that consumers’
opt-out selections will be erased any time they clear their cookies.
Also, the consumers’ opt outs only prevent interest-based advertising;
consumers will still receive non-interest-based advertising, and thus
will continue to be tracked.133
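The fragility of cookie-based opt-outs described above can be sketched in a few lines. This is a hypothetical illustration only; the cookie name and ad-serving logic are invented for the example and are not drawn from any actual opt-out program:

```python
# Minimal sketch (hypothetical names) of why cookie-based opt-outs are
# fragile: the opt-out preference lives in the same cookie jar that the
# user clears.

def serve_ad(cookies: dict) -> str:
    """Return the kind of ad a hypothetical ad network would serve."""
    if cookies.get("optout") == "1":
        return "non-interest-based ad (no behavioral targeting)"
    return "interest-based ad (behaviorally targeted)"

cookies = {"tracking_id": "abc123"}  # ordinary tracking cookie
print(serve_ad(cookies))             # interest-based ad (behaviorally targeted)

cookies["optout"] = "1"              # user visits the opt-out page
print(serve_ad(cookies))             # non-interest-based ad (no behavioral targeting)

cookies.clear()                      # user clears all cookies...
print(serve_ad(cookies))             # ...and the opt-out is erased along with them
```

Because the opt-out preference is stored alongside every other cookie, clearing the browser’s cookies wipes out the preference together with the very trackers it was meant to restrain.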
D. GOOGLE’S PRACTICAL OPT-OUT SOLUTION
Google’s online privacy program, its “Ad Preference Manager,”
also supports a consumer opt-out, rather than an opt-in, system. For
years Google had struggled to determine how far it was willing to go to
profit from its “crown jewels,” “the vast trove of data it possesses
about people’s [online] activities.”134 Google leads the industry in this respect: “[f]ew online companies have the potential to know as much about its users
as Google.”135 Google has access to the information users store in Google Docs, its online word processor and spreadsheet; it also has access to all of the emails users send through Gmail, and it saves the searches those same users execute, making the search data anonymous after eighteen months.
132 See Opt Out from Online Behavioral Advertising (Beta), ABOUTADS, http://www.aboutads.info/choices (last visited Sept. 10, 2011); Opt Out of Behavioral Advertising, NETWORK ADVER. INITIATIVE, http://www.networkadvertising.org/managing/opt_out.asp (last visited Sept. 10, 2011).
133 See Opt Out of Behavioral Advertising, supra note 132.
134 Vascellaro, supra note 15.
135 Id.
2012]
BOWMAN
745
However, when it came to using
this user data, the company initially held back, concerned about
privacy issues. Google makes its money selling ads—originally, “ads
tied to the search-engine terms people use.”136 Google’s policies
allowed only display ads based on “contextual” targeting (putting a
shoe ad on a page about shoes). As the industry changed and
advertisers became interested in targeting consumers “based on more
specific personal information such as hobbies, income, illnesses or
circles of friends,” Google was forced to change too.137 In 2007
Google purchased DoubleClick, a “giant” in the online advertising
business. Google executives remained reluctant to use cookies to track
people online, because many users were unaware that they were being
tracked. Finally, in March 2009, Google did launch an interest-based
ads product that uses cookies to track a user’s visits to “one of the
more than one million sites where Google sells display ads.”138
However, Google “elegantly” and simply addresses the many practical problems of implementing notice and choice regarding its consumer tracking with its Ad Preference Manager.139 Google’s
approach has proven functional while still promoting transparency
and consumer control. Further, it presents a privacy solution that has
been called “superior” to a do-not-track system.140 Google tracks users’
browsing activity across sites using AdSense and creates a profile of
each user’s interests. Like many advertisers, Google uses this profile to
136 Id.
137 Id.
138 Id.
139 Berin Szoka, Google’s Ad Preference Manager: One Small Step for Google, One Giant Leap for Privacy, THE PROGRESS & FREEDOM FOUND., Mar. 2009, available at http://ssrn.com/abstract=1421876; see also Jennifer Valentino-Devries, What They Know About You, WALL ST. J., July 31, 2010, http://online.wsj.com/article/SB10001424052748703999304575399041849931612.html (noting that Google, the most prevalent tracker, appearing on forty-nine of the fifty Web sites tested in a recent Wall Street Journal survey, does allow consumers to use Google’s Ads Preferences Manager, at http://www.google.com/ads/preferences, to “see the name of the tracking file, or ‘cookie,’ it associates with [their] Web browser and the interests it links to that cookie.” Google’s manager “lets [consumers] remove some interests and add others, choosing from a list ranging from ‘poetry’ to ‘neuroscience’ to ‘polar regions.’” A consumer can also choose to opt out so Google will no longer track him.).
140 See Szoka, supra note 139, at 3.
tailor the ads delivered on the Google Content Network (GNC) and
YouTube.141
Through its Ad Preference Manager, Google provides notice of its
tracking to consumers in two ways. First, each ad notifies the user
which advertiser is paying for the ad and that Google is serving it. The
bottom left-hand corner of “each AdSense ad on sites in the GNC”
contains the URL of the advertiser’s website. In the bottom right-hand
corner of the ad, an “Ads by Google” link will be displayed. Second, the
“Ads by Google” link leads a user to his or her profile with the
categories and subcategories that have been assigned to the tracking
cookie in his or her browser.
Google also provides choice to consumers in two ways. The Ad
Preference Manager allows users to both view and edit the profile that
has been assembled based on tracking. Consumers can select or delete
categories of interests in their profiles. In addition, users have the
ability “to opt-out completely from having their data collected” for
behavioral advertising purposes, a choice that “will be respected in the
future and will therefore be ‘persistent.’”142 Google’s Ad Preference
Manager addresses privacy advocates’ concerns that opt-out systems make it too difficult for consumers to find the opt-out tool and do not provide a “persistent” opt-out, instead requiring “the placement of a special ‘opt-out cookie’ on the user’s computer, which may be inadvertently deleted when users delete all their cookies.”143 Such
“user empowerment tools” have proven effective in similar contexts
such as “online child protection, where parental control software
offers a more effective alternative to government regulation of
Internet content.”144 Google’s approach allows for flexibility, as it does
not absolutely block ad companies’ abilities to track and target. Such
practices fund free online content and allow fraud prevention
mechanisms to operate.145 Google’s approach is not a flawless model
and could not be widely implemented. There are countless different
companies advertising and collecting consumer data on the Internet,
all around the world. If each advertising company on the Internet
141 Id. at 1.
142 Id. at 2.
143 Id. at 3.
144 Id. at 4; see also Children’s Online Privacy Protection Rule, 16 C.F.R. § 312 (2010).
145 See Szoka, supra note 139, at 6. For examples of potential negative effects of a ban on tracking, see supra notes 111, 116–18, and accompanying text.
provided this tool, the result would be a complex and time-consuming
system requiring consumers to monitor and manage all of their
individual profiles for each and every company. Also, because tracking
and targeting are not fully prohibited with Google’s method, some
privacy issues are left unaddressed.
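The “persistent” quality discussed in this Part can be contrasted with the cookie-based approach in a short sketch. All names are hypothetical; this illustrates the design difference between client-side and server-side preference storage, not Google’s actual implementation:

```python
# Hypothetical contrast between a cookie-based opt-out and a server-side
# ("persistent") opt-out tied to a stable identifier. All names invented.

class CookieOptOut:
    """Opt-out stored client-side: lost whenever cookies are cleared."""
    def __init__(self):
        self.cookies = {}
    def opt_out(self):
        self.cookies["optout"] = "1"
    def clear_cookies(self):
        self.cookies.clear()
    def is_opted_out(self) -> bool:
        return self.cookies.get("optout") == "1"

class ServerSideOptOut:
    """Opt-out recorded server-side against an identifier: survives the
    user clearing local cookies."""
    def __init__(self):
        self.server_prefs = set()  # identifiers that have opted out
        self.cookies = {}
    def opt_out(self, user_id: str):
        self.server_prefs.add(user_id)
    def clear_cookies(self):
        self.cookies.clear()       # local state gone; server record remains
    def is_opted_out(self, user_id: str) -> bool:
        return user_id in self.server_prefs

cookie_scheme = CookieOptOut()
cookie_scheme.opt_out()
cookie_scheme.clear_cookies()
print(cookie_scheme.is_opted_out())         # False: preference was erased

server_scheme = ServerSideOptOut()
server_scheme.opt_out("alice")
server_scheme.clear_cookies()
print(server_scheme.is_opted_out("alice"))  # True: preference persisted
```

The design choice is what makes an opt-out “persistent”: only a preference recorded somewhere outside the user’s cookie jar can survive routine cookie clearing.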
VI. CONCLUSION
Unregulated online consumer tracking and collection of consumer
data present significant privacy issues. While the self-regulation and
best-practices models the FTC has used thus far allow the industry a
great deal of power to innovate, there remains little incentive for
companies to fully participate.146 Much debate has centered on which
government entity—Congress,147 the FTC,148 the Department of
Commerce,149 or a White House Task Force150—should govern
consumer tracking and whether regulation should take the form of
recommendations for self-regulation or stringent law.
Various methods for addressing the privacy issues surrounding
online tracking have been implemented by regulators and companies,
but none has proven to be an ideal model. Although a consumer ad
management tool such as Google’s resolves many of the practical
issues and provides a great deal of user control, it is not a system that
could be widely implemented. There remains no clear solution to the
privacy issues surrounding online tracking and behavioral advertising.
However, it is apparent that any regulation of online consumer
tracking and behavioral advertising should strike a delicate balance.
Regulation must provide adequate protections to consumers, and its
scope must encompass a broad range of ever-developing techniques
146 See supra Part IV.A.
147 See supra Part IV.B.
148 See supra Part IV.A; see also FTC PROPOSED FRAMEWORK 2010, supra note 95; FTC STAFF REPORT 2009, supra note 85.
149 See THE U.S. DEP’T OF COMMERCE INTERNET POLICY TASK FORCE, COMMERCIAL DATA PRIVACY AND INNOVATION IN THE INTERNET ECONOMY: A DYNAMIC POLICY FRAMEWORK (2010), http://www.commerce.gov/sites/default/files/documents/2010/december/iptfprivacy-green-paper.pdf.
150 See Julia Angwin, Watchdog Planned for Online Privacy, WALL ST. J., Nov. 11, 2010, http://online.wsj.com/article/SB10001424052748703848204575608970171176014.html (“[T]he White House has created a special task force that is expected to help transform the Commerce Department recommendations [on policing Internet privacy] into policy.”).
used to track. At the same time, regulation should not stifle the innovation through which the Internet has developed and thrived, and must not go so far as to eliminate the functionality of the Web.
Sponsored by
The Moritz College of Law
The Ohio State University
Drinko Hall, Room 169
55 West 12th Avenue
Columbus, OH 43210
H. John Heinz III College
Carnegie Mellon University
Hamburg Hall 1108
5000 Forbes Avenue
Pittsburgh, PA 15213-3890
Please Visit the I/S Website to Subscribe Electronically:
moritzlaw.osu.edu/students/groups/is/
Published by
The Center for Interdisciplinary Law and Policy Studies
The Ohio State University
Drinko Hall
55 West 12th Avenue
Columbus, OH 43210