Protecting Games: A Security Handbook for Game Developers and Publishers
Charles River Media
A part of Course Technology, Cengage Learning
Australia, Brazil, Japan, Korea, Mexico, Singapore, Spain, United Kingdom, United States
© 2008 IT GlobalSecure, Inc.

ALL RIGHTS RESERVED. No part of this work covered by the copyright herein may be reproduced, transmitted, stored, or used in any form or by any means graphic, electronic, or mechanical, including but not limited to photocopying, recording, scanning, digitizing, taping, Web distribution, information networks, or information storage and retrieval systems, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the publisher.

For product information and technology assistance, contact us at Cengage Learning Customer and Sales Support. For permission to use material from this text or product, submit all requests online. Further permissions questions can be emailed to [email protected]

Publisher and General Manager, Course Technology PTR: Stacy L. Hiquet
Associate Director of Marketing: Sarah Panella
Manager of Editorial Services: Heather Talbot
Marketing Manager: Jordan Casey
Senior Acquisitions Editor: Emi Smith
Project/Copy Editor: Kezia Endsley
PTR Editorial Services Coordinator: Jen Blaney
Interior Layout: Shawn Morningstar
Cover Designer: Mike Tanamachi
Indexer: Valerie Haynes Perry
Proofreader: Ruth Saavedra
The information contained in this publication is
not intended to convey or constitute legal advice
on any subject matter. Readers should not rely on
the information presented in this publication for
any purpose without seeking legal advice on
the specific facts and circumstances at issue from a
licensed attorney. Readers should not consider
the information presented in this publication to be
an invitation for an attorney-client relationship,
and providing the information in this publication is
not intended to create an attorney-client relationship between you and any author or contributor to
this publication. The information in this publication contains general information that is intended,
but cannot be guaranteed, to be always up-to-date,
complete and accurate. Any representation or warranty that might be otherwise implied is expressly
disclaimed. The authors and contributors expressly
disclaim all liability or responsibility with respect to
actions taken or not taken based on any or all of
the information contained in this publication.
Material in this book may include discussion regarding issues reported in the public media and public legal system regarding services, products, and other material that may be
subject to laws granting copyright protection. These issues
are discussed for illustrative purposes only and the facts presented are limited to that purpose. Those wishing to seek
further information about any illustrative point discussed
are encouraged to engage in further research.
All trademarks are the property of their respective owners.
Library of Congress Control Number: 2008932480
ISBN-13: 978-1-58450-670-6
ISBN-10: 1-58450-670-9
eISBN-10: 1-58450-687-3
Course Technology, a part of Cengage Learning
20 Channel Center Street
Boston, MA 02210
Cengage Learning is a leading provider of customized learning solutions with office locations around the globe, including Singapore, the United Kingdom, Australia, Mexico, Brazil, and Japan.
Cengage Learning products are represented in Canada by
Nelson Education, Ltd.
Printed in the United States of America
1 2 3 4 5 6 7 12 11 10 09
For my parents, sisters, family, friends, teachers, and colleagues.
Thank you for your patience.
First, I would like to thank Emi Smith, Kezia Endsley, and the team at Cengage
Learning for taking the chance to publish a book on game security.
Thank you to my readers who, through their interest and engagement, have sustained me through the past several years.
Thank you to Cheryl Campbell, my great friend and business partner for over
10 years at IT GlobalSecure and also my tireless editor.
A special thank you to Joseph Price and Marcus Eikenberry, for their contributions to this book.
Thank you to Adam Martin, Pierre Laliberte, Alexandre Major, Marc-André
Hamelin, and the other industry professionals who provided invaluable editorial
input to the book.
Thank you to Richard Davis and Eleanor Lewis for their editorial help.
Thank you to my teachers, mentors, friends, and colleagues at the National
Security Agency (especially my coworkers in R56, V6, and C7) who instilled in me
a passion for the security field and an appreciation for how security “fits” in to the
rest of the world. Specifically, Mark U., Brian S., Tim W., Bill M., Cecil S., Sid G.,
Tanina G., Bill U., Nancy G., Jim A., Ed G., Ed D., Robert W., Bob D., and many others.
Finally, thank you to the game industry and gaming industry professionals who
have welcomed a strange “security guy” into their midst.
Although many people have contributed, the final responsibility for the form,
style, content, and everything else related to this work is ultimately mine.
About the Author
Steven Davis has over 22 years of IT and IT security expertise and has focused on
the security issues of the gaming industry for more than a decade. He advises game
companies, governments, and regulators around the world. Mr. Davis has written
numerous papers and speaks at conferences on all aspects of game security. He is the author of the game security and industry blog, PlayNoEvil.
Mr. Davis has international patents on game security and IT security techniques, most notably the anti-cheating protocols that underlie the SecurePlay anti-cheating library. He has designed several games, including DiceHoldem, and acts as a design consultant.
He is the CEO of IT GlobalSecure, which
develops game security products and provides game security, IT security, and game
design and evaluation services. Mr. Davis’ experience includes security leadership
positions at the U.S. National Security Agency (NSA), CSC, Bell Atlantic (now
Verizon), and SAIC. He has extensive cryptographic and key management design
experience, including work on Nuclear Command and Control systems, the
Electronic Key Management System, and numerous other commercial and
government projects. Mr. Davis has a BA in Mathematics from UC Berkeley and a
Master’s degree in Security Policy Studies from George Washington University.
About the Contributors
Joseph Price is an Associate in the Antitrust and Telecommunications practice groups at Kelley Drye & Warren LLP, with a track record of successfully representing companies in strategic mergers and acquisitions. He is especially adept at working with companies to structure transactions and achieve business goals while navigating competition and antitrust issues.
With particular expertise in counseling companies in regulated industries, Mr. Price has helped clients protect interests threatened by consolidation in the communications industry. He has obtained FTC and DOJ Antitrust Division clearance on numerous transactions, and provides Hart-Scott-Rodino Premerger Notification counseling, preparation, and filing on behalf of many clients, including technology-related entities, equity funds, investment funds, and targets of investments.
Mr. Price represents clients in public and nonpublic DOJ and FTC investigations
and has served as counsel in public and nonpublic FBI, FCC, and State Attorneys
General investigations and enforcement matters, including formal and informal
administrative complaint proceedings.
Mr. Price also provides a full range of legal services for clients that provide technology and broadband services. He works to help clients achieve business goals,
whether they involve access to cutting-edge technologies, growth of market share,
product development, or expansion of distribution channels.
Mr. Price speaks and writes frequently on antitrust, technology, media, telecommunications, and network security subjects, including the Communications
Assistance for Law Enforcement Act (CALEA). His analyses have been quoted in a
variety of publications, including Wired, BoardWatch, and Light Reading.
Previously, Mr. Price served as a law clerk to Judge Edwin H. Stern of the New
Jersey Appellate Division. While earning his J.D. at Catholic University, he served
as Editor-in-Chief of the law journal, CommLaw Conspectus: Journal of Communications
Law and Policy, and received an advanced certificate from the Communications
Law Institute.
Marcus Eikenberry is a serial entrepreneur. He makes his living dealing in intangible goods and services within online video games. His companies sell huge volumes
of game registration codes and game time codes as well as providing anti-fraud
solutions for other sellers within these online gaming markets.
Back in 1990 when the Internet was just for universities and the government,
Mr. Eikenberry was doing computer hardware sales to the public. Fraud was very
rare and not something that needed much attention.
In 1993 when Mosaic hit the public, he attempted to start doing business on the
web. In 1994, he published computer hardware sales sheets and started doing mail
order sales. Because he didn’t like dealing with physical products, he looked for
other products to sell that did not require shipping. In December of 1997, he found
the perfect item to sell: intangible goods within online video games. Marcus is a
pioneer of sales of these intangible video game items and services.
Today, Mr. Eikenberry owns Markee Dragon Inc., which includes several companies:
TrustWho—Anti-fraud services providing transaction processing and payment verification for companies experiencing high fraud.
Markee Dragon—The largest site in the world for the buying, selling, and trading of online game accounts. It is estimated that over $2.5 million worth of accounts and services trade hands in this site’s forums monthly without any charges to the members.
Shattered Crystal—Where new game codes, upgrades, and game time have been sold to several hundred thousand satisfied customers since 2002.
The Protection Game
Game Security Overview
What Is Game Security?
Thinking Game Protection
Lazy, Cheap, or Stupid
Threats, Vulnerabilities, and Risk
Beyond Protect, Detect, React
Asymmetric Warfare
Process, Testing, Tools, and Techniques
Second Grader Security
Part I
Part II
Piracy and Used Games
Overview of Piracy and Used Games
The State of Piracy and Anti-Piracy
Determining the Scope of Piracy
Trusted Brand Security: Nintendo and ADV
Anti-Piracy Innovators: Nine Inch Nails and Disney
Going Forward
Distribution Piracy
Preventing Duplication
Detecting Duplication
Collectables, Feelies, and Other Stuff
Disk as Key
License Keys
Splitting and Key Storage
Busted Pirate: Now What?
DRM, Licensing, Policies, and Region Coding
The Basics of DRM
Why DRM Doesn’t Work
Types of DRM Systems
License Policy
Console Piracy, Used Games, and Pricing
Attacking Consoles
The Used Games Market
Pricing Pirates Out of Business
Server Piracy Trends
Server Piracy
Authenticating the Server
Other Strategies, Tactics, and Thoughts
Measuring Piracy
Fighting Pirate Networks
Multi-Player Gaming
Rich Interaction System
Digital Affiliate System
Playing with Secure Digital Distribution
Anti-Piracy Bill of Rights
Basic Fair Use Principles
Registration Options
Installation Options
Connection Options
The Piracy Tipping Point
Determining the Goal of Anti-Piracy Policies
Part III
Overview of Cheating
Cheating 101
Cheating and the Game Industry
Fair Play
Cheat Codes
The CARRDS Reference Model
The Remote Data Problem
Security, Trust, and Server Architectures
Random Events
Player Collusion
Business Models and Security Problems
App Attacks: State, Data, Asset, and
Code Vulnerabilities and Countermeasures
Memory Editors, Radar, and ESP
Data Obfuscators
Code Hacks and DLL Injection
Blind Security Functions, Code Obfuscators,
and Anti-Tamper Software Design
Save Game Attacks, Wallhacks, and Bobbleheads
Secure Loader and Blind Authentication
Bots and Player Aids
Is It “Help” or Is It Cheating?
CAPTCHAs: Distinguishing Players from Programs
Cheat Detection Systems
Network Attacks: Timing Attacks,
Standbying, Bridging, and Race Conditions
ACID, Dupes, and SQL Attacks
Defensive Proxies
Hacker Proxies
Thinking About Network Time: Act, But Verify
Securing Time
Game Design and Security
Design Exploits
Trivia Games
Word, Number, and Puzzle Games
Algorithmic Games, Physics Flaws, and Predictable Behavior
Speed, Twitch, Timing, and Pixel Precision
Strong and Dominant Strategies and Deep Game Play
Power of People: Rock-Paper-Scissors, Poker, and the World of Psychology
Game Play Patterns: Combat Devolved
Designing for the Medium
Case Study: High-Score Security
Cheating in High-Score Games
Encryption, Digital Signatures, and Hash Functions
Client-Server Option
Randomly Seeded Client
Alternative High-Score Strategies
Puzzles, Skill-Based Games, and Other Deterministic Games
Inappropriate Player Handles
Part IV
Social Subversion: From Griefing to Gold Farming
and Beyond with Game Service Attacks
Overview of Social Subversion
Competition, Tournaments, and Ranking Systems (and Their Abuse)
Understanding Tournaments and Ranking Systems
Lobby Attacks
Syndicates and Bots
Tournament and Ladder Game Play Attacks
Abandonment: The “Game Over” Game
Game Operator Problems
Identity Problems
Retrofitting Games for Tournaments and Skill Games
Griefing and Spam
Communications Griefing and Spam
Game Play Griefing
User-Created Content
Liability and Business Risk
Game Commerce: Virtual Items, Real Money Transactions,
Gold Farming, Escorting, and Power-Leveling
Amusement Park Economics
Alternative Models
On Virtual Items
Gold Farming
Gold Frauders, Online Thieves, and Insiders
Potential Solutions
Escort Services, Subletting, and Virtual Prostitution
To Ban or Not To Ban? Punishing Wayward Players
Crime, Credibility, and Punishment
The Cost of Punishment: Who’s Being Punished?
Possible Punishments and Credible Deterrence
The Real World
Welcome to the Real World
Insider Issues: Code Theft, Data Disclosure, and Fraud
Code Theft and Other Data Disclosures
Office IT Infrastructure
Insider Fraud
Playing Your Own Game
Privileging and Isolation
Partner Problems
Contracting Security?
Security Accountability in Third-Party Development
Security Accountability in Third-Party Licensing
Service Provider and Partner Security Issues
Part V
Community and Fan Sites
Money: Real Transactions, Real Risks
Payment Processing
Inside the Payment Process: PayPal
Integration for Automation
Payment Fraud
More Money: Security, Technical, and Legal Issues
PCI-DSS and Security
Account Security, Virtual Items, and Real Money
Money Laundering and Illegal Payments
Money Laundering: Legal Issues
Identity, Anonymity, and Privacy
The State of Identity and Anonymity
The Registration Problem and Identity Management Systems
Age Verification
Usage Controls and Game Addiction
Account Compromise, Identity Theft, and Privacy
Legal Requirements for Privacy Protection
Protecting Kids from Pedophiles, Stalkers, Cyberbullies,
and Marketeers
Dealing with Cyberbullies, Pedophiles, and Stalkers
Kids’ Communications, Parental Controls, and Monitoring
Children and Identity
Child Pornography
Dancing with Gambling: Skill Games, Contests,
Promotions, and Gambling Again
What Is Gambling and What Is Not
Accidental Casinos
Skill Games
Miscellaneous Security Issues
Legal Considerations
Denial of Service, Disasters, Reliability, Availability,
and Architecture
What Can Go Wrong, Will Go Wrong
Denial of Service
Scalability and Availability
Sample Game Operations Architecture
Disasters and Disaster Recovery
Contingency Planning
Scams and Law Enforcement
Scams in Games
Game Scams
Law Enforcement
Facilities Requirements: Potential Unexpected Laws and Regulations
Operations, Incidents, and Incident Response
Secure Operations
Active Measures
Incidents and Incident Response
Public Relations and the Perception of Security
Virtual Terrorism
Online Tools for the Modern Terrorist
Practical Protection
“We Have Met the Enemy and He Is Us”
The Business of Game Protection
In Closing
Selected Game Security Incidents
This book is intended to infect its readers with an interest and concern for game protection. My goal is not to preach to the “security converted,” but to
convince game designers, developers, programmers, managers, marketeers,
and artists that they should care about the security of their games and give them
confidence that there are ways to secure their games.
Asian hackers hack for money, not glory. They do not share their hacks, but sell
them and do not seem to be as sophisticated as those in the US and elsewhere
who target services in the US.
—Whon Namkoong, CEO, NHN USA, Casual Game Conference 2007
Designers ask, “How can I make my game fun?” Executives ask, “How will this
game make money?” Both questions have a security component: How can someone
undermine my game’s play? How could someone play and not pay? What could
undermine the success and potential of this game?
Game protection is about answering these questions. Ignoring them can ruin
the game and cost its creators their business.
Ideally, this book will also be useful for IT security and game security professionals. There is a lot of game security information scattered about on the Internet
and in various press releases and magazine articles. This book brings this information together in one place. When I started discussing game security, a number of industry professionals told me that the game industry needed its “Pearl Harbor” to
bring security to the fore. Although there hasn’t been a single, spectacular and devastating attack, there is an ongoing guerrilla war that distracts the industry from its
primary goal—to build great games.
As a longtime security professional, I have found game security problems quite challenging.
Even on a bad day, traditional IT security for business is relatively straightforward. There are only a limited number of things that can happen—money changes
hands, maybe with a third party involved via escrow; assets move through a workflow process; and decisions need approval. Very rarely does IT security get deeply
entwined into the unique aspects of a business.
Not so with computer games. Even a simple card game has more complicated
interactions than many business processes—information is concealed and shared,
cards must be dealt fairly, wagers made and resolved—and most games are much,
much more complicated. Customers are often the adversaries: exploiting game mechanics, stealing game assets, and hacking high scores and achievements. Games
can have a wide range of rules, systems, and transactions limited solely by the imagination of the game developers, the skills of its programmers, and the strategies of
its executives. Today, games face longstanding challenges from piracy and cheating
with the new additions of protecting children and privacy. The list goes on and on
and on.
Plus, you still have all of the traditional IT concerns, including money, authentication, encryption, and so forth.
Protecting games is fascinating, fun …and a whole lot of work.
The game industry is in a tremendous cycle of innovation with new games and
game business models emerging. Participation is expanding beyond the industry’s
traditional audience of teenage boys into a market that includes everyone from
kids to mom, dad, and even seniors. The bad guys are following right along.
I began my security career at the National Security Agency working, mostly, on
Nuclear Command and Control systems. Our adversary was the USSR—a highly
motivated, skilled, well-funded, committed foe who would do whatever necessary
to defeat us.
Instead of the KGB’s staff and budget, game hackers and cheaters tap a global
pool of talent who will happily attack a game for free with their only reward being
pride at being the one who breaks the latest title: a serious foe to be taken seriously.
Even worse, criminals have learned that games are a lucrative target. A stolen
World of Warcraft account is worth more than $10, whereas a stolen credit card
number can be had for as little as $1.50. The game industry groups estimate that
piracy costs billions of dollars a year.
Viruses, worms, and phishing scams aren’t just being created for fun. IT security threats are now a major criminal problem. Hackers don’t write viruses just to
infect as many computers as possible, they write highly targeted worms that sniff
game account passwords or loot online poker accounts.
Most security books are structured around technologies or solutions: encryption,
firewalls, digital signatures, and so on. Because the subject of this book is protecting
games, I have organized it around the topics that game developers care about, including piracy, cheating, tournament hacking, gold farming, protecting children,
and protecting identity. Many attacks on games and security methods use common
underlying techniques and so there is some redundancy of exposition. For example,
memory editors are useful for piracy and cheating, whereas challenge/response protocols are useful to protect high scores and remotely authenticate software.
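To make the challenge/response idea concrete, here is a minimal sketch of how a server might verify a submitted high score. This is my own illustration, not the book's (or SecurePlay's) actual protocol; the shared key, message layout, and function names are all assumptions, and, as the text notes about insider attackers, a key embedded in a client can itself be extracted.

```python
import hashlib
import hmac
import os

# Hypothetical shared secret compiled into the game client.
# (An assumption for illustration only; an attacker with access to
# the client binary can often recover such a key.)
SHARED_SECRET = b"illustrative-client-key"

def server_issue_challenge() -> bytes:
    """The server sends a fresh random nonce with each submission
    request, so a recorded response cannot simply be replayed."""
    return os.urandom(16)

def client_respond(challenge: bytes, score: int) -> str:
    """The client binds its score to the server's nonce with an HMAC."""
    message = challenge + score.to_bytes(8, "big")
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def server_verify(challenge: bytes, score: int, response: str) -> bool:
    """The server recomputes the MAC; a tampered score fails the check."""
    expected = hmac.new(SHARED_SECRET, challenge + score.to_bytes(8, "big"),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = server_issue_challenge()
response = client_respond(challenge, 4200)
print(server_verify(challenge, 4200, response))  # True
print(server_verify(challenge, 9999, response))  # False: score was altered
```

The same pattern extends to remotely authenticating software: the server can challenge the client to MAC a server-chosen value together with parts of its own code or state.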
Interestingly, traditional security techniques such as encryption and digital signatures are much less effective for protecting games because most of our attention
is focused on insider attackers who have access to the platform and software and
therefore can often access cryptographic keys or circumvent digital signature functions. Cryptography still has an important part to play in protecting games.
However, because this text is targeted at general readers, I do not spend much
time explaining the details of the cryptographic protocols I discuss. There are plenty
of books on these topics for interested readers.
I try to draw example games from the entirety of the industry—everything
from gambling and skill games to advergames, casual games, subscription MMOs,
free-to-play games, and first person shooters. Occasionally, I will cite examples
from traditional (and not so traditional) board and card games, as it is often easier
to understand the actual game mechanisms when there are no fancy graphics or animations.
There are numerous specific security incidents cited throughout the book,
drawn from fairly credible press or public sources. The actual facts of the incidents
are often unknown, as game companies, like most other businesses, are not in a
hurry to share the details of their security problems. Often I am guessing as to what
the underlying problem is and what a plausible solution could be, based on my
experience. When I have been given official knowledge of game security problems,
I am almost always constrained by a non-disclosure agreement.
The specific security incidents discussed are not an indictment of any individual, developer, or publisher, and certainly not an endorsement of any hacker.
In most cases, there is no way to verify that the descriptions or problems are completely accurate. Rather, the incidents should be considered examples of the types
of problems that games and game developers face.
Many of the countermeasures that I discuss are non-technical. I am a big
believer in trying to find easy ways to avoid problems rather than always solving
problems with a technical fix. If possible, I try to include multiple solutions, since your game and your environment may be far different from my examples. If nothing else, I want to show that protecting games is not purely, or even primarily, a
technical problem.
I do include some pseudo-code. It isn’t C or Java or Python, but simply an
efficient way to describe various algorithms, protocols, and processes.
I discuss attack tools and techniques throughout the book. If possible, I try to keep
the discussion at a generic level and not give sufficient information to implement a
specific attack on any specific game or product. I do mention several widely known
tools for hacking games. This is not an endorsement of these products, confirmation of their functionality, or a recommendation of any kind.
Anyone who considers using such attack tools should do so with great caution.
Criminals delight in including key-loggers, spyware, adware, and an abundance of
other malicious code with installation packages for hacking tools. Even compiling
these tools from source code can be risky—are you really going to examine every
line of code and every included library?
This book is the product of over seven years of tracking and analysis of game security issues, the last three of them covered in my blog, PlayNoEvil. My hope is that I convey some of the excitement that I feel when
a new game problem comes along... and, even better, my satisfaction when I see or
create a solution. The game industry is in the midst of an amazing transformation
and I believe that protecting games will be critical to the success of that transformation.
Steven B. Davis
October 2008
The Protection Game
In this part, you’ll find the following topics:
Chapter 1, “Game Security Overview”
Chapter 2, “Thinking Game Protection”
Game Security Overview
Why should we worry about game security? Who should worry about game
security? What exactly is game security? How much should we worry
about game security?
Welcome to the “security game.”
Everybody plays the security game. You play whether you want to or not. You
are playing the security game when you build or operate a game: Your customers
want to play for free, always win, say what they want, and do what they want to
whomever they want.
And the Internet only makes this worse.
Your players can come from any country. Misrepresent their identity. Upload
and download your games (paid for or not) to an audience of millions or billions.
However, you want to make money (usually), players want other players to
play fairly (whether or not they do so themselves), treat them well, and protect their privacy.
And, of course, there is one kind of help you usually don’t want: the government. Game violence, addiction, privacy, obscenity, pedophiles, gambling, marketing, terrorists, hackers, criminals—all sorts of issues can get you on the government’s radar.
Finally, you have traditional IT and ecommerce security issues including data
theft and information disclosure, disaster recovery, and, when things do go wrong
(and they will), incident response.
I’ve been told security is the game publisher’s problem; I’ve been told it is “a
technical problem;” and I’ve even been told that it is no problem at all or to wait for
the game industry’s “Pearl Harbor.”
Game security is two things: First, it is the dark side of your game. It includes all the
problems that you don’t want to think about, but that could ruin your business and
your game. Second, and more hopefully, good game security may open up new
ways of operating your game or implementing your business that would not be possible otherwise.
This is simple. If game security does not save you money, enhance your reputation,
or make you money, don’t waste your time on it. Security should be held to the
same standard as anything else you are doing. A nice thing about security in
the game industry (and elsewhere) is that it is often quite cheap to address at the beginning of a project. Security can be horribly expensive or just unsolvable late in the
development process or after the game is running. Security and quality go hand in
hand. In fact, many security defects are really quality defects.
Everyone. You will be able to avoid or solve most of your security problems just by
being aware of them and considering the possibility of things going wrong while
you work. Security is not solely the responsibility of the security guy (or gal). Security staff
is there to focus on security just as testers focus on testing, designers focus on design, and marketers focus on marketing. Hopefully, they bring domain expertise to
the subject, but, at the end of the day, everything needs to be balanced (the business
model, the game design, the art, the budget, testing, and security).
In general, good data on security incidents is pretty scarce. People don’t like to
admit their problems unless they have to. Without California’s Data Disclosure1
law, it is unlikely that any of us in the US would hear about the numerous compromises of our personal data. Security problems can lead to real changes in consumer behavior. According to a survey by Unisys of 8,000 individuals, 45 percent
stated that they would change financial institutions because of security problems 2.
The game industry faces unique challenges in this regard because players see security problems that affect both themselves and others. Security problems with most
businesses are only visible to the individuals involved. Even in a publicly traded
company, security problems are buried in overhead expenses.
Game security problems are noticed by everyone.
Even single-player games are social. Players share results and achievements.
Once you move to multi-player games, even something as simple as a shared high
score list creates intense attention to perceived cheating. Thanks to the Internet,
problems with games get broadly distributed very quickly and can cause irreparable harm to the game business. Traditional criminals do their work in the dark.
Attacking games can be a true ego trip spurring game hackers even without any
financial reward: Attackers have the attention of thousands or millions of fellow
players. Of all the articles that I’ve written on my blog, PlayNoEvil (http://www., the long-term, number one page view is an article I wrote about
cheating at Flash games written in early 2007 (currently, the leading contender is an
article on hacking children’s games). Many of the comments I receive are requests
for help cheating in the various games I discuss!
Consumers care about security. In 2005, a survey of 150,000 Chinese online
game players found that “no game hacking and cheating” was the Number 2 issue
for choosing a game to play (at 11.02 percent), just behind graphics and audio content. It also found that “game cheating and hacking destroyed game” was the
Number 1 reason for leaving a game (18.5 percent) with “game security” itself at
Number 9 (5.85 percent)3. In the US, Intel did a small survey of 226 gamers focused
on cheating and found that 71 percent were either extremely concerned (23 percent) or
somewhat concerned (48 percent) 4. Cheating problems are of such concern to
game companies that they regularly delete related discussions from their online
forums. Popular concern with game addiction has led to actions to restrict the
number of hours consumers can play in China and elsewhere 5.
Although consumers care more about cheating and excessive game play, piracy
is the number one concern for many traditional computer game companies. The
Entertainment Software Association (ESA) estimates that piracy costs the U.S.
game industry $3 billion per year 6. Another disturbing fact is that games, particularly online games, are increasingly the targets of criminals. In June 2008, Fortinet
found that 13 percent of Asian malware (malicious programs such as worms and
viruses) targeted games 7. The growth of online gaming has made game account
theft lucrative. Criminals use key-loggers (programs that extract keystrokes from a
computer and send them to a remote location) to steal players’ usernames and
passwords to empty player game accounts and sell the contents to others. The
global nature of the game industry makes legal remedies virtually futile.
Blizzard, the operator of the hugely popular online game World of Warcraft,
has gone so far as to start selling a low-cost authentication token8—a technology
previously reserved for serious consumer applications like bank and stock trading
accounts as well as within corporations and governments.
The problem is getting worse. Hackers are following the money, and there is easy money in attacking games. In the early days of the online gambling industry, hackers attacked an online casino running software from Cryptologic Inc. The company quickly shut the servers down, but during those few hours, everyone playing craps and video slots won every game, costing the company $1.9 million.9
Chapter 1 Game Security Overview
Gaming is no longer a niche; it is a major form of global entertainment. Everyone
is getting in on the act. Ordinary companies are incorporating games and contests
into their marketing campaigns. Deloitte Touche Tohmatsu found in a 2008 survey
of Dutch advergame sites that over 90 percent of the games are vulnerable and over
50 percent are, in fact, attacked.10 Companies are tying cash and prizes to these
games, making them targets and turning what could be a marketing bonanza into
a public relations nightmare.
The challenge of game security is that you, the game creator, have to play by the
rules. You can’t break laws; you have limited time and a perpetually squeezed budget; and you have to keep your customers safe—all while providing an entertaining
experience. Your foes are constrained only by your efforts. They know no boundaries, and may attack you simply because they can.
Let’s see if we can win.
1. California (2002), "SB 1386."
2. W. Eazel (2005), "Majority of World Worried about Internet Fraud," via serendipity/index.php?/archives/144-Bad-Security-Makes-Consumers-Change-Online-Behavior-GoodDemographics-Metrics.html.
3. PlayNoEvil (2006), "Game Security Major Issue for Online Gamers in China."
4. Intel (2006), "Intel Fair Online Gaming Study."
5. China Daily (2007), "China Clamps Down on Teenage Internet Gaming."
6. ESA (2007), "Video Game Industry Applauds Game Pirate's Sentence."
7. Fortinet (2008), "The State of Malware: June 2008 Edition."
8. Blizzard (2008), "Blizzard Authenticator Offers Enhanced Security for World of Warcraft Accounts."
9. B. Warner (2001), "Hacker's Heaven: Online Gambling."
10. Deloitte (2008), "Advergames op Grote Schaal Gehackt" ("Advergames Hacked on a Large Scale"; an English-language version was also published).
Thinking Game Protection
My first impulse when I began this project was to use the word "security" in
the title. After all, we usually talk about IT security: When I started in the
field in the mid-1980s at U.S. National Security Agency (NSA), I worked
in communications security (COMSEC) and computer security (COMPUSEC)
and later information security (INFOSEC). There was also Operations Security
(OPSEC), transmission security (TRANSEC), and a whole bunch of other SECs.
The problem with the word “security” is that it is a bit of a lie. You can never
be completely secure (and every security person will tell you this). Security is an
ideal, like truth, beauty, and art. This linguistic trap was articulated in one of the
few really good books in the field that I have found: Information Protection and
Other Unnatural Acts by Harry Demaio,1 sadly long out of print. Protection captures our endeavor much more accurately than security. We are in the business of
protecting games, because we know that we can’t fully secure them. We face the
same problem everyone else does—protection fails, sometimes with spectacular
consequences. When we think about protection, we are already thinking in economic terms—“how much protection is enough?”—rather than in absolutes.
There is power in imperfection. My goal in this section and throughout this
book is to change how you think, not about game security problems, but about
game protection and how to achieve it. This section does not address the specific issues of piracy, cheating, or any of the other numerous challenges that drive game
developers to distraction. Rather, it gives you a framework you can use to think
about protecting your games in the face of these threats, or at least how to protect
your games “well enough.”
Game developers and publishers often seem a bit fatalistic about security. There
seems to be a tendency to give up and simply accept the problems. Or, conversely,
developers and publishers seek some magic bullet—a single product that will solve
their problems with one purchase, preferably bought at the end of the development process from someone else's budget.
Chapter 2 Thinking Game Protection
This violates my first security principle:
Security Principle 1: Anything that is easy to add is easy to remove.
Many anti-piracy solutions such as digital rights management (DRM), which is
discussed in Chapter 6, repeatedly demonstrate this problem.
The notion of “layers” is used when discussing security, but the term is widely
misunderstood. It is common to talk about a “security layer” or about “security
services.” Tools like encryption, key management, firewalls, and intrusion detection
are put into nice little architectural blocks to be called on when needed and are
called security layers or services.
Nothing could be further from the truth.
Properly speaking, one should talk about “layered security.” When we are in
the world of protection, we understand that all our security tools are far from perfect. The art and engineering of well-protected systems comes from combining
multiple, interlocking security techniques into a powerful whole. Rather than
building a security chain that is only as strong as its weakest link, you need to
create a security mesh of independent elements that is much stronger than any
individual link and will continue to operate even if a single tool fails.
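To make the mesh idea concrete, here is a minimal sketch (in Python, with made-up checks and thresholds) of a score submission validated by several independent tests, so that no single check is a lone point of failure:

```python
# A sketch of "layered security" as a mesh of independent checks.
# Each check is self-contained; one check failing or being bypassed does not
# disable the others, and a submission is rejected if ANY check flags it.
# All names, checks, and thresholds here are illustrative assumptions.

def plausible_score(score, max_per_minute, minutes_played):
    """Reject scores that exceed what legitimate play could produce."""
    return score <= max_per_minute * minutes_played

def session_long_enough(minutes_played, minimum=1.0):
    """A high score from a near-zero-length session is suspicious."""
    return minutes_played >= minimum

def matches_server_replay(score, server_recomputed_score):
    """Independently recompute the score server-side from logged events."""
    return score == server_recomputed_score

def accept_submission(score, minutes_played, server_recomputed_score):
    checks = [
        plausible_score(score, max_per_minute=500, minutes_played=minutes_played),
        session_long_enough(minutes_played),
        matches_server_replay(score, server_recomputed_score),
    ]
    return all(checks)  # every independent check must pass

print(accept_submission(1200, 10.0, 1200))   # legitimate play: True
print(accept_submission(99999, 0.2, 1200))   # implausible and mismatched: False
```

An attacker must defeat every check at once; that is the mesh, as opposed to a single "security layer" whose failure is total.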
Effective protection requires weaving security throughout your application or
business. Some of your protection tools may not even be security techniques, but
simply carefully chosen parts of your business or technical strategy.
In 1990, Clifford Stoll wrote perhaps the first true computer security caper story, The Cuckoo's Egg.2 It was even made into a NOVA special. Dr. Stoll was an astronomer who, unable to get a job doing astronomy, worked at Lawrence Berkeley
National Lab as a system administrator. His boss asked him to investigate a $0.75
discrepancy between an old, custom computer accounting system and the standard
UNIX one. This investigation led to an international spy ring, the FBI, CIA, and all
sorts of other entertaining things. It’s a great book or video.
The most important lesson of the story seems never to have been learned and
is my second security principle:
Security Principle 2: Effective security comes from weaving
together independent systems.
Protecting Games: A Security Handbook for Game Developers and Publishers
The only reason that this case came to light was because someone noticed the
accounting discrepancy between the old accounting system and the standard one.
The hackers knew enough about the standard UNIX operating system to attack the
accounting system and hide their tracks. They did not know about the strange old
Berkeley accounting system. If they had, they would likely have beaten it, as it was
running on the same computer. To show how bad the problem is, many computer
security references use the term “audit trails” routinely. The term “audit trail”
clearly implies all sorts of wonderful independence and security. Unfortunately,
these tools are not audit trails at all, only accounting records. There is only one
system involved that is generating the report, not two independent ones.
When I talk about independence, I am really talking about statistical independence: Entities are independent of each other if events or actions related to one do
not affect the other.3 The challenge, of course, is how to build independence into
your system—without breaking the bank.
Independence is discussed much more extensively in the field of safety engineering by those who are building reliable and highly available systems than it is as
a security principle. Passenger jets have multiple engines so that the plane will be
able to fly when one (and sometimes more than one) engine fails. The Space Shuttle
has five flight computers that vote to avoid undetected failures.
We can actually achieve the goals of independence in multiple ways. As described, we can have multiple entities that independently generate identical results
(we hope). We can also have systems that generate multiple results that are independent of each other—a log of game wins, losses, and wagers compared to a
financial log of deposits, transfers, and payments. One of the areas where games
have an advantage over other entertainment media is that they are naturally highly
transactional. While I may buy and watch a movie, for many games I can post high
scores, play with other people, and otherwise repeatedly interact. These numerous
interactions can be used together to prevent and detect piracy, discourage griefing,
and deter cheating.
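As a rough illustration of that transactional advantage, the following sketch reconciles two independently generated records, in the spirit of comparing a game log of wins and wagers against a financial log of payouts. The log formats and field names are assumptions for illustration only:

```python
# A hedged sketch of two INDEPENDENT records being cross-checked, which is
# what makes a real audit trail rather than a single accounting report.
# Log formats and field names are invented for illustration.

def reconcile(game_log, finance_log):
    """Return player IDs whose game-side net winnings disagree with the
    finance-side payouts recorded for them."""
    game_totals, finance_totals = {}, {}
    for player, amount in game_log:          # e.g. ("alice", +50) for a win
        game_totals[player] = game_totals.get(player, 0) + amount
    for player, amount in finance_log:       # e.g. ("alice", +50) for a payout
        finance_totals[player] = finance_totals.get(player, 0) + amount
    players = set(game_totals) | set(finance_totals)
    return sorted(p for p in players
                  if game_totals.get(p, 0) != finance_totals.get(p, 0))

game = [("alice", 50), ("bob", -20), ("bob", 10)]
payments = [("alice", 50), ("bob", 500)]   # bob's payout has no game-side support
print(reconcile(game, payments))           # ['bob']
```

An attacker who controls one system but does not know about the other, like the hackers in the Cuckoo's Egg story, shows up as exactly this kind of discrepancy.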
I’ve long enjoyed the engineering truism “good, fast, or cheap; choose two.” In
other words, if you want something good and fast, it won’t be cheap and if you want
something fast and cheap, it won’t be good. I think the security field needed something similar, so here’s my stab at it:
Lazy, cheap, or stupid: Any one will get you.
… or some such.
“Trust” gets waved around a lot in the world of IT security (and, recently, in discussions about fighting piracy). When I started out in the security field, a big focus was
on trusted operating systems and since then we’ve moved on to trusted platform
modules. The whole idea of these products is that by building a whole lot of “security” (whatever that means), we can “trust” the “trusted” thingy and be secure. The
goal is noble, but rather naïve.
First of all, there is no objective definition of security. The security requirements for
another business can be very different from yours even when you both are using the
exact same applications and platforms. The game industry is not the same as the military, which is not the same as a dating service or an online auction service.
Second, most real security problems occur at the application and business operations levels, completely independent of the underlying platform. Spell checkers may
be able to determine whether a word is spelled correctly, but they can’t tell if you’ve
chosen the wrong word (a problem that I’ve found often while editing this book). If
you have incorrectly defined or configured your ordering process, an unauthorized individual may be able to furnish his house at your expense. A trusted platform will do
nothing to solve malicious use of a legitimate application.
Third, the interaction of arbitrary applications on top of a trusted platform can no
longer be considered trustworthy. As I noted, when my career began in the 1980s,
trusted operating systems were all the rage. What we found was that once we started
adding applications to these platforms, our security analysts were able to undermine
the system by attacking the applications directly. Currently, the focus is on hardening
standard operating systems—basically getting rid of the gratuitous "stuff" that can
cause some of the worst problems. This includes removing unnecessary applications
such as editors and compilers as well as unneeded network services, analysis tools,
and many of the other products that are provided as part of a standard operating system distribution.
Fourth, what if the trusted platform fails? It happens. Even if you wanted to, could
you risk your business on the promises of a third party? Once upon a time I worked
on a government project with a very clever anti-tamper piece of hardware. Our security team had to plan for the scenario where we would lose one of these devices
(which we had spent a lot of money making tamper-resistant). Our final assessment
was that we had to operate the system as if we had no tamper protection. That is, if we ever lost control of one of our anti-tamper boxes, we still had to assume it had been compromised and implement our procedures to recover our security status—even if it was returned "intact." Trust is not enough.
All of this is not to say that using trusted systems is not good practice. However,
it’s best to use these products as tools and part of an overall security system plan, not
as the hard kernel of security.
To an outsider, security often looks like black magic. The field is full of magic
words: rootkits, worms, viruses, hackers, penetration tests, amazing sagas, embarrassing failures, and spectacular capers. Scratch the surface, however, and you’ll
find that almost all security problems arise from one or more basic human failings:
laziness, being cheap, or stupidity. These are security’s three deadly sins, so let’s
look at each in more detail.
Laziness
There is depth and even some real complexity as you learn the art of security, but
the reason many security experts can appear to work miracles and divine problems
after taking only a cursory look at an organization, system, or project comes from
knowing the following:
Security is not a primary concern of most people.
When you don’t care about something, you tend to take shortcuts and cut corners.
People are wonderfully consistent, especially in how they cut corners.
Of course, things aren’t quite this simple. You need to have a good deal of
knowledge of development practices, programming, system design, project management, business planning, and "human nature" to pull off these "miraculous" insights. Once someone describes a situation to me, the first thing I think about is "what would be the easiest way to build this system?" and, because the easiest way to build something is rarely the right way, "what is the easiest way to exploit the poorly built system?"
Habits are wonderful for predicting future disasters. In the game industry, the
biggest cheating problems come from the fact that most developers start by programming a single-player game and then add multi-player features. Even though
everyone knows and complains about piracy, they don’t actually seem to start planning a strategy against it until the game is about to launch.
The game industry is not alone. I’ve been brought in on classified government
projects after years of development and many millions of dollars spent, where
security only came up because someone noticed that the system needed to be
accredited as secure before it was allowed to operate.
Being Cheap
Security is never given a decent budget. This is a legitimate problem for planners.
Security rarely shows up as a positive revenue line item. It is always portrayed as a
cost with nebulous benefits at best. Interestingly, one of the things I like best about
the game industry is that its security problems are so closely tied to its core business.
In many other industries, it is very hard to argue whether one firewall is better than
another or if one should invest in an intrusion detection system or not. This is not
true for the games industry.
Piracy costs sales. As a security analyst, I can make estimates of those costs and
the benefits of my proposed anti-piracy strategy and present a reasonable business
case to management and ask for a budget. Cheating has not been seen to be a major
problem for traditional, single-player games that are sold shrink-wrapped at a
retailer. However, as we move towards multi-player and online games and the
industry transforms from a product-sales business to a service business, cheating becomes much more important. Cheating and game integrity have always been critically
important for skill games, contests, and the gambling side of the industry. Similarly,
payment processing, identity, protecting children, and the other topics that I will
discuss are not theoretical problems. They can cost your business money or, even
worse, give you the opportunity to deal with irate customers or governments.
Stupidity (Ignorance Is Bliss, for a While)
Developers in every industry are rightfully proud of their accomplishments and
eager to hurry their products to market. After a long slog of development and hopefully some testing, most developers are rather confident about their product’s
ability to work well. In physics, Work equals Force times Distance. If you don’t
go anywhere, you haven’t done any Work. The remorseless calculus of security
doesn’t care how hard you worked or who you are. Hackers just care about what
you have actually done. When I made my first security presentation to the game
industry in 2001, developers shared horror stories of players hacking Flash games
just to get high scores on their individual sites. Eight years later, players are still
hacking Flash games to get high scores to win prizes and lots of cash… and causing
some large companies serious grief in the process.
Gold farming isn’t a new problem and people have been creating bots since the
early text MUDs. However, pretty much every modern MMO has continued to be
plagued by these attacks. Now, instead of a couple of guys running a game on a university server, gold farmers are earning millions, if not billions of dollars, and chewing up entire customer support teams. Major game publishers are spending untold
dollars suing bot builder companies knowing full well that another will spring up,
probably in a jurisdiction beyond the effective reach of their lawyers.
All of the security issues discussed in this book are fairly well known to professionals in the industry as well as interested consumers and even more interested hackers and criminals. The best way to avoid security problems is to simply acknowledge
them at the start of a project and address them early in the development process. Or,
at the very least, ignore them consciously. It is simply stupid to do otherwise.
The good news is that solving many of your security problems may be as simple
as adding “remember security” to your project’s PowerPoint templates.
The game industry knows who its attackers are: Pirates steal games, cheaters
win unfairly, griefers and gold farmers are just a pain for everyone. The IT security
literature talks a lot about vulnerabilities, threats, and risks. The language of the
industry and its processes in this regard are a bit confusing. The real question is:
What, if anything, can a security analyst tell you that will cause you
to change how you operate or spend money to fix something?
While people may talk and talk about security requirements, in practice these requirements are undermined whenever money and effort are required. This is frustrating for security analysts, as they spend a lot of time hunting for vulnerabilities, writing them up, and presenting them to management only to be told "we'll accept the risk."
The problem is that management might not be right to accept the risk, yet the basis of their decision often seems to have little to do with the described vulnerability and much to do with rhetoric.
Risk is the nemesis of protection. Risk is where people get into the most trouble. It is basically a qualitative assessment of how likely it is that someone will do something
(bad) and the probability that he or she will succeed. Risk also captures the consequences, usually in financial terms, of an incident. On paper, this doesn’t sound like
a bad concept at all. The problem is with its use.
Risk assessments, vulnerability assessments, and threat assessments all seem to boil down to a long questionnaire and an Excel spreadsheet that reduce risk to a single number. Commercial products will often generate some "risk score" number, which is then used to determine whether you are secure enough.
There are three important problems with this approach. First, the weighting
schemes that are used to compare one attack or vulnerability to another are often
hidden and reflect the biases of the tool maker (or consultant) rather than the priorities of the client. Second, some risks are not commensurable, or rather they
shouldn’t be: It makes little or no sense to combine security issues related to identity theft with those for denial of service. Third, the tools rarely seem to support
business decisions. Instead of giving a final numeric score, these tools would be
more useful for determining relative residual risk between programmatic choices:
Should you choose Option A with Budget B, which yields Risk Profile C or choose
Option D with Budget E, which yields Risk Profile F?
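A sketch of what such option-oriented tooling might look like, with the weighting scheme out in the open rather than hidden inside the tool. All weights, threat names, mitigation fractions, and budget figures here are invented for illustration:

```python
# A sketch of risk tooling used the way the text suggests: comparing the
# RELATIVE residual risk of programmatic options under explicit,
# client-chosen weights. Every number here is an illustrative assumption.

THREAT_WEIGHTS = {              # the client's priorities, stated openly
    "account_theft": 5,
    "cheating": 3,
    "denial_of_service": 2,
}

def residual_risk(mitigation):
    """mitigation maps threat -> fraction of that threat eliminated (0..1);
    residual risk is the weighted sum of what remains."""
    return sum(weight * (1.0 - mitigation.get(threat, 0.0))
               for threat, weight in THREAT_WEIGHTS.items())

option_a = {"budget": 100_000,
            "mitigation": {"account_theft": 0.9, "cheating": 0.5}}
option_d = {"budget": 40_000,
            "mitigation": {"cheating": 0.8, "denial_of_service": 0.5}}

for name, opt in [("A", option_a), ("D", option_d)]:
    print(name, opt["budget"], round(residual_risk(opt["mitigation"]), 2))
```

Because the weights are explicit, the client can argue with them directly, and risks that should not be combined (identity theft versus denial of service, say) can simply be scored in separate tables.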
Making assumptions about your adversary is quite dangerous. People tend to
“mirror image” their foes. They assume that the enemy has the same propensity for
risk and values as they do.
The game industry is particularly vulnerable to this problem:
Game pirates put a radically different value on games than a publisher does. In
practice, they face little to no risk for the actual act of breaking a game’s security
and they seem to have the time and patience to effectively defeat many security measures.
Gold farming is, allegedly, a billion dollar industry employing tens of thousands of individuals worldwide. Aggressively exploiting an MMO’s economy is
big business. For the game operator, controlling gold farming is often a low
priority. It falls somewhere between customer support and bug hunting. The
operator’s main priority is to keep the servers running and the players playing
and paying.
Protect, detect, react. It has become something of a mantra in the traditional IT
security community. First, you protect your information from attack. If attackers succeed anyway, you detect the attack and react appropriately. This iron triangle of IT security probably arose out of a military perspective: Attack, defend, and
counterattack. Protect, detect, react is simple, wonderful, and far from complete,
even in a military context.
There are at least seven additional basic security strategies:
Recover—Reconstitute the system to a secure state (or secure as possible).
Interestingly, this strategy is critical for military systems as well. For example,
if an encryption key is compromised, you create and distribute a new key and
remove the old one. If security equipment is lost, it is simply locked out of the
network. It is important to note that this does not reestablish the security of any
data that has already been compromised. In a military setting, the compromised data may no longer have any value. The message “Go to War” is not a
secret for all that long. Unfortunately for game developers, recovering a digital rights management (DRM) system does not restore the security of the already-lost game; it restores security only for future games.
Avoid—There are some fights that are not worth fighting, battles not worth
winning, and problems that are “too hard.” For games, often we can change the
game’s business model and design as well as its code as a way to thwart attackers. Online games that use the “free-to-play” business model where everything
is purchased from the game operator are essentially immune to gold farming.
Botting, the use of automated programs that play on a player’s behalf, is a hard
problem, in many cases. Game developers might consider changing the game
design to make botting impractical or change the game rules to make the
benefits of botting negligible. An Indian game operator used this tactic for an
MMO that he had licensed, which was known to have problems with bots.4 The
game operator added direct item and currency sales to the subscription game,
thereby reducing the benefits of botting.
Ignore—Some problems are just not that bad. It is certainly fair to choose to
ignore them, especially if the cost of addressing the problem is high. Traditional computer game developers often ignore cheating problems with their
multi-player games, as the entire multi-player feature is often considered just
another option added to the core, single-player experience.
Delegate—Sometimes you can transfer a problem onto someone else. If you
are able to do this, why not let someone else deal with the problem? The delegation strategy can be particularly useful to transfer liability. There are certain
third-party companies that are legally authorized to accept liability for protecting children’s identity information and limiting marketing under COPPA.
This may be a more effective, and less expensive, option than complying with
COPPA internally. I would argue that many in the entertainment industry are
trying to delegate their piracy problem to the government. Department of
Justice lawyers and FBI and Customs agents are almost free for the industry;
they cost just a bit of lobbying.
Insure—If you can’t eliminate a problem, why not buy insurance? It works for
car accidents, after all. Unfortunately, this option is rarely available for IT security or game security problems today. It is probably the great unmet security
opportunity. Watch for companies who offer security services to see if they also
offer liability protection. Many work like your home security system; their
insurance basically consists of a refund on your security system equipment
purchase (at best) or a refund of a month’s fees. In the IT security area I have
seen identity theft insurance that falls in this category.
Reward—Why focus on “sticks” when you can offer “carrots” to those who
might otherwise harm your product or business? The key, of course, is that
the reward has to appear significant to your customers while being very cost-effective to provide. The "good driver discount" for auto insurance and
airlines’ frequent flier programs are familiar examples.
Deter—The threat of punishment works as long as the possibility of being
caught is high and the punishment is substantial. Law enforcement, peacetime
armies, and nuclear war all rely on deterrence. Compelling good behavior is
often much more expensive than relying on deterrence. Also, systems that attempt to compel goodness often are less effective at detecting their own failures.
There is a bit more that can be added to the original mantra:
Protect—As noted while discussing “Recover,” you actually need to know what
you are protecting. I have seen many people confuse using encryption with
“security” and hash functions with “integrity.” Game developers have relied on
a browser’s encryption function to protect high scores from manipulation.
Unfortunately, high-score cheaters are the actual people playing the game and
thus have access to the score before it is encrypted. Similarly, several major
commercial games have used hash functions to “sign” data, not realizing that
the data hash can simply be replaced with one computed over the hacker's preferred game data.
Detect—Detecting problems can be tricky. Game piracy without network connections is essentially impossible to quantify, as there is no direct feedback. If
the number of validated, registered licenses is less than the royalties for a game
developer, it could be an interesting question whether the game has a piracy
problem or an issue with the publisher withholding royalties.
React—Ban, ban, ban. Banning pretty much seems the only strategy used to
punish gaming wrongdoers, whether they are pirates or cheaters or whatever.
For a game company, banning is pretty extreme and tends to deprive the company of revenue, so it is a fair question as to whether banning always makes
good business sense.
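The hash-replacement weakness noted under "Protect" is easy to demonstrate: a plain hash over data offers no integrity against someone who controls the data, while a keyed MAC (HMAC) with a secret the client never sees cannot be forged. A minimal sketch, with invented data and key:

```python
import hashlib
import hmac

data = b"score=150"
tampered = b"score=999999"

# Plain hash "signature": anyone who can change the data can recompute it.
stored = hashlib.sha256(data).hexdigest()
forged = hashlib.sha256(tampered).hexdigest()          # attacker replaces both
print(forged == hashlib.sha256(tampered).hexdigest())  # True: forgery "verifies"

# HMAC with a server-held secret: the client cannot produce a valid tag.
SECRET = b"server-side-key"  # illustrative; never ship this inside the client
tag = hmac.new(SECRET, data, hashlib.sha256).hexdigest()
bad = hmac.new(b"guess", tampered, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, bad))  # False: forgery fails without the key
```

The caveat, echoing the point above, is that an HMAC helps only if the key stays server-side; if the key ships inside the game client, the cheater, who is the player, can recompute tags just as easily as plain hashes.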
A complete security system is built by creatively combining these strategies to
form a coherent whole. For all of the game industry’s complaining about piracy,
particularly on the PC platform, there doesn’t seem to be much thought put into
managing piracy during the game development and publication process.
Security is about managing uncertainty. You never know for sure when and where
you are going to be attacked, but you are pretty sure that it is going to happen
sometime. Also, security is a support function to your real goal of providing a great
game and running a successful business. It is not the end, but a means. Good
protection has got to be lean.
Protection is a battle between you and your foe. Both of you have time and
resources to allocate to the fight. The only advantage you have as the defender is
you get to set some of the rules and choose the battlefield.
As noted previously, one of the biggest problems that you face is asymmetric
values. Your foe may be far more interested in attacking your game than you are in
defending it. Also, you are obliged to defend the entire game and succeed everywhere,
while your adversary has to find only one hole in your armor, and then you are lost.
Sadly, a clear example of this asymmetry is the state of airport security in the US
since 9/11. We are spending billions of dollars to try to defend every airplane
against all potential hijackers. And, as numerous incidents have shown, there are
always weaknesses in the system. A terrorist individual or group has to find only
one vulnerability that he or she can successfully exploit to cause serious trouble. Or,
these attackers can attack somewhere else where we are not defending at all.
Fortunately, games are much more constrained systems than national defense.
However, both face highly motivated adversaries. Game developers and
publishers have much more control than Homeland Security does over the systems
that hackers want to attack.
Security Principle 3: Make your adversary work a lot harder than you.
Defensive methods should be chosen for their low cost and coverage of a wide
range of threats. For game cheating, the most common strategy is to include some
sort of “cheat detection” tool with the game. The major anti-cheating products in
the industry are Blizzard’s Warden, Valve’s VAC, Even Balance’s PunkBuster,
nProtect’s GameGuard, and AhnLab’s HackShield. They are all signature-based
systems, similar to anti-virus products that use signatures of the individual versions
of malicious software to detect attacks. The cost of this system is that it must be
constantly updated5 to keep up with the latest cheats and, just as the security
industry has found with viruses, hackers are very good at attacking anti-virus tools
directly as well as hiding themselves from the anti-virus tools and altering their malicious software's signatures.6
While the work of creating and distributing an individual signature is not significant, a fair amount of effort is required to find hacks, understand them, and build
a reasonably stable signature. It is worth noting that this strategy for detecting hacks
depends on cheats being widely used. If there are only a couple of cheaters using a
specific technique, the security surveillance system will be unlikely to detect the
attack. This is becoming increasingly true for traditional malware, which is now
targeted at specific companies or individuals as opposed to the world as a whole.7
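Signature-based detection of the kind described above can be sketched in a few lines; the signature database and byte patterns here are entirely made up:

```python
# A hedged sketch of signature-based cheat detection, in the style the text
# attributes to commercial anti-cheat tools: scan a blob of bytes for known
# patterns from a signature database. All signatures are invented.

SIGNATURES = {
    "speedhack_v1": b"\x90\x90SPEEDHAX",
    "maphack_v3": b"MAPHACK\x03",
}

def scan(blob: bytes):
    """Return the names of any known cheat signatures found in the blob."""
    return [name for name, sig in SIGNATURES.items() if sig in blob]

clean = b"ordinary game data" * 4
infected = b"padding" + b"\x90\x90SPEEDHAX" + b"more padding"
print(scan(clean))     # []
print(scan(infected))  # ['speedhack_v1']
```

The sketch also shows the maintenance burden the text describes: every new cheat version needs a new database entry, and trivially altering a few bytes of a private, limited-distribution cheat defeats the match entirely.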
In MMOs, professional gold farmers are motivated to develop and use internal
or limited distribution tools instead of mass-market products. This is also true for
the online casino industry: If you have a real, effective cheat that makes you a lot of
money, you are not going to sell it to anyone. Once cheating or hacking is a business and not just vandalism, there is no reason to broadcast attacks.
One of the real reasons that encryption is such a popular security tool is that it
is cheap and easy to implement—whether it is effective or not is a different matter.
The most effective security strategy, for games (and anyone else), is to change the
system so that there is nothing that can be exploited. You are probably lost if you
are constantly hunting for hackers.
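One hedged sketch of "nothing to exploit": make the server authoritative, so a client-reported score is never trusted and there is simply no client-side total worth tampering with. The game rule, event names, and function names are illustrative assumptions:

```python
# A sketch of removing the exploitable surface rather than hunting hackers:
# the server recomputes the score from events it observed itself, so a
# tampered client claim changes nothing. All names are illustrative.

POINTS_PER_HIT = 10  # assumed game rule, held server-side

def server_score(events):
    """Recompute the score from server-observed events only."""
    return sum(POINTS_PER_HIT for e in events if e == "hit")

def handle_submission(client_claimed_score, server_events):
    authoritative = server_score(server_events)
    # The client's claim is advisory at most; it cannot change the result.
    return authoritative

events = ["hit", "miss", "hit", "hit"]
print(handle_submission(client_claimed_score=999999, server_events=events))  # 30
```

There is no cheat signature to maintain here, because the quantity the cheater wants to alter never lived on the client in the first place.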
Although “thinking right” about security up front will get you a long way, there are
useful tools and tactics to complete the job. Penetration testing gets a lot of visibility as a key security strategy. Penetration testers attempt to break into a system
from the outside, just like an attacker. When they succeed, it is impressive and
compelling (if a bit late in the development process). There are three weaknesses to
penetration testing:
Many of the “revealed” security weaknesses are generic operating system and
common application vulnerabilities. This is not to minimize their impact, but
there are cheaper and easier ways to find these problems earlier in the development process.
Penetration testing is often very time-constrained. As such, penetration testers
do not have time to become familiar with the target and so go for the easy,
generic attacks. The most damaging flaws are often in the target’s unique
business application (or, in this case, game) environment.
Finally, you cannot test either security or quality into a system. They need to be
built in from the beginning.
My preferred security analysis and testing strategy is to run it in parallel with development, from concept through implementation and deployment. This has
substantial advantages. Design errors can be addressed when they are still just
PowerPoint slides and Word documents. Because the security analysis team has full
access to the design and code, it is much better able to focus on proactively finding
real problems at the source, before they get out of control and become expensive to correct.
Again, security resources are very scarce compared to those of the attackers.
Protecting Games: A Security Handbook for Game Developers and Publishers
Although a hacker may need to reverse-engineer your system to attack it, he
may also be a former team member or have “dumpster dived” to collect your
documentation or even downloaded the source code from your server. There is no
benefit to forcing your security team to emulate this phase of the attack. If your
only defense is that the hacker does not know your system design, you are depending on “security by obscurity” as your sole security barrier, and you are in deep trouble.
Good security testing tools should be a standard part of the toolbox of every
developer and system and network administrator. Similarly, there are numerous
software quality and security testing tools that can help avoid memory leaks and
other common coding errors.
One of the real challenges for security in games and other applications is that
you need to build protection in during the development process, but its benefits
do not appear until the product or service is operational. This causes a number of
annoying, but real, problems.
The biggest problem is that most organizations separate their development and
operations budgets. Features like protection against attack that are hard to measure
during development are easy to drop: They have no consequence until the development team has been paid and moved on to another project. Another issue is that
many security failures are largely silent. When your house is robbed or car is stolen,
you tend to notice it rather quickly. Code theft, identity theft, and unsecured
servers may never be noticed. It is important to build “security instrumentation”
into your system to help make both known and potential threats visible. It may be
possible at an early stage to at least detect problems that you may not be willing or
able to prevent at that time. This will give your live team and operations staff the
tools they need to identify and fix the problem later.
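As a minimal sketch of what such "security instrumentation" might look like, here is a hypothetical event counter that makes suspicious activity visible even when it cannot yet be prevented. The event names and threshold are assumptions for illustration, not taken from any real game.

```python
import logging
from collections import Counter

class SecurityMonitor:
    """Minimal security instrumentation sketch: count suspicious events
    and raise an alert when any single event type crosses a threshold."""

    def __init__(self, alert_threshold: int = 100):
        self.counts = Counter()
        self.alert_threshold = alert_threshold

    def record(self, event: str) -> bool:
        """Record one occurrence of an event; return True the first time
        that event crosses the alert threshold."""
        self.counts[event] += 1
        if self.counts[event] == self.alert_threshold:
            logging.warning("security event %r reached %d occurrences",
                            event, self.alert_threshold)
            return True
        return False
```

Game code would call something like `monitor.record("failed_login")` at the relevant points; the live team then reviews alerts instead of raw logs, and can decide later whether a spike is worth preventing outright.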
There has been a rise in active measures to fight hackers, pirates, and cheaters,
both within the game industry and outside it. Services like MediaDefender’s, which
actively seed peer-to-peer networks to disrupt and locate music pirates, can sometimes create more problems than they are worth. The StarForce anti-piracy saga8
and the Sony BMG Rootkit case have become cautionary examples and created
objections to almost any form of anti-piracy technology. Even Blizzard’s Warden
anti-cheating tool has raised privacy concerns (see Chapter 34).
Some of these methods can be quite effective. However, if you are going to
implement them, you should consider possible consequences. Some of these tools
can cause problems directly, as when MediaDefender targeted a legitimate P2P
distribution service9, and some can cause indirect problems. For example, Sony
BMG’s Rootkit was used to attack World of Warcraft. The decision to use these
active strategies should be made at a senior level. After all, at some point, you may
have to defend your active measures strategy to the public in The Washington Post.
Many people confuse complexity with security. One of the phrases of disdain
regularly used by those in the security field is “security by obscurity”: the notion
that if you make something sufficiently complicated, surely it will be too difficult
for an adversary to unravel.
This is rarely the case.
Usually, the result is that the system is so complicated that it cannot be maintained: Your own team does not understand the system and there are often obscure
parts of the design that make it more vulnerable to attack. Or, even more likely,
your maintenance staff will come along and “clean up” the design so that they can
support it—and completely undermine your “obscurity” efforts.
Probably the most important design principle that I learned at NSA was to
focus on clean, clear design. Good system engineering and good security engineering go hand-in-hand. Ugly, complicated designs are rarely secure. Any security
weaknesses in a well-architected system will stand out like a sore thumb, and
usually be reasonably easy to fix. This leads to my next security principle:
Security Principle 4: If it’s not simple, it’s not secure.
Or, if you can’t explain it to your manager (or a second grader) on one
PowerPoint slide, it probably isn’t secure. There are computer scientists and mathematicians who look for ways to “prove” security. They use complicated symbolic
languages and systems to create security theorems and then prove them.
Fascinating stuff. These techniques are great for PhD candidates and academics,
but, I’m fairly confident, totally irrelevant in the real world.
Why am I so skeptical?
Let me briefly don my tattered, ancient mathematical credentials…
If you’ve heard of Gödel’s Theorem, made familiar outside of the circles of
academia by Douglas Hofstadter’s widely owned but rarely completely read Gödel,
Escher, Bach: an Eternal Golden Braid10, you may recall that Gödel proved the
Incompleteness Theorem11. This important work of mathematical logic states, in
short, that any sufficiently complicated system contains statements that can be
neither proved nor disproved within it. Gödel also proved results on undecidability (whether you
can even decide if a statement is true or not), and Alan Turing proved that some problems cannot be computed at all.
The bottom line of these three theorems is that anything that is even slightly
complicated cannot be completely understood and therefore, you cannot really
know that it is secure.
In practical terms this means that the only way to make something really secure
is to make it “trivially secure”: The hard part of good security design is to make the
system simple.
This ends my lofty discussions about security and protection; let’s get to work
on protecting your games.
1. H. Demaio (1992), Information Protection and Other Unnatural Acts: Every Manager’s Guide to Keeping
Vital Computer Data Safe and Sound, Amacom Books, ISBN 0-81445-044-X
2. Cliff Stoll (1990), The Cuckoo’s Egg: Tracking a Spy Through the Maze of Computer Espionage,
PocketBooks, ISBN 0-7434-1146-3
3. Wikipedia, “Statistical Independence,”
4. D. Sengupta (2007), “It’s Virtual World Out There, All for Hard Moolah,”
5. A. Modine (2007), “World of Warcraft Spykit Gets Encrypted,”
6. R. Lemos (2005), “World of Warcraft Hackers Using Sony BMG Rootkit,”
7. S. Gaudin (2005), “Targeted Virus Attacks Replace Sweeping Assaults,”
8. A. Varney (2006), “StarForce Must Die,”
9. R. Paul (2008), “Revision3 CEO: Blackout Caused by MediaDefender Attack,”
10. D. Hofstadter (1999), Gödel, Escher, Bach: an Eternal Golden Braid, Basic Books,
ISBN 978-046502-656-2
11. Wikipedia (2008), “Gödel’s Incompleteness Theorems,”
Piracy and Used Games
In this part, you’ll find the following topics:
Chapter 3, “Overview of Piracy and Used Games”
Chapter 4, “The State of Piracy and Anti-Piracy”
Chapter 5, “Distribution Piracy”
Chapter 6, “DRM, Licensing, Policies, and Region Coding”
Chapter 7, “Console Piracy, Used Games, and Pricing”
Chapter 8, “Server Piracy”
Chapter 9, “Other Strategies, Tactics, and Thoughts”
Chapter 10, “Anti-Piracy Bill of Rights”
Chapter 11, “The Piracy Tipping Point”
Overview of Piracy
and Used Games
Broadband communications and the Internet have transformed piracy from a
garage sale nuisance and shady street vendors selling games from the back of
a van into a pervasive problem. Virtually any piece of digital media is only a quick
Google search and a click away online. Of course, the real questions for any business
are how much money is this costing and what can one do about it?
Piracy is theft. Some may quibble that “software piracy” is merely copyright
infringement; however, the bottom line is that when people don’t pay for a commercial good or service, they are stealing (at least from the seller’s perspective).
Open source advocates claim “software wants to be free.” Software does not want
to be free. Freeloaders want free software. But it is also worth looking at other
industries where sales revenue is lost, not just to unauthorized copies, but also to
used goods, where creators earn no revenue from the secondary sales. Movies,
books, and music have always had some market for used products, but the growth
of console games has created a massive used game retail market (PC games are
rarely sold used).
In the next several chapters, I discuss the various aspects of the piracy problem
and used games—the traditional techniques that have been used to fight piracy, and
some alternative strategies. Additionally, I address legitimate consumer concerns
about anti-piracy measures.
The State of Piracy
and Anti-Piracy
The first questions that should be asked about piracy are “how bad is it?” and
“whom does it affect?” There are two completely different ways to measure
piracy. The first is based on the estimated number of pirated copies of a
game or other work and what those items would cost at retail. This seems to be
the preferred model used by the U.S. Business Software Alliance (BSA) and
Entertainment Software Association (ESA). These numbers are quite suspect on a
number of counts. The second approach is to measure how many actual sales are
lost. After all, many people will use something if it is available for free, but have no
interest in buying the item.
The nature of digital piracy makes it quite difficult to estimate the size of the
problem. Downloading files and duplicating disks do not leave easy trails for forensic investigators. An article about casual game piracy claimed a piracy rate of
around 92 percent based on attempted connections to the company’s server1. At
least this number came from an actual measurement. A report by China’s government, whose citizens are often a target of anti-piracy rhetoric, noted that based on
BSA’s estimates, one quarter of the country’s Gross Domestic Product (GDP)
would, or rather should, have been spent on software in 20052. The BSA is not alone
in having difficulty with numbers. The Royal Canadian Mounted Police seemingly
made up its estimate of 10 to 30 billion Canadian dollars in piracy—a number that
apparently went from a bullet on a PowerPoint slide into national policy3. Australia
has moved to challenge a report by copyright holders on the damage from piracy,
stating that the numbers needed to be substantiated, especially as they were being
used to justify increasingly harsh civil and criminal penalties4.
Even if we do accept these high piracy rates, the real question for business is how
many of those customers would actually have purchased the item. After all, the
marginal cost of producing digital items for the publisher is very low, so the sunk
material and production costs are often not a major issue. Typically, therefore,
there is no real cost to the publisher from these pirates. This is not always the case,
as the makers of SiN Episode 1, a downloaded game distributed via Valve’s Steam service,
found when they were overwhelmed with customer support requests from irate “customers” who hadn’t actually purchased the game5! Companies that operate a free
online game play service, like Blizzard’s Battle.Net or ArenaNet’s Guild Wars, need
to be especially concerned about piracy, because the way they subsidize the online
service is through product sales. Several years ago, it was not unusual for Blizzard
to announce bans of hundreds of thousands of CD keys, many of them because the
game copies were pirated6. Specifically, the players were using counterfeit keys to
register the games with the Battle.Net service.
One of the powerful advantages of a service like Battle.Net is that it provides information on numbers of pirates (or, at least, unauthorized registration attempts)
as well as numbers of legitimate players. It also provides an incentive for players to
convert from an illegal copy to a legal one so that they can participate in the online
service. Finally, the service provides a means to compare actual sales (and royalties)
with numbers of registered players to estimate the success of counterfeit CD key
piracy. It might even be able to measure how many pirates purchase legitimate
copies once caught.
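The kind of measurement such a service enables can be sketched with simple arithmetic. The figures below are invented for illustration; they are not Battle.Net data.

```python
def estimated_piracy_rate(rejected_keys: int, legitimate_keys: int) -> float:
    """Estimate the share of registration attempts that used counterfeit keys.

    This is only a lower bound on piracy: pirates who never try to register
    with the online service are invisible to it.
    """
    total = rejected_keys + legitimate_keys
    return rejected_keys / total if total else 0.0

# Invented example: 300,000 rejected counterfeit keys against 700,000
# legitimate registrations suggests at least a 30 percent piracy rate
# among players who tried to use the service.
rate = estimated_piracy_rate(300_000, 700_000)
```

Unlike the retail-value estimates criticized earlier in this chapter, this number is grounded in an actual measurement, even if it only counts pirates who wanted to play online.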
In general, there are few good methods to determine how many pirate game
users would actually buy a game. However, the stereotype of gamers (and game pirates) as young and poor is no longer true. Game demographics have shifted to
older players who are generally less likely to pirate games. This would seem to imply
that the sales lost to pirates aren’t significant: older players are likely to buy legitimate copies, and younger players would never have purchased the game at all. Brad
Wardell of Stardock has been quoted7 as saying that game developers need to focus
on the actual population of paying customers:
“When you make a game for a target market, you have to look at how many
people will actually buy your game combined with how much it will cost to
make a game for that target market. What good is a large number of users if
they’re not going to buy your game? And what good is a market where the
minimal commitment to make a game for it is $10 million if the target audience isn’t likely to pay for the game?”
Chapter 4 The State of Piracy and Anti-Piracy
“If the target demographic for your game is full of pirates who won’t buy your
game, then why support them? That’s one of the things I have a hard time
understanding. It’s irrelevant how many people will play your game (if you’re
in the business of selling games that is). It’s only relevant how many people are
likely to buy your game.”
The computer game industry has adopted four major strategies to address piracy:
Console-based games
Digital rights management/license management
Online gaming
Law enforcement
The dominant anti-piracy strategy by the computer game industry has been to
focus game development towards game consoles. (As a side note, the film industry’s
move towards Blu-ray from DVD would seem to be an attempt to follow suit.) The
general argument has been that control over game hardware will prevent piracy.
In fact, the move to consoles has not really stopped piracy and, potentially even
worse, it has essentially created the used game market. Used games are a totally legal
way for customers to buy games. They provide no revenues to the publisher or
developer and may even cost sales of new games by giving wavering customers a
chance to “wait a bit” and buy a game for less. (There are some counterarguments,
however. The fact that consumers know that they can resell a game means that they
may purchase a $50 game knowing that they can sell it for $10 or $20, meaning the
effective cost is $30 or $40.) Console piracy is rampant in Asia with modified
consoles often publicly sold for a modest $50 premium above a legitimate console8.
Nintendo’s very popular DS handheld console has been facing increasing problems
from the R4 cartridge9. These console hacks allow players to download games for
free or purchase them on the black market for only a couple of dollars.
The game industry has not abandoned PC games, but it has stepped up
its use of increasingly draconian licensing tools. One product, StarForce, became
so unpopular due to its modification of drivers that public pressure forced publishers to abandon the tool10. In the music industry, Sony BMG earned the ire of
music fans and lawsuits with its secret, automatic installation of a rootkit program
when certain music CDs were played on computers11. More recently, several publishers have substantially loosened installation and registration requirements for
their games after widespread objections in game blogs and online communities12.
Simple economics and widespread piracy of traditional computer games and
other software drove game developers in Korea and China to focus on online games.
Particularly in Korea, the government’s focus on developing a world-leading
telecommunications infrastructure opened the door for sophisticated games played
on a server rather than sold at retail. Games operating as a service are inherently
more difficult to pirate. Stealing a copy of the player client software is not enough;
a pirate server must be set up, operated, and maintained. This makes the pirate
server operation much more vulnerable to being detected, located, and shut down
by law enforcement.
It is worth noting that any break-even analysis should probably be done when
the game is “green-lighted” and the developer makes an initial estimate of expected sales. The question should perhaps be asked: If this game had an additional $2 million to spend, how could it best increase sales to compensate for the
estimated “anti-piracy” expenses? Other security options may make sense and
should be considered. After all, the only additional revenues are going to come
from additional sales. The Entertainment Software Association, the trade group for
most U.S. computer game publishers, claimed that piracy cost the industry $3
billion in sales a year13. They have worked to strengthen penalties and pushed law
enforcement to actively pursue individuals and organizations involved in software
piracy. When I reviewed ESA’s announcements related to piracy in October, 2006,
I found that in the previous 12 months, the ESA and U.S. and Canadian governments had imposed total fines of $36 million and pursued four major cases. This is
just over one percent of their own estimates of the pirate market14. Since there has
been no claimed reduction in piracy, there should be a question as to whether these
prosecutions deter would-be pirates at all.
Piracy is a real problem. It potentially costs the game industry billions of dollars
worldwide each year. We can’t wish piracy away, so it seems our only alternative is
some sort of anti-piracy product. Just as with our analysis of piracy, we need to
consider how much anti-piracy is worth. Should we ignore piracy or fight it?
For example, let’s assume that we are developing a traditional PC game. We
choose an anti-piracy software provider that has an upfront licensing fee of $100,000
and a royalty of 4 percent per copy sold (of the game’s retail price). Then our actual
upfront costs are:
Total Upfront Anti-Piracy Costs = $100,000 + Integration Costs
Vendors everywhere assert their products have no integration cost. Sometimes,
this is true, but usually, there is a cost for integrating any piece of software. At the very
least, you need to test it to make sure that the new software doesn’t break your old
software. In our example, we’ll say these costs are zero, just as the vendor promised.
Let’s assume the game sells for the fairly standard price of $50 and our revenue
per copy is $20 (after packaging, marketing, revenues for the retailer, and so on).
Then, our net revenue is:
Net Revenue = $20 (profit) – $50 × 0.04 ($2 anti-piracy royalty) = $18
However, we may lose some sales because of the anti-piracy tool we use and also
incur some additional customer service costs to handle complaints and such. Once
again we’ll make a simple assumption that this costs us 2 percent of sales.
With a game that sells a respectable 1 million copies, without anti-piracy we’d see:
No Anti-Piracy Revenues = 1 million × $20 = $20 million
With anti-piracy, our revenues are:
Net Anti-Piracy Revenues = 1 million × (98 percent customer base) × $18 – $100,000 = $17,540,000
Notice that I am giving the anti-piracy solution no credit for generating additional sales. So, how many more sales do we need to earn to break even and recover the
costs of our anti-piracy product?
The additional profit we need to make up, just to break even, is $2,460,000.
Increased Anti-Piracy Sales = $2,460,000/$18 = 136,667 additional units
Or, around a 14 percent increase in sales is required to compensate for the costs
of the anti-piracy product.
Suppose, instead, the anti-piracy product had no up-front fee and didn’t cost any
sales, incur any customer support issues, or otherwise make life difficult (for the customers or us as the game’s publisher).
Our break-even number would then be $2 million in additional revenue, or 111,111 additional sales, just to cover those royalties. The upfront licensing cost has negligible
impact on the result; the key driver is the royalty. This is simply a break-even analysis.
There is inherent risk in adding any software or expense to a product. In order to rate
the anti-piracy product a success, any publisher should include a margin of error and look for
expected additional sales of perhaps 200,000 units, or 20 percent.
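The arithmetic above can be collected into a small calculator. The default figures are the chapter's own assumptions: a $100,000 upfront fee, a 4 percent royalty on a $50 retail price, $20 revenue per copy, 2 percent lost sales, and a 1 million unit base.

```python
def breakeven_additional_units(revenue_per_copy=20.0, retail_price=50.0,
                               royalty_rate=0.04, upfront_fee=100_000,
                               lost_sales_rate=0.02, base_units=1_000_000):
    """Units that must be sold beyond the base run to recover anti-piracy costs."""
    net_per_copy = revenue_per_copy - retail_price * royalty_rate     # $18
    baseline = base_units * revenue_per_copy                          # $20,000,000
    protected = (base_units * (1 - lost_sales_rate) * net_per_copy
                 - upfront_fee)                                       # $17,540,000
    return (baseline - protected) / net_per_copy

print(round(breakeven_additional_units()))  # 136667, as in the text
# With no upfront fee and no lost sales, royalties alone still require:
print(round(breakeven_additional_units(upfront_fee=0,
                                       lost_sales_rate=0.0)))  # 111111
```

Running both cases makes the text's point concrete: dropping the $100,000 fee barely moves the answer, while the 4 percent royalty alone accounts for the 111,111-unit floor.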
Peer-to-peer piracy is an even harder problem, because there is really no criminal enterprise to target. There are just individuals looking for a free game or song
or movie. The main business advantage of using prosecutions as an anti-piracy
strategy is that private companies can push the costs onto governments (and taxpayers). However, this works only if piracy is actually reduced.
There is one kind of piracy protection you can’t buy: the trust of your customers.
For many years, Nintendo has cultivated a close relationship with its customers.
Game players in Japan, the US, and Europe have invested years and years of affection in characters like Mario and are quite fond of their Game Boy and NES
consoles. Nintendo works to have a great relationship with its customers. For
example, there have been a number of recent anecdotes in which players had problems with their Wii game consoles, contacted Nintendo, and a replacement was
rapidly shipped at no cost and with no questions asked. The power of Nintendo’s
brand is such that for many years instead of a piracy problem, Nintendo had to deal
with counterfeiting. Criminals would create pirated copies of Nintendo game
cartridges and try to pass them off as legitimate ones for sale. Nintendo’s piracy
resources were focused on education: to protect consumers by educating them on
how to identify counterfeit games.
The recent, explosive popularity of both the Nintendo DS handheld and Wii
game consoles has created a new problem for Nintendo. As the company has
expanded beyond its long-established customers in the core markets of
Japan, the US, and Europe, Nintendo is beginning to face typical piracy problems.
These new customers do not have any real loyalty to the Nintendo brand and are
much more willing to use tools such as the R4 Data Cartridge (see Chapter 7). This
product allows players to download games from the Internet and use them for free
instead of purchasing legitimate game cartridges. In 2008, Nintendo is probably the
most aggressive and public opponent of piracy of all the console manufacturers and
has gone from tolerating tools like R4 to actively fighting them15.
Nintendo is not the only company in the entertainment industry to build this
kind of brand loyalty. ADV Films, an importer and publisher of Japanese anime
(animated films) in the US, has also built strong ties with its customers. ADV has
faced the difficult challenge of dealing with the cost of localization (translation into
English) of the large number of anime films and TV series. The company cannot
afford to translate every anime film or show. Instead it supports independent
localization by passionate anime fans through its online community, even when
ADV has rights to the product. However, once ADV Films does create the official
translation of a product, the community voluntarily abandons the unauthorized
copies. U.S. anime fans know that they need to support ADV to ensure access to
great products and work to protect the company16. The U.S. anime fan community
and ADV have recognized that they need each other.
There have been several attempts to fight media piracy by using voluntary payments and hoping for volume sales. Stephen King launched The Plant as a serialized
book in 2000. Although he initially met his financial objective of 75 percent payers
vs. downloaders (paying $1 for each part), the numbers dropped off. After six parts
were released, the project seems to have been abandoned with the last release in
December of 2000 (starting with the fourth installment, there was a price increase
to $2, the payer rate dropped to 46 percent, and there were substantially fewer
downloads)17. The band Radiohead released a low-bandwidth MP3 version of their
album “In Rainbows” for free in October 2007, with downloaders able to “pay what they want,” only to abandon the strategy by April of 200818.
One band, Nine Inch Nails, seems to have found a way around the problem
with a strategy that could be duplicated by any game, music, or movie publisher.
For their album “Ghosts I-IV,” Nine Inch Nails created a wide range of versions
priced for different portions of their audience. Nine Inch
Nails gave away “Ghosts I” for free, had a $5 download version of the entire album,
a $10 double CD set, a $75 deluxe edition, and a $300 Ultra Deluxe Limited Edition
set19. This last version was limited to 2,500 copies and sold out in three days,
earning the band $750,000. Even after paying for all of the “goodies” (which
probably cost $10 to $20 per set to produce), it no doubt yielded a substantial profit.
China is notorious for having severe problems with piracy and counterfeit
goods. However, the billions of potential customers are irresistible to companies
around the world—including Disney. In 2006, Disney launched a promotion where
they offered customers who bought Disney products the opportunity to enter to
win a number of prizes ranging from a DVD to a trip to Hong Kong. All the customers had to do to enter the contest was mail the official Disney holographic seal
that was included on every official Disney product20. This is a brilliant anti-piracy
tactic. Customers are turned from pirate accomplices into detectives. First, they are
going to check that their items are legitimate and, second, any good fake holograms will get sent in to Disney, where they can be used to help hunt down counterfeiters and
the stores that carry their products. Entertainment companies could easily use variations on this strategy to battle pirates, counterfeiters, and even used games.
Based on industry rhetoric, piracy is certainly a serious concern for the traditional
console and PC game industry. There are real questions about whether game companies seriously consider piracy during their business and product planning
process. I have talked to a number of security companies with various anti-piracy
solutions and they typically get a courteous hearing from publishers, but no real
business, not even a pilot project. If asked for advice, I recommend that anti-piracy
companies look at other markets.
Using your brand to fight piracy is an amazingly powerful tactic and can be
quite effective. Iconic companies like Apple can charge a premium price for their
products in the market and maintain almost fanatical loyalty among their customers. This does require a long-term, strategic investment in building superior
products and powerful, supporting marketing. A brand-building tool that can also
support anti-piracy is a compelling online service, a topic that I will be revisiting later in this book.
Promotions and premium versions are powerful and underused anti-piracy
tools in the game industry. Even better, they are funded out of the marketing budget, not the (typically paltry) security budget. Selling concept art and model sculptures, giving away vacations and game libraries, and creating “frequent player cards”
are all standard marketing techniques that can also have wonderful collateral anti-piracy benefits if used carefully.
1. R. Carrol (2008), “Casual Games and Piracy: The Truth,”
2. W. Xing (2008), “Piracy Debate,”
3. S. Davis (2007), “Piracy—Fact, Fiction, and Future,”
4. S. Hayes (2006), “Piracy Stats Don’t Add Up,”
5. brownlee (2006), “Pirates to Buyers Ratio for SiN Episode 1? 5:1,”
6. Blizzard (2004), “StarCraft and Warcraft III Accounts Closed,”
7. K. Gillen (2008), Wardell: “Piracy Is Not the Primary Issue,”
8. Cho J. (2008), “Nintendo Wii Ready for Korea Debut,”
9. C. Ciabai (2008), “Nintendo Starts Epic Battle Against R4 Piracy—The Fight Is On!,”
10. A. Varney (2006), “StarForce Must Die,”
11. EFF (2005-6), “Sony BMG Litigation Info,”
12. Polybren (2008), “Mass Effect, Spore DRM Loosened,”
13. ESA (2007), “Video Game Industry Applauds Game Pirate’s Sentence,”
14. S. Davis (2006), “Modchip Manufacturer Fined $9 Million—Only 332 More Pirates to Go!,”
15. Nintendo (2008), “Nintendo Anti-Piracy,”
16. D. Roth (2005), “It’s... Profitmón!,”
17. Wikipedia (2008), “The Plant,”
18. G. Sandoval (2008), “Radiohead Won’t Repeat ‘In Rainbows’ Giveaway,”
19. Nine Inch Nails (2008), “Ghosts—Order Options,”
20. G. Fowler (2006), “Disney Fires a Broadside at Pirates,”
Distribution Piracy
Until the recent rise of digital distribution, games were available via CDs,
DVDs, floppy disks, and proprietary game cartridges. Blank media can
be purchased for pennies, which, while good for game publishers, also makes
piracy economically viable and trivial to implement. There are three ways to fight
content duplication:
Prevent duplication
Detect duplication
Use a key that is difficult to duplicate or ignore
Preventing duplication has become a bigger challenge as games have moved to
standard physical media and digital distribution. Originally, many games used
proprietary distribution technologies (game cartridges) for a number of reasons,
including fighting piracy. The only major game platforms that still use proprietary
distribution systems are handheld game consoles. The Nintendo DS game cartridge
and Sony’s PlayStation Portable (PSP) UMD disk are probably the
last generation of proprietary physical media. One important factor is cost. The cost
of data storage has plummeted even faster than processing power
and graphics have improved. When storage was expensive, it made sense for game companies to
have their own proprietary systems, especially because this had a collateral anti-piracy
benefit: Pirates basically had to operate a factory to make counterfeit game cartridges. Widespread, modern, low-cost outsourced manufacturing effectively
eliminates the last vestiges of anti-piracy benefit from using proprietary media.
Chapter 5 Distribution Piracy
Other anti-piracy techniques take advantage of the way media is physically
duplicated to prevent making a useful copy. Videotape protection systems work on
this principle. For digital media, there have been several anti-duplication techniques that work by modifying the master CD or DVD during the production
process. Other approaches stretch the CD and DVD standards in unconventional
ways such as manipulating low-level indexes and offsets to hide portions of the
media from standard duplication techniques. The problem with this tactic is that
not every product implements all portions of the standard specification the same
way, resulting in unpredictable disk failures and customer complaints.
If you can’t protect the distribution media itself, another approach is to protect the
data and detect duplication. The simplest way to do this is to label the data
as “do not duplicate” so that standard media players will not read or process the
data. The regional encoding system used for DVDs that prevents disks formatted
for different parts of the world from being played in players for other regions is the
most familiar example1. Ironically, some early Sony PlayStation 2s ignored regional coding information for DVDs—a problem that was quickly corrected once it was discovered.
The most notorious anti-piracy product in the game industry, StarForce, used this strategy (among others). It actually modified the low-level software (drivers) for PC DVD players to detect whether a disk was “StarForce protected.” Although this detected some piracy attempts, it also caused problems for other legitimate software.
The most recent example of this approach came to light through a successful
attack on Microsoft’s Xbox 360 console. The Xbox 360 uses standard DVDs for distributing games. DVDs include low-level information that describes the content on
the disk so that it can be handled by the appropriate application software in the
console. Disks are labeled as music CDs, movie DVDs, rewriteable DVDs, and Xbox
360 game disks. Microsoft has always used digital signatures on Xbox 360 game files
to prevent their modification. However, the low-level disk label is not protected.
The label is part of the DVD media specification, and Microsoft wanted to use
standard DVD players in its console to reduce cost.
Hackers took advantage of the ability to update the firmware that is available in
most commercial DVD drives. This feature is included for maintainability and to
support legitimate updates from the drive vendor. Unfortunately, hackers used this
capability to replace the standard firmware with a modified version that reported disks labeled as rewriteable DVDs to the game console as Xbox 360 game disks2. This hack has been widely used in Asia and is essentially
impossible for the console itself to detect3 (something that can be addressed by a
service like Xbox Live, which is discussed in the section entitled “Rich Interaction
Systems” in Chapter 9).
The power of duplication technologies and cheap mass storage has driven game
publishers towards other approaches, particularly in the PC market. All of the
things that make a PC useful also make it a powerful tool for piracy—lots of processing power and storage, full access and control of the hardware, and powerful
and cheap programming and analytic tools.
If you can’t protect the game media, then you need to find something else that you
can protect and tie the operation of the game to it. One of the earliest applications
of this strategy was Infocom’s “feelies”4. In the 1970s and 1980s, Infocom produced
adventure games that were quite popular. There was no widespread Internet access
or even common use of modems, so the local game had to be able to detect if the
copy was legitimate. Infocom’s innovative approach was to ship the game with various physical items that were hard to duplicate, yet played an important part in the
game and the game experience. Customers valued maps, manuals, decoder rings,
and other items that were often tied into game play, and, most importantly, were
difficult to duplicate. For a while, game companies went a bit crazy with this
approach. Games would require players to type information from game documentation into the application to start or continue play and, at its extreme, players
were forced to transcribe entire paragraphs of the manual letter-perfect. In some
sense, the rise of collector’s editions today harkens back to this earlier era, but many publishers seem to have forgotten the anti-piracy benefits of physical, tangible goods.
Although the CD key is the subject of many complaints today, its early rise was an
antidote to the inconvenience and cost of using feelies for authentication. Instead
of regurgitating game documentation, players simply had to keep the disk in the
computer while they were playing. Initially, the CD was needed because hard
drives were too small and expensive to store entire games.
Today, the game installation process still doesn’t install everything from the
DVD onto the hard disk. A portion of the game software, or even just a bit of data,
is left behind on the DVD and is checked or loaded from the disk when the game is
executed. This has become the de facto standard anti-piracy approach for PC games:
combining the disk key with some sort of physical or software anti-duplication
technology. The rise of the Internet has allowed the creation of another variant of
this tactic where the withheld data or code is downloaded in real time from an
online server (see the section called “Online Authorization” that follows).
Once hard drives got big enough and cheap enough, players didn’t want to
have to haul game disks around. After all, if the entire game can be easily stored on
the hard drive, who needs a disk? Also, if a player owns 10 or 20 games, she has to
keep track of where they all are when she wants to play. Or, even worse, if the customer plays her games on a laptop, the idea of carrying around even a single disk,
much less a disk for every game, is very unappealing.
Hackers have come up with programs that convince the game that the disk is
present or alter the installation process so that items that aren’t supposed to be
installed and stored on the hard drive are. These “NO DISK” hacks are terribly
popular to this day, even with legitimate, paying customers.
The license key was developed in parallel with the CD key. This long alphanumeric
string allowed the game software to determine whether the user was legitimate or,
at least, had access to a legitimate game key. License keys have also been used in conjunction with online registration and authentication. A license key is essentially a
rather long password and typically works in one of three ways:
ID and Checksum
Public Key Encryption
Online Authorization
I discuss each of these methods in the following sections.
First, the key can contain a random ID and checksum. The game program has a
mathematical algorithm that the program runs on the random ID portion of the
license key to determine whether the computed checksum matches with the checksum provided in the license key.
The problem with this approach is that hackers often reverse-engineer the
process (or game developers are lazy and use a familiar function such as the MD5
standard hash function) and can generate valid license keys on demand. These hack
programs are sometimes called, not too cleverly, keygens. This algorithmic process
is very tempting for online registration systems because it doesn’t require any storage of keys to validate licenses. Also, distributors and manufacturers can be given
the company’s key generation process, which substantially simplifies production: A
manufacturer sets up a printer to produce as many license keys as desired. They do
not need to coordinate anything with the game publisher, maintain or track how
many keys they have produced, store the specific keys that they have produced, or
send actual keys back to the publisher to support online registration validation.
Also, the game disks are identical, making their production cost low:
Generate Random ID
Generate Checksum (Random ID)
Build License Key = Random ID, Checksum(Random ID)
Verification is also simple:
//Checksum algorithm is all that has to be stored in the game software
Split License Key into Random ID and Checksum
Compute Checksum (Random ID)
Compare Computed Checksum with Received Checksum
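The generation and verification steps above can be sketched concretely. This is a minimal illustration, assuming an HMAC-SHA256 checksum and made-up field sizes (the text specifies neither); note that the keyed construction is exactly what protects against the keygen problem discussed next, whereas a bare hash like MD5 would not.

```python
import hmac, hashlib, secrets

# Hypothetical publisher secret used to key the checksum. With an unkeyed
# function (e.g. bare MD5), anyone who recovers the algorithm can keygen.
CHECKSUM_SECRET = b"publisher-checksum-secret"

def checksum(random_id: bytes) -> bytes:
    # Generate Checksum(Random ID): first 4 bytes of a keyed HMAC tag
    return hmac.new(CHECKSUM_SECRET, random_id, hashlib.sha256).digest()[:4]

def make_license_key() -> str:
    # Build License Key = Random ID, Checksum(Random ID)
    random_id = secrets.token_bytes(6)
    return (random_id + checksum(random_id)).hex().upper()

def verify_license_key(key: str) -> bool:
    # Split License Key, recompute the checksum, compare with the received one
    try:
        raw = bytes.fromhex(key)
    except ValueError:
        return False
    if len(raw) != 10:
        return False
    random_id, received = raw[:6], raw[6:]
    return hmac.compare_digest(checksum(random_id), received)
```

A freshly generated key round-trips through verification; altering any character of a valid key breaks the checksum match.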
The second license key system replaces the checksum with a public key decryption
function (see the glossary). This would appear to stop hackers pretty well. After all,
knowledge of the public decryption key does not give access to the secret encryption key:
// The game software stores the public key decryption
// algorithm and the game’s public key
Decrypt License Key with Game Public Key
Validate License Key
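A toy concrete version of this check, using deliberately tiny RSA parameters and invented names (no real product's scheme), makes the trust relationship explicit: the game only ever needs the public key.

```python
import hashlib

# Toy RSA with tiny primes, purely to illustrate the flow; real systems
# use 2048-bit keys and proper signature padding.
def make_keypair(p: int, q: int, e: int = 17):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse (Python 3.8+)
    return (n, e), (n, d)              # (public key, private key)

def _digest(license_id: int, n: int) -> int:
    h = hashlib.sha256(str(license_id).encode()).digest()
    return int.from_bytes(h, "big") % n

def sign_license(license_id: int, private_key) -> int:
    # The publisher signs license IDs with the secret key
    n, d = private_key
    return pow(_digest(license_id, n), d, n)

def verify_license(license_id: int, signature: int, public_key) -> bool:
    # Decrypt License Key with Game Public Key, then validate
    n, e = public_key
    return pow(signature, e, n) == _digest(license_id, n)
```

Because verification needs only the embedded public key, a hacker who swaps that key for one he generated himself can make any forged license verify — which is the attack the text goes on to describe.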
This just requires hackers to change their tactics. Instead of looking for the
checksum algorithm, they simply need to find the stored public key. To complete
the attack, the hacker “finds and replaces” the game public key with one that the
hacker has generated. The hacker then uses his own private key to generate whatever license key he wishes.
The third major approach does not authenticate the key locally, but requires a connection to an online license server operated by the game publisher. In this case, the
license key is essentially a password. The game program sends the password to the
license server for authentication. Mathematically, the process is identical to the
process described for the “ID and Checksum” method, but with the verification
carried out at the license server instead of locally:
// At some point, the customer enters the license key
// into the game application
Game Application retrieves License Key
Game Application sends License Key to License Server
License Server validates License Key
License Server sends Validation Message to Game Application
Player plays (or not)
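The flow above can be collapsed into a sketch where the network round trip is a direct function call; the key list, the two-machine limit, and the message strings are invented for illustration, not a real license server's API.

```python
# Keys the publisher actually issued (a fixed list, per the discussion below)
ISSUED_KEYS = {"ABCD-1234", "EFGH-5678"}
ACTIVATIONS = {}  # license key -> set of machine IDs seen so far

def license_server_validate(license_key: str, machine_id: str) -> str:
    # License Server validates License Key and applies the license policy
    if license_key not in ISSUED_KEYS:
        return "REJECT: unknown key"
    machines = ACTIVATIONS.setdefault(license_key, set())
    machines.add(machine_id)
    if len(machines) > 2:  # example policy: at most two machines per key
        return "REJECT: too many activations"
    return "OK"

def game_startup(license_key: str, machine_id: str) -> bool:
    # Game Application sends License Key; License Server sends Validation Message
    return license_server_validate(license_key, machine_id) == "OK"
```

Tracking activations per key is what lets the server detect key reuse across machines, as discussed next.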
The license server can operate just like the local license check, and often does.
One advantage of an online license server is that it can detect attempts to reuse
license keys on different computers. If the license server stores a list of keys that
have been registered, it can reject or take various actions based on a company’s
license policy (see the “License Policy” section of Chapter 6). License servers can
use two approaches to track keys: a fixed list of issued keys and an algorithmic approach, as described for game application license verification. It is very tempting to
use an algorithmic approach to license verification. It requires less storage on the
online server and no coordination with whomever is producing the game disks
and license keys. The downside of this approach is that it is vulnerable to any
exploitation of the key generation process: Once the process is compromised, the
license server can only verify the uniqueness of the license keys, not their legitimacy.
If, instead, the license server contains a list of all of the license keys that have
been legitimately issued, it is much less vulnerable. First, there is no need to create
a license key generation algorithm: The keys can simply be stored in bulk. If a key
is compromised before it has been issued, it can be removed from the license server list, and the company can recover from the compromise or avoid it entirely.
It is possible for the key producer and license server to use a shared secret key
to generate individual license keys. In this case, the two parties share a license
generation key (LGK) and a license generation function (license_generator).
The license generation function creates license keys based on an index (i) and the LGK:
// algorithm to generate the ith license key
license(i) = license_generator(LGK,i)
The key producer and license server simply need to exchange the latest index
value for the license key that has been generated. The license server can then
generate all of the license keys that have been created since the last batch by simply
iterating through the new index values:
// if the last index produced is L and the new last index is N
for (i = L + 1; i <= N; i++) {
    license(i) = license_generator(LGK, i);
}
This has some modest advantages in terms of necessary communications between the license server and the key producer, but does create additional risk in
terms of the storage of the license generator key.
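One plausible realization of license_generator, assuming an HMAC-SHA256 construction and a made-up key format (the text leaves the function unspecified):

```python
import hmac, hashlib

# Hypothetical license generation key shared by key producer and server
LGK = b"shared-license-generation-key"

def license_generator(lgk: bytes, i: int) -> str:
    # license(i) = license_generator(LGK, i): deterministic, so both
    # parties derive identical keys from the index alone
    tag = hmac.new(lgk, i.to_bytes(8, "big"), hashlib.sha256).hexdigest()[:12]
    return f"{i:06d}-{tag.upper()}"

def catch_up(last_index: int, new_last_index: int) -> list:
    # The license server regenerates every key issued since the last
    # exchanged index, iterating L+1 .. N
    return [license_generator(LGK, i)
            for i in range(last_index + 1, new_last_index + 1)]
```

Only the latest index crosses between the parties; the trade-off, as noted above, is that the LGK itself becomes a high-value target.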
Developers and publishers can also set up schemes based on the online license
registration process to track piracy fairly accurately. Note, however, that failed license registration attempts do not correspond one-to-one to individual pirates: a motivated pirate who fails to register a game will likely try again and again until he succeeds.
I’ve had a number of discussions with game developers about piracy and, in many
cases, they basically feel that piracy, and security in general, is the problem of the
publisher. This is fine, up to a point. However, developers often earn royalties based
on the number of games sold, so they do have a vested interest in good anti-piracy
protection. For example, a developer could use license keys as a way to directly audit
game sales—if a license key is valid, then, clearly, the developer should earn royalties
for that game copy. If license keys are issued in lots or by using an index-based key
generation process, the developer can use the highest license key index as a royalty
tracking metric.
Keys can be used for game activation, validation for operation, or both. A key for
activation is essentially used one time to convince the game software that you have
a legitimate license key. This can be done locally or with a license server, as described previously. A key for operation is required each time the game application
runs. In order to avoid forcing the users to reenter a license key every time a game
is executed, the game needs to store the fact that the game has been activated in
some manner.
The activation key is an obvious target for a pirate. After all, if the activation
process can be spoofed (faked out) in some manner, then the entire license and
registration process can be ignored. Some games have moved towards using a real-time check with a license server to verify that the game license is valid each time the game runs. This can be very inconvenient for customers who do not have network access (such as players on airplanes, or soldiers and others in remote locations). In this case,
the game application needs to somehow store the fact that it can’t connect to the
Internet and still make a license policy decision. A common approach is to keep a
counter of how long it’s been since the application has last been able to access
the license server or how many times the application has been executed since the
application was last able to be validated. This counter is also a great target for hackers.
What to do?
Unfortunately, this sensitive information (license key, activation status, last
valid server authentication, and so on) ultimately has to be stored on the computer
with the game application. Encryption has fairly limited efficacy as a tool because
the encryption key has to be present in the game application for the sensitive data
to be extracted and, if the key is present, it can be attacked. Protecting a license key
is, in some sense, a bit easier than protecting other data since it doesn’t change. A
diligent hacker will look for changed data and focus on the changes to prepare an
attack. Instead of directly storing an encryption key, the game application can use
static, but unique, data on the computer to build the key.
For example, most PCs have an operating system license key or other information that is stored within the computer’s configuration information (for Windows
PCs, this is the Registry). There are other unique data elements that may be used:
MAC addresses for Ethernet cards, hard drive IDs, license keys for a number of
common applications, even configuration data stored during the application’s
installation. Game developers want the information to be platform-unique so that
it is more difficult for a pirate to directly copy the installed game onto another computer.
Once one or more platform-unique, stable identifiers are found, you can protect your sensitive data by splitting it, obfuscating it, or both.
Splitting data is quite easy and fairly effective for small amounts of data, such as license keys. Basically, an application developer “adds” the sensitive data to the
unique identifier (UID) and stores the sum of the two:
// retrieve unique identifier
UID = retrieveUniqueIdentifierX();
/**Often it is a good idea to perform some sort of operation on the UID
to provide a further simple level of indirection and, often more
importantly, make the result the right size to work with the sensitive
data. For example, a hash function, such as MD5, is used here. If you
do use MD5, it is much better to use a keyed MD5 function. */
ModifiedUID = Hash(UID);
/** Input the known sensitive data and add the two together (such as
with an XOR function).*/
Protected_Sensitive_Data = ModifiedUID XOR SensitiveData
// This data is then stored.
If an adversary reads out the stored data, it does not look like the license key or
other sensitive data and it is tied to the specific platform. It is worth noting that
sometimes unique data is not stable: Ethernet cards get changed, operating systems
get updated, and registries get corrupted. To help ensure a smooth user experience,
you can replicate this process with multiple items of unique data and perform a
majority vote or other process to improve the reliable recovery of the stored sensitive data.
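The splitting pseudocode above might look like this in practice, with HMAC-SHA256 standing in for the suggested keyed MD5, an invented application key, and a MAC address as the hypothetical platform UID:

```python
import hmac, hashlib

# Hypothetical application-specific key for the keyed hash step
APP_KEY = b"app-specific-key"

def modified_uid(uid: bytes) -> bytes:
    # ModifiedUID = Hash(UID), sized to match the sensitive data (16 bytes)
    return hmac.new(APP_KEY, uid, hashlib.sha256).digest()[:16]

def protect(sensitive: bytes, uid: bytes) -> bytes:
    # Protected_Sensitive_Data = ModifiedUID XOR SensitiveData
    # (sensitive data up to 16 bytes, zero-padded)
    pad = modified_uid(uid)
    return bytes(a ^ b for a, b in zip(sensitive.ljust(16, b"\0"), pad))

def recover(stored: bytes, uid: bytes) -> bytes:
    # XOR is its own inverse, so the same operation recovers the data
    pad = modified_uid(uid)
    return bytes(a ^ b for a, b in zip(stored, pad)).rstrip(b"\0")
```

The stored value reveals nothing recognizable on its own, and recovery on a machine with a different identifier produces garbage rather than the license key.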
There are two kinds of obfuscation to protect sensitive information in a program.
Code obfuscation protects against reverse engineering and data obfuscation hides
data that is stored locally and protects it from manipulation. It is important to note
that these solutions are inherently weak—when the program runs, the code needs
to execute, so its underlying logic is present and can be read out (at least at the assembly language level). Similarly, at some point in time the program’s sensitive data
needs to be read by the program into the processor’s registers so that it can be used.
The typical way to protect data from being read or altered is encryption. This
solution is easy to implement using standard libraries. However, encryption is often
CPU intensive so it would have an adverse impact on performance. Also, because
the key needs to be present, encryption is only as strong as the ability to hide (or, in
this case, obscure) the key. Because of these factors, it often makes sense to use a
lighter-weight, non-standard function to obscure the data. The goal is to force the
attacker to reverse-engineer the program’s code as well as find the key. As long as
the function makes the work associated with extracting the data reasonably hard, it
is probably good enough (or rather, as good as it gets!).
It is possible, and can be a good idea, to combine the two concepts—you use
splitting to protect the key used for data obfuscation:
//Extract UID as above
ModifiedUID = Hash(UID);
// Retrieve stored data split and recover key
SensitiveDataKey = ModifiedUID XOR Protected_Sensitive_Data_Key;
// Recover sensitive data
SensitiveData = Deobfuscate(ObfuscatedData, SensitiveDataKey);
// Validate Data
Validation is an important part of the sensitive data-handling process. The
game application needs to know if an attacker has altered the data. A very simple
option to protect your data is to store a copy of the data unprotected as well as one
that is protected. This makes the unprotected data a very tempting target for hackers and, if they are lazy, they might not bother to figure out that there is obfuscation or other protection being used to protect the sensitive data. Another option is
to store the sensitive data encrypted with two different keys. However the multiple
versions of the data are stored, the game application then simply compares the
protected and unprotected copies of the sensitive data. If they don’t match, there is
a problem and the program has likely been attacked. If the program has been
attacked, it can take actions to defend itself (see the section later in this chapter
entitled “Busted Pirate: Now What?”).
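The two-key variant can be sketched as follows. The XOR-pad "obscuring" function and the key strings are illustrative stand-ins for whatever lightweight function a game actually ships; the point is only the compare-on-load step.

```python
import hmac, hashlib

def obscure(data: bytes, key: bytes) -> bytes:
    # Lightweight obscuring: XOR against a key-derived pad (self-inverse);
    # works for data up to 32 bytes in this sketch
    pad = hmac.new(key, b"pad", hashlib.sha256).digest()[:len(data)]
    return bytes(a ^ b for a, b in zip(data, pad))

def store(data: bytes):
    # Keep the same value obscured under two different keys
    return obscure(data, b"key-one"), obscure(data, b"key-two")

def load(copy1: bytes, copy2: bytes) -> bytes:
    a = obscure(copy1, b"key-one")
    b = obscure(copy2, b"key-two")
    if a != b:
        # If the copies disagree, the program has likely been attacked
        raise ValueError("stored data mismatch: possible tampering")
    return a
```

An attacker who patches only one stored copy trips the mismatch check on the next load, giving the program a chance to defend itself.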
The final strategy that can be used is to take advantage of the ordinary file that
stores a saved version of a game. Sensitive data can be stored directly or in an
obscured form along with the saved game. You can strengthen the security of the
sensitive data (and the saved-game information) by using a keyed hash or cryptographic checksum on the saved-game data. This can work fairly well, as players
want to save their game progress.
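A minimal keyed-hash check over saved-game data might look like this; the key, the JSON serialization, and the file layout are assumptions for the sketch (in practice the key would itself be derived and hidden as described earlier).

```python
import hmac, hashlib, json

# Hypothetical MAC key; in practice derived from platform data, not a constant
SAVE_MAC_KEY = b"derived-save-key"

def write_save(state: dict) -> bytes:
    # Append a keyed hash (HMAC-SHA256) to the serialized saved game
    body = json.dumps(state, sort_keys=True).encode()
    tag = hmac.new(SAVE_MAC_KEY, body, hashlib.sha256).hexdigest().encode()
    return body + b"\n" + tag

def read_save(blob: bytes) -> dict:
    # Recompute the tag over the body and refuse a save that fails the check
    body, tag = blob.rsplit(b"\n", 1)
    expected = hmac.new(SAVE_MAC_KEY, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("save file failed its integrity check")
    return json.loads(body)
```

This protects the piggybacked sensitive data and the save itself in one step: editing any byte of the save invalidates the tag.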
One of the thorny questions with pirates and other troublemakers in games is what
to do with the troublemakers once they’ve been caught. Very often, game developers choose to disable the game application or subtly cripple the program in some
manner. Techniques can vary widely, and, since developers really, really dislike
pirates, the countermeasures are often quite annoying for the pirates. A notable
recent case involved the game Titan Quest, where the developers made the game
experience miserable and buggy for game applications that detected that they had
been pirated5. The game was leaked onto pirate distribution sites shortly after the
game’s launch. The hackers broke the game’s primary anti-duplication system in
order to make the game work, but had not removed its more subtle piracy-detection
features. As a result, the pirates had a terrible game experience. The unintended
consequence of this was that many people who downloaded the pirated game
started writing very negative reviews and comments on gaming sites. The security
system turned out to be a bit too clever. Instead of convincing pirates to either buy
the game or uninstall it, the game rapidly earned an undeserved reputation for
being buggy, unreliable, and generally crummy.
Prompt, decisive action by the anti-piracy system makes it easier for hackers to
analyze and eventually circumvent the security system. Although “soft failures”
may complicate the removal of the anti-piracy service, they may actually hurt legitimate sales6. This is the central anti-piracy paradox.
It is important to remember that individual players of pirated games are potential customers. After all, they did acquire the game somehow and, if nothing else,
successful anti-piracy detection confirms that they are playing the game. Ideally, the
game developer and publisher should want to drive the player towards a legitimate
purchase. The anti-piracy system could activate nagware to encourage the purchase
of the game, show ads for other games or products, and, over time, perhaps even
offer the game at a discount (if you are an irredeemable optimist, you can look at a pirated game copy as one whose manufacturing, marketing, and distribution costs are zero).
Most online games store all of the data on the server. This has obvious advantages
in terms of security. It is possible for online games to use digital signatures and hash
functions (or even encryption) to store data on local players’ computers as a backup
and recovery strategy. Conceptually, there is no reason that the backup has to correspond to the specific player on the client computer.
1. Wikipedia (2008), “Regional Lockout,”
2. J. Reimer (2006), “Xbox 360 Hacked, Microsoft Responds,”
3. S. Carless (2006), “Exclusive: Xbox 360 Piracy Spreading Fast in China,”
4. Wikipedia (2007), “Feelie,”
5. M. Fitch (2008), “Venting My Frustrations with PC Game-Dev,”
6. B. Fox (2003), “Subversive Code Could Kill Off Software Piracy,”
DRM, Licensing, Policies,
and Region Coding
Let me begin by clearly declaring my bias—I am not a big fan of Digital Rights
Management (DRM). Most DRM systems misunderstand and misuse
cryptography. They cause more trouble for legitimate customers than they
gain from preventing piracy, if they work at all. I’ve been asked on a number of
occasions to recommend DRM systems and my short, flip answer is “Pick the
cheapest one that works with your business model; that way you won’t be too
disappointed when it fails.”
The full answer is more complicated. Technically, DRM systems are divided into
two portions: the protection mechanism and the license policy. The protection
mechanism is the technique (or set of techniques) used to identify the item being
protected and specify its “rights.” The license policy takes this “rights” identification information and enforces it.
In practice, DRM systems are often combined with digital distribution and
payment processing systems. For many game developers and publishers, these
features may ultimately be much more important than the DRM security tool itself.
I’ve been monitoring a number of these companies for several years and it seems
that the successful ones have evolved into general-purpose digital media distribution and sales services. The ones that have remained focused purely on DRM seem
to fade away.
Chapter 6 DRM, Licensing, Policies, and Region Coding
The essential problem with DRM is the same problem we face with obfuscation: at some point the protected digital asset has to operate locally on a customer’s
computer. This is where it is always attackable, because hackers can do any of the following:
Disable the DRM system.
Convince the DRM system that everything is okay.
Modify the DRM system so that it always reports to the main application that everything is okay.
Let the DRM system operate, but have the main application ignore the security checks.
Simply strip the DRM from the main application.
To add insult to injury, once a DRM system is compromised, the protected application is compromised permanently. Almost every DRM provider claims that it
can recover from a compromise or failure. The DRM system can recover, but the
security of the protected application is compromised permanently.
The very same cryptography that effectively protects communications and digitally
signs documents fails miserably in enforcing digital rights. The reason cryptography is a powerful tool for security is that it basically turns problems for spies and
hackers into problems for PhD mathematicians and lots of supercomputers. This
works because standard cryptographic systems involve multiple parties that are
engaged in some sort of transaction that they are trying to protect from outside attackers.
An encryption system uses cryptography and a key to allow communications
between “good guys” to be protected from “bad guys” who are outside of the
communications network. Encryption systems don’t work if one of the insiders is
a bad guy: The bad guy can read or alter the data. I have seen people propose using
encryption to protect game high scores. This doesn’t work because the person who
wants to cheat is an insider—the player who wants to post an illegitimate high
score—and this person can freely alter the data before it is encrypted.
Digital signatures can allow the recipient of a piece of data to know that it is
from the sender and that it has not been modified in transmission. If the sender sends
malicious data, the digital signature process does nothing to distinguish between
“good” data and “bad” data, just as with the encryption scenario. The recipient may
know whom to blame, but that is different from being able to determine that the
data is “good.” Conversely, if the recipient wants to use the data even if the digital
signature fails, there is nothing to prevent her from doing so. Again, the system fails
if the bad guys are the ones creating the signature or if the bad guy is being “forced”
to verify the problematic data.
Pretty much every DRM system relies on cryptography to enforce its security.
Unfortunately, the “bad guy” is the person trying to use the protected application
and is definitely an insider (it is his computer, after all). All of these types of attacks
can be used despite the presence of encryption or digital signature functions
because the problem isn’t a math problem; it’s a hacker problem. Even if, somehow,
you could protect the code, there still has to be a key somewhere. If the protected
application is digitally signed by some authority, that authority’s public key is
present. The simplest attack on digitally authenticated or encrypted data involves
replacing that public key with an “evil authority” public key, for which the hacker
knows the private key. The hacker can then sign and legitimize any data or license he wishes.
In general, although DRM systems may include cryptography, they are not
systems that can be cryptographically secured.
There are ways around this problem. Hardware can make it more difficult
to find or modify the software or keys. However, the most effective solution is to
enmesh the users in a system where external parties verify their legitimacy. I call this
a “rich interaction system” in Chapter 9.
There are numerous DRM products and they work in extremely different ways.
Rather than discussing individual DRM solutions, I’ve broken DRM down into a
set of major approaches: Any specific product may combine one or more of these
techniques. Also, a DRM solution may be combined with one or more of the media
and licensing techniques I discussed earlier.
With fingerprinting, each copy of a work of digital media has a unique identifier
(the fingerprint) embedded within it. Fingerprints are actually placed inside of the
media file—modifying it in small, almost undetectable ways that ensure that
the fingerprint is present without distorting the base media (usually music or
graphical assets). A better term would really be a “tattoo,” as this data is not inherent in the digital media.
Fingerprint systems can be attacked in three ways: by modifying the media, so
that it is no longer fingerprinted, but still usable; by altering the identifier, so that
it can be used with another media player; or by changing the media player, so that
it ignores the fingerprint.
Like fingerprinting, covert fingerprinting embeds unique identifiers into each
individual piece of digital media. With this technology, customer media readers do
not process or identify fingerprints. Rather, media distributors or their agents scan
widely distributed copies and use the covert fingerprints to determine the source of
unauthorized copies.
Covert fingerprinting is actually a fairly effective solution for detecting unauthorized copies, especially for determining where a compromise occurred during the production process or during a limited release (to reviewers or external testers). For
example, the Academy Awards sends out special DVDs and players to its members
during the voting process. By using a covert fingerprint, they could determine if a
specific copy of a DVD had been misused and take appropriate action against that
member. As discussed in the “Attacking Fingerprints and Watermarks” sidebar, it
is important that hackers not have multiple, distinct copies of the media file or they
may be able to corrupt the fingerprint and make it much more difficult to track
down the culprit.
The biggest problem with fingerprint solutions is that they are somewhat expensive. One of the key advantages of license keys is that they are a small, efficient
way of making a product unique. Fingerprinting systems have to be constructed
carefully so that they don’t disrupt the user’s experience with the protected
media—graphics can’t be visibly degraded and sounds can’t be audibly distorted. In
addition, most fingerprinting systems require a standard license key or unique user
identity to function.
A watermark is very similar to a fingerprint: It is information that has been embedded in all copies of a piece of digital media. The information is either identical for
all copies or divided into large categories (the most familiar example is the actual
“watermarking” found in paper currency). Watermarks are much easier to produce
than fingerprints, because designers have to create fewer distinct versions.
As with covert fingerprinting, watermarking is more of a forensic or anti-counterfeiting tool than a digital rights security tool: only special devices can read
the watermark and determine the authenticity of a copy. Theoretically, watermarks
can be used for digital-rights protection; however, the fact that the watermark is
common across all copies and that all audience media players will have a copy of the
“watermark checker” invites circumvention.
Protecting Games: A Security Handbook for Game Developers and Publishers
One of the things that continues to surprise me is how little time security designers spend thinking about how a hacker could attack their systems. The power of
fingerprints and watermarks comes from the difficulty in finding them: If they are easy
to find, they are easy to remove. They are just bits, after all. So, if you have a basic
item of digital media (DM), then you add a fingerprint to it by altering it into fingerprinted digital media (FDM) for each version (i):
FDMi = DM XOR FingerPrint(i); // for each version i
The security designer looks at this and says, “Wow, given that the digital media is
big and our changes are small, how in the world could a bad guy find the fingerprint
and remove it?”
Buy two copies.
Hackers are lazy. Why work to find the fingerprint when you don’t have to? So, for
the cost or effort of getting two distinct copies of the target digital media (FDM1 and
FDM2), they can now attack it pretty easily. Let’s just “add” the copies together
(FDM1 XOR FDM2). By using the exclusive or (XOR) function (which basically
detects where bits are different), you can find the fingerprints… almost:
FDM1 XOR FDM2 = DM XOR FingerPrint(1) XOR DM XOR FingerPrint(2)
              = FingerPrint(1) XOR FingerPrint(2);
// the two copies of the digital media (DM) cancel out
You don’t quite have either fingerprint, but you are pretty close. Basically, you
know where all the bits are in one fingerprint, but not in both. If you randomly “flip”
the bits associated with this combined fingerprint in copy 1 (FDM1) or copy 2 (FDM2),
you can hopelessly garble the fingerprint so that it is not readable. Basically, this is
like taking sandpaper or acid to your own fingerprints, but much less painful. If necessary, you can buy more copies and run experiments until you have successfully
stripped the fingerprint. It is also possible to introduce the fingerprint prior to encoding or compressing the media. This can result in seemingly substantially different
outputs. However, if the media can be returned to any sort of standard form, the fingerprint can still be removed or rendered ineffective.
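The cancellation described above is easy to verify in a few lines. The Python sketch below uses hypothetical media bytes and two deliberately sparse fingerprints; it embeds each mark by XOR, then shows that XORing the two copies together erases the media and leaves exactly FingerPrint(1) XOR FingerPrint(2):

```python
import random
import secrets

def embed(media: bytes, mark: bytes) -> bytes:
    """Embed a fingerprint by XORing a sparse mark into the media."""
    return bytes(m ^ f for m, f in zip(media, mark))

# Hypothetical media and two per-copy fingerprints (sparse, so each
# copy is only altered in a few bit positions).
media = secrets.token_bytes(16)
mark1 = bytes([1, 0, 0, 2] * 4)
mark2 = bytes([0, 4, 0, 2] * 4)
fdm1, fdm2 = embed(media, mark1), embed(media, mark2)

# The attack: XOR the two copies together. The media cancels out,
# leaving FingerPrint(1) XOR FingerPrint(2) -- every bit position
# where the two fingerprints differ.
combined = bytes(a ^ b for a, b in zip(fdm1, fdm2))
assert combined == bytes(a ^ b for a, b in zip(mark1, mark2))

# Randomly flipping a subset of those differing bits garbles copy 1's
# fingerprint without touching any byte on which the two marks agree.
mask = bytes(c & random.getrandbits(8) for c in combined)
garbled = bytes(a ^ m for a, m in zip(fdm1, mask))
```

Note that the attacker never learns the original media or either individual fingerprint; knowing only where the two copies disagree is enough to destroy the identifying bits.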
Security labels or tags are supplementary data appended to a piece of digital
media and may also be bound to the digital media by a digital signature (see the
“Digital Signatures” section that follows). Tags are typically used with proprietary
encoding and post-processing systems to limit copying or other use of digital media
(the regional encoding for DVDs is probably the most familiar example). They
can also include simple serial numbers or other identification and use-control information.
Tags can be easily removed or altered, as they are a distinct portion of a digital
media file or stream. They are often clearly identified and explained in the public
digital media specification (in contrast to watermarks or fingerprints). Nearly every
DRM or other licensing scheme includes some sort of labeling and tagging system.
Some developers attempt to conceal this information, but, ultimately, it must be
readable by the local digital media player or license policy application, and will
eventually be reverse-engineered.
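To make concrete why tags are so easy to strip, here is a minimal sketch assuming a hypothetical format in which the tag is a JSON dictionary appended after the media payload. A region check built on such a tag works only until someone who has read the public specification rewrites it:

```python
import json

# Hypothetical layout: payload, then a marker, then a JSON tag.
TAG_MARKER = b"\x00TAG"

def write_tagged(media: bytes, tag: dict) -> bytes:
    return media + TAG_MARKER + json.dumps(tag).encode()

def read_tag(blob: bytes) -> dict:
    _, _, raw = blob.partition(TAG_MARKER)
    return json.loads(raw)

def player_allows(blob: bytes, player_region: int) -> bool:
    return player_region in read_tag(blob).get("regions", [])

disc = write_tagged(b"...movie data...", {"serial": 1234, "regions": [1, 2]})
assert player_allows(disc, 1)
assert not player_allows(disc, 3)

# Because the tag is a distinct, documented portion of the file,
# an attacker can simply rewrite it with a friendlier region list.
cracked = write_tagged(b"...movie data...", {"serial": 1234, "regions": [1, 2, 3]})
assert player_allows(cracked, 3)
```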
Sometimes, security information needs to be altered in the local copy of a piece
of digital media. Dynamic security labels or tags are simply labels that can be modified by a local media player or the media itself, if it is an executable program, like a
game. The most familiar examples are licensing systems that restrict the number of
copies that can be made of a piece of digital media. Other, potentially more
interesting, uses of these labeling systems include “buddy” versions of
games (where copies are allowed that are tied to a local “master” copy so that friends
can play together with only a single, purchased copy of a game), family licenses that
allow a set of authorized copies to be built from a single piece of digital media, or
affiliate licenses that allow consumers to earn revenue from the individuals to
whom they provide a copy of the media.
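A copy-count restriction of this kind reduces to a mutable counter in the label, decremented by the local player each time it authorizes a copy. A minimal sketch (the field names are hypothetical):

```python
# Hypothetical dynamic label with a mutable "copies_left" counter.
def try_copy(label: dict) -> bool:
    """Authorize one copy if the label still permits it."""
    if label["copies_left"] <= 0:
        return False
    label["copies_left"] -= 1
    return True

label = {"license": "family", "copies_left": 2}
assert try_copy(label)      # first authorized copy
assert try_copy(label)      # second authorized copy
assert not try_copy(label)  # limit reached
```

Of course, because the counter lives in modifiable local data, it must itself be protected (by signing or obfuscation), or a hacker will simply reset it.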
Digital signatures wrap a piece of media with a tag that includes additional information but is also derived from the media itself. Digital signatures are usually combined
with one of the other means of protection. The important attribute of signatures is
that a signature verifier cannot also create a valid signature because the system
is based on public key cryptography. The problem is, as discussed, that the local
public key can be replaced, or the entire signature process can be circumvented.
Encryption is the use of a cryptographic function in conjunction with a secret key to
protect data from being read by anyone without the secret key. The problem with
protecting digital media is, of course, that the “secret key” somehow has to exist in
every copy of the digital media. Technically, this means that from a digital media
protection perspective, there is no difference between encryption and proprietary encoding.
Proprietary encoding is the use of a distributor-controlled format for the distribution and a proprietary player that is required to read the digital media. Proprietary
encoding can be used in conjunction with other DRM and security techniques. For
games, Adobe’s Flash and Shockwave file formats are the most familiar examples.
In the traditional PC games and console markets, Epic Games’ Unreal game
engine is being used so widely that it may be becoming a de facto standard for the
latest generation of games.
The practical problems associated with proprietary encoding include the limitations that it imposes on artists and distributors for the production and control
of media. For example, Adobe’s Flash application, although it is very popular on the
web, was not immediately supported by Apple’s iPhone. Other complications
include allocating royalties to the owners of the encoding technology. The recent
battles between Blu-ray and HD DVD disk formats, royalties on blank disks and
tapes, and the battle between VHS and Betamax are all examples where proprietary
encoding has created larger business problems. Excepting the iPhone platform,
Adobe has largely avoided many of these issues with its proprietary products because
of its business model—selling development tools while giving away the media players.
The security problem with proprietary encoding schemes is that these schemes
are vulnerable to reverse engineering: DeCSS allows DVDs to be read and processed
in software by PCs with open source tools. In the hands of pirates, these tools can
be used to regenerate the media into any form and format desired. DeCSS showed
that the reverse engineering of the DVD proprietary encoding system was not
difficult and we are already seeing similar weaknesses in the “next generation” formats: Blu-ray and HD DVD. Virtually every music-related DRM system seems to
be regularly hacked and, in most cases, the media can be extracted into a standard
format such as an MP3 file (this is not a problem for traditional games that are
implemented as custom software).
Obfuscation (also discussed in Chapter 5) is an anti-reverse engineering technique
that protects the underlying media from being easily parsed or edited. Obfuscation
is essentially an analog to old physical media security systems. This technique
typically relies on very low-level machine language and file specifications to alter a
program or data so that it yields the expected result, but the result is computed
or stored in a manner that is difficult to understand without extensive reverse engineering. Obfuscators are just that: they obscure information; they don’t encrypt
it. In and of itself, obfuscation is not really an anti-piracy technique since a copy of
an obfuscated application or media will continue to work as expected. Obfuscation
is used with other anti-piracy techniques to attempt to conceal the location, structure, and operation of the overall DRM system. Because of the inherent weaknesses
of many DRM systems, the security of the DRM system is actually only as strong as
its obfuscation, not the cryptography or other techniques.
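A classic example of obscuring without encrypting is XOR-encoding strings so they don’t appear in a casual scan of the binary. The key ships inside the program, so this raises the reverse-engineering effort without providing any cryptographic protection. A sketch, assuming an arbitrary single-byte key:

```python
# Strings XOR-encoded with a key that ships inside the program
# (key value here is arbitrary/hypothetical).
KEY = 0x5A

def obfuscate(s: str) -> bytes:
    return bytes(b ^ KEY for b in s.encode())

def deobfuscate(blob: bytes) -> str:
    return bytes(b ^ KEY for b in blob).decode()

hidden = obfuscate("license_check_failed")
assert b"license" not in hidden                       # invisible to a naive string scan
assert deobfuscate(hidden) == "license_check_failed"  # but trivially recovered
```

Anyone who finds the decoder routine recovers every string, which is exactly the point made above: the DRM is only as strong as the obfuscation hiding it.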
Split delivery is a wonderfully straightforward tool. This technique works by limiting the digital media that is distributed to a person to only the portions that they
have paid for (or are available for free). Instead of looking for a clever security
technique to disable code, features, levels, or assets that a person hasn’t purchased,
you just don’t send the un-purchased material to them. A number of game demos
and casual games use this strategy.
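In code, split delivery is almost trivial, which is part of its appeal; there is nothing on the client to unlock because the material was never sent. A sketch with a hypothetical catalog of level assets:

```python
# Hypothetical server-side catalog of level assets.
CATALOG = {"level1": b"...", "level2": b"...", "level3": b"..."}

def build_download(purchased: set) -> dict:
    """Package only the assets the customer has paid for."""
    return {name: data for name, data in CATALOG.items() if name in purchased}

demo = build_download({"level1"})
assert set(demo) == {"level1"}
assert "level3" not in demo  # the un-purchased asset is simply absent
```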
Many DRM systems also include an online component that operates as discussed in the section addressing license keys (Chapter 5). Digital signature systems,
encryption, fingerprinting, and any other system whose security includes the notion
of unique identity need to be concerned about the registration problem (see the
section called “The Registration Problem and Identity Management Systems” in
Chapter 29).
The license policy is the most important part of any DRM system. It is the embodiment of a company’s business model. The protection mechanisms are the means
to enforce this policy. If the DRM tool doesn’t support the licensing policy that the
business needs, it doesn’t matter how effective the protection mechanisms are;
the security system will not be effective.
When I first started in the security field in the 1980s, one of the major topics
was computer security as embodied in “The Orange Book”1. This volume specified
a sequence of security grades for computer systems: D, C1, C2, B1, B2, B3, and
A1, with A1 being the highest. There were two security policy models included in “The
Orange Book”: DAC and MAC. Discretionary Access Control (DAC) is similar to
the project-oriented privilege structure that is familiar in UNIX, Windows, and other
commercial operating systems. Mandatory Access Control (MAC) is structured
like the classification system used in the military—Unclassified, Secret, Top Secret,
and so on. The DAC security policy was associated with the “lower” security levels
of C1 and C2, whereas MAC was associated with the “higher” security levels B1
through A1.
Even then, I was puzzled as to why MAC was somehow superior to DAC. These
are simply different security policies. Neither is inherently better than the other.
Many DRM products continue this flawed model of confusing security mechanism
and security policy.
Many security developers spend most of their energy on implementing security
mechanisms but give little thought to business models, revenue streams,
and usability. This hurts the overall effectiveness of many DRM products substantially, as the DRM purchasers are unable to alter the DRM tool’s license policy.
These awkward combinations of the digital rights (license policy) models with
enforcement mechanisms restrict game providers and publishers from running
their business as they see fit. They can offer only the services that their DRM vendor or internal developer chose to implement, and only in the way those services were implemented. Even worse, the DRM vendor’s revenue model can force the
media publisher to price goods and services in a way that may damage their success
in the market. This is a lost opportunity, as the game industry is in a period of intense
innovation in business models and pricing.
The system really should be reversed. The license policy design should be the
central feature of a DRM system with the appropriate enforcement tools incorporated as needed to support the publisher’s business model. The movie industry’s
use of DVD regional encoding has largely functioned as planned—a way for the
industry to control release schedules and pricing to support widely different markets
—even though it has been technically “broken” by hackers. It works because the
business model matched the security technique chosen.
The following are some of the options for controlling license policy:
Regions/Markets—Just as with DVDs, publishers can control the release and
use of different versions of a digital asset based on the market it is being used
for. Note: This does require that the digital media player be able to determine
which market the player is associated with.
User Types—It is possible to categorize users and unlock features based on
those categories. For PC games, it may make sense to distinguish between
Internet Café PCs and home PCs to determine which sorts of features and configurations the publisher wants to support in each. One could even extend this
to individual customers.
Platform—Certain platforms or digital asset players may be subject to restrictions. For example, an arcade game machine has a different business model tied
to metered play, whereas a console or PC often has a purchase-based model.
Installation—The Blu-ray system has the ability to “key out” certain players
because they have been compromised or associated with piracy. Digital media
can include information to forbid or allow specific players. Also, license systems
can restrict the number of reinstallations associated with a given platform, user,
or license key.
Tiered Distribution—One of the interesting capabilities of game handhelds is
the ability to allow players to share a single, licensed copy of a game in order to
play together. The subordinate players connect to the main player and download a special, limited version of the game. This could be extended for PC or
other console games as a marketing strategy or as part of a peer-to-peer distribution system.
Feature Versioning—Licensing systems are well suited to controlling which
feature sets are enabled for an application. Just as with many of the licensing
options, in some cases it is possible to implement controls directly at a specific
media player by only distributing the features that are needed to that platform.
This has the advantage of forcing hackers to somehow actually acquire the
media that they want, not just figure out how to unlock features that are already present.
Validation/Registration/Activation—License policies can control how the
digital asset will behave if validation, registration, or activation has not been
completed. This is often done to provide a gracefully degrading user experience
in case of non-malicious situations (such as Internet access being unavailable
for a period of time or, as often happens with new game launches, online
license servers being overloaded). Unfortunately, malicious users can sometimes exploit these modes of operation. They force the system into triggering
the alternate policy through tactics such as simply disconnecting the computer
from the Internet.
Live Connection—Certain games require a live connection, often for license
control purposes.
Timers, Clocks, and Counters—Key parts of many more sophisticated license
policies are timers, clocks, and counters that track the status of the various
policy restrictions included in this list. Game demos that are restricted to allow
a certain number of hours, days, or even minutes of play are the most
familiar example. The main challenge for a system that uses these changing
attributes is that timers, counters, and clocks are obvious targets for hackers.
Content—Digital assets, for convenience, may include material that is not always accessible to all customers. One could argue that unlocking levels through
game play is an example of this type of control, but game demos also restrict
content. Sometimes they do this by requiring additional assets to be distributed
and sometimes they unlock the restricted data after payment has been received.
There are numerous other license policy areas—parental controls (usage duration, age restrictions, and so on), national censorship requirements (restrictions on
violence, language, or sexual content), and payment information.
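To illustrate policy-driven design, a license policy can be expressed as data that the player evaluates against its context. The field names below are hypothetical; the point is that the policy, not the enforcement mechanism, is the part the publisher should be able to change:

```python
# A license policy expressed as data (field names are hypothetical).
def check_policy(policy: dict, context: dict) -> bool:
    if context["region"] not in policy["regions"]:
        return False        # wrong market for this release
    if context["platform"] not in policy["platforms"]:
        return False        # platform not licensed
    if context["now"] > policy["expires"]:
        return False        # demo timer / license window expired
    return True

policy = {"regions": [1, 2], "platforms": ["console", "pc"], "expires": 2_000_000_000}
assert check_policy(policy, {"region": 1, "platform": "pc", "now": 1_700_000_000})
assert not check_policy(policy, {"region": 3, "platform": "pc", "now": 1_700_000_000})
```

With this separation, adding a new market or a new platform tier is a data change rather than a new enforcement mechanism.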
The underlying concept of managing digital rights is not controversial. The
critical question and challenge for publishers and developers is to build tools that
effectively support their business strategy. The license policy should be the embodiment of the publisher’s business strategy.
1. DoD 5200.28-STD (1985), “Trusted Computer System Evaluation Criteria (TCSEC),”
Console Piracy, Used Games, and Pricing
Consoles can be pirated. There is a huge amount of complaining in the computer games industry about PC game piracy, but console games have always
been successfully pirated. Console game piracy has become increasingly
serious as game consoles have expanded into mainstream entertainment and new
markets. This can be clearly seen by Nintendo’s growing attention to piracy
problems1. Products like the R4 cartridge for the Nintendo DS handheld were
tolerated for a long time, but these products are now the subject of lawsuits and
other restrictive efforts worldwide 2,3. Nintendo is far from alone. The Xbox 360 has
had a serious problem with its DVD player since 2006 4 and the Blu-ray disks used
in the Sony PlayStation 3 are also being exploited5 (although, so far, not for games).
The very nature of consoles actually facilitates some of their piracy problems. The
ease of use that makes them popular also makes them a target. Console users
simply insert a game disk or cartridge and play. This means that if a hacker can
convince the console that a disk is legitimate, the game will be permitted to run.
There are several methods that can be used to deceive the console.
The first attack involves duplicating the game storage media. The R4 cartridge
does this by emulating the physical and electrical interface between the game
cartridge and the Nintendo DS handheld. However, instead of an official game cartridge, the R4 uses standard Flash memory cards (the same ones used for cameras
and music players) that can be updated with whatever games the pirate desires.
Balancing ease of use with security is an interesting design challenge.
Uniqueness is a very powerful security tool, but it is not always easy to incorporate
into a system. For PC games, customers are used to typing in a unique license key
when they install the game. Console game players simply want to load the game
media and play. In order to add uniqueness to a console game without disrupting
the play experience, some sort of unique, digital license information would need to
be included on each game disk or cartridge. However, there is a cost for adding
uniqueness into a production line, especially for disks like DVDs and Blu-ray.
A cartridge system or Flash memory device could be customized more easily to
support a unique identity.
The next form of attack is to convince the media player that the counterfeit
media is legitimate. This is what happens with the Xbox 360 hack: the compromised
DVD drive reports disks labeled as “Rewriteable” DVDs to the console as
“Xbox360game” DVDs. Microsoft has made some headway against
this problem by preventing the DVD firmware from being updated. However, conceptually, a hacker could always replace a media player with a computer or device
that fully emulates the media player interface. The simplest and cheapest way to do
this would be to find an alternative DVD player with a bit more EEPROM and
RAM (see “Secure Loader and Blind Authentication” in Chapter 14) that otherwise
uses the same interface protocols as the standard drive. All DVD drives use fairly
similar protocols. A hacker would need to analyze the official Xbox 360 DVD drive
interface to determine if anything was non-standard (it is likely that the Xbox 360
drive incorporates a couple of additional commands to distinguish it from a standard drive, although some standard commands may be altered in their format) and
then emulate that interface with an alternative DVD drive.
Emulators are a particularly challenging problem. Instead of attacking the
console, an emulator duplicates the hardware and other features of a console in
software. For a long time, emulators were not a particularly serious concern for
consoles. Moore’s Law (the number of transistors on an integrated circuit has
increased exponentially, doubling approximately every two years), which has been
demonstrated by the huge acceleration in hardware capabilities in recent years,
means that console games can be emulated fairly quickly: PS1 console games are
playable on Sony’s PSP handheld just a decade later6. Even worse, general-purpose PCs are getting so powerful and inexpensive that it is likely they will be
able to emulate new game consoles even more rapidly. The cost difference between
a moderately powerful PC and a console has gone from substantially more than
$2,000 to several hundred dollars or less.
Finally, hackers can target the console processor and operating system directly.
If hackers can change the core operations of the console itself, they can bypass all of
the system’s security checks. This can be done by attacking the console’s firmware
through traditional software weaknesses such as buffer overflows or through brute
force replacement of the console’s operating system via modchips7 or other forms
of hardware hacking.
Chapter 7 Console Piracy, Used Games, and Pricing
Hardware hacking is difficult. It often requires soldering or replacing memory
chips or even adding new processors and circuit boards to an existing console. The
simplest attack is to replace the ROM memory that stores the console’s software
with your own chip. Other attacks take advantage of hardware debug features,
unused connectors, and interconnects that can be used to alter the operation of the console.
Console developers are aware of these attacks. Once again, the challenge is to
keep the cost of the console at a minimum while increasing the efforts required for
attackers to be successful. Early versions of most consoles (and other hardware)
contain more discrete components and more powerful, general-purpose processors
or other programmable components that can be altered with software. This is done
to get the product out on the market more quickly, but also to accommodate the
inevitable bugs and problems that any new system faces. The flexibility that is
needed for these early versions of a console tends to make it more vulnerable to
attack. Later, once the design has stabilized, components can be optimized and
custom circuits (ASICs) can replace the general-purpose processors to reduce costs.
Hardware hacking requires the hacker to have some real skill to carry
out the attack. First, someone has to do some reasonably serious reverse engineering to find a weakness in the system and implement the attack. Second, the attack
needs to be “productized” in a way that is reasonably simple to implement by less
skilled individuals (nothing more than opening the console and soldering or replacing a chip). Third, the commercial pirate needs some sort of facility to produce any
necessary equipment (memory chips, circuit boards, and so on) and numerous
local business partners to implement the attack for customers.
Software attacks are much easier. The goal is the same as with hardware attacks—somehow bypass or alter the console’s operating system. However, instead
of attacking the console, the hackers look for weaknesses in the console or game
software that kick the console into a state where the hackers can run whatever program they want.
Because consoles usually don’t provide a command-line interface for users,
hackers have to find their way in through parts of the system that are modifiable:
the games themselves, game save files, other modifiable configuration files, and, increasingly, user-created game content. Consoles are fairly special-purpose systems:
They run games, they save games, they may play games online, and sometimes they
handle other media (like playing DVDs or showing pictures). This limited set of
applications makes it easier for the console developer to lock down the hardware
platform against attack than it is for a PC game developer to protect their game in
a general-purpose computer.
This does not mean that consoles are immune to attack.
Ideally, a console developer would like to consider the entire console as a secure
system and believe that no one can get inside the box. In reality, of course, hackers
and pirates are willing to crack open the console, even if it will void their warranty.
This presents a difficult security challenge. Your adversary can test, swap, probe, and
otherwise alter and abuse anything and everything that is in your machine until they’ve
beaten your security. (See M. Steil's report 8 for a fascinating, detailed discussion
about reverse engineering the Xbox.) They can use part numbers to find technical
specifications for your components and, fairly rapidly, completely reverse-engineer
your design.
Attacking information within a chip is still fairly difficult. This is the premise of
products like the Trusted Platform Module (TPM). The notion of “secure bootstrapping” starts with a very small amount of protected memory inside the TPM that is
used to get things started. Because this type of memory is very limited and expensive,
it is used to authenticate conventional, unprotected memory, which is then used to
load the remainder of the operating system. Once the operating system is loaded,
regular applications are loaded and run.
Another important consideration for a secure bootstrapping system is to make
sure that it cannot be forced to revert to a previous version of itself. Sony’s PSP
continues to have problems with downgraders that force the platform to an earlier,
insecure version of the operating system. This is because the previous versions of the
operating system also pass the secure bootstrapping integrity checks. The way to
prevent downgrading problems is that the core, trusted portion of the system must
include a version counter in protected memory to prevent the operating system from
being rolled back.
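The anti-rollback idea can be sketched as follows (image contents, hashes, and version numbers are all hypothetical): the trusted core verifies the OS image against known-good hashes, but also refuses any genuine-but-old version below the protected monotonic counter:

```python
import hashlib

# Minimum acceptable OS version, held in protected, monotonic memory.
MIN_VERSION = 3

# Table of approved OS-image hashes (hypothetical images).
KNOWN_IMAGES = {
    2: hashlib.sha256(b"os-v2").hexdigest(),
    3: hashlib.sha256(b"os-v3").hexdigest(),
}

def boot(image: bytes) -> bool:
    digest = hashlib.sha256(image).hexdigest()
    for version, good in KNOWN_IMAGES.items():
        if digest == good:
            # Integrity check passed; now enforce anti-rollback.
            return version >= MIN_VERSION
    return False  # tampered or unknown image

assert boot(b"os-v3")       # current OS boots
assert not boot(b"os-v2")   # genuine but old image is refused (downgrader)
assert not boot(b"evil")    # tampered image is refused
```

Without the `MIN_VERSION` check, the second case would boot successfully, which is exactly the downgrader problem the PSP suffered from.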
Interestingly, while a TPM in a console does work for secure bootstrapping to ensure the integrity of the platform’s operating system, it does nothing to stop piracy on
a PC because, as noted elsewhere, the entire game still needs to be available to the
unprotected computer to execute.
For games to execute most quickly, they are typically given full hardware
privileges by the game console. The standard way that a console game works is as
follows: The console is started and the game is loaded; the game then retrieves and loads a previously saved game so that the player can resume progress from an earlier session,
and runs until the player quits or the console is turned off. If there is a problem
within the game that causes it to crash, a hacker may be able to use that crash
(or even cause one) to knock the console into a non-standard state and take over
the console. When an application crashes, it is stopped in a disorderly manner that
can sometimes be used to run a different program with the same privileges and abilities as the crashed application. Game developers test the games pretty thoroughly.
Sometimes, however, they don’t thoroughly test the process for loading saved
games or other external information to ensure that they are not corrupted in some
manner. This is where problems have occurred.
The role-playing game The Legend of Zelda: Twilight Princess, for Nintendo’s
Wii, allows players to name their horses (of course). The game crashes when the
horse is given an exceptionally long name and the players later use the horse. This
allows hackers to run code of their choice 9.
Sony’s PSP has similar Game Save problems associated with the games Grand
Theft Auto: Liberty City Stories and Lumines 10. The Game Save problem is a bit
tricky for the console manufacturer to protect against. The console operating system can restrict the interface to write and read saved games and, hopefully, seize
control back from an application when it crashes.
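The underlying defense is simply to treat the save file as untrusted input and validate every field before use. A sketch, assuming a hypothetical NUL-terminated name field and length limit:

```python
# Hypothetical field limit for the save format.
MAX_NAME = 16

def load_horse_name(save: bytes) -> str:
    """Parse an untrusted save file defensively."""
    name = save.split(b"\x00", 1)[0]   # name field is NUL-terminated
    if len(name) > MAX_NAME:
        raise ValueError("horse name too long; save file rejected")
    return name.decode("ascii")

assert load_horse_name(b"Epona\x00rest-of-save") == "Epona"

rejected = False
try:  # an over-long name (the Twilight Princess-style attack) is refused
    load_horse_name(b"A" * 200 + b"\x00")
except ValueError:
    rejected = True
assert rejected
```

The exploited games instead copied the name into a fixed-size buffer without this check, handing control of the crash to the attacker.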
In addition, the PSP had a problem with its image viewer. Certain corrupted
TIFF files (a standard image format) can cause a crash and permit hackers to execute whatever code they desire11. Apple’s iPhone and iTouch are vulnerable to a
similar attack12. The TIFF vulnerability is almost certainly the result of using a standard, probably open source, image library with a known flaw. In both of these
cases, the console operating system should not blindly trust either the image viewer
or the game application to behave well.
Game Save files are tempting targets for direct attack. Usually, the saved files
can be removed from the console via a memory cartridge or SD disk and then
modified by a motivated hacker. Today, once one of these hacks has been created,
hackers can install the altered save files on any console by using the same type of
standard storage cartridges or media. One potential solution is for each console to
digitally sign all Game Save files that it creates and bind them to itself so that they
can’t be used on another console. This would also complicate the distribution of
these attacks by hackers, because the attack would need to be replicated by hand on
each target system. In the Zelda scenario discussed, hackers would need to play
through the game to the point where they found the horse and name it in order to
implement the “twilight hack” on each specific console. This would make the attack
“too hard” for most lazy players who might otherwise take advantage of the hack.
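Binding saves to a single console can be sketched with a keyed digest. The per-console key below is hypothetical; a real console would hold it in protected hardware, and a true digital signature could be used in place of the HMAC shown here:

```python
import hashlib
import hmac

# Hypothetical per-console secret, held in protected hardware.
CONSOLE_KEY = b"per-console-secret"

def sign_save(data: bytes) -> bytes:
    """Append a keyed digest binding the save to this console."""
    return data + hmac.new(CONSOLE_KEY, data, hashlib.sha256).digest()

def load_save(blob: bytes) -> bytes:
    data, tag = blob[:-32], blob[-32:]
    expected = hmac.new(CONSOLE_KEY, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("save was not created on this console")
    return data

blob = sign_save(b"horse=Epona")
assert load_save(blob) == b"horse=Epona"

caught = False
try:  # a hand-edited save (one bit flipped) fails verification
    load_save(blob[:-1] + bytes([blob[-1] ^ 1]))
except ValueError:
    caught = True
assert caught
```

Because each console has a different key, a malicious save built on one machine fails verification on every other, forcing the attack to be re-created by hand on each target.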
The clearest way to track the progress of software assaults on a game console is
to monitor the “homebrew scene.” Homebrew developers basically spend their
time figuring out ways to take control of game consoles so that they can run their
own applications (or pirated games) on these powerful, but inexpensive, machines.
One of the ways to avoid tempting homebrew developers is to offer a safe, but
open, interface. Sony did this with its PlayStation 2 platform and allowed developers to run the Linux operating system on the console. Unfortunately, it is not
possible to determine if this had any effect on PlayStation 2 piracy.
Interestingly, DRM solutions, which do not work so well on a PC, are much
more effective in the controlled hardware environment of a console.
Used game sales, like pirated games, don’t typically add any revenue to game developers or publishers. Yet, as seen in the financial reports of GameStop, a
major U.S. game retailer, used products (both hardware and software) generated $1.3
billion in revenues in 2007 and are responsible for over 48 percent of the company’s
profit13. As mentioned earlier, the Entertainment Software Association’s total
piracy estimate for the games industry is $3 billion and GameStop is just one retailer among many. Although it is unclear how much of that $3 billion in piracy
would convert to legitimate sales even if piracy could be stopped entirely, used
game customers are spending money, in retail, to purchase these games.
“We have the largest selection (approximately 3,000 [distinct products]) of
used video game titles which have an average price of $16 as compared to an
average price of $42 for new video game titles and which generate significantly
higher gross margins than new video game products.”
— GameStop 2007 Annual Report
Some industry professionals have argued that measures need to be taken to
stop used game sales and others have stated that these sales should simply be ignored. I argue that it is probably worth the effort for developers and publishers to
try to capture some of the used game revenue without restricting the ability of retailers, or individuals, to resell games.
One of the most humorous parts of this discussion is that, although PC games
are the primary alleged targets of piracy, the rise of console games has been the key
to the growth of the used game business. Typically, retailers will not accept returns
or exchanges of PC games. They are legitimately concerned that the customer took
the disk home and copied it. In contrast, console disks and cartridges are perfect
candidates for resale: They are compact, fairly easy to inspect for quality, unlikely to
be damaged, sell well, and, as discussed here, are more profitable than new games.
Chapter 7 Console Piracy, Used Games, and Pricing
“Increase Sales of Used Video Game Products. We will continue to expand the
selection and availability of used video game products in our stores. Our strategy consists of increasing consumer awareness of the benefits of trading in and
buying used video game products at our stores through increased marketing
activities. We expect the continued growth of new platform technology to drive
trade-ins of previous generation products, as well as next generation platforms, thereby expanding the supply of used video game products.”
—GameStop 2007 Annual Report
This is not a critique of GameStop, which is just a publicly traded company
whose excellent annual reports clearly demonstrate the economics of used console
game sales. Blockbuster, Circuit City 14, Walmart, and the other major game retailers
probably have similar results.
Almost half of GameStop’s profits come from used games (around 48 percent).
This has to affect their business strategy. If they don’t stock that many copies of a
new title, customers who want a game that is not available may walk out with
another used game that is available immediately. Customers also know that they
can wait a week or a couple of months and find most games at a substantial
discount—$16 per used game versus $42 per new game, on average.
There are ways to earn more revenues from customers without trying to
directly stop used game sales. Downloadable content is certainly one way to gain
revenues. It doesn’t necessarily deter customers from buying a used game, but it
may discourage them from selling the game. Some console game publishers are
experimenting with license keys for console games (something PC games have done
for a long time) to link a single game customer to a game disk. One twist on using
downloadable content is to release multiple, small downloadable items, but only
for a limited time. Some MMOs do this for holiday items—virtual costumes for
Halloween, flowers for Valentine’s Day, and so on. Although a valentine may be
inappropriate for a World War 2 game, limited edition weapons, uniforms, and
maps are certainly plausible—with the items available only for a week or month for
currently registered and active players.
The other anti-piracy strategies discussed for PC games, such as promotions,
collectible items, and special editions, are all equally applicable to fighting the used
game threat.
Price is a great anti-piracy strategy. Low prices and convenience work as a way to
fight piracy. Apple is now the leading music retailer in the US thanks
to its low prices and convenience15. iTunes has been growing steadily, even though
its FairPlay DRM system has been cracked repeatedly16.
The computer game industry regularly touts itself as a competitor to the movie
industry. However, in 2006, there were over 1.1 billion DVDs sold for $16.5
billion17 compared to 240.7 million games sold for $7.4 billion18.
DVD players and video game consoles have been roughly comparable in price
for quite a number of years. The big difference is that a new DVD typically costs less
than $20, whereas a new game is priced at closer to $50. Part of the reason for this
pricing disparity is history. PC games started off at $50 when they were the only
form of entertainment on a computer in the late 1970s and 1980s. At that time,
there was no Internet and the number of PC owners was quite small. However, the
price really hasn’t changed since that time even though there is a vast range of free
entertainment available on PCs (including other games) and phenomenal growth
in the PC market as a whole. Some people argue that this is because a game can provide 30 or 40 hours of entertainment while a film only lasts 2 or 3 hours. However,
many players only wind up playing games for a couple of hours and, if we used this
metric for books, a novel would be much more expensive than a film. In fact, most
recreational hardback books or quality paperback books are priced very close to the
$20 price of a standard DVD.
There is a substantial psychological difference between a $20 and a $50 purchase. I’ll buy a $20 book or a DVD on an impulse, but I pretty much always think
about what $50 games I’ll purchase. The rise of “free-to-play” online games has
shown that many consumers are quite sensitive to price. The popularity of games
like Nexon’s MapleStory and Jagex’s RuneScape has shown that by lowering the
barrier to entry for your customers, you can substantially increase your audience
while still earning very healthy revenues.
Lower prices make piracy and used games much less appealing. Although
GameStop and its fellow game retailers can earn substantial margins when the price
difference between a new game and a used game is $30, the retail appeal of used
games is much less if the new game price is only $20.
One of the challenges of experimenting with pricing for console games is the
substantial royalties that game publishers have to pay to the console manufacturers.
It should give everyone pause that id Software has threatened to limit the features
it is including in its Xbox 360 version of the game Rage because of the per-disk royalties that it has to pay to Microsoft20. Rage is supposed to be large enough that it
would apparently require an additional physical DVD to be played on the Xbox 360
instead of fitting on a single Blu-ray disk for the PS3 console.
Also, retail games compete with a wide range of free and inexpensive entertainment options on the PC—DVDs, video-on-demand, Netflix, endless free games
online, cheap MMOs, YouTube, and so on.
Another possibility is episodic gaming. Episodic games have not taken off so
far, although it is intriguing to consider the idea of breaking a game into an initial
release (sold for $10 or $20) and then having the remaining levels purchased as
downloadable content. Potentially, this could be done for both PC games and console games.
The high price of games creates an interesting problem beyond piracy or IT security:
physical security. Because games are a physically small, high-value item, they are targets for theft. For general retail, theft is approximately 1 or 2 percent of sales, but for
games, it is as high as 5 percent. The situation pushes retailers to protect games behind glass cases and forces employees to retrieve the boxes for customers (glass
cases can cut sales by 35 to 45 percent, but reduce theft by 90 percent)19.
Unfortunately, putting games behind glass substantially reduces impulse sales.
Impulse sales are defined as sales where the customer did not visit the merchant
with the intent to buy the specific item. Twenty-six percent of clothing sales come
from impulse purchases, but only 6 to 8 percent of game purchases do.
Game companies are experimenting with alternative packaging, such as bundling
games with toys, and other packaging and sales strategies that would reduce the
need for physical protection.
Of course, lowering prices and improving retail margins directly could help.
A Technical Alternative
Another strategy, better suited for PC games than consoles, would be to move product activation from the home to the cashier. Pre-paid cards are routinely activated
today during the checkout process. The cashier scans a barcode on the pre-paid card
when payment is accepted. This information is sent to the vendor, who activates the card.
This same activation process could work for computer games.
Game publishers could easily print the game’s license key on its box. When the
customer checks out, the license key is scanned and the game publisher’s license
server then activates the game.
This activation process would allow the game to be located in the store with substantially lower risk because possession of the game disk would not be sufficient to
activate the game, even if the thief knows the license key. The cashier’s involvement
prevents casual theft.
The players then enter the license key, as they do today, when they install the
game and registration proceeds.
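A sketch of that checkout flow, with a hypothetical server API (the class and method names are invented, not any publisher's actual system): license keys ship inert, the point-of-sale scan activates a key, and installation redeems it exactly once.

```python
# Hypothetical point-of-sale activation flow: a license key is useless
# until the cashier's scan flips it to ACTIVATED on the publisher's
# license server, so a stolen disk cannot be installed.

INACTIVE, ACTIVATED, REDEEMED = "inactive", "activated", "redeemed"

class LicenseServer:
    def __init__(self, keys):
        # Keys are registered as inactive when the boxes are manufactured.
        self.state = {k: INACTIVE for k in keys}

    def pos_scan(self, key):
        # Called by the retailer's register when payment is accepted.
        if self.state.get(key) == INACTIVE:
            self.state[key] = ACTIVATED
            return True
        return False                  # unknown key, or already sold

    def install(self, key):
        # Called when the player enters the key during installation.
        if self.state.get(key) == ACTIVATED:
            self.state[key] = REDEEMED
            return True
        return False                  # never scanned (stolen) or reused

server = LicenseServer(["AAAA-1111", "BBBB-2222"])
assert not server.install("AAAA-1111")   # stolen disk: key never activated
assert server.pos_scan("AAAA-1111")      # cashier scans at checkout
assert server.install("AAAA-1111")       # now installation succeeds
assert not server.install("AAAA-1111")   # a key cannot be redeemed twice
```

A real deployment would also need to handle returns, offline stores, and key revocation, but the state machine above captures why mere possession of the disk and printed key is no longer enough.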
Another alternative would be to allow players to activate their games with an affinity membership card at the time of purchase instead of via online registration at their
computers. It could even be possible for license registration information to be directly
transferred from the merchant to the game publisher.
1. C. Dring (2008), “JAPAN: Nintendo Attacks DS Piracy,”
2. B. Ashcraft (2008), “R4 Price Going Up in Akihabara,”
3. Nintendo (2008), “Court’s Judgment of Illegality of Device, Such as R4, etc.,”
4. S. Carless (2006), “Exclusive: Xbox 360 Piracy Spreading Fast in China,”
5. H. Goldstein (2006), “Blu-Ray Already Ripped on PS3,”
6. K. Orland (2006), “Hack: Play Ripped PS1 Games on PSP [update 1],”
7. Wikipedia (2008), “Modchip,”
8. M. Steil (2005), “17 Mistakes Microsoft Made in the Xbox Security System,”
9. WiiBrew (2008), “Twilight Hack,”
10. A. Linde (2007), “PSP Firmware Exploit Found in Lumines; Sales Jump 5,900% on Amazon,”
11. Secunia (2005), “Sony PSP Photo Viewer TIFF File Handling Buffer Overflow,”
12. R. Block (2007), “iPhone and iPod Touch v1.1.1 Full Jailbreak Tested, Confirmed!,”
13. GameStop (2007), “GameStop Annual Report 2007,”
14. A. Webster (2008), “Circuit City to Expand Used Games Plans,”
15. Apple (2008), “iTunes Store Top Music Retailer in the US,”
16. E. Kirk, “App Store’s FairPlay DRM Hacked On Super Monkey Ball,”
17. S. Zeidler (2008), “U.S. DVD Unit Sales Dropped in ’07,”
18. ESA (2007), “Essential Facts about the Computer and Video Game Industry,”
19. T. Wolverton (2008), “Game Industry Tries to Break Through Glass Wall,”
20. N. Breckon and C. Faylor (2008), “Rage Will Look Worse on 360 Due to Compression; Doom 4 and
Rage Not Likely for Digital Distribution,”
Server Piracy
One of the reasons that companies started to create online game services was
that they were a good way to fight traditional game piracy. After all, if a
customer has to connect to your server, surely there is no way that she can
pirate your game. Although it may be true that the game is truly on the server in a
text MUD, for many graphical online games, most of the art assets, level design, and
even game logic resides on the client application. And, really, the valuable part of
the game is the art and game design. Because so much of the game is often on the
client-side, the limited amount of server logic acts more as an online game key. And,
as discussed in earlier chapters, there are many ways to attack license key systems.
Server piracy has been around for a long time. Massively multi-player games are
particularly vulnerable. Early MMOs like Ultima Online have been targeted1, as
well as new games like World of Warcraft2. Even smaller games, like Star Wars:
Galaxies, have been victims3. MMOs are not the only targets. The unauthorized
“BnetD” server emulates Blizzard’s online service for multi-player gaming for
Diablo II and Warcraft III 4.
It is tempting to categorize pirating single-player downloaded games in Flash,
Shockwave, and Java as examples of server piracy. However, in each of these cases,
the game is distributed in its entirety to the players every time they visit the web
page where the game is located. The absurd confusion about what is actually occurring is obvious when some online casinos call these types of games “no download”
games. Of course the game is downloaded; it is just automatically downloaded
every time the web page is viewed. For these types of downloaded games, the
problem is much more akin to the problem faced when fighting piracy in standard
PC games.
Chapter 8 Server Piracy
Sometimes, these pirate servers are just run for fun. Sometimes, players want to
change the game to suit their own desires, and sometimes they just want control:
“The RunUO Team has a plethora of things for you to choose from. We deliver
an end-to-end solution for your Ultima Online needs. We give you everything
from the server software to a client we have written from scratch. If you prefer
the EA games client, we even have our very own UO Assist program designed
to make game play much easier, called Razor. Below you will find a list of our
products and everything there is to know about them.”
—RunUO Products Page
When online games became more popular and profitable, some pirates moved
to run these unauthorized game services for money. There have been pirate servers
all over the world. China has been a particular target: A pirated version of Legend
of Mir 3 earned its operators 500,000 RMB (around $64,000, a fair amount of
money in China) by offering lifetime subscriptions for 300 RMB (around the same
amount a legitimate player might pay per month). The publisher, Guangzhou
Optisp Company, claimed monthly losses of 10 million Yuan ($1.28 Million)5.
They are not the only company that has faced this problem. A pirate server for
Ragnarok Online, published in China by Shanda Interactive, was shut down with
260,000 accounts and could support 3,000 peak concurrent users6.
China is not alone; there have been pirate server operations shut down in
Europe7 and Russia8; the US is not immune either. In 2003, the source code for one
of the first globally popular MMOs, Lineage II by NCsoft, was compromised and
found its way to a server in China. It was purchased by a Texan in 2004 whose
California business partner set up an illegal Lineage II server in the US. They had
50,000 users in 2006, and NCsoft claimed potential losses of $750,000 per month.
The FBI shut this service down in late 2006. If the site operator is found guilty, he
faces up to five years in jail and $250,000 in fines9.
Most MMOs have relatively simple game mechanics, especially the portion that
is implemented on the server. These mechanics are often substantially implemented
by code on the client computer to minimize bandwidth and processing on the
game operator’s servers. This leaves the pirate with a quite tractable task of reverse-engineering the simple part of the game—its server game logic—or coming up
with plausible alternative server code. This actually highlights an interesting irony
about online games. One of the motivations for online games, particularly in Asia,
is to fight piracy. However, the move towards simpler games and general stagnation
of game design has made it easier to create a knock-off game and steal most of an
existing game’s assets.
One could really argue that what is occurring is not server piracy, per se, but
service piracy. There are a number of ways for a pirate to exploit an online game:
Stolen Server Code—Someone has an unauthorized copy of the source code
for the server (usually with all of the client art and assets that aren’t protected,
in practice). This can occur due to an accidental disclosure (leaving the code on
an unprotected server, as with the Lineage II case), theft from the developer by
an outsider, or a malicious employee.
Reverse-Engineered Server Code—Someone uses the actual game client to
reverse-engineer the game server and communications protocols. Reverse engineering is often legal (DMCA raises some real questions about this in the US)
and very difficult to stop in practice, as seen with “BnetD.” Blizzard won the
case to stop the distribution of this unauthorized version of the Battle.Net
server code in the US, but the code is still widely available online.
Stolen Art, Music, and Animation (and Plausible Client and Server)—A
player with a legitimate copy of the game extracts the game’s art assets and uses
them with their independently created game client and server. This is usually
trivial to implement, because any legitimate player can collect the game’s entire
creative content by simply playing through the legitimate game. This is certainly a EULA and copyright violation. However, the legal case may be very
interesting if you require the player to have a legitimate copy of the original
game client, especially if the original client is distributed for free. This is not the
kind of “interesting” situation that businesses like to deal with.
Cloning: Copied Art and Emulated Server—Building game art and animation
and game play in the style of an existing game. This is a pure copyright violation. The only way to stop this is via the courts, which may not be effective in
many of the jurisdictions where this tactic is likely to be pursued.
There are several trends that are going to make the server piracy problem
worse, not better. China has become very aggressive in prosecuting online game
pirates, especially as its domestic online game industry has grown. The rapid globalization of online gaming will likely lead to pirates moving their game servers to
countries with immature legal systems. These countries will be happy to host
services for online games for a license fee or the promise of tax revenues. The
continued success of online gambling in the Caribbean, even in the face of severe
sanctions in the US, is a clear indicator of the challenge of jurisdiction for online services.
Building an online game is gradually getting easier. Services such as Linden
Lab’s Second Life, Makena Technologies’, and Areae’s Metaplace are
designed to simplify building online games. There are also open source projects like
The Croquet Consortium’s Croquet and Sun’s Project Darkstar, among others.
Unfortunately, these same tools can be used to accelerate server piracy projects.
Server piracy is going to get worse as these toolsets get better.
Different types of piracy require different types of solutions. Typically, games are low-cost, mass-market items. Thus, there are severe constraints on how much a publisher
can spend on security on a per-unit basis. This is not the only business scenario:
Some high-end analytic software packages for niche markets are protected by using
hardware tokens for license management. In the game industry, licensing large online games, like MMOs, exposes some unique security problems involving licensees
and employees of licensees.
Publishers of MMOs market their game to the world via an assortment of licenses
into specific markets. The most familiar example in the US is Blizzard’s licensing of
World of Warcraft to The9 in China. Many Korean and Chinese online game developers are also pursuing an aggressive international licensing strategy. The licensee operator handles localization (customization by language and market) and is much
more familiar with important local issues like marketing, payment processing, and
usage controls in accordance with national policies. For the licensor, the money is
often quite good with substantial upfront fees and regular royalties as long as the
game is operated.
There can be problems with these relationships. The licensee has to have access
to the game and there may be difficulties in the relationship between the companies.
Often, the game itself is provided as an executable program without source code to
keep the licensee from stealing the game.
Although this tactic is reasonably effective, I believe that game licensors should
consider shipping their game as an integrated hardware-software appliance. This increases the effort for a licensee to attack the game and makes supporting the game
easier (since the hardware and software are controlled solely by the game developer). Also, game appliances can be leased and it may be easier to “unplug” a difficult licensee than it is with a pure software delivery. License fees can be scaled and
controlled on a per-appliance basis, something that is much more difficult to do with
a software-only delivery. A game appliance is easier to visit and audit and, if properly
designed, there is less risk of a licensee employee extracting the game executable.
Increasingly, casual MMOs are moving to a peer-to-peer type architecture.
They do this by building games that don’t require a very large number of players to
interact together simultaneously, such as 2 to 16 players. This will make locating the
pirate central servers more difficult, as the size of the central server infrastructure
is smaller and can more easily be relocated. Also, casual games can be emulated
fairly easily and have their art assets looted to incorporate into an independent
game. Many of these casual games are built using tools like Adobe’s Flash and
Shockwave, which have long been targets of reverse engineering to extract application code and steal assets.
The growth of debit, anonymous, and casual online payment systems makes it
much less difficult for server pirates to monetize their service. Ironically, the move
to “free-to-play” business models makes this less risky, as there is a smaller outlay to
participate in the pirated service. Customers are less concerned that their payment
account will be looted by the merchant, as this is exactly the type of small-scale,
high-risk transaction that these payment systems were designed to secure.
One of my personal favorite anti-server pirate measures is a strong economic or
status system, be it a rich in-game system, such as with CCP Games’ EVE Online,
the entire Free-to-Play business model, badges and achievements ranking systems, or
even gold farming. Although the software for these games and social systems may
be easy to reproduce, the scale and vibrancy of the community and economic system
makes such piracy largely meaningless. In these cases, server piracy is reduced by
the presence of a big, visible, and fun-to-use status system with economic rewards.
The most obvious “proof” of the effectiveness of the power of a strong economic system is the irrelevance of piracy in the world of online casinos or, less
extravagantly, skill games and casual game portals. In most of these cases, the actual
games and game software are easy to steal or duplicate. It does not matter; the service is its economic or status system, not the included game or games. I will discuss this
strategy at greater length in the “Rich Interaction System” section in Chapter 9.
As seen from the previous discussion, server code can be compromised in a number of ways. If the server code or a game server is compromised, it is important that
the game service be able to recover. Online games are increasingly moving to a
peer-to-peer (P2P) architecture, particularly in Asia, as the cost of running large,
centralized server farms is growing and games can have hundreds of thousands of
concurrent users. A P2P infrastructure is especially common for free-to-play games
or other non-subscription games that don’t have a reliable way to recover their
costs of operations for all players.
Most of the security focus for online games is on authenticating the player, and
sometimes the client software, to a server. There is also substantial benefit to be had
from authenticating the server to the client or binding clients together with the
server in a peer-to-peer system. This technique is useful beyond the strict realm of
server piracy and should be part of any online service.
The power of authenticating the server to a client is that it more tightly binds
the two together into an integrated game service. This is exactly what a good license
server strives to do when it verifies licenses for a conventional software application.
With good key management, cryptography can be a useful identity tool. Most
of the time, it focuses on identifying the client to a server, but the techniques can
work both ways. For example, the game client software can include the game server
public key in its code to be used as part of the login process. This can be used to
verify the game server to the client.
One way to use the game server public key is to modify a standard challenge/
response login protocol. In the standard protocol, the server sends a random
challenge phrase to the client. The client uses the random challenge phrase in conjunction with its secret key (or user password) to send a response back to the server
for validation:
ServerRandomPhrase; // generated and sent to the client
Response = SecurityFunction(ServerRandomPhrase,ClientSecret);
// created on the client and sent to the server
To authenticate the server to the client, the server uses its secret, private key to
encrypt the server’s random phrase. In addition, the server appends a fixed authentication word to the random phrase and encrypts both together. The game client
receives and validates this expanded challenge message and, if the new challenge
phrase passes, continues the login process:
ServerRandomPhrase; // generated by the server as before
ChallengeMessage = ServerRandomPhrase,AuthenticationWord;
// The authentication word should be a fixed field, or a date and time
// combined with a fixed field, or some other data that both the client
// and server can determine independently
EncryptedChallenge = Encrypt(ServerPrivateKey,ChallengeMessage);
// the server uses its private key to encrypt the challenge message
// and sends it to the client
AllegedChallengeMessage = Decrypt(ServerPublicKey,EncryptedChallenge);
// the client, which knows the public key, uses it to decrypt the
// challenge message
Validate(AllegedChallengeMessage == ServerRandomPhrase,AuthenticationWord);
// this should pass only if the server private key was used to
// generate the message. The server private key should only be known
// by the legitimate server.
Response = SecurityFunction(ServerRandomPhrase,ClientSecret);
// created on the client and sent to the server
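The exchange above can be sketched in runnable form. To keep the example self-contained it uses a deliberately tiny, textbook RSA key (wildly insecure; a real system needs a vetted library, large keys, and proper padding) and an HMAC as the client's SecurityFunction. All constants here are invented for illustration.

```python
import hashlib
import hmac
import os

# Toy, textbook RSA -- insecure, for protocol illustration only.
P, Q = 1000003, 1000033
N = P * Q
E = 65537                             # public exponent, embedded in client
D = pow(E, -1, (P - 1) * (Q - 1))     # server private exponent (Py 3.8+)

AUTH_WORD = b"GAMESERVER-V1"          # fixed field both sides know
CLIENT_SECRET = b"per-user secret"    # stands in for the user's password

def h(data: bytes) -> int:
    # Hash of the message, reduced into the RSA modulus range.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

# Server: generate a random challenge, append the auth word, sign with D.
challenge = os.urandom(16)
signature = pow(h(challenge + AUTH_WORD), D, N)

# Client: verify using the public key (E, N) shipped inside the game
# client before continuing the login.
assert pow(signature, E, N) == h(challenge + AUTH_WORD), "not the real server"

# Only after the server checks out does the client compute its response.
response = hmac.new(CLIENT_SECRET, challenge, hashlib.sha256).hexdigest()
print("server verified; client response sent")
```

The design point survives the toy crypto: because only the legitimate server holds D, a pirate server cannot produce a challenge the unmodified client will accept.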
There are other protocols and methods that can achieve the goal of validating
the game server to the client as well as the client to the server. Most take advantage
of public key cryptography. It is possible for motivated hackers to replace the login
code or the public key with their own, but this requires more work on the hacker’s
part in order to connect to a pirate server. And, hackers are lazy, just like regular players.
You can fight the replacement of the server public key by using it in a number
of places in the game client. It can be simply checked in several places or it can be
used to encrypt client game constants and data. An example would be to use the
server public key to sign (and maybe even encrypt) data updates from the server.
Then, when the client wishes to load or use data from the server, it must use the
server public key to recover the data.
It is fairly common for online games to encrypt the link between a client and server
or between players. In some cases this is done to prevent disclosure of the data being
exchanged. Occasionally, there is a legitimate threat that a third party may intercept
the data. It is a misuse of encryption to attempt to conceal data from the player in this way.
Typically, the most important security requirement on the connection between a
client and server or between peers is to ensure data integrity and provide source
authentication. Data integrity is important to prevent manipulation of the data on the
network link that could alter player actions or game state. Source authentication is
particularly important, because malicious players could spoof source IP addresses or
game message headers so they appear to come from another player. This can be a
real problem in message-based system designs because the underlying IP address
information is often discarded by the higher-level message layer.
There are three choices for achieving integrity and source authentication: digital
signatures, encryption, and cryptographic checksums (or message authentication
codes—MACs). All three techniques work, and may be useful for protecting a game,
depending on other system design constraints:
Digital Signatures—Use a combination of a hash function and a public key
encryption function. Hash functions are often slower than conventional,
private key cryptographic functions. If you can ensure the security of the
player’s private key, the system can take advantage of a digital signature’s
non-repudiation features. Non-repudiation is the property often associated
with digital signatures that only the legitimate user could have created the
signature, so the legitimate user cannot subsequently deny having signed
the message. Non-repudiation is probably more important for skill games or
gambling games than in conventional MMOs, as the set of messages could be
used to create a “digital contract” to validate the game.
Encryption—Encryption’s main benefit is to protect against disclosure to third
parties. Certain modes do have the same sort of manipulation detection properties that a MAC has. Encryption can also be used to validate identity by
having a unique key assigned to each sender in a client-server architecture.
The developer can pre-generate or independently generate the key stream
and sometimes operate faster by using a key-additive system. This is an
encryption mode where a cryptographically generated key stream is simply
added or XOR’ed with the plaintext data. When operated in this manner, the
encryption system will only confirm identity, not protect against controlled
data manipulation.
Cryptographic Checksums—They leave the message in cleartext, but include
an authentication phrase using a keyed cryptographic function that detects
errors and manipulation. As with encryption, unique keys in a client-server
environment can be used for secure identification. Cryptographic checksums
can be designed to operate quite rapidly and therefore can have minimal
performance impact. In peer-to-peer games, this is typically not a major factor.
However, in client-server games, the server may need to process hundreds of
messages from thousands of player clients in a very short time.
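To make the key-additive caveat concrete, here is a small sketch (all names invented): a SHA-256-based key stream XOR'ed with the plaintext decrypts correctly for the keyed peer, yet an attacker who can guess the plaintext at a known offset can rewrite the message without ever learning the key.

```python
import hashlib
from itertools import count

def keystream(key: bytes, n: int) -> bytes:
    # Counter-style stream: hash(key || counter) blocks, truncated to n.
    out = b""
    for i in count():
        if len(out) >= n:
            return out[:n]
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Key-additive mode: the same XOR operation encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"per-client key"
ct = xor_crypt(key, b"move north 10")
assert xor_crypt(key, ct) == b"move north 10"

# Controlled manipulation WITHOUT the key: XOR the ciphertext with
# (known plaintext XOR desired plaintext) at the right offset.
tampered = bytearray(ct)
for i, (a, b) in enumerate(zip(b"north", b"south")):
    tampered[5 + i] ^= a ^ b
assert xor_crypt(key, bytes(tampered)) == b"move south 10"
```

This is exactly why the mode only confirms identity and does not protect against controlled data manipulation; pairing it with a MAC closes the gap.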
One of the reasons I recommend cryptographic checksums for client-server games
is that the server can, if necessary, completely ignore the authentication process for
player data. Because the message data is provided in cleartext, it can be processed
without any problem, even if the message is not authenticated. Therefore, when the
integrity and source of the message do not matter, or the server is under a particularly heavy load, the server can bypass the authentication step or save authentication
processing for later.
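As a sketch of this approach (the per-client key and message format are hypothetical), the message travels in cleartext with an HMAC-SHA256 tag, so the server can act on it immediately and verify, defer, or skip the check as load dictates:

```python
import hmac
import hashlib

CLIENT_KEY = b"per-client-secret"  # hypothetical key unique to one client

def tag(message: bytes) -> bytes:
    # Keyed cryptographic checksum (HMAC-SHA256) over the cleartext message.
    return hmac.new(CLIENT_KEY, message, hashlib.sha256).digest()

def send(message: bytes):
    # The client transmits the cleartext message plus its checksum.
    return message, tag(message)

def server_process(message: bytes, mac: bytes, verify: bool = True) -> str:
    # Because the message is cleartext, the server can act on it even
    # when it skips (or defers) the authentication check under load.
    if verify and not hmac.compare_digest(mac, tag(message)):
        return "rejected"
    return message.decode()

msg, mac = send(b"move north")
assert server_process(msg, mac) == "move north"                 # verified
assert server_process(msg, b"?", verify=False) == "move north"  # skipped
```

The `verify=False` path is what makes this attractive for a busy game server: the data is usable either way, and the tag can be checked later from a log.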
This is the real work in cryptographic system implementation: the trade-offs
between all of the options that are available for algorithms, modes, key management,
and so on, to meet your performance and security requirements.
Other Strategies, Tactics, and Thoughts
There are many ways to attack piracy. The inherent problem that games and
other digital media face is that “bits are bits”: there is no way to distinguish between a legitimate copy of a piece of digital media and an illegitimate
one. A pirated copy and a legitimate copy of a piece of digital media are, by definition, identical.
In essence, the strategies discussed so far have tried to find a way to make it impossible
to copy something that is easy to copy, or to make things that are identical not identical, or both.
When we are fighting digital piracy in this manner, we are, in some sense, denying the very nature and power of the medium.
In doing so, we may quite possibly be doomed to fail.
Of all the solutions discussed thus far, online gaming seems to be the most effective method to actually fight piracy, both in theory and in practice. However, not
all games are, or should have to be, online games. Altering pricing also looks like it
could be an effective option, but it may not always be an acceptable business model.
How bad is piracy? It is a legitimate question, but without real data on the extent of
piracy, it is impossible to determine an appropriate response. The challenge is to
find a way to collect good data. The first step is to create or find some sort of unique
identifier for each copy of a game that can be collected and tracked. For PCs, it is
reasonably easy, since most games already include a license key. Consoles are more
of a problem because they don’t have a license key or unique identifier tied to each
game copy; however, one can probably use a unique ID associated with the console
itself to track which machines the game has been installed on. Most PCs include
a number of unique identifiers that can be associated with each platform—Ethernet
card MAC addresses, Windows License IDs, and so on. If nothing else, the application can generate and store a random unique identifier when it is installed.
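The fallback identifier described above might be sketched like this (the file location and format are assumptions):

```python
import json
import uuid
from pathlib import Path

def get_install_id(config_path: Path) -> str:
    # On first run, generate a random unique identifier and persist it;
    # on later runs, the same installation reports the same ID.
    if config_path.exists():
        return json.loads(config_path.read_text())["install_id"]
    install_id = uuid.uuid4().hex
    config_path.write_text(json.dumps({"install_id": install_id}))
    return install_id
```

A real game would store this somewhere harder to casually delete, but even a plain file is enough to correlate repeated sessions from one machine.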
Of course, a pirate (or a person with a second PC) may attempt to reuse the
license key. If there is an online activation process or online service, it is possible to
move from the initial identifier (the license key) to an active platform ID (a unique
identifier associated with a specific PC or console). The advantage of this is that the
publisher can then distinguish between different installations that share the same
license key.
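A toy version of this bookkeeping (all names are illustrative): record each activation and flag license keys that appear on more platforms than expected.

```python
from collections import defaultdict

# Hypothetical activation registry: license key -> Active Platform IDs
# that have activated with that key.
activations = defaultdict(set)

def activate(license_key: str, platform_id: str) -> None:
    activations[license_key].add(platform_id)

def shared_keys(allowed: int = 1) -> dict:
    # Keys seen on more platforms than allowed are candidates for reuse,
    # whether by a second PC's owner or by a pirate.
    return {k: ids for k, ids in activations.items() if len(ids) > allowed}

activate("KEY-1", "pc-aaa")
activate("KEY-1", "pc-bbb")
activate("KEY-2", "pc-ccc")
assert list(shared_keys()) == ["KEY-1"]
```

Whether a flagged key means piracy or a legitimate second machine is a policy question; the data structure only surfaces the candidates.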
There are benefits to allowing every user to at least start the game. After all, as
long as such a person is not locked out, he is a potential customer. You can also
begin to more accurately gauge how many actual game users there are, who these
individuals are, the structure of the game’s informal distribution channels, and so
on. If the game locks out users immediately, it may not be possible to distinguish
multiple registration attempts by a single user from multiple users each attempting
to register. One way to implement this is to allow the game to operate through an
initial level even if the player has entered an invalid license key. The game would
still do an automated connection and generation of an Active Platform ID. Then,
when the player completes the first level, the software can take whatever action the
publisher desires.
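The flow might look roughly like this (the telemetry call and the action names are hypothetical):

```python
def report_to_publisher(platform_id: str, key_valid: bool) -> None:
    # Stub for the automated connection that records the Active Platform
    # ID; a real game would contact an online service here.
    pass

def on_level_complete(level: int, key_valid: bool) -> str:
    report_to_publisher("pc-123", key_valid)
    if level < 1 or key_valid:
        return "continue"
    # The first level is done; now take whatever action the publisher
    # desires for an invalid license key.
    return "prompt_purchase"

assert on_level_complete(0, key_valid=False) == "continue"
assert on_level_complete(1, key_valid=False) == "prompt_purchase"
```

The point of the structure is that even an invalid-key player generates a tracked registration event before any enforcement happens.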
With Internet access, the publisher could also use a GeoIP service (a service that
associates IP addresses with approximate geography) to determine approximately
where the user is coming from.
It is important to track the Active Platform ID separately from any other identifiers. Invalid license keys can be analyzed to determine if they were mistyped or if
the player attempted to use a tool to generate fraudulent license keys or reuse keys
from other players.
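One plausible heuristic for this analysis, not prescribed by the text: an invalid key within a small edit distance of an issued key was probably mistyped, while a key far from every issued key suggests a key-generator tool.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def classify_invalid_key(key: str, issued: set) -> str:
    # Within a couple of edits of a real key: probably a typo.
    # Far from every issued key: probably generated or guessed.
    nearest = min(edit_distance(key, k) for k in issued)
    return "likely_typo" if nearest <= 2 else "likely_generated"

issued = {"ABCD-1234-EFGH"}
assert classify_invalid_key("ABCD-1234-EFGJ", issued) == "likely_typo"
assert classify_invalid_key("ZZZZ-0000-QQQQ", issued) == "likely_generated"
```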
The publisher can track actual revenues against Active Platform IDs
and compare both to license information. With these three values, the publisher
can begin to get a real handle on the extent of piracy as well as the total number of
potential customers. The publisher can also experiment with different anti-piracy
tools and, of course, techniques to convert pirates into customers.
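As a simple illustration with made-up numbers, comparing distinct Active Platform IDs against licenses sold gives a rough upper bound on the pirated share of the user base:

```python
def piracy_upper_bound(active_platform_ids: int, licenses_sold: int) -> float:
    # Installations in excess of licenses sold give a rough upper bound
    # on the pirated share of the active user base.
    excess = max(active_platform_ids - licenses_sold, 0)
    return excess / active_platform_ids

# Made-up numbers: 120,000 distinct installations, 80,000 licenses sold.
assert round(piracy_upper_bound(120_000, 80_000), 2) == 0.33
```

It is only an upper bound because legitimate multi-machine owners inflate the installation count; the license-key analysis above helps separate those cases.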
In the world of digital distribution, publishers can battle pirates directly. Most pirated software is distributed through large peer-to-peer (P2P) networks (many use
the BitTorrent family of protocols). These P2P networks are designed to be highly
distributed. Anti-pirates can attempt to identify the individuals downloading
Chapter 9 Other Strategies, Tactics, and Thoughts
pirated media (as the RIAA has done for music), seed the online networks with altered game files, or create their own honeypots, which are pirate servers to help
identify and track pirates.
Developers and publishers may complain about piracy and feel that piracy is
fundamentally unjust, but they do have a choice. They can choose to battle pirates with
rewards or penalties. The primary goal for a publisher is really not to stop piracy,
but to maximize the revenues the publisher earns from its products and services.
As discussed previously, publishers have sometimes chosen to modify the
illegal copies of their games so that the games are unreliable and unstable—a tactic
with negative consequences, because it can hurt the game’s reputation (as seen with
Titan Quest). The initial motivation for this tactic was to make it more difficult for
pirate hackers to locate and remove the game’s anti-piracy measures. If developers
are confident that their security measures cannot be removed, the game should
instead clearly lock itself up and indicate that the version has not been purchased, or take the other tack and invoke nagware to endlessly remind players to
purchase the game. An alternative approach is to distribute only part of the game
initially and distribute the remainder of the game to paying customers who have
clearly registered. For example, split a $20 game into two $10 game episodes.
Requiring clear, detailed, user registration before implementing penalties may
be a powerful piracy deterrent, especially if combined with an incentive such as a
contest or promotion. A “$100,000 Titan Quest Giveaway” for registered players
may do more to increase sales than $100,000 worth of anti-piracy.
It is possible to attack P2P networks, as Introversion has shown with Darwinia1.
After a pirated version of Darwinia was leaked onto pirate networks, Introversion
spoofed the same P2P networks by widely releasing a demo version of its game that
was intentionally mislabeled as the complete game. Introversion’s goal was to increase the likelihood that a potential pirate consumer would download the mislabeled demo rather than the actual hacked game. Because the demo clearly ended
and was marked as a demo, it did not damage the game’s reputation. The company
seemed satisfied with its results, although it has not publicly shared any details
on how many sales it gained by using this technique.
Pirate and P2P networks are inherently vulnerable to a wide range of spoofing
and honeypot attacks. Because P2P networks seek to be anonymous, decentralized,
and highly distributed, pirates can’t manage trust effectively. Because there is no
central trusted authority, anyone can post, host, and alter files, as well as provide
file descriptions. Unlike conventional criminal piracy where pirates earn money by
counterfeiting and selling goods, pirates do not earn any money from these P2P
networks directly. Therefore, game publishers have more incentive than the pirates
to attempt to dominate the P2P networks. In this case, the decentralized, distributed nature of P2P networks gives the publisher an advantage.
Even if pirate networks move towards reputation systems, an organized effort
by a publisher should be able to shape network traffic.
It should be noted that there are strong financial incentives for criminal pirate
networks that do sell games or operate game services for money. In such cases, pirates will establish trust relationships and often are able to freeze out the publisher’s efforts.
Honeypot download services are operated by, or on behalf of, the publishers
with the goal of identifying pirates for prosecution. They do this by hosting “official” pirated media, often at a number of host locations. Pirate consumers who
download the media are subject to GeoIP tracking and can be traced through their Internet Service
Providers (ISPs). Once identified, the company typically pursues some sort of legal
action. The MPAA has used a service operated by MediaDefender2 to fight piracy in
the movie industry3. There are several other companies operating in this market
at the moment (mid-2008), including MediaSentry and BayTSP. MediaDefender,
which has been a bit of a lightning rod in the industry, apparently goes further than
the other honeypot services and scans the downloader’s computer for additional
copyrighted material that may have been downloaded illegally. This can be quite
devious and lucrative: If a media security company installed a monitoring application on any computer that had downloaded a game or other media from one of its
sites, the company could catalog potentially compromised items and then contact
the media publishers retroactively. The company could offer the publisher a deal
such as, “I’ve got a list of 500,000 people who have installed your application—
would you like to buy my services?” In some countries, this whole approach may be
illegal and considered computer crime. This is a concern with a number of the
more aggressive anti-piracy tools, which monitor the activities on a computer, report information back to a remote site, or take measures to shut down
“inappropriate” applications.
Honeypot services can definitely fall in the legally and ethically gray world of
active measures. Installing software on people’s computers, even if authorized in a
EULA, could put the company at risk under both civil and criminal law. The pursuit of individuals involved in music file sharing has had limited benefits and has
cost the industry a lot of good will. Even worse, such services can damage legitimate
businesses: MediaDefender actively disrupts peer-to-peer networks for its clients.
There are some legitimate companies that use P2P distribution because it lowers
bandwidth costs. This is a growing tactic for lowering the costs of digital distribution, as end users have a lot of bandwidth capacity that they are not using.
MediaDefender targeted such a network operated by Revision3 and wound up
causing a denial of service attack against the service. This has resulted in lawsuits
and an investigation by the FBI for violations of the Economic Espionage Act and
the Computer Fraud and Abuse Act4.
Historically, most computer games were single-player experiences only. Recently,
multi-player gaming has grown rapidly in popularity. As noted by many industry
observers, commercial single-player games have been dominant because, for most
of computer gaming’s short history, bandwidth costs were high. Outside of the
realm of computer games, there have always been far more multi-player games
than single-player games.
Building online multi-player games definitely has its challenges. It is interesting
to note that many commercial game developers continue to see multi-player functionality as an added feature that can be dropped if there are problems, rather than
as an essential part of the game.
This seems quite strange from a pure business perspective, especially in the
world of console games. Exit Games’ CEO, Harald Behnke, believes that multi-player games can earn two or three times as much as single-player games (to be fair,
his business is multi-player gaming services)5. Other estimates have been closer to
20 or 30 percent in additional sales.
At some level the rationale for this is pretty obvious. If I have a single-player
game that I really like and I recommend it to my friend, I might simply give her my
copy. For a multi-player game, in order to play together, we have to pay together,
leading to additional sales.
This is not guaranteed, of course. One of the reasons I think cheating is a critical industry problem is that multi-player gaming is a crucial part of making a game
more successful. And cheating can easily ruin the multi-player gaming experience
and hurt sales.
The other great feature of multi-player gaming is that it is much easier to secure
against piracy. Even a game with a minimal central matchmaking and lobby service
can more effectively control piracy than an elaborate DRM solution. Multi-player
gaming is one example of a rich interaction system; there are a number of others.
Rich interaction systems (RIS) are valuable game play, game community, and
player services that create opportunities for security transactions and incentives for
players to participate in the legitimate game ecosystem.
Security systems work most effectively when there are multiple interactions—
the more often the system is validated, the more security it provides. A bike lock is
fairly effective if the bicycle is parked in a public place where plenty of people see it.
One of the interesting differences between Western gaming and games in Asia is the
basic notion of game ownership. In the US, Japan, and Europe, computer games are
typically owned when the user possesses a physical copy of a game associated with
a single platform. In Asia, the physical or electronic copy of a game does not matter,
because the company’s association with the player is based on an online account.
Neither model is superior to the other. It is probably prudent for developers to investigate how to support both models of ownership for their games. Asian game developers may be able to tap game genres that are not solely multi-player and Western
developers may be able to better position themselves for the rapidly growing market
in Asia and other emerging markets worldwide.
Although I cannot claim to have taken a comprehensive survey of game developers and publishers, I have studied quite a number of game company sites over the
years. The only company that I’ve seen that seems to support both account-based
ownership and copy-based ownership, at least to some extent, is Valve Software.
Valve explicitly offers support for Internet Café licensing on its site as well as conventional sales, and its Steam online digital distribution service6.
One area that has not been explored too much is a hybrid of platform and
account-based licensing. Some MMOs do support “buddy” accounts for friends, but
there could be more. Easy examples are intelligent licensing and services for people
who play on both their home PC and a laptop, family licensing plans for MMOs and
other games, and even multi-level marketing and affiliate programs to lower marketing and distribution costs (and even reduce payment risks).
If the bike were locked up in a vacant, abandoned warehouse, then no matter how
good the lock was, anyone who found the bike and wanted it could simply take it.
In some sense, a bike lock does not work because it actually stops thieves.
A bike lock works because it is fairly obvious and reasonably time consuming for
someone to circumvent it. A bike lock is a thief detection system, not a theft
prevention system.
Effective protection for games works the same way. Rather than trying to build
an unbreakable lock, it is much easier and more effective to build an environment
where the security system is public and involves multiple users. It is even better if
the security system is part of a visible service that users routinely use.
At some level, licensing and DRM system providers understand this principle.
EA initially configured its DRM system for two games, Mass Effect and Spore, to require the player’s license be revalidated with the online server every 10 days in
order to ensure that the license key had not been compromised. After widespread
customer outrage, EA and Bioware canceled this tactic7.
The problem with their approach was not the underlying technical security
strategy, but how players perceived it.
Instead of making the security authentication and license check a standalone
service (which also makes it a target for circumvention), why not provide a valuable
online service that the user wants to participate in?
For Spore, at least, creating a valuable online service would be trivial. One of the
key elements in the game is the ability to create your own custom creatures and
share them. Even though the game is single-player, this shared, online experience
is a key part of its design and has already been very successful. There were over
250,000 creatures created the day the product was launched8 and, even better, HP
and EA launched a worldwide creature design competition9, with the additional
benefit that these players no doubt provided the companies with a lot of detailed
personal information when they registered.
If the game’s license re-verification system had been embedded into the utility that
allowed creatures to be posted, or allowed players to register for the competition,
there would have been little or no controversy.
The real pioneer in providing this kind of service was Blizzard with Battle.Net10,
its multi-player gaming and matchmaking service that launched in 1997. Although
Blizzard’s games—the Diablo, Warcraft, and Starcraft franchises—are great and
likely would have been quite successful anyway, Battle.Net probably turned them
into worldwide phenomena and contributed to their amazingly long shelf life. It is
unclear whether Battle.Net was designed as an anti-piracy tool, and there seem to
be some security weaknesses in its implementation that indicate that it was designed primarily as an early social network, but Battle.Net certainly has turned into
a way for Blizzard to manage millions of players and licenses.
Surprisingly, very few game publishers or developers have followed suit and
launched similar online services. The notable exceptions are Valve Software’s
Steam, Stardock’s Impulse, and, for consoles, Microsoft’s Xbox Live; Sony’s and
Nintendo’s online services are not nearly as tightly integrated. The larger the service
and the more extensive the stable of games and features, the more effectively the system works against piracy. In addition, the larger the service is, the more favorable
the economies of scale and the lower the cost of a common infrastructure for digital
distribution and other online services. A RIS would give a substantial advantage to
any large game publisher.
There are many ways to create a RIS. A RIS does not need to depend on a
single service:
Game Commerce and Downloadable Content (DLC)—Selling virtual items
does not need to be limited to MMOs. Although Bethesda Softworks was criticized for charging $2.50 for horse armor in Elder Scrolls IV: Oblivion, the basic
principle of selling items and maps is effective. The two most successful examples
are probably Guitar Hero III and Rock Band, with their regular additions of new
musical tracks to these very popular games.
Inter-Player Commerce/Real Money Transactions—MMOs know that players love to trade items. Creating scarcity gives players a reason to interact.
Even the humble Nintendo DS allows Pokémon players to trade virtual items.
Ironically, although MMOs condemn real money transactions (RMT) for
disrupting game play, the RMT economic system creates an additional barrier
for potential pirates.
High Score and Badge Systems—Players have loved the ego boost of a high
score, as well as other rewards and achievements, since the days of arcades.
Microsoft has done a lot to revive this with its Achievements system on Xbox
Live and pretty much every publisher and online service has followed suit.
Tournaments and Ladders—Competition takes the basic pleasure of a high
score and raises the stakes. Tournaments and ranking ladders create powerful
incentives for players to participate in the official game service. ArenaNet has
been running Guild Wars tournaments worldwide with substantial prizes and
ongoing publicity for the game.
Rewards, Bonuses, and Incentive Programs—Frequent flier programs have
been a powerful loyalty tool and incentive to spend more money for fairly
modest rewards since American Airlines launched AAdvantage in 1981. Some
MMOs have given rewards to long-term players.
Contests, Sweepstakes, and Promotions—Contests are a classic marketing
tool and they can be effective in binding players to an online service. Operators
just need to be careful to ensure that they do not violate the law (see Chapter 31).
Game Updates—Regular updates to a game engine to fix bugs or, better yet,
improve the game play experience, can also be an effective way to tie players to
an online service.
Game Asset Updates—Game assets are generally easy to update and compact
to distribute. They can also give an expanded game play experience and revive
interest in a maturing game system. Asian MMOs have been particularly
aggressive in providing certain assets that are available to active players only
during certain holidays, such as Halloween or Valentine’s Day.
Mods—Although there is a lot of discussion about user-created content, it has
not had too large an impact on many games. Sharing Spore creatures, as cited
previously, is a relatively modest example. Some games have allowed players to
create maps and units and even modify the game engine itself. Games like
Bioware’s Neverwinter Nights allowed players to create dungeons and maps for
the fantasy game and even sell them.
Matchmaking and Multi-Player Gaming—The prototypical RIS is a multi-player service. Games do not actually have to be run on the server. Players can
simply use the central lobby service to connect with other players and store
game results. An increasing number of Asian MMOs are actually run as peer-to-peer games. This allows a more modest central server infrastructure while
preserving many of the anti-piracy benefits of server-based gaming.
Persistent Player Profiles—My military friends used to joke about some officers’ “I Love Me” walls—the collection of plaques and commendation letters
that they had received over their career. We all love to show off a bit and persistent player profiles support this even for single-player games. Microsoft has
struck gold with its combination of Gamertags and Achievements. I suspect
some players wind up buying the Microsoft version of a console game just
because of these features.
Machinima—A recent innovation in 3D computer games is the in-game ability to replay, stage, and otherwise manipulate game activities. This has turned
into an interesting side industry of its own where players use the game engine
to create original movies.
Community Systems—Although many games have online forums for the
game community, these are often web services outside of the game application.
Instead, such services could be deeply integrated with the game application
itself, combining machinima, real game avatars, player profiles, and other
value-added capabilities.
Chat and Buddy Systems—In-game chat and buddy systems are often found
as third party add-ons to games. Tying them to the game can support your RIS
security objectives. One interesting example is Kongregate’s general integration
of a chat feature into all of its games, even if the games are single-player only.
Server-Based Gaming—This is the classic strategy of moving game play from
the client to the server as seen in MUDs and most MMOs (and discussed
previously in the Server Piracy section). If the game itself is on the server, many
of the objectives of a RIS are already being met.
The range of potential services is limited only by the developer or publisher’s
imagination. The key to their effectiveness as a security tool is how well and deeply
they bind players into the legitimate, licensed game and publisher infrastructure.
I have read with some interest the debate over digital rights management for the
past several years and the technical shortcomings and customer dissatisfaction with
the available approaches.
The Digital Affiliate System (DAS) design addresses the goals of any digital
media publishing business. See Figure 9.1. There has been no evidence that any of
the standard DRM solutions, such as software wrappers, digital signatures, encryption, and even hardware security systems, have had anything but a modest impact
on individual or organized piracy. Also, all of these solutions “fail deadly” (they
don’t “fail safe”); once the security system has been defeated one time, piracy scales
towards infinity because there is no way to recover from the security compromise.
FIGURE 9.1 Digital affiliate system (DAS) architecture
In addition, virtually every DRM solution is encumbered by numerous pieces
of intellectual property (there are quite a number of DRM related patents and, it
seems, more every day) that can make companies a target for litigation.
A digital affiliate system is not a rights-management solution; it is a revenue-sharing system.
The “key” to the digital affiliate system is that it is designed to encourage the
use of legitimate and protected media assets, no matter their source. The typical
DRM system treats the customer as the enemy. Tremendous effort is expended by
the publisher and DRM system to prevent customers from using the media asset
that they have purchased.
DAS treats the customer as a partner and source of additional revenue. When
customers get an unauthorized copy of a DAS protected media asset, they have a
financial incentive to re-enter the legitimate DAS environment.
DAS works by transforming digital media assets into “currency” that can be
used for future transactions and have inherent value. Thus, the movie, song, game,
or whatever is only part of the value of the DAS Media Asset (DMA). Setting the
value of the DMA is the responsibility of the media asset owner; the media player
and distribution system simply support the process. Media player creators and
distribution services may profit from a portion of the transactions, although this
process should probably be as open as possible. The objective of DAS is to create
a standard, easy-to-use, open affiliate system, not to replicate the proprietary solutions of DRM. The power of this approach is that the digital media “currency”
can be linked to an individual and protected in a database outside of any devices or
players that store the DMA asset. Rewards, exchanges, sales, and promotions can
thus be used to encourage participation in a legitimate DMA market rather than
encouraging customers to defect and find media on the information black market.
Once you recognize that it is impossible to actually protect “bits” against someone who has legitimate access to them, any security design becomes much simpler.
Ordinary media asset file formats will be wrapped in a simple extended file format
that includes copyright information, ownership information, and “where to buy”
information. These files are read by a simple DMA player that handles the copyright information and passes the actual file to the appropriate media player application. The center of the DAS system is a DMA registry. The registry handles the
association of DMA players with individual owners and the association of DMA
assets with these owners. Finally, there can be a DMA market that supports the
exchange and sale of DMA assets as well as offers promotions, contests, and other
value-added features.
As noted, a DMA is simply a wrapped media file in any format. The DMA includes:
Media Asset—The actual asset of interest to the users
Media Asset ID—A unique identifier associated with the media asset
Media Asset Type—A tag associated with the media asset to indicate which
player or other application will be associated with the asset
Copyright Information—The standard legal disclaimer associated with the
media asset
Owner Information—The registered owner of this copy of the media asset
Registry Information—Location information associated with the DMA registry to support the user and media player association
Market Information—Location information associated with the DMA market
to support purchases, promotions, exchanges, and so on
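The wrapper described above could be modeled as a simple record (the field names are illustrative, not a published format):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DMAFile:
    # Wrapper around an ordinary media file, mirroring the fields listed
    # above. Field names are illustrative, not a published format.
    media_asset: bytes      # the actual asset, in any format
    asset_id: str           # unique identifier for this asset
    asset_type: str         # selects the player application
    copyright_info: str     # standard legal disclaimer
    owner: Optional[str]    # registered owner of this copy, if any
    registry_url: str       # DMA registry for owner/player association
    market_url: str         # DMA market for purchases and promotions

dma = DMAFile(b"...", "song-42", "audio/mp3", "(c) 2008 Example Corp.",
              None, "registry.example", "market.example")
assert dma.owner is None  # an unowned copy prompts the user to register
```

Because the wrapper carries the registry and market locations with the asset, any copy, legitimate or not, points its holder back to the legitimate system.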
It should be noted that this system does not preclude additional security measures implemented by the copyright holder. Covert fingerprints or other anti-piracy features can be embedded in the wrapped media file.
The DMA player is not an elaborate security system; it simply wraps the media
player. The DMA player also knows the owner or owners of the player and can thus
determine the validity of the media player’s owner’s access to any DMA. The player
can also access DMA registries and even different DMA marketplaces and other
value-added services. When the owner of a media player wants to access a DMA,
the DMA player simply checks the DMA and determines whether the DMA is licensed by that owner. If not, the DMA prompts the owner to get a license from the
appropriate DMA registry. The DMA player does not attempt to prevent the owner
from using any media. Also, for art, sound, or other reusable assets, when any of
these assets are copied or clipped, they should retain the DMA wrapper information and pass it into the new asset. If the DMA player is not associated with an
owner, every time that a user attempts to use the DMA player it will prompt the
user to register with a DMA registry. By continually prompting owners to “do the
right thing,” and provided the appropriate incentives are in place, this simple player can be
at least as effective as any of the existing DRM solutions available. Also, because
there is no benefit for a user to strip off the DMA wrapper, the user may re-enter
the legitimate system at any time.
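The player's decision flow, as described, might be sketched as follows (the action names are hypothetical); note that it prompts but never blocks playback:

```python
def open_asset(dma_owner, player_owner):
    # Sketch of the decision flow described above (action names are
    # hypothetical): the DMA player prompts but never blocks playback.
    actions = []
    if player_owner is None:
        actions.append("prompt_register_with_registry")
    elif dma_owner != player_owner:
        actions.append("prompt_get_license")
    actions.append("play")  # the player never prevents use of the media
    return actions

assert open_asset("alice", "alice") == ["play"]
assert open_asset("alice", "bob") == ["prompt_get_license", "play"]
assert open_asset(None, None) == ["prompt_register_with_registry", "play"]
```

The deliberate absence of a "deny" outcome is what distinguishes this sketch from a conventional DRM check.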
DMA assets and asset owners are registered at one or more registries. A registry can
be operated by a telecommunications carrier, a digital distribution service, a DMA
asset provider, or other third-party entities. These can easily be set up to preserve
the privacy of customers, for those who want to preserve their anonymity. More
importantly, the registries control the ownership and basic payments for licensing
of DMA assets. DMA players can connect to the registries online; paper receipts
and other purchase processes can also be used to link DMA assets with DMA
players and asset owners.
The crux of what this Digital Affiliate System provides is not the technical mechanisms described here, but rather the business services that it enables. Because ownership of a DMA can be positively tracked and controlled by the asset creators,
multiple strategies for revenue maximization and protection are available. DMA
owners who recommend or distribute DMA assets to other users can earn bonuses
and rewards. Contests and incentive programs, much like frequent flier programs,
can be used to reward registered DMA asset owners; such programs show that
even very modest rewards can have a real impact on consumer behavior.
All of these revenue streams can be tied to legitimate DMA purchases, making
the use of the legitimate DMA system a boon, not a bane, for media users. Even
major pirates can be brought into the media distribution system by
rewarding them as resellers (with higher levels of compensation for large numbers
of referrals). And, for those pirates who persist, the value of simply duplicating
DMA-protected assets will go down, as many legitimate customers will prefer to
participate in the legitimate “media commerce” system.
Many PC and even console publishers and developers argue that the game industry
is moving towards digital distribution for games. The rapid growth in broadband
networks and reduction in bandwidth costs make this a technically viable and
sound business strategy. In addition, many see digital distribution as a solution to
the challenge of used games and piracy.
Protecting Games: A Security Handbook for Game Developers and Publishers
Digital distribution is a particularly appealing anti-piracy strategy for consoles,
because it can take advantage of the closed architecture of the console platform to
substantially raise the barrier for potential pirates and, if implemented properly,
make it quite difficult for them to scale their attacks effectively across multiple users.
This section describes a highly simplified, conceptual architecture for a secure
digital distribution system oriented towards consoles, although it could be used for
PCs. I am going to ignore many details to focus on some key security themes.
There are two main elements of this distribution system—the distribution
process from the central server to the console and the method to secure local storage within the console. The main objective of the distribution process is to operate quickly and efficiently while somehow
incorporating some form of uniqueness for each game shipped to each console. In
this case, the important requirement is not uniqueness in the encryption and
distribution service, but the ability to determine efficiently where piracy may have
occurred. Also, this example is not really concerned with manipulation of the game code or assets. See Figure 9.2.
FIGURE 9.2 Secure digital distribution system architecture
In order to isolate who may have compromised a game, this example uses
steganography to embed a “covert fingerprint” or tattoo (CTAT) into each copy of
the game (GAME). During preprocessing, the system determines where these bits
can safely be combined with the actual game file to create a “tattooed game” (CTGAME)
for each console i:
CTGAME(i) = GAME xor CTAT(i);
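As a toy illustration of this step (the safe bit positions and the id-to-bit mapping are my own invention, not the book's actual steganographic scheme), the tattoo CTAT(i) can be built as a mask that is zero except at bits preprocessing has marked safe:

```python
# Toy covert-tattoo sketch: CTAT(i) is a mask that is zero everywhere
# except at bit positions preprocessing judged safe to flip. The mapping
# from console id to bits is invented for illustration.

def make_tattoo(length, safe_bits, console_id):
    """Build CTAT(i): flip safe bit k iff bit k of console_id is set."""
    mask = bytearray(length)
    for k, bit in enumerate(safe_bits):
        if (console_id >> k) & 1:
            mask[bit // 8] |= 1 << (bit % 8)
    return bytes(mask)

def apply_tattoo(game, ctat):
    """CTGAME(i) = GAME xor CTAT(i); XOR also removes a known tattoo."""
    return bytes(g ^ t for g, t in zip(game, ctat))
```

Because XOR is its own inverse, applying a known tattoo a second time recovers the original game bytes, which is what makes the later traceback step possible.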
You do need to encrypt the resultant file and, in this case, it makes sense to use
a cryptographic mode that is error extending, such as ciphertext autokey
(CTAK) or cipher block chaining (CBC) mode.
// Note: both modes require an initial value IV, treated as CT(0), that is
// passed in the clear or is somehow known to both parties
CT(j) = E[CT(j-1)] xor PT(j);
// CTAK encryption, where E is the encryption function
PT(j) = E[CT(j-1)] xor CT(j);
// CTAK decryption. An advantage of this mode is that there is no
// need for a decryption function.
CT(j) = E[PT(j) xor CT(j-1)]; // CBC encryption
PT(j) = D[CT(j)] xor CT(j-1); // CBC decryption, requires a decrypt function
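To make the error-extension property concrete, here is a minimal sketch of the ciphertext-feedback form of CTAK in Python. The keyed SHA-256 "cipher" E is my own stand-in for illustration; CTAK never inverts E, so a one-way function works for a demo, though a real system would use a proper block cipher.

```python
import hashlib

BLOCK = 32  # block size = SHA-256 digest size

def E(key: bytes, block: bytes) -> bytes:
    # Stand-in for the block cipher. A keyed one-way function is enough
    # here because CTAK decryption reuses E rather than an inverse.
    return hashlib.sha256(key + block).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def ctak_encrypt(key, iv, pt_blocks):
    ct, prev = [], iv                    # IV plays the role of CT(0)
    for pt in pt_blocks:
        c = xor(E(key, prev), pt)        # keystream fed by previous ciphertext
        ct.append(c)
        prev = c
    return ct

def ctak_decrypt(key, iv, ct_blocks):
    pt, prev = [], iv
    for c in ct_blocks:
        pt.append(xor(E(key, prev), c))  # same E, no decryption function
        prev = c
    return pt
```

Encrypting two copies of the same "game" that differ in one tattoo bit, with a common key and IV, the ciphertexts agree before the tattoo and diverge in every block after it; that divergence is exactly what keeps a third party from differencing cipher streams to isolate the tattoo.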
When I was working through this design, I almost made a big mistake. I wanted to
use a simple key-additive system:
CT = KeyStream xor PT;
// where the keystream is generated by a cryptographic function
The advantage of this approach is that I could store the combination of the game
and keystream:
GenericProtectedGame = KeyStream xor Game;
And then I would have added in a tattoo stream at the last moment for each console i:
ProtectedGame(i) = GenericProtectedGame xor CTAT(i);
This would have been very fast and efficient from a performance and storage
point of view. Unfortunately, it would have allowed an adversary to combine two
ProtectedGames together to isolate the tattoo, just as fingerprint systems were
attacked (see the sidebar entitled “Attacking Fingerprints and Watermarks” in Chapter
6). Instead, I had to change to the CTAK or CBC cryptographic modes to avoid this attack.
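The attack I had to avoid is easy to demonstrate. In this sketch (illustrative only), XORing the two protected copies cancels both the keystream and the game content, handing the adversary the XOR of the two tattoos:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

game = os.urandom(64)
keystream = os.urandom(64)                       # shared across all consoles
generic = xor(keystream, game)                   # GenericProtectedGame
tat_i, tat_j = os.urandom(64), os.urandom(64)    # tattoos for consoles i and j
pg_i = xor(generic, tat_i)                       # ProtectedGame(i)
pg_j = xor(generic, tat_j)                       # ProtectedGame(j)

# Two colluding consoles XOR their copies: keystream and game cancel,
# isolating the combined tattoo signal.
assert xor(pg_i, pg_j) == xor(tat_i, tat_j)
```

Once the adversary has the combined tattoo signal, knowing either tattoo reveals the other, so the tattoos can be located and stripped.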
The reason you want the system to extend errors is that you do not want a third
party who looks at the cipher streams from different consoles to be able to start isolating the covert tattoo. In general, there is little benefit to using a different key when
sending the data to each individual console. The cryptographic modes described
here can allow unique identification, even if the key is common for all users.
If possible, it would be more efficient if the tattooed assets were at the
end of the game file. This would allow the encryption of the front part of the file to
be computed once and stored for all users. Then, the only portions of the game file
that would need to be computed uniquely for each console would be in the tattooed
region. It probably would be wise to change the key associated with a game regularly, but the risk of disclosure is more likely at the server than on the console. Thus,
for each console (i) and game (g), the package sent to the client would be:
ProtectedGame(i) = E[GAME xor CTAT(i),gkey];
// where gkey is the current key associated with that game.
The console and server will create or exchange the game’s key (gkey) via some
public key or private key management protocol. On the console side, once the game
has been completely downloaded and decrypted, the gkey will be deleted.
At this point, the tattooed game file will be available at the console. There is a
huge business advantage for consoles to allow players to purchase their own standard, commodity hard drives. Console makers make more money with less risk by
focusing on the media that they want to distribute rather than on selling hard drives. In order to safely use a standard drive, however, the console needs to encrypt
all data that is stored on the disk. There is no need for any other console or even the
central server to know what key the console is using. This is nice from a production,
operations, and key management perspective. (Exercise: propose a sensible way to
handle console hardware failures that does not require the redistribution of all of the
previously sent games.) Because the data is stored locally and encrypted with a
unique key, the tattooed game file will be less vulnerable to hackers.
Fortunately, consoles, and even many PCs that have a Trusted Platform
Module (TPM), do have some internal secure storage. The key that is used to
encrypt all of the keys for the various games will be stored in the TPM or encrypted
in a unique key that is stored in the TPM.
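A minimal sketch of that key hierarchy, using a toy XOR wrap with a SHA-256-derived keystream (my own construction, purely for illustration; a production system would use an authenticated key-wrap algorithm and keep the master key inside the TPM):

```python
import hashlib

def _keystream(master_key: bytes, label: bytes, n: int) -> bytes:
    # Expand master_key into n pseudorandom bytes bound to `label`.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(master_key + label + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def wrap_key(master_key: bytes, game_id: bytes, game_key: bytes) -> bytes:
    # The wrapped value can live on the commodity hard drive; only the
    # TPM-held master key can recover game_key.
    ks = _keystream(master_key, game_id, len(game_key))
    return bytes(a ^ b for a, b in zip(ks, game_key))

def unwrap_key(master_key: bytes, game_id: bytes, wrapped: bytes) -> bytes:
    return wrap_key(master_key, game_id, wrapped)  # XOR wrap is its own inverse
```

Because the keystream is bound to the game identifier, a wrapped key for one game is useless when unwrapped under another game's label, and nothing outside the console ever needs to know the disk keys.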
By using this process, if a game is found to be compromised, the game publisher can recover the game’s tattoo and use it to determine which console
was the source of the game. The console maker can then determine which other
games that had been sent to the compromised console are at risk. This does not
recover the previously lost games, but it can help reconstitute the system as a whole.
Also, the publisher can implement a recovery strategy for other games that were
distributed to that same compromised console.
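Tracing a leaked copy back to its source console can be sketched like this (the names and the registry-of-tattoos dictionary are invented for illustration; it assumes the publisher retains the master game image and each issued tattoo):

```python
def trace_leak(master_game: bytes, leaked_game: bytes, issued: dict):
    """XOR the leak against the master image to recover its tattoo,
    then look up which console that tattoo was issued to.
    Returns the console id, or None if nothing matches."""
    recovered = bytes(a ^ b for a, b in zip(master_game, leaked_game))
    for console_id, ctat in issued.items():
        if ctat == recovered:
            return console_id
    return None
```

A real system would also tolerate partial damage to the leak (matching the closest tattoo rather than requiring exact equality), but the exact-match version shows the bookkeeping the publisher needs.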
1. M. Martin (2006), “Cause Mayhem to Disrupt Illegal Downloads, Says Introversion,”
2. Wikipedia (2008), “MediaDefender,”
3. soulxtc (2007), “Gotcha! New MPAA Site Tries to Trick Users into Illegally Downloading Movies,”
4. R. Paul (2008), “Revision3 CEO: Blackout Caused by MediaDefender Attack,”
5. E. Gibson (2006), “Exit Games CEO Harald Behnke: Interview,”
6. Valve Software (2008), “Business,”
7. Polybren (2008), “Mass Effect, Spore DRM Loosened,”
8. R. Purchese (2008), “250,000 Spore Creatures Created in a Day,”
9. HP and EA (2008), “Electronic Arts and HP Organize Regional Spore Creature Creator Design
10. Wikipedia (2008), “Battle.Net,”
Anti-Piracy Bill of Rights
At the end of the day, treating your customers like criminals is bad business.
Although piracy is a problem that should be taken seriously, the primary
goal of any game company and the industry is to maximize revenues, not
punish pirates. The industry does not need to disclose the details of its anti-piracy
strategy to anyone, including consumers, but it is important that game companies
are clear about what they are doing to a customer’s computer and about what they
expect from their customers.
The entertainment industry, in general, and the game industry in particular,
have come under fire for some of their anti-piracy tactics. Sony BMG’s Rootkit,
the Recording Industry Association of America’s (RIAA) aggressive lawsuits, and the
Starforce DRM problem, among others, have left consumers with an active distrust
of the industry that has further encouraged piracy.
Many games have had public relations problems with anti-piracy, including
Bioshock, Spore, and Mass Effect, all of which have run afoul of their own anti-piracy systems.
Although most consumers are generally sympathetic to the industry’s concern
about piracy, some of the draconian measures that have been taken have alienated
many players and created more sympathy for pirates than game creators. Some of
these cases can bite back. A crusading politician, or friend of a politician, who has
a bad experience with a game’s security system could easily put forward, and even
pass, legislation that would be difficult for the industry.
Think it can’t happen? Illinois recently passed a law that specifies the requirements to be able to cancel an online game subscription after an unhappy gamer
whose father happened to be a local alderman complained to a friend in the state legislature1.
Although it would be ideal for an organization such as the ESRB to add Fair
Use designations to their current product-labeling program, the game press and
online review sites could include Fair Use Principles in their review criteria and ratings. Hopefully, publishers and developers will consider these guidelines when
selecting and implementing their anti-piracy strategies.
What follows is a proposed set of basic principles and designators for managing Fair Use for games (and other applications).
Computers, cell phones, and other devices are the property of consumers. It is a
privilege for an application to be selected by a consumer and installed on a platform. Conversely,
consumers should value and respect the rights of the creators and it is reasonable
for a creator to be compensated for his or her work as he or she chooses.
1. Any application that is installed on a consumer’s computer should be able
to be cleanly uninstalled if for any reason the consumer no longer wants it
on a platform. This means that there should be no residual software, drivers, data, or other information remaining on the platform as a default.
Any variations from this should be clearly specified and at the discretion of
the consumer.
2. No application shall alter other applications, drivers, libraries, or data on
a platform. “Upgrades” or changes to other applications, drivers, and
libraries should be clearly and individually indicated and approved during
the installation process. Upgrades shall not be for the purpose of adding
security or other functionality beyond that intended by the developer of
those applications and should not impair the operation of other programs
or libraries that may use these shared resources. Essentially, the only case
where an application should modify another provider’s application is when
it acts as an alternative distribution channel for that provider’s application.
3. No application, driver, library, or data will be installed on a platform
without the consent of the consumer. A clear and complete manifest of
the applications, files, libraries, drivers, and data shall be provided to the
consumer during the installation process. Also, if any shared registry data,
configuration information, or other such changes are made, they shall be
clearly indicated in the installation manifest, along with a listing or copy
of the previous state of these configuration files, data structures, and so on.
4. No applications, libraries, or drivers shall operate when the provided game
or application is not running. The consumer is the individual who determines when, or if, the application operates. The expectation is that this will
be a manual decision by the consumer, not operating as a continuous or
background task, unless clearly and affirmatively agreed to by the consumer.
Any tools to facilitate updates or other background features shall be available and used solely at the discretion of the consumer. The consumer shall
have the clear ability to disable any continuously operating or background
services or applications at any time.
5. If the license or DRM service shuts down, the provider will either disable
the DRM solution, transition support of its existing customers to a third
party, or provide a no-cost migration path to a new solution. This problem
recently came to light with Yahoo!’s announced termination of its music store2.
Registration is the process of validating an installation of an application on a
specific platform, often prior to use. This may be done via a license key, payment
process, provision of personal information, or other process.
RR: Registration Required—The application will not work without initial reg-
istration. It is highly recommended that any application that requires initial
registration support a multitude of registration options, including phone,
email, web form, and fax, and not just direct Internet connection. (See the
“Connection Options” section.)
Rxx (D or T)—Registration required within xx days (D) or times (T) that the
application is used.
RV: Registration Value—Registration is required to access value-added services.
The application is still a meaningful, complete application without registration,
not a demonstration or otherwise crippled product.
RO: Registration Optional—There is no required registration for the installa-
tion and operation of the application. This may also include the case where
there is no registration process at all.
Installation is the process of installing and configuring an application on a given platform.
I1—The installation is for a single instantiation on a single platform. It is highly
recommended that any application that operates in this mode have a way to
reconstitute or move the installation to another platform. This configuration is
expensive in terms of customer good will.
A1—This installation allows a single active copy associated with a license. The
installation can be moved or reinstalled on another platform. As with I1, the
developer or publisher should carefully consider the operational scenarios
where users may have legitimate problems with their platform that may require
a reinstallation or transfer without a prior clean uninstall process.
Ix—The installation is for a total of x copies spread over one or more platforms,
but by a single licensee.
Ax—The installation is for a total of x concurrent active copies associated with
a single licensee. The licensee is responsible for the activities of all individuals
who use the product or service provided. Thus, if a “ban” or other punitive
action is taken, it will be against all of the copies associated with that license.
CR: Connection Required—The application requires a live Internet or data
connection to operate. This levies a strong availability and scalability requirement on the application provider. It also constrains users from many legitimate
usage scenarios.
Cx (D or T)—Connection is required within x days or sessions to maintain use
of an application.
CV: Connection Value—A connection is required for certain value-added features of the application. The application is still a meaningful, complete application without a live connection, not a demonstration or otherwise crippled product.
CO: Connection Optional—There is no required connection for the installation and operation of the application. This may also include the case where
there is no connection process at all.
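As a sketch of how software might consume these designators (the compact grammar below is my own reading of the labels, not part of the proposal):

```python
import re

# Hypothetical parser for the proposed designators: fixed labels
# (RR, RV, RO, CR, CV, CO), deadline forms Rxx(D|T) and Cx(D|T),
# and installation forms Ix / Ax.
PATTERN = re.compile(
    r"^(RR|RV|RO|CR|CV|CO)$"
    r"|^R(\d+)([DT])$"
    r"|^C(\d+)([DT])$"
    r"|^([IA])(\d+)$"
)

def parse_designator(code: str) -> dict:
    m = PATTERN.match(code)
    if not m:
        raise ValueError("unknown designator: " + code)
    if m.group(1):
        return {"label": m.group(1)}                                  # fixed label
    if m.group(2):
        return {"label": "R", "limit": int(m.group(2)), "unit": m.group(3)}
    if m.group(4):
        return {"label": "C", "limit": int(m.group(4)), "unit": m.group(5)}
    return {"label": m.group(6), "copies": int(m.group(7))}           # Ix / Ax
```

For example, "R30D" parses as registration required within 30 days, and "A2" as two concurrent active copies per license.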
I am not the only one concerned about this issue; see Talkjack3, another
DRM Bill of Rights via ByteShield4, and Stardock’s Gamer’s Bill of Rights5.
1. M. Fahey (2008), “Illinois Law Spurred by Final Fantasy XI Cancellation Issues,”
2. D. Rothman (2008), “Why We Hate DRM: Yahoo Music Store to Shut Down and Shaft Customers
Who Bought Legal Music,”
3. Talkjack (2008), “Is DRM Killing PC Games? (Part 1),”
4. ByteShield (2008), “Is Anti-Piracy/DRM the Cure or the Disease for PC Games?,”
5. Stardock (2008), “Stardock Announces The Gamer’s Bill of Rights,”
The Piracy Tipping Point
In the past several years, piracy has moved from a nagging nuisance to being perceived as a real threat to the entertainment industries. The power of the music,
film, and game publishers has been based on their control of distribution,
marketing, and funding for new creative works. The Internet has radically reduced
the costs for distributing and marketing entertainment. It is now possible for independent entertainment creators to compete with global corporations. Even more
threatening, piracy no longer requires expending any real capital or organized
gangs with factories and tools. Individuals can engage in
piracy for their personal benefit. The extreme view, taken by the RIAA and some
members of the film and game industries, is to target individual consumer “pirates”
as if they were major criminals1. In the US, the entertainment industry’s lobbying
efforts appear to have succeeded in pushing the government to take a lead role in
both civil and criminal prosecutions of copyright theft cases2. Although this may
succeed in the short term, it is too easy for pirates to move beyond the reach of the law3.
The real question that entertainment companies need to answer is whether they are
seeking to maximize revenues and profits or seeking to stop copyright theft. The
two questions lead to quite different strategies.
Is it really possible to stop copyright theft? The honest answer has to be no.
No Digital Rights Management scheme or Trusted Platform Module or any other
wonderful widget is going to “solve” copyright theft4. Anyone who tells you differently is simply trying to sell you a product. (Simple question: Will any anti-piracy
vendor guarantee its product against failure?)
At the end of the day, do you want to be a policeman or a businessman?
The real power of large entertainment publishers comes from their portfolio of
products and tie-ins to each product. Instead of worrying about fighting pirates on
a product-by-product basis, large companies can enmesh customers in an “entertainment ecosystem.”
This is not a new idea. Disney pioneered licensing its characters for everything
from lunch boxes to wristwatches and bed sheets5. The same Internet that makes it
almost trivial to pirate media also makes it much easier to link products and services together. Whether it is something as simple as a loyalty card or as elaborate as
an online community, there are many, many “carrots” that can be used to draw
customers into the legitimate, paying entertainment ecosystem compared to the
DRM and TPM “sticks.” They may still require security technologies, but the goal
is to transform the hard problem of stopping copyright theft into the easier mission
to verify and service legitimate customers.
The game industry has recently and loudly joined the music and movie industries in the fight against piracy. Yet, of all of the major entertainment categories,
the game industry is probably the best positioned to finesse the piracy problem. The
early battles with piracy in Asia have resulted in the move to online subscription,
and now free-to-play gaming. Although these solutions are far from perfect, they
make piracy manageable. They raise the barrier to entry for meaningful abuse, provide continual incentives to participate in the legitimate game system, and make it
much easier to take action against a much smaller pool of pirates… if anything, the
music and movie industry should learn from the game industry, not the other way around.
As a business, entertainment is amazing because it scales so well. A three-minute song or a game can entertain millions and millions of people. iTunes has
shown that 99 cents is a great price for a song. The games Rock Band and Guitar
Hero are showing that you can take this same song, wrap it into a game for modest
cost, and sell the song again… and with substantially reduced concerns about piracy6.
Reducing barriers to entry is key. iTunes made digital music good (enough),
cheap (enough), and easy (enough). Game developers and publishers need to make
similar assessments. If charging $50 creates a barrier to entry, perhaps changing the
price or the way games are built to be able to charge $20 is necessary. Games are still
substantially less expensive to create than movies, yet they cost several times as
much to buy. Developers may want to consider episodic gaming as a way to reduce
the in-store cost of a title. Rather than sell the whole game for $50 in a store, simply distribute the first $10 or $20 physically and sell the rest online to already
hooked customers. This could battle the appeal of used games, the poor margins in
retail, and the challenges of piracy all at once.
Games are released at very different prices at very different times around the
world because of the costs of localization and other issues. Ubisoft recently made a
move to add subtitles to all of its internally developed games to make its titles more
accessible to deaf players7. Simple subtitles, combined with digital distribution,
could make it easier and quicker to release games globally simultaneously, just as
movies do, to reduce the value of piracy for customers in many countries8.
“Piracy” is a trap. It reduces a whole range of problems, challenges, and issues
into a single word. It invites one to believe that there is a simple solution. As you
unpeel piracy into its components, you’ll find a number of problems and a range of
solutions. Some may require you to change the way you do business, some have
technical fixes, and some problems are simply hard. Albert Einstein stated9 that the
definition of insanity was “doing the same thing over and over again and expecting
different results.” By that standard, we should reconsider the industry’s approach to
the piracy problem and do something different.
1. C. Dring (2008), “Law Firms Declare War on Pirates,”
2. A. Broache (2008), “House OKs Copyright Czar, New Piracy Penalties,”
3. C. Doctorow (2007), “Trade Court Allows Antigua to Violate U.S. Copyright,”
4. M. Androvich (2008), “Encryption Chip Will End Piracy, Open Markets, Says Bushnell,”
5. E. Epstein (2005), The Big Picture: The New Logic of Money and Power in Hollywood, Random House,
ISBN 0-8129-7382-8
6. T. Johnson (2008), “Ripping on Metallica: Death Magnetic and Guitar Hero III,”
7. J. Snow (2008), “Ubisoft Adds Subtitles to All Future Games,”
8. X. Jardin (2005), “Thinking Outside the Box Office,”
9. A. Einstein (attributed), “Albert Einstein Quotes,”
In this part, you’ll find the following topics:
Chapter 12, “Overview of Cheating”
Chapter 13, “Cheating 101”
Chapter 14, “App Attacks: State, Data, Asset, and Code Vulnerabilities
and Countermeasures”
Chapter 15, “Bots and Player Aids”
Chapter 16, “Network Attacks: Timing Attacks, Standbying, Bridging,
and Race Conditions”
Chapter 17, “Game Design and Security”
Chapter 18, “Case Study: High-Score Security”
Overview of Cheating
Cheating and games go hand-in-hand. It seems every game has its cheaters.
This discussion of cheating mainly focuses on cheating in multi-player
games. In traditional card and board games, the game’s mechanics and
game play systems are clearly visible. This makes it easy for cheaters to see how to
attack the games and forces game designers to consider cheating as the game is
built. Computer games, on the other hand, have mechanics that are concealed by
their elaborate graphics and high-speed play. They also have a strong, single-player
legacy (often from standalone PC and console games) where cheating was hardly
considered; cheating was a private affair between the player and herself. There are
also cases where people trade on the seduction of “cheating” as a marketing tactic
such as with cheat codes and Diablo 2’s Cheaters’ Tournament where players all
play with maximized statistics. This is not really cheating; it is just an alternate set
of rules of play. This part will explore a wide range of categories of cheats for computer games and discuss potential countermeasures.
Cheating is the next big frontier for computer game security. Multi-player and
social games blend business, marketing, and anti-piracy strategies for many game
companies. Multi-player, online, and social game services reduce distribution costs,
ideally by bypassing retail, but certainly through sale of downloadable content and
virtual items. The rich interaction system (RIS) strategy (see Chapter 9) is a powerful anti-piracy strategy because it ties players to the game publisher’s online
service. Cheating problems undermine the very same business and security benefits
that these strategies provide. Gaming services, such as MMOs and portals, need to
acquire and retain players and keep their operational costs down—all of which are
adversely affected by cheating. Finally, there is substantial growth in “for money”
games. The success of promotional games, tournaments, and contests, and, of
course, skill games and gambling games, is intimately tied to their control of cheating. Even the humble single-player game, after a high score system is added, counts
on controlling cheating to achieve its social gaming success. Who wants to play
when you know the high scores are rigged?
Cheating 101
Cheating is as old as gaming. When we shuffle and cut a deck of cards or use
a dice cup, we are continuing the age-old battle against cheaters. Although
the primary security concern of computer game companies is piracy, the
number one concern for players is cheating. No matter what motivates a player to
play a game, cheaters damage the game experience for everyone. The oldest
commercial game companies, the casinos, recognize this and structure everything
in their operation to instill confidence in their customers that there is no cheating.
For a long time, the computer game industry has consciously traded on the
seduction of cheating by incorporating “cheat codes” into its games. Companies sell
“strategy guides” that give their users what some consider an unfair advantage in
the games by disclosing details and tactics that a player would have great difficulty
discovering through ordinary game play. They even have gone so far as to sell cheats
to players1. There are several ways to interpret these cheats:
Cynical attempts to cash in on player laziness
Methods to make up for poor game design
Bugs in game implementation
Because these were single-player games, there were no consequences for the
industry. This has, to some extent, created a “cheating culture” where players think
cheating is okay.
However, as games turn into services and multi-player and online games
become the dominant business model, the cheating culture remains. The problem
is that no one wants to play a game where they perceive that they are at an unfair
disadvantage. See Figure 13.1.
When I first started talking about game security, I heard story after story about
how players would work hard to cheat at any game with no real reward. They will
cheat at anything, from a simple Flash game on a website (to earn a high score) to
a first-person shooter (to gain invulnerability). Do a Google search for “cheat”
and you’ll get almost 80 million results (by comparison, “Angelina Jolie” returned
a mere 35 million results). And at the top of those results are cheats for computer
games. Search on “cheat code” and you’ll find 3.5 million results. Again, the top
results are all for computer games. Cheating in games is a topic of compelling
interest for many game players; everyone wants an advantage.
If you create a good game that has value, someone else will try to squeeze it…
Hacking we’ve found is like a drug. It’s addictive for the player, and they have
a difficult time enjoying the game without the hack tools... A lot of players are
turned off by it.
[Hackers] turn away new and existing users, increase account theft rate,
shorten life span of a user, create [an] abusive community [environment,]
and discourage purchasing.
[Hacking is] an epidemic we’d been ignoring in Korea. We realized it’s not just
a US thing, and we developed detection tools and did mass bans…we moved
the critical values to server side and implemented third party solutions.
—Min Kim, Director of Game Operations at Nexon America on Cheating2
FIGURE 13.1 Why cheating matters
Cheating is costly. Players leave a game when they feel cheated. Players call
customer service when they believe that they are cheated—whether they are actually cheated or not. Cheating players are often punished by being banned, but this
deprives the game operator of further revenue from that individual. Lost players,
banned cheaters, and increased customer support all cost a game operator money.
Even worse, cheating is a problem whether it is real or just player perception. If
players think that they are being cheated, they may quit even if there is no actual cheating.
The same powerful network effects that bring players together for multi-player
games, MMOs, and virtual worlds can quickly turn from rapid growth to stagnation
and even collapse if players feel that they are being cheated. Anything that even
slightly reduces the likelihood a player will join a social network or reduces the
duration that they stay can have large consequences. Exponential growth is very
sensitive to small changes in its inputs. The key to success for social and multiplayer games is the perception of trust for all players and credible deterrence for
potential cheaters.
Often executives and thought-leaders in the computer game industry talk
about becoming “like the movie industry.” In some sense, the computer game
industry is becoming much more like its closer (and much larger) cousin, the gambling industry. The movie industry sells tickets and DVD products; the casino
industry sells an ongoing entertainment service. This is where the game industry
really seems to be moving. As this migration continues, the game publishers and
developers will change their focus from battling piracy to fighting cheating.
Once we start talking about cheating, we need to ask: What is a “fair” game? First,
there is the question: Is the game itself “fair”? A game is fair if it has an agreed-upon
set of rules known by all of its participants and an expectation that those rules can
be reasonably enforced.
By this standard, casino games, even if they are biased towards the casino, are
fair. The rules are known. Conversely, an MMO may not be “fair” if it is ripe for
abuse: The often-cited example of real-money transactions (RMT) is not in and of
itself unfair. RMT, or any other aspect of game play, becomes unfair only if it is not
an explicit part of the game or a part of the culture of the game that is shared by all
players. Collusion in poker is unfair only because it is not part of the game’s culture.
One could easily imagine an alternate version of poker where players set up partnerships and agree to share winnings. The card game bridge works this way.
Protecting Games: A Security Handbook for Game Developers and Publishers
Second, what is “fair play”? Fair play includes the game’s rules, the game’s
environment (the way it is being operated), and the game’s culture. It is certainly
plausible to have a computer game that explicitly supports a “hacker culture,”
where programming hacks and mods, aimbots, and wallhacks (and maybe even
hack countermeasures) are all part of the game.
Responsibility for fairness rests with everyone—the game designer, its operator
and implementer, and its players. Game designers should consider cheating and
other threats to the game in its basic design. Game operators and implementers
need to protect the game play environment and ensure that the specific service
they are offering cannot be abused. Players, too, have a responsibility to the other
players to stay within the construct of the game, even if its implementation is flawed
or the rules are incomplete. In a board game, a player has a copy of the rules. The
lack of clear, written rules for many computer games makes cheating easier: You’re
not “cheating,” you’re exploiting the code. A successful game will cultivate an
environment of mutual trust and obligation to play together fairly.
Perhaps the saddest legacy of single-player computer games is that lazy
programming and poor game design have made cheating acceptable or even
admirable. Even worse, players now expect to find “cheat codes,” and game companies and magazines profit from the sale of these codes and guides. Is it bad
game design that makes it necessary to purchase a $25 strategy guide in addition to
a $50 game, or just greed? What is clear is that now that multi-player and massively
multi-player games are emergent, the sudden objections by the game industry to
cheating sometimes seem a bit disingenuous.
Cheat codes seem to be an accident of history. Testing software requires exercising
all of its features and failure paths. For a game, this means spending a huge amount
of time testing the application, and it is terribly inconvenient to have to start the game over from the beginning, or to require that your testers all be expert players, just so they can do their jobs.
The rather simple solution is to allow the game to be changed so that even a
minimally competent game player can thoroughly test the application. Cheat codes
were simply efficient triggers to access these test modes. Cheat codes can give invulnerability, infinite ammunition or lives, allow players to fly and jump around
the game—whatever it takes for testing.
Most of the time, test code is removed from a game or application prior to its
release. Security is one reason, but some of these test codes can also undermine the
proper operation of the application.
Unfortunately, this didn’t happen with some early games. Players discovered
these “cheat codes” and became fascinated with finding them. Players would sometimes even pay for these codes, and cheat codes have become part of the marketing
machinery of the computer game industry. Cheat codes are a currency used by
public relations and marketing to reward magazines, reviewers, and websites, as the
codes can bring in more traffic, advertising, and revenue for the publications.
There are two real problems with cheat codes. The first is a missed business opportunity. These codes are rarely fully integrated into the game’s design and interface. In some sense, the cheat codes provide a wide range of alternate modes of game
play that should be easily accessible to all players. These alternate modes should
include all of the features that you would expect from a game mode: an infinite life
mode should have a separate high score, rewards, and other incentive systems
distinct from playing without that feature. Game designers can tap these alternative
game play options to efficiently expand the game experience for a wider range of
players. I, for one, rarely buy first person shooters (FPS) because my “twitch” skills
are pretty pathetic. If FPS games included well thought-out game modes that were
more accommodating of slow folk like me, they might make some additional sales.
The more dangerous problem with these cheat codes is that in many cases the
game itself really doesn’t know that a cheat code is active. For single-player games,
this is not really a problem. In fact, from a strict software testing perspective, it is
better that the game doesn’t “know” that it is running in a cheat mode. For a multiplayer game, however, this situation leads to trouble. Players have been able to use
some of these single-player cheat codes in multi-player environments in a way that
is not detectable by the other players. These kinds of careless cheat code implementations can allow a player to play a multi-player game at a huge advantage and
ruin the game for the other players. Sloppy labeling of cheat code modes and poor
management of state information in multi-player environments aggravates these
problems.

From their early days, some cheaters have targeted game hardware—the console
itself or its various inputs and storage interfaces. These hardware hacks can modify
game save files and, in some cases, allow active modification of a game while it is
running. The R4 product that targets Nintendo’s DS handheld and, to some extent,
Mad Catz’s GameShark both subvert the console’s external storage systems. Some of
these tools can even bypass the entire console operating system and give the hacker
total control over the behavior of the platform. This can be done to give players an
advantage in the game or as a method to pirate games. These devices have mainly had
an impact on single-player games. However, they could cause real trouble as console
multi-player gaming grows.
In an ideal world, the entire language and use of cheat codes would be abandoned. All of these additional game play modes would become part of the game
play experience and be listed on the back of a game’s box, just like other features.
Only real cheaters would look for “cheat codes”… except they wouldn’t be able to find
them. Is it too much to ask those clever folks in marketing to find some other way to
grab the interest of players, magazines, and websites besides advocating “cheating”?
There is some confusion about the definitions of cheating, hacking, and exploits. In
some sense, I am not really interested in clarifying the issue. For purposes of this
book, cheating is an attack on a game application. I don’t care if it is based on some
tool-aided assault or a quirk in the interface or game engine or some combination
thereof. My categories have been based on how game developers and publishers
typically perceive problems and their countermeasures.
It is worth highlighting exploits, however.
Exploits are flaws in the game as implemented that give a player an advantage.
For example, the set of fantasy pub games for Fable 2 has a race condition flaw (a
situation where the player can induce an inconsistency in the game’s behavior) that
gives them a huge advantage. The Fable 2 pub games are a set of casino-style games
where players can bet and win or lose chips. In one game, a player can “change her
bet” from 60 gold to 600 gold (and win as if she bet 600 gold), but only be penalized for losing as if she bet 60 gold.3 There are innumerable variations on this
problem. Basically, the game programmer and game designer are not implementing
the same game design or the game programmer has not been sufficiently careful in
controlling changes to the game’s behavior.
All games have some sort of state (the current status, location, and other information about players, assets, time, and so on) and rules (the way that players can
alter the game’s state and when they can take these actions). Race conditions permit actions that alter the game’s state in a way that should not be allowed under the rules.
One way the Fable 2 problem could occur is that there are two different systems: a
player account system and a game system. The player’s account is checked when the
initial bet is made and this is sent to the game and debited from the player account.
But subsequent changes to the bet are not checked against the player account; they
are only changed in the game itself. Then, if the player wins, the game sends the gold
back to the player account.
Race conditions are associated with most “dupe” exploits (where players abuse
the inventory system or other game mechanics to duplicate items). A sample “dupe”
exploit could work as follows:
1. Player 1 drops an item out of her inventory and it is added to the local game environment.
2. Player 1 abandons the game before her inventory is updated.
3. Player 2 picks up an item from the local game environment and it is added to
her inventory.
The item has now been duplicated. The “dupe” attack works because the three
data stores involved (Player 1’s inventory, the local game environment, and Player 2’s inventory) do not properly synchronize with each other.
The key is that games are essentially transactional systems and need to be built as
such. A proper transactional system would not allow the player to drop the item without updating the inventory: Either the item would be dropped and the inventory updated or nothing would happen at all. These problems can be even worse in online
games when developers choose not to use standard databases or do not understand
proper data transactions or are just not careful.
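As a concrete illustration, the transactional fix can be sketched with an ordinary relational database. The schema and item names below are invented for this example; the point is that the `with db:` block makes the drop a single atomic transaction, so the item either leaves the inventory and enters the world, or nothing happens at all:

```python
import sqlite3

# Hypothetical schema: one inventory table, one "world" table of dropped items.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE inventory (player TEXT, item TEXT);
    CREATE TABLE world (item TEXT);
    INSERT INTO inventory VALUES ('player1', 'sword');
""")

def drop_item(player: str, item: str) -> bool:
    """Atomically move an item from a player's inventory into the world."""
    try:
        with db:  # opens a transaction; commits on success, rolls back on error
            cur = db.execute(
                "DELETE FROM inventory WHERE player = ? AND item = ?",
                (player, item))
            if cur.rowcount != 1:
                raise ValueError("item not in inventory")  # forces rollback
            db.execute("INSERT INTO world VALUES (?)", (item,))
        return True
    except ValueError:
        return False

drop_item("player1", "sword")   # succeeds: inventory and world update together
drop_item("player1", "sword")   # fails: the item is already gone, no duplicate
```

With this structure, the "abandon the game before the inventory updates" trick has no window to exploit, because there is no moment when the item exists in both places.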
Another major exploit area occurs when game developers use graphics engines to
enforce game play. Graphics engines are optimized to smoothly render graphics and
animation. Smooth playback is the most important requirement for most graphics
engines, not careful enforcement of physics features such as collision detection or
pathfinding. This can create a number of problems: Many games use the graphics
engine’s model of the game environment to determine legal moves, visibility, and position. Exploits take advantage of conflicts between a game’s need for accuracy and
a graphics engine’s goal of smooth rendering.
Exploits can even exist in paper-and-pencil games. I used to play the super hero
role-playing game Champions in high school. I was one of several people who found
an exploit in the game’s mechanics that allowed me to deliver 20 times the damage
of other players.
Exploits are the responsibility of the game developers. They really aren’t “cheats,”
because they do not require any modification of the game’s software or data nor can
they be effectively addressed by anti-cheat tools. Basically, they require corrections to
the game’s design and implementation. Clear documentation of general game rules
and procedures should help reduce exploits and make them obvious flaws when they
occur. The Age of Conan had an exploit that allowed customers playing the
Demonologist class to advance extraordinarily quickly through the game4. For online
games, detailed logging can be critical in locating exploits by tracking their consequences. Also, given the scope of the games, it is probably good to reward players
who find exploits—after all, they are paying you for the privilege of testing your software. (Note: When a player reports an exploit, it is probably also worth reviewing their
game play logs to see if they had some fun taking advantage of the exploit for a while.)
In order to continue this discussion about cheating, it is important to have a framework for talking about how games are built. CARRDS is a conceptual framework
for talking about games and game security, and is illustrated in Figure 13.2. The
advantage of the CARRDS framework is that it allows you to focus on the common
elements of computer games that can affect security.
Control—The keyboard, mouse, joystick, controller, Nintendo Wiimote,
voice, biofeedback, or whatever raw means a player uses to interact with a game.
Action—The normalized control that the game rules understand: activities such
as go left, fire, strafe, jump, turn right 30 degrees, fly NNW 100 meters,
and so on.
Rules—The actual rules of the game.
Random—Random events sit in the nebulous intersection of actions (players
rolling dice to determine their move choices), rules (determining the result of
an attack), and state (a player’s cards). Random events raise some interesting
problems in a networked environment, as there are real questions as to how to
provide fair randomness online.
Display—The presentation provided to a player of the game’s state. (Note:
There can certainly be multiple views of the game’s state.)
State—The current state of the game, or, in a multi-player game, the partial
state known to a single player.
FIGURE 13.2 CARRDS game architecture
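As a rough sketch (every class and function name here is invented for illustration, not part of any real framework), the CARRDS separation between control, action, rules, and state might look like this in code:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """A normalized action the game rules understand (CARRDS 'Action')."""
    player: str
    name: str            # e.g. "move"
    params: dict = field(default_factory=dict)

@dataclass
class State:
    """The current game state (CARRDS 'State')."""
    positions: dict = field(default_factory=dict)

def control_to_action(player: str, raw_key: str) -> Action:
    """Normalize raw control input (CARRDS 'Control') into an Action."""
    mapping = {"a": ("move", {"dx": -1}), "d": ("move", {"dx": 1})}
    name, params = mapping[raw_key]
    return Action(player, name, params)

def apply_rules(state: State, action: Action) -> State:
    """Rules (CARRDS 'Rules'): the only place an action may alter state."""
    if action.name == "move":
        pos = state.positions.get(action.player, 0)
        state.positions[action.player] = pos + action.params["dx"]
    return state

state = State()
state = apply_rules(state, control_to_action("p1", "d"))
```

The security value of the separation is that raw control data never touches the state directly; everything funnels through the rules, which is exactly the choke point the rest of this chapter argues for.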
In 2000, Matt Pritchard published one of the first significant papers on multi-player
game security. In it, he set the basic categories of attacks—data and network—and
began the discussion about one of game security’s most trying problems—trusted
clients5. The “Trusted Client Problem” is seen in many games to this day. The
Trusted Client Problem was targeted at client-server games and can be generalized
and restated as “The Remote Data Problem,” as follows:
Games tend to trust the state or information provided by other players.
This approach seems to work fine, most of the time, and is very easy to implement by using data synchronization techniques. Data synchronization is very good
at handling cases where the players’ games’ states become slightly inconsistent.
Unfortunately, the system falls apart when one of the players is cheating. Because
the remote data is trusted, the peer player or game server tends to blindly accept information that is manipulated on the cheating player’s platform or on the network
connection. If cheaters can force others to accept whatever data they want, they can
change the game however they want—anything from giving themselves more
ammunition and health, to changing their location, or entirely altering the game.
It is also worth noting that in a number of game environments, there is a
concern with cheating by the server. Gambling games and contests are obvious
examples where players don’t trust the game server. Simple multi-player games
that use player-run servers, like many first person shooters (FPS) and even MMOs,
can have situations where players are suspicious of the server operator. The FPS
Battlefield 2 ended ranked games on third-party servers because of this problem6—
a costly decision, as its publisher, EA, had to operate all of its ranked servers itself.
The simplest way to implement a networked game is to use a distributed state (or
object) application that synchronizes the game state, as illustrated in Figure 13.3.
This is very tempting. It is easy, fast, and the developer doesn’t have to think about
the multi-player design. Essentially, the multi-player game is implemented as a
series of parallel single-player games and the distributed state tool “smoothes out”
the differences between each player’s version of the game.
FIGURE 13.3 State-based networking
There are several basic synchronization models that can be used by a distributed state system. At a minimum, players exchange state information (Sx), but they
may also exchange time information (Tx):
Newest Wins—In this synchronization scheme, the players exchange both
state and time information. The player with the latest timestamp
determines which state is authoritative. For any of these systems, there is an interesting challenge associated with synchronizing time, particularly when faced
with lag over a wide area network. Somehow, all of the participants have to start
at the same time. Time itself can be spoofed by malicious players who alter their
own local time to be much later than that of the other players, distorting the results of any application that uses this synchronization model by always forcing
the other players to use the cheater’s state and time information.
if (Ti > Tr) then (Si = Si, Ti = Ti);
// for the case where the internal player (i) has
// a newer timestamp than the remote player (r), the
// internal player state and time is used.
else if (Ti<Tr) then (Si = Sr, Ti = Tr);
// for the case where the internal player (i) has
// an older timestamp than the remote player (r), the
// remote player’s state and time is used.
// there is an interesting question as to what to do if
// the timestamps are identical. It is likely that many
// developers would stick with local state, although there
// is a reasonable argument that if the states are different
// and the timestamps the same, there is a problem.
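The pseudocode above can be restated as a small runnable sketch. The hit-point values are made up, but the spoofed-timestamp attack described in the text falls out directly: once a cheater reports an absurdly late time, every honest update afterwards is discarded as "stale."

```python
def newest_wins(local_state, local_time, remote_state, remote_time):
    """Keep whichever (state, time) pair carries the later timestamp."""
    if remote_time > local_time:
        return remote_state, remote_time
    return local_state, local_time   # ties keep the local state

# Honest exchange: the slightly newer remote update wins, as intended.
state, t = newest_wins({"hp": 90}, 100, {"hp": 85}, 101)

# A cheater reports a far-future timestamp with bogus state...
state, t = newest_wins(state, t, {"hp": 100000}, 10**9)

# ...and an honest later update is now silently thrown away.
honest = newest_wins(state, t, {"hp": 80}, 102)
```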
Average—The players average both the time and state information that they
receive from each other. There are various weighting schemes that can be used
to bias data based on how recent the information is both according to its timestamp and the actual, local time. It is quite possible for a cheater to abuse these
systems by providing data that is far outside the ordinary behavior of a legitimate player (altering state information to be very different will cause its value
to dominate the information from the other players—that is, if the cheater is
only supposed to have 100 hit points, setting her own hit points to 100,000 will
result in an average of (100,000+100)/2, or 50,050 hit points).
Ti = AVERAGE{Tx}; // where {Tx} is the time for each player
Si = AVERAGE{Sx}; // where {Sx} is the state for each player
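A few lines of code make the averaging exploit concrete; the numbers mirror the 100 versus 100,000 hit-point example above:

```python
def average_sync(values):
    """Naive averaging synchronization over each player's reported value."""
    return sum(values) / len(values)

# Two honest players whose views have drifted slightly:
fair = average_sync([100, 102])        # converges near the true value

# A cheater reports 100,000 hit points; the outlier dominates the average,
# exactly the (100,000 + 100) / 2 = 50,050 case described in the text.
cheated = average_sync([100, 100000])
```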
Vote—Players use a voting scheme to determine the local time and state. These
systems can also use a weighting scheme. One of the security measures used
with these types of systems is to throw out outlying results (if there are multiple players). Collusion can be a problem for both vote-based and averaging systems. An important consideration for voting systems is that there be reasonably
good time synchronization between the players. In order to have workable
votes, all players need to participate with a vote for a given game “tick.”
Ti = VOTE{Tx}; // where {Tx} is the time for each player
Si = VOTE(Sx); // where {Sx} is the state for each player
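One plausible way (a hypothetical sketch, not a prescribed implementation) to combine voting with the outlier rejection mentioned above is to take the median of the reported values and discard anything too far from it:

```python
from statistics import median

def vote_sync(values, tolerance):
    """Vote-style synchronization: take the median of the players' reports
    and discard any report farther than `tolerance` from it (outlier
    rejection), then average what survives."""
    m = median(values)
    accepted = [v for v in values if abs(v - m) <= tolerance]
    return sum(accepted) / len(accepted)

# Three players vote on a shared value; the cheater's wild report is
# thrown out instead of dragging the result with it, unlike averaging.
tick = vote_sync([100, 101, 100000], tolerance=10)
```

Note the collusion caveat from the text still applies: if two of three players conspire, the median itself moves and the honest player becomes the "outlier."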
Internal Authoritative—In this type of scheme, players “trust” themselves
more than other players for data about themselves. The different portions of
the game state are updated based upon what the data is and who is doing the
updating. “Neutral” state and time data (information not associated with a
specific player) is updated using an averaging or voting scheme:
// In general, a player’s game state is the state of
// the player’s own information, such as her own status,
// (internalPS) and the other players’ views of their own status.
GameState(playerx) = {externalPS1, externalPS2, …, internalPSx, …};
// the game state for each player x uses the state
// information from each other player for that player’s
// state and her own data for herself.
// for example
Game(player1) = (internalPS1, externalPS2);
// Player 1’s view of the game state with player 2
// providing the state updates for player 2 and player 1
// providing the updates for player 1.
Game(player2) = (externalPS1,internalPS2);
// Player 2’s view of the game state.
External Authoritative—One of the ways to address cheating is to trust “the
other guy.” In this model, the player trusts herself for data on everyone but herself. If there are more than two players, the player’s internal state and time is
based on an average or voting scheme of the external players:
// In general, a player’s game state is the state of the
// other players’ view of their information (externalPS).
GameState(playerx) = {internalPS1, internalPS2, …, externalPSx, …};
// the game state for each player x uses the state information from the
// external players for her state and uses her view for their state.
// for example
Game(player1) = (externalPS1,internalPS2);
// Player 1’s view of the game state with player 2 providing the
// state updates for player 1 and player 1 providing the updates for
// player 2.
Game(player2) = (internalPS1, externalPS2);
// Player 2’s view of the game state.
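The two authoritative schemes can be contrasted in one small sketch (the player names and hit-point values are invented). Each player holds a view of both players' data; the merge policy decides whose report counts for whom:

```python
def merge_authoritative(local_view, remote_view, trust_internal, me, other):
    """Build a merged game state from two players' views.

    trust_internal=True  -> internal authoritative: each player's own
                            report is used for her own data.
    trust_internal=False -> external authoritative: each player's data
                            comes from the *other* player's view.
    """
    if trust_internal:
        return {me: local_view[me], other: remote_view[other]}
    return {me: remote_view[me], other: local_view[other]}

# Player 1's and Player 2's (slightly inconsistent) views of the game:
p1_view = {"p1": {"hp": 100}, "p2": {"hp": 90}}
p2_view = {"p1": {"hp": 95},  "p2": {"hp": 92}}

internal = merge_authoritative(p1_view, p2_view, True,  "p1", "p2")
external = merge_authoritative(p1_view, p2_view, False, "p1", "p2")
```

The security trade-off is visible in the data flow: internal authoritative lets each player assert her own status (easy to cheat about yourself), while external authoritative makes the opponent the witness to your status (easy for a hostile opponent to cheat about you).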
All real-time network game schemes share one challenge: handling the synchronization between the game’s actual local state and its displayed state. This is a
concern when games are played over a network because the network lag can introduce discrepancies in time in addition to any differences in time because of clock
differences in each player’s game platform. In general, developers have two competing goals: a smooth presentation and an accurate reflection of the actual game
state. The danger of erring too far on the side of smoothness is that important,
current state information that could affect game play is not available to the player’s
display. Also, player control inputs may not accurately reflect the game actions that
they intended because the display state that they see and are reacting to is not the
actual, local game state.
Another common challenge for networking systems is how to handle scenarios
where one player gets out of synchronization with the other players. It is very common to defer to the out-of-synch user to keep the displayed game smooth for that
player. This choice leads to many of the more obvious attacks against multi-player
games. In these “standbying” and “bridging” attacks, players intentionally break
their connection to push the system from its regular operation into its “synch
recovery” mode. These attacks are simple to implement: just install a switch that physically disconnects the game computer or console from the network.
Many games have asymmetric information: There is state information that is
known to some, but not all, players. Players have partial information about the global
game state. The most familiar example is card games. Players are dealt cards face
down that are not known to the other players. A well-formed game has the characteristic that when a player sees an action from another player, this player can determine if the action is consistent with the game’s rules and its current state, even if she
cannot determine exactly whether the action is legal. Once again with card games,
when a player privately passes a card to another player, the receiving player can
check that the passed cards are not cards she already knows could not have come
from that source, even if she cannot validate that the sending player legally
had those cards. Hidden information can be verified only after the game is over (or
when the hidden information no longer has an impact on the game—such as when
all players are dealt a new hand in cards).
Although real-time verification of fair play is ideal, because games include partial
information, retrospective verification is sometimes the only option available.
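One standard technique for retrospective verification (a general cryptographic idiom, not something specific to this book's tools) is a commit-reveal scheme: a player publishes a salted hash of her hidden hand up front, then reveals the hand and salt after the game so everyone can check that she never changed it:

```python
import hashlib
import secrets

def commit(hand: str) -> tuple[str, str]:
    """Commit to a hidden hand: publish the digest now, keep the salt."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + hand).encode()).hexdigest()
    return digest, salt

def verify(digest: str, salt: str, hand: str) -> bool:
    """After the game, anyone can check the revealed hand against the
    digest published before play began."""
    return hashlib.sha256((salt + hand).encode()).hexdigest() == digest

digest, salt = commit("AS KH 7C")           # published at deal time
ok = verify(digest, salt, "AS KH 7C")       # honest reveal checks out
tampered = verify(digest, salt, "AS AH AC") # a swapped hand is caught
```

The salt matters: without it, an opponent could hash all possible hands and recover the hidden cards from the digest alone.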
In general, the problem with state-based networking is that:
Games are distributed transaction systems, not distributed state systems.
The rules are the key. As you can see from Figure 13.3, if a game uses state-based synchronization, cheating is very easy because the game’s multi-player
system exists independently of and underneath the game rules. The incoming state
from a remote player is implicitly trusted by the very nature of the communication
system: The game’s rules are not part of the state synchronization process. Even if
the communication path itself is encrypted, a malicious player can use a tool like
Cheat Engine to manipulate her local game state and trust in the distributed state
system to push the corrupted data to the other players, and it is even easier if she
can get to the cleartext messages.
In general, it is better to disrupt the game experience for the out-of-synch user
and defer to the server or majority of users. In games where synchronization is critical, the game should roll back to the last shared “good state.” As we go down this
rabbit hole, we realize that a malicious player could attempt to attack the “last
good state” system; therefore this also needs to be designed to be secure. In order
to protect against attacks on the “last good state,” the players should immediately
confirm that they’ve received a version of the game’s state with the other players as
they move forward so that everyone knows the current “last good state” before one
of them can exploit this. One of the protocols, Strobe, from our anti-cheating software, SecurePlay, is actually designed to provide a secure synchronization process
for peer-to-peer or less trusted client-server systems: It delivers a “secure tick” to
stop several attacks on time in games.
The other standard approach to multi-player gaming is to implement a client-server design with the server being trusted by all of the players, as illustrated in
Figure 13.4.
Trusting the server is secure—at least as long as you and your players both trust
the server (see the section, “Security, Trust, and Server Architectures,” later in this
chapter, for further discussion of trust and server architectures). Control or actions
are sent to a central server for processing. The server then returns state updates to
the client. The archetype of this approach is a MUD where players enter raw telnet
text and send it to the server, which in turn returns new state information. It is technically possible to implement a multi-player game solely via exchange of control
information. The main limitation of this approach is that there are often substantial
differences between computers in their raw control data (for example, encoding of
characters, line termination, resolution and mapping of mouse position information,
FIGURE 13.4 Client with authoritative server networking
and so on). Also, different game players may want to use different types of control
input devices (mice, keyboard, trackballs, and D-pad controllers) with different
formats and content for the raw data provided by each. It is unnecessarily burdensome to force the game to understand all of the different valid control systems.
The main problem with the authoritative server model comes from cheating by
developers. The “cheat” I am talking about is that too often developers wind up
implementing state and rules on the client-side and using a state synchronization
approach to update the server. This is often done for legitimate performance reasons,
but leads right back to the “trusted remote data” scenario that you thought you had
escaped by using this networking strategy.
The remaining choice for multi-player gaming is to exchange actions instead of
state or control information over the network. Action-based game play networking
(see Figure 13.5) has a lot of advantages. Game actions are almost always bandwidth
efficient, as they reflect player choices that are limited by the inherent abstraction
of all games. Architecturally, a nice aspect of action-based networking is that it can
operate in either a client-server or peer-to-peer network, leaving more flexibility for
game developers.
FIGURE 13.5 Action-based networking
Actions are selected prior to the invocation of a game’s rules. This has a
substantial security advantage, as the pre-existing rules validation and processing
system becomes part of the networked game’s security system. Control-based client-server networking has some of the advantages of this approach, as control information is converted into actions at the server (where it is validated); however,
state information is passed down to the player client where it is accepted without
verification. By exchanging action information with remote players or servers, the
game’s own rules can be used to protect against cheating. Also, the architecture is
nicely independent of the location of the player or the game’s underlying communications (local or remote, client or server or peer), making testing easier by allowing the game rules to be fully tested through its API. Logging and replay features are
also easy to add when using action-based networking, as are metrics and tracking for
performance monitoring and debugging.
It is fairly easy to see why action-based networking is secure by looking at play-by-mail chess. In chess, a player sends his moves to the other player. The receiving player:
1. Receives the incoming action.
2. Determines whose turn it is (can any action occur now from the specified player?).
3. Looks at the board and determines if the piece is available to move.
4. Determines if the move is legal (can the player take the specified action?).
5. Moves the piece.
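The receiving player's checks can be sketched as code. This toy model is entirely invented for illustration (a single pawn-like piece per player on a one-dimensional board, not chess); the point is the order of the checks, with the move applied only after everything validates:

```python
def receive_move(board, turn, player, src, dst):
    """Validate and apply an incoming move, mirroring the steps above."""
    if player != turn:                        # step 2: whose turn is it?
        return False, "not your turn"
    if board.get(src) != player:              # step 3: is the piece there?
        return False, "no such piece"
    if abs(dst - src) != 1 or dst in board:   # step 4: is the move legal?
        return False, "illegal move"
    board[dst] = board.pop(src)               # step 5: move the piece
    return True, "ok"

# Squares are numbered 0..7; each player owns one piece.
board = {0: "white", 7: "black"}
ok, msg = receive_move(board, "white", "white", 0, 1)   # accepted
bad, msg2 = receive_move(board, "black", "black", 7, 5) # rejected: too far
```

Because validation happens on the receiver's own copy of the state, a cheating sender gains nothing by claiming an impossible move; it simply never gets applied.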
In state-based networking, chess would be played by sending a new chess board
or by sending the moved piece’s new position (along with any removal of an
opponent’s piece). The receiver has a much more difficult time determining if the
move is valid because she basically has to reconstruct the move or look at a list of
all available moves and see if the selected one is included. For a computer game
with more complicated actions and multiple active players or “pieces,” this problem rapidly becomes intractable.
Formally, there are two basic types of action-based networking messages:
PlayerID,Action + Parameters, ActionInitiationTime;
PlayerID,OldGameState,Action + Parameters, ActionInitiationTime;
Under action-based networking, the model is essentially the same as for chess:
1. Receive the incoming action from a player (local or remote).
2. Determine if the player can take that action given the current game state.
3. Implement the action (use the action and its parameters in conjunction
with the game rules to determine the new game state).
4. Update the state.
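These four steps can be sketched with a betting action reminiscent of the Fable 2 pub-games example earlier in the chapter; the message layout and field names are assumptions for illustration, loosely following the (PlayerID, Action + Parameters, Time) form:

```python
def process_action(state, message):
    """Receive an action message, validate it against the rules for the
    current state, apply it, and return the new state."""
    player, action, params, when = message
    # Step 2: can this player take this action given the current state?
    if state["turn"] != player:
        return state, False
    if action == "bet" and params["amount"] > state["chips"][player]:
        return state, False               # the rules reject an unaffordable bet
    # Steps 3 and 4: apply the action and produce the updated state.
    new_state = dict(state, chips=dict(state["chips"]))
    new_state["chips"][player] -= params["amount"]
    new_state["pot"] = state["pot"] + params["amount"]
    new_state["turn"] = "p2" if player == "p1" else "p1"
    return new_state, True

state = {"turn": "p1", "pot": 0, "chips": {"p1": 60, "p2": 60}}
state, ok = process_action(state, ("p1", "bet", {"amount": 60}, 1))
# A "change her bet to 600" message fails step 2: she cannot afford it.
state, cheat = process_action(state, ("p2", "bet", {"amount": 600}, 2))
```

Because the bet is validated against the authoritative state at the moment the action arrives, the race condition that let a player win as if she bet 600 while risking only 60 has no place to hide.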
It is important to note that there are really two types of rules processing. First,
there are the rules that determine if a specific player can take a specific action given
the current game state. This is an area where programmers often make errors.
Often, the problems occur because the developers don’t assume someone would
attack the control interface or the network directly and drive the game faster than
normal. This type of action overrun attack is a fairly common form of network
attack. The second type of game rules are those that take an input action and its
parameters and determine the outcome to update the game’s state.
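One hedged sketch of a defense against the action-overrun attack described above: gate every incoming action on the time since that player's previous action, before the rules ever see it. The one-action-per-second interval is an arbitrary example value; a real game would tie it to its own tick rate:

```python
def make_rate_gate(min_interval):
    """Return a check that rejects actions arriving faster than the game's
    design allows (a simple per-player rate limit)."""
    last_seen = {}
    def allow(player, action_time):
        prev = last_seen.get(player)
        if prev is not None and action_time - prev < min_interval:
            return False        # action arrived too soon: drop it
        last_seen[player] = action_time
        return True
    return allow

allow = make_rate_gate(min_interval=1.0)
first = allow("p1", 10.0)   # accepted
burst = allow("p1", 10.2)   # rejected: faster than the interface permits
later = allow("p1", 11.5)   # accepted again after a full interval
```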
One of the challenges with implementing action-based networking is maintaining synchronization between players. This is not a problem for state-based
networking systems. In addition to being easy to implement, state-based synchronization schemes can be fairly “sloppy,” especially if they use an averaging scheme
to synchronize. The various players will continually converge towards a shared
state... and it almost doesn’t matter if the different players’ states are ever
exactly the same. This approach is used, for performance and simplicity of development, in large-scale simulations where cheating and security issues aren’t a problem.
Protecting Games: A Security Handbook for Game Developers and Publishers
Action-based networking first requires setting an initial state and time between
all of the game players. Time can be particularly difficult to coordinate. One approach is to handle network time as a sequence of synchronized “ticks” that all
players need to participate in before the game proceeds to the next tick.
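One way to realize these synchronized ticks is a lockstep barrier: the game advances to the next tick only when every player's action (or an explicit "pass") for the current tick has arrived. A minimal sketch, with illustrative names:

```python
def advance_tick(pending, expected_players):
    """Lockstep barrier. `pending` maps player_id -> the action that
    player submitted for the current tick. Returns the ordered list of
    actions to process once every expected player has reported, or
    None while we must keep waiting."""
    if set(pending) != set(expected_players):
        return None  # still waiting on at least one player
    # Deterministic order, so every peer applies actions identically.
    return [pending[p] for p in sorted(expected_players)]
```

In a real game the waiting case would trigger timeouts and disconnect handling; a peer that simply stops submitting actions must not be allowed to stall the game forever.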
Historically, the main focus of anti-cheating techniques has been on client-side security tools. These software techniques, libraries, and even standalone applications
“look” for malicious code and alterations to the operating environment for the game
application. These tools have been reasonably effective: There is only so much work
a hacker will do when they are attacking a game “just for fun.”
The game industry is changing rapidly. It is much larger than when most of these anti-cheating tools were created, and hackers are now breaking games to make money.
The computer industry is also undergoing a bit of a revolution. Virtualization tools,
like VMware and the open source product Xen, are changing IT. At the same time,
hackers have moved to rootkits as a way to avoid detection.
Rhetoric aside, the two technologies are similar. Virtualization wraps an operating system instance and its associated applications in an isolated shell that is unaware that it is in a shared environment under the control of another application. Rootkits are applications that use very low-level utilities to hide themselves from ordinary operating system monitoring tools. SoftICE, an early Windows tool, hid itself from the operating system as part of its function as a powerful platform debugger, and was used to attack Diablo.
Most anti-cheating products are reactive; they detect specific cheats only after
they have been identified. Like anti-virus software, they don’t actually search for new
attacks; they look for signatures of known attack applications. This distinction is important. The company or group that keeps the game security software up to date
needs to have an actual copy of the attack software in order to create a signature for
it. If the attack software is changed, the signature needs to be changed as well. Even
worse, a completely different program that exploits the same weakness also will
need to be retrieved by the security company and a signature created for it as well.
The key challenge that these tools face is that it is inherently difficult to separate
legitimate programs, like mouse or keyboard controllers, from malicious programs
that are virtually identical in function. After all, what is a “bot” but an exceptionally
clever input device?
Chapter 13 Cheating 101
When cheats are done “for fun,” they are widely distributed and easily detected by
a game company’s “Cheat Surveillance System” (the game’s online forums and fan
sites). When cheating a game becomes serious business, hackers don’t share their
attacks or sell them for a nominal fee; they profit from them directly. I have been told
anecdotally that there are proprietary bots used by gold farmers for many MMOs that
are not widely distributed, but used only as a tool for professionals.
Detection of serious cheats can be difficult. An outsourced employee at a company that made an online poker calculator player aid (a tool that automatically tracks the "rake" taken by the poker site operator) inserted a rootkit into the application [7]. The malicious program installed a key-logger and other programs useful for looting the poker
player accounts. This rootkit was still not detected by any traditional security software
months after its insertion—I suspect, because it was neither widely distributed into
the general population nor obvious in its nefarious activities.
Virtualization provides a powerful avenue for attacking games. Just like other applications, software security tools live inside the virtualized environment and cannot
detect that they are not part of the platform’s “real” operating environment. As software developers create more and more powerful virtualization tools, it is highly likely
that some of these virtualization management, security, and testing tools will be used
for malicious purposes, just as the SoftICE Windows testing tool was turned into a
game hacking tool. Combining rootkits and virtualization is a plausible security nightmare with the recent demonstration of a rootkit that could move an operating system into a virtualized shell without being detected, leaving the rootkit in control of the machine [8].
Targeted malicious code is a growing problem for the security software industry.
Previously, hackers would create viruses and other malware just to spread them to as
many computers as possible. Recently, online criminals have begun to target specific,
lucrative targets: companies or even individuals. The reasonable revenues and
exceptionally low risk make all forms of online gaming a very tempting opportunity.
Virtualization will also allow hackers to more effectively and inexpensively scale
automated gold farming, pokerbot, and other systems to attack online games.
Almost every multi-player game uses a central server in its design. Even peer-to-peer games include a minimal server system for matchmaking and storing persistent data and rankings. The standard assumption is that the server is trustworthy.
This is the view of the developer, of course. Players may have a different view, and
rightly so, in some cases. Servers can be hacked or, even worse, a malicious insider
may be abusing the game for his own benefit as happened at the online poker site
UltimateBet.com [9]. In this case, the malicious insider was using the fact that the
game state was available on the game server to read other players’ hidden cards—
and, as a result, make extremely profitable wagers. Online gambling sites are not the
only victims. A Shanda Interactive vice president and two accomplices were
arrested for creating and selling virtual items in the MMO Legend of Mir 2 [10]. The
only reason that they were caught was that they chose to create and sell exceptionally rare items. They might never have been caught if they had chosen to sell widely
available, but still valuable and profitable, game currency as has happened at several
other online games.
In addition to the “trusted server” and simple “peer-to-peer” models, there are
at least three other architectures to consider: Trusted Third Party, Blind Service,
and Collaborative Game Play.
FIGURE 13.6 Trusted third-party architecture
The Trusted Third Party model (see Figure 13.6) is an independent service
provider who has no interest in the game’s operation or outcome. The third-party
provider does not run the game, but either audits its behavior, or, more powerfully,
implements key portions of the game itself to ensure its integrity. This scenario is
the gaming analog of a real estate escrow agent who mediates part of a sales transaction to ensure money is transferred appropriately between buyer and seller. For
MMOs, an escrow agent could be used to handle all asset transfers (and not just for
sales between players as some Real Money Transaction (RMT) providers are doing
today). Conceivably, these third-party providers could roll dice, resolve combat, or
even host the entire game. The success of the Trusted Third Party comes from
proving its independence from the players and game operator, both technically
and from a business perspective. Poker is an example of this architecture already
being used for gaming. The players are not playing against the house, as in a casino,
but against each other. Thus, the poker operator has no vested interest in the
game’s outcome. The challenge comes from enforcing this separation. If an employee of the poker operator has a vested interest in the game’s
outcome, as in the UltimateBet example, the model is undermined. Just as in
promotions and contests, there are real benefits to the integrity of a game from
prohibiting game company employees (or their friends and families) from playing
the game (a topic that I will be revisiting in Chapter 25).
FIGURE 13.7 Blind service architecture
A variant on the Trusted Third Party model is to have an untrusted third party.
The Blind Service model, illustrated in Figure 13.7, is a scenario where there are
multiple service providers that can be chosen at random by the game participants.
Ideally, the blind service would not even know who the players are. If there are
enough blind service providers and the players use an intelligent randomization
scheme for selecting the provider, the system can be secure. The nearest analogy is
the Tor anonymous web browsing service [11]. The problem with this model is to
find a way to make the set of anonymous providers large enough and make the service economically viable for a game operation. The system also needs a mechanism
to communicate some information between the game operator and the blind
service and players. Depending on the specific system and business model, cryptographic techniques, such as public key cryptography, may be helpful.
From an anti-cheating perspective, the ideal is to be “N-1 Secure.” If there are
N players in a game, the game should be fair if at least one of them is honest (or,
at least not part of the same cheating team). Action-based networking actually
addresses a portion of this architecture, and is all that is needed for games like
chess. The combination of exchanging information by actions with local verification of rules by each player means that the only role of a central server is to adjudicate disputes (see Figure 13.8). Players can independently rebuild the changes in the
game’s state over time by looking at all of the players’ actions. If the previously
recorded state is not identical to (or at least consistent with) the reconstructed state,
some form of manipulation or corruption has occurred. My company, IT
GlobalSecure, took this basic concept and extended it to cover more types of games
and other game interactions, such as fair random numbers, handling hidden information, and securely synchronizing network time with our SecurePlay software.
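The dispute-adjudication idea can be sketched in a few lines: replay the action log from the agreed initial state and compare a digest of the reconstruction against the recorded state. The rules function and action format below are hypothetical placeholders, not SecurePlay's actual API:

```python
import hashlib
import json

def apply_action(state, action):
    """Hypothetical rules function: each action sets one key of the state."""
    new_state = dict(state)
    new_state[action["key"]] = action["value"]
    return new_state

def state_digest(state):
    # Canonical serialization so every player computes the same digest.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def audit(initial_state, action_log, recorded_digest):
    """Replay the full action log from the agreed initial state and
    compare the reconstruction against the recorded state digest."""
    state = initial_state
    for action in action_log:
        state = apply_action(state, action)
    return state_digest(state) == recorded_digest
```

If `audit` returns False, some form of manipulation or corruption has occurred, and the adjudicating server can examine the individual actions to find it.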
FIGURE 13.8 Collaborative game security architecture
Issues of trust are important for game developers, game operators, and game
players. Payment processors, regulators, and law enforcement may all be concerned
with the integrity of a game for a number of reasons.
One advantage of this architecture is that security evaluation can be made
much simpler. Because each party to the game has an independent copy of the
rules, independent game implementations can be used by each participant as long
as everyone follows the same API. This could have some very interesting benefits
for casino games, skill games, and contests where criminals can currently target
game software for attack and there are ongoing concerns about game developers as
potential threats.
Fair random number generation is a really challenging problem for multi-player
games. When we play games face-to-face, we have standard products (dice and
cards) and standard procedures (shuffle, cut, and dice cups) that ensure integrity.
If you go to a casino, a massive portion of the security design for games is around
ensuring the integrity of random events. Slot machines, the most familiar and
trusted form of automated random number generation, use real random noise
(almost always) to generate random bits and the systems are sealed and have their
code evaluated line by line—and they’ve still been attacked [12]. In the CARRDS
model, “random” is highlighted because it creates its own set of issues and was one
of the motivations for creating our SecurePlay software.
An example of the blind service model, described previously, has been used for
years by board gamers who play by mail to roll dice fairly. The players designate a
stock symbol and future date and use the cents portion of the price as a random
number. (Electronic Arts, ticker ERTS, closed at 47.06 on Wednesday, 20 August 2008, so the random number would be 06.)
There are actually several distinct types of random number systems that are of
concern in computer games:
Private Random with Replacement—The random value is known only to the
individual player, but is drawn from a random sample, like dice, where random
values can reappear.
Public Random with Replacement—The random value is known to all players
and the random values can reappear.
Public Random without Replacement—The random value is known to all
players and the random pool is sampled without replacement. For example,
cards from a deck, where random values cannot reappear.
Private Random without Replacement (Separate Random Pool)—The
random value is known to only an individual player and the random pool is
sampled without replacement, but the random pool is not shared across players (as in the game Magic, where each player has her own deck of cards).
Private Random without Replacement (Shared Random Pool)—The random
value is known only to an individual player and the random pool is sampled
without replacement. The random pool is shared like cards dealt by a dealer.
The question in each is how to generate a fair random event, preferably in a way
that is N-1 secure. It is possible to use a trusted third party for random numbers
(interestingly, this has not been tried for online gambling to my knowledge).
The random numbers could be encrypted as they are provided to the individual
players for private random numbers. The sole problem with this approach is that
you have to trust the trusted third party.
Collaborative random number generation is an effective solution. Instead of
involving a third party, each participant can create a random number, share it with
the others, and combine the results:
SharedRandom = (Random1 + Random2 + Random3 + ... + RandomN) mod Z;
// where each RandomX is a contribution from one participant and Z
// is the range of values desired as well as the range for each contribution
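In Python, the combination step is a single line. It is N-1 secure in the sense that the sum modulo Z is uniformly distributed as long as any one contribution is uniformly random:

```python
def shared_random(contributions, z):
    """Combine one value from each participant into a shared random
    value in the range 0..z-1. The result is fair as long as at least
    one contribution is uniformly random (N-1 secure)."""
    return sum(contributions) % z
```

For example, with contributions 3, 5, and 11 and a six-sided range, the shared value is (3 + 5 + 11) mod 6 = 1.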
If everyone is honest, this system works well. The problem comes from the contributions not being simultaneous. If you have ever played Rock-Paper-Scissors
with a child, you’ve seen this problem—the child sees the value you’ve selected and
somehow their hand slips into the advantageous value. Our SecurePlay software
addresses this problem by creating a “logically simultaneous” action; we can use
irreversible transforms to get things started:
for each Player j {
    Transformj = IrreversibleTransform(Randomj, padding);
    // each player computes an irreversible transform of her
    // random value with arbitrary padding appended to it.
}
Next, each player shares this information with the others and once everyone
confirms that they’ve received the transform, each reveals the random value to the
others, which can then be verified:
for each Player j {
    AllegedRandomj = Randomj, padding;
    // each player shares her random value with padding
    if (Transformj == IrreversibleTransform(AllegedRandomj)) {
        use Randomj; // as described above
    } else {
        call Police; // or other action
    }
}
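In practice, the irreversible transform is a cryptographic commitment. Here is a minimal commit-reveal sketch using SHA-256; the function names are illustrative, and this is not SecurePlay's actual construction:

```python
import hashlib
import secrets

def commit(value):
    """Commit phase: publish an irreversible transform (here, SHA-256)
    of the random value plus secret padding; keep both secret for now."""
    padding = secrets.token_hex(16)
    transform = hashlib.sha256(f"{value}:{padding}".encode()).hexdigest()
    return transform, (value, padding)

def verify(transform, value, padding):
    # Reveal phase: check the revealed value and padding against the
    # previously published transform.
    return hashlib.sha256(f"{value}:{padding}".encode()).hexdigest() == transform

def play_round(values, z):
    commitments = [commit(v) for v in values]
    # ...all transforms are exchanged and receipt is confirmed, then
    # each player reveals (value, padding) and everyone verifies...
    for transform, (value, padding) in commitments:
        if not verify(transform, value, padding):
            raise RuntimeError("cheating detected")  # "call Police"
    return sum(v for _, (v, _) in commitments) % z
```

The padding prevents a cheater from brute-forcing the transform of a small value range (there are only Z possible hashes of a bare value), which is why the transform is "logically simultaneous": no one can choose a value after seeing the others.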
This core algorithm, with slight variations, can be used to cover most of the
required random scenarios. For public random events, the process works exactly as
described. For random events without replacement, the range (Z) is reduced by one
as the random pool is depleted. Private random events can simply be handled by
having the player who is keeping the random event secret (Player j) not reveal her value (AllegedRandomj) until after the game is over, when it can be verified.
The one case where there is a bit of a problem is when there is a private random
without replacement and a shared random pool. In order to generate a draw from
a random pool without replacement, the dealer needs to retain knowledge of what
has been dealt, which allows her to share that information with others. The dealer
cannot affect the random event outcome, but she can disclose the private random
information to other players—often a problem in games, as it can give advantage.
A variation on this system can be used with a server/dealer to generate large
numbers of random events quickly and if the dealer is somewhat trusted. Instead of
directly generating random events, the players can all contribute towards building
a collaborative random seed that is used with a deterministic random number generator to create a series of random events.
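A sketch of the seeded-dealer variation, with illustrative names; Python's built-in generator stands in for the deterministic random number generator, though a real system would use a cryptographically stronger one:

```python
import hashlib
import random

def seeded_dealer(contributions):
    """Mix every player's contribution into a single seed, then drive a
    deterministic generator from it. Any player can later re-run the
    same generator and audit every 'deal' the dealer produced."""
    digest = hashlib.sha256(":".join(map(str, contributions)).encode()).hexdigest()
    return random.Random(int(digest, 16))
```

Because the event stream is fully determined by the shared seed, a player who kept the contributions can reproduce it after the game:

```python
rng_a = seeded_dealer([17, 4, 99])
rng_b = seeded_dealer([17, 4, 99])
assert [rng_a.randrange(52) for _ in range(5)] == [rng_b.randrange(52) for _ in range(5)]
```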
As seen with the UltimateBet case, trusting a central server at all can be a
potential threat. It may make sense to alter a game’s design so that the game is
naturally N-1 Secure. Poker, for example, could be changed slightly so that players’
private cards are drawn from a separate, private deck (like in Magic: The Gathering).
This would slightly shift the distribution of hands in the game and make it possible
for a player to actually be dealt a natural five-of-a-kind hand, but separate decks
would stop attacks from a compromised central server.
Player collusion is one of the more troublesome problems for all forms of gaming.
Many, if not most, multi-player games are built on the premise of player competition. Collusion can occur “in band” using the game’s communication services or
signaled via game play or “out of band” using external communication systems, like
a telephone. Although it is easy to say that collusion is against the game’s rules (or
terms of service), in practice it is very difficult to eliminate collusive behavior. It is
even worse online where players cannot be physically monitored and they have easy
access to alternate communications channels to share information and coordinate their actions.
Collusion needs to be considered very carefully in game design. In one of my all-time favorite game design failures, a Swedish lottery firm, Svenska Spel, designed a
game called Limbo where players guessed a number between 1 and 99,999. If you
selected the lowest number that no one else had chosen, you won a prize. If two or
more people chose the same number, they “bounced” and were disqualified.
Collusion was against the rules (as was making too many bets by a single person).
You can probably guess what happened: Players formed large syndicates to
systematically guess different numbers to increase their odds of winning and won
hundreds of thousands of kroner.
The contest was rather quickly and abruptly withdrawn [13]. Closure of the game
cost Svenska Spel one hundred million kroner (over $15 million) per year.
If you think this is funny, consider all of the games that forbid gold farming,
but allow inter-player item exchanges.
Many MMOs have a limited form of player vs. player (PvP) conflict, where you
can fight players on other teams, but not your own. Members of “Something
Awful” (an interesting collection of people who play a number of games, often in
ways not intended by their designers or other players) organized a pair of guilds
in World of Warcraft with the intent of using both guilds against other players and
groups. World of Warcraft has two “teams,” the Horde and the Alliance. Players are
allowed to attack members of the other team, but not their own side (Horde players
could attack Alliance players, but not other Horde players and vice versa). The
“Something Awful” guilds colluded for their mutual benefit to take advantage of the game’s economy. The “Something Awful” Horde guild would escort
and aid their Alliance counterparts who entered Horde territory since they were
immune to attack by other Horde players. They also used the paired guilds to act as
a protection racket: While an ordinary Alliance player can do nothing to another
Alliance player, a friendly Horde player (or group) can attack Alliance players
mercilessly [14].
There are three main types of collusion:
External Affiliation—In this case the colluding team has an external relationship that, in and of itself, gives them an advantage. Poker players who have a
shared bank roll have an inherent advantage over the others at their table, even
if they do nothing active in the game but share their winnings after the game is
over. This type of collusion is essentially impossible to detect from the game’s
or game operator’s perspective.
Shared Knowledge—The next level of collusion is sharing knowledge between
the conspirators. In poker again, this is easy to appreciate. When two players
share the values of their hidden cards, they will have a substantial advantage in
their game play.
Coordinated Action—As seen in the Limbo lottery game and the World of
Warcraft examples, active collusion can be terribly destructive to a game. Many
of the tournament attacks, discussed in Chapter 20, are based on players
colluding instead of competing.
You can randomize the matchmaking process to make it less likely that teams
can exploit their relationship. In some face-to-face games, it is possible to restrict
communications to avoid covert signaling. In their highest-level tournaments, bridge players use cards to indicate their bids so that vocal cues are impossible.
Completely stopping collusion is virtually impossible. Online poker sites claim
to look for team play by analyzing game play patterns. This may have some effect,
but it is unlikely to even slow down a serious collusion conspiracy.
For online games, at least, the best anti-collusion strategy may be to legalize collusion or otherwise alter game play so that collusion confers no significant advantage.
Sometimes game security problems are a nuisance, and sometimes they are devastating. Game play and balance problems, no matter how bad they are, can almost
always be repaired. Players may gripe or even quit when their favorite characters are
“nerfed” (had their abilities reduced due to a software update/rules change) or
favorite tactics thwarted. These are serious customer service problems, but they
rarely are a threat to the business.
Some game security problems can ruin your business.
The more closely game play is tied to your business model, the more risk you
face. For a game that is sold as a product, the major threat is piracy. For a subscription game, the threats are unauthorized, unfunded subscriptions and excessive
play. At the other end of the spectrum, any form of cheating or abuse can undermine the success of a gambling or skill game operation.
Free-to-play games and other games with hybrid business/game play models,
like Second Life and Project Entropia, face some interesting challenges. Free-to-play
games make their money by selling in-game virtual assets to players in lieu of charging subscriptions. This approach is becoming the dominant business model used in
Asian games and is rapidly entering Western markets with games like Nexon’s
MapleStory and Three Rings’ Puzzle Pirates. Although many players do not pay to
play these games at all, some can spend hundreds of dollars on virtual items—
much more than they would spend in a subscription game. Also, because the cost
to play is zero, it is easier to reach a wider potential audience.
Unlike a subscription game, the game systems and real money payment systems are closely intertwined. Attacks on the game can damage the company’s business model directly. Recently, I was told about a licensed MMO that had a serious
problem in which malicious users could hack the game and simply give themselves
all of the virtual items they wanted by using a SQL injection attack to directly edit
their inventory—even getting items not yet available to other players because they
had not been enabled by the game operator.
Several online games and virtual worlds have embraced the notion of user-created content and trading as central to their business model, including IMVU, Second
Life, and Entropia Universe (formerly Project Entropia). In these communities, the
players can use tools to create and trade virtual items that can be bought and sold
for real money (or, rather, virtual currency that can be converted into real money).
Entropia Universe has gone a step further by directly auctioning off a massive space
station for $100,000 [15] as well as five banking licenses for a total of $404,000 [16].
The fact that virtual currency can be converted into real money raises the stakes
for security. Second Life faced an attack that allowed a hacker to steal players’ money
just by “walking by” a modified QuickTime file (moving their game character/
avatar close enough to an object in the game that included the hacked QuickTime
file so that the game would cause the file to be loaded). For a while, Apple’s
QuickTime file format was vulnerable to an attack that gave hackers the ability to
insert and execute malicious code on the target computer, if the hacker could get
the altered file onto the computer. Because players in Second Life can create virtual
items, including items that incorporate QuickTime files, this attack could be
launched against players dynamically in the virtual world. The hack basically forced
the victim’s Second Life account to automatically transfer the game’s currency,
Linden Dollars, to the thief [17].
MindArk’s Entropia Universe has the feature that many items that players find
useful are consumed or damaged over time and need to be repaired or replaced: the
essence of the company’s business model. Instead of a hack, players found an exploit that allowed them to guarantee that they would earn more money than it cost
them to play [18]. This exploit undermines the virtual world’s business model, much
as a slot machine that pays out more than it takes in is quite harmful to a casino’s
bottom line, if not its popularity.
What if your partners rebel? Activision and Infinity Ward faced a rebellion over
the security failures of the World War II first person shooter, Call of Duty 2. The
rebels were not ordinary players, but highly motivated gamers who hosted servers
for the game. They called a strike to stop operating their servers until the game’s
anti-cheat systems were fixed [19]. Even more costly, Korean developer MGame’s licensee in China, CDC Games, stopped paying royalties until MGame fixed piracy and other problems with Yulgang, an MMO [20], a dispute that was eventually settled [21].
1. B. Sinclair (2006), “EA Sells Tiger Woods Cheats on XBL,”
2. Virtual World News (2007), “Blogging the AGDC: Coming to America: Nexon’s Micro-Transaction Revolution”
3. R. Miller (2008), “Fable 2 Pub Games Exploit Will Make You Very, Very Rich,”
4. S. Schuster (2008), “AoC Demonologist Exploit Fixed in Recent Patch,”
5. M. Pritchard (2000), “How to Hurt the Hackers: The Scoop on Internet Cheating and How You Can
Combat It,”
6. P. Klepek (2005), “Battlefield Server Delisting—EA and DICE Respond to Recent Server Modding by
7. R. Naraine (2006), “Rootkit Infiltrates Online Poker Software,”
8. R. Naraine (2006), “VM Rootkits: The Next Big Threat?,”
9. J. McCarthy (2008), “Online Casino Admits Insiders Changed Software to Cheat,”
10. A. Xu (2006), “Three Men Tried for Selling Online Game Weapons,” via (original link expired)
11. Wikipedia (2008), “Tor (Anonymity Network),”
12. S. Bourie (2008), “The World’s Greatest Slot Cheat?,”
13. J. Savage (2007), “Game Stopped After Cheat Allegations,”
14. Joe Blancato (2006), “Diseased Cur,”
15. BBC (2005), “Gamer Buys Virtual Space Station,”
16. MindArk (2007), “Virtual Banking Licenses Sold!,”
17. Internet Security For Your Macintosh blog (2007), “Second Life Hack Steals Real Life Money,”
18. Entropia Universe Examined blog (2007), “The Final Entropia Exploit,”
19. IWNation (2005), “The Time Has Come for Action, We Will Be Ignored No Longer,”
20. L. Alexander (2007), “CDC Sues MGame for Security, Tech Support Failures,”
21. M. Greene (2008), “MGame, CDC Settle Yulgang Dispute,”
App Attacks: State, Data, Asset, and Code Vulnerabilities and Countermeasures
To abusively paraphrase Sutton’s Law [1], hackers attack the local application because “that’s where the game is.” And, it is convenient. And developers leave the application a vulnerable target. Both hackers and developers are lazy.
Author’s Warning: I name and discuss several programs used for cheating games
during this chapter and elsewhere in the book. I am in no way recommending that
people use these tools. Even if the tools themselves are safe, game cheating and
hacking tools are often provided with free “extra” features, like key-loggers
and other additions that may be used to hack your computer, steal your passwords, and otherwise ruin your day. Even compiling these tools from source code
may be risky, because you aren’t actually going to review all the code, are you? And,
even if you did, do you really think you could find serious, malicious code?
There are many ways to attack a game via the local application. Hackers can
modify the game’s state, its data, memory, and assets, or even the application itself.
There are countermeasures for many of these attacks. However, many of these
countermeasures can themselves be circumvented because they, too, are applications that run on the player’s computer. Local hacking is one of the reasons that
developers have moved towards server-based game designs or should consider action-based networking, as discussed in Chapter 13.
The easiest attack target is the computer’s memory. System memory needs to be
used by all games and can be easily analyzed, so attackers don’t need to do any new
work to attack different games. In the hands of even a “YouTube-educated” game
cheater, free debuggers and standard utilities can easily read out the computer’s active memory and rapidly and empirically determine where key game data is stored.
Chapter 14 App Attacks: State, Data, Asset, and Code Vulnerabilities and Countermeasures
This allows the cheater to directly change the computer’s memory contents. At the
time of my writing this paragraph, there are 60 YouTube video demonstrations [2] of
hacks on the simple, quite fun Flash-based role-playing game Sonny [3]. These are
attacks on a free game!
All computer applications write data into RAM while they are running. For an ordinary user, the operating system controls access to RAM so
that the applications don’t interfere with each other. Applications themselves create a structured “memory map” so that the application can easily and quickly find
its own data. At their most basic level, memory editors are tools that can look at the
entire RAM of a computer and manually change or lock any memory value.
Smarter memory editors can learn the memory maps for different applications and
automatically remember where certain data is located. The most generic attack on
a game using a memory editor is pretty simple:
1. Start the game.
2. Launch the memory editor.
3. Look for the value in memory.
4. Change the value using the memory editor.
5. See if the value has been changed in the game.
6. Cheat and be merry (shame on you!).
In practice, this may be difficult, as a given number or string sequence may
occur in multiple places in memory. Also, games may try to hide values by changing how they are stored (via encryption or obfuscation). The more robust way to
attack a game takes only slightly more effort:
1. Start the game.
2. Launch the memory editor.
3. Tell the memory editor to “Save State” (save a complete image of the
computer’s memory or the area allocated to the game).
4. Do something in the game that changes the value of interest.
5. Tell the memory editor to look for differences in state (compared to the
previously saved state).
6. Try editing these values.
7. See if the value has been changed in the game.
8. Repeat until satisfied.
9. Cheat and be merry (shame on you!).
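To see why this technique works so reliably, the "Save State"/compare step can be simulated in a few lines over two toy memory snapshots (the memory layout here is entirely made up):

```python
def diff_scan(before, after):
    """Return the offsets whose byte value changed between two memory
    snapshots -- the compare step a memory editor uses to home in on
    where a game variable actually lives."""
    return [i for i, (a, b) in enumerate(zip(before, after)) if a != b]

# Toy "game memory": a gold counter lives somewhere in a 32-byte block.
before = bytearray(32)
before[12] = 100                    # 100 gold before the snapshot
after = bytearray(before)
after[12] = 90                      # spend 10 gold in-game, snapshot again
candidates = diff_scan(before, after)
```

A couple of spend/compare iterations typically narrows thousands of candidate addresses down to one, which is exactly why defenders resort to encrypting or obfuscating in-memory values.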
The power of this technique is that it will work for pretty much any game, at
least on a PC or hacked console.
In many multi-player games, a hacker does not even need to modify the game’s state; simply being
able to read state information provides a substantial advantage. Radar
attacks and ESP (extrasensory perception) both, at their core, prey on a weakness
found in many multi-player games—the need to pre-load remote data. Most
modern games are played as real-time systems and the overwhelming goal of game
developers is to provide a smooth experience for players.
Because of the drive to provide a “smooth” game play experience, the player’s
client application knows information about the game’s state that the player shouldn’t
know. Several years ago, Dark Age of Camelot players could read location and useful map information about other players by analyzing the MMO’s network
protocol (clearly, a state-based system)4. Apparently, these attacks were stopped
when Dark Age of Camelot’s developer, Mythic Entertainment (now EA Mythic),
began encrypting the network packets. Conceptually, an attacker should have been
able to attack the game “above” the network layer (after the packets had been decrypted at the client) and extract the same information, but I have not seen any
indications that this has occurred.
These attacks can affect other genres including first person shooters such as
SWAT 4 5 and, more recently, Team Fortress 2 6. Radar attacks and wallhacks, to be
discussed shortly, can look similar and have identical game-cheating consequences,
but their implementation is different. Radar attacks are dependent on reading out
game state information, whereas wallhacks attack the display subsystem. ESP attacks directly extract the game state and provide their own display.
Because Adobe’s Flash provides tools that make using state-based synchronization very easy, a number of developers of basic multi-player Flash games,
including card games, simply replicate the game’s entire state at each player’s location.
Once hackers extract this information, they have a huge advantage (as you can
imagine if you were playing poker or bridge and could see all of your opponents’ cards).
The best way to prevent a game client from disclosing data is to not provide it in the
first place. Hackers can’t extract information that they can’t access. As is discussed
in Chapter 17, game design itself can be an opportunity for game designers to help
reduce security risks.
Chapter 14 App Attacks: State, Data, Asset, and Code Vulnerabilities and Countermeasures
If it is necessary for the local game client to store a hidden game state, the only
real choice is data obfuscation. Data obfuscation attempts to hide the format
and location of important game data from memory editors. As discussed previously, memory editors don’t care where game data is stored in memory. Rather,
memory editors look for known values and changes in values. Sometimes, developers use encryption techniques to hide the data. However, because the key has to
be available on the platform as well as the data, the security, such as it is, comes
from how well any keys are hidden and how difficult it is for the hacker to reverse-engineer the encryption system’s design. So, instead of using the term “encryption,”
which implies powerful security for many readers, it is more appropriate to refer to
such systems as “data obfuscators.”
“Encryption” is not the only option. According to the documentation for the
Poke memory editor, Blizzard’s Diablo II used a system where they stored critical
game data in multiple locations and, if only one was changed, the Diablo II storage
system used the smaller one and changed both values to the same, smaller value7.
Once this scheme was identified, it was fairly trivial for hackers to defeat.
A slightly more complicated system involves using “differential storage”—
where game data is split into two elements via addition or the “exclusive or” function so that individual memory locations cannot be usefully read by a memory editor:
// GD – game data to be stored
// compute a random value, Random1
L1 = GD xor Random1; // combine the game data with the random value
L2 = Random1; // store the random value
GD = L1 xor L2; // retrieve the game data
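As one concrete reading of this pseudocode, the XOR split can be wrapped in a small class so that no single memory word holds the plain value. This is only a sketch under my own assumptions (a non-negative 32-bit value; the class name is invented), not the implementation of any particular engine:

```python
import secrets

class SplitInt:
    """Hold a non-negative 32-bit value as two XOR shares (L1 and L2 above)."""

    def __init__(self, value: int):
        self.set(value)

    def set(self, value: int) -> None:
        self._l2 = secrets.randbits(32)   # L2 = Random1
        self._l1 = value ^ self._l2       # L1 = GD xor Random1

    def get(self) -> int:
        return self._l1 ^ self._l2        # GD = L1 xor L2

gold = SplitInt(1000)
print(gold.get())  # → 1000, though neither share alone reveals the value
```

Re-randomizing on every write (as `set` does here) is what keeps the hacker’s saved-state diffs from converging on a stable pair of locations.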
This technique is simple and fast. If a fair amount of data is being updated
regularly, it will be difficult for a hacker to easily isolate correct pairs of storage registers. Eventually, a patient hacker may be able to change memory locations one at
a time and steadily work out the game’s differential memory map.
One can add an anti-tamper element to this system by incorporating a checksum with the game data:
// GD – game data to be stored
// GDCS – checksum on game data
// compute a random value, Random1
L1 = (GD,GDCS) xor Random1; // combine the data and checksum with the random value
L2 = Random1; // store the random value
(GD,GDCS) = L1 xor L2; // retrieve the game data and checksum
if [Verify (GD,GDCS) == false] then {do countermeasures;} else {play;}
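The checksum variant can be sketched the same way. Here a CRC32 stands in for the checksum (an arbitrary choice; any checksum function would do) and values are assumed to fit in 32 unsigned bits, so that a direct memory edit of either stored share trips the verify step:

```python
import secrets
import zlib

class CheckedSplitInt:
    """XOR-split storage of (GD, GDCS): a 32-bit value plus its CRC32 checksum."""

    def __init__(self, value: int):
        self.set(value)

    def set(self, value: int) -> None:
        gdcs = zlib.crc32(value.to_bytes(4, "little"))
        packed = (value << 32) | gdcs     # (GD, GDCS) packed into one 64-bit word
        self._l2 = secrets.randbits(64)   # L2 = Random1
        self._l1 = packed ^ self._l2      # L1 = (GD, GDCS) xor Random1

    def get(self) -> int:
        packed = self._l1 ^ self._l2      # retrieve (GD, GDCS)
        value, gdcs = packed >> 32, packed & 0xFFFFFFFF
        if zlib.crc32(value.to_bytes(4, "little")) != gdcs:
            raise ValueError("tamper detected")  # do countermeasures
        return value

hp = CheckedSplitInt(100)
print(hp.get())  # → 100
```

Flipping any bit of either stored share corrupts the recovered value/checksum pair, so the next read raises the tamper condition.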
The challenge for any data obfuscator is that it must work quite fast and, essentially, force the hacker to reverse-engineer the design of the game’s obfuscation
code to defeat it. Also, it is essential that the data obfuscator is easy to use. For
languages such as C++, this would typically mean creating a custom data type or
template and for many other languages the obfuscated data would be stored in a
class or struct data type (when I created a data obfuscator for Flash, I created
a set of classes that mirrored the language’s basic data types). It is important that the
obfuscator not be called as a dynamic library at run-time, but integrated into
the code during compilation. If the obfuscator is called as an external library, it can
be “plugged out” so that it is bypassed entirely, as discussed in the next section.
Also, beware of optimizing compilers, as they may optimize your data and code
obfuscation techniques right out of your application. Ideally, the data obfuscator
could be regularly updated by simply recompiling the game with a new obfuscator
version and no modifications would need to be made to any of the remaining game
code. The following are a number of data-protection techniques:
Encrypt—Encrypting data with a static key (hardwired or game instance
based) simply makes the static key of the encryption function the primary
target for attack. Cryptographic modes are important (key-additive, cipher-block
chaining, cipher feedback, and so on), because some may allow the hacker to
change the encrypted data in a useful way without ever breaking the cryptography. It is important to recognize that the technique does not have to be
cryptographically strong; it just has to effectively obscure the data from direct
extraction by a memory editor. Attackers must find the function method or
algorithm and key or blindly modify data in the platform’s memory.
Data Hash—Even a simple, unkeyed data hash, if implemented properly, can
be effective. A keyed hash requires the attacker to isolate the key, the code for
the hash function, and its algorithm.
Indirect Data Store—Instead of storing data in a memory location, store it
indirectly via an object pointer. This can be used against static memory map
tools by forcing an attacker to dynamically read-out memory.
Split Data Store—Split the data via some symmetric function (mod 2 arithmetic, addition) so that instead of storing a value (V), a random value (R) and
a split value (R+V) are stored. An attacker must find the correct sets of memory locations in order to effectively modify the game data.
Differential Data/Data Chaining—Instead of storing data directly, store it as
an offset split of another data object. The data chains cannot be too long or
complicated or a single data change may force many other data objects to be
changed and have an adverse performance impact.
Honeytrap Memory—A honeytrap is stored data that has a hard-coded,
known value that is periodically checked. If the honeytrap data has been
changed, the application knows it has been hacked.
Soft Failures—Detected hacks are not punished immediately; punishment is delayed or applied at random.
Combined Techniques—These techniques can be combined to substantially
increase the difficulty for the hackers.
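The honeytrap technique from the list above might look like the following sketch, where a decoy field deliberately mirrors a tempting value (all names and values here are invented):

```python
class GameState:
    def __init__(self):
        self.gold = 500
        # Honeytrap: a decoy that mirrors the gold value but is never used
        # by game logic; its only job is to catch blanket memory edits.
        self._trap = 500

def honeytrap_intact(state: GameState) -> bool:
    """Periodic check: the trap must still hold its hard-coded value."""
    return state._trap == 500

state = GameState()
# A memory editor doing "replace every 500 with 9999" also rewrites the decoy:
state.gold = 9999
state._trap = 9999
print(honeytrap_intact(state))  # → False: tampering detected
```

Combined with the soft-failure idea above, the game would not react to the tripped trap right away, making it harder for the hacker to learn which edit was detected.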
It is also important to note that these techniques do not actually allow you to
prevent modification of game data, but make such modifications detectable by
your game code. Ideally, your selected data obfuscation strategy will detect any
unauthorized data modifications so that they will not be accepted by the game.
In a computer, code is data. Just as all of the data in a game is available to a memory editor, the game’s code is also vulnerable. The most vulnerable portion of a
game’s code is its internal configuration data. These data constants are the embodiment of the game’s rules and can have a substantial impact on game play. Aaron
Portnoy and Ali Rizvi-Santiago of TippingPoint DVLabs attacked the configuration
data for Disney’s Pirates of the Caribbean MMO game client by altering the game’s
“jump height” and “ship speed” constants. These changes allowed their characters
to jump ridiculous heights and their pirate ships to move incredibly fast 8. In this
case, the security analysts were not attacking raw binary data, but the byte-code
generated by the Python scripting language. The attack was successful because the
game uses state-based networking so that the actual jump or movement data was
exchanged and blindly accepted by the remote players’ computers. This type of
attack can be thwarted. CCP Games’ EVE Online also uses Python for its client, but,
because game state is totally controlled on the game’s servers, a recent hack that
exposed the game client code caused no problem for the game’s security 9.
More serious code attacks typically target external libraries (DLLs for Windows
computers, and SOs for Linux). A hacker can insert a “shim” library with the same
name as the actual library and redirect and edit data going between the library and
the main application. Eyebeam Openlab’s tool OGLE (OpenGLExtractor) demonstrates the basic technique used for graphics engine hacks and DLL proxies to attack
games. OGLE was not developed with malicious intent, but rather to support the
extraction of 3D images for other uses. The tool uses DLL proxy techniques to
extract OpenGL graphics information and derive a 3D scene so that it can be sent
to a 3D image editor or 3D printer10.
A number of game hacks use similar techniques to cheat in games. By themselves,
game graphics engines do not pose a threat. However, developers sometimes rely on the
graphics engine to enforce game logic and this can cause problems. For example,
game engines sometimes use the graphics engine to determine whether a player or
item is visible to another player. The problem comes from letting the graphics
engine “see” a player or item that would otherwise be invisible—a tool similar to
OGLE could then be used to highlight the character, make the intervening walls
invisible, and so on, in order to circumvent the game designer’s intent to hide the
asset. The “best” solution would be that a local copy of a game doesn’t know anything it doesn’t need to. If an “invisible” asset needs to be stored locally, it should
be protected as well as possible.
To prevent easy exploitation of this information, the data should not leave the
game engine, but the scene should be managed by an intermediary program that
determines visibility and other attributes based on game rules and level design (that
is, making a wall invisible for rendering does not expose things behind it). This is
similar to a dynamic loader but instead of simply focusing on improving the graphics engine’s performance, the “safe loader” makes sure that anything that should
not be visible isn’t loaded to the graphics engine. It may even load alternate assets
based on game state and rules for the assets that are loaded. The safe loader should
have the side benefit of reducing the number of assets that the graphics engine needs to
render—there is no need to waste cycles on rendering invisible items. Also, the safe
loader could allow for more intelligent camera systems (for example, wall graphics could be replaced by suitable “graphic stubs” if they block the main game action
from the camera’s current position—just as found in TV or film sets where the
director removes walls and props from sets when they interfere with observing a
scene as the director wishes).
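The “safe loader” idea can be illustrated with a toy filter in Python. The room-based visibility rule below is purely hypothetical and stands in for whatever the game’s real rules and level design dictate:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    room: str

class SafeLoader:
    """Only assets the game rules deem visible ever reach the graphics engine."""

    def __init__(self, player_room: str):
        self.player_room = player_room

    def visible(self, asset: Asset) -> bool:
        # Stand-in rule: hide anything outside the player's current room.
        return asset.room == self.player_room

    def load_scene(self, assets: list) -> list:
        return [a for a in assets if self.visible(a)]

scene = [Asset("guard", "hall"), Asset("treasure", "vault")]
loader = SafeLoader(player_room="hall")
print([a.name for a in loader.load_scene(scene)])  # → ['guard']
```

Because the “treasure” asset is never handed to the renderer, no wallhack or OGLE-style extraction on the graphics side can reveal it.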
It is even possible to locate hidden assets and information at a lower level.
Hackers can find the underlying tables that map where game functions are located
and redirect the function calls to alternative functions. This does require a fair
amount of sophistication; however, the same tools that allow assembly language
debugging and software reverse-engineering (disassemblers and decompilers,
among others) make implementing these attacks easier as software engineering
tools get better (and more widely accessible and sometimes even free). Some anti-cheating tools will detect that software is running in debug mode as a way of
attempting to fight these hacks. However, just as with memory editors, the debugging applications can also run independently of the application.
These attacks are very specific to an application and its architecture. Network
stacks, data stores, and other code that is tempting to store in a shared library may
become a target by allowing the hacker to break the application into convenient
pieces, just as the developer did.
Protecting game code, just like protecting game data, is a very hard problem. After
all, the code has to be present for the game to run. As discussed in Chapter 5, some
games attempt to protect their code by not actually installing it with the game, but
rather accessing it from a DVD or over the network when the application is executed.
At this point, the code is loaded into the computer’s memory and it is vulnerable to
being read, modified, or stored, just as the game’s data is.
Some security techniques, such as data obfuscation, depend on the attacker not
being able to (easily) reverse-engineer the application’s design. Code obfuscators
work by making an application more difficult to reverse-engineer. Because the application does need to operate, what these tools do is introduce complexity into the
executable program that makes it very difficult for standard disassemblers, decompilers, or parsers to convert into a higher-level language that is easy to analyze (this
technique is also used for dynamic languages’ byte code and simple scripting languages like JavaScript). The key advantage of obfuscators is that they can be highly
automated. However, obfuscators can’t introduce too much complexity or else they
will cause an unacceptable deterioration in performance. Games often have portions of their code that are very performance sensitive. Therefore, it is critical that
obfuscators can be tuned so that they do not affect performance in key code areas.
Blind security functions and anti-tamper software do affect the underlying
operation of the game’s software. At their core, these tools introduce checks embedded deeply within the application to ensure that its data is correct and that its
code is operating properly. Static data may be checked by loading the suspect static
data in otherwise unrelated functions and comparing it with a pre-stored hash
function or other validation:
// pre-stored hash of configuration data – CDH
if (CDH != Hash(ConfigurationData)) then {tamper processing} else {proceed normally};
It is also possible to check that functions are working correctly. Developers can
pre-store known results:
// pre-store an (input, result) pair for a key function
if (result != KeyFunction(input)) then {tamper processing} else {proceed normally};
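A minimal sketch of the pre-stored input/result check, with an invented `key_function` standing in for whatever routine actually matters to the game:

```python
def key_function(x: int) -> int:
    """Stand-in for a game-critical routine (e.g., a damage formula)."""
    return 3 * x + 7

# (input, result) pair pre-computed and baked in at build time.
KNOWN_INPUT, KNOWN_RESULT = 12, 43

def self_check() -> bool:
    """True only if key_function still behaves as it did at build time."""
    return key_function(KNOWN_INPUT) == KNOWN_RESULT

print(self_check())  # → True
```

If a hacker patches `key_function` to return something more favorable, the self-check fails the next time it runs from any of the places it is embedded.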
This approach needs to be done cautiously, as hackers can replace the verification function with a series of NOOP (no operation) assembly language instructions
or their equivalent if they can locate the verification functions. This usually means
that these techniques work better if they are called in numerous places in the code.
If a function is called only once or in only one way, it is easier for a hacker to target
and remove or bypass.
It is tempting for developers to wrap security and other functions up into a
single line of code. This is, after all, standard coding practice. Unfortunately, this
entirely valid coding technique makes reverse-engineering much easier. For example,
it is very common to consolidate digital signature verification into a single function:
if (verify(data) == true) then {do good stuff} else {tamper processing};
A lazy hacker will simply alter the verify function to always return true, no
matter how corrupt the data is. To detect such attacks, the developer needs to
validate that the function is working properly:
// first test the verify function by verifying corrupted data
if (verify(data + junk) == true) then {tamper processing};
else if (verify(data) == true) then {do good stuff} else {tamper processing};
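The corrupted-data probe can be sketched concretely. Here HMAC-SHA256 plays the role of `verify`; the key and data below are invented for illustration:

```python
import hashlib
import hmac

KEY = b"embedded-example-key"  # hypothetical; a real key would itself be obfuscated

def verify(data: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(hmac.new(KEY, data, hashlib.sha256).digest(), tag)

def checked_verify(data: bytes, tag: bytes) -> bool:
    # Step 1: probe with deliberately corrupted data. A verify() patched
    # to always return True "verifies" the junk and is caught here.
    if verify(data + b"junk", tag):
        raise RuntimeError("verify() has been tampered with")
    # Step 2: the real check.
    return verify(data, tag)

data = b"player-profile"
tag = hmac.new(KEY, data, hashlib.sha256).digest()
print(checked_verify(data, tag))  # → True
```

The probe only helps if it is sprinkled through many call sites; a single, well-labeled `checked_verify` is just as easy to patch out as the original.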
These anti-tamper techniques can be quite powerful, but they are a lot of work
for a developer to implement unless the developer has some powerful scripting or
macro language capability to automate the integration of these features into the
application. In general, these anti-tamper tools are going to be more effective if they
operate on source code rather than on object code or an existing executable application that needs protection.
Memory editor attacks are not well-suited for console games, as consoles usually are
sealed systems that do not allow access to their memory state and do not include or
allow the installation of memory editors or other utilities (at least until they are hacked).
For console games, hackers can often attack and modify the save game file.
Most consoles use Flash memory or other rewritable storage to store saved games.
The save game file can be attacked and modified, as discussed in Chapter 7. Hackers
modified the save game information to get infinite ammunition and missiles and
several other benefits in Metroid Prime Hunter, a first person shooter on Nintendo’s
DS handheld game console11. This “trainer” configuration change gave the cheater
a huge advantage when playing via the handheld’s WiFi network—an unfair
modification that seriously undermined the fun of playing the multi-player game12.
Once again, this attack is largely due to the use of state-based networking to
establish player configuration information.
Console games have typically used two types of memory: a DVD or other static media
to store the game itself and a much smaller Flash or EEPROM memory to store saved
games. Players are using both of these methods to transfer their personal player
profiles and save game information to other consoles and post results onto services
like Xbox Live.
However, some players have exploited weaknesses in the existing save game and
player mobility system (where players can use their account profile on multiple game
consoles) to cheat13. The easiest way to understand this is to think of the Save As
command in Word or other Office applications. Basically, some player excels at a
game. They make their saved game available to other players. These players launch
the game from its saved state and then use the “Save As” command (or, as it is actually implemented on a console, assign the saved game to another game save slot)
to transfer the game from its original owner to themselves.
So far, the main exploits of these shared saves have been to gain unearned
achievements on Xbox Live and, perhaps, to unlock game content and share customized data14. Without better control at the individual console or a more effective
central service, there is potential to use shared saves for more serious mischief.
It is possible to attack a game’s state without targeting the game’s state directly
via a memory editor or similar tool. Many games use the graphics display engine to
determine where players are and if they can be targeted. The simplest attack on the
graphics engine is to alter the visibility (or, more specifically, the transparency) of
the game’s scenery and walls—called a wallhack15. By making walls transparent,
other players or creatures that would ordinarily be hidden are revealed, making
them much easier targets. The same effect can be achieved by altering attributes of
the graphics in the game’s map file.
The “ultimate” level of graphics engine attack is to replace entire art assets.
Thus, instead of the ordinary player or creature that the game should use, a hacker
replaces these art and animation assets with ones of her choosing that make the
player more visible or an easier target—such as turning regular game characters
into big Bobbleheads.
Conceptually, the display system for a game should be distinct from the game’s
state. The game logic should determine what is visible, who can be shot, and how
one can move. In practice, many games merge the graphics engine and game engine
into one entity. Although this may have some performance benefits, it opens up the
aforementioned wallhacks and other exploits like “holes,” where players can get
behind or between graphical elements by abuse of the graphics engine.
A pure “Game Play” engine should be much less vulnerable to these problems, as
it is handling less data: Level models are simpler, player models are more abstract,
and so on. Also, a game play engine understands what a “wall” is in a formal way,
versus simply a polygonal mesh to be rendered.
By splitting the game play engine from the game presentation/graphics engine,
game developers might also have an easier time moving between computer or console platforms. Also, by decoupling game play from game presentation, developers
should be able to improve testing and scheduling.
Client-side problems are hard. As noted in several places within this chapter, changing the networking model to an action-based network architecture should help.
It is possible for the client application to authenticate map, asset, and save information, however. Anything that is loaded can be protected with a digital signature, a keyed hash
function, or a cryptographic checksum. All of the solutions are equivalent from a
security perspective. If hackers can determine the algorithm and find the associated
static or semi-static key used with the algorithm, they can either replace the key
with their own (the way to attack the digital signature) or use the key (which will
work for the keyed hash function or cryptographic checksum).
A very common design mistake is to use a regular hash function, such as MD5.
Hackers will test common hash functions, like MD4, MD5, and the various SHA
standards, over any loaded data to see if the hash values can be found within the application. MD5 is often used first, because it is probably the most widely available
hash function. This can even work on consoles, as seen with an at least partial hack
of Gears of War16.
More sophisticated hackers may alter the asset after its initial load by using a
memory editor to change the pointer for the asset file to a preferred, alternate asset
file. Thus, it is not usually sufficient to validate an asset only when it is initially
loaded, but it should be checked periodically, preferably every time it is used
(obviously, this has the potential to have a sizeable performance impact).
The process for secure loading is very closely related to secure bootstrapping
(see Chapter 7). A good secure loader design will also prevent rolling back to a
previous version of software or assets, a problem that has plagued Sony’s PSP 17.
From a performance perspective, using a regular cryptographic checksum,
sometimes called a Message Authentication Code (MAC), is probably best, as these
functions are faster than hash functions.
This system can be used to validate game saves, art assets, maps and levels, and
even game code. First, it is necessary to create the authentication word (AW):
AW(assetx) = securityfunction(assetx,secretkey);
store AW(assetx);
// store the authentication word (and actual asset) somewhere
Then, to verify the asset, either when it is initially loaded or some other time,
the authentication word is retrieved and it is compared with an alleged authentication word for the asset of interest:
AW(allegedassetx) = securityfunction(allegedassetx,secretkey); //or
AW(allegedassetx) = securityfunction(allegedassetx,publickey);
// for the case where digital signatures are used
if (AW(allegedassetx) != AW(assetx))
then { do tamperfunction;} else {process normally};
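As one possible realization of the securityfunction, an HMAC can serve as the authentication word; the key and asset bytes below are invented for illustration:

```python
import hashlib
import hmac

SECRET_KEY = b"per-title-secret"  # hypothetical embedded key

def authentication_word(asset: bytes) -> bytes:
    """AW(asset) = securityfunction(asset, secretkey); here, HMAC-SHA256."""
    return hmac.new(SECRET_KEY, asset, hashlib.sha256).digest()

asset = b"level-3-map-data"
stored_aw = authentication_word(asset)   # computed once and stored with the asset

# On load (and periodically afterwards), recompute and compare:
loaded = b"level-3-map-data"
print(hmac.compare_digest(authentication_word(loaded), stored_aw))  # → True

tampered = b"level-3-map-EDIT"
print(hmac.compare_digest(authentication_word(tampered), stored_aw))  # → False
```

Using `hmac.compare_digest` for the comparison avoids leaking information through timing, though on a hostile client the hidden key remains the weak point, as noted above.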
There is another use for this mathematical technique—blind authentication. It
can be desirable to attempt to authenticate data or code at a remote game player’s
location. In this case, keyed hash functions or cryptographic checksums are effective and digital signatures will not work.
The challenging party generates a secret key and sends it to the other participant. The challenging party also sends an identifier associated with the code or
data that is to be verified and must have a copy of the data:
ChallengeMessage = randomkey,assetidentifier;
// challenger generates random key and sends asset
// identifier to challenged party
Both parties compute the authentication word for the specified asset:
AW(asset(assetidentifier)) = securityfunction(asset(assetidentifier),randomkey);
The challenged party then sends the authentication word back to the challenger, who then compares it with her locally computed authentication word:
if (challengerAW != challengedAW)
then {take tamper measures} else {proceed normally};
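The whole challenge-response exchange might be sketched as follows, again using HMAC as the securityfunction and invented asset data:

```python
import hashlib
import hmac
import secrets

# Both parties hold a copy of the real asset (contents invented for illustration).
ASSETS = {"map01": b"map-geometry-bytes"}

def make_challenge() -> tuple:
    """Challenger generates a random key and names the asset to verify."""
    return secrets.token_bytes(16), "map01"

def respond(randomkey: bytes, asset_id: str, local_assets: dict) -> bytes:
    """Each party computes AW(asset(assetidentifier)) with the random key."""
    return hmac.new(randomkey, local_assets[asset_id], hashlib.sha256).digest()

key, asset_id = make_challenge()
challenger_aw = respond(key, asset_id, ASSETS)  # challenger's local copy
challenged_aw = respond(key, asset_id, ASSETS)  # honest client: identical bytes
print(hmac.compare_digest(challenger_aw, challenged_aw))  # → True

# A client with a modified asset produces a different authentication word:
modded = {"map01": b"modded-geometry"}
print(hmac.compare_digest(respond(key, asset_id, modded), challenger_aw))  # → False
```

The fresh random key per challenge is what prevents a cheater from simply replaying a previously captured authentication word.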
This process is not perfect, but it does at least ensure that the challenged party
has a copy of the valid asset. For a constrained platform, like a handheld console,
this may be sufficient. It is also very efficient for authenticating saved game data for
consoles, because most have some small amount of protected memory that can
hold a secret key that can be used to check the integrity of a stored game save file.
1. Wikipedia (2008), “Sutton’s Law,”
2. YouTube (2008), “YouTube Search: Sonny Hack,”
3. Armor Games (2008), “Sonny,”
4. RadarFTW (2005), “The Truth About Radar,”
5. sarzamineiran (2008), “SWAT 4 Aimbot/Wallhack/Radar Cheat,”
6. Smik3r (2008), “TF2 Hacks Aimbot NoSpread Scout Ownage,”
7. M. Anka (2007), “POKE,”
8. A. Portnoy, A. Rizvi-Santiago (2008), “Reverse-Engineering Dynamic Languages,”
9. M. Martin (2008), “CCP Plays Down EVE Online Source Code Leak,”
10. Eyebeam R&D (2006), “OGLE: The OpenGLExtractor,”
11. (2006), “New Trainer: Metroid Prime Hunters (+4),”
12. 4 color rebellion/Mitch (2006), “Warning: Incoming MPH Cheating,”
13. M. Nelson (2007), “Xbox LIVE Account Sharing and Gamesave Tampering (Don’t Do It),”
14. B. Kuchera (2007), “Microsoft Tries to Stamp Out Cheating, Hurts Enthusiast Sports Gamers Instead,”
15. Wikipedia (2008), “Wallhacking,”
16. mr hoodie lol (2007), “Gears of War Hacked,”
17. J. Ransom-Wiley (2007), “PSP Downgrader: 3.03 to 1.50 in 8 Simple Steps,”
Bots and Player Aids
The next stage of cheating beyond wallhacks, ESP, and radar is to use this
knowledge to augment player performance. After all, once you are playing a
game on a computer, it is a fairly small step to program the computer to play
the game on your behalf.
Game players have always used tools to help them play better, whether the
tools are legal or not. Bridge has its hand-ranking systems; blackjack has its basic
strategy1; and chess and go have endless volumes of analysis. Interesting borderline
cases exist, such as card counting where players use their memory to improve their
performance in a game—a tactic that some consider legal and other people view as cheating.
IS IT “HELP”
When do strategy and analytic aids cross over into cheating? When an individual
person is no longer setting the strategy or making a decision, but the game play is
driven by a machine or a team of people colluding together. The problem with
fighting player aids is that they are separate from the game and are therefore essentially impossible to detect. There are tons of solvers for the very popular numeric
puzzle game, Sudoku; the only way to even consider detecting a solver is to capture
the precise timing and order that a player enters numbers into the game state
array—both of which can be faked as well. Online poker faces a similar problem.
Programmers are building increasingly sophisticated programs that can play a reasonably strong game of poker (in fact, the Polaris Pokerbot won the 2008 Man vs.
Machine Poker Championship2). These automated play tools don’t actually need to
perform optimally to succeed; they simply need to outperform the majority of players in the majority of hands for the majority of money—poker farming anyone?
Chapter 15 Bots and Player Aids
Automated poker play strikes at the very heart of the online poker industry. If
players believe that they are not playing with other people (and they are losing), the
industry could be in real trouble. There have been several largely unsuccessful
experiments with using the skill game business model with first person shooters.
These types of games are particularly vulnerable to botting and if one of these services ever takes off, it would be interesting to see how long or successfully they
would be able to keep botters at bay.
Even traditional chess is not immune. Chess computers have been around for
years. However, in a recent case in India, a chess player was caught getting advice
from a partner with a chess computer and a Bluetooth connection3, resulting in
suggestions that future tournaments should be played in Faraday cages.
Some games directly support tools for automating portions of game play, but
as a feature, not as a cheat. These macros range from simple playback systems that
repeat sequences of key strokes to highly elaborate scripting languages. Game developers become concerned when these tools are too successful at game play either
when they undermine competition between players or allow totally automated
game play (for example, the Glider4 tool automates play for World of Warcraft and
is currently the subject of a major lawsuit about the legality of such third-party
tools5). Automated game play can be disruptive to other players or can be used to
over-efficiently exploit the game’s economy (see Chapter 22 on gold farming). In
many cases, the only real violation of the game rules is the fact of these game automation tools’ existence. They are not cheating or exploiting actual game systems,
but simply using a computer to play the game instead of a person.
In practice, player aids and macros are very difficult to detect because they are
not manipulating the game, only automating game play. Either the game operator
has to try to detect the player aid application (which doesn’t need to even be on the
same computer) or the macro program or else they need to detect automated
play—unfortunately, this is often little different than the behavior of a highly skilled player.
Some programs do more than play the game well by the rules. They take the
next step and cheat. Welcome to the world of aimbots and other bots. A simple bot
doesn’t really need to cheat. It can capture display information and use the same
data to determine its strategy and directly drive the mouse and keyboard or controller. However, computer actions and reactions are far faster than a player’s and
therefore the bot can perform much better (even without elaborate artificial intelligence programming).
More sophisticated bots combine wallhack, radar, ESP, and memory editors
with some AI programming and automated controls to play much better than a
human. Some even replace the game client entirely. Aimbots can find targets and
Protecting Games: A Security Handbook for Game Developers and Publishers
I am lazy. When someone contacts me about their game and wants to discuss security, the first thing I do is check out YouTube. Often, I’ll find wonderful demonstrations
and tutorials of hacks for the game. Almost as often, I’ll see examples of what are
pretty clearly “fake hacks” that are used to lure players to disclose their account
information and passwords. These fake hacks use memory editors and other tools,
like Photoshop, to con players by altering the game display on one computer. The
clearest way to demonstrate a cheat properly is to show the game display of more
than one player at the same time, ideally from two different monitors. The reason for
the separate displays is to show that other players are actually seeing the results of
the cheat and that it is really having an impact on game play. If this can’t be done, it
is actually better to show the perspective of the “cheating victim” rather than the
cheater—especially for attacks that involve interaction with other players or the game
environment like dupe attacks and speed hacks. This is not true for state disclosure
hacks like wallhacks, ESP, aimbots, and radar.
shoot better and faster than any player6. Call of Duty added an interesting feature
called “kill cam” that showed the final five seconds of a victim’s life from the perspective of the killer. Because tools like aimbots and
radar cause players to take rather unnatural actions (such as tracking targets
through walls), the kill cam was seen as an interesting anti-cheating tool—allowing
victims to detect their cheating killers from beyond the grave (or after the fight)7.
Automated play, combined with full state information and the ability to
simply edit the game’s state, can make for a devastating attack on a game. Real-time
games, like first person shooters, MMOs, and real-time strategy games, are all
particularly vulnerable to these attacks. After all, any game that favors speed over
thought is always going to favor a computer player.
Bots and game cheat tools are not just for fun; they are a serious business.
Glider, the rather well-known automation tool for World of Warcraft, has an Elite
version that costs $5 per month or $60 for a lifetime subscription8. There are several bots for NCsoft’s Lineage 2 that are sold on a subscription basis, including
L2Walker and L2Superman, the latter of which has a $7.50 per month subscription9 with upgrades
that add features and keep the tool ahead of the game’s cheat detection system (for
comparison, it costs $15 per month for a Lineage 2 subscription).
Chapter 15 Bots and Player Aids
Finally, it should be noted that serious game cheats have moved their hack
tools off of the computer. In Korea, commercial game hack tools use a smart USB
token that includes its own processor. To the PC, the device looks like a keyboard, mouse,
and video card (all of which can use USB interfaces), but processes the screen and
game data totally passively (at least from the PC’s perspective) to implement game
cheats. Some of these devices cost as much as $20010. This type of bot could actually be used very effectively against console games as well because it relies purely on
information from the game’s display and inputs from the game’s peripherals. Just
as cybercrime has moved towards advanced infrastructures and custom products
that put conventional IT security companies at a huge disadvantage, criminals,
gold farmers, and other serious exploiters targeting games are already developing
their own tools and methodologies to attack games more profitably. Advances in
virtualization from Xen and VMware are going to make creating very advanced and
virtually undetectable client-cheating tools easy.
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a widely used computer security technique that attempts to distinguish
people from computers11. CAPTCHAs are mostly used for login and registration
authentication, but some game developers have moved to using CAPTCHAs to
stop bots. CAPTCHAs work by providing words, math equations, or images that
are difficult for a computer to solve, but easy for a person to distinguish. The problem has grown as hackers have gotten better, and the CAPTCHAs have become
more difficult for even a human to use. On a troubling personal note, a regular
reader of my blog, PlayNoEvil, was unable to submit
comments for a while because my CAPTCHA system relied on colors. However, he
has impaired vision. (Fortunately, I could solve this problem by changing the
system’s settings.) I am having more and more difficulty getting through some of
these systems myself.
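The math-equation style of CAPTCHA is easy to sketch. The following is a minimal, hypothetical generator and checker (not any production system); the expected answer is kept server-side and never sent to the client:

```python
import random

def make_captcha():
    """Generate a simple arithmetic challenge and its expected answer."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"What is {a} + {b}?", a + b

def check_captcha(response, expected):
    """Accept the response only if it parses to the expected integer."""
    try:
        return int(str(response).strip()) == expected
    except ValueError:
        return False
```

Even this toy version illustrates the accessibility trade-off discussed above: any challenge hard enough to stop a script is also a hurdle for some legitimate players.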
Personally, I’ve found it more than a bit ironic that interactive entertainment
systems like games are even considering using CAPTCHAs, as this seems to be the
ultimate admission of a design flaw.
Once you are in the situation where security is a serious problem (as opposed
to my blog), CAPTCHAs tend to fall apart quickly. The simplest answer for an attacker is to hire someone to process the CAPTCHA or do so herself. Yes, you can
outsource CAPTCHA processing for $1 per 1,000 CAPTCHAs, and look at what you get12 (from B. Krebs’ “Web Fraud 2.0: Thwarting Anti-Spam Defenses”):
The quality of recognition is between 90 percent and 95 percent.
We support two-word CAPTCHAs.
We support mixed upper- and lowercase CAPTCHAs.
The volume that we can accept at any moment from new clients is between 500,000 and 1 million CAPTCHAs a day.
We automatically issue refunds for any CAPTCHAs that were solved in more
than 60 seconds.
We automatically return money for solved CAPTCHAs that include incorrect answers.
There is also the option to buy software that processes CAPTCHAs automatically13.
The dominant anti-cheating tools today are cheat detection systems. The most
well-known commercial products are Even Balance’s PunkBuster, nProtect’s
GameGuard, and AhnLab’s HackShield. Some game developers have chosen to
create their own tools including Valve Software’s Valve Anti-Cheat (VAC) and
Blizzard’s Warden. There are also several independent open source security projects. The single biggest advantage that these systems have is that they can be added
to a game anytime, even after it has been completed. In fact, Electronic Arts contracted with Even Balance to add PunkBuster to its MMO Ultima Online nine years
after the game launched, although it appears that the project was eventually put on
hold14. Cheat detection systems are essentially operated as a service. They need to
be constantly updated to identify the latest threats as well as to protect themselves
against hackers who choose to target the security tools directly.
In general, all of these tools work the same way. There is a security client
installed with the game client and a central security server that runs in parallel with
the game client and game server.
The security client creates a client ID and may use hardware-fingerprinting
techniques to identify the player’s computer. This client ID is used to register the
game client/security client pair with both the game server and the security server.
The security client has two major functions: It is responsible for analyzing the
player computer to identify any sort of threat (such as the radar, aimbot, and other
hacks, and, in some cases, key-loggers and other forms of malicious code that can
target the game). It is also responsible for reporting its status regularly to both the
FIGURE 15.1 Cheat detection system architecture
game client and security server—sending a “heartbeat” signal that it is still alive and operating correctly. (Note: The blind authentication technique discussed in Chapter 14 can also be used to verify the integrity of the security client.)
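A heartbeat is only useful if it cannot be trivially forged. The sketch below is a hypothetical illustration (not how any commercial product works): each heartbeat is signed with a per-session HMAC key and carries a monotonically increasing sequence number, so a spoofed or replayed “all is well” message is rejected:

```python
import hashlib
import hmac
import time

SESSION_KEY = b"per-session key provisioned at login"  # hypothetical

def make_heartbeat(client_id: str, seq: int) -> dict:
    """Security client side: emit a signed 'still alive' message."""
    payload = f"{client_id}:{seq}:{int(time.time())}"
    mac = hmac.new(SESSION_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": mac}

def verify_heartbeat(msg: dict, last_seq: int) -> bool:
    """Security server side: check the MAC, then require the sequence to advance."""
    expected = hmac.new(SESSION_KEY, msg["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["mac"]):
        return False  # forged or altered heartbeat
    _, seq, _ = msg["payload"].split(":")
    return int(seq) > last_seq  # reject replays of old heartbeats
```

Of course, if the attacker extracts the session key from the client’s memory, this scheme fails too—which is exactly the spoofing problem described later in this section.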
In general, cheat detection systems use the same techniques as anti-virus
programs: They scan the entire memory space of the computer looking for items of
interest. Instead of looking for worms or virus-infected programs, cheat detection
systems look for signatures associated with known cheat applications or malicious
libraries loaded in memory. If a cheat signature is identified, the security client
sends a notification to the game client and security server. The security client can
use whitelist and blacklist techniques to help with the forensic analysis of potential
cheat applications. Whitelists are known programs that have been determined to
not affect the security of the game and blacklisted programs are known cheating applications. The problem, of course, is that there are hundreds of thousands, if not
millions, of different programs that may be installed on a user computer.
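The whitelist/blacklist triage can be sketched as a hash lookup. The module contents and signature sets below are invented for illustration; real scanners match patterns inside process memory rather than hashing whole files:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical signature databases, distributed as hashes rather than names.
BLACKLIST = {fingerprint(b"contents of known-cheat.dll")}
WHITELIST = {fingerprint(b"contents of trusted-driver.dll")}

def classify_module(data: bytes) -> str:
    """Triage one loaded module: known cheat, known good, or needs analysis."""
    h = fingerprint(data)
    if h in BLACKLIST:
        return "cheat"      # notify the game client and security server
    if h in WHITELIST:
        return "trusted"
    return "unknown"        # candidate for forensic analysis
```

The “unknown” bucket is exactly the problem the text describes: with hundreds of thousands of possible programs on a user computer, most modules fall outside both lists.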
Also, game cheat writers, just like virus authors, use techniques to hide their
applications and obscure their signatures. World of Warcraft hackers used the Sony
BMG Rootkit to hide their attacks from Warden within weeks of the disclosure of
the rootkit15. It would also be possible for these cheat detection applications to
profile all of the programs installed on the player’s computer by scanning the PC’s
hard drive, even if the programs are not apparently being run while the game is
being played.
Because these security clients often report back information about the applications on the player computer, there are some rather serious concerns about privacy
that may create legal risks for the game operator or security firm. Most security clients do not return actual information about the applications that are
running on a PC, but rather send a hash value signature back to the central server
to minimize their privacy impact. Whether this is legally sufficient or not is a different question.
Hackers do not actually need to stop the security client’s operation, however.
Instead, they can spoof the security client’s communications to report that everything is okay to the game client and the security server, even if the security client has
been shut down or has detected a hack. The game client is a particular target for this
sort of attack: It is rarely designed with security in mind and the security client API
(application programming interface—the connection between the two applications) was likely integrated into the game client very late in the development
process. Because many games support player-to-player communications, it may be
possible for a hacker to send data that looks like a cheat as part of regular game
communications to trigger a penalty for another player16. After all, cheaters just
want to win, they do not care how they do so.
It is important that the security server and game server coordinate their actions.
After all, the cheater is really targeting the game client and game server and if the
cheater can somehow separate the security systems from the game systems, the
cheater will win, even if cheats are detected. Thus, it is critical that the client ID is
shared and used effectively by both servers.
A cheat detection system is more than its technological platform; it is really a
service. The security client is not a preemptive security tool; it can only profile
known cheats, just like an anti-virus product. Unlike anti-virus tools, however,
which can cover everyone with a Windows or OSX operating system, cheat signatures are almost totally game specific. Game publishers need to contract with the
security provider for each game for as long as they want the system to be kept up to date. For some licensed MMOs, the licensee also has to pay for the service.
(I think this is a bad business practice; adequate security should be included as part
of any licensing agreement.) The operational activities for a cheat detection system
service are as follows:
Surveillance—The CDS provider monitors cheater forums and hacker sites, as
well as getting information from the game operator about the latest cheats.
However, the criminalization of game hacking has meant that some of the most
damaging cheats are not being used by many people. This implies that the
most damaging cheats will not be found by this sort of surveillance. Anti-virus
companies are having the same problem with viruses, worms, and other malicious code being targeted at single companies or even individuals.
Collection—Once a new cheat is identified, the CDS provider needs to acquire
a copy. Both surveillance and collection require CDS provider personnel to be
able to infiltrate cheating communities.
Analysis—The CDS provider will disassemble, analyze, and determine the
risks for the cheat, whether it actually works, how the cheater attempted to protect it, and how to construct the best signature to identify the cheat.
Signature Development—The CDS provider may either create a simple new
signature to be added to the CDS security client signature database or require
an update to the security client to add a new scanning capability.
Distribution and Update—The CDS provider must then update the security
clients and/or their signature databases. It is important for the CDS provider to
ensure that all active security clients are fully updated.
The distribution, updates, and operation of the security client all have an impact on network bandwidth and available CPU resources at the player computer.
There have been a number of complaints, some quite serious, about the impact of these tools on the performance of the game that the tool is intended to protect.
One real weakness of the cheat detection system strategy is that it is rarely used
to improve the game, just patch over weaknesses. Once a cheat is identified, it
should also be assigned to the game’s ongoing support team to see if there is a way
to actually solve the underlying problem that allows the cheat.
Some skill game operators use cheat detection systems. As skill games, online
gambling, and other games-for-money businesses grow, this security strategy will
likely fail. Highly targeted, limited distribution cheats can be very profitable in
these games, just as they are for gold farmers.
As attackers get more sophisticated, the cost for an effective CDS will rise. This
may force game operators to bring the service in-house. The most costly portions
of the CDS are surveillance and collection. It may make sense for game operators
to carry out these activities themselves and only consider outsourcing the remaining CDS functions: analysis, signature development, and distribution.
Philosophically, the cheat detection system strategy essentially encourages
developer laziness. Rather than avoiding cheating problems from the beginning of
the game development process (and, at this point in the evolution of the game industry, many of the cheating problems are quite well known), game developers
simply leave cheating and the whole host of operational and support issues to those
people unlucky enough to still be on the project after the game is completed. Lack
of life cycle engineering and accountability for games is quite costly. The cheat
detection system strategy does work very well as a “belt and suspenders” tool to
augment strong security engineering throughout the design process.
As a final note, I have seen a couple of Asian online games experimenting with
bundling traditional security software with their game security tools. This is an intriguing idea, as game businesses are hit hard in the pocketbook by keyloggers and other viruses and malware.
1. M. Shackleford (2008), “How to Play Blackjack,”
2. (2008), “The Second Man vs. Machine Poker Championship,”
3. Australian IT (2006), “Bluetooth Chess Cheat Caught,”
4. MDY Industries, LLC (2008), “Glider FAQ,”
5. B. Duranske (2008), “World of Warcraft Glider Litigation Update: Final Briefing On Blizzard’s Request
for Injunction Filed,”
6. Smik3r (2008), “TF2 Hacks Aimbot NoSpread Scout Ownage,”
7. G. Kasavin (2003), “Call of Duty Review,” (page 2)
8. MDY Industries, LLC (2008), “Subscribe to Glider Elite,”
9. GoGYGO (2008), “GoGYGO Products,”
10. J. Cho (2006), “Mouse Plays When Gamer’s Away,”
11. Carnegie Mellon University (2008), “What Is a CAPTCHA?,”
12. B. Krebs (2008), “Web Fraud 2.0: Thwarting Anti-Spam Defenses,”
13. CL Auto Posting Tool (2008), “CL Auto Posting Tool,”
14. Electronic Arts (2006), “PunkBuster on Hold,”
15. R. Lemos (2005), “World of Warcraft Hackers Using Sony BMG Rootkit,”
16. Pansemuckl (2005), “The Unerring PunkBuster...,”
Network Attacks: Timing Attacks, Standbying, Bridging, and Race Conditions
The Internet is like the Wild West. Totally untamed and dangerous, we are all
told. Hackers, thieves, and criminals lurk around every corner. No one seems
to have told a lot of game developers, as they regularly leave their online
games wide open to all sorts of attacks.
Network attacks target game applications from their network interface. Once
again, attackers benefit from the habit of developers to start with a standalone,
single-player game and then add online play as a feature. Building a safe online game
requires a level of formality about time, transactions, and interactions that most
programmers are not used to.
Consoles and MMOs are the main targets. Typically, PC games are attacked
more directly through the game application.
Games are transactional systems. Unfortunately, game developers often fail to build
their games on a foundation of well-constructed transactions and attackers regularly exploit these poor implementations. Even simple stock market games can be
vulnerable. Market games are very popular with aspiring investors and the genre
has grown to encompass everything from celebrities to fashion and even the U.S.
Congress1. Trading is an easy game mechanic to understand and reasonably easy to
implement. Developers need to be careful, as CNBC found out with its “Million
Dollar Portfolio Challenge” in 2007 where players determined that they could post
virtual stock trades after the real market had closed. The technique was fairly simple—they entered their trades before the stock market closed at 4PM Eastern Time,
but did not execute them until after 4PM when the companies announced their
earnings and their stock prices jumped in after hours trading. If the stock didn’t
pop, no problem, the player simply didn’t complete the order2. The scandal was
eventually exposed, delaying the award of the top prize and giving a black eye to what would have otherwise been a very successful promotion. These types
of race conditions can also happen within a PC game, as Lionhead found with its
Fable 2 pub games3. Mostly, however, these attacks occur over networks.
Virtually every online game has been hit by some sort of dupe attack. Players
use the game’s interface and controls to induce the game to duplicate virtual items or currency, causing great difficulty for the game operator. Recently, the
Philippine licensee of Ragnarok Online, Level Up!, had to roll back the game (costing players two weeks of game progress and leading the company to provide two
weeks of extra game time and other bonuses in compensation) because of a severe
dupe problem that resulted in nearly 500 percent inflation of the game’s currency4.
Dupe attacks are often caused by race conditions in different parts of the game
application. Different parts of the application are updated at different times or
“trust” data from unreliable, intermediate sources (often internal variables or cached
information). In the CNBC case, it appears that permission to make an order was
determined at the time the player chose the stock ticker rather than when they
selected the number of shares and clicked the Order button. World of Warcraft
apparently had a similar race condition in which a player could transfer all of her
gold from one account to another, log off quickly, and then log back in again and
the gold would still be in the first account5.
The solution to dupe attacks comes from applying good database design
principles to game transactions. ACID is an acronym for the properties necessary to
ensure that transactions are properly processed 6:
Atomicity—Either an entire set of transaction tasks occur or none do.
Consistency—No bad data will be introduced into the system due to a failed or partially completed transaction.
Isolation—State or data affected by the transaction will not be visible to
entities outside of the transaction until it is complete (or rolled back).
Durability—Once completed, a transaction will persist.
In MMOs, the server is typically the authoritative source for information and
so there is a single database which, conceptually, makes implementing proper
transactions easy. For peer-to-peer games, however, ensuring data consistency over
a network can be very challenging from a technical perspective. It is also important
to provide a clear player display and control process to handle rollbacks to a previous state and other problems.
So far, we’ve discussed action-based networking and state-based networking.
Some MMO developers have chosen a truly dangerous approach that lies between
the two: SQL-based networking. SQL (Structured Query Language) is a widely used
standard protocol for communicating with a database. I’ve heard anecdotally that
there is at least one MMO that ships with a full SQL client embedded in the game
client. Because SQL communicates directly with the database that stores the
game state, SQL commands can be used to directly read, write, and edit anything
about the game. In some sense, this is even worse than a state-based system. A
malicious user with full SQL access to a database can create new items or player
attributes or pretty much anything they can imagine.
Some web applications are vulnerable to SQL injection7 attacks—where players can somehow bypass the server application to communicate directly with the
database. There are tools and programming techniques to stop such attacks from
happening by stopping web users from entering any SQL commands via the web
interface. However, building a game that uses SQL for networking is almost
impossible to protect because the player client application relies on posting player
actions or state updates via SQL queries. The best way to protect against these
attacks is to insert a proxy between the game client and database server that converts
game actions into SQL queries that the database, server-side data store, or game
application can process.
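One way to realize such a proxy, sketched here with invented action names and schema, is a whitelist that maps game actions to parameterized queries. The client never supplies SQL text, only an action name and arguments:

```python
import sqlite3

# Hypothetical whitelist: action name -> (parameterized SQL, expected arity).
ACTIONS = {
    "get_inventory": ("SELECT item, qty FROM inventory WHERE player = ?", 1),
    "drop_item": ("DELETE FROM inventory WHERE player = ? AND item = ?", 2),
}

def handle_action(conn, name, args):
    """Proxy: translate a named game action into SQL; reject everything else."""
    if name not in ACTIONS or len(args) != ACTIONS[name][1]:
        raise ValueError("rejected: unknown action or wrong argument count")
    sql, _ = ACTIONS[name]
    # Arguments are bound as parameters, never interpolated into SQL text.
    return conn.execute(sql, args).fetchall()
```

Anything that is not a whitelisted action—including raw SQL smuggled into the action name—is rejected before it ever reaches the database.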
Proxies are a powerful security tool that, if implemented properly, truly provide
“defense in depth.” A communications proxy takes incoming communication
packets or streams and parses and validates its structure and content independent
of the state of the underlying application. A proxy answers the question: “Is this a
well-formed communication?” Once processed by the proxy, the game application
determines if the message is appropriate to the game’s current state and that the
message is from a valid source. Only if an incoming message is validated by both the
communications proxy and the game application will the game application proceed
and process the player action. Standard Internet firewalls and packet inspection
tools are, essentially, generic proxies that can parse and validate a wide range of web protocols.
The art of proxy design is to manage the trade-off between the functions of the
proxy and the actual application (see Figure 16.1). Some proxy systems go an extra
step and actually reformat incoming message data into an alternate format for
internal processing to further isolate external from internal communications.
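A minimal sketch of the “is this a well-formed communication?” check, using an invented two-command wire format: the proxy validates structure and field types without consulting any game state, leaving the state and source checks to the game application:

```python
# Hypothetical wire format: "MOVE <x> <y>" and "FIRE <target_id>".
GRAMMAR = {
    "MOVE": (float, float),
    "FIRE": (int,),
}

def proxy_validate(raw: str):
    """Return (command, typed args) for a well-formed message, else None."""
    parts = raw.strip().split()
    if not parts or parts[0] not in GRAMMAR:
        return None
    field_types = GRAMMAR[parts[0]]
    if len(parts) - 1 != len(field_types):
        return None
    try:
        return parts[0], [t(v) for t, v in zip(field_types, parts[1:])]
    except ValueError:
        return None  # malformed field: dropped before it reaches the game
```

Returning typed values rather than raw strings is one way to realize the reformatting step mentioned above: the game application only ever sees data the proxy has already parsed.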
FIGURE 16.1 Defensive proxies
Proxies can also handle access control and source authentication, and they are a natural location to implement logging. Proxy servers can be implemented in front
natural location to implement logging. Proxy servers can be implemented in front
of game servers instead of, or as a complement to, conventional firewalls. Where
possible, they should be designed to be stateless so that they can be replicated using
low-cost real or virtual servers. A stateless proxy server design will make it easier for
the game service to scale efficiently.
An implicit advantage of a deep, powerful proxy design is that having a separate software application (and, ideally, a different development team) will likely
tighten the implementation and security of the overall system’s interfaces by both
groups. Problems such as buffer overflows and malformed messages will have at
least two independent chances of being caught, as the two development teams
should each be looking at the incoming data separately (simply replicating the message parsing code in the two systems will undo this security benefit, of course).
Defensive proxies can be part of the architecture for peer-to-peer systems as
well as client-server applications. They can provide performance as well as security
benefits and can help cleanly decouple the networking subsystem from the core
game application (local player actions could be processed through the proxy service
as well so that the game engine has a single interface for all players).
A hacker proxy is typically a computer that sits between the target platform (PC or
console) and the remote server or other player platforms, and is used as a means to attack games through their network interface. It is possible for the hacker proxy to be an
additional application on a player’s PC that intercepts and alters network packets
before they are sent to other players. Another hacker proxy technique is to simply
insert a switch that disconnects the player’s PC or game console from the network8.
A consistent, underlying problem with networked games is that game designers
often assume that they have secure, reliable communications. However, it is sometimes still possible to attack a game even if its communications are fully encrypted.
Halo 2 on the Xbox console had some serious proxy problems, as discussed in an
entertaining and detailed article at GamesFirst! by Shawn Rider 9:
The bridger is the center of power in a cheating setup. [The cheaters] use
a fairly complex method to run the Xbox’s Internet connection through a
personal computer. On the computer [the cheaters] use [commercial] software,
including the popular Zone Alarm firewall program, to control what computers
the Xbox can connect to. By using some tricky methods, [the cheaters] can
completely control the hosting of the game. They can determine who can connect, they can lag out [induce sufficient network delays so that the player will
be knocked out of the game] especially good players on the opposing team, and,
most importantly, they facilitate the “standby” technique.
The standby cheat is simple: Cheaters with cable or DSL modems will push the
“standby” button on the modem [or in software on the bridger computer] to
force everyone else in the game to be presented with the blue screen of “waiting.”
During this time, the gamer who initiated the standby can move in the game
world freely while all of the other players stand frozen in time. The cheater can
blow away a flag holder, for example, return the flag, [and] then press his
modem’s standby button again, resuming the game.
It is easier to understand hacker proxies in terms of chess.
Players A and B are playing chess online. The game developers didn’t really
bother to understand the rules of the game, so they just implemented the game as
a big blob of code and game state. The easiest way to network this game is to
duplicate the game state over the network (here is where developers can fiddle with
object synchronization and differential object synchronization, prediction, and all
sorts of clever things). So, basically when Player A makes a move, the updated game
board is sent to Player B. Then B moves (or, if they are playing Real-Time Chess,
they both move) and the game board objects are synchronized.
Voilà—a “network” game. This is also why it is so easy to build these “network
games” ... and why it is so easy to hack them.
This attack works by abusing the multitude of problems here. Evil Player B
wants an edge. So, what she does is stop listening to Player A for a while and
simply run the game (she can even add in suitably “helpful” moves on Player A’s
behalf to get the results she wants). Once Player B gets the game “just right,” she
reconnects to the network and sends the game to Player A... whose computer
simply accepts the data.
The problem is that with anything but the most trivial game, it is impossible for
the receiving game object blob to validate the new game object blob against any
form of rules. Think of looking at a chess game if you were allowed to move all of
your pieces at once, but you are only able to do validation of the game board at
Time 1 versus the game board at Time 2.
There are two types of hacker proxies:
White Box Proxy—A proxy application that can peer into the game data packets and edit them, control when they are sent, reorder them, as well as pass data
on to either the local game instance or the remote game instances. This is the
most powerful form of proxy and may not always be practical (for example, if
hardware and cryptographic security work effectively on a game console, it
may not be possible to edit packets or re-order them—your mileage may vary).
Black Box Proxy—A proxy application that cannot peer into the contents of
game data packets. A black box proxy can control game data packets externally:
controlling when a packet is sent, the order in which packets are sent, whether
packets are sent multiple times (replay), and whether a packet is sent at all. This
kind of proxy can sometimes be implemented by simply pressing the Standby
button on a cable modem, as noted previously. This type of proxy cannot really
be prevented, but should be detectable and addressed in the game’s design and
networking system.
Hacker proxies can be used to implement a wide range of attacks on a game,
purely from its network communications:
Speed Hack—There are three types of speed hacks that work by accelerating or
decelerating the pace of network communications. This can sometimes be implemented even if network communications are encrypted:
Message Overflow/Speed Hacks—By driving messages/commands at a
pace that is not expected by the remote system, a malicious user can perform substantially faster than permitted10, 11.
Lazy Communications/Telepathy—By slowing the apparent response and
reception of messages by the recipient application, a malicious player can
effectively “predict” incoming data and respond at an advantage.
Late Bet—Wagers are a sure thing when you already know the outcome.
Gambling cheaters target all sorts of games to figure out how to place a bet
after the event has occurred. Late bet scams have shown up in “The Sting,”
a real racing scandal in New York12, and at roulette, craps, and other casino games.
Standbying: Lag/Resynch Attack—An extreme version of the lazy communications attack. It works by dropping the malicious computer out of the network/
game and proceeding with game play disconnected. Then the malicious player
reconnects in a preferred state13, 14. The length of time that a game will
tolerate a dropped or slow connection can vary widely. This attack is predicated
on trusting data from a remote player. Typically, it requires a state-based resynchronization model.
Packet Hack—The raw manipulation of network packets. It is the network
equivalent of a memory editor. Good network manipulation tools will correct
checksums, sequence numbers, and other non-secure message integrity features.
Encryption might not always stop packet hacks, because some cryptographic
modes allow linear changes to encrypted data to be undetected. This type of
manipulation can be very effective if the hackers know what the underlying text
is and what they wish it to be without breaking the encryption function.
Bridging—A special, more common, case of a packet hack. It works by using a
proxy, such as a firewall, to pretend to be a different server or computer.
Bridging is typically done as part of another attack because the specified address, often a company server, is considered trusted by the client application.
Game Injection—A wholesale synchronization attack. The hacker can save the
state of a favorite game with the right number and type of players and push its
state out to the other players. Note: This may require a proxy to facilitate the
attack. Depending on the networking architecture, a malicious player could use
a proxy to “push” a preferred game state to an accomplice who would then
propagate it to the other players.
Abandonment—Some players simply abandon a game to avoid losing or hurting their ranking or status. They break the connection to the remote players or
server or simply turn off the platform. The challenge for the game operator
and other players is to determine the difference between “natural disasters” and
poor sportsmanship (this issue is discussed further in Chapter 20).
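Several of these attacks manipulate only the timing of traffic, so a server can flag rate anomalies even when payloads are encrypted. The following is a minimal sketch, not from the book; the window and threshold values are purely illustrative and would need tuning per game:

```python
from collections import deque

class RateMonitor:
    """Flags clients whose command rate falls outside plausible bounds."""
    def __init__(self, window_secs=5.0, max_per_sec=10.0, max_gap_secs=3.0):
        self.window_secs = window_secs
        self.max_per_sec = max_per_sec    # faster than this -> speed hack
        self.max_gap_secs = max_gap_secs  # slower than this -> lag/standby
        self.arrivals = deque()

    def record(self, now):
        """Record a command arrival time; return a violation label or None."""
        gap_violation = (self.arrivals and
                         now - self.arrivals[-1] > self.max_gap_secs)
        self.arrivals.append(now)
        # drop arrivals that fell out of the sliding window
        while self.arrivals and now - self.arrivals[0] > self.window_secs:
            self.arrivals.popleft()
        if gap_violation:
            return "suspect-lag"
        if len(self.arrivals) / self.window_secs > self.max_per_sec:
            return "suspect-speed-hack"
        return None
```

A check like this only raises suspicion; as the text notes, the game's design still has to decide what a violation means (rollback, forfeit, or review).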
Protecting Games: A Security Handbook for Game Developers and Publishers
Well-constructed encryption and authentication systems at the network layer
can stop many, but not all, of these attacks. Standbying and lazy communications,
for example, do not require manipulating the content of the message packets. These
attacks only require control of the physical and electronic network to slow down or
stop the delivery of game data packets.
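One standard way to stop raw packet manipulation, including the linear-malleability trick noted under packet hacks, is to append a keyed message authentication code to every packet. A hedged sketch using Python's standard hmac module; the hard-coded key is a placeholder for a per-session key that a real game would negotiate at login:

```python
import hmac, hashlib

KEY = b"session-key-established-at-login"  # placeholder for illustration

def seal(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so any tampering is detectable."""
    return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

def open_sealed(packet: bytes) -> bytes:
    """Verify the tag; raise if the packet was manipulated in transit."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("packet failed integrity check")
    return payload
```

Note that this authenticates content only; it does nothing against standbying or lazy communications, which is exactly the point of the paragraph above.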
Server-based games can address these problems by having a formal, unforgiving model for controlling network time, as discussed in the next section. At a low
level, IT GlobalSecure’s SecurePlay Strobe and Act protocols were designed to fight
many of these attacks for both client-server and peer-to-peer games. These protocols work by creating a logical network “tick” that all game players share.
Essentially, the protocols require the players to all commit to their next action or
state update and then reveal them as if they were playing an elaborate sequence of
the rock-paper-scissors game:
// Strobe Protocol
Mi = {i, Ti = T(Ai)};
// each player i selects an action Ai and computes its
// irreversible transform and builds a message package Mi
Send Mi;
// each player sends its identity and the computed transform
// to all of the other players
Store Mi;
// each player stores all incoming Mi's from the other players
if (all non-internal Mi's received) {
  Send {i, Ai};
  // after receiving the transforms from all of the other players,
  // each player i sends his or her action to the others
  for each {i, Ai} { // for each received action
    Extract Ti from Mi;
    // extract the previously stored transform for that player's action
    if (T(Ai) != Ti) { do exception processing; }
    else { store (i, Ai); } // accumulate players' actions for network tick
  }
  if (all player actions received) {
    process all {i, Ai};
    // once all player actions are received and validated, update the game
    repeat; // start next game tick
  }
}
The protocol is structured so that all players contribute to updating the game
(either actions or state) without knowing the activities of the other players.
Therefore, it is impossible for a player to manipulate lag or benefit from prior
knowledge of other player’s actions. The Act protocol extends this concept by integrating the collaborative random number generation process discussed in Chapter
13. There is a definite performance impact from using this protocol; it requires two
sequential messages from each player to update the game's state. This can be compensated for in the game's design or by pipelining a parallel set of strobe protocol
instances.
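The SecurePlay protocols themselves are proprietary, but the commit-then-reveal pattern they build on can be illustrated with an ordinary hash as the irreversible transform T. This sketch is a stand-in, not the actual Strobe implementation; the random salt prevents a player from testing the small set of possible actions against a bare hash:

```python
import hashlib, os

def commit(action: str):
    """Phase 1: publish a binding, hiding commitment to an action."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + action.encode()).hexdigest()
    return digest, salt  # broadcast the digest now, keep the salt secret

def verify(digest: str, salt: bytes, action: str) -> bool:
    """Phase 2: after all commitments are in, check each revealed action."""
    return hashlib.sha256(salt + action.encode()).hexdigest() == digest

# One tick of a two-player game:
d1, s1 = commit("rock")      # player 1 commits
d2, s2 = commit("scissors")  # player 2 commits
# ...digests are exchanged, then both actions and salts are revealed...
assert verify(d1, s1, "rock") and verify(d2, s2, "scissors")
assert not verify(d1, s1, "paper")  # player 1 cannot change the action
```

Because neither player can read the other's commitment, lag manipulation and foreknowledge confer no advantage during the reveal phase, which is the property the Strobe tick provides.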
Minute by minute and day by day, we tend not to think too much about time. It
flows along unnoticed. Unfortunately, as you read previously, neglecting time can
cause real cheating problems for game developers. Network lag and race conditions
are often hard to test or replicate and cause problems for all sorts of general business applications. The default strategy for handling time has been to make it seem
as smooth as possible for players. Errors are usually assumed to be accidental, not
malicious. The trusted client problem often includes a substantial trusted client
time component. Developers need to take a more formal view of time and create a
systematic “time policy,” as follows:
How out of synch should players be allowed to become?
Where does a game roll back to? How do you present this to players?
What are the consequences of a dropped connection?
These questions and more must be addressed consciously by game designers or
they will be handled implicitly in the hands of individual game programmers. Key
factors that need to be considered include:
Delay—What is the time interval after which the game must stop?
Interaction—How “old” can an incoming action be and still be accepted by
other players? Actions from remote players are always associated with some
time in the past. How does the game play system integrate old remote actions
with new local actions and somehow synchronize state between the players?
Tick—What is the basic “tick” of the game’s network clock? In some sense,
there needs to be a notion of “minimum duration” during which each player
can take only one set of actions. If the game’s internal temporal model for
“ticking” and player interaction does not correspond with the game’s network
model, game play can break down.
Interference—The interactions of different player actions need to have reasonable and understandable consequences based on each player’s notion of
state and action.
Display Prediction and State Confusion—The interactions of the player
display or presentation and the actual game state can become complicated by
poor predictions by the game presentation engine and the actual actions of the
remote player. Does a player have a shot available at a target? What does a
maneuver really look like relative to another player? This is an important issue
to ensure that real-time games feel responsive and that the display is accurate
and smooth.
These issues, obviously, only exist with real-time games. Turn-based games
can be paused easily and rolled back to the last action. Although the trend in game
design has been towards real-time games, another advantage of turn-based games
is that they can be “played by mail” and so players do not have to reconnect in real
time to resume play in case of an interruption. This makes game abandonment as
well as many other network attacks almost irrelevant.
It is possible to decompose players’ actions over networks into a series of
discrete phases for which game developers need to define clear time policy choices:
Decision—The instant an internal player’s action enters the game play engine.
Commitment—The instant before which the action will be automatically
aborted/changed to address new incoming information that was not available
when the player made a decision.
Success—The instant after which the action has some probability of resolving.
Resolution—The instant when the action triggers consequences in the game
play engine.
Conclusion—The instant after which the player is allowed to choose a next action.
This is a sample time policy model. Different models are certainly applicable to
different games and networking strategies. The key factor for a successful network
game experience for players from both a game play and security perspective is to
systematically address how game state will be updated, how players’ actions will be
resolved through the game's rules, and how the game's display will communicate
state in a networked environment.
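As an illustration of making such a time policy explicit (the field names follow the questions and phases above; every value shown is invented), the policy can live as data that the networking code consults, rather than as constants scattered through individual programmers' modules:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TimePolicy:
    """Explicit answers to the time questions a design must settle."""
    max_desync_ms: int          # how far out of synch players may drift
    rollback_to: str            # e.g. "last_committed_tick"
    commit_window_ms: int       # decision -> commitment: action may still abort
    stale_action_limit_ms: int  # oldest remote action other players accept
    disconnect_grace_ms: int    # dropped connection tolerated before forfeit

    def accepts(self, action_age_ms: int) -> bool:
        """Would other players still accept an incoming action this old?"""
        return action_age_ms <= self.stale_action_limit_ms

policy = TimePolicy(max_desync_ms=250, rollback_to="last_committed_tick",
                    commit_window_ms=100, stale_action_limit_ms=400,
                    disconnect_grace_ms=10_000)
```

Writing the policy down this way forces the conscious design decisions the text calls for and gives testers something concrete to attack.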
Handling time is probably the most difficult problem for multi-player and networked games. It collides with hard problems from computer programming for
handling concurrency and simply handling lag and latency over wide area networks.
Securely managing time in a game is aggravated by cheating players who are more
than willing to abuse game time, network communications, and synchronization
services to their own ends. Turn-based gaming is an easy answer, but it does not meet
the needs of many developers who want to fully tap the power of gaming platforms
and modern broadband communications.
1. A. Varney (2007), “Offbeat Sports Games,”
2. T. Catts (2007), “CNBC’s Easy Money,”
3. R. Miller (2008), “Fable 2 Pub Games Exploit Will Make You Very, Very Rich,”
4. Level Up (2008), “RF Online Economy Fix,”
5. Ryan A. (2006), “Players Found New Golden Exploit for WoW,”
6. Wikipedia (2008), “ACID,”
7. Wikipedia (2008), “SQL Injection,”
8. MFCrow (2008), “Xbox Live Lag Switch: JRG 11.5,”
9. S. Rider (2006), “A Bridge Too Far: The World of Halo 2 Cheating,”
10. T. Bramwell (2004), “Blizzard Bans World of Warcraft Cheaters,”
11. CCP Wrangler (2007), “Rapid Fire Exploit,”
12. J. Drape (2002), “HORSE RACING; Pick-Six Fix Admitted As Giuliani Steps In,”
13. B. Kuchera (2006), “Saint’s Row Receives Quite the Patch,”
14. N. Doerr (2007), “Insomniac Sets Up a Ban Policy for Resistance,”
Game Design and Security
Game design is the foundation of a fun, successful, entertaining, or educational game. One of the keys to making a great game is ensuring that the
game design—the rules and framework for how the game is played—keeps
the players playing by the game’s rules. Unfortunately, computer games can hide
weak designs behind pretty graphics, stunning animations, and elaborate plots. At
least, until there is more than one player involved.
This discussion of game design is not about creating a great or even a good
game; it is about avoiding design traps that undermine the intent of the game
design. There may be cases where it is desirable or necessary to build a game with a
known security weakness. It is essential that when this choice is made, a game
designer does so consciously with an appreciation for the consequences. Ideally,
security constraints to stop cheating, piracy, and other forms of game abuse should
spur game designers to create successful games that also avoid the problems or
turn security problems into game play features.
One category of game security problems should be entirely avoidable: exploits of
game design flaws. As introduced in Chapter 13, a game design exploit is a weakness
in the game design that gives players who use it a substantial, unintended
advantage over other players. Game design flaws are often the result of the rush to
complete games and lack of focus on game design analysis. Within days of the
launch of Age of Conan, for example, players with Demonologist characters found
a way to reach the top level in the game with just four days of game play1. Economic
systems, combat, movement, and other game systems too often have serious flaws
that arise from focusing on the graphics and the cosmetic “chrome” of gaming
rather than proper design and thorough game play testing. Although analysis is
useful, play testing using paper and pencil may be the best way to exercise game
systems cost effectively and eliminate the worst design exploits.
Chapter 17 Game Design and Security
Collusion, first discussed in Chapter 13, is one of the more pernicious problems for
computer games. When players are playing online, there is no way to prevent
players from communicating and coordinating their plans, especially if they do so
outside the game. Simply making collusion against the rules, as Svenska Spel found
at great expense with its flawed lottery game2, is futile. The easy answer, making collusion a legal game mechanic—cooperation—is a great option when it is possible.
There are other design strategies. In-game betrayal is very effective. CCP Games’
space-based MMO EVE Online has numerous instances of players backstabbing
each other. Cooperation is allowed, even encouraged, but the game’s deep economic systems mean that betrayal can be very profitable or just fun. In one example
among many, a player set up a bank inside the game that a large number of other
players used for a while, earning interest and making loans—that is, until the bank
operator ran off with the equivalent of $170,000 in virtual currency3.
Cooperation and competition dictated by the game mechanics is also an
option. The Austrian card game, Königrufen (“The Calling of a King”)4, has
an elaborate bidding system, similar to that in Bridge. The bids themselves determine player partnerships—with some hands being played solo and others with
partnerships based on calling a king by suit, hence the game’s name. If you are the
player with the called king, you are the partner of the declarer for that hand.
Many games incorporate wagering as a game mechanic. For these games, unrestricted side wagers for or against any player can mitigate the benefits of collusion.
The only two games that I have seen that use this approach are my favorite casual
board game called “The Really Nasty Horse Racing Game”5 and the dice game craps.
Online poker services claim to detect collusion through extensive statistical
analysis of game play and patterns. However, the weakness of online identity and the
financial benefits of serious collusion make the effectiveness of such techniques suspect. Game operators can try to use platform fingerprinting, IP address information,
account information, bank information, and basically anything else they can find to
create enough identity information to look for teams of players. Without strong
identity, any statistical analysis is going to be pretty weak. Even if you can somehow
detect a player team, the real question becomes what to do about them.
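One simple statistical signal, assuming the strong identity the text says is a prerequisite, is how often the same pair of accounts ends up in the same game compared to chance. A sketch that merely counts pairwise co-occurrence across sessions; a real system would also compare the counts against an expected baseline for the player population:

```python
from itertools import combinations
from collections import Counter

def suspicious_pairs(sessions, min_together=3):
    """sessions: list of player-ID lists, one per table/game.
    Returns the pairs seated together at least min_together times."""
    together = Counter()
    for table in sessions:
        for pair in combinations(sorted(set(table)), 2):
            together[pair] += 1
    return {pair: n for pair, n in together.items() if n >= min_together}
```

With weak identity, colluders simply rotate accounts, which is why the text is skeptical of purely statistical detection.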
Trivia games are terribly popular and terribly difficult to secure as online games.
First, trivia games as a category are inherently weak in the world of modern
computers and the Internet. Players can collude, research answers, and, most
dangerously, build a catalog of questions.
The cataloging problem is expensive to fight. Basically, every time the game is
played, the questions that are asked (and their answers, if revealed) can be added by
malicious players to a catalog list that can be used by their comrades. Economics
works against the game developers: It costs the developer more money to create
each question than it does for cheaters to catalog the answers. Ideally, from a security perspective, a question should be used only once. In practice, questions will
need to be used many times.
How does one try to balance this essential inconsistency between security and
business? Here are some ideas:
Don’t Reveal Individual Answers—Only report the bare minimum amount of
information to the players—whether they won or lost, got to the next level,
won a prize, and so on—without revealing the individual answers to the questions. This does not prevent cataloging of questions, but it makes it more difficult to collect the answers.
Cost of Entry—Create a cost to enter the game. This often creates a legal
problem for trivia games where they can potentially become gambling games
instead of contests for fun (see Chapter 31).
Strong Identity—If the game operators can create a strong identification of
each individual player, they can fight multiple entries by individual players.
This does not stop team collusion.
Split Question Pool—Once a player answers a question wrong (whether the
answer is revealed or not), she is switched to a question pool that is not used for
the “big prize.” This reduces the number of “important” questions that can be
revealed to any given player.
Analytics—It is important to track the number of times a question has been
asked and how many times it has been answered correctly. Also, it would be
valuable to track changes in any statistical changes in the likelihood that a
player answers a question correctly, because this could be a good indicator of
cataloging (if the observed daily rate for answering a question correctly jumps
from 20 percent to 60 percent, the question has likely been compromised).
Honeypot Questions—These questions are asked more often (not drawn from
the ordinary random pool) and are used to help model cataloging efforts by
players (by determining how quickly questions become “easy” for players). The
question may be extraordinarily difficult or even have a wrong answer to help
identify anomalous player behavior.
False Game—The way that players actually progress through a contest is not
tied to their answers in the trivia game, but based on some other criteria. For
example, simply entering a trivia game during a day (or achieving a modest,
minimum score) may be enough to be allowed to progress to a final drawing
for an advergame.
Multiple Tiers—Divide the game into multiple tiers that act as filters to reduce
the number of questions that are exposed to a large game-playing population.
Non-Traditional Question Presentation Methods—Use voice, imagery, and
other alternative means to convey the questions to players. This makes cataloging harder; it does not prevent it. It may also drive up the cost to create each
question, which can have a net negative impact on the game’s security.
Face-to-Face Play—Switch players to a face-to-face competition after preliminary, online game rounds (this is essentially another multiple-tier system).
The potential embarrassment of not being able to use a catalog or other cheating methods in public can act as an additional deterrent.
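The analytics idea can be sketched directly: compare a question's recent correct-answer rate with its historical rate and flag jumps like the 20-to-60-percent example above. The thresholds here are illustrative, not from the book:

```python
def likely_compromised(history_correct, history_asked,
                       recent_correct, recent_asked,
                       jump_threshold=0.25, min_samples=20):
    """Flag a trivia question whose correct-answer rate has jumped,
    a possible sign that its answer has been catalogued."""
    if history_asked < min_samples or recent_asked < min_samples:
        return False  # not enough data to judge
    old_rate = history_correct / history_asked
    new_rate = recent_correct / recent_asked
    return new_rate - old_rate >= jump_threshold

# The example from the text: 20 percent historically, 60 percent today.
assert likely_compromised(200, 1000, 60, 100)
assert not likely_compromised(200, 1000, 22, 100)
```

Honeypot questions feed the same detector well, since their expected rates are known in advance.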
Trivia games are essentially a resource battle between cheaters and developers.
The developers want to reduce the effective cost of creating each question and maximize the number of times that the question can be reused, whereas the attacker
wants to acquire the answers to the questions as cheaply as possible. As noted earlier, from an ideal security perspective, each question should be used only once and
the sequence of questions should work to rapidly filter players away from victory
and exposing unused questions.
Brain Age is a very successful and popular “self-improvement” game and it has a
number of online imitators. However, games that rely on basic mathematical, word,
or other puzzles are not very likely to work in an online multi-player, competitive
environment. It is too easy to cheat at these games and impossible to protect them.
The Scrabble Word Finder7 provides its users with all of those wonderful seven-letter words and the longest words based on their current tiles (the tool apparently
does not do any analysis of the current game board looking for highest scores, but
Scrabble players tend to do better by cycling their hand as quickly as possible).
Scrabble Word Finder is an example of a “strong play” tool. These tools do not
give optimal strategies or perfect play techniques, but rather are “good enough” to
allow their users to defeat most opponents. Because there are opportunities to play
more game sessions online than there are to play games face-to-face, these strong
play tools are good enough to give a player that uses them a substantial advantage.
The tool doesn’t necessarily need to find the actual answer, but needs to merely present a reduced set of good options that a human player can use to gain an advantage.
Word, number, and puzzle games are generically vulnerable to “catalog
attacks,” where cheaters basically exhaust all of the game play options to find good
or best solutions. A catalog attack can even work with a physics-based game if the
range of player choices is sufficiently restricted. Basically, the catalog tool would
rapidly model a whole series of different player choices and then pick the best
options or further refine possible player choices based on the best results from the
simulation. This may be substantially faster than attempting to algorithmically
solve the game. A cheater may find it faster to search for good solutions by trial and
error rather than attempting to derive an optimal, closed solution to the game.
This is especially true if the game’s interface effectively restricts the granularity of
player choices (for example, restricting distance to multiples of a unit value such as
inches or direction to a multiple of two degrees).
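To see why restricted input granularity helps the attacker, consider a toy projectile game whose only inputs are whole-degree angles and whole-unit power. The physics model below is invented for illustration; exhausting the discrete grid takes only a few thousand evaluations:

```python
import math

def shot_distance(angle_deg, power, gravity=9.8):
    """Toy model: ideal projectile range on flat ground (illustrative only)."""
    v = power * 2.0
    return v * v * math.sin(math.radians(2 * angle_deg)) / gravity

def catalog_best_shot(target, angles=range(1, 90), powers=range(1, 51)):
    """Brute-force every discrete input pair and keep the closest result."""
    return min(((a, p) for a in angles for p in powers),
               key=lambda ap: abs(shot_distance(*ap) - target))

best = catalog_best_shot(100.0)  # 89 x 50 = 4,450 trials, effectively instant
```

No understanding of the underlying equations is required; the coarser the input grid, the cheaper the catalog.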
Many games are based on underlying mathematical models. In the real world,
modeling golf, darts, and pool would require a massive number of variables and complex interactions. For golf, the variables are daunting—the speed of the swing,
the precise angle and position that the club hits the ball, the wind, the terrain from
where the ball is launched, and certainly the topography of where the ball lands.
Other games, such as blackjack and roulette, rely on the random deal of cards or the
spin of a wheel to make them interesting.
In some cases, there is less complexity than people thought. For decades, players have been counting cards in blackjack to try (mostly unsuccessfully) to get an
advantage over casinos. There were no strong card counting systems until the
publication of Dr. Edward Thorp’s “Beat the Dealer,” which described a counting
scheme that would give the player a mathematical advantage over the table7.
Initially, the casino industry was very concerned by Dr. Thorp’s technique. However,
the casinos found that most players actually couldn’t follow the system—although
many players tried and tried and tried, which led to huge growth in blackjack revenues
for casinos. Unlike most human players, computers can easily implement such
counting systems (as can highly coordinated blackjack teams) and so are banned
from casinos.
Modern computers have been able to solve roulette and have been used
covertly by cheaters in casinos. Interestingly, courts in the UK and Spain have ruled
that such devices are not illegal (just as card counting isn’t illegal), and it is up to
casinos to detect and control their use8.
For standard computer games based on physics, the thousands of variables
found in real life are commonly collapsed into a very simple mathematical model.
Algorithmic attacks basically target the underlying mathematical model for the
game to find the best solution. Even if the attacker doesn’t know the precise model
that the game is using, physics is physics and is not a secret. By carefully running
experiments and monitoring results, the attacker can reverse-engineer the game’s
underlying mathematical model, or at the very least, the most sensitive parameters
affecting an action’s outcome.
Skill games based on “turn-based physics games” seem to be growing in popularity
—darts, pool, pinball, pachinko, and, of course, golf. Because the games have a
business model tied to competition based on the skill of the game’s players, there is
a real concern that players might be able to use automated tools to optimize their
play (after all, these games are really math problems and not based directly on
human physical skill) and win money unfairly from the game operator or other players.
How much risk is there in solving a math problem? Although the developers
may hide the math behind a pretty interface, the players are really providing inputs
to solve a mathematical equation. And, because the game is turn-based, a motivated
cheater (or math student) should have plenty of time to figure out the best solution.
One option is to put the mathematical model on the game server. This seems obvious, except that there are a number of game services today that still download the
game's mathematical model to the game client. Also, running the algorithms on a
central server doesn’t necessarily stop cheating. A mathematically inclined cheater
can still derive a “model of the model” on the server by accumulating data from a
number of game plays, looking at the results, and developing a better and better
local version of the server game model. For linear equations, if there are N unknown constants, it is going to take me N turns to fully determine the equations.
Physics equations aren’t always linear, but the idea is the same. After all, the
underlying physics models are available in high school or college textbooks;
the cheater just doesn’t know how much they have been “tweaked” by the game’s developers.
A studious mathematician can work to isolate variables. Such a person could
run experiments in the game world to make it easier to determine the underlying
model (making short putts on different types of terrain to figure out the game’s
friction model, taking multiple shots in different directions to the wind to determine the windage model, and so on). And, the model just has to be good enough
to provide superior play; it does not need to be perfect.
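For a linear model, the "model of the model" falls out of a handful of observed shots. The hidden_model function below stands in for the server's secret tweak, which the cheater never sees directly; three observed shots determine its three constants exactly:

```python
def solve_linear(rows):
    """Solve a small square augmented system [coeffs | result]
    by Gauss-Jordan elimination with partial pivoting."""
    m = [list(r) for r in rows]
    n = len(m)
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col and m[r][col] != 0.0:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def hidden_model(power, wind):
    """Stands in for the server's secret model (unknown to the cheater)."""
    return 3.1 * power + 0.7 * wind + 12.0

# Three observed shots fully determine the three unknown constants:
shots = [(10, 0), (20, 5), (15, -5)]
rows = [(p, w, 1.0, hidden_model(p, w)) for p, w in shots]
a, b, c = solve_linear(rows)  # recovers 3.1, 0.7, 12.0
```

Nonlinear models take more observations and least-squares fitting rather than exact solving, but, as the text says, the recovered model only has to be good enough, not perfect.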
Wind, different types of grass, and any other randomized feature can make building the model harder for the hacker. Unfortunately, random means the game is no
longer a pure skill game (at least in some jurisdictions—see Chapter 31 on skill
games). Developers may also “randomize things a bit” by altering the player’s input
to prevent analysis and to fight botting. Randomness can also creep into the mathematical models unintentionally. The complexity of running the simulation in a
general-purpose microprocessor, including its specific resource loading and timing,
and even the behavior of the rendering engine, may introduce elements of chance
into determining the game’s outcome.
Although game developers talk about using physics to increase realism, the models
in the game are still abstract equations. Altering game mechanics to utilize abstraction may create interesting and dramatic game play, particularly for player-to-player interaction, which can often become very uninteresting (as seen by the
button mashing of many fighting games). In addition to randomizing inputs to
physical systems, abstract game mechanics can include table-driven results driven
either by random inputs or by the interaction of discrete choices by multiple players (such as a combat result table based on cross referencing the tactics chosen by
each player).
Physics and algorithmically driven games have a lot of powerful advantages, but
developers should use caution if cheating is a potential issue for the game:
Physics Is Not Secure—The combination of the bouncing ball and spinning
wheel is a fairly complicated mathematical model to attack (as seen in roulette).
Most games that use physics are not nearly so sophisticated.
Anything That Can be Modeled Will Be Automated—As discussed in Chapter
15, players will use whatever tools it takes to develop a superior or dominant
strategy. The strategy doesn’t have to be perfect to give them a substantial edge.
Unauthorized State Information Is Dangerous—Players don’t really need to
see the spinning ball at a roulette table except as a confidence-building measure
that the casino isn’t cheating. Similarly, in online games, it is risky to load data
to a client that is not necessary. Of course, unlike roulette, computer gamers
have exact state information, so it is much less difficult to collect the data
needed to attack the system.
Convenience Is a Trap—How would one stop this problem for roulette? Close
wagering once the ball has been thrown. More bets placed means more money
for the casino, but the ritual of roulette would work, and the game would be
much more secure, if no bets were allowed once the ball was thrown. Security
shortcuts are routine in computer games, yet they regularly cause problems.
Some game models are fairly trivial. A number of MMOs use “static spawning”
techniques to generate monsters for players to fight. This technique, where specific
monsters appear at specific locations at specific times, has led to a number of
annoying problems. Players “camp” to await the spawn of certain high-value creatures. Gold farmers create highly efficient routes from monster to monster to
maximize their productivity. In these cases, some randomization could be quite
helpful—varying the location, nature, and even the activities of the creatures could
undermine many abusive play tactics—and it might even make the game world
appear more “real.”
Anyone who has watched the tremendous deviousness of casino cheats would be
reluctant to trust in the ability of game operators to detect data collection and
analysis systems such as those used for roulette and card counting. These players are
not breaking the rules of the game’s play, but rather the “house rules” of the game
operator. The game’s internal security mechanisms don’t protect it from these
Recently, many developers have embraced physics as a way of efficiently
enriching game play. As with any sort of procedural system, developers should be
careful: Things that are easy to make are often easy to break.
One of the wonderful things that a computer can do is run really fast with amazing
graphics—and game developers have been pushing the limits of both as long as
computers have been around. Games that rely on reflexes and precision work fairly
well for single-player and social games. Once multiple players are competing over
a network, however, the game is a perfect candidate for botting, as previously
discussed in Chapter 15. A game like Guitar Hero, which uses a plastic guitar as a
controller, or Dance Dance Revolution, which uses a floor pad, or the Audition casual
dancing MMO, which uses the keyboard, are all at their core timing games. For a
computer, it is easy to control simple button press indicators and, of course,
the computer has very precise timing.
The main reason that console games that rely on dexterity have had limited
problems with cheats is that they are played alone or socially with friends. This is
not true online. There have been a number of attacks that have targeted Audition9.
Because the game relies on timed button presses in response to a simple pattern (left,
right, up, down), it is a perfect candidate for automation. Although Audition can be
attacked with a simulated keyboard, many first person shooters (FPS) are effectively
attacked with a simulated mouse. Just as with physics games, there may be real
game play (and security) advantages to looking at abstraction as a way to mitigate
automation attacks. For an FPS, no matter how accurately the player’s mouse is positioned, the player’s accuracy could be a function of her speed, length of time in a
given position, whether she is crouching and hiding from enemy fire, and so on.
True reflex games, like Audition and Guitar Hero, may need to have their mechanics reconsidered for online play. For Guitar Hero, a webcam might be helpful.
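Since humans cannot hit a rhythm with machine precision, one detection signal for timing-game bots is the spread of press-timing errors around the beat. A sketch with an invented threshold; a real deployment would calibrate against measured player data:

```python
import statistics

def looks_automated(timing_errors_ms, max_human_stdev=4.0):
    """timing_errors_ms: signed offsets between each button press and its
    target beat. Machine-precise play shows an implausibly low spread."""
    if len(timing_errors_ms) < 10:
        return False  # too few presses to judge
    return statistics.stdev(timing_errors_ms) < max_human_stdev

human = [12, -8, 25, -15, 5, 30, -20, 9, -11, 18]  # sloppy, human-like offsets
bot = [1, 0, 1, -1, 0, 1, 0, -1, 1, 0]             # near-perfect, bot-like
```

A sophisticated bot can inject artificial jitter to evade this, so, as with the other detectors in this book, it is one signal among several rather than a complete defense.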
High-speed games also have problems because of network lag. The delay between player actions and game responses often forces game developers to pre-position information that allows a player to cheat. A terribly simple option is simply to
force players to slow down. One of the reasons for pre-positioning data is that players can and do move quickly in these games, which requires art assets to be loaded
very quickly. However, this problem is really a function of game design choices that
are quite abstract. Players are, generally, nearly invulnerable in these games and are
often blessed with near infinite ammunition. They are hard to hit and, when hit,
take very little damage, and, when they take damage, it has very little impact on
their in-game abilities.
This is not realistic. If you look at the TV news or film of real combat, people
move very carefully when other folks are shooting at them. They do not want to die.
Simply making games more lethal would eliminate much of the insane level of activity that is routine in games today and is a source of security problems. Also, in real
life, if you are waiting in ambush for someone, you have a huge advantage. A defender who is dug in, has her ranges set, and targets selected will wipe out any fool
who comes sprinting in front of her.
Another option is to turn ambushes and other “quick” events into mini-games
or even cut-scenes. It would be interesting to use “reaction shots,” where we see the
face of our protagonist as she enters a room or turns a corner just as we are used to
seeing in film, as a way to avoid pre-loading data as well as to accurately capture the
disadvantage and risk of such actions. A more technical option could be to pre-load
multiple data sets and only activate one at a time, when needed.
Chapter 17 Game Design and Security
There are too many computer games that have strong or dominant strategies. This
makes game play tedious and the game ideally suited for automation, because the hacker
doesn’t even need to think about how to play, just how fast to shoot and in which
direction. Although many games provide a multitude of choices, most seem to have
little impact. The stereotype of button-mashing for console games or madly clicking in MMOs is all too accurate. Recently, Age of Conan attempted to enliven MMO
combat with a system that takes facing and direction of blows into account. It remains
to be seen whether this will affect player satisfaction (or scripting or automation or
other cheating problems). The “grind” that most players criticize about online
games makes automation very tempting. First person shooters that rely on speed
and reflexes rather than tactics are also vulnerable to optimal or strong strategies.
Most hackers are not good artificial intelligence programmers, so if a game has deep
strategic play, it is less likely to be vulnerable to automation attacks.
“Interesting choices” are often associated with interesting games. It is not just that
the player can make interesting choices, but his foes can as well. Tarn and Zach
Adams’ Dwarf Fortress10 is a very highly regarded recent game. One of the things
that makes it interesting is the richness of interactions and actions between the
player and the game environment. The game is a study in innovative, deep procedural game design—so much so that players delight in recounting their game play
experiences whether successful or catastrophic failures11. Hampus Söderström’s
Toribash12 has redefined fighting games with its combination of rag-doll physics
with simultaneous non-real-time turns. Toribash takes physics-based game play in
a new direction because players are interacting with each other’s choices, which can
have virtually infinite variety and complexity.
Strategic depth is very hard for a computer or player to fake or cheat (without
a lot of effort—chess programs play very well after all). Games don’t need to be
complex to thwart automated play. Rock-paper-scissors embodies the simple
principle that every play can be trumped and every play choice has value. Poker
succeeds as a meaningful game because of the psychological interaction of the players. Exception games, like Magic: The Gathering13, are interesting for players and
hard to automate. Every card in the game changes the rules or breaks the rules in a
different way. Cards can work in combination with each other and the game is
continually being updated.
In some sense, this is the best news of all—a well-designed game with “good”
game play is much less likely to have security problems than a poorly designed one.
Although there may be many kinds of games, there are very few distinct
game-play patterns. The patterns discussed here are not thematic (science fiction,
fantasy, and historical) or genre-related (first person shooter, real-time strategy,
MMO), but are the essential ways that players interact with each other and with the
game’s rules. Many of the common game-play patterns that are used in computer
games are an artifact of the history of the industry. Single-player computer games
typically use an “Action, Randomized Resolve” game-play pattern. Traditional
board and card games tend to sequence players, since simultaneous play is difficult at a physical table, even though some of these play patterns may be better suited to computer play. Each
pattern has its advantages and disadvantages and, of course, different security characteristics:
Action, Deterministic Resolve (Chess and Battleship)—Taking turns and
moving or acting. A very simple pattern; if there are N potential actions, there
are N possible outcomes. Strategy comes from choices of actions and when to
take them.
Random Input, Action (Backgammon and Many Family Games)—“Roll your dice and move your piece” and “draw a card.” A very simple variant on the Action, Deterministic Resolve pattern where randomization constrains player choices.
Action, Randomized Resolve (Most Combat Results Table Games and Many
Computer Games)—The combat results table (CRT) is a legacy of many board
war games. Basically, a player takes an action, rolls a die, and the result is determined by cross-referencing the action with the die roll. The total number of
possible outcomes is the product of the number of actions (A) and distinct die roll values (D), or A×D. As with all randomized results, the question is always
how to generate fair random events.
Player 1 Action, Player 2 Response, Deterministic Resolve (Magic: The
Gathering)—Players take turns to act, but the results of their actions can be
modified by the actions of other players. Usually, this pattern is associated with
a finite resource that constrains actions and responses such as available cards or
“action points” that are consumed and slowly replenished. Generally, Action,
Response patterns modify only the initial action; they do not introduce new
game-play elements.
Player 1 Action, Response Chain (Each Player, Deterministic Resolve) (Magic:
The Gathering, War card game)—See Player 1 Action, Player 2 Response,
Deterministic Resolve. Players can continue to take additional contingent
actions until they are unable to continue or choose to conserve resources for
subsequent use.
Action, Response, Randomized Resolve (Some War Games)—Players take
turns to act. The main effect of responses by others is to alter the initial action.
Simultaneous (Player 1 Action, Player 2 Action), Deterministic Resolve (Ace
of Aces, Toribash)—Simultaneous action is well suited to computer-based
play, but it has not been used very often, mainly, I think, because it is not familiar from either single-player computer games or multi-player traditional
games (because of difficulties in implementation). The security challenge with
this pattern is to ensure actual “logical simultaneity.” One of the interesting aspects of simultaneous action is that there is typically a rich range of outcomes
that naturally flow from the intersection of player choices.
Simultaneous Action (Player 1 Action, Player 2 Action), Randomized
Resolve—See Simultaneous (Player 1 Action, Player 2 Action), Deterministic
Resolve. Randomization perturbs the basic result from the intersection of player choices.
Action(t), Deterministic Resolve (“Physics”-Based Games, Guitar Hero)—
The interest in these games comes from the procedural complexity of physical
systems. The problem, as noted previously, is that complex physical systems
can be modeled very well by a computer. They are also very vulnerable to automation. The most common error for real-time game systems is neglecting to
consider the “reset” time to recharge between actions, particularly when the
games are implemented over a network.
Action(t), Random Resolve (“Real-Time”-Based Games)—See the previous
bullet. These games really replace the abstract model for a physical system with
a set of game mechanics that are time based (and still are often tied to physical
system modeling). Often, the only real random element is a damage model.
Player 1 Action(t1), Player 2 Action(t2), Deterministic Resolve (Baseball and
Other Real-Time Reflex Games)—Typically, these games are associated with
physical systems. The main change is the potential for complex interactions
due to the actions of multiple players. Network lag and time models can sometimes be abused to “see” remote player actions before they happen, as discussed previously.
Player 1 Action(t1), Player 2 Action(t2), Random Resolve—See Player 1
Action(t1), Player 2 Action(t2), Deterministic Resolve.
Deterministic Update(t)—Timed or triggered events, often seen in physics
games such as a dropping ball or weight or a timed elevator. The only concern
with this pattern is whether the update to the game’s state is supposed to be
hidden from the player(s).
Random Update(t)—See Deterministic Update(t). Randomization is used to
add complexity to the game experience. One could argue that some artificial intelligence systems for automated opponents are an example of this type of system, because their behavior is not quite deterministic.
Action—A decision and corresponding game play made by a game player (or
piece of artificial intelligence code taking on the role of a player) that impacts
the game. Actions can also be “secret”—where they are made at some time and
not revealed until later.
Response—A decision and corresponding game play made by a game player
(or piece of artificial intelligence code taking on the role of a player) that impacts the game that is dependent on the action of another player.
Simultaneous Action—A decision and corresponding game play made by two
or more game players (or piece of artificial intelligence code taking on the role
of a player) that impacts the game and occurs at the same time.
Resolve—The basic change to the game state that is a result of one or more
player’s actions, the pre-existing game state, the game rules, and any randomized resolution. This can be thought of as the closing element of a “mini-game” or “gamelet”: the basic player interaction, or rules, that cause a change to the game state.
Randomized Resolve—When the resolve is affected by some sort of randomization. Given a set of actions, responses, and game state, a situation where
there can be more than one possible resulting game state.
Deterministic Resolve—The opposite of randomized resolve. When there is
only one possible resulting game state given a prior game state, player’s actions,
and responses.
Action(Time)—Player actions that occur at a specific time for “real-time” games.
Action1(time1), Action2(time2)—A real-time interaction model that allows
rich interaction between players.
Random Input—This type of randomization is very typical for traditional games.
These patterns can be assembled into a wide range of higher-level game-play patterns.
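The Action, Randomized Resolve pattern above lends itself to a compact sketch. In this toy Python combat results table, the actions, die values, and outcome labels are all invented for illustration:

```python
import random

# Toy combat results table (CRT): rows are actions, columns are die rolls 1-6.
# All labels are hypothetical; a real CRT would come from the game's rules.
CRT = {
    "attack": ["miss", "miss", "graze", "hit", "hit", "critical"],
    "charge": ["rout", "miss", "hit", "hit", "critical", "critical"],
    "defend": ["hold", "hold", "hold", "push", "push", "break"],
}

def resolve(action: str, rng: random.Random) -> str:
    """Cross-reference the chosen action with a die roll, per the pattern."""
    roll = rng.randint(1, 6)
    return CRT[action][roll - 1]

# The table has at most A x D cells: 3 actions x 6 die values = 18 outcomes.
outcome_count = sum(len(row) for row in CRT.values())
```

With three actions and six die values, there are at most eighteen distinct outcomes, exactly the A×D product the pattern describes; the fairness of `rng` is the security question.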
Computers are an amazing tool for gaming that we are just beginning to explore.
The reemergence of multi-player gaming has provided opportunities for new types
of game play. There are so many game types, business models, and ways to interact
that it is impossible to neatly categorize computer games today. The key is to truly
understand the implications of the entire design and its environment. Eye of
Judgment for the PlayStation 3 took a paper collectible card game and combined it
with a video camera to create a new form of game play. Unfortunately, the developers seem to have neglected to consider the possibility that players would scan
cards and undermine the core collectible element of the game14. For many years,
online games were metered by the minute, just as online access was. This meant
“problems” like botting and gold farming weren’t nearly as big an issue, because the
game operator could recover their costs better and ill-gotten profits were constrained
by much higher connection costs.
Game protection can take advantage of innovations in game design. True replay systems, such as the Prince of Persia series’ ability to rewind time or Halo 3’s Saved Films feature for replaying an entire game, can both serve the game designer’s vision and become tools to strengthen security by helping detect cheating.
Careful consideration of security in game design may be the single most effective use of security resources. Many security problems can be avoided entirely
through good design practices. Conversely, bad design choices may make good
security impossible or exorbitantly expensive.
1. ferv0r (2008), “There Are Level 80s in Age of Conan,”
2. J. Savage (2007), “Game Stopped After Cheat Allegations,”
3. P. Pollack (2006), “Online “Banker” Runs Off with Cash, Avatars Cry Foul,”
4. J. McLeod (2008), “Königrufen,”
5. Upstarts! (1982), “The Really Nasty Horse Racing Game”
6. S. Fallon (2007), “Confessions of an Online Scrabble Cheat,”
7. E. Thorp (1962), “Beat the Dealer”
8. P. Lewis (2006), “For Sale for £1,000: Gadget that Means You’ll Never Lose at Roulette Again,”
9. YouTube (2008), “Audition Bot—61 Search Results,”
10. T. Adams and Z. Adams (2008), “Dwarf Fortress,”
11. B. Harris (2006), “Dubious Quality—Dwarf Fortress Articles,”
12. H. Söderström (2006), “Toribash,”
13. R. Garfield (1993), “Magic: The Gathering”
14. M. McWhertor (2007), “Eye of Judgment Card Creating Easier Than Expected?,”
Case Study: High-Score Security
High-score systems are one of the easiest and quickest ways to turn a single-player game into a social game. Achievements, ranks, ladders, and badges
can substantially increase interest in a game as well as foster player community, as seen at game portals like Kongregate. A high-score table is also an easy
way to get marketing data, encourage repeat visitors, and otherwise make the game
“stickier” than simply having a downloadable or simple online game. Many
businesses are taking this strategy to the next level by adding contests into the mix,
sometimes with large prizes.
Sadly, cheating rears its head the moment you introduce this new feature. Players
will even cheat to get a high-score on an obscure website for the simplest Flash
game. These cheaters can undo all of your community and interest building
efforts—who wants to play a game with cheaters? Why compete for a high score if
you don’t think you’ll have a reasonable shot at winning?
Although this can be a nuisance for an independent game developer who is
simply showing off a new game, the problem becomes more serious for an advergame or a commercial game. The reputation of the sponsor, site operator, and
developer’s business and real money are at stake. Things get even more serious
when high scores are used for a tournament or if there are contests or prizes involved. (Note: If you are running a contest, sweepstakes, or a game with prizes,
please consult a lawyer. Even skill-based games are regulated in the US and internationally; see Chapter 31.) Deloitte Touche Tohmatsu in the Netherlands did a
study tracking 40 Dutch advergames over a four-month period. The games, almost
all in Flash, were plagued by high-score hacks and leader board griefing (posting
crude, malicious, or defamatory names as high scorers). Several of the marketing
campaigns had to be shortened or canceled because of these security problems1;
see2 for the English translation.
High-score games are typically developed as single-player games that then post
the high score to a server. If the games are played in a browser, the security sandbox used by Flash, Shockwave (Director), Java applets, and DHTML/JavaScript
puts real limits on both the computing power of the game and its ability to
access the resources of the computer. Often, the high-score feature is added as an afterthought.
Because the high-score feature is often added late in the development process,
it is often a separate part of the game architecture. As a consequence, attacks on the “score” are easier. The security adage “if security is easy to add, it is
easy to remove” shows itself again.
What is the simplest attack on a high score? You simply send a better score to
the high-score server, ignoring the game entirely.
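The attack needs nothing more than an HTTP client. A sketch, with a hypothetical endpoint and field names:

```python
import json

# The attacker never runs the game: he simply fabricates a winning score.
# The "/submit_score" endpoint and field names are hypothetical.
forged = {"player": "l33t", "score": 99999999}
body = json.dumps(forged)

# In practice this would be sent with any HTTP client, e.g.:
#   curl -d '{"player": "l33t", "score": 99999999}' https://example.com/submit_score
# Nothing on the wire distinguishes it from a legitimate submission.
```

Because the server sees only the posted payload, a naive high-score service cannot tell this request from one produced by honest play.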
The first solution that game developers typically consider is to use encryption, hash
functions, or digital signatures to send the score to the server. Often, the SSL library used is the one provided by the browser. Although an encrypted
or signed data stream on the network-side is safe, the problem is that the game
attacker controls the computer. This is not the security assumption that most
encryption and digital signature systems are designed for (including SSL). In a typical game scenario, the bad guy is an insider. So, how do you protect against him?
Let’s quickly review the available tools. Hash functions are mathematical functions
that, when applied to a data stream, produce a hash word. A good hash function,
like SHA-256 (the MD5 and SHA-1 functions of the era are no longer considered secure), will produce a wildly different hash word from even a slightly different data stream.
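This “avalanche” behavior is easy to demonstrate with Python’s standard hashlib; the score strings here are illustrative:

```python
import hashlib

# A one-character change in the input yields a completely different digest.
digest_a = hashlib.sha256(b"player=alice&score=1500").hexdigest()
digest_b = hashlib.sha256(b"player=alice&score=1501").hexdigest()

# Both digests are 64 hex characters (256 bits), and they bear no
# visible relationship to each other despite the nearly identical inputs.
```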
An encryption function uses an encryption key to transform a data stream into
a protected stream. Often, there is a decryption function that will use the decryption
key to transform a protected stream back into the data stream. In private key or
symmetric key cryptography, the encryption key and decryption key are the same.
In public key or asymmetric cryptography, the encryption key and decryption key
are different, and one is publicly known.
Finally, a digital signature function combines a hash function with a public key
encryption system to create a protected hash word. Another cryptographic tool is a
cryptographic checksum. Cryptographic checksums are similar to digital signatures
but they use private key cryptography to create a protected hash word.
Computationally, private key cryptographic functions tend to be a lot faster than
public key cryptography.
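A cryptographic checksum can be sketched with Python’s standard hmac module; the key and the message format are illustrative only:

```python
import hashlib
import hmac

# A cryptographic checksum (a MAC): a hash word protected by a private key.
key = b"shared-secret-key"          # illustrative; never hardwire in a client
msg = b"player=alice&score=4200"

tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key: bytes, msg: bytes, tag: str) -> bool:
    """The server, holding the same private key, recomputes and compares."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Anyone without the key cannot forge a valid tag for a tampered score, which is exactly the speed-for-key-management trade that distinguishes checksums from public key signatures.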
Chapter 18 Case Study: High-Score Security
What’s the problem with SSL? SSL, like any other external library or application
extension (accessed as a DLL or SO), has a local interface that can easily be
intercepted and modified. This form of interceptor is often provided with a software
development kit (SDK) as a debugging tool or it can be implemented as a custom
proxy. The interceptor allows data (in this case, the game’s high score) to be freely
grabbed before the high score is sent to the encryption library. Because the attacker
can get between the game and the encryption function, she can modify the game data
or even send arbitrary data to the security library: Encrypted “bad data” is still bad.
The fact that SSL is an external library is its biggest weakness compared to an
internally implemented encryption (or digital signature) function. An internal
encryption function will need to be attacked and reverse-engineered using a lower-level debugger. This is not impossible, just more difficult. The problem moves from
one of API interception to reverse engineering. The same is true if the hacker wants
to directly modify a game’s high score or internal state.
In order to attack the game’s state, the hacker would require a memory editor
(see Chapter 14). Most memory edit or map tools are pretty simple. They actually
don’t reverse-engineer the game; they simply watch the player run the game for a
while and isolate the changes that have occurred to the application’s memory footprint. The player then can read out the memory map of the game (or any other
application). The technique is both generic and effective for virtually any application
that runs locally on a specific computer.
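The watch-and-narrow technique can be illustrated in a few lines; the “memory” here is just a dictionary standing in for a process address space, with invented addresses and values:

```python
# Toy memory-map scan: snapshot candidate addresses holding the current
# score, let the score change, and intersect. No reverse engineering needed.
def scan(memory: dict, target: int) -> set:
    """Return every address currently holding the target value."""
    return {addr for addr, val in memory.items() if val == target}

mem = {0x10: 7, 0x14: 100, 0x18: 100, 0x1C: 3}   # fake address space
candidates = scan(mem, 100)                       # score is currently 100
mem[0x14] = 125                                   # player scores; 0x14 holds the real score
candidates &= scan(mem, 125)                      # keep only addresses that tracked the change
```

After one or two such narrowing passes, the tool knows exactly where the score lives and can write any value it likes there.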
Attackers do not necessarily have to attack the game’s score. Instead, they can
alter the score table constants associated with different actions that get rewarded.
For example, if destroying a common item is supposed to be worth 25 points, the
hacker could change that value to 2,500 in the game’s code image. Then, each time
the player destroys the item, they will get 100 times the score they would have
earned legally. A limited countermeasure for this type of attack is to post a vector
that passes the components of the high score to the server: number of hits on item
1, number of hits on item 2, and so on. Encrypting or authenticating this value may
make attacks a bit more difficult.
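The server-side half of that countermeasure is simple: recompute the total from the posted components and reject any claim that disagrees. The item names and point values below are hypothetical:

```python
# Server-side recomputation of a score from its posted component vector.
POINT_VALUES = {"common_item": 25, "rare_item": 100, "boss": 500}

def recompute(components: dict) -> int:
    """Rebuild the total score from per-item hit counts."""
    return sum(POINT_VALUES[item] * count for item, count in components.items())

def plausible(claimed_score: int, components: dict) -> bool:
    # A claimed total that doesn't match its own components is rejected.
    return claimed_score == recompute(components)
```

This does not stop an attacker who forges the whole vector, but it does catch the common case of patching only the final score or a single table constant.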
The problem with encryption, digital signatures, digital rights management
(DRM), and other software security solutions is that they are inherently weak
against a local attacker because the hacker has control over the platform. Simple
client-side security does not work because, for games, client software should be
assumed to be malicious.
After all of these concerns, if you still choose to use encryption or another client-side solution, do not hardwire the algorithm’s key. Hardwired keys are no different
from a security perspective than a hash function. Instead, use the key as a part of a
“challenge/response” system and send the client a key in real time to encrypt the
score (see Chapter 14).
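A minimal challenge/response sketch, assuming an HMAC over a server-issued nonce; the embedded client key remains the weak point, as noted, so this only raises the bar above trivial replay:

```python
import hashlib
import hmac
import os

def issue_challenge() -> bytes:
    """Server issues a fresh per-session nonce and remembers it."""
    return os.urandom(16)

def client_respond(nonce: bytes, score: int, embedded_key: bytes) -> str:
    # The key is still recoverable by reverse engineering the client;
    # the nonce only prevents a captured submission from being replayed.
    return hmac.new(embedded_key, nonce + str(score).encode(),
                    hashlib.sha256).hexdigest()

def server_verify(nonce: bytes, score: int, tag: str, key: bytes) -> bool:
    expected = hmac.new(key, nonce + str(score).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```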
Digital signatures are of no more value than a regular encryption system in this scenario. A signature is only useful if the data that it signs is accurate. If the key is downloaded in real time, the attacker will at least need to reverse-engineer your game.
However, do not be fooled; if the hackers are motivated, they will succeed. These
solutions are only a minimal fix to the simplest of attacks.
If you choose to use internal cryptography, you will face the general problem of
a lack of “interoperable” cryptography. Basically, it is necessary to find cryptographic libraries that work both with your client software (Flash and so on) and with the backend (often PHP). Using a common cryptographic algorithm suite
does not guarantee interoperability and isolating implementation inconsistencies
can be very difficult.
The next option is to implement the game in an actual client-server configuration.
This design approach implements the game as if the part that is downloaded is a
“smart terminal” that provides a nice interface to a game that is hosted on the
server—the same approach used in many MMOs. This actually stops most forms of
client-side cheating, but potentially has a large overall system impact compared to
a simple, downloaded client. After all, instead of a single download (a natural for
web servers), the server needs to interactively update the game based on player actions. This will likely add many additional connections to the server, as well as additional server-side processing. Also, developers often complete the game as a standalone product, with the high-score system added later, making this architectural change impractical in many cases.
A hybrid solution is to implement a randomly seeded client. This approach works for
games that are not entirely deterministic (see the discussion of puzzles, later in this
chapter). Basically, there are two components to this approach. First, the server periodically and non-deterministically updates the client’s random seed. Second, the
client must store a log of the game action/state sequences. If players claim a high
score, they need to post the game logs and validate that the game session could have
yielded the posted high scores. This system is probably not adequate for contests,
but is probably good enough for free, pure-entertainment games.
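One way to sketch the randomly seeded approach, with a stand-in “game” whose scoring depends on a server-supplied seed; the action names and scoring rule are invented:

```python
import random

# Server seeds the client's RNG; the client logs every action; the server
# replays the log with the same seed and checks the claimed score.
def play(seed: int, actions: list) -> tuple:
    """Toy game: each action earns randint(1, 10) points from the seeded RNG."""
    rng = random.Random(seed)
    log, score = [], 0
    for action in actions:
        score += rng.randint(1, 10)
        log.append(action)
    return score, log

def server_validate(seed: int, log: list, claimed: int) -> bool:
    """Replay the posted log under the same seed; the scores must match."""
    replay_score, _ = play(seed, log)
    return replay_score == claimed
```

Because the server chose the seed non-deterministically, a cheater cannot precompute a high-scoring log offline; he would have to forge a log that genuinely replays to the claimed score.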
There are other options to ensure the integrity of the game service, if not the high
score itself:
Buddy High Scores—Instead of having a single high-score system, it is possible to divide high scores into geographic, friends/social networks, and other
ways to give more players a chance to earn a high score. A “buddy-high-score”
system works well from a marketing point of view and makes cheating irrelevant (the benefit of cheating your friends is much lower than the ego reward of
earning the high-score globally).
Challenge/Response Score Posting—As noted previously, external encryption
services are not effective in protecting against even a slightly motivated
attacker. An internally implemented challenge/response system can be easily
integrated with an existing game (in fact, we have a version for Flash and PHP
at As with many of these solutions, this approach is suitable
for casual games, not for contests.
Face-to-Face Competition—Depending on the business model being used, a
local high score can be used as a gateway to face-to-face competition. The
prospect of a face-to-face competition can be both a positive marketing tool
and a substantial deterrent to cheating. Public humiliation is a powerful threat.
False Games—Contests don’t need to have a truly functioning, public high-score system. Instead of using the game result and score for a contest, set a low,
minimum threshold score to qualify for entry into a drawing. The benefits of
cheating go way down, as any reasonably good player will be entered in the
drawing (arguably, the game score gateway is simply a way to convince players
that they should disclose their personal data to be entered into a drawing).
Faux Multi-Player Gaming—It may be difficult and expensive to move the
game onto the server and implement real client-server gaming. However, in
some cases, it might be much easier to use other game players as “faux
servers”—where the game and game state is coming from another player’s
computer. This might be easier to implement, as platforms like Flash include
object-replication tools. Basically, each game client runs two or more game
instances: one instance that drives the local display and captures player actions
and another instance that acts as a server engine for another, remote player.
The players either use the central game server as a relay or communicate peer-to-peer. There are numerous tricky details to this approach, but it may be
worthwhile for higher-value games where a developer does not want to move
to true server-based gaming.
Implicit Score and Player Data Authentication—The game developer can include implicit features in the game design that are not “known” at all on the
client, but can be checked only on the server. For example, a race may have a
certain hardwired minimum time or certain delays that are implicit to the game
design. These features can provide a way to detect illegitimate scores; however,
these techniques are also very vulnerable to detection and analysis by a motivated foe.
Replayable Game Logs—Traditional computer games are increasingly using
true game-play logs that allow the game to be recorded and replayed. Although
some lightweight browser languages like JavaScript and Java do not allow applications to store data on the platform, others like Flash do. Even without the
ability to save a game, it is still possible to store a game log during play. If a
player submits a suspect high score, the game server can then request the game
to upload its full game log to be verified. Ideally, the game log should be able to
be fed into another copy of the game and used to drive the game instead of the
mouse, keyboard, or controller. This can be used to visually detect game anomalies, but is no guarantee against a serious cheater.
The biggest challenge for protecting high-score games is not the lack of available options. Rather, the problem is that high scores are an afterthought in many of
the games; this fact makes it difficult to add effective security features retroactively.
As discussed in Chapter 17, puzzles and games that have strong or dominant game
play strategies or are dependent on physical skill are poor candidates for games with
a high-score or competitive element because they are often automatable. Malicious
players can create bots or support programs that provide a superior game-play strategy, superhuman speed, or an optimal puzzle solution. These games are fine in “for fun” settings
without high scores, but once high scores or multiple players are involved, they are
often attacked.
There are tools to detect bots or even try to discern the use of game aids, but it
is ultimately impossible to distinguish between optimal play from a human and
optimal play from a machine. Also, bots can be hidden in ways that are undetectable (see the sidebar in Chapter 13 on virtualization).
The full discussion of griefing comes later (see Chapter 21). However, simple high-score games are beset by inappropriate player handles, annoying at best and disturbing at worst, associated with players’ high scores. Most high-score games let players choose how their name will appear on the high-score list: their player handle.
Unsurprisingly, many of these handles are obscene, insulting, or infringing on
copyrights and trademarks. The problem is even worse if the high-score system has
been hacked. I was told about a case where a bank had an online game which was
hacked and the pranksters used the opportunity to mock the bank and its poor
security practices via the compromised high-score table. Filters, notifications, and
voting schemes can be used to cost-effectively remove inappropriate names.
However, it is likely that any such system run by a company should include some
level of human review. Contests do have a slight advantage, as inappropriate player
handles can be grounds for disqualification.
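A first-pass handle filter might be sketched as follows; the blocklist, length threshold, and outcome labels are placeholders, real deployments need normalization (leetspeak, Unicode homoglyphs) and ongoing curation, and human review remains essential:

```python
import re

# Illustrative blocklist; a production list would be curated and localized.
BLOCKLIST = {"badword", "slur"}

def screen_handle(handle: str) -> str:
    """Return 'reject', 'review' (queue for a human), or 'accept'."""
    words = re.findall(r"[a-z]+", handle.lower())
    if any(word in BLOCKLIST for word in words):
        return "reject"
    if not words or len(handle) > 20:
        return "review"
    return "accept"
```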
High-score systems are a great way to build the popularity of your game.
Unfortunately, cheating follows right behind. The problem may be simply a nuisance if the game is provided for entertainment purposes. In some cases, it may be
better to forgo the advantages of a high-score system than deal with potential
adverse consequences. Also, the level of threat against a game jumps dramatically
once there is any sort of prize or cash involved. Sadly, it takes very little motivation
to bring out the cheats and vandals.
1. Deloitte Touche Tohmatsu (2008), “Advergames op Grote Schaal Gehackt,”,1014,sid%253D13354%2526cid%253D202819,00.html
2. S. Davis (2008), “Serious Advergame Hacking Problems: Deloitte Touche Tohmatsu Netherlands
Survey Findings,”
Social Subversion: From
Griefing to Gold Farming
and Beyond with Game
Service Attacks
In this part, you’ll find the following topics:
Chapter 19, “Overview of Social Subversion”
Chapter 20, “Competition, Tournaments, and Ranking Systems
(and Their Abuse)”
Chapter 21, “Griefing and Spam”
Chapter 22, “Game Commerce: Virtual Items, Real Money
Transactions, Gold Farming, Escorting, and Power-Leveling”
Chapter 23, “To Ban or Not to Ban? Punishing Wayward Players”
Overview of Social Subversion
You’ve made your game and designed it carefully. You’ve considered cheaters
and hackers, avoided exploits and engine problems, and yet, after your game
goes “live” online, everything falls to pieces.
Welcome to the world of game service attacks.
Cheaters and hackers are increasingly attacking the “game around the game”—
not the game itself, but the other features of the online service. These attacks
violate the social norms and social context of the game and, often, its “terms of
service.” Some people will call many of these activities cheating. The main difference between game service attacks and traditional cheating is that these attacks
cannot be detected by the game itself as rule violations. They are slipperier, more difficult and costly to control, and can rarely be stopped completely.
Most of these problems stem from weak identity and accountability—if no one
knows who you are, there is nothing to stop you from behaving badly.
Tournaments are growing rapidly in popularity. These services take basic high-score systems and add richer competition for multi-player games. Tournaments and
various forms of in-game competition wrap a game with a lobby or matchmaking
service and track game results. Just as players will cheat at a free Flash game to get a
high score, they will abuse lobbies and competition services for their own purposes.
Griefing and spam exploit communications systems as well as the rules of the
game. Communications abuse ranges from commercial spam, in many cases for
gold farming or other game commerce services, to verbal abuse, cyberbullying, and
sexual harassment. Griefing players take advantage of game play systems, reputation systems, and, in some cases, the anti-griefing systems themselves that are put
in place to handle customer complaints. Griefing behaviors can range from theft of
other players’ assets or experience (ninja looting and kill stealing), to disrupting the
game play of others (corpse camping), and exploits of game system quirks (spawn
camping). Certain games allow players to create content, such as Second Life and
IMVU, and, inevitably, players have found ways to abuse these services with attacks
ranging from fairly standard griefing and abuse to denial of service attacks.
Chapter 19 Overview of Social Subversion
There are people with more time than money, and others with more money
than time. In persistent games, this has resulted in unauthorized game commerce.
Trading is a powerful social tool. Unfortunately for many game developers, players
use in-game trading and gifting systems for real economic purposes. Gold farming
is probably the most widely discussed of these problems (wherein players buy and
sell virtual items and characters), but there are also outsourced services that will
play on behalf of a player (power-leveling), and escort services where paid, skilled
players play alongside other players to help boost their skills or acquire certain items.
Once a cheater or game service attacker has been caught, the standard impulse
is to ban the person from the game. There are other options and some negative
consequences from banning and there are real questions as to banning’s effectiveness
in deterring game abuse.
There is quite a range of game service attacks and, fortunately, corresponding
countermeasures. However, there are few standard solutions to these problems, as
the game service security weaknesses often are closely tied to specific business,
implementation, and operations choices.
Competition, Tournaments,
and Ranking Systems
(and Their Abuse)
Historically, computer game developers have focused more on single-player
games than multi-player experiences. This is largely an artifact of the evolution of personal computing and network technology. For thousands of
years, games have been predominantly multi-player experiences with the exception
of solitaire card games and puzzles.
Although cooperation is sometimes an element of gaming, competition is
deeply ingrained into its rules and language. There are very few games that don’t have
some sort of notion of “winning,” even if the game has only one player. Wagering
and rewards have long been tied to games—after all, the Bible’s Book of Job is centered on a proposition bet between God and the Devil. Players are encouraged to
earn the most points, finish the game most quickly, and get the high score.
With the emergence of online computer games, competition and tournaments
are rapidly growing in popularity. As discussed in Chapter 18, high scores can invigorate the audience for a single-player game, but they can also inspire abuse.
Multi-player game competitions are more interesting and varied than single-player high-score services. Lobbies, ranking systems, and tournaments vary widely in
form and are targets of a wide range of attacks.
In order to understand the attacks on game competition, it is worth reviewing how
a variety of these systems work. Although most game players are familiar with
sports competitions, the growth of online play has introduced several new types of
ranking systems. There is surprisingly little good discussion about how these
systems work from a practical perspective, but Christopher Allen and Shannon
Appelcline have put together a number of excellent articles on the subject at
Christopher Allen’s blog, Life with Alacrity1.
Chapter 20 Competition, Tournaments, and Ranking Systems (and Their Abuse)
There are two essential types of ranking systems:
Closed Ranking or Tournaments—These ranking systems are built around a
limited pool of entrants (restricted either by total population or a specific registration period). They use some process to rank the entrants or determine one
or more victors. There are a number of types of tournament formats, including2:
Single-Elimination—A tournament where players are removed from
the tournament as they are defeated. Only the top player is ranked, not
any of the other participants.
Consolation—A tournament where players are moved to a “consolation” single-elimination tournament after they lose once. Once they
lose a second time, they are eliminated.
Double-Elimination—Very similar to the consolation tournament,
but players enter more senior brackets, based on how far they proceed
through the brackets in the single elimination tournament. If a team
successfully won three rounds, but then lost its match, that team would
enter at the third round of the consolation tournament, rather than at
the first round.
Up and Down/King of the Hill—Participants all play for a fixed period
of time. The leader or victor at the end of the interval progresses
“upwards” toward the top position and the losers move “down.”
Swiss—Participants all play in a set number of rounds. Players play
against others who have done comparably well, but the final result is
based on a total score, with victories worth 2 points, ties worth 1 point,
and losses worth 0 points.
Round Robin—Participants play all other participants a fixed number
of times (rarely more than two), with the participant with the most
victories crowned the winner.
There are numerous variations on these competitive schemes. A tournament
can also combine different tournament schemes, such as an initial round robin
phase, followed by a single elimination tournament for the top performing
round robin players. For tournaments with gambling, there are also variants
where players can buy back in to the tournament again or purchase additional
chips to continue to play3, as well as take advantage of other options4.
Open Ranking or Ladders—These ranking systems can accommodate an
unlimited number of participants or a limited participant pool over an extended
period of time to establish the participant’s relative status. Open ranking
systems are used in applications outside of gaming, most notably for reputation
and rating systems5. Well-known ranking systems include Elo and Glicko for
chess; Xbox Live’s TrueSkill; and eBay’s rating system.
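The Swiss scoring rule described above (victories worth 2 points, ties 1, losses 0, final standing by total score) can be sketched in a few lines. The player names and the result encoding ("a", "b", "tie") are illustrative choices, not taken from any real tournament system:

```python
# A minimal sketch of Swiss-system scoring: victories are worth
# 2 points, ties 1 point, losses 0, and the final standing is the
# total score across all rounds.
from collections import defaultdict

WIN, TIE, LOSS = 2, 1, 0

def swiss_standings(rounds):
    """rounds: list of (player_a, player_b, result) tuples,
    where result is "a", "b", or "tie"."""
    scores = defaultdict(int)
    for a, b, result in rounds:
        if result == "a":
            scores[a] += WIN
            scores[b] += LOSS
        elif result == "b":
            scores[a] += LOSS
            scores[b] += WIN
        else:
            scores[a] += TIE
            scores[b] += TIE
    # Highest total score first
    return sorted(scores.items(), key=lambda kv: -kv[1])

standings = swiss_standings([
    ("alice", "bob", "a"),
    ("carol", "dave", "tie"),
    ("alice", "carol", "a"),
    ("bob", "dave", "b"),
])
```

A real Swiss pairing engine would also match players against others with comparable running scores; this sketch only shows the scoring side.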
Protecting Games: A Security Handbook for Game Developers and Publishers
The two types of systems can be used together and interact over time. Most
professional sports teams are ranked from year to year, but also compete through
some sort of tournament system for a final victory. In U.S. college basketball, there
is an ongoing ranking system on a national basis that determines invitations to the
NCAA or NIT tournaments (sort of)6.
There are four major purposes for ranking systems:
Ranking/Serializing—Placing the participants in some sort of order and, for
ranking systems, tracking that order over time.
Grouping/Grading/Thresholding—Grouping the participants into categories
(one to five star systems, grades A to F, and so on). This is the dominant approach used for rating and thresholding systems.
Matchmaking—Competitive systems need a way to match different participants
and reward victors and penalize losers. Matchmaking systems determine who
competes. They can be structured in different ways. In a single-elimination
tournament, participants with higher ranking are matched with players with
lower ranking to increase the likelihood that the highest ranked competitors
will compete in the final match. Thus, the top ranked participant initially
competes with the bottom-ranked participant, the second-ranked participant with the second-lowest-ranked participant, and so on. Additionally, the highest
ranked and second ranked participants are in different brackets so that if they
win all of their matches, they will reach the finals and compete with each other.
Conversely, ongoing ladder systems tend to match participants with comparable skills to try to ensure a fair competition (each competitor having a nearly
equal chance of victory).
Handicapping—These systems typically give increased rewards for players
who defeat higher-ranked foes. Conversely, in systems like golf’s, a handicap is
a balancing system to basically allow a lower-ranked player to compete with a
higher-ranked player.
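The first-round seeding rule described under matchmaking (top seed meets bottom seed, second seed meets second-lowest, and so on) can be sketched directly. The function name is mine; a complete bracket generator would additionally place the top two seeds in opposite halves recursively:

```python
# A sketch of first-round single-elimination seeding: the top seed is
# matched with the bottom seed, the second with the second-lowest, etc.
def first_round_pairings(seeded_players):
    """seeded_players is ordered best-first; returns first-round matches."""
    n = len(seeded_players)
    return [(seeded_players[i], seeded_players[n - 1 - i])
            for i in range(n // 2)]

pairings = first_round_pairings([1, 2, 3, 4, 5, 6, 7, 8])
```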
The attacks on tournament and ranking systems, discussed later in this chapter, are highly dependent on the specific purpose of the system. Careful design can
avoid many problems. For example, the online game A Tale in the Desert first used
the eGenesis Ranking System7, which attempted to limit the ability of players to create free accounts to boost their ranking by minimizing the effect of competing with
players with new accounts (basically, you could earn a maximum of eight points
from a new player, but could earn substantially more from an experienced player),
but later moved to a very different approach with its Tournament Ranking System8.
This latter system works in a similar manner to the “master points” system used in
Bridge where players really only compete with other players at the same rank and
compete to earn additional points towards advancing towards the next rank. The
only real differences are that a player cannot compete multiple times against the same player at a given rank in a tournament. Also, it is possible for a player to lose too
many competitions, in which case she can either start again at the lowest rank or no
longer compete within that specific tournament.
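As a rough sketch of the idea behind the eGenesis approach (not its actual algorithm), a ranking system can cap the points earnable from young accounts so freshly created "alt" accounts are poor boosting fuel. The cap of 8 points comes from the description above; the maturity threshold and function name are illustrative assumptions:

```python
# Cap the ranking reward when the defeated opponent is a nearly new
# account, blunting boosting via disposable free accounts.
NEW_ACCOUNT_CAP = 8  # max points earnable from a new opponent (per text)

def points_for_win(base_points, opponent_games_played, maturity_threshold=50):
    """Limit the reward when the defeated opponent has little history."""
    if opponent_games_played < maturity_threshold:
        return min(base_points, NEW_ACCOUNT_CAP)
    return base_points
```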
Before players enter a game, they use a lobby service to set up matches—either with
opponents of their choosing, or, for tournaments, based on algorithms and procedures provided by the game service. Hackers attack the matchmaking service itself
or its underlying ranking or handicapping system to position themselves to gain an
unfair advantage: A hacker could try to boost her chances of winning cash or prizes
by entering a contest an excessive number of times or create multiple accounts that
“compete” to boost the rank of a chosen account. Conversely, the cheater could
appear to be incompetent and lose often to set up suckers for a sting in a game for
money—just like a pool shark.
Although randomized match-ups are theoretically strong, it is an interesting question whether teammates, or opponents for that matter, could collude to enter the
matchmaking lobby within a narrow time window and thus increase substantially
their chance of being matched together. After all, if ranked games are being run
continuously, there are going to be times when the game lobby is going to be relatively empty. Or, even with a relatively popular game, highly synchronized lobby
entry can overwhelm the matchmaking system’s randomization process. The larger
the team, the more effective this tactic will be. A weighting system that adds an
anti-correlation component (to ensure that players haven’t played together before)
and a measure that considers how many games someone has played (to address
disposable identities) added to the tournament ranking system could help reduce
the effect of team play. Another strategy may be to allow all players to play multiple games concurrently (this strategy works better with thoughtful and leisurely
games, as opposed to fast, reflex-based games).
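The weighting idea above can be sketched with two terms: an anti-correlation factor that down-weights pairs who have often played together, and a history factor that distrusts accounts with few total games. The formula and constants are illustrative assumptions, not a tested design:

```python
# Sketch of an anti-collusion matchmaking weight: frequent partners and
# short-history (disposable) accounts are made less likely to be paired.
import random

def match_weight(games_together, total_games, min_history=20):
    weight = 1.0 / (1 + games_together)      # anti-correlation term
    if total_games < min_history:            # disposable-identity guard
        weight *= total_games / min_history
    return weight

def pick_opponent(candidates, rng=None):
    """candidates: list of (name, games_together, total_games) tuples."""
    rng = rng or random.Random()
    names = [name for name, _, _ in candidates]
    weights = [match_weight(gt, tg) for _, gt, tg in candidates]
    return rng.choices(names, weights=weights, k=1)[0]
```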
For continuous tournaments or ladder systems, players can “spread out” their
entries to move more rapidly up or down a ranking system. A closed tournament
matches players based on seeding or a random draw. However, it is possible, in an
open ranking system or a tournament that allows players to join over a substantial
period of time, to disperse a team of players uniformly across a large population.
Depending on the ranking scheme, these anti-correlated players can build “good”
reputations independently of each other and subsequently coordinate and play
together to accelerate the ladder performance of a few selected members.
Once players can arrange to be matched with whomever they wish, they can then
“boost” the rank of a designated player or group of players. This is possible if the
tournament uses an open lobby, the game has a ladder ranking system, or they
overwhelm the lobby. An NBA player, Gilbert Arenas of the Washington Wizards,
was caught colluding with another player by taking turns winning game events to
boost his rank in Halo 3 9. Players can also achieve the same objective by using bots
rather than finding other colluding players for some online games.
It is a good idea to test your tournament structure against various rank-manipulation strategies to see how many cooperating players it would require to be
effective. Although players may attack tournaments just to get a high rank, tournaments with cash or prizes are the prime targets for boosting.
The other goal of a cheating player may be to rank higher in a less competitive
tournament—becoming the “best of the worst” in a junior or amateur tournament
rather than having to fight and likely not win in a more seasoned competition.
This is especially appealing for games where money is involved—a tournament
variant of a pool shark.
These attacks are all quite difficult to counter. They are all, effectively, varieties
of collusion and take advantage of weak identity systems, particularly when games
are online. Clever design of a ladder system or tournament may minimize the impact of several of these attacks. For tournaments, bringing players in for face-to-face competition will often make it much more difficult for players to hide their
identities. Platform identities and signatures may be effective in determining
patterns of play that can help uncover these groups of players or accounts.
Tournaments and ranking systems impose a structure on a social group. There are
two ways to undermine these structures: with an organized team of individuals (a
syndicate) or virtual individuals (a set of bots). Svenska Spel’s lottery game, Limbo,
was undermined by groups of players colluding to select different entries, a variant
of the entry spreading attack where the players ensured that their lottery entries had
different values to increase the group’s chance of winning10. The online game
OutWar was targeted by a massive botnet of at least 30,000 compromised computers to help boost two players’ ranks11.
Tournament cheat bots fall into two categories—winbots and lossbots—and
are the direct counterpart to human syndicate members helping boost or bust a
teammate’s rank. In 2005, Blizzard banned 4,000 players from Battle.Net for using
lossbots and ladder abuse in Warcraft III12. Of course, bots can also be used to
simply cheat against other players in a tournament or boost a player’s rank, as
discussed in Chapter 15.
There are certain attacks that can occur against a game because it is being played in
a tournament or as part of a ladder-ranking system. These are not really attacks on
the game itself but on the game’s context. Although Brain Age may be a fine single-player mental skill game with its mini-games based on basic mathematics and logic,
it would fail utterly in an online, multi-player, competitive environment where
players could use calculators and other player aids.
Players can cooperate in ways forbidden by the game rules to gain a competitive advantage. This is a problem for multi-player games in general, but can be
even more problematic when tournaments or rankings are involved. For example,
collusion in a two-player game is meaningless unless there is a larger group ranking system that can be attacked. Usually, boosting and busting rankings can be
easily carried out by groups of colluding players; the larger the colluding syndicate,
the better.
Games played with small groups of competing players are very common in online
gaming. There are a couple of very practical reasons for this: It is easier to design a
competitive game that works with a smaller number of players rather than a large
number. Also, it is very difficult to bring together a large group of players and keep
them playing at the same time, particularly online. First person shooters, racing
games, sports games, and strategy games are usually played with modestly sized
groups (rarely more than 16 players, often 8 or fewer players).
For a game service provider, these sorts of games can be operated very inexpensively, especially when the players use a local computer to act as the game server
or when the game operates as a pure peer-to-peer network. Player-operated servers
and peer-to-peer architectures have the substantial advantage of pushing all of the
computing and networking resources onto the game’s players while only leaving a
small lobby, status, and persistence service at the game operator’s location.
Sometimes, game operators let too much control devolve to the players. In
Battlefield 2, players cleverly configured their local game servers to give them an
unfair advantage in the game’s overall ranking scheme (the exploit was really a flaw
in the game’s scoring system that ranked players much higher if they used “lower-tech” weapons like knives rather than guns)13. All players need to be able to independently authenticate a game’s configuration and state, whether the game is
configured as client-server or peer-to-peer architecture. The game developer or
operator should also be able to extract an audit record of a game from any and all
players. This may not prevent all ranking system manipulation, but could help
identify gross abuse by players who post scores for game wins or losses that were
not actually played.
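One way to make such an audit record trustworthy is to hash-chain it on each client, so that editing any earlier record invalidates everything after it when the operator extracts and checks the log. This is a minimal sketch; the record format is an assumption:

```python
# A tamper-evident, hash-chained game audit log: each record's hash
# covers the previous record's hash plus the event body, so after-the-
# fact edits by a player are detectable on verification.
import hashlib
import json

GENESIS = "0" * 64

def append_record(log, event):
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": digest})

def verify_log(log):
    prev = GENESIS
    for rec in log:
        body = json.dumps(rec["event"], sort_keys=True)
        if rec["prev"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

A hash chain alone only proves internal consistency; cross-checking logs from all participants (or countersigning them) is what catches a player who fabricates an entire game.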
Increasingly, game developers are adding an “audience mode” to their games. This
mode allows the game to be viewed from a number of perspectives and often supports replay and recording features. This is due largely to the rise of cybersports and
machinima and has mostly been a boon to the industry. Additional game observers
can create security problems, however. In the real National Football League, the
New England Patriots were caught using cameras and electronics to read the communications between players and coaches of their gridiron foes14. Audience mode
tools can be used by colluding players or players who hack into a game server to
achieve the same objective. The best countermeasure is probably introducing
a communications delay before sending information to the game’s audience.
This will almost certainly not be effective if a player is hosting the game server.
More sophisticated blackout systems are likely to be technically difficult to implement and tempting for hackers to circumvent.
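The communications-delay countermeasure can be sketched as a simple time-indexed buffer: events become visible to spectators only after a fixed delay, limiting the value of relaying live information to a competitor. The 90-second delay is an arbitrary illustrative value:

```python
# A spectator delay buffer: published events are released to the
# audience only once they are older than the configured delay.
from collections import deque

class SpectatorFeed:
    def __init__(self, delay_seconds=90):
        self.delay = delay_seconds
        self.buffer = deque()  # (timestamp, event), in publish order

    def publish(self, timestamp, event):
        self.buffer.append((timestamp, event))

    def visible_events(self, now):
        """Release every buffered event older than the delay."""
        released = []
        while self.buffer and self.buffer[0][0] <= now - self.delay:
            released.append(self.buffer.popleft()[1])
        return released
```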
These attacks are basically exploits of the tournament or ranking system and
need to be countered as such. Although technical countermeasures may sometimes
be able to detect these hacks, they are difficult to isolate, by their nature. Ghosting
weaknesses are best thwarted by changing the tournament or ranking scheme and
rules, just like game exploits.
One part of the game code is of particular concern for both the game operator and
the game developer—the “game over” game code. Networked games can end for a
number of legitimate reasons, but also for illegitimate ones. Dropped connections
and computer failures are too common to be simply ignored or arbitrarily punished. Game developers and providers also need to be concerned about players
abandoning a game to avoid a loss and reduction in ranking (called stat guarding).
This has been seen in the Ultimate Online Baseball MMO. Certain players (derisively called stat babies) in Netamin’s game abandon their games when it looks like
they may lose and damage the statistics of their pitchers or hitters15.
Malicious players can abuse a game’s “game over” logic, and even the game
abandonment code to their advantage. Depending on how the “game over” logic is
implemented, malicious players may be able to force the game to end when they
have an advantage or to use their preferred scores as the authoritative source for the
game. Players may even abuse the game abandonment system to make it look like
the other player has abandoned the game—and trigger the game score system to
punish the other player accordingly. The ideal approach is for the game to periodically establish a “certified game state” that can be used to replay or finish the game
at a later date (ideally, this would be done continuously).
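A "certified game state" snapshot can be sketched with an HMAC, so that neither player can later present a forged state as the authoritative resume point. Key management is out of scope here; the shared server key and record format are purely illustrative:

```python
# Periodically certify game state with a server-keyed HMAC so a
# disputed or abandoned game can be resumed from the last snapshot
# both sides agree on.
import hashlib
import hmac
import json

SERVER_KEY = b"illustrative-secret"  # in practice, a managed server key

def certify_state(state, turn):
    body = json.dumps({"turn": turn, "state": state}, sort_keys=True).encode()
    tag = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    return {"turn": turn, "state": state, "tag": tag}

def verify_state(cert):
    body = json.dumps({"turn": cert["turn"], "state": cert["state"]},
                      sort_keys=True).encode()
    expected = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["tag"])
```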
Players can also attempt to report false scores and delay reporting of undesirable game results to manipulate a ladder or tournament system. A final problem
occurs when players make side wagers on game results. This is not something that
a game operator can handle directly, but it is an issue that they should be aware of,
because these side wagers can substantially alter the behavior of the players. It may
be more profitable for players to lose the game and distort the ranking system if the
side bets are large enough (or there is little value in having a high rank in the ladder).
Unfortunately, there are no magic bullets for these problems. Trustworthy
game logs may help, but games need to be examined on an individual basis.
Abandonment is a particularly tricky problem for online games. The essence of any
solution is to make it more advantageous for a player to complete a game, whether
or not they win. A metric that simply rewards players for completing more games may
help. One also needs to look at how a game is scored internally. Games that have
asymmetric scoring systems (a term I’m making up as far as I know) are particularly
vulnerable to this form of attack. An asymmetric scoring system is one whereby one
player can score independently of the other players. Baseball is a good example. Runs
are earned by a given team, as are individual statistics, without direct corresponding
consequences for the other team or players.
Many games can be configured so that the game’s scoring system is “zero-sum” or
symmetric scoring. Each positive event for one player is balanced by a negative event
for the other player(s). Thus, a game will have a net score at all times of zero. In many
cases, this allows the game to end at any time and still be considered valid.
One game that uses this type of scoring system is the Austrian tarot card game
called Königrufen. The problem we encountered when playing the game was that
we never had the exact right number of players for the game. The game requires four
players, but we often had five or six, never enough for two full tables.
This was a group of mathematicians, so, of course, they had a mathematical solution to the problem: zero-sum scoring.
If I scored 300 points in a hand, each other player lost 100—making the hand net
out to zero, so that the odd-mathematician-out could play the next hand while keeping the game score working for the whole evening. This is a powerful and flexible technique.
So, if there are W winning players and L losing players, and the total reward is R, then:
R = W*r = L*p
Thus, each winning player will receive:
r = R/W reward points
and each loser will lose:
p = R/L penalty points
Overall standings are based on players’ individual scores. Although the sum of the
players’ scores is zero, their individual scores can vary widely.
This technique may not always be applicable but it can be an effective technique
to minimize the consequences of game abandonment. Also, because the game is always at “net zero,” the game service may more easily support late player substitutions
or replacement players joining a game session to improve the game play experience
while not punishing (or excessively rewarding) late players.
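The zero-sum rule above can be expressed directly in code. This sketch applies one hand's result to a running score table, splitting the reward R evenly among winners and the matching penalty evenly among losers:

```python
# Zero-sum scoring: a total reward R is split evenly among the W
# winners (each gains r = R/W) and the matching penalty evenly among
# the L losers (each loses p = R/L), so every hand nets to zero.
def apply_zero_sum(scores, winners, losers, total_reward):
    r = total_reward / len(winners)   # each winner gains r = R/W
    p = total_reward / len(losers)    # each loser loses p = R/L
    for player in winners:
        scores[player] = scores.get(player, 0) + r
    for player in losers:
        scores[player] = scores.get(player, 0) - p
    return scores

# The hand from the tarot example above: one player scores 300,
# each of the three other players loses 100.
scores = apply_zero_sum({}, ["declarer"], ["p2", "p3", "p4"], 300)
```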
Game operators don’t like to think of themselves as a source of game problems, but
players certainly do. The most important asset a game provider has is her reputation. In order to avoid damaging public relations, game service providers should be
prepared for accusations from disgruntled players.
Because of the nature of the games that they are offering, game providers often have
insider knowledge that would give a favored player a real advantage in a game.
Also, if there are games played in competition with the game provider, there can be
tax advantages to reducing apparent winnings by colluding to lose to a cooperating
player. (This is a concern that regulators sometimes have with casinos. Casinos are
taxed on their winnings, so sometimes a greedy casino will arrange to lose to a
player and thereby reduce its taxes. The casino then arranges to share the ill-gotten
winnings with the corrupt player.)
The MMO EVE Online faced accusations that members of the game developer’s
staff were giving their team (corporation) an unfair advantage in the game16. The best
strategy to avoid this problem is to simply not allow developers, their friends, family,
or anyone else personally or professionally associated with the game to play. I’ve had
this debate with several developers who’ve objected strongly. If developers must play,
it should be separately and they should clearly identify themselves. Also, they should
be subject to quite rigorous logging. Even the appearance of impropriety can be quite
expensive. The loss of only 100 subscribers due to damaged reputation for virtually
any MMO would likely exceed almost any employee’s annual salary.
This is a problem associated with closed tournaments, skill games, gambling games,
or other games for money. It is a more extreme version of the bias problem,
whereby the game provider intentionally seeds the game with insider players and alters the game for the game provider’s benefit.
If the game service has payments involved, there are opportunities for payment
abuse. A game company that shaves a nickel here, a penny there, and a dime somewhere else can easily and stealthily earn substantial undeserved revenues (the general
term for this in the computer security field is a “salami attack,” in which many
individuals do not notice very small amounts of fraud but the aggregate amount
stolen can be quite large).
Game providers should provide clear payment tables that are always available
to players and full and detailed accounting records for the player’s review. It would
be optimal to provide an independent audit on the player’s platform, but this is not
always practical. An outside auditing firm in support of well-documented processes
and procedures and other measures can help build a reservoir of trust.
This type of attack could easily be implemented in virtual asset games by providing smaller amounts of virtual currency than promised to players, removing a
small amount of virtual currency or items from a player’s existing virtual holdings,
or altering virtual asset prices for non-observant players. This can happen with real
items, not just virtual assets. The use of electronic point-of-sale software has made
it possible for crooked business owners to take funds, without reporting income, by
using programs called zappers17. Basically, zappers create fake transactions from a
malicious company’s business partners for the purchase of items or they alter the
price of items. The zapper manipulates the entire accounting system to allow a
crooked business owner to extract cash without ever reporting it as income but still
have clean books for auditors and tax collectors.
In games where variable rewards or payments are involved, the game provider may
be able to make the game more “interesting” and hence increase payments by players to the game operator. In this case, the game provider doesn’t really care who
wins, just that there is more activity than would be occurring normally under the
game’s rules. There have been accusations that some online poker sites have a bias
towards dealing hands that will encourage a lot of player wagers, thereby earning
the game operator more money from increased wagering.
Reputation is critical for a game provider. Anything that can damage the game
operator’s reputation can be very costly very quickly. It is important for game providers
to avoid even the appearance of impropriety. Also, as the (online) game industry
grows, it will be important for companies to cooperate and develop best practices
and perhaps even certifications to protect the whole industry from flawed practices or individual bad-apple companies, which could otherwise invite government regulation or lawsuits.
Identity becomes much more important once one moves to a rich online game service (see Chapter 29). What is simple for a casual single-player game or a free standalone game becomes substantially more complicated when other
players become involved. Undermining identity is a critical part of many of the
attacks discussed so far. There are many ways to weaken identity and some of
the problems can have a particularly serious impact on tournaments and ranking systems:
Invalid Licenses/IDs—Both paid and unpaid games often use a license key or
platform ID as part of their identification system. For performance, storage,
and business reasons, these keys are sometimes not issued and validated individually, but generated by an algorithm. Malicious players can steal keys,
duplicate them, or break or reverse-engineer the ID authentication algorithm
(see Chapter 5). As discussed previously, there are ways to ensure the security
of license keys and recover from compromises—but the techniques are game
service specific.
“Alt” IDs—Free online game services often permit, or do nothing to stop, the
creation of multiple identities. Players can use these additional identities to
increase their chances of winning or boost their rank with lossbots. Positive
incentives can be used to encourage honest registration of identity, such as prizes
or loyalty programs.
Outsourcing—Players sometimes recruit or hire other players who are good at
a game to play for them to boost their score. This is offered as a commercial
service, just like gold farming, for several massively multi-player online games
including World of Warcraft, but it has also been reported with players hiring
other players to boost their rank in ladder systems for casual games. There is
not much that can be done to prevent identity outsourcing (see the section in
Chapter 22 on power-leveling).
Game Save Sharing—Some games store the state of the game or other persistent information locally. This data is sometimes exchanged with other players
to boost statistics or otherwise enhance play. This has occurred on the Xbox
360 console to boost achievements in the Xbox Live service. If the game or console needs to support storing these files, the files should be cryptographically
tied to a specific platform or user account (see Chapter 14).
Strong identity can mitigate many online game security problems, including
those associated with tournament and ladder hacking. Interestingly, cash and prizes
are great tools to encourage better reporting of identity information by players and
can easily be tied to competition and ranking systems. At the same time, cash and
prizes substantially increase the value and likelihood of attempted attacks. There
are advantages and risks with either approach. Strengthening identity is generally
quite useful and there are often solutions to reduce the effectiveness of attacks on
games, as discussed in Part III.
Protecting Games: A Security Handbook for Game Developers and Publishers
There are numerous ways to protect tournaments and ranking systems against
attack. What follows is a list of tactics that may (or may not) be applicable to your
specific environment:
Buddy List or Guild High Scores—Rather than having a single, global high-score system, use buddy list high scores. In order to implement this tactic, the
online service will need to keep track of all the pair-wise scores and reporting
status in a localized buddy list or guild. Because scores are only published
within a local community, spurious accounts and results will not cause meaningful problems. Buddy list scores can serve as a recruiting tool. Likewise,
giving players the ability to track their relative status with their friends may help
build the online community and virally expand as players recruit their friends.
Buddy list scores can also be used for guild versus guild or team competition,
tournaments, and so on.
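A minimal sketch of the pair-wise tracking this requires; the class and method names are illustrative, not drawn from any real service:

```python
from collections import defaultdict

class BuddyScores:
    """Scores are ranked only within each player's own buddy list, so a
    spoofed stranger account never appears on anyone's board."""

    def __init__(self):
        self.buddies = defaultdict(set)
        self.best = {}                     # player -> best reported score

    def befriend(self, a, b):              # pair-wise, symmetric relationship
        self.buddies[a].add(b)
        self.buddies[b].add(a)

    def report(self, player, score):
        self.best[player] = max(score, self.best.get(player, 0))

    def board(self, player):
        # Only the player and her buddies are ranked; everyone else is invisible.
        visible = self.buddies[player] | {player}
        return sorted(visible, key=lambda p: self.best.get(p, 0), reverse=True)
```

Because an outsider's inflated score is simply never visible, spurious accounts cost the attacker effort without buying any audience.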
Paid Versus Unpaid High-Score Systems—If there is an economic or other
model tied to the game, you can publish only the high scores for those who are
actually helping your business by paying. This can even be tiered based on how
much money the person has spent.
Levels of Competition/Bridge Style Rankings—The card game Bridge has a
ranking structure based on multiple tiers. Thus, one moves from player to
master to grand master, and so on. For an online game with weak identity, this
structure can be used to radically drive up the cost and effort to spoof the high-score system (and is similar to the approach used in the tournament ranking
system in A Tale in the Desert, discussed previously). For example, in order
to move from player to master, the player must defeat 10 other players; in
order to move from master to grand master, the master must defeat 10 other
masters. (You can add more levels as desired; martial arts rankings have a “belt”
system that supports quite a number of levels.) For a legitimate player, the
minimum number of games to move from player to grand master would be 20.
But for a spoofer who was creating phony losing accounts, the number of
games would be 120 (10 games against fake players to move to master, 100 fake games to create 10 fake masters, and 10 more fake games to get to grand master). As always,
strategies can be combined and there could be a conventional leader board at
the top level (the grand master). Additional twists can include:
1. Remove games from a player’s record if the opponent becomes inactive for 30 days.
2. For server-based games, you can track the duration of games to see if they are unusually short in time (for time-based games) or in turns (for turn-based games). Once you have data on typical or expected game lengths, you can then throw out games that are too short.
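The promotion arithmetic above generalizes to any number of tiers. A small sketch; the recurrence is mine, derived from the chapter’s 20-versus-120 example:

```python
def min_games(levels, wins=10):
    """Games needed to climb a tiered ladder: (legitimate, spoofed).
    Assumes each promotion requires `wins` victories over current-tier
    opponents, and that a spoofer must fabricate every such opponent
    from scratch with throwaway accounts."""
    legit = levels * wins
    spoof = 0
    for _ in range(levels):
        # One promotion costs the spoofer `wins` games, plus `wins`
        # fabricated opponents who each cost the spoofer's previous climb.
        spoof = spoof * (wins + 1) + wins
    return legit, spoof

# The chapter's player -> master -> grand master case:
# min_games(2) == (20, 120)
# One extra tier makes spoofing roughly 44x the honest cost:
# min_games(3) == (30, 1330)
```

Each added tier multiplies the spoofer’s cost by roughly `wins + 1` while the honest player’s cost grows only linearly, which is exactly why the Bridge-style structure deters high-score spoofing.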
Cash and Prizes—As noted several times already, the possibility of winning
something tangible is a powerful incentive to disclose identity. The prize doesn’t
even have to be very large. The downside, also as noted previously, is that valuable rewards encourage hackers.
Natural Achievements—For games that have achievements, rather than hiding
achievements like Easter eggs around the game, have these achievements based
on natural game activities and spread them out smoothly throughout the game
to reflect thorough and masterful play. Some games have used “unnatural
achievements,” such as playing an excessive number of times, that may wind up
costing the game operator money if the player overuses the service.
Impersonal Global High Scores—Global high scores can be listed, but not
globally attributed to the player. Thus, only a player’s game friends would know
their score (if used in conjunction with buddy lists). This may somewhat weaken the urge to cheat, because the player needs to work harder for notoriety. The
downside is that this may reduce the value of the high-score service.
Randomized Matchmaking—There are real benefits to randomizing players
for ranked games. Player teams, such as clans or guilds, can add complexity to
this, as individual performance could add to the group’s ranking. Game operators need to be careful about lobby-stuffing tactics.
Face-to-Face Play—There is no reason that scores can only be earned
online. It can be a powerful marketing tool, and a deterrent to some attacks, to
invite local and regional high scorers for a face-to-face tournament where
additional rewards can be earned through live, face-to-face competition.
Server Scoring—Move the technical score accumulation process to the server
to certify the scoring process for the game. This should cause a lot of trouble for
a number of the exploits discussed so far, particularly game save sharing.
Time-Based Rank—It may be possible to thwart some players who use lossbots
or other methods to lower their ladder rank by including an additional attribute
for how many minutes of play or total game sessions the player has participated
in. Apparently, Microsoft and Bungie are incorporating this technique to help
fight leader board cheating in Halo 3 18.
Grouping Cheaters Together—Segregate rankings for cheaters and non-cheating
players into separate systems. Neither group really needs to see the other.
Blizzard incorporated a variant of this tactic into Diablo II on Battle.Net19.
Many games are not really suitable for tournament or skill-based play because of
their poor security. This is both an issue and opportunity because tournament and
skill-based play extend the life and may increase revenue for game developers
and publishers.
Game fans and secondary game businesses have tried to build tournaments and
skill-based games businesses around a number of games. Many of the games that
people have tried to convert to a “for-money” business model are vulnerable to
standard hacks, including proxies and state hacking. Whereas cheating is somewhat tolerated in a “for-free” or “for-fun” environment, when cash, prizes, or payments are involved, cheating becomes a central concern for the business.
The typical architecture for these online services would include a central game
server with multiple game clients. Legitimate operations require that each player
has a licensed version of the game.
In order to improve security with these legacy games, the game can integrate a
real “action log” for each participating player. For this approach to work, the game
needs to be able to be accurately replayed from a log file consisting of the sequence
of player actions, timestamps, and, optionally, one or more random seeds. The
random seeds would be provided by the game server or the collaborative random
techniques discussed in Chapter 13.
At the end of the game, the players submit their action logs to the server. This
can be done after every game, at a random time, or when there is a dispute between
players or an unusual result. The server then uses the logs from all of the players to
reconstruct the game and see if it matches the observed results and game play
behavior in its own action, state, and visual logs. Discrepancies can be used to identify problems and take suitable action.
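A sketch of the replay-and-compare idea, with a toy step function standing in for a real game’s rules engine; the only real requirement is that identical logs and seeds yield identical states:

```python
import hashlib
import json
import random

def replay(actions, seed):
    """Deterministically re-simulate a game from its action log.
    The toy step function below stands in for the real game's rules."""
    rng = random.Random(seed)          # server-issued seed fixes all randomness
    state = {"score": 0, "turns": 0}
    for action in actions:
        state["score"] += action["value"] + rng.randint(0, 5)
        state["turns"] += 1
    return state

def fingerprint(state):
    # Canonical serialization so client and server hash identical bytes.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

log = [{"value": 3}, {"value": 7}]
reported = fingerprint(replay(log, seed=42))   # submitted with the result
audited = fingerprint(replay(log, seed=42))    # server's reconstruction
assert audited == reported                     # log reproduces the result
assert fingerprint(replay(log + [{"value": 99}], seed=42)) != reported
```

Any tampering with the submitted log changes the reconstructed state, so its fingerprint no longer matches the reported result.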
This approach stops many, but not all, security problems for legacy games.
One problem the “action logging” approach does not stop is the use of automation
tools or player aids. These attacks will always be hard to detect because they don’t
alter the game; they just improve the player’s performance.
Game service providers are moving to provide richer player experiences to complement their games. These richer game play systems, such as tournaments, ladders,
and reputation systems, bind the players to the service and inspire the players to
continue playing. Tournaments can be great marketing tools. NCsoft has run a
number of tournaments for Guild Wars with substantial prizes 20.
Hackers do not have to attack individual games to undermine these value-added services. The hackers can attack the entire game service fabric. These attacks
can be costly to the game operators. Microsoft and Bungie shut down the leader
boards for Halo 2 21 and one can only speculate how many sales might have been
lost compared to the cost for improving the security of the ranking service.
Player reputations and rankings are sometimes sufficient reward themselves to
inspire hackers to exploit these competition services. Fortunately, malicious players are more interested in the fame of breaking a game’s security (and publicizing
the fact) than running up a high score. The threats to a game service change
fundamentally when a game service provider begins to support real rewards and
prizes—hackers and cheaters will stop sharing and publicizing their exploits. This
results in a substantially greater burden on the game service provider’s built-in
security and security team.
1. C. Allen, S. Appelcline (2006), “Collective Choice: Competitive Ranking Systems.”
2. GoldenToken (2008), “Types of Tournaments.”
3. Full Tilt Poker (2008), “Rebuy Tournaments.”
4. (2008), “Types of Tournaments.”
5. C. Allen, S. Appelcline (2006), “Collective Choice: Rating Systems.”
6. Wikipedia (2008), “National Invitation Tournament.”
7. A. Tepper, J. Yelon (2002), “eGenesis Ranking System.”
8. ATITD (2004), “Tournament Ranking System.”
9. L. Frederick (2007), “NBA Star Caught Cheating at Halo 3.”
10. J. Savage (2007), “Game Stopped After Cheat Allegations.”
11. J. Leyden (2004), “Botnet Used to Boost Online Gaming Scores.”
12. Blizzard (2005), “Accounts Closed for Ladder Abuse.”
13. N. Maragos (2005), “Battlefield 2 Security Holes Exploited, Login Problems Reported.”
14. J. Joyner (2007), “New England Patriots Cheating Scandal.”
15. M. Lafferty (2006), “Stat Babies Cropping Up in Online Games.”
16. J. Blancato (2007), “Jumpgate: EVE’s Devs and the Friends They Keep.”
17. R. Furchgott (2008), “With Software, Till Tampering Is Hard to Find.”
18. Michiel Meijs (2006), “For All You Halo Fans Out There #3.”
19. A. Park (2002), “Battle.Net Cracks Down on Cheating...Again.”
20. A. Rose (2006), “Guild Wars Goings-On.”
21. Frankie (2005), “Bungie Weekly Update: October 14th, 2005.”
Griefing and Spam
“Hell is other people.” 1
If being trapped in a room with three other people for an eternity is Hell, one has
to ask what is the name for the place where you are trapped with millions of
people who can do what they want and say what they want with impunity for
even a day?
The Internet, of course.
The two are easy to confuse. And, just like No Exit’s characters Garcin, Inès,
and Estelle, we all have the opportunity to escape and turn off the computer, yet
none of us do.
There are a wide range of ways that we inflict grief on each other:
Through game play. Although “emergent game play” is often seen as a good
thing, there is a dark side where players take advantage of game systems to
disrupt the experience for others. Interestingly, a number of these game play
exploits grew out of a desire to make game play “safer.”
Communications, the tools that enable community, are routinely turned into
weapons wielded to harass, abuse, and annoy others. Players also turn the very
management tools for reporting griefers into griefing tools themselves. They
can even take advantage of the simplest game services, like high scores, to
disrupt the game and attack the game developer.
User-created content is seen by many game developers as something of a “Holy
Grail.” Such content enables developers to provide a sandbox in which players
can create and play. Unfortunately, players sometime use these tools to abuse
and harass others and have gone so far as to implement denial of service attacks
against the game itself.
There are consequences for these actions. Some in-game harassment rises to
the level of legal harassment; off-color remarks and content can be considered
obscenity; and certain activities, such as “age play” (where adults use avatars of
children in sexual situations), can create serious legal and business problems for a
game service provider.
Insults and harassment are virtually routine for many online games. The anonymity
afforded to online game players has given rise to widespread and increasingly
aggressive harassment. The hardcore gamers who are otherwise prized by game
companies are often the worst offenders and taunt the newest players (derisively
called “n00bs”). This, of course, is the riskiest time for a game operator—new players
are liable to abandon a game that they find hostile. There is often a perception by
players of a “right to anonymity.” There is no legal basis for this and, in fact, the
Privacy Act in the US did not come into being until 1974 and mainly addresses an
individual’s privacy in relation to the government. Sexual and racial harassment
also, regrettably, occur too often. Additionally, the rise of gold farming in MMOs
has led to a corresponding increase of in-game spam marketing that touts these
services (see Chapter 22). As games become more and more mainstream, it will not
be surprising to see the same kind of spam we all suffer from in our email.
The Facebook social network is increasingly being targeted for spam, virus and
worm delivery, and phishing attacks2. A recent survey showed that spam was effective—nearly 30 percent of individuals surveyed had purchased a product based on
a spam message3. Response rates are surprisingly high, considering that the bulk of
spam is blocked by filters. There are 10 sales per million spam messages sent; a good
return on investment since botnet spam only costs $5 to $10 per million messages.
Even worse for game operators, some of this spam can be used for phishing for
account information or otherwise attacking the game service directly.
To fight these problems, as well as to address other issues, game companies provide
customer support and community and forum features. Apparently as many as 25
percent of customer support calls are due to griefers4. This direct avenue for filing
complaints is also a direct cost to the game company. In 2002, Alan Crosby of Sony
estimated that his 60-person customer service staff each spent one hour out of an
eight-hour shift handling griefing for the EverQuest MMO5.
The most common “griefer counter-measure” is to put in place a strong set of
community services. Depending on the game, these community services provide
clan features (tools to form and support player groups or teams), friends lists, reputation systems, and other services both to tie players more closely to the game and to
create an environment that reduces anonymity for misbehaving players. One of the
best features of a strong community service is that it can provide substantial security
benefits at modest marginal cost. After all, the game developer is putting the community system into place primarily for business growth and marketing purposes.
There are two main limitations of community systems from a security perspective. First, malicious players can often create new accounts (especially for game
services that are free), thus removing the effective social stigma of griefing. Several
games attempt to fight free-account griefers and spammers by requiring players to
have reached a certain level (implying a fair number of hours of game play) or pay
to be able to broadcast messages in the game.
Second, malicious players can use the anti-griefing system to cause further grief
by wrongly accusing other players of griefing. This is an excellent and unfortunate
example of griefers using the game system against itself and other players.
Player accountability is the key to controlling griefing. Some online game
services, such as Battle.Net, Xbox Live, and Valve’s Steam, have the capability to tie
a product license key or other unique tag to a player and can use that identity to
punish or ban abusive players.
The other major form of customer support is found in persistent world games.
In-game game masters provide live monitoring of the game play environment. This
gives the game provider the ability to respond in near real time to griefing incidents.
This solution is quite powerful, but it does come at a cost. Consider the Sony
EverQuest example. Crosby estimated that one eighth of each of his 60 employees’
time was spent handling griefing. So, basically, he had the equivalent of 7.5
employees devoted to griefing full time. If we assume a modest salary of $40,000 per
year (this is really cheap, as it wouldn’t include additional fees for shift work, health
and other benefits, or management overhead), Sony was spending approximately
$300,000 per year, at least, and probably closer to $500,000 on the griefing problem
(2002 was in EverQuest’s heyday, so the game probably had approximately 425,000
subscribers at that time6). Although this may not be a huge portion of the game’s
revenues, it comes directly from profits and, even worse, there are likely additional
costs that are hard to measure directly in terms of lost subscribers or non-renewals
by existing subscribers.
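The EverQuest estimate above takes only a few lines to reproduce. The staffing figures come from the text; the 1.6 loaded-cost multiplier is my assumption, chosen only to show how shift premiums, benefits, and overhead push the total toward the $500,000 figure:

```python
# Back-of-envelope grief-management cost for the EverQuest example.
staff = 60
shift_fraction = 1 / 8                # one hour of an eight-hour shift
fte = staff * shift_fraction          # 7.5 full-time equivalents on griefing
base_cost = fte * 40_000              # $300,000/year at a $40,000 salary
loaded_cost = base_cost * 1.6         # assumed overhead multiplier -> ~$480,000
```

A game operator can plug in its own staffing and salary numbers to decide whether a security solution is cheaper than the support burden it would remove.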
The cost of managing griefing can grow rapidly for a game service provider and
can lead to lost subscribers, redirecting staff from other assignments, reducing
player satisfaction, or increasing the total staff costs for the game by simply adding
staff to handle the complaints. For a small game, the cost of managing griefers can
be the difference between success and failure. For a large game, these costs are a
continual drag on the bottom line.
Given these numbers, a game company can make a rational decision as to
whether a new security solution is needed. If griefing is at all a problem in a game,
it is probably costing hundreds of thousands of dollars, minimum. The question is:
are there solutions and what do they cost?
A security solution is not likely to be able to stop griefers, but should detect, and
hopefully deter, them. A common, but usually unsuccessful, approach to stop
griefing is to use “dirty word” lists. “Dirty word” systems basically maintain an ever-growing list of “banned” words and phrases that will cause a message to be blocked
or the offending words or phrases removed. A very different approach is to (mostly)
eliminate open communications as Disney has done with Toontown Online. Disney’s
children’s MMO has a communications architecture that eliminates “chat” except
among trusted friends7, and this approach has been followed by other children’s
games. These games use totally structured communications for most players with
all text and even sentence structures provided by the game company. If friends
have shared a code outside of the game environment, they can communicate
through a monitored chat service (see Chapter 30). Monitoring communications
can be expensive. Another children’s game, Club Penguin, has 100 employees monitoring player communications in real time and adds 500 to 1,000 words per day to
its “dirty word” filters for its 12 million total users and 700,000 subscribers8.
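A minimal sketch of why such word lists are fragile: a naive filter misses even trivial substitutions, and a normalizing filter only raises the bar. The banned word and the leetspeak table here are illustrative:

```python
import re

BANNED = {"noob"}

# Fold common character substitutions back to letters before matching.
LEET = str.maketrans("013457@$", "oleastas")

def naive_blocked(msg):
    # Plain substring matching against the banned list.
    return any(w in msg.lower() for w in BANNED)

def normalized_blocked(msg):
    # Fold substitutions and strip separators, then match.
    folded = msg.lower().translate(LEET)
    folded = re.sub(r"[^a-z]", "", folded)
    return any(w in folded for w in BANNED)

naive_blocked("you noob")         # caught
naive_blocked("you n00b")         # missed -- a trivial misspelling wins
normalized_blocked("you n.0-0b")  # caught -- but attackers adapt faster
```

Every normalization rule added invites a new evasion, which is why the chapter treats these lists as a losing arms race rather than a solution.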
For most games, a general-purpose communication capability—either via text
chat or voice or increasingly video—is integral to the game experience. Real “dirty
word” lists are very vulnerable to “misspelling” attacks that will thwart the security
system while effectively conveying the harassing message. License keys are fairly effective at using platform identity to deter griefing (by the threat of banning), but
these keys do not have a strong binding mechanism to an individual message or
person. Similarly, credit card controlled accounts for massively multi-player games
can strongly identify an individual player during a session, but they also cannot be
bound to a potentially offending message.
One tool that binds the actual communications to an individual is a digital
signature. Digital signatures can support both client-server and peer-to-peer
communications. This is especially important as games grow larger and the cost of
simply relaying messages grows rapidly for voice and video communications to the
point where a central service cannot log and monitor all communications.
Although there are numerous references that can explain how digital signatures
work, the important feature they support is non-repudiation. Non-repudiation is the property that only one individual could have produced a message’s signature, so the sender cannot later deny having sent it. This
works by taking advantage of the unique characteristic of public key cryptography,
namely, that knowing the public key (P) “half” of a public-private key pair will not
allow the reconstruction of the private (secret) key (S). This feature allows an individual to broadcast her public key to everyone, and anyone will be able to decrypt her messages, but only the individual who knows the private key can encrypt them.
The player can then use her private key to “sign” a message by encrypting a
hash of the message (or the message itself). Then, anyone can use the public key to
validate the message’s signature.
The signed message is formatted as follows:
S(message) or message,S(hash(message));
// where only one person knows how to compute S(x)
The verification process works as follows:
P(S(message)) = message or P(S(hash(message))) = hash(message);
// where everyone knows how to compute P(x) and hash(x)
/* if the hash of the message received does not match the hash
included in the signature, then the message is not verified */
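The pseudocode above can be made concrete with a toy RSA keypair. The tiny primes are for illustration only; a real game would use an established cryptographic library rather than anything hand-rolled:

```python
import hashlib

# Toy RSA keypair with tiny primes -- never use key sizes like this for real.
p, q = 61, 53
n = p * q                 # modulus (3233)
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent P
d = pow(e, -1, phi)       # private exponent S (requires Python 3.8+)

def sign(message: bytes) -> int:
    # Hash, reduce into the key's range, then apply the private key S.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding the public key P can recompute the hash and compare.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"gg ez")            # only the private-key holder can produce this
assert verify(b"gg ez", sig)    # anyone can check it
assert not verify(b"gg wp", sig)  # any tampering breaks verification
```

Because verification needs only the public key, any player (or the game operator) can later confirm exactly who sent a harassing message.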
Now, if you build the communication system (voice, video, or text) and add
a digital signature service, players will not be able to deny that they created the
messages that they have sent nor will other players be able to misrepresent messages
from the actual sending player. At least, this is so as long as there is reasonably good
identity information for the players (see Chapter 29). If a player can store all of the
messages that she receives and forward them to the game operator when she wants
to file a complaint, the operator will have an undeniable record of the conversation
that cannot be manipulated. This has several benefits.
First, the potential griefers will be deterred, knowing that their actions are
neither deniable nor spoofable. This is probably the most important characteristic of the system. Deterring griefing (like crime) has a much better return on
investment than hunting down and catching the troublemakers. This technique
actually can be extended to regular in-game actions to deter spawn camping and
other griefing problems by utilizing logs stored by players for evidence of abuse.
Second, the game operator can reduce the live team and customer support
staffing for grief-management. Because there are reliable logs of alleged griefing,
real-time response by the customer support team is less critical. Players can post
messages for reliable adjudication and both players’ versions of a conversation can
be used as reliable evidence—which leads to the third benefit of this approach—
fewer customer disputes and complaints.
Digital signatures do nothing to stop spam. Unfortunately, spamming players
have usually found a way to acquire an account that can broadcast messages and
they are willing to risk being banned from the game after sending even a single message. The simplest solution is to remove broadcast communications services from
the game, but these services can be very useful for many reasons such as helping
players form ad hoc groups to ask for help.
Another approach is to phase the distribution of broadcast messages. Most of
the time, this type of spam is sent via text, not voice or video. Broadcast voice and
video are just too disruptive, so players tend to only use them with much smaller
groups and can simply “mute” or blank out anyone who annoys them.
Phased distribution works by dividing the active players into a (binary) tree
structure at random. When a player sends a broadcast message, it is first sent to the
players in the same low-level branch, and then sent to the players in the same
higher-level branch, until eventually, it is sent to everyone (see Figure 21.1).
FIGURE 21.1 Anti-spam phased message propagation system
The pacing of this distribution system should be set up to give players a chance
to mark the message as spam before it is sent to the next tier in the system. This
technique will disturb the fewest players from any given spammer.
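One possible shape for this propagation; the fanout, the `is_spam_vote` callback, and the halting rule are all illustrative assumptions standing in for a real spam-report interface:

```python
import random

def phased_broadcast(players, sender, is_spam_vote, fanout=2):
    """Deliver a broadcast in widening waves over a random tree.
    A real system would pause between tiers to collect spam reports;
    here `is_spam_vote` is simply polled after each tier."""
    order = [p for p in players if p != sender]
    random.shuffle(order)                  # random placement thwarts targeting
    reached = []
    i, tier_size = 0, fanout
    while i < len(order):
        tier = order[i:i + tier_size]
        reached.extend(tier)
        if any(is_spam_vote(p) for p in tier):
            return reached, True           # halted: fewest players disturbed
        i += tier_size
        tier_size *= fanout                # next wave covers a higher branch
    return reached, False                  # delivered to everyone

# A spammer flagged by every recipient never gets past the first tier:
reached, stopped = phased_broadcast(list(range(10)), sender=0,
                                    is_spam_vote=lambda p: True)
# len(reached) == 2 and stopped is True
```

Legitimate broadcasts pay only a small delivery delay, while a flagged spammer annoys just the first small tier instead of the whole population.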
As with many other challenges for gaming, weak identity aggravates griefing
and spamming security problems. Strengthening identity is always helpful. For
games that want or need to support fairly anonymous players, the key is to minimize the impact of spam and grief-related activities either through structured communications or other measures that reduce impact on legitimate players and effort
for customer support personnel.
Players often choose inappropriate or obscene names for their online personae or
characters. Often, this is simply acting juvenile, but sometimes these names are
chosen with malicious intent. In the previously cited advergame security survey 9, 10,
some players used the games’ high-score lists to criticize the sponsoring company.
Such incidents can be particularly troubling when they occur in a marketing campaign
with high visibility.
Player reputation systems and alert processes to help flag inappropriate user
names are fairly effective. However, it may be advisable to take the further step of
manually verifying new high scores before they are available publicly. In order to minimize player impact, the game operator can structure the game to show the player
her proposed username, but not reveal the name publicly until it has been verified.
Just as players use weaknesses in a game’s rules implementation to gain a competitive advantage, a number of players use these exploits simply to abuse other players. One of the most common such tactics is “spawn killing,” whereby players kill
another’s character just as the character enters the game before the player can take
any action or protect herself11. Often, the countermeasure is to make players briefly invulnerable or invisible as they enter the game, or to vary where they appear.
These tactics have to be used with caution, as they can, in turn, be used to grief
other players. Another game play griefing tactic is when players sometimes kill the
characters belonging to their allies, the gaming equivalent of “friendly fire,” which
is called “team killing”12. Typically, there is very little that can be done about this
problem without seriously distorting a game’s design.
After murder, theft follows naturally in the hierarchy of sins. “Kill stealing” is
the practice of taking items or experience that another player should rightly have
earned13. Depending on the game’s design, players can sometimes join a battle very
late but still share equally in its rewards. The extreme version of this tactic is “ninja
looting,” whereby a player seizes unearned items or items that her group had agreed
to share in a different fashion, and finally, there is “scavenging,” whereby a player
accumulates items or resources left behind by other players14.
Some games have addressed these issues by making the entire combat process
more structured. Once players have joined a battle, no one else can participate in
the fight or earn a share of its rewards. Often, the problems come from a mix of
formal and informal game play systems, as is often found in PvE (player versus
environment) MMOs. These games often have game rules in place to prevent conflict
between players (called PvP); however, the developers want the game to be “real,”
and this is where the trouble occurs, with inconsistencies in the various game systems.
Many online games have monsters that “drop” items when defeated. Players then
“pick up” the items either individually or allocate them based upon a predetermined
arrangement. The “ninja looting” problem described previously arises from this design. A simple approach to fighting this sort of abuse is to have the players who participated in the combat or other action that resulted in the loot drop handle the
allocation of items abstractly. Nothing is “dropped” physically to be picked up by the
fastest or most devious player. Instead, items appear in a transactional window with
each item to be allocated among the participants in the combat. Allocations are proposed by the players; however, nothing is released to anyone until all of the participants “vote” on the result. The default system would be a unanimous vote with
nothing allocated until everyone agrees. Players can still cause grief by refusing their
vote, but they don’t get any benefits either. The game can retain the transaction for a
week or other interval and then the items disappear. For an organized party of players or guild, loot can build up and be allocated by a number of different systems. This
is an opportunity for a game developer to provide flexible player interactions and an
effective “contract enforcement” tool.
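A sketch of such a transactional window with the unanimous-vote default; the class and rules are illustrative, not from any particular game:

```python
class LootTransaction:
    """Escrowed loot allocation: nothing is 'dropped' to be grabbed.
    Items are released only when every combat participant approves one
    proposed split."""

    def __init__(self, items, participants):
        self.items = set(items)
        self.participants = set(participants)
        self.proposal = None               # mapping: item -> participant
        self.approvals = set()

    def propose(self, player, allocation):
        assert player in self.participants
        assert set(allocation) == self.items
        assert set(allocation.values()) <= self.participants
        self.proposal = allocation
        self.approvals = {player}          # a new proposal resets the vote

    def approve(self, player):
        assert player in self.participants
        self.approvals.add(player)

    def settle(self):
        # Unanimity required; until then, items stay in escrow (and would
        # expire after a week or other interval, per the scheme above).
        if self.proposal and self.approvals == self.participants:
            return self.proposal
        return None
```

A holdout can still stall the vote, but gains nothing by doing so, which is exactly the incentive structure the design aims for.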
If the game is very formal, the rules will tend to prevent problems, and if
the game is very “real,” abuse has consequences. This latter approach is seen in the
science fiction MMO, EVE Online. EVE Online is a largely player versus player
game, although certain parts of the universe are more secure against inter-player
aggression. When players can attack players, these forms of griefing tend to take
care of themselves. Players will simply get revenge on the other players for theft.
Interestingly, EVE Online has taken a different approach to theft griefing in the parts
of the game where players are not supposed to fight each other (high security space)
with a space-based police force called CONCORD. The CONCORD system will
essentially destroy anyone breaking the game’s anti-griefing rules in “high security
space.” CONCORD is not immediate, and so some players can carry out their
crimes and not get destroyed. Also, players have sometimes used throw-away ships
to carry out suicide attacks and used other ships to steal loot. This technique, called
“suicide ganking,” is being addressed through changes to the game rules15.
Basically, game developers need to be quite careful and test their designs to
identify both potential cheating and griefing exploits.
The ideal game is one where developers can just sit back and let the players create
the game using some developer-delivered building blocks. Social networks do this
by providing a communications framework with numerous features to encourage
user action and interaction. Dating sites are the most obvious example. Players add
information to their own profiles, rate, and communicate with other members,
take tests, fill out surveys, send virtual gifts, and so on. For game services, some
games go beyond simply providing high scores, multi-player lobbies, and the standard
apparatus of social networks to allowing players to create, and in some cases, buy
and sell, virtual items. Second Life and IMVU are probably two of the best known
online services that use this model. Metaplace goes a step further by allowing users
to create their own games. Some MMOs do support “crafting,” but this is not
really user-created content, simply a portion of the game’s economy.
Once a game allows users to create items or change the game’s rules, there are
massive opportunities for abuse. “Time to Penis” is a tongue-in-cheek metric for
the time from when some level of user creativity is permitted until someone creates
a penis. Electronic Arts (EA) released a Creature Creator for its much anticipated,
family-friendly title Spore. The immediate, inevitable response was widespread
creation of obscene creatures, called Spore porn or Sporn.16 EA is managing user
content by allowing players to report inappropriate material as well as by allowing
users to restrict the material that they receive from other players to creatures from
no one, only their friends, or the general public.
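The delivery controls described above reduce to a per-recipient visibility check. A minimal sketch of such a filter follows; the `Visibility` settings mirror the no one/friends/public choices in the text, but all names and data shapes here are illustrative assumptions rather than EA’s implementation:

```python
from enum import Enum

class Visibility(Enum):
    """Whose creations a player is willing to receive (illustrative)."""
    NO_ONE = 0
    FRIENDS_ONLY = 1
    EVERYONE = 2

def should_deliver(author, recipient, friends, setting):
    """Apply the recipient's content filter before delivering a creation.

    friends maps a player name to the set of that player's friends.
    """
    if author == recipient:
        return True  # players always see their own creations
    if setting is Visibility.NO_ONE:
        return False
    if setting is Visibility.FRIENDS_ONLY:
        return author in friends.get(recipient, set())
    return True  # Visibility.EVERYONE
```

A filter like this only limits delivery; it does not remove offending content, which is why it is paired with a reporting mechanism.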
Protecting Games: A Security Handbook for Game Developers and Publishers
This in-game graffiti is only one concern. Players can simply use their virtual
presence to disrupt the experience of others, as IBM found when Italian union
workers moved to protest a reduction in their “productive results benefit.” Instead
of a traditional strike, they protested in IBM’s online space in Second Life with
2,000 sympathetic avatars resulting in substantial, global media coverage (and,
apparently, victory for the workers).17
Also in Second Life, a CNET interview with Anshe Chung was disrupted for 15
minutes by “animated flying penises.”18 The griefing incident was followed by a
copyright brouhaha when Anshe Chung’s company sent DMCA takedown notices
to YouTube and media sites for posting copies of the video of the griefer disruption
of the interview.19 Second Life faces unique challenges because of the pervasive ability of players (called “residents” by Second Life’s creator Linden Lab) to modify
themselves and their environment. Second Life has been moving to add more
controls to its environment in the wake of these attacks. Because Second Life is
structured around the notion of controlled virtual real estate, security controls
really need to be matched to the game’s ownership model. Public spaces in the
game have had a number of problems over the years with denial of service attacks
that used the ability to create and animate items to overwhelm the virtual world’s
servers. Neither IMVU nor Metaplace is likely to have the same sort of concerns,
in that neither has the same notion of a shared, public space that is also highly
programmable. However, it does seem that IMVU has had some problems with
players griefing the flagging system for inappropriate virtual items.20
Lying somewhere between spam and user-created content was a stunt pulled by
a gold farming company in World of Warcraft (WOW). In WOW, when a player’s
character dies, its corpse stays where the death occurred until the player starts over
(respawns). The gold farming firm arranged to have a number of gnome characters
die in a large pattern on the ground to spell out the company’s website address.21
Rating and reporting systems are probably the best tools for managing abuse of
user-created content systems in online games—especially when used in conjunction with an effective identity system (see Chapter 29). The power and potential for
creativity of a service like Second Life makes it particularly exciting for those who are
enamored with virtual worlds. However, balancing player creativity with griefer
crudity and abuse is a huge challenge.
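Reporting systems work best with an identity system partly because report weight can be tied to reporter reputation, which blunts the flag-griefing problem mentioned above. A sketch of one such weighted-flagging rule follows; the threshold and weights are arbitrary assumptions, not any particular service’s values:

```python
DEFAULT_WEIGHT = 0.1   # weight of a report from an unknown/new account
HIDE_THRESHOLD = 5.0   # weighted score at which content is pulled for review

def weighted_report_score(reporter_ids, reputation):
    """Sum each reporter's reputation weight. Throwaway accounts count
    for little, so mass-flagging legitimate content becomes expensive."""
    return sum(reputation.get(r, DEFAULT_WEIGHT) for r in reporter_ids)

def should_hide_pending_review(reporter_ids, reputation):
    """Hide content for human review once the weighted score crosses
    the threshold, rather than on raw report count."""
    return weighted_report_score(reporter_ids, reputation) >= HIDE_THRESHOLD
```

Under this rule, three reports from established accounts can outweigh ten from freshly created ones, which is the point: the griefer’s cheapest resource (new accounts) buys the least influence.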
with J. Price
Certain forms of griefing can create real legal and business risks for a game operator.
Player-to-player communications and user-created content can be considered
obscene or harassment and trigger legal actions that affect both the individuals
involved and the online service. Players can create virtual items that infringe on
trademarks or alter a game to damage its reputation.
National, state, and local laws and ordinances classify most sexually oriented
material as either obscene or indecent. It’s a serious issue. Criminal laws apply and
there is a possibility of going to jail. But figuring out what is obscene and what is
indecent is often complicated. The difficulty of defining obscenity was memorably
summarized by U.S. Supreme Court Justice Potter Stewart in a concurring opinion
when he said “I know it when I see it.” There are no objective standards. Even
worse, because of the number of jurisdictions involved, a game operator may be in
jeopardy without knowing it. The game operator can be sued where the game’s
audience is located, not just where the company or servers are housed.
In the US, the answer depends upon how the content is being viewed. Different
obscenity standards apply to broadcast television, subscription television
(cable/satellite), and the Internet. Offline, anyone sending sexually oriented material might avoid liability by controlling the point of delivery of their material to
avoid areas with strict obscenity laws. Almost by definition, online services are
accessible in the most conservative as well as the most liberal communities.
Potentially, the most conservative contemporary community standard applies to
any online service on the Internet when considering obscenity and indecency
standards. To be completely safe from prosecution for violating obscenity laws, an
online service must (essentially) meet the standard of the most conservative
community where the service is available.
Even if the game operator’s provided material raises no concerns, user
communications and user-created content can be an issue. Most troubling are the
portions of the criminal code that apply liability to the game operator for any
“knowing” use of the “interactive computer service” to transport and transfer
obscene material. If a game operator receives complaints about inappropriate content, the operator must take prompt action. The game operator faces the risk of
being charged if there is evidence that the operator received complaints regarding
criminal content, such as obscenity, and ignored it. The U.S. Supreme Court has
held that a prosecutor need not show that the defendant knew that the material was
obscene, but “that he knew the character and nature of the materials.”
The best practice is to maintain control of the online environment and users:
Have online user policies that give you wide discretion to examine what is happening within your game and that enable you to work with law enforcement at
your discretion. Take down content for whatever reason you feel is appropriate.
Don’t put your head in the sand. If you hear of something fishy, go after it and
take down any content that can’t be viewed on prime-time television.
If adult or other potentially risky (or risqué) content is important to your service, you’ll need legal advice tailored to your specific situation.
If you offer a service online that permits user communication, odds are likely that
you will have a harassment case arise at some point. Law enforcement agencies have
estimated that electronic communications are a factor in 20 to 40 percent of all
stalking cases.22 Forty-five states now have laws that explicitly include
electronic forms of communication in stalking and harassment laws. State laws
that do not include specific references to electronic communication may still apply
to those who threaten or harass others online, but specific language can make the
laws easier to enforce. A number of federal laws also are relevant.
Authorities will work actively to pursue harassment cases. In a widely reported
incident, a woman was accused of bullying a neighborhood teenager via MySpace,
which may have contributed to the 13-year-old girl’s suicide.23 Federal prosecutors
charged the woman with one count of conspiracy and three violations of the anti-hacking Computer Fraud and Abuse Act. They accused her of violating MySpace’s
terms of service by providing false information to open a fake MySpace account
with her daughter and another teen in September 2006. In doing so, authorities say,
she obtained unauthorized access to MySpace’s servers.
Trademarks only stay in force when the trademark owners actively protect them.
Copyright protects the authors of original creative works against unauthorized use.
As such, trademarks and copyright-protected material that find their way into
games via player actions can create real issues for game operators.
Trademark infringement is a violation of the exclusive rights attached to a
trademark. It occurs when one party, the “infringer,” uses a trademark that is identical or confusingly similar to a trademark owned by another party, in relation to
products or services that are identical or similar to the products or services that the
protected trademark covers. Trademark infringement is not covered under the
Digital Millennium Copyright Act (DMCA) and is not subject to DMCA takedown
notices. Nevertheless, the DMCA was used inappropriately to remove 3D models
of cars and airplanes from the Turbo Squid 3D art-sharing site. This practice
continued until a lawsuit, supported by the Electronic Frontier Foundation, was
filed (and won) over the improper removal of a B-24 model.24
Alteration of game content is another matter.
Tecmo fought and won an out-of-court settlement against a website that provided a patch to its games Dead or Alive 3 and Dead or Alive Xtreme Beach Volleyball
that removed the clothing from the games’ scantily clad models. The company
charged the website, Ninjahacker, with unauthorized modification of game assets
and circumventing the game’s copy protection system. Because the case was settled,
the legal status of player modification of the material from a purchased game is still
unresolved.25
Turbo Squid’s final comment on the matter probably serves as a good summary
of the challenges that griefing and user-created content present for game operators:
Turbo Squid solely provides infrastructure for vendors around the globe to
post their models and creations for sale. By accepting our End User License
Agreement when vendors sign up, they warrant that they have all rights to the
models and other digital assets they sell. Turbo Squid is a Digital Millennium
Copyright Act (DMCA)-compliant operation, and when we receive valid takedown notices from (copyright) or (trademark) owners we act accordingly, and
remove any infringing models brought to our attention. That said, because of
how the DMCA Safe Harbor provision works, and because of the realities
of any large open marketplace, we do not pro-actively police (trademark)
infringement to any degree that Turbo Squid might guarantee that any
particular model is not infringing. We can only work in a responsive mode.
The burden to not infringe falls to Turbo Squid’s vendors, the individuals who
create these assets and place them for sale on our site.
No matter what game operators do, it is likely that they will need to deal with
harassment, abuse, obscenity, and exploits. They just need to be prepared.
Although “Hell is other people,” there wouldn’t be much in the way of business or
games without them, either.
1. J.P. Sartre (1944), “No Exit”
2. P. Kafka (2008), “Facebook Spam Getting Worse?”
3. J. Milne (2008), “Sex and Software Diet Fuels Spam Growth”
4. D. Becker (2004), “Inflicting Pain on Griefers”
5. A. Pham (2002), “Online Bullies Give Grief to Gamers”
6. B. Woodcock (2008), “MMOG Active Subscriptions 70,000 to 700,000”
7. M. Goslin (2004), “Postmortem: Disney Online’s Toontown”
8. P. Elliott (2008), “MMO Week: Club Penguin”
9. Deloitte Touche Tohmatsu (2008), “Advergames op Grote Schaal Gehackt”
10. S. Davis (2008), “Serious Advergame Hacking Problems: Deloitte Touche Tohmatsu Netherlands Survey Findings”
11. Wikipedia (2008), “Camping (Computer Games)”
12. Wikipedia (2008), “Team Killing”
13. Wikipedia (2008), “Kill Stealing”
14. Wikipedia (2008), “Looting (Gaming)”
15. J. Egan (2008), “Era of Suicide Ganking in EVE Online Coming to a Close”
16. M. Simon (2008), “Video Game’s User Content Spawns Naughty Web ‘Sporn’”
17. UNI (2007), “Breakthrough at IBM Italy”
18. D. Terdiman (2006), “Newsmaker: Virtual Magnate Shares Secrets of Success”
19. A. Reuters (2007), “Anshe Chung Studios Cracks Down on Griefing Photos”
20. Virtual World News (2008), “IMVU to Exit Beta This Summer” (comments)
21. A. Sliwinski (2007), “Gnome Corpse Advertisement in WoW by Gold Farmers”
22. National Conference of State Legislatures (2008), “State Computer Harassment or ‘Cyberstalking’ Laws”
23. Associated Press (2008), “Report: Grand Jury Probes MySpace Suicide”
24. J. MacNeill (2008), “First They Came for the Fords, and I Did Nothing”
25. D. Jenkins (2005), “Tecmo Settles Nude Patch Lawsuit”
Game Commerce:
Virtual Items, Real Money
Transactions, Gold Farming,
Escorting, and Power-Leveling
Money makes the (real and virtual) world go around. In some sense, money
itself is the oldest, most widely used virtual item. Because money is so
universally understood, virtual currencies are widely used as incentives in
online games. Whatever one’s views are about “consumer culture,” we all seem to
have a Pavlovian response to accumulating more things.
Game developers know this and draw on it to reward players. However, once
you create a system where more is better, people respond creatively. Where there is
a gap between those with more time than money and those with more money than
time, someone will come along to close that gap.
Welcome to game commerce. It is worth noting that game commerce is not
applicable to all games—after all, there are games of pure mental or physical
accomplishment like chess or baseball. Although people do cheat in both, there is
no way that most of us will ever be a Chess Grandmaster or in the Baseball Hall of
Fame. Anyone can be rich, however, especially in a game.
Game commerce encompasses legitimate transactions where players buy, sell,
trade, and exchange items, skills, or characters, as well as unauthorized transactions.
The most visible, and notorious, form of game commerce is gold farming, where
players purchase currency or items for real cash and don’t earn them by playing the
game. The other major categories of unauthorized game commerce include power-leveling, which involves hiring other people to play on one’s behalf, and escorting, in
which players hire other people to play along with them as partners.
The problem for game developers is that, from a strict game-play perspective,
these activities are perfectly legitimate. In fact, many games whose developers loathe game commerce are built expressly to support the very activities that make it possible. Game
developers want players to be able to give items to each other and they strive to
make it easy for players to group together with others and play the game socially.
There are many reasons people play games. Dr. Richard Bartle proposed four basic
categories of game players (as modified to suit my purposes)1:
Achievers—Players who seek to maximize their score, items, or status in the game.
Explorers—Players who want to experience and understand the game world
and its design.
Socializers—Players who use the game as a mechanism to form and expand
their social circle.
Competitors—Players who want to compete with and excel over other players. Dr. Bartle seems to take a more negative view of this category than I do,
in that he includes abuse of other players as an implicit part of it.
To his four categories, I would add a set of mirror categories, the dark side of each:
Earners—Players who seek to earn the most wealth in the game for real-world
reasons. These are the gold farmers and power-levelers.
Exploiters—Players who carefully study, explore, and analyze the game world
and its mechanics to identify weaknesses that give them a substantial advantage, usually due to flaws in the game design and implementation.
Harassers—Players who use the social mechanisms of the game to make the
experience as miserable for others as possible.
Dominators—Players who use the game’s mechanics to make other players
miserable. These players are not really interested in doing better than other
players, but in making other players know that they have been beaten.
Figure 22.1 illustrates these player types.
Exploiters and Earners are often closely tied to game commerce. Exploiters help
optimize the earning potential of the game. Harassers and Dominators cover two of
the main categories of griefing.
Chapter 22 Game Commerce
FIGURE 22.1 Game player categories
As a game-design note, the existence of the four Bartle categories of players
(see Figure 22.1) is probably one of the reasons that game commerce exists. Game
developers are often Achievers and Explorers. They want players to work through the
game and experience all of the developers’ carefully crafted content. The problem is
that many players are not similarly motivated. Socializers want to be able to play with
whom they want when they want. Some Achievers are more interested in status than
achievement and they may not have as much time to play as the developers want
them to devote to the game. Competitors are interested in competition, not resource
gathering or exploration. Explorers may want to be able to go everywhere and do
everything without “achieving” everything necessary to unlock all of the game’s doors.
Game commerce is the shortcut for all of these players to achieve their goals in spite
of the game designers’ wishes.
When game commerce is not explicitly permitted, it still creates problems for
game operators, because players will engage in game commerce activities whether
officially allowed or not. In 2005, Nick Yee
surveyed 1,923 EverQuest players and found that 22 percent admitted to buying gold.2
These gold buyers purchased an average of $135 per year in gold: fairly close to what
they were paying in subscription fees to the game. Considering that players are
likely to under-report both the rate at which they do something frowned upon,
like gold buying, and the amount that they spend on it, gold buying is clearly
widespread. Players pay for convenience.
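As a quick back-of-the-envelope check on the claim that gold spending was “fairly close” to subscription fees (the monthly fee below is an illustrative assumption for a 2005-era MMO, not a figure from the survey):

```python
surveyed = 1923
buyer_rate = 0.22          # 22 percent admitted buying gold
avg_gold_spend = 135.00    # average annual gold spend per admitted buyer

admitted_buyers = round(surveyed * buyer_rate)  # about 423 players

monthly_fee = 12.99                    # assumed subscription price
annual_subscription = 12 * monthly_fee # about $156 per year

# $135 on gold versus roughly $156 in subscription fees per year:
# the gold sellers capture nearly as much revenue per buyer as the
# game operator does.
```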
Game commerce causes problems because it creates a mechanism whereby players can bypass parts of the game or the “effort” that the game’s developers, and a
number of its players, feel that everyone should expend (see the sidebar called “The
Dark Side – Four More Categories of Game Players”). The other problem arises
when players engaging in game commerce activities interfere with day-to-day
game play for ordinary players. This can range from monopolizing game resources
in order to farm them most efficiently to broadcasting annoying advertising
announcements for their services and clogging up the game’s communications
channels (see Chapter 21).
The biggest problem with game commerce is that it is lucrative. Some
estimates place total gold farming revenues at over $1 billion worldwide.3
Unfortunately, this kind of money creates problems both within the industry and
outside it. There have been a number of cases of employees using their access to online games to fraudulently create virtual items for sale. There is also the growing
problem with online criminals targeting MMOs because of the ability to convert
stolen account information into cash with little to no risk.
Many game-commerce problems are due to the simple, abstract economic systems
that are found in many online games. Though these systems are called “economies,”
they are really amusement parks. There is no supply and demand. Players “ring the
bell and win a prize!” In some sense, problems arise because the games do not fully
embrace their amusement park “nature”: assets can’t be stolen, but they can be
bound to the player who picks them up; resources don’t have weight or size,
but players have limited inventories; players can teleport or fly around quickly, but
can’t take their characters to another game server or shard; the games are supposed
to be “fun,” yet are designed around a treadmill or “grind” to force players to forever accumulate resources and repair “worn” items.
In some sense, these games’ economies are not really meant to be played for
fun. Rather, they act as surrogates for the long lines and height restrictions that one
finds in a traditional amusement park. After all, what is the requirement to accumulate resources but a way to delay players from entering “high-value” instances or
dungeons? And what are level restrictions, but ways to limit when a player can
access certain adventures?
For many players, game commerce gives them a way to “cut to the front of the
line” and “grow a couple of inches” so they can ride the roller coasters instead of
being stuck on the kid’s rides.
By this measure, gold farming, power-leveling, and the other forms of game
commerce are symptoms of game design failures. After all, gold buyers and other
game commerce consumers are giving money to someone else instead of the game
operator in order to have the entertainment experience that they desire.
For a long time, the dominant business model for the online game industry has
been “purchase and subscribe,” whereby a player would buy a shrink-wrapped
game box and pay a monthly subscription to play. (It should be noted that before
Internet Service Providers (ISPs) moved to flat monthly fees, many games and ISPs
had a metered service where players paid by the hour or minute.) World of
Warcraft, Lord of the Rings Online, and most other “major,” traditional MMOs follow this
model with a standard retail price of around $50 for the game box and between $10
and $15 for a month’s subscription.
A number of companies have tried variants of this basic model. The most
familiar alternative is to provide the game as a free download with a monthly
subscription, as found in EVE Online and some of the older, larger Asian MMOs.
Conversely, ArenaNet’s Guild Wars is purchased, like a standard game, but there is
no fee to play online. No one else has really adopted this model for a persistent
world game even though it was very successfully pioneered by Blizzard’s Battle.net
for the Starcraft, Warcraft, and Diablo games. A free online service combined with
a purchased game is found routinely with first-person shooters and real-time strategy games. However, for these games, the online service is often little more than a
lobby. Also, publishers have been quite willing to shut down the online service,
almost on a whim.
Subscription games with a persistent world or economy are the primary victims
of game commerce problems.
Over the past several years, the “free-to-play” (F2P) model has rapidly emerged.
This business model is usually based on a game that requires no subscription to
play, but collects revenue by the sale of individual virtual assets (the virtual currency
is purchased in a number of ways, such as credit cards, debit cards, phone-based
payments, wire transfers, or pre-paid cards). The F2P business model grew popular
in Asia with the tremendous success of games like Nexon’s KartRider, MapleStory,
and Audition. Recently, the F2P model has grown in Western markets because of
the popularity of Asian games as well as the development of original game titles
such as EA’s Battlefield Heroes.
Interestingly, some of the earliest adopters of the F2P model were US-based text
MUDs, such as Iron Realms’ Achaea, which launched with virtual currency purchases in 1997 and had its first virtual item auction in 1998.4 Iron Realms uses a
dual currency system with one currency based on time in-game and the other based
on direct purchases. The only type of trading that this system allows is between the
game’s two currencies.5
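How such a rule might be enforced in code is sketched below; the class shape, method names, and exchange rate are illustrative assumptions on my part, not Iron Realms’ actual system:

```python
class DualCurrencyWallet:
    """One currency earned through in-game time, one purchased directly.
    The only trade supported is an exchange between the two."""

    EARNED_PER_PURCHASED = 50  # exchange rate (illustrative assumption)

    def __init__(self, earned=0, purchased=0):
        self.earned = earned
        self.purchased = purchased

    def exchange_purchased_for_earned(self, amount):
        """Convert purchased currency to earned currency at the fixed rate."""
        if amount <= 0 or amount > self.purchased:
            raise ValueError("invalid exchange amount")
        self.purchased -= amount
        self.earned += amount * self.EARNED_PER_PURCHASED

    def transfer_item(self, item, other_wallet):
        """Direct player-to-player item trading is deliberately absent,
        which removes the main channel that gold farmers rely on."""
        raise NotImplementedError("only currency exchange is allowed")
```

The design point is that restricting the transfer surface to a single, operator-controlled exchange makes unauthorized real-money trading much harder to conduct.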
It should be noted that some games earn revenue from advertising or other
marketing activities like surveys which are, in turn, linked to virtual item purchases.
There are seemingly endless variations on these models. Jagex’s RuneScape is
free-to-play, but has a low-cost monthly subscription option ($5.95 per month) that
gives a number of advantages—a strategy that is also used in Disney’s Club
Penguin. Linden Lab’s Second Life earns its primary revenue from selling virtual
items and renting virtual real estate. Many F2P games still include an actual in-game economy just like the subscription games discussed here and are vulnerable
to the same sorts of game commerce problems for this portion of their business.
Finally, there is the “broker” model—where an online game earns funds by
brokering transactions between buyers and sellers of virtual goods and earns a
transaction processing fee. Linden Lab earns some money from this model and
IMVU does as well. In some sense, Apple’s App Store and iTunes use this same
strategy. The challenge for the broker model is that the transaction processing margins need to be relatively large to handle any payment problems that occur, such as
chargebacks (see Chapters 27 and 28 on money and payment security issues) and
bandwidth for large digital items. For Apple’s App Store, the company takes 30 percent of the transaction.6
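The margin argument can be made concrete. In the sketch below, the 30 percent fee is the figure cited for Apple’s App Store, while the chargeback rate is purely an illustrative assumption:

```python
def broker_expected_margin(sale_price, fee_rate=0.30, chargeback_rate=0.02):
    """Expected net revenue per sale for a broker-model marketplace.

    A chargeback can cost the broker up to the full sale price (the
    seller may already have been paid), so even a small chargeback
    rate eats visibly into the fee. This is one reason the transaction
    margins need to be relatively large.
    """
    fee = sale_price * fee_rate
    expected_chargeback_loss = sale_price * chargeback_rate
    return fee - expected_chargeback_loss
```

At these assumed rates, a 2 percent chargeback rate consumes roughly a fifteenth of the gross but a noticeably larger share of the fee itself, which is the broker’s only revenue.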
Who owns your virtual assets? The game developer or operator? The player? This
question has vexed the online game and virtual world industries from the days of
the earliest text MUDs.
The general perspective of most developers is that the company owns all of the
assets and that the player is simply renting access to them—much as they would a
subscription to HBO. The assets and characters are non-transferable, have no intrinsic economic value, and can be altered or removed at the whim of the developer
(as can the player).
There are excellent reasons for this. Game developers have been reluctant to
invest in the effort needed to build highly robust, transactional, and reliable systems
to store virtual asset information. Online games are in a constant state of flux and
so the developers are also concerned that any alterations to the value of virtual
assets (nerfing) might incur a liability for the company because players had already
“invested” in those assets. Other questions include what happens when the game
comes to an end, and issues related to gambling, as many online games include
a chance element (see Chapter 31).
Others, most notably Raph Koster7 and Erik Bethke, have argued for what
many call an “Avatar’s Bill of Rights.”8 Tony Walsh has argued for further “rights”
related to data.9 The term “avatar” is more than a bit misleading. They are really
arguing more about the rights of players in online games and virtual worlds. They
make many excellent suggestions from a best practices perspective and many of the
issues that they raise are far beyond the scope of this book. A couple of issues that
are relevant include the ownership of virtual assets and avatars and the rights of
players in relation to banning and punishment (see Chapter 23).
The essential insight highlighted by the “Avatar Rights” movement is that players ascribe substantial value to their game characters and virtual assets. In some
sense, the willful denial of this fact has facilitated the growth of gold farming and of the criminals who target online games.
Because developers don’t consider the value that players put in their virtual
“stuff,” customer service is often not responsive to player complaints about lost
items. Also, the game systems are not built to easily log, track, remove, and restore
these items in case of loss or theft.
In some sense, this is ironic—the same game companies that argue vigorously
that virtual items have no value are extraordinarily reluctant to restore players’
characters or virtual items after alleged theft. The argument is
typically made that the players are abusing the system by allowing their items to
be stolen (or, actually, selling them) and then making a complaint to the game
operator. If the items have no value, restore them.
However, this is not a matter of rights; it is a matter of good business. Bethke
proposes a “Better EULA,”10 not some sort of formal and universal declaration of
rights. The real key to this issue for online game businesses is to maximize their revenues and minimize their costs. The extent to which expanded ownership of virtual
items by players increases the popularity of a game or the revenues earned from
each player is the extent to which online developers should extend rights and
control of virtual items.
Gold farming is almost certainly the most serious game commerce problem for
most online games. After all, gold farmers are playing the game for money and are
therefore much more highly motivated than ordinary players playing for fun. From
a game-play perspective, these players are not (usually) cheating; they are simply
playing the game the wrong way (gold farmers are almost always violating the
game’s terms of service, but not exploiting or breaking the game play mechanics).
The major irritations that gold farmers cause for other players include:
Resource Monopolization—Gold farmers ruthlessly optimize their game play
and actively seek out the highest-value activities and items in the game. After
all, these items are worth the most to the less-motivated players, who are the
farmers’ customers.
Anti-Social Behavior—Generally, gold farmers have no time or interest in
communicating with other players. They are playing to make money.
Interestingly, the growing practice of using “instances” (a portion of the virtual
world that is only available to an active group of players, like the people riding
on the same roller-coaster car) could help minimize this problem by allowing
gold farmers to self-segregate and no longer annoy other players.
Aggressive Marketing—The best place to sell items is when and where people
use them. Gold farmers are very creative in flooding available communications
channels with their sales pitches. It would be interesting to see the effect on gold
farmer marketing of separating game play from “inventory adjustments.”
ArenaNet’s Guild Wars has a limited number of skill slots that are set at the beginning of a play session and the game is highly instanced. This may reduce the
value of intrusive marketing. Some games have restricted communications to
“shouting distance”—a more realistic option, but many do support the ability
to broadcast messages, which definitely amplifies this problem.
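The “shouting distance” restriction mentioned above amounts to a range check at message delivery time. A minimal sketch, with coordinates, range, and data shapes as illustrative assumptions:

```python
import math

SHOUT_RANGE = 30.0  # delivery radius in world units (assumption)

def players_in_earshot(sender_pos, player_positions, shout_range=SHOUT_RANGE):
    """Return the players close enough to receive a 'shouted' message.

    Unlike a global broadcast channel, this caps how many players a
    single advertising message can reach.
    """
    sx, sy = sender_pos
    return [name for name, (px, py) in player_positions.items()
            if math.hypot(px - sx, py - sy) <= shout_range]
```

A gold farmer spamming such a channel must physically move the character around the world, which costs time and makes the behavior far easier to detect.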
Most of the countermeasures today are targeted towards the gold farmers, not
their customers. One could make an analogy to the proven futility of this approach
as seen in the “war on drugs.” After all, if there are no customers, there will be no
gold farmers. The main objections to gold buyers are:
Unearned Rewards—Players who buy virtual currencies, items, or characters
are perceived to not have “earned” them. In some sense, this is a problem for
Achievers and Explorers who view accomplishment in the game in terms of
their acquired assets and knowledge.
Incompetent Play—Players who buy items or characters have not experienced
the portions of the game that allowed them to know how to use the items or
characters and are thus unable to play at the same level as traditional players.
This is one of the main arguments against power-leveling, discussed later in this chapter.
Gold farming’s main impact is on customer service and retention. If players are
unhappy and complain, it costs the game operator money for additional customer
support; and if players are sufficiently angry with the gold farmers and quit playing,
there is a serious problem. Interestingly, Sony set up the Station Exchange in late
2005 to legitimize these real-money transactions between players and bring them
in-house. Sony found that this change reduced their customer service call minutes
related to virtual item trading from 40 percent to 10 percent11. The reduction in
customer service expense was probably substantially more valuable than the almost $275,000 that Sony earned in sales commissions during the first year of the
service. Unfortunately for game companies, players will call and complain about
problems with their virtual item trading—whether the transactions are permitted
by the game or not.
The general challenge for game developers is that the value they put on fighting gold farming and other game commerce activities is far less than the value that
gold farmers put on their own business. From the perspective of the game developer, gold farming is a customer service problem. Blizzard, the developer and operator of World of Warcraft, claims to have spent only $200 million in upkeep since
the game launched in 200412 or just $50 million on average in each of the last four
years. As the most popular game in the world during this period, World of Warcraft
is likely responsible for a majority of the estimated $1 billion in gold farming revenues per year. The huge disparity between $50 million for total annual operations
(including everything from customer support to technical support, hardware, and
bandwidth) and any significant fraction of $1 billion clearly shows the huge resource disadvantage of any game developer compared to the gold farmers.
Gold farmers will use anything and everything to support their business. If the
game allows free or introductory accounts to recruit new players, the gold farmers
will use them as “mules” to pick up, transport, and store collected items and virtual
currency so that the real farming accounts are less visible to the game. Gold farmers will also use these free accounts to broadcast marketing messages until the accounts are banned. After all, if the account is free, there is no real cost for the gold
farmer... a topic that will be of particular importance when I discuss gold frauders,
later in this chapter.
Protecting Games: A Security Handbook for Game Developers and Publishers
All hope is not lost. There are countermeasures that can be taken to help control
or stop gold farming and other forms of game commerce. Many of the techniques
are used by game companies already. Some may change the game-play experience
in an unacceptable way and a number are far from perfect. Hopefully, however, they will give game developers some additional tools to add to their anti-gold farming arsenal:
Eliminate Character and Item Exchange—This is the “nuclear” option. The
problem is that players really like to be able to exchange game items and
currency. Eliminating the exchange of characters or accounts is very difficult, as
it is problematic to implement and enforce a strong, effective identity system
(see Chapter 29 on identity).
Soul-Binding—This technique is fairly common. An item is bound to a character once it is found or earned. This is a limited case of the eliminate-trading option described previously.
Player-Binding—Instead of binding items to a character, simply bind them to
a player’s account. This may also encourage players to hoard items for later use by characters created later, potentially extending their duration as a customer.
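Soul-binding and player-binding both reduce to an ownership check at trade time. The sketch below shows one way this could work; every class and function name here is a hypothetical illustration, not any particular game's API:

```python
# Sketch of soul-binding (bind to a character) vs. player-binding (bind to
# an account). A real game would hook these checks into its loot and trade
# systems; every name here is hypothetical.

class BindError(Exception):
    pass

class Item:
    def __init__(self, name, bind_on_pickup=False):
        self.name = name
        self.bind_on_pickup = bind_on_pickup
        self.bound_character = None  # set when soul-bound
        self.bound_account = None    # set when player-bound

def pick_up(item, character, account, mode="soul"):
    """Bind the item to its finder when it is found or earned."""
    if item.bind_on_pickup:
        if mode == "soul":
            item.bound_character = character
        else:  # "player": usable by any character on the same account
            item.bound_account = account

def transfer(item, to_character, to_account):
    """Refuse any trade that would move a bound item to a new owner."""
    if item.bound_character and item.bound_character != to_character:
        raise BindError(item.name + " is soul-bound")
    if item.bound_account and item.bound_account != to_account:
        raise BindError(item.name + " is bound to another account")
    return True
```

Note that player-binding is strictly looser than soul-binding: the item still cannot leave the account, but any of the account's characters may receive it.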
No Private Exchanges: Internal Open Market—All items (and, potentially,
characters) are exchanged via an internal open market like eBay or a stock exchange. Players bid using in-game currency for the items. The lack of private
transactions or gifting will force all gold farming transactions to devolve into
power-leveling, a much more limited business.
Gold Farmer Targeting—Instead of banning gold farmers, griefers, or other
game system abusers, simply turn off the game’s protections for their
characters. Make the characters and their items “fair game” for other players.
Gold farmers then need to decide whether to start another character, which
costs time and therefore money, or deal with the risk of losing their loot to
other players during conflict. Also, gold farmers may think that they have
closed a legitimate financial transaction and find that the player decides to steal
the item from the farmer. This could force gold farmers to use high-level characters to execute or monitor transactions, making their business more costly.
Loot Detection—Large concentrations of game currency or items could be detectable via a spell or skill, making gold farmer treasure stores or players targets.
This could be used in conjunction with gold farmer targeting.
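Server-side, such loot detection could be as simple as flagging holdings that dwarf the server norm. This is only a sketch; the median-based threshold and the function name are illustrative assumptions:

```python
# Hypothetical "loot detection" scan: flag characters whose currency or
# tradable-item hoard far exceeds the server norm, making them discoverable
# targets (e.g., via an in-game detection spell or skill).

def flag_hoards(wealth_by_character, multiple=10):
    """Return characters holding more than `multiple` times the median wealth."""
    values = sorted(wealth_by_character.values())
    median = values[len(values) // 2]
    threshold = median * multiple
    return {c for c, w in wealth_by_character.items() if w > threshold}
```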
Pure Barter Economy—The lack of a standard in-game currency will force
gold farmers to spread their efforts over the entire range of game items. This
also makes the game economy more dynamic and vulnerable to legitimate
player manipulation.
No Gifting, No Dropping—The ability to give an item to another player is the
primary mechanism for gold farmers to complete their transactions. Jagex implemented a variation on this system for RuneScape that stopped “unbalanced
trading,” whereby players were using the game’s trading mechanism as a
“covert gifting channel” by trading items of vastly unequal value13. They also
restricted the ability of a player to drop items so that others could retrieve it.
Leveling Premium—Power-leveling is particularly difficult to stop, as discussed later in this chapter. Players can always find ways to exchange account
information. Instead, make it really inexpensive for players to buy their way up
to a higher level. Charge a nominal amount per month for a higher-level character: add $.10 per level. So, to start at level 2, the game subscription would be
only ten cents higher, but to start at level 50 would add $5.00 per month.
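The surcharge arithmetic is simple to sketch. Charging ten cents for each level above level 1 makes level 2 ten cents more and level 50 roughly the $5.00 in the example ($4.90 exactly):

```python
def level_surcharge(start_level, per_level=0.10):
    """Monthly surcharge for starting above level 1: ten cents per level.
    Level 2 adds $0.10; level 50 adds $4.90, roughly the $5.00 in the text."""
    return round(per_level * (start_level - 1), 2)
```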
Buy Everything: Pure Free-to-Play—If you want an item, you have to buy it
with real currency. This is the purest form of the “free-to-play” business model
discussed previously. Again, power-leveling is the only remaining game commerce concern.
Dual Currencies—Support a time-based currency, based on game play and
in-game activities, and a money-based currency. Allow players to trade between
the two via a blind, open auction. This is the method pioneered by Iron Realms5.
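A blind auction between the two currencies can clear like a tiny sealed order book. The greedy matcher below is only a sketch of the general idea, not Iron Realms' actual mechanism:

```python
# Sketch of a blind auction between a time-earned currency and a
# money-bought currency. Prices are in money-currency per unit of
# time-currency; bids are sealed until the auction clears.

def clear_auction(buy_bids, sell_asks):
    """Greedily match the best bid with the best ask, splitting the spread."""
    buys = sorted(buy_bids, reverse=True)
    sells = sorted(sell_asks)
    trades = []
    while buys and sells and buys[0] >= sells[0]:
        trades.append((buys.pop(0) + sells.pop(0)) / 2)
    return trades
```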
Item Exchange Aging—Items “decay” the more often they are traded. This
reduces the in-game economic value of farmed items. It also creates a dilemma
for a gold farmer: The fewer trades, the more valuable an item is. However, the
fewer trades, the easier it is for game developers to back-track item exchanges
and detect the gold farmers. The decay process can affect the item’s performance or simply consume the item. Once an item has been exchanged (for
example) three times, it simply disappears.
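Exchange aging reduces to a per-item trade counter. The three-trade limit below mirrors the example in the text; the consume-on-limit behavior (rather than gradual performance decay) is one of the two variants described:

```python
# Sketch of item-exchange aging: each trade increments a counter; after a
# configurable number of trades the item is consumed. Names are hypothetical.

MAX_TRADES = 3

class TradedItem:
    def __init__(self, name):
        self.name = name
        self.trade_count = 0
        self.destroyed = False

    def trade(self):
        if self.destroyed:
            raise ValueError(self.name + " no longer exists")
        self.trade_count += 1
        if self.trade_count >= MAX_TRADES:
            self.destroyed = True  # item "decays" away after its third exchange
```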
Badging: Pure Experience-Based Play—Players gain the ability to use items
based on completing different quests, dungeons, or other in-game achievements, not based upon finding the items or accumulating currency. More complicated variations can require multiple badges to use certain items with the
ability to mix and match badges to configure a character as the player wishes.
The only currencies in such a game are time and effort.
Real Game Economy/Corrupt Game Commerce—Having a rich economic
system with extensive trading and market manipulation options will allow regular players to engage in economic warfare. If items have weight and size and
there is no teleportation and player versus player conflict is allowed, gold farmers face risk and competition from regular players. Many abstractions of a
game’s economy and game play mechanisms to make game play “safer” also
make gold farming much easier.
Merchant Account—Have players pay a premium for a legitimate merchant
account that allows in-game trading and gifting. These accounts would have
additional features to support some level of transaction transparency. The
game operator would not become involved in the transaction, but might provide a simple rating system. Conquer Online announced such an option in April 200814.
Gift Tagging—Permanently mark items and currency as having been exchanged or traded. There may be a humiliation factor associated with having
such items visible to other players in your inventory.
The Purist Badge—Create a player badge for players who have never accepted
(or, optionally, given) gifts. Every player starts with this badge until they lose
it by participating in a transaction. Conversely, players who have given or
received items could be given a “trader” badge.
Virtual Item Honeypots—This is a dangerous tactic. Consult an attorney.
Game operators seed third-party sites that support gold farming transactions
with items for sale and then ban the customers after a period of time if they’ve
made several purchases. This may also create a lot of ill will with your
customers. Obviously, the game company could make a fair amount of money
this way.
Frictionless Player Transactions—Fully support inter-player transactions, but
only for purchasing additional game currency (for free-to-play games) or additional subscription time. This keeps money in the game operator’s system
and basically allows players to play an economic game whereby they have an
advantage over gold farmers, because the transactions should be much more
efficient as there is no risk. The game company will reduce its regular revenues
as players convert game assets into subscription time or currency.
Penalize Gold Buyers—Instead of targeting gold farmers, target gold buyers.
However, instead of banning the players, fine them enough so that they will
have lost more than if they didn’t buy gold. Reset their account to a check
point prior to the transaction. If a player buys 200 gold from a gold farmer, fine
the player 400 gold. There is no need to correlate the time of the punishment
with the crime. This means that once a gold farmer account is identified, it can
be used to target gold buyers; and the gold farmer won’t know that their
account has been compromised.
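The penalty arithmetic above can be sketched directly. The 2x multiplier matches the text's example of a 400-gold fine for a 200-gold purchase; the function names are invented:

```python
# Sketch of penalizing gold buyers rather than farmers: the fine exceeds
# the purchase, so the buyer ends up worse off than if they never bought.

def gold_buyer_fine(purchased_gold, multiplier=2):
    return purchased_gold * multiplier

def apply_penalty(balance, purchased_gold):
    """Buyer keeps the purchased gold but pays the fine: a guaranteed net loss."""
    return balance + purchased_gold - gold_buyer_fine(purchased_gold)
```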
Slow Trade—Once an item has been traded, it cannot be traded or transferred
again for some period of time. This can be helpful in detecting account looting
by online thieves. Also, it slows down the gold farming pipeline, if there are
multiple transfers between players’ characters.
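A trade cooldown is a one-line check. The 24-hour window and the plain-number timestamps (e.g., epoch seconds) below are assumptions for illustration:

```python
# Sketch of "slow trade": once an item changes hands it is locked for a
# fixed window before it can be traded again, slowing the mule pipeline.

COOLDOWN = 24 * 3600  # assumed 24-hour lock

def can_trade(last_traded_at, now):
    """True if the item has never been traded or the cooldown has elapsed."""
    return last_traded_at is None or now - last_traded_at >= COOLDOWN
```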
Usage Limitation—China and some other countries have instituted usage limitations whereby players earn full rewards for only a couple of hours of play. If they play
longer, they will earn reduced experience and rewards. After five hours, they
will earn no experience or gold15. This effectively turns games back into a metered-play system, which will substantially increase costs for gold farmers, while
having little practical impact on regular players. If a game’s usage limit is set at
five hours, costs are increased more than a factor of five (as the reduced experience after two hours could potentially drive a gold farmer to need 12
accounts to provide 24-hour coverage at full experience).
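The reward schedule and the cost arithmetic can be sketched together; the 50 percent middle rate is an assumption, since the text only says "reduced":

```python
# Sketch of a usage-limitation schedule: full rewards for the first two
# hours, reduced rewards until five hours, nothing after that.

def experience_multiplier(hours_played_today):
    if hours_played_today < 2:
        return 1.0
    if hours_played_today < 5:
        return 0.5  # assumed "reduced" rate
    return 0.0

# The cost arithmetic from the text: with only two full-rate hours per
# account per day, 24-hour farming at full experience needs 24 / 2 accounts.
ACCOUNTS_FOR_FULL_COVERAGE = 24 // 2
```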
Deep Logging and Analysis—Fully track the history of each individual item
from the time it is created until it is destroyed. Track the transactions that
move the item from the game system to a game creature or spawn point or
quest through each and every character that has involvement with the item.
Even helping kill the monster that drops an item without picking up the item
would result in a logged event. This should help identify all of the accounts involved in gold farming, even indirectly, as well as help isolate exploits and other
game problems. Good logging is not useful without corresponding analysis.
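A minimal shape for such per-item lifecycle logging is an append-only history keyed by item, which analysis can later walk to recover the full chain of custody. Event names here are invented:

```python
from collections import defaultdict

# Sketch of deep item logging: every event that touches an item, from
# spawn to destruction, is appended to that item's history.

item_history = defaultdict(list)

def log_event(item_id, event, actor):
    item_history[item_id].append((event, actor))

def accounts_touching(item_id):
    """Every account with any involvement, even indirect (e.g., helping
    kill the monster that dropped the item without picking it up)."""
    return {actor for _, actor in item_history[item_id]}
```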
Virtual Salary—Give players the option of earning a virtual salary through
time of play, level, or even direct additional subscription payments. This would
allow players who don’t have as much time to play to bypass “the grind” without resorting to buying from gold farmers. This is essentially an alternative
way of presenting the free-to-play model to players.
Delayed Banning—Valve Software has long instituted a delay in the time
between when a player has been caught cheating and when they are banned16.
Recently, Blizzard has used the same strategy for World of Warcraft 17. The advantage of this approach is that it makes it much more difficult for a game
cheater or gold farmer to isolate the method used to detect their activities.
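Delayed banning amounts to queuing each detection and applying the ban a randomized interval later. The one-to-three-week delay bounds below are assumptions, not Valve's or Blizzard's actual values:

```python
import random

# Sketch of delayed banning: the ban lands a randomized time after
# detection, so the cheater cannot correlate the ban with the tool or
# action that tripped detection.

DAY = 24 * 3600

def schedule_ban(detected_at, rng, min_delay=7 * DAY, max_delay=21 * DAY):
    """Return the time at which the ban should actually be applied."""
    return detected_at + rng.uniform(min_delay, max_delay)
```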
Booster Decks/Personal Treasure Chests—Instead of selling specific items,
sell packages of randomly allocated items or have them dropped as treasure.
Then allow players to buy and sell the items via an in-game service and let the
players set the prices (see the section called “Potential Solutions,” later in this chapter).
Identity Tokens/Improved Authentication/Strong Identity—A strong identity system creates improved accountability. Recently, Blizzard added support
for a low-cost identity token ($6.95 each) for players of World of Warcraft 18.
This will certainly help reduce fraud by online criminals and make it much
more difficult for players to assert that their accounts have been stolen. Such a
scheme probably has more benefit for the company than the players.
These measures do not have to be used in isolation. In fact, many will work better when combined. Specific game designs and the preferences of the developers
may make several of these solutions impractical. However, it is possible that the
unique character of a game may also allow other anti-gold farming measures that
fit particularly well with the game’s specific environment.
In early 2008, it appears that there was a turning point in the battle between game
operators and gold farmers. It seems that the game operators’ efforts in
2007 to suppress gold farmers were succeeding. However, instead of driving gold
farmers out of the industry, gold farmers changed their way of doing business—and
really began to hurt the game operators:
I think the issue of farming is higher on the radar now than it ever has been.
The behind-the-scenes things are really frustrating. A lot of these farmers are
essentially stealing from us. What they do is they charge us back all the time.
They use a credit card—sometimes stolen, sometimes not—to buy an account
key. They use the account for a month, and then they call the credit card company and charge it back. We have suffered nearly a million dollars just in
fines over the past six months; it’s getting extremely expensive for us. What’s
happening is that when they do this all the time, the credit card companies
come back to us and say “You have a higher than normal chargeback rate,
therefore we’ll charge you fines on top of that.” We’re really trying to get on top
of that. We’re taking our current efforts up about five notches to Defcon 1 on
this issue. They bug us even more than they bug our customers, and we’re
definitely taking steps to implement rigorous anti-farming efforts.
It’s actually really amazing to sit and watch these people work. I’ve personally
sat with [our customer service team] as they’re tracking a farmer, and you’ll
see a mob spawn—this [gold farmer has] got a bot that within half a second
has them moving towards the creature even if it’s halfway across the
zone. It’s a serious problem.
—John Smedley, CEO, Sony Online Entertainment [emphasis added] 19
Gold farmers no longer buy the game and pay their subscriptions until they are
banned; they now use stolen credit cards or legitimate cards and charge back their purchases. After all, this substantially reduces their costs, particularly as the game companies have become more effective in their banning efforts.
“We’re seeing a lot of stolen credit cards. Say you buy gold from a service in
China—you may not know it’s in China, but you give them your credit card
and buy gold only once. They use these credit card numbers to set up new
accounts in these games. They buy an EverQuest account key, farm for a month,
and then charge it back to the stolen credit card.”
—John Smedley, CEO, Sony Online Entertainment 20
It is unusual and notable for anyone at any company to speak out on security
issues, especially when they are costing the company money, and Mr. Smedley should be commended for doing so.
And the problem is not limited to Sony. Halifax bank in the UK decided to stop
accepting payments for World of Warcraft because of rising levels of fraud21.
Although gold farmers pay their employees a low, but living, wage of as little as
$142 per month22, the cost of a game ($50) and subscription ($10 to $15 per
month) can be a major cost factor if an account is banned regularly.
Of course, things don’t end there. Why even bother gold farming? A gold frauder
could use a stolen credit card to buy gold from a player or gold farmers, and then
turn around and sell it. Online criminals are getting into the game as well. The value
of a stolen World of Warcraft account is $10, whereas stolen credit card account information goes for only $6 according to researchers from Symantec23. Hackers
seeded 10,000 web pages with malicious code to steal game passwords24 and even slipped a web ad onto some World of Warcraft community sites that installed an account-stealing Trojan when visitors merely rolled over it25. EVE Online had a similar problem with a website
involved in real money trading26. Viruses that target online games are now routinely
in the “Top 5” list of threats from anti-virus vendors. Account theft and looting is
not solely the province of organized criminals. Players sometimes share account
information and those “friends” sometimes take advantage and loot the account27.
The final, quite serious, problem that game developers face is corrupt employees. Insiders have access to the game’s systems and data and can manipulate them
to their advantage. In 2006, three employees of Shanda Interactive, including a vice
president, were caught duplicating and selling very rare weapons28. This made it
very easy to detect their crime. If they had duplicated more common, but valuable,
items like game currency, it is a fair question as to whether they would have ever
been caught. It is possible for a hacker to attack the game from the outside (see
Chapter 32), but, unfortunately, employees are the biggest threat.
Online games offer a lucrative target with little to no legal risk. What prosecutor is going to try to argue about the value of virtual items when the game companies themselves don’t consider the transactions legitimate?
Even if game companies do not value virtual items, they are going to need to
take virtual item theft seriously. At a minimum, they need to be able to track virtual
items and transactions in great detail to be able to undo virtual theft from a customer
service perspective. Several of the other techniques listed previously, such as slow
trading, can also help minimize the effect of virtual theft. In Asia, some game companies have moved to using cell phones as authentication tokens (by implementing
a challenge/response system) as well as for payments. This could be extended to
verifying virtual item transfers. Cell phones generally also have an advantage, as
they provide reasonably good identification information about their owners.
The gold farming problem is a serious customer service problem and gold frauding
is shaping up to be a real threat to the game industry. There are options. Pure free-to-play games, where players can purchase every item in the game, are much less appealing to gold farmers and frauders. There is still risk that someone can steal an
account, but without a market for individual items, it is much easier to restore an account without tracking item trades because this only requires changing a password.
Another approach without trading is to move to a pure “badge” system, as
mentioned previously. In many online games, currency is really an afterthought
and the games’ economic systems are tacked on to core systems tied to experience
and adventuring. Rewarding players for their adventuring prowess with badges or
achievements is quite natural. These badges can then be used as the currency to
“buy” various items. Developers can allow players to earn multiple copies of the
same badge by repeating an adventure; they can also make items freely convertible
between an item and its constituent badges, and even have multiple combinations
of badges that can yield a given item.
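A badge economy can be sketched as items "bought" with badge counts, where an item may have several alternative badge recipes. The recipe contents below are entirely invented for illustration:

```python
from collections import Counter

# Sketch of a badge-based economy: badges earned from achievements act as
# the only currency, and an item may have multiple badge recipes.

RECIPES = {
    "flaming_sword": [Counter({"dragon_slain": 2}),
                      Counter({"arena_champion": 1, "dragon_slain": 1})],
}

def can_redeem(badges, item):
    """True if the player's badges satisfy any recipe for the item."""
    return any(all(badges[b] >= n for b, n in recipe.items())
               for recipe in RECIPES[item])
```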
One system that supports trading and gifting, but has not been plagued by gold
farming, is the virtual Collectible Card Game, Magic Online, and other games of its
ilk. Magic: The Gathering is one of the most significant games in recent memory.
Both the face-to-face and online versions are sold via basic game decks and booster
decks. These decks contain random sets of the game’s various cards, but are all sold
at certain fixed prices. Players set their own value for the items. The game company
does not. The company does, however, collect a percentage of any exchanges of
cards between players. This approach allows a game operator to profit from real
money transactions without many of the risks.
To date, developers have only used this method with cards (real or virtual), but
there is no reason to restrict the system in this manner. A game could easily issue
treasure chests as its only reward with truly random items to be found in each. Each
treasure chest would be the equivalent of a booster deck and so the only design
variable would be how the treasure chests were issued or earned or purchased.
From a design-management perspective, the game developers simply have to concern
themselves with the rate at which players can earn treasure chests in the game and
make these rewards fairly uniform to eliminate efficient “runs” for gold farmers.
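The treasure-chest-as-booster-deck idea reduces to drawing uniformly from the item pool, so no particular activity yields better loot than any other. The pool and chest size here are invented:

```python
import random

# Sketch of "treasure chests as booster decks": the game's only reward is a
# chest of truly random items; players then price items themselves on an
# in-game exchange. Uniform draws leave no efficient "run" for farmers.

ITEM_POOL = ["sword", "shield", "potion", "gem", "scroll"]

def open_chest(rng, size=3):
    """Return uniformly random chest contents."""
    return [rng.choice(ITEM_POOL) for _ in range(size)]
```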
Power-leveling sounds more exciting than it is. It is also a problem that is very difficult to stop. There are two basic types of power-leveling. A power-leveling individual or company plays the game legitimately to build up a character until it is
considered valuable. These firms can also boost a player’s status—players have even
paid $300 for 3,000 gamer points to boost their achievements on Xbox Live29. The
power-levelers then sell the character. Individual players will sometimes also sell
their characters or accounts when they are bored with the game. The other scenario
is when a player hires a power-leveling firm to develop a character to spec. This
involves outsourcing the game play to the power-leveler so that the purchasing
players don’t have to take the time (or, some would argue, develop the skill) needed
to create the characters they want.
The reason that power-leveling is so difficult to fight is that it only requires the
exchange of a user name and password to implement. Although this is most often
associated with MMOs, it can occur in any game—a woman arranged to have
another player replace her in an online poker tournament at PokerStars. The surrogate player won, but the company refused to pay the $1.2 million prize because
only the designated account holder is supposed to play in a tournament30.
Strong identity systems can be an effective deterrent to power-leveling, but
they may create a cost and convenience barrier for games seeking to be widely
accessible to potential customers. It is also possible to identify potential power-leveling by using IP address tracking and monitoring individual instances of the
game. This approach is less effective in Asian markets and other locations where
players play in Internet cafes. Also, a motivated power-leveling firm (and foolish
customer) could use remote control software to operate the game client from the
customer’s own computer.
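One crude form of such IP tracking is flagging accounts whose sessions jump between distant networks within a short window. Comparing /16 prefixes, as below, is a deliberately simplistic stand-in for real geolocation, and as the text notes, Internet-cafe regions would need exemptions:

```python
# Sketch of IP-based power-leveling detection: flag sessions that jump
# between different /16 prefixes within a short window. A real system
# would use proper geolocation rather than dotted-quad prefixes.

def suspicious_sessions(sessions, window=3600):
    """sessions: list of (timestamp, ip) pairs sorted by timestamp."""
    flags = []
    for (t1, ip1), (t2, ip2) in zip(sessions, sessions[1:]):
        prefix1 = ".".join(ip1.split(".")[:2])
        prefix2 = ".".join(ip2.split(".")[:2])
        if prefix1 != prefix2 and t2 - t1 < window:
            flags.append((t1, t2))
    return flags
```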
Conceptually, a game that requires a credit card would be more secure if the
player was able to purchase other (expensive) items using the game account, since
a person (hopefully) would be reluctant to share such information. This seems like
a bad idea, in practice.
There are more serious problems associated with power-leveling. Because
power-levelers have access to the players’ accounts, they could break back into an
account after they have finished leveling up the player’s character and loot it, use it
to market gold farming until it is banned, or otherwise take advantage of the users.
For example, in a free-to-play game, power-levelers could purchase additional virtual items using any credits the player has in her account and transfer the items out
while the player is not logged in. (Humorous Hypothesis: Is the definition of an
ethical power-leveling firm one that reminds a player to change her password once
it has completed its service?)
This final grab bag of game commerce activities is virtually impossible to detect or control. These activities all entail players’ intent as opposed to their behavior within the game.
Escort services are a more expensive and limited version of power-leveling. Players
hire other players to play along with them to be more successful more quickly.
Cybersex is often a part of online games. Even a benign kid-friendly game like
Habbo Hotel has had difficulties with virtual prostitution, where players have engaged in cybersex to earn the game’s de facto currency, furniture31. More disturbing still was
the potential involvement of a minor providing virtual sexual favors for real money,
allegedly $50 per “encounter,” in The Sims Online32. The legal implications for any
game operator are quite serious (see Chapter 30 on protecting kids).
Finally, just when it seems like you’ve seen every permutation of game commerce
tactics, a game player was recently solicited to “timeshare” her online game account. Basically, because her character was already at level 60 in World of Warcraft,
the gold farmer was willing to power-level her for free to level 70... an idea that
opens up all of the risks of power-leveling33 with the promise of easy money. One
can only imagine how many people would be duped by such a service.
Generally speaking, game commerce activities are considered a serious customer
service problem at minimum and, with the rise of gold frauding, a potentially
serious threat to the online game industry. However, game commerce has one
interesting security advantage—it ties players to the legitimate game service, making
server piracy much less appealing.
The growth of the free-to-play business model does have an impact on gold
farming, as the game operator can essentially undermine any secondary market in
game items and currency if she wishes. There has been an interesting move by game
companies to take advantage of the revenues associated with game commerce.
Game companies that pursue this route should carefully consider the potential for their game to be deemed an illegal casino.
I am personally concerned that the move to partner with third-party companies that provide these secondary market services is quite risky. These legitimate
firms can always be undercut by unauthorized transaction services and gold farming operations while the business itself is fraught with high rates of fraud and charge
backs. Although the total transaction volumes associated with game commerce
may be large, the fees for processing the transactions are hard to protect in the face
of unofficial competitors and the fees required for payment processing.
The problem of gold frauders and other criminal activity directly targeted at
online games does need to be taken seriously. Game companies need to put in
place mechanisms to make their games a much tougher target—whatever their
view of the value of virtual goods. Although earning $275,000 over a year in virtual
sales commissions is nice, the impact of $1 million in charge-back fees in just six
months combined with increased processing fees, longer payment hold times, and
potential cancellation of service is costly.
Some will argue for stronger user education; however, at the end of the day, the
game company is the one at risk. These risks may be the real engine that drives
changes to the underlying business models and practices used in online games. The
“facts on the ground” are that players consider game assets to have real value and
courts are beginning to uphold this position. Avoiding the issue opens up criminal
opportunities and may not provide the companies with the liability protection that
they have asserted.
1. R. Bartle (1996), “Hearts, Clubs, Diamonds, Spades: Players Who Suit MUDs,”
2. N. Yee (2005), “Buying Gold,”
3. R. Heeks (2008), “Current Analysis and Future Research Agenda on “Gold Farming”:
Real-World Production in Developing Countries for the Virtual Economies of Online Games,”
4. S. Davis (2007), “The World of Text MMOs/MUDs: An Interview with Matt Mihaly, CEO of Iron
Realms Entertainment,”
5. M. Mihaly (2008), “Using Dual Currency Systems Is the Best Way to Sell Virtual Goods,”
6. Apple (2008), “iPhone Developer Program,”
7. R. Koster (2000), “Declaring the Rights of Players,”
8. M. Zenke (2008), “AGDC08: On Avatar Rights and Virtual Property,”
9. T. Walsh (2006), “‘Data Bill of Rights’ vs. ‘Avatar Bill of Rights’,”
10. E. Bethke (2008), “Better EULA,”
11. D. Terdiman (2007), “Real-World Success with Virtual Goods,”
12. K. Pigna (2008), “‘World of Warcraft’ Costs Just $200 Million,”
13. Jagex (2007), “Trade and Drop Changes,”
14. K. Cross (2008), “Recent Headlines for Conquer Online: New Security Features,”
15. Shanghai Daily (2005), “Online Games Set Time Limits Against Addiction,”
16. Wikipedia (2008), “Valve Anti-Cheat,”
17. B. Holloway (2008), “Blizzard’s Gold Farmer Bans Send World Economy into Tailspin,”
18. Blizzard (2008), “Blizzard Authenticator FAQ,”
19. M. Zenke (2008), “A CES Interview with SOE CEO John Smedley (pt. 2),”
20. L. Alexander (2008), “Q&A: SOE, Live Gamer Reveal ‘Live Gamer Exchange’ Service,”
21. J. Leyden (2008), “UK Bank Blames Fraudsters for World of Warcraft Ban,”
22. R. Heeks (2008), “Current Analysis and Future Research Agenda on “Gold Farming”: Real-World
Production in Developing Countries for the Virtual Economies of Online Games,”
23. BBC (2007), “Cursor Hackers Target WoW Players,”
24. R. McMillan (2007), “10,000 Web Pages Infected by Password Hack,”
25. E. Cavalli (2008), “Trojan Attack Targets WoW’s Info Sites,”
26. J. Egan (2008), “EVE Online Currency Sellers Rip Off Players (Shocker),”
27. C. Kagotani (2005), “Japan: MMOG Crime Rising,”
28. A. Xu (2006), “Three Men Tried for Selling Online Game Weapons,” via
29. V. Cole (2006), “$300 for 3,000 XBL Gamer Points?!,”
30. A. Darbyshire (2008), “Woman Withdraws Claim to $1.2m PokerStars Winnings,”
31. BBC (2002), “Furniture Strumpets and Debit Card Toilets,”
32. P. Ludlow (2003), “Evangeline: Interview with a Child Cyber-Prostitute in TSO,”
33. T. Baribeau (2008), “Hit Monsters and Get the Gold,”
To Ban or Not To Ban?
Punishing Wayward Players
To ban or not to ban, that is the question. In order to maintain order in an
online game, virtual world, or social network, it is necessary to do something
about troublesome participants. Usually. There is a huge temptation to
consider player behavior in light of some notion of civil behavior, just as game
companies often view piracy.
But game companies are not governments; their goal is to maximize revenues
and keep their customers satisfied, not mete out “justice.”
There are a number of different contexts in which to consider player punishments. Theft of
service via piracy is very different from griefing or cheating. As discussed in Chapter
22, the costs associated with gold farming and game commerce come from customer
service issues, at least until the gold farmers transform into gold frauders and online thieves.
For punishment to be effective, it must be credible. One of the biggest challenges
for online games is that it is very hard to make punishments stick. Banning is, in
some sense, the least credible form of punishment because identity is so weak in online games (see Chapter 29 on identity). Player punishments should arguably
be ABB—Anything But Banning. After all, what is really being banned is a specific
account or identity, not the actual person.
In an online environment, the real goal of most punishments is to deter problem behavior, not to drive away customers or, worse, to encourage them
to create alternative identities. The other danger of driving away players is that at
some point their population grows large enough to support a viable “black market service” that can compete with the game operator’s legitimate service. The larger the
population of banned or otherwise disaffected players, the larger the pool of potential
customers for a competing, guerrilla service. (There is, of course, the obvious
scenario where these players simply migrate to another online service and help its
population grow faster than the banning service’s.)
Protecting Games: A Security Handbook for Game Developers and Publishers
The other question that has to be addressed is whether the right person is being
punished. As noted, for virtually all games, an account or identity is punished, not an actual person. Console games can use both a player account and
the ID of the console itself for punishment. If the account is used, it may be easy for a player to set up another one using a new credit card
or other payment method for authorization. If the console ID is used, there is some
question whether the player actually owns the console, and there are a number of
issues related to the transfer or sale of a console to another person.
Finally, there is the question of whether the punishment fits the crime. Xbox
Live took action against players who had boosted their Gamerscores (achievements),
most likely via a game save exploit (see Chapter 14). The players had all of their
achievements to date removed, and the achievements they had previously earned
were made permanently unavailable to them. In addition, each player’s account was publicly
flagged as a “cheater”1. The player could still earn new achievements in the future, but it does not appear that the “cheater” flag could ever be removed. Although this
is almost certainly a clear case of abuse, the question is—how much harm did the
player do to the service? There are generally no prizes or awards for achievements2. The
main social power of Xbox Live comes from multi-player gaming, where cheating
certainly would be an issue. If anything, this form of cheating would be a great opportunity for a private punishment—letting the players know that they have been caught,
but not sharing it with others. The cheater’s achievements could be marked “Suspect,”
for her eyes only. The player could then be given the opportunity to delete or re-earn
each achievement or simply leave the “Suspect” flag in place.
Ironically, the richer and more expansive the service, the more costly punishment
is for the game provider. Wide-ranging services like Steam and Xbox Live are true
“long tail” revenue generators. An Xbox subscriber with a Gold account delivers
$49.98 in revenue per year for Microsoft3. This is in addition to any game sales,
peripheral sales, and downloadable content sales. Given that many game troublemakers are also highly motivated customers, the annual cost of a banned player
could literally be hundreds of dollars. After all, if a player is banned, she is not likely
to buy any more games or anything else from the banning company. This also is
true for Steam, even though it doesn’t charge a subscription. If banning is effective,
a player is highly unlikely to buy additional games.
Chapter 23 To Ban or Not To Ban? Punishing Wayward Players
The scenario is different when dealing with players who have stolen or pirated
goods. Perhaps. If the player is effectively banned, no additional revenue
will be earned and, if anything, the ban may encourage further piracy (see Part II).
Another interesting question comes from games with purchased virtual assets
or MMOs where players have developed their characters over an extended period
of time. Players do feel substantial ownership of these characters and items. Will
game companies continue to be allowed to ban players in these circumstances?
In China, Shanda Interactive was sued for 11,000 RMB (around $1,600) because the company banned a player. In an earlier case, The9 was forced to restore a
banned player’s account and items and even pay court costs4. China has been at the
forefront of cases legitimizing ownership of virtual items in games. Most game
companies in the US have tried to avoid addressing this issue in court for fear of
establishing precedent. There is real risk that at some point in time, courts will
decide that game items have value, which seriously restricts potential punishments
(or may require the use of third-party arbitration).
There are many potential ways to punish wayward players. The goal is really to
deter troublesome activity, whether it is piracy or cheating or griefing or hacking,
or any other misbehavior. Anything that qualifies as an actual crime should be
taken to the authorities.
Although there has been a reluctance to pursue individuals for computer-related crimes, online businesses need to be at the forefront of encouraging the
prosecution of these individuals and groups. This also means that game companies
should build their games to collect evidence that will be useful in identifying criminals, and make sure the evidence is of sufficient quality and accuracy that it
can be used in court. The industry would do itself a service by educating government and law enforcement about the seriousness of computer crime and encouraging vigorous enforcement.
The goal for non-criminal players should be to minimize the punishment as
much as possible. It is costly to deal with player complaints from a customer service
perspective. This applies to both the victims of the various forms of game abuse as
well as the perpetrators. The most important question for the game operator
is whether to terminate the player’s service and if this will actually credibly deter
the player. There are numerous options that a game operator should consider,
including the following:
Game Termination—Ending a game session to avoid further problems. This
can be an issue with tournaments or ranked games, as players will use such mechanisms, as seen in Chapter 20, to manipulate the results of the competition.
Kicking—Simply removing an offending player from a game session. This can
be done by players as well as by game systems. It can be handled either
within the game session or as a first stage for more serious action by the game operator.
Personal Blacklist—Giving players the ability to personally “ban” others so
that they will no longer be able to play together provides an easy way for players
to manage troublesome individuals directly.
Reputation Systems—These systems can help flag troublesome players. However,
the systems are only as effective as the underlying identity system for the game
service and the extent to which players can attack the reputation system itself.
Game Termination with Prejudice—Ending a game session and giving the
offending player a loss or penalty. Care should be taken so that players cannot
use the “with prejudice” system to grief other players.
Slow Down—Reducing the rate at which the player can play or benefit from
playing the game or games at the game service.
Suspension—Denying the player partial or full use of the game or game service.
Probation—This option can be used either alone or in conjunction with other
penalties as a warning or notification of detected abuse with no specific penalty.
Instead, the player is “put on notice” that she has misbehaved and further
abusive actions will be dealt with appropriately.
Status Penalty—Reducing the player’s status, rank, or achievements in the
game service for some time period or even permanently.
Account Reset—The most severe version of a status penalty would be to
completely reset a player’s account and empty it of all earned status, rank,
achievements, and virtual items.
Public Humiliation—Marking the player’s account so that, when viewed by other
players, it is flagged as abusive in the appropriate way. The other option is to
maintain a public list of offenders and their crimes.
Virtual Fine—An interesting option, for at least gold buying and even gold
farming, is to penalize the account by creating a virtual financial penalty. If a
player bought 100 gold, penalize her 300 gold.
Real Fine—A player could have her account suspended until some actual payment is made, either to the game operator or to the offended individual or group.
This mechanism may be effective as long as the penalty is such that it is preferable to pay it rather than to quit or start over.
Invisibility—Some game services are very loose communities. In these environments, it may be effective to penalize a player by making her invisible to
others for many services, like matchmaking. Such players could initiate contact
with others, but as a default they would not be presented for matchmaking or
other services that expand social networks.
Abuser Grouping/Segregation/Exile—When an online game or social network
has identified a player as some type of troublemaker, the system can preferentially or exclusively group the troublemakers together. This is more effective
with larger communities, as troublesome players may not notice that they have
been isolated from the main population.
Full or Partial Banning—Banning probably needs to be more than just a user
account. In order for banning to work, the game service must be able to reliably
identify the offender. For game consoles and PCs, the ban may extend to the
equipment, an IP address or range, a physical address, a credit card number,
and so on.
Game Company Blacklist—As noted much earlier in the discussion of the rich
interaction system (see Chapter 9), the larger and more extensive the system,
the more incentive that a player has to participate in it legitimately. When the
game operator decides to remove a player, credibility is complemented by
range. If a player is at risk of experiencing a serious penalty, far beyond just a
single game, she may be less likely to cause trouble.
Game Industry Blacklist—The casino industry is somewhat notorious for its
blacklist of known cheats. The regular game industry could put a similar service
in place to ban real troublemakers from all online gaming. This would require
addressing some legal issues, but it may be possible for game companies to
collaborate to truly ban certain individuals.
Arbitration—Today, game companies are judge, jury, and executioner.
However, as games grow and players become more vested in their gaming
experience, a player who has been banned or otherwise punished may take
legal action to reverse her penalty, as noted in the Chinese virtual property
cases. By incorporating an arbitration clause in the game service’s terms of
service, game operators may be able to avoid such appeals going to court. An
arbitration system by a neutral third party may also minimize customer complaints about player penalties.
Trial by Jury—Another option for meting out justice in a game is to use the
players themselves as the judicial system. Depending on the nature of the game
and the crime, it may be possible to select a random “jury of your peers” to
determine guilt and assess penalties. This adds legitimacy to the process and has
the added benefit of reducing costs for the game developer.
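Several of these options can be combined into a graduated response. As a rough illustration, the following sketch (hypothetical sanction names and thresholds, not drawn from any real game service) escalates penalties with repeat offenses and applies the 3-to-1 virtual fine for gold buying described above:

```python
# A hedged sketch of a sanction-escalation policy: hypothetical thresholds,
# not a real game service's rules. Repeat offenses map to harsher penalties,
# and gold buying draws a 3x virtual fine.

SANCTIONS = [
    (1, "probation"),       # first offense: put the player on notice
    (2, "suspension"),      # second: temporary loss of service
    (3, "status_penalty"),  # third: reduce rank or achievements
    (4, "account_reset"),   # fourth: wipe earned status and items
]

def sanction_for(offense_count: int) -> str:
    """Pick the harshest sanction whose threshold the count has reached."""
    chosen = "warning"
    for threshold, name in SANCTIONS:
        if offense_count >= threshold:
            chosen = name
    return chosen

def virtual_fine(gold_bought: int, multiplier: int = 3) -> int:
    """Penalize illicit gold purchases at a multiple of the amount bought."""
    return gold_bought * multiplier

print(sanction_for(1))    # probation
print(sanction_for(5))    # account_reset
print(virtual_fine(100))  # 300
```

A real policy would also weigh the type of offense (griefing versus gold fraud) and the player's lifetime value, as discussed above.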
This is by no means an exhaustive list of player penalties. Game developers can
even integrate penalties into the game experience. For a while, Second Life had a
cornfield where players were sent if they got in trouble with Linden Lab administrators5. Even more thematically apt, the Rome-themed MMO Roma Victor
used virtual crucifixion for punishment6. The only danger with some of these
public, in-game penalties is that certain players are more interested in attention and
so these penalties turn into rewards and achievements. Knight Online penalized
players who were found using exploits by causing them to suffer double damage
from other players and creatures as well as having their character’s level reduced
substantially below the level it was when they were caught7.
The most severe punishment that can be meted out against a player is banning his
or her account; actually banning a person is almost impossible. There are also very
legitimate questions as to whether the game company is actually punishing itself by
removing future revenues. Even worse is the potential for an aggrieved player to sue
the game company to restore her status or service.
Some problems, like gold farming, may not even be amenable to banning as a
penalty. A ban may be considered simply part of the cost of doing business.
A number of games and social networks refer to dropping the “BanHammer”
when they ban a group of participants. This is probably an apt description.
Banning is a rather blunt instrument for punishing troublemakers, and it might not
even deter its targets. Also, the game operator may hit her own “thumb” instead of
the intended miscreants.
The use of lesser punishments is promising. Gaming is an entertainment industry. The goal is to entertain customers and earn revenue from them. It is certainly
important that companies do something to address players’ misbehavior, but the
real goal is to keep them (and as many other players as possible) playing.
1. Microsoft (2008), “Gamerscore Corrections,”
2. Johnathan (2008), “The Old Spice Experience Challenge: Earn Achievement Points to Win Fabulous
3. Microsoft (2008), “Xbox Live Subscription Cards,”
4. S. Fu (2008), “Shanda Gamer Sues for Emotional Damages After Game Account Sealed,”
5. T. Walsh (2006), “Hidden Virtual-World Prison Revealed,”
6. J. Lees (2006), “Virtual Crucifixion Punishes Bad Behaviour Online,”
7. MMORPG (2007), “Recent Headlines for Knight Online: Accounts Penalized for Exploits,”
The Real World
In this part, you’ll find the following topics:
Chapter 24, “Welcome to the Real World”
Chapter 25, “Insider Issues: Code Theft, Data Disclosure, and Fraud”
Chapter 26, “Partner Problems”
Chapter 27, “Money: Real Transactions, Real Risks”
Chapter 28, “More Money: Security, Technical, and Legal Issues”
Chapter 29, “Identity, Anonymity, and Privacy”
Chapter 30, “Protecting Kids from Pedophiles, Stalkers, Cyberbullies,
and Marketeers”
Chapter 31, “Dancing with Gambling: Skill Games, Contests,
Promotions, and Gambling Again”
Chapter 32, “Denial of Service, Disasters, Reliability, Availability, and
Chapter 33, “Scams and Law Enforcement”
Chapter 34, “Operations, Incidents, and Incident Response”
Chapter 35, “Terrorists”
Chapter 36, “Practical Protection”
Welcome to the Real World
Welcome to the real world. After all, games do not exist in isolation: they
are built by companies with employees, partners, and customers. This
part of the book focuses on a number of security issues that affect games
during development and after their deployment into the real world. There are issues
that affect the security and essential health of a game and a game business that are not
traditionally considered “security issues.” I asked a long-time colleague, attorney
Joseph Price, to help address some of the legal issues that can affect a games business,
and I brought in Marcus Eikenberry to discuss some of the problems with money and
payments that can bring any business to its knees.
“The Insider Problem” is widely perceived to be the single largest security issue
for any organization. After all, employees need to be trusted if they are going to be
able to do their jobs and sometimes that trust is misplaced. For game companies,
the three most critical insider problems are code theft, data disclosure (both of
which can affect any game), and fraud against online game services.
Increasingly, games are too big to be built, marketed, and operated solely by
internal company employees. Working with partners raises additional issues. One
especially common problem is that security requirements and responsibilities are
not adequately addressed in the contract between the parties. There are also interesting issues related to the licensing of games to, and from, other companies.
Money is perhaps the oldest “virtual asset” and the economy is the one game
that almost everyone plays with great enthusiasm. I have closely followed e-commerce and the evolution of online services for years, but I have never seen a good
explanation of how payments really work for businesses and what they can mean for
a company. My own experiences in this regard have been fairly painful and eye-opening. Because of the topic’s importance, there are two chapters on money issues.
Chapter 24 Welcome to the Real World
Establishing strong identity has been a theme and a security topic from the very
beginning of this book—identity affects piracy, cheating, griefing, gold farming,
and all of the other topics discussed so far. Knowing that your customers are “who they
say they are” is very powerful, but it is difficult to maintain a good balance between
convenience and strong identity. If it is inconvenient or time-consuming to join a
game, many potential players won’t participate. Conversely, people often behave
very badly when there is no accountability for their actions. Anarchy reigns without a strong identity relationship with your customers.
The market for children’s games and online services has grown explosively in the
past several years. Protecting children is a unique challenge for developers and game
operators because there are laws and expectations for security that do not exist for
players in general. Often, game companies and online service providers avoid children as a market that is “too hard.” This is a bit extreme; I’ll argue that there are
some benefits to the children’s market that do not exist for other consumers.
The popular perception is that only children play games. There is one aspect of
the games industry that the general public does not associate with children: gambling.
There are many ways that game companies can stumble into gambling and as people
innovate with game business models as well as advergames and contests, their legal
risks are growing. Although there is no inherent problem with the gambling business
per se, you don’t want to create a casino or lottery by accident. Governments carefully
regulate all legal gambling businesses and there are severe penalties for companies and
individuals who don’t follow these rules—accidentally or not.
One could not have a security book without addressing some traditional IT
security issues, and a number of them are particularly relevant
to game businesses: availability is a key requirement for any online service, and
denial-of-service attacks threaten it directly. The growth of the IT and IT security
industry has made addressing these concerns much easier than it used to be.
We should not forget that there are villains—the hackers and online criminals
who may attack a game or use your game to prey on your customers. Law enforcement has not been a very effective ally in this fight, and the attitude of game companies toward the security issues in their games may make the situation worse.
Things go wrong. Security incidents will occur. We are often judged by how we
respond to adversity. Most security incidents occur during the operational lifecycle
stage, but developers and security professionals spend very little time focusing on
handling real security incidents when they occur. Although it is nice to think that
security measures will always succeed, it is essential to plan for failure and recovery.
Finally, the topic of terrorism and virtual worlds has gotten the attention of the
mainstream media. One could hardly write a book on security and games without
discussing the issue, at least briefly.
Insider Issues: Code Theft,
Data Disclosure, and Fraud
Employees are in the best position to do damage to any company. They have
legitimate access to a company’s valuable assets. Such insiders are of particular concern for any company whose value is tied to its intellectual property
and online services. Game companies are particularly vulnerable, as their value is
tied to both.
In many cases, insider security issues are inadvertent: caused by negligence or
ignorance or simple laziness. Part of this is an artifact of the history of game development as an informal, garage-based business. Formality, procedures, and controls
are often seen as anathema to the creative souls of game developers. The industry
has changed. Computer games now regularly cost millions of dollars to develop and
can generate tens or hundreds of millions of dollars in revenues. There is a lot that
is worth protecting.
There have been a number of cases where the code for a game has been compromised during the development process. This can result in months-long delays
while the game is reworked. There have also been incidents where a finished game
has been lost or stolen. These cases are particularly costly, as a game’s anti-piracy
system is often incorporated at the end of the production process and therefore
there are no security measures in place to protect the game from being widely pirated.
In contrast, online games are particularly vulnerable to insider fraud. Virtual
items are just entries in a database and can be trivially duplicated by a user with
access to a game’s servers. Insider fraud concerns are as much about preventing the
perception of abuse as reality. The perception of fraud or bias by a game operator
can undermine the service’s reputation with its customers. This perception is one
of the reasons that game companies need to be very careful about allowing employees to play their own games on public servers.
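Because virtual items are just database entries, one modest defensive measure is a routine audit that looks for “impossible” states, such as a unique item serial appearing in more than one inventory. The following is a minimal sketch (illustrative table and column names, not any real game’s schema):

```python
# A minimal sketch of an insider-fraud audit: flag virtual items whose
# supposedly unique serial numbers appear in more than one inventory row.
# Table and column names are illustrative, not any real game's schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE inventory (owner TEXT, item TEXT, serial TEXT)")
con.executemany(
    "INSERT INTO inventory VALUES (?, ?, ?)",
    [
        ("alice", "sword_of_dawn", "S-1001"),
        ("bob",   "sword_of_dawn", "S-1001"),  # duplicated serial: suspect
        ("carol", "healing_potion", "S-2002"),
    ],
)

def duplicated_serials(con: sqlite3.Connection) -> list[tuple[str, int]]:
    """Return (serial, copies) for any serial held by more than one row."""
    rows = con.execute(
        "SELECT serial, COUNT(*) FROM inventory "
        "GROUP BY serial HAVING COUNT(*) > 1"
    )
    return rows.fetchall()

print(duplicated_serials(con))  # [('S-1001', 2)]
```

In practice such checks would run against operational databases on a schedule, and any hits would feed the evidence-collection processes discussed in the previous chapter.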
Chapter 25 Insider Issues: Code Theft, Data Disclosure, and Fraud
There are technical solutions that can help manage insider threats and errors.
However, technology is only a complement to hiring the right people, training
them well, and holding them accountable for their actions.
with J. Price
The process of developing and supporting games has grown steadily more complex.
Localization, audio, and art may all be developed by different studios around the
world. This requires increased formalization of security processes to protect the game.
Code theft has become a notable, and highly preventable, security vulnerability
for the games industry. The “garage engineering” mentality is still too prevalent in
an industry with multi-million dollar development budgets and gross revenues for
individual games over one hundred million dollars. A game’s code-base and art assets
are just too valuable to be left easily accessible on the Internet—especially in an
industry that regularly complains about losing many millions in sales to piracy. Half-Life 2 was compromised by hackers breaking into an employee’s computer from the
Internet during the game’s development, allegedly resulting in a four-month delay1.
There are two sets of costs in cases of code theft or data disclosure to consider.
First, it costs tens of thousands of dollars (if not more) to extend a game’s development by a single month. One month’s delay would probably pay for a substantial
suite of IT security tools and, perhaps, the IT staff to run them. Second, many
major game titles earn tens of millions and, in some cases, more than $100 million
within the first month of launch. The cost of a delay and piracy are both substantial.
Microsoft earned over $300 million in the first week after Halo 3 launched 2. Although
Halo 3 was not the victim of a security leak, Halo 2 3 and Grand Theft Auto 3 4 were
both compromised prior to launch.
There is also the issue of compromising other companies’ code: The physics
engine from Havok was allegedly part of the compromised source code for Half-Life 2
and, as licensed software becomes more common in the game industry, this risk
could grow substantially.
For such major titles, each day of delay could “cost” tens of thousands of dollars in interest alone.
FutureValue (Revenues) = $300 million; // the chosen sample total game revenues
Interest = 5% per year, or .05/365 per day;
The formula for present value is:
PresentValue = FutureValue / (1 + InterestRate)^(number of days); See 5
For the current scenario, this would result in pure interest costs for a delay of:
$41,090 after one day
$287,514 after one week (7 days)
$1,230,263 after one month (30 days)
This very simple model ignores the costs associated with rescheduling other resources, such as marketing, as well as any lost sales due to piracy.
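The interest figures above can be reproduced in a few lines (using the chapter’s sample numbers only; results match the figures above to within a dollar or so of rounding):

```python
# A sketch reproducing the chapter's interest-cost-of-delay figures.
# Sample numbers only ($300M revenues, 5%/yr); not data for any real title.

def delay_cost(future_value: float, annual_rate: float, days: int) -> float:
    """Interest forgone when future_value arrives `days` late.

    PresentValue = FutureValue / (1 + daily_rate) ** days
    """
    daily_rate = annual_rate / 365
    present_value = future_value / (1 + daily_rate) ** days
    return future_value - present_value

revenues = 300_000_000  # the chosen sample total game revenues
for days in (1, 7, 30):
    print(f"{days:3d} day(s): ${delay_cost(revenues, 0.05, days):,.0f}")
```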
Even worse, these problems are largely preventable. Traditional information
security practices such as firewalls, intrusion detection systems, good configuration
management systems, or even isolating high-value systems and data from the
Internet are easy to implement and inexpensive.
A notable complicating factor is the rise of distributed development and outsourcing in the games industry. More sites, more companies, and more people inherently create more risk. In the US there are a variety of criminal laws addressing
code theft and related circumstances6. The specific section of the criminal code, in
the US or elsewhere, that applies depends on the circumstances of the crime. A
common criminal law prohibiting most types of hacking is the US Computer Fraud
and Abuse Act (CFAA). Most of the CFAA’s provisions prohibit unauthorized
“access” to a “protected computer” coupled with other conduct7.
A proactive security strategy is needed to manage these risks. First, do not provide full access to high-value data to everyone. Avoid any connections to public
networks that are not absolutely necessary. If people, or companies, cannot access
data, they cannot compromise it. Second, make people and organizations accountable. Implement tracking and logging mechanisms: If a problem occurs, it should
be possible (and preferably easy) to find the culprits. Fire people. Fine companies.
Publicly. Finally, have a recovery plan in place in case of theft or disaster. Games are
major businesses; they should implement good back-up strategies and have disaster recovery plans in place, as well as have adequate insurance and other measures
to manage business risks8.
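The accountability measures described above depend on logs that troublemakers, including insiders, cannot quietly rewrite. One common technique is a hash-chained audit log, sketched below in simplified form; a real evidentiary system would also need trusted timestamps, key management, and secure storage:

```python
# A simplified sketch of a tamper-evident audit log: each entry's digest
# covers the previous digest, so altering or deleting an early record
# breaks verification of everything after it. Illustrative only.
import hashlib

def append_entry(log: list[dict], actor: str, action: str) -> None:
    prev = log[-1]["digest"] if log else "genesis"
    record = f"{prev}|{actor}|{action}"
    log.append({
        "actor": actor,
        "action": action,
        "digest": hashlib.sha256(record.encode()).hexdigest(),
    })

def verify(log: list[dict]) -> bool:
    prev = "genesis"
    for entry in log:
        record = f"{prev}|{entry['actor']}|{entry['action']}"
        if hashlib.sha256(record.encode()).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True

log: list[dict] = []
append_entry(log, "gm_anna", "granted 500 gold to player_7")
append_entry(log, "gm_bob", "deleted item S-1001")
print(verify(log))          # True
log[0]["action"] = "granted 5 gold to player_7"  # insider edits history
print(verify(log))          # False
```

The design choice here is simply that tampering is detectable, not preventable; the log must still be backed up somewhere the insider cannot reach.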
Hackers and other data intruders are subject to criminal and civil liability.
There are at least 40 federal statutes that can be used to prosecute cybercrime; in
addition, victims can sue under a variety of civil (money damages) theories. For
example, if a hacking victim is attacked by a hacker who works for a company, and
the hack was launched from a company computer (or in some way involved the
company), the victim might sue the company under a range of theories such as
negligence, negligent hiring, or negligent supervision.
Hackers are often difficult to identify or to bring within the jurisdiction of
US courts. Even worse, they are typically what lawyers refer to as judgment proof: They
are not worth suing because they do not have enough money to pay damages. It is
alleged that a disgruntled former employee at Electronic Arts was responsible for
leaking the game Black9 and Ubisoft accidentally posted 2GB of screen shots,
videos, and concept art to a publicly accessible server10. In the first case, there is not
likely much that can be gained from prosecuting the ex-employee and in the latter
case, the company has only itself to blame.
A better target for a lawsuit is an entity that failed to prevent the security
breach, or otherwise covered up an issue causing more damages in the long run.
Ubisoft is suing the company that reproduced disks for its game Assassin’s Creed
and that was allegedly responsible for the game being leaked online. These entities
have deeper pockets, and are sometimes easier targets for plaintiffs11. In general,
courts will impose some form of liability on the person in the best position to
courts will impose some form of liability on the person in the best position to
prevent losses, particularly if they are aware of the security issues. The best legal defense, therefore, is to avoid security breaches. Security bonding and insurance (if
available) may also help. However, most standard insurance policies that companies
have for liability and “errors and omissions” protection are written so as to avoid
or minimize payouts for computer-security related offenses.
Code losses typically occur accidentally or due to the actions of individuals: the
source code from Lineage III was allegedly stolen by a group of company employees and sold to another game company. NCsoft claimed losses of $1 billion12, which
it is hardly likely to ever collect from anyone.
Issues will arise, however, and there are specific laws meant to protect intellectual property and network integrity. Owners of intellectual property (such as game
code) can seek protection through trade secrecy, patents, copyrights, and other legislation tailored to their particular industries. In the DeCSS13 DVD security circumvention tool cases, for example, the DVD Copy Control Association tried to stop
distribution of DeCSS through state trade secret law, whereas Universal and other
studios invoked the DMCA, a federal copyright and anti-circumvention statute.
These techniques only supplement good business strategies and effective technical
measures. It is an open question whether the DVD industry “won” the war against
DeCSS or whether the decreasing prices of DVDs had a more significant impact
on reducing piracy.
Many security problems come from accidental disclosures. One of the simplest
ways to avoid such problems is to physically isolate high-value data and computers.
Although many IT security professionals advocate firewalls, intrusion detection
systems, and such, the cost, effectiveness, and simplicity of physical isolation cannot
be overstated. A hacker cannot compromise a computer that she can’t connect to.
This is especially true as the cost of computers and networking equipment continues to plummet, meaning that it is easier to have some computers dedicated to
development and others with access to the Internet.
Some inconvenience can be a good thing.
Having to change computers or go to a different room to access certain data or
services, such as the Internet, removes the temptation to browse or collect unneeded
data or waste time. There was merit to the earlier physical security systems where
important information was kept in locked file cabinets. Permission was required to
access any information, and usage was tracked (often under the wary eye of an administrator).
I used to work on classified systems, sometimes in a big vault. When we wanted
to use the Internet, we went into a different room and logged in from a public computer. Our real work was physically separate. A collateral benefit of this arrangement was that we had much less gratuitous web surfing by employees. I have had
access to network logs at a number of large commercial and government sites and
the amount of non-work-related web surfing was appalling. Sports, news, shopping,
and a surprising amount of pornography were being “consumed” on company, or
government, time. My business partner and I had to fire an employee for excessive
Internet use. Inappropriate Internet use is a real problem and very costly in dollars
and time.
It is also a huge security risk.
I have had discussions with several individuals at game companies who have
implied that certain staff “need” Internet access. If so, the cheaper, more secure solution is to give them a separate computer. Anything that needs to be brought into
the internal development or operational system is then introduced via “sneaker
net” on physical media, preferably through a software library or configuration
management group. This can have an additional benefit of helping track the source
of material in case of copyright disputes.
The other common, contentious issue is related to remote access. If remote
access is necessary, it should be done via dedicated computers provided by the
company. Under no circumstances should these systems be used for other purposes
(surfing the Internet, playing games, whatever).
Chapter 25 Insider Issues: Code Theft, Data Disclosure, and Fraud
There are good hardware and software security tools to help ensure that only specific machines and individuals
can access a company’s sensitive networks and computers. This sort of segmentation can also be extended to internal networks: programmers can be isolated from
artists, and business and marketing staff can be separated from everything else. Formal build
and test systems should certainly be separated from the general office IT infrastructure, as should anything having to do with money or real operations (see the section
called “Sample Game Operations Architecture” in Chapter 32).
If absolutely necessary, this separation can be implemented via VLANs or other
“virtual” technologies. However, human behavior will often tend to unravel these
convenient isolation technologies.
At the end of the day, management and the company as a whole need to believe
in and “own” the security strategy, not just the IT guys.
There is no worse problem for any organization than a malicious insider. Insiders
are already behind the scenes and underneath all of the protections that you have
built in to protect your game against troublesome players. This is most obvious for
gambling games. Casinos have layers of monitoring to try to stop this problem:
Dealers watch players, pit bosses watch dealers, floor managers watch pit bosses,
and security and regulators watch everyone. Even so, every so often, the lure of easy
money draws in employees. Recently, at the Lakeside Casino in Iowa, a dealer colluded with some mini-baccarat players to alter the cards and help them win at least
$12,000. All were arrested thanks to video surveillance14. Two online poker sites,
AbsolutePoker.com15 and UltimateBet.com16, have been embroiled in cheating
scandals where players, perhaps company employees, used access to the sites’ computers to cheat by seeing the hidden “hole” cards of the other players.
Although the stakes are particularly high for online gambling, other games
have had notable insider fraud problems. A vice president at Shanda Interactive and
two accomplices were found guilty of fraudulently creating and selling virtual items
for the MMO Legend of Mir II17. The group earned 2 million Yuan (over $254,000).
As discussed in Chapter 22 on gold farming, there is a large market for illicit virtual
goods and fraud is particularly tempting for company insiders. There have been
many similar incidents reported at other game companies and, no doubt, quite a
number that were either handled privately or remain undetected.
Insiders can also cause other problems. A Halo 2 developer “for fun” added a
picture of his behind into the Vista version of the game’s map editor18. Although this
may seem humorous, Microsoft had to re-label all of the copies of the game with the
“Partial Nudity” ESRB label, which delayed the product’s launch by one week.
There are other potential problems. Identity theft (see Chapter 29) and payment fraud (see Chapters 27 and 28) as well as other forms of fraud are often easier
within online services, especially those where security concerns were not a core
part of the design.
Online games are no longer a “garage” business. MMOs cost millions to tens of millions of dollars to develop and are designed with the goal of generating many more
millions in revenue (wild independents excepted). Although game developers love
their games, the adverse consequences of developers cheating at their own games so
far outweigh any game-design benefits that the practice simply needs to stop. In 2007, CCP Games,
developer, publisher, and operator of the science fiction MMO, EVE Online, found
itself at the center of a controversy over the disclosure that employees of the company had been cheating to help their corporation (a “team” of players in EVE
Online)19. This was not the first time such accusations had been levied against CCP
Games employees, but the scale of this incident was quite embarrassing for the
company. Fortunately for CCP Games, the incident has had little long-term impact.
Game integrity is paramount.
There is nothing more important to business than the trust of your customers,
especially if everything that you provide is virtual. I have heard (repeatedly) the argument that “Developers need to play the actual game to really be able to support
it.” Developers need to find another way. The consequences are just too dire and
the temptation is too great. In 2001, the McDonald’s Monopoly promotion (an in-store ticket-based promotion) was hit by a scandal when it was discovered that an
employee of the marketing firm that ran the promotion had been secretly giving
winning tickets to his friends and family starting in 1996. That fraud earned the
conspirators $13 million and some serious jail time and cost the marketing firm,
Simon Worldwide, its contracts with McDonald’s and Philip Morris20.
It is very hard to estimate the real costs of these types of security incidents. For
subscription games, some players may simply not renew; for a free-to-play game,
they may simply stop buying as many items. Suppose an MMO has 100,000 subscribers and charges $10 per month. Further, let’s assume a developer (or group of
developers) cheats or abuses the game for their own advantage and thereby causes
1 percent of the subscribers to cancel immediately. Then, the immediate cost of
the security incident would be:
1% x 100,000 x $10/month = $10,000/month, or $120,000 for Year 1
In a subscription game, suppose this increases the chance that players don’t
renew from 20 to 25 percent (again simplifying the subscription model so that
everyone leaves at once and had just renewed... the most favorable model from a
revenue perspective). The increased loss of subscribers would be:
5% x 100,000 = 5,000 lost subscribers after 1 year
Therefore, the Year 2 loss would be an additional:
5,000 x $10/month = $50,000/month or $600,000 in Year 2
(in addition to the $120,000 from those who left immediately)
So, the Year 2 losses would be $720,000.
One should also model the opportunity cost of new subscribers who decide not
to join the game because of their concerns about perceived cheating. This is harder
to model. There are players who decide to enroll “for free” because of the recommendation of their friends and then there are those who enroll because of marketing. Both may be reduced in the wake of any cheating scandal.
Assume 10 percent of players ordinarily would recommend the game to their
friends and this number is reduced by 10 percent (meaning the recommendation
rate is only 9 percent). However, the total population of players has been reduced
by the subscribers already lost, so take the remaining population as 95,000.
Therefore, the additional lost subscribers (for Year 2) total:
95,000 x 10% x 10% = 950... costing an additional $114,000 in Year 2
Finally, in the wake of the bad publicity, any marketing to “cold” prospects
would be less effective, perhaps by 20 percent. One can model this in terms of increased marketing costs to maintain population or by reduced additional revenues;
either way, it is expensive.
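The back-of-the-envelope model above can be sketched in a few lines of code. All figures are the chapter’s illustrative assumptions (100,000 subscribers at $10 per month, 1 percent immediate churn, which matches the stated $10,000-per-month result), not real data:

```python
# Rough revenue-impact model for an insider-cheating incident, using the
# chapter's illustrative numbers. Not a forecast; an arithmetic sketch.
SUBSCRIBERS = 100_000
FEE = 10          # dollars per subscriber per month
MONTHS = 12

# Immediate cancellations: 1% of subscribers leave at once.
immediate_leavers = int(SUBSCRIBERS * 0.01)              # 1,000 players
year1_loss = immediate_leavers * FEE * MONTHS            # $120,000

# Year 2: non-renewal rate rises from 20% to 25% -> 5% extra churn.
extra_churn = int(SUBSCRIBERS * 0.05)                    # 5,000 players
year2_churn_loss = extra_churn * FEE * MONTHS            # $600,000

# Year 2: referral rate drops from 10% to 9% of the remaining 95,000 players.
remaining = 95_000
lost_referrals = int(remaining * 0.10 * 0.10)            # 950 players
year2_referral_loss = lost_referrals * FEE * MONTHS      # $114,000

total_year2 = year1_loss + year2_churn_loss + year2_referral_loss
print(year1_loss, year2_churn_loss, year2_referral_loss, total_year2)
```

Any of the rates can be swapped for a studio’s own churn and referral data; the structure (immediate loss, increased churn, lost referrals) is what matters.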
There are many ways to model the consequences of such incidents. Although
the immediate cost of an insider scandal may not be that great, even using fairly
conservative numbers for a small MMO, one could easily suffer million-dollar
losses in the second year for these types of incidents. More severe incidents that are
handled poorly will drive these costs substantially higher. Free-to-play games that
depend on players making virtual asset purchases are likely to be even more vulnerable, because they do not have the inertia of ongoing, and often automatic, subscription renewals to maintain revenues.
Also, as games mature and move to a steady population, an incident of this sort
can tip the game into a death spiral where lost subscribers outnumber new players
and drive the game towards collapse.
The very features that cause problems with developers playing their own games and
insider fraud are necessary to the successful operation of an online game.
Developers need to be able to create items and exchange them. They need to be able
to fully test the game play experience in as realistic an environment as possible.
Developers also need to be able to modify a game’s code and its databases. In order
to allow payments to be made, customer data needs to be stored somewhere and be accessible.
Because these functions are required, they need to be implemented in a secure
fashion. In this case “security” means that they are only used when, where, and how
they are intended to be used.
The first level of control is privileging and (application) user roles. Game
operators and customer support staff are separated based on experience and trust,
with senior personnel given more capabilities than junior staff. These capabilities
may be based on job function or individual needs or experience. The main benefit
of using a person’s job for privilege control is that it is typically easier to manage.
The security industry also has a nice term for it—Role-Based Access Control
(RBAC), in contrast with Identity-Based Access Control (IBAC).
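The RBAC idea can be sketched in a few lines. The role names and capabilities below are hypothetical examples, not drawn from any particular game’s support tooling:

```python
# Minimal role-based access control sketch. A real system would back this
# with persistent storage and audit logging of every privileged action.
ROLE_CAPABILITIES = {
    "junior_support": {"view_account", "reset_password"},
    "senior_support": {"view_account", "reset_password", "restore_item"},
    "game_operator":  {"view_account", "restore_item", "create_item",
                       "modify_game_parameters"},
}

def is_permitted(role: str, capability: str) -> bool:
    """Check a capability against the user's role, not their identity (RBAC)."""
    return capability in ROLE_CAPABILITIES.get(role, set())
```

The management benefit shows up here: promoting someone is a single role change, rather than editing per-person permission lists as IBAC would require.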
The other general tactic is to isolate or split functions or operations from each
other. The most familiar example is that a business will require two signatures
when writing a large check. This strategy can be very effective because multiple
individuals need to collude for the system to fail. The notion of a signature/
counter-signature system can be easily implemented for any critical function. Fully
automated functions in software can even be split in this manner to protect against
abuse by a single individual.
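A software counter-signature gate might look like the following sketch; the approver names and the two-approval threshold are illustrative assumptions:

```python
# Dual-authorization ("two-signature") sketch: a sensitive action executes
# only when two *distinct* authorized individuals have approved it, so a
# single insider cannot act alone.
AUTHORIZED_APPROVERS = {"alice", "bob", "carol"}

def execute_with_countersignature(action, approvals):
    """Run `action` only if two different authorized people signed off."""
    valid = {a for a in approvals if a in AUTHORIZED_APPROVERS}
    if len(valid) < 2:
        raise PermissionError("two distinct authorized approvals required")
    return action()

# Usage: a large payment needs both a signature and a counter-signature.
result = execute_with_countersignature(lambda: "check issued",
                                       ["alice", "bob"])
```

The same pattern applies to any critical function: item creation, parameter changes, or refunds over a threshold.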
There are a number of additional methods that can be used to help address insider security issues:
Outside Investigations—If anything goes wrong, use an independent, outside
investigator to ensure credibility of the results. This is a standard damage control tactic in other industries and it is equally applicable to games.
Controlled Key Item Creation—High-value items, like the blueprints
that were the problem for EVE Online as well as the rare items from Legend of
Mir II, should probably only be created with the authorization of more than
one developer/game master. This would make it notably more difficult for a
single individual to abuse the system.
Separate Server/World for Developers, Friends, and Families—Set up a separate server or virtual world for developers and their friends and families. This
allows them to play, but removes any appearance of impropriety related to any
benefits in the public game.
No Rewards for Developers, Friends, and Families—If the game has any incentives or awards that have any real value, no employees, associates, or friends
or families should be eligible to win or earn them.
Key Item Logging—High-value items should probably have a life history that
is logged so that their ownership can be tracked back to their creation.
Executive and Oversight Alerts and Reports—Any activities that could alter
or disrupt the game—item creation, game parameter alteration, and so on—
should be regularly tracked and reported to both game operations executives
and whatever independent oversight system is used to ensure game integrity.
Strong Configuration Management/Split Teams—Developers should not
have access to the live system and, conversely, the live team and testing personnel should not have access to the developmental code base. Careful configuration control needs to be in place to ensure the integrity of the code.
Employee Game Logging—Although one can debate the merits of developers
playing the live game endlessly, it should certainly be clear that developer
accounts should be flagged and logged at a much deeper level than regular
players. This may include the use of a distinct client version. It should go without saying that developers should not be playing the game from a development
system workstation.
Independent Logging System—The system logs and auditing systems should
be truly independent of the regular game play and data storage servers. It would
be best that the auditing systems be developed and maintained by a different team.
Take Game Integrity Seriously—An MMO’s lifeblood is the integrity of its game
operation. This is a multi-million-dollar business. With over 120,000 subscribers paying $15 per month, any damage to the game’s credibility or out-and-out corruption could literally break the bank. The consequences for a failure of
game integrity could be fatal.
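Two of the controls above, controlled key-item creation and key-item logging, combine naturally. The following sketch uses hypothetical item names and fields; a production system would write the history to the independent logging system described above:

```python
# Sketch: high-value item creation requires two distinct authorizers, and
# every ownership change is appended to a life-history log so provenance
# can be traced back to creation.
import time

def create_key_item(name, authorizers):
    """Create a high-value item only with dual authorization."""
    if len(set(authorizers)) < 2:
        raise PermissionError("key items require two distinct authorizers")
    return {
        "name": name,
        "owner": None,
        "history": [("created", tuple(sorted(set(authorizers))), time.time())],
    }

def transfer(item, new_owner):
    """Record every ownership change in the item's audit trail."""
    item["history"].append(("transferred_to", new_owner, time.time()))
    item["owner"] = new_owner

item = create_key_item("rare_blueprint", ["dev_a", "gm_b"])
transfer(item, "player_123")
```

With this structure, an oversight report is just a query over the history entries, and a single rogue developer cannot mint items silently.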
Insider problems are a particular challenge because they require the developer
to look at her own team with suspicion. Business processes need to be understood
and shaped to reduce opportunities for fraud or abuse. Logging (with active review
of the audit logs) is necessary to provide credible deterrence and support for legal
action. Often, little time is spent figuring out how the back-end systems (user management, payments, customer service, game master systems, and so on) of a game
service will operate. As a result, all employees are allowed to do everything, just to
keep the game running. Careful back-end system design will likely have substantial
benefits far beyond addressing insider problems. It will likely improve supportability and reduce operational costs.
1. C. Morris (2003), “Playable Version of Half-Life 2 Stolen,”
2. Microsoft (2007), “Global Entertainment Phenomenon ‘Halo 3’ Records More Than $300 Million in
First-Week Sales Worldwide,”
3. D. Becker (2004), “Stolen ‘Halo 2’ Hits Pirate Sites,”
4. R. Fahey (2004), “Grand Theft Auto San Andreas Leaked by Pirates,”
5. Wikipedia (2008), “Time Value of Money,”
6. See, for example, 18 U.S.C. §§ 1029 (fraud and related activity in connection with access devices), 1030
(fraud and related activity in connection with computers), 1343 (wire fraud), 1831 (economic espionage),
1832 (trade secrets), and 2701 (stored wire and electronic communications and transactional records access)
7. 18 U.S.C. § 1030
8. Insurance Journal (2007), “Fireman’s Fund Launches Product for Video Gaming Industry,”
9. T. Spot (2006), “Angry Ex-EA Staffer Leaks Black?,”
10. K. Kelly (2006), “Ubisoft ‘Accidentally’ Leaks Tons of Assets,”
11. B. Sinclair (2008), “Ubisoft Sues Over Assassin’s Creed leak,”
12. The Chosun Ilbo (2007), “Game Programmers Suspected of Stealing Code,”
13. Wikipedia (2008), “DeCSS,”
14. Lynda (2007), “Arrests Made at Terrible’s Lakeside Casino,”
15. S. Levitt (2007), “The Absolute Poker Cheating Scandal Blown Wide Open,”
16. M. Brunker (2008), “Poker Site Cheating Plot a High-Stakes Whodunit,”
17. C. Li (2007), “Three Jailed in Online Gaming Scam,”
18. Edge (2007), “Nudity the Cause for Halo 2 Vista Delay,”
19. S. Jennings (2007), “Eve Blows Up. Again,”
20. P. Patsuris (2001), “Simon Marketing Gets Fried in McScandal,”
Partner Problems
with J. Price
“Si vis pacem, para bellum” (“If you want peace, prepare for war”)1
Game companies are rapidly moving from doing everything in-house to
contracting or outsourcing a substantial portion of their work. Third-party
game developers build games for publishers, developers outsource art asset
creation, and publishers license games internationally or hire foreign firms for
localization. The foundation of these relationships is contracts and one area that is
often neglected in contracts is security.
If you want a good business relationship, write contracts
that assume everything will go wrong.
Prior to working with the game industry, I spent many years working for the
U.S. government with IT contractors and for contractors working with the government and with other companies. Although my work was technical, I rapidly learned
that my success was highly dependent on contracts. If the contract was well-structured
and thorough, we rarely had to resort to it. If it was not well-structured, life could
become a nightmare. My co-author for this chapter, Joseph Price, is an attorney
who lives and breathes contracts, and often gets paid to deal with the consequences
of poor contracts.
In some sense, contracts are very much like software. The standard rule of thumb
is that 80 percent of software code is written to handle errors. Most of the terms in
a contract are there to handle when things go wrong. If implemented properly, a
good contract will keep you out of court and, hopefully, help keep the project on
track. Save some money while writing a contract and you may be in court for years.
The common answer to almost all of the security issues that we discuss in this
chapter is a good contract and structuring the business relationship so that a good
contract can be created and its results measured. A contract is the security tool of
last resort and is a supplement to good security design and business practices.
“Security,” like quality, is quite tricky from a contract perspective because it
is so hard to measure. You can’t write security into a contract; what you can do is
carefully define what you are doing for protection and, if possible, create measures
for accountability when things go wrong.
Security is of particular concern to publishers contracting for games from
third-party developers and game developers that incorporate outsourced services
or products. Whether it is the compromise of the game code during development
or the customer support costs associated with game exploits, security failures are
exacerbated by the nature of third-party relationships in the game industry.
Outsourced products and services can introduce additional security risks. Finally,
online games have a unique challenge with both official and unofficial community
sites that are associated with a game, but not owned or operated by the game’s publisher. These sites are often the source of malicious code, phishing attacks, and
other security problems that target game players.
Third-party developers are motivated to get their games out as quickly and as inexpensively as possible and are typically held accountable for delivering a “great
game.” Because the bulk of the fees that these developers earn are typically paid on completion of a title, their motivation to address lifecycle and operational issues like
security and other support services is low (quite reasonably). Publishers and developers need to cooperate to ensure that security issues are adequately addressed
during the development process.
The computer game industry is changing: A flashy box and fancy graphics will
not guarantee success. Games no longer have a 30-day sales cycle, but are moving
to a long-term relationship between publisher and player via an online service;
game-development licensing needs to catch up with these changes. Although the
only security issue that used to be of concern was fighting piracy, now, cheating,
griefing, and other issues are comparably important.
The game publisher needs to provide an ongoing operational infrastructure to
support a game. Customer support, server operations, and ongoing engineering,
patching, and maintenance have changed games into services. The nature of the
business relationship and deliverables between a game developer and publisher
must change to match this model. Solid engineering, good design, infrastructure
costs, lifecycle costing, and management now matter. Publishers must rethink their
relationship and contracts with developers so that contracts match business needs.
Factoring security issues into the game-development process is one way to reduce lifecycle costs and risks. Game publishers should engage a layered approach
when working with developers and end users. Security risks (which translate into
costs) should be considered at each stage of game development. The later security
is included in the game-development process, the more expensive it is going to be,
particularly considering piracy and code theft that may occur even before the game
is released. It is the publisher who will ultimately pay for security failures one way
or another.
Today, for games from third-party developers, the software is typically provided “as is,” in terms of security. During my discussions with a number of developers, many have said “security is the responsibility of the publisher.” For console
games, developers and publishers have often relied on the security provided by the
platform, a strategy that has not had good results so far. So, with everyone placing
the security responsibility on everybody else, the security “ball” simply gets dropped.
If the publisher insists on a security solution, and is willing to pay, developers
will apply resources to security, just as they do for animation, art, and game play. In
the game contract, the publisher could ensure that its developers agree to appropriate indemnification (acceptance or transfer of liability) for security issues. If this is
not possible, the publisher could insist that the developer deploy security that the
publisher has rights to and prefers, as part of the game (this is the most common
approach today for anti-piracy technology). In other words, the developer should
consider incorporating its own security solutions that the developer believes are good
enough to warrant tight indemnification or give the publisher the option to require
the publisher’s preferred security solution to be incorporated as part of the game.
Developers should also consider making security more of a priority as more
games are adding downloadable content, virtual asset sales, and other monetization
strategies. These may result in substantially larger royalties for the developers, but
only if the game is successful.
Online games are increasingly seen as good licensing opportunities. Potential game
publishers or game operators get a completed title with the potential for a prompt
return of their investment and the game developers can earn substantial royalties in
numerous regions. However, licensees are obliged to operate and support these
games and security problems can turn a potentially profitable license into a costly
customer support and marketing nightmare.
Griefing alone, to say nothing of other security problems, can consume 25 percent of
monthly service costs2 and suck the profits out of a licensed online service. In Sony
Online Entertainment’s report on its experiment with real money transactions
(RMT), SOE found reductions in overall customer support costs of 30 percent simply by supporting these transactions internally3.
A key part of the due diligence process for licensing any game should be to
determine support costs and risks. Games developed in sophisticated markets like
China, Korea, or the US are likely to be thoroughly “tested” by griefers, cheaters,
and hackers. It is highly worthwhile to research the “security state of the game.”
Most games are not going to be wildly successful. Good control on operational
costs including griefing, cheating, gold farming, and RMT may be the difference
between success and failure.
Another question that is important when licensing games is accountability for
security, which can affect both parties. The game developer may be concerned that
the licensee does enough to protect the game source and executable code that they
provide, while the licensee may be rightly concerned about who will fix security
problems and how promptly (and, of course, who will pay to fix those problems).
NCsoft’s Lineage 2 server code was compromised, apparently in China. Later, this
turned up in several places, including a pirate service in the US that was eventually
shut down by the FBI but potentially cost NCsoft millions in subscriber revenues4.
Similarly, Cryptic Studios was the victim of a hacker who compromised the server
code for City of Heroes as well as knocking several game servers offline5.
Security and technical support problems turned into a contentious dispute
between Korean game developer, MGame, and the licensee of its MMO Yulgang,
CDC Games, in China6. Because of the problems, CDC first stopped paying licensing fees and MGame revoked the game license. Eventually the two companies settled,
although one suspects that lawyers were the sole beneficiaries in the dispute.
In Korea, a number of game companies outsource security to specialty security
service providers, most notably Inca Internet Co. (GameGuard) and AhnLab
(HackShield). In the US, the only major game security service provider is Even
Balance with its Punkbuster security service. Many companies who license games
from Korean developers are required to negotiate a separate license with the security provider. This is almost certainly a poor way to structure a contract, especially
from the perspective of a licensee. When licensing a game that uses such a service,
it is probably preferable to include the security service in the prime contract with the
game company to have unified responsibility for security (and, since the total
payments are larger, this gives the licensee a bigger “stick” to push for better security support).
How does the new game operator control this risk? Here are some options:
If the game has been previously deployed, the licensee should do extensive due
diligence to determine whether there are known problems with the game. The
licensee should also audit the game’s forums and internal trouble tickets (and
the time that the company takes to close those tickets) to determine the pace of
security problems. This is critical for being able to estimate support costs.
If the game company is using a security service or product, this should be
bundled with the license cost and the technical security support responsibility
should stay with the game developer. A new game operator should not have to
initiate a contract for security support.
In the terms and conditions for the contract, a maximum pace for security
incident trouble tickets should be budgeted with a clear escalation process if serious weaknesses are not corrected. If this pace is exceeded, the game developer
should have to pay penalties or give discounts to the licensee. Also, the developer should be obliged to promptly report new security problems found in
other regions to the licensee.
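The trouble-ticket pace term can be made concrete in the contract. The thresholds and penalty schedule below are invented for illustration; actual numbers would come from the due-diligence estimate of support costs:

```python
# Illustrative SLA-style penalty calculation for security trouble tickets.
# MAX_TICKETS_PER_MONTH and the per-ticket penalty are hypothetical
# contract terms, not figures from the text.
MAX_TICKETS_PER_MONTH = 20       # budgeted pace of security incidents
PENALTY_PER_EXCESS_TICKET = 500  # dollars credited to the licensee

def monthly_penalty(ticket_count: int) -> int:
    """Penalty owed by the developer when the budgeted pace is exceeded."""
    excess = max(0, ticket_count - MAX_TICKETS_PER_MONTH)
    return excess * PENALTY_PER_EXCESS_TICKET
```

Writing the pace and penalty as a formula rather than vague language gives both parties a measurable trigger for the escalation process.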
An interesting collateral issue is the challenge of replacing a licensee. The long
saga of The9 and its battle with Blizzard over the licensing of the Burning Crusade
expansion to World of Warcraft in China is instructive. The companies
worked out their issues, but Blizzard was quite unhappy for a long time with the
quality of service that The9 was providing to World of Warcraft’s Chinese customers7. The termination of an online game license is especially tricky, because
there could be questions about ownership of the customers and any modifications
or localization to the game carried out by the licensee. Although many developers
license their games as an executable without source code, I actually think that it
would be wiser for developers to license or lease pre-configured servers as an appliance (see also sidebar in Chapter 8). This increases control over the licensee and
reduces the effort required for support because the developer has full control over
the game appliance platform.
Games are no longer just sold in boxes at a store. There are electronic distribution
services, online game services, payment processors, game arcade operators, and even
security service providers. The security performance of these companies can have
serious implications for the success of your game. Outages, such as that experienced
by Valve’s Steam online game distribution service8, as well as more conventional
security problems, can affect the revenues of all of the businesses that use these
services. The situation can be complicated further by having many companies,
subcontractors, products, and service providers involved. When there is a security
problem, they will all be pointing their fingers at each other or at you.
If someone is thinking about security, the relevant legal issues are raised just
before a contract for software or services is signed. Typically, however, security
becomes a consideration only after a breach occurs. The French version of Halo 2
was compromised during the manufacturing process9 in 2004, and in 2008 Ubisoft
filed a $10 million lawsuit for breach of contract and negligence against its U.S. disk
manufacturing company for allegedly allowing an employee to compromise
Assassin’s Creed10 (resulting in 700,000 illegal downloads). In a
perfect world, every contingency will be considered when initially negotiating the
contract. In reality, a breach will occur in a way not entirely anticipated, and will be
complicated by everyone’s confusion about what actually happened and further aggravated by split responsibilities between the affected parties and other product and
service providers.
Given these circumstances, it is wise for the licensee and licensor to keep four
issues in mind when licensing software or services:
Licensee Rights—The licensor will want an assurance, and the licensee should
also confirm for its own purposes, that the licensee has the rights to what it is
licensing, including all of the elements it relies upon to deliver the product (for
example, consider sublicenses when relying on other licensed work).
Damages—Consider worst-case scenarios and who you are dealing with; a licensor that cannot (or will not) provide monetary relief should be expected to
provide its product at an appropriately discounted rate, because the licensee will
have to spend money elsewhere in anticipation of a breach (if a security software firm, for example, is a small company with few or no assets, its breach
conditions or warranty might only be as good as its insurance policy, if any,
which should be requested and included as part of the contract).
Warranty—Anything short of an express warranty, created by an affirmative
statement, description, or promise in the license (such as an express warranty
as to the security capabilities of hardware or software), should give a licensee pause.
Breach and Termination—Consider how you can get out of the license agreement, particularly when a breach occurs, and also consider what damages
might be available upon breach.
Reviewing the license in the context of a security breach will likely be a complicated effort unless these issues were considered when the agreement was initially negotiated. There are few laws and
regulations that apply in this area of law. Liability will likely rest with the entity that
is contractually liable, if there is one.
Protecting Games: A Security Handbook for Game Developers and Publishers
If there is a lawsuit, it will need to be crafted based on how the security breach
occurred. Good logs and tracking mechanisms are necessary to even begin such a
suit. These records need to be created, stored, and handled in a manner such that
they can be used in court. It is important to consider forensic issues before an incident occurs. If the license grants rights for software or services “as is,” there are no
guarantees, such as the existence of an express or implied warranty (this is often
buried in a contract paragraph in which all the words are capitalized and extra difficult to read). If there is any available warranty, it will be, at best, in a legal grey
zone; the license may provide a degree of a warranty, complicated by limited
indemnification rights, maximum payout restrictions, and subject to third-party
licenses that no party to the agreement has ever actually reviewed.
Damages and causes of action in court for breach of warranty are generally different than breach of contract. It is common to have exclusive remedies applicable
to contractually promised express warranties if, for example, the hardware, software, system, or service fails to comply with an express warranty. For these reasons,
the breach of contract and breach of express warranty claims are generally treated
as separate claims, even where the express warranty claim arises under a contract
between the parties.
Service providers (for example, ISPs, social networks, and online games) may
find themselves part of a lawsuit involving their customers whether or not they
“should” be included. The quickest way to get out of the lawsuit is to have something clear to give to the court that cannot be attacked by the plaintiff or other
defendants. Service providers will want to take full advantage of a legal immunity
available to them11, and reaffirm that immunity in their service agreement and
other contracts.
The model to adopt is the “conduit.” With the law and typical service contracts on their side, ISPs will escape liability, which will then likely shift to another party.
A slight warning is warranted to those ISPs and other service providers that move
from the conduit role and add functionality. A recent case in California confirming
the broad application of immunity included a footnote indicating that any involvement by the ISP that resembles the actions of a publisher could void the immunity:
Delfino v. Agilent Technologies, Inc., 145 Cal.App.4th 790, 808 n.25
(Ca. App. 6th Dist. 2006)
“We recognize that there is an existing debate concerning whether immunity
under the CDA [Communications Decency Act of 1996] applies equally to
both publishers and distributors of information authored by third parties and
disseminated over the Internet.” (citing Doe v. America Online, Inc., 783
So.2d 1010, 1018-28 (dis. opn. of Lewis, J.) (Fla. 2001)).
Chapter 26 Partner Problems
This case leaves the door open for potential liability (for example, if a service
provider provided security functionality and those mechanisms were to fail).
The contract should be the security measure of last resort. First, design the
business so that it has robust security; second, have clear accountability for any external parties; third, have a solid technical solution; and finally, paper it over as well
as you can with good contracts and licenses.
One of the great things about, and for, the game industry is that
its customers are often fans who set up websites, community sites, and other online
forums. Players set up these sites on their own initiative and can have audiences of
tens of thousands or more.
However, many of these sites are run by amateurs and are easy prey for hackers and online criminals. Some are set up by criminals to lure players in to collect personal information and passwords, or even to download malware onto visitors’ computers.
These sites are totally beyond the control of game companies and the threat is
serious. Thirteen percent of the malware in Asia is targeted towards online games12
(key-loggers and other tools that help break into player accounts). Sometimes online sites do strange things that can create security problems. The Chinese Internet
portal Sina ran an online poll about casual games, which was fine, except that there
were questions in the survey that asked for players’ account names and passwords13.
Improved identity systems, like World of Warcraft’s identity token or systems
that use cell phones to log in, can minimize the potential risks from player errors.
A lot of people have argued strongly for user education as an effective countermeasure against phishing or accidentally installing malware, but a game company can’t
bank on changes in player behavior. One potential option is to include or provide
free security software with a game. Another option is to certify and monitor fan
sites to help them with their website security.
1. Vegetius, “Prepare for War: Latin Quote from Vegetius,”
2. D. Becker (2004),“Inflicting Pain on Griefers,”
3. N. Robischon (2007), “Station Exchange: Year One,”
4. FBI (2007), “Cracking the Code, Online IP Theft Is Not a Game,”
5. B. Crecente (2005), “City of Heroes Hacked,”
6. L. Alexander (2007), “CDC Sues MGame for Security, Tech Support Failures,”
7. S. Burns (2006), “Warcraft Game Makes $10m a Month in China, But Blizzard Expansion Dispute Still
Not Resolved,”
8. M. McWhertor (2006), “God Hates Steam, Too,”
9. D. Becker (2004), “Stolen ‘Halo 2’ Hits Pirate Sites,”
10. B. Sinclair (2008), “Ubisoft Sues Over Assassin’s Creed Leak,”
11. 47 U.S.C. § 230
12. M. Hines (2008), “Online Game Malware Takes Off in June,”
13. DoNews (2008), “Sina Online Poll Asks for Game Account Passwords,”
Money: Real Transactions,
Real Risks
by M. Eikenberry
Author’s Note: Although money can’t buy happiness, lack of money can cause your
business to collapse. Handling payments and fighting financial fraud are
profoundly important issues for online services, especially game companies. The
lack of physical product delivery makes digitally distributed games and online
game services targets for fraud. There are also serious risks associated with payment processors that many individuals, experienced only with the consumer side of
the payment business, are completely unfamiliar with. I asked Marcus Eikenberry,
who has lengthy experience with these types of transactions as a game code and
virtual item reseller, to provide his insights into the payments process and fighting fraud.
I am Marcus Eikenberry and I’m a serial entrepreneur. I make a living dealing in
intangible goods and services within online video games. My companies sell
huge volumes of game registration codes and game time codes, as well as providing anti-fraud solutions for other sellers within these online gaming markets.
I started in 1997 selling virtual items within the game Ultima Online. I noticed
a couple of sales of UO items on eBay and wanted to try my hand at making a sale
there. I took two extra game accounts I had that were about three months old and
put them up on eBay with full details of all the virtual loot that would be included
on them. When I sold the accounts for a combined total of $2,400 I knew I was onto
something. I thought to myself, “I could make a full-time job of this and do very
well.” I have done so ever since.
Now more than 10 years later, I no longer touch in-game goods when it violates
a game’s terms of service. I focus on selling game codes and providing anti-fraud
services for other companies.
After being taken for a lot of money by scammers and thieves over the years, and after learning the hard way how to process payments, I’ve developed methods for some of the more popular payment services that greatly reduce fraud and overhead costs.
You will find information here that will help you improve your own payment processing and anti-fraud practices, and keep more of your profits in your pocket.
Payment processing in its simplest form is getting the money your customer wants
to give you for goods or services into your account. There are several types of payments. I will cover some of the ones that are more popular with video game players.
In North America, credit cards are king. PayPal is also very popular. In Europe,
there is a smaller percentage of people who use credit cards. In Europe, the culture
in many areas is that if you are purchasing with a credit card it is because you do not
have the funds to make the purchase. This may account for their lower use in
Europe and why bank transfers are the preferred payment method. Moneybookers
is a very popular payment method for Europe, as it supports multiple types of bank
transfers as well as credit cards.
Another method that is very popular across the globe is prepaid cards. These
cards can provide credits into your games or subscription time. These are very popular because of their low barrier to entry for purchasing. In the US, you can walk
into any Target store and find 30 or more types of prepaid game cards. In Europe,
the cards can be found in many supermarkets.
If you are selling services or intangible goods that are not delivered to the customer’s physical address, you will not have any seller protection in the event of a
dispute. When accepting payment by credit card, you must accept the fact that you
will lose 99 percent of all disputes. Credit card companies do not track digital goods
delivery. Nor does it appear that they will offer any support for this in the near future.
Depending on your volume of sales, you can expect to pay 1.9 to 2.4 percent,
plus a transaction fee of $0.25 to $0.35 per transaction. Most credit card companies
will not allow you to charge the customer directly for these fees. Additional fees can
be levied on the vendor for chargebacks (Author’s Note: A chargeback is when a
consumer reverses an existing credit card purchase and the vendor is responsible for returning the funds to the credit card company and, ultimately, to the consumer unless the
vendor can successfully dispute the chargeback). Chargeback fees are commonly $10
per disputed charge. You can also be fined heavily if you have a high percentage of
chargebacks. If you have greater than five percent fraud you may need to worry
about these additional costs. The threshold will depend on the merchant service
you are using. If you anticipate higher than five percent fraud, this threshold should be one of the questions you ask when looking for a merchant account.
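To see how these costs compound, here is a rough cost model using the figures above. The function and its defaults are illustrative (the high end of the 1.9 to 2.4 percent range, a $0.30 flat fee, and the $10 chargeback fee from the text), not any processor's actual schedule:

```python
def effective_cost(amount, n_txns, pct_fee=0.024, flat_fee=0.30,
                   chargeback_rate=0.05, chargeback_fee=10.00):
    """Estimate total processing cost for a batch of card transactions
    of equal size, using the illustrative figures from the text."""
    gross = amount * n_txns
    processing = gross * pct_fee + n_txns * flat_fee
    # A chargeback costs the fee *plus* the reversed sale itself.
    chargebacks = n_txns * chargeback_rate * (chargeback_fee + amount)
    return processing + chargebacks

# 1,000 sales of $20 with five percent chargebacks:
cost = effective_cost(20.00, 1000)
```

For 1,000 sales of $20, the five percent chargeback term ($1,500) costs nearly twice the processing fees ($780), which is why the fraud threshold matters so much.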
Credit card companies will commonly do a credit check on you or your company before approving a merchant account. They also will have caps on transaction
volumes in dollars per day, week, or month. These caps can be directly related to
your personal or company income. I have known company owners who have told me horror stories about having great growth in a market, only to have their merchant account held for review when the credit card company started to get nervous about their volumes.
For this reason, I suggest having multiple merchant accounts. Use a round
robin method of cycling through the accounts with each new payment that comes
in. Using a round robin method is helpful for multiple reasons. The first might be
obvious: If you have four merchant accounts, the volume will be split across the
four accounts. Thus you will have fewer chances of hitting your limits. The other
issue is when funds get held by the payment processor. These funds are most commonly held for disputes but sometimes the entire account can be frozen if the credit
card company is nervous for any reason. If you do have multiple accounts and one
of them gets tied up, it will not kill your business. You will still have other accounts
and will still have access to their funds. This is important because if you look in
your merchant contracts you will see that they can hold your funds for far longer
than you can go without them. (Author’s Note: They can hold your funds for months
and you have no way to dispute the delay… or even earn interest on the funds.)
I recommend starting with four different accounts from four separate payment
processors. Even if you have really great credit and very high limits, I still suggest
starting with at least two companies.
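The round robin described above can be sketched in a few lines. This is a minimal in-memory sketch with placeholder account names; a real router would persist state and integrate with each processor's API:

```python
import itertools

class MerchantRouter:
    """Cycle new payments across several merchant accounts (round robin),
    skipping any account that is currently frozen or under review."""

    def __init__(self, accounts):
        self._cycle = itertools.cycle(accounts)
        self._frozen = set()
        self._count = len(accounts)

    def freeze(self, account):
        self._frozen.add(account)

    def next_account(self):
        # At most one full pass over the cycle finds an open account.
        for _ in range(self._count):
            acct = next(self._cycle)
            if acct not in self._frozen:
                return acct
        raise RuntimeError("all merchant accounts are frozen")

router = MerchantRouter(["acct_a", "acct_b", "acct_c", "acct_d"])
first = router.next_account()   # "acct_a"
router.freeze("acct_b")
second = router.next_account()  # "acct_c" -- the frozen account is skipped
```

If one processor holds your funds, you simply freeze that slot and the other accounts keep the business running.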
PayPal is very popular in North America. It is also available in many European
countries. PayPal accounts are funded in three different ways. The first is that one
PayPal account holder can send funds to another. Those funds are held within the
account. The accounts can also be credited from bank accounts and/or credit cards.
Fees for PayPal are 1.9 to 2.9 percent of each transaction, plus a $0.30 fee. If you
are accepting funds from all over the world, there can also be a currency conversion
fee of up to 1 percent on top of this. Other fees that can appear are credit card
chargeback fees of $10. Other disputes that are not credit card funded do not have
extra fees.
Depending on your volume and your credit score, PayPal may also elect to insure your transactions against your going out of business. In most cases when you
reach volumes of more than a quarter million dollars per month you will be subject
to this, and PayPal will place a temporary hold that is a percentage of your daily sales.
In my dealings with PayPal, we have had a “hold” like this placed on one of our
high volume accounts. The percentage that they told us they would “hold” was only
1 percent lower than our previous year’s profit percentage. I found this to be a
threatening request as they stated that they were doing it to insure the “good
health” of our company. I did renegotiate the percentage to something that we felt
was much more acceptable. So if you get hit with this by any company, do attempt
to negotiate your fees.
PayPal originally started this process, placing a rolling hold on our funds, when my credit score dropped (this happened when we launched a new company and financed part of its launch). PayPal, as well as merchant account providers, will
check your credit every six months or so, and if your score gets too low for the
volume of sales you are doing, you may be faced with one of these holds. As an
additional note, my credit has returned to where it was prior to the holds, but the
holds are still in place. I would hope that they would release these funds but who
knows. Now that this is in place they may just keep it for the life of the account.
PayPal offers a wide range of features that are good for merchants. They offer
small companies easy development of payment buttons for their websites as well as
automated communications with their online stores for big companies. They also
have fairly good multi-user accounts where you can give access to staff without
giving them full control over the account. Many other payment methods have no
support for this, which means you have to give the master login to your staff to
process transactions. (Author’s Note: This kind of single login system creates serious
risks of insider fraud that is difficult to detect, as there is no individual accountability
for actions.)
You can also sign up for the PayPal money market account, which tends to pay
higher interest than most banks on the daily funds within your account (currently,
the account pays up to 6 percent). For most companies there will be no reason not to participate; even if you sweep all of the funds out of your account nightly, you will
still earn interest. Make your funds work for you. (If you get any funds “held” on a
rolling 90-day cycle, do insist that those funds be included. Because you cannot use
those funds for anything else, you should demand it; I did, and PayPal has included them.)
PayPal offers multiple levels of account managers. As you increase in volume
your account gets transferred to higher and higher levels. The highest levels will give
you direct access to your account manager 24/7. To reach the highest level, you
need to be doing an average of over $500,000 per month in a rolling three-month
average. Lower-level accounts will be supported only during standard business hours.
PayPal, in my opinion, is a very good payment method.
Moneybookers supports many different bank transfer methods, as well as accepting
credit cards. In Europe, Moneybookers is more popular than PayPal. The European
method of bank transfers has a much lower cost than it does for us in the US. As
Europeans tend to like to use bank transfers more, Moneybookers is providing a
service that is needed and wanted.
Moneybookers takes a different approach to fraud. They guarantee no chargebacks or reversals on any payments. There is, of course, a catch to this. If you start
accepting too much fraud they will close your account. They use this “no fraud” as
a big selling point. You still have to do your due diligence to screen funds coming in.
Moneybookers has one feature that I feel carries a high risk: they do not have multi-user accounts. This requires you to give out your master login and password to any
staff that needs to perform functions within the account. Moneybookers is working
on allowing multiple logins and permissions levels for the future. But as of writing
this, that option is not yet available.
Pre-paid cards are becoming more and more popular. By moving to a code redemption system, you can allow others to resell your products and use their existing payment methods and infrastructure. This also allows you to let the merchant
take all of the risk for the customers they sell to. The complexity of implementing a code redemption system is much less than if you were to implement several payment methods. The downside to this system is that resellers will need to be able to earn
their normal retail markup rate.
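A code redemption system can be very small at its core. This sketch uses in-memory storage and a hypothetical product ID; it shows only the issue-once/burn-once flow a publisher would need behind the resellers:

```python
import secrets

issued = {}       # code -> product id
redeemed = set()  # codes that have already been burned

def issue_code(product_id):
    """Generate a random code for a product; the reseller sells the code."""
    code = secrets.token_hex(8).upper()
    issued[code] = product_id
    return code

def redeem(code):
    """Burn a code exactly once; return the product it grants, or None."""
    if code not in issued or code in redeemed:
        return None          # unknown or already used
    redeemed.add(code)
    return issued[code]

code = issue_code("game-time-60-days")
assert redeem(code) == "game-time-60-days"
assert redeem(code) is None   # a code can only be redeemed once
```

The publisher never sees the buyer's payment method; the reseller's existing infrastructure carries the fraud risk.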
Many carriers offer billing to telephones or cell phones as an option. There are several companies out there that can help with this service. This method has two major
pitfalls. The first is that it is not uncommon for the carrier to charge 50 percent of the transaction amount as their fee (in the US). The second is that the amount your customers can purchase via this method may be limited to $100 or less. This is not a very profitable method unless you have very high margins.
Are more payment methods better? The simple answer is yes. I have seen a direct increase in sales with several stores once I added additional payment methods.
I would suggest the following methods for tracking what is needed and wanted:
Know where your customers are coming from. Identify their countries. Take
your top countries and focus on their available payment methods first. Credit
cards are nearly universal but you can also be leaving a lot of money on the
table if you do not include other payment services. I would focus on providing
at least one alternative payment method for each country to start.
Track your payment gateway abandonment rates. To track abandonment, you
will want to monitor when a customer brings a cart of items to checkout and
then fails to make a successful purchase. If you track this by country this can be
a great tool for knowing where to focus your efforts. An example of this is
when I put PayPal into one of our stores and PayPal was the only option. The
store did sales primarily to North America and Europe. We had a payment
gateway loss for North America of 20 percent, but for Europe it was much
closer to 40 percent. This would be an indication that we should look at a more
preferred payment method for our European customers. You will never reach
a 0 percent abandonment rate. But the closer you are, the better you are doing.
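Tracking abandonment by country can be as simple as two counters per region. A minimal sketch (the country codes and the 20/40 percent figures mirror the example above; the function names are my own):

```python
from collections import defaultdict

starts = defaultdict(int)       # carts brought to checkout, per country
completions = defaultdict(int)  # successful payments, per country

def checkout_started(country):
    starts[country] += 1

def payment_completed(country):
    completions[country] += 1

def abandonment_rate(country):
    """Fraction of checkouts that never became a successful payment."""
    if starts[country] == 0:
        return 0.0
    return 1.0 - completions[country] / starts[country]

# Mirroring the numbers in the text: North America abandons 20 percent
# of carts, Europe 40 percent.
for _ in range(10): checkout_started("NA")
for _ in range(8):  payment_completed("NA")
for _ in range(10): checkout_started("EU")
for _ in range(6):  payment_completed("EU")
```

A persistently higher rate for one region is the signal to add a locally preferred payment method there.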
Some of the more popular international games are supporting as many as 200
payment methods. If the demand is there, you may as well take advantage of it. For
getting started I suggest going with companies that can offer multiple payment
types. PayPal and Moneybookers both offer a wide range of options. As you get
more sophisticated with your payment acceptance, you can move to work directly
with more region-specific methods. You will save on your processing charges but
you will also increase complexity.
Let’s talk a little bit about how PayPal works. Orders are funded through PayPal funds (funds in the actual PayPal account), which typically come from a bank account, a credit card, or money received from someone else through a transfer. Another method is e-check. This is when the payment is funded by an attached
bank account only. E-checks can bounce. Do not trust them until they have cleared.
Once they have cleared, you can still run the chance of a bank reversal, but this
doesn’t often happen.
Then there are credit cards funded by PayPal accounts. Whether or not they
have funds in their account, the credit card allows them to do instant payments.
You want the instant payments because the customer wants instant satisfaction.
With payments coming from PayPal funds and credit cards (not e-checks),
you’ll find that about 80 percent of the orders are good after 24 hours. In the first
24 hours, there is an 80 percent chance that the payment you received is good.
If you get to 7 or 8 days or more, there is about a 90 percent chance that the payment is good. When you hit 11 days, you’re at a 98 percent chance of that payment
being good. At 30 days it’s approximately 99.8 percent. These figures are based on
statistics with our payment acceptance and how long before the payment gets held.
PayPal will hold payments for several reasons. One is that they watch for funds
being shifted around from account to account to account, all by the same person.
They will monitor these actions. Within a few minutes of the payment coming in,
they will place a temporary hold for investigation. If you are not planning on talking to your customer, you may want to give them a 30-minute delivery expectation.
Wait about 25 minutes and then send the product. This investigation will trip
internal checks at PayPal within a few minutes of the payment being sent. By delaying the delivery of the first order by 25 minutes, you can reduce your fraud by 0.5
percent or so. This may not sound important, but we did $6,000,000 in sales in one
of our stores last year and that short delay saved us $30,000 in potential profit
(profit margins are often lean for resellers).
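The delayed-delivery tactic can be sketched as a simple shipping gate. The 25-minute figure comes from the text; the order fields and function are placeholders of my own, not anything PayPal documents:

```python
import time

FIRST_ORDER_DELAY = 25 * 60  # seconds; holds within minutes are caught

def should_ship(order, now=None):
    """Gate delivery: hold a first-time customer's order briefly so that
    a payment flagged by the processor is caught before the product ships."""
    now = now if now is not None else time.time()
    if order.get("held"):
        return False                       # payment is under investigation
    if order["first_order"]:
        return now - order["paid_at"] >= FIRST_ORDER_DELAY
    return True                            # repeat customers ship at once

order = {"paid_at": 0, "first_order": True, "held": False}
should_ship(order, now=60)        # False: only one minute has passed
should_ship(order, now=26 * 60)   # True: past the 25-minute window
```

In practice you would run this check from a delivery queue, re-testing each pending first order until it either ships or the payment is held.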
I have talked with PayPal many times about why they flag a payment and hold
it a minute or two after it’s done. If they are that fast, why can’t they prevent the
payment being sent to us in the first place?
PayPal can be frustrating to work with. You just have to know how to work
with them. They will hold funds for many reasons, even if it’s just general suspicion.
I’ve seen this many times. There’s nothing you can do about it other than build it
into your costs that they will do this from time to time. I usually refund the order
if I haven’t delivered the product. If I have delivered the product, I may place a few notes on the order so PayPal knows the details I have. This may provide
them with the information they need to release the order if it is legit.
I always wonder what happens to the funds that PayPal holds. Do they give
those funds back to the customer? Do they keep them? I certainly don’t get it. I’ve
talked with people some time later and they said their money wasn’t refunded.
PayPal states that the funds will always go to the buyer or seller after any hold. They
also state that they are audited and are required to account for all funds. It makes
me wonder if the buyer is being honest with us in these situations or if they have
misunderstandings about how the system works. I’m betting that, most of the time,
the customer funded their payment with a credit card and they do not have access
to their online statement. Thirty days later they will receive their statement and see that
a credit was in fact given back to them.
Also with PayPal, they will place holds in situations where the customer has no funds
in their account to send in the first place. When an order is funded by a credit card,
the credit card company does a hold on the payment to PayPal. It’s frustrating how
this works. PayPal will first send you an email saying they are disputing this on your
behalf; please provide information. When they dispute it, they also charge $10.
This $10 fee is in many cases non-negotiable, although, because I do such a high volume, I have negotiated that they not charge us this fee: we have told them we will
never dispute a credit card chargeback. So, we do not require this service from
them for disputing charges on our behalf.
The credit card chargebacks typically do not appear for at least seven days. This
process differs in that it requires forms to be filled out and most of them are not
electronic. The last one I dealt with was from a furniture company. They called me
two weeks after the order and told me that two months prior to my purchase they
had gone into bankruptcy. Why wasn’t I told this when I made the purchase? They
said I was not getting the funds back. I had to do a chargeback if I wanted my funds
returned. I had to sign documents saying that I attest this was true, had to fill out
other forms, and copy receipts. It was a lot of work; it took me several days to organize and gather the paperwork and then fax it all in.
These holds are not disputable. You cannot fight a credit card chargeback
through PayPal. I have never won that battle unless the customer was making a
chargeback on something else and they did this one by mistake. I have had them
reverse a couple of times in that instance, but it is not worth your time even if it is
a thousand dollar chargeback. The odds of you getting one back are very low. You
will spend more than a thousand dollars worth of time to get that one chargeback
released. Don’t even worry about it.
If you have a lot of profit on certain items, then you may want to study where
your highest profits are. It can sometimes be more profitable to allow a low amount
of fraud, though, as it will make it easier for your honest customers to make purchases.
With this information you can get as hard-core with anti-fraud as you want; or
just use a few of these items to enhance your current anti-fraud policies.
Why are anti-fraud measures so important? My experience is that about five percent
of first-time customers are using a stolen PayPal account, credit card, or another
online payment method (whereby they’re not the legitimate owner of the account).
Typically, they have acquired it from phishing.
Thus, if you blindly accept payment from just any PayPal or credit card account,
you’ll end up with at least five percent fraud. If you sell virtual items that can be
(licitly or illicitly) resold, they will make multiple purchases. When they make one
successful purchase, they will be back again to make more purchases and, if left
unchecked, you’ll go up to about an 18 percent loss rate from their numerous purchases before they get shut down.
With products that are not re-sellable, such as registration keys or add-ons, you
can still occasionally get fraud. Because I have those registrations “call home” (they
send a request to our servers to verify they are active in good standing), as soon as
the payment is reversed or there is an issue, I block their registration. The next time
they try to log in to that application, it will become unregistered again.
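The “call home” scheme reduces, at its core, to a server-side status lookup. A minimal sketch with in-memory storage (a real system would use a database and authenticated transport, and the key format here is invented):

```python
registrations = {}   # key -> "active" | "revoked"

def register(key):
    registrations[key] = "active"

def payment_reversed(key):
    """A chargeback or reversal came in: revoke the registration."""
    if key in registrations:
        registrations[key] = "revoked"

def call_home(key):
    """Called by the client on launch; True means it may keep running."""
    return registrations.get(key) == "active"

register("ABCD-1234")
assert call_home("ABCD-1234")
payment_reversed("ABCD-1234")      # the chargeback arrives
assert not call_home("ABCD-1234")  # next launch de-registers the product
```

Because the check happens at launch rather than at purchase, a fraudulent payment can still be revoked after delivery.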
After that, I use one of two methods. First, I allow them to purchase again at
regular price and hope that it doesn’t get reversed. This is an acceptable method.
The other method, which I prefer, is to charge a penalty price to re-register the
product. If they reverse the payment but come back again and want to re-register
the product, I increase the price substantially, typically up to three to five times the
original purchase price.
The way I identify those who attempt multiple purchases is by having the registrations tied to the actual account names for the games or by another way of identifying which computer or person is making the purchase. I find this to be a good
deterrent. If they know that they can only connect to a specific account with the
purchase, they typically don’t reverse charges, lowering fraud a bit. I have found
this approach has about a 2.5 percent fraud rate. For most products, this would be acceptable.
For those who come whining to us that they got their registration key banned,
we’ll re-sell them the banned product at a greater price.
Because five percent of first-time customers are attempting a fraudulent
payment, the key is to catch them immediately. If you don’t, they start running up your losses.
I figured this out the hard way. Several years ago I was running an online store.
I sold game codes and other items such as currency in some games (it was legal, not
the grey-market terms-of-service violation items). This store would do automatic
delivery of the codes or currency. I had a system in place that checked if their IP
address, PayPal account, or email address had been fraudulent in the past. If it
passed those checks, the system would allow the purchase to go through.
However, the system limited the number of purchases a person could make: I restricted purchases to five in a 24-hour period. If any fraud appeared, I would lose at most five purchases. These losses would be pretty limited, or so I thought.
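The checks just described, a fraud-history blocklist plus a five-per-24-hours cap, might look like the sketch below. The identities and thresholds are illustrative, and, as the story that follows shows, a per-identity cap alone is easy to defeat with throwaway email addresses:

```python
import time
from collections import deque

WINDOW = 24 * 3600     # rolling 24-hour window, in seconds
MAX_PURCHASES = 5
fraud_history = set()  # known-bad IPs, payment accounts, and emails
recent = {}            # identity triple -> deque of purchase timestamps

def allow_purchase(ip, account, email, now=None):
    now = now if now is not None else time.time()
    # Block anyone matching a previously fraudulent identifier.
    if {ip, account, email} & fraud_history:
        return False
    # Rate-limit purchases per identity within the rolling window.
    key = (ip, account, email)
    times = recent.setdefault(key, deque())
    while times and now - times[0] > WINDOW:
        times.popleft()
    if len(times) >= MAX_PURCHASES:
        return False
    times.append(now)
    return True

fraud_history.add("203.0.113.9")
allow_purchase("203.0.113.9", "pp_1", "a@x.test")   # False: known-bad IP
for _ in range(5):
    allow_purchase("198.51.100.2", "pp_2", "b@x.test", now=0)
allow_purchase("198.51.100.2", "pp_2", "b@x.test", now=0)  # False: over cap
```

Keying the rate limit on the full identity triple is exactly the weakness exploited in the story: a fresh email address resets the counter.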
In this store there was $18,000.00 worth of stock. One morning I woke up,
checked the computer, and saw that I sold out of everything! At first I was really
happy. Then I was really scared.
I realized that someplace really popular had linked to our site or something had
happened and someone had found a way to exploit our systems. In fact, the latter
was the case. Someone figured out that only so many purchases per day were allowed. They bought a domain with a stolen credit card and immediately purchased
email hosting. They then set up email addresses on this hosting account. They
would make five purchases with the credit card through a PayPal account; then our system would cut them off. Then they created a new email address and made five
more purchases. They did this until the store was empty. They managed to do this
within a couple of hours.
That’s when I decided that I had to do something. I wanted to have an automated solution. The best kinds of sales are the ones that are replicated—those that
require no more effort. The reality is that this just doesn’t work for items that have
resale value.
A human element is required. That human element can come in several different forms. When humans do anti-fraud, they can review orders and check on suspicious purchases. They assess orders looking for patterns. You can look at IP addresses and watch for large amounts of fraud from certain ISPs. You may have a human
review a transaction if it’s a high dollar order or just review all of the transactions
right off the bat. Thieves are always looking for a new loophole to get through. So
there needs to be someone always watching the transactions.
You may want to consider automating searches of email addresses, or of customer names, to identify those who are going to steal from you.
This may sound really crazy, but it’s my belief that some criminals want to be caught. A prime example: we received an order from an email address that read imatheif@******.com. Our staff did not notice the address and approved the order, and it went out just fine. A couple of days later, a hold was placed on the funds. Looking it over, I realized this person had told us he was going to steal from us; he told us so in his email address.
We have since started tracking this kind of information. Many payments deserve to be held: if an email address contains words such as thief, steal, cheat, or hack, you can have your system automatically hold the order for review. In fact, you may just want to turn such orders down. I have found that these email addresses actually told us what their owners were going to do. (Like imathief. What does a thief do? He steals.) Half of these customers did actually reverse their charges. I tested that by allowing several such orders through and waiting to see whether the charges were reversed.
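A keyword hold of this kind is trivial to automate. A minimal sketch; the word list comes from the examples above, and the function name and "hold"/"approve" labels are illustrative assumptions:

```python
# Words the author found in fraudsters' addresses; extend this from your own data.
SUSPICIOUS_WORDS = ("thief", "theif", "steal", "cheat", "hack")

def review_action(email: str) -> str:
    """Return 'hold' if the address's local part contains a suspicious word,
    so the order is queued for human review (or declined outright)."""
    local_part = email.split("@", 1)[0].lower()
    if any(word in local_part for word in SUSPICIOUS_WORDS):
        return "hold"
    return "approve"
```

A rule this simple will produce false positives (a legitimate "hackett@..." would be held), which is exactly why the text recommends routing flagged orders to a human rather than rejecting them automatically.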
Customers can give away their intentions. That’s one of the things that can be automated. Other things cannot be automated, like your interactions with customers. Sometimes they tell you things. Like I said, thieves want to be caught. They will give you hints. It’s a game and a thrill to them.
Chapter 27 Money: Real Transactions, Real Risks
In fact, once you identify a thief, you can even call to speak with some of them, and they will actually tell you what they’ve been doing. They will explain what has been successful and what hasn’t. That may sound crazy, but that’s been my experience. Some thieves like to brag about their exploits, about the people they’ve tricked or stolen from. You can learn a lot from that.
We’ve seen payments that were going to the XXX game company (I’ll just call them XXX so as not to embarrass them). Although we were not involved in their payment processing, I noticed that in one of the markets where I was selling game codes, a lot of codes were appearing for sale at a discounted price. I knew the game codes were being sold below cost. I was sure of that because I was probably one of their highest-volume resellers of these codes, and the prices they were selling them for were just incredible.
I decided to figure out what was going on. I contacted some of the sellers who were offering the cheap codes. I bought some of them and, lo and behold, they did work. I was halfway expecting that they wouldn’t work, that the codes would have been used or would simply be fake. They did, in fact, work.
When I spoke with them and listened to them brag, they told me they were
Russian carders. A Russian carder is someone who phishes credit cards or gathers
them by some other means. They use those credit cards to make purchases of other
items such as game codes. They then resell those game codes. They have, in effect,
laundered these stolen funds. They were selling the game codes for 50 cents on the
dollar and were making a lot of money.
This turned out to be huge for XXX. I am not sure of the total losses, but it took
me forever to get XXX’s attention on this. I was adamant about getting XXX to do
something about this because it was affecting our sales of their codes (because the
market was flooded with these cheap codes).
When they finally did something about it, they got the FBI involved. Since the
thieves were in Russia, there was really not much that could be done. They got away
with it. Nothing ever happened to them.
These are exactly the kind of organized criminals you need to weed out immediately; if not because of the costs of fraud, then because U.S. Homeland Security will review transactions to make sure terrorists are not laundering money through your company. All of the carders’ IP addresses at that point were coming from Russia, so we flagged all Russian orders. They all had to be reviewed because of the country’s high fraud rate.
I’m going to tell you a story about a guy who placed an order with us. Keep in
mind that I know where our orders originate.
The deal on this is that even though we know what city and state customers are from, we ask them anyway when we confirm purchases by phone. We could see this particular customer was from Nashville, TN. He had a non-regional accent, which isn’t necessarily a red flag but can be. I asked him what city and state he was from. His answer: “Nash-villey, Tennasis.” Right then we knew that there was a huge problem, and we said, “Thank you, we’re sorry, but we’re not going to be able to process your order. You’ll receive a refund from us shortly, and an email.” Somebody who cannot pronounce his or her own city and state is typically not from that area.
Other examples: people from the East Coast say “Orygon,” while people from the West Coast say “Oregun.” Then there is “Illinoy” (correct) versus “Illinoise.” There are ways to gauge whether or not someone is from a region simply by asking where they’re from. That’s pretty easy if you know how words are pronounced in the local area. Don’t let them use the excuse that they just moved there.
The hardest thing about all of these payment methods is integrating them into your systems. All of these services offer integration, but it can be very complex. I recently worked on integrating Moneybookers into one of our stores. I would have thought it would take just a couple of weeks to set up, but it took several months. Each service has different requirements for integration. This is an area where you will want a highly skilled technical person doing the work. If you take the easy route and just get it working, you will be open to exploits. I recently found this out the hard way: we took substantial losses from a hacker who was able to fool our system into thinking that every item for sale cost just one cent. We found that one check in our system had been left unused, and they exploited that fact. Taking the time and resources to do this right the first time is worth it. Had we not caught the problem right away, our losses would have been higher than the costs of integration. We were lucky, but luck should not have anything to do with it. We should have done it properly the first time.
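The one-cent exploit described above comes from trusting a client-supplied total instead of verifying it against the server's own catalog. A minimal sketch of the missing check; the catalog contents, function names, and tolerance are illustrative assumptions, not the author's actual system:

```python
# Authoritative prices live server-side only; never in the client or the URL.
CATALOG = {"game-code-123": 19.99}

class PriceMismatchError(Exception):
    pass

def settle_order(item_id: str, amount_paid: float) -> None:
    """Reject the order unless the amount paid matches the catalog price."""
    expected = CATALOG.get(item_id)
    if expected is None:
        raise PriceMismatchError(f"unknown item {item_id!r}")
    # Compare against the server's price, with a small tolerance for
    # floating-point rounding; a real system would use integer cents.
    if abs(amount_paid - expected) > 0.005:
        raise PriceMismatchError(
            f"paid {amount_paid:.2f}, expected {expected:.2f}")
```

The design point is that the payment processor's callback tells you how much was paid, not how much should have been paid; only your own catalog can answer the second question.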
by S. Davis
Credit card fraud [1] is a massive problem. The FBI estimated that credit cards were responsible for the majority of the $315 billion in U.S. financial fraud losses in 2005; French credit card losses were $319 million [2]. In the UK, “card-not-present” fraud was £212.6 million (almost $350 million) in 2006. Although consumers may see these losses hidden as part of their interest rates and fees, the credit card companies and payment processors have successfully passed most of the risk to merchants. The burden is on you to protect your revenues and the health of your business.
1. Wikipedia (2008), “Credit Card Fraud,”
2. J. Conlin (2007), “Credit Card Fraud Keeps Growing on the Net,”
More Money: Security, Technical, and Legal Issues
There is always more to say about money. The PCI-DSS (Payment Card Industry Data Security Standard) initiative by the credit card industry has been put in place to reduce fraud at merchants. Unfortunately, PCI-DSS compliance is not the same as “security,” as the Hannaford Brothers grocery chain found when 4.2 million customer credit cards were compromised even though the company’s operations had been certified PCI-DSS compliant [1]. PCI-DSS compliance is an important issue, however, because most online services are likely to need certification in order to be allowed to accept payments.
Fraud is one of those things you really have to keep an eye on. As we were celebrating revenues in Q1, they were being tainted by fraud. We didn’t know what acceptable levels were. And the detection is delayed. Revenue is scalable, but so is fraud. Revenue clouds your ability to identify and fight fraud. You see money coming in and you don’t want to tighten your grip. And fraud leads to fines, incrementally, and if you hit a certain level they can hit you with a big one. Increased fraud leads to expulsion from credit card processing. Some companies have been dealt six-digit fines. If you have over a 1 percent chargeback rate, you get hit with a fine.
… You lose the revenue of the purchase and an additional [chargeback] fine.
Merchants can fight the chargebacks, but it’s difficult and costs a lot of
manpower. We learned you have to stop fraud before it happens and identify
future chargebacks. You need a firm user policy and education. …We had
users selling their time for existing fraudsters to level up characters. And that
came back to hit us.
… We set spending limits and educated the user base. We created a fraud [team]
and joined the Platinum Members List in the Merchants Business Council.
Chargebacks have been drastically decreased and controlled.
—Min Kim, Director of Operations, Nexon America
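The 1 percent chargeback threshold mentioned in the quote above is easy to monitor continuously. A trivial sketch; the exact threshold, measurement window, and fine schedule vary by processor, so the numbers here are illustrative only:

```python
def chargeback_rate(transactions: int, chargebacks: int) -> float:
    """Fraction of transactions in a period that became chargebacks."""
    if transactions == 0:
        return 0.0
    return chargebacks / transactions

def over_threshold(transactions: int, chargebacks: int,
                   threshold: float = 0.01) -> bool:
    """True when the rate meets or exceeds the processor's fine threshold
    (0.01, i.e. 1 percent, per the quote above)."""
    return chargeback_rate(transactions, chargebacks) >= threshold
```

Because chargebacks arrive weeks after the original sale, the useful version of this check compares this month's chargebacks against the transaction volume of the month they originated in, not the current month's volume.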
Increasingly, online services are portals to a wide range of entertainment
services. Player accounts allow players to purchase everything from virtual items to
real goods. This makes these accounts targets for fraud by both insiders and hackers. Finally, even though virtual currencies have become a key part of the growth of
the online game industry, there are some real risks. In addition to gold farming, the
ability to convert some of these synthetic currencies back into official currencies
raises the potential for their use for money laundering.
The rise of ecommerce and the explosion of corporate IT and networking have created a huge problem for the payments industry. Poor internal security systems and procedures have created huge opportunities for fraud. TJX Companies, owner of a number of major retailers including T.J. Maxx and Marshalls, had 45 million credit card numbers compromised during a multi-year security breach [2]. This compromise cost the company well over $100 million in losses. The credit card industry established the PCI-DSS standard and certification [3] to help protect credit card data, because these large-scale compromises were costly for the payments industry as well.
As seen with the Hannaford Brothers case, PCI-DSS compliance is not the
same as being secure. It is a minimum standard and may affect your transaction
fees and terms as a merchant. Game companies and other online service providers
should thoroughly investigate their options for practices that can directly reduce transaction costs as a matter of good business (such as joining the Platinum Members List of the Merchants Business Council).
Good system design should make it possible to move beyond PCI-DSS and reduce your practical risks. Online service providers should also investigate insurance
and even consider not taking payments directly by working with third parties to
further reduce potential threats.
Online game accounts are becoming much more than a way to pay a monthly subscription. Players can purchase downloadable content (DLC) for existing games,
purchase additional games, buy virtual items, and will soon, no doubt, be able to
purchase physical items. Microsoft’s Xbox Live is a pioneer in turning a game service into an all-encompassing entertainment portal, but Apple’s iTunes and App
Store as well as Valve Software’s Steam and Amazon’s growing suite of Amazon
Web Services, among others, are joining in. In China, Tencent’s Q-coins have grown in popularity to the point where the Chinese government considered them a threat to the nation’s currency [4].
These accounts are rapidly becoming much more than a subscription payment
method. As these capabilities of online game services grow, they will become more
“interesting” targets for thieves. There are even greater risks if the virtual currencies
that many of these services use can be converted back into “real” money. Convertible
currencies raise the risk that game-like incentives and rewards can be construed as
gambling. Also, the ability to “cash out” makes the currency a tempting target for
internal fraud and outside hackers. Until a weakness in the QuickTime player was patched, Linden Lab’s Second Life was vulnerable to a hack that allowed thieves to transfer virtual currency from one player’s account to another player’s account without their consent [5].
At NetEase, a company employee identified 30 accounts and, claiming that he had “lost his password,” sent in faxes of counterfeit identity cards that transferred the accounts to his control. He then proceeded to loot the accounts [6]. The amount involved was not large by U.S. standards, around 4,000 yuan (about $500), but this is equivalent to several months’ wages in China.
These types of services are vulnerable at three key points: the game or other systems that can reward players, payment systems that move between real and virtual
currencies, and customer service systems.
Customer service applications are a critical security target, because they are the
means for handling any sort of problem that customers have. Potentially, they are
an easy way to compromise user information, steal accounts, add unauthorized
credits or bonuses, or carry out actual transactions for real money.
Rampant payment fraud is a nightmare for any online business and an increasingly
important issue for online games. One alternate payment processor, Flooz, had
created a virtual currency for micro-transactions (similar to that used in many
games). Unfortunately, Russian organized crime groups used Flooz and stolen credit cards to launder funds [7]. This resulted in an FBI investigation.
[T]he rate of fraudulent purchases spiked from less than one-twentieth of one
percent to 19 percent of consumer credit card transactions in June and July
2001. Chase Merchant Services, which processed credit card transactions for
the company, then imposed tens of thousands of dollars per month in fines
for the excessive fraud rate. Chase also attempted to hold $2 million in company
credit card deposits to cover the fraudulent transactions. Credit card fraud was
already a major concern for online shoppers, and digital currencies were attractive to fraudsters due to their instant delivery and potential anonymity.
Eventually, Flooz was shut down.
Convertible currencies can pose a risk, if there is enough value that can be
transferred. There has been speculation about criminals or terrorists using online
games as a way to launder money, but typically the amount of value that can be
moved easily in these games is too small to be of interest. This is yet another reason
that companies should be careful with their transaction systems.
Online poker, skill games, sports wagering, and pari-mutuel wagering do have the potential to be used for money laundering, however, because of the larger amounts of money involved. Criminals are using online wagering services for money laundering, with wagers of 100,000 euros on third-string Romanian soccer matches or even a Czech women’s league game. There are apparently around 15,000 online wagering sites, with only 2,000 being legitimate, and together they handle around $23.6 billion in wagers a year [8]. Internet sports wagering is also threatening the integrity of many traditional sports. Several suspicious matches at Wimbledon and elsewhere have led to investigations [9].
Perhaps unsurprisingly, the ban on Internet gambling in the US has led to criminal innovations. A payment processor in Utah miscoded credit card transactions to allow them to be used for online gambling [10], [11]. In Korea, illegal online gambling has grown with the rise of “cyber-money dealers” who handle the conversion between virtual currencies and hard cash [12]. These types of “covert payment channels” can be used for other criminal applications.
Gift cards, because they are not considered “real currency,” have been used by criminals for drug-deal payments and money laundering [13]. Gift cards are compact, flexible, and do not have to be reported when crossing borders.
Crooks don’t need great graphics or immersive worlds to create real problems
for individuals, governments, and the online game industry.
Law enforcement throughout the world is well aware that money laundering is not limited to money transfers between typical banks or other financial institutions. They know that games, especially online games, have increasingly become the vehicle of choice for those looking for unique ways to transfer money “under the radar.” Those who take advantage of a game for the purpose of laundering money are often crafty, and a game operator could unsuspectingly find itself in the crosshairs of an investigation or, worse yet, a prosecution. The unsupervised electronic funds transfers inherent in online gambling, for example, are exploited by criminal interests to launder large amounts of money [14]. It is an issue that cannot be ignored.
First, know what money laundering is; then, know how to address potential issues when you discover them. Money laundering is a process by which a person takes money, potentially illegally gained cash, and then transfers it, perhaps distributing it among others or routing it back to himself, with the goal of keeping any governmental body from finding out the source of the funds. In this manner, “dirty” (questionable) money goes through a “laundering” process to become “clean” (deceptively lawful), because the new source of the money is not in doubt and the path it took is intentionally untraceable. Any process that moves cash without asking a lot of questions about the money (and without governmental reporting obligations) will be a candidate for a money launderer.
If a person uses a bank or financial institution to transfer money, the bank or
financial institution will have a record of the transaction, and is likely required
to report the transfer to the government. In the United States, for example, cash
transactions and deposits of more than a certain dollar amount are required to be
reported as “significant cash transactions” to the Financial Crimes Enforcement
Network (FinCEN), along with any other suspicious financial activity that is identified in “suspicious activity reports.” Other jurisdictions have similar requirements
that obligate financial services employees and firms to report suspicious activity to
the authorities.
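Reporting rules of this kind can be approximated in a transaction pipeline: flag any single transaction at or above the reporting threshold, and also aggregate per-customer daily totals to catch crude structuring (splitting one large transfer into several small ones). The threshold value below is a placeholder, not legal guidance; the actual amount and reporting duties depend on your jurisdiction:

```python
from collections import defaultdict

# Jurisdiction-specific; 10_000 is an assumed placeholder value.
REPORT_THRESHOLD = 10_000

def flag_transactions(transactions):
    """transactions: iterable of (customer_id, amount) pairs for one day.
    Returns the customer ids whose single-transaction or aggregate daily
    activity crosses the threshold, for review and possible reporting."""
    totals = defaultdict(float)
    flagged = set()
    for customer, amount in transactions:
        totals[customer] += amount
        if amount >= REPORT_THRESHOLD or totals[customer] >= REPORT_THRESHOLD:
            flagged.add(customer)
    return flagged
```

Real AML screening is far richer (velocity, counterparties, geography), but even this crude aggregate check catches the "many small transfers" pattern described above.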
When suspicious activity is discovered by authorities through FinCEN, or some
other means, most countries have broadly written laws at hand to prosecute those
who launder money, as well as those who aid and abet the launderers. In the US,
law enforcement principally relies on a law called the “Illegal Money Transmitting Business Act of 1992” [15], which makes it a crime to “conduct, control, manage, supervise, direct, or own all or part of a business, knowing the business is an illegal money transmitting business” [16]. This law covers the transfer of money by “all means,” which includes the Internet and any other online service such as a game, social network, or virtual world [17]. The scope of this law gives prosecutors the tools they need to go after each person in even the most creative money laundering scheme. Each state in the US also has the ability to investigate and prosecute crimes within its boundaries.
If consideration (such as money) is an element of the game, the game operator will have to be well aware of potential issues relating to money laundering. Money laundering can also occur in other ways, such as sales made with virtual and actual currency. Issues will arise that the game operator truly has no knowledge of, and that may be a defense. But prosecutors will often form suspicions about what the game operator knew, and when the game operator knew it, based on their investigation. If you have the slightest suspicion that something illegal is occurring, contact law enforcement. You may consider contacting legal counsel first, but do not delay. Assume you will encounter some sort of incident, and have practices in place to respond quickly and work with law enforcement. You do not want your business shut down like Flooz.
1. R. Mogull (2008), “Picking Apart the Hannaford Breach: What Might Have Happened,”
2. J. Vijayan (2007), ”TJX Offers Settlement in Wake of Massive Data Breach,”
3. PCI Security Standards Council (2008), “About the PCI Data Security Standard (PCI DSS),”
4. Wang X. and Wang S. (2006), “Virtual Money Poses a Real Threat,”
5. D. Terdiman (2007), “Report: Hackers Say They Can Steal ‘Second Life’ Currency,”
6. China View (2006), “More Attention Paid to Virtual Property Protection,”
7. D. Cotriss (2008), “Where Are They Now: Flooz,”
8. F. Chaptal (2008), “Sports Credibility Caught in Tangled Web of Gambling,”
9. J. Calvert, B. Flatman, N. Fleming (2008), “Wimbledon Fears Match-Fixing Scandal in Massive Betting
10. KUTV (2007), “Mob 3.0: Covert Channels in Online Gambling in the US and the Expansion of Organized Crime” (original article at KUTV no longer available)
11. B. Hansen (2007), “BetonSports—Again?—and BetUs Indicted in Latest DOJ Bust,”
12. M. Ha (2008), “Online Gambling Sites Mushrooming,”
13. D. Birch (2007), “Moral Panic or Genuine Worry?,”
14. S. Coates (2006), “Online Casinos ‘Used to Launder Cash’,”
15. “Illegal Money Transmitting Business Act,” Pub. L. No. 102-760, § 1512(a) (1992), codified at 18
U.S.C. § 1960
16. 18 U.S.C. § 1960(a); also refer to 31 U.S.C. § 5330
17. 18 U.S.C. § 1960(b)(2)
Identity, Anonymity, and Privacy
Identity is a severe problem for almost all online services, including games.
Games can operate as anonymous services, but usually identity becomes important at some point. The U.S. online game industry seems to be moving towards
an environment where identity will tend to be very weak. Although credit and debit
cards provide reasonable identity information, prepaid cards (see Chapters 27 and
28) are rapidly becoming the preferred payment method and they are naturally
anonymous (to the game company).
Other countries have a very different view of online identity. Both China [1] and Korea [2] are moving towards requiring registration for many online services via some form of national identity number. It is likely that this approach will be adopted in many countries. Korea has gone so far as to create a separate online identity number for its citizens [3].
The first problem that any identity system faces is how to register users. The
“registration problem” does not get nearly the attention it deserves. The biggest
challenge is assigning an actual person to some sort of number or token or collecting a biometric signature. Once registration is completed, the operation of an
identity service is fairly straightforward. (As almost everyone knows, it is much more
difficult to get a driver’s license than to use one to verify your identity and age.)
Age verification is a particularly thorny aspect of the identity problem. In the US, the only place where there is a legally acceptable definition for online age verification is identifying younger children under COPPA [4]. Another area of increased interest in age and identity verification is the development of usage controls to address public policy concerns about game addiction. Whether game addiction is an actual problem or not, game companies need to address public perceptions of the issue.
Compromise of user information and identity theft is a problem that has gained increasing public awareness over the past several years. In the US, California’s data disclosure law has been critical in raising awareness of how often and how much personal information is compromised [5]. Korea has gone one step further, with substantial civil penalties being assessed when user data is lost: as much as $100 per person [6]. There are legal requirements that any online service needs to meet in order to collect and retain user information. However, the US has some of the weakest privacy protections in the world. Most other countries require notably stronger protection for individuals’ information. Online businesses that are considering international markets need to be particularly sensitive to these issues, because practices that are acceptable in the US are not permitted elsewhere.
Identity has an important internal role to play in online services. Identity management systems need to collect and retain sensitive user information. They also
need to handle login, account recovery, compromise management, and other issues.
Identity is particularly important for game companies that are concerned about
player accountability.
Identity is probably one of the most important problems of the 21st century.
Although businesses and governments have always collected data on individuals,
until the explosion of computing power and networking, this sensitive data sat relatively safely in file cabinets spread all over the world. Today, this multitude of
identity records can be accessed and linked together to provide more information
about ourselves than any of us would really care to share. Online services make it
trivial to track user actions in great detail. At the same time, it is almost trivial to
steal an identity. Stolen identity information, including name, social security number, and address, can be purchased for as little as $2 per identity, whereas a credit card name and number can be had for 40 cents [7]. These prices are for American identities; European identities go for more.
There has been no real growth in identity security, and no serious debate of the need for digital identity as a matter of public or business policy. Sadly, there has been depressingly little discussion of the implications of pervasive access to all of our identity information and of the overall lack of security of online identity. For a business, in addition to the growing requirements for data that must be retained to meet legal and regulatory obligations, there are also legal risks from data disclosure. There are other challenges, all of which have an identity component: hate speech and harassment, obscenity, liability and government access, parental controls, and community standards and jurisdictional issues.
One strategy may be to move an online service to an offshore jurisdiction that has a legal environment more amenable to your specific business. This may appear to increase operational costs, but it could be a powerful tool to reduce the threat of litigation. (Note: This will certainly not solve all problems, as the online gambling industry has found with the UIGEA. One firm, Party Poker, lost 90 percent of its poker revenues due to this law, which applies to companies outside of the US [8].)
Anonymity is an interesting issue for online services. On one hand, anything
that reduces barriers to entry increases participation by potential customers. On
the other hand, anonymity seems to encourage bad behavior. Many people see
anonymity as the right to act with impunity. For games, strong identity helps fight
cheaters and griefers by making them accountable for their actions and it helps with
piracy by making it difficult to use pirated games online.
One real challenge for most identity systems is that they are not designed with
malicious individuals in mind. Almost all the systems are built on the implicit assumption that the individuals using the system actually want their identity to work
properly and be secure. When this assumption is incorrect, as it is when criminals
and even minor miscreants in games are involved, the security of the systems often
comes crashing down. Digital signatures and public key infrastructures just don’t
work if a private key has been compromised or the user is willing to lie during
registration or share or steal a key. The best available approaches build on positive
identity relationships such as existing relationships with customers, security tokens, some payment systems, or active incentives that reward accurate identity.
Game developers and operators need to assume that their identity system is
constantly under attack and that some of their users are always trying to defraud
their online service. Until recently, Xbox Live players who had been banned could use promotional cards that provided minutes and pre-paid cards to set up new accounts and get back into the system [9]. Games should explore building highly specific
identity systems that take advantage of their unique service offerings. Fan clubs,
loyalty programs, incentive programs, and anything else that rewards honest identity are very valuable. The larger and richer the identity system and online service,
the more effective it is. This is one of the real advantages of services like Valve
Software’s Steam. The more they add to the service, the more costly it is for a player
to “defect” and cheat, pirate, or otherwise damage the game ecosystem.
Identity is a profoundly important problem. The biggest problem for identity
systems is identity itself: knowing that who you are talking to is who you think it is.
If you look at many discussions about identity, they completely ignore this most
essential issue. There are three main components of an identity system:
Registration and Association: Linking the Person to an “Identity”—This consists of both the initial registration process and the real-time login or access control system for some sort of identity-based session.
Transport: Communicating an Identity (over a Network)—This is usually
implemented via cryptography, but the transport service often needs to address
identifying the people and the application and platform that they are using to
contact each other.
Policy: Linking Identity to a Specific Business Problem—Identity is not an
end, it is a means to solving some problem. An identity system policy captures
information about the individuals and what they are allowed to do based on who
they are, their role, or other criteria. An unfortunate habit in the IT security industry is to define policies for systems that may, or may not, reflect the actual
business needs of their clients: organizations own policies, not technologists.
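The three components above can be sketched as a toy service: registration binds a person to stored credentials, login performs real-time association, and a policy table decides what the authenticated identity may do. Transport is left to TLS and is out of scope here. All names and the role table are illustrative assumptions; a real system would use a vetted credential store, not module-level dictionaries:

```python
import hashlib
import hmac
import os

_users = {}   # username -> (salt, password_hash): the registration record
_roles = {}   # username -> role, consulted by the policy layer

def register(username: str, password: str, role: str = "player") -> None:
    """Registration/Association: bind a person to a credential and a role."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _users[username] = (salt, digest)
    _roles[username] = role

def login(username: str, password: str) -> bool:
    """Real-time association: re-derive the hash and compare in constant time."""
    record = _users.get(username)
    if record is None:
        return False
    salt, digest = record
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

def authorized(username: str, action: str) -> bool:
    """Policy: map identity to what the business allows, not the reverse."""
    permissions = {"player": {"play"}, "gm": {"play", "ban"}}
    return action in permissions.get(_roles.get(username), set())
```

Note how the policy table is owned by the business rules, not by the authentication code: changing who may "ban" requires no change to registration or login, which is the separation the third component argues for.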
There are several critical supporting issues that need to be addressed as part of
a complete identity system:
Compromise Recovery—What do you do when things go wrong? This issue is critical, yet it is rarely addressed. What happens if the identity service’s data is compromised (look at the seemingly endless data disclosures in the news if you think this issue isn’t important)? What if the user compromises her identity information? How does the system recover? For game systems, it is also important to address the scenario where a user intentionally compromises her own identity data. I would argue this is the key failing of biometric systems: they have no meaningful recovery mechanism when biometric data is lost, either through compromise of the biometric database or subversion of a biometric sensor.
Initial Registration—How do you enroll a person or system in your identity service? This was the killer for public key infrastructures (the big security fad of the 1990s) before biometrics (the current security fad, now winding down). Initial registration is often costly. In some sense, the Postal Service and its private competitors are in the best position to handle this function, because they periodically get physical signatures from individuals face-to-face.
Acceptability—What is legally good enough? Although there are many cases
where there is no need for identity to meet a legal standard, it is an important
and sometimes critical feature of many identity systems. It can also be a trap.
There are a number of age verification services that provide some age and identity information, but none of them, to date, can accept or transfer liability from
the actual service provider (except for COPPA). This is also important when
considering the growing identity theft problem.
Protecting Games: A Security Handbook for Game Developers and Publishers
What are the de facto identity systems today?
Usernames and passwords are used both for association and registration.*
Email addresses are used for registration.*
Credit card numbers with other identity information such as names and addresses are used for registration and for association when making a payment.
Essentially, identity systems can tap existing payment processing and authorization services for a fairly strong sense of identity.*
National identity numbers are used in Korea and China for identification. The
problem with these systems is that it is possible to generate an “authentic”
identity number, because the algorithms that each country uses are known10. In
some cases, these numbers encode personal information such as gender, age,
and location of birth, making the identity numbers tools for identity theft
themselves. In the US, the use, until recently, of social security numbers by a
wide range of entities for identification caused similar problems. Because these
numbers are used to access a number of online services, compromises of the
databases for these sites have resulted in millions of identification numbers
being disclosed. Privacy advocates in Korea have raised concerns with requiring the use of any sort of national ID. They are concerned about the rise in
cybercrime from easier identity theft and about the loss of privacy11.*
The use of online identity numbers. Korea has recently created a separate
number, called an i-PIN, for online identity12. It is not correlated with the
country’s national identity number. This system doesn’t seem to address a couple of important issues: Because the number can be changed, it would make a
lot of sense to support a global “compromise” notification system so that sites
and services that use the ID number could cancel it and replace it with the new
number. Also, although it is important to tie an identity to a specific individual, there are probably a number of benefits to allowing a person to have multiple ID numbers for improved privacy and speedier compromise recovery.*
All of these systems (the ones marked with an asterisk) are vulnerable to compromise. They are dangerous to enter in a public computer and can also be stolen via
keyboard loggers and phishing scams. Except for passwords, these identity systems
do not recover easily from compromises. Even issuing a new number does not
necessarily or promptly invalidate a compromised identity number.
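The weakness of numbers built on known algorithms is easy to demonstrate. Credit card numbers, for example, carry a Luhn check digit, so anyone can both validate a number offline and mint new numbers that pass the check. A minimal sketch in Python (the sample digits are illustrative, not real account numbers):

```python
def luhn_ok(number: str) -> bool:
    """True if a digit string passes the Luhn check used by payment cards."""
    digits = [int(d) for d in number]
    # Double every second digit from the right; subtract 9 from any result > 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

def mint(prefix: str) -> str:
    """Append the one check digit that makes an arbitrary prefix look 'authentic.'"""
    for d in "0123456789":
        if luhn_ok(prefix + d):
            return prefix + d
```

A number that checks out proves nothing about its holder; it only proves the algorithm is public. National ID schemes with published check-digit rules fail in exactly the same way.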
Chapter 29 Identity, Anonymity, and Privacy
Faxed Identity Card or Driver’s License—This approach is sometimes used by
services for adults and seems to be fairly widely accepted. People are not particularly likely to lose control of their identity cards and there are existing legal
sanctions for misusing, altering, or forging official documents. The “fake ID”
problem that plagues control of underage alcohol purchases does show the
limits of this approach. Also, this system is relatively costly and slow and, technically, as vulnerable to database compromises as identity numbers.
Registered Mail/Signature Required Delivery—Postal offices and other delivery services typically provide some mechanism for requiring a signature to accept an item. This form of identification and authentication is fairly effective,
if slow and somewhat costly. It can wind up having little marginal cost if there
is an actual physical delivery to a customer. It also has an advantage of fairly
solid legal status.
Security Tokens—There are a number of time-based and challenge/response
security tokens used for identification and authentication. They do not address
the initial registration problem, but are useful for day-to-day authentication.
Blizzard managed to bring the price of an authentication token down to $6.50,
which should make it fairly widely acceptable13. Interestingly, Blizzard is the
real beneficiary of the token, but it has managed to pass the cost of the device
on to its customers.
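Blizzard's exact token algorithm is proprietary, but time-based tokens of this kind derive a short code from a shared secret and the current clock window; the approach was later standardized as TOTP (RFC 6238) and can be sketched with nothing but the standard library:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, when: float, step: int = 30, digits: int = 6) -> str:
    """Derive a short numeric code from a shared secret and the current
    time window, per the scheme standardized in RFC 4226/6238."""
    counter = int(when) // step                  # which 30-second window we're in
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # RFC 4226 "dynamic truncation"
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The server runs the same function with the same secret (e.g. `totp(secret, time.time())`) and accepts a login only if the codes match, so a stolen code is useless after the window closes.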
Mobile Phone Messaging—Asian games are increasingly using mobile phone-based authentication. Because mobile phone numbers are personal items, they
may have available identity information for registration purposes. Players are
presented with a challenge code that they then must send to a specified number (usually via SMS text messaging) to log in to the service.
Biometrics—Some organizations have considered biometric authentication
for remote access. The problem is that the biometric signature is vulnerable to
compromise, as regular computers cannot be considered trustworthy devices.
Keyboard loggers or other malware could easily capture the biometric information for use by a malicious user.
Public Key Credentials—This is another system that looks better on paper
than in the field. Unless there is a physically secure device that holds the private
key, it is no better than a username and password.
Challenge/Response Card—NHN in Korea came up with a fairly clever idea of
using a paper card with a set of challenge/response number pairs (that is,
Challenge: 435, Response: 813) listed on the card. The challenge value is provided by the server and a response given by the player14. This system has most
of the benefits of a security token at a fraction of the cost.
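NHN's card itself is proprietary, but the mechanism is simple enough to sketch; the card size, cell numbering, and function names below are illustrative:

```python
import secrets

def make_card(entries: int = 30) -> dict[int, str]:
    """Issue a card: numbered cells, each holding a random 3-digit response.
    The player keeps the printed card; the server keeps the same table."""
    return {i: f"{secrets.randbelow(1000):03d}" for i in range(1, entries + 1)}

def issue_challenge(card: dict[int, str]) -> int:
    """Server side: pick a random cell number to challenge the player with."""
    return secrets.choice(list(card))

def verify(card: dict[int, str], challenge: int, response: str) -> bool:
    """Server side: check the player's answer (constant-time comparison)."""
    return secrets.compare_digest(card[challenge], response)
```

A keyboard logger that captures one login learns only one cell of the table, which is why a paper card buys much of what an electronic token does.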
There are other identity strategies available. The adult social network, Naughty
America, provided an online background-checking service for its customers to help
find out if potential partners have a criminal record15. Shanda Interactive’s King of
the World MMO requires players who wish to play female characters to “prove
their biological sex via webcam”16! There are also identity systems such as web
cookies and computer fingerprinting. These are better at identifying platforms than
their users (this difference is something that U.S. online service providers tend to
forget; in the rest of the world, the Internet is usually accessed via a public terminal
at an Internet cafe, not a personal PC). Computer fingerprinting systems use various values that can be accessed by software on a computer including: serial numbers for hard drives, MAC addresses for network cards, and license keys to attempt
to create a unique identity for each platform. This technique works much better if
the subject does not know her computer is being fingerprinted, as the signatures
can be changed by a motivated hacker.
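A minimal fingerprint of the sort described might hash a few machine-readable traits. The trait list here is illustrative; commercial systems read many more values, and every one of them can be altered by a motivated user:

```python
import hashlib
import platform
import uuid

def machine_fingerprint() -> str:
    """Hash a handful of values that software can read on the local machine.
    This identifies the platform, not the person sitting at it."""
    traits = [
        format(uuid.getnode(), "012x"),   # primary network card's MAC address
        platform.machine(),               # CPU architecture
        platform.system(),                # operating system name
    ]
    return hashlib.sha256("|".join(traits).encode()).hexdigest()
```

The same hash from two sessions suggests the same machine, which is exactly the wrong conclusion at an Internet cafe where hundreds of users share one platform.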
Robert “Bob” Morris invented and implemented the widely used scheme of using irreversible transforms to protect passwords. The huge advantage of this approach was
that the transformed passwords could be stored in the computer’s main memory
back in the days when memory, hardware, and software were all expensive.
The first part of the Morris Trap is the misuse of the irreversible transform technique. Because people are people, we tend to use highly structured, predictable
passwords. A hacker can test these passwords easily if he has a copy of the “hashed”
or transformed passwords stored in a computer (which Morris’ technique allowed).
If every password is processed with the same irreversible transform function, it becomes very efficient to run a dictionary attack against all of the different user passwords at once. This is not an error on the part of the actual transform technique as
defined by Morris, but how it is incorrectly implemented by many, many programmers.
To avoid the efficient dictionary attack, each password needs to have a distinct seed.
Sometimes, the username is used, but it is better to actually generate a distinct seed
value associated with a username.
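The per-password seed (today usually called a salt) can be sketched as follows. PBKDF2 stands in here for Morris' original transform, and the iteration count is an arbitrary illustrative work factor:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor; tune for your own hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) to store instead of the password. A fresh random
    salt per account means identical passwords hash to different values, so
    one dictionary run no longer cracks every account at once."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-run the transform with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```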
The second, and more serious, part of the Morris Trap is that hardware and storage are no longer expensive. Rather than relying on a mathematical technique that
still allows fairly efficient dictionary attacks, it would be better to physically isolate
passwords in a separate system that can only be queried at a relatively slow pace.
In practice, relatively few attackers do have physical access to the target systems. If
passwords are in a separate machine that is hard to attack, passwords are not nearly
as weak a security mechanism as they are when the memory of the machine is
remotely accessible. Buy a separate computer, give it a painfully simple, secure
interface, and physically protect passwords and other sensitive data.
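The "painfully simple interface" might expose nothing but enroll and a throttled yes/no verify. In this sketch the one-second policy and in-memory plaintext store are stand-ins for illustration (a real box would hash its secrets and persist them), and the clock is passed in explicitly to keep the sketch testable:

```python
class PasswordOracle:
    """Sketch of the 'separate box' interface: it never exports its data,
    answers only yes/no, and refuses to answer faster than once per second
    per account, which caps the rate of any remote guessing campaign."""

    MIN_INTERVAL = 1.0  # seconds between attempts per account (assumed policy)

    def __init__(self) -> None:
        self._store = {}      # username -> stored secret (hashing elided here)
        self._last_try = {}   # username -> time of last verification attempt

    def enroll(self, user: str, password: str) -> None:
        self._store[user] = password

    def verify(self, user: str, password: str, now: float) -> bool:
        if now - self._last_try.get(user, float("-inf")) < self.MIN_INTERVAL:
            raise RuntimeError("throttled: retry later")
        self._last_try[user] = now
        return self._store.get(user) == password
```

Because the host that players actually connect to holds no password data at all, compromising its memory yields nothing.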
It is very hard to have true anonymity online. People tend to forget that they are
sending out a “return address” every time they do anything online via the Internet.
Yes, there are services that can hide online users, but the performance and effectiveness of these services in reality should be questioned.
Anonymity is kind of like cheating—everyone wants to be the only one who
has/does it.
Some online services do not want to support a strong identity management
system, often to allow them to reach a larger audience, but still would like some
level of accountability. Social networks, blogs, and web forums that authenticate
users via username and password or email address are a good example.
The standard solution is usually some sort of “web of trust.” The problem with
a web of trust in an anonymous environment is that there is no cost or penalty for
lying or creating additional identities.
The classic way to attack a “web of trust” is to build your own large collusive
web of untrustworthy people who “trust” each other. This is most easily implemented
by all of them being you. Many individuals create multiple identities at a single online service for perfectly legitimate reasons. Because there is no tie to a real identity
or any cost for creating an identity, it is possible to build an arbitrary reputation by
generating enough identities and relationships to feed your “hero” identity.
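The attack takes a dozen lines to simulate against a naive endorsement-counting scheme (the names and scoring rule are invented for the example):

```python
from collections import defaultdict

def reputation(endorsements: list[tuple[str, str]]) -> dict[str, int]:
    """Naive web-of-trust score: count how many identities vouch for you.
    Nothing ties an identity to a person, so endorsements are free to mint."""
    score = defaultdict(int)
    for voucher, target in endorsements:
        score[target] += 1
    return dict(score)

# Two honest users vouch for each other once...
edges = [("alice", "bob"), ("bob", "alice")]
# ...while one attacker registers 50 free accounts that all vouch for "hero".
edges += [(f"sock{i}", "hero") for i in range(50)]
```

With zero cost per identity, "hero" outranks every honest participant, which is the collusive web described above.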
These anonymity architectures (like many mathematical systems) seemingly
are designed by mathematicians for mathematicians. They exist in a world of equations and protocols, but tend to have real problems when “real life” intervenes.
Little details, like implementing the system in hardware or software (much less the
involvement of less-than-honorable people) can bring the security of these systems
crashing down.
Anonymous systems need some sort of cost for creating additional identities
and motivating people to be honest about whom they are. The best method is a real
financial cost to create an identity, but this often conflicts with the other goals of the
online service that was considering anonymity in the first place.
Even if a working anonymous system could be put together, would it be desirable?
The data to date on anonymous behavior online is pretty abominable—
griefing, cheating, harassment, abuse, spamming, phishing, ID theft, and so on—
and it is not like “free speech” is really protected by these systems. There is no
“right to anonymity.” If the government (or, more likely, a motivated hacker)
wants to find out who you are, they will. They will start tracking down IP addresses,
read actual logs of systems, and find you.
So, at best, we have a veneer of anonymity that encourages bad behavior
without protecting those rare instances where anonymity might have some positive
social value. Bravo!
What people actually can live with (I think) is a system of strong privacy (in the
ordinary sense of the word) and strong identity. Thus, I may be free to explore
alternative experiences with confidence that, as long as I don’t break the rules, very
few folks will need to know who I am. This can be done. Technical systems need to
be combined with good business practices and sensible laws and regulations.
with J. Price
A variety of online problems could be resolved if a user’s age could be confirmed
quickly, reliably, and under a consistent legal framework. Even better, how about a
law with a “safe harbor” from liability for those who play by the rules and make
a good faith effort to weed out underage or other inappropriate individuals from
content they should not be able to access? Unfortunately, we’re not completely
there, yet. Progress is being made.
Age verification and identity verification are important “gateway services”
necessary for a large number of online businesses. Today, unfortunately, there is no
way to provide these services in a way that completely addresses liability concerns.
The Adult industry uses click-agreements and payment systems as a “best practices/
best effort” solution. On the children’s front, we have COPPA, which does provide
an actual means to verify the age/identity of a child (but not of an adult).
Several companies offer general age verification services, including IMVU17
and Second Life18. Because both companies allow businesses to operate within their
environment, should these virtual businesses trust IMVU or Second Life’s age verification or, for that matter, to what extent should IMVU or Second Life trust their
age verification service providers?
Not if the business faces any real liability for failing to accurately verify the age
of a user.
Neither IMVU nor Second Life nor the actual age verification providers offer
any sort of insurance or liability protection to a business or individual who trusts
the service and uses its result to make a business decision. IMVU’s service only claims 90 percent accuracy, which is not very comforting in a world of rampant identity fraud.
These new age certifications do have the advantage of being inexpensive, but
they provide limited value to consumers or businesses. Ironically, one of the
“features” of these new systems is really a weakness—their claim that no information
is retained. If the “evidence” of identity was maintained, it could, at least, be used
as the basis of a fraud investigation and action against any individuals who have
misrepresented their identities. Instead, all we will know is that at some time
some data was provided that the company assumed was associated with a specific individual.
If you are going to target your game to children, follow the law. Know why certain laws apply or why they do not apply. Do not go halfway and kinda-sorta target
children and then discover you “accidentally” have data on children. The rewards
from following laws and regulations will far outweigh the risks of building your
service and taking a chance that the government won’t notice. And, if you are going
to hire a third party for this service, make sure you know where the liability falls if
the system fails. The age verification provider may have a “safe harbor” clause that
covers you, and they might not.
One law that is important in this area is the “Children’s Online Privacy
Protection Act,” frequently referred to as COPPA (often confused with “COPA,”
the “Child Online Protection Act,” which has been successfully challenged in court).
COPPA applies to the online collection of personal information from children
under 13. The Federal Trade Commission (FTC) implements the law through a
variety of regulations and suggested “best practices.” The FTC also sends out warnings to industry when it cracks down on those that violate the law.
COPPA provides a “safe harbor” from liability for those who follow its guidelines. Industry groups or even individual businesses can create self-regulatory
programs to govern compliance with COPPA. These guidelines must meet a checklist of legal requirements and then be submitted to the FTC for approval. Before
approval, the FTC will make the guidelines public and ask for comments on whether
the guidelines should be approved. If the FTC approves your guidelines, then you
will generally have a “safe harbor” from any enforcement action for violations of COPPA.
Although the “safe harbor” is tempting, it might not be the solution if your service targets children under 13 and collects information about the children. In most
cases, those service providers avoid COPPA violations by adhering to strict rules.
First, know whether your service is covered by COPPA. To determine whether a
website is directed to children, the FTC considers several factors, including subject
matter, visual or audio content, the age of models on the site, language, whether advertising on the website is directed to children, information regarding the age of the
actual or intended audience, and whether a site uses animated characters or other
child-oriented features.
Next, you need a carefully crafted and prominently placed privacy policy. Do
more than post a compliant privacy policy—adhere to it. Also, note that the FTC’s
regulations also require, among other things, that you obtain verifiable consent
from the child’s parent before collecting data.
Although COPPA does address legal liability for online identity for children
under 13, solving online identity for everyone is a key problem for the future of
advanced online services. The solution to this problem is important. Companies
will need to be able to get a legal safe harbor for certifying identity and individuals will
need to be held liable for identity fraud or theft. Unfortunately, it will require more
than a technical solution. Someone is going to have to engage government to establish a legal safe harbor for online identity.
There is a wide perception that people can become addicted to computer games in
the US19, in Europe, and in Asia. Whether this is true or not is beyond the scope of
this book. What is relevant is that the perception that games are addicting is creating
a public policy problem for the computer game industry. As of late 2008, there has
been little public response of any kind to this issue from industry associations or
individual companies.
In Asia, China20, Korea21, and Vietnam22 have all taken similar steps to restrict
game usage. These controls differ from ordinary parental controls (see
Chapter 30) in that they are not adjustable by an adult. In general, the response
from the industry has not been very supportive, but most companies do not seem
to be actively fighting regulation.
No business lives in a vacuum and the failure of the games industry to respond
to social concerns about game addiction reflects poorly on the industry and its
maturity. It is also good business for an industry to lead public policy for issues that
affect the industry’s livelihood.
On the other hand, there is a real concern for governments seeking to control
game companies and address game addiction: The global nature of the Internet
makes it easy to move offshore and successfully avoid undesired regulations. This
could give domestic companies a disadvantage compared to their less regulated,
offshore counterparts.
To understand usage controls, it is instructive to look at what the actual restrictions are. China has established limits on game usage that are far from onerous:
Limits apply to minors only.
Play up to three hours per day is not restricted.
Play from three to five hours is penalized at 50 percent of normal in-game benefits.
Play over five hours earns no in-game benefit.
Players must provide valid identities to play.
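The published thresholds reduce to a three-branch function; how a given title applies the multiplier to drops and experience is up to the operator:

```python
def reward_multiplier(hours_played_today: float) -> float:
    """In-game benefit multiplier under the 'anti-fatigue' thresholds
    described above: full rewards up to 3 hours, half from 3 to 5 hours,
    nothing beyond 5. Exact in-game application varies by title."""
    if hours_played_today <= 3:
        return 1.0
    if hours_played_today <= 5:
        return 0.5
    return 0.0
```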
These are not difficult restrictions. In the US, where games are largely based on
monthly subscriptions, it would have no negative impact on revenues (and there
may even be an upside with some players purchasing additional accounts). In Asia,
where subscription games are typically metered hourly, the impact would fall mainly
on extreme players. In both areas, “professional” players, like gold farmers,
would feel the impact the most because these users are playing full-time (eight
hours a day or more).
Many of the “industry concerns” are based on the notion that the system can
be circumvented. Of course it can, but it does not matter. The company is exercising due diligence to protect minors with usage controls. The individual player
would have to take positive action to subvert the security system. The game
company has done its part, and it is up to the individual to also obey the law.
For games with a U.S.-style subscription system, this scenario is probably
“money in the bank.” After all, gold farmers and other heavy players would wind up
buying multiple accounts—meaning more subscriptions—so a game company can
probably match revenues to usage better for extreme players.
The game industry worldwide probably should move to usage controls for
players of all ages. The system would be closer to metering and would largely
answer public policy concerns about game addiction and excess game playing.
Finally, the real gem in this program is the requirement for the use of real identities. Although this may place an additional burden on companies in the short
term to adequately protect privacy, strong online identity is necessary to the growth
and vitality of the industry as a whole.
In China, gold farmers increased the prices of their virtual items in anticipation
of the implementation of the “anti-fatigue” usage controls. Whether this was due to
projected loss of business due to fewer hours of play or because the gold farmers
were concerned about an increase in the cost of their operations is unclear23.
Griefing, fraud, unauthorized gold farming, and so on should become much, much
more manageable with a strong identity system in place. This should lower the cost
of operations for everyone.
The game industry would benefit from leading on issues such as usage limitations instead of simply reacting and focusing on the (alleged) short-term financial
aspects of these legitimate social issues. The endless battles over age restrictions and
success of anti-gaming activists in the US like Jack Thompson (and he has been a
success in putting the entire U.S. video game industry on the defensive in the matter
of labeling and restricting game sales) should show the costs and risks of letting
government and society get “ahead” of the industry on policy matters.
Millions and millions of personal records have been compromised due to negligence
and malice. I’ve received at least two notifications that my personal data has been
compromised: one from a large defense contractor I hadn’t worked for in
years (why was my information even in a “live” database?) and the other I just received from some firm that apparently has my information because of some stocks
that I own. They all reassure me that there is “no evidence that your personal information has been misused.” What kind of “evidence” are they actually looking for?
Welcome to the world of data disclosure, account compromise, and identity theft.
Goodbye privacy.
If you want to understand why there is so little concern about protecting your
data, consider that in the US the standard fine for data disclosure is pretty low. The SEC recently
fined a brokerage firm $275,000 for compromising the data of at least 10,000
customers24 and leaving the data unprotected for over a year. Approximate fine per
person: $27.50 or less. Conversely, in Korea, NCsoft was fined 500,000 Won (around
$500) per person for leaving a log file with 8,500 user IDs and passwords unprotected for just five days25. These cases raise some interesting questions:
If the company can show that it keeps data protected so that a single incident does
not lead to any meaningful compromise, can it avoid liability?
This is certainly possible from an engineering perspective (via encryption, split
data storage, and other techniques). Also, if the company has a strong logging
system in place, it may be able to determine the scope of an incident in sufficient detail to reduce costs and better inform customers of whether they are
actually at risk of identity theft (determine if the data has actually been accessed
as opposed to just exposed and, if it has been accessed, by whom).
Should this be a payment to the consumer or a fine?
If the compromise was of a username and password and no other information,
the password compromise should be considered insignificant, if the company
acted promptly and reimbursed users for any lost data during the period of the
compromise. In this case, a fine is reasonable to encourage companies to avoid
these compromises. The simple loss of a password for an extended period of
time could arguably result in a fine and a payment to affected players for
“losses.” Consumers need to be responsible and encouraged to use passwords
in such a manner that the compromise of one does not affect their other accounts (thus, a company fine).
Losses at the site are one matter, but the compromise of personally sensitive information is different. Credit card numbers, ID numbers, and so on are sensitive, have long-term value, can be used for other crimes, and are
expensive and time-consuming to recover (once you figure out that identity
theft has occurred, which is difficult in and of itself).
Standards for prompt disclosure combined with an established schedule of
fines and payments would encourage companies to take appropriate actions
promptly and exercise better care with this information. Conversely, consumers
should not be rewarded for irresponsible behavior on their part.
Game companies have compromised data: A Japanese MMO compromised
the full account information (excluding payment information) for nearly 300,000
customers because the data was accidentally placed on a download server26 and
hackers gained access to Second Life’s entire user database, including usernames,
real names, and encrypted passwords and encrypted payment information27. There
are also cases of real malicious insiders. A database administrator at a consumer
reporting agency in Florida stole 8.4 million data records and sold them28.
The list of incidents goes on and on. In 2007, Microsoft had several problems with customer service representatives for Xbox Live succumbing to social
engineering to provide passwords for other users’ accounts29, 30. What is frustrating
about the Xbox case is that the customer service representatives should have been
able to use the player’s Xbox console ID, which, presumably, is known to Microsoft
but not to other players, to help confirm a player’s identity. Also, Microsoft should
have had procedures in place to restore the integrity of the players’ accounts in cases
where the accounts are compromised (all security systems need to be designed to
recover from failures).
The costs for game companies can be serious. In 2006, K2 Networks estimated
that they lost $1 million in one year due to hacking, account compromises, phishing,
and identity theft31. The loss was not of direct revenues, but from lost “customers”:
“It’s not lost money generated daily, but lost customers that wouldn’t come back.”
—David Lee, K2 Network Senior Director of Infrastructure and Engineering
If these were lost customers, it is likely that the total losses would actually be a
good bit larger. After all, online gamers tend to stay with the games they enjoy for
a number of years.
The game industry’s experience with personal data disclosures is consistent
with other businesses. The typical cost to a company for each customer record compromised in 2007 was $197—up from $182 in 200632. Game
companies are high value targets. Identity thieves stole 230,000 identities from a
number of online sites in Korea and many of them were used to create dummy
accounts for gold farming in NCsoft’s Lineage games33. These costs clearly indicate
that it is worth investing in improving information security technology and practices for any business that holds customer data. This investment should include
protections against external hackers, errors, and internal crooks.
Phishing attacks are a particular problem for online games. Forged emails that
direct players to sites that download malware or solicit usernames and passwords
are an ongoing problem for the industry. Virtually everyone has been hit: World of Warcraft, Xbox Live, Steam, EverQuest II, EVE Online, Tibia, and probably everybody in between. Tools like Blizzard’s Authenticator security token and
NHN’s challenge/response cards as well as the phone-based authentication systems
being used in Asia are probably the best approaches to fighting these attacks (see the
section “The Registration Problem and Identity Management Systems,” earlier in
this chapter). Customer education and training can help, but social engineering
attacks have a long, consistent history of success34.
by J. Price
Privacy is a growing concern. Consider what type of data you actually need to
retain. You do not have an obligation to secure any data that you do not keep. This
is not a minor point. Keeping data—no matter how securely—means that the data
is susceptible to being stolen or misused in any number of ways. If you require
personally identifiable information from someone to play a game—such as a name,
address, credit card number—you have security obligations, and must respond in
specific ways if a data breach occurs. Issues arise with non-personally identifiable
information, but the obligations are less serious.
Most privacy protection issues occur in the context of online services, although the
collection of personal information about people is not limited to online service
providers, and the laws relating to protecting that information and appropriately
reacting to data breaches are applicable to anyone who holds personal data. Data
breaches can lead to state and federal enforcement actions and result in serious
fines. In 2006, the U.S. Federal Trade Commission (FTC) settled charges with
ChoicePoint, Inc. in connection with a breach involving the personal information
of 163,000 persons. ChoicePoint was required to pay $10 million in civil penalties
—the largest civil penalty in FTC history at the time—and to provide $5 million for
consumer redress35.
These potential fines are in addition to other laws that require you to notify all
individuals whose records were lost if you suffer a data breach. As of August 2007,
approximately 39 states have enacted legislation requiring notification of their
citizens in cases of misappropriation of their personal information36. California is
at the forefront with its tough notification requirements5. Most
states have followed California’s lead. Federal law will not likely be far off, but (as
of the writing of this book) no law has yet been passed, although many drafts have
been circulated.
In the US, privacy controls include a mix of legislation, regulation, and self-regulation. The European Union, however, relies on comprehensive legislation that, for
example, requires creation of government data protection agencies, registration of
databases with those agencies, and in some instances prior approval before personal
data processing may begin. The two fundamentally different approaches to regulating privacy can cause issues for companies engaging in trans-Atlantic transactions.
To bridge these different privacy approaches and provide a streamlined means
for U.S. entities to comply with the EU requirements, the U.S. Department of
Commerce, in consultation with the European Commission, developed a “safe
harbor” framework. The safe harbor is a voluntary means for U.S. companies to
avoid facing prosecution by European authorities under European privacy laws.
Certifying to the safe harbor assures consumers and EU organizations that your company provides “adequate” privacy protection, as defined by the European Union.
Organizations that decide to participate in the safe harbor must comply with
the safe harbor’s requirements and publicly declare that they do so. To be assured
of safe harbor benefits, an organization must self-certify annually to the U.S.
Department of Commerce, in writing, that it agrees to adhere to the safe harbor’s
requirements. These requirements include elements such as notice, choice, access,
security, data integrity, and enforcement. The game operator must also state in its
published privacy policy statement that it adheres to the safe harbor. Further details
can be found at the U.S. Department of Commerce Safe Harbor website37.
1. F. Dai (2005), “China: Real Name Registration for Instant Messenger,”
2. Kim Y. (2007), “KOREA: Busy Websites Need Real Name Registration,”
3. Korea.Net (2005), “Foolproof ID to be Adopted for Online Registration,”
4. Federal Trade Commission (1998), “Children’s Online Privacy Protection Act of 1998,”
5. California (2002), “SB 1386,”
6. Kim T. (2006), “Internet Operators Face Suit Over Privacy Infringements,”
7. J. Robertson (2008), “ID Thieves Drive Down Prices for Stolen Data,”
8. S. Bowers (2006), “Players Walk Away as US Law Wipes Out 90% of PartyGaming’s Poker Revenue,”
9. S. Davis (2006), “Payment Card Spoofing: Subverting Identity & Cheating,”
10. Xinhua (2006), “Fake Identities Open Up Games, Blogs, Websites,”
11. Kim T. (2006), “Teenage Porn Case Fuels Online Identification Debate,”
12. S. Burns (2006), "Korea Guards Against Online ID Theft,"
Chapter 29 Identity, Anonymity, and Privacy
13. Blizzard (2008), “Blizzard Authenticator (United States Only),”
14. Wohn D. (2006), “NHN, NCsoft and Identity Security in Korea,” (original article at JoongAng Daily no longer available)
15. J. Wyss (2006), "Sentry to Play Cop, Chaperon Online," (original article at Miami Herald no longer available)
16. R. Hsu (2007), “Shanda’s Aurora Bans Transsexuals,”
17. IMVU (2007), “Age Verification FAQ,”
18. Virtual World News (2007), “Second Life Adds Identity Verification,”
19. J. Wagner (2008), “Addiction to Video Games a Growing Concern,”
20. S. Davis (2007), “China: Anti-Fatigue System Regulation Translated,”
21. Kim T. (2006), “Bill to Limit Time for Online Games,”
22. B. Hayton (2007), “Vietnam Restricts Online Gaming,”
23. H. Lee (2007), “Fatigue System Pushes Up Virtual Item Prices,”
24. G. Risling (2008), “Brokerage Firm to Pay Fine for Security Breach,”
25. Chosun Ilbo (2006), “Landmark Ruling Against ‘Lineage’ Maker Over Data Leak,”
26. W. Wyman (2006), “Japanese MMOG Suffers Privacy Leak,”
27. V. Cole (2006), “Second Life’s User Database Breached,”
28. D. Goodin (2007), “IT Pro Admits Stealing 8.4M Consumer Records,”
29. J. Evers (2007), "Microsoft Probes Possible Xbox Live Fraud,"
30. R. Lemos (2007), “Account Pretexters Plague Xbox Live,”
31. L. Sullivan (2006), “Thieves Targeting MMOGs Prompt Tighter Security,”
32. R. Blitstein (2007), "Cost of Compromise: A Customer Record Costs a Company $197 Each in 2007," (original article at San Jose Mercury News no longer available)
33. S. Burns (2006), “Identity Theft Victims to Sue NCsoft,”
34. J. Timmer (2008), “Fake Popup Study Sadly Confirms Most Users Are Idiots,”
35. Federal Trade Commission (2006), “ChoicePoint Settles Data Security Breach Charges; to Pay $10
Million in Civil Penalties, $5 Million for Consumer Redress,”
36. Consumers Union (2007), “Notice of Security Breach State Laws,”
37. (2008), “Welcome to the Safe Harbor,”
Protecting Kids from Pedophiles, Stalkers, Cyberbullies, and Marketeers
Online games and virtual worlds whose primary customers are children are
probably the fastest growing portion of the online game industry. It is
gratifying that many of these services take the issue of protecting children
seriously. Protecting children online is an increasing concern to mainstream
media—it has even been the lead letter to “Dear Abby”1. Parents and public officials
worry about pedophiles and stalkers harassing their children. The available data
actually indicates that cyberbullying and harassment by other children is a much
more significant problem.
For example, the media often cites a claim that one in seven children has been contacted by a sexual predator. The more accurate figure is that 1 in 25 children received an online sexual solicitation from someone (not necessarily a predator) that included an attempt to meet in real life. Interestingly, most adult solicitors flatter youth; they don't lie, and they don't misrepresent their age or their interest in sex. The young people involved are not pre-teens but typically 13 to 15 years of age, and the (potential) crime is statutory rape, not forcible rape2. The details of this issue are important enough to quote at length:
1) These solicitations did not necessarily come from “online predators.” They
were all unwanted online requests to youth to talk about sex, answer personal
questions about sex, or do something sexual. But many could have been from
other youth. In most cases, youth did not actually know the ages of solicitors.
When they believed they knew, they said about half were other youth.
2) These solicitations were not necessarily devious or intended to lure. Most
were limited to brief online comments or questions in chat rooms or instant
messages. Many were simply rude, vulgar comments like, “What’s your bra size?”.
3) Most recipients did not view the solicitations as serious or threatening. Two-thirds were not frightened or upset by what happened.
4) Almost all youth handled unwanted solicitations easily and effectively. Most reacted by blocking or ignoring solicitors, leaving sites, or telling solicitors to stop.
5) Extremely few youth (only two) were actually sexually victimized by someone they met online. This number was too small to be the basis of a reliable
estimate of how many youth in the population get sexually victimized from
online meetings.
1 in 25 youth (about four percent) got “aggressive” sexual solicitations that
included attempts to contact the youth offline. These are the episodes most
likely to result in actual victimizations. (About one-quarter of these aggressive
solicitations came from people the youth knew in person, mostly other youth.)
1 in 25 youth (about four percent) were solicited to take sexual pictures of
themselves. In many jurisdictions, these constitute criminal requests to produce
child pornography.
1 in 25 youth (about four percent) said they were upset or distressed as a result
of an online solicitation. Whether or not the solicitors were online predators,
these are the youth most immediately harmed by the solicitations themselves.
—Crimes Against Children Research Center (December 2007)
More recent data shows that online harassment (cyberbullying) has risen to nine percent of children who go online, still substantially less than the 17 percent who are bullied in real life3.
The Internet, in general, and online games, in particular, are easy targets for
child protection advocates. Symantec recently completed a study of online behavior, and some of the most interesting data shows how little parents know about what their children are doing online. In many cases, parents underestimate the time their children spend online by a factor of 10, and only a third of parents take advantage of available parental controls4.
Good parental controls could be a real business opportunity, not a burden for
game companies. Incidents such as the case of an 11-year old New Zealand boy who
used his mother’s credit card to spend $1500 on virtual items in There.com5 are not
the sort of attention the industry needs. Parental controls can even become powerful marketing tools: T-Mobile, AT&T, and Verizon have all announced initiatives
to give children an “allowance” for mobile phone minutes and SMS messages6.
The game industry has taken some steps to address these issues. Microsoft
launched its Family-Safe Gaming Initiative7 and NCsoft has a similar PlaySmart
Chapter 30 Protecting Kids from Pedophiles, Stalkers, Cyberbullies, and Marketeers
Initiative8. What the game industry really needs is to launch and sustain an industrywide campaign to educate parents about appropriate and safe gaming, as well as
provide parents with useful tools.
Monitoring inter-player communications and game play is particularly important for children’s games, but is also a sensitive topic in games for general audiences
and for adults. Game operators need to balance providing a safe, functional game
environment with addressing privacy and legal concerns.
One of the most misunderstood subjects for children’s games is The Children’s
Online Privacy Protection Act (COPPA)9. COPPA has been misinterpreted and
caused many game creators to steer away from creating online games and virtual
worlds that allow children to play. In fact, COPPA can provide a legal safe harbor
for those online services that comply with its requirements.
Increasingly, there are third-party companies that offer services to help game
providers protect children. With the exception of certain providers certified under
COPPA, these third-party firms can only provide a “best effort” or “best practices”
service and cannot really help a game company reduce its liability if problems occur.
The ultimate goal for the game industry would be for online games and other
online services to effectively protect children globally through a standard and
accepted combination of procedural and technical measures. Today, it is a daunting
challenge to determine adequate, let alone best, practices.
As noted previously, the public perception is that online pedophiles and stalkers are a much worse problem than is actually the case. The fact that such incidents are newsworthy when they occur is a tribute to their relative rarity. Cyberbullying, on the other hand, is relatively widespread: 25 percent of girls and 11 percent of boys in middle school reported being harassed electronically at least once in the previous two months, according to a 2005 study of cyberbullying by Clemson University researchers10.
Psychological studies show that people will do more, and go further, than they normally would if their identity is obscured. Online, it is easy to remain anonymous, and the normal frustrations of daily life can lead to what psychologists call disinhibition. Cyberbullies, pedophiles (or child molesters), and stalkers all take advantage of the relative anonymity of the Internet to pursue their goals (see notes 11, 12, 13, and 14 for additional details on the topic).
Although there are some technical tools that an online service can use to automatically filter online communications looking for behavior associated with
bullying, grooming, or stalking, the key is human oversight, preferably by the
children’s parents. Although such technologies may be helpful, it is unlikely
that they will transfer liability from the online service. The Clemson research
recommended the following guidelines for parents:
Teach your kids that, although your identity can be hidden when you’re
online, you still should treat people with respect.
Make sure your children have no online secrets from you, and that they
never share private information over the web or meet with someone they
only know from an IM session.
Put their computers in a space, such as a den, where you can see what
they’re doing.
You should know the passwords for all their accounts and check on what
they are posting.
—Dr. Robin Kowalski, Clemson University
Technically, online games should provide tools to make it easy to report
griefing or other suspicious behavior and support extensive logging both for the
company and for parents (see the next section for more information). Again, this
is an area where the industry has an opportunity to provide leadership through advocacy, public awareness, and education.
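As a rough illustration of such reporting and logging tools, the following is a minimal sketch. All names here (AbuseReport, ReportLog, the category labels) are hypothetical, not from any real game API; a real system would add authentication, retention policies, and moderator workflow on top of something like this.

```python
# Sketch of an in-game abuse-report tool with a log that can be reviewed
# both by company moderators and by a child's parents. Illustrative only.
import time
from dataclasses import dataclass, field, asdict

@dataclass
class AbuseReport:
    reporter_id: str   # player filing the report
    offender_id: str   # player being reported
    category: str      # e.g. "griefing", "harassment", "solicitation"
    chat_excerpt: str  # captured context for moderator review
    timestamp: float = field(default_factory=time.time)

class ReportLog:
    """Append-only log; parents see only entries involving their child."""
    def __init__(self):
        self._entries: list[AbuseReport] = []

    def file_report(self, report: AbuseReport) -> None:
        self._entries.append(report)

    def moderator_view(self) -> list[dict]:
        # Company staff can review everything.
        return [asdict(r) for r in self._entries]

    def parent_view(self, child_id: str) -> list[dict]:
        # Parents can review anything their child reported or was named in.
        return [asdict(r) for r in self._entries
                if child_id in (r.reporter_id, r.offender_id)]

log = ReportLog()
log.file_report(AbuseReport("kid42", "bully7", "harassment", "give me your password"))
print(len(log.moderator_view()), len(log.parent_view("kid42")))  # 1 1
```

The key design point is the split between the full company view and the filtered parent view, which mirrors the book's suggestion that logging serve both audiences.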
The security issues related to protecting children’s communications are, essentially,
the same issues that exist for adults—with substantially increased visibility and
sensitivity to failures. For adults, the problem is nuisance griefing (see Chapter 21);
for children, issues of cyberbullying, stalkers, and pedophiles raise the stakes much higher. There are several strategies companies commonly use to secure "kid-friendly" online services:
Menu-Driven Communications—Menu-driven systems allow only certain
words and phrases to be constructed based on user selection from a list. The
advantage of this system is that it essentially requires no monitoring, as the participants are incapable of expressing any inappropriate phrases. This is often
the default communication service supported by many kids’ game sites. These
systems are not perfect. Children and adults have created “covert channel”
communications systems on these services, or even used decorations in any
personal, customizable space in the game (or even clothing, I’d guess), to
exchange data and allow expanded chat (see the “Friend Codes” bullet)15.
Whitelist Chat—These services allow players to use only specific, preapproved words for communication. Site moderators have to continually
monitor game slang to identify new euphemisms for inappropriate topics such
as cybersex. There are also issues where motivated players can use legal phrases
to signal inappropriate information.
Blacklist Chat—This type of chat service blocks words and phrases that are inappropriate. It needs to be constantly updated as new “problem words” are
identified. In practice, this should not be difficult. A real-time monitoring team
can be alerted whenever a new word arrives and it can be added to either a
whitelist or blacklist. In practice, relatively few words are used, even with misspellings, so this is not a challenge for an automated filter. Club Penguin has
100 moderators who add 500 to 1,000 words per day to the site’s filters16.
Personal Blacklist—Sites can watch for personal information, such as addresses, phone numbers, and email addresses, that children attempt to share. This information may not be sensitive to everyone, but
when associated with a specific child, it indicates a potential problem—often
innocent, sometimes not.
Silent Filter—Club Penguin created this innovation for their Standard Safe
Chat system. Many sites will filter a specific word or phrase only if it is deemed
inappropriate. Some sites will also notify users that they have said something
wrong. Club Penguin’s system fails silently: The sender does not know that the
system rejected her message. The goal of the system is to not give the offender
any positive feedback—either from the recipient or from the security system—
so that she does not have information to “tune” her abuse strategy17.
Friend Codes—These sequences of letters and/or numbers need to be exchanged
outside of a game or online service before communications can occur. They are
most closely associated with Nintendo’s DS and Wii consoles. Many users
criticize them, but most children’s online games use the technique. Because this
technique forces a communication outside the game, the participants need to
have some sort of independent, hopefully pre-existing, relationship of trust.
Monitoring—Many children’s games use manual monitoring by company
staff to look for troublesome behaviors of all sorts. These people act as supplements to the automated filters and can intervene in various ways when they
identify a suspicious communication. The content of the communication does
not need to trigger intervention, only the fact that it is a new or an anomalous
pattern of behavior. Strictly speaking, monitoring does not always need to occur
in real-time; it can be slightly delayed or even used for simple review of logs.
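Several of these strategies can be combined in one filter pass. The toy sketch below combines a blacklist of "problem words," a personal-information check, and Club Penguin-style silent failure; the word list and patterns are illustrative only, and a real deployment would maintain far larger, constantly updated lists.

```python
# Toy message filter: blacklist words, personal-information patterns,
# and a "silent filter" result the sender never sees. Illustrative only.
import re

BLACKLIST = {"cybersex", "password"}  # hypothetical problem-word list
PII_PATTERNS = [
    re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),   # phone numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email addresses
]

def filter_message(text: str) -> tuple[bool, str]:
    """Return (deliver, reason). On a silent failure, the sender still
    sees the message as sent; it simply never reaches the recipient."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & BLACKLIST:
        return False, "blacklisted word"
    if any(p.search(text) for p in PII_PATTERNS):
        return False, "personal information"
    return True, "ok"

print(filter_message("meet me in the castle"))    # (True, 'ok')
print(filter_message("what is your password"))    # (False, 'blacklisted word')
print(filter_message("call me at 555-123-4567"))  # (False, 'personal information')
```

Note that the reason string is for moderators and logs only; per the silent-filter design, it would never be shown to the sender.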
These strategies can be used either in isolation or together. Many children’s
sites have different tiers of service with menu-driven communications available to
all and more flexible chat options for paying customers or individuals who have
provided detailed information about their identity.
As with other online services that have relied on text-based communications,
the rise of audio and video communications can pose a real challenge for protecting
children. Filters, word lists, and the other strategies that have been effective so far
for protecting children online are not really viable for video or audio communication services.
In addition to protecting communications, some games implement additional
parental controls, but typically these are quite rudimentary. Games like World of
Warcraft, Lineage, and the Xbox Live service all have basic time limits, schedules,
and payment status tools. There are a number of additional capabilities that could
be useful for children’s online services and would make the services even more
appealing to parents:
Time Limits—An overall time allowance for hours of play per day or per week.
Schedules—Hours during which online activities are permitted or prohibited.
“Grounding”—The ability to prohibit a child from playing because of misbehavior.
Special Reward—The ability to temporarily add hours, open up a schedule, or
otherwise alter the game usage for the child. This could even be implemented
by a separate interface to a cell phone, allowing the parent to be “untethered”
from the playing child.
Payment Status—Current funds in the player's account.
Allowance—A rate at which the child can spend money in the game account.
This is particularly applicable for free-to-play games or multi-game services.
Message and Activity Log Review—John Smedley, CEO of Sony Online
Entertainment, noted that it would be helpful for parents to be able to review the full logs of their children's messaging activity in an online game18. Actually,
there is no reason that parents should not be able to review the activities of their
child online beyond just messaging, perhaps even a full game or online session
replay feature.
Approved Friends List—Parents could have the ability to approve friends or move them into different categories. This could extend beyond the basic "communication type" limitations that many services support today.
Event Notification—Alert parents of notable events in the game. These do not
even need to be relevant to child safety (although it is probably unwise to abuse
them too much for marketing purposes). This would be another opportunity
to directly contact parents via SMS messages to cell phones or email and create
a strong, direct relationship with the parents.
Ratings System Support—Online games are going to need to develop their
own ratings system or become more involved in the traditional games rating
process. The more information parents have, the more comfortable they will
feel with these online services.
Rich Family Structure—It would be very beneficial to support multiple children
and parents/guardians under a single account with appropriate privileges and
capabilities for each. The service could even support temporary access for baby
sitters and for friends of the children, which could also be good for business.
Parent-Child Contract—Barbie Girls came up with an excellent idea for its
Parents’ Place service: a tool that helps parents and children build a contract for
when, where, and how the child can use the online service19.
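Several of the controls above (time limits, schedules, "grounding," and an allowance) could combine into a single session-admission check. The sketch below is purely illustrative; every name and field is an assumption, not any game's real API.

```python
# Sketch of combined parental controls: time allowance, schedule,
# grounding, and a spending allowance. All fields are hypothetical.
from dataclasses import dataclass

@dataclass
class ParentalControls:
    weekly_minutes: int             # overall time allowance per week
    allowed_hours: range            # e.g. range(16, 21) = 4pm to 9pm
    grounded: bool = False          # parent-imposed play ban
    spend_allowance_cents: int = 0  # in-game spending cap

def may_play(ctrl: ParentalControls, minutes_used: int, hour_now: int) -> tuple[bool, str]:
    if ctrl.grounded:
        return False, "grounded"
    if hour_now not in ctrl.allowed_hours:
        return False, "outside schedule"
    if minutes_used >= ctrl.weekly_minutes:
        return False, "time allowance exhausted"
    return True, "ok"

def may_spend(ctrl: ParentalControls, spent_cents: int, price_cents: int) -> bool:
    return spent_cents + price_cents <= ctrl.spend_allowance_cents

ctrl = ParentalControls(weekly_minutes=300, allowed_hours=range(16, 21),
                        spend_allowance_cents=500)
print(may_play(ctrl, minutes_used=120, hour_now=17))  # (True, 'ok')
print(may_play(ctrl, minutes_used=120, hour_now=22))  # (False, 'outside schedule')
print(may_spend(ctrl, spent_cents=450, price_cents=100))  # False
```

A "special reward" or "grounding" action would simply be a parent-authenticated update to this record, which is why exposing it through a separate parental interface (even a cell phone) is straightforward.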
One feature that many parents would likely appreciate would be a “parental
dashboard” that would allow a parent to centrally manage all of the online activities
of their child or children at all of the sites that the children use. It often seems that
game developers view parental controls as a burden instead of an opportunity to take
care of the real customer. After all, it is usually the parent who pays for the service.
A market is beginning to develop for security products and outsourced services
related to protecting children20. In addition to technical capabilities and costs, companies that are investigating such services should consider whether they can transfer any liability to the service provider.
Monitoring is not used solely for children’s games. There are additional issues
related to monitoring, particularly privacy, that should be considered regardless of
whether the participants are children or adults (see Chapter 29).
with J. Price
Children are often the direct or indirect audience for many games, including online
games. Providing the game to them, and collecting various data associated with
their use of the game, will require game companies to adhere to the Children’s
Online Privacy Protection Act of 199821 (COPPA). (COPPA should not be confused
with COPA, the Child Online Protection Act, which addresses the exposure of
children to online pornography.) COPPA applies to websites and online
services operated by persons or entities under U.S. jurisdiction for commercial
purposes that are either directed to children under 13 or whose operators have actual knowledge that children under 13 are providing information online.
COPPA is not optional. It is the law, and companies have been fined substantially for violating it; in one case, the fine was $1 million22.
Online game developers seem to be petrified of children using their services.
The prospect of complying with COPPA and not marketing to kids seems to have
driven many companies to restrict their market to children 13 and up or adults.
It is interesting to note that parents are very active in children’s games. In the
social network and doll “dress up” game, Stardoll, a survey found that 80 percent of
surveyed children and 54 percent of surveyed parents visit the site daily. Parents seem
heavily involved: 75 percent visiting the site along with their daughters at least once a
week, 64 percent visiting on their own, and 60 percent having their own accounts23.
COPPA does not ban marketing to children; it provides rules on how to do it.
The law details, for example, what an online game operator must include in a privacy policy, when and how to seek verifiable consent from a parent or guardian,
what responsibilities an operator has to protect children’s privacy and safety online,
and includes restrictions on marketing to children under 13.
An important element of COPPA is potential legal immunity. The Federal
Trade Commission (FTC) implements a “safe harbor” designed to encourage
increased industry self-regulation. To take advantage of the safe harbor, a company must employ a set of compliance guidelines that the FTC has approved24.
Protecting children’s identity is a particular challenge. In the US, a child’s privacy
is protected under COPPA, and as a practical matter mishandling children’s identity information is bad business and worse public relations. Kids don’t see identity
protection this way. They want to play games, and there is nothing like saying “no”
to encourage devious behavior. The goal of the registration process, even for adult players, is to deter bad behavior, but some parents will collude with their children to "game" the system.
A good system should include both carrots and sticks for establishing and protecting identity: rewards and punishments. Verifying identity online is a hard problem;
face-to-face identity verification is not. However, verifying face-to-face identity is
not cheap. One answer is to get the users to pay to have their identities verified.
How about a T-shirt?
One way to get users to pay for identity verification is through the physical
delivery of goods. Delivery services (such as UPS and FedEx or even the Post Office)
offer the ability to confirm identity via signature receipt. It is fairly expensive to pay
for this service purely as an identity verification method. However, consumers will
often pay to receive a T-shirt, poster, or other item (the "carrots"). The marginal cost of adding signature verification to a product delivery is low and, of course, the consumer is now actively engaged in marketing your game. The worst-case scenario is
that you might have to convince your marketing department to subsidize the shirts
or items.
An approach to handling identity misrepresentation is assessing an obvious,
serious financial penalty for misrepresenting information and the threat of legal
action (the “sticks”). For example, the game provider could levy a $250 charge for
handling and revoking a fraudulent identity. And, because identity theft and credit
card theft are both crimes, the service provider should be willing to pursue legal
action and do so conspicuously and publicly. Both of these threats would be highlighted during the registration process with a clear, explicit acknowledgement by
the user when completing the process. If an incident does present itself, the company
should be willing to follow through by enforcing the fine or taking legal action, or
both. If the service provider does take legal action, it should do so publicly.
Publicity will aid the company’s credibility and deter future problems.
with J. Price
Unfortunately, players may create or post child pornography on your online service, and it must be dealt with quickly and appropriately. Federal law imposes significant penalties for the online (and off-line) distribution of child pornography.
Any entity providing any online service, including a game, must report any facts or
circumstances around possible violations of federal child pornography laws to a law
enforcement agency25 or the Cyber Tip Line at the National Center for Missing and Exploited Children (1-800-843-5678). Any person who knowingly and willfully fails to make a required report
will face serious consequences, including fines of $50,000 for the first offense and
$100,000 for subsequent offenses26.
The legislation does not require an online service provider to monitor any user,
subscriber, or customer, or the content of any communication of any user, subscriber, or customer, and immunizes game operators and ISPs from civil lawsuits if
they take good-faith actions to comply with the legislation27. But game operators
must respond quickly to any report of child pornography and involve law enforcement just as quickly. Turning a blind eye to an end-user complaint regarding
content, especially if it is—or could be—child pornography, will very likely result
in prosecution.
1. Dear Abby (2007), “Online Video Game Threat Catches Parents Unaware,”
2. Crimes Against Children Research Center (2007), "1 in 7 Youth: The Statistics About Online Sexual Solicitation,"
3. Crimes Against Children Research Center (2007), "Internet Safety Education for Teens: Getting It Right,"
4. Symantec (2008), “Parents, Get a Clue!,”
5. R. Markby (2006), “New Zealand Boy Runs Up $1,500 Debt in There,” (original story at Stuff no longer available)
6. B. Tedeschi (2008), “How to Give Your Child an Allowance, the Mobile Way,”
7. Edge (2006), “Robbie Bach Fronts Safe Gaming Campaign,”
8. S. Davis (2006), “NCsoft Launches ‘PlaySmart’ Information Program to Advance Online Gaming
Safety, Security," (via PlayNoEvil; original article at Yahoo! no longer available)
9. Federal Trade Commission (1998), “Children’s Online Privacy Protection Act of 1998,”
10. B. Levine (2006), “Taking On the Cyber Bullies,”
11. D. Stang (2008), “How Pedophiles Groom Victims,”
12. K. Lanning (2001), “Child Molesters: A Behavioral Analysis,”
13. (2007), “The Facts About Cyber Bullying,”
14. P. Agatston (2007),“Cyber Bullying Quick Reference Guide for Parents,”
15. The Good Reverend (2007), “How to Chat in Chatless Disney Online Game,”
16. P. Elliot (2008), “MMO Week: Club Penguin,”
17. S. Davis (2007), "Feature Article: Inside Club Penguin and Its Child Safety Program (Updated: Club Penguin Purchased by Disney),"
18. J. Smedley (2006), “Be Vigilant,”
19. Barbie Girls (2008), “Barbie Girls Parents’ Place,”
20. Virtual World News (2008), “Xivio Signs Crisp Thinking for Online Child Protection,”
21. 15 U.S.C. §§ 6501–6506
22. Federal Trade Commission (2006), "Xanga.com to Pay $1 Million for Violating Children's Online Privacy Protection Rule,"
23. N. Mitham (2008), “Stardoll: Fame, Fashion and Friends…and Mothers?,”
24. Federal Trade Commission (2006), "How to Comply with the Children's Online Privacy Protection Rule,"
25. 42 U.S.C. § 13032(b)(1)
26. 42 U.S.C. § 13032(b)(3)
27. 47 U.S.C. § 230; 18 U.S.C. § 2702(b)
Dancing with Gambling: Skill Games, Contests, Promotions, and Gambling
There is one place where playing games purely for fun collides hard with the
real world: when gaming becomes gambling. Although game violence and
game addiction have recently been the focus of media attention and public
policy concerning the game industry, gambling is an area where game companies
can rapidly get into very serious trouble.
I am addressing gambling, contests, and related topics in a book on game security for two reasons:
Game companies can get into a lot of difficulty by veering into the gambling
industry unintentionally.
Once cash or prizes are involved, games are not just for fun, and the threat level
to a game’s integrity goes up dramatically.
The gambling gaming industry and the non-gambling gaming industry rarely seem to acknowledge each other's existence. However, the explosive growth of online gaming and the wide range of business-model experiments within the gaming industry are creating "accidental casinos" and "unintentional lotteries."
Governments have no sympathy for companies that make these kinds of mistakes.
The temptation for game developers to edge towards gambling is pretty obvious. The potential to win cash or prizes is a huge incentive to participate in a game
and substantially increases a player’s willingness to pay to play. Designing a casino
game seems very simple: a deck of cards, a couple of dice, a wheel and ball, some
green felt, and you’re in business. All legal U.S. gambling (including casinos, Indian
casinos, charitable games, bingo, card rooms, and lotteries) generated $90.93 billion in gross gaming revenue in 20061, certainly enough money to catch anyone's attention.
Chapter 31 Dancing with Gambling
In general, a game must have three elements to be considered “gambling”: (1) consideration to play (often money); (2) an element of chance; and (3) a prize. If the
game lacks any one, it is not a gambling game…usually2. Game business models
can be broken into the following categories (if you are considering gambling and
related legal issues):
No Prize: No Gambling, No Problem—Even if you charge to play or have an
element of chance, if there is no prize, there is no gambling.
No Consideration: Contests and Sweepstakes—As long as you do not have to
pay to enter and all entrants have an equal chance to win all prizes (equal dignity is the legal term), the game is not gambling. There are certain U.S. states
that are more restrictive in this regard. Personally, I use the "McDonald's Test": get a McDonald's prize ticket and look at where the contest is "Void Where Prohibited by Law." This is a good list of the places where you can't run contests.
Consideration, No (Little) Chance, and Prize: Skill Game—If the game does
not contain an element of chance, it is a skill game and not considered gambling. Typically, in the US, about 30 states use a “preponderance of skill” standard, another 10 have a “pure skill” standard, and the rest don’t allow skill
games (contact your lawyer).
Consideration, (Mostly*) Chance, and Prize: Gambling—A game that includes all three elements is not necessarily prohibited, but it is almost certainly
regulated in most jurisdictions. For example, in the US, fantasy sports leagues,
horse racing, and certain online lotteries are excluded from the Unlawful
Internet Gambling Enforcement Act (UIGEA) [3]. There are also different rules
for charitable gaming. (*) Interestingly, the law in Canada is reversed: if there is
an element of skill in the game, it is not considered gambling [4].
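The three-element shorthand above can be sketched as a rough triage function. This is a sketch only; the labels are illustrative, not a legal test, and real statutes vary widely by jurisdiction:

```python
def classify(consideration: bool, chance: bool, prize: bool) -> str:
    """First-pass categorization using the three-element shorthand.

    This mirrors the rule of thumb only; skill thresholds, state
    restrictions, and charitable-gaming carve-outs all vary, so treat
    the result as a prompt to consult counsel, not a legal conclusion.
    """
    if not prize:
        return "no prize: not gambling"
    if not consideration:
        return "contest/sweepstakes (check state rules)"
    if not chance:
        return "skill game (check skill standard)"
    return "gambling: likely regulated"

# A free-to-enter sweepstakes with prizes:
print(classify(consideration=False, chance=True, prize=True))
```

The point of the sketch is that removing any single element changes the category entirely, which is why the safest design change is to remove an element completely rather than tweak it.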
Nations and states may treat the gaming/gambling transaction as occurring
where the player is, not where the server or business is located. This is an issue that
needs to be explored more carefully from both the perspective of protecting a gaming company and as a matter of public policy. Some jurisdictions, most notably the
US, France, and Israel [5], have stated that if either party to the transaction is in their
jurisdiction, the government has jurisdiction over both parties. The risk for a gaming company is that it may be operating in a jurisdiction where gambling is legal,
but be at risk of prosecution in another jurisdiction because one of its players is
located there.
Protecting Games: A Security Handbook for Game Developers and Publishers
Game designers can work their way around these issues, if they do so carefully.
Trading card games allow players to create value for the cards independently of the
value set by the game company. Games can also be redesigned to remove an element of chance. For example, Duplicate Bridge [6] eliminates chance from a card
game by having all partnerships play the same hands. The bottom line is that if you
have any questions or doubts about whether your game is a gambling game, contact an attorney knowledgeable in this area of law.
User-created content and emergent game play are usually wonderful things when
they are part of your game. Players have taken online games and expanded them
and transformed them far beyond the initial plans or intent of their creators. Game
companies can experiment with new rewards and incentives for playing. All of
these factors can sometimes cause a game to stumble into gambling: Linden Lab’s
Second Life has been at the forefront of player creativity. One of the obvious, and,
for a while, quite popular, options for players was to create a virtual casino that
used the game’s currency. In 2007, Linden Lab asked the FBI to review these in-world services [7] and shortly thereafter banned both advertising for gambling [8] and all
of the virtual gambling sites inside the virtual world [9]. In Korea’s intensely competitive online gaming market, the Lohan MMO launched and was surprisingly
popular. It had a Baccarat “mini-game” that proved popular enough to concern
Korea’s Media Rating Board (equivalent to the U.S.
Entertainment Software Rating Board), which asked that the game be removed [10].
Even more challenging to the boundary between gaming and gambling is the
case of Sony’s Station Exchange. As discussed in the gold farming section of
Chapter 22, Sony launched this online sales service as part of its strategy to fight
gold farming. However, because Sony seems to have profited from the transactions
and many of the virtual items had an element of chance involved in their acquisition, there were questions raised as to whether the UK’s new Gambling Act would
require Sony to get a license [11]. Sony could perhaps avoid this potential problem if
it set a flat listing or processing fee instead of charging a commission on each transaction. NHN’s game Reign of Revolution embraced a more direct form of capitalism
by allowing players to tax all of the transactions in regions that they controlled. This
could potentially be quite lucrative, given the wide availability of real-money trading sites in Korea [12].
As a world leader in online gaming, Korea is struggling to clarify the distinction between gaming and gambling online. The country has large-scale gold farming
and, at the same time, casual gambling games (such as the card game Go-Stop and
poker) that are widely played for large amounts of virtual currency. The government is considering penalizing players who trade more than 300,000 Won (around
$290) in virtual currency per month [13].
Typically, contests are pretty safe and non-controversial. There is no legal problem
as long as players don’t have to pay to enter and they have an equal chance of winning the prizes. It is pretty easy to determine whether the game is a contest.
Skill games are more problematic. Because payments are required to play, the
boundary between jail time and a good business is based solely on the game design
and implementation. Governments are not always helpful: The UK recently determined that poker was a game of chance [14], whereas Denmark decided it was a game
of skill [15].
Game developers have adapted a number of casual games to use the skill games
business model [16]. Developers have even converted first person shooters into skill
games, although to date, none of these endeavors has been financially successful [17].
There are a number of issues that game developers should consider when designing a
skill game or converting an existing game to the skill game business model:
Does the Game Have Enough Skill?—There are a lot of games that are quite fun
that don’t require much skill. The entertainment comes from the graphics, animation, and sound; simple game play; and substantial rewards. Because of the
legal issues, developers have to be quite careful to ensure that the game will pass
the “preponderance of skill” or “pure skill” test, depending on where the game
company plans to operate and for which customers they wish to make the game
available.
Vulnerability to Attack—Many games are fun but not secure or even able to
be secured. Single-player games often have optimal solutions that can be exploited and many multi-player games are very vulnerable to bots (see Part III
on cheating). These design flaws may, in fact, be part of the appeal of the nonskill variants of the game, as players often enjoy the satisfaction of victory, even
if it is gained easily.
Single Player Game Design Issues—Very few online games for single players
are designed to be implemented on the server. All of the game logic is on the
player’s computer and the sole interactions with the server are to download
the game and upload a score. These games are virtually impossible to protect,
although they are commonly used for advergames and skill games.
Migration Issues—If a developer is working with an existing game, it is very
tempting to reuse the already built and tested code. Because security is much
less of a priority for a (non-skill) game, the existing game may have serious architectural and game design issues that could make it difficult to port securely.
Skill Dominance/Sufficient Audience—Some games have very high learning
curves and some players are substantially better than others. First person shooters often fit this profile, as do deep, strategic games like chess. These games
make poor skill games because many typical players will have no confidence
that they can win or improve enough that they can ever win. One reason that
poker is so successful is that pretty much everyone thinks that they are an above
average player.
Another issue for skill games is that players do not necessarily trust the game
service. Because money is on the line, players are likely to be concerned about insider cheating, shills (players paid by the game site to play the game), and other
forms of fraud.
Contests, skill games, and gambling games are potential targets of all of the security
threats discussed elsewhere in this book. Players will cheat, hack, gold farm, abuse
tournaments, grief, and subvert their identities. In other words, some players will
do pretty much anything to win cash or prizes. There are several potential security
issues that are worth discussing further.
Online casinos and skill game sites can be intentionally criminal enterprises.
Because they accept player payments and are potentially rewarding players with
cash or prizes, there are many ways to defraud customers. Unfortunately, reputation is not very effective as a mechanism for preventing fraud because it is so easy
to launch a new site and so easy to market it. The lack of regulation and the ability to
shop for favorable jurisdictions make it easier to set up and profit from a scam.
Game scams can be fairly subtle since players expect to lose a portion of the
time that they play. A slight change in the payout odds at a casino can turn into a
huge amount of profit. Having shills (players affiliated with the game site) periodically win or defeat players can sustain their trust in the game. One of the reasons
that we created our SecurePlay anti-cheating software was to help players
independently check that games are fair [18]. Of course, the easiest scam is to simply
refuse to pay out winnings.
The growing popularity of online games has also brought out some pyramid
schemes tied to games. No doubt other forms of fraud will emerge.
Bots are a hard problem for game developers to stop. At the end of the day, it is impossible to tell the difference between a human player using a mouse and keyboard
and an automated program driving a mouse and keyboard. Gold farmers use bots
against MMOs and are successful because it is economically worthwhile. Any time
there is money involved, there is real potential for someone to figure out how to automate game play undetectably. As these types of online services grow, it would not
be surprising to find “poker farmers” and “skill game farmers” using automated
tools to exploit these games. These automation tools don’t need to be perfect, only
sufficiently good that they can reliably make money without being easily detected
by the game sites.
There are really only two solutions. The first is to build games that are not “bot-able”
by designing game play so that it has a substantial psychological element; this is
what has kept pokerbots at bay, so far [19]. The other option is to combine online with
face-to-face play. Face-to-face competition in a controlled environment will keep
bots out by definition. Public humiliation for failing in front of a large audience
may further deter otherwise anonymous botters.
Video feeds of remote, live games have been part of the online gambling industry
for a number of years. Some players believe that these systems are more secure. But
it is well known that traditional, regulated casinos periodically have problems with
cheaters and an ordinary player with a webcam is highly unlikely to be able to
detect any suspicious activity. Wholesale fraud by game sites is also possible. After
all, the same technology that allows the virtual “first down line” to appear on your
television screen during a football game [20] could be used to present whatever cards
or dice values you would like to a player.
with J. Price
Even small changes or additions of new features to a game could unwittingly trigger a host of laws and regulations. With a single change, a game can go from not
being regulated at all to being subject to the same laws and regulations as the largest
casinos in the nation or being outright illegal. To complicate matters, laws change
and courts often interpret provisions differently depending upon the jurisdiction of
the court and specific application of the law.
As noted previously, there are three elements in games that make a game a
gambling game: (1) consideration to play (often money); (2) an element of chance;
and (3) a prize. There are limitations to this shorthand rule, however. This is an industry with a variety of interweaving laws and regulations. It is often difficult to
make a clear, legal distinction between various types of games. Definitional issues
arise, in large part, because the laws and regulations are often purposely vague (to
encompass present, future, perhaps currently unknown issues), and vary widely
from one jurisdiction to the next.
Not knowing whether your game incorporates gambling is not a legal defense
against charges of running an illegal casino or lottery. Just as with speed limits, law
enforcement will not care whether you were aware of the law when a potential
offense occurs.
Of course, one way to decrease the number of legal issues you have to handle is
to make an effort not to create problems and to avoid pitfalls. The easiest way to
keep your game out of the gambling category is to simply remove one of the three
elements—not just tweak an element so it is arguably not part of the game, but
completely remove it. For example, a game without any type of prize is not gambling, anywhere, anytime.
If you develop a game that includes elements of gambling, you must be aware
of criminal and civil laws and regulations that exist at both the federal and state
level. Worldwide, gambling is often slightly less regulated than in the US, where
there are very specific laws and regulations. There are also a variety of federal and
state laws and regulations that do not specifically relate to either gaming or gambling, but are highly relevant to game developers, such as laws regarding privacy,
consumer protection, fraud, money laundering, online content, as well as other
laws and regulations that might be triggered depending on the type of game, where
it is offered, and where it is played.
You also should consider that the federal (and state) government will not be
shy when it comes to prosecuting something it considers a crime, even if it seems
obvious to you that its reading of the law is incorrect. Whether they win or lose in
a lawsuit against you, it will be a distraction and may set your business off course.
It will also likely be a costly, time-consuming experience.
All of these different parts operate in a legal machine that includes something
referred to as “prosecutorial discretion.” Under U.S. law, government prosecuting
attorneys have broad authority to choose whether or not to bring charges, and
what charges to bring, in cases where the evidence would justify charges.
“Prosecutorial discretion” is typically the answer to the question: “Well, if it’s illegal, why is so-and-so doing it?” The fact that someone else is “doing it” does not
mean that “it” is legal, or a prosecutor is not interested in the conduct; it simply
means the person has not been prosecuted for it—yet.
The most important law used by federal prosecutors for gambling is the Interstate
Wire Act of 1961 (often called the Wire Act) [21]. This law addresses those engaged in
the business of gambling and transmitting information relating to gambling by a
wire (or wireless) communication device. There are exceptions written into the
law, and not all courts agree on its breadth. The U.S. Fifth Circuit Court of Appeals,
for example, has ruled that the Wire Act applies only to sports betting and not other
types of online gambling [22]. In contrast, the law has been interpreted by some, including the U.S. Department of Justice, to mean that all online gambling is illegal.
Another important federal law, the Illegal Gambling Business Act of 1970, prohibits gambling businesses in general [23]. This law is particularly interesting because
it is one of a few that intersect with state law. For a game business to be illegal under
this law, the federal government must first establish that it is a gambling operation
in violation of state or local law. This federal law exempts Las Vegas casinos, for example, where gambling businesses are legal under state law. But most online games
are not regionally offered or limited to one state. In fact, the very success of a game
can be the result of revenues in jurisdictions you never considered. A successful
game could also trigger this and similar laws if a state law is violated, even unwittingly.
Caution is warranted, especially if your game is of the type that might be considered gambling in some jurisdictions, and a legal game in others. If a problem
arises in one of the more conservative jurisdictions, you could have an additional
problem under federal law. A federal prosecutor can literally “make a federal case
out of it.”
Another federal law, referred to as the Travel Act, also may cause issues for unsuspecting game operators, and is another example of how an isolated act can turn
into a much larger issue. This law is aimed at prohibiting interstate travel or use of
an interstate facility in aid of an unlawful business enterprise such as illegal gambling. Similar to the Illegal Gambling Business Act, the Travel Act first requires a violation of another law to trigger the federal law. If the game is defined as illegal
gambling in a jurisdiction, or if there is some other legal issue, that initial violation—or attempt to violate the law—will lead to further issues if interstate facilities
were used to further the unlawful activity.
Similarly, the subject matter of a game can cause problems. For example, the
Professional and Amateur Sports Protection Act of 1992 [24] makes it unlawful for any
government (including states and tribes) to authorize legal wagers that are based on
sporting events. Exceptions to this law exist for some states, such as Nevada, which
has its own specific laws concerning sports wagering.
In addition to laws that “make a federal case out of it,” states can cause plenty of difficulties. Worse yet, any number of states can sue simultaneously, and force you to
defend yourself in multiple jurisdictions at once. Every state has laws to distinguish
between games and gambling, and otherwise address issues that arise with games
and gambling. Each also has laws in place to protect its citizens in a variety of ways
that do not directly relate to games, and may not have been drafted with games in
mind, such as laws requiring appropriate disclosures, tax laws, data security statutes,
and catch-all laws that are used to protect consumers in any number of circumstances.
Although each state regulates games in some way, two states stand out—
California and New York. Each has aggressive laws covering games and gambling.
New York provides a good example of a state that bars gambling in its constitution [25]
and prosecutes cases vigorously. Under New York law [26], “games of chance” are
separately addressed, including a specific exception for bingo for senior citizens [27].
Similarly, California has laws regulating the difference between gambling and
gaming [28] and very aggressive consumer protection laws.
When you consider a state’s tolerance for any type of game or related activity,
it is instructive to look first at the state’s statutes and regulations, and then at the
state’s court cases with a focus on:
Jurisdiction (whether the state laws apply to a particular game and entity or
person providing the game).
The elements that distinguish gaming from gambling (reviewed previously)
that are often interpreted differently by different states.
Even if two states have similar laws, they might apply their laws differently. This
happens because each state has a myriad of cases that have gone through the court
system that interpret important elements of games in the context of varying facts.
For example, at least one court in Washington state has found that consideration does not have to be money. Consideration could be someone’s time
and effort to obtain prize slips, so even though the game pieces were available without
purchase, the game might still be gambling in Washington [29]. In contrast, a
court in a different state (Michigan) found that any effort that is a mere inconvenience to the participant is insufficient to qualify as consideration, and the
same type of game was therefore determined not to be gambling [30].
To avoid any confusion about the legal status of different types of games, some
states simply prohibit all pay-for-play skill games that include prizes. Other states
use a sliding scale, with what is referred to as a “predominance test.” If the element
of skill in a game predominates over chance then the game is permitted. For some
games, such as chess, the distinction is clear; for other games, such as poker, the
answer is more ambiguous because poker includes both chance and skill elements.
Skill games are allowed in about 30 U.S. states if the games have a preponderance
of skill; in 10 states a skill game is legal only if it is a pure skill game; and in the remaining 10–12 states, skill games are illegal.
Unfortunately, there is no standard, simple process to determine which legislation or regulations are applicable to a given game within the US. You need to look
carefully at court precedent and applicable laws and regulations.
1. American Gaming Association (2007), “Gaming Revenue: Current-Year Data,”
2. Direct Marketing Association (2008), “Sweepstakes Advertising: A Consumer’s Guide,”
3. Wikipedia (2008), “SAFE Port Act,”
4. S. Davis (2008), “Notes on Skill Games and Contests from the Next Generation in Gambling
5. B. Hansen (2007), “Israel Bonds with US Over Online Betting Ban,”
6. Wikipedia (2008), “Duplicate Bridge,”
7. A. Reuters (2007), “FBI Probes Second Life Gambling,”
8. Linden Lab (2007), “Advertising Policy Changes,”
9. Linden Lab (2007), “Wagering in Second Life: New Policy,”
10. Jun S.H. (2006), “Virtual World or Virtual Vegas?,”
11. Virtual World News (2007), “UK Gambling Act: How to Protect Your Virtual World,”
12. Cho J. (2006), “Jungle Law and Taxes Apply in ‘R2’,”
13. Bae J. (2008), “‘Habitual’ Online Item Buyers Face Sanctions,”
14. BBC (2007), “Man Guilty in Poker Skills Case,”
15. Online Poker News (2007), “Danish Court Rules Poker Skill Game,”
16. FUN Technologies (2007), “WorldWinner and PlayFirst Team Up to Create Diner Dash Online Cash
17. P. Elliot (2008), “Kwari Shuts Down,”
18. IT GlobalSecure (2008), “SecurePlay,”
19. J. Woo (2008), “Poker Bots: The Future of Online Poker is Doomed!,”
20. S. Brannan (2008), “How the First-Down Line Works,”
21. 18 U.S.C. § 1084
22. United States v. McDonough, 835 F.2d 1103, 1105 n. 7 (5th Cir. 1988)
23. 18 U.S.C. § 1955
24. Pub. L. No. 102-559, 106 Stat. 4227–4229 (1992) (codified at 28 U.S.C. §§ 3701–3704)
25. New York Constitution, Article 1, Section 9
26. New York Penal Law, Article 140, 225.00 (“Gambling Offenses”)
27. New York Penal Law, Section 213-23 “License Required; Exemption for Senior Citizen Games”
28. Gambling Control Act, California Code, Chapter 5, sections 19800–19958
29. Washington v. Safeway Stores, Inc., 450 P.2d 949 (Wash. 1969)
30. ACF Wrigley Stores, Inc. v. Olsen, 102 N.W.2d 545 (Mich. 1960)
Denial of Service, Disasters, Reliability, Availability, and Architecture
You’ve got to keep things running. If there is no game service, there is no game
to play, no players, no money coming in, nothing to pirate, no one to cheat,
and no gold to farm. If one looks at IT protection purely from a budgetary
and business impact perspective, this chapter should have been first. I’ve had close
encounters with two IT disasters and one near-miss during my career (so far).
First, an “operational” service was “back-hoed” when my employer had a cable cut
that took down a critical secure mail server. The email service was actually a prototype, but it was being used by senior executives across the federal government, so
no matter what our contract said formally, we were operational. We had purchased
our data lines from two separate telecommunications companies and so we thought
we were safe against single points of failure. We were wrong; the connections were
redundant, not independent. The two telecommunications carriers had both
purchased space on a single physical cable on one segment of their connection to
our site.
Second, we had a contract to operate the main Internet access point for a large
government department. Our building’s roof was being repaired and we had a rain
storm. This would have only been a minor problem, except there was a huge
amount of rain and the massive roof was flat with a single six-inch hole in the center, down which the water spun through like a whirlpool. I have never been in a
building before where it was raining inside. Of course this hole was right on top of
our equipment room. Water streamed down over numerous, brand new, high-end
servers and networking equipment (the folks in the network operations center
didn’t see, hear, or notice a thing).
Finally, in another rain-soaked incident some years ago, the already-saturated District of Columbia had four inches of rain in 45 minutes and the water
rapidly backed up through a large portion of the city (and totally drained away 30
minutes later). Some areas of the city were six to eight feet deep in water. Our office
was lucky. There is nothing like walking through six inches of water to go turn off the
main power and hoping that nothing really bad is going to happen. Nothing did, or
else I wouldn’t be writing this, and we lost surprisingly little IT equipment.
All sorts of things can go wrong. Whatever you plan for, something else will
come along that you haven’t considered. Nature will try to get you, your service
providers will try to get you, your customers will try to get you, and hackers and
other intentional trouble makers will try to get you.
Planning for natural and manmade disasters is totally thankless before it occurs
because of the “unnecessary” cost. But it is totally necessary (whether a disaster occurs or not), even if you don’t cover every contingency. There are books on contingency planning, but a lot of the needed planning is very procedural and specific to
your business.
However, it is increasingly easy for online services to scale rapidly and, at the
same time, reduce their vulnerability to failures. Virtualization technologies like
Xen [1] and VMware [2] and cloud computing, as well as the commoditization of data
center services are good for business as well as good for security.
Designing your system architecture to be deployed rapidly on leased servers or
virtual services, such as Amazon’s EC2 [3], saves money, improves scalability, and ensures
availability. (As a side note, the continuing rapid evolution of server technology
makes leasing hardware compelling for most applications as well as a great way to
control cash flow. This is a huge change in basic IT thinking from just a few years ago.)
Denial of service attacks can be done just “because” or with serious intent: In 2004,
hackers threatened online bookmakers with a denial of service (DoS) attack during
the Grand National horse race and the Super Bowl (American) football events. These
blackmail attempts, assumed to be from Eastern European criminal groups, targeted
the huge portion of annual sports wagers that are placed on these competitions [4].
Gambling sites are not the only target of denial of service attacks. In 2005,
Square Enix’s Final Fantasy XI MMO was also the victim of a distributed denial of
service attack [5]. In early October 2007, most of Korea’s major RMT sites suffered
simultaneous denial of service attacks (although some folks in the game industry
may have been pleased). The sites took themselves offline for several days to tighten up
their security [6].
It is possible to attempt to blackmail an MMO with a DoS attack, but as the
services do have outages periodically, it is not clear if this would be as financially serious as it is for a sports wagering site. Any site that needs to be operational to earn
revenue is a potential target—any business from ad-driven sites to online casinos.
However, it is much more effective to target sites that have traffic that is highly concentrated at specific times, because the attack does not have to be sustained nearly
as long to have serious impact on the victim.
Sites that support rich, user-created content can be targeted by in-game denial
of service attacks. Linden Lab’s Second Life has suffered a number of such attacks
over the years [7]. However, user-created content could also be used to launch denial of
service and other attacks on client computers, not just on the central service, which
could be potentially more damaging to the company (and its relationship with its
customers).
There are four types of denial of service attacks:
Hardware—Attacks that result in changes to the target servers’ or associated
network devices’ firmware so that they no longer operate.
Network—Attacks that overwhelm the network connection to the victim’s site
so that legitimate data can no longer reach the target system.
Platform—Attacks that target the standard operating system and shared applications, such as the network stack, on the victim system’s computers.
Application—Attacks that target the victim system’s specific application.
There are an increasing number of strategies that are available to help fight
denial of service attacks. Some of the simplest strategies are architecture changes. By
hosting an online service at one or more major data centers, it is much more difficult for hackers to simply overwhelm the bandwidth of the site or servers. If the target system is located at a small data center, the company should work with its
upstream network providers to ensure that traffic rate filtering is performed so
that the data that comes over a smaller connection has been purged of standard
DoS attack traffic (an unusual concentration of traffic from a site is often an indicator
that the computer is a “zombie” and a member of a botnet used to attack networks
and distribute spam).
It is unfortunate that there is no accountability today for ISPs to guarantee that
the data they provide or pass through to their clients is clean. (Hackers will also
modify the source address of data from compromised computers to hide the source
of an attack. Although this is hard to detect by the victim computer, it is easy for the
host ISP to detect: Any data generated from a client computer should have an ISP-issued or authorized IP address.)
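The ISP-side check described in the parenthetical above, verifying that a packet's source address falls within an ISP-issued block, can be sketched with Python's standard ipaddress module. The prefixes below are hypothetical documentation ranges, not real customer allocations:

```python
import ipaddress

# Hypothetical prefixes the ISP has issued to its customers.
ASSIGNED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_valid_source(src_ip: str) -> bool:
    """Ingress filter: accept a packet only if its source address falls
    inside an ISP-issued prefix. A spoofed address from a compromised
    machine will usually fall outside every issued prefix."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ASSIGNED_PREFIXES)

print(is_valid_source("203.0.113.7"))   # inside an issued prefix: True
print(is_valid_source("192.0.2.99"))    # spoofed, outside all prefixes: False
```

In production this filtering happens in routers at line rate, not in application code; the sketch only shows the logic the text describes.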
Defeating hardware and platform attacks is usually the responsibility of the
equipment provider. Basically, the providers need to deliver systems that only allow
authorized updates to the firmware to be installed and these updates can be installed
only by authorized individuals.
Firewalls, intrusion protection systems, and other security hardware and software tools can be used to attempt to keep denial of service attacks at bay. These
products are designed to protect “generic” online services. Online games can also
take advantage of their custom communications protocols to improve their security.
First, because most online game services include general website and other non-game-related applications, these sets of servers and services should be separated
from game servers or shards (groups of game servers that work together). This will
allow commercial security tools to work more effectively and have to process less
data. Using a content-distribution system or web-caching service like Akamai can
also reduce the risk of DoS attacks on websites.
Because online game services often process very large amounts of network traffic, typical denial of service security solutions can be quite expensive (the cost for
these solutions is mainly driven by the need to process volumes of network packets
in real time). However, most game services have very structured network data and
this can be used to their advantage.
Game services can use proxy servers to ensure that only data that is formatted
in the structure required by the game’s protocols gets passed on to internal servers
for further processing. Because these proxies only have one purpose—to parse
game messages—they can be simple, fast, and replicated easily. They may even
translate a public game message format into an internal format to support logging
and other security functions. Commercial “deep packet filtering” solutions can implement similar features to a game protocol proxy, but they are burdened with
their need to handle a wide range of protocols. Another advantage of a good proxy
server system is that it can help prevent buffer-overflow and other malformed data
attacks such as that suffered by World of Warcraft in August of 20078.
A simple extension to a game protocol proxy is support for a “white-list” authorization service. Because virtually all games establish a session using some sort of login
service, the same login service can create a registry of IP addresses and random authentication token pairs that it provides to both the client and the game’s proxy servers:
// client creates a secure session with the license or login server

// message sent to the client:
ClientAuthorizationMessage = (SecureSessionID, AuthorizationToken);

// message sent to the game proxies to add to their white lists:
ClientRegistrationMessage = (IPAddress, SecureSessionID, AuthorizationToken);
Chapter 32 Denial of Service, Disasters, Reliability, Availability, and Architecture
When a proxy server receives a message from an alleged client, it can rapidly
validate the IP address with SecureSessionID and AuthorizationToken to determine whether it should even begin to parse the remainder of the message. This can
be used to speed the rejection of data from botnets or other malicious users.
A complete white-list proxy service will include the ability for the login service
to de-register a session and IP address. The proxy service should also be able to do
this itself if it suspects that a game client or game session has been compromised
(even core game servers should be able to “black-list” a game client based on anomalous data or activities).
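As a sketch, the proxy side of such a white-list service might look like the following. The class and method names are illustrative, not from the book; a real deployment would replicate this table across proxy instances and time out stale entries.

```python
import secrets

class WhiteListProxy:
    """Sketch of the white-list check a game proxy could run before it
    spends any effort parsing a message. All names are illustrative."""

    def __init__(self):
        # ip -> (session_id, auth_token), populated by the login service
        self.white_list = {}

    def register(self, ip, session_id, auth_token):
        """Called by the login service after a successful login."""
        self.white_list[ip] = (session_id, auth_token)

    def deregister(self, ip):
        """Called by the login service, or by the proxy itself when a
        session looks compromised (the 'black-list' case)."""
        self.white_list.pop(ip, None)

    def accept(self, ip, session_id, auth_token):
        """Cheap check run before any game-protocol parsing."""
        entry = self.white_list.get(ip)
        if entry is None:
            return False
        known_session, known_token = entry
        # constant-time compare avoids leaking token bytes via timing
        return (secrets.compare_digest(known_session, session_id)
                and secrets.compare_digest(known_token, auth_token))

proxy = WhiteListProxy()
proxy.register("203.0.113.7", "sess-42", "tok-9f3a")
assert proxy.accept("203.0.113.7", "sess-42", "tok-9f3a")       # known client
assert not proxy.accept("198.51.100.1", "sess-42", "tok-9f3a")  # unknown IP
```

Because the check is a single dictionary lookup plus two comparisons, the proxy can reject botnet traffic before doing any protocol parsing at all, which is exactly where the cost savings over generic deep-packet solutions come from.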
It is always surprising how routinely online games underestimate traffic at startup.
The term “overwhelmed servers” is a routine part of the first hours, days, or weeks
of far too many game services. There are ways to plan for huge surges or spikes in
systems and bandwidth, but none of them is free and they take some serious effort
and planning. The game industry isn’t the only one that faces this challenge:
Sporting events such as the Olympics can create huge, short-term demands for
hardware and bandwidth. Large traffic spikes can also be addressed through the
game’s core architecture. EVE Online’s architecture allows multiple solar systems
to be housed on a single server so that sparsely populated systems can share resources9.
Although game servers may be costly to bring online, it is easier to support
surges of license registrations for new games. License registration does not require
massive amounts of data to be sent or the complicated server support an MMO needs.
I am not privy to inside data on these cases, but I suspect many games use the same
system architecture for license registration as they do for regular online game play:
direct socket connections to the license servers, and so on. This is really not necessary. Game registration systems could easily use a standard HTTP POST with
a polite retry system to help handle new game traffic spikes. As a default, the server
would respond with the registration key for the application, but it could also send
a cached “wait 60 seconds” message to spread out requests. There are many tools to
support scaling up websites, ranging from application acceleration tools to virtual
servers (which provide the ability to add new servers on the fly) and cloud services
(pure “cloud” applications that are not tied to individual hardware platforms), as
mentioned previously. Of course, the actual license processing would be carried out
on a backend database and application server behind a web server in a typical three-tier architecture, allowing both front-end and backend servers to be added easily.
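A registration flow like the one just described might look like the following client-side sketch. The `post` callable and the response fields (`status`, `seconds`, `registration_key`) are assumptions made for illustration, not a real protocol.

```python
import time

def register_license(post, license_key, max_attempts=5):
    """Client-side license registration over plain HTTP POST with a
    polite retry loop. `post` stands in for an HTTP POST to the
    registration server and is assumed to return a dict such as
    {"status": "ok", "registration_key": ...} or
    {"status": "wait", "seconds": 60}."""
    for _ in range(max_attempts):
        response = post({"license": license_key})
        if response["status"] == "ok":
            return response["registration_key"]
        # the server answered with a cached "wait N seconds" message to
        # spread out the launch-day spike; honor it before retrying
        time.sleep(response.get("seconds", 60))
    raise RuntimeError("registration service busy; try again later")

# demo with a fake server that asks the client to wait once, then succeeds
replies = iter([{"status": "wait", "seconds": 0},
                {"status": "ok", "registration_key": "REG-123"}])
key = register_license(lambda body: next(replies), "LIC-999")
assert key == "REG-123"
```

The cached "wait" reply is deliberately cheap for the server to produce, so the front end can shed load during a spike while clients back off politely instead of hammering the service.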
Protecting Games: A Security Handbook for Game Developers and Publishers
Another option for handling high-volume license registrations is to take advantage of email. After all, if a player is online, she almost certainly has an email account.
The game could build a structured registration message for the game server that a
player could easily paste into an email, and then cut and paste the response to the
message when it is received from the game site. Just like web servers, standard email
systems are built to scale well; there is no need for game developers to create new
solutions when off-the-shelf answers are available.
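One way to build such a structured, paste-friendly registration message is to serialize the fields and encode them so they survive email clients that re-wrap or re-encode text. The field names below are invented for this sketch; the book only says the message should be structured and easy to cut and paste.

```python
import base64
import json

def build_registration_message(product_id, license_key, machine_id):
    """Client side: build the blob a player pastes into an email.
    Field names are illustrative assumptions."""
    payload = json.dumps({"product": product_id,
                          "license": license_key,
                          "machine": machine_id}, sort_keys=True)
    # base64 keeps the blob intact through line-wrapping mail clients
    return base64.b64encode(payload.encode("utf-8")).decode("ascii")

def parse_registration_message(blob):
    """Server side: recover the fields from the pasted blob."""
    return json.loads(base64.b64decode(blob))

blob = build_registration_message("MyGame", "LIC-999", "machine-01")
assert parse_registration_message(blob) == {
    "license": "LIC-999", "machine": "machine-01", "product": "MyGame"}
```

The server's reply can use the same encoding, and standard mail infrastructure handles the queuing and retry behavior that a custom registration service would otherwise have to build.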
Any online game service is going to have a number of fairly standard major systems:
The Game(s)—The actual game servers, their supporting proxy servers, databases, and other application servers.
The Website—The basic website and community services. The website may
also include the front-end support for ecommerce and payments.
Game Operations Center—The server, network, and application management
functions for the game service. The administration of the website may be included within the operations center or handled separately.
Corporate IT—The back office where the game company staff do their work.
Conceptually and practically, it is important to separate the day-to-day work of
the staff from the actual operations of the game service. There may also be a
separate duplicate or small version of the game service itself for testing and
back-up purposes.
Content Delivery—Many games use a mix of internal and outsourced content-delivery systems to deliver game software and updates to players.
Figure 32.1 shows one such sample game operations architecture.
There are several components of an online game service that are particularly
important from a security perspective:
User Financial Database and Payment Processing Services—Controlling
access to any resource that can be monetized is critical. The most valuable target
in a game service is any database that contains user credit card or other financial information. These financial and payment systems should be physically
separated, if at all possible, and their electronic interfaces closely controlled. If
a game has valuable virtual assets, especially with a potential for gold farming,
the database that contains those assets should be handled with a level of care
similar to that used for a financial system. In addition, financial and payment
systems should be carefully controlled and all actions logged.
FIGURE 32.1 Sample game operations architecture
User Account System—Any information about players should be carefully
protected. Even though the US does not have strong privacy laws, other countries do and protections like the previously discussed California Data Disclosure
law can make any error costly (see Chapter 29). If these systems do not need to
be easily available in real time online, restrict access to them as much as possible.
If there is data that does not need to be retained or collected, don’t collect it.
Data that doesn’t exist can’t be compromised.
Logging Systems—Although not shown in Figure 32.1, it is critically important to make logging and analysis systems for all parts of the online game service
as independent, thorough, and reliable as possible. There is no accountability
without history. Logging is not sufficient; it is crucial to build tools to facilitate
the analysis of all logs as well as include alerts, metrics, and statistical information for critical logged events.
Player Systems—It is actually possible to use the player’s systems as part of the
overall reliability, availability, and security of the game service. Although
player computers may be in “untrusted” and “untrustworthy” hands, they are
also very independent of the core game servers.
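The earlier point that logging alone is not sufficient, and that alerts and metrics must be built on top of the logs, can be sketched as a toy analysis pass. The event format, source names, and threshold here are all invented for illustration.

```python
import collections

def scan_log(events, threshold=3):
    """Minimal log-analysis sketch: alert when any source produces too
    many critical events. A real system would stream events, track
    rates over time windows, and emit metrics as well as alerts."""
    counts = collections.Counter()
    alerts = []
    for event in events:
        if event["level"] == "critical":
            counts[event["source"]] += 1
            if counts[event["source"]] == threshold:
                alerts.append("ALERT: %s reached %d critical events"
                              % (event["source"], threshold))
    return alerts

events = ([{"level": "critical", "source": "login-server"}] * 3
          + [{"level": "info", "source": "proxy-2"}])
assert scan_log(events) == ["ALERT: login-server reached 3 critical events"]
```

Even a pass this simple turns raw history into accountability: the log records what happened, and the analysis decides when someone needs to look.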
Traditional IT infrastructure and management systems are not often considered part of a game service. However, it is important to think of the whole company
as an integrated system—including external entities and interfaces like payment
processors and outsourced services. All of these components must work together to
provide a reliable service. Unplanned interactions between these components can
be the source of problems: In late 2007, CCP Games took EVE Online offline due to
a suspected security breach10. There was some discussion that the attack originated
via a key-logger on a regular office IT computer that had been hacked. True or not,
understanding the actual interaction between system components and the activities
of individual employees, partners, and players is critical for security success.
There are many, many circumstances that can knock a system offline. Fires, rain,
floods, earthquakes, hardware failures, power outages, vendor problems… the list
goes on and on. As I noted at the beginning of this chapter, it is impossible to plan
for everything. In December of 2006, Valve Software’s Steam online service was
knocked offline by a major storm in the Pacific Northwest11. Sony Online
Entertainment, having experienced a number of wildfires in San Diego, was able to
keep the game running during a major fire in October 2007, but with minimal
technical and customer support12.
Perhaps the easiest way to avoid catastrophic failures is to build a truly distributed system. Multiple sites with independent service providers can handle disasters
more easily simply because all of the locations in a well-designed distributed system
are unlikely to be vulnerable to a single disaster. Because of their experience with
hurricanes, many organizations with headquarters in the Southeastern part of the
US have moved aggressively to be able to deal with a disaster that can knock out a
large region13.
More modest failures can still cause serious problems. Outages at individual
servers or databases can lose valuable data. A Singapore MapleStory server failed
and lost hours of play for several thousand players14. Many games include a substantial game client. This system could conceivably be used as part of a backup and
recovery strategy by storing critical game data on player computers. Note that it is not necessary to store a specific player’s data on her own computer, and the data obviously needs to be stored in a form the player cannot modify or even read: encrypted, signed player data.
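The "signed so the player cannot modify it" half of that idea can be sketched with a standard-library HMAC; a real system would also encrypt the payload (for example with an authenticated cipher), which is omitted here to keep the sketch stdlib-only. The key name and record fields are assumptions.

```python
import hashlib
import hmac
import json

SERVER_KEY = b"server-only-secret"  # held server-side, never shipped to clients

def seal_backup(player_record):
    """Produce a tamper-evident blob the client can store but not forge.
    Sketch only: shows the signing half, not the encryption half."""
    payload = json.dumps(player_record, sort_keys=True).encode("utf-8")
    tag = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode("utf-8"), "tag": tag}

def restore_backup(blob):
    """Recompute the tag and reject anything the player has modified."""
    expected = hmac.new(SERVER_KEY, blob["payload"].encode("utf-8"),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, blob["tag"]):
        raise ValueError("backup was tampered with")
    return json.loads(blob["payload"])

record = {"gold": 120, "level": 7}
blob = seal_backup(record)
assert restore_backup(blob) == record
```

Because only the server holds the key, a player's machine becomes cheap, independent backup storage without becoming a trusted party.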
Online games are first and foremost services: Their value and revenue comes from
being available to customers. Outages can lead to customer dissatisfaction and
abandonment. Design choices can make a game more or less vulnerable to disasters
and substantially affect the cost of ensuring service availability. Often online services
will provide “free” time to compensate for outages to protect their relationship
with their customers.
This “free” time is far from free for the game provider. Employee salaries were
still paid while the service was down and other expenses were incurred. The “free”
time itself costs additional potential revenue. This is not to say that companies
should not offer customers compensation, but avoiding the need to incur such
expenses should be part of the game service’s business plan.
Planning for contingencies must not be a paper exercise. It should not result in
PowerPoint presentations and unread binders sitting on shelves. Actively working
to avoid costly outages is usually a better business strategy than just recovering
from failures when they occur.
1. Xen (2008), “Xen,”
2. VMware (2008), “VMware,”
3. Amazon (2008), “Amazon Elastic Compute Cloud (Amazon EC2),”
4. Evening Times (2004), “Blackmail Threat to Net Bookies,” (article no longer available from the Evening Times)
5. W. Knight (2005), “Attack on Game Raises Prospect of Online Extortion,”
6. Chosun Ilbo (2007), “Hackers Threaten Cyber Money Sites,”
7. T. Walsh (2006), “Rogue Lily Disrupts ‘Second Life’ Service,”
8. A. Modine (2007), “World of Warcraft Exploit PKs Servers,”
9. B. Drain (2008), “EVE Evolved: EVE Online’s Server Model,”
10. N. Breckon (2007), “EVE Online Database Security Breach Leads to Downtime of Game, Website,”
11. McWhertor (2006), “God Hates Steam, Too,”
12. Sony (2007), “SOE Support Services Suspended During Emergency,”
13. FDIC (2008), “Lessons Learned from Hurricane Katrina: Preparing Your Institution for a
Catastrophic Event,”
14. A. Siew (2007), “Server Glitch Loses Gamers Points, Virtual Cash,”
Scams and Law Enforcement
“You can’t cheat an honest man; never give a sucker an even break,
or smarten up a chump.” —W.C. Fields1
Everyone is looking for an edge, some sort of advantage to get ahead of everyone else. One of the things that breeds cynicism in security professionals is the constant reminder of how reliably corruptible virtually everybody is. Another group of avid students of human frailty is con artists.
In general, there are two types of scams that target online games. The most
common scams offer players an advantage in a game: some sort of cheat or aid that
makes the game easier for them than it is for everyone else. Sometimes the scammers offer a cheat tool that works but also has some “extra features,” such as a key-logger or other malicious software. Other cheat-aid scams are more brazen. They
offer a fake cheat; something that the scammers can show on YouTube or offer for
sale (or even for free) on a website.
The second type of scam is pretty new for online games. It is a Ponzi scheme or
pyramid scam that uses a game as the “front business” to conceal the fraud. In some
sense, this is flattering to the game industry: Games are widely seen as a “hot new
business” and con artists are using them to pull in unsuspecting individuals looking to cash in on the industry’s popularity.
Games and other online services are facing a range of criminal activities, including scams and fraud. It is important to understand how your game company should
work with law enforcement. Game companies may seek out the help of law enforcement for recourse after a crime, but game companies can also be contacted by law
enforcement to support an existing criminal investigation. Companies need to be
cautious because sometimes law enforcement officials make inappropriate requests
that may put a business in legal jeopardy or otherwise endanger the company.
Chapter 33 Scams and Law Enforcement
The competition between players that underlies most games makes finding “suckers”
for scams pretty easy. The same passion about a game that leads some people to
play 20 hours or more per week also brings out some people’s desperate need to get
ahead. The scammers are there to fulfill that need to win at any cost. Sometimes, the
tools are pretty innocuous. In 2006, a crooked inside developer turned a legitimate
poker tool that tracked the “rake” (the commission on each hand) collected by online poker sites into a piece of malware that also installed four tools that collected
username and password information from the poker players and sent it over the
Internet to the criminals2. In 2008, a company released a tool for EVE Online that
was supposed to automatically queue up skills for players so that they would not
have to log in at odd hours, which also happened to include a Trojan program that
stole passwords3.
What is interesting about both of these tools is that the game developers themselves could easily have provided these features in the basic game application: It
would be a simple courtesy for a poker site to provide tracking information about
its own fees. In fact, the reason players want tools like this is that they suspect that
the game site is cheating them on the rake. For EVE Online, there is no real reason
that the game itself should not allow players to queue up skills to be trained. (EVE
Online is unique in the MMO genre in that players earn skills based on elapsed, calendar time, not based on game play activities.) Today, players have to be online to
change from one skill to another—an inconvenience that doesn’t help game play
and that opened the door for this malicious tool.
Many games ban most or all third-party tools, but in some sense this is futile.
Because players control their own computers, they will always find a way to run independent tools (see the discussion of bots in Chapter 15). As noted in the earlier
discussion, it probably makes more sense to design the game so that such tools are
either unnecessary or can be used legally. If there are tools, it would be better for the
game and for players if they could buy the tools from the game company who can
certify their features, not to mention capture some revenue by running a tool store.
Fake hacks are particularly troubling for games. Scammers who create fake
hacks don’t need to create a real hack or tool for a game; they offer a way to get
more of whatever the game values (additional money, experience, items, and so on)
by providing some instructions that include giving the scammers the player’s username and password (and sometimes even credit card).
Someone (scammer or not, it doesn’t matter) submitted an alleged “hack”
against Sulake’s Habbo Hotel virtual world to my blog4. The hack demonstrated in
a YouTube video how to give a player tons of furniture and items. Actually, the
hacker used a memory editor to add items to the inventory in the player’s game
client application. This worked because the art assets for all of the items are included
in the game client on the player’s computer. Virtually all online games work in a
similar manner and are vulnerable to the same “attack.” To most players, the attack
looks valid since it is using the actual game client and art assets. However, these
changes are purely local to the player’s PC and do not actually alter the information
in the game’s database. The videos have since been removed, apparently at the request of the game’s publisher.
YouTube poses a particular risk for game companies. Whenever a game company
contacts me to consult on a game, the first place I look for attacks is on YouTube.
Often, I find some pretty entertaining exploits. However, there are fake hacks
mixed in with the real ones. The key clue to identifying the fake hacks is the request
to go visit the scammer’s website and enter your username and password, install
some software, or buy a tool. Game companies need to constantly keep up with
these videos and contact YouTube to take them down.
Third-party sites can also house scams. These sites are run for games by fans, and occasionally by scammers. In general, third-party sites are great for a game.
Player-run forums, blogs, social networks, and other communities are evidence of
a game’s popularity and are mostly beneficial. However, the low level of technical
skill of many of these fan sites makes them good places to launch an attack against
a game—either direct malicious code attacks or as a means to advertise “fake hacks”
and other malicious tools. Both of the malicious tools mentioned previously were
downloadable through otherwise legitimate game community sites.
Simply setting up a fraudulent fan or community site for a game is not usually
sufficient to cause real problems for the game company. Game community sites
take a lot of effort and are really a labor of love by the players who create them. IGE,
known for its involvement in the gold-farming and gold-selling businesses for
World of Warcraft, notoriously bought two very popular WOW community sites:
Allakhazam5 and WowHead.com6. Gold farmers and power-leveling firms (see Chapter 22) can take their business a step further by using their interactions with game players to scam them as well. The players who use gold-farming and power-leveling services have little recourse, as they are already cheating at the game.
Game companies can do little to directly protect themselves from scammers
who use third-party sites to target their games. Even though the game company and
game are blameless, these scams can be costly from a customer service perspective
when players complain about looted accounts, viruses, and other security problems.
One tactic that may make life difficult for scammers is to sponsor “official affiliate” community sites that have access to game art assets, animations, and affiliate revenues in return for following a code of conduct. The sites could perhaps even
allow the game company to scan the fan site for malicious code and downloads.
Another tactic that was mentioned previously is the notion of bundling a good
security tool suite with the game client.
There are two main categories of game scams: (1) scam business models that target
affiliates and partners and (2) crooked game operations. Game business models
that are scams are a relatively new problem and I have yet to see a case go to trial,
although I have seen several suspicious games. The closest example was an FTC
move to shut down the online music service BurnLounge7.
Most of the scam business models are variations on pyramid schemes or Ponzi
schemes. These business models basically “rob Peter to pay Paul.” Later entrants’
fees fund the earlier participants, who are continually encouraged to recruit others.
The typical clue that a game business model is not legitimate is that it has a very
complicated payment structure8.
However, it could be quite interesting to see a game company legitimately use
a multi-level marketing business model; even affiliate revenue-sharing services are
not supported all that widely in the game industry. If used more widely, affiliates,
channels, and MLM programs could deter game scams by giving fan sites a legitimate revenue stream to reward their efforts.
Crooked game operations have mainly been a problem for online gambling, although the problem could also affect skill games, online lotteries, bingo, and contests with an entrance fee. These crooked games either rig the game so that no
legitimate player wins or they simply refuse to pay out and take the players’ money
and disappear. One of the consequences of attempts at prohibiting online gambling
is that even game companies that desire legitimacy have been unable to be licensed
in major jurisdictions. (This may be changing in many countries in Europe as of
2008.) Lacking strong government licensing regimes, industry groups will need to
self-regulate and self-certify. Skill games could be a prime target for scams, as there
is a general weakness and lack of uniformity of regulation for the industry (this is
also true for other “not gambling, but games for money” businesses).
with J. Price
Game companies may need to contact law enforcement to handle a security breach
or in case of fraud; or they may be contacted by law enforcement in support of an
investigation. The topics of how to collect, retain, and preserve data that may be
needed for evidence are far beyond the scope of this book, but an online service’s
leadership, IT staff, engineering team, and legal counsel should consider these
issues carefully throughout their design and development process.
Most operators of online games and other online services will eventually
receive a request from law enforcement asking for information about an end user.
When a state or federal law enforcement officer presents a subpoena, court order,
or some other “authorization,” the game operator should not panic. These are typically routine events, and should be processed professionally and efficiently.
First, have a plan in advance. Designate someone in the company to receive and
interact with law enforcement. When the request comes in, be sure it is immediately forwarded to the designated person. That person must decide whether the
request is valid and is what it purports to be, or escalate the matter to management or legal counsel. Ideally, legal counsel can help make the determination of
whether the request is valid and provide guidelines that game company staff can use
for most situations.
After the game company has seen a number of requests, counsel should only be
necessary for the “odd-looking” ones because company personnel will develop a
feel for which requests are legitimate and which are not. It is important to note that if
the request is obviously illegitimate, the company may be held liable for turning
over the requested information. Generally speaking, however, there is immunity for
a good faith reliance on law enforcement. But the immunity does not extend to all
situations, and caution is warranted as seen by the recent lawsuits against U.S.
telecommunications firms for their involvement in the government’s “warrantless
wiretapping” program9.
The designated person may delegate tasks where appropriate. However, law
enforcement might require tight secrecy in some instances, which will limit who in
the company can know about or assist with supporting the activity. If there is any
doubt about who can know about the request, ask the law enforcement agent. In
most instances the request will include explicit instruction and leave little confusion, but do not hesitate to clarify any concerns with the agent.
After accepting the request and establishing that it is valid, the person designated to assist law enforcement should consult the relevant staff to determine
whether the information being requested (1) is available and (2) can be provided to
law enforcement within the period of time allotted in the request. If the request is
legitimate and the information is available, you should fulfill law enforcement’s request.
If the request appears legitimate and the information is available, but the company simply cannot meet the deadline, the company has recourse to protect itself.
You may consider making the law enforcement agent who served the request aware
of any difficulties that you have, as the agents are in a position to alter the request and
avoid a confrontation in court. Courts can receive challenges to law enforcement requests and alter the request to make it more manageable or provide other relief.
It is important to ask the law enforcement agent about reimbursement procedures for their agency. You may be able to have costs you incur when responding
to a law enforcement request reimbursed. The federal government and most states
permit a responding person or entity to be reimbursed for their time and associated
expenses such as overnight mailings. The reimbursement might not cover all costs,
but it can help, especially if you receive numerous requests.
Another good practice is to keep the law enforcement request and any related
material in a safe, secure location (unless law enforcement forbids the record retention for security reasons). Keep these records after you have complied with the request. If any issue arises later, you will be able to refer back to the records.
with J. Price
Game services are not just providing games. They provide a range of network and
communications services that may impose requirements to support law enforcement.
Depending upon the game’s infrastructure, game designers may need to comply
with various laws.
For example, if a game operator provides voice service, it may have legal and
regulatory obligations under the Federal Communications Commission’s (FCC)
Communications Act. But its obligations will depend upon how the service is
provided and whether a different service provider is responsible for legal and regulatory compliance.
If the operator provides its own network facilities or provides broadband
Internet access, the Communications Assistance for Law Enforcement Act (CALEA)
should be reviewed. CALEA requires that the operator include specific security features within the network. It also requires the service provider to file “System
Security and Integrity Plans” with the FCC and adhere to those plans.
A number of other regulations might also apply, including taxes and tariffs
and “Universal Service” obligations, just to name a few. It is recommended that you
take a careful look at the legal obligations that might come with any expansion of
services, or before you implement a unique business strategy that may impose regulatory requirements.
Also, be very aware of the obligations that come along with services being provided by third parties. A contract with another company can leave you with unexpected legal obligations. You might assume that the third-party service provider is
responsible for compliance with the assorted regulatory obligations that are associated with the service and, therefore, you do not have related legal exposure. Most
service providers, however, go to great lengths to avoid regulatory obligations, and
the details of the service agreement contract might shift the burden to you.
You do not have to accept the “standard” contract that another party offers.
Negotiate. One place in the contract to study carefully is the “indemnification” section, which explains how and when you must protect someone other than yourself.
Some third-party contractors try to avoid any responsibility and, when an issue
arises, leave the obligation to you. It is usually reasonable to indemnify against issues
you may cause the service provider, but they should stand by their product and handle expected as well as unexpected issues and expenses related to providing the service. Also, as always, be careful who is responsible for costs and taxes related to the
service. Be sure these terms are defined broadly and do not come back to haunt you.
1. W.C. Fields (1939), “You Can’t Cheat an Honest Man,”
2. F-Secure (2006), “How’s Your Poker Face?,”
3. J. Egan (2008), “EVE Online Trojan Warning,”
4. S. Davis (2007), “Demonstrating Real Game Hacks vs. Fake Game Hacks: Fake Cheats Attack Sulake’s
Habbo Hotel,”
5. L. Smith (2006), “Gold Farmers Buy Allakhazam,”
6. Emma_UK (2007), “WOW Fansite Sells for Reported $1 Million,”
7. G. Gross (2007), “FTC Asks Court to Shut Down Digital Music ‘Scheme’,”
8. S. Davis (2007), “Multilevel Marketing for Games? And a Bit of Info on Pyramid Schemes,”
9. J. Leydon (2007), “AT&T Sued Over NSA Warrantless Wiretapping,”
Operations, Incidents, and Incident Response
Operations is where “the rubber hits the road.” It is also where security pays off or fails. Security becomes real when technology is combined with business operations to actually do something. If users and operators circumvent security because it is too complicated, too expensive, or too time-consuming, it is a failure of the security team, not the users. It is appalling to read professionals in the security industry talking about PEBKAC (the problem exists between keyboard and chair; the user, in other words)1. Security is, at its core, as much about
human behavior as it is about technology.
There are some in the security industry who’ve taken the adage “the best defense is a good offense” to heart. Instead of being content with protecting their
games or other online services from the “bad guys,” they’ve decided to turn around
and wage war on their foes. Some of these techniques are very self-contained, but
others are more aggressive and expose their users to legal and business risks.
It is almost inevitable that security systems will fail. Good security systems are
designed to fail gracefully and recover quickly. Unfortunately, many developers
neglect to plan for failure. Some of the worst security incidents are the result of failure to consider the possibility of the system’s failure. Such systems tend to collapse
catastrophically and sometimes can never recover. One of the worst characteristics
of many public key cryptography systems is that their compromise recovery systems (compromised key lists and certificate revocation lists) are terribly awkward
and inefficient.
Games are a particularly public business. When things go wrong, they are
highly visible and widely commented on. Part of protecting the integrity of the
game and game company is planning for public relations problems, especially when
responding to security incidents. Although almost everyone would prefer that security incidents don’t happen, effective preparation and a well-handled response can
sometimes turn a security incident into a marketing and branding benefit.
Protecting Games: A Security Handbook for Game Developers and Publishers
There is a lot more to security than just good technology. One common delusion is
that security systems need to be invisible to users. In fact, the most familiar security
system of all, standard keys for cars and homes, are highly visible and somewhat inconvenient. From an operational perspective, they work pretty well. People lock
doors (most of the time) and crooks need to find another way in. People do not lose keys too often, and when they do, they can “recover” from the loss or
compromise of a door or car key pretty quickly.
Games and other online services are defined by their interfaces and procedures.
It is unfortunate, but good human-machine interface (HMI) design does not seem
to be part of the security discipline. This is regrettable, as psychology and ergonomics probably have a huge impact on the actual secure operation of systems. There
are two equally important interfaces that need to be considered to operate a game
securely: that of the game company’s operational staff (live team, system administrators, customer support staff, security staff, and so on) and, of course, the players.
One of the benefits of using a physical security token is that people are used to
protecting hardware keys. Blizzard’s authentication token will provide most of its security benefits just because it is a physical key—the details of its cryptography and any
anti-tamper or other design features probably hardly matter. Similarly, because people protect their cell phones as valued items, the Asian games that use a cell phone
challenge-response system are likely to be quite effective and secure (see Chapter 29).
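To illustrate why such tokens work, hardware authenticators of this kind generally rely on a time-based one-time password (TOTP, as standardized in RFC 6238). The sketch below is a minimal TOTP generator and verifier; it is not Blizzard’s actual design, and the drift window and parameters are illustrative.

```python
# Minimal sketch of the time-based one-time-password (TOTP) scheme commonly
# used by hardware authenticator tokens. Illustrative only; not any specific
# vendor's implementation.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Derive a short numeric code from a shared secret and the current time."""
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret: bytes, submitted: str, window: int = 1) -> bool:
    """Accept codes from adjacent 30-second timesteps to tolerate clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret, now=now + drift * 30), submitted)
        for drift in range(-window, window + 1)
    )
```

Because the code changes every 30 seconds, a phished or key-logged code is useful only for a moment, which is where most of the security benefit comes from.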
Successful, secure operations are usually invisible. Often, so are unsuccessful
ones. One of the rare instances of a game having problems with its internal security
operations has been a series of account thefts at Xbox Live. Hackers have been able
to use “social engineering” to access other player accounts. First, there is a basic
weakness in the Xbox Live system—players’ publicly known Gamertags are also
their account names. This aids attacks by making it easier to hack accounts directly
(by guessing passwords and so on) and makes it much easier to convince customer
support to disclose or reset a player’s password.
…(W)hen I first heard about the “Xbox Live network hacked” story, I checked
with the people on our end, and then posted about it. As originally posted,
Xbox Live has not been hacked. That is still true. A security researcher, Kevin
Finisterre, discovered not a hack, but the fact that some accounts may have been
compromised as a result of “social engineering,” also known as pre-texting,
through our support center. Kevin gave me a call directly and once I realized
what he was talking about (he sent me some painful-to-listen-to audio files),
I confirmed that the team is fully aware of this issue. They are examining the
policies, and have already begun re-training the support staff and partners to
help make sure we reduce this type of social engineering attack.
—Major Nelson (Larry Hryb, Xbox Live Director of Programming)2
From an operational security perspective, the statement “Xbox Live has not
been hacked. That is still true.” may be accurate, but it is also fairly meaningless. It is
clear that the security of the system was compromised whether through technical
measures or procedural gaffes. Even worse, it appears that the problem was not
resolved. Although the first incident was made public in March 2007, these incidents seemed to be continuing as of September 2008; in the latest case, the Xbox Live account of a senior Bungie employee on the Halo 3 team was among those compromised3.
What is frustrating is that the system’s security could probably be improved
fairly easily (although there certainly may be internal constraints on the system
that I am unaware of):
Xbox Live accounts are, at least partially, tied to a specific console. Microsoft
could use that console’s ID number to authenticate the user by having the user
tell the customer service rep the number.
Microsoft could also use any credit card information associated with the account
owner to authenticate users. This could be done through the console so that the
customer service rep is not told the card number. Depending on the account
details, authenticating another credit card to confirm the player’s address could
work as well.
Microsoft could send an email to another email account owned by the user that
had been established previously as the “emergency notification account” (or a
phone number or an SMS message or even regular mail).
Microsoft could use the official Xbox console to send a challenge message to a
user (similar to the phone systems used by Korean game companies discussed in Chapter 29).
Of course, the best solution would be to break the link between the Gamertag
and the user ID. The fact that everyone knows your user ID on Xbox Live
makes the attack much easier.
As to the customer service system, it really shouldn’t “unlock” the account,
even for the customer service rep, until the user has passed an official, authenticating challenge.
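To make that last point concrete, here is a minimal sketch of a support workflow in which the account stays locked, even for the rep, until the caller completes an out-of-band challenge. The class and method names are hypothetical, not any real company’s API.

```python
# Hypothetical sketch: customer support cannot reset an account until the
# caller answers a challenge delivered out of band (email, SMS, or a message
# to the registered console). The rep never sees the challenge code.
import secrets

class AccountRecovery:
    def __init__(self, send_out_of_band):
        self._send = send_out_of_band      # delivery channel chosen at enrollment
        self._pending = {}                 # account_id -> expected code

    def start_challenge(self, account_id: str) -> None:
        """Rep triggers a challenge; the code goes only to the account owner."""
        code = f"{secrets.randbelow(10**6):06d}"
        self._pending[account_id] = code
        self._send(account_id, code)       # the rep is not shown the code

    def reset_password(self, account_id: str, submitted_code: str) -> bool:
        """The reset proceeds only once the challenge has been answered."""
        expected = self._pending.pop(account_id, None)
        if expected is None or not secrets.compare_digest(expected, submitted_code):
            return False                   # account stays locked, even for the rep
        # ... issue temporary credentials here ...
        return True
```

The design choice worth noting is that the unlock decision is made by the system, not by the rep’s judgment, which is exactly what defeats the social engineering described above.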
The Xbox Live scenario is only an example. The essential key is to design operational procedures and user interfaces, not just technology and systems, to help
players and company staff operate in a secure manner. There is no security without secure operations.
Some companies have gone beyond simply defending themselves; they are attacking their foes. Although “information warfare” may sound exciting and tempting,
anyone who is not in government is likely putting themselves and their business in
legal peril. Any time a game or other application has an effect beyond the application itself, there are opportunities for trouble. The Starforce DRM tool was controversial because it affected the use of DVD drives by applications other than the
game it was intended to protect (see Chapter 5). The Sony BMG Rootkit could be
used for other purposes besides defending a specific song from copying and had the
potential to open the computer up to external attack.
Online applications are even riskier. Customers have legitimate concerns
about their personal privacy and the integrity of their computers. Blizzard’s Warden
anti-cheating tool, and other similar products, capture information about
the processes running on the player’s computer and even some screen data from the player’s
computer. From a user perspective, there is little difference between a key-logger
installed by a hacker and one provided as part of the latest game. The potential for
serious abuse by these applications is a real concern from a public relations and
marketing perspective, if nothing else.
The Motion Picture Association of America (MPAA) and Recording Industry
Association of America (RIAA) have adopted active measures against media piracy
(see Chapter 9). It is unclear whether these activities have been effective in fighting
piracy. What is clear is that some of these techniques have backfired. MediaDefender
launched a denial of service attack against a legitimate firm that uses peer-to-peer
distribution services. This action resulted in an FBI investigation as a potential
violation of computer security laws4. More broadly, the RIAA and MPAA have
been getting less sympathy in court and no sympathy from the general public for
their strong-arm tactics.
The potential for civil and criminal actions (not to mention terrible public
relations) from active security measures should make them a tactic of last resort, if
active measures are even to be considered at all. If you would ever be concerned
that there was a newspaper story describing your active measures, you probably
should choose an alternative strategy.
Earthquakes, fires, theft, data loss, hackers, or other disasters or security incidents
happen regularly to companies everywhere. It is impossible to plan for every
variation, but it is more than prudent to have an overall plan for the major types
of incidents that you may face. Level Up, an MMO operator in the Philippines,
uncovered currency duplication cheating in the game RF Online that caused massive, 500 percent inflation in the game’s economy5. The company had hoped to
identify the cheaters and remove them and their ill-gotten gains from the game.
However, they found that the entire game economy had been corrupted because of
the massive influx of illegally created game currency. Instead, the Level Up team
rolled back the server and gave bonuses to all players to compensate for their losses
(and worked with the game’s developer to fix the game’s software).
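Duplication exploits like this one tend to show up in telemetry as a violation of the economy’s conservation invariant: currency in circulation should equal everything minted minus everything sunk. A minimal check along those lines is sketched below; the field names and threshold are illustrative, not Level Up’s actual systems.

```python
# Hedged sketch: a currency-duplication exploit breaks the conservation
# invariant of a game economy. If player balances exceed (minted - sunk),
# the surplus is unexplained currency. Names and threshold are illustrative.
def currency_anomaly(minted: int, sunk: int, balances: dict, tolerance: int = 0) -> int:
    """Return the unexplained surplus of in-game currency (0 if the books balance)."""
    in_circulation = sum(balances.values())
    surplus = in_circulation - (minted - sunk)
    return surplus if surplus > tolerance else 0
```

Run continuously, a check like this turns a slow-burning economic corruption into an alert that can trigger the incident process described next.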
The key is to have well-established procedures. After running EVE Online for
several years, CCP Games had a mature process for dealing with potential security
incidents. In October 2007, employees identified an anomaly in the game’s database that indicated a potential exploit6. Standard procedures were followed and an
assessment was made by an internal team. They decided to take the game offline
within two hours of initially identifying the exploit because of the potential risk.
CCP had expected to be offline for two hours, but it took ten hours to fully reconstitute the game’s security and safely restore service. Because CCP Games had an
orderly process for handling general security incidents, they were able to rapidly
contain the problem, repair the service, and keep their customers informed.
The following activities should be included in an incident response and recovery process:
Alert—Online services need to include sufficient internal telemetry and monitoring systems so that it is possible to know if something is going awry. In addition to technical systems, skilled operators are essential. A good operator
should be able to “sense” an anomaly and be able to clearly communicate the
problem to management.
Escalate and Act—Once a human or technical system has identified that an incident has occurred or is in process, management needs to be notified so that
they can bring together the resources necessary to respond. This should include
appropriate technical staff, but also business, public relations, customer support, marketing, legal, operations, and any other group that may be necessary
to actually handle the incident, work with customers, and address impacts to
the business.
Identify and Assess—The designated team needs to determine the actual nature of the underlying problem that triggered the incident as well as assess the
severity and scope of the problem. The team needs to develop strategies to
address the problem itself as well as limit the impact of the incident on the
company’s operations. In addition, the team will form a communications strategy to protect the company’s relationship with its customers and handle any
media inquiries.
Contain—Once the underlying problem has been identified, the team will implement a containment strategy. This can include doing nothing and accepting
the (limited) impact of the compromise; disabling certain functions, features,
or servers; restricting access by certain users; or shutting down all or a portion
of a service.
Communicate—Some level of customer communications is critical to managing an incident. There needs to be a solid communication strategy operating in
parallel with all of the other incident response activities. This may extend beyond the actual problem to deal with rumors and speculation by customers and
the media. The goal is to keep customers and the public informed and confident in the company. One important communications element is not to lie.
Most likely, any lie will be found out and it will undermine all future communications by the company during this incident or any that occur in the future.
Repair and Recover—Once the problem has been contained, the incident response team should fully analyze the situation and develop an actual repair or
work-around. There is a balance between rapidly reestablishing service and repairing the problem. This may not be a single-step process. It is also possible
that service may not be fully restored to the state prior to the incident if the
problem is severe enough.
Review and Revise—After the system has been fully restored, the company
should review its incident response procedures as well as its overall operations
to determine whether there are related weaknesses that could lead to additional
incidents. Operations, technical systems, and the incident response process
should be updated based on this close-out review. The company may also issue
compensation, incentives, or freebies to players in accordance with the impact
of the incident.
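The activities above can be sketched as a small incident lifecycle object. The stage names mirror the list, and communication is modeled as a side activity that, as noted, runs in parallel with every other stage. Everything here is illustrative rather than a prescribed implementation.

```python
# Illustrative sketch of the incident lifecycle described above. Stages
# proceed in order and cannot be skipped; communication is allowed at any
# stage because it runs in parallel with the rest of the response.
class Incident:
    ORDER = ["alert", "escalate", "assess", "contain", "recover", "review", "closed"]

    def __init__(self, name: str):
        self.name = name
        self.stage = "alert"
        self.updates = []          # (stage, message) pairs sent to customers

    def advance(self) -> str:
        """Move to the next stage; stages cannot be skipped or revisited."""
        if self.stage == "closed":
            raise ValueError("incident already closed")
        self.stage = self.ORDER[self.ORDER.index(self.stage) + 1]
        return self.stage

    def communicate(self, message: str) -> None:
        """Customer communication is permitted in parallel with every stage."""
        self.updates.append((self.stage, message))
```

Even a skeleton like this is useful operationally: it forces the team to record when containment began and what was told to customers at each point, which matters later during the close-out review.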
There is something to the notion that one’s true character is revealed through
adversity. How we respond to security incidents and other crises can have an important effect on how we are perceived by our customers, partners, and employees.
Although it is best to be able to avoid such incidents, a modest level of planning can
be the difference between humiliation and increased respect.
Perfect security is not possible. One of the keys to effective security is trust. It is important
that customers, partners, and employees believe that your system is secure,
that you are vigilant against hackers, and that there is no cheating or abuse, or, if there is, that you are
actively doing something about it. Because of this, good public relations is important to your security.
Chapter 34 Operations, Incidents, and Incident Response
Balancing public relations and security is not necessarily easy or obvious. For a
long time, Blizzard provided regular reports on the number of players that had been
banned for cheating or gold farming. This was done to give players confidence that
the company cared about the problem and was doing something about it. Square
Enix provides regular, monthly reports on the state of security in the game7 and
other companies make periodic announcements about exploiters caught, gold
seized, and other security news8. Interestingly, however, Blizzard has stopped making nearly as many announcements about gold farming, due in part, I believe, to a concern that the announcements were acting as an advertisement for gold farming’s lucrative potential rather than as a deterrent9.
The worst thing for a game company, or any other business for that matter, is
for security to overshadow the business. Electronic Arts launched the game Spore to
much fanfare. However, the game’s digital rights management system became almost more of a story in the media than the game itself10. Although the game appears to be a substantial hit, one has to wonder whether the anti-piracy tool cost more sales through bad publicity than it added by deterring piracy.
A good PR campaign can actually solve some security problems. CCP Games
faced a huge trust issue in the wake of a scandal where company employees were
cheating in the game for their teammates’ benefit11. The company’s initial response
created a serious perception problem in the player community. When CCP realized
that players’ trust in the game was being undermined, they took action. They created a Director of Internal Affairs12, but, more significantly, began a move towards
creating a process for incorporating players into the process of running the game13.
The Council of Stellar Management has become a story in and of itself, including
getting positive coverage for EVE Online in The New York Times14 and elsewhere.
Completely false stories can also be damaging. ArenaNet was falsely charged, by
a disgruntled former employee, with aiding gold farmers and other abuses15. It took
a year for the miscreant to confess, but ArenaNet still had to deal with the incident.
The challenge for any consumer business is to avoid even the appearance of impropriety and be as transparent as possible to protect your players’ trust and your company’s reputation.
From a security perspective, public relations is important for maintaining the
trust and reputation of the game or online service. It is the very last part of the mesh
of technical measures, operational procedures, and good customer care. Security
and public communications is a balancing act. It is important to instill trust in current and potential customers, but it is foolish to unnecessarily provoke hackers and cheaters.
1. E. Bangeman (2007), “Study: PEBKAC Still a Serious Problem When it Comes to PC Security,”
2. M. Nelson (2007), “Xbox Live Security Update,”
3. P. Klepek (2008), “Reports of Hacked Xbox Live Accounts Stir Concerns Over Gamers’ Security,”
4. D. Kravetz (2008), “MediaDefender Defends Revision3 SYN Attack,”
5. GM T (2008), “RF Online Philippines Rollback,”
6. CCP Games (2007), “EVE Online Service Restored after Unexpected Downtime,”
7. Square Enix (2008), “Accounts Banned (Mar. 25),”
8. J. Wood (2006), “Cheaters Never Prosper!,”
9. S. Davis (2007), “World of Warcraft Bans 100,000 Accounts in March, And Doesn’t Tell Me!,”
10. A. Greenberg, M. Irwin (2008), “Spore’s Piracy Problem,”
11. J. Blancato (2007), “Jumpgate: EVE’s Devs and the Friends They Keep,”
12. CCP Games (2007), “CCP’s Director of Internal Affairs: An Introduction,”
13. S. Schiesel (2008), “In a Virtual Universe, the Politics Turn Real,”
14. S. Schiesel (2008), “Face to Face: A Council of Online Gamers,”
15. S. Schuster (2008) “Disgruntled ArenaNet Employee Blogger Finally Admits it Was a Hoax,”
Terrorists
It was inevitable. Take terrorism, the bête noire of early 21st century Western society, and virtual worlds, where science fiction and reality are allegedly colliding,
and put them together for a buzzword or a headline. When I first saw stories
about “virtual worlds” and “terrorists,” I laughed1. The idea has somehow gained
traction, however, most recently with a Pentagon analyst proposing that terrorists
could use World of Warcraft to plot attacks on the US2. I realized we have entered
the Theater of the Absurd.
Could terrorists use virtual worlds to hatch their nefarious plots? Of course. They
could also meet in Starbucks, talk on the phone, use Skype or other VoIP technologies, or any one of the vast number of communications services that help and
bedevil us all today.
Would they? Who knows. Comparatively speaking, there doesn’t seem to be
much advantage in doing so. Because most virtual worlds require some sort of payment and registration, they seem less appealing than the many social networks,
blogs, IRC channels, or simple web forums that are available for free with the only
registration being some sort of email address, if that.
There have been four main “attack vectors” associated with terrorists and virtual worlds:
Propaganda—Using virtual worlds to disseminate propaganda. It certainly
would be possible for a terrorist to do something in many virtual worlds to
press their agenda. However, the audience for virtual worlds is quite small. It
would seem much easier to create a YouTube video or story that would be
more likely to catch the public’s attention. One of the arguments within the
online game development community is about the use of downloadable clients
versus “no-download” tools like Flash. In general, the easier it is for an audience to see your story, the better, and requiring downloads definitely reduces
your potential audience.
Training—3D environments are used by many organizations for training purposes, including the military. Some have argued that open virtual worlds could
be used by terrorists to prepare an attack. The low fidelity of most commercial,
for-fun, virtual worlds tends to make them not very suitable for this purpose.
Also, because most of these environments are public, terrorist plotters could
face their plans being uncovered by wandering players or a nosy system administrator. Standalone first person shooters could potentially be more useful.
They support private servers, are optimized for combat simulation, and have a
wide range of map editors and tools available for free or low cost that can accurately duplicate most potential “targets.” Ironically, the Chinese Military is
apparently using one such game, Counter Strike, for training3. If one worries
about such matters, the simulation tools are not really the problem. Detailed
maps, floor plans, and other such information are much more dangerous and
widely available online.
Attacking a Virtual World—Why? It is unclear why anyone would consider a
virtual world a particularly interesting target. Today’s largest online game,
World of Warcraft, does have more than 10 million subscribers; however, these
people are spread over many servers in many countries. Any “attack” would
have limited impact, probably even to Blizzard. By comparison, the NASDAQ
stock exchange handles 2 billion messages per day4 and average daily dollar
volume traded was $85 billion5… a much more significant target.
Attacking the Real World via a Virtual World—The wildest proposed scenario is that cyberterrorists could use an attack on a virtual world to somehow
attack the real world. At this point in time, there is nothing to attack. Perhaps
a hacker could take down the server that the game is operating on, but that is
the end of it. Increasingly, utilities, banks, and other institutions are linking
their actual business operations with online services. If you can order phone
service online, there is then some sort of actual connection between the frontend website and the actual control system for the telecommunications network.
This connection is a real target. Whether it is accessed through a simple web
page, or an immersive, 3D environment, live business applications are the real
objective for any attacker.
The other proposed opportunities for terrorists come from using virtual worlds or
online games as means to support their operations. Like the rest of us, terrorists
need to plan and fund their activities:
Communication—It is certainly possible to use online games as a communications service. All support text chat, and some now support voice as well. There are a multitude of other communications services available,
however. The larger concern is that governments may require monitoring (or at
least the capabilities for monitoring) of all online communication services in an
attempt to be able to find terrorists. Voice services would probably be a bit
more effective for terrorists, because they are more difficult to monitor and it’s
harder to extract useful information from them. Of course wise terrorists, just
like the military, know that the best way to avoid interception is to avoid communicating.
Money Laundering—Moving money around and turning illicit funds into real
dollars are difficult challenges. The fact that some online games and virtual
worlds have convertible currencies makes them candidates for money laundering, as discussed in Chapter 28. The significance of online games is a bit overstated in this area, and I am afraid I am a bit at fault, as I was quoted on the
subject in one of the early articles on terrorism and virtual worlds. The most
likely candidates for online money laundering are peer-to-peer financial
services, such as micro-loans, peer-to-peer gambling, skill games, online sports
wagering, and gift cards. Prohibition in the US in the 1920s helped give rise to
organized crime; it appears that one of the unintended side effects of the
UIGEA law that restricts online gambling is the rise of anonymous covert
payment systems, which may be used for terrorism as well as other forms of crime.
Game Crime (Funding)—Gold farming is estimated to be a $1 billion per year
industry. It doesn’t take a huge amount of effort to do profitably and the risks
of any sort of meaningful investigation or prosecution are virtually nil. The
work is naturally fairly anonymous and its practitioners sometimes need links
to ID thieves for stolen credit cards and identities. Fake hacks and other game-related scams could also be lucrative and have low legal risk. As such, game
crime makes for a natural funding source for terrorists (and other criminals, of course).
Criminal Game—There is no reason that a criminal or terrorist group could not
set up and operate its own online game. The costs are modest and the project
could generate reasonable potential revenues. In addition, a criminal game
operation would be very useful for collecting identity information and installing
malware that could be used for other types of criminal activities.
Terrorists are no longer the bomb-throwing anarchists of the late 19th century.
However, they are not really any more advanced technologically than the rest of us.
It is somewhat remarkable that we consider the ability of terrorists to set off bombs
at the same time in multiple locations a sign of an amazing malevolent intelligence
when many of us routinely set up teleconferences and meetings with our cell
phones and PDAs with people from all over the world. We are terrified that terrorists use websites, social networks, online chat, and VoIP, yet tens of millions of us
do the same every day. There is nothing amazing about the technologies terrorists have used, nor anything notable about how they have used them.
If anything, the more serious potential problem will come from actions by
countries or organized criminals. Terrorism is essentially theater. Its goal is to provoke a dramatic overreaction leading to political or economic chaos. A criminal or
state actor is more willing to invest in a financial or strategic payoff.
In early 2008, hackers allegedly caused a number of serious power outages in
order to extort money from utilities6. This rather alarming story went almost unnoticed while wild speculation that terrorists might use World of Warcraft for planning purposes received wide publicity.
We do believe what we see and read. I’ve included hundreds of footnotes in this
book to lend credence to my argument that game security is an important issue
(how many have you checked, by the way?). Virtual worlds and online games are
fantasies. They are one of the few places where we know that what we are seeing is
not real. As such, we do not take them seriously. This would seem to make them
particularly uninteresting for terrorists, but quite interesting for criminals.
Attacks on Wikipedia7 and propagating a story through numerous online
sources can build a powerful, virtual truth out of lies. A six-year-old story about the
2002 United Airlines bankruptcy filing was accidentally republished in 2008 and the
company’s stock plummeted 75 percent in less than a day8. The hoary story of the
tribesmen who refused to be photographed because they are afraid that the image
will steal their soul has a grain of truth. If we are shown enough images and told
enough stories about someone, that becomes the truth; real-world reality does not matter.
The biggest danger for online games and virtual worlds and terrorists does not
come from the terrorists, but from governments’ fear of terrorists. The collapse of
a virtual world will anger its customers and annoy its investors. It is not a national
security threat. The democratization of information is a huge benefit for society as
a whole, but terrorists and criminals will have access to this data as well. The risks
for online games come from regulation and law enforcement requirements including extensive monitoring of individual communications. It behooves game companies to preempt and avoid these threats.
Chapter 35 Terrorists
1. The Economist (2007), “Jihad on the Internet,”
2. N. Shachtman (2008), “Pentagon Researcher Conjures Warcraft Terror Plot,”
3. B. Crecente (2005), “China Trains Army with Counter Strike,”
4. NASDAQ (2008), “NASDAQ Performance Statistics,”
5. NASDAQ OMX Group, Inc. (2008), “NASDAQ OMX Announces August 2008 Market Performance
Statistics for U.S. and Nordic Exchanges,”
6. N. Shachtman (2008), “CIA: Hackers Shook Up Power Grids (Updated),”
7. P. Boutin (2008), “Sarah Palin’s Wikipedia Page Scrubbed,”
8. K. Zetter (2008), “Six-Year-Old News Story Causes United Airlines Stock to Plummet: UPDATE
Google Placed Wrong Date on Story,”
Practical Protection
My goal has been to show that protecting games is possible and that there
are many paths to solving game security problems. That security does
not require cryptography or iris scans; it just requires a bit of thought and
consideration. That security is, first and foremost, a business issue, not solely a
technical one. That games and the business of games are deeply entwined with
security. Arguably some of the oldest “security problems” were people cheating at
ancient dice games. Fortunately, or unfortunately, it is no longer practical to stick
a spear in the back of cheaters or make pirates walk the plank.
How much is security worth? I read a press release recently about an anti-piracy product. They cited the oft-repeated statistic that for every legitimate game
sale, there were 10 to 15 pirated downloads and that if just one of those pirates
converted, sales would double.
If anyone really believed that better anti-piracy could earn 100 percent or more additional sales, and that any anti-piracy solution could actually deliver those sales, there would be hardly any negotiation or concern about purchasing anti-piracy products.
“We have met the enemy and he is us.”
This quote by the comic book character Pogo (copyright of the Walt Kelly Estate)
pretty much sums it up. Games are a major hacker target. Some of the most common
forms of malware—Trojans and account phishing scams—target online games. On
one hand, this is a tribute to the huge growth in the game industry. On the other
hand, it is an indicator that both the game industry and law enforcement do not
take these attacks very seriously:
Real Money + Low Risk = Juicy Target
Chapter 36 Practical Protection
This is likely to get worse. In 2007, the rate of attacks targeted against specific
businesses was almost 20 percent1. These attacks are much less likely to be detected
by standard, commercial security products, so companies need to harden themselves against all attacks. Denial of service attacks can be launched for as little as
$100 per day from an outsourced botnet. The cost of cyberattacks on businesses
grew from $168,000 per incident in 2006 to $350,424 in 2007. As game companies
move their business online, they are increasingly vulnerable to these hackers.
Today, the game industry seems wracked with security fatalism: PC gaming is
doomed because of piracy. Gold farmers are ruining MMOs. Cheaters and bots are
destroying online poker.
The sky is not falling.
Security problems are big enough to be taken seriously, but, as has been shown
throughout this book, there are ways to bring these challenges under control. The
first step is for both security personnel and their colleagues to consider security as
an important and integral part of game development and game operations.
There seem to be six kinds of security insanity that afflict both security professionals and their coworkers:
Zealots—The radical “true believers” of security. They believe in security at the
expense of everything else. If something is not perfectly secure, there is no reason to do it. This is most often found in very young (or inexperienced) and very
old security professionals.
Product Pushers—These individuals have been totally seduced by one brand
or type of security product and believe it is the magic bullet for whatever security problem you face. This may occur in the wake of an article, a security
course, an industry conference, or an encounter with a particularly charming
and/or attractive sales person from the offending company. This is a moderately annoying problem in security staff. It is utterly depressing and frustrating
when an executive has caught the bug.
Guards and Fences—These individuals view security solely through the lens of
physical security: guards, fences, security clearances, and such. Very often, these
people have no appreciation for intellectual property as a valuable commodity.
This is not found too often in the game industry, but there are definitely folks
in the security industry who continue to have this view.
Blissfully Ignorant—There is no security problem, there is nothing to worry
about, time to move on. “These problems would never affect my product or
our business.” In severe cases, they are the willfully ignorant who refuse to consider anything that differs from their world view.
Protecting Games: A Security Handbook for Game Developers and Publishers
Pearl Harbors—“I’ll worry about security when some large enough security
event occurs.” “Whatever it is, it’s going to have to be big, a <insert your industry or company or product here> Pearl Harbor.” If nothing else, I hope that this
book has provided enough specific incidents to answer this question for both
the Pearl Harbors and blissfully ignorant in the game industry.
Security Apocalypse Believers—There is nothing that can be done. We’ll just have to abandon this market. Maybe the government can save us. The entertainment industry seems to be veering in this direction. In the game industry, PC gaming is routinely pronounced “doomed” due to piracy.
Real protection goes with a real understanding of the business and its environment. My discussions of price, multi-player gaming, and rich interaction systems as
strategies to fight piracy are based on the recognition that piracy exists, but the goal
is to maximize revenues, not become a surrogate for government and mete out
“justice.” Second-hand game sales may, in fact, be a more costly problem than
piracy (in terms of lost sales). Real customers willing to spend real money on real
games are increasingly choosing to buy used rather than new games. Hoping for
some sort of revenue share from retailers or individuals is naïve, at best.
Cheating and social subversion are direct threats to any game service. Any
breach of trust or confidence will cost you customers. In the largest available survey, cheating was the number one reason for leaving online games2. There is nothing that ruins fun like perceived injustice. Although piracy may be the largest threat
to games as products, cheating and social subversion are the leading threats to game services.
Games may be an escape from the real world but game businesses are real businesses. They face numerous challenges, some of which stem from the explosive
growth and changes in the industry. Although it may suit the game industry’s
purposes to consider virtual items as worthless and owned by the company, it is
dangerous to ignore the fact that your players place great value on these mere
entries in a database. This view devalues your customers and facilitates problems
like gold farming. Games are a major industry and can earn many millions of
dollars from their players. In return, developers and game operators should take their customers and their customers’ trust seriously. Protecting identity, providing
a safe environment, and securing payments should not be afterthoughts; they are
central to the success of a game service.
When games can earn more than $300 million in a week3 and there are lawsuits for
millions of dollars over security breaches4, it is time to realize that the game industry has grown up and that protecting these games is serious business. There are a lot
of multi-million dollar security incidents mentioned in this book: Sony Online
Entertainment faced millions of dollars in chargeback fees due to gold frauders; K2
Networks lost a million dollars in one year due to phishing, identity theft, and
credit card fraud; and Shanda offered $1 million in rewards to find private servers,
malicious code, and cheats.
One has to wonder if the potential costs of a security breach were even considered when these games were developed. How carefully do game publishers review
the security procedures of their developers or production contractors, given the
number of times games have been compromised before they were released? How
much thought was put into various online games’ economic systems to consider the
operational and fraud related costs from gold farming? How early in the development
process was the threat of piracy considered during a game’s design and business planning?
Yet, it seems security investments are very lean.
The game industry is not alone in this, but it has an advantage over many other industries: it can clearly measure return on security investment. Piracy can be tracked; customer complaints and security incidents can be counted. Security investments can be tied straight to the bottom line.
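As a sketch of what such a bottom-line tie-in might look like, here is a simple return-on-security-investment calculation. The function and the dollar figures are hypothetical illustrations, not figures from the text:

```python
def rosi(annual_loss_before: float, annual_loss_after: float,
         solution_cost: float) -> float:
    """Return on security investment: net savings per dollar spent."""
    savings = annual_loss_before - annual_loss_after
    return (savings - solution_cost) / solution_cost

# Hypothetical example: chargeback losses fall from $500,000 to $150,000
# per year after deploying a $100,000 anti-fraud system.
print(rosi(500_000, 150_000, 100_000))   # 2.5 -- $2.50 net return per $1 spent
```

A positive result means the investment pays for itself out of measurable loss reduction; a negative one means the cure costs more than the disease.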
It is certainly the responsibility of security staff to propose security solutions as
well as identify security vulnerabilities and fairly cost them over the lifecycle of the
game, but it is also the responsibility of management and designers to take heed.
Security is truly an engineering discipline and an art. It is all about leverage. The
goal is to make an investment in security tools, implement appropriate changes to
the game design or business model, or even alter operational procedures that, taken
together, will produce substantially larger revenues or avoid large costs. It is reasonable for management to expect these margins to be fairly large because so are the
unknowns. If game protection is started early, the cost may consist of altering a
PowerPoint design presentation or a Word requirements document.
The game industry is in the midst of a fundamental change. It is moving from
a shrink-wrapped software business dominated by a few major publishers serving a
few large developed countries, to a much more diverse online industry with a wide
range of business models and a huge number of players spread all over the world.
Industry leaders need to think globally instead of focusing on individual markets or countries.
The debate in the United Kingdom between the industry-sponsored Pan-European Game Information (PEGI) age-rating system and the British Board of Film Classification (BBFC) over game-rating authority is not really about which approach is better, but about whether companies will need to get their games approved in each national market or will be able to use a standard, global certification.
The challenges extend far beyond game ratings. The game industry has a strong
interest in standardizing and strengthening online identity, developing standards
for parental controls and usage limitations to address game addiction concerns, improving online payments and fighting fraud (including pushing liability back onto
payment processors and credit card firms), fighting online crime, and clarifying
jurisdictional issues for online services.
Many of these issues are not unique to the game industry, but their resolution,
or lack thereof, will have a notable impact on the shape and growth of the industry
in the future.
At the same time, the very way the game industry operates is in flux. Game developers increasingly outsource portions of their projects and operate as virtual
teams spread around the globe. Licensing games has changed from simple localization for each country and local marketing to supporting extensive online services.
These partnerships reduce the costs of development and can help expand business
opportunities, but at the same time they expose companies to threats that they
hadn’t previously considered. Maintaining management control, much less effective security, is a serious challenge.
One of the most important themes throughout this book is that security is not
solely a technology problem nor does it always have a technological solution. The
role of the security team is to look at everything about the game—its business
model, design, distribution, payments, implementation, tools, and so on—and help
navigate a path to success in the face of adversity. Although security is not paramount, its impact needs to be considered seriously along with business and art to
achieve success.
The joke among my NSA friends when we were adding a piece of security gear
to a project was that we were expected to be a heat sink, add power, and add lift.
The development team as a whole often did not see any reason to value security.
Why should they?
Military development projects have many of the same characteristics as game
projects. The development team is successful when the project “goes live.” After
that, someone else gets to take care of maintenance, operations, and security.
Without rewards and accountability, security will fail. Like everything else,
people respond to incentives. If security can serve as an impetus to better address
operational issues during development, security should certainly be a resounding success.
One of the great things about games and security is that there is a lot of freedom
to innovate. Games are unique and creative enterprises. Battle.Net started with a
free online service and ended up extending the life of its games for years and being
pretty effective in battling piracy. Richard Garfield revolutionized the tabletop
game industry by combining sports trading cards with a fantasy game to create
Magic: The Gathering. The free-to-play business model has shown that you don’t
need to charge a subscription to make money from an online game. The Wii
showed that game play, not graphics, can be successful on consoles. Puzzle Quest
showed that you could make a “hardcore” casual game. Solving business, game
design, and security problems can truly be an opportunity to build something
wonderful and new.
A perpetual problem in the security field is the question of who “owns” security. No
matter what we all claim, the adage “where you stand depends on where you sit”
has a lot of validity. If security is owned by the Information Technology (IT)
department, security is an IT problem. If it is placed within Quality Assurance,
security is a quality problem.
The bottom line answer is that the “boss” owns security. The owners of a business are the only ones who have enough visibility into development and operations
to balance both. Some large companies have created a Chief Information Security
Officer (CISO) position, usually underneath the Chief Information Officer (CIO).
The main problem with this choice is that it makes security part of IT and often
CIOs are a lot less strategic to the business than they should be.
My conversations with game developers lead me to a slightly different conclusion for the game industry. The security lead should probably directly report to the
Project Manager or an Operations Manager, if one is appointed early. Bringing in
an Operations Manager early on is highly valuable. They are the ones who are going
to care for, feed, operate, and support the game from the day it is launched until it
is shut down, and so are more like a “real” game player than anyone. Operations
Managers care about how many customer support staff they are going to need, as
well as how many servers, and how many phone lines for complaints. Operations
Managers also are going to bear the brunt of any security failures and will therefore
be strong advocates for security. They will appreciate that good security can affect
the variable costs of operations. Best of all, the Operations Manager has a substantial budget—something that security folks almost always lack.
I almost always open my presentations on game security by stating that “If security
doesn’t save you money or make you money, don’t spend a dime on it.” I hope that
I have shown that there are real security problems that affect almost everyone in the
game industry and that there are practical ways to protect your games.
When I started studying game security, the problems were pretty straightforward: piracy for shrink-wrapped games and cheating and griefing for online games.
The explosion of games into advergames, social networks, virtual worlds, skill games,
and children’s games has been exciting to watch, even as the security challenges have grown.
These are early days for security in the game industry. Companies that take the lead in security will have a competitive edge, while others may lag behind.
1. T. Espiner (2007), “Cracking Open the Cybercrime Economy,”,1000002000,39291463-1,00.htm
2. PlayNoEvil (2006), “Game Security Major Issue for Online Gamers in China,”
3. P. McDougall (2007), “Halo 3 Sales Top $300 Million in First Week,”
4. B. Sinclair (2008), “Ubisoft Sues Over Assassin’s Creed Leak,”;title;2
Selected Game Security Incidents
I began tracking game security incidents fairly seriously after we filed the patents
for what became our SecurePlay product back in 1997. Every so often, a news
story would break about some problem with a game somewhere and I would
file the information away. The pace of these incidents steadily increased and when
I started seriously writing my blog, PlayNoEvil, in the latter part of 2005, I went back
and filled in a number of the earlier cases to build an informal chronology of game
security. These cases come from stories in the press or announcements from companies. I have rarely used forums or gamer communities as a reference. When I do
use these sources, it is usually because the attack itself is interesting, rather than just
noting the disclosure of the incident.
The growth in the number of game security incidents has been impressive. Before
2004, there were several incidents per year. By 2004, the pace had reached one per
month. In 2005, there were several each month. In 2006, there was usually at least
one incident a week and by 2007 there were more incidents than I could easily keep
up with. Part of this is certainly due to the growth of online sites that track the game
industry, but the number of incidents seems to have been growing at a remarkable rate.
Over this same period, companies have changed how they respond to security
incidents. There has been a long-standing habit of simply deleting discussions of
cheats or exploits from official game sites. Sometimes game companies will provide
information on security incidents only to later remove them from their sites.
Blizzard used to provide quarterly updates on bans for Battle.Net and World of
Warcraft, but ceased doing so in the fall of 2006. Other companies, like Square
Enix, have taken a very different approach and provide regular updates on security
for their online game Final Fantasy XI.
On a personal note, when problems have occurred over, and over, and over again, there seems to be less value in writing about them, and my readers have less interest in reading the same stories. The rapid changes in the game industry have
given me opportunities to expand my horizons to address issues such as security for
kids’ games, problems with contests and advergames, and other topics.
This is a long-winded way of saying this list is not complete. It covers many
incidents from 2001 to mid-2007 and gives a good sense of the range of the problems. I do refer you to my blog, PlayNoEvil (, which
I’ve continued to update. You may also find Luigi Auriemma’s site (http://aluigi. of interest. There have been a couple of other game security sites
launched over the years, but they rarely stay active for long.
Incidents in 2007 (through May):
In-game griefing turns into real-world violence in World of Warcraft in Mexico
Regional licensing issues in Xbox Live and PS3
Nintendo faces media piracy in Korea
Lineage III code compromised—$1 billion in potential damage claimed
Nintendo Europe fights modchips
Sony PSP firmware updated against download security flaw
HD DVD and Blu-ray anti-piracy systems hacked again within one week of being patched
HD DVD and Blu-ray anti-piracy system hacked
Xbox 360 privilege escalation exploit
Fake cheating scams: Habbo Hotel and elsewhere
Cheating in Insomniac’s Resistance: Fall of Man for PS3
3,900 Final Fantasy XI accounts banned or suspended
Teleport hacking in World of Warcraft
Xbox 360 exploit found and patched
Griefing article in PCGamer
Malicious shills in online casinos
100,000 World of Warcraft accounts banned
Swedish lottery game shut down for cheating
Entropia Universe performance enhancement exploit
MMORPG editorial on cheating
Blood donations to get “unbanned” in Cabal Online
Casual game attacks article
Hacking kids games—Club Penguin and Whyville
Final Fantasy XI bans 5,000 accounts
Pirate server that earned $200,000 in China shut down
UK invests 5 million pounds to fight piracy
Lineage 2 pirate server shut down by FBI
Hacking games with Cheat Engine article
Nintendo DS strengthens media security
Game industry article on evolution of game piracy
Real robot used for Xbox 360 achievement exploit
Epic Games’ Gears of War for Xbox 360 attacked
Employee game abuse issue in CCP Games’ EVE Online
Nintendo Wii hardware hack
Knight Online institutes “double damages” instead of banning
Flash high-score system hack demonstrated
Sony PSP firmware hacked again
Controller hack against Rainbow 6 on Xbox Live
Indian gold farming and botting allowed and supported by publisher
Chess tournament cheating via Bluetooth
Legend of Mir 3 piracy operation in China cost nearly $1.3 million per month
in revenues
Incidents in 2006:
Penis griefing in Second Life
Square Enix bans 7,450 from Final Fantasy XI
Another pirate server in China shut down—over 260,000 accounts
Cheating described in online Backgammon
Gold duplication exploit in World of Warcraft (non-US)
Sony PSP firmware hacked
Electronic Arts’ Battlefield 2142 faces numerous security issues
Bungie institutes anti-leader board abuse system for Halo 2 (1,106)
Square Enix bans 11,500 in Final Fantasy XI
FBI shuts down Lineage 2 pirate server
Blizzard sues World of Warcraft Glider tool developer
Hackers commercialize game save cheats for Xbox 360
Denial of service attack against Second Life
Saint’s Row for Xbox 360 gets security patch against exploits
Australia challenges piracy damage report
Square Enix creates Final Fantasy security task force
Yendang Entertainment cracks down on cheaters and hackers
iTunes DRM allegedly broken
World of Warcraft movement cheat hits YouTube
Guild banned in World of Warcraft for using exploit
Final Fantasy XII English language code compromised
Student essay on Metroid Prime Hunters cheating
Game Licensor K2 uses policy instead of patching for Webzen’s Legend of MU
Pine tar hack in baseball
Gamerscore cheating on Xbox Live
Electronic Arts’ Battlefield 2142 ships with spyware
Lobby attacks in Gears of War for Xbox 360
Four to five denial of service attacks in less than one month against Second Life
Cheating in chess—Kasparov editorial
Microsoft sues to fight DRM circumvention tool
Microsoft adds security to DVD drive
Gears of War code compromised
2GB of Ubisoft art assets compromised
Personal data compromised in Second Life
K2 Networks loses $1 million in a single year due to account compromises,
phishing, and identity theft
Baidu sued for publicizing alleged cheater’s name
Roulette cheating legalized in UK
Gold farming business article in China
Final Fantasy XI bans 3,000 for third-party software
Video and audio griefing on Xbox Live in Uno
Payment card spoofing and identity spoofing on Xbox Live
Sony PSP patched again
Korean organized crime group hacked arcade chips to cheat
Apple and Microsoft DRM defeated again
Shanda offers $1 million reward to fight cheats and private servers
Casino Times covers cheating in Online Poker
Executive at Shanda duplicates items in Legend of Mir 2—arrested
Three Guild Wars guilds banned for ladder abuse
Both Microsoft DRM and Apple FairPlay systems broken (again)
EA’s Battlefield 2142 hacked in beta
Sony PSP hacked, all versions, via memory stick
Xbox 360 piracy reported in Philippines
Introversion claims successful use of spoofing against P2P network piracy
Final Fantasy XI bans over 1,400 for cheating and third-party software
Bank scam revealed in EVE Online
VP of Chinese anti-virus firm arrested for selling game hack plug-in for Legend
of Mir 3 (earned over US $350,000)
“Stat baby” griefing in Netamin’s Ultimate Online Baseball
Xbox 360 piracy problems reported in Korea
Sony PSP introduces anti-piracy and homebrew features in latest firmware
Datel releases trainer for DS
World of Warcraft bans nearly 60,000 accounts
SiN Episode 1 piracy reported, distributed via Valve’s Steam
Xbox 360 piracy problems reported in China
Final Fantasy XI bans over 2,000 for third-party software
Conquer Online increases penalties for cheating and botting
Guild Wars bans over 4,000 for botting
U.S. $21 million seized from botters and gold farmers in Ultima Online
New PSP downgrader released
Hellgate: London source code potentially compromised
Cheat tools for Metroid Prime Hunters—WiFi
Data of 300,000 players compromised in Japanese MMO Xenepic Online
Korean gold farming ring earns U.S. $15 million, NCsoft under investigation
for inaction
Nintendo DS piracy problems
Star Wars MMO doppelganger server enters alpha testing
PSP dual boot hardware hack released
Korea moves to stop wagering in online games
Shadowbane MMO announces fix for exploits
Final Fantasy XI MMO bans 250 accounts for gold farming with U.S. $4 million
in game currency
RF Online bans over 2,000 for cheating with bots in May
Bethesda Oblivion RPG patch fails to stop duping
Blizzard bans 30,000 accounts for gold farming and cheating, with U.S. $3 million
in game currency
In-game denial of service attack against Second Life
Bethesda Oblivion RPG patch to stop duping; because of the Xbox Live service
and incentive system this problem has real impact
ArenaNet’s Guild Wars bans 1,000 accounts for bots
NCsoft to spend U.S. $10 million on anti-cheating and farming, including
hiring 150 analysts for Lineage
Korean bot makers busted for U.S. $10,000 and two years in prison
Xbox 360 DVD drive hack weaponized
Cheat and piracy product for Nintendo
Trojan used to loot World of Warcraft accounts
Second Life denial of service attack—third in one month
NCsoft bans 500,000 accounts in 18 months for Lineage
Second Life denial of service attack
5,400 banned from World of Warcraft for cheating
Virtual crucifixion to punish griefing in Roma Victor
Xbox 360 DVD drive hack disclosed
Mythic bans cheaters from Dark Age of Camelot MMO
Massive identity theft ring targeted NCsoft’s Lineage—200,000+ compromised
Online “lynching” of suspected Chinese gold farmers
Mythic bans 450+ cheaters for use of “radar” exploit
PSP hacked via GTA game
Shadowbane institutes “zero tolerance policy” for cheating
Second Life adds virtual prison for griefing or annoying behavior
Incidents in 2005:
Blizzard closes 18,000+ accounts for cheating in past three months
Linden Lab turns to FBI in Second Life denial of service case
City of Heroes/City of Villains servers hacked, code compromised
Age of Empires III patched to address multi-player exploit
Call of Duty 2 fans lobby for better anti-cheating, threaten strike
Denial of service attack against Second Life
Xbox 360 Live tracking security concerns
AhnLab brings anti-cheating service to US, starting at $30K/year
Two denial of service attacks in November against Second Life
World of Warcraft hackers using Sony BMG rootkit to hide from Warden
Halo 2 ends leader board due to cheating and hacking problems
Japan Lineage II fraud/virtual mugging results in arrest
Duplicate server problem for Lineage II
Major duping incident for EverQuest II
Privacy concerns due to security monitoring by Blizzard
Far Cry patched for cheating
World of Warcraft bans over 1,500 for cheating
Bungie (Halo 2) bans thousands for cheating
Bungie fights mod cheats
Battlefield 2 security flaws allow cheating
Major duping problems in World of Warcraft
China imposes content and time limits on gaming
Battlefield 2 servers delisted for cheating
50,000+ accounts banned from Battle.Net
Shanghai player stabbed over virtual sword theft
Spurned woman charged over deleting game data
Tecmo settles nude “skinning” lawsuit
Gold farming and duping problems in World of Warcraft—thousands banned
(several articles)
Rise of pokerbots in various online poker games (several articles)
Final Fantasy XI bans over 800 players
America’s Army hacked
Halo 2 service boots several thousand users for cheating at $50/subscriber/year
Sims 2 hacked (malicious data attack)
Fujitsu-Siemens told to pay $16/PC as tax for piracy in Germany
Earlier incidents (2001 to 2004):
World of Warcraft has major game hack within three weeks of release (timing
Blizzard knocks over 500,000 accounts from Starcraft and tens of thousands of
accounts from Warcraft III at Battle.Net
Valve boots 50,000+ players within three months of releasing Half-Life 2
Blizzard wins suit against BnetD (competitive open source game server)
Code theft for Grand Theft Auto – San Andreas
Code theft for Halo 2
ASF Texas Hold’em hacked
SOCOM online virtually shut down due to cheating
EverQuest removes hundreds for cheating
Battle.Net boots over 500,000 accounts (out of 10 million)
Half-Life 2 code theft
China’s online gaming market admits $12,000 lost each day to cheating
McDonald’s Monopoly sweepstakes scandal
Cryptologic’s software hacked; $1.9 million lost in a day
Action Overrun—This type of attack works when a game does not explicitly validate the intervals between allowable game actions: the game’s state retains no information about when previous actions occurred. Common examples are players who can move far too fast or fire extraordinarily quickly compared to other players.
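A minimal server-side defense is to record when each player's last action was accepted and reject anything that arrives too soon. A sketch of the idea (class name, player IDs, and the 0.5-second fire interval are illustrative assumptions, not from the text):

```python
import time

class ActionValidator:
    """Rejects actions that arrive faster than the game rules allow."""
    def __init__(self, min_interval_s: float):
        self.min_interval_s = min_interval_s
        self.last_action = {}        # player_id -> timestamp of last accepted action

    def allow(self, player_id: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        last = self.last_action.get(player_id)
        if last is not None and (now - last) < self.min_interval_s:
            return False             # too soon: possible speed hack / action overrun
        self.last_action[player_id] = now
        return True

v = ActionValidator(min_interval_s=0.5)   # e.g., a weapon fires at most twice a second
print(v.allow("p1", now=10.0))   # True
print(v.allow("p1", now=10.1))   # False -- only 0.1s since the last accepted shot
print(v.allow("p1", now=10.6))   # True
```

Because the check lives on the server and keeps its own timestamps, a client that lies about timing gains nothing.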
Advergame—A game that promotes a product, service, or brand, and is typically provided for free.
Asymmetric Cryptography—See Public Key Cryptography.
Blind Authentication—A cryptographic authentication process whereby one entity queries
another to determine whether the second entity possesses certain information without disclosing the information directly. Blind authentication can be used to verify software or
other data.
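A simplified challenge-response sketch of the idea (illustrative only, not a description of any particular product): the verifier sends a fresh random challenge, and the prover answers with a hash of the challenge combined with the data it claims to hold, so the data itself never crosses the wire.

```python
import hashlib, hmac, os

def respond(challenge: bytes, data: bytes) -> bytes:
    """Prover: hash the challenge together with the data it claims to hold."""
    return hashlib.sha256(challenge + data).digest()

def verify(challenge: bytes, response: bytes, reference_data: bytes) -> bool:
    """Verifier: recompute the expected response from its own copy of the data."""
    expected = hashlib.sha256(challenge + reference_data).digest()
    return hmac.compare_digest(expected, response)

game_binary = b"...game executable bytes..."     # placeholder data
challenge = os.urandom(16)                       # fresh random challenge per query
resp = respond(challenge, game_binary)           # client proves it has the real binary
print(verify(challenge, resp, game_binary))      # True
print(verify(challenge, resp, b"tampered"))      # False
```

Because the challenge is random each time, a cheater cannot replay an old response after patching the binary.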
Blu-Ray—A successor to the DVD that can store 50GB of data and includes several digital
rights management systems.
Bot/Botting—The use of automated programs to play on a player’s behalf.
Bridging—The insertion of a hardware or software proxy between two networked game
applications in order to facilitate cheating by manipulating the data exchanged between the
game players.
CAPTCHA—Stands for “completely automated public Turing test to tell computers and humans apart.” A program that tries to distinguish between people and computers quickly and reasonably easily.
Chargeback—When a consumer reverses an existing credit card purchase and the vendor
is responsible for returning the funds to the credit card company and, ultimately, to the
consumer, unless the vendor can successfully dispute the chargeback.
Cheating—Active measures that abuse game play or game systems.
Checksum—A fixed-size block of data that’s derived from a larger data stream or block. A
checksum can be as simple as a parity bit (a count of the number of 1s in the data stream)
or as complicated as a cryptographic checksum (where the checksum is built using a secret
key and a cryptographic function).
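Both ends of that spectrum are easy to sketch (the packet contents and key are hypothetical): a parity bit just counts 1 bits, while a keyed cryptographic checksum (here an HMAC) cannot be recomputed by an attacker who lacks the key.

```python
import hashlib, hmac

def parity_bit(data: bytes) -> int:
    """Simplest checksum: 1 if the data contains an odd number of 1 bits."""
    return sum(bin(b).count("1") for b in data) % 2

def crypto_checksum(key: bytes, data: bytes) -> bytes:
    """Cryptographic checksum: built from a secret key and a hash function."""
    return hmac.new(key, data, hashlib.sha256).digest()

packet = b"player=7;gold=100"                    # hypothetical game message
tag = crypto_checksum(b"server-secret", packet)

# Any change to the data produces a different tag, and without the key the
# correct tag for the altered data cannot be computed.
print(tag == crypto_checksum(b"server-secret", b"player=7;gold=999"))  # False
```

A parity bit catches accidental single-bit errors; only the keyed checksum resists deliberate tampering.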
Children’s Online Privacy Protection Act of 1998—A U.S. law that protects the privacy of
children under 13, particularly against marketing. It is sometimes confused with COPA or
other laws with very similar acronyms that are related to protecting children from obscenity and have been successfully challenged in court.
Ciphertext—An encrypted stream or block of data.
CKL—See Compromised Key List.
Compromised Key List (CKL)—One of the limitations of public key cryptography is that
keys have a long life, sometimes a year or more, and there is no easy way to get rid of keys
when they are compromised. The CKL is a digitally signed list of compromised keys periodically distributed by a central authority. These lists are used by participants of the service
to ignore messages from holders of the compromised keys.
COPPA—See Children’s Online Privacy Protection Act of 1998.
Digital Millennium Copyright Act (DMCA)—A U.S. law passed in 1998 that places serious
restrictions on attempts to reverse-engineer or circumvent copyright protection technology
such as DRM. The European Union passed an analogous act in 2001, called the EU
Copyright Directive.
Digital Rights Management (DRM)—Software tools that attempt to control the usage and
distribution of digital media.
Digital Signature—By combining a hash function with public key cryptography, digital
signatures are used to ensure the source and integrity of the signed data. The hash function
detects data alteration and the public key function ensures that the data came from a source
that knows the corresponding private key.
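The hash-then-sign mechanics can be sketched with textbook-sized RSA numbers. These are the classic teaching parameters and are far too small for real security; the message is a hypothetical example.

```python
import hashlib

# Toy RSA parameters -- classic textbook values, useless for real security.
p, q = 61, 53
n = p * q            # 3233
e = 17               # public exponent
d = 2753             # private exponent: e * d == 1 (mod 3120)

def sign(message: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n  # hash, reduced to fit n
    return pow(h, d, n)                # apply the private key to the hash

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h   # the public key recovers the original hash

msg = b"game-patch-1.02"
sig = sign(msg)
print(verify(msg, sig))                # True -- source and integrity check out
```

The hash detects any alteration of the data, and only a holder of the private exponent could have produced a signature that verifies.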
DMCA—See Digital Millennium Copyright Act.
DRM—See Digital Rights Management.
Emulators—Software programs that emulate a different computing platform or microprocessor, and are sometimes used for piracy.
Encryption/Decryption Function—A mathematical function that operates on some input
data (plaintext) and a key to produce an output. If you have access to the output of the encryption function (ciphertext), it is computationally infeasible to recover the input data
without the key.
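A minimal sketch of the shape of such a function, using a hash-derived keystream. This is a toy construction for illustration only, not a vetted cipher; the key and message are hypothetical.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Stretch the key into a pseudorandom keystream by hashing it with a counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Encryption and decryption are the same operation: XOR with the keystream."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

plaintext = b"move:x=12,y=7"
ciphertext = xor_crypt(b"session-key", plaintext)          # unreadable without the key
print(xor_crypt(b"session-key", ciphertext) == plaintext)  # True -- round-trips
```

With the key, decryption is trivial; without it, recovering the plaintext requires breaking the underlying hash.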
End User License Agreement (EULA)—The contract between a software buyer and a seller.
These contracts have come under scrutiny in the US recently because consumers cannot
meaningfully negotiate with the licensor. Additionally, software companies have been imposing draconian restrictions in the contracts that may not hold up in court.
Escorting—A (paid) service in an online game where the escort plays along with the client.
EULA—See End User License Agreement.
Exploit—A weakness in the implementation of a game system or flaw in the game’s rules
that allows a player to have an inappropriate advantage.
F2P—See Free-to-Play.
Appendix B Glossary
Feelies—Physical items included with a game. Feelies were an anti-piracy technique pioneered by Infocom.
Free-to-Play (F2P)—The name for games whose business model is based on the purchase of
virtual goods. It is typically contrasted with subscription games, where players purchase
game play based on time (per minute or per month).
Gold Farming—The practice of using a game's economic system to earn real money by selling virtual assets or currency to other players. Most games allow, and in fact encourage, the accumulation of items and virtual currency and support their trading as a part of the
game play experience. The abuse of this legitimate system can cause a number of problems,
including spam.
Gold Frauding—The use of criminal techniques such as phishing, account theft,
identity theft, and credit card fraud to buy and sell virtual currencies and items for games.
Griefing—Activities that are not technically cheating, but are disruptive to the game experience of other players. At best, these behaviors can only be documented in a game’s terms
of service. Most often, they are enforced by game company staff.
Hash Function—A mathematical function that takes an arbitrary amount of data and
“hashes” it down to a fixed-size hash word. A good hash function will have the characteristic that small changes to the input data will result in large, unpredictable changes in the
resulting hash word.
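Both properties, the fixed-size output and the large, unpredictable change from a small input change, can be seen with Python's standard hashlib (SHA-256 used here as one example of a good hash function):

```python
import hashlib

def hash_word(data: bytes) -> str:
    # SHA-256 hashes arbitrary-length input down to a fixed 256-bit word.
    return hashlib.sha256(data).hexdigest()

a = hash_word(b"save_game_v1")
b = hash_word(b"save_game_v2")   # one-character change in the input
assert len(a) == len(b) == 64    # output size is fixed
assert a != b                    # small change, completely different hash
```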
Honeypot—A program or online service that is designed to trap criminals or other targets.
Key—A small amount of data used to protect a larger amount of data, typically in conjunction with a cryptographic function.
Key-logger—A program that captures the keystrokes from a computer and sends them to
a remote location, often for malicious purposes such as capturing user passwords (usually
done by a hacker, but sometimes companies monitor their own employees). There are also
programs that capture screenshots and mouse inputs.
Key Management—The set of operational services necessary to operate a cryptographic
system securely. Typically, a key management system includes key distribution, key
exchange, registration or initialization, key update, and recovery.
Keygen—An unauthorized program that can generate a game license key that cannot be
detected as fraudulent.
Macros—Programs that automate sequences of keystrokes as a game aid. See also Bot.
Machinima—A video or movie built from a 3D graphics engine (typically a game).
Massively Multi-Player Online Game (MMO, MMOG)—A generic name for games that
can support large numbers of players, typically in shared, persistent environments.
Massively Multi-Player Online Role Playing Game (MMORPG)—A special category of
MMO that has a player take on the role of a person or creature, typically represented by a
graphical avatar. The role-playing-game aspect of the game is that the characteristics and
assets of the avatar can change over time.
MMO/MMOG—See Massively Multi-Player Online Game.
Protecting Games: A Security Handbook for Game Developers and Publishers
MMORPG—See Massively Multi-Player Online Role Playing Game.
Modchips—Additional memory chips, processors, and even circuit boards used to support
hardware hacking of game consoles.
MUD—See Multi-User Dungeon.
Multi-User Dungeon (MUD)—The textual predecessors to the modern graphical MMO
games. Players typically use a simple telnet terminal client to access the game (although
more elaborate client applications are often available). Interaction is typically done through elaborate textual commands. One advantage of MUDs is that developing and modifying text is quick and easy, and several text MUDs continue to operate commercially.
Nintendo DS—Nintendo’s handheld game console that was released in 2004. It is distinguished by the use of a dual screen and is backwards compatible with Nintendo’s previous
GameBoy Advance handhelds. It also supports Wi-Fi networking.
Nintendo Wii—Nintendo’s game console that was released in 2006. It is distinguished by
its use of the Wiimote handheld controller that recognizes gestures and movement in three
dimensions. It is also backwards compatible with Nintendo’s previous console, the
GameCube, and includes Internet access via Wi-Fi. Unlike its competitors, the Wii does not
have a standard hard drive, only Flash memory.
Obfuscation—The process of making it difficult to reverse-engineer the design of a program (code obfuscation) or the value of data (data obfuscation), even if the adversary has
the obfuscated code or data in her possession.
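A minimal data-obfuscation sketch: store a value as two random-looking shares so the plain number never sits in memory, defeating a naive memory scan. Real obfuscators are far more elaborate; the class name here is invented for illustration:

```python
import random

class ObfuscatedInt:
    # Data obfuscation sketch: keep a value as two random-looking shares,
    # so a memory scan for the plain number finds neither piece.
    def __init__(self, value: int):
        self._mask = random.getrandbits(32)
        self._share = value ^ self._mask

    def get(self) -> int:
        return self._share ^ self._mask

gold = ObfuscatedInt(5000)
assert gold.get() == 5000
```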
Phishing—The process of attempting to fraudulently convince individuals to disclose
credit card numbers, personal data, passwords, or other sensitive data.
PlayStation (PS, PS1, PS2, and PS3)—Sony’s video game console series. The latest, the
PlayStation 3 (PS3), was launched in 2006 and includes the Blu-ray disk technology and
Internet access.
PlayStation Portable (PSP)—Sony’s handheld game console first released in Japan in 2004.
It is most notable for the use of the UMD miniature disk technology. It also supports
Wi-Fi networking.
Pokerbot—An automated program that plays poker (see also Bot/Botting).
Power-Leveling—A commercial service whereby a third party plays on behalf of a player to
accelerate her status in a game. The service may also simply sell pre-built game characters
or accounts.
Private Key Cryptography (Symmetric Cryptography)—A type of cryptography where all
parties to the system use the same (secret) key. The same key is used to encrypt and decrypt data.
PS, PS1, PS2, and PS3—See PlayStation.
PSP—See PlayStation Portable.
Public Key Cryptography (Asymmetric Cryptography)—A type of cryptography in which
it is computationally infeasible to determine the private key if you know the associated
public key. This means that even if you can decrypt a message with the public key, you cannot encrypt such a message yourself because you do not possess the private key. Public key cryptography is mainly used for key management because it is typically much slower than symmetric cryptography.
R4 Data Cartridge—An unauthorized product that allows standard SD flash memory storage to be used instead of official Nintendo game cartridges for the Nintendo DS handheld
game console. SD cards are routinely used for digital cameras and media players. The R4
product also includes a PC application that allows users to download pirated Nintendo
games and use them. The R4 allows multiple games to be stored on a single cartridge and
provides its own graphical user interface (GUI) to select games.
Race Condition—A situation where multiple systems (often computer programs) that ordinarily work together get into an undefined state, one that often causes strange failures, because updates that depend on timing are handled poorly.
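The failure mode can be illustrated with a deterministic bad interleaving of two withdrawals, a simplified dupe-style sketch with invented names:

```python
# Two withdrawals interleave badly: both read the balance before either
# writes it back, so 200 gold is "spent" from an account holding 100.
balance = {"gold": 100}

def read_balance() -> int:
    return balance["gold"]                 # step 1: read

def write_balance(seen: int, amount: int) -> None:
    balance["gold"] = seen - amount        # step 2: write from a stale read

a = read_balance()     # transaction A reads 100
b = read_balance()     # transaction B also reads 100
write_balance(a, 100)  # A withdraws 100
write_balance(b, 100)  # B withdraws 100, overwriting A's update
assert balance["gold"] == 0  # balance fell by 100, yet 200 was withdrawn
```

Making the read and write a single atomic operation (for example, under a lock or database transaction) removes the window between them.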
Real Money Transactions (RMT)—Inter-player commerce, almost always in online games,
where players use the trading system or gift-giving system provided by the game to support
real money sales of items.
Rich Interaction System (RIS)—An online service that provides a variety of features
designed to enmesh players so that they will be less likely to cheat or pirate games.
RMT—See Real Money Transactions.
Rootkit—A program that hides itself from the standard utilities provided with an operating system. Many computer hacks try to “escalate privileges” to gain root privileges, which typically give the hacker total control of the computer.
Spam—The abusive distribution of marketing information via a legitimate communication channel.
Standbying—A method of cheating in networked computer games where communications
are interrupted to disrupt the synchronization of game state or actions between players.
Steganography—A mechanism for hiding important information inside of other data, such
as a message concealed inside of a picture or a sound file. For game applications, the most
common use of steganography is for concealing digital fingerprints or watermarks.
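One classic mechanism is least-significant-bit embedding, sketched below. This is a simplified illustration, not any particular product's watermarking scheme:

```python
def embed(cover: bytes, bits: list) -> bytes:
    # Hide one message bit in the least-significant bit of each cover byte.
    hidden = bytes((c & 0xFE) | bit for c, bit in zip(cover, bits))
    return hidden + cover[len(bits):]

def extract(data: bytes, n: int) -> list:
    return [b & 1 for b in data[:n]]

cover = bytes(range(50, 70))       # stand-in for image/audio samples
bits = [1, 0, 1, 1, 0, 0, 1, 0]    # one byte of hidden data
stego = embed(cover, bits)
assert len(stego) == len(cover)    # carrier size unchanged
assert extract(stego, len(bits)) == bits
```

Because only the low bit of each sample changes, the carrier is perceptually almost identical to the original.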
Subletting—Sharing an online game account with another user who is engaged in an activity
such as gold farming. In addition to being a violation of the game’s terms of service, subletting is often a fraudulent way to steal a player’s account.
Subscription Game—A game that includes periodic payments (typically monthly) for service.
Symmetric Cryptography—See Private Key Cryptography.
Terms of Service (TOS)—The extension to a game’s contract that defines acceptable and
unacceptable behaviors. TOS are not typically enforced by the software.
Trusted Platform Module (TPM)—A cryptographic coprocessor that stores and generates
cryptographic keys and implements cryptographic functions. Each TPM has one or more
unique cryptographic keys. TPM is also the name of a specification published by the Trusted Computing Group.
Wallhack—The replacement or alteration of game graphic assets to give an advantage to
the game hacker. In many games, the graphics engine has sole responsibility for determining whether one player can see another. A simple wallhack will make a wall in a game invisible and, at a minimum, allow a player to see where others are located. Other wallhacks allow players to shoot at opponents they should not be able to see or fire at.
Xbox 360—Microsoft’s game console that was released in 2005 as a successor to the Xbox.
It includes Internet access. One notable change from the Xbox is that there is a version that
does not include a hard drive (the Xbox 360 Arcade).
Xbox Live—Microsoft’s online service for its Xbox and Xbox 360 game consoles.
Index
Note: Security principles are abbreviated.
See individual entries for complete text.
2001 to 2004, game security incidents in, 378
2005, game security incidents in, 377–378
2006, game security incidents in, 373–377
2007, game security incidents in, 372–373
A1 installation option, explained, 95
abandonment, occurrence of, 199–200
abstraction, using, 172
account compromise, overview of, 306–308
Achaea (Iron Realms), 228
ACID, applying to dupe attacks, 156
action-based networking, 117–120
versus state-based networking, 119
synchronizing between players, 119–120
action hands, explained, 202
Action, role in CAARDS reference model, 110
activation, considering in license policies, 53
activation key, vulnerability to pirates, 39
active measures, adoption of, 354
Adams, Tarn, and Zach (Dwarf Fortress), 175
advergames, tracking, 181
ADV Films, trusted brand security of, 28–29
Age of Conan, 109, 166
age verification
overview of, 302–304
problems with, 294
algorithmic games, 170–173
ambushes, turning into mini-games, 174
amusement park economics, 226–227
analysis, availability in cheat detection systems, 153
analytic aids, comparing to cheating, 146–149
anonymity, state of, 296
anonymous systems, using, 301–302
anti-cheating techniques
focus of, 120
N-1 Secure, 124
See also cheating
anti-fraud, overview of, 282–286. See also fraud
anti-piracy
approach for PC games, 35
benefits from proprietary media, 32
determining goals of policies for, 97–99
evaluating, 30
innovators, 29
nagware systems used in, 42
price as strategy, 62
public relation problems with, 92
of servers, 70
strategies for used games, 61
techniques, 32–33
worth of, 26–27
See also piracy; RIS (rich interaction systems)
anti-piracy Bill of Rights
connection options, 95
fair use principles, 93–94
installation options, 95
registration options, 94
anti-tamper software, use of, 135, 139–140
ArenaNet (Guild Wars), 24
Asia, console piracy in, 25
Asian malware, statistic related to, 4
Assassin’s Creed, 257
asymmetric information, significance of, 115
asymmetric warfare, choosing defensive methods for,
Atomicity in ACID, explained, 156
attack vectors, association with terrorists, 359–360
Audition dancing MMO, 173–174
Australia, impact of piracy on, 23
authenticating servers, 70–72
authentication, blind, 144
authentication token, use with World of Warcraft, 4
authorization, 37–38
automation, integration for, 286
Avatar’s Bill of Rights, 229
average synchronization model, explained, 113
avoid security strategy, explained, 14
AW (authentication word), creating, 143
Ax installation option, explained, 95
Bartle, Richard, 224
Battlefield 2, 111, 198
Battle.Net service (Blizzard), advantage of, 24
“Beat the Dealer” (Dr. Edward Thorp), 170
Bethke, Erik, 229
better EULA, proposal of, 229
bias of game operators, explained, 201
billing options, considering for payments, 279–280
Bill of Rights for anti-piracy, 93–94
biometrics, use of, 299
Black, leaking of, 257
blacklist chat services, using for children, 317
blacklists versus whitelists, 151
blind authentication, using, 144
blind security functions, use of, 139
blind service architecture, 123, 125
Blizzard
Battle.Net service, 24
free online game play service offered by, 24
World of Warcraft, 4, 69
BMG Rootkit (Sony)
use with World of Warcraft, 151
unpopularity of, 25
board games, pattern used in, 176
bootstrapping systems, securing, 58
bots
and cheat tools, 148
problems with, 329
sophistication of, 147–148
and syndicates, 197
Brain Age, 169
brand security, Nintendo and ADV Films, 28–29
break-even analysis, performing, 26–27
bridge style rankings, explained, 204
bridging, defined, 161
broker model, explained, 228
BSA (Business Software Alliance), 23
buddy high scores, explained, 185
Burning Crusade, licensing, 270
business models, security problems related to, 129–130
business risk and liability, overview of, 218–221
byte-code, attack on, 137
CAARDS reference model, 110
CALEA (Communications Assistance for Law
Enforcement Act), 349
Call of Duty 2, security failures of, 130
Call of Duty, “kill cam” feature in, 148
CAPTCHAs (Completely Automated Turing Test To
Tell Computers and Humans Apart), 149–150
card games, pattern used in, 176
Caribbean, success of online gambling in, 68
casinos, accidental, 326–327
cataloging problem, fighting in trivia games, 168
CCP Games (EVE Online), 137
CD key, problems with, 34–35
CDS (cheat detection systems), 150–154
card, 299
login protocol, modifying, 71–72
score posting, 185
Champions, 109
chargeback fees, considering, 276
cheapness, security problems caused by, 10–11
cheat codes, use of, 106–107
cheaters, attacks by, 190
cheating
boundaries of, 146–149
costliness of, 105
demonstrating, 148
detecting, 121
versus fair play, 105–106
“for fun,” 120–121
versus hacking and exploits, 108
in high-score games, 181–182
importance of, 104
interpreting, 103
overview of, 102
seduction of, 102–103
by server, 111
in single-player games, 103
threat of, 366
tools used in, 149
using cheat-detection tool with, 16
See also anti-cheating techniques
cheating setup, bridger in, 159
checksums
benefits of, 73–74
and ID license key system, 35–36
using with game data, 136–137
chess, solvers for, 147
Chief Information Security Officer (CISO), problem
with, 369
child pornography, overview of, 318–319, 321–322
children
capabilities for online services, 318–319
and identity, 320–321
protecting communications of, 316–317
sexual solicitation of, 313–314
targeting games to, 303
Children’s Online Privacy Protection Act of 1998
(COPPA), 319–320
China
anti-piracy innovations of, 29
GDP related to software, 23
server piracy in, 67
chips, attacking information in, 58
CISO (Chief Information Security Officer), problem
with, 369
City of Heroes (Cryptic Studios), 269
Clemson University study of cyberbullying, 315–316
client/authoritative server networking, 116–117
client ID, creating with security client, 150–151
client-server option, using, 184
client-side security, end of, 120–121
clocks, considering in license policies, 53
cloning online game services, 68
CO: Connection Optional, explained, 95
code
as data, 137
vulnerability of, 137
code losses, occurrence of, 257
code obfuscation, explained, 40–41, 139
code theft, overview of, 255–257
collaborative game security architecture, 124, 126
collection, availability in cheat detection systems, 152
collusion
in ladder game play, 197
player, 127–129, 167
in tournaments, 197
in tournaments and ladder games, 197
communications
abuse of, 190
griefing and spam, 210–215
Communications Assistance for Law Enforcement Act
(CALEA), 349
community sites, overview of, 273
community systems, limitations of, 211
competition, levels of, 204
Completely Automated Turing Test To Tell Computers
and Humans Apart (CAPTCHAs), 149–150
“conduit” model, adopting in lawsuits, 272
configuration data, vulnerability of, 137
connections, options for, 95
Consistency in ACID, explained, 156
console attacks
considering emulators in, 56
duplicating game storage media, 55
hardware hacking, 57
manipulating media players, 56
secure bootstrapping, 58
on software, 57
console games
anti-piracy strategy for, 25
attacking save game files in, 141
experimenting with pricing of, 62–63
functionality of, 58–59
memory used in, 141
console processors, targeting directly, 56
consumers, concerns about game security, 4
content, considering in license policies, 54
content distribution
detecting, 33–34
preventing, 32–33
contingency planning, overview of, 342–343
contracts
comparing to software, 266
content of, 266
indemnification section of, 350
security considerations, 266–267
Control, role in CAARDS reference model, 110
coordinated action collusion, explained, 129
COPPA (Children’s Online Privacy Protection Act of
1998), 303–304, 315, 319–320
copyright infringement, occurrence of, 220–221
counters, considering in license policies, 53
covert fingerprinting DRM system, using, 46–47
CR: Connection Required option, explained, 95
credit card fraud, impact of, 287, 289
crimes, punishing, 243–244
Cryptic Studios (City of Heroes), 269
cryptographic checksums, benefits of, 73–74
effectiveness of, 71
in secure digital distribution, 89
use in DRM, 46
use of, 182
Cryptologic Inc., attack on online casino, 4–5
The Cuckoo’s Egg (Clifford Stoll), 7
currencies (convertible), risks associated with, 291
CV: Connection Value option, explained, 95
Cx (D or T) connection option, explained, 95
cyberbullies, dealing with, 315–316
DAC security policy model, explained, 51–52
Dance Dance Revolution, 173
Dark Age of Camelot (EA Mythic), 134
DAS (Digital Affiliate System)
DMA player, 86
Media Asset, 86
overview of, 84–86
data
obfuscating, 40–41
splitting, 40–41
storing, 41
data disclosures
low standard fine for, 306–307
overview of, 255–257
data hash, explained, 136
data integrity, importance in encryption, 72–73
data obfuscation
basis of, 139
use of, 134–137
data-protection techniques, 136–137
data storage, cost of, 32
Dead or Alive, 221
defensive proxies, overview of, 157–158
delegate security strategy, explained, 14
Delfino v. Agilent Technologies, Inc., 272
Demaio, Harry (Information Protection and Other
Unnatural Acts), 6
denial of service (DoS) attacks, overview of, 336–339
design exploits, overview of, 166
designing for medium, 179
detect security strategy, explained, 15
deterministic games, 186
deter security strategy, explained, 15
development process, including protection in, 18
development, running in parallel to, 17
Diablo (Blizzard)
attack on, 120
Cheaters’ Tournament, 102
storage of data for, 135
differential data/data chaining, explained, 137
differential storage, use with obfuscators, 135
Digital Affiliate System (DAS)
DMA player, 86
Media Asset, 86
overview of, 84–86
digital distribution, security of, 87–91
Digital Rights Management (DRM)
ineffectiveness of, 45
overview of, 44
problems with, 45
problem with, 183
use of cryptography in, 45–46
digital signatures
benefits of, 73
problem with, 183–184
using, 45–46, 182–184
using in DRM (Digital Rights Management), 49
disasters and disaster recovery, overview of, 342
Disney, anti-piracy innovations of, 29
Display, role in CAARDS reference model, 110
distribute and update, availability in cheat
detection systems, 153
distributed development, risks associated with, 256
distributed state systems, synchronization models
used by, 112–114
distribution, security of, 87–91
distribution piracy
detecting, 33–34
preventing, 32–33
DLL injection, 138
DMA player in DAS, features of, 86
DMA Registry in DAS, features of, 87
dominant strategies, use of, 175
DoS (denial of service) attacks, overview of, 336–339
downgraders, problems with Sony’s PSP, 58
DRM (Digital Rights Management)
ineffectiveness of, 45–46
overview of, 44
problems with, 45
problem with, 183
use of cryptography in, 45–46
DRM systems
digital signatures, 49
encryption, 49–50
fingerprinting and covert fingerprinting, 46–48
license policies in, 51–54
obfuscation, 50–51
proprietary encoding, 50
security labels and tags, 49
split delivery, 51
watermarking, 47–48
DS handheld console (Nintendo)
piracy problems with, 25
popularity of, 28
dupe attacks
occurrence of, 156
solution to, 156
dupe exploit, example of, 109
duplication
detecting, 33–34
preventing, 32–33
Durability in ACID, explained, 156
DVD players versus video game consoles, 62
DVDs
console attacks related to, 56
regional encoding system used for, 33
Dwarf Fortress (Tarn and Zach Adams), 175
EA (Electronic Arts), release of Spore, 217
EA Mythic (Dark Age of Camelot), 134
economic system, fighting server piracy with, 70
emulators, problems with, 56
encryption
benefit of, 73
bypassing, 72–74
data integrity of, 72–73
limitations of, 39
popularity of, 17, 41
problems with, 45
problem with, 183
with static key, 136
using, 182–184
using in DRM (Digital Rights Management), 49–50
engines. See game engines; graphics engines
entertainment publishers, power of, 98
Entertainment Software Association (ESA), 23, 26
Entropia Universe (MindArk), 130
entry spreading, 196
Epic Software (Unreal), popularity of, 50
episodic gaming
considering, 98
pricing of, 63
errors, extending in secure digital distribution, 90
ESA (Entertainment Software Association), 23, 26
escorting, defined, 223
escort services, explained, 240
EU, legal requirements for privacy protection,
EVE Online (CCP Games), 137, 217
EverQuest MMO
gold-buying associated with, 225
handling griefing for, 210–211
exception game, Magic: The Gathering, 175
exploits
versus cheating and hacking, 108–109
of game design, 166
of shared saves, 141
External Affiliation collusion, explained, 128
External Authoritative synchronization model,
explained, 114
F2P (free-to-play) model
challenges of, 129–130
popularity of, 227–228
Fable 2 pub games, problem with, 108
Facebook, targeting for spam, 210
facilities requirements, 349–350
fair play versus cheating, 105–106
Family-Safe Gaming Initiative (Microsoft), 314
fan sites, overview of, 273
faux multi-player gaming, explained, 185
FBI investigation of payment fraud, 290–291
feature versioning, considering in license policies, 53
“feelies” (Infocom), 34
financial database, inclusion in game services, 340
fingerprinting DRM system
attacks on, 48
using, 46–47
firmware, updating for duplication, 33–34
Flooz, fraud associated with, 290
fraud, safeguarding against, 288. See also gold frauders;
insider fraud
friend codes, using for children, 317
gambling
defining, 325–326
versus gaming, 326
Wire Act related to, 331
game accounts, security of, 289–290
game addiction, overview of, 304–306
game cheating
boundaries of, 146–149
costliness of, 105
demonstrating, 148
detecting, 121
versus fair play, 105–106
“for fun,” 120–121
versus hacking and exploits, 108
in high-score games, 181–182
importance of, 104
interpreting, 103
overview of, 102
seduction of, 102–103
by server, 111
in single-player games, 103
threat of, 366
tools used in, 149
using “cheat detection” tool with, 16
See also anti-cheating techniques
game clients, vulnerability of, 152
game commerce
alternative models for, 227–228
problems associated with, 223, 226
game consoles
anti-piracy strategy for, 25
attacking save game files in, 141
experimenting with pricing of, 62–63
functionality of, 58–59
memory used in, 141
game demographics, change in, 24–25
game engines
versus graphics engines, 142
keeping data on, 138
game industry
alternative models for, 227–228
competition with movies, 62
game injection, explained, 161
game integrity, importance of, 260
game operations architecture, 341
game operators
versus gold farmers, 236
problems with, 201–202
sample architecture, 340–342
“Game Over” game, 199
game pirates, values of, 13
game play
automating portions of, 147–148
griefing, 215–217
patterns, 176–178
game players, categories of, 224–225
game-play patterns, 176–178
game protection
basis of, 366
challenge of, 16
global industry challenges, 367–368
including in development process, 18
security beyond technology, 368–369
Game Save problems, occurrence with Sony’s PSP, 59
game save sharing, explained, 203
game scams, categories of, 347
game security
accountability in third-party development, 267–268
accountability in third-party licensing, 268–270
balancing with ease of use, 55–56
being concerned about, 3–5
beyond technology, 368–369
business of, 367–370
challenge of, 5
versus complexity, 19
contract considerations, 266–267
contracting, 266–267
of digital distribution, 87–91
effectiveness of, 7
elements of, 3
fatalistic attitudes toward, 6
importance of, 3
importance to consumers, 4
linguistic trap associated with, 6
as means of managing uncertainty, 15–17
of online games, 43
for online games, 71
of operations, 352–353
ownership of, 369
PCI-DSS, 289
perception of, 356–357
simplicity of, 19–20
subjective definitions of, 9
game security architectures
Blind Service model, 123, 125
collaborative model, 124, 126
Trusted Third Party model, 122–123
game security incidents
in 2001 to 2004, 378
in 2005, 377–378
in 2006, 373–377
in 2007, 372–373
and incident response, 354–356
game servers
putting mathematical models on, 171
using public keys with, 71
game services
ensuring integrity of, 185
scams related to, 328–329
security-related components of, 340–341
systems included in, 340
using proxy servers with, 338–339
game state, attacking, 142, 183. See also state-based networking
game usage, restricting, 304
gamers, stereotypes of, 24
games
attacking via local applications, 132
high price of, 63
operating as services, 26
partial information in, 115
performing break-even analysis of, 26–27
physical security of, 63–64
retrospective verification of, 115
scams in, 345–347
GamesFirst! (Shawn Rider), 159
GameShark and R4 hardware hacks, 108
GameStop 2007 Annual Report, on used games
market, 60–61
gaming versus gambling, 326
Gears of War, 143
ghosting in tournaments and ladder games, 198–199
Glider tool, using with World of Warcraft, 147–148
Gödel, Escher, Bach: an Eternal Golden Braid
(Hofstadter), 19
Gödel’s Theorem, 19
gold farmers
versus game operators, 236
motivations of, 17
gold farming
countermeasures for, 232–236
defined, 223
impact on customer service, 231
objectives of, 13
overview of, 230–236
solutions for, 238–239
gold frauders, problem with, 241. See also fraud
Grand Theft Auto: Liberty City Stories, 59
graphics engines
versus game engines, 142
vulnerability of, 138, 142
graphics, exploit associated with, 109
grief, inflicting, 209–210
griefing
behaviors associated with, 190
countermeasure for, 211
deterring, 213
game play, 215–217
high-score or player-name, 215
managing, 211
methods of, 209
solutions for, 212–215
Guangzhou Optisp (Legend of Mir 3), 67
Guild Wars (ArenaNet), 24
Guitar Hero, 98, 173–174
Habbo Hotel, problems with virtual prostitution, 240
hacker proxies, overview of, 158–163
hackers, attacks by, 190
hacking
versus exploits and cheating, 108
servers, 122
hacks, fake examples of, 148
Half-Life, 255
Halo 2
French version of, 271
insider fraud related to, 259
proxy problems with, 159
Halo 3, Save Films feature in, 179
harassment, dealing with, 220
hardware hacking
difficulty of, 57
R4 and GameShark, 108
hash functions, using, 143, 182–184
Havok, physics engine from, 255
hidden state and partial information, 115
high-score games, cheating in, 181–182
high-score strategies, alternatives for, 185–186
high-speed games, problems with, 174
HMI (human-machine interface) design,
importance of, 352
Hofstadter, Douglas (Gödel, Escher, Bach: an
Eternal Golden Braid), 19
honeytrap memory, explained, 137
I1 installation option, explained, 95
ID and checksum license key system, 35–36
identity theft, overview of, 306–308
identity
establishing, 253
importance of, 296
problems with, 202–203, 294–295
state of, 295–296
identity cards, counterfeiting, 290
identity systems
components of, 296–297
registration problem with, 296–302
types of, 298–300
Id Software, pricing of Rage, 62–63
ignore security strategy, explained, 14
Illegal Gambling Business Act of 1970, 331
illegal payments, overview of, 290–291
implicit features, including in games, 186
IMVU, 217–218
Incompleteness Theorem, proof of, 19
indemnification
defined, 268
section in contracts, 350
independence, achieving goals of, 8
indirect data stores, explained, 136
Infocom (“feelies”), 34
Information Protection and Other Unnatural Acts
(Harry Demaio), 6
inside players, problem with, 201
insider fraud
countermeasure for, 262–264
occurrence of, 259–260
See also fraud
installation
considering in license policies, 53
options for, 95
insure security strategy, explained, 14
internal authoritative synchronization model,
explained, 113–114
Internet use, security risk associated with, 258
Iron Realms (Achaea), 228
isolation and privileging, overview of, 262–264
Isolation in ACID, explained, 156
Ix installation option, explained, 95
Jagex (RuneScape), popularity of, 62, 228
judgment proof, defined, 257
keygens, explained, 36
key-loggers, use of, 4
keys. See license keys; public key
kid-friendly services, securing, 316–317
kids communications, overview of, 316–319
Kim, Min, 104, 288
King, Stephen (The Plant), anti-piracy innovation of,
Knight Online, punishment related to, 248
Korea
commercial game hack tools in, 149
outsourcing security in, 269
Koster, Raph, 229
Kowalski, Robin, 316
Krebs, B. (“Web Fraud 2.0”), 149–150
ladder games
collusion in, 197
game configuration, 198
ghosting, 198–199
ladders, use of, 193–194
law enforcement, overview of, 348–349
Lawrence Berkeley National Lab, investigation at, 7–8
lawsuits, dealing with, 272
layered security, significance of, 7
laziness, security problems caused by, 10
legal considerations
federal laws and regulations, 331–332
overview of, 329–333
state laws and regulations, 332–333
Legend of Mir 3 (Guangzhou Optisp)
fraud associated with, 259
server piracy of, 67, 122
The Legend of Zelda: Twilight Princess, 59
LGK (license generation key), sharing, 38
liability and business risk, overview of, 218–221
licensees
and licensors, 271
replacing, 270
license keys
ID and checksum, 35–36
online authorization, 37–38
protecting, 39
public key encryption, 36–37
splitting data for, 40–41
using, 39
license policies
controlling, 52–54
use in DRM systems, 51–54
licensing games, due diligence process of, 269
Limbo (Svenska Spel), 127–129, 197
Lineage II (NCsoft)
bots for, 148
server piracy of, 67
Lineage III, 257
live connection, considering in license policies, 53
live play, overview of, 329
lobby attacks, 195–196
logging systems, inclusion in game services, 341
Lumines, 59
MAC (Mandatory Access Control) security policy model, explained, 51–52
MAC (Message Authentication Code), using, 143
Magic: The Gathering exception game, 175
malicious code, targeted, 121
malware from Asia, statistic related to, 4
MapleStory (Nexon), popularity of, 62, 129
markets and regions, considering in license policies, 52
mathematical models, putting on game servers, 171
MD5 hash function, using, 143
Media Asset in DAS, features of, 86
MediaDefender service, 18
media piracy, fighting, 29
memory
attacking, 132–134
in console games, 141
memory editors, use of, 133–134
merchant accounts, using, 277
Metaplace, 217–218
Metroid Prime Hunter, 141
MGame (Yulgang), 269
Microsoft’s Xbox 360 console, attack on, 33–34
MindArk (Entropia Universe), 130
MMOs
movement to P2P (peer-to-peer) architecture, 70
simple game mechanics of, 67
vulnerability to server piracy, 66
moneybookers, using, 279
money, conversion of virtual currency into, 130
money laundering
legal issues related to, 291–293
use by terrorists, 361
monitoring
online services for children, 317
overview of, 316–319
Moore’s Law, relating to emulators, 56
Morris Trap, 300
movies, competition with game industry, 62
multi-player gaming, 79
paper by Matt Pritchard, 111
N-1 Secure, goal of anti-cheating, 124–125
nagware, activation by anti-piracy systems, 42
NCsoft (Lineage II)
bots for, 148
server piracy of, 67
Nelson, Major, 352–353
network games
action based, 117–120
implementation of, 111
problems with, 159
state-based, 111–116
synchronization challenges, 115
network time, considering, 163–165
newest wins synchronization model, explained,
Nexon (MapleStory), popularity of, 62
Nine Inch Nails, anti-piracy innovation of, 29
ninja looting problem, explained, 216
Nintendo
DS handheld console, 25, 28
trusted brand security of, 28–29
“NO DISK” hacks, popularity of, 35
“no download” games, 66
non-repudiation, explained, 213
NOOP instructions, replacing verification functions with, 140
number games, 169–170
obfuscating data, 40–41
obfuscation
DRM system, 50–51
use of, 134–137
obscenity, dealing with, 219–220
Office IT, infrastructure of, 258–259
online authorization license key system, 37–38
online gambling, success in Caribbean, 68
Protecting Games: A Security Handbook for Game Developers and Publishers
online game services
ensuring integrity of, 185
scams related to, 328–329
security-related components of, 340–341
systems included in, 340
using proxy servers with, 338–339
online games
alternative models for, 227–228
ease of building, 69
globalization of, 68
movement to P2P (peer-to-peer) architecture, 70–71
original reason for development of, 66
security focus for, 71
storage of, 43
online identity. See identity
online payments, growth of, 70
online poker, solvers for, 146
operating systems, targeting directly, 56
operations, security of, 352–353
“The Orange Book,” security grades in, 51
explained, 203
risks associated with, 256
OutWar, 197
P2P (peer-to-peer) architecture
movement of MMOs to, 70
movement of online games to, 70–71
packet hacks, types of, 161
parental controls, overview of, 316–319
partner security issues, 270–273
passwords, protecting with Morris Trap, 300
patterns, types of, 176
payment abuse, explained, 201–202
payments
fraud, 287, 290–291
illegal, 290–291
processing, 276–280
PayPal, using, 277–282
PC games
anti-piracy approach for, 35
pricing of, 62
PCI-DSS standard
compliance with, 288
overview of, 289
pedophiles, dealing with, 315–316
penetration testing, weaknesses of, 17
phased distribution
explained, 214–215
pacing, 215
physical security tokens, benefits of, 352
piracy
cost of, 26
determining scope of, 24–28
estimated annual costs of, 4
insider, 69
international impact of, 23
measuring, 75–76
online game appliances, 69
owning problem of, 38
of PlayStation 2, 60
rate of, 23
reconsidering approach toward, 99
strategies toward, 25
as theft, 22
troubled partnerships, 69
See also anti-piracy; server piracy
pirate networks, fighting, 76–78
pirates
converting to resellers, 87
handling after catching, 42
pricing out of business, 62–63
Pirates of the Caribbean MMO game, 137
The Plant (Stephen King), anti-piracy innovation of,
platforms, considering in license policies, 53
player collusion, 127–129, 167
player handles, inappropriate, 187
players
banning, 245
distinguishing from programs, 149–150
player systems, inclusion in game services, 341
PlayStation 2, piracy of, 60
poker, automation of, 147
pornography, child, 321–322
defined, 223
overview of, 239–240
pre-paid cards, using, 279
Prince of Persia, replay system in, 179
Pritchard, Matt, 111
privacy, overview of, 306–308
privacy protection, legal requirements for, 308–310
private keys, signing messages with, 213
privileging and isolation, overview of, 262–264
programs, distinguishing from players, 149–150
Project Entropia, 129
proprietary encoding DRM system, using, 50
protect, detect, react, 13
protection
basis of, 366
challenge of, 16
global industry challenges, 367–368
including in development process, 18
security beyond technology, 368–369
protect security strategy, explained, 15
proxies
defensive, 157–158
hacker, 158–163
proxy design, advantage of, 158
proxy servers, using with game services, 338–339
PR (public relations), perception of security in, 356–357
public key
credentials, 299
encryption, 36–37
fighting replacement of, 72
using with game servers, 71
punishment
cost of, 244–245
credibility of, 243–244
and credible deterrence, 245–248
goals of, 243–244
possibilities for, 245–248
puzzle games, 169–170, 186
Puzzle Pirates (Three Rings), 129
Python byte-code, attack on, 137
QuickTime, vulnerability of, 130
R4 and GameShark hardware hacks, 108
race conditions
and “dupe” exploits, 109
explained, 108
occurrence of, 155–156
in World of Warcraft, 156
Radiohead, anti-piracy innovation of, 29
Rage (Id Software), pricing of, 62–63
Ragnarok Online (Shanda Interactive), 67, 156
rake abuse, explained, 201–202
randomized features, using, 172
randomly seeded client solution, implementing, 184
random number systems, 125–127
Random, role in CAARDS reference model, 110
rank boosting and busting, 196
ranking systems
countermeasures for, 204
overview of, 192–195
purposes for, 194
types of, 193–194
react security strategy, explained, 15
real world
attacking via virtual world, 360
establishing strong identity in, 253
money in, 252
the insider problem in, 252
recover strategy, explained, 13
regions and markets, considering in license policies, 52
registration
considering in license policies, 53
options for, 94
problems with, 296–300
Reign of Revolution, 326
the remote data problem
action-based networking, 117–120
client/authoritative server networking, 116–117
state-based networking, 111–116
replayable game logs, explained, 186
resync attack, explained, 161
reward security strategy, explained, 14
rewriteable DVDs, manipulating in console attacks, 56
RF Online, cheating in, 355
Rider, Shawn (GamesFirst!), 159
risk, impact on protection, 12–13
RIS (rich interaction systems), 79–84, 102.
See also anti-piracy
RMT (real-money transactions), fairness of, 105
Rock Band, 98
rock-paper-scissors game, 162–163
Roma Victor, punishment related to, 248
RO: Registration Optional option, explained, 94
round robin method, using with merchant accounts,
Royal Canadian Mounted Police, impact of piracy on,
RR registration option, explained, 94
rules processing, types of, 119
Rules, role in CAARDS reference model, 110
RuneScape (Jagex), popularity of, 62, 228
RunUO Products Page, 67
Rxx (D or T) registration option, explained, 94
save game files, attacking in console games, 141
scalability and availability, overview of, 339–340
scams in games, overview of, 344–347
score table constants, altering, 183
Scrabble Word Finder, 169
Second Life, 129–130, 217–218
secure bootstrapping, 58, 143
secure loading, process for, 143
secure operations, 352–353
security
accountability in third-party development, 267–268
accountability in third-party licensing, 268–270
balancing with ease of use, 55–56
being concerned about, 3–5
beyond technology, 368–369
business of, 367–370
challenge of, 5
versus complexity, 19
contract considerations, 266–267
contracting, 266–267
of digital distribution, 87–91
effectiveness of, 7
elements of, 3
fatalistic attitudes toward, 6
importance of, 3
importance to consumers, 4
linguistic trap associated with, 6
as means of managing uncertainty, 15–17
of online games, 43, 71
of operations, 352–353
ownership of, 369
PCI-DSS, 289
perception of, 356–357
simplicity of, 19–20
subjective definitions of, 9
security by obscurity, 19
security client, using in cheat detection systems, 150
security insanities, types of, 365–366
security issues
game service scams, 328–329
live play, 329
poker, contest, and skill game bots, 329
security label DRM system, using, 49
security principles
Anything easy, 7
Effective security, 7
Make adversary work, 16
Simplicity, 19
security problems
causes of, 10–12
impact on business, 129–130
occurrence of, 9
security software, bundling with game security tools, 154
security strategies
avoid, 14
delegate, 14
detect, 15
deter, 15
ignore, 14
insure, 14
protect, 15
react, 15
recover, 13
reward, 14
security testing tools, using, 18
security tokens, use of, 299
self-improvement game, Brain Age as, 169
server code
reverse engineering of, 68
stealing, 68
server networking, client/authoritative, 116–117
server piracy
combating, 70
versus service piracy, 68
trends in, 66–70
See also piracy
servers
authenticating, 70–72
cheating by, 111
hacking, 122
service providers, partner security issues related to,
sexual favors (virtual), problems with, 240
Shanda Interactive (Ragnarok Online), pirate server
for, 67, 156
shared knowledge collusion, explained, 128
shills, problem with, 201
signatures, attempts at stabilization of, 16
silent filter services, using for children, 317
The Sims Online, problem with virtual sexual favors, 240
SiN Episode 1 (Valve’s Steam service), 24
single-player games
cheating in, 103, 106
pattern used in, 176
skill games
overview of, 186, 327–328
retrofitting, 206
smart USB tokens, use of, 149
Smedley, John, 236–237
social subversion
overview of, 190–191
threat of, 366
Söderström, Hampus (Toribash), 175
soft failures, explained, 137
SoftICE tool, use in Diablo attack, 120–121
software attacks
goal of, 57
tracking progress of, 59
Sony’s BMG Rootkit, 18
unpopularity of, 25
use with World of Warcraft, 151
Sony’s PSP
Game Save problems associated with, 59
problems with downgraders, 58
problems with image viewer, 59
source authentication, achieving, 73
spam and griefing, 190, 209–215
speed hacks, types of, 160–161
Spel, Svenska (Limbo), 197
split data stores, explained, 137
split delivery DRM system, using, 51
splitting data, 40–41
Spore (Electronic Arts), 217
sporting events, legal wages based on, 331
SQL injection attacks, vulnerability to, 157
SSL, problem with, 183
stalkers, dealing with, 315–316
standbying, explained, 161
Stardock (Brad Wardell), 24–25
StarForce anti-piracy product
anti-piracy saga, 18
duplication strategy of, 33
unpopularity of, 25
Star Wars: Galaxies, vulnerability to server piracy, 66
state, attacking, 142, 183
state-based networking
versus action based networking, 119
problem with, 116
synchronization models, 112–114
state information, reading, 134
State, role in CAARDS reference model, 110
stat guarding, defined, 199
statistical independence, achieving goals of, 8
steganography, using in secure digital distribution,
Stoll, Clifford (The Cuckoo’s Egg), 7
strategy versus cheating, 146–149
strong play tool, Scrabble Word Finder as, 169
strong strategies, use of, 175
stupidity, security problems caused by, 10–11
subletting, explained, 240
Sudoku, solvers for, 146
surveillance, availability in cheat detection systems, 152
Svenska Spel (Limbo), 127–129
synchronization models, using with distributed state systems, 112–114
syndicates and bots, 197
system memory, attacking, 132–134
tag DRM system, using, 49
A Tale in the Desert, 194, 204
terrorism (virtual), overview of, 359–360
terrorists, online tools for, 360–362
the insider problem, perception of, 252
Thorp, Edward (Beat the Dealer), 170
threats, impact on protection, 12–13
Three Rings (Puzzle Pirates), 129
tiered distribution, considering in license policies, 53
till fraud, explained, 201–202
time policy, creating, 163–165
timers, considering in license policies, 53
time, securing, 165
“time to penis,” 217
Titan Quest, 42
Toontown Online, griefing solution for, 212
Toribash (Hampus Söderström), 175
tournaments
collusion in, 197
countermeasures for, 204
game configuration, 198
ghosting, 198–199
growth of, 190
and lobby spiking, 195
overview of, 192–195
retrofitting games for, 206
TPM (Trusted Platform Module), 58, 90
trademark infringement, occurrence of, 220–221
Travel Act, 331
trivia games, 167–169
trusted brand security, Nintendo and ADV Films, 28–29
trusted client problem, 111
trusted platforms, problems with, 9
trusted third party model, 122–123
Turbo Squid, 220–221
Turing, Alan, 19
turn-based physics games, skill games based on, 171
Ubisoft lawsuit, 257
Ultima Online, vulnerability to server piracy, 66
online poker site, 122, 126
Ultimate Online Baseball MMO, 199
ultra-violence, explained, 202
uncertainty, managing, 15–17
Unreal (Epic Software), popularity of, 50
usage controls, overview of, 304–306
USB tokens, use of, 149
used games market
anti-piracy strategies for, 61
revenue generated by, 60
user account system, inclusion in game services, 341
user-created content, overview of, 217–218
user types, categorizing for license policies, 52
U.S., legal requirements for privacy protection, 309
considering in license policies, 53
importance of, 41
Valve's Steam service (SiN Episode 1), 24
verification functions, replacing with NOOP instructions, 140
video game consoles versus DVD players, 62
virtual currency, conversion into money, 130
virtual items, ownership of, 228–229
virtualization tools, use of, 120–121
virtual prostitution
explained, 240
problem with Habbo Hotel, 240
virtual terrorism, 359–360
virtual worlds, attacks on, 359–360
Vote synchronization model, explained, 113
V: Registration Value option, explained, 94
vulnerabilities, impact on protection, 12–13
wallhack, explained, 142
Wardell, Brad (Stardock), 24–25
watermarking DRM system
attacks on, 48
using, 47
"Web Fraud 2.0" (B. Krebs), 149–150
web of trust, problem with, 301
whitelist authorization service, supporting, 338–339
whitelist chat services, using for children, 317
whitelists versus blacklists, 151
Wikipedia, attacks on, 362
Wire Act, 331
word games, 169–170
WOW (World of Warcraft), 217
attack by Sony BMG's Rootkit, 18
authentication token for, 4
licensing of, 69
player collusion in, 128–129
race condition in, 156
Sony BMG Rootkit used with, 151
using Glider tool with, 147–148
Xbox 360 console (Microsoft), attack on, 33–34, 56
Xbox Live, internal security problems with, 352–353
Yee, Nick, 225
YouTube threat, 148
Yulgang (MGame), 130, 269
zero-sum scoring, 200