Proceedings of WEIN’ 16
May 10, 2016
Grand Copthorne Waterfront Hotel Singapore
Scope and Theme
In this workshop we will discuss the emergence of intelligence from large-scale, complex networks of agents. Our brain consists of some 50 billion neurons, and the neuron network gives rise to our consciousness and intelligence. Likewise, individual human behavior and the large-scale, complex human network give rise to society. We can see many emergence phenomena like these in the real world.
In these cases, not only the dynamics of each neuron or human but also their "network dynamics" are important. The aim of this workshop is to investigate the role of networked agents in the emergence of systemic properties, notably emergent intelligence. The focus is on topics such as network formation among agents, the feedback of network structures on agents' dynamics, network-based collective phenomena, and emergent problem solving by networked agents.
Up to now, the main interest of the agent community has been the dynamics of the agents themselves. However, with the rapid growth of the Internet (e.g., the WWW) and of social media such as Twitter and Facebook, studies of complex networks are attracting international attention, and such studies are deeply related to the multi-agent community. So, the ultimate target of this workshop is to build a bridge between the multi-agent community and the complex systems community.
Currently, it seems that research on MAS is still mostly focused on agents
themselves, whereas networks of agents have received relatively little
attention. The rapid development of various technologies, including those in
web intelligence, ubiquitous computing, sensor networks, and grid computing
will, however, lead to systems consisting of a potentially very large number of
agents. In these situations, the view of each agent is limited to its local environment, and the efficiency of the system is significantly affected by the network in which the agents are embedded. Thus, it is important to pay attention not only to the agents themselves, but also to the structure and the dynamics of the network.
Topics of interest
Methodologies
Bottom-up approaches for MAS
Collective intelligence
Complex network
Emergence and self-organization
Emergence in multi-agent systems
Network-centric agent systems
Social networks
Web intelligence
Applications
Cascade dynamics
Multi-agent-based supply chain networks
Innovation networks
Social and economic agents
Systemic risk in large-scale networks
Web dynamics
Committee
Workshop Chair
Satoshi Kurihara (The University of Electro-Communications, Japan)
Workshop Organizers
Hideyuki Nakashima (Future University-Hakodate, Japan)
Akira Namatame (National Defense Academy, Japan)
Satoshi Kurihara (The University of Electro-Communications, Japan)
Scientific Program Committee Members
Akira Namatame (National Defense Academy, Japan)
Fujio Toriumi (The University of Tokyo, Japan)
Hidenori Kawamura (Hokkaido University, Japan)
Hirohiko Suwa (Nara Institute of Science and Technology, Japan)
Kazuhiro Kazama (Wakayama University, Japan)
Kiyoshi Izumi (The University of Tokyo, Japan)
Massimo Cossentino (National Research Council of Italy, Italy)
Satoshi Kurihara (The University of Electro-Communications, Japan)
Shu-Heng Chen (National Chengchi University, Taiwan)
Toshiharu Sugawara (Waseda University, Japan)
PROGRAM
Invited Talk I
Multiagent Simulation for Designing Social Services
Itsuki Noda
Invited Talk II
Analysis of the cooperation using networks with arbitrary features
Shohei Usui
Technical Papers
Generation of Public Transportation Network for Commuter Stranded Problem .......... 1
Takahiro Majima, Keiki Takadama, Daisuke Watanabe and Mitujiro Katuhara
Effect of Direct Reciprocity on Social Networking Services .......... 9
Kengo Osaka, Fujio Toriumi and Toshiharu Sugawara
Analysis of Market Trend Regimes for March 2011 USDJPY Exchange Rate Tick Data .......... 17
Lukas Pichl and Taisei Kaizoji
An Examination of a Novel Information Diffusion Model: Considering of Twitter User and Twitter System Features .......... 24
Keisuke Ikeda, Takeshi Sakaki, Fujio Toriumi and Satoshi Kurihara
Invited Talk I
Multiagent Simulation for Designing Social Services
NODA, Itsuki,
Dr. Eng.
Leader of Computational Social Intelligence Research Team (CoSIne) &
Principal Research Manager of Artificial Intelligence Research Center (AIRC)
National Institute of Advanced Industrial Science and Technology (AIST)
Abstract
Computer simulations of social phenomena will become the most efficient tool for designing and improving social systems. It is impossible, or at least quite difficult, to carry out experiments on social phenomena in the real world in the same way as experiments in physics or chemistry. Therefore, computational social simulations are an indispensable means for social science. Fortunately, advances in computational power and big data enable us to handle large-scale social simulations in which a large number of human activities are represented by the behaviors of multiple intelligent agents.
The most significant feature of social simulation, compared with physical simulation, is the difficulty of carrying out real experiments. Because of this difficulty, we do not yet have sophisticated models of social phenomena. In fact, there are three critical issues in social simulation: (1) undetermined models, (2) obscure boundary conditions, and (3) incomplete data.
In order to overcome these issues, we are conducting a project called "Project CASSIA" (Comprehensive Architecture of Social Simulation for Inclusive Analysis), which aims to develop a framework to administer and execute large-scale multiagent simulations exhaustively to analyze socially interactive systems. The framework will realize an engineering environment to design and synthesize social systems such as traffic, economy, and politics. Using the framework, we are applying multiagent social simulation to actual real-world problems such as disaster mitigation, smart transportation systems, stable economic systems, and so on. In this talk, I will show several results of this project.
Biography
NODA, Itsuki is a team leader of the Service Design Assist Research Team of the Center for Service Research, National Institute of Advanced Industrial Science and Technology (AIST), Japan. He received the B.E., M.E., and Ph.D. degrees in electrical engineering from Kyoto University, Kyoto, Japan, in 1987, 1989, and 1995, respectively. He was a visiting researcher at Stanford University in 1999 and worked as a staff member of the Council for Science and Technology Policy of the Japanese government in 2003.
He is a founding member of RoboCup and has promoted its Simulation League since 1995. RoboCup is a research competition and symposium on robotics and artificial intelligence, and international competitions are held every year. The Simulation League has become a standard problem in the multi-agent simulation research domain and is used worldwide. He is now the president of the RoboCup Federation. He has also been involved in the development of integrated information-sharing and simulation systems for disaster and rescue, which are Japanese national projects for disaster mitigation. One output of these projects made it possible to provide traffic information just after the Great East Japan Earthquake using trajectory data from car navigation systems.
He is now promoting a project to develop a framework for large-scale multiagent social simulation systems on high-performance computing environments.
He has received the 1995 best research award of JNNS (Japanese Neural Network Society), the best paper
award of JAWS-2008, the best paper award of IPSJ-2009 and IPSJ-2010, and the 2011 Field Innovation
Award (silver) of JSAI.
He is interested in multi-agent social simulation, machine learning, and disaster mitigation information
systems.
Invited Talk II
Analysis of the cooperation using networks with arbitrary features
Shohei USUI∗ and Fujio TORIUMI†
Graduate School of Engineering,
The University of Tokyo
Our goal is to understand the relation between network structure and cooperation. Most researchers have approached this problem under limited conditions, so it cannot be said that general knowledge about it has been obtained so far. For example, Rong et al. [1] studied cooperation focusing on assortativity. They generated scale-free networks with assortativity r = 0.0-0.3 by the Xulvi-Sokolov algorithm and observed the relation between assortativity and cooperation. However, there is a correlation between assortativity and the average shortest-path length of the networks, and it is unclear which features affect the degree of cooperation. For these reasons, the problem cannot be analyzed sufficiently without a compound analysis of the structural features. In this paper, we clarify which network features affect the evolution of cooperation through the analysis of a decision tree. We simulate SPD games on various networks and construct a decision tree.
We propose the Greedy Growth Model (GGM). GGM is a network growth model that grows networks toward networks with freely decided features. Algorithm 1 shows the process of GGM. The distance between networks g and c from the viewpoint of feature f_i is calculated as follows:
$$D_{f_i}(f_i(g), f_i(c)) = \left( \frac{f_i(g) - f_i(c)}{\sigma_{f_i}} \right)^2 , \qquad (1)$$
The analyzed features are the following: average shortest-path length L, clustering coefficient C, assortativity r, and the alpha and beta parameters of the Beta distribution. Here, we generate networks using required features decided randomly. In this paper, 4,000 networks are generated as the network dataset. The dataset includes networks with various network structures. The cooperation degree Pc is defined as the objective variable of the decision tree. In this paper, the Spatial Prisoner's Dilemma (SPD) is adopted [1]. The payoff matrix is defined as follows:
$$\begin{pmatrix} R & S \\ T & P \end{pmatrix} = \begin{pmatrix} 1.0 & 0.0 \\ 1.2 & \epsilon \end{pmatrix}, \qquad (2)$$
where epsilon is a minimum value.
We analyze how and which features affect the cooperation degree. A decision tree is constructed for this analysis. The cross-validation error of the decision tree is 0.325.
There are three conditions in which cooperators become the majority, as shown in Table I. We first consider the condition with the highest cooperation degree. It is suggested that a lower average shortest-path length L leads to a high cooperation degree. However, even if the average shortest-path length L is high, the cooperation degree can be high, as shown in the second-highest condition. In the second-highest condition, a condition in which the alpha of the degree distribution is low is added instead of the condition in which the average shortest-path length L is low. Networks with high alpha have many low-degree nodes. Therefore, nodes with small degree are required for the achievement of cooperation.
Next, not-too-low assortativity (r >= -0.261) leads to the achievement of cooperation. This is common to the second- and third-highest conditions. Therefore, it can be said that disassortative networks cannot keep cooperators. In addition, in the third-highest condition, very high assortativity r leads to a low cooperation degree. It is suggested that both very disassortative and very assortative networks cannot keep cooperators.
Finally, high beta leads to a high cooperation degree. This is common to all three conditions. A degree distribution with high beta has a long tail. It is suggested that the cooperation degree is affected by a long tail.
Algorithm 1 Greedy Growth algorithm
Ensure: Start from a network with one node
while the number of links is less than the required number of links do
  for the number of network candidates do
    if p then
      Add a new node and make a link between it and one of the existing nodes.
    else
      Make a link between two existing nodes.
    end if
    This network is one candidate; generate c candidates in total.
  end for
  Select the network closest to the required network from the candidates.
end while
TABLE I. Condition in highest and lowest networks
        Highest        2nd highest    3rd highest
Pc      0.769          0.621          0.612
L       L < 2.65       L >= 2.65
C                                     C < 0.559
r       r >= -0.261    r >= -0.159    -0.146 <= r < 0.339
alpha                  alpha < 2.90   alpha < 3.05
beta    beta >= 24.2   beta >= 24.2   15.0 <= beta < 24.2
* [email protected]
† [email protected]
[1] Z. Rong, X. Li, and X. Wang, Phys. Rev. E 76, 027101 (2007).
Generation of Public Transportation Network for Commuter Stranded Problem
Takahiro Majima* (3-68-1 Shinkawa, Mitaka, Tokyo, Japan; [email protected])
Keiki Takadama† (1-5-1 Chofugaoka, Chofu, Tokyo, Japan; [email protected])
Daisuke Watanabe‡ (2-1-6 Etchujima, Koto-ku, Tokyo, Japan; [email protected])
Mitujiro Katuhara§ (Tokyo, Japan; [email protected])
ABSTRACT
Scheduled transportation service is a proper system for mass transportation, and it is adopted by a wide range of transportation modes, such as railway, airline, maritime container shipping, and bus. The providers of such services are required to generate effective routes and networks. For this issue, this paper tackles the problem of generating a Public Transport Network (PTN) for stranded commuters in a disaster. The commuter stranded problem in the Tokyo Metropolitan area was posed by the Great East Japan Earthquake in the year 2011, when several million commuters had difficulty returning home because the railway transportation system was out of service. If such a situation is prolonged, not only going home but also going to work becomes difficult. In such a situation, bus and waterbus are expected to serve as alternative transportation modes, and transportation networks must be generated from scratch for the bus and waterbus. We developed a method for generating a PTN based on a MAS (Multi-Agent System), and it output the best solution for a benchmark problem. However, the network size of that problem is small (14 bus stops). To apply our method to a large-scale problem like the commuter stranded problem, a fast computation process and the capability to deal with a multi-modal transportation system are required. This paper reports the characteristics and capability of the PTN generation method modified for the commuter stranded problem.
Categories and Subject Descriptors
[Computing Methodologies]: Modeling and Simulation - Model verification and validation
General Terms
Algorithms
Keywords
Algorithms, Performance, Network Design
*National Maritime Research Ins.
†The University of Electro-Communications
‡Tokyo University of Marine Science and Technology
§SocioTechData
Appears in: Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016), John Thangarajah, Karl Tuyls, Stacy Marsella, Catholijn Jonker (eds.), May 9-13, 2016, Singapore.
Copyright © 2016, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
1. INTRODUCTION
A stranded commuter problem in the Tokyo Metropolitan area was posed by the Great East Japan Earthquake in the year 2011, when several million commuters had difficulty returning home because the railway transportation network was out of service. If the bad situation had continued, the commuters would also have had difficulty commuting to their offices in the center of Tokyo. In such a situation, bus and waterbus are expected to serve as alternative transportation modes, and transportation networks should be generated from scratch for the bus and waterbus.
Scheduled transportation service is a proper system for mass transportation, and it is adopted by a wide range of transportation modes, such as railway, airline, maritime container shipping, and bus. The providers of such services are required to generate effective routes and networks. For this issue, we developed a method for generating a Public Transport Network (hereinafter called the PTN) based on a Multi-Agent System (hereinafter called the MAS), and it output the best solution for a benchmark problem [6]. However, the network size of that problem is small (14 bus stops). To apply our method to a large-scale problem like the commuter stranded problem, a fast computation process and the capability to deal with a multi-modal transportation system are required.
We decided to apply the MAS as the generator of the PTN under the disaster situation because the MAS has two sorts of flexibility. One is the flexibility to adapt to the changing conditions caused by the recovery process after the disaster. (The MAS has the feature of adding and deleting agents, which leads to adaptiveness to changing conditions such as the blockage of streets or newly built bus stops.) The other is the flexibility to adapt to various problems by changing the rules of the agents. This paper tackles the following subjects by modifying the agent rules of our original method [6].
• Dealing with multi-modal (bus and waterbus) transportation network
• Developing fast computation method
• Combining the above two items and applying them to a large-scale network for the commuter stranded problem
This paper is organized as follows. Section 2 explains the commuter stranded problem and how to estimate the OD matrix. Section 3 shows the problem definition and solution for the PTN generation problem. Section 4 explains the modified method to generate the PTN for the commuter stranded problem. The problems to which our method is applied and the discussion are described in Section 5. Finally, Section 6 gives the conclusion.
2. COMMUTER STRANDED PROBLEM
In the year 2011, the Great East Japan Earthquake and the following tsunami struck the eastern coastal area of the Japanese main island. The devastated area was not limited to the coastal area: all railway transportation systems around the Tokyo metropolitan area were out of service. It is reported that about 5 million people had difficulty returning home under this situation [3].
Since the demand matrix is essential to generate a transportation network, we developed a method to compute the distribution of the stranded commuters considering the damaged railway networks. An element of the demand matrix means the demand per unit time between two bus stops, and the demand matrix is often referred to as the OD (Origin, Destination) matrix. Fig. 1 shows the flow diagram of the method.
The system has a database of the time history of the OD matrix during working time under the normal condition. Combining the OD matrix at the time of the disaster occurrence and the damaged railway network, the routes of the commuters are analyzed with a shortest-path algorithm in terms of their traveling time. Since the total distance to walk is recorded in the analyzed itinerary, the number of stranded commuters can be obtained by multiplying by the stranded rate in Fig. 2.
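For illustration only, the estimation step described above can be sketched as follows; the OD data structure and the stranded_rate function (a stand-in for the curve in Fig. 2, rising from 0% at 0 km to 100% at 20 km) are assumptions of this sketch, not the authors' implementation.

# Sketch: derive the OD matrix for stranded commuters by multiplying each
# OD demand by the stranded rate of its itinerary's total walking distance.

def stranded_rate(walk_km):
    # Assumed piecewise-linear stand-in for the curve in Fig. 2.
    return min(max(walk_km / 20.0, 0.0), 1.0)

def stranded_od(od_at_disaster, walk_distance_km):
    # od_at_disaster[(origin, dest)]: demand at the time of the disaster
    # walk_distance_km[(origin, dest)]: walking distance on the analyzed itinerary
    stranded = {}
    for pair, demand in od_at_disaster.items():
        stranded[pair] = demand * stranded_rate(walk_distance_km[pair])
    return stranded

# Toy usage (numbers are illustrative, not from the paper):
od = {("A", "B"): 1000.0, ("B", "C"): 500.0}
walk = {("A", "B"): 12.0, ("B", "C"): 4.0}
print(stranded_od(od, walk))   # {('A', 'B'): 600.0, ('B', 'C'): 100.0}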
Figure 1: Flow Diagram (inputs: OD matrix in the normal situation (30-min interval), set of railway lines and sections out of service, street network, railway network, positions of railway stations, and time of disaster occurrence; intermediate products: OD matrix at the time of disaster occurrence, integrated network, pedestrian route & distance, and stranded rate; output: OD matrix for stranded commuters)
Figure 2: Relationship between Stranded Rate (0%-100%) and Distance to Walk (0 km-20 km)
Fig. 3 shows the origin and destination distributions analyzed under the situation in which all railway lines are out of service. The total number of stranded commuters is about 5 million. The dark-colored areas represent large numbers of stranded commuters. It can be recognized that the origin distribution concentrates in a small area, the center of Tokyo, while the destination distribution surrounds the center of Tokyo. This result is used as input data for the PTN generation in Section 5.
3. PROBLEM DEFINITION
The transportation modes are set to bus and waterbus in this paper. Thus, the physical infrastructure network is composed of streets and waterways. All of the information on the infrastructure network, the positions of bus stops, the OD matrix, the vehicle speed, and the carrying capacity are given. A solution is a set of transportation lines, each of which has information about its route and vehicle number. The route of a bus line is defined by a set of consecutive bus stops, and it is assumed that the vehicle travels back and forth on the route. Furthermore, since express buses are not considered, vehicles stop at all bus stops on their route. Fig. 4 shows an example of the problem and solution.
We decided to apply the MAS to tackle this problem. The line agent in Section 4 competes for passengers by changing its route and number of vehicles. The solution, a PTN, is the set of line agents surviving after the evolution process. An evaluation function similar to the following equation is frequently applied to the problem of PTN generation [8]. In this paper, the same evaluation function is employed.
$$\min Z = \sum_{S_i \neq S_j \in ST} T_{S_i,S_j} D_{S_i,S_j} + w_1 \sum_{L_k \in BL} B_{L_k} \qquad (1)$$
where $\min Z$ means that the target is minimizing the evaluation value $Z$, $ST$ is the set of bus stops, $BL$ is the set of bus lines, and $D_{S_i,S_j}$ and $T_{S_i,S_j}$ are the demand and traveling time from bus stop $S_i$ to $S_j$, respectively. The traveling time is composed not only of in-vehicle time but also of expected waiting time. $B_{L_k}$ is the number of vehicles deployed on the bus line $L_k$. The traveling time from bus stop $S_i$ to $S_j$ is simply calculated from the route distance and the service speed of the vehicle.
The first term in Eq. 1 represents the users' cost and the second term the service provider's cost. $w_1$ is a control parameter: a small $w_1$ reduces the traveling time and a large $w_1$ reduces the number of vehicles. Passengers hope to reduce the first term and transportation companies hope to reduce the second term. Generally speaking, the relationship between the two terms is a trade-off, and this is one reason for the difficulty of solving the PTN generation problem. One extreme condition is connecting all pairs of bus stops by a direct shuttle service; although this is extremely convenient for passengers, it requires an enormous cost from the transportation companies. The detailed characteristics and applications of the method are reported in [5].
Figure 3: Distribution of Stranded Commuters due to the Out of Service of All Railway Lines ((a) Origin Distribution, (b) Destination Distribution)
Figure 4: Definition of the PTN Generation Problem and Solution ((a) Problem: an infrastructure network of streets, rivers, and rail with stations/bus stops ST 0-4, an OD demand matrix, vehicle speed = 20 km/hr, and vehicle capacity = 50 persons; (b) Solution: a set of lines with routes and vehicle numbers, e.g., Line a: ST 1-2-3 with 3 vehicles, Line b: ST 0-1-3 with 7 vehicles, Line c: ST 0-1-4 with 10 vehicles)
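To make Eq. 1 concrete, here is a minimal sketch of the evaluation value Z; the dictionary-based inputs are assumptions made for this illustration and this is not the authors' code.

# Sketch of Eq. 1: user cost (demand-weighted travel time, including the
# expected waiting time) plus w1 times the total number of deployed vehicles.

def evaluation_value(travel_time, demand, vehicles, w1):
    # travel_time[(si, sj)], demand[(si, sj)]: values for each OD pair
    # vehicles[line_id]: number of vehicles deployed on each bus line
    user_cost = sum(travel_time[pair] * demand[pair] for pair in demand)
    provider_cost = w1 * sum(vehicles.values())
    return user_cost + provider_cost

# Toy usage (illustrative numbers):
T = {("S0", "S1"): 0.5, ("S1", "S0"): 0.5}   # hours, incl. expected waiting
D = {("S0", "S1"): 30, ("S1", "S0"): 40}     # demand per unit time
B = {"line_a": 3, "line_b": 7}
print(evaluation_value(T, D, B, w1=0.8))     # 35.0 + 8.0 = 43.0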
4. METHOD
The method in this paper is based on our original method [6], which separates several components as follows:
• Generating an initial set of line agents
• Route selection of passengers
• Evolution of line agents
The explanation of the first component, the generation of the initial set of line agents, is omitted in this paper because of space limitations and because it is out of the scope of this paper. Briefly speaking, a growing network model was modified to generate the initial line agents. Growing network models are actively investigated in the research field of complex networks.
The evolution process starts after the generation of the initial set of line agents. Fig. 5 illustrates the flow diagram of the process. Step #2 in Fig. 5 (Section 4.1) and steps #3-#7 in Fig. 5 (Section 4.2) are executed alternately; line agents without passengers are deleted, so the number of agents eventually declines.
In this paper, the above original method [6] is modified to tackle the three subjects listed in Section 1. First, to accelerate the computation, the process called the 'Large evolution model' in Fig. 5 was introduced. In the original method (i.e., the 'Small evolution model'), only the single most profitable line agent is allowed to evolve in one evolution step. In contrast, all line agents are allowed to evolve in the 'Large evolution model'. Second, two kinds of line agents (i.e., bus and waterbus) are generated, as described in Section 4.3, to deal with the multi-modal transportation system. Finally, as described in Section 4.2.2, after the exclusion of a bus stop, a new line agent is generated. The new line agent is expected to compensate for solutions deteriorated by the 'Large evolution model'.

4.1 Passenger's Route Selection
In this method, it is assumed that each passenger selects the shortest path in time, which means minimizing the first term in Eq. 1. The process of this subsection analyzes the in-vehicle time, the waiting time, and the number of vehicle changes for all OD pairs (Fig. 5, #2). The result of this analysis is utilized in Section 4.2 for the calculation of the evaluation values and for the evolution of the line agents.
To analyze the route selection of passengers, the PTN is converted to the network in Fig. 6, separating each bus stop into a physical-world node and virtual nodes. For instance, since "line a" and "line b" share bus stop 3, the figure shows two virtual bus stops, a3 and b3. There are three kinds of links and arcs in the converted
network. The first is the "Line Link" connecting virtual bus stops. The second is the "Boarding Arc" from a physical bus stop to a virtual bus stop belonging to a line. The third is the "Alighting Arc" from a virtual bus stop to a physical bus stop. The weight of a "Line Link" is the in-vehicle time (link length / moving speed), the weight of a "Boarding Arc" is the waiting time (headway/2, representing the expected waiting time), and an "Alighting Arc" has no weight. Furthermore, to represent the cost of transferring, five minutes is added to the weight of the boarding arcs. (The first ride is exempt from the five-minute penalty.)
Since the weight of every link and arc in this network is a time, a shortest-path algorithm such as Dijkstra's method with a heap [7] can immediately find the shortest path.
Figure 5: Flow Diagram of the MAS (#1: generating line agents and setting the evolution step n = 0; #2: generation of the line network and itinerary analysis for passengers, deleting line agents without passengers; #3: computing the profit PL of each line agent, sorting in descending order of profit, and setting the order k = 0; #4: evolution of line agent Lk, adding or excluding one bus stop; #5: Small evolution model?; #6-1/#6-2: did the evolution of Lk take place?; #7-1/#7-2: k = k+1, k > line number? / did all line agents cancel evolution?; end of evolution process)
Figure 6: Network Conversion (bus stops ST 0-4 are split into physical nodes and virtual nodes a1, a2, a3, b0, b1, b3, c0, c1, c4 for lines a, b, and c; legend: Line Link = traveling time, Boarding Arc = waiting time + penalty time, Alighting Arc = no time)
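As a hedged illustration of the converted network and the route search (not the authors' implementation), the following sketch encodes line links, boarding arcs, and alighting arcs as time-weighted edges and runs a heap-based Dijkstra search; the node names, the toy line data, and the way the 5-minute transfer penalty is attached are assumptions.

import heapq

# Sketch: shortest-time itinerary on the converted network of Fig. 6.
# graph maps a node to a list of (neighbor, weight_in_minutes) pairs.

def dijkstra(graph, source):
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy converted network: physical stops ST1/ST3 and virtual stops a1, a3, b3.
graph = {
    "ST1": [("a1", 6.0)],         # boarding arc: expected wait = headway / 2
    "a1":  [("a3", 12.0)],        # line link: in-vehicle time on line a
    "a3":  [("ST3", 0.0)],        # alighting arc: no weight
    "ST3": [("b3", 6.0 + 5.0)],   # boarding a second line: wait + 5-min penalty
}
print(dijkstra(graph, "ST1")["ST3"])   # 18.0 minutes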
4.2 Evolution of Bus Line Agent
The evolution rule of the line agents changes their routes and vehicle numbers. Using the result of the previous subsection, each line agent evolves selfishly to increase the profit defined by the following equation. In the 'Small evolution model', to encourage the evolution of line agents with higher profit, the target agent is selected in descending order of profit (#3 in Fig. 5).
$$P_{L_k} = R^n_{L_k} - w_2 B^n_{L_k} \qquad (2)$$
where $n$ is the evolution step, and $R^n_{L_k}$, $B^n_{L_k}$ are the number of users and the number of vehicles of line agent $L_k$, respectively. $w_2$ is a control parameter for the relationship between the user number as benefit and the vehicle number as cost. It is expected that a small $w_2$ leads to an extension of the transit route to gain users and a large $w_2$ leads to a shrinking of the transit route to reduce the vehicle number.
Furthermore, the user number $R^n_{L_k}$ and the vehicle number $B^n_{L_k}$ are computed by the following equations:
$$R^n_{L_k} = \sum_{S_i \in L_k} d^n_{S_i, L_k} \qquad (3)$$
$$B^n_{L_k} = \lceil \max(B^n_{min}, B^n_{opt}, 1) \rceil \qquad (4)$$
$$B^n_{min} = \max_{S_i, S_{i+1} \in L_k} (d^n_{S_i, S_{i+1}}) \, Tr^n_{L_k} / C_B \qquad (5)$$
$$B^n_{opt} = \sqrt{ Tr^n_{L_k} R^n_{L_k} / (2 w_1) } \qquad (6)$$
where $S_i \in L_k$ stands for the set of bus stops on the route of the line $L_k$, and $d_{S_i,L_k}$ is the head count boarding a vehicle of line $L_k$ at bus stop $S_i$, obtained from the passengers' route selection in the previous subsection. Eq. 5 gives the minimum vehicle number that satisfies the boarding demand; $\max_{S_i,S_{i+1} \in L_k}(d^n_{S_i,S_{i+1}})$ in the equation is the maximum traffic volume between adjacent bus stops $S_i$, $S_{i+1}$ belonging to the line agent $L_k$, $Tr_{L_k}$ is the round trip time of the line $L_k$, and $C_B$ is the capacity of a vehicle. Eq. 6 gives the vehicle number minimizing the evaluation value of Eq. 1 when Eq. 1 is applied only to the line $L_k$; it is derived from the fact that the evaluation value becomes minimum at the point where the waiting time as the passengers' cost equals the vehicle number as the operator's cost [2].
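For illustration only, the vehicle-number and profit calculations of Eqs. 2-6 can be sketched as below; the per-stop boarding counts, segment volumes, round-trip time, and parameter values are made-up examples, not values from the paper.

import math

# Sketch of Eqs. 2-6: user number R, minimum and optimal vehicle numbers,
# the deployed vehicle number B, and the line profit P.

def vehicle_number(boarding, segment_volumes, round_trip_hr, capacity, w1):
    users = sum(boarding.values())                            # Eq. 3
    b_min = max(segment_volumes) * round_trip_hr / capacity   # Eq. 5
    b_opt = math.sqrt(round_trip_hr * users / (2.0 * w1))     # Eq. 6
    vehicles = math.ceil(max(b_min, b_opt, 1.0))              # Eq. 4
    return users, vehicles

def line_profit(users, vehicles, w2):
    return users - w2 * vehicles                              # Eq. 2

# Toy usage (illustrative values):
boarding = {"S0": 40, "S1": 25, "S2": 15}    # d^n_{Si,Lk} per stop
segments = [50, 35, 20]                      # volume between adjacent stops
users, vehicles = vehicle_number(boarding, segments,
                                 round_trip_hr=1.5, capacity=50, w1=0.8)
print(users, vehicles, line_profit(users, vehicles, w2=9.0))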
The next step is the evolution strategy of the target line agent (#4 in Fig. 5). The target line agent $L_k$ considers all combinations of the inclusion of one bus stop not belonging to the line agent, $S_j \notin L_k$, and the exclusion of one bus stop belonging to the line agent, $S_i \in L_k$. All combinations of the target bus stop and the operation (inclusion or exclusion) are evaluated by the procedures of the following subsections. For example, if line b in Fig. 6 becomes the target agent, it considers the four patterns in Fig. 7. (Exclusion of ST1 is omitted because the shortest path between ST0 and ST3 runs through ST1.)
Figure 7: Line Evolution Pattern (left column: inclusion of one bus stop into line b, e.g., inclusion of ST2 or of ST4; right column: exclusion of one bus stop, e.g., exclusion of ST0 or of ST3 with generation of a new line d)

4.2.1 Inclusion of Bus Stop
In the case of the inclusion of a bus stop not belonging to the line agent, $S_j \notin L_k$, the agent searches for an insertion point for the bus stop $S_j$ between two adjacent bus stops $S_i, S_{i+1} \in L_k$. (The case where $S_i$ or $S_{i+1}$ is a terminal bus stop is also considered.) If the line agent has more than one insertion point, it selects the one with the minimum increase of the route length. The line agent estimates the change of profit, $A_{S_j}$, by the following equations:
$$A_{S_j} = \Delta R_{L_k} - w_2 (B^{n+1}_{L_k} - B^n_{L_k}) \qquad (7)$$
$$\Delta R_{L_k} = \sum_{S_i \in L_k, S_j \notin L_k} D^n_{S_j, S_i} - r \qquad (8)$$
where $D_{S_j,S_i}$ is the head count whose origin bus stop is $S_j \notin L_k$ and whose destination bus stop is $S_i \in L_k$, given by the OD matrix. However, to estimate the pure increase of the head count using $L_k$, any $D_{S_j,S_i}$ whose itinerary already includes $L_k$ is not counted. The superscript $n+1$ of $B_{L_k}$ in Eq. 7 denotes the estimated vehicle number after changing the route, and it is estimated by the following equations:
$$B^{n+1}_{L_k} = \max(B^n_{min}, B^{n+1}_{opt}, 1) \qquad (9)$$
$$B^{n+1}_{opt} = \sqrt{ Tr^{n+1}_{L_k} (R^n_{L_k} + \Delta R_{L_k}) / (2 w_1) } \qquad (10)$$
where $Tr^{n+1}_{L_k}$ is the round trip time of the line after the inclusion of the bus stop.
The results of an analysis clearly suggest that the number of passengers between two stations decreases sharply when a large detour is introduced, namely when the efficiency defined in Eq. 12 falls below 80% [6]. Reflecting this fact in the evolution rule, $r$, defined by the following equation, is introduced into Eq. 8:
$$r = \begin{cases} 0 & \epsilon \ge 0.8 \\ d^n_{S_i,L_k} + d^n_{S_{i+1},L_k} & \epsilon < 0.8 \end{cases} \qquad (11)$$
$$\epsilon = l'_{S_i,S_{i+1}} / (l_{S_i,S_j} + l_{S_j,S_{i+1}}) \qquad (12)$$
where $\epsilon$ is called the 'single efficiency', $l_{S_i,S_j}$ is the network distance between bus stops $S_i$ and $S_j$, $l_{S_j,S_{i+1}}$ is the network distance between bus stops $S_j$ and $S_{i+1}$, and $l'_{S_i,S_{i+1}}$ is the straight Euclidean distance between bus stops $S_i$ and $S_{i+1}$. $d^n_{S_i,L_k}$ and $d^n_{S_{i+1},L_k}$ are the demands of bus stops $S_i$ and $S_{i+1}$ for line $L_k$ at evolution step $n$. Thus, $r$ means that a reduction of the number of passengers, $d^n_{S_i,L_k} + d^n_{S_{i+1},L_k}$, takes place if the inclusion of the bus stop $S_j$ creates a large detour between bus stops $S_i$ and $S_{i+1}$.

4.2.2 Exclusion of Bus Stop
In the case of the exclusion of a bus stop belonging to the line agent, $S_i \in L_k$, the change of profit, $A_{S_i}$, is computed assuming that the line loses the users boarding at bus stop $S_i$:
$$A_{S_i} = \Delta R_{L_k} - w_2 (B^{n+1}_{L_k} - B^n_{L_k}) \qquad (13)$$
$$\Delta R_{L_k} = -d^n_{S_i,L_k} \qquad (14)$$
The vehicle number $B^{n+1}_{L_k}$ in Eq. 13 is the estimated value after the exclusion of the bus stop $S_i \in L_k$, and it is calculated by Eq. 9 and Eq. 10 with the round trip time $Tr^{n+1}_{L_k}$ after the exclusion of the bus stop.
Furthermore, if the above exclusion is possible, a new line agent is generated as shown in Fig. 7, which connects the excluded bus stop and the nearest bus stop that is not excluded.
In addition, if either of the following conditions holds, inclusion or exclusion of a bus stop is not considered:
* Inclusion or exclusion of a bus stop that violates the local stopping service
* Re-inclusion of a bus stop that was excluded before

4.2.3 Selection of Evolution Pattern
The line agent selects the operation (i.e., inclusion or exclusion of one bus stop) with the maximum change of profit, $\pi$, defined by the following equation; this is a greedy strategy.
$$\pi = \max_{S_i, S_j} (A_{S_i}, A_{S_j}) \qquad (15)$$
In the 'Small evolution model' in Fig. 5, if $\pi < 0$, the evolution of the target line agent does not take place (arrow from #6-1 to "No" in Fig. 5), and the target line agent is changed to the agent with the next highest profit, $L_{k+1}$ (arrow from #7-1 to "No" in Fig. 5). If the evolution of the target line agent took place, the process moves back to the passengers' route selection of Section 4.1 (arrow from #6-1 to "Yes" in Fig. 5). Eventually, the evolution process terminates when all line agents meet the condition $\pi < 0$ (arrow from #7-1 to "Yes" in Fig. 5).
In the 'Large evolution model' in Fig. 5, on the other hand, all line agents are allowed to evolve in one evolution step to accelerate the computation.

4.3 For Multi-Modal Transportation
The original method was modified to deal with the multi-modal transportation system composed of bus and waterbus. For this purpose, two types of links are prepared for the infrastructure network: the street link on land and the waterway link. Corresponding to these two kinds of links, the line agent also has two types: the bus line agent and the waterbus line agent. In the evolution rule of the line agents, the bus line agent is not allowed to take a route identified as a waterway, and in the same manner the waterbus line agent is not allowed to take a street route.
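The detour-efficiency rule of Eqs. 11-12 and the greedy choice of Eq. 15 can be sketched as follows; the distances, demands, and candidate operations are made-up example values for this sketch only, not the authors' code.

# Sketch of the detour check (Eqs. 11-12) and greedy pattern selection (Eq. 15).

def passenger_loss(d_si, d_si1, l_si_sj, l_sj_si1, l_straight, threshold=0.8):
    # Eq. 12: single efficiency = straight distance / detour length.
    # Eq. 11: if the efficiency is below the threshold, the demand of the two
    # adjacent stops is treated as lost (r); otherwise there is no loss.
    efficiency = l_straight / (l_si_sj + l_sj_si1)
    return 0.0 if efficiency >= threshold else d_si + d_si1

def select_operation(profit_changes):
    # Eq. 15: greedily pick the inclusion/exclusion with the largest expected
    # profit change; return None when every candidate change is negative.
    operation, best = max(profit_changes.items(), key=lambda kv: kv[1])
    return (operation, best) if best >= 0 else None

# Toy usage:
r = passenger_loss(d_si=30, d_si1=20, l_si_sj=3.0, l_sj_si1=3.0, l_straight=4.2)
print(r)                                                   # 50 (efficiency 0.7)
print(select_operation({"include_S4": 12.0, "exclude_S0": -3.5}))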
5. APPLICATION
In this section, the modified method described above is applied to two problems. One is the benchmark problem presented by Mandl [1]. The other is the commuter stranded problem presented in Section 2. The performance of the method is confirmed by the benchmark problem, and the application to the large-scale, multi-modal network is demonstrated by the commuter stranded problem.
5.1 Problem Setting
5.1.1 Benchmark Problem
The method of Section 4 is applied to the benchmark problem [1], and the characteristics and effectiveness of the method are presented. Fig. 8 illustrates the infrastructure network. The numbers in the nodes are bus stop IDs, and the numbers alongside the links are in-vehicle times in minutes. Table 1 shows the OD matrix for one day; almost every pair of bus stops has demand. The carrying capacity of the vehicle is set to 50 persons. The parameters w1 and w2 are changed to search for the best solution.
Table 1: OD Demand Matrix of Benchmark Problem
ST    0    1    2    3    4    5    6    7    8    9   10   11   12   13
 0    0  400  200   60   80  150   75   75   30  160   30   25   35    0
 1  400    0   50  120   20  180   90   90   15  130   20   10   10    5
 2  200   50    0   40   60  180   90   90   15   45   20   10   10    5
 3   60  120   40    0   50  100   50   50   15  240   40   25   10    5
 4   80   20   60   50    0   50   25   25   10  120   20   15    5    0
 5  150  180  180  100   50    0  100  100   30  880   60   15   15   10
 6   75   90   90   50   25  100    0   50   15  440   35   10   10    5
 7   75   90   90   50   25  100   50    0   15  440   35   10   10    5
 8   30   15   15   15   10   30   15   15    0  140   20    5    0    0
 9  160  130   45  240  120  880  440  440  140    0  600  250  500  200
10   30   20   20   40   20   60   35   35   20  600    0   75   95   15
11   25   10   10   25   15   15   10   10    5  250   75    0   70    0
12   35   10   10   10    5   15   10   10    0  500   95   70    0   45
13    0    5    5    5    0   10    5    5    0  200   15    0   45    0
Figure 8: Infrastructure Network of Mandl's Benchmark Problem (the numbers in the nodes are bus stop IDs 0-13 and the numbers alongside the links are in-vehicle times in minutes)
Figure 9: Street and Waterway Network in the Kanto Area
5.1.2 Commuter Stranded Problem
Fig. 9 shows the infrastructure network around Tokyo. The gray links on the land (black area) are the street network, and the links in Tokyo Bay are the waterway network. The street network was downloaded from the web site [4]. The dots in the figure show the positions of the city halls, which are assumed to be utilized as temporary bus stops. This network has 226 bus stops, and the number of passengers, estimated in Section 2, is over 5 million.
The difference in transportation capability between the two modes (bus and waterbus) and the railway is very large. Taking this difference into account, the number of passengers was divided by 168 hours (24 hours x 7 days) to convert it into the OD matrix.
The carrying capacities of the bus and waterbus are set to 50 persons and 100 persons, respectively. The service speeds of the bus and waterbus are set to 20 km/hr and 22 km/hr. Considering the situation in Japan, the ratio of the parameter w1 to w2 for a bus is on the order of 0.1. Reflecting this condition, w1 and w2 for the bus are set to 1 and 10, and the parameters for the waterbus are set to 2 and 20.
5.2 Analysis Results
5.2.1 Benchmark Problem
In this section, the results of the application to the benchmark problem are reported. Fig. 10 shows the evolution history of the total travel time (TTT, which includes in-vehicle time, waiting time, and the penalty time for changing vehicles) as the passengers' cost and the total number of vehicles as the company's cost. TTT and the vehicle number correspond to the first and second terms of Eq. 1. The upper figure shows the results of the 'Small evolution model' and the lower figure shows the results of the 'Large evolution model'.
Both figures show that the number of vehicles and the total travel time become smaller than the initial values. This implies that the method has the capability to evolve the PTN into a more sophisticated network. What is remarkable in this result is the final evolution step: the final evolution step of the large evolution model is about one third of that of the small evolution model, while the total travel time and the number of vehicles remain at almost the same level.
Table 2 summarizes the comparison between the large evolution model and the small evolution model. The top three rows of the table represent the rates of passengers with respect to their number of transfers. Focusing on the total travel time and the vehicle number, it is found that the results of the 'Large evolution model' are as good as those of the 'Small evolution model'. Fig. 11 is one example of a PTN generated by the 'Large evolution model'.
Table 2: Comparison of Results for Benchmark Problem (w1 = 0.8)
                   Small Evolution Model     Large Evolution Model
                   w2 = 5      w2 = 9        w2 = 5      w2 = 9
Directly (%)       95.5        89.7          96.5        91.6
Transfer1 (%)      4.5         10.1          3.5         8.3
Transfer2 (%)      0           0.2           0           0.1
Total (hr)         3264        3298          3259        3274
In-Vehicle (hr)    2701        2698          2692        2659
Waiting (hr)       505         463           521         504
Transfer (hr)      58          137           46          111
Line Num.          4           6             8           6
Vehicle Num.       69          63            71          69
Evolution Step     69          85            21          24
Figure 10: Evolution history of Total Travel Time and Vehicle Number ((a) Small Evolution Model, (b) Large Evolution Model; total travel time (hr) and vehicle number plotted against the evolution step)
Figure 11: Generated PTN with Large Evolution Model (w1 = 0.8, w2 = 9.0); generated lines with vehicle numbers in parentheses: L1(5): ST 3-4-1-2, L2(28): ST 0-1-3-5-7-9-10-12-13, L3(21): ST 0-1-2-5-7-8-9-10-11, L4(7): ST 2-1-3-5-6-9-13, L5(5): ST 8-7-9, L6(3): ST 3-11
5.2.2 Commuter Stranded Problem
This section describes the results of the application to the commuter stranded problem. Since the target network is larger than that of the benchmark problem, the 'Large evolution model' was applied to avoid a time-consuming process. Fig. 12 shows the evolution history of the total travel time and the total cost of vehicles. After a rapid decrease, the total travel time increases gradually and reaches the same level as the initial value. However, Fig. 13 shows that the evaluation value decreases almost monotonically. This implies that the cost reduction of the vehicles is more effective.
The evolution process starts with thirteen waterbus lines generated as part of the initial set of line agents. However, only two waterbus lines survived after the evolution process. Fig. 14 shows the two waterbus lines. The waterbus is expected to be an alternative transportation mode for the railway. However, this result implies that the demand across Tokyo Bay is small.
Figure 12: Evolution History of Total Travel Time and Vehicle Cost for the Stranded Commuter Problem (total travel time (hr) and vehicle cost plotted against the evolution step)
Figure 13: Evolution History of Evaluation Value Z (evaluation value plotted against the evolution step)
Figure 14: Surviving Waterbus Lines after the Evolution Process (the two surviving lines have BL556 = 32 and BL1171 = 6 vehicles)

5.3 Discussion
Fig. 10 shows that both the total travel time and the number of vehicles decrease to values smaller than the initial ones, and this condition holds during the whole evolution process. From this result, it can be concluded that the present MAS has the ability to reduce the evaluation value defined by Eq. 1.
However, the total travel time fluctuates around the same value or gradually increases after its immediate reduction at the initial stage, while the vehicle number decreases almost monotonically. This phenomenon stems from the initial set of line agents. The method generating the initial line agents (whose explanation is omitted in this paper) underestimates the number of vehicles; instead, it generates many lines so that passengers do not need to transfer to other lines. The rapid decrease in the total travel time is caused by the increase of the vehicle number (calculated at the first step of the evolution process). Since this condition at the first step is extremely convenient for passengers, the line agents change their routes and vehicle numbers to increase their profit defined by Eq. 2.
Although the 'Large evolution model' reduces the number of evolution steps and the computation time (using a PC with a Core i7 at 3.1 GHz, the computation time of the small evolution model for the commuter stranded problem is about 3 hours, whereas that of the large evolution model is about 3 minutes), the quality of the solution for the benchmark problem is at the same level as the best solutions obtained by our original method. It can be considered that the 'Large evolution model' changes the network roughly, and the modified process that generates new lines (Section 4.2.2) maintains the solution quality.
Although the parameters w1 and w2 are set freely in this paper, these values are fixed in the real world. For example, assuming that the fare is 200 (yen/person) and the vehicle running cost is 30,000 (yen/day), the value of w2 becomes around 150 in Japan. However, this high cost makes many line agents stay in their initial condition.
6. CONCLUSION
This paper reports a method to generate a public transport network for the commuter stranded problem. Based on our original method for generating public transport networks with a Multi-Agent System, several modifications were made: (1) a large evolution model for fast computation; (2) dealing with the multi-modal transportation system; and (3) generation of new line agents during the evolution process.
Owing to these modifications, the modified method successfully output solutions for Mandl's benchmark problem that are of the same quality as the best solutions. Furthermore, it also successfully output a large-scale PTN including two transportation modes, bus and waterbus, in practical computation time.
Acknowledgments
This work was supported by JSPS KAKENHI Grant Numbers 17360424, 25280116.
REFERENCES
[1] M. Baaj and H. Mahmassani. An AI-based approach for transit route system planning and design. Journal of Advanced Transportation, 25(2):187-210, 1991.
[2] B. Byrne. Public transportation line positions and headways for minimum user and system cost in a radial case. Transportation Research, 9:97-102, 1975.
[3] Cabinet Office, Government of Japan. White Paper on Protection Against Disasters (in Japanese). Nikkei Insatsu, Tokyo, 2013.
[4] GIS Homepage. National Land Numerical Information Download Service. http://nlftp.mlit.go.jp/ksje/gml/datalist/KsjTmplt-N10.html.
[5] T. Majima, K. Takadama, D. Watanabe, and M. Katuhara. Network evolution model for route design of public transport system and its application. In Workshop on Emergent Intelligence on Networked Agents (WEIN07) Proceedings, pages 57-69. AAMAS, 2007.
[6] T. Majima, K. Takadama, D. Watanabe, and M. Katuhara. Characteristic of passenger's route selection and generation of public transport network. SICE Journal of Control, Measurement, and System Integration, 8(1):67-73, 2015.
[7] R. Sedgewick. Algorithms in C, Part 5: Graph Algorithms, 3rd Edition. Addison-Wesley, 2002.
[8] F. Zhao and X. Zeng. Optimization of user and operator cost for large-scale transit network. Journal of Transportation Engineering, 133(4):240-251, 2007.
Effect of Direct Reciprocity on Social Networking Services
Kengo Osaka (Dept. of Computer Science and Engineering, Graduate School of Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555, Japan; [email protected])
Fujio Toriumi (Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-8656, Japan; [email protected])
Toshiharu Sugawara (Dept. of Computer Science and Engineering, Graduate School of Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555, Japan; [email protected])
ABSTRACT
This paper investigates how direct reciprocity facilitates active and voluntary participation in social networking services
(SNS) by modeling them as a type of public goods game. A
number of studies have attempted to understand the structure of SNS activities using anomalies of the public goods
game, but their models did not include reciprocity. However, reciprocity is known as the mechanism to maintain and
evolve cooperation in human society, and, of course, it is actually observed on SNS. To analyze the effect of reciprocity
on SNS, we first propose an abstract model of SNS, called
the reciprocal rewards game, which is an extension of the rewards game. Then, we describe the experiments to see how
reciprocity facilitates cooperation and which parameters in
the game affect evolution or collapse of cooperation in the
reciprocity rewards game. We also examine how the memory length with which agents memorize reciprocal agents affects evolution in SNS. Finally, we discuss the suggestions derived
from our experiments using the reciprocal rewards game.
Categories and Subject Descriptors
I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence—Multiagent systems
General Terms
Experimentation
Keywords
Social networking services, Social media, Public goods game,
Prisoner’s dilemma
1. INTRODUCTION
The number of users of many social networking services
(SNS), such as Twitter, Facebook, and Google+, as well as
the number of SNS products/systems are still growing. They
are frequently used to share local information among limited
specialized and close-friend groups and to show in public
some information for the purposes of opinion exchange, advertising, marketing, and political participation/campaigns [14].
Appears in: Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016), John Thangarajah, Karl Tuyls, Stacy Marsella, Catholijn Jonker (eds.), May 9-13, 2016, Singapore.
Copyright © 2016, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
SNS are usually run by companies and organizations but
cannot persist without a huge number of updated content
posted by individual users. However, the mechanisms in
SNS that lead to such continual activities of posting content are not well known. Because such user activities incur some cost and effort to create and submit the content,
there are some free riders, that is, users that just read the
content and never post articles and other content. To provide incentives to individual users to keep submitting content, many SNS introduce a number of mechanisms, such
as providing comments on articles and on comments, showing
the number of followers, and having “Like” buttons. These
mechanisms can give physical and quantitative rewards (e.g.,
reading comments and showing the number of followers and
users who clicked on the “Like” buttons) and psychological
rewards that provide feelings of connection to friends/people
and a sense of belonging [9]. However, these incentive mechanisms also rely on users’ voluntary behavior and incur some
cost and time.
It is an important challenge in the design of social media
on the Internet, like SNS, to identify the conditions or mechanisms that keep SNS active and thriving, and thus many
studies focused on this issue. One approach to this issue is
based on an evolutionary game theoretic approach. For example, Toriumi et al. and Hirahara et al. [5, 17] discussed a
mechanism to keep SNS active using an evolutionary game
on certain network structures. They proposed rewards and
meta-rewards games that were dual parts of Axelrod's meta-norms game, and their extension called the SNS-norms game,
to identify evolved behaviors of agents that are the model of
users in SNS. They analyzed the conditions for a cooperation
dominant situation, which corresponds to when SNS are active and many users continue to post articles and comments.
They then found that meta-rewards such as comments on
comments on articles [17] and a simple response mechanism
for rewards such as “Like” buttons for articles [6] play an important role in SNS. However, their studies did not consider
social and personal relationships between peers.
Nowak [11] pointed out that one of five mechanisms, kin
selection, direct and indirect reciprocity, network reciprocity,
and group selection, is necessary for evolving cooperation
in human society, and Rand and Nowak [12] showed the
empirical evidence for human cooperation by these mechanisms. In addition, such mechanisms also exist and have
crucial roles in online networks [8, 16, 4]. For example, Faraj
and Johnson [4] found that network exchange patterns in an
online community are characterized by reciprocity patterns
10
and are different from those characterized by preferential
attachment [3]. Takano et al. [16] also indicated that cooperation based on reciprocity could be observed in a network
game by analyzing the logs of players’ actions. Therefore,
we believe that reciprocity, especially direct reciprocity, is
essential in SNS because connections between users are usually established by direct interaction such as “comments on
articles” and “comments on comment.”
In this paper, we try to understand the effect of direct reciprocity between users on continual and active use of SNS by
extending an existing abstract model of SNS [17]. For this
purpose, we propose a reciprocity rewards game whose structure is basically identical to the rewards game, but agents
tag their peer agents and decide their behaviors on the basis of peers’ past reciprocal behaviors.
We then investigate why the rates of cooperation increase and when the
established cooperation collapses. Our research results and
discussions help us to understand the mechanism and interaction structure needed to evolve and maintain a cooperation-dominant situation, which corresponds to a situation in which
SNS continue to prosper.
This paper is organized as follows. First, we describe related work in Section 2. Next, we introduce the reciprocity
rewards game, which is the abstract models of SNS with
reciprocity, in Section 3. Section 4 shows the experimental
results and tries to understand why and when reciprocal behavior facilitates evolution of cooperation by comparing the
results with those of the rewards game. Our experimental
result suggests that user behavior like a half free-rider keeps
cooperative activity in SNS. Then, we conclude this paper
in Section 5 with a brief summary and our future research
direction.
Figure 1: Meta-Rewards and Rewards Games (panels: Rewards Game, Meta-Rewards Game).
Figure 2: Reciprocity (Meta-)Rewards Game (panels: Reciprocity Rewards Game, Reciprocity Meta-Rewards Game).
2. RELATED WORK
A number of studies attempt to understand what factors
affect social media by analyzing the network structures of social media. For example, Karamon et al. [7] propose an algorithm that can effectively analyze important network-based
features from the huge data of a social network for understanding user behavior there. Saito and Matsuda [13] analyzed network structures to identify two types of key users
who draw the attention of many other users on Twitter and
showed that one of these types has higher link reciprocity.
In the field of social psychology, Lin and Lu [10] empirically
studied the reasons for joining SNS and found that enjoyment is the most influential factor for people to continue
using SNS. They also found a notable difference between
genders. The number of active peers is an influential factor
for enjoyment for women, resulting in continued use of social
media, but the number of members is insignificant for enjoyment for men. Surman [15] focused on reciprocity because
it is crucial for social exchanges. He analyzed reciprocity
on Facebook and then identified the strong empirical evidence that reciprocity messages sent from a user on online
social networks increases reciprocity reactions from her/his
audience. Faraj and Johnson [4] analyzed network exchange
patterns in an online community and found that reciprocity
characterized them. These papers suggest that reciprocity
is a key factor to keep social media thriving.
A number of studies proposed abstract models of social
media and investigated their properties. Anderson [1] proposed a game theoretic approach to understand how social
media emerge as a driving force in contemporary marketing and how this would affect future marketing. Toriumi et
al. [17] revealed that SNS have similar properties to public
goods, but they correspond to the dual part of meta-norms
game [2] because SNS seem to have no mechanism to punish non-cooperators and only give (psychological) rewards
to cooperators. Their model includes two games, rewards
game (RG) and meta-rewards game (MRG), and indicated
that meta rewards facilitate cooperation, resulting in active
use of SNS. Then, Hirahara et al. [5, 6] extended this model
to the SNS-norms game to include the characteristics of interaction patterns in SNS. They conducted this game in a
variety of complex networks and found that users at network hubs facilitate posting articles to some degree even if
no meta reward is provided in SNS. However, their study
did not take into account the reciprocal relationships between peer agents.
3. PROPOSED MODEL FOR SOCIAL NETWORKING SERVICES
3.1 SNS as public goods game
SNS are meaningful and sustainable when many articles
and comments on them are posted by and shared among
anonymous participants and among groups of users. Al-
though some cost in terms of personal time and effort is
incurred to post/comment on them, users can get some information and knowledge by reading them and can receive
responses that provide feelings of connectivity, empathy, and
contentment by receiving useful/timely/interesting information from other (group) users. On the other hand, many free
riders who only read content and do not contribute anything
exist. Therefore, SNS have the properties of public goods
that are produced and maintained by cooperation in the
SNS community, whose game structure is essentially an n-person prisoner's dilemma (PD) game. Toriumi et al. [17]
express SNS as a public goods game and attempt to explain
the mechanism of voluntary participation in SNS. Their proposed model consists of two games, the rewards game and the meta-rewards game, which are dual games of the norms game and
meta-norms game that were proposed by Axelrod [2]. The
structure of these games is illustrated in Fig. 1.
Table 1: Parameter Values Used in Experiments
Parameter                        Symbol   Value
Number of agents                 |A|      20
Cost of posting article          F        -3.0
Reward for reading article       M        1.0
Cost of comment                  C        -2.0
Reward for receiving comment     R        9.0
Memory length                    TW       1
3.2 Reciprocity Rewards and Meta-Rewards Games
Although Toriumi et al. [17] showed that meta-rewards, which for example correspond to “comments on a comment,” are important to give incentives to continue voluntary participation, they ignore reciprocity, which is a crucial characteristic for understanding the activities in SNS as mentioned in Section 2. Thus, we define the reciprocity rewards game (RRG) and reciprocity meta-rewards game (RMRG) by incorporating the reciprocal relationships among agents into the RG and MRG (see Fig. 2).
Let A = {1, . . . , n} be the set of agents. Agents in a reciprocity (meta-)rewards game select the strategy of either cooperation or defeat. Cooperation indicates posting articles and comments, and defeat indicates just reading them. A user who almost always selects defeat is called a free rider. Agent i ∈ A has three learning parameters: the probability of cooperation (i.e., posting a new article) Bi, the probability of giving rewards (e.g., posting a comment on the article) to reciprocal agents LCi, and the probability of giving rewards to other (normal) agents LNi. We call Bi, LCi, and LNi the posting article rate, the reciprocal comment rate, and the normal comment rate, respectively. We also call both LCi and LNi the comment rates hereafter. To apply the genetic algorithm, we express each of these parameters as three bits, so it has a discrete value 0/7, 1/7, . . . , or 7/7. This expression is identical to that used in the meta-norms game [2]. Agent i has the memory for reciprocal agents Wi, which is the set of other agents that posted comments on the article or the comment posted by i in the recent TW rounds. The positive integer TW is called the memory length.
An RMRG proceeds as follows. For ∀i ∈ A, parameter Sit (0 ≤ Sit ≤ 1) is defined randomly in the t-th round (t is a positive integer) when i is going to post an article. If Sit ≥ 1 − Bi, i posts a new article with cost F and with probability Sit, and every agent j (≠ i) reads the article posted by i and gains reward M by reading it. Then, j proceeds to the next phase with probability Sit. If i ∈ Wj, j comments on the article with probability LCj; otherwise j comments on it with probability LNj. Then, j pays cost C, and i gains reward R through j’s comment. The game chain so far, part of the RMRG, is referred to as the RRG.
Subsequent to the RRG, k ∈ A reads j’s comment and proceeds to the next phase with probability Sit. If this happens, k posts a response to the comment with probability L, where L = LCk if j ∈ Wk and L = LNk if j ∉ Wk. If k posts it, k pays cost C′′ and j gains reward R′′. Then, the reciprocity meta-rewards game ends here. All agents perform this game once in a round. Note that because the rewards and meta-rewards games do not take reciprocity into account, agents have only Li, which is the comment rate, i.e., the probability of posting a comment on an article and on a comment, and do not distinguish reciprocal agents from other agents.
3.3 Evolution by Genetic Algorithm
RRG and RMRG are evolutionary games, as are the (meta-)norms and (meta-)rewards games. We define that one generation of the game is the term in which all agents have four chances to post articles. At the end of one generation, each agent selects two agents as parents from A using roulette wheel selection based on fitness values. The fitness values in the game are defined as the cumulative rewards received minus the cumulative costs incurred during the current generation. This process is continued up to a certain generation (we conducted up to the 10,000th generation in the experiments below).
We describe the evolution in detail. As explained in Section 3.2, agent i has three learning parameters, Bi, LCi, and LNi. Since each of these parameters is represented in three bits, agents have nine-bit genes. The initial values of the nine bits are set randomly at the beginning of each experimental trial. The evolution consists of three phases: (1) selection of parents, (2) crossover, and (3) mutation. A child agent of i for the next generation is then generated as follows. First, in the parent selection phase, i selects two parent agents based on the probability distribution {Πj}j∈A defined as

Πj = (vj − vmin)² / Σk∈A (vk − vmin)²,    (1)

where vk is the fitness of agent k ∈ A, and vmin = min_{i∈A} vi. Then, two new genes are generated using uniform crossover from the genes in the selected parent agents in the crossover phase. Next, one of the two genes is randomly selected. In the mutation phase, each bit of the gene of the child agent is inverted with a probability of 0.005. This means that if there are 20 agents in the network, 0.9 bits will mutate on average. After that, the derived gene is used for the child agent of i.
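To make the evolutionary step concrete, the following is a minimal Python sketch, under our own assumptions, of the procedure just described: parent selection with the squared-fitness weights of Eq. (1), uniform crossover of the nine-bit genes, and per-bit mutation with probability 0.005. The fitness values are placeholders; this is an illustration, not the authors' implementation.

```python
import random

N_AGENTS = 20          # |A| in Table 1
GENE_BITS = 9          # three 3-bit parameters: B_i, LC_i, LN_i
MUTATION_RATE = 0.005  # per-bit inversion probability

def decode(gene):
    """Split a 9-bit gene into the three rates B, LC, LN, each in {0/7, ..., 7/7}."""
    b  = int(''.join(map(str, gene[0:3])), 2) / 7.0
    lc = int(''.join(map(str, gene[3:6])), 2) / 7.0
    ln = int(''.join(map(str, gene[6:9])), 2) / 7.0
    return b, lc, ln

def select_parent(genes, fitness):
    """Roulette-wheel selection with weights (v_j - v_min)^2, as in Eq. (1)."""
    v_min = min(fitness)
    weights = [(v - v_min) ** 2 for v in fitness]
    if sum(weights) == 0:                       # all agents equally fit
        return random.choice(genes)
    return random.choices(genes, weights=weights, k=1)[0]

def next_generation(genes, fitness):
    """Produce one child gene per agent via selection, uniform crossover, mutation."""
    children = []
    for _ in range(len(genes)):
        p1 = select_parent(genes, fitness)
        p2 = select_parent(genes, fitness)
        child = [random.choice(pair) for pair in zip(p1, p2)]  # uniform crossover
        child = [b ^ 1 if random.random() < MUTATION_RATE else b for b in child]
        children.append(child)
    return children

# toy usage: random genes and a placeholder fitness (cumulative rewards minus costs)
genes = [[random.randint(0, 1) for _ in range(GENE_BITS)] for _ in range(N_AGENTS)]
fitness = [random.uniform(-10, 10) for _ in range(N_AGENTS)]   # placeholder values
genes = next_generation(genes, fitness)
print(decode(genes[0]))
```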
4. EXPERIMENTS AND DISCUSSION
4.1 Experimental Setting
Figure 3: Posting Article and Comment Rates in
RG.
Figure 4: Posting Article and Comment Rates in
RRG.
In this paper, we focus on the reciprocity rewards game, since the reward from a comment on a comment seems small and thus insignificant in SNS. Furthermore, some simple response mechanisms (rewarding mechanisms), such as “Like” buttons and “read” icons, have no mechanism for giving meta-rewards for these simple responses. We compare the results of the RRG with those of the RG [17] and try to investigate the features of the reciprocity rewards game. The purpose of the first experiment (Exp. 1) is to investigate how reciprocity affects user behavior on SNS by comparing the transitions of the average rates of cooperation, that is, posting an article or a comment, during RG and RRG. The purpose of the second experiment (Exp. 2) is to understand the causes of the collapse of cooperation more clearly. The purpose
of the third experiment (Exp. 3) is to clarify the effect of
memory length TW on the evolution of cooperation by varying it from 1 to 40. The parameter values we set in these
experiments are listed in Table 1, while the value of TW is
varied in Exp. 3. These parameter values are determined
on the basis of the experiments of Axelrod [2] and Toriumi
et al. [17]. Note that the experimental data below are the
average values of 100 independent experimental runs based
on the different random seeds.
4.2 Effect of Reciprocity on Cooperation (Exp. 1)
Figure 5: Posting Article and Comment Rates in RG (One Trial).
Figure 6: Posting Article and Comment Rates in RRG (One Trial).

Figures 3 and 4 indicate how the probabilities of posting an article and a comment change over generations, where the average posting article rate B = Σ_{i∈A} Bi/|A|, the average reciprocal comment rate LC = Σ_{i∈A} LCi/|A|, and the average normal comment rate LN = Σ_{i∈A} LNi/|A|. We show B and the average comment rate L = Σ_{i∈A} Li/|A| in RG.
In RG (Fig. 3), B and L transition at approximately 0.17 and 0.05, respectively. On the other hand, B and LC transition at approximately 0.36, and LN transitions at approximately 0.12 in RRG (Fig. 4). These results indicate that the values of B, LC, and LN in RRG are larger.
These figures suggest that taking reciprocity into account when deciding behavior improves the activity in SNS, although the improvement is limited. To understand more clearly why B, LC, and LN increased but their increases were limited, we investigated the results of one experimental run of RG and RRG in Figs. 5 and 6. We can see that the posting article and comment rates, B and L, respectively, rose temporarily and then immediately dropped in RG. Such temporary cooperation was caused by mutation. However, RG cannot maintain the cooperative situation, so cooperation disappeared immediately. We also found that B, LC, and LN temporarily increased and then dropped in RRG. In both games, cooperation could not last for long, so
their average values became small as shown in Figs. 3 and 4.
However, if we compare these figures more carefully, the terms in which B or LC reached 1.0 in RRG seem longer than those in RG. To discuss this difference, we show Figs. 7 and 8, which are close-up graphs of Figs. 5 and 6 between 0 and 1500 generations. Figure 7 indicates that the value of B in RG occasionally increased to approximately 0.85 to 0.9 but did not reach 1.0. The value of L also increased but was much lower than that of B. We can explain this situation as follows. When agents post few articles, they have no chance to give comments, and thereby their fitness values were almost zero. Therefore, some agents acquired the genes to post articles and give comments by mutation, so their fitness values became slightly larger, and their genes spread. However, the RG has no incentive for giving comments (meta-rewards); agents with relatively large L also had low fitness values, so the value of L did not increase that much. After that, B > L held, and thereby the agents that post articles could not earn sufficient rewards, their fitness values also became smaller, and cooperation easily collapsed.
On the other hand, in RRG, B, LC, and LN rose intermittently for the same reason as in the RG, but B and LC reached 1.0 and lasted for a short period as shown in Fig. 8; this means that almost all agents cooperate (by posting articles) and give comments on cooperators’ articles during this term. Furthermore, we can see that the value of LC rarely dropped to zero. The difference between the RG and RRG is that, in the RRG, agents distinguish reciprocal agents from other agents and so can behave differently. Thus, agent i with high LC comments selectively only on articles posted by reciprocal agents who commented on past articles posted by agent i. Such selective comments can prevent the collapse of cooperation by reducing the cumulative cost of comments. However, such prevention of collapse works only when LC > LN; otherwise, many agents begin to comment on arbitrary articles without rewards, resulting in the collapse of cooperation.
We explain what the phenomena described above in the RRG correspond to in actual SNS. When SNS users do not consider direct reciprocity when using SNS (that is, RG), users who often comment eventually stop commenting because RG has no incentive for comments. On the other hand, if individual users consider direct reciprocity when commenting, they comment preferentially on the articles of reciprocal users by looking at the content of memory Wi. Thus, when LC > LN, such selective commenting behavior for receiving comments in the future facilitates and maintains the norm of cooperation. We also believe that the condition LC > LN is a reasonable assumption in actual SNS. The details are discussed in Sections 4.3 and 4.5.

Figure 7: Posting Article and Comment Rates in RG (One Trial, 1500 Generations).
Figure 8: Posting Article and Comment Rates in RRG (One Trial, 1500 Generations).

4.3 Collapse of Cooperation (Exp. 2)
In Exp. 2, we attempt to identify why the average rate
of B, LC , and LN dropped via experiments with different
parameter settings. First, we limited the maximum value
of normal comment rate, LNi to LNmax (0 ≤ LNmax ≤ 1).
Note that LNi is expressed by 3 bits (ranging from 0 to
7), so the probability of the normal comment rate is set to
LNi × LNmax /7. We also fixed the value of Sit = S ′ for a
certain positive constant value to understand how it affects
evolution of cooperation. Figures 9 and 10 plot the transitions of B, LC , and LN over generations in the RRG when
LNmax = 0.1 (Fig. 9) and when S ′ = 1.0 (Fig. 10). By
comparing these figures with Fig. 4, we can see that the values of B and LC became slightly higher in Exp. 2, but no
significant difference existed between them.
On the other hand, if we set both LNmax = 0.1 and S ′ =
1.0, we can observe evolution of cooperation; this result is
plotted in Fig. 11. To analyze the relationship between B,
LNmax , and S ′ more explicitly, we conducted a more detailed
experiment whose results are plotted in Fig. 12. It shows
that, for example, when S ′ = 1.0, if LNmax ≤ 0.5, the value of
B is almost 1.0, but if LNmax is increased from 0.5 to 1.0, the
value of B gradually decreased to around 0.5. Conversely,
when LNmax = 0.1, if S ′ ≥ 0.75, the value of B was kept
to almost 1.0, but if S ′ is decreased from 0.75 to 0, the
value of B gradually reached zero. These results indicate that the values of LNmax and S′ are important conditions for the evolution of cooperation: a smaller value of LN corresponds to the situation where agents may read articles from non-reciprocal agents but do not comment on them that much, and a larger S′ corresponds to the situation where agents do not miss the articles posted by (reciprocal) agents. Both conditions must be satisfied to remove the causes of collapse in the RRG.
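As an illustration of the two manipulated conditions, the sketch below codes one simplified round of the reciprocity rewards game with the cap LNmax on the normal comment rate and a fixed observation probability S′. The Agent class and the payoff bookkeeping are our own simplifications; costs and rewards follow Table 1.

```python
import random
from dataclasses import dataclass, field

F, M, C, R = -3.0, 1.0, -2.0, 9.0     # costs/rewards from Table 1

@dataclass
class Agent:
    b: float                           # posting article rate B_i
    lc: float                          # reciprocal comment rate LC_i
    ln: int                            # normal comment rate gene LN_i in {0, ..., 7}
    memory: set = field(default_factory=set)   # W_i: recent commenters
    payoff: float = 0.0

def rrg_round(agents, ln_max=0.1, s_fixed=1.0):
    """One simplified RRG round with capped LN and fixed S' (Exp. 2 setting)."""
    for i, poster in enumerate(agents):
        s = s_fixed                                   # S_i^t fixed to S'
        if s < 1.0 - poster.b:
            continue                                  # i does not post this round
        poster.payoff += F                            # cost of posting an article
        for j, reader in enumerate(agents):
            if j == i:
                continue
            reader.payoff += M                        # reward for reading the article
            if random.random() >= s:
                continue                              # j skips the comment phase
            # reciprocal agents are commented on with LC, others with the capped LN
            rate = reader.lc if i in reader.memory else reader.ln * ln_max / 7
            if random.random() < rate:
                reader.payoff += C                    # cost of the comment
                poster.payoff += R                    # reward for receiving the comment
                poster.memory.add(j)                  # j becomes a reciprocal agent of i

# toy usage with random strategies
agents = [Agent(b=random.random(), lc=random.random(), ln=random.randint(0, 7))
          for _ in range(20)]
rrg_round(agents)
print(round(agents[0].payoff, 2))
```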
Figure 9: Posting Article and Comment Rates in RRG (LNmax = 0.1).
Figure 10: Posting Article and Comment Rates in RRG (S′ = 1.0).
Figure 11: Posting Article and Comment Rates in RRG (LNmax = 0.1, S′ = 1.0).
Figure 12: Relationship between B, S′, and LNmax.
Figure 13: Effect of Memory Length in RRG.
Figure 14: Effect of Memory Length in RRG (LNmax = 0.1, S′ = 1.0).
(Figures 13 and 14 plot the posting article rate B, the reciprocal comment rate LC, and the normal comment rate LN against the memory length TW.)

We explain what these phenomena in the RRG correspond
to in actual SNS. First, if LC < LN , agent i can receive
comments from non-reciprocal agent j, but i identifies j as
a reciprocal agent in the next round. The probability of j’s
receiving comments on j’s article from i lowers. This means
that agents should not comment on articles to receive more
comments from others. Of course, this leads to reducing
the number of articles posted. Furthermore, if SNS have no
mechanism to provide meta-rewards, just commenting on
articles incurs only a negative cost, so such agents also disappear. After that, the number of articles quickly decreases. When S′ is small and LC > LN, agents have fewer chances to give comments on articles posted by reciprocal agents, so the incentive to post articles decreases.
4.4 Effect of Memory Length (Exp. 3)
In Exp. 3, we want to clarify the relationship between
memory length, TW , and the values of B, LC , and LN because agents with longer memory do not forget past reciprocal behavior, which facilitates the evolution of cooperation.
Figure 13 shows the result of the normal RRG, and Fig. 14 shows the result of the RRG with S′ = 1 and LNmax = 0.1. In particular, Fig. 14 shows that when TW = 1, the rates of B and LC stay at around 1, like those in Fig. 11. However, as TW increases, B and LC quickly decrease. This phenomenon suggests that an increase in an agent’s memory length prevents the evolution of cooperation; this result seems counter-intuitive.
We can explain this phenomenon as follows. We consider
that if TW is larger, agents can memorize more reciprocal
agents, so they can continue to comment more on posted articles for a longer time. However, the opposite phenomenon
occurs. When TW = 1, if agent i commented on an article
of another agent j but j did not comment back on an article posted by i, i would stop commenting on j’s article in
the next round due to his/her defeat. However, when i had
a longer memory, i continued to consider j as a reciprocal
agent. Then, if j did not comment on i’s articles in a few
rounds, i still assumed that j would be a reciprocal agent.
Agents that did not comment on others’ articles but received
comments had an advantage. This situation continued for
longer if TW was large. Therefore, this phenomenon is likely
to lead to the collapse of cooperation.
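The memory mechanism behind this result can be sketched as a sliding window: agent i remembers, for each of the last TW rounds, who commented on its posts, and Wi is the union of those sets. A minimal illustration (with hypothetical round data) follows; it is not the authors' code.

```python
from collections import deque

class ReciprocalMemory:
    """Sliding-window memory W_i: agents that commented in the last TW rounds."""

    def __init__(self, tw):
        self.rounds = deque(maxlen=tw)   # each entry: set of commenters in one round

    def end_round(self, commenters):
        """Record who commented on this agent's posts in the round just finished."""
        self.rounds.append(set(commenters))

    def reciprocal_agents(self):
        """W_i = union of commenters over the remembered rounds."""
        w = set()
        for s in self.rounds:
            w |= s
        return w

# With TW = 1 an agent that stops commenting is forgotten immediately;
# with a longer TW it keeps being treated as reciprocal for several rounds.
mem = ReciprocalMemory(tw=3)
mem.end_round({"j"})      # agent j commented this round
mem.end_round(set())      # j stopped commenting
mem.end_round(set())
print("j" in mem.reciprocal_agents())   # still True until the window slides past
```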
4.5 Discussion
We will discuss the suggestions for SNS derived from the
findings in our formulation and experiments. The first condition, LC > LN , seems to be the condition to keep SNS
active. Particularly, by setting LNmax to low values, the cumulative cost can be significantly reduced by avoiding comments on that many articles. In addition, keeping the value
of S ′ higher means agents are less likely to miss the posted
articles. If we combine these conditions, we have a number of
suggestions related to SNS. First, agents seem to behave like half free-riders in cooperation-dominant situations. Agents gain rewards by reading articles of normal agents and do not pay the cost of commenting on these articles. However, because this behavior alone would make SNS inactive, agents actively comment only on the articles posted by reciprocal agents. We can
assume that reciprocal agents are like close friends. Because
S ′ is high and LNmax is low, LC must be high to keep SNS
thriving. Agents read as many articles as possible, but they
comment only on the articles posted by reciprocal agents.
Our experimental results show that this interaction structure is an important factor to keep users posting articles on
SNS.
We think that the situation mentioned above is often observed in actual SNS. A user, u, may have many peers, so
u reads many articles posted by them. However, to gain
the incentive to post articles, u has to receive frequent comments from u’s close peers. Of course, the articles posted
by u are also read by many other users who can gain some
rewards by behaving as free-riders for u.
5. CONCLUSION
We investigated the effect of reciprocity between users on
the prosperity of SNS. For this purpose, we first proposed the
reciprocity rewards game, which is an abstract model of SNS and an extension of the rewards game [17]. The original rewards game and its associated meta-rewards game do not include the structure of reciprocity between users, although we believe that reciprocity affects users’ SNS activity. We
conducted three experiments to understand and analyze the
effect of reciprocity on SNS activities, that is, which parameters affect the evolution and collapse of cooperation. We
also examined how the memory length that agents use to memorize reciprocal agents affects evolution in SNS, and showed that increasing the memory length prevents the evolution of cooperation in SNS. Our experimental results suggested that when users behave as half free-riders, meaning that a user behaves as a cooperator toward close friends (reciprocal peers) but as a free-rider toward other peers (acquaintances), cooperation can evolve and the prosperity of SNS can be maintained.
We plan to investigate the characteristics of the RRG with
meta-rewards. In addition, we believe that interaction takes
place not only between two users but also in a group to
which the user belongs, so it is necessary to include indirect reciprocity in our model and to clarify its structures in
future.
REFERENCES
[1] Anderson, E.: Social Media Marketing—Game Theory
and the Emergence of Collaboration. Springer-Verlag
Berlin Heidelberg (2010)
[2] Axelrod, R.: An evolutionary approach to norms.
American political science review 80(04), 1095–1111
(1986)
[3] Barabasi, A.L., Albert, R.: Emergence of scaling in
random networks. Science 286, 509–512 (1999)
[4] Faraj, S., Johnson, S.L.: Network exchange patterns
in online communities. Organization Science 22(6),
1464–1480 (2011)
[5] Hirahara, Y., Toriumi, F., Sugawara, T.: Evolution of
cooperation in meta-rewards games on networks of
WS and BA models. In: Proc. of the
IEEE/WIC/ACM Int. Joint Con. on Web Intelligence
(WI) and Intelligent Agent Technologies (IAT). vol. 3,
pp. 126–130. IEEE (2013)
[6] Hirahara, Y., Toriumi, F., Sugawara, T.: Evolution of
Cooperation in SNS-norms Game on Complex
Networks and Real Social Networks. In: Social
Informatics, pp. 112–120. Springer (2014)
[7] Karamon, J., Matsuo, Y., Ishizuka, M.: Generating
useful network-based features for analyzing social
networks. In: AAAI. pp. 1162–1168 (2008)
[8] Leider, S., Möbius, M.M., Rosenblat, T., Do, Q.A.:
Directed altruism and enforced reciprocity in social
networks. The Quarterly Journal of Economics 124(4),
1815–1851 (2009)
[9] Lin, H., Fan, W., Chau, P.: Determinants of users’
continuance of social networking sites: A
self-regulation perspective. Information and
Management 51(5), 595–603 (2014)
[10] Lin, K.Y., Lu, H.P.: Why people use social networking
sites: An empirical study integrating network
externalities and motivation theory. Computers in
Human Behavior 27(3), 1152 – 1161 (2011)
[11] Nowak, M.A.: Five rules for the evolution of
cooperation. Science 314(5805), 1560–1563 (2006)
[12] Rand, D.G., Nowak, M.A.: Human cooperation.
Trends in Cognitive Sciences 17(8), 413 – 425 (2013)
[13] Saito, K., Masuda, N.: Two types of twitter users with equally many followers. In: Proc. of the 2013 IEEE/ACM Int. Conf. on Advances in Social Networks Analysis and Mining. pp. 1425–1426. ACM (2013)
[14] Stieglitz, S., Dang-Xuan, L.: Social media and political communication: a social media analytics framework. Social Network Analysis and Mining 3(4), 1277–1291 (2013)
[15] Surma, J.: Social exchange in online social networks. The reciprocity phenomenon on Facebook. Computer Communications 73, Part B, 342–346 (2016)
[16] Takano, M., Wada, K., Fukuda, I.: Reciprocal altruism-based cooperation in a social network game. Computing Research Repository (CoRR), abs/1510.06197 (2015)
[17] Toriumi, F., Yamamoto, H., Okada, I.: Why do people use social media? Agent-based simulation and population dynamics analysis of the evolution of cooperation in social media. In: Proceedings of 2012 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology (WI-IAT 2012), Vol. 2, pp. 43–50 (2012)
Analysis of Market Trend Regimes for March 2011
USDJPY Exchange Rate Tick Data
Lukáš Pichl and Taisei Kaizoji
International Christian University
Osawa 3-10-2 Mitaka Tokyo 181-8585 Japan
[email protected]
ABSTRACT
This paper reports the analysis of the foreign exchange market for the USD and JPY currency pair in March 2011 for
the period of 23 trading days comprising 3,774,982 transactions. On March 11, 2011, the disaster of the Great Tohoku Earthquake accompanied by a tsunami took place; the event was followed by a highly turbulent market with JPY appreciating without limits in the panic that ensued; major central banks of the world intervened soon after to
weaken the yen. We analyze the tick data set using the
criteria of aggregate volatility, extreme-event distribution,
and singular spectrum analysis to discover the market microstructure during the central bank interventions. In addition, a deep-learning neural network algorithm is designed
to extract the causality regime on the microscale for each
trading day. At the beginning of the month, the success ratios in the trend prediction hit levels as high as the order of
70%, followed by about a 10-point decrease for the rest of
the data set. The distribution of intra-trade times shows clear signs of algorithmic trading, with the clearing transaction clock ticking at time intervals of 0.1, 0.25, and 10.0 sec. The
extracted trend prediction rates represent lower bounds with
respect to other methods. The present work offers a useful
insight into algorithmic trading and market microstructure
during extreme events.
Categories and Subject Descriptors
I.5.1 [Pattern Recognition]: Models: Neural Nets
General Terms
Economics
Keywords
tick data, extreme events, deep learning, causality extraction, trend visualization
1. INTRODUCTION
On March 11, 2011, at 14:46 JST the Great Tohoku Earthquake of magnitude 9.0 struck, followed by a massive tsunami,
Appears in: Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016), John Thangarajah, Karl Tuyls, Stacy Marsella,
Catholijn Jonker (eds.), May 9–13, 2016, Singapore.
Copyright © 2016, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
Figure 1: The exchange rate of USD/JPY in March
2011 for the period of 23 working days. The breaks
in the line correspond to weekends.
leaving 15,894 dead, 6,152 injured, and 2,562 people missing, according to the National Police Agency (data of February 10th, 2016) [1]. The overall property damage reached
hundreds of billions of dollars. The Japanese yen rapidly
strengthened once the scale of the damage became known.
This happened contrary to increased concerns about the future exports of Japan or the increased expectation of government default due to the forthcoming revitalization
cost burden. The two major reasons for the sudden speculative appreciation were according to Neely [2] (i) expectations
that Japanese insurance companies would need to liquidate
and repatriate reserves from overseas, and (ii) the closing
of carry-trade positions in which investors borrowed yen to
lend abroad. The appreciation was rapid and significant:
whereas on March 10, at 9 PM JST, one USD traded at
82.936 JPY, one week after, on March 17, at the same time,
the trading level was 78.899 JPY. The finance ministers of
the group of G-7 announced on March 17 late evening a
coordinated intervention to weaken the yen [3]. Although
the exact amounts are not known, the Fed announced, for
instance, that it sold yen from the U.S. reserves worth the
equivalent of 1 billion dollars. The Bank of Japan sold between 1-2 trillion yen on the same day, too [4].
Figure 2: Reconstruction of the price time series from Fig. 1 using Singular Spectrum Analysis.
Figure 3: Tick-to-tick log returns for USDJPY exchange rate.
Figure 4: Histogram of log returns.

In this paper, we embark on an empirical study of the USD/JPY exchange rate using a data set of market tick transactions in March 2011 (bid and ask price range is available, but transaction volumes are not). In Section 2, we describe the structure of the data set and its empirical characteristics, such as the distribution of log returns, intra-trade
time intervals, or the evidence of algorithmic trading on the
real time scale. Section 3 presents our attempt at discovering high volatility regimes characteristic of sudden market speculations or central bank interventions. We focus on
the distribution of extreme events and visualization of their
density in the form of barcode diagram. Singular Spectrum
Analysis (Principal Component Analysis on the lagged time
series) is used to find the events of dimensional collapse when
the eigenvalue of the first principal components represents
all the standard deviation in the time series - this criterion
appears to correlate with the high volatility regimes of market interventions. In Section 4, motivation for trend prediction by deep neural network algorithm is described, and the
results presented for each trading day of March 2011. We
conclude with final remarks in Section 5.
2. DATA SET
The data set consists of trade records that include the currency pair indicator, USD/JPY, date in the form of YYYYMMDD, and the price pair of bid and ask values, in the format
of NN.NNN (units of JPY). The source is True FX company (http://www.truefx.com/). The exchange rate time
series are depicted in Fig. 1 (time scale in seconds is used
from the reference point of March 1st, 2011, 0:00 JST),
which shows the sudden speculative appreciation of yen after the earthquake and tsunami disaster (2011 3 11 14:46,
t(sec)= 1,003,593 and t(tick)=1,303,200). The peak effect
of this speculative bubble took place on 2011 3 16 21:15
(t(sec)=1,459,100, t(tick)=1,993,900). The intervention of
the G-7 group led by the Bank of Japan started on 2011 3 18 00:02 (t(sec)=1,555,300, t(tick)=2,350,600). An aggregated volatility criterion depicts one more significant event
on 2011 3 21 13:40 (t(sec)=1,863,600, t(tick)=2,728,200),
during which the market was largely unstable, perhaps due
to the ongoing battle of the speculative trend of yen ap-
Figure 6: Histogram of intra-trade times.
Figure 5: Histogram of log returns (log-log plot).
preciation and the central bank intervention to depreciate
the currency. Although none of these events appears significantly in the Singular Spectrum Analysis of the FX time
series shown in Fig. 2, the SSA will be proven useful on the
lagged copies of logarithmic returns, defined as

Rt = log(Ft+1 / Ft),    (1)

where Ft is the price at (tick) time t defined by the average of the bid and ask exchange rate values. In Fig.
3, the time series of Fig. 1 transformed to logarithmic returns are shown. The vertical lines correspond to the events
of the earthquake, speculative bubble, and the 1st and 2nd
interventions against this trend. Figure 4 provides the discrete version of the log return histogram, on which the quoting step is visible (order of 10−3 /FX). The histogram has a
power-law tail, as can be seen from the straight-line region
of the histogram in a log-log plot shown in Fig. 5.
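A minimal sketch of this preprocessing step, computing the mid price from bid and ask quotes and then the tick-to-tick log returns of Eq. (1), is shown below; the column layout of the tick records is an assumption, not the exact True FX format.

```python
import numpy as np
import pandas as pd

# Assumed column layout of a tick record: bid and ask quotes in units of JPY.
ticks = pd.DataFrame({
    "bid": [82.930, 82.935, 82.940, 82.933],
    "ask": [82.942, 82.947, 82.952, 82.945],
})

# F_t: mid price = average of the bid and ask quotes at each tick.
ticks["mid"] = (ticks["bid"] + ticks["ask"]) / 2.0

# R_t = log(F_{t+1} / F_t), Eq. (1): one value per pair of consecutive ticks.
log_returns = np.log(ticks["mid"].shift(-1) / ticks["mid"]).dropna()
print(log_returns.values)
```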
Next, we proceed to the analysis of intra-trade times. In
the inset of the histogram in Fig. 6 for the first 0.6 seconds,
it can be seen that every 0.1 sec, there is a peak of increased
trading activity, indicating the physical clock at the market
being used for transaction clearing and algorithmic trading.
In addition to the 0.1 sec stepping, there is also a small
peak distinguishable at a quarter of a second. The effect is
more pronounced on a longer time grid in Fig. 7 using the
logarithmic scale. Notice the higher-than-regular effect at the 10-second interval, too.
In order to assess the volatility of the market, focusing especially on the central bank interventions, one useful criterion is the distribution of extreme events. Using n-gram analysis with n=10, we define an extreme event as a sequence of 10 adjacent log returns of negative sign. Since
the probability distribution of log returns is symmetric, the
odds for this event to occur are 1 in 1024. Next, we define
an indicator, which is equal to 0, when the extreme event
was not recorded in the moving window, and -10, in case
the extreme event was detected. A line plot of this indicator
results in a barplot, from which clustering of volatility and
the distribution of extreme 10-grams may easily be seen.
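The 10-gram criterion and the barcode indicator can be sketched with a short rolling check over the signs of the log returns, as below; synthetic returns stand in for the tick data.

```python
import numpy as np

rng = np.random.default_rng(0)
log_returns = rng.standard_normal(100_000) * 1e-4   # synthetic stand-in for tick returns

N = 10  # extreme event: N adjacent log returns of negative sign (odds ~ 1/1024)

negative = log_returns < 0
# rolling count of negative signs over each window of N consecutive ticks
rolling_neg = np.convolve(negative.astype(int), np.ones(N, dtype=int), mode="valid")
extreme = rolling_neg == N

# barcode indicator: 0 when no extreme 10-gram ends at this tick, -10 when one does
indicator = np.where(extreme, -10, 0)

print("extreme 10-grams found:", int(extreme.sum()),
      "expected ~", len(log_returns) / 2**N)
```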
Figure 7: Clock ticks at intra-trade times (log scale).

The result for March 2011 is given in Fig. 8. It can be seen from Fig. 8 that until the earthquake, the extreme events were rare (and clustered). This changed profoundly after March 11, and two continuous blocks of extreme 10-grams ensued, somewhat surprisingly, immediately with the start of the speculative bubble, and then at the G-7 coordinated central bank intervention. The barplot thus
indicates the changes in the market microstructure, namely,
the departure from clustering of 10-grams seen in the usual
time series, to a more equidistant distribution. Let us notice that given the stochastic odds of 10^−3 for the extreme 10-gram to occur, the total number of tick points in the order of 10^6, and the line thickness, the referential barplot of
equally spaced n-grams would be represented by the entire
area in black. For the month of March 2011, the statistical
estimate of extreme 10-grams, and their actual count, coincide to 3 digits precision. An inset from the histogram of
these extreme 10-gram inter-event times is given in Fig. 9.
3. MARKET PATTERNS
The modeling of volatility effects of central bank interventions typically employs an extra variable representing
whether the intervention is taking place or not [5]. We do
not have this kind of information for March 2011; also not all
of the interventions are publicized, even though the public
announcement beforehand often adds to the scale of the resulting effect [6]. We thus select a simple criterion for volatility detection using the tick-to-tick time scale, on which we
add up the absolute values of the realized log returns (rather
than their squares, in order to allow for a more robust behavior with regard to outliers). When the indicator exceeds
the threshold value of 0.1, we detect an extreme event. As
can be seen in Fig. 10, this criterion clearly detects the speculative bubble and the two interventions events on March 18
Figure 8: Barcode of extreme-event occurrence in tick data.
Figure 11: Trend collapse detection using Singular
Spectrum Analysis.
Figure 9: Distribution of time intervals between extreme events.
Figure 10: Volatility detection using Σ_{t=1}^{1000} |Rt|.
and March 21, without fail.
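A minimal sketch of this aggregate-volatility criterion, a moving-window sum of absolute log returns compared against the 0.1 threshold, is given below on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(1)
log_returns = rng.standard_normal(50_000) * 1e-4   # synthetic stand-in for tick returns

WINDOW = 1000     # number of consecutive ticks summed
THRESHOLD = 0.1   # volatility level flagged as an extreme event

# moving-window sum of |R_t| (more robust to outliers than summing squares)
abs_sum = np.convolve(np.abs(log_returns), np.ones(WINDOW), mode="valid")
volatile = abs_sum > THRESHOLD

print("windows flagged as high-volatility:", int(volatile.sum()))
```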
When a market panic occurs, or a massive intervention
takes place, an overall trend emerges that reduces the number of degrees of freedom in the time series of log returns. We have attempted to test this hypothesis using the Singular Spectrum
Analysis, i.e., the Principal Component Analysis applied to
k-lagged copies of the time series [7]. The particular setting
is a 100-tick long subset of the time series, each lagged with
k = 0, 1, . . . , 19 steps. From these, the symmetric 20 by 20
covariance matrix is formed, and its eigenvalues are computed. To account for the sudden dimensionality collapse,
we take the ratio of the standard deviation of the first principal component to the standard deviations of all principal
components. We remark that this criterion is again more robust than it would be if we took the ratio of variance explained instead. The time series of the resulting criterion
values are shown in Fig. 11. The red line indicates the log
return time series for comparison. If we select the threshold of the order of 0.9 and above, then three dimensionality
collapse events are detected, falling within the time of the
un-announced second intervention event of March 21. This
completes our empirical analysis of market patterns, volatility and extreme event criteria that distinguish the different
market regimes. In the next section, we will apply the standard deep learning neural network algorithm to investigate
the degree to which causality can be extracted for the case
of trend prediction on the market tick scale.
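The following sketch shows one way to implement the SSA-based collapse criterion as we read it: for a 100-tick segment, build 20 lagged copies, form the 20-by-20 covariance matrix, and take the ratio of the first principal component's standard deviation to the sum of all standard deviations. Synthetic data are used; the exact windowing of the original study may differ.

```python
import numpy as np

def collapse_ratio(returns, window=100, lags=20):
    """Ratio of the first PC's standard deviation to the sum of all PC
    standard deviations, computed from lagged copies of one segment."""
    segment = returns[-window - lags:]                                 # window + lags points
    lagged = np.array([segment[k:k + window] for k in range(lags)])    # lags x window
    cov = np.cov(lagged)                                               # 20 x 20 covariance matrix
    eigvals = np.linalg.eigvalsh(cov)                                  # ascending eigenvalues
    stds = np.sqrt(np.clip(eigvals, 0.0, None))
    return stds[-1] / stds.sum()          # values close to 1 indicate dimensional collapse

rng = np.random.default_rng(2)
noise = rng.standard_normal(500) * 1e-4              # ordinary, trendless regime
trend = noise + np.linspace(0, 0.05, 500)            # strong one-directional trend

print(round(collapse_ratio(noise), 3), round(collapse_ratio(trend), 3))
```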
4. DEEP NEURAL NETWORK
Artificial feed-forward neural networks are currently undergoing a renaissance in the prediction of time series because of the algorithmic development in the discipline of
deep learning [8, 9, 10]. Although recent research focuses mostly on the relation between information and market returns, using, e.g., deep belief networks [10], this computational model
has been applied successfully to trend-extraction in a number of studies [11, 12, 13, 14, 15].
Figure 12: Correlation of subsequent log returns.

The information propagates through the network from inputs to outputs [16], where

h_i^(k) = g_k( Σ_{j=1}^{n^(k−1)} w_{i,j}^(k) h_j^(k−1) + b_i^(k) ),  i ∈ ⟨1, . . . , n^(k)⟩,    (2)

with k denoting the hidden layer index (k = 0 reduces to the input layer, h_i^(0) ≡ x_i), g_k being the activation function (in this work tanh(x) for k = 1, . . . , H, and the identity function g_{H+1}(x) = x in the output layer; h_i^(H+1) ≡ y_i). The network is defined by the parameters of synaptic weights w_{i,j}^(k) and hidden neuron biases b_i^(k) in each layer k. The parameters
of the network are initialized at random; the learning algorithm consists in evaluating the gradient of the objective function J^(n) for the training pattern {x^(n), y_d^(n)},

J^(n) = (1/2) Σ_{k=1}^{d_o} [ y_k(x^(n)) − y_{dk}^(n) ]²,    (3)
by means of error backpropagation [17]. The optimization
method of choice is typically of the conjugate gradient class
combined with stochastic sampling of initial conditions. Upon
training of the neural network, i.e., having optimized its parameters, Eq. (2) can be used to predict the out-of-sample values. In the present work, the dimension of the input vector x is d_i = 5 and the dimension of the output is d_o = 1. The
number of hidden layers is H = 4. An actual example of
a neural network of this type showing the topology used in
this work is given in Fig. 14. The bias values for hidden
and output neurons are in blue with constant input value
1; weights are shown in black for each synaptic connection.
All the calculations are performed using the open source free software R [18, 19]. Artificial neural networks as described
above have already been used extensively in the prediction
of market returns [20].
In order to extract the causality degree from the time
series data set, we select as predictors the variables Rt−5 ,
Rt−4, Rt−3, Rt−2, and Rt−1 in a 5-step-back moving window. Normally, the one-step lagged time series is negatively correlated with the original one; this feature of the mutual log
return distribution in the data set can be seen from the scatter plot in Fig. 12. The diagonal line pattern corresponds
to the negative correlation of Rt and Rt−1 . To represent the
correlation of Rt with the recent trend, we compute the moving window return Rt−5 + Rt−4 + Rt−3 + Rt−2 + Rt−1 , which
Figure 13: Trend correlation of log returns.
is positively correlated with Rt as can be seen in Fig. 13.
Both plots indicate the existence of partial patterns that are
subjected to the neural network learning algorithm. Figure
14 then shows the deep-learning architecture of the artificial neural network. There are 5 inputs, plotted as full blue
circles on the left part of the picture, and a single output
neuron, depicted in red. The neurons of the four hidden
layers are shown as empty circles. Weight parameters are
shown as numbers for each synaptic connection. Bias parameters are represented as constant inputs of value 1, and
the respective weight for each extra synaptic connection to
the constant input. The parameters of the network were learnt on the March 31st, 2011, data set. The one-month data set is divided into 23 subsets of one-day trading data.
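The authors performed these calculations with the R neuralnet package [18, 19]; the following is only an illustrative Python sketch of the same setup, with five lagged log returns as predictors, four tanh hidden layers, a 66.6%/33.4% train/test split, and the hit ratio as the sign-agreement rate, using scikit-learn on synthetic data. The hidden-layer widths are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
returns = rng.standard_normal(20_000) * 1e-4     # synthetic stand-in for one trading day

# Predictors: R_{t-5}, ..., R_{t-1}; target: R_t (its sign is the trend).
LAGS = 5
X = np.column_stack([returns[k:len(returns) - LAGS + k] for k in range(LAGS)])
y = returns[LAGS:]

# 66.6% of ticks for training, 33.4% for out-of-sample testing, as in the paper.
split = int(len(y) * 0.666)
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

# Four hidden layers with tanh activations; the layer widths here are assumptions.
model = MLPRegressor(hidden_layer_sizes=(8, 8, 4, 2), activation="tanh",
                     max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Hit ratio: fraction of test ticks where the predicted sign matches the realized sign.
hit_ratio = np.mean(np.sign(model.predict(X_test)) == np.sign(y_test))
print(f"hit ratio: {hit_ratio:.2%}")
```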
Table 1 shows the trend prediction results of the deep neural network, i.e., for the sign of the log return, the so-called hit ratios (66.6% of the tick data is used for training and 33.4% of the data for prediction testing). The minimum value is 53.98%, the median 60.9%, the mean 61.27%,
and the maximum 72.92%. It can be seen from the data
Figure 14: Neural network in deep-learning configuration.
Table 1: Trend prediction success rates (hit ratios) for trading days of March 2011.

Mon         Tue         Wed         Thu         Fri
--          01  54.40   02  57.67   03  54.35   04  53.99
07  72.92   08  72.43   09  72.05   10  71.40   11  63.69
14  56.04   15  57.33   16  53.98   17  56.07   18  57.09
21  58.55   22  61.62   23  61.65   24  60.23   25  62.65
28  65.73   29  61.19   30  63.35   31  60.90   --
that on the Great Tohoku Earthquake day the rates dropped
from around 70% of the preceding week by 10 points, and
remained at the lower levels throughout the central bank
interventions. These values were obtained by repeatedly re-initializing the neural network parameters to avoid trapping in local minima. They can be considered
as the lower bounds for estimates obtained by more sophisticated methods.
5. CONCLUSIONS
We have analyzed the statistical properties and the trend
regimes in the tick data set of USD/JPY foreign exchange
trades. A simple volatility criterion that uses the sum of
the log returns over a moving window interval correctly extracts the periods of the speculative bubble and the central
bank market interventions. Singular spectrum analysis applied to the lagged log return series indicated dimensionality reduction events in the market microstructure during the market interventions on March 21st. Starting with the
earthquake event, the distribution of extreme events using
10-grams of negative log returns changed substantially - covering the speculative bubble formation and the subsequent
central bank intervention. Trend prediction success rates
computed with the deep learning neural network also drop
by about 10 points from the pre-disaster level, indicating
market turbulence and chaotic behavior of market participants. The present work is believed to provide a useful
insight into the turbulent foreign exchange market episodes,
for which networked agent simulations are appropriate [21].
REFERENCES
[1] National Police Agency of Japan,
http://www.npa.go.jp/archive/keibi/biki/higaijokyo e.pdf
Accessed on Feb. 16, 2016.
[2] C. J. Neely. A Foreign exchange intervention in an era
of restraint. Federal Reserve Bank of St. Louis Review,
93(5): 303–324, September/October 2011.
[3] Reuters Markets. G7 Cenbanks in rare currency action
after yen surge. http://www.reuters.com/article/usglobal-economy-idUSL3E7EH14Q20110318. Accessed
on Feb. 16, 2016.
[4] Bloomberg Business,
http://www.bloomberg.com/news/articles/2011-0513/fed-bought-1-billion-of-u-s-currency-during-marchg-7-yen-intervention. Accessed on Feb. 16,
2016.
[5] K. M. Dominguez. Central bank intervention and
exchange rate volatility. Journal of International
Money and Finance, 17: 161–190 (1998).
[6] W. H. Tsen. Exchange Rate and Central Bank
Intervention. Journal of Global Economics, 2:
e104:1–4, 2014.
[7] W. W. Hsieh. Machine Learning Methods in the
Environmental Sciences, Cambridge University Press,
Cambridge, 2009.
[8] Y. Bengio. Learning Deep Architectures For Artificial
Intelligence. Foundations and Trends in Machine
Learning, 2 (1): 1–127, 2009.
[9] S. Haykin. Neural Networks and Learning Machines.
Pearson International Edition, 3rd edition, 2009.
[10] T. Kuremoto, S. Kimura, K. Kobayashi, and M.
Obayashi. Time Series Forecasting Using a Deep Belief
Network with Restricted Boltzmann Machines.
Neurocomputing, 137: 47–56, 2014.
[11] H. Mizuno, M. Kosaka, H. Yajima, N. Komoda. Application of Neural Network to Technical Analysis of Stock Market Prediction. Studies in Informatic and Control, 7 (3): 111–120, 1998.
[12] A.-S. Chen, M. T. Leung, H. Daouk. Application of neural networks to an emerging financial market: forecasting and trading the Taiwan Stock Index. Computers and Operations Research, 30: 901–923, 2003.
[13] R. Sitte and J. Sitte. Analysis of the predictive ability of time delay neural networks applied to the S&P 500 time series. IEEE Transactions on Systems, Man and Cybernetics, 30: 568–572, November 2000.
[14] G. P. Zhang and D. Kline. Quarterly time-series forecasting with neural networks. IEEE Trans. Neural Netw., 18: 1800–1814, 2007.
[15] M. P. Clements, P. H. Franses, and N. R. Swanson. Forecasting economic and financial time-series with non-linear models. Int. J. Forecast., 20: 169–183, 2004.
[16] W. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 7: 115–133, 1943.
[17] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning Representations by Back-propagating Errors. Nature, 323: 533–536, 1986.
[18] R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/, 2016.
[19] S. Fritsch, F. Guenther, M. Suling. neuralnet: Training of neural networks. R package version 1.32. http://CRAN.R-project.org/package=neuralnet, 2012.
[20] Filippo Castiglione. Forecasting Price Increments using an Artificial Neural Network. Advances in Complex Systems, 4(1): 45–56, 2001.
[21] X.-Q. Sun, H.-W. Shen and X.-Q. Cheng. Trading Network Predicts Stock Price. Scientific Reports, 4: 3711:1–6, 2014.
An Examination of a Novel Information Diffusion Model:
Considering of Twitter User and Twitter System Features
Keisuke IKEDA
Takeshi SAKAKI
The University of Electro-Communications
1-5-1 Chofugaoka, Chofu,
Tokyo, Japan
The University of Tokyo
7-3-1 Hongo, Bunkyo,
Tokyo, Japan
[email protected]
Fujio TORIUMI
Satoshi KURIHARA
The University of Tokyo
7-3-1 Hongo, Bunkyo,
Tokyo, Japan
The University of Electro-Communications
1-5-1 Chofugaoka, Chofu,
Tokyo, Japan
[email protected]
ABSTRACT
Twitter is a popular microblogging service in Japan. People
use Twitter to communicate with friends and to post tweets about daily life events. In addition, Twitter has also been used in emergency situations, such as earthquakes. In the East Japan Great Earthquake disaster, people used Twitter to get refuge and rescue information. However, the spread of false rumors has become a major social problem. We aim to propose a scheme for suppressing false rumors. Therefore, in this paper we propose a novel multiagent-based information diffusion model to reveal the diffusion mechanism of false rumors. Our model focuses on the information diffusion behavior of each Twitter user. We consider three elements of each user: “User’s diversity”, “Life pattern”, and “State transition”. In addition, our model also takes into account the multiplicity of information paths, which is a feature of Twitter. We evaluate the validity of our model.
Keywords
Twitter, Information diffusion, False rumor
1. INTRODUCTION
Twitter is a kind of microblog service and a popular communication tool. It is used not only in daily life but also in the event of a disaster. In the East Japan Great Earthquake disaster, Japan suffered great damage. Amid the confused situation, people used Twitter to get refuge and rescue information and to communicate with each other. Several TV stations and government agencies announced evacuation information and rescue information via Twitter [1]. However, there is a drawback to getting this information from Twitter: you may receive false rumors. Information spreads rapidly on Twitter. The following are actual examples of false rumors from the East Japan Great Earthquake disaster.
Appears in: Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016), John Thangarajah, Karl Tuyls, Stacy Marsella,
Catholijn Jonker (eds.), May 9–13, 2016, Singapore.
Copyright © 2016, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
1. A toxic substance attached to clouds by the explosion of
Cosmo Oil falls with rain.
2. Drinking a bottle of Povidone-iodine will protect you from radioactive damage.
We define a false rumor as “information whose correction is announced later, even though this information was not diffused deliberately [2].” In a large-scale disaster, we expect that victims find it difficult to ascertain the authenticity of information. Victims may suffer serious damage from wrong information.
We would like to establish a method of stopping the diffusion of false rumors. However, a detailed diffusion mechanism has not been made very clear for either false or corrected rumors. Therefore, in this paper, we propose a novel multiagent-based information diffusion model (AIDM: Agent-based Information Diffusion Model), based on a multi-agent system. We evaluate the validity of our model.
This paper is organized as follows: in Section 2, we introduce related work. In Section 3, we describe the problems of the information diffusion model that we previously proposed. In Section 4, we propose a new information diffusion model to solve these problems. Section 5 describes the experiment and a method for evaluating our model. Finally, Section 6 concludes our work and describes future work.
2. RELATED WORKS
Recently, there has been a lot of research dealing with information diffusion in social media.
We previously proposed an information diffusion model, hereinafter called the “Extended SIR model” [2]. The Extended SIR model is an extension of the famous SIR model [3], an epidemiological model. We considered a false rumor as a virus that spreads a disease. However, since there are differences between false rumors and viruses, we constructed this model considering the characteristics of information diffusion. In this model, the state transition of a user is expressed using transition probabilities. By using this model, we succeeded in reproducing the diffusion of a particular false rumor. However, this does not mean that it reproduces all of the false rumors that spread in the East Japan Great Earthquake disaster.
Serrano et al. also carried out information diffusion studies using the SIR model [4]. They also reproduce information diffusion by the state transitions of agents. The differences from the information diffusion model proposed in this paper are the following two points. First, Serrano et al. consider the influx of information from other information media. Second, there is a restriction that a user who has already tweeted a false rumor cannot tweet the corrected information. However, Serrano et al. did not take into account the characteristics of each user.
In this paper, we propose a new model which considers the characteristics of each user. Below, we describe the reason for taking the characteristics of each user into account. Miura analyzed the contents of tweets on the East Japan Great Earthquake disaster [5]. She investigated the reasons for the increase in communication and in negative content on Twitter. She pointed out that these situations caused a lot of tweets aimed at reducing disaster stress; as a result, false rumors increased by this behavior. She also mentioned a feature of Twitter: the Twitter timeline is different for each Twitter user, so the information that each user receives is different. We need to adapt our approach to suit this situation.
Takeuchi et al. studied information diffusion with a focus on users’ characteristics [6]. In their information diffusion model, they consider that users filter information: whether or not to spread information depends on whether the user finds value in it. In addition, they described another important element, the source of the information.
In order to estimate the detailed mechanism of information diffusion, it is necessary to consider the knowledge of Miura and of Takeuchi et al. We study the information diffusion phenomenon by focusing on the characteristics of each user and of Twitter. In this paper, we propose a new information diffusion model to overcome the weak points of the “Extended SIR model.”
3. WEAK POINTS OF THE EXTENDED SIR MODEL
This section explains the four weak points of the Extended SIR model.
First, all user agents are of the same type for message transmission, which makes it difficult to represent diversity. State transitions of the agents are performed with the same state transition probabilities. This condition means that all users’ preferences and interests are the same. However, each user has different preferences and interests regarding what information they want to convey. Therefore, it is necessary to take a user’s diversity of communication into consideration. In addition, we considered that “false rumor” and “corrected information” are different pieces of information. The corrected information is the information which denies the false rumor. In short, the false rumor and the corrected information are on the same topic. Therefore, the degree of the user’s interest does not differ between the two pieces of information.
Second, users cannot tweet more than once in the Extended SIR model. A user is expected to tweet more than once to convey important information to many users. However, tweeting more than once is not possible because our previous model is based on the SIR model.
Figure 1: ORS Model

Third, the multiplexing of communication paths is not taken into consideration. A user agent receives the information only once in our previous model. However, there are several information paths within the Twitter follower network, so a user agent can receive information multiple times.
Fourth, people’s life patterns are not taken into consideration. Users do not keep using Twitter throughout the day. As an example, consider a user’s activity for one day. The user wakes up and eats breakfast in the morning, then goes to work. After work, the user goes home and eats dinner; sometimes, the user meets with friends. At night, the user goes to bed. Thus, the user engages in various activities.
4. PROPOSED METHOD
In this section, we describe how we improve on the weak points described in the previous section. Our novel model is a multi-agent model in which a plurality of agents represent the false rumor diffusion phenomenon by interacting. Agents consider the elements described below.
4.1 State Transition Model
Users are able to tweet about the same topic more than once. We propose a new state transition model to represent this. We call this state transition model the “Outsider - Receiver - Sender Model (ORS Model)”. The state transitions of the ORS Model are shown in Figure 1. The three states are described below.
• Outsider: People who know neither a false rumor nor its corrected information are in this state.
• Receiver: People who know a false rumor or corrected information are in this state.
• Sender: People who know a false rumor (or corrected information) and have diffused it are in this state. A “Sender” can return to the “Receiver” state; therefore, it is possible for an agent to tweet multiple times.
4.2 Multiplexing of information paths
We take the multiplexing of information paths into account. If a user receives a false rumor multiple times, they may tweet it even if the information is not very interesting to them; the false rumor is then spread by the user.
4.3 Life Pattern
A study by Shahzad et al. found that the usage trend of Twitter changes by the hour [7]. That work studied the usage of Twitter in daily life; however, we target the use of Twitter in emergency situations. We investigated the usage of Twitter during the East Japan Great Earthquake disaster. Figure 2 is a plot of the average number of tweets at each hour of the day over 7 days (11 March to 17 March 2011).
Figure 2: Average Tweet and Tweet Ratio
Table 1: Setup of Tweet Ratio
Time  Ratio (%)   Time  Ratio (%)   Time  Ratio (%)   Time  Ratio (%)
0     6.15        6     1.56        12    4.18        18    5.20
1     4.26        7     2.29        13    4.00        19    5.53
2     2.67        8     2.78        14    4.06        20    6.01
3     1.72        9     2.96        15    5.32        21    6.71
4     1.62        10    3.31        16    4.87        22    7.78
5     1.34        11    3.55        17    4.89        23    7.28
This figure shows that the number of daytime posts peaks at 12:00 and 15:00. These hours correspond to lunch or break time. Tweets increase from around 17:00, with the largest number of tweets at 22:00; this time period is after work, when users spend their leisure time. The number of posts decreases from around 11:00 p.m., and around 5:00 in the morning the minimum number of tweets during the day is recorded; this is related to the time people fall asleep. In this way, users’ life patterns affect the number of tweets. Therefore, to express the “User’s Life Pattern,” we have to limit the number of agents that get information from Twitter at each time.
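One simple way to express the “User’s Life Pattern” in the simulation is to gate, at each one-hour step, which agents look at Twitter, using the hourly ratios of Table 1 as activity probabilities. The sketch below is one possible reading of this mechanism, not the authors' implementation.

```python
import random

# Hourly tweet ratios (%) from Table 1, indexed by hour 0-23.
TWEET_RATIO = [6.15, 4.26, 2.67, 1.72, 1.62, 1.34, 1.56, 2.29, 2.78, 2.96, 3.31, 3.55,
               4.18, 4.00, 4.06, 5.32, 4.87, 4.89, 5.20, 5.53, 6.01, 6.71, 7.78, 7.28]

def active_agents(agent_ids, hour):
    """Agents that look at Twitter during this one-hour step.

    The hourly ratio is used directly as an activity probability, which is
    one possible reading of how the ratios gate agent activity.
    """
    p = TWEET_RATIO[hour % 24] / 100.0
    return [a for a in agent_ids if random.random() < p]

# 48 one-hour steps, matching the false-rumor scenario that starts at 18:00 on 11 March.
agents = list(range(1000))
for step in range(48):
    hour = (18 + step) % 24
    online = active_agents(agents, hour)
    # ... information diffusion among the `online` agents would happen here ...

print(len(online), "agents active in the final step")
```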
4.4 User’s diversity
In order to take “User’s diversity” into consideration, we use the proposal of Endo et al. [8]. They modeled word-of-mouth propagation and reported two important elements for it: “the reliability of information sources” and “the value of information contents.” For the reliability of information sources, information from experts or specialists has a greater impact on reliability, and more reliable information is more persuasive. For the value of information, in general, information that suits one’s interests or hobbies becomes more valuable.
Our model defines the following new parameters, which are used as elements of information diffusion.
Degree of Influence: a
The degree of influence a represents the magnitude of the influence of information sources. As an actual example, famous persons have a great impact. This value is defined by using a PageRank algorithm; we use the PageRank value as the degree of influence a.
Degree of Interest: i
The degree of interest i represents the strength of a user’s interest in the topic. This value expresses the difference in each user’s hobbies and diversions. This value becomes large if the topic suits the user’s interests and hobbies.
Degree of Sensitivity: s
The degree of sensitivity s represents the degree to which a user tends to believe the information. Endo et al. said that a user judges the truth of information by using their own knowledge and experience. It is necessary to take this into consideration for each user. The larger this value becomes, the more likely it is for the user to be influenced by information.
Our model defines the motivation of tweeting (MoT). MoT expresses the desire of a user to tweet. If MoT is larger than a threshold value, the user will tweet, and the information will be spread. The method for calculating MoT is shown in formula (1) below.
MoT_kβ^t = MoT_kβ^(t−1) e^(−λ(t−FG)) + i_kβ s_β Σ_n a_n    (1)
Here, the symbols in the formula represent the following: β is a user who, having received the information, is deciding whether to tweet; α_n is the set of users that are sources of information for user β; λ is a forgetting rate; k is the topic of the received information; t is the present time; and FG is the time when the user first receives the false rumor information.
Here, we describe the behavior of a user as pseudo code (Algorithm 1). In addition, we explain this pseudo code using a case where user β has received a false rumor tweet. Whether user β receives the false rumor is determined in accordance with the value of Table 1 at the current time. Note that Table 1 shows the rate of users that can be in contact with the information at each time. User β receives a false rumor from one or more users α_n whom user β follows. User β’s MoT is calculated using formula (1), and the size of MoT is compared with the threshold value. If MoT is larger than the threshold value, user β’s infection condition is set to “Sender(FalseRumor)”; if MoT is smaller than the threshold value, it is set to “Receiver(FalseRumor).” Corrected information follows the same idea: if MoT is larger than the threshold value, the user’s infection condition is set to “Sender(CorrectedRumor)”; if MoT is smaller, it is set to “Receiver(CorrectedRumor).”
Algorithm 1 Behavior of agent
1: if the agent receives a false rumor according to the ratio of Table 1 at the current time && the agent has not spread the same false rumor before then
2:     MoT is calculated by using formula (1).
3:     if MoT > threshold value then
4:         The agent's infection condition is made into "Sender", and the false rumor is spread to the agent's followers.
5:     else
6:         The agent's infection condition is made into "Receiver".
7:     end if
8: end if
9: if the agent's infection condition is "Sender" then
10:     The agent's infection condition is changed to "Receiver".
11: end if
When the agent gets new information, the above is repeated.
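A Python sketch of one tick of Algorithm 1 for a single agent is given below. The dictionary representation of an agent, the handling of the first-reception time FG, and the choice to revert a Sender to Receiver on the following tick are assumptions made for illustration; delivering the tweet to followers is left to the surrounding simulation loop.

import math

THRESHOLD = 5e-7  # threshold value from Table 3
LAMBDA = 1 / 8    # forgetting rate from Table 3

def agent_step(agent, received_influences, t):
    """One tick of Algorithm 1 for a single agent (illustrative sketch).

    `agent` is assumed to be a dict with keys 'mot', 'state', 'interest',
    'sensitivity', 'first_received' and 'already_spread'.
    """
    if received_influences and not agent["already_spread"]:
        if agent["first_received"] is None:
            agent["first_received"] = t  # FG in formula (1)
        # Formula (1): decayed previous MoT plus new stimulus.
        agent["mot"] = (agent["mot"] * math.exp(-LAMBDA * (t - agent["first_received"]))
                        + agent["interest"] * agent["sensitivity"] * sum(received_influences))
        if agent["mot"] > THRESHOLD:
            agent["state"] = "Sender"       # lines 3-4: spread to followers
            agent["already_spread"] = True
        else:
            agent["state"] = "Receiver"     # lines 5-6
    elif agent["state"] == "Sender":
        agent["state"] = "Receiver"         # lines 9-11, applied on the next tick
    return agent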
Table 3: Setup of each parameter
Degree of Interest i        Random value in the range 0 to 1
Degree of Sensitivity s     Random value in the range 0 to 1
Degree of Influence a       PageRank value of each node
Forgetting rate λ           1/8
Threshold                   5 × 10⁻⁷
Table 4: Procedure of simulation
Step 1: Construct a follower network according to the parameters of Table 2.
Step 2: Choose one node at random and change its infection state to I at time t = 1 of the simulation environment.
Step 3: Choose one node at random and change its infection state to R at time t = 16 of the simulation environment.
Step 4: Stop the simulation at time t = 48 of the simulation environment.

Table 2: Parameters for follower network
Number of nodes                                   100,000
Expectation of numbers of degree                  upper limit = 3000, lower limit = 10, Pareto index = 0.5
Expectation of possibility of having follower     upper limit = 15.0, lower limit = 0.05, Pareto index = 0.5
5. EXPERIMENT
In this section, we describe the experiment for confirming
the validity of our proposed model.
5.1 Experiment outline
We use a simulator that implements our model and reproduce an actual false rumor with it. The false rumor we target is the one diffused just after the East Japan great earthquake disaster about the disastrous fire that occurred at the Chiba refinery of Cosmo Oil Co., Ltd. in Chiba prefecture, Japan. At that time, the false rumor "a toxic substance contained in clouds that come from the Cosmo Oil explosion will fall with the rain" spread as a chain mail and was then posted to Twitter, so the false rumor spread to many users. The diffusion of this false rumor lasted 48 hours (from 18:00 on 11 March to 18:00 on 13 March). In this simulation, to take the user's life pattern into account, we set one simulation step to one hour of real time.
The conditions of the simulation follow the literature [2]. The simulation procedure is shown in Table 4, the setup of the network used in the simulation is shown in Table 2, and the setup of the parameters used within the model is shown in Table 3.
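As an illustration of the bounded Pareto quantities listed in Table 2, the following Python sketch samples per-node values by inverse-transform sampling. The exact network-construction procedure follows [2] and is not reproduced here; only the sampling step is sketched.

import random

def bounded_pareto(low, high, alpha, rng=random):
    """Sample from a Pareto distribution truncated to [low, high].

    Inverse-transform sampling of the truncated Pareto CDF; `alpha`
    corresponds to the "Pareto index" in Table 2.
    """
    u = rng.random()
    la, ha = low ** alpha, high ** alpha
    return (-(u * ha - u * la - ha) / (ha * la)) ** (-1.0 / alpha)

def sample_follower_quantities(n_nodes=100_000, rng=random):
    """Draw the per-node quantities of Table 2 (illustrative only)."""
    degree = [bounded_pareto(10, 3000, 0.5, rng) for _ in range(n_nodes)]
    follow_prob = [bounded_pareto(0.05, 15.0, 0.5, rng) for _ in range(n_nodes)]
    return degree, follow_prob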
We run the simulation 5,000 times and report the run with the smallest "Distance" among the 5,000 trials ("Distance" is described in the next section).
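The selection of the best run can be sketched as follows; run_once and distance_to_real are placeholders for one 48-step simulation and for the Distance indicator of the next section.

def best_of_runs(run_once, distance_to_real, n_runs=5000):
    """Repeat the simulation and keep the run closest to the real data.

    run_once         : callable executing one 48-step simulation and
                       returning the per-step state rates
    distance_to_real : callable implementing the Distance of Section 5.2
    """
    best_result, best_distance = None, float("inf")
    for _ in range(n_runs):
        result = run_once()
        d = distance_to_real(result)
        if d < best_distance:
            best_result, best_distance = result, d
    return best_result, best_distance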
5.2 Evaluation Methods
We describe the evaluation methods of our model. We use two indicators, "Distance" and "Infection Rate."
• Distance
The simulation results are the numbers of agents in each state at each simulation step, and they are compared against real data¹. The real data were obtained as follows: the contents of tweets posted during the earthquake disaster were analyzed, and the number of users in each infection state at each time was counted. Both the real data and the simulation results were then processed in the same way: the total count over all states in each step is used as a denominator, and the rate of each state is calculated.
The Euclidean distance is used to compare a simulation result with the real data. The quantity to be calculated is the difference between the real data and the simulation result for each step and each state, and the sum of these differences is then computed. If the total distance is close to 0, the real data and the simulation result are similar.
The calculation method is described below. Suppose that there are two vectors X = {x1, x2, . . . , xn} and Y = {y1, y2, . . . , yn}, whose elements are the state rates already calculated above.
D = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2 + \cdots + (x_n - y_n)^2} = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}    (2)
Note that one step of the simulation corresponds to one hour. However, the actual data were aggregated every 15 minutes; therefore, each simulation step is compared with the four corresponding actual data points.
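A Python sketch of this distance computation is given below, assuming the real data have already been expanded so that each simulation step lines up with its four 15-minute data points as noted above.

import math

def state_distance(sim_rates, real_rates):
    """Euclidean distance of formula (2) between two rate sequences.

    Both arguments are flat sequences of state rates, one entry per
    (step, state) pair, aligned with each other.
    """
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(sim_rates, real_rates)))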
¹ Real tweets collected before and after the East Japan earthquake disaster by Toriumi et al. from March 11 to 24, 2011 [9]. Okada et al. extracted the tweets related to false rumors and their corrections.
Figure 3: Simulation result and real data
Table 6: Infection Rate (simulation result and real data)
                   Real Data    Simulation Result
False Rumor        0.05         0.031
Corrected Rumor    0.347        0.076
• Infection Rate
The infection rate of the false rumor is found by analyzing the actual data. The "Infection Rate" represents the probability that a user is infected by the false rumor. We compare the actual infection rate with the infection rate of our simulation to measure the validity of the model. The actual infection rate was calculated in our previous work [2]. However, there are missing data in the actual data, so they are not perfect and the infection rate cannot be calculated exactly. In addition, the scale of the simulated network is smaller than that of the actual follower network. For these reasons, we regard the model as valid as long as the simulated infection rate is not extremely greater than the actual infection rate.
5.3 Experimental Results
The results of performing the simulation 5,000 times with the above setup are described below. Figure 3 shows the simulation results for the diffusion of the Cosmo Oil false rumor. First, from Figure 3, the reduction rate of "Outsider" resembles that of the actual false rumor. The increase of the agents who spread the false rumor ("False Rumor Sender") is slightly faster than in the actual data, and the increase of the agents who spread the corrected information ("Corrected Rumor Sender") is a little slower than in the actual data. However, the state changes generally conform to the actual data. The distances in this case are shown in Table 5.

Table 5: Distance
Outsider    Sender(False Rumor)    Sender(Corrected Rumor)    Average
1.613       0.408                  1.585                      1.202

Next, Table 6 shows the infection rates in this case. From this table, the infection rate of the false rumor is close to the real-data value. The infection rate of the corrected rumor is lower than the actual data; however, because the simulated value is lower than, rather than extremely greater than, the actual data, we judge the two infection rates to be acceptably similar. From these results, the spread of an actual false rumor can be generally reproduced using our model.
6. CONCLUSION
In the East Japan great earthquake disaster, the diffusion of false rumors became a major problem. In order to eliminate the damage caused by false rumors, their diffusion must be suppressed. For this purpose, it is necessary to clarify the diffusion mechanism of information. In this paper, we proposed a novel information diffusion model to reveal the information diffusion mechanism on Twitter.
Our model considers four elements: "State Transition", "Multiplexing of information path", "Life Pattern", and "User's diversity". Therefore, this model can express the diffusion phenomenon more finely. We reproduced the spread of an actual false rumor to evaluate our model. As a result, the actual false rumor could be reproduced using our model, and the validity of the model was demonstrated.
As future work, we will verify whether this model can also be applied to other false rumors. Finally, we will propose a diffusion control method.
REFERENCES
[1] Ministry of Internal Affairs and Communications,
Japan, “WHITE PAPER 2011,”
http://www.soumu.go.jp/johotsusintokei/whitepaper
/eng/WP2011/2011-index.html, 2011
[2] Yoshiyuki Okada, Keisuke Ikeda, Masayuki Numao,
Fujio Toriumi, Takeshi Sakaki, Kousuke Shinoda,
Kazuhiro Kazama, Itsuki Noda, and Satoshi Okada,
“SIR-Extended Information Diffusion Model of False
Rumor and its Prevention Strategy for Twitter”,
Journal of Advanced Computational Intelligence &
Intelligent Informatics, Vol. 18, No. 4, pp. 598-607, 2014.
[3] Kermack, William O., and Anderson G. McKendrick.
“A contribution to the mathematical theory of
epidemics.” Proceedings of the Royal Society of
London A: mathematical, physical and engineering
sciences. Vol. 115. No. 772. The Royal Society, 1927.
[4] Serrano, Emilio, Carlos Ángel Iglesias, and Mercedes
Garijo. “A Novel Agent-Based Rumor Spreading
Model in Twitter.” Proceedings of the 24th
International Conference on World Wide Web
Companion. International World Wide Web
Conferences Steering Committee, 2015.
[5] Asako MIURA, “Social Psychology of Online
Communication on 3.11 Disasters in Japan(3.
Diversification of Distribution Means of Disaster
Information,<Special Issue>Disaster Recovery
Activities from the Great East Japan Earthquake and
Teachings Obtained from the Disaster),”the Journal of
IEICE, Vol.95 No.3, pp.219-223, 2012 (In Japanese)
[6] Susumu Takeuchi, Junzo Kamahara, Shinji Shimojo,
and Hideo Miyahara. “Human-network-based filtering:
the information propagation model based on
word-of-mouth communication.” Applications and the
Internet, 2003. Proceedings. 2003 Symposium on.
IEEE, 2003.
[7] Shahzad, Basit, and Esam Alwagait. ”Best and the
Worst Times to Tweet: An Experimental Study.”
WSEAS, 15th International Conference on
Mathematics and Computers in Business and
Economics (MCBE’14), Proceedings of 15th
International Conference on Mathematics and
Computers in Business and Economics. 2014.
[8] Hiroto Endo, and Masato Noto. “A word-of-mouth
information recommender system considering
information reliability and user preferences.” Systems,
Man and Cybernetics, 2003. IEEE International
Conference on. Vol. 3. IEEE, 2003.
[9] Fujio Toriumi, Kosuke Shinoda, Satoshi Kurihara,
Takeshi Sakaki, Kazuhiro Kazama, and Itsuki Noda:
Disaster Changes the Social Media, JWEIN11,
pp.41-46, 2011. (In Japanese)