Automatic extraction and identification of chart patterns towards financial forecast
Applied Soft Computing 7 (2007) 1197–1208
www.elsevier.com/locate/asoc
James N.K. Liu *, Raymond W.M. Kwong
Department of Computing, The Hong Kong Polytechnic University, Hong Kong
Available online 20 March 2006
Abstract
Technical analysis of stocks mainly focuses on the study of irregularities, which is a non-trivial task. Because one time scale alone cannot be applied to all analytical processes, the identification of typical patterns in a stock requires considerable knowledge and experience of the stock market. It is also important for predicting stock market trends and turns. The last two decades have seen attempts to solve such non-linear financial forecasting problems using AI technologies such as neural networks, fuzzy logic, genetic algorithms and expert systems, but these, although promising, lack explanatory power or are dependent on domain experts. This paper presents an algorithm, PXtract, to automate the recognition process of possible irregularities underlying the time series of stock data. It makes dynamic use of different time windows, and exploits the potential of wavelet multi-resolution analysis and radial basis function neural networks for the matching and identification of these irregularities. The study provides room for case establishment and interpretation, which are both important in investment decision making.
© 2006 Elsevier B.V. All rights reserved.
Keywords: Forecasting; Wavelet analysis; Neural networks; Radial basis function network; Chart pattern extraction; Stock forecasting; CBR
1. Introduction
According to the efficient market theory, it is practically
impossible to infer a fixed long-term global forecasting model
from historical stock market information. It is said that if the
market presents some irregularities, someone will take advantage
of them and this will cause the irregularities to disappear. But this
does not exclude the possibility that hidden short-term local conditional
irregularities may exist; this means that we can still take
advantage of the market if we have a system which can identify
the hidden underlying short-term irregularities when they occur.
The behavior of these irregularities is mostly non-linear amid the
many uncertainties inherent in the real world. In general, the
response to these irregularities follows the golden rule for most
investors: "buy low, sell high". If one foresees that the
stock prices will have a certain degree of upward movement, one
will buy the stocks. In contrast, if one foresees that a certain
degree of drop will happen, one will sell the stocks on hand. This
gives rise to the problems of which irregularities we should focus on,
which forecasting techniques we can deploy, which effective indicators we
can assemble, and which data and features we can select to
facilitate the modeling and making of sound investment decisions.
* Corresponding author.
E-mail addresses: [email protected] (James N.K. Liu),
[email protected] (Raymond W.M. Kwong).
1568-4946/$ – see front matter © 2006 Elsevier B.V. All rights reserved.
doi:10.1016/j.asoc.2006.01.007
Since the late 1980s, advances in technology have allowed
researchers in finance and investment to solve non-linear
financial forecasting problems using artificial intelligence
technologies including neural networks [1–4], fuzzy logic [5–
7], genetic algorithms and expert systems [8]. These methods
have all shown promise, but each has its own advantages and
disadvantages. Neural networks and genetic algorithms have
produced promisingly accurate and robust predictions, yet they
lack explanatory power and investors show little confidence in
their recommendations. Expert systems and fuzzy logic provide
users with explanations but usually require experts to set up the
domain knowledge. Last but not least, none of these expert
systems can learn.
In this paper we introduce an algorithm, PXtract, to automate
the recognition process of possible irregularities underlying the
time series of stock data. It makes dynamic use of different time
windows, and exploits the potential of wavelet multi-resolution
analysis and radial basis function neural networks for
the matching and identification of these irregularities.
2. Related work
Many financial researchers believe that there are some
hidden indicators and patterns underlying stocks [9]. Weinstein
[10] found that every stock has its own characteristics. Stocks
mainly fall into five categories: finance, utilities, properties,
commercial/industrial, and technology. The price movements of
stocks in different categories depend on different factors, and it
is difficult to identify which factors will affect a particular stock's
price movement. To address the problem, we explored the use of
genetic algorithm to provide a dynamic mechanism for selecting
appropriate factors from available fundamental data and
technical indicators [11]. Our investigation of the HK stock
market included potential parameters in fundamental data such
as daily high, daily low, daily opening, daily closing, daily
turnover, gold price, oil price, HK/US dollar exchange rate, HK
deposit call, HK interbank call, HK prime rate, silver price, and
Hang Seng index comprising 33 stocks from the said five
categories. The aggregate market capitalization of these stocks
accounts for about 79% of the total market capitalization on The
Stock Exchange of Hong Kong Limited (SEHK).
On the other hand, for the technical indicators, we examined
the influences of popular indicators such as the relative strength
index (RSI), moving average (MA), stochastic and Bollinger
bands, prices/index movements, time lags and several data
transformations [12,13]. Each of these indicators provides
guidance for investors to analyze the trend of stock price
movements. In particular, the RSI is quite useful to technical
analysts in chart interpretation. The theoretical basis of the
relative strength index is the concept of momentum. A
momentum oscillator is used to measure the velocity or rate of
change of price over time. It is essentially a short-term trading
indicator and is also quite effective at extracting price information
in a non-trending market. In short, the total number of
potential inputs tested was 57 [11]. We applied GAs to
determine which input parameters are optimal for modeling
different stocks in Hong Kong. The fitness value of a
chromosome in the genetic algorithm was the classification rate
of the neural network, calculated by counting how
many days the network's output matched the derived "best
strategy". We defined the best strategy at trading time t as:
$$\text{best strategy} = \begin{cases} \text{buy} & \text{if } \dfrac{\text{price}(t+1) - \text{price}(t)}{\text{price}(t)} > z\% \\[6pt] \text{sell} & \text{if } \dfrac{\text{price}(t+1) - \text{price}(t)}{\text{price}(t)} < -z\% \\[6pt] \text{hold} & \text{otherwise} \end{cases}$$

where z is the decision threshold, and the output of the network
is encoded as 1, 0, and −1 corresponding to the suggested
investment strategies 'buy', 'hold' and 'sell', respectively. We
observed that the daily closing price and its transformation
were the most sensitive input parameters for the stock forecast.
In contrast, technical indicators such as RSI and MA were not
critical in those experiments. As such, we feel confident to
concentrate on the investigation of the closing price movements
for possible trends and irregularities. This will be the subject of
chart pattern analysis below.
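The best-strategy labelling described above is straightforward to express in code. The following sketch (function and variable names are ours, not from the paper) labels each trading day from the next-day relative price change and a decision threshold z:

```python
def best_strategy(prices, t, z=0.02):
    """Label trading day t as 'buy', 'sell' or 'hold' from the relative
    next-day price change and a decision threshold z (here 2%)."""
    change = (prices[t + 1] - prices[t]) / prices[t]
    if change > z:
        return "buy"       # expected rise beyond threshold -> encoded as 1
    if change < -z:
        return "sell"      # expected drop beyond threshold -> encoded as -1
    return "hold"          # otherwise -> encoded as 0

# Example: label a short price series with a 2% threshold
prices = [100.0, 103.0, 101.0, 101.5]
labels = [best_strategy(prices, t) for t in range(len(prices) - 1)]
# labels == ['buy', 'hold', 'hold']
```

The GA fitness of a chromosome would then be the fraction of days on which the network's encoded output agrees with these labels.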
3. Wave pattern identification
According to Thomas [14], there are up to 47 different chart
patterns, which can be identified in stock price charts. These
chart patterns play a very important role in technical analysis
with different chart patterns revealing different market trends.
For example, a head-and-shoulders top chart pattern reveals
that the market will most likely have a 20–30% rise in the
near future. Successfully identifying the chart pattern is said
to be the crucial step towards the win. Fig. 1 shows 16 samples
of typical chart patterns.
However, the analysis and identification of wave patterns is
difficult for two reasons. Firstly, there exists no single time
scale that works for all analytical purposes. Secondly, any stock
chart may exhibit countless different pattern combinations,
some containing sub-patterns, and choosing the most representative
one presents quite a dilemma. Furthermore, there are no readily
available reports of research on automating the process of
identifying chart patterns. We address this problem using the
following algorithm.
3.1. The PXtract algorithm
The PXtract algorithm extracts wave patterns from stock
price charts based on the following phases:
3.1.1. Window size phase
As there is hardly a single time scale that works for all
analytical purposes in a wave identification process [2,29], a set
of time window sizes $W = \{w_1, w_2, \ldots, w_n\}$ with $w_1 > w_2 > \cdots > w_n$
is defined, where $w_i$ is a window size and $1 \le i \le n$.
Different window sizes are used to determine whether a wave
pattern occurs in a specific time range. For example, in a short-term
investment strategy, a possible set of window sizes can be defined
as $w_i \in W = \{40, 39, \ldots, 10\}$.
3.1.2. Time subset generation phase
Stock price trading data contain a set of time data $T = \{t_1, t_2, \ldots, t_n\}$
with $t_1 > t_2 > \cdots > t_n$. For a given time window size $w_i$, $T$
is divided into a temporary subset $T'$. A set $P$ is also
defined, where $P \subseteq T$; it contains the time ranges in which
previously identified wave patterns have occurred. Set $P$ is empty
($\emptyset$) at the beginning.

It is said that any large change in a trend plays a more
important role in the prediction process [13]. A range which
has previously been discovered to contain a wave pattern will
not be tested again (i.e., if $T' \subseteq P$, tests will not be carried out).
Details of the time subset $T'$ generation process are shown in
Fig. 2.

For example, if T = {10 Jan, 9 Jan, 8 Jan, 7 Jan, 6 Jan, 5 Jan, 4
Jan, 3 Jan, 2 Jan, 1 Jan}, the current testing window size is 3
($w = 3$), and P = {9 Jan, 8 Jan, 7 Jan, 6 Jan}, then after the time
subset generation process, $T'$ = {(5 Jan, 4 Jan, 3 Jan), (4 Jan, 3
Jan, 2 Jan), (3 Jan, 2 Jan, 1 Jan)}.
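The subset generation step can be sketched as follows. This is our own reading of the worked example (note that, per the example, windows that even partially overlap P are skipped); the snake-case name gen_set mirrors the genSet function of the paper's Fig. 3:

```python
def gen_set(T, w, P):
    """Generate candidate time windows of size w from the (descending)
    time list T, skipping any window that overlaps the set P of ranges
    already known to contain a wave pattern."""
    P = set(P)
    subsets = []
    for start in range(len(T) - w + 1):
        window = tuple(T[start:start + w])
        if not P.intersection(window):   # skip previously identified ranges
            subsets.append(window)
    return subsets

T = ["10 Jan", "9 Jan", "8 Jan", "7 Jan", "6 Jan",
     "5 Jan", "4 Jan", "3 Jan", "2 Jan", "1 Jan"]
P = {"9 Jan", "8 Jan", "7 Jan", "6 Jan"}
# gen_set(T, 3, P) reproduces the example:
# [('5 Jan', '4 Jan', '3 Jan'), ('4 Jan', '3 Jan', '2 Jan'), ('3 Jan', '2 Jan', '1 Jan')]
```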
3.1.3. Pattern recognition
For a given time set $T'' \subseteq T'$, apply wavelet theory
to identify the desired sequences. If a predefined wave pattern is
discovered, add $T''$ to $P$. Details are described below.
The proposed algorithm PXtract is given in Fig. 3. The
function genSet(wi ) is the subset generation process discussed
Fig. 1. Samples of typical chart patterns [14].
earlier. At the end of the algorithm, all the time information of
the identified wave pattern is stored in set P.
Pattern matching can be carried out using simple multi-resolution
(MR) matching or radial basis function neural
network (RBFNN) matching. Details of the wavelet recognition
and simple MR matching can be found in our previous work
[15].
4. Wavelet recognition and matching

Wavelet analysis is a relatively recent development of applied
mathematics, dating from the 1980s. It has since been applied widely, with
encouraging results, in signal processing, image processing and
pattern recognition [16]. As the waves in stock charts are 1D
patterns, no transformation from a higher dimension to 1D is
needed. In general, wavelet analysis involves the use of a
univariate function $\psi$, defined on $\mathbb{R}$, which, when subjected to the
fundamental operations of shifts and dyadic dilation, yields
an orthogonal basis of $L^2(\mathbb{R})$.

The orthonormal basis of compactly supported wavelets of
$L^2(\mathbb{R})$ is formed by the dilation and translation of a single
function $\psi(x)$:

$$\psi_{j,k}(x) = 2^{j/2}\,\psi(2^j x - k)$$

where $j, k \in \mathbb{Z}$. Vanishing moments means that the basis functions
are chosen to be orthogonal to low-degree polynomials. It is said that
a function $\varphi(x)$ has a vanishing $k$th moment at point $t_0$ if the
following equality holds, with the integral converging absolutely:

$$\int (t - t_0)^k\,\varphi(t)\,dt = 0$$

Fig. 2. Time subset generation.

Fig. 3. Algorithm PXtract.
The function $\psi(x)$ has a companion, the scaling function $\phi(x)$,
and these functions satisfy the following relations:

$$\phi(x) = \sqrt{2}\sum_{k=0}^{L-1} h_k\,\phi(2x-k)$$

$$\psi(x) = \sqrt{2}\sum_{k=0}^{L-1} g_k\,\phi(2x-k)$$

$$g_k = (-1)^k h_{L-k-1}, \qquad k = 0, \ldots, L-1$$

$$\int_{-\infty}^{+\infty} \phi(x)\,dx = 1$$

where $h_k$ and $g_k$ are the low- and high-pass filter coefficients,
respectively; $L$ is related to the number of vanishing moments $k$
and is always even. For example, $L = 2k$ in the Daubechies
wavelets.

The filter coefficients are assumed to satisfy the orthogonality
relations:

$$\sum_n h_n h_{n+2j} = \delta(j), \qquad \sum_n h_n g_{n+2j} = 0$$

for all $j$, where $\delta(0) = 1$ and $\delta(j) = 0$ for $j \neq 0$.

4.1. Multi-resolution analysis

Multi-resolution analysis (MRA) was formulated based on the
study of orthonormal, compactly supported wavelet bases [17].
The wavelet basis induces a MRA on $L^2(\mathbb{R})$, the decomposition of
the Hilbert space $L^2(\mathbb{R})$ into a chain of closed sub-spaces

$$\cdots \subset V_4 \subset V_3 \subset V_2 \subset V_1 \subset V_0$$

such that

$$\bigcap_{j \in \mathbb{Z}} V_j = \{0\} \quad \text{and} \quad \overline{\bigcup_{j \in \mathbb{Z}} V_j} = L^2(\mathbb{R})$$

$$f(x) \in V_j \Leftrightarrow f(2x) \in V_{j+1}$$

$$f(x) \in V_0 \Rightarrow f(x-k) \in V_0$$

$$\exists\,\phi \in V_0 \text{ such that } \{\phi(x-k)\}_{k \in \mathbb{Z}} \text{ is an orthogonal basis of } V_0$$

In pattern recognition, a 1D pattern $f(x)$ can always be
viewed as a signal of finite energy, i.e.,

$$\int_{-\infty}^{+\infty} |f(x)|^2\,dx < +\infty$$

which is mathematically equivalent to $f(x) \in L^2(\mathbb{R})$. It means that
MRA can be applied to the function $f(x)$ to decompose it
in the $L^2(\mathbb{R})$ space. In MRA, a closed sub-space $V_j$ can be
decomposed orthogonally as:

$$V_j = V_{j+1} \oplus W_{j+1} \tag{1}$$

$V_{j+1}$ contains the low-frequency signal component of $V_j$ and $W_{j+1}$
contains the high-frequency signal component of $V_j$. According
to the wavelet orthonormal decomposition shown in Eq. (1), $V_j$ is
first decomposed orthogonally into a high-frequency sub-space
$W_{j+1}$ and a low-frequency sub-space $V_{j+1}$. The low-frequency
sub-space $V_{j+1}$ is further decomposed into $V_{j+2}$ and $W_{j+2}$, and
the process can be continued. This wavelet orthonormal decomposition
can be represented by

$$V_j = W_{j+1} \oplus V_{j+1} = W_{j+1} \oplus W_{j+2} \oplus V_{j+2} = W_{j+1} \oplus W_{j+2} \oplus W_{j+3} \oplus V_{j+3} = \cdots$$

According to Tang et al. [16], projective operators $A_j$ and $D_j$
are defined as:

$$A_j : L^2(\mathbb{R}) \to V_j \quad \text{(projective operator from } L^2(\mathbb{R}) \text{ to } V_j\text{)}$$

$$D_j : L^2(\mathbb{R}) \to W_j \quad \text{(projective operator from } L^2(\mathbb{R}) \text{ to } W_j\text{)}$$

Since $f(x) \in V_j \subset L^2(\mathbb{R})$:

$$f(x) = A_j f(x) = \sum_{k \in \mathbb{Z}} c_{j,k}\,\phi_{j,k}(x) = A_{j+1} f(x) + D_{j+1} f(x) = \sum_{m \in \mathbb{Z}} c_{j+1,m}\,\phi_{j+1,m}(x) + \sum_{m \in \mathbb{Z}} d_{j+1,m}\,\psi_{j+1,m}(x)$$

Also, Tang et al. [16] have proved the following equations:

$$c_{j+1,m} = \sum_k h_k\,c_{j,k+2m} \tag{2}$$

$$d_{j+1,m} = \sum_k g_k\,c_{j,k+2m} \tag{3}$$

According to the wavelet orthonormal decomposition shown
in Eq. (1), the original signal $V_0$ can be decomposed orthogonally
into a high-frequency sub-space $W_1$ and a low-frequency
sub-space $V_1$ by using the wavelet transform Eqs. (2)
and (3). In the chart pattern recognition process, $V_0$ is
the original wave pattern, while $V_1$ and $W_1$ are the
wavelet-transformed sub-patterns.
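A minimal numeric sketch of one level of the decomposition in Eqs. (2) and (3) is given below, using the Haar filter; the periodic boundary handling is our simplification and the function names are ours:

```python
import math

def decompose(c, h):
    """One level of orthonormal wavelet decomposition, per Eqs. (2) and (3):
        c_{j+1,m} = sum_k h_k c_{j,k+2m}   (low-frequency part, V_{j+1})
        d_{j+1,m} = sum_k g_k c_{j,k+2m}   (high-frequency part, W_{j+1})
    with the high-pass filter derived as g_k = (-1)^k h_{L-k-1}."""
    L = len(h)
    g = [(-1) ** k * h[L - k - 1] for k in range(L)]
    n = len(c) // 2
    low = [sum(h[k] * c[(k + 2 * m) % len(c)] for k in range(L)) for m in range(n)]
    high = [sum(g[k] * c[(k + 2 * m) % len(c)] for k in range(L)) for m in range(n)]
    return low, high

# Haar filter (L = 2); a constant signal has no high-frequency content,
# so its detail coefficients are all zero.
h = [1 / math.sqrt(2), 1 / math.sqrt(2)]
low, high = decompose([5.0, 5.0, 5.0, 5.0], h)
# high == [0.0, 0.0]; low is the smoothed (approximation) signal
```

Applying `decompose` repeatedly to the `low` output yields the coarser resolution levels used in the matching process.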
If we want to determine whether the current data match a
predefined chart pattern, a template of the chart pattern is
needed. Because the input data are noisy, directly comparing the
data with the template may lead to incorrect results.
Therefore, wavelet decomposition should be applied to both
the input data and the template. An example of matching the input
data to a "head-and-shoulders, top" pattern is illustrated in
Fig. 4.
We can match sub-patterns over a range of coarse-to-fine
scales, matching the input data against features in the
pattern template. The matching process terminates only
when the target is accepted or rejected; if the result is
undetermined, it continues at the next, finer scale. The
coarse-scale coefficients obtained from the low-pass filter
represent the global features of the signal.

For a high-resolution scale, the intraclass variance will be
larger than for a low-resolution scale. A threshold scale
should be defined to determine the acceptance level. For
example, scale n is defined as the lowest resolution, and the
resolution threshold is t with t > n. At each resolution up to t, the
root-mean-square should be greater than another threshold
Fig. 4. Wavelet decomposition in both input data and chart pattern template.
value λ, called the level threshold. It is difficult to
derive optimal thresholds; therefore, we determine
them through empirical testing. Fig. 5 illustrates the details of
the process.
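The coarse-to-fine matching loop can be sketched as follows, using the MAPE similarity measure adopted for simple MR matching in Section 6; the accept/reject thresholds here are illustrative placeholders, not the empirically tuned values:

```python
def mape(a, b):
    """Mean absolute percentage error between two coefficient sequences
    (points where the template is zero are skipped)."""
    terms = [abs(x - y) / abs(y) for x, y in zip(a, b) if y != 0]
    return sum(terms) / len(terms) if terms else 0.0

def mr_match(input_levels, template_levels, accept=0.05, reject=0.5):
    """Coarse-to-fine multi-resolution matching sketch: compare wavelet
    coefficients level by level (coarsest first); stop as soon as the
    pattern is clearly accepted or rejected, otherwise continue at the
    next, finer scale."""
    for inp, tpl in zip(input_levels, template_levels):
        err = mape(inp, tpl)
        if err <= accept:
            return "accepted"
        if err >= reject:
            return "rejected"
        # undetermined: move on to the finer scale
    return "undetermined"
```

For example, an input whose coarse coefficients already match the template is accepted immediately, while a grossly different input is rejected without descending to finer scales.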
4.2. Radial basis function neural network (RBFNN)
Neural networks are widely used to provide non-linear
forecasts [18–20] and have been found to be good in pattern
recognition and classification problems. The radial basis function
neural network (RBFNN), whose universal approximation capabilities
have been proven by Park and Sandberg [21,22], is
suitable for solving our pattern/signal matching problem [23].
We have created different RBFNNs for recognizing different
patterns at different resolution levels. The input to the network
is the wavelet-transformed values at a particular resolution.
As shown in Fig. 6, a typical network consists of three layers.
The first layer is the input layer, which has two portions: (1) past
network outputs that are fed back to the network; (2) major
correlated variables that are concerned with the prediction
problem. Past network outputs enter the network by means
of a time-delay unit as the first inputs. These outputs are also
scaled by a decay factor $\gamma = \alpha e^{-\lambda k}$, where $\lambda$ is the decay
constant, $\alpha$ is the normalization constant, and $k$ is the forecast
horizon. In general, the time series prediction task of the proposed
network is to predict the outcome of the sequence, $x1_{t+k}$, at
time $t + k$, based on the past observation sequence of size
$n$, i.e. $x1_t, x1_{t-1}, x1_{t-2}, x1_{t-3}, \ldots, x1_{t-n+1}$, and on the major
variables that influence the outcome of the time series at time $t$.
The numbers of input nodes in the first and second portions are
set to $n$ and $m$, respectively. The number of hidden nodes is set
to $p$. The predictive steps are set to $k$, so the number of output
nodes is $k$. At time $t$, the inputs will be $[x1_t, x1_{t-1}, x1_{t-2}, x1_{t-3}, \ldots, x1_{t-n+1}]$
and $[x2_1, x2_2, \ldots, x2_m]$, respectively. The output is given
by $x_{t+k}$, denoted by $p^k_t$ for simplicity; $w^t_{ij}$ denotes the connection
weight between the $i$th node and the $j$th node at time $t$.
To simplify the network, the choice of the centers of the
Gaussian functions is determined by the K-means algorithm
[24]. The variances of the Gaussians are chosen to be equal to
the mean distance of every Gaussian center from its
neighboring Gaussian centers. A constructive learning
approach is used to select the number of hidden units in the
RBFNN. The hidden nodes are created one at a time: during each
iteration we add one hidden node and check the new
network's error. This procedure is repeated until the error
goal is met, or until the preset maximum number of hidden
nodes is reached. Normally, the preset maximum number of
hidden nodes should be less than the total number of input
patterns. Needing fewer hidden nodes indicates that the
network generalizes well, though it may not be accurate enough.
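A compact sketch of such an RBFNN follows. This is our own simplified 1D implementation: centers from K-means, widths set to the mean distance to neighbouring centers, and output weights solved by least squares rather than by the constructive procedure described above:

```python
import numpy as np

def rbf_design(x, centers, widths):
    """Gaussian RBF activations for 1D inputs x given centers and widths."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * widths[None, :] ** 2))

def fit_rbf(x, y, k):
    """Minimal RBFNN fit: K-means centers, neighbour-distance widths,
    linear output weights by least squares."""
    # Simple 1D K-means, initialised evenly over the data range
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(20):
        assign = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            pts = x[assign == j]
            if len(pts):
                centers[j] = pts.mean()
    centers = np.sort(centers)
    # Width of each Gaussian = mean distance to its neighbouring centers
    gaps = np.diff(centers)
    widths = np.empty(k)
    widths[0], widths[-1] = gaps[0], gaps[-1]
    if k > 2:
        widths[1:-1] = (gaps[:-1] + gaps[1:]) / 2
    widths = np.maximum(widths, 1e-6)
    # Output weights by linear least squares
    Phi = rbf_design(x, centers, widths)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, widths, w

def predict(x, centers, widths, w):
    return rbf_design(np.asarray(x, dtype=float), centers, widths) @ w

# Fit a smooth 1D signal with 5 Gaussian units
x = np.linspace(0, 2 * np.pi, 60)
y = np.sin(x)
centers, widths, w = fit_rbf(x, y, 5)
```

In the paper's setting, `x`/`y` would be replaced by wavelet-transformed pattern coefficients and class targets, and hidden units would be added one at a time until the error goal is met.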
5. Training set collections
Stock chart pattern identification is highly subjective and
humans are far better than machines at recognizing stock
patterns, which are meaningful to investors. Moreover,
Fig. 5. The multi-resolution matching.
Fig. 6. Schematic diagram of a typical RBFNN.
extracting chart patterns in the stock time series data is a
time-consuming and expensive operation. We have examined five
typical stocks for the period 1 January 1995 to 31 December
2001 (see Table 1). A summary of the total numbers of real
training data for fourteen different chart patterns is shown in
Table 2. The training set of the chart patterns is collected from
the real and deformed data described below, based on the judgment
of a human critic following the rules suggested by Thomas [14].
The training set contains 308 records in total. A
quarter of the training set is extracted as the validation set. We
set the wavelet resolution to 8. We found that the signals/patterns
at resolution levels 1–3 were too smooth and too similar to each
other, so the network was not able to distinguish different patterns
at those levels. Therefore, only four RBFNNs were created for
training on different chart patterns, at resolution levels 4–7. The
performance of the networks at different resolution levels and
the classification results are shown in Section 6.
In our training set, the initial quantity of real data is insufficient
for training the system well, and manually extracting over 200 chart
patterns from the time series data would be infeasible, time-consuming
and expensive. In order to expand the training set,
we use a simple but powerful mechanism to generate more
training data based on the real data.
To generate more training samples, a radial deformation
method is introduced. Here are the major steps of the radial
deformation process:
(a) $P = \{p_1, p_2, p_3, \ldots, p_n\}$ is a set of data points containing a
chart pattern.
(b) Randomly pick $i$ points ($i \le n$) in set $P$ for deformation.
(c) Randomly generate a set of radial deformation distances
$D = \{d_1, d_2, \ldots, d_i\}$.
(d) For each picked point, a random step $d_r$ is taken in a random
direction. The deformed pattern is constructed by joining
consecutive points with straight lines. Details are depicted
in Fig. 7.
(e) Justify the deformed pattern using human critics.
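Deformation steps (a)-(d) can be sketched as follows; parameter names are ours, and step (e), the human judgment, is of course not automated:

```python
import random

def radial_deform(pattern, i, max_step, seed=None):
    """Sketch of the radial deformation used to enlarge the training set:
    randomly pick i of the pattern's points and move each by a random step
    (up to max_step) in a random vertical direction. The deformed pattern
    is then re-joined point-to-point with straight lines."""
    rng = random.Random(seed)
    deformed = list(pattern)
    for idx in rng.sample(range(len(pattern)), i):  # step (b): pick i points
        step = rng.uniform(0, max_step)             # step (c): distance d_r
        direction = rng.choice([-1.0, 1.0])         # step (d): random direction
        deformed[idx] = deformed[idx] + direction * step
    return deformed

base = [1.0, 2.0, 3.0, 2.0, 1.0]   # a simple peak-shaped pattern
candidate = radial_deform(base, i=2, max_step=0.2, seed=42)
# candidate differs from base at up to two points, each by at most 0.2
```

Each generated candidate would then be shown to the human critic, who accepts it only if the deformed shape still reads as the same chart pattern.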
Psychophysical studies [25] tell us that humans are better
than machines at recognizing objects, which are more
Table 1
The five different stocks and their stock IDs

Stock ID   Stock name
00341      CAFÉ DE CORAL HOLDINGS Ltd.
00293      CATHAY PACIFIC AIRWAYS Ltd.
00011      HANG SENG BANK Ltd.
00005      HSBC HOLDINGS PLC.
00016      SUN HUNG KAI PROPERTIES Ltd.

Fig. 7. Radial deformation. (a) An example of an accepted deformed pattern. (b) An example of a NOT accepted deformed pattern.
Table 2
Total numbers of training patterns in fourteen typical chart patterns of five different stocks
meaningful to humans. In assessing the generated training
data, the whole training set (including real and generated
data) is accepted and selected based on the opinion of the
human critic. In the training set, 64 chart patterns were
extracted from five different stocks in the Hong Kong stock
market.
By applying the radial deformation technique, the 64 real
training patterns were extended to a total of 308 patterns. All of
the patterns generated by radial deformation must be judged by
humans, to identify whether a human would accept the
deformed pattern or find it meaningful. Fig. 8 illustrates
examples of (a) an accepted and (b) a NOT accepted
deformed pattern. Fig. 9 shows the training set from both the
real and deformed chart patterns.
6. Experimental results
Two sets of experiments have been conducted to evaluate the
accuracy of the proposed system. The first set evaluates whether
the algorithm PXtract is scalable, and the second set compares
the performance of simple multi-resolution matching and RBFNN matching.
Algorithm PXtract uses different time window sizes to
locate any occurrence of a specific chart pattern. The major
concern is the performance of the algorithm. To assess the
relative performance of the algorithms and to investigate their
scale-up properties, we performed experiments on an IBM PC
workstation with a 500 MHz CPU and 128 MB of memory. To evaluate
the performance of the algorithm using RBFNN matching over
Table 3
Optimal wavelet and threshold settings found by empirical testing (wavelet family: Daubechies, DB2; processing time is per resolution threshold)

Resolution threshold   Level threshold   Accuracy (%)   Patterns discovered   Processing time (s)
4                      0.30               6.2           8932                   312
4                      0.20               7.1           7419                   312
4                      0.15              14.2           3936                   312
4                      0.10              43.1            543                   312
5                      0.30               7.1           7734                   931
5                      0.20               9.4           6498                   931
5                      0.15              17.4           2096                   931
5                      0.10              53.0            420                   931
6                      0.30               8.9           7146                  3143
6                      0.20              13.5           5942                  3143
6                      0.15              19.9           1873                  3143
6                      0.10              56.9            231                  3143
7                      0.30              10.5           6023                  8328
7                      0.20              14.5           5129                  8328
7                      0.15              18.5           1543                  8328
7                      0.10              48.3            194                  8328
Fig. 8. Accepted and NOT accepted deformed patterns by radial transformation.
a large range of window sizes, we used typical stock prices of
SUN HUNG KAI and Co. Ltd. (0086) for the period from 2
January 1992 to 31 December 2001.
As shown in Fig. 10, the algorithm scales linearly as the size
of the time window increases.
In the experiments on wavelet chart pattern recognition,
different wavelet families were selected as the filter. The
maximum resolution level was set to 7; the highest
resolution level, 8, is taken as the raw input. The left hand side of
Fig. 11 shows the price of the stock CATHAY PACIFIC (00293)
for the period from 7 June 1999 to 22 July 1999. This period
contains the 'Double Tops' pattern. For the identification of the
chart patterns, two matching methods were studied: simple
multi-resolution (MR) matching and RBFNN matching.
For simple MR matching, the similarity between
the input and the template is measured by the mean absolute
percentage error (MAPE); a low MAPE denotes that they are
similar. The performance of simple MR matching was tested in
experiments using different resolution thresholds t and different
level thresholds λ.
Fig. 9. Training set from both the real and deformed chart patterns.

Fig. 10. Execution time of algorithm PXtract using RBFNN matching under different time window sizes.

Table 3 shows the most accurate combinations. We note
that simple MR matching is not accurate, with an average
recognition rate of just 30%. Furthermore, the calculation of
MAPE between the input data and the pattern templates creates
a heavy workload. Although it is possible to reach a recognition
rate of more than 50% if we set the level threshold to a low value
(about 0.1) and the resolution threshold to a high value (above 6),
the processing time is then unacceptably long (about 3143 s).
This illustrates that simple MR matching is not a good choice
for the matching process.
Table 4 illustrates the overall classification results. It shows
that the classification rate is over 90% and the optimal
recognition resolution level is 6. Four wavelet families were
tested, and their performances were more or less the same,
except that the Haar wavelet was found to be unsuitable.
Fig. 11. Algorithm PXtract using wavelet multi-resolutions analysis on the pattern ‘‘double tops’’ template.
Having found the appropriate setting for the RBFNN, we
applied it to extract all the chart patterns from 10 different stocks
over the last 10 years. Table 5 shows the accuracy of the 14
different chart patterns. The RBFNN is on average 81% accurate.
Multi-resolution RBFNN matching has a high accuracy in
recognizing different chart patterns. However, the accuracy
Table 4
RBFNNs: accuracy in different wavelet families and at different resolution levels

Wavelet family     Resolution level   Training set (%)   Validation set (%)
Haar (DB1)         4                  66                 64
                   5                  75                 72
                   6                  81                 78
                   7                  87                 74
Daubechies (DB2)   4                  73                 64
                   5                  85                 78
                   6                  95                 91
                   7                  97                 85
Coiflet (C1)       4                  77                 72
                   5                  86                 81
                   6                  95                 90
                   7                  98                 84
Symmlet (S8)       4                  75                 68
                   5                  84                 78
                   6                  93                 89
                   7                  96                 82
Table 5
Accuracy of identifying the fourteen different chart patterns using RBFNN extraction methods

Chart pattern                                          Accuracy (%)
Broadening bottoms                                     73
Broadening formations, right-angled and ascending      84
Broadening formations, right-angled and descending     81
Broadening tops                                        79
Broadening wedges, ascending                           86
Bump-and-run reversal bottoms                          83
Bump-and-run reversal tops                             82
Cup with handle                                        63
Double bottoms                                         92
Double tops                                            89
Head-and-shoulders, tops                               86
Head-and-shoulders, bottoms                            87
Triangles, ascending                                   73
Triangles, descending                                  76
of the recognition process is heavily dependent on the
resolution level. Once the resolution level has been identified,
based on empirical testing, the proposed method is highly
accurate.
7. Conclusion and future work

In this paper, we examined the sensitive factors associated
with stock forecasting and stressed the importance of chart pattern
identification. We have demonstrated how to automate the
process of chart pattern extraction and recognition, which had
not been discussed in previous studies. The PXtract algorithm
provides a dynamic means of extracting all the possible
chart patterns underlying stock price charts. It is shown that
PXtract consistently achieves high accuracy with desirable
results.

Currently, we have analyzed only 14 representative chart
patterns and templates, from a total of 308 training
samples. According to Thomas [14], there are in total 47
different chart patterns that can be extracted from the time
series data. In order to complete the system, a future direction
of work will be to build templates for the remaining chart
patterns.
On the other hand, the identification and extraction of the
chart patterns enable us to establish cases for interpretation and
stock forecast. We regard these chart patterns as potentially
suitable for case representation in a CBR system. It may be
worthwhile revisiting the selection of indicators associated with
the relevant chart patterns in order to form feature vectors (e.g.
time range, RSI, OBV, price moving average, wave pattern).
We might then compare these feature vectors, $v_1, v_2 \in [a, b]$,
such that the similarity of $v_1$ and $v_2$ is computed by the
following expression:

$$\text{sim}(v_1, v_2) = 1 - \frac{|v_1 - v_2|}{b - a} \quad \text{for } b \neq a$$

For the attribute "class pattern" in the feature vector, the
similarity of the attribute between the two cases can be
measured by the following expression:

$$\text{sim}(v_1, v_2) = \begin{cases} 1 & \text{if } v_1 = v_2 \\ 0 & \text{otherwise} \end{cases}$$

The overall similarity between two cases $c_1$ and $c_2$ is
measured by the weighted-sum metric shown below:

$$\text{sim}(c_1, c_2) = \frac{\sum_{i=1}^{n} w_i\,\text{sim}(v_{1i}, v_{2i})}{\sum_{i=1}^{n} w_i}$$
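These similarity measures translate directly into code; the helper names below are ours:

```python
def sim_numeric(v1, v2, a, b):
    """Similarity of two numeric feature values on a known range [a, b]."""
    assert b != a
    return 1 - abs(v1 - v2) / (b - a)

def sim_class(v1, v2):
    """Similarity of the 'class pattern' attribute: exact match or nothing."""
    return 1.0 if v1 == v2 else 0.0

def sim_cases(sims, weights):
    """Weighted-sum similarity between two cases, given per-attribute
    similarities and their weights."""
    return sum(w * s for w, s in zip(weights, sims)) / sum(weights)

# Example: two cases described by (RSI, class pattern),
# with the class pattern weighted twice as heavily as the RSI
s1 = sim_numeric(70.0, 60.0, 0.0, 100.0)        # 0.9
s2 = sim_class("double tops", "double tops")    # 1.0
overall = sim_cases([s1, s2], [1.0, 2.0])
# overall == (1*0.9 + 2*1.0) / 3 ≈ 0.9667
```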
The system retrieves the updated stock data and converts them
into case knowledge. It then studies the new current status of the
cases and appends the new result set into the result database for
users’ direct query. The system can be set to refer to three
successive cases within a series of stock cases as one complete
CASE for the purposes of prediction. Further exploration in this
area is ongoing. Typical examples can be obtained from our
previous work [26,27]. In addition, the use of hybrid approaches
such as support vector machines with adaptive parameters (e.g.
[28]) and evolutionary fuzzy neural networks (e.g. [32]) should
help improve financial forecasting. This will be the subject of
future research.
Acknowledgement
The authors would like to acknowledge the partial support of
the Hong Kong Polytechnic University via CRG grant G-T375.
1207
References
[1] G. Zhang, B.E. Patuwo, M.Y. Hu, Forecasting with artificial neural
networks: the state of the art, Int. J. Forecasting 14 (1998) 32–62.
[2] M. Austin, C. Looney, J. Zhuo, Security market timing using neural
networks, New Rev. Appl. Expert Syst. (2000).
[3] P.K.H. Phua, X. Zhu, C.H. Koh, Forecasting stock index increments using
neural networks with trust region methods, in: Proceedings of the International Joint Conference on Neural Networks, vol. 1, 2003, pp. 260–
265.
[4] S. Heravi, D.R. Osborn, C.R. Birchenhall, Linear versus neural network
forecasts for European industrial production series, Int. J. Forecasting 20
(2004) 435–446.
[5] M. Funabashi, A. Maeda, Y. Morooka, K. Mori, Fuzzy and neural hybrid
expert systems. Synergetic AI, IEEE Expert (1997) 32–40.
[6] H.S. Ng, K.P. Lam, S.S. Lam, Incremental genetic fuzzy expert trading
system for derivatives market timing, in: Proceeding of IEEE 2003
International Conference on Computational Intelligence for Financial
Engineering, Hong Kong, 2003.
[7] M. Mohammadian, M. Kingham, An adaptive hierarchical fuzzy logic
system for modeling of financial systems, Intell. Syst. Account. Financ.
Manag. 12 (1) (2004) 61–82.
[8] K. Boris, V. Evgenii, Data Mining in Finance — Advances in Relational
and Hybrid Methods, Kluwer Academic Publishers, 2000.
[9] T. Plummer, Forecasting Financial Markets, Kogan Page Ltd., 1993.
[10] S. Weinstein, Stan Weinstein’s Secrets for Profiting in Bull and Bear
Markets, McGraw Hill, 1988.
[11] R.W.M. Kwong, Intelligent web-based agent system (iWAF) for e-finance application, MPhil Thesis, The Hong Kong Polytechnic University, 2004.
[12] E. Gately, Neural Networks for Financial Forecasting — Top techniques
for Designing and Applying the Latest Trading Systems, Wiley Trader’s
Advantage, 1996.
[13] R. Bensignor, New Thinking in Technical Analysis, Bloomberg Press,
2002.
[14] N.B. Thomas, Encyclopedia of Chart Patterns, John Wiley & Sons, 2000.
[15] J.N.K. Liu, R. Kwong, Chart patterns extraction and recognition in CBR system for financial forecasting, in: Proceedings of the IASTED International Conference ACI2002, Tokyo, Japan, 2002, pp. 227–232.
[16] Y.Y. Tang, L.H. Yang, J.N.K. Liu, H. Ma, Wavelet Theory and Its
Application to Pattern Recognition, World Scientific Publishing, River
Edge, NJ, 2000.
[17] S. Mallat, Multiresolution approximations and wavelet orthonormal bases
of L2(R), Trans. Am. Math. Soc. (1989) 69–87.
[18] R.G. Donaldson, M. Kamstra, Forecast combining with neural networks,
J. Forecasting 15 (1996) 49–61.
[19] M. Adya, F. Collopy, How effective are neural networks at forecasting and
prediction? A review and evaluation, J. Forecasting 17 (1998) 481–
495.
[20] A. Kanas, Non-linear forecasts of stock returns, J. Forecasting 22 (2003)
299–315.
[21] J. Park, I.W. Sandberg, Universal approximation using radial basis function networks, Neural Comput. 3 (1991) 246–257.
[22] J. Park, I.W. Sandberg, Approximation and radial basis function networks,
Neural Comput. 5 (1993) 305–316.
[23] F.J. Chang, J.M. Liang, Y.-C. Chen, Flood forecasting using RBF neural
networks, IEEE Trans. SMC Part C 31 (4) (2001) 530–535.
[24] J.T. Tou, R.C. Gonzalez, Pattern Recognition Principles, Addison Wesley,
Reading, MA, 1974.
[25] W.R. Uttal, T. Baruch, L. Allen, The effect of combinations of image
degradations in a discrimination task, Perception Psychophys. 57 (5)
(1995) 668–681.
[26] J.N.K. Liu, T.T.S. Leung, A web-based CBR agent for financial forecasting, in: Workshop Proceedings of the 4th International Conference on Case-Based Reasoning, Vancouver, Canada, 2001, pp. 243–253.
[27] Y. Li, S.C.K. Shiu, S.K. Pal, J.N.K. Liu, Case-base maintenance using soft computing techniques, in: Proceedings of the Second International Conference on Machine Learning and Cybernetics, Sheraton Hotel, Xi'an, China, 02–05 November 2003, pp. 1768–1773.
[28] L.J. Cao, F.E.H. Tay, Support vector machine with adaptive parameters in
financial time series forecasting, IEEE Trans. Neural Networks 14 (6)
(2003) 1506–1518.
[29] P. Blakey, Pattern recognition techniques [in stock price and volumes],
IEEE Microwave Mag. 3 (1) (2000) 28–33.
[32] L.Y. Yu, Y.-Q. Zhang, Evolutionary fuzzy neural networks for
hybrid financial prediction, IEEE Trans. SMC Part C 35 (2) (2005)
244–249.