32nd Annual International Conference of the IEEE EMBS, Buenos Aires, Argentina, August 31 – September 4, 2010

Identification of intracellular calcium dynamics in stimulated cardiomyocytes
A. Vallmitjana, M. Barriga, Z. Nenadic, A. Llach, E. Alvarez-Lacalle, L. Hove-Madsen and R. Benitez

A. Vallmitjana and R. Benitez are with the Automatic Control Department, Universitat Politecnica de Catalunya (UPC), Barcelona, Spain. [email protected], [email protected]. Z. Nenadic is with the Department of Biomedical Engineering, University of California, Irvine (USA). [email protected]. E. Alvarez-Lacalle is with the Applied Physics Department (UPC). [email protected]. L. Hove-Madsen, A. Llach and M. Barriga are with the Cardiovascular Research Center CSIC-ICCC and Cardiology Department, Hospital de Sant Pau (Barcelona, Spain). [email protected]
Abstract— We have developed an automatic method for the
analysis and identification of dynamical regimes in intracellular
calcium patterns from confocal calcium images. The method
allows the identification of different dynamical patterns such
as spatially concordant and discordant alternans, irregular
behavior or phase-locking regimes such as period doubling or
halving. The method can be applied to the analysis of different
cardiac pathologies related to anomalies at the cellular level
such as ventricular reentrant arrhythmias.
I. INTRODUCTION
There is an increasing number of studies that aim to
establish relations between clinical conditions and physiological activity at the cellular level. This kind of research
requires an interdisciplinary approach that combines knowledge and methods from different fields. Since most of the
information at the cellular level is obtained by means of
cell imaging techniques, novel image processing methods
are needed in order to analyze, quantify and classify spatial
and temporal patterns observed in life science areas such
as neuroscience or cardiology [1]. In this context, calcium
imaging is particularly relevant because calcium dynamics
is a cell regulatory mechanism that plays an important role
in many cellular processes such as muscle activation, gene
expression or fertilization [2], [3].
In this work we present an automatic image processing
method to analyze confocal calcium images of isolated
cardiac myocytes. Cardiac myocytes are heart muscle
cells that exhibit a variety of dynamical patterns due to
the intracellular calcium dynamics [3]. The spatial and
temporal distribution of intracellular calcium in cardiac
myocytes determines the excitation-contraction coupling
of the myocardium and is therefore a basic mechanism
underlying heart function [4]. In particular, it is well known
that high-frequency pacing of ventricular myocytes leads to
the emergence of complex spatiotemporal patterns in the
distribution of the intracellular calcium. The appearance of these complex dynamical regimes is a consequence of the nonlinear interplay between different cellular Ca2+ control mechanisms [3], [5]. Irregular distribution of intracellular calcium may cause anomalies in heart function such as T-wave alternans, ventricular fibrillation or conduction problems [6]. In particular, previous studies have established an interrelation between ventricular fibrillation and intracellular calcium overload [7]. Similarly, the presence of spatially discordant alternans, characterized by out-of-phase activity in different regions of the cell, is known to be related to the onset of lethal arrhythmias [8]–[10].
The purpose of this work is to present an analysis
method that processes a sequence of fluorescence images
of stimulated isolated myocytes and automatically identifies
the spatiotemporal dynamics exhibited by the cell. The
objective is to distinguish physiologically relevant regimes
such as spatially concordant and discordant alternans,
phase-locking oscillations or irregular patterns. The method
uses a feature extraction technique that permits an effective
characterization of the experimental sequence allowing for
a robust identification of each regime. More specifically, an
approach based on Principal Component Analysis (PCA)
is presented to detect the presence of spatial alternans in
the experiment. A similar study that addresses this problem
in the context of cardiac tissue patterns can be found in the
recent literature [11].
The paper is organized as follows: In Section II we introduce the experimental data and provide a detailed description
of the processing method. The main capabilities of the
technique are described in Section III, where we evaluate its
performance and report on several examples of the correct
identification of different regimes. Finally, the potential of the method and possible further improvements are
discussed in Section IV.
II. MATERIALS AND METHODS
A. Data acquisition
A total of 22 atrial myocytes were loaded with 2.5 µM
fluo-4 for 15 minutes followed by wash and de-esterification
for 30 minutes. The myocytes were stimulated intracellularly
with an EPC-10 patch-clamp system (HEKA, Germany) as
described in [12]. The sequences of confocal images were
acquired at a frame rate of 100 Hz with a resonance scanning
Leica SP5 AOBS confocal microscope. Ionic currents were
recorded simultaneously with a HEKA EPC-10 amplifier.
Synchronization of confocal images and current recordings
was achieved using a Leica DAQ box and HEKA Patchmaster software. Patchmaster was used to design electro-
physiological protocols and to generate triggers for confocal
image acquisition and event marking in the stimulation
protocols. Local and global changes in cytosolic Ca2+ levels
were detected by quantifying fluo-4 fluorescence in selected
regions of interest. The cardiomyocytes were analyzed at
different stimulation rates with frequencies ranging from
0.25 to 2 Hz. This resulted in a set of 101 experimental sequences, each consisting of a sequence of N images of 512 × 140 pixels with a physical pixel size of 0.28 µm. All the processing and analysis steps have been implemented in MATLAB (The MathWorks, Natick, MA). The original fluorescence images (24-bit truecolor) are converted to grayscale intensity images by using a weighted sum of the R, G, and B components with weights [0.2989, 0.5870, 0.1140].
We refer to an experimental sequence of grayscale images as $\{X^{k}_{ij}\}$, where $k = 1 \ldots N$ indexes the frame in the sequence and $i = 1 \ldots N_x$, $j = 1 \ldots N_y$ specify a particular pixel in the image. In order to avoid the presence of static heterogeneities in the spatial distribution of the fluorescence, each pixel is normalized by subtracting its time-averaged activity in the experiment.
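For concreteness, a minimal sketch of these two preprocessing steps is given below. It is not the authors' code; the variable frames (a cell array holding the raw RGB images) and the other names are assumptions used only for illustration.

% Minimal sketch of the preprocessing described above (assumed names).
w = [0.2989 0.5870 0.1140];              % R, G, B weights of the grayscale conversion
N = numel(frames);                       % number of frames in the sequence
[Nx, Ny, ~] = size(frames{1});
X = zeros(Nx, Ny, N);
for k = 1:N
    rgb = double(frames{k});
    X(:,:,k) = w(1)*rgb(:,:,1) + w(2)*rgb(:,:,2) + w(3)*rgb(:,:,3);
end
X = X - repmat(mean(X, 3), [1 1 N]);     % subtract each pixel's temporal mean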
B. Feature extraction
Fig. 1 describes the basic steps of the method, which
includes feature extraction and classification. Feature extraction consists of two parts: On the one hand, we determine
the temporal properties of the oscillations in the average
fluorescence and its correspondence to the stimulation times.
On the other hand, we analyze the experimental sequence in
order to determine if the images present out-of-phase spatial
heterogeneities. These two steps constitute a basis for peak
detection and spatial analysis methods detailed below.
Fig. 1.
Schematic description of the method.
1) Peak detection: We first compute the average fluorescence cell activity in each frame, $F_k = \sum_{i,j} X^{k}_{ij}/(N_x N_y)$, $k = 1 \ldots N$, and we identify sequential pairs of local extrema corresponding to the peaks and valleys of $F_k$. We then compute the mean and standard deviation of the peak amplitudes, $m_a$, $\sigma_a$, and of the intervals between consecutive peaks, $m_i$, $\sigma_i$ (inter-peak intervals).

The distribution of amplitudes is considered homogeneous if the variability of the peaks $\sigma_a$ is four times smaller than the noise in the signal $\sigma$.¹ When the distribution of amplitudes is not homogeneous, alternating and irregular regimes are distinguished by testing for the presence of sustained oscillations in the peak amplitude. Similarly, an irregular behavior is identified when the variability of the inter-peak intervals exceeds a heuristic threshold, $\sigma_i/m_i > 0.6$. Finally, the auto-correlation function of $F_k$ is used to determine the n:m correspondence between the calcium peaks and the stimulation pulses.

This procedure results in a set of four features, namely amplitude homogeneity, presence of alternance, irregularity of the inter-peak intervals and the n:m stimulation response.

¹ Noise is robustly estimated by the median absolute deviation of $s_k$: $\hat{\sigma} = 1.4826 \cdot \mathrm{median}(|s_k - \mathrm{median}(s_k)|)$, where $s_k = F_k - F^{d}_k$ is a residual constructed from a denoised version $F^{d}_k$ obtained by applying a wavelet shrinkage method to the signal $F_k$ (Symmlet of order 8, soft heuristic SURE threshold).
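As an illustration of this step, a minimal sketch of the peak statistics and of the robust noise estimate follows. It is not the authors' implementation: the variable names are assumptions, findpeaks and medfilt1 belong to the MATLAB Signal Processing Toolbox, and the moving median is only a stand-in for the wavelet shrinkage denoiser described in the footnote.

F = squeeze(mean(mean(X, 1), 2));                  % average fluorescence per frame (N x 1)
[pk, loc] = findpeaks(F);                          % peak amplitudes and their frame indices
ma = mean(pk);   sa = std(pk);                     % peak amplitude statistics m_a, sigma_a
ipi = diff(loc);                                   % inter-peak intervals (in frames)
mi = mean(ipi);  si = std(ipi);                    % interval statistics m_i, sigma_i
Fd = medfilt1(F, 9);                               % stand-in for the denoised signal F^d
s = F - Fd;                                        % residual s_k
sigma_hat = 1.4826 * median(abs(s - median(s)));   % robust MAD noise estimate
isHomogeneous = sa < sigma_hat/4;                  % amplitude homogeneity criterion
isIrregular = (si/mi) > 0.6;                       % irregular inter-peak interval criterion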
2) Identification of spatial alternans: When the peak detection procedure detects the presence of an alternance in amplitude, an additional method is used in order to distinguish between spatially concordant and discordant alternans. To this end, Principal Component Analysis (PCA) was used to identify the basic spatial modes in the experiment and to detect the existence of regions with out-of-phase activity [13], [14].

In order to process the data, each image in the sequence $X^{k}_{ij}$, $i = 1 \ldots N_x$, $j = 1 \ldots N_y$, had its temporal mean subtracted and was arranged as a d-dimensional column vector $z_k = [z_{k1}, z_{k2}, \ldots, z_{kd}]^T$, where $d = N_x N_y$. The whole experimental sequence was then represented by the $d \times N$ matrix $A = [z_1, z_2, \cdots, z_N]$. The principal components are obtained by diagonalizing the $d \times d$ covariance matrix $AA^T$. In our case, since the dimension of the data d is much larger than the number of observations N (typical values are $d \sim 7 \times 10^4$, while $N \sim 2 \times 10^3$), we reduce the computational cost by using the fact that the largest N eigenvalues of $AA^T$ are the eigenvalues $\{\lambda_1, \lambda_2, \ldots, \lambda_N\}$ of the $N \times N$ matrix $A^T A$ [15]. The eigenvectors of $AA^T$ representing the spatial modes w can then be obtained from $w = Av$, where v are the eigenvectors of $A^T A$.

The main spatial mode in the experiment is found by reconstructing from the eigenvector $w_1$ associated with the largest eigenvalue $\lambda_1$. PCA reconstruction is achieved by projecting $w_1$ onto the data matrix A, which results in an image representing the main spatial variability of the experimental sequence. The histogram of the reconstructed image is then divided into two regions A and B, defined by the pixels above and below the average pixel intensity outside the cell (i.e. without calcium activity). The ratio between the pixel counts in each region, $\rho = n_B/n_A$, defines a quantity that allows the identification of regions presenting out-of-phase activity in the sequence. Indeed, in the absence of spatial alternance the first-order PCA reconstruction is homogeneous and the number of pixels in region B is low due
to background fluctuations in fluorescence (nB ≪ nA, i.e. ρ ∼ 0). When the sequence includes a spatially discordant alternans, the PCA projection captures the spatial variability by setting the pixels of the discordant region to negative values, therefore increasing the relative size of region B and consequently the value of ρ. A heuristic threshold as low as ρ = 0.1 has proven sufficient to detect small spatial discordances.
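A minimal sketch of this computation is given below (assumed variable names, not the authors' code). It exploits the small N × N eigenproblem to recover the leading spatial mode and then evaluates the region ratio ρ; the threshold separating regions A and B is assumed to be zero here, since the frames are already mean-subtracted.

A = reshape(X, Nx*Ny, N);                  % d x N matrix: each column is one mean-subtracted frame
C = A' * A;                                % small N x N matrix instead of the d x d covariance AA'
[V, D] = eig(C);                           % eigenvectors v and eigenvalues of A'A
[~, idx] = max(diag(D));                   % index of the largest eigenvalue lambda_1
w1 = A * V(:, idx);                        % leading spatial mode w_1 = A v_1
mode1 = reshape(w1, Nx, Ny);               % first-order spatial mode viewed as an image
thr = 0;                                   % assumed stand-in for the background intensity
nA = nnz(mode1 > thr);  nB = nnz(mode1 < thr);
rho = nB / nA;                             % rho > 0.1 flags spatially discordant alternans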
III. RESULTS

A. Identification of Ca2+ dynamical regimes
The information obtained from the peak detection and
PCA analysis provides a set of features that allows us to
classify the experimental sequences into one of the following
cases:
1) Normal dynamics: Normal cell response is characterized by a 1:1 stimulation response, showing homogeneity in the peak amplitude and in the spatial distribution of calcium activity. An example of this behavior is shown in Fig. 4a. As can be seen, the cell responds to a train of
stimulation pulses applied every 4 seconds by generating
a calcium transient. This regime is the typical response
of a healthy cell and is normally observed at low pacing
frequencies.
2) Spatially concordant alternans: An example of spatially concordant alternans is depicted in Fig. 2, which shows
a 1:1 stimulation response presenting an alternance in peak
amplitudes. This temporal alternance appears in the whole
cell without spatial inhomogeneities.
Fig. 3. Analysis of spatial alternance with PCA. A reconstruction from the most relevant eigenvector allows the identification of two different regions A and B with alternating activities.
Fig. 2. Example of spatially concordant alternans at stimulation frequency
0.25 Hz. The whole cell responds to the stimulation pulses with alternating
amplitudes. In all figures, F0 corresponds to the background fluorescence
of the quiescent cell and vertical marks indicate stimulation times.
Fig. 4. Normal cell response and phase-locking at pacing frequency 0.25
Hz. a) Normal dynamics b) Example of phase-locking 2:1 (period halving):
The cell responds with two Ca2+ transients every stimulation pulse. Note
the correspondence between stimulation marks and signal peaks. c) Example
of phase-locking 1:2 (period doubling): The cell responds with one Ca2+
transient every two stimulation pulses (blocking).
3) Spatially discordant alternans: Spatially discordant
alternans present different regions with out-of-phase activity
in response to different stimulation pulses. In Fig. 3a the out-of-phase regions A and B are presented. The corresponding
average calcium signal of each region is shown in Fig.
3c, exhibiting an alternating behavior in the activity of
each zone. The use of the PCA method becomes necessary since this regime cannot be distinguished from a spatially concordant alternans using only the average cell activity (see Fig.
3b).
4) Phase-locking regimes: Phase-locking is a dynamical regime in which there is a n:m phase synchronization
between stimulation pulses and peaks in the signal. In
such cases, a nonlinear interaction between stimulation and
calcium regulation mechanisms results in the appearance of a calcium signal with a frequency different from the frequency imposed by the external pacing. Fig. 4b shows an example of period halving of the calcium signal with respect to the stimulation pulses, whereas Fig. 4c depicts a case in which every other stimulation pulse is blocked and evokes no calcium transient.
5) Irregular dynamics: Irregular dynamics occur when
either inter-peak intervals present significant variability (i.e.,
non-periodic behavior) or when peak amplitudes are highly heterogeneous while presenting no alternance. In such cases, we observe dynamical regimes such as the ones shown in Fig. 5.

Fig. 5. Examples of irregular Ca2+ transients at a stimulation frequency of 1.33 Hz.
B. Performance evaluation
To quantify the performance of the method, we analyzed
the 101 experimental sequences and compared the classification results to those obtained by an expert. True and false
positive rates (TPR, FPR) were computed for each of the
four classification groups (Normal, Phase-locking, Alternans (both concordant and discordant) and Irregular), where FPR is the ratio of false positives over the number of negatives and TPR is the ratio of true positives over the number of positives. Within the alternans group, the technique correctly distinguished all the cases presenting spatially discordant activity. The results are summarized in Table I.

TABLE I. PERFORMANCE OF THE IDENTIFICATION METHOD

Index         Normal   Phase-Locking   Alternans   Irregular
TPR           92%      100%            80%         88%
FPR           6%       14%             0%          16%
Sample size   51       6               10          34
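For reference, the per-class TPR and FPR can be computed from the automatic and expert labels as in the following minimal sketch (assumed variable names; this is not the authors' evaluation script).

classes = {'Normal', 'Phase-locking', 'Alternans', 'Irregular'};
% predicted and expert are cell arrays of class names, one entry per sequence.
for c = 1:numel(classes)
    isPos  = strcmp(expert, classes{c});          % sequences the expert assigns to class c
    isPred = strcmp(predicted, classes{c});       % sequences the method assigns to class c
    TP = nnz(isPred & isPos);   FP = nnz(isPred & ~isPos);
    TPR = TP / nnz(isPos);                        % true positives over number of positives
    FPR = FP / nnz(~isPos);                       % false positives over number of negatives
    fprintf('%s: TPR = %.0f%%, FPR = %.0f%%\n', classes{c}, 100*TPR, 100*FPR);
end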
IV. CONCLUSIONS AND FUTURE WORK

A. Conclusions

We have developed an automatic method for the identification of spatiotemporal regimes in a sequence of calcium fluorescence images of stimulated cardiomyocytes. The method distinguishes between spatially concordant and discordant alternating patterns and is able to identify phase-locking dynamics, such as period doubling or halving, as well as the presence of irregular behavior.

The technique can be used to obtain quantitative information about the dynamical response of the stimulated myocyte. In particular, it might be useful to characterize the sequence of bifurcations that the system undergoes as the pacing frequency is increased. Although the proposed method has been successfully applied to real experimental sequences, it would be necessary to quantify its performance and robustness under different signal-to-noise conditions.

One straightforward improvement of the method is to substitute the PCA technique used for the identification of spatial alternans with an approach based on Independent Component Analysis (ICA) [16]. This method would allow the decomposition of an experimental sequence into a set of statistically independent source signals associated with the alternating spatial modes. This might improve the overall method, since PCA only identifies uncorrelated modes, which are not necessarily statistically independent. Moreover, further dynamical information about the sequence may be obtained by using temporal and spatial phase synchronization techniques [5], [17], [18].
V. ACKNOWLEDGMENTS
The authors acknowledge financial support by MICINN
(Spain) under project DPI2009-06999.
REFERENCES
[1] J. Rittscher, R. Machiraju, and S. Wong, Eds., Microscopic Image
analysis for life science applications, ser. Bioinformatics and Biomedical imaging. Artech House, 2008.
[2] M. J. Berridge, M. D. Bootman, and H. L. Roderick, “Calcium
signalling: dynamics, homeostasis and remodelling,” Nat Rev Mol Cell
Biol, vol. 4, no. 7, pp. 517–29, Jul 2003.
[3] J. P. Keener and J. Sneyd, Mathematical Physiology, 2nd ed., ser. Interdisciplinary Applied Mathematics, vol. 8. New York, NY: Springer, 2009.
[4] D. Bers, “Cardiac excitation-contraction coupling,” Nature, vol. 415,
no. 6868, pp. 198–205, 2002.
[5] S. H. Strogatz, Nonlinear Dynamics And Chaos: With Applications
To Physics, Biology, Chemistry, And Engineering, 1st ed. Westview
Press, 2001.
[6] A. Karma and R. F. Gilmour Jr., “Nonlinear dynamics of heart rhythm disorders,” Physics Today, vol. 60, no. 3, pp. 51–57, 2007.
[7] E. Chudin, J. Goldhaber, A. Garfinkel, J. Weiss, and B. Kogan, “Intracellular Ca(2+) dynamics and the stability of ventricular tachycardia,”
Biophys J, vol. 77, no. 6, pp. 2930–41, Dec 1999.
[8] M. A. Watanabe, F. H. Fenton, S. J. Evans, H. M. Hastings, and
A. Karma, “Mechanisms for discordant alternans,” J Cardiovasc
Electrophysiol, vol. 12, no. 2, pp. 196–206, Feb 2001.
[9] D. Sato, Y. Shiferaw, A. Garfinkel, J. N. Weiss, Z. Qu, and A. Karma,
“Spatially discordant alternans in cardiac tissue: role of calcium
cycling,” Circ Res, vol. 99, no. 5, pp. 520–7, Sep 2006.
[10] J. G. Restrepo and A. Karma, “Spatiotemporal intracellular calcium
dynamics during cardiac alternans,” Chaos, vol. 19, no. 3, p. 037115,
Sep 2009.
[11] Z. Jia, H. Bien, and E. Entcheva, “Detecting space-time alternating
biological signals close to the bifurcation point,” IEEE Trans Biomed
Eng, vol. 57, no. 2, pp. 316–24, Feb 2010.
[12] L. Hove-Madsen, C. Prat-Vidal, A. Llach, F. Ciruela, V. Casadó,
C. Lluis, A. Bayes-Genis, J. Cinca, and R. Franco, “Adenosine
a2a receptors are expressed in human atrial myocytes and modulate
spontaneous sarcoplasmic reticulum calcium release,” Cardiovasc Res,
vol. 72, no. 2, pp. 292–302, Nov 2006.
[13] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed.
John Wiley and Sons, Inc., 2001.
[14] B. Ghanem and N. Ahuja, “Phase PCA for dynamic texture video
compression,” in IEEE International Conference on Image Processing,
2007.
[15] G. Blanchet and M. Charbit, Digital signal and image processing using
MATLAB. ISTE-Wiley, 2006.
[16] J. V. Stone, Independent Component Analysis: A tutorial introduction.
The MIT Press, 2004.
[17] M. Palus, “Detecting phase synchronization in noisy systems,” Physics
Letters A, vol. 235, no. 4, 1997.
[18] M. G. Rosenblum, A. S. Pikovsky, and J. Kurths, “Phase synchronization of chaotic oscillators,” Phys. Rev. Lett., vol. 76, no. 11, pp.
1804–1807, 1996.
Conditioning Data for Condition Assessment of a
Power Transformer
Roberto Villafáfila-Robles¹, Marta Rodríguez¹, Pau Lloret¹, Andreas Sumper¹,², Samuel Galceran-Arellano¹

¹ Centre of Technological Innovation in Static Converters and Drives (CITCEA), Universitat Politècnica de Catalunya (UPC), E.U. d'Enginyeria Tècnica Industrial de Barcelona, Electrical Engineering Department, Comte d'Urgell, 189. 08036 Barcelona (Spain). e-mail: [email protected]
² Catalonia Institute for Energy Research (IREC), Barcelona (Spain)
Abstract — Utilities have to guarantee a proper condition of network components in order to meet regulatory and societal demands regarding reliability and quality of power supply while optimizing costs. Maintenance strategies have evolved to cope with this issue. A Condition Based Maintenance (CBM) strategy makes it possible to adapt the maintenance actions to the condition of the equipment. It is mainly used for critical equipment such as power transformers. If an on-line monitoring system is used, the actual condition of the assets can be estimated. Such a system consists of a set of sensors that acquire condition-related parameters and of techniques and tools that process and analyze the data in order to assess the condition. However, anomalous data may appear due to malfunctions of the monitoring system and may lead to errors in data interpretation. In order to overcome this issue, such data must be conditioned before being analysed. Once the monitored data are refined, the condition can be estimated through models. A data conditioning process is presented for a case study of a power transformer in service. Furthermore, a data mining process for obtaining behaviour patterns is also introduced.

Keywords: Condition monitoring system, Condition assessment, Conditioning Data, Condition Based Maintenance
I. INTRODUCTION
Asset management has become one of the main activities for utilities due to the liberalization of the electricity sector. This environment calls for new strategies in operation and maintenance activities in order to reduce costs while improving reliability and quality of power supply, so as to meet regulatory frameworks and societal demands. Furthermore, the risk is likely to increase when optimizing technical and economic resources if financial interests are placed above the actual condition of the assets rather than at the same level.
The condition of assets is guaranteed through maintenance actions. Such actions can be grouped into different strategies depending on the criticality of the asset, its cost and the available spare parts. There are four main maintenance strategies with the following characteristics [1]:
 Corrective maintenance (CM): there is no inspection or maintenance until breakdown.
 Time Based Maintenance (TBM): there are fixed time intervals for inspections and maintenance.
 Condition Based Maintenance (CBM): there is continuous or occasional monitoring and maintenance is performed when required.
 Reliability Centred Maintenance (RCM): there is a priority list obtained from combining condition and failure effects, which permits risk management.
Utilities have mainly been performing maintenance of their assets as a combination of CM and TBM strategies, depending on the network component. However, a CM plan will have a significant impact on power system operation if a critical component fails. On the other hand, TBM plans might over-maintain young equipment while under-maintaining equipment close to its end of life. Thus, there is a shift towards a CBM approach for critical equipment, such as power transformers, in order to avoid damage to network components by detecting faults at an incipient stage. As it is not possible to measure directly the time to failure of a network component, such time is predicted by monitoring parameters that can provide an approximation of the actual condition and ageing process after the corresponding analysis. A step forward in maintenance strategies after CBM is RCM. This last plan considers, apart from the actual condition, other factors such as resource constraints and power quality indices to prioritize the maintenance orders. However, in order to set up the last two maintenance strategies, utilities require a high financial effort to deploy the related systems, as well as qualified and experienced staff able to manage and take advantage of such systems.
The monitoring of condition-related parameters of equipment can be done through both on-line and off-line methods. In order to carry out on-line monitoring, it is necessary to install sensors that continuously acquire data from the monitored network component, together with an information and communication system that transmits and stores such data. These data are then accessible for later analysis. However, the installation of sensors represents an important drawback for equipment in service. Off-line monitoring methods can overcome this problem by checking equipment that has to be taken out of service. However, such measurements might be done too late to prevent damage or might not provide useful results.

Any action within a CBM strategy, such as alarms, maintenance or replacement orders, depends on the assessment of the condition of the equipment and the subsequent diagnosis. On the one hand, some monitoring techniques use monitored data in standard models, such as the thermal models defined in IEEE Std. C57.91 and IEC 354. However, such models include parameters that have to be calculated for each case. On the other hand, other monitoring methods require the power transformer's fingerprint, which is used as a reference in later analyses to determine the evolution, as in Frequency Response Analysis (FRA). These techniques need qualified staff to perform the test and assess the results. Thus, determining the condition of a power transformer and the limits at which to raise an alarm is a cumbersome task.

Power transformers are a key component in power systems and utilities are making huge efforts to avoid damage to such equipment by deploying CBM plans for them. The techniques used for condition monitoring and condition assessment of power transformers can be found in [2]. As already mentioned, on-line monitoring for CBM implies two steps: a data acquisition system that obtains the values of condition-related parameters of the equipment, and techniques and tools that process and analyze such data in order to assess its condition. The main outcome is to detect incipient faults and perform proper actions to reduce the damage and recover a good state of health of the equipment.

However, the data from the monitoring system need to be conditioned in order to obtain useful information and remove erroneous data. If this process is not performed properly, it could lead to wrong results. A mistake in the assessment of the condition can lead to the loss of both the equipment and a significant amount of money. This paper deals with conditioning monitored data for estimating the condition of a transformer based on a case study.
II. PILOT PLANT
A condition monitoring pilot plant has been deployed according to the methodology described in [3]. The pilot plant is shown in Figure 1. It consists of a 66/25 kV, 30 MVA power transformer and substation circuit breakers. The condition monitoring system (sensors, data acquisition and warehouse systems, and communication systems) is described in more detail in [4] and [5].

The monitored parameters of the active part of the power transformer and the corresponding sensors are listed in Table I. The values of such parameters are acquired continuously and a pre-processing step is carried out before they are stored in the database. The storage of these parameters is synchronous: instantaneous values are aggregated into 15-minute averages, and the average, maximum and minimum for each quarter of an hour are recorded. Date and time are recorded with each measurement.
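For illustration, the quarter-hour aggregation could look like the following sketch (assumed variable names t and v for the raw timestamps and readings; this is not the plant's actual acquisition software).

% t: timestamps (datenum values), v: instantaneous sensor readings
binId = floor(t * 96);                        % 96 quarter-hour bins per day
[bins, ~, g] = unique(binId);
recTime = bins / 96;                          % start time of each bin (datenum)
recMean = accumarray(g, v, [], @mean);        % 15-minute average
recMax  = accumarray(g, v, [], @max);         % 15-minute maximum
recMin  = accumarray(g, v, [], @min);         % 15-minute minimum
records = [recTime, recMean, recMax, recMin]; % rows to be inserted in the database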
Table I. Power transformer monitored variables

Monitored parameter              Sensor
Upper oil temperature            Pt100
Gases dissolved in oil           Hydran M2
Oil humidity                     Vaisala MMT318
Lower oil temperature            Pt100
High-voltage 3-phase currents    Current transformer
High-voltage 3-phase voltages    Voltage transformer
Figure 1. Monitoring pilot plant
III. CONDITIONING MONITORED DATA
The database stores data that can be used for assessing the condition of the power transformer. A first step is to plot such data. Figure 2 shows monitored transformer temperature data from the monitoring system for the same month in two different years. It can be seen that some data are missing or have a value equal to zero. Therefore, a conditioning procedure is needed to identify the cause of this situation and extract accurate information that permits estimating the condition of the power transformer, in order to specify the proper maintenance actions if needed. The proposed methodology is shown in Figure 3 and is described next. It has two parts: finding wrong data and generating behaviour patterns. The objective is to obtain a set of data free of anomalous values and to create, for each monitored parameter, a behaviour pattern to identify changes or trends that lead to an unwanted situation.
Figure 2. Temperature measurements in April 2008 (left) and 2009 (right)
a. IDENTIFYING WRONG DATA
On-line monitoring systems might suffer malfunctions that cause anomalous data to be inserted in the database. Such abnormal data must not be taken into consideration when assessing the condition. The origin of these inaccuracies is the misoperation of some of the system components, such as sensors, or a dysfunction of the communication and software systems: for example, a damaged sensor, loss of communication between sensors and database due to a broken cable, or a writing failure when inserting into the database.

In order to cope with these sources of errors, the stored values of the monitored parameters are checked against the following questions (a minimal sketch follows below):

o Is the number of data points the expected one? This question discovers missing intervals, as the storage of monitored data is carried out at constant time intervals.
o Is there a date without a measurement? This question notices that a measurement has not been recorded in the database.
o Are there data with zero value? This question detects an error in the stored data, although a null value in current and voltage could mean that the transformer is out of service.

As a result of each question, a list with the detected wrong data is created and stored. The exact cause of the misoperation of the on-line monitoring system can be determined by analysing the data lists generated by the questions. After the whole set of data has gone through the questions, the appropriate data are available for obtaining the behaviour pattern of each parameter.
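The three checks can be sketched as follows for one monitored parameter (assumed variable names recTime and recMean; an illustration, not the deployed software).

% recTime: timestamps of the stored 15-minute records (datenum), recMean: stored averages
expected = (recTime(1) : 1/96 : recTime(end))';                       % expected 15-minute grid
missingCount = numel(expected) - numel(recTime);                      % check 1: expected number of data?
missingSlots = setdiff(round(expected*96), round(recTime*96)) / 96;   % check 2: dates without measurement
zeroValues   = recTime(recMean == 0);                                 % check 3: data with zero value
wrongData = struct('missingCount', missingCount, ...
                   'missingSlots', missingSlots, ...
                   'zeroValues', zeroValues);                         % lists stored for later analysis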
Figure 3. Algorithm for conditioning monitored data
b. GENERATION OF BEHAVIOUR PATTERNS
The behaviour patterns are found through a data mining process applied to the error-free set of monitored data. Data mining has been selected because it is an efficient technique for obtaining useful information from large amounts of cleaned data. There are different techniques for performing data mining: neural networks, decision trees, genetic algorithms, clustering, linear regression, statistics, etc.

Statistical analysis has been selected to derive the behaviour patterns for watching the evolution of the condition of the power transformer. This technique consists of fitting the data to a statistical distribution model. A distribution fit test determines the suitable model. Before performing such tests, the influence of the season and the time of day has to be considered. The refined data are therefore separated into winter (December to February), spring (March to May), summer (June to August) and autumn (September to November); and within each season, the data are considered hourly. The fit tests have been carried out under these conditions and the normal distribution fits the refined data, as Figure 4 shows for the top-layer temperature.
Figure 4. Distribution fit-tests (with Minitab®) for top-layer temperature:
exponential (upper left), Weibull (lower left), normal (upper right) and log-normal (lower right)
Therefore, each parameter has four behaviour patterns, each consisting of a daily model made of 24 normal distributions, one for each hour of the day. Figure 5 depicts the behaviour pattern of the top-layer temperature for spring, where the hourly means are connected by a continuous line and the upper and lower lines limit the 95.44% confidence interval (±2σ). When deriving this pattern, the wrong data that Figure 2 shows in April 2009 were not taken into account and do not affect it.

The models are stored in the database and, when the monitoring system acquires new raw data, these data are first refined and later used to update the corresponding pattern. The behaviour patterns make it possible to assess the evolution of the monitored parameters and to evaluate the condition of the power transformer by means of comparison and correlation between the parameters.
Figure 5. Spring top-temperature behaviour pattern.
Continuous line: mean. Dot-point line: upper limit. Dot line: lower limit
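A minimal sketch of how such a seasonal, hourly pattern could be derived from the refined records is given below (assumed variable names; normfit requires the MATLAB Statistics Toolbox, whereas the paper itself performs the fit tests with Minitab).

% recTime: timestamps (datenum) of the refined 15-minute records, recMean: values
vec = datevec(recTime);                             % [year month day hour min sec]
isSpring = ismember(vec(:,2), 3:5);                 % spring: March to May
mu = zeros(24,1); sigma = zeros(24,1);
for h = 0:23
    sel = isSpring & vec(:,4) == h;                 % spring records for hour h
    [mu(h+1), sigma(h+1)] = normfit(recMean(sel));  % hourly normal distribution
end
upper = mu + 2*sigma;  lower = mu - 2*sigma;        % 95.44% confidence band (+/- 2 sigma)
plot(0:23, mu, '-', 0:23, upper, '-.', 0:23, lower, ':');
xlabel('Hour of day'); ylabel('Top-layer temperature (ºC)');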
IV. CONCLUSIONS
Power transformers are an important asset in power systems. Monitoring power transformers makes it possible to estimate their condition and life expectancy. Although the degradation process of the insulation materials and the failure modes are known, the assessment of their ageing and time to failure is very difficult. On-line monitoring systems help to estimate the current condition of power transformers. However, raw data might present anomalous values due to malfunctions of the monitoring system and, if these are not identified, incorrect conclusions could be drawn in later analyses.

In order to overcome this situation, a refining stage prior to the condition analysis is needed. A process for conditioning the monitored data of an on-line monitoring pilot plant has been described. This process permits deriving behaviour patterns to identify changes or trends that might lead to an unwanted condition of the power transformer. The patterns have been derived from the cleaned monitored data using statistical data mining techniques, namely normal distributions that take into account the season and each hour of the day.
REFERENCES
[1] J. Schneider, A. J. Gaul, C. Neumann, J. Hografer, W. Wellow, M. Schwan, A. Schnettler, "Asset management techniques," International Journal of Electrical Power & Energy Systems, vol. 28, no. 9 (Selection of Papers from 15th Power Systems Computation Conference, PSCC'05), pp. 643-654, Nov. 2006. DOI: 10.1016/j.ijepes.2006.03.007.
[2] A. E. B. Abu-Elanien, M. M. A. Salama, "Asset management techniques for transformers," Electric Power Systems Research, vol. 80, no. 4, pp. 456-464, Apr. 2010. DOI: 10.1016/j.epsr.2009.10.008.
[3] Velasquez, J.L.; Villafafila, R.; Lloret, P.; Molas, L.; Galceran, S., "Guidelines for the implementation of condition monitoring systems in power transformers," Advanced Research Workshop on Transformers 2007, ARWtr2007, pp. 1-6, 29-31 Oct. 2007.
[4] Velasquez, J.L.; Villafafila, R.; Lloret, P.; Molas, L.; Sumper, A.; Galceran, S.; Sudria, A., "Development and implementation of a condition monitoring system in a substation," International Conference on Electrical Power Quality and Utilisation, EPQU 2007, pp. 1-5, 9-11 Oct. 2007.
[5] Lloret, P.; Velasquez, J.L.; Molas-Balada, L.; Villafafila, R.; Sumper, A.; Galceran-Arellano, S., "IEC 61850 as a flexible tool for electrical systems monitoring," 9th International Conference on Electrical Power Quality and Utilisation, EPQU 2007, pp. 1-6, 9-11 Oct. 2007.
ACKNOWLEDGEMENT
The pilot plant project was awarded Endesa's R+D+i international prize NOVARE 2005 on distribution networks, in the category of Power Quality and Reliability, for the project 'Substation monitoring for predictive maintenance'.
XVIII Congreso Universitario de Innovación Educativa en las Enseñanzas Técnicas. ETS de Ingenieros Industriales y de Telecomunicación, Universidad de Cantabria. Santander, 6-9 July 2010
Training and assessment of the spatial ability competence
Jordi Torner, Francesc Alpiste Penalba, Miguel Brigos Hermida
Urgell 187, Barcelona, 934137398, 934017800 [email protected]
Abstract

Many studies show that spatial ability is a key factor in engineering studies. It is essential for the engineer's sketching activity and vital in project design. Among other factors, it correlates with good academic results and with the ability to learn information systems and software. Moreover, the new scenario created by the EHEA (European Higher Education Area) leads us to define and evaluate competences, among which we include spatial ability. In this paper we describe the close relationship between the development of this ability and work with 3D solid modelling software. The study was carried out with 812 first-year Engineering students at UPC-Barcelona Tech, analyzing the evolution of the scores obtained on the DAT-SR and MRT tests, before and after the computer-aided design course.

Keywords: competences; EHEA; spatial ability.
1. Introduction

Human intelligence manifests itself in the level of development of certain abilities (verbal, numerical, spatial, etc.).

Several authors highlight the importance of spatial ability (SA) in engineering design processes and propose teaching strategies to foster its development among students. The development of spatial ability has long been part of the Engineering Graphics curriculum [1]. In recent years, interest has
kept growing due to new developments and the impulse from computer graphics. Its value lies basically in the relationship of SA with design and with graphic communication.

The concept of SA covers a wide range of cognitive functions. There is currently a multitude of tests that address the different components of this ability. As a consequence, the concept is fragmented into multiple sub-factors and it is difficult to find a definition unanimously accepted by the whole scientific community.

Nevertheless, two basic components of the ability, from which the others derive, are accepted by the scientific community [1]:

Spatial visualization: the ability to manipulate an object in an imaginary 3D space, creating representations of the object from different points of view.

Spatial orientation: the capacity to control the space of our surroundings and predict the movement and position of objects.

An engineer must be able to solve graphically the representation of complex structures and systems in the course of his or her work. SA is therefore necessarily useful and can become key in the development of engineering projects, as several studies point out [2,3]. In the early phases of project design it is essential to quickly solve problems in which spatial reasoning plays a decisive role, for example in the sketching phase.

Moreover, SA has been recognized as a determining factor in predicting success in several areas, especially technological ones [4]. That is, positive correlations have been established between SA and the academic results of engineering students. Positive correlations have also been established with the ability to learn software applications and CAD tools, and in database design or the development of molecular structures [5].

Learning a professional CAD tool in Industrial Engineering studies is becoming increasingly necessary due, among other factors, to the demand
of the labour market. Consequently, the great majority of universities and technical schools use a CAD tool in the first years of their engineering programmes. Several authors [6,7] have shown that the use of CAD tools can boost the development of spatial visualization.

In summary, SA can be characterized as follows:

 A basic competence in the engineer's curriculum.
 Fundamental for design activity: it is vital in the design and development of projects.
 It correlates with good academic results and with ease of learning information systems and software tools.
 Necessary for solving graphically the representation of complex structures and systems in the engineer's work.
 A determining factor in the prediction of success in several areas, especially in science and technology (positive correlations with the academic results of engineering students).
 Positive correlations have been established with the ability to learn software applications and CAD tools, and in database design or the development of molecular structures.
 There is a relationship between SA and the ability to work with computer-based information systems (navigation through hierarchical menus and databases, e-learning portals, information storage systems and, in general, all kinds of web spaces).

Given all these factors, this work aims to develop a model for assessing the SA of engineering students in the first-year course "Expresión Gráfica y diseño asistido por ordenador" (Graphic Expression and Computer-Aided Design).
The model will include the necessary procedures and indicators, will make it possible to weight the main variables, and will provide guidance on the actions to introduce in order to improve teaching practice.

Our objective is to verify whether the use of a 3D solid modelling tool, such as SolidWorks, develops the students' SA.

To this end, two SA tests are administered at the beginning and at the end of the semester, and we check whether there are significant differences between the scores obtained before and after the classes.

In this way we can study whether the intervention carried out in the course classes produces training of SA.

The study takes place at a time of important changes, since the course Graphic Expression and Computer-Aided Design has been adapted to the model agreed in Bologna. The integration of universities into the European Higher Education Area (EHEA) leads us to modify the structure, the contents and the teaching-learning model of our training programmes. This opens a new line of research for us.

This scenario leads us to the definition of competences: training activities are oriented towards the acquisition of the specific competences of each course, adopting a practice-oriented approach.

And to the evaluation of results: the outcomes of the process must be evaluated in terms of competences, trying to bring the professional profile closer to the academic one by observing the knowledge and skills required in the labour market.

The integration into the European area leads us to define the specific competences of the course Graphic Expression and CAD. As mentioned above, one of the most important competences both for the engineer and for the course is SA. Therefore, defining and evaluating this competence becomes another important objective of our research.
2. Objectives and methodology

The ultimate objective of the study is to develop a model that allows the SA of Industrial Engineering students to be evaluated and that, in turn, allows the strategies and methods of the course programme and their relation to SA to be evaluated.

Figure 1. Research outline

In this work, the first-year UPC course Graphic Expression and Computer-Aided Design has been analyzed.

Using the proposed model, we want to verify whether the teaching methodology and the activities carried out in the course contribute to a significant development of the students' SA.

Regarding the detection of SA by means of tests, the main existing solutions are reviewed. From among them, the type of test that best suits the objective of our study will be chosen.
With this objective, two SA tests (DAT-SR and MRT) are administered at the beginning and at the end of the semester, and statistical techniques are used to check whether there are significant differences between the scores obtained before and after the classes, as sketched below.

Figure 2. Rotation of figures (based on the MRT)
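As an illustration, a paired comparison of the pre- and post-course scores could be carried out as follows (assumed variable names; the paper does not state which statistical test was used, so the paired t-test below is only one possible choice).

% pre, post: column vectors with each student's test score before and after the course
[h, p, ci] = ttest(post, pre);        % paired t-test on the score differences (Statistics Toolbox)
gain = mean(post - pre);              % mean improvement
fprintf('Mean gain = %.2f, p = %.4f (h = %d)\n', gain, p, h);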
3. The SA competence

With the EHEA, the definition and evaluation of competences acquires a relevant role. A competence is the learned ability to carry out a task, duty or role adequately. A high level of competence is a prerequisite for good performance.

Navío [8] points out that professional competences are a set of combined elements that are integrated according to a series of personal attributes, taking personal and professional experience as a reference, and that are manifested through certain behaviours or conducts in the work context.

Noteworthy, among others, are the work of Moon [9] for the programming of the course and that of Urraza [10], who proposes a competence model for the course in which SA is integrated:
Table 1. Specific competences of Graphic Expression and CAD and their relation to the transversal competences involved (T.I: instrumental; T.P: interpersonal; T.S: systemic).

COMPETENCES RELATED TO BASIC CONCEPTS AND KNOWLEDGE

C.1 Understand, manage and apply a body of knowledge on the fundamentals and standards of Industrial Engineering Drawing, the platform needed to tackle graphic engineering problems. (Transversal: T.I.2 Capacity for analysis and synthesis; T.I.3 Capacity for information management; T.I.5 Basic knowledge of the profession; T.S.2 Autonomous learning)

C.2 Apply CAD programs skilfully, turning the computer into a didactic, precise and fast tool for producing the documentary base of the objects that must be represented from the perspective of Engineering Drawing knowledge. (Transversal: T.I.6 Computing knowledge; T.S.2 Autonomous learning)

COMPETENCES RELATED TO CONSTRUCTIVIST LEARNING

C.3 Manage and apply spatial ability using sketching as a support, within a framework of developing cognitive strategies that aid the three-dimensional visualization of technical objects. (Transversal: T.I.1 Problem solving; T.S.2 Autonomous learning)

C.4 Interpret and produce standardized Industrial Engineering drawings. (Transversal: T.I.1 Problem solving; T.S.1 Capacity to apply knowledge in practice; T.S.2 Autonomous learning)

C.5 Apply procedural knowledge to solving Constructive Geometry problems oriented towards the representation of surfaces. (Transversal: T.I.1 Problem solving; T.S.1 Capacity to apply knowledge in practice; T.S.2 Autonomous learning)

C.6 Apply research skills and creativity in the introduction to industrial design. (Transversal: T.I.1 Problem solving; T.S.1 Capacity to apply knowledge in practice; T.S.2 Autonomous learning; T.S.3 Creativity; T.S.5 Research skills)

C.7 Manage information sources, presenting and justifying graphically, orally and in writing the aspects related to design ideas and to the interpretation and production of Engineering documents. (Transversal: T.I.4 Capacity for organization and planning; T.I.7 Graphic, oral and written communication)

C.8 Team work that facilitates the development of knowledge through critical and responsible cultural exchange. (Transversal: T.P.1 Team work; T.P.2 Capacity for criticism and self-criticism)
4. Model

We define a model for the development of SA in Graphic Expression.

The objective of the model is to provide resources for teaching improvement based on the study of SA. The model allows the control of the variables that affect SA and facilitates its measurement before and after the course. In addition, the model establishes relationships between the teaching methodologies, the academic results and student satisfaction.

Figure 3. Model for the development of SA in Graphic Expression

The activity focuses on the programming of the course Graphic Expression and Computer-Aided Design (EGDAO) and on the study of the spatial abilities developed in it.
A model is provided to measure the improvement of spatial ability, which is a basic competence of engineers.

To this end, the variables that affect SA are described and a system for refining them is proposed which, in addition, provides guidance on the didactic actions to take in order to improve SA.

The statistical studies carried out yield quantitative values that can be used as a reference for the quality indicators. They also contribute to determining the reliability of the surveys carried out.

The correlation between the SA values and the academic results obtained from the assessments of the main didactic activities is determined; a sketch of this computation is given below. This correlation allows us to determine the influence of the teaching methodologies used on the improvement of SA and guides us in the selection of the most effective activities.

Finally, all the analyzed data feed into decision making aimed at improving teaching quality, since we have a set of methods and tools with which to obtain the records and compare them with the reference indicators used in a continuous improvement process.
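For illustration, the correlation between SA scores and the marks of a didactic activity could be obtained as in this minimal sketch (assumed variable names; corr requires the MATLAB Statistics Toolbox).

% scores: students' SA test scores (e.g. DAT-SR), grades: marks of a didactic activity
[r, p] = corr(scores, grades);        % Pearson correlation and its p-value
fprintf('Correlation between SA and academic results: r = %.2f (p = %.4f)\n', r, p);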
5. Conclusions and future lines of research

Of all the variables analyzed in the study, the analysis of the results identifies the following variables as determinant for the SA scores:

Use of CAD software: significant differences are observed for students with experience in this type of program.

Specialization: we find important differences between specializations, especially in chemistry, which obtains the lowest means.

The strongest relationship is found between the initial DAT and the DAO2 test devoted to spatial geometry. Therefore, we propose strengthening the activities related to spatial geometry in order to maximize the development of SA.
The DAT seems to be a good indicator of success in the course, since it shows the highest correlation values.

The MRT does not turn out to be an interesting instrument, since it does not yield significant differences in any of the comparisons performed.

From here, some of the possible future lines of action would be:

 Use the indicators obtained to compare the records with the reference indicators in a continuous improvement process: test results (increases in the DAT), international and national comparisons, comparisons across years, academic performance, and the relation of the DAT to the final mark, DAO1 and DAO2.

 Complete the model with the measurement of other fundamental competences in the field of engineering and with its application to the improvement of teaching practice, by incorporating and evaluating new methodologies and relating academic results to the acquisition of competences.
6. References
1. Miller, C. L., & Bertoline, G. R. Spatial visualization research and theories: their
importance in the development of an engineering and technical design graphics
curriculum model. Engineering Design Graphics Journal 55 (3), (1991). 5-14.
2. H Jerz, R. “Redesigning engineering graphics to include CAD and sketching
exercises” ASEE Annual Conference Proceedings, Montreal, Canada (2002).
3. J Strong, S. and Smith, R. “Spatial visualization: fundamentals and trends in
engineering graphics”. Journal of Industrial Technology, vol. 18, no. 1, (2001).
4. Strong, S., & Smith, R.. Spatial visualization: Fundamentals and trends in
engineering graphics [Electronic version ]. Journal of Industrial Technology, 18(1),
(2001-2002), 1-5.
5. Norman, K. L. Spatial Visualitzation. A gateway to Computer Based Technology.
Journal of Special Educational Technology, XII, (3), 195-206 (1994).
6. Devon, R., Engel, R.S., Foster, R.J., Sathianathan, D, and Turner, G.F.W. “The effect
of solid modelling on 3D visualization Skills”. Engineering Design Graphics Journal, vol.
58, no. 2, 4-11 (1994).
7. Sorby, S.A., “Improving the spatial skills of engineering students: impact on graphics
performance and retention”. Engineering Design Graphics Journal, vol. 65, no. 3, pp.
31-36 (2000).
8. Navío Gámez, Antonio. “Las competencias del formador de formación continua.
Análisis desde los
programas de formación de formadores”. Tesis Doctoral.
Universidad Autónoma de Barcelona (2001).
9. Moon, J. Linking Levels, Learning Outcomes and Assessment criteria: the Design of
Programmes and Modules in Higher Education. unpublished paper, Staff Development
Unit, University of Exeter (2000).
10. Urraza, Guillermo. Evaluación de competencias en el diseño curricular de la
asignatura de Expresión Gráfica y DAO. XVII Congreso Internacional de Ingeniería
Gráfica, (2006).
61st International Astronautical Congress, Prague, CZ. Copyright ©2010 by the International Astronautical Federation. All rights reserved.
IAC-10-A1.8.4
SMALL MEDICAL EXPERIMENTS IN INNOVATIVE AEROBATIC SINGLE-ENGINE PARABOLIC
FLIGHTS: PROVIDING DATA AND INSPIRATION FOR THE EXPLORERS OF TOMORROW
Prof. Antoni Pérez-Poch
EUETIB, Escola Universitària d’Enginyeria Tècnica Industrial de Barcelona; UPC, Universitat Politècnica de
Catalunya, Spain, [email protected]
Daniel Ventura González
Aeroclub Barcelona-Sabadell, Barcelona, Spain, [email protected]
Gloria García-Cuadrado
BAIE Barcelona Aeronautics & Space Association, Spain [email protected]
Recent research undertaken by the joint venture led by the Universitat Politecnica de Catalunya with its partners, the Aeroclub Barcelona-Sabadell and BAIE, Barcelona Aeronautics & Space Association, has shown that it is possible and safe to obtain zero-gravity conditions for up to 8 seconds with single-engine aerobatic planes. The quality of the microgravity is comparable to that obtained by conventional parabolic flights. The main advantage of this technique is that a lower cost-to-microgravity-time ratio is obtained during the parabola. Small life science experiments that require no more than this short period of time and cannot be run in drop towers benefit from easy access to the experimental platform. We present here how data from small medical experiments that have flown with our platform are used for the first time as an educational tool. The experiments were aimed at validating a numerical model (NELME) developed in our research group, which is intended to suggest what actual changes in the cardiovascular system can be expected when the human body is exposed to reduced gravity. An educational tutorial was developed, based on these experiments, containing an introduction to space physiology, how the data were obtained and why they are useful, and hands-on material where students can actually use simulation software to see what changes may happen to the human body when exposed to long-term scenarios such as a long expedition to the Moon or a trip to Mars. The material was tested by engineering students who had almost no previous understanding of medical concepts, but it can easily be used also by life sciences students with no knowledge of simulation techniques. A final survey and an evaluation of the students' work were conducted in order to assess the impact of this activity. Students of our University also have the opportunity to design their own experiment, and to actually build it and fly it in zero gravity at Sabadell Airport (Catalonia, Spain), very near our Faculty premises in Barcelona. Students from the International Space University Space Studies Program 2010 have designed a number of experiments which will likely be flown by us this year, and an international contest led by the Space Generation Advisory Council is just being started. In conclusion, we believe that this innovative microgravity platform will open new doors to inspire students around the world to take an interest in space medicine and research, and we look forward to expanding this opportunity in the upcoming years.
I. INTRODUCTION
Parabolic flights are nowadays a common way to obtain microgravity. About 20-30 seconds of microgravity can be obtained during parabolic flights. Jet airplanes such as the KC-135 (NASA), the Caravelle, the Airbus A300 (ESA) or the Ilyushin IL-76 MDK (Gagarin Cosmonaut Training Center, Moscow) are used with their interiors completely empty and padded with foam rubber [1]. These planes are operated in professional or student experimental campaigns involving a number of different teams and experiments on board, and they typically require months of preparation.
The flight profile is the following (see Figure 1): coming from steady flight, an introductory pull-up maneuver is performed at increased acceleration (roughly 2 g for these planes); the pilot then reduces thrust and, with the throttle at idle, the airplane follows the parabolic trajectory of a free-flying body. As a consequence, after a short transition phase, microgravity is obtained for about 20-30 seconds. After the recovery maneuver at increased acceleration (2 g), the airplane flies horizontally again for some minutes before starting the next parabola. During one flight mission about 20 parabolas are performed.
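For orientation only, the order of magnitude of the weightless time follows from simple ballistics: during the parabola the vertical speed reverses from +Vz to -Vz, so the zero-g time is roughly 2*Vz/g with Vz = V*sin(climb angle). The sketch below uses assumed round numbers for entry speed and climb angle, not measured flight data from any of the aircraft mentioned:

# Rough estimate of the zero-g time of a parabolic maneuver from ballistics.
# Speeds and climb angles are illustrative assumptions, not flight data.
import math

G = 9.81  # m/s^2

def microgravity_time(speed_kmh: float, climb_angle_deg: float) -> float:
    """Approximate duration of the ballistic (zero-g) arc, in seconds."""
    v = speed_kmh / 3.6                               # entry speed in m/s
    vz = v * math.sin(math.radians(climb_angle_deg))  # vertical speed component
    return 2.0 * vz / G                               # time for +Vz to reverse into -Vz

print(f"Large jet (~425 km/h, 47 deg climb): {microgravity_time(425, 47):.0f} s")
print(f"Light aerobatic plane (~220 km/h, 35 deg climb): {microgravity_time(220, 35):.0f} s")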
Figure 1: Airbus A300 parabolic maneuver (Credit: ESA/Novespace).
However, due to flight perturbations and the presence of many crew members, the microgravity level is comparatively modest, only about 0.01 g. The utilization of such flights ranges from the testing of technology and procedures to the qualification of experiments and subsystems, and to astronaut training.
Since 1984, ESA has used six types of airplanes to conduct its parabolic flight campaigns [2]: the KC-135, the Caravelle from CNES, the Russian Ilyushin IL-76 MDK, the Cessna Citation II, and the Airbus A-300 'zero-g' from CNES, all of them with 2 or 4 engines. A large number of physical and life sciences experiments have been conducted, showing the success of this kind of access to microgravity.
Our approach differs from the successful parabolic flights previously reported in that we propose the use of a small two-passenger aerobatic plane. This kind of aircraft is already certified to sustain the maneuver and could also be used for professional experiments and technology testing. The advantages of this approach are the short preparation time and the cost savings, since the budget of the flight is significantly smaller than that of parabolic flights with bigger and more complex airplanes.
II. CALIBRATION AND OPERATIONS
We first reported the implementation of parabolic maneuvers for professional experimentation in microgravity with a CAP-10B aerobatic airplane, certified to perform aerobatic maneuvers, at the IAC in Glasgow in 2008, after our maiden flight in November 2007 provided the first calibration data [3].
The plane is a two-passenger light airplane (Figure 2) and can be flown easily from an aerodrome with little preparation beyond the usual procedures of private flying.
Figure 2: CAP-10B plane owned by ACBS.
(Credit: Aeroclub Barcelona-Sabadell).
The only limitation of this approach is that no large equipment can be loaded into the cockpit, which was designed to be compact for aerobatic sport; however, the plane is quite adequate for the rapid testing and prototyping of technology subsystems, as well as for physical or life science experiments that do not require much stowage space.
In this mission we conducted six parabolic flights from Sabadell Airport in November 2007 with an experiment on board. Every parabola was carefully planned and coordinated between the pilot (Ventura) and the mission specialist (Perez-Poch). The timing of every part of the maneuver, the g acceleration, and a number of parameters regarding the experiment on board were recorded through a laptop carried on board.
As this is a single-engine plane with limited thrust, the power available from the engine to perform the parabola was less than that available from the other planes reported to have performed parabolic maneuvers. As a result of this limitation, a more intense acceleration is needed in the pull-up entry, reaching 3.8 g instead of the usual 1.8 g found when using the Airbus 300 zero-g. With this approach we report six series of 4.5 to 6.8 seconds of microgravity during the zero-g phase of the parabola. Again, a nearly 4 g pull-out maneuver is performed by the pilot to recover horizontal flight. We repeated the maneuver every two minutes with the experiment on board.
The quality of the microgravity attained is comparable to that obtained in earlier parabolic experiments, although we could not control the z-axis as precisely as other planes do, since the control of this plane is entirely manual. However, it can be estimated that the order of magnitude is comparable to the 0.01 g obtained in bigger airplanes with more precise and strict control of the balance.
The pilot for these maneuvers is an experienced aerobatic pilot (Ventura), who trains regularly as a sport aerobatics aviator. The mission specialist is a private pilot (Perez-Poch) with no previous experience in aerobatic flight, who did not require any medical treatment before, during or after the parabolas.
No motion sickness symptoms were reported by either of us in this mission, although it is advisable to be fit enough to sustain the nearly 4 g pull-up and pull-out. After the maneuvers the plane was flown back to Sabadell Airport, 20 km from Barcelona, and safely landed with no incidents to report.
During the manual performance of the maneuver by the pilot (Ventura), the mission specialist (Perez-Poch) carried on himself the payload intended to validate in flight the NELME model proposed and developed by the same author [4]. The equipment consisted of a laptop with an RF receiver, a blood pressure monitor with an RF emitter, and a state-of-the-art pulsometer able to register heart rate. The results were found to be reliable, as their variations were minimal for every one of the six parabolas performed. The numerical model predicted the variations in blood pressure and heart rate of the subject when applying 3.8 g, then zero g, and back to 3.8 g. The experimental findings were fully compatible with the model in spite of the few seconds available in microgravity. More detailed results, as well as the full description of the model, can be found in [4].
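The NELME model itself is not reproduced here. Purely to illustrate the kind of transient such a simulation produces, the toy first-order model below relaxes heart rate toward a g-dependent setpoint over a 3.8 g / 0 g / 3.8 g profile; every parameter is an assumption made for the sketch, not a value from the experiment or from NELME:

# Toy first-order model of heart-rate response to a 3.8 g -> 0 g -> 3.8 g profile.
# This is NOT the NELME model; parameters are illustrative guesses.
import numpy as np

HR_REST = 70.0   # bpm at 1 g (assumed)
GAIN = 12.0      # bpm per g above 1 g (assumed)
TAU = 4.0        # s, relaxation time constant (assumed)
DT = 0.1         # s, integration step

def g_profile(t: float) -> float:
    """Pull-up at 3.8 g (0-5 s), ~6 s of 0 g, pull-out at 3.8 g (11-16 s)."""
    if t < 5.0 or 11.0 <= t < 16.0:
        return 3.8
    if 5.0 <= t < 11.0:
        return 0.0
    return 1.0

hr = HR_REST
for i, t in enumerate(np.arange(0.0, 20.0, DT)):
    target = HR_REST + GAIN * (g_profile(t) - 1.0)  # g-dependent setpoint
    hr += DT * (target - hr) / TAU                  # explicit Euler step of dHR/dt = (target - HR)/tau
    if i % 20 == 0:                                 # print every ~2 s
        print(f"t = {t:4.1f} s   g = {g_profile(t):.1f}   HR ~ {hr:5.1f} bpm")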
The total cost of this mission was estimated at less than 300 euros, including the cost of hiring a professional aerobatic pilot, the plane itself, fuel and airport taxes. This is less than a thousandth of what can be estimated for a usual parabolic campaign, thus resulting in a very advantageous microgravity-time to cost ratio.
The preparation of the mission was reduced to a series of briefing and debriefing sessions, as no special requirements were needed for this life sciences experiment. Therefore, the access time to microgravity was also significantly reduced compared with that needed in a usual parabolic campaign, which may last for months.
Since then, an extensive number of flight tests have been carried out in order to improve the proficiency of the manual maneuver. Thanks to this optimization, the quality of the g level attained has been significantly improved and the likelihood of g jitter lessened. Thanks to these efforts, the technique was optimized so as to provide a reliable source of microgravity to the European space research community, and also to provide flight opportunities to students. A joint venture between the Aeroclub Barcelona-Sabadell, UPC Barcelona Tech and BAIE Barcelona Aeronautics & Space Association started early this year. This joint venture is able to provide flight opportunities and a legal framework for the researchers and students who wish to take advantage of this platform. An Announcement of Opportunity was released this year [5,6] by the institutions funding and leading this endeavour.
III. EDUCATIONAL OPPORTUNITIES
III.I Medical Data as a motivational tool
During the calibration process, from November 2007 to the present date, data for the validation of the NELME model of the cardiovascular system under variable g conditions have been collected.
An educational tutorial was developed, based on these experiments, containing an introduction to space physiology, how the data were obtained and why they are useful, and hands-on material where students can actually use simulation software to see what changes may happen to the human body when exposed to long-term scenarios, like a long expedition to the Moon or a trip to Mars. The material was tested by engineering students, who had almost no previous understanding of medical concepts, but it can easily be used also by life sciences students with no knowledge of simulation techniques. A final survey and an evaluation of the students' work were conducted in order to assess the impact of this activity.
The students had to work out which changes were important, what implications the data have for the hypothesis of the experiment, and propose future lines of research. Students had a one-hour tutorial workshop introduction, two hours of class work, and four days to submit their work. All student teams presented their work on time, and the evaluation was fairly good to excellent for all teams. Students rated the activity 3.8 +/- 0.4 (1 being boring, 5 exciting) and provided quotes such as 'the activity was the most original of my studies' or 'I wish to also take part in the experiments'.
A limited number of UPC graduate research collaborators and UPC undergraduate students have also been invited by us to take part in these in-flight tests during the calibration processes. In these selected motivational flights, which were funded and directed by UPC and operated by the Aeroclub Barcelona-Sabadell with Mr D.V. Gonzalez as pilot-in-command performing the maneuvers, the students were able, as a result of these operations, to make proposals for in-flight experiments [7].
III.II International Space University Summer Students Program 2010 "Fly-your-experiment" Barcelona Campaign
The International Space University is the leading university in the space sector, providing top education under its three lines of inspiration: International, Interdisciplinary and Intercultural. As part of its educational curricula it organizes every year an intensive 9-week Summer Space Program. This program is attended every year by more than 100 graduate and undergraduate students from all over the world. During the program, they are exposed to a number of fundamental core lectures, workshops and departmental activities. The last three weeks are dedicated entirely to the development of a Team Project on a topic related to space activities [8].
During the recent SSP10, one of the proposed activities was to design and actually build an experiment to be flown with our platform. A 1-hour workshop was conducted by us, in which the students were introduced to the basics of parabolic flight and the special features of our platform, and then they were challenged with the possibility of actually flying their designs with us. The students were given the detailed requirements that had to be taken into account, as well as the mandatory safety requirements.
A 1-hour guided work session was granted, during which the students formed their teams and began making their experiment designs, with the mentoring of professors experienced in this particular field.
Finally, the students had to develop and submit a detailed form, in a professional way, describing all aspects of their experiment, with the endorsement of an expert professor in the space field. A selection process is currently underway based on this form, with the three best experiments to be chosen and entitled to fly with our platform.
We are particularly impressed by the quality of some of them, which will most likely provide high-quality, meaningful data. Throughout the whole process of designing, building and flying the experiment, students clearly benefit from the interaction with a leading university in the space sector and from the innovative challenge of actually testing their ideas in zero-g.
The ISU SSP10 Barcelona Aerobatics Challenge students' flight campaign will probably take place in the October-November timeframe at Sabadell Airport, Barcelona, Spain.
After this calibration period, a non-profit joint venture has been set up by the three institutions in Barcelona (Catalonia, Spain) involved in the development of the technique. Researchers in Europe can benefit from this opportunity thanks to a continuously open call for proposals. Educational activities have been from the beginning an essential part of our motivation, and they have provided meaningful results and a number of flight opportunities for student experiments, as well as tutorials after data collection. The Space Generation Advisory Council, a leading group of space enthusiast students and young professionals, has been invited by us to lead an international challenge for students all over the world to design and fly their experiments in zero-g. We expect this competition to start with their involvement before the end of this year.
We are convinced that this innovative microgravity platform is already making an impact and inspiring students around the world to take an interest in space medicine and research. Therefore, we certainly look forward to expanding these activities in the upcoming years.
IV. CONCLUSIONS
We were the first to report a successful series of parabolas performed with a light aerobatic plane carrying a life sciences experiment on board. Between 5 and 8 seconds of microgravity were attained at a very low cost. The optimization of the technique has made it possible to provide a g quality between 0.1 g and 0.01 g, with a reduction in g jitter that depends on the strength of wind gusts. Very limited time is needed to prepare and perform the experiment, so this approach is specifically suited to rapid prototyping tests of technology, or to simple experiments that do not need large or sophisticated equipment.
REFERENCES
[1] Messerschmid E., Bertrand R., Space Stations: Systems and Utilization, pp. 300-310, Springer Verlag, Berlin, 1999. ISBN 3-540-65464-X.
[2] Pletser V., "Short duration microgravity experiments in physical and life sciences during parabolic flights: the first 30 ESA campaigns", Acta Astronautica, 55(10), 829-854, 2004.
[3] Pérez-Poch A., González D.V., "Aerobatic flight: an innovative access to microgravity from a centennial sport", 58th International Astronautical Congress, Glasgow, 2008. Conference Paper IAC-08-A2.3-12.
[4] Pérez-Poch A., "On the role of numerical simulations in studies of reduced gravity-induced physiological effects in humans. Results from NELME", Proceedings of the 38th COSPAR General Assembly, Bremen, July 2010. Submitted to Advances in Space Research.
[5] BAIE Announcement of Opportunity for European researchers/students: http://www.bcnaerospace.org/public/new.php?id=51, retrieved on 30th August 2010.
[6] CRAE-UPC Announcement of Opportunity for European researchers/students: http://recerca.upc.edu/crae/news/acrobatic-flight-an-innovative-access-to-microgravity-from-a-centennial-sport-announcement-of-opportunity-for-upc-students-researchers, retrieved on 30th August 2010.
[7] Schroeder J.W., Zurita D., "Aerog - the portal to weightlessness. Aerobatic flights as an educational platform for microgravity experiments", 60th International Astronautical Congress, Daejeon, 2009. Conference Paper IAC-09-E1.4.6.
[8] International Space University, Summer Space Program 2010 official website: http://ssp10.isunet.edu, retrieved on 3rd September 2010.
61st International Astronautical Congress, Prague, CZ. Copyright ©2010 by the International Astronautical Federation. All rights reserved.
IAC-10-D5.2.10
A NEW INFORMATION SYSTEM ARCHITECTURE FOR A NEW SPACE EXPLORATION
PARADIGM: USING STAKEHOLDER ANALYSIS TO REENGINEER THE VALUE CHAIN
Prof. A.Pérez-Poch
EUETIB Escola Universitària d’Enginyeria Tècnica Industrial de Barcelona, Software and Information Systems
Department, UPC, Universitat Politècnica de Catalunya, Barcelona, Spain, [email protected]
An information system architecture is proposed that takes into account the map of stakeholders and their relations in a human space exploration venture led by private entrepreneurship and commercial ventures. The model is based on a stakeholder analysis that enables us to capture the main mechanisms that add value to today's visions for space exploration.
Current visions for human space exploration have turned their focus to commercial and private companies rather than to public funding. The cancellation of the Constellation program may deeply affect the development of technologies for human space exploration. Nevertheless, this decision will in fact open new opportunities for private entrepreneurs to participate in space exploration. In order to ensure the success of these new ventures, requirements should be rewritten from the beginning, allowing the main stakeholders to produce benefits in the value chain. A detailed value chain flow model is proposed, and an adaptive neural network of value propagation is discussed. We conclude that advances in the modeling of information system architectures are useful not only for identifying key stakeholders in global enterprises such as human exploration, but also as a tool for reengineering the whole process and adapting it to new vision paradigms.
I. INTRODUCTION
Stakeholder analysis has gained importance in the last decade as a key process in corporate analysis. However, its implementation in large companies and, in particular, in large public enterprises has proven to be difficult, not to mention in a combination of both. Requirements analysis is a well-known technique that is widely used in many project management routines. Usual requirements analyses tend to select a particular set of architectures based on technical merit rather than on any other criterion. Stakeholder analysis is only taken into account in a later step of the design process, with only minor consideration of its importance.
However, space exploration is basically a human endeavour. The rationale for venturing into space is not based on technical reasons, but on the will of the human mind to go further and explore the unknown. Space anchorman Walter Cronkite, during the 2002 IAC Opening Ceremony, described the arrival of men on the Moon as 'the most important moment in human history'. He was not praising this enterprise merely because of its difficulty in technical terms; rather, he was referring to its value to mankind, as an accomplishment that produced enormous value for a wide range of human beings. Without the social and political support of those times for setting foot on the Moon, it would not have been possible to even begin thinking in economic and technical terms.
Therefore, it is clear that stakeholders are not only important for any space endeavour, but fundamental to it. Without wide public support from a number of space actors it is impossible to even consider a large investment in a space activity. Space is a particular field in which stakeholders are key vectors of the whole enterprise.
In large government space projects it has been difficult to identify and analyse the role of every stakeholder. The involvement of the private sector is a growing trend in space activities, with large publicly funded projects being cancelled and private companies entering the development of new space technologies. The private sector implies a bigger role for a group of space stakeholders, with starring characters that were unthinkable decades ago.
In spite of these considerations, the information system architectures currently under consideration in the space area are mostly atomized and do not take into account the relevant role of the stakeholders that create value and momentum for space activities.
We first propose that the value chain vector should be considered in order to identify which stakeholders are most relevant to space endeavours. We state that, from a strategic point of view, the identification and analysis of the stakeholders adding value to the process should be the core of the design process, and not a secondary addition to technical considerations.
Design solutions with a proper understanding of the system's stakeholders will be those with early and clearly defined roles for them, with later decisions made in accordance with their presence in the value chain of the project. The fundamental aim of this paper
is to provide a general framework that reduces the gap between the stakeholder identification process and the technical considerations.
We begin in Section 2 by considering the stakeholders, their needs and the relationships between them. In Section 3 we address the value chain as a vector for reengineering space activities as private business takes a major role. In Section 4 we provide a basic tool with metrics to optimize the process of organizational change. A discussion of the practical implications of such a framework is given in the final section.
II. SPACE STAKEHOLDERS
Stakeholders are defined as those individuals, entities or organizations that have a role in a given process. Stakeholder analysis is usually aimed at finding the organization design that best optimizes its effectiveness. The work is performed by focusing on the stakeholders that play a substantial role in the value chain of the company. Identifying basic needs and the main relationships is most relevant for the public sector, where the concept of 'added value' is more difficult to pin down.
If we want to identify the key stakeholders in the space area, the question should be: who are the stakeholders of space exploration that will make value grow? A review of the literature [1-5] shows that the major actors have already been identified. Science, Security, International Partners, the Economic Area, the Executive & Congress, the People, Educators and the Media are the main groups of people and organizations that typically add value in the United States, according to these references. Some of them, like Educators and Media, are mainly intermediaries with the People. Finally, the major public space agency in the US, NASA, is noted, to which the private sector should be added in an emerging and growing role.
Exploration missions require that the people involved in these areas make the benefit, tangible or intangible, that emerges from the space activity flow. The overall process of identifying the stakeholders, assigning them a proper role and establishing the interrelationships between the different systems involved is known as the design of the stakeholder model.
III. VALUE CHAIN AS A VECTOR TO REENGINEER THE PROCESS
Once the basic modeling process is done, we will have a detailed map of the connections between the different stakeholders involved. The process model is a dynamic one, although only a steady-state snapshot of the whole system is considered.
At this point we introduce the concept of the value chain, which comes from industry and information systems architecture. A value chain is a collection of value flows, connected by stakeholders, that are relevant to the process. Major white papers and requirements standards [6] refer to these concepts in the space area as well as in others.
The chain has the responsibility to change and add a definite value to the system. Only the stakeholders that form part of input-output flows are relevant to the reengineering process.
By reengineering we mean a major organizational change that aims to optimize the creation of value within the system. A reengineering process based on the value chain should follow these steps: 1) defining value for our system; 2) modeling the stakeholders matrix; 3) identifying the key stakeholders which contribute to the value chain; and 4) rearranging the value flows in the organization to reinsert key stakeholders into the value chain.
According to [5], individual flows are categorized into six groups: Policy, Money, Workforce, Technology, Knowledge, and Goods and Services.
In the process of creating a value flow model framework, a number of decisions have to be made in order to simplify the value loops and make the model easily understood. Value loops are defined as value chains that return to the starting stakeholder. The simplification of this map has no standard procedure and depends on the level of detail needed in the reengineering exercise.
The overall system is then redesigned in order to help the value chain grow, and to lessen interferences and the expenditure of resources on areas that do not really add value to the system.
IV. QUANTIFICATION AND METRICS
The process of reengineering an organization is often regarded as a holistic one. Different levels of detail are observed among authors, but it is somewhat difficult to quantify what the necessary changes in an optimization process are.
A framework to help reorganize and optimize the value chain should be composed of:
- A stakeholders matrix.
- A value flow model.
- Metrics.
- An optimization application tool.
- A feedback process.
The stakeholders matrix and the value flow model have been described in the previous section. Metrics are a part of the reengineering design process. Qualitative variables can be quantified in ordinal terms. The number of individual people and of small institutions (such as schools) has to be estimated, and the number of people involved in the space activity derived from it. The output-input flow is then derived from the value map.
It is somewhat difficult to assess science or education results in terms of 'what space exploration inspires'. Usual quality measures in education or science evaluation can be used, such as the number of degrees attained in the space area, the number of papers published in peer-reviewed journals, etc.
An important design decision is the weight associated with every single indicator in the model. The optimization algorithm should be easy to implement and of well-established effectiveness.
In our particular example we decided to parametrize the influence of the emerging private sector. We added to the model introduced by [5] a group of private companies as a block, interconnected in the same way as the public space agency. The number of people involved in these activities was estimated to be a fraction of the overall workforce. We then constructed a flow map duplicating the input-output chain of the public sector, keeping the rest of the stakeholders, such as the public or science, intact. We included the restriction that every increase step for the variables of the private sector should be accompanied by a decrease step for the public ones.
For our particular study we chose as the value the overall public understanding of space science, including space exploration outcomes, space science, and the increase in education and public understanding outreach.
An adaptive neural network was chosen as the optimization algorithm, and a process of iteration was conducted until the value loops reflected a near steady state.
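Neither the full stakeholder model nor the adaptive network used here is detailed enough in this paper to reproduce. The following much-simplified sketch only illustrates the general idea of iterating value flows over a stakeholder matrix until a near steady state is reached; the stakeholder list and the flow weights are invented for the example:

# Simplified sketch: iterate value flows over a stakeholder matrix until a
# near steady state. Stakeholders and weights are illustrative assumptions,
# not the model of the paper.
import numpy as np

stakeholders = ["Agency", "Private", "Science", "Educators", "Media", "People"]

# flows[i, j] = fraction of stakeholder i's value passed to stakeholder j per iteration
flows = np.array([
    [0.0, 0.1, 0.3, 0.2, 0.1, 0.3],   # Agency
    [0.1, 0.0, 0.2, 0.1, 0.2, 0.4],   # Private sector
    [0.2, 0.1, 0.0, 0.4, 0.2, 0.1],   # Science
    [0.0, 0.0, 0.1, 0.0, 0.2, 0.7],   # Educators
    [0.0, 0.1, 0.0, 0.1, 0.0, 0.8],   # Media
    [0.3, 0.2, 0.1, 0.2, 0.2, 0.0],   # People (support fed back into the chain)
])

value = np.ones(len(stakeholders)) / len(stakeholders)   # initial value distribution
for step in range(500):
    new_value = value @ flows                            # propagate value along the flows
    new_value /= new_value.sum()                         # keep total value normalized
    if np.abs(new_value - value).max() < 1e-9:           # near steady state reached
        break
    value = new_value

for name, v in zip(stakeholders, value):
    print(f"{name:10s} {v:.3f}")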
The system evolved towards a significant part of the value chain being taken over by the private sector, with a pruning of the number of interconnections among public organizations. The independent variable was optimized and, in doing so, a number of interconnections had their values decreased nearly to zero, which suggests they should disappear, while others began to grow. The overall results suggest that a significant part of the value chain could be taken over by the private sector, gaining value for the system while reducing the overall costs.
A significant result was that the risk of downgrading the benefits was higher when the public funding was kept low, or when the public workforce was decreased below a certain threshold.
DISCUSSION AND CONCLUSIONS
We have presented the concept of stakeholder analysis in relation to the value chain. We have considered the reengineering process as a vector for organizational change that allows workforce and economic efforts to be focused on the processes that add value to the system.
Based on space stakeholder models from the literature, we have added a more relevant private sector to the system and considered what implications it may have on the effectiveness of the variables involved.
Quantification of the variables involved in the value map allows the implementation of an optimization method that visualizes the possible changes that may arise from the new involvement of the private sector in the stakeholders matrix. Preliminary results are consistent with what is expected from the new directions of the major components of the US space program.
More work needs to be done in order to define more precisely the optimum metrics in the stakeholders map and the neural network architecture used to optimize it. The process of quantifying and optimizing the map has proven successful for proposing long-term organizational changes in the space arena that will make space exploration more plausible and cost-effective in human and economic terms.
Acknowledgements
We are grateful to Mr Angel Linares-Zapater, Serinfo Information Systems CEO, for fruitful insights and discussions. EUETIB School of Engineering, UPC Universitat Politècnica de Catalunya 'Barcelona Tech', has funded the study presented here.
REFERENCES
[1]
E. Rebentisch, E. Crawley, G. Loureiro, J.
Dickmann, J. Catanzaro, Using stakeholder analysis to
build Exploration sustainability, in: 1st Space
Exploration Conference: Continuing the Voyage of
Discovery, Orlando, Florida, January 30-1, 2005,
AIAA-2005-2553.
[2] NASA Exploration Systems Architecture Study, NASA-TM-2005-214062, http://www.sti.nasa.gov, November 2005, pp. 194, 541.
[3] M.C. Jensen, Value Maximization, Stakeholder Theory and the Corporate Objective Function, Harvard Business School (Negotiation, Organization and Markets Unit), Working Paper No. 01-01, October 2001. URL: http://papers.ssrn.com/abstract-220671 (cited 6 November 2004).
[4] Hoffman, Edward J., "NASA Systems Engineering Handbook", SP-610S, June 1995.
[5] Cameron B.G., Crawley E.F., Loureiro G.,
Rebentisch E. Acta Astronautica 62 (2008) 324-33.
[6] NASA Systems Engineering Processes and
Requirements, NPR7123.1, Effective March 13, 2006.
61st International Astronautical Congress, Prague, CZ. Copyright ©2010 by the International Astronautical Federation. All rights reserved.
IAC-10-E1-3.15
PROGRAMAESPACIAL.COM: A DREAM COME TRUE
Prof. Antoni Pérez-Poch
EUETIB, Escola Universitària d’Enginyeria Tècnica Industrial de Barcelona.
Universitat Politècnica de Catalunya, ‘Barcelona Tech’. Spain
[email protected]
Claudio Javier Mariani, Sergio Ezequiel Taleisnik
[email protected]
Programaespacial.com is a collaborative, totally non-profit venture that runs an educational website and broadcasts live Shuttle launches and other space events in the Spanish language.
Back in the year 2006, two Argentinean friends who had met in an online spaceflight forum felt they could no longer find any place on the Internet to share their love and passion for spaceflight the way they wished. That is why they decided to create their own spaceflight website: programaespacial.com. The main goal of the project is to educate and to promote interest in spaceflight and science in Spanish-speaking communities. In order to do so, project collaborators work voluntarily to inspire people by means of the different features the project consists of. Throughout these four years, the website has grown considerably, evolving from a simple spaceflight website to an educational venture. Many people have joined the project and contribute to it every day.
The Broadcasting Centre is one of the main features of the website. The centre consists of a webpage where the visitor can watch NASA TV in English and at the same time read live, updated Spanish texts uploaded by the broadcaster. The broadcasts are carried out during major spaceflight events, mainly Space Shuttle launches and landings. The broadcasts have included live updates by a website correspondent present at the Kennedy Space Centre (KSC), who has become the only Latin American media representative in attendance at the spaceport. Last year, the Centre started to broadcast both text and audio; the audio broadcast allowed a more fluid and enriching transmission, letting the speakers explain concepts more deeply and conduct live interviews.
Another important project carried out by programaespacial.com members is "RDH", the Hondareyte Digital Reconstruction. It consists of the digitization of old spaceflight radio broadcasts in Spanish, recorded by a professional radio operator called Luis Hondareyte.
The website also includes a forum, where members of the community can exchange their opinions, experiences and doubts.
I. INTRODUCTION
Space exploration has been a matter of interest since the earliest times. Our ancestors tried to explain what they saw up in the skies by means of theories and myths. The invention of the telescope, together with further scientific breakthroughs by brilliant minds such as Kepler, Newton, Euler, Lagrange and many others, offered people a better understanding of the universe. Despite all the scientific advances, it was not until the mid-twentieth century that the human race saw the technical possibility of fulfilling the dream that Jules Verne had seeded throughout the world with his 1865 novel From the Earth to the Moon: the dream of not only watching the stars, but flying to them. The first ones to travel were machines, followed by plants and animals and, finally, in 1961, the first human orbited the Earth. From that day on, spaceflight provided further knowledge about the universe that surrounds us, together with a battery of new technologies developed to perform those journeys.
But it was not only scientific knowledge that space exploration offered us: the possibility of finally traveling to space became a source of interest and inspiration to millions around the globe. Just as former NASA engineer Homer Hickam relates in his autobiographical novel how he became interested in science after watching Sputnik fly over his coal mining town, hundreds of thousands of people find in spaceflight a source of motivation and encouragement to fulfill their personal objectives. Spaceflight binds science with adventure; it represents the accomplishment of massive challenges by means of hard work.
Hickam's story makes us think about how to motivate more and more people not only into spaceflight but into science-related subjects.
Unfortunately, we live in a world where space
exploration does not represent big news any more.
Fortunately, communication technologies are offering,
day after day, new tools to inform and teach spaceflight
news and concepts. During the late 60s and early 70s, the Apollo missions were broadcast to the entire world by international broadcasting companies. Nowadays, small groups organized by enthusiasts carry the flag of spaceflight to space-isolated communities.
That is the case of programaespacial.com, a non-profit educational venture created to inform, motivate and inspire Spanish-speaking people throughout the entire world.
II. BIRTH AND EARLY DEVELOPMENT OF
PROGRAMAESPACIAL.COM
Mr. Claudio Mariani is an Argentinean graphic designer who has been passionate about spaceflight since his early years. Back in the year 2005, Mr. Mariani met a young Argentinean physicist, Mr. Pablo Traverso, in an online forum. They became best friends and, in a short time, they started to envision a project in which they could channel their passion for spaceflight by sharing their knowledge with other people and, at the same time, learning from them, creating a true spaceflight cooperative learning community.
The original idea evolved and resulted in an educational venture designed to educate and to promote interest in spaceflight and science in Spanish-speaking communities. The project was called programaespacial.com and it officially began in the year 2006.
In its early months the site was merely a news blog but, as time went by, the two friends gave birth to the main features of the project, which include a broadcasting centre, an enhanced news section, a personal blogs section and an online forum.
Ever since its creation, many people have
joined the project. Some of them contributed to a
particular feature, and others are still participating.
III. THE BROADCASTING CENTER
Public affairs policies in some of the busiest space agencies, such as the European ESA, the Japanese JAXA or the Russian RKA, are clearly different from NASA's. This can easily be deduced by watching NASA TV. Originally created to provide the agency's Space Shuttle Program managers and engineers with real-time video of Shuttle operations, it evolved into 24/7 informational and educational programming on space exploration, space science, earth science and aeronautics research, provided to the media and to U.S. citizens.
NASA TV is available in the United States only through cable operators, and in the rest of the world through the Internet. Unfortunately for Spanish-speaking communities, NASA TV is broadcast only in English.
The scientific importance of Space Shuttle missions, together with the milestone significance and visual impact of Shuttle launches and landings, makes these human spaceflight events the perfect occasions to interest the general public.
Not long after the birth of programaespacial.com, Mariani and Traverso decided to begin a series of broadcasts in Spanish on the website during Shuttle launches and landings. They managed to create an interface where the visitor could watch NASA TV's live broadcast and at the same time read a board with comments in Spanish, so as to be able to understand the activities that were being carried out; the board was updated every 30 seconds in order to keep visitors constantly informed. The feature was named the Broadcasting Centre and, ever since its creation, it has covered every single Shuttle launch and landing.
The broadcasts inform visitors about launch and landing activities, flight data and mission information such as objectives, crew and schedule. The length of the broadcast depends on the event and on the occurrence of delaying issues: normally, launch broadcasts last approximately 5 hours and landing broadcasts no more than 2 hours. Landing delays due to weather constraints extend broadcasts by about 3 or 4 hours.
To increase participation and interest, the
Broadcasting Centre includes a live conversation board
where visitors can express their opinions or ask
questions.
One of the most important features the Broadcasting Centre has included is the correspondent present at NASA's Kennedy Space Center in Florida. Martín Juárez has collaborated with the broadcasts on numerous occasions by providing an up-close, personal look at the activities being carried out. Mr. Juárez has also attended pre- and post-launch/landing press conferences and has had the chance to be present at numerous Shuttle processing milestones such as rollout from the OPF to the VAB, rollover from the VAB to the launch pad, and RSS retraction hours before launch. Each activity he performed at the KSC was further reported in articles.
Audio broadcasts
Starting with the launch of Shuttle mission STS-129 in late November 2009, the Broadcasting Centre began transmitting audio while keeping the traditional text board.
Just as with the traditional broadcasts, the speakers are not located in the same place; on the contrary, they transmit from different locations: Mr. Sergio Taleisnik from Cordoba City (Argentina), Mr. Traverso from Buenos Aires (Argentina) and Mr. Mariani from Quilmes City (Argentina) are the main speakers. The speakers communicate through a multi-party conference using IP telephony. The dialogue converges on Mr. Taleisnik's computer; he mixes the conversation, equalizes it, and then uploads it to the Internet using free online broadcasting services.
The audio broadcasts allowed a more fluent communication with visitors: they transformed the Broadcasting Centre into a genuine audio show. The first transmissions consisted basically of reading what the broadcasters used to post on the broadcasting board but, as time went by, the program evolved into a complete radio show including news reporting, live interviews and discussions about relevant spaceflight topics. The audio broadcast allowed the speakers to communicate live with Juárez at the KSC, where he could convey more deeply his experience of being in the place where the events were happening.
Together with the main speakers, recent audio transmissions have included the permanent participation of science writer and producer Angela Posada-Swafford. Angela lives in Miami and has extensive experience with spaceflight. Her participation in the Broadcast Centre has attracted numerous listeners from Latin America, who delight in Angela's stories and experiences related to human spaceflight. Angela is usually at the KSC for Shuttle launches and landings, so she habitually speaks from the spaceport.
IV. THE RDH PROJECT
Back in July 2008, an interview that Mariani gave to a local newspaper caught the attention of Mr. Luis Hondareyte, a radio broadcaster who had recorded the Spanish-language transmissions of NASA's Mercury, Gemini, Apollo and early STS missions from a radio station known as "The Voice of America".
Hondareyte contacted Mariani and offered him those recordings. They both decided to digitize them in order to save them from potential deterioration, which would have ended in the total loss of the recordings. The final objective of the project would be to publish the digitized work on the Internet so that people around the world could gain access to it.
The project was named "Reconstrucción Digital Hondareyte (RDH)", or "Hondareyte Digital Reconstruction", after Luis, who was the one who recorded the original tapes.
In order to perform the digitization, Mariani contacted his friend Mr. Damián Ferroso, who had extensive experience with digital and analog recordings and was a space enthusiast as well.
Ferroso was in charge of the entire digitization process. First he had to find an apparatus that could play the old tapes; then he had to ensure that playing the tapes would not destroy the originals; and finally he had to find a way to connect the playback appliance to a digital recording device and perform the digitization.
After the digitization, Mr. Osvaldo Pulqui, a professional speaker and a friend of both Mariani and Ferroso, was contacted to record a series of messages to be included in each recording.
The project received the collaboration of Mr. Jorge Cartès, one of the official designers of Space Shuttle and International Space Station Expedition patches and an active member of programaespacial.com, who accepted the community's proposal that he design an official RDH patch. His design was produced and improved with feedback provided by programaespacial.com members on the site's forum.
In a first stage, once the digitization was complete, the RDH project opened its website, www.proyectordh.com, through which visitors can listen to different recordings every week.
V. NEWS ARTICLES AND BLOGS
On a daily basis, the website's story editors upload spaceflight articles. The articles are mainly pieces of news related to rocket launches and human spaceflight missions. However, there are also articles about spaceflight history, astronomy and technology.
The articles are the product of individual research and writing carried out by each editor. The Internet is the main source of information: the official websites of the space agencies and specialized spaceflight websites are the preferred sources. NASA TV is also quoted in articles about broadcast events.
In order to make the articles interesting for a considerable group of potential readers, the information acquired has to be translated and processed. The sources are never written in Spanish, and their level of complexity is usually too high for a non-specialist public. Programaespacial.com was not envisioned as a mere spaceflight news website in Spanish: its objective has always been to educate and to promote spaceflight in Spanish-speaking communities and, in order to achieve that goal, the information must be adapted to a more popular level of knowledge.
Programaespacial.com articles also include special reports and interviews carried out by members of the project. Last year, Mr. Mariani and Mr. Traverso interviewed NASA astronaut Chris Cassidy at the US Embassy in Buenos Aires during an official visit to Argentina.
Articles published in programaespacial.com are quoted in several Internet blogs and forums. Last year, an article published by Mr. Taleisnik regarding the upcoming Ares I-X flight was quoted in an article Ms. Posada-Swafford wrote for the Spanish science magazine Muy Interesante.
Apart from the news articles, programaespacial.com also features a special blog section where Claudio Mariani, administrator of programaespacial.com and space memorabilia collector, Angela Posada-Swafford, science writer and producer, and Martín Juárez, correspondent at the KSC, write about their experiences. This section contributes subjective information to the project: the personal experiences of people somehow involved with space science motivate, inspire and are of great interest to the general public.
VI. ONLINE FORUMS
The explosive expansion of the Internet since the mid-1990s has fostered the proliferation of virtual communities. In this context, programaespacial.com built its own through an online forum called "Comunidad Espacial" (Space Community).
Just like any standard Internet forum, Comunidad Espacial helps members of the community and regular visitors to communicate by allowing them to express their opinions, exchange their knowledge and ask questions. The forum also acts as an integration zone, where incoming visitors can introduce themselves so as to begin participating both in the forum and in the entire project.
The biggest challenge that the project will face in the forthcoming years is to achieve a higher level of organizational sustainability. The path to achieving this is to expand and ensure the continuity of its volunteer crew, strengthen their commitment to the project, and carry out new educational features.
Funding is a key factor in sustainability. Nowadays, the project is supported solely by Mr. Mariani's contributions. Unfortunately, finding support for such a project is extremely hard, taking into account the low popularity of space science among the general public and the limited importance of this subject in local politics. Nevertheless, demonstrating the importance of informal science education and acquiring external support to enhance further activities is one of the challenges the project will face in the coming years.
Considering the imminent retirement of the Space Shuttle, one of the challenges will be to create new shows on the Broadcast Centre in order to replace the frequent Shuttle launch and landing broadcasts. Several ideas are already being considered, such as weekly radio programs including news, interviews and live debate. On the other hand, just as the transition from text to audio transmissions was the first big step, the upcoming challenge will be to broadcast audio and video together, transforming the Broadcast Centre into a true Space TV in Spanish.
Finally, one of the ultimate goals of the project is to engage in formal education by means of regular visits to academic institutions. As schools provide regular science education, the objective of the project will be to inspire young students through interactive presentations. The main target will be elementary schools, but the plans also include kindergartens and high schools. Mr. Mariani has already given several presentations in schools of Buenos Aires.
VII. FUTURE CHALLENGES AND OPPORTUNITIES
More than four years have passed since the beginning of the project. A lot of work has been done and much is still left to do. The project has evolved just as space exploration did, and both will continue to do so. In the next 12 months the Space Shuttle will be retired. In the upcoming years the International Space Station will reach full international scientific capability, China will face new human spaceflight challenges, commercial entrepreneurs will prove their competence, and developing space agencies will enhance their own space exploration ventures.
Electrochemical analysis of peptide-functionalized titanium dental implant surfaces
D. Rodríguez1,2,3, P. Sevilla2,3, G. Vidal1, F.J. Gil2,3
[email protected]
1 E.U. Enginyeria Tècnica Industrial de Barcelona (EUETIB), Technical University of Catalonia (UPC); C. Urgell 187, 08036 Barcelona (Spain).
2 Center for Research in NanoEngineering (CRnE), UPC.
3 Biomaterials, Biomechanics and Tissue Engineering Group (BIBITE), UPC.
Objective
Analysis of the functionalization of titanium surfaces with peptides1 is not immediate. This study compares standard analysis techniques with electrochemical techniques.
Methods
Clean titanium surfaces were plasma-activated and silanized with APTES (Ti+APTES samples). Silanized samples were immersed in peptide solution (4-Morpholineethanesulfonic acid (MES) buffer, pH 6.0, with EDC/NHS) overnight at room temperature. GGRGDSGG peptides (RGD: cell adhesion motif)2 were linked to the silane through the carboxylate group.
Clean titanium samples (Ti samples) and Ti samples with adsorbed RGD peptide (Ti+RGD samples) were used as control groups. Replicas were made to allow statistical analysis of the data.
Samples were characterized with XPS, ToF-SIMS, contact angle and FTIR-DR techniques. Electrochemical characterization of the samples was done with a ParStat 2273 potentiostat (medium: HBSS at 37ºC; reference electrode: KCl electrode; counterelectrode: graphite bar). Tests included a free potential measurement, cyclic voltammetry and Electrochemical Impedance Spectroscopy (EIS) with a sweeping range of 64 kHz-2 mHz and a signal of 50 mV.
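As a rough illustration of how the EIS sweep described above can be interpreted, the following sketch computes the impedance of a simple layered equivalent circuit (solution resistance in series with R-CPE pairs representing the TiO2, silane and peptide layers, i.e. the Rb/Qb, Rp/Qp and Rs/Qs elements referred to in the Results). The circuit topology and all parameter values are illustrative assumptions, not fitted values from this study.

```python
import numpy as np

def z_cpe(Q, n, omega):
    # Constant-phase-element impedance: Z = 1 / (Q * (j*omega)^n)
    return 1.0 / (Q * (1j * omega) ** n)

def z_parallel(z1, z2):
    return z1 * z2 / (z1 + z2)

# Frequency sweep matching the reported EIS range (64 kHz down to 2 mHz)
f = np.logspace(np.log10(64e3), np.log10(2e-3), 200)
omega = 2 * np.pi * f

# Hypothetical parameters; NOT values measured in this work
R_solution = 30.0
layers = [
    (1e6, 2e-5, 0.92),   # TiO2 barrier layer (Rb, Qb, nb)
    (5e4, 8e-5, 0.85),   # silane layer (Rp, Qp, np)
    (1e4, 3e-4, 0.80),   # peptide layer (Rs, Qs, ns)
]

Z = R_solution + sum(z_parallel(R, z_cpe(Q, n, omega)) for R, Q, n in layers)
print(f"|Z| at 64 kHz: {abs(Z[0]):.1f} ohm, |Z| at 2 mHz: {abs(Z[-1]):.2e} ohm")
```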
(Figure: 3-aminopropyl triethoxy silane (APTES) and the titanium surface (TiO2) functionalized with APTES silane + peptide, linked through an amide bond at the peptide N-terminus.)
Results
Surface chemical analysis
Results of XPS, contact angle (not shown) and ToF-SIMS (not shown) confirm that the samples were silanized and functionalized as expected, detecting the presence of silane and silane+peptide, respectively, as shown for XPS (presence of Si on silanized samples and N on functionalized samples). (Table: chemical composition of the samples (at%), not reproduced.)
FTIR
FTIR was used to analyze the presence of silane and peptide on the titanium surface. The FT-IR spectra indicate covalent bonding of the silane. No clear peak related to the presence of the peptide in the Ti+APTES+peptide sample was detected.
Cyclic voltammetry
Measured values of free potential showed some differences between samples (Ti: 0.102 V, Ti+APTES: -0.141 V, Ti+APTES+RGD: -0.168 V). Cyclic voltammetry showed significant differences in current intensity for the Ti+APTES+RGD samples compared to the other samples.
EIS
The EIS models assume the presence of a TiO2 layer (Rb, Qb), and a double layer of silane (Rp, Qp) and peptide (Rs, Qs). The best-fitting parameters present significant differences between functionalised samples and the rest of the samples.
Conclusions
The presence of APTES silanes and peptides such as RGD sequences on the surface of titanium can be detected and studied with electrochemical measurements.
References
1 Chollet C. et al, Biomaterials. 2009; 30:711-20.
2 Ruoslahti E. et al, Annu Rev Cell Dev Biol. 1996;12:697-715.
Acknowledgements
The authors would like to thank prof. Carlos Aleman and Dr. Elaine
Armelin (CRnE, UPC) for the use of the electrochemical equipment.
Supported by the Spanish Ministry of Science and Innovation through
Project MAT2008-06887-C03.
Detailed Study of the Rotor-Stator Interaction Phenomenon in
a Moving Cascade of Airfoils
Alfred Fontanals1, Miguel Coussirat2, Alfredo Guardo2 and Eduard Egusquiza2
1 Fluid Mechanics Department, EUETIB, Universitat Politècnica de Catalunya, Compte d'Urgell 187, Barcelona, 08036, Spain, [email protected]
2 Centre de Diagnòstic Industrial i Fluidodinàmica, Universitat Politècnica de Catalunya, Av. Diagonal 647, Pab. D+1, Barcelona, 08028, Spain, [email protected], [email protected], [email protected]
Abstract
In turbomachinery the Rotor-Stator Interaction (RSI) is an important phenomenon that has a strong influence on the
machine behavior. These interactions can have a significant impact on the vibrational and acoustical characteristics of
the machine. Unsteadiness and turbulence play a fundamental role in the complex flow structure, and the use of Computational Fluid Dynamics (CFD) is becoming a usual requirement in turbomachinery design due to the difficulty and high cost of the experiments needed to identify RSI phenomena. The flow inside a turbomachine working under design conditions is complex, and it becomes even more complex under off-design conditions due to the boundary layer separation phenomena. Therefore, the choice of an appropriate
turbulence model is far from trivial and a suitable turbulence modeling plays a very important role for successful CFD
results. In this work the RSI generated between a moving cascade of blades and a fixed flat plate located downstream was studied by means of CFD modeling and compared against experimental results. Design and off-design conditions were
modeled and a detailed comparison between them has been made. To analyze in detail the flow pattern, mean velocities
in the boundary layer were obtained and compared against experimental results. Furthermore, results concerning turbulence intensity were compared against an experimental database. It was observed that, for each operating condition, the flow in the cascade shows special features. For flow inside the turbomachine under design conditions there is no
separation, the wake is thin and the characteristic length of the eddies is small. For off-design conditions, there is a large
separation and the wake is thick with large eddies. The results obtained can be used to obtain a deeper insight into the
RSI phenomena.
Keywords: Turbomachinery, Rotor Stator Interaction, Computational Fluid Dynamics, Turbulence
1. Introduction
Due to its complexity, the blade and vane design in turbomachinery has traditionally been based on the assumption that the flow in both the impeller and the diffuser is turbulent but steady. The steadiness, however, implies that the radial gap between impeller
discharge and diffuser inlet is large so that no flow unsteadiness of any kind due to blade row interaction would occur [1,2].
However if the rows are closely spaced, there may be a strong interaction that influences both the aerodynamics and structural
performance of blades and vanes. In some cases, this has led to vane failure. This phenomenon is called rotor-stator interaction
(RSI).
Nowadays, computational fluid dynamics (CFD) is broadly used to help the design of turbomachinery and it is frequently used
to perform computations to solve RSI problems. From the viewpoint of the numerical prediction of RSI phenomena, it is not an
easy task to model this type of flow due to its complexity. The geometrical complexity of the impeller and the diffuser, the
turbulence of the flow and the unsteadiness phenomena play an important role in the RSI phenomena.
The RSI may be divided into two different mechanisms: potential flow interaction and wake interaction [3,4]. If the impeller
and the diffuser are closely spaced, both mechanisms will occur simultaneously. Potential interaction strongly depends on the
machine’s geometry and the relative movement between fixed and moving parts. In spite of that complexity, a theoretical analysis is possible, allowing its influence to be computed by means of a mathematical expression. Rodriguez et al. [5] carried out a theoretical
analysis to predict and explain in a qualitative way the frequencies and amplitudes of the potential interaction in turbomachinery.
The theoretical analysis incorporates the number of blades, the number of guide vanes, the RSI non-uniform fluid force and the
sequence of interaction. This analysis gives a resulting force over the turbomachine taking into account the frequencies of
interaction between blades and the relationship between amplitudes of pressure fluctuations.
Wake interaction is related to the wake behind the blades. This wake induces structural vibrations due to the vortex shedding and extends farther downstream. The turbulence cascade process induced by these vortices strongly influences the turbulence state of the flow, enhancing the energy dissipation and increasing the rate at which the vortices are dissipated [6]. The wake convects
downstream and arrives to the gap between the fixed and moving blades, generating a periodic flow structure as a result of the
interaction between the wake and the blades due to their relative movement. These structures are convected and affect the
boundary layer of the blades situated downstream of the interaction zone, generating unsteadiness in the structure of its boundary
layer [7].
Wake characterization is a subject of main importance in the study of RSI. Experimental results from Tsukamoto et al. [8]
show the behavior of pressure fluctuations in a radial pump working in design conditions using semiconductor-type pressure transducers installed in a guide vane. Values of instantaneous pressure were obtained in order to study the interaction between
impeller and diffuser vanes. Results obtained show that the pressure at a stationary point in guide vanes fluctuates with the basic
frequency of the impeller blade passing frequency and higher harmonics. The maximum values are observed near the leading edge
in the suction side of the guide vane. Results also show that the pressure fluctuation can propagate in circumferential direction.
Depending on the frequency of the harmonic the propagation direction could be opposite to impeller rotation. Wang et al. [9] did a
similar experiment but in off-design working conditions. Results obtained show that the impeller blade passing frequency and its
higher harmonics are always dominant in the pressure fluctuations downstream of the impeller for the whole flow range because
of the RSI phenomena, and that there exist some lower dominant frequencies in the pressure fluctuation downstream of the
impeller for unstable flow range because of the effects of the complex flow structures such as separating flow, rotating stall and
reverse flow. These lower dominant frequencies are dependent on the flow rate, and the unsteady pressure fluctuation is chaotic in
these unstable flow ranges.
Experimental results from Uzol et al. [10] and Chow et al. [11] using particle image velocimetry (PIV) in an axial compressor
show that the flow is composed of a lattice of wakes and the resulting wake-wake and wake-blade interactions cause major
turbulence and flow non-uniformities, showing that these interactions are dominant contributors to the generation of high
deterministic stresses and tangential non-uniformities in the rotor-stator gap near the blades and in the wakes behind them. These
non-uniformities in the flow structures have significant effects on the overall performance of the machine. The non-uniformities
are mainly composed of localized regions with concentrated mean vorticity and elevated turbulence levels. At these zones the wake is chopped off by the downstream blades. Due to differences in the mean velocity field, the wake segment on the pressure side of
the upstream blades is convected faster than the segment on the suction side (using an absolute frame of reference) creating
discontinuities in the stator wake trajectory, causing non-uniformities in the velocity field downstream. These non-uniformities are
“hot spots” with concentrated vorticity, high turbulence level and high shear stresses. Although the “hot spots” diffuse as they are
convected downstream, they still have an elevated turbulence level compared to the local turbulence levels around them.
It is also clear that the turbulence plays a fundamental role in the flow structure. Soranna et al. [1] studied a rotor working
downstream of a row of inlet guide vanes. Results show that the wake impingement significantly modifies the wall-parallel
velocity component and its gradients along the blade downstream. Due to spatially non-uniform velocity distribution, especially in
the suction side, the wake deforms while propagating along the blade, expanding near the leading edge and shrinking near the
trailing edge. Turbulence levels here become spatially non-uniform and highly anisotropic.
Experiments from Henderson et al. [2] are focused on the influence of the free-stream turbulence on the wake dispersion and
boundary layer transition process. Results show that increments in the free-stream turbulence level strongly enhance the dispersion
of inlet guide vanes wake. This fact modifies the interaction between stator and rotor wakes, leading to a significant decrease in
the periodic unsteadiness experienced by the downstream stator. These observations have important implications for the prediction
of the flow behavior in multistage turbomachines.
From the viewpoint of numerical simulation, all effects should be taken into account for a suitable modeling of RSI. The first
step is the suitable characterization of the boundary layer along the blade and the wake behind the blade. Several authors have
attempted to obtain both experimental and numerical results for boundary layers along bodies (e.g. cylinders and blades) and the
wake flows behind them, focusing on the wake structure. The most extensively studied case is the wake of cylinders (see e.g. [12-15]), and experimental databases of the wake generated by airfoils are also available in the literature (see e.g. Nakayama [16], Wang
[17] and Ausoni [18]).
The second step is the characterization of the wake interaction between moving and fixed blades. Experimental data of the
wake related with RSI in real turbomachines is more difficult to obtain. The main problem in these cases is to know the setup
geometry details (see e.g. [3][4][10] and [11]), but some interesting results in linear cascade of moving cylinders and airfoils are
available in the literature (see e.g. [7-19]). In this work the database from Gete and Evans [19] experiment was selected in order to
obtain results for this characterization step. After these steps, application for design of real turbomachines follows.
The main goal of this work is to obtain reliable and detailed numerical results that complement the experimental data on the
RSI in a multistage turbomachine. Suitable CFD modeling is critical for understanding the physical mechanism of the RSI and its
consequences in the turbomachine performance. This evaluation will contribute to the better understanding of this phenomenon.
Results obtained are directly applicable to turbomachinery RSI modeling.
2. Numerical Modeling
2.1. Geometry and Grid Generation
A turbomachinery stage was represented with a two-dimensional rotating rig and a flat plate arrangement in a wind tunnel (fig. 1). The moving mechanism comprises seven NACA0024 airfoils (rotor blades), with a chord length of 50 mm and an exit angle of 57.7° relative to the input free-stream flow velocity, attached to rotating synchronized gear belts, thereby generating a travelling periodic wake disturbance in the oncoming air flow. The blade spacing is s = 0.1 m and the distance between the airfoil trailing edge and the leading edge of the flat plate is 40 mm. The flat plate (stator) has a 0.8 mm diameter trip wire located 20 mm from the leading edge. Complete details of the experiments have been reported in [19]. The model consists of two parts: the moving rotor blades and the stator plate. An unstructured meshing technique is adopted, with a sliding mesh configuration, since the analysis is unsteady, as implemented in the CFD code [20].
(Schematic: rotor blades (foils) moving with velocity Ur and spacing s past the stator plate; inlet velocity U0, periodic side boundaries, wall boundaries, sliding interface and pressure outlet.)
Fig. 1. General drawing of the setup and boundary conditions imposed in the numerical modeling
To evaluate the mesh sensitivity, three 2D grids were used. The boundary layer around the rotor blades and the stator plate was modeled using the two-layer model scheme, with y+ = 1 (Table 1).
Table 1. Mesh sensitivity test
mesh    Steady cells    y+    Unsteady cells    y+
1       9E+04           1     2.9E+05           1
2       1E+05           1     3.2E+05           1
3       1.3E+05         1     4.3E+05           1
2.2. Unsteady Calculations Setup
Two-dimensional, unsteady Reynolds-averaged Navier-Stokes equations were solved by means of a commercial CFD code
[20]. To obtain the boundary-layer mean velocity, a constant velocity of U0 = 3 m/s was applied at the inlet, and rotor transverse velocities Ur = 2, 3 and 4 m/s were employed. A no-slip condition was specified for the flow at the wall boundaries of the rotor blade,
the stator plate and the wind-tunnel walls. A periodic condition was applied to the rotor fluid and to the external wind tunnel fluid,
and a static pressure condition was imposed at the outlet of the stator plate.
The ratio of the transverse rotor blade speed to the blade spacing determines the frequency of the wake disturbances passing in front of the stator flat plate, f = Ur/s. For the rotor transverse velocities Ur = 2, 3 and 4 m/s, the corresponding frequencies are f = 20, 30 and 40 Hz. The turbulence was modeled using the SST k-ω model, since it is a good option due to its accurate performance both in boundary layer and in wake flow modeling [21]. The experimental turbulence intensity of 0.07% was applied at the inlet velocity boundary condition. The unsteady formulation used was a second-order implicit velocity formulation, and a pressure-based solver was chosen. The SIMPLE pressure-velocity coupling algorithm was used, and second-order scheme discretization
was selected for the numerical experiments. The interface between the rotor and the stator plate was set to a sliding mesh, in which
the relative position between the rotor and the stator is updated every time step. The time step was set to 1 x 10^-4 s. The maximum number of iterations for each time step was set to 40 in order to reduce all computed residuals below 1 x 10^-5. Due to the unsteady
nature of the flow, it is required that the whole flow domain is affected by the unsteady fluctuations. In order to check the
aforementioned, a flow rate monitor was recorded at the domain outlet. Pseudo-steady flow behavior was reached after 40 and 80
cycles of the rotor blades for f = 20 and 40 Hz respectively, due to the length of the stator plate. Boundary layer velocities vs. time
were recorded for different locations of the stator plate (x = 0.1, 0.3, 0.5 and 0.7 m).
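As a quick cross-check of the quoted passing frequencies and of the temporal resolution, the short sketch below evaluates f = Ur/s for the three rotor velocities and the resulting number of time steps per wake-passing period for the chosen time step; the values are taken directly from the setup described in this section.

```python
s = 0.1      # blade spacing [m]
dt = 1e-4    # time step [s]

for Ur in (2.0, 3.0, 4.0):            # rotor transverse velocities [m/s]
    f = Ur / s                        # wake passing frequency [Hz]
    steps_per_period = 1.0 / (f * dt)
    print(f"Ur = {Ur:.0f} m/s -> f = {f:.0f} Hz, ~{steps_per_period:.0f} time steps per period")
```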
2.3. Validation of the model
A comparison between the numerical results obtained for the boundary layer velocities and the experimental results obtained
by Gete and Evans [19] was established. A steady state analysis without the effect of the moving rotor blade (f = 0 Hz) and an
unsteady state analysis considering the RSI at different rotor frequencies (f = 20, 30 and 40 Hz) was developed. Figures 2 to 5
show the comparison between numerical and experimental results. The validation shows a close agreement between the
implemented numerical model and the experimental results.
(Four panels at x = 0.1, 0.3, 0.5 and 0.7 m: u/Uo vs. y/delta, experimental data compared with meshes 1-3.)
Fig. 2. Steady turbulent velocity in the boundary layer on stator plate (f=0Hz) at x = 0.1, 0.3, 0.5, and 0.7 m.
(Four panels at x = 0.1, 0.3, 0.5 and 0.7 m: u/Uo vs. y/delta, experimental data compared with meshes 1-3.)
Fig. 3. Unsteady turbulent velocity in the boundary layer on stator plate (f=20Hz) at x=0.1, 0.3, 0.5, 0.7 m
(Four panels at x = 0.1, 0.3, 0.5 and 0.7 m: u/Uo vs. y/delta, experimental data compared with meshes 1-3.)
Fig. 4. Unsteady turbulent velocity in the boundary layer on stator plate (f=30Hz) at x=0.1, 0.3, 0.5, 0.7 m
(Four panels at x = 0.1, 0.3, 0.5 and 0.7 m: u/Uo vs. y/delta, experimental data compared with meshes 1-3.)
Fig. 5. Unsteady turbulent velocity in the boundary layer on stator plate (f=40Hz) at x=0.1, 0.3, 0.5, 0.7 m.
3. Results and Discussion
3.1. Velocity profiles
As a first approach, a steady state analysis without the RSI effects generated by the rotor blades (f = 0 Hz) was performed. Figure 2 shows the results for steady turbulent boundary-layer velocity profiles for several meshes and different longitudinal locations on the stator plate. At the first monitor point (x = 0.1 m from the plate's edge) the experimental boundary layer is of transitional flow type (Rex = 2 x 10^4), whereas the SST k-ω turbulence model predicts a fully developed flow. This is because eddy viscosity models (EVM) do not adequately capture transitional flow. Downstream, a fully developed flow is present and the results are in good agreement with experiments.
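A back-of-the-envelope check of the transitional Reynolds number quoted above; the kinematic viscosity of air is an assumed value, since it is not stated in the text.

```python
U0 = 3.0       # free-stream velocity [m/s]
x = 0.1        # distance from the plate leading edge [m]
nu = 1.5e-5    # kinematic viscosity of air at ~20 degC [m^2/s] (assumed)

Re_x = U0 * x / nu
print(f"Re_x = {Re_x:.1e}")   # ~2e4, consistent with transitional flow
```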
For the unsteady RSI cases (f = 20 and 30 Hz), the numerical boundary-layer mean velocity profiles are in good agreement with the experimental data (figs. 3, 4), and it can be observed that the obtained results are mesh-dependent.
For f = 40 Hz, there is a good fit between the experimental and the numerical data in the logarithmic zone of the boundary
layer velocity profiles, while in the transition sub-layer, between the logarithmic and the viscous layers, velocity values are
underestimated. In this case, the obtained results do not show mesh dependency.
3.2. Turbulence intensity
For all the unsteady cases modeled, a numerical study of the flow behavior in the wake generated by the moving rotor blades, and of its corresponding interaction with the stator plate, was carried out by means of modeling the flow turbulence intensity in the wake.
Figure 6 shows that the behavior pattern of the wake is different for each rotor blade frequency analyzed. It can be observed that
the turbulence intensity pattern is related to the relative velocity at the rotor blade outlet, reaching near-design operating conditions
at f = 40 Hz. For f = 20 and 30 Hz, the turbulence intensity patterns clearly show off-design operating conditions. For off-design
operating conditions there is boundary layer flow separation, and the wake presents a vortex shedding with large eddies. For
design conditions the wake is thin and the characteristic length of eddies is smaller than for off-design conditions.
Fig. 6. Computed turbulence intensity contour fields at several frequencies (f = 20, 30 and 40 Hz)
Computed values for turbulence intensity are shown in Table 2. It can be seen that for off-design conditions (f = 20 and 30 Hz)
computed values are higher than those experimentally obtained, and that for design conditions (f = 40 Hz), computed values are
lower than the experimental results. Previous work developed by our research group [21] showed that EVM are not able to capture
coherent fluctuations in the lift coefficient for thin trailing edge foils and for a small angle of attack.
Table 2. Turbulence intensity values at x = 0.1 m
Turbulence intensity I (%) at x = 0.1 m
f (Hz)    Experimental [19]    Numerical
0         0.7                  1
20        4                    10
30        6                    8
40        8                    4
In order to check the influence of the wake over RSI vortex shedding, a simulation set of the moving rotor blade without the
stator plate was performed. Figure 7 shows the computed vortex shedding at off-design operating conditions (f = 20 Hz) with and
without considering the potential flow interaction effects generated by the stator plate. Under the aforementioned operating
conditions, it can be observed for both situations that the characteristic length of the eddies is similar, and that this length is close
to the characteristic length of the wind tunnel (equal to the distance between the stator plate and the tunnel's wall). Under these
operating conditions the flow pattern is influenced by the boundary conditions of the system, and the computed values for the
turbulence intensity are overestimated if compared with the experimental results reported by Gete and Evans [19].
f = 20Hz
Fig. 7. Computed turbulence intensity with RSI effect and without stator plate (f = 20Hz)
For near-design operating conditions (f = 40 Hz), figure 8 compares the computed vortex shedding with and without
considering the potential flow interaction effects generated by the stator plate. For this situation it can be seen that in absence of
the stator plate, the computed wake does not show fluctuations. The potential flow interaction effects are visible when the stator
plate is included in the geometrical model. The movement of the rotor blade in front of the stator plate induces the onset of the
vortex shedding. At this frequency, the EVM used is only able to capture the potential flow interaction effects. The inability of the tested EVM to capture the vortex shedding due to wake interaction effects leads to an underestimation of the turbulence intensity levels when compared to the experimental data. In this case, the inaccurate estimation of the turbulence intensity may
justify the velocity underestimation in the transition sub-layer of the boundary layer shown in figure 5.
f = 40Hz
Fig. 8. Computed turbulence intensity with RSI effect and without stator plate (f=40Hz)
4. Conclusions
CFD has been applied to the study of RSI. A turbomachinery stage was represented with a two-dimensional rotating rig and a flat plate arrangement in a wind tunnel. Boundary layer mean velocities at various distances from the leading edge and the
turbulence intensity over the stator plate were computed. A comparison between the numerical results obtained for the boundary
layer velocities and the experimental results obtained by Gete and Evans [19] was established. A steady state analysis without the
effect of the moving rotor blade (f = 0 Hz) and an unsteady state analysis considering the RSI at different rotor frequencies (f = 20,
30 and 40 Hz) was developed.
For near-design conditions (f = 40 Hz) there is no flow detachment on the blade, and the vortex shedding flow pattern is thin
and with small eddies. Under these operating conditions, computed turbulence intensity is underestimated when compared to the
experimental results.
For off-design conditions (f = 20 and 30 Hz), there is flow detachment in the blade, and the vortex shedding flow pattern is
wide and with large eddies. Under these operating conditions, computed turbulence intensity is overestimated when compared to
the experimental results.
It was possible to corroborate that EVM are not able to model coherent fluctuations in the lift coefficient for thin trailing edge
foils and small attack angles when the potential flow interaction effects are not present in the computed model, as previously
shown in [21]. When the potential flow interaction effect is present in the computed model, EVM are capable of modeling the
vortex shedding.
Results obtained in this work lead to the conclusion that the choice of a suitable EVM for modeling the RSI phenomena is strongly dependent on the operating conditions in the cascade blades (design/off-design) due to the characteristic flow features of each case. The flow structures in each case present challenges of a different nature, and an EVM suitable for modeling an off-design flow condition is not necessarily good enough to model another case. The results obtained can be used to obtain a deeper insight into the RSI
phenomena.
Acknowledgements
Funding from the Spanish Ministry of Science and Innovation (Grant No. DPI 2009 – 12827) is appreciated. A travel &
congress registration grant from EUETIB – UPC for A. Fontanals is also acknowledged.
Nomenclature
C        Chord of foil [m]
dt       Computational time step [s]
f        Frequency [Hz]
I        Turbulence intensity (= u'/Uref)
L        Characteristic length [m]
Re       Reynolds number (= U0L/ν)
s        Blade spacing [m]
u        Velocity [m/s]
Uref, U0 Free-stream flow velocity [m/s]
Ur       Rotor velocity [m/s]
u'       Turbulent velocity fluctuations [m/s]
x, y     Coordinates
y+       Non-dimensional wall distance
δ        Boundary layer thickness [m]
ν        Kinematic viscosity [m2/s]
References
[1] Soranna, F., Chow, Y., Uzol, O. and Katz, J., 2006, “The effect of inlet guide vanes wake impingement on the flow structure and turbulence around a rotor blade”, J. of Turbomachinery, No. 128, pp. 82-95.
[2] Henderson, A., Walker, G. and Hughes, J., 2006, “The influence of turbulence on wake dispersion and blade row interaction in an axial compressor”, J. of Turbomachinery, No. 128, pp. 150-165.
[3] Dring, Joslyn, Hardin and Wagner, H., 1982, “Turbine Rotor-Stator Interaction”, J. Eng. for Power, No. 104, pp. 729-742.
[4] Arndt, Acosta, Brennen and Caughey, 1989, “Rotor-Stator Interaction in a Diffuser Pump”, Journal of Turbomachinery, No. 111(3), pp. 213-221.
[5] Rodriguez, C., Egusquiza, E., and Santos, I., 2007, “Frequencies in the Vibration Induced by the Rotor Stator Interaction in a
Centrifugal Pump Turbine”, Journal of Fluids Engineering, No. 129, pp. 1428-1435.
[6] Coussirat, M. G., 2003, “Theoretical/Numeric Study of flows with strong Streamlines Curvature”, Ph. D. Thesis, Department
of Fluids Mechanics, UPC, Barcelona.
[7] Holland, R. and Evans, R., 1996, “The effect of periodic wake structures on turbulent boundary layers”, Journal of Fluids and
Structures, No. 10, pp. 269-280.
[8] Tsukamoto, H., Uno, M., Hamafuku, N., And Okamura, T., 1995, “Pressure fluctuations downstream of a diffuser pump
impeller”, The 2nd Joint ASME/JSME Fluids Engineering Conference, Forum of unsteady flow, FED Vol. 216, pp. 133-138.
[9] Wang, H. and Tsukamoto, H., 2003, “Experimental and numerical study of unsteady flow in a diffuser pump at off-design
conditions”, J. Fluid Engineering, No. 125, pp. 767-778.
[10] Uzol, O., Chow, Y., Katz, J. and Meneveau, C., 2002, “Experimental investigation of unsteady flow field within a
two-stage axial turbomachine using particle image velocimeter”, J. of Turbomachinery, Vol. 124 pp. 542-552.
[11] Chow, Y., Uzol, O. and Katz, J., 2002, “Flow nonuniformities and turbulent “hot spots” due to wake-blade and wake-wake interaction in a multi-stage turbomachine”, J. of Turbomachinery, No. 124, pp. 553-563.
[12] White, F, 1974, “Viscous fluid flow”, McGraw-Hill, New York.
[13] Cantwell, B., and Coles, D., 1983,” An experimental study on entrainment and transport in the turbulent near wake of a
circular cylinder”, J. Fluid Mechanic, No. 136, pp. 321-374.
[14] Hwang, R., and Yao, C., 1997, “ A numerical study of vortex shedding from a square cylinder with ground effect”, J. of Fluid
Eng., No. 119, pp. 512-518.
[15] Jordan, S., and Ragab, S., 1997, “A large-Eddy simulation of the near wake of a circular cylinder”, J. of Fluid Eng., No. 120,
pp. 243-252.
[16] Nakayama, 1985, “Characteristics of the Flow around Conventional and Supercritical Airfoils”, Journal of Fluid Mechanics, No. 160, pp. 155-179.
[17] Wang, H., 2004, “An experimental study of bubbly hydrofoil wakes”, MSc Thesis, University of Minnesota.
[18] Ausoni, Farhat, and Avellan, 2005, “Fluid-Structure Interaction Induced by Karman Vortices in the Wake of a Truncated 2D
Hydrofoil at Fixed Incidence Angle”, Hydrodyna Project report, delivery 3.2 part 1, LHM, Lausanne.
[19] Gete, Z. and Evans, R., 2003, “An experimental investigation of unsteady turbulent wake boundary layer interaction”, Journal
of Fluids and Structures No. 17, pp. 43-55.
[20] Fluent Inc., Fluent 6.3. User’s guide, 2006.
[21] Coussirat, M., Fontanals, A., Grau, J., Guardo, A. and Egusquiza, E. 2008, “CFD study of the boundary layer influence on the
wake for turbulent unsteady flow in rotor-stator interaction”. IAHR 4th. Symposium on Hydraulic Machinery and Systems. Foz do
Iguassu (Brazil).
SIDO Buck Converter with Independent Outputs
H. Eachempatti (1, 2), S. Ganta (1), J. Silva-Martinez (1) and H. Martínez-García (3)
(1) Analog and Mixed Signal Center, Electrical and Computer Engineering Department (ECE), Texas A&M University, College Station, TX, 77843-3128, USA. [email protected] and [email protected]
(2) Qualcomm Incorporated, 5775 Morehouse Drive, San Diego, CA, 92121, USA. [email protected]
(3) College of Industrial Engineering of Barcelona (EUETIB), Department of Electronics Eng., Technical Univ. of Catalonia (UPC), C/ Comte d’Urgell, 187, 08036 Barcelona, Spain. [email protected]
Abstract— The portable electronics market is rapidly migrating towards more compact devices requiring multiple high-integrity, high-efficiency voltage supplies for powering the systems. This paper demonstrates a buck converter that uses a single inductor to provide two output voltages from a 3 V input battery.
The main target is low cross regulation between the two outputs to
supply independent load current levels while maintaining desired
output voltage values well within a ripple that is set by adaptive
hysteresis levels. A reverse current detector to avoid negative
current flowing through the inductor prevents possible efficiency
degradation.
I. INTRODUCTION
The typical buck converter is the most frequently used
switching converter in portable applications. Since multiple
voltage rails are required on a Power Management IC (PMIC),
several such converters are normally used in a device for
obtaining different voltage levels. If a PMIC supplies N
independent voltage rails, N such converters are required. The
costliest and most area consuming component on the board of a
SMPS design is the inductor. A solution for this issue is to use a single inductor serving multiple outputs [1-8]. A single-inductor dual-output (SIDO) buck converter is shown in Fig. 1.
Two independent outputs V1 and V2 are obtained from a single
inductor L. C1 and C2 are output capacitors that maintain average
load voltages V1 and V2 respectively, and provide output current
when the inductor is serving the other output. The voltages
V1,2upDC, V1,2lowDC, V1,2up(t) and V1,2low(t) are described in the
following subsections. For the SIMO buck, T1, T2, … and TN are
the time windows for which L is connected to the outputs V1, V2,
... VN, respectively. The timing diagram in Fig. 2 shows the
different phases of operation of a SIDO buck converter. The
slopes of inductor charge and discharge depend on the output that
L is connected to for regulation. Since the inductor is shared, the
minimization of cross-regulation is highly desirable to maintain
the regulator’s outputs independent of each other for a wide
range of load values[1]. The time frames T1 and T2 depend on the
load demanded at each output and are adjusted interactively by
the feedback dynamic level comparator. The control of the time windows ensures the minimization of cross-regulation that arises from sharing the inductor between the outputs.
Fig. 1 (a) SIDO Buck Converter SMPS and (b) Timing Diagram
II. CONTROL METHODOLOGY
In order to support low cross-regulation across
independent load levels and achieve high output voltage
accuracy, variable frequency control is chosen, for which
hysteresis comparison levels are used. In this paper, the hysteretic
control is slightly different from the conventional one since the
dynamic hysteresis levels that contain information about the
slopes of the output voltages are used for providing an indication
of the voltages and load currents. The first derivative of the
voltage indicates the amount of load present at the outputs. For
each output, two dynamic levels V1,2up(t) and V1,2low(t) are
properly defined and serve as thresholds against which the SIDO
buck outputs are compared so that the voltage ripples are limited
to within a set percentage of the reference voltages under all
loading conditions. Taking V1,2upDC and V1,2lowDC as the static bounds for the SIDO buck’s outputs V1,2, the dynamic thresholds are defined as follows:
V1,2up(t) = V1,2upDC − Kz · dV1,2(t)/dt      (1)
V1,2low(t) = V1,2lowDC − Kz · dV1,2(t)/dt     (2)
The derivative of the output voltages is a measure of
inductor current; hence the threshold levels are dynamically
adjusted according to IL. The value of the coefficient KZ
determines the sensitivity of the dynamic levels. A very large
value of KZ causes high swing in the upper and lower dynamic
levels and their possible overlap whereas a small value
desensitizes the threshold levels to load current variations
increasing the output voltage ripple.
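A minimal numerical sketch of Eqs. (1) and (2) is given below; the static bounds (a 1.2 V rail with 1.26 V / 1.14 V limits), the ripple waveform and the value of KZ are hypothetical and only illustrate how the thresholds shift with the output-voltage slope.

```python
import numpy as np

def dynamic_levels(v, dt, v_up_dc, v_low_dc, kz):
    """Eq. (1)-(2): thresholds shifted by the output-voltage derivative."""
    dv_dt = np.gradient(v, dt)
    return v_up_dc - kz * dv_dt, v_low_dc - kz * dv_dt

dt = 1e-8
t = np.arange(0.0, 5e-6, dt)
v1 = 1.2 + 0.02 * np.sin(2 * np.pi * 1e6 * t)   # toy ripple waveform (assumed)

v_up, v_low = dynamic_levels(v1, dt, v_up_dc=1.26, v_low_dc=1.14, kz=2e-7)
print(f"upper level swings between {v_up.min():.3f} V and {v_up.max():.3f} V")
print(f"lower level swings between {v_low.min():.3f} V and {v_low.max():.3f} V")
```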
Confinement of the output voltages to well within the
dynamic levels described by (1) and (2) helps achieve the low
cross-regulation and well-defined ripple levels. Thus, the
proposed controller for the SIDO buck converter is able to supply
the output at full load as well as the output at stand-by
simultaneously without the undesirable drop or rise respectively
in voltage levels at either output.
A. Ripple Control and Cross Regulation
The output voltages V1,2 are limited to the hysteresis
bands by comparing them with the dynamic levels to control the
switches SP and SN. As shown in Fig. 2, by monitoring the output
voltage and its first derivative, the transient response is improved
and the ripple is limited around the desired DC value. In Fig. 2,
during T1, when L is connected to one of the outputs, SP is
activated and its voltage variation is positive due to the current
injected by the inductor. Based on the speed of the variation of
the output voltage, the dynamic levels (1) and (2) are adjusted;
large load current leads to large steps in the dynamic levels.
Since IL is positive, the threshold voltage V1,2up decreases thus
preventing significant overshoot at the end of the SP phase even
in the presence of control circuit delays. In the following SN
phase one of the inductor terminals is grounded but continues to
serve the output if L is sized sufficiently to support the total DC
load. During T2, the output voltage discharges at a rate given by I1,2/C1,2. This causes a step increase in the dynamic levels, making them move closer to the output voltage profile. When
switching from one output to the other, switch S1 is closed if the
voltage V1 discharges to below V1low(t); i.e. S1 and S2 are
controlled by load levels in the outputs. This guarantees that V1
and V2 stay well within the static bounds V1,2upDC and V1,2lowDC.
Thus the ripples of the output voltages are stronger functions of
the static levels V1,2upDC and V1,2lowDC than of the external LC tank.
This is an advantage from form factor reduction point of view.
Lower ripple requires closer static bounds, and also increases the
frequency with which S1, S2, Sp, Sn switch.
(Waveform annotations: v1,2(t) around Vref1,2 with dynamic levels v1,2up(t), v1,2low(t) and static bounds V1,2upDC, V1,2lowDC; SP ON and SN ON phases during T1, C1,2 discharging phase while L serves the other output during T2, main decisions taken at the window boundary.)
Fig. 2 SIDO’s steady state output and corresponding dynamic thresholds
The time periods T1 and T2 are adjusted by the controller
such that the average loads at both outputs are delivered over one
time period when the converter is in steady state; i.e.
(1/(T1 + T2)) ∫[0, T1] il(t) dt = i1average ,   (1/(T1 + T2)) ∫[T1, T1+T2] il(t) dt = i2average      (3)
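To make Eq. (3) concrete, the sketch below integrates an assumed, idealized (roughly constant) inductor current over the two time windows and divides by the full period; all numbers are hypothetical and only illustrate how the window lengths set the average current delivered to each output.

```python
import numpy as np

Idc = 0.4                    # assumed near-constant inductor current [A]
T1, T2 = 6e-6, 14e-6         # assumed time windows [s]

n = 20000
t = np.linspace(0.0, T1 + T2, n, endpoint=False)
dtau = (T1 + T2) / n
iL = np.full_like(t, Idc)    # idealized flat inductor current

i1_avg = iL[t < T1].sum() * dtau / (T1 + T2)    # left-hand side of Eq. (3), window T1
i2_avg = iL[t >= T1].sum() * dtau / (T1 + T2)   # left-hand side of Eq. (3), window T2
print(f"i1_average ~ {i1_avg*1e3:.0f} mA, i2_average ~ {i2_avg*1e3:.0f} mA")
```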
The issue of cross-regulation arises from the sharing of
boundary conditions of inductor current between the output
branches. This causes coupling between the sub converters. If the
inductor were to discharge to a state of zero current at the end of
every time window, then independent load supply can be
achieved at each output without undesirable rise or fall in voltage
[2-5]. However, the disadvantage of operating the SIDO buck
converter in this mode of discontinuous conduction for all load
conditions is the rise in peak currents flowing through the
inductor, increasing current stress of the switches and conduction
losses as well as loss in system efficiency due to the full charge
and discharge of additional parasitic capacitors. To decrease the peak inductor currents, the inductor current might be reset to a constant value Idc instead of zero [6-7]. A variation of this technique is to
reset the inductor to different current values that are dependent
on the individual loads. This technique requires an additional
low-resistance switch across the inductor. Further, current
sensing circuits that are sensitive to high frequency noise are
required. Control methodologies like Adaptive Delta Modulation
[8] and Ordered Power Distributive Control [9] use digital
algorithms and analog signal processing circuits to control the
voltages. These solutions have a fixed frequency of operation
which leads to the inability of the SIMO SMPS to regulate with
widely varied load ranges at both outputs.
In order to meet the average load current requirements
of both the outputs as set by (3), the time multiplexing of L is
controlled. In Fig. 2, when S1,2 is ON during T1, C2,1 discharges,
causing V2,1 to droop with a rate that is directly proportional to
its load current. This causes an upward shift in V2,1low(t). At time
T2 when V1 hits V2,1low(t), S2,1 is turned ON. Thus the output with
higher load current takes priority since the transient voltage
profile is monitored. L is connected for a longer time to the
output with the higher load. The absence of any averaging
circuits or any form of linear compensation leads to better
transient performance.
B. Architecture and control flow
Fig. 3 shows the topology and control architecture of the
SIDO buck converter. Eqns. (1) and (2) are implemented using
analog differentiators with DC offsets. V1 and V2 are compared
with the dynamic levels to generate the control signals for
switches Sp and Sn. S1 and S2 are controlled by the comparator
that sets the priority of the outputs based on the voltage error and
load current, accordingly setting the flag ‘M’ to 1 or 0. The
delays due to the control gates, drivers and comparators are
negligible compared to the output time constant, leading to very
small control loop delay. Reverse current is detected by
monitoring the voltages across S1 and S2 in order to avoid
negative current flowing through the inductor. When the reverse
current detector ‘R’ is high the switches Sn and Saux are closed
while all the other switches are immediately turned OFF. This
switching action grounds the inductor terminals avoiding
negative current flow through it and also prevents efficiency
degradation. The flowchart shown in Fig. 4 summarizes the
sequence of operations implemented by the digital controller. The
flag ‘M’ and reverse current flag ‘R’ control the states of S1 and
S2. Once the inductor is connected to a particular output, the
voltage is compared to corresponding dynamic levels for
manipulation of Sp and Sn. Sp is switched ON when the voltage
goes below the lower dynamic threshold and Sn is turned ON
when the voltage overshoots the upper dynamic threshold. Thus
the two loops that control the battery side and load side switches
work independently of each other, except in the event of reverse
current flow. This independent operation guarantees the reliable
operation of the control system.
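The decision sequence just described can be sketched, in a heavily simplified form, as the pseudo-controller below. The priority rule used for flag ‘M’ is one plausible reading of the text rather than the exact comparator implemented by the authors, and the example voltages are hypothetical.

```python
def controller_step(v1, v2, v1_up, v1_low, v2_up, v2_low, reverse_current):
    """One control decision: returns the states of S1, S2, Sp, Sn, Saux."""
    if reverse_current:
        # Flag 'R': ground both inductor terminals to block negative current
        return dict(S1=False, S2=False, Sp=False, Sn=True, Saux=True)

    # Flag 'M': connect L to the output that is closer to (or below) its lower level
    m = (v1 - v1_low) < (v2 - v2_low)
    s1, s2 = m, not m
    v, v_up, v_low = (v1, v1_up, v1_low) if m else (v2, v2_up, v2_low)

    sp = v < v_low   # Sp ON when the voltage falls below the lower dynamic level
    sn = v > v_up    # Sn ON when the voltage overshoots the upper dynamic level
    return dict(S1=s1, S2=s2, Sp=sp, Sn=sn, Saux=False)

# Example call with hypothetical instantaneous values (V1 ~ 1.2 V and V2 ~ 1.5 V rails)
print(controller_step(1.13, 1.46, 1.26, 1.14, 1.575, 1.425, reverse_current=False))
```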
(Flowchart annotations: dynamic levels for V1 of the form 1.26 − KZ·dV1(t)/dt and 1.14 − KZ·dV1(t)/dt, and the comparisons of V1,2 against V1,2low and V1,2up used to drive the switches.)
Fig. 4 Logical flow of operations in the SIDO Buck Converter
III. SCHEMATIC SIMULATION RESULTS
In this work, the value of L is 1 µH and the values of the output capacitors are 4.7 µF each. The controller was designed and simulated at transistor level using a conventional 0.5 µm CMOS technology. When the maximum load, i.e. 300 mA, is present at each output, the transient response of V1(t) and V2(t) with the corresponding dynamic levels is shown in Fig. 5. Overshoots or undershoots about their designated static bounds are due to the response time of the loop control.
V1 t 
V2 t 
Fig. 5 Steady state V1 and V2 when load currents I1 = I2 = 300 mA
V2 t 
In Fig. 6, the load at V1(t) is 10 mA and at V2(t) is 300 mA. Here, S1 stays ON just for as long as the capacitor C1
gets charged to over V1low. Once the light load output V1 is
charged over its acceptable lower limit, the inductor is
immediately connected to the output V2 with the heavier load, and
the switches Sp and Sn continue to get manipulated according to
the corresponding dynamic thresholds. Hence both outputs with
widely varied load values are served by the inductor thus
minimizing the cross regulation.
Fig. 7 shows two families of curves; the dotted traces
correspond to the output voltages with dynamic hysteresis, while
the solid lines indicate the outputs with static hysteresis. When
dynamic levels are used the voltage ripple is better controlled
dV2 t 
dt
V1t 1.2
1.425 KZ
dV2 t 
dt
V2t 1.5
1.575  K Z
V2 t  V1 t 
Fig. 3 SIDO system Overview
721
within the permissible bounds. In the case of static hysteresis, a
large output capacitor would be required to effectively control
this ripple value.
Fig. 6 Steady state V1 and V2 when load currents I1 = 10mA and I2 = 300 mA
Fig. 9 Overall efficiency vs. total load current I1 and I2
Fig. 7 Comparison of static and dynamic hysteresis
Fig. 8 shows the load regulation of V1(t) and cross
regulation of V2(t). There is a step increase in the load current I1
from 10mA to 300mA leading to an increase in the ripple
frequency of V1(t) at the instant of the load step. This increase in
the switching frequency ensures that the sudden load step is
handled by the inductor energy, thus helping the loop recover
without any undesirable dips in the output voltages.
Fig. 8 Time response of V1 and V2 to load step from 10 mA to 300 mA.
Hence, from the simulations it can be concluded that the frequency of operation is “adaptive”, leading to faster switching during high loads and slower switching during low loads, and thus to improved efficiency. Also, the problem of negative inductor current during discontinuous conduction mode is solved, with no ringing transients, due to the presence of the auxiliary switch. The efficiency of the system is almost constant over the individual load ranges. This leads to an almost flat efficiency-load curve, as depicted in Fig. 9, which is desirable in many applications in order to optimize the performance of the power supply over a wide range of loads. Higher values of efficiency are achievable with more sophisticated technology, leading to lower switch on-state resistance and parasitic capacitances.
IV. CONCLUSION
A single-inductor two-output switching regulator with low cross regulation, high accuracy and 2.5% ripple limits has been described. These achievements are a result of the proposed nonlinear hysteresis control applied to the SIDO buck converter, leading to superior transient performance and disturbance rejection. Using the proposed dynamic hysteresis control methodology, constant efficiency values at different load combinations, which is essential for optimum performance, are achieved. The downside of the proposed method is the variation of the operating frequency with load. Nevertheless, the switching frequency can be controlled to stay within a band of acceptable frequencies by tuning the width of the hysteresis loop using an auxiliary feedback loop. The principles presented in this paper can be extended to a multiple-output (n > 2) buck converter.
REFERENCES
[1] Wing-Hung Ki and Dongsheng Ma, “Single-Inductor Multiple-Output
Switching Converters”, in Proc. IEEE PESC, Vol. 1, pp. 226-231, June 2001.
[2] Dongsheng Ma, Wing-Hung Ki, Chi-Ying Tsui, and Philip K.T. Mok, “A
Single Inductor Dual-Output Integrated DC/DC Boost Converter for Variable
Voltage Scheduling”, in Proc. of the 2001 Conference on Asia South Pacific
Design Automation, pp. 19-20, 2001.
[3] Massimiliano Belloni, Edoardo Bonizzoni, and Franco Maloberti, “On the
Design of a Single-Inductor-Multiple-Output DC-DC Buck Converters” in Proc.
Of 2008 International Symposium on Circuit and Systems, Vol. 3, pp. 3049-3052,
May 2008.
[4] Dongwon Kwon, and Gabriel A. Rincón-Mora, “Single-Inductor Multiple-Output Switching DC–DC Converters” in IEEE Transactions on Circuits and
Systems-II: Express Briefs, Vol. 56, Nº. 8, Aug. 2009.
[5] Suet-Chui Koon, Yat-Hei Lam, and Wing-Hung Ki, “Integrated Charge
Control Single-Inductor
Step-Up/Step-Down Converter” in International
Symposium on Circuit and Systems, Vol. 4, pp. 3071-3074, May 2005.
[6] Ming-Hsin Kuang, Ke-Horng Chen, “Single inductor dual-output (SIDO) DC-DC converters for minimized cross regulation and high efficiency in SoC supplying systems”, in IEEE International Midwest Symposium on Circuits & Systems, Vol. 1, pp. 550-553, 2007.
[7] Anmol Sharma, and Shanti Pavan, “A single inductor multiple output
converter with adaptive delta current mode control” in Proc of IEEE International
Symposium on Circuit and Systems, pp. 5643-5646, 2006.
[8] Hanh-Phuc Le, Chang-Seok Chae, Kwang-Chan Lee, Se-Won Wang, Gyu-Ha Cho, and Gyu-Hyeong Cho, “A single-inductor switching DC-DC converter with five outputs and ordered power-distributive control”, in IEEE Journal of Solid-State Circuits, Vol. 42, Issue 12, pp. 2706-2714, December 2007.
Current-Steering Switching Policy for a SIDO
Linear-Assisted Hysteretic DC/DC Converter
Herminio Martínez (1), Jose Silva-Martínez (2), Eduard Alarcón (3) and Alberto Poveda (3)
(1) College of Industrial Engineering of Barcelona (EUETIB), Department of Electronics Eng., Technical Univ. of Catalonia (UPC), C/ Comte d’Urgell, 187, 08036 Barcelona, SPAIN. [email protected]
(2) Analog and Mixed Signal Center, Electrical and Computer Engineering Department (ECE), Texas A&M University, College Station, TX, 77843-3128, USA. [email protected]
(3) School of Telecommunications Engineering of Barcelona (ETSETB), Department of Electronics Eng., Technical Univ. of Catalonia (UPC), C/ Gran Capitán s/n, Ed. C4, 08034 Barcelona, SPAIN. [email protected]
Abstract— This paper proposes the use of linear-assisted
switching power converters in the context of single-inductor dual-output (SIDO) applications. By combining a DC/DC ripple-controlled switching power converter with the respective voltage linear regulators at each output, improved performance in terms of load and line regulation is obtained. To achieve that
aim, a current-steering switching policy is proposed, together
with a resource-aware circuit implementation. The ripple-based
hysteretic control results in variable switching frequency to
guarantee critical conduction mode (boundary of CCM and
DCM).
I. INTRODUCTION
Multiple regulated supply voltages are becoming a need in
many applications that require different supply voltages for
different subsystems. Possible applications include mobile
phones, personal digital assistant (PDAs), microprocessors,
wireless transceivers, etc. [1]. In order to obtain these output
voltages, switching converters and voltage linear regulators
are the main alternatives at the core of power-management
systems. As all designers put effort into size reduction, a
converter with different output voltages cannot stay out of that
trend, forcing designers to find a method to shrink the size in
both on-chip and off-chip implementations [2]. Of all of the
approaches, single-inductor single-input multiple-output (SIMO) converters have come to prevail.
SIMO converters can support more than one output while
requiring only one off-chip inductor, promising many
appealing advantages, in particular the reduction of bulky
power devices, including inductors, capacitors and control ICs
[1], [2]. In this way, the cost of mass production is remarkably
reduced. Therefore, the SIMO topology appears as the most
suitable and cost-effective solution in the future development
of power management systems, attracting many
manufacturing companies with different applications in
portable devices. However, it is still a notable challenge to
find the best topology and control for the implementation of
this type of converter.
In order to obtain multiple outputs, two main alternatives
have historically been used: (1) voltage series linear
regulators, that have been widely used for decades [3]-[6], and
(2) DC/DC switching converters, thanks to which high-
efficiency power supply systems can be obtained [7]-[9].
Linear-assisted DC/DC converters (also known as linear-switching hybrid converters) are circuit topologies of strong
interest when designing power supplies concurrently requiring
as design specifications both: (1) high slew-rate of the output
current and (2) high current consumption by the output load.
This is the case of the systems based on modern
microprocessors and DSPs, where both requirements converge
[10], [11]. This interest is also applicable to wideband
adaptive supply of RF power amplifiers.
Linear-switching hybrid converters are compact circuit
topologies that preserve the well-known advantages of the two
typical alternatives for the implementation of DC/DC voltage
regulators, namely, achieving both moderately high efficiencies (by virtue of the switching regulator) together with fast wideband ripple-free regulation (by virtue of the linear regulator). In this paper, the linear-assisted strategy is
applied to SIMO converters.
II. TOPOLOGY OF A HYSTERETIC LINEAR-ASSISTED DC/DC CONVERTER
The basic schematic of a single-input single-output (SISO)
linear-assisted converter is shown in figure 1.a [12], [13]. This
structure consists, mainly, of a voltage linear regulator in
parallel with a step-down switching DC/DC converter. In this
type of converters, the value of the output voltage,
theoretically constant, is fixed with good precision by the
voltage linear regulator. The current through the linear
regulator is constantly sensed by the current sense element Rm.
Based on this sensed signal, the controller activates the output
of comparator CMP1 which controls the switching element of
the DC/DC converter. Notice that the current flowing through
the linear regulator constitutes a measurement of the error of
the power supply.
The power stage (that is, the switching converter) supplies to the output the current required to force the current flowing through the linear regulator to a minimum value. As a consequence, a power supply circuit is obtained in which the switching frequency is fixed, among other parameters (such as the possible hysteresis of the analog comparator), by the value of the current flowing through the linear regulator. In the linear-assisted converter shown in
figure 1.b, a step-down (buck) switching converter [14], [15]
is used. On the other hand, the linear regulator consists of a
push-pull output stage (transistors Q2a and Q2b). In this
approach, the main objective of the DC/DC switching
converter is to provide most of the load current in steady-state
conditions (to obtain a good efficiency of the whole system).
Thus, in steady state, the linear regulator provides a small part
of the load current, maintaining the output voltage to an
acceptable DC value.
Fig. 1.- (a) Block diagram of the proposed linear-assisted converter. (b) Basic structure of the proposed linear-assisted DC/DC converter.

If the current demanded by the load, Iout, is below a maximum current threshold, denominated the switching threshold current Iγ, the output of comparator CMP1 remains at a low level, turning OFF the DC/DC switching converter. Thus, the current through inductor L1 is zero (figure 2) and the voltage linear regulator supplies the whole output current (Ireg = Iout).

However, when the current demanded by the load is above this current limit Iγ, the output of the comparator automatically toggles to a high level. As a consequence, the current flowing through the inductor L1 grows linearly. Since the output current Iout = Ireg + IL is assumed to be constant (equal to Vout/RL), the linear regulator current Ireg decreases linearly, until the instant at which it becomes slightly smaller than Iγ. At this moment, the comparator changes its output to a low level, turning OFF the switch transistor Q1 and causing the current through the inductor to decrease. When the inductor current decreases to a value for which Ireg > Iγ, the comparator changes its state back to a high level, thereby repeating the complete switching cycle. Without hysteresis in the comparator, the switching instant of the DC/DC converter is controlled by Iγ. This threshold can be adjusted to a given command through the gain of the current sensing element, Rm, and the reference voltage Vref, according to the expression:

Iγ = Vref / Rm      (1)

Fig. 2.- Principle of operation of the proposed linear-assisted DC/DC converter.

It is important to emphasize that reducing the power dissipated in the pass transistor of the linear regulator increases the efficiency of the overall system, even for significant output currents. Therefore, it is important to set the current limit Iγ to an appropriate value between an upper bound that limits the maximum power dissipation and a lower bound that allows the regulator to operate properly without penalizing its good regulation characteristics. Thus, Iγ must be set at a value such that: (a) it does not significantly increase the power dissipation of the pass transistor in the linear regulator and does not excessively diminish the efficiency of the linear-assisted converter, and (b) it does not significantly deteriorate the regulation of the output voltage.

Thus, this type of control can be denominated a control strategy with non-zero average linear regulator current. For load currents below 10 A, circuit-level characterization shows that a suitable value of Iγ fulfilling the two previous conditions lies between 10 mA and 50 mA.
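As an illustration of the hysteretic mechanism described above, the following Python sketch simulates the SISO loop behaviourally; the component values, load, time step and hysteresis width are assumptions chosen only for readability, not values taken from the paper.

# Minimal behavioural sketch (not the authors' implementation) of the hysteretic
# linear-assisted control loop. The linear regulator is assumed ideal: it holds
# Vout and supplies Ireg = Iout - IL. The comparator toggles the buck switch when
# Ireg crosses the threshold Igamma with a small hysteresis.
def simulate_linear_assisted(Vin=9.0, Vout=5.0, L=47e-6, RL=3.0,
                             Igamma=0.05, hyst=0.01, dt=1e-7, t_end=2e-3):
    Iout = Vout / RL          # constant load current assumed
    iL, switch_on = 0.0, False
    trace, t = [], 0.0
    while t < t_end:
        ireg = Iout - iL      # linear regulator supplies whatever the buck does not
        # hysteretic comparator: buck ON when Ireg > Igamma + hyst, OFF below Igamma - hyst
        if ireg > Igamma + hyst:
            switch_on = True
        elif ireg < Igamma - hyst:
            switch_on = False
        dv = (Vin - Vout) if switch_on else -Vout   # voltage across the inductor
        iL = max(0.0, iL + dv / L * dt)             # diode prevents negative iL
        trace.append((t, iL, ireg, switch_on))
        t += dt
    return trace

trace = simulate_linear_assisted()
print("final inductor current: %.3f A" % trace[-1][1])

In this sketch the inductor current settles to a ripple around Iout - Igamma, so the linear regulator is left supplying only the small threshold current in steady state, which is the behaviour the text attributes to the scheme.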
The proposed linear-assisted approach can be applied to other converter structures, in particular to SIMO linear-assisted DC/DC converters. The next sections are devoted to the extension of the single-output linear-assisted converter to obtain a SIDO converter.
III. SIDO LINEAR-ASSISTED DC/DC CONVERTER
Based on figure 1.b, the structure of the SIDO linear-assisted DC/DC converter is obtained as shown in figure 3. In this topology, two voltage linear regulators (A and B), one for each output, are used, and one buck DC/DC switching converter (without the output capacitor) provides part of the output current for the two outputs. In the presented topology the SIDO linear-assisted DC/DC converter operates at the boundary of continuous conduction mode (CCM) and discontinuous conduction mode (DCM) with variable switching frequency, as will be justified in the next section.

In addition, four switches, which determine the operation phases of the DC/DC converter, steer the inductor current of the switching converter to the appropriate output. Note that synchronous rectification is considered unavoidable in a low-voltage chip-compatible scenario.

Using the current sensing circuitry, the controller generates the control signals for the four switches of the SIDO linear-assisted DC/DC converter according to a current-steering switching policy. In this particular application, it is necessary to sense the two output currents (sensing signals VSO1 and VSO2). In addition, the current flowing through the inductor of the switching converter (sensing signal VSL1) has to be sensed as well.

For a multiple-output converter with stable outputs, each output should be independently regulated. If the output voltage of one subconverter is affected by a change of load in another subconverter, cross regulation occurs. This is an undesired effect that, in the worst case, could make the system unstable [1], [16].
IV. SWITCHING POLICY FOR THE SIDO LINEAR-ASSISTED DC/DC CONVERTER
Control algorithms for SIMO converters have been presented in different papers [16]. In classical approaches, the control and timing scheme is a form of time-division multiplexing. This time multiplexing can be extended from two outputs (SIDO converter) to N outputs, where each output occupies a time slot for charging and discharging the inductor. In all cases, the structure can work with constant or variable switching frequency.

An important component of the proposed SIDO structure shown in figure 3 is the switching control of the four switches that determine the operation phases of the DC/DC converter. In this topology the SIDO linear-assisted DC/DC converter operates at the boundary of CCM and DCM with variable switching frequency. In the control algorithm considered in this work (figure 4), each period is divided into three phases, not necessarily of equal duration. In phase 1, the inductor is charged from 0 A to the larger of the two output currents (Iout1 in the case under discussion). In phase 2, the inductor discharges into the first converter until IL becomes smaller than the lower output current (Iout2 in our case). Finally, in phase 3, the inductor drains IL into the second converter until IL = 0. It should be evident that information from both subconverters is needed to determine which of the two output currents is the largest, and that any change in one phase necessarily affects the other two phases, rendering the control of the two outputs interdependent.

Notice that the aforementioned topology can easily be extended to implement different algorithms and generate multiple output voltages.
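The following Python sketch (an illustrative assumption, not the authors' controller code) walks the inductor current through the three phases just described for one switching period; the component values and output set-points are taken from the characterization figures only as examples.

# Hedged sketch of the three-phase current-steering policy: phase 1 charges the
# inductor up to the larger output current, phase 2 discharges it into that output
# until iL drops below the smaller output current, and phase 3 steers the remaining
# inductor current into the other output until iL reaches zero.
def sido_period(iout1, iout2, vin, vout1, vout2, L, dt=1e-8):
    """Return the sequence of (time, iL, phase) points over one period."""
    assert vin > max(vout1, vout2), "buck operation assumed"
    hi_is_1 = iout1 >= iout2
    hi, lo = (iout1, iout2) if hi_is_1 else (iout2, iout1)
    v_hi, v_lo = (vout1, vout2) if hi_is_1 else (vout2, vout1)
    iL, t, points = 0.0, 0.0, []
    while iL < hi:                       # phase 1: charge towards the larger current
        iL += (vin - v_hi) / L * dt
        t += dt
        points.append((t, iL, 1))
    while iL > lo:                       # phase 2: discharge into the larger output
        iL -= v_hi / L * dt
        t += dt
        points.append((t, iL, 2))
    while iL > 0.0:                      # phase 3: steer to the other output until iL = 0
        iL -= v_lo / L * dt
        t += dt
        points.append((t, iL, 3))
    return points

period = sido_period(iout1=1.67, iout2=0.67, vin=9.0, vout1=5.0, vout2=2.0, L=10e-6)
print("period length: %.1f us, samples: %d" % (period[-1][0] * 1e6, len(period)))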
Fig. 3.- Basic structure of a SIDO linear-assisted DC/DC converter.
V. CONTROLLER IMPLEMENTATION FOR THE SIDO LINEAR-ASSISTED DC/DC CONVERTER
The control algorithm considered in this paper is shown in
figure 4. It is necessary to sense the two output currents (VSO1
and VSO2 in figure 3) and the current flowing through the
inductor of the switching converter (sensing signal VSL1). As a
consequence, four control signals are obtained in order to
control the four switches of the SIDO linear-assisted DC/DC
converter, namely: control signal VC1 for the switch SW1, VC5
to control SW2, VCout1 for the switch SW3 and VCout2 for the
switch SW4.
In order to implement the control algorithm presented in figure 4 it is necessary to determine which of the two output currents is the largest. In addition, it is necessary to compare the inductor current with these two output currents, generating internal control signals. The scheme presented in figure 5.a shows the circuit that implements this part, obtaining three
internal threshold levels: VT1, VT2 and VT3. Notice that the
output of the comparator CMP1 provides the intermediate
control signal VS that indicates which output current (Iout1 or
Iout2) is the largest one.
These three levels (VT1, VT2 and VT3) are the intermediate
or internal signals that control a state machine, consisting of
three R-S latches (figure 5.b). The state machine generates the
control signals VC1, VC5, VCout1 and VCout2 for the switches SW1,
SW2, SW3 and SW4, respectively. Finally, figure 5.c shows the block that, during the inductor discharge interval, decides which output (switch SW3 or SW4) is selected first. Note that this decision depends upon the signal VS provided by comparator CMP1 (figure 5.a). Thus, the output with the larger current is selected during the subinterval TOFF1 and the one with the smaller current during TOFF2.
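As an illustration of this decision logic, the Python sketch below reproduces a plausible mapping from the sensed currents to the four switch enables; the exact comparator-to-latch wiring of figure 5 is an assumption made only for illustration, not a transcription of the paper's circuit.

# Hedged sketch of the controller decision logic: VS (output of CMP1) selects the
# output with the larger current; the inductor is charged while iL is still below
# that current, then discharged first into the larger output (TOFF1) and finally
# into the smaller one (TOFF2). The SW1/SW2 roles are assumed input-side switches.
def controller_step(iL, iout1, iout2):
    """Return the switch enables (SW1, SW2, SW3, SW4) for the present currents."""
    vs = iout1 >= iout2                     # CMP1: True -> output 1 carries the larger current
    i_hi, i_lo = (iout1, iout2) if vs else (iout2, iout1)
    charging = iL < i_hi                    # phase 1: keep charging the inductor
    toff1 = (not charging) and iL >= i_lo   # phase 2: discharge into the larger output
    toff2 = (not charging) and iL < i_lo    # phase 3: discharge into the smaller output
    sw1 = charging                          # input-side switch
    sw2 = not charging                      # freewheeling / synchronous switch
    sw3 = (charging or toff1) if vs else toff2      # steer the inductor to output 1
    sw4 = (charging or toff1) if not vs else toff2  # steer the inductor to output 2
    return sw1, sw2, sw3, sw4

print(controller_step(iL=1.2, iout1=1.67, iout2=0.67))  # still in the charging phase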
Fig. 4.- Current waveforms of the SIDO linear-assisted DC/DC converter with control strategy A: currents through load 1 and load 2 (red traces), through inductor L1 (blue trace), through linear regulator 1 (dashed green trace) and through linear regulator 2 (dashed violet trace).

VI. BEHAVIOUR CHARACTERIZATION OF THE SIDO LINEAR-ASSISTED DC/DC CONVERTER

In order to validate the presented structure of the SIDO linear-assisted DC/DC converter depicted in figure 3, its controller shown in figure 5 and the control algorithm presented in figure 4, a circuit-level characterization has been carried out for system specifications requiring 5.0 V at Vout1 and 2.0 V at Vout2, with Vin = 9 V. Figure 6 shows the most representative waveforms when the SIDO linear-assisted converter provides 1.67 A at output 1 and 0.67 A at output 2.

In order to validate the controller operation under variations of the larger of the two output currents, figure 7 shows the current waveforms of the SIDO linear-assisted DC/DC converter when the output current Ireg1 changes from the largest value to a value lower than Ireg2: from 1.67 A to 0.83 A at t = 250 µs and vice versa at t = 500 µs, with Ireg2 = 1.33 A.

VII. CONCLUSIONS

In this paper, the design and performance characterization of a SIDO linear-assisted DC/DC converter has been described. A current-steering switching policy, in combination with a linear-assisted hysteretic DC/DC regulator in the context of single-inductor dual-output (SIDO) converters, makes it possible to provide two independent outputs with suitable load and line regulation.

In the proposed topology the SIDO linear-assisted DC/DC converter operates at the critical conduction mode with variable switching frequency by means of a hysteretic control, thereby restricting the inductor current ripple.

Finally, note that different control algorithms can be implemented in the proposed SIDO structure in order to obtain appropriate and accurate load and line regulation.

Final experimental results corroborating the previous simulation results will be included in the definitive version of the article.

ACKNOWLEDGMENT

This work has been partially funded by project TEC2007–67988–C02–01/MIC from the Spanish MCYT and EU FEDER funds.

REFERENCES
[1] D. Ma, W.-H. Ki, C.-Y. Tsui. "A Pseudo-CCM/DCM SIMO Switching Converter with Freewheel Switching". IEEE Journal of Solid-State Circuits, vol. 38, no. 6, pp. 1007-1014, June 2003.
[2] H.-P. Le, C.-S. Chae, K.-C. Lee, S.-W. Wang, G.-H. Cho, G.-H. Cho. "A Single-Inductor Switching DC-DC Converter with Five Outputs and Ordered Power-Distributive Control". IEEE Journal of Solid-State Circuits, vol. 42, no. 12, pp. 2706-2714, December 2007.
[3] C. K. Chava, J. Silva-Martínez. "A Frequency Compensation Scheme for LDO Voltage Regulators". IEEE Transactions on Circuits and Systems–I: Regular Papers, vol. 51, no. 6, pp. 1041-1050, June 2004.
[4] R. J. Milliken, J. Silva-Martínez, E. Sánchez-Sinencio. "Full On-Chip CMOS Low-Dropout Voltage Regulator". IEEE Transactions on Circuits and Systems–I: Regular Papers, vol. 54, no. 9, pp. 1879-1890, September 2007.
[5] R. K. Dokania, G. A. Rincón-Mora. "Cancellation of Load Regulation in Low Drop-Out Regulators". Electronics Letters, vol. 38, no. 22, pp. 1300-1302, 24th October 2002.
[6] V. Gupta, G. A. Rincón-Mora, P. Raha. "Analysis and Design of Monolithic, High PSR, Linear Regulator for SoC Applications". Proceedings of the IEEE International SoC Conference, pp. 311-315, 2004.
[7] R. W. Erickson, D. Maksimovic. "Fundamentals of Power Electronics". 2nd edition, Kluwer Academic Publishers, 2001.
[8] J. G. Kassakian, M. F. Schlecht, G. C. Verghese. "Principles of Power Electronics". Addison-Wesley, 1991.
[9] N. Mohan, T. M. Undeland, W. P. Robbins. "Power Electronics: Converters, Applications and Design". John Wiley & Sons, 1989.
[10] V. Yousefzadeh, E. Alarcon, D. Maksimovic. "Band Separation and Efficiency Optimization in Linear-Assisted Switching Power Amplifiers". 37th IEEE Power Electronics Specialists Conference (PESC'06), pp. 1-7, 18-22 June 2006.
[11] B. Arbetter, D. Maksimovic. "DC-DC Converter with Fast Transient Response and High Efficiency for Low-Voltage Microprocessor Loads". IEEE Applied Power Electronics Conference, pp. 156-162, 1998.
[12] P. Midya, F. H. Schlereth. "Dual Switched Mode Power Converter". IECON'89, Industrial Electronics Society, pp. 155-158, 1989.
[13] F. H. Schlereth, P. Midya. "Modified Switched Power Convertor with Zero Ripple". Proceedings of the 32nd IEEE Midwest Symposium on Circuits and Systems (MWSCAS'90), pp. 517-520, 1990.
[14] H. Martínez, A. Conesa. "Modeling of Linear-Assisted DC-DC Converters". European Conference on Circuit Theory and Design 2007 (ECCTD 2007), 26th-30th August 2007.
[15] A. Conesa, H. Martínez, J. M. Huerta. "Modeling of Linear & Switching Hybrid DC-DC Converters". 12th European Conference on Power Electronics and Applications (EPE 2007), September 2007.
[16] D. Ma, W.-H. Ki, C.-Y. Tsui, P. K. T. Mok. "Single-Inductor Multiple-Output Switching Converters with Time-Multiplexing Control in Discontinuous Conduction Mode". IEEE Journal of Solid-State Circuits, vol. 38, no. 1, pp. 89-100, January 2003.
Fig. 5.- Structure of the controller block for the SIDO linear-assisted DC/DC converter: (a) Generator of the internal threshold levels for the state machine.
(b) State machine that generates the control signals VC1, VC5, VCout1 and VCout2 for the switches SW1, SW2, SW3 and SW4, respectively. (c) Block to decide which
of the two outputs (switch SW3 or SW4) is selected first within the inductor discharge interval.
Fig. 6.- Current waveforms of the structure of the SIDO linear-assisted DC/DC converter: (a) Output voltages Vout1 and Vout2. (b) Currents of interest in the
circuit: IL, Ireg1, Ireg2, Iout1 and Iout2.
Fig. 7.- Current waveforms of the SIDO linear-assisted DC/DC converter when the output current Ireg1 changes from 1.67 A to 0.83 A at t = 250 µs and vice versa at t = 500 µs: currents of interest in the circuit (IL, Ireg1, Ireg2, Iout1 and Iout2).
III Congreso Nacional de Pulvimetalurgia
Valencia, 13 y 14 de junio de 2010
STUDY OF THE INFLUENCE OF MILLING VARIABLES ON THE PROPERTIES OF NANOCRYSTALLINE ALUMINIUM POWDER
J. Solà, J. Llumà, J. Jorba
Dep. Ciència de Materials i Enginyeria Metal·lúrgica, EUETIB,
Universitat Politècnica de Catalunya, Comte d’Urgell 187, 08036 Barcelona.
[email protected], [email protected], [email protected]
RESUMEN
En la última década ha habido un creciente interés y esfuerzo de investigación centrado en
el desarrollo de materiales nanocristalinos debido a la notable mejora de sus propiedades
químicas, eléctricas, magnéticas, ópticas y mecánicas. Estos materiales pueden ser
producidos mediante varios procedimientos. Aunque algunas de estas técnicas, como la
molienda mecánica (BM), producen materiales en polvo que requieren una consolidación
posterior para tener una aplicación estructural, permiten obtener los menores tamaños de
grano. Por esta razón, se han realizado amplios estudios sobre la dinámica de los procesos
de molienda y de su influencia en los cambios microestructurales producidos en esos
materiales. No obstante, el principal esfuerzo se han centrado en materiales férricos, y son
pocos los trabajos en aluminio puro.
En la presente comunicación se presenta la evolución de la dureza y tamaño de partícula de
aluminio nanocristalino obtenido por molienda mecánica y el rendimiento del proceso de
producción en función de los parámetros de molienda.
ABSTRACT
In the last decade or so there has been an increasing interest and research effort focused on
nanocrystalline materials due to the remarkable improvement of their chemical, electrical,
magnetic, optic and mechanical properties. These materials can be produced by several
processes. Although some of those techniques as ball milling (BM) produce powdered
materials that need to be consolidated in a further process to be useful in structural
applications, they are able to obtain the smaller grain size. For this reason, the dynamics of
the milling process and its influence on the microstructural changes produced in these
materials has been extensive studied. However, the main efforts have focused on ferrous
materials, and there are few studies of pure aluminium. This paper presents the evolution of
hardness and particle size of nanocrystalline aluminium powder obtained by mechanical
milling and the efficiency of production processes versus milling process parameters.
Keywords: aluminium, nanocrystalline materials, mechanical properties, powder production techniques.
1. INTRODUCTION

The remarkable improvement in the chemical, electrical, magnetic, optical and mechanical properties obtained in nanocrystalline materials has led to a surge in their study over the last decade [1,2]. Many different processes can be used to obtain these materials, but those that yield the smallest crystalline grain sizes usually produce the material in powder form, which requires subsequent sintering to produce components suitable for structural applications.

Among these techniques is mechanical milling, which has been the subject of numerous methodological studies on the influence of the milling parameters on the evolution of the milled material [3,4,5]. This has made it possible to obtain nanocrystalline grain sizes in several metals [6] and alloys [7].

However, the main effort has focused on ferrous materials, and there are comparatively few mechanical milling studies on pure aluminium [8], although not on its alloys [9]. This has not prevented detailed studies of the evolution of the microstructure of aluminium as cold work is accumulated by other deformation techniques [10], which are broadly extrapolable to the case of mechanical milling. Perhaps for this reason, most studies focus on the influence of the milling time on the mechanical properties of the powder obtained and leave aside the influence of other variables such as the impact energy (a consequence of the rotation speed) or the number of impacts per unit time (a consequence of the number of balls). The present work is aimed at describing the influence of these variables, not only on the mechanical properties of the milled material, but also on those variables that will influence the subsequent industrial sintering step, namely the particle size of the powder and the yield of the milling process.
2. MATERIALS AND EXPERIMENTAL PROCEDURE

The material used in this study is pure aluminium powder supplied by ECKA Granules® under the specification ECKA Aluminium AS 51, which was sieved in the laboratory. The fraction used corresponds to the 72-100 µm interval. Figure 1 shows the particle size distribution of the sieved fraction, determined by image analysis of the projected surface of the particles [11]. Figures 2a and 2b show images of the particle morphology and of the cross-section of one of the particles, obtained by SEM and optical microscopy, respectively. Table 1 shows the chemical composition of the original material and of the powder milled under the most severe conditions tested, determined by inductively coupled plasma mass spectrometry (ICP-MS). The hardness of the original powder is 38.9 ± 2.7 HV 0.025, determined following the methodology described below.
Figure 1. Particle size distribution of the sieved fraction, expressed as the projected surface area.
Figure 2. Morphology of the powder particles used and of the interior of a particle.
The aluminium powder was milled in a Fritsch® Pulverisette 5 planetary ball mill using 250 ml cylindrical containers made of X 5 Cr Ni 18 10 steel and 10 mm diameter balls made of 100 Cr 6 steel. Different conditions of milling time, ball-to-powder mass ratio (BPR) and mill rotation speed (rpm) were tested without a process control agent. The influence of Clariant® Licowax C, an amide-type wax (EBS), as a process control agent was also determined.
The yield of each milling condition was determined as the difference between the initial powder mass and the powder mass obtained after milling, and is expressed as a percentage of variation with respect to the initial mass.

The mean size and the size distribution of the powder particles obtained in each milling run were determined by image analysis of the projected surface of each particle, performed on samples of no fewer than 400 particles. In some cases, particles that, because of their flattened shape and size, could bias the measurement were discarded. In all cases a log-normal distribution was fitted and its mode was calculated. The error bars shown in the corresponding graphs delimit the size intervals that include 90% of the particle population of the sample.
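As an illustration of this size-distribution analysis, the following Python sketch fits a log-normal distribution to a set of projected particle areas and reports its mode together with the interval containing 90% of the population; the synthetic sample and the use of scipy.stats are assumptions made only for illustration.

# Hedged sketch of the particle-size statistics described above: fit a log-normal
# distribution to projected particle areas (mm^2), report its mode and the central
# interval containing 90% of the population. Synthetic data are used here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
areas = rng.lognormal(mean=np.log(5e-3), sigma=0.6, size=400)  # synthetic sample, mm^2

shape, loc, scale = stats.lognorm.fit(areas, floc=0)   # fit with zero location
mu, sigma = np.log(scale), shape                       # parameters of ln(area)

mode = np.exp(mu - sigma**2)                           # mode of a log-normal distribution
low, high = stats.lognorm.ppf([0.05, 0.95], shape, loc=0, scale=scale)

print("mode: %.2e mm^2, 90%% interval: [%.2e, %.2e] mm^2" % (mode, low, high))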
The hardness of the original powder and of the material obtained in the different milling runs is expressed on the Vickers scale (HV 0.025) and was determined with a Buehler 5114 microhardness tester. All samples were hot-mounted (150 ºC) in high-strength epoxy resin (Buehler® Epomet) and polished following the usual metallographic procedure down to approximately half of their diameter. The Cauchy criterion was used to accept or discard the experimental hardness values in the calculation. The mean hardness value corresponds to the average of 15 valid hardness measurements, and the error bar represents the dispersion among the valid hardness values assuming a 95% confidence level.
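A minimal sketch of this hardness averaging follows, assuming a Student-t based 95% interval over the accepted indentations; the Chauvenet-style rejection rule shown stands in for the criterion named in the text, and the hardness values are invented examples.

# Hedged sketch: reject outliers with a simple Chauvenet-style rule (an assumption
# standing in for the criterion named in the text) and report the mean of the valid
# measurements with a 95% confidence interval based on the Student t distribution.
import numpy as np
from scipy import stats

hv = np.array([92, 95, 90, 88, 97, 93, 91, 120, 94, 89, 96, 92, 90, 95, 93, 91], float)

# Chauvenet-style rejection: discard points whose expected count beyond their
# deviation, given the sample size, is below 0.5
z = np.abs(hv - hv.mean()) / hv.std(ddof=1)
keep = hv[2 * len(hv) * stats.norm.sf(z) >= 0.5]

mean = keep.mean()
half_width = stats.t.ppf(0.975, len(keep) - 1) * keep.std(ddof=1) / np.sqrt(len(keep))
print("hardness: %.1f +/- %.1f HV 0.025 (n = %d valid)" % (mean, half_width, len(keep)))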
Table 1. Chemical composition (ppm) of the initial material powder and of the powder milled under the most severe conditions tested.

Milling conditions        | Cd  | Cu  | Mn | Zn  | Fe   | Mo   | Ni   | Cr  | Al
Original material         | <20 | <20 | 68 | 118 | 1600 | <300 | <200 | <20 | balance
50 h / 20 BPR / 160 rpm   | <20 | 75  | 50 | 135 | 1900 | <200 | <200 | <45 | balance
20 h / 30 BPR / 160 rpm   | <20 | 60  | 60 | 130 | 1950 | <200 | <200 | <45 | balance
20 h / 20 BPR / 220 rpm   | <20 | 85  | –  | 150 | 2200 | <200 | <200 | <45 | balance
3. RESULTS AND DISCUSSION

The chemical analysis of the samples subjected to the most severe milling conditions tested (Table 1) shows increases in the Fe content, which rises from 1600 ppm in the initial material to 1900 ppm after 50 hours of milling (15.8% increase), to 1950 ppm after milling with 30 BPR (21.9% increase) and to 2200 ppm at 220 rpm (37.5% increase). The concentrations of Cr, Ni and the other minority elements are below the analysis limit. This increase in the Fe level most probably comes from wear of the container and/or the balls. Although the Fe increase expressed as a percentage is significant, the maximum absolute value obtained is 0.22% and it was not detected by EDAX during the SEM observation of samples of these particles, which indicates that it is dispersed. Nevertheless, this percentage is above the maximum admissible under equilibrium conditions and may form Al-Fe compounds during the consolidation process [8].
3.1 Effect of the process variables on hardness.

Figure 3 groups the plots of the evolution of the hardness as a function of the percentage of process control agent (EBS), the milling time, the ball-to-powder mass ratio (BPR) and the mill rotation speed (rpm). In all cases the hardness increases as the variable under study increases. Figure 3a shows the evolution of the hardness of the material milled for 20 h with a 20:1 BPR at 160 rpm as a function of the EBS wax percentage. A large increase of the hardness with respect to the original, unmilled powder is observed for EBS percentages up to 0.8, followed by an abrupt decrease at 1% EBS. On the other hand, it is worth noting the steady increase of the hardness with milling time up to a local maximum at around 20 h of milling (Figure 3b), followed by a zone of successive softening (30 and 40 h) and hardening (35 and 50 h).
Figure 3. Variation of the hardness as a function of the EBS percentage (a), milling time (b), BPR (c) and rpm (d).
The same behaviour is observed in the evolution of the hardness with BPR (Figure 3c), where the hardness increases with the BPR ratio up to a local maximum at 25 BPR, followed by a minimum at 30 BPR and a subsequent hardness increase at 35 BPR. The same sequence, although smoother, occurs in the variation of the hardness with the mill speed (Figure 3d), but in this case the error bars of the minimum value overlap with one of the adjacent values. In this last case, the material does not harden until a minimum rotation speed is exceeded but, once this value is reached, the hardness increases abruptly up to a saturation zone. Comparing these behaviours suggests that, within the ranges tested, the effect of the milling time is larger than that of the BPR ratio, which in turn is larger than that of EBS. In all cases an activation threshold is observed before hardening starts. On the other hand, the local hardness maxima are reached for 0.0 and 0.4% EBS, 20 and 35 hours of milling, 25 and 35 BPR, and 160 and 250 rpm, keeping the other process variables constant in each case.
3.2 Effect of the process variables on particle size.

Figure 4 shows the variation of the particle size as a function of the process variables studied. In all cases the particle size increases as the variable under study increases. Regarding the effect of the wax percentage, a strong increase of the particle size during milling is observed for EBS percentages up to 0.4%, followed by a decrease to particle sizes similar to the original ones for 0.8 and 1.0% EBS, with a large size dispersion at the last percentage. Neither the milling time nor the BPR ratio produces substantial variations in the particle size, which remains large and nearly constant under the conditions studied. The mill rotation speed increases the size dispersion at low speeds (up to 80 rpm) and produces an abrupt size increase, with a reduction of the dispersion, from 100 rpm onwards. SEM observation of the particles formed indicates that they formed by agglomeration of smaller, generally flattened particles that folded onto themselves forming spheres, in many cases hollow, which later grew by agglomeration and/or mechanical welding of other particles onto their outer surface [11]. For this reason, the particle size grows abruptly once the necessary process conditions are reached, and a minimum is required for the agglomeration to start; however, once it has started, the milling conditions do not substantially affect the final size.
Figure 4. Variation of the particle size as a function of the EBS percentage (a), milling time (b), BPR (c) and rpm (d).
3.3 Effect of the process variables on the yield of the milling process.

Figure 5 groups the plots showing the effect of the different process variables studied on the yield of the milling process. Regarding the effect of the wax percentage, an erratic behaviour with strong variations of the yield is observed: the yield is very high for 0.0%, 0.3%, 0.8% and 1.0% EBS, and of the order of 55% for 0.1% and 0.4%. In these latter cases a strong adherence to the container walls occurred. It is worth noting the low yield detected at short milling times, which increases rapidly as the process time increases, reaching very high yields when the process time is 20 hours or longer and decreasing slightly at very long times (100 h). The BPR ratio affects the yield only slightly when it is kept between 5:1 and 20:1, where the maximum is reached, beyond which it decreases down to yields of 60% for 35:1 ratios as a consequence of a marked adherence of the material to the balls. Regarding the influence of the rotation speed, the yield is generally high and decreases for speeds around 120 rpm, but a comparatively low yield (20%) was detected at 220 rpm, with massive adherence of the material to the container walls.
Figure 5. Variation of the yield of the milling process as a function of the EBS percentage (a), milling time (b), BPR (c) and rpm (d).
4. CONCLUSIONS

The effect of the process variables (time, BPR and rpm) on the hardness and particle size follows a similar evolution, although it is more pronounced for the milling time and less so for the mill rotation speed (rpm), within the ranges studied. The effect of the control agent percentage (EBS) on these properties tends to be opposite to that of the other variables.

The effect of the process parameters on the yield appears to be completely different, which suggests the possibility of optimizing the process yield while keeping the characteristics of the obtained powder constant.
5. ACKNOWLEDGEMENTS

The authors wish to acknowledge the financial support received from CICyT through project MAT 2008-06793-C02-01.
6. REFERENCES
[1] Koch, Carl C. (2002). Nanostructured Materials - Processing, Properties and Potential Applications. William Andrew Publishing/Noyes.
[2] C. Suryanarayana. Prog. Mater. Sci. 46 (2001) 1-184.
[3] H. Huang et al. Mat. Sci. Eng. A-Struct. 241 (1997) 38-47.
[4] Y. S. Kwon, K. B. Gerasimov, S. K. Yoon. J. Alloy Compd. 346 (2002) 276-281.
[5] J. Alkebro et al. J. Solid State Chem. 164 (2002) 88-97.
[6] F. Delogu, G. Cocco. Mat. Sci. Eng. A-Struct. 422 (2006) 198-204.
[7] R. Rodriguez-Baracaldo et al. Mat. Sci. Eng. A-Struct. 493 (2008) 215-220.
[8] J. Cintas et al. Rev. Metal. Madrid 43 (2007) 196-208.
[9] F. G. Cuevas et al. Rev. Metal. Madrid Sp. Iss. SI (2005) 83-88.
[10] B. Bay et al. Acta Metall. Mater. 40 (1992) 205-219.
[11] J. Sola-Saracibar. "Caracterització i consolidació de compactes d'alumini comercialment pur processat per deformació severa". Master's Thesis, Dep. Ciència de Materials i Enginyeria Metal·lúrgica, 2008. [http://hdl.handle.net/2099.1/5028]
TECHNICAL COMMUNICATION

The environmental economic sector in the municipality of Terrassa, 2008 (Barcelona)

Author: Bárbara Sureda Carbonell
Institution: Universidad Politécnica de Cataluña
e-mail: [email protected]
http://www.conama10.es/web/generico.php?idpaginas=&lang=es&menu=90&id=61&op=view
Other authors: J. J. de Felipe Blanch (Universitat Politècnica de Catalunya-UPC (EPSEM))
ABSTRACT

The verification of the environmental impacts caused by the current development model has made it possible to identify new needs of society, contributing to the development of an economic sector that is gaining importance in local, regional and national economies. This booming sector is the so-called environmental economic sector. Its activities are related to the prevention, mitigation and correction of environmental impacts.

The environmental economic sector grows year after year in terms of the number of companies and their turnover. In 1999, the Fundación Fórum Ambiental identified 820 companies in the environmental sector of Catalonia, directly employing 40,345 people and with a turnover of 2,221 million euros. Its 2008 report identifies 1,313 companies, employing 42,490 workers and with a turnover of 7,482 million euros (Fundación Fórum Ambiental, 2008).

This technical communication presents the analysis of this economic sector in Terrassa (Barcelona) for the year 2008. The study is the result of the agreement signed in 2009 between the Terrassa City Council and Fomento de Terrassa SA and the Universidad Politécnica de Cataluña (UNESCO Chair on Sustainability - Sustainability Measurement and Modelling research group).

The most important conclusions of the study can be summarized as follows:
• The creation of new companies in the environmental sector grew steadily during the study period 1998-2008 and shows greater dynamism than the creation of new companies in the city as a whole.
• In terms of employment, the business activities of the environmental economy in Terrassa are concentrated in four areas: energy, industrial waste, municipal waste and water. Employment creation is very stable.
• The economy of the environmental sector in the municipality of Terrassa accounts for approximately 5.6% of the total GDP of the municipality.

Keywords: Green economy, Environmental economics
Dra. Bàrbara Sureda Carbonell*, Dr. José Juan de Felipe Blanch**
*Consorci Escola Industrial Barcelona (CEIB) - Universitat Politècnica de Catalunya
(UPC), EUETIB (Escuela Universitaria Ingeniería Técnica Industrial de Barcelona)
[email protected]
**Universitat Politècnica de Catalunya (UPC), EPSEM (Escuela Politécnica Superior de
Ingeniería de Manresa)
[email protected]
1 Introduction

The current financial crisis has led to a worldwide economic recession, prompting both the European Union and some of its member states to carry out economic stimulus actions to revitalize their economies. With the recognition of the existence of environmental impacts, and with the opportunity to use these economic stimuli to transform the economy into a green economy, an economic sector related to the prevention, mitigation and correction of these environmental impacts has been developing, creating several fields of activity related to the environment and identifying new needs of society. This rapidly expanding economic sector is the so-called green economy or environmental economy. According to the OECD, the situation of the environmental sector is optimal (Departamento de Investigación y Estrategias de Mercado, 2007). The EU study "Advanced Renewable Strategy" on the potential of the renewable energy sector notes that this sector could create 2.5 million net jobs across the EU by 2020 (Empleo verde en Europa. Oportunidades y perspectivas futuras, 2009).

The environmental economic sector can be defined as the set of companies and economic activities devoted to the prevention (before), mitigation (during) and/or correction (after) of the problems caused to natural systems by human activities (Fundación Fórum Ambiental, 2006).

The environmental economic sector grows year after year in terms of the number of companies and their turnover. In 1999, the Fundación Fórum Ambiental identified 820 companies in the environmental sector of Catalonia, directly employing 40,345 people and with a turnover of 2,221 million euros (Fundación Fórum Ambiental, 2000). The latest report of the Fundación Fórum Ambiental, published in 2008, identifies 1,313 companies with environmental activities in Catalonia, employing approximately 42,490 workers (2006 figure) and with a turnover of 7,482 million euros (2006 figure) (Fundación Fórum Ambiental, 2008).

Economic diversification provides a favourable framework for the development of the green economy. In most cases, this emerging sector is the result of the proliferation of small and medium-sized companies of local and regional scope. Their opportunity lies in acting where environmental impacts of a local nature arise or where there is a marked awareness of global impacts (such as climate change). In this sense, a necessary condition for the green economy to consolidate is that the inhabitants of the region where a company is to be established show clear pro-environmental behaviour. The latter is fostered by a consolidated social structure with a critical spirit, accustomed to taking part in its own decision-making processes. Therefore, the institutional factor is linked to the existence of a framework that respects democracy and promotes this social participation.
The public administration plays a fundamental role in promoting and energizing the green economic sector through its environmental requirements. It has to cover the needs of a growing population (in the case of Catalonia, the population increased in recent years from 6,356,889 inhabitants in 2001 to 7,242,458 in 2008 (Idescat, 2009) and, in the case of Terrassa, from 175,649 inhabitants on 1 January 2001 to 207,663 on 1 January 2007 (Fomento de Terrassa, 2008)), which entails a growing demand for resources and a growing generation of waste, causing several local and global environmental impacts. In this last respect, the administration acts by regulating, through legislation, the prevention, mitigation and correction actions needed to minimize environmental impacts. These regulations often generate in companies the need to offer new services or products or, at the very least, the possibility of expanding their market segment or market share; they therefore help to energize the environmental economic sector.

The main objective of this analysis is to carry out a study of the green economy in the city of Terrassa.

This study is the result of the agreement signed in 2009 between the Terrassa City Council and Fomento de Terrassa SA and the Universidad Politécnica de Cataluña (UNESCO Chair on Sustainability - Sustainability Measurement and Modelling research group).
2 The environmental economic sector in Terrassa

2.1 Classification of green companies

Green companies have been classified following two criteria: by their business activity and by environmental field. The business activities of the green companies of the municipality of Terrassa have been differentiated according to their offering of products and services (table 1-1).

Table 1-1. Business activities of the companies of the environmental sector in Terrassa
Business activities: Services, Production, Commercial
A second classification criterion is the division by environmental fields, corresponding to the environmental problems that the companies aim to solve (table 1-2).

Table 1-2. Environmental fields of the companies of the environmental sector in Terrassa
Environmental fields: Industrial waste, Municipal waste, Consultancy, Water, Energy, Air, Soil, Natural areas, Noise, Odours, Other
2.2 Evolution of the environmental sector in Terrassa

The evolution of the green economic sector in terms of the number of companies was positive over the period 1998-2008, making it a very dynamic sector regarding the creation of new companies within the Terrassa economy as a whole. This evolution can be seen in graph 1-1.

Graph 1-1. Evolution of the number of companies in the green sector and of total companies in Terrassa, 1998-2008

The evolution of employment in the green economy sector in Terrassa was positive throughout the period 1987-2006. Employment generation stagnated in the period 2007-2008, when the effects of the global economic crisis started to be felt. This evolution can be seen in graph 1-2.

Graph 1-2. Evolution of the number of workers in the green sector and of total workers in Terrassa, 1987-2008
The company size, that is, the number of workers per company, evolves in a markedly different way in the green economy sector and in the economy of the city. This difference can be seen in graph 1-3.

Graph 1-3. Comparative evolution of the company size of the green sector and of the total economy in Terrassa, 1998-2008

The company size in the green economy sector is much larger than the company size in the economy of the city. Taking into account the increase in the number of green sector companies during the period 2006-2007 (graph 1-1), this means that the increase is due to companies of very small size.

The weight of the green economy sector within the overall economy of Terrassa can be seen in graph 1-4.

Graph 1-4. Evolution of the GDP of Terrassa (index = 100 in 1998) and weight of the green economy GDP in the GDP of Terrassa, 1998-2008
2.3 Study of the flows of the green sector companies in Terrassa

This section presents the economic data obtained, as well as the structure of the green economy by business activity and environmental field.

2.3.1 Turnover

Turnover refers to the sum of the total sales of each company (before deducting taxes) per year. The evolution of the total turnover of the companies analysed in the period 2000-2007 is shown below:

Graph 1-5. Evolution of the turnover of the green sector companies in the period 2000-2007

The green economy sector in Terrassa grew steadily, by 69.42% over the period considered.
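A small sketch, under the assumption that the 69.42% figure refers to total growth between 2000 and 2007 (seven annual intervals), converts it into an equivalent compound annual growth rate:

# Convert the reported total turnover growth into an equivalent compound annual
# growth rate (CAGR). Interpreting 69.42% as growth over the seven intervals
# between 2000 and 2007 is an assumption made for illustration.
total_growth = 0.6942
years = 2007 - 2000
cagr = (1 + total_growth) ** (1 / years) - 1
print("equivalent annual growth rate: %.1f%%" % (100 * cagr))  # roughly 7.8% per year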
2.3.2 Structure of the green economic activity in the city of Terrassa

The companies of the green economy in Terrassa are predominantly service companies. For 2009, the structure of the companies was as follows.

Graph 1-6. Business structure of the green economy in the city of Terrassa, 2009
Thus, 62% are service companies (including installers), 31% are exclusively commercial and 7% are devoted to production.

The environmental fields in which the green sector companies work can be seen in the following graph.

Graph 1-7. Environmental fields and green sector companies in Terrassa, 2009

The most dynamic sectors have been: energy, consultancy, others, noise and natural areas.

The analysis of the economic variables by business activity is shown in the following graphs.

Graph 1-8. Number of employed workers by environmental field, 2007

In terms of employment, four fields stand out: energy, industrial waste, municipal waste and water.

Graph 1-9. Turnover by environmental field, 2007

From the analysis of graph 1-9 it follows that the most dynamic sectors are energy, industrial waste, water and others.

Graph 1-10 shows the financial results by environmental field for 2007.

Graph 1-10. Financial results by environmental field, 2007
3 Conclusions

• The economy of the environmental sector, or green economy, in the municipality of Terrassa accounts for approximately 5.6% of the total GDP of the municipality. This share grew during the period 1998-2008, although the latest data show less dynamism relative to the city, which has stabilized its percentage of the municipal GDP.
• Another important economic feature of the environmental sector is the stability of employment creation: over the period studied for this variable (1987-2008), only the last data point shows a relative decrease; there is no other year with a relative decrease in employment, in contrast to the employed workers of the city as a whole, where this did occur (period 1991-1993).
• The creation of new companies in the environmental sector grew steadily over the study period 1998-2008 and shows a dynamism (growth rates) greater than the creation of new companies in the city. These new companies generally have a small size, which has kept the company size of the environmental economic sector stable during the study period, in contrast with the evolution of the company size of the city's economy, which grew over the period analysed.
• With respect to Catalonia, the share of turnover, the number of employed workers and the number of companies of the environmental sector of Terrassa have all increased. This indicates a strong dynamism of environmental activity in Terrassa.
• In terms of employment, the business activities of the environmental economy in Terrassa are concentrated in four fields: energy, industrial waste, municipal waste and water.
• The economic data studied show a slowdown in the growth of the environmental economic sector in the last year studied, 2007-2008.
• There is a great need to reinforce the influence that green companies can have on the other companies belonging to their economic branch.
• Entering the green economy sector is straightforward, since it does not require investing large amounts of money, time or training.
• The stability and financial profitability of the green economy sector stands out in comparison with other sectors.
• The sector has the potential to act as a mechanism for reactivating and stabilizing the economy in times of recession, depression and crisis.

4 Bibliography
1. Administración de los Recursos Humanos (9ª edición). Wayne, R. Mondy; Noe, Robert M. Prentice Hall México. 2005. México.
2. Apollo Alliance i Urban Habitat. Community Jobs in the Green Economy. USA. 2007.
3. Banco Interamericano de Desarrollo (BID). Fundamentos de Evaluación de Impacto Ambiental. Banco Interamericano de Desarrollo. Santiago de Chile. 2001.
4. Baron, Valérie. Práctica de la gestión medioambiental ISO 14001. AENOR. Madrid. 1998.
5. Departamento de Investigación y Estrategias de Mercado. El sector del medio ambiente en Cataluña y España. Colección Informes Sectoriales, nº 1. Departament d'Investigació i Estratègies de Mercat, Fira de Barcelona. Barcelona. 2007.
6. Ética en los negocios: conceptos y casos (6ª edición). Velásquez, Manuel G. Prentice Hall México. 2006. México.
7. Fomento de Terrassa S.A. Anàlisi i detecció de les necessitats per a la sostenibilitat territorial a Terrassa. Observatori Econòmic i Social de Terrassa. Terrassa. 2005.
8. Fomento de Terrassa. Informe de Conjuntura de Terrassa 2006. Foment de Terrassa S.A. Terrassa. 2007.
9. Fundación Fòrum Ambiental. Directori i estudi del sector econòmic del medi ambient a Catalunya 2006. Fundació Fòrum Ambiental, Fira de Barcelona. Barcelona. 2006.
10. La responsabilidad social corporativa interna - La "Nueva Frontera" de los recursos humanos. Carneiro, Manuel C. (Escuela Superior de Gestión Comercial y Marketing (ESIC)). 2004.
11. Mckeown, Rosalyn. Manual de Educación para el Desarrollo Sostenible. Centro de Energía, Medio Ambiente y Recursos, Universidad de Tennessee. Knoxville, TN, USA. 2002.
12. Nuestro futuro común. Comisión mundial del medio ambiente y del desarrollo. Alianza Editorial. 1988. Madrid.
13. Puig, R. Análisis del ciclo de vida. Rubes Editorial. ISBN: 84-497-0070-1. 1997.
14. WWF. Empleo verde en Europa. Oportunidades y perspectivas futuras. http://www.scribd.com/doc/16927524/Empleo-Verde-en-Europa. 2009.
20th European Symposium on Computer Aided Process Engineering – ESCAPE20
S. Pierucci and G. Buzzi Ferraris (Editors)
© 2010 Elsevier B.V. All rights reserved.
On-line fault diagnosis based on the identification
of transient stages
Isaac Monroy (a), Raul B. (b), Gerard E. (c), Moisès G. (a)
(a) Chemical Engineering Department, (b) Automatic Control Department, (c) Software Department, EUETIB, UPC, Comte d'Urgell 187, Barcelona 08036, Spain.
[email protected], [email protected], [email protected], [email protected].
Abstract
A new approach for on-line fault diagnosis is proposed that takes into account the transient stages of the faults as information for constructing data models. Neural Networks (NN) are used as the classification algorithm, and the scores obtained after applying PCA are the inputs of the established structure. The NN models are applied to on-line validation data sets of ten samples, also considering the transient stages, detecting all the abnormal situations from the moment the faults start occurring.
Keywords: Transient stages, PCA, NN, Fault diagnosis.
1. Introduction
Fault diagnosis is a challenging problem in industrial and engineering practice. Most of
the data-based fault diagnosis approaches address this problem by considering the
process dynamics when the fault is already fully developed. However, in order for
automatic fault diagnosis to become a practical decision-making tool during plant
operation, on-line detection and identification of the early transient stages of the fault
evolution is required.
In addition, on-line monitoring of transient operations is important to detect abnormal events and enable timely recovery. Previous attempts to deal with this problem include the determination of detection delays using different statistical indexes such as Hotelling's T2 in Principal Component Analysis (PCA) and Correspondence Analysis [1,2,3,4].
Some approaches to process modelling, alarm management, fault diagnosis and other automation systems are ineffective during transitions because they are usually configured assuming a single state of operation. When the plant moves out of that state, these applications raise false alarms even when a desired change is occurring. Thus, some frameworks have already been developed for managing transitions and detecting faults [5,6].
This work addresses data-based fault diagnosis by means of a transient-based approach. PCA is applied to simulated data as the monitoring and detection technique in order to obtain the scores (principal component variables that are the axes of a new coordinate system), which represent the directions of maximum variability, the Hotelling T squared statistic, the Q statistic and thereby the delays in the fault appearance.
In addition, off-line learning of the transient stages during fault evolution is implemented using simulation, and the resulting models are then used to implement an on-line identification of the faults as new data of the process are acquired. Neural networks are used as the classification algorithm and thereby as the fault diagnosis tool. The methodology is explained in the next section.
2. Methodology
The proposed methodology for on-line fault diagnosis considering the transient stages in the occurrence of faults in continuous processes covers everything from process monitoring and fault detection up to the identification and diagnosis of the abnormalities in the process.
As a first step, a process monitoring technique is applied to a set of observations obtained by simulation of a continuous process under nominal conditions. In this approach PCA is applied, and its statistics are used as process monitoring and fault detection indexes when the faulty data are projected onto the constructed PCA model.
Suppose that m samples are available and that p is the number of measured variables in each sample. Let x̄ and S be the sample mean vector and covariance matrix, respectively, of these observations. The Hotelling T2 statistic becomes

T2 = (x − x̄)' S⁻¹ (x − x̄)      (Eqn. 1)

The control limit for this statistic [7] is:

UCL = [p (m − 1) (m + 1) / (m (m − p))] · Fα(p, m − p)      (Eqn. 2)
The scores plot presents the projection of each observation onto the reduced plane defined by the principal components, and the Q plot, calculated as the sum of the squared residuals, represents the squared distance of each observation to this plane [8].

Qm = Σ(j=1..p) em(j)²,  with E = X − T P'      (Eqn. 3)

where P is the component matrix (loading or eigenvector matrix) and T is the scores matrix obtained as the product of the data matrix X times P. The control limit is calculated according to the Jackson and Mudholkar equation (1979):
Qα = θ1 [ zα (2 θ2 h0²)^(1/2) / θ1 + 1 + θ2 h0 (h0 − 1) / θ1² ]^(1/h0)      (Eqn. 4)

with θi = Σj λj^i (i = 1, 2, 3), where the sum runs over the eigenvalues of the discarded principal components, and h0 = 1 − 2 θ1 θ3 / (3 θ2²).

Both the T2 and Q statistics are indicators of "normality" in processes when their values are below the control limits, and they help to determine the transient stages of the process when faults occur. Furthermore, Hotelling's T2 offers the opportunity of distinguishing the delays in the fault occurrence.
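As an illustration of this monitoring step, the following Python sketch (an assumption, not the authors' implementation, which relied on a MATLAB PCA model by Dr Kris Villez) fits a PCA model on nominal data and evaluates the T2 and Q statistics of a new observation against the control limits of Eqns. 2 and 4; the synthetic data and the number of retained components are illustrative only.

# Hedged sketch of PCA-based monitoring: fit PCA on nominal data, then compute the
# T2 and Q statistics of new observations together with their control limits.
import numpy as np
from scipy import stats

def fit_pca_monitor(X_nominal, n_pc, alpha=0.99):
    mean = X_nominal.mean(axis=0)
    Xc = X_nominal - mean
    m, p = Xc.shape
    eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(eigval)[::-1]
    eigval, P = eigval[order], eigvec[:, order]
    # T2 control limit (Eqn. 2) on the retained scores
    t2_lim = n_pc * (m - 1) * (m + 1) / (m * (m - n_pc)) * stats.f.ppf(alpha, n_pc, m - n_pc)
    # Q control limit (Eqn. 4) from the discarded eigenvalues
    th = [np.sum(eigval[n_pc:] ** i) for i in (1, 2, 3)]
    h0 = 1 - 2 * th[0] * th[2] / (3 * th[1] ** 2)
    z = stats.norm.ppf(alpha)
    q_lim = th[0] * (z * np.sqrt(2 * th[1] * h0 ** 2) / th[0]
                     + 1 + th[1] * h0 * (h0 - 1) / th[0] ** 2) ** (1 / h0)
    return mean, P, eigval, t2_lim, q_lim

def t2_q(x, mean, P, eigval, n_pc):
    xc = x - mean
    scores = xc @ P[:, :n_pc]
    t2 = np.sum(scores ** 2 / eigval[:n_pc])      # T2 restricted to the retained scores
    resid = xc - scores @ P[:, :n_pc].T
    return t2, float(resid @ resid)               # Q (Eqn. 3)

rng = np.random.default_rng(1)
X_nom = rng.normal(size=(500, 8))                 # synthetic nominal data
mean, P, lam, t2_lim, q_lim = fit_pca_monitor(X_nom, n_pc=3)
t2, q = t2_q(rng.normal(size=8) + 2.0, mean, P, lam, n_pc=3)  # shifted observation
print("T2 = %.1f (limit %.1f), Q = %.1f (limit %.1f)" % (t2, t2_lim, q, q_lim))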
As second step, classifiers models are constructed by off-line learning of the transient
stages during fault evolution once these stages have been located using the PCA
1455
On-line fault diagnosis based on the identification of transient stages
indexes. The models for the classification of faults, properly called fault diagnosis, are
obtained using NN as classification technique. The inputs of the algorithm are the scores
obtained from PCA and the structure of the NN must be fixed. The parameters to chose
are the number of input nodes, the number of hidden nodes and the transfer function in
the layer (typically sigmoid or tangent functions). The number of inputs will depend of
the number of PC to retain and the number of hidden nodes can also be optimized.
Finally, the network is validated with on-line validation data sets obtained by
simulation, establishing the number of samples on which the classifiers will be applied.
This situation mimics the reality of continuous processes, which do not stop: the
monitoring, detection and diagnosis (model application) tasks have to be carried out
on data as they are being acquired, and choosing the number of observations used
to validate the models (the network in this case) is itself a design decision.
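A minimal sketch of this classification step is given below, using scikit-learn's MLPClassifier as a stand-in for the small two-layer networks trained in MATLAB (the output layer and training algorithm therefore differ from the NN toolbox); the windowed majority vote is only one possible way of applying the classifier to streaming validation data:

import numpy as np
from sklearn.neural_network import MLPClassifier

def train_fault_classifier(scores_train, labels_train):
    """Off-line training on transient-stage scores; labels: 0 = nominal, 1..20 = fault."""
    clf = MLPClassifier(hidden_layer_sizes=(6,), activation="tanh",
                        max_iter=2000, random_state=0)
    clf.fit(scores_train, labels_train)
    return clf

def diagnose_stream(clf, score_stream, n_window=10):
    """On-line application: classify the incoming scores in windows of n_window samples."""
    diagnoses = []
    for start in range(0, len(score_stream) - n_window + 1, n_window):
        window = score_stream[start:start + n_window]
        votes = clf.predict(window)                          # one label per sample
        diagnoses.append(int(np.bincount(votes.astype(int)).argmax()))  # majority vote
    return diagnoses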
3. Case study
PCA and NN techniques have been computationally implemented in MATLAB, using
the PCA model developed by Dr. Kris Villez and the free NN toolbox in MATLAB.
The resulting fault detection and identification system (FDS) is then applied to the
Tennessee Eastman process [9] as a case study. This benchmark consists of 52 process
variables (p) and 20 faults (classes) to be diagnosed. Simulation runs of 50 hours, with
the fault introduced at 2 h and a sampling time of 1 minute, have been carried out to
obtain the source data sets. The PCA model is built from the observations of the
process under nominal conditions and is then applied not only to these data but also to
the faulty observations for monitoring purposes, in order to obtain the T² and Q statistics
for each state of the process and, from them, the fault detection delays.
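One simple way to obtain such delays, assuming they are defined as the time from fault introduction to the first sample whose statistic exceeds its control limit (the exact rule used by the authors is not stated), is sketched below; with 1-minute sampling and a fault introduced at 2 h, fault_sample would be 120:

import numpy as np

def detection_delay(stat, limit, fault_sample, sample_time_min=1.0):
    """Delay (minutes) between fault introduction and the first T2 or Q exceedance."""
    after_fault = np.asarray(stat)[fault_sample:]
    above = np.nonzero(after_fault > limit)[0]
    if above.size == 0:
        return None                       # fault never detected within the run
    return above[0] * sample_time_min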
The NN inputs are composed of the PCA scores of the observations containing the
transient stage; once the NN model is obtained, it is used to perform on-line
identification of the faults as new process data are acquired. In this case, the
data used for testing the NN models are also obtained from 10 h simulations of the TE
process, where the faults occur at the first hour. The results are presented in the next section.
4. Results and Discussions
PCA is applied to the TE process data under nominal conditions and under faulty situations,
and then the T² and Q statistics are calculated. Figure 1 shows the T² values for some faults and
Figure 2 shows the Q values for the same process situations, including their respective
control limits (9.2 for T² and 66.7 for Q).
Table 1 shows the delays, in minutes, in the detection of the twenty faults according to both
statistics.
Figure 1. Hotelling T2 for some faults of the TE process
Figure 2. Q statistic for some faults of the TE process
Two PCA models are constructed. The first one takes into account the first three hundred scores of both nominal and faulty data once the faults have been introduced in the
simulations, and the second one considers the same number of scores per class but only once
the delays (according to T²) have passed. These scores are used as inputs of the
networks. Validation sets of 10 samples (scores obtained after applying the PCA models) are
used for classification with the two NNs, up to five hundred observations.
The first NN structure consists of two layers and four input nodes (retaining 75% of the
variability in the scores), and the second one also of two layers but with five input nodes
(80% of the variability). The first layer of both structures has six tangent sigmoid nodes
and the second one has twenty logistic nodes, one per fault, with which the
validation observations (the scores) are classified. The NN structures are therefore
4-6-20 and 5-6-20, and the NN targets are set to +1 for the corresponding fault
and zero for the nominal-condition process.
Table 1. Time delays of the TEP faults using the Q and T² statistics of PCA, and NN results

Fault | Delay (min) according to T² | Delay (min) according to Q | NN classification with 2nd model
1 | 16 | 6 | 2,4,6,7,8,18
2 | 34 | 17 | 2,4,6,7,8,18
3 | 284 | 65 | 3,6,7,8
4 | 3 | 2 | 3,6,7,8
5 | 293 | 9 | 3,6,7,8
6 | 21 | 2 | 3,6,7,8
7 | 2 | 2 | 2,3,6,7,8
8 | 303 | 21 | 2,3,4,6,7,8,18
9 | 272 | 65 | 3,6,7,8
10 | 564 | 34 | 3,6,7,8
11 | 10 | 8 | 3,6,7,8
12 | 317 | 62 | 3,6,7,8
13 | 168 | 65 | 2,3,4,6,7,8,18
14 | 284 | 6 | 3,6,7,8
15 | 293 | 65 | 3,6,7,8
16 | 31 | 14 | 3,6,7,8
17 | 84 | 65 | 2,3,6,7,8,18
18 | 301 | 65 | 2,3,4,6,7,8,18
19 | 259 | 65 | 3,6,7,8
20 | 172 | 65 | 2,3,4,6,7,8,18
The main result to be highlighted is that, using the first NN structure, which
considers the first three hundred scores per class once the faults have developed, all the
observations of each fault are assigned simultaneously to faults 1, 2, 5, 16, 17 and 19. On
the other hand, applying the second structure, which considers the scores of the
observations taking into account the delay of each fault, the observations of each fault are
also assigned to several faults simultaneously, as shown in the last column of Table 1.
Observations under nominal conditions are correctly diagnosed as belonging to no fault.
As can be seen, the on-line faults are correctly detected with the NN but diagnosed as different
faults. This may be due to the small number of scores used to construct
the NN models; a larger number could not be used because of its computational cost.
Moreover, a larger number of observations would be required to validate the structure.
Furthermore, it should be remembered that the structure of the NN can be
optimized, which would probably yield better results in terms of classification (correct
diagnosis).
5. Conclusions
An approach for diagnosing faults on-line with data-based models that use transient-stage
data has been proposed. PCA is used not only to monitor the different states of the
process, but also to calculate the detection delay of each fault. The scores are used as inputs of a NN
structure, which acts as the classification and diagnosis algorithm. Although in some cases faults
are diagnosed as more than one fault, and not as the one actually occurring, this
approach is promising for real applications because, using only a small amount of
data from the transition stages, all the faulty observations are detected
even when the fault has barely developed.
Acknowledgements
Financial support from Generalitat de Catalunya through the FI fellowship program is
fully appreciated. Support from the Spanish Ministry of Education through project no.
DPI 2006-05673 and from the modelEAU team of the Université Laval are also
acknowledged.
References
[1] E. Russell, et al., 2000, Fault detection in industrial processes using canonical variate analysis and dynamic principal component analysis. Chemometrics and Intelligent Laboratory Systems, 51, 81-93.
[2] K.P. Detroja, et al., 2007, Plant-wide detection and diagnosis using correspondence analysis. Control Engineering Practice, 15, 12, 1468-1483.
[3] J.-D. Shao, et al., 2009, Generalized orthogonal locality preserving projections for nonlinear fault detection and diagnosis. Chemometrics and Intelligent Laboratory Systems, 96, 75-83.
[4] Y. Seng, et al., 2009, An adjoined multi-model approach for monitoring batch and transient operations. Computers and Chemical Engineering, 33, 887-902.
[5] A. Sundarraman, et al., 2003, Monitoring transitions in chemical plants using enhanced trend analysis. Computers and Chemical Engineering, 27, 1455-1472.
[6] R. Srinivasan, et al., 2005, A framework for managing transitions in chemical plants. Computers and Chemical Engineering, 29, 305-322.
[7] D.C. Montgomery, Introduction to Statistical Quality Control, 5th ed., Wiley International Edition, USA, 2005.
[8] P. Nomikos, et al., 1995, Multivariate SPC charts for monitoring batch processes. Technometrics, 37, 41-59.
[9] J.J. Downs, et al., 1993, A plant-wide industrial process control problem. Computers and Chemical Engineering, 17, 3.
The Smart Metering concept in the new electricity distribution scenario

Francisco Casellas*, Guillermo Velasco*, Francesc Guinjoan** and Robert Piqué*
Departament d'Enginyeria Electrònica (DEE) – Universitat Politècnica de Catalunya (UPC).
*EUETIB - Compte d'Urgell 187, 08036 – Barcelona (Spain)
**ETSETB - C. Jordi Girona, 1-3. 08934 - Barcelona (Spain)

Abstract— Smart metering has become a highly topical subject of growing importance: the regulations for its deployment are already being applied, pilot projects are under way and new devices have reached the market.
This paper gathers proposals and trends for present-day electrical energy meters. The meter is presented as the interface between the user's system, i.e. the distributed consumer-producer of energy, and the electrical grid. From this relationship, and with today's electronic technology, the concept known as "smart metering" emerges.
The document describes the functions of electrical energy meters from their beginnings, with the electromechanical meter 120 years ago, up to today's devices. It covers the generic measurement characteristics, the technologies of this equipment and the regulatory references used by these electrical energy measurement systems. It concludes with the trends foreseeable for the immediate future.

Index Terms— Solid-state meter, Energy measurement, Smart meter, Remote management.
I. INTRODUCTION

At the end of 2010 the first phase, covering 30% of the renewal of the existing electricity meters, comes to an end: these meters must be replaced by electronic ones capable of operating with time-of-use discrimination and remote management. The fourth phase concludes at the end of 2018 with the complete deployment of the equipment.
The deployment of today's electronic metering equipment is driven by Directive 2006/32/EC of the European Parliament and of the Council on energy end-use efficiency and energy services [1]; this impulse is taken up by the rules for the internal electricity market.
Royal Decree 809/2006 [2] states that "from 1 July 2007, metering equipment shall allow time-of-use discrimination of the measurements as well as remote management", and goes on to establish the so-called "Plan Contador" (Meter Plan), which makes the replacement of meters compulsory and defines the deadlines for replacing this equipment.

This work was carried out thanks to the people who make up the URT of EdePAE and was partially funded by the Spanish Ministry of Science and Innovation and by the European Union (FEDER) through projects DPI-2009-14713-C03-03 and RUE CSD2009-00046 of the Consolider-Ingenio 2010 programme.
Providing real-time information or time-of-use discrimination requires metering equipment different from the electromechanical meter, with special capabilities. A new way of measuring, called Smart Metering, therefore needs to be defined: it refers to the measurement process by which the amounts of energy consumed or produced are quantified and the information is transmitted instantaneously for its management in the electrical grid. This energy may be carried by any physical medium, but it is mainly understood as electrical energy; in the case of gas or another resource such as water or a heat-transfer fluid it is referred to as submetering.
For the electricity sector, Smart Metering includes the possibility of acting on the consumer's installation through connection-disconnection via the Power Control Switch (ICP) of the installation, which may be integrated into the meter itself [3].
The first Smart Metering systems implemented are based on electronic measurement systems and have two main objectives:
- To keep the energy consumer-producer informed of the current values of the energy flow.
- To quantify instantaneously the state of the distribution network on the consumer side.
On the information side, it allows users to establish their own policies of consumption, saving or production of energy in order to minimize the environmental and economic impact of energy use [4]. Through their decisions, users become part of the grid management system.
On the quantification side, it allows the energy supplier to carry out its work more efficiently, for example by controlling the quality of service and by offering new services to users, such as personalized tariffs.
The definition evolves as technological development advances, the term smart being reserved for the latest-generation devices or those of the near future [5].
The purpose of this document is to present the scope of use of today's energy meters in the environment of the electrical grid; to that end, the structure of this grid is outlined, together with the place occupied by the device that determines the electrical energy flowing through it.
The different types of meters are then described from an evolutionary point of view, starting with the electromechanical meter and ending with today's solid-state meters, followed by an operational classification and the technological details of how this metering equipment works. The new services obtained with the most modern meters make it possible to implement applications for measuring other, non-electrical energy parameters, facilitating automation for users and operators.
II. ELECTRICAL ENERGY MEASUREMENT

A. The electrical grid
The electrical grid is not a single entity but a set of multiple sub-networks with a hierarchical structure and a net flow of energy; it is, in fact, the largest machine ever built.
The grid is composed of the electrical generation elements, the transmission lines (normally in a ring structure at high voltage), transformer substations, the distribution lines (in a radial structure, normally at medium and low voltage) and the various consumers [6], with energy flowing from producer to consumer along the grid.
The use of renewable energies implies a change in the way production is organised: Distributed Energy Resources (DER) are beginning to be deployed in companies and homes to generate electricity, which allows them to sell the surplus energy and modifies the energy flow in the distribution network.
The system must evolve to achieve greater efficiency in consumption: real-time management of the energy flows is needed, together with bidirectional metering of local energy production. The current trend is therefore what is called the Smart Grid, understood as a management, information and communications system applied to the electrical grid [7]; it is a concept whose purpose is to increase connectivity, automation and coordination between producers, suppliers and consumers in the distribution network, which implies two parallel networks, one for energy and another for information (Fig. 1).
This smart grid can integrate the actions of all the elements connected to it (generators, consumers and operators) in order to deliver the electricity supply efficiently, in a sustainable, economical and secure way [8].
The Smart Grid deploys novel equipment and services which, together with intelligent monitoring, control, communication and self-checking technologies, aim to:
- Facilitate the connection and operation of generators, loads and storage units of different sizes and technologies.
- Allow consumers to play a role in optimizing the operation of the system.
- Provide consumers with more information and a wider choice of supply.
- Significantly reduce the environmental impact of the whole electricity supply system by optimizing its consumption.
- Offer improved levels of reliability and security of supply.
The active participation of consumption in the grid helps to improve the efficiency of use, but only if there is coordinated activity between the grid, the energy meter, the users and the manufacturers of consuming devices.
An important point of the process is determining the electrical energy consumed from the grid: for the management of the service, a device is needed that quantifies the value of the energy, stores this value and presents to the retailer-operator or to the user the amount (counts or pulses) corresponding to the energy that has passed through the device and been managed by the grid.
B. Electrical energy
Electrical energy measurement systems deal with two types of variables: energy and power.
- In physics, energy is defined as the capacity to perform work.
- In physics, power is the amount of work performed per unit of time.
The quantities to be measured by the meter are power and energy, both active and reactive, in single-phase or three-phase systems: power in VA and reactive VA, energy in kWh and reactive kVAh [9].
The signals involved in the measurement are the voltage v(t) and the current i(t), considered as sinusoids displaced by a given angle φ, where V and I are their respective RMS values:

v(t) = √2 · V · sin(ωt)
i(t) = √2 · I · sin(ωt − φ)        (1)
Fig. 1. Energy flow in the electrical grid, together with a parallel network that carries the information enabling energy management.
AGC: Automatic generation control
EMS: Energy Management System
SCADA: Supervisory Control and Data Acquisition
DMS: Distribution Management System
DA: Distribution Automation
AMI: Advanced Meter Infrastructure
The instantaneous power p(t) is the product of the instantaneous values of v(t) and i(t):

p(t) = v(t) · i(t)
p(t) = 2 · V · I · sin(ωt) · sin(ωt − φ) = V · I · [cos φ − cos(2ωt − φ)]        (2)

In single-phase systems, the first term after the last equality, V·I·cos φ, defines the active power, the one that performs work, which is constant for constant RMS values (3). The reactive power is the one produced by the electric and magnetic fields, due to the capacitive or, more commonly, inductive character of the consumer (4). The phasor sum of powers (3) and (4) is called the apparent power (5):

P = V · I · cos φ        (3)
Q = V · I · sin φ        (4)
S = √(P² + Q²)        (5)

The energy is the value of the power accumulated over time (it may be the billing value); the expression for the total energy is:

E = ∫₀ᵀ v(t) · i(t) dt        (6)

The measurement of these signals is strongly influenced by the angle between the voltage and current phasors and by the waveform distortion caused by the harmonic content of the voltage and current waveforms.
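As a worked illustration of Eqns. (1)-(6), the sketch below (Python/NumPy; function names, units and the example values are purely illustrative) evaluates the active, reactive and apparent power for given RMS values and phase angle, and integrates sampled v(t)·i(t) to obtain the energy:

import numpy as np

def power_terms(V_rms, I_rms, phi_rad):
    """Active, reactive and apparent power of sinusoidal v(t), i(t) (Eqns. 3-5)."""
    P = V_rms * I_rms * np.cos(phi_rad)
    Q = V_rms * I_rms * np.sin(phi_rad)
    S = np.hypot(P, Q)                  # S = sqrt(P^2 + Q^2)
    return P, Q, S

def energy_from_samples(v, i, dt):
    """Energy over the record as the integral of v(t)*i(t) (Eqn. 6), trapezoidal rule."""
    return np.trapz(np.asarray(v) * np.asarray(i), dx=dt)   # joules if V, A, s

# Example: 230 V, 10 A, current lagging by 30 degrees
P, Q, S = power_terms(230.0, 10.0, np.deg2rad(30.0))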
III. ENERGY METERS
The equipment for measuring the electrical energy consumed is the electricity meter, which consists of three main elements: the measurement system, the memory element and the information device (Fig. 2). In this sense the electricity meter acts as the interface between the grid and the user; it is the front-end of the grid.

Fig. 2. Generic structure of the electrical energy meter.

Electrical energy metering equipment can be classified according to its characteristics [10]:
- Technological: electromechanical or electronic meters (solid-state meters).
- Functional: single-phase or three-phase.
- Energy-related: active-energy and/or reactive-energy meters.
- Operational: recording-type devices, or programmable devices that allow remote management.
Recording-type equipment may belong to either technology:
- Electromechanical meters, which can measure only one type of energy (accumulated kWh or accumulated kVAh) and have no tariff discrimination; these are the standard electromechanical induction meters.
- Electronic meters, Automatic Meter Reading (AMR), which measure only accumulated energy and record the total energy monthly or over predefined time intervals. They provide basic bidirectional communication between the meter and the data server, and this technology enables Time of Use (ToU) measurements.
Programmable metering equipment is of the electronic type:
- Advanced Meter Infrastructure (AMI): these devices allow on-demand reading of the accumulated energy or of the instantaneous power, support differentiated pricing options per type of measurement and demand registers, or the programming of "load" intervals previously agreed with each customer. They allow networked communication with the management office.
- Smart Meters: through the management centre these devices provide information on, and control of, the quality and service-programming parameters, together with remote updating of the metering software. They include extended networked communication with the operator and a Home Area Network (HAN) with the local consuming equipment.
A. Electromechanical meters
The basic idea behind the electromechanical induction meter comes from the studies of Galileo Ferraris, who made a key discovery: two phase-shifted alternating fields can make a solid metal disc rotate. This discovery stimulated the development of induction motors and, with them, the implementation of electromechanical induction meters [11].
The different types of electric motors used in electricity meters can be classified as [12]:
- Commutator type.
- Induction type.
- Faraday disc type.
The most common single-phase electricity meter is the Thomson meter, or electromechanical induction meter, patented by Elihu Thomson in 1889; this model, regarded as the standard, is the basis of the more modern electromechanical meters that have been installed for more than 120 years [13][14][15].
B. Electronic meters
The first automatic measuring devices belong to the pre-microprocessor, pre-Internet period. They were devices with electromechanical measurement, based on the existing electricity meters, with digital communications built on the incipient digital technologies of the early 1960s. Some patents, referenced below, illustrate the evolution of these devices:
- Reading equipment that detects the angular position of the dials to obtain a binary code of the measured value [16].
- Fully electronic equipment that measures voltage and current from the mean value of the rectified signals, using a voltage-to-frequency converter and a counter to display the measured value [17].
- A device that communicates by telephone call to the central office, transmitting the meter code and the measured value [18][19].
At the beginning of the 1970s, data acquisition, processing and communications were severely limited by the computing capacity of microcontrollers and by the interconnection of digital systems; the first designs were based on computers but were far from economically viable.
The technological framework defined by microelectronics and communications in the development of AMR can be summarized by the following dates:
- 1963: Sylvania markets the first TTL integrated circuits.
- 1969: first interconnected network, with the first link between the universities of UCLA and Stanford.
- 1971: production of the 4004 microprocessor.
- 1970s: home processors.
- 1981: IBM PC5150 personal computer.
In 1978 Metretek, Inc. [20] developed a pre-Internet design and produced the first commercially available, fully automated AMR for remote meter reading, with a management system based on an IBM minicomputer.
Against this background, in the 1980s the electronics industry began to produce the first hybrid meters, based on induction meters [7]. The first AMR units were measuring devices (energy meters in the classical sense) incorporating a Microcontroller Unit (MCU), which both automates the system and gives it the capability of communicating with a central system. They are measuring devices that provide the electricity consumption values at a predefined rate and can transmit the measurement monthly or define a shorter billing period.
Fully electronic meters started with single-phase models and later polyphase ones; by the 1990s they contained no electromechanical parts other than the terminal blocks.
This type of device is intended to make the acquired data reliable: the need is to obtain real measurements instead of values estimated or supplied by the user, since the metering equipment may be located in private places that are difficult to access. It is important for the operator to obtain a consumption profile faithful to reality, for which measurements with programmable periodicity are needed. Another possibility of the device is to include submetering.
The operation consists of sending the information to the Data Management (DM) system as part of a measurement, collection and data-management infrastructure in which a new meter, the AMI meter, is required [21]. The readings are shown to users in real time so that they can change their consumption behaviour according to the tariffs or to their environmental concerns.
IV. THE SMART METER
The next technological evolution is the Smart Meter, which is basically an AMI meter that includes at least the following additions: energy control by means of a programmable ICP that sets the consumption limit, a HAN port, and on-demand tariff services (Fig. 3).
The general structure of the meter keeps the three main elements (the measurement system, the memory and the main information device, which is now the communications system). To extend its operating capabilities the following complementary elements are added:
- Power supply systems.
- Computation processor.
- Communications processor.
- Actuation or control device.
Fig. 3. General structure of the remote metering system.
A. Generic characteristics of the electronic meter
The energy value to be calculated goes through the following processes before being shown on the meter's LCD [22]:
- Digitization, in the correct phase, of the instantaneous voltage and current values by means of a high-resolution ΣΔ converter.
- Calculation of the product of the variables to obtain the instantaneous power values.
- Integration of the calculated variables over time, which yields the energy values.
The active energy is determined by the algorithm that implements expression (7):

E = ∫ p(t) dt = lim_{T→0} Σ_{n=1}^{∞} p(nT) · T        (7)
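A minimal sketch of the discrete accumulation in Eqn. (7), as generic meter firmware would perform it on the ADC samples (written here in Python for clarity; the sampling period and units are assumptions):

import numpy as np

def accumulate_energy(v_samples, i_samples, T):
    """Discrete form of Eqn. (7): E ~= sum over n of p(nT) * T, with p = v * i.

    v_samples, i_samples: simultaneous ADC samples of voltage and current.
    T: sampling period in seconds. Returns energy in watt-seconds (J);
    dividing by 3.6e6 gives kWh, as an energy register would store it.
    """
    p = np.asarray(v_samples) * np.asarray(i_samples)   # instantaneous power
    return float(np.sum(p) * T)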
The general structure of a meter is shown in Fig. 4. The main difference between manufacturers lies in the electronic design with which the meter is implemented, where the following options can be found:
- A mid-range MCU, a Digital Signal Processor (DSP) or a programmable-logic device, all three with mixed-signal technology including a Programmable Gain Amplifier (PGA) and an Analog-to-Digital Converter (ADC); the three options have in common a serial port for control by an external processor that performs the remaining operations. This is the basic model.
- A mid-range MCU with computing capabilities such as hardware multiplication, a Reduced Instruction Set Computer (RISC) structure with communication peripherals, several types of memory and mixed-signal elements (ADC with PGA). These are usually generic MCU devices adapted to a metering design.
- A low-range MCU devoted to the overall management of the system, which is easier to program, surrounded by peripherals similar to the previous case, to which a DSP-based block or a programmable-logic device with the necessary mixed-signal elements has been added. This dual-core system allows the power and energy calculations to be carried out in a more deterministic way, optimized by the DSP.
In all these designs extra communication elements must be added, since the peripherals they implement only support local connections of a few metres at most. The connection system must include galvanic isolation, because the electrical ground of the system is connected to the neutral of the grid.
It should also be noted that the trend is to integrate as much of the system as possible in silicon, which is reflected in terms such as System on Chip (SoC) and System in Package (SiP) for the most complex integrated circuits.
Fig. 4. Generic structure of a meter. The device may be of type:
- Basic: sensors, calculator and pulse generator.
- Intermediate: adds an MCU and its peripherals to the previous one.
- Advanced: also includes several communication elements.
B. Signal sensing and conditioning stage
The first part of the measurement process is analogue and external to the IC. To sense the voltage signal it must be attenuated, typically with a voltage divider or a transformer, and its amplitude adapted to the PGA range; the measurements are made between phases or, preferably, between phase and neutral.
The current is measured as a voltage via a current transformer, a shunt resistor or a Rogowski coil (di/dt). Other current sensors exist, such as fluxgate or Hall sensors, or the family of magnetoresistive sensors (MR): AMR, GMR, CMR. The current measurement may cover one phase, two phases, three phases without neutral or three phases with neutral, depending on the meter to be implemented and on the number of ADCs available in the IC.
The use of transformers for voltage or current measurement presents a problem: the phase shift introduced in the secondary signal with respect to the measured signal. This implies an error in the instantaneous power calculated by the DSP, which must be minimized by implementing a digital filter that corrects this phase shift and is adjusted during the calibration of the meter.
C. Signal quantization and processing
The next stage is to adapt the signals with the PGAs and convert them with the ADCs, passing the values to the DSP for the power calculations. The ADCs are typically of high resolution, 16 to 24 bits, with a bandwidth greater than 14 kHz.
As indicated by expression (2), the instantaneous power p(t) is obtained as the product of the instantaneous voltage v(t) and current i(t). An averaging filter is applied to the result of this operation, yielding the constant term that corresponds to the active power P, see expression (3).
For the reactive power Q, the current phasor is shifted by 90°, and then the same calculation as for the active power is performed. The apparent power S is calculated according to expression (5).
For the energy calculations the DSP uses an algorithm that implements expression (7) for the active energy; for the reactive energy, Q is used as the computed variable, and S for the apparent energy.
Other calculations that may be implemented relate to the quality of the line signal, such as peaks in the RMS voltage, sags in the RMS voltage, frequency variations of the signal, or aperiodic zero-crossing errors in the voltage waveform.
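The following sketch reproduces this processing chain on sampled waveforms: P as the average of v·i, Q from the same product with the current shifted by 90°, and S from Eqn. (5). A Hilbert transform is used here to obtain the quadrature current, which is only an illustrative equivalent of the dedicated phase-shift filters used in metering ICs, and it assumes a record containing an integer number of mains cycles:

import numpy as np
from scipy.signal import hilbert

def p_q_s_from_samples(v, i):
    """Active, reactive and apparent power from simultaneously sampled v(t), i(t)."""
    v = np.asarray(v, dtype=float)
    i = np.asarray(i, dtype=float)
    P = np.mean(v * i)                    # averaging filter on p(t) -> active power
    i_shifted = -np.imag(hilbert(i))      # current shifted by 90 deg (sign chosen so
                                          # that Q > 0 for a lagging, inductive load)
    Q = np.mean(v * i_shifted)
    S = np.hypot(P, Q)                    # apparent power, Eqn. (5)
    return P, Q, S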
D. System control and data transmission
In devices with a dual processor core, the values calculated by the DSP remain as variables in the data registers, available to the MCU for storage and remote transmission; in parallel, as an option for the designer, the pulse generator drives the active- and reactive-power outputs, sending the pulses to an external counter.
Communications are implemented through links with the HAN and with the DM, using the serial ports available as MCU peripherals; this is local Machine-to-Machine (M2M) communication with other devices specialized in the networks to be deployed. This modularity makes it possible to adapt the design to the circumstances of each application.
Typical complementary elements in these devices include internal temperature sensors, a real-time clock, system power management, a supervisor for the system-configuration and firmware memories, and current-leakage detection (tamper function).
V. TRENDS
The sensing and communications of the meter allow the distributed measurement of the energy consumed, that is, the monitoring of the signals of interest over a cycle of the mains, which allows the powers and energies involved to be calculated and, in this way, the energy flow to be determined.
The foreseeable scenario, in line with the current trend, points to increasingly distributed generation and consumption of energy, which demand solutions to new problems of efficiency, security and grid management. In particular, the requirements of the monitoring, supervision and quality-of-service control systems, in terms of automation, capacity and speed of measurement of a growing number of parameters of the electrical signals, will become more and more demanding. Likewise, these systems must support future management policies that evolve from free profiles, both of consumption and of production, towards more constrained profiles with fewer degrees of freedom.
In this context, the line of work to follow involves characterizing the current measurement systems and assessing the necessary control needs, from a micro-grid perspective, on the basis of the energy consumption-production profiles that the electrical grid will see at the connection point, in order, eventually, to redesign the system and its management.

VI. CONCLUSIONS
The Smart Grid structure demands real-time data reading for system-level aspects such as resource management and supervision, as well as user-level aspects such as automatic billing or the control of energy consumption, which are, among others, some of the new services and applications required by distributed generation and sustainable consumption.
This way of reading data with the new AMI equipment and the new Smart Meters is based on the capacity to manage both the meters and the large volume of measured data, through what is known as Smart Metering.
This work has highlighted the growing importance of all these concepts as the elements that will make it possible to deploy the technological infrastructure imposed by the regulations and, in this way, to provide the necessary data to the management systems and to the energy consumers.
The different elements involved in the process (the grid, the general structure of the remote metering system and the Smart Meter) have been described from a Smart Grid perspective. The structure and operation of the different types of meters have been detailed, and data on the most important manufacturers and regulatory references for low-cost systems have been presented.
The result is the description of a meter that will be deployed at millions of consumption points, whose purpose is to provide the support infrastructure needed for the user to become an important part of the management of the distribution network.

ANNEX

A. Electronics manufacturers
The manufacturers of electronic devices for solid-state or AMI meters offer different base designs with which to implement them.
There are two groups of base designs: those based on application-specific integrated circuits (ICs), Table I, and those based on standard MCUs to which the more advanced features are added in the form of external circuitry and firmware.
TABLE I
SOME MANUFACTURERS OF SPECIFIC ICs AND THEIR AMI APPLICATIONS

Manufacturer | Reference | Meter type | Measured parameters
Analog Devices [23] | ADE5169 | Single-phase | Irms, Vrms, P, Q, S
Analog Devices [23] | ADE7878 | Polyphase | Irms, Vrms, P, Q, S
ATMEL Corp. [24] | AVR465 | Single-phase | P
austriamicrosystems [25] | AS8268 | Single-phase | Irms, Vrms, P, Q
Cirrus Logic [26] | CS5463 | 2 ADC | Irms, Vrms, P, Q, S
Cirrus Logic [26] | CS5464 | 3 ADC | Irms, Vrms, P, Q, S
Cirrus Logic [26] | CS5467 | 4 ADC | Irms, Vrms, P, Q, S
Maxim [27] | MAXQ3183 | Polyphase | Irms, Vrms, P, Q, S
Microchip [28] | MCP3905 | Single-phase | P
Microchip [28] | MCP3909 | Polyphase | P
Sames [29] | SA9607M | Single-phase | P
Sames [29] | SA9904B | Polyphase | Irms, Vrms, P, Q
ST [30] | STMP14 | Single-phase | P
Teridian [31] | 71M6532F | Single-phase | Irms, Vrms, P, Q, S
Teridian [31] | 71M6534 | Polyphase | Irms, Vrms, P, Q, S

Single-phase: two-line measurement, 2 or 3 ADCs. Polyphase: three-phase measurement, either with 4 ADCs (Aaron connection) or full three-phase with or without neutral.
Irms: RMS current; Vrms: RMS voltage; P: active power; Q: reactive power; S: apparent power.
B. Electricity meter manufacturers
The companies that manufacture electronic meters are more numerous; as a reference, Table 2 lists possibly the seven largest or those with the longest history [32][33].

C. International standards
Smart Meters must comply with the standards that allow them to communicate with the DM and with the user's HAN:
- Communications with the concentrator and the DM (last-mile communication):
  - IEEE 802.15.4 or ZigBee.
  - IEEE 802.11 or Wi-Fi.
  - Worldwide Interoperability for Microwave Access (WiMAX).
  - Power Line Communications (PLC).
  - General Packet Radio Service (GPRS), Short Message Service (SMS).
  - Device Language Message Specification (DLMS) - COmpanion Specification for Energy Metering (COSEM) [34].
- HAN and Energy Gateway:
  - IEEE 802.15.4 or ZigBee.
  - Bluetooth Low Energy.
  - IEEE 802.11 or Wi-Fi.
At the same time they must perform their main function, the measurement process. Two standards define the accuracy of electrical energy (a.c.) metering equipment:
- ANSI C12.20 (USA): accuracy classes for electricity meters.
- IEC 62053 (European standard UNE-EN 62053): particular requirements.
TABLE 2
SOME MANUFACTURERS OF ELECTRONIC METERS

Manufacturer (country) | Equipment and services offered
Circutor (Spain) | Design and manufacture of equipment for electrical energy efficiency, industrial electrical protection, and measurement and control of electrical energy.
Echelon (USA) | Networked Energy Services (NES), smart electric meters; smart meter certification for ANSI and IEC standards.
Elster Group (Luxembourg) | Supplier of grid-control equipment and software; development of smart metering solutions.
GE Energy (USA) | Worldwide supplier of advanced metering products and smart metering solutions. Electric, gas and water smart meters.
Iskraemeco (Slovenia) | Electric AMR and smart meters.
Itron, Actaris (USA) | Worldwide supplier of electrical energy metering, recording and billing devices and systems. Electric smart meters.
Landis+Gyr (Switzerland) | Supplier of energy technologies and water-industry solutions; electricity metering with a strong position in remote management (AMM) or "smart meters". Electric, gas and water smart meters.
Siemens Energy (Germany) | Specialized in electrical automation systems and smart meters; Metering Information System (AMIS) solution.
ZIV (Spain) | Electrical energy meters and metering systems; power-quality measurement equipment.

D. Rules for the internal electricity market
The applicable regulations can be summarized as [3]:
- Directives of the European Parliament and of the Council: 2009/72/EC, common rules for the internal market in electricity, and 2006/32/EC, on energy end-use efficiency and energy services.
- Law 17/2007, which modifies Law 54/1997 of the Electricity Sector.
- Royal Decree 1110/2007, which approves the unified Regulation of measurement points of the electrical system.
- Order ITC/3022/2007, which regulates the State metrological control of static combined active and reactive electrical energy meters to be installed in electricity supplies of up to 15 kW of active power.
- Order ITC/3860/2007, which revises the electricity tariffs from 1 January 2008.
According to the "Plan Contador", the current meters in electricity supplies must be replaced by new electronic equipment, capable of time-of-use discrimination and remote management, within a period of eleven years, between 1 January 2008 and 31 December 2018.

E. Development groups
- NIST (National Institute of Standards and Technology) [35].
- European OPEN meter project: European Commission, within the FP7 R&D framework programme, the project called Open Meter (Open and Public Extended Network METERing) [36].
REFERENCES
[1] EU legislation, "Summaries of EU legislation" [Online]. Available: http://europa.eu/legislation_summaries/energy/energy_efficiency/l27057_es.htm
[2] Real Decreto 809/2006 [Online]. Available: http://noticias.juridicas.com/actual
[3] "Informe 23/2009 de la CNE Solicitado por la Secretaria de Estado de Energía Sobre la Propuesta de Orden por la que se Establece el "Plan Contador"", Comisión Nacional de Energía, 20 Jul. 2009 [Online]. Available: www.cne.es/cne/doc/publicaciones/cne119_09.pdf
[4] European Smart Metering Alliance [Online]. Available: www.esma-home.eu/
[5] Echelon, "Making Devices Smart and the Grid Smarter", GSA Expo, October 2009 [Online]. Available: http://www.gsaglobal.org/resources/presentations/index.asp
[6] J. D. McDonald, "Electric power substations engineering", 2nd ed., CRC Press, 2007.
[7] Various authors, "Smart grid", Wikipedia, 11 January 2010 [Online]. Available: http://en.wikipedia.org/wiki/Smart_grid
[8] Various authors, "Strategic Deployment Document for Europe's Electricity Networks of the Future", The SmartGrids Technology Platform, September 2008 [Online]. Available: www.smartgrids.eu/
[9] J. Grubbs, "Power System Stability and Control: Metering of Electric Power and Energy", CRC Press, 2007.
[10] R. Levy, "An Overview of Smart Grid Issues", presented at the Oregon Public Utility Commission Smart Grid Workshop, 9 September 2009.
[11] D. Dahle, "A brief history of meter companies and meter evolution", 11 February 2009 [Online]. Available: http://watthourmeters.com/history.html
[12] Hawkins, "Electrical guide number seven", Theo. Audel & Co., New York, 1915.
[13] E. Thomson, "Electric Meter", U.S. Patent 448280, filed 20 Nov. 1890.
[14] Various authors, "Watt-hour meter maintenance and testing", United States Department of the Interior, Bureau of Reclamation, December 2000 [Online]. Available: www.usbr.gov/power/data/fist/fist3_10/vol3-10.pdf
[15] H. P. Davis and F. Conrad, "Electric meter and motor", U.S. Patent 608842, filed 18 Jun. 1898.
[16] D. Martell, "Decoder Circuits For Shaft Encoder Apparatus", U.S. Patent 3750156, Jul. 7, 1973.
[17] L. Laurence, "Energy monitoring device", U.S. Patent 4080568, March 21, 1978.
[18] G. Theodoros, "Apparatus and method for remote sensor monitoring, metering and control", U.S. Patent 4241237, Dec. 12, 1980.
[19] G. Theodoros, "Apparatus and method for remote sensor monitoring, metering and control", U.S. Patent 4455453, Jun. 6, 1984.
[20] Various authors, "Automatic meter reading", Wikipedia, 23 January 2010 [Online]. Available: http://en.wikipedia.org/wiki/Automatic_meter_reading
[21] Federal Energy Regulatory Commission staff report, "Assessment of Demand Response and Advanced Metering" (Docket AD06-2-000), U.S. Department of Energy, 2006, p. 20.
[22] A. Harney, "Smart Metering Technology Promotes Energy Efficiency for a Greener World", Analog Devices, 2009 [Online]. Available: http://www.analog.com/library/analogDialogue/archives/4301/smart_metering.pdf
[23] www.analog.com
[24] www.atmel.com
[25] www.austriamicrosystems.com
[26] www.cirrus.com
[27] www.maxim-ic.com
[28] www.microchip.com
[29] www.sames.co.za
[30] www.st.com
[31] www.teridian.com
[32] Refabrica, "Who's Who in Smart Grid and Smart Metering", October 2009 [Online]. Available: www.refabrica.com
[33] J. Berst, "Meter Maker Shakedown: The 5 That Will Survive", November 2009 [Online]. Available: www.smartgridnews.com
[34] www.dlms.com
[35] Pike Research, "Smart Electrical Meters, Advanced Metering Infrastructure, and Meter Communications: Market Analysis and Forecasts", 2010 [Online]. Available: http://www.pikeresearch.com/research/smart-meters
[36] www.openmeter.com
Spectral Analysis of the RR series and the Respiratory Flow Signal on
Patients in Weaning Process
Andrés Arcentales1, Beatriz F. Giraldo1,2,3, Pere Caminal1,2,4, Iván Díaz5, Salvador Benito5
1 Dept. ESAII, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain.
2CIBER de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain.
3Biomedical Signal Processing and Interpretation Group, Institut de Bioenginyeria de Catalunya (IBEC), Barcelona, Spain.
4Centre de Recerca en Enginyeria Biomèdica (CREB), UPC, Barcelona, Spain.
5Hospital de la Santa Creu i Sant Pau, Barcelona, Spain.
Abstract — A considerable number of patients in weaning process have problems to keep spontaneous breathing during the trial and after it. This study proposes to extract
characteristic parameters of the RR series and respiratory flow signal according to the patients’ condition in weaning test. Three groups of patients have been considered: 93
patients with successful trials (group S), 40 patients that failed to maintain spontaneous breathing (group F), and 21 patients who had successful weaning trials, but that had to be
reintubated before 48 hours (group R). The characterization was performed using spectral analysis of the signals, through the power spectral density, cross power spectral density
and Coherence method. The parameters were extracted on the three frequency bands (VLF, LF and HF), and the principal statistical differences between groups were obtained in
bands of VLF and HF. The results show an accuracy of 76.9% in the classification of S and F groups.
I. INTRODUCTION
One of the most challenging problems in intensive care is the process of discontinuing mechanical ventilation, commonly referred to as weaning. Previous investigations reported that nearly 40% of intensive care unit patients need a mechanical ventilator to sustain their lives. Various studies have been carried out to detect which physiological variables can identify readiness to undertake a weaning trial. The aim of this study is to characterize the patients on weaning trials using parameters extracted from the spectral analysis of the electrocardiographic and respiratory flow signals.

II. ANALYZED DATA
Electrocardiographic (ECG) and respiratory flow (FLW) signals were measured in 154 patients on weaning trials from mechanical ventilation (WEANDB database). Both signals were recorded for 30 min at a sampling frequency of 250 Hz. Patients were classified into 3 groups:
- Successful (GS): 93 patients
- Failure (GF): 40 patients
- Reintubated (GR): 21 patients
III. METHODOLOGY

Pre-processing
Cardiac interbeat duration (RR) was obtained by processing the ECG signal using an algorithm based on wavelet analysis. The RR signal was obtained by sampling the linear interpolation of the RR series at 250 Hz. Both signals were filtered and the linear trend was removed.

Spectral Bands
- Very low frequency (VLF): 0 - 0.04 Hz
- Low frequency (LF): 0.04 - 0.15 Hz
- High frequency (HF): 0.15 - 0.4 Hz

Spectral Analysis
Using Welch's averaged modified periodogram method, the Power Spectral Density (PSD) was calculated, where x(n) is the signal, w(n) a Hamming window, L the length of the segments, D the separation between segments and K the number of segments. The Cross Power Spectral Density (CPSD) was calculated between RR and FLW. The magnitude squared coherence (MSC) is defined from the cross spectral density Sxy(ejω) between the two time series x and y, normalized by the autospectral densities Sxx(ejω) and Syy(ejω) of each one.

Parameter Extraction
The PSD of the RR signal was characterized by the total power (PT), the power of the different bands (VLF, LF and HF), the ratio LF/HF, and the frequency peak (fp) of the principal power peak. The spectrum of the respiratory flow signal was characterized by the fp of the main power peak, its discriminant band (DB), the PT, and the power in the DB (PDB). The CPSD and MSC were analyzed in the three spectral bands (VLF, LF and HF); for each band the fp and its DB, the PDB and the PT were calculated. The dispersion of the power was also estimated by the standard deviation (SD) and the interquartile range (IQR).
IV. RESULTS

Respiratory frequency peak of the PSD (mean ± SD):
fp (Hz): GS 0.40 ± 0.12 | GF 0.52 ± 0.14 | GR 0.43 ± 0.15 | p < 0.001
PT: GS 39104.70 ± 70292.90 | GF 46453.96 ± 71257.37 | GR 54436.30 ± 51469.26 | n.s.

CPSD parameters comparing GS vs. GF (mean ± SD):
Power parameters: PDB 3.9 ± 6.3 vs. 1.6 ± 2.4, p = 0.003 | PT 6.7 ± 9.3 vs. 3.6 ± 4.3, p = 0.028
Dispersion parameters: Power SD (PT) 4.5 ± 6.4 vs. 2.1 ± 2.9, p = 0.003 | Power IQR (PT) 5.9 ± 8.8 vs. 2.4 ± 2.9, p = 0.005

CPSD parameters comparing GS vs. GR (mean ± SD):
Power parameters: PDB 4.3 ± 6.5 vs. 6.5 ± 7.8, p = 0.049 | PT 4.5 ± 6.5 vs. 6.8 ± 8, n.s.
Dispersion parameters: Power SD (PT) 1.5 ± 2.4 vs. 2.2 ± 2.5, n.s. | Power IQR (PT) 2.4 ± 4.0 vs. 3.7 ± 4.1, p = 0.033

CPSD parameters comparing GF vs. GR (mean ± SD):
VLF band - Power parameters: PDB 3.3 ± 5.3 vs. 6.5 ± 7.8, p = 0.013 | PT 3.0 ± 4.8 vs. 6.8 ± 8, p = 0.012; Dispersion parameters: Power SD (PT) 0.8 ± 1.3 vs. 2.2 ± 2.5, p = 0.009 | Power IQR (PT) 1.4 ± 2.3 vs. 3.7 ± 4.1, p = 0.005
HF band - Power parameters: PDB 1.6 ± 2.4 vs. 7.8 ± 21.0, p = 0.005 | PT 3.6 ± 4.3 vs. 12.8 ± 32.6, p = 0.047; Dispersion parameters: Power SD (PT) 2.1 ± 2.9 vs. 6.3 ± 10.6, p = 0.026 | Power IQR (PT) 2.4 ± 2.9 vs. 6.6 ± 9.9, p = 0.019

The best results were obtained with the parameters extracted from the CPSD and from the PSD of the respiratory flow.

V. DISCUSSION AND CONCLUSION
The best statistical differences between the groups of patients were found using the characterization of the CPSD. The PDB and the IQR of the power provide significant differences between all groups of patients in the VLF and HF bands.
The three groups of patients were classified using a linear discriminant analysis with leave-one-out cross-validation. The best classification was obtained between groups S and F, with an accuracy of 76.9%.
This approach, together with other methods based on nonlinear dynamics, aims at capturing the whole information contained in the cardiac and respiratory signals of patients in the weaning process.
32nd Annual International Conference of the IEEE EMBS
Buenos Aires, Argentina, August 31 -­ September 4, 2010
Spectral Analysis of the RR series and the Respiratory Flow Signal
on Patients in Weaning Process
Andrés Arcentales, Beatriz F. Giraldo, Member, IEEE, Pere Caminal, Iván Diaz, Salvador Benito
Abstract— A considerable number of patients in weaning
process have problems to keep spontaneous breathing during
the trial and after it. This study proposes to extract
characteristic parameters of the RR series and respiratory flow
signal according to the patients’ condition in weaning test.
Three groups of patients have been considered: 93 patients with
successful trials (group S), 40 patients that failed to maintain
spontaneous breathing (group F), and 21 patients who had
successful weaning trials, but that had to be reintubated before
48 hours (group R). The characterization was performed using
spectral analysis of the signals, through the power spectral
density, cross power spectral density and Coherence method.
The parameters were extracted on the three frequency bands
(VLF, LF and HF), and the principal statistical differences
between groups were obtained in bands of VLF and HF. The
results show an accuracy of 76.9% in the classification of the
groups S and F.
I. INTRODUCTION
Spectral analysis of heart rate, respiration and blood pressure signals is a well established tool for the noninvasive investigation of cardiovascular and
cardiorespiratory control mechanisms [1]. Changes in
frequencies above 0.04 Hz provide evidence for the active
existence of either sympathetic or parasympathetic control
mechanisms. Standards of measurement, physiological
interpretation and clinical use of HRV have been published,
involving three different components: a very low frequency
(VLF) component in the range between 0 and 0.04 Hz, a low
frequency (LF) component between 0.04 and 0.15 Hz, and a
high frequency (HF) component between 0.15 and 0.4 Hz
[1]. The power in the LF band is considered to be a measure
of the sympathetic activity on the heart, although its
interpretation is controversial, as e.g. when the respiratory
frequency lies in the LF band. The power in the HF band is
considered to be a measure of the parasympathetic activity
and mainly due to respiratory sinus arrhythmia [2].
One of the most challenging problems in intensive care is
the process of discontinuing mechanical ventilation, which is
commonly referred to as weaning. Critical-care clinicians
Manuscript received April 23, 2010. This work was supported in part by
Ministerio de Ciencia e Innovación under grants TEC2007-63637 and
TEC2007-68076-C02-01 from the Spanish Government.
A. Arcentales and P. Caminal are with Dept. ESAII, Universitat
Politècnica de Catalunya (UPC). (e-mail: [email protected] Pere.
Caminal @upc.edu).
B.F. Giraldo is with Dept. ESAII, Escola Universitaria Enginyeria
Tècnica Industrial de Barcelona (EUETIB), Universitat Politècnica de
Catalunya (UPC), Institut de Bioingenyeria de Catalunya (IBEC) and
CIBER de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN). c/.
Pau Gargallo, 5, 08028, Barcelona, Spain (e-mail: Beatriz.Giraldo
@upc.edu).
S. Benito and I. Diaz are with Dept. Intensive Care Medicine, Hospital
de la Santa Creu i Sant Pau, Barcelona, Spain.
must carefully weigh the benefits of rapid liberation from
mechanical ventilation against the risks of premature trials
of spontaneous breathing and extubation [3]-[5]. Previous
investigations have reported that nearly 40% of the intensive care
unit patients need mechanical ventilator for sustaining their
lives. Among them, 90% of the patients can be weaned from
the ventilator in several days while the other 5%-15% of the
patients need longer ventilator support. However, ventilator
support should be withdrawn promptly when no longer
necessary so as to reduce the likelihood of known
nosocomial complications and cost.
Various studies have been carried out to detect which
physiological variables can identify readiness to undertake a
weaning trial [6]-[8]. The assessment of autonomic control
provides information about heart physiology imbalances
within the cardiorespiratory system. Since ventilation
weaning represents a period of transition from mechanical
ventilation to spontaneous breathing and is associated with a
change in autonomic activity, change of heart rate variability
during weaning is to be expected. Ventilation can alter
cardiovascular function. Different cardiorespiratory
interdependencies during the weaning trials are particular
aspects of dynamic autonomic functional coordination. Up
to now, it is not clear whether there are more stable
functional relations between breaths and heart beats in
patients with successful trials. As the coupling between heart
rate and respiration is assumed to be strongly non-linear,
several methods have been developed to analyze the
cardiorespiratory coordination.
The purpose of this paper is to characterize the patients on
weaning trial using extracted parameters from the spectral
analysis of the electrocardiography and respiratory flow
signal. The aim of this study is to provide enhanced
information in order to identify patients with successful
spontaneous breathing trials, patients with unsuccessful trials
and patients who successfully passed a trial but were unable
to maintain spontaneous breathing and required the
reinstitution of mechanical ventilation in less than 48 hours.
II. ANALYZED DATA
Electrocardiography (ECG) and respiratory flow (FLW)
signals were measured in 154 patients on weaning trials
from mechanical ventilation (WEANDB data base). These
patients were recorded in the Departments of Intensive Care
Medicine at Santa Creu i Sant Pau Hospital and Getafe
Hospital, Spain, according to the protocols approved by the
local ethics committees.
Using clinical criteria based on the T-tube test, the patients were disconnected from the ventilator and maintained spontaneous breathing through an endotracheal tube for 30 min. Patients were classified into three groups: 93 patients with successful weaning trials (group S); 40 patients who failed to maintain spontaneous breathing and were reconnected after the 30 min (group F); and 21 patients who, having successfully passed the weaning trial, had to be reintubated within 48 hours (group R).
The ECG signal was recorded using a SpaceLab Medical
monitor. The respiratory flow signal was obtained with a
pneumotachograph (Datex-Ohmeda monitor with variable
reluctance transducer) connected to the endotracheal tube.
Both signals were recorded for 30 min at a sampling
frequency of 250 Hz. Cardiac interbeat duration (RR) was
obtained by processing the ECG signal using an algorithm
based on wavelet analysis [9].
III. METHODOLOGY
A. Signal preprocessing
According to our previous studies [10], [11], not all of the 30 minutes of the trial are equally useful for the extubation decision. A common feeling of anxiety among the patients at the beginning of the weaning trial may explain the lower usefulness of the first part. The first and last 5 min of the signals were therefore not considered in this study, leaving 20 min of each signal for analysis.
The signals were preprocessed by removing the linear trend. The RR signal was obtained by sampling the linear interpolation of the RR series at 250 Hz. Additionally, the FLW signal was filtered with an IIR filter (order 10).
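A minimal preprocessing sketch in Python with NumPy/SciPy is shown below; the language and library are assumptions (the paper does not state its toolchain), and the hypothetical helper preprocess(), its Butterworth filter type and its cutoff frequency are illustrative only.

    import numpy as np
    from scipy import signal

    FS = 250.0  # sampling frequency of the recorded signals, Hz

    def preprocess(beat_times_s, rr_s, flw, fs=FS, flw_cutoff_hz=2.0):
        """Detrend both signals, resample the RR series at fs by linear
        interpolation, and low-pass filter the respiratory flow with a
        10th-order IIR filter. The Butterworth type and the cutoff frequency
        are illustrative choices, not taken from the paper."""
        beat_times_s = np.asarray(beat_times_s, dtype=float)
        rr_s = np.asarray(rr_s, dtype=float)
        flw = np.asarray(flw, dtype=float)

        # evenly sampled time axis covering the beat sequence
        t = np.arange(beat_times_s[0], beat_times_s[-1], 1.0 / fs)
        rr_resampled = np.interp(t, beat_times_s, rr_s)      # linear interpolation
        rr_resampled = signal.detrend(rr_resampled, type='linear')

        flw = signal.detrend(flw, type='linear')
        sos = signal.butter(10, flw_cutoff_hz, btype='low', fs=fs, output='sos')
        flw_filtered = signal.sosfiltfilt(sos, flw)          # zero-phase filtering
        return t, rr_resampled, flw_filtered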
The spectral analysis was broken down into three bands: the very low frequency (VLF) band (0-0.04 Hz), the low frequency (LF) band (0.04-0.15 Hz), and the high frequency (HF) band (0.15-0.4 Hz). However, the respiratory frequency can be as low as 0.1 Hz during relaxation and as high as 0.7 Hz during intense exercise. Consequently, the PSD of the FLW signal was calculated in the range 0-1 Hz. Figs. 1 and 2 illustrate the PSD calculated on the RR and FLW signals, respectively.
B. Spectral Analysis
Power spectral density (PSD), cross power spectral density (CPSD) and the coherence function were applied in order to characterize the RR and FLW signals of patients on weaning trials.
The PSD was calculated for each signal using Welch's averaged modified periodogram method [12], considering segments of 40 s with 50% overlap, by the following equation:

S_x\left(e^{j\omega}\right) = \frac{1}{KLU} \sum_{i=0}^{K-1} \left| \sum_{n=0}^{L-1} w(n)\, x(n+iD)\, e^{-jn\omega} \right|^{2}    (1)

where x(n) denotes the signal, w(n) is a Hamming window, L is the length of the segments, D is the separation between segments, and K is the number of segments. The normalization factor U removes the energy bias introduced by the windowing and is given by

U = \frac{1}{L} \sum_{n=0}^{L-1} w^{2}(n)    (2)

The cross power spectral density (CPSD) between the RR and FLW signals was calculated using the same method (Welch's averaged modified periodogram method of spectral estimation).
The coherence function of two time series is defined as their cross-spectral density normalized by the autospectral densities of the two original signals. The magnitude squared coherence (MSC) is defined as

\gamma_{xy}^{2}\left(e^{j\omega}\right) = \frac{\left| S_{xy}\left(e^{j\omega}\right) \right|^{2}}{S_{xx}\left(e^{j\omega}\right)\, S_{yy}\left(e^{j\omega}\right)}    (3)

where x and y are the two time series, S_{xy}(e^{j\omega}) is the cross-spectral density between them, and S_{xx}(e^{j\omega}) and S_{yy}(e^{j\omega}) are the autospectral densities of each signal.

Fig. 1. PSD of the RR signal.
Fig. 2. PSD of the FLW signal.
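As a rough illustration of Eqs. (1)-(3), the following Python sketch uses SciPy's Welch-based estimators (an assumed toolchain; spectra() is a hypothetical helper), with the 40-s segments and 50% overlap stated above:

    import numpy as np
    from scipy import signal

    FS = 250.0                 # Hz
    NPERSEG = int(40 * FS)     # 40-s segments
    NOVERLAP = NPERSEG // 2    # 50% overlap

    def spectra(rr, flw, fs=FS):
        """PSD of each signal, CPSD between them and magnitude squared
        coherence, all from Welch's averaged modified periodogram with a
        Hamming window, as in Eqs. (1)-(3)."""
        f, pxx = signal.welch(rr, fs=fs, window='hamming',
                              nperseg=NPERSEG, noverlap=NOVERLAP)
        _, pyy = signal.welch(flw, fs=fs, window='hamming',
                              nperseg=NPERSEG, noverlap=NOVERLAP)
        _, pxy = signal.csd(rr, flw, fs=fs, window='hamming',
                            nperseg=NPERSEG, noverlap=NOVERLAP)
        msc = np.abs(pxy) ** 2 / (pxx * pyy)   # Eq. (3)
        return f, pxx, pyy, pxy, msc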
C. Parameter extraction
The PSD, CPSD and MSC were obtained for each minute of the signals. For each patient, the mean values of these spectral estimates over the whole signal were obtained, and
the characteristic parameters were calculated.
The PSD of the RR signal was characterized by the total power (PT), the power in the different bands (VLF, LF and HF), the LF/HF ratio, and the frequency (fp) of the principal power peak.
The spectrum of the FLW signal was characterized by the fp of the main power peak, its discriminant band (DB), the PT, and the power in the DB (PDB). The DB was defined as the frequency interval centered at the frequency peak and delimited by the minimum values on its right and left sides.
The CPSD and MSC were analyzed in the three spectral bands (VLF, LF and HF). For each band, the fp and its DB, the PDB and the PT were calculated. We also estimated the dispersion of the power through its standard deviation (SD) and interquartile range (IQR).
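A sketch of this parameter extraction (hypothetical helper spectral_parameters(); the analysis band and the rectangle-rule integration are assumptions) could look as follows:

    import numpy as np

    def spectral_parameters(f, p, band=(0.0, 1.0)):
        """Peak frequency fp, discriminant band DB (delimited by the local
        minima on either side of the main peak), power in the band PDB and
        total power PT, from a one-sided spectrum (f, p) on a uniform grid."""
        f = np.asarray(f, dtype=float)
        p = np.asarray(p, dtype=float)
        df = f[1] - f[0]
        mask = (f >= band[0]) & (f <= band[1])
        fb, pb = f[mask], p[mask]

        k = int(np.argmax(pb))        # index of the main power peak
        fp = fb[k]

        # walk away from the peak until the spectrum stops decreasing
        lo = k
        while lo > 0 and pb[lo - 1] < pb[lo]:
            lo -= 1
        hi = k
        while hi < len(pb) - 1 and pb[hi + 1] < pb[hi]:
            hi += 1

        db = (fb[lo], fb[hi])                     # discriminant band
        pdb = float(np.sum(pb[lo:hi + 1]) * df)   # power in the DB
        pt = float(np.sum(pb) * df)               # total power in the band
        return fp, db, pdb, pt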
IV. RESULTS
The Kruskal-Wallis test was used to assess whether the characteristic parameters of patients belonging to groups S, F and R present significant differences. The PSD parameters of the RR signal and their coherence with the FLW signal did not show any differences.
Table I presents the results of the PSD applied to the FLW signal. The frequency peak in group F is higher than in group S, whereas the value for group R lies between those of the two groups.

TABLE I
RESPIRATORY FREQUENCY PEAK AND POWER OF THE DISCRIMINANT BAND OBTAINED BY POWER SPECTRAL DENSITY

Parameter   GS                     GF                     GR                     p-value
fp          0.40 ± 0.12 Hz         0.52 ± 0.14 Hz         0.43 ± 0.15 Hz         p < 0.001
PDB         39104.70 ± 70292.90    46453.96 ± 71257.37    54436.30 ± 51469.26    n.s.
Using the Kruskal-Wallis test.

For the analysis of the statistical differences between each pair of groups, the Mann-Whitney test was applied. The best results were obtained with the CPSD parameters. Table II shows the p-values of the HF CPSD parameters comparing groups S and F. The frequency peak in group S is more pronounced in the three frequency bands than in group F. The VLF and LF bands did not present parameters with significant differences.
Table III presents the p-values of the VLF CPSD parameters comparing groups S and R. Comparing groups F and R, the CPSD parameters showed significant differences in the VLF and HF bands (Table IV). Fig. 3 illustrates the CPSD and MSC of the RR and FLW signals for one patient of each group.

TABLE II
CPSD PARAMETERS IN THE HF BAND COMPARING GS VS GF PATIENTS

Parameters              GS (Mean ± SD)   GF (Mean ± SD)   p-value
Power Parameters
  PDB                   3.9 ± 6.3        1.6 ± 2.4        0.003
  PT                    6.7 ± 9.3        3.6 ± 4.3        0.028
Dispersion Parameters
  Power SD (PT)         4.5 ± 6.4        2.1 ± 2.9        0.003
  Power IQR (PT)        5.9 ± 8.8        2.4 ± 2.9        0.005

TABLE III
CPSD PARAMETERS IN THE VLF BAND COMPARING GS VS GR PATIENTS

Parameters              GS (Mean ± SD)   GR (Mean ± SD)   p-value
Power Parameters
  PDB                   4.3 ± 6.5        6.5 ± 7.8        0.049
  PT                    4.5 ± 6.5        6.8 ± 8          n.s.
Dispersion Parameters
  Power SD (PT)         1.5 ± 2.4        2.2 ± 2.5        n.s.
  Power IQR (PT)        2.4 ± 4.0        3.7 ± 4.1        0.033

TABLE IV
CPSD PARAMETERS IN THE VLF AND HF BANDS COMPARING GF VS GR PATIENTS

Parameters              GF (Mean ± SD)   GR (Mean ± SD)   p-value
VLF
  PDB                   3.3 ± 5.3        6.5 ± 7.8        0.013
  PT                    3.0 ± 4.8        6.8 ± 8          0.012
  Power SD (PT)         0.8 ± 1.3        2.2 ± 2.5        0.009
  Power IQR (PT)        1.4 ± 2.3        3.7 ± 4.1        0.005
HF
  PDB                   1.6 ± 2.4        7.8 ± 21.0       0.005
  PT                    3.6 ± 4.3        12.8 ± 32.6      0.047
  Power SD (PT)         2.1 ± 2.9        6.3 ± 10.6       0.026
  Power IQR (PT)        2.4 ± 2.9        6.6 ± 9.9        0.019
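A sketch of the group comparisons reported in Tables I-IV, assuming each group's values of a given parameter are collected in an array (compare_groups() is a hypothetical helper using scipy.stats):

    from scipy import stats

    def compare_groups(values_S, values_F, values_R):
        """Kruskal-Wallis test across the three groups and pairwise
        Mann-Whitney U tests for one characteristic parameter."""
        _, p_kruskal = stats.kruskal(values_S, values_F, values_R)
        pairwise = {
            'S vs F': stats.mannwhitneyu(values_S, values_F, alternative='two-sided').pvalue,
            'S vs R': stats.mannwhitneyu(values_S, values_R, alternative='two-sided').pvalue,
            'F vs R': stats.mannwhitneyu(values_F, values_R, alternative='two-sided').pvalue,
        }
        return p_kruskal, pairwise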
V. DISCUSSION AND CONCLUSION
This paper proposed the extraction of characteristic parameters from the spectral analysis and the mutual spectral behavior, through the PSD, CPSD and MSC, of the RR and FLW signals of patients on weaning trials.
Fig. 3. Spectral analysis of one patient of GS, GF and GR: (a) PSD of the RR series, (b) PSD of the FLW signal, (c) CPSD of the RR series and FLW signal, and (d) MSC of the RR series and FLW signal.
The most significant differences between the groups of patients were found using the characterization of the CPSD. The PDB and the IQR of the power provide significant differences between all groups of patients in the VLF and HF bands.
The three groups of patients were classified using a linear discriminant analysis with leave-one-out cross-validation. The best classification was obtained between groups S and F, with an accuracy of 76.9%.
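A minimal sketch of this classification step, assuming scikit-learn (not named in the paper) and a feature matrix X of the selected CPSD parameters with group labels y:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    def loo_lda_accuracy(X, y):
        """Leave-one-out cross-validated accuracy of a linear discriminant
        classifier; X holds the extracted spectral parameters (one row per
        patient) and y the group labels."""
        clf = LinearDiscriminantAnalysis()
        scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
        return float(np.mean(scores))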
This approach, together with other methods based on nonlinear dynamics, is aimed at capturing the full information contained in the cardiac and respiratory signals of patients in the weaning process.
REFERENCES
[1] Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, "Heart rate variability. Standards of measurement, physiological interpretation, and clinical use," Eur. Heart J., vol. 17, pp. 354-381, 1996.
[2] R. Bailón, P. Laguna, L. Mainardi and L. Sörnmo, "Analysis of heart rate variability using time-varying frequency bands based on respiratory frequency," Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2007.
[3] N. R. MacIntyre, "Evidence-based guidelines for weaning and discontinuing ventilatory support," Chest, vol. 120, no. 6 suppl, pp. 375-396, 2001.
[4] L. Brochard, A. Rauss, S. Benito, G. Conti, J. Mancebo, N. Rekik, A. Gasparetto and F. Lemaire, "Comparison of three methods of gradual withdrawal from ventilatory support during weaning from mechanical ventilation," Am. J. Respir. Crit. Care Med., vol. 150, no. 4, pp. 896-903, 1994.
[5] M. J. Tobin, "Advances in mechanical ventilation," N. Engl. J. Med., vol. 344, no. 26, pp. 1986-1996, 2001.
[6] A. Jubran, B. J. B. Grant, F. Laghi, S. Parthasarathy and M. J. Tobin, "Weaning prediction: esophageal pressure monitoring complements readiness testing," Am. J. Respir. Crit. Care Med., vol. 171, no. 11, pp. 1252-1259, 2005.
[7] J. P. Casaseca, M. Martin-Fernandez and C. Alberola-Lopez, "Weaning from mechanical ventilation: a multimodal signal analysis," IEEE Transactions on Biomedical Engineering, vol. 53, no. 7, pp. 1330-1345, 2006.
[8] M. J. Tobin, "Of principles and protocols and weaning," Am. J. Respir. Crit. Care Med., vol. 169, no. 6, pp. 661-662, 2004.
[9] J. P. Martinez, R. Almeida, S. Olmos, A. P. Rocha and P. Laguna, "A wavelet-based ECG delineator: evaluation on standard databases," IEEE Transactions on Biomedical Engineering, vol. 51, pp. 570-581, Apr. 2004.
[10] M. Orini, B. F. Giraldo, R. Bailón, M. Vallverdú, L. Mainardi, S. Benito, I. Díaz and P. Caminal, "Time-frequency analysis of cardiac and respiratory parameters for the prediction of ventilator weaning," 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, Canada, August 20-24, 2008, pp. 2793-2796.
[11] B. Giraldo, C. Arizmendi, E. Romero, R. Alquezar, P. Caminal, S. Benito and D. Ballesteros, "Patients on weaning trials from mechanical ventilation classified with neural networks and feature selection," 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, New York, August 30 - September 3, 2006, pp. 4112.
[12] P. D. Welch, "The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms," IEEE Transactions on Audio and Electroacoustics, vol. AU-15, pp. 70-73, 1967.
[13] M. Niccolai, M. Varanini, A. Macerata, S. Pola, M. Emdin, M. Cipriani and C. Marchesi, "Analysis of non stationary heart rate series by evolutionary periodogram," 22nd Annual Scientific Meeting of Computers in Cardiology, Vienna, Austria, September 10-13, 1995, pp. 452.
Computation of current distribution in YBCO tapes with defects obtained from Hall magnetic mapping by inverse problem solution
M. Carrera, J. Amorós, X. Granados, R. Maynou, T. Puig, X. Obradors
Abstract— The development of superconducting devices based on long-length HTS tapes often requires these tapes to be highly homogeneous along their length as well as across their width. This implies the absence of significant local defects.
Non-destructive characterization techniques that examine the critical current distribution for defect detection are of great interest, especially if they can be applied in situ for real-time testing of long lengths of tape.
In this work, we continue the adaptation of our method for the computation of critical current maps from Hall measurements of the magnetic field over the tape. We compute the current density distribution in a stretch of a commercial YBCO tape containing defects by using a specifically designed fast inverse problem solver. The 2-dimensional current map is consistent with the current distributions in a cross-section of the tape that we previously computed in real time, so that a map of the critical current circulating on the entire surface of a tape with isolated defects may be obtained, regardless of its length, by running a Hall probe over it.
This method is applied to a series of Hall mappings corresponding to several magnetization regimes, produced by applying different current intensities to the tape. Details of the experiments and the calculation method are reported, and the applicability of the approach to detecting the impact of defects in the tape on the current distribution is discussed.
Index Terms— Hall Mapping, HTS Tapes, Fast Characterization, Magnetic Inverse Calculation.

Manuscript received 2 August 2010.
M. Carrera is with the Departament de Medi Ambient i Ciències del Sòl, Universitat de Lleida, Jaume II, 69, 25001 Lleida, Spain (e-mail: [email protected]).
X. Granados, T. Puig, and X. Obradors are with the Institut de Ciència de Materials de Barcelona, CSIC, Campus UAB, 08193 Bellaterra, Spain (e-mail: [email protected]).
J. Amorós is with the Departament de Matemàtica Aplicada I, Universitat Politècnica de Catalunya, Diagonal 647, Barcelona, Spain (e-mail: [email protected]).
R. Maynou is with CEIB-EUETIB and Departament de Matemàtica Aplicada III, Universitat Politècnica de Catalunya, Comte d'Urgell 168, Barcelona, Spain (e-mail: [email protected]).

I. INTRODUCTION
The increasing commercial demand for superconducting materials, specifically superconducting tape, driven by the number of devices projected or already under way worldwide, requires large-scale
production and well-defined quality classification to match the designers' needs. On-site precise characterization is a helpful and necessary instrument that allows a better classification, homogenizing the properties of the material in each class. Identification of defects or quality fluctuations at the production level is a way of reducing production costs and providing a confident basis for HTS device design.
Although a complete characterization should include many other aspects, such as mechanical, thermal and magnetic behavior at room temperature (RT) and at operating temperature (OT), several ways have been proposed for the characterization of electrical properties on the basis of the magnetic and transport properties of HTS tapes, including direct transport critical current (Ic) determination in consecutive tape segments [1], mutual inductance critical current determination in an overlapped sequence [2], trapped magnetic field analysis by exploring a line of the superconducting surface after local or full magnetization [3,4], or, finally, full Hall probe mapping [5,6].
All the in situ characterization systems must deliver their results in real time to provide useful feedback to the production systems. This constraint calls for both fast data collection and fast computation, so that useful data are available on the fly; this in turn presses for an optimization of the number of measured points and for direct identification of defects in the cached map, moving the detailed identification process to a secondary effort done separately.
Our work has been devoted to developing an efficient and fast way of collecting data with high resolution and to designing a fast inversion method for local Ic calculation from the mapping of the out-of-plane magnetic field component. In this work we report the results of the computation of the current distribution in a set of samples by a fast and efficient inversion method capable of reliable on-the-fly computation, adapted from the algorithms previously developed for HTS bulks [5,7,8,9] in a first step and extended to tapes in a second step [10].
II. COMPUTATION MODEL
The procedure used to invert the Biot-Savart problem on the measured tapes is an adaptation of the discretization and QR-inversion method used by the authors on bulk samples in [7,8,9].
The method is based on the subdivision of a region containing the stretch of tape under study into a rectangular discretization grid. The current J circulating in this region is the curl of the magnetization M, which is assumed to take a constant value Mij on every element Δij of this grid. If the vertical magnetic field generated by the current is measured on a second rectangular grid of points Pkl = (xb, yb, zb) at a fixed height over the tape, the magnetization values Mij must satisfy a linear system of equations formed by one equation of the form (1) for each point Pkl. In that equation r is the distance ||Pkl - (x,y,z)|| from the measurement point to the points (x,y,z) in the element Δij. This linear system may be duly inverted, yielding the value of its unknowns Mij.
B_z(P_{kl}) = \sum_{ij} \left( \frac{\mu_0}{4\pi} \iint_{\Delta_{ij}} \frac{3\,(z_b - z)^2 - r^2}{r^5} \, dx\, dy \right) M_{ij}    (1)
In practice, to minimize the propagation of errors from the measured Bz to the inverted M, it is advisable to measure the vertical magnetic field as close (in all dimensions) to the tape as possible, and to take redundant measurements and solve an overdetermined linear system.
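The following Python sketch illustrates this inversion under simplifying assumptions: a point-dipole kernel per grid element instead of the exact surface integral of Eq. (1), and a dense least-squares solve (numpy.linalg.lstsq) in place of the authors' QR-based solver; all function names are hypothetical.

    import numpy as np

    MU0 = 4e-7 * np.pi

    def kernel(meas_xyz, elem_xy, z_elem, cell_area):
        """Approximate version of Eq. (1): each grid element contributes through
        the integrand evaluated at its centre times the cell area
        (point-dipole approximation instead of the exact surface integral)."""
        dx = meas_xyz[:, None, 0] - elem_xy[None, :, 0]
        dy = meas_xyz[:, None, 1] - elem_xy[None, :, 1]
        dz = meas_xyz[:, None, 2] - z_elem
        r = np.sqrt(dx**2 + dy**2 + dz**2)
        return MU0 / (4.0 * np.pi) * (3.0 * dz**2 - r**2) / r**5 * cell_area

    def invert_magnetization(bz, meas_xyz, elem_xy, z_elem, cell_area):
        """Solve the (over)determined system A M = Bz in the least-squares
        sense; M holds one magnetization value per grid element."""
        A = kernel(meas_xyz, elem_xy, z_elem, cell_area)
        m, *_ = np.linalg.lstsq(A, bz, rcond=None)
        return m

    def current_from_magnetization(m_grid, dx, dy):
        """Sheet current from J = curl(M) for out-of-plane M, assuming m_grid
        is indexed [row = y, column = x]: Jx = dM/dy, Jy = -dM/dx."""
        dm_dy, dm_dx = np.gradient(m_grid, dy, dx)
        return dm_dy, -dm_dx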
Unlike in the previous applications of this procedure to bulks, the inversion is applied here to open circuits, usually to a short stretch of a long tape with current flowing into and out of the measured region. This difficulty is overcome by adding to the discretization grid a set of long, thin elements at each end of the measured stretch of tape, supporting a current that is rectilinear and homogeneous along the length of tape that they cover. The current on these tape ends is allowed to vary in intensity across the tape width to match the ends of the central discretization grid. If the actual current in these end regions is rectilinear and length-homogeneous, then the 2-dimensional map of M and the circulating current density J computed in the central discretization grid is valid. If the current circulating at the ends of the tape is not rectilinear, or simply unknown, the computed maps of M and J in the central discretization grid are still valid at a distance away from the ends of the grid. This safety distance is determined by the bounds on the current density, or on the irregularity of the tape, available for the tape stretches adjoining the computation region.
As the current J is the curl of a magnetization M only in closed circuits, if the tape is transporting current two lateral elements are added to the sides of the discretization grid and its ends. These elements simulate thin wires closing the circuit, and have no appreciable effect on the computation provided that they are placed symmetrically and far away from the tape.
III. YBCO TAPES
Two samples of YBCO coated conductors from different commercial suppliers have been tested:
(a) A tape with a width of 4.15 mm (S1). It is coated with a YBCO superconducting layer about 1 μm thick. A metallic stabilizer made of silver and copper, about 25 μm thick, covers the HTS layer and separates the externally explored area from the superconducting sheet.
(b) A second sample (S2) from a different supplier has also been studied. This sample, with a YBCO superconducting layer about one micron thick, is protected by a metallic sheet which separates the exploring area from the HTS coating by a distance in the range of 60 μm. Both magnetic contributions, that of the protecting metallic sheet and that of the substrate, are negligible.
Stretches of about 7 and 12 cm of the two tapes were inserted in a transport current circuit by fixing the beginning and end of the tape to Cu blocks large enough to drain the heat produced at the contacts during the experiment. The explored area of the tapes is more than 1 cm away from the contacts for sample S1, and 2 cm in the case of S2. The tapes were then immersed in liquid N2 at 77 K, and the circuit was designed so that the current outside the tape was symmetrically distributed on both sides of the tape in order to avoid its magnetic influence.
The tape was subjected to a ZFC process, and a transport current was then applied to the circuit, with total intensity varying from zero to the critical superconducting intensity Ic of each tape according to the 1 μV/cm criterion. Then, the intensity was decreased to zero. The vertical magnetic field Bz was measured at different intensities throughout this cycle.
A Hall probe was rastered in parallel rows, crossing the tape orthogonally to its main axis, at a height of 80 μm above the tape, i.e. 100 and 150 μm (S1 and S2, respectively) above the superconducting layer. The probe had an active area of 100 x 100 μm², and the vertical magnetic field Bz above the tape was measured on each row in steps of 50 μm.
Results in tape S1
Tape S1 was the object of a complete study of the distributions of the field Bz and of the current distributions obtained by inversion, corresponding to different values of the applied current intensity, and showing a notably homogeneous distribution along the tape, with a critical current of 122 A. This study was reported in [10]. After this characterization, the authors created artificial defects at different positions in the tape by puncturing it with a fine needle, in order to study their influence on the current distribution.
The magnetic field Bz was measured on a 0.2 x 0.2 mm² grid covering a 30 x 30 mm² square over the artificial defects on the tape. Fig. 1 shows the measured magnetic field when the current carried by the tape is 85 A (after having reached its critical current at 91 A). The measurement is centered on a stretch of tape containing the main puncture. The distribution of the current density J obtained by applying our inversion algorithm is represented as a vector field superimposed on the map of Bz in Fig. 1, where we can observe the perturbation induced by the defect. The effect of the puncture can be seen along the axis stretch from x = -2 to 0.5 mm. The transport current becomes asymmetrical, as most of it passes to one side of the puncture. It is likely that the current drops to zero on the puncture proper, but our discretization procedure blurs the current map over a band 1-2 discretization elements wide around any domain, so in the case of a small perturbation like this one it is still able to detect a drop in the density J and a change in its direction around the hole.
Fig. 1. Magnetic field Bz (G) (indicated by color) measured over a stretch of tape S1 while carrying a current of I = 85 A after achieving its critical current of 91 A. The computed currents are shown as vector fields over the map of Bz.
Fig. 2. Magnetic field Bz (G) (indicated by color) measured over a stretch of tape S1 in the state of remanence at the end of the current cycle after achieving its critical current of 91 A. Computed currents are shown as vector fields over the map of Bz.
Fig. 3. Calculated current density in a transverse section of tape S1: initial sample before perturbing the tape (black, dotted line), and after holing the tape: (a) in a nonperturbed stretch (blue line), (b) across the hole (red line). In both cases the tape was subjected to the same current-carrying process (see text).
Fig. 2 shows, for the same stretch of tape S1, the map of the trapped field once the transport current has been removed and the superconductor therefore reaches a remanent state of magnetization. The computed distribution of the current density J, represented as vectors over the map of the field Bz, faithfully reflects the disruption caused by the hole in the circulation of the closed loops of current. It is important to note that the remanent state closely reflects the inhomogeneities of the tape, which points towards the application of this characterization method in reel-to-reel configurations, because it means that the stretch explored with the Hall probe does not need to carry current at the moment of the scanning.
Fig. 3 shows the distribution of the current density J obtained in cross-sections of sample S1 when the applied current is 85 A: a profile of J corresponding to the central position of the hole in the map of Fig. 1 (red continuous line); a profile taken in a homogeneous region of the tape (blue continuous line); and a profile of J (black discontinuous line) obtained from the map of Bz measured under similar conditions (I = 80 A) in sample S1 before its puncturing, when all the cross-sections along the measured stretch were practically identical. Apart from the asymmetry produced by the hole, it may be clearly observed how the distribution of J reflects the fact that, because of the decrease in the total critical current, the most homogeneous regions had not reached their local critical current when the applied intensity reached the critical current of the punctured stretch, 91 A. Hence the double-peak structure in their profile of J, unlike the single-peak profile of the initial sample without holes.
As an accuracy check of the inversion method, the authors computed, by the Biot-Savart law, the magnetic field Bz that the distribution of current J obtained with the inversion algorithm would produce on the grid of points used in the measurement of Bz, so that the measured field Bz can be compared with the recalculated field. Fig. 4 illustrates this comparison for the remanent state shown in Fig. 2, taking two cross-sections corresponding to the center of the hole (x = -0.75 mm in the map of Fig. 2) and to a homogeneous stretch of the tape. The results are typical of the application of our inversion procedure to thin tapes: the difference between the measured and recomputed field Bz above the tape is 3% on average, with the error concentrating over the edges.
Fig. 4. Measured magnetic field Bz compared to the field generated by the current determined by our computation in two cross-sections of tape S1. The first cross-section lies in an unperturbed stretch of tape: measured Bz (black, dotted line) and recomputed Bz (green). The second section crosses the main perturbation at x = -0.75 mm in the map of Fig. 2: measured Bz (red, dotted line) and recomputed Bz (blue).
Results in tape S2
Sample S2 has been subjected to Hall probe scans over longer tape stretches, up to 60-70 mm long, with Bz measured on a 30 mm wide band centered on the tape and with a resolution of 0.5 x 0.5 mm². The tape has been probed under a set of conditions ranging from a transport current larger than its critical current of 240 A down to a remanent field state. Fig. 5 shows the magnetic field Bz over the tape in the remanent state at the end of the process. The distribution of the field shows that this tape is significantly inhomogeneous, unlike sample S1 before its perturbation. An accurate description of the circulating current therefore requires 2-dimensional maps.
The computed current density J is shown in Fig. 5 as a vector field superimposed on the magnetic field map. This current distribution has been obtained using a procedure suitable for continuous use on tapes of any length: the current J has been computed in 3 cm long stretches of tape (computation windows), based on the field Bz measured on each stretch and the discretization procedure for open circuits described in Section II. The result is reliable in a central 1 cm long stretch of each computation, as it is far enough from the areas where any assumptions of regularity were made on the circulating current. The computation window has been advanced only 1 cm at a time, providing a 2 cm long overlap with the previous window that allows the comparison of 2 or 3 computations of J in each 1 cm stretch of tape. These computations agree, yielding the vector field plotted in Fig. 5. The current circulation vectors J in the initial and final 1 cm stretches of tape have not been plotted, as their computed values cannot be corroborated without knowledge of the neighboring tape areas beyond the Hall probe scan.
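The windowing strategy just described might be organized as in the sketch below, where invert_window() stands in for the authors' solver and all names are hypothetical.

    WINDOW_CM, STEP_CM, KEEP_CM = 3.0, 1.0, 1.0

    def scan_long_tape(x_min_cm, x_max_cm, invert_window):
        """Slide a 3 cm computation window along the tape in 1 cm steps and
        keep only the central 1 cm of each inversion, where end effects are
        negligible. invert_window(x0, x1) is assumed to return the current
        map computed for the stretch [x0, x1]."""
        kept = []
        x0 = float(x_min_cm)
        while x0 + WINDOW_CM <= x_max_cm:
            current_map = invert_window(x0, x0 + WINDOW_CM)
            keep_start = x0 + (WINDOW_CM - KEEP_CM) / 2.0   # central 1 cm
            kept.append((keep_start, keep_start + KEEP_CM, current_map))
            x0 += STEP_CM
        return kept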
Fig. 5. Magnetic field Bz (G) measured over a stretch of tape S2. The computed current density is shown as a superimposed vector field. See text for details of the computation strategy.
The resulting current distribution reflects the inhomogeneities of the superconducting tape. The current density reaches maximal values around 7500 A/m (corresponding to 7.5x10^5 A/cm² if the thickness of the HTS layer is taken into account). The fluctuation of the trapped field along the tape is detected by examining the current density on its cross-sections or, more visually, by the direction of the current vectors deviating from the main tape axis.
IV. CONCLUSIONS
The work presented here shows the capability of Hall magnetometry, combined with the authors' Biot-Savart inversion procedure, for the detection of inhomogeneities in superconducting tapes and the analysis of their effects.
The inversion procedure reported here allows the computation of 2-dimensional maps of the current distribution in tapes with localized defects, in order to assess the impact of these defects on the current circulation and on the critical current level of the tape. Resolutions of up to 0.2 mm have been achieved so far in these maps.
The experimental setup employed by the authors can perform fast Hall probe measurements of Bz over 5-10 cm long stretches of tape while varying their magnetization state, including both current-transport and remanent states. This setup can be switched to a reel-to-reel Hall probe measuring system which, combined with the above Biot-Savart inversion procedure, produces in real (i.e., measuring) time 2-dimensional maps of the circulating current on a tape, regardless of its length. This real-time characterization scheme for long tapes will be reported elsewhere.
ACKNOWLEDGMENT
The authors would like to acknowledge the support of the Consolider NANOSELECT project, funded by the Education Ministry of the Spanish Government, and of the EU-FP7 ECCOFLOW project.
REFERENCES
[1] Y. Xie, H. Lee, V. Selvamanickam.
Patent US2006073977-A1;
WO2006036537-A2; US7554317-B2
[2] S. Furtner, R. Nemetschek, R. Semerad, G. Sigl, W. Prusseit, “Reel-to-reel
critical current measurements of coated conductors”, Supercond. Sci.
Technol., vol. 17, pp. S281-S284, 2004.
[3] M. Zehetmayer, M. Eisterer and H.W. Weber, “Simulation of a current
dynamics in a superconductor induced by a small permanent magnet:
application to the magnetoscan technique”, Supercond. Sci. Technol, vol. 19,
pp. S429-S437, 2006.
[4] M. Zehetmayer, R. Fuger, F. Hengstberger, M. Kitzmantel, M. Eisterer,
H.W. Weber, “Modified magnetoscan technique for assessing
inhomogeneities in the current flow of coated conductors-Theory and
experiment”, Physica C, vol. 460-462, pp. 158-161, 2007.
[5] M. Carrera, X. Granados, J. Amorós, R. Maynou, T. Puig and X.
Obradors, “Detection of current distribution in bulk samples with artificial
defects from inversion of Hall magnetic maps”, IEEE Trans. Appl.
Supercond., vol. 19 (3), pp. 3553-3556, 2009.
[6] F. Hengstberger, M. Eisterer, M. Zehetmayer and H.W. Weber, “Assessing
the spatial and field dependence of the critical current density in YBCO bulk
superconductors by scanning Hall probes”, Supercond. Sci. Technol., vol. 22,
025011 (6pp), 2009.
[7] M. Carrera, X. Granados, J. Amorós, R. Maynou, T. Puig, X. Obradors,
“Computation of critical current in artificially structured bulk samples from
Hall measurements”, J. Phys.: Conf. Ser., vol. 97, 012107 (6 pp), 2008.
[8] M. Carrera, J. Amorós, A.E. Carrillo, X. Obradors, J. Fontcuberta,
“Current distribution maps in large YBCO melt-textured blocks”, Physica C,
vol. 385, pp. 539-543, 2003.
[9] M. Carrera, J. Amorós, X. Obradors, J. Fontcuberta, “A new method of
computation of current distribution maps in bulk high-temperature
superconductors: analysis and validation”, Supercond. Sci. Technol., vol. 16,
pp. 1187-1194, 2003.
[10] M. Carrera, X. Granados, J. Amorós, R. Maynou, T. Puig, X. Obradors,
“Current distribution in HTSC tapes obtained by inverse problem calculation”,
J. Phys.: Conf. Ser., vol. 234, 012009, 2010.