N° d'ordre : 1036
THÈSE
présentée à
l’Institut National des Sciences Appliquées de Toulouse
pour l’obtention du titre de
DOCTEUR
de l’Université de Toulouse délivré par l’INSA
Spécialité: Systèmes Informatiques
par
Zhe CHEN
Lab. Toulousain de Technologie et d’Ingénierie des Systèmes (LATTIS)
Laboratoire d’Analyse et d’Architecture des Systèmes (LAAS-CNRS)
École Doctorale Systèmes
Titre de la thèse :
The Control System in Formal Language Theory and
The Model Monitoring Approach for Reliability and Safety
Soutenue le 9 Juillet 2010, devant le jury :
Rapporteur : Maritta Heisel, Professeur à l'Universität Duisburg-Essen, Germany
Rapporteur : Fabrice Bouquet, Professeur à l'INRIA, Université de Franche-Comté
Examinateur : Gilles Motet, Professeur à l'INSA de Toulouse - Directeur de thèse
Examinateur : Karama Kanoun, Directeur de Recherche au LAAS-CNRS
Examinateur : Jean-Paul Blanquart, Astrium, European Aeronautic Defence and Space (EADS)
Acknowledgement
Although I have mostly worked independently in the lab during all these years, I
would not have been able to complete this thesis without the support, advice and
encouragement of others: teachers, friends and colleagues. Accordingly, I would like
to take this opportunity to express my gratitude to a number of people who over the
years have contributed in various ways to the completion of this work.
In the first place, I would like to record my gratitude to my thesis supervisor,
Prof. Gilles Motet, for guiding me in my research and, at the same time, for giving
me the freedom to take initiatives in developing my lines of research. He
provided me with unflinching encouragement and support in various ways. His true
scientist's intuition has made him a constant oasis of ideas and passion in science.
Thanks also for being available for me at almost all times. Working with you was a
real pleasure.
Thanks to the reading committee, Prof. Maritta Heisel and Prof. Fabrice Bouquet,
for accepting to spend time reading and evaluating my thesis, and for their
constructive comments on it. I am thankful that, in the midst of all their
activities, they accepted to be members of the reading committee. I would also
like to thank the other members of my committee, Prof. Karama Kanoun and Dr.
Jean-Paul Blanquart, who immediately agreed when I asked them to join.
Thanks to all members of the lab LATTIS, LAAS-CNRS, and the Department
of Electronic and Computer Engineering at INSA. If I look back to all these years,
I think that at one moment or another, each one of them helped me in some way.
Thanks in particular to Stéphanie, Roberto, Samuel, Guillaume and Karim for discussions. Thanks to our director Danièle and our secretaries, Rosa, Estelle, Joëlle,
Karima, Sophie, Hélène. I would like to thank you all for the nice atmosphere within
the lab.
I would like to thank all my Chinese friends in France: Ping, Ruijin, Yanwen,
Junfeng, Susu, Yanjun, Hongwei, Yanping, Xiaoxiao, Fan, Letian, Wei, Dongdong,
Haojun, Xinwei, Hong, Juan, Haoran, Linqing, Tao, Lijian, Jie, Jing, Lei, Binhong,
Bo, Wenhua. With your friendship, the time in France became joyful and memorable. Our celebration of the Chinese Spring Festival, our parties and games were
fantastic!
I was extraordinarily fortunate to have several kind and able supervisors in
China. I could never have embarked on all of this without their prior
guidance in computer science, which opened up unknown areas to me. I convey
special acknowledgement to them: Dunwei Wen and Dezhi Xu at Central South
University, Ming Zhou, Chin-Yew Lin and Jiantao Sun at Microsoft Research Asia,
and Yuxi Fu at Shanghai Jiao Tong University.
[Acknowledgements in Chinese.]
Finally, I would like to thank everybody who was important to the successful
realization of this thesis, and to express my apology to those I could not
mention personally one by one.
Abstract
This thesis contributes to the study of the reliability and safety of computer and
software systems, which are modeled as discrete event systems. The major contributions are the theory of Control Systems (C Systems) and the model monitoring
approach.
In the first part of the thesis, we study the theory of control systems, which combines and significantly extends regulated rewriting in formal language theory and
supervisory control. A control system is a generic framework containing two
components: the controlled component, and the controlling component that restricts
the behavior of the controlled component. The two components are expressed using
the same formalism, e.g., automata or grammars. We consider various classes of
control systems based on different formalisms, for example, automaton control systems, grammar control systems, and their infinite versions and concurrent variants.
After that, an application of the theory is presented: the Büchi automaton based
control system is used to model and check correctness properties on execution traces
specified by nevertrace claims.
In the second part of the thesis, we investigate the model monitoring approach,
whose theoretical foundation is the theory of control systems. The key principle of
the approach is "property specifications as controllers". In other words, the functional requirements and the property specification of a system are modeled
and implemented separately, and the latter controls the behavior of the former. The
model monitoring approach comprises two alternative techniques, namely model monitoring and model generating. The approach can be applied in several ways to improve
the reliability and safety of various classes of systems. We present some typical
applications to demonstrate its power. First, the approach provides better support
for the change and evolution of property specifications. Second, it provides the
theoretical foundation of safety-related systems in the standard IEC 61508 for ensuring functional validity. Third, it is used to formalize and check guidelines
and consistency rules of UML.
These results lay out the foundations for further study of more advanced control
mechanisms, and provide a new way for ensuring reliability and safety.
Keywords: C system, control system, model monitoring, model generating, model
checking, formal language, automaton, grammar, regulated rewriting, supervisory
control, safety-related system, UML
Résumé
Cette thèse contribue à l'étude de la fiabilité et de la sécurité-innocuité des
systèmes informatisés, modélisés par des systèmes à événements discrets. Les principales contributions concernent la théorie des Systèmes de Contrôle (C Systems) et
l'approche par Monitoring des modèles.
Dans la première partie de la thèse, nous étudions la théorie des systèmes de
contrôle, qui combine et étend de façon significative les systèmes de réécriture de
la théorie des langages et le contrôle supervisé. Un système de contrôle est une
structure générique qui contient deux composants : le composant contrôlé et le composant contrôlant qui restreint le comportement du composant contrôlé. Les deux
composants sont exprimés en utilisant le même formalisme, comme des automates ou
des grammaires. Nous considérons différentes classes de systèmes de contrôle basées
sur différents formalismes comme, par exemple, les automates, les grammaires, ainsi
que leurs versions infinies et concurrentes. Ensuite, une application de cette théorie
est présentée. Les systèmes de contrôle basés sur les automates de Büchi sont utilisés pour vérifier par model-checking des propriétés définissant la correction sur des
traces d'exécution spécifiées par une assertion de type nevertrace.
Dans la seconde partie de la thèse, nous étudions l'approche de monitoring des
modèles, dont la théorie des systèmes de contrôle constitue les fondations formelles.
Le principe pivot de cette approche est la spécification de propriétés comme contrôleur. En d'autres termes, pour un système, les exigences fonctionnelles, d'une
part, et les propriétés, d'autre part, sont modélisées et implantées séparément, les
propriétés spécifiées contrôlant le comportement issu des exigences fonctionnelles.
De cette approche découlent ainsi deux techniques alternatives, respectivement nommées monitoring de modèle et génération de modèle. Cette approche peut être utilisée de diverses manières pour améliorer la fiabilité et la sécurité-innocuité de divers
types de systèmes. Nous présentons quelques applications qui montrent l'intérêt
pratique de cette contribution théorique. Tout d'abord, cette approche aide à prendre en compte les évolutions des spécifications des propriétés. En second lieu, elle
fournit une base théorique à la sécurité fonctionnelle, popularisée par la norme IEC
61508. En troisième lieu, l'approche peut être utilisée pour formaliser et vérifier
l'application de guides de bonnes pratiques ou de règles de modélisation appliquées
par exemple pour des modèles UML.
Ces résultats constituent les bases pour des études futures de dispositifs plus
perfectionnés, et fournissent une nouvelle voie pour s'assurer de la fiabilité et de la
sécurité-innocuité des systèmes.
Mots clés : C system, système de contrôle, monitoring de modèle, génération de
modèle, model checking, langage formel, automate, grammaire, réécriture, contrôle
supervisé, sécurité fonctionnelle, UML
Contents

1 Introduction
  1.1 Failures and Accidents of Computer and Software Systems
  1.2 Dependability, Reliability and Safety
  1.3 Formal Verification and Correctness
  1.4 Problems and Challenges
    1.4.1 Checking Properties on Execution Traces
    1.4.2 Supporting the Change and Evolution of Property Specifications
    1.4.3 Functional Validity of Safety-Related Systems in IEC 61508
    1.4.4 Guidelines and Consistency Rules of UML Models
  1.5 A Preview of the Model Monitoring Approach
    1.5.1 The Methodology
    1.5.2 Applications
  1.6 Outline of the Thesis

I Control Systems in Formal Language Theory (C Systems)

2 Preliminaries
  2.1 Automata, Grammars and Languages
    2.1.1 The Chomsky Hierarchy
    2.1.2 Grammars and Languages
    2.1.3 Automata and Languages
  2.2 Grammars with Regulated Rewriting
  2.3 Supervisory Control
  2.4 From Propositional Logic to Temporal Logic
  2.5 Automata-Theoretic Model Checking

3 Control Systems in Formal Language Theory
  3.1 Grammar Control Systems (GC Systems)
    3.1.1 Definitions
    3.1.2 Generative Power
  3.2 Leftmost-Derivation-Based Grammar Control Systems (LGC Systems)
    3.2.1 Definitions
    3.2.2 Generative Power
  3.3 Automaton Control Systems (AC Systems)
    3.3.1 Definitions
    3.3.2 Generative Power
    3.3.3 Equivalence and Translation between AC and LGC Systems
  3.4 Parsing Issues
    3.4.1 GC Systems are not LL-Parsable
    3.4.2 Parsing by Extending Classical Algorithms
  3.5 Related Work
    3.5.1 Grammars with Regulated Rewriting
    3.5.2 Supervisory Control
  3.6 Conclusion

4 On the Generative Power of (σ, ρ, π)-accepting ω-Grammars
  4.1 Preliminaries
    4.1.1 ω-automata and ω-languages
    4.1.2 ω-grammars and ω-languages
    4.1.3 Main Characterizations
  4.2 On the Generative Power of (σ, ρ, π)-accepting ω-Grammars
    4.2.1 Special Forms of ω-Grammars
    4.2.2 Leftmost Derivation of ω-Grammars
    4.2.3 Non-leftmost Derivation of ω-Grammars
  4.3 Related Work
  4.4 Conclusion

5 Control Systems on ω-Words
  5.1 ω-Grammar Control Systems (ω-GC Systems)
    5.1.1 Definitions
    5.1.2 Generative Power
  5.2 Leftmost-Derivation-Based ω-Grammar Control Systems (ω-LGC Systems)
    5.2.1 Definitions
    5.2.2 Generative Power
  5.3 ω-Automaton Control Systems (ω-AC Systems)
    5.3.1 Definitions
    5.3.2 Generative Power
    5.3.3 Equivalence and Translation between ω-AC and ω-LGC Systems
  5.4 Related Work
  5.5 Conclusion

6 Büchi Automaton Control Systems and Concurrent Variants
  6.1 Büchi Automaton Control Systems (BAC Systems)
    6.1.1 Büchi Automata
    6.1.2 Büchi Automaton Control Systems
    6.1.3 Alphabet-level Büchi Automaton Control Systems
    6.1.4 Checking Büchi Automaton Control Systems
  6.2 Input/Output Automaton Control Systems (IO-AC)
    6.2.1 Input/Output Automata
    6.2.2 Input/Output Automaton Control Systems
  6.3 Interface Automaton Control Systems (IN-AC)
    6.3.1 Interface Automata
    6.3.2 Interface Automaton Control Systems
  6.4 Related Work
  6.5 Conclusion

7 Nevertrace Claims for Model Checking
  7.1 Introduction
  7.2 Constructs for Formalizing Properties in SPIN
  7.3 Nevertrace Claims
    7.3.1 Label Expressions
    7.3.2 Transition Expressions
    7.3.3 Nevertrace Claims
  7.4 Theoretical Foundation for Checking Nevertrace Claims
    7.4.1 The Asynchronous Composition of Büchi Automata
    7.4.2 The Asynchronous-Composition BAC System
    7.4.3 From Nevertrace Claims to AC-BAC Systems
  7.5 On Expressing Some Constructs in SPIN
    7.5.1 Expressing Notrace Assertions
    7.5.2 Expressing Remote Label References
    7.5.3 Expressing the Non-Progress Variable
    7.5.4 Expressing Progress-State Labels
    7.5.5 Expressing Accept-State Labels
  7.6 Related Work
  7.7 Conclusion

II The Model Monitoring Approach and Applications

8 The Model Monitoring Approach
  8.1 The Model Monitoring Approach
  8.2 Supporting the Change and Evolution of Property Specifications
  8.3 Example: Oven and Microwave Oven
    8.3.1 Using the Model Checking Approach
    8.3.2 Using the Model Monitoring Approach
    8.3.3 A Technical Comparison with Model Checking
  8.4 Related Work
  8.5 Conclusion

9 Applications of the Model Monitoring Approach
  9.1 Theoretical Foundation of Safety-Related Systems in IEC 61508
    9.1.1 IEC 61508 and Safety-Related Systems
    9.1.2 Functional Validity of Safety-Related Systems
    9.1.3 Example: Chemical Reactor
    9.1.4 Methodology for Designing Functionally Valid SRS
    9.1.5 Model Monitoring as the Theoretical Foundation
    9.1.6 Conclusion
  9.2 Formalizing Guidelines and Consistency Rules of UML
    9.2.1 Unified Modeling Language
    9.2.2 Guidelines and Consistency Rules
    9.2.3 The Grammar of UML in XMI
    9.2.4 Formalizing Rules Using LGC Systems
    9.2.5 Implementation of LGC Systems
    9.2.6 Related Work
    9.2.7 Conclusion

10 Conclusion
  10.1 Contribution
  10.2 Future Work

Bibliography

Index

Publications
Chapter 1
Introduction
This thesis contributes to the study of reliability and safety of computer and software
systems, which are modeled as discrete event systems. In this chapter, after summarizing
basic concepts, a new methodology, namely the model monitoring approach, is informally
proposed, in order to treat several identified problems and challenges.
1.1 Failures and Accidents of Computer and Software Systems
Computer and software systems are pervasive in our society nowadays. People are increasingly dependent on computers and software applications in almost every aspect of daily
life, e.g., Internet technology, mobile phones, audio and video systems, and consumer electronics. More importantly, embedded software solutions control safety-critical
systems whose failures may cause accidents and great loss, such as satellite constellations,
aircraft and flight control, high-speed trains, cars, traffic control, nuclear plants, chemical
processes, medical devices, and cash dispensers. Therefore, it becomes critical to ensure the
absence of failures and accidents.
A (design, interaction or physical) fault in a system may be activated and cause an
error, which may be propagated and cause a failure [3, 4]. A failure is an event switching
the system to incorrect behavior (a deviation from the specification). An error is the part of
the internal state of a system that may lead to a failure. Fortunately, in the past two decades,
the overall mean time to failure has increased by two orders of magnitude, thanks to the progressive
mastering of physical faults [3]. Therefore, the dominant sources of failure are now
design and interaction faults, due to the increasing size and complexity of systems.
Failures may have substantial financial consequences for the manufacturer. For instance, the bug in Intel's Pentium floating-point division unit in the mid-nineties caused a loss of about
475 million US dollars to replace faulty processors.
More seriously, the failure of safety critical systems may result in accidents, which lead
to severe harms to the environment and the user.
Between 1985 and 1987, a software fault in the control module of the radiation therapy
machine Therac-25 caused the death of six cancer patients, because they were exposed to
an overdose of radiation due to bad but authorized use of the system.
A more recent example is the self-destructive explosion of the Ariane 5 launcher on June
4, 1996. The accident resulted from the successive failures of the active Inertial
Reference System (IRS) and the backup IRS [88]. Ariane 5 adopted the same reference
system as Ariane 4. However, the flight profile of Ariane 5 was different from that of Ariane
4: the acceleration communicated as input value to the IRS of Ariane 5 was higher.
Furthermore, the interactions between the IRS and other components were neither redefined
nor rechecked. During the launch, an exception occurred when a large 64-bit floating-point
number was converted to a 16-bit signed integer. Due to this overflow in the input value
computation, the IRS stopped working [81]. The signaled error was then interpreted as
launcher attitude data, and led the control system to rotate the nozzle to its end stop [59],
which caused the self-destruction of the rocket.
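The kind of conversion fault involved can be sketched as follows. This is an illustrative sketch only: the flight software was written in Ada, not Python, and the variable name and numeric values below are made up, not taken from the accident reports.

```python
# A 64-bit float whose value exceeds the signed 16-bit range cannot be
# converted safely; an unhandled exception at this point halts the unit.

INT16_MIN, INT16_MAX = -32768, 32767

def to_int16(x: float) -> int:
    """Convert a float to a signed 16-bit integer, raising on overflow."""
    n = int(x)
    if not INT16_MIN <= n <= INT16_MAX:
        raise OverflowError(f"{x} does not fit in a signed 16-bit integer")
    return n

# On Ariane 4 the input value stayed in range; the higher acceleration
# of Ariane 5 pushed it out of range (hypothetical numbers).
print(to_int16(20000.7))        # fits in 16 bits
try:
    to_int16(65000.0)           # does not fit: raises OverflowError
except OverflowError as exc:
    print("unhandled in flight:", exc)
```

On Ariane 5 the corresponding exception was not handled at this level, so the conversion error propagated and shut the reference system down.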
It is worth noting that the absence of failures does not necessarily imply the absence of
accidents. Some safety constraints may be unknown during development, not included in
the specification, or not correctly described at design time. As a result, the requirements,
the specification and their reliable implementation may all be unsafe.
As an example, consider an accident that occurred in a batch chemical reactor in England
[80, 87]. Figure 1.1 shows the design of the system. The computer, which served as a
control system, controlled the flow of catalyst into the reactor and the flow of water for
cooling the reaction, by manipulating the valves. Additionally, the computer received
sensor inputs indicating the status of the system. The designers were told that if an
abnormal signal occurred in the plant, they were to leave all controlled variables as they
were and to sound an alarm.
Figure 1.1: Reactor Control System
On one occasion, the control system received an abnormal signal indicating a low oil
level in a gearbox, and reacted as the functional requirements specified: it sounded an
alarm and kept all controlled variables in their present condition. Unfortunately, a catalyst
had just been added into the reactor, but the control system had not yet fully opened the flow of
cooling water. As a result, the reactor overheated, the relief valve lifted, and
the contents of the reactor were discharged into the atmosphere. Note that no
failures were involved in this accident, since all components worked as specified.
Another example is the loss of the Mars Polar Lander, which was attributed to a
misunderstood interaction between the onboard software and the landing leg system
[10]. The landing leg system was expected, and specified, to generate noise (spurious signals)
when the landing legs were deployed during descent to the planet surface. However, the
onboard software interpreted these signals as an indication that landing had occurred (as
specified in its requirements) and shut down the descent engines, causing the spacecraft
to crash into the Martian surface. Note that, again, no failures were involved in this
accident, since all components (the landing legs and the software) worked as specified.
1.2 Dependability, Reliability and Safety
Dependability, Reliability and Safety
Dependability of a computing system is the ability to deliver services that can justifiably be trusted [85, 3, 4, 59]. Other similar concepts exist, such as trustworthiness and
survivability.
Dependability includes the following basic attributes: reliability, safety, confidentiality,
availability, integrity, maintainability [3, 4]. Combinations or specializations of the basic
attributes derive several other attributes, such as security and robustness.
Here, we are interested in the first two attributes, i.e., reliability and safety.
Reliability means continuity of correct service [87, 3], i.e., the aptitude of a system to
accomplish a required function under given conditions and for a given interval of time [59].
Note that a system is said to be "correct" if it fulfills its specification, i.e., correctness is
always relative to a specification. Thus, reliability is defined as the absence of failures. The
reliability of a physical product always decreases, due to the degradation of hardware.
In the case of software systems, however, reliability stays constant, due to the absence of
ageing phenomena.
Safety means the absence of accidents [87] involving severe consequences for the environment, including the user [3], i.e., freedom from unacceptable risk of physical injury or
of damage to the health of people, either directly, or indirectly as a result of damage to
property or to the environment [77].
In fact, reliability and safety are different attributes. High reliability is neither necessary nor sufficient for safety [86]. This observation opposes the pervasive assumption
in engineering that safety is proportional to reliability, i.e., that safety can be increased
merely by increasing system or component reliability.
A system can be reliable but unsafe. Accidents may result from a reliable system
implementing unsafe requirements and specification. For example, the loss of the Mars
Polar Lander was caused by an interaction accident resulting from the noise generated
by the landing legs. The landing legs and the onboard software performed correctly
and reliably (as specified in their requirements), but the accident occurred because the
designers did not consider all possible interactions between components [87, 88].
A system can also be safe but unreliable. For example, human operators are not reliable
if they do not follow the specified requirements (i.e., procedures), but they may prevent
an accident if the specified requirements turn out to be unsafe in the environment at
a particular time. Another example is fail-safe systems, which are designed to fail into a
safe state [87]; a stopped car, for instance, avoids traffic accidents.
Therefore, a more reasonable assumption is that, if all the necessary safety constraints
are known and are completely and correctly included in the requirements and specification, then
safety can be increased by increasing reliability. Unfortunately, this assumption is hard
to satisfy in practice.
First, it is frequently impossible to identify all the necessary safety constraints at
design time. Moreover, requirements and specification may evolve or change as new
safety constraints are learned from historical accidents [90].
Second, in some cases, we cannot include some safety constraints in the requirements
and specification even though they are known. There are various reasons, such as technical limits, the tradeoff between economics and safety, etc. Sometimes reliability and safety
even conflict, that is, increasing one of them may decrease the other [86]. For example,
some missions are inherently unsafe, such as toxic chemical reactions, nuclear weapons, etc.
Not building such systems is the safest solution.
However, we should also acknowledge that there is an intersection between reliability
and safety, because if a necessary safety constraint is explicitly specified, then reliable
implementations can improve safety.
To conclude, we may increase reliability by eliminating failures, while we shall increase
safety by eliminating hazards or preventing their effects. High reliability does not necessarily
mean high safety, due to safety constraints that are unknown, not included in the
specification, or not correctly described.
1.3 Formal Verification and Correctness
An important stage of development is verification and validation (V&V). The former
checks that the design satisfies the identified requirements (are we building the
system right?), while the latter judges whether the formalized problem statement (model
+ properties) is consistent with the informal conception of the design (are we
verifying the right system?).
System verification aims at improving reliability, i.e., checking that the design satisfies
certain properties. The properties to be verified are mostly obtained from the specification
of the system. The system is considered "correct" if it fulfills all specified properties, i.e., correctness contributes to reliability. It is a major challenge to ensure the
correctness of the design at the earliest possible stage. In fact, the cost of verification
and validation of a computing system is at least half of the development cost, and three
quarters for highly critical systems [3].
The most widely practiced verification techniques are peer reviewing, simulation
and testing [103]. Empirical studies show that peer reviewing of uncompiled code catches
between 31% and 93% of the faults, with a median around 60%, but concurrency and
algorithm faults are hard to catch. Simulation and testing are dynamic techniques that
traverse a set of execution paths. However, exhaustive testing of all execution paths is
practically infeasible.
A serious problem with these techniques is that, although they are effective in the early
stages of debugging, we never know when to stop verifying or whether bugs remain in
the design. Another problem is their inability to scale up to large, complex
designs, because they can only explore a small portion of the possible behaviors of the
system.
Over the last thirty years, a very attractive approach toward correctness has been formal
verification, which uses an exhaustive exploration of all possible behaviors of the system.
Generally, formal methods are considered the applied mathematics of modeling and
analyzing systems, and are among the "highly recommended" verification techniques according to several international standards. For formal verification, computer and software
systems are usually modeled as Discrete Event Systems (DES), which are discrete-state,
event-driven systems. That is, the state space is a discrete set, and the state evolution
depends entirely on the occurrence of asynchronous discrete events over time [18].
A prominent technique of formal verification is model checking [31, 5], which was
developed independently by Clarke and Emerson [30] and by Queille and Sifakis [111]. It
allows the desired properties to be verified automatically, by exhaustively and
systematically exploring the state space of the system model.
The methodology of model checking is shown in Fig. 1.2.
Figure 1.2: The Methodology of Model Checking (the verification procedure checks whether the system behaviors, i.e., sequences of events generated by the system model, satisfy the property specification)
The system model, describing the behaviors of the system, is usually written using
specific languages (e.g., Promela), or automatically generated from a model description
specified in some dialect of programming languages (e.g., C, Java) or hardware description
languages (e.g., Verilog, VHDL).
The property specification prescribes what the system should or should not do, such
as functional correctness (does the system do what it is expected to do?), reachability
(is it possible to reach a certain state, e.g., a deadlock?), safety (something bad never occurs), liveness (something good will eventually occur), fairness (can an event occur repeatedly?), and real-time properties (is the system acting in time?).
Then efficient algorithms are used to determine whether the system model satisfies the
property specification by checking all possible system behaviors, i.e., through exhaustive
enumeration (explicit or implicit) of all states reachable by the system and the sequences of
events that traverse through them. If there exists one behavior that violates the property,
it is highlighted as a counterexample.
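The exhaustive-enumeration step described above can be sketched, for an explicit-state safety property, as a breadth-first search over the reachable states. The sketch below is illustrative only (the helper names and the toy system are ours, not tied to any particular model checker); it returns the violating sequence of events as a counterexample.

```python
from collections import deque

def check_safety(initial, successors, is_bad):
    """Explicit-state check of a safety property ("something bad never
    occurs"): BFS over all reachable states; if a bad state is found,
    return the sequence of events leading to it as a counterexample."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, trace = frontier.popleft()
        if is_bad(state):
            return trace  # counterexample: a violating behavior
        for event, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, trace + [event]))
    return None  # all reachable states explored; the property holds

# Toy system: a counter that must never reach 3.
succ = lambda s: [("inc", s + 1), ("reset", 0)] if s < 5 else []
print(check_safety(0, succ, lambda s: s == 3))  # ['inc', 'inc', 'inc']
```

Because BFS visits states in order of distance from the initial state, the returned counterexample is also a shortest one, which is what makes it useful as debugging information.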
The model checking approach enjoys two remarkable advantages. First, it is fully automatic. Unlike theorem proving, neither user interaction nor expertise is required. Second,
if the model fails to satisfy a property, a counterexample is always produced serving as
priceless debugging information. If the design is announced correct, it implies that all
behaviors have been explored, and the coverage problem becomes irrelevant. As a result,
model checking has been widely used in industrial projects to verify safety-critical systems, such as aircraft [11, 13], the traffic alert and collision avoidance system for airplanes [19], spacecraft
[68], train control system [52], storm surge barrier control system [122, 126] and other
embedded software [117].
1.4 Problems and Challenges
In this section, we list some current problems and challenges in the domains related to
reliability and safety. These issues inspired us to develop the new theory in this thesis,
which in turn contributes to these applications.
1.4.1 Checking Properties on Execution Traces
Software model checkers, e.g., SPIN, support various constructs for formalizing different
classes of properties. The most powerful constructs are the never claim, the trace and
notrace assertions. The never claim is used to specify the properties on sequences of states
that should never occur, while the trace and notrace assertions are used to specify the
properties on sequences of transitions of simple channel operations, i.e., simple send and
receive operations on message channels. A transition is a statement between two states,
thus the trace and notrace assertions only treat a restricted subset of transitions.
However, we observed that the existing constructs cannot specify the properties on
full sequences of transitions apart from the transitions of simple channel operations, e.g.,
assignments, random receive operations, etc. Therefore, we need a new construct for
specifying correctness properties related to execution traces, and a theory to support the
checking of the properties.
1.4.2 Supporting the Change and Evolution of Property Specifications
The change and evolution of property specifications are now challenges to the traditional development process. Changes are common both at design time and post-implementation, especially for systems whose life period is long, e.g., aircraft, nuclear
plants, critical embedded electronic systems, etc. The changes may result from various
factors, such as the change of dependability requirements and new safety regulations.
Unfortunately, such changes always entail a high cost of rechecking and revising the system, especially when the system is too complex to be clearly analyzed manually or so large that the revision is not trivial. Moreover, the changes may require not only modifying a small portion of the system, but revising the entire design. Therefore, we need a technique that supports changing property specifications at a lower cost.
1.4.3 Functional Validity of Safety-Related Systems in IEC 61508
The international standard IEC 61508 [77] provides a generic process for electrical, electronic, or programmable electronic (E/E/PE) Safety-Related Systems (SRS) to achieve
an acceptable level of functional safety. The achieved safety requirements consist of two
parts, safety functions and associated safety integrity levels (SIL).
However, the standard focuses more on the realization of integrity requirements than on the realization of function requirements. As a result, the standard indicates only that the product achieves a given integrity level, but not whether it implements the right safety requirements. Furthermore, the standard does not prescribe exactly how the verification of the safety functions of an SRS could technically be done.
As a result, the problem of functional validity arises. Functional validity concerns whether the safety functions realized by the SRS can really prevent accidents and recover the system from hazardous states, provided the expected safety integrity level is reached.
Therefore, we need a generic technical methodology to achieve functional validity.
1.4.4 Guidelines and Consistency Rules of UML Models
The Unified Modeling Language (UML) is a visual modeling language developed by Object
Management Group (OMG) [108, 109]. Guidelines and consistency rules of UML are used to control the degrees of freedom provided by the UML language in order to prevent faults. A guideline contains a set of rules which recommend certain uses of technologies (e.g., modeling and programming languages) to produce more reliable, safe and maintainable products such as computer and software systems. Consistency problems of UML models have attracted great attention from both the academic and industrial communities [83, 82, 76]. A list of 635 consistency rules is identified in [124, 125].
However, guidelines and consistency rules provide informal restrictions on the use of
language, which makes checking difficult. Therefore, we need theories and techniques to
formalize guidelines and consistency rules to control the use of UML.
1.5 A Preview of the Model Monitoring Approach
In order to treat the identified problems and challenges, a new formal method, namely the
model monitoring approach, and its theoretical foundation, namely the Control System (C
System), will be proposed in this thesis. In this section, we will informally introduce the
methodology and its applications.
1.5.1 The Methodology
Let us consider the language-theoretic approach for modeling, analyzing and controlling
discrete event systems. These systems receive input events acquired from sensors and
produce output events via actuators. The set of all possible sequences of events (i.e., words)
is a language whose alphabet is the set of inputs and outputs (timed and stochastic features
are not considered). If a language is finite, we could list all its elements, i.e., all possible
system behaviors. Unfortunately, this is rarely practical, since there are infinitely many possible sequences.
Note that these sequences are not random but restrained by the system implementation.
Therefore, we use discrete event modeling formalisms to represent the system model, which
is implemented to generate system behaviors.
It is usually expected that a system satisfies some properties to ensure dependability.
The properties restrict all possible system behaviors to a subset. That is, some undesired
sequences of events and execution traces are not admissible, since they violate the property
specification. The admissible subset of execution traces can be defined by a controlling
language whose alphabet includes the elements of execution traces. Again, the controlling
language cannot be specified by enumerating all admissible execution traces, since there are infinitely many of them. Therefore, the properties are also modeled using certain
formalisms, such as automata and temporal logic.
Usually, the properties are verified during the development process to restrict the
design of the system. However, as the natures of the system model and the properties are
different, their manual integration is complex and error-prone. Therefore, we try to find a
new methodology to ensure the properties at a lower cost by separating the system model
and the property specification.
The Model Monitoring Approach. As shown in Fig. 1.3, our new methodology consists in preserving the separation of the system model and the property specification. The
property specification is modeled and implemented as a control structure which supervises
the behavior of the system. Thus the behaviors generated by the global system satisfy
the properties. This is significantly different from the methodology of model checking (cf. Fig. 1.2).
The system behaviors are expressed as a language L on the inputs and outputs, generated by the system model, whereas the properties are formulated as a controlling model accepting a controlling language L̂ on the execution traces rather than on the inputs and outputs. The two models can then be automatically merged or, preferably, executed in parallel. Technically, we use two alternative implementations: model monitoring and model generating. For model monitoring, the controlling model supervises the execution of the system at runtime, whereas model generating automatically combines the two models to generate a global system satisfying the properties. As a result, the sequence of inputs controls certain degrees of freedom of the system behaviors, and the properties control other degrees of freedom, leading to correct and safe outputs.
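A minimal executable sketch may help fix the model monitoring idea (the event vocabulary and class design below are hypothetical, chosen only for illustration): the controlling model is a small automaton that observes each event the system model emits and vetoes any event outside the controlling language L̂.

```python
class Monitor:
    """Minimal sketch of model monitoring: a controlling DFA observes the
    events emitted by the system model and vetoes any event that would
    leave the admissible controlling language."""
    def __init__(self, transitions, initial):
        self.transitions = transitions  # maps (state, event) -> next state
        self.state = initial

    def allows(self, event):
        return (self.state, event) in self.transitions

    def step(self, event):
        if not self.allows(event):
            raise RuntimeError(f"property violation blocked: {event!r}")
        self.state = self.transitions[(self.state, event)]

# Property: every 'open' is matched by a 'close' before the next 'open'.
monitor = Monitor({("idle", "open"): "busy",
                   ("busy", "close"): "idle",
                   ("busy", "write"): "busy"}, "idle")
for event in ["open", "write", "close"]:
    monitor.step(event)          # all three events are admissible
print(monitor.allows("close"))   # False: 'close' without a prior 'open'
```

The system model and the monitor remain separate artifacts; if the property changes, only the transition table of the monitor is rewritten, which is the cost advantage discussed above.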
Figure 1.3: The Methodology of the Model Monitoring Approach (the property specification controls the system model; the resulting global system generates the system behaviors, i.e., sequences of events)
Control System (C System). As the theoretical foundation of the model monitoring
approach, a new formalism based on the formal language theory, namely the Control System (C System), will be proposed. The C System is a generic framework, and contains
two components: the controlled component and the controlling component. The controlled component expresses a language L on inputs and outputs, whereas the controlling component, also called a controller, expresses a controlling language L̂ restricting the use of L without changing L itself. Technically, each component can be formulated using automata or grammars, resulting in two types of C systems: automaton control systems and
grammar control systems, respectively.
Automaton Control System (AC System). A discrete event system is always modeled as an automaton A which accepts a language L(A) = L. The property specification is modeled as a controlling automaton Â accepting a language L(Â) = L̂. The model Â controls the execution of A, thus all controlled behaviors satisfy the properties. Therefore, the global system which consists of A and Â satisfies the desired property specification.
Grammar Control System (GC System). A grammar system is always modeled as
a grammar G which accepts a language L(G) = L. The property specification is modeled
as a controlling grammar Ĝ accepting a language L(Ĝ) = L̂. The grammar Ĝ controls
the derivation of G, thus all controlled derivations produce only the words satisfying the
properties. Therefore, the global system which consists of G and Ĝ satisfies the desired
property specification.
Infinite Versions. Some system behaviors may be of infinite length, e.g., nonstop systems. If we consider the system behaviors of infinite length as ω-words, then the set of all
possible sequences of events is an ω-language, which can be generated by an ω-automaton
or an ω-grammar. In this case, we define the infinite versions of the above types of control
system, namely ω-C System, ω-AC System, and ω-GC System, respectively.
We would like to mention some obvious differences between our formalism and Ramadge and Wonham's supervisory control [112]. The C System is a more generic framework taking into account automata, grammars, ω-automata, ω-grammars, etc., whereas supervisory control only considers finite state automata. Further technical differences will be discussed in the subsequent chapters.
1.5.2 Applications
One generic application of the model monitoring approach is to provide better support
for the change and evolution of property specifications. If the property specification obtained from dependability requirements changes, only the controlling component needs
modifications, thus decreasing the cost.
The model monitoring approach can also serve as the theoretical foundation of safety-related systems in IEC 61508. A safety-related system can be modeled as a controlling
component that supervises the execution of the controlled system.
The GC system can be used to formalize and check guidelines and consistency rules of
UML. The grammar of UML is considered as a grammar system, and the system behaviors
are the conforming user models. With the supervision of controlling grammars specifying
guidelines and consistency rules, the global system accepts only consistent UML user
models.
A variant of the ω-AC system can be used to specify and model-check the properties on execution traces, since the controlling language defines the admissible subset of
execution traces.
1.6 Outline of the Thesis
This dissertation consists of two parts, which expose the theory of Control Systems (C
Systems), and the model monitoring approach, respectively.
The first part proposes the theory of C Systems, based on the theory of automata and
formal languages. The C System has two versions: finite and infinite, which are named C
System and ω-C System, respectively. The finite version is based on the classic theory of
automata and formal languages, and used to model systems with finite length behaviors,
e.g., the systems with final states, the grammar of UML which accepts conforming models
through finite derivations. The infinite version is based on the theory of ω-automata and
ω-languages, and used to model nonstop systems, e.g., chemical reactor systems.
Chapter 2 briefly recalls the necessary preliminaries. We introduce some basic notions about automata, grammars and languages. We then review some related theories,
including regulated rewriting, supervisory control and model checking.
Chapter 3 proposes the theory of Control Systems (C Systems) on finite words. Technically, three types of C Systems and their generative power are studied, namely Grammar Control Systems (GC Systems), Leftmost-derivation-based Grammar Control Systems
(LGC Systems) and Automaton Control Systems (AC Systems).
Chapter 4 provides the preliminary for Chapter 5. We propose the (σ, ρ, π)-accepting
ω-grammar, and study its relative generative power to (σ, ρ)-accepting ω-automata.
Chapter 5 proposes the theory of ω-Control Systems (ω-C Systems) on infinite words.
Technically, three types of ω-C Systems and their generative power are studied, namely ω-Grammar Control Systems (ω-GC Systems), Leftmost-derivation-based ω-Grammar Control Systems (ω-LGC Systems) and ω-Automaton Control Systems (ω-AC Systems).
Chapter 6 proposes the Büchi Automaton Control System (BAC System), which is
a special case of the ω-AC system. The variants of BAC systems in the context of concurrency are also discussed, such as Input/Output Automaton Control System (IO-AC
System) and Interface Automaton Control System (IN-AC System).
Chapter 7 presents an application of the BAC system in model checking. We propose
the nevertrace claim, which is a new construct for specifying correctness properties. The
theoretical foundation for checking nevertrace claims is studied.
The second part proposes the model monitoring approach based on the theory of C
Systems, and introduces its applications.
Chapter 8 presents the model monitoring approach. An application supporting the change and evolution of property specifications is discussed to illustrate its merits.
Chapter 9 discusses two additional applications of the model monitoring approach to demonstrate its power. The first one concerns safety-related systems in IEC 61508, which
are automata-based systems. The second one concerns guidelines and consistency rules of
UML, which is a grammar system.
Chapter 10 concludes this thesis by summarizing contributions and future work.
It is worth noting that the two parts are strongly connected. In order to read the second part, the reader needs to understand the basic ideas and notations introduced in the first part. However, the proofs can be skipped on a first reading. Furthermore,
as an exception, Chapters 4 and 5, which explore the theoretical aspects of the ω-languages
theory, could be safely skipped if the reader is more interested in the applications than
the theoretical results.
Provenance of the material. This thesis is partially based on published materials.
Chapter 6, which presents the Büchi Automaton Control System (BAC System) and its
concurrent variants, is an improved and extended version of our earlier work on these
subjects. In particular, the BAC system was proposed in [26, 25], the IO-AC system
with existential meta-composition operator was introduced in [22], and the IN-AC system
with universal meta-composition operator was discussed in [23, 20]. A short version of
Chapter 7 which proposes the nevertrace claim appeared in [24]. In Chapter 8, the
idea of separating functional and dependability requirements was introduced in [25, 26].
In Chapter 9, the application of grammar control systems for modeling and checking
guidelines and consistency rules of UML was discussed in [21].
Part I
Control Systems in Formal
Language Theory (C Systems)
Chapter 2
Preliminaries
In this chapter, we briefly recall the necessary preliminaries. We introduce some basic
notions about automata, grammars and languages. We then review some related theories,
including regulated rewriting, supervisory control and model checking.
2.1 Automata, Grammars and Languages
Some basic definitions, conventions and notations are recalled in this section. Exhaustive investigations of this subject can be found in the monographs [61, 74, 116, 115].
2.1.1 The Chomsky Hierarchy
The Chomsky hierarchy is a containment hierarchy of classes of formal grammars and
languages, which was first proposed in [28, 29]. The four numbered types and two additional important types of languages, along with their associated classes of grammars and
automata, are summarized in Table 2.1.
Hierarchy  Grammars                     Languages                    Automata
0          Phrase structure             Recursively enumerable       Turing machine
1          Context-sensitive            Context-sensitive            Linear-bounded
N/A        Tree-adjoining               Mildly context-sensitive     Embedded pushdown
2          Context-free                 Context-free                 Nondeterministic pushdown
N/A        Deterministic context-free   Deterministic context-free   Deterministic pushdown
3          Regular                      Regular                      Finite state

Table 2.1: The Chomsky Hierarchy
2.1.2 Grammars and Languages
Conventions: The capital letters A, B, C, D, E and S denote nonterminals, and S is the start symbol unless stated otherwise. The lowercase letters near the beginning of the alphabet a, b, c, d, e
denote terminals. The capital letters near the end of the alphabet X, Y, Z denote either
terminals or nonterminals. The lowercase letters near the end of the alphabet u, v, w, x, y, z
denote strings of terminals. The lowercase Greek letters α, β, γ denote strings of either
terminals or nonterminals.
Definition 2.1. A phrase structure grammar (PSG) is denoted G = (N, T, P, S), where N is a finite set of nonterminals, T is a finite set of terminals, P is a finite set of productions of the form α → β with α ≠ ε, where α, β are strings of symbols from (N ∪ T)*, and S ∈ N is the start symbol. We define the vocabulary V = N ∪ T. The language accepted by G is

L(G) = {w ∈ T* | S ⇒* w}.
Phrase structure grammars are also called type-0 grammars or unrestricted grammars.
Definition 2.2. A context-sensitive grammar (CSG) is a phrase structure grammar whose productions are of the form α → β with α ≠ ε and |α| ≤ |β|, where α, β ∈ (N ∪ T)*.
The term “context-sensitive” comes from a normal form for these grammars, where each production is of the form α1 A α2 → α1 β α2 with β ≠ ε.
Definition 2.3. A context-free grammar (CFG) is a phrase structure grammar whose productions are of the form A → α, where A ∈ N is a nonterminal and α ∈ (N ∪ T)*.

Definition 2.4. A right-linear grammar is a phrase structure grammar whose productions are of the form A → wB or A → w, where A and B are nonterminals and w ∈ T* is a string of terminals. If all productions are of the form A → Bw or A → w, we call it a left-linear grammar. A right- or left-linear grammar is called a regular grammar (REG).
2.1.3 Automata and Languages
Conventions: The lowercase letters a, b, c, d, e near the beginning of the alphabet denote
input symbols. The lowercase letters u, v, w, x, y, z near the end of the alphabet denote
strings of input symbols. The capital letters A, B, C, X, Y, Z denote tape/stack symbols.
The lowercase Greek letters α, β, γ denote strings of tape/stack symbols.
Definition 2.5. A (nondeterministic) Turing machine (TM) is a tuple M = (Q, Σ, Γ, δ, q0, B, F), where Q is a finite set of states, Σ is a finite input alphabet not including B, Γ is a finite tape alphabet, δ is a transition function mapping Q × Γ → 2^(Q×Γ×{L,R,S}), where L, R and S denote moving left, moving right and staying stationary, respectively, q0 ∈ Q is the initial state, B ∈ Γ is the blank, and F ⊆ Q is a set of final states.
A TM M = (Q, Σ, Γ, δ, q0, B, F) is a deterministic Turing machine (DTM) if δ is a transition function mapping Q × Γ → Q × Γ × {L, R, S}.
An instantaneous description (ID) is denoted by α1 q α2, where q is the current state and α1 α2 ∈ Γ* is the contents of the tape up to the rightmost nonblank symbol. The tape head is assumed to be scanning the leftmost symbol of α2, or, if α2 = ε, the head is scanning a blank.
We denote by ⊢M a move, by ⊢*M zero or more moves, and by ⊢iM exactly i moves. The subscript is dropped if M is understood, i.e., ⊢, ⊢* and ⊢i.
Definition 2.6. For a TM M = (Q, Σ, Γ, δ, q0, B, F), the language accepted is the set

L(M) = {w | w ∈ Σ* and q0 w ⊢* α1 p α2 for some p ∈ F and α1, α2 ∈ Γ*}
Definition 2.7. A (nondeterministic) pushdown automaton (PDA) is a tuple D = (Q, Σ, Γ, δ, q0, Z0, F), where Q is a finite set of states, Σ is a finite input alphabet, Γ is a finite stack alphabet, δ is a transition function mapping Q × (Σ ∪ {ε}) × Γ → 2^(Q×Γ*), q0 ∈ Q is the initial state, Z0 ∈ Γ is the start symbol, and F ⊆ Q is a set of final states.
A PDA D = (Q, Σ, Γ, δ, q0, Z0, F) is a deterministic pushdown automaton (DPDA) if:
1. for each q ∈ Q and Z ∈ Γ, whenever δ(q, ε, Z) is nonempty, then δ(q, a, Z) is empty for all a ∈ Σ;
2. for each q ∈ Q, Z ∈ Γ, and a ∈ Σ ∪ {ε}, δ(q, a, Z) contains no more than one element.
An instantaneous description (ID) is a triple (q, w, γ), where q is a state, w is a string of input symbols, and γ is a string of stack symbols.
We say (q, aw, Zα) ⊢D (p, w, βα) is a move if δ(q, a, Z) contains (p, β), where a ∈ Σ ∪ {ε}. We denote by ⊢*D zero or more moves, and by ⊢iD exactly i moves. The subscript is dropped if D is understood, i.e., ⊢, ⊢* and ⊢i.
Definition 2.8. For a PDA D = (Q, Σ, Γ, δ, q0, Z0, F), the language accepted by final state is the set

L(D) = {w | (q0, w, Z0) ⊢* (p, ε, γ) for some p ∈ F and γ ∈ Γ*}

The language accepted by empty stack is the set

N(D) = {w | (q0, w, Z0) ⊢* (p, ε, ε) for some p ∈ Q}
When acceptance is by empty stack, we usually let the set of final states be ∅.
Definition 2.9. A (nondeterministic) finite state automaton (FSA) with ε-transitions is a tuple A = (Q, Σ, δ, q0, F), where Q is a finite set of states, Σ is a finite input alphabet, δ is a transition function mapping Q × (Σ ∪ {ε}) → 2^Q, q0 ∈ Q is the initial state, and F ⊆ Q is a set of final states.
An FSA A = (Q, Σ, δ, q0, F) is a deterministic finite state automaton (DFSA) if δ is a transition function mapping Q × Σ → Q.
We define the extended transition function δ̂ mapping Q × Σ* → 2^Q by extending δ to apply to a state and a string (see [74]).
Definition 2.10. A string w is accepted by an FSA A = (Q, Σ, δ, q0 , F ), if δ̂(q0 , w) contains
a state in F . The language accepted by A is the set
L(A) = {w | δ̂(q0 , w) contains a state in F }
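Definition 2.10 translates directly into an executable acceptance test. The sketch below is illustrative (plain Python, with δ encoded as a dictionary and the empty string standing for ε): it computes δ̂(q0, w) by alternating ε-closures and symbol steps, then intersects the result with the final states.

```python
def accepts(delta, q0, finals, word):
    """Acceptance test for an FSA with epsilon-transitions: compute the
    extended transition function on (q0, word) and check whether the
    resulting state set meets the final states. delta maps (state, symbol)
    to a set of states; the empty string "" denotes epsilon."""
    def eps_closure(states):
        closure, stack = set(states), list(states)
        while stack:
            q = stack.pop()
            for p in delta.get((q, ""), set()) - closure:
                closure.add(p)
                stack.append(p)
        return closure

    current = eps_closure({q0})
    for a in word:
        current = eps_closure({p for q in current
                                 for p in delta.get((q, a), set())})
    return bool(current & finals)

# FSA accepting the language (ab)* over {a, b}.
delta = {(0, "a"): {1}, (1, "b"): {0}}
print(accepts(delta, 0, {0}, "abab"), accepts(delta, 0, {0}, "aba"))
```

For a deterministic FSA the sets in `delta` are singletons and the ε-closure is the identity, so the same routine covers the DFSA case of Definition 2.9.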
2.2 Grammars with Regulated Rewriting
There are reasons for introducing control structures over grammars and automata. For example, grammars with regulated rewriting [64, 39] and path-controlled grammars [95, 96]
are defined to impose restrictions on the derivations of context-free grammars to increase generative power, since context-free grammars are not able to cover all linguistic phenomena.
Typical types of grammars with controlled derivations are the regularly controlled
grammar and the matrix grammar. Generally, a regular set is used to control the derivations of a grammar [40, 38]. They are defined as follows.
Notations. We denote by REG, CFG, CSG, PSG the families of regular, context-free, context-sensitive, and arbitrary phrase structure grammars, respectively. For REG, we only consider right-linear grammars, as in [39]. We denote by FSA, PDA, TM the families of nondeterministic finite state automata, nondeterministic pushdown automata, and nondeterministic Turing machines, respectively, while the deterministic machines are denoted by DFSA, DPDA, DTM, respectively. For a family X of grammars or automata, we denote the associated family of languages by L(X).
A phrase structure grammar is denoted G = (N, T, P, S), where P is a finite set of productions of the form p : α → β, where p is the name (or label) of the production.
A derivation using a specified production p is denoted by α ⇒p β (with the production name written over the arrow), and its reflexive and transitive closure is denoted by α ⇒* γ, or, with the sequence of applied productions made explicit, α ⇒p1···pk γ.
Definition 2.11 (Def. 1 of [38] and Def. 2.1.2 of [39]). A regularly controlled grammar is a quintuple G = (N, T, P, S, R), where N, T, P, S are specified as in a phrase structure grammar, and R is a regular set over P.
The language L(G) accepted by G consists of all words w ∈ T* such that there is a derivation S ⇒p1···pk w with p1···pk ∈ R.
In words, a regular set controls the acceptable sequences of derivation steps. We denote by L(rC, X) the family of languages accepted by regularly controlled grammars with P containing all productions of type X, e.g., L(rC, CFG).
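A classic illustration of the gain in power is the language {aⁿbⁿcⁿ | n ≥ 1}, which no context-free grammar accepts but which a regularly controlled grammar with context-free productions does. The sketch below is our own illustrative simulation (not part of the cited formalism): it applies labeled productions in the order dictated by a control word drawn from the regular set R = p0 (p1 p2)* p3 p4, so p1 and p2 are forced to be applied equally often.

```python
productions = {           # labeled context-free core productions
    "p0": ("S", "AC"),
    "p1": ("A", "aAb"),   # pump one a ... b pair
    "p2": ("C", "cC"),    # pump one c
    "p3": ("A", "ab"),
    "p4": ("C", "c"),
}

def derive(control_word):
    """Apply the labeled productions in the order given by a control word,
    rewriting the leftmost occurrence of each left-hand side."""
    sentential = "S"
    for label in control_word:
        lhs, rhs = productions[label]
        i = sentential.index(lhs)   # raises if the step is not applicable
        sentential = sentential[:i] + rhs + sentential[i + 1:]
    return sentential

print(derive(["p0", "p1", "p2", "p3", "p4"]))  # aabbcc
```

Every control word in R of the form p0 (p1 p2)ⁿ⁻¹ p3 p4 yields aⁿbⁿcⁿ, while control words outside R (e.g., with unbalanced p1/p2) are simply not admitted, which is exactly how the regular set restricts the derivations.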
Definition 2.12 (Def. 4 of [38] and Def. 1.1.1 of [39]). A matrix grammar is a tuple G = (N, T, P, S, M), where N, T, P, S are specified as in a phrase structure grammar, and M = {m1, m2, ..., mn} is a finite set of sequences of productions mi = (pi1, pi2, ..., pik(i)), k(i) ≥ 1, 1 ≤ i ≤ n, with pi1, pi2, ..., pik(i) ∈ P.
For mi, 1 ≤ i ≤ n, and α, β ∈ (N ∪ T)*, we define α ⇒mi β by α = α0 ⇒pi1 α1 ⇒pi2 · · · ⇒pik(i) αk(i) = β.
The language L(G) accepted by G consists of all words w ∈ T* such that there is a derivation S ⇒mj1 β1 ⇒mj2 · · · ⇒mjk βk = w, for some k ≥ 1, 1 ≤ ji ≤ n, 1 ≤ i ≤ k.
In words, a set of sequences of productions controls the acceptable sequences of derivation steps. We denote by L(M, X) the family of languages accepted by matrix grammars with P containing all productions of type X, e.g., L(M, CFG).
Concerning the generative power of the two types of controlled grammars above, we
have the following theorem, of which (i) follows from Thm. 1.2.1 and Thm. 2.1.1 of [39],
(ii) follows from Thm. 1 of [38] and Thm. 2.1.1 of [39], and (iii) follows from Thm. 2.1 of
[40].
Theorem 2.13. (i) L(M, REG) = L(rC, REG) = L(REG).
(ii) L(CF G) ⊂ L(M, CF G) = L(rC, CF G) ⊂ L(P SG).
(iii) L(rC, CF G) is incomparable with L(CSG).
2.3 Supervisory Control
The theory of supervisory control on deterministic systems was first introduced by P. J.
Ramadge and W. M. Wonham in their papers [112, 113]. An extensive survey on the
supervisory control of deterministic systems can be found in [18].
Supervisory control is used to restrict the inadmissible behavior of a Discrete
Event System (DES). Given a DES, or a plant, whose uncontrolled behavior is modeled
by automaton G, we introduce a supervisor modeled by automaton S to restrict the
behavior of the plant.
Let L ⊆ Σ* be a language. The prefix-closure of L is L̄ = {s ∈ Σ* | ∃t ∈ Σ*, st ∈ L}. L is prefix-closed if L = L̄.
A deterministic finite state automaton (DFSA) is a tuple A = (Q, Σ, δ, q0, F). We define the active event function Γ : Q → 2^Σ, such that Γ(q), the active event set at q, is the set of all events e for which δ(q, e) is defined.
Definition 2.14. The language generated by a DFSA A = (Q, Σ, δ, q0, F) is the set

Lg(A) = {s ∈ Σ* | δ(q0, s) is defined}

The language marked (or accepted) by A is the set

Lm(A) = {s ∈ Σ* | δ(q0, s) ∈ F}

Note that Lg(A) is prefix-closed by definition, i.e., Lg(A) = L̄g(A). Also, we have Lm(A) ⊆ L̄m(A) ⊆ Lg(A).
Definition 2.15. Automaton A is blocking if L̄m(A) ⊂ Lg(A), and nonblocking if L̄m(A) = Lg(A).
Let a DES be G = (Q, Σ, δ, q0 , F ), where the set of events can be partitioned into two
disjoint subsets Σ = Σc ∪ Σuc where Σc is the set of controllable events, Σuc is the set of
uncontrollable events.
A supervisor S is a function S : Lg(G) → 2^Σ that dynamically enables or disables events of G. For each generated string s ∈ Lg(G), the set of enabled events at the current state is S(s) ∩ Γ(δ(q0, s)).
A supervisor S is admissible if for all s ∈ Lg (G), Σuc ∩ Γ(δ(q0 , s)) ⊆ S(s). That is, S
is not allowed to disable a feasible uncontrollable event. We will only consider admissible
supervisors.
Given G and S, the resulting controlled system is denoted by S/G.
Definition 2.16. The language generated by S/G is defined recursively as follows:
1. ε ∈ Lg(S/G);
2. sσ ∈ Lg(S/G) if and only if s ∈ Lg(S/G), sσ ∈ Lg(G), and σ ∈ S(s).
The language marked by S/G is

Lm(S/G) = Lg(S/G) ∩ Lm(G)

Overall, we have the inclusions ∅ ⊆ Lm(S/G) ⊆ L̄m(S/G) ⊆ Lg(S/G) = L̄g(S/G) ⊆ Lg(G).
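The recursive definition above can be turned into a small bounded enumeration of Lg(S/G). The Python sketch below is ours (hypothetical plant and supervisor policy, with strings represented as tuples of events); it also makes the admissibility condition visible, since uncontrollable events are always kept enabled.

```python
def supervised_language(delta, q0, sigma_uc, supervisor, depth):
    """Enumerate L_g(S/G) up to a bounded length: a string s.sigma belongs
    to the supervised language iff s does, sigma is feasible in the plant,
    and sigma is enabled by the (admissible) supervisor S(s)."""
    def active(q):                       # the active event set Gamma(q)
        return {e for (p, e) in delta if p == q}

    def run(q, s, out):
        out.append(s)
        if len(s) == depth:
            return
        enabled = supervisor(s) | sigma_uc   # admissibility: never
        for e in active(q) & enabled:        # disable uncontrollable events
            run(delta[(q, e)], s + (e,), out)

    words = []
    run(q0, (), words)
    return sorted(words)

# Plant: may 'start', then 'finish' or 'break' ('break' is uncontrollable).
delta = {(0, "start"): 1, (1, "finish"): 0, (1, "break"): 0}
# Supervisor policy: allow 'start' only on the empty history.
policy = lambda s: {"start"} if s == () else {"finish"}
print(supervised_language(delta, 0, {"break"}, policy, 2))
# [(), ('start',), ('start', 'break'), ('start', 'finish')]
```

Note that the supervisor cannot remove ('start', 'break') from the language: 'break' is uncontrollable, so once 'start' is enabled the string survives, which is precisely the content of the controllability condition below.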
Definition 2.17. The DES S/G is blocking if L̄m(S/G) ⊂ Lg(S/G), and nonblocking if L̄m(S/G) = Lg(S/G). We say S is blocking (resp. nonblocking) if S/G is blocking (resp. nonblocking).
The controllability theorem states the necessary and sufficient condition for the existence of supervisors under partial controllability.
Theorem 2.18 (Controllability Theorem (CT)). Given a DES G = (Q, Σ, δ, q0, F) where Σ = Σc ∪ Σuc, let K ⊆ Lg(G) and K ≠ ∅. There exists a supervisor S such that Lg(S/G) = K̄ if and only if

K̄Σuc ∩ Lg(G) ⊆ K̄
The standard realization of S is to build an automaton R that marks exactly the language K̄, i.e., Lg(R) = Lm(R) = K̄. Other realizations are extensively discussed in the literature, such as induced supervisors and reduced-state realizations.
If it is required that the supervisor S be nonblocking, i.e., L̄m(S/G) = Lg(S/G), we need to extend CT to nonblocking supervisors.
Theorem 2.19 (Nonblocking Controllability Theorem (NCT)). Given a DES G = (Q, Σ, δ, q0, F) where Σ = Σc ∪ Σuc, let K ⊆ Lm(G) and K ≠ ∅. There exists a nonblocking supervisor S such that

Lm(S/G) = K and Lg(S/G) = K̄

if and only if the two following conditions hold:
1. Controllability: K̄Σuc ∩ Lg(G) ⊆ K̄;
2. Lm(G)-closure: K = K̄ ∩ Lm(G).
2.4 From Propositional Logic to Temporal Logic
Logic systems are widely used in modeling, reasoning and specification checking about
computer systems [75]. In this section, we briefly recall several classic logics and their
notations.
Propositional logic. Propositional logic is the simplest logic system. It can only express formulas built from atomic propositions and a small set of connectives.
Its grammar can be defined in Backus-Naur Form (BNF) as
φ ::= p | (¬φ) | (φ ∧ φ) | (φ ∨ φ) | (φ → φ)
Given a propositional model M , a set of propositional formulas Γ, and a property φ, the
checking problems of whether M |= φ and whether the semantic entailment Γ |= φ holds
are both decidable. The validating algorithm can be based on the truth table.
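The truth-table method can be sketched in a few lines of Python; the encoding of formulas as Python predicates over an assignment dictionary is our own illustrative choice, not a construction from the text.

```python
from itertools import product

# A minimal truth-table check for semantic entailment in propositional
# logic: Gamma |= phi iff every assignment satisfying all formulas in
# Gamma also satisfies phi.

def entails(gamma, phi, atoms):
    """Enumerate all 2^n truth assignments over the given atoms."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(g(v) for g in gamma) and not phi(v):
            return False
    return True

# Example: {p -> q, p} |= q (modus ponens), but {p -> q} does not entail q.
p_implies_q = lambda v: (not v['p']) or v['q']
p = lambda v: v['p']
q = lambda v: v['q']
assert entails([p_implies_q, p], q, ['p', 'q'])
assert not entails([p_implies_q], q, ['p', 'q'])
```

Validity of φ is the special case Γ = ∅, and model checking M |= φ evaluates φ under the single assignment given by M.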
Predicate logic. Predicate logic extends propositional logic by introducing quantifiers
∀, ∃ and predicate expressions. Its grammar can be defined in BNF as
t ::= x | c | f (t, . . . , t)
φ ::= P (t1 , t2 , . . . , tn ) | (¬φ) | (φ ∧ φ) | (φ ∨ φ) | (φ → φ) | (∀x φ) | (∃x φ)
In the upper production, t is a term, x ranges over a set of variables, c over nullary function
symbols, and f over the function symbols with parameters. In the lower production, P is
a predicate symbol of arity n ≥ 1, and ti are terms.
Temporal logic. There are mainly three types of temporal logic.
Linear-time temporal logic (LTL) extends propositional logic by introducing temporal
connectives X, F, G, U, W, R. LTL implicitly quantifies universally over paths. Its syntax
is given in BNF as
φ ::= > | ⊥ | p | (¬φ) | (φ ∧ φ) | (φ ∨ φ) | (φ → φ)
| (Xφ) | (F φ) | (Gφ) | (φU φ) | (φW φ) | (φRφ)
Computation tree logic (CTL) is a branching-time logic that extends propositional logic
by introducing quantified temporal connectives AX, EX, AF, EF , etc., in order to quantify
explicitly over paths. Its syntax is given in BNF as
φ ::= > | ⊥ | p | (¬φ) | (φ ∧ φ) | (φ ∨ φ) | (φ → φ)
| AXφ | EXφ | AF φ | EF φ | AGφ | EGφ | A[φU φ] | E[φU φ]
CTL* is a logic which combines the expressive powers of LTL and CTL, by dropping
the CTL constraint that every temporal operator has to be associated with a unique
path quantifier. The syntax involves two classes of formulas: state formulas φ, which are
evaluated in states, and path formulas α, which are evaluated along paths:
φ ::= > | p | (¬φ) | (φ ∧ φ) | A[α] | E[α]
α ::= φ | (¬α) | (α ∧ α) | (αU α) | (Gα) | (F α) | (Xα)
2.5 Automata-Theoretic Model Checking
Pnueli was the first to use temporal logic for reasoning about programs [110] in 1977.
Temporal logic model checking algorithms were introduced by Clarke and Emerson [30]
and Queille and Sifakis [111] in the early 1980s to automate reasoning.
In this section, we briefly recall the notation and fundamentals of the standard automata-theoretic model checking technique. Exhaustive treatments of this subject can be found in the monographs [31, 5].
The first step of model checking is to construct a formal model M that captures the behavior of the system. We use Kripke structures for this purpose.
Definition 2.20. Let AP be a set of atomic propositions. A Kripke structure M over AP is a tuple (S, S0 , R, L), where S is a finite set of states, S0 ⊆ S is a set of initial states, R ⊆ S × S is a transition relation, and L : S → 2^AP is a labeling function that labels each state with the set of atomic propositions true in that state.
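Definition 2.20 translates directly into code. The following minimal sketch encodes a Kripke structure as a Python dataclass; the concrete two-state structure is an invented example, not one from the text.

```python
from dataclasses import dataclass

# A direct encoding of Definition 2.20 (the two-state toggle below is an
# illustrative assumption).

@dataclass
class Kripke:
    S: set            # finite set of states
    S0: set           # initial states, S0 subset of S
    R: set            # transition relation, R subset of S x S
    L: dict           # labeling: state -> set of atomic propositions

M = Kripke(
    S={'s0', 's1'},
    S0={'s0'},
    R={('s0', 's1'), ('s1', 's0'), ('s1', 's1')},
    L={'s0': {'ready'}, 's1': {'busy'}},
)

# Sanity checks mirroring the definition's typing constraints.
assert M.S0 <= M.S
assert all(s in M.S and t in M.S for (s, t) in M.R)
assert set(M.L) == M.S
```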
The second step is to specify the property φ to be verified in a certain temporal logic, typically LTL (Linear Temporal Logic), CTL (Computation Tree Logic), or CTL* [75]. We
use the path quantifiers A (“for all computation paths”) and E (“for some computation
path”), and the temporal operators X (“next state”), F (“in the future”), G (“globally”),
U (“until”) and R (“release”).
Given the model M and the formula φ, the model checking problem is to decide whether
S0 ⊆ {s ∈ S | M, s |= φ}. There are two popular solutions as follows.
• the symbolic approach, which operates by labeling each state s with the set label(s) of subformulas of φ that are true in s (initially label(s) = L(s)). When the algorithm terminates, we have M, s |= φ if and only if φ ∈ label(s). SMV [98] is a typical implementation.
• the automata-theoretic approach, which translates the formula φ into an automaton for checking. SPIN [73, 71] is a typical implementation.
[Figure 2.1 depicts the process as a flow: (1) the system design is modeled as M and (2) the property specification as a formula φ; (3) M is translated into an automaton AM and (4) ¬φ into an automaton A¬φ ; (5) their intersection AI = AM ∩ A¬φ is computed and (6) checked for emptiness; if it is empty the process ends, otherwise a counterexample guides (7) the revision of the design.]
Figure 2.1: The Process of Model Checking
For ease of comparison, we focus on the automata-theoretic approach. The overall process of model checking is shown in Fig. 2.1, where the steps are
numbered. This method is based on Büchi automata [16, 120, 121].
Definition 2.21. A (nondeterministic) Büchi automaton is a tuple A = (Q, Σ, δ, q0 , F ),
where Q is a finite set of states, Σ is a finite alphabet, δ ⊆ Q × Σ × Q is a set of transitions,
q0 ∈ Q is the initial state, F ⊆ Q is a set of accepting states.
A run of A on an ω-word v = v(0)v(1) . . . ∈ Σ^ω is a sequence of states ρ = ρ(0)ρ(1) . . . ∈ Q^ω such that ρ(0) = q0 and (ρ(i), v(i), ρ(i+1)) ∈ δ for all i ≥ 0. Let inf(ρ) be the set of states that appear infinitely often in the run ρ; then ρ is a successful run if and only if inf(ρ) ∩ F ≠ ∅. A accepts v if there is a successful run of A on v. The ω-language accepted by A is L(A) = {v ∈ Σ^ω | A accepts v}.
If an ω-language L = L(A) for some Büchi automaton A, then L is Büchi recognizable.
Büchi recognizable ω-languages are called regular ω-languages. The expressive power of
regular ω-languages includes that of LTL [120], although Büchi automata are syntactically
simple. Thus, we can translate LTL formulas into Büchi automata.
At the third and fourth steps, the modeled system M and the property φ are both
represented in Büchi automata, respectively.
A Kripke structure M = (S, S0 , R, L) is translated into an automaton AM = (S ∪ {q0 }, Σ, δ, {q0 }, S ∪ {q0 }), where q0 ∉ S and Σ = 2^AP . We have (s, a, s′ ) ∈ δ for s, s′ ∈ S if and only if (s, s′ ) ∈ R and a = L(s′ ), and (q0 , a, s) ∈ δ if and only if s ∈ S0 and a = L(s).
The negation ¬φ of the property, in LTL, is translated into an automaton A¬φ over the same alphabet 2^AP [60]. L(A¬φ ) contains exactly the ω-words satisfying ¬φ. Note that each edge of A¬φ is annotated with a boolean expression that represents several sets of atomic propositions, where each set corresponds to a truth assignment for AP that satisfies the boolean expression. For example, let AP = {a, b, c}; an edge labeled a ∧ b matches the transitions labeled with {a, b} and {a, b, c}. We denote this by Σ(a ∧ b) = {{a, b}, {a, b, c}}.
The mathematical foundation of the check is the following. The system AM satisfies the specification φ when L(AM ) ⊆ L(Aφ ). Therefore, one checks whether L(AM ) ∩ L(A¬φ ) = ∅, since L(A¬φ ) = Σ^ω − L(Aφ ). If the intersection is not empty, any behavior in it corresponds to a counterexample.
At the fifth step, we compute the automaton AI accepting L(AM ) ∩ L(A¬φ ) (denoted
by AI = AM ∩ A¬φ ), by using the following theorem about the intersection of two Büchi
automata.
Theorem 2.22. Let A1 = (Q1 , Σ, δ1 , q1 , F1 ) and A2 = (Q2 , Σ, δ2 , q2 , F2 ). We can construct an automaton accepting L(A1 ) ∩ L(A2 ) as follows:
A1 ∩ A2 = (Q1 × Q2 × {0, 1, 2}, Σ, δ, (q1 , q2 , 0), Q1 × Q2 × {2})
where ((qi , qj , x), a, (qm , qn , y)) ∈ δ if and only if (qi , a, qm ) ∈ δ1 , (qj , a, qn ) ∈ δ2 , and x, y satisfy the following conditions:
y = 0, if x = 2;
y = 1, if x = 0 and qm ∈ F1 ;
y = 2, if x = 1 and qn ∈ F2 ;
y = x, otherwise.
Since all states of AM are accepting, the computation of the intersection can be simplified as follows.
Theorem 2.23. Let A1 = (Q1 , Σ, δ1 , q1 , Q1 ) and A2 = (Q2 , Σ, δ2 , q2 , F2 ). We can construct an automaton accepting L(A1 ) ∩ L(A2 ) as follows:
A1 ∩ A2 = (Q1 × Q2 , Σ, δ, (q1 , q2 ), Q1 × F2 )
where ((qi , qj ), a, (qm , qn )) ∈ δ if and only if (qi , a, qm ) ∈ δ1 and (qj , a, qn ) ∈ δ2 .
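The simplified product of Theorem 2.23 can be sketched directly. The representation below (automata as tuples of states, transition triples, initial state, and accepting states) and the two toy automata are assumptions made for illustration; the construction itself is the one in the theorem, valid only when all states of the first automaton are accepting.

```python
# Sketch of the simplified Büchi product of Theorem 2.23, which applies
# when F1 = Q1 (as for the automaton A_M built from a Kripke structure).

def product(a1, a2):
    q1_states, d1, q1, _ = a1          # F1 = Q1, so accepting set is ignored
    q2_states, d2, q2, f2 = a2
    states = {(p, q) for p in q1_states for q in q2_states}
    # ((p, q), a, (p2, q2n)) is a transition iff both components move on a.
    delta = {((p, q), a, (p2, q2n))
             for (p, a, p2) in d1
             for (q, b, q2n) in d2
             if a == b}
    return states, delta, (q1, q2), {(p, q) for p in q1_states for q in f2}

# a1 accepts (ab)^omega with all states accepting;
# a2 accepts the words containing infinitely many a's.
a1 = ({0, 1}, {(0, 'a', 1), (1, 'b', 0)}, 0, {0, 1})
a2 = ({'x', 'y'}, {('x', 'a', 'y'), ('x', 'b', 'x'),
                   ('y', 'a', 'y'), ('y', 'b', 'x')}, 'x', {'y'})
states, delta, init, acc = product(a1, a2)
assert ((0, 'x'), 'a', (1, 'y')) in delta
assert (1, 'y') in acc                 # accepting set is Q1 x F2
```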
Finally, at the sixth step, the last task is to check the emptiness of the intersection AI . A memory-efficient algorithm, namely double DFS (Depth First Search) [37], was developed by extending Tarjan's DFS [118]. If L(AI ) = ∅, then the system M satisfies the property φ. Otherwise, a counterexample v ∈ L(AI ) is reported, which guides the revision of the original design (step 7).
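The double-DFS idea can be sketched as follows. This is a simplified rendition for an explicit successor map, kept here only to convey the structure of the algorithm in [37], not its memory-optimized original; the graph encoding is an assumption.

```python
# A compact nested ("double") DFS emptiness check: the outer DFS explores
# the product automaton; when it backtracks from an accepting state, an
# inner DFS searches for a cycle through that state. Nonemptiness means
# an accepting state is reachable and lies on a cycle.

def buchi_nonempty(succ, init, accepting):
    visited_outer, visited_inner = set(), set()

    def inner(seed, q):
        for r in succ.get(q, ()):
            if r == seed:
                return True            # found a cycle through the seed
            if r not in visited_inner:
                visited_inner.add(r)
                if inner(seed, r):
                    return True
        return False

    def outer(q):
        visited_outer.add(q)
        for r in succ.get(q, ()):
            if r not in visited_outer and outer(r):
                return True
        # postorder: launch the inner search from accepting states
        return q in accepting and inner(q, q)

    return outer(init)

succ = {0: [1], 1: [2], 2: [1]}        # cycle 1 -> 2 -> 1
assert buchi_nonempty(succ, 0, accepting={2})
assert not buchi_nonempty(succ, 0, accepting={0})
```

Sharing the inner visited set across all inner searches is what keeps the algorithm linear in the size of the graph.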
On the one hand, revisions of the design may introduce new faults. On the other hand, model checkers usually produce only one counterexample at a time, indicating a single fault. Thus, the iterative process of model checking, counterexample analysis and revision is repeated until L(AI ) = ∅.
Chapter 3
Control Systems in Formal Language Theory
In this chapter, we propose the theory of Control Systems (C Systems) in formal language theory. A control system is a generic framework containing two components: the controlled component and the controlling component. The two components are expressed using the same formalism, e.g., automata or grammars, so that practitioners can use the same techniques to implement both components in practice. In contrast, traditional grammars with regulated rewriting do not satisfy this principle: their controlled components are grammars, but their controlling components are other structures such as regular sets and matrices.
The control system adopts one restriction and three extensions of regulated rewriting
[39]. The restriction is that the appearance checking mode is disabled, since it is not easy
to implement this mechanism in practice. The extensions are as follows.
First, the controlling components are expressed using the same formalism as the controlled components, e.g., automata or grammars, rather than regular sets or matrices.
Second, context-free control sets are enabled to increase expressive power. The regular
set has limited expressive power in specifying constraints, thus context-free control sets
may be needed in some applications. Therefore, we need to use context-free languages as
controlling sets, and study their generative power.
Third, the controlled derivations are restricted to be leftmost, in order to make the
global system LL-parsable. Regularly controlled grammars are not LL-parsable, since they introduce conflicts in parsing that make them incompatible with LL parsers (see the example in Section 3.4). Therefore, we need to define LL-parsable grammars with controlled
derivations, whose controlling components should be based on leftmost derivations of the
controlled grammars.
We will define three types of control systems. The Grammar Control System (GC System) and Leftmost-derivation-based Grammar Control System (LGC System) implement
the two extensions, respectively. The third type, namely the Automaton Control System
(AC System), is a novel formalism based on the automaton representation. Emerging
applications call for such a theory, since automata have become a popular tool for system
modeling [75].
3.1 Grammar Control Systems (GC Systems)
3.1.1 Definitions
A grammar control system consists of a controlled grammar and a controlling grammar
that restricts the derivation of the controlled grammar. The set of terminals of the controlling grammar equals the set of production names of the controlled grammar.
Definition 3.1. Given a controlled grammar (or simply grammar) G1 = (N1 , T1 , P1 , S1 ),
a controlling grammar over G1 is a quadruple G2 = (N2 , T2 , P2 , S2 ) with T2 = P1 . L(G1 )
and L(G2 ) are called controlled language and controlling language, respectively.
Without loss of generality, we assume that N1 ∩ N2 = ∅. Note that several productions
may have the same name. Generally, we denote the controlled grammar and the controlling
grammar by G1 and G2 respectively.
Definition 3.2. A Grammar Control System (GC System) includes a grammar G1 =
(N1 , T1 , P1 , S1 ) and a controlling grammar G2 . The global language of G1 controlled by
G2 is:
L(G1 , G2 ) = {w | S1 ⇒^{p1} α1 · · · ⇒^{pk} αk = w, p1 , ..., pk ∈ P1 and p1 p2 ...pk ∈ L(G2 )}
Obviously, the set of accepted input strings is a subset of the controlled language,
such that the sequences of applied productions belong to the controlling language, i.e.,
L(G1 , G2 ) ⊆ L(G1 ).
Example 3.3. Given G1 , G2 with the following productions:
G1 :
p1 : S1 → AB
p2 : A → aAb
p3 : B → Bc
p4 : A → ab
p5 : B → c
G2 :
S2 → p1 C
C → p2 p3 C | p4 p5
The context-free language L(G1 ) = a^n b^n c^+ and the regular language L(G2 ) = p1 (p2 p3 )^∗ p4 p5 constitute a non-context-free global language L(G1 , G2 ) = a^n b^n c^n , n ≥ 1.
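Example 3.3 can be replayed mechanically: applying the named productions of G1 in an order accepted by L(G2 ) = p1 (p2 p3 )^∗ p4 p5 yields a^n b^n c^n . The Python sketch below applies each production at the leftmost occurrence of its left-hand side, which is harmless here since every nonterminal occurs at most once in any sentential form of G1 ; the single-character renaming of S1 to S is our own simplification.

```python
# Replaying the GC system of Example 3.3 under the control word
# p1 (p2 p3)^k p4 p5. Productions are G1's, with S1 written as S so that
# nonterminals are single characters.

productions = {            # name -> (lhs, rhs)
    'p1': ('S', 'AB'),
    'p2': ('A', 'aAb'),
    'p3': ('B', 'Bc'),
    'p4': ('A', 'ab'),
    'p5': ('B', 'c'),
}

def derive(control_word):
    form = 'S'
    for name in control_word:
        lhs, rhs = productions[name]
        assert lhs in form, f"{name} not applicable to {form}"
        form = form.replace(lhs, rhs, 1)   # rewrite leftmost occurrence
    return form

k = 2   # control word p1 (p2 p3)^k p4 p5 gives n = k + 1
word = derive(['p1'] + ['p2', 'p3'] * k + ['p4', 'p5'])
assert word == 'aaabbbccc'    # a^3 b^3 c^3
```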
Two trivial types of controlling grammars are empty controlling grammars and full controlling grammars. The former accept the empty controlling language, which rejects all sequences of productions applied in derivations, i.e., L(G2 ) = ∅. The latter accept full controlling languages, which accept all sequences of productions applied in derivations, i.e., L(G2 ) = P1^∗ , where P1 is the set of production names of the controlled grammar G1 . Note that both types of languages are regular.
3.1.2 Generative Power
We denote by L(X, Y ) the family of languages accepted by GC systems whose controlled
grammars are of type X and controlling grammars are of type Y .
If the controlling grammars are regular grammars, we observe that L(X, REG) is
equivalent to L(rC, X) accepted by regularly controlled grammars. Therefore, we have
the following two theorems by Thm. 2.13.
Theorem 3.4. L(REG, REG) = L(REG).
Theorem 3.5. L(CF G) ⊂ L(CF G, REG) ⊂ L(P SG), and L(CF G, REG) is incomparable with L(CSG).
We now examine the cases of context-free controlling grammars. We need the following
result by [97], which is proved in [97, 6].
Lemma 3.6. Let G = (N, T, P, S) be a grammar such that the rules in P have the form
α → β, α ∈ N + . Then the language accepted by G in a leftmost manner is context-free,
i.e., Ll (G) ∈ L(CF G).
Theorem 3.7. L(REG, CF G) = L(CF G).
Proof. (i) L(REG, CF G) ⊇ L(CF G). Given a grammar G = (N, T, P, S) ∈ CF G, we construct G1 = ({S1 }, T, P1 , S1 ) ∈ REG where P1 = {pa : S1 → aS1 | a ∈ T } ∪ {pε : S1 → ε}, and G2 = (N ∪ {S2 }, T2 , P2 , S2 ) ∈ CF G where
1. h is a homomorphism from T to T2 : h(a) = pa ,
2. T2 = h(T ) ∪ {pε },
3. P2 = {S2 → Spε } ∪ h(P ), where h(P ) is the set of productions obtained by replacing each terminal a ∈ T of productions in P by pa ∈ T2 .
It is easy to see that G accepts w if and only if G1 accepts w by using the productions h(w) · pε , and h(w) · pε ∈ L(G2 ). Therefore, L(G) = L(G1 , G2 ) ∈ L(REG, CF G).
(ii) L(REG, CF G) ⊆ L(CF G). Given an REG G1 = (N1 , T1 , P1 , S1 ) and a CFG
G2 = (N2 , T2 , P2 , S2 ) with T2 = P1 , P1 contains all productions of the forms: A → wB,
A → w, w ∈ T1∗ . We construct a grammar
G = (N1 ∪ N2 ∪ T2 ∪ {S}, T1 , P, S)
where S ∉ N1 ∪ N2 ∪ T2 , and P consists of the following productions:
1. S → S1 S2 ,
2. AC → Aα, for A ∈ N1 , C ∈ N2 , C → α ∈ P2 ,
3. Ap → wB, for p ∈ T2 , A, B ∈ N1 , p : A → wB ∈ P1 ,
4. Ap → w, for p ∈ T2 , A ∈ N1 , p : A → w ∈ P1 .
It is easy to see that the nonterminals p ∈ T2 control the derivation in such a way that
the acceptable derivations in G1 are simulated by leftmost derivations in G. Therefore,
L(G1 , G2 ) = Ll (G) ∈ L(CF G), as G fulfils the conditions in Lemma 3.6.
Theorem 3.8. L(CF G) ⊂ L(CF G, CF G) ⊆ L(P SG).
Proof. (i) L(CF G) ⊂ L(CF G, CF G) is obvious, by Thm. 3.5 and L(CF G, REG) ⊆
L(CF G, CF G).
(ii) L(CF G, CF G) ⊆ L(P SG). Given CFG’s G1 = (N1 , T1 , P1 , S1 ) and G2 = (N2 , T2 ,
P2 , S2 ) with T2 = P1 , we construct a grammar:
G = (N1 ∪ N2 ∪ T2 ∪ {S}, T1 , P2 ∪ P, S)
where S ∉ N1 ∪ N2 ∪ T2 , and P consists of the following productions:
1. S → S1 S2 ,
2. Xp → pX, for X ∈ N1 ∪ T1 and p ∈ T2 ,
3. Ap → α, for p : A → α ∈ P1 .
First, we derive S ⇒ S1 S2 (rule 1). Then S2 derives an acceptable sequence of productions by using P2 . The name of each derived production p can move leftward (rule 2) and rewrite a nonterminal A ∈ N1 (rule 3). It is easy to see that L(G1 , G2 ) = L(G) ∈ L(P SG).
We remark that it remains an open problem whether L(P SG) ⊆ L(CF G, CF G). The difficulty lies in how to simulate context-sensitive productions, e.g., productions of the form AB → CD, by using only context-free productions.
3.2 Leftmost-Derivation-Based Grammar Control Systems (LGC Systems)
3.2.1 Definitions
An important variant of the grammar control system is one whose control is based on leftmost derivations. Suppose there is a leftmost derivation (denoted by lm) of the grammar G for w: S ⇒_lm^{p1} α1 ⇒_lm^{p2} · · · ⇒_lm^{pk} αk = w. As an abbreviation, we write S ⇒_lm^{p1 ...pk} w. If p1 = p2 = · · · = pk = p, we write S ⇒_lm^{p^k} w. We will omit “lm” if there is no confusion.
Definition 3.9. A Leftmost-derivation-based Grammar Control System (LGC System)
includes a grammar G1 = (N1 , T1 , P1 , S1 ) and a controlling grammar G2 . The global
language of G1 controlled by G2 in a leftmost manner is:
L(G1 , G2 , lm) = {w | S1 ⇒_lm^{p1} α1 · · · ⇒_lm^{pk} αk = w, p1 , ..., pk ∈ P1 and p1 p2 ...pk ∈ L(G2 )}
We may also denote L(G1 , G2 , lm) by L(G1 ~· G2 ).
Example 3.10. Given G1 , G2 with the following productions:
G1 :
p1 : S1 → AB
p2 : A → aAb
p3 : B → Bc
p4 : A → ab
p5 : B → c
G2 :
S2 → p1 Cp5
C → p2 Cp3 | p4
The context-free languages L(G1 ) = a^n b^n c^+ and L(G2 ) = p1 p2^k p4 p3^k p5 constitute a non-context-free global language L(G1 , G2 , lm) = a^n b^n c^n , k ≥ 0, n ≥ 1. Note that G2 is different from that of Example 3.3.
3.2.2 Generative Power
We denote by L(X, Y, lm) the family of languages accepted by LGC systems whose controlled grammars are of type X and controlling grammars are of type Y .
If the controlled grammars are regular grammars, we observe that L(REG, X, lm)
is equivalent to L(REG, X), since every derivation of regular grammars is a leftmost
derivation. Therefore, we have the following two theorems by Theorems 3.4 and 3.7.
Theorem 3.11. L(REG, REG, lm) = L(REG).
Theorem 3.12. L(REG, CF G, lm) = L(CF G).
We now examine the cases of context-free controlled grammars.
Theorem 3.13. L(CF G, REG, lm) = L(CF G).
Proof. (i) L(CF G, REG, lm) ⊇ L(CF G) is obvious, since we can use a regular controlling
grammar accepting a full controlling language.
(ii) L(CF G, REG, lm) ⊆ L(CF G). Given a CFG G1 = (N1 , T1 , P1 , S1 ) and an REG G2 = (N2 , T2 , P2 , S2 ) with T2 = P1 , where P2 contains all productions of the forms: B → pC, B → ε, p ∈ P1 . We construct a grammar
G = (N1 ∪ T1′ ∪ N2 ∪ {S, $}, T1 , P, S)
where T1′ = {a′ | a ∈ T1 }, and P consists of the following productions:
1. S → S2 S1 $,
2. BA → Cα′ , for B → pC ∈ P2 and p : A → α ∈ P1 , where α′ is the string obtained by replacing each terminal a ∈ T1 of α by a′ ,
3. Ba′ → aB, for a ∈ T1 , B ∈ N2 ,
4. B$ → ε, for B → ε ∈ P2 .
It is easy to see that the nonterminals B ∈ N2 control the derivation in such a way
that the acceptable leftmost derivations in G1 are simulated by leftmost derivations in G.
Therefore, L(G1 , G2 , lm) = Ll (G) ∈ L(CF G), as G fulfils the conditions in Lemma 3.6.
To prove the generative power of L(CF G, CF G, lm), we need the following lemma,
which is a result of [63].
Lemma 3.14. For each recursively enumerable set L ∈ L(P SG), there exist deterministic
context-free languages L1 and L2 , and a homomorphism h : Σ∗1 → Σ∗2 such that L =
h(L1 ∩ L2 ). Without loss of generality, we assume that Σ1 ∩ Σ2 = ∅.
Theorem 3.15. L(CF G, CF G, lm) = L(P SG).
Proof. (i) L(CF G, CF G, lm) ⊆ L(P SG). Given CFG’s G1 = (N1 , T1 , P1 , S1 ) and G2 =
(N2 , T2 , P2 , S2 ) with T2 = P1 , we construct a grammar
G = (N1 ∪ N2 ∪ T2 ∪ T2′ ∪ {S, §, $}, T1 , P2 ∪ P, S)
where T2′ = {p′ | p ∈ T2 }, and P consists of the following productions:
1. S → §S1 S2 $,
2. Xp → pX, for X ∈ N1 ∪ T1 and p ∈ T2 , (move p leftward skipping X)
3. §p → §p′ , for p ∈ T2 , (p becomes p′ at the leftmost position)
4. p′ a → ap′ , for a ∈ T1 and p ∈ T2 , (p′ skips terminals)
5. p′ A → α, for p : A → α ∈ P1 , (p′ rewrites A using the production p)
6. a$ → $a, for a ∈ T1 , (move $ leftward, skipping terminals)
7. §$ → ε. (eliminate § and $ at the leftmost position)
It is easy to see that the nonterminals p′ ∈ T2′ control the derivation in such a way that the acceptable leftmost derivations in G1 are simulated by derivations in G. Therefore, L(G1 , G2 , lm) = L(G) ∈ L(P SG).
(ii) L(CF G, CF G, lm) ⊇ L(P SG). Given L ∈ L(P SG), there exist context-free languages L1 , L2 , and h such that L = h(L1 ∩ L2 ) by Lemma 3.14. We only need to translate
h(L1 ∩L2 ) into an LGC system L(G1 , G2 , lm), where L1 , L2 ⊆ T ∗ , h : T ∗ → T1∗ , T ∩T1 = ∅.
There exists a context-free grammar G = (N, T, P, S) such that L(G) = L1 ∈ L(CF G).
We construct G1 = (N ∪ T, T1 , P ∪ P1 , S), where P1 = {pa : a → h(a) | a ∈ T }, h(a) ∈ T1∗ ,
T ∩ T1 = ∅. Note that P only generates nonterminals, while P1 only generates terminals.
Clearly, L(G1 ) = h(L1 ).
Then, we construct G2 . Let h′ be a homomorphism such that h′ (p) = ε for p ∈ P , and h′ (pa ) = a for a ∈ T . Let L′ = h′−1 (L2 ) ∈ L(CF G); there exists a context-free grammar G2 such that L(G2 ) = L′ . Note that h′−1 (L2 ) makes G2 restrict the use of P1 , which generates terminals, while no restriction is imposed on the use of P .
Therefore, G2 ensures that the acceptable sequences of productions of G1 derive h(L2 ).
This means, G1 can only derive strings in both h(L1 ) and h(L2 ). Thus, L(G1 , G2 , lm) =
h(L1 ∩ L2 ). That is, L = h(L1 ∩ L2 ) = L(G1 , G2 , lm) ∈ L(CF G, CF G, lm).
Theorem 3.16. Let X be a family of grammars with L(CF G) ⊆ L(X). Then L(CF G, X, lm) = L(P SG).
Proof. (i) L(CF G, X, lm) ⊆ L(P SG). The proof is the same as the proof (i) of Thm. 3.15
by noting G2 is of type X.
(ii) L(CF G, X, lm) ⊇ L(P SG). This part is obvious,
since L(CF G, X, lm) ⊇ L(CF G, CF G, lm) ⊇ L(P SG).
Immediately, we have the following corollary.
Corollary 3.17. L(CF G, P SG, lm) = L(P SG).
For context-sensitive controlling grammars, we must modify the proof slightly.
Theorem 3.18. L(CF G, CSG, lm) = L(P SG).
Proof. (i) L(CF G, CSG, lm) ⊆ L(P SG). The proof is the same as the proof (i) of Thm.
3.15 by noting G2 is a context-sensitive grammar.
(ii) L(CF G, CSG, lm) ⊇ L(P SG). Given L ∈ L(P SG), there exist CFG’s G1 , G2
such that L(G1 , G2 , lm) = L by the proof (ii) of Thm. 3.15.
(1) If ε ∉ L(G2 ), we can construct a context-sensitive grammar G3 such that L(G3 ) = L(G2 ) by eliminating ε-productions.
(2) If ε ∈ L(G2 ), we can construct a context-sensitive grammar G3 such that L(G3 ) = L(G2 ) − {ε} by eliminating ε-productions.
Thus, we have L = L(G1 , G2 , lm) = L(G1 , G3 , lm) ∈ L(CF G, CSG, lm).
3.3 Automaton Control Systems (AC Systems)
3.3.1 Definitions
An automaton control system consists of a controlled automaton and a controlling automaton that restricts the behavior of the controlled automaton. The input alphabet of
the controlling automaton equals the set of transition names of the controlled automaton.
Definition 3.19. Given a controlled automaton (or simply automaton) A1 with a set of
transitions δ1 = {pi }i∈I where pi is a name of transition, a controlling automaton A2 over
A1 has the input alphabet Σ2 = δ1 . L(A1 ) and L(A2 ) are called controlled language and
controlling language, respectively.
Note that the concept of transition name is introduced. We do not make any specific
assumptions about the relation between the name and symbol of a transition. That is,
several transitions may have the same name but different symbols, or they may have
distinct names but the same symbol. For example, assume an FSA with the transitions δ(q1 , a) = {q2 , q3 } and δ(q3 , b) = {q2 }; we may denote pi : (q1 , a, q2 ) ∈ δ, pj : (q1 , a, q3 ) ∈ δ and pk : (q3 , b, q2 ) ∈ δ with the names pi , pj , pk . All the following cases are possible: pi = pj or pi ≠ pj , pi = pk or pi ≠ pk , and so forth.
Definition 3.20. An Automaton Control System (AC System) includes an automaton A1
and a controlling automaton A2 , denoted by A1 ~· A2 . The global language of A1 controlled
by A2 is:
L(A1 ~· A2 ) = {w|A1 accepts w by using a sequence of transitions p1 p2 ...pk ∈ L(A2 )}
The symbol ~· is called “meta-composition”, denoting that the left operand is controlled by the right operand.
Example 3.21. Given a PDA D1 = ({q, f }, {a, b, c}, {S, Z, Z ′ , A, B, a, b, c}, δ, q, Z, {f }), where δ includes the following transitions:
ps : δ(q, ε, Z) = (q, SZ ′ )
p1 : δ(q, ε, S) = (q, AB)
p2 : δ(q, ε, A) = (q, aAb)
p3 : δ(q, ε, B) = (q, Bc)
p4 : δ(q, ε, A) = (q, ab)
p5 : δ(q, ε, B) = (q, c)
pa : δ(q, a, a) = (q, ε)
pb : δ(q, b, b) = (q, ε)
pc : δ(q, c, c) = (q, ε)
pf : δ(q, ε, Z ′ ) = (f, ε)
Given a controlling PDA D2 such that L(D2 ) = ps p1 (p2 pa )^{n−1} p4 pa p_b^+ p3^{n−1} p5 p_c^+ pf , it is easy to see that L(D1 ~· D2 ) = a^n b^n c^n , n ≥ 1.
If we let P = {pa , pb , pc } and L(D3 ) = ps p1 P ^∗ (p2 P ^∗ )^{n−1} p4 P ^∗ (p3 P ^∗ )^{n−1} p5 P ^∗ pf , we also have L(D1 ~· D3 ) = a^n b^n c^n , n ≥ 1. Note that L(D3 ) specifies a weaker constraint, i.e., L(D2 ) ⊂ L(D3 ). However, the two AC systems accept the same global language. We remark that the controlling language can express a “weaker” or “stronger” restriction to obtain the same effect, as long as the “valid” constraint on the controlled automaton is not changed.
Two trivial types of controlling automata are empty controlling automata and full
controlling automata. The former ones accept the empty controlling language which rejects
all the sequences of applied transitions, i.e., L(A2 ) = ∅. The latter ones accept full
controlling languages that accept all the sequences of applied transitions, i.e., L(A2 ) = δ1∗ ,
where δ1 is the set of transition names of the controlled automaton A1 . Note that the two
types of languages are both regular languages.
3.3.2 Generative Power
We denote by L(X ~· Y ) the family of languages accepted by AC systems whose controlled
automata are of type X and controlling automata are of type Y .
Theorem 3.22. L(F SA ~· F SA) = L(F SA).
Proof. (i) L(F SA ~· F SA) ⊇ L(F SA) is obvious, since we can use a controlling finite state
automaton accepting a full controlling language.
(ii) L(F SA ~· F SA) ⊆ L(F SA). Given FSA’s A1 = (Q1 , Σ1 , δ1 , q1 , F1 ) and A2 =
(Q2 , Σ2 , δ2 , q2 , F2 ) with Σ2 = δ1 , we construct an FSA A = (Q, Σ1 , δ, q0 , F ) where:
1. Q = Q1 × Q2 ,
2. δ((qi , qj ), a) contains (ri , rj ), for p : δ1 (qi , a) = ri , a ∈ Σ1 ∪ {ε} and δ2 (qj , p) contains rj ,
δ((qi , qj ), ε) contains (qi , rj ), for δ2 (qj , ε) contains rj ,
3. q0 = (q1 , q2 ),
4. F = F1 × F2 .
A simulates the actions of A1 , and changes the state of A2 . Given an input string
w, A1 enters a final state ri ∈ F1 , and A2 enters a final state rj ∈ F2 by reading the
sequence of transitions, if and only if A enters the final state (ri , rj ) ∈ F1 × F2 . Thus,
L(A1 ~· A2 ) = L(A) ∈ L(F SA).
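The behavior captured by Theorem 3.22 can be exercised with a short simulation. The sketch below restricts attention to deterministic, ε-free automata (a simplification of the general statement), and the encoding — the controlled FSA mapping (state, symbol) to a (transition name, next state) pair, the controlling FSA reading those names — is an assumption made for illustration.

```python
# Running an AC system A1 ~· A2 on a word: A1 reads input symbols while
# A2 reads the names of the transitions that A1 applies.

def ac_accepts(a1, a2, w):
    (d1, q1, f1), (d2, q2, f2) = a1, a2
    q, r = q1, q2
    for a in w:
        if (q, a) not in d1:
            return False               # A1 is stuck
        name, q_next = d1[(q, a)]      # named transition of A1
        if (r, name) not in d2:
            return False               # A2 disallows this transition
        q, r = q_next, d2[(r, name)]
    return q in f1 and r in f2         # both must accept

# A1 accepts (a|b)*; A2 only allows runs alternating the names pa, pb.
a1 = ({(0, 'a'): ('pa', 0), (0, 'b'): ('pb', 0)}, 0, {0})
a2 = ({('x', 'pa'): 'y', ('y', 'pb'): 'x'}, 'x', {'x'})
assert ac_accepts(a1, a2, "abab")      # names pa pb pa pb are allowed
assert not ac_accepts(a1, a2, "aa")    # pa pa is rejected by A2
```

The global language here is (ab)^∗ , a proper subset of the controlled language (a|b)^∗ , as Definition 3.20 predicts.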
Theorem 3.23. L(F SA ~· P DA) = L(P DA).
Proof. (i) L(F SA ~· P DA) ⊇ L(P DA). Given a PDA D = (Q, Σ, Γ, δ, q0 , Z0 , F ), we
construct an FSA A and a controlling PDA D2 as follows:
A = ({qA }, Σ, δA , qA , {qA })
where δA contains pa : δA (qA , a) = qA , for each a ∈ Σ.
D2 = (Q, h(Σ), Γ, δ2 , q0 , Z0 , F )
where,
1. h is a homomorphism: h(a) = pa , h(ε) = ε, for a ∈ Σ,
2. for a set Σ, h(Σ) = {h(a) | a ∈ Σ},
3. δ2 (q, h(a), B) contains (r, β), for δ(q, a, B) contains (r, β), where a ∈ Σ ∪ {ε}, B ∈ Γ, β ∈ Γ∗ .
Clearly, L(D2 ) = h(L(D)). Since each transition pa ∈ δA generates exactly a symbol a,
we have L(A ~· D2 ) = h−1 (L(D2 )) = h−1 (h(L(D))) = L(D).
(ii) L(F SA ~· P DA) ⊆ L(P DA). Given an FSA A1 = (Q1 , Σ1 , δ1 , q1 , F1 ) and a controlling PDA D2 = (Q2 , Σ2 , Γ, δ2 , q2 , Z0 , F2 ), we construct a PDA:
3.3. Automaton Control Systems (AC Systems)
31
D = (Q, Σ1 , Γ, δ, q0 , Z0 , F )
where:
1. Q = Q1 × Q2 ,
2. δ((qi , qj ), a, B) contains ((ri , rj ), β), for p : δ1 (qi , a) = ri and δ2 (qj , p, B) contains (rj , β), where a ∈ Σ1 ∪ {ε}, B ∈ Γ, β ∈ Γ∗ ,
δ((qi , qj ), ε, B) contains ((qi , rj ), β), for qi ∈ Q1 and δ2 (qj , ε, B) contains (rj , β), where B ∈ Γ, β ∈ Γ∗ ,
3. q0 = (q1 , q2 ),
4. F = F1 × F2 .
D simulates the actions of D2 and changes the state of A1 . Given an input string w,
A1 enters a final state ri ∈ F1 , and D2 enters a final state rj ∈ F2 by reading the
sequence of transitions, if and only if D enters the final state (ri , rj ) ∈ F1 × F2 . Thus,
L(A1 ~· D2 ) = L(D) ∈ L(P DA).
Theorem 3.24. L(P DA ~· F SA) = L(P DA).
Proof. (i) L(P DA ~· F SA) ⊇ L(P DA) is obvious, since we can use a controlling finite
state automaton accepting a full controlling language.
(ii) L(P DA ~· F SA) ⊆ L(P DA). Given a PDA D1 = (Q1 , Σ1 , Γ, δ1 , q1 , Z0 , F1 ) and a
controlling FSA A2 = (Q2 , Σ2 , δ2 , q2 , F2 ), we construct a PDA:
D = (Q, Σ1 , Γ, δ, q0 , Z0 , F )
where:
1. Q = Q1 × Q2 ,
2. δ((qi , qj ), a, B) contains ((ri , rj ), β), for p : δ1 (qi , a, B) = (ri , β) and δ2 (qj , p) contains rj , where a ∈ Σ1 ∪ {ε}, B ∈ Γ, β ∈ Γ∗ ,
δ((qi , qj ), ε, B) contains ((qi , rj ), B), for qi ∈ Q1 , B ∈ Γ, and δ2 (qj , ε) contains rj ,
3. q0 = (q1 , q2 ),
4. F = F1 × F2 .
D simulates the actions of D1 and changes the state of A2 . Given an input string w,
D1 enters a final state ri ∈ F1 , and A2 enters a final state rj ∈ F2 by reading the
sequence of transitions, if and only if D enters the final state (ri , rj ) ∈ F1 × F2 . Thus,
L(D1 ~· A2 ) = L(D) ∈ L(P DA).
Theorem 3.25. L(P DA ~· P DA) = L(T M ).
Proof. (i) L(P DA ~· P DA) ⊆ L(T M ). Given two PDA’s D1 , D2 , we construct a Turing
Machine M with four tapes. The first tape holds the input string w. The second tape
simulates the stack of D1 . Once D1 applies a transition, the name of the transition is
sequentially recorded on the third tape. When D1 accepts w, M starts to simulate D2
using the third and fourth tapes (the third tape holds the sequence of transitions wp as
the input string to D2 , and the fourth tape simulates the stack of D2 ). M accepts if and
only if D2 accepts. Obviously, L(D1 ~· D2 ) = L(M ).
(ii) L(P DA ~· P DA) ⊇ L(T M ). Given L ∈ L(T M ) = L(P SG), we can translate L into an LGC system L(G1 , G2 , lm) by Thm. 3.15. Then we can translate L(G1 , G2 , lm) into L(A1 ~· A2 ) by using the automaton representation (cf. Corollary 3.32 in the sequel).
3.3.3 Equivalence and Translation between AC and LGC Systems
In this section, we study the equivalence and translation between automaton control systems and grammar control systems.
The equivalence between automata and grammars is well known. We can establish equivalence and translation between controlled (resp. controlling) automata and controlled (resp. controlling) grammars, i.e., L(A1 ) = L(G1 ) (resp. L(A2 ) = L(G2 )). Thus, one might think that an automaton control system were equivalent to a grammar control system. However, we will show that an automaton control system is equivalent to a leftmost-derivation-based grammar control system.
At first, we prove a lemma which shows that we can make each name in the controlled
grammar or automaton unique without changing the global language.
Lemma 3.26. (i) Given two grammars G1 of type X and G2 of type Y with T2 = P1 , if L(Y ) is closed under substitution by ε-free regular sets, then there exist two grammars G1′ of type X and G2′ of type Y with T2′ = P1′ , where P1′ only renames some productions of P1 , such that each production name in P1′ is unique, L(G1 , G2 ) = L(G1′ , G2′ ) and L(G1 , G2 , lm) = L(G1′ , G2′ , lm).
(ii) Given two automata A1 of type X and A2 of type Y with Σ2 = δ1 , if L(Y ) is closed under substitution by ε-free regular sets, then there exist two automata A1′ of type X and A2′ of type Y with Σ2′ = δ1′ , where δ1′ only renames some transitions of δ1 , such that each transition name in δ1′ is unique and L(A1 ~· A2 ) = L(A1′ ~· A2′ ).
Proof. The proofs of the two propositions are similar, so we only prove the first one. Suppose there are k productions carrying the name pi in P1 ; we rename them pi1 , ..., pik in P1′ . Thus, L(G1 ) = L(G′1 ).
We define the substitution s(pi ) = {pij }1≤j≤k for each pi . There exists a grammar G′2 ∈ Y such that L(G′2 ) = s(L(G2 )) ∈ L(Y ), since L(Y ) is closed under substitution by ε-free regular sets. It is easy to see that a sequence of productions of G1 is accepted by L(G2 ) if and only if the corresponding renamed sequence of productions of G′1 is accepted by s(L(G2 )). Therefore, the proposition holds.
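The renaming step of this proof can be sketched in a few lines of Python (a toy encoding of our own, not from the thesis; for simplicity every occurrence is suffixed, which trivially keeps names unique):

```python
from collections import defaultdict

def rename(productions):
    """productions: list of (name, lhs, rhs) triples, names possibly duplicated."""
    count, renamed, subst = defaultdict(int), [], defaultdict(set)
    for name, lhs, rhs in productions:
        count[name] += 1
        new = f"{name}_{count[name]}"   # p_i becomes p_i1, ..., p_ik
        renamed.append((new, lhs, rhs))
        subst[name].add(new)            # the substitution s(p_i) = {p_ij}
    return renamed, dict(subst)

renamed, s = rename([("p1", "S", "aS"), ("p1", "S", "b"), ("p2", "A", "a")])
```

Applying s to a control word over the old names then yields exactly the renamed control language used in the proof.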
The above lemma makes it reasonable to make the assumption that each name in a
controlled grammar or automaton is unique, when necessary.
Theorem 3.27. Let X be a family of automata and Y a family of grammars. If L = L(X) = L(Y ) is closed under (1) concatenation with finite sets, (2) substitution by ε-free regular sets, (3) ε-free homomorphisms, and (4) right derivatives, then L(F SA ~· X) = L(REG, Y, lm).
Proof. (i) L(F SA ~· X) ⊆ L(REG, Y, lm). Given an automaton control system (A ~· A2 ),
A ∈ F SA, A2 ∈ X, we construct G ∈ REG, G2 ∈ Y as follows.
First, we construct a right-linear grammar G from A. The key point of the translation
is the name mapping between transitions and productions. Given A = (Q, Σ, δ, q0 , F ), we
construct G = (N, T, P, S0 ) where:
1. for each qi ∈ Q, there is a nonterminal Si ∈ N , and S0 is the start symbol.
2. T = Σ.
3. for each transition pk : δ(qi , ak ) = qj , where pk is the name of the transition and ak ∈ Σ ∪ {ε}, there is a production pk : Si → ak Sj ∈ P .
4. for each final state qi ∈ F , there is a production pεi : Si → ε ∈ P .
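The construction rules above can be sketched as follows (a minimal Python encoding of our own; `None` stands for ε, and the final-state productions are named `p_eps_i` for the pεi of the text):

```python
def fsa_to_rlg(transitions, start, finals):
    """transitions: dict name -> (q_i, symbol_or_None, q_j); None stands for epsilon."""
    productions = {}
    for name, (qi, a, qj) in transitions.items():
        # transition p_k : delta(q_i, a_k) = q_j gives p_k : S_i -> a_k S_j
        productions[name] = (f"S{qi}", ([a] if a is not None else []) + [f"S{qj}"])
    for qi in finals:
        # final state q_i gives p_eps_i : S_i -> epsilon
        productions[f"p_eps_{qi}"] = (f"S{qi}", [])
    return productions, f"S{start}"

# Example: p1: delta(q0, a) = q1, p2: delta(q1, b) = q0, final state q0 — accepts (ab)*.
prods, axiom = fsa_to_rlg({"p1": (0, "a", 1), "p2": (1, "b", 0)}, 0, {0})
```

The key point, as in the proof, is that the translation preserves names: each transition and its production share the same label.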
Second, we construct G2 ∈ Y from A2 ∈ X. Let Pε = {pεi }qi ∈F and L′ = L(A2 ) · Pε . We have L′ ∈ L, since L is closed under concatenation with finite sets and Pε is a finite (hence regular) set. Thus, there exists G2 ∈ Y such that L(G2 ) = L′ .
We now prove L(A ~· A2 ) = L(G, G2 , lm). It is easy to see that an input string w = ai1 ...aik is in L(A ~· A2 ) (where aij may be ε for some j), if and only if (q0 , ai1 ...aik ) ⊢pi1 (qj1 , ai2 ...aik ) ⊢pi2 · · · ⊢pik (qjk , ε), where qjk ∈ F and pi1 ...pik ∈ L(A2 ); if and only if G has the leftmost derivation S0 ⇒ ai1 Sj1 ⇒ · · · ⇒ ai1 ...aik Sjk ⇒ ai1 ...aik , applying pi1 , ..., pik , pεjk in turn, with pi1 ...pik pεjk ∈ L(G2 ); if and only if w = ai1 ...aik ∈ L(G, G2 , lm).
(ii) L(F SA ~· X) ⊇ L(REG, Y, lm). Given an LGC system (G, G2 , lm), G2 ∈ Y ,
G = (N, T, P, S0 ) ∈ REG, where N = {Si }0≤i<|N | , P consists of all productions of the
forms: Si → wSj , Si → w, w ∈ T ∗ . We assume that each production name in P is unique; this assumption is justified since L(Y ) is closed under substitution by ε-free regular sets (Lemma 3.26).
First, we construct from G a right-linear grammar G1 = (N1 , T, P1 , S0 ), where P1
consists of the following productions:
1. pkm : Sk(m−1) → akm Skm , if pk : Si → wSj ∈ P , where w = ak1 · · · ak|w| , |w| ≥ 1,
1 ≤ m ≤ |w|, Sk0 = Si , Sk|w| = Sj .
2. pk : Si → Sj , if pk : Si → Sj ∈ P .
3. pkm : Sk(m−1) → akm Skm , if pk : Si → w ∈ P , where w = ak1 · · · ak|w| , |w| ≥ 1, 1 ≤ m ≤ |w|, Sk0 = Si , Sk|w| = Sε .
4. pk : Si → Sε , if pk : Si → ε ∈ P .
5. pε : Sε → ε.
Then, we construct from G1 a finite automaton A = (Q, Σ, δ, q0 , {qε }), where:
1. for each Si ∈ N1 , there is a state qi ∈ Q, and q0 is the start state.
2. Σ = T .
3. for each production pk : Si → ak Sj ∈ P1 , where ak ∈ T ∪ {ε}, there is a transition pk : δ(qi , ak ) = qj .
Second, we construct A2 ∈ X from G2 ∈ Y . Let h be an ε-free homomorphism defined as follows:

h(pk ) =
  pk ,                  if pk is of the form Si → Sj in P
  pk pε ,               if pk is of the form Si → ε in P
  pk1 · · · pk|w| ,      if pk is of the form Si → wSj in P , |w| ≥ 1
  pk1 · · · pk|w| pε ,   if pk is of the form Si → w in P , |w| ≥ 1

Let L′ = h(L(G2 ))/{pε }. We have L′ ∈ L, since L(G2 ) ∈ L and L is closed under ε-free homomorphisms and right derivatives. Hence there exists an automaton A2 ∈ X such that L(A2 ) = L′ ∈ L(X).
It is easy to see, an input string w ∈ L(A ~· A2 ), if and only if w ∈ L(G, G2 , lm).
Note that the translations between X and Y in the above proof are independent of
concrete automata or grammars, but it is easy to concretize the translations. For example,
in proof (i), the translation from A2 ∈ F SA to G2 ∈ REG is the following: let G1 = (N1 , T1 , P1 , S) be such that L(G1 ) = L(A2 ) (using the translation from automata to right-linear grammars); then G2 = (N1 , T1 ∪ Pε , P2 , S), where P2 is obtained by replacing each production in P1 of the form A → w, A ∈ N1 , w ∈ T1∗ , by the set of productions {A → w pεi | pεi ∈ Pε }. Clearly, L(G2 ) = L(A2 ) · Pε , which finishes the construction.
The following corollaries are based on the theory of Abstract Families of Languages (AFL), which was initiated by [62] (a summary can be found in [61]). AFL theory shows that any AFL satisfies the closure properties required in the above theorem, and that the family of regular languages and the family of context-free languages are both AFLs.
Let X be F SA, Y be REG, we have the following corollary.
Corollary 3.28. L(F SA ~· F SA) = L(REG, REG, lm).
Let X be P DA, Y be CF G, we have the following corollary.
Corollary 3.29. L(F SA ~· P DA) = L(REG, CF G, lm).
Theorem 3.30. Let X be a family of automata and Y a family of grammars. If L = L(X) = L(Y ) is closed under (1) substitution by ε-free regular sets, and (2) concatenation with regular sets, then L(P DA ~· X) = L(CF G, Y, lm).
Proof. (i) L(P DA ~· X) ⊆ L(CF G, Y, lm). Given an AC system (D ~· A2 ), A2 ∈ X,
D = (Q, Σ, Γ, δ, q0 , Z0 , F ) ∈ P DA, we construct grammars G ∈ CF G, G2 ∈ Y as follows.
First, let qe 6∈ Q, Ze 6∈ Γ, we construct G = (N, Σ, P, S) from D, where N is the set
of objects of the form [q, B, r] (denoting popping B from the stack by several transitions,
switching the state from q to r), q, r ∈ Q, B ∈ Γ, P is the union of the following sets of
productions:
1. Ps = {S → [q0 , Z0 , r][r, Ze , qe ] | r ∈ Q ∪ {qe }}. (starting productions)
2. Pk = {[q, B, qm+1 ] → a[q1 , B1 , q2 ][q2 , B2 , q3 ] · · · [qm , Bm , qm+1 ] |
pk : δ(q, a, B) = (q1 , B1 B2 ...Bm ), q2 , ..., qm+1 ∈ Q}, where a ∈ Σ ∪ {ε}, and
B, B1 , ..., Bm ∈ Γ. (If m = 0, then the production is [q, B, q1 ] → a.)
3. Pf = {[q, X, r] → [qe , X, r] | q ∈ F, X ∈ Γ ∪ {Ze }, r ∈ Q ∪ {qe }}. (simulating the
case of entering a final state)
4. Pe = {[qe , X, qe ] → ε | X ∈ Γ ∪ {Ze }}. (eliminating remaining nonterminals in the sentential form)
Note that for each transition pk ∈ δ, there exists a finite set of corresponding productions Pk in P , whereas each of Ps , Pf , Pe is a unique finite set in P .
Second, we construct G2 ∈ Y from A2 ∈ X. Let s be a substitution by ε-free finite sets such that s(pk ) = Pk , and let L′ = Ps · s(L(A2 )) · Pf · Pe∗ . We have L′ ∈ L, since L(A2 ) ∈ L(X) = L and L is closed under substitution by ε-free regular sets and concatenation with regular sets. Thus, there exists G2 ∈ Y such that L(G2 ) = L′ .
It is easy to see that an input string w ∈ L(D ~· A2 ) if and only if w ∈ L(G, G2 , lm). Indeed, G starts with Ps , and then each Pk simulates the transition pk of D. D enters a final state q and accepts w, if and only if G has the derivations S ⇒∗ w[q, X1 , qe ][qe , X2 , qe ]... ⇒Pf w[qe , X1 , qe ][qe , X2 , qe ]... ⇒Pe∗ w.
(ii) L(P DA ~· X) ⊇ L(CF G, Y, lm). Given an LGC system (G, G2 , lm), where G =
(N, T, P, S) ∈ CF G, G2 ∈ Y , we construct an AC system as follows.
First, we construct from G an equivalent PDA D that accepts L(G) as follows:
D = ({q, f }, T, N ∪ T ∪ {Z, Z ′ }, δ, q, Z, {f })
where δ is defined as follows:
1. ps : δ(q, ε, Z) = (q, SZ ′ ),
2. pi : δ(q, ε, B) = (q, β), for pi : B → β ∈ P , B ∈ N ,
3. pa : δ(q, a, a) = (q, ε), for a ∈ T ,
4. pf : δ(q, ε, Z ′ ) = (f, ε).
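This standard CFG-to-PDA construction can be sketched as follows (a toy Python encoding of our own; `None` stands for an ε input and the empty string for an empty stack push):

```python
def cfg_to_pda(productions, terminals, start="S"):
    """productions: list of (name, B, beta) with B a nonterminal, beta a string."""
    delta = [("ps", ("q", None, "Z"), ("q", start + "Z'"))]  # push S Z' on start
    for name, B, beta in productions:
        delta.append((name, ("q", None, B), ("q", beta)))    # expand B -> beta
    for a in terminals:
        delta.append((f"p_{a}", ("q", a, a), ("q", "")))     # match a terminal
    delta.append(("pf", ("q", None, "Z'"), ("f", "")))       # accept on bottom marker
    return delta

delta = cfg_to_pda([("p1", "S", "aS"), ("p2", "S", "b")], ["a", "b"])
```

Note how each grammar production keeps its name pi on the corresponding transition, which is exactly what the substitution s in the proof relies on.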
Second, we construct A2 ∈ X from G2 ∈ Y . Let s(pi ) = pi · {pa | a ∈ T }∗ be a substitution by ε-free regular sets, and let L′ = {ps } · s(L(G2 )) · {pf }. We have L′ ∈ L, since L is closed under substitution by ε-free regular sets and concatenation with regular sets. Thus, there exists A2 ∈ X such that L(A2 ) = L′ .
It is easy to see, the leftmost derivations in G are simulated by sequences of transitions
in D. Thus, an input string w ∈ L(D ~· A2 ), if and only if w ∈ L(G, G2 , lm).
Note that the translations between X and Y in the above proof are independent of
concrete automata or grammars, but it is easy to concretize the translations. For example,
in the proof (ii), the translation from G2 ∈ REG to A2 ∈ F SA is the following one:
first, we construct from G2 an equivalent finite automaton A = (Q, Σ, δ, q0 , F ) such that
L(A) = L(G2 ) by the translation from regular grammars to finite automata. Then, we
construct
A2 = (Q ∪ {qs , qf }, Σ ∪ {ps , pf } ∪ {pa | a ∈ T }, δ ∪ δ 0 , qs , {qf })
where δ 0 includes the following transitions:
1. δ 0 (qs , ps ) = {q0 },
2. δ 0 (q, pa ) contains q, for q ∈ Q, a ∈ T ,
3. δ 0 (q, pf ) = {qf }, for q ∈ F .
Clearly, L(A2 ) = {ps } · s(L(A)) · {pf }, which finishes the construction.
The following corollaries are based on the result that any AFL satisfies the closure
properties required by the above theorem.
Let X be F SA, Y be REG, we have the following corollary.
Corollary 3.31. L(P DA ~· F SA) = L(CF G, REG, lm).
Let X be P DA, Y be CF G, we have the following corollary.
Corollary 3.32. L(P DA ~· P DA) = L(CF G, CF G, lm).
Because of these equivalences, we may also denote an LGC system (G1 , G2 , lm) by
G1 ~· G2 for a uniform notation. Moreover, this operator is more straightforward for
expressing the result of constructive computation of meta-composition, e.g., A = A1 ~· A2 ,
G = G1 ~· G2 . The constructions of meta-compositions that appear in the previous proofs are useful and important, not only in the proofs of generative power, but also in emerging applications.
3.4 Parsing Issues

3.4.1 GC Systems are not LL-Parsable
GC systems are not compatible with LL (or LR) parsers, which are based on leftmost (resp. rightmost) derivations [1]. Indeed, GC systems introduce conflicts in parsing. Consider the following example.
Example 3.33. Consider the regularly controlled grammar (also a grammar control system in L(CF G, REG)):
G = ({S, A}, {a, b}, {p1 , p2 , p3 }, S, R)
where p1 : S → AA, p2 : A → a, p3 : A → b, and R = {p1 p2 p3 }. The following two derivations are both accepted, so L(G) = {ab, ba}:

D1 : S ⇒p1 AA ⇒p2 aA ⇒p3 ab
D2 : S ⇒p1 AA ⇒p2 Aa ⇒p3 ba
However, when we use an LL parser to analyze the input string ba, the leftmost derivation produces the production sequence p1 p3 p2 , which is not accepted by R and thus leads the parser to reject ba. Yet we know that ba ∈ L(G). Therefore, there is a conflict.
The conflict results from the fact that R is not a control set based on leftmost derivations. Indeed, if R controlled leftmost derivations, then only D1 would be accepted and D2 would be rejected.
3.4.2 Parsing by Extending Classical Algorithms
It is easy to extend LL parsing algorithms [1] to parse LGC systems. Let us consider an
LGC system in which CFG G1 is controlled by CFG G2 , where G1 is an LL grammar.
Given an input string w, the parsing algorithm is as follows:
1. Parse w with G1 using an LL parser. If G1 rejects w, the LGC system rejects. If G1 accepts w, we obtain the sequence of productions wp applied in the associated leftmost derivation. Note that this sequence is unique, since LL grammars are unambiguous.
2. Check whether wp is a member of L(G2 ). Note that G2 may be any context-free grammar, since this step can be implemented using the CYK algorithm. The LGC system accepts w if and only if wp ∈ L(G2 ).
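The two-phase algorithm can be illustrated on a toy instance (the grammars and all names here are our own, not from the thesis; for brevity, membership in the regular control set (p1 p1)∗ p2 is decided directly instead of by a full CYK run):

```python
# Controlled LL(1) grammar G1:  p1: S -> aS,  p2: S -> b
# Control language L(G2) = (p1 p1)* p2, i.e. an even number of p1's.

def ll_parse(w):
    """Phase 1: LL parse of w with G1, returning the unique sequence of
    production names used in the leftmost derivation, or None if G1 rejects."""
    seq, i = [], 0
    while i < len(w) and w[i] == "a":
        seq.append("p1")
        i += 1
    if i == len(w) - 1 and w[i] == "b":
        return seq + ["p2"]
    return None  # G1 rejects w

def lgc_accepts(w):
    """Phase 2: accept iff G1 accepts w and the production word is in L(G2)."""
    wp = ll_parse(w)
    return wp is not None and wp.count("p1") % 2 == 0
```

So `lgc_accepts` recognizes {a^(2n) b | n ≥ 0}, while G1 alone accepts a∗b: the control grammar prunes derivations after the fact, exactly as in the two-step algorithm above.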
Similarly, we can extend Earley’s algorithm [43] to parse LGC systems. We do not discuss the technical details of this extension, since it is straightforward.
3.5 Related Work
In this section, we show the differences between the control system and related work. In
particular, we compare the LGC system with regulated rewriting, and the AC system with
supervisory control.
3.5.1 Grammars with Regulated Rewriting
The control system is a generic framework, and the controlled component and the controlling component are expressed using the same formalism, e.g., automata or grammars.
Therefore, the two components of an LGC system are both expressed using grammars.
In contrast, traditional grammars with regulated rewriting do not satisfy this principle.
Indeed, their controlled components are grammars, but their controlling components are
other structures such as regular sets and matrices.
The control system adopts one restriction and three extensions of regulated rewriting
[39]. The restriction is that the appearance checking mode is disabled. The extensions are
as follows.
First, the controlling components are expressed using the same formalism as the controlled components, e.g., automata or grammars, rather than regular sets or matrices.
Second, the controlled derivations of an LGC system are restricted to be leftmost,
in order to make the global system LL-parsable. We aim at obtaining an LL-parsable
grammar control system. Since a controlled LL grammar can be implemented as an LL
parser generating leftmost derivations, the control mechanism and the controlling component should be compatible and easy to integrate into the parser. However, regularly controlled grammars are neither LL-parsable nor compatible with LL parsers.
Third, context-free controlling grammars are enabled to increase generative power.
The generative power of L(CF G, REG) is reduced to L(CF G ~· REG) = L(CF G) by the restriction to leftmost derivations. Indeed, we proved that L(CF G) ⊂ L(CF G, REG) ⊂ L(P SG) and L(CF G ~· REG) = L(CF G). As a result, in order to
express non-context-free languages, we have to use context-free controlling grammars. To
illustrate this point, we may compare Example 3.3 and Example 3.10 (both of them accept
an bn cn ). The different controlling grammars are due to their different generative power.
3.5.2 Supervisory Control
The theory of supervisory control of deterministic systems was first introduced by P. J. Ramadge and W. M. Wonham [112, 113]. Supervisory control is used
to restrict the inadmissible behavior of a Discrete Event System (DES). Given a DES, or a
plant, whose uncontrolled behavior is modeled by automaton G, we introduce a supervisor
modeled by automaton S to restrict the behavior of the plant.
Let L ⊆ Σ∗ be a language. The prefix-closure of L is L̄ = {s ∈ Σ∗ | ∃t ∈ Σ∗ , st ∈ L}. L is prefix-closed iff L = L̄.
The language generated by a deterministic finite state automaton (DFSA) A = (Q, Σ, δ, q0 , F ) is the set
Lg (A) = {s ∈ Σ∗ | δ(q0 , s) is defined}
The language marked (or accepted ) by A is the set
Lm (A) = {s ∈ Σ∗ | δ(q0 , s) ∈ F }
We say that the automaton A is blocking if L̄m (A) ⊂ Lg (A), and nonblocking if L̄m (A) = Lg (A), where L̄m (A) denotes the prefix-closure of Lm (A).
The first significant difference between the AC system and supervisory control is that
the AC system is a more generic formalism. Indeed, AC systems may consist of all types
of automata, while supervisory control only considers finite state automata. Therefore,
the AC system can express a wider range of classes of systems.
Second, let us compare finite state AC systems in L(F SA ~· F SA) and supervisory
control. We would like to say that the AC system uses the transition-level control, while
supervisory control uses the alphabet-level control.
We can show that the power of supervisory control is not greater than the power of
AC systems. Given a controlled automaton, if we set each transition name identical to its
associated symbol, then each supervisor can be expressed as a controlling automaton.
Furthermore, the AC system is strictly more powerful, because the AC system can
impose some constraints outside the power of supervisory control. We use a simple example
to show the difference between them.
Consider the automaton A1 shown in Fig. 3.1. Obviously, the automaton accepts the language L(A1 ) = Lm (A1 ) = (abd + abcd)∗ . Now we need a nonblocking controller, or supervisor, to restrict the behavior of the automaton to (abd)∗ .
Figure 3.1: A Controlled Automaton. (As reconstructed from the accepted language: q0 —p1 : a→ q1 ; q1 —p2 : b→ q2 ; q1 —p3 : b→ q3 ; q3 —p4 : c→ q2 ; q2 —p5 : d→ q0 .)
It is easy to construct an AC system with the controlling automaton A2 accepting
(p1 + p2 + p4 + p5 )∗ by excluding p3 . Thus, the new global system accepts the language
L(A1 ~· A2 ) = (abd)∗ , which satisfies the requirement.
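This controlled behavior is easy to check by simulation (a sketch using our reconstruction of Fig. 3.1; since the controlling automaton accepts (p1 + p2 + p4 + p5 )∗ , its effect reduces to filtering on the set of allowed transition names):

```python
TRANS = {  # name: (source state, input symbol, target state)
    "p1": (0, "a", 1), "p2": (1, "b", 2), "p3": (1, "b", 3),
    "p4": (3, "c", 2), "p5": (2, "d", 0),
}
ALLOWED = {"p1", "p2", "p4", "p5"}  # L(A2) = (p1 + p2 + p4 + p5)*, p3 excluded

def accepts(word, allowed=ALLOWED, start=0, finals=frozenset({0})):
    """Nondeterministic simulation of the controlled automaton A1 under A2."""
    states = {start}
    for ch in word:
        states = {t for n, (s, a, t) in TRANS.items()
                  if n in allowed and a == ch and s in states}
    return bool(states & finals)
```

With the control in place the system accepts (abd)∗ and rejects abcd; passing `allowed=set(TRANS)` recovers the uncontrolled behavior (abd + abcd)∗.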
However, there does not exist a nonblocking supervisor satisfying the requirement. The
key problem is the nondeterminism at state q1 . Assume there is such a supervisor S. Then S can only specify whether b may occur in the state reached after a occurs. If not, then neither p2 nor p3 is allowed, which violates the requirement. If so, then both p2 and p3 are allowed. In this case, to avoid the occurrence of abcd, S may disable the
transition labeled c in the next state. However, the automaton then enters the blocking state q3 after choosing p3 : b. As a result, the supervisor is blocking, which violates the requirement. Thus, S does not exist.
Therefore, we conclude that the AC system is strictly more powerful than supervisory
control.
3.6 Conclusion
In this chapter, we proposed the theory of Control Systems (C Systems). The control
system is a generic framework, and contains two components: the controlled component
and the controlling component. We defined three types of control systems, namely GC,
LGC, and AC systems. The generative power and parsing issues were studied. As we
mentioned, the proofs of generative power provide also the techniques for constructing
meta-compositions, which are useful for the verification and implementation of the global
system.
The inclusion relationships of generative power are summarized in Fig. 3.2, where a line denotes equivalence, and an arrow denotes that the upper family includes (perhaps properly) the lower family. The generative power of AC systems is not explicitly presented, since it is equivalent to that of LGC systems.
Figure 3.2: Generative Power of Control Systems. (The figure relates, under the headings “GC Systems” and “LGC Systems”, the Chomsky families L(P SG), L(CSG), L(CF G), L(REG) to the GC families L(CF G, CF G), L(CF G, REG), L(REG, CF G), L(REG, REG) and the LGC families L(CF G ~· CF G), L(CF G ~· REG), L(REG ~· CF G), L(REG ~· REG).)
Chapter 4
On the Generative Power of (σ, ρ, π)-accepting ω-Grammars
The theory of ω-languages has been studied in the literature in various formalisms. Most
of the works focus on two aspects.
The first one is the relationship between ω-automata and the theory of second order
logic, and related decision problems. Büchi [16] started the study of obtaining decision
procedures for the restricted second order logic theory by using finite state ω-automata.
Thomas summarized related work in [120, 121].
The second aspect concerns the generative power of ω-automata and ω-grammars, and
the closure property of ω-languages. McNaughton [99] investigated finite state ω-automata
and the equivalences between their variants. Landweber [84] classified the families of ω-languages accepted by deterministic finite state ω-automata in the Borel hierarchy with respect to the product topology. Later, Cohen systematically studied the Chomsky hierarchy for ω-languages [32, 33, 35] and deterministic ω-automata [34, 36]. Engelfriet studied
(σ, ρ)-accepting X-automata on ω-words [49].
In this chapter, we propose the (σ, ρ, π)-accepting ω-grammar, motivated by the second aspect above. The tuple (σ, ρ, π) covers various accepting models, where σ denotes that certain productions appear infinitely often or at least once, ρ is a relation between a set of productions and an accepting set, and π denotes leftmost or non-leftmost derivations. Cohen focused only on ω-automata with five types of i-acceptance and on ω-grammars with the Muller acceptance condition, which leads to the Chomsky hierarchy for ω-languages [32], while Engelfriet studied ω-automata with six types of (σ, ρ)-acceptance [49]. As a result, the literature contains no ω-grammar corresponding to the (σ, ρ)-accepting ω-automaton.
Therefore, this chapter aims at defining the (σ, ρ, π)-accepting ω-grammar associated with Engelfriet’s (σ, ρ)-accepting ω-automaton, and at systematically studying its generative power relative to (σ, ρ)-accepting ω-automata. In particular, by establishing translation methods, we will prove that for most accepting models the relationship between the two types of ω-devices is similar to the one in the case of finite words. However, for some accepting models, the generative power of non-leftmost derivations of ω-CFG is strictly weaker than that of ω-PDA, and the generative power of non-leftmost derivations of ω-CSG is equal to that of ω-TM (rather than to a linear-bounded ω-automaton-like device). Furthermore, we will discuss remaining open questions for two of the accepting models. The questions
show that the relationship between ω-grammars and ω-automata is not straightforward, although the relationship between grammars and automata on finite words is well established.
This chapter is organized as follows. In Section 4.1, notations, definitions and important known results are recalled. In Section 4.2, the generative power of (σ, ρ, π)-accepting
ω-grammars is explored. Related work is discussed in Section 4.3, and we conclude in
Section 4.4.
4.1 Preliminaries
The terminology and notations are mostly taken from [32, 33, 49], and conform to [74].
We use the abbreviations “w.r.t.” and “s.t.” for “with respect to” and “such that”, respectively.
Definition 4.1. Let Σ denote a finite alphabet, and let Σω denote the set of all infinite (ω-length) strings u = ∏∞i=1 ai where ai ∈ Σ. Any member u of Σω is called an ω-word or ω-string. An ω-language is a subset of Σω .
For any language L ⊆ Σ∗ , define:
Lω = {u ∈ Σω | u = ∏∞i=1 xi , where for each i, ε ≠ xi ∈ L}
Note that if L = {ε} then Lω = ∅.
The following definitions will be used to define the accepting models for ω-automata
and ω-grammars.
Definition 4.2. Let A and B be two sets, for a mapping f : A → B, we define:
inf(f ) = {b | b ∈ B, |f −1 (b)| ≥ ω}
ran(f ) = {b | b ∈ B, |f −1 (b)| ≥ 1}
where |X| denotes the cardinality of the set X.
Let N be the set of natural numbers, Q be a finite set, f ∈ Qω be an infinite sequence
f = f1 f2 . . .. We consider f as a mapping from N to Q where f (i) = fi . Therefore, inf(f )
is the set of all elements that appear infinitely often in f , and ran(f ) is the set of all
elements that appear at least once in f .
To define various accepting models, in the sequel of this chapter, we assume that
σ ∈ {ran, inf} and ρ ∈ {u, ⊆, =} unless otherwise specified [49], where A u B means A ∩ B ≠ ∅.
Thus we consider the six types of acceptance given in Table 4.1, which includes also the
relation between our notation and the five types of i-acceptance used in [84, 32]. Formally,
we define the (σ, ρ)-accepting model as follows.
Definition 4.3. Let σ : Qω → 2Q be a mapping that assigns to each infinite sequence
over Q a subset of Q, ρ be a binary relation over 2Q , F ⊆ 2Q be a family of sets. The
infinite sequence f : N → Q is (σ, ρ)-accepting w.r.t. F, if there exists a set F ∈ F such
that σ(f )ρF .
(σ, ρ)      i-accepting     Semantics                    Alias
(ran, u)    1-accepting     (∃F ∈ F) ran(f ) ∩ F ≠ ∅
(ran, ⊆)    1’-accepting    (∃F ∈ F) ran(f ) ⊆ F
(ran, =)    —               ran(f ) ∈ F
(inf, u)    2-accepting     (∃F ∈ F) inf(f ) ∩ F ≠ ∅     Büchi
(inf, ⊆)    2’-accepting    (∃F ∈ F) inf(f ) ⊆ F
(inf, =)    3-accepting     inf(f ) ∈ F                  Muller

Table 4.1: f is (σ, ρ)-accepting w.r.t. F
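For an ultimately periodic run f = x · yω , inf(f ) is the set of elements of the loop y and ran(f ) is the set of all elements that occur, so the six acceptance conditions of Table 4.1 can be checked mechanically (a sketch with our own encoding; `"meet"` stands for the relation u, and this of course only covers ultimately periodic runs):

```python
def accepting(prefix, loop, family, sigma, rho):
    """Check (sigma, rho)-acceptance of the run prefix . loop^omega w.r.t. family."""
    f_set = set(loop) if sigma == "inf" else set(prefix) | set(loop)
    if rho == "meet":                      # A u B means A ∩ B ≠ ∅
        return any(f_set & F for F in family)
    if rho == "subset":
        return any(f_set <= F for F in family)
    return any(f_set == set(F) for F in family)   # rho == "="

# Buchi acceptance, i.e. (inf, meet), of the run (q0 q1)^omega w.r.t. F = {{q1}}:
buchi = accepting([], ["q0", "q1"], [{"q1"}], "inf", "meet")
```

The same run can be (ran, u)-accepting yet not (inf, u)-accepting when the designated states occur only in the finite prefix, which is exactly the gap between the ran and inf conditions.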
4.1.1 ω-automata and ω-languages
The definitions of ω-automata are generalized from those of classical automata by adding
a set of designated state sets.
Definition 4.4. A finite state ω-automaton (ω-FSA) is a tuple A = (Q, Σ, δ, q0 , F), where Q is a finite set of states, Σ is a finite input alphabet, q0 ∈ Q is the initial state, δ ⊆ Q × (Σ ∪ {ε}) × Q is a transition relation, and F ⊆ 2Q is a set of designated state sets. If δ is deterministic, then A is a deterministic finite state ω-automaton (ω-DFSA).
Let u = ∏∞i=1 ai ∈ Σω , where ∀i ≥ 1, ai ∈ Σ. A legal run (or complete run) of A on u is an infinite sequence of states r = r1 r2 . . ., where r1 = q0 and ∀i ≥ 1, ∃bi ∈ Σ ∪ {ε} satisfying δ(ri , bi ) ∋ ri+1 and ∏∞i=1 bi = ∏∞i=1 ai .
Note that bi is introduced due to the existence of ε-transitions. All computations which do not correspond to complete runs, e.g. computations which involve infinite ε-loops, will be disregarded.
Definition 4.5. A pushdown ω-automaton (ω-PDA) is a tuple D = (Q, Σ, Γ, δ, q0 , Z0 , F), where Γ is a finite stack alphabet, δ ⊆ Q × (Σ ∪ {ε}) × Γ × Q × Γ∗ is a transition relation, and Z0 ∈ Γ is the start stack symbol. If δ is deterministic, then D is a deterministic pushdown ω-automaton (ω-DPDA).
A configuration of an ω-PDA is a pair (q, γ), where q ∈ Q, γ ∈ Γ∗ , and the leftmost symbol of γ is the top of the stack. For a ∈ Σ ∪ {ε}, β, γ ∈ Γ∗ and Z ∈ Γ, we write a : (q, Zγ) ⊢D (q ′ , βγ) if δ(q, a, Z) ∋ (q ′ , β).
Let u = ∏∞i=1 ai ∈ Σω , where ∀i ≥ 1, ai ∈ Σ. A legal run (or complete run) of D on u is an infinite sequence of configurations r = {(qi , γi )}i≥1 , where (q1 , γ1 ) = (q0 , Z0 ) and ∀i ≥ 1, ∃bi ∈ Σ ∪ {ε} satisfying bi : (qi , γi ) ⊢D (qi+1 , γi+1 ) and ∏∞i=1 bi = ∏∞i=1 ai .
Definition 4.6. A Turing ω-machine (ω- TM) with a single semi-infinite tape is a tuple
M = (Q, Σ, Γ, δ, q0 , F), where Γ is a finite tape alphabet such that Σ ⊆ Γ, δ ⊆ Q × Γ ×
Q × Γ × {L, R, S} is a transition function. If δ is deterministic, then M is a deterministic
Turing ω-machine (ω- DTM).
A configuration of an ω- TM is a tuple (q, γ, i), where q ∈ Q and γ ∈ Γω and i ∈ N
indicating the position of the tape head. The relations `M are defined as usual.
Let u = ∏∞i=1 ai ∈ Σω , where ∀i ≥ 1, ai ∈ Σ. A run of M on u is an infinite sequence of configurations r = {(qi , γi , ji )}i≥1 , where (q1 , γ1 , j1 ) = (q0 , u, 1) and ∀i ≥ 1, (qi , γi , ji ) ⊢M (qi+1 , γi+1 , ji+1 ).
A run r is complete if ∀n ≥ 1, ∃k ≥ 1, s.t. jk > n. A run r is oscillating if ∃n0 ≥
1, ∀l ≥ 1, ∃k ≥ l, s.t. jk = n0 .
A legal run (or complete non-oscillating run, abbreviated c.n.o.) of M on u is a run
which is complete and non-oscillating, and corresponds to an infinite computation that
scans each square on the tape only finitely many times.
An m-tape Turing ω-machine (m-ω- TM) (m ≥ 1) has m semi-infinite tapes, each with
a separate reading head. We assume that initially the input appears on the first tape and
the other tapes are blank. The transitions are defined in the usual way [74]. The notion
of c.n.o. run for an m-ω- TM means an infinite computation that scans each square on the
first tape only finitely many times. There is no such restriction for the other tapes.
Definition 4.7. A state qT ∈ Q is a traverse state iff ∀a ∈ Γ, δ(qT , a) = {(qT , a, R)}.

The following definitions are common notations for all the types of ω-automata defined above.
Definition 4.8. Let A be an ω-automaton and u = ∏∞i=1 ai ∈ Σω , where ∀i ≥ 1, ai ∈ Σ. A legal run r of A on u induces an infinite sequence of states fr = f1 f2 . . ., where f1 = q0 and fi is the state entered at the i-th step of the legal run r. We define the ω-language (σ, ρ)-accepted by A as
Lσ,ρ (A) = {u ∈ Σω | there exists a legal run r of A on u such that fr is (σ, ρ)-accepting w.r.t. F}
Since (inf, =)-acceptance is the most powerful model of ω-recognition (i.e. 3-acceptance
in [32, 33]), we adopt (inf, =)-acceptance as our standard definition of acceptance. Henceforth, (inf, =)-acceptance will be referred to simply as “acceptance”, and Linf,= (A) will be
denoted by L(A) (the ω-language “accepted” by A) by omitting (inf, =).
In the sequel, we denote by ω-FSA, ω-PDA, ω-TM the families of finite state ω-automata, pushdown ω-automata, and Turing ω-machines, and denote by ω-DFSA, ω-DPDA, ω-DTM
the families of deterministic ones, respectively. For a family X of ω-automata, we denote
the associated family of (σ, ρ)-accepted ω-languages by Lσ,ρ (X). As usual, we denote
simply Linf,= (X) by L(X).
Definition 4.9. Two ω-automata A1 and A2 are (σ, ρ)-equivalent iff Lσ,ρ (A1 ) = Lσ,ρ (A2 ).
They are equivalent iff L(A1 ) = L(A2 ).
Definition 4.10. An ω-automaton with a unique designated set, i.e., |F| = 1, is called a
U-ω-automaton. We may denote the unique designated set by F ⊆ Q instead of F = {F }.
Lemma 4.11 (Lemma 2.7 of [49]). Let ρ ∈ {u, ⊆}, for every (deterministic) ω-automaton
A there exists a (deterministic) U-ω-automaton A0 such that Lσ,ρ (A) = Lσ,ρ (A0 ).
Definition 4.12. An ω-automaton A has the continuity property, abbreviated Property C, iff for every ω-word u ∈ Σω there is a legal run of A on u. We then say A is a C-ω-automaton.
Note that a legal run is not necessarily accepting; it only means that the input does not block the ω-automaton. It is easy to see, by utilizing nondeterminism, that for all (σ, ρ)-acceptances, every X-type ω-automaton A without Property C can be modified into a (σ, ρ)-equivalent nondeterministic X-type ω-automaton A′ with Property C.
4.1.2 ω-grammars and ω-languages
Let us recall that a phrase structure grammar is denoted G = (N, T, P, S), where N is a finite set of nonterminals, T is a finite set of terminals, P is a finite set of productions of the form p : α → β, where p is the name (or label) of the production, α ≠ ε, and α, β are strings of symbols from (N ∪ T )∗ , and S ∈ N is the start symbol. We define the vocabulary V = N ∪ T . The language accepted by G is L(G) = {w ∈ T ∗ | S ⇒∗ w}.
A derivation using a specified production p is denoted by α ⇒p β, and its reflexive and transitive closure is denoted by α ⇒∗ γ, or, annotated with the sequence of applied productions, α ⇒p1 ...pk γ.
We denote a leftmost derivation (marked lm) by α ⇒p1 ,lm α1 ⇒p2 ,lm · · · ⇒pk ,lm αk . As an abbreviation, we write α ⇒p1 ...pk ,lm αk . We will omit “lm” if there is no confusion.
Definition 4.13. A phrase structure ω-grammar (ω- PSG) is a quintuple G = (N, T, P, S, F),
where G1 = (N, T, P, S) is an ordinary phrase structure grammar, the productions in P
are all of the form p : α → β, where p is the name (or label) of the production, α ∈ N + ,
β ∈ V ∗ , and F ⊆ 2P . The sets in F are called the production repetition sets.
Let d be an infinite derivation in G, starting from some string α ∈ V ∗ :
d : α = u0 α0 ⇒p1 u0 u1 α1 ⇒p2 · · · ⇒pi u0 u1 · · · ui αi ⇒pi+1 · · ·
where for each i ≥ 0, ui ∈ T ∗ , αi ∈ N V ∗ , pi+1 ∈ P . We say d is a leftmost derivation iff for each i ≥ 1, the production pi rewrites the leftmost nonterminal of αi−1 .
Let u = ∏∞i=0 ui . If u ∈ T ω , we write d : α ⇒ω u. The derivation d induces a sequence of productions dP = p1 p2 . . ., i.e., a mapping dP : N → P where dP (i) = pi .
For π ∈ {l, nl} (denoting leftmost and non-leftmost derivations, respectively), we define the ω-languages (σ, ρ, π)-accepted by G as
Lσ,ρ,l (G) = {u ∈ T ω | there exists a leftmost derivation d : S ⇒ω u in G such that dP is (σ, ρ)-accepting w.r.t. F}
Lσ,ρ,nl (G) = {u ∈ T ω | there exists a derivation d : S ⇒ω u in G such that dP is (σ, ρ)-accepting w.r.t. F}
As usual, Linf,=,π (G) will be denoted by Lπ (G).
Definition 4.14. A context sensitive ω-grammar (ω- CSG) is an ω- PSG in which for each
production α → β, |β| ≥ |α| holds.
Definition 4.15. A context-free ω-grammar (ω- CFG) with production repetition sets is
an ω- PSG whose productions are of the form A → α, A ∈ N , α ∈ (N ∪ T )∗ .
Definition 4.16. A right linear ω-grammar (ω- RLG) with production repetition sets is
an ω-PSG whose productions are of the form A → uB or A → u, A, B ∈ N , u ∈ T∗.
In the sequel, we denote by ω-RLG, ω-CFG, ω-CSG, ω-PSG the families of right-linear,
context-free, context-sensitive, and arbitrary phrase structure ω-grammars, respectively.
For a family X of ω-grammars, we denote the associated families of (σ, ρ, π)-accepted
ω-languages by Lσ,ρ,π (X). As we mentioned, we denote simply Linf,=,π (X) by Lπ (X).
Chapter 4. On the Generative Power of (σ, ρ, π)-accepting ω-Grammars
Definition 4.17. Two ω-grammars G1 and G2 are (σ, ρ, π)-equivalent iff Lσ,ρ,π (G1 ) =
Lσ,ρ,π (G2 ). They are equivalent in π-derivation iff Lπ (G1 ) = Lπ (G2 ).
Definition 4.18. An ω-grammar with a unique designated set, i.e., |F| = 1, is called a
U-ω-grammar. We may denote the unique designated set by F ⊆ P instead of F = {F }.
Definition 4.19. An ω-grammar with the designated set F = 2^P is an unrestricted
ω-grammar, denoted by u-ω-grammar.
The previous definitions concern grammars with production repetition sets. Now we
switch to (σ, ρ, π)-acceptance w.r.t. variable repetition sets of context-free ω-grammars.
Definition 4.20. A context-free ω-grammar with variable repetition sets (ω- CFG - V)
is a quintuple G = (N, T, P, S, F), where G1 = (N, T, P, S) is an ordinary context-free
grammar and F ⊆ 2N . The sets in F are called the variable repetition sets.
Let d : α ⇒ω u ∈ T ω be an infinite derivation in G. The derivation d induces a
sequence of nonterminals dN = n1 n2 . . ., i.e., a mapping dN : N → N with dN (i) = ni ,
where ni ∈ N is the nonterminal which is the left-hand side of the i-th production in dP .
For π ∈ {l, nl}, we define the ω-languages (σ, ρ, π)-accepted by G as
Lσ,ρ,l (G) = {u ∈ T^ω | there exists a leftmost derivation d : S ⇒^ω u in G
such that dN is (σ, ρ)-accepting w.r.t. F}
Lσ,ρ,nl (G) = {u ∈ T^ω | there exists a derivation d : S ⇒^ω u in G
such that dN is (σ, ρ)-accepting w.r.t. F}
As usual, Linf,=,π (G) will be denoted by Lπ (G).
Definition 4.21. A right linear ω-grammar with variable repetition sets (ω- RLG - V) is
an ω- CFG - V whose productions are of the form A → uB or A → u, A, B ∈ N , u ∈ T ∗ .
The following theorem states that (inf, =, π)-acceptance w.r.t. the two types of
repetition sets defined above yields the same generative power. The proofs of the two
equations can be found in Remark 2.7 and Proposition 4.1.1 of [33].
Theorem 4.22 (Thm. 3.1.4 of [32]). (1) Ll (ω- CFG) = Ll (ω- CFG - V).
(2) Lnl (ω- CFG) = Lnl (ω- CFG - V).
Note that for right linear ω-grammars, every derivation is a leftmost derivation. Thus
we have the following theorem.
Theorem 4.23. Ll (ω- RLG) = Lnl (ω- RLG) = Ll (ω- RLG - V) = Lnl (ω- RLG - V).
However, the leftmost generation in ω-CFG is strictly more powerful than the
non-leftmost generation with the (inf, =)-accepting model.
Theorem 4.24 (Thm. 4.3.7 of [33]). (1) Lnl (ω- CFG) ⊂ Ll (ω- CFG).
(2) Lnl (ω- CFG - V) ⊂ Ll (ω- CFG - V).
Therefore, we choose the leftmost derivation as our standard definition of acceptance
in ω- CFG’s. That means, Lσ,ρ,l (G), Ll (G), Lσ,ρ,l (X), Ll (X) will be denoted simply by
Lσ,ρ (G), L(G), Lσ,ρ (X), L(X), respectively.
4.1.3 Main Characterizations
We denote by REGL, CFL, REL the families of regular, context-free, and recursively
enumerable languages, and by ω-REGL, ω-CFL, ω-REL the families of the ω-type ones that
are defined in this section, respectively.
Definition 4.25. For any family L of languages over alphabet Σ, the ω-Kleene closure
of L, denoted by ω- KC(L) is:
ω-KC(L) = {L ⊆ Σ^ω | L = ⋃_{i=1}^{k} Ui Vi^ω for some Ui , Vi ∈ L, 1 ≤ i ≤ k ∈ N}
where N is the set of natural numbers.
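The ω-Kleene closure can be explored concretely when the building blocks are finite. The sketch below is an illustration only: the definition allows arbitrary languages Ui, Vi, while here they are finite sets of strings, and the function name and representation are assumptions of this sketch. It enumerates all prefixes of a given length of the words in ⋃ Ui Vi^ω, using the fact that every such word has the "lasso" shape u·v1·v2·... with u ∈ Ui and each nonempty block vj ∈ Vi.

```python
def omega_kc_prefixes(pairs, length):
    """All prefixes of the given length of words in the union of
    U_i V_i^omega, where each pair (U, V) holds finite sets of strings.
    Every such word has the shape u v1 v2 v3 ... with u in U_i and
    each block v_j in V_i."""
    result = set()
    for U, V in pairs:
        if not any(v for v in V):      # V_i^omega is empty without a nonempty block
            continue
        frontier = set(U)
        while frontier:
            next_frontier = set()
            for w in frontier:
                if len(w) >= length:
                    result.add(w[:length])
                else:
                    for v in V:
                        if v:          # empty blocks never extend the prefix
                            next_frontier.add(w + v)
            frontier = next_frontier
    return result
```

For example, with the single pair U = {a}, V = {ab, b}, the length-3 prefixes are aab, aba and abb.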
Theorem 4.26 (Thm. 2.2.2 and 3.1.9 of [32], [16], [99]). For any ω-language L ⊆ Σω ,
the following conditions are equivalent:
1. L ∈ ω- KC(REGL)
2. L ∈ L(ω- FSA)
3. L ∈ Linf,u (U - ω- FSA) or L ∈ Linf,u (ω- FSA)
4. L ∈ L(ω- DFSA)
5. L ∈ L(ω- RLG) or L ∈ L(ω- RLG - V)
The ω-language L is a regular ω-language (ω- REGL), if it satisfies the conditions. It is
effectively given if it is given in one of the forms above.
Theorem 4.27 (Thm. 2.2.4 of [32], Thm. 1.8 and 1.12 of [33], [16], [99]). The family
of regular ω-languages (ω- REGL, i.e., L(ω- FSA)) is closed under all Boolean operations,
regular substitution and generalized sequential machine (gsm) mapping.
Theorem 4.28 (Thm. 2.2.5 of [32]). For any regular ω-languages L1 and L2 effectively
given, it is decidable whether (1) L1 is empty, finite or infinite; (2) L1 = L2 ; (3) L1 ⊆ L2 ;
(4) L1 ∩ L2 = ∅.
Theorem 4.29 (Thm. 4.1.8 of [32]). For any ω-language L ⊆ Σω , the following conditions
are equivalent:
1. L ∈ ω- KC(CF L)
2. L ∈ L(ω- PDA)
3. L ∈ Linf,u (U - ω- PDA) or L ∈ Linf,u (ω- PDA)
4. L ∈ L(ω- CFG) or L ∈ L(ω- CFG - V)
The ω-language L is a context-free ω-language (ω- CFL), if it satisfies the conditions. It
is effectively given if it is given in one of the forms above.
Theorem 4.30 (Section 1 of [33]). The family of context-free ω-languages (ω-CFL, i.e.,
L(ω-PDA)) is closed under union, intersection with ω-REGL, quotient with ω-REGL,
context-free substitution, and gsm mapping, but is not closed under intersection or
complementation.
Theorem 4.31 (Thm. 4.2.6 and 4.2.8 of [32]). For any context-free ω-language L and
regular ω-language R effectively given, it is decidable whether (1) L is empty, finite or
infinite; (2) L ⊆ R.
Theorem 4.32. For every m-ω- TM, m ≥ 1, there can be constructed a (σ, ρ)-equivalent
ω- TM. Therefore, Lσ,ρ (m-ω- TM) = Lσ,ρ (ω- TM).
Proof. The proof resembles that of Thm. 7.3 of [35]. If m ≥ 3, then the m-ω-TM can be
translated into a 2-ω-TM. For every 2-ω-TM M with the set of designated sets F, there
can be constructed an ω-TM M′ which simulates M by two tracks α, β on the tape. The
simulation applies the relative folding process for β w.r.t. α (see Section 6 of [35]). For
each (σ, ρ)-acceptance, one can define a set of designated sets H to finish the proof.
Theorem 4.33 (Theorems 5.1, 5.9 and 8.2 of [35]). For any ω-language L ⊆ Σω , the
following conditions are equivalent:
1. L ∈ Lσ,ρ (ω- TM), for σ ∈ {ran, inf} and ρ ∈ {u, ⊆, =}
2. L ∈ Lσ,ρ (m-ω- TM)
3. L ∈ Lnl (ω- PSG)
4. L ∈ Lnl (ω- CSG)
The ω-language L is a recursively enumerable ω-language (ω-REL) if it satisfies the
conditions. Note that ω-KC(REL) ⊂ ω-REL.
Note that (1) extends Thm. 8.2 of [35] where only i-acceptances are considered, and
(2) follows from Thm. 4.32.
Theorem 4.34 (Section 5.3 and Thm. 8.4 of [35]). The family of recursively enumerable
ω-languages (ω-REL, i.e., L(ω-TM)) is closed under union, intersection, recursively
enumerable substitution, and concatenation with recursively enumerable languages, but is
not closed under complementation.
The following result shows inclusion or equivalence between the families of ω-languages
recognized by various (σ, ρ)-accepting X-type ω-automata.
Theorem 4.35 (Thm. 3.5 of [49]). For the above accepting models of X-type ω-automata,
X ∈ {ω-FSA, ω-PDA, ω-TM}, we have Lran,⊆ (X) ⊆ Lran,u (X) = Lran,= (X) =
Linf,⊆ (X) ⊆ Linf,u (X) = Linf,= (X).
4.2 On the Generative Power of (σ, ρ, π)-accepting ω-Grammars
This section explores the relationships between (σ, ρ, π)-accepting ω-grammars
and (σ, ρ)-accepting ω-automata. After introducing some special forms of ω-grammars,
we study the leftmost and non-leftmost derivations of ω-grammars.
4.2.1 Special Forms of ω-Grammars
We introduce some special forms of ω-grammars that will be used to study the generative
power in the sequel. Some of the results extend those known for grammars on finite words.
We start with a special form of ω-RLG.
Lemma 4.36. Given an ω-RLG G = (N, T, P, S0 , F), there can be constructed a
(σ, ρ, π)-equivalent ω-RLG G′ = (N′, T, P′, S0 , H) whose productions are of the form
A → aB or A → a, a ∈ T ∪ {ε}, such that Lσ,ρ,π (G) = Lσ,ρ,π (G′).
Proof. Without loss of generality, assume that N = {Sk }0≤k<|N | and that P = {pk }1≤k≤|P |
consists of productions of the forms pk : Si → uSj and pk : Si → u, u ∈ T∗. We construct
G′ as follows: P′ consists of all productions of the following forms:
1. pk : Si → aSj , if pk : Si → aSj ∈ P , a ∈ T ∪ {ε}.
2. pk : Si → a, if pk : Si → a ∈ P , a ∈ T ∪ {ε}.
3. pkm : Sk(m−1) → akm Skm , for 1 ≤ m ≤ |u|, Sk0 = Si , Sk|u| = Sj , if pk : Si → uSj ∈
P , where u = ak1 · · · ak|u| , |u| ≥ 2.
4. pkm : Sk(m−1) → akm Skm , for 1 ≤ m ≤ |u|, Sk0 = Si , Sk|u| = ε, if pk : Si → u ∈ P ,
where u = ak1 · · · ak|u| , |u| ≥ 2.
We denote by Pk the set of productions named pk or pkm . Let F = {Fi }1≤i≤n ; we
construct the set H = {Hi }1≤i≤n where Hi = ⋃_{pk ∈Fi} Pk . It can be easily verified that
Lσ,ρ,π (G) = Lσ,ρ,π (G′) for the different (σ, ρ, π)-accepting models.
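The splitting of long right-linear productions in the proof above can be turned into a small procedure. The sketch below uses a hypothetical encoding: a production is a tuple (name, lhs, u, nt) with u a string of terminals and nt the trailing nonterminal or None, and the fresh names for pkm and the intermediate nonterminals follow an assumed naming scheme. It also returns the map pk ↦ Pk from which the sets Hi of H are built.

```python
def normalize_rlg(productions):
    """Split right-linear productions so that each one emits at most one
    terminal, as in Lemma 4.36.  A production is (name, lhs, u, nt)
    with u a string of terminals and nt the trailing nonterminal or
    None.  Returns (new_productions, name_map) where name_map sends
    each original name p_k to its set P_k of new names."""
    new_prods, name_map = [], {}
    for name, lhs, u, nt in productions:
        if len(u) <= 1:                    # already of the required form
            new_prods.append((name, lhs, u, nt))
            name_map[name] = {name}
            continue
        group, current = set(), lhs
        for m, a in enumerate(u, start=1):
            last = m == len(u)
            # intermediate nonterminals S_{km} get assumed fresh names
            target = nt if last else f"{lhs}_{name}_{m}"
            pname = f"{name}_{m}"
            new_prods.append((pname, current, a, target))
            group.add(pname)
            current = target
        name_map[name] = group
    return new_prods, name_map
```

For instance, p1 : S → abS is split into p1_1 : S → a S_p1_1 and p1_2 : S_p1_1 → b S, and name_map records P_{p1} = {p1_1, p1_2}.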
The next lemma concerns the ε-production-free ω-CFG.
Lemma 4.37. Given an ω-CFG G = (N, T, P, S, F), there can be constructed a
(σ, ρ, π)-equivalent ε-production-free ω-CFG G′ = (N, T, P′, S, H) with no productions of
the form A → ε, such that Lσ,ρ,π (G) = Lσ,ρ,π (G′), for (σ, ρ, π) ∉ {(ran, u, l), (ran, =, l)}.
Proof. Define NL(α) = {D ⊆ P | there exists a finite derivation d : α ⇒∗ ε s.t.
ran(dP ) = D}. Define the substitution h as: for A ∈ N , h(A) = A if NL(A) = ∅,
h(A) = {A, ε} if NL(A) ≠ ∅, and for a ∈ T , h(a) = a.
Let α = A1 · · · Al and β = B1 · · · Bl ∈ h(α), where ∀1 ≤ i ≤ l, Ai ∈ N ∪ T , Bi ∈ h(Ai ).
To accumulate the productions that rewrite some nonterminals in α to ε, we define
PE(β) = {⋃_{i=1}^{l} Pi | Pi = ∅ if Bi = Ai , and Pi ∈ NL(Ai ) if Bi = ε}.
Let P′ = {[p, K, β] : A → β | p : A → α ∈ P , ε ≠ β ∈ h(α), K ∈ PE(β)}. Define
Pro([p, K, β]) = {p} ∪ K for each production in P′, and Pro(H) = ⋃_{p∈H} Pro(p) for a
set H of productions.
An intuitive view of the simulation is shown in Fig. 4.1. The productions used to
rewrite Ai and Aj are accumulated in the name of the production p.
Let F = {Fk }1≤k≤n ; we construct the set H according to the different accepting models:
1. (ran, u, nl)-acceptance. H = {{p ∈ P′ | Pro(p) ∩ ⋃_{k=1}^{n} Fk ≠ ∅}}.
2. (ran, ⊆, nl)-acceptance. Let Hk = {H ⊆ P′ | Pro(H) ⊆ Fk }, then H = ⋃_{1≤k≤n} Hk .
3. (ran, =, nl)-acceptance. H = {H ⊆ P′ | Pro(H) ∈ F}.
4. (inf, u, nl)-acceptance. The same as (1).
Figure 4.1: Simulation by an ε-production-free ω-CFG (the sets Pi ∈ NL(Ai ) and
Pj ∈ NL(Aj ) used to erase Ai and Aj are accumulated in the production name
[p, · · · ∪ Pi ∪ · · · ∪ Pj ∪ · · · , β]).
5. (inf, ⊆, nl)-acceptance. The same as (2).
6. (inf, =, nl)-acceptance. The same as (3).
7. (ran, ⊆, l)-acceptance. The same as (2).
8. (inf, ⊆, l)-acceptance. The same as (2).
For the above cases, it can be easily verified that Lσ,ρ,π (G) = Lσ,ρ,π (G′). However, the
constructive proof does not work for the other cases.
For (inf, u, l)-acceptance and (inf, =, l)-acceptance, it is easy to prove Linf,u,l (ω-CFG) =
ω-KC(CFL) and Linf,=,l (ω-CFG) = ω-KC(CFL) by Thm. 4.29. Therefore, every
ω-language L accepted by the two models can be expressed in the form L = ⋃_{i=1}^{k} Ui Vi^ω,
for some natural number k, where ∀1 ≤ i ≤ k, ε ∉ Ui ∈ CFL, ε ∉ Vi ∈ CFL. Obviously,
the 2k context-free languages can be generated by 2k ε-production-free context-free
grammars. Thus, for each of the two accepting models, an ε-production-free ω-CFG that
accepts L can be constructed from the 2k ε-production-free context-free grammars.
Unfortunately, for (ran, u, l)-acceptance and (ran, =, l)-acceptance, the construction of an
ε-production-free ω-CFG is still an open problem. If we use a construction similar to that
of the nl-derivation case, the difficulty is the following: there may exist a production
[p, K, β] : A → β in P′, where Bi in β is supposed to be ε (Ai is rewritten to ε by some
productions) but Ai cannot be reached by the corresponding leftmost derivation in the
original ω-grammar G.
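The test NL(A) ≠ ∅ used by the substitution h in the proof of Lemma 4.37 is the classical "nullable nonterminal" check, computable by a fixpoint iteration. A minimal sketch follows; the tuple representation of productions is an assumption of this sketch, and terminals are simply symbols that never occur as a left-hand side.

```python
def nullable_nonterminals(productions):
    """Fixpoint computation of the nonterminals A with NL(A) != {},
    i.e. those deriving the empty word.  A production is (lhs, rhs)
    with rhs a tuple of symbols; terminals never occur as a lhs and
    are therefore never marked nullable."""
    nullable, changed = set(), True
    while changed:
        changed = False
        for lhs, rhs in productions:
            # lhs becomes nullable once every rhs symbol is nullable
            if lhs not in nullable and all(s in nullable for s in rhs):
                nullable.add(lhs)
                changed = True
    return nullable
```

With A → ε, B → AA, C → b, D → BC, the fixpoint marks exactly A and B as nullable.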
The following lemmas concern some special forms of ω- PSG.
Lemma 4.38. Given an ω-PSG G = (N, T, P, S, F), there can be constructed a
(σ, ρ, nl)-equivalent ω-PSG G′ = (N′, T, P′, S, H) whose productions are of the forms
α → β, A → a or A → ε, α, β ∈ N+, A ∈ N , a ∈ T , such that Lσ,ρ,nl (G) = Lσ,ρ,nl (G′).
Proof. Without loss of generality, assume that P = {pk }1≤k≤|P | and that the maximal
length of the right-hand sides of the productions is l. We construct G′ with N′ = N ∪ {aki |
a ∈ T, pk ∈ P, 1 ≤ i ≤ l} ∪ {Ek | pk ∈ P }. P′ consists of all productions of the following
forms:
1. pk : α → β, if pk : α → β ∈ P , α, β ∈ N + .
2. pk : A → ε, if pk : A → ε ∈ P , A ∈ N .
3. pk : α → a1,k1 ...a|γ|,k|γ| where ai,ki = ai if ai ∈ N , and pki : ai,ki → ai for each
ai ∈ T , if pk : α → γ ∈ P , α ∈ N + , γ = a1 ...a|γ| ∈ V + − N + .
4. pk : α → Ek and p′k : Ek → ε, if pk : α → ε ∈ P , α ∈ N+ − N .
We define the function f as follows:
1. f (pk ) = {pk }, if pk ∈ P is of the form in case (1) or (2);
2. f (pk ) = {pk } ∪ {pki | ai ∈ T and ai is the i-th symbol of γ}, if pk ∈ P is of the
form in case (3);
3. f (pk ) = {pk , p′k }, if pk ∈ P is of the form in case (4).
For a set H of productions, f (H) = ⋃_{pk ∈H} f (pk ). Let F = {Fi }1≤i≤n ; we construct
the set H = {f (Fi )}1≤i≤n . It can be easily verified that Lσ,ρ,nl (G) = Lσ,ρ,nl (G′) for the
different accepting models.
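Case (3) of the construction, which lifts terminals occurring in right-hand sides into fresh nonterminals aki with companion productions aki → a, can be sketched as follows. Assumptions of this sketch: productions are (name, lhs, rhs) tuples of symbol strings, uppercase strings are nonterminals and lowercase ones are terminals, and the chain/erasing cases (2) and (4) are left out.

```python
def psg_normal_form(productions):
    """Case (3) of Lemma 4.38: lift each terminal a occurring in a
    right-hand side to a fresh nonterminal a_{ki} with a companion
    production a_{ki} -> a.  A production is (name, lhs, rhs) with
    lhs/rhs tuples of symbols; by convention here, uppercase symbols
    are nonterminals and lowercase ones are terminals."""
    new_prods, fmap = [], {}
    for name, lhs, rhs in productions:
        if all(s.isupper() for s in rhs):      # nothing to lift
            new_prods.append((name, lhs, rhs))
            fmap[name] = {name}
            continue
        group, body = {name}, []
        for i, s in enumerate(rhs, start=1):
            if s.islower():                    # terminal: introduce a_{ki}
                fresh, sub = f"{s}_{name}_{i}", f"{name}_{i}"
                new_prods.append((sub, (fresh,), (s,)))
                group.add(sub)
                body.append(fresh)
            else:
                body.append(s)
        new_prods.append((name, lhs, tuple(body)))
        fmap[name] = group
    return new_prods, fmap
```

The returned map plays the role of f above: it sends each original production name to the set of names replacing it.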
We have similar lemmas for ω- CFG and ω- CSG, by using the same proof technique
as the above one.
Lemma 4.39. Given an ω-CFG G = (N, T, P, S, F), there can be constructed a
(σ, ρ, nl)-equivalent ω-CFG G′ = (N′, T, P′, S, H) whose productions are of the forms
A → β, A → a or A → ε, β ∈ N+, A ∈ N , a ∈ T , such that Lσ,ρ,nl (G) = Lσ,ρ,nl (G′).
Lemma 4.40. Given an ω-CSG G = (N, T, P, S, F), there can be constructed a
(σ, ρ, nl)-equivalent ω-CSG G′ = (N′, T, P′, S, H) whose productions are of the forms
α → β or A → a, α, β ∈ N+, |α| ≤ |β|, A ∈ N , a ∈ T , such that Lσ,ρ,nl (G) = Lσ,ρ,nl (G′).
For completeness, the following definition concerning $-boundary is taken from Def.
4.5 of [35].
Definition 4.41. An ω- PSG (ω- CSG, resp.) with $-boundary is an ω-grammar G =
(N ∪ {$, S}, T, P, S0 , F), in which each production is of one of the following forms (1)-(4)
((1)-(3), resp.):
1. α → β, α, β ∈ N + ,
2. S → $α, α ∈ N + ,
3. $A → a$, A ∈ N , a ∈ T ,
4. A → ε, A ∈ N .
The $-boundary divides sentential forms into two parts: the left part consists of the
terminals generated so far, and the right part consists of the nonterminals still to be
rewritten.
The following lemma extends Thm. 4.6 of [35] where only 3-acceptance was considered
(furthermore, the assumption in the proof was used without justification, while we provide
a proof in Lemma 4.38).
Lemma 4.42. Given an ω-PSG (ω-CSG, resp.), there can be constructed a
(σ, ρ, nl)-equivalent ω-PSG (ω-CSG, resp.) with $-boundary.
Proof. Let G = (N, T, P, S, F) be an ω-PSG. By Lemma 4.38, we assume P = P1 ∪
P2 ∪ P3 , where P1 = {pk : α → β | α, β ∈ N+}, P2 = {pk : A → a | A ∈ N, a ∈ T },
P3 = {pk : A → ε | A ∈ N }. There can be constructed a (σ, ρ, nl)-equivalent ω-PSG
G′ = (N ∪ {ā | a ∈ T } ∪ {S1 , $}, T, P′, S1 , H), where P′ = Ps ∪ P1 ∪ P′2 ∪ P3 ∪ P4 , and
Ps = {S1 → $S}, P′2 = {pk : A → ā | pk : A → a ∈ P2 }, P4 = {pa : $ā → a$ | a ∈ T }.
Let F = {Fi }1≤i≤n ; we construct the set H according to the different accepting models:
1. (ran, u, nl)-acceptance. H = F.
2. (ran, ⊆, nl)-acceptance. H = {Fi ∪ Ps ∪ P4 }1≤i≤n .
3. (ran, =, nl)-acceptance. Let Hi = {Fi ∪ Ps ∪ H | ∅ ⊂ H ⊆ P4 }, then H = ⋃_{i=1}^{n} Hi .
4. (inf, u, nl)-acceptance. The same as (1).
5. (inf, ⊆, nl)-acceptance. H = {Fi ∪ P4 }1≤i≤n .
6. (inf, =, nl)-acceptance. Let Hi = {Fi ∪ H | ∅ ⊂ H ⊆ P4 }, then H = ⋃_{i=1}^{n} Hi .
It can be easily verified that Lσ,ρ,nl (G) = Lσ,ρ,nl (G′).
If G is an ω-CSG, P3 above will be empty and G′ will be an ω-CSG with $-boundary.
These special forms will facilitate the proofs in the sequel.
4.2.2 Leftmost Derivation of ω-Grammars
In the case of leftmost derivation, we will show the expected equivalences of ω-RLG and
ω-FSA, and of ω-CFG and ω-PDA. Furthermore, the generative power of ω-CSG and
ω-PSG is not greater than that of ω-PDA. Most of the results are obtained by extending
those on grammars on finite words.
Theorem 4.43. Lσ,ρ,l (ω- RLG) = Lσ,ρ (ω- FSA).
Proof. (i) Lσ,ρ,l (ω- RLG) ⊆ Lσ,ρ (ω- FSA). Let G = (N, T, P, S0 , F) be an ω- RLG with
N = {Sk }0≤k<|N | , P = {pk }1≤k≤|P | . Without loss of generality, we assume the productions
are of the form Si → aSj or Si → a, a ∈ T ∪ {ε} (by Lemma 4.36). Construct an ω-FSA
A = (Q, T, δ, q0 , H), where:
1. Q = {q0 } ∪ {qik | pk : Si → γ ∈ P , γ ∈ (N ∪ T )∗ }, q0 is the start state.
2. δ(q0 , ε) = {q0k | pk : S0 → γ ∈ P , γ ∈ (N ∪ T )∗ }.
3. δ(qik , a) contains {qjn | pn : Sj → γ ∈ P , γ ∈ (N ∪ T )∗ }, for each production
pk : Si → aSj ∈ P , where a ∈ T ∪ {ε}.
We define f (pk ) = qik if pk : Si → γ ∈ P , γ ∈ (N ∪ T )∗ . Obviously, G uses pk , iff A uses a
transition starting from qik in a simulation. Let F = {Fk }1≤k≤n , we construct the set H
according to different accepting models:
1. (ran, u)-acceptance. Let Hk = {f (pi ) | pi ∈ Fk }, then H = {Hk }1≤k≤n .
2. (ran, ⊆)-acceptance. Let Hk = {q0 } ∪ {f (pi ) | pi ∈ Fk }, then H = {Hk }1≤k≤n .
3. (ran, =)-acceptance. The same as (2).
4. (inf, u)-acceptance. The same as (1).
5. (inf, ⊆)-acceptance. The same as (1).
6. (inf, =)-acceptance. The same as (1).
4.2. On the Generative Power of (σ, ρ, π)-accepting ω-Grammars
53
It can be easily verified that Lσ,ρ,l (G) = Lσ,ρ (A).
(ii) Lσ,ρ,l (ω- RLG) ⊇ Lσ,ρ (ω- FSA). Let A = (Q, Σ, δ, q0 , F) be an ω- FSA with Q =
{qk }0≤k<|Q| . Construct an ω- RLG G = (N, Σ, P, S0 , H) where:
1. for each qi ∈ Q, there is a nonterminal Si ∈ N , and S0 is the start symbol.
2. for each transition δ(qi , a) ∋ qj , where a ∈ Σ ∪ {ε}, there is a production piaj : Si →
aSj ∈ P .
We denote by Pi the set of productions piaj ∈ P for any a ∈ Σ ∪ {ε}, Sj ∈ N . Obviously,
A uses a transition starting from qi , iff G uses a production in Pi in a simulation. Let
F = {Fk }1≤k≤n , we construct the set H according to different accepting models:
1. (ran, u, l)-acceptance. Let Hk = ⋃_{qi ∈Fk} Pi , then H = {Hk }1≤k≤n .
2. (ran, ⊆, l)-acceptance. The same as (1).
3. (ran, =, l)-acceptance. Let Hk = {H ⊆ ⋃_{qi ∈Fk} Pi | ∀qi ∈ Fk , H ∩ Pi ≠ ∅}, then
H = ⋃_{k=1}^{n} Hk .
4. (inf, u, l)-acceptance. The same as (1).
5. (inf, ⊆, l)-acceptance. The same as (1).
6. (inf, =, l)-acceptance. The same as (3).
It can be easily verified that Lσ,ρ,l (G) = Lσ,ρ (A).
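Construction (i) of the proof above is directly executable once the productions are in the normal form of Lemma 4.36. In the sketch below, the tuple encoding and the representation of a state q_ik as the pair (S_i, k) are assumptions; terminating productions Si → a contribute states but no outgoing transitions, which matches the construction (they can never occur in an infinite derivation).

```python
def rlg_to_fsa(productions, start_nt="S0"):
    """Build the transition structure of construction (i) in Thm. 4.43
    from a normal-form omega-RLG.  A production is (k, lhs, a, nt) with
    a a terminal (or "" for epsilon) and nt the target nonterminal or
    None; a state q_{ik} is represented by the pair (S_i, k)."""
    states_of = {}                  # S_j -> { (S_j, n) | p_n has lhs S_j }
    for k, lhs, a, nt in productions:
        states_of.setdefault(lhs, set()).add((lhs, k))
    initial = set(states_of.get(start_nt, set()))   # delta(q0, epsilon)
    delta = {}                      # (state, a) -> successor states
    for k, lhs, a, nt in productions:
        if nt is not None:          # p_k : S_i -> a S_j feeds the states of S_j
            delta.setdefault(((lhs, k), a), set()).update(states_of.get(nt, set()))
    return initial, delta
```

Using production pk in the grammar then corresponds exactly to taking a transition out of the state (S_i, k) in the automaton, which is what the sets Hk of the proof rely on.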
Theorem 4.44. Lσ,ρ,l (ω- CFG) = Lσ,ρ (ω- PDA).
Proof. (i) Lσ,ρ,l (ω- CFG) ⊆ Lσ,ρ (ω- PDA). Let G = (N, T, P, S, F) be an ω- CFG with
P = {pi }1≤i≤|P | . Construct an ω-PDA D = (Q, T, Γ, δ, q′0 , Z0 , H), where Γ = N ∪ T ∪ {Z0 },
Q = {q′0 , q0 } ∪ {qi | pi ∈ P }, and δ is defined as follows:
1. δ(q′0 , ε, Z0 ) = (q0 , SZ0 ),
2. δ(q0 , a, a) = (q0 , ε) for all a ∈ T ,
3. δ(q0 , ε, A) ∋ (qi , ε) if pi : A → γ ∈ P , A ∈ N , γ ∈ (N ∪ T )∗ ,
4. δ(qi , ε, X) = (q0 , γX) if pi : A → γ ∈ P , X ∈ Γ.
Let F = {Fk }1≤k≤n , we construct the set H according to different accepting models:
1. (ran, u)-acceptance. Let Hk = {qi | pi ∈ Fk }, then H = {Hk }1≤k≤n .
2. (ran, ⊆)-acceptance. Let Hk = {q′0 , q0 } ∪ {qi | pi ∈ Fk }, then H = {Hk }1≤k≤n .
3. (ran, =)-acceptance. The same as (2).
4. (inf, u)-acceptance. The same as (1).
5. (inf, ⊆)-acceptance. Let Hk = {q0 } ∪ {qi | pi ∈ Fk }, then H = {Hk }1≤k≤n .
6. (inf, =)-acceptance. The same as (5).
54
Chapter 4. On the Generative Power of (σ, ρ, π)-accepting ω-Grammars
It can be easily verified that Lσ,ρ,l (G) = Lσ,ρ (D).
(ii) Lσ,ρ,l (ω- CFG) ⊇ Lσ,ρ (ω- PDA). Let D = (Q, Σ, Γ, δ, q0 , Z0 , F) be an ω- PDA.
Construct an ω- CFG G = (N, Σ, P, S, H), where N is the set of objects of the form
[q, B, r] (denoting popping B from the stack by several transitions, switching the state
from q to r), q, r ∈ Q, B ∈ Γ, P is the union of the following sets of productions:
1. Ps = {S → [q0 , Z0 , qi ] | qi ∈ Q}.
2. P 0 = {[qi , B, qj ] → a[qj1 , B1 , qj2 ][qj2 , B2 , qj3 ] · · · [qjm , Bm , qj ] |
δ(qi , a, B) = (qj1 , B1 B2 ...Bm ), qj2 , ..., qjm , qj ∈ Q}, where a ∈ Σ ∪ {ε}, and
B, B1 , ..., Bm ∈ Γ. (If m = 0, then the production is [qi , B, qj1 ] → a.)
We denote by Pi the set of productions of the form [qi , B, qj ] → γ, for any B ∈ Γ, qj ∈ Q,
γ ∈ N ∗ ∪ ΣN ∗ . Obviously, D uses a transition starting from qi , iff G uses a production in
Pi in a simulation.
Let F = {Fk }1≤k≤n ; we construct the set H according to the different accepting models:
1. (ran, u, l)-acceptance. Let Hk = ⋃_{qi ∈Fk} Pi , then H = {Hk }1≤k≤n .
2. (ran, ⊆, l)-acceptance. Let Hk = Ps ∪ ⋃_{qi ∈Fk} Pi , then H = {Hk }1≤k≤n .
3. (ran, =, l)-acceptance. Let Hk = {H ⊆ P | H ∩ Ps ≠ ∅ and ∀qi ∈ Fk , H ∩ Pi ≠ ∅},
then H = ⋃_{k=1}^{n} Hk .
4. (inf, u, l)-acceptance. The same as (1).
5. (inf, ⊆, l)-acceptance. The same as (1).
6. (inf, =, l)-acceptance. Let Hk = {H ⊆ P′ | ∀qi ∈ Fk , H ∩ Pi ≠ ∅}, then H = ⋃_{k=1}^{n} Hk .
It can be easily verified that Lσ,ρ,l (G) = Lσ,ρ (D).
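The "triple construction" of part (ii) enumerates all choices of intermediate states for each PDA move. The sketch below handles a single move; the argument encoding is an assumption of this sketch, and a full translation would iterate over every move in δ and add the start productions Ps.

```python
from itertools import product

def pda_transition_to_productions(states, q_i, a, B, q_j1, pushed):
    """Triple construction of part (ii) of Thm. 4.44: a PDA move
    delta(q_i, a, B) containing (q_j1, B1...Bm) yields the productions
    [q_i, B, q_j] -> a [q_j1, B1, q_j2] ... [q_jm, Bm, q_j]
    for all choices of intermediate states q_j2, ..., q_jm, q_j."""
    m = len(pushed)
    if m == 0:                                # pop move: [q_i, B, q_j1] -> a
        return [((q_i, B, q_j1), a, ())]
    prods = []
    for mids in product(states, repeat=m):    # choices of q_j2, ..., q_jm, q_j
        chain = (q_j1,) + mids
        body = tuple((chain[x], pushed[x], chain[x + 1]) for x in range(m))
        prods.append(((q_i, B, chain[m]), a, body))
    return prods
```

Each produced triple [q, B, r] reads as "pop B from the stack by several moves while switching the state from q to r", as in the proof.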
Theorem 4.45. Lσ,ρ,l (ω- PSG) = Lσ,ρ,l (ω- CFG).
Proof. (i) Lσ,ρ,l (ω- PSG) ⊇ Lσ,ρ,l (ω- CFG) is trivial.
(ii) Lσ,ρ,l (ω- PSG) ⊆ Lσ,ρ,l (ω- CFG). We only need to prove Lσ,ρ,l (ω- PSG) ⊆ Lσ,ρ (ω- PDA)
and the result follows from Thm. 4.44.
Let G = (N, T, P, S, F) be an ω-PSG with P = {pi }1≤i≤|P | . Construct an ω-PDA
D = (Q, T, Γ, δ, q′0 , Z0 , H), where Γ = N ∪ T ∪ {Z0 }. Let l be the maximal length of the
left-hand sides of the productions of P ; then Q = {q′0 , q0 } ∪ {q[iα] | pi ∈ P, α ∈ ⋃_{j=1}^{l} N^j }.
δ is defined as follows:
1. δ(q′0 , ε, Z0 ) = (q0 , SZ0 ),
2. δ(q0 , a, a) = (q0 , ε) for all a ∈ T ,
3. δ(q0 , ε, A) ∋ (q[iA] , ε) if pi : Aγ1 → γ ∈ P , A ∈ N , γ1 ∈ N∗, γ ∈ (N ∪ T )∗,
4. δ(q[iα] , ε, A) = (q[iαA] , ε) if pi : αAγ1 → γ ∈ P , α ∈ N+,
5. δ(q[iα] , ε, X) = (q0 , γX) if pi : α → γ ∈ P , X ∈ Γ.
We denote by Pref(α) the set of prefixes (of length between 1 and |α|) of a finite word α.
Let F = {Fk }1≤k≤n and Qi = {q[iβ] | pi : α → γ ∈ P and β ∈ Pref(α)}; we construct the
set H according to the different accepting models:
1. (ran, u)-acceptance. Let Hk = {q[iα] | pi : α → γ ∈ Fk }, then H = {Hk }1≤k≤n .
2. (ran, ⊆)-acceptance. Let Hk = {q′0 , q0 } ∪ ⋃_{pi ∈Fk} Qi , then H = {Hk }1≤k≤n .
3. (ran, =)-acceptance. The same as (2).
4. (inf, u)-acceptance. The same as (1).
5. (inf, ⊆)-acceptance. Let Hk = {q0 } ∪ ⋃_{pi ∈Fk} Qi , then H = {Hk }1≤k≤n .
6. (inf, =)-acceptance. The same as (5).
It can be easily verified that Lσ,ρ,l (G) = Lσ,ρ (D).
Theorem 4.46. Lσ,ρ,l (ω-CSG) = Lσ,ρ,l (ω-CFG), for (σ, ρ) ∉ {(ran, u), (ran, =)}.
Proof. (i) Lσ,ρ,l (ω- CSG) ⊆ Lσ,ρ,l (ω- CFG) follows from Lσ,ρ,l (ω- CSG) ⊆ Lσ,ρ,l (ω- PSG)
and Thm. 4.45.
(ii) Lσ,ρ,l (ω- CFG) ⊆ Lσ,ρ,l (ω- CSG) follows from Lemma 4.37.
For the remaining two accepting models, we can only prove case (i) (the next theorem).
Theorem 4.47. Lσ,ρ,l (ω- CSG) ⊆ Lσ,ρ,l (ω- CFG), for (σ, ρ) ∈ {(ran, u), (ran, =)}.
Unfortunately, whether the equivalence of ω-CSG and ω-CFG holds for these two
accepting models is still an open problem.
Because of the equivalence between (σ, ρ, l)-accepting ω- CFG and (σ, ρ)-accepting
ω- PDA, we have the following corollary.
Corollary 4.48. (i) Lσ,ρ,l (ω- PSG) = Lσ,ρ (ω- PDA).
(ii) Lσ,ρ,l (ω-CSG) = Lσ,ρ (ω-PDA), for (σ, ρ) ∉ {(ran, u), (ran, =)}.
(iii) Lσ,ρ,l (ω-CSG) ⊆ Lσ,ρ (ω-PDA), for (σ, ρ) ∈ {(ran, u), (ran, =)}.
4.2.3 Non-leftmost Derivation of ω-Grammars
In the case of non-leftmost derivation, we will show the expected equivalences of ω-RLG
and ω-FSA, and of ω-PSG and ω-TM. Furthermore, the generative power of ω-CFG is
not greater than (it may be strictly included in, or equal to) that of ω-PDA. The
generative power of ω-CSG is also equal to that of ω-TM. Most of the results are obtained
by extending those on grammars on finite words.
Theorem 4.49. Lσ,ρ,nl (ω- RLG) = Lσ,ρ,l (ω- RLG) = Lσ,ρ (ω- FSA).
Proof. It is trivial, since every derivation is a leftmost derivation for ω- RLG.
To study the generative power of ω- CFG, we must have in mind the following fact.
Recall that for CF G on finite words, given G ∈ CF G, the language generated by leftmost
derivations and the one generated by non-leftmost derivations are equal, i.e., Ll (G) =
Lnl (G). For ω-CFG, given G ∈ ω-CFG, if we do not take into account the production
repetition sets F, then G generates u ∈ T^ω leftmostly iff G generates u in a non-leftmost
derivation, because u is the leftmost substring consisting of terminals in a sentential form.
Therefore, in some non-leftmost derivations of ω-CFG's, a string may appear in the
unreached part of the sentential form (it does not contribute to u); its possible impact on
the derivation then lies only in its set of applied productions.
Definition 4.50. Let G = (N, T, P, S, F) be an ω- CFG and V = N ∪ T . Let d be a
non-leftmost infinite derivation in G, d : α1 ⇒ α2 ⇒ · · · ⇒ αi ⇒ · · · . Every sentential
form αi can be decomposed into αi = βi γi , and the derivation starting from αi can be
decomposed into dβi : βi ⇒ βi+1 ⇒ · · · and dγi : γi ⇒ γi+1 ⇒ · · · where αk = βk γk for
k ≥ i, such that
1. for any nonterminal A in βi s.t. βi = γAγ 0 , γ, γ 0 ∈ V ∗ , dβi rewrites γ ⇒∗ T ∗ ,
2. for any k ≥ i, βk ∈ V ∗ N V ∗ .
We say βi and γi are the reached part and the unreached part of αi , respectively.
If a string γ appears in the unreached part of a sentential form, then it does not
contribute to the terminals in the generated ω-word, but only contributes to the sets of
the productions used (by its transient sets) and the sets of the productions that appear
infinitely often (by its self-providing sets).
Definition 4.51. Let G = (N, T, P, S, F) be an ω- CFG. For any γ ∈ (N ∪ T )∗ , the class
of self-providing sets SP (γ) and the class of transient sets TR(γ) are defined as
SP (γ) = {D ⊆ P | there exists an infinite nl-derivation d starting in γ s.t. inf(dP ) = D}
TR(γ) = {D ⊆ P | there exists a finite nl-derivation d : γ ⇒∗ γ′ for some γ′ ∈ (N ∪ T )∗
s.t. ran(dP ) = D}
It follows immediately that
SP (αβ) = SP (α) ∪ SP (β) ∪ {H1 ∪ H2 | H1 ∈ SP (α), H2 ∈ SP (β)}
TR(αβ) = TR(α) ∪ TR(β) ∪ {H1 ∪ H2 | H1 ∈ TR(α), H2 ∈ TR(β)}
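Both closure equations share the same combination rule, which can be stated as a small set operation. In this sketch (the representation is an assumption), a set of productions is a frozenset and a "class" of such sets is a Python set of frozensets.

```python
def combine(classes_alpha, classes_beta):
    """The combination rule shared by SP and TR for a concatenation
    alpha.beta: a derivation may rewrite only alpha, only beta, or
    interleave both, which contributes the pairwise unions."""
    mixed = {h1 | h2 for h1 in classes_alpha for h2 in classes_beta}
    return classes_alpha | classes_beta | mixed
```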
Using the above concepts, we are ready to show that the nl-derivation of an ω-CFG can
be simulated by the computation of an ω-PDA.
Theorem 4.52. Lσ,ρ,nl (ω- CFG) ⊆ Lσ,ρ (ω- PDA).
Proof. Given a (σ, ρ, nl)-accepting ω- CFG G = (N, T, P, S, F), we need only show how to
construct an ω- PDA D = (Q, T, Γ, δ, q, S, H), such that G and D accept exactly the same
ω-language.
Without loss of generality (because of the closure property under union), we may
assume F consists of only one repetition set, denoted by F . We may also assume that
P = P1 ∪ P2 , where the productions in P1 are of the form A → α, α ∈ N+, and those in
P2 are of the form A → a, a ∈ T ∪ {ε} (by Lemma 4.39). Let TR1 (γ) = {D ⊆ F | there
exists a finite nl-derivation d : γ ⇒∗ γ′ for some γ′ ∈ (N ∪ T )∗ s.t. ran(dP ) = D}.
(Case 1) (inf, ⊆, nl)-acceptance. Assume P = {pi }1≤i≤|P | ; we construct an ω-PDA D
with Q = {q, q0 } ∪ {qi | pi ∈ P } and Γ = N ∪ T ∪ {Z} where Z ∉ N ∪ T . The transition
function δ is defined as follows.
1. δ(q, ε, S) = (q0 , SZ),
2. δ(q0 , ε, A) ∋ (qk , ε) for pk : A → γ ∈ P ,
3. δ(qk , ε, X) = (q0 , γX) for X ∈ Γ, pk : A → γ ∈ P ,
4. δ(q0 , a, a) = (q0 , ε) for a ∈ T .
Let H = {{q0 } ∪ {qk | pk ∈ F }}, it can be easily verified that Linf,⊆,nl (G) = Linf,⊆ (D).
(Case 2) (ran, ⊆, nl)-acceptance. The same as Case 1, except H = {{q, q0 } ∪ {qk | pk ∈
F }}.
(Case 3) (inf, u, nl)-acceptance. We construct an ω-PDA D with Q = {q0 , q1 } ×
{0, 1, 2}, Γ = N ∪ {Z} where Z ∉ N , and start state [q0 , 0]. Define the function f as
follows: if p ∈ F , then f (0, p) = 1, else f (0, p) = 0; for p ∈ P , f (1, p) = 0 and f (2, p) = 2.
The transition function δ is defined as follows.
1. δ([q0 , i], a, A) = ([q0 , f (i, p)], ε), if p : A → a ∈ P2 ,
2. δ([q0 , i], ε, A) ∋ ([q0 , f (i, p)], γ), if p : A → γ ∈ P1 ,
3. δ([q0 , i], ε, A) ∋ ([q0 , 2], γ1 Z), if A → γ1 γ2 ∈ P1 , γ1 , γ2 ≠ ε, and ∃K ∈ SP (γ2 ) s.t.
F ∩ K ≠ ∅ (nondeterministically choose γ2 as an unreached part),
4. δ([q0 , i], ε, A) ∋ ([q1 , i], A) for A ∈ N (start the derivation of the reached part using
only productions that appear infinitely often),
5. δ([q1 , i], a, A) = ([q1 , f (i, p)], ε), if p : A → a ∈ P2 ,
6. δ([q1 , i], ε, A) ∋ ([q1 , f (i, p)], γ) if p : A → γ ∈ P1 ,
7. δ([q1 , i], ε, A) ∋ ([q1 , 1], γ1 Z) if A → γ1 γ2 ∈ P1 , γ1 , γ2 ≠ ε, and ∃K ∈ TR(γ2 ) s.t.
F ∩ K ≠ ∅.
Let H = {{[q1 , 1], [q1 , 2]}}, it can be easily verified that Linf,u,nl (G) = Linf,u (D).
(Case 4) (ran, u, nl)-acceptance. We construct an ω-PDA D with Q = {q0 } × {0, 1},
Γ = N ∪ {Z} where Z ∉ N , and start state [q0 , 0]. Define the function f as follows: if
p ∈ F , then f (0, p) = 1, else f (0, p) = 0; for p ∈ P , f (1, p) = 1. The transition function δ
is defined as follows.
1. δ([q0 , i], a, A) ∋ ([q0 , f (i, p)], ε), if p : A → a ∈ P2 ,
2. δ([q0 , i], ε, A) ∋ ([q0 , f (i, p)], γ), if p : A → γ ∈ P1 ,
3. δ([q0 , i], ε, A) ∋ ([q0 , 1], γ1 Z), if A → γ1 γ2 ∈ P1 , γ1 , γ2 ≠ ε, and ∃K ∈ TR(γ2 ) s.t.
F ∩ K ≠ ∅ (nondeterministically choose γ2 as an unreached part).
Let H = {{[q0 , 1]}}, it can be easily verified that Lran,u,nl (G) = Lran,u (D).
(Case 5) (inf, =, nl)-acceptance. We construct an ω-PDA D with Q = {q0 } × 2^{2^F} ∪
{q1 } × 2^F × 2^F ∪ {q} × 2^F , Γ = N ∪ {Z} where Z ∉ N , and start state [q0 , ∅]. The
transition function δ is defined as follows.
1. δ([q0 , H], a, A) ∋ ([q0 , H], ε) for H ⊆ 2^F , if A → a ∈ P2 ,
2. δ([q0 , H], ε, A) ∋ ([q0 , H], γ) for H ⊆ 2^F , if A → γ ∈ P1 ,
3. δ([q0 , H], ε, A) ∋ ([q0 , H1 ], γ1 Z) for H ⊆ 2^F , if A → γ1 γ2 ∈ P1 , γ1 , γ2 ≠ ε, and
H1 = {K1 ∪ K2 | K1 ∈ H, K2 ∈ SP (γ2 )} (nondeterministically choose γ2 as an
unreached part, and accumulate the productions that appear infinitely often by
rewriting γ2 ),
4. δ([q0 , H], ε, A) ∋ ([q1 , F − K, ∅], A) for H ⊆ 2^F , A ∈ N , and K ∈ H (start the
derivation of the reached part using only productions that appear infinitely often;
the third component of the state accumulates the productions applied infinitely
often hereafter, and F − K computes the productions that still need to appear
infinitely often),
5. δ([q1 , K, H], a, A) ∋ ([q1 , K, H ∪ {p}], ε) for K, H ⊆ F , if p : A → a ∈ P2 and p ∈ F
(accumulate a production that appears once),
6. δ([q1 , K, H], ε, A) ∋ ([q1 , K, H ∪ {p}], γ) for K, H ⊆ F , if p : A → γ ∈ P1 and p ∈ F
(accumulate a production that appears once),
7. δ([q1 , K, H], ε, A) ∋ ([q1 , K, H ∪ H1 ∪ {p}], γ1 Z) for K, H ⊆ F , H1 ∈ TR1 (γ2 ), if
p : A → γ1 γ2 ∈ P1 , γ1 , γ2 ≠ ε, and p ∈ F (accumulate the productions that appear
in γ2 ),
8. δ([q1 , K, H], ε, A) ∋ ([q, K], A) for K, H ⊆ F , A ∈ N , if K ⊆ H (all the required
productions appearing infinitely often have been accumulated),
9. δ([q, K], ε, A) ∋ ([q1 , K, ∅], A) for K ⊆ F , A ∈ N (restart accumulating, because
each production in K has been used at least once since the last time D entered
[q1 , K, ∅]).
Let H = {{q} × 2^F }; it can be easily verified that there can be constructed an ω-PDA
D′ from D by modifying H, such that Linf,=,nl (G) = Linf,u (D) = Linf,= (D′).
(Case 6) (ran, =, nl)-acceptance. We construct an ω-PDA D with Q = {q0 } × 2^F ,
Γ = N ∪ {Z} where Z ∉ N , and start state [q0 , ∅]. The transition function δ is defined as
follows.
1. δ([q0 , H], a, A) ∋ ([q0 , H1 ], ε) for H ⊆ F , if p : A → a ∈ P2 and p ∈ F , where
H1 = H ∪ {p},
2. δ([q0 , H], ε, A) ∋ ([q0 , H1 ], γ) for H ⊆ F , if p : A → γ ∈ P1 and p ∈ F , where
H1 = H ∪ {p},
3. δ([q0 , H], ε, A) ∋ ([q0 , H1 ], γ1 Z) for H ⊆ F , if p : A → γ1 γ2 ∈ P1 and p ∈ F ,
γ1 , γ2 ≠ ε, and H1 ∈ {H ∪ {p} ∪ K | K ∈ TR1 (γ2 )} (nondeterministically choose γ2
as an unreached part).
Let H = {H ⊆ Q | [q0 , F ] ∈ H}. It can be easily verified that Lran,=,nl (G) = Lran,= (D).

The above result also means that Lσ,ρ,nl (ω-CFG) ⊆ Lσ,ρ,l (ω-CFG). Now we consider
whether the proper inclusion or the equivalence holds.
Theorem 4.53. (i) Lσ,ρ,nl (ω- CFG) ⊂ Lσ,ρ,l (ω- CFG), for (σ, ρ) ∈ {(inf, u), (inf, =)}.
(ii) Lσ,ρ,nl (ω- CFG) = Lσ,ρ,l (ω- CFG), for (σ, ρ) ∈ {(ran, ⊆), (inf, ⊆)}.
Proof. We consider the accepting models one by one.
1. Linf,=,nl (ω-CFG) ⊂ Linf,=,l (ω-CFG). It was proved that there exists an ω-language
L = {an bn | n ≥ 1}ω such that L ∈ Linf,=,l (ω-CFG) but L ∉ Linf,=,nl (ω-CFG) (see
Proposition 4.3.6 of [33]). It follows that Linf,=,l (ω-CFG) ⊈ Linf,=,nl (ω-CFG), thus
Linf,=,l (ω-CFG) ≠ Linf,=,nl (ω-CFG).
2. Linf,u,nl (ω-CFG) ⊂ Linf,u,l (ω-CFG). On the one hand, we have L ∈ Linf,u,l (ω-CFG),
since Linf,=,l (ω-CFG) = Linf,u,l (ω-CFG) by Thm. 4.44 and Thm. 4.35. On the
other hand, L ∉ Linf,u,nl (ω-CFG), because it is easy to see that Linf,u,nl (ω-CFG) ⊆
Linf,=,nl (ω-CFG).
3. Lσ,⊆,nl (ω- CFG) = Lσ,⊆,l (ω- CFG), for σ ∈ {ran, inf}. It is easy to show Lσ,⊆,l (ω- CFG) ⊆
Lσ,⊆,nl (ω- CFG) holds: for any G ∈ ω- CFG, Lσ,⊆,l (G) = Lσ,⊆,nl (G).
Unfortunately, for (σ, ρ) ∈ {(ran, u), (ran, =)}, whether proper inclusion or equivalence
holds is still an open problem.
The following lemmas lead to the characterization of the generative power of ω- CSG
and ω- PSG.
Lemma 4.54. Lσ,ρ (ω- TM) ⊆ Lσ,ρ,nl (ω- CSG).
Proof. Let M = (Q, Σ, Γ, δ, q0 , F) be an ω- TM. Construct an ω- CSG G = (N, Σ, P, S, H),
where N = Σ × Γ ∪ Q ∪ {$, S, S1 }, P contains the following productions:
1. S → $q0 S1 ,
2. S1 → [a, a]S1 , for every a ∈ Σ,
3. q[a, A] → [a, C]p, if δ(q, A) ∋ (p, C, R), for every a ∈ Σ,
4. [b, B]q[a, A] → p[b, B][a, C], if δ(q, A) ∋ (p, C, L), for every a, b ∈ Σ, B ∈ Γ,
5. q[a, A] → p[a, C], if δ(q, A) ∋ (p, C, S), for every a ∈ Σ,
6. $[a, A] → a$, for every a ∈ Σ, A ∈ Γ.
Rule (2) generates the input ω-word. The first component of Σ × Γ is used to record the
input symbol, and the second is used to simulate M .
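To illustrate the simulation, assume M has a transition δ(q0 , a) ∋ (p, C, R) on the first input letter a (a hypothetical transition, used only for this sketch). Applying rules (1), (2), (3) and (6) in turn yields:

```latex
S \Rightarrow \$\, q_0\, S_1                  % rule (1)
  \Rightarrow \$\, q_0\, [a,a]\, S_1          % rule (2): guess the input letter a
  \Rightarrow \$\, [a,C]\, p\, S_1            % rule (3): simulate the move (p, C, R)
  \Rightarrow a\, \$\, p\, S_1                % rule (6): the marker $ emits the terminal a
```

The marker $ thus outputs exactly the terminals whose squares the simulated head has already passed.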
We denote by Pi the set of productions of type (i) above. For every q ∈ Q, we denote
by Pq the set of productions in which q appears on the left-hand side.
Let F = {Fk }1≤k≤n , we construct the set H according to different accepting models:
1. (ran, u, nl)-acceptance. Let Hk = ∪q∈Fk Pq , then H = {Hk }1≤k≤n .
2. (ran, ⊆, nl)-acceptance. Let Hk = P1 ∪ P2 ∪ P6 ∪ ∪q∈Fk Pq , then H = {Hk }1≤k≤n .
3. (ran, =, nl)-acceptance. Let Hk = {H ⊆ P1 ∪ P2 ∪ P6 ∪ ∪q∈Fk Pq | P1 ⊆ H and
H ∩ P2 ≠ ∅ and H ∩ P6 ≠ ∅ and ∀q ∈ Fk , H ∩ Pq ≠ ∅}, then H = ∪nk=1 Hk .
4. (inf, u, nl)-acceptance. The same as (1).
5. (inf, ⊆, nl)-acceptance. Let Hk = P2 ∪ P6 ∪ ∪q∈Fk Pq , then H = {Hk }1≤k≤n .
6. (inf, =, nl)-acceptance. Let Hk = {H ⊆ P2 ∪ P6 ∪ ∪q∈Fk Pq | H ∩ P2 ≠ ∅ and
H ∩ P6 ≠ ∅ and ∀q ∈ Fk , H ∩ Pq ≠ ∅}, then H = ∪nk=1 Hk .
It can be easily verified that Lσ,ρ,nl (G) = Lσ,ρ (M ).
Lemma 4.55. Lσ,ρ,nl (ω- PSG) ⊆ Lσ,ρ (2 -ω- TM).
Proof. Let G = (N, T, P, S, F) be an ω-PSG. By Lemma 4.42, we may assume that
G is an ω-PSG with $-boundary. Construct a 2-ω-TM M = (Q, T, Γ, δ, q0 , H) where
Q = Q0 ∪ {q0 , q1 , qD } ∪ {qp : p ∈ P }, Q0 is a set of working states and qD is a dead state
(no further transitions). The machine has two tapes. The first tape contains the input
word u ∈ T ω , while on the second tape M nondeterministically simulates a derivation in
G. M starts by writing S in the first square of the second tape. For every production
p in P there is a corresponding state qp in Q, entered by M every time production p
is simulated on the second tape. If M cannot find a production to simulate, then M
enters the dead state qD . Furthermore, each time M simulates a production of the form
$A → a$, the terminal a ∈ T is checked against the letter pointed to on the first tape. If
there is a match, M enters state q1 , moves one square to the right on both tapes and
then proceeds with the simulation. Otherwise, M enters the dead state qD .
Let F = {Fi }1≤i≤n , we construct the set H according to different accepting models:
1. (ran, u)-acceptance. Let Hi = {qp | p ∈ Fi }, then H = {Hi }1≤i≤n .
2. (ran, ⊆)-acceptance. Let Hi = Q0 ∪ {q0 , q1 } ∪ {qp | p ∈ Fi }, then H = {Hi }1≤i≤n .
3. (ran, =)-acceptance. Let Hi = {H ∪ {q0 , q1 } ∪ {qp | p ∈ Fi } | H ⊆ Q0 }, then
H = ∪ni=1 Hi .
4. (inf, u)-acceptance. The same as (1).
5. (inf, ⊆)-acceptance. Let Hi = Q0 ∪ {q1 } ∪ {qp | p ∈ Fi }, then H = {Hi }1≤i≤n .
6. (inf, =)-acceptance. Let Hi = {H ∪ {q1 } ∪ {qp | p ∈ Fi } | H ⊆ Q0 }, then H = ∪ni=1 Hi .
It can be easily verified that Lσ,ρ,nl (G) = Lσ,ρ (M ).
Note that the proof above has two important differences from the proof of Thm. 5.1
in [35], in which only the 3-accepting (i.e., (inf, =)-accepting) ω-grammar was considered.
The first difference is that we use two tapes rather than two tracks. If we used two
tracks then, except in the (inf, =)-acceptance case, for any input u ∈ T ω the ω-TM M would
have a c.n.o. run satisfying the accepting condition by applying the following infinite
computation: M generates only finitely many (possibly zero) terminals on the leftmost side
of the second track, and then never uses the productions $A → a$ any more. Thus, M would
accept Lσ,ρ (M ) = T ω . The second difference is that we use qD instead of the traverse state
qT , because the latter causes problems for (ran, u)-acceptance: for any input u ∈ T ω ,
M may simulate a certain p ∈ Fi once, then enter qT upon a mismatch when checking, and
the run is a c.n.o. run and thus accepted. Therefore, M would accept Lran,u (M ) = T ω .
In order to provide a uniform constructive proof for all the accepting models, we give our
proof by modifying the proof of Thm. 5.1 in [35].
By the above lemmas and Thm. 4.32, we have shown that Lσ,ρ (ω-TM) ⊆ Lσ,ρ,nl (ω-CSG) ⊆
Lσ,ρ,nl (ω-PSG) ⊆ Lσ,ρ (2-ω-TM) ⊆ Lσ,ρ (m-ω-TM) = Lσ,ρ (ω-TM). Thus we have the following theorems.
Theorem 4.56. Lσ,ρ,nl (ω- PSG) = Lσ,ρ (ω- TM).
Theorem 4.57. Lσ,ρ,nl (ω- CSG) = Lσ,ρ (ω- TM).
4.3 Related Work
Cohen focused only on the five types of i-accepting ω-automata and the ω-grammars
defined with 3-acceptance [32], and mainly discussed the ω-CFG-V with variable repetition sets rather than the one with production repetition sets [33]. Thus, Cohen
actually studied some relationships between (inf, =, π)-accepting ω-grammars and (inf, =)-accepting ω-automata (i.e., 3-acceptance), including (inf, =, π)-accepting ω-CFG-V and
(inf, =)-accepting ω-PDA [32, 33], and (inf, =, nl)-accepting ω-PSG and (inf, =)-accepting
ω-TM [35]. Furthermore, Cohen also studied some special forms and normal forms of
(inf, =, π)-accepting ω-grammars.
We extended Cohen's work in the following aspects. First, we examined ω-grammars
beyond (inf, =, π)-acceptance, since only 3-accepting ω-grammars had been considered in the literature. We showed that, for some accepting models, the generative power of the
ω-grammars relative to the corresponding ω-automata differs from that of
3-accepting ω-grammars. Second, the ω-CFG with production repetition sets was studied
to obtain a uniform relation and translation over the entire hierarchy of ω-grammars; the
literature only considered ω-CFG-V, and variable repetition sets are not
applicable to ω-CSG, for example. Third, we tried to provide uniform proofs for the various accepting models, since some of the proofs in the literature are based on the ω-Kleene
closure of language families (again, related to 3-acceptance) rather than on uniform constructive proofs over the various accepting models, e.g., the proof for ε-production-free 3-accepting
ω-CFG-V (see Thm. 4.2.2 - Thm. 4.2.5 in [32]). Fourth, we considered one more accepting model than the five types of i-acceptance for ω-automata, for the correspondence
between ω-grammars and ω-automata.
Later, Engelfriet studied the six types of (σ, ρ)-accepting ω-automata from the perspective of X-automata [49], but did not consider the grammar form. It is natural to
establish the corresponding grammar forms for these ω-automata. Therefore, we define
the (σ, ρ, π)-accepting ω-grammar corresponding to the (σ, ρ)-accepting ω-automaton, and
establish the relationship and the translation methods between the ω-grammars and the
ω-automata.
4.4 Conclusion
This chapter brought together related work on ω-automata in the literature and proposed the
(σ, ρ, π)-accepting ω-grammar. The relative generative power of ω-grammars and ω-automata was studied in the previous theorems, and the results are summarized and compared
in Table 4.2.
One should particularly note:
1. the generative power of non-leftmost derivations of ω- CFG may be strictly weaker
than or equal to that of ω- PDA.
2. the generative power of leftmost derivations of ω- CSG or ω- PSG is not greater than
that of ω- PDA.
3. the generative power of non-leftmost derivations of ω-CSG is equal to that of ω-TM.
          ω-RLG          ω-CFG                ω-CSG                ω-PSG
Lσ,ρ,l    Lσ,ρ (ω-FSA)   Lσ,ρ (ω-PDA)         ⊆ Lσ,ρ (ω-PDA)(1)    Lσ,ρ (ω-PDA)
Lσ,ρ,nl   Lσ,ρ (ω-FSA)   ⊆ Lσ,ρ (ω-PDA)(2)    Lσ,ρ (ω-TM)          Lσ,ρ (ω-TM)

(1) Lσ,ρ,l (ω-CSG) = Lσ,ρ (ω-PDA), for (σ, ρ) ∉ {(ran, u), (ran, =)}.
(2) Lσ,ρ,nl (ω-CFG) ⊂ Lσ,ρ (ω-PDA), for (σ, ρ) ∈ {(inf, u), (inf, =)}.
    Lσ,ρ,nl (ω-CFG) = Lσ,ρ (ω-PDA), for (σ, ρ) ∈ {(ran, ⊆), (inf, ⊆)}.

Table 4.2: Relative Generative Power of ω-Grammars and ω-Automata
As a result, it is not necessary to define linear-bounded ω-automata-like devices, since
ω-CSG does not provide an independent level of generative power.

The open problems lie in the (ran, u)-accepting and (ran, =)-accepting models. Fortunately, the present results suffice for major applications, which mainly concern (inf, ρ)-acceptances.
Thanks to the theorems and the translation methods given in the proofs of this chapter,
the closure properties and the decision problems of the families of ω-languages generated
by ω-grammars and those of the families of ω-languages recognized by ω-automata can be
deduced from each other.
Chapter 5
Control Systems on ω-Words
This chapter is the first to marry restricted ω-grammars and regulated rewriting in classical
language theory [39] by proposing the formal control system on ω-words (ω-C system),
which is a framework consisting of a controlled ω-device and a controlling ω-device that are
modeled using the same formalism. These ω-devices could be (σ, ρ)-accepting ω-automata
or (σ, ρ, π)-accepting ω-grammars. The generative power is studied. The motivation comes
from extending regularly controlled grammars on finite words [40, 38] and exploring their
properties in the context of infinite words.
This chapter is organized as follows. In Sections 5.1, 5.2 and 5.3, three types of control
systems on ω-words (ω-C systems) are studied. Related work is discussed in Section 5.4,
and we conclude in Section 5.5.
5.1 ω-Grammar Control Systems (ω-GC Systems)

5.1.1 Definitions
An ω-grammar control system consists of an ω-grammar and a controlling ω-grammar.
Definition 5.1. Given a controlled ω-grammar (or simply ω-grammar) G1 = (N1 , T1 , P1 ,
S1 , F1 ), a controlling ω-grammar over G1 is a quintuple G2 = (N2 , T2 , P2 , S2 , F2 ), where
T2 = P1 .
Without loss of generality, we assume that N1 ∩ N2 = ∅. Generally, we denote the
controlled ω-grammar and the controlling ω-grammar by G1 and G2 respectively.
Definition 5.2. An ω-Grammar Control System (simply ω-GC System) includes an ω-grammar G1 = (N1 , T1 , P1 , S1 , F1 ) and a controlling ω-grammar G2 . The global ω-language
of G1 controlled by G2 is:
Lσ,ρ,nl (G1 , G2 ) = {u ∈ T1ω | there exists a derivation d : S1 ⇒ω u of G1
such that dP is (σ, ρ)-accepting w.r.t. F1
and dP ∈ Lσ,ρ,nl (G2 )}
We say Lσ,ρ,nl (G2 ) is a controlling ω-language.
Obviously, the global ω-language is the subset of the controlled ω-language consisting of
those words for which the sequence of applied productions belongs to the controlling
ω-language; in particular, Lσ,ρ,nl (G1 , G2 ) ⊆ Lσ,ρ,nl (G1 ).
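As a reading aid, the (σ, ρ)-acceptance check on the sequence dP of applied productions can be sketched in a few lines. This is an illustrative sketch, assuming the chapter's reading of ρ (u: nonempty intersection with some repetition set; ⊆: inclusion in some repetition set; =: equality with some repetition set); the identifiers are our own scaffolding, not part of the formal development.

```python
# Sketch: checking (sigma, rho)-acceptance of a production sequence.
# `occ` stands for ran(dP) (productions applied at least once) when
# sigma = ran, or inf(dP) (productions applied infinitely often) when
# sigma = inf.  The reading of rho is an assumption based on the chapter's
# usage: "u" = nonempty intersection with some member of the family,
# "sub" (⊆) = inclusion in some member, "eq" (=) = equality with some member.
from itertools import combinations

def rho_accepts(occ, family, rho):
    if rho == "u":
        return any(occ & F for F in family)
    if rho == "sub":
        return any(occ <= F for F in family)
    if rho == "eq":
        return any(occ == F for F in family)
    raise ValueError(f"unknown rho: {rho}")

# A family in the style of F1 = {H | {p1, p2, p5, p6} ⊆ H ⊆ P1}: every
# member contains the core productions p1, p2, p5, p6.
P1 = {"p1", "p2", "p3", "p4", "p5", "p6"}
core = {"p1", "p2", "p5", "p6"}
rest = sorted(P1 - core)
family = [core | set(extra) for r in range(len(rest) + 1)
          for extra in combinations(rest, r)]

print(rho_accepts({"p1", "p2", "p4", "p5", "p6"}, family, "eq"))  # True
print(rho_accepts({"p1", "p2", "p3"}, family, "eq"))              # False
```

With ρ = "=" the occurrence set must coincide with some member of the family, which is why the second query (missing p5 and p6) is rejected.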
As usual, Linf,=,nl (G1 , G2 ) will be denoted by Lnl (G1 , G2 ).
Example 5.3. Given G1 = (N1 , T1 , P1 , S1 , F1 ), G2 = (N2 , T2 , P2 , S2 , F2 ) where F1 =
{H | {p1 , p2 , p5 , p6 } ⊆ H ⊆ P1 }, F2 = {H | {r1 } ⊆ H ⊆ P2 }, with the following productions:

P1 :  p1 : S1 → S1′ S1     p2 : S1′ → AB
      p3 : A → aAb         p4 : B → Bc
      p5 : A → ab          p6 : B → c

P2 :  r1 : S2 → p1 p2 C
      r2 : C → p3 p4 C
      r3 : C → p5 p6 S2

Let L1 = {an bn c+ | n ≥ 1}, L2 = p1 p2 (p3 p4 )∗ p5 p6 , L3 = {an bn cn | n ≥ 1}. The
context-free ω-language Lnl (G1 ) = L1ω ∪ L1∗ aω and the regular ω-language Lnl (G2 ) =
L2ω ∪ L2∗ p1 p2 (p3 p4 )ω constitute a non-context-free global ω-language Lnl (G1 , G2 ) = L3ω .

Two trivial types of controlling ω-grammars are empty controlling ω-grammars and full
controlling ω-grammars. The former ones accept the empty controlling ω-language which
rejects all the sequences of productions applied in derivations, i.e., Lσ,ρ,nl (G2 ) = ∅. The
latter ones accept full controlling ω-languages that accept all the sequences of productions
applied in derivations, i.e., Lσ,ρ,nl (G2 ) = P1ω , where P1 is the set of productions of the
controlled ω-grammar G1 . Note that the two types of ω-languages are both regular.
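Returning to Example 5.3: one block of the controlling ω-language drives G1 to emit one segment an bn cn . For k = 1 this reads as follows (a sketch, assuming, as the layout of the example suggests, p1 : S1 → S1′ S1 and p2 : S1′ → AB, together with p3 : A → aAb, p4 : B → Bc, p5 : A → ab, p6 : B → c):

```latex
S_1 \Rightarrow_{p_1} S_1'S_1
    \Rightarrow_{p_2} ABS_1
    \Rightarrow_{p_3} aAbBS_1
    \Rightarrow_{p_4} aAbBcS_1
    \Rightarrow_{p_5} aabbBcS_1
    \Rightarrow_{p_6} aabbccS_1
    \Rightarrow \cdots
```

The applied production word p1 p2 p3 p4 p5 p6 belongs to L2 , and the emitted segment a2 b2 c2 belongs to L3 .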
5.1.2 Generative Power
In this section, we prove theorems on the generative power. Each theorem concerns different combinations of controlled and controlling ω-grammars.
Theorem 5.4. Lσ,ρ,nl (ω- RLG, ω- RLG) = Lσ,ρ,nl (ω- RLG).
Proof. (i) Lσ,ρ,nl (ω- RLG, ω- RLG) ⊇ Lσ,ρ,nl (ω- RLG) is obvious, since we can use a regular
controlling ω-grammar generating a full controlling ω-language.
(ii) Lσ,ρ,nl (ω-RLG, ω-RLG) ⊆ Lσ,ρ,nl (ω-RLG). Given an ω-RLG G1 = (N1 , T1 , P1 , S1 , E)
and an ω-RLG G2 = (N2 , T2 , P2 , S2 , F) with P1 = T2 , where P2 contains only productions
of the forms B → uC, B → u, with B, C ∈ N2 , u ∈ P1∗ , and E = {Ei }1≤i≤m , F = {Fi }1≤i≤n .
We define Rset(pi ) = {pk ∈ P1 | pi : B → uC ∈ P2 , pk occurs in the string u}. Let us
consider the different (σ, ρ, nl)-accepting models.
(a) For ρ ∈ {⊆, =}, we construct an ω-grammar
G = (N1 × N2 ∪ {S}, T1 , P, S, H)
where N1 ×N2 includes composite nonterminals of the form [A, B], P includes the following
sets of productions:
1. Ps = {S → [S1 , S2 ]},
2. P′ = {pkA : [A, B] → α[D, C], for pk : B → uC ∈ P2 , A ∈ N1 and A ⇒u αD in G1 }.

Let f1 (pkA ) = {pk }, f2 (pkA ) = Rset(pk ). For H ⊆ P′ , f1 (H) = ∪pkA ∈H f1 (pkA ),
f2 (H) = ∪pkA ∈H f2 (pkA ). We construct the set H as follows:
(a.1) (ran, ⊆, nl)-acceptance. H = {H′ ∪ Ps | H′ ⊆ P′ , f1 (H′ ) ⊆ Ei , f2 (H′ ) ⊆ Fj , Ei ∈
E, Fj ∈ F}.
(a.2) (ran, =, nl)-acceptance. H = {H′ ∪ Ps | H′ ⊆ P′ , f1 (H′ ) = Ei , f2 (H′ ) = Fj , Ei ∈
E, Fj ∈ F}.
(a.3) (inf, ⊆, nl)-acceptance. H = {H′ | H′ ⊆ P′ , f1 (H′ ) ⊆ Ei , f2 (H′ ) ⊆ Fj , Ei ∈
E, Fj ∈ F}.
(a.4) (inf, =, nl)-acceptance. H = {H′ | H′ ⊆ P′ , f1 (H′ ) = Ei , f2 (H′ ) = Fj , Ei ∈
E, Fj ∈ F}.
It is easy to see that the component B of [A, B] controls the derivation in such a way
that the acceptable derivations in G1 are simulated by derivations in G. Therefore, for
σ ∈ {ran, inf}, ρ ∈ {⊆, =}, Lσ,ρ,nl (G1 , G2 ) = Lσ,ρ,nl (G) ∈ Lσ,ρ,nl (ω- RLG).
(b) (σ, u)-acceptances. Without loss of generality, assume that E = {E} and F = {F }
where E = ∪mi=1 Ei , F = ∪ni=1 Fi . We construct an ω-grammar
G = (N ∪ {S}, T1 , P, S, H)
where N = N1 × N2 × {0, 1}2 , and P includes the following sets of productions:
1. S → [S1 , S2 , 00],
2. pkAx : [A, B, x] → α[D, C, y], for pk : B → uC ∈ P2 , A ∈ N1 and A ⇒u αD in G1 ,
x ∈ {0, 1}2 , where
y = 00, if x = 11
    10, if x = 00 and Rset(pk ) ∩ E ≠ ∅
    01, if x = 00 and pk ∈ F
    11, if x = 00 and Rset(pk ) ∩ E ≠ ∅ and pk ∈ F
    11, if x = 10 and pk ∈ F
    11, if x = 01 and Rset(pk ) ∩ E ≠ ∅
    x, otherwise.
Let H = {{pkA11 | pk ∈ P2 , A ∈ N1 }}. It is easy to see that the component B of
[A, B, x] ∈ N controls the derivation in such a way that the acceptable derivations in G1 are
simulated by derivations in G, while the component x is used to record the appearances of
productions in the repetition sets E and F . Therefore, for σ ∈ {ran, inf}, Lσ,u,nl (G1 , G2 ) =
Lσ,u,nl (G) ∈ Lσ,u,nl (ω-RLG).
Theorem 5.5. Lσ,ρ,nl (ω- CFG) ⊆ Lσ,ρ,nl (ω- RLG, ω- CFG) ⊆ Lσ,ρ,l (ω- CFG).
Proof. (i) Lσ,ρ,nl (ω- CFG) ⊆ Lσ,ρ,nl (ω- RLG, ω- CFG). Given an ω- CFG G = (N, T, P, S, H),
we construct an ω- RLG G1 = ({S1 }, T, P1 , S1 , E) where P1 = {pa : S1 → aS1 | a ∈ T }
and E = 2P1 , and an ω- CFG G2 = (N, T2 , P2 , S, H) where
1. h is a homomorphism: h(a) = pa , for a ∈ T ,
2. T2 = h(T ),
3. P2 = h(P ), where h(P ) is the set of productions obtained by replacing each terminal
a ∈ T of the productions in P by pa ∈ T2 .
G generates u ∈ T ω , if and only if G1 generates u by using productions h(u), and G2 generates h(u). Thus it is easy to see that Lσ,ρ,nl (G) = Lσ,ρ,nl (G1 , G2 ) ∈ Lσ,ρ,nl (ω- RLG, ω- CFG)
for each accepting model.
(ii) Lσ,ρ,nl (ω-RLG, ω-CFG) ⊆ Lσ,ρ,l (ω-CFG). Given an ω-RLG G1 = (N1 , T1 , P1 , S1 , E)
and an ω-CFG G3 = (N3 , T3 , P3 , S3 , F3 ) with P1 = T3 , where P1 contains only productions
of the forms B → uC, B → u, with B, C ∈ N1 , u ∈ T1∗ .
There exists a (σ, ρ, l)-accepting ω- CFG G2 = (N2 , T2 , P2 , S2 , F) with P1 = T2 such
that the controlling ω-language Lσ,ρ,nl (G3 ) = Lσ,ρ,l (G2 ) by Thm. 4.52. Assume N1 ∩ N2 =
∅, E = {Ei }1≤i≤m , F = {Fi }1≤i≤n . Let us consider different (σ, ρ, l)-accepting models for
G2 .
(a) For ρ ∈ {⊆, =}, we construct an ω-grammar
G = (N1 ∪ N2 ∪ T2 ∪ {S}, T1 , P, S, H)
where P includes productions of the following forms:
1. ps : S → S1 S2 ,
2. p2kB : BA → Bα, for B ∈ N1 , pk : A → α ∈ P2 ,
3. p1k : Bpk → uC, for pk ∈ T2 , pk : B → uC ∈ P1 .
We denote by P ri the set of productions of type (i) above. Let P1k = {p1k ∈ P r3 },
P2k = {p2kB ∈ P r2 }. We construct the set H as follows:
(a.1) (ran, ⊆, l)-acceptance. Let Hij = {H′ ∪ H″ ∪ P r1 | H′ = ∪pk ∈Ei P1k , H″ =
∪pk ∈Fj P2k }, then H = ∪mi=1 ∪nj=1 Hij .
(a.2) (ran, =, l)-acceptance. Let Hij = {H′ ∪ H″ ∪ P r1 | H′ = ∪pk ∈Ei P1k , H″ ⊆
∪pk ∈Fj P2k and ∀pk ∈ Fj , H″ ∩ P2k ≠ ∅}, then H = ∪mi=1 ∪nj=1 Hij .
(a.3) (inf, ⊆, l)-acceptance. Let Hij = {H′ ∪ H″ | H′ = ∪pk ∈Ei P1k , H″ = ∪pk ∈Fj P2k },
then H = ∪mi=1 ∪nj=1 Hij .
(a.4) (inf, =, l)-acceptance. Let Hij = {H′ ∪ H″ | H′ = ∪pk ∈Ei P1k , H″ ⊆ ∪pk ∈Fj P2k
and ∀pk ∈ Fj , H″ ∩ P2k ≠ ∅}, then H = ∪mi=1 ∪nj=1 Hij .
It is easy to see that the nonterminals pk ∈ T2 control the derivation in such a way
that the acceptable leftmost derivations in G1 are simulated by leftmost derivations in G.
Therefore, for σ ∈ {ran, inf}, ρ ∈ {⊆, =}, Lσ,ρ,nl (G1 , G3 ) = Lσ,ρ,l (G) ∈ Lσ,ρ,l (ω- PSG) =
Lσ,ρ,l (ω- CFG), by Theorem 4.45.
(b) (σ, u)-acceptances. Without loss of generality, assume that E = {E} and F = {F }
where E = ∪mi=1 Ei , F = ∪ni=1 Fi . We construct an ω-grammar
G = (N1 ∪ N2 ∪ T2 ∪ Y ∪ {S}, T1 , P, S, H)
where Y = {0, 1}2 , and P includes productions of the following forms:
1. ps : S → xS1 S2 , x = 00 ∈ Y ,
2. p2kBx : xBA → yBα, for B ∈ N1 , pk : A → α ∈ P2 , x ∈ Y , where
y = 00, if x = 11
    01, if x = 00 and pk ∈ F
    11, if x = 10 and pk ∈ F
    x, otherwise.
3. p1kx : xBpk → uyC, for pk ∈ T2 , pk : B → uC ∈ P1 , x ∈ Y , where
y = 00, if x = 11
    10, if x = 00 and pk ∈ E
    11, if x = 01 and pk ∈ E
    x, otherwise.
Let H = {{p1k11 | pk ∈ P1 } ∪ {p2kB11 | B ∈ N1 , pk ∈ P2 }}. It is easy to see, for σ ∈
{ran, inf}, Lσ,u,nl (G1 , G3 ) = Lσ,u,l (G) ∈ Lσ,u,l (ω- PSG) = Lσ,u,l (ω- CFG), by Theorem
4.45.
Theorem 5.6. Lσ,ρ,nl (ω- CFG) ⊂ Lσ,ρ,nl (ω- CFG, ω- RLG) ⊆ Lσ,ρ,nl (ω- PSG).
Proof. (i) Lσ,ρ,nl (ω- CFG) ⊂ Lσ,ρ,nl (ω- CFG, ω- RLG) is obvious, since we can use a regular
controlling ω-grammar generating a full controlling ω-language. The strictness follows
from Example 5.3.
(ii) Lσ,ρ,nl (ω-CFG, ω-RLG) ⊆ Lσ,ρ,nl (ω-PSG). Given an ω-CFG G1 = (N1 , T1 , P1 , S1 , E)
and an ω-RLG G2 = (N2 , T2 , P2 , S2 , F) with P1 = T2 , where P2 contains only productions
of the forms B → uC, B → u, with B, C ∈ N2 , u ∈ P1∗ . Assume N1 ∩ N2 = ∅, E = {Ei }1≤i≤m ,
F = {Fi }1≤i≤n , and that the maximal length of the strings u in the productions of P2 is l.
Let us consider the different (σ, ρ, nl)-accepting models.
(a) For ρ ∈ {⊆, =}, we construct an ω-grammar
G = (N1 ∪ N2 ∪ N2′ ∪ {S}, T1 , P, S, H)

where N2′ = N2 × ∪li=0 T2i and P includes the following sets of productions:
1. S → S1 S2 ,
2. p2k : B → [C, u], for pk : B → uC ∈ P2 ,
3. [B, ε] → B, for B ∈ N2 ,
4. p1k[B,u] : A[B, pk u] → α[B, u], for [B, pk u] ∈ N2′ and pk : A → α in P1 ,
5. BX → XB and XB → BX, for X ∈ N1 ∪ T1 , B ∈ N2 ∪ N2′ .
We denote by P ri the set of productions of type (i) above. Let P1k = {p1k[B,u] ∈ P r4 },
P2k = {p2k ∈ P r2 }. We construct the set H as follows:
(a.1) (ran, ⊆, nl)-acceptance. Let Hij = {H′ ∪ H″ ∪ P r1 ∪ P r3 ∪ P r5 | H′ = ∪pk ∈Ei P1k ,
H″ = ∪pk ∈Fj P2k }, then H = ∪mi=1 ∪nj=1 Hij .
(a.2) (ran, =, nl)-acceptance. Let Hij = {H′ ∪ H″ ∪ P r1 ∪ H3 ∪ H5 | H′ ⊆ ∪pk ∈Ei P1k
s.t. ∀pk ∈ Ei , H′ ∩ P1k ≠ ∅, and H″ = ∪pk ∈Fj P2k , and ∅ ⊂ H3 ⊆ P r3 , ∅ ⊂ H5 ⊆ P r5 },
then H = ∪mi=1 ∪nj=1 Hij .
(a.3) (inf, ⊆, nl)-acceptance. Let Hij = {H′ ∪ H″ ∪ P r3 ∪ P r5 | H′ = ∪pk ∈Ei P1k ,
H″ = ∪pk ∈Fj P2k }, then H = ∪mi=1 ∪nj=1 Hij .
(a.4) (inf, =, nl)-acceptance. Let Hij = {H′ ∪ H″ ∪ H3 ∪ H5 | H′ ⊆ ∪pk ∈Ei P1k s.t.
∀pk ∈ Ei , H′ ∩ P1k ≠ ∅, and H″ = ∪pk ∈Fj P2k , and ∅ ⊂ H3 ⊆ P r3 , ∅ ⊂ H5 ⊆ P r5 }, then
H = ∪mi=1 ∪nj=1 Hij .
It is easy to see that the nonterminals [B, pk u] ∈ N2′ control the derivation in such a
way that the acceptable derivations in G1 are simulated by derivations in G. Therefore,
for σ ∈ {ran, inf}, ρ ∈ {⊆, =}, Lσ,ρ,nl (G1 , G2 ) = Lσ,ρ,nl (G) ∈ Lσ,ρ,nl (ω-PSG).
(b) (σ, u)-acceptances. Without loss of generality, assume that E = {E} and F = {F }
where E = ∪mi=1 Ei , F = ∪ni=1 Fi . We construct an ω-grammar

G = (N1 ∪ N2′ ∪ N2″ ∪ {S}, T1 , P, S, H)

where N2′ = N2 × {0, 1}2 , N2″ = N2 × {0, 1}2 × ∪li=0 T2i and P includes the following sets
of productions:
1. S → S1 [S2 , 00],
2. p2k[x] : [B, x] → [C, y, u], for pk : B → uC ∈ P2 , x ∈ {0, 1}2 , where
y = 00, if x = 11
    01, if x = 00 and pk ∈ F
    11, if x = 10 and pk ∈ F
    x, otherwise.
3. [B, x, ε] → [B, x], for B ∈ N2 , x ∈ {0, 1}2 ,
4. p1k[B,x,u] : A[B, x, pk u] → α[B, y, u], for [B, x, pk u] ∈ N2″ and pk : A → α in P1 ,
x ∈ {0, 1}2 , where
y = 00, if x = 11
    10, if x = 00 and pk ∈ E
    11, if x = 01 and pk ∈ E
    x, otherwise.
5. BX → XB and XB → BX, for X ∈ N1 ∪ T1 , B ∈ N2′ ∪ N2″ .

Let H = {{p1k[B,11,u] | pk ∈ P1 , [B, 11, u] ∈ N2″ } ∪ {p2k[11] | pk ∈ P2 }}. It is easy
to see that the nonterminals [B, x, pk u] ∈ N2″ control the derivation in such a way that
the acceptable derivations in G1 are simulated by derivations in G. Therefore, for σ ∈
{ran, inf}, Lσ,u,nl (G1 , G2 ) = Lσ,u,nl (G) ∈ Lσ,u,nl (ω-PSG).
Theorem 5.7. Lσ,ρ,nl (ω- CFG) ⊂ Lσ,ρ,nl (ω- CFG, ω- CFG) ⊆ Lσ,ρ,nl (ω- PSG).
Proof. (i) Lσ,ρ,nl (ω- CFG) ⊂ Lσ,ρ,nl (ω- CFG, ω- CFG) is obvious, since we can use a regular
controlling ω-grammar generating a full controlling ω-language. The strictness follows
from Example 5.3.
(ii) Lσ,ρ,nl (ω- CFG, ω- CFG) ⊆ Lσ,ρ,nl (ω- PSG). Given an ω-CFG G1 = (N1 , T1 , P1 , S1 , E)
and an ω- CFG G2 = (N2 , T2 , P2 , S2 , F) with P1 = T2 . Assume N1 ∩ N2 = ∅, E =
{Ei }1≤i≤m , F = {Fi }1≤i≤n . Let us consider different (σ, ρ, nl)-accepting models.
(a) For ρ ∈ {⊆, =}, we construct an ω-grammar
G = (N1 ∪ T1′ ∪ N2 ∪ T2 ∪ {$, S}, T1 , P, S, H)
where T1′ = {a′ | a ∈ T1 } and P includes the following sets of productions:
1. S → $S1 S2 ,
2. p2k : B → β, for pk : B → β ∈ P2 ,
3. Xp → pX, for X ∈ N1 ∪ T1′ , p ∈ T2 ,
4. p1k : Apk → α′ , for pk : A → α in P1 , where α′ is obtained by replacing each terminal
a ∈ T1 of α by a′ ,
5. $a′ → a$, for a ∈ T1 .
We denote by P ri the set of productions of type (i) above. Let P1k = {p1k ∈ P r4 },
P2k = {p2k ∈ P r2 }. We construct the set H as follows:
(a.1) (ran, ⊆, nl)-acceptance. Let Hij = {H′ ∪ H″ ∪ P r1 ∪ P r3 ∪ P r5 | H′ = ∪pk ∈Ei P1k ,
H″ = ∪pk ∈Fj P2k }, then H = ∪mi=1 ∪nj=1 Hij .
(a.2) (ran, =, nl)-acceptance. Let Hij = {H′ ∪ H″ ∪ P r1 ∪ H3 ∪ H5 | H′ = ∪pk ∈Ei P1k ,
H″ = ∪pk ∈Fj P2k , and ∅ ⊂ H3 ⊆ P r3 , ∅ ⊂ H5 ⊆ P r5 }, then H = ∪mi=1 ∪nj=1 Hij .
(a.3) (inf, ⊆, nl)-acceptance. Let Hij = {H′ ∪ H″ ∪ P r3 ∪ P r5 | H′ = ∪pk ∈Ei P1k ,
H″ = ∪pk ∈Fj P2k }, then H = ∪mi=1 ∪nj=1 Hij .
(a.4) (inf, =, nl)-acceptance. Let Hij = {H′ ∪ H″ ∪ H3 ∪ H5 | H′ = ∪pk ∈Ei P1k ,
H″ = ∪pk ∈Fj P2k , and ∅ ⊂ H3 ⊆ P r3 , ∅ ⊂ H5 ⊆ P r5 }, then H = ∪mi=1 ∪nj=1 Hij .
It is easy to see that the nonterminals pk ∈ T2 control the derivation in such a way
that the acceptable derivations in G1 are simulated by derivations in G. Therefore, for
σ ∈ {ran, inf}, ρ ∈ {⊆, =}, Lσ,ρ,nl (G1 , G2 ) = Lσ,ρ,nl (G) ∈ Lσ,ρ,nl (ω- PSG).
(b) (σ, u)-acceptances. Without loss of generality, assume that E = {E} and F = {F }
where E = ∪mi=1 Ei , F = ∪ni=1 Fi . We construct an ω-grammar

G = (N1 ∪ T1′ ∪ N2 ∪ T2 ∪ Y ∪ {$, S}, T1 , P, S, H)

where T1′ = {a′ | a ∈ T1 }, Y = {0, 1}2 and P includes the following sets of productions:
1. S → $xS1 S2 , x = 00 ∈ Y ,
2. p2kx : xB → yβ, for pk : B → β ∈ P2 , x ∈ Y , where
y = 00, if x = 11
    01, if x = 00 and pk ∈ F
    11, if x = 10 and pk ∈ F
    x, otherwise.
3. Xp → pX, for X ∈ N1 ∪ T1′ , p ∈ T2 ,
4. p1kx : xApk → yα′ , for pk : A → α in P1 , where α′ is obtained by replacing each
terminal a ∈ T1 of α by a′ , x ∈ Y , and
y = 00, if x = 11
    10, if x = 00 and pk ∈ E
    11, if x = 01 and pk ∈ E
    x, otherwise.
5. $a′ → a$, for a ∈ T1 ,
6. xX → Xx and Xx → xX, for X ∈ N1 ∪ T1′ ∪ N2 ∪ T2 , x ∈ Y .
Let H = {{p1k11 | pk ∈ P1 } ∪ {p2k11 | pk ∈ P2 }}. It is easy to see, for σ ∈ {ran, inf},
Lσ,u,nl (G1 , G2 ) = Lσ,u,nl (G) ∈ Lσ,u,nl (ω- PSG).
5.2 Leftmost-Derivation-Based ω-Grammar Control Systems (ω-LGC Systems)

5.2.1 Definitions
Definition 5.8. A Leftmost-derivation-based ω-Grammar Control System (simply ω-LGC
System) includes an ω-grammar G1 = (N1 , T1 , P1 , S1 , F1 ) and a controlling ω-grammar
G2 . The global ω-language of G1 controlled by G2 is:
Lσ,ρ,l (G1 , G2 ) = {u ∈ T1ω | there exists a leftmost derivation d : S1 =⇒ω u of G1
such that dP is (σ, ρ)-accepting w.r.t. F1
and dP ∈ Lσ,ρ,l (G2 )}
We say Lσ,ρ,l (G2 ) is a controlling ω-language. We may also denote Lσ,ρ,l (G1 , G2 ) by
Lσ,ρ (G1 ~· G2 ).
As usual, Linf,=,l (G1 , G2 ) will be denoted by Ll (G1 , G2 ).
Example 5.9. Given G1 = (N1 , T1 , P1 , S1 , F1 ), G2 = (N2 , T2 , P2 , S2 , F2 ) where F1 =
{H | {p1 , p2 , p5 , p6 } ⊆ H ⊆ P1 }, F2 = {H | {r1 } ⊆ H ⊆ P2 }, with the following productions:

P1 :  p1 : S1 → S1′ S1     p2 : S1′ → AB
      p3 : A → aAb         p4 : B → Bc
      p5 : A → ab          p6 : B → c

P2 :  r1 : S2 → S2′ S2
      r2 : S2′ → p1 p2 Cp6
      r3 : C → p3 Cp4
      r4 : C → p5

Let L1 = {an bn c+ | n ≥ 1}, L2 = {p1 p2 (p3 )k p5 (p4 )k p6 | k ≥ 0}, L3 = {an bn cn | n ≥ 1}.
The context-free ω-languages Ll (G1 ) = L1ω and Ll (G2 ) = L2ω constitute a non-context-free
global ω-language Ll (G1 , G2 ) = L3ω .
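The control in Example 5.9 can be replayed mechanically. The following is an illustrative sketch (the production table transcribes P1 of Example 5.9; the function names are our own scaffolding): applying one block p1 p2 (p3 )k p5 (p4 )k p6 of the controlling language by leftmost rewriting emits one segment an bn cn with n = k + 1.

```python
# Leftmost-derivation simulator for the controlled grammar G1 of Example 5.9.

P1 = {
    "p1": ("S1", ["S1'", "S1"]),
    "p2": ("S1'", ["A", "B"]),
    "p3": ("A", ["a", "A", "b"]),
    "p4": ("B", ["B", "c"]),
    "p5": ("A", ["a", "b"]),
    "p6": ("B", ["c"]),
}
NONTERMINALS = {"S1", "S1'", "A", "B"}

def leftmost_step(sentential, prod):
    """Rewrite the leftmost nonterminal using prod; fail if it does not match."""
    lhs, rhs = P1[prod]
    for i, sym in enumerate(sentential):
        if sym in NONTERMINALS:
            if sym != lhs:
                raise ValueError(f"{prod} not applicable: leftmost nonterminal is {sym}")
            return sentential[:i] + rhs + sentential[i + 1:]
    raise ValueError("no nonterminal left")

def derive(control_word, start=("S1",)):
    s = list(start)
    for p in control_word:
        s = leftmost_step(s, p)
    return "".join(s)

# one block of the controlling language, with k = 2:
block = ["p1", "p2", "p3", "p3", "p5", "p4", "p4", "p6"]
print(derive(block))  # aaabbbcccS1  (one a^n b^n c^n segment, n = k + 1 = 3)
```

Note that the control word itself enforces leftmost order: applying p4 before p5, say, raises an error because the leftmost nonterminal is still A.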
Two trivial types of controlling ω-grammars are empty controlling ω-grammars and full
controlling ω-grammars. The former ones accept the empty controlling ω-language which
rejects all the sequences of productions applied in derivations, i.e., Lσ,ρ,l (G2 ) = ∅. The
latter ones accept full controlling ω-languages that accept all the sequences of productions
applied in derivations, i.e., Lσ,ρ,l (G2 ) = P1ω , where P1 is the set of productions of the
controlled ω-grammar G1 . Note that the two types of ω-languages are both regular.
5.2.2 Generative Power
As every derivation in an ω- RLG is a leftmost derivation, the following result follows from
Thm. 5.4.
Theorem 5.10. Lσ,ρ,l (ω- RLG, ω- RLG) = Lσ,ρ,l (ω- RLG).
The proof of the following result is omitted, since it is nearly the same as that of Thm.
5.5.
Theorem 5.11. Lσ,ρ,l (ω- RLG, ω- CFG) = Lσ,ρ,l (ω- CFG).
Theorem 5.12. Lσ,ρ,l (ω- CFG, ω- RLG) = Lσ,ρ,l (ω- CFG).
Proof. (i) Lσ,ρ,l (ω- CFG, ω- RLG) ⊇ Lσ,ρ,l (ω- CFG) is obvious, since we can use a regular
controlling ω-grammar generating a full controlling ω-language.
(ii) Lσ,ρ,l (ω-CFG, ω-RLG) ⊆ Lσ,ρ,l (ω-CFG). Given an ω-CFG G1 = (N1 , T1 , P1 , S1 , E)
and an ω-RLG G2 = (N2 , T2 , P2 , S2 , F) with P1 = T2 , where P2 contains only productions
of the forms B → uC, B → u, with B, C ∈ N2 , u ∈ P1∗ . Assume N1 ∩ N2 = ∅, E = {Ei }1≤i≤m ,
F = {Fi }1≤i≤n . We define Rset(pi ) = {pk ∈ P1 | pi : B → uC ∈ P2 , pk occurs in u}. Let us
consider the different (σ, ρ, l)-accepting models.
(a) For ρ ∈ {⊆, =}, we construct an ω-grammar
G = (N1 ∪ T1′ ∪ N2 ∪ {S}, T1 , P, S, H)

where T1′ = {a′ | a ∈ T1 }, and P includes the following sets of productions:

1. Ps = {S → S2 S1 },
2. P′ = {pkA : BA → Cα′ , for pk : B → uC ∈ P2 and A ⇒u α by a leftmost derivation
in G1 , where α′ is the string obtained by replacing each terminal a ∈ T1 of α by a′ },
3. P″ = {Ba′ → aB, for a ∈ T1 , B ∈ N2 }.

Let f1 (pkA ) = {pk }, f2 (pkA ) = Rset(pk ). For H ⊆ P′ , f1 (H) = ∪pkA ∈H f1 (pkA ),
f2 (H) = ∪pkA ∈H f2 (pkA ). We construct the set H as follows:
(a.1) (ran, ⊆, l)-acceptance. H = {H′ ∪ Ps ∪ P″ | H′ ⊆ P′ , f1 (H′ ) ⊆ Ei , f2 (H′ ) ⊆
Fj , Ei ∈ E, Fj ∈ F}.
(a.2) (ran, =, l)-acceptance. H = {H′ ∪ Ps ∪ H″ | H′ ⊆ P′ , f1 (H′ ) = Ei , f2 (H′ ) =
Fj , Ei ∈ E, Fj ∈ F, and ∅ ⊂ H″ ⊆ P″ }.
(a.3) (inf, ⊆, l)-acceptance. H = {H′ ∪ P″ | H′ ⊆ P′ , f1 (H′ ) ⊆ Ei , f2 (H′ ) ⊆ Fj , Ei ∈
E, Fj ∈ F}.
(a.4) (inf, =, l)-acceptance. H = {H′ ∪ H″ | H′ ⊆ P′ , f1 (H′ ) = Ei , f2 (H′ ) = Fj , Ei ∈
E, Fj ∈ F, and ∅ ⊂ H″ ⊆ P″ }.
It is easy to see that the nonterminals B ∈ N2 control the derivation in such a way
that the acceptable leftmost derivations in G1 are simulated by leftmost derivations in G.
Therefore, for σ ∈ {ran, inf}, ρ ∈ {⊆, =}, Lσ,ρ,l (G1 , G2 ) = Lσ,ρ,l (G) ∈ Lσ,ρ,l (ω- PSG) =
Lσ,ρ,l (ω- CFG) by Thm. 4.45.
(b) (σ, u)-acceptances. Without loss of generality, assume that E = {E} and F = {F }
where E = ∪mi=1 Ei , F = ∪ni=1 Fi . We construct an ω-grammar

G = (N1 ∪ T1′ ∪ N2′ ∪ {S}, T1 , P, S, H)
where T1′ = {a′ | a ∈ T1 }, N2′ = {[A, x] | A ∈ N2 , x ∈ {0, 1}2 }, and P includes the following
sets of productions:

1. Ps = {S → [S2 , 00]S1 },
2. P′ = {pkAx : [B, x]A → [C, y]α′ , for pk : B → uC ∈ P2 and A ⇒u α by a leftmost
derivation in G1 , where
x ∈ {0, 1}2 ,
y = 00, if x = 11
    10, if x = 00 and Rset(pk ) ∩ E ≠ ∅
    01, if x = 00 and pk ∈ F
    11, if x = 00 and Rset(pk ) ∩ E ≠ ∅ and pk ∈ F
    11, if x = 10 and pk ∈ F
    11, if x = 01 and Rset(pk ) ∩ E ≠ ∅
    x, otherwise,
and α′ is the string obtained by replacing each terminal a ∈ T1 of α by a′ },
3. P″ = {[B, x]a′ → a[B, x], for a ∈ T1 , [B, x] ∈ N2′ }.
Let H = {{pkA11 | pk ∈ P2 , A ∈ N1 }}. It is easy to see that the nonterminals [B, x] ∈ N2′
control the derivation in such a way that the acceptable leftmost derivations in G1 are
simulated by leftmost derivations in G. Therefore, for σ ∈ {ran, inf}, Lσ,u,l (G1 , G2 ) =
Lσ,u,l (G) ∈ Lσ,u,l (ω-PSG) = Lσ,u,l (ω-CFG) by Thm. 4.45.
Theorem 5.13. Lσ,ρ,l (ω- CFG) ⊂ Lσ,ρ,l (ω- CFG, ω- CFG) ⊆ Lσ,ρ,nl (ω- PSG).
Proof. (i) Lσ,ρ,l (ω- CFG) ⊂ Lσ,ρ,l (ω- CFG, ω- CFG) is obvious, since we can use a regular
controlling ω-grammar generating a full controlling ω-language. The strictness follows
from Example 5.9.
(ii) Lσ,ρ,l (ω- CFG, ω- CFG) ⊆ Lσ,ρ,nl (ω- PSG). Given an ω-CFG G1 = (N1 , T1 , P1 , S1 , E)
and an ω- CFG G2 = (N2 , T2 , P2 , S2 , F) with P1 = T2 . Assume N1 ∩ N2 = ∅, E =
{Ei }1≤i≤m , F = {Fi }1≤i≤n . Let us consider different (σ, ρ, nl)-accepting models for
ω- PSG.
(a) For ρ ∈ {⊆, =}, we construct an ω-grammar
G = (N1 ∪ T1′ ∪ N2 ∪ T2 ∪ T2′ ∪ {$, §, S}, T1, P, S, H)

where T1′ = {a′ | a ∈ T1}, T2′ = {p′ | p ∈ T2} and P includes the following sets of productions:
1. S → $S1 §S2 ,
2. p2k : §B → §β, for pk : B → β ∈ P2,
3. Xp → pX, for X ∈ N1 ∪ T1′ ∪ {§}, p ∈ T2, (move p leftward skipping X)
4. $p → $p′, for p ∈ T2, (p becomes p′ at the leftmost position)
5. p′a′ → a′p′, for a′ ∈ T1′ and p′ ∈ T2′, (p′ skips terminals)
6. p1k : p′k A → α′, for pk : A → α ∈ P1, α′ replaces each terminal a ∈ T1 of α by a′, (p′k rewrites A using the production pk)
7. $a′ → a$, for a ∈ T1.
5.2. Leftmost-Derivation-Based ω-Grammar Control Systems (ω-LGC Systems)
We denote by Pri the set of productions of type (i) above. Let P1k = {p1k ∈ Pr6}, P2k = {p2k ∈ Pr2}. We construct the set H as follows:
(a.1) (ran, ⊆, nl)-acceptance. Let Hij = {H′ ∪ H″ ∪ Pr1 ∪ Pr3 ∪ Pr4 ∪ Pr5 ∪ Pr7 | H′ = ∪{P1k | pk ∈ Ei}, H″ = ∪{P2k | pk ∈ Fj}}, then H = ∪{Hij | 1 ≤ i ≤ m, 1 ≤ j ≤ n}.
(a.2) (ran, =, nl)-acceptance. Let Hij = {H′ ∪ H″ ∪ Pr1 ∪ H3 ∪ H4 ∪ H5 ∪ H7 | H′ = ∪{P1k | pk ∈ Ei}, H″ = ∪{P2k | pk ∈ Fj}, and ∅ ⊂ H3 ⊆ Pr3, ∅ ⊂ H4 ⊆ Pr4, ∅ ⊂ H5 ⊆ Pr5, ∅ ⊂ H7 ⊆ Pr7}, then H = ∪{Hij | 1 ≤ i ≤ m, 1 ≤ j ≤ n}.
(a.3) (inf, ⊆, nl)-acceptance. Let Hij = {H′ ∪ H″ ∪ Pr3 ∪ Pr4 ∪ Pr5 ∪ Pr7 | H′ = ∪{P1k | pk ∈ Ei}, H″ = ∪{P2k | pk ∈ Fj}}, then H = ∪{Hij | 1 ≤ i ≤ m, 1 ≤ j ≤ n}.
(a.4) (inf, =, nl)-acceptance. Let Hij = {H′ ∪ H″ ∪ H3 ∪ H4 ∪ H5 ∪ H7 | H′ = ∪{P1k | pk ∈ Ei}, H″ = ∪{P2k | pk ∈ Fj}, and ∅ ⊂ H3 ⊆ Pr3, ∅ ⊂ H4 ⊆ Pr4, ∅ ⊂ H5 ⊆ Pr5, ∅ ⊂ H7 ⊆ Pr7}, then H = ∪{Hij | 1 ≤ i ≤ m, 1 ≤ j ≤ n}.
It is easy to see that the nonterminals p′k ∈ T2′ control the derivation in such a way that the acceptable leftmost derivations in G1 are simulated by derivations in G. Therefore, for σ ∈ {ran, inf}, ρ ∈ {⊆, =}, Lσ,ρ,l(G1, G2) = Lσ,ρ,nl(G) ∈ Lσ,ρ,nl(ω-PSG).
(b) (σ, u)-acceptances. Without loss of generality, assume that E = {E} and F = {F} where E = E1 ∪ · · · ∪ Em, F = F1 ∪ · · · ∪ Fn. We construct an ω-grammar

G = (N1 ∪ T1′ ∪ N2 ∪ T2 ∪ T2′ ∪ Y ∪ {$, S}, T1, P, S, H)

where T1′ = {a′ | a ∈ T1}, T2′ = {p′ | p ∈ T2}, Y = {0, 1}² and P includes the following sets of productions:
1. S → $S1 xS2 , x = 00 ∈ Y ,
2. p2kx : xB → yβ, for pk : B → β ∈ P2, x ∈ Y,

y = 00, if x = 11
    01, if x = 00 and pk ∈ F
    11, if x = 10 and pk ∈ F
    x,  else.
3. Xp → pX, for X ∈ N1 ∪ T1′, p ∈ T2, (move p leftward skipping X)
4. $p → $p′, for p ∈ T2, (p becomes p′ at the leftmost position)
5. p′a′ → a′p′, for a′ ∈ T1′ and p′ ∈ T2′, (p′ skips terminals)
6. p1kx : xp′k A → yα′, for pk : A → α ∈ P1, α′ replaces each terminal a ∈ T1 of α by a′, x ∈ Y,

y = 00, if x = 11
    10, if x = 00 and pk ∈ E
    11, if x = 01 and pk ∈ E
    x,  else.

(p′k rewrites A using the production pk)
7. $a′ → a$, for a ∈ T1,
8. xX → Xx and Xx → xX, for X ∈ N1 ∪ T1′ ∪ T2 ∪ T2′, x ∈ Y.
Let H = {{p1k11 | pk ∈ P1} ∪ {p2k11 | pk ∈ P2}}. It is easy to see, for σ ∈ {ran, inf}, Lσ,u,l(G1, G2) = Lσ,u,nl(G) ∈ Lσ,u,nl(ω-PSG).
5.3 ω-Automaton Control Systems (ω-AC Systems)

5.3.1 Definitions
An ω-automaton control system consists of an ω-automaton and a controlling ω-automaton.
Definition 5.14. Given a controlled ω-automaton (or simply ω-automaton) A1 with a set
of transitions δ1 = {pi }i∈I where pi is the name of a transition, a controlling ω-automaton
A2 over A1 has the set of terminals Σ2 = δ1 .
Note that each transition has a name. For example, if δ1(q1, a) = {q2, q3}, we may write pi : δ1(q1, a) = q2 and pj : δ1(q1, a) = q3.
A legal run r of an ω-automaton A1 induces an infinite sequence of transitions tr =
t1 t2 ... ∈ δ1ω where ti is the transition used in the i-th step of the legal run r, i.e., a mapping
tr : N → δ1 where tr (i) = ti .
Definition 5.15. An ω-Automaton Control System (simply ω-AC System) includes an ωautomaton A1 and a controlling ω-automaton A2 . The global ω-language of A1 controlled
by A2 is:
Lσ,ρ (A1 ~· A2 ) = {u ∈ Σω | there exists a legal run r of A1 on u such that
fr is (σ, ρ)-accepting w.r.t. F
and tr ∈ Lσ,ρ (A2 )}
We say Lσ,ρ (A2 ) is a controlling ω-language.
The symbol ~· is called “meta-composition”, denoting that the left operand is controlled by the right operand.
As usual, Linf,= (A1 ~· A2 ) will be denoted by L(A1 ~· A2 ).
Example 5.16. Given an ω-PDA D1 = ({q}, {a, b, c}, {S, Z, A, B, a, b, c}, δ, q, Z, {{q}}), where δ includes the following transitions:

ps : δ(q, ε, Z) = (q, SZ)   p1 : δ(q, ε, S) = (q, AB)   p2 : δ(q, ε, A) = (q, aAb)
p3 : δ(q, ε, B) = (q, Bc)   p4 : δ(q, ε, A) = (q, ab)   p5 : δ(q, ε, B) = (q, c)
pa : δ(q, a, a) = (q, ε)    pb : δ(q, b, b) = (q, ε)    pc : δ(q, c, c) = (q, ε)

Given a controlling ω-PDA D2 such that L(D2) = (ps p1 (p2 pa)ⁿ⁻¹ p4 pa pb⁺ p3ⁿ⁻¹ p5 pc⁺)ω, it is easy to see that L(D1 ~· D2) = {aⁿbⁿcⁿ | n ≥ 1}ω.
Let P = {pa, pb, pc}, L(D3) = (ps p1 P∗ (p2 P∗)ⁿ⁻¹ p4 P∗ (p3 P∗)ⁿ⁻¹ p5 P∗)ω; we also have L(D1 ~· D3) = {aⁿbⁿcⁿ | n ≥ 1}ω. We remark that the controlling ω-language can express
a “weaker” or “stronger” restriction to obtain the same effect, in the case that the “valid”
constraint on the controlled ω-automaton is not changed.
Two trivial types of controlling ω-automata are empty controlling ω-automata and
full controlling ω-automata. The former ones accept the empty controlling ω-language
which rejects all the sequences of applied transitions, i.e., Lσ,ρ (A2 ) = ∅. The latter ones
accept full controlling ω-languages that accept all the sequences of applied transitions, i.e.,
Lσ,ρ (A2 ) = δ1ω , where δ1 is the set of transitions of the controlled ω-automaton A1 . Note
that the two types of ω-languages are both regular.
5.3.2 Generative Power
Theorem 5.17. Lσ,ρ (ω- FSA ~· ω- FSA) = Lσ,ρ (ω- FSA).
Proof: (i) Lσ,ρ (ω- FSA ~· ω- FSA) ⊇ Lσ,ρ (ω- FSA) is obvious, since we can use a finite
state controlling ω-automaton generating a full controlling ω-language.
(ii) Lσ,ρ (ω- FSA ~· ω- FSA) ⊆ Lσ,ρ (ω- FSA). Given ω- FSA’s A1 = (Q1 , Σ1 , δ1 , q1 , E)
and A2 = (Q2 , Σ2 , δ2 , q2 , F), where E = {Ei }1≤i≤m , F = {Fi }1≤i≤n . Let us consider
different (σ, ρ)-accepting models.
(a) For ρ ∈ {⊆, =}, we construct an ω- FSA A = (Q, Σ1 , δ, q0 , H) where:
1. Q = Q1 × Q2 ,
2. δ((qi, qj), a) contains (ri, rj), if p : δ1(qi, a) = ri, a ∈ Σ1 ∪ {ε} and δ2(qj, p) contains rj,
δ((qi, qj), ε) contains (qi, rj), if δ2(qj, ε) contains rj,
3. q0 = (q1 , q2 ).
We define f1((qi, qj)) = {qi}, f2((qi, qj)) = {qj}. For H ⊆ Q, f1(H) = ∪{f1(q) | q ∈ H}, f2(H) = ∪{f2(q) | q ∈ H}. Let Hij = {H | H ⊆ Q, f1(H) = Ei, f2(H) = Fj}, then H = ∪{Hij | 1 ≤ i ≤ m, 1 ≤ j ≤ n}.
A simulates the actions of A1 , and changes the state of A2 . It is easy to see, for
σ ∈ {ran, inf}, ρ ∈ {⊆, =}, Lσ,ρ (A1 ~· A2 ) = Lσ,ρ (A) ∈ Lσ,ρ (ω- FSA).
(b) (σ, u)-acceptances. Without loss of generality, assume that E = {E} and F = {F} where E = E1 ∪ · · · ∪ Em, F = F1 ∪ · · · ∪ Fn. We construct an ω-FSA A = (Q, Σ1, δ, q0, H) where:
1. Q = Q1 × Q2 × {0, 1}2 ,
2. δ((qi, qj, x), a) contains (ri, rj, y), if p : δ1(qi, a) = ri and δ2(qj, p) contains rj, x ∈ {0, 1}²,

y = 00, if x = 11
    10, if x = 00 and qi ∈ E
    01, if x = 00 and qj ∈ F
    11, if x = 00 and qi ∈ E and qj ∈ F
    11, if x = 01 and qi ∈ E
    11, if x = 10 and qj ∈ F
    x,  else.
3. q0 = (q1 , q2 , 00).
Let H = {Q1 × Q2 × {11}}. It is easy to see, for σ ∈ {ran, inf}, Lσ,u (A1 ~· A2 ) = Lσ,u (A) ∈
Lσ,u (ω- FSA).
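The bookkeeping behind case (b) can be sketched as a small function (our own encoding: the pair x ∈ {0, 1}² is a two-character string, and membership of the current states in E and F is passed in as booleans):

```python
def update_flag(x, qi_in_E, qj_in_F):
    """One step of the third-component update from case (b) above.

    The first bit records a visit to an E-state, the second a visit to
    an F-state; '11' triggers a reset, so '11' recurs infinitely often
    iff E-states and F-states both recur infinitely often."""
    if x == '11':
        return '00'          # reset once both kinds have been seen
    e, f = x
    if qi_in_E:
        e = '1'
    if qj_in_F:
        f = '1'
    return e + f
```

For instance, from '00' a step visiting both an E-state and an F-state goes directly to '11', matching the fourth line of the case table; unlisted combinations fall through to the "else x" line.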
Theorem 5.18. Lσ,ρ (ω- FSA ~· ω- PDA) = Lσ,ρ (ω- PDA).
Proof: (i) Lσ,ρ (ω- FSA ~· ω- PDA) ⊇ Lσ,ρ (ω- PDA). Given an ω- PDA D = (Q, Σ, Γ, δ,
q0 , Z0 , F), we construct an ω- FSA A and a controlling ω- PDA D2 as follows:
A = ({qA }, Σ, δA , qA , {{qA }})
where δA contains pa : δA (qA , a) = qA , for each a ∈ Σ.
D2 = (Q, h(Σ), Γ, δ2 , q0 , Z0 , F)
where,
1. h is a homomorphism: h(a) = pa, h(ε) = ε, for a ∈ Σ,
2. for a set Σ, h(Σ) = {h(a) | a ∈ Σ},
3. δ2(q, h(a), B) contains (r, β), if δ(q, a, B) contains (r, β), where a ∈ Σ ∪ {ε}, B ∈ Γ, β ∈ Γ∗.
Clearly, Lσ,ρ (D2 ) = h(Lσ,ρ (D)). Since each transition pa ∈ δA generates exactly a terminal
a, we have Lσ,ρ (A ~· D2 ) = h−1 (Lσ,ρ (D2 )) = h−1 (h(Lσ,ρ (D))) = Lσ,ρ (D).
(ii) Lσ,ρ (ω- FSA ~· ω- PDA) ⊆ Lσ,ρ (ω- PDA). Given an ω- FSA A1 = (Q1 , Σ1 , δ1 , q1 , E)
and a controlling ω- PDA D2 = (Q2 , Σ2 , Γ, δ2 , q2 , Z0 , F), where E = {Ei }1≤i≤m , F =
{Fi }1≤i≤n . Let us consider different (σ, ρ)-accepting models.
(a) For ρ ∈ {⊆, =}, we construct an ω- PDA D = (Q, Σ1 , Γ, δ, q0 , Z0 , H) where:
1. Q = Q1 × Q2 ,
2. δ((qi, qj), a, B) contains ((ri, rj), β), for p : δ1(qi, a) = ri, and δ2(qj, p, B) contains (rj, β), where a ∈ Σ1 ∪ {ε}, B ∈ Γ, β ∈ Γ∗,
δ((qi, qj), ε, B) contains ((qi, rj), β), for qi ∈ Q1, and δ2(qj, ε, B) contains (rj, β), where B ∈ Γ, β ∈ Γ∗,
3. q0 = (q1 , q2 ),
We define f1((qi, qj)) = {qi}, f2((qi, qj)) = {qj}. For H ⊆ Q, f1(H) = ∪{f1(q) | q ∈ H}, f2(H) = ∪{f2(q) | q ∈ H}. Let Hij = {H | H ⊆ Q, f1(H) = Ei, f2(H) = Fj}, then H = ∪{Hij | 1 ≤ i ≤ m, 1 ≤ j ≤ n}.
D simulates the actions of D2 and changes the state of A1 . It is easy to see, for
σ ∈ {ran, inf}, ρ ∈ {⊆, =}, Lσ,ρ (A1 ~· D2 ) = Lσ,ρ (D) ∈ Lσ,ρ (ω- PDA).
(b) (σ, u)-acceptances. Without loss of generality, assume that E = {E} and F = {F} where E = E1 ∪ · · · ∪ Em, F = F1 ∪ · · · ∪ Fn. We construct an ω-PDA D = (Q, Σ1, Γ, δ, q0, Z0, H) where:
1. Q = Q1 × Q2 × {0, 1}2 ,
2. δ((qi, qj, x), a, B) contains ((ri, rj, y), β), for p : δ1(qi, a) = ri, and δ2(qj, p, B) contains (rj, β), where a ∈ Σ1 ∪ {ε}, B ∈ Γ, β ∈ Γ∗,
δ((qi, qj, x), ε, B) contains ((qi, rj, y), β), for qi ∈ Q1, and δ2(qj, ε, B) contains (rj, β), where B ∈ Γ, β ∈ Γ∗,
in the transitions above, x ∈ {0, 1}²,

y = 00, if x = 11
    10, if x = 00 and qi ∈ E
    01, if x = 00 and qj ∈ F
    11, if x = 00 and qi ∈ E and qj ∈ F
    11, if x = 01 and qi ∈ E
    11, if x = 10 and qj ∈ F
    x,  else.
3. q0 = (q1 , q2 , 00),
Let H = {Q1 × Q2 × {11}}. It is easy to see, for σ ∈ {ran, inf}, Lσ,u (A1 ~· D2 ) = Lσ,u (D) ∈
Lσ,u (ω- PDA).
Theorem 5.19. Lσ,ρ (ω- PDA ~· ω- FSA) = Lσ,ρ (ω- PDA).
Proof: (i) Lσ,ρ (ω- PDA ~· ω- FSA) ⊇ Lσ,ρ (ω- PDA) is obvious, since we can use a finite
state controlling ω-automaton generating a full controlling ω-language.
(ii) Lσ,ρ (ω- PDA ~· ω- FSA) ⊆ Lσ,ρ (ω- PDA). Given an ω- PDA D1 = (Q1 , Σ1 , Γ, δ1 , q1 ,
Z0 , E) and an ω- FSA A2 = (Q2 , Σ2 , δ2 , q2 , F), where E = {Ei }1≤i≤m , F = {Fi }1≤i≤n . Let
us consider different (σ, ρ)-accepting models.
(a) For ρ ∈ {⊆, =}, we construct an ω- PDA D = (Q, Σ1 , Γ, δ, q0 , Z0 , H), where:
1. Q = Q1 × Q2 ,
2. δ((qi, qj), a, B) contains ((ri, rj), β), if p : δ1(qi, a, B) = (ri, β), and δ2(qj, p) contains rj, where a ∈ Σ1 ∪ {ε}, B ∈ Γ, β ∈ Γ∗,
δ((qi, qj), ε, B) contains ((qi, rj), B), if δ2(qj, ε) contains rj, where B ∈ Γ,
3. q0 = (q1 , q2 ).
We define f1((qi, qj)) = {qi}, f2((qi, qj)) = {qj}. For H ⊆ Q, f1(H) = ∪{f1(q) | q ∈ H}, f2(H) = ∪{f2(q) | q ∈ H}. Let Hij = {H | H ⊆ Q, f1(H) = Ei, f2(H) = Fj}, then H = ∪{Hij | 1 ≤ i ≤ m, 1 ≤ j ≤ n}.
D simulates the actions of D1 and changes the state of A2 . It is easy to see, for
σ ∈ {ran, inf}, ρ ∈ {⊆, =}, Lσ,ρ (D1 ~· A2 ) = Lσ,ρ (D) ∈ Lσ,ρ (ω- PDA).
(b) (σ, u)-acceptances.
Without loss of generality, assume that E = {E} and F = {F }
where E = E1 ∪ · · · ∪ Em, F = F1 ∪ · · · ∪ Fn. We construct an ω-PDA D = (Q, Σ1, Γ, δ, q0, Z0, H), where:
1. Q = Q1 × Q2 × {0, 1}2 ,
2. δ((qi, qj, x), a, B) contains ((ri, rj, y), β), if p : δ1(qi, a, B) = (ri, β), and δ2(qj, p) contains rj, where a ∈ Σ1 ∪ {ε}, B ∈ Γ, β ∈ Γ∗,
δ((qi, qj, x), ε, B) contains ((qi, rj, y), B), if δ2(qj, ε) contains rj, where B ∈ Γ,
in the transitions above, x ∈ {0, 1}²,

y = 00, if x = 11
    10, if x = 00 and qi ∈ E
    01, if x = 00 and qj ∈ F
    11, if x = 00 and qi ∈ E and qj ∈ F
    11, if x = 01 and qi ∈ E
    11, if x = 10 and qj ∈ F
    x,  else.
3. q0 = (q1 , q2 , 00).
Let H = {Q1 × Q2 × {11}}. It is easy to see, for σ ∈ {ran, inf}, Lσ,u (D1 ~· A2 ) = Lσ,u (D) ∈
Lσ,u (ω- PDA).
Theorem 5.20. Lσ,ρ (ω- PDA) ⊂ Lσ,ρ (ω- PDA ~· ω- PDA) ⊆ Lσ,ρ (ω- TM).
Proof: (i) Lσ,ρ (ω- PDA) ⊂ Lσ,ρ (ω- PDA ~· ω- PDA) is obvious, since we can use a finite
state controlling ω-automaton generating a full controlling ω-language. The strictness
follow from Example 5.16.
(ii) Lσ,ρ (ω- PDA ~· ω- PDA) ⊆ Lσ,ρ (ω- TM). Given two ω- PDA’s D1 , D2 , we construct
an ω- TM M with four tapes. The first tape holds the input string u. The second tape
simulates the stack of D1 . Once D1 applies a transition, the name of the transition is
sequentially recorded on the third tape, and M starts to simulate D2 using the third and
fourth tapes (the third tape holds a prefix of the infinite sequence of transitions tr as the
input to D2 , and the fourth tape simulates the stack of D2 ). Once the pointer of the third
tape reaches a blank square, M returns to the simulation of D1.
The states of M may contain three components: one for simulating D1, one for simulating D2, and one for coordinating the two simulations. It is easy to see that Lσ,ρ(D1 ~· D2) = Lσ,ρ(M) by selecting proper designated state sets for the various accepting models.

5.3.3 Equivalence and Translation between ω-AC and ω-LGC Systems
In this section, we study the equivalence and the translation between ω-AC systems and
ω-LGC systems. To make the proofs succinct, some details are discussed without formal
description, e.g., the constructions of repetition sets and designated state sets.
We first prove some lemmas that will be used later in this section. Note that, if the equivalence between families of ω-automata and ω-grammars holds, we need only prove a result on one of them. For example, since Lσ,ρ(ω-FSA) and Lσ,ρ,l(ω-RLG) are equivalent in generative power, we may prove a result on only one of them.
Lemma 5.21. Lσ,ρ(ω-FSA) and Lσ,ρ,l(ω-RLG) are closed under substitution by ε-free finite sets.
Proof. Given an ω-RLG G and an ε-free finite substitution s, we construct G′ such that Lσ,ρ,l(G′) = s(Lσ,ρ,l(G)) as follows: each production of the form pk : A → uB is replaced by a set of productions Pk = {A → vB | v ∈ s(u)}. It is easy to construct the repetition
sets for various accepting models using the correspondence between pk and Pk .
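The construction in this proof is mechanical; the following sketch applies a finite substitution to the productions of an ω-RLG (the tuple encoding of productions is our own, not the thesis's notation):

```python
def substitute_productions(productions, s):
    """Lemma 5.21 construction: replace each production pk : A -> uB
    by the set Pk = {A -> vB | v in s(u)}, where s maps every terminal
    to a finite, epsilon-free set of strings.

    productions: dict  name -> (A, u, B)
    s:           dict  terminal -> set of nonempty strings
    """
    def image(u):
        # s extended to strings: s(a1...an) = s(a1) ... s(an)
        words = ['']
        for a in u:
            words = [w + v for w in words for v in s[a]]
        return set(words)

    return {k: {(A, v, B) for v in image(u)}
            for k, (A, u, B) in productions.items()}
```

The correspondence pk ↦ Pk kept by the returned dictionary is exactly what the repetition sets are rebuilt from.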
Lemma 5.22. Lσ,ρ (ω- FSA) and Lσ,ρ,l (ω- RLG) are closed under concatenation with finite
sets.
Proof. Given an ω-RLG G = (N, T, P, S, F) and a finite set R, we construct G′ = (N ∪ {S′}, T, P ∪ {S′ → uS | u ∈ R}, S′, F′) such that Lσ,ρ,l(G′) = R · Lσ,ρ,l(G). It is easy to construct the repetition sets F′ for various accepting models, since we only added a set of starting productions.
Lemma 5.23. For (σ, ρ) ∈ {(inf, u), (inf, =)}, Lσ,ρ(ω-FSA) and Lσ,ρ,l(ω-RLG) are closed under substitution by ε-free regular sets.
Proof. Given an ω-FSA A = (Q, Σ, δ, q, F) and an ε-free regular substitution s, such that for each ak ∈ Σ, s(ak) = L(Ak) for an FSA Ak = (Qk, Σk, δk, rk, Fk) on finite words, we construct A′ = (Q′, Σ′, δ′, q′, F′) as follows: if there is a transition δ(qi, ak) = qj, then replace the transition by a copy of Ak and add the transitions δ′(qi, ε) = rk and δ′(r, ε) = qj for all r ∈ Fk. It is easy to construct F′ for various accepting models.
We have the same closure properties for ω- CFG using the same proof techniques.
Lemma 5.24. Lσ,ρ(ω-PDA) and Lσ,ρ,l(ω-CFG) are closed under substitution by ε-free finite sets and concatenation with finite sets.
Lemma 5.25. For (σ, ρ) ∈ {(inf, u), (inf, =)}, Lσ,ρ(ω-PDA) and Lσ,ρ,l(ω-CFG) are closed under substitution by ε-free regular sets.
Theorem 5.26. Let X be a family of ω-automata, Y be a family of ω-grammars, for σ ∈ {ran, inf}, ρ ∈ {u, ⊆, =}, if L = Lσ,ρ(X) = Lσ,ρ,l(Y) is closed under substitution by ε-free finite sets and concatenation with finite sets, then Lσ,ρ(ω-FSA ~· X) = Lσ,ρ,l(ω-RLG, Y).
Proof. (i) Lσ,ρ (ω- FSA ~· X) ⊆ Lσ,ρ,l (ω- RLG, Y ). Given an ω-AC system (A ~· A2 ), A ∈
ω- FSA, A2 ∈ X, we construct G ∈ ω- RLG, G2 ∈ Y as follows.
First, we construct G from A. The key point of the translation is the name mapping
between transitions and productions. Given A = (Q, Σ, δ, q0, E) where Q = {qi}0≤i<|Q|, we construct G = (N, Σ, P, S0, E′) where:
1. for each qi ∈ Q, there is a nonterminal Si ∈ N , and S0 is the start symbol.
2. for each transition pk : δ(qi, ak) = qj, where pk is the name of the transition and ak ∈ Σ ∪ {ε}, there is a production pik : Si → ak Sj ∈ P.
Let Pi = {pik ∈ P}; it is easy to construct E′ for various accepting models such that Lσ,ρ(A) = Lσ,ρ,l(G) using the correspondence between qi and Pi.
Second, we construct G2 ∈ Y from A2 ∈ X. Let h(pk) = {pik | pk : δ(qi, ak) = qj} be an ε-free homomorphism; for each (σ, ρ), there exists an ω-automaton A′2 ∈ X such that Lσ,ρ(A′2) = h(Lσ,ρ(A2)), since Lσ,ρ(X) is closed under ε-free homomorphism. Therefore, there exists G2 ∈ Y such that Lσ,ρ,l(G2) = h(Lσ,ρ(A2)) since Lσ,ρ(X) = Lσ,ρ,l(Y). It is easy to see Lσ,ρ(A ~· A2) = Lσ,ρ,l(G, G2).
(ii) Lσ,ρ (ω- FSA ~· X) ⊇ Lσ,ρ,l (ω- RLG, Y ). Given an ω-LGC system (G, G2 ), G =
(N, T, P, S0 , E) ∈ ω- RLG, G2 ∈ Y , where N = {Si }0≤i<|N | , P consists of all productions
of the forms: Si → uSj , Si → u, u ∈ T ∗ .
First, we construct from G an ω-RLG G1 = (N1, T, P1, S0, E′), where P1 consists of all productions of the following forms:
1. pkm : Sk(m−1) → akm Skm , for 1 ≤ m ≤ |u|, if pk : Si → uSj ∈ P , where u =
ak1 · · · ak|u| , Sk0 = Si , Sk|u| = Sj .
2. pk : Si → Sj , if pk : Si → Sj ∈ P .
3. pkm : Sk(m−1) → akm Skm, for 1 ≤ m ≤ |u|, if pk : Si → u ∈ P, where u = ak1 · · · ak|u|, Sk0 = Si, Sk|u| = ε.
4. pk : Si → ε, if pk : Si → ε ∈ P.
We denote by P′k the set of productions named pk or pkm. Let E = {Ei}1≤i≤n; we construct the set E′ = {E′i}1≤i≤n where E′i = ∪{P′k | pk ∈ Ei}. It can be easily verified that
Lσ,ρ,l (G1 ) = Lσ,ρ,l (G) for different accepting models.
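The splitting of long right-linear productions into unit steps (forms 1 and 3 above) can be sketched as follows (the names chosen for the intermediate nonterminals are hypothetical; the thesis only fixes Sk0 = Si and Sk|u| = Sj):

```python
def split_production(k, Si, u, Sj):
    """Split pk : Si -> u Sj (with u = ak1...ak|u|) into unit
    productions pkm : Sk(m-1) -> akm Skm, where Sk0 = Si and
    Sk|u| = Sj.  For form 3 (pk : Si -> u) pass Sj = '' (epsilon)."""
    prods = []
    src = Si
    for m, a in enumerate(u, start=1):
        # last step targets Sj; earlier steps target fresh nonterminals
        tgt = Sj if m == len(u) else f'S{k}{m}'
        prods.append((f'p{k}{m}', src, a, tgt))
        src = tgt
    return prods
```

The set P′k of the proof is then simply the set of names produced for a given k, from which E′ is assembled.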
Then, we construct from G1 an ω-FSA A = (Q, T, δ, q0, E″), where Q = {q0} ∪ {qik | pk : Si → γ ∈ P1, γ ∈ (N1 ∪ T)∗}, q0 is the start state, δ is defined as follows:
1. δ(q0, ε) = {q0k | pk : S0 → γ ∈ P1, γ ∈ (N1 ∪ T)∗}.
2. p[k,n] : δ(qik, a) = qjn for pn : Sj → γ ∈ P1, γ ∈ (N1 ∪ T)∗, if pk : Si → aSj ∈ P1, where a ∈ T ∪ {ε}.
It is easy to construct E″ for various accepting models such that Lσ,ρ(A) = Lσ,ρ,l(G1) using the correspondence between pk and qik. We denote by TR1 the set of transitions of type (1) above, and by P″k the set of transitions named p[k,n].
Second, we construct A2 ∈ X from G2 ∈ Y. Let h be an ε-free homomorphism as follows:

h(pk) = pk, if pk is of the form Si → Sj in P
        pk, if pk is of the form Si → ε in P
        pk1 · · · pk|u|, if pk is of the form Si → uSj in P
        pk1 · · · pk|u|, if pk is of the form Si → u in P

Let f be an ε-free finite substitution: f(pk) = P″k.
For each (σ, ρ), let L′ = TR1 · f(h(Lσ,ρ,l(G2))); we have L′ ∈ L, since Lσ,ρ,l(G2) ∈ L and L is closed under substitution by ε-free finite sets and concatenation with finite sets. Therefore, there exists A2 ∈ X such that Lσ,ρ(A2) = L′ ∈ Lσ,ρ(X). It is easy to see that an input u ∈ Lσ,ρ(A ~· A2) if and only if u ∈ Lσ,ρ,l(G, G2).
Let X be ω- FSA, Y be ω- RLG, we have the following corollary.
Corollary 5.27. Lσ,ρ (ω- FSA ~· ω- FSA) = Lσ,ρ,l (ω- RLG, ω- RLG).
Let X be ω- PDA, Y be ω- CFG, we have the following corollary.
Corollary 5.28. Lσ,ρ (ω- FSA ~· ω- PDA) = Lσ,ρ,l (ω- RLG, ω- CFG).
Theorem 5.29. Let X be a family of ω-automata, Y be a family of ω-grammars, for σ ∈ {ran, inf}, ρ ∈ {u, ⊆, =}, if L = Lσ,ρ(X) = Lσ,ρ,l(Y) is closed under substitution by ε-free regular sets and concatenation with finite sets, then Lσ,ρ(ω-PDA ~· X) = Lσ,ρ,l(ω-CFG, Y).
Proof. (i) Lσ,ρ (ω- PDA ~· X) ⊆ Lσ,ρ,l (ω- CFG, Y ). Given an ω-AC system (D ~· A2 ),
D = (Q, Σ, Γ, δ, q0 , Z0 , E) ∈ ω- PDA, A2 ∈ X, we construct ω-grammars G ∈ ω- CFG,
G2 ∈ Y as follows.
First, we construct G = (N, Σ, P, S, E′) from D, where N is the set of objects of the form [q, B, r] (denoting popping B from the stack by several transitions, switching the state from q to r), q, r ∈ Q, B ∈ Γ, P is the union of the following sets of productions:
1. Ps = {S → [q0 , Z0 , qi ] | qi ∈ Q}.
2. P′ = {[qi, B, qj] → a[qj1, B1, qj2][qj2, B2, qj3] · · · [qjm, Bm, qj] | δ(qi, a, B) = (qj1, B1B2...Bm), qj2, ..., qjm, qj ∈ Q}, where a ∈ Σ ∪ {ε}, and B, B1, ..., Bm ∈ Γ. (If m = 0, then the production is [qi, B, qj1] → a.)
We denote by Pi the set of productions of the form [qi, B, qj] → γ, for any B ∈ Γ, qj ∈ Q, γ ∈ N∗ ∪ ΣN∗. It is easy to construct E′ for various accepting models such that Lσ,ρ(D) = Lσ,ρ,l(G) using the correspondence between qi and Pi.
Second, we construct G2 ∈ Y from A2 ∈ X. Let s be an ε-free finite substitution, such that s(pi) = Pi. For each (σ, ρ), let L′ = Ps · s(Lσ,ρ(A2)); we have L′ ∈ Lσ,ρ(X), since Lσ,ρ(A2) ∈ Lσ,ρ(X) = L and L is closed under ε-free finite substitution and concatenation with finite sets. Thus, there exists G2 ∈ Y such that Lσ,ρ,l(G2) = L′. It is easy to see that an input u ∈ Lσ,ρ(D ~· A2) if and only if u ∈ Lσ,ρ,l(G, G2).
(ii) Lσ,ρ(ω-PDA ~· X) ⊇ Lσ,ρ,l(ω-CFG, Y). Given an ω-LGC system (G, G2), where G = (N, T, P, S, E) ∈ ω-CFG, G2 ∈ Y, and P = {pi}1≤i≤|P|, we construct an ω-AC system as follows.
First, we construct an ω-PDA D = (Q, T, Γ, δ, q′0, Z0, E′) from G, where Γ = N ∪ T ∪ {Z0}, Q = {q′0, q0} ∪ {qi | pi ∈ P}, δ is defined as follows:
1. ps : δ(q′0, ε, Z0) = (q0, SZ0),
2. pa : δ(q0, a, a) = (q0, ε) for all a ∈ T,
3. pi1 : δ(q0, ε, A) = (qi, ε) if pi : A → γ ∈ P, A ∈ N, γ ∈ (N ∪ T)∗,
4. pi2Z : δ(qi, ε, Z) = (q0, γZ) if pi : A → γ ∈ P, Z ∈ Γ.
It is easy to construct E′ for various accepting models such that Lσ,ρ(D) = Lσ,ρ,l(G) using the correspondence between qi and pi. Let Pi2 be the set of transitions named pi2Z.
Second, we construct A2 ∈ X from G2 ∈ Y. Let s(pi) = pi1 · Pi2 · {pa | a ∈ T}∗ be an ε-free regular substitution, L′ = {ps} · s(Lσ,ρ,l(G2)); we have L′ ∈ L, since L is closed under ε-free regular substitution and concatenation with finite sets. Thus, there exists A2 ∈ X such that Lσ,ρ(A2) = L′.
It is easy to see that an input u ∈ Lσ,ρ(D ~· A2) if and only if u ∈ Lσ,ρ,l(G, G2).
Let X be ω- FSA, Y be ω- RLG, we have the following corollary.
Corollary 5.30. For (σ, ρ) ∈ {(inf, u), (inf, =)},
Lσ,ρ (ω- PDA ~· ω- FSA) = Lσ,ρ,l (ω- CFG, ω- RLG).
Let X be ω- PDA, Y be ω- CFG, we have the following corollary.
Corollary 5.31. For (σ, ρ) ∈ {(inf, u), (inf, =)},
Lσ,ρ (ω- PDA ~· ω- PDA) = Lσ,ρ,l (ω- CFG, ω- CFG).
It is good news that the equivalence holds for the most important Büchi and Muller acceptances, although whether the results above hold for other accepting models is still an open problem.
Because of these equivalences, we may also denote an ω-LGC system Lσ,ρ,l(G1, G2) by Lσ,ρ(G1 ~· G2) for a uniform notation. Moreover, this operator expresses more directly the result of constructively computing a meta-composition, e.g., A = A1 ~· A2, G = G1 ~· G2. These meta-compositions are useful and important in the proofs of generative power, as well as in emerging applications.
5.4 Related Work
Cohen studied unrestricted ω-grammars with control sets in [33]. More precisely, in our notation, a family of ω-LGC systems Lσ,ρ,l(X, Y) was studied in the literature with the following properties: (1) (σ, ρ) = (inf, =); (2) X is an unrestricted ω-grammar, i.e., F = 2^P; (3) only the leftmost derivation of X is considered; (4) the control set is a regular ω-language (i.e., Y is an (inf, =)-accepting ω-RLG). Obviously, only a few types of ω-C systems were considered.
5.5 Conclusion
In this chapter, we proposed the control system on ω-words (ω-C system), which is a
framework consisting of a controlled ω-device and a controlling ω-device that are modeled using the same formalism. These ω-devices could be (σ, ρ)-accepting ω-automata
or (σ, ρ, π)-accepting ω-grammars. Three subtypes of ω-C systems are proposed: ω-GC,
ω-LGC and ω-AC systems. To our knowledge, this is the first work to combine restricted ω-grammars under various accepting models with the regulated rewriting on finite words of classical language theory.
Note that the proofs of the theorems also provide algorithms to build ω-grammars or
ω-automata accepting ω-languages that are equivalent to the global ω-languages. This is
useful for implementing tools based on these theorems.
Chapter 6

Büchi Automaton Control Systems and Concurrent Variants
In this chapter, we propose the Büchi automaton control system and its concurrent variants. A Büchi automaton control system consists of a controlled Büchi automaton and a
Büchi controlling automaton that restricts the behavior of the controlled automaton. The
Büchi automaton is a special case of finite state ω-automata, and has been widely used
for modeling and reasoning about systems. Therefore, we would like to discuss the control
system based on Büchi automata and its variants for concurrent systems, although it is a
special case of previously introduced ω-AC systems.
6.1 Büchi Automaton Control Systems (BAC Systems)
In this section, we present the definitions and properties of the Büchi automaton control
system, after a brief introduction of Büchi automata.
6.1.1 Büchi Automata
Let us recall the classic definition of Büchi automata [16, 120, 121].
Definition 6.1. A (nondeterministic) Büchi automaton (simply automaton) is a tuple
A = (Q, Σ, δ, q0 , F ), where Q is a finite set of states, Σ is a finite alphabet, δ ⊆ Q × Σ × Q
is a set of named transitions, q0 ∈ Q is the initial state, F ⊆ Q is a set of accepting states.
For convenience, we denote the set of transition names also by δ.
A Büchi automaton A = (Q, Σ, δ, q0, F) is a deterministic Büchi automaton if δ is a transition function mapping Q × Σ → Q.
Note that the concept of transition name is introduced. A transition in δ is of the
form pk : (q, a, q 0 ), where pk is the name of the transition. In the transition diagram, a
transition pk : (q, a, q 0 ) ∈ δ is denoted by an arc from q to q 0 labeled pk : a.
We do not make any specific assumptions about the relation between the name and
symbol of a transition. That is, several transitions may have the same name but different
symbols, or they may have distinct names but the same symbol. For example, assume
δ(q1 , a) = {q2 , q3 } and δ(q3 , b) = {q2 }, we may denote by pi : (q1 , a, q2 ) ∈ δ and pj :
(q1 , a, q3 ) ∈ δ and pk : (q3 , b, q2 ) ∈ δ with the names pi , pj , pk . All the following cases are
possible: pi = pj or pi 6= pj , pi = pk or pi 6= pk , and so forth.
Definition 6.2. A run of A on an ω-word v = v(0)v(1) . . . ∈ Σω is a sequence of states
ρ = ρ(0)ρ(1) . . . ∈ Qω such that ρ(0) = q0 , σ(i) : (ρ(i), v(i), ρ(i + 1)) ∈ δ for i ≥ 0. Let
inf(ρ) be the set of states that appear infinitely often in the run ρ, then ρ is a successful
run if and only if inf(ρ) ∩ F 6= ∅. A accepts v if there is a successful run of A on v. The
ω-language accepted by A is L(A) = {v ∈ Σω | A accepts v}.
Given a run of A on v, an execution is the sequence of alternating states and input
symbols ρ(0)v(0)ρ(1)v(1) . . ., while an execution trace (or simply trace) is the sequence of
transitions σ = σ(0)σ(1) . . . ∈ δ ω .
If an ω-language L = L(A) for some Büchi automaton A, then L is Büchi recognizable.
Büchi recognizable ω-languages are called regular ω-languages. The expressive power of
regular ω-languages includes that of LTL (Linear Temporal Logic) [120], although Büchi
automata are syntactically simple. Thus, we can translate LTL formulas into Büchi automata.
Note that whether the transition names of A are unique does not affect L(A). This
means, we can arbitrarily modify the names without changing L(A), e.g., making them
unique.
6.1.2 Büchi Automaton Control Systems
A Büchi automaton control system consists of a controlled Büchi automaton and a Büchi
controlling automaton that restricts the behavior of the controlled automaton. The alphabet of the controlling automaton equals the set of transition names of the controlled
automaton.
Definition 6.3. Given a controlled Büchi automaton (or simply automaton) A1 = (Q1 ,
Σ1 , δ1 , q1 , F1 ), with a set of transition names δ1 = {pi }i∈I where pi is a name of transition,
a Büchi controlling automaton (or simply controlling automaton) over A1 is a tuple A2 =
(Q2 , Σ2 , δ2 , q2 , F2 ) with Σ2 = δ1 . L(A1 ) and L(A2 ) are called controlled ω-language and
controlling ω-language, respectively.
Definition 6.4. A Büchi Automaton Control System (BAC System) includes an automaton A1 and a controlling automaton A2, denoted by A1 ~· A2. A run of A1 ~· A2 on an ω-word v = v(0)v(1) . . . ∈ Σ1ω contains
- a sequence of states ρ1 = ρ1(0)ρ1(1) . . . ∈ Q1ω,
- a sequence of transitions σ = σ(0)σ(1) . . . ∈ δ1ω,
- a sequence of controlling states ρ2 = ρ2(0)ρ2(1) . . . ∈ Q2ω,
such that ρ1(0) = q1, σ(i) : (ρ1(i), v(i), ρ1(i + 1)) ∈ δ1 for i ≥ 0, and ρ2(0) = q2, (ρ2(j), σ(j), ρ2(j + 1)) ∈ δ2 for j ≥ 0. Let inf(ρ1) and inf(ρ2) be the sets of states that appear infinitely often in the sequences ρ1 and ρ2, respectively. Then the run is successful if and only if inf(ρ1) ∩ F1 ≠ ∅ and inf(ρ2) ∩ F2 ≠ ∅. A1 ~· A2 accepts v if there is a successful run on v. The global ω-language accepted by A1 ~· A2 is L(A1 ~· A2) = {v ∈ Σ1ω | A1 ~· A2 accepts v}.
The symbol ~· is called “meta-composition”, denoting that the left operand is controlled by the right operand.
Example 6.5. Given a Büchi automaton A1 = ({q0 , q1 }, {a, b, c}, δ, q0 , {q1 }) in Fig. 6.1,
where δ includes the following transitions: p1 : (q0 , a, q0 ), p2 : (q0 , b, q1 ), p3 : (q1 , a, q1 ),
p2 : (q1 , c, q1 ). Obviously, L(A1 ) = a∗ b(a + c)ω .
Given a Büchi controlling automaton A2 such that L(A2) = p2 p3 p2ω, it is easy to see that L(A1 ~· A2) = bacω.
[Transition diagrams of A1 (states q0, q1) and A2 (states r0, r1, r2) omitted.]
Figure 6.1: A Büchi Automaton Control System
If we let L(A3) = (p2 + p3)p1∗ p3 p2ω, we also have L(A1 ~· A3) = bacω. Note that L(A3)
specifies a weaker constraint, i.e., L(A2 ) ⊂ L(A3 ). However, the two BAC systems accept
the same global ω-language. We remark that the controlling ω-language can express a
“weaker” or “stronger” restriction to obtain the same effect, if the “valid” constraint on
the controlled automaton is not changed.
Two trivial types of controlling automata are empty controlling automata and full
controlling automata. The former ones accept the empty controlling ω-language which
rejects all the sequences of applied transitions, i.e., L(A2 ) = ∅. The latter ones accept full
controlling ω-languages that accept all the sequences of applied transitions, i.e., L(A2 ) =
δ1ω , where δ1 is the set of transition names of the controlled automaton A1 . Note that the
two types of languages are both regular ω-language.
Let us consider the computation of meta-composition.
Theorem 6.6. Given two Büchi automata A1 = (Q1 , Σ1 , δ1 , q1 , F1 ) and A2 = (Q2 , Σ2 , δ2 ,
q2 , F2 ) with Σ2 = δ1 , the meta-composition of A1 and A2 is a Büchi automaton:
A = A1 ~· A2 = (Q1 × Q2 × {0, 1, 2}, Σ1 , δ, (q1 , q2 , 0), Q1 × Q2 × {2})
where p : ((qi , qj , x), a, (qm , qn , y)) ∈ δ if and only if p : (qi , a, qm ) ∈ δ1 , (qj , p, qn ) ∈ δ2 , and
x, y satisfy the following conditions:

y = 0, if x = 2
    1, if x = 0 and qm ∈ F1
    2, if x = 1 and qn ∈ F2, or if qm ∈ F1 and qn ∈ F2
    x, otherwise
We have L(A) = L(A1 ~· A2 ).
Proof. Note that a symbol a is enabled at the state (qi , qj , x) by the transition p of A,
if and only if a is enabled at qi by the transition p of A1 , and p is enabled at qj of A2 .
Furthermore, A changes the third component of Q1 × Q2 × {0, 1, 2} from 0 to 1 when
an F1 state occurs, from 1 to 2 when subsequently an F2 state occurs, and back to 0
immediately afterwards. Thus, 2 occurs infinitely often if and only if some F1 state and
some F2 state appear infinitely often. According to Def. 6.4, it is easy to see A accepts
exactly L(A1 ~· A2 ).
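The construction of Theorem 6.6 can be sketched in a few lines of Python. The dictionary encoding of automata and the helper name `meta_compose` are our own assumptions, not part of the thesis; the flag update follows the case distinction of the theorem, resolved top to bottom.

```python
from itertools import product

def meta_compose(A1, A2):
    """Sketch of Theorem 6.6: the meta-composition A1 ~. A2.

    A1 transitions are (name, src, symbol, dst); A2 transitions are
    (src, name, dst), so the alphabet of A2 is the transition names of A1.
    The third state component tracks acceptance: 0 -> 1 on an F1 state,
    1 -> 2 on an F2 state, and 2 -> 0 immediately afterwards.
    """
    delta = []
    for (p, qi, a, qm), (qj, p2, qn) in product(A1['delta'], A2['delta']):
        if p != p2:
            continue
        for x in (0, 1, 2):
            if x == 2:
                y = 0
            elif x == 0 and qm in A1['F']:
                y = 1
            elif (x == 1 and qn in A2['F']) or (qm in A1['F'] and qn in A2['F']):
                y = 2
            else:
                y = x
            delta.append((p, (qi, qj, x), a, (qm, qn, y)))
    return {'Q': [(q1, q2, x) for q1 in A1['Q'] for q2 in A2['Q'] for x in (0, 1, 2)],
            'delta': delta,
            'q0': (A1['q0'], A2['q0'], 0),
            'F': [(q1, q2, 2) for q1 in A1['Q'] for q2 in A2['Q']]}
```

On the automata of Example 6.5, the only move from the initial state of the meta-composition reads b, as expected from L(A1 ~· A2) = bacω.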
If all the states of A1 are accepting states, it is only required that some state in F2
appears infinitely often. Therefore, the computation can be simplified as follows.
Theorem 6.7. Given two Büchi automata A1 = (Q1 , Σ1 , δ1 , q1 , Q1 ) and A2 = (Q2 , Σ2 , δ2 ,
q2 , F2 ) with Σ2 = δ1 , the meta-composition of A1 and A2 is a Büchi automaton:
A = A1 ~· A2 = (Q1 × Q2 , Σ1 , δ, (q1 , q2 ), Q1 × F2 )
where p : ((qi , qj ), a, (qm , qn )) ∈ δ if and only if p : (qi , a, qm ) ∈ δ1 , (qj , p, qn ) ∈ δ2 . We have
L(A) = L(A1 ~· A2 ).
It is easy to see that the generative power of BAC systems is equivalent to that of Büchi
automata. Formally, let BA be the family of Büchi automata, and let L(X) be the family of
languages accepted by a set X of automata; we have the following theorem.
Theorem 6.8. L(BA ~· BA) = L(BA).
Proof. (i). L(BA ~· BA) ⊆ L(BA) follows immediately from Thm. 6.6.
(ii). L(BA ~· BA) ⊇ L(BA). Given a Büchi automaton A1, one can construct a
BAC system A1 ~· A2 such that L(A1 ~· A2) = L(A1) by using a full controlling automaton
A2.
The next theorem shows that we can make each transition name in the controlled
automaton unique without changing the global ω-language.
Theorem 6.9. Given two Büchi automata A1 = (Q1, Σ1, δ1, q1, F1) and A2 = (Q2, Σ2, δ2,
q2, F2) with Σ2 = δ1, there exist two Büchi automata A1′ = (Q1, Σ1, δ1′, q1, F1) and A2′ =
(Q2, Σ2′, δ2′, q2, F2), where Σ2′ = δ1′ and δ1′ only renames some transitions of δ1, such that
each transition name in δ1′ is unique and L(A1 ~· A2) = L(A1′ ~· A2′).
Proof. Suppose there are k transitions named pi in δ1; we rename them pi1, ..., pik
in δ1′. Then, we replace any transition of the form (q, pi, q′) ∈ δ2 for q, q′ ∈ Q2 by the set of
transitions {(q, pij, q′)}1≤j≤k. It is easy to see that the theorem holds.
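The renaming step in the proof can be sketched as follows; the encoding of transitions and the helper name `make_names_unique` are our own assumptions, while the suffixing scheme pi1, ..., pik follows the proof.

```python
from collections import defaultdict

def make_names_unique(delta1, delta2):
    """Sketch of Theorem 6.9: rename duplicate names in delta1 and expand
    each controlling transition of delta2 accordingly."""
    groups = defaultdict(list)              # name -> transitions bearing it
    for t in delta1:
        groups[t[0]].append(t)
    renaming = {}                           # old name -> list of new names
    new_delta1 = []
    for name, ts in groups.items():
        if len(ts) == 1:
            new_delta1.append(ts[0])
            renaming[name] = [name]
        else:                               # k > 1 copies become pi1, ..., pik
            renaming[name] = []
            for j, (_, src, a, dst) in enumerate(ts, start=1):
                new_name = f"{name}{j}"
                new_delta1.append((new_name, src, a, dst))
                renaming[name].append(new_name)
    # replace (q, pi, q') by the set {(q, pij, q')} for 1 <= j <= k
    new_delta2 = [(q, n, q2) for (q, p, q2) in delta2 for n in renaming[p]]
    return new_delta1, new_delta2
```

Applied to Example 6.5, the two transitions named p2 become p21 and p22, and each controlling transition on p2 is duplicated, as in Example 6.10.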
Example 6.10. Let us consider the Büchi automaton A1 and the Büchi controlling automaton A2 in Fig. 6.1. We construct the automata A1′ and A2′ in Fig. 6.2, such that A1′
renames the two transitions named p2, and A2′ replaces each transition labeled p2 by two
transitions p21, p22. It is easy to see that L(A1′ ~· A2′) = L(A1 ~· A2) = bacω.
[Diagram: A1′ is A1 with the two p2 transitions renamed p21: b and p22: c; A2′ is A2 with each p2 transition replaced by the pair p21, p22.]
Figure 6.2: A Büchi Automaton Control System with Unique Names
Discussion. Theorem 6.6 ensures that the meta-composition is also a Büchi automaton, and can be implemented as a reactive system whose behavior is described by a Büchi
automaton. Additionally, Theorem 6.8 means that any reactive system can be represented in the form of a BAC system; in other words, the family of Büchi automata is
closed under the meta-composition operator. If all of the states of a controlled automaton are accepting states, Theorem 6.7 simplifies the computation of the meta-composition.
Theorem 6.9 makes it reasonable to assume, when necessary, that each transition name in a
controlled automaton is unique.
6.1.3 Alphabet-level Büchi Automaton Control Systems
Let us consider a special subset of the BAC systems that have two properties: (1) each
transition name of the controlled automaton is unique; (2) all transitions associated with
the same symbol in the controlled automaton always appear together between two states
of the controlling automaton. Formally, we define them as follows.
Definition 6.11. Given two Büchi automata A1 = (Q1 , Σ1 , δ1 , q1 , F1 ) and A2 = (Q2 , Σ2 , δ2 ,
q2 , F2 ) with Σ2 = δ1 . The global system A1 ~· A2 is an Alphabet-level Büchi Automaton
Control System (A-BAC System), if the following conditions are satisfied:
(1) each transition name in δ1 is unique;
(2) if (qi , p, qj ) ∈ δ2 and p : (qm , a, qn ) ∈ δ1 , then for any transition pk : (qx , a, qy ) ∈ δ1
associated with the symbol a, there exists (qi , pk , qj ) ∈ δ2 .
We say A2 is an Alphabet-level Büchi Controlling Automaton (A-BCA).
An important property of the A-BAC system is that it expresses the semantics of
intersection of two regular ω-languages over the alphabet Σ1 .
Theorem 6.12. For each A-BAC system A1 ~· A2, there exists a Büchi automaton A3 =
(Q2, Σ1, δ3, q2, F2), where (qj, a, qn) ∈ δ3 if and only if (qj, p, qn) ∈ δ2 and p : (qi, a, qm) ∈ δ1
is associated with the symbol a, such that L(A1 ~· A2) = L(A1) ∩ L(A3).
Proof. According to Thm. 6.6, the meta-composition of A1 and A2 is a Büchi automaton:
A = A1 ~· A2 = (Q1 × Q2 × {0, 1, 2}, Σ1 , δ, (q1 , q2 , 0), Q1 × Q2 × {2})
where p : ((qi , qj , x), a, (qm , qn , y)) ∈ δ if and only if p : (qi , a, qm ) ∈ δ1 , (qj , p, qn ) ∈ δ2 .
According to the construction of A3 , we have p : ((qi , qj , x), a, (qm , qn , y)) ∈ δ if and only
if p : (qi , a, qm ) ∈ δ1 , (qj , a, qn ) ∈ δ3 . Thus, A accepts the intersection of the ω-languages
accepted by A1 and A3 , i.e., L(A) = L(A1 ) ∩ L(A3 ). Therefore, the equation holds.
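When the controlled automaton has unique transition names, the automaton A3 of Theorem 6.12 is a simple projection of the controlling transitions onto the alphabet. A sketch under our own encoding (the helper name `alphabet_projection` is an assumption):

```python
def alphabet_projection(delta1, delta2):
    """Sketch of Theorem 6.12: map an alphabet-level controlling automaton
    over transition names to a Büchi automaton over Sigma_1."""
    symbol_of = {name: a for (name, _src, a, _dst) in delta1}  # names unique
    # (qj, a, qn) in delta3  iff  (qj, p, qn) in delta2 and p carries a
    return {(qj, symbol_of[p], qn) for (qj, p, qn) in delta2}
```

Since the A-BCA condition makes all names carrying the same symbol appear together between the same pair of states, the projection loses no information.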
Example 6.13. Let us consider the Büchi automaton A1′ in Fig. 6.2 and the Büchi
controlling automaton A3 in Fig. 6.3. A3 is an alphabet-level Büchi controlling automaton,
since p1, p3, which are associated with a, always appear together. We can construct A4 in Fig. 6.3
such that L(A1′ ~· A3) = L(A1′) ∩ L(A4).
[Diagram: the alphabet-level controlling automaton A3 over the transition names p1, p3, p21, p22, and the corresponding Büchi automaton A4 over the symbols a, b, c.]
Figure 6.3: Alphabet-level Büchi Controlling Automaton
Obviously, the A-BAC system is a special case of the BAC system. Thus, its generative
power is not greater than that of the BAC system. In fact, we have the following theorem.
Theorem 6.14. L(BA ~· A-BCA) = L(BA).
Proof. (i). L(BA ~· A-BCA) ⊆ L(BA) follows immediately from L(BA ~· A-BCA) ⊆
L(BA ~· BA) ⊆ L(BA).
(ii). L(BA ~· A-BCA) ⊇ L(BA). Given a Büchi automaton A1, one can construct a
BAC system A1 ~· A2 such that L(A1 ~· A2) = L(A1) by using a full controlling automaton
A2 that is also an alphabet-level Büchi controlling automaton.
We say that the general BAC system operates at the transition level. The BAC system is
more flexible, because it is not required that all the transitions associated with the same
symbol in δ1 appear together between two states of A2.
In particular, the BAC system can impose some constraints beyond the power of A-BAC
systems. We use a simple example to show the difference between BAC and A-BAC
systems in expressive power.
Consider a nondeterministic vending machine A1 (see Fig. 6.4) that receives a coin a and
dispenses three types of goods b, c, d (inputs are marked by ?, while outputs are marked
by !). The machine may receive a coin a, then output b, c, d in order; it may also
output only b, d. The nondeterministic choice may be made by the machine by chance,
or may result from some unobservable or unmodeled events due to the
limitation of sensors or other factors. Obviously, the machine accepts the ω-language
L(A1) = (abd + abcd)ω.
[Diagram: the machine has states q0, q1, q2, q3 and transitions p1: a? (q0 to q1), p2: b! (q1 to q2), p3: b! (q1 to q3), p4: c! (q3 to q2), and p5: d! (q2 to q0).]
Figure 6.4: A Nondeterministic Vending Machine
To increase profit, the vendor decides to modify the behavior by adding a new requirement:
the dispensation of bcd will no longer be allowed; only bd can be dispensed. That is, the
transition p3 will not be allowed.
It is easy to construct a BAC system with the controlling automaton A2 accepting
(p1 + p2 + p4 + p5 )ω by excluding p3 . Thus, the new global system accepts the ω-language
L(A1 ~· A2 ) = (abd)ω , which satisfies the new requirement.
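This restriction is easy to check mechanically. Below is a small sanity check in Python; the transition structure of Fig. 6.4 is our own reconstruction from the text, and the state names q0–q3 are as drawn there.

```python
# Restricting the machine to the transition names p1, p2, p4, p5 should
# leave exactly the cycle a b d, i.e. L(A1 ~. A2) = (abd)^w.
delta1 = [('p1', 'q0', 'a', 'q1'), ('p2', 'q1', 'b', 'q2'),
          ('p3', 'q1', 'b', 'q3'), ('p4', 'q3', 'c', 'q2'),
          ('p5', 'q2', 'd', 'q0')]
allowed = {'p1', 'p2', 'p4', 'p5'}

def reachable(delta, start, allowed):
    """States reachable from start using only allowed transition names."""
    seen, frontier = {start}, [start]
    while frontier:
        q = frontier.pop()
        for (p, src, a, dst) in delta:
            if src == q and p in allowed and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

states = reachable(delta1, 'q0', allowed)
# q3 is cut off, so the c-dispensing branch disappears:
assert states == {'q0', 'q1', 'q2'}
```

The surviving sub-automaton cycles through q0, q1, q2 reading a, b, d, which matches (abd)ω.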
However, there does not exist an A-BAC system satisfying the new requirement. The
key problem is the nondeterminism at the state q1. Assume there is such an alphabet-level
controlling automaton A2. Then A2 can only specify whether b may occur in a state which
is reached after a occurs. If not, then neither p2 nor p3 is allowed, i.e., no goods will be
dispensed, which violates the requirement. If so, then both p2 and p3 are allowed. In
this case, to avoid the occurrence of abcd, A2 may disable the transition dispensing c in the
next state. However, the machine enters a dead state q3 after choosing p3 and dispensing
b. As a result, the global system dispenses only b and enters a dead state. This does not
conform to the requirement, since d is not dispensed and the machine is deadlocked. Thus,
A2 does not exist.
Therefore, we conclude that the BAC system is more flexible than the A-BAC system.
Let us consider a subset of alphabet-level controlling automata that are constructed
from LTL formulas (LTL-A-BCA). Suppose there is an LTL formula φ over the alphabet Σ.
First, the LTL formula is translated into a Büchi automaton A over Σ [120]. Then,
we can construct an alphabet-level controlling automaton Aφ from A as follows. Each
transition (q, a, q′) of A, where a ∈ Σ, is replaced by the set of transitions (q, pi, q′), where
pi ranges over the transitions associated with a in the controlled automaton.
It is well known that the expressive power of LTL formulas is strictly included in that of
Büchi automata. Thus, similarly, the expressive power of LTL-A-BCA is strictly included
in that of alphabet-level Büchi controlling automata (A-BCA).
We formalize the above results as follows. Let LTL-A-BCA be the family of alphabet-level
Büchi controlling automata translated from LTL formulas, A-BCA be the family
of alphabet-level Büchi controlling automata, and BCA be the family of Büchi controlling
automata. The following theorem characterizes their difference in expressive power.
Theorem 6.15. Given a Büchi automaton A,
(i) for any LTL-A-BCA Aφ , there exists an A-BCA Aα such that L(Aα ) = L(Aφ ) and
L(A ~· Aα ) = L(A ~· Aφ ). The reverse does not hold.
(ii) for any A-BCA Aα , there exists a BCA A2 such that L(A2 ) = L(Aα ) and L(A~· A2 ) =
L(A ~· Aα ). The reverse does not hold.
Proof. Clause (i) follows from the fact that the expressive power of LTL formulas is strictly
included in that of Büchi automata [120]. Clause (ii) follows from Def. 6.11 (the A-BAC
system is a special case of the BAC system) and the example above (a BCA can express
some constraints beyond the power of A-BCAs).
Discussion. Model checking techniques and tools often use LTL formulas or Büchi
automata to specify properties. Theorem 6.15 shows that the BAC system is more expressive than LTL model checking in specifying properties, since model checking uses LTL
formulas, which are equivalent to LTL-A-BCA. Theorem 6.15 also shows that the BAC
system is more expressive than tools checking regular properties, each of which specifies a regular ω-language, because these tools (e.g., SPIN [73, 71, 72]) use properties
represented by Büchi automata, which are equivalent to alphabet-level Büchi controlling
automata (A-BCA).
It is worth noting that the alphabet-level control and the translation from LTL formulas are
not necessary, and they restrict the expressive power of BAC systems. We can define
the controlling automaton directly; cf. the vending machine example before Thm. 6.15.
6.1.4 Checking Büchi Automaton Control Systems
We may want to check whether the meta-composition of a BAC system satisfies some
properties. Since the techniques for checking Büchi automata are rather mature, we need
only one more step to achieve this objective. The additional step is to compute the meta-composition of the controlled automaton and the controlling automaton using Thm. 6.6. Then we
can check whether the meta-composition (also a Büchi automaton) satisfies the properties
using the existing techniques.
6.2 Input/Output Automaton Control Systems (IO-AC)
In this section, we first recall input/output automata, then introduce the definitions and
properties of the input/output automaton control system. This formalism is used to
model and control concurrent systems that consist of several components with broadcasting
communications.
6.2.1 Input/Output Automata
The input/output automaton [93, 94, 91] extends classic automata theory [74] for modeling
concurrent and distributed discrete event systems with different input, output and internal
actions. The input/output automaton and various variants are widely used for describing
and reasoning about asynchronous systems [92].
The input/output automaton differs from CSP (Communicating Sequential Processes) [70] in two aspects. The first feature is input-enableness, i.e., every
input action is enabled at every state. This means that the automaton is unable to block its
input, and is thus required to respond appropriately to every possible input sequence, whereas
CSP assumes that the environment behaves correctly, i.e., produces only reasonable input sequences. The second feature is that an action is controlled by at most one component, i.e., an action can
be an output action of only one component in a composition, but may simultaneously be an
input action of several components. The output is transmitted to all the passive recipient
components. These two features also distinguish it from CCS (Calculus of Communicating
Systems) [101], where an output can be transmitted to only one recipient component.
Definition 6.16. A (nondeterministic) input/output automaton (I/O automaton or
simply automaton) is a tuple A = (Q, ΣI, ΣO, ΣH, δ, S), where Q is a set of states,
ΣI, ΣO, ΣH are pairwise disjoint sets of input, output and internal actions, respectively,
δ ⊆ Q × Σ × Q is a set of transitions such that for each q ∈ Q and a ∈ ΣI there is a
transition pk : (q, a, q′) ∈ δ, where pk is the name of the transition (input-enableness),
and S ⊆ Q is a nonempty set of initial states.
We denote by Σ = ΣI ∪ ΣO ∪ ΣH the set of actions, ΣE = ΣI ∪ ΣO the set of external
actions, and ΣL = ΣO ∪ ΣH the set of locally-controlled actions.
In the transition diagram, a transition pk : (q, a, q′) ∈ δ is denoted by an arc from q to
q′ labeled pk : a. To discriminate explicitly the different sets of actions in diagrams, we
may suffix a symbol “?”, “!” or “;” to an input, output or internal action, respectively.
Note that an I/O automaton A = (Q, Σ, δ, S) is equivalent to a Büchi automaton
A = (Q, Σ, δ, S, Q) where all the states are accepting states (if we allow a set of initial
states, and do not consider the fairness condition introduced later). This means that the
controlling automaton and the meta-composition over a single I/O automaton are the same
as in the case of Büchi automata. Therefore, here we only consider systems consisting of
several components.
As a preliminary, we recall the composition of I/O automata [94].
Let N = {n1, ..., nk} ⊆ ℕ be a countable set with cardinality k, and for each nj ∈ N,
let Snj be a set. We define the Cartesian product as:

    ∏nj∈N Snj = {(xn1, xn2, ..., xnk) | ∀j ∈ {1, ..., k}, xnj ∈ Snj}

For each j ∈ {1, ..., k}, we denote the j-th component of the vector q = (xn1, xn2, ..., xnk)
by the projection q[j], i.e., q[j] = xnj.
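In Python terms, the product and projection just recalled are simply the following (our own encoding; the family and its index set are illustrative):

```python
from itertools import product

# Cartesian product of an indexed family of sets, as ordered tuples
family = {1: ['a', 'b'], 2: [0, 1]}                  # S_{n1}, S_{n2}
vectors = list(product(*(family[n] for n in sorted(family))))

q = vectors[1]          # some vector (x_{n1}, x_{n2})
# the projection q[j] picks the j-th component (0-based in Python)
assert q[0] in family[1] and q[1] in family[2]
```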
Definition 6.17. A countable collection of I/O automata {An = (Qn, ΣIn, ΣOn, ΣHn, δn, Sn)}n∈N
is said to be strongly compatible if for all i, j ∈ N and i ≠ j, we have ΣOi ∩ ΣOj = ∅,
ΣHi ∩ Σj = ∅, and no action is contained in infinitely many sets Σi.
Definition 6.18. The composition A = ∏n∈N An of a countable collection of strongly
compatible I/O automata {An = (Qn, ΣIn, ΣOn, ΣHn, δn, Sn)}n∈N is an I/O automaton

    A = (∏n∈N Qn, ΣI, ΣO, ΣH, δ, ∏n∈N Sn)

where ΣI = ∪n∈N ΣIn − ∪n∈N ΣOn, ΣO = ∪n∈N ΣOn, ΣH = ∪n∈N ΣHn, and for each q, q′ ∈
∏n∈N Qn and a ∈ Σ, we have pI : (q, a, q′) ∈ δ, where pI ⊆ ∪n∈N δn, iff for all 1 ≤ j ≤ |N|
and nj ∈ N,

1. if a ∈ Σnj then pi : (q[j], a, q′[j]) ∈ δnj and pi ⊆ pI, and for any other transition
pk ∈ δnj − {pi}, pk ∩ pI = ∅;
2. if a ∉ Σnj then q[j] = q′[j] and for all pi ∈ δnj, pi ∩ pI = ∅.
We say that the transition pI is a composite transition, since its name pI is the union of
several sets pi of primitive transitions, where each pi is a set of (possibly only one)
primitive transitions. A transition of A = ∏n∈N An may be composed of i (primitive or
composite) transitions of its components, where 1 ≤ i ≤ |N|.
Note that an output action is essentially a broadcast from one component to other
recipient components, so the composition does not hide actions representing interaction.
This is different from CCS [101].
We are in general only interested in the executions of a composition in which all
components are treated fairly. Fairness is the property that each component infinitely
often performs one of its locally-controlled actions.
Definition 6.19. An execution α of a composition A = ∏n∈N An is fair if the following
conditions hold:

1. If α is finite, then for all n ∈ N, no action in ΣLn is enabled in the final state of α.
2. If α is infinite, then for all n ∈ N, either α contains infinitely many events from
ΣLn, or α contains infinitely many occurrences of states in which no action in ΣLn is
enabled.
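Clause 1 of this definition can be sketched directly in code (our own encoding: each component is a pair of its locally-controlled action set and an `enabled` function, both hypothetical names):

```python
def finite_execution_fair(final_state, components):
    """Sketch of clause 1 of Definition 6.19: a finite execution is fair iff
    no component has a locally-controlled action enabled in its final state.

    components: list of pairs (L_n, enabled), where L_n is the set of
    locally-controlled actions of component n and enabled(q) returns the
    actions enabled at the global state q.
    """
    return all(not (L & enabled(final_state)) for (L, enabled) in components)
```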
6.2.2 Input/Output Automaton Control Systems
The controlling automaton over a composition of I/O automata should control all the
transitions of its primitive components.
Definition 6.20. Given a composition A = ∏n∈N An, a (Büchi) controlling automaton
over A is Ac = (Qc, Σc, δc, Sc, Fc) with Σc = ∪n∈N δn. The global system is called an
Input/Output Automaton Control System (IO-AC System).
There are two alternative mechanisms for controlling automata to control composite
transitions. The first mechanism is that a composite transition is allowed iff at least one
of its member transitions is allowed by the controlling automaton. The second mechanism
is that a composite transition is allowed iff all of its member transitions are allowed by the
controlling automaton. We formally define the two types of meta-composition as follows
(the two definitions differ only in their last conditions, i.e., ∃pk ∈ pI vs. ∀pk ∈ pI).

[Diagram: (1) the candy machine Am with inputs b1?, b2? and outputs s!, a!; (2) the greedy user Au with outputs b1!, b2! and inputs s?, a?; (3) their composition Amu with composite transitions such as p1,15: s! and p2,16: a!.]
Figure 6.5: Input/Output Automata of the Candy Machine
Definition 6.21. The existential meta-composition of a composition A = ∏n∈N An =
(∏n∈N Qn, Σ, δ, ∏n∈N Sn) and a controlling automaton Ac = (Qc, Σc, δc, Sc, Fc) with
Σc = ∪n∈N δn is a tuple:

    A′ = A ~·E Ac = ((∏n∈N Qn) × Qc, Σ, δ′, (∏n∈N Sn) × Sc, (∏n∈N Qn) × Fc)

where for each qi, ql ∈ ∏n∈N Qn, qj, qm ∈ Qc and a ∈ Σ, we have pI : ((qi, qj), a, (ql, qm)) ∈
δ′ iff pI : (qi, a, ql) ∈ δ and ∃pk ∈ pI, (qj, pk, qm) ∈ δc.
Definition 6.22. The universal meta-composition of a composition A = ∏n∈N An =
(∏n∈N Qn, Σ, δ, ∏n∈N Sn) and a controlling automaton Ac = (Qc, Σc, δc, Sc, Fc) with
Σc = ∪n∈N δn is a tuple:

    A′ = A ~·A Ac = ((∏n∈N Qn) × Qc, Σ, δ′, (∏n∈N Sn) × Sc, (∏n∈N Qn) × Fc)

where for each qi, ql ∈ ∏n∈N Qn, qj, qm ∈ Qc and a ∈ Σ, we have pI : ((qi, qj), a, (ql, qm)) ∈
δ′ iff pI : (qi, a, ql) ∈ δ and ∀pk ∈ pI, (qj, pk, qm) ∈ δc.
If we let {An }n∈N contain only one automaton, i.e., |N | = 1, the two definitions will
result in the same case, i.e., the meta-composition over a single I/O automaton. That is,
the controlling automaton over a single I/O automaton is only a special case of the above
definitions, thus we will not give the formal definition of this case.
Note that the resulting meta-composition is not necessarily input-enabled. This is
reasonable, since the controlling automaton rejects some sequences of input actions which
may lead to unexpected states.
To illustrate the principle, we use an example concerning a system composed of two
components: a candy vending machine and a customer. We hope the example will provide
an interesting illustration of our idea, since this class of examples is popular in the
literature on formal methods, e.g., CSP [70] and I/O automata [94].
The candy machine Am in Fig. 6.5(1) may receive inputs b1 , b2 indicating that button 1
or button 2 is pushed, respectively. It may produce outputs s, a indicating dispensation of
two types of candy, SKYBAR and ALMONDJOY, respectively. The machine may receive
several inputs before delivering a candy.
[Diagram: (1) the controlling automaton Ac1 with states c0, c1, transitions p3, p4 from c0 to c1 and p1, p2 back; (2) Ac2 with transitions labeled δ(b1), δ(b2) and δ(s), δ(a); (3) the global system A′ with states q000, q111, q211 and actions b1!, b2!, s!, a!.]
Figure 6.6: Controlling Automata for the Input/Output Automata
A greedy user Au in Fig. 6.5(2) may push the buttons b1, b2 or get the candies s, a. The greedy
user may press a button again without waiting for a candy.
The composition of the machine and the user is Amu = Am · Au, which is shown in Fig.
6.5(3), where qij denotes the composite state (mi, uj), and pi1,...,ik denotes the composite transition
pi1,...,ik = {pi1, ..., pik}. For example, p1,15 : s is the synchronization of p1 : s! and p15 : s?,
which belong to Am and Au, respectively.
If we consider only the fair executions, then we rule out the hazardous situation that
the user repeatedly pushes a single button without giving the machine a chance to dispense
a candy.
Suppose we expect that the machine dispenses s if the user pushed b1 first, and a if the
user pushed b2 first. However, the user may push b1 first, then push b1 or b2 several times,
and finally push b2 and wait for the candy. Obviously, the machine will dispense a, which is
an undesired execution with respect to our expectation. To prevent this situation, we may impose
the constraint “the user is not allowed to change his/her choice,” i.e., whenever one of
the transitions p3, p4 (actions b1, b2) occurs, the next transition must be p1 or p2, which
dispenses candy. Note that the constraint concerns two components, which is different from
the case of BAC systems.
The constraint can be formalized as the controlling automaton Ac1 in Fig. 6.6(1). By
computing the existential meta-composition of Amu and Ac1, we get the global system
A′ = (Am · Au) ~·E Ac1 in Fig. 6.6(3), where qijk denotes the composite state (mi, uj, ck). It
is easy to see that all the behaviors of A′ satisfy the constraint.
The constraint can also be formalized as the controlling automaton Ac2 in Fig. 6.6(2).
In the diagram, δ(x) denotes all the primitive transitions associated with the action x ∈ Σ.
For example, δ(a) = {p2, p10, p16}. By computing the universal meta-composition of Amu
and Ac2, we also get the global system A′ = (Am · Au) ~·A Ac2 in Fig. 6.6(3).
6.3 Interface Automaton Control Systems (IN-AC)
In this section, we first recall interface automata, then introduce the definitions and properties of the interface automaton control system. This formalism is used to model and
control concurrent systems that consist of several components with rendezvous communications at interfaces.
6.3.1 Interface Automata
The interface automaton [41] extends the theory of I/O automata [93, 94] for modeling
component-based systems with interfaces.
The interface automaton is different from the I/O automaton in the following two aspects. The first feature is that an interface automaton is not required to be input-enabled.
This means that some input actions may be recognized as illegal at some states. The
second feature is that a synchronization of input and output actions results in an internal
action in the composition, since interface automata use rendezvous communications at
interfaces rather than broadcasting.
Definition 6.23. A (nondeterministic) interface automaton (simply automaton) is a
tuple A = (Q, ΣI, ΣO, ΣH, δ, S), where Q is a set of states, ΣI, ΣO, ΣH are pairwise disjoint
sets of input, output and internal actions, respectively, δ ⊆ Q × Σ × Q is a set of transitions,
and S ⊆ Q is a set of initial states, where |S| ≤ 1.
We denote by Σ = ΣI ∪ ΣO ∪ ΣH the set of actions. If |S| = 0, A is called empty. An interface automaton A is closed if it has only internal actions, i.e., ΣI = ΣO = ∅.
Otherwise, we say A is open.
An action a ∈ Σ is enabled at a state q ∈ Q if there is a transition (q, a, q′) ∈ δ for
some q′ ∈ Q. We denote by ΣI(q), ΣO(q), ΣH(q) the subsets of input, output and internal
actions that are enabled at the state q, and let Σ(q) = ΣI(q) ∪ ΣO(q) ∪ ΣH(q). We call
the input actions in ΣI − ΣI(q) the illegal inputs at q.
If a ∈ ΣI (resp. a ∈ ΣO, a ∈ ΣH), then (q, a, q′) ∈ δ is called an input (resp. output,
internal) transition. We denote by δI, δO, δH the sets of input, output and internal transitions,
respectively.
In the transition diagram, a transition pk : (q, a, q′) ∈ δ is denoted by an arc from q
to q′ labeled pk : a, where pk is the name of the transition. To discriminate explicitly the
different sets of actions in diagrams, we may again suffix a symbol “?”, “!” or “;” to an
input, output or internal action, respectively.
Note that an interface automaton A = (Q, Σ, δ, S) is equivalent to a Büchi automaton
A = (Q, Σ, δ, S, Q) where all the states are accepting states (if we allow a set of initial
states). This means that the controlling automaton and the meta-composition over a single interface automaton are the same as in the case of Büchi automata. Therefore, here we only
consider systems consisting of several components.
Now we define the composition of two interface automata [41], where the two composable automata will synchronize on shared actions, and asynchronously interleave all other
actions. The composition is defined based on the product of two composable automata.
Definition 6.24. Two interface automata A = (QA, ΣA, δA, SA) and B = (QB, ΣB, δB, SB)
are composable if ΣHA ∩ ΣB = ∅, ΣIA ∩ ΣIB = ∅, ΣOA ∩ ΣOB = ∅, ΣA ∩ ΣHB = ∅. We let
shared(A, B) = ΣA ∩ ΣB.

Note that if A and B are composable, then shared(A, B) = (ΣIA ∩ ΣOB) ∪ (ΣOA ∩ ΣIB).
Definition 6.25. Given two composable interface automata A and B, their product A ⊗ B
is the interface automaton

    P = A ⊗ B = (QA⊗B, ΣA⊗B, δA⊗B, SA⊗B)

where QA⊗B = QA × QB, ΣIA⊗B = (ΣIA ∪ ΣIB) − shared(A, B), ΣOA⊗B = (ΣOA ∪ ΣOB) −
shared(A, B), ΣHA⊗B = ΣHA ∪ ΣHB ∪ shared(A, B),

    δA⊗B = {pi : ((v, u), a, (v′, u)) | pi : (v, a, v′) ∈ δA ∧ a ∉ shared(A, B) ∧ u ∈ QB}
          ∪ {pj : ((v, u), a, (v, u′)) | pj : (u, a, u′) ∈ δB ∧ a ∉ shared(A, B) ∧ v ∈ QA}
          ∪ {pij : ((v, u), a, (v′, u′)) | pi : (v, a, v′) ∈ δA ∧ pj : (u, a, u′) ∈ δB ∧ a ∈ shared(A, B)}

and SA⊗B = SA × SB.
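The three clauses of δA⊗B translate directly into code. A sketch under our own encoding (transition tuples as in our earlier sketches, shared(A, B) precomputed, and the helper name `product_delta` an assumption):

```python
def product_delta(deltaA, QA, deltaB, QB, shared):
    """Sketch of the transition relation of Definition 6.25."""
    d = []
    for (pi, v, a, v2) in deltaA:           # A moves alone on non-shared a
        if a not in shared:
            d += [((pi,), (v, u), a, (v2, u)) for u in QB]
    for (pj, u, a, u2) in deltaB:           # B moves alone on non-shared a
        if a not in shared:
            d += [((pj,), (v, u), a, (v, u2)) for v in QA]
    for (pi, v, a, v2) in deltaA:           # synchronization on shared a
        for (pj, u, a2, u2) in deltaB:
            if a == a2 and a in shared:
                d.append(((pi, pj), (v, u), a, (v2, u2)))
    return d
```

On a shared action the only resulting transitions are composite ones, reflecting the rendezvous semantics.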
Note that some transitions that are present in A or B may not be present in the
product, due to the lack of input-enableness.
We say that the transition pij is a composite transition, since its name contains the names
of the two primitive transitions pi ∈ δA and pj ∈ δB. Indeed, pij is an internal transition
representing the synchronization of input and output actions. This means that an output action
is transmitted to only one recipient component, resulting in an internal action. This is
similar to CCS [101].
In the product A ⊗ B, there may be illegal states, where one of the automata may
produce an output action a ∈ shared(A, B) that is an input action of the other automaton,
but is not accepted.
Definition 6.26. Given two composable interface automata A and B, the set of illegal
states of the product A ⊗ B is

    Illegal(A, B) = {(u, v) ∈ QA × QB | ∃a ∈ shared(A, B)( (a ∈ ΣOA(u) ∧ a ∉ ΣIB(v))
                    ∨ (a ∈ ΣOB(v) ∧ a ∉ ΣIA(u)) )}
When the product A ⊗ B is closed (having only internal actions), then A and B are
compatible if no illegal state of A ⊗ B is reachable. When A ⊗ B is open, then A and B
are compatible if there is a legal environment that can prevent entering the illegal states
although they are possibly reachable in A ⊗ B.
Definition 6.27. An environment for an interface automaton P is an interface automaton
E such that (1) E is composable with P, (2) E is nonempty, (3) ΣIE = ΣOP, and (4)
Illegal(P, E) = ∅.
Definition 6.28. Given two composable interface automata A and B, a legal environment
for the pair (A, B) is an environment for A ⊗ B such that no state in Illegal(A, B) × QE
is reachable in (A ⊗ B) ⊗ E.
The composition of two interface automata A, B is obtained by restricting the product
of the two automata to the set of compatible states Cmp(A, B), which are the states from
which there exists an environment that can prevent entering illegal states.
Definition 6.29. Given two composable interface automata A and B, a pair of states
(u, v) ∈ QA × QB is compatible if there is an environment E for A ⊗ B such that no state
in Illegal(A, B) × QE is reachable in (A ⊗ B) ⊗ E from the state {(u, v)} × SE . We denote
the set of compatible states of A ⊗ B by Cmp(A, B).
Hence, two nonempty, composable interface automata A and B are compatible iff their
initial states are compatible, i.e., SA × SB ⊆ Cmp(A, B).
Definition 6.30. Given two composable interface automata A and B, the composition
A||B is an interface automaton
C = A||B = (QA||B , ΣA||B , δA||B , SA||B )
where QA||B = Cmp(A, B), ΣIA||B = ΣIA⊗B, ΣOA||B = ΣOA⊗B, ΣHA||B = ΣHA⊗B, δA||B =
δA⊗B ∩ (Cmp(A, B) × ΣA||B × Cmp(A, B)), and SA||B = SA⊗B ∩ Cmp(A, B).
Now we can rephrase the definition of compatibility for interface automata.
Definition 6.31. Two interface automata A and B are compatible iff (1) they are composable, and (2) their composition is nonempty.
6.3.2 Interface Automaton Control Systems
The controlling automaton over a composition of interface automata should control all the
transitions of its primitive components.
Definition 6.32. Given a composition of a set of interface automata A = ∥n∈N An, a
(Büchi) controlling automaton over A is Ac = (Qc, Σc, δc, Sc, Fc) with Σc = ∪n∈N δn. The
global system is called an Interface Automaton Control System (IN-AC System).
There are two alternative mechanisms for controlling automata to control composite
transitions. The first mechanism is that a composite transition is allowed iff at least one
of its member transitions is allowed by the controlling automaton. The second mechanism
is that a composite transition is allowed iff all of its member transitions are allowed by the
controlling automaton. We formally define the two types of meta-composition as follows
(the two definitions differ only in their last conditions, i.e., ∃pk ∈ pI vs. ∀pk ∈ pI ).
Definition 6.33. The existential meta-composition of a composition A = (Q, Σ, δ, S)
and a controlling automaton Ac = (Qc, Σc, δc, Sc, Fc) is a tuple:

    A′ = A ~·E Ac = (Q × Qc, Σ, δ′, S × Sc, Q × Fc)

where for each qi, ql ∈ Q, qj, qm ∈ Qc and a ∈ Σ, we have pI : ((qi, qj), a, (ql, qm)) ∈ δ′ iff
pI : (qi, a, ql) ∈ δ and ∃pk ∈ pI, (qj, pk, qm) ∈ δc.
Definition 6.34. The universal meta-composition of a composition A = (Q, Σ, δ, S) and
a controlling automaton Ac = (Qc, Σc, δc, Sc, Fc) is a tuple:

    A′ = A ~·A Ac = (Q × Qc, Σ, δ′, S × Sc, Q × Fc)

where for each qi, ql ∈ Q, qj, qm ∈ Qc and a ∈ Σ, we have pI : ((qi, qj), a, (ql, qm)) ∈ δ′ iff
pI : (qi, a, ql) ∈ δ and ∀pk ∈ pI, (qj, pk, qm) ∈ δc.
Unlike in the IO-AC system, a composite transition pI here contains at most two primitive transitions. If A is a single primitive automaton, the two definitions coincide, giving the meta-composition over a single interface automaton. That is, the controlling automaton over a single interface automaton is just a special case of the above definitions, so we do not give a separate formal definition for it.
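As a concrete reading of the two definitions, the following Python sketch (a toy encoding of our own, not part of the formalism) computes the controller states reachable when a composite transition fires, under each mechanism. The controller's transition relation is encoded as a dict from (state, member transition name) to a set of successor states, and a composite transition is simply the set of its member names.

```python
def allowed_steps(ctrl_delta, cj, p_I, universal):
    """Controller states reachable from cj when composite transition p_I fires.

    ctrl_delta: dict (state, transition_name) -> set of successor states.
    p_I: nonempty set of member transition names (one or two, per Sect. 6.3).
    """
    per_member = [ctrl_delta.get((cj, pk), set()) for pk in p_I]
    if universal:
        # Definition 6.34: every member transition must be allowed
        if all(per_member):
            return set.intersection(*per_member)
        return set()
    # Definition 6.33: at least one member transition must be allowed
    return set().union(*per_member)
```

Under the existential mechanism the per-member successor sets are united; under the universal one they are intersected, so a member transition that the controller does not mention blocks the whole composite transition.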
To illustrate the principle, we use the same example used for I/O automaton control
systems concerning a system composed of two components: a candy vending machine and a
customer. The candy machine Am , a greedy user Au and their composition Amu = Am ||Au
[Transition diagrams of the three interface automata: (1) Am , (2) Au , (3) Amu ]
Figure 6.7: Interface Automata of the Candy Machine
[Transition diagrams of the controlling automata and the controlled system: (1) Ac1 , (2) Ac2 , (3) A′]
Figure 6.8: Controlling Automata for the Interface Automata
are shown in Fig. 6.7. Note that pi,j is an internal transition that synchronizes input and
output actions, and denotes a composite transition pi,j = {pi , pj }.
Suppose we expect the machine to dispense s if the user pushes b1 first, and a if the user pushes b2 first. However, the user may push b1 first, then push b1 or b2 several more times, and finally push b2 and wait for the candy. The machine will then dispense a, which is an undesired execution with respect to our expectation. To prevent this situation, we may impose the constraint “the user is not allowed to change his/her choice,” i.e., whenever one of the transitions p3 , p4 (actions b1 , b2 ) occurs, the next transition must be p1 or p2 , which dispenses candy. Note that the constraint concerns two components, which differs from the case of BAC systems.
The constraint can be formalized as the controlling automaton Ac1 in Fig. 6.8(1). By computing the existential meta-composition of Amu and Ac1 , we get the global system A′ = (Am ||Au ) ~·E Ac1 in Fig. 6.8(3), where qijk denotes the composite state (mi , uj , ck ). It is easy to see that all behaviors of A′ satisfy the constraint.
The constraint can also be formalized as the controlling automaton Ac2 in Fig. 6.8(2), which forbids the unexpected transitions p11 , p12 . By computing the universal meta-composition of Amu and Ac2 , we also obtain the global system A′ = (Am ||Au ) ~·A Ac2 in Fig. 6.8(3).
6.4 Related Work
The BAC system is different from Ramadge and Wonham’s supervisory control [112]. In
their theory, the supervisor controls an automaton via the alphabet of symbols rather than
transitions, and is thus similar to the alphabet-level controlling automaton. This means that our controlling automata, which specify properties on transitions instead of the alphabet,
are more expressive in specifying constraints.
6.5 Conclusion
In this chapter, we proposed the Büchi Automaton Control System (BAC system) and
its concurrent variants, including the IO-AC system and the IN-AC system. Each of the
formalisms consists of a controlled automaton and a Büchi controlling automaton that
restricts the behavior of the controlled automaton. The BAC system is used for modeling
and controlling nonstop systems that generate ω-words, while the concurrent variants are
used for modeling and controlling concurrent systems that consist of several components
with various types of communications.
To control the systems modeled with other types of automata, e.g., team automata
[48, 119] and component-interaction automata [14], it is not hard to develop corresponding
automaton control systems using the methodology proposed in this chapter.
Chapter 7
Nevertrace Claims for Model Checking
In this chapter, we propose the nevertrace claim, a new construct for specifying correctness properties about finite or infinite execution traces (i.e., sequences of transitions) that should never occur. Semantically, it is neither similar to the never claim or the trace assertion, nor a simple combination of them. Furthermore, we propose the theoretical foundation for checking nevertrace claims, namely the Asynchronous-Composition Büchi Automaton Control System (AC-BAC System). The major contributions of the nevertrace claim are a powerful construct for formalizing properties related to transitions and their labels, and a way of reducing the state space at the design stage.
7.1 Introduction
The SPIN (Simple Promela INterpreter) model checker is an automated tool for verifying
the correctness of asynchronous distributed software models [73, 71, 72]. System models
and correctness properties to be verified are both described in Promela (Process Meta
Language). This chapter is based on SPIN Version 5.2.5, released on 17th April 2010.
Promela supports various constructs for formalizing different classes of properties. The most powerful are the never claim and the trace and notrace assertions. The never claim specifies properties on sequences of states that should never occur, while the trace and notrace assertions specify properties on sequences of transitions of simple channel operations, i.e., simple send and receive operations on message channels. A transition is a statement between two states, so the trace and notrace assertions only treat a restricted subset of transitions.
However, we observed that the existing constructs cannot specify properties on full sequences of transitions, which also involve transitions other than simple channel operations, e.g., assignments and random receive operations.
In this chapter, we propose the nevertrace claim, a new claim construct for specifying correctness properties related to all types of transitions and their labels. A nevertrace claim specifies properties about finite or infinite execution traces (i.e., sequences of transitions) that should never occur. A nevertrace claim may be nondeterministic, and it is performed at every single execution step of the system.
Literally, it seems that the nevertrace claim combines the never claim and the trace assertion. However, we will show that, semantically, it is similar to neither of them,
nor a simple combination of them.
The major contributions of this construct include the following two aspects:
First, the nevertrace claim provides a powerful construct for formalizing properties
related to transitions and their labels. Furthermore, the nevertrace claim can be used
to express the semantics of some existing constructs in Promela.
Second, the nevertrace claim provides a way of reducing the state space at the design stage. We observed that variables are typically used for two purposes: functional computation, or implicitly recording the execution trace for verification. The nevertrace claim can reduce the use of variables that merely mark the execution trace, and fewer variables mean a smaller state space.
This chapter is organized as follows. In Section 7.2, the existing constructs in Promela
are recalled to facilitate further discussion and comparison. In Section 7.3, the nevertrace
claim is proposed and illustrated by example. The theoretical foundation for checking
nevertrace claims, namely the Asynchronous-Composition Büchi Automaton Control
System (AC-BAC System), is presented in Section 7.4. Then in Section 7.5 we show how
to express some constructs in Promela using nevertrace claims. We discuss related work
in Section 7.6 and conclude in Section 7.7.
To illustrate some existing constructs of Promela and our new construct in the sequel, we use the simple Promela model in Listing 7.1 as a running example. The model contains two channels (c2s and s2c), three processes (two clients and a server) and four types of messages. Client 1 sends msg1 through the channel c2s, receives ack1 from the channel s2c, and repeats this procedure forever; Client 2 does the same with msg2 and ack2. There are nine labels in the model, e.g., again at Line 7 and c2srmsg at Line 23. The variable x counts the number of messages in the channel c2s, and is thus used for functional computation.
Listing 7.1: A Promela Model of Client and Server
 1  mtype = {msg1, msg2, ack1, ack2};
 2  chan c2s = [2] of {mtype};
 3  chan s2c = [0] of {mtype};
 4  int x = 0;
 5
 6  active proctype client1() {
 7  again:
 8  c2ssmsg1:  c2s!msg1;
 9  x_inc:     x = x+1;
10             s2c?ack1;
11             goto again;
12  }
13
14  active proctype client2() {
15  again:
16  c2ssmsg2:  c2s!msg2;
17  x_inc:     x = x+1;
18             s2c?ack2;
19             goto again;
20  }
21
22  active proctype server() {
23  c2srmsg:   do
24             :: c2s?msg1;
25  x_dec1:       x = x-1;
26                s2c!ack1;
27             :: c2s?msg2;
28  x_dec2:       x = x-1;
29                s2c!ack2;
30             od;
31  }

7.2 Constructs for Formalizing Properties in SPIN
Promela supports the following constructs for formalizing correctness properties; numerous examples can be found in the monographs [72, 8].
• Basic assertions. A basic assertion is of the form assert(expression). A basic
assertion should be satisfied in specific reachable states. Otherwise, an assertion
violation is reported, and the execution stops at the point in the model where the
assertion failure was detected.
• End-state labels. Every label name that starts with the prefix end is an end-state
label. At the end of an execution sequence (i.e., no more statements are executable),
all processes must have reached a valid end state, i.e., their closing curly brace or
an end-state label. Otherwise, an “invalid end states” error is reported, indicating
invalid deadlock.
• Progress-state labels. Every label name that starts with the prefix progress is a
progress-state label. All potentially infinite execution cycles must pass through at
least one of the progress labels. Otherwise, a “non-progress cycles” error is reported.
Note that the search for non-progress cycles is implemented through never claims.
• Accept-state labels. Every label name that starts with the prefix accept is an
accept-state label 1 . There should not exist any execution that can pass through an
accept-state label infinitely often. An “acceptance cycles” error is reported, if a
potentially infinite execution cycle passes through at least one accept-state label.
• Never claims. A never claim specifies either finite or infinite system behavior that
should never occur. An error is reported if the full behavior specified in the never claim is matched by any feasible execution, where “match” means that the claim terminates or that an acceptance cycle is reached. Besides control-flow constructs, a
never claim may contain only two types of statements: condition statements and
assertions.
• Trace assertions. A trace assertion specifies properties about sequences of simple
send and receive operations on message channels. Note that only the channel names
that appear in the assertion are considered to be within the scope of the check. An
error is reported if any feasible sequence of operations within scope cannot match the trace assertion, i.e., the assertion does not have a matching event (including the case where the assertion reaches its closing curly brace). Besides control-flow constructs, a
trace assertion may contain only simple send and receive operations.
1 The accept-state label is normally reserved for specifying acceptance conditions of never claims. Although this is rarely done, it can also be used in a Promela model, and does not require the presence of a never claim.
• Notrace assertions. A notrace assertion specifies the opposite of a trace assertion,
but uses the same syntax. An error is reported, if the full behavior specified in the
notrace assertion is matched completely by any feasible execution, i.e., either the
closing curly brace or an end-state label of the assertion is reached.
The basic assertion is the only type of correctness property that can be checked in simulation mode (as well as in verification mode). The three special types of labels have special meaning only in verification mode. The never claim and the trace and notrace assertions can only be interpreted and checked in verification mode. Note that never claims may be nondeterministic, whereas trace and notrace assertions must be deterministic. Furthermore, Promela also supports Linear Temporal Logic (LTL) formulas, which are converted into never claims for verification [58].
To facilitate the comparison in the sequel, we first recall the semantics of trace and notrace assertions through simple examples, since they are less commonly used than never claims.
The following trace assertion specifies the property that the sequence of send operations on the channel c2s should alternate between c2s!msg1 and c2s!msg2. Note that only the send operations on the channel c2s are within the scope of this check; other statements are ignored.
trace {            /* alternating between c2s!msg1 and c2s!msg2 */
    if
    :: c2s!msg1 -> goto l2;
    :: c2s!msg2 -> goto l1;
    fi;
l1: c2s!msg1 -> goto l2;
l2: c2s!msg2 -> goto l1;
}
The model in Listing 7.1 violates this assertion, since c2s!msg1 and c2s!msg2 can
appear in any order.
The following notrace assertion specifies the property that there should not exist a sequence of send operations on the channel c2s that contains two consecutive c2s!msg1. Recall that, for notrace assertions, an error is reported if the assertion is matched completely. Again, only the send operations on the channel c2s are within the scope of this check; other statements are ignored.
notrace {          /* containing two consecutive c2s!msg1 */
S0: if
    :: c2s!msg1 -> goto S1;
    :: c2s!msg2 -> goto S0;
    fi;
S1: if
    :: c2s!msg1;
    :: c2s!msg2 -> goto S0;
    fi;
    /* S2 */
}
The model in Listing 7.1 violates this assertion, since sequences containing two consecutive
c2s!msg1 are feasible. These sequences cause the termination of the notrace assertion.
[State diagram of the NFA: states S0, S1, S2 with transitions on c2s!msg1 and c2s!msg2]
Figure 7.1: The Nondeterministic Automaton of notrace Assertion
[State diagram of the DFA: states S0, S1, S2 with transitions on c2s!msg1 and c2s!msg2]
Figure 7.2: The Deterministic Automaton of notrace Assertion
We observed that notrace assertions can also be constructed from LTL formulas. The procedure reuses the technique that translates LTL formulas into never claims [58]. Note that never claims specify Nondeterministic Finite Automata (NFA); thus, an additional step of manually determinizing the NFA [74] is needed, since a notrace assertion must be deterministic.
For this example, we can convert the LTL formula <>(c2s!msg1 && X c2s!msg1) into the NFA in Fig. 7.1. Note that the condition statement (1) (or true) was replaced by all send operations on the channel c2s in the model. To obtain a deterministic notrace assertion, we have to manually determinize the NFA into a Deterministic Finite Automaton (DFA) and minimize the DFA; the result is shown in Fig. 7.2. Finally, we can write the above assertion according to the established DFA.
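The manual determinization step mentioned above is the textbook subset construction. The following Python sketch applies it to an NFA shaped like the one in Fig. 7.1; the encoding is our own, with the symbols m1 and m2 standing in for c2s!msg1 and c2s!msg2.

```python
from itertools import chain

def determinize(nfa_delta, start, alphabet):
    """Subset construction: nfa_delta maps (state, symbol) -> set of states."""
    start_set = frozenset([start])
    dfa, todo = {}, [start_set]
    while todo:
        subset = todo.pop()
        if subset in dfa:          # this subset state was already expanded
            continue
        dfa[subset] = {}
        for a in alphabet:
            # union of the NFA successors of every state in the subset
            target = frozenset(chain.from_iterable(
                nfa_delta.get((s, a), ()) for s in subset))
            dfa[subset][a] = target
            todo.append(target)
    return dfa

# NFA in the shape of Fig. 7.1 ('m1'/'m2' stand in for c2s!msg1/c2s!msg2)
nfa = {('S0', 'm1'): {'S0', 'S1'}, ('S0', 'm2'): {'S0'},
       ('S1', 'm1'): {'S2'},
       ('S2', 'm1'): {'S2'}, ('S2', 'm2'): {'S2'}}
dfa = determinize(nfa, 'S0', ['m1', 'm2'])
```

The construction yields four subset states here; the subsequent minimization step merges the subsets containing the accepting sink, giving the three-state DFA of Fig. 7.2.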
Note that the trace and notrace assertions can only treat simple channel operations,
thus the properties on other types of transitions cannot be specified. In the next section,
we propose a new construct that can specify the properties on all types of transitions.
7.3 Nevertrace Claims
In this section, we will propose the nevertrace claim. A nevertrace claim may contain
control-flow constructs and transition expressions, whose major ingredient is the label
expression. Thus, we will present three new constructs.
7.3.1 Label Expressions
In a program or model (e.g., Promela model), some statements have labels. A statement
may have several labels. A label name in a model contains only the following characters:
digits (0 to 9), letters (a to z, A to Z), and the underscore (_). We assume that every unlabeled statement has the empty string (denoted by ε) as its default label name.
A label expression is a regular expression for matching label names. The label expression reuses a subset of the characters of POSIX-extended regular expressions (which are widely used in Unix applications), and adds some new symbols. The special characters are listed in Table 7.1, where the additional
symbols in the bottom part are not included in POSIX-extended regular expressions. All characters other than the special characters listed in Table 7.1, including digits, letters and the underscore, match themselves. Also note that these symbols are interpreted over the restricted alphabet [0-9A-Za-z_].
Symbol   Meaning over the Alphabet [0-9A-Za-z_]
.        A dot matches any single character.
( )      Parentheses group a series of patterns into a new pattern.
[ ]      A character class matches any character within the brackets. If the first character is a circumflex [^], it matches any character except the ones within the brackets. A dash inside the brackets indicates a character range, e.g., [a-d] means [abcd], and [^a-d] means [0-9A-Ze-z_].
{ }      If the braces contain one number, it indicates the exact number of times the previous pattern can match, while two numbers indicate the minimum and maximum number of times.
*        A Kleene star matches zero or more copies of the previous pattern.
+        A positive closure matches one or more copies of the previous pattern.
?        A question mark matches zero or one copy of the previous pattern.
|        An alternation operator matches either the previous pattern or the following pattern.

Additional Symbols
(^ )     If the first character is a circumflex (^), it matches any string except the ones expressed by the expression within the parentheses.
#        A hash mark matches any string over the alphabet, i.e., [0-9A-Za-z_]*.
Table 7.1: Special Characters in Label Expressions
For example, in Listing 7.1, the label expression c2ssmsg# matches the labels starting with c2ssmsg, i.e., c2ssmsg1 and c2ssmsg2. The expression (^c2ss#) matches all labels in the model other than those starting with c2ss. The empty string can be matched by a{0}, where a can be replaced by any other letter or digit.
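As a sanity check on the two added symbols, here is a minimal Python sketch of our own. It is deliberately naive: it handles only # and a complement (^ ) applied to the whole expression, and piggybacks on Python's regex engine for the POSIX subset.

```python
import re

ALPHA = '[0-9A-Za-z_]'   # the restricted alphabet of label expressions

def matches(label_expr, name):
    """Naive sketch: '#' and a whole-expression '(^...)' only."""
    if label_expr.startswith('(^') and label_expr.endswith(')'):
        # (^e) matches any string over the alphabet NOT matched by e
        return not matches(label_expr[2:-1], name)
    # '#' matches any string over the alphabet, i.e. [0-9A-Za-z_]*
    return re.fullmatch(label_expr.replace('#', ALPHA + '*'), name) is not None
```

For instance, matches('c2ssmsg#', 'c2ssmsg1') holds while matches('(^c2ss#)', 'c2ssmsg2') does not, mirroring the examples above.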
Let us consider the complexity of deciding whether a label name is matched by a label expression. It is well known that a Deterministic Finite Automaton (DFA) can be effectively constructed from a regular expression [74]. In label expressions, most of the special characters are reused from regular expressions, and it is easy to see that the additional characters also yield DFAs; for example, (^) amounts to constructing the complementation of a regular language (and of its DFA) [74]. Therefore, a DFA can also be effectively constructed from a label expression.
It is also well known that the membership problem for regular languages (accepted by DFAs) can be decided in linear time [74]. Therefore, the membership problem for label expressions can also be decided in linear time: given a label name l of length n, whether l is matched by a label expression can be decided in time O(n). This shows that label expressions are feasible in practice.
7.3.2 Transition Expressions
An atomic transition expression is of the form procname[pid]$lblexp, and may take three arguments. The first, optional argument is the name of a previously declared proctype procname. The second, optional argument is an expression enclosed in brackets, which provides the process identity number pid of an active process. The third, required argument lblexp is a label expression, which matches a set of label names in the model. There must be a symbol $ between the second and the third arguments.
Given a transition and its labels, an atomic transition expression procname[pid]$lblexp
matches the transition (i.e., returns true), if the transition belongs to the process procname
[pid], and at least one of the labels is matched by the label expression lblexp. We should
notice that the first two arguments are only used to restrict the application domain of the
label expression.
A transition expression contains one or more atomic transition expressions connected
by propositional logic connectives. It can be defined in Backus-Naur Form as follows:
t ::= a | (!t) | (t && t) | (t || t) | (t -> t)
where t is a transition expression and a is an atomic transition expression.
Given a transition and its labels, a transition expression matches the transition (i.e.,
returns true), if the propositional logic formula is evaluated to true according to the values
of its atomic transition expressions. Note that the transition expression is side effect free.
That is, it does not generate new system behavior, just like condition statements.
For example, in Listing 7.1, the (atomic) transition expression client1[0]$c2ssmsg#
matches all transitions that have a label starting with c2ssmsg in the process 0 of type
client1, i.e., the statement with label c2ssmsg1 at Line 8. The transition expression
(client2[1]$(c2s#)) && $again# matches all transitions that have a label starting with
c2s and a label starting with again, in the process 1 of type client2, i.e., the statement
with two labels again and c2ssmsg2 at Line 16.
In an atomic transition expression, the second argument (together with the brackets) can be omitted if there is only one active process of the type specified by the first argument, or if the transition expression is imposed on all active processes of that type. The first and second arguments (together with the brackets) can both be omitted if the transition expression is imposed on all active processes. Note, however, that the symbol $ cannot be omitted in any case.
For example, in Listing 7.1, the transition expression client1[0]$c2ssmsg# is equivalent to client1$c2ssmsg#. The transition expression $c2ssmsg# matches the transitions
that have a label starting with c2ssmsg in all active processes, i.e., the statements at Lines
8 and 16.
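The matching rule just described can be sketched as follows. The encoding is hypothetical (not SPIN internals): a transition is represented by its proctype name, its pid, and its set of labels, and only the # symbol of label expressions is handled.

```python
import re

def atomic_matches(expr, proctype, pid, labels):
    """Does the atomic transition expression procname[pid]$lblexp match a
    transition of process proctype[pid] carrying the given set of labels?"""
    head, _, lblexp = expr.partition('$')        # the '$' is mandatory
    m = re.fullmatch(r'(\w+)?(?:\[(\d+)\])?', head)
    want_proc, want_pid = m.group(1), m.group(2)
    # the first two (optional) arguments only restrict the application domain
    if want_proc and want_proc != proctype:
        return False
    if want_pid is not None and int(want_pid) != pid:
        return False
    # '#' in a label expression matches any string over [0-9A-Za-z_]
    pattern = lblexp.replace('#', '[0-9A-Za-z_]*')
    return any(re.fullmatch(pattern, l) for l in labels)
```

With the labels of Listing 7.1, atomic_matches('client1[0]$c2ssmsg#', 'client1', 0, {'again', 'c2ssmsg1'}) holds, and omitting the first two arguments ($c2ssmsg#) applies the label expression to every active process, as described above.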
The reader may find that the atomic transition expression is syntactically similar to the remote label reference in Promela, except for the symbol $. There are two superficial differences: (1) the first argument of a remote label reference cannot be omitted; (2) the third argument of a remote label reference must be an existing label name, rather than a label expression. Furthermore, we will show later that they have different semantics in their corresponding claims.
Let us consider the complexity of deciding whether a transition is matched by a transition expression.
For atomic transition expressions, we showed that the membership problem for label
expressions (the third argument) can be decided in linear time O(n). As mentioned, the
first two arguments only check the owner of the label, so do not affect the complexity.
Thus, given a transition with several labels of the total length n, whether it is matched
by an atomic transition expression can be decided in linear time O(n). That is, the
membership problem for atomic transition expressions can also be decided in linear time O(n).
Suppose a transition expression has i atomic transition expressions and j logic connectives; then the membership problem can be decided in i · O(n) + O(j) time. Since i and j
are constants for a given transition expression, the membership problem can be decided
in linear time O(n). This shows that transition expressions are feasible in practice.
7.3.3 Nevertrace Claims
A nevertrace claim specifies properties about finite or infinite execution traces (i.e., sequences of transitions) that should never occur. A nevertrace claim may be nondeterministic, and it is performed at every single execution step of the system.
A nevertrace claim may contain only control-flow constructs and transition expressions. It can also contain end-state, progress-state and accept-state labels, with their usual interpretation in never claims. Therefore, it looks like a never claim, except for the keyword nevertrace and for allowing transition expressions instead of condition statements.
An example of nevertrace claim for the model in Listing 7.1 is as follows:
nevertrace { /* ![]( $x_inc -> <> $x_dec#) */
T0_init:
if
:: (!$x_dec# && $x_inc) -> goto accept_S4
:: $# -> goto T0_init
fi;
accept_S4:
if
:: (!$x_dec#) -> goto accept_S4
fi;
}
In the example, the claim specifies the property that every increase of x is eventually followed by a decrease of x. In other words, if one of the transitions labeled x_inc is executed, then one of the transitions that have a label starting with x_dec will be executed in the future. By the way, if we replace $x_inc by $x_i#nc#, or $x_dec# by server[2]$x_dec#, the resulting claim is equivalent to the above one for this model.
A nevertrace claim is performed as follows, starting from the initial system state. One transition expression of the claim process is executed each time after the system has executed a transition. If the transition expression matches the last executed transition, it evaluates to true, and the claim moves to one of the next possible statements. If the claim gets stuck, the undesirable behavior cannot be matched, and therefore no error is reported.
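This stepping discipline can be sketched in Python (hypothetical data structures, not SPIN internals): the claim is a set of edges guarded by transition-expression predicates, and after every system transition we advance the set of possible claim states; an empty set means the claim is stuck.

```python
def step_claim(edges, states, t):
    """Advance the claim one step on system transition t.
    edges: iterable of (src, predicate, dst); states: current claim states."""
    return {dst for (src, pred, dst) in edges if src in states and pred(t)}

def run_claim(edges, init, trace):
    """Run the (possibly nondeterministic) claim along a finite trace."""
    states = {init}
    for t in trace:
        states = step_claim(edges, states, t)
        if not states:
            return 'stuck'     # the undesirable behavior cannot be matched
    return 'alive'
```

Because the claim may be nondeterministic, a set of states is tracked rather than a single one; a full verifier would additionally report an error on claim termination or on a reachable acceptance cycle, as described below.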
For a rendezvous communication, the system executes an atomic event in which two
primitive transitions are actually executed at a time, one send operation and one receive
operation. In this case, we assume that the send operation is executed before the receive
operation in an atomic rendezvous event.
An error is reported if the full behavior specified could be matched by any feasible execution. The violation is caught as termination of the claim, or as an acceptance
cycle, just like never claims. Note that all the transitions of the model are within the
scope of the check.
In the example, it is easy to see that there exists no violation, since the nevertrace
claim cannot be matched completely.
Note that this property is hard to express using the existing constructs of Promela. For example, trace and notrace assertions cannot express it, since they can only specify properties on simple channel operations. The three types of special labels do not have this power either.
Fortunately, never claims can express this property if new variables are introduced to implicitly record information about the execution trace. For instance, we introduce two boolean variables a and b. After each statement labeled x_inc, we add the statements “a=1; a=0;”, which set a to 1 once. After each statement whose label matches x_dec#, we add the statements “b=1; b=0;”, which set b to 1 once. Then the property can be expressed as the LTL formula [](a -> <>b), and the negation of this formula can be converted into a never claim that specifies the required property.
However, note that the additional variables quadruple the state space (four combinations of a and b), and clutter the program, making it harder to read. In contrast, the nevertrace claim is more economical, since it takes full advantage of the transition information (e.g., control-flow states and their labels) that is already tracked as part of the state space in the verification mode of SPIN.
Interestingly, nevertrace claims can also be converted from LTL formulas. In the example, a never claim can be generated from the LTL formula ![]($x_inc -> <>$x_dec#) by SPIN; we can then obtain the above nevertrace claim by replacing the condition statement (1) (or true) by $#, which matches all transitions. This fact can facilitate the use of nevertrace claims in practice.
Finally, let us consider the syntax definition of nevertrace claims. There are various ways to modify the grammar of Promela to take the nevertrace claim into account. For example, we can add the following productions to the grammar of Promela.
unit :
nevertrace ;
nevertrace : NEVERTRACE body ;
expr
: PNAME ’[’ expr ’]’ ’$’ expr_label
| PNAME ’$’ expr_label
| ’$’ expr_label
;
Here unit, body and expr are existing nonterminals in the grammar of Promela, thus we
only extend the grammar by appending the new productions. The productions for the
nonterminal expr_label are omitted, since its syntax is clearly specified in Table 7.1.
7.4 Theoretical Foundation for Checking Nevertrace Claims
In this section, we propose the theory of asynchronous-composition Büchi automaton control systems, and then show by example the connection between this theory and the checking of nevertrace claims.
7.4.1 The Asynchronous Composition of Büchi Automata
At first, we recall the classic definition of Büchi automata [16, 120].
Definition 7.1. A (nondeterministic) Büchi automaton (simply automaton) is a tuple
A = (Q, Σ, δ, q0 , F ), where Q is a finite set of states, Σ is a finite alphabet, δ ⊆ Q × Σ × Q
is a set of named transitions, q0 ∈ Q is the initial state, F ⊆ Q is a set of accepting states.
For convenience, we denote the set of transition names also by δ.
Note that the concept of transition name is introduced here. A transition in δ is of the form pk : (q, a, q′), where pk is the name of the transition. In the transition diagram, a transition pk : (q, a, q′) ∈ δ is denoted by an arc from q to q′ labeled pk : a.
Given a set of automata, they execute asynchronously, but may synchronize on rendezvous events. We assume that the send operation is executed before the receive operation in an atomic rendezvous event. Formally, we define their asynchronous composition
as follows.
Let N = {n1 , ..., nk } ⊆ ℕ be a countable set with cardinality k, and for each nj ∈ N , let Snj be a set. We recall the Cartesian product:

∏nj∈N Snj = {(xn1 , xn2 , ..., xnk ) | ∀j ∈ {1, ..., k}, xnj ∈ Snj }

For each j ∈ {1, ..., k}, we denote the j-th component of the vector q = (xn1 , xn2 , ..., xnk ) by the projection q[j], i.e., q[j] = xnj .
Definition 7.2. The asynchronous composition A = ∏n∈N An of a countable collection of Büchi automata {An = (Qn , Σn , δn , qn , Fn )}n∈N is a Büchi automaton

A = (∏n∈N Qn , Σ, δ, ∏n∈N qn , F )

where Σ = ∪n∈N Σn ,

δ = {pk : (q, a, q′) | ∃ni ∈ N, a ∈ Σni ∧ pk : (q[i], a, q′[i]) ∈ δni , and ∀nj ∈ N, nj ≠ ni → q[j] = q′[j]}

F = {q ∈ ∏n∈N Qn | ∃ni ∈ N, q[i] ∈ Fni }.
We interpret the asynchronous composition as the expanded version which fully expands all possible values of the variables in the state space (see Appendix A of [72]). The
executability of a transition depends on its source state.
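A sketch of the transition relation of Definition 7.2 in Python (our own encoding, not from the thesis): each component is a dict from transition names to (source, symbol, target) triples, and a composite transition moves exactly one component while all the others stay put.

```python
from itertools import product

def async_delta(components):
    """Transition relation of the asynchronous composition.
    components: list of dicts name -> (src, sym, dst), one per automaton."""
    state_sets = [{s for (src, _, dst) in c.values() for s in (src, dst)}
                  for c in components]
    delta = set()
    for q in product(*state_sets):          # every global state vector
        for i, c in enumerate(components):
            for name, (src, sym, dst) in c.items():
                if q[i] == src:             # component i can fire this transition
                    q2 = tuple(dst if j == i else q[j] for j in range(len(q)))
                    delta.add((name, q, sym, q2))
    return delta
```

Note that the transition keeps its name in the composition, which is what later allows a controlling automaton to refer to transitions by name.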
The asynchronous composition differs from Input/Output automata [93, 94] in the following major aspects: (1) input-enabledness is not required, i.e., not all receive operations are enabled at every state; (2) strong compatibility is not required, i.e., different processes can have the same send operation; (3) an interaction always occurs between two automata, rather than by broadcasting as in Input/Output automata.
The asynchronous composition differs from interface automata [41] in the following major aspects: (1) the asynchronous composition is over several processes, rather than only two components; (2) the primitive send and receive operations in rendezvous communications remain visible to the environment outside the composition; (3) the interactions between two automata do not result in internal actions.
Furthermore, unlike the Input/Output automaton and the interface automaton, the Büchi automaton has a set of accepting states to specify acceptance conditions. It is thus more suitable for model checking.
7.4.2 The Asynchronous-Composition BAC System
An Asynchronous-Composition Büchi Automaton Control System (AC-BAC System) consists of an asynchronous composition of Büchi automata and a Büchi controlling automaton. The controlling automaton controls all the transitions of the primitive components
in the composition. Thus, the alphabet of the controlling automaton equals the set of
transition names of the controlled automata.
Definition 7.3. Given an asynchronous composition of a set of Büchi automata A = ∏_{n∈N} An, a (Büchi) controlling automaton over A is Ac = (Qc, Σc, δc, qc, Fc) with Σc = ⋃_{n∈N} δn. The global system is called an Asynchronous-Composition Büchi Automaton Control System (AC-BAC System).
The controlling automaton is used to specify the sequences of transitions that should
never occur, since a controlling automaton accepts sequences of transitions.
We compute the meta-composition of an asynchronous composition and a controlling
automaton. In the meta-composition, a transition is allowed iff it is in the asynchronous
composition and allowed by the controlling automaton. The name “meta-composition”
denotes that the controlling automaton is at a higher level, since it treats the set of transitions rather than the alphabet. We formally define the meta-composition ( ~· operator)
as follows.
Definition 7.4. The meta-composition of an asynchronous composition A = (Q, Σ, δ, q0, F) and a controlling automaton Ac = (Qc, Σc, δc, qc, Fc) is a Büchi automaton

A′ = A ~· Ac = (Q × Qc, Σ, δ′, (q0, qc), (F × Qc) ∪ (Q × Fc))

where for each qi, ql ∈ Q, qj, qm ∈ Qc and a ∈ Σ, we have pk : ((qi, qj), a, (ql, qm)) ∈ δ′ iff pk : (qi, a, ql) ∈ δ and (qj, pk, qm) ∈ δc.
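The product of Definition 7.4 can likewise be sketched in executable form. Again, the dict-based encoding is a hypothetical illustration of ours: transitions of the controlled composition carry names, and a transition is kept exactly when its name is also accepted as a step of the controlling automaton.

```python
def meta_composition(A, Ac):
    """Meta-composition A ~. Ac of Definition 7.4 (illustrative sketch).

    A  : composition with named transitions (name, src, symbol, dst).
    Ac : controlling automaton whose alphabet is the set of transition
         names of A; its transitions are triples (src, name, dst).
    pk: ((qi, qj), a, (ql, qm)) is kept iff pk: (qi, a, ql) is in delta
    and (qj, pk, qm) is in delta_c.
    """
    delta = set()
    for (name, qi, sym, ql) in A['delta']:
        for (qj, n, qm) in Ac['delta']:
            if n == name:
                delta.add((name, (qi, qj), sym, (ql, qm)))
    init = (A['init'], Ac['init'])
    states = {init}
    for (_, src, _, dst) in delta:
        states.update({src, dst})
    # acceptance (F x Qc) U (Q x Fc): either component accepts
    accept = {(q, qc) for (q, qc) in states
              if q in A['accept'] or qc in Ac['accept']}
    return {'states': states, 'init': init, 'delta': delta, 'accept': accept}

# Example: Ac allows p1 from q0 and p2 from q1, so p3 is blocked and only
# the alternation p1 p2 p1 p2 ... survives in the meta-composition.
A = {'states': {0, 1}, 'init': 0, 'accept': {0},
     'delta': {('p1', 0, 'a', 1), ('p2', 1, 'b', 0), ('p3', 1, 'c', 0)}}
Ac = {'states': {'q0', 'q1'}, 'init': 'q0', 'accept': {'q0'},
      'delta': {('q0', 'p1', 'q1'), ('q1', 'p2', 'q0')}}
M = meta_composition(A, Ac)
```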
If we let {An}n∈N contain only a single primitive automaton, i.e., |N| = 1, the definition expresses the meta-composition over a single automaton. That is, the controlling automaton over a single automaton is just a special case of the above definition.
We observe that A has feasible sequences of transitions accepted by Ac (or A matches
Ac ), iff the language accepted by the meta-composition is not empty. This fact will be
used for checking nevertrace claims.
Note that the emptiness problem for meta-compositions is decidable. It is well known
that the emptiness problem for Büchi automata is decidable [120]. Since the meta-composition is a Büchi automaton, checking its emptiness is also decidable.
It is worth noting that the controlling automaton specifies properties on transitions rather than states, which is the major difference between our theory and Vardi and Wolper's automata-theoretic framework for model checking.
7.4.3 From Nevertrace Claims to AC-BAC Systems
We first show how to translate a model with a nevertrace claim into an AC-BAC system. Let us consider the simple example in Listing 7.2, consisting of a client and a server.
Chapter 7. Nevertrace Claims for Model Checking
Listing 7.2: A Simplified Promela Model of Client and Server
 1  mtype = {msg, ack};
 2  chan c2s = [2] of {mtype};
 3  chan s2c = [0] of {mtype};
 4  int x = 0;
 5
 6  active proctype client() {
 7  again:
 8  c2ssmsg:  c2s!msg;
 9            x = x+1;
10  s2crack:  s2c?ack;
11            goto again;
12  }
13
14  active proctype server() {
15  c2srmsg:  do
16            :: c2s?msg;
17               x = x-1;
18  s2csack:      s2c!ack;
19            od;
20  }
Figures 7.3 (1) and (2) on Page 116 show the automata A1, A2 describing the behavior of the client and the server, respectively. Each transition between two control-flow states is labeled by its transition name (which may be derived from the line number, the control-flow state, etc.) and its statement. For example, 8 is the transition name between the states 8 and 9. The asynchronous composition A1 · A2 is shown in Fig. 7.3(3). A system state consists of four elements: the control-flow states of A1 and A2, the value of x, and the contents of the channel c2s. Note that the dashed transitions are not executable at their corresponding states, although they and the omitted subsequent states are part of the composition. That is, the figure contains exactly the reachable states of the system. At the bottom of the figure, the transitions labeled 18 and 10 constitute a handshake.
Let us consider the following nevertrace claim:
nevertrace { /* ![]( $c2s# -> <> $s2crack) */
T0_init:
if
:: (!$s2crack && $c2s#) -> goto accept_S4
:: $# -> goto T0_init
fi;
accept_S4:
if
:: (!$s2crack) -> goto accept_S4
fi;
}
The claim specifies the property that any operation on the channel c2s always leads to receiving ack from the channel s2c later. In other words, if one of the transitions whose labels start with c2s is executed, then one of the transitions labeled s2crack will be executed in the future.
Figure 7.3(4) shows the automaton specified by the nevertrace claim. Figure 7.3(5)
shows the controlling automaton specified by the nevertrace automaton. For example,
the transition expression (!$s2crack && $c2s#) matches the transitions 8 and 16.
The automata in Figures 7.3 (3) and (5) constitute an AC-BAC system, which is constructed from the model and the nevertrace claim. The next step is to check whether the asynchronous composition A1 · A2 matches the claim Ac.
The meta-composition (A1 · A2) ~· Ac is shown in Fig. 7.3(6) (to save space, the state space starting from the state (9,15,q0) is not drawn). The state of the controlling automaton is added to the system state. In the meta-composition, a transition is allowed iff it is in (A1 · A2) and allowed by Ac. Note that the transition 10 is blocked, since it is not allowed by the state q1 of Ac. It is easy to see that there is no acceptance cycle; thus the ω-language accepted by the meta-composition is empty. This means no counterexample can be found, and the system satisfies the required correctness property.
To conclude, checking nevertrace claims is equivalent to the emptiness problem for meta-compositions. A nevertrace claim is violated if and only if the language of the meta-composition is not empty. As we mentioned, the emptiness problem for meta-compositions is decidable. Therefore, checking nevertrace claims is feasible in practice. Furthermore, the emptiness of meta-compositions can be checked on-the-fly, using the technique for checking never claims [60].
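The on-the-fly emptiness check can be pictured with the classical nested depth-first search used for never claims. The sketch below is a simplified textbook variant of ours, not SPIN's implementation: the first DFS enumerates reachable states in post-order, and a second DFS is launched from each accepting state to look for a cycle through it. Clearing the inner marks per seed keeps the sketch simple, at the cost of the linear-time guarantee of the original algorithm.

```python
def has_accepting_cycle(init, successors, accepting):
    """Nested DFS: True iff some accepting state lies on a reachable
    cycle, i.e., the omega-language of the Buchi automaton is non-empty."""
    visited = set()
    flagged = set()

    def inner(s, root):
        # Second DFS: is there a path from s back to root?
        for t in successors(s):
            if t == root:
                return True
            if t not in flagged:
                flagged.add(t)
                if inner(t, root):
                    return True
        return False

    def outer(s):
        # First DFS: visit reachable states, then seed the inner
        # search at accepting states in post-order.
        visited.add(s)
        for t in successors(s):
            if t not in visited and outer(t):
                return True
        if accepting(s):
            flagged.clear()  # simplification: fresh marks per seed (quadratic)
            return inner(s, s)
        return False

    return outer(init)
```

For instance, in a graph 0 → 1 → 2 → 1 with accepting state 1, the search reports a counterexample; without the back edge it reports emptiness.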
7.5 On Expressing Some Constructs in SPIN
In this section, we show how to express the semantics of some Promela constructs using nevertrace claims, although the major objective of the nevertrace claim is to express a new class of properties rather than to replace these constructs.
7.5.1 Expressing Notrace Assertions
There are various ways to convert a notrace assertion into a nevertrace claim. As an
example, let us consider the notrace assertion in Section 7.2. We can construct an NFA
in Fig. 7.4 which specifies the same property as the DFA in Fig. 7.2 of the notrace
assertion for the model in Listing 7.1.
A nevertrace claim can be written according to the automaton.
nevertrace { /* containing two consecutive c2s!msg1 */
S0:
    if
    :: $c2ssmsg1 -> goto S1;
    :: $c2ssmsg2 -> goto S0;
    :: (!$c2ss#) -> goto S0;
    fi;
S1:
    if
    :: $c2ssmsg1;
    :: $c2ssmsg2 -> goto S0;
    :: (!$c2ss#) -> goto S1;
    fi;
    /* S2 */
}
It is easy to see that we used the following rules to convert the notrace assertion into a nevertrace claim.
First, in the system model, all send operations of message msg on a channel ch must
have a label chsmsg, while all receive operations of message msg on the channel must have
a label chrmsg. The labels of all statements other than channel operations should not
start with the names of declared channels.
Second, in the notrace assertion, (1) replace ch!msg by $chsmsg, ch?msg by $chrmsg;
(2) for each state, add new transition expressions to match the statements outside the
scope of the notrace assertion. In the example, for each state, we add a transition from
the state to itself with the transition expression (!$c2ss#), since only send operations on
the channel c2s are within the scope of the notrace assertion.
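The first rule is purely syntactic and can be mechanized. The helper below is a hypothetical illustration of ours (the name to_transition_expr is not from any tool): it rewrites a simple channel operation ch!msg or ch?msg into the corresponding transition expression $chsmsg or $chrmsg.

```python
import re

def to_transition_expr(stmt):
    """Rule (1): rewrite a simple channel operation into a nevertrace
    transition expression, ch!msg -> $chsmsg and ch?msg -> $chrmsg."""
    m = re.fullmatch(r"(\w+)([!?])(\w+)", stmt.strip())
    if m is None:
        raise ValueError(f"not a simple channel operation: {stmt!r}")
    ch, op, msg = m.groups()
    return f"${ch}{'s' if op == '!' else 'r'}{msg}"
```

On the example above, to_transition_expr("c2s!msg1") yields $c2ssmsg1, and to_transition_expr("s2c?ack") yields $s2crack.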
7.5.2 Expressing Remote Label References
The predefined function procname[pid]@label is a remote label reference. It returns a
nonzero value only if the next statement that can be executed in the process procname[pid]
is the statement with label.
It seems that procname[pid]@label can also be used as a transition expression by replacing @ with $. However, there is a slight difference in semantics. The remote label reference is evaluated over the next statement of the process procname[pid], whereas the transition expression is evaluated over the last executed transition, which does not necessarily belong to the process procname[pid].
7.5.3 Expressing the Non-Progress Variable
The predefined non-progress variable np_ holds the value false if at least one running process is at a control-flow state with a progress-state label. It is used to detect the existence of non-progress cycles.
It seems that the variable np_ is equivalent to our transition expression !$progress#. However, there is a slight difference. The variable np_ is evaluated over all the running processes, whereas the transition expression is evaluated over the last executed process.
7.5.4 Expressing Progress-State Labels
There are two types of progress cycles. A weak progress cycle is an infinite execution
cycle that contains at least one of the progress-state labels, which denotes reaching some
progress-state labels infinitely often. A strong progress cycle is a weak progress cycle
with the requirement that each statement with a progress-state label in the cycle must be
executed infinitely often.
Promela supports only the weak progress cycle, whereas our nevertrace claim can
express the strong progress cycle.
As an example, let us consider the model in Listing 7.3, consisting of two processes p1 and p2. Note that p1 does not execute any statement, but waits forever at the label progress.
Listing 7.3: A Promela Model of Two Processes
chan ch = [0] of {bool};

active proctype p1() {
    bool x;
progress:
    ch?x;
}

active proctype p2() {
    bool x = 0;
    do
    :: x==0; x=1;
    :: x==1; x=0;
    od;
}
In the verification mode of SPIN, no (weak) non-progress cycle is found, with or without the fairness condition³. All executions are weak progress cycles, since p1 stays forever at a progress-state label.
In contrast, we can find a strong non-progress cycle using the following nevertrace claim, which can be constructed from the LTL formula <>[]!$progress#. (When modifying a never claim generated from this LTL formula by SPIN, remember to replace the condition statement (1) or true by $#, which matches all transitions.)
nevertrace { /* strong non-progress cycle detector */
T0_init:
if
:: (!$progress#) -> goto accept_S4
:: $# -> goto T0_init
fi;
accept_S4:
if
:: (!$progress#) -> goto accept_S4
fi;
}
Note that the evaluation of transition expressions is over the last executed transition, and none of the executable transitions has a progress-state label. Therefore, strong non-progress cycles can be detected as counterexamples.
7.5.5 Expressing Accept-State Labels
There are two types of acceptance cycles. A weak acceptance cycle is an infinite execution
cycle that contains at least one of the accept-state labels, which denotes reaching some
accept-state labels infinitely often. A strong acceptance cycle is a weak acceptance cycle
with the requirement that each statement with an accept-state label in the cycle must be
executed infinitely often.
Promela supports only the weak acceptance cycle, whereas our nevertrace claim can
express the strong acceptance cycle.
As an example, let us replace the label progress by accept in the model of Listing 7.3. Note that p1 does not execute any statement, but waits forever at the label accept.
³ The (weak) fairness in SPIN means: if the executability of an executable statement never changes, it will eventually be executed. Strong fairness means: if a statement becomes executable infinitely often, it will eventually be executed [72].
In the verification mode of SPIN, a (weak) acceptance cycle is found, with or without the fairness condition. All executions are weak acceptance cycles, since p1 stays forever at an accept-state label. Therefore, (weak) acceptance cycles can be detected as counterexamples.
In contrast, we cannot find any strong acceptance cycle using the following nevertrace claim, which can be constructed from the LTL formula []<>($accept#). (Again, when modifying a never claim generated from this LTL formula by SPIN, remember to replace the condition statement (1) or true by $#, which matches all transitions.)
nevertrace { /* strong acceptance cycle detector */
T0_init:
if
:: $accept# -> goto accept_S9
:: $# -> goto T0_init
fi;
accept_S9:
if
:: $# -> goto T0_init
fi;
}
Note that the evaluation of transition expressions is over the last executed transition, and none of the executable transitions has an accept-state label. Therefore, there is no strong acceptance cycle.
7.6 Related Work
In the previous section, we mentioned some differences between nevertrace claims and some constructs in Promela. In this section, we summarize the comparison with the two most powerful constructs in Promela: never claims and notrace assertions.
The major differences between nevertrace claims and never claims are obvious. They
are used to specify properties on sequences of transitions (execution traces) and sequences
of states, respectively. A nevertrace claim is performed after executing a transition,
whereas a never claim is started from the initial system state. A nevertrace claim is
evaluated over the last executed transition, whereas a never claim is evaluated over the
current system state. Thanks to their different focuses, they can be used together in a model checker to achieve greater expressive power.
Nevertrace claims and notrace (also trace) assertions are both about execution traces; their major differences are as follows.
1. They have different scopes of checking. Nevertrace claims consider all transitions,
whereas only simple send/receive operations are within the scope of notrace assertions. Furthermore, only the channel names that are specified in a notrace assertion
are considered to be within its scope. All other transitions are ignored.
2. The notrace assertion cannot contain random receive, sorted send, or channel poll operations, but these can be tracked by a nevertrace claim.
3. The notrace assertion must be deterministic, whereas the nevertrace claim could
be nondeterministic, just like the never claim.
4. The notrace assertion does not execute synchronously with the system, but executes only when events of interest occur, whereas the nevertrace claim executes synchronously with the system, just like the never claim.
7.7 Conclusion
In this chapter, we proposed the nevertrace claim, a new construct for specifying correctness properties about the finite or infinite execution traces (i.e., sequences of transitions) that should never occur. We showed the major contributions of this construct: a powerful means for formalizing properties related to transitions and their labels, and a way to reduce the state space at the design stage.
The Asynchronous-Composition Büchi Automaton Control System (AC-BAC System)
provides the theoretical foundation for checking nevertrace claims. We showed that the
nevertrace claim and its checking problem are feasible in practice.
One important piece of future work is to implement the nevertrace claim. Then empirical data could be collected to show whether and by how much the nevertrace claim can reduce the state space and decrease checking time in practice, compared with checking the same properties specified by existing constructs in SPIN. Another research direction is to make the checking of nevertrace claims compatible with the partial order reduction technique.
[Figure: (1) automaton A1 of the client, with transitions 8: c2s!msg (labels again, c2ssmsg), 9: x=x+1, and 10: s2c?ack (label s2crack); (2) automaton A2 of the server, with transitions 16: c2s?msg (label c2srmsg), 17: x=x-1, and 18: s2c!ack (label s2csack); (3) the asynchronous composition A1 · A2, whose states record the control-flow states, the value of x and the contents of c2s; (4) the nevertrace automaton; (5) the controlling automaton Ac over Σc = δ1 ∪ δ2 = {8, 9, 10, 16, 17, 18}, whose transition expressions match the sets {8, 16} and {8, 9, 16, 17, 18}; (6) the meta-composition (A1 · A2) ~· Ac.]
Figure 7.3: An Example of Asynchronous-Composition BAC System
[Figure: an NFA with states S0, S1, S2; S0 and S1 carry self-loops labeled !$c2ss#; $c2ssmsg1 leads from S0 to S1 and from S1 to S2; $c2ssmsg2 leads back to S0; S2 carries a self-loop labeled $c2ssmsg1, $c2ssmsg2.]
Figure 7.4: The Nondeterministic Automaton of the nevertrace Claim
Part II
The Model Monitoring Approach and Applications
Chapter 8
The Model Monitoring Approach
In this chapter, we propose the model monitoring approach, which is based on the theory of control systems presented in the first part. The key principle of the approach is "property specifications as controllers". In other words, the functional requirements and the property specification of a system are separately modeled and implemented, and the latter controls the behavior of the former. The model monitoring approach comprises two alternative techniques, namely model monitoring and model generating. One important merit of the approach is better support for the change and evolution of property specifications, which challenge the traditional development process.
8.1 The Model Monitoring Approach
Functional requirements and property specifications are two important factors in the development process. A design or product should implement the functional requirements to ensure functionality, and satisfy the property specification to ensure dependability. The principle of the traditional development process is to integrate the property specification into the implementation of the functional requirements through testing, formal verification, etc.
In this section, we propose the model monitoring approach, whose key principle is "property specifications as controllers". In other words, the functional requirements and the property specification of a system are separately modeled and implemented, and the latter controls the behavior of the former. This is obviously different from the principle of the traditional development process.
There are numerous reasons for and merits of using such a new principle. The two most important are as follows.
First, functional requirements and property specifications have different natures, since they are defined by different groups of people in practice. Indeed, functional requirements are usually derived from the requirements of clients, the decisions of designers and developers, etc., while property specifications are mainly required by quality engineers and testing engineers, whose major interest is the dependability of products.
Second, property specifications may change during the entire life-cycle, both during development and post-implementation. The merit of the model monitoring approach in this context is the low coupling between the implementations of the functional requirements and the property specification, which decreases the cost of satisfying a changing property specification.
The process of the model monitoring approach consists of three major steps, as shown in Fig. 8.1.
[Figure: (1) the system model M1 is obtained by modeling the functional requirements; (2) the controlling model M2 is obtained by modeling the property specification; (3) the meta-composition combines them into M = M1 ~· M2.]
Figure 8.1: The Process of the Model Monitoring Approach
At the first step, the system model M1 is captured from the functional requirements.
This model contains functional semantics, which defines how the system behaves. System
behavior modeling is achieved by system engineers, such as designers, programmers and
developers.
At the second step, the controlling model M2 over M1 is derived from the property specification. M2 is also called a controller. This model contains correctness semantics, which defines what the system should do and what it should not do. Modeling controllers is the duty of quality engineers, whose responsibility is to ensure dependability, including reliability, safety, etc. Quality engineers may include requirements engineers, testing engineers, managers from higher socio-technical levels who define safety standards or regulations [89], etc.
Note that M1 and M2 could be expressed using various types of models or formalisms, such as automata, grammars, ω-automata, pseudo-code and other description languages. The two models M1 and M2 should be expressed using the same formalism, or two formalisms that can be converted into a common one. For example, linear temporal logic formulas can be translated into Büchi automata.
At the third step, the objective is to construct a control system consisting of M1 and M2. Then the global system M = M1 ~· M2 will satisfy the properties, since M2 restricts the behavior of M1. Technically, we have two choices for implementing the meta-composition operator to obtain such a globally correct system.
The first one is named model monitoring, or explicit model monitoring. M1 and M2 are separately implemented, possibly at different stages of the life-cycle. That is, M2 can be added or modified at later stages of the life-cycle, even after the implementation of M1. The system M1 is controlled at runtime by M2, which reports or blocks undesired actions, or recovers the system from hazardous states. We can incrementally add new properties to the controlling system M2 after we learn new dependability requirements. We can also modify the existing properties in the controlling system M2 if we find them erroneous. Note that we do not really implement the meta-composition M = M1 ~· M2; instead, M1 and M2 constitute a global system that is equivalent to M. If the property specification changes, M1 is not modified. We only need to revise M2, which is much easier and more efficient than the traditional approach, which requires revising the whole system.
The second one is named model generating, or implicit model monitoring. M1 and M2 are combined at design-time to generate a new correct model M = M1 ~· M2 satisfying the property specification. Then we implement the model M, where M2 implicitly monitors M1. If the specification changes, we only need to revise M2, then regenerate and implement a new model M′. Because the computation of the meta-composition can be automated, this is more efficient than the manual analysis of counterexamples obtained through testing and verification, and the manual revision of the design, in the traditional approach.
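The contrast between the two techniques can be pictured with a small runtime sketch of explicit model monitoring. This is an illustration of ours, not an implementation from this thesis: the controller M2 is consulted before each transition of M1 fires, so the pair behaves like the meta-composition M = M1 ~· M2 without ever building it.

```python
class Controller:
    """Controlling model M2: an automaton over transition names.
    delta maps (state, transition name) -> next state; a name is
    allowed in a state iff such an entry exists."""
    def __init__(self, init, delta):
        self.state, self.delta = init, delta

    def allows(self, name):
        return (self.state, name) in self.delta

    def step(self, name):
        self.state = self.delta[(self.state, name)]


class MonitoredSystem:
    """System model M1 whose transitions are filtered by M2 at runtime,
    behaving like the meta-composition M1 ~. M2 without constructing it."""
    def __init__(self, delta, init, controller):
        self.delta, self.state, self.ctrl = delta, init, controller

    def enabled(self):
        # transitions executable in M1 and allowed by M2
        return [n for (s, n) in self.delta
                if s == self.state and self.ctrl.allows(n)]

    def fire(self, name):
        if not self.ctrl.allows(name):
            raise RuntimeError(f"transition {name} blocked by controller")
        self.state = self.delta[(self.state, name)]
        self.ctrl.step(name)
```

Instead of raising, the controller could merely report the blocked transition or trigger a recovery action, matching the three reactions (report, block, recover) mentioned above.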
As a whole, we generally call the two alternative techniques the model monitoring approach. In both cases, the property specification is design-oriented. That is, the property specification is implemented directly or integrated into the design. This is different from the traditional approach, where the property specification is more testing-oriented, i.e., used to verify a system design.
The word model in the term "model monitoring approach" denotes that designing a controller usually depends on analyzing the model of the controlled system, and that what model generating produces is usually a model rather than a complete implementation.
It is worth noting that the models M1, M2 could be expressed using automata or grammars. This means the model monitoring approach can be used for both automata-based systems and grammar systems. Numerous industrial systems are automata-based (e.g., described using Büchi automata), while programming and modeling languages are typical grammar systems described using context-free grammars. In the next chapter, we will discuss two typical applications to show the strong power of this approach. The first one concerns safety-related systems, which are automata-based systems. The second one concerns the Unified Modeling Language, which is a grammar system.
8.2 Supporting the Change and Evolution of Property Specifications
In this section, we discuss an important merit of the model monitoring approach, i.e., supporting the change and evolution of property specifications, which is a challenge to the traditional development process. Such changes are common both at design-time and post-implementation, especially for systems with long life periods, e.g., aircraft, nuclear plants, critical embedded electronic systems, etc. Unfortunately, changes of property specifications always cause high expenditure on rechecking and revising the system, especially when the system is too complex to be analyzed manually or so large that the revision is not trivial. Moreover, the changes may not only lead to modifying a small portion of the system, but to revising the entire design. We are searching for a technique supporting changing property specifications at a lower cost.
It is well known that the change and evolution of requirements are common and challenge the practice of system development and maintenance [66]. The change may result from various factors [17], such as experience of using the system after implementation and distribution, dynamic and turbulent environments, requirements elicitation, new regulations, etc. As a result, engineers must take these changes into account and revise their system post-implementation, of course at a high cost. The change and evolution of requirements throw down the gauntlet to the traditional development process. We must support change throughout the entire life-cycle from various aspects [50]. We are interested in innovations in system development methodology that provide better support.
As we know, requirements include the following two classes [67]:
1. Functional requirements, which pertain to all functions that are to be performed by the target system.
2. Dependability requirements, which concern the dependable operation of the target system. Dependability requirements contain safety requirements, security requirements, reliability requirements, etc. For example, safety requirements [2, 9] focus on safety constraints specifying authorized system behaviors and component interactions, where a safety constraint specifies a specific safeguard [55]. A system may have different dependability requirements under different contexts or criticality levels. For instance, control software is subject to stricter constraints than entertainment software, even if both are embedded in aircraft.
We are interested in the change and evolution of dependability requirements, the latter of the two classes above. There are two common causes of changes in dependability requirements.
First, dependability requirements may change at design-time. Functional and dependability requirements are defined by different groups of people in industrial practice, i.e., system engineers and quality engineers, respectively. At the beginning of system development, quality engineers often find it very difficult to produce a complete set of dependability requirements. Thus, they add emergent dependability requirements during the development process, along with their increasing knowledge of the design.
Second, dependability requirements may change post-implementation. Some dependability requirements were unknown before the system was developed and used in a real environment. For example, people always need to learn new safety requirements from historical events [90, 87]. Moreover, safety regulations change several times during the life-cycle as people require a safer system. However, it is expensive to modify the design after we learn these requirements, since the product has been released.
In the development process, dependability requirements are always modeled as property specifications. As a result, the changes discussed above cause the change and evolution of property specifications.
In the past thirty years, model checking [31] has achieved great success as a verification technique for ensuring property specifications. It has been widely used to verify reactive systems, by specifying and checking properties written as temporal logic formulas [75] or expressed as Büchi automata.
The major disadvantages of model checking techniques under changing property specifications are the following two. First, the analysis of counterexamples and the revision of designs are not automated. If the system is complex, it is difficult to locate the faults and revise the design without introducing new faults. As a result, the verification process is iterated until no fault is detected, which increases the cost. Second, once new properties are introduced or existing properties are modified, the whole design or implementation (product) must be revised or redistributed at a high cost, or this may even be impossible, especially when the system is very large.
Motivated by the need to address these drawbacks, we can use the model monitoring approach to fill the gap between the change of property specifications and the traditional verification process. The novel approach models the functional requirements and the property specification separately. Then two alternative techniques can be applied to ensure the properties.
For the first alternative, namely model monitoring (or explicit model monitoring for emphasis), the implementation of the property specification is separated from the implementation of the functional requirements. A controlling system realizing the property specification controls the behavior of the target system at runtime. The two systems constitute a correct system from a global view. The interest of this approach is that we only need to revise the controlling system to guarantee correctness when the property specification changes. Since the controlling system is generally much smaller than the overall system, the cost of analysis and revision is much lower.
For the second alternative, namely model generating (or implicit model monitoring), a new system design satisfying the property specification can be automatically generated.
We only need to implement directly the generated system design. The computation can be automated; thus the cost of modifying the design is lower.
The two alternatives address the two aforementioned disadvantages of the traditional development process using model checking, in the context of change and evolution.
8.3 Example: Oven and Microwave Oven
In this section, we first illustrate the model checking approach and our model monitoring approach on the same example; then we compare the two approaches to show the differences between them.
Let us consider the behaviors of an oven and a microwave oven. They have similar operations: start oven, open door, close door, heat, etc. One significant difference is that we can use an oven with its door open, but we should only use a microwave oven with its door closed, to avoid the damaging effects of radiation. Suppose we have a design of an oven and we want to reuse this design for producing microwave ovens. We must impose additional constraints (a case of evolution of the property specification). For example, we add the constraint: "the door must be closed when heating".
Figure 8.2 depicts the Kripke model M of a design of the oven. The atomic propositions are: s (start), c (door is closed) and h (heat), i.e., AP = {s, c, h}. Each state is labeled with both the atomic propositions that are true and the negations of the propositions that are false in that state.
8.3.1 Using the Model Checking Approach
We aim at checking the property “when the oven is heating, the door must be closed”, i.e., the LTL formula φ = G(h → c). For clarity and to avoid complex figures, we use this simple formula; the approach can be automated and scales up to more complex models and formulas. We are concerned with automata-theoretic model checking, which is syntactically similar to our approach (although there is no semantic similarity).
First, M is translated into the Büchi automaton AM of Fig. 8.3 by adding an initial state q0. Each transition is labeled with its name pi and the associated symbol a ∈ Σ = 2^AP. Each state is an accepting state.
Second, the negation of φ is translated into a Büchi automaton A¬φ of Fig. 8.4. For
¬φ, we have
¬φ = ¬G(h → c) = ¬(False R (¬h ∨ c)) = True U (h ∧ ¬c)
Thus, A¬φ of Fig. 8.4 is constructed recursively from True U (h ∧ ¬c). Each transition is labeled with a Boolean expression, representing the set of symbols that correspond to truth assignments for AP satisfying the expression. Specifically, let Σ(ϕ) be the set of symbols making ϕ true; then Σ(h ∧ ¬c) = {{h}, {s, h}} and Σ(True) = 2^AP. We may represent A¬φ by labeling its transitions with symbols in Σ instead of Boolean expressions, i.e., the automaton of Fig. 8.5, in a similar style to Fig. 8.3.
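The construction of Σ(ϕ) can be made concrete by a small enumeration over 2^AP. The following sketch is illustrative only (the helpers `symbols` and `sigma` are ours, not part of the thesis's tooling); a Boolean expression is modeled as a Python predicate over the set of true atoms:

```python
from itertools import combinations

AP = ("s", "c", "h")  # atomic propositions: start, door closed, heat

def symbols():
    """Enumerate the alphabet Sigma = 2^AP as frozensets."""
    return [frozenset(sub) for r in range(len(AP) + 1)
            for sub in combinations(AP, r)]

def sigma(phi):
    """Sigma(phi): the symbols (truth assignments) satisfying phi."""
    return {a for a in symbols() if phi(a)}

# Sigma(h ∧ ¬c) = {{h}, {s, h}}, as in the text
h_and_not_c = sigma(lambda a: "h" in a and "c" not in a)
# Sigma(True) = 2^AP: all 8 symbols
true_sigma = sigma(lambda a: True)
```

Such an enumeration is exponential in |AP|, which is harmless here (|AP| = 3) but explains why the figures label transitions with Boolean expressions rather than symbol sets.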
Then, we compute the intersection AI = AM ∩ A¬φ . The double DFS (Depth First
Search) algorithm is called to decide the emptiness of AI on-the-fly. A subgraph of the
intersection constructed by the algorithm is shown in Fig. 8.6. A state tij denotes the composite state (qi, rj). The first DFS (solid transitions) proceeds from t00 to t73; the second DFS (dashed transitions) then starts at t73. A successor t83 is detected on the stack of the first DFS, so the algorithm reports a counterexample. Thus, L(AI) ≠ ∅.
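The double DFS can be sketched as follows. This is a simplified nested-DFS sketch over an explicit successor map (the toy graph and state names are ours, not the product of Fig. 8.6):

```python
def buchi_nonempty(initial, succ, accepting):
    """Nested DFS: True iff some accepting state is reachable and lies
    on a cycle, i.e. the Buchi automaton accepts some infinite word."""
    visited1, visited2 = set(), set()
    stack1 = []  # states on the current outer-DFS path

    def dfs2(s):
        # inner DFS: search for a cycle closing back onto the outer path
        for t in succ.get(s, ()):
            if t in stack1:
                return True              # lasso found: counterexample
            if t not in visited2:
                visited2.add(t)
                if dfs2(t):
                    return True
        return False

    def dfs1(s):
        visited1.add(s)
        stack1.append(s)
        for t in succ.get(s, ()):
            if t not in visited1 and dfs1(t):
                return True
        # post-order: launch the inner DFS from accepting states
        if s in accepting and dfs2(s):
            return True
        stack1.pop()
        return False

    return dfs1(initial)

# Toy graph with an accepting cycle q1 -> q2 -> q1:
succ = {"q0": ["q1"], "q1": ["q2"], "q2": ["q1"]}
found = buchi_nonempty("q0", succ, {"q1"})  # True: language non-empty
```

Detecting a successor already on the outer stack, as at t83 above, is exactly the "lasso found" case in this sketch.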
Chapter 8. The Model Monitoring Approach
Figure 8.2: The Kripke Model M of Oven (eight states labeled with the truth values of s, c and h; transitions: start oven, open door, close door, warmup, start cooking, cook, done)
Figure 8.3: The Büchi Automaton AM of Oven (states q0–q8; transitions p1–p15, each labeled with a symbol in Σ = 2^AP)
Figure 8.4: The Büchi Automaton A¬φ of ¬φ
Figure 8.5: A¬φ Labeled with Σ
Figure 8.6: A Subgraph of AI
Figure 8.7: The Büchi Automaton Aφ of φ
Figure 8.8: The Controlling Automaton Âφ
Figure 8.9: C = AM ~· Âφ
Finally, engineers must manually analyze the original design guided by the reported counterexample, locate the faults in the design, and revise the design. The iterative process of model checking, counterexample analysis and revision is repeated until L(AI) = ∅. Note that if the system is complex or large, the analysis of counterexamples and the revision are hard, because it is difficult to locate the faults. As a result, due to the complexity and size of the system, the cost of manual counterexample analysis and revision is high.
8.3.2 Using the Model Monitoring Approach
We also need the automaton AM of Fig. 8.3. However, we need a controlling automaton Âφ instead of A¬φ. A controlling automaton is a Büchi automaton whose alphabet equals the set of transitions of the controlled automaton AM, i.e., Âφ = (Q′, Σ′, δ′, q′0, F′) with Σ′ = δ, where δ is the set of transitions of the controlled automaton AM. The controlling automaton can be constructed from the property specification directly, or by translating the automaton Aφ (resulting in an alphabet-level controlling automaton).
For this example, we use Aφ, which is translated from φ using the standard translation from LTL formulas to automata. For φ, we have
φ = G(h → c) = False R (¬h ∨ c)
The constructed automaton Aφ is shown in Fig. 8.7.
We observe the following fact: let AI = AM ∩ Aφ; then L(AI) ⊆ L(Aφ), i.e., AI satisfies φ. The intersection AI satisfies the property φ because the transitions of AM that violate the property are removed. In other words, AI is a model satisfying φ.
We translate Aφ into an alphabet-level controlling automaton Âφ by replacing each Boolean expression ϕ by δ(ϕ), the set of transitions labeled with one of the symbols that correspond to truth assignments satisfying ϕ. For example, each transition of Âφ in Fig. 8.8 is labeled with a set of names of transitions of AM, where
δ(¬h) = {p1 , p2 , p3 , p4 , p8 , p9 , p13 , p14 , p15 }
δ(c) = {p2 , p4 , p5 , p6 , p7 , p8 , p14 }
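The mapping from a Boolean expression ϕ to the transition set δ(ϕ) can be computed mechanically from the labels of Fig. 8.3. The sketch below transcribes those labels into a dictionary (the helper `delta` is an illustrative name of ours, not the thesis's transition function δ itself):

```python
# Transition labels of A_M (Fig. 8.3): name -> symbol, a subset of AP
labels = {
    "p1": set(), "p2": {"c"}, "p3": set(), "p4": {"s", "c"},
    "p5": {"s", "c", "h"}, "p6": {"c", "h"}, "p7": {"c", "h"},
    "p8": {"c"}, "p9": {"s"}, "p10": {"s", "h"}, "p11": {"h"},
    "p12": {"h"}, "p13": set(), "p14": {"s", "c"}, "p15": {"s"},
}

def delta(phi):
    """delta(phi): transitions of A_M whose symbol satisfies phi."""
    return {p for p, a in labels.items() if phi(a)}

not_h = delta(lambda a: "h" not in a)   # delta(¬h), as in the text
closed = delta(lambda a: "c" in a)      # delta(c)
```

Running this reproduces exactly the two sets δ(¬h) and δ(c) listed above.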
Figure 8.10: The Kripke Model M′ of Microwave Oven
Then we compute the meta-composition C of AM and the controlling automaton Âφ ,
denoted by C = AM ~· Âφ . The automaton C of Fig. 8.9 starts from the initial state
Figure 8.11: The Büchi Automaton Aφ of φ
Figure 8.12: The Controlling Automaton Âφ
Figure 8.13: C = AM ~· Âφ
t00; a transition is allowed if and only if it is allowed by both AM and Âφ. Note that the hazardous transitions p10, p11, p12 and the unreachable transition p13 are eliminated. The model C satisfies the property φ. We can recover from it the Kripke model M′ of Fig. 8.10 by removing the initial state. It is easy to see that the model M′ satisfies the required property.
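The meta-composition itself can be sketched as a product construction in which the controller's alphabet is the set of transition names of AM. The sketch below is a minimal illustration under that definition (explicit transition tables; the toy automata and names are ours, not those of Fig. 8.9):

```python
def meta_composition(am_trans, ctrl_trans, q0, r0):
    """Product of A_M with a controlling automaton reading transition
    names. am_trans: name -> (src, dst);
    ctrl_trans: (ctrl_state, name) -> ctrl_state."""
    states, edges = {(q0, r0)}, []
    frontier = [(q0, r0)]
    while frontier:
        q, r = frontier.pop()
        for name, (src, dst) in am_trans.items():
            # a move is allowed iff both automata allow it
            if src == q and (r, name) in ctrl_trans:
                nxt = (dst, ctrl_trans[(r, name)])
                edges.append(((q, r), name, nxt))
                if nxt not in states:
                    states.add(nxt)
                    frontier.append(nxt)
    return states, edges

# Toy fragment: p_bad is forbidden by the controller and is pruned.
am = {"p_ok": ("q0", "q1"), "p_bad": ("q1", "q2"), "p_back": ("q1", "q0")}
ctrl = {("r0", "p_ok"): "r0", ("r0", "p_back"): "r0"}
states, edges = meta_composition(am, ctrl, "q0", "r0")
```

A transition of AM survives only if the controller has a move on its name, and states that thereby become unreachable are never added; this is how hazardous and unreachable transitions such as p10–p13 disappear from C.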
The model monitoring approach contains two alternative techniques to implement the
meta-composition operator as follows.
One alternative, namely model monitoring or explicit model monitoring for emphasis,
implements AM and Âφ separately, which constitute an overall correct system. Specifically,
Âφ can be realized as a controlling system that monitors the behavior of the system
implementing AM . If AM tries to apply a certain action that violates the properties, the
control system can detect it and call some predefined functions to alert, block the action,
or recover from the hazardous state.
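A minimal sketch of such a controlling component, as a wrapper that vetoes disallowed actions, might look as follows (hypothetical code, not the thesis's implementation; the controller states and action names encode the door/heating constraint of this example):

```python
class Monitor:
    """Runtime controller: tracks the controlling automaton's state and
    vetoes any action (transition name) it does not allow."""
    def __init__(self, ctrl_trans, r0):
        self.trans = ctrl_trans  # (state, action) -> next state
        self.state = r0

    def allow(self, action):
        key = (self.state, action)
        if key not in self.trans:
            return False  # violation detected: alert and block the action
        self.state = self.trans[key]
        return True

# Controller for "the door must be closed when heating":
# r0 = door open, r1 = door closed; "heat" is only allowed in r1.
ctrl = {("r0", "close_door"): "r1", ("r1", "open_door"): "r0",
        ("r1", "heat"): "r1", ("r0", "start"): "r0", ("r1", "start"): "r1"}
m = Monitor(ctrl, "r0")
blocked_first = m.allow("heat")   # vetoed while the door is open
m.allow("close_door")
heat_ok = m.allow("heat")         # allowed once the door is closed
```

Blocking is only one possible reaction; the same hook could instead raise an alert or trigger a recovery routine, as mentioned above.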
Another alternative, namely model generating or implicit model monitoring, directly implements the automatically generated model M′ as a correct model of microwave ovens, rather than separating the two implementations. In this case, Âφ implicitly monitors the behavior of AM.
We may also use an equivalent automaton Aφ of Fig. 8.11 specifying φ, with the corresponding controlling automaton in Fig. 8.12. The meta-composition C is shown in Fig. 8.13. The hazardous transitions p10, p11, p12 and the unreachable transition p13 are also eliminated. However, due to the redundant states in Âφ (e.g., r1 and r2 are equivalent), the number of states of C is increased. If we implement AM and Âφ separately using model monitoring, there is no influence on complexity, because the overall system explores only one path at a time. If we use model generating, we must first minimize the controlling automaton, or minimize the generated meta-composition, in order to reduce the number of states by eliminating redundancy within equivalence classes of states (e.g., t21 and t22 are equivalent). For example, the automaton of Fig. 8.13 can be reduced to that of Fig. 8.9 by removing t22 and t32. To automate the minimization, classic algorithms for minimizing finite automata can be used [74].
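As an illustration of that minimization step, a Moore-style partition refinement over a complete deterministic automaton can be sketched as follows (a stand-in for the classic algorithms cited as [74]; the toy DFA is ours, chosen so that two states are equivalent, as t21 and t22 are above):

```python
def minimize(states, alphabet, trans, accepting):
    """Moore partition refinement: return the partition of `states`
    into equivalence classes. trans: (state, symbol) -> state (total)."""
    # start from the accepting / non-accepting split
    part = [set(accepting) & set(states), set(states) - set(accepting)]
    part = [b for b in part if b]
    while True:
        new = []
        for block in part:
            groups = {}
            for s in block:
                # signature: which block each symbol leads to
                sig = tuple(next(i for i, b in enumerate(part)
                                 if trans[(s, a)] in b) for a in alphabet)
                groups.setdefault(sig, set()).add(s)
            new.extend(groups.values())
        if len(new) == len(part):
            return new  # stable partition: the equivalence classes
        part = new

# Toy DFA where s1 and s2 are equivalent:
trans = {("s0", "a"): "s1", ("s0", "b"): "s2",
         ("s1", "a"): "s3", ("s1", "b"): "s3",
         ("s2", "a"): "s3", ("s2", "b"): "s3",
         ("s3", "a"): "s3", ("s3", "b"): "s3"}
blocks = minimize(["s0", "s1", "s2", "s3"], "ab", trans, {"s3"})
```

Merging each final block into a single state yields the minimized automaton; here s1 and s2 end up in the same block.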
8.3.3 A Technical Comparison with Model Checking
It is easy to see that the two approaches use different techniques. The model checking approach
checks the emptiness of the intersection AM ∩ A¬φ , while the model monitoring approach
constructs the meta-composition AM ~· Âφ . That is, the model monitoring approach uses
a controlling automaton Âφ rather than the automaton A¬φ specifying ¬φ, and meta-composition rather than intersection and emptiness checking.
Model checking leads to revising the original design by manually analyzing the counterexample violating the new property. The cost is thus high, especially when the system is so large and complex that counterexamples cannot be effectively analyzed and revision is not easy.
Model monitoring uses the automaton of Fig. 8.3 and adds a controlling system implementing the automaton of Fig. 8.8. The global system in Fig. 8.9 satisfies the new property. Note that Âφ is usually much smaller than the overall system, so it is easier and cheaper to modify only the controlling component when the property specification changes at later stages of the life-cycle.
Model generating can automatically generate a new correct design (Fig. 8.10) by
computing the meta-composition of the automata in Fig. 8.3 and Fig. 8.8. Higher
efficiency can be achieved since the computation can be automated.
We remark here that Âφ of Fig. 8.8 is an alphabet-level controlling automaton, i.e., all transitions associated with the same symbol of AM always appear together on the transitions between two states of Âφ. In fact, all controlling automata translated from LTL formulas are alphabet-level controlling automata. The feature of alphabet-level control and the translation from LTL formulas are not necessary for constructing controlling automata. As we discussed in Section 6.1.3, controlling automata are more flexible and can be defined directly. These unnecessary features are used here only for the comparison with model checking in this example.
8.4 Related Work
A standard automata-theoretic model checking technique, developed by Vardi, Wolper,
Holzmann et al. [37, 123, 60, 73, 71], was recalled in Section 2.5. We provided an example
to show the differences between our approach and model checking. The comparison in
Section 8.3.3 shows that there is no semantic similarity between them, although they are
syntactically similar due to the formalism of Büchi automata.
In this section, we further compare the model monitoring approach with model checking
to show its merits and limitations.
Merits
In terms of principle, the essence of the model monitoring approach is the separation of functional requirements and property specifications (derived from dependability requirements), whereas model checking emphasizes integration. Thanks to this separation, one may achieve a lower cost of revising designs when the dependability requirements change, because only the controlling component needs modification.
In terms of technique, the model monitoring approach checks only one execution path at a time at runtime, while model checking verifies all possible execution paths at design time. Model monitoring is therefore often significantly more practical than model checking, since it explores one computational path at a time, whereas model checking suffers from the state explosion problem caused by its exhaustive search.
In terms of expressive power, as we discussed in Section 6.1.3, the theoretical results show that the BAC system is more powerful in specifying properties than model checking techniques that use LTL formulas or regular properties.
Limitations
There exist some unmonitorable properties that can be expressed by controlling automata but cannot be monitored at runtime, e.g., liveness properties stating that some state will occur in the future. This is due to the fact that we cannot decide the violation of these properties in a finite amount of time. For example, whether a state will appear infinitely often cannot be monitored. These unmonitorable properties should be ensured using model checking. Characterizing the class of monitorable properties is a promising and interesting direction for future work.
Complementarity
It is worth noting that the model monitoring approach and model checking have their own
advantages, and should be used complementarily.
The model monitoring approach separates the system model from the controlling system related to the property specification. Thus, the change and evolution of property specifications only result in modifying the design of the controlling system. Since the controlling system is often much smaller than the overall system, the cost of modification is lower. This approach is especially useful for ensuring changeful properties.
The model checking approach integrates properties into the system design through automated counterexample searching, manual analysis and revision. As a result, the change and evolution of property specifications bring a high cost of revising the design. However, the resulting system may be more efficient, since there is no additional communication between the controlled and controlling components. Thus, model checking is more suitable for verifying invariable properties.
Therefore, invariable properties should be ensured through model checking, while changeful ones should be implemented through the model monitoring approach. Note that classifying properties into the two categories (i.e., invariable and changeful) is a trade-off process. The classification depends on the individual system and the experience of the designers. Discussions on this issue are beyond the scope of this thesis.
Furthermore, model checking could be used to verify the global control system constructed via the model monitoring approach, in order to increase our confidence in the
design of control systems.
8.5 Conclusion
In this chapter, we proposed the model monitoring approach, which is based on the theory
of control systems. The key principle of the approach is “property specifications as controllers”. As a merit, we showed that it can fill the gap between the change and evolution
of property specifications and the traditional development process using model checking.
One important future work is to characterize monitorability. We would then be able to decide whether a given property is monitorable and whether it can be ensured via the model monitoring approach. Another research direction is to extensively explore other possible applications of the approach, which may in turn contribute to the further development of the approach and its underlying theory.
Chapter 9
Applications of the Model Monitoring Approach
In this chapter, we discuss two applications of the model monitoring approach. The first
one concerns safety-related systems, which are automata-based systems. The second one
concerns the UML (Unified Modeling Language), which is a grammar system. The two applications demonstrate the power of the proposed approach.
9.1 Theoretical Foundation of Safety-Related Systems in IEC 61508
In this section, after an introduction to the standard IEC 61508 and the SRS (Safety-Related System), we propose the concept of functional validity of the SRS, which concerns whether the safety functions realized by the SRS can really prevent accidents and recover the system from hazardous states, provided the expected safety integrity level is reached. A generic technical methodology is presented to achieve the functional validity of the SRS, and industrial experiences in designing functionally valid SRS are summarized. Finally, we will show that model monitoring is the theoretical foundation of safety-related systems, and that checking functional validity is essentially checking an (ω-)AC system, e.g., a BAC system.
9.1.1 IEC 61508 and Safety-Related Systems
The international standard IEC 61508 [77, 114, 15] provides a generic process for electrical, electronic, or programmable electronic (E/E/PE) safety-related systems to achieve an acceptable level of functional safety. The principles of IEC 61508 have been recognized as fundamental to modern safety management [114], and have thus gained widespread acceptance and practical use in many countries and industry sectors [53].
Like other safety standards (e.g., DO-178B [69]), IEC 61508 gives recommendations on best practices such as planning, documentation, verification and safety assessment, rather than concrete technical solutions. Thus, it is a generic standard for safety management throughout the life-cycle, rather than a system development standard. More sector-specific and application-specific standards can be derived from it, such as IEC 61511 for the process industry [57], IEC 62061 for the machinery industry, IEC 61513 for nuclear plants, EN 50126 for European railways, and ISO 26262 for automotive safety.
As shown in Fig. 9.1, the first premise of the standard is that there is a piece of equipment intended to provide a function, and a system which controls it. The equipment is called
an Equipment Under Control (EUC). The Control System (CS) may be integrated with or remote from the EUC. A fundamental tenet of the standard is that, even if the EUC and the CS are reliable, they are not necessarily safe. This is true of numerous systems implementing a hazardous specification: they may pose risks of misdirected energy which result in accidents.
The second premise is that Safety-Related Systems (SRS) are provided to implement
the expected safety requirements, which are specified to reduce the risks and achieve functional safety for the EUC. The SRS may be placed within or separated from the CS. In
principle, their separation is preferred.
Figure 9.1: The Architecture of Systems with the SRS
An SRS may comprise subsystems such as sensors, logic controllers, communication channels connected to the CS, and actuators. Usually, an SRS receives two types of input: the values of safety-related variables monitored by its sensors, and the messages sent by the CS. The SRS then performs a computation to decide whether the system is in a safe state. According to the result, it may actuate safety functions through two types of output: sending commands directly to its actuators, or sending messages to the CS.
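A single decision cycle of such an SRS can be sketched as follows (a hypothetical illustration anticipating the reactor example of Section 9.1.3; the variable names and the open-the-water-valve reaction are our assumptions, not prescribed by IEC 61508):

```python
def srs_step(sensors, msg_from_cs):
    """One on-demand cycle of a hypothetical SRS: read the monitored
    safety-related variables and the CS message, decide whether the
    state is safe, then command actuators and reply to the CS."""
    # hazardous: abnormal signal while catalyst flows without cooling
    unsafe = (msg_from_cs["abnormal"] and sensors["cata_open"]
              and not sensors["water_open"])
    actuator_cmds = ["open_water_valve"] if unsafe else []
    reply_to_cs = {"res": not unsafe}  # res == False signals a violation
    return actuator_cmds, reply_to_cs

cmds, reply = srs_step({"cata_open": True, "water_open": False},
                       {"abnormal": True})
```

The two outputs of this sketch correspond to the two output types above: direct actuator commands and a message back to the CS.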
9.1.2 Functional Validity of Safety-Related Systems
Let us consider two important problems that occur in industrial practice.
The first one questions the correctness of the overall and allocated safety requirements. According to IEC 61508, safety requirements consist of two parts: safety functions and the associated safety integrity levels (SIL). The two elements are equally important in practice, because the safety functions determine the maximum theoretically possible risk reduction [56]. However, the standard focuses more on the realization of integrity requirements than on function requirements. As a result, the standard only indicates that the product achieves a given integrity level, not whether it implements the right safety requirements.
Second, the standard does not prescribe exactly how the verification of the safety functions of an SRS could technically be done. On one hand, the standard calls for avoiding faults in the design phase, since the ALARP principle (As Low As Reasonably Practicable) is adopted for determining the tolerability of risk. Indeed, systematic faults are often introduced during the specification and design phases. Unlike random hardware failures, the likelihood of systematic failures cannot easily be estimated. On the other hand, the standard only recommends a process and a list of techniques and measures during the design phase to avoid the introduction of systematic faults, such as computer-aided design, formal methods (e.g., temporal logic), assertion programming, and recovery (see Parts 2 and 3 of [77]). The detailed
use of these techniques is left to the designer.
As a result, the problem of functional validity arises. Functional validity concerns whether the safety functions realized by the SRS can really prevent accidents and recover the system from hazardous states, provided the expected safety integrity level is reached. People are searching for a generic technical methodology to achieve functional validity.
In fact, with the introduction of the SRS, it becomes much harder to ensure the safety of the overall system, due to complex interactions and the resulting huge state space. Unlike the case of a single CS, humans are no longer capable of manually analyzing the behaviors of the overall system. Therefore, a computer-aided method is needed.
In the sequel, we will propose such a generic technical methodology (or framework) for designing functionally valid SRS. To the best of our knowledge, we are the first in the literature to consider a technical solution to functional validity. We focus on systems that operate on demand (i.e., discrete-event systems). The methodology is based on computer-aided design in association with automated verification tools (e.g., SPIN).
9.1.3 Example: Chemical Reactor
As an example, consider an accident that occurred in a batch chemical reactor in England
[80, 87]. Figure 9.2 shows the design of the system. The computer, which served as a
control system, controlled the flow of catalyst into the reactor and the flow of water for
cooling off the reaction, by manipulating the valves. Additionally, the computer received
sensor inputs indicating the status of the system. The designers were told that if an
abnormal signal occurred in the plant, they were to leave all controlled variables as they
were and to sound an alarm.
Figure 9.2: Reactor Control System
Figure 9.3: Reactor Control System with SRS
On one occasion, the control system received an abnormal signal indicating a low oil level in a gearbox, and reacted as the functional requirements specified: it sounded an alarm and maintained all the variables in their present condition. Unfortunately, a catalyst had just been added into the reactor, but the control system had not yet opened the flow of cooling water. As a result, the reactor overheated, the relief valve lifted, and the contents of the reactor were discharged into the atmosphere.
We believe that a safety-related system could have been used to avoid the accident. Figure 9.3 shows the role of the SRS in the overall system. It receives signals from additional sensors, and communicates with the CS. The key issue is how to specify and design the SRS and prove its functional validity, i.e., that the SRS is really effective in the hazardous context. We illustrate a methodology based on computer-aided design in association with the SPIN model checker.
The SPIN (Simple Promela INterpreter) model checker is an automated tool for verifying the correctness of asynchronous distributed software models [73, 71, 72]. System
models and correctness properties to be verified are both described in Promela (Process
Meta Language). This chapter is based on SPIN Version 5.2.5, released on 17th April
2010.
We will illustrate two main steps of the methodology: modeling the CS, and modeling
the SRS.
Modeling Control Systems. The first step is to analyze the behaviors of the CS by modeling it in the Promela language. Listing 9.1 shows the model. The CS scans the status of the reactor, and then manipulates the valves according to the status.
Listing 9.1: The Promela Program for Reactor Control System
 1  #define sa ((abnorm && !cata) || (abnorm && cata && water))
 2  #define fcon (status==nocata || status==encata)
 3  mtype = { abnormal, nocata, encata, nowater, enwater };
 4  mtype status = nocata;  /* status of the reactor */
 5  bool cata   = false;    /* whether catalyst flow is open */
 6  bool water  = false;    /* whether water flow is open */
 7  bool abnorm = false;    /* whether abnormal signal occurred */
 8
 9  /* random simulation of scanning the status */
10  inline scan() {
11    if
12    :: true           -> status = abnormal;
13    :: cata  == false -> status = nocata;
14    :: cata  == true  -> status = encata;
15    :: water == false -> status = nowater;
16    :: water == true  -> status = enwater;
17    fi;
18  }
19
20  /* possible actions of the system */
21  inline opencata()   { cata=true;   printf("open cata -> "); }
22  inline closecata()  { cata=false;  printf("close cata -> "); }
23  inline openwater()  { water=true;  printf("open water -> "); }
24  inline closewater() { water=false; printf("close water -> "); }
25  inline alarm()      { abnorm=true; printf("alarm -> "); }
26  inline ending()     { printf("ending -> "); }
27
28  active proctype control_system() {
29    /* Initial actions omitted */
30    do
31    :: scan();
32       if
33       :: status == abnormal -> alarm(); goto END;
34       :: else -> if
35                  :: status==nocata  && cata ==false -> opencata();
36                  :: status==encata  && cata ==true  -> closecata();
37                  :: status==nowater && water==false -> openwater();
38                  :: status==enwater && water==true  -> closewater();
39                  :: else -> skip;
40                  fi;
41       fi;
42    od;
43  END: ending();
44    assert(sa);
45  }
Lines 1-2 define macros for specifying correctness properties.
Lines 3-7 define the variables. Note that status denotes the detection of events (e.g.,
no catalyst or enough catalyst in the reactor), and controls the flow of the computation,
while the other three variables save the state information.
Lines 10-18 simulate the input events to the CS. In the verification mode, SPIN can only treat closed systems without user input. Therefore, the status of the reactor is generated in a random manner, i.e., the if statement (lines 11-17) randomly chooses one of the alternative statements whose condition holds. Note that this matches exactly what happens in practice, since the status of the reactor is determined nondeterministically by the reaction and the environment, not by the user.
Lines 21-26 define the primitive actions of the CS. To obtain an efficient model for verification, the real code for manipulating physical equipment is replaced by printf statements, which can be used for observing the execution traces of the CS.
Lines 28-45 specify the model of the CS. The non-critical code (irrelevant to the property of concern, e.g., the initialization code) is omitted or simplified to produce an efficient model. In line 44, we check whether the system satisfies the safety assertion sa (cf. line 1), which means that when an abnormal signal occurs, either the flow of catalyst must be closed, or, if the flow of catalyst is open, the flow of water must also be open. The violation of this assertion may result in the mentioned accident.
In order to see whether the model describes exactly the behaviors of the CS, we run
the model through the simulation mode of SPIN. One of the outputs is as follows.
$ spin reactor.pml    # Linux command
open cata -> alarm -> ending ->
spin: line 44 "reactor.pml", Error: assertion violated
The execution trace shows that the alarm is sounded after opening the flow of catalyst,
then the safety assertion is violated. This trace characterizes exactly what happened in
the accident. It is worth noting that, due to the uncertainty of the value of status (c.f.
lines 11-17), the assertion may be satisfied in another run. This is the reason why the
plant had functioned well before the accident. As a result, in the simulation mode, it is
possible that an existing fault will not be detected after numerous runs.
To systematically check all the state space, we use the verification mode, which needs
correctness properties specifying hazard-free situation.
One possibility is to use assertions, as we did in line 44. Note that an assertion can only check the state at a single position in the execution (i.e., line 44), not at the remaining positions. SPIN checks the assertion in the verification mode as follows.
$ spin -a reactor.pml
$ gcc pan.c
$ ./a.out
State-vector 16 byte, depth reached 12, errors: 1
21 states, stored
0 states, matched
21 transitions (= stored+matched)
After examining 21 states, SPIN detected the unsafe state. We can replay the counterexample by a guided (trail) simulation.
$ spin -t reactor.pml
open cata -> alarm -> ending ->
spin: line 44 "reactor.pml", Error: assertion violated
The result shows that the unsafe state can really be reached.
Another alternative for specifying correctness properties is an LTL formula. For this example, the formula is φ = G(cata → F water), where G means globally and F means future [31]. It means that whenever the flow of catalyst is opened, the system will have the chance to open the flow of water in the future. Of course, this is based on a reasonable assumption that the status will eventually be nowater when there is not enough water in the reactor (i.e., there is a reliable sensor). This assumption is expressed by the fairness condition GF !fcon. SPIN checks the LTL formula in the verification mode as follows ([] denotes G, <> denotes F).
$ spin -a -f ’![](cata -> <>water)&&[]<>!fcon’ reactor.pml
$ gcc pan.c
$ ./a.out -A -a    # disable assertion, check stutter-invariant
State-vector 20 byte, depth reached 29, errors: 1
23 states, stored
1 states, matched
24 transitions (= stored+matched)
After examining 23 states, SPIN detected the unsafe state. We can replay the counterexample by a guided (trail) simulation.
$ spin -t reactor.pml
open cata -> alarm -> ending ->
spin: trail ends after 30 steps
Since we already know that the CS may cause a hazardous state, we must make sure that the specified properties can detect the hazardous state; otherwise the properties are not right. We then introduce an SRS to control potentially hazardous executions.
Modeling Safety-Related Systems. The second step is to construct the model of the SRS. We reuse and extend the established model of the CS, and derive an accurate model describing the behaviors of the SRS. Listing 9.2 shows the models. The model of the SRS receives the messages sent by the CS and scans the values of the variables monitored by its sensors, then computes whether the system is safe, and sends a message back to the CS.
Listing 9.2: The Promela Program for Reactor Control System with the SRS
 1   #define sa ((abnorm && !cata) || (abnorm && cata && water))
 2   #define fcon (status == nocata || status == encata)
 3   mtype = { abnormal, nocata, encata, nowater, enwater };
 4   mtype status = nocata;  /* status of the reactor */
 5   bool cata = false;      /* whether catalyst flow is open */
 6   bool water = false;     /* whether water flow is open */
 7   bool abnorm = false;    /* whether abnormal signal occurred */
 8
 9   /* define safety-related variables, message structure */
10   typedef SRV { bool water; }
11   typedef MSG { bool water; bool abnorm; bool res; }
12   chan ch = [0] of { MSG };  /* message channel */
13
14   /* random simulation of scanning the status */
15   inline scan() {
16     if
17     :: true           -> status = abnormal;
18     :: cata == false  -> status = nocata;
19     :: cata == true   -> status = encata;
20     :: water == false -> status = nowater;
21     :: water == true  -> status = enwater;
22     fi;
23   }
24
25   /* possible actions of the system */
26   inline opencata()  { cata = true;  printf("open cata -> "); }
27   inline closecata() { cata = false; printf("close cata -> "); }
28   inline openwater()
29   { water = true;  cs2srs(); printf("open water -> "); }
30   inline closewater()
31   { water = false; cs2srs(); printf("close water -> "); }
32   inline alarm()  { abnorm = true; cs2srs(); printf("alarm -> "); }
33   inline ending() { printf("ending -> "); }
34   inline cs_epro()
35   { printf("error processing ( ");
36     printf("water opened ");
37     water = true;
38     printf(") -> "); }
39
40   /* communication between CS and SRS */
41   inline cs2srs() {
42     if
43     :: abnorm == true  -> msg.abnorm = true;
44     :: abnorm == false -> msg.abnorm = false;
45     fi;
46     if
47     :: water == true  -> msg.water = true;
48     :: water == false -> msg.water = false;
49     fi;
50     msg.res = true;
51     ch ! msg;
52     ch ? msg;
53     if
54     :: msg.res == false -> cs_epro();
55     :: else -> skip;
56     fi;
57   }
58
59   active proctype control_system() {
60     /* Initial actions omitted */
61     MSG msg;
62     do
63     :: scan();
64        if
65        :: status == abnormal -> alarm(); goto END;
66        :: else -> if
67                   :: status == nocata  && cata == false  -> opencata();
68                   :: status == encata  && cata == true   -> closecata();
69                   :: status == nowater && water == false -> openwater();
70                   :: status == enwater && water == true  -> closewater();
71                   :: else -> skip;
72                   fi;
73        fi;
74     od;
75   END: ending();
76     assert(sa);
77   }
78
79   /* ***** The Safety-Related System ***** */
80   /* random simulation of scanning the values of variables */
81   inline srs_scan() {
82     if
83     :: srv.water = true;
84     :: srv.water = false;
85     fi;
86   }
87
88   /* compute whether the system is safe */
89   inline srs_compute() {
90     if
91     :: msg.abnorm == true && msg.water == false ->
92          msg.res = false;
93     :: msg.abnorm == true && srv.water == false ->
94          msg.res = false;
95     :: else -> msg.res = true;
96     fi
97   }
98
99   active proctype srs() {
100    /* Initial actions omitted */
101    MSG msg;
102    SRV srv;
103    do
104    :: true ->
105  end_srs: ch ? msg;
106       srs_scan();
107       srs_compute();
108       ch ! msg;
109    od;
110  }
Lines 10-11 define the inputs to the SRS. The type SRV defines a set of safety-related
variables, whose values could be obtained from additional sensors outside the CS and
managed by the SRS. For this example, SRV monitors only whether the water flow is open.
The type MSG defines the structure of the messages communicated between the CS and
the SRS. Line 12 defines a rendezvous channel for the message.
In lines 28-32, for each primitive action that modifies the values of the variables monitored by the SRS, communication between the CS and the SRS is inserted. The communication module (lines 41-57) reads the information needed, sends a message to the SRS, and then receives a response. The module analyzes the result in the returned message. If it indicates an unsafe state, the system calls the error processing module to recover from the hazardous state.
The error processing module (lines 34-38) uses the information in the returned message to decide on the recovery actions. The process could change the values of certain variables (e.g., line 37) after manipulating physical equipment (e.g., in line 36, the printf statement abstracts the action of opening the valve). Note that in more complex systems the returned message could contain more information (i.e., not only a Boolean result), in order to provide the error processing module with sufficient information to analyze the current state.
Lines 99-110 define the model of the SRS. It waits for the message sent by the CS, then scans the values of the safety-related variables (lines 81-86), and computes whether the system is safe using the message and the safety-related variables (lines 89-97). Finally, the SRS sends a response to the CS. Note that the computations in srs_scan and srs_compute could differ between systems; embedded C code could be used to implement more complex functions. In any case, the same methodology and framework apply.
9.1. Theoretical Foundation of Safety-Related Systems in IEC 61508
In order to see whether the model exactly characterizes the behavior of the SRS, we run the model in the simulation mode of SPIN. One of the outputs is as follows.
open cata -> error processing ( water opened ) -> alarm -> ending ->
The execution trace shows that the safety assertion is not violated, which is exactly what we expect in order to avoid the mentioned accident.
Then we check the assertion in the verification mode, and no error is found.
State-vector 40 byte, depth reached 208, errors: 0
409 states, stored
57 states, matched
466 transitions (= stored+matched)
We may also check the LTL formula in the verification mode.
State-vector 44 byte, depth reached 401, errors: 0
707 states, stored (979 visited)
658 states, matched
1637 transitions (= visited+matched)
Zero errors mean that the assertion and the LTL property always hold during execution, i.e., no unsafe state can be reached. Therefore, we conclude that the established model of the SRS successfully avoids the accident, i.e., it is a "functionally valid" SRS. Note that, with a different computation in srs_compute (e.g., one that always sets msg.res to true), the SRS may not be functionally valid. That is why we need such a methodology to ensure functional validity.
It is worth noting that the combination of the CS and the SRS has 707 states and 1637 transitions (the more complex the overall system is, the larger the state space). A human is not able to analyze the correctness of such a complicated system by hand. As a result, computer-aided design may be the only choice for developing a functionally valid SRS.
9.1.4 Methodology for Designing Functionally Valid SRS
In this section, we propose a generic methodology for designing functionally valid safety-related systems. As mentioned, the state space of the overall system comprising the CS and the SRS is much larger than that of the CS or the SRS alone. As a result, manual analysis and design of the SRS are untrustworthy and error-prone. This methodology uses computer-aided design in association with automated verification tools, and can thus improve our confidence in the functional validity of the SRS design.
We try to list exhaustively all the key issues that we know of in the design process, in order to guide practice. Due to the wide application of the standard and of SRSs, the reader may encounter different situations in various projects and industry sectors. Therefore, some adaptations will be necessary for a specific project.
Generally, there are three steps for developing a functionally valid SRS: modeling the CS, modeling the SRS, and implementing the SRS. Here we only focus on the first two steps (i.e., the design process), which lead to a functionally valid design of the SRS, although it is worth noting that faults may also be introduced in the implementation process.
Modeling Control Systems. The first step is to construct the model of the CS. The
results of this step are a Promela program for the CS and the correctness properties. We
list some key issues as follows.
Chapter 9. Applications of the Model Monitoring Approach
1. The first task is to derive an accurate and efficient abstraction of the behaviors of the CS. Some typical issues are the following:
(a) Define variables, both for computation and for checking. Some variables are used for computation, that is, for implementing the control flow, behaviors and semantics of the system. Other variables are used for representing the state of the system; they do not contribute to the functionality of the system, and are only used in the correctness properties to check the property of the indicated state.
It is worth noting that the size of variables must be carefully chosen. The principle is "as small as possible": the model checker will produce a huge state space, in which each state contains all the defined variables. As a result, restricting a variable from int to byte saves considerable memory during checking.
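As a small sketch of this principle (the declarations below are illustrative, not part of the reactor model), narrower types directly shrink the state vector that SPIN stores for every reached state:

```promela
/* Sketch: prefer the smallest type that fits the value range.       */
byte counter = 0;         /* 1 byte per state vector, range 0..255   */
bool valve_open = false;  /* a flag costs 1 bit/byte, not a full int */
/* int counter = 0;  -- would cost 4 bytes per state for no benefit  */
```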
(b) Simulate the random inputs to the CS. In the verification mode, the Promela model is a closed system; in other words, it cannot receive user inputs. As a result, we must generate the inputs within the program.
The best way is to generate the inputs randomly, i.e., to nondeterministically choose one member from the set of all possible values. The advantage of this method is that it simulates the uncertainty of the inputs, that is, the fact that we do not know when a specific input will occur. Consider, for example, a sensor signal that is determined by the environment.
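Such an environment stub can be sketched as follows (the names `signal` and `read_sensor` are illustrative, not part of the reactor model); SPIN explores both branches of the nondeterministic choice during verification:

```promela
/* Sketch: a nondeterministic environment stub. */
bool signal = false;
inline read_sensor() {
  if
  :: signal = true    /* the environment may raise the signal ... */
  :: signal = false   /* ... or not; both cases are explored      */
  fi
}
```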
(c) Reasonably simplify the computation of the CS. Due to the size and complexity of the real system, an automated model checker may not even be able to produce a result in an acceptable time. Obviously, the huge size contributes a huge number of states, and the complexity contributes a huge number of transitions. As a result, the state space may be much larger than the available memory, and the model checker will fail to complete the verification.
One solution is to provide a coarser abstraction of the CS. That is, we omit some non-critical computations, e.g., the initialization of the system. Furthermore, some manipulations of physical equipment can be expressed with only a printf statement, which does not increase the size of the state space.
Another solution is to decompose the system into several parts, and check these parts one by one. When checking a single part, we assume that the other parts are correct. It is worth noting that the decomposition is relative to the properties to check: when checking a certain property, all the functions relevant to that property must be put into the same part.
(d) Use embedded C code if necessary. Due to the complexity of embedded C code and the lack of syntax checking, it is mainly used for automated model extraction in SPIN. However, the strong expressive power of C code remains a tempting feature. Thus the recommendation is made only "if necessary".
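The following sketch illustrates SPIN's embedded-C primitives (the fragment is illustrative and not part of the reactor model; the variable `level` and the proctype `tank` are invented for the example):

```promela
/* Sketch of SPIN's embedded-C primitives. */
c_state "double level" "Global"    /* track a C variable in the state vector */
active proctype tank() {
  c_code { now.level = 0.0; };
  do
  :: c_expr { now.level < 10.0 }  -> c_code { now.level = now.level + 0.5; }
  :: c_expr { now.level >= 10.0 } -> break
  od
}
```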
(e) Simplify or eliminate the code for controlling equipment. Usually we assume that the code implementing primitive manipulations, e.g., opening the water flow, is correct.
The criteria for judging the quality of the model are mainly accuracy and efficiency. Accuracy means that the model behaves exactly like the real CS, while efficiency means that the model is economical, e.g., the program should use as few variables and statements, and as little memory, as possible.
2. The second task is to derive correctness properties, i.e., assertions and LTL formulas.
(a) Assertions are used to check a property at a specific position in the program. Obviously, their expressive power is limited. Thus, if we want to check a property over the whole state space, we must use LTL formulas.
(b) LTL formulas are used to check all the states of the system. Obviously, LTL formulas are more powerful. However, it is also worth noting that LTL formulas can considerably enlarge the state space. Thus we suggest using them only when assertions are not able to express a property.
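The contrast can be sketched as follows (an illustrative fragment, not the reactor properties; the names `ok`, `p` and `always_ok` are invented for the example):

```promela
/* Sketch: an assertion checks one point; an LTL formula checks all runs. */
bool ok = true;
active proctype p() {
  assert(ok);            /* checked only when control reaches here */
}
ltl always_ok { [] ok }  /* checked over every reachable state     */
```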
3. Some experience-based advice.
(a) Check the exactness of the established model using the simulation mode. The printf statement can be used to output information about the state of the CS. Simulating the model sufficiently many times shows whether the model works as expected. Note that the criterion "sufficiently many times" is due to the uncertainty of the inputs.
(b) Check the exactness of the correctness properties using the verification mode. If we aim at creating or modifying an SRS for identified risks, we must make sure that the correctness properties can detect the errors caused by the identified risks.
Modeling Safety-Related Systems. The second step is to construct the model of the SRS. The results of this step are a Promela program for the SRS and the code for communication and error processing in the modified model of the CS. We list some key issues as follows.
1. The premise is that we have the model of the CS and reuse it, e.g., the result of the previous step.
2. The first task is to derive an accurate and efficient abstraction of the behaviors of the SRS. Some typical issues are the following:
(a) Define the scope of input and output of the SRS. There are two types of input:
the messages sent by the CS and the values of safety-related variables sent by
the sensors. The SRS may use only one of the two types, or both of them.
There are two types of output: the messages sent to the CS and the direct
manipulation of the equipment. Also, the SRS may use only one of the two
types, or both of them.
(b) Define safety-related variables. A safety-related variable, which stores a value sent by sensors, is used to collect additional information beyond the scope of the CS, or to collect a specific piece of information within the scope of the CS in order to increase the reliability of that information. Note that more safety-related variables mean a higher implementation cost.
(c) Choose the type of communication between the CS and the SRS. The SPIN model checker supports two types of communication: rendezvous and buffered. In order to process demands from the CS to the SRS as soon as possible, and to return information from the SRS in time for the CS to decide its next action, we usually choose rendezvous communication.
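The two channel kinds can be sketched as follows (an illustrative fragment; the channel names are invented for the example):

```promela
/* Sketch: the two channel kinds SPIN offers. */
typedef MSG { bool res; }
chan sync_ch = [0] of { MSG };  /* rendezvous: the sender blocks until a
                                   receiver takes the message            */
chan buf_ch  = [4] of { MSG };  /* buffered: up to four queued messages  */
```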
(d) Define the structure of the message. This depends on the information needed by the CS and the SRS. The values of the variables monitored by the SRS should be sent from the CS to the SRS. The message should also contain all the information needed by the CS to deal with risks. In the simplest case, this is a Boolean result indicating whether the system is safe or not. Generally, however, the CS needs more information to determine why the system is not safe, so that it can activate the corresponding error processing functions.
(e) Define message channels. Usually, one channel is enough for the communication
between the CS and the SRS.
(f) Simulate the scanning of the values of safety-related variables. This is similar to the case of simulating the random inputs to the CS: the random simulation expresses exactly the fact that the values are nondeterministically decided by the environment.
(g) Reasonably simplify the computation of the SRS. (Similar to the case of the CS.)
(h) Use embedded C code if necessary. (Similar to the case of the CS.)
3. The second task is to determine the position of the communication between the CS and the SRS. Generally, the usual location is between the assignment of a key variable monitored by the SRS and the manipulation of the physical equipment. Thus, the SRS can check whether the system will remain safe if the next manipulation is executed. If not, the SRS can send a message that activates the error processing functions.
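This placement mirrors lines 28-29 of Listing 9.2 and can be sketched as follows:

```promela
/* Sketch of the insertion point: the safety check sits between
   updating the monitored variable and acting on the hardware.   */
inline openwater() {
  water = true;              /* 1. assign the monitored variable  */
  cs2srs();                  /* 2. ask the SRS: still safe?       */
  printf("open water -> ");  /* 3. abstracted physical action     */
}
```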
4. The third task is to define the error processing functions. This step differs between projects, because it is based on the requirements of the specific system. In fact, the correctness of error processing also plays an important role in the correctness of the overall system and in the functional validity of the SRS.
5. Some experience-based advice.
(a) Check the exactness of the established model using the simulation mode. (Similar to the case of the CS.)
(b) Check the exactness of the correctness properties using the verification mode. We must make sure that the overall system (the CS together with the SRS) is safe, i.e., satisfies the specified correctness properties. If we aim at creating or modifying an SRS for identified risks, we must make sure that the overall system avoids the previously detected errors, now that the SRS component and the additional error processing functions have been added.
If errors are detected, we must review the design of the SRS, and also the exactness of the correctness properties (because they may specify a semantics different from what we expect).
Implementing the SRS. We implement the SRS using the established model and computation. Note that faults may also occur in this step. Since numerous guidelines exist in industrial sectors to handle this issue, a discussion of reliable implementation is beyond the scope of this thesis.
Figure 9.4: The Kripke Model M of Reactor
Figure 9.5: The Büchi Automaton AM of Reactor
Figure 9.6: The Controlling Automaton Â Modeling the SRS
Figure 9.7: The Meta-composition C = AM ~· Â
9.1.5 Model Monitoring as the Theoretical Foundation
From the above example, we can see that the SRS controls the behavior of the CS at runtime. If the CS tries to perform some undesired action, the SRS can raise an alert, block the action, or recover the system from hazardous states. Intuitively, this suggests a connection with model monitoring.
Let us consider the models of the CS and the SRS, i.e., the Promela programs. A
Kripke model M in Fig. 9.4 can be extracted from the Promela program of the CS, and
can be translated into a Büchi automaton AM in Fig. 9.5. The automaton violates the
specified properties, since hazardous states can be reached. For example, q6 will be reached
after opening the flow of catalyst (p2 ) and sounding an alarm (p12 ), which can result in
the mentioned accident.
A controlling automaton Â in Fig. 9.6 can be extracted from the semantics of the model of the SRS, where δ(φ) denotes all transitions labeled with a truth assignment satisfying φ. For example, δ(w ∨ ¬a) = δ − {p10, p11, p12, p13}.
The automata AM and Â constitute a BAC system. The meta-composition of the two components results in the automaton C = AM ~· Â in Fig. 9.7. The global system satisfies the property, since the hazardous states have been eliminated.
We also observe that checking the functional validity of the SRS, which is equivalent to verifying the global system (the CS plus the SRS), is essentially checking the BAC system C = AM ~· Â.
In fact, numerous industrial systems can be modeled as automata. Therefore, it is easy to see that the (ω-)automaton control system ((ω-)AC system) and model monitoring are exactly the theoretical foundation of safety-related systems.
Furthermore, it is worth noting that checking functional validity is essentially checking an (ω-)AC system. We have observed the equivalence between the CS and the controlled automaton, and between the SRS and the controlling automaton. Therefore, the meta-composition of an (ω-)AC system satisfies a property if and only if the global system of the CS and the SRS is functionally valid. That is, if verification of the (ω-)AC system does not find any fault, then the functional validity of the constructed SRS is proved.
9.1.6 Conclusion
In this section, we proposed the concept of functional validity of the SRS, and a generic technical methodology (or framework) for designing functionally valid SRSs. The methodology is based on computer-aided design in association with automated verification tools. To the best of our knowledge, we are the first in the literature to consider a technical solution to the functional validity of the SRS. Furthermore, we showed that model monitoring is the theoretical foundation of safety-related systems, and that checking functional validity is essentially checking an (ω-)AC system.
9.2 Formalizing Guidelines and Consistency Rules of UML
Guidelines and consistency rules of UML are used to control the degrees of freedom provided by the UML language, in order to prevent faults. However, guidelines and consistency rules impose only informal restrictions on the use of the language, which makes checking difficult. In this section, we consider these problems from a language-theoretic view. Guidelines and
consistency rules are formalized as controlling grammars that control the use of UML, i.e., the derivations using the grammar of UML. This approach can be implemented as a parser, which can automatically verify the rules on a UML user model in XMI format. A comparison with related work shows our contribution: a generic, metamodel-independent, syntax-based approach that checks language-level constraints at compile time.
This section is organized as follows. In the first three subsections, we introduce the UML specifications and the transformation to XML documents, the guidelines and consistency rules, and the grammar of UML in XMI, respectively. Then we illustrate how to formalize guidelines and consistency rules using Leftmost-derivation-based Grammar Control Systems (LGC systems). Finally, we discuss the implementation issues and related work of this approach.
9.2.1 Unified Modeling Language
The Unified Modeling Language (UML) is a visual modeling language developed by the Object Management Group (OMG) [108, 109]. UML has emerged as the software industry's
dominant modeling language for modeling, specifying, constructing and documenting the
artifacts of systems [100, 12]. Numerous commercial UML CASE tools are available, e.g.,
Rational Rose, Altova UModel, MagicDraw, Visual Paradigm. Several open-source tools
also exist, e.g., ArgoUML, StarUML.
Model, Metamodel, Meta-metamodel
A model is an instance of a metamodel (or language). For example, UML and the Common
Warehouse Metamodel (CWM) are metamodels, and a UML user model is an instance of
the UML metamodel.
A metamodel is an instance of a meta-metamodel (or metalanguage). A meta-metamodel is typically more compact than the metamodels it describes, and is often used to define several metamodels. For instance, the Meta Object Facility (MOF) is a meta-metamodel, since UML and CWM are both instances of MOF.
The specifications of UML and MOF include complex import and merge relationships
between their packages, as shown in Fig. 9.8.
UML 2 Specification
The UML specification is defined using a metamodeling approach, i.e., a metamodel is
used to specify the model that comprises UML. The UML specification is organized into
two volumes. The UML Infrastructure defines the foundational language constructs required for UML [108]. The UML Superstructure defines the user level constructs required
for UML [109].
The infrastructure of UML is defined by the package InfrastructureLibrary. The infrastructure has three objectives: (1) defines a meta-metamodel (i.e., metalanguage) core that
can be reused to define a variety of metamodels (i.e, languages) such as UML, MOF, CWM,
and other emerging OMG metamodels; (2) architecturally aligns UML, MOF, and XMI
so that model interchange is fully supported; (3) allows customization of UML through
Profiles and creation of new metamodels (i.e, languages) based on the same metalanguage
core as UML, such as SysML.
Figure 9.8: Packages of UML and MOF
The package InfrastructureLibrary consists of the packages Core and Profiles.
The Core package is a complete metamodel particularly designed for high reusability.
Note that the Core package is the architectural kernel of MDA (Model Driven Architecture). That is, MDA metamodels (e.g., UML, MOF, CWM) reuse all or parts of the Core package, which allows them to benefit from the abstract syntax and semantics that have already been defined. In order to facilitate reuse, the Core package is subdivided into four packages:
• The package PrimitiveTypes simply contains a few predefined types that are commonly used when metamodeling, e.g., in UML and MOF.
• The package Abstractions contains abstract metaclasses that are intended to be
further specialized or reused by many metamodels. It is also subdivided into several
smaller packages.
• The package Basic represents a few constructs that are used as the basis for the XMI produced for UML, MOF and other metamodels based on the InfrastructureLibrary.
• The package Constructs contains concrete metaclasses for object-oriented modeling.
It is in particular reused by both MOF and UML, and aligns the two metamodels.
As its second capacity besides reusability, the Core package is used to define the modeling constructs used to create metamodels, through the instantiation of metaclasses in the InfrastructureLibrary. For example, the metaclasses in the InfrastructureLibrary instantiate the elements of UML, MOF, CWM and indeed the InfrastructureLibrary itself (i.e., it is self-describing, or reflective).
The Profiles package depends on the Core package, and defines the mechanisms used to tailor existing metamodels towards specific platforms (e.g., C++, CORBA, or Java) or domains (e.g., real-time, business objects). Its primary target is UML, but it may also be
used with any metamodel instantiated from the common Core package.
The superstructure of UML is defined by the package UML, which is divided into a number of packages. The Kernel package is at the very heart of UML, and the metaclasses of the other packages depend on it.
The Kernel package is primarily reused from the InfrastructureLibrary by package merge. Indeed, the Kernel package reuses the Constructs and PrimitiveTypes packages, and adds further capabilities to the modeling constructs that are not necessary for reuse or alignment with MOF.
MOF 2 Specification
The MOF specification consists of the packages Essential MOF (EMOF) and Complete
MOF (CMOF) [105], which are both constructed by merging the UML Infrastructure
Library packages (mainly the Core package) [108] and adding additional packages.
• The EMOF model is defined as a kernel metamodeling capability. The value of EMOF is that it provides a framework for mapping MOF models to implementations such as JMI and XMI for simple metamodels.
• The CMOF model is the metamodel used to specify other metamodels such as UML 2. It is built from EMOF and the package Core::Constructs of UML 2. CMOF does not define any classes of its own; it only merges packages with its extensions that together define basic metamodeling capabilities.
In particular, EMOF and CMOF are both described using CMOF, which is also used to
describe UML, CWM, etc. Therefore, CMOF is simply referred to as MOF.
MOF is used as the metamodel for UML (as a model) and for other languages such as CWM. In other words, every model element of UML is an instance of exactly one model element of MOF. Note that the InfrastructureLibrary is used at both the metamodel and meta-metamodel levels, since it is reused by UML (Infrastructure and Superstructure) and by MOF, respectively.
Therefore, the InfrastructureLibrary is reused in two different ways:
• All the elements of the UML metamodel are instantiated from meta-metaclasses in the InfrastructureLibrary (as a meta-metamodel).
• The UML metamodel imports and specializes all metaclasses in the InfrastructureLibrary (as a metamodel).
Other Related Specifications
OMG also provides numerous complementary specifications for UML 2 to facilitate the
development of industrial applications. The most significant specifications are as follows.
The Object Constraint Language (OCL) specification [106] defines a formal language
used to describe expressions on UML models. These expressions typically specify invariant
conditions that must hold for the system being modeled or queries over objects described
in a model.
The XML Metadata Interchange (XMI) specification [107] defines a model driven XML
integration framework for defining, interchanging, manipulating and integrating XML data
and objects. XMI-based standards are in use for integrating tools, repositories, applications and data warehouses. XMI provides a mapping from MOF to XML schemas, and rules by which a schema can be generated for any valid XMI-transmissible MOF-based metamodel. For example, XMI can facilitate the interchange of UML models between different modeling tools in XML format.
The UML Diagram Interchange (UMLDI) specification enables a smooth and seamless
exchange of documents compliant to the UML standard (referred to as UML models)
between different software tools. While this certainly includes tools for developing UML
models, it also includes tools such as whiteboard tools, code generators, word processing
tools, and desktop publishing tools. Special attention is given to the Internet as a medium
for exchanging and presenting UML models.
The MOF Query/View/Transformation (QVT) specification addresses a technology
neutral part of MOF and pertains to: (1) queries on models; (2) views on metamodels;
and (3) transformations of models.
Transformations
Figure 9.9: Different Representations of UML Diagrams
Figure 9.9 vertically includes three levels of metamodeling: model (M1), metamodel or language (M2), and meta-metamodel or metalanguage (M3). Horizontally, the figure includes different representations of UML diagrams: graphic, XML-based and textual models. This means a UML user model may be expressed as a(n):
• graphic model. A UML user model is an instance of UML, which is an instance of
MOF.
• XML-based model. It is an XMI-compliant XML document that conforms to its XML schema, and is a derivative of the XMI document productions, which are defined as a grammar. The XML schema is a derivative of the XMI schema productions. The XMI specification defines both the XMI schema productions and the XMI document productions [107].
9.2. Formalizing Guidelines and Consistency Rules of UML
147
XMI provides a mapping between a UML user model and an XML document, and a mapping between UML (also MOF) and an XML Schema. XMI generates an XML file using the XMI document productions, and generates an XML schema using the XMI schema productions. Each of the two sets of productions composes a context-free grammar in Extended BNF (Backus-Naur Form) [78]. A UML user model can be expressed using an XMI-compliant XML document that conforms to the corresponding XML Schema, and is a derivative of the XMI document grammar.
• textual model, which is an instance of its grammar in BNF. For instance, a Java program must be a derivative of the Java grammar. Likewise, a BNF grammar is itself an instance of the grammar describing BNF grammars.
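As an illustration of such a grammar (a hypothetical toy fragment, not taken from any of the cited specifications), a textual model is derived from productions of the following form:

```ebnf
(* Hypothetical EBNF fragment: a toy class declaration *)
ClassDecl  = "class" , Identifier , "{" , { Member } , "}" ;
Member     = Identifier , ":" , Identifier , ";" ;
Identifier = letter , { letter | digit } ;
```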
Numerous UML CASE tools support the standard transformation between UML diagrams and XML files by implementing the XMI specification, e.g., UModel, Visual Paradigm, and MagicDraw.
Besides the standard UML transformation, there is related work on transformations between generic graphic modeling languages and EBNF-based (Extended BNF) grammars. For instance, [127] defined an EBNF grammar for ADORA, a graphic modeling language. In the converse direction, [51] discussed how to transform EBNF grammars into graphic UML diagrams.
Numerous tools support various model transformations. For example, ATL (ATLAS Transformation Language) [79] is a generic model transformation tool based on transformation rules. Most tools transform UML models to XML documents [42]; e.g., UMT (UML Model Transformation Tool) [65] is a transformation tool specific to UML, based on a simplified schema of UML models named XMI-Light.
9.2.2 Guidelines and Consistency Rules
A guideline is an important concept in industrial practice. It contains a set of rules which recommend certain uses of technologies (e.g., modeling and programming languages) to produce more reliable, safe and maintainable products such as computer and software systems. Guidelines are often required for systems in specific application domains. If these rules are not respected, the presence of faults is not certain, but their risk is high.
As an example, the OOTiA (Object-Oriented Technology in Aviation [54]) provides the
guidelines proposed by FAA (Federal Aviation Administration) to suggest the proper use
of OOT for designing more reliable and safe avionic software systems. In OOTiA, one rule
says “multiple inheritance should be avoided in safety critical, certified systems” (IL #38),
because “multiple inheritance complicates the class hierarchy” (IL #33) and “overuse of
inheritance, particularly multiple inheritance, can lead to unintended connections among
classes, which could lead to difficulty in meeting the DO-178B objective of data and control
coupling” (IL #25,37). Another rule is “the length of an inheritance should be less than 6”.
Another rule about SDI (State Defined Incorrectly) says “if a computation performed by
an overriding method is not semantically equivalent to the computation of the overridden
method with respect to a variable, a behavior anomaly can result”.
Consistency problems of UML models have attracted great attention from both academic and industrial communities [83, 82, 76]. A list of 635 consistency rules is identified by [124, 125]. One kind of inconsistency is intra-model inconsistency among several diagrams of a single model. Another kind is inter-model inconsistency between several models or versions of one system. These inconsistencies result from the following facts.
First, evolving descriptions of software artifacts are frequently inconsistent, and tolerating this inconsistency is important [7, 44]. Different developers construct and update these descriptions at different times during development [104], thus resulting in inconsistencies.
Second, the multiple views of UML can provide pieces of information that are redundant or complementary, on which consistency constraints exist. Indeed, UML allows multiple views of a system to be expressed, and a piece of information in one view can be deduced from other pieces of information. For example, communication diagrams can be designed from sequence diagrams. However, these various views may conflict and produce inconsistent models. Therefore, overall consistency is required, i.e., each partial view has to be in accordance with the others.
An important difference between guidelines and consistency rules is that consistency rules must be satisfied in any context, or else there are definitely faults in the model. Guidelines, however, depend on the context: they must be respected in some critical contexts to reduce the risk of faults, but may be violated in other, less critical contexts. This means that consistency rules can be used for fault detection, whereas guidelines are fault prevention means.
At first glance, consistency rules and guidelines may seem unrelated. In fact, however, they have the same origin from a language-theoretic view. We noticed that both types of potential faults in models come from the degrees of freedom offered by a language. These degrees of freedom cannot be eliminated by reducing the capability of the language, but by controlling the use of the language [102]. For instance, the multiple diagrams in UML are useful, as they describe various viewpoints on one system, even if they are at the origin of numerous inconsistencies. In the same way, multiple inheritance is a powerful feature of the C++ language, although it violates the guidelines in OOTiA and increases the risk of faults.
Guidelines and consistency rules fall into two classes: language-level (i.e., metamodel-level) and model-level rules. The language-level rules concern the use of language features, e.g., avoiding multiple inheritance. The model-level rules concern the semantics of a specific model; e.g., the SDI issue is related to the semantics of overridden and overriding methods.
The language-level rules are used to control the faults that result from the degrees of freedom of modeling languages. As an example, inheritance and multiple inheritance are important features of OOT. They provide degrees of freedom in their use, e.g., the length of an inheritance chain or the number of superclasses. However, overuse of these features can lead to unintended connections among classes, thus causing a risk of faults. Therefore, we may suggest some rules to control the complexity of models, e.g., “the length of an inheritance chain should be less than 6”, “no more than 30 attributes in a class”.
To prevent these risks of faults, the use of language must be controlled. However, the
expression of guidelines and consistency rules is informal, thus checking them is difficult.
For instance, 6 months were needed to check 350 consistency rules on an avionics UML
model including 116 class diagrams.
This section aims at formalizing guidelines and consistency rules, and proposing an approach to ensure the correct use of a language from a language-theoretic view. Technically, acceptable uses of a language are formalized as a controlling grammar handling the productions of the grammar of the language. To support this idea, UML must be specified by a formal language, or at least a language with a precisely defined syntax, e.g., XMI. Thus, a graphic model can be serialized. This formalism also provides a deeper view on the origin of inconsistencies in models.
9.2.3 The Grammar of UML in XMI
XMI (XML Metadata Interchange) [107] is used to facilitate interchanging UML models between different modeling tools in XML format. Many tools implement the transformation from UML models to XMI documents; e.g., Altova UModel® can export UML models as XMI files.
The grammar and its XMI document productions for deriving XMI-compliant XML documents of UML models are defined in [107]. The main part of the grammar is given hereafter. To make our presentation more concise, we omit the declaration and version information of XML files (and the related productions whose names start with “1”).
To make later reasoning easier, we modified some representations of the productions,
but without changing the generative power of the grammar.
1. The choice operator “|” is used in [107] to compose several productions with the same left-hand side into a single line. We decomposed some of these productions into several productions without the choice operator. An original production n having k choices might be divided into a set of productions {n_i}, 1 ≤ i ≤ k. For example, the original production 2 with three choices was divided into the productions 2_1, 2_2 and 2_3.
2. The closure operator “*” is used to simplify the representation of the grammar in [107], but it would make the representation of reasoning confusing. Thus, the productions whose names start with “3” were added to replace the productions with closure operators.
The grammar G of UML in XMI includes the following productions (each production
is labeled with a name starting with a digit):
3_1:  XMIElements ::= 2:XMIElement
3_2:  XMIElements ::= 2:XMIElement 3:XMIElements
2_1:  XMIElement ::= 2a:XMIObjectElement
2_2:  XMIElement ::= 2b:XMIValueElement
2_3:  XMIElement ::= 2c:XMIReferenceElement
2a_1: XMIObjectElement ::= "<" 2k:QName 2d:XMIAttributes "/>"
2a_2: XMIObjectElement ::= "<" 2k:QName 2d:XMIAttributes ">"
                           3:XMIElements "</" 2k:QName ">"
2b_1: XMIValueElement ::= "<" xmiName ">" value "</" xmiName ">"
2b_2: XMIValueElement ::= "<" xmiName "nil=‘true’/>"
2c_1: XMIReferenceElement ::= "<" xmiName 2l:LinkAttribs "/>"
2c_2: XMIReferenceElement ::= "<" xmiName 2g:TypeAttrib
                              2l:LinkAttribs "/>"
2d_1: XMIAttributes ::= 2g:TypeAttrib 2e:IdentityAttribs
                        3h:FeatureAttribs
2d_2: XMIAttributes ::= 2e:IdentityAttribs 3h:FeatureAttribs
2e:   IdentityAttribs ::= 2f:IdAttribName "=‘" id "’"
2f_1: IdAttribName ::= "xmi:id"
2f_2: IdAttribName ::= xmiIdAttribName
2g:   TypeAttrib ::= "xmi:type=‘" 2k:QName "’"
3h_1: FeatureAttribs ::= 2h:FeatureAttrib
3h_2: FeatureAttribs ::= 2h:FeatureAttrib 3h:FeatureAttribs
2h_1: FeatureAttrib ::= 2i:XMIValueAttribute
2h_2: FeatureAttrib ::= 2j:XMIReferenceAttribute
2i:   XMIValueAttribute ::= xmiName "=‘" value "’"
2j:   XMIReferenceAttribute ::= xmiName "=‘" (refId | 2n:URIref)+ "’"
2k:   QName ::= "uml:" xmiName
2l:   LinkAttribs ::= "xmi:idref=‘" refId "’" | 2m:Link
2m:   Link ::= "href=‘" 2n:URIref "’"
2n:   URIref ::= (2k:QName)? uriReference | xmiName
In the grammar, the symbol “::=” stands for the conventional rewriting symbol “→” in formal language theory [74]. Each nonterminal starts with a capital letter and is prefixed with the label of the related production; e.g., “2:XMIElement” is a nonterminal with possible productions 2_1, 2_2 and 2_3. Each terminal starts with a lowercase letter or is quoted.
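In the sequel, a leftmost derivation in G is regarded as emitting the sequence of labels of the productions it applies; it is exactly this sequence (the control word) that a controlling grammar will read. As a minimal illustrative sketch (the table and helper are ours, not part of the XMI specification), the labeled productions can be stored in a lookup table and every application recorded:

```python
# Sketch (hypothetical names): a few labeled productions of G, stored as
# label -> (left-hand side, right-hand side), plus a recorder that logs
# the control word of a derivation.
PRODUCTIONS = {
    "3_1": ("XMIElements", ["2:XMIElement"]),
    "3_2": ("XMIElements", ["2:XMIElement", "3:XMIElements"]),
    "2_1": ("XMIElement", ["2a:XMIObjectElement"]),
    # ... the remaining productions of G are elided here
}

def apply_production(label, control_word):
    """Record the applied production and return its right-hand side."""
    lhs, rhs = PRODUCTIONS[label]
    control_word.append(label)   # this sequence is what the controller reads
    return rhs

control_word = []
apply_production("3_1", control_word)
apply_production("2_1", control_word)
print(control_word)              # ['3_1', '2_1']
```

The point of the sketch is only that recording the control word costs one append per derivation step; the controlling component never needs to see the derived text itself.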
Figure 9.10: A Class Diagram
Figure 9.11: An Activity Diagram
As an example to illustrate the use of the grammar, Figure 9.10 represents a package Root which includes three classes, where the class FaxMachine is derived from Scanner and Printer. The core part of the exported XMI 2.1 compliant file (using Altova UModel®) is as follows:
<uml:Package xmi:id="U00000001-7510-11d9-86f2-000476a22f44"
name="Root">
<packagedElement xmi:type="uml:Class"
xmi:id="U572b4953-ad35-496f-af6f-f2f048c163b1"
name="Scanner" visibility="public">
<ownedAttribute xmi:type="uml:Property"
xmi:id="U46ec6e01-5510-43a2-80e9-89d9b780a60b"
name="sid" visibility="protected"/>
</packagedElement>
<packagedElement xmi:type="uml:Class"
xmi:id="Ua9bd8252-0742-4b3e-9b4b-07a95f7d242e"
name="Printer" visibility="public">
<ownedAttribute xmi:type="uml:Property"
xmi:id="U2ce0e4c8-88ee-445b-8169-f4c483ab9160"
name="pid" visibility="protected"/>
</packagedElement>
<packagedElement xmi:type="uml:Class"
xmi:id="U6dea1ea0-81d2-4b9c-aab7-a830765169f0"
name="FaxMachine" visibility="public">
<generalization xmi:type="uml:Generalization"
xmi:id="U3b334927-5573-40cd-a82b-1ee065ada72c"
general="U572b4953-ad35-496f-af6f-f2f048c163b1"/>
<generalization xmi:type="uml:Generalization"
xmi:id="U86a6818b-f7e7-42d9-a21b-c0e639a4f716"
general="Ua9bd8252-0742-4b3e-9b4b-07a95f7d242e"/>
</packagedElement>
</uml:Package>
This text is a derivative of the XMI document productions, cf. the previous grammar G. We may use the sequence of productions “2a_2, 2k(Package), 2d_2, 2e, 2f_1, 3h_1, 2h_1, 2i” to derive the following sentential form:
<uml:Package xmi:id="U00000001-7510-11d9-86f2-000476a22f44"
name="Root">
3:XMIElements "</" 2k:QName ">"
Note that the production 2k has a parameter xmiName, i.e. the value of the terminal
when applying the production. In a derivation, we specify a value of the parameter
as “2k(value)”. For example, “2k(Package)” is a derivation using 2k with xmiName =
“Package”. For simplicity, we consider “2k(value)” as a terminal as a whole. We continue
to apply productions, and finally derive the XMI file previously presented.
Notice that the model of Fig. 9.10 (both in UML and XML) does not conform to the guidelines in OOTiA about multiple inheritance, since it uses multiple inheritance.
As another example, the model of Fig. 9.11 has an inconsistency: “the number of outgoing edges of ForkNode is not the same as the number of incoming edges of JoinNode”. In particular, JoinNode joins two outgoing edges from the same DecisionNode. This join transition will never be activated, since only one of the two outgoing edges will be fired.
9.2.4 Formalizing Rules Using LGC Systems
We believe that the LGC System can be used to formalize and ensure the rules. In
this section, we use some practical examples to illustrate how to check the conformance
to guidelines and consistency rules by controlling the use of the grammar of UML. We
denote the grammar of UML by G = (N, T, P, S), where P is the set of productions listed
in the previous section, and each production p ∈ P is labeled with a name starting with
a digit.
Example 9.1. Consider two rules on class diagrams:
Rule 1: Each class can have at most one generalization. As we mentioned, this rule is a guideline. It is also a consistency rule in the context of Java, since Java does not allow multiple inheritance. However, we may derive a class from multiple classes in the context of C++.
Rule 2: Each class can have at most 30 attributes. This rule may be adopted by
software authorities as a guideline in avionics, in order to improve reliability and safety of
software systems by minimizing the complexity of classes.
Note that these rules cannot be explicitly integrated into the grammar of UML, but only recommended as guidelines or consistency rules. We cannot put Rule 1 into the standard of UML, since UML models can be implemented in both the C++ and Java programming languages. Rule 2 is a restriction for a specific domain, and we should not require all programmers to use a limited number of attributes by including the rule in the UML specification.
We aim to specify the rules at the meta-language level, that is, to control the use of the UML language. Consider the example of Fig. 9.10: to obtain the associated XMI text, the sequence of applied productions of G in the leftmost derivation is as follows (“...” stands for some omitted productions, to save space):
2a_2, 2k(Package), 2d_2, 2e, 2f_1, 3h_1, 2h_1, 2i,
..., 2k(packagedElement), ..., 2k(Class),
..., 2k(ownedAttribute), ..., 2k(Property),
..., 2k(packagedElement),
..., 2k(packagedElement), ..., 2k(Class),
..., 2k(ownedAttribute), ..., 2k(Property),
..., 2k(packagedElement),
..., 2k(packagedElement), ..., 2k(Class),
..., 2k(generalization), ..., 2k(Generalization),
..., 2k(generalization), ..., 2k(Generalization),
..., 2k(packagedElement),
..., 2k(Package)
Let c, g stand for 2k(Class) and 2k(Generalization), respectively. Note that the occurrence of two g after the third c violates Rule 1. In fact, all the sequences of productions of the pattern “...c...g...g...” are not allowed by the rule (there is no c between the two g), indicating that the class has at least two generalizations.

Figure 9.12: The Automaton Âc
Thus, we propose the following controlling grammar Ĝc to restrict the use of language to satisfy Rule 1:

Ĝc:                                                  (9.1)
    S  → c Qc | D S | ε
    Qc → c Qc | g Qg | D Qc | ε
    Qg → c Qc | D Qg | ε
    D  → {p | p ∈ P ∧ p ∉ {c, g}}

where S, Qc, Qg, D are nonterminals, and D includes all productions except c and g. L(Ĝc) accepts the sequences of productions satisfying Rule 1.
Implicitly, the controlling grammar specifies a finite state automaton Âc in Fig. 9.12, where the dashed circle is an implicit error state. Strings containing the pattern D∗cD∗gD∗g will lead Âc to the error state, indicating that there exists a class having at least two generalizations.
If the sequence of productions applied to derive a model is accepted by the language L(Ĝc), then the model conforms to Rule 1. In Fig. 9.10, the derivation of the class FaxMachine uses the pattern D∗cD∗gD∗gD∗ ∉ L(Ĝc), which leads to the error state of the automaton; thus it violates Rule 1. On the contrary, the derivations of Scanner and Printer are accepted by L(Ĝc), and thus satisfy Rule 1. If we consider the whole model, the derivation uses the pattern D∗cD∗cD∗cD∗gD∗gD∗ ∉ L(Ĝc), which also violates Rule 1.
Globally, any UML user model M derived from the LGC System C = G ~· Ĝc , i.e.
M ∈ L(C), conforms to Rule 1.
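To make the check concrete, here is a small executable sketch (ours, not from the thesis tooling) of the automaton Âc over a control word, abbreviating 2k(Class) as "c" and 2k(Generalization) as "g", and treating every other production name as a member of D:

```python
# Sketch of the automaton Âc of Fig. 9.12: states S, Qc, Qg plus an
# implicit error state. Productions in D leave the state unchanged.
def conforms_to_rule1(control_word):
    state = "S"
    for p in control_word:
        if state == "ERROR":
            break                         # the error state loops on everything
        if p == "c":
            state = "Qc"                  # a new class restarts the check
        elif p == "g":
            # one generalization moves Qc to Qg; a second one is an error
            state = "Qg" if state == "Qc" else "ERROR"
    return state != "ERROR"

print(conforms_to_rule1(["c", "D", "g"]))        # True: one generalization
print(conforms_to_rule1(["c", "g", "D", "g"]))   # False: violates Rule 1
```

On the whole-model control word of Fig. 9.10, which contains the subsequence c, c, c, g, g, the checker reaches the error state, matching the discussion above.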
Now let us consider Rule 2. Let c, pr, pe stand for 2k(Class), 2k(Property) and 2k(packagedElement), respectively. Note that the occurrence of more than 30 pr after a c violates Rule 2. In fact, all the sequences of productions of the pattern “...c...(pr...)^n”, n > 30, are not allowed by the rule (there is no c between any two pr’s), indicating that a class has more than 30 attributes.
To satisfy Rule 2, we propose the following controlling grammar Ĝp to restrict the use of language:

Ĝp:                                                  (9.2)
    S   → pe S | c Qc | D S | ε
    Qc  → pe S | c Qc | pr Q1 | D Qc | ε
    Qi  → pe S | c Qc | pr Qi+1 | D Qi | ε    (1 ≤ i < 30)
    Q30 → pe S | c Qc | D Q30 | ε
    D   → {p | p ∈ P ∧ p ∉ {c, pr, pe}}

where S, Qc, Qi, Q30, D are nonterminals, and D includes all productions except c, pr and pe. L(Ĝp) accepts the sequences of productions satisfying Rule 2.

Figure 9.13: The Automaton Âp
Implicitly, the controlling grammar specifies a finite state automaton Âp in Fig. 9.13. Strings containing the pattern D∗c(D∗pr)^31 will lead Âp to the error state, indicating that there exists a class having more than 30 attributes.
If the sequence of productions applied to derive a model is accepted by the language L(Ĝp), then the model conforms to Rule 2. In Fig. 9.10, the derivations of the classes Scanner and Printer use the pattern D∗cD∗prD∗ ∈ L(Ĝp), and thus satisfy Rule 2. If we consider the whole model, the derivation uses the pattern D∗cD∗prD∗cD∗prD∗cD∗ ∈ L(Ĝp), thus also satisfying Rule 2.
Globally, any UML user model M derived from the LGC System C = G ~· Ĝp , i.e.
M ∈ L(C), conforms to Rule 2.
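The 32-state automaton Âp need not be written out state by state; an equivalent executable sketch (ours) simply keeps the attribute count of the class currently being derived:

```python
# Sketch of the automaton Âp of Fig. 9.13 as a counting checker.
# 'c' = 2k(Class), 'pr' = 2k(Property), 'pe' = 2k(packagedElement);
# productions in D do not change the state.
def conforms_to_rule2(control_word, limit=30):
    count = None                  # None models state S (not inside a class)
    for p in control_word:
        if p == "c":
            count = 0             # enter Qc: start counting attributes
        elif p == "pe":
            count = None          # close the packaged element: back to S
        elif p == "pr" and count is not None:
            count += 1            # walk through Q1 .. Q30
            if count > limit:
                return False      # implicit error state: more than 30
    return True

print(conforms_to_rule2(["c"] + ["pr"] * 30))   # True: exactly 30 attributes
print(conforms_to_rule2(["c"] + ["pr"] * 31))   # False: violates Rule 2
```

The counter plays the role of the subscript i of Qi, so changing the guideline's bound only means changing the `limit` argument, not rebuilding the automaton.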
Thanks to the controlling grammars, when a model violates required rules, the controlling language will reject the model (an implicit error state will be activated). Some
error handling method may be called to process the error, e.g., printing an error message
indicating the position and the cause.
As another example, we can also use a controlling grammar to handle a consistency rule concerning activity diagrams.
Example 9.2. In an activity diagram, the number of outgoing edges of ForkNode should be the same as the number of incoming edges of its pairwise JoinNode.
Let n, f, j, i, o stand for 2k(node), 2k(ForkNode), 2k(JoinNode), 2k(incoming) and 2k(outgoing), respectively. We propose the following controlling grammar Ĝa to restrict the use of language to satisfy the rule:

Ĝa:                                                  (9.3)
    S → N F I∗ Q O∗ N | N I∗ O∗ N | D∗
    Q → O Q I | N S N J
    N → n D∗
    F → f D∗
    J → j D∗
    I → i D∗
    O → o D∗
    D → {p | p ∈ P ∧ p ∉ {n, f, j, i, o}}
L(Ĝa) accepts all the sequences of productions of the pattern N F I∗ O^n N S N J I^n O∗ N, which derive the models respecting the rule. This context-free grammar implicitly specifies a PDA (Pushdown Automaton [74]), which is more complex than the finite state automata in Figures 9.12 and 9.13.
Globally, any UML user model M derived from the LGC System C = G ~· Ĝa, i.e. M ∈ L(C), conforms to the rule in Example 9.2.

As an application of the LGC system, we consider the model in Fig. 9.11. The XMI-compliant document of the model in Fig. 9.11 is as follows:
<packagedElement xmi:type="uml:Activity"
xmi:id="U937506ed-af64-44c6-9b4c-e735bb6d8cc6"
name="Activity1" visibility="public">
<node xmi:type="uml:InitialNode" xmi:id="U16aa15e8-0e5d-4fd1-930a-725073ece9f0">
<outgoing xmi:idref="Ue9366b93-a45b-43f1-a201-2038b0bd0b30"/>
</node>
<node xmi:type="uml:ForkNode" xmi:id="U26768518-a40c-4713-b35e-c267cc660508" name="ForkNode">
<incoming xmi:idref="Ue9366b93-a45b-43f1-a201-2038b0bd0b30"/>
<outgoing xmi:idref="Ua800ba9b-e167-4a7c-a9a9-80e6a77edeb7"/>
</node>
<node xmi:type="uml:DecisionNode" xmi:id="Uc9e4f0de-8da6-4c98-9b95-b4cde30ccfc0" name="DecisionNode">
<incoming xmi:idref="Ua800ba9b-e167-4a7c-a9a9-80e6a77edeb7"/>
<outgoing xmi:idref="Ua4a2b313-13d6-4d69-9617-4803560731ef"/>
<outgoing xmi:idref="U6eede33f-98ac-4654-bb17-dbe6aa7e46be"/>
</node>
<node xmi:type="uml:JoinNode" xmi:id="Ud304ce3c-ebe4-4b06-b75a-fa2321f8a151" name="JoinNode">
<incoming xmi:idref="Ua4a2b313-13d6-4d69-9617-4803560731ef"/>
<incoming xmi:idref="U6eede33f-98ac-4654-bb17-dbe6aa7e46be"/>
</node>
<edge xmi:type="uml:ControlFlow"
xmi:id="Ua4a2b313-13d6-4d69-9617-4803560731ef"
source="Uc9e4f0de-8da6-4c98-9b95-b4cde30ccfc0"
target="Ud304ce3c-ebe4-4b06-b75a-fa2321f8a151">
<guard xmi:type="uml:LiteralString"
xmi:id="U6872f3b3-680c-430e-bdb3-21c0a317d290"
visibility="public" value="x>10"/>
</edge>
<edge xmi:type="uml:ControlFlow"
xmi:id="U6eede33f-98ac-4654-bb17-dbe6aa7e46be"
source="Uc9e4f0de-8da6-4c98-9b95-b4cde30ccfc0"
target="Ud304ce3c-ebe4-4b06-b75a-fa2321f8a151">
<guard xmi:type="uml:LiteralString"
xmi:id="Ub853080d-481c-46ff-9f7c-92a31ac24349"
visibility="public" value="else"/>
</edge>
<edge xmi:type="uml:ControlFlow"
xmi:id="Ua800ba9b-e167-4a7c-a9a9-80e6a77edeb7"
source="U26768518-a40c-4713-b35e-c267cc660508"
target="Uc9e4f0de-8da6-4c98-9b95-b4cde30ccfc0"/>
<edge
xmi:type="uml:ControlFlow"
xmi:id="Ue9366b93-a45b-43f1-a201-2038b0bd0b30"
source="U16aa15e8-0e5d-4fd1-930a-725073ece9f0"
target="U26768518-a40c-4713-b35e-c267cc660508"/>
</packagedElement>
It is easy to detect that the sequence of applied productions, which is of the pattern “...nD∗fD∗iD∗oD∗nD∗... nD∗jD∗iD∗i...”, is not accepted by L(Ĝa) (one o follows f, while two i follow j); thus there is an inconsistency.
We remark here that there are two preconditions for using the controlling grammar, concerning the order of the model elements in the XML document: 1. ForkNode must appear before its pairwise JoinNode; 2. incoming edges must appear before outgoing edges in a node. The two conditions are trivial, since it is easy to control their positions in the exported XMI documents when implementing such a transformation.
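Under these two preconditions, the PDA implicitly specified by Ĝa reduces to a counter check. The following sketch (ours; production names abbreviated as in the text, with "n" delimiting nodes, and nested fork/join pairs matched via a stack as in the nested production Q → N S N J) illustrates it:

```python
# Sketch of a stack-based checker for Example 9.2.
# 'n' = 2k(node), 'f' = 2k(ForkNode), 'j' = 2k(JoinNode),
# 'i' = 2k(incoming), 'o' = 2k(outgoing); other productions are ignored.
def fork_join_balanced(control_word):
    stack = []     # outgoing-edge counts of ForkNodes awaiting their JoinNode
    mode = None    # kind of the node currently being read ('fork'/'join'/None)
    for p in control_word + ["n"]:    # trailing sentinel closes the last node
        if p == "n":
            if mode == "join" and (not stack or stack.pop() != 0):
                return False          # edge counts of the pair differ
            mode = None
        elif p == "f":
            stack.append(0)
            mode = "fork"
        elif p == "j":
            mode = "join"
        elif p == "o" and mode == "fork":
            stack[-1] += 1            # count the fork's outgoing edges
        elif p == "i" and mode == "join":
            if not stack or stack[-1] == 0:
                return False          # join has more incoming edges
            stack[-1] -= 1
    return not stack                  # every fork must have been joined

print(fork_join_balanced(["n", "f", "i", "o", "o", "n",
                          "n", "j", "i", "i", "n"]))       # True
print(fork_join_balanced(["n", "f", "i", "o", "n",
                          "n", "j", "i", "i", "n"]))       # False
```

The second call mirrors Fig. 9.11 (one o after f, two i after j) and is rejected, matching the inconsistency detected above.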
9.2.5 Implementation of LGC Systems
In this section, we briefly discuss implementation issues.
Implementation of LGC Systems. As an instance of the model monitoring approach,
we have two alternative techniques for implementing the meta-composition operator.
For the first alternative, namely model monitoring, the controlled grammar G and the
controlling grammar Ĝ can be implemented as two parsers separately. The technique for
constructing a parser from a context-free grammar is rather mature [43, 1]. Some tools
provide automated generation of parsers from a grammar specification, such as Lex/Yacc,
Flex/Bison.
Note that the input of the controlling parser is the sequence of productions applied for parsing the model using G. So there are communications between the two parsers. Once the module of G uses a production p_i, the name of the production is sent to Ĝ as an input. If Ĝ accepts the sequence of productions and G accepts the model, then the LGC system G ~· Ĝ accepts the model.
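A minimal sketch (ours) of this coupling: the parser of G calls notify for each applied production, and the controller evaluates any checker over the accumulated control word:

```python
# Sketch of the monitoring coupling between the parser of G and the
# controller for Ĝ. The checker below is a deliberately simplified
# stand-in (at most one 'g' overall), not the real L(Ĝc).
class Controller:
    def __init__(self, check):
        self.word = []          # the control word received so far
        self.check = check      # any predicate over control words

    def notify(self, production):
        # called by the parser of G each time it applies a production
        self.word.append(production)

    def accepts(self):
        return self.check(self.word)

monitor = Controller(lambda w: w.count("g") <= 1)
for p in ["2a_2", "2k(Class)", "g"]:     # productions reported by G's parser
    monitor.notify(p)
print(monitor.accepts())                 # True: at most one 'g' seen
```

In a real implementation, the callback would be placed in the semantic actions of a generated parser (e.g., Yacc/Bison actions), so the monitor runs online while the model is parsed.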
For the second alternative, namely model generating, the controlled grammar G and the controlling grammar Ĝ can be translated into two automata A and Â. Then the meta-composition of the automata C = A ~· Â can be computed and implemented as a parser by an automated tool. The meta-composition C accepts the model if and only if the LGC system G ~· Ĝ accepts the model, i.e., the model satisfies the constraints.
Multiple Rules. If we have multiple guidelines or consistency rules, each rule is formalized using a grammar. We can develop an automated tool that converts the grammars Ĝ1, ..., Ĝn into automata Â1, ..., Ân, and then combines these automata to compute an intersection, i.e., an automaton A′ such that L(A′) = L(Â1) ∩ · · · ∩ L(Ân) [74]. The intersection A′ can be used as a controlling automaton, which specifies a controlling language L(A′) that includes all the semantics of the rules.
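The intersection can be obtained with the standard product construction on finite automata; a small sketch (ours, over toy deterministic automata given as initial state, transition dictionary and accepting set):

```python
# Sketch of the product construction: L(A') = L(A1) ∩ L(A2).
def product(a1, a2):
    (q1, d1, f1), (q2, d2, f2) = a1, a2
    init = (q1, q2)
    # pair transitions that read the same symbol in both automata
    delta = {((s1, s2), x): (t1, t2)
             for (s1, x), t1 in d1.items()
             for (s2, y), t2 in d2.items() if x == y}
    accept = {(s1, s2) for s1 in f1 for s2 in f2}
    return init, delta, accept

def run(automaton, word):
    state, delta, accept = automaton
    for x in word:
        state = delta.get((state, x))
        if state is None:
            return False          # undefined move: implicit error state
    return state in accept

# Two toy one-letter rules over the alphabet {'a'}:
a1 = ("p", {("p", "a"): "p"}, {"p"})     # accepts a*
a2 = ("q", {("q", "a"): "r"}, {"r"})     # accepts exactly 'a'
both = product(a1, a2)
print(run(both, ["a"]))          # True: in both languages
print(run(both, ["a", "a"]))     # False: rejected by a2
```

Since the construction multiplies state counts, intersecting many rule automata may warrant minimizing the product afterwards.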
Cost vs. Benefit. It may seem as if writing a controlling grammar were expensive, because it involves formal methods. However, this is probably not the case. A controlling grammar specifies language-level constraints, and can be reused by all the models derived from the same controlled grammar. Thus the controlling grammar can be identified and formalized by the organizations who define the language or its authorized usage, e.g., OMG and the FAA (Federal Aviation Administration), respectively. Developers and software companies can use the published standard controlling grammar for checking inconsistencies and ensuring guidelines in their models. In contrast, if every user writes their own checking algorithms and code, e.g., in OCL or other expressions, the code will be hard to reuse by other users who have different models to check. Thus the total cost over all users may be higher.
9.2.6 Related Work
Most related work on checking the consistency of UML uses the specific semantics of UML models. Therefore, it has the flavor of model checking, e.g., Egyed’s UML/Analyzer [45, 46, 47] and OCL (Object Constraint Language) [106, 27]. First, developers design UML diagrams as a model. Then, the consistency rules are specified as OCL or similar expressions. Finally, certain algorithms are executed to detect counterexamples that violate the rules.
There are several drawbacks to the existing approaches. First, these techniques use model-level constraints to express both model-level and language-level rules. Since they use the specific semantics of UML models, these tools can only be used for UML, but not for other MOF-based languages. Second, OCL can only specify constraints on classes and types in the class model. This means it cannot be used to analyze other types of diagrams. Third, due to the complexity of the semantics of UML, existing tools only implement part of the consistency rules. In particular, few UML tools have implemented the feature of checking OCL expressions.
Unlike these techniques, our approach takes another way. First, we specify the grammar G of the language, which defines how the grammar can derive a model. This step has been done by the XMI specification. Then the rules on the use of the language are modeled as a controlling grammar Ĝ, which defines what the language is authorized to derive. Finally, the two grammars constitute a global system satisfying the rules. Therefore, any derivation of the global system is a correct and consistent use of the language.
In particular, our work differs from traditional techniques in the following aspects:
1. Our work and traditional techniques express language-level and model-level constraints, respectively. Language-level constraints are more efficient, because they are implicitly reusable. That is, we only need to develop one language-level constraint and apply it to all the models in the language. However, when using traditional techniques, we need to replicate model-level constraints for each model.
2. Our work and traditional techniques use syntax-based and semantics-based approaches (or static and dynamic analysis), respectively. Therefore, our approach is generic and metamodel-independent, and concerns itself little with semantics. There are two merits. First, we can use the same implementation method to ensure various constraints. Second, it can be applied to all MOF-compliant languages, not only UML. In contrast, traditional techniques depend on the semantics of a language, so a specific algorithm must be developed for each constraint and each metamodel.
3. Our work and traditional techniques catch errors at compile-time and runtime, respectively. Indeed, our approach implements the membership checking of context-free languages. That is, it searches in a limited space defined by the model. However, traditional techniques, like model checking, may search in a larger, even intractable, space generated by the execution of the model. As a result, people have to limit the state space of the computation, which introduces the coverage problem.
9.2.7 Conclusion
In this section, we provided a language-theoretic view on guidelines and consistency rules of UML, and proposed to use LGC systems for ensuring the rules. The rules are modeled as controlling grammars which control the use of modeling languages. This approach is a generic, metamodel-independent, syntax-based technique that checks language-level constraints at compile-time. It can also be applied to other MOF-compliant languages, not only UML, since it does not depend on the specific semantics of UML.
One important piece of future work is to develop a tool that implements the proposed approach. Empirical data could then be collected to show its merits and limitations, and to compare the benefits and costs of the approach.
Chapter 10
Conclusion
In this chapter, we conclude this thesis by summarizing contributions and future work.
10.1 Contribution
This thesis contributes to the study of reliability and safety of computer and software
systems, which are modeled as discrete event systems. The major contributions include
the theory of Control Systems (C Systems) and the model monitoring approach.
In the first part of the thesis, we studied the theory of C systems, which combines and significantly extends regulated rewriting in formal language theory and supervisory control. The family of C systems is shown in Fig. 10.1.
The C system is a generic framework containing two components: the controlled component, and the controlling component that restricts the behavior of the controlled component. The two components are expressed using the same formalism, e.g., automata or grammars. The controlled component expresses a language L on inputs and outputs, whereas the controlling component, also called a controller, expresses a controlling language restricting the use of L without changing L itself.
We consider various classes of control systems based on different formalisms, for example, automaton control systems, grammar control systems, and their infinite versions and concurrent variants. In particular, we may classify C Systems into two categories, namely C Systems and ω-C Systems, according to the length of accepted strings, i.e., finite words and infinite ω-words, respectively.
The C System on finite words is used for modeling and controlling systems with finite-length behaviors. It includes three classes, namely Grammar Control Systems (GC Systems), Leftmost-derivation-based Grammar Control Systems (LGC Systems) and Automaton Control Systems (AC Systems).
The ω-C System on ω-words is used for modeling and controlling nonstop systems that generate ω-words. It also includes three major classes, namely ω-Grammar Control Systems (ω-GC Systems), Leftmost-derivation-based ω-Grammar Control Systems (ω-LGC Systems) and ω-Automaton Control Systems (ω-AC Systems).
We further discussed the Büchi Automaton Control System (BAC System) which is a
special case of the ω-AC System. The BAC system can provide transition-level supervisory
control on ω-words. The Alphabet-level BAC System (A-BAC System) is a restricted case
that can only provide alphabet-level supervisory control.
Some concurrent variants of the BAC system were also presented, such as the Input/Output Automaton Control System (IO-AC System), the Interface Automaton Control System (IN-AC System) and the Asynchronous-Composition Büchi Automaton Control System (AC-BAC System). The concurrent variants are used for modeling and controlling concurrent systems that consist of several components with various types of communication, e.g., broadcasting and rendezvous communications.

Figure 10.1: The Family of C Systems (C systems on finite words: GC, LGC and AC systems; C systems on ω-words: ω-GC, ω-LGC and ω-AC systems, with the variants BAC, A-BAC, IO-AC, IN-AC and AC-BAC systems)
We studied the generative power of various classes of control systems. Note that the proofs of generative power also provide techniques for constructing meta-compositions, which are useful for the verification and implementation of the global system.
After that, an application of the theory was presented. The AC-BAC system is used
to model and check correctness properties on execution traces specified by nevertrace
claims. We showed that the nevertrace claim and its checking problem are feasible in
practice.
In the second part of the thesis, we investigated the model monitoring approach, whose theoretical foundation is the theory of control systems. The key principle of the approach is “property specifications as controllers”. In other words, the functional requirements and the property specification of a system are modeled and implemented separately, and the latter controls the behavior of the former. The model monitoring approach comprises two alternative techniques, namely model monitoring and model generating.
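As an informal illustration of the monitoring side of the principle (the property, monitor, and action names below are our own examples, not from the thesis), the property specification runs alongside the functional model and vetoes any action that would violate it:

```python
# Hypothetical sketch of "property specifications as controllers":
# the functional model proposes actions; a separately specified
# monitor (the controller) rejects any action violating the property.

class Monitor:
    """Property (example): 'close' must never occur before 'open'."""
    def __init__(self):
        self.opened = False

    def permits(self, action):
        if action == 'close' and not self.opened:
            return False
        if action == 'open':
            self.opened = True
        return True

def execute(actions, monitor):
    """Let the monitor control the functional model's behavior."""
    trace = []
    for a in actions:
        if monitor.permits(a):
            trace.append(a)               # action goes through
        else:
            trace.append(f'BLOCKED:{a}')  # monitor vetoes the action
    return trace

print(execute(['close', 'open', 'close'], Monitor()))
# ['BLOCKED:close', 'open', 'close']
```

In the model generating variant, the same composition would instead be computed offline to produce a single model that satisfies the property by construction.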
The approach can be applied in several ways to improve the reliability and safety of various classes of systems. We presented some typical applications to demonstrate its power.
First, the model monitoring approach provides better support for the change and evolution of property specifications. We showed that it can fill the gap between such changes and the traditional development process based on model checking.
Second, the (ω-)AC System provides the theoretical foundation of safety-related systems in the standard IEC 61508 for ensuring functional validity. We also showed that checking functional validity is essentially checking an (ω-)AC system.
Third, the LGC system is used to formalize and check guidelines and consistency rules of UML. The rules are considered as controlling grammars which control the use of modeling languages. This approach is a generic, metamodel-independent, syntax-based technique that checks language-level constraints at compile time. It can also be applied to other MOF-compliant languages, not only UML, since it does not depend on the specific semantics of UML.
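The flavor of such syntax-based checking can be sketched as follows (a deliberate simplification: the rule, the labels, and the use of a regular controlling language are our own illustration, not the thesis's formalization). Each construction step of a model is labeled, and a derivation is well formed only if its label sequence belongs to the controlling language.

```python
import re

# Toy LGC-style check: a controlling language over production labels
# is checked at "compile time" on the derivation. The guideline below
# is hypothetical: every derivation must start by creating a class
# before any attribute or operation is added.
CONTROLLING = re.compile(r'^addClass;(addClass;|addAttr;|addOperation;)*$')

def derivation_allowed(steps):
    """Accept the derivation iff its label sequence lies in the
    controlling language; the derivation itself is left unchanged."""
    return bool(CONTROLLING.match(''.join(s + ';' for s in steps)))

print(derivation_allowed(['addClass', 'addOperation']))  # True
print(derivation_allowed(['addOperation', 'addClass']))  # False
```

Since the check inspects only the sequence of productions, not the meaning of the constructed model, it is independent of the metamodel's semantics, which is what makes the technique applicable beyond UML.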
These results lay the foundations for further study of more advanced control mechanisms, and provide a new way of ensuring reliability and safety.
10.2 Future Work
Numerous interesting and promising directions could be considered in the future, in order
to develop a more mature theory or explore other possible applications.
On the theoretical side, we need to further investigate the formal properties of C systems. For example, the precise generative powers of some classes of C systems are still unknown. Another problem is to characterize monitorability, i.e., to characterize the class of monitorable properties. This is important for deciding whether a given property is monitorable and whether it can be ensured via the model monitoring approach.
On the application side, we may explore other possible applications of the approach more extensively, which may in turn contribute to the further development of the approach and the underlying theory.
We may also develop tools implementing the discussed approaches and techniques, which have been proved feasible in theory. Empirical data could then be collected to show their merits and limitations, and to compare the benefits and costs of the approach. For instance, it would be interesting to implement the nevertrace claim; empirical data could then show whether and by how much the nevertrace claim reduces the state space and decreases checking time in practice, compared with checking the same properties specified by existing constructs in SPIN. Similar research could also be done for formalizing and checking guidelines and consistency rules of UML.
We may try to apply the model monitoring approach to more complex industrial systems and obtain empirical results. The results may provide new directions for theoretical research, as well as valuable experience for other applications.
Index
C systems
ω-AC systems, 74
ω-GC systems, 63
ω-LGC systems, 70
A-BAC systems, 87
AC systems, 29
AC-BAC systems, 109
BAC systems, 84
GC systems, 24
IN-AC systems, 96
IO-AC systems, 91
LGC systems, 26
consistency rules, 147
functional validity, 131
guidelines, 147
IEC 61508, 129
model generating, 120
model monitoring, 120
nevertrace claim, 106
Promela, 99, 132
safety-related systems, SRS, 130
SPIN, 99, 132
the model monitoring approach, 121
unified modeling language, UML, 143
Publications
1. Zhe Chen and Gilles Motet. Towards better support for the evolution of safety
requirements via the model monitoring approach. In Proceedings of the 32nd International Conference on Software Engineering (ICSE 2010), pages 219–222. ACM,
2010.
2. Zhe Chen and Gilles Motet. Nevertrace claims for model checking. In Proceedings of
the 17th International SPIN Workshop on Model Checking of Software (SPIN 2010),
Lecture Notes in Computer Science. Springer, 2010. To appear.
3. Zhe Chen and Gilles Motet. Separating functional and dependability requirements of
embedded systems. In Proceedings of the 7th International Conference on Embedded
Software and Systems (ICESS 2010). IEEE Computer Society, 2010. To appear.
4. Zhe Chen and Gilles Motet. System safety requirements as control structures. In
Proceedings of the 33rd Annual IEEE International Computer Software and Applications Conference (COMPSAC 2009), pages 324–331. IEEE Computer Society,
2009.
5. Zhe Chen and Gilles Motet. A language-theoretic view on guidelines and consistency
rules of UML. In Proceedings of the 5th European Conference on Model Driven Architecture - Foundations and Applications (ECMDA-FA 2009), volume 5562 of Lecture
Notes in Computer Science, pages 66–81. Springer, 2009.
6. Zhe Chen and Gilles Motet. Formalizing safety requirements using controlling automata. In Proceedings of the 2nd International Conference on Dependability (DEPEND 2009), pages 81–86. IEEE Computer Society, 2009.
7. Zhe Chen and Gilles Motet. Modeling system safety requirements using input/output constraint meta-automata. In Proceedings of the 4th International Conference
on Systems (ICONS 2009), pages 228–233. IEEE Computer Society, 2009.