Digital I

Transcription

Chapter 14N
Class 14/D1: Logic Gates
Contents
14N Class 14/D1: Logic Gates . . . . . 1
  14N.1 Why? . . . . . 2
  14N.2 Analog versus Digital . . . . . 2
    14N.2.1 What does this distinction mean? . . . . . 2
    14N.2.2 But why bother with digital? . . . . . 3
    14N.2.3 Alternatives to Binary . . . . . 4
    14N.2.4 Digital Processing makes sense, more obviously, when the information to be handled is digital from the outset. . . . . . 6
  14N.3 Number codes: two’s-complement . . . . . 6
    14N.3.1 Negative Numbers . . . . . 7
    14N.3.2 Hexadecimal Notation . . . . . 7
  14N.4 Combinational Logic . . . . . 8
    14N.4.1 Digression: a little history . . . . . 8
    14N.4.2 deMorgan’s Theorem . . . . . 10
    14N.4.3 Active High versus active Low . . . . . 11
  14N.5 The usual way to do digital logic: programmable arrays . . . . . 14
    14N.5.1 Active-Low with PLD’s: A Logic Compiler can Help . . . . . 15
    14N.5.2 An Ugly Version of this Logic . . . . . 15
  14N.6 Gate Types: TTL & CMOS . . . . . 16
    14N.6.1 Gate Innards: TTL vs CMOS . . . . . 17
    14N.6.2 Thresholds and “Noise Margin” . . . . . 18
  14N.7 Noise Immunity . . . . . 19
    14N.7.1 DC noise immunity: CMOS vs TTL . . . . . 19
    14N.7.2 Differential transmission: another way to get good noise immunity . . . . . 21
  14N.8 More on Gate Types . . . . . 22
    14N.8.1 Output Configurations . . . . . 22
    14N.8.2 Logic with TTL and CMOS . . . . . 24
    14N.8.3 Speed versus Power consumption . . . . . 24
  14N.9 AoE Reading . . . . . 25
Class 14/D1: Logic Gates
2
REV 01 ; October 7, 2014.
14N.1 Why?
We aim to apply MOSFETs so as to form gates capable of Boolean operations.
14N.2 Analog versus Digital
14N.2.1 What does this distinction mean?
The major and familiar distinction is between digital and analog. Along the way, let’s also distinguish “binary” from the more general and more interesting notion, “digital.”
• First, the “analog” versus “digital” distinction:
– an analog system represents information as a continuous function of the quantity represented (as a voltage may be proportional to temperature, or to sound pressure).²
– a digital system, in contrast, represents information with discrete codes. The codes to represent increasing temperature readings could be increasing digital numbers (0001, 0010, 0011, for example)—but they could, alternatively, use any other code that you found convenient. The digital representation also need not be binary—a narrower notion mentioned just below.
• . . . then the less important “binary” versus “digital” distinction: binary is a special case of digital representation in which only two values are possible, often called True and False. When more than two values are to be encoded using binary representations, multiple binary digits are required. Binary digits, as you know, are called “bits” (“bit” = “binary digit”).
A binary logic gate classifies inputs into one of two categories: the gate is a comparator—one that is simple
and fast:
Figure 1: Two comparator circuits: digital inverter, and explicit comparator, roughly equivalent
The digital gate resembles an ordinary comparator (hereafter we won’t worry about the digital/binary distinction; we will assume our digital devices are binary).
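The gate-as-comparator idea can be sketched in a few lines of code. This is an illustration only, not a circuit from the text: the 2.5 V threshold is an assumed midpoint of a hypothetical 5 V supply.

```python
# A binary gate classifies any input voltage into one of two categories,
# exactly like a comparator. THRESHOLD here is an assumption (midpoint of
# a hypothetical 5 V logic supply), not a value given in the text.

THRESHOLD = 2.5  # volts (assumed)

def inverter(v_in):
    """Digital inverter: classify the input, then output the opposite class."""
    return 0 if v_in > THRESHOLD else 1
```

A noisy High (say 4.7 V) and a noisy Low (say 0.3 V) both get classified cleanly, which is the sense in which the signal is “born again” at each gate.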
1 Revisions: add headerfile (6/14); add ‘Why’ (1/14); insert figure showing LVDS scheme (9/13); add definitions of bit, byte and octet;
replace some numerical footnotes that could be exponents with symbols (5/13); add mention of hysteresis on XL PAL (7/12); add index
(6/12); correct Boole’s Fabius example to say it is NAND, not XOR (3/11); cut ref to ABEL (10/11); credit ’truth table’ to Wittgenstein,
not Boole; amend logic gate speed plot (10/10); add scope images of varying number of bits in ADC-DAC conversion (10/20/09).
2 We don’t quite want to say that the representation—say, the voltage—is proportional to the quantity represented, since a log
converter, for example, can make the representation stranger than that.
How does the digital gate differ from a comparator like, say, the LM311?
Input & output circuitry: The ’311 is more flexible (you choose its threshold & hysteresis; you choose its output range). Most digital
gates include no hysteresis. The exceptional gates that include hysteresis usually are those intended to listen to a computer bus
(assumed to be extra noisy), plus the inputs of some larger ICs, such as the PAL that you will meet in our labs, the XC9572XL,
which includes 50 mV of hysteresis on all of its inputs.
Speed: The logic gate makes its decision (and makes its output show that ‘decision’) at least 20 times as fast as the ’311 does.
Simplicity: The logic gate requires no external parts, and needs only power, ground and In and Out pins. It can work without hysteresis
because its inputs transition decisively and fast.
14N.2.2 But why bother with digital?
Sub-questions:
14N.2.2.1 (Naive query:) Is This Transformation Perverse?
Is it not perverse to force an analog signal—which can carry a rich store of information on a single wire—into a crude
binary High/Low form?
Figure 2: Naive version of “digital audio”: looks foolish!
Lest you think this question is utterly crazy (it is mostly crazy), consider this promotional pitch by a company
that makes analog storage and playback IC’s.3 This company heaps scorn on silly-old digital methods, with
their binary encoding of information (ha ha! how crude!), and multiple-bit samples (oof—how clumsy!):
The technology, which is based on a multilevel storage methodology, utilizes a new approach for storing information.
Unlike the more traditional digitized voice solutions, ChipCorder enables both voice and audio signals to be stored
directly in their natural analog form in nonvolatile EEPROM memory cells. The high-density storage approach provides
an approximate 8:1 advantage over alternative digitized signal-storage technologies (which, by comparison, can store
only one of two levels—either 1 or 0—per memory cell, and therefore would require approximately eight cells to store
the same information as one ChipCorder memory cell). ChipCorder eliminates the need for external analog-to-digital
and digital-to-analog circuitry, providing a compact, single-chip, solid-state solution.
This promotion—announcing the re-discovery of the miracle of analog signal storage (perhaps you remember
analog audio tapes, and long-playing records)—inverts the usual advertising pitch. That pitch ordinarily urges
you to buy a new toothbrush or toaster because it is so up-to-date that it is digital. In defense of ChipCorder,
we should say, the device works very well. It’s only its promotional materials that are a bit odd.
Disadvantages of Digital:
Complexity: more lines are required to carry the same information
Speed: processing the numbers that encode the information sometimes is slower than handling the analog signal
AoE §10.1.1
Advantages of Digital:
Noise Immunity: Signal is born again at each gate; from this virtue flow the important applications of digital:
• Allows Stored-Program Computers (may look like a wrinkle at this stage, but turns out, of course, hugely
important);
3 The company is Information Storage Devices, of San Jose, CA.
Class 14/D1: Logic Gates
3
How does the digital gate differ from a comparator like, say, the LM311?
Input & output circuitry: The ’311 is more flexible (you choose its threshold & hysteresis; you choose its output range). Most digital
gates include no hysteresis. The exceptional gates that include hysteresis usually are those intended to listen to a computer bus
(assumed to be extra noisy), and the inputs to some larger IC’s such as the PAL that you will meet in our labs, XC9572XL,
which includes 50mV of hysteresis on all of its inputs.
Speed: The logic gate makes its decision (and makes its output show that ‘decision’) at least 20 times as fast as the ’311 does.
Simplicity: The logic gate requires no external parts, and needs only power, ground and In and Out pins. It can work without hysteresis
because its inputs transition decisively and fast.
14N.2.2
But why bother with digital?
Sub-questions:
14N.2.2.1 (Naive query:) Is This Transformation Perverse?
Is it not perverse to force an analog signal—which can carry a rich store of information on a single wire—into a crude
binary High/Low form?
Figure 2: Naive version of “digital audio”: looks foolish!
Lest you think this question is utterly crazy (it is mostly crazy), consider this promotional pitch by a company
that makes analog storage and playback IC’s.3 This company heaps scorn on silly-old digital methods, with
their binary encoding of information (ha ha! how crude!), and multiple-bit samples (oof—how clumsy!):
The technology, which is based on a multilevel storage methodology, utilizes a new approach for storing information.
Unlike the more traditional digitized voice solutions, ChipCorder enables both voice and audio signals to be stored
directly in their natural analog form in nonvolatile EEPROM memory cells. The high-density storage approach provides
an approximate 8:1 advantage over alternative digitized signal-storage technologies (which, by comparison, can store
only one of two levels—either 1 or 0—per memory cell, and therefore would require approximately eight cells to store
the same information as one ChipCorder memory cell). ChipCorder eliminates the need for external analog-to-digital
and digital-to-analog circuitry, providing a compact, single-chip, solid-state solution.
This promotion—announcing the re-discovery of the miracle of analog signal storage (perhaps you remember
analog audio tapes, and long-playing records)—inverts the usual advertising pitch. That pitch ordinarily urges
you to buy a new toothbrush or toaster because it is so up-to-date that it is digital. In defense of ChipCorder,
we should say, the device works very well. It’s only its promotional materials that are a bit odd.
Disadvantages of Digital:
Complexity: more lines required to carry same information
Speed: processing the numbers that encode the information sometimes is slower than handling the analog signal
AoE §10.1.1
Advantages of Digital:
Noise Immunity: Signal is born again at each gate; from this virtue flow the important applications of digital:
• Allows Stored-Program Computers (may look like a wrinkle at this stage, but turns out, of course, hugely
important);
3 The
company is Information Storage Devices, of San Jose, CA.
– Allows transmission and also unlimited processing without error—except for round-off/quantization,
that is, deciding in which binary bin to put the continuously variable quantity;
– can be processed out of “real time:” at one’s leisure. (This, too, is just a consequence of the noise
immunity already noted.)
14N.2.2.2 Can we get the advantages of digital without loss?
Not without any loss, but one can carry sufficient detail of the Stradivarius’ sound using the digital form—as
we try to suggest in fig. 3—by using many lines:
Figure 3: Digital audio done reasonably
A single bit allowed only two categories: it could say, of the music signal, only “in High-half of range”
or “in Low-half.” Two bits allow finer discrimination: four categories (top-quadrant, third-quadrant, second-quadrant, bottom-quadrant). Each additional line or bit doubles the number of slices we apply to the full-scale range.
Our lab computer uses 8 bits—a “byte”†—allowing 2⁸ = 256 slices.‡ Commercial CDs and most contemporary digital audio formats use 16 bits, permitting 2¹⁶ ≈ 65,000 slices. To put this another way, 16-bit audio
takes each of our finest slices (about as small as we can handle in lab, with our noisy breadboarded circuits)
and slices that slice into 2⁸ sub-slices. This is slicing the voltage very fine indeed.
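The arithmetic of the preceding paragraph is a one-liner, and worth confirming:

```python
# Each added bit doubles the number of slices applied to the full-scale range.

def slices(n_bits):
    """Number of quantization slices available with n_bits bits."""
    return 2 ** n_bits

# 1 bit: 2 halves; 8 bits (a byte): 256 slices; 16 bits: 65536 slices.
# And 16-bit audio cuts each 8-bit slice into 2**8 = 256 sub-slices:
sub_slices = slices(16) // slices(8)
```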
In fig. 4 we demonstrate the effect of 1, 2, 3, 4, 5, and finally 8 bits in the analog-digital conversions (both
into digital form and back out of it, to form the recovered analog signal that is shown).
Figure 4: Increasing the number of bits improves detail in the digital representation of an analog quantity
14N.2.3 Alternatives to Binary
AoE §14.5.5.2
Binary is almost all of digital electronics, because binary coding is so simple and robust. But a handful
of integrated circuits do use more than two levels. A few use three or four voltage levels, internally, in
order to increase data density: NAND flash, for example, uses four levels. The data density improvement is
impressive: whereas 8 storage cells can store 256 values in binary form (2⁸), they can store 65K values in
† Some purists insist on calling an 8-bit quantity an “octet,” recalling a long-ago time when “byte” had not yet come to mean 8 bits, but
depended on the word-length of a particular computer. Byte now is the standard way to refer to an 8-bit quantity.
‡ The SiLabs controller that some people will choose in the micro labs can cut 16 times finer, with 12 bits. But we cannot make
practical use of that resolution on our breadboarded circuits.
quaternary form.§ But this density comes at the cost of diminished simplicity and noise immunity, and so remains rare.
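The density comparison above is easy to check: 8 two-level cells versus 8 four-level cells.

```python
# Distinct values storable in 8 memory cells, binary vs quaternary.

binary_capacity = 2 ** 8      # 256 distinct values
quaternary_capacity = 4 ** 8  # 65536 distinct values ("65K")
advantage = quaternary_capacity // binary_capacity  # 256-fold in value count
```

(The promotional “8:1” figure earlier in the chapter counts bits per cell for analog storage, a different comparison.)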
More important instances of non-binary digital encoding occur in data transmission protocols. Telephone
modems (“modulator-demodulators”) confronted the tight limitation of the system’s 3.4kHz frequency limit,
and managed to squeeze more digital information into that bandwidth by using both multiple levels (more
than binary) and phase encoding. One combination of the two, called “16QAM,” squeezes 4 bits’ worth of
information into each “symbol,” pushing the data rate to 9600 bits/second. Does that not sound quite impossible on a 3.4kHz line? It is, of course, possible. The process of increasing coding complexity has continued,
pushing data rates still higher.
A plot like a phasor diagram—with “real” and “imaginary” axes—can display the several phases as well as
amplitudes of single-bit encodings. The plot in fig. 5 shows “4QAM:” single-amplitude, four-phase-angle
encoding that permits four unique “symbols” (or, “two bits per symbol”) by summing two waveforms that
are 90 degrees out of phase (the “quadrature” of the QAM acronym).4
Figure 5: “Constellation” diagram, drawn by vector display—“phasor”-like: showing QAM (phase- and amplitude-encoded digital information)
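The quadrature summing described above can be sketched in code. This is an illustration under stated assumptions: unit-amplitude carriers and this particular bit-to-point mapping, neither of which is specified in the text.

```python
import math

# 4QAM sketch: each 2-bit symbol chooses the signs of two carriers that are
# 90 degrees out of phase ("quadrature"); their sum is the transmitted wave.
# All four symbols share a single amplitude but sit at four phase angles.
# The bit-to-point mapping below is an assumption for illustration.

CONSTELLATION = {          # 2 bits -> (I, Q) signs
    (0, 0): (+1, +1),
    (0, 1): (-1, +1),
    (1, 1): (-1, -1),
    (1, 0): (+1, -1),
}

def qam_sample(bits, t, f=1.0):
    """Transmitted voltage at time t for a given 2-bit symbol."""
    i, q = CONSTELLATION[bits]
    return i * math.cos(2 * math.pi * f * t) + q * math.sin(2 * math.pi * f * t)
```

Since every (I, Q) point has the same distance from the origin, the four symbols differ only in phase, matching the single-amplitude, four-phase description of 4QAM.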
Denser encodings are possible, using the same scheme. The constellation diagrams in fig. 6 show 16QAM,
which carries 4 bits of information in each “symbol,” one symbol being represented by one dot on the diagram.5
Figure 6: Denser “16QAM” encoding shown in constellation diagram
§ Fun fact (according to Wikipedia): a quaternary digit sometimes is called a crumb, joining its foody companions, “bit,” “byte” and
“nybble” (a 4-bit binary value).
4 Source: National Instruments tutorial, “Quadrature Amplitude Modulation,” URL: http://zone.ni.com/devzone/cda/tut/p/id/3896.
This tutorial lets one watch, in slow motion, the relation between the time-domain waveform and this phase-and-amplitude constellation
plot. It helps.
5 Source: Acterna tutorial, URL: chapters.scte.org/cascade/QAM Overview and Testing.ppt. Acterna was acquired by JDSU, Milpitas, CA, in 2005.
The QAM waveforms (seen in time-domain) are exceedingly weird:a
Figure 7: QAM waveforms
a Source: Acterna tutorial, again.
Complex encoding schemes like QAM are appropriate to data transmission, but—luckily for us—the storage
and manipulation of digital data remains overwhelmingly binary, and therefore much more easily understood.
14N.2.4 Digital Processing makes sense, more obviously, when the information to be handled is digital from the outset.
Here are two examples of cases where the information is born digital; no conversion is needed: numbers
(e.g., pocket calculator) & words (word processor).6
Figure 8: Example of all-digital system
Since the information never exists in analog form (except perhaps in the mind of the human), it makes good
sense to manipulate the information digitally: as sets of codes.
14N.3 Number codes: two’s-complement
AoE §10.1.3
Binary numbers may be familiar to you, already: each bit carries a weight double its neighbor’s:
Figure 9: Binary numbers: unsigned and 2’s comp: the difference is in the sign given to the MSB
6 You may be too young to know that people once used calculators rather than smartphones. The antique desktop in the figure (fig. 8)
comports with the antique notion of ‘pocket calculator.’
That’s just analogous to decimal numbers, as you know: it’s the way we would count if we had just one finger.
The number represented is just the sum of all the bit values: 1001 = 2³ + 2⁰ = 9₁₀
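The sum-of-bit-weights reading is mechanical enough to write down directly:

```python
# Value of an unsigned binary string: sum the weights of the 1 bits,
# as in 1001 = 2**3 + 2**0 = 9.

def unsigned_value(bits):
    """Read a string like '1001' as an unsigned binary number."""
    return sum(2 ** i for i, b in enumerate(reversed(bits)) if b == '1')
```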
AoE §10.1.3.3
14N.3.1 Negative Numbers
But 1001 need not represent 9₁₀. Whether it does or not, in a particular setting, is up to us, the humans. We
decide what a sequence of binary digits is to mean. Now, that observation may strike you as the emptiest
of truisms, but it is not quite Humpty Dumpty’s point. We are not saying 1001 means whatever I want it to
mean; only that sometimes it is useful to let it mean something other than 9₁₀. In a different context it may
make more sense to let the bit pattern mean something like “turn on stove, turn off fridge and hot plate, turn
on lamp.” And, more immediately to the point, it often turns out to be useful to let 1001 represent not 9₁₀,
but a negative number.
The scheme most widely used to represent negative numbers is called “two’s-complement.” The relation of a
2’s-comp number to positive or “unsigned” binary is extremely simple:
the 2’s comp number uses the most-significant-bit (MSB)—the leftmost—to represent a negative
number of the same weight as for the unsigned number.
So, 1000 is +8 in unsigned binary; it is −8 in 2’s comp. And 1001 is −7. Two more examples appear in
fig. 10: the 4-bit 1011 can represent a negative five, or an unsigned decimal eleven; 0101, in contrast, is
(positive-) 5 whether interpreted as “signed” (that is, 2’s-comp) or “unsigned.”
         Unsigned            2’s Comp
1011     8 + 2 + 1 = 11₁₀    −8 + 2 + 1 = −5₁₀
0101     4 + 1 = 5₁₀         4 + 1 = 5₁₀
Figure 10: Examples of 4-bit numbers interpreted as unsigned versus signed
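The two interpretations in fig. 10 can be checked with a short Python sketch (the function name is ours):

```python
def twos_comp_value(bits: str) -> int:
    # Give the MSB a *negative* weight; read the remaining bits as ordinary unsigned binary.
    msb_weight = 2 ** (len(bits) - 1)
    rest = int(bits[1:], 2) if len(bits) > 1 else 0
    return (-msb_weight if bits[0] == "1" else 0) + rest

print(twos_comp_value("1011"))  # -8 + 2 + 1 = -5
print(twos_comp_value("0101"))  # 4 + 1 = 5
```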
AoE §10.1.3.3
This formulation is not the standard one. More often, you are told a rule for forming the 2’s comp representation of a negative number, and for converting back. This is what AoE does for you:
To form a negative number, first complement each of the bits of the positive number (i.e., write
1 for 0, and vice versa; this is called the “1’s complement”), then add 1 (that’s the “2’s complement”).
AoE §10.1.3.4
It may be easier to form and read 2’s comp if, as we’ve suggested, you simply read the MSB as a large negative
number, and add it to the rest of the number, which represents a (smaller-) positive number interpreted exactly
as in ordinary unsigned binary.
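For comparison, here is the complement-and-add-one rule on 4-bit values, sketched in Python (the function name is ours); negating +7 yields the pattern 1001, which the MSB-weight reading gives as −8 + 1 = −7:

```python
def negate_4bit(bits: str) -> str:
    # The textbook rule: complement each bit (the 1's complement), then add 1.
    ones = "".join("1" if b == "0" else "0" for b in bits)
    return format((int(ones, 2) + 1) % 16, "04b")  # wrap to 4 bits, like real hardware

print(negate_4bit("0111"))  # +7 -> "1001", i.e. -7
```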
2’s-comp may seem rather odd and abstract to you, just now. When you get a chance to use 2’s comp it should
come down to earth for you. When you program your microcomputer, for example, you will sometimes need
to tell the machine exactly how far to “branch” or “jump” forward or back, as it executes a program. In the
example below we will do that by providing a 2’s-comp value that is either positive or negative. But, first, a
few words on the hexadecimal number format.
14N.3.2 Hexadecimal Notation
Let’s digress to mention a notation that conveniently can represent binary values of many bits. This is hexadecimal notation—a scheme in which 16 rather than 10 values are permitted. Beyond 9 we tack on six more
values, A, B, C, D, E and F, representing the decimal equivalents 10 through 15.
The partial table in fig. 11 illustrates how handily hexadecimal notation—familiarly called “hex”—makes
multi-bit values readable, and “discussable.”
Binary      Hexadecimal   Decimal
0111        7             7
1001        9             9
1100        C             12
1111        F             15
10010110    96            150
Figure 11: Hexadecimal notation puts binary values into a compact representation
You don’t want to baffle another human by saying something like, “My counter is putting out the value One,
Zero, Zero, One, Zero, One, One, Zero.” “What?,” asks your puzzled listener. Better to say “My counter is
putting out the value 96h.” One can indicate the hex format, as in this example, by appending “h.” Or one can
show it instead by writing “0x96.”7
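As it happens, Python understands all three notations, which makes for a quick check of the last row of fig. 11:

```python
value = 0b10010110          # the 8-bit pattern from the table
print(format(value, "X"))   # the hex digits: 96
print(value)                # the decimal reading: 150
assert value == 0x96        # Python spells "96h" with the 0x prefix
```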
Now, back to the use of 2’s-comp to tell a computer to jump backward or forward. The machine doesn’t need
to use different commands for “jump forward” versus “jump back.” Instead, it simply adds the number you
feed it to a present value that tells it where to pick up an instruction. If the number you provide is a negative
number (in 2’s comp), program execution hops back. If the number you provide is a positive number, it jumps
forward.
For example:
    0 . . . 0 1 0 4    PRESENT VALUE (LOCATION)
  + F . . . F F F C    DISPLACEMENT (BACK 4)
    0 . . . 0 1 0 0    NEXT VALUE

    0 . . . 0 1 0 4    PRESENT VALUE (LOCATION)
  + 0 . . . 0 0 0 4    DISPLACEMENT (AHEAD 4)
    0 . . . 0 1 0 8    NEXT VALUE
Figure 12: Example of 2’s comp use: plain adder can add or subtract, depending on sign of addend: 8051 jump offset
Perhaps this example will begin to persuade you that there can be an interesting, substantial difference between “subtracting A from B” and “adding negative A to B.” Subtraction—if taken to mean what “subtraction” says—requires special hardware; addition of a negative number, in contrast, uses the same hardware
as an ordinary addition. That’s just what makes the use of 2’s-comp a tidy scheme for the microcomputer’s
branch or jump operations.
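A sketch of that plain adder in Python (the function name and the 16-bit width are our assumptions, chosen to match fig. 12):

```python
def next_address(pc: int, displacement: int, width: int = 16) -> int:
    # A plain adder that wraps at the register width: adding the 2's-comp
    # pattern FFFC is the same as subtracting 4.
    return (pc + displacement) % (1 << width)

print(hex(next_address(0x0104, 0xFFFC)))  # back 4:  0x100
print(hex(next_address(0x0104, 0x0004)))  # ahead 4: 0x108
```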
14N.4 Combinational Logic
Explaining why one might want to put information into digital form is harder than explaining how to manipulate digital signals. In fact, digital logic is pleasantly easy, after the strains of analog design.
14N.4.1 Digression: a little history
Skip this subsection if you take Henry Ford’s view, that “History is more or less bunk.”
7 The RIDE assembler/compiler that you soon will meet recognizes both conventions.
14N.4.1.1 Boole and deMorgan
It’s strange—almost weird—that the rules for computer logic were worked out pretty thoroughly by English
mathematicians in the middle of the 19th century, about 100 years before the hardware appeared that could
put these rules to hard work. George Boole worked out most of the rules, in an effort to apply the rigor of
mathematics to the sort of reasoning expressed in ordinary language. Others had tried this project—notably,
Aristotle and Leibniz—but had not gotten far. Boole could not afford a university education, and instead
taught high school while writing papers as an amateur, years before an outstanding submission won him a
university position.
Boole saw assertions in ordinary language as propositions concerning memberships in classes. For example,
Boole wrote, of a year when much of Europe was swept by revolutions,
. . . during this present year 1848 . . . aspects of political change [are events that] . . . timid men
view with dread, and the ardent with hope.
And he explained that this remark could be described as a statement about classes:
. . . by the words timid, ardent we mark out of the class. . . those who possess the particular
attributes. . . 8
Finally, he offered a notation to describe compactly claims about membership in classes:
. . . if x represent “Men”, y “Rational beings” and z “Animals” the equation
x = yz
will express the proposition
“Men and rational animals are identical”
So claimed Mr. Boole, the most rational of animals!
And he then stated a claim that suggests his high hopes for this system of logic:
It is possible to express by the above notation any categorical proposition in the form of
an equation. For to do this it is only necessary to represent the terms of the proposition . . . by
symbols . . . and then to connect the expressions . . . by the sign of equality.9
Given Boole’s ambitions, it is not surprising that the sort of table that now is used to show the relation between
inputs and outputs—a table that you and I may use to describe, say, a humble AND gate—is called by the
highfalutin name, “truth table.” No engineer would call this little operation list by such a name, and it was
not Boole, apparently, who named the table, but the philosopher Ludwig Wittgenstein, 70-odd years later.10
Since Boole’s goal was to systematize the analysis of thinking, the name seems quite appropriate, odd though
it may sound to an engineer.
Before we return to our modest little electronics networks, which we will describe in Boole’s notation, let’s
take a last look at the sort of sentence that Boole liked to take on. This example should make you grateful
that we’re only turning LED’s On or Off, in response to a toggle switch or two! He proposes to render in
mathematical symbols a passage from Cicero, treating conditional propositions.
Boole presents an equation, “y(1-x) + x(1-y) + (1-x)(1-y) = 1,” and explains,
8 “The Nature of Logic” (1848), in Grattan-Guinness, Ivor, ed., George Boole Selected Manuscripts on Logic and its Philosophy (1997), p. 5.
9 “On the Foundations of the Mathematical Theory of Logic . . . ” (1856) in Grattan-Guinness, p. 89.
10 Wittgenstein’s sole book published in his lifetime is the apparent source of the name. If “truth table” sounds a bit intimidating, how about the title to Wittgenstein’s book: Tractatus Logico-Philosophicus (1921)!
The interpretation of which is: —Either Fabius was born at the rising of the dogstar, and
will not perish in the sea; or he was not born at the rising of the dogstar, and will perish in the
sea; or he was not born at the rising of the dogstar, and will not perish in the sea.
One begins to appreciate the compactness of the notation, on reading the “interpretation” stated in words.
(You’ll recognize, if you care to parse the equation, that Boole writes “(1-x)” where we would write x* (that
is, x false).)11
So, if we call one of Boole’s propositions B (re Fabius’ birth), the other P (re his perishing), in our contemporary notation we would write the function f as
f = B·P* + B*·P + B*·P*
This rather indigestible form can be expressed more compactly as a function that is false only under the condition that B and P are true: f = (B·P)*.
The truth table would look like:

B  P  f
0  0  1
0  1  1
1  0  1
1  1  0
And probably you recognize this function as NAND.
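A brute-force check in Python, over all four rows, that Boole’s three-term sum collapses to NAND:

```python
# f = B·P* + B*·P + B*·P*, evaluated for every combination of B and P,
# agrees with (B·P)* -- i.e., with NAND(B, P).
for B in (False, True):
    for P in (False, True):
        three_terms = (B and not P) or (not B and P) or (not B and not P)
        assert three_terms == (not (B and P))
print("all four rows agree")
```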
Comforting Truth #1
AoE §10.1.4.5
To build any digital device (including the most complex computer) we need only three logic functions:
AND
OR
NOT
Figure 13: Just three fundamental logic functions are necessary
All logic circuits are combinations of these three functions, and only these.
Comforting and remarkable Truth #2
Perhaps more surprising: it turns out that just one gate type (not three) will suffice to let one build any digital
device.
The gate type must be NAND or NOR; these two types are called “universal gates.”
Figure 14: “Universal” gates: NAND and NOR
DeMorgan—a pen-pal of Boole’s—showed that what looks like an AND function can be transformed into
OR (and vice versa) with the help of some inverters. This is the powerful trick that allows one gate type to
build the world.
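The claim is easy to demonstrate in a Python sketch (function names are ours) that builds NOT, AND, and OR out of NAND alone:

```python
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):     return nand(a, a)              # tie the two inputs together
def and_(a, b):  return not_(nand(a, b))        # NAND followed by an inverter
def or_(a, b):   return nand(not_(a), not_(b))  # deMorgan: invert the inputs, then NAND

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b) and or_(a, b) == (a | b)
```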
14N.4.2 deMorgan’s Theorem
This is the only important rule of Boolean algebra you are likely to have to memorize (the others that we use
are pretty obvious: propositions like A + A* = 1).
11 Hawking, Stephen, ed., God Created the Integers (2005), p. 808.
AoE §10.1.7
deMorgan’s Theorem (in graphic rather than algebraic form)
You can swap shapes if at the same time you invert all inputs and outputs.
Figure 15: deMorgan’s theorem in graphic form
When you do this you are changing only the symbol used to draw the gate; you are not changing the logic—
the hardware. (That last little observation is easy to say and hard to get used to. Don’t be embarrassed if it
takes you some time to get this idea straight.)
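Algebraically the shape-swap reads A*·B* = (A+B)* and A*+B* = (A·B)*; here is a four-row check in Python, using 1 − x as the inverter:

```python
for a in (0, 1):
    for b in (0, 1):
        # AND shape with all terminals inverted behaves as OR, and vice versa:
        assert (1 - a) & (1 - b) == 1 - (a | b)   # A*·B* = (A+B)*
        assert (1 - a) | (1 - b) == 1 - (a & b)   # A*+B* = (A·B)*
print("deMorgan holds on all four input combinations")
```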
So, any gate that can invert can carry out this transformation for you. Therefore some people actually used
to design and build with NAND’s alone, back when design was done with discrete gates. These gates came
a few to a package, and that condition made the game of minimizing package-count worthwhile; it was nice
to find that any leftover gates were “universal”. Now, when designs usually are done on large arrays of gates,
this concern has faded away. But deMorgan’s insight remains important as we think about functions and draw
them, in the presence of the predominant active-low signals.
This notion of DeMorgan’s, and the “active low” and “assertion-level” notions will drive you crazy for a
while, if you have not seen them before. You will be rewarded in the end (toward the end of this course, in
fact), when you meet a lot of signals that are “active low.” Such signals spend most of their lives close to 5
volts (or whatever voltage defines a logic High), and go low (close to zero volts) only when they want to say
“listen to me!”
14N.4.3 Active High versus active Low
AoE §10.1.2.1
Signals come in both flavors. Example: Two forms of a signal that says “Ready”:
Figure 16: Active-High versus Active-Low Signals
A signal like the one shown in the top trace of fig. 16 would be called “Ready;” one like the lower trace would
be called “Ready*,” the star (standing in for the printed overbar) indicating that it is “active-low:” “true-when-low.” Signals are made active low
not in order to annoy you, but for good hardware reasons. We will look at those reasons when we have seen
how gates are made. But let’s now try manipulating gates, watching the effect of our assumption about which
level is True.
14N.4.3.1 Effect on logic of active level: Active High versus Active Low
AoE §10.1.7
As we considered what the bit pattern “1001” means (§14N.3, above), treated as a number, we met the curious
fact—perhaps pleasing—that we can establish any convenient convention to define what the bit pattern means.
Sometimes we want to call it “9₁₀” (“[positive] nine”), sometimes we will want to call it “−7₁₀” (“negative
7”). It’s up to us.
The same truth appears as we ask ourselves what a particular gate is doing in a circuit. The gate’s operation is
fixed in the hardware (the little thing doesn’t know or care how we’re using it); but what its operation means
is for us to interpret.
That sounds vague, and perhaps confusing; let’s look at an example (just deMorgan revisited, you will recognize). Most of the time—at least when we first meet gates—we assume that High is true. So, for example,
when we describe what an AND gate does, we usually say something like
“The output is true if both inputs are true.”
The usual AND truth table of course says the same thing, symbolically.
A  B  A·B          A  B  A·B
0  0   0           F  F   F
0  1   0    or:    F  T   F
1  0   0           T  F   F
1  1   1           T  T   T
“AND” gate using active-high signals—and truth table in more abstract form, levels not indicated
The right-hand form of the table—showing Truth and Falsehood—is the more general. On the left, we’ve
shown the table written with 1’s and zero’s, which tend to suggest high voltages and low. So far, so familiar. . . .12
But—as deMorgan promised—if we declare that Zeros interest us rather than Ones, at both input and output,
the tables look different, and the gate whose behavior the table describes evidently does something different:
A  B  A·B          A  B  A·B
0  0   0           T  T   T
0  1   0    or:    T  F   T
1  0   0           F  T   T
1  1   1           F  F   F
Example of the effect of active-low Signals: “AND” gate (so-called!) doing the job of OR’ing Lows
We get a Zero out if A or B is Zero. In other words we have an OR function, if we are willing to stand signals
on their heads. We have, then, a gate that OR’s lows, and we should draw it to show this behavior:
Figure 17: AND gate drawn so as to show that it is OR’ing LOW’s
This is the correct way to draw an AND gate when it is doing this job, handling active-low signals.
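Here is the double reading in a Python sketch (names ours): the hardware computes a plain AND of the voltage levels, yet with active-low signals (0 = asserted) it behaves as OR:

```python
def and_gate(a: int, b: int) -> int:
    return a & b  # what the hardware does to the voltage levels, regardless of our reading

for a in (0, 1):
    for b in (0, 1):
        out_asserted = (and_gate(a, b) == 0)           # output low = asserted
        assert out_asserted == ((a == 0) or (b == 0))  # ...whenever input a OR input b is asserted
```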
It turns out that often we need to work with signals ‘stood on their heads’: active low. Note, however, that we
call this piece of hardware an AND gate regardless of what logic it is performing in a particular circuit. To
call one piece of hardware by two names would be too hard on everyone.
We will try to keep things straight by calling this an AND gate, but saying that it performs an OR function
(OR’ing lows, here). Sometimes it’s clearest just to refer to the gate by its part number: ‘It’s an ’08.’ We all
should agree on that point, at least!
14N.4.3.2 Missile-launch Logic
Here’s an example—a trifle melodramatic—of signals that are active low: you are to finish the design of a
circuit that requires two people to go crazy at once, in order to bring on the third world war:
12 AoE declares its intention to distinguish between “1” and HIGH in logic representations (10.1.2.1). We are not so pure. We will
usually write “1” to mean HIGH, because the “1” is compact and matches its meaning in the representation of binary numbers.
Figure 18: Logic that lights fuse if both operators push Fire* at the same time
How should you draw the gate that does the job?13 What is its conventional name?14
Just now, these ideas probably seem an unnecessary complication. By the end of the course—to reiterate a
point we made earlier—when you meet the microcomputer circuit in which just about every control signal
is active low, you will be grateful for the notions “active low” and “assertion-level symbol.”
14N.4.3.3 Why are we doing this to you?! Why ‘active low’s’?
Here, in fig. 19, is a network of gates that you will implement if you build the microcomputer from many IC’s
in lab µ1 (this is logic that you will “wire” in a programmable array of gates, a PAL).15
Figure 19: Strange-looking nest of active-low signals—imposed on us
Did we rig these signals as active-low just in order to give you a challenge, to exercise your brain (as,
admittedly, we did in the missile-launch problem of fig. 18 on the preceding page)?
No. Our gates look funny (assuming that you share most people’s sense that all those bubbles look like a
strange complication) because we were forced to deal with active-low signals. It was not our choice.
You can see this in the names of the signals going in and out of our nest of gates: every signal is active-low,
except the “A” and “D” lines, which carry Address and Data (and therefore have no “inactive” state: Low and
High, in their case, are of equal dignity, equally “true”). The RAM write signal is “WE*;” the enables of the
chip on the right both show bubbles that indicate that they, too, are active-low.
And here is a corner of the processor that is the brains of the computer: its output control signals, WR*, RD*
13 You need a gate that responds to a coincidence (AND) of two Low’s, putting out, for that combination, a Low output. This is a gate
that AND’s lows, and you should draw it that way: AND shape, with bubbles on both inputs and on the output.
14 The name of this gate is OR, even though in this setting it is performing an AND function upon active-low signals.
15 PAL is an acronym for “Programmable Array Logic,” a rearrangement of the earlier name, “Programmable Logic Array.” Whereas
the acronym “PLA” sounds like the action of spitting out something distasteful, “PAL” sounds rather obviously like a gadget that is user-friendly. We should admit, however, that a PAL is not just a renamed PLA. The name “PAL” describes a subset of PLA’s, a particular
configuration: an OR of AND terms. You’ll find more on this topic next time, in Note S151.
and PSEN*16 also are active-low.
Figure 20: Control signals from the processor also are active-low
So, we need to get used to handling active-low signals.
We will be able to explain in a few minutes why control signals typically are active low. To provide that
explanation we need, first, to look inside some gates, and soon we will. First, however, let’s glance at the
method normally used to do digital logic: not a handful of little gates like the ones described
in § 14N.6.1 on page 17, below. We will discover the happy fact that a logic compiler can make the pesky
problem of “active low” signals very easy to handle.
14N.5 The usual way to do digital logic: programmable arrays
Small packages of gates—like the four NAND’s in a 74HC00 package, which you’ll meet in the first digital lab—do
not provide an efficient way to do digital logic. It is better to build digital designs by programming a circuit
that integrates a large number of gates on one chip. In this course, we will use small versions of such an IC,
integrating about 1600 gates on an IC (yes, that’s small by present standards). After this course you
may meet some of the larger parts.
The advantages of these IC’s are two: first, they are more flexible than hard-wired designs. You can change
the logic after wiring the circuit, if you like. (You can reprogram the part even while it is soldered onto a
circuit board.) Second, they are much cheaper than hard-wired designs, simply because of their scale (a few
dollars for these 1600 gates).
First, a hasty sketch of the sorts of parts that are available:
• gate arrays. These come in two categories. . .
– application-specific IC’s (ASIC’s). These are expensive to design (perhaps $100,000 to implement a design), and their use pays only if one plans to produce a very large number.
– field-programmable gate arrays (FPGA’s). These cost more per part, but present no initial design
cost beyond the time needed to generate good code. These devices store the patterns that link their
gates either in volatile memory (RAM, which forgets when power is removed), or in non-volatile
memory (ROM, usually “flash” type).
• programmable logic devices (PLD’s—often called by their earlier trade-name, “PAL”–Programmable
Array Logic). These are simpler than FPGA’s, and usually are in the form of a wide OR of several wide
AND gates. We’ll be using these, soon.
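The OR-of-ANDs structure just mentioned can be modeled in a few lines of Python (an illustrative sketch only; a real PLD programs fuse patterns, not dictionaries):

```python
# Sketch of a PAL-style OR-of-ANDs ("sum of products"): each output is the OR
# of several AND (product) terms.  Here a product term is just a dict giving
# the value it requires of each input; inputs it omits are "don't care."
def sum_of_products(inputs, product_terms):
    return any(
        all(inputs[name] == want for name, want in term.items())
        for term in product_terms
    )

# Example function: out = (a AND NOT b) OR (b AND c)
terms = [{"a": True, "b": False}, {"b": True, "c": True}]
assert sum_of_products({"a": True, "b": False, "c": False}, terms) is True
assert sum_of_products({"a": False, "b": True, "c": True}, terms) is True
assert sum_of_products({"a": False, "b": False, "c": True}, terms) is False
```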
16 In case you’re impatient to know what these signal names mean, PSEN is “Program Store Enable,” a special form of read especially
for reading code rather than ordinary data. You’ll get to know these signals in the micro labs.
14N.5.1 Active-Low with PLD’s: A Logic Compiler can Help
One programs a PLD (or FPGA)—as you soon will do—with the help of a computer that runs a “logic
compiler,” a program that converts your human-readable commands into connection patterns. The logic
compiler can diminish the nastiness of “active-low” signals. In this course you will use a logic compiler—or
“Hardware Description Language” (“HDL”)—named Verilog. Verilog competes with another HDL named
VHDL.17
The technique that can make active-low’s not so annoying is to create an active-high equivalent for each
active-low signal: we define a signal that is the complement of each active-low signal.18 Then one can write
logic equations in a pure active-high world. One can forget about the active levels of particular signals. Doing
this not only eases the writing of equations; it also produces code that is more intelligible to a reader.
Here is a simple example: a gate that is to AND two signals, one active-high, the other active-low. The gate’s
output is active-low. It’s easy enough to draw this logic.19
Figure 21: Schematic of simple logic mixing active-high and active-low signals
In Verilog, the logic compiler that we will use, the equation for this logic can be written in either of two ways.
14N.5.2 An Ugly Version of this Logic
It is simpler, though uglier, to take each signal as it comes—active-high or active-low. But the result is a
funny-looking equation. Here is such a Verilog file:
module actlow_ugly_oct11(
input a_bar,
input b,
output out_bar
);
// see how ugly things look when we use the mixture of active-high and active-low signals:
assign out_bar = !(!a_bar & b);
endmodule
The file begins with a listing of all signals, input and output. We have appended “_bar” to the names of the
signals that are active-low, simply to help us humans remember which ones are active-low. The symbol “!”
(named “bang”) indicates logic inversion; “&” means “AND.”
The equation to implement the AND function is studded with “bangs,” and these make it hard to interpret.
None of these bangs indicates that a signal is false or disasserted; all reflect no more than the active level of
signals.
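Ugly though the bangs are, the equation is correct. A Python transcription of it (illustrative only; True stands for HIGH, False for LOW) confirms that it asserts the output exactly when “a” is asserted (low) and b is high:

```python
# Direct transcription of:  assign out_bar = !(!a_bar & b);
def out_bar(a_bar, b):
    return not ((not a_bar) and b)

for a_bar in (False, True):
    for b in (False, True):
        a_asserted = not a_bar                # "a" is active-low
        out_asserted = not out_bar(a_bar, b)  # so is the output
        # the gate should assert its output exactly when a is asserted AND b is high
        assert out_asserted == (a_asserted and b)
```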
17 VHDL, a bit hard to pronounce, is better than what it stands for: “Very [High Speed Integrated Circuit] Hardware Description
Language.” It was developed at the instance of the U.S. Department of Defense, whereas Verilog arose in private industry.
18 This newly-defined signal exists only “on paper” for the convenience of the human programmer; no extra hardware implementation
is implied.
19 We allowed the logic compiler to draw this for us; a human would not have drawn that final inverter; a human would have placed a
bubble at the output of the AND shape.
14N.5.2.1 . . . A Prettier Version of this Logic
If we are willing to go to the trouble of setting up an active-high equivalent for each active-low signal (admittedly, a chore), our reward will be an equation that is easy to interpret.
Here, we define a pair of active-high equivalents, after listing the actual signals as in the earlier file; we do
this for the one active-low input and for the active-low output. (The odd line “wire. . . ” tells the compiler
what sort of signal our home-made “out” is):
module actlow_pretty_oct11(
input a_bar,
input b,
output out_bar
);
wire out;
// Now flip all active-low signals, so we can treat ALL signals as active-high as we write equations
assign a = !a_bar; // makes a temporary internal signal, never assigned to a pin
assign out_bar = !out; // this output logic looks reversed. But it’s correct because the active-high
// "out" will be an input to this logic, used to generate the OUTPUT
// then see how pretty the equation looks using the all-active-high signal names
assign out = a & b;
endmodule
The equation says what we understand it to mean—active levels apart: “make something happen if both of
two input conditions are fulfilled.”
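As a sanity check, the “ugly” and “pretty” formulations describe identical hardware. A Python model of each (illustrative only, not Verilog semantics) agrees on all four input combinations:

```python
def ugly(a_bar, b):
    # assign out_bar = !(!a_bar & b);
    return not ((not a_bar) and b)

def pretty(a_bar, b):
    a = not a_bar        # flip the active-low input...
    out = a and b        # ...write the readable, all-active-high equation...
    return not out       # ...and flip back for the active-low output

for a_bar in (False, True):
    for b in (False, True):
        assert ugly(a_bar, b) == pretty(a_bar, b)
```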
In short, the logic compiler allows us to keep separate two issues that are well kept separate:
1. what are the active levels—high or low? This is disposed of near the start of the file (as in the “pretty”
version, above).
2. what logical operations do we want to perform on the inputs? (This is the interesting part of the task).
If this Verilog code is hard to digest, on first reading, don’t worry. We will look more closely at Verilog and
its handling of active-low signals next time. This all takes getting used to.
14N.6 Gate Types: TTL & CMOS
AoE §12.1.1
CMOS versus TTL
TTL—made of bipolar transistors—ruled the world for about 20 years; now its days are pretty much over,
as the ad below was meant to suggest (this ad also reminds us how excited someone can get about a few
nanoseconds: what Zytrex had to offer was an advantage of a few tens of ns over ordinary CMOS).
Figure 22: Reflections on mortality: RIP Zytrex (1985-87)
Ever heard of Zytrex, forecaster of TTL’s doom? Probably not. A short time after this ad appeared, we
noticed the following news clip in a trade newspaper:
Figure 23: Sic transit gloria Zytrecis
(Is there, perhaps, a moral to this story?)
The lust for speed goes on and on, of course. A more recent ad boasts. . .
Figure 24: Zero Delay?
Isn’t this company pushing things too far?20 Zero delay?! Well, it’s a trick, really: not exactly false. This
device is not a logic gate at all; it is only an analog switch (with low R_ON). So, any delays that occur are your
fault, customer! (It’s your stray capacitance, after all, that causes the slow-down; that’s not QuickSwitch’s
responsibility, is it?) But that’s not to say these things aren’t useful; they are useful. If load capacitance isn’t
too great, they can outrun any ordinary driver gate.
AoE §10.2.2
14N.6.1 Gate Innards: TTL vs CMOS
Figure 25: TTL & CMOS gates: NAND, NOT
AoE §10.2.3
A glance at these diagrams should reveal some characteristics of the gates:
20 The company is Integrated Device Technology, San Jose, CA.
Inputs: You can see why TTL inputs float high, and CMOS do not.
Threshold: You might guess that TTL’s threshold is off-center (low), whereas CMOS’s is approximately centered.
Output: You can see why TTL’s high is not a clean 5 V, but CMOS’ is.
Power consumption: You can see that CMOS passes no current from +5 to ground, when the output sits either high or low; you can
see that TTL, in contrast, cannot sit in either output state without passing current in (a) its input base pullup (if an input is pulled
low) or (b) in its first transistor (which is ON if the inputs are high).
14N.6.2 Thresholds and “Noise Margin”
All digital devices show some noise immunity. The guaranteed levels for TTL and for 5-volt CMOS show
that CMOS has the better noise immunity:
Figure 26: Thresholds & noise margin: TTL versus CMOS
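For concreteness, here is the arithmetic, using worst-case levels taken from standard 74-series datasheets rather than from fig. 26 itself (so treat the exact numbers as assumptions):

```python
# Worst-case guaranteed levels, in volts (assumed textbook/datasheet values:
# standard TTL, and 74HC near a 4.5 V supply).
ttl = {"VOL": 0.4, "VIL": 0.8, "VIH": 2.0, "VOH": 2.4}
hc = {"VOL": 0.1, "VIL": 1.35, "VIH": 3.15, "VOH": 4.4}

def noise_margins(levels):
    low_margin = levels["VIL"] - levels["VOL"]    # room for noise on a LOW line
    high_margin = levels["VOH"] - levels["VIH"]   # room for noise on a HIGH line
    return low_margin, high_margin

print("TTL margins (L, H):", noise_margins(ttl))  # about 0.4 V each way
print("HC margins (L, H):", noise_margins(hc))    # about 1.25 V each way
```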
Curious footnote: as AoE points out, TTL and NMOS devices are so widely used that some families of
CMOS, labeled 74xCTxx, have been taught TTL’s bad habits on purpose: their thresholds are put at TTL’s
nasty levels (“CT” means “CMOS with TTL thresholds”). We will use a lot of such gates (74HCTxx) in
our lab microcomputer, where we are obliged to accommodate the microprocessor, whose output High is as
wishy-washy as TTL’s (even though the processor is fabricated in CMOS). When we have a choice, however,
we will stick to straight CMOS, though the world has voted rather for HCT. More functions are available in
HCT than in straight HC.
Answer to the question, “Why is the typical control signal active low?”
We promised that a look inside the gate package would settle this question, and it does. TTL’s asymmetry
explains this preference for active low. If you have several control lines, each of which is inactive most of the
time, it’s better to let the inactive signals rest high, and let only the active signal be asserted low. This explanation
applies only to TTL, not to CMOS. But the conventions were established while TTL was supreme, and they
persist long after their rationale ceases to apply.
Here’s the argument: two characteristics of TTL push in favor of making signals active-low.
A TTL high input is less vulnerable to noise than a TTL low input Although the guaranteed noise margins differ by a few tenths of
a volt, the typical margins differ by more. So, it’s safer to leave your control lines High most of the time; now and then let them
dive into danger.
A TTL input is easy to drive High In fact, since a TTL input floats high, you can drive it essentially for ‘free:’ at a cost of no current
at all. So, if you’re designing a microprocessor to drive TTL devices, make it easy on your chip by letting most of the lines rest at
the lazy, low-current level, most of the time.
Both these arguments push in the same direction; hence the result—which you will be able to confirm when
you put together your microcomputer (‘Big Board’ version), where every control line is active low.21
14N.7 Noise Immunity
All logic gates are able to ignore noise on their inputs, to some degree. The test setup below shows that some
logic families do better than others, in this respect. We’ll look, first, at the simplest sort of noise rejection, dc
noise immunity; we’ll find that CMOS, with its nearly-symmetric High and Low definitions, does better than
TTL (the older, bipolar family), with its off-center thresholds. Then we’ll see another strategy for resisting
noise, differential transmission.
14N.7.1 DC noise immunity: CMOS vs TTL
Here’s the test setup we used to mix some noise into a logic-level input. We fed this noisy signal to four sorts
of logic gate, one TTL, three CMOS.
Figure 27: DC noise immunity test setup: noise added to signal, fed to 4 gate types
21 The phrase ‘control line’ may puzzle you. Yes, we are saying a little less than that every signal is active low. Data and address
lines are not. But every line that has an active state, to be distinguished from inactive, makes that active state low. Some lines have no
active versus inactive states: a data line, for example, is as ‘active’ when low as when high; same for an address line. So, for those lines
designers leave the active-high convention alone. That’s lucky for us: we are allowed to read a value like “1001” on the data lines as
“9;” we don’t need to flip every bit to interpret it! So, instead of letting the active low convention get you down, count your blessings: it
could be worse.
Progressively Increase Noise Level . . . First, moderate noise: all gate types succeed. In the left-hand
image of fig. 28, all gates ignore the triangular noise. These gates, in other words, are showing off the
essential strength of digital devices.
Figure 28: Moderate noise: all gate types ignore the noise; more noise fools TTL gates
In the right-hand image of fig. 28, the TTL and HCT parts fail. They fail when the noise is added to a low
input. HCT fails when TTL fails, and this makes sense, since its thresholds have been adjusted to match those
of TTL.22 CMOS did better than TTL because of its larger noise margin.
Much Noise: all but Schmitt Trigger fail: When we increase the noise level further, even CMOS fails.
Figure 29: Much noise: all gate types fail except a gate with hysteresis (HC14)
But we included one more gate type, in the test shown in fig. 29, a gate that does even better than the CMOS
inverter. The one gate not fooled by the large noise amplitude, shown on the bottom trace, is a gate with
built-in hysteresis (a 74HC14).
Seeing this gate succeed where the others fail might lead you to expect that all logic gates would evolve
to include hysteresis. They don’t, though, because hysteresis slightly slows the switching, and the concern
for speed trumps the need for best noise immunity. This rule holds except in gates designed for especially
noisy settings. Gates designed to receive (“buffer”) inputs from long bus lines, for example, often do include
hysteresis.
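The hysteresis advantage is easy to mimic in a toy simulation. The sketch below compares a single-threshold input against one with hysteresis; the threshold voltages are illustrative values we chose, not 74HC14 datasheet numbers:

```python
def single_threshold(v_in, v_th=2.5):
    """One switching threshold, like a plain CMOS input (threshold illustrative)."""
    return 1 if v_in > v_th else 0

def schmitt(v_in, prev, v_hi=2.9, v_lo=2.1):
    """Input with hysteresis: which threshold applies depends on the present
    output state. Thresholds are illustrative, not datasheet values."""
    if prev == 0 and v_in > v_hi:
        return 1
    if prev == 1 and v_in < v_lo:
        return 0
    return prev

# A logic low (1.8 V, uncomfortably close to threshold) with 1 V of triangular noise:
v_in = [1.8 + n for n in (0.0, 0.5, 1.0, 0.5, 0.0, 0.5, 1.0, 0.5, 0.0)]
plain = [single_threshold(v) for v in v_in]      # glitches high on the noise peaks
hyst, s = [], 0
for v in v_in:
    s = schmitt(v, s)
    hyst.append(s)                               # stays low: peaks never reach v_hi
```

The plain input glitches each time a noise peak crosses its single threshold; the hysteretic input stays put because, once low, it demands the higher threshold before switching.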
Below, in § 14N.7.2 on the facing page, we’ll meet a gate type that takes quite a different approach—and,
in fact, is proud of its small swing. Here is a chart in which this gate type, called LVDS (“Low Voltage
Differential Signaling”) boasts of its tiny signal swing:23
22 You may be wondering why anyone would offer HCT, whose noise-immunity is inferior to garden-variety HC CMOS. HCT exists
to allow upgrading existing TTL designs by simply popping in HCT, and also to form a logic-family bridge, between TTL-level circuitry
and CMOS. In one version of the micro labs that conclude this course, we use HCT that way: to accept signals from the handful of parts
that deliver TTL output levels rather than CMOS. See, for example, the HCT139, a 2:4 decoder that receives TTL-level signals from the
microcontroller (Lab µ2 and “Big Picture” showing the full micro circuit).
23 National Semiconductor: http://www.national.com/appinfo/lvds/files/ownersmanual.pdf, § 5-4.
Figure 30: Small signal swing can be a virtue, but requires differential signaling
The small swings of LVDS gates afford two side benefits: low EMI (“Electromagnetic Interference” emissions) and reduced disturbance of the power supply. We call these “side” benefits because the fundamental
strength of the low swings is that the logic can give good noise immunity at low supply voltages, as
we’ll argue in § 14N.7.2.24
14N.7.2
Differential transmission: another way to get good noise immunity
Recent logic devices have been designed for ever-lower power supplies—3.3V, 2.5V and 1.8V—rather than
for the traditional +5V. The trend is likely to continue. Such supplies make it difficult to protect gates from
errors caused by noise riding a logic level. The 0.4V DC-noise margin available in TTL and HCMOS is
possible only because the supply voltage is relatively large. Compressing all specifications proportionately
would, for example, give 50% less DC noise margin in a 2.5V system—and things would get worse at
lower supply voltages. The problem is most severe on lines that are long, like those running on a computer
backplane.
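The proportional-compression arithmetic is simple enough to sketch, scaling the 0.4V margin linearly with supply voltage (the pessimistic “compress all specifications” assumption made above):

```python
def scaled_noise_margin(v_supply, v_ref=5.0, margin_ref=0.4):
    """Scale the 0.4 V DC noise margin of 5 V logic proportionately with
    supply voltage, per the proportional-compression assumption in the text."""
    return margin_ref * v_supply / v_ref

m_2v5 = scaled_noise_margin(2.5)   # 0.2 V: 50% less margin than at 5 V
m_1v8 = scaled_noise_margin(1.8)   # 0.144 V: worse still at lower supplies
```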
A solution to the problem has been provided by differential drivers and receivers. These send and receive not
a single signal, but a signal and its logical complement, on two wires.
[Figure 31 labels: LVDS driver and receiver, each running between ground and V+ (e.g., 1.8V); small swing (0.35V) on the differential pair]
Figure 31: Differential signals can give good noise immunity despite low power supply voltages
Typically, noise will affect the two signals similarly, so subtracting one from the other will strip away most
of the impinging noise. This is a process (and a figure) you saw back in the analog differential-amplifier
lab; here we find nothing new, except that the technique is applied to digital signals. The differential drivers
called LVDS25 transmit currents rather than voltages; these currents are converted to a voltage difference by
a resistor placed between the differential lines at the far end of the signal lines, the receiving end. (This is
nice, “terminating” the lines, forestalling “reflections.” See the note on Transmission Lines near the end of
this book.)
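The subtraction trick can be sketched in a few lines. The numbers below are made-up illustrations (a 0.35V swing, with the same noise sample added to both lines, the “common-mode” case):

```python
# Differential signaling sketch: noise common to both lines cancels in the difference.
signal = [0, 1, 1, 0, 1]                # logic levels presented to the driver
swing = 0.35                            # differential swing, volts (illustrative)
noise = [0.4, -0.9, 0.7, 1.0, -0.3]     # noise hitting both lines equally, volts

diff_p = [s * swing + n for s, n in zip(signal, noise)]        # in-phase line
diff_n = [(1 - s) * swing + n for s, n in zip(signal, noise)]  # complement line

# The receiver looks only at the sign of the difference:
received = [1 if (p - m) > 0 else 0 for p, m in zip(diff_p, diff_n)]
```

The noise terms subtract away exactly here because they are identical on both lines; in practice they are merely similar, which is why most, not all, of the noise is stripped.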
Below, we put a standard 0/5V logic (TTL) signal into a differential driver, and injected about a volt of noise
(a triangular waveform).26
24 Fig. 30 is drawn from the National Semiconductor LVDS guide: http://www.national.com/appinfo/lvds/files/ownersmanual.pdf.
25 “LVDS” is “Low-Voltage Differential Signaling.”
26 We did this in a rather odd way: by driving the ground terminal on the transmitter IC with this 1V (peak-to-peak) triangular
waveform.
Figure 32: LVDS signals: Differential signals can survive noise greater than the signal swing
The two middle traces, above, show the output of the driver IC: a differential pair of signals, one in phase
with the TTL input, one 180° out of phase. The driver converts the voltage TTL input to currents flowing as
diff+ and diff−; at the receiver, a terminating resistor (100Ω) converts the current back into a voltage.27 The
differential swing is small: about 0.35V, dwarfed by the triangular noise, above. But since the differential
receiver looks at the difference between diff+ and diff−, it reconstructs the original TTL cleanly, rejecting
the noise.
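The swing quoted here is just Ohm’s law at the terminator. A quick check (the 3.5mA drive current is the usual LVDS figure, our assumption, not stated in the text):

```python
# Ohm's-law check of the differential swing at the terminating resistor.
i_drive = 3.5e-3      # amps steered through the loop (typical LVDS; an assumption)
r_term = 100.0        # terminating resistor, ohms (from the text)
v_swing = i_drive * r_term   # 0.35 V, matching the measured differential swing
```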
This success, with the small differential swing, illustrates how differential signaling can provide good noise
immunity to logic that uses extremely low supply voltages. The LVDS specification requires a voltage swing
of only 100mV at the receiver.
These differential gates show two other virtues: 1) they are fast (propagation delays of transmitter and receiver
each under 3ns), and 2) they emit less radiated noise than a traditional voltage gate (as noted above). They
do this, first, because of the small voltage swings, and, further, because of the symmetric current signals that
travel in the signal lines (currents of opposite signs, flowing side-by-side), signals whose magnetic fields tend
to cancel.28
14N.8
More on Gate Types
14N.8.1
Output Configurations
14N.8.1.1 active pullup
All respectable gates use active pullup on their outputs, to provide firm Highs as well as Lows:
27 The terminating resistor serves another good purpose: matching the “characteristic impedance” of the transmitting lines, it prevents
ugly spikes caused by “reflection” of a waveform that might otherwise occur as it hit the high-impedance input of the receiving gate.
28 See the National Semiconductor LVDS guide: http://www.national.com/appinfo/lvds/files/ownersmanual.pdf.
Figure 33: Passive versus active pullup output stages
You will confirm in the lab that the passive-pullup version (labelled “NMOS” in fig. 33 on the preceding
page) not only wastes power but also is slow. Why slow?29
14N.8.1.2 Open-collector/Open-drain30
AoE §10.2.4.3
Figure 34: Open drain or open collector: rarely useful
Once in a great while “open drain” or “open collector” is useful. You have seen this on the ’311 comparator.
It permits an easy way to OR multiple signals: if signals A, B and C use active-low open-drain outputs, for
example, they can share a single pullup resistor. Then any one of A, B or C can pull that shared line low. We
will meet this possible application when we treat interrupts in class µ3.
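The wired-OR behavior fits a couple of lines: the shared line is high only if no driver’s output transistor is on (a sketch; the function name is ours):

```python
def wired_or_line(asserting):
    """Shared open-drain line with one pullup resistor. Each entry is True when
    that (active-low) driver asserts, i.e. turns on its transistor to ground."""
    line_low = any(asserting)      # any transistor on drags the line to ground
    return 0 if line_low else 1    # the pullup wins only when all are off
```

So `wired_or_line([False, True, False])` reads 0 (one driver asserted the line), while `wired_or_line([False, False, False])` reads 1: the resistor alone holds the line high.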
14N.8.1.3 Three-state
AoE §10.2.4.1
Very often three-state outputs are useful:31 these allow multiple drivers to share a common output line (then
called a “bus”). These are very widely used in computers.
Beware the misconception that the “third state” is a third output voltage level. It’s not that; it is the off or
disconnected condition. Here it is, first conceptually, then the way we’ll build it in today’s lab:
29 Slow because the inevitable stray capacitance must be driven by a mere pullup resistor, rather than by a transistor switch.
30 “Open-collector” for bipolar gates; “open-drain” for MOSFET gates.
31 You will often hear these called “Tri-State.” That is a trademark belonging to National Semiconductor (now absorbed by Texas
Instruments), so “three-state” is the correct generic term.
Figure 35: Three-State output: conceptual; the way we build it in the lab; driving shared bus
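The bus-resolution rule can be sketched in code, with `None` standing in for the disconnected third state (names here are ours, not any library’s):

```python
Z = None  # the "third state": output disconnected, not a third voltage level

def bus_resolve(drivers):
    """Resolve a shared bus line: at most one enabled driver may drive it.
    Each entry is 0, 1, or Z (output disabled, i.e. disconnected)."""
    active = [d for d in drivers if d is not Z]
    if len(active) > 1:
        raise ValueError("bus contention: two drivers enabled at once")
    return active[0] if active else Z   # line floats if nobody drives it
```

Note the two failure modes the model exposes: enable two drivers at once and you get contention; enable none and the line floats, which is why real buses often carry a weak pullup.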
14N.8.2
Logic with TTL and CMOS
The basic TTL gate that we looked at a few pages back in § 14N.6.1 on page 17 was a NAND; it did its logic
with diodes. CMOS gates do their logic differently: by putting two or more transistors in series or parallel,
as needed. Here is the CMOS NAND gate you’ll build in the lab, along with a simplified sketch, showing it
to be just such a set of series and parallel transistor switches:
Figure 36: NAND gate built with CMOS
The logic is simple enough so that you don’t need a truth table. You can see that the output will be pulled
low only if both inputs are high—turning on both of the series transistors to ground. Thus, it implements the
NAND function.
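The series/parallel reasoning of fig. 36 can be written out directly as a switch model (a sketch, not a datasheet description):

```python
def cmos_nand(a, b):
    """Model the CMOS NAND of fig. 36 as switches: two series NMOS to ground,
    two parallel PMOS to V+."""
    pull_down = bool(a and b)                # both series NMOS on: output low
    pull_up = (not a) or (not b)             # either parallel PMOS on: output high
    assert pull_up != pull_down              # complementary: never both, never neither
    return 0 if pull_down else 1
```

The embedded assertion captures the point of complementary (CMOS) logic: for every input combination exactly one network conducts, so the output is always firmly driven and no DC current flows from V+ to ground.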
14N.8.3
Speed versus Power consumption
[REVISE TO FIT AoE OR CUT?]
The plot below shows the tradeoffs available between speed and power-saving. A few years ago the choice
AoE fig.10.26
was stark: TTL for speed (ECL for the very impatient), CMOS (4000 series) for low power. Now the choices
are more puzzling: some CMOS is very fast. GaAs is fastest. As you can see from this figure, everyone is
trying to snuggle down into the lower left corner, the state of Nirvana where you get fast results for almost
nothing. Low voltage differential signalling and “Tiny Logic” (the “NC7. . . ” dots, in fig. 37) seem the most
promising, at the present date.32
Figure 37: Speed versus power consumption: some present and obsolete logic families
And arrays of gates—PAL’s and FPGA’s—can show speeds better than those indicated in fig. 37, because
stray capacitances can be kept smaller on-chip than off.
14N.9
AoE Reading
Reading:
Chapter 10 (Digital Logic I):
• §10.1: Basic Logic Concepts
• . . . §10.2: Digital ICs: CMOS and bipolar (TTL)
– for big picture of logic family competition see fig.10.21
• §10.3
– §10.3.1: logic identities are useful; deMorgan’s theorem is important
– don’t worry about Karnaugh mapping: §10.3.2
– . . . and don’t study the long catalog of available combinational functions: §10.3.3
– take a quick look at §10.6: some typical digital circuits
Chapter 11 (Digital Logic II):
• §11.1: history of logic families
• §11.2.1: PAL’s
• . . . in §11.2, omit the complex example, . . . byte generator (§11.3)
• . . . except take a quick look at the contrasted schematic versus HDL design entry methods (§11.3.3.1 and §11.3.4)
• postpone §11.3.5: microcontroller
• Take a look at summary advice in §11.4.1 and §11.4.2
Problems: Embedded problems; try to explain to yourself how the gating shown in Circuit Ideas A, B, and E achieves the
results claimed
(class notes d1 headerfile june14.tex; October 7, 2014)
32 “Tiny Logic” is a tradename of Fairchild Semiconductor.
Index
“active low”, 11
active low
  . . . in Verilog, 15
  why prevalent, 18
“analog”
  contrasted with “digital”, 2
assertion level logic notation, 11–16
binary number codes, 6
binary numbers
  hexadecimal notation, 8
  signed, 7
  unsigned, 6
“bit” defined, 2
Boole, 9–10
combinational logic, 8
deMorgan, 9
  theorem, 11
digital
  alternatives to binary, 4
  resolution, 4
  why, 3
“digital”
  contrasted with “analog”, 2
Ford, Henry, 9
FPGA, 14
gate array, 14
logic families, 16
  TTL & CMOS circuitry, 17
logic gate
  active pullup, 22
  noise immunity
    differential transmission, 21
    hysteresis, 3, 20
  open drain/open collector, 23
  speed versus power consumption, 24
  three-state, 23
  threshold, 18
logic gates
  noise margin, 18
LVDS, 21
PAL, 15
PLD, 14
programmable logic array, 14
QAM, 4
QuickSwitch, 17
three-state, 23
tri-state, 23
truth table, 9
two’s-complement, 7
universal gate, 10
Verilog, 15
VHDL, 15
26