Thursday, 9 June 2011


ASYNCHRONOUS COUNTERS

In the previous section, we saw a circuit using one J-K flip-flop that counted backward in a two-bit binary sequence, from 11 to 10 to 01 to 00. Since it would be desirable to have a circuit that could count forward and not just backward, it would be worthwhile to examine a forward count sequence again and look for more patterns that might indicate how to build such a circuit.

Since we know that binary count sequences follow a pattern of octave (factor of 2) frequency division, and that J-K flip-flop multivibrators set up for the "toggle" mode are capable of performing this type of frequency division, we can envision a circuit made up of several J-K flip-flops, cascaded to produce four bits of output. The main problem facing us is to determine how to connect these flip-flops together so that they toggle at the right times to produce the proper binary sequence. Examine the following binary count sequence, paying attention to patterns preceding the "toggling" of a bit between 0 and 1:



Note that each bit in this four-bit sequence toggles when the bit before it (the bit having a lesser significance, or place-weight), toggles in a particular direction: from 1 to 0. Small arrows indicate those points in the sequence where a bit toggles, the head of the arrow pointing to the previous bit transitioning from a "high" (1) state to a "low" (0) state:
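The toggling pattern described above can be checked with a short sketch (my own illustration, not part of the original text): enumerate a four-bit up count and confirm that each bit toggles exactly when the bit of next-lower significance falls from 1 to 0.

```python
# Sketch: enumerate a 4-bit up count and confirm that bit i toggles
# exactly when bit i-1 transitions from 1 ("high") to 0 ("low").

def bits(n, width=4):
    """Return [LSB, ..., MSB] for integer n."""
    return [(n >> i) & 1 for i in range(width)]

prev = bits(0)
for count in range(1, 16):
    cur = bits(count)
    for i in range(1, 4):
        toggled = cur[i] != prev[i]
        lower_fell = prev[i - 1] == 1 and cur[i - 1] == 0
        assert toggled == lower_fell   # bit i toggles iff bit i-1 falls
    prev = cur
```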



Starting with four J-K flip-flops connected in such a way to always be in the "toggle" mode, we need to determine how to connect the clock inputs in such a way so that each succeeding bit toggles when the bit before it transitions from 1 to 0. The Q outputs of each flip-flop will serve as the respective binary bits of the final, four-bit count:

If we used flip-flops with negative-edge triggering (bubble symbols on the clock inputs), we could simply connect the clock input of each flip-flop to the Q output of the flip-flop before it, so that when the bit before it changes from a 1 to a 0, the "falling edge" of that signal would "clock" the next flip-flop to toggle the next bit:
This circuit would yield the following output waveforms, when "clocked" by a repetitive source of pulses from an oscillator:

The first flip-flop (the one with the Q0 output), has a positive-edge triggered clock input, so it toggles with each rising edge of the clock signal. Notice how the clock signal in this example has a duty cycle less than 50%. I've shown the signal in this manner for the purpose of demonstrating how the clock signal need not be symmetrical to obtain reliable, "clean" output bits in our four-bit binary sequence. In the very first flip-flop circuit shown in this chapter, I used the clock signal itself as one of the output bits. This is a bad practice in counter design, though, because it necessitates the use of a square wave signal with a 50% duty cycle ("high" time = "low" time) in order to obtain a count sequence where each and every step pauses for the same amount of time. Using one J-K flip-flop for each output bit, however, relieves us of the necessity of having a symmetrical clock signal, allowing the use of practically any variety of high/low waveform to increment the count sequence.

As indicated by all the other arrows in the pulse diagram, each succeeding output bit is toggled by the action of the preceding bit transitioning from "high" (1) to "low" (0). This is the pattern necessary to generate an "up" count sequence.

A less obvious solution for generating an "up" sequence using positive-edge triggered flip-flops is to "clock" each flip-flop using the Q' output of the preceding flip-flop rather than the Q output. Since the Q' output will always be the exact opposite state of the Q output on a J-K flip-flop (no invalid states with this type of flip-flop), a high-to-low transition on the Q output will be accompanied by a low-to-high transition on the Q' output. In other words, each time the Q output of a flip-flop transitions from 1 to 0, the Q' output of the same flip-flop will transition from 0 to 1, providing the positive-going clock pulse we would need to toggle a positive-edge triggered flip-flop at the right moment:
One way we could expand the capabilities of either of these two counter circuits is to regard the Q' outputs as another set of four binary bits. If we examine the pulse diagram for such a circuit, we see that the Q' outputs generate a down-counting sequence, while the Q outputs generate an up-counting sequence:
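As a behavioural sketch (the class and function names here are mine, and this models behaviour rather than gate-level wiring), four toggle flip-flops chained Q-to-clock reproduce both sequences: the Q outputs read as an up count while the Q' outputs read as a down count.

```python
# Behavioural model of a 4-bit ripple counter built from
# negative-edge-triggered flip-flops held in "toggle" mode.

class ToggleFF:
    """Negative-edge-triggered flip-flop in toggle mode."""
    def __init__(self):
        self.q = 0
        self._prev_clk = 0
    def clock(self, level):
        if self._prev_clk == 1 and level == 0:   # falling edge: toggle
            self.q ^= 1
        self._prev_clk = level

ffs = [ToggleFF() for _ in range(4)]

def apply_clock(level):
    # Ripple connection: each clock input is driven by the previous Q.
    ffs[0].clock(level)
    for i in range(1, 4):
        ffs[i].clock(ffs[i - 1].q)

up_counts, down_counts = [], []
for _ in range(16):
    apply_clock(1)
    apply_clock(0)                      # count advances on the falling edge
    q = sum(ff.q << i for i, ff in enumerate(ffs))
    up_counts.append(q)                 # Q outputs: up count
    down_counts.append(15 - q)          # Q' outputs: down count
```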





Unfortunately, all of the counter circuits shown thus far share a common problem: the ripple effect. This effect is seen in certain types of binary adder and data conversion circuits, and is due to cumulative propagation delays between cascaded gates. When the Q output of a flip-flop transitions from 1 to 0, it commands the next flip-flop to toggle. If the next flip-flop toggle is a transition from 1 to 0, it will command the flip-flop after it to toggle as well, and so on. However, since there is always some small amount of propagation delay between the command to toggle (the clock pulse) and the actual toggle response (Q and Q' outputs changing states), any subsequent flip-flops to be toggled will toggle some time after the first flip-flop has toggled. Thus, when multiple bits toggle in a binary count sequence, they will not all toggle at exactly the same time:



As you can see, the more bits that toggle with a given clock pulse, the more severe the accumulated delay time from LSB to MSB. When a clock pulse occurs at such a transition point (say, on the transition from 0111 to 1000), the output bits will "ripple" in sequence from LSB to MSB, as each succeeding bit toggles and commands the next bit to toggle as well, with a small amount of propagation delay between each bit toggle. If we take a close-up look at this effect during the transition from 0111 to 1000, we can see that there will be false output counts generated in the brief time period that the "ripple" effect takes place:



Instead of cleanly transitioning from a "0111" output to a "1000" output, the counter circuit will very quickly ripple from 0111 to 0110 to 0100 to 0000 to 1000, or from 7 to 6 to 4 to 0 and then to 8. This behavior earns the counter circuit the name of ripple counter, or asynchronous counter.
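The ripple sequence above can be reproduced by modelling one propagation delay per stage. This is a simplified illustration of the effect, not a timing-accurate model:

```python
# Sketch: one propagation delay per stage exposes the intermediate
# "false" counts during the 0111 -> 1000 transition.

state = [1, 1, 1, 0]            # LSB-first representation of 0111 (7)
seen = [7]

# The falling clock edge toggles bit 0; each 1 -> 0 toggle then clocks
# the next stage, one propagation delay later.
i = 0
while i < 4:
    fell = state[i] == 1        # a 1 -> 0 transition will clock stage i+1
    state[i] ^= 1
    seen.append(sum(b << n for n, b in enumerate(state)))
    if not fell:
        break
    i += 1
```

Running this yields the false counts 7, 6, 4, 0 before the final, correct value of 8.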

In many applications, this effect is tolerable, since the ripple happens very, very quickly (the width of the delays has been exaggerated here as an aid to understanding the effects). If all we wanted to do was drive a set of light-emitting diodes (LEDs) with the counter's outputs, for example, this brief ripple would be of no consequence at all. However, if we wished to use this counter to drive the "select" inputs of a multiplexer, index a memory pointer in a microprocessor (computer) circuit, or perform some other task where false outputs could cause spurious errors, it would not be acceptable. There is a way to use this type of counter circuit in applications sensitive to false, ripple-generated outputs, and it involves a principle known as strobing.

Most decoder and multiplexer circuits are equipped with at least one input called the "enable." The output(s) of such a circuit will be active only when the enable input is made active. We can use this enable input to strobe the circuit receiving the ripple counter's output so that it is disabled (and thus not responding to the counter output) during the brief period of time in which the counter outputs might be rippling, and enabled only when sufficient time has passed since the last clock pulse that all rippling will have ceased. In most cases, the strobing signal can be the same clock pulse that drives the counter circuit:



With an active-low Enable input, the receiving circuit will respond to the binary count of the four-bit counter circuit only when the clock signal is "low." As soon as the clock pulse goes "high," the receiving circuit stops responding to the counter circuit's output. Since the counter circuit is positive-edge triggered (as determined by the first flip-flop clock input), all the counting action takes place on the low-to-high transition of the clock signal, meaning that the receiving circuit will become disabled just before any toggling occurs on the counter circuit's four output bits. The receiving circuit will not become enabled until the clock signal returns to a low state, which should be a long enough time after all rippling has ceased to be "safe" to allow the new count to have effect on the receiving circuit. The crucial parameter here is the clock signal's "high" time: it must be at least as long as the maximum expected ripple period of the counter circuit. If not, the clock signal will prematurely enable the receiving circuit, while some rippling is still taking place.
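A minimal sketch of the strobing idea (the function and signal names are mine): the receiving circuit accepts the counter outputs only while the clock, used as an active-low enable, is low.

```python
# Sketch of an active-low strobed receiver: it only responds to the
# counter's outputs while the clock (the enable signal) is low.

def receive(clock_level, counter_outputs, latched):
    """Respond to the counter only while the enable (clock) is low."""
    if clock_level == 0:          # active-low enable asserted
        return counter_outputs
    return latched                # hold the old value while rippling may occur

held = 3
held = receive(1, 4, held)   # clock high: ignore the possibly-rippling value
held = receive(0, 4, held)   # clock low again: safe to accept the new count
```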

Another disadvantage of the asynchronous, or ripple, counter circuit is limited speed. While all gate circuits are limited in terms of maximum signal frequency, the design of asynchronous counter circuits compounds this problem by making propagation delays additive. Thus, even if strobing is used in the receiving circuit, an asynchronous counter circuit cannot be clocked at any frequency higher than that which allows the greatest possible accumulated propagation delay to elapse well before the next pulse.

The solution to this problem is a counter circuit that avoids ripple altogether. Such a counter circuit would eliminate the need to design a "strobing" feature into whatever digital circuits use the counter output as an input, and would also enjoy a much greater operating speed than its asynchronous equivalent. This design of counter circuit is the subject of the next section.

LATCHES AND FLIP-FLOPS

Latches and flip-flops

In the same way that gates are the building blocks of combinatorial circuits, latches and flip-flops are the building blocks of sequential circuits.
While gates had to be built directly from transistors, latches can be built from gates, and flip-flops can be built from latches. This fact will make it somewhat easier to understand latches and flip-flops.

Both latches and flip-flops are circuit elements whose output depends not only on the current inputs, but also on previous inputs and outputs. The difference between a latch and a flip-flop is that a latch does not have a clock signal, whereas a flip-flop always does.

Latches

How can we make a circuit out of gates that is not combinatorial? The answer is feedback, which means that we create loops in the circuit diagrams so that output values depend, indirectly, on themselves. If such feedback is positive then the circuit tends to have stable states, and if it is negative the circuit will tend to oscillate.
A latch has positive feedback. Here is an example of a simple latch:



This latch is called an SR-latch, which stands for set and reset.

It is not practical to use the methods that we have used to describe combinatorial circuits to describe the behavior of the SR-latch. Later, we will show a method for describing flip-flops and clocked sequential circuits. For now, we just rely on our intuition to describe how latches work.

The SR-latch is meant to have at most one of its inputs equal to 1 at any time. When both of its inputs are 0, it has two possible stable states. Either x is 0, in which case we have the following signal values:



or else x is 1, in which case we have the following signal values:



The actual value depends on the history of input values as we will show next.

Now suppose that s is 1 (and therefore r is 0 since we allow at most one input to be 1 at any time). We get the following signal values:



The 1 on the s input makes sure the output of the upper nor-gate is 0, and the two 0s on the input of the lower nor-gate make sure the x output is 1.

Now suppose the s input goes from 1 to 0, while the r input remains at 0. The second input of the upper nor-gate is 1, so the transition from 1 to 0 of the s input does not make any difference. The x output remains at 1. In this case, if the s and r inputs are both 0, there is only one possible stable state, the one that gives x the value 1.

Conversely, suppose that r is 1 (and therefore s is 0 since we allow at most one input to be 1 at any time). We get the following signal values:



The 1 on the r input makes sure the x output is 0, and the two 0s on the input of the upper nor-gate make sure the output of the upper nor-gate is 1.

Now suppose the r input goes from 1 to 0, while the s input remains at 0. The second input of the lower nor-gate is 1, so the transition from 1 to 0 of the r input does not make any difference. The output of the upper nor-gate remains at 1. In this case, if the s and r inputs are both 0, there is only one possible stable state, the one that gives x the value 0.

From the discussion above, we conclude that the SR-latch is able to remember the last state of the inputs, in the sense that it remembers which of the two inputs, s or r, last had the value of 1.
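The settling behaviour just described can be imitated by re-evaluating the two nor-gate equations until nothing changes. This is an illustrative model, assuming the upper gate takes inputs s and x and the lower gate takes r and the upper gate's output:

```python
# Sketch of the cross-coupled NOR latch: iterate both gate equations
# until the outputs stop changing, then read the stored bit x.

def nor(a, b):
    return 1 - (a | b)

def settle(s, r, q, x):
    """Re-evaluate both NOR gates until the latch reaches a stable state."""
    for _ in range(10):
        q_new = nor(s, x)       # upper gate: inputs s and x
        x_new = nor(r, q_new)   # lower gate: inputs r and the upper output
        if (q_new, x_new) == (q, x):
            break
        q, x = q_new, x_new
    return q, x

q, x = settle(1, 0, 0, 0)       # set: x becomes 1
q, x = settle(0, 0, q, x)       # both inputs low: x = 1 is remembered
```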

When we need to draw an SR-latch, we use the following symbol:



Flip-flops

Latches are asynchronous, which means that the output changes very soon after the input changes. Most computers today, on the other hand, are synchronous, which means that the outputs of all the sequential circuits change simultaneously to the rhythm of a global clock signal.
A flip-flop is a synchronous version of the latch. To complicate the situation even more, there are several fundamental types of flip-flops. Here, we shall only consider a type called master-slave flip-flop.

In addition to the fundamental types of flip-flops, there are minor variations depending on the number of inputs and how they control the state of the flip-flop. Here, we shall only consider a very simple type of flip-flop called a D-flip-flop. A master-slave D-flip-flop is built from two SR-latches and some gates. Here is the circuit diagram:



The leftmost SR-latch is called the master and the rightmost is called the slave.

Let us first consider what happens when the clock signal is 1. In this case, the two and-gates in front of the input of the master are open, i.e., they let the value of the D-input through to the s input of the master, and the inverse of the D input to the r input of the master. Thus, the value of the D input will go straight through the master to the x output of the master. But the two and-gates of the slave are closed, i.e., their outputs are always 0, so the slave keeps its old value.

When instead the clock signal is 0, the reverse is true, i.e., the and-gates at the input of the master are closed, whereas the ones at the input of the slave are open. In this case, the flip-flop is completely insensitive to changes in input.

Now, let us consider what happens when the clock goes from 1 to 0. For this to work, we have to assume that the input remains the same during a brief period from right before to right after the clock signal changes. The first thing that happens is that the and-gates at the input of the master turn off, i.e., they become insensitive to further changes in input. The value of the x output of the master is now the value of the D input right before the clock started changing. A brief moment later, the clock signal transition traverses the inverter and reaches the and-gates of the slave. These gates open, allowing the x output of the master to be propagated to the x value of the slave. The x value of the slave, and therefore that of the entire flip-flop, now contains the value of the D input right before the clock started changing. We can say that the clock transition copied the input to the output of the flip-flop. But at no point in time is there a direct path from input to output. The output changes only as a result of clock transitions from 1 to 0.

Finally, let us see what happens when the clock goes from 0 to 1. First, the and-gates of the master open, letting the value of the D input into the master. By the time the D value reaches the master, the clock signal transition reaches the and-gates of the slave, and turns them off before the possibly modified output of the master reaches the slave. Thus, the slave keeps its old value. From the outside, nothing seems to happen, since the output does not change. From now on, however, the master is open to changes in the input.
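The behaviour just described can be summarised in a small behavioural model (a sketch of the master-slave action, not of the gate-level circuit):

```python
# Behavioural sketch of a master-slave D flip-flop: the master tracks D
# while the clock is 1; the slave (the visible output) updates only on
# the 1 -> 0 clock transition.

class MasterSlaveD:
    def __init__(self):
        self.master = 0
        self.slave = 0
        self.clk = 0
    def tick(self, clk, d):
        if clk == 1:
            self.master = d           # master open, slave closed
        elif self.clk == 1:           # falling edge: slave copies master
            self.slave = self.master
        self.clk = clk
        return self.slave

ff = MasterSlaveD()
ff.tick(1, 1)    # clock high: master follows D, output unchanged
out_high = ff.slave
ff.tick(0, 0)    # falling edge: output becomes the D sampled just before it
out_after = ff.slave
```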

Here is the symbol we use for D-flip-flops:



The little triangle for the clock input indicates that this input is sensitive only to transitions as opposed to levels as described in the previous paragraph. Sometimes we do not draw the clock input at all when it is understood that it is there. Clock signals are boring since they are all just connected to each other. There is therefore little use to draw them all, and thereby clutter the diagram unnecessarily.

Summary

We have shown how to build a D-flip-flop. It copies its input to its output as a result of a clock signal transition from 1 to 0. The value copied is the value the input has immediately before the clock transition. Most of the clock period, the D-flip-flop is insensitive to changes in the input.
We shall use this key characteristic of the D-flip-flop to build synchronous sequential circuits.

FLIP FLOP

Flip-Flops

extracts from:
http://www.elec.uq.edu.au/~3e211/pracs/prac2/prac2.htm
Dept. of Computer Science and Electrical Engineering
The University of Queensland
St. Lucia Qld 4072 Australia
The memory elements in a sequential circuit are called flip-flops. A flip-flop circuit has two outputs, one for the normal value and one for the complement value of the stored bit. Binary information can enter a flip-flop in a variety of ways and gives rise to different types of flip-flops.

Introduction - Basic Flip-Flop Circuit

A flip-flop circuit can be constructed from two NAND gates or two NOR gates. These flip-flops are shown in Figure 2 and Figure 3. Each flip-flop has two outputs, Q and Q', and two inputs, set and reset. This type of flip-flop is referred to as an SR flip-flop or SR latch. The flip-flop in Figure 2 has two useful states. When Q=1 and Q'=0, it is in the set state (or 1-state). When Q=0 and Q'=1, it is in the clear state (or 0-state). The outputs Q and Q' are complements of each other and are referred to as the normal and complement outputs, respectively. The binary state of the flip-flop is taken to be the value of the normal output.

When a 1 is applied to both the set and reset inputs of the flip-flop in Figure 2, both Q and Q' outputs go to 0. This condition violates the fact that both outputs are complements of each other. In normal operation this condition must be avoided by making sure that 1's are not applied to both inputs simultaneously.



(a) Logic diagram



(b) Truth table

Figure 2. Basic flip-flop circuit with NOR gates



(a) Logic diagram



(b) Truth table

Figure 3. Basic flip-flop circuit with NAND gates

The NAND basic flip-flop circuit in Figure 3(a) operates with inputs normally at 1 unless the state of the flip-flop has to be changed. A 0 applied momentarily to the set input causes Q to go to 1 and Q' to go to 0, putting the flip-flop in the set state. When both inputs go to 0, both outputs go to 1. This condition should be avoided in normal operation.

Back to Contents

Introduction - Clocked SR Flip-Flop

The clocked SR flip-flop shown in Figure 4 consists of a basic NOR flip-flop and two AND gates. The outputs of the two AND gates remain at 0 as long as the clock pulse (or CP) is 0, regardless of the S and R input values. When the clock pulse goes to 1, information from the S and R inputs passes through to the basic flip-flop. With both S=1 and R=1, the occurrence of a clock pulse causes both outputs to momentarily go to 0. When the pulse is removed, the state of the flip-flop is indeterminate, i.e., either state may result, depending on whether the set or reset input of the flip-flop remains a 1 longer than the transition to 0 at the end of the pulse.
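The behaviour in this paragraph can be written as a next-state function (a sketch; the indeterminate S=R=1 case is modelled here as None):

```python
# Sketch of the clocked SR behaviour: no change while CP = 0; set, reset,
# hold, or indeterminate (None) on a clock pulse.

def clocked_sr(q, s, r, cp):
    """Next state of a clocked SR flip-flop after one clock pulse."""
    if cp == 0:
        return q            # AND gates hold the latch inputs at 0: no change
    if s == 1 and r == 1:
        return None         # indeterminate: input combination to be avoided
    if s == 1:
        return 1            # set
    if r == 1:
        return 0            # reset
    return q                # S = R = 0: hold
```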



(a) Logic diagram



(b) Truth table

Figure 4. Clocked SR flip-flop

Back to Contents

Introduction - D Flip-Flop

The D flip-flop shown in Figure 5 is a modification of the clocked SR flip-flop. The D input goes directly into the S input and the complement of the D input goes to the R input. The D input is sampled during the occurrence of a clock pulse. If it is 1, the flip-flop is switched to the set state (unless it was already set). If it is 0, the flip-flop switches to the clear state.



(a) Logic diagram with NAND gates



(b) Graphical symbol



(c) Transition table

Figure 5. Clocked D flip-flop

Back to Contents

Introduction - JK Flip-Flop

A JK flip-flop is a refinement of the SR flip-flop in that the indeterminate state of the SR type is defined in the JK type. Inputs J and K behave like inputs S and R to set and clear the flip-flop (note that in a JK flip-flop, the letter J is for set and the letter K is for clear). When logic 1 inputs are applied to both J and K simultaneously, the flip-flop switches to its complement state, ie., if Q=1, it switches to Q=0 and vice versa.

A clocked JK flip-flop is shown in Figure 6. Output Q is ANDed with K and CP inputs so that the flip-flop is cleared during a clock pulse only if Q was previously 1. Similarly, output Q' is ANDed with J and CP inputs so that the flip-flop is set with a clock pulse only if Q' was previously 1.

Note that because of the feedback connection in the JK flip-flop, a CP signal which remains a 1 (while J=K=1) after the outputs have been complemented once will cause repeated and continuous transitions of the outputs. To avoid this, the clock pulses must have a time duration less than the propagation delay through the flip-flop. The restriction on the pulse width can be eliminated with a master-slave or edge-triggered construction. The same reasoning also applies to the T flip-flop presented next.
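The JK behaviour described above amounts to the following next-state function, which can be checked against the standard characteristic equation Q+ = J.Q' + K'.Q (a well-known identity, not taken from this handout):

```python
# Sketch of the JK flip-flop's next-state behaviour: like SR, except that
# J = K = 1 toggles the output instead of being forbidden.

def jk_next(q, j, k):
    """Next state of a JK flip-flop on a clock pulse."""
    if j == 1 and k == 1:
        return 1 - q        # toggle (complement the present state)
    if j == 1:
        return 1            # set
    if k == 1:
        return 0            # clear
    return q                # hold

# Cross-check against the characteristic equation Q+ = J.Q' + K'.Q:
for q in (0, 1):
    for j in (0, 1):
        for k in (0, 1):
            assert jk_next(q, j, k) == (j & (1 - q)) | ((1 - k) & q)
```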



(a) Logic diagram



(b) Graphical symbol



(c) Transition table

Figure 6. Clocked JK flip-flop

Back to Contents

Introduction - T Flip-Flop

The T flip-flop is a single input version of the JK flip-flop. As shown in Figure 7, the T flip-flop is obtained from the JK type if both inputs are tied together. The output of the T flip-flop "toggles" with each clock pulse.
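Tying the two inputs together can be shown directly (a small sketch building the T behaviour from the JK rule):

```python
# Sketch: a T flip-flop is a JK flip-flop with J and K tied together,
# so T = 1 toggles the output and T = 0 holds it.

def t_next(q, t):
    j = k = t                  # both JK inputs tied to T
    if j and k:
        return 1 - q           # T = 1: toggle
    return q                   # T = 0: hold

q = 0
outputs = []
for _ in range(4):             # with T held at 1, the output toggles
    q = t_next(q, 1)           # on every clock pulse
    outputs.append(q)
```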



(a) Logic diagram



(b) Graphical symbol



(c) Transition table

Figure 7. Clocked T flip-flop

Back to Contents

Introduction - Triggering of Flip-flops

The state of a flip-flop is changed by a momentary change in the input signal. This change is called a trigger and the transition it causes is said to trigger the flip-flop. The basic circuits of Figure 2 and Figure 3 require an input trigger defined by a change in signal level. This level must be returned to its initial level before a second trigger is applied. Clocked flip-flops are triggered by pulses.

The feedback path between the combinational circuit and memory elements in Figure 1 can produce instability if the outputs of the memory elements (flip-flops) are changing while the outputs of the combinational circuit that go to the flip-flop inputs are being sampled by the clock pulse. A way to solve the feedback timing problem is to make the flip-flop sensitive to the pulse transition rather than the pulse duration.

The clock pulse goes through two signal transitions: from 0 to 1 and the return from 1 to 0. As shown in Figure 8 the positive transition is defined as the positive edge and the negative transition as the negative edge.



Figure 8. Definition of clock pulse transition

The clocked flip-flops already introduced are triggered during the positive edge of the pulse, and the state transition starts as soon as the pulse reaches the logic-1 level. If the other inputs change while the clock is still 1, a new output state may occur. If the flip-flop is made to respond to the positive (or negative) edge transition only, instead of the entire pulse duration, then the multiple-transition problem can be eliminated.

Back to Contents

Introduction - Master-Slave Flip-Flop

A master-slave flip-flop is constructed from two separate flip-flops. One circuit serves as a master and the other as a slave. The logic diagram of an SR flip-flop is shown in Figure 9. The master flip-flop is enabled on the positive edge of the clock pulse CP and the slave flip-flop is disabled by the inverter. The information at the external R and S inputs is transmitted to the master flip-flop. When the pulse returns to 0, the master flip-flop is disabled and the slave flip-flop is enabled. The slave flip-flop then goes to the same state as the master flip-flop.



Figure 9. Logic diagram of a master-slave flip-flop

The timing relationship is shown in Figure 10; it is assumed that the flip-flop is in the clear state prior to the occurrence of the clock pulse. The output state of the master-slave flip-flop occurs on the negative transition of the clock pulse. Some master-slave flip-flops change output state on the positive transition of the clock pulse by having an additional inverter between the CP terminal and the input of the master.



Figure 10. Timing relationship in a master slave flip-flop

Back to Contents

Introduction - Edge Triggered Flip-Flop

Another type of flip-flop that synchronizes the state changes during a clock pulse transition is the edge-triggered flip-flop. When the clock pulse input exceeds a specific threshold level, the inputs are locked out and the flip-flop is not affected by further changes in the inputs until the clock pulse returns to 0 and another pulse occurs. Some edge-triggered flip-flops cause a transition on the positive edge of the clock pulse (positive-edge-triggered), and others on the negative edge of the pulse (negative-edge-triggered). The logic diagram of a D-type positive-edge-triggered flip-flop is shown in Figure 11.



Figure 11. D-type positive-edge triggered flip-flop

When using different types of flip-flops in the same circuit, one must ensure that all flip-flop outputs make their transitions at the same time, i.e., during either the negative edge or the positive edge of the clock pulse.

Back to Contents

Introduction - Direct Inputs

Flip-flops in IC packages sometimes provide special inputs for setting or clearing the flip-flop asynchronously. They are usually called preset and clear. They affect the flip-flop without the need for a clock pulse. These inputs are useful for bringing flip-flops to an initial state before their clocked operation. For example, after power is turned on in a digital system, the states of the flip-flops are indeterminate. Activating the clear input clears all the flip-flops to an initial state of 0. The graphic symbol of a JK flip-flop with an active-low clear is shown in Figure 12.



(a) Graphic Symbol



(b) Transition table

Figure 12. JK flip-flop with direct clear

Back to Contents

Preparation

Prepare the following in your prac book:

Basic Flip-Flop

Draw the logic circuit for an unclocked NOR gate flip-flop.
Enter the expected timing diagram for signals Q and Q' in Figure 13.


Figure 13. NOR gate flip-flop timing diagram

Draw the logic circuit for an unclocked NAND gate flip-flop.
Enter the expected timing diagram for signals Q and Q' in Figure 14.


Figure 14. NAND gate flip-flop timing diagram

Master-Slave Flip-Flop

Draw the logic circuit implemented with gates for the SR master-slave flip-flop in Figure 9. Use NOR gate flip-flops.
Enter the expected timing diagram for the signals Y, Y', Q, and Q' in Figure 15.


Figure 15. SR master-slave flip-flop timing diagram

Edge Triggered Flip-Flop

Draw the logic circuit for the D-type positive-edge triggered flip-flop in Figure 11.
Enter the expected timing diagram for the signals S, R, Q, and Q' in Figure 16.


Figure 16. D-type edge triggered flip-flop timing diagram

Back to Contents

Procedure

Use LogicWorks to simulate the circuits that you have prepared. Use switches from the I/O library for the inputs and probes from the I/O library for the outputs. Place signal names on the circuit so that the signals are visible in the timing window. Create a separate drawing for each circuit.

To be sure that your circuits don't cross the printing page boundaries, check the Show Page Outlines option from the Drawing|Display Options... menu.

Only print out the circuit and waveforms for the SR master-slave flip-flop.

Back to Contents

Equipment

Computer - Room 47-405
Back to Contents

References

Mano, M., "Digital Design", Prentice-Hall, 1984. Chapter 6.
Smith, R., "Circuits, Devices and Systems", Wiley, 1980.
"LogicWorks for Windows 3.0" by Capilano Computing Systems, Ltd., Addison-Wesley, 1995.
74LS Device Data
Back to Contents

TRUTH TABLE


CIRCUITS

Symbolic Logic
Boolean algebra derives its name from the mathematician George Boole. Symbolic logic uses values, variables and operations:




True is represented by the value 1.
False is represented by the value 0.
Variables are represented by letters and can have one of two values, either 0 or 1. Operations are functions of one or more variables.

AND is represented by X.Y
OR is represented by X + Y
NOT is represented by X'. Throughout this tutorial the X' form will be used, and sometimes !X will be used.
These basic operations can be combined to give expressions.




Example :




X
X.Y
W.X.Y + Z



Precedence
As with any other branch of mathematics, these operators have an order of precedence. NOT operations have the highest precedence, followed by AND operations, followed by OR operations. Brackets can be used as with other forms of algebra. e.g.




X.Y + Z and X.(Y + Z) are not the same function.




Function Definitions
The logic operations given previously are defined as follows :




Define f(X,Y) to be some function of the variables X and Y.




f(X,Y) = X.Y    : 1 if X = 1 and Y = 1, 0 otherwise

f(X,Y) = X + Y  : 1 if X = 1 or Y = 1, 0 otherwise

f(X)   = X'     : 1 if X = 0, 0 otherwise



Truth Tables
Truth tables are a means of representing the results of a logic function using a table. They are constructed by defining all possible combinations of the inputs to a function, and then calculating the output for each combination in turn. For the three functions we have just defined, the truth tables are as follows.
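The construction described above is mechanical, so it is easy to sketch in code (the helper name truth_table is my own):

```python
# Sketch of truth-table construction: enumerate every input combination
# and evaluate the function for each one in turn.

from itertools import product

def truth_table(f, n):
    """Rows of (inputs..., output) for an n-input Boolean function."""
    return [row + (f(*row),) for row in product((0, 1), repeat=n)]

AND = lambda x, y: x & y
table = truth_table(AND, 2)
# table is [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
```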




AND

X  Y  F(X,Y)
0  0  0
0  1  0
1  0  0
1  1  1




OR

X  Y  F(X,Y)
0  0  0
0  1  1
1  0  1
1  1  1




NOT

X  F(X)
0  1
1  0




Truth tables may contain as many input variables as desired




F(X,Y,Z) = X.Y + Z

X  Y  Z  F(X,Y,Z)
0  0  0  0
0  0  1  1
0  1  0  0
0  1  1  1
1  0  0  0
1  0  1  1
1  1  0  1
1  1  1  1




Boolean Switching Algebras
A Boolean Switching Algebra is one which deals only with two-valued variables. Boole's general theory covers algebras which deal with variables which can hold n values.




Axioms
Consider the set S = {0, 1}.

Consider two binary operations, + and . , and one unary operation, ' , that act on these elements. [S, ., +, ', 0, 1] is called a switching algebra that satisfies the following axioms:




Closure



If X ∈ S and Y ∈ S then X.Y ∈ S

If X ∈ S and Y ∈ S then X+Y ∈ S




Identity



There exists an identity element 0 for + such that X + 0 = X

There exists an identity element 1 for . such that X . 1 = X




Commutative Laws



X + Y = Y + X

X . Y = Y . X




Distributive Laws



X.(Y + Z ) = X.Y + X.Z

X + Y.Z = (X + Y) . (X + Z)




Complement



For every X ∈ S there exists a complement X' such that

X + X' = 1

X . X' = 0

The complement X' is unique.









Theorems



A number of theorems may be proved for switching algebras




Idempotent Law



X + X = X

X . X = X




DeMorgan's Law



(X + Y)' = X' . Y'. This can be proved by the use of truth tables.




Proof of (X + Y)' = X' . Y'




X  Y  X+Y  (X+Y)'
0  0  0    1
0  1  1    0
1  0  1    0
1  1  1    0




X  Y  X'  Y'  X'.Y'
0  0  1   1   1
0  1  1   0   0
1  0  0   1   0
1  1  0   0   0




The two truth tables are identical, and so the two expressions are identical.




(X.Y)' = X' + Y'. This can be proved by the use of truth tables.




Proof of (X.Y) = X' + Y'




X Y | X.Y | (X.Y)'
0 0 |  0  |   1
0 1 |  0  |   1
1 0 |  0  |   1
1 1 |  1  |   0

X Y | X' | Y' | X'+Y'
0 0 |  1 |  1 |   1
0 1 |  1 |  0 |   1
1 0 |  0 |  1 |   1
1 1 |  0 |  0 |   0




Note: DeMorgan's laws are applicable to any number of variables.
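Both laws can also be checked exhaustively in a few lines of Python (the `complement` helper name is ours):

```python
from itertools import product

def complement(v):
    """Complement of a 0/1 value."""
    return 1 - v

# Exhaustively verify both of DeMorgan's laws over {0, 1}
for x, y in product((0, 1), repeat=2):
    assert complement(x | y) == complement(x) & complement(y)  # (X+Y)' = X'.Y'
    assert complement(x & y) == complement(x) | complement(y)  # (X.Y)' = X'+Y'
print("DeMorgan's laws verified for all input combinations")
```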




Boundedness Law



X + 1 = 1

X . 0 = 0




Absorption Law



X + (X . Y) = X

X . (X + Y ) = X




Elimination Law



X + (X' . Y) = X + Y

X.(X' + Y) = X.Y




Unique Complement theorem



If X + Y = 1 and X.Y = 0 then X = Y'




Involution theorem



X'' = X

0' = 1




Associative Properties



X + (Y + Z) = (X + Y) + Z

X . ( Y . Z ) = ( X . Y ) . Z




Duality Principle
In Boolean algebra the dual of an identity is obtained by interchanging the AND and OR operators and replacing 0's with 1's and 1's with 0's. Compare the identities on the left side with the identities on the right.




Example




The dual of X.Y + Z' is (X + Y) . Z'




Consensus theorem



X.Y + X'.Z + Y.Z = X.Y + X'.Z

or, in dual form:

(X + Y).(X' + Z).(Y + Z) = (X + Y).(X' + Z)




Proof of X.Y + X'.Z + Y.Z = X.Y + X'.Z:




X.Y + X'.Z + Y.Z
= X.Y + X'.Z + (X + X').Y.Z      (since X + X' = 1)
= X.Y + X.Y.Z + X'.Z + X'.Y.Z
= X.Y.(1 + Z) + X'.Z.(1 + Y)
= X.Y + X'.Z                     (since 1 + Z = 1 and 1 + Y = 1)




Example: (X.Y' + Z).(X + Y).Z = X.Z + Y.Z (not X.Z + Y'.Z):

(X.Y' + Z).(X + Y).Z
= X.Y'.Z + X.Z + Y.Z
= (X.Y' + X + Y).Z
= (X + Y).Z                      (absorption: X.Y' + X = X)
= X.Z + Y.Z




The term which is left out is called the consensus term.




Given a pair of terms for which a variable appears in one term, and its complement in the other, then the consensus term is formed by ANDing the original terms together, leaving out the selected variable and its complement.




Example :

The consensus of X.Y and X'.Z is Y.Z




The consensus of X.Y.Z and Y'.Z.W' (with respect to Y) is (X.Z).(Z.W') = X.Z.W'
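The consensus theorem itself can be verified by exhaustive enumeration; a short Python sketch (the `not_` helper name is ours):

```python
from itertools import product

def not_(v):
    """Complement of a 0/1 value."""
    return 1 - v

# Exhaustively verify X.Y + X'.Z + Y.Z = X.Y + X'.Z
for x, y, z in product((0, 1), repeat=3):
    lhs = (x & y) | (not_(x) & z) | (y & z)
    rhs = (x & y) | (not_(x) & z)
    assert lhs == rhs
print("Consensus theorem verified for all inputs")
```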




Shannon Expansion Theorem
The Shannon Expansion Theorem is used to expand a Boolean logic function (F) in terms of (or with respect to) a Boolean variable (X), as in the following forms.




F = X . F (X = 1) + X' . F (X = 0)

or, in dual form,

F = [X + F (X = 0)] . [X' + F (X = 1)]




where F (X = 1) represents the function F evaluated with X set equal to 1; F (X = 0) represents the function F evaluated with X set equal to 0.




Also the following function F can be expanded with respect to X,




F = X' . Y + X . Y . Z' + X' . Y' . Z




= X . (Y . Z') + X' . (Y + Y' . Z)




Thus, the function F can be split into two smaller functions.




F (X = '1') = Y . Z'

F (X = '0') = Y + Y' . Z




This is known as the cofactor of F with respect to X in the previous logic equation. The cofactor of F with respect to X may also be represented as F X (the cofactor of F with respect to X' is F X' ). Using the Shannon Expansion Theorem, a Boolean function may be expanded with respect to any of its variables. For example, if we expand F with respect to Y instead of X,




F = X' . Y + X . Y . Z' + X' . Y' . Z




= Y . (X' + X . Z') + Y' . (X' . Z)




A function may be expanded as many times as the number of variables it contains until the canonical form is reached. The canonical form is a unique representation for any Boolean function that uses only minterms. A minterm is a product term that contains all the variables of F (such as X . Y' . Z).




Any Boolean function can be implemented using multiplexer blocks by representing it as a series of terms derived using the Shannon Expansion Theorem.
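The expansion of F with respect to X can be checked mechanically. A Python sketch (the `F` and `cofactor` helper names are ours, and `cofactor` is hard-wired for three-input functions):

```python
from itertools import product

# F = X'.Y + X.Y.Z' + X'.Y'.Z, the example function from the text
def F(x, y, z):
    n = lambda v: 1 - v
    return (n(x) & y) | (x & y & n(z)) | (n(x) & n(y) & z)

def cofactor(f, value):
    """Cofactor of a 3-input function with respect to its first variable."""
    return lambda y, z: f(value, y, z)

Fx1 = cofactor(F, 1)   # F(X=1) = Y.Z'
Fx0 = cofactor(F, 0)   # F(X=0) = Y + Y'.Z

# Verify F = X.F(X=1) + X'.F(X=0) for all inputs
for x, y, z in product((0, 1), repeat=3):
    assert F(x, y, z) == (x & Fx1(y, z)) | ((1 - x) & Fx0(y, z))
print("Shannon expansion verified")
```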




Summary of Laws and Theorems

Identity                                      Dual

Operations with 0 and 1:
X + 0 = X (identity)                          X.1 = X
X + 1 = 1 (null element)                      X.0 = 0

Idempotency theorem:
X + X = X                                     X.X = X

Complementarity:
X + X' = 1                                    X.X' = 0

Involution theorem:
(X')' = X

Commutative law:
X + Y = Y + X                                 X.Y = Y.X

Associative law:
(X + Y) + Z = X + (Y + Z) = X + Y + Z         (XY)Z = X(YZ) = XYZ

Distributive law:
X(Y + Z) = XY + XZ                            X + (YZ) = (X + Y)(X + Z)

DeMorgan's theorem:
(X + Y + Z + ...)' = X'Y'Z'...                (XYZ...)' = X' + Y' + Z' + ...
or, in general: { f(X1,X2,...,Xn,0,1,+,.) }' = f(X1',X2',...,Xn',1,0,.,+)

Simplification theorems:
XY + XY' = X (uniting)                        (X + Y)(X + Y') = X
X + XY = X (absorption)                       X(X + Y) = X
(X + Y')Y = XY (absorption)                   XY' + Y = X + Y

Consensus theorem:
XY + X'Z + YZ = XY + X'Z                      (X + Y)(X' + Z)(Y + Z) = (X + Y)(X' + Z)

Duality:
(X + Y + Z + ...)^D = XYZ...                  (XYZ...)^D = X + Y + Z + ...
or, in general: { f(X1,X2,...,Xn,0,1,+,.) }^D = f(X1,X2,...,Xn,1,0,.,+)

Shannon Expansion Theorem:
f(X1,...,Xk,...,Xn) = Xk.f(X1,..., 1 ,...,Xn) + Xk'.f(X1,..., 0 ,...,Xn)
f(X1,...,Xk,...,Xn) = [Xk + f(X1,..., 0 ,...,Xn)].[Xk' + f(X1,..., 1 ,...,Xn)]


**FOR MORE INFO VISIT: http://www.asic-world.com/digital/boolean1.html**

LOGIC GATE

1. Logic Gates (Introduction)
The package Truth Tables and Boolean Algebra set out the basic
principles of logic. Any Boolean algebra operation can be associated
with an electronic circuit in which the inputs and outputs represent
the statements of Boolean algebra. Although these circuits may be
complex, they may all be constructed from three basic devices. These
are the AND gate, the OR gate and the NOT gate.
[Gate symbols: the AND gate takes inputs x and y and outputs x · y; the OR gate takes inputs x and y and outputs x + y; the NOT gate takes input x and outputs x'.]
In the case of logic gates, a different notation is used:
x ∧ y, the logical AND operation, is replaced by x · y, or xy.
x ∨ y, the logical OR operation, is replaced by x + y.
¬x, the logical NEGATION operation, is replaced by x'.
The truth value TRUE is written as 1 (and corresponds to a high
voltage), and FALSE is written as 0 (low voltage).
2. Truth Tables
Summary of AND gate:
x y | x · y
0 0 |   0
0 1 |   0
1 0 |   0
1 1 |   1

Summary of OR gate:
x y | x + y
0 0 |   0
0 1 |   1
1 0 |   1
1 1 |   1

Summary of NOT gate:
x | x'
0 |  1
1 |  0
3. Basic Rules of Boolean Algebra
The basic rules for simplifying and combining logic gates are called
Boolean algebra in honour of George Boole (1815 – 1864) who was a
self-educated English mathematician who developed many of the key
ideas. The following set of exercises will allow you to rediscover the
basic rules:
Example 1
Consider the AND gate where one of the inputs is 1. By using the
truth table, investigate the possible outputs and hence simplify the
expression x · 1.
Solution From the truth table for AND, we see that if x is 1 then
1 · 1 = 1, while if x is 0 then 0 · 1 = 0. This can be summarised in the
rule that x · 1 = x.
Example 2
Consider the AND gate where one of the inputs is 0. By using the
truth table, investigate the possible outputs and hence simplify the
expression x · 0.
Solution From the truth table for AND, we see that if x is 1 then
1 · 0 = 0, while if x is 0 then 0 · 0 = 0. This can be summarised in the
rule that x · 0 = 0.
Exercise 1. Obtain the rules for simplifying the logical expressions:
(a) x + 0
(b) x + 1
Exercise 2. Obtain the rules for simplifying the logical expressions:
(a) x + x
(b) x · x
Exercise 3. Obtain the rules for simplifying the logical expressions:
(a) x + x'
(b) x · x'
Quiz. Simplify the logical expression (x')' represented by the following
circuit diagram.
(a) x (b) x' (c) 1 (d) 0
Exercise 4. Investigate the relationship between the following circuits. Summarise your
conclusions using Boolean expressions for the circuits.
(a) a NOT gate on the output of an OR gate with inputs x and y, compared with an AND gate with inputs x' and y'
(b) a NOT gate on the output of an AND gate with inputs x and y, compared with an OR gate with inputs x' and y'
The important relations developed in the above exercise are called De
Morgan's theorems and are widely used in simplifying circuits. These
correspond to rules (8a) and (8b) in the table of Boolean identities below.
4. Boolean Algebra
(1a) x · y = y · x
(1b) x + y = y + x
(2a) x · (y · z) = (x · y) · z
(2b) x + (y + z) = (x + y) + z
(3a) x · (y + z) = (x · y) + (x · z)
(3b) x + (y · z) = (x + y) · (x + z)
(4a) x · x = x
(4b) x + x = x
(5a) x · (x + y) = x
(5b) x + (x · y) = x
(6a) x · x' = 0
(6b) x + x' = 1
(7) (x')' = x
(8a) (x · y)' = x' + y'
(8b) (x + y)' = x' · y'
These rules are a direct translation into the notation of logic gates
of the rules derived in the package Truth Tables and Boolean
Algebra. We have seen that they can all be checked by investigating
the corresponding truth tables. Alternatively, some of these rules can
be derived from simpler identities derived in this package.
Example 3 Show how rule (5a) can be derived from the basic identities derived earlier.
Solution
x · (x + y) = x · x + x · y using (3a)
= x + x · y using (4a)
= x · (1 + y) using (3a)
= x · 1 using Exercise 1
= x as required.
Exercise 5.
(a) Show how rule (5b) can be derived in a similar fashion.
The examples above have all involved at most two inputs. However,
logic gates can be put together to join an arbitrary number of inputs.
The Boolean algebra rules of the table are essential to understand
when these circuits are equivalent and how they may be simplified.
Example 4 Let us consider the circuits which combine three inputs
via AND gates. Two different ways of combining them are (x · y) · z and x · (y · z).
However, rule (2a) states that these gates are equivalent. The order
of taking AND gates is not important. This is sometimes drawn as a
three (or more!) input AND gate with output x · y · z,
but really this just means repeated use of AND gates as shown above.
Exercise 6.
(a) Show two different ways of combining three inputs via OR gates
and explain why they are equivalent.
This equivalence is summarised as a three (or more!) input OR gate with output x + y + z;
this just means repeated use of OR gates as shown in the exercise.
5. Final Quiz
1. Select the Boolean expression that is not equivalent to x · x + x · x'
(a) x · (x + x') (b) (x + x') · x (c) x' (d) x
2. Select the expression which is equivalent to x · y + x · y · z
(a) x · y (b) x · z (c) y · z (d) x · y · z
3. Select the expression which is equivalent to (x + y) · (x + y')
(a) y (b) y' (c) x (d) x'
4. Select the expression that is not equivalent to x · (x' + y) + y
(a) x · x' + y · (1 + x) (b) 0 + x · y + y (c) x · y (d) y
Solutions to Exercises

Exercise 1(a) From the truth table for OR, we see that if x is 1 then
1 + 0 = 1, while if x is 0 then 0 + 0 = 0. This can be summarised in
the rule that x + 0 = x.

Exercise 1(b) From the truth table for OR we see that if x is 1 then
1 + 1 = 1, while if x is 0 then 0 + 1 = 1. This can be summarised in
the rule that x + 1 = 1.

Exercise 2(a) From the truth table for OR, we see that if x is 1 then
x + x = 1 + 1 = 1, while if x is 0 then x + x = 0 + 0 = 0. This can be
summarised in the rule that x + x = x.

Exercise 2(b) From the truth table for AND, we see that if x is 1
then x · x = 1 · 1 = 1, while if x is 0 then x · x = 0 · 0 = 0. This can
be summarised in the rule that x · x = x.

Exercise 3(a) From the truth table for OR, we see that if x is 1 then
x + x' = 1 + 0 = 1, while if x is 0 then x + x' = 0 + 1 = 1. This can
be summarised in the rule that x + x' = 1.

Exercise 3(b) From the truth table for AND, we see that if x is 1
then x · x' = 1 · 0 = 0, while if x is 0 then x · x' = 0 · 1 = 0. This can
be summarised in the rule that x · x' = 0.

Exercise 4(a) The truth tables are:

x y | x + y | (x + y)'
0 0 |   0   |    1
0 1 |   1   |    0
1 0 |   1   |    0
1 1 |   1   |    0

x y | x' | y' | x' · y'
0 0 |  1 |  1 |    1
0 1 |  1 |  0 |    0
1 0 |  0 |  1 |    0
1 1 |  0 |  0 |    0

From these we deduce the identity (x + y)' = x' · y'.

Exercise 4(b) The truth tables are:

x y | x · y | (x · y)'
0 0 |   0   |    1
0 1 |   0   |    1
1 0 |   0   |    1
1 1 |   1   |    0

x y | x' | y' | x' + y'
0 0 |  1 |  1 |    1
0 1 |  1 |  0 |    1
1 0 |  0 |  1 |    1
1 1 |  0 |  0 |    0

From these we deduce the identity (x · y)' = x' + y'.

Exercise 5(a)
x + x · y = x · (1 + y) using (3a)
= x · 1 using Exercise 1
= x as required.

Exercise 6(a) Two different ways of combining them are (x + y) + z and x + (y + z).
However, rule (2b) states that these gates are equivalent. The order
of taking OR gates is not important.

Solutions to Quizzes

Solution to Quiz: From the truth table for NOT we see that if x
is 1 then (x')' = (1')' = (0)' = 1, while if x is 0 then (x')' = (0')' =
(1)' = 0. This can be summarised in the rule that (x')' = x.



**FOR DETAILED INFO PLEASE VISIT: http://www.tech.plym.ac.uk/maths/resources/pdflatex/boolean_alg2.pdf**

BCD (BASICS)

In computing and electronic systems, binary-coded decimal (BCD) is a digital encoding method for decimal numbers in which each digit is represented by its own binary sequence. In BCD, a numeral is usually represented by four bits which, in general, represent the decimal range 0 through 9. Other bit patterns are sometimes used for a sign or for other indications (e.g., error or overflow). Uncompressed BCD consumes a byte for each represented numeral, whereas compressed or packed BCD typically carries two numerals in a single byte by taking advantage of the fact that four bits will represent the full numeral range.
BCD's main virtue is ease of conversion between machine- and human-readable formats, as well as a more precise machine-format representation of decimal quantities. As compared to typical binary formats, BCD's principal drawbacks are a small increase in the complexity of the circuits needed to implement basic mathematical operations and less efficient usage of storage facilities.
Although BCD is not as widely used as in the past, decimal fixed-point and floating-point formats are still important and continue to be used in financial, commercial, and industrial computing, where subtle conversion and rounding errors that are inherent to floating point binary representations cannot be tolerated.[1]

Basics

As described in the introduction, BCD takes advantage of the fact that any one decimal numeral can be represented by a four bit pattern:
Decimal: 0 1 2 3 4 5 6 7 8 9
Binary : 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001
As most computers store data in 8-bit bytes, it is possible to use one of the following methods to encode a BCD number:
Uncompressed: each numeral is encoded into one byte, with four bits representing the numeral and the remaining bits having no significance.
Packed: two numerals are encoded into a single byte, with one numeral in the least significant nibble (bits 0-3) and the other numeral in the most significant nibble (bits 4-7).
As an example, encoding the decimal number 91 using uncompressed BCD results in the following binary pattern of two bytes:
Decimal: 9 1
Binary : 0000 1001 0000 0001
In packed BCD, the same number would fit into a single byte:
Decimal: 9 1
Binary : 1001 0001
Hence the numerical range for one uncompressed BCD byte is zero through nine inclusive, whereas the range for one packed BCD is zero through ninety-nine inclusive.
To represent numbers larger than the range of a single byte any number of contiguous bytes may be used. For example, to represent the decimal number 12345 in packed BCD, using big-endian format, a program would encode as follows:
Decimal: 1 2 3 4 5
Binary : 0000 0001 0010 0011 0100 0101
Note that the most significant nibble of the least significant byte is zero, implying that the number is in actuality 012345. Also note how packed BCD is more efficient in storage usage as compared to uncompressed BCD; encoding the same number in uncompressed format would consume 67 percent more storage.
Shifting and masking operations are used to pack or unpack a packed BCD digit. Other logical operations are used to convert a numeral to its equivalent bit pattern or reverse the process.
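The shifting and masking just described can be sketched in Python (the `pack_bcd`/`unpack_bcd` helper names are ours; the sketch assumes big-endian, unsigned packed BCD):

```python
def pack_bcd(n):
    """Pack a non-negative decimal integer into packed-BCD bytes,
    big-endian, two digits per byte."""
    digits = [int(d) for d in str(n)]
    if len(digits) % 2:                 # pad to an even number of digits
        digits.insert(0, 0)
    return bytes((hi << 4) | lo for hi, lo in zip(digits[::2], digits[1::2]))

def unpack_bcd(data):
    """Unpack packed-BCD bytes back into a decimal integer."""
    n = 0
    for byte in data:
        n = n * 100 + (byte >> 4) * 10 + (byte & 0x0F)
    return n

assert pack_bcd(12345).hex() == "012345"   # matches the example above
assert unpack_bcd(pack_bcd(91)) == 91
```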
BCD in Electronics

BCD is very common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic and not containing a microprocessor. By utilizing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit. This matches much more closely the physical reality of display hardware; a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example. If the numeric quantity were stored and manipulated as pure binary, interfacing to such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to a simpler overall system than converting to binary.
The same argument applies when hardware of this type uses an embedded microcontroller or other small processor. Often, smaller code results when representing numbers internally in BCD format, since a conversion from or to binary representation can be expensive on such limited processors. For these applications, some small processors feature BCD arithmetic modes, which assist when writing routines that manipulate BCD quantities.
Packed BCD

A common variation of the two-digits-per-byte encoding is called packed BCD (or simply packed decimal), which has been in use since the 1960s or earlier and implemented in all IBM mainframe hardware since then. In most representations, one or more bytes hold a decimal integer, where each of the two nibbles of each byte represent a decimal digit, with the more significant digit in the upper half of each byte, and with leftmost byte (residing at the lowest memory address) containing the most significant digits of the packed decimal value. The lower nibble of the rightmost byte is usually used as the sign flag (although in some representations this nibble may be used as the least significant digit if the packed decimal value does not have a sign at all, i.e., is purely unsigned). As an example, a 4-byte value consists of 8 nibbles, wherein the upper 7 nibbles store the digits of a 7-digit decimal value and the lowest nibble indicates the sign of the decimal integer value.
Standard sign values are 1100 (hex C) for positive (+) and 1101 (D) for negative (−). This convention was derived from abbreviations for accounting terms (Credit and Debit), as packed decimal coding was widely used in accounting systems. Other allowed signs are 1010 (A) and 1110 (E) for positive and 1011 (B) for negative. Some implementations also provide unsigned BCD values with a sign nibble of 1111 (F). ILE RPG uses 1111 (F) for positive and 1101 (D) for negative.[2] In packed BCD, the number 127 is represented by 0001 0010 0111 1100 (127C) and −127 is represented by 0001 0010 0111 1101 (127D).
Sign digit | BCD (8 4 2 1) | Sign | Notes
A          | 1 0 1 0       | +    |
B          | 1 0 1 1       | −    |
C          | 1 1 0 0       | +    | Preferred
D          | 1 1 0 1       | −    | Preferred
E          | 1 1 1 0       | +    |
F          | 1 1 1 1       | +    | Unsigned
No matter how many bytes wide a word is, there are always an even number of nibbles because each byte has two of them. Therefore, a word of n bytes can contain up to (2n)−1 decimal digits, which is always an odd number of digits. A decimal number with d digits requires ½(d+1) bytes of storage space.
For example, a 4-byte (32-bit) word can hold seven decimal digits plus a sign, and can represent values from −9,999,999 to +9,999,999. Thus the number −1,234,567 is 7 digits wide and is encoded as:
0001 0010 0011 0100 0101 0110 0111 1101
1 2 3 4 5 6 7 −
(Note that, like character strings, the first byte of the packed decimal — with the most significant two digits — is usually stored in the lowest address in memory, independent of the endianness of the machine.)
In contrast, a 4-byte binary two's complement integer can represent values from −2,147,483,648 to +2,147,483,647.
While packed BCD does not make optimal use of storage (about 1/6 of the memory used is wasted), conversion to ASCII, EBCDIC, or the various encodings of Unicode is still trivial, as no arithmetic operations are required. The extra storage requirements are usually offset by the need for the accuracy and compatibility with calculator or hand calculation that fixed-point decimal arithmetic provides. Denser packings of BCD exist which avoid the storage penalty and also need no arithmetic operations for common conversions.
Packed BCD is supported in the COBOL programming language as the "COMPUTATIONAL-3" (an IBM extension adopted by many other compiler vendors) or "PACKED-DECIMAL" (part of the 1985 COBOL standard) data type. Besides the IBM System/360 and later compatible mainframes, packed BCD was implemented in the native instruction set of the original VAX processors from Digital Equipment Corporation.
Fixed-point packed decimal
Fixed-point decimal numbers are supported by some programming languages (such as COBOL and PL/I). These languages allow the programmer to specify an implicit decimal point in front of one of the digits. For example, a packed decimal value encoded with the bytes 12 34 56 7C represents the fixed-point value +1,234.567 when the implied decimal point is located between the 4th and 5th digits:
12 34 56 7C
12 34.56 7+
The decimal point is not actually stored in memory, as the packed BCD storage format does not provide for it. Its location is simply known to the compiler and the generated code acts accordingly for the various arithmetic operations.
Higher-density encodings
If a decimal digit requires four bits, then three decimal digits require 12 bits. However, since 2^10 (1,024) is greater than 10^3 (1,000), if three decimal digits are encoded together, only 10 bits are needed. Two such encodings are Chen-Ho encoding and Densely Packed Decimal. The latter has the advantage that subsets of the encoding encode two digits in the optimal seven bits and one digit in four bits, as in regular BCD.
Zoned decimal

Some implementations, for example IBM mainframe systems, support zoned decimal numeric representations. Each decimal digit is stored in one byte, with the lower four bits encoding the digit in BCD form. The upper four bits, called the "zone" bits, are usually set to a fixed value so that the byte holds a character value corresponding to the digit. EBCDIC systems use a zone value of 1111 (hex F); this yields bytes in the range F0 to F9 (hex), which are the EBCDIC codes for the characters "0" through "9". Similarly, ASCII systems use a zone value of 0011 (hex 3), giving character codes 30 to 39 (hex).
For signed zoned decimal values, the rightmost (least significant) zone nibble holds the sign digit, which is the same set of values that are used for signed packed decimal numbers (see above). Thus a zoned decimal value encoded as the hex bytes F1 F2 D3 represents the signed decimal value −123:
F1 F2 D3
1 2 −3
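Decoding a signed zoned decimal value as described can be sketched as follows (the `decode_zoned` helper name is ours; it assumes the sign-nibble conventions given in the text):

```python
NEGATIVE_SIGNS = {0xB, 0xD}   # negative sign nibbles, per the table above

def decode_zoned(data):
    """Decode signed zoned-decimal bytes; the zone nibble of the
    rightmost byte holds the sign."""
    digits = [b & 0x0F for b in data]      # low nibble of each byte is the digit
    value = int("".join(map(str, digits)))
    sign_nibble = data[-1] >> 4
    return -value if sign_nibble in NEGATIVE_SIGNS else value

assert decode_zoned(bytes([0xF1, 0xF2, 0xD3])) == -123   # the example above
```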
EBCDIC zoned decimal conversion table
BCD Digit Hexadecimal EBCDIC Character
0+ C0 A0 E0 F0 { (*) \ (*) 0
1+ C1 A1 E1 F1 A ~ (*) 1
2+ C2 A2 E2 F2 B s S 2
3+ C3 A3 E3 F3 C t T 3
4+ C4 A4 E4 F4 D u U 4
5+ C5 A5 E5 F5 E v V 5
6+ C6 A6 E6 F6 F w W 6
7+ C7 A7 E7 F7 G x X 7
8+ C8 A8 E8 F8 H y Y 8
9+ C9 A9 E9 F9 I z Z 9
0− D0 B0 } (*) ^ (*)
1− D1 B1 J
2− D2 B2 K
3− D3 B3 L
4− D4 B4 M
5− D5 B5 N
6− D6 B6 O
7− D7 B7 P
8− D8 B8 Q
9− D9 B9 R
(*) Note: These characters vary depending on the local character code page setting.
Fixed-point zoned decimal
Some languages (such as COBOL and PL/I) directly support fixed-point zoned decimal values, assigning an implicit decimal point at some location between the decimal digits of a number. For example, given a six-byte signed zoned decimal value with an implied decimal point to the right of the fourth digit, the hex bytes F1 F2 F7 F9 F5 C0 represent the value +1,279.50:
F1 F2 F7 F9 F5 C0
1 2 7 9. 5 +0
IBM and BCD

Main article: BCD (6-bit)
IBM used the terms binary-coded decimal and BCD for 6-bit alphameric codes that represented numbers, upper-case letters and special characters. Some variation of BCD alphamerics was used in most early IBM computers, including the IBM 1620, IBM 1400 series, and non-Decimal Architecture members of the IBM 700/7000 series.
The IBM 1400 series were character-addressable machines, each location being six bits labeled B, A, 8, 4, 2 and 1, plus an odd parity check bit (C) and a word mark bit (M). For encoding digits 1 through 9, B and A were zero and the digit value represented by standard 4-bit BCD in bits 8 through 1. For most other characters bits B and A were derived simply from the "12", "11", and "0" "zone punches" in the punched card character code, and bits 8 through 1 from the 1 through 9 punches. A "12 zone" punch set both B and A, an "11 zone" set B, and a "0 zone" (a 0 punch combined with any others) set A. Thus the letter A, (12,1) in the punched card format, was encoded (B,A,1) and the currency symbol $, (11,8,3) in the punched card, as (B,8,3). This allowed the circuitry to convert between the punched card format and the internal storage format to be very simple with only a few special cases. One important special case was digit 0, represented by a lone 0 punch in the card, and (8,2) in core memory. [3]
The memory of the IBM 1620 was organized into 5-bit addressable digits, the usual 8, 4, 2, 1 plus F, used as a flag bit. BCD alphamerics were encoded using digit pairs, with the "zone" in the even-addressed digit and the "digit" in the odd-addressed digit, the "zone" being related to the 12, 11, and 0 "zone punches" as in the 1400 series. Input/Output translation hardware converted between the internal digit pairs and the external standard 6-bit BCD codes.
In the Decimal Architecture IBM 7070, IBM 7072, and IBM 7074 alphamerics were encoded using digit pairs (using two-out-of-five code in the digits, not BCD) of the 10-digit word, with the "zone" in the left digit and the "digit" in the right digit. Input/Output translation hardware converted between the internal digit pairs and the external standard 6-bit BCD codes.
With the introduction of System/360, IBM expanded 6-bit BCD alphamerics to 8-bit EBCDIC, allowing the addition of many more characters (e.g., lowercase letters). A variable length Packed BCD numeric data type was also implemented, providing machine instructions that performed arithmetic directly on packed decimal data.
Today, BCD data is still heavily used in IBM processors and databases, such as IBM DB2, mainframes, and Power6. In these products, the BCD is usually zoned BCD (as in EBCDIC or ASCII), Packed BCD (two decimal digits per byte), or "pure" BCD encoding (one decimal digit stored as BCD in the low four bits of each byte). All of these are used within hardware registers and processing units, and in software.
Other computers and BCD

The Digital Equipment Corporation VAX-11 series included instructions that could perform arithmetic directly on packed BCD data and convert between packed BCD data and other integer representations. The MicroVAX and later VAX implementations dropped this ability from the CPU but retained code compatibility with earlier machines by implementing the missing instructions in an operating system-supplied software library. This was invoked automatically via exception handling when the no longer implemented instructions were encountered, so that programs using them could execute without modification on the newer machines.
In more recent computers such capabilities are almost always implemented in software rather than the CPU's instruction set, but BCD numeric data is still extremely common in commercial and financial applications.
Addition with BCD

It is possible to perform addition in BCD by first adding in binary, and then converting to BCD afterwards. Conversion of the simple sum of two digits can be done by adding 6 (that is, 16 – 10) when the result has a value greater than 9. For example:
1001 + 1000 = 10001 = 0001 0001
9 + 8 = 17 = 1 1
In BCD, there cannot exist a value greater than 9 (1001) per nibble. To correct this, 6 (0110) is added to that sum to get the correct first two digits:
0001 0001 + 0000 0110 = 0001 0111
1 1 + 0 6 = 1 7
which gives two nibbles, 0001 and 0111, which correspond to the digits "1" and "7". This yields "17" in BCD, which is the correct result. This technique can be extended to adding multiple digits, by adding in groups from right to left, propagating the second digit as a carry, always comparing the 5-bit result of each digit-pair sum to 9.
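The add-then-correct procedure for a single digit pair can be sketched in Python (the `bcd_add_digits` helper name is ours):

```python
def bcd_add_digits(a, b, carry_in=0):
    """Add two BCD digits (0-9); return (sum_digit, carry_out) using the +6 fix."""
    s = a + b + carry_in          # plain binary addition first
    if s > 9:                     # not a valid BCD digit
        s += 6                    # add 6 to skip the unused codes A-F
        return s & 0x0F, 1
    return s, 0

# 9 + 8: binary sum is 17; the correction yields digit 7 with carry 1, i.e. "17"
digit, carry = bcd_add_digits(9, 8)
assert (carry, digit) == (1, 7)
```

Multi-digit addition repeats this from right to left, feeding each carry into the next digit pair.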
Subtraction with BCD

Subtraction is done by adding the ten's complement of the subtrahend. To represent the sign of a number in BCD, the number 0000 is used to represent a positive number, and 1001 is used to represent a negative number. The remaining 14 combinations are invalid signs. To illustrate signed BCD subtraction, consider the following problem: 357 - 432.
In signed BCD, 357 is 0000 0011 0101 0111. The ten's complement of 432 can be obtained by taking the nine's complement of 432 and then adding one. So, 999 - 432 = 567, and 567 + 1 = 568. Prefixing 568 in BCD with the negative sign code gives the ten's-complement representation of -432: 1001 0101 0110 1000 (that is, 9568).
Now that both numbers are represented in signed BCD, they can be added together:
0000 0011 0101 0111 + 1001 0101 0110 1000 = 1001 1000 1011 1111
0 3 5 7 + 9 5 6 8 = 9 8 11 15
Since BCD is a form of decimal representation, several of the digit sums above are invalid. In the event that an invalid entry (any BCD digit greater than 1001) exists, 6 is added to generate a carry bit and cause the sum to become a valid entry. The reason for adding 6 is that there are 16 possible 4-bit BCD values (since 2^4 = 16), but only 10 values are valid (0000 through 1001). So adding 6 to the invalid entries results in the following:
1001 1000 1011 1111 + 0000 0000 0110 0110 = 1001 1001 0010 0101
9 8 11 15 + 0 0 6 6 = 9 9 2 5
Thus the result of the subtraction is 1001 1001 0010 0101 (-925). To check the answer, note that the first digit is the sign digit, which is negative. This seems to be correct, since 357 - 432 should result in a negative number. To check the rest of the digits, represent them in decimal: 1001 0010 0101 is 925. The ten's complement of 925 is 1000 - 925 = 999 - 925 + 1 = 074 + 1 = 075, so the calculated answer is -75. To check, perform standard subtraction to verify that 357 - 432 is -75.
Note that in the event that there are a different number of nibbles being added together (such as 1053 - 122), the number with the fewest number of digits must first be padded with zeros before taking the ten's complement or subtracting. So, with 1053 - 122, 122 would have to first be represented as 0122, and the ten's complement of 0122 would have to be calculated.
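The ten's-complement procedure can be sketched as follows (the `tens_complement` helper name is ours; `width` is the padded digit count):

```python
def tens_complement(n, width):
    """Ten's complement of n at the given digit width (nines' complement + 1)."""
    nines = int("9" * width) - n
    return nines + 1

# 357 - 432 via ten's-complement addition, as in the worked example
minuend, subtrahend, width = 357, 432, 3
total = minuend + tens_complement(subtrahend, width)
if total < 10 ** width:            # no carry out: the result is negative
    result = -tens_complement(total, width)
else:                              # carry out: drop it, the result is positive
    result = total - 10 ** width
assert result == -75
```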
Background

The binary-coded decimal scheme described in this article is the most common encoding, but there are many others. The method here can be referred to as Simple Binary-Coded Decimal (SBCD) or BCD 8421. In the headers to the table, the '8 4 2 1', etc., indicates the weight of each bit shown; note that in the fifth column two of the weights are negative. Both ASCII and EBCDIC character codes for the digits are examples of zoned BCD, and are also shown in the table.
The following table represents decimal digits from 0 to 9 in various BCD systems:

Digit | BCD 8421 | Excess-3 (Stibitz) | BCD 2421 (Aiken) | BCD 8 4 −2 −1 | IBM 702/705/7080/1401 (8421) | ASCII (0011 8421) | EBCDIC (1111 8421)
0     | 0000     | 0011               | 0000             | 0000          | 1010                         | 0011 0000         | 1111 0000
1     | 0001     | 0100               | 0001             | 0111          | 0001                         | 0011 0001         | 1111 0001
2     | 0010     | 0101               | 0010             | 0110          | 0010                         | 0011 0010         | 1111 0010
3     | 0011     | 0110               | 0011             | 0101          | 0011                         | 0011 0011         | 1111 0011
4     | 0100     | 0111               | 0100             | 0100          | 0100                         | 0011 0100         | 1111 0100
5     | 0101     | 1000               | 1011             | 1011          | 0101                         | 0011 0101         | 1111 0101
6     | 0110     | 1001               | 1100             | 1010          | 0110                         | 0011 0110         | 1111 0110
7     | 0111     | 1010               | 1101             | 1001          | 0111                         | 0011 0111         | 1111 0111
8     | 1000     | 1011               | 1110             | 1000          | 1000                         | 0011 1000         | 1111 1000
9     | 1001     | 1100               | 1111             | 1111          | 1001                         | 0011 1001         | 1111 1001

BCD ADDER

A binary coded decimal (BCD) adder. Note that you should only apply input values from 0..9 to the inputs of the adder, because the remaining values A..F are undefined for BCD arithmetic.

Naturally, it would be easy to design a special circuit for the binary coded decimal arithmetic. However, this is seldom done.

The circuit shown here relies on the same trick that is often used in microprocessors for BCD arithmetic instructions. For example, many microprocessors, including the Intel 808x and Motorola 68xx families, provide a special decimal adjust accumulator (DAA) instruction. A BCD addition is then performed in two steps: a standard binary addition followed by the DAA instruction. The basic operation performed by DAA is to add a constant value of 6 for each BCD digit that overflowed during the first addition. Very little logic is required to implement this operation.

To make this behaviour explicit, the circuit shown in the applet uses two stages of binary adders, each built with a single 7483 4-bit adder. The first stage consists of just the binary adder. The second stage uses a few gates to check for a decimal overflow, that is, output values larger than 9. If an overflow is detected, the second adder is hardwired to add the value 6 (0110) to the output of the first adder - which is equivalent to a subtraction of 10, thereby undoing the overflow of the first stage. The resulting 4-bit output value and 1-bit carry are the correct sum in BCD arithmetic.
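To make the two-stage trick concrete, here is a one-digit sketch in Python (a software stand-in for the 7483 hardware; the function name is hypothetical):

```python
def bcd_add_digit(a, b, carry_in=0):
    """Add two BCD digits (0..9); return (sum_digit, carry_out)."""
    assert 0 <= a <= 9 and 0 <= b <= 9
    s = a + b + carry_in        # first stage: plain 4-bit binary addition
    if s > 9:                   # decimal overflow detected (value larger than 9)
        s += 6                  # second stage: add 0110, equivalent to subtracting 10
        return s & 0b1111, 1    # keep the low nibble, raise the decimal carry
    return s, 0
```

For example, adding the digits 7 and 5 overflows the decimal range, so the correction stage fires and yields digit 2 with a carry, i.e. 12.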

HEXADECIMAL NUMBER SYSTEM

Hexadecimal Number System

A big problem with the binary system is verbosity. To represent the value 202 requires eight binary digits.
The decimal version requires only three decimal digits and, thus, represents numbers much more compactly than does the binary numbering system. This fact was not lost on the engineers who designed binary computer systems.

When dealing with large values, binary numbers quickly become unwieldy. The hexadecimal (base 16) numbering system solves this problem. Hexadecimal numbers offer two features:

hex numbers are very compact
it is easy to convert from hex to binary and binary to hex.
The Hexadecimal system is based on the binary system using a Nibble or 4-bit boundary. In Assembly Language programming, most assemblers require the first digit of a hexadecimal number to be 0, and place an "h" at the end of the number to denote the number base. In PICBASIC, we simply put a "$" at the left end of the number.

The Hexadecimal Number System:

uses base 16
includes only the digits 0 through 9 and the letters A, B, C, D, E, and F
In the Hexadecimal number system, the hex values greater than 9 carry the following decimal value:

Binary Decimal Hex
%0000 0 $0
%0001 1 $1
%0010 2 $2
%0011 3 $3
%0100 4 $4
%0101 5 $5
%0110 6 $6
%0111 7 $7
%1000 8 $8
%1001 9 $9
%1010 10 $A
%1011 11 $B
%1100 12 $C
%1101 13 $D
%1110 14 $E
%1111 15 $F
This table provides all the information you'll ever need to convert from one number base into any other number base for the decimal values from 0 to 15.

To convert a hexadecimal number into a binary number, simply substitute the corresponding four bits in binary for each hexadecimal digit in the number.

For example, to convert $ABCD into a binary value, simply convert each hexadecimal digit according to the table above. The binary equivalent is:

$ABCD = 1010 1011 1100 1101
To convert a binary number into hexadecimal format is almost as easy. The first step is to pad the binary number with leading zeros to make sure that the binary number contains a multiple of four bits. For example, given the binary number 1011001010, the first step would be to add two bits in the MSB position so that it contains 12 bits. The revised binary value is 001011001010.

The next step is to separate the binary value into groups of four bits, e.g., 0010 1100 1010. Finally, look up these binary values in the table above and substitute the appropriate hexadecimal digits: %0010=$2, %1100=$C, %1010=$A. Thus %1011001010 = $2CA.
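Both directions of this nibble substitution can be sketched in Python (hypothetical helper names; the $/% prefixes from the PICBASIC notation are omitted in the return values):

```python
def hex_to_bin(hex_str):
    """Replace each hex digit with its 4-bit binary group."""
    return ' '.join(format(int(d, 16), '04b') for d in hex_str)

def bin_to_hex(bin_str):
    """Pad to a multiple of 4 bits, then convert each 4-bit group to one hex digit."""
    bits = bin_str.zfill((len(bin_str) + 3) // 4 * 4)   # add leading zeros at the MSB end
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return ''.join(format(int(g, 2), 'X') for g in groups)
```

Running these on the examples from the text reproduces $ABCD = 1010 1011 1100 1101 and %1011001010 = $2CA.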

The weighted values for each position are as follows:

16^3 16^2 16^1 16^0
4096 256 16 1
Binary to Hex Conversion

It is easy to convert from an integer binary number to hex. This is accomplished by:

Break the binary number into 4-bit sections from the LSB to the MSB.
Convert the 4-bit binary number to its Hex equivalent.
For example, the binary value 1010111110110010 will be written:

1010 1111 1011 0010
A F B 2
Hex to Binary Conversion

It is also easy to convert from an integer hex number to binary. This is accomplished by:

Convert the Hex number to its 4-bit binary equivalent.
Combine the 4-bit sections by removing the spaces.
For example, the hex value $AFB2 will be written:

A F B 2
1010 1111 1011 0010
This yields the binary number 1010111110110010.

Hex to Decimal Conversion

To convert from Hex to Decimal, multiply the value in each position by its hex weight and add each value. Using the value from the previous example, $AFB2, we would expect to obtain the decimal value 44978.

(A*16^3) + (F*16^2) + (B*16^1) + (2*16^0) =

(10*4096) + (15*256) + (11*16) + (2*1) =

40960 + 3840 + 176 + 2 = 44978
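The weighted-sum calculation above can be sketched as a short Python function (the name is my own):

```python
def hex_to_dec(hex_str):
    """Multiply each hex digit by its positional weight (a power of 16) and sum."""
    total = 0
    for position, digit in enumerate(reversed(hex_str)):  # LSD has weight 16^0
        total += int(digit, 16) * 16 ** position
    return total
```

For $AFB2 this yields 44978, matching the arithmetic above.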

Decimal to Hex Conversion

To convert decimal to hex is slightly more difficult. The typical method to convert from decimal to hex is repeated division by 16. While we may also use repeated subtraction by the weighted position value, it is more difficult for large decimal numbers.

Repeated Division By 16

For this method, divide the decimal number by 16, and write the remainder on the side as the least significant digit. This process is continued by dividing the quotient by 16 and writing the remainder until the quotient is 0. When performing the division, the remainders which will represent the hex equivalent of the decimal number are written beginning at the least significant digit (right) and each new digit is written to the next more significant digit (the left) of the previous digit. Consider the number 44978.

Division Quotient Remainder Hex Number
44978 / 16 2811 2 2
2811 / 16 175 11 B2
175 / 16 10 15 FB2
10 / 16 0 10 AFB2
As you can see, we are back with the original number. That is what we should expect.
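The repeated division by 16 can be sketched in Python (a hypothetical helper name, building the digit string right to left exactly as the table does):

```python
def dec_to_hex(n):
    """Divide by 16 until the quotient is 0, collecting remainders right to left."""
    digits = ''
    while n > 0:
        n, remainder = divmod(n, 16)
        digits = "0123456789ABCDEF"[remainder] + digits  # each new digit goes to the left
    return digits or '0'
```

Applied to 44978 this reproduces AFB2 from the table above.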

MSD AND LSD

MSD and LSD

When determining the most and least significant digits in an octal number, use the same rules that you used with the other number systems. The digit farthest to the left of the radix point is the MSD, and the one farthest right of the radix point is the LSD.

Example:



If the number is a whole number, the MSD is the nonzero digit farthest to the left of the radix point and the LSD is the digit immediately to the left of the radix point. Conversely, if the number is a fraction only, the nonzero digit closest to the radix point is the MSD and the LSD is the nonzero digit farthest to the right of the radix point.

Addition of Octal Numbers

The addition of octal numbers is not difficult provided you remember that anytime the sum of two digits exceeds 7, a carry is produced. Compare the two examples shown below:



The octal addition table in table 1-4 will be of benefit to you until you are accustomed to adding octal numbers. To use the table, simply follow the directions used in this example:

Add: 6₈ and 5₈

Table 1-4. - Octal Addition Table



Locate the 6 in the X column of the figure. Next locate the 5 in the Y column. The point in area Z where these two columns intersect is the sum. Therefore,



If you use the concepts of addition you have already learned, you are ready to add octal numbers.

Work through the solutions to the following problems:



As was mentioned earlier in this section, each time the sum of a column of numbers exceeds 7, a carry is produced. More than one carry may be produced if there are three or more numbers to be added, as in this example:



The sum of the augend and the first addend is 6₈ with a carry. The sum of 6₈ and the second addend is 5₈ with a carry. You should write down the 5₈ and add the two carries and bring them down to the sum.

OCTAL NUMBER SYSTEM

OCTAL NUMBER SYSTEM

The octal, or base 8, number system is a common system used with computers. Because of its relationship with the binary system, it is useful in programming some types of computers.

Look closely at the comparison of binary and octal number systems in table 1-3. You can see that one octal digit is the equivalent value of three binary digits. The following examples of the conversion of octal 225₈ to binary and back again further illustrate this comparison:
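The three-bit correspondence can be sketched in Python (hypothetical helper names), using 225₈ as in the example:

```python
def oct_to_bin(oct_str):
    """Each octal digit maps to exactly three binary digits."""
    return ''.join(format(int(d, 8), '03b') for d in oct_str)

def bin_to_oct(bin_str):
    """Pad to a multiple of 3 bits, then read off each 3-bit group as one octal digit."""
    bits = bin_str.zfill((len(bin_str) + 2) // 3 * 3)
    return ''.join(str(int(bits[i:i + 3], 2)) for i in range(0, len(bits), 3))
```

Converting 225₈ to binary gives 010 010 101, and grouping those bits back into threes recovers 225₈.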



Table 1-3. - Binary and Octal Comparison



Unit and Number

The terms that you learned in the decimal and binary sections are also used with the octal system.

The unit remains a single object, and the number is still a symbol used to represent one or more units.

Base (Radix)

As with the other systems, the radix, or base, is the number of symbols used in the system. The octal system uses eight symbols - 0 through 7. The base, or radix, is indicated by the subscript 8.

Positional Notation

The octal number system is a positional notation number system. Just as the decimal system uses powers of 10 and the binary system uses powers of 2, the octal system uses powers of 8 to determine the value of a number's position. The following bar graph shows the positions and the power of the base:



Remember that the power, or exponent, indicates the number of times the base is multiplied by itself. The value of this multiplication is expressed in base 10 as shown below:



All numbers to the left of the radix point are whole numbers, and those to the right are fractional numbers.

BINARY NUMBER SYSTEM

Basic Concepts Behind the Binary System

To understand binary numbers, begin by recalling elementary school math. When we first learned about numbers, we were taught that, in the decimal system, things are organized into columns:

H | T | O
1 | 9 | 3
such that "H" is the hundreds column, "T" is the tens column, and "O" is the ones column. So the number "193" is 1-hundreds plus 9-tens plus 3-ones.
Years later, we learned that the ones column meant 10^0, the tens column meant 10^1, the hundreds column 10^2 and so on, such that

10^2|10^1|10^0
1 | 9 | 3
the number 193 is really {(1*10^2)+(9*10^1)+(3*10^0)}.
As you know, the decimal system uses the digits 0-9 to represent numbers. If we wanted to put a larger number in column 10^n (e.g., 10), we would have to multiply 10*10^n, which would give 10^(n+1), and be carried a column to the left. For example, putting ten in the 10^0 column is impossible, so we put a 1 in the 10^1 column, and a 0 in the 10^0 column, thus using two columns. Twelve would be 12*10^0, or 10^0(10+2), or 10^1+2*10^0, which also uses an additional column to the left (12).

The binary system works under the exact same principles as the decimal system, only it operates in base 2 rather than base 10. In other words, instead of columns being


10^2|10^1|10^0
they are
2^2|2^1|2^0
Instead of using the digits 0-9, we only use 0-1 (again, if we used anything larger, it would be like multiplying 2*2^n and getting 2^(n+1), which would not fit in the 2^n column; it would shift you one column to the left). For example, "3" in binary cannot be put into one column. The first column we fill is the right-most column, which is 2^0, or 1. Since 3>1, we need to use an extra column to the left, and indicate it as "11" in binary: (1*2^1) + (1*2^0).

Examples: What would the binary number 1011 be in decimal notation?




Try converting these numbers from binary to decimal:

10
111
10101
11110
Remember:
2^4| 2^3| 2^2| 2^1| 2^0
| | | 1 | 0
| | 1 | 1 | 1
1 | 0 | 1 | 0 | 1
1 | 1 | 1 | 1 | 0





Binary Addition

Consider the addition of decimal numbers:

23
+48
___
We begin by adding 3+8=11. Since 11 is greater than 10, a one is put into the 10's column (carried), and a 1 is recorded in the one's column of the sum. Next, add {(2+4) +1} (the one is from the carry)=7, which is put in the 10's column of the sum. Thus, the answer is 71.

Binary addition works on the same principle, but the numerals are different. Begin with one-bit binary addition:

0 0 1
+0 +1 +0
___ ___ ___
0 1 1
1+1 carries us into the next column. In decimal form, 1+1=2. In binary, any digit higher than 1 puts us a column to the left (as would 10 in decimal notation). The decimal number "2" is written in binary notation as "10" (1*2^1)+(0*2^0). Record the 0 in the ones column, and carry the 1 to the twos column to get an answer of "10." In our vertical notation,

1
+1
___
10
The process is the same for multiple-bit binary numbers:

1010
+1111
______
Step one:
Column 2^0: 0+1=1.
Record the 1.
Temporary Result: 1; Carry: 0
Step two:
Column 2^1: 1+1=10.
Record the 0, carry the 1.
Temporary Result: 01; Carry: 1
Step three:
Column 2^2: 1+0=1 Add 1 from carry: 1+1=10.
Record the 0, carry the 1.
Temporary Result: 001; Carry: 1
Step four:
Column 2^3: 1+1=10. Add 1 from carry: 10+1=11.
Record the 11.
Final result: 11001
Alternately:

11 (carry)
1010
+1111
______
11001
Always remember

0+0=0
1+0=1
1+1=10
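The column-by-column procedure above (0+0=0, 1+0=1, 1+1=10 with a carry) can be sketched as a Python function over binary strings (the name is my own):

```python
def add_binary(a, b):
    """Add two binary strings, rightmost column first, propagating the carry."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, result = 0, ''
    for x, y in zip(reversed(a), reversed(b)):
        s = int(x) + int(y) + carry
        result = str(s % 2) + result   # record this column's bit
        carry = s // 2                 # 1+1=10: the 1 carries into the next column
    return ('1' if carry else '') + result
```

For 1010 + 1111 this reproduces the four-step result 11001 worked out above.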
Try a few examples of binary addition:

111 101 111
+110 +111 +111
______ _____ _____



Binary Multiplication

Multiplication in the binary system works the same way as in the decimal system:

1*1=1
1*0=0
0*1=0
101
* 11
____
101
1010
_____
1111
Note that multiplying by two is extremely easy. To multiply by two, just add a 0 on the end.
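The worked example above is a shift-and-add scheme: each 1 bit of the multiplier contributes a shifted copy of the multiplicand (appending a 0 is the multiply-by-two). A Python sketch (hypothetical name; Python's int handles the final column addition for brevity):

```python
def multiply_binary(a, b):
    """Form one shifted partial product per 1 bit of b, then add them up."""
    partials = []
    for shift, bit in enumerate(reversed(b)):
        if bit == '1':
            partials.append(a + '0' * shift)  # shifted copy of the multiplicand
    total = sum(int(p, 2) for p in partials)  # column addition of the partial products
    return format(total, 'b')
```

For 101 * 11 the partial products are 101 and 1010, summing to 1111 as in the example.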

Binary Division

Follow the same rules as in decimal division. For the sake of simplicity, stop at the units place and write the remainder rather than continuing into fractional places.

For Example: 111011/11


10011 r 10
_______
11)111011
-11
______
101
-11
______
101
-11
______
10
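The long division above can be sketched in Python (a hypothetical helper name): bring down one bit at a time and subtract the divisor whenever it "goes in".

```python
def divide_binary(dividend, divisor):
    """Binary long division; returns (quotient, remainder) as binary strings."""
    quotient, remainder = '', 0
    d = int(divisor, 2)
    for bit in dividend:
        remainder = remainder * 2 + int(bit)   # bring down the next bit
        if remainder >= d:
            quotient += '1'
            remainder -= d                     # the divisor goes in; subtract it
        else:
            quotient += '0'
    return quotient.lstrip('0') or '0', format(remainder, 'b')
```

For 111011 / 11 this yields the quotient 10011 with remainder 10, matching the worked division.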

Decimal to Binary

Converting from decimal to binary notation is slightly more difficult conceptually, but can easily be done once you know how through the use of algorithms. Begin by thinking of a few examples. We can easily see that the number 3 = 2+1, and that this is equivalent to (1*2^1)+(1*2^0). This translates into putting a "1" in the 2^1 column and a "1" in the 2^0 column, to get "11". Almost as intuitive is the number 5: it is obviously 4+1, which is the same as saying [(2*2)+1], or [2^2+1]. This can also be written as [(1*2^2)+(1*2^0)]. Looking at this in columns,

2^2 | 2^1 | 2^0
1 0 1
or 101.
What we're doing here is finding the largest power of two within the number (2^2=4 is the largest power of 2 in 5), subtracting that from the number (5-4=1), and finding the largest power of 2 in the remainder (2^0=1 is the largest power of 2 in 1). Then we just put this into columns. This process continues until we have a remainder of 0. Let's take a look at how it works. We know that:

2^0=1
2^1=2
2^2=4
2^3=8
2^4=16
2^5=32
2^6=64
2^7=128
and so on. To convert the decimal number 75 to binary, we would find the largest power of 2 less than 75, which is 64. Thus, we would put a 1 in the 2^6 column, and subtract 64 from 75, giving us 11. The largest power of 2 in 11 is 8, or 2^3. Put 1 in the 2^3 column, and 0 in 2^4 and 2^5. Subtract 8 from 11 to get 3. Put 1 in the 2^1 column, 0 in 2^2, and subtract 2 from 3. We're left with 1, which goes in 2^0, and we subtract one to get zero. Thus, our number is 1001011.
Making this algorithm a bit more formal gives us:

1. Let D = the number we wish to convert from decimal to binary.
2. Repeat until D = 0:
   a. Find the largest power of two in D. Let this equal P.
   b. Put a 1 in binary column P.
   c. Subtract P from D.
3. Put zeros in all columns which don't have ones.
This algorithm is a bit awkward, particularly step 3, "filling in the zeros." Therefore, we should rewrite it so that we ascertain the value of each column individually, putting in 0's and 1's as we go:

Let D = the number we wish to convert from decimal to binary.
Find P, such that 2^P is the largest power of two smaller than or equal to D.
Repeat until P < 0:
   If 2^P <= D, put 1 into column P and subtract 2^P from D.
   Otherwise, put 0 into column P.
   Subtract 1 from P.

Now that we have an algorithm, we can use it to convert numbers from decimal to binary relatively painlessly. Let's try the number D=55. Our first step is to find P. We know that 2^4=16, 2^5=32, and 2^6=64. Therefore, P=5.
2^5<=55, so we put a 1 in the 2^5 column: 1-----.
Subtract 32 from 55 to get 23. Subtract 1 from P to get 4.
2^4<=23, so we put a 1 in the 2^4 column: 11----.
Subtract 16 from 23 to get 7. Subtract 1 from P to get 3.
2^3>7, so we put a 0 in the 2^3 column: 110---
Next, subtract 1 from P, which gives us 2.
2^2<=7, so we put a 1 in the 2^2 column: 1101--
Subtract 4 from 7 to get 3. Subtract 1 from P to get 1.
2^1<=3, so we put a 1 in the 2^1 column: 11011-
Subtract 2 from 3 to get 1. Subtract 1 from P to get 0.
2^0<=1, so we put a 1 in the 2^0 column: 110111
Subtract 1 from 1 to get 0. Subtract 1 from P to get -1.
P is now less than zero, so we stop.
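The column-testing walkthrough above can be sketched directly in Python (assuming a hypothetical dec_to_bin helper name):

```python
def dec_to_bin(d):
    """Test each power of two from the largest (2^P <= d) down to 2^0."""
    if d == 0:
        return '0'
    p = 0
    while 2 ** (p + 1) <= d:   # find P, the largest power of two not exceeding D
        p += 1
    bits = ''
    while p >= 0:              # repeat until P < 0
        if 2 ** p <= d:
            bits += '1'        # put 1 into column P
            d -= 2 ** p        # subtract 2^P from D
        else:
            bits += '0'        # put 0 into column P
        p -= 1
    return bits
```

Running it on 55 reproduces 110111, and on 75 it reproduces 1001011 from the earlier example.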
Another algorithm for converting decimal to binary

However, this is not the only approach possible. We can start at the right, rather than the left.

All binary numbers are in the form

a[n]*2^n + a[n-1]*2^(n-1)+...+a[1]*2^1 + a[0]*2^0
where each a[i] is either a 1 or a 0 (the only possible digits for the binary system). The only way a number can be odd is if it has a 1 in the 2^0 column, because all powers of two greater than 0 are even numbers (2, 4, 8, 16...). This gives us the rightmost digit as a starting point.
Now we need to do the remaining digits. One idea is to "shift" them. It is also easy to see that multiplying and dividing by 2 shifts everything by one column: two in binary is 10, or (1*2^1). Dividing (1*2^1) by 2 gives us (1*2^0), or just a 1 in binary. Similarly, multiplying by 2 shifts in the other direction: (1*2^1)*2=(1*2^2) or 10 in binary. Therefore

{a[n]*2^n + a[n-1]*2^(n-1) + ... + a[1]*2^1 + a[0]*2^0}/2
is equal to

a[n]*2^(n-1) + a[n-1]*2^(n-2) + ... + a[1]*2^0
Let's look at how this can help us convert from decimal to binary. Take the number 163. We know that since it is odd, there must be a 1 in the 2^0 column (a[0]=1). We also know that it equals 162+1. If we put the 1 in the 2^0 column, we have 162 left, and have to decide how to translate the remaining digits.

Two's column: Dividing 162 by 2 gives 81. The number 81 in binary would also have a 1 in the 2^0 column. Since we divided the number by two, we "took out" one power of two. Similarly, the statement a[n-1]*2^(n-1) + a[n-2]*2^(n-2) + ... + a[1]*2^0 has a power of two removed. Our "new" 2^0 column now contains a[1]. We learned earlier that there is a 1 in the 2^0 column if the number is odd. Since 81 is odd, a[1]=1. Practically, we can simply keep a "running total", which now stands at 11 (a[1]=1 and a[0]=1). Also note that a[1] is essentially "remultiplied" by two just by putting it in front of a[0], so it is automatically fit into the correct column.

Four's column: Now we can subtract 1 from 81 to see what remainder we still must place (80). Dividing 80 by 2 gives 40. Since 40 is even, there must be a 0 in the 4's column (what we are actually testing is the new 2^0 column, and the number there is not odd).

Eight's column: We can divide by two again to get 20. This is even, so we put a 0 in the 8's column. Our running total now stands at a[3]=0, a[2]=0, a[1]=1, and a[0]=1.

We can continue in this manner until there is no remainder to place.

Let's formalize this algorithm:
1. Let D= the number we wish to convert from decimal to binary.
2. Repeat until D=0:
a) If D is odd, put "1" in the leftmost open column, and subtract 1 from D.
b) If D is even, put "0" in the leftmost open column.
c) Divide D by 2.
End Repeat
For the number 163, this works as follows:
1. Let D=163
2. a) D is odd, put a 1 in the 2^0 column.
Subtract 1 from D to get 162.
c) Divide D=162 by 2.
Temporary Result: 01 New D=81
D does not equal 0, so we repeat step 2.

2. a) D is odd, put a 1 in the 2^1 column.
Subtract 1 from D to get 80.
c) Divide D=80 by 2.
Temporary Result: 11 New D=40
D does not equal 0, so we repeat step 2.

2. b) D is even, put a 0 in the 2^2 column.
c) Divide D by 2.
Temporary Result:011 New D=20

2. b) D is even, put a 0 in the 2^3 column.
c) Divide D by 2.
Temporary Result: 0011 New D=10

2. b) D is even, put a 0 in the 2^4 column.
c) Divide D by 2.
Temporary Result: 00011 New D=5

2. a) D is odd, put a 1 in the 2^5 column.
Subtract 1 from D to get 4.
c) Divide D by 2.
Temporary Result: 100011 New D=2

2. b) D is even, put a 0 in the 2^6 column.
c) Divide D by 2.
Temporary Result: 0100011 New D=1

2. a) D is odd, put a 1 in the 2^7 column.
Subtract 1 from D to get D=0.
c) Divide D by 2.
Temporary Result: 10100011 New D=0

D=0, so we are done, and the decimal number 163 is equivalent to the binary number 10100011.
Since we already knew how to convert from binary to decimal, we can easily verify our result. 10100011=(1*2^0)+(1*2^1)+(1*2^5)+(1*2^7)=1+2+32+128= 163.
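The repeated-halving algorithm formalized above can be sketched in Python (dec_to_bin_shift is a hypothetical name):

```python
def dec_to_bin_shift(d):
    """Parity of D gives the current bit; dividing by two shifts to the next column."""
    if d == 0:
        return '0'
    bits = ''
    while d > 0:
        if d % 2 == 1:
            bits = '1' + bits  # D is odd: a 1 in the current column
            d -= 1
        else:
            bits = '0' + bits  # D is even: a 0 in the current column
        d //= 2                # divide by two to move to the next column
    return bits
```

For D=163 this reproduces 10100011, the result of the step-by-step trace.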

Negation in the Binary System

Signed Magnitude
One's Complement
Two's Complement
Excess 2^(m-1)
These techniques work well for non-negative integers, but how do we indicate negative numbers in the binary system?

Before we investigate negative numbers, we note that the computer uses a fixed number of "bits" or binary digits. An 8-bit number is 8 digits long. For this section, we will work with 8 bits.

Signed Magnitude:
The simplest way to indicate negation is signed magnitude. In signed magnitude, the left-most bit is not actually part of the number, but is just the equivalent of a +/- sign. "0" indicates that the number is positive, "1" indicates negative. In 8 bits, 00001100 would be 12 (break this down into (1*2^3) + (1*2^2) ). To indicate -12, we would simply put a "1" rather than a "0" as the first bit: 10001100.

One's Complement:
In one's complement, positive numbers are represented as usual in regular binary. However, negative numbers are represented differently. To negate a number, replace all zeros with ones, and ones with zeros - flip the bits. Thus, 12 would be 00001100, and -12 would be 11110011. As in signed magnitude, the leftmost bit indicates the sign (1 is negative, 0 is positive). To compute the value of a negative number, flip the bits and translate as before.

Two's Complement:
Begin with the number in one's complement. Add 1 if the number is negative. Twelve would be represented as 00001100, and -12 as 11110100. To verify this, let's subtract 1 from 11110100, to get 11110011. If we flip the bits, we get 00001100, or 12 in decimal.

Excess 2^(m-1):
In this notation, "m" indicates the total number of bits. For us (working with 8 bits), it would be excess 2^7. To represent a number (positive or negative) in excess 2^7, begin by taking the number in regular binary representation. Then add 2^7 (=128) to that number. For example, 7 would be 128 + 7=135, or 2^7+2^2+2^1+2^0, and, in binary,10000111. We would represent -7 as 128-7=121, and, in binary, 01111001.

Note:

Unless you know which representation has been used, you cannot figure out the value of a number.
A number in excess 2^(m-1) is the same as that number in two's complement with the leftmost bit flipped.
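The four representations can be sketched for 8-bit values in Python (the function names are mine; Python's unbounded ints stand in for fixed-width registers):

```python
def signed_magnitude(n, bits=8):
    sign = 1 << (bits - 1) if n < 0 else 0           # leftmost bit is only a +/- sign
    return format(sign | abs(n), f'0{bits}b')

def ones_complement(n, bits=8):
    if n >= 0:
        return format(n, f'0{bits}b')
    return format(abs(n) ^ ((1 << bits) - 1), f'0{bits}b')  # flip every bit

def twos_complement(n, bits=8):
    return format(n & ((1 << bits) - 1), f'0{bits}b')       # one's complement plus 1

def excess(n, bits=8):
    return format(n + (1 << (bits - 1)), f'0{bits}b')       # add the bias 2^(m-1)
```

These reproduce the worked values: -12 is 10001100 in signed magnitude, 11110011 in one's complement, and 11110100 in two's complement, while 7 and -7 in excess 2^7 are 10000111 and 01111001.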
To see the advantages and disadvantages of each method, let's try working with them.

Using the regular algorithm for binary addition, add (5+12), (-5+12), (-12+-5), and (12+-12) in each system. Then convert back to decimal numbers.




Answers

What would the binary number 1011 be in decimal notation?

1011=(1*2^3)+(0*2^2)+(1*2^1)+(1*2^0)
= (1*8) + (0*4) + (1*2) + (1*1)
= 11 (in decimal notation)
Try converting these numbers from binary to decimal:

10=(1*2^1) + (0*2^0) = 2+0 = 2
111 = (1*2^2) + (1*2^1) + (1*2^0) = 4+2+1=7
10101= (1*2^4) + (0*2^3) + (1*2^2) + (0*2^1) + (1*2^0)=16+0+4+0+1=21
11110= (1*2^4) + (1*2^3) + (1*2^2) + (1*2^1) + (0*2^0)=16+8+4+2+0=30
Try a few examples of binary addition:

1 1
111 111 111
+110 +110 +110
______ ______ _____
1 01 1101

1 11 1
101 101 101
+111 +111 +111
_____ ____ _____
0 00 1100

1 1 1
111 111 111
+111 +111 +111
_____ _____ _____
0 10 1110

Using the regular algorithm for binary addition, add (5+12), (-5+12), (-12+-5), and (12+-12) in each system. Then convert back to decimal numbers.
Signed Magnitude:

5+12 -5+12 -12+-5 12+-12

00000101 10000101 10001100 00001100
00001100 00001100 10000101 10001100
__________ ________ ________ _________
00010001 10010001 00010001 10011000

17 -17 17 -24

One's Complement:

00000101 11111010 11110011 00001100
00001100 00001100 11111010 11110011
_________ ________ ________ ________
00010001 00000110 11101101 11111111

17 6 -18 0

Two's Complement:

00000101 11111011 11110100 00001100
00001100 00001100 11111011 11110100
________ ________ ________ ________
00010001 00000111 11101111 00000000

17 7 -17 0

Excess 2^7:

10000101 01111011 01110100 00001100
10001100 10001100 01111011 01110100
________ ________ ________ ________
00010001 00000111 11101111 01111100

109 119 111 124