Assembly Language and Architecture

When they hear the term architecture, most people automatically visualize a building. However, architecture can also refer to a computer system, where it means an interconnected arrangement of readily available components. A computer systems architect takes a collection of parts and organizes them so that they all work together in an optimal way. There is more than one way to put a computer system together from constituent parts, and some configurations will yield a computer system that is better at a particular task than other configurations, which might be better at something else. For example, consider a computer system intended to help people manage their work in an office environment: composing documents, storing them, and printing them out. Its architecture would be completely different from that of a computer system designed to handle a task like guiding a rocket in flight.

Even though there are many different ways of structuring computer systems so that they can be matched to the jobs for which they are responsible, there is surprisingly little variation in the nature of the fundamental building blocks themselves. Most conventional computer systems consist of a central processing unit (CPU), the part of a computer that performs computations and controls and coordinates other parts of the computer; some memory, both random access memory (RAM) and read only memory (ROM); secondary storage to hold other programs and data; and lastly, interconnecting pathways called buses. The part that makes a computer different from many other machines is the CPU. Memory devices, storage units, and buses are designed to act in a supporting role, while the principal player is the CPU.

Often, people studying the essential nature of a CPU for the first time struggle with some of the concepts because a CPU is not like any other machine they know. A car engine or sewing machine has large moving parts that are easier to analyze and understand, while a CPU has no moving parts to observe. However, by imagining moving mechanisms, one can gain a better understanding of what happens inside those black ceramic packages.

The fundamental component of a CPU is an element called a register. A register is an array of flip-flop devices that are all connected and operate in unison. Each flip-flop can store one binary bit (a 0 or 1) that the CPU will use. Registers can be loaded with bits in a parallel operation, and they can then shift the bits left or right if needed. Two registers can be used to hold collections of bits that might be added together, for example. In this case, corresponding bits in each register would be added together, with any carried bits being managed in the expected way, just as a person would do manually, using pencil and paper.
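
To make this concrete, the following sketch, written in Python rather than in hardware, imitates two small registers being added bit by bit with carries rippling from one position to the next. The eight-bit width and the sample values are illustrative assumptions, not a description of any particular CPU.

```python
# A minimal sketch of two fixed-width "registers" added bit by bit,
# carrying into the next position just as one would with pencil and paper.
# The eight-bit width and sample values are assumptions for illustration.

WIDTH = 8  # an assumed register width

def add_registers(a_bits, b_bits):
    """Add two lists of bits (least significant bit first), propagating carries."""
    result, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        total = a + b + carry
        result.append(total % 2)   # the bit stored in this position
        carry = total // 2         # the bit carried into the next position
    return result, carry           # a final carry of 1 would signal overflow

def to_bits(value, width=WIDTH):
    """Load a register: unpack an integer into `width` bits, LSB first."""
    return [(value >> i) & 1 for i in range(width)]

def from_bits(bits):
    """Read a register back as an ordinary integer."""
    return sum(bit << i for i, bit in enumerate(bits))

a = to_bits(0b00101101)            # 45
b = to_bits(0b00010110)            # 22
total, overflow = add_registers(a, b)
print(from_bits(total), overflow)  # 67 0
```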

Registers tend to vary in size from one processor to another, but are usually eight, sixteen, thirty-two, or sixty-four bits in width. This means that they consist of that particular number of flip-flop devices. Some registers are set aside to hold specific types of information, such as memory addresses or instructions. These are known as special purpose registers. In addition, there are general purpose registers that hold data to be used in the execution of a program.

CPUs contain another set of elements that are very similar to registers, called buffers. Buffers, like registers, are constructed from groups of flip-flops, but unlike registers, the information contained within them does not change. Buffers are simply temporary holding points for information as it is transferred from one place to another in the CPU, whereas registers hold information while it is being operated on.

The part of the CPU that actually carries out the mathematical operations is called the arithmetic and logic unit (ALU). It is more complex than the registers and handles operations like addition, subtraction, and multiplication, as well as logic functions such as the logical "or" and "and" operations.
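
A rough sense of the ALU's repertoire can be had from the sketch below, again in Python and under the assumption of an eight-bit data path: each arithmetic result is masked down to the register width, and the logic operations work bitwise on their operands.

```python
# A toy sketch of an ALU's repertoire, assuming an eight-bit data path.
# The operation names and the width are illustrative assumptions.

WIDTH = 8
MASK = (1 << WIDTH) - 1  # keep only the bits that fit in the register

def alu(op, a, b):
    ops = {
        "ADD": a + b,
        "SUB": a - b,
        "MUL": a * b,
        "AND": a & b,
        "OR":  a | b,
    }
    return ops[op] & MASK

print(alu("ADD", 200, 100))        # 44, since 300 wraps around in eight bits
print(alu("AND", 0b1100, 0b1010))  # 8 (binary 1000)
```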

The most complex part of the CPU, and the one that requires the most effort in design, is the control unit. Registers, buffers, and arithmetic and logic units are all well-documented building blocks, but the control unit is more mysterious. Most manufacturers keep the design of their control units a closely guarded secret, since the control unit manages how the parts of the CPU work together, and it is this part of the CPU that largely shapes the architecture. The control unit is constructed to recognize all of the programming instructions that the CPU is capable of carrying out. When all these instructions are written down in a document that the manufacturer provides, it is known as the instruction set of that particular CPU. Every instruction that the processor understands is represented as a sequence of bits that will fit into the registers. The control unit responds to the instructions by decoding them, which means that it breaks them down into sub-operations, before getting the ALU and registers to carry them out. Even a relatively simple instruction, like subtracting the number in one register from the number in another, requires the control unit to decode and manage all the steps involved. These would include loading the two registers with the numbers, triggering the ALU to do the subtraction, and finding somewhere to store the difference.
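
The sketch below walks through that subtraction example in miniature. The three-operand instruction format and the register names are invented purely for illustration; a real control unit decodes binary instruction words, not text, and real encodings vary from one CPU to another.

```python
# A minimal sketch of a control unit handling one instruction.
# Register names and the text instruction format are invented for illustration.

registers = {"R0": 9, "R1": 4, "R2": 0}

def execute(instruction):
    op, dest, src_a, src_b = instruction.split()   # decode: break the instruction into fields
    a = registers[src_a]                           # load the first operand
    b = registers[src_b]                           # load the second operand
    if op == "SUB":
        result = a - b                             # trigger the ALU to subtract
    elif op == "ADD":
        result = a + b                             # or to add
    else:
        raise ValueError(f"unrecognized opcode: {op}")
    registers[dest] = result                       # store the result somewhere

execute("SUB R2 R0 R1")
print(registers["R2"])  # 5
```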

Although it is possible for human programmers to construct programs as correct sequences of binary bits for the processor to execute, doing so is very labor-intensive and error-prone. Creating a program in this way is known as writing a program in machine code, because these sequences of bits are the codes that the CPU knows and understands. When programmable computers were first being developed in the mid-twentieth century, this was the only means of programming them. Human programmers were soon looking for a less laborious way of getting the machine code to the CPU. The answer was to represent each of the machine code instructions by a short word rather than a sequence of bits. For example, a command to the CPU to add two numbers together would be represented as a short human-readable instruction like "ADD A, B," where A and B are the names of two registers. This would be used instead of a confusing list of binary bits and makes programming much easier to comprehend and carry out. The short words used to represent the instructions are called mnemonics. The programmer writes the program using these mnemonics in a computer language known as assembly language. Another program called an assembler translates the assembly language mnemonics into machine code, which is what the CPU can understand.
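
The toy assembler sketched below, again in Python, suggests how such a translation might work. The opcode values, register numbers, and one-byte instruction format are invented for the example; real instruction sets use far richer encodings.

```python
# A toy assembler: each mnemonic and register name maps to a few bits, and an
# instruction such as "ADD A, B" is packed into one byte of "machine code."
# The encodings below are assumptions made up for this sketch.

OPCODES = {"ADD": 0b0001, "SUB": 0b0010}   # invented four-bit opcodes
REGISTERS = {"A": 0b00, "B": 0b01}         # invented two-bit register numbers

def assemble(line):
    mnemonic, operands = line.split(maxsplit=1)
    dest, src = [name.strip() for name in operands.split(",")]
    # pack the opcode and the two register fields into a single eight-bit word
    return (OPCODES[mnemonic] << 4) | (REGISTERS[dest] << 2) | REGISTERS[src]

print(f"{assemble('ADD A, B'):08b}")  # 00010001
```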

Other computing languages can be developed that are even more amenable to human use. These languages can be translated to assembly language and then to machine code. That way, human programmers can concentrate more on making sure that their programs are correct and leave all of the drudgery of translation to other programs.
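
The sketch below illustrates that chain of translation with a deliberately tiny, invented "language" of one-line statements: each statement is translated into assembly-style mnemonics, which an assembler like the toy one above could then turn into machine code.

```python
# A sketch of the translation chain: a higher-level statement is turned into
# assembly-style mnemonics. The one-statement "language" here is invented
# purely for illustration.

def compile_statement(statement):
    """Translate 'dest = src1 + src2' into assembly-style mnemonics."""
    dest, expression = [part.strip() for part in statement.split("=")]
    src1, src2 = [part.strip() for part in expression.split("+")]
    return [
        f"MOV {dest}, {src1}",   # copy the first operand into the destination register
        f"ADD {dest}, {src2}",   # add the second operand to it
    ]

for line in compile_statement("C = A + B"):
    print(line)   # MOV C, A   then   ADD C, B
```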

No two assembly languages are exactly alike, and most differ markedly from one another in their syntax. Since assembly language is quite close to the particular instructions that the CPU understands, the programmer must know a great deal about the architecture of the processor in question. However, little of this knowledge is directly transferable to processors developed by other manufacturers. The ways in which two different CPUs work might be quite similar, but there will always be some differences in the details that prevent assembly language programs from being portable to other computers. The advantage of assembly language is that programs written in it are usually much smaller than programs written in high-level languages, require less memory for storage, and tend to run very fast. Many computer programs that are developed for small-scale but high-market-volume embedded systems environments (like domestic appliances and office equipment) are written in assembly language for these reasons.

Personal computers have their own type of architectures and can be programmed in assembly language. However, assembly language on these computers is usually used only in certain parts of the operating system that need to manage the hardware devices directly.

see also Binary Number System; Central Processing Unit; Object-Oriented Languages; Procedural Languages; Programming.

Stephen Murray

