Computer Organization - Notes


Semiconductor memory is a type of storage device that uses semiconductor circuits as the storage medium. It is composed of semiconductor integrated circuits known as memory chips. Based on their function, semiconductor memories can be categorized into two main types: Random Access Memory (RAM) and Read-Only Memory (ROM).

Accessing information stored in semiconductor memory is done using a random access method, which allows data at any location to be retrieved in the same, short amount of time. This access method, together with the underlying semiconductor technology, provides several advantages:

  1. High Storage Speed: Data can be accessed quickly because any memory location can be accessed directly without having to go through other locations.

  2. High Storage Density: Semiconductor memory can store a large amount of data in a relatively small physical space, making it efficient for use in modern electronic devices.

  3. Easy Interface with Logic Circuits: Semiconductor memory can be easily integrated with logic circuits, making it suitable for use in complex electronic systems.

These characteristics make semiconductor memory a crucial component in modern computing and electronic devices.


The Stack Pointer (SP) is a special-purpose register that holds the address of the top element of the stack. On small microcontrollers such as the 8051, the SP is 8 bits wide and the stack lives in the internal RAM block; exactly where the stack sits is decided by the system designer. In a hardware stack machine, the stack is a data structure the computer uses to store data. The role of the SP is to point to the data currently being pushed onto or popped from the stack, and it is automatically incremented or decremented after each operation.

However, there is a specific detail to note: in this context, the SP increments when data is pushed onto the stack. Whether the SP increments or decrements upon a push operation is determined by the CPU manufacturer. Typically, the stack is composed of a storage area and a pointer (SP) that points to this storage area.

In summary, the SP is crucial for managing the stack by keeping track of the current top of the stack and adjusting its value as data is pushed onto or popped from the stack, with the specific behavior (incrementing or decrementing) being a design choice made by the CPU manufacturer.


Let’s break down the roles of the state register, program counter, and data register in a CPU:

  1. State Register:
    • Purpose: The state register, also known as the status register or flag register, holds information about the current state of the CPU. It contains flags that indicate the outcome of arithmetic and logic operations.
    • Flags: Common flags include the zero flag (indicating a result of zero), carry flag (indicating a carry out of the most significant bit), sign flag (indicating a negative result), and overflow flag (indicating an arithmetic overflow).
    • Role: The state register helps in decision-making processes within the CPU, such as conditional branching based on the results of previous operations.
  2. Program Counter (PC):
    • Purpose: The program counter is a register that holds the address of the next instruction to be executed.
    • Role: It keeps track of the instruction sequence, ensuring that instructions are fetched and executed in the correct order. After an instruction is fetched, the program counter is typically incremented to point to the next instruction.
    • Control Flow: The program counter is crucial for managing the flow of execution in a program, including handling branches, jumps, and function calls.
  3. Data Register:
    • Purpose: Data registers are used to temporarily hold data that the CPU is currently processing.
    • Types: There are various types of data registers, including general-purpose registers (used for a wide range of data manipulation tasks) and special-purpose registers (used for specific functions, like the accumulator).
    • Role: Data registers facilitate quick access to data during processing, reducing the need to access slower main memory. They are essential for performing arithmetic, logic, and other data manipulation operations efficiently.

Each of these registers plays a critical role in the operation of a CPU, enabling it to execute instructions, manage data, and control the flow of a program effectively.
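
As a rough illustration of how the flags described above are derived, here is a small C sketch; the 8-bit width and the flag set (Z, C, S, V) are illustrative choices, not any particular CPU’s layout:

    #include <stdint.h>
    #include <stdio.h>

    /* Compute typical status flags for an 8-bit addition a + b. */
    void add8_flags(uint8_t a, uint8_t b) {
        uint16_t wide = (uint16_t)a + b;   /* widen to keep the carry-out bit */
        uint8_t  r    = (uint8_t)wide;

        int zero  = (r == 0);              /* result is zero              */
        int carry = (wide >> 8) & 1;       /* carry out of bit 7          */
        int sign  = (r >> 7) & 1;          /* sign bit of the result      */
        /* Overflow: the operands share a sign that the result lacks. */
        int overflow = (~(a ^ b) & (a ^ r) & 0x80) != 0;

        printf("Z=%d C=%d S=%d V=%d\n", zero, carry, sign, overflow);
    }

    int main(void) {
        add8_flags(0x7F, 0x01);  /* 127 + 1: sets sign and overflow */
        add8_flags(0xFF, 0x01);  /* 255 + 1: sets zero and carry    */
        return 0;
    }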


A microprogram is a low-level program stored in a control storage (often a type of read-only memory, or ROM) that is used to implement the instruction set of a processor. It is composed of microinstructions, which are detailed, step-by-step commands that direct the processor’s control unit to perform specific operations.

Here’s a breakdown of the concept:

  1. Microinstructions: Each machine-level instruction is carried out as a sequence of microinstructions, each of which asserts the control signals for one elementary step (a register transfer, an ALU operation, a memory access).
  2. Control Storage: The microprogram resides in a dedicated control memory, typically ROM, which the control unit reads instead of generating control signals from hardwired logic.
  3. Sequencing: A microprogram counter (or sequencer) steps through the microinstructions, with the opcode of the current machine instruction selecting the starting address of its microroutine.

In summary, microprograms play a crucial role in the operation of a processor by providing a detailed, step-by-step implementation of each machine-level instruction, stored in a dedicated control storage area.


A parallel interface is a type of interface standard where data is transmitted in parallel between the two connected devices. This means that multiple bits of data are sent simultaneously over separate lines, rather than one bit at a time as in serial communication.

Here are the key aspects of a parallel interface:

  1. Multiple Data Lines: One wire per bit, so a whole group of bits (commonly 8, 16, or 32) is transferred in a single operation.
  2. Data Width: The number of data lines defines the width of the interface, typically a multiple of one byte.
  3. Speed vs. Distance: Transfers are fast over short distances, but keeping all lines synchronized (avoiding skew) becomes difficult as cable length grows.
  4. Examples: The IEEE 1284 printer port and parallel ATA disk cables are classic parallel interfaces.

In summary, parallel interfaces enable fast data transmission by sending multiple bits of data simultaneously over separate lines, with the data width typically measured in bytes.


The interrupt mask is a mechanism used to temporarily disable or “mask” certain interrupts, preventing them from being processed by the CPU. Here’s how it works:

  1. Mask Register: A register holds one bit per interrupt source; setting a bit masks (disables) that source, and clearing it enables the source again.
  2. Selective Response: When a masked interrupt is requested, the CPU ignores or defers it, while unmasked interrupts are serviced normally.
  3. Non-Maskable Interrupts: Some critical interrupts (NMI) bypass the mask entirely, so events such as hardware faults are never ignored.
  4. Typical Use: Software masks interrupts around critical sections or while servicing a higher-priority interrupt, then restores the previous mask.

In summary, the interrupt mask provides a way to control which interrupts the CPU responds to, allowing for better management of system resources and priorities.
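
As a minimal sketch of the idea in C (the interrupt sources, bit assignments, and function names below are invented for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical interrupt sources, one bit each. */
    enum { IRQ_TIMER = 1u << 0, IRQ_UART = 1u << 1, IRQ_DISK = 1u << 2 };

    static uint8_t irq_mask = 0;     /* 1 = masked (ignored), 0 = enabled */

    void mask_irq(uint8_t irq)   { irq_mask |=  irq; }
    void unmask_irq(uint8_t irq) { irq_mask &= ~irq; }

    /* Called when a request arrives: service it only if it is unmasked. */
    void raise_irq(uint8_t irq) {
        if (irq & ~irq_mask)
            printf("servicing interrupt 0x%02X\n", irq);
        else
            printf("interrupt 0x%02X is masked, ignored\n", irq);
    }

    int main(void) {
        mask_irq(IRQ_UART);
        raise_irq(IRQ_TIMER);   /* serviced */
        raise_irq(IRQ_UART);    /* masked   */
        return 0;
    }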


The arithmetic logic unit (ALU) is a fundamental component of a central processing unit (CPU) that performs arithmetic and logical operations. Here’s an overview of its role and functions:

  1. Arithmetic Operations: Addition and subtraction, and in many designs multiplication and division, on integer operands.
  2. Logical Operations: Bitwise AND, OR, NOT, and XOR, usually along with shifts and rotates.
  3. Inputs and Outputs: Operands arrive from registers or the bus, the result is written back to a register, and status flags (zero, carry, sign, overflow) are updated to describe the outcome.
  4. Control: The control unit selects which operation the ALU performs for each instruction.

In summary, the ALU is the part of the CPU that performs mathematical and logical operations, enabling the CPU to process data and execute instructions efficiently.


The XOR (exclusive OR) operation is a logical operation that compares two bits and returns a result based on the following rules:

    0 XOR 0 = 0
    0 XOR 1 = 1
    1 XOR 0 = 1
    1 XOR 1 = 0

In summary, XOR returns 1 if the bits are different and 0 if they are the same. This operation is often used in various applications, including:

  • Parity generation and checking
  • Checksums and simple error detection
  • Cryptography (e.g., stream ciphers and one-time pads)
  • Toggling selected bits with a mask
  • Swapping two values without a temporary variable

The XOR operation is fundamental in digital logic and computing, providing a way to perform bitwise comparisons and manipulations.
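
Two of these uses are easy to demonstrate in C; the in-place swap and the parity computation below are standard XOR idioms:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Swap two values without a temporary, relying on x ^ x == 0. */
        uint8_t a = 0x5A, b = 0xC3;
        a ^= b; b ^= a; a ^= b;
        printf("a=0x%02X b=0x%02X\n", a, b);       /* a=0xC3 b=0x5A */

        /* Parity: XOR all bits together; 1 means an odd number of 1s. */
        uint8_t v = 0xB6, parity = 0;
        for (int i = 0; i < 8; i++)
            parity ^= (v >> i) & 1;
        printf("parity of 0xB6 = %u\n", parity);   /* 1 (five 1-bits) */
        return 0;
    }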


Serial transmission is a method of data transmission where data is sent one bit at a time over a single communication line or channel. Here are the key aspects of serial transmission:

  1. One Bit at a Time: Data is shifted out bit by bit, so only a single data line (plus a return path) is required.
  2. Synchronous vs. Asynchronous: Bits may be timed by a shared clock, or framed with start and stop bits as in a UART.
  3. Fewer Wires, Lower Cost: Needing only one data line makes serial links cheaper, simpler, and more practical over long distances.
  4. Examples: RS-232/UART links, USB, SATA, and Ethernet all transmit data serially.

In summary, serial transmission involves sending data bits one at a time over a single line, offering simplicity and cost-effectiveness at the expense of speed compared to parallel transmission.
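
A toy sketch of the idea in C: the sender shifts a byte out one bit per time slot and the receiver reassembles it. The LSB-first order and the array-as-channel are assumptions of the sketch; real links such as UART define their own framing:

    #include <stdint.h>
    #include <stdio.h>

    /* "Transmit" one byte LSB-first over a single-bit channel. */
    void send_byte(uint8_t byte, uint8_t line[8]) {
        for (int i = 0; i < 8; i++)
            line[i] = (byte >> i) & 1;   /* one bit per time slot */
    }

    uint8_t receive_byte(const uint8_t line[8]) {
        uint8_t byte = 0;
        for (int i = 0; i < 8; i++)
            byte |= line[i] << i;        /* reassemble in order */
        return byte;
    }

    int main(void) {
        uint8_t channel[8];
        send_byte(0x4B, channel);
        printf("received 0x%02X\n", receive_byte(channel));  /* 0x4B */
        return 0;
    }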


You’ve provided a good overview of some common I/O buses used in computing. Let’s clarify and expand on each of these:

  1. PCI (Peripheral Component Interconnect) Bus:
    • Description: PCI is a parallel bus standard for connecting peripheral devices to a computer’s CPU and memory. It is designed to be processor-independent, meaning it can work with various types of CPUs.
    • Features: Supports multiple peripherals, operates at high clock frequencies, and provides high data transfer rates. It has been widely used in personal computers for connecting components like graphics cards, sound cards, and network cards.
    • Successors: PCI has evolved into newer standards like PCI-X and PCI Express (PCIe), which offer even higher performance and more advanced features.
  2. USB (Universal Serial Bus):
    • Description: USB is a standard interface for connecting a wide range of peripheral devices to computers. It simplifies the process of connecting and using devices by providing a universal plug-and-play interface.
    • Features: USB supports hot-swapping, meaning devices can be connected and disconnected without restarting the computer. It also provides power to peripheral devices and supports data transfer rates suitable for many types of devices.
    • Versions: USB has several versions, including USB 1.1, USB 2.0, USB 3.0, and USB4, each offering increased data transfer speeds and additional features.
  3. IEEE 1394 (FireWire):
    • Description: Developed by Apple and standardized as IEEE 1394, FireWire is a high-speed serial bus designed for high-bandwidth applications. It is commonly used in multimedia and storage applications.
    • Features: FireWire supports high data transfer rates, making it suitable for devices like digital cameras, external hard drives, and audio/video equipment. It also supports peer-to-peer device communication and isochronous data transfer, which is important for real-time applications.
    • Applications: Although less common today, FireWire was popular in professional audio/video equipment and some consumer electronics.

These bus standards have played crucial roles in the development of modern computing and consumer electronics, enabling the connection of a wide range of devices with varying performance requirements.


In a stack data structure, the stack pointer (SP) is a register that keeps track of the top of the stack. The initial value of the stack pointer depends on the architecture and the specific implementation of the stack. Here are two common approaches:

  1. Full Descending Stack: In this approach, the stack grows downwards in memory; “full” means the stack pointer points at the last item pushed. The stack pointer is initialized to the highest memory address allocated for the stack. As items are pushed onto the stack, the stack pointer decrements.

  2. Empty Ascending Stack: In this approach, the stack grows upwards in memory; “empty” means the stack pointer points at the next free slot. The stack pointer is initialized to the lowest memory address allocated for the stack. As items are pushed onto the stack, the stack pointer increments.

The choice between these approaches depends on the system’s design and conventions. In many systems, especially those using a descending stack, the initial value of the stack pointer is set to the highest address of the allocated stack space, and it decrements as data is pushed onto the stack.
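
A minimal C sketch of the descending convention, modeling the stack as a small array (the size, names, and lack of overflow checks are simplifications):

    #include <stdint.h>
    #include <stdio.h>

    #define STACK_SIZE 8

    static uint8_t mem[STACK_SIZE];
    /* Full descending: SP points at the last item pushed, so an empty
     * stack starts one past the highest allocated index. */
    static int sp = STACK_SIZE;

    static void push(uint8_t v)  { mem[--sp] = v; }    /* decrement, then store */
    static uint8_t pop(void)     { return mem[sp++]; } /* load, then increment  */

    int main(void) {
        push(0x11);
        push(0x22);
        printf("top = 0x%02X, sp = %d\n", mem[sp], sp);  /* 0x22, sp = 6 */
        printf("pop -> 0x%02X\n", pop());                /* 0x22 */
        printf("pop -> 0x%02X\n", pop());                /* 0x11 */
        return 0;
    }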


In direct addressing mode, the operand’s address is directly specified within the instruction itself. This means that the address of the operand is explicitly included as part of the instruction code. Here’s how it works:

  1. Instruction Format: The instruction contains an opcode (operation code) and an address field. The address field directly specifies the memory location where the operand is stored.

  2. Execution: When the instruction is executed, the CPU uses the address specified in the instruction to access the memory location directly. The operand is fetched from or stored to this memory address without any further address calculations.

  3. Efficiency: Direct addressing is straightforward and efficient because it involves minimal address computation. However, it is less flexible compared to other addressing modes like indirect or indexed addressing, as the address is fixed at the time the instruction is written.

In summary, in direct addressing, the operand’s address is explicitly included in the instruction, allowing the CPU to access the operand directly from the specified memory location.


To execute the ADD R1, R2, R3 instruction in a single-bus architecture CPU, we need to follow a sequence of steps that involve fetching the instruction, decoding it, and executing it. Here’s a detailed breakdown of the execution flow:

  1. Instruction Fetch:
    • The Program Counter (PC) holds the address of the next instruction to be executed.
    • The address in PC is loaded into the Memory Address Register (MAR).
    • The memory reads the instruction at the address specified by MAR and loads it into the Memory Data Register (MDR).
    • The instruction is then transferred from MDR to the Instruction Register (IR).
    • PC is incremented to point to the next instruction.
  2. Instruction Decode:
    • The instruction in IR is decoded to determine the operation (ADD) and the operands (R1, R2, R3).
  3. Operand Fetch:
    • Because the single bus can carry only one item at a time, the contents of R2 are first transferred over the bus into an ALU input latch (often called Y).
    • The contents of R3 are then placed on the bus to serve as the ALU’s second input.
  4. Execution:
    • The Arithmetic Logic Unit (ALU) adds the value in Y to the value on the bus (the contents of R3).
    • The result is latched in an ALU output register (often called Z).
  5. Write Back:
    • The result in Z is placed on the bus and written into register R1, which the control unit selects as the destination.
  6. Completion:
    • The instruction execution is complete, and the CPU is ready to fetch the next instruction from the address now in PC.

This sequence outlines the basic flow of executing an ADD instruction in a single-bus architecture, where each step involves using the shared bus to transfer data between the CPU components and memory.
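
These register transfers can be mimicked in C, roughly one bus transfer per statement. The 16-bit instruction encoding below is invented for the sketch, and Y and Z stand for the ALU input and output latches commonly used in single-bus designs:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy machine state: four general registers plus the special registers. */
    uint16_t R[4], PC = 0, MAR, MDR, IR, Y, Z;
    uint16_t memory[16];

    int main(void) {
        /* Invented encoding: opcode, dst, src1, src2 (4 bits each). */
        memory[0] = 0x1123;              /* ADD R1, R2, R3 */
        R[2] = 40; R[3] = 2;

        MAR = PC;                        /* 1. PC -> MAR          */
        MDR = memory[MAR];               /*    memory read -> MDR */
        IR  = MDR;                       /*    MDR -> IR          */
        PC  = PC + 1;                    /*    increment PC       */

        uint16_t dst = (IR >> 8) & 0xF;  /* 2. decode the fields  */
        uint16_t s1  = (IR >> 4) & 0xF;
        uint16_t s2  =  IR       & 0xF;

        Y = R[s1];                       /* 3. first operand into latch Y    */
        Z = Y + R[s2];                   /* 4. ALU adds; result latched in Z */
        R[dst] = Z;                      /* 5. Z written back to R1          */

        printf("R1 = %u\n", R[1]);       /* prints R1 = 42 */
        return 0;
    }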


The term “one-digit multiplication” in the context of binary arithmetic refers to a method where each digit (or bit) of the multiplier is considered one at a time. This method is analogous to the way we perform multiplication in decimal arithmetic, where we multiply each digit of one number by the entire other number, shifting the results appropriately.

Here’s why it’s called “one-digit multiplication”:

  1. Bit-by-Bit Processing: In binary multiplication, each bit of the multiplier is processed individually. For each bit that is 1, the multiplicand is added to the result, shifted appropriately. For each bit that is 0, the multiplicand is not added, but the position is still shifted.

  2. Shift and Add: The process involves shifting the multiplicand to the left by one position for each subsequent bit of the multiplier. This shifting corresponds to multiplying by powers of 2, similar to how shifting digits in decimal multiplication corresponds to multiplying by powers of 10.

  3. Partial Products: Each step produces a partial product, which is then summed to get the final result. This mirrors the process in decimal multiplication where partial products are generated for each digit of the multiplier.

The term emphasizes the simplicity and foundational nature of the method, as it breaks down the multiplication process into smaller, manageable steps involving single bits. This approach is fundamental in digital systems and computer arithmetic, where operations are performed at the bit level.
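
The procedure translates almost directly into C. This sketch handles unsigned operands only (sign handling is omitted) and is checked against the 4 × 5 example worked in the next section:

    #include <stdint.h>
    #include <stdio.h>

    /* One-digit (bit-at-a-time) multiplication: examine each bit of the
     * multiplier; whenever it is 1, add the correspondingly shifted
     * multiplicand as a partial product. */
    uint32_t shift_add_mul(uint16_t multiplicand, uint16_t multiplier) {
        uint32_t product = 0;
        for (int i = 0; (multiplier >> i) != 0; i++)
            if ((multiplier >> i) & 1)
                product += (uint32_t)multiplicand << i;
        return product;
    }

    int main(void) {
        printf("4 x 5 = %u\n", shift_add_mul(4, 5));   /* prints 4 x 5 = 20 */
        return 0;
    }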


To perform the multiplication 4 × 5 using the one-digit multiplication method with four-digit signed binary numbers (sign-and-magnitude form), we need to follow these steps:

  1. Convert the numbers to four-digit signed binary:
    • 4 in four-digit signed binary is 0100.
    • 5 in four-digit signed binary is 0101.
  2. Perform the multiplication:
    • Multiply each bit of the second number by the entire first number, shifting left each time you move to the next bit.

Here’s the step-by-step multiplication process:

      0100  (4 in binary)
    × 0101  (5 in binary)
   -------
      0100  (0100 × 1, no shift)
     0000   (0100 × 0, shifted left by 1)
    0100    (0100 × 1, shifted left by 2)
   -------
   0010100  (sum of the partial products)

  3. Sum the partial products:
    • Adding the partial products together, we get 0010100.
  4. Convert the result back to decimal:
    • The binary number 0010100 is equivalent to 20 in decimal.

Thus, the result of 4 × 5 using four-digit signed binary multiplication is 20.


Interrupts are a mechanism used in computer systems to handle events that require immediate attention. They allow the CPU to respond to external or internal events by pausing the current task and executing a specific interrupt handler or interrupt service routine (ISR). Here’s a breakdown of the types of interrupts:

  1. External Interrupts (Hardware Interrupts): These are triggered by hardware devices to signal that they need attention. For example, a keyboard interrupt occurs when a key is pressed, or a network interrupt occurs when data is received. External interrupts are asynchronous, meaning they can occur at any time regardless of what the CPU is doing.

  2. Internal Interrupts (Exceptions): These are generated by the CPU itself in response to certain conditions that occur during the execution of instructions. Examples include:
    • Divide by Zero: Triggered when a division operation attempts to divide by zero.
    • Illegal Instruction: Triggered when the CPU encounters an instruction it cannot execute.
    • Overflow: Triggered when an arithmetic result exceeds the range that can be represented in the destination data type.
  3. Software Interrupts: These are intentionally triggered by software using specific instructions. They are often used to invoke system calls or switch between different modes of operation (e.g., user mode to kernel mode). Software interrupts are synchronous, meaning they occur as a direct result of executing a specific instruction.

Each type of interrupt serves a specific purpose in managing system resources and ensuring that the CPU can respond to urgent or exceptional conditions efficiently.
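
One common way to picture how the CPU finds the right handler is a vector table: an array of service routines indexed by interrupt number. The C sketch below is purely illustrative; the interrupt numbers and handler names are invented:

    #include <stdio.h>

    /* Hypothetical interrupt numbers. */
    enum { INT_KEYBOARD = 0, INT_DIVIDE_BY_ZERO = 1, INT_SYSCALL = 2, NUM_INTS = 3 };

    void on_keyboard(void) { printf("key pressed\n"); }
    void on_div_zero(void) { printf("divide-by-zero exception\n"); }
    void on_syscall(void)  { printf("system call\n"); }

    /* The vector table maps an interrupt number to its service routine. */
    void (*vector_table[NUM_INTS])(void) = { on_keyboard, on_div_zero, on_syscall };

    /* The CPU's response, in miniature: look up and run the handler. */
    void interrupt(int num) {
        if (num >= 0 && num < NUM_INTS)
            vector_table[num]();
    }

    int main(void) {
        interrupt(INT_KEYBOARD);   /* external (hardware) interrupt */
        interrupt(INT_SYSCALL);    /* software interrupt            */
        return 0;
    }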


In the context of computer systems, particularly when discussing bus architecture, the terms “master” and “slave” are often used to describe the roles of devices in communication over a bus. Here’s a breakdown of these terms:

  1. Master Device: This is the device that has control over the bus. The master device initiates data transfer by sending commands and addresses to other devices. It manages the communication process and can read from or write to other devices connected to the bus.

  2. Slave Device: This is the device that responds to the commands issued by the master device. The slave device is accessed by the master device and can either send data to or receive data from the master device. It does not initiate communication but rather responds to requests from the master.

These roles are essential for coordinating data transfer between different components in a computer system, such as the CPU, memory, and peripheral devices.


In a computer, registers are small, fast storage locations within the CPU that hold data temporarily during processing. There are several types of registers, each serving a specific purpose:

  1. General-Purpose Registers (GPRs): These are used for various data manipulation tasks, such as arithmetic operations, logical operations, and data transfer. Examples include the AX, BX, CX, and DX registers in x86 architecture.

  2. Special-Purpose Registers: These have specific functions and are not generally available for all types of data operations. Examples include:
    • Instruction Register (IR): Holds the current instruction being executed.
    • Program Counter (PC): Contains the address of the next instruction to be executed.
    • Stack Pointer (SP): Points to the top of the stack in memory.
    • Base and Index Registers: Used for memory addressing.
  3. Segment Registers: Used in some architectures (like x86) to hold the base address of a segment in memory. Examples include the Code Segment (CS), Data Segment (DS), and Stack Segment (SS) registers.

  4. Status Register or Flags Register: Holds condition codes or flags that indicate the result of the last operation, such as zero, carry, overflow, etc.

  5. Control Registers: Used to control CPU operations and modes. Examples include control registers in x86 architecture that manage paging, protection, and other system-level features.

  6. Floating-Point Registers: Used for floating-point arithmetic operations in CPUs that support floating-point hardware.

  7. Constant Registers: Some architectures have registers that hold constant values, such as zero or one, to optimize certain operations.

These registers work together to facilitate the execution of instructions, manage data flow, and control the operation of the CPU.


A machine instruction, also known as a machine code instruction, is a low-level command that a computer’s CPU (Central Processing Unit) can execute directly. Each instruction typically contains several key components:

  1. Operation Code (Opcode): This specifies the operation to be performed, such as addition, subtraction, load, store, etc. The opcode tells the CPU what action to take.

  2. Operands: These are the data items or values that the instruction will operate on. Operands can be immediate values (constants), registers, or memory addresses.

  3. Addressing Mode: This determines how the operands are accessed. Common addressing modes include immediate addressing, direct addressing, indirect addressing, and register addressing.

  4. Instruction Format: This defines the structure of the instruction, including the size and position of the opcode and operands within the instruction.

  5. Condition Codes: Some instructions may affect or be affected by condition codes or flags, which are special-purpose registers that hold status information about the results of operations (e.g., zero flag, carry flag).

These components work together to define a precise action that the CPU will perform, such as moving data, performing arithmetic operations, or controlling the flow of a program.
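
To make the components concrete, here is a C sketch that packs and unpacks a toy 16-bit instruction word; the field layout and opcode values are invented for the example:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy format: [15:12] opcode | [11:8] register | [7:0] immediate. */
    enum { OP_LOADI = 0x1, OP_ADDI = 0x2 };

    uint16_t encode(uint8_t op, uint8_t reg, uint8_t imm) {
        return (uint16_t)((op << 12) | (reg << 8) | imm);
    }

    int main(void) {
        uint16_t instr = encode(OP_LOADI, 3, 42);   /* "LOADI R3, #42" */

        uint8_t opcode = (instr >> 12) & 0xF;       /* what to do        */
        uint8_t reg    = (instr >> 8)  & 0xF;       /* which register    */
        uint8_t imm    =  instr        & 0xFF;      /* immediate operand */

        printf("opcode=%u reg=%u imm=%u\n", opcode, reg, imm);
        return 0;
    }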


Yes, you’re describing Register Direct Addressing, which is another type of addressing mode used in computer architecture. Here’s an explanation of this mode:

Register Direct Addressing:

In this mode, the operand itself resides in a CPU register, and the instruction names that register directly; no memory access is needed to fetch the operand, only a register read (for example, an instruction like ADD A, R0 takes one operand straight from register R0).

Register direct addressing is efficient because it leverages the speed of CPU registers, making it one of the fastest addressing modes. It’s commonly used in operations where operands are frequently accessed or modified, such as in loops or arithmetic operations.


Sure! Let’s go through examples of each addressing mode to illustrate how they work:

  1. Immediate Addressing:
    • Example:
      MOV A, #5
      
    • Explanation: In this example, the value 5 is directly included in the instruction. The CPU moves the value 5 into register A without needing to access memory. This is immediate addressing because the operand (5) is immediately available in the instruction.
  2. Direct Addressing:
    • Example:
      MOV A, [1000]
      
    • Explanation: Here, the instruction tells the CPU to move the value stored at memory address 1000 into register A. The operand is located at a specific memory address, so the CPU needs to access memory to retrieve it. This is direct addressing because the address of the operand is directly specified in the instruction.
  3. Indirect Addressing:
    • Example:
      MOV A, [B]
      
    • Explanation: In this example, register B contains an address (say 2000). The CPU first retrieves the address from register B, then accesses memory at address 2000 to get the actual operand value, and finally moves that value into register A. This is indirect addressing because the instruction points to a location that contains the address of the operand, adding an extra level of indirection.

These examples illustrate how each addressing mode accesses operands, with immediate addressing being the most direct and fastest, and indirect addressing involving additional steps to retrieve the operand.


In the context of computer architecture, addressing modes determine how the operand of an instruction is accessed. Here’s an explanation of the three addressing modes you mentioned, ordered from fastest to slowest:

  1. Immediate Addressing:
    • Speed: Fastest
    • Explanation: In immediate addressing, the operand is part of the instruction itself. This means the data is directly available within the instruction, so no additional memory access is required to fetch the operand. This makes it the fastest mode because the CPU can immediately use the data without any further lookups.
  2. Direct Addressing:
    • Speed: Fast
    • Explanation: In direct addressing, the instruction contains the address of the operand in memory. The CPU directly accesses this address to fetch the operand. This mode is slower than immediate addressing because it requires an additional memory access to retrieve the operand from the specified address.
  3. Indirect Addressing:
    • Speed: Slowest
    • Explanation: In indirect addressing, the instruction contains an address that points to another address, which in turn contains the actual operand. This can involve multiple memory accesses: first to get the address of the operand, and then to get the operand itself. This additional level of indirection makes it the slowest among the three modes.

In summary, immediate addressing is the fastest because the operand is directly available, direct addressing is slower because it requires one memory access to fetch the operand, and indirect addressing is the slowest because it may require multiple memory accesses.
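
The three modes map loosely onto familiar C constructs. The analogy below is informal, not a literal model of instruction encoding: a constant plays the role of an immediate operand, a variable at a fixed address plays direct addressing, and a pointer plays indirect addressing:

    #include <stdio.h>

    int value = 7;        /* lives at a fixed address, like direct addressing */
    int *ptr  = &value;   /* holds an address, like indirect addressing       */

    int main(void) {
        int a = 5;        /* immediate: the operand 5 is in the "instruction" */
        int b = value;    /* direct: one access to a known location           */
        int c = *ptr;     /* indirect: fetch the address, then the operand    */
        printf("%d %d %d\n", a, b, c);
        return 0;
    }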


The passage you provided discusses aspects of a Complex Instruction Set Computing (CISC) architecture, which is a type of computer architecture known for its rich and varied set of instructions. Let’s break down the key points and explain them in English:

CISC Architecture

  1. Basic Processing Component: CISC is a foundational design approach behind many desktop processor families (the x86 line is the classic example). It refers to the way a processor’s instruction set is organized and executed.

  2. Core of Microprocessors: In a CISC architecture, the core functionality of a microprocessor involves executing complex instructions. These instructions are designed to perform multiple operations, such as moving data into registers or performing arithmetic operations like addition.

  3. Instruction Storage: During execution, instructions and their operands are held in registers, which are small, fast storage locations within the processor. The term “AR register” likely refers to an Address Register, which holds memory addresses for instructions or data.

  4. Multi-Step Execution: CISC instructions are often composed of multiple steps. Each instruction can perform several operations, making the execution process more complex but potentially more efficient for certain tasks.

  5. Operations: Typical operations in a CISC processor include moving values into registers and performing arithmetic operations, such as addition. These operations are fundamental to how the processor manipulates data.

In summary, CISC architecture is characterized by its ability to execute complex instructions that perform multiple operations, utilizing registers to store and manipulate data efficiently. This design aims to optimize performance for a wide range of tasks by reducing the number of instructions needed to complete a given operation.


Parallel transmission, also known as parallel communication, is a method of conveying multiple data bits simultaneously. In this type of transmission, data is sent in parallel, meaning that multiple bits are transmitted at the same time over separate channels or wires. This is in contrast to serial transmission, where data bits are sent one after the other over a single channel.

Key Characteristics of Parallel Transmission:

  1. Speed: Parallel transmission can be faster than serial transmission because multiple bits are sent at once. This can significantly increase data throughput, especially for large amounts of data.

  2. Complexity: Parallel transmission requires more wires or channels compared to serial transmission. This increases the complexity and cost of the system, especially over long distances.

  3. Synchronization: Ensuring that all bits arrive at the same time can be challenging. Skew, which is the difference in arrival times of bits, can be an issue in parallel transmission systems.

  4. Use Cases: Parallel transmission is commonly used in internal computer buses, such as the data bus between the CPU and memory, where short distances and high speeds are crucial.

  5. Error Handling: Parallel transmission systems often require robust error-checking mechanisms to handle potential issues like crosstalk (interference between signals on adjacent wires).

Examples of Parallel Transmission:

  • The internal data bus between the CPU and memory
  • Parallel ATA (IDE) ribbon cables for disk drives
  • The IEEE 1284 parallel printer port
  • The PCI bus for peripheral cards

Parallel transmission is less commonly used for long-distance communication due to the complexity and cost associated with maintaining multiple channels. Instead, serial transmission is often preferred for such applications, with techniques like multiplexing used to achieve higher data rates.


In computer architecture, the term “instruction word length” refers to the size, in bits, of the instructions that a processor can execute. This length is a crucial aspect of a computer’s architecture because it determines several key characteristics:

  1. Instruction Set Complexity: The length of an instruction word influences the complexity and variety of operations that a processor can perform. Longer instruction words can encode more complex operations, while shorter ones may be limited to simpler tasks.

  2. Memory Usage: The instruction word length affects how much memory is required to store programs. Shorter instructions use less memory, which can be advantageous in systems with limited memory resources.

  3. Processing Speed: The length of instructions can impact the speed at which a processor can execute them. Shorter instructions may be faster to decode and execute, but they might require more instructions to perform complex tasks.

  4. Compatibility and Portability: The instruction word length is a fundamental aspect of a processor’s design, and programs compiled for one instruction word length may not run on processors with a different length without modification.

Common instruction word lengths include 8-bit, 16-bit, 32-bit, and 64-bit, each with its own advantages and trade-offs in terms of performance, memory usage, and complexity.

