32-bit-Pipelining-Processor-based-on-RISC-architecture

What is Pipelining?

Pipelining is a technique used in modern processors to improve performance by executing multiple instructions simultaneously.
It breaks down the execution of instructions into several stages, where each stage completes a part of the instruction.
These stages can overlap, allowing the processor to work on different instructions at various stages of completion—similar to an assembly line in manufacturing.

Design of a Basic Pipeline

In a pipelined processor, the pipeline has two ends: the input end and the output end.
Between these ends are multiple stages or segments, where the output of one stage is connected to the input of the next.
Each stage performs a specific operation independently.

To maintain the intermediate results between stages, interface registers (also known as latches or buffers) are used.
These registers store the output of one stage before it's passed to the next.

All stages in the pipeline, along with the interface registers, are synchronized by a common clock.
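
As a rough illustration (a minimal Python sketch under simplifying assumptions, not the hardware description in this repository), the model below shows the role of the interface registers: each register latches the value produced by its upstream stage on the same clock edge, so all stages advance in lockstep.

```python
# Minimal sketch of pipeline interface registers driven by a common clock.
# Illustrative only; the names and structure are assumptions, not this repo's design.

class PipelineRegister:
    """Holds one in-flight result between two adjacent stages."""
    def __init__(self):
        self.value = None       # value currently visible to the downstream stage
        self.next_value = None  # value computed by the upstream stage this cycle

    def latch(self):
        """Clock edge: capture the staged value."""
        self.value = self.next_value

def clock_tick(registers):
    """One common clock edge: every interface register latches at once."""
    for reg in registers:
        reg.latch()

# Example: three interface registers between four stages.
regs = [PipelineRegister() for _ in range(3)]
regs[0].next_value = "instr-1"
clock_tick(regs)
print(regs[0].value)  # 'instr-1' has crossed the first stage boundary
```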

Execution in a Pipelined Processor

The execution sequence of instructions in a pipelined processor can be visualized using a space-time diagram.

Example: Execution in a 4-Stage Pipelined Processor

For example, consider a processor with 4 stages and 2 instructions to be executed.
The execution pattern would look like overlapping steps across clock cycles—showing how multiple instructions progress through different stages simultaneously.

Space-Time Diagram (4 stages S1 through S4, 2 instructions):

Cycle:   1    2    3    4    5
I1       S1   S2   S3   S4
I2            S1   S2   S3   S4

Total Time = 5 Cycles
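
That total follows from the usual idealized assumption of one stage per clock cycle and no stalls: a pipeline with k stages finishes n instructions in k + (n - 1) cycles. A quick sketch (illustrative Python, not part of this project's sources):

```python
def pipeline_cycles(stages: int, instructions: int) -> int:
    """Ideal pipeline: the first instruction needs `stages` cycles,
    and each later instruction completes one cycle after the previous one."""
    return stages + (instructions - 1)

print(pipeline_cycles(4, 2))  # 5 cycles, matching the diagram above
print(4 * 2)                  # 8 cycles if the same 2 instructions ran unpipelined
```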

A RISC processor typically uses a 5-stage instruction pipeline to execute instructions efficiently.
Each stage performs a specific operation, allowing multiple instructions to be processed in parallel.

5 Stages of the RISC Pipeline:

  1. Instruction Fetch (IF)
    The CPU fetches the instruction from memory using the address stored in the Program Counter (PC).

  2. Instruction Decode (ID)
    The instruction is decoded, and the register file is accessed to obtain the values of the source registers.

  3. Instruction Execute (EX)
    The Arithmetic Logic Unit (ALU) performs operations such as addition, subtraction, etc., as specified by the instruction.

  4. Memory Access (MEM)
    If required, data is read from or written to memory, based on the instruction.

  5. Write Back (WB)
    The result from the execution or memory stage is written back into the destination register.

Each instruction passes through these stages in sequence, enabling overlapping execution and improved throughput due to pipelining.
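
To visualize that overlap, here is a toy simulator (illustrative Python under the assumption of one instruction issued per cycle and no hazards; the actual processor in this repository is a hardware design, and the instruction names below are made up):

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def simulate(instructions):
    """Print a space-time table for an ideal 5-stage pipeline."""
    total_cycles = len(STAGES) + len(instructions) - 1
    print("Cycle:        " + " ".join(f"{c:>4}" for c in range(1, total_cycles + 1)))
    for i, name in enumerate(instructions):
        row = []
        for cycle in range(1, total_cycles + 1):
            stage = cycle - 1 - i  # stage index this instruction occupies, if any
            row.append(f"{STAGES[stage]:>4}" if 0 <= stage < len(STAGES) else "    ")
        print(f"{name:<14}" + " ".join(row))

simulate(["ADD r1,r2,r3", "SUB r4,r5,r6", "LW  r7,0(r8)"])
# Each instruction enters IF one cycle after the previous one,
# so three instructions finish in 5 + (3 - 1) = 7 cycles.
```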

Performance of a Pipeline

Pipeline performance is primarily measured using two key metrics: Throughput and Latency. A small numeric example applying both formulas follows the two definitions below.

What is Throughput?

  • Measures the number of instructions completed per unit time.
  • Indicates the overall processing speed of the pipeline.
  • Higher throughput means faster processing.
  • Formula:
    Throughput = Number of instructions executed / Execution time
  • Affected by:
    • Pipeline length
    • Clock frequency
    • Efficiency of instruction execution
    • Pipeline hazards or stalls

What is Latency?

  • Measures the time taken for a single instruction to complete its execution.
  • Indicates the delay or execution time per instruction through all pipeline stages.
  • Lower latency means better performance.
  • Formula:
    Latency = Execution time / Number of instructions executed
  • Affected by:
    • Pipeline depth (number of stages)
    • Clock cycle time
    • Instruction dependencies
    • Pipeline hazards
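
As a worked example of the two formulas (the numbers are illustrative, not measurements from this project):

```python
def throughput(instructions: int, execution_time_s: float) -> float:
    """Instructions completed per unit time (here: instructions per second)."""
    return instructions / execution_time_s

def average_latency(instructions: int, execution_time_s: float) -> float:
    """Average execution time per completed instruction."""
    return execution_time_s / instructions

# Suppose 1,000,000 instructions retire in 0.002 s:
print(throughput(1_000_000, 0.002))       # 500,000,000.0 instructions per second
print(average_latency(1_000_000, 0.002))  # 2e-09 s (2 ns) per instruction on average
```

Note that with these definitions the two metrics are reciprocals of each other; the latency of a single instruction through the pipeline itself is also commonly stated as (number of stages) x (clock cycle time).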

Advantages of Pipelining

  1. Increased Throughput
    Multiple instructions are processed simultaneously at different stages, so more instructions are completed per unit time than in a non-pipelined architecture.

  2. Improved CPU Utilization
    All parts of the processor are engaged at different stages, minimizing idle time and maximizing resource usage.

  3. Better Performance for Repetitive Tasks
    Ideal for loops and repeated instruction patterns, reducing the average execution time per instruction.

  4. Scalability
    Easily implemented across a range of processors, from simple CPUs to advanced multi-core processors.

Disadvantages of Pipelining

  1. Pipeline Hazards

    • Data Hazards: Dependencies between instructions (see the read-after-write sketch after this list)
    • Control Hazards: Branch instructions causing delays
    • Structural Hazards: Limited hardware resources
    • These hazards may introduce delays or require complex handling.

  2. Increased Complexity
    The design and management of a pipelined processor are more complex due to synchronization, hazard handling, and instruction tracking.

  3. Stall Cycles
    Hazards can introduce pipeline stalls or bubbles, reducing the effective throughput and introducing idle stages.

  4. Instruction Latency
    Although overall throughput improves, the latency for individual instructions may remain the same or even increase slightly due to pipeline overheads.

  5. Hardware Overhead
    Additional pipeline registers and control logic increase the design complexity and cost of the processor.
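
To make the data-hazard point concrete, here is a minimal sketch (a hypothetical two-instruction sequence and a simplified cycle-count model, not code or measurements from this repository) of how a read-after-write (RAW) dependency inserts stall cycles when no forwarding hardware is present:

```python
# Hypothetical RAW hazard in a 5-stage pipeline:
#   I1: ADD r1, r2, r3   -> r1 is written back in WB
#   I2: SUB r4, r1, r5   -> r1 is needed when I2 reads its source registers
#
# Without forwarding, and assuming the register file writes in the first half of a
# cycle and reads in the second half, I2 must wait two extra cycles (two bubbles).

def cycles_with_stalls(stages: int, instructions: int, bubbles: int) -> int:
    """Ideal cycle count plus bubbles inserted for hazards (simplified model)."""
    return stages + (instructions - 1) + bubbles

print(cycles_with_stalls(5, 2, 0))  # 6 cycles with no hazard
print(cycles_with_stalls(5, 2, 2))  # 8 cycles: one RAW hazard costs two bubbles here
```

Forwarding (bypassing) or compiler scheduling can remove most of these bubbles; branch prediction plays the analogous role for control hazards.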

About

Implements a 5-stage RISC processor pipeline (IF, ID, EX, MEM, WB) to improve performance through instruction-level parallelism. Measures performance using throughput and latency.
