Advanced Computer Architecture
(B.C.A. Part-III)
Nitika Newar, MCA
Deptt. of I.T.
Biyani Girls College, Jaipur
Hastlipi, Jaipur
Syllabus
B.C.A. Part-III
Advanced Computer Architecture
Parallel Computer Models : The state of computing, multiprocessors and
multicomputers, multivector and SIMD computers, architectural development tracks.
Program and Network Properties : Conditions of parallelism, program partitioning
and scheduling, program flow mechanisms.
System Interconnect Architectures : Network properties and routing, static
interconnection networks and dynamic interconnection networks.
Processors and Memory Hierarchy : Advanced processor technology: CISC, RISC,
Superscalar, Vector, VLIW and symbolic processors, memory technology.
Bus, Cache and Shared Memory.
Linear Pipeline Processors, Nonlinear Pipeline Processors, Instruction Pipeline
Design. Multiprocessor System Interconnects. Vector Processing Principles, Multivector
Multiprocessors.
Content
S. No. Name of Topic
1. Parallel Computer Models
1.1 Multiprocessors
1.2 Parallel Processing
1.3 State of Computing
1.4 History of Computer Architecture
1.5 Parallelism
1.6 Levels of Parallelism
1.7 Vector Supercomputers
1.8 Shared Memory Multiprocessors
1.9 Distributed Memory Multicomputers
1.10 SIMD Computers
1.11 Architectural Development Tracks
1.12 SIMD Array Processors
2. Program Partitioning or Scheduling
2.1 Program Flow Mechanisms
2.2 Data Flow Architecture
2.3 Grain Sizes & Latency
2.4 Scheduling Procedure
3. System Interconnect Architecture
3.1 Network Properties
3.2 Bisection Width
3.3 Data Routing Functions
Chapter 1
Parallel Computer Models
Q.1. What is a multiprocessor? What are the types of multiprocessors?
Ans. A multiprocessor system is an interconnection of two or more CPUs with
memory and input-output equipment. Multiprocessors are
classified as multiple instruction stream, multiple data stream (MIMD) systems.
There are some similarities between multiprocessor and multicomputer systems,
since both support concurrent operations. However, there exists an important
distinction between a system with multiple computers and a system with multiple
processors. Computers are interconnected with each other by means of
communication lines to form a computer network. The network consists of
several autonomous computers that may or may not communicate with each
other. A multiprocessor system, by contrast, is controlled by one operating system that
provides interconnection between processors, and all the components of the system
cooperate in the solution of a problem. Very large scale integration (VLSI)
technology has reduced the cost of computer components to such a low level that
the concept of applying multiple processors to meet system performance
requirements has become an attractive design possibility.
Multiprocessing improves the reliability of the system, so that a failure or error in one
part has a limited effect on the rest of the system. If a fault causes one processor to fail, a
second processor can be assigned to perform the functions of the disabled processor.
The system as a whole can continue to function correctly, with perhaps some loss
in efficiency. The main benefit derived from a multiprocessor organisation is
improved system performance. The system derives its high performance from
the fact that computations can proceed in parallel in one of two ways:
1. Multiple independent jobs can be made to operate in parallel.
2. A single job can be partitioned into multiple parallel tasks.
An example is a computer system where one processor performs the
computations for an industrial process control application while others monitor and
control various parameters such as temperature and flow rate.
Another example is a computer where one processor performs high-speed
floating point mathematical computations and another takes care of routine data
processing tasks.
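The second way listed above, partitioning a single job into parallel tasks, can be sketched in Python. This is an illustrative modern sketch, not part of the original notes; it uses threads from the standard library merely to show the task structure (a real speedup for CPU-bound work would require the tasks to run on separate processors):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # One parallel task: sum one slice of the data.
    return sum(chunk)

def parallel_total(data, workers=4):
    # Partition the single job (summing `data`) into independent tasks,
    # one chunk per worker, then combine the partial results.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_total(list(range(1, 101))))  # 5050, same as the sequential sum
```

Each chunk is an independent task, so the combined result is identical to the sequential computation regardless of how the tasks are scheduled.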
Multiprocessing can improve performance by decomposing a
program into parallel executable tasks. This can be achieved in one of two ways:
The user can explicitly declare that certain tasks of the program be executed in
parallel. This must be done prior to loading the program, by specifying the parallel
executable segments. Most multiprocessor manufacturers provide an operating
system with programming language constructs suitable for specifying parallel
processing.
The other, more efficient way is to provide a compiler with multiprocessor
software that can automatically detect parallelism in a user's program. The
compiler checks for data dependence in the program. If one part of a program depends on
data generated in another part, the part yielding the needed data must be
executed first. However, two parts of a program that do not use data generated
by each other can run concurrently. The parallelizing compiler checks the entire
program to detect any possible data dependence. Parts that have no data
dependence are then considered for concurrent scheduling on different
processors.
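The dependence rule above can be sketched with a small example. The function names below are hypothetical; the point is that `part_b` consumes a value produced by `part_a` and so must wait, while the two counting functions share no produced data and may be scheduled concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

# Dependent parts: part_b needs the value produced by part_a,
# so part_a must be executed first (the compiler schedules them serially).
def part_a():
    return 21

def part_b(a_result):
    return a_result * 2

# Independent parts: neither uses data produced by the other,
# so a parallelizing compiler may schedule them concurrently.
def count_evens(nums):
    return sum(1 for n in nums if n % 2 == 0)

def count_odds(nums):
    return sum(1 for n in nums if n % 2 == 1)

nums = list(range(10))
serial = part_b(part_a())            # forced ordering: 42

with ThreadPoolExecutor() as pool:   # no dependence: run side by side
    evens = pool.submit(count_evens, nums)
    odds = pool.submit(count_odds, nums)
    concurrent_results = (evens.result(), odds.result())

print(serial, concurrent_results)    # 42 (5, 5)
```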
Multiprocessors are classified by the way their memory is organized. A multiprocessor
system with common shared memory is classified as a shared memory or tightly
coupled multiprocessor. This does not preclude each processor from having its
own local memory. In fact, most commercial tightly coupled multiprocessors
provide a cache memory with each CPU. In addition, there is a global common
memory that all CPUs can access. Information can therefore be shared among the
CPUs by placing it in the common global memory.
An alternative model of multiprocessor is the distributed memory or loosely
coupled system. Each processor element in a loosely coupled system has its own
private local memory. The processors are tied together by a switching scheme
designed to route information from one processor to another through a message
passing scheme. The processors relay program and data to other processors in
packets. A packet consists of an address, the data content and some error
detection code. The packets are addressed to a specific processor or taken by the first
available processor, depending on the communication system used. Loosely
coupled systems are most efficient when the interaction between tasks is
minimal, whereas tightly coupled systems can tolerate a higher degree of
interaction between tasks.
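The packet format described above (address, data content, error-detection code) can be sketched as follows. The `Packet` class, the CRC-32 checksum, and the queue-based routing are illustrative assumptions for this sketch, not a real interconnect protocol:

```python
import zlib
from dataclasses import dataclass
from queue import Queue

@dataclass
class Packet:
    address: int     # destination processor id
    data: bytes      # payload (program or data being relayed)
    checksum: int    # error-detection code

def make_packet(address, data):
    # Attach a CRC-32 checksum as the error-detection code.
    return Packet(address, data, zlib.crc32(data))

def deliver(packet, inboxes):
    # Verify the error-detection code, then route the packet to the
    # local memory (here, a queue) of the addressed processor.
    if zlib.crc32(packet.data) != packet.checksum:
        raise ValueError("corrupted packet")
    inboxes[packet.address].put(packet.data)

inboxes = {0: Queue(), 1: Queue()}
deliver(make_packet(1, b"partial result"), inboxes)
print(inboxes[1].get())   # b'partial result'
```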
Q.2. What is parallel processing?
Ans. Parallel processing is a term used to denote a large class of techniques that
provide simultaneous data processing tasks for the purpose of increasing
the computational speed of a computer system. Instead of processing each
instruction sequentially as in a conventional computer, a parallel processing
system is able to perform concurrent data processing to achieve faster execution
time. The purpose of parallel processing is to speed up the computer's processing
capability and increase its throughput, that is, the amount of processing that can
be accomplished during an interval of time. Parallel processing at a higher level of
complexity can be achieved by having a multiplicity of functional units that
perform identical or different operations simultaneously. Parallel processing is
established by distributing the data among the multiple functional units. For
example, the arithmetic, logic and shift operations can be separated into three units
and the operands diverted to each unit under the supervision of a control unit.
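The functional-unit example above can be sketched in Python. The three unit functions are hypothetical stand-ins, and threads stand in for parallel hardware; the point is that a supervising routine diverts operands to separate units that operate simultaneously:

```python
from concurrent.futures import ThreadPoolExecutor

# Three hypothetical functional units, each performing a different operation.
def arithmetic_unit(a, b):
    return a + b

def logic_unit(a, b):
    return a & b

def shift_unit(a, n):
    return a << n

# The "control unit" diverts operands to each functional unit; the units
# then operate simultaneously on their own operands.
with ThreadPoolExecutor(max_workers=3) as pool:
    add = pool.submit(arithmetic_unit, 6, 3)
    and_ = pool.submit(logic_unit, 6, 3)
    shl = pool.submit(shift_unit, 6, 1)
    results = (add.result(), and_.result(), shl.result())

print(results)   # (9, 2, 12)
```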
[Figure: Single Instruction stream, Single Data stream (SISD) and Single Instruction stream, Multiple Data stream (SIMD) organizations. CU: Control Unit; PU: Processing Unit; MM: Memory Module]
There are a variety of ways that parallel processing can be classified. One
classification, introduced by M.J. Flynn, considers the organization of a computer
system by the number of instructions and data items that are manipulated
simultaneously. The normal operation of a computer is to fetch instructions from
memory and execute them in the processor. The sequence of instructions read
from memory constitutes an instruction stream. The operations performed on the
data in the processor constitute a data stream. Parallel processing may occur in
the instruction stream, in the data stream, or in both.
Flynn's classification divides computers into four major groups:
1. Single instruction stream, single data stream (SISD)
2. Single instruction stream, multiple data stream (SIMD)
3. Multiple instruction stream, single data stream (MISD)
4. Multiple instruction stream, multiple data stream (MIMD)
[Figure: the SISD organization routes one instruction stream (IS) from the CU to a single PU operating on one data stream (DS) from its MM; the SIMD organization has one CU broadcasting the instruction stream to units PU1...PUn, each with its own data stream DS1...DSn and memory module MM1...MMn]
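The SISD/SIMD distinction can be loosely illustrated in Python. This is only an analogy: the list comprehension models one instruction applied across all data elements, whereas real SIMD hardware would execute it in lock-step across processing units:

```python
data = [1, 2, 3, 4]

# SISD-style: a single instruction stream operates on one data item at a time.
sisd_result = []
for x in data:
    sisd_result.append(x + 10)   # one item per step

# SIMD-style: conceptually, one instruction ("add 10") is applied to all
# data elements; here a list comprehension stands in for the broadcast.
simd_result = [x + 10 for x in data]

print(sisd_result == simd_result)   # True: same computation, different organization
```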
Q.3. Explain the state of computing?
Ans. Modern computers are equipped with powerful hardware facilities driven by
extensive software packages. To assess the state of computing, we first review
historical milestones in the development of computers.





