CSCI 4335

COMPUTER ARCHITECTURE

Syllabus for Fall 2007

 

Dr. John P. Abraham

 

Office: Engineering Building 3.276

Telephone: 381-3550 (office)

Email: jabraham@panam.edu (email is the best way to reach me)

 

My Schedule

CSCI 6303.01          M 5:45 pm to 8:25 pm            Eng 1.272

CSCI 4335.01          MWF 1:45 pm to 2:35 pm          Eng 1.272

CSCI 3327.01          MWF 10:45 am to 11:35 am        Eng 1.290

Office Hours

MW 3:30 pm to 5:30 pm

TR: By appointment

Textbooks:

Stallings, William. Computer Organization & Architecture: Designing for Performance. Seventh Edition. 2006. Prentice Hall. ISBN 0-13-185644-8.

 

Recommended Reading:

Silberschatz, A., Galvin, P. B., and Gagne, G. Operating System Concepts. Seventh Edition. 2007. Wiley. ISBN 0-471-69466-5.

Tanenbaum, A. Modern Operating Systems. Second Edition. Prentice Hall.

Tanenbaum, A. Structured Computer Organization. Fifth Edition. Prentice Hall.

Irvine, K. Assembly Language for Intel-Based Computers. Fifth Edition. Prentice Hall.

Nutt, G. Operating Systems: A Modern Perspective. Third Edition. Addison-Wesley.

Prerequisites:

Prerequisite courses: CSCI 2333 and CSCI 3334.  Students should understand assembly and machine language, assembly language programming, binary numbers, addressing modes, and discrete mathematics, as well as the roles of hardware and software in a computer system.

Course Topics:

Architecture of modern computers, including (1) computer instruction sets and registers, (2) ALU components, (3) the fetch-execute cycle, (4) microcode, (5) the system bus, (6) main memory, (7) cache, (8) virtual memory, (9) input and output communications, (10) RISC, (11) pipelined architectures, (12) parallel processor architectures, (13) introduction to digital circuits and binary representations, and (14) history of computer evolution.

 

Between this course (4335) and Assembly Language (2333), you will receive instruction in the following areas.

Learning outcomes for each area are given below.

AR1. Digital Logic and Digital Systems [core], 6 hours

Overview and history of computer architecture

Fundamental building blocks (logic gates, flip-flops, counters, registers, PLA)

Logic expressions, minimization, sum of product forms

Register transfer notation

Physical considerations (gate delays, fan-in, fan-out)

1. Describe the progression of computer architecture from vacuum tubes to VLSI.

2. Demonstrate an understanding of the basic building blocks and their role in the historical development of computer architecture.

3. Use mathematical expressions to describe the functions of simple combinational and sequential circuits.

4. Design a simple circuit using the fundamental building blocks.
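To make outcomes 3 and 4 concrete, here is a minimal sketch in C (my own illustrative example, not from the textbook; the function and variable names are invented) that expresses a one-bit full adder as logic expressions and prints its truth table. The carry-out is written in sum-of-products form, the kind of expression you will learn to minimize in this unit.

#include <stdio.h>

/* One-bit full adder described as logic expressions:
 *   sum  = a XOR b XOR cin
 *   cout = ab + a*cin + b*cin   (sum-of-products form)
 * This is an illustrative sketch, not a gate-level design tool. */
static void full_adder(int a, int b, int cin, int *sum, int *cout)
{
    *sum  = a ^ b ^ cin;
    *cout = (a & b) | (a & cin) | (b & cin);
}

int main(void)
{
    /* Print the complete truth table for the circuit. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            for (int cin = 0; cin <= 1; cin++) {
                int s, c;
                full_adder(a, b, cin, &s, &c);
                printf("a=%d b=%d cin=%d -> sum=%d cout=%d\n",
                       a, b, cin, s, c);
            }
    return 0;
}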

 

AR2. Machine Level Representation of Data [core], 3 hours

Bits, bytes, and words

Numeric data representation and number bases

Fixed- and floating-point systems

Signed and twos-complement representations

Representation of nonnumeric data (character codes, graphical data)

Representation of records and arrays

1. Explain the reasons for using different formats to represent numerical data.

2. Explain how negative integers are stored in sign-magnitude and twos-complement representation.

3. Convert numerical data from one format to another.

4. Discuss how fixed-length number representations affect accuracy and precision.

5. Describe the internal representation of nonnumeric data.

6. Describe the internal representation of characters, strings, records, and arrays.
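As an illustration of the twos-complement part of outcomes 2 and 3, the short C sketch below (an assumed example, not taken from the textbook) interprets an 8-bit pattern as a twos-complement value and negates a number by inverting its bits and adding one.

#include <stdio.h>
#include <stdint.h>

/* Interpret an 8-bit pattern as a twos-complement integer:
 * the most significant bit carries weight -128, the rest are positive. */
static int interpret_twos_complement(uint8_t bits)
{
    int value = bits & 0x7F;          /* low seven bits, weights 1..64 */
    if (bits & 0x80)                  /* sign bit has weight -128      */
        value -= 128;
    return value;
}

int main(void)
{
    uint8_t pattern = 0xF5;                       /* 1111 0101 */
    printf("0x%02X as twos complement = %d\n",    /* prints -11 */
           pattern, interpret_twos_complement(pattern));

    /* Negation: invert all bits and add one (here for +11 -> -11). */
    uint8_t eleven = 0x0B;
    uint8_t negated = (uint8_t)(~eleven + 1);
    printf("-(11) encodes as 0x%02X = %d\n",
           negated, interpret_twos_complement(negated));
    return 0;
}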

 

AR3. Assembly Level Machine Organization [core], 9 hours

Basic organization of the von Neumann machine

Control unit; instruction fetch, decode, and execution

Instruction sets and types (data manipulation, control, I/O)

Assembly/machine language programming

Instruction formats

Addressing modes

Subroutine call and return mechanisms

I/O and interrupts

1. Explain the organization of the classical von Neumann machine and its major functional units.

2. Explain how an instruction is executed in a classical von Neumann machine.

3. Summarize how instructions are represented at both the machine level and in the context of a symbolic assembler.

4. Explain different instruction formats, such as addresses per instruction and variable length vs. fixed length formats.

5. Write simple assembly language program segments.

6. Demonstrate how fundamental high-level programming constructs are implemented at the machine-language level.

7. Explain how subroutine calls are handled at the assembly level.

8. Explain the basic concepts of interrupts and I/O operations.
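The following sketch illustrates outcomes 1 and 2 with a made-up accumulator machine: the opcodes, word format, and program are invented purely for illustration and do not correspond to any real instruction set. It runs the classic fetch-decode-execute loop of a von Neumann machine in about thirty lines of C.

#include <stdio.h>

/* A tiny invented accumulator machine used only to illustrate the
 * fetch-decode-execute cycle.  Each instruction is one 16-bit word:
 * high byte = opcode, low byte = memory address (direct addressing). */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

int main(void)
{
    unsigned short mem[256] = {0};

    /* Program: acc = mem[10] + mem[11]; mem[12] = acc; halt */
    mem[0] = (OP_LOAD  << 8) | 10;
    mem[1] = (OP_ADD   << 8) | 11;
    mem[2] = (OP_STORE << 8) | 12;
    mem[3] = (OP_HALT  << 8);
    mem[10] = 7;
    mem[11] = 35;

    unsigned pc = 0;         /* program counter */
    unsigned short acc = 0;  /* accumulator     */

    for (;;) {
        unsigned short ir = mem[pc++];   /* fetch into the instruction register */
        unsigned opcode  = ir >> 8;      /* decode the opcode field  */
        unsigned address = ir & 0xFFu;   /* decode the address field */

        switch (opcode) {                /* execute */
        case OP_LOAD:  acc = mem[address];    break;
        case OP_ADD:   acc += mem[address];   break;
        case OP_STORE: mem[address] = acc;    break;
        default:       printf("mem[12] = %d\n", mem[12]);  /* prints 42 */
                       return 0;
        }
    }
}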

 

AR4. Memory System Organization and Architecture [core], 5 hours

Storage systems and their technology

Coding, data compression, and data integrity

Memory hierarchy

Main memory organization and operations

Latency, cycle time, bandwidth, and interleaving

Cache memories (address mapping, block size, replacement and store policy)

Virtual memory (page table, TLB)

Fault handling and reliability

1. Identify the main types of memory technology.

2. Explain the effect of memory latency on running time.

3. Explain the use of memory hierarchy to reduce the effective memory latency.

4. Describe the principles of memory management.

5. Describe the role of cache and virtual memory.

6. Explain the workings of a system with virtual memory management.
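One common way to quantify outcomes 2 and 3 is the effective (average) access time of a two-level memory hierarchy. The sketch below uses assumed, illustrative timings (a 10 ns cache, 100 ns main memory, 95% hit rate) rather than figures from the textbook.

#include <stdio.h>

int main(void)
{
    /* Illustrative, assumed parameters for a two-level hierarchy. */
    double cache_time  = 10.0;   /* ns, cache hit time         */
    double memory_time = 100.0;  /* ns, extra time on a miss   */
    double hit_rate    = 0.95;

    /* Effective access time = H * Tc + (1 - H) * (Tc + Tm)
     * (a miss pays the cache lookup plus the main-memory access). */
    double eat = hit_rate * cache_time
               + (1.0 - hit_rate) * (cache_time + memory_time);

    printf("Effective access time: %.1f ns\n", eat);   /* 15.0 ns */
    return 0;
}

With these assumed numbers the hierarchy behaves, on average, like a 15 ns memory even though most of its capacity sits in the slower technology.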

 

AR5. Interfacing and Communication [core], 3 hours

I/O fundamentals: handshaking, buffering, programmed I/O, interrupt-driven I/O

Interrupt structures: vectored and prioritized, interrupt acknowledgment

External storage, physical organization, and drives

Buses: bus protocols, arbitration, direct-memory access (DMA)

Introduction to networks

Multimedia support

RAID architectures

1. Explain how interrupts are used to implement I/O control and data transfers.

2. Identify various types of buses in a computer system.

3. Describe data access from a magnetic disk drive.

4. Compare the common network configurations.

5. Identify interfaces needed for multimedia support.

6. Describe the advantages and limitations of RAID architectures.
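To contrast programmed and interrupt-driven I/O (outcome 1), here is a small simulation of programmed (polled) I/O in C; the device, its registers, and its timing are invented for illustration. The busy-wait loop is exactly the cost that interrupt-driven I/O avoids, since the CPU could do useful work until the device signals completion.

#include <stdio.h>

/* A simulated device used only to illustrate programmed (polled) I/O:
 * the CPU repeatedly tests a status flag until the device is ready. */
struct device {
    int ready;      /* status register: 1 when data is available */
    int data;       /* data register                             */
    int countdown;  /* simulates how long the operation takes    */
};

static void device_tick(struct device *d)
{
    if (!d->ready && --d->countdown == 0) {
        d->data = 42;
        d->ready = 1;
    }
}

int main(void)
{
    struct device dev = { 0, 0, 5 };
    int polls = 0;

    /* Programmed I/O: the processor busy-waits on the status bit.
     * With interrupt-driven I/O the CPU would instead continue other
     * work and the device would signal completion asynchronously. */
    while (!dev.ready) {
        device_tick(&dev);
        polls++;
    }
    printf("read %d after %d polls\n", dev.data, polls);
    return 0;
}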

 

AR6. Functional Organization [core], 7 hours

Implementation of simple datapaths

Control unit: hardwired realization vs. microprogrammed realization

Instruction pipelining

Introduction to instruction-level parallelism (ILP)

1. Compare alternative implementations of datapaths.

2. Discuss the concept of control points and the generation of control signals using hardwired or microprogrammed implementations.

3. Explain basic instruction-level parallelism using pipelining and the major hazards that may occur.
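A standard back-of-the-envelope calculation for outcome 3 is the idealized speedup of a k-stage pipeline over an unpipelined processor. The sketch below uses assumed values (5 stages, 100 instructions) and ignores hazards, which in practice reduce the achievable speedup.

#include <stdio.h>

int main(void)
{
    /* Idealized pipeline speedup (no hazards, one stage per cycle):
     *   unpipelined time = n * k cycles
     *   pipelined time   = k + (n - 1) cycles
     * The numbers below are assumed purely for illustration. */
    int k = 5;        /* pipeline stages     */
    int n = 100;      /* instructions issued */

    double unpipelined = (double)n * k;
    double pipelined   = (double)k + (n - 1);
    printf("speedup = %.2f (limit is %d as n grows)\n",
           unpipelined / pipelined, k);
    return 0;
}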

 

AR7. Multiprocessing and Alternative Architectures [core], 3 hours

Introduction to SIMD, MIMD, VLIW, EPIC

Systolic architecture

Interconnection networks (hypercube, shuffle-exchange, mesh, crossbar)

Shared memory systems

Cache coherence

Memory models and memory consistency

1. Discuss the concept of parallel processing beyond the classical von Neumann model.

2. Describe alternative architectures such as SIMD, MIMD, and VLIW.

3. Explain the concept of interconnection networks and characterize different approaches.

4. Discuss the special concerns that multiprocessing systems present with respect to memory management and describe how these are addressed.

5. Explain the effect of different isolation levels on the concurrency control mechanisms.
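As a small, concrete piece of outcome 3, the sketch below enumerates the neighbors of a node in a binary hypercube interconnection network: in a d-dimensional hypercube every node has a d-bit label, and two nodes are linked exactly when their labels differ in one bit. The dimension and node number are invented for illustration.

#include <stdio.h>

int main(void)
{
    /* Illustrative parameters: a 4-dimensional hypercube (16 nodes). */
    int dimensions = 4;
    int node = 5;                 /* binary 0101 */

    /* Each neighbor differs from this node in exactly one bit,
     * so flipping each of the d bits enumerates all d neighbors. */
    printf("neighbors of node %d:", node);
    for (int bit = 0; bit < dimensions; bit++)
        printf(" %d", node ^ (1 << bit));
    printf("\n");                 /* prints 4 7 1 13 */
    return 0;
}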

 

AR8. Performance Enhancements [elective]

Superscalar architecture

Branch prediction

Prefetching

Speculative execution

Multithreading

Scalability

1. Describe superscalar architectures and their advantages.

2. Explain the concept of branch prediction and its utility.

3. Characterize the costs and benefits of prefetching.

4. Explain speculative execution and identify the conditions that justify it.

5. Discuss the performance advantages that multithreading can offer in an architecture along with the factors that make it difficult to derive maximum benefits from this approach.

6. Describe the relevance of scalability to performance.
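To make the branch-prediction outcome (item 2) concrete, here is a simulation of a single two-bit saturating counter, one common dynamic prediction scheme; the branch-outcome history in the array is invented for illustration.

#include <stdio.h>

/* One two-bit saturating counter: states 0-1 predict "not taken",
 * states 2-3 predict "taken".  The counter moves one step toward the
 * actual outcome on every branch, so it tolerates a single anomaly. */
int main(void)
{
    int counter = 2;                             /* start weakly "taken" */
    int outcomes[] = {1, 1, 1, 0, 1, 1, 0, 1};   /* invented history     */
    int correct = 0, total = (int)(sizeof outcomes / sizeof outcomes[0]);

    for (int i = 0; i < total; i++) {
        int prediction = (counter >= 2);   /* predict taken? */
        if (prediction == outcomes[i])
            correct++;
        if (outcomes[i] && counter < 3)    /* train toward taken     */
            counter++;
        else if (!outcomes[i] && counter > 0)
            counter--;                     /* train toward not taken */
    }
    printf("%d of %d predictions correct\n", correct, total);
    return 0;
}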

 

Grading:

            Test grades                   65%          90%-100%   A

            Assignments                   25%          80%-89%    B

            Notebook & presentation       10%          70%-79%    C

                                                       60%-69%    D

Final Exam: WEDNESDAY, DECEMBER 12, 2007

For classes that meet:

Meeting Time                  Exam Period

MWF 1:45 - 2:35 p.m.          9:45 - 11:30 a.m.

 

Homework assignments will be handed out at the beginning of each chapter. Some assignments will be taken directly from the textbook; others will cover current hardware trends and will require some research.

Each student will assemble a computer and demonstrate it to me.  Since we have parts for only a few computers, please sign up for a time slot.

Research paper: You will be required to submit one research paper of 10 to 15 pages in length.  Choose any of the following:

Latest developments in hardware (chips, CPUs, I/O, etc.)

 

Chip design, manufacturing, packaging, etc.

 

Your work on performance evaluation, load balancing, workload characterization, etc.

Hazards of pipelining

Instruction level parallelism.

Loop unrolling.

Latest trends in achieving higher speed.

Any other topic that you are interested in.  Get my permission first.

 

Tentative schedule:

 

Week    Chapters        Topic

1       1, 10           Introduction and Instruction Sets
2       2-3             History and Top-Level View of a Computer
3       4               Memory
4       4               Memory
5       4 & 5           Memory
6       6               Exam I (Chapters 1-5); Storage Devices
7       7               Input/Output Modules and Devices
8       8               Operating System Support
9       9 & 10          Instruction Set
10      11 & 12         Addressing Modes & Instruction Formats
11      12 & 13         Processor and Register Organization, RISC
12      12 & 14         Exam II (Chapters 6-12); Parallelism
13      16-17           The Control Unit
14      FINAL           Wrap-up and Final Exam

 


If you must miss an exam, make prior arrangements; no make-up exams will be given unless you contact me in advance. Homework may be submitted by email or as a hard copy in my mailbox prior to class time. Late homework will incur heavy penalties: one day late, 10% off; one week late, 20% off; two weeks late, 50% off; not accepted afterwards.

Note to students with disabilities:

If you have a documented disability that will make it difficult for you to carry out the work as I have outlined, and/or if you need special accommodations or assistance due to a disability, please immediately contact the Office of Services for Persons with Disabilities (OSPD), Emilia Ramirez-Schunior Hall, Room 1.101, or the Associate Director at Maureen@utpa.edu, ext. 7005.  Appropriate arrangements/accommodations can be made.

Verification of disability and processing of requests for special services, such as note-takers, extended-time tests, and separate testing accommodations, will be handled by OSPD.  Please do not assume adjustments/accommodations are impossible; consult with the Associate Director, OSPD, at extension 7005.

The following questions will help you study for the exams.

1.      Draw the structure of an IAS computer and describe how a small program is executed.

2.      Given a bus width, design an instruction set for an IAS computer and show the compromises that must be made between the number of instructions and the amount of addressable memory.

3.      Describe the key design elements for interconnection of peripherals to the CPU.

4.      Using diagrams, show what happens in each of the sub-cycles of an instruction cycle.

5.      Describe the memory hierarchy and calculate performance improvements.

6.      Calculate block transfers between a designated cache and main memory, show the similarity to paging, and explain resource management in a computer system.

7.      Explain the interaction between the hardware, the operating system, the application software, and the user of a modern computer system.

8.      Calculate performance improvements of a pipelined RISC.

9.      Describe the need for different types of addressing modes.

10.  Describe the services an operating system provides to users and processes.

11.  Differentiate between processes and threads, and show why scheduling, creation, termination, and communication are important.  Describe various scheduling criteria and algorithms that may be used.

12.  Describe the importance of process synchronization and discuss classic problems of synchronization.