Advanced concepts in operating systems : distributed, database, and multiprocessor operating systems / Mukesh Singhal and Niranjan G. Shivaratri.

By: Singhal, Mukesh
Material type: Text
Series: McGraw-Hill series in computer science
Publication details: New York : McGraw-Hill, c1994
Description: xxii, 522 p. : ill. ; 25 cm
ISBN: 9780070472686 (acid-free paper)
Subject(s): Operating Systems (Computers)
DDC classification: 005.23
Holdings
Item type: General Books
Current library: Central Library, Sikkim University (General Book Section)
Call number: 005.23 SIN/A
Status: Available
Barcode: P21222
Total holds: 0

Includes bibliographical references and index.

1 Overview
1.1 Introduction
1.2 Functions of an Operating System
1.3 Design Approaches
1.3.1 Layered Approach
1.3.2 The Kernel Based Approach
1.3.3 The Virtual Machine Approach
1.4 Why Advanced Operating Systems
1.5 Types of Advanced Operating Systems
1.6 An Overview of the Book
1.7 Summary
1.8 Further Reading
References
2 Synchronization Mechanisms
2.1 Introduction
2.2 Concept of a Process
2.3 Concurrent Processes
2.3.1 Threads
2.4 The Critical Section Problem
2.4.1 Early Mechanisms for Mutual Exclusion
2.4.2 Semaphores
2.5 Other Synchronization Problems
2.5.1 The Dining Philosophers Problem
2.5.2 The Producer-Consumer Problem
2.5.3 The Readers-Writers Problem
2.5.4 Semaphore Solution to Readers-Writers Problem
2.6 Language Mechanisms for Synchronization
2.6.1 Monitors
2.6.2 Serializers
2.6.3 Path Expressions
2.6.4 Communicating Sequential Processes (CSP)
2.6.5 Ada Rendezvous
2.7 Axiomatic Verification of Parallel Programs
2.7.1 The Language
2.7.2 The Axioms
2.7.3 Auxiliary Variables
2.7.4 An Example: Proof of Mutual Exclusion
2.8 Summary
2.9 Further Reading
Problems
References
3 Process Deadlocks
3.1 Introduction
3.2 Preliminaries
3.2.1 Definition
3.2.2 Deadlock versus Starvation
3.2.3 Fundamental Causes of Deadlocks
3.2.4 Deadlock Handling Strategies
3.3 Models of Deadlocks
3.3.1 The Single-Unit Request Model
3.3.2 The AND Request Model
3.3.3 The OR Request Model
3.3.4 The AND-OR Request Model
3.3.5 The P-out-of-Q Request Model
3.4 Models of Resources
3.4.1 Types of Resources
3.4.2 Types of Resource Accesses
3.5 A Graph-Theoretic Model of a System State
3.5.1 General Resource Systems
3.5.2 General Resource Graph
3.5.3 Operations on the General Resource Graph
3.6 Necessary and Sufficient Conditions for a Deadlock
3.6.1 The Graph Reduction Method
3.7 Systems with Single-Unit Requests
3.8 Systems with only Consumable Resources
3.9 Systems with only Reusable Resources
3.9.1 Systems with Single-Unit Resources
3.9.2 Deadlock Detection
3.9.3 Deadlock Prevention
3.9.4 Deadlock Avoidance
3.9.5 Pros and Cons of Different Strategies
3.10 Summary
3.11 Further Reading
Problems
References
Part II Distributed Operating Systems
4 Architectures of Distributed Systems
4.1 Introduction
4.2 Motivations
4.3 System Architecture Types
4.4 Distributed Operating Systems
4.5 Issues in Distributed Operating Systems
4.5.1 Global Knowledge
4.5.2 Naming
4.5.3 Scalability
4.5.4 Compatibility
4.5.5 Process Synchronization
4.5.6 Resource Management
4.5.7 Security
4.5.8 Structuring
4.5.9 Client-Server Computing Model
4.6 Communication Networks
4.6.1 Wide Area Networks
4.6.2 Local Area Networks
4.7 Communication Primitives
4.7.1 The Message Passing Model
4.7.2 Remote Procedure Calls
4.7.3 Design Issues in RPC
4.8 Summary
4.9 Further Reading
References
5 Theoretical Foundations
5.1 Introduction
5.2 Inherent Limitations of a Distributed System
5.2.1 Absence of a Global Clock
5.2.2 Absence of Shared Memory
5.3 Lamport's Logical Clocks
5.3.1 A Limitation of Lamport's Clocks
5.4 Vector Clocks
5.5 Causal Ordering of Messages
5.6 Global State
5.6.1 Chandy-Lamport's Global State Recording Algorithm
5.7 Cuts of a Distributed Computation
5.8 Termination Detection
5.9 Summary
5.10 Further Reading
Problems
References
6 Distributed Mutual Exclusion
6.1 Introduction
6.2 The Classification of Mutual Exclusion Algorithms
6.3 Preliminaries
6.3.1 Requirements of Mutual Exclusion Algorithms
6.3.2 How to Measure Performance
6.4 A Simple Solution to Distributed Mutual Exclusion
6.5 Non-Token-Based Algorithms
6.6 Lamport's Algorithm
6.7 The Ricart-Agrawala Algorithm
6.8 Maekawa's Algorithm
6.9 A Generalized Non-Token-Based Algorithm
6.9.1 Information Structures
6.9.2 The Generalized Algorithm
6.9.3 Static versus Dynamic Information Structures
6.10 Token-Based Algorithms
6.11 Suzuki-Kasami's Broadcast Algorithm
6.12 Singhal's Heuristic Algorithm
6.13 Raymond's Tree-Based Algorithm
6.14 A Comparative Performance Analysis
6.14.1 Response Time
6.14.2 Synchronization Delay
6.14.3 Message Traffic
6.14.4 Universal Performance Bounds
6.15 Summary
6.16 Further Reading
Problems
References
7 Distributed Deadlock Detection
7.1 Introduction
7.2 Preliminaries
7.2.1 The System Model
7.2.2 Resource versus Communication Deadlocks
7.2.3 A Graph-Theoretic Model
7.3 Deadlock Handling Strategies in Distributed Systems
7.3.1 Deadlock Prevention
7.3.2 Deadlock Avoidance
7.3.3 Deadlock Detection
7.4 Issues in Deadlock Detection and Resolution
7.5 Control Organizations for Distributed Deadlock Detection
7.5.1 Centralized Control
7.5.2 Distributed Control
7.5.3 Hierarchical Control
7.6 Centralized Deadlock-Detection Algorithms
7.6.1 The Completely Centralized Algorithm
7.6.2 The Ho-Ramamoorthy Algorithms
7.7 Distributed Deadlock Detection Algorithms
7.7.1 A Path-Pushing Algorithm
7.7.2 An Edge-Chasing Algorithm
7.7.3 A Diffusion Computation Based Algorithm
7.7.4 A Global State Detection Based Algorithm
7.8 Hierarchical Deadlock Detection Algorithms
7.8.1 The Menasce-Muntz Algorithm
7.8.2 The Ho-Ramamoorthy Algorithm
7.9 Perspective
7.10 Summary
7.11 Further Reading
Problems
References
8 Agreement Protocols
8.1 Introduction
8.2 The System Model
8.2.1 Synchronous versus Asynchronous Computations
8.2.2 Model of Processor Failures
8.2.3 Authenticated versus Non-Authenticated Messages
8.2.4 Performance Aspects
8.3 A Classification of Agreement Problems
8.3.1 The Byzantine Agreement Problem
8.3.2 The Consensus Problem
8.3.3 The Interactive Consistency Problem
8.3.4 Relations Among the Agreement Problems
8.4 Solutions to the Byzantine Agreement Problem
8.4.1 The Upper Bound on the Number of Faulty Processors
8.4.2 An Impossibility Result
8.4.3 Lamport-Shostak-Pease Algorithm
8.4.4 Dolev et al.'s Algorithm
8.5 Applications of Agreement Algorithms
8.5.1 Fault-Tolerant Clock Synchronization
8.5.2 Atomic Commit in DDBS
8.6 Summary
8.7 Further Reading
Problems
References
Part III Distributed Resource Management
9 Distributed File Systems
9.1 Introduction
9.2 Architecture
9.3 Mechanisms for Building Distributed File Systems
9.3.1 Mounting
9.3.2 Caching
9.3.3 Hints
9.3.4 Bulk Data Transfer
9.3.5 Encryption
9.4 Design Issues
9.4.1 Naming and Name Resolution
9.4.2 Caches on Disk or Main Memory
9.4.3 Writing Policy
9.4.4 Cache Consistency
9.4.5 Availability
9.4.6 Scalability
9.4.7 Semantics
9.5 Case Studies
9.5.1 The Sun Network File System
9.5.2 The Sprite File System
9.5.3 Apollo DOMAIN Distributed File System
9.5.4 Coda
9.5.5 The x-Kernel Logical File System
9.6 Log-Structured File Systems
9.6.1 Disk Space Management
9.7 Summary
9.8 Further Readings
Problems
References
10 Distributed Shared Memory
10.1 Introduction
10.2 Architecture and Motivation
10.3 Algorithms for Implementing DSM
10.3.1 The Central-Server Algorithm
10.3.2 The Migration Algorithm
10.3.3 The Read-Replication Algorithm
10.3.4 The Full-Replication Algorithm
10.4 Memory Coherence
10.5 Coherence Protocols
10.5.1 Cache Coherence in the PLUS System
10.5.2 Unifying Synchronization and Data Transfer in Clouds
10.5.3 Type-Specific Memory Coherence in the Munin System
10.6 Design Issues
10.6.1 Granularity
10.6.2 Page Replacement
10.7 Case Studies
10.7.1 IVY
10.7.2 Mirage
10.7.3 Clouds
10.8 Summary
10.9 Further Reading
Problems
References
11 Distributed Scheduling
11.1 Introduction
11.2 Motivation
11.3 Issues in Load Distributing
11.3.1 Load
11.3.2 Classification of Load Distributing Algorithms
11.3.3 Load Balancing versus Load Sharing
11.3.4 Preemptive versus Nonpreemptive Transfers
11.4 Components of a Load Distributing Algorithm
11.4.1 Transfer Policy
11.4.2 Selection Policy
11.4.3 Location Policy
11.4.4 Information Policy
11.5 Stability
11.5.1 The Queuing-Theoretic Perspective
11.5.2 The Algorithmic Perspective
11.6 Load Distributing Algorithms
11.6.1 Sender-Initiated Algorithms
11.6.2 Receiver-Initiated Algorithms
11.6.3 Symmetrically Initiated Algorithms
11.6.4 Adaptive Algorithms
11.7 Performance Comparison
11.7.1 Receiver-initiated versus Sender-initiated Load Sharing
11.7.2 Symmetrically Initiated Load Sharing
11.7.3 Stable Load Sharing Algorithms
11.7.4 Performance Under Heterogeneous Workloads
11.8 Selecting a Suitable Load Sharing Algorithm
11.9 Requirements for Load Distributing
11.10 Load Sharing Policies: Case Studies
11.10.1 The V-System
11.10.2 The Sprite System
11.10.3 Condor
11.10.4 The Stealth Distributed Scheduler
11.11 Task Migration
11.12 Issues in Task Migration
11.12.1 State Transfer
11.12.2 Location Transparency
11.12.3 Structure of a Migration Mechanism
11.12.4 Performance
11.13 Summary
11.14 Further Reading
Problems
References
Part IV Failure Recovery and Fault Tolerance
12 Recovery
12.1 Introduction
12.2 Basic Concepts
12.3 Classification of Failures
12.4 Backward and Forward Error Recovery
12.5 Backward-Error Recovery: Basic Approaches
12.5.1 The Operation-Based Approach
12.5.2 The State-Based Approach
12.6 Recovery in Concurrent Systems
12.6.1 Orphan Messages and the Domino Effect
12.6.2 Lost Messages
12.6.3 Problem of Livelocks
12.7 Consistent Set of Checkpoints
12.7.1 A Simple Method for Taking a Consistent Set of Checkpoints
12.8 Synchronous Checkpointing and Recovery
12.8.1 The Checkpoint Algorithm
12.8.2 The Rollback Recovery Algorithm
12.9 Asynchronous Checkpointing and Recovery
12.9.1 A Scheme for Asynchronous Checkpointing and Recovery
12.10 Checkpointing for Distributed Database Systems
12.10.1 An Algorithm for Checkpointing in a DDBS
12.11 Recovery in Replicated Distributed Database Systems
12.11.1 An Algorithm for Site Recovery
12.12 Summary
12.13 Further Readings
Problems
References
13 Fault Tolerance
13.1 Introduction
13.2 Issues
13.3 Atomic Actions and Committing
13.4 Commit Protocols
13.4.1 The Two-Phase Commit Protocol
13.5 Nonblocking Commit Protocols
13.5.1 Basic Idea
13.5.2 The Nonblocking Commit Protocol for Single Site Failure
13.5.3 Multiple Site Failures and Network Partitioning
13.6 Voting Protocols
13.6.1 Static Voting
13.7 Dynamic Voting Protocols
13.8 The Majority Based Dynamic Voting Protocol
13.9 Dynamic Vote Reassignment Protocols
13.9.1 Autonomous Vote Reassignment
13.9.2 Vote Increasing Policies
13.9.3 Balance of Voting Power
13.10 Failure Resilient Processes
13.10.1 Backup Processes
13.10.2 Replicated Execution
13.11 Reliable Communication
13.11.1 Atomic Broadcast
13.12 Case Studies
13.12.1 Targon/32: Fault Tolerance Under UNIX
13.13 Summary
13.14 Further Reading
Problems
References
Part V Protection and Security
14 Resource Security and Protection: Access and Flow Control
14.1 Introduction
14.2 Preliminaries
14.2.1 Potential Security Violations
14.2.2 External versus Internal Security
14.2.3 Policies and Mechanisms
14.2.4 Protection Domain
14.2.5 Design Principles for Secure Systems
14.3 The Access Matrix Model
14.4 Implementations of Access Matrix
14.4.1 Capabilities
14.4.2 The Access Control List Method
14.4.3 The Lock-Key Method
14.5 Safety in the Access Matrix Model
14.5.1 Changing the Protection State
14.5.2 Safety in the Access Matrix Model
14.6 Advanced Models of Protection
14.6.1 The Take-Grant Model
14.6.2 Bell-LaPadula Model
14.6.3 Lattice Model of Information Flow
14.7 Case Studies
14.7.1 The UNIX Operating System
14.7.2 The Hydra Kernel
14.7.3 Amoeba
14.7.4 Andrew
14.8 Summary
14.9 Further Reading
Problems
References
15 Data Security: Cryptography
15.1 Introduction
15.2 A Model of Cryptography
15.2.1 Terms and Definitions
15.2.2 A Model of Cryptographic Systems
15.2.3 A Classification of Cryptographic Systems
15.3 Conventional Cryptography
15.4 Modern Cryptography
15.5 Private Key Cryptography: Data Encryption Standard
15.5.1 Data Encryption Standard (DES)
15.5.2 Cipher Block Chaining
15.6 Public Key Cryptography
15.6.1 Implementation Issues
15.6.2 The Rivest-Shamir-Adleman Method
15.6.3 Signing Messages
15.7 Multiple Encryption
15.8 Authentication in Distributed Systems
15.8.1 Authentication Servers
15.8.2 Establishing Interactive Connections
15.8.3 Performing One-Way Communication
15.8.4 Digital Signatures
15.9 Case Study: The Kerberos System
15.9.1 Phase I: Getting the Initial Ticket
15.9.2 Phase II: Getting Server Tickets
15.9.3 Phase III: Requesting the Service
15.10 Summary
15.11 Further Readings
Problems
References
Part VI Multiprocessor Operating Systems
16 Multiprocessor System Architectures
16.1 Introduction
16.2 Motivations for Multiprocessor Systems
16.3 Basic Multiprocessor System Architectures
16.3.1 Tightly Coupled versus Loosely Coupled Systems
16.3.2 UMA versus NUMA versus NORMA Architectures
16.4 Interconnection Networks for Multiprocessor Systems
16.4.1 Bus
16.4.2 Cross-Bar Switch
16.4.3 Multistage Interconnection Network
16.5 Caching
16.5.1 The Cache Coherency Problem
16.6 Hypercube Architectures
16.7 Summary
16.8 Further Reading
References
17 Multiprocessor Operating Systems
17.1 Introduction
17.2 Structures of Multiprocessor Operating Systems
17.3 Operating System Design Issues
17.4 Threads
17.4.1 User-Level Threads
17.4.2 Kernel-Level Threads
17.4.3 First-Class Threads
17.4.4 Scheduler Activations
17.5 Process Synchronization
17.5.1 Issues in Process Synchronization
17.5.2 The Test-and-Set Instruction
17.5.3 The Swap Instruction
17.5.4 The Fetch-and-Add Instruction of the Ultracomputer
17.5.5 SLIC Chip of the Sequent
17.5.6 Implementation of Process Wait
17.5.7 The Compare-and-Swap Instruction
17.6 Processor Scheduling
17.6.1 Issues in Processor Scheduling
17.6.2 Coscheduling of the Medusa OS
17.6.3 Smart Scheduling
17.6.4 Scheduling in the NYU Ultracomputer
17.6.5 Affinity Based Scheduling
17.6.6 Scheduling in the Mach Operating System
17.7 Memory Management: The Mach Operating System
17.7.1 Design Issues
17.7.2 The Mach Kernel
17.7.3 Task Address Space
17.7.4 Memory Protection
17.7.5 Machine Independence
17.7.6 Memory Sharing
17.7.7 Efficiency Considerations
17.7.8 Implementation: Data Structures and Algorithms
17.7.9 Sharing of Memory Objects
17.8 Reliability/Fault Tolerance: The Sequoia System
17.8.1 Design Issues
17.8.2 The Sequoia Architecture
17.8.3 Fault Detection
17.8.4 Fault Recovery
17.9 Summary
17.10 Further Reading
Problems
References
Part VII Database Operating Systems
18 Introduction to Database Operating Systems
18.1 Introduction
18.2 What Is Different?
18.3 Requirements of a Database Operating System
18.4 Further Reading
References
19 Concurrency Control: Theoretical Aspects
19.1 Introduction
19.2 Database Systems
19.2.1 Transactions
19.2.2 Conflicts
19.2.3 Transaction Processing
19.3 A Concurrency Control Model of Database Systems
19.4 The Problem of Concurrency Control
19.4.1 Inconsistent Retrieval
19.4.2 Inconsistent Update
19.5 Serializability Theory
19.5.1 Logs
19.5.2 Serial Logs
19.5.3 Log Equivalence
19.5.4 Serializable Logs
19.5.5 The Serializability Theorem
19.6 Distributed Database Systems
19.6.1 Transaction Processing Model
19.6.2 Serializability Condition in DDBS
19.6.3 Data Replication
19.6.4 Complications due to Data Replication
19.6.5 Fully-Replicated Database Systems
19.7 Summary
19.8 Further Reading
Problems
References
20 Concurrency Control Algorithms
20.1 Introduction
20.2 Basic Synchronization Primitives
20.2.1 Locks
20.2.2 Timestamps
20.3 Lock Based Algorithms
20.3.1 Static Locking
20.3.2 Two-Phase Locking (2PL)
20.3.3 Problems with 2PL: Price for Higher Concurrency
20.3.4 2PL in DDBS
20.3.5 Timestamp-Based Locking
20.3.6 Non-Two-Phase Locking
20.4 Timestamp Based Algorithms
20.4.1 Basic Timestamp Ordering Algorithm
20.4.2 Thomas Write Rule (TWR)
20.4.3 Multiversion Timestamp Ordering Algorithm
20.4.4 Conservative Timestamp Ordering Algorithm
20.5 Optimistic Algorithms
20.5.1 Kung-Robinson Algorithm
20.6 Concurrency Control Algorithms: Data Replication
20.6.1 Completely Centralized Algorithm
20.6.2 Centralized Locking Algorithm
20.6.3 INGRES' Primary-Site Locking Algorithm
20.6.4 Two-Phase Locking Algorithm
20.7 Summary
20.8 Further Reading
Problems
References
