Brendan Ang


Last updated Nov 8, 2022

# I/O Subsystem

# I/O Hardware

Hardware communication between I/O devices and the CPU takes place over buses, with each device attached through a device controller.

# Memory Mapped IO

Memory-mapped I/O uses the same address space to address both main memory and I/O devices. The memory and registers of the I/O devices are mapped to address values, so a memory address may refer to either a portion of physical RAM or to memory and registers of the I/O device. For example, the VGA text buffer is accessed through the memory address 0xb8000: this address is not mapped to RAM but to memory on the VGA device.

# Port Mapped IO

Port-mapped I/O uses a separate I/O bus for communication. Each connected peripheral has one or more port numbers. To communicate with such an I/O port, there are special CPU instructions, `in` and `out`, which take a port number and a data byte.
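A bare-metal sketch of wrappers around these instructions, assuming x86 and GCC-style inline assembly. These run only in ring 0, so this is not runnable as an ordinary program; port 0x60 (the PS/2 keyboard data port) is chosen purely as an illustrative example.

```c
#include <stdint.h>

/* Write one byte to an I/O port via the x86 `out` instruction. */
static inline void outb(uint16_t port, uint8_t val) {
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Read one byte from an I/O port via the x86 `in` instruction. */
static inline uint8_t inb(uint16_t port) {
    uint8_t val;
    __asm__ volatile ("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

/* e.g. in a keyboard interrupt handler:
   uint8_t scancode = inb(0x60); */
```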

# Kernel I/O Subsystem

# I/O Scheduling

The OS is responsible for using hardware efficiently. It schedules I/O requests by rearranging the order in which they are serviced, e.g. reordering disk requests to reduce head movement.

# Buffering

Store data in memory while it is transferred between devices, to handle device speed mismatch (e.g. a fast CPU and a slow disk) and transfer-size mismatch (e.g. coalescing many small writes into one device-sized block).

# Caching

Keep copies of data in faster storage to improve efficiency for files that are being written and reread rapidly.

# Spooling

Store the output for a device until a separate device can serve it, e.g. for devices that can serve only one request at a time, such as printers.

# Performance

I/O operations carry significant overhead: context switches, interrupt handling, and copying data between kernel and user buffers.

# Asynchronous I/O

> [!note] Non-blocking I/O
> The process remains in the running state (not the ready state!).

# Double buffer

Use two buffers so that one can be processed while the other is being filled by the device, overlapping computation with I/O.

# Practice Problems

a. False. Buffers are used to support different device transfer speeds; what is described is the role of a cache.

b. False. A non-blocking I/O call puts the process back in the ready or running state.

c. False.

a. Double buffering and asynchronous I/O:

```
buffer2 <- async read block;
while (not end of file) {
	wait for I/O on buffer2 to complete;
	buffer1 <- buffer2;
	buffer2 <- async read block;
	process buffer1;
}
```

b. Contiguous file allocation is best, since the data being read is the entire file. If the file is stored contiguously, the time the disk needs to access the data is minimised.

a. Not necessarily: if requests are issued one at a time, the disk driver has no opportunity for SCAN optimisation (SCAN degenerates to FCFS). This can be solved by generating I/O requests concurrently.

b. Under light load, the overhead of scheduling might exceed the average seek time. The performance of a disk scheduling algorithm can also depend on the file allocation method used. For example, SCAN on a file stored with linked allocation would perform poorly: in linked allocation the data blocks are scattered across different sectors of the disk, so a complete file access might have to wait for the disk arm to sweep from one end to the other.

Seek order: 4, 10, 23, 35, 35, 40, 45, 50, 70, 132

$$
\begin{align}
&\text{Single cylinder seek time}=20.1\ \text{ms}\\
&\text{Total seek time}=(4+6+13+12+5+5+5+20+52)\times20.1=2452.2\ \text{ms}\\
&\text{Total rotational latency and transfer time}=(8+2)\times10=100\ \text{ms}\\
&\text{Average time}=2452.2+100=2552.2\ \text{ms}
\end{align}
$$