Deadlock example

The operating system is the software that manages how the CPU interacts with the available devices and resources (such as memory and storage) to perform a required task, such as sending an email or printing a document. The applications on your PC, Mac, or mobile phone all work the same way: each needs the CPU and some set of resources to perform a specific task. Regardless of how many CPUs or cores (physical or logical) a machine has, that number is far smaller than the number of applications that need them. One of the best-known symptoms of a system-level deadlock or crash was the Windows "Blue Screen of Death" (BSOD).

The process lifecycle usually goes into the following states:

  • Running: the process is executing a task on the CPU.
  • Waiting: the process is not on the CPU and is not doing anything; it is waiting for a resource to become available, such as storage or the CPU itself.
  • Finished: the process has completed its task, or the user or the operating system has killed it.

The relationship between the CPU, its processes, and the operating system is governed by rules that aim for optimal performance and a smooth user experience. In other words, the system tries to keep applications from freezing or crashing.

Running applications constantly compete for a CPU slot. When a process is not using the CPU, it goes into the "waiting" state, and it returns to the "running" state when it is scheduled onto the CPU again.

Deadlock can be defined as the permanent blocking of a set of processes that compete for resources or communicate with each other. Deadlocks are only a concern in multiprogramming environments; they do not arise in pure batch processing, where jobs are taken from a queue and run on the processor sequentially, one at a time.

Operating systems rely on a technique that divides time into short intervals, with each process restricted to a single interval at a time. Rapidly switching between processes creates the illusion that they all run concurrently, even though at any given instant only one process (application) is actually using a given CPU core.

The operating system handles deadlock only for its own processes; application developers are still required to consider process management and deadlock scenarios, especially when developing multithreaded applications.

Processes may use any available resource, including CPU cores, memory, devices, files, and other I/O ports. A process uses a resource as follows: first, the process requests the resource; if it is not available, the process enters the wait state. If the resource is available, it is immediately granted to the process, and no other process can use it. Finally, once the process is done with the resource, it releases it, making it available to other jobs.
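This request/grant/release cycle can be sketched with a mutual-exclusion lock in Python; the printer resource and job names here are hypothetical illustrations, not part of any real printing API:

```python
import threading

printer = threading.Lock()   # hypothetical single-instance resource
completed = []

def use_printer(job):
    # Request: acquire() blocks (the "wait" state) until the resource is free.
    printer.acquire()
    try:
        completed.append(job)    # resource granted exclusively to this job
    finally:
        printer.release()        # release: resource is available to other jobs

threads = [threading.Thread(target=use_printer, args=(j,)) for j in ("doc1", "doc2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(completed))   # ['doc1', 'doc2'] - both jobs eventually ran
```

Whichever thread acquires the lock second simply waits; it is granted the printer as soon as the first job releases it.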

For example, suppose process "A" is using the CD-ROM drive and requests another resource, say a printer. If another process "B" is already using the printer, process "A" must wait until the printer becomes available. If process "B" then requests the CD-ROM drive, which process "A" still holds, it must wait as well; now both processes are waiting for each other to release a resource. That is a typical deadlock.

Most researchers agree that a deadlock requires all of the following conditions; if any one of them is not met, the situation may not be a deadlock. The main requirements are:

  • Mutual exclusion: the resource cannot be shared between processes; only one process can use the resource at a time, and any other process requesting it must wait until it is released.
  • Hold and wait: a process is holding one or more resources while waiting for another resource that a different process is holding.
  • No pre-emption: a resource cannot be forcibly taken from a process; the process keeps it until it is finished and willingly gives it back.
  • Circular wait: a consequence of hold and wait in which processes depend on each other in a circular chain, each waiting for a resource held by the next.

In general, operating systems and software engineers prevent deadlocks by eliminating any one of the previous deadlock conditions:

  • Mutual exclusion: some resources, such as files, can allow multiple processes read-only access while remaining exclusive for write operations; deadlock can still occur when more than one process requires write permission.
  • Converting a non-shareable resource into a shareable one: the printer spooler is an excellent example of how modern operating systems solved this problem. Instead of calling the printer directly, a process that wants to print asks the operating system for the service, and a device driver acts as a virtual printer for the application. Multiple applications and users can then use the same printer, with jobs queued one after another.
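Another common engineering technique along these lines, one that eliminates the circular-wait condition, is to impose a fixed global order on resources and always acquire them in that order. A minimal sketch, with hypothetical resource names:

```python
import threading

# Hypothetical resources, each assigned a fixed rank in a global order.
cdrom = threading.Lock()
printer = threading.Lock()
rank = {id(cdrom): 0, id(printer): 1}

def acquire_in_order(*locks):
    # Always lock in ascending rank: no process can hold a high-ranked
    # resource while waiting for a lower-ranked one, so no cycle can form.
    ordered = sorted(locks, key=lambda lock: rank[id(lock)])
    for lock in ordered:
        lock.acquire()
    return ordered

def release_all(locks):
    for lock in reversed(locks):
        lock.release()

done = []

def job(name, *resources):
    held = acquire_in_order(*resources)
    done.append(name)            # critical section using both resources
    release_all(held)

a = threading.Thread(target=job, args=("A", cdrom, printer))
b = threading.Thread(target=job, args=("B", printer, cdrom))  # opposite request order
a.start(); b.start()
a.join(); b.join()
print(sorted(done))   # both jobs complete without deadlock: ['A', 'B']
```

Even though the two jobs request the resources in opposite orders, both end up locking the CD-ROM before the printer, so the circular dependency from the earlier example can never form.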

Depending on the application, the developer, and the running task, users have several options to resolve a deadlock. The standard method when an application freezes is to force-kill the non-responsive process, which causes all unsaved work to be lost. Another option for advanced users is to manually give the process a higher priority so that the operating system grants it more CPU slices; this helps when the process is heavy and requires complex computations.

Software application developers have also stepped in, offering solutions and workarounds for deadlocks beyond those provided by the operating system. Examples include the auto-save and file-recovery features in Microsoft Office applications, and the process-per-tab model in web browsers: if a website's script causes a page to freeze, the browser can kill only the affected tab while keeping all the others, because each tab is a separate process.

Technically, solving deadlock has long been a research field aimed at finding solutions for different scenarios related to resource-allocation optimization. For the operating system to avoid deadlocks, it must have additional information about each process, namely which resources each process will use, and in what order, known in advance. With that knowledge, the system can make better decisions about whether an operation should wait or proceed in order to avoid future deadlocks. For each request, the system must consider the resources currently available, those currently allocated to each process, and future requests.
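The avoidance approach described here is essentially the classic Banker's algorithm safety check: given each process's maximum future needs, the system only grants a request if some order still exists in which every process can run to completion. A minimal sketch, with made-up example numbers:

```python
def is_safe(available, max_need, allocated):
    """Return True if every process can still run to completion in some order."""
    n = len(max_need)        # number of processes
    m = len(available)       # number of resource types
    # What each process may still request in the future.
    need = [[max_need[i][j] - allocated[i][j] for j in range(m)] for i in range(n)]
    work = list(available)
    finished = [False] * n
    made_progress = True
    while made_progress:
        made_progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Process i can finish with what is available now, and then
                # it returns everything it currently holds.
                for j in range(m):
                    work[j] += allocated[i][j]
                finished[i] = True
                made_progress = True
    return all(finished)

# Made-up example: 5 processes, 3 resource types.
available = [3, 3, 2]
allocated = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_need  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe(available, max_need, allocated))   # True: a safe order exists
print(is_safe([0, 0, 0], max_need, allocated))   # False: no process can finish
```

Before granting any request, the system would tentatively apply it and run this check; if the resulting state is unsafe, the requesting process is made to wait instead.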