Operating system: Process

Preamble

I am learning about distributed systems. This is a very interesting and (for me) difficult area. Most computer systems today are built as distributed systems. One of the prerequisites for taking a distributed systems course is understanding the operating system. This is reasonable, because the problems of a distributed system are closely related to the problems of an operating system, except that the resources are dispersed across multiple machines. For example, a MapReduce system needs to coordinate the use of resources (computation and storage), just as the scheduler coordinates processes in the OS. A MapReduce system also needs a file system to store its results, and building a distributed file system has a lot in common with building the file system in an OS. A distributed system must also resolve resource contention between nodes, just like contention over data and resources between threads in an OS, and the usual solution is to build an effective locking mechanism.

This article is the first in a series of articles about the OS that I want to introduce. The content is summarized mainly from Operating System Concepts Essentials, Second Edition, mostly from chapter 3 (Processes) and chapter 5 (Scheduling).

More than a year ago I published an article about the Unix process. In this article, I want to focus on two questions: how the operating system represents a process, and how processes talk to each other.

How does the OS organize processes?

Each process is created by a fork call from a parent process. Processes are independent entities: each process has its own memory space and execution context. An executable program can be run multiple times, creating many different processes. Because processes are isolated from one another, if one process crashes it does not affect the others (does this remind you of the resque library?).
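
A minimal sketch of this, assuming a POSIX system: the child modifies a variable, but the parent still sees the old value, because fork() gives the child its own copy of the parent's memory.

    /* Sketch: fork() creates a child with its own copy of the parent's memory. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int counter = 0;
        pid_t pid = fork();

        if (pid == 0) {            /* child process */
            counter = 100;         /* changes only the child's copy */
            printf("child : counter=%d, pid=%d\n", counter, getpid());
            return 0;
        }
        wait(NULL);                /* parent waits for the child to finish */
        printf("parent: counter=%d, pid=%d\n", counter, getpid());  /* still 0 */
        return 0;
    }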

An operating system that allows multiple processes to run at the same time is called a multitasking operating system. Today's operating systems, from mobile to desktop to server, are all multitasking. A multitasking OS lets many processes run together: on your laptop right now, you can have Slack open in the #hardcore room while Chrome is open to read this article. However, each computer has a CPU with a finite number of cores, so in order to divide CPU time among processes, the operating system needs a mechanism to coordinate them. The coordination algorithm is called the scheduling algorithm, and the program in the OS that does the coordinating is called the scheduler.

So how does the OS organize processes? Each process is represented by a PCB (Process Control Block) in the OS kernel. A PCB is a data structure containing the following information (a rough sketch in C follows the list):

  • the process id
  • the state of the process (newly created, ready to run, waiting, …)
  • scheduling information for this process (e.g. process priority, pointers into the scheduling queues, …)
  • information about the process's memory space (used to map it onto the machine's physical memory; processes are isolated from each other, so the OS needs to store this mapping)
  • the program counter of the process
  • the CPU registers of this process
  • the file descriptors the process is using
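
The sketch below is only a hypothetical illustration in C of what such a structure might contain; the field names are made up, and the real Linux equivalent (task_struct in the kernel sources) is far more detailed.

    /* Hypothetical sketch of a PCB; field names are illustrative only. */
    struct pcb {
        int         pid;              /* process id */
        int         state;            /* NEW, READY, RUNNING, WAITING, ... */
        int         priority;         /* scheduling information */
        struct pcb *next_in_queue;    /* link into a scheduling queue */
        void       *address_space;    /* memory-space / address translation info */
        void       *program_counter;  /* where to resume execution */
        long        registers[32];    /* saved CPU register values */
        int         open_fds[64];     /* file descriptors the process is using */
    };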

With the above information, when a parent process forks a child process, a PCB is created for the child. On Linux, since the child process uses the same open files as its parent, the OS copies the parent's file descriptor table into the child, so both point to the same open file descriptions.
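
As a small illustration of that descriptor sharing (again assuming POSIX; the file name is just an example): the parent opens a file, and after fork() the child writes through the same descriptor without opening anything itself.

    /* Sketch: a forked child inherits the parent's open file descriptors. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("shared.log", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                              /* child: same descriptor, same offset */
            dprintf(fd, "child  pid=%d\n", getpid());
            _exit(0);
        }
        dprintf(fd, "parent pid=%d\n", getpid());    /* parent writes too */
        wait(NULL);                                  /* both lines end up in shared.log */
        close(fd);
        return 0;
    }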

Coordinating processes involves stopping the current process, saving its state, selecting the next process to run, loading that process's state, and then running it. This procedure is called a context switch. A context switch partly depends on the hardware (for example, saving the registers the running process was using). A context switch is a relatively expensive operation (performed in the kernel itself). This is why you see so many articles arguing that handling requests with many threads on a Linux server performs worse than using an event loop (on Linux, a thread is also represented by a PCB, just like a process).
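
The kernel's context switch cannot be reproduced in a short user-space program, but the glibc ucontext API gives a rough analogy: swapcontext() saves the current registers and stack pointer and restores another saved context, which is conceptually what the scheduler does when it switches processes. This is only an analogy, not how the kernel actually switches.

    /* User-space analogy of a context switch using <ucontext.h>. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[64 * 1024];

    static void task(void) {
        printf("task: running\n");
        swapcontext(&task_ctx, &main_ctx);   /* save task state, resume main */
        printf("task: resumed after a 'context switch'\n");
    }

    int main(void) {
        getcontext(&task_ctx);
        task_ctx.uc_stack.ss_sp = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link = &main_ctx;        /* where to go when task() returns */
        makecontext(&task_ctx, task, 0);

        swapcontext(&main_ctx, &task_ctx);   /* "switch" into the task */
        printf("main: back in main\n");
        swapcontext(&main_ctx, &task_ctx);   /* switch into the task again */
        printf("main: task finished\n");
        return 0;
    }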

There are several types of schedulers. The short-term scheduler runs frequently: it picks processes directly from the ready queue maintained by the kernel and dispatches them onto the CPU. The long-term scheduler runs much less often; in batch-processing systems, or in a distributed system where the number of tasks is too large, part of the tasks is pushed down to storage, and the long-term scheduler periodically retrieves those tasks, loads them into memory, and hands them over to be scheduled.

There are two ways to design algorithms to coordinate processes:

  • cooperative : a process uses the CPU until it yields it. The scheduler then selects the next process in the queue to run. The yield happens when the process makes certain system calls, most obviously when it makes an I/O call. With this approach, the OS trusts the processes completely: a process runs until it gives up the CPU. Cooperative scheduling is quite easy to implement, but it can also leave processes without a fair share of the CPU.

(If you want to learn more about cooperative scheduling, you can read my article on how the asynq library uses coroutines to coordinate requests. A coroutine is essentially a way to implement a process in user space instead of kernel space.)

  • preemptive : the operating system no longer trusts the processes completely; instead it provides its own, fairer mechanism for coordination. A simple algorithm: each process is allowed to use the CPU for a fixed time slice; when the time is up, the OS takes the CPU away from that process and hands it to another one (a toy simulation of this idea is sketched after this list). Implementing preemptive scheduling runs into many practical problems. For instance, if two processes use the same resource there will be contention between them, and it has to be decided whether the OS or the processes themselves resolve it. Or, if a process is in the middle of a kernel system call, for example an I/O call that should put it on an I/O queue, but its time slice expires before it gets there, the kernel has to put the process on another queue instead, so the process ends up waiting in a queue anyway… I will stop the details of preemptive scheduling algorithms here and come back to them in the next article.
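
The sketch below is a toy, single-threaded simulation of the time-slice idea behind round-robin scheduling: each "process" has a made-up amount of CPU work left, runs for at most one quantum, and then waits for its next turn. Real preemption relies on timer interrupts, which this simulation does not model.

    /* Toy simulation of preemptive round-robin scheduling with a fixed quantum. */
    #include <stdio.h>

    int main(void) {
        int remaining[3] = {5, 3, 8};   /* made-up CPU time left for P0, P1, P2 */
        const int quantum = 2;          /* time slice per turn */
        int done = 0;

        while (done < 3) {
            for (int p = 0; p < 3; p++) {
                if (remaining[p] <= 0) continue;            /* already finished */
                int slice = remaining[p] < quantum ? remaining[p] : quantum;
                remaining[p] -= slice;                      /* "run" the process */
                printf("run P%d for %d tick(s), %d left\n", p, slice, remaining[p]);
                if (remaining[p] == 0) done++;
            }
        }
        return 0;
    }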

Speaking of scheduling algorithms, I also want to leave you an open question: for languages with user-space process implementations, such as Go (with goroutines) or Erlang (with actor processes), what kind of scheduler do these languages need? Is it a cooperative or a preemptive scheduler?

Interaction between processes (IPC – inter-process communication)

When processes run together in an operating system, they need to interact with each other, or with processes on other machines (distributed systems are built on processes on different machines interacting with one another). There are two main mechanisms for two processes to interact:

shared memory

The processes share a common region of memory. Interaction between two processes then happens entirely through reads and writes on this shared memory. The OS does not get involved in the exchange, so shared memory is the fastest method (in terms of speed) for processes to talk to each other. However, the drawback is that the processes must manage the reading and writing of data themselves, and handle resource contention on their own.

What happens if both processes write to the shared memory at the same time? Data written later overwrites data written earlier, and the result may be corrupted. In addition, when using shared memory you must also avoid following pointers to data outside the shared region, because the memory spaces of the processes are isolated from each other.

Usually, I think the safest way to use shared memory is to have the parent process create the shared region and write data into it, while the child processes merely read data from it.
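
A minimal sketch of that pattern, assuming Linux/POSIX: the parent creates an anonymous shared mapping with mmap(MAP_SHARED | MAP_ANONYMOUS), writes into it, and the forked child only reads. No synchronization is shown; a real program would need it if both sides wrote.

    /* Sketch: parent writes to a shared mapping, child only reads it. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (shared == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(shared, "hello from parent");     /* parent writes first */

        if (fork() == 0) {                       /* child: reads only */
            printf("child read: %s\n", shared);
            _exit(0);
        }
        wait(NULL);
        munmap(shared, 4096);
        return 0;
    }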

message passing

With message passing, each process has its own mailbox (also called a port). Processes interact with each other by sending messages to another process's mailbox, and this is done with a system call.

Obviously message passing is slower than shared memory because the kernel is involved, but it is safer and more flexible. Depending on the operating system and the message passing mechanism, there are many ways to transmit messages.

For example, a pipe can pass messages between parent and child processes, and sockets can pass messages between processes in a distributed system.
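
A small sketch of the pipe case, assuming POSIX: the parent writes a message into one end of the pipe and the child reads it from the other end; the kernel carries the bytes, so nothing is shared directly.

    /* Sketch: message passing between parent and child over a pipe. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        if (pipe(fds) != 0) { perror("pipe"); return 1; }

        if (fork() == 0) {                         /* child: the receiver */
            close(fds[1]);                         /* close the unused write end */
            char buf[64];
            ssize_t n = read(fds[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
            _exit(0);
        }
        close(fds[0]);                             /* parent: the sender */
        write(fds[1], "ping", strlen("ping"));
        close(fds[1]);
        wait(NULL);
        return 0;
    }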

When transmitting messages between two processes on two different machines, one thing is worth noting: how does the sender know whether the message has reached the receiver? And if it cannot know, what should the sender do?

There are two common tricks for the sender:

  • at least once : the sender keeps resending the message until the receiver returns an ACK for it.
  • at most once : since the sender may send the same message many times, how does the receiver make sure it handles each message only once? To ensure this, each message needs an id, and the receiver needs to keep a record of the messages it has already processed. Storing every message forever would cost a corresponding amount of resources (memory and processing time), so an acceptable approach is for the receiver to keep only the set of message ids processed over a recent period of time and check whether an incoming message has already been handled (a small sketch follows this list).
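
The sketch below is a hypothetical, in-memory illustration of the receiver side of at-most-once delivery: each message carries an id, duplicates are re-acknowledged but not processed again, and the fixed "window" of remembered ids stands in for the time-based expiry described above.

    /* Sketch: at-most-once handling on the receiver, deduplicating by message id. */
    #include <stdbool.h>
    #include <stdio.h>

    #define WINDOW 128                       /* how many recent ids we remember */

    static long seen[WINDOW];
    static int  seen_count = 0;

    static bool already_processed(long msg_id) {
        for (int i = 0; i < seen_count; i++)
            if (seen[i] == msg_id) return true;
        return false;
    }

    static void handle_message(long msg_id, const char *payload) {
        if (already_processed(msg_id)) {
            printf("msg %ld: duplicate, re-ACK only\n", msg_id);
            return;
        }
        if (seen_count < WINDOW) seen[seen_count++] = msg_id;  /* remember the id */
        printf("msg %ld: processing '%s', then ACK\n", msg_id, payload);
    }

    int main(void) {
        handle_message(1, "hello");
        handle_message(1, "hello");          /* retransmission: not processed again */
        handle_message(2, "world");
        return 0;
    }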

Conclusion

A process is one of the basic units of both the operating system and a distributed system. This article covered two basic issues: how processes are scheduled, and how processes interact with each other in the operating system. Understanding processes in the OS will make it easier to learn how processes behave in a distributed system.
