In practice, we can compare algorithms obtained by our method with those based on timeouts. The most striking difference is that a timeout-based algorithm is a traditional distributed algorithm whose processes operate asynchronously, whereas our method yields a globally synchronous one: every process does the same thing at (almost) the same time. At first sight, this seems to contradict the whole idea of distributed processing, which is to let different processes operate independently and perform different functions. However, if a distributed system is to behave as a single system, then its processes must somehow be synchronized, and having all processes do the same thing at the same time is, in principle, the simplest way to synchronize them. Following this reasoning, our method introduces a kernel that performs the necessary synchronization. Processes need spend only a small fraction of their time executing this synchronizing kernel; the rest of the time they can act independently, for example by accessing different files, while the kernel guarantees, say, that two processes never try to modify the same file at the same time. We advocate this approach even when fault-tolerance is not required. Its underlying simplicity makes it easier to state the precise properties of a system, which is essential if one really wants to know just how fault-tolerant the system is [L. Lamport (1984)].
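The idea of a replicated synchronizing kernel can be sketched as follows. This is a minimal illustration, not Lamport's actual algorithm: every process holds its own copy of the kernel state and executes the same deterministic kernel step in lock-step, so all copies agree on which process may modify which file in each round. The names (`Kernel`, `owner_of`, `step`) are illustrative assumptions, not identifiers from the source.

```python
class Kernel:
    """Replicated synchronizing kernel, executed in lock-step by every process."""

    def __init__(self, num_processes, files):
        self.n = num_processes
        self.files = list(files)
        self.round = 0  # advances in lock-step at every process

    def owner_of(self, file):
        # Deterministic rule: file ownership rotates with the round number.
        # Because every replica applies the same rule to the same state,
        # no two processes ever believe they both own a file.
        return (self.files.index(file) + self.round) % self.n

    def step(self):
        # The "small fraction of time" spent in the kernel: advance the round.
        self.round += 1


# Three processes, each with its own copy of the kernel.
kernels = [Kernel(3, ["a.txt", "b.txt"]) for _ in range(3)]

for _ in range(5):  # five synchronized rounds
    owners = [{f: k.owner_of(f) for f in k.files} for k in kernels]
    # All replicas agree on ownership, so mutual exclusion on each file holds.
    assert all(o == owners[0] for o in owners)
    for k in kernels:
        k.step()  # every process takes the same kernel step "at the same time"
```

Outside the kernel step, each process is free to do unrelated work on the files it currently owns; only the small, shared ownership rule has to run synchronously.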