Concurrency
This week we entered the realm of multithreaded programs. Each process can be split up into threads, which execute inside the same address space as the process. Every thread gets its own stack so that its execution state can be independent of the other threads, and it also gets its own thread control block and thread ID. The process begins execution on a single thread, called the main thread. The purpose of multithreading is to take advantage of parallelism, which allows a process to split a task into pieces and run them all at the same time (ideally simultaneously on multiple CPUs).
One of the biggest issues with threads is concurrent access to shared data. For instance, if multiple threads all operate on the same data structure, their operations may interleave and the results become indeterminate. In the case of a counter, an increment is really a read-modify-write sequence, so multiple threads updating and reading the counter at the same time can lose updates. To solve this issue, the concept of locking is used. Before entering a critical section (the code that accesses shared data), a thread must acquire a lock. If another thread holds the lock, the thread must wait. One way for a thread to wait is to spin (spin-wait), but that wastes CPU time. A better method is for the thread to sleep while waiting for a signal from another thread to wake it up, at which point it can acquire the lock.