Title
Progress guarantees, lock-free property, and scheduling assumptions
Status
resolved
Section
[intro.multithread][atomics.lockfree][atomics.types.operations.req]
Submitter
Torvald Riegel

Created on 2011-08-18; last changed 130 months ago

Messages

Date: 2014-02-16.00:00:00

[ 2014-02-16 Issaquah: Resolved by paper N3927 ]

Date: 2013-11-06.00:00:00

[ 2013-11-06: Jason Hearne-McGuiness comments ]

Related to this issue, the wording in [atomics.lockfree] p2,

In any given program execution, the result of the lock-free query shall be consistent for all pointers of the same type.

should be made clearer, because the object-specific (non-static) nature of the is_lock_free() functions in [atomics.types.generic] and [atomics.types.operations] implies that the returned value could differ between different instances of pointers of the same type.
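
A minimal sketch of the concern (the surrounding main() is illustrative only): is_lock_free() is queried per object, and nothing in the member-function interface itself prevents two pointers of the same type from reporting different answers; it is only [atomics.lockfree] p2 that requires the results to agree.

    #include <atomic>
    #include <iostream>

    int main() {
        std::atomic<int*> p1{nullptr};
        std::atomic<int*> p2{nullptr};
        // Two distinct objects of the same atomic pointer type.
        // [atomics.lockfree] p2 requires these per-object queries to agree
        // within one program execution, but the interface alone does not
        // make that obvious.
        std::cout << p1.is_lock_free() << ' ' << p2.is_lock_free() << '\n';
    }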

Date: 2012-11-02.22:48:46

[ 2012, Portland: move to Open ]

The current wording of [intro.multithread] p2 doesn't really say very much. As far as we can tell, the term lock-free is nowhere defined in the standard.

James: we would prefer a different way to phrase it.

Hans: the research literature includes the term obstruction-free, which might be a better fit.

Detlef: does POSIX define a meaning for blocking (or locking) that we could use?

Hans: things like compare-exchange-strong can wait indefinitely.

Niklas: what about spin-locks -- still making no progress.

Hans: suspect we can only give guidance, at best. The lock-free meaning from the theoretical community (forward progress will be made) is probably too strong here.

Artur: what about livelocks?

Hans: each atomic modification completes, even if the whole thing is blocked.

Moved to open.

Date: 2012-02-13.21:55:45

[ 2012, Kona ]

General direction: lock-free means obstruction-free. Leave the current "should" recommendation for progress. It would take a lot of effort to try to do better.

Date: 2011-12-01.00:00:00

[ 2011-12-01: Hans comments ]

[intro.multithread] p2 was an intentional compromise, and it was understood at the time that it was not a precise statement. The wording was introduced by N3209, which discusses some of the issues. There were additional reflector discussions.

This is somewhat separable from the question of what lock-free means, which is probably a more promising question to focus on.

Date: 2011-08-18.00:00:00

According to [intro.multithread] p2:

"Implementations should ensure that all unblocked threads eventually make progress."

  • If taken literally, this cannot be achieved with lock-free atomics in general, because they guarantee only that some thread makes progress (i.e., minimal progress, whereas [intro.multithread] p2 seems to require maximal progress).
  • What does it mean precisely to "make progress"? Does "unblocked threads" exclude live-locked threads (if so, lock-free atomics would be sufficient, I suppose)?
  • Which assumptions can an implementation make about thread scheduling? This is relevant for how implementations realize compare-exchange with load-linked/store-conditional (LL/SC), and atomic read-modify-write operations with load...compare-exchange-weak loops (see the sketch after this list).

    • Do threads run long enough without being descheduled (e.g., OS timeslices are long enough, interrupt frequency is not too high, etc.)?
    • Or is this implementation-defined, and the sentence is just about stating that the progress guarantees will not hold on, for example, systems with unfair scheduling or thread priorities?
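
To make the LL/SC question in the list above concrete, here is a minimal sketch (the function name and the memory orders are chosen for illustration) of an atomic read-modify-write implemented as a load...compare-exchange-weak loop:

    #include <atomic>

    // A read-modify-write built from a load / compare_exchange_weak loop.
    // On LL/SC hardware a failed iteration can be caused by interference from
    // other threads, or by the thread being descheduled between the
    // load-linked and the store-conditional, which is why the scheduling
    // assumptions matter for any progress guarantee.
    int fetch_multiply(std::atomic<int>& x, int factor) {
        int expected = x.load(std::memory_order_relaxed);
        while (!x.compare_exchange_weak(expected, expected * factor)) {
            // On failure, 'expected' has been reloaded with the current
            // value; simply retry.
        }
        return expected;  // value observed before the successful update
    }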

[atomics.lockfree] p2 declares the lock-free property for a particular object. However, "lock-free" is never defined, and in discussions I had with committee members it seemed as if the standard's lock-free would be different from what lock-free means in other communities (e.g., research, textbooks on concurrent programming, etc.).

  • In its original sense, lock-freedom for an object requires minimal progress (i.e., some thread makes progress, but other threads might never make any) without any assumptions about scheduling (threads may be stopped from executing at any point, which is why it is "nonblocking", and threads are not guaranteed to execute in isolation, even for very small intervals of cycles).
  • In contrast, obstruction-freedom, another nonblocking progress condition, guarantees progress for every thread that eventually executes long enough in isolation (i.e., without interference from other threads).
  • Simple load...compare-exchange-weak loops (or LL/SC loops) used to implement atomic read-modify-write operations can be merely obstruction-free rather than lock-free, because they can livelock (depending on the hardware's LL/SC implementation, though). However, they effectively guarantee the same as lock-free iff threads eventually run in isolation long enough (which can be an assumption about the OS scheduler), or if the implementation ensures this (e.g., probabilistically, by employing randomized exponential back-off when contention is detected, in all operations that can create contention).
  • Does the particular object have to be lock-free, or is it only required that threads make progress, irrespective of which object they operate on? Again considering compare-exchange-weak or LL/SC here: what happens if the compare-exchange object shares a cache line with an integer counter object that is constantly updated by other threads? The compare-exchange-weak can then always fail, so the object would not be lock-free. However, if we consider progress to be overall progress of threads, it would be lock-free, because other threads succeed in updating the integer counter (see the sketch after this list). I would have assumed the lock-free property is strictly about the atomic object, but in discussions with committee members it seemed as if progress on any object could be the intended guarantee.
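
A hedged sketch of the cache-line-sharing scenario from the last bullet (the struct, its members, and the reservation-granularity behaviour are illustrative assumptions, not requirements of the standard):

    #include <atomic>

    // 'flag' and 'counter' are small enough that an implementation may place
    // them on the same cache line. On LL/SC hardware whose reservation
    // granularity is a cache line, constant updates to 'counter' by other
    // threads can make the store-conditional behind compare_exchange_weak on
    // 'flag' fail over and over, even though no other thread ever modifies
    // 'flag' itself.
    struct SharedLine {
        std::atomic<int> flag{0};
        std::atomic<int> counter{0};  // constantly updated by other threads
    };

    bool try_raise_flag(SharedLine& s) {
        int expected = 0;
        // May keep failing in the scenario above: is 'flag' then lock-free?
        while (!s.flag.compare_exchange_weak(expected, 1)) {
            if (expected != 0) return false;  // some other thread set it first
        }
        return true;
    }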

According to [atomics.types.operations.req] p7, is_lock_free() returns "true if the object is lock-free". What is returned if the object is only sometimes lock-free?
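
For context on the "sometimes lock-free" case, a minimal sketch (probing one type; which level an implementation reports is of course implementation-specific): the ATOMIC_..._LOCK_FREE macros in [atomics.lockfree] already distinguish 0 (never), 1 (sometimes), and 2 (always) lock-free, while is_lock_free() gives only a per-object answer.

    #include <atomic>
    #include <iostream>

    int main() {
        // Per [atomics.lockfree]: 0 = never, 1 = sometimes, 2 = always
        // lock-free for the corresponding type.
        std::cout << "ATOMIC_INT_LOCK_FREE = " << ATOMIC_INT_LOCK_FREE << '\n';

        std::atomic<int> x{0};
        // Per-object answer; when the macro reports 1 ("sometimes"), this is
        // the only way to find out for a particular object, which is the
        // case the question above is about.
        std::cout << "x.is_lock_free() = " << x.is_lock_free() << '\n';
    }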

Basically, I would like to see clarifications of the progress guarantees, so that users know what they can expect from implementations (and what they cannot expect!), and so that implementors get a clearer understanding of which user expectations they have to meet.

  1. Elaborate on the intentions of the progress guarantee in [intro.multithread] p2. As I don't know about your intentions, it's hard to suggest a resolution.

    • Is it for straightforward, non-synchronizing code only?
    • Is it for blocking code only? (Does "unblocked" mean more than not being blocked on external I/O or in a deadlock?)
    • What does it mean to "make progress"?
    • Is this meant to only waive any progress guarantees if there are thread priorities?
    • Can an implementation make any assumptions about thread scheduling?
  2. Define the lock-free property. The definition should probably include the following points:

    • Is it just nonblocking, or how does it differ from merely being nonblocking?
    • Does it make any assumptions about the scheduler?
    • What are the progress guarantees: minimal or maximal (i.e., do some or all threads eventually finish)?
    • Is progress guaranteed for all operations on the particular object, or do operations on other objects also count as "making progress"?
  3. Add a note explaining that compare-exchange-weak is not necessarily lock-free (but is nonblocking)? Or is it indeed intended to be lock-free (only allowed to fail spuriously, but guaranteed not to fail forever)? Implementing the latter might be a challenge on LL/SC machines or lead to space overheads, I suppose; see the cache-line sharing example above and the sketch below.
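
As a hedged illustration of the distinction item 3 asks about (the function and its use are invented for the example): a single compare_exchange_weak call may fail spuriously, so a false return by itself says nothing about contention; only a retry loop turns it into a complete read-modify-write, and whether that loop is lock-free is exactly what this issue asks to have pinned down.

    #include <atomic>

    // One weak attempt: a 'false' result may be a spurious failure rather
    // than evidence that another thread won. Callers therefore normally wrap
    // this in a retry loop; the progress guarantee of that loop is the open
    // question.
    bool try_claim(std::atomic<int>& slot, int id) {
        int expected = 0;
        return slot.compare_exchange_weak(expected, id);
    }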
History
Date                 User   Action  Args
2014-02-20 13:52:38  admin  set     messages: + msg6878
2014-02-20 13:52:38  admin  set     status: open -> resolved
2014-01-12 13:49:06  admin  set     messages: + msg6779
2012-11-02 22:48:46  admin  set     messages: + msg6234
2012-11-02 22:48:46  admin  set     status: new -> open
2012-02-13 21:55:45  admin  set     messages: + msg6009
2011-12-02 00:14:45  admin  set     messages: + msg5940
2011-08-18 00:00:00  admin  create