Imports #
"internal/abi"
"internal/goarch"
"internal/race"
"sync/atomic"
"unsafe"
_ "unsafe"
const (
	mutexLocked = 1 << iota // mutex is locked
	mutexWoken
	mutexStarving
	mutexWaiterShift = iota
)
16 children. This seems to be the sweet spot for load performance: any smaller and we lose out on 50% or more in CPU performance. Any larger and the returns are minuscule (~1% improvement for 32 children).
const (
	nChildrenLog2 = 4
	nChildren     = 1 << nChildrenLog2
	nChildrenMask = nChildren - 1
)
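The constants above determine how the trie consumes a hash: each level of indirect nodes is indexed by one 4-bit chunk of the key's hash, taken from the most significant bits downward. A minimal sketch (the helper name `childIndex` is illustrative, not from the package, and assumes a 64-bit platform):

```go
package main

import "fmt"

const (
	nChildrenLog2 = 4
	nChildren     = 1 << nChildrenLog2
	nChildrenMask = nChildren - 1
)

// childIndex extracts the 4-bit chunk of hash that selects a child slot at
// the given shift. The trie consumes the hash from the top bits downward,
// so the root level uses shift 64-nChildrenLog2 = 60 on 64-bit platforms.
func childIndex(hash uintptr, hashShift uint) int {
	return int((hash >> hashShift) & nChildrenMask)
}

func main() {
	hash := uintptr(0xDEAD_BEEF_0000_0000)
	// Walk the first three levels of the trie for this hash.
	for shift := uint(64 - nChildrenLog2); shift >= 64-3*nChildrenLog2; shift -= nChildrenLog2 {
		fmt.Printf("shift %2d -> child %#x\n", shift, childIndex(hash, shift))
	}
}
```

With 16 children per node, a 64-bit hash supports up to 16 levels before the hash is exhausted and colliding keys fall into an entry's overflow chain.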
Mutex fairness.

Mutex can be in two modes of operation: normal and starvation.

In normal mode, waiters are queued in FIFO order, but a woken-up waiter does not own the mutex and competes with newly arriving goroutines for ownership. New arriving goroutines have an advantage: they are already running on a CPU and there can be many of them, so a woken-up waiter has a good chance of losing. In that case it is queued at the front of the wait queue. If a waiter fails to acquire the mutex for more than 1ms, it switches the mutex to starvation mode.

In starvation mode, ownership of the mutex is handed off directly from the unlocking goroutine to the waiter at the front of the queue. Newly arriving goroutines don't try to acquire the mutex even if it appears to be unlocked, and don't try to spin. Instead they queue themselves at the tail of the wait queue.

If a waiter receives ownership of the mutex and sees that either (1) it is the last waiter in the queue, or (2) it waited for less than 1ms, it switches the mutex back to normal mode.

Normal mode has considerably better performance, since a goroutine can acquire a mutex several times in a row even if there are blocked waiters. Starvation mode is important to prevent pathological cases of tail latency.
const starvationThresholdNs = 1e6
type equalFunc func(unsafe.Pointer, unsafe.Pointer) bool
type hashFunc func(unsafe.Pointer, uintptr) uintptr
HashTrieMap is an implementation of a concurrent hash-trie. The implementation is designed around frequent loads, but offers decent performance for stores and deletes as well, especially if the map is larger. Its primary use case is the unique package, but it can be used elsewhere as well. The zero HashTrieMap is empty and ready to use. It must not be copied after first use.
type HashTrieMap[K comparable, V any] struct {
	inited   atomic.Uint32
	initMu   Mutex
	root     atomic.Pointer[indirect[K, V]]
	keyHash  hashFunc
	valEqual equalFunc
	seed     uintptr
}
A Mutex is a mutual exclusion lock. See package [sync.Mutex] documentation.
type Mutex struct {
state int32
sema uint32
}
entry is a leaf node in the hash-trie.
type entry[K comparable, V any] struct {
	node[K, V]
	overflow atomic.Pointer[entry[K, V]]
	key      K
	value    V
}
indirect is an internal node in the hash-trie.
type indirect[K comparable, V any] struct {
	node[K, V]
	dead     atomic.Bool
	mu       Mutex
	parent   *indirect[K, V]
	children [nChildren]atomic.Pointer[node[K, V]]
}
node is the header for a node. It's polymorphic and is actually either an entry or an indirect.
type node[K comparable, V any] struct {
	isEntry bool
}
All returns an iterator over each key and value present in the map. The iterator does not necessarily correspond to any consistent snapshot of the HashTrieMap's contents: no key will be visited more than once, but if the value for any key is stored or deleted concurrently (including by yield), the iterator may reflect any mapping for that key from any point during iteration. The iterator does not block other methods on the receiver; even yield itself may call any method on the HashTrieMap.
func (ht *HashTrieMap[K, V]) All() func(yield func(K, V) bool)
Clear deletes all the entries, resulting in an empty HashTrieMap.
func (ht *HashTrieMap[K, V]) Clear()
CompareAndDelete deletes the entry for key if its value is equal to old. The value type must be comparable, otherwise this CompareAndDelete will panic. If there is no current value for key in the map, CompareAndDelete returns false (even if the old value is the nil interface value).
func (ht *HashTrieMap[K, V]) CompareAndDelete(key K, old V) (deleted bool)
CompareAndSwap swaps the old and new values for key if the value stored in the map is equal to old. The value type must be of a comparable type, otherwise CompareAndSwap will panic.
func (ht *HashTrieMap[K, V]) CompareAndSwap(key K, old V, new V) (swapped bool)
Delete deletes the value for a key.
func (ht *HashTrieMap[K, V]) Delete(key K)
Load returns the value stored in the map for a key, or nil if no value is present. The ok result indicates whether value was found in the map.
func (ht *HashTrieMap[K, V]) Load(key K) (value V, ok bool)
LoadAndDelete deletes the value for a key, returning the previous value if any. The loaded result reports whether the key was present.
func (ht *HashTrieMap[K, V]) LoadAndDelete(key K) (value V, loaded bool)
LoadOrStore returns the existing value for the key if present. Otherwise, it stores and returns the given value. The loaded result is true if the value was loaded, false if stored.
func (ht *HashTrieMap[K, V]) LoadOrStore(key K, value V) (result V, loaded bool)
Lock locks m. See package [sync.Mutex] documentation.
func (m *Mutex) Lock()
Range calls yield sequentially for each key and value present in the map. If yield returns false, Range stops the iteration. This exists for compatibility with sync.Map; All should be preferred. It provides the same guarantees as sync.Map and as All.
func (ht *HashTrieMap[K, V]) Range(yield func(K, V) bool)
Store sets the value for a key.
func (ht *HashTrieMap[K, V]) Store(key K, old V)
Swap swaps the value for a key and returns the previous value if any. The loaded result reports whether the key was present.
func (ht *HashTrieMap[K, V]) Swap(key K, new V) (previous V, loaded bool)
TryLock tries to lock m and reports whether it succeeded. See package [sync.Mutex] documentation.
func (m *Mutex) TryLock() bool
Unlock unlocks m. See package [sync.Mutex] documentation.
func (m *Mutex) Unlock()
compareAndDelete deletes an entry in the overflow chain if both the key and value compare equal. Returns the new entry chain and whether or not anything was deleted. compareAndDelete must be called under the mutex of the indirect node which e is a child of.
func (head *entry[K, V]) compareAndDelete(key K, value V, valEqual equalFunc) (*entry[K, V], bool)
compareAndSwap replaces an entry in the overflow chain if both the key and value compare equal. Returns the new entry chain and whether or not anything was swapped. compareAndSwap must be called under the mutex of the indirect node which e is a child of.
func (head *entry[K, V]) compareAndSwap(key K, old V, new V, valEqual equalFunc) (*entry[K, V], bool)
func (i *indirect[K, V]) empty() bool
func (n *node[K, V]) entry() *entry[K, V]
expand takes oldEntry and newEntry whose hashes conflict from bit 64 down to hashShift and produces a subtree of indirect nodes to hold the two new entries.
func (ht *HashTrieMap[K, V]) expand(oldEntry, newEntry *entry[K, V], newHash uintptr, hashShift uint, parent *indirect[K, V]) *node[K, V]
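How deep the subtree built by expand must go follows from the hash chunking: new indirect levels are needed until the two hashes first disagree in a 4-bit chunk. A sketch of that depth calculation (subtreeDepth is an illustrative helper, not the internal implementation, and assumes 64-bit hashes):

```go
package main

import "fmt"

const nChildrenLog2 = 4

// subtreeDepth reports how many indirect-node levels are needed before two
// conflicting hashes land in different child slots: it walks 4-bit chunks
// from the top of the hash until the chunks first differ.
func subtreeDepth(oldHash, newHash uintptr) int {
	if oldHash == newHash {
		return -1 // identical hashes never separate; entries share an overflow chain
	}
	depth := 0
	for shift := uint(64 - nChildrenLog2); ; shift -= nChildrenLog2 {
		depth++
		if (oldHash>>shift)&0xF != (newHash>>shift)&0xF {
			return depth
		}
	}
}

func main() {
	// These hashes agree in the top two 4-bit chunks and differ in the
	// third, so three levels are needed before they separate.
	fmt.Println(subtreeDepth(0xAB10_0000_0000_0000, 0xAB20_0000_0000_0000))
}
```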
go:linkname fatal
func fatal(string)
find searches the tree for a node that contains key (hash must be the hash of key). If valEqual != nil, then it will also enforce that the values are equal as well. Returns a non-nil node, which will always be an entry, if found. If i != nil then i.mu is locked, and it is the caller's responsibility to unlock it.
func (ht *HashTrieMap[K, V]) find(key K, hash uintptr, valEqual equalFunc, value V) (i *indirect[K, V], hashShift uint, slot *atomic.Pointer[node[K, V]], n *node[K, V])
func (n *node[K, V]) indirect() *indirect[K, V]
func (ht *HashTrieMap[K, V]) init()
go:noinline
func (ht *HashTrieMap[K, V]) initSlow()
func (ht *HashTrieMap[K, V]) iter(i *indirect[K, V], yield func(key K, value V) bool) bool
loadAndDelete deletes an entry in the overflow chain by key. Returns the value for the key, the new entry chain and whether or not anything was loaded (and deleted). loadAndDelete must be called under the mutex of the indirect node which e is a child of.
func (head *entry[K, V]) loadAndDelete(key K) (V, *entry[K, V], bool)
func (m *Mutex) lockSlow()
func (e *entry[K, V]) lookup(key K) (V, bool)
func (e *entry[K, V]) lookupWithValue(key K, value V, valEqual equalFunc) (V, bool)
func newEntryNode[K comparable, V any](key K, value V) *entry[K, V]
func newIndirectNode[K comparable, V any](parent *indirect[K, V]) *indirect[K, V]
SemacquireMutex is like Semacquire, but for profiling contended Mutexes and RWMutexes. If lifo is true, queue the waiter at the head of the wait queue. skipframes is the number of frames to omit during tracing, counting from runtime_SemacquireMutex's caller. The different forms of this function just tell the runtime how to present the reason for waiting in a backtrace, and are used to compute some metrics. Otherwise they're functionally identical. go:linkname runtime_SemacquireMutex
func runtime_SemacquireMutex(s *uint32, lifo bool, skipframes int)
Semrelease atomically increments *s and notifies a waiting goroutine if one is blocked in Semacquire. It is intended as a simple wakeup primitive for use by the synchronization library and should not be used directly. If handoff is true, pass count directly to the first waiter. skipframes is the number of frames to omit during tracing, counting from runtime_Semrelease's caller. go:linkname runtime_Semrelease
func runtime_Semrelease(s *uint32, handoff bool, skipframes int)
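runtime_SemacquireMutex and runtime_Semrelease are runtime-internal primitives and cannot be called from ordinary code. A buffered channel provides the same blocking acquire/release counting semantics and is the common user-level substitute (this is an analogy for the semantics, not how the runtime implements them):

```go
package main

import "fmt"

// semaphore is a counting semaphore built on a buffered channel.
type semaphore chan struct{}

func (s semaphore) acquire() { s <- struct{}{} } // blocks when all slots are held
func (s semaphore) release() { <-s }             // frees a slot, waking a blocked acquirer

func main() {
	sem := make(semaphore, 2) // allow 2 concurrent holders

	sem.acquire()
	sem.acquire()
	fmt.Println(len(sem)) // 2 slots held

	sem.release()
	fmt.Println(len(sem)) // 1 slot held
}
```

Unlike runtime_Semrelease, a channel cannot express the handoff or lifo options; those exist precisely because the runtime can manipulate its wait queues directly.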
Active spinning runtime support. runtime_canSpin reports whether spinning makes sense at the moment. go:linkname runtime_canSpin
func runtime_canSpin(i int) bool
runtime_doSpin does active spinning. go:linkname runtime_doSpin
func runtime_doSpin()
go:linkname runtime_nanotime
func runtime_nanotime() int64
Pull in runtime.rand so that we don't need to take a dependency on math/rand/v2. go:linkname runtime_rand runtime.rand
func runtime_rand() uint64
swap replaces an entry in the overflow chain if keys compare equal. Returns the new entry chain, the old value, and whether or not anything was swapped. swap must be called under the mutex of the indirect node which e is a child of.
func (head *entry[K, V]) swap(key K, new V) (*entry[K, V], V, bool)
go:linkname throw
func throw(string)
func (m *Mutex) unlockSlow(new int32)