
Monday, January 1, 2024

Go Atomic Value is NOT sync



Lately I encountered a mistake in a colleague's code that used Go's atomic.Value. The programmer's original intent was to make a parallel task run faster by avoiding the Go sync library and using an atomic value instead.

This is a mistake I've seen made several times in the past. In general, if you plan to use the Go atomic library, you are probably either wrong or willing to accept mistakes as the price of higher performance, and in most cases you should not use the atomic library at all.

See an example of the code below.


package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

type countersWithoutSync struct {
	value atomic.Value
}

func (c *countersWithoutSync) increment(counter string) {
	// Load the current map, clone it, bump the counter, and store the clone back.
	// Nothing prevents another goroutine from doing the same between Load and Store.
	counters := c.value.Load().(map[string]int)
	newCounters := make(map[string]int)
	for k, v := range counters {
		newCounters[k] = v
	}
	newCounters[counter]++
	c.value.Store(newCounters)
}

func (c *countersWithoutSync) printAndClean() {
	// Atomically swap in an empty map and print the previous one.
	oldCounters := c.value.Swap(make(map[string]int)).(map[string]int)
	fmt.Printf("%v\n", oldCounters)
}

func main() {
	counters := countersWithoutSync{}
	counters.value.Store(make(map[string]int))

	// 10 goroutines, each incrementing its own counter ~1000 times per second.
	for i := 0; i < 10; i++ {
		counterName := fmt.Sprintf("counter%v", i)
		go func(name string) {
			for iteration := 0; iteration < 10000; iteration++ {
				counters.increment(name)
				time.Sleep(time.Millisecond)
			}
		}(counterName)
	}

	// Print and reset the counters once per second.
	for second := 0; second < 10; second++ {
		counters.printAndClean()
		time.Sleep(time.Second)
	}
}


We have a type that uses atomic.Value to support parallel updates of counters. We might have expected the atomic library to protect us from concurrent updates, but that is completely wrong. atomic.Value only guarantees that loading and storing the reference to the value is free of concurrent modification problems; once the reference is returned, we are left without any protection against parallel updates.
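To be clear, atomic.Value does have a legitimate use case: a single writer replacing a whole, read-only snapshot that many readers load. Below is a minimal sketch of that pattern; the config type and its field are illustrative names, not taken from the code above.


package main

import (
	"fmt"
	"sync/atomic"
)

// config is an immutable snapshot; readers never modify it in place.
type config struct {
	limit int
}

func main() {
	var current atomic.Value
	current.Store(&config{limit: 10})

	// Any number of readers may call Load concurrently; the pointer swap itself is safe.
	cfg := current.Load().(*config)
	fmt.Println("limit:", cfg.limit)

	// A single writer replaces the whole snapshot instead of mutating it.
	current.Store(&config{limit: 20})
	fmt.Println("limit:", current.Load().(*config).limit)
}


As long as nobody derives the new value from the old one, there is no read-modify-write window to lose.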


In this example, the increment method loads the atomic value, clones it to avoid mutating a shared map, and then stores the clone back. But nothing protects us from another goroutine doing the same thing in parallel, so any updates made by other goroutines between our Load and Store calls are lost.

The output of this example demonstrates the loss of updates. Instead of getting 1000 updates per counter every second, we get only ~700, because the parallel goroutines keep overwriting each other's updates.

map[]
map[counter0:719 counter1:707 counter2:708 counter3:707 counter4:734 counter5:743 counter6:715 counter7:737 counter8:714 counter9:728]
map[counter0:743 counter1:734 counter2:754 counter3:757 counter4:743 counter5:743 counter6:742 counter7:749 counter8:738 counter9:754]
map[counter0:766 counter1:762 counter2:746 counter3:749 counter4:761 counter5:764 counter6:761 counter7:747 counter8:766 counter9:757]
map[counter0:780 counter1:761 counter2:777 counter3:787 counter4:772 counter5:777 counter6:777 counter7:782 counter8:790 counter9:784]
map[counter0:777 counter1:792 counter2:771 counter3:780 counter4:787 counter5:788 counter6:790 counter7:782 counter8:766 counter9:788]
map[counter0:767 counter1:775 counter2:780 counter3:755 counter4:775 counter5:768 counter6:760 counter7:772 counter8:767 counter9:759]
map[counter0:771 counter1:771 counter2:769 counter3:779 counter4:774 counter5:761 counter6:771 counter7:778 counter8:773 counter9:778]
map[counter0:776 counter1:766 counter2:764 counter3:776 counter4:763 counter5:782 counter6:764 counter7:775 counter8:764 counter9:766]
map[counter0:771 counter1:781 counter2:773 counter3:779 counter4:776 counter5:781 counter6:785 counter7:790 counter8:768 counter9:795]



One more thing to add: why are we so scared of using sync.Mutex? This might be a habit carried over from other languages, where a mutex may not be as efficient as it is in Go.

The atomic version of the code takes about 0.5 microseconds per increment call, while a sync.Mutex version takes about 1 microsecond per increment call, so the difference is not really meaningful in most cases.
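For reference, here is a sketch of the same counters built on sync.Mutex; the countersWithSync name and its constructor are illustrative, not part of the code above. The increment becomes a plain in-place map update under the lock, with no cloning of the map at all.


package main

import (
	"fmt"
	"sync"
)

type countersWithSync struct {
	mu       sync.Mutex
	counters map[string]int
}

func newCountersWithSync() *countersWithSync {
	return &countersWithSync{counters: make(map[string]int)}
}

func (c *countersWithSync) increment(counter string) {
	// The lock covers the whole read-modify-write, so no updates are lost.
	c.mu.Lock()
	defer c.mu.Unlock()
	c.counters[counter]++
}

func (c *countersWithSync) printAndClean() {
	// Swap in a fresh map under the lock, then print the old one outside it.
	c.mu.Lock()
	old := c.counters
	c.counters = make(map[string]int)
	c.mu.Unlock()
	fmt.Printf("%v\n", old)
}

func main() {
	c := newCountersWithSync()
	c.increment("counter0")
	c.increment("counter0")
	c.printAndClean() // map[counter0:2]
}


Unless profiling shows that the mutex itself is the bottleneck, this version is both simpler and correct.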



