Parallelize work using parwork

To process a lot of work, especially CPU-bound work, we have to parallelize it across all cores. Go has goroutines, which can be used to parallelize the work, but spawning a large number of them comes with context-switching costs. This context switching can be minimized by using a fork-join model when processing work. Parwork solves this problem by using goroutines, channels and wait groups. It creates workers (goroutines) that pull work off a queue (channel), process the work and report the results back to a queue (channel). This is done in an abstracted way, so the user only has to provide an implementation for: ...
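The fork-join pattern described above can be sketched as follows. This is a minimal illustration, not parwork's actual API: `process` stands in for the user-supplied work function, and the channel and worker names are assumptions.

```go
package main

import (
	"fmt"
	"sync"
)

// process is a stand-in for the user-supplied work function.
func process(n int) int { return n * n }

func main() {
	const workers = 4
	jobs := make(chan int)
	results := make(chan int)
	var wg sync.WaitGroup

	// Fork: start a fixed number of workers pulling off the jobs queue.
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range jobs {
				results <- process(n)
			}
		}()
	}

	// Feed work, then close the queue so the workers exit.
	go func() {
		for i := 1; i <= 10; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	// Join: close results once all workers have finished.
	go func() {
		wg.Wait()
		close(results)
	}()

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println(sum) // sum of squares 1..10 = 385
}
```

Capping the number of goroutines at a fixed worker count keeps scheduling overhead bounded regardless of how many work items flow through the queue.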

March 14, 2018 · 3 min · 465 words · Sotirios Mantziaris

Initial release of adaptlog

Almost every application logs data one way or another, and there is a plethora of logging packages available for Go. The standard library ships one that takes a simple approach, while many third-party packages follow the well-established leveled approach. Choosing a specific library comes with the cost of a direct dependency. But why should we depend directly on a specific package? How painful is it to swap one logging package for another once a lot of code has been written against that direct dependency? This is the reason why adaptlog came to life. ...

January 20, 2016 · 1 min · 212 words · Sotirios Mantziaris