Go’s concurrency model is one of its main selling points. Goroutines are cheap, channels are built-in, and the select statement makes coordination elegant. But reaching for the right pattern in the right situation takes practice.

Worker Pool

When you have many tasks and want to limit parallelism:

func processItems(items []Item, workers int) []Result {
    jobs := make(chan Item, len(items))
    results := make(chan Result, len(items))

    // Start workers
    var wg sync.WaitGroup
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for item := range jobs {
                results <- process(item)
            }
        }()
    }

    // Send jobs
    for _, item := range items {
        jobs <- item
    }
    close(jobs)

    // Wait and collect
    go func() {
        wg.Wait()
        close(results)
    }()

    var out []Result
    for r := range results {
        out = append(out, r)
    }
    return out
}

Use this when you’re making HTTP calls, database queries, or file operations in parallel but need to cap the concurrency.
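A runnable sketch of the pool in action, with Item, Result, and process stubbed as trivial placeholders (the doubling in process is just for illustration):

```go
package main

import (
    "fmt"
    "sync"
)

type Item int
type Result int

// process stands in for real work (an HTTP call, a query, ...).
func process(it Item) Result { return Result(it * 2) }

func processItems(items []Item, workers int) []Result {
    jobs := make(chan Item, len(items))
    results := make(chan Result, len(items))

    // Start a fixed number of workers.
    var wg sync.WaitGroup
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for item := range jobs {
                results <- process(item)
            }
        }()
    }

    // Send jobs; the buffer is sized so this never blocks.
    for _, item := range items {
        jobs <- item
    }
    close(jobs)

    // Close results once all workers finish.
    go func() {
        wg.Wait()
        close(results)
    }()

    var out []Result
    for r := range results {
        out = append(out, r)
    }
    return out
}

func main() {
    out := processItems([]Item{1, 2, 3, 4}, 2)
    fmt.Println(len(out)) // 4 results, in completion order
}
```

Note that results arrive in completion order, not input order; if order matters, carry an index in the job and write into a preallocated slice instead.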

Fan-Out, Fan-In

Distribute work across goroutines, then collect results:

func fanOut(ctx context.Context, urls []string) <-chan Response {
    out := make(chan Response)
    var wg sync.WaitGroup

    for _, url := range urls {
        wg.Add(1)
        go func(u string) {
            defer wg.Done()
            resp, err := fetch(ctx, u)
            // Don't block forever if the consumer stops reading early.
            select {
            case out <- Response{URL: u, Data: resp, Err: err}:
            case <-ctx.Done():
            }
        }(url)
    }

    go func() {
        wg.Wait()
        close(out)
    }()

    return out
}

Context Cancellation

Always propagate context for cancellation and timeouts:

func handleRequest(w http.ResponseWriter, r *http.Request) {
    ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
    defer cancel()

    result, err := fetchData(ctx)
    if err != nil {
        if errors.Is(err, context.DeadlineExceeded) {
            http.Error(w, "timeout", http.StatusGatewayTimeout)
            return
        }
        http.Error(w, "error", http.StatusInternalServerError)
        return
    }

    json.NewEncoder(w).Encode(result)
}

errgroup for Structured Concurrency

The errgroup package handles the common pattern of running goroutines and returning on the first error:

import "golang.org/x/sync/errgroup"

func fetchAll(ctx context.Context) (*PageData, error) {
    g, ctx := errgroup.WithContext(ctx)
    var data PageData

    g.Go(func() error {
        var err error
        data.User, err = fetchUser(ctx)
        return err
    })

    g.Go(func() error {
        var err error
        data.Orders, err = fetchOrders(ctx)
        return err
    })

    if err := g.Wait(); err != nil {
        return nil, err
    }
    return &data, nil
}

If any goroutine returns an error, the context is cancelled and Wait() returns that error.

When Not to Use Goroutines

  • Sequential operations. Don’t add concurrency just because you can.
  • Shared mutable state. If goroutines need to coordinate heavily, a mutex or single-goroutine owner is often clearer than channels.
  • CPU-bound work. Goroutines shine for I/O-bound work; for CPU-bound work, useful parallelism is capped at GOMAXPROCS, and extra goroutines only add scheduling overhead.

The best concurrent code is the simplest code that meets your performance requirements.