How Do Go Channels Work? An Easy Explanation

The simplest way to understand Go's concurrency communication system

15 min read

When you start learning Go, you quickly hear about two things:

goroutines → run functions concurrently
channels → help goroutines talk to each other

But why do they exist?

Because when multiple goroutines run at the same time, you need a safe way to share data between them without using locks, mutexes, or shared memory headaches.

So Go follows:

“Do not communicate by sharing memory; share memory by communicating.”

This is exactly what channels provide.

What Is a Channel?

A channel is a pipe that lets one goroutine send data to another goroutine.

Goroutine A  ---- data ---->  Goroutine B
              (via channel)

You can think of it like a water pipe for values.

Creating a Channel

ch := make(chan int)

This creates a channel that can send and receive integer values.

Sending & Receiving

package main
import "fmt"

func main() {
    ch := make(chan int)

    // Sender goroutine
    go func() {
        ch <- 10 // sending data
    }()

    v := <-ch // receiving data
    fmt.Println(v)
}

What happens?

  • The goroutine sends 10 into the channel.

  • The main goroutine waits until it receives that value.

  • Output: 10

Channels block by default → the sender waits until someone receives the data, and the receiver waits until someone sends.

This prevents many concurrency bugs automatically.

Why Blocking Is Actually Useful

Blocking ensures synchronisation without locks.

done := make(chan bool)

go func() {
    fmt.Println("Task running...")
    done <- true // signals completion
}()
<-done // waits for task
fmt.Println("Task finished!")

No mutex. No busy waiting. The channel does all the synchronisation.

Buffered vs Unbuffered Channels

By default, channels are unbuffered, i.e. a send blocks until a receive happens.

Let’s understand this with a restaurant analogy.

Unbuffered Channel = Chef hands plate directly to waiter

Scenario

The kitchen has no counter, no shelf, no tray to place food on.

Only option → Chef must hand the plate directly to a waiter

What happens?

  • Chef prepares the dish.

  • Chef extends the plate to give it to a waiter.

  • If no waiter is available, the chef cannot let go of the plate.

  • So the chef stands and waits until a waiter arrives and takes it.

  • Once the waiter grabs the plate, both continue.

Meaning in Go

  • Sender waits until the receiver is ready.

  • Receiver waits until the sender sends.

  • Communication is synchronous.

ch := make(chan string) // unbuffered

go func() {
    ch <- "Pasta"
    // waits until someone receives
}()

fmt.Println(<-ch) // receives and unblocks chef

“Serving must happen instantly — no place to put the dish.“

Buffered Channel = Chef places plates on a counter (tray / shelf)

Scenario

Now the kitchen has a food counter with space for 3 plates.

  • Chef puts dishes on the counter.

  • Waiters pick them up later.

What happens?

  • Chef finishes dish and places it on counter.

  • No need to wait for a waiter.

  • Chef continues cooking.

  • Waiter picks plates when free.

  • If the counter becomes full, the chef must stop and wait.

Meaning in Go

  • Sender places data in the buffer and does not wait.

  • Receiver picks up whenever ready.

  • Only blocks when buffer is full.

Buffered Channel

ch := make(chan int, 2)

This channel can hold 2 values without blocking.

ch <- 1
ch <- 2 // ok

// ch <- 3 // would block because buffer is full

fmt.Println(<-ch) // 1
fmt.Println(<-ch) // 2

Use buffered channels when you want to control throughput.

Final Understanding

Unbuffered

“Chef waits until a waiter is RIGHT THERE to take the dish.“

Buffered

“Chef places dish on counter and keeps cooking - unless counter fills up.“

Directional Channels

In Go, channels can be restricted so a function can only send or only receive.
This improves safety by preventing incorrect usage.

Why Use Directional Channels?

  • Prevent accidental receiving when a goroutine should only send.

  • Improve readability like “this function only sends data”.

  • Allow the compiler to catch misuses early.

func worker(id int, jobs <-chan int, results chan<- int) {
    for j := range jobs {
        results <- j * 2
    }
}

  • jobs <-chan int → receive only

  • results chan<- int → send only

With this signature, worker cannot misuse the channels:

  • If it tries to send on jobs, compiler error.

  • If it tries to receive from results, compiler error.

Why It Matters

In large systems:

  • Directional channels enforce correctness.

  • They reduce bugs caused by wrong channel usage.

Closing a Channel

Closing a channel signals:

“No more values will be sent“

Workers can safely stop processing when the channel is closed.

Example — Using close()

close(jobs)

Now workers reading from jobs will keep receiving until all values are consumed, then exit gracefully.

How receivers detect closure

v, ok := <-jobs
if !ok {
    fmt.Println("channel closed")
}

Key Rules:

  • Only the sender should close the channel.

  • Closing a channel twice → panic.

  • Receiving from closed channel → zero-value but safe.

Deadlocks

A deadlock happens when goroutines are stuck waiting on each other forever.

When every goroutine is blocked, the Go runtime detects this and panics.

func main() {
    ch := make(chan int)
    ch <- 1 // no receiver -> deadlock
}

In the code above, the main goroutine is waiting to send, but no goroutine is receiving.

Deadlock in Receivers

func main() {
    ch := make(chan int)
    fmt.Println(<-ch) // no sender
}

Goroutine Leaks

A goroutine leak occurs when a goroutine is started but then enters a permanent waiting state, unable to complete its execution, and there is no mechanism to stop it. It remains alive until the entire program terminates.

func worker(ch <-chan int) {
    for v := range ch { // here waits forever if channel never closed
        fmt.Println(v)
    }
}

func main() {
    ch := make(chan int)
    go worker(ch)
    // forgot to close or send
}

In this example, after calling the worker in another goroutine, we are not sending any values to that channel. To fix these leaks, always close channels when no more values will be sent.

Resources Consumed by a Leaked Goroutine

A leaked goroutine is not free. It continuously holds onto system resources, which can lead to performance degradation. Below are the resources consumed by a leaked goroutine.

  1. Memory (Stack)

    Every goroutine starts with a small amount of memory for its stack, typically 2 KB. While small, this memory stays allocated for as long as the goroutine lives.

    • If a bug causes a goroutine leak every time an event occurs, and that event happens frequently, the number of leaked goroutines will climb, and the total memory consumed by their stacks will grow linearly. For example, 100,000 leaked goroutines consume about 200 MB of stack memory alone.
  2. CPU and Scheduling Overheads

    Even though a leaked goroutine isn’t actively running code, it still costs the Go runtime (the scheduler) resources:

    • Scheduler Management : The Go scheduler must maintain the data structure for every active goroutine. The scheduler periodically checks the status of these blocked goroutines to see if they can be unblocked.

    • Context Switching: While minimal, if the goroutine briefly wakes up due to a false signal or is examined by the scheduler, it can contribute to small, unnecessary context-switching overhead.

  3. Channel Management

    The channel ch itself is also part of the resource consumption. The channel’s internal data structure must track that one goroutine is currently blocked waiting to receive from it.

To prevent the leak, you must either:

  1. Close the channel after sending all necessary data

  2. Use a different synchronization primitive like a sync.WaitGroup if the worker should exit after completing a specific job instead of waiting for the channel to close.

How WaitGroup Prevents Premature Exit Leaks

sync.WaitGroup doesn’t directly stop a goroutine from getting stuck; instead, it provides the mechanism to ensure the entire application waits for the worker to complete its execution path, which is key to avoiding a leak if the program were to continue running.

In a simple scenario without a WaitGroup, if the main function finishes, the program terminates immediately, potentially killing active workers mid-execution, or, as in the previous examples, causing a deadlock.

  1. Waiting for Known Finite Tasks

    If you know a goroutine’s job is finite, the WaitGroup guarantees the program won’t exit until that job is done.

    Example: Clean Exit without a Leak

    In this pattern, the worker has a clear, finite task. The WaitGroup ensures its completion.

     package main
    
     import (
         "fmt"
         "sync"
         "time"
     )
    
     func worker(id int, wg *sync.WaitGroup) {
         // The DEFER ensures wg.Done() is called no matter what.
         defer wg.Done() 
    
         fmt.Printf("Worker %d starting...\n", id)
         time.Sleep(time.Millisecond * 100) // Finite task
     }
    
     func main() {
         var wg sync.WaitGroup
         wg.Add(1) 
    
         go worker(1, &wg) 
    
         // Main blocks here, guaranteeing the worker finishes its task.
         wg.Wait() 
         fmt.Println("Program complete.")
     }
     // Outcome: The worker runs to completion, the counter reaches zero, 
     // and the program exits cleanly. No leak.
    
  2. Preventing Leaks from Unclosed Channels Using WaitGroup with close

     func worker(ch <-chan int) {
         for v := range ch { // here waits forever if channel never closed
             fmt.Println(v)
         }
     }
    
     func main() {
         ch := make(chan int)
         go worker(ch)
         // forgot to close or send
     }
    

    The leaky example above involved a worker blocked on an unclosed channel. WaitGroup can’t fix that specific blocking mechanism, but it can work in conjunction with channel closing to provide robust, leak-free concurrency.

    Example: Leak-Free Channel Consumption

    Here, the WaitGroup ensures the main function waits long enough for the sender to complete and close the channel, which, in turn, allows the receiver (worker) to exit cleanly.

     package main
    
     import (
         "fmt"
         "sync"
     )
    
     func sender(ch chan<- int, wg *sync.WaitGroup) {
         defer wg.Done() // Signal completion of sending
    
         for i := 1; i <= 3; i++ {
             ch <- i // Send data
         }
         // Critical: Close the channel to signal EOD (End of Data)
         close(ch) 
         fmt.Println("Sender finished and closed channel.")
     }
    
     func receiver(ch <-chan int, wg *sync.WaitGroup) {
         defer wg.Done() // Signal completion of receiving
    
         // This loop will exit gracefully when 'ch' is closed
         for v := range ch { 
             fmt.Println("Received:", v)
         }
         fmt.Println("Receiver finished.")
     }
    
     func main() {
         var wg sync.WaitGroup
         ch := make(chan int)
    
         // Add for BOTH sender and receiver
         wg.Add(2) 
    
         go sender(ch, &wg)
         go receiver(ch, &wg)
    
         fmt.Println("Waiting for all tasks to complete...")
         wg.Wait() // Main waits for both goroutines to call Done()
         fmt.Println("All goroutines completed successfully.")
     }
     // Outcome: Both goroutines finish their work and exit gracefully. No leak.
    

    In this example:

    1. The sender goroutine sends its data and closes the channel.

    2. The receiver goroutine’s for v := range ch loop detects the close and exits cleanly, calling wg.Done().

    3. The main goroutine waits until the receiver exits, preventing the program from terminating early and leaving the receiver stuck.

Real World Example: Worker Pool Using Channels

This demonstrates how channels help coordinate many goroutines.

package main

import "fmt"

func worker(id int, jobs <-chan int, results chan<- int) {
    for j := range jobs {
        results <- j * 2 // processing
    }
}

func main() {
    jobs := make(chan int, 5)
    results := make(chan int, 5)

    // start 3 workers
    for i := 1; i <= 3; i++ {
        go worker(i, jobs, results)
    }

    // send 5 jobs
    for j := 1; j <= 5; j++ {
        jobs <- j
    }
    close(jobs)

    // receive results
    for r := 1; r <= 5; r++ {
        fmt.Println(<-results)
    }
}

Output

2
4
6
8
10

What Is the Above Code?

This is a worker pool, a very common concurrency pattern in Go.

  • You have multiple workers (goroutines) ready to process jobs.

  • Jobs are given through a channel.

  • Workers take a job, process it, and send the result back.

This pattern is used in real systems such as:

  • Background job queues

  • Web servers

  • Task processors

  • Email senders

  • Image processing pipelines

Step 1: Worker Function

func worker(id int, jobs <-chan int, results chan<- int) {
    for j := range jobs {
        results <- j * 2
    }
}

What is happening?

  1. jobs <-chan int

    This worker can only receive jobs.

  2. results chan<- int

    This worker can only send results.

  3. for j := range jobs

    Keeps taking jobs until channel closes.

  4. results <- j * 2

    “Processing“ the job (multiplying by 2)

Real-World analogy:

Think of these workers as 3 employees in a factory, taking tasks (the jobs channel) and producing output (the results channel).

Step 2: Creating Job and Result Channels

jobs := make(chan int, 5)
results := make(chan int, 5)

  • Jobs channel holds up to 5 tasks.

  • Results channel holds up to 5 outputs.

This buffer prevents workers from blocking too much.

Step 3: Starting 3 Workers

for i := 1; i <= 3; i++ {
    go worker(i, jobs, results)
}

This starts 3 goroutines, each running the worker function.

Step 4: Sending Jobs

for j := 1; j <= 5; j++ {
    jobs <- j
}
close(jobs)

You are giving the workers 5 tasks: 1, 2, 3, 4, 5.

Once all jobs are sent, you close the channel to signal:

“No more tasks. Finish what you have and exit.“

Without closing, workers would block on range jobs forever, leaking goroutines.

Step 5: Receiving Results

for r := 1; r <= 5; r++ {
    fmt.Println(<-results)
}

You collect 5 results because you sent 5 jobs.

Workers may complete jobs in any order, because they run concurrently.

Why Is This Example Important?

Because it teaches several core Go concurrency ideas:

  • Goroutines work independently.

  • Channels coordinate data flow.

  • Closing channels signals “no more work“

  • Multiple goroutines can read from the same channel safely

  • Worker pools improve performance automatically.

Let’s understand channels better by looking at some real-world use cases.

Unbuffered Channel - Payment Confirmation System

Imagine you are making a payment on UPI / Paytm / Google Pay.

You press “Pay“, and your app waits until the bank says : Payment Success

It cannot move ahead until it receives a response.

This is exactly unbuffered behaviour.

  • App = Sender

  • Bank server = Receiver

  • Confirmation = Data

  • App must wait → It blocks

This ensures strict, synchronised communication.

package main
import (
    "fmt"
    "time"
)

func bankServer(confirm chan string) {
    time.Sleep(2 * time.Second) // Simulate processing
    confirm <- "Payment Successful"
}

func main() {
    confirm := make(chan string) // unbuffered
    fmt.Println("Initiating Payment...")
    go bankServer(confirm)
    // Waiting for confirmation (blocks)
    status := <-confirm
    fmt.Println("Bank Response:", status)
    fmt.Println("Order Placed!")
}

Explanation

  1. User clicks pay → request goes to bank.

  2. confirm := make(chan string) creates an unbuffered channel.

  3. Your app sends the request and waits at:

     status := <-confirm
    
  4. Bank server goroutine sleeps for 2 seconds (Payment process simulation)

  5. Bank sends confirmation

     confirm <- "Payment Successful"
    
  6. Your app resumes and continues processing.

Why Unbuffered is Best

  • Payment must not continue without server confirmation.

  • Forces a synchronous, guaranteed handover.

  • Prevents inconsistent states like:

    • Payment done but no order placed

    • Order placed twice

    • Payment timeout errors

Buffered Channel - Logging System

Imagine your service receives hundreds of requests per second.

Every request generates logs:

  • “User logged in“

  • “Payment initiated“

  • “Order created“

If you write logs synchronously (unbuffered):

  • Requests must WAIT until the log is written

  • Performance is reduced

  • API responses become slow

Instead, logs are pushed into a buffer, and a background goroutine writes them to file/database.

package main
import (
    "fmt"
    "time"
)

func logWriter(logs <-chan string) {
    for entry := range logs {
        time.Sleep(500 * time.Millisecond)
        fmt.Println("Logged:", entry)
    }
}
}

func main() {
    logs := make(chan string, 10) // buffered log queue

    go logWriter(logs)

    for i := 1; i <= 5; i++ {
        fmt.Println("Received request", i)
        logs <- fmt.Sprintf("Request %d processed", i) // fast enqueue
    }
    close(logs)
    time.Sleep(3 * time.Second)
}

Why buffered?

  • Logging is slow

  • Request processing is fast

  • Buffered channel prevents the fast operation from blocking.

Buffered + Unbuffered Hybrid - Video Streaming Pipeline

In YouTube / Netflix video processing:

  • Stage 1: Video upload

  • Stage 2: Video encoding

  • Stage 3: Thumbnail generation

These require different channel types:

Stage             | Behavior | Perfect Channel
Upload → Queue    | Async    | Buffered
Encoding → Notify | Sync     | Unbuffered
package main

import "fmt"

func uploader(files chan<- string) {
    files <- "video1.mp4"
    files <- "video2.mp4"
    close(files)
}

func encoder(files <-chan string, processed chan<- string) {
    for file := range files {
        processed <- file + " encoded"
    }
    close(processed)
}

func notifier(processed <-chan string) {
    for p := range processed {
        fmt.Println("Notify user:", p)
    }
}

func main() {
    files := make(chan string, 10) // buffered queue of uploaded videos
    processed := make(chan string)

    go uploader(files)
    go encoder(files, processed)
    notifier(processed)
}

FAQ (Interview - Focused)

  1. Why do we need channels if we have goroutines?

    Because goroutines run independently. Channels allow safe communication and synchronization without mutex locks.

  2. Are channels thread-safe?

    Yes. Channel operations are fully managed by the Go runtime.

  3. When to use buffered vs unbuffered channels?

    Unbuffered → synchronization
    Buffered → improved throughput / decoupled sender & receiver

  4. Is closing a channel mandatory?

    No. It is only needed to signal “no more values will come” so receivers ranging over the channel can exit.

  5. Can you still send on a closed channel?

    No → panic.

  6. Is it safe to read from a closed channel?
    Yes. You get the remaining buffered values and then zero values.

  7. Can we close a receive-only channel?

    No. Only senders should close channels.

  8. Are channels FIFO?

    Yes. Go guarantees that channels are FIFO.

  9. When should I use buffered channels?

    Use buffered channels when:

    • You need async processing.

    • Producers are faster than consumers.

    • Minor queueing is okay.

    • You want rate-limiting or batching

  10. When should I use unbuffered channels?

    Use unbuffered when:

    • You need strict hand-off

    • Sync between two goroutines

    • Avoid queueing.

  11. How to avoid goroutine leaks?

    • Always close channels when done

    • Use context.Context

    • Avoid infinite blocking receives.

  12. Does reading from a nil channel block?

    Yes.

    var ch chan int // nil
    <-ch            // blocks forever
  13. What happens if no one receives from an unbuffered channel?

    The sending goroutine blocks forever, leading to a deadlock.

    ch := make(chan int)
    ch <- 10 // deadlock: nobody is receiving
    
  14. Does closing a channel stop goroutines?

    No. Closing a channel only means:

    • No more sends allowed.

    • Receives still return values until the buffer empties.

    • After that, receives return the zero value.

    Goroutines don’t stop automatically; you must handle shutdown explicitly.

  15. Buffered vs Unbuffered — Performance difference?

    • Unbuffered channels → slower, because sender and receiver must meet

    • Buffered channels → faster (asynchronous, less blocking)

  16. Can I detect if a channel is full or empty?

    No. Go intentionally hides this to avoid race conditions. You must structure your program so you don’t need to check this.

  17. Can channels replace queues or message brokers?

    For small-scale in-memory concurrency, yes. For distributed systems, no.

    Channels are:

    • In-memory

    • Single-process

    • Fast

    • Lightweight

    But they cannot replace Kafka, RabbitMQ, NATS, or Redis Streams. Use channels for in-process pipelines only.

Conclusion

Channels are one of Go’s most powerful concurrency features. They provide safe communication, built-in synchronization, and easy coordination between goroutines without locks. Once you master channels, you unlock the real power of Go’s concurrency model.

Go Deep with Golang

Part 2 of 11

Go beyond the basics! This series explores how Go works under the hood — from memory management to goroutines, channels, and design principles that make Go ideal for modern backend development.
