# Async Patterns Reference

Multi-language async/concurrency patterns — JavaScript, Python, Go, Rust, Java, and universal concurrency patterns.

98 commands across 14 categories.

**Contents:** JavaScript — Promises · JavaScript — async/await · JavaScript — Promise Combinators · JavaScript — Event Loop & Microtasks · Python — asyncio Basics · Python — asyncio Concurrency · Go — Goroutines & Channels · Go — select, WaitGroup, Mutex, Context · Rust — async/await · Java — CompletableFuture · Java — ExecutorService & Virtual Threads · Common Patterns — Fan-out/Fan-in · Common Patterns — Rate Limiting · Common Patterns — Retry & Circuit Breaker
## JavaScript — Promises

**`new Promise((resolve, reject) => { ... })`** — Create a new Promise that resolves or rejects asynchronously.

```js
const p = new Promise((resolve, reject) => {
  setTimeout(() => resolve('done'), 1000);
});
```

**`.then(onFulfilled, onRejected)`** — Attach callbacks for resolution and/or rejection; returns a new Promise.

```js
fetch('/api/user')
  .then(res => res.json())
  .then(data => console.log(data));
```

**`.catch(onRejected)`** — Attach a rejection handler — sugar for `.then(undefined, onRejected)`. e.g. `fetchData().catch(err => console.error('Failed:', err));`

**`.finally(onFinally)`** — Run cleanup logic regardless of fulfillment or rejection.

```js
fetchData()
  .then(process)
  .catch(handleError)
  .finally(() => hideSpinner());
```

**`Promise.resolve(value)`** — Create a Promise that resolves immediately with the given value. e.g. `const p = Promise.resolve(42);`

**`Promise.reject(reason)`** — Create a Promise that rejects immediately with the given reason. e.g. `const p = Promise.reject(new Error('fail'));`
## JavaScript — async/await

**`async function name() { ... }`** — Declare an async function that implicitly returns a Promise.

```js
async function getUser(id) {
  const res = await fetch(`/api/users/${id}`);
  return res.json();
}
```

**`await expression`** — Pause execution until the Promise resolves; must be inside an async function or at the top level of an ES module. e.g. `const data = await fetchData();`

**try/catch with await** — Handle async errors with standard try/catch syntax.

```js
try {
  const result = await riskyOperation();
} catch (err) {
  console.error('Error:', err);
}
```

**`for await (const item of asyncIterable)`** — Iterate over an async iterable (e.g., streams, async generators).

```js
for await (const chunk of readableStream) {
  process(chunk);
}
```

**`async function* generator()`** — Async generator function — yields promises that are awaited by for-await-of.

```js
async function* paginate(url) {
  let page = 1;
  while (true) {
    const res = await fetch(`${url}?page=${page++}`);
    const data = await res.json();
    if (!data.length) break;
    yield data;
  }
}
```

**Top-level await** — Use `await` at the top level of ES modules (no wrapping async function needed). e.g. `const config = await loadConfig();`
## JavaScript — Promise Combinators

**`Promise.all(iterable)`** — Wait for ALL promises to resolve; rejects if ANY rejects (fail-fast).

```js
const [users, posts] = await Promise.all([
  fetchUsers(),
  fetchPosts()
]);
```

**`Promise.allSettled(iterable)`** — Wait for ALL promises to settle (resolve or reject); never short-circuits.

```js
const results = await Promise.allSettled([p1, p2, p3]);
results.forEach(r => {
  if (r.status === 'fulfilled') console.log(r.value);
  else console.error(r.reason);
});
```

**`Promise.race(iterable)`** — Resolve/reject as soon as the FIRST promise settles.

```js
const result = await Promise.race([
  fetchData(),
  timeout(5000)
]);
```

**`Promise.any(iterable)`** — Resolve with the FIRST fulfilled promise; rejects only if ALL reject (with an AggregateError).

```js
const fastest = await Promise.any([
  fetchFromCDN1(),
  fetchFromCDN2(),
  fetchFromCDN3()
]);
```

**`Promise.withResolvers()`** — Returns `{ promise, resolve, reject }` — ES2024, useful for deferring resolution.

```js
const { promise, resolve, reject } = Promise.withResolvers();
setTimeout(() => resolve('done'), 1000);
```
## JavaScript — Event Loop & Microtasks

**`queueMicrotask(callback)`** — Schedule a microtask — runs before the next macrotask (after the current task completes). e.g. `queueMicrotask(() => console.log('microtask'));`

**`setTimeout(fn, 0)`** — Schedule a macrotask — runs after all pending microtasks (and, in browsers, rendering). e.g. `setTimeout(() => console.log('macrotask'), 0);`

**`process.nextTick(fn)`** — Node.js only: schedule before other microtasks (even before Promise callbacks). e.g. `process.nextTick(() => console.log('nextTick'));`

**Execution order** — Sync code → microtasks (`Promise.then`, `queueMicrotask`) → macrotasks (`setTimeout`, `setInterval`, I/O).

**`AbortController`** — Cancel async operations (fetch, timers, event listeners) cooperatively.

```js
const ac = new AbortController();
fetch(url, { signal: ac.signal });
setTimeout(() => ac.abort(), 5000);
```
## Python — asyncio Basics

**`async def func(): ...`** — Define a coroutine function.

```python
async def fetch_data(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            return await resp.json()
```

**`await coroutine`** — Suspend execution until the coroutine completes. e.g. `result = await fetch_data('https://api.example.com/data')`

**`asyncio.run(coro)`** — Entry point: create the event loop, run the coroutine, close the loop. e.g. `asyncio.run(main())`

**`asyncio.sleep(seconds)`** — Async sleep — yields control back to the event loop. e.g. `await asyncio.sleep(1.0)`

**`asyncio.get_running_loop()`** — Get the running event loop from inside a coroutine. e.g. `loop = asyncio.get_running_loop()` (the older `asyncio.get_event_loop()` is deprecated since 3.10; prefer `get_running_loop()`)

**`async with`** — Asynchronous context manager — for resources needing async setup/teardown.

```python
async with aiohttp.ClientSession() as session:
    ...
```

**`async for item in aiter`** — Iterate over an async iterator/generator.

```python
async for msg in websocket:
    print(msg)
```
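The pieces above compose into one short, runnable program. A minimal sketch (the coroutine names and the simulated delay are illustrative, not from any library):

```python
import asyncio

async def greet(name: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for real async I/O
    return f"hello, {name}"

async def main() -> list[str]:
    # plain awaits run sequentially: greet("bob") starts only
    # after greet("alice") has finished
    first = await greet("alice")
    second = await greet("bob")
    return [first, second]

results = asyncio.run(main())  # asyncio.run owns the event loop
print(results)
```

Note that `asyncio.run` should be called once, at the program's entry point; everything inside it awaits rather than creating new loops.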
## Python — asyncio Concurrency

**`asyncio.gather(*coros, return_exceptions=False)`** — Run coroutines concurrently; returns a list of results in order.

```python
results = await asyncio.gather(
    fetch('/users'),
    fetch('/posts'),
    fetch('/comments')
)
```

**`asyncio.create_task(coro)`** — Schedule a coroutine as a Task on the event loop (starts immediately).

```python
task = asyncio.create_task(background_job())
# ... do other work ...
result = await task
```

**`asyncio.TaskGroup()` (3.11+)** — Structured concurrency — all tasks complete, or all cancel on first failure.

```python
async with asyncio.TaskGroup() as tg:
    t1 = tg.create_task(fetch_users())
    t2 = tg.create_task(fetch_posts())
print(t1.result(), t2.result())
```

**`asyncio.wait_for(coro, timeout)`** — Wait for a coroutine with a timeout; raises `TimeoutError`. e.g. `result = await asyncio.wait_for(slow_op(), timeout=5.0)`

**`asyncio.wait(tasks, return_when=...)`** — Wait for tasks with flexible completion conditions (`FIRST_COMPLETED`, `ALL_COMPLETED`, `FIRST_EXCEPTION`). e.g. `done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)`

**`asyncio.as_completed(coros)`** — Iterator yielding futures in the order they complete.

```python
for coro in asyncio.as_completed(tasks):
    result = await coro
    print(result)
```

**`asyncio.Queue()`** — Async-safe FIFO queue for producer-consumer patterns.

```python
queue = asyncio.Queue(maxsize=100)
await queue.put(item)
item = await queue.get()
```

**`asyncio.Semaphore(value)`** — Limit concurrent access to a resource.

```python
sem = asyncio.Semaphore(10)
async with sem:
    await fetch(url)
```
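Combining `gather` with a `Semaphore` is the standard way to run many coroutines with bounded concurrency. A runnable sketch (the `fetch` coroutine here just sleeps; in real code it would do I/O):

```python
import asyncio

async def fetch(url: str, sem: asyncio.Semaphore) -> str:
    async with sem:                # at most `limit` fetches hold the semaphore
        await asyncio.sleep(0.01)  # stand-in for a real request
        return f"body of {url}"

async def fetch_all(urls: list[str], limit: int = 3) -> list[str]:
    sem = asyncio.Semaphore(limit)
    # gather returns results in input order, regardless of completion order
    return await asyncio.gather(*(fetch(u, sem) for u in urls))

bodies = asyncio.run(fetch_all([f"u{i}" for i in range(5)]))
```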
## Go — Goroutines & Channels

**`go func() { ... }()`** — Launch a goroutine — lightweight concurrent function execution.

```go
go func() {
    result := compute()
    ch <- result
}()
```

**`ch := make(chan Type)`** — Create an unbuffered channel — sends block until a receiver is ready. e.g. `ch := make(chan int)`

**`ch := make(chan Type, size)`** — Create a buffered channel — sends block only when the buffer is full. e.g. `ch := make(chan string, 100)`

**`ch <- value`** — Send a value into a channel. e.g. `ch <- 42`

**`value := <-ch`** — Receive a value from a channel (blocks until a value is available). e.g. `result := <-ch`

**`close(ch)`** — Close a channel — signals no more values will be sent.

**`for v := range ch { ... }`** — Receive values from a channel until it's closed.

```go
for msg := range messages {
    fmt.Println(msg)
}
```

**`v, ok := <-ch`** — Receive with closed check — `ok` is false if the channel is closed and empty.

```go
v, ok := <-ch
if !ok {
    fmt.Println("channel closed")
}
```

**`chan<- Type` / `<-chan Type`** — Directional channel types — send-only or receive-only for type safety.

```go
func producer(out chan<- int) { ... }
func consumer(in <-chan int) { ... }
```
## Go — select, WaitGroup, Mutex, Context

**`select { case ... }`** — Wait on multiple channel operations; executes the first one ready.

```go
select {
case msg := <-ch1:
    handle(msg)
case msg := <-ch2:
    handle(msg)
case <-time.After(5 * time.Second):
    fmt.Println("timeout")
}
```

**`default` case in `select`** — Non-blocking select — executes `default` if no channel is ready.

```go
select {
case msg := <-ch:
    handle(msg)
default:
    // non-blocking
}
```

**`sync.WaitGroup`** — Wait for a collection of goroutines to finish.

```go
var wg sync.WaitGroup
for i := 0; i < 10; i++ {
    wg.Add(1)
    go func(id int) {
        defer wg.Done()
        work(id)
    }(i)
}
wg.Wait()
```

**`sync.Mutex` / `sync.RWMutex`** — Mutual exclusion lock for protecting shared state.

```go
var mu sync.Mutex
mu.Lock()
counter++
mu.Unlock()
```

**`sync.Once`** — Ensure a function is only executed once (e.g., initialization).

```go
var once sync.Once
once.Do(func() {
    initConfig()
})
```

**`context.WithCancel(parent)`** — Create a cancellable context — call `cancel()` to signal goroutines to stop.

```go
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
go worker(ctx)
```

**`context.WithTimeout(parent, duration)`** — Context that auto-cancels after a timeout.

```go
ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
```

**`context.WithDeadline(parent, time)`** — Context that cancels at a specific time.

```go
deadline := time.Now().Add(30 * time.Second)
ctx, cancel := context.WithDeadline(ctx, deadline)
defer cancel()
```

**`<-ctx.Done()`** — Channel that closes when the context is cancelled or times out.

```go
select {
case <-ctx.Done():
    return ctx.Err()
case result := <-ch:
    return result
}
```

**`errgroup.Group` (golang.org/x/sync)** — Like WaitGroup but with error propagation and context cancellation.

```go
g, ctx := errgroup.WithContext(ctx)
g.Go(func() error {
    return fetchUsers(ctx)
})
g.Go(func() error {
    return fetchPosts(ctx)
})
if err := g.Wait(); err != nil {
    log.Fatal(err)
}
```
## Rust — async/await

**`async fn name() -> T { ... }`** — Declare an async function — returns `impl Future<Output = T>`.

```rust
async fn fetch_url(url: &str) -> Result<String, reqwest::Error> {
    reqwest::get(url).await?.text().await
}
```

**`.await`** — Await a future — suspends the current async function until the future completes. e.g. `let body = reqwest::get(url).await?.text().await?;`

**`#[tokio::main]`** — Macro that sets up the Tokio runtime and runs an async main function.

```rust
#[tokio::main]
async fn main() {
    let result = fetch_data().await;
    println!("{:?}", result);
}
```

**`tokio::spawn(future)`** — Spawn a new async task on the Tokio runtime (like a goroutine).

```rust
let handle = tokio::spawn(async {
    expensive_computation().await
});
let result = handle.await?;
```

**`tokio::join!(f1, f2, ...)`** — Run multiple futures concurrently and wait for all of them.

```rust
let (users, posts) = tokio::join!(
    fetch_users(),
    fetch_posts()
);
```

**`tokio::select! { ... }`** — Wait on multiple futures; proceed with the first one that completes.

```rust
tokio::select! {
    val = rx.recv() => println!("received {val:?}"),
    _ = tokio::time::sleep(Duration::from_secs(5)) => println!("timeout"),
}
```

**`tokio::time::sleep(duration)`** — Async sleep — yields control back to the runtime. e.g. `tokio::time::sleep(Duration::from_millis(500)).await;`

**`tokio::time::timeout(duration, future)`** — Wrap a future with a timeout; returns `Err(Elapsed)` on timeout.

```rust
match tokio::time::timeout(Duration::from_secs(5), operation()).await {
    Ok(result) => handle(result),
    Err(_) => eprintln!("timed out"),
}
```

**`tokio::sync::Mutex<T>`** — Async-aware mutex — can hold the lock across `.await` points (unlike `std::sync::Mutex`).

```rust
let data = Arc::new(tokio::sync::Mutex::new(vec![]));
let mut lock = data.lock().await;
lock.push(42);
```

**`tokio::sync::mpsc::channel(buffer)`** — Multi-producer, single-consumer async channel.

```rust
let (tx, mut rx) = tokio::sync::mpsc::channel(100);
tokio::spawn(async move {
    tx.send("hello").await.unwrap();
});
while let Some(msg) = rx.recv().await {
    println!("{msg}");
}
```

**`tokio::sync::Semaphore`** — Async semaphore for limiting concurrency.

```rust
let sem = Arc::new(Semaphore::new(10));
let permit = sem.acquire().await?;
// ... do work ...
drop(permit);
```
## Java — CompletableFuture

**`CompletableFuture.supplyAsync(() -> value)`** — Run a supplier asynchronously and return a CompletableFuture.

```java
CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
    return fetchData();
});
```

**`CompletableFuture.runAsync(() -> { ... })`** — Run a Runnable asynchronously with no return value.

```java
CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
    sendNotification();
});
```

**`.thenApply(fn)`** — Transform the result when it completes (like `.map`). e.g. `future.thenApply(data -> parseJson(data));`

**`.thenCompose(fn)`** — Chain another async operation (like `.flatMap`). e.g. `future.thenCompose(user -> fetchOrders(user.getId()));`

**`.thenCombine(other, combiner)`** — Combine the results of two independent futures.

```java
userFuture.thenCombine(ordersFuture, (user, orders) -> {
    return new UserWithOrders(user, orders);
});
```

**`.exceptionally(throwable -> fallback)`** — Handle exceptions and provide a fallback value.

```java
future.exceptionally(ex -> {
    log.error("Failed", ex);
    return defaultValue;
});
```

**`CompletableFuture.allOf(f1, f2, ...)`** — Wait for all futures to complete (returns `CompletableFuture<Void>`).

```java
CompletableFuture.allOf(f1, f2, f3).thenRun(() -> {
    // all done
});
```

**`CompletableFuture.anyOf(f1, f2, ...)`** — Complete when ANY future completes (returns `CompletableFuture<Object>`). e.g. `CompletableFuture.anyOf(f1, f2).thenAccept(System.out::println);`

**`.orTimeout(duration, unit)`** — Java 9+: fail with TimeoutException if not completed in time. e.g. `future.orTimeout(5, TimeUnit.SECONDS);`

**`.completeOnTimeout(value, duration, unit)`** — Java 9+: complete with a default value on timeout. e.g. `future.completeOnTimeout(fallback, 5, TimeUnit.SECONDS);`
## Java — ExecutorService & Virtual Threads

**`Executors.newFixedThreadPool(n)`** — Create a thread pool with a fixed number of threads. e.g. `ExecutorService pool = Executors.newFixedThreadPool(10);`

**`Executors.newCachedThreadPool()`** — Creates threads as needed and reuses idle threads (good for short tasks). e.g. `ExecutorService pool = Executors.newCachedThreadPool();`

**`executor.submit(callable)`** — Submit a task and get a Future back. e.g. `Future<String> future = pool.submit(() -> fetchData());`

**`executor.invokeAll(tasks)`** — Submit all tasks and wait for all to complete. e.g. `List<Future<String>> results = pool.invokeAll(tasks);`

**`executor.shutdown()`** — Graceful shutdown — finish running tasks, reject new ones.

```java
pool.shutdown();
pool.awaitTermination(60, TimeUnit.SECONDS);
```

**`Thread.ofVirtual().start(runnable)`** — Java 21+: create a virtual thread (lightweight, like goroutines).

```java
Thread.ofVirtual().name("worker").start(() -> {
    var result = blockingIO();
    process(result);
});
```

**`Executors.newVirtualThreadPerTaskExecutor()`** — Java 21+: executor that creates a new virtual thread per task.

```java
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    executor.submit(() -> fetchUsers());
    executor.submit(() -> fetchPosts());
}
```

**`StructuredTaskScope` (preview)** — Java 21+ preview: structured concurrency — all subtasks complete or cancel together.

```java
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    var user = scope.fork(() -> fetchUser(id));
    var orders = scope.fork(() -> fetchOrders(id));
    scope.join().throwIfFailed();
    return new Response(user.get(), orders.get());
}
```
## Common Patterns — Fan-out/Fan-in

**Fan-out** — Distribute work across multiple concurrent workers.

```
// JS:     Promise.all(urls.map(url => fetch(url)))
// Go:     for _, url := range urls { go fetch(url, ch) }
// Python: await asyncio.gather(*[fetch(url) for url in urls])
```

**Fan-in** — Collect results from multiple concurrent producers into one stream.

```go
// Go: merge channels
func fanIn(channels ...<-chan int) <-chan int {
    out := make(chan int)
    var wg sync.WaitGroup
    for _, ch := range channels {
        wg.Add(1)
        go func(c <-chan int) {
            defer wg.Done()
            for v := range c { out <- v }
        }(ch)
    }
    go func() { wg.Wait(); close(out) }()
    return out
}
```

**Worker pool** — Fixed number of workers processing from a shared queue.

```go
jobs := make(chan Job, 100)
for w := 0; w < numWorkers; w++ {
    go func() {
        for job := range jobs {
            process(job)
        }
    }()
}
```

**Pipeline** — Chain stages where each stage's output feeds the next stage's input.

```go
// stage1 -> stage2 -> stage3
func stage(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for v := range in {
            out <- transform(v)
        }
    }()
    return out
}
```
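The same worker-pool shape works in asyncio, with an `asyncio.Queue` standing in for Go's channel. A runnable sketch (squaring integers stands in for real work; all names are illustrative):

```python
import asyncio

async def worker(jobs: asyncio.Queue, results: list[int]) -> None:
    while True:
        n = await jobs.get()
        results.append(n * n)   # stand-in for real processing
        jobs.task_done()

async def run_pool(items: list[int], num_workers: int = 3) -> list[int]:
    jobs: asyncio.Queue = asyncio.Queue()
    results: list[int] = []
    # fan out: a fixed pool of workers shares one queue
    workers = [asyncio.create_task(worker(jobs, results)) for _ in range(num_workers)]
    for n in items:
        jobs.put_nowait(n)
    await jobs.join()           # block until every queued job is marked done
    for w in workers:
        w.cancel()              # workers loop forever, so cancel them
    await asyncio.gather(*workers, return_exceptions=True)
    # fan in: completion order is nondeterministic, so sort for a stable result
    return sorted(results)

squares = asyncio.run(run_pool([1, 2, 3, 4]))
```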
## Common Patterns — Rate Limiting

**Token bucket** — Allow N operations per time window; tokens refill at a fixed rate.

```go
// Go: golang.org/x/time/rate
limiter := rate.NewLimiter(rate.Every(time.Second/10), 10) // 10/sec, burst 10
if err := limiter.Wait(ctx); err != nil {
    return err
}
```

**Sliding window** — Track request timestamps; allow if the count in the window is below the limit.

```python
import time

class RateLimiter:
    def __init__(self, max_calls, period):
        self.calls = []
        self.max_calls = max_calls
        self.period = period

    def allow(self):
        now = time.time()
        self.calls = [t for t in self.calls if t > now - self.period]
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False
```

**Concurrency limiter (Semaphore)** — Limit how many operations run simultaneously.

```js
async function limitConcurrency(tasks, limit) {
  const results = [];
  const executing = new Set();
  for (const task of tasks) {
    const p = task().then(r => { executing.delete(p); return r; });
    executing.add(p);
    results.push(p);
    if (executing.size >= limit) await Promise.race(executing);
  }
  return Promise.all(results);
}
```
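A token bucket is only a few lines in any language. A minimal, runnable Python sketch (the rate and capacity are arbitrary; `time.monotonic` avoids wall-clock jumps):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity`; refill at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)   # start full so an initial burst passes
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # refill in proportion to elapsed time, never above capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=5)    # 1 token/sec, burst of 5
burst = [bucket.allow() for _ in range(6)]  # sixth rapid call exceeds the burst
```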
## Common Patterns — Retry & Circuit Breaker

**Retry with exponential backoff** — Retry failed operations with increasing delays: `delay = base * 2^attempt + jitter`.

```js
async function retry(fn, maxRetries = 3) {
  for (let i = 0; i <= maxRetries; i++) {
    try { return await fn(); }
    catch (err) {
      if (i === maxRetries) throw err;
      const delay = Math.min(1000 * 2 ** i, 30000);
      const jitter = Math.random() * delay * 0.1;
      await sleep(delay + jitter);
    }
  }
}
```

**Retry with max attempts + deadline** — Combine a retry count with an overall timeout to prevent infinite retrying.

```python
async def retry_with_deadline(fn, max_attempts=5, deadline=30.0):
    start = time.monotonic()
    for attempt in range(max_attempts):
        try:
            return await asyncio.wait_for(fn(), timeout=deadline - (time.monotonic() - start))
        except Exception:
            if attempt == max_attempts - 1 or time.monotonic() - start >= deadline:
                raise
            await asyncio.sleep(min(2 ** attempt, 10))
```

**Circuit breaker states** — CLOSED (normal) → OPEN (failing, reject calls) → HALF-OPEN (test with one call).

**Circuit breaker implementation** — Track the failure count; trip to OPEN after the threshold; periodically test in HALF-OPEN.

```
// Pseudocode:
class CircuitBreaker {
  state = CLOSED; failures = 0; threshold = 5;
  lastFailure = null; cooldown = 30s;
  async call(fn) {
    if (state === OPEN) {
      if (now - lastFailure > cooldown) state = HALF_OPEN;
      else throw new CircuitOpenError();
    }
    try {
      result = await fn();
      if (state === HALF_OPEN) state = CLOSED;
      failures = 0;
      return result;
    } catch (err) {
      failures++;
      lastFailure = now;
      if (failures >= threshold) state = OPEN;
      throw err;
    }
  }
}
```

**Bulkhead pattern** — Isolate failures by partitioning resources (separate thread pools/semaphores per service).

```java
// separate pools per downstream service
ExecutorService userServicePool = Executors.newFixedThreadPool(10);
ExecutorService orderServicePool = Executors.newFixedThreadPool(10);
```

**Hedged requests** — Send duplicate requests to multiple backends; use the first response.

```go
func hedgedRequest(ctx context.Context, urls []string) (Response, error) {
    ctx, cancel := context.WithCancel(ctx)
    defer cancel()
    ch := make(chan Response, len(urls))
    for _, url := range urls {
        go func(u string) {
            if resp, err := fetch(ctx, u); err == nil {
                ch <- resp
            }
        }(url)
    }
    // note: if every backend fails, nothing is ever sent and this receive
    // blocks forever; production code should also select on ctx.Done()
    return <-ch, nil
}
```
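The circuit-breaker pseudocode above translates almost line for line into runnable Python. This sketch keeps the same three states and the same threshold/cooldown logic (the class and exception names are illustrative):

```python
import time

class CircuitOpenError(Exception):
    """Raised when the breaker is OPEN and rejecting calls."""

class CircuitBreaker:
    def __init__(self, threshold: int = 5, cooldown: float = 30.0):
        self.state = "CLOSED"
        self.failures = 0
        self.threshold = threshold
        self.cooldown = cooldown
        self.last_failure = 0.0

    def call(self, fn):
        if self.state == "OPEN":
            if time.monotonic() - self.last_failure > self.cooldown:
                self.state = "HALF_OPEN"   # let one trial call through
            else:
                raise CircuitOpenError("circuit open, rejecting call")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            self.last_failure = time.monotonic()
            if self.failures >= self.threshold:
                self.state = "OPEN"        # trip after `threshold` failures
            raise
        else:
            if self.state == "HALF_OPEN":
                self.state = "CLOSED"      # trial call succeeded, recover
            self.failures = 0
            return result

breaker = CircuitBreaker(threshold=2, cooldown=60.0)
for _ in range(2):  # two consecutive failures trip the breaker
    try:
        breaker.call(lambda: 1 / 0)
    except ZeroDivisionError:
        pass
```

After the loop the breaker is OPEN, and further calls raise `CircuitOpenError` until the cooldown elapses.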