
Hi, I'm the person behind Fedify, Hollo, BotKit, and this website, Hackers' Pub! My main account is at @hongminhee (洪 民憙 / Hong Minhee).


Website: hongminhee.org
GitHub: @dahlia
Hollo: @hongminhee@hollo.social
DEV: @hongminhee
velog: @hongminhee
Qiita: @hongminhee
Zenn: @hongminhee
Matrix: @hongminhee:matrix.org
X: @hongminhee

Your CLI's completion should know what options you've already typed

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

Consider Git's -C option:

git -C /path/to/repo checkout <TAB>

When you hit Tab, Git completes branch names from /path/to/repo, not your current directory. The completion is context-aware—it depends on the value of another option.

Most CLI parsers can't do this. They treat each option in isolation, so completion for --branch has no way of knowing the --repo value. You end up with two unpleasant choices: either show completions for all possible branches across all repositories (useless), or give up on completion entirely for these options.

Optique 0.10.0 introduces a dependency system that solves this problem while preserving full type safety.

Static dependencies with or()

Optique already handles certain kinds of dependent options via the or() combinator:

import { flag, object, option, or, string } from "@optique/core";

const outputOptions = or(
  object({
    json: flag("--json"),
    pretty: flag("--pretty"),
  }),
  object({
    csv: flag("--csv"),
    delimiter: option("--delimiter", string()),
  }),
);

TypeScript knows that if json is true, you'll have a pretty field, and if csv is true, you'll have a delimiter field. The parser enforces this at runtime, and shell completion will suggest --pretty only when --json is present.
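
A rough sketch of the inferred result type (the exact shape Optique infers may differ slightly):

// A sketch only; whether the flags come out as `true` or `boolean` depends on Optique's flag() typing.
type OutputOptions =
  | { json: true; pretty: boolean }
  | { csv: true; delimiter: string };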

This works well when the valid combinations are known at definition time. But it can't handle cases where valid values depend on runtime input—like branch names that vary by repository.

Runtime dependencies

Common scenarios include:

  • A deployment CLI where --environment affects which services are available
  • A database tool where --connection affects which tables can be completed
  • A cloud CLI where --project affects which resources are shown

In each case, you can't know the valid values until you know what the user typed for the dependency option. Optique 0.10.0 introduces dependency() and derive() to handle exactly this.

The dependency system

The core idea is simple: mark one option as a dependency source, then create derived parsers that use its value.

import {
  choice,
  dependency,
  message,
  object,
  option,
  string,
} from "@optique/core";

function getRefsFromRepo(repoPath: string): string[] {
  // In real code, this would read from the Git repository
  return ["main", "develop", "feature/login"];
}

// Mark as a dependency source
const repoParser = dependency(string());

// Create a derived parser
const refParser = repoParser.derive({
  metavar: "REF",
  factory: (repoPath) => {
    const refs = getRefsFromRepo(repoPath);
    return choice(refs);
  },
  defaultValue: () => ".",
});

const parser = object({
  repo: option("--repo", repoParser, {
    description: message`Path to the repository`,
  }),
  ref: option("--ref", refParser, {
    description: message`Git reference`,
  }),
});

The factory function is where the dependency gets resolved. It receives the actual value the user provided for --repo and returns a parser that validates against refs from that specific repository.
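
To make this concrete, here's a minimal usage sketch, assuming the run() entry point from @optique/run (the exact result typing depends on the option modifiers you use):

import { run } from "@optique/run";

// Parses process.argv against the parser defined above. Values passed to
// --ref are validated (and completed) against the refs of the given --repo.
const config = run(parser);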

Under the hood, Optique uses a three-phase parsing strategy:

  1. Parse all options in a first pass, collecting dependency values
  2. Call factory functions with the collected values to create concrete parsers
  3. Re-parse derived options using those dynamically created parsers

This means both validation and completion work correctly—if the user has already typed --repo /some/path, the --ref completion will show refs from that path.

Repository-aware completion with @optique/git

The @optique/git package provides async value parsers that read from Git repositories. Combined with the dependency system, you can build CLIs with repository-aware completion:

import {
  command,
  dependency,
  message,
  object,
  option,
  string,
} from "@optique/core";
import { gitBranch } from "@optique/git";

const repoParser = dependency(string());

const branchParser = repoParser.deriveAsync({
  metavar: "BRANCH",
  factory: (repoPath) => gitBranch({ dir: repoPath }),
  defaultValue: () => ".",
});

const checkout = command(
  "checkout",
  object({
    repo: option("--repo", repoParser, {
      description: message`Path to the repository`,
    }),
    branch: option("--branch", branchParser, {
      description: message`Branch to checkout`,
    }),
  }),
);

Now when you type my-cli checkout --repo /path/to/project --branch <TAB>, the completion will show branches from /path/to/project. The defaultValue of "." means that if --repo isn't specified, it falls back to the current directory.

Multiple dependencies

Sometimes a parser needs values from multiple options. The deriveFrom() function handles this:

import {
  choice,
  dependency,
  deriveFrom,
  message,
  object,
  option,
} from "@optique/core";

function getAvailableServices(env: string, region: string): string[] {
  return [`${env}-api-${region}`, `${env}-web-${region}`];
}

const envParser = dependency(choice(["dev", "staging", "prod"] as const));
const regionParser = dependency(choice(["us-east", "eu-west"] as const));

const serviceParser = deriveFrom({
  dependencies: [envParser, regionParser] as const,
  metavar: "SERVICE",
  factory: (env, region) => {
    const services = getAvailableServices(env, region);
    return choice(services);
  },
  defaultValues: () => ["dev", "us-east"] as const,
});

const parser = object({
  env: option("--env", envParser, {
    description: message`Deployment environment`,
  }),
  region: option("--region", regionParser, {
    description: message`Cloud region`,
  }),
  service: option("--service", serviceParser, {
    description: message`Service to deploy`,
  }),
});

The factory receives values in the same order as the dependency array. If some dependencies aren't provided, Optique uses the defaultValues.
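
As a usage sketch (again assuming @optique/run; the env and region types follow from the choice() definitions above):

import { run } from "@optique/run";

const config = run(parser);
// config.env: "dev" | "staging" | "prod"
// config.region: "us-east" | "eu-west"
// config.service: one of the strings returned by getAvailableServices(env, region)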

Async support

Real-world dependency resolution often involves I/O—reading from Git repositories, querying APIs, accessing databases. Optique provides async variants for these cases:

import { dependency, string } from "@optique/core";
import { gitBranch } from "@optique/git";

const repoParser = dependency(string());

const branchParser = repoParser.deriveAsync({
  metavar: "BRANCH",
  factory: (repoPath) => gitBranch({ dir: repoPath }),
  defaultValue: () => ".",
});

The @optique/git package uses isomorphic-git under the hood, so gitBranch(), gitTag(), and gitRef() all work in both Node.js and Deno.

There's also deriveSync() for when you need to be explicit about synchronous behavior, and deriveFromAsync() for multiple async dependencies.
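
Here's a hedged sketch of deriveFromAsync(), assuming it mirrors deriveFrom() but lets the factory return an async value parser such as gitBranch(); check the documentation for the exact signature:

import { choice, dependency, deriveFromAsync, string } from "@optique/core";
import { gitBranch } from "@optique/git";

const envParser = dependency(choice(["dev", "staging", "prod"] as const));
const repoParser = dependency(string());

// Assumed shape: the same options as deriveFrom(), with an async-capable factory.
const branchParser = deriveFromAsync({
  dependencies: [envParser, repoParser] as const,
  metavar: "BRANCH",
  factory: (env, repoPath) => gitBranch({ dir: repoPath }), // env could pick a remote, etc.
  defaultValues: () => ["dev", "."] as const,
});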

Wrapping up

The dependency system lets you build CLIs where options are aware of each other—not just for validation, but for shell completion too. You get type safety throughout: TypeScript knows the relationship between your dependency sources and derived parsers, and invalid combinations are caught at compile time.

This is particularly useful for tools that interact with external systems where the set of valid values isn't known until runtime. Git repositories, cloud providers, databases, container registries—anywhere the completion choices depend on context the user has already provided.

This feature will be available in Optique 0.10.0. To try the pre-release:

deno add jsr:@optique/core@0.10.0-dev.311

Or with npm:

npm install @optique/core@0.10.0-dev.311

See the documentation for more details.


Building CLI apps with TypeScript in 2026

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

We've all been there. You start a quick TypeScript CLI with process.argv.slice(2), add a couple of options, and before you know it you're drowning in if/else blocks and parseInt calls. It works, until it doesn't.

In this guide, we'll move from manual argument parsing to a fully type-safe CLI with subcommands, mutually exclusive options, and shell completion.

The naïve approach: parsing process.argv

Let's start with the most basic approach. Say we want a greeting program that takes a name and optionally repeats the greeting:

// greet.ts
const args = process.argv.slice(2);

let name: string | undefined;
let count = 1;

for (let i = 0; i < args.length; i++) {
  if (args[i] === "--name" || args[i] === "-n") {
    name = args[++i];
  } else if (args[i] === "--count" || args[i] === "-c") {
    count = parseInt(args[++i], 10);
  }
}

if (!name) {
  console.error("Error: --name is required");
  process.exit(1);
}

for (let i = 0; i < count; i++) {
  console.log(`Hello, ${name}!`);
}

Compile it and run node greet.js --name Alice --count 3 and you'll get three greetings.

But this approach is fragile. count could be NaN if someone passes --count foo, and we'd silently proceed. There's no help text. If someone passes --name without a value, we'd read the next option as the name. And the boilerplate grows fast with each new option.

The traditional libraries

You've probably heard of Commander.js and Yargs. They've been around for years and solve the basic problems:

// With Commander.js
import { program } from "commander";

program
  .requiredOption("-n, --name <n>", "Name to greet")
  .option("-c, --count <number>", "Number of times to greet", "1")
  .parse();

const opts = program.opts();

These libraries handle help text, option parsing, and basic validation. But they were designed before TypeScript became mainstream, and the type safety is bolted on rather than built in.

The real problem shows up when you need mutually exclusive options. Say your CLI works either in "server mode" (with --port and --host) or "client mode" (with --url). With these libraries, you end up with a config object where all options are potentially present, and you're left writing runtime checks to ensure the user didn't mix incompatible flags. TypeScript can't help you because the types don't reflect the actual constraints.
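
To make that concrete, here's a sketch of the manual check you end up writing on top of the Commander example above (option names are illustrative):

const opts = program.opts();

// Nothing in the types prevents --url from being combined with --port/--host,
// so the constraint has to live in runtime checks like this one.
if (opts.url && (opts.port !== undefined || opts.host !== undefined)) {
  console.error("Error: --url cannot be combined with --port/--host");
  process.exit(1);
}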

Enter Optique

Optique takes a different approach. Instead of configuring options declaratively, you build parsers by composing smaller parsers together. The types flow naturally from this composition, so TypeScript always knows exactly what shape your parsed result will have.

Optique works across JavaScript runtimes: Node.js, Deno, and Bun are all supported. The core parsing logic has no runtime-specific dependencies, so you can even use it in browsers if you need to parse CLI-like arguments in a web context.

Let's rebuild our greeting program:

import { object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { integer, string } from "@optique/core/valueparser";
import { withDefault } from "@optique/core/modifiers";
import { run } from "@optique/run";

const parser = object({
  name: option("-n", "--name", string()),
  count: withDefault(option("-c", "--count", integer({ min: 1 })), 1),
});

const config = run(parser);
// config is typed as { name: string; count: number }

for (let i = 0; i < config.count; i++) {
  console.log(`Hello, ${config.name}!`);
}

Types are inferred automatically. config.name is string, not string | undefined. config.count is number, guaranteed to be at least 1. Validation is built in: integer({ min: 1 }) rejects non-integers and values below 1 with clear error messages. Help text is generated automatically, and the run() function handles errors and exits with appropriate codes.

Install it with your package manager of choice:

npm add @optique/core @optique/run
# or: pnpm add, yarn add, bun add, deno add jsr:@optique/core jsr:@optique/run

Building up: a file converter

Let's build something more realistic: a file converter that reads from an input file, converts to a specified format, and writes to an output file.

import { object } from "@optique/core/constructs";
import { withDefault } from "@optique/core/modifiers";
import { argument, option } from "@optique/core/primitives";
import { choice, string } from "@optique/core/valueparser";
import { run } from "@optique/run";

const parser = object({
  input: argument(string({ metavar: "INPUT" })),
  output: option("-o", "--output", string({ metavar: "FILE" })),
  format: withDefault(
    option("-f", "--format", choice(["json", "yaml", "toml"])),
    "json"
  ),
  pretty: option("-p", "--pretty"),
  verbose: option("-v", "--verbose"),
});

const config = run(parser, {
  help: "both",
  version: { mode: "both", value: "1.0.0" },
});

// config.input: string
// config.output: string
// config.format: "json" | "yaml" | "toml"
// config.pretty: boolean
// config.verbose: boolean

The type of config.format isn't just string. It's the union "json" | "yaml" | "toml". TypeScript will catch typos like config.format === "josn" at compile time.

The choice() parser is useful for any option with a fixed set of valid values: log levels, output formats, environment names, and so on. You get both runtime validation (invalid values are rejected with helpful error messages) and compile-time checking (TypeScript knows the exact set of possible values).
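
For instance, a log-level option built the same way (a small sketch reusing the imports from the converter example; the option name is illustrative):

const logLevel = withDefault(
  option("-l", "--log-level", choice(["debug", "info", "warn", "error"])),
  "info",
);
// Inferred as "debug" | "info" | "warn" | "error"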

Mutually exclusive options

Now let's tackle the case that trips up most CLI libraries: mutually exclusive options. Say our tool can either run as a server or connect as a client, but not both:

import { object, or } from "@optique/core/constructs";
import { withDefault } from "@optique/core/modifiers";
import { argument, constant, option } from "@optique/core/primitives";
import { integer, string, url } from "@optique/core/valueparser";
import { run } from "@optique/run";

const parser = or(
  // Server mode
  object({
    mode: constant("server"),
    port: option("-p", "--port", integer({ min: 1, max: 65535 })),
    host: withDefault(option("-h", "--host", string()), "0.0.0.0"),
  }),
  // Client mode
  object({
    mode: constant("client"),
    url: argument(url()),
  }),
);

const config = run(parser);

The or() combinator tries each alternative in order. The first one that successfully parses wins. The constant() parser adds a literal value to the result without consuming any input, which serves as a discriminator.

TypeScript infers a discriminated union:

type Config =
  | { mode: "server"; port: number; host: string }
  | { mode: "client"; url: URL };

Now you can write type-safe code that handles each mode:

if (config.mode === "server") {
  console.log(`Starting server on ${config.host}:${config.port}`);
} else {
  console.log(`Connecting to ${config.url.hostname}`);
}

Try accessing config.url in the server branch. TypeScript won't let you. The compiler knows that when mode is "server", only port and host exist.

This is the key difference from configuration-based libraries. With Commander or Yargs, you'd get a type like { port?: number; host?: string; url?: string } and have to check at runtime which combination of fields is actually present. With Optique, the types match the actual constraints of your CLI.

Subcommands

For larger tools, you'll want subcommands. Optique handles this with the command() parser:

import { object, or } from "@optique/core/constructs";
import { optional } from "@optique/core/modifiers";
import { argument, command, constant, option } from "@optique/core/primitives";
import { string } from "@optique/core/valueparser";
import { run } from "@optique/run";

const parser = or(
  command("add", object({
    action: constant("add"),
    key: argument(string({ metavar: "KEY" })),
    value: argument(string({ metavar: "VALUE" })),
  })),
  command("remove", object({
    action: constant("remove"),
    key: argument(string({ metavar: "KEY" })),
  })),
  command("list", object({
    action: constant("list"),
    pattern: optional(option("-p", "--pattern", string())),
  })),
);

const result = run(parser, { help: "both" });

switch (result.action) {
  case "add":
    console.log(`Adding ${result.key}=${result.value}`);
    break;
  case "remove":
    console.log(`Removing ${result.key}`);
    break;
  case "list":
    console.log(`Listing${result.pattern ? ` (filter: ${result.pattern})` : ""}`);
    break;
}

Each subcommand gets its own help text. Run myapp add --help and you'll see only the options relevant to add. Run myapp --help and you'll see a summary of all available commands.

The pattern here is the same as mutually exclusive options: or() to combine alternatives, constant() to add a discriminator. This consistency is one of Optique's strengths. Once you understand the basic combinators, you can build arbitrarily complex CLI structures by composing them.

Shell completion

Optique has built-in shell completion for Bash, zsh, fish, PowerShell, and Nushell. Enable it by passing completion: "both" to run():

const config = run(parser, {
  help: "both",
  version: { mode: "both", value: "1.0.0" },
  completion: "both",
});

Users can then generate completion scripts:

$ myapp --completion bash >> ~/.bashrc
$ myapp --completion zsh >> ~/.zshrc
$ myapp --completion fish > ~/.config/fish/completions/myapp.fish

The completions are context-aware. They know about your subcommands, option values, and choice() alternatives. Type myapp --format <TAB> and you'll see json, yaml, toml as suggestions. Type myapp a<TAB> and it'll complete to myapp add.

Completion support is often an afterthought in CLI tools, but it makes a real difference in user experience. With Optique, you get it essentially for free.

Integrating with validation libraries

Already using Zod for validation in your project? The @optique/zod package lets you reuse those schemas as CLI value parsers:

import { z } from "zod";
import { zod } from "@optique/zod";
import { option } from "@optique/core/primitives";

const email = option("--email", zod(z.string().email()));
const port = option("--port", zod(z.coerce.number().int().min(1).max(65535)));

Your existing validation logic just works. The Zod error messages are passed through to the user, so you get the same helpful feedback you're used to.

Prefer Valibot? The @optique/valibot package works the same way:

import * as v from "valibot";
import { valibot } from "@optique/valibot";
import { option } from "@optique/core/primitives";

const email = option("--email", valibot(v.pipe(v.string(), v.email())));

Valibot's bundle size is significantly smaller than Zod's (~10KB vs ~52KB), which can matter for CLI tools where startup time is noticeable.

Tips

A few things I've learned building CLIs with Optique:

Start simple. Begin with object() and basic options. Add or() for mutually exclusive groups only when you need them. It's easy to over-engineer CLI parsers.

Use descriptive metavars. Instead of string(), write string({ metavar: "FILE" }) or string({ metavar: "URL" }). The metavar appears in help text and error messages, so it's worth the extra few characters.

Leverage withDefault(). It's better than making options optional and checking for undefined everywhere. Your code becomes cleaner when you can assume values are always present.

Test your parser. Optique's core parsing functions work without process.argv, so you can unit test your parser logic:

import { parse } from "@optique/core/parser";
import assert from "node:assert/strict";

const result = parse(parser, ["--name", "Alice", "--count", "3"]);
if (result.success) {
  assert.equal(result.value.name, "Alice");
  assert.equal(result.value.count, 3);
}

This is especially valuable for complex parsers with many edge cases.

Going further

We've covered the fundamentals, but Optique has more to offer:

  • Async value parsers for validating against external sources, like checking if a Git branch exists or if a URL is reachable
  • Path validation with path() for checking file existence, directory structure, and file extensions
  • Custom value parsers for domain-specific types (though Zod/Valibot integration is usually easier)
  • Reusable option groups with merge() for sharing common options across subcommands (see the sketch after this list)
  • The @optique/temporal package for parsing dates and times using the Temporal API
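
Of these, reusable option groups are worth a quick sketch. This assumes merge() combines object() parsers into a single record-typed parser; check the documentation for its exact signature:

import { merge, object } from "@optique/core/constructs";
import { optional } from "@optique/core/modifiers";
import { argument, option } from "@optique/core/primitives";
import { string } from "@optique/core/valueparser";

// Options shared by several subcommands
const commonOptions = object({
  verbose: option("-v", "--verbose"),
  config: optional(option("-c", "--config", string({ metavar: "FILE" }))),
});

// Combined with a subcommand-specific parser (assumed merge() behavior)
const addOptions = merge(
  commonOptions,
  object({ key: argument(string({ metavar: "KEY" })) }),
);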

Check out the documentation for the full picture. The tutorial walks through the concepts in more depth, and the cookbook has patterns for common scenarios.

That's it

Building CLIs in TypeScript doesn't have to mean fighting with types or writing endless runtime validation. Optique lets you express constraints in a way that TypeScript actually understands, so the compiler catches mistakes before they reach production.

The source is on GitHub, and packages are available on both npm and JSR.


Questions or feedback? Find me on the fediverse or open an issue on the GitHub repo.


Designing type-safe sync/async mode support in TypeScript

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

I recently added sync/async mode support to Optique, a type-safe CLI parser for TypeScript. It turned out to be one of the trickier features I've implemented—the object() combinator alone needed to compute a combined mode from all its child parsers, and TypeScript's inference kept hitting edge cases.

What is Optique?

Optique is a type-safe, combinatorial CLI parser for TypeScript, inspired by Haskell's optparse-applicative. Instead of decorators or builder patterns, you compose small parsers into larger ones using combinators, and TypeScript infers the result types.

Here's a quick taste:

import { object } from "@optique/core/constructs";
import { argument, option } from "@optique/core/primitives";
import { string, integer } from "@optique/core/valueparser";
import { run } from "@optique/run";

const cli = object({
  name: argument(string()),
  count: option("-n", "--count", integer()),
});

// TypeScript infers: { name: string; count: number | undefined }
const result = run(cli);  // sync by default

The type inference works through arbitrarily deep compositions—in most cases, you don't need explicit type annotations.

How it started

Lucas Garron (@lgarron) opened an issue requesting async support for shell completions. He wanted to provide Tab-completion suggestions by running shell commands like git for-each-ref to list branches and tags.

// Lucas's example: fetching Git branches and tags in parallel
const [branches, tags] = await Promise.all([
  $`git for-each-ref --format='%(refname:short)' refs/heads/`.text(),
  $`git for-each-ref --format='%(refname:short)' refs/tags/`.text(),
]);

At first, I didn't like the idea. Optique's entire API was synchronous, which made it simpler to reason about and avoided the “async infection” problem where one async function forces everything upstream to become async. I argued that shell completion should be near-instantaneous, and if you need async data, you should cache it at startup.

But Lucas pushed back. The filesystem is a database, and many useful completions inherently require async work—Git refs change constantly, and pre-caching everything at startup doesn't scale for large repos. Fair point.

What I needed to solve

So, how do you support both sync and async execution modes in a composable parser library while maintaining type safety?

The key requirements were:

  • parse() returns T or Promise<T>
  • complete() returns T or Promise<T>
  • suggest() returns Iterable<T> or AsyncIterable<T>
  • When combining parsers, if any parser is async, the combined result must be async
  • Existing sync code should continue to work unchanged

The fourth requirement is the tricky one. Consider this:

const syncParser = flag("--verbose");
const asyncParser = option("--branch", asyncValueParser);

// What's the type of this?
const combined = object({ verbose: syncParser, branch: asyncParser });

The combined parser should be async because one of its fields is async. This means we need type-level logic to compute the combined mode.

Five design options

I explored five different approaches, each with its own trade-offs.

Option A: conditional types with mode parameter

Add a mode type parameter to Parser and use conditional types:

type Mode = "sync" | "async";

type ModeValue<M extends Mode, T> = M extends "async" ? Promise<T> : T;

interface Parser<M extends Mode, TValue, TState> {
  parse(context: ParserContext<TState>): ModeValue<M, ParserResult<TState>>;
  // ...
}

The challenge is computing combined modes:

type CombineModes<T extends Record<string, Parser<any, any, any>>> =
  T[keyof T] extends Parser<infer M, any, any>
    ? M extends "async" ? "async" : "sync"
    : never;

Option B: mode parameter with default value

A variant of Option A, but place the mode parameter first with a default of "sync":

interface Parser<M extends Mode = "sync", TValue = unknown, TState = unknown> {
  readonly $mode: M;
  // ...
}

The default value maintains backward compatibility—existing user code keeps working without changes.

Option C: separate interfaces

Define completely separate Parser and AsyncParser interfaces with explicit conversion:

interface Parser<TValue, TState> { /* sync methods */ }
interface AsyncParser<TValue, TState> { /* async methods */ }

function toAsync<T, S>(parser: Parser<T, S>): AsyncParser<T, S>;

Simpler to understand, but requires code duplication and explicit conversions.

Option D: union return types for suggest() only

The minimal approach. Only allow suggest() to be async:

interface Parser<TValue, TState> {
  parse(context: ParserContext<TState>): ParserResult<TState>;  // always sync
  suggest(context: ParserContext<TState>, prefix: string):
    Iterable<Suggestion> | AsyncIterable<Suggestion>;  // can be either
}

This addresses the original use case but doesn't help if async parse() is ever needed.

Option E: fp-ts style HKT simulation

Use the technique from fp-ts to simulate Higher-Kinded Types:

interface URItoKind<A> {
  Identity: A;
  Promise: Promise<A>;
}

type Kind<F extends keyof URItoKind<any>, A> = URItoKind<A>[F];

interface Parser<F extends keyof URItoKind<any>, TValue, TState> {
  parse(context: ParserContext<TState>): Kind<F, ParserResult<TState>>;
}

The most flexible approach, but with a steep learning curve.

Testing the idea

Rather than commit to an approach based on theoretical analysis, I created a prototype to test how well TypeScript handles the type inference in practice. I published my findings in the GitHub issue:

Both approaches correctly handle the “any async → all async” rule at the type level. (…) Complex conditional types like ModeValue<CombineParserModes<T>, ParserResult<TState>> sometimes require explicit type casting in the implementation. This only affects library internals. The user-facing API remains clean.

The prototype validated that Option B (explicit mode parameter with default) would work. I chose it for these reasons:

  • Backward compatible: The default "sync" keeps existing code working
  • Explicit: The mode is visible in both types and runtime (via a $mode property)
  • Debuggable: Easy to inspect the current mode at runtime
  • Better IDE support: Type information is more predictable

How CombineModes works

The CombineModes type computes whether a combined parser should be sync or async:

type CombineModes<T extends readonly Mode[]> = "async" extends T[number]
  ? "async"
  : "sync";

This type checks if "async" is present anywhere in the tuple of modes. If so, the result is "async"; otherwise, it's "sync".
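
A couple of concrete instantiations:

type A = CombineModes<["sync", "sync"]>;          // "sync"
type B = CombineModes<["sync", "async", "sync"]>; // "async"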

For combinators like object(), I needed to extract modes from parser objects and combine them:

// Extract the mode from a single parser
type ParserMode<T> = T extends Parser<infer M, unknown, unknown> ? M : never;

// Combine modes from all values in a record of parsers
type CombineObjectModes<T extends Record<string, Parser<Mode, unknown, unknown>>> =
  CombineModes<{ [K in keyof T]: ParserMode<T[K]> }[keyof T][]>;

Runtime implementation

The type system handles compile-time safety, but the implementation also needs runtime logic. Each parser has a $mode property that indicates its execution mode:

const syncParser = option("-n", "--name", string());
console.log(syncParser.$mode);  // "sync"

const asyncParser = option("-b", "--branch", asyncValueParser);
console.log(asyncParser.$mode);  // "async"

Combinators compute their mode at construction time:

function object<T extends Record<string, Parser<Mode, unknown, unknown>>>(
  parsers: T
): Parser<CombineObjectModes<T>, ObjectValue<T>, ObjectState<T>> {
  const parserKeys = Reflect.ownKeys(parsers);
  const combinedMode: Mode = parserKeys.some(
    (k) => parsers[k as keyof T].$mode === "async"
  ) ? "async" : "sync";

  // ... implementation
}

Refining the API

Lucas suggested an important refinement during our discussion. Instead of having run() automatically choose between sync and async based on the parser mode, he proposed separate functions:

Perhaps run(…) could be automatic, and runSync(…) and runAsync(…) could enforce that the inferred type matches what is expected.

So we ended up with:

  • run(): automatic based on parser mode
  • runSync(): enforces sync mode at compile time
  • runAsync(): enforces async mode at compile time

// Automatic: returns T for sync parsers, Promise<T> for async
const result1 = run(syncParser);  // string
const result2 = run(asyncParser);  // Promise<string>

// Explicit: compile-time enforcement
const result3 = runSync(syncParser);  // string
const result4 = runAsync(asyncParser);  // Promise<string>

// Compile error: can't use runSync with async parser
const result5 = runSync(asyncParser);  // Type error!

I applied the same pattern to parse()/parseSync()/parseAsync() and suggest()/suggestSync()/suggestAsync() in the facade functions.

Creating async value parsers

With the new API, creating an async value parser for Git branches looks like this:

import type { Suggestion } from "@optique/core/parser";
import type { ValueParser, ValueParserResult } from "@optique/core/valueparser";

function gitRef(): ValueParser<"async", string> {
  return {
    $mode: "async",
    metavar: "REF",
    parse(input: string): Promise<ValueParserResult<string>> {
      return Promise.resolve({ success: true, value: input });
    },
    format(value: string): string {
      return value;
    },
    async *suggest(prefix: string): AsyncIterable<Suggestion> {
      const { $ } = await import("bun");
      const [branches, tags] = await Promise.all([
        $`git for-each-ref --format='%(refname:short)' refs/heads/`.text(),
        $`git for-each-ref --format='%(refname:short)' refs/tags/`.text(),
      ]);
      for (const ref of [...branches.split("\n"), ...tags.split("\n")]) {
        const trimmed = ref.trim();
        if (trimmed && trimmed.startsWith(prefix)) {
          yield { kind: "literal", text: trimmed };
        }
      }
    },
  };
}

Notice that parse() returns Promise.resolve() even though it's synchronous. This is because the ValueParser<"async", T> type requires all methods to use async signatures. Lucas pointed out this is a minor ergonomic issue. If only suggest() needs to be async, you still have to wrap parse() in a Promise.

I considered per-method mode granularity (e.g., ValueParser<ParseMode, SuggestMode, T>), but the implementation complexity would multiply substantially. For now, the workaround is simple enough:

// Option 1: Use Promise.resolve()
parse(input) {
  return Promise.resolve({ success: true, value: input });
}

// Option 2: Mark as async and suppress the linter
// biome-ignore lint/suspicious/useAwait: sync implementation in async ValueParser
async parse(input) {
  return { success: true, value: input };
}

What it cost

Supporting dual modes added significant complexity to Optique's internals. Every combinator needed updates:

  • Type signatures grew more complex with mode parameters
  • Mode propagation logic had to be added to every combinator
  • Dual implementations were needed for sync and async code paths
  • Type casts were sometimes necessary in the implementation to satisfy TypeScript

For example, the object() combinator went from around 100 lines to around 250 lines. The internal implementation uses conditional logic based on the combined mode:

if (combinedMode === "async") {
  return {
    $mode: "async" as M,
    // ... async implementation with Promise chains
    async parse(context) {
      // ... await each field's parse result
    },
  };
} else {
  return {
    $mode: "sync" as M,
    // ... sync implementation
    parse(context) {
      // ... directly call each field's parse
    },
  };
}

This duplication is the cost of supporting both modes without runtime overhead for sync-only use cases.

Lessons learned

Listen to users, but validate with prototypes

My initial instinct was to resist async support. Lucas's persistence and concrete examples changed my mind, but I validated the approach with a prototype before committing. The prototype revealed practical issues (like TypeScript inference limits) that pure design analysis would have missed.

Backward compatibility is worth the complexity

Making "sync" the default mode meant existing code continued to work unchanged. This was a deliberate choice. Breaking changes should require user action, not break silently.

Unified mode vs per-method granularity

I chose unified mode (all methods share the same sync/async mode) over per-method granularity. This means users occasionally write Promise.resolve() for methods that don't actually need async, but the alternative was multiplicative complexity in the type system.

Designing in public

The entire design process happened in a public GitHub issue. Lucas, Giuseppe, and others contributed ideas that shaped the final API. The runSync()/runAsync() distinction came directly from Lucas's feedback.

Conclusion

This was one of the more challenging features I've implemented in Optique. TypeScript's type system is powerful enough to encode the “any async means all async” rule at compile time, but getting there required careful design work and prototyping.

What made it work: conditional types like ModeValue<M, T> can bridge the gap between sync and async worlds. You pay for it with implementation complexity, but the user-facing API stays clean and type-safe.

Optique 0.9.0 with async support is currently in pre-release testing. If you'd like to try it, check out PR #70 or install the pre-release:

npm  add       @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212
deno add --jsr @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212

Feedback is welcome!


Logging in Node.js (or Deno or Bun or edge functions) in 2026

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

It's 2 AM. Something is wrong in production. Users are complaining, but you're not sure what's happening—your only clues are a handful of console.log statements you sprinkled around during development. Half of them say things like “here” or “this works.” The other half dump entire objects that scroll off the screen. Good luck.

We've all been there. And yet, setting up “proper” logging often feels like overkill. Traditional logging libraries like winston or Pino come with their own learning curves, configuration formats, and assumptions about how you'll deploy your app. If you're working with edge functions or trying to keep your bundle small, adding a logging library can feel like bringing a sledgehammer to hang a picture frame.

I'm a fan of the “just enough” approach—more than raw console.log, but without the weight of a full-blown logging framework. We'll start from console.log(), understand its real limitations (not the exaggerated ones), and work toward a setup that's actually useful. I'll be using LogTape for the examples—it's a zero-dependency logging library that works across Node.js, Deno, Bun, and edge functions, and stays out of your way when you don't need it.

Starting with console methods—and where they fall short

The console object is JavaScript's great equalizer. It's built-in, it works everywhere, and it requires zero setup. You even get basic severity levels: console.debug(), console.info(), console.warn(), and console.error(). In browser DevTools and some terminal environments, these show up with different colors or icons.

console.debug("Connecting to database...");
console.info("Server started on port 3000");
console.warn("Cache miss for user 123");
console.error("Failed to process payment");

For small scripts or quick debugging, this is perfectly fine. But once your application grows beyond a few files, the cracks start to show:

No filtering without code changes. Want to hide debug messages in production? You'll need to wrap every console.debug() call in a conditional, or find-and-replace them all. There's no way to say “show me only warnings and above” at runtime.
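
In practice that means scattering ad-hoc guards like this around (a sketch; the environment check is illustrative):

const DEBUG = process.env.NODE_ENV !== "production";

if (DEBUG) console.debug("Connecting to database...");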

Everything goes to the console. What if you want to write logs to a file? Send errors to Sentry? Stream logs to CloudWatch? You'd have to replace every console.* call with something else—and hope you didn't miss any.

No context about where logs come from. When your app has dozens of modules, a log message like “Connection failed” doesn't tell you much. Was it the database? The cache? A third-party API? You end up prefixing every message manually: console.error("[database] Connection failed").

No structured data. Modern log analysis tools work best with structured data (JSON). But console.log("User logged in", { userId: 123 }) just prints User logged in { userId: 123 } as a string—not very useful for querying later.

Libraries pollute your logs. If you're using a library that logs with console.*, those messages show up whether you want them or not. And if you're writing a library, your users might not appreciate unsolicited log messages.

What you actually need from a logging system

Before diving into code, let's think about what would actually solve the problems above. Not a wish list of features, but the practical stuff that makes a difference when you're debugging at 2 AM or trying to understand why requests are slow.

Log levels with filtering

A logging system should let you categorize messages by severity—trace, debug, info, warning, error, fatal—and then filter them based on what you need. During development, you want to see everything. In production, maybe just warnings and above. The key is being able to change this without touching your code.

Categories

When your app grows beyond a single file, you need to know where logs are coming from. A good logging system lets you tag logs with categories like ["my-app", "database"] or ["my-app", "auth", "oauth"]. Even better, it lets you set different log levels for different categories—maybe you want debug logs from the database module but only warnings from everything else.

Sinks (multiple output destinations)

“Sink” is just a fancy word for “where logs go.” You might want logs to go to the console during development, to files in production, and to an external service like Sentry or CloudWatch for errors. A good logging system lets you configure multiple sinks and route different logs to different destinations.

Structured logging

Instead of logging strings, you log objects with properties. This makes logs machine-readable and queryable:

// Instead of this:
logger.info("User 123 logged in from 192.168.1.1");

// You do this:
logger.info("User logged in", { userId: 123, ip: "192.168.1.1" });

Now you can search for all logs where userId === 123 or filter by IP address.

Context for request tracing

In a web server, you often want all logs from a single request to share a common identifier (like a request ID). This makes it possible to trace a request's journey through your entire system.

Getting started with LogTape

There are plenty of logging libraries out there. winston has been around forever and has a plugin for everything. Pino is fast and outputs JSON. bunyan, log4js, signale—the list goes on.

So why LogTape? A few reasons stood out to me:

Zero dependencies. Not “few dependencies”—actually zero. In an era where a single npm install can pull in hundreds of packages, this matters for security, bundle size, and not having to wonder why your lockfile just changed.

Works everywhere. The same code runs on Node.js, Deno, Bun, browsers, and edge functions like Cloudflare Workers. No polyfills, no conditional imports, no “this feature only works on Node.”

Doesn't force itself on users. If you're writing a library, you can add logging without your users ever knowing—unless they want to see the logs. This is a surprisingly rare feature.

Let's set it up:

npm add @logtape/logtape       # npm
pnpm add @logtape/logtape      # pnpm
yarn add @logtape/logtape      # Yarn
deno add jsr:@logtape/logtape  # Deno
bun add @logtape/logtape       # Bun

Configuration happens once, at your application's entry point:

import { configure, getConsoleSink, getLogger } from "@logtape/logtape";

await configure({
  sinks: {
    console: getConsoleSink(),  // Where logs go
  },
  loggers: [
    { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },  // What to log
  ],
});

// Now you can log from anywhere in your app:
const logger = getLogger(["my-app", "server"]);
logger.info`Server started on port 3000`;
logger.debug`Request received: ${{ method: "GET", path: "/api/users" }}`;

Notice a few things:

  1. Configuration is explicit. You decide where logs go (sinks) and which logs to show (lowestLevel).
  2. Categories are hierarchical. The logger ["my-app", "server"] inherits settings from ["my-app"].
  3. Template literals work. You can use backticks for a natural logging syntax.

Categories and filtering: Controlling log verbosity

Here's a scenario: you're debugging a database issue. You want to see every query, every connection attempt, every retry. But you don't want to wade through thousands of HTTP request logs to find them.

Categories let you solve this. Instead of one global log level, you can set different verbosity for different parts of your application.

await configure({
  sinks: {
    console: getConsoleSink(),
  },
  loggers: [
    { category: ["my-app"], lowestLevel: "info", sinks: ["console"] },  // Default: info and above
    { category: ["my-app", "database"], lowestLevel: "debug", sinks: ["console"] },  // DB module: show debug too
  ],
});

Now when you log from different parts of your app:

// In your database module:
const dbLogger = getLogger(["my-app", "database"]);
dbLogger.debug`Executing query: ${sql}`;  // This shows up

// In your HTTP module:
const httpLogger = getLogger(["my-app", "http"]);
httpLogger.debug`Received request`;  // This is filtered out (below "info")
httpLogger.info`GET /api/users 200`;  // This shows up

Controlling third-party library logs

If you're using libraries that also use LogTape, you can control their logs separately:

await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
    // Only show warnings and above from some-library
    { category: ["some-library"], lowestLevel: "warning", sinks: ["console"] },
  ],
});

The root logger

Sometimes you want a catch-all configuration. The root logger (empty category []) catches everything:

await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    // Catch all logs at info level
    { category: [], lowestLevel: "info", sinks: ["console"] },
    // But show debug for your app
    { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
  ],
});

Log levels and when to use them

LogTape has six log levels. Choosing the right one isn't just about severity—it's about who needs to see the message and when.

trace: Very detailed diagnostic info. Loop iterations, function entry/exit. Usually only enabled when hunting a specific bug.
debug: Information useful during development. Variable values, state changes, flow control decisions.
info: Normal operational messages. “Server started,” “User logged in,” “Job completed.”
warning: Something unexpected happened, but the app can continue. Deprecated API usage, retry attempts, missing optional config.
error: Something failed. An operation couldn't complete, but the app is still running.
fatal: The app is about to crash or is in an unrecoverable state.

const logger = getLogger(["my-app"]);

logger.trace`Entering processUser function`;
logger.debug`Processing user ${{ userId: 123 }}`;
logger.info`User successfully created`;
logger.warn`Rate limit approaching: ${980}/1000 requests`;
logger.error`Failed to save user: ${error.message}`;
logger.fatal`Database connection lost, shutting down`;

A good rule of thumb: in production, you typically run at info or warning level. During development or when debugging, you drop down to debug or trace.
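
One way to follow that rule without touching code is to derive the level from the environment (a sketch; the variable names are illustrative):

import { configure, getConsoleSink } from "@logtape/logtape";

const isProd = process.env.NODE_ENV === "production";

await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    { category: ["my-app"], lowestLevel: isProd ? "warning" : "debug", sinks: ["console"] },
  ],
});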

Structured logging: Beyond plain text

At some point, you'll want to search your logs. “Show me all errors from the payment service in the last hour.” “Find all requests from user 12345.” “What's the average response time for the /api/users endpoint?”

If your logs are plain text strings, these queries are painful. You end up writing regexes, hoping the log format is consistent, and cursing past-you for not thinking ahead.

Structured logging means attaching data to your logs as key-value pairs, not just embedding them in strings. This makes logs machine-readable and queryable.

LogTape supports two syntaxes for this:

Template literals (great for simple messages)

const userId = 123;
const action = "login";
logger.info`User ${userId} performed ${action}`;

Message templates with properties (great for structured data)

logger.info("User performed action", {
  userId: 123,
  action: "login",
  ip: "192.168.1.1",
  timestamp: new Date().toISOString(),
});

You can reference properties in your message using placeholders:

logger.info("User {userId} logged in from {ip}", {
  userId: 123,
  ip: "192.168.1.1",
});
// Output: User 123 logged in from 192.168.1.1

Nested property access

LogTape supports dot notation and array indexing in placeholders:

logger.info("Order {order.id} placed by {order.customer.name}", {
  order: {
    id: "ORD-001",
    customer: { name: "Alice", email: "alice@example.com" },
  },
});

logger.info("First item: {items[0].name}", {
  items: [{ name: "Widget", price: 9.99 }],
});

Machine-readable output with JSON Lines

For production, you often want logs as JSON (one object per line). LogTape has a built-in formatter for this:

import { configure, getConsoleSink, jsonLinesFormatter } from "@logtape/logtape";

await configure({
  sinks: {
    console: getConsoleSink({ formatter: jsonLinesFormatter }),
  },
  loggers: [
    { category: [], lowestLevel: "info", sinks: ["console"] },
  ],
});

Output:

{"@timestamp":"2026-01-15T10:30:00.000Z","level":"INFO","message":"User logged in","logger":"my-app","properties":{"userId":123}}

Sending logs to different destinations (sinks)

So far we've been sending everything to the console. That's fine for development, but in production you'll likely want logs to go elsewhere—or to multiple places at once.

Think about it: console output disappears when the process restarts. If your server crashes at 3 AM, you want those logs to be somewhere persistent. And when an error occurs, you might want it to show up in your error tracking service immediately, not just sit in a log file waiting for someone to grep through it.

This is where sinks come in. A sink is just a function that receives log records and does something with them. LogTape comes with several built-in sinks, and creating your own is trivial.

Console sink

The simplest sink—outputs to the console:

import { getConsoleSink } from "@logtape/logtape";

const consoleSink = getConsoleSink();

File sink

For writing logs to files, install the @logtape/file package:

npm add @logtape/file

import { getFileSink, getRotatingFileSink } from "@logtape/file";

// Simple file sink
const fileSink = getFileSink("app.log");

// Rotating file sink (rotates when file reaches 10MB, keeps 5 old files)
const rotatingFileSink = getRotatingFileSink("app.log", {
  maxSize: 10 * 1024 * 1024,  // 10MB
  maxFiles: 5,
});

Why rotating files? Without rotation, your log file grows indefinitely until it fills up the disk. With rotation, old logs are automatically archived and eventually deleted, keeping disk usage under control. This is especially important for long-running servers.

External services

For production systems, you often want logs to go to specialized services that provide search, alerting, and visualization. LogTape has packages for popular services:

// OpenTelemetry (for observability platforms like Jaeger, Honeycomb, Datadog)
import { getOpenTelemetrySink } from "@logtape/otel";

// Sentry (for error tracking with stack traces and context)
import { getSentrySink } from "@logtape/sentry";

// AWS CloudWatch Logs (for AWS-native log aggregation)
import { getCloudWatchLogsSink } from "@logtape/cloudwatch-logs";

The OpenTelemetry sink is particularly useful if you're already using OpenTelemetry for tracing—your logs will automatically correlate with your traces, making debugging distributed systems much easier.

Multiple sinks

Here's where things get interesting. You can send different logs to different destinations based on their level or category:

await configure({
  sinks: {
    console: getConsoleSink(),
    file: getFileSink("app.log"),
    errors: getSentrySink(),
  },
  loggers: [
    { category: [], lowestLevel: "info", sinks: ["console", "file"] },  // Everything to console + file
    { category: [], lowestLevel: "error", sinks: ["errors"] },  // Errors also go to Sentry
  ],
});

Notice that a log record can go to multiple sinks. An error log in this configuration goes to the console, the file, and Sentry. This lets you have comprehensive local logs while also getting immediate alerts for critical issues.

Custom sinks

Sometimes you need to send logs somewhere that doesn't have a pre-built sink. Maybe you have an internal logging service, or you want to send logs to a Slack channel, or store them in a database.

A sink is just a function that takes a LogRecord. That's it:

import type { Sink } from "@logtape/logtape";

const slackSink: Sink = (record) => {
  // Only send errors and fatals to Slack
  if (record.level === "error" || record.level === "fatal") {
    fetch("https://hooks.slack.com/services/YOUR/WEBHOOK/URL", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        text: `[${record.level.toUpperCase()}] ${record.message.join("")}`,
      }),
    });
  }
};

The simplicity of sink functions means you can integrate LogTape with virtually any logging backend in just a few lines of code.

Request tracing with contexts

Here's a scenario you've probably encountered: a user reports an error, you check the logs, and you find a sea of interleaved messages from dozens of concurrent requests. Which log lines belong to the user's request? Good luck figuring that out.

This is where request tracing comes in. The idea is simple: assign a unique identifier to each request, and include that identifier in every log message produced while handling that request. Now you can filter your logs by request ID and see exactly what happened, in order, for that specific request.

LogTape supports this through contexts—a way to attach properties to log messages without passing them around explicitly.

Explicit context

The simplest approach is to create a logger with attached properties using .with():

function handleRequest(req: Request) {
  const requestId = crypto.randomUUID();
  const logger = getLogger(["my-app", "http"]).with({ requestId });

  logger.info`Request received`;  // Includes requestId automatically
  processRequest(req, logger);
  logger.info`Request completed`;  // Also includes requestId
}

This works well when you're passing the logger around explicitly. But what about code that's deeper in your call stack? What about code in libraries that don't know about your logger instance?

Implicit context

This is where implicit contexts shine. Using withContext(), you can set properties that automatically appear in all log messages within a callback—even in nested function calls, async operations, and third-party libraries (as long as they use LogTape).

First, enable implicit contexts in your configuration:

import { configure, getConsoleSink } from "@logtape/logtape";
import { AsyncLocalStorage } from "node:async_hooks";

await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
  ],
  contextLocalStorage: new AsyncLocalStorage(),
});

Then use withContext() in your request handler:

import { withContext, getLogger } from "@logtape/logtape";

function handleRequest(req: Request) {
  const requestId = crypto.randomUUID();

  return withContext({ requestId }, async () => {
    // Every log message in this callback includes requestId—automatically
    const logger = getLogger(["my-app"]);
    logger.info`Processing request`;

    await validateInput(req);   // Logs here include requestId
    await processBusinessLogic(req);  // Logs here too
    await saveToDatabase(req);  // And here

    logger.info`Request complete`;
  });
}

The magic is that validateInput, processBusinessLogic, and saveToDatabase don't need to know anything about the request ID. They just call getLogger() and log normally, and the request ID appears in their logs automatically. This works even across async boundaries—the context follows the execution flow, not the call stack.

This is incredibly powerful for debugging. When something goes wrong, you can search for the request ID and see every log message from every module that was involved in handling that request.

Framework integrations

Setting up request tracing manually can be tedious. LogTape has dedicated packages for popular frameworks that handle this automatically:

// Express
import { expressLogger } from "@logtape/express";
app.use(expressLogger());

// Fastify
import { getLogTapeFastifyLogger } from "@logtape/fastify";
const app = Fastify({ loggerInstance: getLogTapeFastifyLogger() });

// Hono
import { honoLogger } from "@logtape/hono";
app.use(honoLogger());

// Koa
import { koaLogger } from "@logtape/koa";
app.use(koaLogger());

These middlewares automatically generate request IDs, set up implicit contexts, and log request/response information. You get comprehensive request logging with a single line of code.

Using LogTape in libraries vs applications

If you've ever used a library that spams your console with unwanted log messages, you know how annoying it can be. And if you've ever tried to add logging to your own library, you've faced a dilemma: should you use console.log() and annoy your users? Require them to install and configure a specific logging library? Or just... not log anything?

LogTape solves this with its library-first design. Libraries can add as much logging as they want, and it costs their users nothing unless they explicitly opt in.

If you're writing a library

The rule is simple: use getLogger() to log, but never call configure(). Configuration is the application's responsibility, not the library's.

// my-library/src/database.ts
import { getLogger } from "@logtape/logtape";

const logger = getLogger(["my-library", "database"]);

export function connect(url: string) {
  logger.debug`Connecting to ${url}`;
  // ... connection logic ...
  logger.info`Connected successfully`;
}

What happens when someone uses your library?

If they haven't configured LogTape, nothing happens. The log calls are essentially no-ops—no output, no errors, no performance impact. Your library works exactly as if the logging code wasn't there.

If they have configured LogTape, they get full control. They can see your library's debug logs if they're troubleshooting an issue, or silence them entirely if they're not interested. They decide, not you.

This is fundamentally different from using console.log() in a library. With console.log(), your users have no choice—they see your logs whether they want to or not. With LogTape, you give them the power to decide.

If you're writing an application

You configure LogTape once in your entry point. This single configuration controls logging for your entire application, including any libraries that use LogTape:

import { configure, getConsoleSink } from "@logtape/logtape";

await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },  // Your app: verbose
    { category: ["my-library"], lowestLevel: "warning", sinks: ["console"] },  // Library: quiet
    { category: ["noisy-library"], lowestLevel: "fatal", sinks: [] },  // That one library: silent
  ],
});

This separation of concerns—libraries log, applications configure—makes for a much healthier ecosystem. Library authors can add detailed logging for debugging without worrying about annoying their users. Application developers can tune logging to their needs without digging through library code.

Migrating from another logger?

If your application already uses winston, Pino, or another logging library, you don't have to migrate everything at once. LogTape provides adapters that route LogTape logs to your existing logging setup:

import { install } from "@logtape/adaptor-winston";
import winston from "winston";

install(winston.createLogger({ /* your existing config */ }));

This is particularly useful when you want to use a library that uses LogTape, but you're not ready to switch your whole application over. The library's logs will flow through your existing winston (or Pino) configuration, and you can migrate gradually if you choose to.

Production considerations

Development and production have different needs. During development, you want verbose logs, pretty formatting, and immediate feedback. In production, you care about performance, reliability, and not leaking sensitive data. Here are some things to keep in mind.

Non-blocking mode

By default, logging is synchronous—when you call logger.info(), the message is written to the sink before the function returns. This is fine for development, but in a high-throughput production environment, the I/O overhead of writing every log message can add up.

Non-blocking mode buffers log messages and writes them in the background:

const consoleSink = getConsoleSink({ nonBlocking: true });
const fileSink = getFileSink("app.log", { nonBlocking: true });

The tradeoff is that logs might be slightly delayed, and if your process crashes, some buffered logs might be lost. But for most production workloads, the performance benefit is worth it.
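If you do enable non-blocking sinks, it's worth flushing them explicitly on shutdown so a clean exit doesn't drop buffered records. A minimal sketch for a Node.js process (the signal choice is just illustrative):

import { dispose } from "@logtape/logtape";

// dispose() flushes buffered log records and releases sink resources.
process.on("SIGTERM", async () => {
  await dispose();
  process.exit(0);
});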

Sensitive data redaction

Logs have a way of ending up in unexpected places—log aggregation services, debugging sessions, support tickets. If you're logging request data, user information, or API responses, you might accidentally expose sensitive information like passwords, API keys, or personal data.

LogTape's @logtape/redaction package helps you catch these before they become a problem:

import {
  redactByPattern,
  EMAIL_ADDRESS_PATTERN,
  CREDIT_CARD_NUMBER_PATTERN,
  type RedactionPattern,
} from "@logtape/redaction";
import { defaultConsoleFormatter, configure, getConsoleSink } from "@logtape/logtape";

const BEARER_TOKEN_PATTERN: RedactionPattern = {
  pattern: /Bearer [A-Za-z0-9\-._~+\/]+=*/g,
  replacement: "[REDACTED]",
};

const formatter = redactByPattern(defaultConsoleFormatter, [
  EMAIL_ADDRESS_PATTERN,
  CREDIT_CARD_NUMBER_PATTERN,
  BEARER_TOKEN_PATTERN,
]);

await configure({
  sinks: {
    console: getConsoleSink({ formatter }),
  },
  // ...
});

With this configuration, email addresses, credit card numbers, and bearer tokens are automatically replaced with [REDACTED] in your log output. The @logtape/redaction package comes with built-in patterns for common sensitive data types, and you can define custom patterns for anything else. It's not foolproof—you should still be mindful of what you log—but it provides a safety net.

See the redaction documentation for more patterns and field-based redaction.

Edge functions and serverless

Edge functions (Cloudflare Workers, Vercel Edge Functions, etc.) have a unique constraint: they can be terminated immediately after returning a response. If you have buffered logs that haven't been flushed yet, they'll be lost.

The solution is to explicitly flush logs before returning:

import { configure, dispose } from "@logtape/logtape";

export default {
  async fetch(request, env, ctx) {
    await configure({ /* ... */ });
    
    // ... handle request ...
    
    ctx.waitUntil(dispose());  // Flush logs before worker terminates
    
    return new Response("OK");
  },
};

The dispose() function flushes all buffered logs and cleans up resources. By passing it to ctx.waitUntil(), you ensure the worker stays alive long enough to finish writing logs, even after the response has been sent.

Wrapping up

Logging isn't glamorous, but it's one of those things that makes a huge difference when something goes wrong. The setup I've described here—categories for organization, structured data for queryability, contexts for request tracing—isn't complicated, but it's a significant step up from scattered console.log statements.

LogTape isn't the only way to achieve this, but I've found it hits a nice sweet spot: powerful enough for production use, simple enough that you're not fighting the framework, and light enough that you don't feel guilty adding it to a library.

If you want to dig deeper, the LogTape documentation covers advanced topics like custom filters, the “fingers crossed” pattern for buffering debug logs until an error occurs, and more sink options. The GitHub repository is also a good place to report issues or see what's coming next.

Now go add some proper logging to that side project you've been meaning to clean up. Your future 2 AM self will thank you.


Sending emails in Node.js, Deno, and Bun in 2026: a practical guide

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

So you need to send emails from your JavaScript application. Email remains one of the most essential features in web apps—welcome emails, password resets, notifications—but the ecosystem is fragmented. Nodemailer doesn't work on edge functions. Each provider has its own SDK. And if you're using Deno or Bun, good luck finding libraries that actually work.

This guide covers how to send emails across modern JavaScript runtimes using Upyo, a cross-runtime email library.

Disclosure

I'm the author of Upyo. This guide focuses on Upyo because I built it to solve problems I kept running into, but you should know that going in. If you're looking for alternatives: Nodemailer is the established choice for Node.js (though it doesn't work on Deno/Bun/edge), and most email providers offer their own official SDKs.

TL;DR for the impatient

If you just want working code, here's the quickest path to sending an email:

import { createMessage } from "@upyo/core";
import { SmtpTransport } from "@upyo/smtp";

const transport = new SmtpTransport({
  host: "smtp.gmail.com",
  port: 465,
  secure: true,
  auth: {
    user: "your-email@gmail.com",
    pass: "your-app-password", // Not your regular password!
  },
});

const message = createMessage({
  from: "your-email@gmail.com",
  to: "recipient@example.com",
  subject: "Hello from my app!",
  content: { text: "This is my first email." },
});

const receipt = await transport.send(message);
if (receipt.successful) {
  console.log("Sent:", receipt.messageId);
} else {
  console.log("Failed:", receipt.errorMessages);
}

Install with:

npm add @upyo/core @upyo/smtp

That's it. This exact code works on Node.js, Deno, and Bun. But if you want to understand what's happening and explore more powerful options, read on.


Why Upyo?

  • Cross-runtime: Works on Node.js, Deno, Bun, and edge functions with the same API
  • Zero dependencies: Keeps your bundle small
  • Provider independence: Switch between SMTP, Mailgun, Resend, SendGrid, or Amazon SES without changing your application code
  • Type-safe: Full TypeScript support with discriminated unions for error handling
  • Built for testing: Includes a mock transport for unit tests

Part 1: Getting started with Gmail SMTP

Let's start with the most accessible option: Gmail's SMTP server. It's free, requires no additional accounts, and works great for development and low-volume production use.

Step 1: Generate a Gmail app password

Gmail doesn't allow you to use your regular password for SMTP. You need to create an app-specific password:

  1. Go to your Google Account
  2. Navigate to Security → 2-Step Verification (enable it if you haven't)
  3. At the bottom, click App passwords
  4. Select Mail and your device, then click Generate
  5. Copy the 16-character password

Step 2: Install dependencies

Choose your runtime and package manager:

Node.js

npm add @upyo/core @upyo/smtp
# or: pnpm add @upyo/core @upyo/smtp
# or: yarn add @upyo/core @upyo/smtp

Deno

deno add jsr:@upyo/core jsr:@upyo/smtp

Bun

bun add @upyo/core @upyo/smtp

The same code works across all three runtimes—that's the beauty of Upyo.

Step 3: Send your first email

import { createMessage } from "@upyo/core";
import { SmtpTransport } from "@upyo/smtp";

// Create the transport (reuse this for multiple emails)
const transport = new SmtpTransport({
  host: "smtp.gmail.com",
  port: 465,
  secure: true,
  auth: {
    user: "your-email@gmail.com",
    pass: "abcd efgh ijkl mnop", // Your app password
  },
});

// Create and send a message
const message = createMessage({
  from: "your-email@gmail.com",
  to: "recipient@example.com",
  subject: "Welcome to my app!",
  content: {
    text: "Thanks for signing up. We're excited to have you!",
    html: "<h1>Welcome!</h1><p>Thanks for signing up. We're excited to have you!</p>",
  },
});

const receipt = await transport.send(message);

if (receipt.successful) {
  console.log("Email sent successfully! Message ID:", receipt.messageId);
} else {
  console.error("Failed to send email:", receipt.errorMessages.join(", "));
}

// Don't forget to close connections when done
await transport.closeAllConnections();

Let me highlight a few important details:

  • secure: true with port 465: This establishes a TLS-encrypted connection from the start. Gmail requires encryption, so this combination is essential.
  • Separate text and html content: Always provide both. Some email clients don't render HTML, and spam filters look more favorably on emails with plain text alternatives.
  • The receipt pattern: Upyo uses discriminated unions for type-safe error handling. When receipt.successful is true, you get messageId. When it's false, you get errorMessages. This makes it impossible to forget error handling.
  • Closing connections: SMTP maintains persistent TCP connections. Always close them when you're done, or use await using (shown next) to handle this automatically.

Pro tip: automatic resource cleanup with await using

Managing resources manually is error-prone—what if an exception occurs before closeAllConnections() is called? Modern JavaScript solves this with explicit resource management.

import { createMessage } from "@upyo/core";
import { SmtpTransport } from "@upyo/smtp";

// Transport is automatically disposed when it goes out of scope
await using transport = new SmtpTransport({
  host: "smtp.gmail.com",
  port: 465,
  secure: true,
  auth: {
    user: "your-email@gmail.com",
    pass: "your-app-password",
  },
});

const message = createMessage({
  from: "your-email@gmail.com",
  to: "recipient@example.com",
  subject: "Hello!",
  content: { text: "This email was sent with automatic cleanup!" },
});

await transport.send(message);
// No need to call `closeAllConnections()` - it happens automatically!

The await using keyword tells JavaScript to call the transport's cleanup method when execution leaves this scope—even if an error is thrown. This pattern is similar to Python's with statement or C#'s using block. It's supported in Node.js 22+, Deno, and Bun.

What if your environment doesn't support await using?

For older Node.js versions or environments without support for explicit resource management, use try/finally to ensure cleanup:

const transport = new SmtpTransport({
  host: "smtp.gmail.com",
  port: 465,
  secure: true,
  auth: { user: "your-email@gmail.com", pass: "your-app-password" },
});

try {
  await transport.send(message);
} finally {
  await transport.closeAllConnections();
}

This achieves the same result—cleanup happens whether the send succeeds or throws an error.


Part 2: Adding attachments and rich content

Real-world emails often need more than plain text.

HTML emails with inline images

Inline images appear directly in the email body rather than as downloadable attachments. The trick is to reference them using a Content-ID (CID) URL scheme.

import { createMessage } from "@upyo/core";
import { SmtpTransport } from "@upyo/smtp";
import { readFile } from "node:fs/promises";

await using transport = new SmtpTransport({
  host: "smtp.gmail.com",
  port: 465,
  secure: true,
  auth: { user: "your-email@gmail.com", pass: "your-app-password" },
});

// Read your logo file
const logoContent = await readFile("./assets/logo.png");

const message = createMessage({
  from: "your-email@gmail.com",
  to: "customer@example.com",
  subject: "Your order confirmation",
  content: {
    html: `
      <div style="font-family: sans-serif; max-width: 600px; margin: 0 auto;">
        <img src="cid:company-logo" alt="Company Logo" style="width: 150px;">
        <h1>Order Confirmed!</h1>
        <p>Thank you for your purchase. Your order #12345 has been confirmed.</p>
      </div>
    `,
    text: "Order Confirmed! Thank you for your purchase. Your order #12345 has been confirmed.",
  },
  attachments: [
    {
      filename: "logo.png",
      content: logoContent,
      contentType: "image/png",
      contentId: "company-logo", // Referenced as cid:company-logo in HTML
      inline: true,
    },
  ],
});

await transport.send(message);

Key points about inline images:

  • contentId: This is the identifier you use in the HTML's src="cid:..." attribute. It can be any unique string.
  • inline: true: This tells the email client to display the image within the message body, not as a separate attachment.
  • Always include alt text: Some email clients block images by default, so the alt text ensures your message is still understandable.

File attachments

For regular attachments that recipients can download, use the standard File API. This approach works across all JavaScript runtimes.

import { createMessage } from "@upyo/core";
import { SmtpTransport } from "@upyo/smtp";
import { readFile } from "node:fs/promises";

await using transport = new SmtpTransport({
  host: "smtp.gmail.com",
  port: 465,
  secure: true,
  auth: { user: "your-email@gmail.com", pass: "your-app-password" },
});

// Read files to attach
const invoicePdf = await readFile("./invoices/invoice-2024-001.pdf");
const reportXlsx = await readFile("./reports/monthly-report.xlsx");

const message = createMessage({
  from: "billing@yourcompany.com",
  to: "client@example.com",
  cc: "accounting@yourcompany.com",
  subject: "Invoice #2024-001",
  content: {
    text: "Please find your invoice and monthly report attached.",
  },
  attachments: [
    new File([invoicePdf], "invoice-2024-001.pdf", { type: "application/pdf" }),
    new File([reportXlsx], "monthly-report.xlsx", {
      type: "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    }),
  ],
  priority: "high", // Sets email priority headers
});

await transport.send(message);

A few notes on attachments:

  • MIME types matter: Setting the correct type helps email clients display the right icon and open the file with the appropriate application.
  • priority: "high": This sets the X-Priority header, which some email clients use to highlight important messages. Use it sparingly—overuse can trigger spam filters.

Multiple recipients with different roles

Email supports several recipient types, each with different visibility rules:

import { createMessage } from "@upyo/core";

const message = createMessage({
  from: { name: "Support Team", address: "support@yourcompany.com" },
  to: [
    "primary-recipient@example.com",
    { name: "John Smith", address: "john@example.com" },
  ],
  cc: "manager@yourcompany.com",
  bcc: ["archive@yourcompany.com", "compliance@yourcompany.com"],
  replyTo: "no-reply@yourcompany.com",
  subject: "Your support ticket has been updated",
  content: { text: "We've responded to your ticket #5678." },
});

Understanding recipient types:

  • to: Primary recipients. Everyone can see who else is in this field.
  • cc (Carbon Copy): Secondary recipients. Visible to all recipients—use for people who should be informed but aren't the primary audience.
  • bcc (Blind Carbon Copy): Hidden recipients. No one can see BCC addresses—useful for archiving or compliance without revealing internal processes.
  • replyTo: Where replies should go. Useful when sending from a no-reply address but wanting responses to reach a real inbox.

You can specify addresses as simple strings ("email@example.com") or as objects with name and address properties for display names.


Part 3: Moving to production with email service providers

Gmail SMTP is great for getting started, but for production applications, you'll want a dedicated email service provider. Here's why:

  • Higher sending limits: Gmail caps you at ~500 emails/day for personal accounts
  • Better deliverability: Dedicated services maintain sender reputation and handle bounces properly
  • Analytics and tracking: See who opened your emails, clicked links, etc.
  • Webhook notifications: Get real-time callbacks for delivery events
  • No dependency on personal accounts: Production systems shouldn't rely on someone's Gmail

The best part? With Upyo, switching providers requires minimal code changes—just swap the transport.

Option A: Resend (modern and developer-friendly)

Resend is a newer email service with an excellent developer experience.

npm add @upyo/resend

import { createMessage } from "@upyo/core";
import { ResendTransport } from "@upyo/resend";

const transport = new ResendTransport({
  apiKey: process.env.RESEND_API_KEY!,
});

const message = createMessage({
  from: "hello@yourdomain.com", // Must be verified in Resend
  to: "user@example.com",
  subject: "Welcome aboard!",
  content: {
    text: "Thanks for joining us!",
    html: "<h1>Welcome!</h1><p>Thanks for joining us!</p>",
  },
  tags: ["onboarding", "welcome"], // For analytics
});

const receipt = await transport.send(message);

if (receipt.successful) {
  console.log("Sent via Resend:", receipt.messageId);
}

Notice how similar this looks to the SMTP example? The only differences are the import and the transport configuration. Your message creation and sending logic stays exactly the same—that's Upyo's transport abstraction at work.
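One practical consequence is that application code can be written against the transport abstraction and handed whichever provider you're using. A small sketch, assuming @upyo/core exports a Transport type (worth double-checking in the docs):

import { createMessage, type Transport } from "@upyo/core";

// Works the same with SmtpTransport, ResendTransport, SendGridTransport, etc.
async function sendWelcomeEmail(transport: Transport, to: string): Promise<void> {
  const message = createMessage({
    from: "hello@yourdomain.com",
    to,
    subject: "Welcome aboard!",
    content: { text: "Thanks for joining us!" },
  });

  const receipt = await transport.send(message);
  if (!receipt.successful) {
    throw new Error(`Sending failed: ${receipt.errorMessages.join(", ")}`);
  }
}

Switching providers then becomes a one-line change at the place where the transport is constructed, typically driven by configuration.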

Option B: SendGrid (enterprise-grade)

SendGrid is a popular choice for high-volume senders, offering advanced analytics, template management, and a generous free tier.

npm add @upyo/sendgrid

import { createMessage } from "@upyo/core";
import { SendGridTransport } from "@upyo/sendgrid";

const transport = new SendGridTransport({
  apiKey: process.env.SENDGRID_API_KEY!,
  clickTracking: true,
  openTracking: true,
});

const message = createMessage({
  from: "notifications@yourdomain.com",
  to: "user@example.com",
  subject: "Your weekly digest",
  content: {
    html: "<h1>This Week's Highlights</h1><p>Here's what you missed...</p>",
    text: "This Week's Highlights\n\nHere's what you missed...",
  },
  tags: ["digest", "weekly"],
});

await transport.send(message);

Option C: Mailgun (reliable workhorse)

Mailgun offers robust infrastructure with strong EU support—important if you need GDPR-compliant data residency.

npm add @upyo/mailgun

import { createMessage } from "@upyo/core";
import { MailgunTransport } from "@upyo/mailgun";

const transport = new MailgunTransport({
  apiKey: process.env.MAILGUN_API_KEY!,
  domain: "mg.yourdomain.com",
  region: "eu", // or "us"
});

const message = createMessage({
  from: "team@yourdomain.com",
  to: "user@example.com",
  subject: "Important update",
  content: { text: "We have some news to share..." },
});

await transport.send(message);

Option D: Amazon SES (cost-effective at scale)

Amazon SES is incredibly affordable—about $0.10 per 1,000 emails. If you're already in the AWS ecosystem, it integrates seamlessly with IAM, CloudWatch, and other services.

npm add @upyo/ses

import { createMessage } from "@upyo/core";
import { SesTransport } from "@upyo/ses";

const transport = new SesTransport({
  authentication: {
    type: "credentials",
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
  region: "us-east-1",
  configurationSetName: "my-config-set", // Optional: for tracking
});

const message = createMessage({
  from: "alerts@yourdomain.com",
  to: "admin@example.com",
  subject: "System alert",
  content: { text: "CPU usage exceeded 90%" },
  priority: "high",
});

await transport.send(message);

Part 4: Sending emails from edge functions

Here's where many email solutions fall short. Edge functions (Cloudflare Workers, Vercel Edge, Deno Deploy) run in a restricted environment—they can't open raw TCP connections, which means SMTP is not an option.

You must use an HTTP-based transport like Resend, SendGrid, Mailgun, or Amazon SES. The good news? Your code barely changes.

Cloudflare Workers example

// src/index.ts
import { createMessage } from "@upyo/core";
import { ResendTransport } from "@upyo/resend";

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const transport = new ResendTransport({
      apiKey: env.RESEND_API_KEY,
    });

    const message = createMessage({
      from: "noreply@yourdomain.com",
      to: "user@example.com",
      subject: "Request received",
      content: { text: "We got your request and are processing it." },
    });

    const receipt = await transport.send(message);

    if (receipt.successful) {
      return new Response(`Email sent: ${receipt.messageId}`);
    } else {
      return new Response(`Failed: ${receipt.errorMessages.join(", ")}`, {
        status: 500,
      });
    }
  },
};

interface Env {
  RESEND_API_KEY: string;
}

Vercel Edge Functions example

// app/api/send-email/route.ts
import { createMessage } from "@upyo/core";
import { SendGridTransport } from "@upyo/sendgrid";

export const runtime = "edge";

export async function POST(request: Request) {
  const { to, subject, body } = await request.json();

  const transport = new SendGridTransport({
    apiKey: process.env.SENDGRID_API_KEY!,
  });

  const message = createMessage({
    from: "app@yourdomain.com",
    to,
    subject,
    content: { text: body },
  });

  const receipt = await transport.send(message);

  if (receipt.successful) {
    return Response.json({ success: true, messageId: receipt.messageId });
  } else {
    return Response.json(
      { success: false, errors: receipt.errorMessages },
      { status: 500 }
    );
  }
}

Deno Deploy example

// main.ts
import { createMessage } from "jsr:@upyo/core";
import { MailgunTransport } from "jsr:@upyo/mailgun";

Deno.serve(async (request: Request) => {
  if (request.method !== "POST") {
    return new Response("Method not allowed", { status: 405 });
  }

  const { to, subject, body } = await request.json();

  const transport = new MailgunTransport({
    apiKey: Deno.env.get("MAILGUN_API_KEY")!,
    domain: Deno.env.get("MAILGUN_DOMAIN")!,
    region: "us",
  });

  const message = createMessage({
    from: "noreply@yourdomain.com",
    to,
    subject,
    content: { text: body },
  });

  const receipt = await transport.send(message);

  if (receipt.successful) {
    return Response.json({ success: true, messageId: receipt.messageId });
  } else {
    return Response.json(
      { success: false, errors: receipt.errorMessages },
      { status: 500 }
    );
  }
});

Part 5: Improving deliverability with DKIM

Ever wonder why some emails land in spam while others don't? Email authentication plays a huge role. DKIM (DomainKeys Identified Mail) is one of the key mechanisms—it lets you digitally sign your emails so recipients can verify they actually came from your domain and weren't tampered with in transit.

Without DKIM:

  • Your emails are more likely to be flagged as spam
  • Recipients have no way to verify you're really who you claim to be
  • Sophisticated phishing attacks can impersonate your domain

Setting up DKIM with Upyo

First, generate a DKIM key pair. You can use OpenSSL:

# Generate a 2048-bit RSA private key
openssl genrsa -out dkim-private.pem 2048

# Extract the public key
openssl rsa -in dkim-private.pem -pubout -out dkim-public.pem

Then configure your SMTP transport:

import { createMessage } from "@upyo/core";
import { SmtpTransport } from "@upyo/smtp";
import { readFileSync } from "node:fs";

const transport = new SmtpTransport({
  host: "smtp.example.com",
  port: 587,
  secure: false,
  auth: {
    user: "user@yourdomain.com",
    pass: "password",
  },
  dkim: {
    signatures: [
      {
        signingDomain: "yourdomain.com",
        selector: "mail", // Creates DNS record at mail._domainkey.yourdomain.com
        privateKey: readFileSync("./dkim-private.pem", "utf8"),
        algorithm: "rsa-sha256", // or "ed25519-sha256" for shorter keys
      },
    ],
  },
});

The key configuration options:

  • signingDomain: Must match your email's "From" domain
  • selector: An arbitrary name that becomes part of your DNS record (e.g., mail creates a record at mail._domainkey.yourdomain.com)
  • algorithm: RSA-SHA256 is widely supported; Ed25519-SHA256 offers shorter keys (see below)

Adding the DNS record

Add a TXT record to your domain's DNS:

  • Name: mail._domainkey (or mail._domainkey.yourdomain.com depending on your DNS provider)
  • Value: v=DKIM1; k=rsa; p=YOUR_PUBLIC_KEY_HERE

Extract the public key value (remove headers, footers, and newlines from the .pem file):

cat dkim-public.pem | grep -v "^-" | tr -d '\n'

Using Ed25519 for shorter keys

RSA-2048 keys are long—about 400 characters for the public key. This can be problematic because DNS TXT records have size limits, and some DNS providers struggle with long records.

Ed25519 provides equivalent security with much shorter keys (around 44 characters). If your email infrastructure supports it, Ed25519 is the modern choice.

# Generate Ed25519 key pair
openssl genpkey -algorithm ed25519 -out dkim-ed25519-private.pem
openssl pkey -in dkim-ed25519-private.pem -pubout -out dkim-ed25519-public.pem

When you publish the DNS record for an Ed25519 key, use k=ed25519 instead of k=rsa. Then point the transport at the new key:

const transport = new SmtpTransport({
  // ... other config
  dkim: {
    signatures: [
      {
        signingDomain: "yourdomain.com",
        selector: "mail2025",
        privateKey: readFileSync("./dkim-ed25519-private.pem", "utf8"),
        algorithm: "ed25519-sha256",
      },
    ],
  },
});

Part 6: Bulk email sending

When you need to send emails to many recipients—newsletters, notifications, marketing campaigns—you have two approaches:

The wrong way: looping with send()

// ❌ Don't do this for bulk sending
for (const subscriber of subscribers) {
  await transport.send(createMessage({
    from: "newsletter@example.com",
    to: subscriber.email,
    subject: "Weekly update",
    content: { text: "..." },
  }));
}

This works, but it's inefficient:

  • Each send() call waits for the previous one to complete
  • No automatic batching or optimization
  • Harder to track overall progress

The right way: using sendMany()

The sendMany() method is designed for bulk operations:

import { createMessage } from "@upyo/core";
import { ResendTransport } from "@upyo/resend";

const transport = new ResendTransport({
  apiKey: process.env.RESEND_API_KEY!,
});

const subscribers = [
  { email: "alice@example.com", name: "Alice" },
  { email: "bob@example.com", name: "Bob" },
  { email: "charlie@example.com", name: "Charlie" },
  // ... potentially thousands more
];

// Create personalized messages
const messages = subscribers.map((subscriber) =>
  createMessage({
    from: "newsletter@yourdomain.com",
    to: subscriber.email,
    subject: "Your weekly digest",
    content: {
      html: `<h1>Hi ${subscriber.name}!</h1><p>Here's what's new this week...</p>`,
      text: `Hi ${subscriber.name}!\n\nHere's what's new this week...`,
    },
    tags: ["newsletter", "weekly"],
  })
);

// Send all messages efficiently
let successCount = 0;
let failureCount = 0;

for await (const receipt of transport.sendMany(messages)) {
  if (receipt.successful) {
    successCount++;
  } else {
    failureCount++;
    console.error("Failed:", receipt.errorMessages.join(", "));
  }
}

console.log(`Sent: ${successCount}, Failed: ${failureCount}`);

Why sendMany() is better:

  • Automatic batching: Some transports (like Resend) combine multiple messages into a single API call
  • Connection reuse: SMTP transport reuses connections from the pool
  • Streaming results: You get receipts as they complete, not all at once
  • Resilient: One failure doesn't stop the rest

Progress tracking for large batches

const totalMessages = messages.length;
let processed = 0;

for await (const receipt of transport.sendMany(messages)) {
  processed++;

  if (processed % 100 === 0) {
    console.log(`Progress: ${processed}/${totalMessages} (${Math.round((processed / totalMessages) * 100)}%)`);
  }

  if (!receipt.successful) {
    console.error(`Message ${processed} failed:`, receipt.errorMessages);
  }
}

console.log("Batch complete!");

When to use send() vs sendMany()

Scenario → Use
Single transactional email (welcome, password reset) → send()
A few emails (under 10) → send() in a loop is fine
Newsletters, bulk notifications → sendMany()
Batch processing from a queue → sendMany()

Part 7: Testing without sending real emails

Upyo includes a MockTransport for testing:

  • No external dependencies: Tests run offline, in CI, anywhere
  • Deterministic: No flaky tests due to network issues
  • Fast: No HTTP requests or SMTP handshakes
  • Inspectable: You can verify exactly what would have been sent

Basic testing setup

import { createMessage } from "@upyo/core";
import { MockTransport } from "@upyo/mock";
import assert from "node:assert";
import { describe, it, beforeEach } from "node:test";

describe("Email functionality", () => {
  let transport: MockTransport;

  beforeEach(() => {
    transport = new MockTransport();
  });

  it("should send welcome email after registration", async () => {
    // Your application code would call this
    const message = createMessage({
      from: "welcome@yourapp.com",
      to: "newuser@example.com",
      subject: "Welcome to our app!",
      content: { text: "Thanks for signing up!" },
    });

    const receipt = await transport.send(message);

    // Assertions
    assert.strictEqual(receipt.successful, true);
    assert.strictEqual(transport.getSentMessagesCount(), 1);

    const sentMessage = transport.getLastSentMessage();
    assert.strictEqual(sentMessage?.subject, "Welcome to our app!");
    assert.strictEqual(sentMessage?.recipients[0].address, "newuser@example.com");
  });

  it("should handle email failures gracefully", async () => {
    // Simulate a failure
    transport.setNextResponse({
      successful: false,
      errorMessages: ["Invalid recipient address"],
    });

    const message = createMessage({
      from: "test@yourapp.com",
      to: "invalid-email",
      subject: "Test",
      content: { text: "Test" },
    });

    const receipt = await transport.send(message);

    assert.strictEqual(receipt.successful, false);
    assert.ok(receipt.errorMessages.includes("Invalid recipient address"));
  });
});

The key testing methods:

  • getSentMessagesCount(): How many emails were “sent”
  • getLastSentMessage(): The most recent message
  • getSentMessages(): All messages as an array
  • setNextResponse(): Force the next send to succeed or fail with specific errors

Simulating real-world conditions

import { MockTransport } from "@upyo/mock";

// Simulate network delays
const slowTransport = new MockTransport({
  delay: 500, // 500ms delay per email
});

// Simulate random failures (10% failure rate)
const unreliableTransport = new MockTransport({
  failureRate: 0.1,
});

// Simulate variable latency
const realisticTransport = new MockTransport({
  randomDelayRange: { min: 100, max: 500 },
});

Testing async email workflows

import { MockTransport } from "@upyo/mock";

const transport = new MockTransport();

// Start your async operation that sends emails
startUserRegistration("newuser@example.com");

// Wait for the expected emails to be sent
await transport.waitForMessageCount(2, 5000); // Wait for 2 emails, 5s timeout

// Or wait for a specific email
const welcomeEmail = await transport.waitForMessage(
  (msg) => msg.subject.includes("Welcome"),
  3000
);

console.log("Welcome email was sent:", welcomeEmail.subject);

Part 8: Provider failover with PoolTransport

What happens if your email provider goes down? For mission-critical applications, you need redundancy. PoolTransport combines multiple providers with automatic failover—if one fails, it tries the next.

import { PoolTransport } from "@upyo/pool";
import { ResendTransport } from "@upyo/resend";
import { SendGridTransport } from "@upyo/sendgrid";
import { MailgunTransport } from "@upyo/mailgun";
import { createMessage } from "@upyo/core";

// Create multiple transports
const resend = new ResendTransport({ apiKey: process.env.RESEND_API_KEY! });
const sendgrid = new SendGridTransport({ apiKey: process.env.SENDGRID_API_KEY! });
const mailgun = new MailgunTransport({
  apiKey: process.env.MAILGUN_API_KEY!,
  domain: "mg.yourdomain.com",
});

// Combine them with priority-based failover
const transport = new PoolTransport({
  strategy: "priority",
  transports: [
    { transport: resend, priority: 100 },    // Try first
    { transport: sendgrid, priority: 50 },   // Fallback
    { transport: mailgun, priority: 10 },    // Last resort
  ],
  maxRetries: 3,
});

const message = createMessage({
  from: "critical@yourdomain.com",
  to: "admin@example.com",
  subject: "Critical alert",
  content: { text: "This email will try multiple providers if needed." },
});

const receipt = await transport.send(message);
// Automatically tries Resend first, then SendGrid, then Mailgun if others fail

The priority values determine the order—higher numbers are tried first. If Resend fails (network error, rate limit, etc.), the pool automatically retries with SendGrid, then Mailgun.

For more advanced routing strategies (weighted distribution, content-based routing), see the pool transport documentation.


Part 9: Observability with OpenTelemetry

In production, you'll want to track email metrics: send rates, failure rates, latency. Upyo integrates with OpenTelemetry:

import { createOpenTelemetryTransport } from "@upyo/opentelemetry";
import { SmtpTransport } from "@upyo/smtp";

const baseTransport = new SmtpTransport({
  host: "smtp.example.com",
  port: 587,
  auth: { user: "user", pass: "password" },
});

const transport = createOpenTelemetryTransport(baseTransport, {
  serviceName: "email-service",
  tracing: { enabled: true },
  metrics: { enabled: true },
});

// Now all email operations generate traces and metrics automatically
await transport.send(message);

This gives you:

  • Delivery success/failure rates
  • Send operation latency histograms
  • Error classification by type
  • Distributed tracing for debugging
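For those traces and metrics to be exported anywhere, an OpenTelemetry SDK has to be registered in the process. A rough sketch for Node.js, assuming the Upyo transport picks up the globally registered providers (check its docs for explicit provider options):

import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

// Registers global tracer and meter providers; instrumented code picks them up.
const sdk = new NodeSDK({
  serviceName: "email-service",
  traceExporter: new OTLPTraceExporter({ url: "http://localhost:4318/v1/traces" }),
});
sdk.start();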

See the OpenTelemetry documentation for details.


Quick reference: choosing the right transport

Scenario → Recommended transport
Development/testing → Gmail SMTP or MockTransport
Small production app → Resend or SendGrid
High volume (100k+/month) → Amazon SES
Edge functions → Resend, SendGrid, or Mailgun
Self-hosted infrastructure → SMTP with DKIM
Mission-critical → PoolTransport with failover
EU data residency → Mailgun (EU region) or self-hosted

Wrapping up

This guide covered the most popular transports, but Upyo also supports:

  • JMAP: Modern email protocol (RFC 8620/8621) for JMAP-compatible servers like Fastmail and Stalwart
  • Plunk: Developer-friendly email service with self-hosting option

And you can always create a custom transport for any email service not yet supported.
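To give a rough idea of what that involves, here's a minimal sketch of a transport that POSTs to a hypothetical HTTP API. It mirrors the send()/sendMany() usage shown throughout this guide, but the exact Transport, Message, and Receipt types should be checked against the Upyo docs before relying on this:

import type { Message, Receipt, Transport } from "@upyo/core";

class MyHttpTransport implements Transport {
  constructor(private readonly apiKey: string) {}

  async send(message: Message): Promise<Receipt> {
    // api.example.com is a placeholder for your provider's endpoint.
    const response = await fetch("https://api.example.com/send", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${this.apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        to: message.recipients,
        subject: message.subject,
        content: message.content,
      }),
    });
    if (!response.ok) {
      return { successful: false, errorMessages: [await response.text()] };
    }
    const { id } = await response.json();
    return { successful: true, messageId: id };
  }

  async *sendMany(messages: Iterable<Message>): AsyncIterable<Receipt> {
    // A naive fallback: send sequentially and yield receipts as they complete.
    for (const message of messages) yield await this.send(message);
  }
}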

Resources

  • 📚 Documentation
  • 📦 npm packages
  • 📦 JSR packages
  • 🐙 GitHub repository

Have questions or feedback? Feel free to open an issue.


What's been your biggest pain point when sending emails from JavaScript? Let me know in the comments—I'm curious what challenges others have run into.


Upyo (pronounced /oo-pyo/) comes from the Korean word 郵票, meaning “postage stamp.”


Hackers' Pub flag (report) feature design document

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

Overview

Purpose

The flag feature is a system for identifying content or users that violate the Hackers' Pub community's code of conduct and for helping moderators take appropriate action.

Core philosophy

The ultimate purpose of the flag feature is guidance and growth. The goal is not to keep only flawless users, as if the community were a sterile room, but to give people a chance, through reports, to reflect on their behavior and grow into better community members.

Expulsion is a last resort; the system encourages the following staged approach:

  1. Awareness: the reported user learns that their behavior may be a problem
  2. Reflection: they get an opportunity to understand why that behavior is a problem
  3. Improvement: they correct the behavior and participate in harmony with the community
  4. Sanctions: applied only when there is no will to improve or the violation is severe

Considerations for a decentralized network

Hackers' Pub is a decentralized social network built on the ActivityPub protocol, so the flag feature is designed with the following in mind:

  • Content hosted on other servers (instances) can also be reported
  • Compatibility with major ActivityPub platforms such as Mastodon
  • Propagation of actions across a federated environment

Design principles

Protecting the reporter

Principle
The reporter's identity is kept strictly confidential.
Rationale
If reporters fear retaliation, they will hesitate to report, which harms the health of the community. The reporting system only works effectively when anonymity is guaranteed.
Implementation
  • The reported user is told only that a report was made and why; the reporter's identity is never disclosed
  • Only moderators can access reporter information
  • Access control is enforced at the database level as well

The reported user's right to know

Principle
Reported users have the right to know why they were reported.
Rationale
People cannot improve if they don't know what the problem is. To achieve the goal of guidance, the reported user must be given enough information to reflect on their own behavior.
Implementation
  • The reason for the report (the code of conduct violation) is communicated to the reported user
  • The content at issue is identified explicitly
  • However, they cannot learn who filed the report

Flexible references to the code of conduct

Principle
The code of conduct is a living document that can evolve and change over time.
Rationale
As the community grows and its social context shifts, the code of conduct has to evolve with it, and the reporting system must be able to adapt flexibly to those changes.
Implementation
  • Report reasons are not hard-coded to specific clause numbers
  • The version of the code of conduct in effect at report time is recorded to preserve context
  • LLM matching analyzes reports dynamically against the full text of the current code of conduct
  • The reporter's original reason is always preserved

Transparent handling

Principle
The handling process and outcome of a report are shared transparently with the parties involved.
Rationale
Reporters have the right to know how their report was handled, and reported users need to know what action was taken.
Implementation
  • The reporter is notified of progress and of the final outcome
  • The reported user is told what action was taken and why
  • The moderator's reasoning is recorded

Graduated sanctions

Principle
Sanctions are applied in stages, proportional to the severity and frequency of the violation.
Rationale
Excessive sanctions for minor violations chill community participation, while light sanctions for serious violations endanger community safety.
Implementation
  • A staged ladder: warning → content censorship → temporary suspension → permanent suspension
  • Violation history accumulates and informs the next sanction level
  • Serious violations can skip steps and receive immediate, stronger action

Definitions

Term: Definition
flag/report (신고): notifying moderators of content or a user suspected of violating the code of conduct
reporter (신고자): the user who submits a report
reported user (피신고자): the user who is the subject of a report
target (신고 대상): the reported content (article or note) or user
moderator (관리자): a user with the authority to review reports and take action
action (조치): the decision a moderator makes on a report (dismissal, warning, censorship, suspension, etc.)
appeal (이의 제기): a reported user's request to have an action reviewed again
local user (로컬 사용자): a user with an account on Hackers' Pub
remote user (원격 사용자): a user on another ActivityPub instance

What can be reported

Content reports

Users can report the following types of content individually:

Reporting an article

Target
Long-form, blog-style posts
Where the option appears
A “Report” option at the bottom of the article or in its “more” menu
Information collected with the report
  • Article ID and permalink
  • Information about the article's author
  • A snapshot of the article's content at report time (to preserve evidence)
  • The reason written by the reporter

Reporting a note

Target
Short, microblog-style posts
Where the option appears
A “Report” option in the note's “more” menu
Information collected with the report
Same as for articles

User reports

When a user's overall pattern of behavior is the problem, the user themselves can be reported rather than an individual piece of content.

Scenarios
  • Persistent problematic behavior across multiple pieces of content
  • Individual pieces are borderline, but the overall pattern is a problem
  • The profile itself (name, bio, profile picture, etc.) violates the code of conduct
Where the option appears
A “Report user” option in the “more” menu on the user's profile page
Information collected with the report
  • User ID and profile link
  • A snapshot of the profile at report time
  • The reason written by the reporter
  • (Optional) links to related content can be attached

Remote content and users

Content and users on other ActivityPub instances can be reported in exactly the same way.

Rationale
Everything shown in the federated timeline affects Hackers' Pub users, so remote content must be reportable too.
Handling
  • Actions govern whether the content is shown on or federated with Hackers' Pub
  • Optionally, an ActivityPub Flag activity is sent to the remote instance

Reporting process

Reporting flow

(Flowchart) The user clicks report on content or a user → the report form is shown → the reporter writes a reason (free-form) → the report is submitted → an LLM analyzes the reason → code of conduct clauses are matched → the report is saved (pending) → moderators are notified → the reporter receives an acknowledgment.

Report form

The report form is designed to be concise while still collecting the necessary information.

Required fields

Report reason (free-form text)

Please explain why you are reporting this content/user.
It's okay if you don't know the specific code of conduct clause.
Feel free to describe what felt uncomfortable or problematic to you.

[                                        ]
[                                        ]
[                                        ]

Please write at least 10 characters.

Rationale:

  • We don't assume users know every clause of the code of conduct
  • Free-form text captures richer context
  • An LLM analyzes the reason and automatically matches the relevant clauses

Optional fields

Additional content links (for user reports)

If there is other related content, please add links. (Optional)

[Add link +]

Rationale: for user reports, submitting several pieces of content that show the pattern of problematic behavior helps moderators make a more accurate judgment.

LLM-based code of conduct matching

When a report is submitted, an LLM analyzes the reason and identifies the relevant code of conduct clauses.

Matching process

  1. Assemble the input

    • The reason text written by the reporter
    • The full text of the current code of conduct
    • The reported content, if any
  2. LLM analysis

    • Analyze how the reason relates to code of conduct clauses
    • Identify the relevant clauses and compute confidence scores
    • Generate an analysis summary
  3. Store the results

    • The list of matched clauses (with confidence scores)
    • The LLM's analysis summary
    • The identifier of the code of conduct version at report time

Code of conduct version tracking

Rationale
When the code of conduct changes, the context of past reports can become unclear, so the version in effect at the time of the report is recorded to preserve context.
Implementation
  • Use the Git commit hash of the code of conduct file as the version identifier
  • Store the version identifier with the report record
  • When reviewing a report, moderators can refer to that version of the code of conduct
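For example, one way to obtain that identifier (CODE_OF_CONDUCT.md is a hypothetical file name):

git log -1 --format=%H CODE_OF_CONDUCT.md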

How matching results are used

Moderator review
Matching results serve as reference material for moderators
Final judgment
Moderators can revise or ignore the matching results
Notifying the reported user
The finally confirmed violated clauses are communicated to the reported user

Handling duplicate reports

Multiple reports may be filed against the same content or user.

Handling
  • Reports against the same target are grouped into a single “report case”
  • Each report's reason is preserved individually
  • Moderators see the case together with the number of reports
  • The more reports a case has, the higher its priority
Rationale
  • If several people independently found the same problem, it is more likely to be serious
  • Combining reasons written from different perspectives enables a more accurate judgment

Viewing your own reports

Reporters can check the status of the reports they have submitted.

Visible information
  • The report's target (content/user)
  • When the report was filed
  • The reason they wrote
  • Handling status (pending / under review / resolved)
  • Outcome (action taken / dismissed)
Not visible
  • Whether other reporters exist
  • The specific sanction applied (to protect privacy)
  • The contents of the reported user's appeal

Moderation process

Review flow

(Flowchart) A report is received (pending) → a moderator opens it → review begins (reviewing), covering the reported content, the report reasons, the LLM matching results, the user's history, and the surrounding context → a decision is made: dismiss, warn, or sanction → the action is recorded and notifications go out: the reporter is told the outcome, the reported user is told the action, and the remote server is notified if needed.

Report states

State: Description
pending: the report has been received and is awaiting review
reviewing: a moderator is reviewing it
resolved: handled (action taken)
dismissed: rejected (no violation)

What to check during review

Moderators review the following information as a whole:

Report information

  • The reasons written by the reporters
  • The code of conduct clauses matched by the LLM
  • The number of reports (for duplicate reports)
  • Each reporter's reason (for duplicate reports)

Content information

  • The reported content itself
  • Its context (comment thread, etc.)
  • The snapshot taken at report time (if the content was edited or deleted)

User information

  • The reported user's previous violations
  • Previous warnings and sanctions
  • Account creation date and length of activity
  • Whether the user is local or remote

Action options

The moderator chooses one of the following actions:

Action: Description; when applied
Dismissal: judged not to be a violation; no code of conduct violation occurred
Warning: a warning message is sent; minor violation, first offense
Content censorship: the content is hidden; the content itself is the problem
Temporary suspension: the account is suspended for a period; repeated or mid-level violations
Permanent suspension: the account is suspended permanently; serious violations or persistent malicious behavior

Required inputs when taking action

When taking an action, the moderator must record the following:

Violated clauses (finally confirmed):
[select/enter the relevant clauses of the code of conduct]

Reason for the action:
[describe the moderator's reasoning in detail]

Message to the reported user:
[the notice the reported user will receive]

(For temporary suspensions) suspension period:
[start date] – [end date]

Rationale:

  • Ensures the transparency of actions
  • Serves as review material if an appeal is filed
  • Helps maintain consistent standards of judgment

Process for the reported user

Notification of the report

The reported user is notified that they were reported and why.

When notification happens

Not notified immediately:

  • The reported user is not notified right when a report is filed
  • This avoids unnecessary stress caused by frivolous reports

Notified:

  • Notification happens after a moderator has reviewed the report and decided on an action
  • Even dismissed reports may be communicated for educational purposes (at the moderator's discretion)

Notification contents

For warnings and sanctions:

A report was filed against your [content/account]. After review,
it was judged to violate the code of conduct, and the following
action has been taken.

Violation:
[the relevant code of conduct clauses]

Content in question:
[a link to the content, if applicable]

Action:
[warning / content censorship / N-day suspension / permanent suspension]

Moderator message:
[the explanation written by the moderator]

If you disagree with this action, you can file an appeal
using the button below.

[File an appeal]

For dismissals (optional):

A report was filed against your [content/account], but after review
it was judged not to violate the code of conduct.

That said, some community members may have found it uncomfortable,
so please keep that in mind.

Details:
[a brief description]

What the reported user can see

Information: Visible?
That a report was filed: yes
The code of conduct clauses cited as violated: yes
The content in question: yes
The action taken and its duration: yes
The moderator's reasoning: yes
Who filed the report: no
The reporter's original reason: no
The number of reports: no

Rationale: the reported user gets all the information needed to improve, while anything that could identify the reporter is strictly protected.

Restrictions while sanctioned

Content censorship

  • The content is hidden from timelines and search
  • It remains reachable via its direct link (permalink), but a censorship notice is shown
  • The author can still see their own content

Temporary suspension

  • Cannot create new content
  • Cannot comment
  • Cannot react
  • Cannot follow or unfollow
  • Can still read existing content
  • Can receive DMs but cannot send them

Permanent suspension

  • Cannot access the account
  • Cannot use any features
  • Existing content is hidden

Appeal process

Who can appeal

  • Only the reported user who received an action can appeal
  • One appeal per action
  • Deadline: within 14 days of being notified of the action

Appeal flow

(Flowchart) The reported user starts an appeal → writes the appeal → submits it → a moderator (preferably a different one) reviews it → a decision is made: reject the appeal, uphold the action, or modify the action → the result is communicated to the reported user and the original reporter.

Appeal form

Reason for the appeal:
[Explain why you believe this action is unjustified]

Additional context or evidence:
[Provide any context or information you believe
was not considered when the action was decided]

[Submit]

Reviewing the appeal

Review principles
  • If possible, a moderator other than the one who decided the original action reviews it
  • The original report, the reason for the action, and the appeal are reviewed together
  • Check whether there is new information or context
Decision options
  • Reject the appeal: the original action stands
  • Soften the action: change to a lighter action (e.g., suspension → warning)
  • Withdraw the action: cancel the action and correct the record
  • Escalate the action: rare; only when a more serious violation is discovered during the appeal

Communicating the outcome

To the reported user:

Here is the result of the review of your appeal.

Decision: [appeal rejected / action softened / action withdrawn]

Reasoning:
[the moderator's explanation of the review]

(If applicable) revised action:
[the new action]

To the original reporter:

The user you reported filed an appeal,
and the case has been re-reviewed.

Re-review result: [original action upheld / action changed]

(If the action was changed)
A brief explanation of the change:
[explanation]

Penalty system

Penalty types and criteria

Warning

Description
The lightest action: it informs the user of the violation and asks them not to repeat it.
Criteria
  • Minor code of conduct violations
  • A first offense with no apparent malice
  • Violations judged to stem from a mistake or ignorance
Effects
  • A warning message is sent
  • The warning is recorded and considered in future decisions
  • It may be removed from the record after a certain period (e.g., one year)
Accumulated warnings
  • Three accumulated warnings automatically flag the user for consideration of stronger action
  • However, there is no automatic sanction; a moderator's judgment is required

Content censorship

Description
Hides a specific piece of content from public view.
Criteria
  • The content itself violates the code of conduct
  • A specific piece of content is the problem rather than the user's overall behavior
Effects
  • The content is excluded from timelines, search, and recommendations
  • The permalink still works, but visitors see a censorship notice
  • A Delete activity may be sent to other servers over federation

How censored content is displayed

This content has been censored for violating the code of conduct.
[View original] (shown with a warning when clicked)

Temporary suspension

Description
Restricts account activity for a set period.
Criteria
  • Violations continue despite warnings
  • Mid-level but serious violations
  • An immediate halt to activity is needed, but not a permanent suspension
Suspension period
  • Minimum 1 day, maximum 90 days
  • Decided by the moderator based on the severity of the violation
  • Recommended guidelines:
    • Minor repeated violations: 1–7 days
    • Mid-level violations: 7–30 days
    • Serious violations (first offense): 30–90 days
Effects
  • Cannot create new content
  • Cannot interact (reactions, comments, etc.)
  • Can still read existing content
  • Full functionality is restored when the suspension ends
For remote users
  • Federation with them is blocked within Hackers' Pub for the duration
  • The remote server's moderators are notified via an ActivityPub Flag activity

Permanent suspension

Description
The strongest action: the account is permanently deactivated.
Criteria
  • Very serious code of conduct violations (hate speech, illegal content, etc.)
  • The same violation is repeated even after a temporary suspension
  • Clear malicious intent to harm the community has been confirmed
Effects
  • Cannot log in to the account
  • Cannot use any features
  • Public content is hidden
  • A suspension notice is shown on the profile page
For remote users
  • Federation with Hackers' Pub is blocked permanently
  • The remote server's moderators are notified via an ActivityPub Flag activity
Reinstatement
  • As a rule, permanent suspensions are not reversed
  • In highly exceptional cases, a re-review may be requested after sufficient time has passed

Penalty history retention

Penalty: retention period and notes
Warning: kept for 1 year; removed if there are no further violations in that time
Content censorship: indefinite; kept as long as the content exists
Temporary suspension: indefinite; the record is kept, but elapsed time is considered in later decisions
Permanent suspension: indefinite

ActivityPub federation handling

Overview

Hackers' Pub is part of a decentralized network that uses the ActivityPub protocol, so the flag feature has to work smoothly in that environment as well.

The Flag activity

The ActivityPub specification defines a Flag activity, which can be used to propagate reports across the federated network.

Flag activity structure:

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Flag",
  "actor": "https://hackerspub.example/users/moderator",
  "object": [
    "https://remote.example/users/reported_user",
    "https://remote.example/posts/problematic_post"
  ],
  "content": "Violation of Code of Conduct: harassment"
}

Handling reports about remote content

Receiving the report

  1. A local user reports remote content or a remote user
  2. The report is stored in the Hackers' Pub database
  3. Moderators review it just like any other report

Applying actions

  1. Actions within Hackers' Pub:

    • Hide or delete the local cache of the content
    • Block federation with the user (temporarily or permanently)
  2. Notifying the remote server (optional):

    • Send a Flag activity to the remote server
    • Whether the remote server acts on it is at that server's discretion

Handling Flag activities received from outside

When another server sends a Flag activity to Hackers' Pub:

  1. Receive and parse the Flag activity
  2. Check whether the target is a local user or local content
  3. Notify moderators, marking it as an external report
  4. Moderators review it and act at their own discretion

How external reports are shown:

[External report] received from remote.example

Target: content by @localuser
Reason: "Violation of our community guidelines"

* This report was received from an external server.
  Please judge it against our own code of conduct.

Mastodon compatibility

Mastodon is the most widely used ActivityPub implementation. For compatibility with Mastodon, the following are taken into account:

  • Supporting Mastodon's Flag activity format
  • Possible integration with Mastodon's moderation API (in the future)
  • Receiving and handling reports sent from Mastodon

Notifications

Notification types

Type (recipient): Content
flag_received (moderators): a new report has been received
flag_resolved (reporter): the report has been handled
action_taken (reported user): an action has been taken
appeal_received (moderators): an appeal has been received
appeal_resolved (reported user): the appeal has been handled
appeal_result (reporter): notice of changes resulting from an appeal
suspension_ending (reported user): notice that a suspension is about to end

Notification channels

In-app notifications
The default channel
Email
For important notices (actions, suspensions, etc.)
ActivityPub
For remote users, delivered to their server

Privacy and security

Protecting reporter anonymity

Principle
The reporter's identity is never disclosed to the reported user.
Technical measures
  • Reporter information is filtered out of API responses
  • Reporter information is shown only in the moderator UI
  • Reporter information is masked in logs (where necessary)

Data access control

Role: Accessible information
Regular user: only their own reports
Reported user: the actions and reasons concerning them (excluding reporter information)
Moderator: all report information (including reporter information)

Content snapshots

Why content is snapshotted at report time:

  • The original evidence is preserved even if the reported user edits or deletes the content
  • Keeps a record for fair judgment
  • Serves as reference material for appeals
Retention
  • Kept for at least one year after the case is closed
  • Kept longer where legally required

Abuse prevention

Preventing false reports
  • Limits on the same user repeatedly reporting the same target
  • Reporters can be sanctioned for making false reports
  • Reporting patterns are monitored
Preventing report floods
  • Rate limiting when many reports arrive in a short time
  • Moderators are alerted to abnormal patterns

Moderator dashboard

Dashboard overview

The moderator dashboard is the central hub for report management.

Main screens
  1. Pending report list
  2. Report detail and handling screen
  3. Appeal list
  4. Statistics and analytics
  5. List of sanctioned users

Report list screen

┌─────────────────────────────────────────────────────────┐
│  Report management                    [View statistics]  │
├─────────────────────────────────────────────────────────┤
│  Filter: [All ▼] [Pending ▼] [Newest ▼]  Search: [____]  │
├─────────────────────────────────────────────────────────┤
│                                                          │
│  ⚠️ High priority (5 or more reports)                    │
│  ┌─────────────────────────────────────────────────┐    │
│  │ 🔴 Content by @user123 (7 reports)              │    │
│  │    "Hate speech", "Discriminatory language",    │    │
│  │    and 5 more                                   │    │
│  │    First report: 2 hours ago                    │    │
│  └─────────────────────────────────────────────────┘    │
│                                                          │
│  Regular reports                                         │
│  ┌─────────────────────────────────────────────────┐    │
│  │ 🟡 User @remote@other.server (2 reports)        │    │
│  │    "Spam"                                       │    │
│  │    First report: 5 hours ago                    │    │
│  └─────────────────────────────────────────────────┘    │
│  ┌─────────────────────────────────────────────────┐    │
│  │ 🟢 Comment by @newuser (1 report)               │    │
│  │    "Inappropriate language"                     │    │
│  │    Reported: 1 day ago                          │    │
│  └─────────────────────────────────────────────────┘    │
│                                                          │
└─────────────────────────────────────────────────────────┘

Report detail screen

┌─────────────────────────────────────────────────────────┐
│  Report details - Case #12345                [← Back]    │
├─────────────────────────────────────────────────────────┤
│                                                          │
│  📋 Basic information                                    │
│  ────────────────────────────────────────                │
│  Target: content by @user123                             │
│  Type: note                                              │
│  Number of reports: 7                                    │
│  Status: pending                                         │
│                                                          │
│  📝 Reported content                                     │
│  ────────────────────────────────────────                │
│  ┌─────────────────────────────────────────────────┐    │
│  │ [original content shown here]                   │    │
│  │ Posted: 2024-12-01 14:30                        │    │
│  └─────────────────────────────────────────────────┘    │
│                                                          │
│  🔍 Report reasons (7)                                   │
│  ────────────────────────────────────────                │
│  1. "Clearly hate speech" - Reporter A, 2 hours ago      │
│  2. "Demeans a specific group" - Reporter B, 3 hours ago │
│  3. "Offensive, discriminatory language" - Reporter C,   │
│     4 hours ago                                          │
│  ... (show more)                                         │
│                                                          │
│  🤖 LLM analysis                                         │
│  ────────────────────────────────────────                │
│  Relevant code of conduct clauses:                       │
│  - No discrimination (confidence: 95%)                   │
│  - Respectful language (confidence: 88%)                 │
│                                                          │
│  📊 Reported user's history                              │
│  ────────────────────────────────────────                │
│  - Joined: 2024-06-15                                    │
│  - Previous warnings: 1 (2024-09-20)                     │
│  - Previous suspensions: none                            │
│                                                          │
│  ⚡ Actions                                              │
│  ────────────────────────────────────────                │
│  [Dismiss] [Warn] [Censor content] [Suspend]             │
│  [Permanently suspend]                                   │
│                                                          │
└─────────────────────────────────────────────────────────┘

Statistics screen

A period dropdown sets the reporting window (e.g., the last 30 days).

Summary

Total reports: 127
Resolved: 98 (77%)
Average handling time: 4.2 hours

Action distribution

Action: count (share)
Dismissed: 45 (46%)
Warning: 38 (39%)
Content censorship: 10 (10%)
Temporary suspension: 4 (4%)
Permanent suspension: 1 (1%)

Violation types (top 5)

Rank. Type: count
1. Spam/advertising: 32
2. Hate speech: 24
3. Harassment: 18
4. Inappropriate content: 12
5. Misinformation: 8

Future considerations

Automation (to be evaluated later)

  • Auto-hide: temporarily hide content that receives more reports than a set threshold, pending moderator review
  • AI-based pre-filtering: automatically detect clearly violating content
  • Automatic spam handling: automatic action against obvious spam

Caution

Automation carries a risk of false positives and should be introduced carefully.

Community involvement

  • Trusted reporters: give more weight to reports from users with a history of accurate reports
  • Community moderators: consider a community moderator program to spread the moderation load

Multilingual support

  • Automatic translation of report reasons (when moderators use a different language)
  • Integration with multilingual versions of the code of conduct
  • Multilingual templates for action notices

Legal requirements

  • Procedures for preserving and providing data in response to legal requests
  • A separate process for copyright infringement reports (DMCA, etc.)
  • Procedures for cooperating with law enforcement

Appendix: terminology table

Korean: English: Description
신고: flag/report: notifying moderators of suspected violations
행동 강령: code of conduct: the community's rules
관리자: moderator: someone authorized to handle reports
검열: censorship: hiding content from public view
정지: suspension: restricting account activity
이의 제기: appeal: a request to re-review an action
연합: federation: connections between decentralized servers
콘텐츠: post: articles and notes collectively
게시글: article: a long-form, blog-style post
단문: note: a short, microblog-style post
타임라인: timeline: the content feed
팔로: follow: subscribing to another user
팔로워: follower: a user who subscribes to you
차단: block: restricting a specific user's access
반응: react: reacting to content with an emoji
연합우주: fediverse: the decentralized social network built on ActivityPub
인스턴스: instance: an individual server in the fediverse

This document will continue to be improved based on feedback from the Hackers' Pub community.


I couldn't find a logging library that worked for my library, so I made one

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

When I started building Fedify, an ActivityPub server framework, I ran into a problem that surprised me: I couldn't figure out how to add logging.

Not because logging is hard—there are dozens of mature logging libraries for JavaScript. The problem was that they're primarily designed for applications, not for libraries that want to stay unobtrusive.

I wrote about this a few months ago, and the response was modest—some interest, some skepticism, and quite a bit of debate about whether the post was AI-generated. I'll be honest: English isn't my first language, so I use LLMs to polish my writing. But the ideas and technical content are mine.

Several readers wanted to see a real-world example rather than theory.

The problem: existing loggers assume you're building an app

Fedify helps developers build federated social applications using the ActivityPub protocol. If you've ever worked with federation, you know debugging can be painful. When an activity fails to deliver, you need to answer questions like:

  • Did the HTTP request actually go out?
  • Was the signature generated correctly?
  • Did the remote server reject it? Why?
  • Was there a problem parsing the response?

These questions span multiple subsystems: HTTP handling, cryptographic signatures, JSON-LD processing, queue management, and more. Without good logging, debugging turns into guesswork.

But here's the dilemma I faced as a library author: if I add verbose logging to help with debugging, I risk annoying users who don't want their console cluttered with Fedify's internal chatter. If I stay silent, users struggle to diagnose issues.

I looked at the existing options. With winston or Pino, I would have to either:

  • Configure a logger inside Fedify (imposing my choices on users), or
  • Ask users to pass a logger instance to Fedify (adding boilerplate)

There's also debug, which is designed for this use case. But it doesn't give you structured, level-based logs that ops teams expect—and it relies on environment variables, which some runtimes like Deno restrict by default for security reasons.

None of these felt right. So I built LogTape—a logging library designed from the ground up for library authors. And Fedify became its first real user.

The solution: hierarchical categories with zero default output

The key insight was simple: a library should be able to log without producing any output unless the application developer explicitly enables it.

Fedify uses LogTape's hierarchical category system to give users fine-grained control over what they see. Here's how the categories are organized:

Category → What it logs
["fedify"] → Everything from the library
["fedify", "federation", "inbox"] → Incoming activities
["fedify", "federation", "outbox"] → Outgoing activities
["fedify", "federation", "http"] → HTTP requests and responses
["fedify", "sig", "http"] → HTTP Signature operations
["fedify", "sig", "ld"] → Linked Data Signature operations
["fedify", "sig", "key"] → Key generation and retrieval
["fedify", "runtime", "docloader"] → JSON-LD document loading
["fedify", "webfinger", "lookup"] → WebFinger resource lookups
…and about a dozen more. Each category corresponds to a distinct subsystem.

This means a user can configure logging like this:

import { configure, getConsoleSink } from "@logtape/logtape";

await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    // Show errors from all of Fedify
    { category: "fedify", sinks: ["console"], lowestLevel: "error" },
    // But show debug info for inbox processing specifically
    { category: ["fedify", "federation", "inbox"], sinks: ["console"], lowestLevel: "debug" },
  ],
});

When something goes wrong with incoming activities, they get detailed logs for that subsystem while keeping everything else quiet. No code changes required—just configuration.

Request tracing with implicit contexts

The hierarchical categories solved the filtering problem, but there was another challenge: correlating logs across async boundaries.

In a federated system, a single user action might trigger a cascade of operations: fetch a remote actor, verify their signature, process the activity, fan out to followers, and so on. When something fails, you need to correlate all the log entries for that specific request.

Fedify uses LogTape's implicit context feature to automatically tag every log entry with a requestId:

await configure({
  sinks: {
    file: getFileSink("fedify.jsonl", { formatter: jsonLinesFormatter })
  },
  loggers: [
    { category: "fedify", sinks: ["file"], lowestLevel: "info" },
  ],
  contextLocalStorage: new AsyncLocalStorage(),  // Enables implicit contexts
});

With this configuration, every log entry automatically includes a requestId property. When you need to debug a specific request, you can filter your logs:

jq 'select(.properties.requestId == "abc-123")' fedify.jsonl

And you'll see every log entry from that request—across all subsystems, all in order. No manual correlation needed.

The requestId is derived from standard headers when available (X-Request-Id, Traceparent, etc.), so it integrates naturally with existing observability infrastructure.
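
Inside the library, establishing that context amounts to wrapping the request handling in withContext(). Here is a rough sketch; the handler and helper names are hypothetical, not Fedify's actual internals:

import { withContext } from "@logtape/logtape";

async function handleInbox(request: Request): Promise<Response> {
  // Prefer an ID supplied by the caller's infrastructure; otherwise mint one.
  const requestId = request.headers.get("X-Request-Id") ?? crypto.randomUUID();
  // Every log entry emitted anywhere inside this callback, across awaits and
  // nested calls, automatically carries the requestId property.
  return withContext({ requestId }, () => processActivity(request));
}

async function processActivity(request: Request): Promise<Response> {
  // ... verify signatures, dispatch handlers, enqueue deliveries ...
  return new Response(null, { status: 202 });
}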

What users actually see

So what does all this configuration actually mean for someone using Fedify?

If a Fedify user doesn't configure LogTape at all, they see nothing. No warnings about missing configuration, no default output, and minimal performance overhead—the logging calls are essentially no-ops.

For basic visibility, they can enable error-level logging for all of Fedify with three lines of configuration. When debugging a specific issue, they can enable debug-level logging for just the relevant subsystem.
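
That minimal setup might look like this, a sketch using the same configure() call shown earlier:

import { configure, getConsoleSink } from "@logtape/logtape";

await configure({
  sinks: { console: getConsoleSink() },
  loggers: [{ category: "fedify", sinks: ["console"], lowestLevel: "error" }],
});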

And if they're running in production with serious observability requirements, they can pipe structured JSON logs to their monitoring system with request correlation built in.

The same library code supports all these scenarios—whether the user is running on Node.js, Deno, Bun, or edge functions, without extra polyfills or shims. The user decides what they need.

Lessons learned

Building Fedify with LogTape taught me a few things:

Design your categories early. The hierarchical structure should reflect how users will actually want to filter logs. I organized Fedify's categories around subsystems that users might need to debug independently.

Use structured logging. Properties like requestId, activityId, and actorId are far more useful than string interpolation when you need to analyze logs programmatically.
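
For example (a sketch with made-up property values):

import { getLogger } from "@logtape/logtape";

const logger = getLogger(["fedify", "federation", "outbox"]);

// Structured properties stay queryable (e.g., with the jq filter above),
// unlike a pre-interpolated string.
logger.info("Delivered activity {activityId} to {inbox}", {
  activityId: "https://example.com/activities/42",
  inbox: "https://remote.example/users/alice/inbox",
});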

Implicit contexts turned out to be more useful than I expected. Being able to correlate logs across async boundaries without passing context manually made debugging distributed operations much easier. When a user reports that activity delivery failed, I can give them a single jq command to extract everything relevant.

Trust your users. Some library authors worry about exposing too much internal detail through logs. I've found the opposite—users appreciate being able to see what's happening when they need to. The key is making it opt-in.

Try it yourself

If you're building a library and struggling with the logging question—how much to log, how to give users control, how to avoid being noisy—I'd encourage you to look at how Fedify does it.

The Fedify logging documentation explains everything in detail. And if you want to understand the philosophy behind LogTape's design, my earlier post covers that.

LogTape isn't trying to replace winston or Pino for application developers who are happy with those tools. It fills a different gap: logging for libraries that want to stay out of the way until users need them. If that's what you're looking for, it might be a better fit than the usual app-centric loggers.

Stop writing if statements for your CLI flags

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

If you've built CLI tools, you've written code like this:

if (opts.reporter === "junit" && !opts.outputFile) {
  throw new Error("--output-file is required for junit reporter");
}
if (opts.reporter === "html" && !opts.outputFile) {
  throw new Error("--output-file is required for html reporter");
}
if (opts.reporter === "console" && opts.outputFile) {
  console.warn("--output-file is ignored for console reporter");
}

A few months ago, I wrote a post called "Stop writing CLI validation. Parse it right the first time." about parsing individual option values correctly. But it didn't cover the relationships between options.

In the code above, --output-file only makes sense when --reporter is junit or html. When it's console, the option shouldn't exist at all.

We're using TypeScript. We have a powerful type system. And yet, here we are, writing runtime checks that the compiler can't help with. Every time we add a new reporter type, we need to remember to update these checks. Every time we refactor, we hope we didn't miss one.

The state of TypeScript CLI parsers

The old guard—Commander, yargs, minimist—were built before TypeScript became mainstream. They give you bags of strings and leave type safety as an exercise for the reader.

But we've made progress. Modern TypeScript-first libraries like cmd-ts and Clipanion (the library powering Yarn Berry) take types seriously:

// cmd-ts
import { command, option, string } from "cmd-ts";

const app = command({
  args: {
    reporter: option({ type: string, long: 'reporter' }),
    outputFile: option({ type: string, long: 'output-file' }),
  },
  handler: (args) => {
    // args.reporter: string
    // args.outputFile: string
  },
});
// Clipanion
import { Command, Option } from "clipanion";

class TestCommand extends Command {
  reporter = Option.String('--reporter');
  outputFile = Option.String('--output-file');
}

These libraries infer types for individual options. --port is a number. --verbose is a boolean. That's real progress.

But here's what they can't do: express that --output-file is required when --reporter is junit, and forbidden when --reporter is console. The relationship between options isn't captured in the type system.

So you end up writing validation code anyway:

handler: (args) => {
  // Both cmd-ts and Clipanion need this
  if (args.reporter === "junit" && !args.outputFile) {
    throw new Error("--output-file required for junit");
  }
  // args.outputFile is still string | undefined
  // TypeScript doesn't know it's definitely string when reporter is "junit"
}

Rust's clap and Python's Click have requires and conflicts_with attributes, but those are runtime checks too. They don't change the result type.

If the parser configuration knows about option relationships, why doesn't that knowledge show up in the result type?

Modeling relationships with conditional()

Optique treats option relationships as a first-class concept. Here's the test reporter scenario:

import { conditional, object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { choice, string } from "@optique/core/valueparser";
import { run } from "@optique/run";

const parser = conditional(
  option("--reporter", choice(["console", "junit", "html"])),
  {
    console: object({}),
    junit: object({
      outputFile: option("--output-file", string()),
    }),
    html: object({
      outputFile: option("--output-file", string()),
      openBrowser: option("--open-browser"),
    }),
  }
);

const [reporter, config] = run(parser);

The conditional() combinator takes a discriminator option (--reporter) and a map of branches. Each branch defines what other options are valid for that discriminator value.

TypeScript infers the result type automatically:

type Result =
  | ["console", {}]
  | ["junit", { outputFile: string }]
  | ["html", { outputFile: string; openBrowser: boolean }];

When reporter is "junit", outputFile is string—not string | undefined. The relationship is encoded in the type.

Now your business logic gets real type safety:

const [reporter, config] = run(parser);

switch (reporter) {
  case "console":
    runWithConsoleOutput();
    break;
  case "junit":
    // TypeScript knows config.outputFile is string
    writeJUnitReport(config.outputFile);
    break;
  case "html":
    // TypeScript knows config.outputFile and config.openBrowser exist
    writeHtmlReport(config.outputFile);
    if (config.openBrowser) openInBrowser(config.outputFile);
    break;
}

No validation code. No runtime checks. If you add a new reporter type and forget to handle it in the switch, the compiler tells you.
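
That compiler nudge is the standard never-based exhaustiveness idiom. A minimal sketch, independent of Optique, using a hypothetical Reporter union:

type Reporter = "console" | "junit" | "html";

function assertNever(value: never): never {
  throw new Error(`Unhandled case: ${String(value)}`);
}

function handleReporter(reporter: Reporter): void {
  switch (reporter) {
    case "console":
    case "junit":
    case "html":
      break;
    default:
      // If a fourth Reporter member is added and not handled above, `reporter`
      // no longer narrows to `never` here and this call fails to compile.
      assertNever(reporter);
  }
}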

A more complex example: database connections

Test reporters are a nice example, but let's try something with more variation. Database connection strings:

myapp --db=sqlite --file=./data.db
myapp --db=postgres --host=localhost --port=5432 --user=admin
myapp --db=mysql --host=localhost --port=3306 --user=root --ssl

Each database type needs completely different options:

  • SQLite just needs a file path
  • PostgreSQL needs host, port, user, and optionally password
  • MySQL needs host, port, user, and has an SSL flag

Here's how you model this:

import { conditional, object } from "@optique/core/constructs";
import { withDefault, optional } from "@optique/core/modifiers";
import { option } from "@optique/core/primitives";
import { choice, string, integer } from "@optique/core/valueparser";

const dbParser = conditional(
  option("--db", choice(["sqlite", "postgres", "mysql"])),
  {
    sqlite: object({
      file: option("--file", string()),
    }),
    postgres: object({
      host: option("--host", string()),
      port: withDefault(option("--port", integer()), 5432),
      user: option("--user", string()),
      password: optional(option("--password", string())),
    }),
    mysql: object({
      host: option("--host", string()),
      port: withDefault(option("--port", integer()), 3306),
      user: option("--user", string()),
      ssl: option("--ssl"),
    }),
  }
);

The inferred type:

type DbConfig =
  | ["sqlite", { file: string }]
  | ["postgres", { host: string; port: number; user: string; password?: string }]
  | ["mysql", { host: string; port: number; user: string; ssl: boolean }];

Notice the details: PostgreSQL defaults to port 5432, MySQL to 3306. PostgreSQL has an optional password, MySQL has an SSL flag. Each database type has exactly the options it needs—no more, no less.

With this structure, writing dbConfig.ssl when the mode is sqlite isn't a runtime error—it's a compile-time impossibility.
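
Concretely, the call site might look like this. This is a sketch assuming the dbParser above; the connect* helpers are hypothetical:

import { run } from "@optique/run";

declare function connectSqlite(file: string): void;
declare function connectPostgres(host: string, port: number, user: string, password?: string): void;
declare function connectMysql(host: string, port: number, user: string, ssl: boolean): void;

const [db, config] = run(dbParser);

switch (db) {
  case "sqlite":
    connectSqlite(config.file);
    // Referencing config.ssl here would be a compile-time error:
    // that field exists only on the mysql branch.
    break;
  case "postgres":
    connectPostgres(config.host, config.port, config.user, config.password);
    break;
  case "mysql":
    connectMysql(config.host, config.port, config.user, config.ssl);
    break;
}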

Try expressing this with requires_if attributes. You can't. The relationships are too rich.

The pattern is everywhere

Once you see it, you find this pattern in many CLI tools:

Authentication modes:

import { conditional, object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { choice, string, url } from "@optique/core/valueparser";

const authParser = conditional(
  option("--auth", choice(["none", "basic", "token", "oauth"])),
  {
    none: object({}),
    basic: object({
      username: option("--username", string()),
      password: option("--password", string()),
    }),
    token: object({
      token: option("--token", string()),
    }),
    oauth: object({
      clientId: option("--client-id", string()),
      clientSecret: option("--client-secret", string()),
      tokenUrl: option("--token-url", url()),
    }),
  }
);

Deployment targets, output formats, connection protocols—anywhere you have a mode selector that determines what other options are valid.

Why conditional() exists

Optique already has an or() combinator for mutually exclusive alternatives. Why do we need conditional()?

The or() combinator distinguishes branches based on structure—which options are present. It works well for subcommands like git commit vs git push, where the arguments differ completely.

But in the reporter example, the structure is identical: every branch has a --reporter flag. The difference lies in the flag's value, not its presence.

// This won't work as intended
const parser = or(
  object({ reporter: option("--reporter", choice(["console"])) }),
  object({ 
    reporter: option("--reporter", choice(["junit", "html"])),
    outputFile: option("--output-file", string())
  }),
);

When you pass --reporter junit, or() tries to pick a branch based on what options are present. Both branches have --reporter, so it can't distinguish them structurally.

conditional() solves this by reading the discriminator's value first, then selecting the appropriate branch. It bridges the gap between structural parsing and value-based decisions.

The structure is the constraint

Instead of parsing options into a loose type and then validating relationships, define a parser whose structure is the constraint.

  • Traditional approach: Parse → Validate → Use. Optique: Parse (with constraints) → Use.
  • Traditional approach: types and validation logic are maintained separately. Optique: the types reflect the constraints.
  • Traditional approach: mismatches are found at runtime. Optique: mismatches are found at compile time.

The parser definition becomes the single source of truth. Add a new reporter type? The parser definition changes, the inferred type changes, and the compiler shows you everywhere that needs updating.

Try it

If this resonates with a CLI you're building:

  • Documentation
  • Tutorial
  • conditional() reference
  • GitHub

Next time you're about to write an if statement checking option relationships, ask: could the parser express this constraint instead?

The structure of your parser is the constraint. You might not need that validation code at all.

Optique 0.8.0: Conditional parsing, pass-through options, and LogTape integration

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

We're excited to announce Optique 0.8.0! This release introduces powerful new features for building sophisticated CLI applications: the conditional() combinator for discriminated union patterns, the passThrough() parser for wrapper tools, and the new @optique/logtape package for seamless logging configuration.

Optique is a type-safe combinatorial CLI parser for TypeScript, providing a functional approach to building command-line interfaces with composable parsers and full type inference.

New conditional parsing with conditional()

Ever needed to enable different sets of options based on a discriminator value? The new conditional() combinator makes this pattern first-class. It creates discriminated unions where certain options only become valid when a specific discriminator value is selected.

import { conditional, object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { choice, string } from "@optique/core/valueparser";

const parser = conditional(
  option("--reporter", choice(["console", "junit", "html"])),
  {
    console: object({}),
    junit: object({ outputFile: option("--output-file", string()) }),
    html: object({ outputFile: option("--output-file", string()) }),
  }
);
// Result type: ["console", {}] | ["junit", { outputFile: string }] | ...

Key features:

  • Explicit discriminator option determines which branch is selected
  • Tuple result [discriminator, branchValue] for clear type narrowing
  • Optional default branch for when discriminator is not provided
  • Clear error messages indicating which options are required for each discriminator value

The conditional() parser provides a more structured alternative to or() for discriminated union patterns. Use it when you have an explicit discriminator option that determines which set of options is valid.

See the conditional() documentation for more details and examples.

Pass-through options with passThrough()

Building wrapper CLI tools that need to forward unrecognized options to an underlying tool? The new passThrough() parser enables legitimate wrapper/proxy patterns by capturing unknown options without validation errors.

import { object } from "@optique/core/constructs";
import { option, passThrough } from "@optique/core/primitives";

const parser = object({
  debug: option("--debug"),
  extra: passThrough(),
});

// mycli --debug --foo=bar --baz=qux
// → { debug: true, extra: ["--foo=bar", "--baz=qux"] }

Key features:

  • Three capture formats: "equalsOnly" (default, safest), "nextToken" (captures --opt val pairs), and "greedy" (captures all remaining tokens)
  • Lowest priority (−10) ensures explicit parsers always match first
  • Respects -- options terminator in "equalsOnly" and "nextToken" modes
  • Works seamlessly with object(), subcommands, and other combinators

This feature is designed for building Docker-like CLIs, build tool wrappers, or any tool that proxies commands to another process.
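
As a sketch of that wrapper pattern, assuming the parser above and a hypothetical underlying legacy-cli binary, using run() from @optique/run as in the earlier posts:

import { spawnSync } from "node:child_process";
import { run } from "@optique/run";

const { debug, extra } = run(parser);

// Forward everything we didn't recognize to the wrapped tool.
const result = spawnSync("legacy-cli", [...extra], { stdio: "inherit" });
if (debug) console.error(`legacy-cli exited with status ${result.status}`);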

See the passThrough() documentation for usage patterns and best practices.

LogTape logging integration

The new @optique/logtape package provides seamless integration with LogTape, enabling you to configure logging through command-line arguments with various parsing strategies.

# Deno
deno add --jsr @optique/logtape @logtape/logtape

# npm
npm add @optique/logtape @logtape/logtape

Quick start with the loggingOptions() preset:

import { loggingOptions, createLoggingConfig } from "@optique/logtape";
import { object } from "@optique/core/constructs";
import { parse } from "@optique/core/parser";
import { configure } from "@logtape/logtape";

const parser = object({
  logging: loggingOptions({ level: "verbosity" }),
});

const args = ["-vv", "--log-output=-"];
const result = parse(parser, args);
if (result.success) {
  const config = await createLoggingConfig(result.value.logging);
  await configure(config);
}

The package offers multiple approaches to control log verbosity:

  • verbosity() parser: The classic -v/-vv/-vvv pattern where each flag increases verbosity (no flags → "warning", -v → "info", -vv → "debug", -vvv → "trace")
  • debug() parser: Simple --debug/-d flag that toggles between normal and debug levels
  • logLevel() value parser: Explicit --log-level=debug option for direct level selection
  • logOutput() parser: Log output destination with - for console or file path for file output

See the LogTape integration documentation for complete examples and configuration options.

Bug fix: negative integers now accepted

Fixed an issue where the integer() value parser rejected negative integers when using type: "number". The regex pattern has been updated from /^\d+$/ to /^-?\d+$/ to correctly handle values like -42. Note that type: "bigint" already accepted negative integers, so this change brings consistency between the two types.
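
For example, an option like the following now accepts negative input such as --offset -42 (a minimal sketch, assuming integer() defaults to type: "number"):

import { object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { integer } from "@optique/core/valueparser";

// Before 0.8.0 this rejected negative values; now -42 parses to the number -42.
const parser = object({
  offset: option("--offset", integer()),
});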

Installation

# Deno
deno add jsr:@optique/core

# npm
npm add @optique/core

# pnpm
pnpm add @optique/core

# Yarn
yarn add @optique/core

# Bun
bun add @optique/core

For the LogTape integration:

# Deno
deno add --jsr @optique/logtape @logtape/logtape

# npm
npm add @optique/logtape @logtape/logtape

# pnpm
pnpm add @optique/logtape @logtape/logtape

# Yarn
yarn add @optique/logtape @logtape/logtape

# Bun
bun add @optique/logtape @logtape/logtape

Looking forward

Optique 0.8.0 continues our focus on making CLI development more expressive and type-safe. The conditional() combinator brings discriminated union patterns to the forefront, passThrough() enables new wrapper tool use cases, and the LogTape integration makes logging configuration a breeze.

As always, all new features maintain full backward compatibility—your existing parsers continue to work unchanged.

We're grateful to the community for feedback and suggestions. If you have ideas for future improvements or encounter any issues, please let us know through GitHub Issues. For more information about Optique and its features, visit the documentation or check out the full changelog.

Optique 0.7.0: Smarter error messages and validation library integrations

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

Optique 0.7.0 introduces enhancements focused on improving the developer experience and expanding its ecosystem for type-safe CLI argument parsing in TypeScript. This release brings automatic "Did you mean?" suggestions to help users correct typos, along with seamless integrations for Zod and Valibot validation libraries, ensuring more robust and efficient CLI development. Duplicate option name detection is now included to catch configuration bugs early, and context-aware error messages provide users with precise feedback. The update also features customizable shell completion naming conventions and improved line break handling in error messages. With these new features, Optique aims to streamline CLI development in TypeScript, making it more intuitive and less error-prone. This release underscores Optique's commitment to providing developers with powerful tools for building high-quality CLI applications.

Optique 0.6.0: Shell completion support for type-safe CLI parsers

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

Optique 0.6.0 introduces intelligent shell completion to type-safe command-line applications, supporting Bash, zsh, fish, PowerShell, and Nushell. Unlike traditional CLI frameworks, Optique leverages the same parser structure for both argument parsing and completion, eliminating duplicate definitions and ensuring synchronization. Setting up completion is straightforward, with users generating and sourcing a completion script for their shell. The system works automatically with all Optique parser types, offering context-aware suggestions, including file system completion and custom logic for domain-specific value parsers. Additionally, the release enhances command documentation with separate brief, description, and footer texts, and introduces a `commandLine()` message term for clearer command-line examples in help text. Existing Optique users can easily migrate by adding a `completion` option to their `run()` configuration. This release aims to make Optique-based CLIs more user-friendly without sacrificing type safety and composability, providing sophisticated runtime features while maintaining compile-time guarantees.

Optique 0.5.0: Enhanced error handling and message customization

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

Optique 0.5.0 is now available, bringing enhancements to error handling, help text generation, and overall developer experience while maintaining full backward compatibility. The update introduces better code organization by refactoring the large `@optique/core/parser` module into three focused modules: `@optique/core/primitives`, `@optique/core/modifiers`, and `@optique/core/constructs`. Error handling is improved with automatic conversion of default value callback errors into parser-level errors, and the new `WithDefaultError` class allows for structured error messages. Customization of default values in help text is now possible via an optional third parameter to `withDefault()`, enabling descriptive text instead of actual values. Additionally, the release provides comprehensive error message customization across all parser types and combinators, allowing context-specific feedback. These improvements aim to make Optique more user-friendly, especially for building CLI applications that require clear error messages and helpful documentation, making this release a significant step forward for developers using Optique.

How I code with LLMs

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

This post shares the author's personal experience and tips for coding with LLMs (large language models). It stresses the importance of providing context when working with LLM coding agents, and explains why the author prefers Claude Code along with its strengths and weaknesses. It suggests using GitHub issues for detailed instructions and a division of labor in which humans do the design while the LLM handles the implementation. It also covers the importance of an AGENTS.md file containing project guidelines and how to supply documentation via Context7. The post proposes using plan mode to let the LLM run its own feedback loop, and mixing in hand-coding when needed to keep programming fun. It offers useful insights for developers looking to treat LLMs not as mere tools but as collaborative colleagues in order to improve development efficiency.

Upyo 0.3.0: Multi-provider resilience and deployment flexibility

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

Upyo 0.3.0 introduces three new transports designed to enhance email delivery capabilities and reliability. The update focuses on multi-provider support and flexible deployment options. The new pool transport, via the `@upyo/pool` package, combines multiple email providers with routing strategies like round-robin, weighted distribution, and priority-based routing, ensuring high availability and cost optimization. Additionally, the `@upyo/resend` package integrates Resend, an email service provider known for its developer-friendly approach and intelligent batch optimization. For those needing self-hosting options, the `@upyo/plunk` package supports Plunk, offering both cloud-hosted and Docker-based self-hosted deployments. These new transports maintain Upyo's consistent API design, ensuring they can be easily integrated into existing workflows. This release expands Upyo's utility by providing more robust and adaptable email delivery solutions.

LogTape 1.1.0: Smarter buffering, seamless integration

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

LogTape 1.1.0 introduces smarter and more flexible logging with two major features. The first is "fingers crossed" logging, which buffers debug and low-level logs in memory and only outputs them when an error occurs, providing a complete sequence of events leading up to the problem. Category isolation prevents one component's errors from flushing unrelated logs, keeping logs focused and relevant. The second feature is direct log emission via the `Logger.emit()` method, which allows feeding logs from external systems like Kafka directly into LogTape while preserving original timestamps and metadata. This release also includes bug fixes and improvements across the ecosystem, such as fixes for potential data loss during high-volume logging and improved cross-runtime compatibility. Upgrading to 1.1.0 is backward-compatible and enhances debugging in production by providing complete context for every error without constant verbose logging.

Stop writing CLI validation. Parse it right the first time.

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

This post introduces Optique, a new library created to address the pervasive problem of repetitive and often messy validation code in CLI tools. The author was motivated by the observation that nearly every CLI tool reinvents the wheel with similar validation patterns for dependent options, mutually exclusive options, and environment-specific requirements. Optique leverages parser combinators and TypeScript's type inference to ensure that CLI arguments are parsed directly into valid configurations, eliminating the need for manual validation. By describing the desired CLI configuration with Optique, TypeScript automatically infers the types and constraints, catching potential bugs at compile time. The author shares their experience of deleting large chunks of validation code and simplifying refactoring tasks. Optique aims to provide a more robust and maintainable approach to CLI argument parsing, potentially saving developers from writing the same validation logic repeatedly.

Optique 0.4.0: Better help, rich docs, and Temporal support

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

Optique 0.4.0 introduces enhancements to streamline CLI development in TypeScript. This release focuses on improving help text organization through labeled merge groups and a new `group()` combinator, making complex CLIs more user-friendly by organizing options under clear sections. Comprehensive documentation support is added via the `run()` function, allowing brief descriptions, detailed explanations, and footers without altering parser definitions. The update also includes Temporal API support with the `@optique/temporal` package, enabling type-safe parsing for dates, times, and time zones. Improved type inference for `merge()` and `tuple()` combinators enhances type safety, alongside minor breaking changes. These updates aim to make CLI construction more intuitive and maintainable, offering developers greater control over user experience and code structure.

Optique 0.3.0: Dependent options and flexible composition

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

Optique 0.3.0 introduces several enhancements aimed at simplifying the development of complex CLI applications. This release focuses on expanding parser flexibility and refining the help system, incorporating valuable community feedback. Key updates include the introduction of required Boolean flags using the new `flag()` parser, more flexible type defaults in `withDefault()` to support union types, and an extended `or()` capacity that now supports up to 10 parsers. The `merge()` combinator has also been enhanced to work with any object-producing parser, and context-aware help is now available through the `longestMatch()` combinator. Additionally, version display support has been added to both `@optique/core` and `@optique/run`, along with structured output functions for consistent terminal formatting. These improvements collectively provide developers with more powerful tools for building intuitive and feature-rich command-line interfaces.

Optique: Type-safe CLI parser combinators

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

This post introduces Optique, a new CLI parser library inspired by Haskell's `optparse-applicative` and TypeScript's Zod. Optique uses parser combinators to let you assemble a CLI's structure like LEGO bricks. Parsers and combinators such as `option()`, `optional()`, `multiple()`, `or()`, `object()`, `constant()`, `command()`, and `argument()` make it possible to define complex CLI structures flexibly. In particular, it shows through examples how to implement mutually exclusive options and subcommands with the `or()` and `object()` combinators. Optique focuses on being a plain CLI parser rather than providing every feature, but it is well suited to expressing complex CLI structures; the introductory documentation and tutorial cover the details.

Upyo 0.2.0 Release Notes

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

Upyo 0.2.0 has been released, introducing new features to this cross-runtime email library that supports Node.js, Deno, Bun, and edge functions. The latest version expands its capabilities with Amazon SES transport support, enabling AWS Signature v4 authentication and session-based authentication. Additionally, comprehensive OpenTelemetry integration has been added, offering distributed tracing, metrics collection, and error classification without altering existing code. The OpenTelemetry transport automatically instruments email operations, tracking delivery rates and latency, and integrates with existing OpenTelemetry infrastructure. Community feedback is encouraged to further improve Upyo, whether through testing the new Amazon SES transport, implementing OpenTelemetry, or contributing to the GitHub repository. This release enhances Upyo's utility by providing more transport options and robust observability features, making it a valuable tool for developers needing reliable email sending across various environments.

In praise of the contrarian stack

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

This post shares the author's experience of choosing alternative technologies rather than following the mainstream when picking a tech stack, an approach the author calls the "contrarian stack" (청개구리 스택). A contrarian stack can make troubleshooting harder because fewer people use it, but it offers a deeper understanding of the technology and more opportunities to contribute to open source. As a latecomer, an alternative design can also offer a better-considered approach than the canonical stack. Assembling many parts yourself is tedious, but it gives you a deep understanding of each piece. The author points out that today's canonical stack may well have been yesterday's contrarian stack, and argues that even in the LLM era the learning opportunities a contrarian stack offers will remain. The post closes with the message that the insights gained from walking a path with no answers on Stack Overflow are entirely your own, and encourages readers to make their own technology choices and take on the challenge.

OSSCA: A guide for Fedify project contributors

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

This post is a guide for participants in the Open Source Contribution Academy (OSSCA) and, more broadly, for anyone who wants to contribute to the Fedify project. It aims to help readers prepare to participate, find the communication channels, set up a development environment, and understand the project structure. The first assignments are to join the Fedify Discord server and introduce yourself, and to create a fediverse account to build a basic understanding of the fediverse. Along with a brief introduction to JavaScript and TypeScript, it explains that Fedify is an ActivityPub framework that makes it easier to build federated social networking software. It walks through forking and cloning the repository, setting up runtimes such as Node.js, Deno, and Bun, and configuring a development environment with Visual Studio Code. Finally, it introduces the structure of the Fedify repository, how to run linting and tests, how to find issues to work on, and where to find further information. With this guide, readers can take their first practical steps toward contributing to Fedify and gain confidence in contributing to open source.

Building your own fediverse microblog

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

This tutorial walks through building a microblog that implements the ActivityPub protocol using Fedify. Fedify is a TypeScript library that reduces the complexity of building federated server apps so developers can focus on business logic. The tutorial sets up a development environment with Node.js, npm, and Hono, builds a SQLite database, and implements the core features of a microblog step by step: account creation, profile pages, actor implementation, cryptographic key management, following, posting, and a timeline. It uses the ActivityPub.Academy server to test federation in a real fediverse environment and verifies compatibility with Mastodon. It closes with follow-up exercises for improving security and functionality so readers can extend the project further. Through this tutorial, readers gain a basic understanding of building a decentralized social networking service on ActivityPub with Fedify.

How to pass the invisible

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

This post explores the enduring challenge in software programming of how to pass invisible contextual information, such as loggers or request contexts, through applications without cumbersome explicit parameter passing. It examines various approaches throughout history, including dynamic scoping, aspect-oriented programming (AOP), context variables, monads, and effect systems. Each method offers a unique solution, from the simplicity of dynamic scoping in early Lisp to the modularity of AOP and the type-safe encoding of effects in modern functional programming. The post highlights the trade-offs of each approach, such as the unpredictability of dynamic scoping or the complexity of monad transformers. It also touches on how context variables are used in modern asynchronous and parallel programming, as well as in UI frameworks like React. The author concludes by noting that the art of passing the invisible is an eternal theme in software programming, and this post provides valuable insights into the evolution and future directions of this critical aspect of software architecture.

If you're building a JavaScript library and need logging, you'll probably love LogTape

洪 民憙 (Hong Minhee) @hongminhee@hackers.pub

LogTape offers a novel approach to logging in JavaScript libraries, designed to provide diagnostic capabilities without imposing choices on users. Unlike traditional methods such as using debug packages or custom logging systems, LogTape operates on a "library-first design" where logging is transparent and only activated when configured. This eliminates the fragmentation problem of managing multiple logging systems across different libraries. With zero dependencies and support for both ESM and CommonJS, LogTape ensures minimal impact on users' projects, avoiding dependency conflicts and enabling tree shaking. Its universal runtime support and efficient performance make it suitable for various environments. By using a hierarchical category system, LogTape prevents namespace collisions, offering a seamless developer experience with TypeScript support and structured logging patterns. LogTape provides adapters for popular logging libraries like Winston and Pino, bridging the transition for users invested in other systems. Ultimately, LogTape offers a way to enhance library capabilities while respecting users' preferences and existing choices, making it a valuable consideration for library authors.
