wisp.place implements a two-tier domain system:

  • Wisp Subdomains ({handle}.wisp.place) - Free, first-come-first-serve

  • Custom Domains (BYOD) - User-owned domains verified via DNS


Database Schema

The system uses PostgreSQL with three key tables:

-- Wisp subdomains
CREATE TABLE domains (
    domain TEXT PRIMARY KEY,           -- e.g., "mysite.wisp.place"
    did TEXT NOT NULL,                 -- User's DID (did:plc:xxx)
    rkey TEXT                          -- Site rkey (place.wisp.fs record key; null until mapped)
);

-- Custom domains
CREATE TABLE custom_domains (
    id TEXT PRIMARY KEY,               -- SHA256 hash (first 16 chars)
    domain TEXT UNIQUE NOT NULL,       -- e.g., "example.com"
    did TEXT NOT NULL,                 -- User's DID
    rkey TEXT,                         -- Site rkey (null until mapped)
    verified BOOLEAN DEFAULT false,    -- DNS verification status
    last_verified_at BIGINT            -- Last verification timestamp
);

-- Sites cache
CREATE TABLE sites (
    did TEXT NOT NULL,
    rkey TEXT NOT NULL,
    display_name TEXT,
    PRIMARY KEY (did, rkey)
);

Of course, with appropriate indexes:

  • domains(did, rkey) - Find sites by user

  • custom_domains(did) - Find user's custom domains

  • custom_domains(verified) - Batch verification checks


Subdomains

To claim a subdomain, the user must be authed into my service. That way I can prove they control the DID (they're OAuthed in), and in real time I can do schema validation and let them know whether what they're trying to claim has already been claimed. This is actually one thing I don't bother ingesting from the firehose, though I do write a place.wisp.domain metadata record into their repo.

/* POST /api/domain/claim
   Body: { handle: "myhandle" }

   1. Validate handle:
   - 3-63 characters
   - Only a-z, 0-9, hyphen
   - Not reserved (www, api, admin, static, public, preview)
   - Doesn't start or end with a hyphen
*/

-- 2. Check user domain limit (max 3 per user)
   SELECT COUNT(*) FROM domains WHERE did = ${userDid}

-- 3. Insert into database
   INSERT INTO domains (domain, did, rkey)
   VALUES ('myhandle.wisp.place', did, null)

// Write place.wisp.domain record to PDS
await agent.com.atproto.repo.putRecord({
	repo: auth.did,
	collection: "place.wisp.domain",
	rkey: "self",
	record: {
		$type: "place.wisp.domain",
		domain,
		createdAt: new Date().toISOString(),
	} as place.wisp.domain,
});
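
The handle validation itself is simple enough to sketch in a few lines of TypeScript. A minimal sketch; validateHandle and RESERVED are illustrative names, not necessarily the actual implementation:

// Sketch of the handle validation rules above.
const RESERVED = new Set(["www", "api", "admin", "static", "public", "preview"]);

function validateHandle(handle: string): string | null {
	if (handle.length < 3 || handle.length > 63) return "Must be 3-63 characters";
	if (!/^[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$/.test(handle))
		return "Only a-z, 0-9, and hyphens; no leading/trailing hyphen";
	if (RESERVED.has(handle)) return "Handle is reserved";
	return null; // valid
}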

The record is more of an acknowledgement that they claimed a wisp domain; the source of truth for this lives in my database. The reason is that I want deleting routing to be a conscious choice, because once it's gone, there's a window of opportunity for someone to hijack a former site and start hosting malicious content. Sure, not very "atproto", but I'd rather have it this way.


Custom Domains

/* POST /api/domain/custom/add
   Body: { domain: "example.com" }

 1. Comprehensive validation:
   - Length: 3-253 characters (RFC 1035)
   - Format: ^(?:[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?\.)+[a-z]{2,}$
   - Each label: 1-63 characters, no leading/trailing hyphen
   - TLD: minimum 2 chars, not all numeric
   - No non-ASCII characters (homograph attack prevention)
   - Blocks: localhost, example.com, IP addresses, private IPs
*/

-- 2. Check if domain already claimed
   SELECT * FROM custom_domains WHERE domain = ${domainLower}

-- 3. Generate hash ID
   id = SHA256("${did}:${domain}").substring(0, 16)

-- 4. Insert unverified
   INSERT INTO custom_domains (id, domain, did, rkey, verified)
   VALUES (hash, domain, did, null, false)

Database insertion:

  • verified = false initially (pending DNS verification)

  • rkey = null (set when user maps it to a site)

  • id is deterministic: SHA256 hash of DID + domain
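
The ID generation is a one-liner with Node's crypto module. A minimal sketch; customDomainId is my name for it:

import { createHash } from "node:crypto";

// First 16 hex chars of SHA256("{did}:{domain}"), matching the scheme
// above. Assumes `domain` has already been lowercased.
function customDomainId(did: string, domain: string): string {
	return createHash("sha256").update(`${did}:${domain}`).digest("hex").substring(0, 16);
}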

This requires a little more work to get right because of TLS certs. The reverse proxy I use to terminate TLS in front of my services is Caddy, which has a really nifty feature called on-demand TLS. With it, anyone who hits the endpoint can have Caddy issue a matching cert on the fly to secure the connection. The initial issuance is quick enough that the connection isn't even dropped, and Caddy keeps the cert in a local cache and renews it every 90 days.

To claim ownership of a custom domain, the process is much the same, except now I have to verify that the DID controls the domain. I do this the same way everyone else does: a TXT record. I ask them to put a _wisp TXT record one level below the name they're asking for, so if they want blog.nekomimi.pet, they insert it at _wisp.blog.nekomimi.pet, containing just their DID. Same way Bluesky and Leaflet do it. The CNAME target is then the first 16 characters of the SHA256 hash of {DID}:{domain}, so it's deterministic. The TXT is authoritative (sufficient for verification) while the CNAME is advisory (to account for CNAME-flattening services like Cloudflare).

_wisp.example.com  TXT  did:plc:abc123xyz...
example.com        CNAME {hash}.dns.wisp.place
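
Checking the TXT record is straightforward with Node's built-in resolver. A minimal sketch; verifyCustomDomain is a hypothetical name, and error handling is pared down:

import { resolveTxt } from "node:dns/promises";

// Returns true if _wisp.{domain} has a TXT record matching the DID.
// resolveTxt yields string[][]: each record may be split into chunks.
async function verifyCustomDomain(domain: string, expectedDid: string): Promise<boolean> {
	try {
		const records = await resolveTxt(`_wisp.${domain}`);
		return records.some((chunks) => chunks.join("") === expectedDid);
	} catch {
		return false; // NXDOMAIN or no TXT record: not verified
	}
}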

I run a check like this every 10 minutes, first to validate the TXT record initially and then to confirm it stays valid. The API route Caddy calls to check whether a domain is valid before issuing a cert looks like this.

// GET /api/domain/registered?domain=example.com

.get('/registered', async ({ query, set }) => {
	try {
		const domain = (query.domain || "").trim().toLowerCase();

		if (!domain) {
			set.status = 400;
			return { error: 'Domain parameter required' };
		}

		const result = await isDomainRegistered(domain);

		// For Caddy on-demand TLS: 200 = allow, 404 = deny
		if (result.registered) {
			set.status = 200;
			return result;
		} else {
			set.status = 404;
			return { registered: false };
		}
	} catch (err) {
		logger.error('[Domain] Registered check error', err);
		set.status = 500;
		return { error: 'Failed to check domain' };
	}
})

Domain-to-Site Mapping

After verification/claiming, users map domains to sites:

-- POST /api/domain/wisp/map-site
-- Body: { domain: "myhandle.wisp.place", siteRkey: "site123" }

UPDATE domains SET rkey = ${siteRkey} WHERE domain = ${domain}

-- POST /api/domain/custom/{id}/map-site
-- Body: { siteRkey: "site123" }

UPDATE custom_domains SET rkey = ${siteRkey} WHERE id = ${id}

Serving Sites via Hosting Service

The hosting-service (separate Node microservice) routes requests based on domain type:

// Route 1: sites.wisp.place/{did}/{site}/*
if (hostname === 'sites.wisp.place')
Direct path resolution, immediate serve

// Route 2: DNS hash subdomain (for custom domains in-flight)
if (hostname.match(/^([a-f0-9]{16})\.dns\.wisp\.place$/))
Lookup by hash: getCustomDomainByHash(hash)
Get rkey, fetch from user's DID/rkey

// Route 3: Wisp subdomains (*.wisp.place)
if (hostname.endsWith('.wisp.place'))
Lookup: getWispDomain(hostname)
Database query: SELECT did, rkey FROM domains WHERE domain = ${hostname}

// Route 4: Custom domains (primary public route)
const customDomain = await getCustomDomain(hostname)
Database query: SELECT did, rkey, verified FROM custom_domains
                WHERE domain = ${hostname} AND verified = true
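
Put together, the lookup order is roughly this. A sketch: resolveSite is my name, Route 1's path-based handling is omitted, and the declared helpers stand in for the queries above:

type Site = { did: string; rkey: string };
declare function getCustomDomainByHash(hash: string): Promise<Site | null>;
declare function getWispDomain(domain: string): Promise<Site | null>;
declare function getCustomDomain(domain: string): Promise<Site | null>; // verified = true only

async function resolveSite(hostname: string): Promise<Site | null> {
	const dnsHash = hostname.match(/^([a-f0-9]{16})\.dns\.wisp\.place$/);
	if (dnsHash) return getCustomDomainByHash(dnsHash[1]); // Route 2
	if (hostname.endsWith(".wisp.place")) return getWispDomain(hostname); // Route 3
	return getCustomDomain(hostname); // Route 4
}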

The result is that the hosting service can resolve any of these domains to the right site, and multiple domains can point to the same site, all while staying under the DID's control.

Here's the Caddyfile I use for this:

{
    on_demand_tls {
        ask http://wisp-place:8000/api/domain/registered
    }
}

*.dns.wisp.place *.wisp.place {
    reverse_proxy hosting-service:3001
}

https:// {
    tls {
        on_demand
    }
    reverse_proxy hosting-service:3001
}

This is the only part of wisp.place that isn't 'ATprotated' if you don't count the simple act of signing into the service as being that.

I hope this proves useful to someone doing something similar. As far as I can tell, Leaflet does it in a very similar way, but instead of Caddy they rely on Vercel's DNS and reverse proxy services to terminate TLS and make sure custom domains get their certs.