FR#154: Search and Community
Last week, Holos Social quietly shut down Holos Discover, a fediverse search engine built on ActivityPub. It had put serious effort into respecting user consent: it only indexed public posts from accounts with the indexable flag enabled, appeared as a visible follower, processed deletions and edits in real time, and excluded accounts that were locked or had #nobot in their bio. That is about as close as you can get to building a consent-respecting search engine in the current fediverse.
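To make that discipline concrete, here is a minimal sketch, not Holos's actual code, of the checks an indexer in its position has to run before touching a post. The field names are the ones Mastodon exposes on ActivityPub actors (indexable, manuallyApprovesFollowers for locked accounts, summary for the bio); real-time handling of Delete and Update activities, which Holos also did, is omitted.

```python
# A minimal consent gate for a fediverse indexer, in the spirit of what
# Holos Discover describes. The dicts stand in for fetched ActivityPub
# JSON documents; Delete/Update handling is omitted.

PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

def _as_list(value):
    """ActivityPub fields may be a single value or a list; normalize."""
    if value is None:
        return []
    return value if isinstance(value, list) else [value]

def actor_consents_to_indexing(actor: dict) -> bool:
    """Index only actors whose own signals allow it."""
    if not actor.get("indexable", False):        # Mastodon's opt-in flag
        return False
    if actor.get("manuallyApprovesFollowers"):   # locked account
        return False
    bio = (actor.get("summary") or "").lower()
    if "#nobot" in bio:                          # long-standing opt-out convention
        return False
    return True

def post_is_public(note: dict) -> bool:
    """Only posts addressed to the public collection are candidates."""
    recipients = _as_list(note.get("to")) + _as_list(note.get("cc"))
    return PUBLIC in recipients

def should_index(actor: dict, note: dict) -> bool:
    return actor_consents_to_indexing(actor) and post_is_public(note)
```

Each of those checks is only as meaningful as the signal it reads.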
Community members pointed out that the indexable flag is enabled by default on many instances, which means that a significant number of accounts with the flag set never made a deliberate choice to be indexed. The flag that's supposed to signal "this person consents to being searchable" frequently signals "this person's server admin didn't change the default", and at the protocol level there is no way to tell the two apart.
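It helps to look at what actually crosses the wire. The two abridged, hypothetical actor documents below carry the same value and are indistinguishable to a crawler, however the value came to be set; only the field name follows Mastodon's actor serialization, everything else is invented for illustration.

```python
# Two abridged, hypothetical actor documents as a crawler would see them.

deliberate_opt_in = {
    "id": "https://chosen.example/users/alice",
    "type": "Person",
    "indexable": True,   # Alice switched this on herself
}

untouched_default = {
    "id": "https://default.example/users/bob",
    "type": "Person",
    "indexable": True,   # Bob's admin shipped it on and Bob never looked
}

# Nothing on the wire distinguishes the two.
assert deliberate_opt_in["indexable"] == untouched_default["indexable"]
```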
Search and indexing projects on the fediverse tend to end the same way, from early full-text indexing attempts, through Searchtodon's careful experiment with personal timeline search in 2023, to FediOnFire's relay-based firehose display earlier this year. Not all of this resistance was unjustified: Maven imported over a million fediverse posts without notice and ran AI sentiment analysis on them, which is a far cry from what Holos was building. But the community response has rarely distinguished between projects that deliberately violate consent and projects that try to respect it. Bridgy Fed survived a similar cycle by shifting to an opt-in model, but it's the exception. The norm against search was established during periods of intense community backlash that sometimes crossed into coordinated harassment. Those backlashes have grown less intense as people have largely moved on: Searchtodon drew an intense backlash in early 2023, while an offline-first client I explicitly flagged in fall 2025, which could do effectively the same thing, drew no backlash at all. Still, the expectation of backlash persists as internalized caution.
The community correctly identified that the indexable flag doesn't reliably represent individual consent. Helen Nissenbaum's work on contextual integrity makes the case that privacy isn't about secrecy but about appropriate information flows: posting on Mastodon carries an implicit norm about who will encounter that post and why, and violating that norm is a privacy breach even if the post was technically public. Daniel Solove and Woodrow Hartzog make a similar legal argument: publicly available data is still regularly protected by privacy law, and accessibility alone doesn't license arbitrary downstream use.
But the only available response to discovering that the indexable flag is unreliable, treating all defaults as non-consent, has major side effects. It removes the possibility that a server admin could legitimately say "our community values public discovery, so we set defaults that support that." The protocol has no way to represent whether a default was set deliberately or by inertia, so the community norm treats them the same: an admin who chose public discovery for their community gets treated identically to one who never looked at the settings page. The result is a view of fediverse servers that only contains individual choices, where a community deciding collectively to be discoverable is not an available category.
This is a strange outcome for a network that's supposed to enable governance diversity across communities. Mastodon published a blog post this week in which Executive Director Felix Hlatky says the mission is to "connect the world through thriving online communities". But the current structure for signalling consent to data processing can only recognize the individual, and has no mechanism for a community to signal anything at all.
There is also something patronizing about the framing that treats defaults as equivalent to non-consent. If we take seriously the idea that servers are communities with governance, then an admin who configures their server for public discovery is making a governance decision on behalf of their community, not failing to notice a checkbox. Treating all defaults as non-consent refuses to recognize that decision as legitimate, which undermines exactly the kind of community-level agency that a decentralized network is supposed to enable. As I argued in another article this week, where community lives in these networks is an open question, but it can't be answered if the architecture only recognizes individuals.
Meanwhile, there are about half a dozen ways to harvest fediverse data with no accountability and no opt-out attached, all of them effectively condoned because they happen out of sight. The current setup pushes data gathering into the shadows, where no opt-out mechanism can reach it, instead of creating conditions where accountable tools can be built in the open. It is better at protecting the community's idea of itself as a place that takes consent seriously than it is at actually protecting users.
Mastodon's Fediverse Discovery Providers project, or Fediverse Auxiliary Service Providers (FASP), is building a specification for pluggable search and discovery services that any fediverse server can connect to, funded by an NGI Search grant. It aims to solve the same problem as Holos: providing discovery infrastructure that can be used by other servers.
The FASP specification explicitly states that providers will "only ingest content from creators who opted in to discovery in the first place" and will "respect this setting", referring to the same indexable signal that Holos relied on. The spec is well designed in other respects: it is decentralized, lets servers choose among competing providers, separates content URIs from content fetching in ways that limit data exposure, and requires signed fetch requests so servers can identify and block specific providers.
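That last property is the most concrete part of the design, so a rough sketch may help, with the caveat that this is my illustration and not FASP code: when a provider comes back to fetch content it was only given URIs for, the server can identify it by the key on its signed request and turn away providers it has blocked or never heard of. PROVIDER_KEYS, BLOCKED, the RSA assumption, and the HTTP-signature plumbing around the signing string are all assumptions made for the sake of the example.

```python
# Illustrative server-side gate for signed provider fetches; not FASP code.
# Assumes RSA provider keys registered out of band.
import base64

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_public_key

BLOCKED = {"https://bad-provider.example/actor#main-key"}   # hypothetical blocklist
PROVIDER_KEYS: dict[str, bytes] = {}                        # key id -> PEM public key

def allow_provider_fetch(key_id: str, signing_string: bytes, signature_b64: str) -> bool:
    """Admit a provider's fetch only if it is identified, not blocked, and its
    signature over the request's signing string verifies."""
    if key_id in BLOCKED:
        return False
    pem = PROVIDER_KEYS.get(key_id)
    if pem is None:
        return False  # anonymous or unknown providers get nothing
    public_key = load_pem_public_key(pem)
    try:
        public_key.verify(
            base64.b64decode(signature_b64),
            signing_string,          # built from the signed request headers
            padding.PKCS1v15(),
            hashes.SHA256(),
        )
    except InvalidSignature:
        return False
    return True
```

The blocking mechanism is sound; what feeds the pipeline is the question.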
But the problem is that the consent mechanism at FASP's foundation is one the community has already explicitly said it doesn't trust. If the Holos episode established that the indexable flag is insufficient because it can't guarantee deliberate individual consent, then FASP's privacy model has the same hole. What Holos's experience really shows is that the lack of search and discovery is a governance problem, not a technical one. The technical infrastructure for discovery is being built, but the governance infrastructure for consent, a way to distinguish deliberate community choices from defaults, is not being discussed at all.
#nobot
https://connectedplaces.online/reports/fr154-search-and-community/