@whitequark✧✦Catherine✦✧ you have disturbed a dim memory here about how XFS does size allocations in some really confusing way. there's like a… background compaction thing, I think? gosh, it's been a long time since I looked at this
@glyph @whitequark✧✦Catherine✦✧
As I recall, XFS was designed to quite aggressively avoid fragmentation. There were two major things here. The first was that it would leave data sitting in memory for a surprisingly long time to try to collect large blocks to write, which caused a lot of its reputation for losing data: it wouldn't commit data to disk until there was an explicit flush or the driver ran out of memory, whereas other filesystems would generally flush data to disk periodically even if they still had memory to spare. The second was that it assumed files would grow, so it would allocate more space than they needed and then later go back and prune the space if they didn't grow to fill it. I think it also did some background defragmentation to fill in the fragmentation caused by these unused regions.
None of this makes sense on SSDs or with modern VFS designs (which do the batching at a higher level), but it made things really fast on IRIX in the mid '90s.
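The speculative-preallocation idea above can be sketched as a toy model: reserve extra contiguous space whenever a file grows, then trim the unused tail on close. This is an illustration only, not XFS's actual allocator; the class, the 2x growth factor, and the 4096-byte floor are all invented for the example.

```python
# Toy model of speculative preallocation (illustrative, not XFS's algorithm).
# Over-allocate on growth, assuming the file will keep growing; prune the
# speculative tail when the file is closed without filling it.

class PreallocFile:
    def __init__(self):
        self.logical_size = 0   # bytes actually written
        self.allocated = 0      # bytes reserved on "disk"

    def append(self, nbytes):
        self.logical_size += nbytes
        if self.logical_size > self.allocated:
            # Reserve double what we need, with a minimum of one 4 KiB block,
            # so future appends land in already-contiguous space.
            self.allocated = max(self.logical_size * 2, 4096)

    def close(self):
        # The file stopped growing: give back the unused reservation.
        self.allocated = self.logical_size

f = PreallocFile()
f.append(1000)   # allocated jumps to the 4096 floor
f.append(3000)   # logical size 4000 still fits in the reservation
f.append(1000)   # logical size 5000 overflows it -> allocated becomes 10000
f.close()        # reservation trimmed back to 5000
```

The prune-on-close step is what the background work has to clean up after when it doesn't happen (e.g. a crash), which is one source of the fragmentation-filling pass mentioned above.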
Exciting retro technology!