My name is Bryce. I build software. This website is a collection of my personal projects and writings.
autotidy
Published on 2026-02-04

I recently released a new open source project called autotidy. It's a file-system organizer à la Hazel, Maid, and Organize.

What does it do exactly? It lets you define rules that are applied when a directory changes. For example, I use it to organize my Downloads folder. When I download a file, I often want it to end up in a certain place automatically: audio files that begin with "New Recording" are voice memos, and images that begin with "ChatGPT" or "Gemini_" were generated by AI models.

With autotidy I can sort these files into better homes by defining rules like this:

rules:
  - name: Sort AI images
    locations: ~/Downloads
    filters:
      - any:
          - name: Gemini_*
          - name: ChatGPT*
      - extension: [png, jpeg]
    actions:
      - move: ~/images/slop

Why build it?

I built autotidy to solve this problem for myself after finding that none of the alternatives met my goals. I needed something that was free and open source, cross-platform, free of runtime dependencies, and able to watch directories on its own.

To my surprise, the alternatives I found all had their own shortcomings.

Hazel: macOS only, proprietary/commercial.

Maid: Requires a Ruby runtime, and rules are defined by writing Ruby.

Organize: Requires a Python runtime. No built-in file watching; rules must be run manually (or via cron, etc.).

This is not a critique of these tools or their developers, just that my requirements differed. They are all fantastic in their own ways, and I borrowed ideas from each of them.

Technical details

I wrote autotidy in Go. I debated writing it in Rust, but I am less familiar with Rust and I wanted to knock something out relatively quickly. There is nothing overly complex about how autotidy works. It reads a configuration file, builds rules in memory, watches files for changes, and then applies the rules to the appropriate files. But there were some things that I stumbled across when making it that I thought might be interesting to mention.

Watching for changes with fsnotify

Watching directories for changes is a tedious problem, especially if you want to do it in a cross-platform way. On Linux this is done with inotify, on macOS kqueue, and on Windows ReadDirectoryChangesW. Instead of reinventing the wheel, I decided to use fsnotify, which provides a standard interface over all of these implementations. There are still some cross-platform details that differ, but they are relatively few and manageable.

I also stumbled across some interesting issues along the way. For example, what happens if a directory that is being watched is deleted? Or what happens if the user defines a rule for a directory that doesn't exist until later? To solve this I defined what I call "Missing Roots": locations that don't exist now, but may exist in the future. When a location defined in the config doesn't exist, I recursively move up the file tree until I find a directory that does exist, and I watch that directory for changes that get us closer to the target directory. This works well, but it comes with problems of its own. For example, what happens if the Missing Root is located at /a, and the user in quick succession creates /a/b/c/d/location? Our watch of /a may notice the creation of /a/b, but by the time we begin watching that directory, /a/b/c may already exist. This necessitates a bit of logic to ensure that created directories are not missed. Further, because file changes can be erratic, and we want to maintain performance, we debounce our operations by some small period with each change.

fsnotify does not tell us what kind of file was created. To determine whether a new file is a directory (to look for a Missing Root, or to recurse if the rule is recursive), we need to ask the operating system what type of file it is. To maintain performance, I ended up using readdir when the number of changed files exceeds some threshold, rather than stat-ing every file, which could be slower when there are dozens or hundreds of file changes at once.

afero for file-system abstraction

I used afero to abstract file-system operations. This was useful in unit tests, where I wanted to operate on files in memory rather than on the file system itself. It had the additional benefit of allowing me to build the "dry-run" functionality, which is the default when running one-off rules via autotidy run. Using afero meant I could simulate the result of running rules in memory with a copy-on-write implementation, then print information about the resulting changes to the terminal without actually making them.

Packaging and distribution

One of the biggest challenges with building a cross-platform headless program like this is distribution. This is arguably one advantage of using a runtime like Python, Ruby, or Node: users can simply use pip or npm to install the software. While I could have used go install for the CLI, it wouldn't be sufficient for a daemon, which needs to run in the background and launch at system startup. On Linux this means adding a unit to systemd, on macOS launchd, and on Windows the registry.

I explored a few different approaches here for varying platforms.

Homebrew

Homebrew was the simplest: create a tap repository and instruct the user to brew install from it.

brew install prettymuchbryce/tap/autotidy
brew services start autotidy
autotidy status

The brew services start line is required for persistent background processes.

Nix

Nix was also relatively simple. You add a flake.nix file to the repository and instruct the user to add it as an input to their configuration. This one was important to me, as it's the package manager I personally use.

Other Linux

I was unsure how to tackle Linux distributions other than NixOS. For a new project, pursuing official inclusion in big package repositories like those behind yum or apt seems inappropriate. I ended up crafting a one-liner that runs an install script, as well as creating deb/rpm packages with nfpm.

I'm not crazy about curl piping scripts into a shell for many reasons, so I am not totally happy with this solution.

Windows

Windows is similar to the above: a PowerShell script installs and sets up the service.

I have the least confidence in this platform, and while I have done cursory manual and automated testing, I don't use Windows regularly enough to be sure that I have adhered to all of the standard idioms.

Testing

In addition to manual testing, I set up installation-related integration tests for each platform. At the very least, this helps me catch install-related issues before shipping new versions.

Working with Claude

Since it's 2026, and I am therefore compelled to mention AI in this post: I made heavy use of Claude Code while building this project. It is not "vibe-coded" in the sense that I threw prompts at the wall and hoped for the best. I used Claude Code to help answer questions, perform menial tasks (like moving files around or setting up boilerplate), and write additional unit test cases. Against common AI-influencer wisdom, I strove to review every line of code, and in doing so found a healthy number of mistakes and bugs that the agent missed. I am still trying to find the right balance between using agents to move faster and retaining my own strong opinions about architecture and maintainability.

Summary

This was a nice little side project to build, and I had fun with it. I am not sure if the core problem is a common one, or if my solution is differentiated enough to lure users away from the existing solutions. Ultimately I saw this as an opportunity to solve my own problem. I'd like to get in the habit of doing more things like this in the future. Thanks for reading!