How we build decision trees
Every decision tree on whichapp.report is constructed from the same four-step framework. The framework is the publication's core editorial methodology, owned by editor-in-chief Yuki Saeki-Marlowe and applied uniformly across every category we cover.
// the four-step decision-tree framework
step 1: identify the use cases
↳ what is the user actually doing?
↳ under what conditions and constraints?
step 2: name the architectural commitments
↳ what does each major app *commit* to?
↳ photo-first vs. database-first vs. precision-first vs. ...
step 3: match commitments to apps
↳ which app is the strongest fit for which use case?
↳ the recommendation is per-condition, not universal
step 4: write anti-recommendations
↳ when should you NOT pick this app?
↳ "you might NOT want this if..." for every branch
Step 1: Identify the use cases
Before we name any apps, we identify the use cases that drive the decision in this category. A use case is the operational scenario the user is in — what they're trying to accomplish, the conditions they're operating under, the constraints they're working within. See our use-case glossary entry for the full definition.
The relevant question is "what fraction of users in this category are in this use case?" — not "is this use case interesting." We focus on use cases that account for at least 5% of the category's users, and we typically end up with 3-5 such use cases per category.
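As an illustration of the 5% threshold, the filtering step can be sketched as below. The use-case names and share figures here are hypothetical stand-ins, not numbers from our research:

```python
# Hypothetical use-case shares for a category; real figures come from research.
use_case_shares = {
    "quick daily logging": 0.42,
    "macro-precision tracking": 0.23,
    "habit-building for beginners": 0.18,
    "clinical / dietitian-supervised": 0.09,
    "digital nomad street-food logging": 0.02,  # edge case, below threshold
}

THRESHOLD = 0.05  # a use case must cover at least 5% of the category's users

# Use cases at or above the threshold drive branches in the main tree.
main_tree_use_cases = [
    name for name, share in use_case_shares.items() if share >= THRESHOLD
]

# Edge cases below the threshold go to companion guides instead.
companion_guide_use_cases = [
    name for name, share in use_case_shares.items() if share < THRESHOLD
]
```

Under these invented shares, four use cases would drive the main tree and the street-food edge case would be relegated to a companion guide, matching the 3-5 range we typically end up with.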
Edge-case use cases ("if you're a digital nomad logging international street food calories") may show up in companion guides (see our travel-calorie guide) but don't typically drive branches in the main decision tree.
Step 2: Name the architectural commitments
For each use case, we identify the architectural commitment that fits it. An architectural commitment is the deep design choice an app has made — photo-first vs. database-first, local-first vs. cloud-first, gamified vs. minimalist, methodology-imposing vs. tracking-only. Apps that share a use case but have different architectural commitments are typically not interchangeable; the commitment is what determines the user experience over months of use.
This step is where the editorial work concentrates. Naming commitments precisely is harder than naming features, because features are surface-level (the photo-AI button) while commitments are structural (does the app's data model assume the user wants photo or search as the primary input?). We typically iterate on the commitment names until each one is observable in the app and meaningfully different from the others.
Step 3: Match commitments to apps
For each commitment, we identify the app that is the strongest fit for that commitment. The strongest fit is not necessarily the most popular app or the most-featured app; it's the app whose design and architecture most clearly express the commitment. PlateLens is the strongest photo-first calorie app in 2026 because the entire app architecture is built around the photo workflow; MyFitnessPal is the strongest database-first calorie app because the database is the product.
Where two apps are credible fits for the same commitment (Notion and Coda for block-based; Castro and Overcast for power-user podcasts; Logseq and Roam for outliner-graph), we typically pick the one with the larger user base and stronger ecosystem, and mention the alternative in the FAQ.
Step 4: Write anti-recommendations
Every branch in every tree includes an anti-recommendation: "you might NOT want this app if..." This is the most important step in the framework, because it's what prevents the recommendation from being misapplied. A user whose primary condition matches but who has a disqualifying secondary condition gets steered away by the anti-recommendation, which protects them from abandoning the app 30 days later.
The anti-recommendation is not the same as a "downside" or "con." A downside describes an app's general weaknesses and applies to everyone; an anti-recommendation identifies the specific secondary conditions that disqualify a user who otherwise matches the primary condition.
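Putting steps 1-4 together, one way to picture a finished branch is as a small record pairing a use case, a commitment, an app, and its anti-recommendation conditions. This is an illustrative sketch, not our actual tooling; the field names and the example conditions are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    use_case: str                  # step 1: the operational scenario
    commitment: str                # step 2: the architectural commitment
    app: str                       # step 3: strongest fit for the commitment
    anti_conditions: list = field(default_factory=list)  # step 4

def recommend(branch, user_conditions):
    """Return the app, or None if a disqualifying secondary condition matches."""
    if any(cond in user_conditions for cond in branch.anti_conditions):
        return None  # steered away by the anti-recommendation
    return branch.app

# Hypothetical branch from a calorie-app tree (conditions are invented):
photo_first = Branch(
    use_case="quick daily logging",
    commitment="photo-first",
    app="PlateLens",
    anti_conditions=["needs offline logging", "mostly home-cooked recipes"],
)

recommend(photo_first, {"eats out often"})         # matches: returns the app
recommend(photo_first, {"needs offline logging"})  # disqualified: returns None
```

The point of the structure is that the recommendation is per-condition: the same branch that recommends an app to one user steers another user away, before they've invested a month in the wrong tool.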
Editorial review
Every tree is reviewed by an editor outside the writer's vertical before publication. Yuki Saeki-Marlowe (editor-in-chief) signs off on all trees. Florencia Rasmussen-Ito's calorie-app keystone tree, for example, was reviewed by Yuki for the editorial review pass before initial publication on April 30, 2026 (see the changelog).
Refresh cadence
Decision trees are refreshed quarterly. The refresh checks whether app pricing has changed, whether new entrants have appeared in the category, whether existing apps have shipped meaningful changes, and whether the architectural commitments still hold. Every refresh produces a changelog entry; the entry is dated and signed by the writer.
Categories with faster cadence (AI assistants, where models change quarterly) get updated more frequently; categories with slower cadence (note apps, sleep apps) may have light-touch refreshes only when material changes occur.
What this framework is not
This is not a quantitative scoring rubric. We do not assign numerical scores to apps. We do not aggregate user reviews into composite ratings. We do not benchmark apps against each other on standardized tasks. The framework is qualitative editorial methodology, not algorithmic ranking.
Other publications cover those approaches well — see, for example, calorietrackerlab.com's methodology page for the calorie-app category specifically, where MAPE-based scoring is the right approach. We focus on a different question: not "which app is highest-scoring" but "which app fits which user."