How We Test Tools
The research process behind every review, comparison, and shortlist on this site, including what we evaluate, what we skip, and how we make recommendations.
Starting Point
We evaluate tools the way a practitioner would, not the way a feature auditor does
Every tool review on Prospecting Manual is written from an operator's perspective. The questions we answer are the ones a sales team lead, RevOps manager, or agency founder would ask before committing budget: what the tool actually does in a real outbound stack, who it is built for, where it falls short, and when another option makes more sense.
We do not produce star-rating summaries built from weighted feature matrices. We produce verdicts with conditions: this tool works well for this team profile running this workflow, and it underperforms in these specific situations.
Research Inputs
What we use to evaluate a tool
Our primary research inputs are public-facing product documentation, published pricing pages, official feature changelogs, and hands-on product interaction where available. We do not rely on vendor-supplied talking points as the sole basis for a claim.
When a vendor provides access to their product for evaluation purposes, we note it. Vendor access does not change the evaluation criteria and does not grant the vendor any influence over the published content.
Evaluation Framework
Six dimensions, applied consistently across every tool we review
Every tool we cover is evaluated against the same six dimensions. This consistency is what makes comparisons between reviews meaningful: the same question is being asked in the same way about every tool in the same category.
1. Core functionality: what the tool actually does in a real outbound workflow, not what the marketing copy claims.
2. Pricing structure: plan tiers, per-seat versus usage-based, what is gated at which level.
3. Ideal customer profile: the team size, workflow type, and outbound motion where the tool performs best.
4. Known limitations: specific features missing, caps that affect scale, workflows the tool handles poorly.
5. Stack fit: how the tool connects to adjacent tools in a typical outbound stack.
6. Competitive positioning: how it compares to the two or three most common alternatives a buyer would be choosing between.
Handling Uncertainty
Ambiguous or unconfirmed facts are flagged, not resolved by guessing
Outbound tools do not always publish complete documentation. Pricing is sometimes hidden behind a sales wall. Features described in marketing copy are not always reflected in the actual product. When we cannot confirm a claim from a primary source, we say so explicitly in the copy.
Phrases like "pricing not publicly confirmed" or "based on available documentation" appear in our reviews because the underlying fact could not be verified at time of publication. These are not hedges added for legal protection. They are accurate descriptions of our confidence level in a given claim.
Scoring and Rankings
Rankings are based on fit for defined buyer profiles, not composite scores
We do not assign numeric scores to tools. A number implies a precision that does not exist when evaluating B2B software: whether a tool scores 7.8 or 8.2 depends entirely on how you weight the criteria, and different teams weight criteria differently.
Instead, Decision pages and shortlists are organized around buyer profiles and use cases. A tool that is the top pick for a solo SDR building a cold email sequence from scratch is a different recommendation from the top pick for an agency managing fifteen client accounts. Rankings only make sense within a defined profile.
Limitations Coverage
Every review names at least one specific limitation. This is non-negotiable.
A review that does not identify a real limitation is not a useful review. It is a product page with editorial formatting. Every tool we cover has tradeoffs: things it does not do, buyers it does not serve well, pricing structures that make it uneconomical at certain scales. Those tradeoffs appear in the review.
If the limitation data for a tool is thin because documentation is sparse, we note that too. "Limitations not publicly documented" is itself a relevant data point for a buyer considering a tool.
Affiliate Relationships
Affiliate status does not change evaluation criteria or outcomes
Some tools we review have affiliate programs, and we participate in some of them. A commission earned when a reader clicks through and purchases does not change how we evaluate, rank, or describe a tool. A tool with an affiliate program can receive a negative recommendation. A tool without one can receive the top pick on a Decision page.
Every page that contains an affiliate link carries a visible disclosure. The full list of tools we have affiliate relationships with is published on our affiliate disclosure page.
Content Updates
Reviews are updated when the underlying product facts change
We monitor tools we have covered for pricing changes, feature additions, plan restructuring, and deprecations. When a material change affects the accuracy of a published review or comparison, we update the page and reset the last-verified date. Minor UI changes and marketing copy revisions do not trigger an update.
The last-verified date on each page reflects when the core facts were last checked against primary sources. A page with an older verified date may still be accurate. Readers should use the date as a signal to cross-check against the tool's own current documentation for fast-moving details like pricing.
Last Updated
Methodology version: March 2026
This methodology page was last reviewed and updated in March 2026. Questions about how a specific review was researched can be sent to the contact address on this site.