Methodology

How we test products.

1. Source

We pull candidates from three places: Amazon Best Sellers, CJ Dropshipping, and a watch list of products our readers ask about. We never source from sponsored placement, paid catalogs, or influencer lists.

2. Score

Every candidate gets a 0 to 100 score on each of four axes:

  • Features: what the product actually does, not what the listing says.
  • Value: price relative to comparable products with similar specs.
  • Durability: materials, build quality, and what real reviews say after 30+ days.
  • Audience fit: does the product solve a real problem for the buyer it is built for?
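
The four-axis rubric can be sketched in code. This is a minimal illustration, not our actual scoring tool; the equal weighting of axes is an assumption for the example.

```python
# Hypothetical sketch of the four-axis rubric. The axis names come from
# the methodology above; the equal weighting is an assumption made for
# illustration only.
AXES = ("features", "value", "durability", "audience_fit")

def composite_score(scores: dict) -> float:
    """Average the four 0-100 axis scores into one 0-100 composite."""
    for axis in AXES:
        if not 0 <= scores[axis] <= 100:
            raise ValueError(f"{axis} must be 0-100, got {scores[axis]}")
    return sum(scores[axis] for axis in AXES) / len(AXES)

example = {"features": 80, "value": 70, "durability": 90, "audience_fit": 60}
print(composite_score(example))  # 75.0
```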

3. Real-use trial

Anything we are seriously considering, we buy and use for at least 14 days in normal conditions. That means weekend trips, real loads, real weather, real cleaning. We document what wears, what fails, and what holds up.

4. Numbers, not adjectives

When we describe a product we use numbers wherever we can. "Ships in 8 to 12 days" not "ships fast." "600D Oxford with TPU coating" not "premium waterproof material." If we cannot back a claim with a number or a source, we cut it from the page.

5. What disqualifies a product

  • Fake or AI-generated reviews on the source listing.
  • Medical or hyped claims we cannot verify.
  • Suppliers rated under 4.7 stars over the past 90 days.
  • Products that fail the real-use trial in week 2.
  • Anything we would not put in our own homes, cars, or hands.
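
The disqualifiers above amount to a pass/fail checklist. A rough sketch, assuming hypothetical field names on the candidate record; the thresholds themselves (4.7 stars, the real-use trial) come from the rules above.

```python
# Hypothetical checklist for the disqualifiers above. The dict keys are
# assumed names for illustration; the thresholds come from the rules
# in the methodology.
def disqualifiers(candidate: dict) -> list:
    """Return the rules a candidate fails; an empty list means it passes."""
    failed = []
    if candidate["has_fake_reviews"]:
        failed.append("fake or AI-generated reviews on the source listing")
    if candidate["has_unverified_claims"]:
        failed.append("unverifiable medical or hyped claims")
    if candidate["supplier_rating_90d"] < 4.7:
        failed.append("supplier under 4.7 stars across 90 days")
    if not candidate["passed_trial"]:
        failed.append("failed the real-use trial")
    if not candidate["would_use_ourselves"]:
        failed.append("not something we would use ourselves")
    return failed
```

A candidate only reaches the store page when this list comes back empty.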

6. What we disclose

  • Where the product ships from.
  • How long shipping takes.
  • That we earn a margin on every sale (this is a store).
  • When affiliate links are present (on review posts, not store pages).