It's not a failure that the one without a display doesn't have a display. It's a design choice. The AirGradient unit has a display, but it's tiny and hard to read. Scrolling through the article, all the other units with displays have much larger and more readable text. You can read the biggest data points from across a room. The AirGradient has a display, but it fails to be a good display, hence the reviewer's perspective: it's not living up to its goals.
• Product A has limited features but does them well. If the customer is okay with the features the product has, the reviewer can recommend it for this customer.
• Product B has more features but is impacted by QA issues, as well as product design decisions that make those features harder to use. This impacts the customer's ability to use features they may have paid for, and it may even impact their ability to use the product's core features. This potentially makes Product B less desirable for comparable use cases.
With this in mind, I'm inclined to agree with Wired's decision.
It also raises my eyebrows that they see “repairability [as] one of our core differentiators.” It’s cool to make that possible quietly for people who are into it, but would you want a “repairable” smoke detector? Or one that just works? If it broke, would you want them to send you one that’s not broken, or parts and a booklet of repair instructions?
If I pay for X, I will be mad if I can only use X-1.
There are three outputs: LEDs that go from green to yellow or red, the small display, and a webpage dashboard. Or you can plug the data into HA for whatever you want.
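The dashboard/HA route can be as simple as polling the unit over the local network and formatting the JSON it returns. A minimal sketch of the parsing half, assuming the firmware serves a payload shaped like the one below (field names such as `rco2` and `pm02` are assumptions based on AirGradient's open firmware and may differ on your unit or firmware version):

```python
import json

# Hypothetical payload, shaped like what newer AirGradient firmware serves
# from its local HTTP endpoint. Field names are assumptions, not guaranteed:
# rco2 = CO2 ppm, pm02 = PM2.5, atmp = temperature C, rhum = relative humidity.
payload = '{"rco2": 642, "pm02": 4, "atmp": 21.3, "rhum": 41}'

def summarize(raw: str) -> str:
    """Turn a raw JSON reading into a one-line human-readable summary."""
    d = json.loads(raw)
    return (f"CO2 {d['rco2']} ppm, PM2.5 {d['pm02']} ug/m3, "
            f"{d['atmp']} C, {d['rhum']}% RH")

print(summarize(payload))
```

In practice you'd fetch the JSON from the unit's IP (or use Home Assistant's built-in AirGradient integration) rather than hard-coding it, but the point stands: the data is plainly available beyond the on-device display.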
The only issue I have with the display is that it's monochrome, which prevents making trends easy to read at a glance, e.g. by showing positive changes in green and negative ones in red.
If the display is too small, the LEDs are easily visible for quick information, and the dashboard covers the more detailed data.
Reviewers often struggle to really understand how people use products, because rapidly cycling through things to review doesn't allow them the time to truly use and understand each product.
The reviewer states:
I’ve been using AirVisual Pros for the past five years.
so it's not like they're new to the field. They know what they want out of the product they're reviewing. That may not be what someone reading the review is after, but that doesn't invalidate the review. To draw a parallel, I think an iPhone user may have a harder time using Android than someone who has never used either phone.
Admittedly, I'm another happy AirGradient user.
Purely focusing on the display, I can see a certain logic in saying: display not working => not recommended. And I probably chose the wrong title for this, as it made the article too focused on that aspect.
However, the main critique for me is actually the generally subjective nature of the article and the lack of a systematic testing approach for the monitors. In my opinion this review should not be called "The Best Indoor Monitors" if Wired's intention is to provide an objective and balanced assessment of indoor monitors.
Of course I am unhappy that our monitor got labelled "Not Recommended", but the bigger question of to what extent these "Best ..." reviews provide a fair and comprehensive assessment is, in my opinion, the much more important discussion we should have.
You can't just do that and get in quality testing time with more than one or two products.
Reviewing things fairly and helpfully is hard and takes time, and especially as AI slop takes over writing (thankfully it looks like this article at least has a byline), I think it's going to be harder and harder to find actual useful human reviews to guide decisions.
This is quite different from being tasked with comparing bicycles, which would require a lot of effort to give equal time to each one. Unless the journo was a world-class rider, I'd be shocked if they rode any one of them for more than 5 minutes.
> I understand why I need to check a dashboard for an outdoor air quality monitor, but having to check a dashboard for an indoor monitor seemed like an extra unnecessary step.
This is after already mentioning that the unit also has LED light bars to display quality without reading the number.
The reviewer seems to be saying that lights and a web dashboard alone aren't enough for an indoor monitor.
Yet earlier in the article the author picked the "Touch Indoor" as the unit with the best display, even though that is an indoor unit with no screen and only LED lights.
Given that, you'd think the AirGradient unit's lights would be compared to say why they are worse, but that doesn't happen.
Having read the Wired reviews, they set off my internal alarms for a "low quality reviewer" who doesn't display a deep understanding of the products being reviewed or the market segment. There's a lot of fluff about screen size and very little digging into actual accuracy and functionality.
That said, I haven't seen any good reviews from Wired in a long time.