You bring up very good points, and I will need to address them in the site's FAQ in addition to this response. I would appreciate any follow-ups, as I am open to revising the opinions I include below.
First, if there are specific examples of frameworks that have been mischaracterized, I would prefer that we address each individually as a GitHub issue. For example, I will create an issue to discuss the Yesod test and its session configuration [1].
Here is our basic thinking on sessions: none of the current test types exercises sessions, but session functionality should remain available within the framework in case future test types make use of it.
If a particular test implementation/configuration has gone out of its way to remove support for sessions from the framework, we consider that Stripped. If session functionality remains available but simply isn't being exercised because the test types we've created to date don't use sessions, then, at least with respect to sessions, that is Realistic.
Logging is an important point that we need to address. We intentionally disabled logging in all of the tests we created, and we will need to carefully review the configuration of community-contributed tests to do the same.
You're correct, disabling logging is not consistent with the production-class goal. So, why did we opt to disable logging? A few reasons:
* We didn't want to deal with cleaning up old log files in the test scripts.
* We didn't want to deal with normalizing the logging granularity across frameworks. (Or deal with the consequences of not doing so.)
* In spot checks, we didn't observe much of a performance differential when logging was enabled.
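For frameworks that configure logging in code, the sort of suppression described above can be a one-liner. This is a hypothetical sketch using Python's standard `logging` module (not taken from any actual test implementation in the benchmark suite); the `"app"` logger name is an assumption for illustration:

```python
import logging

# Raise the global threshold above CRITICAL so every log record is
# dropped before it reaches a handler -- no log files accumulate
# between benchmark runs, and no I/O cost is paid during the test.
logging.disable(logging.CRITICAL)

# This call now produces no output anywhere.
logging.getLogger("app").error("request failed")
```

Reverting to normal logging later is just `logging.disable(logging.NOTSET)`, so the switch is easy to flip if the policy changes.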
We're not immovable on logging, however, and if there is sufficient community demand, we would switch to leaving logging enabled [2].
[1] https://github.com/TechEmpower/FrameworkBenchmarks/issues/25...
[2] https://github.com/TechEmpower/FrameworkBenchmarks/issues/25...