Change Request: Rule Tester: Enforce that a rule marked `fixable` or `hasSuggestions` has a test case that produces a relevant fixer #18008
Comments
I think it's a neat idea. While ESLint hasn't been prescriptive about requiring specific rule test cases thus far, the testing of fixability/suggestions is a good example of important test coverage that could be at least minimally enforced, while at the same time helping to ensure that the rule's metadata for those aspects is accurate. In terms of the lint rules, both eslint-plugin/require-meta-fixable (through the …
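For reference, a hedged sketch of how those existing eslint-plugin-eslint-plugin rules can be enabled today; the rule IDs are real, but the surrounding flat-config shape and file glob are illustrative:

```js
// eslint.config.js -- lint a plugin's own rule sources.
const eslintPlugin = require("eslint-plugin-eslint-plugin");

module.exports = [
    {
        files: ["lib/rules/**/*.js"],
        plugins: { "eslint-plugin": eslintPlugin },
        rules: {
            // meta.fixable must match whether the rule actually applies fixes.
            "eslint-plugin/require-meta-fixable": "error",
            // meta.hasSuggestions must match whether suggestions are reported.
            "eslint-plugin/require-meta-has-suggestions": "error",
        },
    },
];
```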
I think this is a gray area between linting and testing -- this feels more lint-worthy to me than test-worthy, as in, it feels like this type of problem is more akin to a lint violation than a unit test failure. I don't feel strongly either way, so would like to hear more opinions. @eslint/eslint-team
Just to explore the design of such a feature some more, I see a few options: …
Currently, …
This isn't terrible, because there are helper rules like … that never report. The use cases for that sort of thing are pretty niche, mind you; I think it is literally limited to unused vars.
Yeah, there are a few use cases for rules that don't report problems, like when you just want the results of code path analysis. I wish there was a flag like …
It could probably be a flag within the rule tester if we were to enforce at least one invalid case. But there's also no reason there couldn't be a rarely used flag to do that. Hey, might be worth a separate proposal?
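A minimal sketch of what such a rarely used opt-out could look like, assuming the tester started requiring at least one invalid case; the allowNoInvalidCases name is hypothetical and no such option exists in the RuleTester today:

```js
const { RuleTester } = require("eslint");
// Hypothetical helper rule that only feeds analysis and never reports.
const rule = require("./rules/mark-vars-as-used");

const ruleTester = new RuleTester();

ruleTester.run("mark-vars-as-used", rule, {
    valid: ["var x = 1;"],
    // Empty on purpose: this rule never reports problems.
    invalid: [],
    // allowNoInvalidCases: true, // hypothetical opt-out flag
});
```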
As a note: we really shouldn't be adding any further breaking changes to v9 at this point unless absolutely necessary. For this, I'd say any breaking change really needs to come in v10. If folks want to explore the idea of rule tester options, we can definitely do that in a non-breaking way, just keep in mind that the …
Oops! It looks like we lost track of this issue. What do we want to do here? This issue will auto-close in 7 days without an update.
Not stale, would be interested to see more exploration here.
We need someone to drive this issue. As mentioned, the window for adding further breaking changes to v9 has closed, so that would need to be kept in mind.
I would love to drive this issue, as I have a custom wrapper for the rule tester for this and other cases. The checks cover, among other things:

- Rule options schema: …
- Test cases: …
- Error matcher: …

To make this not a breaking change, I would propose adding the property …
What does a "custom wrapper" entail? It looks like there are some good ideas in that list. It may be worth creating a table indicating some more data points for each: …
Given the number of possible checks and the subjectivity of some of them, I think it's important to identify which could be rolled out to everyone vs. which ones may need to be opt-in / configurable.
I only mentioned the "custom wrapper" part to say those ideas are all easily implementable (e.g. via a function that receives the rule and the test cases, runs the test cases through the rule tester, and then applies the custom lint checks; see the sketch at the end of this comment).
For example, … Ultimately, all the checks are subjective, as none of them prevents an error at runtime, and we don't know whether the rule or its tests are meant to be "complete" (e.g. why should the rule tester throw an error when I've only just started writing tests?). There are further questions we need to resolve: …
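A minimal sketch of such a wrapper, assuming fixer coverage is detected by whether any invalid case asserts an output; the runWithChecks name and the exact heuristics are ours, not an existing API:

```js
const { RuleTester } = require("eslint");

// Hypothetical wrapper: run the tests as usual, then apply extra checks.
function runWithChecks(name, rule, tests) {
    new RuleTester().run(name, rule, tests);

    // If the rule claims to be fixable, at least one invalid case should
    // assert an `output` string, i.e. actually exercise the fixer.
    if (rule.meta.fixable && !tests.invalid.some((t) => t.output != null)) {
        throw new Error(
            `${name}: marked fixable, but no invalid test asserts an output`
        );
    }

    // Same idea for suggestions: some error must assert a suggestions array.
    if (
        rule.meta.hasSuggestions &&
        !tests.invalid.some(
            (t) =>
                Array.isArray(t.errors) &&
                t.errors.some((e) => e.suggestions && e.suggestions.length > 0)
        )
    ) {
        throw new Error(
            `${name}: marked hasSuggestions, but no test asserts suggestions`
        );
    }
}
```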
While the intent is good, it seems like it should be less of a tester and more of a test coverage check. A few concerns: a rule's tests may be split across multiple calls, e.g.

```js
ruleTester.run("my-rule", rule, { invalid: someCases });
ruleTester.run("my-rule", rule, { invalid: someOtherCases });
```

How should the tester work if a rule is tested in …
Honestly, I think such a simple error is more than enough. I've lost count of the number of times I've seen someone add a fixer to an existing rule and forget to write tests for the change. An error to remind people would be a massive help in improving rule quality. This is coming from my anecdotal experience working on internal ESLint rules at two different companies. Both times I saw this exact problem: people would completely forget that test path. After a nudge at review time they then wrote very comprehensive tests. People want to do good, but they forget very easily given how much there is to think about.
That involves setting up code coverage and enforcing/reporting it from CI, and a lot of projects don't have that set up. For example, at work we don't have any code coverage metrics at all, as it's seen as noise and low value. Turning on code coverage for the internal lint rules is simply not an option.
Quick solution: add an option to the rule tester like …
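Purely as an illustration, such an opt-in flag could sit alongside the test cases; requireFixerCoverage is a made-up name, not a real ESLint option:

```js
ruleTester.run("my-fixable-rule", rule, {
    valid: ["const x = 1;"],
    invalid: [{ code: "var x = 1;", output: "const x = 1;", errors: 1 }],
    // requireFixerCoverage: true, // hypothetical: fail unless some invalid case asserts `output`
});
```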
I think the main motivation is to make sure the rule metadata is correct, as other third-party tools rely on it (e.g. for documentation generation or for only running fixable rules).
A possible alternative is to allow multiple test suites in the same call (this is something I wanted to properly propose in the future):

```js
ruleTester.run('rule-id', rule, {
    tests: [{
        title: 'Edge Cases',
        valid: [],
        invalid: [],
    }, {
        title: 'Fixer',
        valid: [],
        invalid: [],
    }],
});
```
Oops! It looks like we lost track of this issue. What do we want to do here? This issue will auto-close in 7 days without an update.
It looks like there's general interest in adding some kind of check for this, but we really need a design proposal to move this forward. Does anyone want to take a stab at it? Note: we are now past the v9 release, so any breaking changes would end up in v10.
Oops! It looks like we lost track of this issue. What do we want to do here? This issue will auto-close in 7 days without an update.
ESLint version
N/A
What problem do you want to solve?
Currently it's possible to declare a rule as both `fixable` and `hasSuggestions` without ever reporting either a fixer or a suggestion. This is not great, because these flags are often used for documentation purposes, which leads to bad documentation for users.
Additionally, it's quite doable to create a suite of tests that never produces a fixer or a suggestion, which means you can accidentally leave untested pathways in your rule. In the case of autofixers this is especially bad, because you may not have validated that your fixer produces syntactically valid code!
What do you think is the correct solution?
It would be great if ESLint could do some post-run validation for a rule, as sketched below. For example:

- The rule is marked `fixable` and no tests produced a fixer: error.
- The rule is marked `hasSuggestions` and no tests produced suggestions: error.

One might suggest that this could be done via lint rules (e.g. eslint-plugin-eslint-plugin); however, it can be quite hard to statically analyse this, given that rules need not be contained within one file (so it's not possible to enforce the existence of a fixer on at least one context.report() call). Similarly, tests may be assembled via generation in some way, meaning tests are dynamic and not possible to analyse. You might be able to catch some of the simpler cases with a lint rule.
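As a rough, non-authoritative sketch, the post-run validation could inspect the messages actually produced by the invalid cases; allInvalidCaseMessages below is a stand-in for that collection, not a real ESLint API:

```js
// Hypothetical post-run step, after all invalid cases have been linted.
// `allInvalidCaseMessages` stands in for the collected Linter messages.
let producedFix = false;
let producedSuggestions = false;

for (const message of allInvalidCaseMessages) {
    if (message.fix) producedFix = true;
    if (message.suggestions && message.suggestions.length > 0) {
        producedSuggestions = true;
    }
}

if (rule.meta.fixable && !producedFix) {
    throw new Error("Rule is marked `fixable` but no test produced a fix.");
}
if (rule.meta.hasSuggestions && !producedSuggestions) {
    throw new Error(
        "Rule is marked `hasSuggestions` but no test produced a suggestion."
    );
}
```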
Participation

Additional comments
No response