Switch to pytest #939
Conversation
This is the minimum set of changes to use pytest. Per #931 we may want to look at using more pytest features.
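Worth noting for reviewers (a general pytest fact, not something specific to this diff): pytest collects and runs existing `unittest.TestCase` classes unchanged, which is what makes a minimal switch possible; idiomatic pytest style can then be adopted incrementally. A small illustration with hypothetical test names:

```python
import unittest


# Existing unittest-style test: pytest collects and runs this as-is.
class UpperTest(unittest.TestCase):
    def test_upper(self) -> None:
        self.assertEqual("catalog".upper(), "CATALOG")


# Equivalent idiomatic pytest style (an optional later refactor):
# a plain function with a bare assert.
def test_upper() -> None:
    assert "catalog".upper() == "CATALOG"
```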
Codecov Report
Base: 94.43% // Head: 90.09% // Decreases project coverage by -4.35%.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #939      +/-   ##
==========================================
- Coverage   94.43%   90.09%   -4.35%
==========================================
  Files          83       47      -36
  Lines       12067     6057    -6010
  Branches     1144      905     -239
==========================================
- Hits        11396     5457    -5939
+ Misses        492      420      -72
- Partials      179      180       +1
```

☔ View full report at Codecov.
We don't need to specify any of this stuff -- it's better to let the user pick their preferred setup.
Looks good. A couple of notes / changes:
- While I think we should avoid any huge test refactors in this PR, there are a couple of places where pytest gets us some quick wins (a possible `parametrize` rewrite is sketched after this comment). E.g., lines 114 to 126 in d28bd43:

```python
def test_testcases(self) -> None:
    for catalog in TestCases.all_test_catalogs():
        catalog = catalog.full_copy()
        ctypes = [
            CatalogType.ABSOLUTE_PUBLISHED,
            CatalogType.RELATIVE_PUBLISHED,
            CatalogType.SELF_CONTAINED,
        ]
        for catalog_type in ctypes:
            with self.subTest(
                title="Catalog {} [{}]".format(catalog.id, catalog_type)
            ):
                self.do_test(catalog, catalog_type)
```

- I don't think it's worth going through and updating every test to use pytest's fixtures right now, but could you open an issue to capture that future task?
- Same as above but for pytest-vcr: we could robustify some of our tests by using it instead of hitting the network every time, but that can be its own task after this PR is merged (a sketch follows below). Could you open an issue for that as well?
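A hedged sketch of that parametrize rewrite (import paths and the module-level `do_test` helper are assumed, not code from this PR):

```python
import pytest

from pystac import CatalogType
from tests.utils import TestCases  # assumed location of the test helpers


def do_test(catalog, catalog_type) -> None:
    """Stand-in for the existing helper, assumed refactored to module level."""
    ...


@pytest.mark.parametrize(
    "catalog_type",
    [
        CatalogType.ABSOLUTE_PUBLISHED,
        CatalogType.RELATIVE_PUBLISHED,
        CatalogType.SELF_CONTAINED,
    ],
)
@pytest.mark.parametrize("catalog", TestCases.all_test_catalogs())
def test_testcases(catalog, catalog_type) -> None:
    # Each (catalog, catalog_type) pair is collected as its own test item,
    # so failures are reported individually instead of via subTest.
    do_test(catalog.full_copy(), catalog_type)
```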
Thanks!
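And a minimal sketch of the pytest-vcr approach mentioned above, assuming the plugin is installed and using a hypothetical URL; the first run records the HTTP exchange to a cassette, later runs replay it offline:

```python
import pytest
import requests


@pytest.mark.vcr
def test_read_remote_catalog() -> None:
    # Hypothetical URL; pytest-vcr records this exchange to a YAML
    # cassette on the first run and replays it on subsequent runs.
    response = requests.get("https://example.com/catalog.json")
    assert response.status_code == 200
```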
A comment here about including … However, I see that …
I believe that was in response to a test (or test utility, I don't remember which) that was not being run. Test code should always have 100% coverage, no exceptions, so it shouldn't really clutter up any reports. So, basically, while this "meta-coverage" answers a different question than production code coverage, it's still a useful thing to know.
To me, it's confusing to include test code coverage in the "core" coverage report. If we do want to check for non-run tests, IMO that should be part of a separate coverage run. I'm also personally 👍🏽 for dropping it as a part of CI -- if we're worried about non-run tests, it's not too hard to do a one-off coverage run of just the test suite.
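(For reference, such a one-off run could look like `pytest --cov=tests --cov-report=term-missing`, assuming pytest-cov is installed; it reports untested lines inside the test suite itself without affecting the main coverage gate.)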
Since we're removing `--cov=tests`, I opened #951 to capture some non-run test lines I found from a manual run.
I haven't removed that yet, but, yeah, 👍 on the new issue. I'll remove the tests directory coverage from the …
@gadomski
Yeah, I think that's a good thing -- the tests directory was giving us an over-estimate of project coverage. There are two options here: add a test or two to bring coverage back over the threshold, or lower the `fail_under` threshold itself.
IMO if there's an obvious place we can add a test or two to this PR to get coverage over 90%, that'll make CI happy w/o dropping our threshold too low.
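(For context: this threshold is coverage.py's `fail_under` setting, e.g. `fail_under = 90` under `[report]` in `.coveragerc` or the equivalent pyproject table; the exact location in this repo is assumed. Per the PR description below, it ended up lowered from 94% to 90%.)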
We are still above 90%. I think there was some number confusion. Hopefully, the above bullets clear that up.
Coverage looks like it's failing: https://github.com/stac-utils/pystac/actions/runs/3905719122/jobs/6673063889#step:10:64.
Oops. Yeah, just saw that. Will look into this. |
Move from a file comparison to a simple html parser to check for valid html (a sketch of such a checker follows the checklist).

Related Issue(s):

Description:
- The coverage threshold (`fail_under`) was lowered from 94% to 90%.
- The file comparison was replaced with a simple validity check built on the standard library `html` module.

PR Checklist:
- Code is formatted (run `pre-commit run --all-files`)
- Tests pass (run `scripts/test`)
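A hedged sketch (not necessarily this PR's implementation) of what a simple standard-library HTML check can look like: tag-balance checking via `html.parser`, with the helper name and the void-tag list assumed:

```python
from html.parser import HTMLParser

# Tags that legitimately never have a closing tag (assumed, non-exhaustive).
VOID_TAGS = {"area", "br", "col", "hr", "img", "input", "link", "meta"}


class TagBalanceChecker(HTMLParser):
    """Record unbalanced tags so a test can assert a document is well formed."""

    def __init__(self) -> None:
        super().__init__()
        self.stack = []   # open tags awaiting a matching close tag
        self.errors = []  # human-readable mismatch descriptions

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_startendtag(self, tag, attrs):
        pass  # self-closing tags like <br/> are already balanced

    def handle_endtag(self, tag):
        if not self.stack or self.stack.pop() != tag:
            self.errors.append(f"unexpected </{tag}>")


def assert_valid_html(text: str) -> None:
    """Hypothetical helper: fail if tags in `text` do not balance."""
    parser = TagBalanceChecker()
    parser.feed(text)
    parser.close()
    assert not parser.errors, parser.errors
    assert not parser.stack, f"unclosed tags: {parser.stack}"
```

A test would then call `assert_valid_html(rendered)` on the generated output instead of comparing against a stored file.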