Use subtest_streamed for the sequential executor #107

Open

oschwald wants to merge 3 commits into master from greg/streamed-subtests

Conversation

@oschwald (Contributor) commented Aug 31, 2020

We have been suffering from #100 since upgrading to a modern version of Test::Class::Moose. We currently use a hacked-up formatter to avoid the issue, but that is not ideal.

This change makes the sequential executor use subtest_streamed instead of async_subtest. In my testing, this works as expected.
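
Roughly, the swap looks like this (a minimal sketch, not the exact executor code; the class name and runner callback are hypothetical stand-ins):

    use Test2::V0;
    use Test2::Tools::AsyncSubtest qw( async_subtest );
    use Test2::Tools::Subtest      qw( subtest_streamed );

    # Hypothetical stand-ins for the executor's real class name and per-class runner:
    my $test_class = 'TestsFor::Basic';
    my $run_class  = sub { pass("ran $test_class") };

    # Before (roughly what the sequential executor did): an async subtest.
    my $subtest = async_subtest($test_class);
    $subtest->run($run_class);
    $subtest->finish;

    # After: an ordinary streamed (unbuffered) subtest.
    subtest_streamed( $test_class, $run_class );

    done_testing();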

The tests have not been updated yet, as changing all of the test_events_is checks will be fairly extensive. I wanted to make sure the general approach was acceptable first. Also, if there is an easy way to dump the events for those checks, it would save quite a bit of manual labor.

@autarch (Member) commented Sep 1, 2020

This seems fine to me, and it doesn't seem like it would break anyone's use of this package.

I just pushed a CI fix to master, so you'll need to rebase off master to get the tests to run.

As far as dumping the events, I think I used https://metacpan.org/pod/Test2::Tools::EventDumper when I converted TCM to Test2. The author has marked it as deprecated but I think it'll still work for your purposes. I also wrote a dump_events sub in t/lib/Test/Events.pm that might be helpful.
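
If it helps, the rough shape of producing those dumps might be something like this (assuming Test2::Tools::EventDumper still installs and that dump_event is its export; the runner call is just illustrative):

    use Test2::API qw( intercept );
    use Test2::Tools::EventDumper;    # deprecated, but fine for one-off dumps
    use Test::Class::Moose::Runner;

    # Capture the events the runner would emit instead of sending them to the harness.
    my $events = intercept {
        Test::Class::Moose::Runner->new( test_classes => ['TestsFor::Basic'] )->runtests;
    };

    # Print a Test2::Tools::Compare-style check for each captured event.
    print dump_event($_), "\n" for @$events;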

From a quick look, I don't think the changes will be too terrible. It looks like this just adds an additional Note event at the beginning of every subtest.

@oschwald oschwald force-pushed the greg/streamed-subtests branch from 289fc62 to ae5bc9c on September 2, 2020 16:17
@codecov (bot) commented Sep 2, 2020

Codecov Report

Merging #107 into master will increase coverage by 1.02%.
The diff coverage is 100.00%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #107      +/-   ##
==========================================
+ Coverage   92.89%   93.91%   +1.02%     
==========================================
  Files          92       92              
  Lines        2856     3336     +480     
  Branches      142      142              
==========================================
+ Hits         2653     3133     +480     
  Misses        148      148              
  Partials       55       55              
Impacted Files Coverage Δ
t/parallellib/TestsFor/Alpha.pm 100.00% <ø> (ø)
t/parallellib/TestsFor/Alpha/Subclass.pm 100.00% <ø> (ø)
t/parallellib/TestsFor/Beta.pm 100.00% <ø> (ø)
t/parallellib/TestsFor/Sequential.pm 100.00% <ø> (ø)
lib/Test/Class/Moose/Executor/Parallel.pm 90.19% <100.00%> (+0.61%) ⬆️
lib/Test/Class/Moose/Executor/Sequential.pm 100.00% <100.00%> (ø)
lib/Test/Class/Moose/Role/Executor.pm 87.61% <100.00%> (-0.17%) ⬇️
t/basic.t 100.00% <100.00%> (ø)
t/basiclib/TestsFor/Basic.pm 100.00% <100.00%> (ø)
t/basiclib/TestsFor/Basic/Subclass.pm 99.50% <100.00%> (+0.51%) ⬆️
... and 14 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d5d9604...a8f0812.

@oschwald (Contributor, Author) commented Sep 2, 2020

I am not sure how to read the two failures, but I think they are unrelated.

@autarch (Member) commented Sep 2, 2020

If you're not rebased off the latest master, that might explain the test failures. Otherwise, it means you need to run tidyall on the code.

@oschwald (Contributor, Author) commented Sep 2, 2020

It might be nice to get rid of the Subtest: prefix. Looking at how subtest_streamed is implemented, we could probably call Test2::API::run_subtest ourselves or alternatively call subtest_buffered with buffered => 0.
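
A minimal sketch of the first option, based on how subtest_streamed itself drives run_subtest (the class name and runner callback are hypothetical stand-ins):

    use Test2::V0;
    use Test2::API qw( run_subtest );

    # Hypothetical stand-ins for the real class name and per-class runner:
    my $test_class = 'TestsFor::Basic';
    my $run_class  = sub { pass("ran $test_class") };

    # Same streamed behavior as subtest_streamed, but the subtest name is used
    # verbatim instead of being prefixed with "Subtest: ".
    run_subtest(
        $test_class,
        $run_class,
        { buffered => 0, inherit_trace => 1 },
    );

    done_testing();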

@oschwald (Contributor, Author) commented Sep 2, 2020

There were two Test::Vars failures unrelated to my changes, which I have fixed; otherwise, tidyall is clean for me.

@oschwald oschwald force-pushed the greg/streamed-subtests branch from 0150a12 to 011bd7e on September 2, 2020 17:10
@autarch (Member) commented Sep 2, 2020

That Subtest: prefix is coming from Test2 itself, right? I think we'd have to subclass or replace some part of it to remove that. If it's not too gross to do so, that'd be great, but it's not worth the sort of mucking about with internals that's likely to break.

@oschwald (Contributor, Author) commented Sep 2, 2020

Yeah, the prefix is coming from subtest_streamed itself. I just pushed a commit showing how it might look without it.

@oschwald oschwald changed the title from "Use subtest_streamed for the sequential executor" to "[WIP] Use subtest_streamed for the sequential executor" on Sep 2, 2020
@oschwald (Contributor, Author) commented Sep 3, 2020

I realized that I had failed to push my last commit; I have pushed it now. I think this is ready for review.

@autarch (Member) left a comment

Instead of splitting all the test classes' expected_test_events into two methods, I think it'd be better to just pass in an argument indicating that the tests are being run in parallel.

AFAICT, all you'd need to do for this flag is something like this:

    event Note => sub {
        call message => 'TestsFor::Basic';
    } unless $is_parallel;

This will make it much simpler to change these tests in the future if that's needed.
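
Fleshed out a little (a hypothetical shape only; the real expected_test_events methods may be structured differently), it could look like:

    use Test2::V0;    # provides array, event, call, match, end

    # Hypothetical sketch of the suggested signature, not actual project code:
    # take an is_parallel flag and only expect the leading Note for the
    # sequential (streamed) run.
    sub expected_test_events {
        my ( $class, %args ) = @_;
        my $is_parallel = $args{is_parallel};

        return array {
            event Note => sub {
                call message => 'TestsFor::Basic';
            } unless $is_parallel;

            event Subtest => sub {
                call name => match qr/TestsFor::Basic/;
            };

            end();
        };
    }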

@oschwald (Contributor, Author) commented

My feeling was that the tests are already extremely hard to follow and update, and introducing arguments like that would make the situation even worse. Right now it is somewhat doable to update the tests by dumping the huge set of events, but once you introduce arguments and logic for all of the places where the two event trees differ, updating them that way becomes impossible.

That said, I think this whole method of testing event output is misguided. It makes the tests brittle and hard to update.

@autarch (Member) commented Oct 24, 2020

> My feeling was that the tests are already extremely hard to follow and update, and introducing arguments like that would make the situation even worse. Right now it is somewhat doable to update the tests by dumping the huge set of events, but once you introduce arguments and logic for all of the places where the two event trees differ, updating them that way becomes impossible.
>
> That said, I think this whole method of testing event output is misguided. It makes the tests brittle and hard to update.

Do you have any ideas on how to make the tests easier to work with? One possibility that comes to mind is to test the full set of events for only a few things, and then just look for a smaller subset in most tests, since each test is often only checking a specific portion of the test output.
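
For instance (purely illustrative; the class name and event checks are made up for the example), a subset-style check might look something like this:

    use Test2::V0;
    use Test2::API qw( intercept );
    use Test::Class::Moose::Runner;

    # Run one class and capture the events it would emit.
    my $events = intercept {
        Test::Class::Moose::Runner->new( test_classes => ['TestsFor::Basic'] )->runtests;
    };

    is(
        $events,
        array {
            # Look only at top-level subtest events; ignore notes, diags, plans, etc.
            filter_items { grep { $_->isa('Test2::Event::Subtest') } @_ };

            event Subtest => sub {
                call name => match qr/TestsFor::Basic/;
                call pass => T();
            };

            etc();    # allow any other subtests this particular test doesn't check
        },
        'TestsFor::Basic subtest passed',
    );

    done_testing();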
