
Matplotlib drawer: fix bug #1437 and add tests #1438

Merged

Conversation

sergiomtzlosa
Contributor

Checklist:

  • Reviewers confirm new code works as expected.
  • Tests are passing.
  • Coverage does not decrease.
  • Documentation is updated.

@alecandido
Member

I already noticed that the CI is still not running...

@scarrazza scarrazza closed this Sep 6, 2024
@scarrazza scarrazza reopened this Sep 6, 2024
@scarrazza scarrazza added enhancement New feature or request and removed enhancement New feature or request labels Sep 6, 2024

@alecandido
Member

@scarrazza this is the only related change I've found in the last six months

github/docs@9e3e157

But maybe I should keep checking; I may have missed something.
The file is the following:
https://github.com/github/docs/blob/main/content/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows.md

(I believe the workflow runner is not even open source, but GitHub docs are, so we can check those, at least)

@scarrazza scarrazza added the run-workflow Forces github workflow execution label Sep 7, 2024
@scarrazza
Member

scarrazza commented Sep 7, 2024

Ok, the problem is the if condition in rules.yml:

if: contains(github.event.pull_request.labels.*.name, 'run-workflow') || github.event_name == 'push'

This line forces a skip because no push event occurred (this PR was opened after a series of pushes had already been performed).
We should add the "PR opened" event to the if condition too.
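
A minimal sketch of such a change, assuming the intent is to also trigger on pull_request events (the exact condition in rules.yml may differ):

if: contains(github.event.pull_request.labels.*.name, 'run-workflow') || github.event_name == 'push' || github.event_name == 'pull_request'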

@alecandido
Member

Correct, thanks for the investigation.

In any case, the run-workflow label is definitely a good solution as well, at least for this PR.

Contributor

@MatteoRobbiati MatteoRobbiati left a comment

Thanks a lot @scarrazza for spotting the CI problem and @sergiomtzlosa for the fix.
The lint is now fine. However, there is a problem in the test code we added to the documentation. Please remove the %matplotlib inline line from this example and ask me for a new review @sergiomtzlosa :)

@sergiomtzlosa
Contributor Author

Thanks @alecandido @scarrazza for tuning the CI pipeline, I removed the line!

@scarrazza scarrazza added run-workflow Forces github workflow execution and removed run-workflow Forces github workflow execution labels Sep 7, 2024

codecov bot commented Sep 7, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.94%. Comparing base (218f298) to head (51cd75d).
Report is 127 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1438      +/-   ##
==========================================
+ Coverage   97.10%   99.94%   +2.83%     
==========================================
  Files          81       81              
  Lines       11653    11708      +55     
==========================================
+ Hits        11316    11701     +385     
+ Misses        337        7     -330     
Flag        Coverage Δ
unittests   99.94% <100.00%> (+2.83%) ⬆️

Flags with carried forward coverage won't be shown.


Contributor

@MatteoRobbiati MatteoRobbiati left a comment

Thanks @sergiomtzlosa for the fix!
Now we are missing the tests. I would suggest creating a test_ui_mlpdrawings.py file (or something like that) in which we test all the code of your contribution.
Some dummy tests will be enough to cover it (see the sketch below).
A complete report of the covered lines is provided by Codecov every time a new commit is pushed here (see the last lines in the checks list) :)

Let me know if you need some help in writing/defining the tests!
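
A minimal sketch of such a dummy test; the plot_circuit name, its import path, and its call signature are assumptions for illustration only:

import matplotlib
matplotlib.use("Agg")  # avoid opening GUI windows during tests

from qibo import Circuit, gates
from qibo.ui import plot_circuit  # assumed import path for the new drawer

def test_plot_circuit_smoke():
    # build a small circuit and check that drawing it does not raise
    circuit = Circuit(2)
    circuit.add(gates.H(0))
    circuit.add(gates.CNOT(0, 1))
    plot_circuit(circuit)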

@sergiomtzlosa
Contributor Author

sergiomtzlosa commented Sep 8, 2024

Hi, thanks @MatteoRobbiati, just a question: which version should I install to run the tests? I installed pytest and I get these errors when running the tests:

> pytest --version
pytest 8.3.2
__main__.py: error: unrecognized arguments: --cov=qibo --cov-append --cov-report=xml --cov-report=html
  inifile: ..\qibo\pyproject.toml
  rootdir: ..\qibo
> pytest test_ui_mpldrawer.py
ERROR: usage: __main__.py [options] [file_or_dir] [file_or_dir] [...]
__main__.py: error: unrecognized arguments: --cov=qibo --cov-append --cov-report=xml --cov-report=html
  inifile: ..\qibo\pyproject.toml
  rootdir: ..\qibo

Maybe codecov is not triggering...
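
(For reference: the --cov, --cov-append, and --cov-report options come from the pytest-cov plugin, apparently pulled in through the addopts in pyproject.toml, so an error like the one above usually means that plugin is missing. A likely fix, assuming a standard pip setup:

pip install pytest-cov

After installing it, pytest should accept the coverage options.)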

@renatomello renatomello linked an issue Sep 9, 2024 that may be closed by this pull request
@scarrazza scarrazza added the run-workflow Forces github workflow execution label Sep 11, 2024
@MatteoRobbiati
Contributor

During the meeting we agreed on setting up a somewhat more robust strategy to test the images. What is done so far is perfectly fine, but we just need to be sure the output of our plots is actually what we expect during the tests. To do this, we can (see the sketch below):

  1. generate locally the images we are re-generating in the tests;
  2. convert them into NumPy arrays and store those arrays in a folder;
  3. during the tests, check whether the new plots, converted into arrays, are actually equal to the target arrays.

Alternatively, we could also use the Matplotlib testing tools.
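
A minimal sketch of steps 1-3, assuming the Agg backend and a hypothetical targets/ folder for the stored arrays:

import matplotlib
matplotlib.use("Agg")  # headless, deterministic backend
import matplotlib.pyplot as plt
import numpy as np

def fig_to_array(fig):
    # render the figure and return its RGBA pixel buffer as a NumPy array
    fig.canvas.draw()
    return np.asarray(fig.canvas.buffer_rgba())

# steps 1-2: generate the reference plot once and store it as an array
fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
np.save("targets/line_plot.npy", fig_to_array(fig))  # hypothetical path

# step 3: in the test, regenerate the plot and compare against the target
target = np.load("targets/line_plot.npy")
np.testing.assert_array_equal(fig_to_array(fig), target)

As for the Matplotlib testing tools mentioned above, matplotlib.testing.decorators.image_comparison is the usual entry point.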

@sergiomtzlosa
Contributor Author

sergiomtzlosa commented Sep 13, 2024

Good!!! I will start with the plot arrays comparison...

@MatteoRobbiati MatteoRobbiati added this to the Qibo 0.2.12 milestone Sep 18, 2024
@MatteoRobbiati
Contributor

Hi there! Do you need any help here? It would be ideal to fix this for the next review (friday). Let me know about the status/if help is needed @sergiomtzlosa.

@sergiomtzlosa
Contributor Author

sergiomtzlosa commented Sep 18, 2024

Hi @MatteoRobbiati, everything is done in the latest commit. Please let me know if something is missing; feel free to launch the CI.

@MatteoRobbiati MatteoRobbiati added run-workflow Forces github workflow execution and removed run-workflow Forces github workflow execution labels Sep 18, 2024
@MatteoRobbiati MatteoRobbiati self-requested a review September 18, 2024 11:20
@sergiomtzlosa
Contributor Author

Hi @MatteoRobbiati, I am wondering why all tests pass but the macOS tests fail or get cancelled. Can you shed some light on this? Is the CI maybe crashing?

Contributor

@MatteoRobbiati MatteoRobbiati left a comment

Thanks for this @sergiomtzlosa! It sounds like what I had in mind. I was wondering whether PNG images are the best way to store the data, but I think this can work.

  1. PIL is used to test the assertions here, but I think it has to be added to pyproject.toml and the lock file updated (we need to specify that this package has to be installed when running the tests);
  2. tests are failing on macOS right now. It seems the arrays are not equivalent... maybe we should consider using a specific Matplotlib backend (see the sketch below)?
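
A sketch of the backend pinning from point 2, assuming the headless Agg backend is acceptable for the tests:

import matplotlib
matplotlib.use("Agg")  # must be selected before importing pyplot
import matplotlib.pyplot as plt

Equivalently, the MPLBACKEND=Agg environment variable can be exported in the CI job.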

@MatteoRobbiati
Contributor

Hi @MatteoRobbiati, I am wondering why all tests pass but the macOS tests fail or get cancelled. Can you shed some light on this? Is the CI maybe crashing?

Sorry!! Missed it! Remember also to ask me for review once you are done with modifications :)

@sergiomtzlosa
Contributor Author

sergiomtzlosa commented Sep 18, 2024

Hi @MatteoRobbiati, now I have a bigger issue: I cannot work on this until next week, so I cannot finish it by Friday. I would ask for some help to get this done on time; otherwise I will fix it next week, sorry!

@alecandido
Member

Testing exact equality pixel by pixel is definitely a hard constraint, so you should make sure there is no platform-dependent behavior in Matplotlib as well (even driven by the environment, i.e. variables and packages available on a certain system).

A different option could be generating the pictures directly in the CI and caching them somewhere, separately for each platform, skipping the tests if they are not available.
However, while this option would be more robust with respect to platform-dependent behaviors, it would be much more expensive (time-wise) to instrument...

If you're able to debug the issue and fix some env var to obtain the same result, so much the better.
Otherwise, I'd consider it acceptable even just to test that an image file is produced, whatever its content, since that tells us the code didn't raise/crash (at least).
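
A minimal sketch of that fallback, using pytest's tmp_path fixture and a plain figure as a stand-in for the actual drawer output:

import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

def test_drawer_produces_image(tmp_path):
    # a plain figure stands in here for the circuit drawer under test
    fig, ax = plt.subplots()
    ax.plot([0, 1], [1, 0])
    out = tmp_path / "circuit.png"
    fig.savefig(out)
    # only check that a non-empty file was produced, not its content
    assert out.exists()
    assert out.stat().st_size > 0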

@sergiomtzlosa
Contributor Author

sergiomtzlosa commented Sep 18, 2024

Finally, I could take some time and I replaced the images with arrays stored in files for the matching, so the PIL library is no longer needed. I also set the Matplotlib backend, and I hope all tests pass this time; otherwise I will go with the solutions you suggested @alecandido @MatteoRobbiati

@MatteoRobbiati MatteoRobbiati added run-workflow Forces github workflow execution and removed run-workflow Forces github workflow execution labels Sep 19, 2024
Contributor

@MatteoRobbiati MatteoRobbiati left a comment

Thanks a lot for this effort @sergiomtzlosa, and again sorry for not noticing this before.
Now everything looks good to me. I re-ran the tests and everything is passing.

@sergiomtzlosa
Contributor Author

Thanks @alecandido @MatteoRobbiati, finally we made it!

@MatteoRobbiati MatteoRobbiati added this pull request to the merge queue Sep 19, 2024
Merged via the queue into qiboteam:master with commit df709ce Sep 19, 2024
16 checks passed
Labels
run-workflow Forces github workflow execution
Development

Successfully merging this pull request may close these issues.

Variable not defined in mpldrawer