pytest not working correctly! #124
Sigh, looks like I got that wrong. The tests do run but don't report failure correctly. Apparently, in my enthusiasm over the right thing almost happening, I failed to test test failures. I will fix that tomorrow (I hope). In the meantime, if you're able to use a unittest-based runner, that ought to work. Another thing to note: where you use
Thanks for the quick note. For us pytest is important since it enables Jenkins reporting :)
Fixes #124

Though pytest was collecting and running tests, the results were not actually being handled. If a test failed, it still appeared to pass. The fundamental reason for this is that only the GabbiSuite which contains all the tests from each YAML file was being checked. These look like tests to pytest and pass.

The eventual fix is fairly complex and could maybe be made less so by learning how to use modern parametrized pytest[1] rather than the old yield style being used here. The fix includes:

* Creating a PyTestResult class which translates various unittest result types into pytest skips or xfails, or re-raises errors as required.
* Working around the GabbiSuite.run()-based fixture handling, which unittest-based runners use automatically but which won't work properly in the pytest yield setting, by adding start() and stop() methods to the suite and yielding artificial tests which call start and stop.[2]

Besides getting failing tests to actually fail, this also gets some other features working:

* xfail and skip work, including skipping an entire YAML file with the SkipFixture.
* If a single test from a file is selected, all the prior tests in the file will run too, but only the one requested will report.

[1] http://pytest.org/latest/parametrize.html#pytest-generate-tests
[2] This causes the number of tests to increase, but it seems to be the only way to get things working without larger changes.
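To give a flavor of the first bullet without digging into the gabbi source, here is a minimal, hypothetical sketch (Python 3; this is not gabbi's actual PyTestResult) of how a unittest TestResult subclass can translate unittest outcomes into pytest ones. The callback signatures come from unittest.TestResult; everything else is illustrative.

```python
# Hypothetical sketch only -- not gabbi's actual code. It shows how
# unittest result callbacks can be mapped onto pytest behavior:
# skips become pytest skips, expected failures become xfails, and
# failures/errors are re-raised so pytest reports them.
import unittest

import pytest


class PyTestResult(unittest.TestResult):
    def addFailure(self, test, err):
        # err is the (exc_type, exc_value, traceback) tuple unittest
        # passes in; re-raise so pytest sees the original failure.
        raise err[1].with_traceback(err[2])

    def addError(self, test, err):
        raise err[1].with_traceback(err[2])

    def addSkip(self, test, reason):
        # pytest.skip raises an exception pytest reports as a skip.
        pytest.skip(reason)

    def addExpectedFailure(self, test, err):
        pytest.xfail(str(err[1]))
```

With something like this in hand, a pytest-collected function can run each gabbi test case against such a result object, so failures surface instead of being swallowed.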
@keyhan, it looks like I've been able to fix it in #126. If you're able to test that before I release it, that would be great; otherwise I'll sleep on it, look it over again tomorrow, and then release it. It should work now. The basic problem was that results were not being managed during the running of the tests, but getting that concept pushed in turned out to be fairly complex.
Thanks, I will check it later today.
I've got a release building now that will go out as 1.17.1 in a few minutes. It may not be perfect and may still need some further refinement, but it is at least better than the previous version. So no need to do anything special when you are doing your testing later; just get the latest version from PyPI. Would definitely like to hear your feedback whatever happens. Thanks.
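For reference, upgrading is the standard pip command (nothing gabbi-specific assumed here):

```
pip install --upgrade gabbi
```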
The new version is working with the example you gave above. The main remaining issue is that when there is a failure the output is a mess; I've created an issue for that: #127.
Hi, I am testing your latest from pip.
This is what I get on screen when that one test passes:

```
keyhan@keyhan-desktop:~/gabbi$ py.test --junitxml=/home/keyhan/gabbi/result.xml -svx test_keyhan.py
test_keyhan.py::test_gabbits::['start_driver_google_scenario_do_get_to_google.se'] <- ../../../usr/local/lib/python2.7/dist-packages/gabbi/suite.py PASSED
-------------- generated xml file: /home/keyhan/gabbi/result.xml ---------------
```
Yeah, that's the result of the way fixtures are being managed. What you see there is a stopgap to deal with the separation of collecting and running tests, and the way that gabbi fixtures were initially implemented in a way that integrates with unittest.TestSuite (instead of cases). It'll have to do until I come up with something better. I'll make an issue for that too so it is tracked. I'm pretty sure all these issues can be overcome; it's just a matter of first overcoming my ignorance (or someone else beating me to it). Your help has been extremely useful (obviously).
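To make the stopgap concrete, here is a hypothetical sketch (not the actual gabbi code; start_fixture/stop_fixture follow the gabbi fixture interface, the rest is illustrative) of the suite-level start/stop methods that the artificial "start_..." and "stop_..." pseudo-tests would call:

```python
# Hypothetical sketch of suite-level fixture start/stop, the kind of
# thing the artificial start/stop tests in the PASSED output invoke.
class SuiteFixtureManager(object):
    def __init__(self, fixtures):
        self.fixtures = fixtures  # gabbi fixture instances
        self._started = []

    def start(self):
        # Called by an artificial "start" test before the suite's
        # real tests run, since pytest never calls GabbiSuite.run().
        for fixture in self.fixtures:
            fixture.start_fixture()
            self._started.append(fixture)

    def stop(self):
        # Called by an artificial "stop" test after the last real
        # test; tear down in reverse order of startup.
        for fixture in reversed(self._started):
            fixture.stop_fixture()
```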
Hi, I have been trying gabbi to write some simple tests and had luck using gabbi-run, but I need Jenkins reporting, so I tried the py.test version, with the loader code looking like this:
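(The original snippet was lost in formatting; the sketch below follows gabbi's documented yield-style pytest loader of that era, with the directory and host names guessed for illustration.)

```python
# Reconstruction sketch -- the comment's actual loader code was lost.
import os

from gabbi import driver

TESTS_DIR = 'gabbits'  # guessed directory of YAML test files


def test_gabbits():
    test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
    # py_test_generator yields tests for pytest's old yield-style
    # collection (the style this issue is about).
    for test in driver.py_test_generator(test_dir, host='google.se'):
        yield test
```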
The YAML file is very simple:
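(Again a reconstruction: the actual YAML wasn't preserved. Judging from the test id in the py.test output, it was something along these lines; gabbi test files are a top-level `tests` list of name/method/status entries.)

```yaml
# Guessed reconstruction of the lost YAML file.
tests:
    - name: do get to google.se
      GET: /
      status: 200
```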
The problem now is that the test passes even though the URL does not exist, so the test ought to fail with connection refused. I have also tried a site returning 404, but the test still passes. Am I doing something wrong here?