
Precision errors when converting python floats to SCIP_REALs #794

Closed
Joao-Dionisio opened this issue Feb 22, 2024 · 9 comments

@Joao-Dionisio
Collaborator

Joao-Dionisio commented Feb 22, 2024

Conversion of Python types to C types sometimes yields precision errors. It's not clear to me when and why this happens.

from pyscipopt import Model

m = Model()
x = m.addVar()
m.setObjective(15.32*x)
m.writeProblem("model.cip")

[screenshot: model.cip shows the objective coefficient with a precision error]

The values stay correct right up to the call to SCIPchgVarObj in L1347.

Interestingly, if we set the objective at variable creation, the error does not occur.

from pyscipopt import Model

m = Model()
x = m.addVar(obj=15.32)
m.writeProblem("model.cip")

[screenshot: model.cip shows the coefficient 15.32 as expected]

System

  • OS: Ubuntu 22.04
  • SCIP version: 8.1.0
  • How did you install pyscipopt? Source
@leoneifler

The number 15.32 is not exactly representable in binary floating-point format. I have no idea why this does not happen when the objective is set at variable creation, but I am guessing this is the underlying reason for the observed effect.
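For illustration (not part of the original report), this is easy to verify in plain Python: Decimal exposes the exact binary value a Python float actually stores, while repr() prints only the shortest string that round-trips and so hides the error.

```python
from decimal import Decimal

value = 15.32

# Decimal(value) expands the exact binary double that is stored,
# which is close to, but not equal to, 15.32.
stored = Decimal(value)
print(stored)

# repr() prints the shortest decimal string that round-trips,
# so the stored error is invisible in normal output.
print(repr(value))  # '15.32'
```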

@Joao-Dionisio
Collaborator Author

Joao-Dionisio commented Feb 26, 2024

Hey @leoneifler, thanks for looking into this! That is true, but at least in Python the precision is much higher:

[screenshot: Python prints 15.32 at full double precision]
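To make the precision gap concrete (an illustrative sketch, not from the thread): round-tripping 15.32 through a 32-bit C float loses the value, while the 64-bit double that Python uses natively round-trips exactly.

```python
import struct

# Pack 15.32 into a 32-bit float ('f') and a 64-bit double ('d'),
# then unpack back to a Python float.
as_c_float = struct.unpack('f', struct.pack('f', 15.32))[0]
as_c_double = struct.unpack('d', struct.pack('d', 15.32))[0]

print(as_c_float)   # differs from 15.32 after roughly 7 significant digits
print(as_c_double)  # 15.32 exactly
```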

In this PR we reverted manual variable typing, which seems to be the cause of this. See here:

[screenshots: the relevant diff from the reverting PR]

Mo and I discussed this yesterday; we have a feeling it might be due to calls to float() in other files, like expr.pxi. In fact, we played around with some of this (but I can't tell for sure what did it 😞) and the error above no longer appears:

[screenshot: the error above no longer appears]

I have encountered this behavior in other circumstances as well, which is why I haven't created a PR.

Does this all seem reasonable, Leon? What do you think?

@DominikKamp
Contributor

Just out of interest, I would like to ask what the actual reason for this was. It looks like a reversion is considered a fix. Maybe it makes sense to try cdef double instead of cdef float, because a Python float behaves like a C double; even more consistent would be cdef SCIP_Real, if that is somehow possible.
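The cdef float vs. cdef double distinction can be sketched from pure Python via ctypes (an illustration added here, not from the thread): c_float mirrors what a cdef float would store, c_double what a cdef double would store, and SCIP_Real is a double in SCIP's default build.

```python
import ctypes

# c_float rounds the Python float to 32-bit precision, as a
# `cdef float` variable would; c_double preserves it exactly.
as_float = ctypes.c_float(15.32).value
as_double = ctypes.c_double(15.32).value

print(as_float)   # rounded to 32-bit precision, no longer equal to 15.32
print(as_double)  # 15.32 exactly
```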

@Joao-Dionisio
Collaborator Author

Oh, hey @DominikKamp! Yeah, this was pretty old and I couldn't reproduce it anymore; I think the "fix" was just what you described.

And thanks for the tip, it is definitely possible to use SCIP_Real in PySCIPOpt. I'll test this out over the coming days.

@DominikKamp
Contributor

Did it work?

@Joao-Dionisio
Collaborator Author

Hey @DominikKamp! Yeah, I forgot about this. I just added a bunch of types; the tests pass and the example above also works, but it's difficult to know whether something else might go wrong.

I'll also add some types to the inputs, these shouldn't be controversial, I suppose. I'll create a PR by the end of this morning.

@Joao-Dionisio
Collaborator Author

Okay, I managed to fail nearly 100% of the tests 😄

[screenshot: test run with nearly all tests failing]

End of this morning might have been too optimistic.

@DominikKamp
Contributor

It is not urgent; I just thought an easy performance gain would be nice for the next release, but maybe it is more involved.

@Joao-Dionisio
Collaborator Author

The performance gain should be quite small, as for most people it only affects model creation (plus some of the callback code, but again a fairly minimal gain). But yeah, I think it makes sense to do this, even if just to justify my inclusion in the release report :D
