Is there a performance or code maintenance issue with using
assert as part of the standard code instead of using it just for debugging purposes?
assert x >= 0, 'x is less than zero'
better or worse than
if x < 0: raise Exception('x is less than zero')
Also, is there any way to set a business rule like
if x < 0 raise error that is always checked, without the
try/except/finally? So that if, at any time throughout the code,
x becomes less than 0, an error is raised. In other words, if you set
assert x >= 0 at the start of a function, then anywhere within the function where
x becomes less than 0, an exception is raised.
To automatically raise an error whenever x becomes less than zero anywhere in the function, you can use class descriptors. Here is an example:
class LessThanZeroException(Exception):
    pass

class variable(object):
    def __init__(self, value=0):
        self.__x = value

    def __set__(self, obj, value):
        if value < 0:
            raise LessThanZeroException('x is less than zero')
        self.__x = value

    def __get__(self, obj, objType):
        return self.__x

class MyClass(object):
    x = variable()

>>> m = MyClass()
>>> m.x = 10
>>> m.x -= 20
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "my.py", line 7, in __set__
    raise LessThanZeroException('x is less than zero')
LessThanZeroException: x is less than zero
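One caveat: the descriptor above stores the value on the descriptor object itself, so every instance of MyClass shares a single x. A per-instance variant (a sketch; the NonNegative name and the underscore storage attribute are my own choices, not from the answer above) would store the value on the owner object instead:

```python
class LessThanZeroException(Exception):
    pass

class NonNegative(object):
    """Descriptor that rejects negative values and stores the
    accepted value on the owner instance, not on the descriptor."""
    def __init__(self, name='x'):
        self.name = '_' + name  # per-instance storage attribute

    def __set__(self, obj, value):
        if value < 0:
            raise LessThanZeroException('%s is less than zero' % self.name.lstrip('_'))
        setattr(obj, self.name, value)

    def __get__(self, obj, objType=None):
        if obj is None:
            return self  # accessed on the class, not an instance
        return getattr(obj, self.name, 0)

class MyClass(object):
    x = NonNegative('x')
```

With this version, setting a.x does not affect b.x, and assigning a negative value to either instance raises LessThanZeroException.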
Asserts should be used to test conditions that should never happen. The purpose is to crash early in the case of a corrupt program state.
Exceptions should be used for errors that can conceivably happen, and you should almost always create your own Exception classes.
For example, if you're writing a function to read from a configuration file into a
dict, improper formatting in the file should raise a
ConfigurationSyntaxError, while you can
assert that you're not about to return None.
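A minimal sketch of that split, reusing the ConfigurationSyntaxError name from above (the key=value file format and the parse_config helper are assumptions for illustration):

```python
class ConfigurationSyntaxError(Exception):
    """Bad input from the outside world: conceivable at runtime,
    so it gets its own exception class."""
    pass

def parse_config(lines):
    """Parse 'key=value' lines into a dict (hypothetical format)."""
    result = {}
    for lineno, line in enumerate(lines, start=1):
        line = line.strip()
        if not line:
            continue
        if '=' not in line:
            # External data can be malformed: raise an exception.
            raise ConfigurationSyntaxError('line %d: expected key=value' % lineno)
        key, _, value = line.partition('=')
        result[key.strip()] = value.strip()
    # Internal invariant: a successful parse never returns None.
    assert result is not None
    return result
```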
In your example: if
x is a value set via a user interface or from an external source, an exception is best. If
x is only set by your own code in the same program, go with an assertion.