While coding my own engine in C++ with OpenGL, I forgot to do something. Maybe I forgot to assign a pointer or to pass a variable. In the end I had copied a NaN value into the vertices of my Model, which was supposed to be a wrapper for data I wanted to read and visualize.
Printing the entire Model to the terminal left me confused about why everything was suddenly NaN when it had started out fine.
Took me 2 hours to find out why the final output of a neural network was a bunch of NaNs. This is always very annoying, but I can't really complain, it makes sense. Just sucks.
I guess you can always just add an
assert not data.isna().any()
in strategic locations.
If (var.nan){var = 0} my beloved.
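A minimal C++ sketch of both suggestions, assuming a plain float and debug builds; assert_finite and nan_to_zero are hypothetical names, not anything from a real library:

    #include <cassert>
    #include <cmath>

    // Debug-build guard: crash loudly the moment a NaN shows up.
    inline void assert_finite(float v) {
        assert(!std::isnan(v) && "NaN reached this point");
    }

    // The beloved pattern above, spelled out: silently replace NaN with 0.
    inline float nan_to_zero(float v) {
        return std::isnan(v) ? 0.0f : v;
    }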
It also depends on the context.
That could be a nice way. Sadly it was in a C++ code base (using TensorFlow), so no such nice things (it would be slow, too). I skill-issued myself by assuming a struct would be zero-initialized, but
MyStruct input;
is not, while
MyStruct input{};
is (that was the fix). Long story.
I too have forgotten to memset my structs in C++ TensorFlow after prototyping in Python.
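To make the initialization difference concrete, here is a minimal sketch; MyStruct and its float members are stand-ins for the real code:

    #include <cstdio>

    struct MyStruct {
        float x, y, z;  // no default member initializers
    };

    int main() {
        MyStruct a;   // default-initialized: members hold indeterminate garbage
        MyStruct b{}; // value-initialized: members are zeroed
        // Reading a.x here would be undefined behavior; in practice you get
        // whatever bytes were lying around, which can decode to a NaN.
        std::printf("b = (%f, %f, %f)\n", b.x, b.y, b.z); // guaranteed 0 0 0
    }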
Oof. C++ really is a harsh mistress.
I hope it was garlic NaN at least.
this is just like in regular math too. not being a number is just so fun that nobody wants to go back to being a number once they get a taste of it
The funniest thing about NaNs is that they're actually coded, so you can see what caused one if you look at the binary. The only problem is, due to the nature of NaNs, that code is almost always going to resolve to "tried to perform arithmetic on a NaN".
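A sketch of what "looking at the binary" means, assuming C++20 for std::bit_cast: in a 64-bit IEEE 754 double, a NaN has all-ones exponent bits plus a nonzero mantissa, and those mantissa bits are the payload.

    #include <bit>
    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    int main() {
        double nan = std::sqrt(-1.0);  // produces a quiet NaN
        auto bits = std::bit_cast<std::uint64_t>(nan);
        std::printf("bits = %016llx\n", (unsigned long long)bits);
        // Caveat: hardware usually emits the default quiet NaN or propagates
        // the first NaN operand, so the payload rarely says more than
        // "arithmetic on a NaN", as the comment above notes.
    }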
There are also other coded special values which are defined and sometimes useful, such as +/-INF, MAX, MIN (epsilon), and imaginary results.
Consider IEEE 754 arithmetic as monadic. Simple!
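Half a joke, but the analogy holds: NaN propagates through arithmetic the way Nothing propagates through a Maybe chain, and it also escapes every ordered comparison. A tiny demo:

    #include <cmath>
    #include <cstdio>

    int main() {
        double x = std::nan("");           // one NaN anywhere in the chain...
        double y = (x + 1.0) * 2.0 - 3.0;  // ...and every later step is NaN
        std::printf("y = %f\n", y);        // prints nan
        // Comparisons all come back false, including x == x.
        std::printf("%d %d %d\n", (int)(x < 0.0), (int)(x > 0.0), (int)(x == x));
    }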