Hi All,
While trying to slim down my code I noticed the following unexpected behaviour:
Line of code is
if (sFull[znum] != xpts-1) sFull[znum]++;
xpts is a constant, defined from another constant, e.g.
#define totalx 100
#define xpts totalx/2
On changing the "xpts-1" term in the code to "49", which is the result of the #defines, it compiles using 3 fewer bytes. Is it actually doing the "-1" at run-time rather than optimising it out?
Now, if you code like I do and use lots of #defines to control operation, this could add up to a considerable saving.
So now I'm just wondering how the pre-processor handles constant expressions...
I did check that memory usage was the same whether I used a #define or a constant containing an expression, as long as it wasn't then used in a line of code with further manipulation.
NOTE - The #define is declared following the examples available on the 4D website.