Introduction to variables
What is a variable?
A variable is a named storage location, a concept that makes programming easier and more efficient. It is entirely possible to have a programming language without different types of variables, but programs written in it would be less efficient and more prone to errors.
The common variable types
A variable's type defines the amount of memory to be allocated for a particular piece of data and how that memory chunk should be used by an application. The various types of variables are differentiated either by the size of the memory chunk that they use or by how the data stored at that memory block is interpreted.
The table above shows definitions for some of the most common types of variables on 32- and 64-bit architectures. As the table shows, some variable types are exactly the same size but are interpreted completely differently. For example, both long ints and doubles are eight bytes on a typical 64-bit system, but one is designed to store an integer value while the other holds a floating-point value. A char* variable is also eight bytes but doesn’t store the target value at the indicated memory address. Instead, it stores a pointer to another block of memory where the array of chars is stored.
Different variable types can also differ by the range of values that they can store. For example, a short int (two bytes) always stores a smaller range of values than the other types of ints. Similarly, doubles can represent a wider range of values, with greater precision, than floats.
In most cases, the particular type of a variable doesn’t matter to an application as long as it can hold the desired value and have it be interpreted correctly. However, in some cases, the variety of different variable types can create the potential for exploitable vulnerabilities.
Signed vs. unsigned variables
The other important concept to understand about variables is the difference between signed and unsigned variables. Whether a variable of a particular type is signed or unsigned determines the range of values that it can contain.
Signed and unsigned variables of the same type are the same size. However, they are interpreted differently, as shown in the image above.
In a signed variable, the most significant bit of the value is used to indicate the sign of the value. A value of one in the most significant bit indicates a negative number, while a value of zero indicates a positive number.
An unsigned variable, on the other hand, is only capable of storing positive numbers. In this case, a value of one in the most significant bit indicates that the number is a relatively large value (i.e., greater than the midpoint in the range of values that the variable can store).
Signed and unsigned variables both have their advantages. A signed variable can hold negative values, while an unsigned variable can hold a larger range of positive values. However, it is vital to keep track of a variable's signedness because only the values with the most significant bit clear (e.g., 0–32767 for a two-byte integer) are interpreted identically regardless of signedness.
Why variable types are significant for application security
The different types and signednesses of variables can seem like an inconsequential and low-level detail of how programming languages work. However, the details of how variables are stored and interpreted within an application are the cause of some of the most common programming vulnerabilities.
Integer overflow and underflow vulnerabilities are a common and easy mistake to make when developing an application. An understanding of variable types and signedness is essential to avoiding these types of vulnerabilities within an application.