We have talked about integers and floating-point numbers. What's the difference? Integers do not have a decimal component; they are also called whole numbers. Floating-point numbers do have decimals, and they are used to represent real numbers.
In the Go spec, there is a section on Numeric Types:
uint8 the set of all unsigned 8-bit integers (0 to 255)
uint16 the set of all unsigned 16-bit integers (0 to 65535)
uint32 the set of all unsigned 32-bit integers (0 to 4294967295)
uint64 the set of all unsigned 64-bit integers (0 to 18446744073709551615)
int8 the set of all signed 8-bit integers (-128 to 127)
int16 the set of all signed 16-bit integers (-32768 to 32767)
int32 the set of all signed 32-bit integers (-2147483648 to 2147483647)
int64 the set of all signed 64-bit integers (-9223372036854775808 to 9223372036854775807)
float32 the set of all IEEE-754 32-bit floating-point numbers
float64 the set of all IEEE-754 64-bit floating-point numbers
complex64 the set of all complex numbers with float32 real and imaginary parts
complex128 the set of all complex numbers with float64 real and imaginary parts
byte alias for uint8
rune alias for int32
uint either 32 or 64 bits
int same size as uint
uintptr an unsigned integer large enough to store the uninterpreted bits of a pointer value
Do we have to use all these types? The answer is, "only if you want to."
If you need to specify an exact size, because you don't want to waste any extra bits in your program (it's going to scale to billions of users, and an extra couple of bits times billions translates to terabytes of data), then you can be very precise if you know what kind of data you're going to store. In that case, you may want to choose the smallest practical data type.
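For example, here is a minimal sketch of picking exact sizes; the specific choices (a uint8 for an age, an int16 for a year) are just illustrative assumptions:

package main

import (
	"fmt"
)

// Illustrative assumptions: an age always fits in 0 to 255,
// and a year fits comfortably in a signed 16-bit integer.
var age uint8 = 44
var year int16 = 2016

func main() {
	fmt.Println(age, year)
	fmt.Printf("%T %T\n", age, year)
}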
Generally speaking though, the Go programming language was designed for ease of programming. Remember: efficient execution, efficient compilation, and ease of programming. So, you can just use int and float64:
package main

import (
	"fmt"
)

func main() {
	x := 42
	y := 42.34534
	fmt.Println(x)
	fmt.Println(y)
	fmt.Printf("%T\n", x) // %T prints the value's type
	fmt.Printf("%T\n", y)
}
In this case, the compiler figured out the type for us, and it used the types int and float64. We can specify the type ourselves like so:
package main

import (
	"fmt"
)

var x int
var y float64

func main() {
	x = 42
	y = 42.34534
	fmt.Println(x)
	fmt.Println(y)
	fmt.Printf("%T\n", x)
	fmt.Printf("%T\n", y)
}
Note that here, instead of redeclaring the variables with :=, which would create variable shadowing, we use the assignment operator = instead.
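To see why := would be a problem here, consider this sketch (my own example, not from the original): using := inside main declares a new x that shadows the package-level x, so the package-level variable never receives the value.

package main

import (
	"fmt"
)

var x int // package-level x starts at its zero value, 0

func main() {
	x := 42        // := declares a NEW x that shadows the package-level x
	fmt.Println(x) // 42 (the inner, shadowing x)
	printPackageX()
}

// printPackageX shows that the package-level x was never updated.
func printPackageX() {
	fmt.Println(x) // 0
}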
If you try to assign a floating-point number to x after having declared it as an int, the compiler will throw an error.
package main

import (
	"fmt"
)

var x int
var y float64

func main() {
	x = 42.34534 // does not compile: a float constant cannot be assigned to an int (it would be truncated)
	y = 42.34534
	fmt.Println(x)
	fmt.Println(y)
	fmt.Printf("%T\n", x)
	fmt.Printf("%T\n", y)
}
Looking back at the Numeric Types documentation, we can see there are int8 and uint8, etc. The u stands for unsigned. A signed type runs from negative to positive values (the positive sign is just presumed in the numbering system); an unsigned type runs from 0 up.
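As a quick illustration (my own sketch, not from the original), an unsigned type cannot hold a negative value at all:

package main

import (
	"fmt"
)

var x uint8 = 255 // fine: the top of the uint8 range
// var y uint8 = -1 // does not compile: -1 is outside the uint8 range

func main() {
	fmt.Println(x)
	fmt.Printf("%T\n", x)
}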
If we're using 8 bits, as in int8 and uint8, we can store 256 values. That's 2 to the power of 8: 2, 4, 8, 16, 32, 64, 128, 256. So why does uint8 only go up to 255? Because 0 counts as one of the 256 values. That is, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 are the first ten values, and 255 is the 256th.
Likewise with int8, we store 256 values, from -128 to 127, including 0.
Notice we can store -128 in an int8, but not -129:
package main

import (
	"fmt"
)

var x int8 = -129 // try with -128

func main() {
	fmt.Println(x)
	fmt.Printf("%T\n", x)
}
Likewise, we can store up to 127, but not 128:
package main

import (
	"fmt"
)

var x int8 = 128 // try with 127

func main() {
	fmt.Println(x)
	fmt.Printf("%T\n", x)
}
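Rather than memorizing these boundaries, you can look them up; here is a small sketch (my own example) using the integer limit constants from the standard library's math package:

package main

import (
	"fmt"
	"math"
)

func main() {
	// the math package provides the limits of the fixed-size integer types
	fmt.Println(math.MinInt8, math.MaxInt8)   // -128 127
	fmt.Println(math.MaxUint8)                // 255
	fmt.Println(math.MinInt16, math.MaxInt16) // -32768 32767
}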
We can be fairly specific as to the size and precision we wish our variables to have. And if we do not want to specify, we can just use int or float64.
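To make "precision" concrete, here is a small sketch (my own illustration) comparing float32 and float64 storing the same value:

package main

import (
	"fmt"
)

func main() {
	// the same constant expression stored at two different precisions
	var a float32 = 1.0 / 3.0
	var b float64 = 1.0 / 3.0
	fmt.Println(a) // 0.33333334
	fmt.Println(b) // 0.3333333333333333
	fmt.Printf("%T %T\n", a, b)
}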
The nice thing about int is that there is also a set of predeclared numeric types with implementation-specific sizes:
uint either 32 or 64 bits
int same size as uint
uintptr an unsigned integer large enough to store the uninterpreted bits of a pointer value
So, the nice thing about using int or uint is that when you compile your program for a particular architecture, the compiler will determine whether the int stores 32 bits or 64 bits. A 32-bit machine processes 32 bits at a time; that's called the word size. The word size of a computer refers to how many 0s and 1s it can process at once.
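One way to see which size your int ends up with is the strconv.IntSize constant, which reports the size in bits of an int or uint (a quick sketch, not from the original):

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// IntSize is the size in bits of an int or uint on this platform
	fmt.Println(strconv.IntSize) // 64 on a 64-bit platform, 32 on a 32-bit one
}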
All numeric types are distinct, except for byte, which is an alias for uint8, and rune, which is an alias for int32. Remember a byte is 8 bits, so instead of using uint8 you can also use byte. A rune is an alias for int32, and that means 32 0s and 1s. A rune represents a character, and Go uses the UTF-8 character encoding scheme.
UTF-8 was created by Ken Thompson and Rob Pike, who also created the Go programming language. It's the world's most popular character encoding scheme, and it uses 1 to 4 bytes to encode a character, depending upon the character. It can encode all the characters of all the languages throughout the world, and its first 128 characters are the same as ASCII.
Since a character in UTF-8 can take up to 4 bytes, and each byte is 8 bits, you need up to 32 bits to hold a single character. That's why rune is an alias for int32.
package main

import (
	"fmt"
)

var x byte = 127        // fits easily in a byte (uint8: 0 to 255)
var y rune = 2147483647 // the largest value a rune (int32) can hold

func main() {
	fmt.Println(x)
	fmt.Println(y)
	fmt.Printf("%T\n", x)
	fmt.Printf("%T\n", y)
}
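To see UTF-8's variable-width encoding at work, here is a sketch (my own example): the string below mixes 1-byte ASCII characters with one 2-byte character, so the byte count and the character count differ.

package main

import (
	"fmt"
)

func main() {
	s := "héllo" // the é takes 2 bytes in UTF-8; the other letters take 1 byte each
	fmt.Println(len(s))         // 6 bytes
	fmt.Println(len([]rune(s))) // 5 characters (runes)
	// ranging over a string yields runes, not bytes
	for i, r := range s {
		fmt.Printf("%d\t%c\t%T\n", i, r, r) // byte index, character, type (int32)
	}
}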
Let's have a look at the runtime package on GoDoc. Using this package, we can see the operating system and the architecture that a Go program is running on:
package main

import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Println(runtime.GOOS)
	fmt.Println(runtime.GOARCH)
}
You may see something like nacl and amd64p32. The nacl part is because we're running this on the Go Playground, which uses Native Client. The amd64p32 part says that the word size is 64 bits but pointers are 32 bits.