Binary Blitz: The 0's and 1's of Binary Code
When all is well with your computer, you can bask in the illusion that you're
fluent in its language. As soon as you hit a glitch, however, you're forced to
remember that your PC speaks a lingo whose alphabet doesn't even resemble yours.
You're forced to realize that computers translate all your instructions into
strings of ones and zeroes, and that the translation process is about as
comprehensible as a foreign film on fast-forward.
Yet there are certain fundamentals of computer language that can be absorbed
by even the puny intellect of a PC beginner. Turning your intentions into binary
code is quite an involved process. To begin at the beginning, when you type
something into your computer, you are really giving instructions to a program,
and that program starts life as source code written in a human-readable
language. From there, a preprocessor prepares that source code for further
processing. A compiler then turns it into low-level code that more closely
resembles binary. Finally, an assembler and a linker perform the actual
translation into the binary code your machine runs. (Some languages skip those
last steps altogether and hand their source code to an interpreter, which
carries out the instructions directly, line by line.)
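The exact steps vary from one language to the next, so take the little sketch below as an illustration rather than a recipe. It uses Python, which the article itself never names, because Python's built-in compile() function and its standard dis module let you watch a line of source code turn into lower-level instructions without leaving your keyboard.

    # Illustrative sketch only; Python is our stand-in language here.
    import dis

    # One line of source code, held as an ordinary string.
    source = "total = 3 + 4"

    # compile() turns the source into a code object, a lower-level
    # form that Python can execute directly.
    code_object = compile(source, "<example>", "exec")

    # dis.dis() prints the resulting instructions, one step closer
    # to the ones and zeroes the machine ultimately runs.
    dis.dis(code_object)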
And what exactly is binary code? It's the language of zeroes and ones that
your computer or, more specifically, your computer's CPU (Central Processing
Unit) is able to understand. Each letter, numeral or symbol of human language
corresponds to between 7 and 16 zeroes and ones. Each zero or one is called a
bit, and computers group 8 bits into blocks of data known as bytes. Such bytes
are able to represent 256 different values. In most applications,
ones represent the "on" mode and zeroes represent the "off" mode. In terms of
computer memory, 1,024 bytes equal one kilobyte of stored information. 1,048,576
bytes represent one megabyte. And so on.
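If you would rather see those numbers than take them on faith, a few lines of Python (just one convenient way to poke at them, not anything your PC requires) reproduce the arithmetic:

    # Illustrative sketch in Python; the values come straight from above.
    # The letter "A" corresponds to the number 65, which written
    # out as 8 bits (one byte) is 01000001.
    print(ord("A"))                 # 65
    print(format(ord("A"), "08b"))  # 01000001

    # Eight bits, each either 0 or 1, allow 2 ** 8 = 256 values.
    print(2 ** 8)                   # 256

    # Memory sizes follow the same powers of two.
    print(2 ** 10)                  # 1024 bytes in a kilobyte
    print(2 ** 20)                  # 1048576 bytes in a megabyte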
If all of this sounds too mathematical for you, don't worry. Even computer
programmers avoid using pure binary code when creating new applications.
Hexadecimal notation offers a more compact way of writing the same information:
it is based on the number 16 rather than the number 2, as binary code is, so a
single hexadecimal digit stands in for exactly four bits. Most programmers work
out problems in hexadecimal notation before converting the results to binary
code.
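The quick Python snippet below (again, only an illustration, not any particular programmer's toolkit) writes the same value both ways:

    value = 0b11111111          # eight ones in binary...
    print(hex(value))           # ...collapse to 0xff in hexadecimal

    # Converting in the other direction is just as easy.
    print(format(0xFF, "08b"))  # 11111111
    print(int("ff", 16))        # 255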
And the way human-readable characters get mapped onto binary comes in different
forms today. There are essentially three coding schemes in common use: ASCII
(American Standard Code for Information Interchange), EBCDIC (Extended Binary
Coded Decimal Interchange Code), and Unicode. ASCII was first developed in 1963,
and today represents the most common form of character coding in PCs. EBCDIC is
used by IBM on its larger mainframes, while ASCII remains the code of choice for
IBM's personal computers.
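The difference is easy to demonstrate. Python happens to ship a codec for one common EBCDIC flavor, the IBM code page known as cp037 (our choice for this sketch, not something the article prescribes), so a few lines show the same letter coming out as different bytes under each scheme:

    # Illustrative sketch; cp037 is just one EBCDIC variant Python knows.
    text = "A"

    # Under ASCII the letter A is stored as the value 65.
    print(list(text.encode("ascii")))   # [65]

    # Under the EBCDIC code page cp037 the same letter is 193.
    print(list(text.encode("cp037")))   # [193]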
Unicode differs substantially from both of its cousins because, in place of
single 8-bit bytes, it was originally organized around 16-bit units. This allows
Unicode to render languages that need a far greater range of characters, such as
Japanese and Russian, to name but two. In that 16-bit form, Unicode is capable
of representing over 65,000 characters, which is a lot of characters by any
foreign film's standards.
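A few more lines of Python (once again, just one way to peek behind the curtain) make the point with a single Japanese character:

    # The Japanese character for "mountain" sits far beyond the
    # 128 characters ASCII can describe.
    mountain = "\u5c71"   # the character itself

    print(ord(mountain))                        # 23665, its code point
    print(list(mountain.encode("utf-16-be")))   # [92, 113]: two bytes

    # ASCII tops out at 127 and cannot encode it at all.
    try:
        mountain.encode("ascii")
    except UnicodeEncodeError as error:
        print(error)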
Good thing this boggling world of coding is all beneath the surface of your
computer screen, and translators are usually on hand to explain the most
important parts of the plot.