Monday, April 02, 2012

Do computer science students need to know about binary anymore?

Lately I have seen a couple of cultural references to "binary" and "the 0's and 1's of computers/digital data", but just this morning I realized that it has been a very long time since I needed to know much at all about "binary" data. Sure, I need to know how many "bits" are used for a character, integer, or float value, but that is mostly just to know its range of values, not the 0/1 makeup of the specific values. And sure, I've looked at hex data semi-frequently (e.g., a SHA hash), but even then the 0/1 aspect of "binary" is completely deemphasized; we might as well be working with hex-based computers as binary-based computers.
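To put it concretely, here is a minimal C++ sketch (assuming nothing beyond the standard library) of that day-to-day reality: what matters is a type's width and range, and even "binary" data like a hash gets read in hex, not bits.

```cpp
// What most application code cares about: a type's width and range,
// not the individual 0s and 1s behind it.
#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    std::cout << "int32_t holds "
              << std::numeric_limits<std::int32_t>::min() << " .. "
              << std::numeric_limits<std::int32_t>::max() << '\n';

    // Even "binary" data like a hash is usually viewed in hex, two digits
    // per byte, with no thought given to the underlying bits.
    std::uint8_t digest[4] = {0xDE, 0xAD, 0xBE, 0xEF};
    std::cout << std::hex;
    for (auto b : digest)
        std::cout << static_cast<unsigned>(b);
    std::cout << '\n';  // prints "deadbeef"
}
```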
 
Sure, hardware engineers still need to know about binary data.
 
And, on rare occasion, software developers do find a use for "bit" fields, but even though that technique leans on the compact storage of binary values, I'm sure a hex-based machine could implement bit fields virtually as efficiently. In any case, bit fields don't strictly depend on the computer being binary-based. How many "web programmers" or "database programmers" or even "Java programmers" need even a rudimentary comprehension of "binary" data, as opposed to ranges of values?
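Here is a minimal sketch of a bit field, assuming standard C++ and nothing more: the programmer declares widths and value ranges, and the compiler does whatever packing the hardware wants, binary or otherwise.

```cpp
// A C++ bit field: widths are declared, and the compiler handles all the
// masking and shifting; nothing here requires thinking in 0s and 1s.
#include <iostream>

struct PackedDate {
    unsigned day   : 5;   // 1..31
    unsigned month : 4;   // 1..12
    unsigned year  : 12;  // 0..4095
};

int main() {
    PackedDate d{2, 4, 2012};
    std::cout << d.year << "-" << d.month << "-" << d.day << '\n';  // 2012-4-2
    std::cout << sizeof(PackedDate) << " bytes\n";  // typically 4, but implementation-defined
}
```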
 
Besides, when data is "serialized" or "encoded", the nature of the original or destination machine's implementation or storage scheme is completely irrelevant. Sure, we use 8-bit and 16-bit "encodings", but those are really 256-value or 65,536-value encodings, or 1-byte vs. 2-byte. And the distinction would certainly be irrelevant if the underlying computer had 256-value or 65,536-value computing units.
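A minimal sketch, in standard C++, of that serialization point: a 16-bit code unit is written as two 256-value bytes, and nothing in the code cares whether the machine underneath is binary-based.

```cpp
// Serializing a 16-bit value as two 256-value bytes, written with division
// and remainder rather than bit shifts to make the radix-independence plain.
#include <cstdint>
#include <iostream>
#include <vector>

std::vector<std::uint8_t> encode_u16_big_endian(std::uint16_t value) {
    return { static_cast<std::uint8_t>(value / 256),    // high-order byte
             static_cast<std::uint8_t>(value % 256) };  // low-order byte
}

int main() {
    auto bytes = encode_u16_big_endian(0x20AC);  // U+20AC, the euro sign, in UTF-16BE
    std::cout << std::hex << static_cast<unsigned>(bytes[0]) << ' '
              << static_cast<unsigned>(bytes[1]) << '\n';  // 20 ac
}
```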
 
Granted, software designers working on character encoding schemes (or audio or other media encodings) do need to "lay out the bits", but very few people are doing that these days. It seems a supreme waste of time, energy, and resources to focus your average software professional on "the 1's and 0's of binary."
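For the few who do lay out bits, here is a minimal sketch (standard C++, using UTF-8's two-byte form as the example) of what that work looks like.

```cpp
// UTF-8's two-byte form is laid out as 110xxxxx 10xxxxxx.
#include <cstdint>
#include <cstdio>

void utf8_encode_two_byte(std::uint16_t cp) {        // valid for U+0080..U+07FF
    std::uint8_t lead  = 0xC0 | (cp >> 6);           // 110 prefix + top 5 bits
    std::uint8_t trail = 0x80 | (cp & 0x3F);         // 10 prefix + low 6 bits
    std::printf("%02X %02X\n", lead, trail);
}

int main() {
    utf8_encode_two_byte(0x00E9);  // U+00E9 'é' -> C3 A9
}
```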
 
My hunch is that "binary" and "1's and 0's" will stick with us until the underlying hardware implementation shifts from 1-bit binary units to hex-based or byte-based units (or even double-byte units), and then linger for maybe another 5 to 10 years after that transition, if not longer. After all, we still "dial" phone numbers even though it has probably been 25 years or more since any of us had a phone "dial" in front of us, and certainly the younger generations never had that experience.

-- Jack Krupansky

2 Comments:

At 12:05 PM MDT, Blogger Lee Devlin said...

Hi Jack,

I teach at the local college and I get students from all walks of life. When I teach C++, I have to talk about character encoding, which naturally leads to hexadecimal. You can't explain hexadecimal unless you show the layout of the bits. So I think binary is an important concept for anyone seeking to understand computers.
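For example, a minimal sketch in standard C++ of that classroom exercise: one character shown in decimal, in hex, and as the bit layout behind the hex digits.

```cpp
// A character's code in decimal, hex, and binary.
#include <bitset>
#include <iostream>

int main() {
    char c = 'A';
    std::cout << "decimal: " << static_cast<int>(c) << '\n'             // 65
              << "hex:     " << std::hex << static_cast<int>(c) << '\n' // 41
              << "binary:  " << std::bitset<8>(c) << '\n';              // 01000001
}
```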

I was thinking about how often I use hex, and the answer is quite often. For example, for a web designer to get a color that is 'just right' she may have to get its RGB in hex format so that it can be put into the CSS.
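A minimal sketch, again in standard C++ rather than CSS, of that color case: turning an RGB triple into the hex string that goes into the stylesheet.

```cpp
// Formatting decimal RGB components as a CSS hex color.
#include <cstdio>

int main() {
    int r = 70, g = 130, b = 180;                      // "steel blue"
    std::printf("color: #%02X%02X%02X;\n", r, g, b);   // color: #4682B4;
}
```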

When I first learned about base 5 and base 8 back in grade school, I was skeptical that it was something I'd ever use. Granted, those bases are not much used, but binary, hex, and decimal are quite commonly used and converted between each other in computer science. So I think it's important to understand where decimal comes from and how easy it would have been to have standardized on base 8 if, for example, humans had 4 fingers on each hand instead of 5.
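And a minimal sketch, in standard C++, of converting the same value among those three bases.

```cpp
// One value rendered in decimal, hex, and binary, then parsed back from binary.
#include <bitset>
#include <iostream>
#include <string>

int main() {
    unsigned n = 202;
    std::cout << std::dec << n << " = 0x" << std::hex << n
              << " = 0b" << std::bitset<8>(n) << '\n';  // 202 = 0xca = 0b11001010

    unsigned m = std::stoul("11001010", nullptr, 2);    // binary string back to a number
    std::cout << std::dec << m << '\n';                 // 202
}
```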

 
At 7:17 AM MDT, Anonymous Powerpoint Templates said...

Thanks for sharing the idea. There would be some apprehension from some segments, but I am up for it.

 
