"That was the worst throw ever. Of all time."
"Not my fault. Someone put a wall in my way."
The term byte was coined by Dr. Werner Buchholz in July 1956, during the early design phase for the IBM Stretch computer. It is a respelling of bite to avoid accidental mutation to bit.
The size of a byte was at first selected to be a multiple of existing teletypewriter codes, particularly the 6-bit codes used by the U.S. Army (Fieldata) and Navy. A number of early computers were designed for 6-bit codes, including SAGE, the CDC 1604, IBM 1401, and PDP-8. Historical IETF documents record varying byte sizes; RFC 608, for example, describes the byte size of FTP hosts as whatever size is most computationally efficient on a given hardware platform.
In 1963, to end the use of incompatible teleprinter codes by different branches of the U.S. government, ASCII, a 7-bit code, was adopted as a Federal Information Processing Standard, making 6-bit bytes commercially obsolete.
In the early 1960s, AT&T introduced digital telephony, first on long-distance trunk lines, using the 8-bit µ-law encoding. This large investment promised to reduce transmission costs for 8-bit data. At about the same time, IBM extended its 6-bit character code, BCD, to an 8-bit code, EBCDIC (Extended Binary Coded Decimal Interchange Code), in the System/360. The use of 8-bit codes for digital telephony also caused 8-bit data "octets" to be adopted as the basic data unit of the early Internet.
Since then, general-purpose computer designs have used 8-bit bytes, both to use standard memory parts and to interoperate with other systems, even though modern character sets have grown to require as many as 32 bits per character.
In the early 1970s, microprocessors such as the Intel 8008 (the direct predecessor of the 8080, and then of the 8086 used in early PCs) could perform a small number of operations on four-bit quantities, such as the DAA (decimal adjust) instruction and the half-carry flag, which were used to implement decimal arithmetic routines. These four-bit quantities were called nibbles, in homage to the then-common 8-bit bytes.
...