1. The representation of symbols in a source alphabet by strings of binary digits, i.e. a binary code. The most commonly occurring source alphabet consists of the set of alphanumeric characters. See code.
2. The encoding of a number into a binary string in which the bit in position i (counting from zero at the least significant end) carries weight 2^i. For example, 13 is represented by 1101, since 13 = 2^3 + 2^2 + 2^0. This encoding of natural numbers can be extended to cover signed integers and fractions. See also radix complement, fixed-point notation, floating-point notation.
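The positional scheme described in sense 2 can be sketched as a pair of conversion routines; this is a minimal illustration, not part of the dictionary entry:

```python
def to_binary(n: int) -> str:
    # Repeated division by 2 yields the bits from least significant upward.
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))
        n //= 2
    return "".join(reversed(bits))

def from_binary(s: str) -> int:
    # The bit in position i (counting from zero at the right) carries weight 2**i.
    return sum(int(b) << i for i, b in enumerate(reversed(s)))
```

For instance, to_binary(13) produces "1101", and from_binary recovers 13 from that string.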
3. of a set A. Any assignment of distinctive bit strings to the elements of A. See also character encoding, Huffman encoding.
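One simple way to realize sense 3 is a fixed-length code that enumerates the elements of A; the helper below is an illustrative sketch (the function name is ours, not from the entry):

```python
from math import ceil, log2

def fixed_length_code(elements):
    # Assign each element of A a distinct bit string of length
    # ceil(log2(|A|)) by writing its index in binary, zero-padded.
    width = max(1, ceil(log2(len(elements))))
    return {e: format(i, f"0{width}b") for i, e in enumerate(elements)}
```

Variable-length schemes such as Huffman encoding instead give shorter strings to more frequent elements, but any assignment of distinctive bit strings qualifies under this sense.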
"binary encoding." A Dictionary of Computing. . Encyclopedia.com. (August 17, 2018). http://www.encyclopedia.com/computing/dictionaries-thesauruses-pictures-and-press-releases/binary-encoding
"binary encoding." A Dictionary of Computing. . Retrieved August 17, 2018 from Encyclopedia.com: http://www.encyclopedia.com/computing/dictionaries-thesauruses-pictures-and-press-releases/binary-encoding