Unicode

The objective of this book is to provide a reference for the Unicode encoding and everything related to the Unicode specification.

This book exists because the Unicode reference articles were removed from Wikipedia and Wikisource, yet the standard is used throughout information technology, so a reference remains necessary.

Introduction

Unicode is an industry standard whose goal is to provide the means by which text of all forms and languages can be encoded for use by computers through a single character set. Originally, text was represented in computers using byte-wide data: each printable character (and many non-printing, or "control", characters) was assigned a single byte, which allowed for 256 characters in total. However, globalization has created a need for computers to accommodate many different alphabets (and other writing systems) from around the world in an interchangeable way.
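To make that limitation concrete, here is a minimal sketch, assuming a Python 3 interpreter: a one-byte encoding such as Latin-1 offers only 256 code values, so any character outside its repertoire simply cannot be represented.

 # A single byte has 256 possible values (0-255), so a one-byte
 # encoding such as Latin-1 can represent at most 256 characters.
 print("é".encode("latin-1"))    # b'\xe9' -- one byte is enough
 try:
     "猫".encode("latin-1")      # a CJK character has no Latin-1 byte
 except UnicodeEncodeError as exc:
     print(exc)                  # the codec cannot encode this character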

Older encodings such as ASCII and EBCDIC were clearly not capable of handling all the characters and alphabets in use around the world. The solution was to create a set of "wide" 16-bit characters that would theoretically be able to accommodate most international language characters. This new character set was first known as the Universal Character Set (UCS) and was later standardized as Unicode. However, after the first versions of the Unicode standard it became clear that 65,536 (2^16) code points would still not be enough to represent every character from all scripts in existence, so the standard was amended to add sixteen supplementary planes of 65,536 code points each, bringing the total number of representable code points to 1,114,112. To date, only a small fraction of that space is in use.
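The arithmetic above can be checked with a short sketch, again assuming a Python 3 interpreter: the Basic Multilingual Plane plus 16 supplementary planes of 65,536 code points each give 1,114,112 code points in total, and the plane of any code point is its value divided by 65,536.

 import sys

 # 1 Basic Multilingual Plane + 16 supplementary planes,
 # each holding 65,536 code points.
 PLANES = 1 + 16
 CODE_POINTS_PER_PLANE = 65_536
 print(PLANES * CODE_POINTS_PER_PLANE)                         # 1114112
 print(sys.maxunicode + 1 == PLANES * CODE_POINTS_PER_PLANE)   # True

 # The plane of a code point is its value divided by 65,536.
 for ch in ("A", "é", "猫", "😀"):
     cp = ord(ch)
     print(f"U+{cp:04X} ({ch}) is in plane {cp // CODE_POINTS_PER_PLANE}")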

Table of Contents

Links

The Unicode® Standard (Q8819)