Download Information Theory / Data Compression [Lecture notes] by Jürgen Bierbrauer PDF

By Jürgen Bierbrauer

Downloaded from http://www.math.mtu.edu/~jbierbra/HOMEZEUGS/Infotheorytext.ps
version 28 Feb 2007



Best information theory books

Information theory: structural models for qualitative data

Krippendorff introduces social scientists to information theory and explains its application to structural modeling. He discusses key topics such as: how to determine an information theory model; its use in exploratory research; and how it compares with other approaches such as network analysis, path analysis, chi-square, and analysis of variance.

Ours To Hack and To Own: The Rise of Platform Cooperativism, a New Vision for the Future of Work and a Fairer Internet

The on-demand economy is reversing the rights and protections workers fought for centuries to win. Ordinary Internet users, meanwhile, retain little control over their personal data. While promising to be the great equalizers, online platforms have often exacerbated social inequalities. Can the Internet be owned and governed differently?

Additional resources for Information Theory / Data Compression [Lecture notes]

Sample text

A COV(ρ, N, n) exists if and only if the space of all bitstrings of length N can be partitioned into 2^n covering codes of radius ρ. The advantage of this characterization is that it makes it possible to use coding theory, a highly developed discipline. For example, the COV(1, 7, 3) that we used as an illustration is based on a famous code, the Hamming code, which is in fact a member of a large family of codes. To give just one further example, the single most famous code, the binary Golay code, is a COV(3, 23, 11). We want to use Shannon entropy to obtain a bound on the possible parameters of covering functions and covering codes.
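The Hamming example can be checked directly by brute force. The sketch below (a hypothetical illustration, using one common choice of generator matrix for the [7,4] Hamming code) enumerates the 16 codewords and verifies that every bitstring of length 7 lies within Hamming distance 1 of some codeword — the covering-radius-1 property underlying the COV(1, 7, 3).

```python
from itertools import product

# One common generator matrix for the [7,4] Hamming code
# (systematic form: 4 data bits followed by 3 parity bits).
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    # Codeword bit j is the mod-2 dot product of msg with column j of G.
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2
                 for col in zip(*G))

codewords = {encode(m) for m in product([0, 1], repeat=4)}
assert len(codewords) == 16  # 2^4 messages, all distinct

def dist(u, v):
    return sum(a != b for a, b in zip(u, v))

# Covering radius: the largest distance from any length-7 bitstring
# to its nearest codeword. Radius 1 means the 2^3 = 8 cosets of the
# code partition {0,1}^7 into covering codes of radius 1.
radius = max(min(dist(v, c) for c in codewords)
             for v in product([0, 1], repeat=7))
print(radius)  # 1
```

Since the Hamming code is perfect, the radius-1 balls around its codewords tile the whole space exactly, which is why the partition into 8 cosets works with no overlap.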

In the case at hand the entry in the compressed text is 17, indicating that the letter v, which happened to be correct, is the 17th most likely letter, considering all texts starting as above. We would have to conduct this experiment with, say, 100 texts of length 15. Let a_i be the number of texts such that the compressed text has i as its last (15th) entry. Then i ≤ 27 and a_i/100 is the approximation to q_i that the experiment produces. This is what Shannon did. In order to get an idea what F_15 may be, it needs to be bounded from above and below by expressions involving the probabilities q_i.
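As a small illustration (the counts a_i below are invented for the example, not Shannon's data), rank counts of this kind can be turned into empirical probabilities q_i, and the entropy of that rank distribution is the sort of expression in the q_i that enters such bounds on F_15:

```python
from math import log2

# Hypothetical rank counts from 100 texts of length 15:
# a[i-1] = number of texts whose last (15th) entry in the
# compressed text, i.e. the rank of the correct letter, equals i.
a = [38, 18, 11, 8, 6, 5, 4, 3, 2, 2, 1, 1, 1]  # sums to 100

q = [ai / 100 for ai in a]  # empirical approximations to q_i
assert abs(sum(q) - 1.0) < 1e-9

# Entropy of the empirical rank distribution, in bits per letter —
# an expression in the q_i of the kind used to bound F_15.
H_rank = -sum(qi * log2(qi) for qi in q if qi > 0)
print(round(H_rank, 3))
```

With real experimental counts, one would plug the resulting q_i into both an upper and a lower estimate to sandwich F_15.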

How does this compare to H(X) = h(x)? We have that x(1 − p) + (1 − x)p is a convex combination of x and 1 − x. Recall that in general a convex combination of a and b is an expression f(t) = (1 − t)a + tb, where 0 ≤ t ≤ 1. Typically one thinks of a particle which at time t = 0 is at point f(0) = a and at time t = 1 is at f(1) = b. As t increases the particle moves on the line from a to b. We use p as the time parameter (although it does not change) and see that y = x(1 − p) + (1 − x)p is between x and 1 − x.
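The claim that y = x(1 − p) + (1 − x)p always lies between x and 1 − x can be checked numerically; a minimal sketch over a grid of values:

```python
# Verify that y = x(1-p) + (1-x)p is a convex combination of
# x and 1-x, i.e. lies between them, for x and p on a grid in [0, 1].
steps = 50
for i in range(steps + 1):
    x = i / steps
    lo, hi = min(x, 1 - x), max(x, 1 - x)
    for j in range(steps + 1):
        p = j / steps
        y = x * (1 - p) + (1 - x) * p
        assert lo - 1e-12 <= y <= hi + 1e-12
print("ok")
```

The check succeeds because y = x + p(1 − 2x) is linear in p, moving from x at p = 0 to 1 − x at p = 1, exactly the particle picture above.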

