# The Harrington Compression Method (HCM) White Paper - Theory


1. ## Re: The Harrington Compression Method (HCM) White Paper.

Jym, there are always a few odd carrots in a pile of standard carrots,
yes? Ones with cracks, ones too small, ones with really long roots,
etc.

This is statistics in action

I use the word statistics since there is no "perfect number". Things
are not going to be reduced by, say, 1.241578823191231236% all the time.
This is why I use the words statistically and probability frequently.
If I do not, some mathematicians will tear me a new one with useless
posts that divert from the reality of the entirety by challenging the
notion of "how can that be always true?"

Therefore we can get a range of variation.

Also the decoding is built in.

File #1
11001

File #2
101

File #3
01

Decoding starts with the first bit in file #1, which becomes the
source bit in the new decompressed file. Because that bit is a 1, the
next place to look is file #2; take the 1 in file #2 and add it to the
source file. Since that is also a 1, we look to file #3 for the next
bit and add it to the decompressed file.

Decompress:
110

File #1
1001

File #2
01

File #3
1

The next bit in file #1 is a 1, so we add it to the decompressed
output; because it was a 1 we then read from file #2, and its 0 gets
added as well.

Decompress:
11010

File #1
001

File #2
1

File #3
1

I dare say you can complete the run now.
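The walkthrough above can be sketched in code. This is a reader's reconstruction, not the author's program: the rule that a 1 sends the reader on to the next file while a 0 (or reaching file #3) sends it back to file #1 is inferred from the example, not stated explicitly.

```python
def hcm_decode(file1: str, file2: str, file3: str) -> str:
    """Interleaved decode inferred from the walkthrough (a sketch,
    not the author's actual code)."""
    files = [list(file1), list(file2), list(file3)]
    out, idx = [], 0              # start reading from file #1
    while any(files):
        if not files[idx]:        # current file exhausted: fall back
            idx = next(i for i, f in enumerate(files) if f)
        bit = files[idx].pop(0)
        out.append(bit)
        # a 1 advances to the next file; a 0 (or file #3) returns to file #1
        idx = idx + 1 if bit == "1" and idx < 2 else 0
    return "".join(out)

print(hcm_decode("11001", "101", "01")[:5])  # prints 11010, matching the walkthrough
```

Under this inferred rule the three example files decode to a 10-bit string whose first five bits are the 11010 shown in the post.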

2. ## Re: The Harrington Compression Method (HCM) White Paper.

Let me break this down into a real simple example

64 bits goes in.

Stage 1 increases it to 72 bits, statistically.

Stage 2 puts it into files; we then run file #1 through again.

File #1 had 32 bits when put in, now it has 36 and is split into 3
subfiles.

1.1 has 16 bits
1.2 has 12 bits
1.3 has 8 bits

Now the ratios:
File 1.1
93.75% to 6.25%.

File 1.2
80% to 20%

File 1.3
75% to 25%

File 1.1 comes out with a statistical 9.47 bits remaining.

File 1.2 comes out with a statistical 9.36 bits remaining.

File 1.3 comes out with a statistical 6.75 bits remaining.

This is 25.58 bits.

Compare this to the 28.44 bits that it would normally hold

Since this is repeatable upon any file (1.1, 1.2, 1.3, 2, 3 currently
in existence in this example), this means we can compress ALL files of
sufficient size.

If file #1, after being run through the system again, is always
smaller than what it went in as, no matter what, then we have proven
compression of entropic data.

Let me stress that again.

So long as 1.1, 1.2, and 1.3 together are smaller than file #1 was to
begin with, we have compression of random binary data.

I will stand by this to the end of my days, and be right always!
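For readers who want to sanity-check numbers like these, the Shannon entropy gives a lower bound on how few bits any lossless code can use for a biased bit source. This check is not part of the post; it is a standard reference calculation applied to the ratios and sizes quoted above.

```python
from math import log2

def entropy_bits(p_one: float, n_bits: int) -> float:
    """Shannon entropy lower bound, in bits, for n_bits of IID data
    where a 1 occurs with probability p_one."""
    p_zero = 1.0 - p_one
    h = -(p_one * log2(p_one) + p_zero * log2(p_zero))
    return h * n_bits

# The three sub-files from the post (claimed 1-ratios and sizes):
for name, p, n in [("1.1", 0.0625, 16), ("1.2", 0.20, 12), ("1.3", 0.25, 8)]:
    print(f"File {name}: entropy lower bound = {entropy_bits(p, n):.2f} bits")
```

These bounds come out to about 5.40, 8.66, and 6.49 bits, but they apply only to each sub-file in isolation; as the replies in this thread point out, the accounting omits the cost of recording which file each bit belongs to.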

3. ## Re: The Harrington Compression Method (HCM) White Paper.

On Aug 26, 10:17 pm, Einstein <michae...@gmail.com> wrote:
....

> So long as 1.1, 1.2, 1.3 is smaller than file #1 was to begin with, we
> have compression of random binary data.
>
> I will stand by this to the end of my days, and be right always!

Not sure why you can't see the obvious. You can't count the sum
of the 3 files. You need to count after you combine them, or else
something as trivial as what follows would be considered compression:

Take a very long random bit string, at least a million bits. Go to the
third occurrence of 01. Let file one be every bit up to that 01, but
not including it. Let file 2 be the next set of bits up to the sixth
occurrence of 01, but not including the 01. Let file 3 be the rest.

Guess what, I saved 4 bits. By your logic it was compression. But it's
not.
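The trick Scott describes is easy to make concrete. A sketch (the split points follow his description; the 4 "saved" bits are just the two deleted 01 markers, whose positions have moved into the file boundaries):

```python
import random

random.seed(1)
bits = "".join(random.choice("01") for _ in range(1_000_000))

def split_at_nth(s: str, pattern: str, n: int):
    """Split s at the n-th occurrence of pattern, dropping that occurrence."""
    pos = -1
    for _ in range(n):
        pos = s.index(pattern, pos + 1)
    return s[:pos], s[pos + len(pattern):]

file1, rest = split_at_nth(bits, "01", 3)      # up to the third 01
file2, file3 = split_at_nth(rest, "01", 6)     # up to the sixth 01 in the rest

# Rejoining with "01" between the files recovers the original exactly,
# so the scheme is reversible -- given the file boundaries.
assert file1 + "01" + file2 + "01" + file3 == bits

saved = len(bits) - (len(file1) + len(file2) + len(file3))
print(saved)  # 4
```

By the sum-of-the-files accounting, this "compresses" every sufficiently long random string by 4 bits; in reality the information has moved into the file boundaries, which is why the combined, delimited output is what must be counted.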

David A. Scott
--
My Crypto code
http://bijective.dogma.net/crypto/scott19u.zip
http://www.jim.com/jamesd/Kong/scott19u.zip old version
My Compression code http://bijective.dogma.net/
**TO EMAIL ME drop the roman "five" **
Disclaimer:I am in no way responsible for any of the statements
made in the above text. For all I know I might be drugged.
As a famous person once said, "any cryptographic
system is only as strong as its weakest link"


4. ## Re: The Harrington Compression Method (HCM) White Paper.

You're missing the entirety of the system.

Since my system essentially self-places information in the files, it
is impossible to get errors backward or forward. There is no duality
or issue with two becoming one. Each result IS UNIQUE.

The different files are appended later, with a cost, yes, to each
other, into a single string in the end...

A single string which if the original source data was large enough,
will ALWAYS be compressed.

I am going to stress this to you

If the string is long enough, compression ALWAYS can occur.

Then I am going to stress this to you

THOSE FILES ARE TEMPORARY AND THE RESULTS END UP BEING ONE STRING
WHICH IS SMALLER THAN THE ORIGINAL STRING.

Get past your biases, and just run the numbers as my post about file
#1 shows. It shows a perfect ability to compress data. Get past it, man!

5. ## Re: The Harrington Compression Method (HCM) White Paper.

Enjoy the emotions you are experiencing now, believing you have found
the algorithm that will change our information age. Very soon you will
experience the deepest disillusionment with your theories.

Only one thing is missing to keep you from stumbling from one
embarrassment to the next: being a programmer. Only then will you be
able to put your theories fully into practice, and head off your
disappointments.

Being wrong is part of our evolution, but being stubborn is part of
our personality.

Good luck.

6. ## Re: The Harrington Compression Method (HCM) White Paper.

My thanks, if I understand it correctly. Spanish is not my strongest
language.

I believe I can clear up the misunderstanding shortly. Your arguments
are circular. I should be able to write something up by tonight, while
at work, that I hope will help you understand better.

Good luck to you.

7. ## Re: The Harrington Compression Method (HCM) White Paper.

Einstein schrieb:
> Thomas you are trying to divert the actual make up.
>
>
> It's a simple concept, a 50/50%, a 25/25/25/25%, a
> 12.5/12.5/12.5/12.5/12.5/12.5/12.5/12.5%, and so forth situation can
> be made into a:
>
> Three files with a 75/25% a 67/33% and a 50/50%.

All fine, but then again, for each data item you need to index which
file to take the next item from. For that, you also need some bits. For
example, let's split things up such that we write the (00) combinations
into file A, (01) and (10) into file B, and (11) into file C, making
this, for IID data, two files containing on average 25% of the data
each, and one file containing the other 50%. That you use a Huffman
code in advance doesn't matter; the output of the ideal Huffman code
for a given IID source will be sufficiently close to IID. Then,
obviously, we can simply strip file A and file C completely, since they
are all zero or all one, and only need to keep file B (whoa, what a
compression, you might think!). We can even do better in file B by
encoding a 01 as 0 and a 10 as 1, making things even shorter. But we
also need to identify, in a separate file (or the filing system), which
file to read the next bits from on decompression. Obviously, this
information is an IID source on a three-letter alphabet A, B, C with
probabilities p(A) = p(C) = 0.25 and p(B) = 0.5. Thus, an optimal
Huffman code for this would be

B -> 1
A -> 01
C -> 00

The average length of the output per symbol is then

1 * 0.5 + 2 * 0.25 + 2 * 0.25 = 0.5 + 0.5 + 0.5 = 1.5 bits per symbol
for this side-channel. For the overall file, you need to encode for:

00: nothing in file A, two bits in the side information: 2 bits
11: nothing in file C, two bits in the side information: 2 bits
01: one bit in file B, one bit of side information:      2 bits
10: one bit in file B, one bit of side information:      2 bits

Thus, you modified your input of 1 bit per symbol into an output using
two bits per two symbols.
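That accounting is easy to check empirically with a quick simulation (a sketch using the A/B/C split and the Huffman side-channel code given above; the input is uniform random pairs):

```python
import random

random.seed(0)
pairs = ["".join(random.choice("01") for _ in range(2)) for _ in range(100_000)]

SIDE_CODE = {"A": "01", "B": "1", "C": "00"}   # Huffman code from the post

total_bits = 0
for pair in pairs:
    if pair == "00":      # goes to file A: contents stripped (all zero)
        total_bits += len(SIDE_CODE["A"])
    elif pair == "11":    # goes to file C: contents stripped (all one)
        total_bits += len(SIDE_CODE["C"])
    else:                 # file B: 01 -> 0, 10 -> 1 (one content bit kept)
        total_bits += 1 + len(SIDE_CODE["B"])

print(total_bits / (2 * len(pairs)))  # prints 1.0: one bit out per bit in
```

Every branch costs exactly 2 bits per 2-bit pair, so the split plus its side channel comes out to exactly 1 bit per input bit: no net gain.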

Hubba hub, Mr. Marx!

8. ## Re: The Harrington Compression Method (HCM) White Paper.

Einstein wrote:
) 10 00 10 10 01 01 11 01 01 01 11 10 01 01 00 00 11 10 00 11 00 10 00
) 10 00 00 00 11 01 11 11 0
)
) First I divided it into chunks of 2 bits. I found a missing bit, so I
) add a 1 to the end (Command section identifies the missing bit by
) using a bit to say "add a bit to the end")

Yeah sorry, there should have been a 0 at the beginning but the software
cut the leading 0's. I should have checked. But anyways, this will work
just as well, so no real problem there.

NB: I got these from http://www.fourmilab.ch/hotbits/ who claim to be a
genuine source of random numbers.

) <snip>
)
) Note that statistically, since we do have command section issues, we
) have obtained compression, even after adding two bits to the whole.

What do you mean by 'statistically' ?

) Yes the command section requires knowing which swaps we will do per
) file, such as in file #1.2 we swap 00 for 10 in our make up so that it
) has a value length of 2 bits., and 01 in file #1.3 has the same
) happening.
)
) This has 63 bits + command section, for 64 bits. Now the command
) sections will come out to less than 350 bits total. I can even say a
) rounded 500 bits if you desire. Therefore if this statistical sample
) was 31500 bits or more we would have factual compression after the
) command section.

So, what you're saying, roughly, is that by showing that 64 bits can be
compressed to 62 bits plus a 'command section', you have proven that
64000 bits can be compressed to 62000 bits plus a 'command section' ?
And then you claim that the 'command section' has a maximum size.

You're wrong.

The big error you're making here is to assume that your measurement
of compression on a small file will scale up to keep the same ratio
for the bigger files. I see no proof or evidence of this.

It looks like all the compression gain you get is by making a few
choices along the way (such as the swapping of huffman table values)

But those choices give you a *constant* gain, not a proportional one.
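The constant-versus-proportional point is simple arithmetic: a fixed k-bit saving shrinks toward a 0% compression ratio as the input grows (the sizes below are illustrative, not taken from the thread):

```python
def fraction_saved(n_bits: int, k_saved: int = 2) -> float:
    """Fraction of an n-bit input saved by a constant k-bit gain."""
    return k_saved / n_bits

for n in (64, 64_000, 64_000_000):
    print(f"{n:>10} bits: {fraction_saved(n):.7%} saved")
```

A 2-bit gain is 3.125% of a 64-bit input but a vanishing fraction of a megabit, so a constant gain can never deliver a fixed compression ratio on arbitrarily large files.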

SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT

9. ## Re: The Harrington Compression Method (HCM) White Paper.

Einstein wrote:
> You're missing the entirety of the system.

> get past your biases, and just run the numbers as my post about file
> #1 shows. It shows a perfect ability to compress data. Get past it man!

Shame, shame, shame on you.

You have already gone through this in the past. When making claims,
the burden is on you to prove what you claim, not on others.

You claimed to have tested your idea. But so far you have not:

1) Shown all the steps used to compress the data. All the steps
include all the control codes/bits. Instead you make claims about
what the expected number of bits will be. DON'T CLAIM IT, SHOW
IT.

2) You have not shown what the final compressed file will look like,
just claims about what you expect it to be. DON'T CLAIM IT, SHOW IT.

3) And most important, shown that the decompression stage works by
using it on the compressed data and then getting the original data
back. Every con-artist and misled compression expert always seems to
mess up at this stage of the game.

So Einstein if you want people to believe you have something
worthwhile, why are you avoiding doing everything from start to finish?

10. ## Re: The Harrington Compression Method (HCM) White Paper.

On 26 Aug, 17:14, Willem <wil...@stack.nl> wrote:
> Einstein wrote:
>
> ) Introduction:
> ) This is a lossless compression method which WILL work on random binary
> ) data and data considered Entropic.
>
> <snip>
> I'll jump right to the blatant error.
>
> ) File 1.1 = 1.05 bits
> ) File 1.2 = 1.04 bits
> ) File 1.3 = .75 bits
> ) File 2 = 2.83 bits
> ) File 3 = 2 bits
> )
> ) Statistically speaking there are 7.67 bits for every original 8 bits.
>
> And you have 5 files instead of 1.  This represents extra information.
> So much extra that it easily accounts for the 0.33 bits/byte you 'gained'..
>
> ) 5)Command Section
> ) We now need a command section to handle all the different files,
> ) changes, etc. Each file will have a specific number of bits. This can
> ) be easily represented with a simplistic counting system allowing for
> ) maximum space savings. If < 1 kilobyte then 00, if less than 1
> ) megabyte then 01, if less than 1 gigabyte then 10, if greater than 1
> ) gigabyte then 11.
>
> I note that you fail to calculate how many bits this command section will
> take, statistically.
>
> ) The claim of the pigeon hole problem naturally arises at this point.
> ) However, all the steps are reversible and lead IMMEDIATELY back to the
> ) same original code. No two results will be exactly the same. It is
> ) mathematically not possible. The HCM system completely leads to the
> ) data as inputted originally.
>
> Note that the counting theorem has been proven true.

Or the bounds of the system used to prove the 'argument' have been
exceeded, by such a simplistic assumption that all compression must
work by direct mapping of the 2^n possible states of n bits.
Nothing actually said anything about a mapping from one of 2^n states
to another of 2^n states being in any way informative. No mutual
information calculation has been shown. Reductio-ad-OBSERDum has the
right name. Apart from this you are most probably right.

cheers
jacko
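For reference, the "counting theorem" invoked above is a finite pigeonhole fact, independent of any mutual-information argument, and can be verified directly for small sizes (a standard argument, not specific to this thread):

```python
def shorter_strings(n: int) -> int:
    """Number of binary strings strictly shorter than n bits,
    including the empty string: 1 + 2 + 4 + ... + 2^(n-1)."""
    return 2 ** n - 1

n = 8
inputs = 2 ** n                    # distinct n-bit inputs
outputs = shorter_strings(n)       # distinct shorter outputs available
print(inputs, outputs)             # prints 256 255

# A lossless compressor must be injective (distinct inputs map to
# distinct outputs), so at least one n-bit input cannot map to a
# strictly shorter output.
assert outputs < inputs
```

Injectivity alone rules out any scheme that shortens every input of a given length; the shortfall is only one string per length, but it is never zero.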