Compress a signal by storing signal diff instead of actual samples - is there such a thing?



I am working with EMG signals sampled at 2 kHz and 16 bits, and noticed that they "look smooth", that is, the signals appear differentiable, and if I apply a "diff" function (numpy.diff in my case) the magnitude of the resulting values is considerably lower than that of the actual samples.



So I am considering doing something like:

  • Split the signal into chunks of a given size;

  • For each chunk, using a variable-length quantity encoding (or similar), create a byte list and:

    • For the first sample of the chunk, add its absolute value;

    • For the remaining samples of the chunk, add their difference relative to the previous value;


This way, the smoother the signal and the closer it is to the baseline, the more I expect to reduce the byte size of each chunk, by reducing the individual byte size of a large share of the samples.
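A minimal Python sketch of what I mean (the 256-sample chunks, the zigzag sign mapping and the LEB128-style variable-length bytes are just one arbitrary choice):

```python
import numpy as np

def zigzag(v: int) -> int:
    # Map signed to unsigned so that small |v| gives a small number: 0,-1,1,-2,2 -> 0,1,2,3,4
    return 2 * v if v >= 0 else -2 * v - 1

def vlq(u: int) -> bytes:
    # LEB128-style variable-length quantity: 7 data bits per byte, high bit = "more bytes follow"
    out = bytearray()
    while True:
        b = u & 0x7F
        u >>= 7
        out.append(b | (0x80 if u else 0))
        if not u:
            return bytes(out)

def encode_chunk(chunk: np.ndarray) -> bytes:
    # First sample stored as-is (zigzag + VLQ), the rest as differences to the previous sample
    out = bytearray(vlq(zigzag(int(chunk[0]))))
    for d in np.diff(chunk.astype(np.int64)):
        out += vlq(zigzag(int(d)))
    return bytes(out)

# 16-bit samples would normally take 2 bytes each; smooth chunks should shrink considerably
x = np.round(20000 * np.sin(2 * np.pi * 5 * np.arange(2000) / 2000)).astype(np.int16)
chunks = [x[i:i + 256] for i in range(0, len(x), 256)]
encoded = [encode_chunk(c) for c in chunks]
print(len(x) * 2, "bytes raw ->", sum(len(e) for e in encoded), "bytes encoded")
```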



Although I suspect this would improve things for me, I also suspect that this is nothing new, that perhaps it has a proper name, and that there are more elegant/efficient ways to implement it.



So the question is: what is the name of this compression technique, and what are its alternatives and/or variants?







discrete-signals digital-communications sampling compression






asked Apr 5 at 18:58 – heltonbiker







  • See en.wikipedia.org/wiki/… – MBaz, Apr 5 at 19:03

  • @MBaz I think your comment contains the correct answer. If you write it down I would most probably accept it. Thanks for now! – heltonbiker, Apr 5 at 19:19

  • BTW: this is also done in image compression, in the PNG format, line by line (only that for each line you can choose between using the difference with respect to the pixel to the left or the one above, or two other predictions - or none of them); the standard calls this "filtering", but it's actually a typical "predict and code the prediction error" scheme, of which your technique is a basic case: en.wikipedia.org/wiki/Portable_Network_Graphics#Filtering – leonbloy, Apr 6 at 17:19













3 Answers
Another notion you might want to look into for lossless compression of a bandlimited signal (it's this bandlimiting that gets you the "smoother ... signal, ... closer ... to the baseline") is Linear Predictive Coding (LPC).

I believe it is historically correct that LPC was first used as a variant of delta coding, where the LPC algorithm predicts $\hat{x}[n]$ from the set of samples $x[n-1], x[n-2], \ldots, x[n-N]$. If the prediction is good, then the real $x[n]$ is not far off from the prediction $\hat{x}[n]$, and you need to store only the delta $x[n]-\hat{x}[n]$, which is smaller in magnitude, so a smaller word width might be sufficient. You would need to store the LPC coefficients for each block, but there are usually no more than a dozen or so of these.

This stored difference value can be compressed further using something like Huffman coding, in which case you would need to either store the "codebook" along with the compressed data or have some kind of codebook standardized so that both transmitter and receiver know it.

I think it's some combination of LPC and Huffman coding that is used by various lossless audio formats. Maybe there is some perceptual stuff used too, to get almost-lossless compression.
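For a concrete (if simplified) numpy sketch of block-wise LPC, fitting the coefficients by least squares and keeping only the rounded prediction residual (the function name and test signal here are purely illustrative):

```python
import numpy as np

def lpc_block_residual(block, order=4):
    """Fit LPC coefficients to one block by least squares and return
    (coefficients, rounded prediction residuals). A real lossless codec would
    quantize the coefficients and predict in integer arithmetic so the decoder
    can reproduce the prediction bit-exactly."""
    x = np.asarray(block, dtype=float)
    # Regression matrix: the row for sample n holds x[n-1], x[n-2], ..., x[n-order]
    X = np.column_stack([x[order - k : len(x) - k] for k in range(1, order + 1)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = np.round(y - X @ a).astype(np.int64)
    return a, residual  # plus the first `order` samples, stored verbatim

# The residuals are typically far smaller than the samples themselves, so they
# need fewer bits once entropy coded (Huffman, Rice, ...).
x = np.round(10000 * np.sin(2 * np.pi * 5 * np.arange(2000) / 2000)).astype(np.int64)
coeffs, res = lpc_block_residual(x, order=4)
print("max |sample|  :", np.abs(x).max())
print("max |residual|:", np.abs(res).max())
```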







answered Apr 5 at 21:41 – robert bristow-johnson





















You can also think of delta encoding as linear predictive coding (LPC) where only the prediction residual ($x[n]-\hat{x}[n]$ in @robertbristow-johnson's notation) is stored and the predictor of the current sample is the previous sample. This is a fixed linear predictor (not with arbitrary coefficients optimized to data) that can exactly predict constant signals. Run the same linear predictive coding again on the residuals, and you have exactly predicted linear signals. Next round, quadratic signals. Or run a higher-order fixed predictor once to do the same.

Such fixed predictors are listed in Tony Robinson's SHORTEN technical report, yours in Eq. 4, and are also included in the FLAC lossless audio codec although not often used. Calculating the best prediction coefficients for each data block and storing them in a header of the compressed block results in better compression than the use of fixed predictors.

The linear predictor is supposed to do the whitening, making the residuals independent. In lossless compression, what is left to do is to entropy code the residuals, instead of using run-length or other symbol-based encoding that doesn't work so well on noisy signals. Typically, entropy coding assigns longer code words to large residuals, approximately minimizing the mean encoding length for an assumed distribution of the residual values. A Rice code (also known as Golomb–Rice code or GR code) variant compatible with signed numbers is typically used, as is done in FLAC (Table 1). The Rice code has a distribution parameter that needs to be optimized for the data block and saved in the block header.

Table 1. Binary codewords of 4-bit signed integers encoded in Rice code with different Rice code parameter $p$ values, using FLAC__bitwriter_write_rice_signed (source code). This variant of Rice code is a bit wasteful in the sense that not all binary strings are recognized as a codeword.
$\begin{array}{r|llll}
 & p=0 & p=1 & p=2 & p=3 \\
\hline
-8 & 000000000000001 & 000000010 & 000110 & 01110 \\
-7 & 0000000000001 & 00000010 & 000100 & 01100 \\
-6 & 00000000001 & 0000010 & 00110 & 01010 \\
-5 & 000000001 & 000010 & 00100 & 01000 \\
-4 & 0000001 & 00010 & 0110 & 1110 \\
-3 & 00001 & 0010 & 0100 & 1100 \\
-2 & 001 & 010 & 110 & 1010 \\
-1 & 1 & 10 & 100 & 1000 \\
0 & 01 & 11 & 101 & 1001 \\
1 & 0001 & 011 & 111 & 1011 \\
2 & 000001 & 0011 & 0101 & 1101 \\
3 & 00000001 & 00011 & 0111 & 1111 \\
4 & 0000000001 & 000011 & 00101 & 01001 \\
5 & 000000000001 & 0000011 & 00111 & 01011 \\
6 & 00000000000001 & 00000011 & 000101 & 01101 \\
7 & 0000000000000001 & 000000011 & 000111 & 01111
\end{array}$
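As a rough illustration of the fixed-predictor-plus-Rice-code idea (a generic zigzag-mapped Rice coder in Python, not the exact bit layout of Table 1 or of FLAC):

```python
import numpy as np

def zigzag(values):
    # Map signed residuals to non-negative integers: 0,-1,1,-2,2,... -> 0,1,2,3,4,...
    v = np.asarray(values, dtype=np.int64)
    return np.where(v >= 0, 2 * v, -2 * v - 1)

def rice_encode(values, p):
    # Generic Rice code: unary-coded quotient ("0"*q + "1") followed by p remainder bits.
    out = []
    for u in zigzag(values):
        q, r = int(u) >> p, int(u) & ((1 << p) - 1)
        out.append("0" * q + "1" + (format(r, "b").zfill(p) if p else ""))
    return out

# Fixed predictors of increasing order are just repeated differencing of the samples.
x = np.round(1000 * np.sin(2 * np.pi * 3 * np.arange(512) / 512)).astype(np.int64)
for order in range(4):
    res = np.diff(x, n=order) if order else x
    bits = sum(len(code) for code in rice_encode(res, p=2))
    print(f"fixed predictor order {order}: mean |residual| = {np.abs(res).mean():7.1f}, "
          f"Rice-coded size = {bits} bits")
```

For a smooth signal like this, the coded size drops sharply after the first one or two differencing passes; in practice the Rice parameter $p$ would be re-optimized per block.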







answered Apr 5 at 20:16 (edited 20 hours ago) – Olli Niemitalo











  • Similar to your suggestion, Subband ADPCM would possibly be the best choice... – Fat32, Apr 5 at 21:08
















That's used a lot. See for example https://en.wikipedia.org/wiki/Delta_encoding and https://en.wikipedia.org/wiki/Run-length_encoding.

"Looking smooth" typically means "not a lot of high-frequency content". The easiest way to take advantage of this is to figure out what highest frequency you really need, then low-pass filter and choose a lower sample rate.

If your signal has a non-flat spectrum, it's typically advantageous to "whiten" the signal, i.e. filter it so that the average spectrum is white, then encode, and on decoding filter with the inverse filter to recover the signal. This way you spend more bits on the high-energy frequencies and fewer on the low-energy ones, and your quantization noise follows the spectrum of the signal.

The scheme that you suggest is one of the simplest forms of this approach: your whitening filter is a differentiator and your inverse filter is an integrator.
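A tiny numpy round trip illustrating that last point (first difference as the whitening filter, cumulative sum as the inverse/integrator; the EMG-like test signal is made up):

```python
import numpy as np

# "Whiten" with a first difference, then recover exactly with a cumulative sum.
rng = np.random.default_rng(0)
t = np.arange(4000) / 2000.0                       # 2 kHz, 2 seconds
x = np.round(3000 * np.sin(2 * np.pi * 8 * t)
             + 50 * rng.standard_normal(t.size)).astype(np.int32)

first = x[0]                                       # keep one absolute sample
d = np.diff(x)                                     # differentiator (whitening filter)
x_rec = np.concatenate(([first], first + np.cumsum(d))).astype(np.int32)  # integrator

assert np.array_equal(x, x_rec)                    # exact, lossless round trip
print("max |sample| :", np.abs(x).max())
print("max |diff|   :", np.abs(d).max())           # much smaller for a smooth signal
```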







answered Apr 5 at 19:14 – Hilmar


























